From 9acafc9b9084cbb127ce5669236bdd5dd8e85a0f Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Nicol=C3=B2=20Boschi?= Date: Mon, 17 Oct 2022 23:09:50 +0200 Subject: [PATCH 01/28] [fix][sec] File tiered storage: upgrade jettison to get rid of CVE-2022-40149 (#18022) * [fix][sec] File tiered storage: upgrade jettison to get rid of CVE-2022-40149 * fix --- pom.xml | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/pom.xml b/pom.xml index d0c9cbf7b0addc..e663de7794274c 100644 --- a/pom.xml +++ b/pom.xml @@ -247,6 +247,7 @@ flexible messaging model and an intuitive client API. 3.1 4.2.0 1.2.22 + 1.5.1 0.6.1 @@ -798,6 +799,13 @@ flexible messaging model and an intuitive client API. import + + org.codehaus.jettison + jettison + ${jettison.version} + + + org.hdrhistogram HdrHistogram From 83e830e6deb9b5c7c3ef01b9120e1ba0858db688 Mon Sep 17 00:00:00 2001 From: tison Date: Tue, 18 Oct 2022 09:18:42 +0800 Subject: [PATCH 02/28] [improve][doc] Redirect dangling standlaone to getting-started-standalone (#18063) --- site2/docs/standalone.md | 240 +--------------- .../version-2.10.x/standalone.md | 264 +----------------- 2 files changed, 8 insertions(+), 496 deletions(-) diff --git a/site2/docs/standalone.md b/site2/docs/standalone.md index 70122be8a461c7..d1658b84cc27ee 100644 --- a/site2/docs/standalone.md +++ b/site2/docs/standalone.md @@ -4,240 +4,8 @@ title: Set up a standalone Pulsar locally sidebar_label: "Run Pulsar locally" --- -For local development and testing, you can run Pulsar in standalone mode on your machine. The standalone mode includes a Pulsar broker, the necessary [RocksDB](http://rocksdb.org/) and BookKeeper components running inside of a single Java Virtual Machine (JVM) process. - -> **Pulsar in production?** -> If you're looking to run a full production Pulsar installation, see the [Deploying a Pulsar instance](deploy-bare-metal.md) guide. - -## Install Pulsar standalone - -This tutorial guides you through every step of installing Pulsar locally. - -### System requirements - -Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install 64-bit JRE/JDK 8 or later versions - -:::tip - -By default, Pulsar allocates 2G JVM heap memory to start. It can be changed in `conf/pulsar_env.sh` file under `PULSAR_MEM`. This is an extra option passed into JVM. - -::: - -:::note - -Broker is only supported on 64-bit JVM. - -::: - -### Install Pulsar using binary release - -To get started with Pulsar, download a binary tarball release in one of the following ways: - -* download from the Apache mirror (Pulsar @pulsar:version@ binary release) - -* download from the Pulsar [downloads page](pulsar:download_page_url) - -* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) - -* use [wget](https://www.gnu.org/software/wget): - - ```shell - wget pulsar:binary_release_url - ``` - -After you download the tarball, untar it and use the `cd` command to navigate to the resulting directory: - -```bash -tar xvfz apache-pulsar-@pulsar:version@-bin.tar.gz -cd apache-pulsar-@pulsar:version@ -``` - -#### What your package contains - -The Pulsar binary package initially contains the following directories: - -Directory | Contains -:---------|:-------- -`bin` | Pulsar's command-line tools, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](/tools/pulsar-admin/). -`conf` | Configuration files for Pulsar, including [broker configuration](reference-configuration.md#broker) and more.
**Note:** Pulsar standalone uses RocksDB as the local metadata store and its configuration file path [`metadataStoreConfigPath`](reference-configuration.md) is configurable in the `standalone.conf` file. For more information about the configurations of RocksDB, see [here](https://github.com/facebook/rocksdb/blob/main/examples/rocksdb_option_file_example.ini) and related [documentation](https://github.com/facebook/rocksdb/wiki/RocksDB-Tuning-Guide). -`examples` | A Java JAR file containing [Pulsar Functions](functions-overview.md) example. -`instances` | Artifacts created for [Pulsar Functions](functions-overview.md). -`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files used by Pulsar. -`licenses` | License files, in the`.txt` form, for various components of the Pulsar [codebase](https://github.com/apache/pulsar). - -These directories are created once you begin running Pulsar. - -Directory | Contains -:---------|:-------- -`data` | The data storage directory used by RocksDB and BookKeeper. -`logs` | Logs created by the installation. - -:::tip - -If you want to use built-in connectors and tiered storage offloaders, you can install them according to the following instructions: -* [Install built-in connectors (optional)](#install-built-in-connectors-optional) -* [Install tiered storage offloaders (optional)](#install-tiered-storage-offloaders-optional) -Otherwise, skip this step and perform the next step [Start Pulsar standalone](#start-pulsar-standalone). Pulsar can be successfully installed without installing built-in connectors and tiered storage offloaders. - -::: - -### Install built-in connectors (optional) - -Since `2.1.0-incubating` release, Pulsar releases a separate binary distribution, containing all the built-in connectors. -To enable those built-in connectors, you can download the connectors tarball release in one of the following ways: - -* download from the Apache mirror Pulsar IO Connectors @pulsar:version@ release - -* download from the Pulsar [downloads page](pulsar:download_page_url) - -* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) - -* use [wget](https://www.gnu.org/software/wget): - - ```shell - wget pulsar:connector_release_url/{connector}-@pulsar:version@.nar - ``` - -After you download the NAR file, copy the file to the `connectors` directory in the pulsar directory. -For example, if you download the `pulsar-io-aerospike-@pulsar:version@.nar` connector file, enter the following commands: - -```bash -mkdir connectors -mv pulsar-io-aerospike-@pulsar:version@.nar connectors - -ls connectors -pulsar-io-aerospike-@pulsar:version@.nar -... -``` - -:::note - -* If you are running Pulsar in a bare metal cluster, make sure `connectors` tarball is unzipped in every pulsar directory of the broker (or in every pulsar directory of function-worker if you are running a separate worker cluster for Pulsar Functions). -* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DC/OS](https://dcos.io/), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled [all built-in connectors](io-overview.md#working-with-connectors). - -::: - -### Install tiered storage offloaders (optional) - -:::tip - -- Since `2.2.0` release, Pulsar releases a separate binary distribution, containing the tiered storage offloaders. 
-To enable the tiered storage feature, follow the instructions below; otherwise skip this section. - -::: - -To get started with [tiered storage offloaders](concepts-tiered-storage.md), you need to download the offloaders tarball release on every broker node in one of the following ways: - -* download from the Apache mirror Pulsar Tiered Storage Offloaders @pulsar:version@ release - -* download from the Pulsar [downloads page](pulsar:download_page_url) - -* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) - -* use [wget](https://www.gnu.org/software/wget): - - ```shell - wget pulsar:offloader_release_url - ``` - -After you download the tarball, untar the offloaders package and copy the offloaders as `offloaders` -in the pulsar directory: - -```bash -tar xvfz apache-pulsar-offloaders-@pulsar:version@-bin.tar.gz - -// you will find a directory named `apache-pulsar-offloaders-@pulsar:version@` in the pulsar directory -// then copy the offloaders - -mv apache-pulsar-offloaders-@pulsar:version@/offloaders offloaders - -ls offloaders -tiered-storage-jcloud-@pulsar:version@.nar -``` - -For more information on how to configure tiered storage, see [Tiered storage cookbook](cookbooks-tiered-storage.md). - -:::note - -* If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's pulsar directory. -* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or DC/OS), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - -::: - -## Start Pulsar standalone - -Once you have an up-to-date local copy of the release, you can start a local cluster using the [`pulsar`](reference-cli-tools.md#pulsar) command, which is stored in the `bin` directory, and specifying that you want to start Pulsar in standalone mode. - -```bash -bin/pulsar standalone -``` - -If you have started Pulsar successfully, you will see `INFO`-level log messages like this: - -```bash -21:59:29.327 [DLM-/stream/storage-OrderedScheduler-3-0] INFO org.apache.bookkeeper.stream.storage.impl.sc.StorageContainerImpl - Successfully started storage container (0). -21:59:34.576 [main] INFO org.apache.pulsar.broker.authentication.AuthenticationService - Authentication is disabled -21:59:34.576 [main] INFO org.apache.pulsar.websocket.WebSocketService - Pulsar WebSocket Service started -``` - -:::tip - -* The service is running on your terminal, which is under your direct control. If you need to run other commands, open a new terminal window. - -::: - -You can also run the service as a background process using the `bin/pulsar-daemon start standalone` command. For more information, see [pulsar-daemon](reference-cli-tools.md#pulsar-daemon). - -> * By default, there is no encryption, authentication, or authorization configured. Apache Pulsar can be accessed from remote server without any authorization. Please do check [Security Overview](security-overview.md) document to secure your deployment. -> -> * When you start a local standalone cluster, a `public/default` [namespace](concepts-messaging.md#namespaces) is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics). 
- -## Use Pulsar standalone - -Pulsar provides a CLI tool called [`pulsar-client`](reference-cli-tools.md#pulsar-client). The pulsar-client tool enables you to consume and produce messages to a Pulsar topic in a running cluster. - -### Consume a message - -The following command consumes a message with the subscription name `first-subscription` to the `my-topic` topic: - -```bash -bin/pulsar-client consume my-topic -s "first-subscription" -``` - -If the message has been successfully consumed, you will see a confirmation like the following in the `pulsar-client` logs: - -``` -22:17:16.781 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully consumed -``` - -:::tip - -As you have noticed that we do not explicitly create the `my-topic` topic, from which we consume the message. When you consume a message from a topic that does not yet exist, Pulsar creates that topic for you automatically. Producing a message to a topic that does not exist will automatically create that topic for you as well. - -::: - -### Produce a message - -The following command produces a message saying `hello-pulsar` to the `my-topic` topic: - -```bash -bin/pulsar-client produce my-topic --messages "hello-pulsar" -``` - -If the message has been successfully published to the topic, you will see a confirmation like the following in the `pulsar-client` logs: - -``` -22:21:08.693 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully produced -``` - -## Stop Pulsar standalone - -Press `Ctrl+C` to stop a local standalone Pulsar. - -:::tip - -If the service runs as a background process using the `bin/pulsar-daemon start standalone` command, then use the `bin/pulsar-daemon stop standalone` command to stop the service. -For more information, see [pulsar-daemon](reference-cli-tools.md#pulsar-daemon). - -::: +````mdx-code-block +import {Redirect} from '@docusaurus/router'; + +```` diff --git a/site2/website/versioned_docs/version-2.10.x/standalone.md b/site2/website/versioned_docs/version-2.10.x/standalone.md index 3d463d635558bf..e279db20486ffe 100644 --- a/site2/website/versioned_docs/version-2.10.x/standalone.md +++ b/site2/website/versioned_docs/version-2.10.x/standalone.md @@ -5,264 +5,8 @@ sidebar_label: "Run Pulsar locally" original_id: standalone --- -For local development and testing, you can run Pulsar in standalone mode on your machine. The standalone mode includes a Pulsar broker, the necessary [RocksDB](http://rocksdb.org/) and BookKeeper components running inside of a single Java Virtual Machine (JVM) process. - -> **Pulsar in production?** -> If you're looking to run a full production Pulsar installation, see the [Deploying a Pulsar instance](deploy-bare-metal.md) guide. - -## Install Pulsar standalone - -This tutorial guides you through every step of installing Pulsar locally. - -### System requirements - -Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install 64-bit JRE/JDK 8 or later versions - -:::tip - -By default, Pulsar allocates 2G JVM heap memory to start. It can be changed in `conf/pulsar_env.sh` file under `PULSAR_MEM`. This is extra options passed into JVM. - -::: - -:::note - -Broker is only supported on 64-bit JVM. 
- -::: - -### Install Pulsar using binary release - -To get started with Pulsar, download a binary tarball release in one of the following ways: - -* download from the Apache mirror (Pulsar @pulsar:version@ binary release) - -* download from the Pulsar [downloads page](pulsar:download_page_url) - -* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) - -* use [wget](https://www.gnu.org/software/wget): - - ```shell - - $ wget pulsar:binary_release_url - - ``` - -After you download the tarball, untar it and use the `cd` command to navigate to the resulting directory: - -```bash - -$ tar xvfz apache-pulsar-@pulsar:version@-bin.tar.gz -$ cd apache-pulsar-@pulsar:version@ - -``` - -#### What your package contains - -The Pulsar binary package initially contains the following directories: - -Directory | Contains -:---------|:-------- -`bin` | Pulsar's command-line tools, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](/tools/pulsar-admin/). -`conf` | Configuration files for Pulsar, including [broker configuration](reference-configuration.md#broker) and more.
**Note:** Pulsar standalone uses RocksDB as the local metadata store and its configuration file path [`metadataStoreConfigPath`](reference-configuration.md) is configurable in the `standalone.conf` file. For more information about the configurations of RocksDB, see [here](https://github.com/facebook/rocksdb/blob/main/examples/rocksdb_option_file_example.ini) and related [documentation](https://github.com/facebook/rocksdb/wiki/RocksDB-Tuning-Guide). -`examples` | A Java JAR file containing [Pulsar Functions](functions-overview.md) example. -`instances` | Artifacts created for [Pulsar Functions](functions-overview.md). -`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files used by Pulsar. -`licenses` | License files, in the`.txt` form, for various components of the Pulsar [codebase](https://github.com/apache/pulsar). - -These directories are created once you begin running Pulsar. - -Directory | Contains -:---------|:-------- -`data` | The data storage directory used by RocksDB and BookKeeper. -`logs` | Logs created by the installation. - -:::tip - -If you want to use builtin connectors and tiered storage offloaders, you can install them according to the following instructions: -* [Install builtin connectors (optional)](#install-builtin-connectors-optional) -* [Install tiered storage offloaders (optional)](#install-tiered-storage-offloaders-optional) -Otherwise, skip this step and perform the next step [Start Pulsar standalone](#start-pulsar-standalone). Pulsar can be successfully installed without installing bulitin connectors and tiered storage offloaders. - -::: - -### Install builtin connectors (optional) - -Since `2.1.0-incubating` release, Pulsar releases a separate binary distribution, containing all the `builtin` connectors. -To enable those `builtin` connectors, you can download the connectors tarball release in one of the following ways: - -* download from the Apache mirror Pulsar IO Connectors @pulsar:version@ release - -* download from the Pulsar [downloads page](pulsar:download_page_url) - -* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) - -* use [wget](https://www.gnu.org/software/wget): - - ```shell - - $ wget pulsar:connector_release_url/{connector}-@pulsar:version@.nar - - ``` - -After you download the nar file, copy the file to the `connectors` directory in the pulsar directory. -For example, if you download the `pulsar-io-aerospike-@pulsar:version@.nar` connector file, enter the following commands: - -```bash - -$ mkdir connectors -$ mv pulsar-io-aerospike-@pulsar:version@.nar connectors - -$ ls connectors -pulsar-io-aerospike-@pulsar:version@.nar -... - -``` - -:::note - -* If you are running Pulsar in a bare metal cluster, make sure `connectors` tarball is unzipped in every pulsar directory of the broker (or in every pulsar directory of function-worker if you are running a separate worker cluster for Pulsar Functions). -* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DC/OS](https://dcos.io/), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled [all builtin connectors](io-overview.md#working-with-connectors). - -::: - -### Install tiered storage offloaders (optional) - -:::tip - -- Since `2.2.0` release, Pulsar releases a separate binary distribution, containing the tiered storage offloaders. 
-- To enable tiered storage feature, follow the instructions below; otherwise skip this section. - -::: - -To get started with [tiered storage offloaders](concepts-tiered-storage.md), you need to download the offloaders tarball release on every broker node in one of the following ways: - -* download from the Apache mirror Pulsar Tiered Storage Offloaders @pulsar:version@ release - -* download from the Pulsar [downloads page](pulsar:download_page_url) - -* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) - -* use [wget](https://www.gnu.org/software/wget): - - ```shell - - $ wget pulsar:offloader_release_url - - ``` - -After you download the tarball, untar the offloaders package and copy the offloaders as `offloaders` -in the pulsar directory: - -```bash - -$ tar xvfz apache-pulsar-offloaders-@pulsar:version@-bin.tar.gz - -// you will find a directory named `apache-pulsar-offloaders-@pulsar:version@` in the pulsar directory -// then copy the offloaders - -$ mv apache-pulsar-offloaders-@pulsar:version@/offloaders offloaders - -$ ls offloaders -tiered-storage-jcloud-@pulsar:version@.nar - -``` - -For more information on how to configure tiered storage, see [Tiered storage cookbook](cookbooks-tiered-storage.md). - -:::note - -* If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's pulsar directory. -* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or DC/OS), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - -::: - -## Start Pulsar standalone - -Once you have an up-to-date local copy of the release, you can start a local cluster using the [`pulsar`](reference-cli-tools.md#pulsar) command, which is stored in the `bin` directory, and specifying that you want to start Pulsar in standalone mode. - -```bash - -$ bin/pulsar standalone - -``` - -If you have started Pulsar successfully, you will see `INFO`-level log messages like this: - -```bash - -21:59:29.327 [DLM-/stream/storage-OrderedScheduler-3-0] INFO org.apache.bookkeeper.stream.storage.impl.sc.StorageContainerImpl - Successfully started storage container (0). -21:59:34.576 [main] INFO org.apache.pulsar.broker.authentication.AuthenticationService - Authentication is disabled -21:59:34.576 [main] INFO org.apache.pulsar.websocket.WebSocketService - Pulsar WebSocket Service started - -``` - -:::tip - -* The service is running on your terminal, which is under your direct control. If you need to run other commands, open a new terminal window. - -::: - -You can also run the service as a background process using the `bin/pulsar-daemon start standalone` command. For more information, see [pulsar-daemon](reference-cli-tools.md#pulsar-daemon). -> -> * By default, there is no encryption, authentication, or authorization configured. Apache Pulsar can be accessed from remote server without any authorization. Please do check [Security Overview](security-overview.md) document to secure your deployment. -> -> * When you start a local standalone cluster, a `public/default` [namespace](concepts-messaging.md#namespaces) is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics). 
- -## Use Pulsar standalone - -Pulsar provides a CLI tool called [`pulsar-client`](reference-cli-tools.md#pulsar-client). The pulsar-client tool enables you to consume and produce messages to a Pulsar topic in a running cluster. - -### Consume a message - -The following command consumes a message with the subscription name `first-subscription` to the `my-topic` topic: - -```bash - -$ bin/pulsar-client consume my-topic -s "first-subscription" - -``` - -If the message has been successfully consumed, you will see a confirmation like the following in the `pulsar-client` logs: - -``` - -22:17:16.781 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully consumed - -``` - -:::tip - -As you have noticed that we do not explicitly create the `my-topic` topic, from which we consume the message. When you consume a message from a topic that does not yet exist, Pulsar creates that topic for you automatically. Producing a message to a topic that does not exist will automatically create that topic for you as well. - -::: - -### Produce a message - -The following command produces a message saying `hello-pulsar` to the `my-topic` topic: - -```bash - -$ bin/pulsar-client produce my-topic --messages "hello-pulsar" - -``` - -If the message has been successfully published to the topic, you will see a confirmation like the following in the `pulsar-client` logs: - -``` - -22:21:08.693 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully produced - -``` - -## Stop Pulsar standalone - -Press `Ctrl+C` to stop a local standalone Pulsar. - -:::tip - -If the service runs as a background process using the `bin/pulsar-daemon start standalone` command, then use the `bin/pulsar-daemon stop standalone` command to stop the service. -For more information, see [pulsar-daemon](reference-cli-tools.md#pulsar-daemon). 
- -::: +````mdx-code-block +import {Redirect} from '@docusaurus/router'; + +```` From 302854e7abfbf056eda8df59822d58feadfd833a Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Nicol=C3=B2=20Boschi?= Date: Tue, 18 Oct 2022 07:31:38 +0200 Subject: [PATCH 03/28] [fix][ci] Fix SimpleProducerConsumerTest failures when skipped (#18068) --- .../apache/pulsar/client/api/SimpleProducerConsumerTest.java | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pulsar-broker/src/test/java/org/apache/pulsar/client/api/SimpleProducerConsumerTest.java b/pulsar-broker/src/test/java/org/apache/pulsar/client/api/SimpleProducerConsumerTest.java index 10917a39efdac7..a7716150bbd69a 100644 --- a/pulsar-broker/src/test/java/org/apache/pulsar/client/api/SimpleProducerConsumerTest.java +++ b/pulsar-broker/src/test/java/org/apache/pulsar/client/api/SimpleProducerConsumerTest.java @@ -126,7 +126,7 @@ public class SimpleProducerConsumerTest extends ProducerConsumerBase { private static final int RECEIVE_TIMEOUT_SHORT_MILLIS = 200 * TIMEOUT_MULTIPLIER; private static final int RECEIVE_TIMEOUT_MEDIUM_MILLIS = 1000 * TIMEOUT_MULTIPLIER; - @BeforeClass + @BeforeClass(alwaysRun = true) @Override protected void setup() throws Exception { super.internalSetup(); From b3733ddb68e8084376b85f4637ebd41335955486 Mon Sep 17 00:00:00 2001 From: LinChen <1572139390@qq.com> Date: Tue, 18 Oct 2022 14:26:32 +0800 Subject: [PATCH 04/28] Fix flaky test V1_ProducerConsumerTest#testActiveAndInActiveConsumerEntryCacheBehavior (#18075) Co-authored-by: leolinchen --- .../org/apache/pulsar/client/api/v1/V1_ProducerConsumerTest.java | 1 + 1 file changed, 1 insertion(+) diff --git a/pulsar-broker/src/test/java/org/apache/pulsar/client/api/v1/V1_ProducerConsumerTest.java b/pulsar-broker/src/test/java/org/apache/pulsar/client/api/v1/V1_ProducerConsumerTest.java index 097cf4823de260..9b1848c9ff3a42 100644 --- a/pulsar-broker/src/test/java/org/apache/pulsar/client/api/v1/V1_ProducerConsumerTest.java +++ b/pulsar-broker/src/test/java/org/apache/pulsar/client/api/v1/V1_ProducerConsumerTest.java @@ -698,6 +698,7 @@ public void testActiveAndInActiveConsumerEntryCacheBehavior() throws Exception { .topic("persistent://my-property/use/my-ns/" + topicName) .subscriptionName(sub2) .subscriptionType(SubscriptionType.Shared) + .receiverQueueSize(1) .subscribe(); // Produce messages final int moreMessages = 10; From 48da869ea8a6d36191be84110979193458af9a6b Mon Sep 17 00:00:00 2001 From: Jiwei Guo Date: Tue, 18 Oct 2022 15:04:24 +0800 Subject: [PATCH 05/28] Add `ml` for CI semantic check. 
(#18082) --- .github/workflows/ci-semantic-pull-request.yml | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/.github/workflows/ci-semantic-pull-request.yml b/.github/workflows/ci-semantic-pull-request.yml index 9395d89881e58f..5d45c7c17122a2 100644 --- a/.github/workflows/ci-semantic-pull-request.yml +++ b/.github/workflows/ci-semantic-pull-request.yml @@ -49,16 +49,18 @@ jobs: revert # Scope abbreviation comments: # cli -> command line interface - # fn -> Pulasr Functions + # fn -> Pulsar Functions # io -> Pulsar Connectors # offload -> tiered storage # sec -> security # sql -> Pulsar Trino Plugin # txn -> transaction # ws -> websocket + # ml -> managed ledger scopes: | admin broker + ml build ci cli From eed8c74d0abbc16dabf3f2c624705bdbafee4146 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Nicol=C3=B2=20Boschi?= Date: Tue, 18 Oct 2022 09:23:35 +0200 Subject: [PATCH 06/28] [fix][test] Fix flaky test: PrometheusMetricsTest.testDuplicateMetricTypeDefinitions (#18077) * [fix][test] Fix flaky test: PrometheusMetricsTest.testDuplicateMetricTypeDefinitions * fix --- .../org/apache/pulsar/broker/PulsarService.java | 2 ++ .../pendingack/impl/MLPendingAckStoreProvider.java | 10 ++++++++++ .../impl/MLTransactionMetadataStoreProvider.java | 14 ++++++++++++-- 3 files changed, 24 insertions(+), 2 deletions(-) diff --git a/pulsar-broker/src/main/java/org/apache/pulsar/broker/PulsarService.java b/pulsar-broker/src/main/java/org/apache/pulsar/broker/PulsarService.java index 0ade26e661254c..f6aec263f13600 100644 --- a/pulsar-broker/src/main/java/org/apache/pulsar/broker/PulsarService.java +++ b/pulsar-broker/src/main/java/org/apache/pulsar/broker/PulsarService.java @@ -562,6 +562,8 @@ public CompletableFuture closeAsync() { if (transactionExecutorProvider != null) { transactionExecutorProvider.shutdownNow(); } + MLPendingAckStoreProvider.closeBufferedWriterMetrics(); + MLTransactionMetadataStoreProvider.closeBufferedWriterMetrics(); if (this.offloaderStats != null) { this.offloaderStats.close(); } diff --git a/pulsar-broker/src/main/java/org/apache/pulsar/broker/transaction/pendingack/impl/MLPendingAckStoreProvider.java b/pulsar-broker/src/main/java/org/apache/pulsar/broker/transaction/pendingack/impl/MLPendingAckStoreProvider.java index bf2771abaa65a7..130b485694f678 100644 --- a/pulsar-broker/src/main/java/org/apache/pulsar/broker/transaction/pendingack/impl/MLPendingAckStoreProvider.java +++ b/pulsar-broker/src/main/java/org/apache/pulsar/broker/transaction/pendingack/impl/MLPendingAckStoreProvider.java @@ -61,6 +61,16 @@ public static void initBufferedWriterMetrics(String brokerAdvertisedAddress){ } } + public static void closeBufferedWriterMetrics() { + synchronized (MLPendingAckStoreProvider.class){ + if (bufferedWriterMetrics == DisabledTxnLogBufferedWriterMetricsStats.DISABLED_BUFFERED_WRITER_METRICS){ + return; + } + bufferedWriterMetrics.close(); + bufferedWriterMetrics = DisabledTxnLogBufferedWriterMetricsStats.DISABLED_BUFFERED_WRITER_METRICS; + } + } + @Override public CompletableFuture newPendingAckStore(PersistentSubscription subscription) { CompletableFuture pendingAckStoreFuture = new CompletableFuture<>(); diff --git a/pulsar-transaction/coordinator/src/main/java/org/apache/pulsar/transaction/coordinator/impl/MLTransactionMetadataStoreProvider.java b/pulsar-transaction/coordinator/src/main/java/org/apache/pulsar/transaction/coordinator/impl/MLTransactionMetadataStoreProvider.java index c11e422d27a8fc..6eb6402c0eee5b 100644 --- 
a/pulsar-transaction/coordinator/src/main/java/org/apache/pulsar/transaction/coordinator/impl/MLTransactionMetadataStoreProvider.java +++ b/pulsar-transaction/coordinator/src/main/java/org/apache/pulsar/transaction/coordinator/impl/MLTransactionMetadataStoreProvider.java @@ -39,17 +39,27 @@ public class MLTransactionMetadataStoreProvider implements TransactionMetadataSt DisabledTxnLogBufferedWriterMetricsStats.DISABLED_BUFFERED_WRITER_METRICS; public static void initBufferedWriterMetrics(String brokerAdvertisedAddress){ - if (bufferedWriterMetrics != DisabledTxnLogBufferedWriterMetricsStats.DISABLED_BUFFERED_WRITER_METRICS){ + if (bufferedWriterMetrics != DisabledTxnLogBufferedWriterMetricsStats.DISABLED_BUFFERED_WRITER_METRICS) { return; } synchronized (MLTransactionMetadataStoreProvider.class){ - if (bufferedWriterMetrics != DisabledTxnLogBufferedWriterMetricsStats.DISABLED_BUFFERED_WRITER_METRICS){ + if (bufferedWriterMetrics != DisabledTxnLogBufferedWriterMetricsStats.DISABLED_BUFFERED_WRITER_METRICS) { return; } bufferedWriterMetrics = new MLTransactionMetadataStoreBufferedWriterMetrics(brokerAdvertisedAddress); } } + public static void closeBufferedWriterMetrics() { + synchronized (MLTransactionMetadataStoreProvider.class){ + if (bufferedWriterMetrics == DisabledTxnLogBufferedWriterMetricsStats.DISABLED_BUFFERED_WRITER_METRICS) { + return; + } + bufferedWriterMetrics.close(); + bufferedWriterMetrics = DisabledTxnLogBufferedWriterMetricsStats.DISABLED_BUFFERED_WRITER_METRICS; + } + } + @Override public CompletableFuture openStore(TransactionCoordinatorID transactionCoordinatorId, ManagedLedgerFactory managedLedgerFactory, From d901138941150223e503c9481f5d39997ea59775 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Nicol=C3=B2=20Boschi?= Date: Tue, 18 Oct 2022 10:27:02 +0200 Subject: [PATCH 07/28] [improve][io] JDBC Sink: add flag to exclude non declared fields (#18008) * [improve][io] JDBC Sink: add flag to exclude non declared fields * rename and doc * fix doc * Update pulsar-io/jdbc/core/src/main/java/org/apache/pulsar/io/jdbc/JdbcSinkConfig.java Co-authored-by: tison Co-authored-by: tison --- .../pulsar/io/jdbc/JdbcAbstractSink.java | 4 +- .../apache/pulsar/io/jdbc/JdbcSinkConfig.java | 11 ++ .../org/apache/pulsar/io/jdbc/JdbcUtils.java | 38 ++--- .../apache/pulsar/io/jdbc/JdbcUtilsTest.java | 136 ++++++++++++------ site2/docs/io-jdbc-sink.md | 27 ++-- 5 files changed, 144 insertions(+), 72 deletions(-) diff --git a/pulsar-io/jdbc/core/src/main/java/org/apache/pulsar/io/jdbc/JdbcAbstractSink.java b/pulsar-io/jdbc/core/src/main/java/org/apache/pulsar/io/jdbc/JdbcAbstractSink.java index 95f66edf7a72ab..74a19b7b187b30 100644 --- a/pulsar-io/jdbc/core/src/main/java/org/apache/pulsar/io/jdbc/JdbcAbstractSink.java +++ b/pulsar-io/jdbc/core/src/main/java/org/apache/pulsar/io/jdbc/JdbcAbstractSink.java @@ -112,8 +112,10 @@ private void initStatement() throws Exception { List keyList = getListFromConfig(jdbcSinkConfig.getKey()); List nonKeyList = getListFromConfig(jdbcSinkConfig.getNonKey()); - tableDefinition = JdbcUtils.getTableDefinition(connection, tableId, keyList, nonKeyList); + tableDefinition = JdbcUtils.getTableDefinition(connection, tableId, + keyList, nonKeyList, jdbcSinkConfig.isExcludeNonDeclaredFields()); insertStatement = connection.prepareStatement(generateInsertQueryStatement()); + if (jdbcSinkConfig.getInsertMode() == JdbcSinkConfig.InsertMode.UPSERT) { if (nonKeyList.isEmpty() || keyList.isEmpty()) { throw new IllegalStateException("UPSERT mode is not configured if 'key' and 
'nonKey' " diff --git a/pulsar-io/jdbc/core/src/main/java/org/apache/pulsar/io/jdbc/JdbcSinkConfig.java b/pulsar-io/jdbc/core/src/main/java/org/apache/pulsar/io/jdbc/JdbcSinkConfig.java index 609fbacc904e87..74f339bd44346d 100644 --- a/pulsar-io/jdbc/core/src/main/java/org/apache/pulsar/io/jdbc/JdbcSinkConfig.java +++ b/pulsar-io/jdbc/core/src/main/java/org/apache/pulsar/io/jdbc/JdbcSinkConfig.java @@ -73,6 +73,17 @@ public class JdbcSinkConfig implements Serializable { help = "Fields used in where condition of update and delete Events. A comma-separated list." ) private String key; + + @FieldDoc( + required = false, + defaultValue = "false", + help = "All the table fields are discovered automatically. 'excludeNonDeclaredFields' indicates if the " + + "table fields not explicitly listed in `nonKey` and `key` must be included in the query. " + + "By default all the table fields are included. To leverage of table fields defaults " + + "during insertion, it is suggested to set this value to `true`." + ) + private boolean excludeNonDeclaredFields = false; + @FieldDoc( required = false, defaultValue = "500", diff --git a/pulsar-io/jdbc/core/src/main/java/org/apache/pulsar/io/jdbc/JdbcUtils.java b/pulsar-io/jdbc/core/src/main/java/org/apache/pulsar/io/jdbc/JdbcUtils.java index 5fceea2754728f..f2907b6eedb8c4 100644 --- a/pulsar-io/jdbc/core/src/main/java/org/apache/pulsar/io/jdbc/JdbcUtils.java +++ b/pulsar-io/jdbc/core/src/main/java/org/apache/pulsar/io/jdbc/JdbcUtils.java @@ -24,6 +24,7 @@ import java.sql.Connection; import java.sql.DatabaseMetaData; import java.sql.ResultSet; +import java.util.Collections; import java.util.List; import java.util.StringJoiner; import java.util.stream.IntStream; @@ -113,10 +114,16 @@ public static TableId getTableId(Connection connection, String tableName) throws * Get the {@link TableDefinition} for the given table. */ public static TableDefinition getTableDefinition( - Connection connection, TableId tableId, List keyList, List nonKeyList) throws Exception { + Connection connection, TableId tableId, + List keyList, + List nonKeyList, + boolean excludeNonDeclaredFields) throws Exception { TableDefinition table = TableDefinition.of( tableId, Lists.newArrayList(), Lists.newArrayList(), Lists.newArrayList()); + keyList = keyList == null ? Collections.emptyList(): keyList; + nonKeyList = nonKeyList == null ? Collections.emptyList(): nonKeyList; + try (ResultSet rs = connection.getMetaData().getColumns( tableId.getCatalogName(), tableId.getSchemaName(), @@ -130,26 +137,21 @@ public static TableDefinition getTableDefinition( final String typeName = rs.getString(6); final int position = rs.getInt(17); - ColumnId columnId = ColumnId.of(tableId, columnName, sqlDataType, typeName, position); - table.columns.add(columnId); - if (keyList != null) { - keyList.forEach((key) -> { - if (key.equals(columnName)) { - table.keyColumns.add(columnId); - } - }); - } - if (nonKeyList != null) { - nonKeyList.forEach((key) -> { - if (key.equals(columnName)) { - table.nonKeyColumns.add(columnId); - } - }); - } - if (log.isDebugEnabled()) { log.debug("Get column. 
name: {}, data type: {}, position: {}", columnName, typeName, position); } + + ColumnId columnId = ColumnId.of(tableId, columnName, sqlDataType, typeName, position); + + if (keyList.contains(columnName)) { + table.keyColumns.add(columnId); + table.columns.add(columnId); + } else if (nonKeyList.contains(columnName)) { + table.nonKeyColumns.add(columnId); + table.columns.add(columnId); + } else if (!excludeNonDeclaredFields) { + table.columns.add(columnId); + } } return table; } diff --git a/pulsar-io/jdbc/sqlite/src/test/java/org/apache/pulsar/io/jdbc/JdbcUtilsTest.java b/pulsar-io/jdbc/sqlite/src/test/java/org/apache/pulsar/io/jdbc/JdbcUtilsTest.java index deb2fb88aa8000..13d5980b52e655 100644 --- a/pulsar-io/jdbc/sqlite/src/test/java/org/apache/pulsar/io/jdbc/JdbcUtilsTest.java +++ b/pulsar-io/jdbc/sqlite/src/test/java/org/apache/pulsar/io/jdbc/JdbcUtilsTest.java @@ -26,6 +26,7 @@ import org.testng.Assert; import org.testng.annotations.AfterMethod; import org.testng.annotations.BeforeMethod; +import org.testng.annotations.DataProvider; import org.testng.annotations.Test; import java.io.IOException; @@ -50,8 +51,15 @@ public void tearDown() throws IOException, SQLException { sqliteUtils.tearDown(); } - @Test - public void TestGetTableId() throws Exception { + @DataProvider(name = "excludeNonDeclaredFields") + public Object[] excludeNonDeclaredFields() { + return new Object[]{ + false, true + }; + } + + @Test(dataProvider = "excludeNonDeclaredFields") + public void testGetTableId(boolean excludeNonDeclaredFields) throws Exception { String tableName = "TestGetTableId"; sqliteUtils.createTable( @@ -59,13 +67,13 @@ public void TestGetTableId() throws Exception { " firstName TEXT," + " lastName TEXT," + " age INTEGER," + - " bool NUMERIC," + - " byte INTEGER," + + " bool NUMERIC NULL," + + " byte INTEGER NULL," + " short INTEGER NULL," + " long INTEGER," + - " float NUMERIC," + - " double NUMERIC," + - " bytes BLOB, " + + " float NUMERIC NULL," + + " double NUMERIC NULL," + + " bytes BLOB NULL, " + "PRIMARY KEY (firstName, lastName));" ); @@ -84,39 +92,87 @@ public void TestGetTableId() throws Exception { List nonKeyList = Lists.newArrayList(); nonKeyList.add("age"); nonKeyList.add("long"); - TableDefinition table = JdbcUtils.getTableDefinition(connection, id, keyList, nonKeyList); - Assert.assertEquals(table.getColumns().get(0).getName(), "firstName"); - Assert.assertEquals(table.getColumns().get(0).getTypeName(), "TEXT"); - Assert.assertEquals(table.getColumns().get(2).getName(), "age"); - Assert.assertEquals(table.getColumns().get(2).getTypeName(), "INTEGER"); - Assert.assertEquals(table.getColumns().get(7).getName(), "float"); - Assert.assertEquals(table.getColumns().get(7).getTypeName(), "NUMERIC"); - Assert.assertEquals(table.getKeyColumns().get(0).getName(), "firstName"); - Assert.assertEquals(table.getKeyColumns().get(0).getTypeName(), "TEXT"); - Assert.assertEquals(table.getKeyColumns().get(1).getName(), "lastName"); - Assert.assertEquals(table.getKeyColumns().get(1).getTypeName(), "TEXT"); - Assert.assertEquals(table.getNonKeyColumns().get(0).getName(), "age"); - Assert.assertEquals(table.getNonKeyColumns().get(0).getTypeName(), "INTEGER"); - Assert.assertEquals(table.getNonKeyColumns().get(1).getName(), "long"); - Assert.assertEquals(table.getNonKeyColumns().get(1).getTypeName(), "INTEGER"); - // Test get getTableDefinition - log.info("verify buildInsertSql"); - String expctedInsertStatement = "INSERT INTO " + tableName + - 
"(firstName,lastName,age,bool,byte,short,long,float,double,bytes)" + - " VALUES(?,?,?,?,?,?,?,?,?,?)"; - String insertStatement = JdbcUtils.buildInsertSql(table); - Assert.assertEquals(insertStatement, expctedInsertStatement); - log.info("verify buildUpdateSql"); - String expectedUpdateStatement = "UPDATE " + tableName + - " SET age=? ,long=? WHERE firstName=? AND lastName=?"; - String updateStatement = JdbcUtils.buildUpdateSql(table); - Assert.assertEquals(updateStatement, expectedUpdateStatement); - log.info("verify buildDeleteSql"); - String expectedDeleteStatement = "DELETE FROM " + tableName + - " WHERE firstName=? AND lastName=?"; - String deleteStatement = JdbcUtils.buildDeleteSql(table); - Assert.assertEquals(deleteStatement, expectedDeleteStatement); + TableDefinition table = JdbcUtils.getTableDefinition(connection, id, keyList, nonKeyList, + excludeNonDeclaredFields); + if (!excludeNonDeclaredFields) { + Assert.assertEquals(table.getColumns().size(), 10); + Assert.assertEquals(table.getColumns().get(0).getName(), "firstName"); + Assert.assertEquals(table.getColumns().get(0).getTypeName(), "TEXT"); + Assert.assertEquals(table.getColumns().get(2).getName(), "age"); + Assert.assertEquals(table.getColumns().get(2).getTypeName(), "INTEGER"); + Assert.assertEquals(table.getColumns().get(7).getName(), "float"); + Assert.assertEquals(table.getColumns().get(7).getTypeName(), "NUMERIC"); + + Assert.assertEquals(table.getKeyColumns().size(), 2); + Assert.assertEquals(table.getKeyColumns().get(0).getName(), "firstName"); + Assert.assertEquals(table.getKeyColumns().get(0).getTypeName(), "TEXT"); + Assert.assertEquals(table.getKeyColumns().get(1).getName(), "lastName"); + Assert.assertEquals(table.getKeyColumns().get(1).getTypeName(), "TEXT"); + + Assert.assertEquals(table.getNonKeyColumns().size(), 2); + Assert.assertEquals(table.getNonKeyColumns().get(0).getName(), "age"); + Assert.assertEquals(table.getNonKeyColumns().get(0).getTypeName(), "INTEGER"); + Assert.assertEquals(table.getNonKeyColumns().get(1).getName(), "long"); + Assert.assertEquals(table.getNonKeyColumns().get(1).getTypeName(), "INTEGER"); + // Test get getTableDefinition + log.info("verify buildInsertSql"); + String expctedInsertStatement = "INSERT INTO " + tableName + + "(firstName,lastName,age,bool,byte,short,long,float,double,bytes)" + + " VALUES(?,?,?,?,?,?,?,?,?,?)"; + String insertStatement = JdbcUtils.buildInsertSql(table); + Assert.assertEquals(insertStatement, expctedInsertStatement); + log.info("verify buildUpdateSql"); + String expectedUpdateStatement = "UPDATE " + tableName + + " SET age=? ,long=? WHERE firstName=? AND lastName=?"; + String updateStatement = JdbcUtils.buildUpdateSql(table); + Assert.assertEquals(updateStatement, expectedUpdateStatement); + log.info("verify buildDeleteSql"); + String expectedDeleteStatement = "DELETE FROM " + tableName + + " WHERE firstName=? 
AND lastName=?"; + String deleteStatement = JdbcUtils.buildDeleteSql(table); + Assert.assertEquals(deleteStatement, expectedDeleteStatement); + } else { + Assert.assertEquals(table.getColumns().size(), 4); + Assert.assertEquals(table.getColumns().get(0).getName(), "firstName"); + Assert.assertEquals(table.getColumns().get(0).getTypeName(), "TEXT"); + Assert.assertEquals(table.getColumns().get(1).getName(), "lastName"); + Assert.assertEquals(table.getColumns().get(1).getTypeName(), "TEXT"); + Assert.assertEquals(table.getColumns().get(2).getName(), "age"); + Assert.assertEquals(table.getColumns().get(2).getTypeName(), "INTEGER"); + Assert.assertEquals(table.getColumns().get(3).getName(), "long"); + Assert.assertEquals(table.getColumns().get(3).getTypeName(), "INTEGER"); + + + Assert.assertEquals(table.getKeyColumns().size(), 2); + Assert.assertEquals(table.getKeyColumns().get(0).getName(), "firstName"); + Assert.assertEquals(table.getKeyColumns().get(0).getTypeName(), "TEXT"); + Assert.assertEquals(table.getKeyColumns().get(1).getName(), "lastName"); + Assert.assertEquals(table.getKeyColumns().get(1).getTypeName(), "TEXT"); + + Assert.assertEquals(table.getNonKeyColumns().size(), 2); + Assert.assertEquals(table.getNonKeyColumns().get(0).getName(), "age"); + Assert.assertEquals(table.getNonKeyColumns().get(0).getTypeName(), "INTEGER"); + Assert.assertEquals(table.getNonKeyColumns().get(1).getName(), "long"); + Assert.assertEquals(table.getNonKeyColumns().get(1).getTypeName(), "INTEGER"); + // Test get getTableDefinition + log.info("verify buildInsertSql"); + String expctedInsertStatement = "INSERT INTO " + tableName + + "(firstName,lastName,age,long)" + + " VALUES(?,?,?,?)"; + String insertStatement = JdbcUtils.buildInsertSql(table); + Assert.assertEquals(insertStatement, expctedInsertStatement); + log.info("verify buildUpdateSql"); + String expectedUpdateStatement = "UPDATE " + tableName + + " SET age=? ,long=? WHERE firstName=? AND lastName=?"; + String updateStatement = JdbcUtils.buildUpdateSql(table); + Assert.assertEquals(updateStatement, expectedUpdateStatement); + log.info("verify buildDeleteSql"); + String expectedDeleteStatement = "DELETE FROM " + tableName + + " WHERE firstName=? AND lastName=?"; + String deleteStatement = JdbcUtils.buildDeleteSql(table); + Assert.assertEquals(deleteStatement, expectedDeleteStatement); + + } } } - } diff --git a/site2/docs/io-jdbc-sink.md b/site2/docs/io-jdbc-sink.md index c16234f20b87a8..4c9a473e0277b3 100644 --- a/site2/docs/io-jdbc-sink.md +++ b/site2/docs/io-jdbc-sink.md @@ -15,19 +15,20 @@ The configuration of all JDBC sink connectors has the following properties. ### Property -| Name | Type | Required | Default | Description | -|-------------|--------|----------|--------------------|--------------------------------------------------------------------------------------------------------------------------| -| `userName` | String | false | " " (empty string) | The username used to connect to the database specified by `jdbcUrl`.
**Note: `userName` is case-sensitive.** | -| `password` | String | false | " " (empty string) | The password used to connect to the database specified by `jdbcUrl`.
**Note: `password` is case-sensitive.** | -| `jdbcUrl` | String | true | " " (empty string) | The JDBC URL of the database that the connector connects to. | -| `tableName` | String | true | " " (empty string) | The name of the table that the connector writes to. | -| `nonKey` | String | false | " " (empty string) | A comma-separated list containing the fields used in updating events. | -| `key` | String | false | " " (empty string) | A comma-separated list containing the fields used in `where` condition of updating and deleting events. | -| `timeoutMs` | int | false | 500 | The JDBC operation timeout in milliseconds. | -| `batchSize` | int | false | 200 | The batch size of updates made to the database. | -| `insertMode` | enum( INSERT,UPSERT,UPDATE) | false | INSERT | If it is configured as UPSERT, the sink uses upsert semantics rather than plain INSERT/UPDATE statements. Upsert semantics refer to atomically adding a new row or updating the existing row if there is a primary key constraint violation, which provides idempotence. | -| `nullValueAction` | enum(FAIL, DELETE) | false | FAIL | How to handle records with NULL values. Possible options are `DELETE` or `FAIL`. | -| `useTransactions` | boolean | false | true | Enable transactions of the database. +| Name | Type | Required | Default | Description | +|-------------|--------|----------|--------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `userName` | String | false | " " (empty string) | The username used to connect to the database specified by `jdbcUrl`.
**Note: `userName` is case-sensitive.** | +| `password` | String | false | " " (empty string) | The password used to connect to the database specified by `jdbcUrl`.
**Note: `password` is case-sensitive.** | +| `jdbcUrl` | String | true | " " (empty string) | The JDBC URL of the database that the connector connects to. | +| `tableName` | String | true | " " (empty string) | The name of the table that the connector writes to. | +| `nonKey` | String | false | " " (empty string) | A comma-separated list containing the fields used in updating events. | +| `key` | String | false | " " (empty string) | A comma-separated list containing the fields used in `where` condition of updating and deleting events. | +| `timeoutMs` | int | false | 500 | The JDBC operation timeout in milliseconds. | +| `batchSize` | int | false | 200 | The batch size of updates made to the database. | +| `insertMode` | enum( INSERT,UPSERT,UPDATE) | false | INSERT | If it is configured as UPSERT, the sink uses upsert semantics rather than plain INSERT/UPDATE statements. Upsert semantics refer to atomically adding a new row or updating the existing row if there is a primary key constraint violation, which provides idempotence. | +| `nullValueAction` | enum(FAIL, DELETE) | false | FAIL | How to handle records with NULL values. Possible options are `DELETE` or `FAIL`. | +| `useTransactions` | boolean | false | true | Enable transactions of the database. +| `excludeNonDeclaredFields` | boolean | false | false | All the table fields are discovered automatically. `excludeNonDeclaredFields` indicates if the table fields not explicitly listed in `nonKey` and `key` must be included in the query. By default all the table fields are included. To leverage of table fields defaults during insertion, it is suggested to set this value to `false`. | ### Example of ClickHouse From 561cd3d8e1a002c259ad8f516060a8ce7a1693f4 Mon Sep 17 00:00:00 2001 From: tison Date: Tue, 18 Oct 2022 16:55:18 +0800 Subject: [PATCH 08/28] [fix][ci] deprecation warnings about set-output (#18065) * [fix][ci] Fix deprecation warnings about set-output Upstream fix: https://github.com/amannn/action-semantic-pull-request/pull/208 * Update .github/workflows/ci-semantic-pull-request.yml --- .github/workflows/ci-semantic-pull-request.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/ci-semantic-pull-request.yml b/.github/workflows/ci-semantic-pull-request.yml index 5d45c7c17122a2..2473f8e11266c1 100644 --- a/.github/workflows/ci-semantic-pull-request.yml +++ b/.github/workflows/ci-semantic-pull-request.yml @@ -34,7 +34,7 @@ jobs: name: Check pull request title runs-on: ubuntu-latest steps: - - uses: amannn/action-semantic-pull-request@v4 + - uses: amannn/action-semantic-pull-request@v5.0.2 env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} with: From ff44420922e2a26dc28a7b6ea4e0ce83bb2d3aa4 Mon Sep 17 00:00:00 2001 From: Xiaoyu Hou Date: Tue, 18 Oct 2022 18:58:33 +0800 Subject: [PATCH 09/28] [improve][broker]Avoid runtime check hasFilter in EntryFilterSupport (#18066) --- .../pulsar/broker/service/AbstractBaseDispatcher.java | 1 - .../apache/pulsar/broker/service/EntryFilterSupport.java | 6 ++++-- .../pulsar/broker/service/plugin/FilterEntryTest.java | 9 +++++++++ .../pulsar/broker/stats/SubscriptionStatsTest.java | 3 +++ 4 files changed, 16 insertions(+), 3 deletions(-) diff --git a/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/AbstractBaseDispatcher.java b/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/AbstractBaseDispatcher.java index df02bbd85d4703..b52f30361b1929 100644 --- a/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/AbstractBaseDispatcher.java +++ 
b/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/AbstractBaseDispatcher.java @@ -110,7 +110,6 @@ public int filterEntriesForConsumer(Optional optMetadataArray int filteredMessageCount = 0; int filteredEntryCount = 0; long filteredBytesCount = 0; - final boolean hasFilter = CollectionUtils.isNotEmpty(entryFilters); List entriesToFiltered = hasFilter ? new ArrayList<>() : null; List entriesToRedeliver = hasFilter ? new ArrayList<>() : null; for (int i = 0, entriesSize = entries.size(); i < entriesSize; i++) { diff --git a/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/EntryFilterSupport.java b/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/EntryFilterSupport.java index 8704e21ae7556b..1acc3b23599bae 100644 --- a/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/EntryFilterSupport.java +++ b/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/EntryFilterSupport.java @@ -35,7 +35,8 @@ public class EntryFilterSupport { * Entry filters in Broker. * Not set to final, for the convenience of testing mock. */ - protected List entryFilters; + protected final List entryFilters; + protected final boolean hasFilter; protected final FilterContext filterContext; protected final Subscription subscription; @@ -62,11 +63,12 @@ public EntryFilterSupport(Subscription subscription) { this.entryFilters = Collections.emptyList(); this.filterContext = FilterContext.FILTER_CONTEXT_DISABLED; } + hasFilter = CollectionUtils.isNotEmpty(entryFilters); } public EntryFilter.FilterResult runFiltersForEntry(Entry entry, MessageMetadata msgMetadata, Consumer consumer) { - if (CollectionUtils.isNotEmpty(entryFilters)) { + if (hasFilter) { fillContext(filterContext, msgMetadata, subscription, consumer); return getFilterResult(filterContext, entry, entryFilters); } else { diff --git a/pulsar-broker/src/test/java/org/apache/pulsar/broker/service/plugin/FilterEntryTest.java b/pulsar-broker/src/test/java/org/apache/pulsar/broker/service/plugin/FilterEntryTest.java index bd8017390d807b..f10bc1ee864e11 100644 --- a/pulsar-broker/src/test/java/org/apache/pulsar/broker/service/plugin/FilterEntryTest.java +++ b/pulsar-broker/src/test/java/org/apache/pulsar/broker/service/plugin/FilterEntryTest.java @@ -162,12 +162,15 @@ public void testFilter() throws Exception { Dispatcher dispatcher = subscription.getDispatcher(); Field field = EntryFilterSupport.class.getDeclaredField("entryFilters"); field.setAccessible(true); + Field hasFilterField = EntryFilterSupport.class.getDeclaredField("hasFilter"); + hasFilterField.setAccessible(true); NarClassLoader narClassLoader = mock(NarClassLoader.class); EntryFilter filter1 = new EntryFilterTest(); EntryFilterWithClassLoader loader1 = spyWithClassAndConstructorArgsRecordingInvocations(EntryFilterWithClassLoader.class, filter1, narClassLoader); EntryFilter filter2 = new EntryFilter2Test(); EntryFilterWithClassLoader loader2 = spyWithClassAndConstructorArgsRecordingInvocations(EntryFilterWithClassLoader.class, filter2, narClassLoader); field.set(dispatcher, List.of(loader1, loader2)); + hasFilterField.set(dispatcher, true); Producer producer = pulsarClient.newProducer(Schema.STRING) .enableBatching(false).topic(topic).create(); @@ -287,12 +290,15 @@ public void testFilteredMsgCount() throws Throwable { Dispatcher dispatcher = subscription.getDispatcher(); Field field = EntryFilterSupport.class.getDeclaredField("entryFilters"); field.setAccessible(true); + Field hasFilterField = EntryFilterSupport.class.getDeclaredField("hasFilter"); + 
hasFilterField.setAccessible(true); NarClassLoader narClassLoader = mock(NarClassLoader.class); EntryFilter filter1 = new EntryFilterTest(); EntryFilterWithClassLoader loader1 = spyWithClassAndConstructorArgs(EntryFilterWithClassLoader.class, filter1, narClassLoader); EntryFilter filter2 = new EntryFilter2Test(); EntryFilterWithClassLoader loader2 = spyWithClassAndConstructorArgs(EntryFilterWithClassLoader.class, filter2, narClassLoader); field.set(dispatcher, List.of(loader1, loader2)); + hasFilterField.set(dispatcher, true); for (int i = 0; i < 10; i++) { producer.send("test"); @@ -360,6 +366,8 @@ public void testEntryFilterRescheduleMessageDependingOnConsumerSharedSubscriptio Dispatcher dispatcher = subscription.getDispatcher(); Field field = EntryFilterSupport.class.getDeclaredField("entryFilters"); field.setAccessible(true); + Field hasFilterField = EntryFilterSupport.class.getDeclaredField("hasFilter"); + hasFilterField.setAccessible(true); NarClassLoader narClassLoader = mock(NarClassLoader.class); EntryFilter filter1 = new EntryFilterTest(); EntryFilterWithClassLoader loader1 = @@ -368,6 +376,7 @@ public void testEntryFilterRescheduleMessageDependingOnConsumerSharedSubscriptio EntryFilterWithClassLoader loader2 = spyWithClassAndConstructorArgs(EntryFilterWithClassLoader.class, filter2, narClassLoader); field.set(dispatcher, List.of(loader1, loader2)); + hasFilterField.set(dispatcher, true); for (int i = 0; i < numMessages; i++) { if (i % 2 == 0) { diff --git a/pulsar-broker/src/test/java/org/apache/pulsar/broker/stats/SubscriptionStatsTest.java b/pulsar-broker/src/test/java/org/apache/pulsar/broker/stats/SubscriptionStatsTest.java index 67c60585d2f5b4..31a833047e7424 100644 --- a/pulsar-broker/src/test/java/org/apache/pulsar/broker/stats/SubscriptionStatsTest.java +++ b/pulsar-broker/src/test/java/org/apache/pulsar/broker/stats/SubscriptionStatsTest.java @@ -191,11 +191,14 @@ public void testSubscriptionStats(final String topic, final String subName, bool if (setFilter) { Field field = EntryFilterSupport.class.getDeclaredField("entryFilters"); field.setAccessible(true); + Field hasFilterField = EntryFilterSupport.class.getDeclaredField("hasFilter"); + hasFilterField.setAccessible(true); NarClassLoader narClassLoader = mock(NarClassLoader.class); EntryFilter filter1 = new EntryFilterTest(); EntryFilterWithClassLoader loader1 = spyWithClassAndConstructorArgs(EntryFilterWithClassLoader.class, filter1, narClassLoader); field.set(dispatcher, List.of(loader1)); + hasFilterField.set(dispatcher, true); } for (int i = 0; i < 100; i++) { From 3da7b9f8bfb55bf4698dc68a8b05b4daa3277faf Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Nicol=C3=B2=20Boschi?= Date: Tue, 18 Oct 2022 21:33:34 +0200 Subject: [PATCH 10/28] [fix][sec] Upgrade protobuf to 3.19.6 to get rid of CVE-2022-3171 (#18086) --- distribution/server/src/assemble/LICENSE.bin.txt | 4 ++-- pom.xml | 2 +- pulsar-sql/presto-distribution/LICENSE | 4 ++-- 3 files changed, 5 insertions(+), 5 deletions(-) diff --git a/distribution/server/src/assemble/LICENSE.bin.txt b/distribution/server/src/assemble/LICENSE.bin.txt index 2920c15c8d142d..786c19171928da 100644 --- a/distribution/server/src/assemble/LICENSE.bin.txt +++ b/distribution/server/src/assemble/LICENSE.bin.txt @@ -506,8 +506,8 @@ MIT License Protocol Buffers License * Protocol Buffers - - com.google.protobuf-protobuf-java-3.19.2.jar -- licenses/LICENSE-protobuf.txt - - com.google.protobuf-protobuf-java-util-3.19.2.jar -- licenses/LICENSE-protobuf.txt + - 
+    - com.google.protobuf-protobuf-java-3.19.6.jar -- licenses/LICENSE-protobuf.txt
+    - com.google.protobuf-protobuf-java-util-3.19.6.jar -- licenses/LICENSE-protobuf.txt
 
 CDDL-1.1 -- licenses/LICENSE-CDDL-1.1.txt
  * Java Annotations API
diff --git a/pom.xml b/pom.xml
index e663de7794274c..8f49987d965550 100644
--- a/pom.xml
+++ b/pom.xml
@@ -146,7 +146,7 @@ flexible messaging model and an intuitive client API.
     0.40.2
     true
     0.5.0
-    3.19.2
+    3.19.6
     ${protobuf3.version}
     1.45.1
     1.41.0
diff --git a/pulsar-sql/presto-distribution/LICENSE b/pulsar-sql/presto-distribution/LICENSE
index 9ec3679f87e00e..18c6420f0c00b3 100644
--- a/pulsar-sql/presto-distribution/LICENSE
+++ b/pulsar-sql/presto-distribution/LICENSE
@@ -474,8 +474,8 @@ The Apache Software License, Version 2.0
 
 Protocol Buffers License
  * Protocol Buffers
-    - protobuf-java-3.19.2.jar
-    - protobuf-java-util-3.19.2.jar
+    - protobuf-java-3.19.6.jar
+    - protobuf-java-util-3.19.6.jar
     - proto-google-common-protos-2.0.1.jar
 
 BSD 3-clause "New" or "Revised" License

From 7d6dc2ef6dd1d3a59e49fb88afb6981e58c35f97 Mon Sep 17 00:00:00 2001
From: Matteo Merli
Date: Tue, 18 Oct 2022 17:38:17 -0700
Subject: [PATCH 11/28] [broker] Fixed delayed delivery after read operation error (#18098)

---
 ...PersistentDispatcherMultipleConsumers.java | 17 +++++--
 .../persistent/DelayedDeliveryTest.java       | 45 +++++++++++++++++++
 2 files changed, 58 insertions(+), 4 deletions(-)

diff --git a/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/persistent/PersistentDispatcherMultipleConsumers.java b/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/persistent/PersistentDispatcherMultipleConsumers.java
index 15b42fedd38ab9..9f09e60abb29ee 100644
--- a/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/persistent/PersistentDispatcherMultipleConsumers.java
+++ b/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/persistent/PersistentDispatcherMultipleConsumers.java
@@ -100,8 +100,7 @@ public class PersistentDispatcherMultipleConsumers extends AbstractDispatcherMul
             "totalAvailablePermits");
     protected volatile int totalAvailablePermits = 0;
     protected volatile int readBatchSize;
-    protected final Backoff readFailureBackoff = new Backoff(15, TimeUnit.SECONDS,
-            1, TimeUnit.MINUTES, 0, TimeUnit.MILLISECONDS);
+    protected final Backoff readFailureBackoff;
     private static final AtomicIntegerFieldUpdater<PersistentDispatcherMultipleConsumers>
            TOTAL_UNACKED_MESSAGES_UPDATER =
            AtomicIntegerFieldUpdater.newUpdater(PersistentDispatcherMultipleConsumers.class,
@@ -141,6 +140,10 @@ public PersistentDispatcherMultipleConsumers(PersistentTopic topic, ManagedCurso
         this.readBatchSize = serviceConfig.getDispatcherMaxReadBatchSize();
         this.initializeDispatchRateLimiterIfNeeded();
         this.assignor = new SharedConsumerAssignor(this::getNextConsumer, this::addMessageToReplay);
+        this.readFailureBackoff = new Backoff(
+                topic.getBrokerService().pulsar().getConfiguration().getDispatcherReadFailureBackoffInitialTimeInMs(),
+                TimeUnit.MILLISECONDS,
+                1, TimeUnit.MINUTES, 0, TimeUnit.MILLISECONDS);
     }
 
     @Override
@@ -811,7 +814,10 @@ public synchronized void readEntriesFailed(ManagedLedgerException exception, Obj
 
         topic.getBrokerService().executor().schedule(() -> {
             synchronized (PersistentDispatcherMultipleConsumers.this) {
-                if (!havePendingRead) {
+                // If it's a replay read we need to retry even if there's already
+                // another scheduled read, otherwise we'd be stuck until
+                // more messages are published.
+                if (!havePendingRead || readType == ReadType.Replay) {
                     log.info("[{}] Retrying read operation", name);
                     readMoreEntries();
                 } else {
@@ -1019,7 +1025,10 @@ protected synchronized Set<PositionImpl> getMessagesToReplayNow(int maxMessagesT
             return redeliveryMessages.getMessagesToReplayNow(maxMessagesToRead);
         } else if (delayedDeliveryTracker.isPresent() && delayedDeliveryTracker.get().hasMessageAvailable()) {
             delayedDeliveryTracker.get().resetTickTime(topic.getDelayedDeliveryTickTimeMillis());
-            return delayedDeliveryTracker.get().getScheduledMessages(maxMessagesToRead);
+            Set<PositionImpl> messagesAvailableNow =
+                    delayedDeliveryTracker.get().getScheduledMessages(maxMessagesToRead);
+            messagesAvailableNow.forEach(p -> redeliveryMessages.add(p.getLedgerId(), p.getEntryId()));
+            return messagesAvailableNow;
         } else {
             return Collections.emptySet();
         }
diff --git a/pulsar-broker/src/test/java/org/apache/pulsar/broker/service/persistent/DelayedDeliveryTest.java b/pulsar-broker/src/test/java/org/apache/pulsar/broker/service/persistent/DelayedDeliveryTest.java
index 8b62845572d265..7f8059e1f0bf36 100644
--- a/pulsar-broker/src/test/java/org/apache/pulsar/broker/service/persistent/DelayedDeliveryTest.java
+++ b/pulsar-broker/src/test/java/org/apache/pulsar/broker/service/persistent/DelayedDeliveryTest.java
@@ -35,6 +35,7 @@
 
 import lombok.Cleanup;
 
+import org.apache.bookkeeper.client.BKException;
 import org.apache.pulsar.broker.BrokerTestUtil;
 import org.apache.pulsar.broker.service.Dispatcher;
 import org.apache.pulsar.client.admin.PulsarAdminException;
@@ -59,6 +60,7 @@ public class DelayedDeliveryTest extends ProducerConsumerBase {
     @BeforeClass
     public void setup() throws Exception {
        conf.setDelayedDeliveryTickTimeMillis(1024);
+       conf.setDispatcherReadFailureBackoffInitialTimeInMs(1000);
        super.internalSetup();
        super.producerBaseSetup();
     }
@@ -580,4 +582,47 @@ public void testInterleavedMessagesOnKeySharedSubscription() throws Exception {
         }
     }
 
+    @Test
+    public void testDispatcherReadFailure() throws Exception {
+        String topic = BrokerTestUtil.newUniqueName("testDispatcherReadFailure");
+
+        @Cleanup
+        Consumer<String> consumer = pulsarClient.newConsumer(Schema.STRING)
+                .topic(topic)
+                .subscriptionName("shared-sub")
+                .subscriptionType(SubscriptionType.Shared)
+                .subscribe();
+
+        @Cleanup
+        Producer<String> producer = pulsarClient.newProducer(Schema.STRING)
+                .topic(topic)
+                .create();
+
+        for (int i = 0; i < 10; i++) {
+            producer.newMessage()
+                    .value("msg-" + i)
+                    .deliverAfter(5, TimeUnit.SECONDS)
+                    .sendAsync();
+        }
+
+        producer.flush();
+
+        Message<String> msg = consumer.receive(100, TimeUnit.MILLISECONDS);
+        assertNull(msg);
+
+        // Inject failure in BK read
+        this.mockBookKeeper.failNow(BKException.Code.ReadException);
+
+        Set<String> receivedMsgs = new TreeSet<>();
+        for (int i = 0; i < 10; i++) {
+            msg = consumer.receive(10, TimeUnit.SECONDS);
+            receivedMsgs.add(msg.getValue());
+        }
+
+        assertEquals(receivedMsgs.size(), 10);
+        for (int i = 0; i < 10; i++) {
+            assertTrue(receivedMsgs.contains("msg-" + i));
+        }
+    }
 }

From 269c88510a10e3f8e51f9bf00672d2ed4a67b02a Mon Sep 17 00:00:00 2001
From: Anonymitaet <50226895+Anonymitaet@users.noreply.github.com>
Date: Wed, 19 Oct 2022 09:16:58 +0800
Subject: [PATCH 12/28] [feat][doc] Add various doc contribution guides (#18071)

---
 site2/README.md                            |  39 +++-
 site2/doc-guides/assets/contribution-1.png | Bin 0 -> 71797 bytes
 site2/doc-guides/assets/contribution-2.png | Bin 0 -> 52487 bytes
 site2/doc-guides/assets/contribution-3.png | Bin 0 -> 64610 bytes
 site2/doc-guides/assets/naming-1.png       | Bin 0 -> 117563 bytes
 site2/doc-guides/assets/preview-1.png      | Bin 0 -> 139893 bytes
 site2/doc-guides/assets/preview-2.png      | Bin 0 -> 126997 bytes
 site2/doc-guides/assets/syntax-1.png       | Bin 0 -> 160167 bytes
 site2/doc-guides/assets/syntax-10.png      | Bin 0 -> 149702 bytes
 site2/doc-guides/assets/syntax-11.png      | Bin 0 -> 106124 bytes
 site2/doc-guides/assets/syntax-12.png      | Bin 0 -> 44427 bytes
 site2/doc-guides/assets/syntax-13.png      | Bin 0 -> 172770 bytes
 site2/doc-guides/assets/syntax-2.png       | Bin 0 -> 38032 bytes
 site2/doc-guides/assets/syntax-3.png       | Bin 0 -> 358193 bytes
 site2/doc-guides/assets/syntax-4.png       | Bin 0 -> 243160 bytes
 site2/doc-guides/assets/syntax-5.png       | Bin 0 -> 53777 bytes
 site2/doc-guides/assets/syntax-6.png       | Bin 0 -> 315928 bytes
 site2/doc-guides/assets/syntax-7.png       | Bin 0 -> 83816 bytes
 site2/doc-guides/assets/syntax-8.png       | Bin 0 -> 161006 bytes
 site2/doc-guides/assets/syntax-9.png       | Bin 0 -> 151067 bytes
 site2/doc-guides/contribution.md           | 215 +++++++++++++++
 site2/doc-guides/label.md                  |  19 ++
 site2/doc-guides/naming.md                 | 211 +++++++++++++++
 site2/doc-guides/preview.md                | 164 ++++++++++++
 site2/doc-guides/syntax.md                 | 295 +++++++++++++++++++++
 25 files changed, 942 insertions(+), 1 deletion(-)
 create mode 100644 site2/doc-guides/assets/contribution-1.png
 create mode 100644 site2/doc-guides/assets/contribution-2.png
 create mode 100644 site2/doc-guides/assets/contribution-3.png
 create mode 100644 site2/doc-guides/assets/naming-1.png
 create mode 100644 site2/doc-guides/assets/preview-1.png
 create mode 100644 site2/doc-guides/assets/preview-2.png
 create mode 100644 site2/doc-guides/assets/syntax-1.png
 create mode 100644 site2/doc-guides/assets/syntax-10.png
 create mode 100644 site2/doc-guides/assets/syntax-11.png
 create mode 100644 site2/doc-guides/assets/syntax-12.png
 create mode 100644 site2/doc-guides/assets/syntax-13.png
 create mode 100644 site2/doc-guides/assets/syntax-2.png
 create mode 100644 site2/doc-guides/assets/syntax-3.png
 create mode 100644 site2/doc-guides/assets/syntax-4.png
 create mode 100644 site2/doc-guides/assets/syntax-5.png
 create mode 100644 site2/doc-guides/assets/syntax-6.png
 create mode 100644 site2/doc-guides/assets/syntax-7.png
 create mode 100644 site2/doc-guides/assets/syntax-8.png
 create mode 100644 site2/doc-guides/assets/syntax-9.png
 create mode 100644 site2/doc-guides/contribution.md
 create mode 100644 site2/doc-guides/label.md
 create mode 100644 site2/doc-guides/naming.md
 create mode 100644 site2/doc-guides/preview.md
 create mode 100644 site2/doc-guides/syntax.md

diff --git a/site2/README.md b/site2/README.md
index 4405a9d21063d4..7b1ec17dfea949 100644
--- a/site2/README.md
+++ b/site2/README.md
@@ -1 +1,38 @@
-For how to make contributions to Pulsar documentation and website, see [Pulsar Documentation Contribution Guide](https://docs.google.com/document/d/11DTnNPpvcPrebLkMAFcDEIFlD8ARD-k6F-LXoIwdD9Y/edit#).
\ No newline at end of file
+# Pulsar Documentation Contribution Overview
+
+As you start your Pulsar journey, you might not understand something or might make mistakes. Don't worry and take it easy. The Pulsar community is a joint effort based on respect, openness, friendliness, discussions, and consensus. Everyone is encouraged to contribute and any help is appreciated!
+
+The **Pulsar Documentation Contribution Overview** provides a set of guides offering best-practice suggestions for contributing documentation to Pulsar. It gives detailed instructions on the contribution workflow and conventions. Please follow these guidelines to keep the documentation structure, style, and syntax consistent.
+
+## Before writing docs
+
+* [Pulsar Documentation Contribution Guide](./doc-guides/contribution.md)
+  * Doc structure and project organization
+  * Workflow of submitting various docs
+
+## Writing docs
+
+* [Pulsar Documentation Writing Syntax Guide](./doc-guides/syntax.md)
+
+* [Pulsar Documentation Writing Style Guide](https://docs.google.com/document/d/1lc5j4RtuLIzlEYCBo97AC8-U_3Erzs_lxpkDuseU0n4/edit#)
+
+* [Pulsar Design Style Guide](https://docs.google.com/document/d/16Hp7Sc86MQtL0m8fc2w_TrcKXAuglwRwHmdmwfk00mI/edit#heading=h.b8ogodj5sj0)
+
+* [Pulsar API Documentation Guide](https://docs.google.com/document/d/1-I1oQp1_HUaQopqilU-JdC-ksrLAgYNi93FZVnECwV8/edit#heading=h.wu6ygjne8e35)
+## Testing docs
+
+* [Pulsar Content Preview Guide](./doc-guides/preview.md)
+
+## Preparing to submit doc PRs
+
+* [Pulsar Pull Request Naming Convention Guide](./doc-guides/naming.md)
+
+* [Pulsar Documentation Label Guide](./doc-guides/label.md)
+
+## References
+
+In addition, the following resources can help you craft and contribute to docs.
+
+* [Google Technical Writing Courses](https://developers.google.com/tech-writing/overview)
+
+
diff --git a/site2/doc-guides/assets/contribution-1.png b/site2/doc-guides/assets/contribution-1.png
new file mode 100644
index 0000000000000000000000000000000000000000..7363851e8bd9a4b0529b6fa2b8a96e2a60803bad
GIT binary patch
literal 71797
[base85-encoded binary image data omitted]
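A note on the entry-filter patch above (the `AbstractBaseDispatcher`/`EntryFilterSupport` change): `hasFilter = CollectionUtils.isNotEmpty(entryFilters)` is now computed once in the constructor, so the per-entry dispatch path checks a cached final boolean instead of re-testing the collection for every message. That is also why the tests now have to inject both `entryFilters` and `hasFilter` through reflection: swapping the filter list in after construction no longer updates the flag. A minimal sketch of the same pattern, using illustrative stand-in names (`SimpleFilterSupport` and its nested `EntryFilter` are not the broker's real classes):

```java
import java.util.Collections;
import java.util.List;

public class SimpleFilterSupport {

    /** Stand-in for the broker's entry filter plugin interface. */
    public interface EntryFilter {
        boolean accept(String entry);
    }

    private final List<EntryFilter> entryFilters;
    // Cached once at construction, mirroring EntryFilterSupport.hasFilter.
    private final boolean hasFilter;

    public SimpleFilterSupport(List<EntryFilter> filters) {
        this.entryFilters = (filters == null) ? Collections.emptyList() : filters;
        // Equivalent of CollectionUtils.isNotEmpty(entryFilters),
        // evaluated once instead of on every dispatched entry.
        this.hasFilter = !this.entryFilters.isEmpty();
    }

    public boolean accept(String entry) {
        if (!hasFilter) {
            return true; // fast path: no per-entry emptiness check
        }
        for (EntryFilter filter : entryFilters) {
            if (!filter.accept(entry)) {
                return false;
            }
        }
        return true;
    }
}
```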
zh-6*H&nc(SE2%o4A7wr-ZqQ@$QOH>>^%I`x#~yF29RubZ8>85wI}z;2=9$)t<*MCp zrR49rK2?(1d(HIcAuON_SC7^vFOJc1Q2WW_Ye`Lsqn;XzA0t7S$^rE~Nk>YzcHKtl zB_NTsgq3+52rQ}P8(lne>)e4|+BTR6J+Pw>b43irm^WPUQC{%3Mm zRql;;9FUJKJb^XBjt%uX$A(Y#46Vru774P(q`k_M6FP5!lzwZ}O9lz5TUqC>3z0Ew z=Fq1K_=V~{-H^A$#PGN2Y~^_(C3?T+cHm;gfYNkCq1f6FbtohDlc2D`nZzQLTIDgT z#0nmhQ}}lHSj*xQ1PEr~7B4Y2A9 zN-%+h0{)_8ajQHx4zmI?CLRxC9+c+vEsvY7;vkbVoPLP%-T1oz)M+8@AtzOt({Ar; zi=X_az$M02PI_(Y3xVC%FjW)4kHa(piyZqHh_aEtYwt6CPR9beJn;G#zvlpTtZ=$` zagPO;Zoo)P@g(Zg zg(4-SE;@;lmS!JDSDpLK^>>ICRk~qq&uO=X)-z)sqrDp!azBd?mK&K6>O6ziv&M&= z-^l!fhU_u-|D<|SD?pGWg^o*JlI;J)!$mlO!SL6oNrOJ^UCy zOn8))o1xe&6P;C4J9jRBanlaFo*KGkK~XtL^onepH;`u>P=sB>5IqnD8$_W zJNpYa5_2y&G*R+K?5j{kIL@Q&jC#wU38qIPu6>$Na7YM`B3GPmq^w+L$O)TASg$|J|EP)X~ zyj@o)f1dxYFQhccclS%eghnX@jWjqn5fR-ypII4JVi|lDGeTKUBSP~EdhSERPPwlC z-@*y%hmV5;RkUXzHfG}%#D#t1-?grM66Jo^{E6!9`t+LYwR4jNF7(u|($Cif9ZCk6 zI%&@Wo_pLv?FMYhKj}P`PXK;p8)hUp6=)sUY1UXl!G+tj^V-dE>nF6aj(T&L+46xNBZnv*o>=_zcMRkUA1ic>im)39cvztx>y7jD|58vDE}|Hj>FS0tKO;5MFgm zH$VtLH%X4sagD1RI<*|gFz~5nROdy!EBq7aX~Q6>0LxV+$A%RpKdF+&E6eCe(~qC8 zl3wrnGmAjn5}sv1+{8-mGv8wueorj5tiDj-{%Od^^t8yAzhPpdZ0Oj%du zx|zjG$W!SkrIVuQj>O12)Cb(KRsINyNs;p7%%P=Sg>Z7mV+t}8?00nx1e3$2QW^vy z`3>*udd-YZx=TF|yU6LsjESZ8E$&M1oGiAP6r#h(^0S8oZ2GD5J)dIY-sO$%7PXmt zVI9-aW`nC!JjseeFoC!ked>oiHY8y)57WgoPeH&?bE@;*W{tXBGE47j&4e|;U@K<2 zl0N;Mf~h|`pqOPl^AOPCvac^T!J6593QyUajd6&$=~ghd)90U*w*U#w-nNgt=AV^E zK5JTkT7C2DJLw>#mA7Jn+XQ^#t1gbcn4w}4;P~c1ZsSd@hwK-C&{}Vs*)6}xKPpZT z-64A%_hQk-dH96*L6znRUxf!Pmp=_h)Yg?DHqFn-Us@>=wP=$`M!o-gdPREWT`|K) zc4*WeQD;u`3sdRnjg#LYF$@!YaiFd8EqOI;J9G%F^d;jgy47uvc;vNI@b>Sf{6nyRD)5(-N(4I4R4*$tEdJVMVm6* zmCuJ2TzQpo&v>e2%}+XY#8(MB??R<-%r6&Cd#i&*d}qmOcEqehHWsVhf*fpE2QL!M zH5=R{^GBm+c)Qyzug@m2ogLfqh+JKGBlfLNuj5p?DLp^bWmE3cjF#*%>--JRV^U4R zBvm}Epv9z7UgUI`Z}Hd;)^hyKlluSo4)KSNqXJbfsZQiz`&+3H<8b8WhN9`8u6UjZ zVr^{Iql8elP%@3|!;8MA5-0VG{P|3`0e0r*%}QYjxd)*4hB_(?4PP|s)IZ9e$bO7j zRmH0?Gw^Tik^#J@~Zo?@Qu#-k^WMp%g>1-qU&g3w4(7svzp+W*1v>~o4hG*X%uSvM- zWn|v|fn98C`76GV*6K+*F}HQX=3PEqzG&dt*hJP@9Wst#!>C~xV;YVm|H=#+09kxj zFco1Dc^8}PsO-Fyhf7}8t@IMFXon-uZ!yo_x%sRPN(B~;ZC1m!eau5gQdnnZxWr_i zWR=gmTh~aTwNrYeuVZqvU8!V7nmSYbp}nBBQ%}~LJ?<)2-ZGCl`CZLBCh04-(9pL8 zXUd$7hC2@DiKyhS%P~!7VNGk<~Py94Xo-d2u4`|1GdLBL2=wBUtXhNfRva z89)JrrvV7%O5{Y_o_t78PckM)wA+7EZ_Eu zwcfLYn_{o4_brYLV#pLp6&hmx49bW|1287_ef#`3ngv9llC4gQwr)Cnzc2^%@gJd|G&X6stI_J zkqEO4%^uh)tI=ybfA#(?UrzL5#rtzNdUO1?$|6=lnqU!;zFlmyc_qig>$X%|6(JN^7rjg6wkt8- zRPl~~tm>TU0|IiyK92)<_^aVYD8dF}e@ReR& zA0@zx_%@Gr_j=5|Tj@B|Jibx?UvTd7V#KGWI7wYn z$?%a+vdKbAo|!-B=+Bu= ztu@d-h>uc~cAU^mGxg(-<=V1B^qM_6RWUX*i{;<*!@iWHwS3ZLR;2!4yn_|>LtV$5 zO_C+$gxIe_8gDK62Q6qaJ!fruc^wty9x7}^_85L<9kxJ{T56oRmWe?<0-oObY&`M} zuqO3=52CcLJPAr*ca*+7iMRUcM$vGh>zINKLDi|F2i1P2ogi#FKNVfoI5EBTSjQ@J z^m%)6{*cVYVBLS)WQw7QgXn0@MMs98rWL=MO*G{7e&m&!*d7}N;A%B41)r2JCfCmJgT490QTA19}SmEk63GbGtv-UvPuKFFv7;$ z8e%`5&V+MX3vE53&wh@K)%bmv^MC@>+cmG4w_rV?{bPGhpxms_d#!}Xnx=ZOgvszUXz4W(VTu8z+0}S*8dHpf+pp)o-PDUe(fi_2 zO5Wl}s!kYpduDgD7eswyrg_f-_*>8z+LJdqTg?Qtr{`vcr&u3WKlKMwnEqziJDO${FR+sOX;`R|tYk@*SvD5%GMu715xL!NW0f zc$$zpmf60<+}6NF53AHzhWm)FUf{)PsroKJTv|=KDYnCn_Xjk=fg%b zawW&^p*lqidDs=2A77;Lns@EBsc=qvQyB28n{N6Z@v)IQZyNQWYC9FU^mV}kaFXrP z-z?*->8Ul$yA4{^4hwPYd!J6}3!_i%`+iK-f<1BfMGEyZRI2WBEC%uhdNK==M$IpD z2TD~QWP>+T&)?B3Cm*dP89UiVOI>Y$5b>1sL4(#ps$BJNj_WJ^9q0K!jiSNCX7C>n z{ajfKa==pJDBXP&FcArTTM0QWO=p$dQSQwtpYTVseC!~q&>;)b-W0Xv#0i!Rhj>Ho z)Z^@y81N8?3RI4?Urm@igIgI;Wur_zrrxIWzKgm(-W;Z;Yn)s2=IO5Iwys<9UPth6 zEgEO&=dbYnK93gLE!hm$^jl5QFxhwOd_*DGM;~wlwM~vqw3gg699{{fUwr4gU*B6z ztl$-u$+y)_f9rCLj@@(H-a*{I0n#a(v+r)U#F?s&B@-w}0x6-d@2D;bDu6Z9^bg5C 
z#&3MqXagm}mwnKD-fw^ldksOwR~C=?fIC#A4sw1=%1bK$sWeN7aNALoyzM<2FMasQ z32%gn{t~g-L#pX;dwbqaq_5pZoER0E_m~IVubTEu&mdQg(RG2ik{)e%5smI>Fqtk| zY0?&Xe!`SvqJGQcxAIu^#k=(Q2adkZ3}eqlE>tsP{MMNSYG&ct`mMWfVYmm^R$t@$1{_~ zA1-6^6p8Iw-}>mnqgV-g`-Py(4r+5BBDttetHdl`DmCHI`?-9jZvlL^uV~uzlb60( zA##?w8jDFfyS&GB;rF&*eTmljcNX5Ew4OFu=Pg^W0qBdXi`S<({{Ir*u23`Na4 z?^=IazCTe(pP?XlaJ+v{u<;4-=@}CrMd-KWAv#is)MOrx++Y$(R3-E@eeh#mtJYyQ zaB=)Q-^Bd6d5C|oXTwbp`==M8TUDG{C&7rxRoM|?wmx%1sf?yji zI)yk6>(036o9J4Dsg{Yv|5RzLO&+)GkU4o2KtsIz>ssc{$)zXX+=BaU_@sV+lJ$RySTqPX^w3=tUv#0gUIY_3a|f`gzX}S4Qq!H#X=- z-*9$+Y`H-Wsv5&)6s51m?8ZIrs+ z43>S5Pw(~{XgdU*Nn}UUFE+&hmJbE&?X~B7<2u0&>TaQf- zKbuoNmV&zLJ?pt(;qji~&x4FFcejqB6AlVByJXY9GMV(y$Y#}xZgHo=0uI%sP}8im zQAF!w&cw$6`UKW&7n<|5(Eg^|mt1OfM7LcAUy9ErDo@i(E1JIz*}I1lDDs#8frRtn zDO%cD7dTjc&9BB|*F|+<%wV627?BYAQwpyLkrP=wJ=^E|O6OI`$C}cPcop=kWppP# z0T&%Np)+EhjZeIZmYHsQPHIi??rfRzH3)?{AxiWg+dn<2fAY2 zYtDvI{hks9w-;x4-&pZxmd+X)pJiYM;P-iB`^5i+Q#;T59YvaP?#n3)>=@Tr=qT>W z+vd;fyigP5eRU*YRPWjjlv%dDMT__CKoF~ZU~cC{7m;Q;s>Y(nO$>|l+RytN`_W+z z(8!JJgX3i$p6PYlA4{He9+LB<#+Jf$RZsOr=p+0`a%QPE_vF>6Ar+xM*zhdW49m-n zho2B9Lj2JOi2^4cpRq`ltV7bKnwPIz&lbFv7|&U}E=0UE7w!$2{Pw6}zhRCX!cKoi z6rQ3jd&sTHUt-pkVQy*r?RwUrF03>7Y|-~1?|WAf{djUe0n_()CjdR>@!EEKa6-p=9VnJJep$ZJlmpz+qrs+RA4PYV*5;Av7sW;HaR=f9bS2~rWs zH3P3-|IPN<{aV=i{tHliP5744hoSr#u?uU^t`a#gt8vz@w2S&?s*q(r+NYkc?U3`K zO-4vz4T2`vWWj&=(|n)q3E|pOM4NpNfoU$RUj;)^Q~ONX8*A6cCc=gN5pQ8r!I zY}cYQO;q1u4ho93W}jXU;#0&6=PXKDPKYzdJ}HYk~Ns!BQZTL8C{)&;oOGSKvBMD)iAaRYJ)xiWWe5GGr@k zS?72YEQbWGmZ{CEEi$3qRCEJfp;9X#kH_zhj7KA{;*JtqprElgeVUMpK3+UwK%+ZDDq!`G7>4HC0LizZUrbe!XjToYmJzYO~qRf_M$8&CC@8 z@h&)<-9^KVCpto_IeB-zEnsn&klT88#iql0T^z;g5-MGGVsfmtESzC`tAxdZ7w9|x zJO0gUEAdv`o=>CK3g!xA#9LCiDU{yoIxG+3BLmqIaI$K(l7O9@ihMh%f*Uf*DZ&;x z$do6t*zo(rBmi}SVwMRwjo#cmat^z2dUBwG*klh!-6n(dq8Q6Byf)*3}~g5W_I^1$$V4>-)m zLzZ*+rMuH#<(46oMyF*aL1>VX(lA%>2&l`6C_*R$m^#;<_jybk7OVYS`Fuf zhyuc>TGi;ru+BRk*`U)R8oZkv_MOPRV81{hiO+}9S_YZ_i8}Pe_|43cKlWEoQh7!6 z(rfw^CktWq)cWE4(s9-~8Rf_9vAjFzCfjQ4>$y8&6{<_ouJe880AKBw{k{Opi_hu& zhe6h$cy|xHHL|4p{EW!lZU!0nvOFDIpSJ#h(UH`jiSvli{3uYlr3il5qBnej-te z92n{TQ;T<7zm_{-8Yr4_S%3wBa63E_FN(pUGtLW35txOjuI^}ICcLGDTs{4bi-fl2 zjw+F!12i^i-Et4o1rRNQR{)ebI@^Mhu6_4LB&#lyZ~g}#!sJAeSl2Cx6}wDuHX}bU zuuS6D_Za<~&VEtwlj3sZw#>{scXd}9pz$MmSBr}@ea=%MI3Qi*Pn7O}!d0qNfukt( z6Ck}Jc0}@9~*Cr2}aq<;g&4OJF0f}~GVkKwI zGg~%VD49x_l6oC2Q*z(0rn8OT#UffN5}4Gajh+_Eak}1!WbKg2g%{FqdyeTeiEfYWpgF-$s{sQe*Q6Fr>q4F znYk)Edxp%e`6!uX0>M}6zT^{lT5Pz|0iu*hcP-{4*>x=Oxw(SV`a*rN#JZW`ESB~N z+XqthLuO25@%L{`eT02@B(Ln_#oUpGzh&^Y0JiZ`%AO~pssJ43{c=r&I3t8WQ=1`z zt?g8EO@c;L;VFofCgVz5|N3N=8UEtD|BLnM+!3IJY_~LohCa#EPVouNWG`txv^E~3 zSr@L`m7zCi4)%V~zCK9Y4YMwUxgvh)lY%8J*;F2x?OUrc$3P!>^A0W3qGai+X2_MO zu-_mYGrcO8c4JzGpXtHT*)h@Uo^%%KO%`!}O4K}W*5BjNty|KN*iPW|$Ata3^39U! 
zz~G;b1v{1nq!X>@E;_m`4FJ9);$_MSEVy}Z!v%Mx8NH#Zr;IVA>UqU%i)u~|;ow&p z)Y_XFqV5rLOHEUfG@3L|Ug^O+i)iXOw(7S<731AOMQs{FL+Z1+aPqU=0!Luo-jQd~ zB*&$JH}Qq00cfRsscvS-lrcznyor62EO9KXadz2UYg?WL)0CI=6!>(Msiuz$n<2j7 z_#OaYCIQ0mDqT1q9YbYCu!iKyDspj~g{iYXGu#t6k+l+cGkkXQfrkpQ@ ze}63ZEA6#xh25&aH?av#<+a(Bm^3#28gB_rauP@Z zZ-e6~m4f1|DMM+9SYE-Td`mfLp_b|_0;I{p-Vq5}+(4qIL;{rG`o}f>F~UWd2GW#c z=Z~Lrl&ZsG+}L-dOjK&ZI1hN3LvoD=*MJ;Fj&ytO9wi$Q5^KG?j}&X@3x+3y&B^gK zCZoz26)(Qi|LQwN(Xe}t@obc5hiin9Y2C6!Os2ZBkucb)odF>tOq?8qbZoMDTnT4k z1UN+N4`NH z<&ADAJO1EjuuJ=b@OJ-^rJDAOIZ#-`3~xummCKXdgz19S z$=H+EVu&3Uao-7S3!RrkBd-H}OjoFaH|`u?#3Xf0!~j zv^UGhPMo5G^_TK%QO%^UwjuUk0Kv@@BBagJlXw;+4Oh0z-`@Aenps&fxRH1=seCS_ zs>y5~B>NAWO&k5k;@%N%a4{WxHYb+Hi!-zCX3~RlcA~`0 zJ~I957H%>9V435+Q$J3=nt&duR-se_Sok~^Fz~pBBZPyfIjk}Ho!0CXGd@Rg(1y|3 zjMg{GzS+=KiU@2Xi4D#5^y=Js10%g;c|LrwOY7~v#BAi}pOK6{O4Vc0j(P9aR3-c0 znJARo)Muw}AKS!;Ep0PZOraRAbmq2`SjKzDYU^dUTIl(UCdmx8XX*oul1cZCrSJ|`gMv0b95GCj`XTl^0!YZ!RT@l69RttqrOL%Ps2;*IFTNc?MHL+ywH+@ z5=DKwBrAcQ0;b|CY^KyPxh$46=wkwR*rHIHb~?d*zce z$i;+bz~fI~a?&4Y@d31#zD$dnM+AFC0AA6&)N{~$V5}FuUOe0?%BhNa`=xmx${VNNXU+ecyIcn| z@@n7LGiWQFAwn7H6|!DnH&tUdK*kY+Iik3|7;~>WdLUejqgpA2jddj8+~6(m3BYTz zOf5ULJQgMuQ(sv^c@eUFp>`H&n2;sa37B)gJ?Aw%Qh9dtM64JqoF?tLZ=zy9Np<1V z$d^=5YkXK_N9lsXkt~)-)U(smGNr(96Y6s56Mo`&g5DavW%#$ovtleQ`~=mN8o^lW z#5^5PF(GA5(ys`3q+0zRuO7JCtbV`z?A=MYlH8(?JaC~lK@%u!VM<55bLcCkwDh!| zRJjAhR_8K##yB1HVcJ>pJr!zJ!1OTi#NIC$rXoYwTRD^E)}pf3M70#R63`BGbfDBy zjDXqDfPT)snOZ=aT8P@OGqg8+zFGPMmxxu=NcqLH^a}n`|IG}YsNdJ@lcev4%>3Kl zRB9_f)CaZ-c-uNZ)cLFo+Tn~mTQfz-XR<=O805e-Q4dEe1gn=8BilbCg8gT+1c*vv zP;10Tkspa+dqzG40h4bvFXJdQ1QbungUNqQInUvN0SpfM95^cZRdFvukHi|FAe5Ye``rp;(x7aJ&v*`*^k zz~(82S@#!OrMaLwT`Nx<$Lut6!F@j#;GW_Yv^` z)`#(d9g^l%{RqipPR!EIS5nxa7Ylp8G^s|PsO0`kN3hEz1OtAmzhQZ%(%K&}{^51B z4eg_W97`Ui)!u>x00BE83)$f2Y|w{I6NIf6o z-i%dd5<_Z()^VO~kBctD%>2cXHKWXSp{T2Lz{+Y6+pDvT@T)E^z@6%_$OdtRpOEe$1HNpv87k?!|V zT;UId)D&BBCPkZUI13Z>g=OPJs9IAf4f>v}TO9goj87{k5ZT(#?Pye~H{>*0PdFOh zEQh*&#(}e!6&&HIC9*f02hh*Mb+`)?v!xj6{dz?-+eApBjxC5Z;ZKR3!1g>X#&$!w zg-c*lsVez|DIu$*ud0exlQO;vsRYLS2W>b0vWqGq{*aa57gkH`p|XAs9x@pgudG0p zk$oLTvn9p$L9t*`zK)g0x@-FCyUvi$pOG>6UK+b!;G{#HY04t!9Rck7A{PZQA%bw2K@1mz+QPif$q6F zyyAEEFL~3D&e!!{)TQ!#2>_TZm0A4kzKri!fP)y64cKrg>XF5r*ht->A8qe z)}(D^m~zW&a^Fhw0lra2u{|9Ro7DM?z?ER@u?R(W0iJ^{HvJnXhnC2H`GT5|ALD#< zfZ!bYOZMqqdDweSKAMM-;hXVsU7aiWCjA+F&u3wbR+fQ$pZ|0W1i0|;Hhc4j{5meq z1gIU>pQpczjMmZWk7_t1dA70bZb8P^0BoH!zAGkB?S5~stAFW1Ww}kARIwx6YB8Du z%D~0saXEBL*bX8S<_V8y-6BAG(I<$YM)=PG!cg$8_rOmnBXowhpr_$L#Gg{(O=ZwV zxI{lYWpL>&0x=%CVBdbCBt19URpQf*sL(Rpd08uDmIHbaVPRpxe?97yOA* zQTe?iR7luBrwp>oiGNV|DuXn42%E=+nI|J?HnD}T1hA8&C16)hw~YJN%s;VnvkAi9 zVDU?p#?NP3g?jDFl$7ZkQWf9KA`@)Udj2ng0FNW%kF=T=a%m96_hIwF%YtY<=aSkZ zQ6xOFt;=SeCOCfU{kNztKK9Jp9DflUk(wdcZlR{wnJUe*b$Kw7H>BHS4om2QiReOD zXeCHnZ5qJ7&P|gs5Vh6p{WEkue^~za4)MCtk`VTHMb9e+l=@wsaL4PVps#HOjKSjU60jTe-fVjzrpdNu6RM#iJ$0uUxlNBX)(27?Hnc@BMFRRI78I5!fu^bq{*b7U zQqdW5Di9MZQy%M2_GNoDe8Va1lgBlzY0jLnBZGLbbGdh~*28{U&w_MB{GtM{YS8(i zA86{tK1C};%ND7ITqfeGF+Mz}r7GCsQo7xbezqkB)zBf`@XizWW-}sdlS%$E-J*Wt z9M6jxO+pdW2TbO(fy}~Lc3hWoRL?GUK>qxNY-g19_h${_rV)L>{%ivS1OWd0cEt~qH38KFB zXUigZ-c!ZDDSs`}xYch!>R7qdt(`4cw2ec?b+Ma1%Yn;{oN;n&>Q=ltebh8D>O;&L zRkI!_t&)QJePzD%9yHdA)QX?K;!d3)&6s-wA8Hf{ErWiKcVxa@4%H zP)cuoVSq0Aidg`n2H7wEC~@4$k4ZQm)l95I4Y#1(j zZ`S>k@-E0&l|jSSKXA!i1VUh3mwdCQul8xzaA~4k3GzLCJva8!Jb!qwi79Xo5qsl* zE_KzSoc(p7`CG`TOLSIfT)fK^WnMYFOmNVVEt59{QM$GGlo4;X@8pMMWW$&0)&wIb z&PCnpa*v?LK?KvM+C{B{>m*WevUX7qJCG~q;-NhG){`iT*)K;9@_340_&TIaFV~{9 z724{E7ng^shkk{(#W=$}&L8To5nsaSQ?(VN+1fYc|3)QREP9iqyo`5%`F?Ng_eTP! 
z4>1+7OZXH_-gBtO+7J9B>VJvq@&-A`=1zQng0sKh=$cdpr=dG8b?c8WT-RHiA`CRH&|kEhxr*cF{88($1n`Z7OU&u8(MguB*I(A%yyf+~;Q@~%lb2hE39i{1g43alsO+O&Q}Pl-f>Bl>8_Ys2ec z%Khsk`*XbkYVQ~_?kf`&Y=ymFs-_0Tg--z0z8)f@eE7p#4(-0iBd9g&lpTcPuWVR< zDbknkk)|{Y6Yu*BI_v${vjCXT8BSADe^YZPtyly`C{?CD>rCeA({9k)D#X8C`)LE_ zWXoRjhvZ$2+H6FcYCUByA|OCH2$mfJt(s@Sb#BdngQ3xSrgdhy=bK!Tcs*|HgXoyFA^K|mh@tITzQwxHHPiVR zP^Ou`V}WstMq>LxJ?_&k2pRlrUC6Y9{Eazr>V}D;3H;zD@D*-?Dt#e~)hm4;v4Z@b*(Dm7M_b*#=aiAr#~Zu?@X#%>8LXEpuh5G z@O>JYJT5o7lzD}4(Ihb=+I~@Ehby|)@owki`$WqTC#*lx>S?LCT0qth(uYH3CT@x7 ze6f6n4s#K0V{Qcw)`S0eumJBE?$8x;8yWBQ5;hoh1q}L?nB55feGGt2RtSQ*{;9J0 zCvkU~?!M}LKyG{Ld@)Hp9#`6=GA3JDg5`c(|MR&KLoe?8ip6)Rnf`Nu|Mpl4cuYsx zQjpl~x2^a!nU1CJetX7CF9SR!Vb}KiAas63fWa->0(}UpI_JY%)^Sxgg=^z*bp8D8 zMkOxxLhZV!Z|9E^`@6O5+mqr=G_9{)i!PQF%p#2In`t|%M~*Yk9R9ey+B+W-&l+5N zk89w8L>Irewk*tKx5sP+v)|TqJ=S+>IfB>p4F_mnqstiwRck8RZ?87*YomD&M}4Mi z(^5s8Y>k~=rny^hKihvNk%ZW6-Sg0e{;0wk3)t;}=JL^>_Ctr5olxshN=2yg8f;^1 z*L}vs>(xD*nN}>hZ+ncYytWyQ@*t|bjWN!?ucC&lNNt>>^FGXyFAZAv_*w4T5I4_Rj!7UjFP zdxI3DmF`B6k}m0PrMtV49FXo37(`0CySoP%O1c|F8ir2yGyiw3z2Cjo7eB$l;mq^g z_Z8>)JF^!6#V;}k3gb@eNr#W>dgk49A@}7juru`DC9`AYI=@Dap8mlg_Vn=yFg%*q zE?A^-={hu3R_H%51_11=Dw1#9JK16F@|1HfSW2T)I? zZKWpn+_-f;N5~(v2Kl-*`2eKZ3T66C-m`h zA^8Ct?Zc5--vP#F_r-MI4csOqN9oe+YE;G>AJ?JAcW{gApohpFry}rhDv-bz@g(5u z$aJyi5nB?U`P=I765rz>vSoK(x1Sufw5#Zy`{&?wM?{NR7tXc)<{AYA6>3jq?f5Eq z3arHES9_uchk(uuF7lys3Uzwi%b%9qeuQIJP`*g$vbLS}{MPKe`&xaZ8@)v$77wh zC3@B-HHpgiBJ&JGFf1YS@7nvi#8$AMSOXG_AccMOVs6Kq*o)(t%Y99}?Q9bFK*T3z zo+F(KrVQA$>W?c9PI=pnE!;dMSPX5WUu%KhCHlxi=o|%-X%e=ymjNJpp@mKj6R3^> zYR$Wzm&je7Y`BWJpB>LjVnrur;ZOpBB^`c2o4*EaQU@Y`Eg9*^7O=9V!chUim6T5EkwE05Kb$CgnyL+Juw+Y? zTCTMOk5kp_BdZ`@sl_YoPRLh+mqX7wl1cHl0$k!hKy4E5Z(Lay)rofXIYvDN0LX&8 zoAb!q)xrDmv?1d;T0exmS=!)}y^HP&q)NIhThxGyk^WfH-){Klh11tY z!Lv<4xEm9&VV*S$6o$<)yMIJLukkr)_oyA?KR|ycbHwavZFrG`pDemy+d@ha=r zGW}Q2Gv|t#*}VY`Wi!XuC23RH_zO|GD7bz!rrjC_tDcT7Y%AD`6Y1~*a2IR;XedR- z%EdFCtP^sD9DvWk?i_B!))P3Yn}Rmm3hvSsP*~5*Y!mDNhW*jrTgjG**i4^c-PK9v z`f0U%B~RC*>Nt?yIUqQ7?`UfP&bAHyF+#O@xDk0nb}rZ|D}Sd1jxh<1swep-M)xWR z5T;pvxOxGYvK?6xb2w&{f589hO|z)~`cjR)FNx=R0CmzZKPz-rvjdiFl6R#xwtvy? 
zr>?b_B_q!*@lr$2x!)o`S%c}ae%Dh_!mAaqBd2lAYUS32c}K3Re6j`1>s!>@0X-cv z8oF-;_uWh%!6F8X$$s~O!1+o#Hhc?6GB+#$%*0Zw+7Lxj-8-m$~oqC zJ=^knsH5sQ+;0EUl!7&Z1N6im=cTYW0H)Z-H~VZ@=!Db`Kv~{lv&h5Ff-%WK^7oWW zh$eO}m}E|>_*v?qPX+RZo&cRhD|6<{qPkAPQ;}&B-|WOU*93|0(3fw4`)P?yB-0-N zd1&A13xU}KFZx9r@w>MEO@a9m_!MUO-LaKID(Bjd-4O?5aytN-!ZAvR;K$Gl za%dYn^(5SVeqyJ5qkUz=C01VbZmPytDSuEr6>w$5p?Ty4=yeBv?qFNOBLpd^$Uz{yyG-MAeO%1>2tbNNF%_3;Mtq%esw+uOz8d_ z#q!*GORnQW!=5ax_7BbDAO|hsl{KTF*ZeWc@{7g+n4quSdgpm|M@3{6_kw$}$~ODA z(oaz8myl=Q4Y=@VD~48`T)T4t*n)`}TnDs6Qt*49?a#LV!7n|yBD8h7xB2!n@$8li zeAk-EC1NGtX+Op0P%qnY_tTF$<7~_q|B-Jpk(DFNa!(L>j$gUdVb;Ls$iw##{8j-}3{L-t|jr{h)UuIO^mM=K1XOO)>{FM`-c<*Z)?7u{$MNCKXCR&PiNceX($g~|4j(1uW>gvA*KbfIK`K(lec%)MAf_B5HW37Y%cnJPZoyIq*ce$WA zF-1xmO+Y_SKkOobL_>JFF=*_h`c+(%h8o(?rsLE5QKFxZdT5mnDSDUD4SndR2%s|U zH`;PDeqetqk$5Kmqv+?#G^8ScZewTXe#?Q5G+N+hlMz$5d`y+V~ z?t*wt0#^b8O{DB4$If;>*dTQj?%<8ZNOpXQdbA^3`I6|XBsYOafMMa2hOh*`G)FcDjN&ESqaE?S=S3zLA3cw8+sri_ER=M% zGVl9zyNgd9bS}OUBsh-xvy;ej$WH)T`8b&95-$|xNregdMxlfUDYyy|$wkV%votDc z`$Br5Zx$8UF+~S?85Y+}Pc`=Y+206&I6=#T@V5- zQ%qbXa$fB#hxe9!KeBQawr*_=?-w_*Xy_qsNAdsj%5_nR2VI?x%w|$l;Q`h++_{W_ zKJ4qwbPm(?${_^em)anZs9&0l#2&%(`nfNnu=H?C!Y_EwBB_|^f{cXUraCdcND`Pp z4r-1V*+vq8#y^@G$nN+t5>1V8`z19=6nbhV-^X`vmGFDRZ#t82#4rw z4btR+PKhF|@z_80 z9#DQjmrRz%Rb&Q{^Ocga6;HDX!edps_Wy7qKSvDj5#7kHN_c}rz?V+doG!gT!dkVN_zc<5BSmAc-*vj!PfacOZlNZ<10kcI0)S6gx>A%tX*4 z+;(n_cn~Do7tnK`*YqWfbg`pA_7!IHHc*Pph3}~NOawhq(fWVj3{eEVA-goM35Zki z&W)~)UHr;%aFfppvhL{E_c|)f3dP1VUpLGc`WlXu?=q#x!y>6R%Xk@0^*D1LEqxQk ztR3-;`Uul86x&j%vk$xQZ_n4|M8wDB=lPW_1$I*JOqmx`ImXsEM{xB~sR}iC2?Piu zaxA&nNq|b{8n105UvK-qtjIfmn$C3fT{4cwskjr>lv2U}^$!5W65TalY-f8!UL z1T*7K-Aa{M(^(;ol#Rh~IrqlJ(1fT^P|e8t24m z!Aq1I?Fhkk?-CJTCE4NU3r+2?uwz3e^vEgSSqov@TJ&Y+uk>#iYO*i1n)NA_zm z4c-iUd{nYJdhPS*7Ax7 zSI@HOE~T1ltaC-Nd#I~kjzB_iv&cP`1x z?RRQ_24l9V4)$0%?4qJc0Yp$|@eTXJgWYrvb3(&%tJ#-|B3>7Xp=GL}=)OrsKdoF} zQHSUyN$5bbT5h#au{emwFI}T|cEGu%i6|)IHGp?nkzt%IYHI zPwygnaoVGY%qLzv-ra<_2Gew^%M#g+Bx||%(x{o2QSp3GF1vz2iK#%#o!fG|2TD~i z1tf&K{#y?(jzspfN5cJnwg{&l60K5IyelueU>*LBZw~uL)kkzJ)1ODr7V{v~HLFXan0h9T#JBPMoN%XS)NnKR5EJNofl=lLzgV`E9({o zwX~N$EBhocH0Rn_b5Vv+ipGp6;zp7&RI{IGpuF^}D$bL|j z@R4Kk!Y7GGI=PW=DkE@l>t$|oZ8~){$$G|WZpZ#+Oa5^Jgex{Sd|SH13r@r=7X7KK zoAsmS{y1Wks%vbSvI9$zhM$ZBQpumFepPOZrV7KPs4*ly|auY6b%) zb-*N%J1Hin+1Ehan`rttzsv}8_h=~&4ct^((`-AHXvruBF=2F4q)3o{5LQUMx@gh@ z4hQ+Gx3mh=2#lOL(|GUYVmKt}84}$61C#v-@gjBeu**N{kAXeuz}Y3ZTWJrod2!7U zK5Bn=l#-avCT?wMq%UcMmWLu`e_yIS&V)^BZBc{L55gvk#V_u^JmqHe=7T4kgb7M7 z0^Ubn?X~bMa$$2|Pz+PV;0mYvZ?i^csU#6<=a`Uqn+*$J#)o|&gftsm=j|k6jn*h& ze{lz`G?u9vRt{LiGEek`7s&Ep=lM##G;OkdaPI``??Rm3*#Mx)DwE~L0ood)r584i z@)WovFTo`};Rz;sCMd2%%@7LnpHxb{i)T9TF+lbAWtJ~o1w)@(+4YOORK}ZW_)d^A zh?`qX>Kx7i@!3`oy%Q_zDLOYXs%I}fnd|Ir!bgp{Yjz}gP4ze0cN&ZdW42-9f1UD+ zT53vun_jFPV41xg;M;BHHbK`3B#v!i@UNq$mnut7B6E}lCz|fTK3}rdDUcq-Ybqni7TN@sR8P@g@C$MIudp$mlE2Zfjbf=Xk zI6?HP0VN)FA?l5U4*@&4ifOk)bLQ3%{W*n~IMXt^@5ar( zI`8MBksyF&A{@RD#_C^23Em5-=_N_u;_smy?(Z^sC2{X8%*|JdsqeDC>0D|Xz>#;S zF0`MVr3^|!n?OJ!4?|z5t(&-|kjoX^3Lyk@y3{Z%O?t{%*L4cpqNYD;=Cna^LNK>V zMr|w42jj5kfzjGwGBoVq4#MM3s0vGGLkjtxXkv)tUL%d6qFWe`_OGBx0HkM5Uor5V%CGZVl z#noFMBFgEHLmuKC9{}?#<%UfM=YX?PxREZ>8rLlHco3QCyI8^GxiR8`BinW;%8Ga-&c377q zZ|9||pcOHs=B$~;wjJiIZ(v3R36s%1!H={3*V$xb_Q%o&+NPN-mSDkqdowDIzuO*5 z=Nc(hdGFj#-akh(tm`Od-gc(87}r}H<$0uNv2m71p@O5>B8=pVtL)RmnaYO#`Y=gB z?k%P}12G=jc#BeVA$sbF*R^ej5IOlnpX0y}hG#N)z2{@puZ2%fXcYz9_oFeTeYY@g zg%%4nWS}pFIuv}P{Ur9>$aZc?$sDfF|_O zBu&4M7f{8KKfxm0W2#*~yip6q)V2`R>KGXM8Lje3h%RO@rLA2XNA(Gr_BQf@-+LmK z8~u=B*YDjh8Lyda*?GL^z#sprNZ-2VHiC{xW&DjW_E3Ui&q>HtZ2waS(Oks#6o<}E 
zZ@egtTH&G7FKwi7Dp)j+yiKc&@LjAaGd5Tg%S$xL8=Fb4DVgDlA&ipwE|}p8LU%-! z^TD7-GX-#fxcA&x?Y*Zn}586u*5hk2=L)nUVJ+jA8gm1m}MAtJH z+`OUVYK6}N<~e(cdV@;YmFE3)9yRT z*?wa?a}vgbj-2Htj4CpMCh;eJu8IV6H7rz3D$qr;sQnQiX>eQK8TZXkUO><>REQp_ z7ltq58HJC;K1PtbDkz|HL6=z?ht81c^GZu%;NbpjX$dz0zvp{2eHw$vYFY8)=FDR> z?k*cDA~N}De%@x(H!r}L4$e2k!y>Tpk4#{oO07qh1Esgd4)W(iaV)Uyv?483!|M__ z?r4t&zGl}Yyt5bYs5e|f`E?xa7YWWk-?-UpWZt`=oX~DtTq=l}7X1`%XJrZ+ibQ~B z+3rgtDHD1##-c>gy6qD{`NJZ^D?rmpS9>&%nSm^k<(1)0;Z9-BR;->Zn7E^*XQSWy zoUj-0BGBl;XQ*BQQS7ob%8LeOhvFp4(ce5}(`}-&ak{DFbS|?og8p)>ke07rr(J>`PADO1~uYa3<-iRl}&kT;-;E+_pdg5r!@KO z+eqYnAi8ct99yyMIU+=jgr(>NN0^x7L@&#jP;*1|tjNiaaQi4GFbC_|E25%@f4`V8 z9$u3iV0rw+<31ozSSqv(!5h||Nq9L?W1~K!&<59~U6TdNpMWyyj&4 zDTCq8&m*pg?Pxjs+{X&b|s^*%RjvDP3e89OyYWnqi|-b@FA3PFM0+TMR4cx zT%AHDufN>t?=E&d{A3w*hfVGxt@L1fg(XDe$GX)pkHfjw-^mc3awV0O7qs{BC_l2J zO;DK$i?ZGp*q!WJ+qOv9X*-kLg}e^`lSmG)c`G+cIEi*+3d1oZ}Wge<`0N zEr%JMG$YyeJDMf*wPajRErmpNPEoWJI`=CTMJftWydJH6!nwieD{=Xb9_lfxpTV}u zuQJVNv4^grcj}DUI5yYio6kj$yBi(mTuh@`k;uiV!AepFNjf`qvSy1owovuf*MlGH zJOuGWB}nw=q>?^DodyFBz(P`)f=`o09`@5k9=rkIC4He6Cz2KndFcp*_;W%P%UM9HX@N=bzkRaX~A$?*B zdnwu~P5Q_!%sG4D1wGLcBX`+gt>gVdP&LSlbI!zfv5KTb3lf&;q}A;%6rLg*uJ?&| z7P#{B4&{unLnw$ARAM+M4N_gB=Ul>!Nf1qMy7TWuK;tdK?@h$`I#`46Gt;-ugTy%k z!ZNLbDEYaejJs&{$tOx^=NkGbr2-DhbdJ?2&KItGQq<LB3ep!KN$7M#_FR zRIVG}T|i^;wGrp4Yg5u#sf}DTW(9g9mGH+I2La}2L5KSLCN1@5I6{`G;KYtmA$#5{ z+{C`C0@-RV-@{_K{zDa%kG{tUK<0!uaFZEq+ZfSO>Poi z8&6g6bmV0V&g88whMJu4q>FbKPq{yW4cy{kgGP3yxZ|fyQFI(?j4o($-Hr^Y`nScH z%_Y^PQ(g_AN{HAZe~izfj({z_eN0@-*q9J?%omE`#R7e$QRz8Hb8DaPIwqxhUiIdy zwUaL2*a9V7Gv@Nf5e>VM?Y3p#E<#fepr4T^<|~9h6;k?dSFxN|(dXhhdUd^gD;$K= z(VO}Uyf*j_T6C|nTpH&Y*NKgtonW|*15zTR6!B|yT>iysqE~W$c&SShQ$5&>gyyc| zNBpu)`fv53=-Sc#y*uCHrzM*c0_yZ(`NvbzC4ogj`P@}Ac|7fXg-=gWYmarI+VvNwA-5E8$Cg#b|BK zV1O_ht36#!X?c(ddQzOAPfTHgUtR9*C8@m>{shEq&I{n*eA)^_7L>#E&ua&rma}>b zFZv5&RDZJu6vJq!FCdm<)jvk!E7fKl5PQ)Vw42gZx7;K5)BYj%;@L=r5*GrUvG}xx7)Ck(*IP!GRx2nQG!Wh>=QY zPwm4d>?&(ISOX=n@o)7BlC2?ApT#HmCK~yPc$Svl-zxtjy5Xi^n?bU9f0prDEkTg( z1&G4vG@RC~Jz9OCZ`gg3_vq!caAG)>mJ72>m(Mtg+*Y0PX#qp(AWGbEkIY>wzHN5M z1n+0o1OwI0e+M6gkOPSq_)pwpL~u0_4RzP)nT4_RFzl^{xYak~^y$O)w$Ox@DoJJf zH+3U||5nX>M}n(gsv@*Sx-)W%gt)=o zo;x}T_lr*BC-JJazjNq%5TCkbuL9;y4Uge&`WlcaACDkC$J*Iu#`u3GmDIio@Btyr z$b*&mc9LMjO=whPRKUjbvH~NH6+`kV5)Kyo$7nGP@ zGPkZrnW&JaZX-0kYak!NqH*PmtrG7>Ff44&R8VKN<`&@c)HMU*o&o&<19LVqOUHlQkXnzBd5r_}~xh zw0&S~nZ)R8uisOfaMb_d1!L@niWkagK`U8QglvRIpubShl$|$m}kigU+(pX`~1CM6IqMO z-v6tA__@I4;Z5j0Rq;^@FCsbmG_uG|cYbCC<@hw7^%j71u(|-iMT-ELgK4K4KnUTE zzB4$B{0)a1ou`ib7Tf{^77a^HKRATf0YmV#%L0%z*v|sEtKV=||3-F(hv0nwl9unx zIOJU4o~uPGk#Qe);C<$@?OL4MKTYfZ{=dG}K(H`JNX~JL z*hdC-QNKfp9jr3%mv_10G>ci~&e#92 z7_|WvNgAK{uo--N2$zOUgiD{a98_4Wfv)W}3_+Lhy}S*aBVl3{BlE$HO72}rI3_lv zP;iHF6gJGdmhN}A*N>MBo zv&W;cT>wqxLUPvi3$^z(MmXg$;rwQ{{YA2bxZ&EvZ=NV|%6)I(%j@JjAK^@cYd{cv zLfgzI>}xPD;UuMhBnH4r*wzpDp~gf!KZ&pT8(l#5!YZ5Lw052Tit(#LxQb+_nZFiz z9FOvaB+Bcvxl^tH$cWV`F^#wHiu>F@8ZT{!_mO(T_IbnqE9BuH?pqZbHJk10X<$*_ z2v0PLWbZM$2fq8&Hef~1ilq2qK89y?p2Yv*j5^01+~D^|oZJS^x{<2sh|RHs2c@K$ z#_|;qBt}ypRqu48bJOqxfcoX%MY*v|Qb@U}lo#-5k>h(Q2yQ%RAo;w_7{Aw z>f{ySTEH|OApB(%PHg=R&nQ4*2oel`{*K8THbkEybNs`$Vj&%n2biBY+MkxNit9_< z0P+e)*Q5Hm*md>R1$i`SSns;q@i{!#p$WvR2bY7H^d)NF_yK`7j4o4etTBYo|1;W$ z*$!od+~5Zymst~=6#>5iQ5{@EV(uTqHERWIc#F z>{Nsb0>@2Tr<1Ftl^x#mMtFPbR{$7B?@E_Ljvy0`FRKO8D^kKKwkqi!{NRGS(DDoP zUyCT;z5sYEdq82>Wa7&ql`n4mEM%kiM*MgQmvB$dr;nOUH7AFv@k%1JdymX}d5cUF zof#OuajYJa_gA_QjAp7DYRy-^^DcVr&~f`4=_-ue&(oQx?to!D>U~SmHZ9STflk>2 zfUqk(9S+yXxhLmS|EH{)7ED z0ULN#M`xLPSCT6&gh3m!K#En8R5&A$@2)3KZ7@5dPA+#7CU 
z#t|R*5UNff=`0lMOz!75-0PU-8JgWcFOIGy?*cz!z$q`g!xFSvb(3SO8FiYblmxhj znQIklFK2b**kE~Hzk8o{e*N-?>cILy z$^$|6Vm9}k2aw+rdgvM3J^rh0GxA-im!U}nZPiRy5+$0t{ct$u>hb`tcA_$?%Q)v* zK)&rI0Qty;^9@hC!#@uI2>sSJtkU1@ntWbD|E`Aq#hcV3!1|tyJ^gGo%F-riR88js zNSG;P9*uUxbv6scCO8XlxE>zaL=XYJEK7M;KV&dLgoY7hg*(os zMwmT;0k&T{cvmt+8E}d%VzM(jLUv$Lg}N$GE|o|C#;pgud0R=Pg@YJuKEiQ%#})POS~nl z1ApU?TJ(}OUt<)Ro;R%?e3U5gWKLnTk^6G4c+>@@B(&5xi>K z1-x7*Mq;Q)FTP{Rbp*%w1-ed*r8V1otSzm0j9sq?a1!mw+?8hJQd?`eYb)!LJ%l&v zYaDj4gw5~DPT+%UUJ2aRNz55ou36OQ%~L9ao;J0>dDN3RzW2!7UULPY1y2sW|0r^+ z-e76jA1}UQbeOa{F=S@U57vT3Xmv8EmwgG(!QK}5Zi@GGFZbf_jr!X|E;Bwj$!u}u zvcjYM6=Dcdzo-DXrN&p5*t{U7(>wh-lh-Fk!CazeE^_| zdbgV_LU?C4c_RQi(%_Aw9KHfcQC|)^+Flh8-l~#N!D%mae03R2;uI1q2pbedjCC1; z9-5gT)2cq?kAE{7j_z$4Y>C{&6__+P>53S|FtCYula%J$%2U1ZX>-2S&awQY0m`$? zLU=vaF#~Qw3|(fNKU+45M5g_u&EGxeVlKz7fPKKsafUq%>bJ){RNY}K`FC^n@8Z$O zQFL8Gk}~zXdnmKu#&05#^*#`D?U9H&^W7}nY#kPK?giVbW>e3b{SAQ|m4-kDvQ^5r zLOwW*&Y~Ry*2NV;9?bp8$KXT9x;eJ896W-s8LfSA|nq zQk_Lvr_@%IjyE_470}OQUoyuixqj(!jw-H7ayk32 zm`R@(&uiSvKE&J%m1PGd91PX^75ybS*#zkYrPv-ct+~8=PyULd0U|?xbp)-xnn!&&8N0FzljLIJhY( zi$3gd)605uJ=Hx)Cun}_;L*CYz9H7LE}z<9H`A!km|vCxBKh>QVVzodkF-bZhq(48 zeNrRJZpO`aA_tR!Zy&Z+$l4WAn}PvvJ=mqpmpOxs+B#uC#&X*WMPScF-~2Iu{l1{2 ziRvUtY`o&?@P~>MzoRaG9y{A*Grr&{j-fm>ZcW%Zs;=QgN1?g#vYQhz!{sl$xn$}@ zCi7gUAoolB&n9{$;MoHAn`uIsFvR3Qxdd@MtjJ+3FP@`EmGA70>-j%JlW;eB#=E7ZdTJDTz@7ln;EOG6l_73VIn!PY0=&8;W%|qK>WJH+u zPSa;Leaype(*JP){-XN$Rtd3Q(rkS-Q=+hGad;qyufPZ$VTD7zC4A2e4HB*2A`rOr_jwRg1@7aol{<7Q5x|d=}Tf z4EAP1*tY&>y5uNL@n*Wc9m%fd;oj{Ryy`zb_UQESsBq1&P0V-r;^Ny2cCy~PB~aCQ zxY3rtig6P6MLh;>tnB(c9iKc^v<@jc)Vz{^u? zEmhh1>r7zK;qmv<@b8S!A5qL#yKVnB$P|sTYUldr`=q6dwqk6GXTZ;i9Kf4#4Pe$w z|AoSN(m#|lJ-ZbsB$}c83)kLDOREh6JB-SFDJt`!pxN5YnInK?y#fplejh7X@=tAh z0Igw%5Ae^bcR3sqd(xU4%U^O2)$FJlZiRDUSNtCa{M`VZatUpzuB zRN{1 z$NbEqz=QrpWAmte3y_2Au|iFD7OpzX@cbAkx0F5LHwv{Aj@&No8YJIsbzh@#AA@)O zt1R2wn0I;`uM&5gjGKLcE_zY_G!P%X(3_UG#zAnK5_1pEV0H6q>Sx@x8f5J%^B$E2|cf4bdqcHou?%bWu^ zQJO**S{eZc+((BipDVp*7RO(EGsjLggZV`uJS+d2Cti^usEMxlK)uy}Cv{!KIQr)5 zNWnRus@m@6j+Ehl9smz_khp!htJrI`%P~@+5gWm5@=stc^<8BnWfUq`I8}YM6!W(?>MYC%HSgB^VduSW*(@d z-b1P3rdlThy68g{hq@TEVI9dGa5tYYk=)z@a}tB8>|^~y6a1)kH2oQIuYmT8^dmdV zQfqa?2|&`_SW*E)!+Zkd@|Ck~C-c?0p}p@Wfp~BKP7}%_z%TZdJc@NlbuOtPMh?sCD+%Smc=#5WT;TYk%7tcHjhf+=k`#aaJ|u zM~fMCb6S6Bn4#eMGz@a$vvzNAM0^_jF8wa=Xwx;;H9gnl7XIWEq#1&gv$p$dY%h)zZCPWXgAOCvQxUgjq z-EF~34KiHWTfnGeZxxWxYX*S3+_LVMEmPW5xq(yh{BwTPM+kZLAz&;}sHc;7))PzX z1TM$XINi0dUI&_k_Wc;dO(2Ta?oCc0zCS`v2OF`krGEb<+*bL@0dNF@G*O-GwV*R- zhBrOOOq$Jd^*W(F6L+b*nOOWW2DebnFL3f4y#WxlIp;RlJmyX}e!hzJ6x=?Qar_BU z^i1s;;{vk~<%gP{nsskXb-(rF_Mxwmf8e-vmUqs*pS9oNi`9EmVEA0T2W)K~z!=(x zzKk&jPwX;K?E-jsI&_u~En{gDCfy;Nnbl0j0oeyBfHg+Pku8dULwFxSOW@vXCIzfcpNEpA2;`#H zQaUd3b%eDTIC&#)v*B*W72qzL{vl%hH31bN5(#3dZ3ETji~$rz!RPr)yI8S&oyQ|aH;l33Pm3GbSjOWDZe$A z2Vy*DC$ZfC&VTOvg)r!I!c8)3ORPY-b&e=APoMZj;I#q*YJ>N8^4(Vr^?Ky zi-B+ZD_BFhABHd`+w7+nb^4G!5(5^sLrLu_^^Rwh?sCpmIh)eIDyjK96rlDfHw@L~Kn}+CNo6PuRs|=* zi;oNt7}Nrs&3eh|KD8*LaM#UG2~M|vvj9r#`(U5NaH-l1+4VSX$lj`O?7rGF)LV7clue9k-^vj&{Omo~}a zr|=ZpQIUSMZVN&+vLy9c9o}vLX&!{wK}z6r)NGR zrevJhneDj>*pomOuH6LO#qjy8L^oJl?ipp;OvogTF(We_S5hY^c9Yr-H(lH)#ZgSy zmKm{&GekwmcR*lOwO8j0P?1%2Yu++P2Ar}bW1Owa?iCP(q(#IEuK=&!35M{^dI^ns z61*_eWKCjpS2M^%!&vNtXEW=N?+8@=ddqS-ZIDDSjF$p`u|8M-8Q{B$E`-(MC4DBU z*zatQGro-F-6Coe^NQZ}s=#{_hNce`0TpN65r`88j!=25knbENnXvPOINv-!k$br3 zw&X$b$KkNx!)fn3jGm@Aer!SgO80Wx8R_9GT{V2h!o|RvY5L+qI!zrD#XiF|$>#^x zl62^Kb8<*DM0>2dC!81fdQQL#Y! 
zSnsZ119`^$+1dI1r*xk6Zm>M3IFpMMAdz zo4v!Z6P0gEp69~psH&7)Zu>KHdmPbOag`O(>=D`rr+WWPObIH){5XW+&*~o z^SKhiLX4n03Tp}{V6bKgxCpO4cjfn~3^F7CV+Ko=+>LX;08&Fe-ngJBClTI@j#f~M zVHhi|vYc#%)ezB_H=K?6V9LvHVMR@U7o#$mC6IqWakiy&Zs-(d-Jvjqbuz2s8A;F6 zKC3}e`-Rd*hnR%C@l6DZ%M-{Qf>eR9NhxWg3GMe1u4iGP2SLYv5JxvI(=|9~Q0?FF}aSLcz)bl&y9lCZ)LCD(Ag6{tYz1(%cv zFpELI;UGB9Qd|Vn3uZ5=B7p4o0Ry$KyY7YZVp2w1FaGmnqv(Z}j*Rc(njX^dQYweu?Zn0vu}nHj2g4BqH>kHRiM&9pC#VO z9Z%-=yQ8tEdmAqs-*9YAjc4v$)nqFG535!Xoq;#=$MwjP3tP-f!|%?_UYbwk zV?E2mG-+al>fdNYnPyd&v({fB?>}J{*iJPtVc-6%)v4OS8l>&etr(omG^*VQ{)Sa_ zs7XViNW6{;wJ1XsnhMA90U}#QJ}IHNgY6?z;zd)dAC6`lJKT5c1}WZ3LnCNQDDC9v zvmM4I?jYJJlxx4QZD{4!^M8gRQ?~=J`2wtRlvEhN;Gn}dYf<7PyUX@KwV z;RfE@C-{!#U~hRdPoa{2XfAg0N`YJJxMXBTG>oY}kAzVa9NKu)zt;*|rN@=C|BIbS zv!QJ0QR%Z%DqM2%_{H9_P?pSH?6#`sJ~ce52F2jO`E$xnkjo>!^{FUEU&tA+iN(q$$n2OzN(<@hy6aq2)xfR z<73}h^k5ob%X_xaIP}W&cjxFp&e_?x4&E?01yw|uz?=aC+6PZVh(P4*fxZ)g4*2nl zRSSJB8x+PxkO$95!sM{b-xQ|T^fg&ptIjf)cFVV2^3prO3W@rrd9A~i>K(H|n6hr;8hF{lgkEe=GWr!Un7}8&yFr_gXy2} z#R1gO?Y}r9iPqHW~(zVxyJZXzC+tu5n;ijOtA(@cjc$S(90c_r?UAbV0&mt zPgjNzKb8%%6wQ#L>t~klSN=P#slsRuE|N>xff@I^sZ}VevIHGydFL5sMHaT-9+?4| z@1*9$;~q`Dq$Tw;!B`=cEUbPCCW6P#aizB{!}`ZHa(tJ<(3&G2BSh z27C4(+q0ee@NR=TEBH&KEwcd0E1qqLiMjI6ESJmFH<7H8)MeR|5URpul9EkQw8)`| z@4pB(S3I^fSEmWD10)aP1fuY>wOX^kVfd35 zGfm{nC=8X!@tA}M8$%5fUKA)aD7#}gFRMDE_fY)nakM*_^PwF1nu6Pw@p}0Td5%M>K2a&i}8}XQKn^szs8j%kxCO@3_x< zdZZqI%f6uOTNUejh5eQf`M9o+gE9k;3@r16LJ8Mo(D{&|vLJ}`Q00gG8Z|`unS|7r zwPSad`O|`sTt`aFEXKUq_%$KU>1n=E?Q9K9ZT$=KkjFzG{&fXQcThfBN7?&RG~)~- z+4wNhVayTKECg4t*9seU=J;YH6M>Zn71-}pm*-e<{Bd8AS#{LaXh)o#6>C+kO^nT?LoZ`El zEzC4f6qH>QLaXl;sY#eRZbR)B z_3N+F4N@W{APrJVsUY1gF*L$}bW6@4A&ns2NP~2DmxLeAGf&aa!!cEfu?^)~K~wHhI~lItOcTwm7pZtMHZ&gXD3tk`wdC>fB^rsSzAW#~NLB zNJ1tont|8q^lCm!r~1@ZY&DlSu(3SR7m55%m9s>jzi0LIp7>5a+KC^QXM(cRypkmdrONL)_U0Y@biH~ed`gx`0vI#N$#aRmpoNwh` z8FDOhWX0#hxD*TG^0kPfFgz+>5Y1vxU8OO6hocePSb&x$8NDudBVLv2I8u2^VrBji zyqZln+W7e}pQiGydOJQRMz^3?eCO_`=UT7jaeBD9GX&mPA5F(w4hFgQ8yXJAP-Ax| ze*g(P9l!WolDkZ-eE_?eIwsxP^HZ0l z!}_eXBnBGAS1S0CF-eC13szw&a~4)7oNK4|1pXI`hhNCXTG*z_T=)V$shFZ9>t5x(O|1oh@YpG z#I2`1B%i@Oijfhwq;fFZ<`EmKGRnY=i9fDTZL&Xlb}FnoNF#jZN=URA;2cW9t7bex zTNJqQnWHFHNP%7AvY2M;GP^yc+PI}cCi!LD!_MO|wtOAAe9tU=HNW*HMr@qo_l22m zu}T}%gPjd43M*t>bfA{XvLr&4B=LFH;^LEmC-~g{4^}9ZV=(~P^^cbc{bZf|XfV3x zh*Rxe=UXB;h&tCKc=@k`^4N0(X5V-4qtEI&X?q+4VaLyw8DdD_So4z)hy^RmlAP8( zMo;xd-+E+6b&ThyvJ}KV&uNb3$^}u!46lV|$y;W=yO4u{reb|7fv~}j@ne5YUsWFS|aSmsJ!1#OY!&&rWT?SKn7)O}ICt}D{ zERq2iCFvpd8DLUmjf^$yHlS2pJnrLN zxTEJfPtUs4va7Gt-gf*Q%m3Clps1e!QPY*+H^D_L3@_y6Im%fIkW+v$({y%8>bA(#cQnOd6tPUGOxeJyyHlVVtYT^ZfoAeh+>6X zTXLhjj~9)u`%r>|Ru89bkBaO(-iOE=cOSCTpFZ^J!87=nj&P3XZK3@*3u-c5`laL> zDo@dfGB>4@T>W95!)vt*rMO%zkdQj26iaw-8f5l5i5N+fYT zD|m!H5tCV|>beBfyb0lIEeJBz2|wcfn9`kDuTF2OZZaf#5p1n>w|Pbm7Ml8Uj+!@q zoN@3s^l4@v^LOM;AjX=6BC1SV@x3%+>-Y2;L|<=Sw{GR^UCBYS3hGCchbNe7Px=xW z3p2VcZ+ipMMItLKnz8JjWXL_f?F~lrZx`TvV`?H0bMyaiY1*<`=V~_uJZnN)>EdKj zff-0YaSS?zYSA2}^o^Kv?pW4851nkjvoNVhw@eP2ehtmr0PM4YeEsvPm#SXrQdQiE zq|l^`v6uEnvs-1HppI|$DKbK&(8broJN8;aKL_(O>b#YaoZP2GGfyzrA1#%rDJKmm z1%85n;;~hTRarG|BS1O=p!ixzCu4aY#it& zX%^JE39u5)gkp9>gGv(5Tj+isd1Ss*z6cx}2B^h6d0eZcL6Q{pHyFGY^`Y{x>t7mQX!Ww{ zdB6UxeY-@c;l!=fEorQz6MOoNlu-g++7MxI3xV!?OLMfPVlUG@<|&GLWrl~FS^4KM znTWdbqqW+vr+}?iHFOtRgwuI=4H#j~0?EmxqBEy#d<7sc(>etKd^HQ&kog|I9p6g- zk~k8ijLAh}CmkLV^ML=!lr1Q16kcXjCW8yEn_UwdVkt|tZKY) zCo)+fEpwN0J1a73X*#bdGw>0~IKJHy_2~+*u>=^5`_l#h(zSG)5iJPhu9}>Iob^J_ z^N$s}Ts}x3XTbE0JrdAxfhIIgn)cF}K*900zBN~xdnFgA#rDcBjjto-WhCmscu=k9?nuzSk9Z)5lo*gL* zhkV&U5|r0XH^#>V_NaZ;7Lh87_E#T;I5P%YH4-ua@vDjR>U0YhRmG1N9o9$yqNyr% 
zgJVGo$f&^VJ<{EVekmgj)s%a~tPOT_UFSVkaExX+$M2JH8vtLpSKADyh-@RF&!@%1td6T!(n_WH(z;Z%_m_Rs*L7;VlgQq=8`iL@~Ol!yspdWp2_ojO`Z=IAew zu%PBxGXT*UEduPat)fbQo=ZR_tNZy{6jl~82^}}Ws_YoLpFG`cegQy(-lSXvV-5*T z^SNKho0=APqZ9F50DJ+ZV_~@1wRuOb!+uD-P%HzGTim4q;-YILS2l$RlTq#-}i>TA2^R}%ye8g zk(5Iq-M3IWe66AE0;d!+ZwI|%5jAyn;aIe+Dm68k8cd_#OsN>Nb66-w)$lq;8e~nJ zR~3^oTz{=UKe+)Ao}6#|KKzytT0U%2cQ*lz`1tleI@?HBtYka^y}fy&Ov4B(|+XrNx-d_NCbJdquNZi(XA&-z#yjW<=#7`1ddtF~HyQtSTYSCQ- zsf)j9cea_o1fb$MB(EwB3U(VSb}4zu$)xs!_Xs3-%i@X505>7&(q{sJ{km(0k%zDZ zsIRxQE)IYtOBrwSy(zqh zjw8RiLH9VqEq3NCN*wcX53|Q9=<&vjH4Z!Lmag=MKShF!ExA9`S$3e@WA@2!&#Ek5 zcb7zz-9tw$jvnoEdR3+Z6}-C4C*?tJigi!yoMKWqo+zE&Be?*TB)^dqU)LtUNLqM> zk<)fo6=oWhyq+k5msUZ0(0Z4qjHQe2 zGavjDzJbB0SjHOFRzVtcB8F#CV*eON!Ld>LPOt6w1>V8o+*BJF&C&x?(SM0Bk2_E$ zWsBv4pTHe>VHua5X@5Z0S=0QP)MCe?*pQXsPF2rhvXJh52XD3_h#O@gS}o>nR-{|Yj)CSkW6rk(p1=Hc z%bb_0-cz|wsJYgrXElxHKoxgVP`I{(kJs8%)RwZmmuIKm+)aKi$X9dgg)6lc-^|n< zt+5(9;(u+N$}2i!Sm?Y8(ST;RQLaq~(l++-9ScktuJ{SO#{3TdJqr;0VfIojRI9Ik z{!BEnQTJH{Xz<#M2|a9=4d*)obs~3Td(-yXh zdQLZgbJv&tnKS`QY#W}SGNIvNbJGEBqFJNecOdB<%LT`0)@B840vhm;cC?^Jr1o07sB00*X{%` z=4CmJ{&93d@tsR^{6vl8hi0Gq3m+p47Yh>sS8m0eA+jq&oE>7oYe5XPci{s(Y1~7xC#j->{ z$lQ*X7DHpf-qrcpp`8`Dp#9v|6gthV!&h^&!&(^&46dEL8iQYa&Z{z%vecG1CBBF` z9l!rVIrZU+Jcw4r2*7#aEz8HuDoDO8Gqjo~;|f0Afh@*|OOQk5>$sGwdeC~1&_s%g zu74&?iK!m6Z2VFzM`6zR$TGyCbpJaS6u^(knYPKkJS7+};ES)558)hfFGCYzi0TevP+x}c3K);60hq#sZWXuhcd02 z?P-(c1H_i^w6~y`8+UB`)X<~8nl^iN18{|47saru2jFf;<7FeMNh)c(7_F?(y1`zH z^`k3Gp*^=>=VJ$sVR^4iJp^c39oycuQFBkW3Is@N3KaTZpa0%s4oO{O>czF)1n4GC zZUA|t7P}aZiyK~K_Nn$u`SD|B37%wzWVn=&Ds9exzjYIsV=K$n>TM zm^2Rz;4e({sms-akDZWIq2qU3*S20PkYfgdfife}%jtW0j9H8$`>}*@Is^FQFuw3> zU`f+Hl&Mv-+xI{81dN}{zpMDMUNHpw9IKqlT#vPa_4Mk{*0t&yHSq$2uIcQI@-Wr8 zs+HVVc9;*e{5j+Ya5u+t?Sk7PJCM2A@2VZXy9^@k6trZF#1~BGrb%UYEp6CJJ^=#Z z^ggf`p8>f~&4qd($JNS))#-F%H(KFCn(d1LiL+IOB{J4r#w5OcM~8zsr7y@QOmdMrFZqLN&};pD(CZGPG$PtUTt!myn98% zml#Wm*QPR$dn!yT{`HFv%8MT}IIbVml<~6A-6HZBI+xu39H`0;MdY zdjNUlqlhpowYw2ZhXix`LusN+z^eBpu;Bz&OPx68Bg)G+H$=k{iQ_!KXRw|KOB`)~ zz`{k~*Z=G*_CoQ;@BWm7LP3uriyKf{A}+A*GxW308A~_b`LX+G39$L=e?;i@7%7G7 zdTs#!ZQ_NbA=A+XP|nPTnHlr^s*>;MY(bDnpwgmbZwQiL4*V5O6Z(NeY$cCjJulTS zRy0?wkyDS(<}vEtOHjbF!j(H<)~+#_!27Iio?c{z&0<64NDN(fBw!ZGEW!!(_X7*b z(0G`UHm)1Mu=@8=#w}m3rukxT29)RlFq|t^Mt>++Hf;(WSv}_#?_(JT>qBzr`YTyB z6t7N10xa#&Yrw5ICAP_jYUm`md$1J^xKFhVN*1^P_U-(s58Tfeq-QO+AXK`m%`CfO z@~Gn?0qYH&OMuTvfSlhFU_O2%cSFP_rQNC&s>3Ed(_5DIqcdod$cJyzJmUtX7AlQ+ zx}J$+@LRlQxIh3k=tC8pVnw#CU@v$Cx90q@e8S<%(i>jTr^5)5LgAq+@|*EfT3at- zs?GDaHI3-4R}3;mN~EPY#Ow5o$Ia6}83#xoA@t*fmQ$H#wVv&a!)wU6=X zW!pH+B5x^Z+;-D#yjeIfoU3po{^ zVNXx-wdM6r{yJx{;^HJuW^>uo1FztM`<@VT@(^rT)HvF(LZS{gvf@`RPb3$SE zc}nsJ0sad50aJxTZJF+(wn+?@J|v&d_2Ezpe~uA=Dq7Omy|(Fe!q{GpjBPeA3QPuxQJ|Z!f?MV+ThWvhs=7C>P)?u4%?5G#nentG~Oy46BQ5 z5e_v{3wQHiZXNTTz5xG9X^w1Ei6f-R@!2wLD z&pL1~PxM0@5H?=pR8V4aJi4Fh6RB;Kfh=orFin7^L1 zxv!St4osF{q7dJ?7cc-XfLrjU_mITPjNZ2U=VOM&HSFbQhDO=}yK%gS zWw6)#$hFIL^DvTq2Mkw zS_jdd!;(L#G>;G*Xsx?ezN$VrWFzaj#?RYpx`Ql}?89TPwNQ_Q@dLala@gDG14^vC zB+fE%ODF9?W7Pv?wvosOloXx^v24UQaajToGtoOA6FgL@eT0jSd#Z_~dc07uW!`H!iDt^62%v zfu4R?wB{Oz9xhXJ_7{@R&Z|>_9TGRJovWcQ3fTv~u@uK?6S**hAGvD#bVfQzu^5eN z!S3=l$`&!W&h!Kk+G@DgKE!*POQ`wum@3z}Ycv9gcXMPtny7Pe^bY%eGGRZfy)1*B zSmnKYq?aDKW{ILJQx4|!5Fbn=@bajxaonskd2qzv!SlR(%J3BuS{I!IxD*(7F5gb! 
z2xyyeuoSUqHF)K7oB@ev1W_p`tzEjF0Qj9&+=NT4T$G&P2K6Ngqx0$_0qqg4LAC*# z{n3ZH4)%2#C!FR98Hhl`3->s=hu`P>S0fp5PM& zxqS9kW!htQ?DLw=vY8Sb$XS1WFnN`wICcI#U8Su+!IxWj(S~KZ;DDqe9`V`2W8WZN2hF>$`FHhMZCf%K1tX7)Kxi5ZNUGixuO=CI zthF&NPQ*Fa637W%sR`tU6jLU~g_Oay&U3XE3Kaou6yp7!P?2EmB$tAfb$bpG+ge3M zEuY>RIv+aPxc)3TQSNp7mU-h}`L#-{#2ciZnXOtP;`Zy>vsU*|2xPqK`cYd9NqY=G z;REZLu466ioZ(uXk1wRn#pE~2m~Z36pPXn}bn1K=f+;d6=&D*yxc1IE)$kfmB~_Fc zBtiHGBM$le4cJT|=gdmxu=gHR0#$nE(^0pxw8oq*Npx2)BrHBo9Fy{eo2Sb6qy`i| znBGAAx&u%edzNK?M6rIppp&D-P1nM55NMR~d1-;JE#n5>ji!u0#!gR#kj7UL!+nb5+jzjmj{|TX>*Te8_ZL zvXTC960`<5K|CQgQ3!g+*;dRxZ_Gy^m9FFFr=yVe(UJ_*qT5|b^t!jyynYs0d^9jQ(>{Qq z=Z=3CS>cUmA5We$&xt2_4T=A25Ef1pJozn>*iC^gTLru5>zn4)1gMtNlFGa^wDsqrNt$>J>=>8tB^`Tw+b!cc}dt&YMsjWaPwQs$RFwwmNEG5Rx%vJ9NtyE z+Z1Zyt+k#HroOtx^B3v7x1qtSU&e8#KmlQaK2aZ=p~aj(ax^VWE7uYQs_Tov+i!L# zoO-eu5lx&ZY8QcUpPjarphCwq@aQ8a2$r9lPX4KBIbmJx69knHQNHlwLC%I9i|oaM ze7GQmIP4Tl%lb!x8$3}&&I=YS!n#g=8B<7aFggtj9M3bjo(?Mxz*Lw1!G;4_qem4e zdTmqmD}%jCetgJRCyxQ?~ag-SBU6RgsCyQcLospyR4E_dQ#uwwk>hFKizgDw(0gZz>!F2pAqgeP}#6^hBFZR}ofojdM)D_~L z(6%`cnA^hYcp(oJR=$0pcou~(?Qh+M9$BN0hySDrtczg$7{GP|husblC*XHRztkSI z3=+oh#w;Om9&F+;d2%A4l8WBiKFj+|KD%T5X>NAy`p@G9-V+@Rv$SW7RYM}mGeJ=^ z&xw+ocNczNQcwAEOsgAKp;8JGd=(~^G!|N;7}4?I^z-7#pi>+m_XBu}97>)h6^0}+ zyzy32!ad}Jd*&vic?|A*EQ5`H+8kWg2u`Q6eQ8K1v3*W+ANqR5ap41*yTiO<9avsz z%KTe&HAG;gB&rv)Wdg;i?)@ocr|u%zb}>zg=$_+~!XBv_(mPQ6pau2Hyeg`n?1i=1y_i5?SnM)0YT z*?Q(p-b^k1YB|qWo;9*-aEq$&_a+u^ft<=Nfw_zom~>&n`)OFI-yRRLJUQN)9Rou$ zC2Yj+kcR%Qlwm!P`nQkOXBt;4X;*98Y4#S>&o6vsVk3FoC_7$xH6OmRNju4J3a@*c zjA>*drm5y?Q-#V-T^#zlq|>7h&K>r`fvg>gk-f^>rQ``tcCEn@CDEt0FJ7vfN;@my z;@D1{3!K9D8(WpBz!bfmTUw{gIL}U2V){WSNu4};;BLObuz2xVXFs!=5Y~gS7N27~ zI;(VtwPSwNhfbZA1Ny6MrYoT|5|n zXR5m1#PTpkMZ8E%(&90=%m;cU90%?!UanIC>ChY_Xs7htvPy=uaiz*U>SQJ4No(d% zVwy#g)%doTTV^@lhUS~Z{CS4rkI#IVXpO%Um^#u2Z0qS-jgK}q3Gn2r2w+*SDrOWm zpCuunolUh|!6M0Erd4BuY#SC0_Q=>8@2W4HyrY!jw1ei=j*ok2IxJl)Gx|KzO{th- zrD+T#)C>=>X8Dx>lSSUjlB%!qwGr*Yhjv%pkbsb_b2i!mJ<4yXmOkhA73q{gERL}n zrcskJWAx8$=jga3j=n^c*fyJ9z$+T*I8i+YmFnv_bZW##SJAu!yu#OY!cvB~=TSn6b^6kGA@B zgGCnWA7gCBD2h5@q$uz(IYrl=%kO*V6{;B3ajyWwDOajrwC6Wtwau%`B~m3Y&d8tM z89~o%9kzeZ;3h&jyv>EyJa*+rtGvOxZ`{1Mw&!0mDG4jUVSyV@ z)T?DN(={?t6s0!PXM!ItWo4Evs@z@}-hIyh01~%pV7}-0a+1?z#4+kY47okjKfu|&)~_S+9<+b_Sd{v^7>&6wPg=%N*FUJlr@C@v`> z#7H)u4zOAi7@A_6UK*T!PooP0cfWmbU^*cA#pMD+7MjZtf9ipcJt%<2bxh?ia|t?? 
zzYJ*%v+y2}oJ&|tT+tGU5;k3?VW>L%@S8B2aBPBeiW?i3_t$EX4Yie@4t=%TSqS(Q zs_(Uvuc+U*yY|Xex(yO`#}9q0#v6Rbl&2tY-Xw;*lu)%Cc9?|G)s~D17S%O>eH|ir z>x1*HaAwTg4P)XnQmDky(JO+_xgZ|~YECuPv_|W~gJ1S)nL10j+KE-EUGI{T6G6`i zcz8B*EC;ZftMS0XlJN}a_c|4(6kgSDwz20yc{w<0q4e{FORXs(Z%~VV$FMwC^N}c6 z$1sAfzqxfX3V6T!f{s!?oaZ^rnmy-vr&|d$8!|)o$dwJYM`&+3onL*{d`7;JyL$?3d}Ikv3+d4X|U`aedJ(( z5mWdb`X&V1IvdY2t@P_=+U5hn&yT|ojQT3|{+6w%I`^W3^?bULM-Tsw^N z)|4_pg6XTU@$sW1owx2u?cc)nz~5A4IJqv$%Qrf5u){cq>g(>T+K-uO*7%YFc1h0o z7VU#}Lh2-#cnQXdo+f^8J~!9FmSExj5wkE*yeI7~T3?$h<8kbVqMrQT<|dx+BFTAZUX(R$(SNkvo{+tJr*Tm9P30p>UMvryM4!XI@s26Uo8 zq6hEdpBDNYQw_T5ik$b-q47V@ieE8}0jbsJ!PTWLajhQSY@?nmVbgOm& zts42^``O_BBDjgbT|Xs9wPyY>Or~%sYvD9TobK1+BwBuX(M7QkPw)4gV$Ew$a7iSI zbdtvF*v!fapOkff|MA#-(5Guloplg}99QN0AB$jST#X-Jy`Cvu6S|I=wVV&P#i8_4 zZrjN%y}erD8eRwvu}XZhbf_T%vYIqO1n`K7QkdP^-8{Q{Q*_T+SoA~3Vpr8ZmO@=C zmKu_NBz2Fgt(QBYcS}Zx9SI$c^bVR|t70jWT20#&xW+~3=lqnsC1iD8i)Y_G>oT={ zX%-2-UVDra-h^q_DxYbfXHsA^ngCa*_Q)~({pwN!V-mt%)NNnv82@lX*GFkVtY1~lY!MxyExfxlh8Ki&dkgdr?~I*U47dfZmE#_6uzw|r8BJI zJs9%7O1THdk$)*~QSmLVe&~> z>8Z~LeYO*UD^5jvsx#ih91mT_bSb>GuQu{^yozWyGy+zx4GjpDTf^R}FNCTas=SZw z8xt#AT|3oiemGA5ZDK3!m0RhtcEF$-zMUi zTa9Iym%H@5I~mSWXe+}ad5443!;0DmUA!JV9wSDxrUo~(-qqQdb8Yc*)uXqPPkKwG z1dF@}*N2w-BTZMx920=-*Ix8p*19rxr~Pl#1N!P|2erE8{n!CA6lVMcTTCFKH~Eql z{vqjBmo*Wh5wolJ7BoOchl8bb1wYC8B)O(SWKRc9GX13Ng{ zgtUzpi!ZoHxU|h`&91M|<1mgUzo+i?1iPues5enTQCpV#!Exz<+i#E;=R>EB(f5Mu zb(mde6Qi;TP9Kem8#SkPf_OIL`@`f!2A~a|*!j0LruK*IotLN@U^Nn$CmhnuMK&9k zRjvgEuewJYIpy+8*G6SPg*PEu%mb3|-=rnoTJ4S#gwmqXx)$h^G65pu6m=7C?p7gc zxLQ|y`_k00Q@`i@f~=F?CNF=|PI{-`lYO>hC_Gn9+T)VT@+5{EZrYq6W17l8aje!h zC^_G4beZvyli0Y(z4~^Zee1DAsFXuz6U+2V{iJZcxDiXo!h84HjrKODRYRr%c%sRC zu<)a8R^mQ(uz1e3b;YAacQGg@w(g`-;Fv1Yp|#F!UcaM+NkicvxYDDOVT^Q|ca*PC z;I@-(u_3z5ukM(=$X2Vx9ueRFB~(JqN(OOAd%J4Iu|-p!=~Hl}bm% za#HY|N<;foGqv%zQ%jUR=?NHP;^ey>Uo`}#Sl>2>-#lu)-V@cMu`sX5eW;(7Yqv1v zX}>uftzAWEJ}H2c1Ar`vck`t*Dp z&#UUAdFRh`Hs(ELq0_l|3L!q5+g0%Sq>arsq1B7Uqw&D7$_=_>PJ;oJH~dZcsc~Z4 z-fUc2vfCD)O>6~ltmPinK|ys_DId$AA!ROsXHSDg@QL`xc3PmuplFO0&_XQZYY=*` zEJgVpDwl?D^M6cQ27T%oo6*-g}7EYyvNgH3Ay8>%LP_+Xs3 zoBi{!3@Vwfv$^sdI#?L%rf|&oTuWbogxp%Mn2Iib$Ptd8% zSZ8!$^TbAc-WdZzL|Jfy9GtDxUc`8-^Z1mpyj(Gcc8{~%3enecv8orpyA*y`l+g$F zIPbC?hbx^PnJ>p_>(v?WPRARqH-( z9zc6>!Sn{5J2C!&@VI7&gM=ziJBo%z_%0ek?bN4(tCTyWvA5)>+hvk#9KPB7uO)en zHiSD@BuuOAr@H``RQg=yW1J#UZe>coiJQmPu+}y_O7KXDE#+~xGL3ez7f=CFifGlQ z;w~rkMZ?-Jw;g}-__x<7oi&{;X~OTyjaRpakrQ&RVWe{d8PJq?E%wpW{FOsS zcK9{qC#{guedIOwdXbTRws$pO(f%^feRFx&w^_v~Yj-#9c8;nEYs(^AnAMJ-?<4AK z!Q{6Li1dvIWKfBbqvwl#HJ*3lMmzD?R$)WJi}w6zuGCevC7|t3Yus7XQbM%!X)zEZ zVdKywZZGz2cPv_m@;Z9UJ>E!^irNQ{m!t23KULPw#~M>% zk7)GpKtyfPCW-rKCDuI#Iv2s> z_x0YAm~JQSV4>vOouj8ui}P228_OB`2~-7IA{`f+QTeRC49h}G>t8g*Bz&q9q{fzC z6z0p=(R%U^c_mT$x0}LwyVkXB#6mBIg&sh53f(wnnM7FahDT?<@IGOX+g*s;+LN~w zTlf~Ogab!^kYXc1`jF#Ue6O&LpA-jhYmdD-oN*O@>H;c5dZfOU1PykK!Hk)DFKLtZ zk<4IcOdXI}f9uAPvQj$#^JG*h3GChU!H%g&AK!gz!jA5sRr^?uH|bKGl^no8`u_vNpi34)(+ z1$CYUni`WqJylZwh_zyY6w8q>xz_wuuf&@dPMh9*SK6fiFP7~&I;tXW_cO6|HwFT^ ztCUz3$Sx)ub~{61Pa_!<40w%kp9rI|V1Hr~ZT_-`cVziZt6{bNQ0)5Zhs<)=9t zt%0BM>FuolC~WvUZTN2=Hd?5)5q^!J2g5G^Dg5=f{|^PdWh*#YiEaJE1!PhAANvA) zpvTK*8UN%n|MTqJv_M7MaFE9Jf0K@p{cenu=`j8j1Nnn+PDr~qW{#gQt@1YefAtH{ z5@Ua)0E7Pn<@0}D?O)`2PUsw>ubOkqz(kjTmc>%qWw%c{h7Jg{od;o5CUK@j;9^R=2Jz)p`9W5!){2e z7q%)>8h{yfUk8v55xs^4jmWaGbe2KpREN&>)`P1T`9eyrSu6^Jgf-28K8X>Yhg{ts znoK4RNCSOd)}22A>T9Rc029d!)Et|4K?8sNg>xwjP0D(iNL7uxXW;-M~r!gjN*K7evJ#GOAN84V7p?ja7yJP~gonLkU)~bdJpk^D% z)@JZJPvQ>`4(DkUHvFM6=h_)k<3zaa=iAs3Rrx-WIaSZA8zh}`)C_}bz~%o4^#AhI zc6{42R1QD{cKr;0dgf1wbRPLue|wHKuZ4RKaEDu}+~VR~hJY32xDRl_Yz(h6NS@zJ 
zI6DK1lDfTj2ilZ~u4bvxm-4%WX*}e%206 zi2Tjh;on0Q`P&uI%(0iMic@rq3!v>Q&OJClB?5U2wm~Z|t zhJk({1VURRbxivYjSgu?X8^_*I!fy9AH>6DQ8y)we`3Q!lwGo(Xy|W+@&Av;W1xzY zhwtbfSDT~N&|Y~y`#!>#@yGM=lRu=)yeVn&YixKrIPqV?V@QoK8&c#n#>N=8jkIR_ zk2^9>VJNTmg9&Vv76TMyJ+HQ(JVU#G=+N@%r@!Ey``dn;H3xbZ9{UMesy9Q^(ZM2R z|EOR2_wfQ$vlQn)ueZa7Pknl>k@Tm@w;V*Ill$%Y_bdD>TBoof6)$q=Kjdrvb=&}t zLx?ajuM4k*(kUfAruAME8(#hCsC4Ap{Sa;4ov_(H^Q+l@{eKaM|CdKy``1GWVg38t z%pcN50=uF?ZI;t9ie9rm0(tI%UivpHtDsF~fMDu|)HA{$ojFXH{*V6o|L1xW^w+}v zI zS*89`rT%lmxp{&ajh(GA$&ZJrgw1;)!pAxSIy@+d`@p&<|xW;)B5N86~_E^Ccd;| zG@Trllz&-E6#V);xfiI&^4a$Rq)9Z}d@j3anM~?&mhS4&&<^fiE;jWxO>rLlahQLU zdP?%+T&e!7Y$eF+#(Dw5|6=j8mwwRYXFXuEZ6GU_*26)cFWK9I>yFuoFU#_R=ui zfCp=%N|UUsh1QN`43if6rm9*Of4;6hFryy-D9p`6>A%Ah@FR)%tA)*_{#QpQr+Cml z>HZPb!5Yjha;rYD&7;&Gn@zgpSco9|aGZ(Wz1nW23_^Z{_kj40Uf=MK{TMIttt@4< zLsj3O-Djn)0<_6NRWiLm$c6hC_O|^XmdmrH zi>|;n%h!R6xq5$`f-{JI?-=?(t90$H5j7XLUifvu=&IAcV7a|J+8((#-G8a5hCZ4B z5pXlzZ|-iY*lC99`XIbUSXM9BH17TwVjQ9EQSWCOYcAaTPG(#UMG-FOZFCoEI95x} z_Y~t=j^SJN$4k;f_BXqb0Q-eCk;Sts_kSoO{rk+QL*x6aDNwMxss4Da6$0ERoaPLH z7m#IBB#0fa|X+WHN(Kw)-%!dsn(G{2bBqGAW5Irh1F8s1!tW@ zq;)fJV4&B%@W{>gZ`gtg5A@d|J8rwJ88aSZ+$2m3|Fs$+XWMugW11eG`Z{ReewV$M zEuwbck@jG^x?3Qz0bK&S@J3bfW}k>@vCnxh+ja2I%~%L~``eLdXU2Ky%`KN#nw}Wx zQIz^@q~L807G7=*kQ4Jr7YywDAA{#TF!SnBI{j!lnDVjfs=GM3CN|$^Y$fWT$6k+G zqN@YD%Ow73I~|3|5!wC%&FS|-)SS-l$u*u24toK8dY&0Rqs@=|sJUj7pxgg(Nf%Uc zqSXDJ5j{aYpOe4{?yyUOHexi=7>~H#U>6 zUIcK}^~apqinUmnt(+N|7b{$UdZ&8!a3`$YbKBT@qlyWQ^x4->S{?f6-Tp5qCLbBif{L;lrVhRE$HImFvjzWumW&FuqD}hK4#C*# hDIu`HZEfB^D0}2Q^=n_S^Dp2>PD)v_RNUC_{{f2sT-*Qv literal 0 HcmV?d00001 diff --git a/site2/doc-guides/assets/contribution-3.png b/site2/doc-guides/assets/contribution-3.png new file mode 100644 index 0000000000000000000000000000000000000000..bf4c5fc180e5f12c6bdf98cb6a58bf8294181db9 GIT binary patch literal 64610 zcmeFZgLh@kwmux&c6MyrwmRw9wr$(CZM$Qm`cISZFAy*e5ct2!KtR&Kxc^&L0;c?T8xSC%Fbg2C zf49;6mj8U>zmIS2e@oDO;Q#iR5AvVZAd>l@|0x4^{-fsWf@kn8pzS3!oPdBZ$o@Hi zrIbjofq(>oq(p_3-GR?_Aob;rFa?MbLf{OCB8M=D9yg0DlM79w5UQFrb)L1BEaxhf z@RR42G)pY)8rlG==lCp%@%ZmES6v%vX;8#5iTl~An_f@8cWzf7JzIQ7-bYQHGs4u7 z{>1rElpy~WreKZfy(gGrg2+F>(18Cfgan}zH=+J-L(=amB^1H?vj3|q9e+>of3(B^ z&>+(Bi9m${=>O^HpZ)@ipZxsQB+BBK=WFyCMH~_-`%AK!22CQvYLx z|2wg2bddi!4fq|<$M=_m$VuD$f(&`|Tw=HY{x#T+n^e)*2jJCewz(DW_1WWxBku-{B#TFE64%`;de8 zW|M=*TD|s>;XMi(O_{NLd31kNsX!DkfrBd194Dl;`0YuSeH6hz}^L*jZeB?P9m z6#i)ZRPA55Xicq!dVhi%`r9qx!_8joKU0rSf7g%x}1)IyH)fp6Bz`$$ek9#Njk$@8IMrEw`5|hB8(LvkeJd z7$W*>-d80@6_uIAa%v)<_R)#rnidJ)E1h~(e`&3HybOSBsnVBtWLGuxluXN65T^m5 z5)_m~{^If=i5;oykz1c0`XLEsrKC}Z>XmkUaO=wY=CVARnrs*t4BfxCv_elJXB zON1b*ntW^)st=OkdENzvpH)T=i}^l9-Y@6uQHGHYJ{*N&3CnfNR&yfu(|hs(*hcB~ zETdH&SrQ*a20?so-7k2yCfw^?SO!dP{wBeLN+4^4fQ+g&=>hu%maCJChRYKz)02Jy z;GYJ@i|9_Tj23VN9+1vS@LV?x%{Aavjaj$(Qtcv7CNVP!KJUtl3a0t8qIk6yi-u~Z zbg9mAm(jH|ZM2?f7|7~(KtExcR4HZ`XA?~R18klI#GT+;yPOha1{NeN$nb={PBm!T z?j?|2xwbEf!7df#K8`p(0IB@YK*VEpG5QDur}zW&*=CpNyR+n_`TUP`;fH48cnzf4 zBjP6Qll!?DBz7V!yK(adx^w7%3{|kB(TvP2Hi8r`;wC)e*OR)kl59MIR^s>I|FsrC z5Fj6nDo9)MBJ{u3S&aW}i}MIFbW;B{Y-XtMI_9yEMce;1|46|#qHpDk`o+Zm*Wv%W zhfv>-%^wT3OvND>_(}h|xo7Eq*!#~Cf zu)t)JaYF3e?6DZKzG;Tf3rg4i_}sn;Bl23r`(=J@b|K|GF2Ffja_Q%*i|L z-A8>JpRqcap`qc)?~Eqr!*h+bMjJ?QJZ{-9+VnKj4?g>Aq?+~mgvay0lSWzSDG&4_ zB{IqRkv-_;k_o{~jBMz(tDQOQwj0u|t*gOkA)%pj#^jPS!iI)M7E^2R&Y$mMolZvx za=DzJ8muB`vj5{DNk6dNo(Q@Z>ka1zgZIqnbxDw--=1nHe-+_(Zxr8xp44tYmDHJ4 zyKlQ&A_hx{E;PURDo>3@ardalRKL)H9=5(!P$JpPq^)OkMWN?=z2l>d1lt}3{4}H%tQQ_{u0E39= z*N3v*`vnlI4~=p!p2^}sM8CWBPL-Qtb)gaR{9O`?eG!Mwvs#=IMx9DUT_Iv8F(0ZU 
zalAIg=yBKAXV1qOooctCpyCs$Xq$GPX!}&a(tLEC;r??a>C`pRZl$K)KJ)fX6XtN{ zajKG8NSa?qI!;`Ccs9(gb=%cTDizqsw}|ddo>EAf%0E*cd8`;p8@f}PetP=7(LK#} ztA}ogj4nUi7E|Kk(*{_XsvqOw(L z4??-h)J3z!)28ya%!J&Gg5uR_x;hNwrC&niWN19eIEfC|=XTCZI-Vd>c`f%x;5`n)%8{X#fAG9T#H8mt&_GJp9+(1m_*y- zU;&H2v4h#rNvcJ!12R_RseGg!qa!H1?O z=`B3pL&Ygmrve%C{+ife&JQ@J(q*-*24hUxd>eZ~_!t%y2u_K)D^OfybgA`5Y&C|R z^0ix}0U=Ft$Q7sApiOJ*`0J2q_x@Vd=(ooJ_z+$1CEKb0G8#N&@XLI?lO>|Jm9n^p({${?bJ=H=K_EUW6yJw$ym z{zH~gcQb2J$hrDS0|nPEN7_%{_^Va?a};De2`?>&*;@5KLAth@8HMTIlD_1a>mLd~ zW?PUfb?bOl9v?}+NRG@GYKxlBW-X+Q-UF_FB0p^`=6hmO>opC{N2s5SCnp4}jiU?L?dyu$T9gB$m`^wU|L=#=+ zhg95qv;G4;x94f&#xX!_nhU?eBgb)HYb|KQ@gUhr>j`4M^@AcjBR7$hCGb@Yc?=Iq zxk^)v+mq`UrZ@ZsRv%})V!`Y=gjrq}P^DZe*@N!=o6KzHQj-5*LJ8Q-jh#zu3>H`3 zm&+IA7Z+#d3%oYs>!WYdwpz3Ms$<{?G(qN5dk)!ZScSny=X;~MroItl235p;FNSLNJ{Sal@y%^!4Y-nx#j2Pzv%Ej zB&FG83);UACG-}H$0v?T;iuqYwQQ(Yee1Y!i>TY)n{ki-rn;(u*P{>ugFzZcEM^|G zCGOjBfcfUm^tzoZ(CZNHdcGkBxte$Go?juLJIq^wY!uofWbeanvpHdb;E*NDI*>9| zrNh%pd}Q|V$P?0_iVT7oztN#x@iA!wc9-+(arQBX#{+@61=wj0RxWE+?s88+CN%+{ z!4l_nJS_1^<4ZB{KDiFai&BwZ*W>YZ@*&-LwbAs5KtQ%(*H|YrBI;LGQ+!G(TldLQ znx*0l(SrH6zvAVbUEDvq6YzT^&wAnw_TC98*Q(OIU&uS_Fy2d$J_ftJzg&sm^l8C9 zm>D!-@#u$#GDWmenn^oTT1NEC?^{S4Z_+1_GJ7U0n5L(*l;i^*WtdG!N-M^vmycz2 zSv;#2qai`0W7TVbzf?+X)0|()OKXT*YBbhNoF)G#{dSDln833Baeo$gHMVxVZ6-An zvDnKl{<`^(_siw#%b~JJH@3ExmdfEI1_gCL!wd2BnqGKzI#WsN0_Ji%U94nE0I+@^ zE#ZQljIINEl}@9N%DuWdkT~G*OXV}8bMG&(#$7|{FEhI)&ECV(uQ$IUlKH$`T?=~z z7tG$NllWo$w-V{zBtN>{ss6S*B(*#?2pbl@kg@rFdOP1PTD-nT-7v=%(nY0J=M(y7 z%IEoUXIm~;64_jD_2#wOtdo5R;|iTZAh>IdXzqV`4v><=lQ#xArqSsQlTYy|Cy!4L zLd^%;^^mBkshzNd6RoLqYoed7cH>UH+SDT9Ee`7#-(pEWoc$p}BGu<`elF7t09f{*b9BA zxR9wKBb3}4vh&Nw<7$+7v#*(B*v6SIeu~MhgQlMu%riSndE^CLC$IabU|uU|wj2O- zdeO<{msatuD-4#Hlk@P4jh=U64{@RFAVO|C=2xEdh$z_q3Y}ZmSeRe^Q7#f@YBX8E zveh*&K$73Sn2s5z*Ba3BpMy`|^X#7YkHl~`V##PRkNsoKLu#kVIO%bFb5gVwkBb?5 zh$KR1`GW5vq<{P7D~f12L36~2m46BKs0D&ND8f?gMsvE@82Tb7&wN7oucfh4gkXf> z@KAB^xjNButB6`Aw-0$p$lut!m6~$J!BBE(Mu`;dDIKYa9r8ga!8LYk1G)zrc);_h%c(>8-UMp-Bvz&m1@>m&HkCpQk7c5 z4Vo?Tm>txJx{O?!MRjYU5h5(8{~o@c1w|%Sa#%gEjb>Z2Ux(9CAv;4$awixCeXrEs ztL)*ebjlYlQ>Jeohu#Lu^}u-_zy2}m*x&vj;Ki>TA>r0|P6Yyvi5#A`W=xa*d40)W zX<9$W6NiJJfhZ=^Jj`csn!eQ7b9>_v!A|!heip_Iy4+DJtxz6j_)!aaPNelkVzZ|1 zxE-?usM_*Vw(N}X_|k`GKH*5L`fTN&;!7ybx|I7zP@$(aP#0|?lon)uZ)Q|+62&lApOClE`QYo! 
zvSREgoRHop5r;#*25ZhsONN={+VgHrx>ThlL+&x@B-J~(7_8#J(^^~R^Q(2X99bl| z+<^D*q_1jN9G=mX3Q6Ysl$vs-GBg$9!%~Gx62=!_+?{o))K)aAgyVjP?+1BO0!FjZ zr03e?(dO~0jwFgMKO!MNISxhh&P7l6@eyG!2}hf@=~VJuD`qBpsZrRY(v4xM7%Ko> zgdbZpAnZwT5`zJ29K|OP;KOGNXElBblhMza%b`a=B+huFQmZb+%7JwSi%Bi17lThi zF7x#PX?{s~_cFW@+hP)XzxRY{v2{t;?06zP2NY0^&x0Yk*Yl~$=vSj<=J}M+ksX;2Sl5MuCQnZQ>yvz6v{q6D3~DiE2;`9{LH1poI9x)`!m*&LfR3iE^gty#VY) zK#hF~f&h;f^<7;$MCiq%Ln3SWCcT#Cg zWMrwY&z7N0_c#YMIZ-uzP;i(;eaFmJ_HZ#>gCV>mo{I5+V*3vwuA_qksafU#Frl;C zKe&8k(4=)%OuJ_Vz1MD@+4|A7I#r$N)H{X^ z#wRc=LfS2+-R3s&BfqRI9%-0g@;{s0JDM5j=~H#h{{&YmSCTI68x_8~_UfYO0+sf7 zc*Da5xK7;SY`5P85@1IFO^QzpUzr;vY#^CSDc zGEkxCU&{V+I&D27 z&^Gu7#>A^2m*>CoqXEgp`m*)mieQe5Eiw|&f)NMLR?9ZVHKJe_Yu#ya_yqzQph}qw zB@x%gu}m{i$x$~;!%HFb4G^%JvD;eY@ZP(fvZT<)P~`0ChqsL<(7=QVgvFCch;hV+ z>N1oJWTK;ES(!5vDb$H+YKq+bhcxLANLSD|NwI_iM47nHpI!3>IUUC&)%XM2VDB~I{{M< zC@=QBwr9g<(4BY%sn&i+rj8s`D3o{K=zu}ubx=J;zXi=r){mw(N;6k4Uvm#CeyjRK z7|oaVLm`rVm6hUt;m^ilSaQ0Ncibv*g30#Z6%YTcdnrNUB!CVJg~4He2t6`b&bJfc z!*I&>PG?M3<)p&)W3gMpeXw0Ek76cIpaIar(!+&*f*qZVLVL^0?KKbcBIQBv4`R7E zu+M4Vv@yZBk#XK5P z2a9-WB9qGE#m(nxwC?qO)`C%mgiA09^}`3^LJff<3M-9I9MW;XDmfi^mal3sTC5fK z8sM<(d(X!KNo!b4&0TV>@y%EBdzQAHcq3#^NNskkCjU8V^ufb9-EF zaJ_&y2C3N2$P@6qQmi+};;+q*$YqhiaH!^B3-Kki>*eW2+$K}~419mDorK{+0>}i2 z4Wj%M*$X-w_{thm;?mFH!uL}POfHNv6C*M}?#k0-L^tgP#DR2F_%10m=lX<`VxXis zTwVGMh&Sqn$W;N`vw-Kq8>W;vJdxHy?Sq2Q!KZ$z(Q7Sc>t}sQ`A*lYj8g2xl(3GH z2a-#6eR7$Tf^dj#DvRDS`uozP*T7JNamEI^H1L3{Hg%SIT{qFypVsSg&rwW(~0T-0MHEO$Jav zliADy-Q#5ceiqwj2z;?2rbNH^thlHjy;3_;jYg9YUS)wv?T^i{&66z&Zyf>%MYz$S z04Z30zNVw!Nf9`o46d@7(>Xi}zSYRk(C{U1?}4f{n&`Tgwm3-zOZLV!D26B6?e-Cm zgiuIIK{MZJKbf5L=AB^p7)is1#J!*|j>OllhtBv&fkTG<&kR+}LA^5BjNl~|dmY9X zv<=JwK#zuW8GbmI8p?!s~rn8T`<=&C$Pi`qc?1plMVU)DJ9JYkSAPDU&UHo z!uCO1M4pCdP^g6cc< zxiLm!!$Tw2wJ6mgMo6-?gb*i9%&1IG{WIg3G(DVBSINd%rh9)yh@~mKm{IYDK>foP zWqjT9a?0cmtFo*?exc(KiHE3k6>8``Vi+3Xi9|;( zVAkD*Hpm~YNW8`6l(F0S4T25d_j$jY$`FZzYG;zBiQLj zk-h7^H>cBsfiJ9OS9WBs1qn;eZn{T|W57cOK(NZATwjL$cfk~NwU~3!?!Ax}kLghYl*>Q0 zFG9_Cr{p%L?d=CeZn)XZOAtQ}uPoGrmfhp_!4@H3BEV+7CrLaGTS6iZVgki+Xv#sq z5quyxMK66H85?)5xlAF87?o?y^=6ms^hTVp(|x=d6bK46eC>n3W+TEHL$C)sU{mR_b^-iJVblfPacGR(f#< zQ->gD(0aD-TC@g=1@0Q7I=XZUt$fG|u+Kzs9k}a0{?k&MFfG#0@ze-$9@~Xc!k)DI zM&4^&CpC+!3Fe+X(zRC81H;vF4vXC@zxRwDJPskr+@zlf2z=Wk?pw%?;2__td7I4F zL1U+EYVEg2{9g^Gr^WHv*?10kG+%-4!v$$&koer*%C&5)@+)KRyUTjg+(SPD8PEy~ z_Hu>bs;HxP@}IU6&XSsSEN<4nglS#!&}Gf+cKOLFmBvE<^gNl{m>TlwYBm!Gr2>!m zMYm8eJiLV(fn}sl9vS)-8H(LTk~9g$9^NUv=8BZgXULE*oeyrWbY5}dA4)DUhG7qx z5Y`!=F+EcdNsK`@GJhVGuzIvom1C)PqZ`93qBT7uT*+e$1Up0VY7CGNb{shneHk4c z2RT@)!op%E;mr@udH#JITnsb9E0BeX4Z04P`f9SRyN4ewNO3egvQ0K|q$b6gCOw@p zSfJW>4NhN-Y`{kT5{!K$;1c=izLb6Q7{sjnPX=?3#9!)Ww?w8<7=~xcsBx&Wrb?}D zuHvWJ@2ci)Vg1QB#a%t=%~51Xi{V7@dtims@U5U4S$UQQ&VC%UWcY#kas!`NmqTS1 zE|4{FSd9EHrWX+CKnNHML_#iUx|QVppVL~mJDlF>vCREKTGxcJ1s~zPcOP!3$3qoA zQGjtD^j|*C_;k@7gSmw>^7}1tzugCDK%gqkb+CmARVl*BK1|h~%h?x%oHr&-=v_pKoE};83gQ>#X#E<1mA5dgqF= zNh-LrZQI0|#IB6~o+5+1s@S6q2LI$rTcz?^?oVnQ+MM5vq)xPq&+o=&abZ%qyD1ia zacWvU1;gidQ1s47e5Dg>6o*V`ph=~g_{|TSltThmb{1(8nVwd>Yv8uuz3mEaFFK#j zNyNpN7I6Jc$R`7-UZvtMg*Xvl8sUK~KQ{|Ul6)9!N}x@fdYQ9a>zG2MGBX7 z5ExE0R_qobT-^XuDl*D2KTOj$B*fE^;-~;+(2{Zn5eWqdhcZb6PRnlS2iH}jFjg!9&onsz z_okqtuOTv@YPElQ3m~vEL87Odq(^lc*a5vye)DX*HUa82BxGdxoQ7i)l!x}{D1t{Z z>3uy2Y!Jvjh(^cLzyw*!U0v9eOsCZbyVH2YtmB^HQK$W1Y8kNq~n;07GF6VZ~yvG)pJk% zb}s03O-wOnd%cem}U-T=~4J{%o7wbZ$J=vqcNhjB;f z2R#&AY`@yLHu&1)DA6%QU`2s~aQZXeu73K0G>-3jEmi=o*HUV50R+*iUL2^qQWNg>HsV?qUy$fkkYadsz_ycGk~wn3-bvy~x)uk#BjlV_ zltRB5Dakv@r^?@v?DETPkUX9MbOX1nQiu^Ok?Nf4*IqJ8pz;2~|1ws#9iZ?GDBL)8 
z1(tLKEdZ+?mYF+P^~kgSar<5uv4dM#?2;`Ew}fDk(BzxjoIC9kd=+WhcDq)I_9WoP zb2s<8Q_C6cAOY*{S|mx~h(>G9(GDT^3A;mjQiL%DZ#PN%3-6t8v|k}>k3c=0j#Wpj z^kdqmEXL#{GC_S6!$!fo)s!9YrT9|z8Z;M~Riy*R>XvwV;Q-Z)EVn2b*-0uJ=Q#nu zf#&GD8_C}Gf({`E;g7%LpQ9EII*}`-I^FXYnxJK)HN=?qcM)}Ib9p#X+21m@7|iW0 z@iu^&9QW=<|K&LC4~C;%EzmhUij+!zNwrwGCtYC?uJ}Rfw$RqDu|2j6J zQY{VlSF4uPt}l-8-GhNm7+#0o2I|Oz$7+~5d*>JzKjzi62DnD`necc2?#-o9v%H%w zCB>Bk-@g7E#+${{MGDQ6_M1Y{nP^|_K_+(Ws$RcypMAG}OQD9?(*BSF8`>S(<^(3J zP0+~yTnv)WN*6r-glkQCSnNj_=|?<}IJ2L%yLuQ3oPx^0kwrF?lMc7ENkQW?h+J)w+GcxcsKfJG+OcH zNn02ykB@S)Qzq0pRzcrkwZr`!44Tt~wkAriSYeZ3Uee}6N$R|iC0Jkw7H6OH43~IA zu;pR^r*A9)Io`8wM!C5>mG0m>NB|m))t&U(u$#B<@$yoh2O`lus;7OR+#Os>63aj( z0?CQ07@N56D`GwSouDF?3+FW2fD%^kKMNC@r6h@o8aw9)b3)c!1#)VJH&|c_Rf<7w za+uM)1cx9FojJ()Odz2&M_4?^#S_$ET3Mbs`N361jC&_q*>IF$(hDtwd`qhH)5L6j*8v424?v=d;| zO}E8+X8@YT-5yqmiNOJ`XvTZsA+7rg(nvejEBg5U8D1CHWe0#9b~}C1pA82OMGw&k z)VO9)xILjlD&!E;n+?Lg^jmj%A3Cs8lX}qvw}U_6+zX(cXISGPsVGyHRFr%Bg9z-n zIfjjaCiOJHAwO)te#^ZGj}uE%l1^kLALPl7C2aZ{j)T0m1ORkrketj(A%kpf#tKX3 z$VFZ-5)+8r7jx^Q$kD$&0n+8#=aRix^vB_i-<#eYdd|NLyls%GiXqx2I*#S3Q@^Wz zYE*z8P?5C4BTaJPk3*7zoC`1#!(g9r)>FGe$|&I!r}6#k#G~1O>ryC_s&#XOK3^2+ z#=dR)@l>pq&EDi&!davxdwgF z=T>aPTs2SLB%R~)_&+uh`>bZ^I;=b%eO=_`_t)rea~!)fJoKL@FzP^nPS3EEkr;_t z|7i-_Ts%&orV3r_`_qxbeJ+==gZ%u{WS)0;PdlGEHUKA^TlpiktL~>S*a)7~O72o) zIv1b3p(vH6M2thl*ADLo&sDDCBOgA>U|eL=H4IWrG540(5Q@zJe0`0PSvM+iecWvxMUc>1!H)!Wm|N8Z`E=AWu#PIm6X?SHs^v zScmfQozA3cN>_^=Y9qWS6(BETh0`COp9Wz|a*EKc863GvN>f-TW@0O=47mp0Uhpom zyZ-h2?Un}KrUL9%_}x*g-ON%vaom2E4APu~VD^$&VRt+(f9BMBTv|zz3@QI}w(9S1 zAUHitdEPZeszl8xX$oW#c65!+j$-vJo|~8+6{G1<=0d6U@;7yf*~@3yF_y$68%uqRDyI@P1(<&CuDJnxk-~D z)Nd`=6)weVr~<>KyCcsw2NMBVS_{KcE2L>UPJ3eCX#^tl=i=Tx4|0ok1%>{~qa}&j z4Ze*+dFFrKzRbWTrj09Go<*WXBJU{|UcL?hbX0TgGo7Na^>y1B>LFbh+MV_fV`Oy! zbN3e3(ZbQJf=O&H4ngYlj*mF8-AAI_%{4Jso`NQ_CzMxaUa?lZ{P<*@?I7hFY(*nj zl#;a*6Rog_4n*Cs?sI-9xa(#3omB6={+*{tLg^lRNdZ(21Exu^b&Z$4;+Q98gtOvO zD%-gkCp$fovqUu?)OYJBW9~be7#b;$_ejexksxs%EMZP@uX7KVt)oLe-ENPNT7he` zCD)osq#V*rSHD=ly^LQu)1x!>fw*RPbyRPC-rBK=ltwlYu=$8wrXTQO9aot-mWUfxd6m?zg$JbR4SDI8u=jeX4ZBMPWk4|#b2VZ& z_cFnOgf2{A(_tT*!nrYC-vom?J~U-8Ar!}vG|L_l<5odzs8}w1A~#H5DU_h7T1Z7JJ5MLs=^xcV5XI|RN#`JnWEP#jkdCH&coS^-K07HsZ}4H4_ZFcs)5$@g zFxhgJd49jIJXeh0>G!Lg-;5YXx5wU(#KGXkTe;%@7}N;;5{GsSohX`fU|K1V!YhUC&q-8mT<00(haUay`8f5 zsJ83IBeuB*PR~vH{`8dpN%J=C;{n{rrMmYyi+EcBK#Gllsg=bu5dR?h{PXjrctr)& z;eh^&(|w!5h{2m_i1#T=IL~GE2^`HVe*zyoKE9fPkOT-hD+>kYdD`l` zbrAh9wLr%ge{k;i%DOMj{IU_0cL@;H*^6!c!V4lw9p`wvCO`=e*ro!AKJU3$JOuxJ zENBw>$gudI=dT(KP~N4Ce|SkI6zMd`;T4=qNtHbv^>=VR!LmHBGC(yn+K=1JdP|v`RDvg zE43T9oqu}5noja`F3k5J6n;vqkAFURiMQ%w@58!F;s;^>tn@oc0DRmILHq)J0HtV) zima~#(61E$ka4-k08E|;Yz$p`p|tEdxlefvs5y`h}uZ`!V8?aksH6+{m$K?QA zGbM_>uw7!x4C!Azzqsye+E=Z|Ad2YVtFDLJV8}xuU zV+)15>k*p~eJKxCc(ajlNptv!H*Fu&I|%dhEkXn0n(a{KQlRc!(05W-JMQiN<{P5{;a z#`76sd~4cymxvagr#rGY=W`%YIFTjA-s$5LHWZOgag9VX%>{V>9Z{3ptFprX{AfuZ zu!W_Iat{5O&)`H6fvK;zm6!7Y1HzE`92kK&Ur3H>u-(wNe43lyO-~1J-+FdB`YEE& zCvb6H80BHgs5!=#ulDMfkd(Rcy*dK6Ia;Jkz>D3|FuVAd^GOv!y3(m?dD-&w(8^UH z?k#YrzsU&7D`lLg*37(@g3T(;B+!u{RAHSifFrv|jQqkgAwoP>Ls?N#EpI|8eGpBz zNYS=m+~gHH7=~D6>vx%WcTk5GP_8dXw}|Bi@3q!+B(N&9cA5yb%JNRx4Jws*ur=Z_ zoVbm;)Qv*#GNhxqE_h&3+lXd!<7o zZlnUasSE{_fEeO4Jo|d3B)r@TSeY?x$b7tksHpJ56{^Ow@+TXj=~9%UP}zSrhKj!f zXs_Cj;N=(8b7jFlJ#QL*DjP@b>;Ys9(d88~2Od2dGj`-Mr_3@dlXAP@(6Bbc90Z2d z9wWbGCsyu&-eI{wGgAuM)gF}(d9V~`(A~T;Dx=f@+$7VThtnxX5a8uvTMXhHv(2_A z-&lqM!R>)oBIYn%+uyM}Wj8vT8!)x@GL6BA3Z08%IU5&c8ZzMG_iYKX7mXJ+b*D8d zM*alz5;F*ZU9n!S3vG-~X#JAm$h$w6z65UAE@eRQKf4l%vr`X?|k@mfhUk7xc2?j^g6GKf}n~VX;fm?&SArdR#aiJ*53=q1l?Eu5+K_y 
z`1M+g9Vzi&J=v`&g9HseaUzu7?5)n&9QpYCgPtv9Gf#+WB*UrO^Dn)zL_}$Pnp?c| ztVJwQG3Rf~A0nb!L%>8rJ-_q~)D$0|*&2Z76ZeNT5tdw4P4&gqJJ)d-knrKJ{r(2C z+ssH@TU`t$G>B134X)Ae8z2k?f5vbG$)1@Lo#Wl%y)z9Rj>fub3F#F*mcE;TqqT~KzVSF7Pimst{F<~>g@r_Cf^chb?T6{?8Eq|ouxnylBekkUJ<1_17rlMGmY>p z$#?Gv8xh+XV&F}D!jJ!H&zFYs4$NzUI|IUy?+R(+se%}lnWh|L=@2jlMm)H}>hw`& zU?lcO>m`H_b{0T#zVF@axSERqSE$he)SVrZXm zy7YVVf$48K@*$)xFtr9PPmD!F5*l&bX>(%#G@KO>*R@O6*^r<32d;=26|L(F7=`0# zA&;m-p;UGrM7yrv@SH9Bx9|@w$5gI0gQOn=`9)vta4J(FtiyxCkgf3cp+TwPk1ze6 zit+EBMxb%F6HkXn?2G!Fn>ZvF`fshi z2D;%WkY^3rDZFFewE$VwAXy_js4BG%mZp{|gme7X9FzRJ~(Kd$c@8bVerBnqw%?DCeWyn2)T&(eq9eZHp`C$kh@Pji=o&jjhlBI8tEE z!+rJ1`X2D>E!;VI>05x1YnzK`vG;%7XVBq1BZ$cL*iWa5x%AI3sGKm@B$nI^e3_ff zXdLV*S;6@70@cjh^5IKwrvNuur}>Rf;W)OGdiq-lJUVm)CucPzvnM;C;b>?*qAIKl z6au>u?=O`@2h!)uHO8hyZ)#h8rlh)v2Z+pYjr3cTI#kC^=osqW?6lG_XIi+NlxsUz ziee`|r9##i&FgZXFJyFU;ne(i%sjr$xMEq+-bIhdoV|20x0qdkGgoeExQBR5g(sd} zmg@~%rwy_j0mq^!R=~*J$L&3r9aj7ug>}E3VUaI*GtRpJG+zM1kQv`r#|wsgPeUp*WN&g;uiYF`<+!QV}W7)_IgcL(~cN>kK_ zRCYQ!F(-q+lKL8=Ix8tZ}?X5`?8;ls6HK zvdm=-K?Xatg^b1uE`w2lF2R=~?GKWWT=+&o+{n_YAI5wd0So?aKj`7F^A3V7?+m`m zv(u>idS9D~{fRsYGoMI84qR^q#)jJVLE^Nb+3(2>3b+mld>+8aL}(^2 z_eGfqGMI25I**Wvsy$E=9vxU)$`PcIKS($IHaH0|uZ=)@e6~ymlW_yLLbS>EoKpp$B^6i&5+E2U?M)>4|tQo&~WFpJL;_^CP^WVUn~8j=t~QZ43?a zMiYY!0dO?N$Sfk~-IRyl@tipWN%s&2FdG^)Hnvz7gStdC$<{nU?e4BM^iQ`W?HyeV z{YEutvvM2-qoeV=2#s(xol6)rWHk(0;b@|*^D12C5p0pzS3h0DeDq{K*9z}s(wLx1 zF}6U}1z}oVR8xT25f#7ykEHW9xmmK}Hf-`ey`MTwRvq5iyDiB?pS6=dosfoRqP)Oz zN^$ZPZWr-DTF>bF*U`J7ttvK-_q`?X2o5OL$!?GxT^)Kbs~l7wp*oFNV|PyaXTOdT zqt4Ei!^y^q`RjY^UNCkXX9dKDnF2GV)F_L~0-WkApA;tc%{#soislhSE&N!Ls2#lTI!eld>JThHio%s@^%Rws|`L+~nSEy(_r1{LMk>p?uv2%CDur=W5a@ zwllbX$C!kuyLm(WtvqYQoo0sXk}%XElrYdQMV7mWJf{(dQOHJ=6&)}V6MqlOd0C&K z>Z&+H?Sstuiq=(pAcns2x|7Uy(hA-2XEO*}-oWe|${aiK*BLV^S9P4J$CZY6R#Mx* zL;i&xGz~)yzd6!q<@a4C6@B$-WnFcYuGAPuA(C3ZM*aB%|JL%NQ-S%R6Lxme7+X!h z#7&Mj*%srEdrms(8w`!9jDAUOiDEXF74NWyMQ%s0Wm18ycg4|t++L&xtC^>pvU(6* z5#6cd1zG>2CQKU-c7M2NbTMyUE6n`Y)B6Gf zU&uX3|56`{>x#qJxASj0T%LVvbYtw+uR=Ju+<4O@I?QA^PUAY=S!sw-+wnVJU_R^z&0X+ymYuelcb;ubBy$ zcfHr4R(glg>~-E=x6hR)#1M@bZ+#2xfn^d|rt}`fp~R>7i0}Ha^0y>46eiM6W%Z3Xk9MSgR&aK=VkHV)+AtPap^j(bP|40DpLpp1^TURer~lr8c9LEX)e*XT zZY{4`d98=IRe{Vuz*s<&E~;Hf`lsWCk$-9nd^49IHa!l6GdbXMsMg}oO5euONQWj+ zzX(0@E-H+tlk~4gf;(1)}nMEjf?|BO_LY3XGPMlha}NYjZOu|BF9 z;pvx=u{AmZN@f_^n_EHi0dsOkHN2t0%4i*`I_?L-!yotELxQ2&KQpYeXmaUX)p*R8 z@PJ%AlnqWx<+@1Erj2Y_V*FWZu?fa|MytAsCqb4*UVmI6pXFChkH57}!_Y>RQ}v3) z^VhfbU{|$80dMHI^Guhd{Y_6rU~u#C4_>>K@*#OAp}FC)wY@A10!j@cl}aIH&7(zbWms3~s4lK%%z8hF$klrTLEL+71Y2H>M zE#aL2gGwIXe96Q5I^0vdNI^z}{uBx^g8bw5>|SbGP`TulQsTZ@hgea2kdnYNY18N)KVdMDcuAl1 zLX;*<2=K4G>-lB!96!#8Jl3CcRLHd1dO?C5q;l*b#*V)>|wY&?ZE5rIzFa*V>o zbG7HBQZ$3*FhpHA1wgTCdz$A!wvqPE=SbWs$q1^jbcl)TrST<OF4XmVFPl>Xvd8 z+|3zrn+ZR;fbNm7^v6}eT|L&}N;G$N#uo&gJR2$l8~wNy5Tak#S&vokJe{)dr#PZ@ zn_k!EoAJ#`Q1xCOI09iq8-Lt1u6NMkQxObY)C zgVB{nZI0@ktH(Nip5*!oQ~OEXQ+=x>YkuXg$X)h^egL!0DSpbM^``FvFofEutY~Pk z(kLjf=5m@7-yfRll5drCQ1*ghfhTqMX%&=nNlHdviIDGvzjsv~1rkcmih>}&%Wefa zWun%%j#?zc^{68HJ9Bu35p{BGT#i9wur&V;{C<25m>5N2nDb|YLJ8~b{c*r_^+YqQ zV>6};U^#=-~=Mnz@*UHL;tDE~R<%io;DEqq=6 zJ@eVs*H_jKRPO5G;l*m&J$jUU1nZAuzr7)c?s_))O}nA>=Fd{ z(7f;t;#YoJX7fV4=jYj$PO!(&t8f;7j5{suWHt!v1xb`({YAveJFh@KOr^GA>KVz= zT9NkccQueyt3V>Njd*N0>*8>-DF5E@-Avjr7dD)8WSIl;PGQS^qDt*1vSZX&c*D1PhivqqbDecKIG*Wtt+=T{ckH{QtgKO1!s%V8x2*qE?>Ogxn-A~TetlvLL+m@(! 
z?TOCz{l#eI1W48`Q>fK9iSf9N2n!{vqBVi(q7xg<6G@5GX1ARGh=1Kti@zU@5_2%* zdYHvLSF?iC)_uq0SwFra=U$5hNKP^pi_f|n&8P2Ad2{iR@SL;nEj)DFc6-s2g&{&C ztU54oDa#jn8Rh=0sXXt9y_zXXzz4gTv3m)zs@i8~8z}r5n?*18^P2F4j4r!k%HRNH zx}5Lr7ps*Yt9gG2IaC#8PEviRlPz)~P$1P(mDFIn@x%Ftc&#$qoVh}jm2&;yremnk zZbld)hH>I!h0az0I2a)+pmD`LxaQt7M|Q&A5-NVmF;x|A4%4W~hjpArW*0 zDF+<9JC-4Gh>nsiNAfYW4H=Z=ROvjS>9yj=F61Nr3$}l_I%@XLPVDP7hLTA|M_AI5 zIez53z78RS0_Pt>;X1xf;WwiO_^{3dD3fapMG8!Wo8RFKts|c#)HjbUr@0M%BE)g5 zYce8nH=kacZ$i6|cR+P=gYPRMlPjKAeOd5_?TV11fTOdEPqHj2JjA*-Xs>>dVo9@{ z%8dGhWrj{X{aRTeu8S?nPK1OsoWRKb-oK!5U$O6(*u;YW^oKInu1ebcJl_-9>ypPy>Fu!W};*PrR-SpWX`Kj6A+BVZCojVPyb z+gjEy-d&Ef<9D_8=Zo%3sFr!Tlk&`(L&5XQ`A@UENfB8guXt{oBwpMSyoBm@9T^hJHan|7A5E&pI+*b-!p&FAIz zOE$zy*`VfIUUsWCl(HP&&s88f>SOE&5}779(JOijn2L;l!;!zQTuD?nX4GCkb3_Qe zULpk0_)YeIrb@R$R8+MbMW@{LdFrj{ycw_kQ-c2&vxpG?4hSI+dw?Ro5bZ6jSJk!+2Q02MpiAGaf%k@<7bF=`Ad4=Z`$i~XeAJROd= zD9ivTV|ECR$|VTeW{rN(;TZY#@6P_;7yp?Yn#d!sgv*=&HHEf4h`?)8|}4n<~u5$ z`Kj3DK7YrdaKy9;s5_fq9U8HggJRUs$WlYva2GUAy`-Fa`g9MdXg%t<`s zDL7&o690cL^RJU$0Tq011+`tsoQee%m4(O&8n;J+3{joS1P(TK%y&A=0%MHfU?TA{ z&en9nWB@T^1=yl*r%NVl89oQz>0@q(k2A=$RzGWmHG~oePOV7tF`b&5R4QV!lrO4X z;TXit&r+%yte!b2RNYU0T>GVcWB`(SFji%sOh}=0Y-+2oYF2&9(H)Vq`Qqt(TrDwd z8){nkp*2#z`O?qlsrKSJZne0Sy2$e+oi_l*0v|-fCTh&=zU>9djxoL)I?s=PDZeM7 z7{)}6sW~F|mcE1Q88EHYI?%}TXn?H6H|k_qpSB?~_5>`$IARVz%oMO`e%#Q>KSqnkin8MXxItQ7+?d0?Gga{p^6ujCL5{K}_nx+}1U|nF|5NnzyQEq~I}Cd~k}Q2l5fT)UND(SBwQSBDg%vx&M$s@IP;}v_Je7&cIizImxYsmKT^KXe52l% zp6Y-mEb68&<3RDzjWPwLPVZR5T8z<3HHsfKubK#-sjGAU1fBy@_D~I9RXe>QXD(^sb-see`D{83b#8a7f0s zx&NcMB>}P|@Zra>iGxHgA!kj7om7tEV^Ct+FV&7|9_)^IoA`!kZ7onDCW1K1;_%|# zx-L0uPCbP48XS@jKrqg&x$7rBEb)x5LpayqLQjgPuh>=OT~la)_;=@_#F?k3geZ8t zb+5wTIDPEVl61kfO%=tU4c!Er4&T}yaBi?CkeUwcy8P$hU@w9yS#4DXrmCm-{2((i zp&vZozaWL+bZD}BUODqE6J32bg|d+Ba^Sr7tndAfZ7v@l8cSiaHw@eqYtZx4^ds%F ze?E|;=!LQk4)L0vPLt=2^SL^{|0%U$c8iqOEZhv&<%<^v{vJq7&O%<1%kCY(SH~aX ztC;PQE~nX{qfJ=bUlXF;>RM9G<`|jpI*bs|@{THs-URb*i`^dII*k({Zh#Jx3oYl1jB5y$ z+Boi?M_0`p@+TNJNA%QE;}y3L0O&>|E0i2U3Gd-r7+8WJ(O&!1vnkHZW`R31HEhvl zmRR~jdX}L#;JxiKdbD1>w{F@-7MjfQE_O2fQp6=zTI^ihV>h@Gzh+;CsPN9(LsIkeq$)=a49$(jY|Y(Z zhttKeErjefd$WTQVmzce1Q z5*Rb3$?~ouW^X7kSlkNe-;+Ku7ENpTehIpV2FqR2VJT7Iw3r-pC@^9t<#KVjMopC9 z7go^?OT#oXy!TO?z;XHZwC;zx!56*v7gHE5U00)*{}GY_9TqbLl`$`HeG`9&ixM39 zOF+Lc4&2=mE{?u6LY@xYX%&z;>RvN(LorKV?c!SvUT6X$**@zfVDgQGIYb`*|OTn=X|;fK&u?a(aT1)Pm-j-cZ*y zcn;u5%eNI4sOk*IK|1I9pSs>Ojs`jdy|*fs^m@FSm_NP-rh7lYt71mYMEGg8T=YVs zPr=nZA)1_qB7If&h3&;|Et9W%FOOn_yVdzf24p+}ZD>21iMi)N*BzO?A z`@A?oXjEH~3i!c1%b{PN-_}&*TT51-Nx75_6B9E99uLiu4_Pr~0gMjE3 z>UI35k<`YhvbueNMO#O1d(8I@tcC=gU)GrPJEu%m)mr-~U*^`-E(>SNe;WS(=z8m@ zDA)dtS3*K+hLE9SKtv>l?oL4wR6t4^X_0VfX^>W=OF#iBX{9BG?hrvhx}-q?iE}^x z-u=Grv(EX;wf5R>x6CuoeSg2#b$u@H;03+Pdm92Tg|h2+vj%*AP3vOY(&5x5dsVN| z47e}o)kvjntKKv>iAqhBqbDpHSjdlVr!31I-9EFZ%W{h{;TJJ;ND2j3cfAV8)#B8a zvO^c`nEanL!2eBQ#&RY)b_KeXccdu%)gmt;Z|&%Oz7Iv$grWU*1@sg_cbkfZXo)}P z@K;Po5cr5)QvToo9P2i9I^%Z_cn~Cukyg3k6T+(Te8e=F)vJ8Hz&bTRMP4x0d`Nws z?mk(7@Q9Sc@R6IIAFnt;b8ub(d(-CiFOU3uSfnA;iy(B|-5NBWV3|a_!0~RH?{ssO zrnNu6FY|-u)YZbZRy}c?N0|)g0H^~zc)K*#X+yn9s2tpD?*!m7N;@Xvy&|*T><&fJ zqs3Beg8;y&v-4J9US-E#(L$nJ{dbrE${5IcT4YE%PJE~?w2UXe~t1Siqc?lkja271V9^S}1kJd4{0CZ#_U^G) zd(eNHGDAK3@Ir-aQowz=p|N=+co3#YyEX)}z*BETxnTPHfLI zT!+;x8;~3g6$JAGdhaQ;W^5XjjT9P)vSOaA(AvmM3P?lez6{TSe0_=nF3Psi!w)@x z4i!7>@A7Pu6#8=cwhC*zM6qP=e%cHJ3HMFth@Cu1s0>A`WC0izNVgz_$ z8H)8@=%gKKS+?M~9-?sPXMih19rz0C2ip1A5%`bHCSKnvB*U;|cLpP5(F9=;&2U%x z-R6q#fS+_zC37H6tO8NHPHO%d{t1+G2m=zACaR4=C|bib>ly#STJOYt{A~3pV6!s( zD(nheK3B_wP1p6F8C#g!JZ5!*Idu{m(0a@>tEV}#!}tQz^7Lb=sSJY+vnq~*U}n7$ 
zYi&3|&QYu8k{g+gt^Uq_`+Xvbwg+%$^P3*bgzM2kb66vUg^!u$WMeZW!($m$2cg>Y za!PG^qcC=kAULDZa;$4xX1SUFu|uTUxgtA8el|{%fUcI@rC;EX6}Q7o;~>0)TC;R8 zP$coyL@$}~{SkYe=46v@uPT%2E&?uOqMa4vInk?|W{5j_Eqrs^qu+v*01gDC_ zH}zN{{&zFJ^xY@?nZ4W#tz3h3jyEx_5^WN)>8=Xr&23&7FXr)}bLLXBjNcA3sTX0& zwFs5Os>3!4&G=slzsf17|Cc-m=WS9i!Rd$iu9VK!IGY zRYKkqox*$x#vR6>oYwr^cY`Ba?f8y?V#^by>>VEY=RyAc6H47IacJLsx-!bm7UHdX zR#1pu8_>k<-wLq!!Tz-NF15S*s+%g=S;)N=S&u`1EN=kejNde#wD`uV+X=)3 z8D{_H>&k7Y5;$Ew7tFI>t%V}8Khk#bZmhB@XD-Q}Hj~F;GmZZfK$WIa@L_K7@PlG) zZ%H$5tW!|dUN!Gk=K|B2kmM`$5)BioITG2T{`+K54I-k;)3WfO!@lZw+FgmW@Ev!_ zy2lXswh*U`h%1O88}?4JHrB;e6jCiZx=q|j4nLKt5bJZ5K(H#y!o|BiF>IN_#Y*N` zw;6*ivOYxxB@eC8sGxJd>a0C$bKZAE&X}$<<(Is4CSa=0nZ))0rSSGU?bLn5W%US2{|n}B01)*U_b{D`=Kpjc{Di{2vpQ>dJh`%xHG z2AS$xzUNjFvYJBIqvIUh!ni|de7?)YO3e4e7I9j_m@mc8iLuGFPU&1%4A^9iR6itP zyCYX|*qIL3^saHRy!Gf5sVohczLjP{8m9HE-)+sGX8oK`Mky!`QW07jq@%A(G7^}v zpD4A0DRT6lttNnz#&y`DB96@5$=}srOkDAe zxTnzDbVo6=zVkLNXAFxuTNA3-yg$EV0^wywWPnC~9{8w{DqIZT?s3`(h%PpNctz&W zbb~SAMLz~}^1V3l1MQ};5Tl^#6urv z!-FQ{oP6Lpd1tZ;=>RqErR=bwUbNNXi$s?;xC$jN+ai441iZd@{64P?J(rzNW^3$# zQVT~tzn>h?yy#cx%Q5;$aNj`!K7iAPWgg*tT@NLfAB45+xDo%fUdvv|%p_!Iea;e| zm#*aLqhi=L&Yt^9czUh%hzK6|?wnx_*)WBD?-lk*^9UB|&Gt!nGwq8qsrwAEk^brY zJIRwOCWL|{j|}#0)Acu|O#(@7rQNI@*EYZN$bzkmfdq>>;A7aU_vqITaU|#Mscbj% zoF^<3jb4Z(?n$oo-m|nFVlwD#@#@#PmZ9P1gPCyg5g0!S56b^yF{(zb_|{u1&${tT z6Y}E8w(DG+l9R!kUd@HO5ubF;^<`3j%zBi=r{vo&zrXAsN?F5WUnLNYvB@suPpB45 zkwD4xQDKb{mWt^%{9N~4`t`q|0i0Gi&}kK-)hn$*fIa{7s8CEZVKUb8{?1Fn0JHpV z&K2G%NpVEtJb1qlnpX0aAo4mB_;%Lb6!jz!hPWg8BI$zS_$fGDsrV+{VJEw+HXQy) zByq@zGCF9r;ykpor`493a9EX5?>#l)=#WNMtFPtYwY8P#y$%~`;S(86=q`7 zJbbs8d@seNoD$E$EdCBgJAb}&v;1;YUL0uraP!RN&F3Mb+qI5ms26qz$&+P3X!Gde zogv<)!Q(1rW8{#`gk68*Qs>?waJvpdkNL?OF17)yt3f9-2C1O+gL=xVMO5MWL^&lM z4YK%wALdgn-b?0Ni&5Lox}-c7Md7%L>C&Ux7mOP}VM0TRwYL|QD(M)F5BEj{c3H-}Aax>Nay5_7>D;e###O6Uoq-23XN$=dwUG)?)*clUIYBmQ9zi&15_x_Y;iS~wfAgJN8Hfap{s z3%a!UF}5X=z53hJ9`d~4U!ghK?Rc#9Qn&){FYewpCRT~j&nr8Mwhu!Hov`Zhm6Dyg zq9_lEoGI}dB($#+3tQ-MVa+{5WShTP`B9Wp#hc@h+Hr^LVKzywQmhns5>oJ0-+j&% z{^>9fUYnC9ykH?_1A!{0s%;#?J}EXqrWN~Xm)ZoAE=>LSs~Va(o=X)Dj}Q0vKAdDJ z6dLbNOp<7G4P#CmiLT1i5Ze0ckDy9BYKx5v$<|cR6xHNGnA@JQ9rsS^y~%5g>E=fN zy8T~+@*cOx79SM<)T_N`Qm)(gS8422cBHIY;ada(n zW@>+=out?3o8aIa?MxEjOjkGh6C}R*9(B%6{We3|u%m(%l~qBBZGuAJ)~Ww=ezHM?H}B!0 z+lC-kNo3v|E%2__4Y{XMvn0e6Hld3tzq(+1r(O8YcI6#h6C<9czmL~gf=C(c0s4z; z<-v+3do0a}EmxGgVR{Bx0%E0c&x4nM*lCFmg@I{aBgYWY`#fY-(?cN+!IddO7>ZEY%nE1^=ZsS~< zsgx1-EvND0mqTU!Z#n0yuMoN*_%C8LM8{M89g4>Be04C0e5MjV@PD+>f4*1EZOpNZ zDincU)hPt7Ad7b`Iu4mZCUd$YpAm)wt?C3L-7rZMUuM9THPO7YJsvJ7ey%-RHcyCd z=DVtkA-q@r0#~uxNNLARnPnlahH&BIg996g#-vizIuF=!8sQPhkSN)?9oW82(dBbj zl9!iD-Y}WTnrw#|Ah++D;Y3wMZ?3x#OY&w!2~Xy?(U=8g0(f6=?sxXZ(W*FfvWp_TEMdwzKo*n?`WVR;a}6n9klF3Up? 
z4xf37%?*DOA!do2byOpeuv3Rab*76*%+=6lVh>T{F9n@@0^$Ym`#SV4%rqY|eLyaA zZA-Z+Gd!V6{ZyNHAiVquNJB9l zdNbT+s<8i;cG2tyM@(NZUKnWlLP7#c{<(7~+uY~RM>1ffqwPqg=y=Y!q#n#w1yz>6b_k}@d92<>{H|3``iu4A z*t=B6$!~raXG)J|0>T&vU}DP+NtWrj*E*#=(A+GSw}IhMK84fV_l1MiX2h8*hz9=I zYa|!)$$i89^a(Ix4?C_@NJTnU4|IuWqj<(ywbiA@(M6W~ur&%hMLGiWGnJFw4DfA~ z=sB5$NI2H7g{&GwYT}M*jV+L4W%oWMtvGMT^N##mlRu9D;lhh{)+TmrqrgjKR~YHc z{9~cm3!Df~4*uDF3VzLK#Qzptc}~Z`;elj6`jseFzY>}iEY%-in>u`JFUo4?k>-Z@&lz&C5g9X>w!ZC z>WI~KLh_iQZ#RbZ7iue)X|}fceB-z}ai$srEbdbS%wC6WVe$C3+*kq05s4Dw0>b(3 zW{CleMMRCJQ9q##gsHq)7Bla0AXBf!%5bjA6f^|s74yY8GM#YpGd|r#_|3)#=$kA$ zQ7(mTNgYR((o8+Gm}HUo<4EGbxDmMBoJYls=LU)D)6YS9uQ!b(qhYozW?0F2xHf?k zryvr{5M%wR#t}!!FiNK~sxwG+$O{ml@INp;-wM+k(lE+n8%t za<67)!Gj~+V86p@3Gqv^6GnaSMDlamb<9-3J)@xnape>hCV0#;Hmhn~y-UDdlC$;S zcPESYYkaA5(-Y}m;*_=HI9_LqckgpD5BQ-?HXPR9%0g#*rcguM@5o0Yt553~?P_27R+TI#O*n#*xMvlE@5^Na0{wILjv8{Q!o>adILFfQK? zsdG(jnHX!FX(_wc`Gt43N9JhOobMxF&QkdE$UcvoKR`_hwe8Oh4R@h8u-!t70(`$3 zR|17PK9Mm{yxvfp#>#EUI3?PQS}<{l%5vU#h2eIJ=j7|sjqL-U4^IQ7w(oDtuDPWz z69xpX;Ms@rJFV~GRE?Zyh+&%a6nkz~XxONy;*5z5R^PByn-!yY%(&&(Ks$h&5H(Quv5Jyg?XLt%1Wu!xk6OhHc;k4zfOPuq;#On^l|M<=INx zd<`U3CBk){{_)4v2JUtua9T5ShmUh1?46%vXXMbXsr}=f(|SPsCRZUp(6bjMl&Sw8gc2L}La=Pc zdtu|w{?CUeWYE27+7RSAmk`z$$A98*T~W{7WM~@nJv~O3aj{DimyJr{=ua759*22b z|NHsIqM|)bNT0Yodi*9h1r+kt;aA?K`ui{N-;_GgZcNIdCm}H=w-^U3rd^;{pAZ=K zjU5fv_I7U_{;lr(_1n6Fz%e1km+~jc4^Z_reZU+aR|fYqsn>4*dl3HL@BhC)h%Az= z6v+yE{QtK9{l^6e{Zc{XA6Y3`7u)at=QpFT_%jD`{9e1|A}_NNBz>o?Wz)*A?qSU? zZQ*Lg3CC|QyL!G}&;S1Qj3JtVF=N9hv;y%Y#mbXAgWT=sD}LvB$$sa@bf335a#)rr z%b}{ox0$>?YwD>`ep*UICy=u!M&@*mOr7Zwh*}Ck5I>G3J<^a?J(hQB+J}R zfzs--@EdRy*2_Q+b^O=SLW~F`|EGY&Se{2am+?@Q=R33ieHeVw1dB6yyPt*Zpa46M z@+GanPBidm81`+oBsP~~S+55GKW`h)HGz1`FVo(&Er@X9CdgNCZFKp{whq^Q83 zuUMiL1+J-9IZ~fM_+Ez@U`!SPxc$%+2t73@?e3LXE7#OK<_`lQPlrFn19ne*KyLku zq37D$zJs-q)XnOtl3104^dH$VkfZ>K%N1gql_O85PA!;re*6dgBFj36=ig;rR=H+z z?T;hkF0%PjJ*06%()ir;6DVN-up1W!)-*NS_4j>&bwdO)CJ==SJ|r0c;bx~`1dtIr zQXpqddnY5o=pWL?A4dx$V^u)S68G}%aGMLTF2jw=Ly$-IsG5*_}-}4HoAh((C0H@+tS-r)Ie;#d0l`Lsc7>FO`MZ+m*8~fsC%)J``cK$0~miO#Pw~Dm!rKVo+>_3Kg`+by$J|PujY* zNd2{lSQTh_5u5PbI0r;^2d;lxs!7SG<@XT@wVB>~ppZ|3m2ENtt>&x3P(jJKOeX#! 
zGw0#}7OQ)4KqTF`K_u0N`=xy<7Vov zYey9aI)*t$n;+L}eMGJnGtw4shM(m&wRsP&Jmy$O_q$|J+irzprR29#j5Q;4D z527Cj4rR1`OUv4n3ArS;+EpMd>U92~J&QQ$6OqTG-UF1y|Zt``!2h zjr@ECFBbops(0LHFGZFMrK{&D4C21O>DPIKjQlpoy7OuKV1##HQQxj%J26%@nM(fr(90EA8QjOyjr2BlViS-t+IAOCEd+E zNrSoulVu48myP!j&t`KNwiV4t)hkM>)I-0C2Z;W|0?>anTV?5kD?BNHKf30X-c)sV zn*Nu^@e&ZcyFLEAwivK5-HJ?ec}_{hN*JOsiKyp8CJS>NA6e?SllWKxv~ ziD3wFzTD^J>w?;mf2h0-ImecNR1EN-RMH%kKzydm%Jz|W8YD=1fWXQZO=Z3bXh0*+ zQTT1P#@9v@9`L~HPA+#@8#z0HJ&b0Oyi;F;tyX%c8K)ibD)W5u8}Js3#(27CyDP&hsBYuyst}C)VBkgxKPvkO@fxqTySfS(>Uy zGdC!n9PAof;V<%cdnXwuM*1XQrB7R(d#nhg?Abr=fuVIGM&09R6yHY##3AHe z^QY9bAg<~fZ)=9eW~#~uAK;e8BZ(WRNxRd*g z$I5Gd0M2;CQWW|53RltH8BOfFbo)uO%Zs{=d#{JdCZ^2nN*Ey(m~Xi8*@_*gf7x?H zg9B-{sifE=X#)GC*vY4`7wO47QoqpaBzJP?Cb)gm?XNDN=0)hV?UHKzTBn%B1vf#k zn1ua~=e*bbEMy95sjim-v8qP1Hnk}bPfF!0x3D;(2k|u$^Fx68xy@O3y4ORkkZjuC z+z3vD?tROIhwpNGf*zaJ^CM`+f}DH2I3f)(?Aw9*FzPT$TvBI6(Q;L(JH&Nhmlgr{ z1J*CffIt-8;n*SZl0;zKvrA`im90o)yu57?<5CbS(`Yhz3I-Ue2AKU5tqfFT*b!)c z(Mxf)KLT^HL3FgWBCCP)ahK**!#Lg`t)5Ftm^i487KRnM4Q(`F4g|1VMzc$t)RyZ0 zjbt-72qf7k_u8FiIH3C^%F1AWgTsiXxN44eO{=svGO;l@e*?_zTt0lh7;MwF-!1e{ zk_{kN;F{M4I-h7xb`+;alqQNLs=n@U+#uD-dx0)^P|#Y1wVS3+V(ViLOgxNc;RK?= zJOl^!Z$hE!VJgGiy6nSr^7tI?R_G#_eF0u$WG@=!5G=YLTWDUrR00MUy;-|?)rb2GSk}=n z1dHfgUeRHH>Yv?_`Q42hWaPwK0H*uof>v&QcnLDJzB_36V%FdK6IcUyHipGn4?x{% zS<}H=ULlY>Hem6Kq%$lg^h`?SIUIt|eFS5B!n0&lqLm@^mb{96OM0Kd<7Q?ZN*`Q+ zBYs^2tgJq#goyKHg82fssL`{&?Z2*A)=+*RP3rvXZ$)sXn!p6^gImupO7gIpD>;-l zI}ra0AwVpQ3lC&8{rnq?)8+SOjfe`pvmy%`G6d|Um$ePbBSaa~{rh0QsJ+@^tUHX10y`z&;Z882%+{|GwL)Yd zL}dNF(WtBuYib3?Mm~Ah<`{gUxgjh8OljFk8;7(O=Dp;P-_PLqY_zJGNydqpI`0wkyMrK~jZVR8PTG-g8|=YGr-zB+6S1WLQ%l z=sbKIKj(mNy~ybe*0^1pY+J z^iFdl>Ag;5yFpjo0PV_UnvJg@v8-nvwMROVu`AC=_h?(~Y2je< zaasOoT}t6!B75m`bN+7PlUyv;=Whh7oQQ&(=BHBHcC1}ds&V+>iMvwDjK@lzGh_St z6^?%^LM01osA>P8a9v{!nMlxT7zqpElEW>i5jNY{R*i!AF^iUHB{9NAfqaiSzExoD z?Ds8=(Kd-^>b&6-K&Z(fs%oyvk4IArE#%i?tGA@7z3R;W}PF8AmOE$QYe@l-R#Hgl@7vHPDf6y+Z)1pA2CadWn zuop*w3GN_NV0>dDT}^RhKv9i=yX8mJygp~qfJyM}PCd)LG(c%5I|H>^!4RDyh=k@-LbSCC;mZ#eRZ zkB1K#hP0c}!$=NEld)+J=$<4=%uwK#T`3jM7>aLv7QC|KT6HBY603!CoS=n7EQ-L^ z2wC3?vupDr#mM35_jFn_nuICsfBvo5z|zTP7r;T&(wU;N$8sAy_hG7>8`n>q7=}9N z!>Y3zp(eiex}Et5ULM>WvABON9XN0?TYjzS`ke=VrU7y0<}A|>{F2jzfsxe2IbW{Q zHmTXGlpihv7ybh^iV6aopb1lpcJ>(}^A}~dL@ES=MAq)nuK zP7Zwe`}8_Fwn)O68;E!GZS4(EMf+p38~ky`dF{W@%RGJK3@xc4Uufd)=!&u^VDn}m zP%*bK31p%ybBp0WzfIA6<5mhiq%bBB!@Vp8s({c|aF_aw(#;z+n}fpgOF5*^TwuHJ zNH$s%G+2GL?MJS7FHX6AG3s%d8nCx~Q5KJX+>t}LYgRPZN)(wy4ZhZ1zxNZ{prcZi z62Z#L1FYrQuU1?ard42;bL~3h%E$g;$nkXUr)*D#sDsqFZ00{X_>GhJU)ETE+O5Cs z7Mb_*xPWW}ON%O~qI`1hjne8u1FkN$e6pTbn85J@11r<~?KGrCl7k(J@$KE&4q%dd z)ZNS<@E94gT0l&mfZ(-J6o2_G3C)YhdHia|71a^WowJwspP0>`Uj|75^U^l-);loS zHrvZRmurQYtp;JWnwYgI=_Bq&N*;TBOp-(_FWnk~;kvp() ztuv+8_!tpv&OUb;u2+yKN5tfb(y7WbYC23*wZ1Qar@&+hQ>zysAPaTRX0*&PT$IDW z@vdt`jVOj62?o`KAb3x%21Lqv@}+^QJo@aPN{?cJy(I+IeiXHbWRu(rOlRjG#PCF#!Ti+)CuiAJ_}n3GM6N z$d$4%Ed`M&9o&?$tF2L3kso#<`va&2n%6d83SVtmWj|p=1SiG{&xtxvVF*TXIah}tB|R1@rJ$d_>H5@846 z$8j6ZiM8G%*d4nis3f)#l}D~AyU>~w_3S0L?RfF=5fsdTF_wko{7~JhKWnolzy7US zeBq{*RvWkc6|v1&>1FTl-x)cMJY?D=X-Y`l(m|U3j$Ke5o4-W%(=6dApH=-UhdzqF z3zQC{+y1JBD<3{StZq|k_OiIaI^>x+>#dMgRZZov{4X1iEVydRvO@PBJ{E#X+M)D# z6TYsE(jTULD0`X^Nx{kR(`v3@tudYf6Pro1q!)56%}-mBnPK=qTT8S$Uw2Sqpm2_HczmKw*W)^*au(xTg&ed^$Egr^ zkZf!fBuKCIU7KSG!ljE`q_ww)k)c-NxQbn5r{09ttA>w@_Yq-rc(B0-9Nv(_Qfjmd zJK`iU0r)UhQmr7-c`V8@pHWU}-Gc|oLoZO$K(6WWlC@_?1$>KRzr#jhukRJm724dI z#=H~mK|T=b8LB2>KlNJW3HdfzS-U;HeY<3L0mbM0=7uZ9O)AMBL#6k2_yYnP#Tf4Z0nFqZ0W3<;WRU;92R(^J7=P9z zjsE8Fqu%O?82<+v=J(lBjB3#{ 
zTUQA+=}$yy3WbMhdh7YjQWH1bPliaaR|eDPX4jQQ-!H^BavLlH6vlN9jn_w61_4C8Ktk2jM71D@e~bQo|*t}ADU9!D~V7dzfzP`o<6Z*T^c+TJ>p zG|*QQ*{wt85R`M5j78E2bjNL8VMewcuLtEFa92-ck8(2$Zr3y4xIDoaw#XeEatvon zb2mDPiiiK8e<*KaM-r-sZ)Oh9(7J=PU)(RCOtXA8}zWvi1tpy-(K`(4e*{z5$KTeN+&pA!;WOcn}u2~%nbMU;DNz$ zi^&oTVa_++h3RofO4{Z;>{v3@KxMs#2P%1F;PVV9WZ(8H=7{D^yt=vj4W{_fAgJcb z_7$_j7Y^XHO|LDGyx(!-AI%czv>XPszjMhE9F#&R)u}O3UTO@o2Phy(_^yVVMP&Rqs9&*NA;7HyIx} zhF{8A7TJrlAdBD8-m&KRU|h>biQEW`W$j*9q9;el%r|R16jzxH+Oy>hF!auvpo}r* zu$1EH)9BJ-LyuY4aO$lytDzYz(F^HAXB@Z2{6GL-ML~G!~_Zt z+hJD?f}O96Ru-d(`7*8n5TdJCw|MskBCxOQ>m6!yMF(K&Zolh3Di#=_J95xy-a(_k z$6rFz+DZ5sO*0nIyLVq4Oau|8MVmXcG)_C~jg{5y-zk3+CbukDEkud%bkuCnwLJf< zn()Z?nEa#O1x6mtE2T(T4t--P-E!|AXeocP3-QJ(gYid%loQIHD%VJJQ#u z+~}5xmAQOmF}D`6R~vzX_xfl-dxav?LYruYfSbhI9b8`oFsy&gO%uh`t%3o1o5KcT zia({Dczh_7_iWbZ*E}J|(JmxrljG8GQdw{j*w0ZQi5n*saS8c-AP7zn)X5_MGs&s! z@+R;x->`^%I=_;nTkN80yvQRjj4Ku>ORJ9hl6LYF(2xr%t?%p#4_g~7qI2MFEG=TH z+Gk};WBNZy%c9d+IY_@&_) zY~o-(m{PIkqs9jKL+=(krVi+pfE{CdYWImkR)k3K{h%nY{;@1+OR+W3y7phO3;GnH zJ_#`ruo*Xzhndu~=p(p0aYIOPMyYHlE5>daQ#-FbFt3Yj`X*3K-?8~#NmegUwko1M zo5U|l26&vh$64@pE>jy0K1xG{dV_gJ1M^A8vIB`tuE;c!QCw=Fv{X9#|&stAOT`rs<`S|?Z6Fqb@Fxxur= z(3ZTCwG-eUYtmu0*TY8Yux<9i0kfS0Y!3YDNf_g+`dSiKVeRgP+Pe*=!8nQl`Cp4ze9_x>d(tALl{I>BveNFv+?z|eNCAm`TRgVbGMomrv8Nw5y$7U=W` z4@IH3HFL<^TJ$&5)m?jZF7GuW%_|xB=h^*_t)A~Pu2KeFotrPcP^C1eK{E{c$MKY? zBP{u16XDBfCj%#3{si4Ko+-5C!e-m*NNL=Qp8z;?>W5O~6CAGhRb4V@n0@@ZFpd#;Ce(#HDL&JQ+PEJGc$(StYw@J(? z29~OXZbVllHofX3LT-*_^L(80ysjLb)o)6Se2v&aop*PY@1JZQ|uXOc|YE~9l!tiFemeV&~&2G~?%8*%RAurE(=9;wvmezWhuho(n zsKbwL!xW9xD%>Dc+uquqCM44?Q|@jO2#a!l&XL_FEYw!y@f=;S`|68fje9$>_B&tW z4YRaBy}7Tg5~}BAlpA|V=FyxB(Su7578tUaRN)KnGwa9oo!=6j6;ZhKxbql4g=^3+ zSChVP{T;sA^r=ACIeW?e|ORor>z=A|Q3+Vo`rIqfHThVRC=n^}EC_RYkD zqsQd}Ymui3tbr3Q5LX1duLHf?lL1Q^UUt4wd#z_h6sC7ZEXvh0tD*gOeuSN@^J_ht-c~J)`mS2I zuOwyqtwoszBVljbi!7}>^;gI{?-?nH=^nDYK3w8$#B)Gk6sf|Iz7+T9;$Y<01W#VC ztXWCy=K~wvQ1@78)f`1$U_Bcero8IgCnD?Ud18qmseL)k&m(^B|(~ zx?S|ANWWOo;-gMFb5Yne)CwLg^^6SUL(>Y(Q*WChY!IKG>rS<~#pDs7QKpPuuQWie z$7ar8n11fWh2)|dPf!yQ%myz1T=%MY5mmT z_Nf=W?rF5wsn?_bAG?d9Nj1x0{|>@dzt|#v7rTUgc%#yNmL^!CuV-S9-_kp-_Q&0g z$9W2MmF|ZmLfq>_3m=4UO^pqu$oSUGj8VH)O9!^gVyx}X{e7R=<`^X-a2CifE~&ea zTr+>WZL&E!u=dvPoTAw)h*onQylAzQOU!%Jif;n~nyv-UW=JaXz zWfIpU;~Myx@LfZNuF|blZEX(gXOgkU@E_9e)ESA{6t5}VzL$q$SFL2ewXs{+lo)yZ zr&atPUIHC7qT<~8KT#Xf?98KN5xeF_+lpU8E9GPf&VNugiOw)SJOs@d9~{!k!pFs? 
z{`RedsuFSFMJ8b=?EO>D(VWE6JeX0W*plQwxL_0+Zw~d0yEZJdn3s9k^kO5-blVRND>|mG-YDQwV2_PObc_nK*xZfrXNmbqw#0BO^YELsi!aqr zI?B^ULIklY;TpJcf;8bsV-<*KAC84bcjg${VX}jvEpYA=;z!sn{Qj0CN(k9O{U+7C z7N(w#|8;&$2LRj4I5%=gj_iV_2}l979HSbW67$ahM0z`>Xy%RTHOMHJh%_;xzV4oitleM`G7k-hcUTfm#OMR#z`vM^Zoy4 zp9=I8_z)bhIv@1Xf7$}>jDxCDpW7NJc8s(vjbzdjQR!{r>>R*eLf1U;C ze=WQj5Ff*Hz6QHizxwBI_xCpI&%YxZ!?MGqo4|NaeC*GkXp#*_FAHQoj5ekII0N69 zW;4qP&NC~fHVSpA^cY6WGgoW#VA8wzxA??wMdOV+x>+fV&CNChX2R1tS>P|66`wOig>{~=?tw0L zIRt!@ji59}1afE!fa|jUkQpTv=#rm20JJg% zaxVpoq5;Tp@K0qtbQyYi2-@NuT|gX881lvYX}PQdeukIJ>YGquXspnTO`qW~Ll{5p zV()|S9{hJ8;>OgvW6uc)Vcva$9xF5+O9THQ1}$#JN@s6-|GvFmS;)U5ztR?fq?k|BXH3yLme(G(Fn6DQ{lMj9DFm#1nq=Q zU?xG#1!=ASx~nMFjrw+Q2q7KrzK9YfiwS`ILVb^b3&jdJvu&1veYZs9iORRHUkwNK zXn+5~D?rU2^7Zw==0>aQT^t7z>eXLCOgcmPem_7b^qp*!&L)UzM@Mga40xqLNd5}u z;Oi#<12U^gpWG8x0Q9WlBAECG0#<(_L@U*m8x-Rfy2Mkrf|&&$pHVw7~}I;4ATHBE7ub``?FLLmB#6MCPRU zKrTaPmFHM0=u3RAwMwyiL_8l&TtJla(8~s_K!kybVn5Xsf8tB&SE~5T&jUok8Iqr* z58jZG*ht)~`99Od zaD^1*;jk}#kUP5`)LQTatpHfjsqZitpjtpCe@46MFicS#?5|QG``qibHjS_QD}<;^ z%7K|YDXp3Dd^_-`k0gVTAlZ=`k4Nr_YlYU-rl@D*SAjQupE_PEW z{CJMZ`%OP{3sh3wytG2y4@ZooP+P?(O6Nl}p%Gj@-wWu^4mN>j#!YJh>Id=5#GcZZ z@tJqXNh^NA7herdri9AoN~4DZhTXIs?h|7URlXt@W|Cu0#Ab%#V@fsGbBy>$#j;3z z9?U$et=Z5m%s?J9fK}5ZvN2WVV7zJYv!t!h)-(GvF0w*0pXWGd8)8J7hbp!1MtioJ zGva;^EdPGD^FSsD9>%o$-0-9y1VNu&fnh{Q;AGxQ^5}aYdq(h)Ft*r!N|{f=J?2Lt zVcZX)BG6lD6qsHJ3Z(s~$`sW-Wtwk<*AES|-kWxh>_C~g)9ddIMBJ5kkx#aH>V6D^ zx};K?YWQRcw@u_b=*!***%pU0e21Z&t^?E~W;I!9({9JQlfPm5{qu5)T?yw=*i<=e4IrKVl1o(fx+yrhQcB?*!b|cRNzyKcL zv`&<5igZ+N%9>e(Dxp6IjH#GZ4Jp$JaTQl;qt?~UX7TOYwxD%Q4?RfWG-~Yt1yv{o z+Q&V#eEmS2cIjXm%>EJm5Ppcx`H}q!JWKEH6Jz~qwXQ$*HbJ#aNH%&KGk zxcZR;*jTYk-&8rlNq z$)ovSe+GEK@a0?H&04+DNM*R@3FYotoxrlC4~#l9MZh?t**Zo`g*LKhnK%b_kk##T zPy+KMtUxiAAB^f&evwGMwW(z5jE9pSsz}P7qdx*KQ_t__Z_HsZb4oT9HTrgZ!}dSc z74@6E&LQ53dZ)!ca!r1qSAvQ+AM&(3^gUimg?2lUw^t1d z@pUaGfz`@j{7(~@6-Z8SI*c7-=IewGhhlsyd%1)1tMKkI8dUp}PxRF6v=E5^1D-Vv z-TuQ%d(dHX*~!0Psi56yf%*F}IK5A-04kZse}9aldJ-}t^LT6hKfJXK7vRk(SNh9?6y@XgM=Nsg$v3Xq(Oa!{YESc4RUDhUAh%&7JJyoy2 z7i4gDteV=_B7&elUER~+h|e}ol-OMvW!m=G0QQ(8_Y(})?DCkU`m(9-<89FIp^YM? 
zSs>u5gT^B#7OB;qSQ4#htOmg6xZ0{CF0~e|c+#mNt&M%TC-k}5M}A@s`PJvm z5Fh4+qwAnb;z(TD{E{c+re&5mU?7Fg=u{e?Ts;%|6Xf$)Dn;Ym!1D(171`v7uIzej zoi+pZW9+4<(HSLbqx&U*K~nP-9BEe^*%b~yhXK0fhllt*9pVLlBxyq!m&-Dr#d-K25|3lIDO!3{Q&y)}ky zv`Q1aq+Tp*M$>oi)l!{Uc9Ap@W07Pwsq_ch;z2rpY~3?<{|)f|{q_+?&f^)hx&SUF z!SdHs3*F>;gAnHX*jJx02bi>CW7wAcDaK-56nzfzH6L)1DIfVyh^^(mO8LW)N_gVN z^3rG-hJDZb-D<=+>!&E{aIq|=Wv<~xU^wR8Aa_>FF4_iy!4sA{y{>+Tv-D3`V^9$_B8)5V{KRt-} zPR4R>Y6q6NfU}oUKHOT-;Z%j8wc!HjYFwW);*{%?-u;S$)dwOh&6*#VR>e%lLd7*y z!(nqOni%17603kq6g~tv^=ul=_<_6Nw7!uGN5*B;XSM0i`?usTQ(<<?_|nn@ws}K?p~cM3lJT>=0u2kg3cVX1futT zm!X{K?o$^QVw&Tyb_885?xzpH40)9o$tWLb<-rTd+C7lfQ=a^*)!RqwBAS7M@yG>8^^sK z+qiD60n+A64vW(Yya8yX1N0+iiCE{(n|iHdAA|JlB&`i4`^-q0(~H$}VcLl$^)bCJ z^V1f-UXG0^uOw^Y1H9S}GA_8y5R~rb|9wbv z!Nl-S5Y1}D4P3Oi6qBk0l3qNE^N>%A?x=`nB*;sufv7uQUm(~_yV98cnI*yHwTj^M zm!G3iq1lZDFt)scWBE;61;G@r;IK!VSbN(;dw68%8zZ8?kpu)T2)`wnzgU4I!%SN* zS$zE3g28DiP=|m(D0>8FG(l5Uj36uVG;rslz3cUPz zVmBd2+Tg4(8MS%IMR#&)nOpvN&$9))ewa`B80tXbR{f@f6IY1D(`b(9`gF75_*)@6 zpNY00fK^u)S4`uRU@ZDLPK!Fm< z(#U9ZnUe9b%9M=Tpiq6EeU7yHVBcW8KL(6h4r zkA0pQzRHMuzVQFh^_5Xkwrksph=ZUEATV?Zs5D5I(vkwwDIs0bEe(Q%N{2yrcXvsr zba!|2UH9|uy&s?bzTaB>V9jEfd+xZd^E}QYXr}qjlc}L488dK2cFbo7KV?ZycN5st726q<2J2bunF zhY1W*^@--h4!^v_|5-P_V9;Rn@XnwOHxqADbP|Y&c2AVFS`%g$zACyPaeLI(hA0i8 z@_Jla>|+tEENY@5caM^9U|x$bdOzllu*NeI2{}BZ`}p-nhL7VItBHOk$Taf6JOj|@ zJCYwpeeY37m0BYuw82zuR2dOCqLlf?D9SAF$<*il=rzU;*gC_QEL!F2iha|+TV|z` z7D#yb#8IwmgZTMKu3XKdgR#OAX)XOUCss97jU&QbtF$aOicd5#$2q?Vl)0nxn6*X} z0(Qc46JueY#Rvy9_wlur!+hmN+~|A~Uvlrmu@`8@Q_+xuX)p_XWjq26mkB>JRafbd z*nANC;Uwm$MgD1SUa%gH9D5R3IXge!?;Aj1iaPlgKl)xSTk2k7;Bk%m5h}1e>KYMQ z49{u6;5&T1hOz&vnoV>Inc?VSn0q&o)jyE?p4ykK^K&juB}QzlzMgbw-|6Pfv|1y8w^@X?+2CPMH2WID<=+((7m zl^eJNshYo;3ME?Q+;>%8N--7U-|$)@q5^0uS;y5TgoJmiQN4R2BTr?BXUuP4y}@#s zoa^(!!uq%`Lr_o6={{xDx=eFNCYgWbfXN9ors0ZmVPLiL-O+(J$r%%`n!xkk~fC@`KOix{NU))f9^rwuC-`MsOi_dHdL51z8 zM@9#GZKRBQ-x1f85q;ibs2F9=Zs~Z_uotFE)7;xb;uG-+OFAgMaD7q@z0E0YknV_M zN#SIs$%;$Pl%Hyzv~<@fVSL_1I+L(KGT6E%_m1*Law|^ly1IV2h{PGej*$(;w$+RG zd%xz5{fvT~sMx$7jrwiu@5d8OBr? zpZmBLP8qz1!z_jv1y*+Ci2Qvl6e5a%Hk6kS?O3#aq6>v!HgoE?|Hc&sGi`$Tr3Qq@ zF76!jBXx^-mPLw#UICBaN^KiH{%-<+6_0pRw4xaN?LJeZBa&@UUld*|BxB|EgtJ|2 z7RkzC++K|-(0|@m9<1bW3Qv|Zi~7ht*TnOLS7enn%)z@l_(aC$>AIand=9r)0=Anm z6;87M-VXj}ud=(wVlo{Y$Yn}$9HCsgBQtepKp zrK!XN#Z0AxnS7D3&H&|yOE{}&Fs9fWwAa6Ee)20ySL65~zqMsjY2;jJXtGKFf=Q_@ z4Hx0EV}3ZSOxQZmCe*Ac7#nax&W^G0B@MPil!hPZsSnaP68c!t>FOr%S8`g@T(PN` z78C04G;tEhUIi7{-jCmaT&(<7Rdh)F49$Z@Z-y;h@05+J)9w#BBw~7_?u+u$Cu44! 
z(_@LD;`_Mj9P$iYj?|v2GLx>oCE6%38+7SZ7=hj<6@TY(uyPwpAgP+`l2u9ddwm?tD@; ziHAuMAR`ob&5=29`(p7>n}lcIs8b^7k>2uz2Q{?7i)w;?Y~u^5>HKEOpSZ0+XvO=w zd2H6Y-X`$&?+z<7uCZMbqS@JfrT@ z~O zsgYh#y$8p3sCkX5`d>hjHw8TTfw#}?Hcreg-;+=E)yQXX6a-~UY2xm$mkds=j;1wU z@#zo5{fKvNHC|BdfAv|va~p21)IoWkNuq!I3b7C@oQAshd_Do5CQbee3FB((Uz$3t=q&^MeX*X`$B zX8nAGNQRFZy8}(yMOacopCCSYW?3A7N511u59hhGvR#nF<7*}Q^II1PF|phO_AEQa z$hXtaIz{u%?a;U_YD;@;>XwMn^G42ju9jblz+=UKswp2l%y^s-j5{jdfpCu&2w)h< zba{SloJHl+%-AK8mp@vXO@^h3 z#Si*_^{Wz?p1r(@_9p3fVS3r4UM1725XmtGf{B!tz4^Iu9!?)!BC>n_6iC-VHV8F5 z_8CqTeSG zX&!#>?PNNuc(D3K=_4pG_Q{qJPiJOVX=Z>ai&_}w(_oQl&hr#vw`6QLFRPbIA_Dk@ zNK#?~G9ZyjFtS5Z*&r0)+TWh%rYnNi!o^5_%W_6Y$+|tET@@&hi7zzdZ&D&2|EMsU z_>3i$!v+;INBUrt=R<8l>8dtbLU7TSp@3)^fMsazJ;#~}fI`g-WLkMLA1{mj8&U=8 zZ#865OJ*rGqe4BXpiG1@ahiM2D$eDNhLDum`6#ynbbADn$)ubCGK&!|+gBcd&IR2z z?D_@raWzBa4PBX6BMj^J9_5>7^4IwPpr-ze?b!~^yJc?Xx4hrH?UM%XZg1&IoeO^b zY-V!jERFXM*Bi_OIDVmY$Vq!fTH-j+ypnWN%HHj^E9OoLA=3#%(M^Z@(P?60@ID?< zHB3lzD*GC2&ad+vaJTn;`a5q4bN{BMAqjUMO_qv#h9WK4M`YcMx%8FyrLB_?zGOih z@?D2yF6&=hl%)v>LQYH)!%~Lv@wN{&C83L~r6q|n(r9`_>@fU>SSL1P1$>HpMxp3E zeu9@zIOT$OtCZqJ7!Ht8Z@+oznZJI_Kg>_F^cy!VF*;me{NrR&_O*t47nz+o1MHx z(+L~#S~^%PFc=x129uC3$5L7jT;u5&s{(gNj(*4+l5T>`31iPFxWs_ixp%cp2!<@g zaqpM*4%N(Dy-kn9qATs*b6jpi;{R+*6n&boU+0FY+XM6^kof*p$Su9lvb#xD33KKf z1Wk49GWb^4&OJ?f0BfKl5rI8X&Km1^q=k~0VJI|P7or#%90dxr%wp`r$&t_|yai=Z=yk*TL*fBl@2XRAwCdeV zhFfQ_uQK`-Tol4U(eMK~t$fJ}ek}!fVy{`fWG92A$|R_vJ0vFr?drbli??b-l}on4?G|cGn zRDr4dNB_5uCFJjF1dx>V&FI_VJ9if>Gzr9ZC?EfDGE^T2oUpa`B3wIE!X%dQ&h#!J zK7~_3X`gefN;4~A1FbgtQfWtWEFrsh3M3h$qE*K^Rs?~HvqCV|WyOf}Wk4VM4P?{4 z-xIuLJ^-(TTrBp!Vr@wF-HhBi$hitF{gW|N1F2|gxGXa4DEbayOusHuNHKST`z$cE z2@iRaUlX8wM=)hr(wE%lwOmiZ4zFMGSrDgCs8|Ca#%ZsMeH>a^Dvity=durvvJwI!T(T$zzCL8`gWuGuc@hcC#{nq@pcoJ&Eb^*D17B+Mh? zekd~3f)-?Wzwy%?Vry;3=puuWw*-+O{!o;f_ftw&_;mX{69t%lNnaW6+xo>2A|OY z_n5r)`|5Lt9O5NvD|7Cn)Q#Qt(t545sjZ6WEI#)NlZbQ2zl6Z%mgadZbAH^v3`Va$pb`>WcSdnr8`UiJvUyR7hnjkh}`uK42B*(#=_=ob)eT z{^|NBFb^I`345{nTl(f|Xm@n~1FIXa7>I@2aE3IQSXv z(xv9tEZyl6xa_B7GeSg51RTM!fQ7b3ea)?_g*p;G0-W zck2yDE$tsu5*6?Xcm8UKPKg_4wN5O)nDfcdDI95wX(X4PPMlUWRYkBhwMFdMts{p+_aMdZLP^W2|IT^6_2@hXvP>_GBu5V2!nCq zHc!Jyx1xCi`=Q0u`V-A7Pl1}t5$%ab;lwau6s}G6qmf+xhLOMzo#Z5Uog<^J6VR=X z_LSb+@;1(g^IKvqikTAH35a=TIsdHQDdn{r=sr->(cQulmI>3hOKhkuixWCG2{z#? zHrN(tO&HaQ zU0D!nM60gpH2z@e>gBipOdh^ZU>cq5So0+O7QNPde9af~m(5O2?zgrg&+Wt#i6O6dEF*&EVp8=MENp)`kKAe<&`1q!DBmBKMG2XDs<74 z0xUb`@2~}|j_-BjhSBRcFB(;JtI11<&`6xCFqGksbN9j!k~ifgAuX?4S!@Rw{Z?P! z4b;oV#0j=yd8DzFwIW{S@p#D?v(%zq60?4W`u(9)Qu~*+j|% zyN7q($Mdh7vT}~|jg{-zl7kCu^Hx6WY_&43GjnK8-x;yZw(M;)H5Z-OE|A%HK67WK zk1({;8*4Yz1e~babVCYcq>=&f6{1-3kdk|cwKGl{3 ziI>()(Ghvv3^q|3)O1d zurUIIEIHHaFl)=x3=}g|><=TyB>^_9Bj%<;q^~S!@wqDJ4>tkSOa&{LZGt^EmZ`#i zt`4^(5np&kFs#S5pQztq@j~b43}4ZmJ3Db8&R*#cZ)qJ}$P+|&k5qo6fY0E$x|83~ z>vs^VfSjDny-R{N=Z(y$?yjs4Q)c`#u`sx|xivv<$TZLVDH$*FJnk1N1ASODqc}xc zFfrhm7rmLU;U`&8=Fp|PD{n6md#lf@BrQ$t)9W2qjwmqBEjz4pp{D#{YH|PP6(D?+ z$0N$YH2S!ieQG*JB*XaM^(vDq^8TBWIX!lD$y3=m(9U+r^$uQy+@1!k>T}txn4FLL zf0eOlNZ0qbg^@*jUNl%4J{{5%aLnK|=+GqnVzydvJ9~)DZ#m0D;XNI#j`OZ*wF}&< zi1Pcf@C*IK*t}(W+3L%t-@?IyrT-#6F=@dL=QB)R5q%mpLo_*GrFC?0A!W-9l}0mf z$|(IShWmdAMgKm`Fxao~{~rn?nRr?>4iW!qi^}bsvcl5PaUf^b$){@FOzkC$bg{|I+l;F8NBI zmrIKD?x+KcT-mquqfJABDyMCmZ|bdc(7@WtkLBP`y82%SQqKvoH#k4UDnhY-r{r+t ze&#kJ1@SN=)V%{J_wP4ky&>H3x>>Fl8qnDQei9q7;j=6PcA@`cDum;N9Q0Qn4`i4} zmtT+6U2HrUi&wKMqWN2_(tU+z#u&=*d$(P&Vqp&$L+@QL!P0f`zldM;r_iCC&c>EN zhr&tzf%6>D(54`Zg>Gn?J8&N=-SS~41T9tqSG(A{-+QPV9$xk4@uvdQv<2hhz+8)G? 
zR;T_3!*hS$$c5)qh2#Hq{dGdbz{TWpzyJ7F;0sG~9om>JDae=MrnDF`@~HrB{b)a) zPfnBlfC;9mmt~N3Rz;;3CpdjuZp>vueXha z5?6RvxKh;2FBlw)OE2v$m1`A4J>C8JJx^XZ@daIuTeM8GRuT%G_iOVbDlTQ9rskig zb{cw0rt!0eo?}KA!(if9U%p)?pMfw4)%m^4!!!ZkHD?k#UwdbptW zYz-Bvk+>c9V~M?*-DVq6NvZ%QDun7AP`Xvrtv<*P6B3`Q!f?zdoi}p2^W2ZJ2BsH6 z1++4SBfCLLLO(--CPW64cx`zS9>&s6;RyR)v-&fL9k5Y`K3My%9)rxHvakIP=aM#D z6oQN?*G%*6?))p?|}i zbZh5p5JP498F;g1z-O4_1NiM9fM|U1&2svz$03A!`@#Dr%pIC#9z1r7wz*)06 z07)P0X$kgfK;lPEL3RS!T)hLnD?2ng$=`slO?Oo7VTxe!?ihI|k=>Bol~0tdelI`^ z)Wp;66ID}YZ;{$QBD=)?b}9^+ui;$5of43lmwnhL?l_r{H^EI{70?8G^$K#m#{C)U z|48;}G-a5mpq1SK&40OfLP8o>Xt#6S>mCO}KAl_0Va$JphX7*zvE_j?WaH2t22n0Q zz38hp*C^%xqy)zuUnD<-hVy1;^EC(Fq?g}c+LP&qiRXABoaO^*Iy21oF&i?S?(bz+ z0K=9pV;!h+wgE~TygXvz`++$c9FtZ;d$@v~W{{^|!pZ|8j6zq{_;zG+_nC29Z05n!w5AqjIl!2g2uJD8oLK_p{0dw@$u+F?pGJM{NMn zpb@lje7=U9uzmxM5~0Ji9>U%>e)~W427f&#tN0MXCFX~O2)1X3|A#GO zD4U1e%*+L>s9CXP)l2j(MJTFzrXzq{+0P63jfAX>I9gJ?uE)T_c;4E!^J&_19^i@g zVRk?IH)z!fv_$9bmVgV5``NecE4@7=%IvbwW9{X&!Oi=gj_A4|3_?t>|td`i6aV5LXBQ>l_{4dOq_YfR;m z`A<6e{LU0z^&Arn;TjG}N4tIWQMvl(SGEw|ACQc`HL~PCrSyM-NmuESMQ>;>r`Ih_ zAfrXV)&6;>fovIr3!GkwVDj6Un3&>}!goU-tN+wB9aTxsV^Gi4@E#6AoCB&>KQMCA zek$A!ww}&X77`J0?2`y3wTzc9!a*!-k>l^a8b!xNvi9PO#V@k#AbC-u2weOqv+Bel zJV8b$jk%J=n4NHvUeR^)WPXEiQDFX(|E(K@NGl$Og^eF;aLn!iXS=b z{#=FEivl;gQU?)BOq_t^z(*vk(BzW*#p+A30@qdTE{LYOKR7}+F{>6xxhS<|on(om zRt3`983f)f+~ae+M{ei%;P?Ad(4j=CyoOS!Y=EUGM-W}1_yI7~t8p|!{V5GW8d&0! z4ih)JRfM0VvuT(U4jn)kP*ln+2SHxC{44kWy$u5sd%)3t=)?}?2g6@f2QUgx12)_- zDVH`@<*GMcSdQ*X9mqxajtPx+n7EzjVz8qiIgD^xN*s^E=tMX$T6E6Z7AOoObw*`` zi|&RZXgnLjwPk!LA+j-L7-xUzUwLMlr5((V64*NEAerMV>0f*}|(k6$x}W+-<_Z+i&CiooN`S752)~v57Di ztVw6Sx00*)DT8D<=X|hwR{^?ftf3ERWS_=lm9^dM^k~)px7Xu8c2~h$a{4Y`u`8R~ zFbBop>lWC_A659PO;%Uioz?SE4&r0PDU8Yy>u7m-ki}v%6IR+pR}~5C@QUh=)NnF&6M;4rRacSWJ*rMv=<0{z7DDls0hC~Z4QbM&pH zb3M8+F)!T%3bnVsI1-iLXNAek)O)-*->1bDdZMAn-=bl>3^T{-|q*mJmPH*eK9^Sf7d!W+kFQVu6JD!&d1R91Ff>1oSFA-vyCLj zI{(pd{@dR165fswTktm@i3iKV9kLH9`E2VS?mRSt?B#HNLI9izlr<-)g&gj^s0w(q z+|HR~_D~VL4C)~VoBa+Nj=%C;A!Vi$u(VR<)qu;Fs+04=i6Auvq)))55GKGNr+^=V zy;5ah?OoL{XhsGhalz7)GcYJgg9}6Y`B=z)$8jkDRoHQWEFnyDBd^Y#Tx^KAS)${4 zd)o^q=wsM{B*Li)@YUX#&S=(ZQzH&%gh)w1vD!3C!{aP9U;oH}G(qfbzwBRfRn}>e8&6)Mu+cZB_>LYnp&D*FMZ!&JsDSGnJD|3VHYtPt0=p{ zdgRC7nQltfpU2JcrMj~ui(|dN)ZDf@clt1ugN-*8S4GEDl_{`#d*3uo@taz&{nlC< zZ)w^U)HtYLSC1yVNaH)IK*35L21t$qZ(Xx!C8+l$J?=gG()-_?h?ZdejnkE!GZUtK z(@jC(&9^q3x&?-sBbQ5()$5zzRTxaBA6d2*K4JdC3=Y7zQIeuv_QBLB`i_>H6+K!_ z@oc$x+2SM5#|v2YT~C;p z=}V^=W5{Q83K9%$gPNv=MNeY)&olG)@6m}be3I5wbK+!`#5BxD=_y}v{&6Gx=cdq+ z#B84*Q_rka7z6KhBD2a=xDqNVJFS^fE3e(f_VtqW^1xU9;b|&jNj-cn)wgcW2vX!if zd%;sqzzn!xZQNs);-beG1)C&$0b;ggHCyKD#M5gs4km9&j?%^0tj#sb<*bLf z-cEz-;owl-L|h4ZK7K(tPKrN1&Qz|+tO7vF3?KM>!Y;cvV=0MED@~-=Rp zFa7$Sf$d7Qaqbv=Ms?M2j>Rx%w<*8jQ-w>gaQy3es^^6xrg#$lfT2z}ITfp8|;bT436XtjJ@F>l;$_6}*Z6Z{IOs7eCwKvAldkoUQsh*w0m z>z#F3w{-<+vbS@r-5=Btxa@RqRgk;)Ma15sO1po-J))6tJ&2pxtOXJ|`!z7zo*j>m*$(KwPNXbNRUu% z`FO}dnc9l9!87)O=r>w}mrx*jNonAM9`Nkl_GNT^i&mts>!P)gHAC8GZAg{@aDX9n`421+N7o#3m zqooXy$E+c}8v2|uf2`ne6>B8S5G5jG3Up6+v|s%znuMW|3+B_(T*(Kbs-!Uldi9+J zj`=#fPuc{%$#nOoA^l7@K_f*M?!!{8fhSohSS~(#bNU34#W|#u*)8$(NE#s?))&^L z%_uUCPC0iS&mtJ)e)O<~c3ld+xc~_A&8V@#T zjFrxYg4~vPK#^UgLp9jjC4Z7-Hqgg^9-r(>CCL?B;&d{S&o@g}%|7HX9?vT~pTsX! 
z#@Z78Vohsq#?3;`{-ljFTxI%7%JL1zItI;2QvQ0Y^`EPM{W!E&msWpK16ve363Q8* zt*mqa0Z8UAjK3Rw@8sKji(%^BOh889s&=R2L*Wj(E@XOJDq=;?a+R>-nx=W2lChz& z!aW57K4&yReXIp{(s<G5Rx!Zh*<-@I+%16$$>Xn$-zgo|80TF+so$ z2nsfPnFvf*?4h`LT1Uck=;7M206Wh&x~aPul4Ei*8~m?s(vw2gYk`}8+3k|_?2;z6 zWf0@M(ovi5o0s8%`#P7+CceoQVt7EdHVOeqlI`OkQ5udQ-*9!Q31^{60|uLzZ2;z2 zI{@3}I#fEUu?o@XeDUHuS3Y2)Ye6T0P8k3|Z_Cw(zzs@ChE5{j(Z~@5C-Q*Ydv*08 zh5*9x*aOg7ohiNfo^Eg0U}b#Wdvhs^U7YPJW zzMQDLQg)esNnro+w_<0K?wm%E6wEL~3@;4Yg5|pwH^FH8T&N zU7HF0XS)UF>nRe;qyp3M>)E=M|xF?|m8gdWDjGwbWP;>0AUcRZ0 zF_VOCetSNn0j53r*Os+g;?PSV-XJ(GU3U*Y0?{*1;$CExE^E3Wo( zF1W~yu-jYB@t5R}P6vo2i%m5a2&56`V8xz1O8~?j>CIL3c(_yMB^9@lTmYP3{RXnt zl}XU0B}O8{>~NP}B2*(!pqRyyf!hp`lLr+kVfFXw4%Q}%&Z<65+k_tA3A_tf=D;{` zVX6c&x?Gq3Ag!VCCqdOl~5IC{=uR%q#WIO2JFVgx3?8@-$!i6 z^t!%t0FwOt7R&Rfh9j`VpMS~GT&V6XzIe<>BmGT}Bb^#$a+$qM6<%R7{${2!WuI?D zDP}1qTDQ;qW1k+$LG(6mS-D7DtSDliEsq8y-qTU}VD$PM%}Sa%etN zXb9(93H%_^pZEQsJwMhU`u!7(ApD~S#kkj{<2zB3n&s>g`edobE_9Kn{nrmUXFpI! z27k;V-O0WfKM~{K58s9~$Y>Ie1{ngsA?LXOzkcb(c(&VwKIcSTbNiQ+9>tL;mM#+$1XZ;q;Tz2?-$GWtrkyfyXlLPQ;k%`5igHxSVc#EO z`|)A-Willle3L0}pOKBRQTgJ_whYhKbL*+ZlvK)i_H$5zen+KVp9MvManUh(a155A z1iqXl+oUgp?T_vI%H@q-^%9ROK`~Rwl+*iFW+fYkN?v`v4Z{^J6Yw_V_V#$NL^;?PFMgl@7_P$<$wt#{edgKFp)q-d*rJmVHoI8f=L2Fvbcz2Y% z73yk@-pq~YNm}JkO9m1F#K%ot%l~J`yR%BX+x_PpR3DzEo$Fe+WVBs7zoAv65lW;5 zH)OH0c^Fe8=ChLSLUBF(cbfFNJtmxj>HEfmbR*?zMq1Mn@#KvUTY6OwC=;;aU_v)X z+Bd_(NzbxeBxA>q`YJ4?I~KDsm7M53hH<3|)Vt3!4-5Zn{y4cj6^>_R-1)?y@5ID` zjLxh24YB$jm+&}lFSVl1>Q!Z#igQHtn^%}FlI8L<+XtWI z+Z6r&Sow8WvB7p8W6EKG<|!>&|KW3qoU|P?85<(!CGbui5oOj)i_hR35h!F2W2`Jd zW>#zuts02P9RhKCye@nR@6SzCBC?vl9Q_(JN#m2rr_Dm>3ZpU&F8VSRPRo$cooj5c zU`qRW1uPh+deNuQ8B9@4(iRXLecVfiD0)9^omBF3qz%BZzCrl}&+=JBtzQ4Ym&vxpzNGUZyj; zKjdRqp4Mc~E2#(ZmN@d95P)@~t3`?KB6FTx716UJ|6DQ?qHnXgb*c%!BG-;S);T z4q44OHjF*eKNpyNXo2A!Y({`vhtl(!%z8j)hHbDT;8`jIen8R46SQQxj^hRUsrn(E zX^JuBs`24iwt(AN#%!VPhe;E$&v7S@1NY~?#(u1u&o)oc&d7ORC`10)8f-Nu>{lPu z+%n^S=sC4y7#?ZmHb1lz$WAL9Bf!!2)ebaQ-7~tEH1K{pR!uRKZc3s#`>^GQJ4aBF zBhTCs1zv?Xwr$|6)SM|`h_D8^PnIyYpM8uo{<$vAH&Q9t%pYlxWv6Y9+xk@O-|Z5> z1lT<3LTW~uzi5F1u>*#9jD>9SqMPD}u;&-2N&6SCnNu>5ea^A4J?8PR{pN_H*t|XY zZ&F*~R2DXI6VGWW-RbK27%+N9=W9~QABdpMDK~bg)5rdrUxkXZt_>!OGyJI0A2Uw; z)C3S8gQn;iMP@ZntegKnUgA9pL2Fyj9$8&er+xdj-^NruCv_0ll;H8+$6ZB1`4rkb zx4HJicD1xXQYQnsr7p*~21TCc)~lsN`#kvP#M5S)Qi^8+N=u`!++81py8yEpyN`C? 
z$9>H`&fl(Ul>Uh|Wyu2QV4YRwK6nuuhR`qMOkd;mLAqldiPnrf@&580`HFbk%AcAj zA3*90q%$ZpRLH8XW3yiso1pEWRO$5t&X$W1hw7vY03u8bkewqYB^Q8gzr6z>rj#V3 z+e&^ zHb5LjyVoxDR%MUu)iZ6*VXlc`tlD=5ALc;2K3U3HTHQ%g3*|EenE=ezR8V>tts&2c z5Iw-OF}>>qz~|kS1{x&I+UQ^F#&6%))s{v~Gcgzsk1OCZGs=nAoS`yiuh~IhK$h4V z|AuO>$x2qRtW&3gpIqL>Uj8S}<*)a$bQKJ2rMXnMR2SpEtBsx-+6g~e>;e0KX-V+8 zhAQT{AjOFvNA(X>V}t}A%?VCxsPj;g=*KjAo$EgltZV{LDMfX*;y?fo=@1~H_~TWL zwT$=T4WbjN25A8HfTg#bx9HDTfdaL+j=taHOTPfHL0>bd12E z2nzdH#JS(FM3mUTL~$i)?qhw1;jMq(?Q=lxgp}+#rc}~q*)}QWR!}h6 zMWs3_E)Z~ydc7D56}vhW{Pr}IpGx-Dz(Y5_;3|;nF+Bx}TT2aqWLA~l9n1NVLXXYM z=<>U(l&xgZ?G68han7Qki!t0B=!jPl9bKUxd4wl zpo;{U*p8Qpd11L@5RY?vn_Ot|9Lm<=J+q-hP zHz~BEs#a6rVbJ853$Q~N_V33{E>cS)Kihb;66mr%6IC9-wnP^xJ*#Ja)WYaq-~u+Q zA@^KtR@g&`@fCMs!t*em=5@`kFQeJ0pNG|U)nAa!A{AprABa$16pM4))FYi3Cv8ZH zH@0)AULcNZfBP)5^qll&k5rnd-@Xe|Rb{e3MSf;sT=VSPJN0;zx7JHEFl#roTI&_y z!IB?7xDt@uIpNUEy(Q}c1;0!Qm)%_N+-zT-%aBZhz_^{8>Fo|ImNU#_&_p_#PH-xo zH$8RrDSxIeu)XR3A7~4#)eHbw$?D=L053|ttiKg`7ex}WfmRUxbgtz^c>a(>kLpIi zcdM$2sa$D2!0)LYOr=%L3rO+yOn>QdgT0D!FfazPc6qKJ_^1l22>5@Xz*rFWl9;Ja zY`Kg1!7iB71TOVPmssJ>{{5nO`k+w2Ov}ruI|M6v5zqDM!h~#v)Ua+$Nf-NGI?B+# zL!hL_PgM^oNmO*Ywsha2&Lyowt`nv5I&8DlvwykS@)Cd z@PXLt{#H!|ehNy-63C6jJcr)AK)AjDKhDZi<;@ahtD?I8TmMh&3MbfI41zRhrD}|w zkr5LB#*ez|vG942eJ|XPwCkCY3Y`z#sX0BRf!^MV`;gMqx2DQRbr%V2pDzZB*&4H5 z-Vj>n=`D@o)wV9tuKc6jjs)#?#8S){z~Lgr>&X(>0vgO`1chyrPzF$9FAK(WMD%CZ z;~UJb&^IsU18*03vi@jMPB_KmxDh7ZJGu&`{m3*=(Fcz`mpPR?+q#G`(01Kd9XykO zW$<>jG4Ubcn`Rkn2BHm~Yd`hq)Xh}pGGJ@T5+?q6heN{1D$&F47149@*QrKUkwn9402oq6S|KF0oTG{ME49i&|pgAcZ;PD3U2MTlAO~AVOQrCBQK3xhC}sAj%h8aJ}PoAbaHLG`T`&H zbk`BVHO1vuPD}o?$`n!pwbpY9qz>ltY74p2gxV>pTg4$Q6EmbyJ;f_xONQe%5m70N zY2QR63if%;U9UJZiQgnX%l)GWCPM~INr#xUX`H=0+TZh>KWj~Rv>rePm&@OOE&(rh zOV)o)FwS8b3G%v>j#Yd9RGUq@%bxSdO1@kpPU?F(Lx~o3TkT#R?zeG^ORN;+zYB-_ zYvk*j>X4&e$@hQFuHs^Ekpn}SAP#6iW^NwugcG{B5p2k-cVSy;tiy(57JHvX^Osi_ zZb_+@jK}%d^SK4X=eLP+*7S*)OyRy8Pq|dwKQ4`ZA=ecVD&bU9OoaW#HaBtJvE=Lkn8&5 z>R+!!Gh5-%h)Pr3;zj>>lUC`7E}nxOm*#qj(c^bw6|OuhdZf9AyeD zPJOk>Y-ueU)w9QmDT^Z zbR>9$*vW7A{(uR@ z&3V?pCnctp@ukmCD7^HqY7ZK^tcD0&pHh?M;3Oqu|IuB6$1I-< zbc*9dN73>=^{Es(=Xlo3U?%TI8+GMm6G0O+k{7`iYI}X+aZ~m~g--hH0vgLrEipK( z9Z%ZyZqr*jNtS_~tCz5Kk>3yVJ_q%MY1~h?Zm;2i$LXi2vMS-VPqhYp@?52gaO$d+)EdQwoPqtMc57;y zS)&yMtX4v`$JPxHOdy^j@2M~87AVrCY`JRl#=enpzc5}xx>(SEj-Pu_r-#5E{P)((u-XeuFK zehS=1bMGRcbWS_v>Uo^FfaeU^#ZYp>Hk6om0$|A0BKJNJ#@g@~$`0z63kQg&-*z|Q zjJJOkEQTA209kT?22owKo`!(yzXj&*8@5p8Sp|lr`)N-Y=a#`d!SDopJI9y2pE(;= zfRlll1AvmQ^aaCgvnviErso8ZqgQ^B;m(j&#$EOMh$<@Zy)8Bdg z$Z*OrM}DM)>0Cwez<^OJ?s#btFM$6frf8>OBsHVQGJanq|%gjiS6%-xs4T7 zE-dn!F4(og9AqDC@q=fm?7)z#OO-&+V3%h-J*g5BtR@`LHoPO>+x)XBgMbz)P%+x5 z(s#UpnYklnD_DGJH(Xa^)=427luI1%#_L`uWu$W=v-MKlY(TUA*CTdl0B%9r+QkhC%7gV06A z47ha!iqTvbFae%CmUf}tAXKQ6TA7EIR%(i~aYVoG6RIgNV=`-aW)yv$-3AUKZ{iL2 z90^dQa5PA&bw1@YFmw_r$LGcs9R(4uRlwBL`O~L0SBS01)7bwcEx7q9MeHfGPHsbl zGwcwlz$(DPEFmL$_3gWolF+5V9IEUpOp_=mhi7^(co7)Ub3-efknCV-(0H^d!?gKp5aqR?%%98z$I(3ItAL$f%|d z9wb<08%iZnIBM*FjxyW0a|JS5&F8Y$Y*J`7Gf-bO##$T$s=Q7ExBu)uc(n3FBsD7A z__q6x6QbA)GMXtBW=w#GMRBu$Fy7g+ibRB-^kY6H*%9t}%dZQv*H#6k{kl#R?0eb< z&u-@TutUFXP!V*H%8Yz`A0lpBRyCCqsMyS@OT4#b9qKaKk&3KTA-cq~^7mHy*HDc8 z9&{0tNEAN8$6g}`Jf;gNN7GytOgk;+Wo~(%`Gbi(e(eq0(PrgnjRm%b1Q8~wb@Sn~ zkoM;JG-3S4ptZM?5ND^fR8AoKAVhkBAko46={8i>VK@4?{n8^DOkIRr7R?76vU8dT z((3H{r1U47^E2KTjpN}2#YC{1J=-llW@a6OyEDo$^ZNk2Q?AW>LpKA zV80k|3x?dyDcu`M@tm&9N&G^hbk{>V9-kF2B};g)z4ZXasv+PkfA3 zsgKH#d47_lJ>0h|QGL<|_N+hpNi!)V)deHH5%u@PNQ8+mR1e2S-0NO92g~7#34hub zFwrEl9&&QOXq5M*74S*Fu5dr6t+31?(`Ni@=9c<9M!xNRXDH491mM(w`GXofN{MC5 zc!2^5CrgZPUCvz2VnjWO$YX?u!2Pbp*O~bLs@BKGK>cODc`msvq$7kyHj-aHXF{J? 
z8z97_OMe&#)EW9qlq&iEkG=PdiXv;fKouPX5dj%;RFoiDa+734kSrjvfhGt@2BC?P z1SNx#Gb)m4ayOYqqJn^uTALi2C?GjDeXD$D=AH5N`+3*8Kki!nqd+`er|MLl=h@HR z``XmS)mV45m4=$ykKSr@%y5Q}qma+lk=K$hwF`Qiy<;JY zMy5RBkA1WQ?j6SfGj_5Wh(Xo?pV$F7XfepjnVKT<*#tyP5>zWd?d2|+gD?H-!t>l@ddE+h_CdEH0K{mAERsmiR@0=w!;b(o z@{HQaoeIFdw`Fkjv5xpMcNk)>3VRj%K#sSlC*jHA?z23I!Pu4Xn~s&mLLh)iCa6J_ z4tr)H3)pa!!4b)VFatk;i1Wpe1a{bJ&{Udd7^-)mDl+!3`2dNVGP~}GI4mn)@v6}N zd6Mh2&szX;-XM$b((5&m4L+!I;PIFjNbhO{7d#nEo{6@L2t%6@02sK8wh(y&uzvY5 zuvK9}exL&x4Cl8@>p~fUjS~`1lY#b8K4Z>(MkXo-B0!h zQ(9`k5ULUM2w(J3SrO%Zj=?$rZKms2OsnBW_T4s7mcI@rNnWmqVP^`9sp4uU&ZyEK26 zxc#FN%fQZbpXqkjvsN4S?_jvn2<+~YK!^yFHgXH+%qBg5NjLXL3?lYYhF4Pm5AaE9 z1if010g78-RR!F%32gd1hV&hkbR|%lu7fBa$eYmO92FfHJFVLwQO<3_c2ShD?BhU6 ziDDuUdy-ipTtKU5_$XQW{AIwH2X*Xtp6|n9=m6vH;gksEqZT|;rOf#8c^xmj$ z0|=S&70TQ>UI*e2uUY`A6;%H!ayRak(!K2@uJv8pd{6qU+`Xpl+ibdmYp z!$+pN7WTcx5tNoa$({8)M~b{psVePs4vhF>$J+;A2Cfnv+B0nShziOd-XDG4ile8q z)0pGq7MRr5WRxUF&_w)UNN!r`7UR0O>InwvjqaIq%uWbSM-!$6A9}5y@1v&XZ9+?z z6c_sOR<0p)R;F?+@TNuyoh3SN@igxfNbTgTkX)e6NMC8lsi3QWe>eSw!o&0~s>eNo z3U5bOueO%H7nV5G*^2e(nIs4iz^@3T^&=Z=UIpIJc%>kfTfb-Apyk?@Fycn0)3&Zs9r{8|7-4b<+C6c z6U8G8&~Xb!zpZMyhhH=LpSgm&jG+}#S~04tH6?x=%%9fnwET>{235G=jesof=}KK< z-IbrPkROpIaBBHCD5G-iR1mr7TXei4Nccz9_{%?EP4FAO`~hyquX{ALe=$99hO|Id zYV(=~{nHxy^TAPVwFQr4UQvRnTVyM?y$0mJwG!5i>@j7~-cAUDr$O5gn_@!=m> zo)K7%f2RKr?(Dxtr@aQO2Qod+NXB2U$7;%OS5Yzm`Tf<@NrL$m0Z@Xr_}lMd|6b;Q zT|}F!!0@!jn@qnR9U2bA%KQ($^4R|?VEsFI{rCqkJYUVM#IHw3Q3Ck@w6IEd_@{v4 z&maE*;rw&be;qqs1@VIW)&+&XzBNK*!0^?dBBy>m`u~69|6jlHqaQXvpe=a|=mCS) zl)6{-%NqrK*=mQfd@Z4q284)cFNx-5jYbfYnE;$P#hKC1b%S%#S(LTNK`3_ zWk-4eTzB7)dC{)`!T2A~&Y#C$emsIc8la+T0;I!E037gju=ykq_3Q^O%5;AKSQUS* zs{bCe{#gCzhUdXde;o*ybU_%_IcS_S>X-A#01-#>2MMKd|DtArugU*>%n>9H^Kc;E ztRH>mEJElC&ya$I#sVB-b$&Fl zRO5PsLjiLXc`-(s5S3Rx1VprireiRp$O53rH6Tt_gZIcFq-R2@8$g{t|2MEUO+H|S zeLRpWe+~=E&k+bP6-LBp^3GFQ*5E*^j|tFpp=`Ymz#dJqd`C2uq!u;o@SvMR)@HP= zV}QZ2TJ)&tF&Il{&0Mlw5&%%CT2$@e{%dhOP-I^FeeDnksBP6m&W&p$EKu%v+kvnB zo~ThXTc5p;$=EG_jO-hP!!|sw4g)GRD)--S=}tBYKRO4ZJB^^CW{t244)pTr+QN+& z@qgEOqQU)@IgS7EWa~7LjTo$YK!c7!SR1Nkmvh!|Fzu2ymDEiX#(_=KD5yXW?K+CC#AHe81KmmH= zw)CB;6~x?K_t7609eWsfZf>_;?E^;c?Ylc zb`XmCA*NaX!PTtgtq(`vqKW~Ev5ht20vZEEmM=ZmeQvAALRo`PtCh!mz{r{Ermar^ zzu<+Tuw;WbD`WuUYY(*F2>iNE<#6`27k#3(570&)eUrWAf3tUgxn-LuXvhqyyXIxa zQnro^lbOG&Ap-$cLppnt@j#ppu43?Tt7*$!aG5%EQI%Tr$$y?J2ZiZG6v5%8{a`8+ zkzy=J(g0lu8vw_HW-!T3$Hi*)%Sf?cfh#gLAq%kb*Fk!)7odgp1X=OM%9&zqO&(ov z5}>H%CC(fEBQXQy9^%wWICg<06UIw@w;m!@Wt+?^?T&6s~R-u z9REZ^EEP~s6rgKi0bJBl%8jJ%w(9n%4F@69u!mgpQBgT6`bJsHBNo1CI=2?%tYKHtZ z4d}Yhp1~U>C%WDe2!M3XH4=`>3HD~B4W-XX3RaS7h9n(0&K?x4u8-LbnexGgOj1Tm zojlXMla@hb{70c9DK6ZjzSp#j^h3$ZNl_ncX+Wf{(_&@jmJ+=`pjeuy+IJ!BxlDkX zmeJ8wRyXv=kc7A&#Ywj@J}-6Qtdu7)Rv~OH4u|9PDJgxAS#@<+HK3;}259UNUq+te zpM1I2(Q82DhtQiU;70J=0F#J@9{@@>z_nWkNKzJl8G(R=puQ0ZK_^T7dTM-SiKz#7 zhGuMh-emz<`ZAc~i}iA^7JHXS^SIC`2l`#4{!Rxh>gxe{`ABVDh9dV++H(VR%os zz`Fl7pc!2#8Z2FVE845&#B~TqoK&_{qo}}_MY++|FyU$E`pd{%6n{Jq;X&%D=YvnD z@MC`rdQ@KC9=+j3Kfq0ZJ1=<^Z9aGsrry9l&} z{B{ad9qOAWYHdQF-C9K~8>?Hf+cVr=H8S}dvNZLj@%QH~+DL_PZC%e1Cqu4`d)Id&3hR%f+{`g6@P1;<@TBgFVRKHv7GmlkcVZ9Av!!MHEk*j~uHQeWwh@x{d zFK#I*tDJ#JMbOV0P>7i>0i-lP>fN`d7B@|yGyFTH0>dL<-A!C9e?=#KMY#bL+q)@P zS`+G}kx>yYc(wA20facIw0St~iE85pbH<=0|5dn=8~M;M@o+(r&ESjd$FqZ4)ol9! 
zOP{E0%PL3g*{#Q02DFM>`7EmFjd?bvHAK$`!I#gbX8hlwy>T$#6@Nq6tQ*B60=98a z`*5kCdDws1(gDwDo#eGf40A;GoywwyJ#I1+_vEtTnXQUUu-z2`NwA}>Fof*|`K>~y zi;JxE3)=PLaVHHJ{^8QTBP!_{k;PE2sogBmw~LcI(My<&lSg4I?Qpk3tHlx)=`!0W znfJ9a5B!oO0`v##yT=`{;W8iMj_cRyqQ?ge8}&7z*IwMzm9?EHNisM+@=4mCk$uWl zz@GveYe*UEbP9Y6h1(qoKF2pCy6i$qdoQ~1`VSq{TwT*+UKeyy@LLO`lQ>-&I4ris z;ek!IR!C39EXg2%myncSfn2KH`HtA@3G4(Y$h@pR4+OXJWw}K>u_?zrvI~S9yhQ`e zKn>I}Ne#MvX;XdiWJB|gU|JNnRJz&*ZM&Ywq(m9(uGieo&0p=lzv|(9_4y+ zIQ6OAVAs$IwL)cJm%Z_1(ZOe9u`=t+UbmEqY{3C6y2Ni#wmf_OE{p5R?bEk?5p-;8 zuL1*Wq7MJ6hPx{PIZmeDdt+PW?ye&;+F|7PG9GqReu?b#S&ByvtZ7%w-+k^iS-8cb z_BI2kVNU?t<>4p~BPQ51;xRqtILN4E;bpO75$K=I>NhCRS$x0>H(t$*6+hdNh@t?{ zrwOnyHfkaIq8?rZ0c4g9TtMSPjZd!5Qkuq)cc9c0%idafQkcLr zb)9+W9rk;EPe#S&=eeHWJwBchhf} zqd0tUmz4M@oo29%qDAiK$_fA->ja2O>l1newE&R@O+&ZLf-u>&5Xpo6YV%rTn@|AQ zYts}!(apI1kkqRY3{A$l)`L&SXR|ta;`~4xD0t(5Dy8S7!TEiF+2HAN`9ga2+t2&F zlVCj>;goSxXMid>8M_G}uLS)9{GPqgM|&+f4Y4i0*$!N<+dADqx0=~jJlR@599|S4 z<5%Vu;~+YN|DYA{7}}cl<&`>iJ{OSY?#UoT$(_OI+za3|5To#I~N9N2s$Y2%f*!mK-Z-2 zvh{85H&6s*45cklsHD5RrB*SWc$akoiY;9XTcr0~Uj!Q&;d7`0r7)!wcOJma@QK^e zh+_qrW5*0%gyw3dLIY~yZunxuS=oZM618OH?B0_Nf?(mnNWJTNn%0w9YNsBmz0BWc zWG@&VM)dkCON#Vmw6^*zJO;z(MgWPYC9^B^8&Dl;pBTdp^N^V@&#VSsDL@>WKdm~1 zjZ@4)1a|q3bAXcco_znCU2N{6t+2@2P&b{-2J!<=(lJ)#x${{2p=2S8D@IZ0Pms+@crNLE}3(xjTF@;;nCX2*fS?Fj(L zi(0F-w!jN|3Lc>SGsedA56f>Ay9{;KNbggjX7|wJ1OYs&V9yD9xf^Pdw??3l2bmMl ziNo}VZhC}uc?TmqLF|JOS%p+;v*Tm~O(t_$JU{|&!VlPSZU04cSx5x(L@7invL`t< zA^2JBE=W@nHyn%*PTje|on_1-9P)&0kMRq0mDBffj>3euLkKI*>?BW!Be5Lhi5SU? z=iB(&myo0Yu^>#E7`^b4KBuN_!YA|Tr4nn@& z07y@MsETrW+t(DW;}M>3Idp7FCsOG6SXXwZEK{QBcL{431cSMyp0qG#j-bruldOwwCjbQh>)|<rjvwSGxfey2M92oTNT@p2iPndh=Wy;Jv z_(Q-9;My`7`xri)UdX<|X%`pD;iO~s!x{jTp+ogaD!)L=HC358D=@Eex=&>X(G)9R zp}5BylRXSInChSxxVsU1`DWwYsaHbhvnYpM7H8$`Gv41W{VXT-l;fp}_nSA{9ZjIx zbtI#Px4Z}HQ@Qa3X}4Iy%3Hpr{B4Al^nK6l*Sm=?wFa!6HslH0@?*d|RI_7|p(l>5 zrQ;r<{N_}4Iq(V@1Gzq`o*$srZ3f2J*LPn5UBd`D*y6nz!i{&Kbx+Hys6--sn8`Ta zq#oZf3BblKHlXHNH+5F)oCd7W&3o7qe2#rEVS@r?=KJVQoeaQDgP9{(N5q1)#+f%Y z8n9VMUgZaY3Y2A#xq%i)JGlY83~x`DxfjHHHj`56_pVO&(KvS#=wdNGvT-jX?BuVc zND4Ihh~>5Augb5+kWb@BOHYtctgbLzerpByx`({SsE!*eg4s(jgiy2@xv9 zfsjMI=KkaDa(9JswuF)=g=rTOL-WsFBpHeli)UP78B!O{b? 
z`~0d=abLXcMQwZkYUaw4YB$uFMXfDK(j*Y9n|7Q}<|byC#6nzx9L#lQ(`q}dwatS{ z-Hk);G?8bo)@`8^5t1H%kn+-hh|(;tZ6Us%Fi5F+0cLAP?-ca@cEwdjO8dn2ryaz( zxZ;GUW?yzbs`(0xHrLT3I=j!SZ z-WW{5p4BGBq7Wu>wpyYO%p8SnOy6qQ9TTT0GFv3C&i764gYJr(46gO^O>16#twXOa zV=NktRIytGKw4KKJ67?T({Qa0HbO_mV0%tA8BUBmu)HbX@rQNoI@8%6_s%ne?!VEU zyX%CJy>|Mzb3*o-=BQ>U)xd>)M&S?Bv|h8CvJ%Dq`UjEasI89Oa*V9mVz|;eDJ?^` z$d-9fEYRkM0#5&G>!#Y&s+(tLf7=Jcq*v(3zXw6f0$;hJyR1E%}^_JIe5es2m< zf>YR6w(;WZ8ng5^iX1a1<(u-#r{gd)9ja>5U@i(%>2zbvuqjzr+B@@CSRLl zdCtr7I7~};^7;jQ+T-tjx`X2!i;JNK)1bok-niH~p^#I?d!>rc;lts|#J-beD<^J{wb-n37eAQC@POe9m7K>ZiOysqSwX^5->m{_*x^g9 zkc$EHCHdU%3%(~QLs`0#9$ps+$v+x7gXj@)jw&g#*(!;Z^5Hn&Q>}o~Y(A?_?s)9R zkV%#Hy_E(wo90gxt)Qmybv3NSf35#a$n6=8D*tOS?;upv?#m9aP-fh)oCcRGxl4%e zxikG|KAFFP>=XZbQx11sy*Ff!ldz@vh25~~;jmKj%KX$TBPOJn;SEPh=@R^-RpL74;D{I3nZxRR z#qVOawF92%vSE`Hxb^`$xX)E}>CeY=0oSS;2WnXMGv9G^H@p|A&4xCc_hNTHUSp(k z7f#x@FXww5i?zA&y2Su%$8qXK=`TY)f5*RS_*xgYbhmLkfx5qWHBYhf#*1lPeV*(g z;t}R|3O*}l6)@+6XqHKit*2)P_}|NT^(tG&08iks!B53sM(?U$37fka;zXJ>n8NB7T`~W)eQ=1NZ)YbzD4_7wJ%RqqK)K{m zEPEUFvMx@a4f~tJo!Pq@l`COJ>n*6eGFJyx21K%!VmTDL;BTb0!?)34_m;lqWv&d0 zWE*zZ3p%88bMC53h&7Lr&r(>uxo`B$rzj+5#}%GKjXT)ZvCVN3d^e01#eJ5&)4Dg# zC1)YgU*Glk16h)sfo>Oqyy*0;?tt^!E9~3Th`9i!?~cWb{dJ=5a({yM3&B9J=2Q5L zXZw{^@zAXkD!J#2`LcVSdW)D>e=Gt?pKZd27R`*y`Ipmt4v0M_A2+O%m(>)R@6)Yx!aRFs1wJqdcQcJ#3Gt|j8?(W#_j%zjVM}w{E6rzXAzaceKY8lyL z4CHKHD1)1R>J4j;taChJ!A)n=08-CQ;>Lc4>%2L!wXxqTN+ENjEj;h;dXI-YFqa*W zw8c0*zpN9ZH<~HGqLxL#|#fC~Pm%A*!zS%+@Cf;S)2B&>Q zlYHjBJ`r)nh|Tc~Ic-uUy?rk6dgdNyf08FLx});S#M}B_@y|itom6VQZ*200JU5s^ zDNJ)T0#_=;&0m?RkC5k|#*yfbGPNHhKOQevTdP>n$*O+1*)$zKkpvrH`{SldTHU;+mC4Td%j$QzLW3Z-+zEaM;iMM zcB(RcfBa5(#@Sozi14vE(VfDN(|zT|Z&#@LWEXLioS6sBmp<)Jhii3erkk-AO7G4S zk_l5I>y8t1e2J2FEAlb!vqD*}{PnTDYW;r-z*>#37zVi5^1C;74po^2D`Z`MAazV^ z1#!S{!~F-70~aP~e_Ocp*yCjXtN3k9+Ln^Wg$%u?MGO$z$(}0pPxTz4K2b~j>|{4s z6q4B>rF`+lbK*P{y#9Sb>dG~cM8!%KO=aEgVJ0n2$+ojEFS$h7U6u|%VfcP?LkYfO zCd=bO9+XtjpL9F_iQ;b|t1TkjoDIXRO=Xpk=}Y~b@Rmc15m8l1*O;Nn;W*ESQ$cm1 zuLBPb!oFBT$lP=@L?!PNTU*HLV+?PT3bOw3V%a=1m|h{+J?7xSu2YV;=m)CzZ|z*G zs93VsO$UOF-(p*%apB@ZMFBQE<$L)8m0f8x1{1l}9U_RS;$HiZe33(q$2Vjy_Evgq zms)8*jd_RTmROfkeu1dFnR$!)rcCaoGK2Ww8WP2Jg+{-3{>$}Bx09wC7I|U46ZX98 z68rDIv#xbJ#nAVoN>QNF2Ad7}ZfR+l--5Q=Bjq=9HWZo?@jnRzua()b zBX4O^cM&~|*a_F~)buan@Zl~%9LNt?4As{0$m99d*oYSGarHU+Yt*lgeag0xBTBL& zZu+dNgBp)RWe>Z<0`b&;$g9Z@?G|s$PF}w^qioVvWQes_ZtGIHdE1h0AAc*%+ z*&0;yN2`kbMR7G}1?%)-#)j7^#Jokki{4EL-E029k`4RzrS;30EQ|0^d1;?HYYY5B zY6i8Q;SIyF_j1%u3pnDZe7|SzFJ!7T@9H%5RR>(Gv_NF_=UskVo0h2%U5gqCW?txK zAI+FI=2Ga6|BmaEqWa@WPiLHF(}4+w)O7u}O==2KZZvpbcfl80*0eGrh3QGE(CJ3; z=_u9T`XEtQq;Sj~IExX^nO6=Nz9V(Iy5^o$eZ#|bP?Vi#v-bh%(93XD#eQS@FR~9U zoceu*dU{pL1-;W2tSE1ll})7Z#S4wS?-vO=aDb31JL;`7Rv;9=4%aHMCrdJ&iJfcE zS*B7^JzrxJ!K0Rot=c)zox+ij=PMaYrc?)2%lPTn!R%j}SQqZ;S~#eEH&h85v+Qpc z`WWY$-|o8p@+h#%(4^Xgt2FhRlzJmum;;}zBp*^_e?@XJ4LVTh7LYHc#cdg^1>g-S*!M{W=S!F*} z6bEA}d_on#8L@suRSGLHYij*or5bn_*12tL8ks=DW19}`jN_F*6?C=ckzQwQ)g8;c zURrhmlYIDhCIJUCi7a08^m)WP^Uoi&S@>8uMiqzyhc59QEiZSZJ)yg8N$X z(!FZQs^q8rrN6l&lxLh?i`^{cvZ8Dg3>?j`@qiJG&pA#5TTCs zCck#`2~xst?}9$|sb^7hmFMSt_>n(pyacSeDTZ@xvOF=NEmXQ5Db;yXHuTd+Vv6BfZ&u(?WOfn6xoOL9n8&6kDzOJ zPeXT^uQQIc&4Dg1Q3E2krK_Ez_a45IS~n6}e}~nDYh8ts}Z~IzJiRjd2(|*zdg> zEmlL~V|J#3y;>;?S$zUqFu*6z!LlPuj>nYGaw5=+f#H5nG84Mq-ijq#JSp^Zk0+S? 
[... GIT binary patch data omitted (base85-encoded image contents) ...]

literal 0
HcmV?d00001

diff --git a/site2/doc-guides/assets/preview-1.png b/site2/doc-guides/assets/preview-1.png
new file mode 100644
index 0000000000000000000000000000000000000000..3c60383177f5c7035f8d9efec0744b7809e54b53
GIT binary patch
literal 139893

[... GIT binary patch data omitted (base85-encoded PNG image, 139893 bytes) ...]
zy%_}e?lCaxEW`n8YOVX;eX|osm^zX5@RIony>5H|ett*qy zcNIEX0a$z}%54YV>04SM+pk)Wm<#`ht+x(~>si)?gM~qY2MLxz2MbOh_}~)ULk0*A z!2%?>4i?-A5S-u+0fG-O5F`Tx34?oZcNo6;?Q_m|@7-trwVw5?TGgwod-c;*{k~Od zpY}bSa7+)PlZbxQVHtHodil(LJ*<+-_e8KeBpYA-xR@HE?ayh|z{^ibSoYrhr9u#x zrxi3hmjC^bldy}oNG|Y8pt}*mZ6K3x3Q4?AL6NFdqsf{PEdBYBS1o&%)_KSiL4M?DO^@@JT!6FR6<`jBzOuVh;o z0{g6OM7~IcJ$p&9j}b^E#cJx#=so-#$>I_=iZO+Y(U^1K0hjB_cN)&+Ngx*=Wy*!G2AOU9LU}0g0zoIqcLXR7eXkI&F~ekkLIuO zw0~(%KU7&;Q>Bpv#5lt6qs>hzBzzG(l|Vp_du#S`&N$vb4_cwQ=_G zkyYJW>-#7!uUKWxhwiQz9@G_N=SBZ8tVFo0>Df!C~?5^+=cX zi+yhCn``D{CdF?QPz0Hshqtf&kO6DP?b}rv*ATyOy z?t0U$ITbzf+{P=;bUq~3z}LOjk~P{sXNi6-!vVMGlzwkkEN+K%Kll?RgZ~I)G!>!f za;@w4_Yxl*WOW8Lc)d5^ZVqKc(yI80vTn{#NosW*0yEF!D+}nD5{N{f6I`mFXi6s& z%UdK`TgYu!XfMR^H_p6AgLa*mvOD$kA6G8~<&Bf{Cr| zzq`IUQRp$3&b?shf6LnF=)zs%$3_IS$bGuOFl?A#*;;hthw{WL%A2e#t0D9$J>$HqHCgxva$oSH(5iRLL3XAOQ z_)|UqxE5f#l*X=yQV}X7%E?2mCv&Dt$CqAV)Gohd=F{oe%VP_h`-V=DvsN^CX<|Kf z_g*df0%zOx%Xp3QxSp@P_K?auUM6|}V;0g^#@JB564c=rTPSqqCfgl62C*UAs zjRLXoFNg@`@^M)zds#Q$ICha(udD2cGds#Vs_k5##7a%!!O_QS?$T=8t3|WWqX^cw z*Y58=-d*!HT!slWKJcjmR6&ON;K&qGj;ilI2fkN&SFU!%5JA|aJ}a&>QbAzdwZI@nftgtv`PqF zHQ56evdz5t;n<_k4y$_arM@UvA(F_`SLgb#D-F@B`s4(vNxJ?&HjBF zinm>$p5|4IH3FLK3Hn}&{(MAQTKh)fx+de`n0fvf-N(Xf>GrCo{2$+bn%mskKTV?% zW;Zda@{-28HM+=gB#I$ugbI1pUL6%>i+j)9aG8yDO$FF$q?JhhY@ulWGojJ^8kug- zzDy>7d|E!;keMT_uc|p_KsJ|4#JeUu<@KZ_Cker8{%id?`<2PP7;lvcig)t40<8#zV+vhleb10bcmO@t|@A*&d1rJaL{^+PGZDCeL*VC)wss2OG$O zNQ(Y&^=^q?kTW;>U3E%?!y@k;Ky@bSJdMQpYb|T<>zWJf3n_JV8;j>nXiiuZ_H~

l_p2dKK)u0kLN;E(%(h8{JdH$$1Aj3SuYRMSDVXIkGKnj#+SSdS!Cjr$WUUV!UYK*x(S?okCw;^ zaMx!_|K@y1N(BGercn2NO9H+-3~;=nJ$&w(8|dtx$XG-2_IbNW`+M)BAts4n&Vi#; zCOdzYmERQYwmGR(`(r)Z|CTy zcR=9IlyW99Gn~o#IxNX@PVronnokYMhjP2&}J3;GJ(E;>>Y|N4X9 z<^H<}PlsO6dDXVXI6HYdL{JWL4EXM2VLO&Spm=@J|LXhXwE0{=45RQ=}}Tvj1r zS_m>=l|NT0e57J41a{Ba^dORKkgMurPaL}{&sSJ<63Ly(cax&AVS3d2M^2lHIXAk zEm{HoXI_C(5K$^|HMz+TC)^c+OI+AY@LK02ndvocxm@aT=7HCBjae{EE03V3iDCZ*{9yaoTB!34L{+T zZ2bWmaWa}fzLM$X1uKu8(P^oBLsnlUhhAZWFAUdnj`0bUw3C9?nvEitJ6{_U*zx)} za=hN@D67Vsi}&o)@;&cgqRXWaLAEdc*2uE*R5#&1x!?Y#<})RS&t6((c58BK>l4|< zMvYZdb6I#AblNSy8SM3UMWx0hQ&dz_SU)M`OOeFXDWfP~zw{md&szO1k;J*b?ne}C zOQK!s)0SJ6g4vAz8au`eM(7$#PnSnxK2^-mTghHZpyP}uoai7J$#|LaW45e<;zjPRz zvx+YETbiCcIEfe>;=OFRt#?;!(r(Q|HQ5TOOPKM>7%J8++s2r&!DfR#;V%S zcB>gCqX1K)5CE65-j;An*nDuv0N+kjHE)jf*_^Xk8M6O00k}wqQF7?L*iPl&!>Du3 zw?SM9quWJm4Hf~gfPe7=Gzgg_5=oTk3!ZKr7Lqlh$!2dYCX?S7t@Z^CoO)n)CIQsl zz?X*gRP4Ko>AHRm&yeaWRTZU@<#`&@pqCZx5|P`QNCIIoa{Rwz`TNZ>zAq;HFE=HH z_DfF0WN@351PHci`C(B5|tMk8~#C7+9^Je;nTDZ_?u+ZdxN&nBI zf8JG>$2Q%}iCp%Xek+^TpzF6UY~K{)qbIwsYAF%>zobPmynp6I#`pW{S8UHaFt2J_4yLGIhU_=nRnt-#*yZQunO#uPLzY@ z{Px`_KNPAEd1i^h3!iRN>h+wN73ZvIzW1df9lPE4vhW&J$DxO9k9OKSZom6mh8B2*3rPjj-x$&16_P069AOJDe?2yxjV=5^yXm7+uSgDrX)-{LLa%2}pA-mPQ~@c+5E^=e$> z*Fh_#&rkGVgg&|{Lv0y#Ei+6;jaS_sQ2Ee*EPcJB0<)Jm0r|=bvvbC2?u3h-gFPVf zlKYKoOBY;k!Ozi0RFunA%zb%DerFD}Np{;cSbprd*AEgf-UB2c-2^3>SkDm?$!HG@d2 z1`?ahvY)TJCOI%yBWgzkfX?S%Ne1F}g|1l=Wx}Id&ZjXHD%x%(Re&*coL4urK0>e9%Ad!y0y<5Ney9_EXL_F&ct4v!oe!+WBPq){)in z;&j5)J-lTJHCub^ox88X~=mtmI~@@HF9e7!|nXhV#^}C{Q|~E7#)#pzWYwFsMXs!VJyM zAem0P%wu*iP~`D9@yaKG+`A}ge(DGuEUxpH7a;t*ErpLQ@i1{=D9ehKdcg zEe1JjzDfvzf$Dly<;x{TSRhJn=exWVy`Rtcu{qJ zJ=8GevA-~L4t;;VI_R+TeJR(Qtw>ksB|6XeCcNY>)yrx8*etG_B~-@q<~Bs|g5~!& zl|m*H!nFf@dWk#N(-DgmVtihemy4w&X5NBU8-xg{tX^xYQR0T(cm>^juv6ri3V6om ztG@jJC*3}M7?FxHx8#7lWMTWGqAv=nlry!&~3uev&gz1_FC{xiM_d%Y}Q|w)!}tI7a|5_toB}Ne+?l{fkzSkQ;iRcoTsg zrW1(GapHC8pUR3sjcA18o!)47fdRa+jzFi9_mLN=XJ_L|4yRc#ynPgrtW3&u9WUO#L!t|4!YgJqlK+z2A zV=|wt5d$?Kn!~i6g&oTSE#}-viEe7@r?B=_`=R8qr!VJ0P<=xJ^xv#3{pvZ?_mIHi z-X+ZS-m{g(D;edYm_i(PCU}P}by!Zg88Y!gswYh(DW~;uKke!(gl=GI%(#FiKha-W zr0m(CQf54$$C_x__~2Bfb6u1EukKaFdZN4YZOw!wc4LC%32GYwPGI`V;L8nl|3!3M zY8D~1ry*H*L6W3Ozg^+-B0*Zsc{*TaSYrOoc^>LaH&xn;sC`9Y!pIDt@harIf1V~^ zVW)Y2TfUh;PlWq#hE1&_i)??+#Yb)7or8+a^8Y*sWZE3g0DCGp>FD~~Q z*$@>FjeI#m&~e%-PMsE02{Mk?tZRCo(Vkn1llt4(r@p3oB-EpZuc`_6b@)hyg=rF_;lt8Qr-E#j@9N7}(A@aO8e>wss^(C#^oDQ6ST@CxgQE5| z{wbI)-My1o)EO0v{)X1=nZsDpIVBIOLZtiLj!*VCz2O3DA=jx*?Txwogpd6M9&cMt z8}F#jN@$`Qt5Wp8mf({iq{v||aa>)UnYm~2$;@M+fFc3ha-z(rH^tu z71Vt*01;zb72CqMh~@#C+HAMyIVNfXq}jP**{-fiq}7b>6Csl*8E52&;V@7%qZt`id5NEFKVE1nH^`x6 zuZWk(+D^-Q($La;B&#ixk$9ByXU=^!sC~6L+Ct?dh%zIH6z??bH7maJvAge7Jsva zlc->d=?-&rpRZ#ZgEJBYHhl?uMepwMX2VC{dlsEGxZUWWHuOdY9KomI)vL<5u9Nkk z)J3_4vvY~`pZ@x#jd zs8w8K`A7DPp+)ykl!PoJ18+b6gaIY+i6eO@4%)0SYQ0BK^u6>T(Sw~J?A7Ka(s%^r zM5^K7oY$Kfs2m|fxtXa~B6o_w3#&8b0XpXXy={`l2qC5vVC}X~LJvC-bl7C;%m{0( z+;BTGF+P@Ywcn!f247&AWB~V`r_@6>-#YxUp+4|3K$0573lM-R0U_(-5~- zNZR@`@JmwjDjJu6q>@R3itI4}L4De7u^x?N|AxxvonywUSJ{uJW*{k#iZp-VIhU=q zCi?autU*0&alIkf31~sDjw*c>4G$>qU*YjZ${8n~9XEIPms5{pqn`*r>exD*#FhqGzdHm5^ zGNAJhW6HlYGyeUB0}qOR9?{d09bmAr`1xcEG^G|40IWgBZlD0?Rb3dB_pcO58u-M* zr?9-%=@_a|$|u`U@)K&Nz)|rIMt|-Y1r7@yq>3eB{AmN?&wP#+H~Ks)ue^`?_YcxY ziAS{BdR`xO?8Fr7KoEhj5OmFxF8zC#>Gmoy%eH;+9sJHAH^)MK_$V2jRa9j6kTrcs z0+tP3F6ioOza3;2lb(i2UQV{3n;ZD`LAPx-{BQSLKkC%35({sQzpP9*iJmAcv0~_d z4vgnj3vOkB}Pq=G$XKnWE-Wtzu88Jf``i4Pm6&n!T-6FP$RJbQF8;h<_1(w3`Tb+j~It3(mC^ zrjJD8l}@qC{F^**DvR0mGXAg5jG332_-LlVGobxm3A*&0{c9B)x}Ik1d&extd#Yf$bZWoJrDj6!9yGso1=@x~_ynh>21-Y^3 
zYEX3HvN!P95y7^}5{~)ZX9wnws2=RKAP2#S2|$$cP&HWKMw5C3Vcl-9y4D zBHr&NYT}2|?C`Q|{8>6qqTV=LQ*NKNEpY_$gzve~d}z&K4@Oy#!hGID`i&xu+u$yg zFYsEn3kVuqS!7xLF6;v^xk;4qkEs!k$NXT|lwzU#<9K7E4F*haW)RVFok*;QSdND5QsN!WG? z9QD5nl&+wqDY|0E>y%Y}FB9d&f{c?X@E_PV>`KzQ{|L|#DNK2``{|%C;q_G8eln=u zJOYf!;iQjDMwcn);#<0hyohIKd7qKwKdxZh6Y?_PsF!5=Ew8T;YpD_;h^(lWzS(wh z)`9E%HVx>O0EMnz-5bZ+k9jvS7PSzkyL}ZPQ{I@%^7DM&`wd=THV}g-%;a$< zv;@oY@nh(e0WqFz1Xc)%4zW~`<2hq-o!g}(u)5=;$o%t+X>3V?Sw$@&SvCBpcQXFh zay=6d@Hs#<@x3_qxh@?jz?tRdvKxzaU5YNLJ-{Xp^Ce1aQ;SlVNJ}2pg-V?pmh{51 zmRI$7F7N+`-lk91qTxpn5K?D=X@q(Pz&Xqdt&&cnFeRT~>G9qg{^>h~>`RhuJ2L&_ zl^@KM@`S^1yKX?PyB2sDyTVn>lgHib4qE<9rc9ki0x6UB9(&yT+LBHo`>+F@R9+(R zE9qUumd;YZtkFp+_EghMnRLHIuS8qlDqBkl8V+c2<+@UKy*kLeyM6IT_nel7EL_s{ z&8G!FgP>|-AG{guH8y%TD9!YqECuCF=EXP>svO%a%^L?%5fd+hyU4HZo_)}xWR0>| zs=aXkcKT~L0>K9_s+1qCnzkGfyVH;IZf_bf>8j>-p^8{D*P69998LYb$*vsH4JqE4 zCqU$hE_hffQ<)}Q$6YjSn)xPNIDh{_BlH@`d(Cn9hY6A*`<&jl5`L}*WN=H_zg9mg zK5rT_^GZxWp+3<75GmT$*52<7hs4I>sYw*upBD}qqZipe@nO~co`la9F;a<48F8=B zt|*PmdeF(mJv|sUcu^v4>?8O&UrXvpW|RLa!}iE8&27jQhG>NOEmZ7f?qygpS5je{brH zu;{H{!6>XfQuPu&{z!${YtuN$v~%bL{^6%AQk~Si83Zb{2jhdypGTWPYtIC{-xf0RJtAsjJ_VfDTrhg4E)~aD@p}9#j>#s5?UR07?t~>m zEHcTr^Ueu}F!QxfT!zv&P(7Xwn2G=glMGV~+(?lIT44be+4s~Y@Jl=~s`yeGEfeHA ztx4&@W=&}a#UrJ>1!=ySY{0XiQHa*Eka!waD@D0OXhf-MO#3=R7U;b<{-d9+#|YN$ zEs(2Av#!5mcyc&Yl5*0QLoSg88uj`G@$tS3>oA6Zyo`M61en*~@hbwVk|^s-ljF(r z-vNk(D^-)^GiJP2#c)_O|?d zWHMTnKQ#3Ql&f8W$W)>ZqX>y0V2rLwYX9OzMCSJ>H_xi9>f0Fq3?Lp<8PdK175mo7 z1AM{Ftkm!$Auu(vbF5*zB*F?yr*JIR_NXT?q7m+3EXFlX#!9@&f1`(6!9wKK>l!q{ z6)X%aB4Zk2FdDTzz7-FL))DbipHe~dBSl^>%n=%|@KSB!gU|IQmALpn z9lh+0sY=fb2%ecV*aXgn$Vlf01<9@SjuxK<%Z-xQU#IxK7IX^We?ng_afYis!}MsK zYyTmdKe9lCC$`xF zfogan%s5NNm(O9z#={_DVw!7y>CsZL5bqEVFV)$XEx&)+>;VK`eRn<|d!_K#$xqia z;bpoA3r&AP47Fwk*M#0!8!Li_=^|*MlKswq7v`fcV!uT~fg-!utB!%FPa& z@(?!=gg^GR*NR_y3Fa><`3L-t^n*81w6r}5i!ogujlMJ`8aVJBHQ71mzoYJDnOZwi zKL!Ro2jQO}zrTQ7YoMt5+jm}DlEMYQZLJ9mmj|?Q%zMndzuv%eWy+4QJjPoCT2-p* zk@%^3kzt-r-rOpCz9Z$4_Z@y`Qoa4V^5qk1q)NL`Vh03Za1t~Zz>&ku^SOrOiRoWV z;JPxY3v2uL#%SqCzE|zAszAMx5GJltE+t&t)qhfcDE~P&mAut(7(-G^L2#uAkPs4hc&b2wRA8 z?A1IWE`+8ZE8N=)8d1JeXKEKT1mMoOT<0UlQ}kRad~j%K&3q^ex|HHZ{6S$%WJBc% zmyxhui5CK)1Et!YIK~img68HnD@}~lyE1WqJ=U8?tj(2FP|2ZM3we6Z_&0OHoz~NX zmo%XQs}^>>W5F0)2aWEV99NGfTH>XzWThx#e;BB!kj+qshIQF$%`HdNe{`kC3F8X{ z;Dkm#0x0NNKJPe{maM7k_VP0K0?0+!6q4IlU$=SZ71s6}g(|g8VL6JHla2ISLb0q# zQGP%BXJufgRoH2-Bc)25t#hnjFzFOeGfW@!na)2gc%q*~D%|4^a_tA`Py3Y1(SU5W za(U%;m5+~4raP5EAEH9?B$1pcRtYwHe7Jq1FPi}+4%!zTM<36_`RdcCP^I&h@BbJC zM=%N~Af3wsS0~iAu%1e?Ps^^(RYqbr4Q@0=L#=$ z7+GRs;8;TuC^Wz6YlKG-xua5w(u_vJ;yLbYt+p4s4A@nh>7^$6yfDLxg!;|qXB+nr z2jj1=zGYfDNV4k5V|x2&M$^E=LY`R&(AmfUo{;lJi-inLB2SH>*4!O?#KA^Z4<%Y+ zcsNx#Y}IT0a#tJ#^aQU$IDEyUg)==!Fw-c>DLvkh{lRJ|z;3BuP7LoS8y2z<-g|27 zFU*6&Z?HhHg-C%3H)M6_@K)+bECL#|elpWMO|rdq#1QGpkjpLa8?}jk1_p!_Q4sk; zqV@W8`}1Faltm;yc05kjgnAryAhAmP75B|1-=XeWAd=X$9sqR%-j%4$?_!?*fTtzsLlP0b9kK zp3bFIO8vlkoZ!XtF8Hv}ZPMnZq3x#RYfNJ^|GO4+aUeRW*BSI{6(mb#w)})CyKDY& zxq;7)LkM4!ZBo%&fzQ3-U4ySfq~Y641dhCE7pE#f^%)2=R_M)se|jUX4#bL3Xi|aU z_1vye%7w!>oB?FhAe|P}u-ZPZPO_LR@=cVjmXJEBT}Nh@YewuQ=jpp8Put#6q~<1@ zf;Njp37YUI^G9^i;QJH5yqhFx|0>kAX^|g!J7J*tSst;dYfOSK9B1)GthW(sT|7ld z?(A-O9SMVZJiYrlCX7&lQ9!N$+h?arjPoZo>5NH*l*WhhyBePQ7`fE9z0()$f}gm2 znQyQyvo=9;1mZc$fvfZZ>vh=)woWk+QnJKH)L7a9XZDX(N1wFsy6u)I~97pv#P&4A-J=ddSEQ;sVF;dwh^v76Wq`JvnkjAG7n#0`q1E%jQVNLi!+? 
z5S?#uHzN#6WvErVw|c#L5tO^&%&9Jwa^ns7kJIH1Cc>Y48NGYygSVSzPLB5e%qIlI zn^-w9(%~vSKO#j{-~?S(Aq%<-!VN>IBGqW3U&(_AO+mb(k*|ED4qBVBhgX11qOm?= z^3b>3r|X5}*vuhO7bq4E2lPqI$vOfS_FQEVy~a`s*U0FA9GK)w#(iS=#y}*GgrajU zD(brX(80W#f)g{(^Y=Y3LKZfczQ2fq9@H|D5p#c*01JDLFJv{IH-{>~1E+2SBtpdl=p8h5jZ#+N(l#%s2SXsIERq0kr{Xi>zImqTEj z+lXk~=h4GWH>I5bGbkG~XqlsDE@-S6tFR!a;obu3jga!C^z)Qlp4|)bV&jw+hboj5 zy_2X5?I|)XS+&Ql`9{sThUjDGjU7?l7DQ|Yu=K@<U)q~_}SLF=|Rf`#m z|DI-smEL!3V>Ds(w5b9HVUnEK6R=CtclPX%JWdN`COXo}qn%z$FAbq@Vv(QyGs49c z8EV%^N;lG8_8RCxqCWHmRt@2U>Dm43{ewDo&dN^W8tlx(XXX-+Lx1e;e}1NOCu1O> z54TRlLw?FE`U^jw8OCJr3{HQ3We;^_8OaE>hdQR@IpW?2;oP%?!BZ4=BDUy>R}z&< zyEaGi2O4p3WZ384-IeF(oR$!ym~)I>{Jr}P41R3YyQ=$Ua?D z>x!a`ZRYF7ls08JC{jq%lbW}~NNu{6SL+6ag@>UZVe_|sy%&h_91uzsuua_!%I(cF zFyfcL_+yc0>GEw8s9V0G4gaZ_Wk*CyMs_-GleOt)uyK3x_U9al9~4Mlhy!sPb*IKm zrr3dGR#Y+G41((V(|~USo#0v5QFni-m>7#P!aS!5w!0Y(;elB?>(3hidAaSqb(xYz{5->}+Al@fKAza6Iu-KV>ef#k!r(6-{$)yT-rM$TdS1OSIzPjg z3W|mPU~75pV(EL1ntf1k3~)VCKq-mXU+njY7>h|bhj=^5`=)6lqq~J+tKQ8|U0GW# z1u3W8;oKXx8}0e$rs3qthJ0}iO82w2A0h=E0Yf`5lTWjh9&6HJiG<6yhrA3e+?V_< zR6#>NO44zY609523qBR2HQJ8PAX^$wCMFU8z^2$}u*-iFB@Nd2l-mqD-rKxLiQ(T! zU^Ab4DAl#AJIl9IOrfYOQZK5XSE6j?DT1n8$@ZxCMnou!@)w)Ze6 zVE4>yj#i-wyG$K7@hD!;A94H45RFSfL{LNt#OBql&k11TDx5DN0{Mbc=rn*U0y`f~9ws27q5UX?8ca|M^{Yrn+VGOt*8C|eDf2fO11QMlwffsF zM(d2WT`XWraXbML0hpI*MMolk=(PrF(<_G+j4+5uOaNk**;hi=F0o4Vn7BFmhAt+6 zgblf|QYi`Yl3E8LDNy;0ja!q+;}bCVUkNTp(c3kWY$g~fLD$gdXr5|GZMhF7P=jiY zSo*)ezbr#BM>%<1)On#i8q$iveOP_lgGW@+g43J(tbEU}76WrD#{v&$*Y?(+S;;HE7!!_rl5!O)~1dKpP|eLS!O6Wve1v@U_7F8U;lHzrQHU_Waoq_~MP zcEa*kgI8#M=i_(U>TeuG?B5)6+D`gdilb+p^2yu zsjfO=KIdqCTi(Nnk#hf$?tLGI2pzmn{w5plnrdP)A}SkcPc5W;C7rnSNFs8Trhxem zuFdWIRAAK6`2dsy##eL$SFnz``Sq!ip3T(#mFQ#o*cs8l+!47wAVuyR-Qz=`JTYCy zZSIXD-JkD|7n=tIb_Iw?TQ)q=B{JMM8h@)A2XYc@v_2kv;86`)Jav$WF|;WG0(;P67sb!UwB(URHrs>7LzmmnfU(HPhej_I0C2+vC<@<6{ zcK!yJMTx;iT4JU(Q;5$rl^~4o#fvuXnyMXX!Au>hog&FnI!Y^TUb*mf?}?P-?Mdlp zg)zWmEqT1ogI7PXKUvEemOZL@l+-gcEs!F9Mft9}TUqGg@8nj@uy{|~ z=7lGLy90?kq{DUzh-cy}iJ58J|DGiuwMdpMtijcHxFdYvaQ>GD z>Hg_wr$7o5@oQ};)jK#%LlU^xDC0{cfyG<-u{?Jp86yD6$k3;p0hn=Rg2WtX(lU&d zSPDqla3V1SVq6ttv&UA{U}w2Y!b!O!aDUESS9*k(n0C9aI|L_?N2TcUWZgLulr5BS z?`VqJ2qUuHQO(P}?9o)-W2pKJ{oyUVqq4Nu3#zSCevV7t3IK3TM|H5RiW*}IPP2sE zmvx-+$2)5GJvi@s7}d3=uVS~bda`}oqUy=m&H@+U4tH+?(%w_bNc(}CD-Z%5CeG^&8DE@LqX{dwteOnJd2*xQNsNe-3jG!cW+r8w8J>w6-18?yxLi8oS&)^Ui_eG;VY3AClI zwQzS3lCJA7XK0ww+}@J7sf%=vbu9mKp_n>{Oh4PZKmx@{nu5EX3{DACqAHbHyIvB(=v3Ku7gfS2p>$6OXh3S|R- zW#~(puVsq;I&~qfzYf7g`LBx~hUwlQdp&nd(Y=(=Ln=d3#^2K=5N{-ecUo#3avsN= z%f%wRasD%JZ+5d94B z;Av+8QE^}kZuLCxRoY!^$&N9UkXmbo(^E)O|F?fwSFWcbz<2-j>ErIK#WesoP!9R_ z(?YSmk_HYuAZ)K0?nyJ-G+U4a>k4qg&SS-;d7q+leRyf z#*O$;Bei_P^SQ?#=E-9ILx+ITq3UlXsHvqXpQ8>Msi_MQuC8_2*|(%G?0BaS<4OU9 zvGzt+<>xB_*RcgJ5Z2iWkS(MIH=SwXdf9Sbrii?Fc}r^6=uKCH6bGVy%(5SDX)K;P zaRFGfU(qKH%WTS@=gMXL&Rk>>oOT&~A4>T>I$f`3SEU}+ygd_l6D5$@)R3|;Yb(f5 zaUqVWo@(dv)X}RlGpDhbxB`)=X$fKDe$90_CjbOljY9P0{WG;~sXeA^_7@I^=*FJ< ziL3sud|xfBBRH@63|j&9EDB}k;-~-nqww4FFDD&>F`1)JV^4Ji2~|;kABEy#2A*v$ z@3b$)M2HW!B~sCPxA=6?#J}8J`XfEnxmW76{yS1%`0#`tRWLfbX>#HMr`<4?65oA* z-x%idEVaRG-~KFFpSe<@r@hVRf8DfHahZAiC}#ltb(9f3ix<($O~t)Q$FIu?;?bKB z->DmZ3Iv+bCU=AqEKw#<}173PkztmU1 z&&F_a;O_ytuV!*ODzEa*n1L2hH3o0J`{yq%K*u>hYA{Jv;ZfgCHfL|Ca{X2-#vEJ< znUAhXp7%YqB=YV2`Y+u6KR|m866#RxYPq2y4wap9hX`Wz>@44XV9D=V4!*umtKA!| zuQ2~_%)2ONE-eP}@%?_n0Ey(qc-kLoGWTa}RQPD`f3*L@BKSYR zc63{%w;Jv+G)17{+q_Eb{rCR}lDCSMS?f`r3MZKuHp~APo#5{G|E1;s*J(Z^pT~^T zRM(G2`x^*vz&~HR{6Ee8w_q>_K`=aF5kCEox#8(^MuHFfXxRV%+o^xW8<_dx3^k@o zTa)Xnzh6{_m^f_b{x7M27z1W1*gHG+0Lh9x1ILrpJ$#qO|IiOz5zG!B&&rPm9p&^T 
z*@L330k0ArWkFAnp1nYTs*-gDM`TV&K z{eh(+&$6bfCnx?PDgV1lw1eS`U5@YY8$Nmdn_)2&+4wPF{6E=nxgtcST=a$SWay1P z#{As&e=>~LxO|1VIw!-VD(FL#4{G5*=`qKbH7_swR^C)p5M_X*yiX{?e}AI1(% z{7}f7dTYY_xT&@@96i0#R3|Gom;R%=Z!Fk8mj_jwlZ{;;Q@Sgg;S3jf8Y-o}Z~mh+ zQdWku6vcaEe?1Zv{YLn_F~o#(LWB0M{(pUiA3yfp6pb07bl?pEGvCL(KiLtD69pJ) znr)=JI}Y*TS8=rIA@S+F^Gi9w|Mv3O3rrP(=456*4rZ-!P2cIz&rAyyO8V<6ty%5P z|2DU}spKBnj(%wrsw zqM=f>YYdYb!6(>~F*0xE_O)@&Ly5r|7B>uVSoEiV8>Mer*xxcaq4Cl=VA`$G%y%Ri zR$#v+FG;A_X2v_$^YDNFCzC1A4J`V*B$X!_ZgR!QENCL70Fpbh^!x(QeoDu#4*>)J zYl5-S1*fG^L%gkFxtHwp!3Q$WS;M|;J!ILWo0v(Fhh(E9m3ZUvgF};ur1-)uK7ErJ zblGj~l{Dnxo&7gm8GfV*Onm1;KtgH~SWv(^%i*Ea-E(s=mi)L%ThCOQNlELwM z#)cptap%puqu1dmyiiIWIb+5;Fx3~dapKtFXbzY(~MO(tvQBEh6>YV`Y&a2gx zgSNwIj8@%U;X)^2{aYSlt|5-eLIyp~qYH81-QnB71wtcHG@|Oi$V&fxSqOWioeC=V zdnJaMjE~u_^UViDcFxH?WE;;_suvmDoBO}XO8?N0Qo_28EWK_CD|VRo7MBT~)oFXQ9@**S!{tmAUsF z;7<4JBvGVG7iI+icBd9PL9R$VXy}yTv2&Kv6=P|=mi^Z@`^ywYcZ-SS^j}A(bO7cP zH8qunAcmO!dmaDr%wF!W^ZI3#IX>>ip3?vMXa4#5KI$*a(&!k0QXUO`;kVJ6P|oH5 zT9LoJl{5yJrV`bJ&y~)9iv)BPW^rQX!UUYFMF_9_>>Xb5+h)$*zqy&q6Fc{u*R?OH$Wy{0T0exB%SDwPdCy8ewp z#`gz^=deMskA&(BW6cKy`Tn{1GTrWuV?rmlw6DGJpS8y_YA6t+YrQyK9QzoDQ?yU&o zX%8utt3;RBRJvjUzul(4-GZ%f{**HvdwL7H`od?F<@|Vm_jvx%sKupucTP0(?4bR3 z+Wg4*x8DLt{KG%(|H7JS<Ns@BcnLN+VXfO4R72HGWH> zE0(vrk^LJj0nva33PrL*)$h>Qq&s%@bXDdwiVyn!jV^4#I+6V|0KtaBW^de0T-TzR z8vTE@0kpbhp+cnqR-4(^6Hle|&hOthoEPCXm%k-UTeTjGUJXigv>o&qzLI)9pSKyG zq<#ksa|BOqcrr!4J?=73Sl68xAZB;oL<5EK&iVTgN$$@H2HQV;e$HkUWtm~WRbprM zE-MPaWz*OeFU_ll9v@`T$+4Z%Cm+ImJSl#DI4;UzZ{QMP1PACu1T)H)z{M z9wXa4dfn7c8lo?SS$TaaUE(dhSgtYU^}Uppxw@hf~iPtxr5?MJ{!*a`)MfDUp$%+yS8P<4B|Tk2C) zdn~>qIKLBrGY{XrwBg;*9G8W zuUVv&H#^7&=b{QaFQ#tqpH?E#WqDctGVXlpojz+q$Zmi7{!1c!F@X01y>Q_$!90Vo zFx%>!`G^W81Xm8MSX~Xb{b@ST_)*dKOJSD#!Su(=Cd%fT^$qiR-BbbM6y`e>)OMx! zL!1bYg3KS!Qd!#tieTJ>RL=3HbC2yeP?~+|qJ;{QrbR#ZZc&%1yzBjWTYdpWB&_o& zFQOOp|E;^sV?{bZ0tv-2a3gtr==wNw*$=wb(6wo}JSnRRG0$=b&E0x67cg#)MuejF z2i^apwF5b7pZ(nc)Kd(>G%wEmqBiI+!dB+n4Y;Mt?^1(uF3NO)=UlVvOUcW*Kc1ta zuN@{e?t#KVWa@l0aeF>4>T-HT*uZ$^p7=Vag{Rca;_&}?umpN=_93!6tI{0wmk7?A zCxl59+cj51E8fej=+3*v%gV&$r_Kgpt(}i|CO((fE8}^=N))2VFJ=al+?v`H z)kp6yp)e?de7DI%HjBYHEWz~;w`{+9d6s18f@hjgkQB4MldlWwF$I!9KqB)f_P!hl zA4~~-UvIbV*{&dNO&xA}@8#`n2pTaIB4;tTliN8t?LwqF*{}?o?cg(Rhhuo}oAgaY zkfhX`j|;D;swtE zjCR{$Y4$^T)4Zgfnx@Rk*AGq^A$nyd?hy@^lI$!165f(cMJ}$XJvSBH{e^4!AD8W) zCI0DmRWUGX)S@yK6A^Hf-x9F$TGCiiN|R^9bdr+gZM!FXz@l%LqTjbmm7SRIOGS0g zdeD>F@E2f`yRSTg9)H9Anr5Fx1uM~9eRQJC6%j!e|2Bbn%nzz>6-97ua!$WZZkSPK zBN<+OP*fYqaG%E$1$>wcitl3}+MeMIlmBDN)m4r2`F;2$(a873sv^C_>LH<=e8VyfRjbqlhXIR)N{tW#K;M6eY} zZfbN{h6c4Vlb2=Fm<9ovR_}-PCB6x2Y|#)zxWjDUP8vjN+r+M$N#6AMP;~pAV4NUb zKnq1A-m3d6_Z=1F7kp|MkyN@MhuZVjQ=J!Tgx6zP5kKWyMo<)P2Wq1PL5xF^B*Bf> z=_;E|xGvgn5D&35v`8}Sdf~9Y%e#w$m@N-892HdeLtfPL`1rOq-_Bi;tT1c3-p*{d zXJwg?h^&b2OUo+1G_Jd>+JVgBX9;TUpolri+bjDa0-X0p5bp?qwBow&C5Ti%PlgCh zMec*hVVI=wQ*X+jF8-sOK*Y@fY+ZVaHf8+u(;PjIXTRWxn1Vre8AGv|N2-xdj`3Y%Skn76E|IvN{r#Xms8MRMJ6 zFXFHJQ#%$SU$6LXJOKmZc&hp=_8D*M@GmaX|A|bPkBOC>a4pro_Lc@su+Pg@vTG{2 zJAlJI{25XKH*DEIn66!L?e%qbQ~sh^syyX{OrwEt2J{AkMpeTdS4m&~VK8=E(KfHM z+pbZug!g_;OO$#0B@9E=aCexDssACSER@Q#15JLd<gwLuB`s4V)$dA= z8p@x)KH;Dcf(wuU|8RQm#Yy%Gy!^pP)-r*^Xq3$Y2ERmKXYIE$d=6P8tVdNhk_ z<>%z_krWd-jN?Ya#(*)_yBS1T=PoBYfuQdRf2d zUC=bN86_Si^0cbb{NYZf4ud(0BjL}`xJ;ViaZ(b(?R|A>__316mS|@u+5?#|rzp7I z?z<=Gc4>4Y(u*VbJQLGf*>-L@O5@!}_~SXFQye!KruYKaJj2-ASW<_AZg9?bSdOfn`MauGf7g2BiI((3Hk3s{Bu{f#9Rgk4{tk4`0b3oI`(bzKIZ& z5LqZYjARP{ZGB#Mi9)Hey;$2pVNLQBXXh2tt}5oMFX;3jQ`GJN>x1uqzB71W`j_(Q ztINp(OP5;pg@LFFuimc(o54@OMEdrjtnGxR4SKUCpITq1=a$2o&1`96f}5U~?RE2P 
zgpR_@L1anN{^sdlD1wtK_3iY#stw$`n>H@{qjXh>ZV*?G#E<#-dUb+C%YBLU_p~p83R{Ml->--2 zCLE3YSVdQrmoG_GP%t?XPlkwm*Z4Br46h3Ejq*P$bM{&*7xR!|x?ocA%~JQG!su`R z7PXKN2BI)ycB_Icd&*lOp&JK_j2)3J(fuD-mt@Mc$NGJ%bCrH0aL*`Fy9_oqBFn(0xq9#UFmSB6Sa$?L-uU ztFjXy^J`pmqagjC>>UsK>)wNGRg;UtKM^Owi^VT|2m-UY+N@`B#afBj61#j$2y?U%%6dLV$P=BI2PWEeHmXRf2pEkFRra1_r<)L51j8uC zQ;KJH5&d;uhtZ4_k^xtJq2N|gpOnTZil@4cjE^($GLR`8fWt?|6{amGjK?o_@ z<>t);9>OW{fvtqvP_nekF{%+1{m`u9TuV}&==6Fr_F=&2y(zGf-Hj?=C(!SwzsE`r z<2YaUuE<=3AcE%YT$sph@I{R7jU=oy6d>fmF!qbSw#q(Jn#Cr&P z)X#3`Iiiivz9!m4nu17xLCg>|U867U>gbD>Q*LGl-undS=AXKzIuaw;&)^|`|2%ZF zMEFIYu^zd&A&!{+5?B?rRNrnZ9GhqTeKlTJ5T+wZT&@f!sEgrE+PdLX)cR1F+#a3j z)K`EFAfxCETdB7t`rYK%3OhEc2*=n4If_{$r= z+(gTbI7<0yd&Ydms<)1!I4ekOuE;~8nJRG}!Z-)T;_f{4+!*^K$1w40l59M*Lls7{ z@G$&&i8>auf27z|U+wT0u^ek1gm=d+wLT+fMAN~=Uv7>oWK3KScL7< zeHM#LMV?|+FZ*dv8C7%3=OjdKXHSmr5R}dc;RAhO zZ3mglKuv9DbyfW+gvjq+EGG%wi>h;NWHbrpf3-JTWPT~9eH3f6p><7cV(lQxKNL-H zy4a<$NaL0I9!U{#+r0nWP#e2${cWc|Y|?ThO{RJvFCv}6xm=Fe{U~OPDxH11F`Q!N z6h*lkh&>xRRyGv#no2M9gKF)2&FzkajIN3L9VFx!WdRlf@pGJ{kB`snm(L3`4`{H| zHbMr8iuykUXHCTmye>RsJZ^y-T0z=mvbfeqyKpx3kw@%iM#{lT#_9!GTZ+8Xg}?71 zo>|3ktmt{BLuXQ-)hKVd?O=R|cR)_@XqfEclXv1oa`l0VFQseur&AX13fk+KqWA5^ zn=Jc5+{xCH4)i0>dYsD1J;g2v-?g9_65XDa@fA}JRq#NyS;YD=CsDqc9&hiC)!!0lJ?6 zTAl}Rz9qEQV4rN&e$#B$sL%=(RMLEuGc0 zsEln7N%xqg$g&Z9;#}+zTwLN7?BK0bIQHJ}=S+(QV+vG|Le#*3?m~Rl#yEmQBoDgF zMg0stSWXbERO^D(BvY=Vl~j$n!!!CU+@AmgGGMhpj8Wrte7x=IIOK1TCL6nf$i$LJQT!rBWp`m23^*3t7xg$97Zai+A)>cR9FyQKfN+GRxP!!j#ZQ3P z9gK49k2Q}aUqQVN5^^$^L*e{kB46RF@OclWd?cH|z6VMK?l{29BbOi*-$;H_MYR*|i0nK|f0dEHtx zjJ7L}F%}HStb%;g`m8I~=4?*aEV9B_*UK>Yv%WoWGa)VaCa&ph5GZ3=oaJZXVVcJ! zvZ*)~(uI|tThsDon1BM_VDQ95<@A)^jEkH?>i?M1zbvX6fxnz7hl8xpu?#nL#NzpM zen$O_uzm!Is991a$vWPfMlmw#ZwnmK7SkRtitMUk`6e zK4acQ>XP%(zW~|1l8r~*9!&tu{w7&hMxCQ`lmcQpaT$6SulnwN)nc(^TOi}xYJRP{ zw6NZnVs_vz2unU0Ow{r^UB+rpt~l}u!VS6^kp~xFa_J@x20}-Qh8zl{rL+)c(0{Dm z`q@uE;WA@{yimHk*FuLy#pg8}^ape0p**p19i&pl#lkxIHH&=uiJ;7-<)Fj?SfMV{ ztZ-?KW#XyrjDJjw5jdLkUnMuI{*FN^ul*Y~D1@(sDF`y}_)eUv(%T>wsyA$ky;by< znED8s#>@}JAjw(52?FL?6y_&uL8$pW0;zD;9u#9enN1J0R6W8z*_P0kMgDjgi*`S< z=bHPOqR@;yvIhe`==lWJVj_#4pnDOV9aYAls8>`<;bn`@IOgZSC01*rQ&|xr;LYAz?~ut>uydK_C_ATSVy06arc+u`3GXoN;8b5l zQ;)=XvRw|!Q(5=VXjI&Y=~4c8f8Ee0<|^5uwKE zA)RH(swuy-;kR_4=s@d;RDbW4G+3N!mq9TGJS3d<$KYwLJ_B>841;-gqQQ_%i&lc2 z@pt|lSA>Cp{_h-Gt#X!xVDzzCpLXkn*$r!x{&&Rj*WJCopi$SQC+mx!_ygnRWAb>< zdM)j{-o&I!YfpKsHS!nr1Ro`<(48~CEg9*Bf2>2PjJ8TI3b2ZTY4@_DIan^X=g;vj zuct&swYQzl&@9NX{M$(X%T`CrLoVwj@cYbaD{b3njR5O&2-WEDhghg$_vn|D2pwCA zD1wP|eHiwXX=lU3Nc^yP*IS3R)Y{np>pF;FsMPG48c%F&AzA<7&;KPuKayAmaBCx- zR5x=ivf?BXv+e&9@;|@F{?pG#rTXhbB4g*>Dn~5z*^LuYsRUirNZOQ*V`K5@g|I$E zX?A)^I$49|jg5rxe%^Trf2MIqAfqm>lgy z^66)x^D{;a?O`Yv zy#{Om7uQn17=kN+WXZs33hWIHeiv22J8yf2Z#Fm~ zE6|0^yKn-PM)85u&OYY5=kxQ*?|*vjE!7CWO~GOHPlPy6j)QfzSj84DS7$EbD7bDf zO23=zy8uF%KAN@3@v}=zKH|vMD@+U_P00s=I$QxVA;G!`W@ka_&v0pP@xIEMCBQxB z5m5xVcs+_-26i8Sfz9u)>RqSJbyA!>rDPQ`wsi6y$Dbsyg7nVyJ0vx_s(a(sc;ruI z#&&gNcM0Y3XXK7HCwk9dz8#ZA6hy}lMVq_MygH1*mb{};USuLTV^C3~%X@MraG0*P z)gLLMwNBs^5a0aG@ISHk4a$RCfvt1)wY|^ zDn2EJNQlA#fLkoQi}upQX*B%E&W*uP z6smTo+$~uH)?G>2-kl>(SUc~+F@~Ok!jcjy>q6K2-|;^-L<%a= zOEqsFJkK+(_s$%%F`gbxK$4my8%#n zL<-2N(}Erk*fi9!IWC!(5QnX%Z9b2jc$npk;(Go+k|7E^Vbmo_3cg47=!- z3P0X!7~VNWikG3xN~G+6KiKbb2DU%6F=k722OY(1bH_UhX?dMCk@o`=J%3tW7Fev=B=<%G*T@4xpXf6I+w}9i`&ne50u5!~+ z2Q@xzB~N;!EtTXVzE`2m-VFLgZ;3dh-&&Ts&pU3nAX$xFW)m@acXBI*+3Alc(^+&R z-`%a2Jj3V&pMo_ZB?)YPrfy&dR42ujKpS0T3M01*uPgMLIS$`WnRW=md4A zIUS|5H3>fz?7m@AaP(-}qASxm$oL28JJm>0QPjkSreV^@i6*J#-)wl7$j*+$Z_TTc zF9R1<%Hw4U0|n7pH 
zsw|}$bZAND8<$YhOl?pUc%%d9Dlsj}la0mfkvi=fAUj^TWbK8#0iEJ6vM2jCm~4@uQQok_PGWH`AEZ5w6Su%(xETK zb8#?_n8jug=;;c@k{A@Ydx}ql=}uQ~jrEY9hR|Fh_VtlW{hL?=O`>lB*bU)F>Giy0fSXx;YT$0?uMjL35JR>7LJEJn>B^RQz}Sqw+1vB>ZOn zcxJI#&Ad*b7jAg@IA$b4SB&2xz$~N`iG-*l8t>2KTjZP`?Qo9m-e=wq>21RGTl~9B z=-*xDxm!~JQAPY(4rSf#9`byvshBwU-34%t*N*Qi$Hap@BCv{t0FOTu8D+VX7ZNhgowfOo?d$Vm3ba& zop@E#TezSDeVV_G;BnQ}BsL1V^KNWE(R+@@ z%m8UoHc1bNg_jkOLH>UA?FZc1j3~P6I!*1jFi%3Sx6}B~0(>A8UJTCY5*)4+K%}eW zto&YJeDeS55<6K=exO=>$ofh5!Pa|Vd|C3Ggp7*zP^zBQmx&O;6Bc8+poZE$+)VN& zV*>5h6ewv28MQ8-hJ|8|_>@g4fmYkfKL)LIsR)&ADlDttn!AiDN-JLzL!P2C%^H=K zVk8DCk`@|DqZUqd+LUc*HZo0w_Ar_~s9EkGw6k!9+V0xE_@1pauzEi`aqlOp&ZDAN z7M+#@JW~vP+#QS}5HA0EGnPUXXVxO(_o)A?v<0v`yPo5_=zIZ4V%jNt>#olxr?ieE zErCIOgVj`(`UG)Z<$Y!)ZSo1t?Ph{AlMr!s#P`o~jYBQ+H@?)>cK15Ihc(A&M4>{# z$1Mo$X1m-n7LDz6+fHqt=ecYibg*NA_s89ZG24=8O=uY(pH4D@uBQoGe2Z34X=Wc( zQBT2T>xBc~QN^)))LI`R<}`fW!CCOEsUt%;64C~h#^oF?FW{r!0yne-VUWI%f4m+F z0<$OgOiQwujRPEO-eyxe$$j5%vh_2&cR@us=g+DiBMDSvUkR!`9E4M25z1F%QHLqr z1Bmvw3i+*lJ==U~@XZbu^EKPaw+yY-gCBxpNK1W!N zw?L0)j)yDNOnZU@&({8;N>mZ4+wAvSs--Fg6|2cmBA?=opc94SSy7nQBQc^}Z|Oi? zUdHNXk^H%_lLKj@oY(YDmjpQv$N0X3=G|z=y*OLbj^~pyWV#h(1PYrQ6awq1)co^Q zU~hlV?>nFLq)ah)RnU1Eed-YiVDNg#%qhzXI`rSOt%U60>2Q~RaJ!(<%I z-aV*zWp@F+JUuA~FcJppqfsK2?wzlSZxS?^NE<*%7qc7Sx*yG!xm%I(%wkeM$x`3z zgvEZMiPHp%)BSq85^I;s~*S(wu_^ zv9ZE6^z*8ys=Yt3;$CGql25}G_qSb85K%B;YSKFgoF31Yj0~NV&e?)kIV77`ir;DB zX3Kb8x!>vyp$wH&G#r%bF{Dja7pL_RO%y;=z2D5K1h^vKR`L( zRZ$E(Yfhl}%&BoR7T2$qd7E)th;B^Lg8Bj)sGXx}dx)N{q(aTqLH=xtbr4@vaNq7o zaWun5FZ$6*aqB(vdnDG$vI~Ra-d_Qke5C2ds{uI-PUm5E zQ%>`feaH0VGcPS~gE!}CL5f1Ok+A%DCSIi}^q4GTQvaXdxn}Q4Deg4li&Y3CR#urp zpQwC%Z*Aj*lq93!U-0+pf%i=U-uTdf*}&I)$Rz3bnq6|GCIijFrLq4p9?7uZ*zT{j zj}0IHK%uu-oIwA+o%A?<2MVz6RmM$raIE4lH^c6@T?&Q$(Nf~bc{nDC48j5YuEJJp zpnaB)@`ChT1e44He4js`k3Rpw-&#SI4rywyf7$kX)Gx5s5mCj;B&x_yxIj~S=?Z5dx7{etRxpqxJE;>W~uRD$3)d^`x;o9VtmVb>jvoyrwt_!9}B{fm6^M zc-;S?*?hlm%0xKUBiIvO1Sy0X&YHgd+T`M$l^kO%A)@(Y#_=_!;-l_rSn)8^d|XL#VP0ZIy67uK`!o&MieXO*1G|0@@x|o?WV8XR=3p2X zd-yRv%M}W(9I@Sk^$t+%0Whd@whvMBWKg71O)bce?GMo`g1|);>Dd&Q#sou zJ@pN$-)SO^{rYLjcjL}*N9ape`FUil>IMs?9Pvyi3OZGSkV2|Z4%5uyQ7;>%Zn0pi z$j&OYdx7mtXCjfeE$jY8?>z12BhOz((7KVt#I`7 zu*-#wn+FjlxjF0g&{HnQs!)%Y1nD*IW)-L#oaBGl33b3@=KHlPYB-2w8QgM`<$7}+ z6L}l8m;wRSpbrAdJ|F49AnO0 z3LJr=3k=9Ohbj=7q7#LIdMHAXq{aMZyUIM^_-HQ!zcBk$g+p-_AK!~b*BkFnrO2xf zJ|NHI8fVmQVCdoAEYVcLXmgdS&(-@^76io<4=;63j1Rr=2Uxt@j(tNS zt@M)52%3=4-8wvDhup{u$4rX;ujx;HLcU~~zyxU+?+0|Ar=;lzVrp+&2pdhxPX*aZ zohmu$G!4AK!0`_ALV#WU+F5->OPs!YLNC#lit&=+qBr5Zd_=X=AXG$olV$QBrZ5<) z3#*bg+f!v53Psm1-<8yxpZvy~olE04Yyf>gZP(6=%9l+Jxil0T3c9RUE$mm1MnJt9 z^?+x%+4lMor+Z5dqG`Uv}M`4lx!bXA($63ccGkdE^EnZ)DB?A5pB4vTL;|z zU=9O0*s?^}l+dTc>Xzc}rxi^Z>7-(Cy&;~v9MA-`TCweAWaMFFnasnXQbN_lxf_%L%ON0!!%u@P zrPNajccI#D5<$UDGx=0hr|Vw$NFl;%jgPAsUnjsq>$Sc(PzbK4=c5Q^xsesRhx_C_<{Mz9a+s2WzGaX-$asaFxC`|=ni&g-Us?#MBcZ`OH)0_y%CeT3-A(pmrn~-KKsKoi+0Zp7)(dwE2=^A z@w0Hr_LSU5qOoSr>^yUTqT(djo%b=r_t`tJ%kH$aC_fAgRyp#&0D=m>mC`XAa;QRD z$DwjZ)19)_A|;OkS{B@CF=^gO%+xIE;AeVLBtPU3uPRH1L`|+pDY&SlL|%f98XWh# zCopkTY(DxY&(NvS6pgOZITa)3`D!Xl-_NIAX|HN@#*4l5byyX6w;`yE zG)@G4qZCHfl;xfssoKt$N`l1sk-DjHM|H}xI_%aHyMs|yHrnvNeFm@?Zfi>xq5;E(XT&Q zowTX;`OLX^Y4cA@Qn9Rwjj~OZifgLqTf ziT({}NL=iKQ*UM0DOE+HwqV+MUd|lY`qr@Q_98^>-u6JP+2HpkT$L%j=N9$`1Rvhj zmfPCZ%h@69k!Q}O;T$5wluFHqc%3`Py*J%Lo;NRn1?LcI;#eB!ZLNrQS!xJZjuqpr zG2%3TW(Ze$fmlW+pSS$3DJP$!+|a$Y&9_R&rf*x3cL))fppSK8Wu+1gnLblsr(Q>e z2$K#6_V5s#gMdGyDysyi8o^*3xV^MrqEtyYVOMpx6D?hnL=14>*@qIS?azH4eSgBh-*-bceb zhjtrhiV4ymqMEFP!$Dp!evwZ&;n8oKnX+@H@Lj7sq;dCm6K(9F$MY{hBO*m?T-)?G 
zt*Gp%t3g+R(JMV(g>@(1>b-z)hBl>CLS6ZEecthDtq<`d2jv3RMv7-G#$Cc=gM_}U zxqD7A&Pl7Kds$HKv(Z`MIUa|#RNc((t>|q?qDfChd~s!Yj9Lf*dLLg044I-` zBBFG3JO{&Ai__o;QDs(XI;Q29oyvu4emK?0^@jVl{yiF=vf|2;pZ8ZnOH7o+fd<6t zK$|V1Bwo6wL_G?&DH^zMNI&EP9H-#c)ya!w^xz-@2mL zW?;TmINCs?NzAWvE9Vqk6;_3%O)Z3%*^CGiZo#($TD%m3K4Tb!SpLhQpnE>zZUqqvi` zUb<_UNHi)N3hi9Zdp`w;7UdLZx%M#);AYy@YyiD~gqxU|{2OyIj|$F%E;Gp%I$jbE zwSA&npFi!5T_`2^!MejqCDUWJZdT_r3I(pyrEkD0xn^Z`>?%u=ByO3z*7iN7p1Y(2 zP(LqL<$TXU>J<<&5qI4=Ep^hc;4cGBqby79PGN-};i&|7vzVH0wn#9Cdn#4rcw@F+ zR%Ukdm4Ld~sK-pS!1cqlHgaYnb|qulKrnM|);qe;OM^#N|6O>H7rF*9vBM7OX{%)u zmgInRZ2F37Q|kX&@jJX{TXG11qHxUqy9D&dTRNGm&9Vuj?%NduWF|WELGw zf(HmzBBoY1%{?V>Mn7Uc9%CGWBJU-K$gpm~5$OA4o%QrDW~NfBL$Ixg&a$KY8kR!& zNX(SO8d)ACFM60|>c@=LpFPB0jJ|GKR~`Er2v&>oZ5Q!(U?m^6lleFVyvT{@|DyKG zc=mD^tWGYBcW{=)B3Z8sTv;4}3!F7ft<8?EReSbK*C6d;$J&{mgt-rmVK>MNYt5vU zUXW4$nRX4P(Ogy)*KgBQxB)^n{ehz@4Zc6NBltEdCqsu>B~r6UupS7jP1$jhEF^el z8(l7{r5aZGA$Qb9+xCQ~l-3eo@woQmdWe@m;T73~FOSEADMyPZ4^kj9%zyJkK|ftr zD(6BgrwgTLKRZQq$%kdOGFL|ZrY}tG*Cqt=*=g;bsdTZdtx_yguZ}k5S0^NI5h{~4 zbEmJk8jFzM67KS?!-40A{t7$f25P4?7+3_n#ryg%pX^gnl@Po?rGogWGt~KwV2qyA zFH%sQovjp#c{}EMfc3urMnw3Rl5Gub&E!VEwCSX&-|`mM%m`x#$61jdzNI4KEgd2@U_KcQQ|4ptqR~QvPnUjM+Oi zxBlW1jR;F-maO<|XL4vkrw(q`=go!gQzDYa=xorY5z@$tG0NAmS~}MOKLRU^%Jf)? z3kO{#rL8roFvKg<4kB|spRgun^HqC@fxN>3ly_X_xo*T}v^5p`s$ek>e?(Thp-im! zfHFG9^7661osmx&V0;01ahoLBX%`}%NpZJBKM=W?x0_?p20c2o|+6BbD0+X~UnawPhDBc0IwbQk41ROo?7l zuLtZ4n=B`lm5n`Tn{*3g23oA@E;!`oWMn@xDuD{&_xXMk(ZoTp%Oo;t7=mq*=!Va= z?+X*L=&^PJyYUJ?x)yDE8*+|!E;BY?`6_OZ7i23QCTnHdj@ua8L}o{H z=8R5ObH&y8iMuITEaG4GqU*TI)OmirCesb@A3kj9+nBDzHI%es|F*dmTesM~_z(&h zB3g=STWtMW^1e;5#o7HKFM~Cdh5K)m8zI=xh>Y#*+dySJR1$kZN0)%mm!k){rS^6i z8K=5@zzJdc5{)AEyHB)kU(4%N4Dih&bHBeUgZ}0slgviJn#o<}tLCd|7^V%=EK<0J zOj`nPvwSl2se}|=wss}<60}N}JcRap|4{8y2h#Zs`&%c!l%QmtrpEE15phd};*UcC zguZXwj2%cf=~C}N{I#rt$JF-Pbp4I7paV~A`}A^5{*fdhd9HdMJXrd zj*Z;1wl^e}Gz(E#_ta>BwYrjsF8fg)V+i00>M#l3eg@ZpSknP!?E}pENwzQ>w&0?t z3AdNnl7{eS;-Agi9-2z!de+y05tLUXEY_5cA8!x54ujpx7YgteJA_t=RLkY*riZF} z__#)O8|bsL_#7|B3?z8ne3cL=&>KN4#i2%9C*h~0r(C#hs>p!EnI8WAZ=|YP7h1Mb zoqwv~xkJ1>--sug#n59~uPT&=bEV13JoP%>- zb_=oM-zq8+Cfe~E=^m6fa7zAm0=fM?-k*cSfJ5X75xys;zS(~6f%se3;{elIWw%_o zQ2eUvk19Ts&N_7S^rk9VJ^}Bb=bEgvYYVILsF@3Y5A@TyXMx>kzs|m({H_#-?f2ZQ zo&Y#}4*hEkE&;$-e(7Rf;{xM&@P|KiRqeAa3v-ilM$Ok`PerZ$tx-@}6EQ~#%?Wc| zZaDemYY&3YU)};A@XLR*w5jBtm)de~Ydgh@NW+{W1zIIN8leRa8z~1Vp`wf}g=4is z-}c(A3unakzN3;ES>-+9vpnngvO-O&XCBu8#KGStfA^=loe;m7yp{Hi5C z!Our!-ox5hDNG3KHvEgqO=dd*bmp*I2vcnr(UipIZ3`-8&rm$}+l#N3?>#A12IFbvxA8;@^gpRR2L?uy7xGG?jYF5e4DL|b^5r(8?wb6qx=i*UKH_t3h7_i zJ^w`5BSQUSnNu+8l(iFXTcFJ23lW}xHtdKtE|ISWMtAyT4Tqk2Htn04r! 
zvtw|Ej*P+~t+D>GfNjT|Kl;^2WQDDVsSfh|p14rRHW(reiSMD{iyPJm67M-ap` zdN5FEm>3POU+g&r?AA7Bj;j}~Un zwlubjE~71gbNER|FBbPof)-Rq#=I{wt@RQ1!qh7AiHq1zD~1|}R2;v&1yP7tH5UB@ z?ss^ry@%H@QNU1lmUCoc)JAy?xox?smMjUE*J$wTG+22p+VH1uc=+6bG*6;JBY5cG zXfT4Xd>Y@G{pwMPdZqDru5~kwplN5blj&g-zI~tEF_d*$Fkw>Gu*LQejZC zVYsR^ad_zaVog^NMEZm?N-R@Jk-R_g7h_m)NGDVXB@bse?&Q2DM#K^=tH7Kn5~VWR z2h`=5CkO_m3k1v7RV64&rn80h^ur{ZuPphh{ABOZQkzCFK|Y6VWmWF>6#58a7Mb_v z(h|DVpuWn9aU&r-4YPoF=6_V?l;YZpdkwM)h>mzFrRMPR z1d?Fzp!$GL_u2l0L~bv8dU^to2DpZE5c>f0FyO&#fX_V2Z|hAmPm%DaE>FP{^+Yf{ zEvQG7T!1BY*qBa91!8R;IP`j@fI@N-P!_@uU${3>54tFW-v3^iLuGc7sdYE0?mjq7 z_Cq}~LtW{ma(v-IailzeCi+jiplC(@&0T6T2G-Bo_fANeN37{sMDf1-rL?Yhw`To- zK!AQGdw|_r#g5#1hAv&U zy1N1#uyabmoZ>=(*d@wIQumFYG>a_DNGemOG&KC4lY_(|&wJXN7dwK5$HafFR-Iws zgp&71HDvjXdQ3RvokZd|vb+`*#x&24M$E3`<`n z%&;?^>}UC7Xb-;V5~*oOe2wZ7h_9dFV@=X0Fng?>&5Wr>oHZ{MZFR_>V%xVbcorX% zcGgdLTKcw8auR>W=JWUq)o+@$rtcd(29_LVwI@dqyY9%m%M4#_6pp+q(8>pv-$Fi# z`Ag-U>t^|U!(3R~IsJ$J%;@a%e^39!3pMt zT*tj`=M<~=4_9x$50$k&b|iP1Ezq+wF5L$weh-%!8u$_iZof-L+zJs3v>96&kaaF@Re$C z*ymm4FGTZcp-@#vqnqg7w?Q6ZWXtOHa`&W(@le2tUbxvJ^ z&Eo9&d|o%OvPcbD57p%_Zmtfk=P;O}(cHhKZ=~P|5-B_vGyX1|;>cQJA{5jqjOpVh zoC+Pt9=eztDErfbG9-_C+UkKxyz%mX+Pdnfs<$O94bp-j%^^iny1TnWy1To(q+1$k zknRTQ?vQTjl+O3_y?gJv`s9zZ)>-Vc_w3pI&CEA_FpGcrs&$}-RMeWawZ+ApDy-j4N}KA2S2WD$v&)ZSohAD_vKZYqcNyjN1ON{j-b5b?Gv&HaGH z(M))y=->OlG3!7nMDV?dt$ca*Xiy9tFE{hD^3eBx3xv0pIO~AlF~-WY&NgT@>1BO!yh6{# zy#pHQ?1V7OrFc>x7t7zimgrztP`c&O4m52KrWbA16x4-5*?+Z)2+G3HWZIsU{hnC* zH@NuInKY9d^3`F0Dm(3$X2MUux?unFjR}Yp(O}5nNMqc}GsG7Zmfd_i-}>)L{;@99 zf}$g{0QiB4x;8^T3w7Gef0>^CZ|VMi^8UO}6utvr@k)_xGJ~uw-c|GX+ePeOK>`TA zC8(#FBtHb>15k5J&uPXBgk`IPW{9x=evACw=ugK`Tnz9*Q7(#Y+}N2@ZK?vT@GLiy zLyNzWqJKI9{6zK$cDgvPraIhTm_}gm4-hV`1Z#$lOJ|huhpO11B0Ay zF@wBB>sbEC_$RHs73yewaZrmbACdK;{GV`&U-bGO($3zV;X9sYk{Kiuu_A`apOj!z zgF@%$Ruf%k_wV2SRpAYbU{-D}Nr~gUmgsx*LGbXb-mFUXKaLPKFX8@>I?a963*y3U zpGurR7E0mE8kWZ%;klcJHABa0X8u#h3PF&ARCyEN*)kDX!ZDTRf2_z%h>)C|T;I!x zZv-|9>d*6l8nPnGjj~uxHVR{3`Paaq|2Ysp2f^ZUy3mz4PAh&u3g-2Y{Z5;oD52Pc zQ_^60VN%#f!&SErMtw~Fs%E@eT(9)0x?fZI!MLKAcvR$0Mz=@D{U7}C-?Qozi?;){ zJ(^|~fzL9Za5*cFR}fU-c;O;3zsDiY-fZ&q7JF7Db7a;S3)T!gwpk-;pRDjwG3tGB zpX3P6hQ%N2lqH6R1Vd01+S8s?+{X`ih71*?FT8_$mwSF+%!^0;80Ch@r^NrZ`SPuD z`pV=FMZ{*MO~#1>hx2S`*(k0(BHXV(IA=D+M?C!Zbs;ZdeQI+5y<+~79v1|D%H97( z+zB!PQq(m2lx(Cu=Yp*d(T!@_pK;=X;}T!Y__bx0ZUYqY{iUc9*)3{;qiNcRoAUWV zZrjrrNX5EcFfgm4zN^Wu9ih8kHv<*-2$Mf|2Mx9mjEN#ap$0TaYYj*8UQ^+bIhp3M zXAhfEd$Y=ES|XSC`Rz8ACC&Jzn_+ZxKPKZkgxkvJU`(o--3z?!GO5s9+l)yx807_q z%Ld`h)uOSJX$C>3Q2h3(`&+l(T2^i3k|zHPliXlvAWA23U-!>q5X17e>ey}vz(5un ztn^@3xdz8hJ!0cHFI(gm2&W`Tb)(ygU(9d1r4?^;eG%C+lG)aCnr@~oezM-Z#f|vk z$rUw?z4u+BEPuK%8DD#5lN{SC5C7nDTq-HI4!?`>Se1*gkodwQAAj?5xAgiS!|Ok8 z?msu(nUB1ksiFx^d(W?h_EM70F%(PU8v6LrgZFJq$yW0!y$k1r+zf~vndIwi8MZOy z=&=yb=cS^Ai{~UG#`((Ca@=^MXxxUUAwDa6Px`xS1gY50G76=92XPTrJQrMhPaZV7 zS^l~E)v#|L#or=MJrBVVB-<{MnDV;%yATNL7YL@;^t!Biz(%W<}k)ee)vKp zF6+-+tc=E$shNVp1S+)X74({RYR1nfIX{fXKO{i_3-+)l)4E+afcVGF_m@3=Q2>E^tWs#2MfM5?Hs*GbsWC|3RhxJ+E<5!K3M@0$-CN z!0DOG&LcYe;y3KJ70CfBgqy3FD0jX*P^-vT^c_h>5x&`F3E^QwSTi-uakDL`B@iF@ z!b_F3Yb*PMAjfzqR83_;D5}owhVwygaOZLh27YCmm!lvadf3 zgF5G}WO0Gswq?koV(8B0r-Chqx{Sv1VvFfv$G)ZRz?9Pu)X+qqa&BA8cTC-O^Uz8% zZ?TbLma~yFrx?nbv_$8WqTrd3n!($%=W;bm`wraRf5>#6FfYlsaJh)&)+g6G@93F> zH@(q_s82ahW3}jX`1lWi$7&d;FRPHqT+?-9Tv8fvHZC3S+a_N>!PyHP*NX{coq?KQ2NJqc@c?GK3k7>^;}BJz$v0oKv97hPt`l|1O#_nNXkAOV7Tsm{KQt z_W?IFdnTqYCf?fg4+rWE0&#Szx-du1R!zwuCt|3Y;UZi@UM1z94i_+itOX#F1{<5N z)X9cg;EzS3c*R0jZQj;CGNRhvG5PJwRsR|R{MQoaMI@3nqM+6mowo;0#h6NtKUL;y zCfH#a5dXfNe#mUzY$hD87(e8dO_gajE4O%iYiVUT>`xC!A)x^5CQjE#lRLjJZr_~H7n^g#@<#n7 
zPJZI+k9U`GetzI;)kYCIOkH}RGY2Lw6Keq-C{?^W`c~PcmO5SzG(D63%HSn`^x!*X zYABDF47#z|?$^hWqSP7<pHNOFazx+*meqrcFmDWqRh@v^r~yx`ILCw8#x@qf|Ez%W z1UGSt+w%@^dA3~Q5Pf)hyyu`~HteM?osT!Q{cXqc?Go~-bT^`?{;bTwXE~7_h(N%- zq?V3BrMd>)g%ZLV>QiHPptQ=+=`n0d#bs;)d(f$Sq%KGZN`1qHG*O^(3**YzCuZ*#?7nJs-~cM!~V?!miFO zF)W=cPL;cTAz*~#8qF_qA3vFxEY?{D+MH)=M@RKViMEBmEqnbci92g_ItmWyJmk9o zdMB#~L2irUO~Fv+L*YFPfybK0xMzzC+mq{sBv-}$2Gol)?Iwq!YOPcNEymwcplbLhy=XX(O2FiZhYJ4SO6%9Sqjv!wrNkg|F zWN?A;fy&V#+^Ng~ii58aalTBssMk`^qpIF-Bv0m>Mcl4-_n^-K@8&XlydX#cy^2?r zZK&6}gIynncyZxr3#gnZNCjaYi(njckW`z-34B9PpKIr|?62bo7;FSFk1MbEL^+d#;-iCX0#2j2 z@A_k$HvI8uvQU^Ohf`T-)$k!Qf+%kc~1Qw^p(hH zEa7wwsL9V@kns-}qL@Vp2iK%h0UEjqt}llKcd#NFF3e`rZ`D4Rb?PR*FFsRx_ zQF2&{w(}K|kL}e*8!sKP37DB8-ytfSEMzk45ry9A%x|Y$k9Pn4y#DL0!s-U=0{t&6-$fO6wpuE}N9B?r-j3G$m z+Zan`=!>BW`AD9K0!??OEO_ou=d?H3V~AMk>w|#J*dL?ku!XfpNFCw~B1H}%HVC|P z!J8mJ0M$gh9S@wOJFnQf>X`ZuN3*1g${E9w@oS+-_p{|hBW`T4g&FRYrAIQ#zej?F z>DHOAGCWQJV{w(GAqj>U0tvp2%O!%?@)tsnJvadK?V>+iCT}dTpo&Go2cqCdg$d(U z46*EBLf5cF6epJz$~{O)<2k&a9!M$!8(jSM=ZN(#FS{{^3u(^?Nhqf~MR9J7yMhqy zFM~+rpkdIUYM^pyi;<5+@d)|2ahJI=wxpW<;JU-=_*oWkxa7qcpB2}YT%pf+0RLaj zdxp>s1L)k?JEQ7RH2M*~=dam#m=1XalLUnXj{c{Hc34s0`Bu=#ByC)?a8A0<`Up;_ zYTL$yHbu%Qa2`y(LnuxV-7FSxKX6qFPdTATFNCxt*L+4~uR*3R!Z#S`n927RCkoAa97%J5ax7?&Wl-^+oncR{?672~gv z;C#~roh3;l$&EFPUpe&H!GT&H&WG?OdV1KbYCrCXB*Q|Yc~&gBoe$WbU;5Recx10< zw8*pU!&ZXX?+a$6nX(M_AYxE14UlIZo{I2<;DdjU#*2Kf-^)rQ&e}a#PqED}NP!0A zuqu)gk!?MmvCUu2qK$jG+XD#=Ze**$kdQ<{1GzMZb=X0$@hM{5xbm1Njg7D{TCCqF z-wrM;R;GdsF&7=|q%)iUntL3XutU;7$1x8%27}Mp;i~CyzEuQ?bkh5&kF@F^zlpz% zQeC_+kYeP-2PkZd)3f-OD(IAz9pLs`_e`Np{86S{ zm+!;Lk%V#h%dCV%(H4VWv3H^TcHnZo6Za6<);bWAmo)(XCykvp1lvni!6^k1+T9le zIl~TqG|8P55YAfrVqUOgoCAUVt$=TIWVpce82bU$S4T&H)`NxyqXP)^;2?@JrxV2B zAI8>|k=N{|dgco@!K;=`WX3-7=WZC^C_|(uke16%W428Mi?=0t9I0pU#A@9|mV7hb zo{a`?>u5sUX^Uq3$~PiWR7U$ezXy?gli7_{H{QFtt z0DDtu&B(dGpXw7EA3J?MOq;SnR}O+!-L24aJ?~2P_hmY+Z;*BiBR2^9%A`CYx$%Q2 zvOOrIa<o}+Fe~t$uv7wMWdYuUI=ck>Ai%zFz8oLRI45^Vr%otJ?s3=t2$OuBiz*(R5$_M zKpKa6xqetljN}$gU_1W_u4*tvy~HL!pMnIJawAtP&R{T}VpKdi0xHIhcaTsIHj$?v zBau6*vK+>^oM%TdmL|<9z$cEya$rF@O1Ic!l8e}TKc(!!VUM0G>iYnb>iFpB9ai#9 zmIl?8+&PkyYKSRI$?v?Xr4VTSO2qt&I0bxrKCn727Bt_nQN=^No$#0T28>Ui(<0_+V* zxk(TZCiE9-Ozp1bjfcd@)3>NHbzm^2W9Ze~`=#r*gggszAW?BaP8M_lS;VHGu^PNH zQ_3jBfDju8F!)trn#w1CtqJCJHq|NehhAk#I$3VF?!bo*pSqxy^}+p~Yq*k}2Fe^3 ztG@V9-ZfQwaW)jYh1z0%EEJRTf@4YtOX|)aF3TI&Giwcy}=tqh~BN6L*ih(CWEz`A_iN)xgWLtMvreEp6A1EMHaqJr~O5r z&`Mer)`FQ5AHIgmfce7DO^?7|!cZ=RT#4||ZbfBmK_I~% zq4Z>4TyVj1xvD665P*;`vYtWl@Y2(WB@=;gFYZK`x1(t-qh=UBW?E~Plb;ph9PJNK z>0a?eBb-yGXk2XeHm^5$c9mKB3T+}FoBGJ}O?jLJ-!3~Pa^yx4F0`vOlD$2QL%xLnUHW!~KY1ig2C zOiuZnB;sx!y7;R79qX`r1FPzr8?HAMlD8^}Fs^>KP4uECi-bySyS!-*mRn~a%i@bO zcA=BjiQp)dUo{w|lUf zQgr=(g&g#MW_mp#dbGDVeCWx=VK&B;{zqfu2o<70s`3AdUTQf>@`3 zs!*JDT^nXx%A3k^{yWb#oyT$A+9>8=hECK;G zE0ce(F*fmbb@2ZHe&1q{kKv5E%21n>d@Uzc9mr;w$R}_4cNGCGda&;vtUmtt8iBaM7B>?&HA;@RHq1fTKOYWMyu`e#7?| z&zM_e`u|<(C_n`}X$w2{K(YhSkym9B+hT-5LOoR;^DnKsVT=90KK;&b^4~{sAsl2j zB;=^&^ z{LxIO2|2yZvQ_9yBT##ig|B&9%J|3GMgkr^wOgfFokjM!^7q5xfAeu)_@^z%GBG2q zDzw}ahKX7dgfL-l0o*{T_}9k9M!WSuOmTqcE#3QcqbJM|xV=;XTgc1#a4ZI4Xk>h< zD4wSVi{++T{+r74ETu2X^Z-UP-|pQmnZbD*GGhinE<+~(V?Yu>4 zjX09dQ9n2^&-O(+ojq2)*<(%=SDZEc6uw+?f3{Q|#AW(1>|r<;WQ(_z_*lz;bp*I;POs3Q2ukxC1 zv|Oy?#~}tFl(^d;J446I&B)DGTLbRnc@mgvH6}3t;CN8+|4qQ9 z;1+-ome*Kc^Kq>Qpcy>e09oGEt(7?V0wMW*=E^iFey=X+Jwt8HCGg_rv?aS}9* zf+jq4evFf!33+)Q){!;a5=?JYw7hN&u{fMcI68uX zlrcf-DoQ4m z_l*qOHo7Yi${Guqz|LqoDR_5vbRf!#Mv3x33@Y3bQ`<>*&TvcB4$0`9D?>yIQ5jq*J_%4re*qX#r+Kd))VO#7hLvOy?Ooq*Gq0m^t)RUj1Xtm6j 
z1c}UGr;j5d%WeGVU#9S9Xjqp{Y-OhP`Jxr&&@8>8CV(ZKRC{0#pkRJClFE|gz(LP- zv*v#e@R1CBd`(%5w+65g4b-wWr)JexzG(AkZy+CZ18%pqeyFQrrWev@UxKYn3A$v6W^&YC9`` zoQ&dfvmTs&iR?FM0I@z*AS076Q9z#C2*A|fa7NjFEo|BJ?pGFbC!Zu6*Zknqgl>RG zs6*ncNsIntikl7?4HcDS@k_6vB-*6=lXmUi2%H^Q_J9Y`0Ok%eq`iXYlmNiquOQ*J zI<385Q%G*D@eqq{MIE1|Oh|e!PS5X9coSXXY8z;gQ76}=J*Cc`kxIMuI>FFeoeBs_ zq~WsOrPQQ3CS$8SHPQIRbiKw9@j|0^xGW(fi`i;PwE}v>FVwLYIiAIO<#6fU_3`pS z=Z20hqCYF}X20OGkcXoQOlNnr^FIN?iP?y{d6VsM7)=P(>nv;q%RTR}s!XTIM9e(6 zqtQj*G~q=eyrJWzX+EUC1=31TTesaB(ZfUUsG?MgfA$98jr1NAJ!!RgiB?n%30vZx z2+YUqtq3>5TvK4Lu}@xV^K5n(W*_VjhpOCNYODsB5s32=u&4$g5NvfLn#2{LyAkLm z-?-?K3Z68`BH4o2+pNx5C*04q06d%cxdwE-mHO_)gxx#h;2NOmJ+w||O7yU4*0TjM7 z@r0rD$!RM~L)(aPN!y>?m0_>LU1G5>@H(XJs{vZ9B@SA>^d|dgwraB+9>t0})&1;o zOg5g`Yn4|j<&K`dg|Dr+cohdqGbtTY<`O!O0`2JdPtDBn5jmLRKRDWNb<94F241cC zP6}PLAs~)fMjxCn)O%&(E@|JTW|yl$)bh@nFY}5!VGbP|HV^J)Dv*m{I}A_R9f)Ob zMIWwKj5SW-Pny$4G_xcE%v2PS`<-Pp4T1YcVJo6m{tJ_Z?r8IsI;n>raO)!A$M0q= zmFHqcJ1qlgxF4&$7Vk!Nb=vn;YpO+&Fz{ltHH%u`enbz>#&a`iY2Y*8lxLiALArd*PdS0_Vdd_#>ens{o z{AoYQlSpd1+WeV=h6Pt|9 zX0K;5LWmAq*iCf$XG}e;ZTxip5DvXG9xD);pG-xr(_o(dpELxjd!V2ThLTIw zJa2$oWDw}Z&fSk?=hdTy?((EZ;QUy>;M&WLOG{-nmtwJ6+P%{J- za)Xi4b*$I7KbF#AZEgRI@jp0`{B)`{oW?c~m*C+KvC6cd7DXKhI zmm?~|T1jnf6^z9yjnTv* zd($1+RM$KF`(lgh2+7HkwPrJosRYU`;Vb%F+OnG5tA9l3Zvs8V<#u5MF$dgnhP0pW z=1gcz%iHe~%Gu=w@e$nEvcdQKK2>7G&@ub&EjQN#q&nKpf(Y9n$LJJ-=bJvO5z1Nh z5U|d*r)dQlM*x#0h44mVHW=&$5AF#&CzDQlqco%`0A3mJDwtj1cpMoX-IUN*b(#?W zV7AoW;C3U7)+v#3mh56RYlxahsW}VGX2+2=69dxVX8+HiT5ca6-Xkwor>ZT#=#2tDV9bzAP-2ZmFxP{Rjn{(*JI@kM<*+P1M9(ys4oPg!KiW7y-HW&qUUMh2KC#Wy z>nnU}ld(7T{vcAcOgE<{+^jLJ#9s7;#_e8>$C>8(K0hH`ao*73%`@(|Ds5#T+G$a+ z=&>kW1F6<*E*8ny+4ewbuC)osyHiW@%lyrOjgQQ9fZRi;5|OR!eqYgR_oD*>nWU6j z^@RdzlcG{UI4|RyrM>}_0LURUgRpclkT*j10D<}2EyS-M9Og8g*(SjoJNCPL@FiyO z#H}h*_%WV=`PRtCJlnnc@SR1?65Zbr4xVIh6M*>VBunZnu>EJE7|Pmm1(0WrtN0MD zih*2LB^uF&A#mV)9WU=Y;b{R1Td=@2J`a@1ERIxYIh7D(-K}6H3_Aq2?u}sHl&@$Y zd=k7b4i0m=4q+eWZ59I_`0@$GE%@Uj#+{-#&qF+JhqM!MtQz<#-srdfoH&^zYx7$W z6Mi)e^DmHAJZB2>$~E2g@bAPZQ z`)1jYdw2qt=brn8oG5#u?H!iqX8ZGfdt9o&9M9nSo#9}nlqb?!(--A_BxA>z#3p^q zvR94iVa~ZS`4%){hc#(@Ja5W4zL9NpoqU<|e?@Q5C$eaGFg(&3IRQkQ#j#864@%ae zyCtnG4>=F^*3VJCFWYe%=qYtMN#U9lr4DPxICAK&>q-WFrkscf{a8m3t(>zU>P|X! z=~zwVGmr*g>mpF>N4jchJ7_%`fu{ z2i7GcNj`OwFPr9_!`H?MQ4~t(^62>8lqQoj;zly>5Hdj^3r(3sEX@*4CB54L(iUI} z#aDz@k?JqS#`fGPiEi}A5aY?^ zNy3toK=AuU_W?WE_c97{X+~9RSlu&lHr_(yIEba{m5Cjg`5%tk*ppzO%RLY72?m2; zf5a@-s@H=7inhFFCh)V;dZq#_%&ddLZ*h_gW7X$+9B!4YDNA{aV3 zf>j{5UFIBQD&Tfn`!NWXtmj%PmP+|7dD`OaX988rW{u4+(Wtbtd%Jl{i{$j?MCZIP zD6p2=DK3u(5vPwZ{r2E;#^M+Z8mxxgwSpLs$AH%g*UcXx6y zPN|0cJ`j3`2O?UksUUPt5z#RGhA;dVErv5t!cSkJk{yG8(v}|=#5tsw^s&66{vtuO zw1$it^^#rAd;j=AfOU*eL{j)PeW^i*y3a3o(J7y)Qbz!5Z~x60%M4c&@3--&w)6-c zp@K41{Ny)hkV!Su2_6-K{C6H1k{{ND4|)pLB~uy%S1J1q0PG_b<?! 
z13zX9`X|^k&v^R1oQqVG)U&qD(UW_vCXcwWgy6IJcvqhL@y4w>oK?9d&(BE?oL{P2 z95xuN7>G-9e4Fx!8X#HrXH~yR4#1~AFyf4hpR=<}j`-Ex^$jrKz|NQ!rwSDrGCX<8 zGRwW2m5QnOHadA(tg6+DyFeg)b!wWF7e$j;i>4xxinb{Bb;o^hjR0?l8UI6(IV&0| z-0h8Tpm?#$?v4(-=u?lTBiDiKz|2Sr#^WkDn0lD@M(Z`cnMxLn> znxw5dEuWNTMJ#)ucUv9Z})+Y+K79D=-~(aS)t z9%$%cq<6Y}TRsNwE65QIGT7xYA@zC+Y;0hl1unWi2+hNueR~sZr`p_~&uW1V#TL}h zFJ6T6Z38+rGc$7-$1hyyl}076JW`_6!oVCs2_aE~p?;peB{7C}iFX2=VGeEX0vh>j zvz}Ow_+g;S?g+L>$+6sLkBb7DhB;v<3)R)Z6p51PX@3YmVegMG8>&}W_OHZO;-y-@ zrgKAs8P*$%>S+)r>j=2+#zp9ih_Gn%6~SC~6-)CJoUCg}El$9^je~>KNZw4fGJF?S zm!)-qx`2w;ietlw=NZ@}x4O%*wn=B}Llba+)x#bJl@yAxKXwl-d?9z5{XB&fKg6&f zO6gMP7iKBqcx1^@<{yvyHs*DcmBWu_oHQKEM`rfW&ooVk@6FXvzn|V!^<0LyL#$mgZI-=P&A+ zP?~Z#c9FHz~h*H)I&^1HWkJHU&RE5!*OLAxz8ZsAC<)0^IU5?gIs730LRHJ;+_b$F{UJKc@p{?K^2ZYU1d_2s zEWmYZYi_4d$~Q8D7+>KfFQVZ%6#9JxsTYwcjlJqiWs>K9Tg#0%rl|Q+{0yRNM678$ zWmS~E@xP&5Zgbf9>MZ`cZ*bSn?f2;#o|iLLXqcjC{%W(|P=n~!0=xN9RFOZ{#^C8y zvbyOKB7fGWCQcM?bKje*s_LT7eC_L_i`Vr5YEQ3TW0H>q*ulPTyaZi#zQBbibFCgg zFbJo$#=D0dmfeb^#J)s`7;ZuG*%A_Gza*GgRlRh&Edy z@nz7K?Exqt>`BmVVJwcu!Sg59XH_2GmCwD)@3{v7iy;+4bf7m82Ej!bQgUxlU`!hi zxJs}CX}_w);OmfOoQ=T!G{2bYx4Sd2E69+&BYc#6c{376SQM>%_qwy0kLQK)9n1#2-hk;k0TRXrXT>5sKIlcFb3YNBd0i5}y%1M(@(3a$`6nF_4c#}bf5qW zJ;uZ13Hg=+(W~Oi@sl{zhwpe(c?Uc0{n44>v9B~`K88_Ql{P0G$2*pn zCbryv7ybI0%W&7*oM297s)E@u&Z@Y^b5uFWJLf@oQfDrIaML21 z%rm924C%qytO&f=7romEq!p`QN?iGg{GzxSWYzwmu%l5-MDRQ&yJ|s@%*GR9oO_O zf|svS*)(tQ8*1htYLa}vbRj)2IBwTUaO1CAtk17Z!N>G(GtrttgwK7hWnaTyKKFG|W`$0}fT8#}9@Pzk*qd=M1LqI-W5cE)r7PKx8G#4@;r z9Lm(970!)@%M`giadi90dm#J(IIeldV4Hq&WtM(aH$Oa6E@l)VQRGw-J}psr-P844 z-v^zZTrE|auSre(9fbt89S{#4u`Vbs8u@VK)R$l~kR{`<(?BiJeGcc(i_NL{6d&n) zHkqnv4>!tML@{Us0tItmXh_J&V*7>4GPv&GIN!8?&0xI+<8;|sL*#TN1jV}0n<1Dn zVyuB~8LNI9zbfm;;GGj{5h zUcz(+Ff1&s499JE_wB)iHs<_I=u$Y*DJC-5Zr`fL#AzX;*b|nWub1ohsTOb_=abC& zoZ`%fklFg}CWXPZ={J*TwMV^NI&9x4h`4@68jU-71+V5(=J>&8?S71DbK%rFsII~W zVJ^*EM%Bji(_-S?t=0Hr%GWklOs|UWYal?%YsM+}ZjycICS40@=zYI_n>2kQq#e%c z^K0hK>NLSD4=*AkVdNe*VV2GQM&!*Gf)2iqJ4x-EI)arr78bO=$CNuitlE|ZmZ>;I zPwQ^Y(-z)B+x!suUX8GQK2NM0xFdiW|pYP3Tw(pj5yaYi`(;Ts|}Adsen(T%ef6JcJnq(nnwS3j^&R(5+<1kr6DL# zWivzO)mK&@>)Od*B#&>tS0Camau6Xi(ehx5tij`Y4hN%$r=jg()6n)hf-Ojm=m?Wb zgra_eo9sy*rZ5_A7r*}`!bEMLwfcPQe>!E%6JX8V7LyQD*Qm@328AvQd}LsP9dn3< z*k})WGC)+m!i>j^l_n`@=qis;Zm``ZOM+?iXiw%5eCthO0K_ zy2du19##+z-G|VJ(7lhrJ%t|JCd_CaVKm&1gk zSx%&>0syG?78H2u93`f1=Z<*xjnmZco+9k1mXr4qB~hegodGM(?6Uip%UtF)*;b5! 
z%&{+XjUS|Mx#lv1ru)-cT3` zxvo?S?NR7pDautj28tY+84*;SVGnXB)1ax^6{?6VuVP)<^0xr}57 zxAuBEdwAQNzdX~l(3TkrcKP{D8rnF+>V9awa%a+s7wQu=w6&Rf&z6Uls^_b8%<{8D_#@K9;F2+1QQe9~)%X)ckb z#K2(hoPD%b%8f*FAPW(_wjQuBKF~6*9d!3fdXY3a5eVwW{*FRo4p# zon)thgE!+rZw`>Co3W?!{nSf-{P1Sava&VVq#@t@QMtp-mgy+)ooyP2z{}8P;q77k(x1T)tH%txkVl0-4+s+6PnW_YaztWWB$R9DP2he^sHBz z$GzGV(oDSR*v(M?ERp{;5l$4H&nK0v2r(^pv83|%sQs@IosZzRZ`&w_07Wd%&nN2< zQ;GjtV2u!Vcod1?uF&PxjGPB@CuXO;;xFC2%Hzc*E^>jbb%bWhY&O!)h>ei{r2zW3 z0`SGfMWG@2jGF23mSF1>|5-4z6Ikt^i}>GTNB7$IDXX@efzCc1X)yl0(f+ga_Dj`} zU&`3LfO@w#DM|QD6_l;kG7m3XUsu<8cjwmN`opLY2PG6n{bZ@p{sk(R;co+G3neV( za0Na+GBRQVu&CPrwiDmWm1bw_HD;FI1(ALIW?G`YWz{OAeIvK9^~L4cW}mLUKJmT6 z%Ea8iCH~og!$4m0$Q22jL2C>cu|d2C^XMZzZgu z9<<8PVqjo&j!GlQCK;_DR5Nkp;%WZYo40gDFZ8czXla8L3ZySz z*VfdaOP6OL&6Sr0AM3HAMN+rnH!*OBEfj3w{3d58R9$_2HT@^?@88Zg*VExKJNNg^ zhI2=Q5)u+@_9ore&$^9Nl_&T2M1dFzhB4Wy%zgfK+q zD8T{vO_1>ykcq4;1P%BZ9s(8;0|NSwF5q7XNIVGGKc68Wq#*JC^Q;8<>0f=IARt1` zAz=R1M+4mc@e>Pvfk*$Vh0cNeYsMU?fA@wi&Vl~-GgRRp!#oH1rNIq?ous-W1jGl5 zKVFa^CBQkDZU~U5kcu1Ru{Nxpio{$;8$J$Qv*(0C0X=*1WfGoqo^Vv7@5;){ks8jS zcoYrxA^qf}Fk-OMvNoo$FmKyaHtRQp7-P-_BTlq+6suLwEAdkvR@>E#7FQE3A1=ir z_)k#beq=fS)f^kaEb{<;M|98?V6Y&?1pljHg8w8m6aL?W{+vmH7EK{QgEm|!_#a0q z1e7}e_iTT5%uyvlYs$+XZ%B2)noN)RABsLf)rkGrarnj2%~bL^B8O%HDX`Fk_J3)w z#wSxCMEOFfgpV#%5zR!GaQJ%(c}STU^j}*5FY3Pn26v(W8=7mk0ttwmn%y0OspZiiXyRF;=SQZ0Bqgst=ZWhMDo%6 z?apvdDfXaE7Dlc!;)R@Pp-3rmVadS4tx+qaqTK||Yq@uqnBWDr* z_9G4EQD5HGGHy|=C(;p;;g*K)UPIPDNMjCs?>oq$jR0jt%E)$kfg{_PYg4@TMhVmA z_3O-yk8|Akb2>aR`Us`0Xk*qUyxe0ad`Grtuun@z3Vn7y*J}TGJr!8tzg(Kf3{#|j zS@h&xbPOTCtMj=vd!b{@@47L}7}Sysu)wT~J7JF*bzE)ox&4AIVEV|RU6jo?_s_z0 zjAn-+TWEn$$O?C48nbP6yG#`5PVi#QI@P3q+`yM^ZDJ+r!9A8aH=EtJ?wCDh zI>mbDc7lZIWj*CwX#6sW>5wsBPoU#b`d^yl|0@Mp7)dM%3)1 z!(FC*#@^_)o1E#{(p4;;3g+ZRABHGZZvCE|!=PjK9F@H@CX&$26SU~emIQ~GPqi+3 zOdq+uz5Nc%>~GiIjsC@qMbGY6kBWa<6Ac$ij>XLIi-0F!Ab-o*+`PoCLx-6+Mho>B z<@)n-rQpX^LpS`;jbq@JCXwlwH4(>|Lf=<-8gdQcLtfW->D4`FtrfS2#tD3k6qNYz zBUuVyZL4u17SVqEna`8r2YL-aT3TAJk)psq=m;!-jQ~M$@sN7kH61wukNE4;1?j~m zYsLX;>X4vEZ=aH-=I9MStd#A9u5d{UKukkaSe0=z$9IZ?TS;F}qVSB%EGj-ZpTqO4 zX5}qo!wqruCfzKV<;$YTmrcuW-K|>Zg;sYycGn+&!_ZkC0s1cOZppeLma$ z1oZEo?oQ*mWWP&EoWn(sockKwXWN=CNmY)K#26g!lI?mVX}*6jK-IM5OunyP?)wv= zMjPLQm6S9Bs5AXxodSWR+00n3^*-483Zz2L;`Lz{l6`sDy{oKqXb!Tk+-b?oY9?~o z2QtEA|I8tMnBCzl^1M}i3QdQmRIvnM2VpqVdM)Nj2^Y7cZqfomRJiFXWRs zi@nH@^buy54QWexYey*vLvLtAGt~d25s!V~y?6pXj*dmwwMXxB+AB(KCae1<*8EY1BDup=ESQJ zE*IU$9+&bcirRO9oHp8Z?wPXUG2j&oWxq*9e4v*puunFnN0{j-qS|5;UVguL46K*`>-+QXKai$t<`ClK&G+4kH_?WkezHtVJ+|CVU1 ziS=Sf7C#KK9IVUs0fpDJUBAgdz=sx&bRtVgsFgO?l-UZ+Z;FVTJ`I9&*g{~jEcJ&7 zJLG>lHiWli|89x(_iJbnAh=stej?*Fz)r_$M49Y4pT}y5KYP%3bpmf;2VpAeN#s?( zohZ@z^ZUJhJ_V3 zqQXJi4ZBT;%-qA~vn+s_!Ci>YSkD_7f3K#cbR1c@$AjP>f{g!uFQfF&2~i;%n+W`r z{;tn|j0*Z$8pYyhpqLrM?}pzlu2vm?OwL&ha&vmM3Yd+7TFAcHQd}98$dUcVv14(Q z$rvT|2jr>|_lwA!snQjg9kB5DyMztHH)(z>4I*6BVESPX zGBU=qcwSDDS#GmBIN}0dtt1qzDTOu~$$2>Jztt#r;1h7p&Y`-RQ8ZnzNj#T;$oP-j z-v6)?QaFKcqVDdf7QS;b@?Ku1v*5A}$q1OXH~K~b=obl*7N?($VPc{r7}fU`1Tvb?kaw*&$k+oPE6 zOV)BHnbt#SIOcI5FXEu;)%(srQ5_fph_)U$k8D?#{DrQ5WJUVz1Z@X2${W07GzV>c zyhh!nUqV+g6~0|g)3Xa1wcKlo!av$#CDa^B3Ppm5gJLdtH;L-<-r?|4SUc?PgvTj% zU;tFR)=N!N;$IL$o3ODnl(4v?NB+Yp8tW- z{eo+~6Uogi`@hbB1wEA9>mF~>kKD?@0kdK_L?GM@ZY)Bm8mrQQlDJO%?N*U8lz0!> zgs#$xk}TNFaf}v4iqZ#Gqqj14%UqTa;_>ch|_S?`PPhV+Vkb@3$d-$nmB+0 zc{@=p1}OVFQA(`&V%sS0e%e78TSzX(G z+{xH$cR1(M0bZ$BMLrBMiWy7k3M+qB)6l?kXfUms!=Du3@;+@z{PxXq81f5_yNPy1dPDk>_<2Xw~4k=pGln#%lq(XXDe+?#k|T5&-oAfTn^qPx5M z&&&U#$OJ#ZWIV_0&0Z`!h?{;~bqzsu?g0A|2=9n{N(?g779x+CxuU*kp_*l&yOBJd 
z_xkPayL}_>C$2(Pl$DBuo)Y4vwST#l5+S%ECIz`c_fCt+_jqg#<0jZs6oS@ndCTT0 zoPP4En#ob8uvm+Te*w~-(7`c9Q!x7WfnT{nNOchDrovsAhp2Z#=Rkq3b zygrz2GzzkxTwYxnpDwpJjIhs%CSikCeBYjQA9lalZg0g1B**YSN@jWAsxMepbHsf%(X!=jV0t3kMs8H@~oOFqGKm6H*&_CLtEW=oAVGTHfL& z7o#?gekpxHuET!-Qv)jG&`XAdClQ@BmmDqu1DOd{Mi{I8YTaGtUN`Qq%~pE{mVT-) z#JEHcdG^gDnE8hZug^VsOXMS%7&2eOUyY`FA{FunKIO;cC%(gq*V&Rlwpf>8B5nY6 z3^s^eI5*Fuai@th5agW5R(s2B?+8sAHvvYINIF|7%QP>dr{j>zM;))!<}sfwU*PCopg!|^=#foVvAMK5 z!pF4B;=x4PJ414ulI6rbu{>MtKvr~KMWjUNo*S*^MeQW^A|%L~OS}lel#6}$C9IV! zKub|$rp2@d0Vn0oI@QjhDG2vvFo||2igPvgZq0WsagXP$-7t$ng`kLd3Z)Db3OBw)`n&KK0^H-<`_Hk`R6i+eDU}|GTY9FU6=GXXjI;D zBoYFKkTepqPg=~ z#o$oN?EWzRsbqvq9?*FAJ`Un?F&?B_LPuM!Ro3W|r>3Fzo`#URq@Xk;aTArP583s{ z@SGYMEz7_%4F??kN+RQFp}ER;jE}6Gjaao4G-(>3aElmz-I``j*yr_*DJ`<{$ZUK@ zB_+Yj^%QZ1efm{+ebPI3xR!hdx?Jh=d3(2Kk=7PtgCYV35{jUfdkxFaWw&U8NAkSZw zx?MWOzX1CC3w1?CMn;lQ?>Iib-7eX3K4dx!Qct!xnhp#O#*gto$#7WB>m5GDNf4Qz za@s6^S*$lFzuFs1X@9pi4jb$RXD4s1>CwE=*ZX)p5 z&y9~1`j|eJ<@0#(J1E({B+Y1bjPFkC0ZN6fs^j@`D5`y7VL{xdOpNl~UWQ{d>GL-Z zYnJq0Y+BXCq4f*5$innqw}Z)o%T>>_q@tw~Wh&t9yx|Bt&dB3Vif+(`NXyPp%Fuj` z;fS}zueaAH+baS!UGIj-kIX;O)Y7r)=pL*N4i4URt=EG$C$In}N|rl;N#a*wDmc>gm zw}x;12N`p`s5A2K_lV=(&7|RuW#V#I9T0mla@nxWVa}EZVqlUzowIrcs*EvB)mmy@ zh6##$J-D;*b5>0r7Y?ry=tUUAQ0gt>f8FgP0}cNU#!h=~KBJ!}Q^rw6Q6J5aArG8L zB!m5+#*4`U3->)-dEZ#bQ9C}V3B)T+L)i|mvlacF9ST8K{#?2(5&}qgRU|)3y#L`- zP}Kg#IyGRHFn60^e1T%UhQ%dBKK@4+2J>{cZhkZ%auMKi{IJzhcdF1MH9`=EIIxiy z!%rg;g0vu~jGM7$-Lyh^x9VBt&ioJ%KOMxvM{%%W(E|vYUW^Z7X0(_pR;O?;;EmzE z{JQLMR3RKi5QdMSK2<1-9LBr(m3;g4=`1Bdj(Y%Y65pn|JGDaz`Zp8js#_j`^R!|& zS+Z^ySiUGxxKc>?r`GF{To)y<-%H=SP}_J&fK4I_txgM)=P`K;!d4Dn>3 zWLfojQj&Vp^S`kqi26Pr6bUPv9gT8rAss9=RgpF!NnG}uQZReot++-FSeB_)Y=iF@ zm;~Ml@v?JCdK2_*1Kw80F}^u8r7TwKbY;hCQnyM*I2bdcWvrbM2RHD9BM{KpJ2Ps! z*eHPM66L<$9oB?(b*Ry9KYwUARZC9*E!9vm3!(S*gt*+`QlqsdE}K~_wZJSi3V-#f zJyZ88RTiN5+E87&Qj*|kfwrpMAge;HN~bWojxNi67gt1%I+S8CF26|s?@?Ymvhzu4 z#w{e_mP~riSCyJRDrgipVD7=U-yqEC_i{vztHlCi&B?qN_-{%%4k&

~~Ec$uc?_r;3coSjC@=(YQt?I-q|T#2K>AC4NlEZ?rv*&+}+CdSr=X zMX{3>PK!S=htaE1*}b0(^05CH0~2(ms8*q2{Igp5ZCY7=0Ic@)>ist+cl(hB@IWg; z2$;jdyxG35Efa~E-nR=&W-K5BQ2Or3H`7zi6MPUY0{vc4e0AJ9GsB*ZlCpTkV#Ayc zt(vxTk~>1l1kZM?2oml$Zs$^FX9{%k0YKVLBM%^^TSHPOVLKK{xkq{g7lhuB=H#{a z6}o4-N>}&wuIp_eM&R|+$zlU5<@YtAE6(>u4&XPxWp2ur{&hbx`5I;9)Q6(deialjz7d@deac&)02pjc*Uof zX*SBuesShR#F>!sGcY#c=(c@_0w_pavCw{dxzlBxOQNhnk{%%&3QoOB4@z3pcvQqE zEFmW~PGcb+tulJha+(k`v0J}O2V_6 zCBBMQBSGrne|x?vY)?Jyk0YmGWu3~`6{^SU5_=9DN*j|fjgccQ7As`;=l;ucFXn?E z$FsRPoGBz0DHu5Gdb0`2GYAfRI-aUsYh)#O!pvnW>8=$ORmhC9lK4fGd1vwXtZTM- znsJ%J>(t*(hCrh9JLE&q7?ZRWpU9MEF}>Q)`pZf6D{_pzvEW~q=bA&?(bC-h5yIG# zPzNUIUv2s+=2bH^lZQ1))H=0*8^(pftPw^p-yHKFesus=38{!aybHsBbr+P2Yfb#! zVuWsV-J!t4?Ut-kIdVI2LL6@O?|`b31uU`WF*9a>55V{t?9Y$fD^7vtzKbC`BHe`Qe{=;~zz{cuCwB8mfC&Hm z10{3y=wSOGGzjT zC{lBjH2b|k9wkELAc}?Wv*Qd>BA6N$9}moaOON2~9ZW7?JH}F_7qc+ze6JIYI>68^ z6k*k|AU!yKt=G!w$7nSkN=|#IO8jPyNzNcBCXwbYixd!sc$V;06Vpnk4zixItG#kq zQO@Ew=v;ktI7LHqQ0~DjUHF9M>A)1AaT|!jEt$?~ zYlG)h*!OJyXmEsOY!E2l=Z|S|Ujk{&N8<9?xi|SBA!o8CT5QowjlD{;XUSs^Nj{?o zm>i5TZMR?C^>3?JeTAl%9WvE3k6Kt8K8x@0up z3pf5|w5)xph4sFiRI&&XQ!T$6Sc?#fRbrMz>lCn@to7xhx-81}VG#}VVY~3&&kY^6 z)=ZIP%fu6tezd~tmZ*f00sDFh@Ua=(F9BP?D*CXzY}v5~gScWK0x180`)d8F1NQ1Y zk-4x;A8#t>z(;&c?jpK&6CFTFqQ6NSK+2YkkV6n`kB9>khmI2?KAxv@jVrd{AG>9X zCPt+Yn&%PsXt3ZDgu=u-nyND(Nwf5|m*-_W=fX3bSuZzF4Br)qr(D_SdY-Ve+NKnC zYEh)7i(yD&aK+DkY4#oxM{aXEFqkxQucb@vVnwBT9qmR&QD8#B3ERwp!G9^`N@gFR z&rZ_Ns^pMywAM+e=C&`BYMlsi=C6_+ntjQnNJtWi`FAAAk4%9bhO2qZ=iS`hx@(3v zjhu8Tbzdje6UH&m{HuubRd{%gD(95h7GIM^W_$A^PfNbJA+S@C&pc!xrfY9|K+}?A zkQg|#qGdhtJwa7&I|o9Jvij2p->6@0=^jC|5h66j$W z%%d;-T$oNM{H-H1ohF$m7^%C1lfH^wq6?7f7(HW7NcX0EP3KFOi|?)dsjS=&Ok+DL z?`J;F5eghO;GMRyKPACAv&V=G-{d)q8;`TIcmB-KFH$RRaPe}t7D{0~dcQGRa#My_ zK{e1HiOVXJC$tH)S>~$SkIAf;i=qUP8?QgBUC4#NcYQtSg&^$ zcrE9#>_x~FI2XAstg=E}s44l0j=tsrGQ%QqIPr4vtW?wu=E^pe3d0WtiT=Ic4tNNS zJNB|Xhu2}kNcRX>y}Eo}h66q$4VfBkNzpsgNn#k|`qM{q2?{@yAE*4tc=$NXOvzlB z*K!K>r%VUaVKO2pC$vIXFcB?kK$Y&Y^sb+GqizqTB)HyE7aeEmlWC{613*<>udQ|$ z0Sb#UuPT|@-jvOVuw-7(kdXc@%*VKs1Efb_ z9kHe+&W_8o*{ILXoviTzEC4hzjt0EEV`=9VIYu*g2xRy2Lv9Y|)&=X3!l?%cmS#s4?xx69A>%6sgAaMlE&8 zWb>~K9~h-yoATwkQ|NM!UTp&rP^WT3;&;DIb4)(Z*mPmgp$@!1Vi;=COpt6>iVW!Vgw*TgTumq zPLBugh)cfFda2Q*lWuQBgV$}%;0MBaV%il<80U-msgRQKNMZ@(4y&g6SNb9UD@@7g zz?$lZtA($b?17E1xBH3y`6(M;7>n8KcO>WzQa>Eg+J8@ucf(K{6jR1J{XBSCq?&SB zy^cqLn`>UGDGTcUW;EEB-BP{}O|uD|fp{0sUzH%()9 zAAi_OP1TU&?VH=E;d#vdc+3mZQ{LKa7Jk+ANez6WV7D``%(ttnmH4>Bc%<}mgS|+j z($5oWl&E4W2x!VWq8oI$4wT(e<_Hltrf^vT79TPRjgI66;;{=&&#ojVe+2uLXUj6) zf6U0H(CogD{8)^RblcQBTZo@Qv;)8M^C6)=)=X3BN3+b;QB~I}gDr*(hH^-ORy5@V|3Mx>7$(KK*J4rBcNZ4F!caNpbi=56!-Wg+=v9h2}8P249eoFRsus0g=L%| z#V2|Hiua@CH(O(gLflCID9lLMqhv>gA0bqoUA4Tl5l8L1ab9sD5p2k&9&Fh@dzr4o#Tioq@s2hD-TrVv8g1ZDVUgws`2!gQY0seN%qCrNhZ%n+RFf#&O*U!wY~A9JI{dq5HXjGByjh7eo7-&XVmt?X8=RC4 zA&IJiX&U&K8qcY1k!Dgdd9*YZ+GQegX|!9a5|;Rxy?C1cCx`3zNAZq_9@;t-8)F9> ziv?S~Au)SU`rjhke+C$<`^a)`1KZ3ej-Fdq>h*4Fu>472niY~QmIDPRr2i&z#^?&j zhYXrd*obmbX7x5pvRvO~Ww&UdG|`JgU>|1tJsOdr0L5DUpOz1QEWyko{xuYwlmS9X zDOkEgepcf>o%oW}ffaZK5-w_zH_rt$SHDGai;;-KH!~g9Okh(vZpMpRs#*!|c#gg( zC*}NpCoe1Z*S=gBz!SMLyt2o87RhCPCo7`fOi?`AlgyQ-|6!T$$ELWjZ9lYwo9TLZ zE7qpW%x1kfeT>t_!Y-rz)L2DbuY(vJijc=8@>9k;rN7y&91ijvmn^J@FlR&D&?Wq$X z#q3lvL1kU%8ZkeUPOz?tvi_c){@23bqMVPA>=!HDesd}=BkAl3$@H4D3YR+Saay{1 z4`p)b#UG%2RI(ajft$(|<_+f&Y)atx*>C^|9FUG?3l9OO82?ickfTZpCPNilVkT%q zsv`=YGdVI62_z$fRk^EE$+%uYCG{rz>)47%;CnN2l9Q7M{UD(&BI#yKm|XL%k|Qt8 zeA&}{3HnPUJ{PkiT-anb+R)Z*MR4-*X}7wb*|6+F{N2@4fH4Szc*5ECsc~VZD#jW= zx}gz@at4|I|CFW*y1!XtdZ~dbp4MN7e9Hu5VF96q16u5%iRZ8UJOBQL85^+u@g{O+ 
[Binary patch data elided: this span is an apparent GIT binary patch payload (base85-encoded lines) with no human-readable content to recover.]
zXWxC_zJ1)AvN%{UZ-E>8(8KO7fuSxE&dkz0LJ99rdf!cv6-H)8hMWBU2LdrQP1rN( z{Yer^Ei=G4p#Mc~fZitr#(N~VovC$fs@|jhdhdGn?BRwE9cEzo?YGCf*>h&OC!T!L ztQD@k_FA{9u+V)dz`R!Aw0%yF8=|>AUxL{>dcTc8)*;UAlCW;Vg!jxe7y)vy^2#PT6>Z`9dU@_sn2__t)URPc@N?Q^7T03@1kT!n&J0{FZmmnL# zZvX!MZOwi5+2_>W53T&`uD{M*I_OfjzG%IBPh}&t1e^~aG0b)C+|_`~=FJ=2__xNp zIa=Fub8@W{!d&L-uUjG{g6EfCdPx>cMQ)yim94ZTX5heqwk~2dgm4|R+^fcnb{!;$ z+_q)A`#|)DA7b5PF8{ZG`!}swSuR8QX`~7)I?a3a+<*Z&F8hKs&6h`93)Sz>4=r&k z3ii8=n|H|aX|J|YbaWX4*}Hcfavx7$CbR0s8qeCEa5Pic7+(_3ty{m#O`oyE6|7w) zz}8UoxX|_L+Ei=#F}Gy-cB4(vx;5glI=D+O?%`UusHe8<)cpe2ylF$XcGWW1sbg!` z_o8kBuYUbxtQ=9DTDNNE`uEP!9EwGWfMf>NOU_N1x={7Va6NkExpaZ|T{{nGzMbcK z_3Ws6)p3*FU*dAxHkQDCgIgo|Ts)w&8*ou)0oimp$vf&6uiU1-TI>#sH*6?=@zP5= zoB8pC$whA3>>X~^x@GRr(Gt-o*UeqJSpv#5nJ8zePDkB`(>7{tUhejn>~dWsP#o01 zoj`GIY5ljkg4G*b(`MNcZZCCvrDW(MfoIP?ZC!l<+-+M+WUW=;7B62aA!S_&n!CHM zotnGGjRX_u7q!b<`Pfj1|HKqHZ3@Hy^t4G>aRx}<^xh`d;$vF$k%AIn05vb!I-Z}O zFR;_mO`JHvCT){g^TH<<=3c#exxqsQ8`BEFfN{r+6(9mA?bolLF_CB|VQgzlO3L0c zv`}a!w!|H|vjQw$#iWW(m`Mk0J)9QjTZNNLl$jT<14L>*pv+6D-NnZkS)P4DE%lLgQ(6T@|-)$cEV`HRsm&^z@lc3So3 zL*EV`K0urE?b($Uq;`3?Dkw z>I?w)Qf$4ST8$HW2P+yNVE@&h^dDn8j6Wh!AKsVg)2CUvnBLNE06R<=WuYygKLO6X zi;P*O0qCXV(F2N#gd?i|iWMteL1BTr`>wkUv`w7& zzFRG!QF{qzS&v?S{SCKOLcX3odfGfg{U|40TAmiNFv*mlkvaYq33@uo(gERK-P&l% z{I(UeMo&NUjGL&n>>{nH%(1`x)vw(*zWEI|LUZj464E{P*k5dZYpD6~&tu2B548sV zudja9<^*WQoSK!zoWIWf=C{8w4OefiB?yyNXukXRZ++XfZqv%>!+t)tyF9LW?AaHd zcYQPmv3~sQ{`=hxH{Gap=nL)@3Ebw+o#QUnTG6L>9|PCSdkB2_y;j>+es=%QOsm`P zqKn*v4?d`Oe7WY(Jgfgt{`;ryU#0PF*|LSz4e;AQAQuY~X!rZy|K8@FoSa;@L;^oF z%s>9|k43L$?!N!N&n;NA!1e9Z&os$XKA0k!Ndv2nvpGVY#Q4ZaPZw=7ZOox9X8xKl z>y97%;0FRb&1_DZt!*eDeDHz$!GHZ90j1{dZSfpf@mxIM5{Cs3{cnOi^*hV?(ZdL) zHFjmPnj!6Onj{l-rClsJxXZ@c%$e)lvXxt8!IUddnd$b(G8n+>5Ukx;mMMU6W>Q-PG9|-Mqzn z-8b)*5P0ih_qXSkxyE&my8eBzh^a4Nsqv&O5yxsscylzv9Tv}UK$YDo?P@In;LLRV z6x5V}GEIVqRqETCu6xgXS4;PI?mp-~nzmeEGTYV4Xzs?1FLq6{HoI<}W%YBQfgAgm zd5T}x4H=Z<<}came)*@_(%PrFjya9o2On>8a~Br7o3HIF5SZ@vJ89%=N?@3k;nFgX zxjBo9-GRf|u4{)2+`AK(x>w$iFNz%bp^%@67hb^%Ym58J9sS&v?X}eQC7SEgT<5%O zw|z$)_lJjO>3yo}`t@$)o_}$%0QpAo6s`0gZFA2(zrx*ld$zRt_0^}mNyojGkvBBH zj44oc3d8`k>NKl*3ca~V;&_9D1|wt=z!%?W-a>$}HKs!8U<_F-iNnX6uhcN6v7I!N zXc#etZ7b7WfCtO~8jU)dn300Qd}E3OLrh??U;!)MT&#(4m+gIlLGh-)GU-wLI!a%^ zzC4WTgR`N%(zxvqNbMu3GTO=u#JFC5`Q^3%!8{kOEi!?b;$2%eNR0D zEqopM?W_(kWe7O{ncTbLijg)>gC?53BJWnMT3Fvx2ei361f~vcDK>_UJZTejp{|&M zE?)ex+Opcf^=6GVG~M(t^@fQKZJ|9dEsUex8jq`2ueJTQj344dS7?O5rFHApW)h1) z09uAR(!R2R_7Vq=v3$tK(+;z(W2RxI4y~kpq_sCDktN#nMuX`2|M zCm3$YDj*brCS?NxA%Ha+S+wi`aN1;M?PM+bQ*V-TP5a3w%o{NIyJTTNU>pIa5oBO0 z447`)wvF|rjSVr2(C6ciKeo27UtgqepEva8larI9b~jOfWhqCTK8e99r`}dz{O1#+ z0CUmmRjb?#&0jF}%)tOc=H;0)W}5I~zXTYJFN9<`fdN>}n=?fLUmwS@6Kf9^aJ9#SR_-bh1Un9c8$H34Pt{#+t0SAKq; z&8JJ2F0t(%5D4?m&Ye5Cj(TsGS$otNB zzvF6aU7IR_)`Iy993Hh1*y4}irkie-hV3HL&LVic;l>->ZL%)9UK+=Of z>m9>fa@pc#?jr%Q+rMzDyIq1zXhi=(XRMcaM|)~+{?GsXPZI)9oG?KaOAFkc5(ZwW zGWV*_fBd5#x!2{8t-AzE{rdKkmh>9aLT=C&55Oem!6QeFbT?doz59Rcoddl_m;dj3=8fLf(~~T@V&{$hbnl%z<;2?y{?zLA`~UfUwUGjo27mJ#-*E4a9qVoo9W1t??E`%c)rT?1@RKAZ zEcP5doa44_&T%W2YMaC^8711jR|~U+u4DUJu4Qx0U*>8|IGE+6wQJJSL+X?d3h*62 zCU7QTD9$Mvsp1|u;tGp*tGu1XgP1$6{%?t-p2symshM@$4INhCx^$=`iC>0mK6A7C z)6?_Zn)UnK(F{qcv+KGcL-XCIZ`5K%9TotA$O4(lQ^Sf>YMq<;!79-wji_q7HE+sP zn~!M$xV_YYrM|2(NkiGfAJqi#z#&t~7i^dw1Pc0(*Opx}QBV)y-da zNMp@G_vSlA8cVCYuYP`jYt>9z4~2W&)XDQDNj>TY4QMVPTj=^;lIMmFYUOt9Kjiiw z)V+sk`+)^dRG^(c2g!#(W+Ob z4JN3iu!kcYZNkbllL*qBqLoNig8`FFVAd9|u};PtTG7N;6C3_9d1YXy#{twb=|EkO zi7)#*kmj(e&0Z8H?CmtM>D;-q0g&4LYa5^ei~tq}NGgXUjn!Esk9SM05`UcW!QaIv z-S%+Gf0##y$#kZqRj4`@X%f&>mS~aGG4UxXDzw4S15T}<3W(Iy&)1C-kRu%&;e!e* 
z&xA<;AnGX*miG!_z#&@2+{0%kfUGv7>I$$#-4>7vutciLN;rcwJVH8+3NTVi771`n zOmJ3bx(0xYlo^L)43$gbU3={{mIu7>Vg>cYGw%U=NY+sY1C#^eP&sBo zhok)w4v>S|788lEcmjyqe_+3R{?C7QERGD7sxteVc3F;5&;YgEx(x~pj zTX+M=vVCuQ2cO^#RYF>!el2p+A75UdHuc6p>YeIEXPe&INAj4zGo_2y?Xd9FbTF46^|Cu}PFjX|lLD;~P5Z_+`on9M&ItY$$ z)0f+qi@IP^OkV7%Ws<&36YhKNxyL4ulc!8I6=nb?5*>hx2eZBnkQVhqe6m4nBkew6 z9Eev5(xvt5*9rK{v@H>AS)jjPvRG1&SN~?J_O#`Bl0J187y*F%;SYZ>X&@5QH{W_g z6Ybhk2W@MbAS@dCb8ywp+$EBFHM9O4<8Dai=sPC=d6EIMf`WXF8&hS}?mc(I4cEKn zD^|FT0xb94f1lL(7C6*9v2DP@BT_h|clW3-ctZIzZglC|MHW;S8eshLm%l7PQska~ z<|%j8Racuf!vskkSv<{cA}i?0PHl3xRBTyjrM*s>GELu&`YH8~fNwNQP;1Z8_na4# zCH34!`o7gy>s!GBJRc+No5dR>shrP?1MIOcls?A#eF0E3R^E8y@9wE5pR@%wRGyow z-`%G^ku=cYXq>CXDE(vcXz(EQeM*Nhkv@V&+v*xW0QqPh{OJ#Wv_*q{{VsK*6rMez z&;?I46q`gB)!ISoL)o6exO?MGHz+S{;fUK6U|emgeuKUq<4HxUip1|f3(UQ1S_}&o ze60Smvw_=2T1;YmB@AF1_1XUY2bg3!TjgZ@w;oJ9MI&XE7CV3c``^2oPy5BHg^Lzy zd~9awylk~$3ko(JSdi{5U_JBwS@Aws{gB2~nx;2?$HJAspvHLild>4b+=Q(al&P(Z zv9)jC!Nz~WGtTjxt3@~r6!|`h{toig4Q`!ijf{F#Dugv6%M2U=&5k;B;xu>vHya( z=PO3y!sL-jI;xfo)Tqz^ijHW|Vec7(D=K+5kybp60|1^-DP@3R|K7st3*9bBkm&FM zn@Bn}?AoA(s^mqIs-Z5!{uR<=^2&gNK?fvPbhI{kCIuS_)Kw}S27L_DpgM%Lz=o2P zc~!z$ZuE7dH6Z7jX?7+wty;CRy`5&RU8-HsJA3vAHaPVcIBK9l4S)k6CXbL0G@H|< z><$wVSO-^(&d zNBszO$3k_`Oh%Y6v9gcE617c$A`(^hYZ)I@e>iwRdcM9raN-%sBXt1KT()ev9VtRS zND1DM%IZfSVbZ>-Su#dK$=({$V5Oc37hs+?!sHS)CMHYojT={@X3B8{(COrAkhy3{gjN7+w=iZ+=%OrA4`bdKb8Y|uy$*{eb zTHshMse@TKW)zP*K;;%4fxv1$(h$lCKamEqx0w8i4^YR@NP`9qO^8~s`t^Gic9X>;ANHaeEuxLZ117x3 zjv-Hol);|Te?AEupq--OGJM2HlTh)DL+hZRAm7yP@4WLa>&t)f@Gng30OKNCwr)0Q z93TcY(yrQ?friWQ;UlDRu+8>KvKNpACKjVGK3814#%v92-@a9nxLuMOuG0QD&a=(8 z@#!;n-YMGvGfd4Atp%j-d3iCD$}#4!{R6{3YuBui!Juijn1pl+}feE zks#KUw#20ZXh;%MIaG$%33$(u6>ApDuqVJbj5Lw)%eHtl^51e$^W_s1+OqGrn!Y&z zDSZXt?|{B*(&Qdc3ZM8U8A~6M+N>TV`NEJ+bvO2%chpB8lq#pBV)~xxpO3Kkt9#<{ zZQ7WjK7xf1u=i6H7lw^3}BKMx8%}8s%^{sCS5cTrko~jV=3SErT>Q?ph z`}8w1MVCMt;23bmf*C5lS=m{(usiDVD_k?F)spvh+Un3mYOV8Sq5T&RKkOQ*U!$&> zFYM5w6A4pZ_#ehM`fz|^EA_WG+;D^CLmzjIwi~dwc%JGMZ4?$Ftv?f2ye3sbOADTH zL>!CV^dUy4o>hFyd!PEBgAx<64J1Q#0Qj|qJ%yp~@t{-VA@xF^O1EuJHyH;dnLO%l zxiwEOd;#%8xk^<)ErjuGzQ0y6Rj%q?E^{@uh1G8Tz^T<}R*# zuV!xU0;#etE_PGjpXk1N|K&P@XTSUF>x-pgo9Fs<7HBKnp@rL3N*^^_ss1XdTD3%( zp6R=37r{}+%xp<4^BcR-*UC_w@=-Bdrf8!eHFONif!)j}^pm7jtX2EKP!avVf@ENo z9PrGjp?JC{&%*+?)f8T4so;n}zRIdJ*}j#nHlU^;1p&1^LE4;b88ea^F+5=5@VK>a z8JrSd)U2hpEG-$`9M~u0T-24)-LqFJ%;c+g+1WZ_AxG^_v}9@F=(0=ONE@huB*a_Y zhjR- z9Do%DsTnh-nRJm=XH=uQ$S4U02U-X?W@qQv;KzUq2x0FelSnr7GXZ@~hE$MrwrZt~ z@|wIJS1(0}f)ojBw^L+j0ibims8Ld1>aNM^P&aAPBpJSW#Z;A$Vll~Pg&ixofD1Yr z2HS=Wa}7|kLVTrGtEX!a9Y0~baiF{QBVxUkK@v56MqQ%zgUywF!z1{Kg3SHW+epjHOP(q(3ww z#s>yB>(s>q($#-k*+$a$ii|Bh_SoYt7K^#tZ@*2G{jFvlmwI4QG-1L7fz*{!xw^@~ zA(IPugj!k{$f2iGr%sxvEwOz|@D}MNN3}4}q1uMZ8Rylp7ceg`&-S5WEQvf&SL@cT zn?TAO+fNASqfD{9n9}ViV>Yka1b{_}?%lgvzOdO3qMk#xQmqLI2FAxT43mwSv4!86(pYP__ETW`J9{GXI00lxOqekCSr{3t(d zwOO+!wpSQy!#7J+txK0~W<-ml0-kvM37d35Km0>lIYN`~SGBs2sekq^;?D#;*-=*L zhSqRpUlw)EGmb0#L3%QF?0fEyfBchyVD@{Wl4^U|Gyz1S&$q^;Qcb8Nel*XL9wFT5 z%J)P$y~NtLPpr?fP(XU{HaL-KeB-;szzh>s%9WQF6JM*@;uJ=rHg4QtQTTQNs2nFl zyFp4ZY0_kyP$H=qG+Mb_Olu)Ca#AGlO zDZEWz&jAM_?U_k?mL{UKUk{%Af(Cs93sbxUD6y!8wEeEn+-3VB*>*5h>i(K-N;9F2 ztE2CMZwk;^w{|@XPkTk03c$PLGj~`&&fhYHste?O0MY1spYpK zbm`W`4BY@^S-7N)wktr2*VLq5ENmeu{p{VJHMM*8lTQ&)Mb#8g#a`EptVGM(r$v`} zQVV8F$%7C6yDhxXW*0A7Xr%fEz!3ZqT(dUJ1+cwWV=5XUNHGC%03nkE zDhmUZ>cbd&Y%7UCGw1ALFb8QPap~Jwyk+r}MYe&12AUP(x%1{3puR)l8JiiGF(y`^ z^1*M6njr=4EAZaDc~cuVwn-(}#yLooE6_N}c7^w6%yhXLYZ-HqGPC%0!wokY0HMo* z*VU?Jx>(zo>Z^{sNd4?kfn>^qVI=CEMZWEd#{xnB{+GJNQoTMP%e5JLN1j~KU}FF8 
z5%u@&J9d!aqz?sH8=187fZEKQIUi~vpn=QNV&oASv;maUZ_=L7u-dd?lhHz3K|9LE z1A$_m{n+3Qt#QUGZE00ek>(Xyx~_6dl-3{_#G_C1;M3PJ@r=Gu<+^5N9?~MkQT1&N zUE^l-pXx7JPz9i?Vr$o|uCi&{#+D=6$9qf?^+UQBIF{CzImCNPbmvYCBJLEw>bp-} zqeaa8mhQ19-m9atu0R5C1n**{BZ6r)v3yWB_Gv=mU2Cv@xSaZRc`to zv&H)+?qeAYTDW|lYt|^oZP>h13psP#AQ@b$Rkw-Tr9G$Hc4fF7Ql~wnJ+f#)@T*fR zN9C!lqiQxwBd(>Zt-YvAmvfrI5iL$~9xAkpFY&Ay&#H;&<5&%rmK^$Vx5YxOKq&NuXn|6CgX#AVJzN+_`(=y002M$Nklz$j~7+K?KM#XkR<} zT1jUlSy5OdI9FeLt>S0}OT7sb3R@WzcsZ*50<|^Cxbmti4Scd<%f3S1ck9{1j89;M z1QkO7gH5tU{%9FwVbzV*Q1-puCXj`SFX5O-j20+nWiv-dsgyRB?Ye0fJX8N}cNotm-JQmpJB;`qUU0a_fjh@lo3xN!<%e7^F^ zt8CySZyYP!Og5kcK!(AVqem|>aEgQxM_k^+qrNgsk)K~+X%UwRA1X$l|NIv$41+2w z%sBF(ZUGXv-FB-6Dh)6i(D1_{>AU~_`;B*`AJjn^$N@`NUU`L7>Mo0?>-R1a7rL0F zj8;1sK75!->99OZ+U#XwFVgVgBdkrZ5CGjOf#pj+IMU&sM`)&9V8q5tz%;zek34A4 zOl$!+Z7bJ z=J(wz$=yDq5lIOX7ABkpdiTJWAFz8b!Sm@^qDg-OpNU8L0cjmOc2s%TKa0womu3Kr zyGUZQTZ_r8CJa?6wvLya$E&i;Nnp;by!{oS)K=&;- z-7KRS%7Z$ve{e=^QT)WX(x{Om_3Zghx%qyO#(woHUon5A7=U9|yx|#PMvuPET{n8P zr%XJquDYXsnFN06i=VeiCz5V*gfAE=v9C^9xDj>>t$2W*{5D>#r9&GzqSy<@akqYT3 zEn=YtyK0r_Roga~R5xgfOq9i1jP22<)Kh<@#;o>V+tl^Z7(#-|c!Xrl`nh<6MXJg- z&+#JwNEXxRfBd(nzE0l(-zI$_eG<}JEZT0wxR=y~**}XFS$KuE&l6AlLB@H0WFT#! z^6lKEt1X7HP>4$OV~;&%Qs{z$eD~y^{$%{1-&wF=zQAK!1K7&kJMkr!muG6cs4xTA zsGE;I{K#E@Ca}*nJwP?g;e;(I8UFBDlUB_eHHwtjoqkyo)1YH`?H_@%+#;ZbU_k2TGCp^ z6v9Qv6XOYKQg8HB0+1P!*x93ikG;@?xL`+=v~Al&`D}MjJ-f-}x836wE=A);QbPep zly9>$4=N4uPWq=)rY@7!+^w!x&-!lB@N$6@F0)#@9IibXPq?^C^MnT6J^W$Wfi#&wu}cd*q45u0^BzZvCcxu6e^+ z;(Y^GuVJ>!ZPw6DoUzW;saMDK>fK1{xHZ)0W5HRAv_smv7hhlMwrtSxVH(R;t=p@4 zOGk~zZDibNzZNfQ33Q4E6@ziQMs^)ZcLdxNY@=W0>@fNNAY!{xQP zNK(+_?!W%)e`F;(!~OV2Khna0*X$D6DO{XC_9is+6v5J7X%2`3Sr0z=;D4M>PK1dc zX>!kG&rfDgXD;U^t)nW>f<;AIO={{csx4_1U$D6od}zG39@}4RfV9rXcSv-u@QKxE z21`JLNlp|G)frZ|m>3cdAW2g2)Y1_NoH>RHRdq>LP(i|j?ZZ~0l?I&T*{-|`e&t;imcbI~B}Nb!{7nrd9$uFu=Owy$GH3=KF$hAN z_w!KzLPm@7P-vCU*DqmEUG{b5B>?a!)EU7^m-;h4T3!iWl9sO*R*60EXHZ6M%_c>P zOFSkVc*U>0ZEEKg27K3e7({Z zY?7*Yp@t^6KvBV{!Z z7H?fTU@xpd(tFrUg6JV3h4=xw-Ao}_*de5 zG9{1UhRGWHnlL1SiZqfRr2R~0$cyiSNeDFRtBK$9%pNk)k*)7UbxK$o0_o+byevSG z26a?ZeTC6U`xBi=H>=AJeldw9Es$~xi%UX+Tjdc*lP#}pKbXay{>K^4d!*V8sOczc z(JVIA6io{yk=(dxgMmcAOJnWt%g@U*EgX!Z?b*BAz(}V~ovm$RPUYPwx1qaP3uIW+Wjrk?D6neSFa1`e zpjTgWjq%qO0aR9N?^ahV;2za}+uBAOd_a;7&9o)>#hKyv-FKhWzp30RUV9d;HFgJQ zeXkzq*A7T^w_cqZdf!O=VnLk6U4AUwOTs91+?qA(X#b+N8OY{9=MHss#EA?s#%fEY zTXPSom9}YJTWzq8cq@yqI}ZwAZ&Tf5x>3X1xb7WlyB19tqhtww&o;HEgK85kU0$oY zu0viuNe`J9IMJy6#8Oo~q`|$=w$vg}Evd0LQk&GAOKErR)X>67U@8E3SZ%qM>Zz96XiY7CHWBX* znAP9>w)NfBSGH4~$!3B0Q@8F#YBxvJ52$VA>D+jw)wWeVsZ-Zfnd_?^s~sNNqp|Lg z>)E5Z8+}!K)vZ4!M$=YV-fb-4sW}Gf!7|$4q>0q{V|84s<_+EMo%=PX+M&8Vq&)Lo zzur0_K>v2`*e?S`Eo7;=htl=qlGU5OvDB{I7pa3M`<`O-{!!c_dlb2i4?tuCPwti#nG_=F0L1>|qMj!Jl` zD&jp8R62UB<1v|r&$gFR0Ldn+%82VRtt2y#j3;CH@`v;o@C>L4Y?2<54-YPUXu{wC zkmeJJ!H@fRdmtYxt-_NY&aO(A$-5PTu=a-4L*+%dI8mf!kMU>UW-PG%37{|G6S-3| z(Az9s@5degC%X-*_~8L3#AgatQwNGZiXJ z8eZrFn#>^ z5f_WKaQ*vsmZf58&1kKX#Q>YCl4Njk+QZ{%xt@tMPYYrH7>NWM5@Akf?ak(~zU|qZ zMeQYA;YxY;QH~KNIHj#6+O(?@2NLY7QnU&KQ^~463lp6T`^r)!n<@&Or#cXYt)^-V zm7SBnJgLxEmgG%b`VAjfJrDPq~9c?ch9_16%Ov?_9m zawg)3@SZMTzTg*4m-kT@77Rj~yz}+U9f4Wx_iZhxH}fV6wt1#5eOPz{Kk<#a()Aau zhiIted%{ugfj;hiUZu;9Klzt98=ZM7Lr6)GR5yMxru;a8Zlf!nfbR$5xm(nv_dS4_!5?cI~y-`sPf1$;xu7xDPtp^%|L)X%NfW)HYfy3%n8P zm+#D>QMh{E-M99|?x;k0-5X$@GWG#FyApCkQgSE~ap!E&Evg7^RK6I|$q>5i35-e$lqIeVyw#Oc*L}mI08!>a)ohfou zZT|^K)yJ36GMC43x9%*czzc7SdHP3FHVuA5!JAKQbqL6o&Y+-Bp^}VZD?oW{N1;vi zNxzeE%lnJ~0W^H(pF9H+Ye-&qe&8$?K47ug7t9_^_skiD5F>+xjwPHv7l$fwrE}y2 zqyZSjRx472+}q6)-8d6?rejjh$HCwk11@F=Qa2@Us08om;Pp3g{N6@)B-NZ`5LUny 
zbYh**8byF%#c{NdIYasAxz45-E%ih{!^+|s{apMB`zLeJX^lh6sCmb$i@Dmw(p+ZpFNQwBeq3j9S9VddbmI^(u&Zlo_Oo}q3<6(3&d5y&OVOdG zPNOY>_9@Caog01qM-tJmVd_6$!&BjR^T497uAi3p@bJ-0` zN`?33jaVJLt9oeEgXGA^%P%#`{`#s#Ki0E(_C`a?4pvNOjCUxB`l&W~LZ~Dgq#uv< zJkd;MW&&d93`atg~u%^=RJuL9{veWV@rP|f%f@gYfx|6j z-}N|p;i}K0xw0(?c8I<}R7BE1=@;nNP9Ck#je}^4M8cQPdr%bkSeKt}2GiuQ;wTL= zc|@a-1FNmnSCGUWi|VfQVX{1~h=}7SqopAIE|K=t0Sv&>OAcJClT>P*19ETrR(& zuET-6bI4Sc9};R1>Q_S>fbRlTMWRdJDTvX{gOv~MhS`4=_|k%glQ^&*F=nr0I9>w| z=X$X$ilc+LUStvWea@~6iB+3*khzX03$i`!rP1J`YFaT%*{TzJ=#u6l`})V+w!w!O zHfTk>@&CHvhJzkhHl^WW*qgu+RxhDBmyK`y-q{hT3DrWb`i=~}A>2SBjTrO%TsD&^WCg#S7mO&hZw;>^yPnMY z!t>(W7Z&3Pjp@<3p?d9Sd!;Wui6Hwh>eh`bW^l-cI~h9(kSoTHZX6DDICBXzI*L zsRpx<5nMma5!>{ubMV1PyhISbVBJ4F_|r5a#E0Df{YLv$+FGZ~TbN{9Z%vQr=sp6# zOTB1E6L|F9O&UL5R$Y4H<6NcONlxqSc|^FT?o>@YZg-A=z(kI9FOdMv%E@I=WGnhm zjflQAa{ft?MR+B>fCD~sfbUj$MLa!{?ZInHj^|AL`7&{BT8q&cLi)Xr|JSVx7d54p zcAFrb-5i4XAPItCo9Ai0+XbK3(QXEZgRSMBap3E@rv$+2>KlC5RrL*q;iI=EGVP*K z{J1Z z12T)3;l5z<8YU!SfB{CMU8I7>HSccHDK1K-|s6O;(7daau4LpJ7 zDLnY`_;JSE6ya#lkQH1Gc=rCavlF?K{8ORl1>--4v6Hq8eS%-65jL3xie7&%8}USN zmwAdtw>%U%SK`~KZ21{l2B7Mmf+xNZ3V1@vc3YT7WB5u8xwgRZ);6)G7O)5o59tZ5 zd-A=TdRWn|MB3T#L!^EiA@=sJ!A$$GIQ|QZPKi+4m?WHn6j2b!8y8TlJQjQhIMdxt6AtWT)H3jxpC5Fy?sq96_DM50p6W-q+5Qt= zx;w@xz5kmSj#s>csa@H#C(7B=viJ%nVteX)_a~U^;KXF7#oG1GW+_y{!|3`F&q1r5 z!4}7V*LToSS$=-p_}?brhoBK*nhNy%r7DF=YMD zuHxR@A9YAG%E~F^jmnFhEMPmjFo_KWNK#+uK2M?8=kMX>N$28fhsSXLf1yCu9Bs*x z!P0Mb+^e4XHkqY6B0(fexaqbtsOU-N%1(R{aI#3AL_>~P0Ug*^EP+AK@)`-6{BM*< zwjsBhRK4}tKvgwio7m0A=dolP>Cb=Y-b_xr6#YNW`Lg+G5!|*#!Etu;u9@6Pf0@)KUAxA-A1Vw(&6O}Z@yH1CdCLB`Vk?`P-glSk20&YipaKaxVq7KFIP#{}}d2{{@ z?53Elo}e^@E(isn?+j2qc)RFBSJYenZH6$DV1bY)ZtQYNKo3N4eHAiyYV zVR1gCh8!MNepf(nFBS` zyd@6EuR?XAxL!*ZsfZUwrv6f}D1Y=jd>hfeZj`cdV-i2WjK|lJ;G_xfdLGqY25kEQ z=NS=_l9)ZfdaRdQ_}XYUC1~+zYPr)rqMHq&P_>0j?A22E3%P4+1qom7qYM~vlBqvt6f{CZh9)SXps;9)QNosW_j z)aL%+AA$bfWWQujIg{dfwrVlepgAhZ#~{JNLteaAZxy~mIU;P*)VR!e>H|H0x6CqrLW)|gUTd54(wRoJ5nq+*O6WMDI_D3I!%BRt1YOl`zULVWiASSadbrGd%qiS-3jMc@tL! 
zm;$Fv-v4bCbjwUdv4MIrAq1SZ=1Cp^_(b;ZZ3I$(6AgpF*HRQf-VY=bJOS_cOdglI zqBJ`PH8eo)JJfTgNx=BQfV=g#*+yd|?`EFmgbkG60EJ3SxkceHv6-+2Jh?BZVckOC zf27U1X$k-IqFgsBazlz*PL+L!#`+f+Nd?G8fY+}eM!rZPHLuf+KyXkBuF*qfx4I%s zBs}~T;B)TNoQD=QU>PciFtvbP{0luoT64XgfHyku9yf0dnws``49HCmgfg$Hp?nbd zRk9DY$bWQyo3?1UrQ)f;$N-&Tyq@o^GBw;UHhZ87tyuVNC>n@ZeIxWM7L^TrQ63Ho z+38GSDIu$A8o(rsklQg!_8Sb}?plqpFfi*xHJi_&q8teqX_$-(s*9`)vgzKLD_7V2 zmpy71?ya&iNgApJpoE-S@CgvwP`{k6wKdB&Lrnpv&^Wglrb9R1e?7Oq9_Q<85BJM< zH7(Ep2_bm7)uYmyXbT<%Poxh1A~+dS1f52F`6EE$OS{tsXa{T4_U7Q&pnCl8Y%HJ{ z%EV~6YqFj(1)4QbjLNm?H9KY(0|D)?cPoZiGad)itoM`rPWq2!{8xuiv);_rzf7m2 z&Lr6p8+0g{DQCWCI$zxI+qe8oHj6sOG?l;(KY32Qs87%*UA>K7jo()r84JOI4MwEn z>8NsT^e6pVyU(>p;Lk=}Z#Zxkk4qvnMMIBrLje2|2so2ZVZezw$Md#=3A%$5oZGnA z>SGjop=L0_2Ei@|1IxsLfE#EUzlp5~ww2ufDdYbvl^FlvB41#+XrHM`!eo^#3|93V z!TCRus$!E)B4k`+|6x$M9v{kS_`SO6eB*1x02hubo4nq6#?pC!us z5}FhnNNWgzVQrJ9vz!UgKFCBOFG{T8vPkL>iF8;^<-mhXe{<12s{qGj;ZMvn>nqkd;1$6 zVwVmBHF$9ztQeo1?;D-LR?#ejFoJvW{WG2j5Mbd%ek+bY0+b~c_67_eZFF^=PDw0B z?V@*1D z0Ps3=OhoUQj2R-qhNR*!Ur`Jp_~w9Oa;E8<9kkQ+{(24%3)8jM>QNv1!CTVscUrCe z64EJRj*ac+z<}LAk|FGK?S*Q>MY0x?IqX+e3s8<=4a8HMRzG`DwzRwelY^nQ}GZ`aHmAxV4Tb6lgPw7}ls$dwDipY8R z1A=S>FVA zcnZNgHQ7%oq)g&Ap;lMpQ&S5y-4B;RcOIr65D6Y&YB%ML82&XOs=(H5vV)N>!fkD# zu>DYOgkge44vfOC50ou}PxHY5mQ~%Bo$}1fe;km81Us411%P=O)!D=OVLIIQGSClAVoLU3rz2cHGLL3;V zaf}yvY+df^WIDg&RJtS^<_M6?ki2Q#2NeV)d~k1+1^GyKKs=X{PBdHDV@SNC`w1dl zp|y_vRtQJg5X2$$PE~x%w(Nv;MtiCe+j*nWuaq&?9&m#YYa&bgzKeI>rRchA!QL*{#U|vjh^tphy{OJBZ8dU#$ zxABx@Tri?BOnpz;Y%H-~G|3epS_I3&(wN+aLbu9@cwYb4mhJnd>Zz3m9;mZ@K1OT( z8bxO>0{@Ojj2%8=aaO+LY@=(c7n4b!&?G;|Tjpsv^|e#y#Sp5oc=qmrQf1_sTjNB!Az@^Bkf30(mTYz^=eABT2969!GO(&wQ47_|6fnbbBylTu$rb`Z3s5pF_Q; zXQ~Z@0A6-Gev3{&w}OGeNC!o+f{pOPJ&@Up*els38MaxF28d z){f$OqNjCsw4?iHaCJ5?AEd=R&$|Nr#*sex{NTZ4K1|&nM;GnW)Mfb_e#wBjvY}^G zhgl<(cqzB$j$c~6`Z5wLG;%L3Butu(8;{ARNGbX-p(kUh`rq!UE+IZ``&*ua<`$iP zmg8DUqmyuCnWuE_$f9aBdxik1po7qfV~m~7la-wcBO@pYtzFDdCb0i=u};crk-f$g zL9q5OgDqQ|7sv8TT4WMhlHz@BKE&JEA`v5BprG}NW&b{=kULVLCz5z35~A6@$-wAO zCS&BbB?3}yT--JTMk4lqYN_Lr-|Jr(xQj(W7zhEa#%-S24p_EOCB`M5@aw%pP{gNg zpZ-NtOCJYD-o6VWSqg8>C%IJ>u$=T46W5aRoTS%?RJwZWKL>(aNd~SrSRa&sWP=3I zdEy`HCV{3?w2`IL59ti-hs8BwY$KFuvB+dSf@Cn|pTYuvL}_QvAy01w>6XYIq|LEg zvpyuEotyn`Dw%ANRV#w7e9NF|=mt0yTn$yDjIvC2{<*{fe8h4>*(R!c1;L1r|Ab>9 z3l*aq0x8XDCYX|A7n@0$9>HXDmxvp(u)xO0e^AQiTRUjo3L`}EE-=-jL?n4EK8kqR zJfzA6>)&ZZW>_C!>S*{8&z2gc_jpYo@wy{|KY|h9J>oGEr?&cXQo*!f9tgiRkRbv0 zu*r6Q6mSg-Bcx#iEl2F0zia_*$L2!iENttOZBo6zHyTDS)vAaPdCM$&{)#<9NBsIm z&k1&sl*Eptc{Ud$!+H3WD|VfaMoZ|&i^!k%UVN70Cv?o`b;{r|hhCL!QdAcD>!Ls^ z`X=MG?V__4Y>yzTsv0Wg8XT-y5_!U!sSyhjlqvvo{=HDVA z&;bCPNmM)Cu&sX1tuC#M7r?&cy*@tOS4_yv)F;9R9HqQiD|dG2&DL+JFKJvJeI=gO zE;jyoT~23Miet2!tp}@VwTnLych2gcq}6o28Q1uAQgjaW{+$MG^WpM)+?v~zkk!g3 zxzX$j*a19!6SyO1ws-6aXAGCaoj9VD zLFH9ul5F|N!dy5o(d^&IGBegBMlR?c+auhA=r8N7AJ0plpwF}9l+3s{+fC%okh1Ih z+xwe1f%Y>6Ox8+y=TrXsbIbGf^;EcNvhNhoCplSIZ&;+nTj>$cYkwkp|IT{dRbq$2 z>`3{2ztpku$@t-XpZ;Q5JF!gLM>9pmio638X*d5x$Nb@OD)8*>u35Y4937U+d!#+Q zp>Xnnc%K}(TRIsYWFPRTv&`MrNEQC8H|~#)68hhxpjtfA7s!6s0a7DgG;1g6>t?uP z??dE|{ulTOj*SvPAyvC7A3Zde)pEVx60ptgV7L0+uY> zQ1A&cWaMSK_|_c+G&C~Gk)=^p zHK;R>95Ac-BHfEZ%B29#?+Qg&oZf1-nf>c=23g{h)51w$fGgwyVDPD2RqC=!OPQ{7 zUi6VjQ@Z^kgWP_tDSN7WMvGu6GcEA^F^6#h2mU|V~eFuMGYbqLEM3g3(eio=}3JY|>R-L{P(AGqP1T`bE?- zQ^gXe1eV&7Q3sP}C_VC;Q{b~caL>Ho6ebU22rNTbh{E9Nod@0YW?Z6Pthaj51N(Ln z$%q7-^>r7e~iUMdI{?LGn0w~oRCO3HAte~O|l&V?Af8; zSdYh(`Y8fysUIXoSr?%!)58*D@?vw7jcTBD@Y&-1;8KnnwU>;=7WYQH5LIEEB*9!O z%+<^}hRG|m=CnD#8jwPW!r)mJzN|b_`XZ-U&r`YesJ7!NFlpOq#l~K#y#bd!Vy~pX z+z?RnbIWSGA>b(lQ@T)IGw13>{!5A*mP3Czgz@Wg`o%we!{XBqNz}@o+ID_2a$kLU 
z!|ZAR2+Pby;=SP?c8MW)8KTm84F~)WOFU+J!eAgchBWE%EXnX72?l!+AE1(;%xTlM z=_Rv|FKjM4waFFwyIPf-$oUA>r|Ld0 zbGQ6=gR8$$1U9kV#7}m1JXdW{fSh-o%zg{l2dQ&xw!gx)-3Dr~JZa;oY+=1JnUO^O z3i@h|Nv!#G>h7QZ3-`nFo;J)oCLzpba{N!e=8AOhYVSSDG zU7b!)eKHtn2yg23Q!%?d{3G=+-WD{NxWC*?U(1u+_Y`pS^ru%%sgryU9ns&Icc`4- zr2I3MBRn@H&i7tcX88Ev6ZWU(fA5X3F zN)-n+R#(P9E^9GXZvG0E@s&+sa|?N3RLdB?2c>e5@~EUu)Vg-EjzynlveeUB=cwoWN9F$e+3*JR&*J1bv?SPoUTczo+Y*~_i!Bm*zfYCtFbDu7|_;(;M z-I^$7(O`Y}AHXfComB*|3ve$|439Ivb;~6*3$v9;4af+Ao(ohe_QM#v7eBDFYNsAA ze{5KH8M2xaJ>v+p7Vm}UA#!_vtuCi50@D=v-){uw659m%GXK>-^geVSLB^gfQ55{s z6DJx+8vEkx{t$A?Od;U*!7{%8-~K}VegC&W1~9ydNNEh|rr3a6p_Czcx!(SRL~rE; ztYduj-aoFUD}X7?I&^xh2CT)N$rFvo3c33Y+e(YOh5rHH8)gA|JQSZ?n>zDD0(2P^-pKvrGjo_hEtM^8hnwhWc|) zjo-Z^dVmyfAr*9Aj3bp2r74XNERfWTW%Ik8 z*LGMHgLBP`9$ZSJeyi=~uz;8+L1wtUQtyW7Y#vca*cLeywW?{{p;3O%6%TuyutRu* z5+S`UvG+}R|5cQsiS+CIXOq7S$2AekMhr|!*#c`KShvANaRY)U$O^e!YdF3ua{7nc zM!0S5FHP_~QQEkfqR_qyy#oU=T~%^QmHr<%H(^h#uL|z`#R0+dkDmzQmc53P%w1wK z*?ck>zZWF1nJ;3@tdCo-ezDnI^cqaR?655GOFcnn*=NBmw;O~Gd&KjZ4Z0U|4{$;|m zIXMT+9J2?h>N}ZGQ_+a|T1A5qmjA#m0)GV*xqv#N2Y>VMnRFL(SzUx6WfiOyJi%sJ z*c#Q^%;IpXbD^@p|1shsf&$K&Se7O^Z1K9{%}s$Y!Cnl@X1N7cQzSU=sJc!=Pr_y^ zP!YSyZ2Cx1%O8 zTTSrneef7Ge0l!uFV;h!ZNuSOOi>Ih;Z+IyFa8)tRXRn1VxsVn0I7u~gon|WoeNjS zqwawAV>qy9SeEOTNBZ$;OR64hFsfAxOZdzH4C1a~Rb+#vtV2+ph%jU_ei)Wa-XGMH zP#rg`WfU13@~7PlV5-u4IZlFB!UOKPJLT6QDMD8fNt7h$FD@LiWx8)ijMvqT z<^1sbQZX{(!IfG8313{NL=<@eSpX@6VdQu7A%4i%@I@YC4oeFiYKltsRXe&sFbQZE zmByTGKb+L&2`Zs^*vV54d^C%#-3_J`Sbct)VzD7tQ$6|=-W8Zy?pY*>2_t)Vxk%T1 z^!-6bD<94oO26RWlNOlLJ58T|)?u*x)*gZXzfcQpLEV$}=kJt1vEu^@KR| z9d0WPkLhGy$Efwy)mtzgcl|D4xe#pE|t=l-*e z4r^K*uMoyLuE=UV4t=s))7ZzSg}_&BFA*{*wOkgfQD@La-TLQv`fI}T9&?~YcY;d_ zBaU@$yq3f1`};^-pl!Nds=Q9hZLW zHQk0VvN=WMkjhMWHjjqi-iqsKKw*VW6G7GG>LnSL%Q$<_!orV{OWY9yrc!+IVfBI5{_b6)2T`!3^xYh3H{*N&F@wHAErVyryAmlXY3gIh7zfdDq*dX>b z$@lx1IWNX1$CsZMAhvFHa1e;&(_oi8%IFsHmcKV?xZk-Lp8!JF1HJ!eRwP?*w3f$q z|3yHZ7y)u0=^__mBFK5(oZ?3(Ko{)d3 zq;_^aa?+I+Pk`P%_g@4qAD5k>I+_w{w@O&-_$RkfYD z?uEAftx%I4^uj}^{&pYuzJP=NRUvB=lo#M{{c!QB9A9*_?sx9dyzW_fd9bj2^t~Tz zyhA3Dfzs^#sw2>)<3oE&mCiQ+axD%KINO;h-!r(uaN1z0s<1ET{W^DTne!M0e74#s zo;dG#=J)=@Tr$y{K0tbqR>cYtA*d)ES-gTJOJ{!;51RY8%;sFqAuZL$l-;(-6-xSc z&^1Bb{ZreJsVbhnVWMDq?RBZmAUN!H%?b3XuP^ZoC7zYSE&T(n#DH<1kL_&8J=fde z&MINYDWuT6?0zh?lD=MaDPD(29s-1_80eS-_9Q)+U5($L7Er8RKQ9IfJOpqnV$KK! z__i6pm}Z=IbnPa}M}tmaYyYGUESCS7Z-DzqyXm>)EwZv(-FBWbR~?fnXcsHX$V+m< z4rP_lG|ryCj@{d{=2^w%oHc42bBnxJIe}(5A=`+Uhow27Wsaz)N+O1|m9+e@W^Lp! znV&YRSwzzs$iR))vt|d;Yv_+$>pgxg!v~$T@JXTYZ6fRQ0QT&;McMZU37w|7T^El! 
z^i!E2L^_O7=i*hd(9NigTMX&T0=i+s zS16hiX}9g`{DOg<*86$qGZP}7-qIaIawy`!PKlb}=?mMmX zYhBxYr=R`Ur|W|Aw`(s{k@s!fu3AYf91;TS{CVqw4!;6__2{PeX&&XFp8}H_%ca4c z?gY3D2$<|w$W0OZ)h1k0bpqB&9$mg^3aH;6o~o`DK|YYF&V{R_cWd;)4{zqa8P)_A zk!<#wf7|-Q5mD8|)bZ8hDlS)v&ASR`rcJqzyRP(?bA6$=-sh}|H zw1a?~fEyu#FuzvOz{pp>8?)M zuxAy&J)Uwdi1YB@?-sf10S3o7RVaBJw#jx!#jkJ=3M<JUhkK6hF_)@BeZ zWdpZs#lp0aA|ul<58B{|4bU+$OcH~3L+h}^Qmf=)H%SJ1V6o!NXAZ5KizYO z+jKR3_wuE7QX^&Xqdsdbf48Y?80B$z9Fc1bbZs-Z>`-^8hHfbudIvw7{Y0W|sBXV|wNN2jPUKpg0a zoO5FM*5A#*J44-LgMpHfUH6zZ7_pJ61563jTLqhqFWw_A-b|<3pw$cNPwTk!mj;-< z?c4P<*XFfV5nkl~mj3wqivLvM9wBSuz_u9EhMqo@#)FQUOBYh_%!6B~$?!(MIK`Y& zVTD^i^LOx;s>{1=-nf8As%gIW6jsNulr?`$lkq9cTwkFZh&J4}tMA!f zlHkr5hX{asQKkhe=EqEsK$%Q-X>8Rj;XqL=vnm$!(1)^9y$j@kBn5qC`)$~^zU-dX-!ck z!7#y!_D<;0vcBS#mGUz_nnn;EcT^z0z7TQ2j&y{jWZND!19{q}TfWV^0ML;YNR*JVRNcVKp)!&2gMw`ZH1gJnq zzJZX`e^}NX&J)}oPUj>PNXeb+-$+X`x)q0fF*rdt9R|1fli&Dj2gN2?;>G4-i(I=nn+8kzVEQDP3 z7MeKqj`!Wi?9NXqb?mjz188zbU%4A3jS==zg)XWDI#WomC7deMgf^sXa)|59lF$grL?F`o`$IyacULn`EHfo`0Dt1y-{@dwN}n>FZO@ZM zOU!(m*^-ZBhK?I>NhgVjHKBaXfdB!XUuG1K*M2Xz4t7D2SMybd+D8SXe=2J_m-o0P zIpr z%{xfTIFv6K>lt|6xbFr{`KwL67INJW!&(i3+z<`Md%~d)Syf zDURd|Ua9cc%t-(3C<~k!&2-LYI=0GUO6X3c{shIU_35k`5aai8JKtw};cqOuJ>Tg& zr6!j2ksg$iw7}YO%c?z}tEoM#k2Xnh|STq#f&889>ljyxb_{`caf}v9nHi zjueHYm@#fNN3~3`EP*Vf^DqGslWj1zn&N7Q>lZ(FWTc#*A=$-zi)eaikmW92NZ@Ga zw0c29KeWpLTBZC~gIGRhvUwb9RJq+6w>?$WpB)I`=P6M;ZOEp&>taKT-FVS}q5FbJ zntXPsf?SFBD3yq*y)C|G?$e9O^qw9Gx|pBb#TULGS^hi!?J|JfKC)*c4Y?n(i_d9D z9Uyblaw&=F58jd9K6}jBa@({;;mT!k7(2(D z1c|QUuD*ddcI|%w0?KWH_GkbuZ97szzfao891|J0K`Yea^mu=4Xm9%?`CMV=ryN$A zWfPmn4ZU7(Oa2b6w+Ta8mQ8VvSG(m++_8u+FL2k2d)IZMz|&PLlA?LB@b?W;{mJJ6> z`#C@NW*ZX^PPunPv<-cV?}UB`y%xs(4n&HhW0r~`3EW0(c3)B}UpkxfNPiyHCSh?- zOj0*YF@JZ2mX?@2%fDW~<8{H9bNn2*!^Q17$??6i@rnXp0!}NTcn#^*^T21nQ|?pS zzveJw&>B;|spgeA(eu4|G*<4bZI_NF9Do22T73?tC1%h51>bNOUWWDlIn@vT5 z!^)q@?6%egi=|rx;g@{fgnMcKM;y`SxEpkn5ZhDe>9BM-2hI`)eC{v6Ur@UP;Oc{O z9nZyFlCmt-3%)GYwZ_?FSZOB}Vu?g+j+|s^WxWO-ay*UgNniNtY4(e)oSOXF?{Iwi zo({kJrl-M03`|Q6(}V{K8-F#Qx^OIPqse1`uah<{>Q3B;zD2o>=wvQ>`Ih*u7aJ$W z;tW(W(B99~32)Mk)vy?ygnM}OgrWkqn5G`a=G8CjBy*}R&(V|Q!3dci>$%n&PUwtA zc}y{Uw`u%m@lQ=Sp$)_FcvubHMFIhV=c|1GRW8T(_BE`B1mXhWO2@_z(ZS8DZk)IR zy>8I)!*2X-k-t!){sU<#`{{x7fFK>b0uA#AFO5Cj+AD%#Qyf+>xSf{wKm5)-pJjvkmQPfwI#OQMLbCdU2|IUn-ec&>X&MM4YM{#$b*4-o@zW&F*rQw>(2 z-QXzd^J`cDYy{S$FNS>EeH1L-_sf^7jRq=>9#Y6QE{>U4z!MEQDk*hH@SB98M*IW5 zKU~zdJzsE3R&Y_J4$Tz828o=cg%;EPO5(!Jv^EWAzpdkI3USj=jo-fi&bM-JO`0nq z*({#Q6fc^}re#!MuwZ0O3AnpJE9(?jW(q4DZI zX0ydL&Z%9N6^3Qw|5mvlFAF5Z6s+WjKfVK9`s>Uzvb?a=pt zVr64ig-uKS%$gasK$P032a$Vn3 zIxCs`#FyqPj&*)oSW?|5uBZO_2U{PN!k~e#oz`%Ar@X8Q-!p` z*ZmSw@!_yrpNl?ot~PhR2?TS*iu5e=Kbn~wxV_)-0xxCurpecu0xI=OB%_t-<8+`+ z=9|q=hUBm8iPOHYc_+CaqWg@C@(?Wi_f1cQZ#UOks#cmZ5{iP@*h1g}Gz#}8a+wcb zd{?kWE-ef-;Jeu-1BLM&JLU$Fmz$ee_$^M#lOEQkqF|4z)`U)diivbqoffLQCrMsa z{5TR=U?;mSLYeUbd4#is8dL@Ymu(Z^-dBX|CLk4Vs(nU?k%CGQk7M&M6)KiuS};_ZvXZYt-a;|paE(ibs2zOv0BK{_^04(JTAl0SbD;{G+W;&j+MNwo)7HYx2-YLBre z(rB@6;mfhNN6bE_2NN9i+ioOIIthT=gG4Axf(ccO71w>e-q4441z0Z)vqff!@DUMx z0-~Q)j2n&(U6OPLD824wAX13}(VsEc-kXH1{trVg0I}0gKa>G4qDw?Zm$t>y;2L-bv5K+~1!G?TpF7sm?z(O7KhQP>WY z3xEZvJ&MoU{BSIHgv#H5#s&+=J#wt2q>NV$gyw8aN89&rBGbWGg#m}%3^U(9Xrp+x zl=QZ8RUvP)<^4>69H4j=)Lb-m50`xbyy&LfGvK8ek8aL)lk7X3Ox-qtYor^v^1ld3 zo7anAphax9wQMC(6|6tV?+E*!Fufimj^B~UMxep_t14cR=uJn@6_@O^&|+Wd_?%Mg zavRE;+3(L3)D5(;iQUtvVd{gUKj`l7#~00I z{IdjUQV-Vh{9c4@xBibdrU!^dl&K4YE;9gQpAl}^b%p%)+?X{7@NI!GQ{x2T=2w;d}C^nC4~B#A!xa95Y9QU8up~x@anbuXA zcZ1~4SdY^GAVnE(c9hPaSKbBGo~T45B=6N;)$?hrPe&uWKi`3m7;bmRChm2`_li+8 
zl*-tzKHe&KCIbo5#Cx=RXF!#eYcpu&fxY5!e{62;wLe~%J9|N5nX`Lxm`Ry>dh-RlqS#gefvukza~q)f2g56(UXK_ zt~Y?P>2}jsW_siL-O3uH#*unSH+>Kfj=rjMsj5DlU>sg>hz` zL}Rq}PdHqcb&J3LRkp;Ruf#{3Mc*bEGyZf4J90Aq3f&)I^~oglf=!|NTVL$0madrT#f zb@FnlZ#@{!Lr4BRxLluvMX{bI(Cjc_80odEIVhN+BkZf$UZBBktpy37d?-{OQg<(( z%H-j7OauGNSZg;g_dHE0dw`k}9pF{e#sdHn3t%Emj-S%s_V|}qger9vG)(QbVaB=v zGiT4ToB@mkD96cfPdi0r4l_z}9rfqq%sjtUKK%J3H6-QyD@3f)38JdAH&^}+Di~m} zfm5BT_hSB@7I95K1yk@~rL+s{?aTn0o$tImf(lHj3A@ca0;7vS6Xt8!WJ%dnosgYQOrIw^kv43j8#~aa^OA5SC zUkCxymgT8J*C*`Dc5N#R5M#MaQ_26Q2LTJtew5m4)#~xuA&Z@a+44mx31Jy7R(j#;A&^r!qEP9Dx^e?>yt^b z2B>Bo#D3`rU6J_X?c7$h-4=#GfkPnK!k8nq6XHICzAsvAY^^fF5ZkF2WXvgp%<$th zFzad;Rsu_Tqm@_@87?V-(*I4P6*^9s4~|WC6wk~>1_X|y9j7kwS z)lL0B0D(b%zUry12%-X}Zdt#u>Ps2@;E%sQj)3^KYN*KsgAFT*esWNnszg52mhZiN zJk&A$3Zt*L($YTiBrahXv{_O0E56<@IvEOdvf2uuAP!)iH1_P-?OuB6B?A<_di63I zd54-T@%V*&0{slO&`5sJMmm9SrAz0Z;1%JKb^x{jS{ShKv||m@ubx9Qd5}J728kg2 zi#pOH9PvmCV-cKdciU~Z87PDw0JI_*fw)L}tvCvR^pMn14>qhS56VS29IAcfPu~1! zpPuhr{WB99z#aL=bfiL{!PD%^NPF^i%AoC~gwze~f^UNN@Rtc3?TN|U&YcI${5|dV z>Z`A@^vw#Q%7}w^{78e9Y=9#Bp+<}tVc*AJ|N2)mP(rz}hS^XcmUG~k)c@cIKQK@U zP0&rb;05W0dJS!beOpLP;U&@-B&ylje)7R06^2cqA89K_OmKW>&_mk_5(m=rX$C0+ z{PQdR(C;UiVsZ+X+BZeOBrmJ3B){|e@njis9u=BNIzaRK>u(TXpwHKNj%xF2P1e?R zQ;s|TnG?3Xpw9Yf?YkQ}@^ZI*`*!Q|TC`|p3r5*G=EQgqipy2saEU9IKP4gTrX)&; zTFH2&Q#rR!NMD<`Y_21E{@JYCHfY$u`fLx7pv@Pqq^$90Pjx7GBAoY+{*8u7gbV5U z{xjhpVp^PRc3cdBWnX98*Kaa74d;mkOk}Gk&en z{u03YcBw5hkwVS0U%!4*n`)!|F)IaZ=bQ8x;L=&r)t)_j7$}M6!Mz7yo?bx6oH=vE z7oDG0w~hgUmMvRquTV|5Xz?a^*_x8=7f+LV|{QdL(q1 zUdNEbym_)bD84k*9-K>M9-c)74#r-vV1b)BVWLfb)~qSkKA~PFp@e^n7ca5H$6*Wg zf5|1iB_;J|lCjsTxVYG8M-`Ma&dPH(a4HefWG~EHIvWisB2u|w!-g3k!VG;uL4oVu zy_+@Qg$oy%rA*$V;%Y0(@eEA;NNp3qG;Z8D18*OFG}qc^Q%SNpa}SzXs(GU0q$N^%Z2{0O50K=T22f$?x0&T&un8N-t%1(R0>LvS) z*dw)h^Hw`sZ<@|~;~hLhDn4k?U~A(uq!vg!qd%Y>c}Xpj;e%$hWA^M1M9*PU#U@Q? 
zSiD$!*Hj0zA?hB@3Es7C-NyQtKv(o?Utv{dAo0)EPmZD~oxk@KsVXcu?@Foy0yZ=C zo#xh%(H@)%i~*JS^w#@@zwtV@$7IGt--Ym>Qh@P%FZs+_=s z^6DMLExlBNFI>3#rw9#7JNhm~?jcHr*Jv6Q228(9bKnef zz>5IRa3&W-@_g-lxtej7?1zf<042A&q0hFU@vAN+j=>MGh?D}b=~XckT|xj8On_0# zL){gSnVXB5{Vbb=v$35$23WJ3K7G2W1~QojTx}O{B6mO$z>WziKnozWN)ju;0BJ)n z`2*?z92GmkYQ;mcgJU4)xw4MFIP}M1pdl`s{a=3hWs@?smhlDvDf@Hq-zq~0ugh{F zpsjiH=9*k@R9UAPu)>-hlXc1zi^WVGbjp;;0vkvSOFPPjR1SV)H)u&~g*_4Ydk`~bs%01OMj*VkTq&AtXQ$az4`WA#s}zPj}?4GqRc_l zH{N(dK*aC&n;~#JXU=Q`4J8El*)N&rr!%wSXd-pXP9|2p}Wl~ffyI}} zOH&suT4>)G3p^|ypi)acb3UKnFPmfFk~V=M44;-?gg_I4#TCi~-;nB3F4`Z!v{kFt z_HDoM#+wGNIp-0HDhno*leX#e+h5XDs8fD8yNLB%EBm7}3$?}dLinAfc{MCQH;zhD zrAPZ)2%$ej9nxI10mQw^_*;XCzC6$FL&Fu>e9Qbw-1%O%l7dw6Cc=}Fn8A;0xDPL* z?c{XdhdfSnO{%M^l5wKs?R`=n_*FHZs%+0Hp(&CttPlNl^y)kAk{EWAs;WwdDEpLL zwPi!3ebO8_dmKnhK&!}W>nrpJX#fTv61_EV{O6sxJ8|L}>akFTM0GPUx%Db0+XTp# zEL~>S*e;W0H2@l*Y`G?x7t2`DiWMt$kalO2NHM|XTrSLc(+MIyWkSDa&t9{77mMY^ z)%rAV`EkAM7w zK-^pd%@}9;)TcgWzz&HbpaKv95JMu&q#H1e33<{3G(?@eN_fh|W5tT)G9ACcjTtk> zfXL>}9R+As8PH%)2I*qm?hWb!Hz8zzwT?oVBLvv3E%1kX-le+edLSWnHha0cxaIs4N2-hYTKM1{_e^ zL`qE=eH-)qq^u@gR)w1p3zZ=ji@9OLh6(J{atB2N3mDzIcQ>_8B)EVqR7c4h-oi8L zDB2!)rtJVgu|SBL_Oj*6O?ulFo0Q%VgMnL6sf?Vk(JgGN1E}3@0+n_)@y4uGvE5#@vdj=@r<>!W!aRa zwY;LOv8X^XlL9F)6Cx1^fQUrSvC$25@A;j2r(gB$_qyNfh<<$vczx?uu7B0N_x$Tr zV67;}TG(1L5{&pG)Qb=+<&BSjB%-mO2dUdo>5tcGnrL4cz}T?ieR(#IO54oV@`DZS zgf=|l&&-GmwB68BuUWH32xx=I994=d>&ELuYkSY#AJ=t&)agkXC}A2f2I9G5#hu#E zC=;4e@TjXBMXB|fweo49ts80V(*Kn9wkSP457~PFlYr{2J%ob<~H+tsr6L>^_PL2#8PyIAJ#G;tTR^k|Mik09U#v?~QbRM(2KQ3={Q{Lz}IMn=A< zOnF2j2g{`#Au@j5BO{G~ee*ocZ| zl>9*ehtQHY(;^`uXrIvTF};@~b_jzX`Nec%3m=q2&A|17WT8J-XlQhuAL2UFuZYksB$y0PnuJ*N-xXF)15fj^&yvP%R zgys!0M%#>+;|y4<$iRju{&e~)1e3N!dk3zh0YPN204+6s)M2>vUDiXBLqwMR@}^13 zMR^oE$(MNCTbT$+nQ*U@fdI%M7_ws`8?3zg>T1yzu2rop#{o>AK0`619IT~eH%8J1 z544L%51-O0!f0CO&Yh>EujyQqVn;i$mh^=5t+@6wXdeI%iqtGSVAJO8ZcLqdR?nVg zQ?WBbcHh^d7r>bQPdEljz!YLgo#-p1MPJ6BG^iVXkpDAhq@Tz?@FMLx!Ir@e)K|1| zV`GEHqpg`z%rgYW)a&?h)&eu-oM&9HjtvWZ;-n7I?tl9af%us9U;dtR{~%E4#n(S2 z8MwLj_OsIy>6e)X!M$q?+%D-Q?-~M5K)Z%pkHkVSL^jNC4bGZSc@@cj&eTF!L|A6m z%<0poiC}qLgzG~}ED(+nR1u!f3OOR7V?u_|0+A+b<>J;Fn*1nF`*lwOq62*WuP%KdJ(_6$BhKkDLRWEiru;azwe9=E(*-*ULFT6}U0Y6vDM(!+LF>PsxGdkXNRjvKbJBl{mED(Awd8Pu|oI zvcXhL2nB09Crp?qny%AIlE@RX0>Oj4wg{nsU_&sB+~umH7y1wz*ic?RVN5@zUw|*# zP6!Toj~+c%2PIy3;U#f{uzgJWhmbfxYBe-SY9c8gAybH%3JjTQ670>Oy=OBB$^|C; zBEhTCc4jx;kACzvC9Y>B?i8l)Lc}bAHrIHzW+Y(EiZAs6OWW{@eBv`B8e74MA2`z& zcFB5Sw6Rfx{A=CulIf~(iYuDQIGvS29*7?p?UaEW2rGjbXja(-jJ{#drD7xa(2kHq zBZpE9X~#zGkF+Px9Q_iai?;mNa+JYccdgJiJ}zg@aWFg`5vXm-J8epO)N#0!4I+E| z_-TE{!4RxtTjpmA92AW|gH4o?858sq*7VW#F)gx*N7+zbs|VV7%E+41;gYtSendbX zu~V{M`GY_GgK%Q=25J4mPeVVJ(6jDw1Z2=mzF{*T_;Q%C;@(g#?_@WQfFs}t40Z&Z zfc9e#Kv+ZoMWgx7J8$b6L^Nuvf3-T)*VpTm&x5{K!MkRyl!(`F%Qtk;_)8YLmS_A+ii+Twtkc&nn*%99)zj3tfNKy z$hm*?6$UAoD$GVcXngCXe?i2|Kvwc;vgI7F#KnExy7k)5TefVG^nz~MLq9MA4xV5F;UW(A z^!-hnHY+iMfHL^PS$@nGAPvHS0hqk>(l2E|W4I0`!~udD$#0vcw{G29?O%{nroeJ+ z1AT6rte<7`B;s?71sX@zWydw9RpA4f63<}I!i5W!U^0M0-5F3}fQY_DKcXyS$BvW! 
zyja@iJuMdl5ez~w(1PZavJ(%jI{oDu*_4Yu%M|7x|M(vj%eAtulmpa9%c1Gi-Au$) z?j<8I!qBethaW%jlkgRpdd|V`Nt77K|Mn>YJO1Wd;mBLB(l~vZuV>bO2%M9_&TZfM z?_tzM3&NO1m-sW-e@)>zI|7ctd4fRAH@@+W|Gg*6qFRGog4p9UvkKt3B0RF=>DqN$ z!;G0z!>Br$7AaZ=IjllHCQL0AQpVy-*5hRWMer1ljBC6_IEGxINkf<(K76EV_z)%` zCJ65c(g@^eZ&)|BWXTd;<2PPN0P8|o&xf$en#lR{FADW@>O~tjLA59!e6U&1GGwhN zamfRMkjq^!9hHv&_*=VD2Fi=pZpMrmDn!S$3jc+KAylKeJ#gTVlx2t9@7FWf=GD&) z(}dV<-@aW0?2TIHyY9M6*Jq+B8Y7!0?b@|lg>UjbAyZQ?y=<8fxOycNY|iuk`|DNP zw_?RzN)pL0Ye1fZ$TN+U>?_8GQAV>LA>od zwuN`!Tc_b!_lQQBxR8(cMay;i^hqVl7YeDEBg7hVN}oad#u~~GHi6rC^_wb0bf#zs zAt+4iWtUVmx?^N*?`&Bsh<1{-gJ?!)&6=%cq70CxtU->Yedq&CO>tTBZH{>i4(`eOz8t?Iw6FR zZV2`?IWN@QHgDb}#BjIZvP6jdBFVGxyMTPqc)$1Fy8%Loc8_-l{f55AbXKrncURWd zk{@T;GJr5`+J!nRfbyVyWI*D^8*h^QmS`HJ&5u4r9S$5gs3e*?fHy>v4XNnQOk3vp zLURlWgbY$&G}81N1`ntM0}a+^Ir9wy$?O8wurd`IB29ZtoOnTC?I&r{r|6$(^huZJ zh_o0`qAt(NZpq9};Mqk(%D|DoB;`*5r{HZ`Ycy8owO@$&pn1g zt&@Vy5+U+Tp~leKWtp-gl$H8`2ppe~_;zmthqO0Fx|o=#)s`*ZLbDvRz59Ud^exZb zC5z`O=7ib(kN;mdv*$wsyBS0X$UKEG;i?LCl_BQ}?W$do#o_^Gz{TBFMG!B_kOYfGck1UdV2476!r4+^{2T zvAy?zH-cNE2=_mH{JC)L^_PapQzwKPST>_ei^{2<`68zw^WefQm`1c04yX$Lw{T*JncQgURiS)9sf`KO)2+}d9AQ~V$V z?6Ns`?!4B`f7*sE?G8*R2gI+CK@ew$U$jw>JTz$t(ws$!Hj=)T8r>L1917JXy2Xuk z(AH%Qr=))9T41zpmUkhKTJZ%CdH^iY{Mhp|LPOPzf z#LJh5{?@Fd3k;77!FlCZuP9Oe^rt_iWQ)NFD_^{}C!n2VT+o|3ke;PoFi;7O#HY_% zQyZq)sJIu(t3?KgqI-V*tlfhRSkb=an&SE8$O)ahouQ3zq^y=_Y@_HNBebta>=T9h zEosI67~{si+V;7KKKZ|`Ime>rJvy)vlua_qh|)o71J{ezV$czzEo^|uIl5Tobs$&P z26I-eY?f9%EWEjXOW3|;OL+MH+jRGIy2iV|{YW_S_G_AsbcRqPvr{y^=;D7X2m!uh zE%FV!^G=u_J^rEZCs%bi@i>|15J10_8F}GUD57Akv`5`B2;KY&nMeq$RddMNRR{=2bKG&q9k~se3JKXH zmhaqYx|AD}7yMIS2u|CA#Bh-?;Ain;tbEvPyfoE_b3Sb9*O8sNcg}kX@OG$N;YVEUt+K{irae! zfg!S)*7EQCW2jE1!}m^3o@!qsz;5%~|K|S*yB_}=-6*ZE^Y{E5fl?7@`z0wgmWOi$ zx`%)h(C*>aFR>vHXz4gR6OAXwRUmw__K=-EF=)LIlx@@ki9IXKms5DsA`eI}>(A1( zOBQ@H8D_PZ>Md-OX{17(W$oL=VYVEMO}JsAC16Tsvy3KF|F%Y6pDKJy9mS(4K_HTb zYa2Uz$IDn{bkTNnUsf+Yh4>MW@Hs6@wp<xdvSle8JhqkQE5)YGSSu-Xt4(mzeww2e({4(- z=3}&`+@=rY{Yy%&n}M_pZSv12Sj?}Odp>5gl~NPJuu`CG(+XbMF$+hapY=drE1dYi zai!l6k)wL5oPhRSqtTah&j632c0+wIdbSAF@(r$*_pcjAz!7i+9D()-WF`6bA>5^3 z5a6R&L{3#g2I*t^>d29!^v6BI3_t9|<(k*1nFJZXyyi~0(oBpWr-;T~YKcc4-8Izp zoUC`-_wyfxBhUUUw8&8yg$!9~DXc1m0=ZiT0kmhd5q+XLoRlfp`v~C}f`XfVK!A1C zW3Ik2O#S5NLbZ^u!ESgu{c-GrUXX$;p!4lw7wP;UZ6u>bCmfVpL>ktZZn<$iyd&TU zI06-pfD_O@ZeTuo5UxlR3uUffE%EPodSrH59_ghfy^kC*l2U>xkrRuN#~3w#pQdAW z!o&}C%m>Ys$dQ_sg|Ta_W83nWpIu|#ku+D!Ml8oRyc>3X`@hP3*jRrH=T}F-5vV)_ zPJOU3RLk_!3qSRF8E}egF=R(u({|R# z1~lh{l)=x23_$}bYei<+Lgt6((Evj>4AZr-h>dW2_Y4 z4$`5$Tcp3y=WLf+rexA*IVPvIovhq?l>{}L91gil2icVDm4i{?BvLy(tsLsp0~sr9 z`lsYFmKD&-%TH@5?u^Hazf|2f{~gxGoGEKCCSs+5VRLO8;eVfjztTh5zs${*!)pCQh8F zM}S==>*K%v^+)7rwNd#OKu2k%>AJmt5GeLO8lde~tUCPzuIK6q^bG=iLqG@UOQBd3 zbmwKE0=Nib)z!7Kv5su2BZ(QIJJZ;xBeaa}BAYceHK_m@kL8F&?@Z$<*+gcjlF_t$ ztvj+(IEl-dikyeYS%RGV$$6r+wL_GM9zQPU0?K)TY*d6FJLeWVpE1n?!oV(`r%#`f za>es8BiXFdGBq~FwVROy)97&v=QIuMQSO9vM&jEn>87$Mgk<)={Jk)G+45{uy;qqT zTiUO3^`=go2N}!48J44T#N5`r60J#yf5Yi`ej=D;W$#v*Jevo_P4mRRkg*Oz70upJ zxFTudOnuG?xo;9|tAx}y%XGz4g8$jGag!;|6Rj4qu6zShCvkM(LG09%z7w&{WugHn zSMDBy+iATdYz}nDW3>#LG@fY+d-v@VY=`Mtm*5-waW1SiuVpJ8y++E(&X7Ar1M$Wi zZ-kduy%Hu*nh@^3_v7KLxQ~+aO6Sa-r&+NfQByPJmp&_X9V&f74#m-aM{!$}n0Ej! 
zURrUnB*C=Wk~aO8K@-mRY?NoD>1R>Dwj`WVkGyIhlR1Wg8#EJJlmbM4$t6zgV$KDpt& zu=zj!TNzkte-u*Zsd;FTXrYkmupCzkVz{`SYKL&wcK*dY*Hq>3D$i8v$NA_RcD!q+NKu zm65s!bp#xNLJ0H?0WFmCAap}eJbwIG*tzq=z_cF(F>z9X55EsT+$B5O9tjHwXM zw~ZS&hB0Hth>$-~1@WC9?$q$}=FJsCncrcLu>9EZv`K{dEnBvRXP$W`95`?=ELt=_ zeBu-LsSt|Rb^ZDcVfE@)C7ovZkS__>U3a~Fe1!}nTo*KGIOCb3PVimiJ~DzC?-!it5V3|p~Hux%&S4gZ)9Dl9d|*yj|nAg z5n(yj*BD+Dz^Tu1IrRPIHSdL8yLM@ruetWxu>6Y4C072cbW)D4*(oF8GiHnlbrUa8 z9HP*jmeZy($9Rg21Zp3SIQZtmNdT~ln(e$>nzGp=VT<`#eDkwwb5mp3C3sg$Uzt8_ zO4ze!Z+K_@Mrq$I5~W%Mr_Y6{6DKQBy%kA_z zfBd8{dg}D>qoJ$~e9*t36Mn1123FnYu& zPOw#!AvMi96Cn;@XoLcPXag{bIL81|EV-xdEjmE6`n}h~PB}yOlnmHhb=B43raAM3 zjKy?`*C2y9J4E1}HmxqyO_-$krdg!}8-xeSV&>qXqv55u)`Z(`yDij>o2bF89g4+? zWhc>mn-vO6xMrCtu}|u9M4tI&vg7d?IVVeXV^axP(fen{uk zv7_Pbx84euTyja6Fn)p_7eODQ?NlR8zLGYGSzBT)#WaI;5^%_vvB{lV9f8siIMdJ& zHmrXyG@cdhx%8>6+qUR{5$~GiP~Yc;r;jD4{yHthG=aXRSSwht%qIn zG0uHFxsDw>Rz`U*6EZj|?B9P-g>c9qzsZv)3prV$ZV(##-jWvnO!dVcNnZ3tD1}%Z zJ$f{}B?9Z(wQEJo*r)vK+tmIz`0x#yl0aoSi;(Qp+W|B+OXmMux{OY!SPtGFSKa-f<1!cI2lwKHF9{= zz6*Ip%h(`&I+`lch&G3JrH?)H?DOK+pxRR{yO2uif|eZ2DVye%iXm96VG(Rkyzd*?72_g5prEGapOI%3x!Gi~N;K=Gh9{5?lU`V?8Hb_g6x6>XQ zH@+VZA3i8`I;H)D^2c(;w#bLh+sG^SHQJH>B8S_H8*$BWPdhNM!~hpqZ0U=w)1sx0)YFY4(ANk|7(X%m z=Ku0YsGl__ELyNMJoMnhVWhn0o6i>bJ`^jQzK>eMYp=Z?zV$!;EIjw@3;OW=x+MmR-6`1#{dGa9Jw}2~@%&f-(Z_?%jLB=FOW_$e%G| zhGKgbwsOoyKvp#t?$Dc`p3+k~|4 z*Xeq5=gt+vI!WvB)1UrSwMGc6yLRmkr%s&?bLY+r)2Cl3#Q8lX)0;Pcpdjquzc6Tg~yjDPe25~00EiuQyzrtIdkS{ndi;BDAd>2hhP2b z70E-ise&CE9mxJW@4T(Vo_MvQp+%;~4guLOgl^opaZ2=Q}+?o|D zh$7P~nd-b~(L$Y?N>~J&_9hZf{D{xmq@6o==oIOhGiE9gAP#=px9`w%UZ+l-s>zHR zHBvC2qis%qiS3G4ix8+?Dpb)ga33mXr$XAAh0N{QyGJpmUvQQyQ;(x&lj+Z~Jeon9 zKa?-_H_CcSHtpCf#Fnz0In$tdK-yk@`4!2lR{GrZFmK*m?Ze>p_S#y^Fw@abv<)zWSTt3Yp1LJ8WoJvS@J_ zH(^{^+E0Ct$#nYXUwB?;2z})%zbWtPiQzkc`5k$eZj|SW*&9-i!jEq6R?t&-1RMcJ zU=SkU1hkhM9l>yzd^GFpXDi8g_uY5HAt5U3*RKz^-+o6JE7Q^tfVXYi8a8j*Btqoj zaOG83g^MqiDRd$Pi{3%COK4Bg9^tQ~QTz}dAAR&uAx-0@7A>MFdoFzSt6vTCgfybv zkswMEC`111iY6n9`O!V7##MNilCYOvdLcaj{EI^7>Xp=O+O#p;bI(00tWTdlUC72N z)yPq91n6bUE>Y4pUZ(G@UcFkij*!p0@4iF}c;{YcZi`R1Es zO5!rnGEUI7bi|M36Qb6(c#xTY_=mq&N7meJ-MUq$JKu1_4LK5!FNeh>JqRMCd7rGK zglIi4>zl5*=340sGc_0_1`YVuty{u{7fw}8VVq(cr%zHINk$F*0WIOaefx##ag8M(esu@?e_lG-<;-@3=#W6VtkNU1JN{yY^d=QU=ndzcQuyrI%jR>DyoU z!WU%FVY=22O*q>2UAuQHv3&U9hjmZ_!t~TrGOc#!PVJwNnMsqTC`qlZhRg{m)GV3a z2`PfO3C_9cu}6eteYkUb_{6>U%G6!iC0Pb2{^vjbC>+>#AY6ar4PluK+)SA|xs@1N zTSr6;mnNNp4DmWH#Pip`{LJLqPb(oP9cjQ=s-=KY*0d- z=|i;hbI(1m+UXl^yiupjGw?8SB&-@p@@Oj0u z5OJlAY0HyB{?TwU&_f$Dh`>P0(4q0!MkCEEf}z8P=>P@&gT5-=Osd_ ziN4B=3$7pe$VX(V|5%x#7^nUJ@Q07knngMjum z`Etoz?n#p_2rE|JDe204LeB1FP*JQUn`+T+Ru8GtZ`DgLt_rV45AI{gp@++5zv$Ce^#TQ@HO+!BZ@sFvd z^r@$w(&@QOu^l^RbWTvrPl2SWq}j~ktein8W!fqlCA6MsW3Rd9>TuUxcd20d!yo=I ztXZ=rTrE2VvQCU?m$PQg(&>E*7cLN@iq>(MYQ7+Y5Yjz+_NdkrjVuHd0Xj8NfQ?!= z@^2?e6mYWm5I5HC{Nfi+s_+i^`Shnh6K2j7Az3t9CLBW)><5`(Y9(>0*XGTe!{7h? 
zKS;Tn)QvKZ6K&SLLUz`#-=JIczw*kj!lj~trruMgOf9NqBktnGi{lzy$p?)Yqzr++ zVBMr5-~q8*wQ7};9E7WT?zvYt)}Wrmqs(ZtSi^hA9k*+}fQtOUD~5@6q+V#f=gpg^ zQ*>!B`T~Rnk`3{>^wLX}2r{5R-62V=MTGRma*{K7$16mGb%R@loX`g#I2T{MRPmc8 zT1>R&l;yhXt_u%7_>i{wW8ZyD22nQax?;)?d7V6YO8CrYKCN2QA+mXvu3MHqxOdmy zuzTO0uu;NaarqT-I<1fJB2J&hZ#a#&&hpV-N^9p!=I7z>d-qa0+7#Ohri$8VOHL#KZ_u5xPW6U)46)RR~IllYd z?<(m8cd+I;z5o9Epn@x!{+n*PS&8spedjweSk)lquGNjJ7{G!Qlim|gJR$Fzrf|>4?+%wq zAHvu(rZ@h_5$G)hq+ipwtK@ybv&4H7f>@=4ROM${BP8<^_uiv}eye04m1QlT{U;B} zbG$K(md(eUe3p;tc^N~#U1b^iSXmJ&IB>bxRWKA@wcba-325(QEIG*L2SSdRYI^6L zcc}39$}6v^1{3m$@Ob+4DV?r5PNo~Nfe^wa1d5T~`LafB!Gihv0cHvz?+A86Wx6Yy z8om7T%QCg_xROu=Og`G|fQq0f|KmqIdBY-fqm@Nag^*(B+8|m$2+y_GUKhrU9-~AV za>_K`n{K*Er*J|lzWBv2iD3Sr3bSbGhRWtLXk*cO-F^4nvMz9)Zm5KY56x&4fZGC1 zxw>K?jnw&VnbORgEyS$B&bGqD70Trb04xYZ20o{}>?;!)00__(Fyt6x4|U z00=Xh=6y01cjwL>BKYr9LIU}UShXJ>#~sqb06}A8lWH|%IhfiSLANrc^(7zH^U}tP z7cUa++BDIi$uR)hy6j?nLbX%04O3nzC%wl$)U=@o>dbu1#OHO}SyY@W|hXx;fATAt#z`9*2YyUVWg0dF5j$iG?OTgvvTgfz*+qA6So8Lp@X7l=q4i)BD+XNlAK0%`<(6H1 zv9#An(SVA^RO&H7rrkmC=gpg=VLsR_7V5&6rj_qg9T`Yr@OrK}T zgQ>gpLpH~H`st^`L7DPeKfB(7wYgH3CRt0pe*HQv6IxiNQKPvAPr3x<-zozIhh#gJg+KU%|0K_9+MIQ5F+{Ac;gLWw2%S>*_k5DkCrKK z7_@jybA`;XOWz(5%vpza{J5;;6QLYIF|PNj7Sggor)ClsLAb8&LJ_Pl)ASKUM+TL~ zLUFO;NZUbhZ8{~mVyth`psjXwhp-CyK&vJ3R;_wjr+z})U8FrvH$9B*%<4n3^H7Q%{4-vq#hFY#HkZnhqq+<=|~BG+4m+NB@k^LVPwzhdvKkgODXyoJ%=|`d)HIry(A%sZ15<^0^ zXLPg=&l7!xskcnQWk$r}#Y<$W_r20L>a8Bg@KL+(QbkRYsg8j zMrJ$EK9EfeIrEqDu{IdZF4OrLpn&W?_`ri<(ZU53I2=@qw2%IBC8-*Yb zal;*fkZHGkSP|~(Ws@Lw1Z8s&gm%_iVi4RL8xb0-R9nV6MTkojnun=I4Q&}ydm%8a z-D5}0eL`q=%Njj2Kafev!0vx&!uY6f*|J5~f3X?Lq)C%igUK-}Xvq*@nPN&D$mX0m zb96Hkb|EES2qD5Q!ubTz?4ZGtOaf`a1C1zY1$y8Qf-~A=2xoo$oN(yMD|JmL^_8lK zP#mPJXLKW#ojZ4^_7LqC+Ca2`Y&1l@&{|P{rVE2F=@Mq;%9T2emHJaB+)eN)RD1~C z0nykElM!_6TZH7GJ*F&CBUXTnh$nS`Sg;P0c7z;3Kp-Uy0-#x=?H4XwCN&7POe8-NR^2{99^`7Ho<0CZ4xU(V8cG1={ zZI}9um%)kH-)pt6F!dV^F!<75wY8A#SXa%Y%^xAx@*J{@G=okr%Pa`i8Pg9sGNery z$RINfC=WaHLWUv#Y)W)UHWj+%mRocXi-!QB$>vFfM}tW}*&+RXoXiy99l>A*>&ewm z`Vmu}IRll!9s2!@8Pj#DFM~~J)|ur%yMmc^ZfzH7!`R3j72o|Ffu2R6*yGhvF2s>O zThw$V&7z69ha=z!I0BA9X9zd}?F^xA`^T?doU#_lCZzMMXs~dD+;D|lVz}pr#)2bM zn4$|YLW@)*8!0f25;Dk!Cv29(R82O|VQMY}ll7YjrVvw(?>HoaDum+C{_I;SP~Lj$ zE#VDW5BIa5{Zs_dqq<8fn;jAFdN~htrD*b=eeSvN?Qefu$?@&C-!6n@h6<%o$mWQQ zIIRdJTD^p90Ol_4sAz_hML@A8?ATZHHtsR3WOwC5CNjmgF*89Ht>TBWe{_Y1l;PH`< z+@NGtiMJ$7*<<@tSNgV)GjV6v-?L}Y&Nb;ywv>my#k6vW5Q75u-Mvz#pUZJBO6HoC z5H6R&hN|k?TpdJxro%HBf&X?PnV)&! zGaBac#~;^YlOT65zW9=qYp-IteEAgtttY$pGI&HE!ALdu%b$XV0USsq1bE}dO*$Y# z--Cq8lcVjT1TPbtdQdiShY0gLGBCtD0PLQA`kB^26V_|Sg2XBH-}~P8Tc^iA@slT{ zkE{v5@f)Al!4lSYvKD!wkkTi9{DiKJW=-v!IdfX4`RbXjQXbw})~+o&GeX{5t%D1h z0{g8a&|L(0F=2{7PSL-dzon#Q`IeKehjj!T0Y|_QC>sGMpk+f=PJ}q!)nXtlLLxC} z4-qy^;51TzA8rr;HWPaI;fGZ4L-<4pz3HYKRrrKRj2kyxH-))TcH+b!#7~@f0V#y5 zuDVLKQKUV)Y;#_DDRmcv6md<*#`0mp^40OWz{d)JVu4^@gA`y%b?NhK+0Jh7B7o1a-LL zNqG$u%0OK1bsbQy+}tZ$68~alC4Gq5{WA6VYQYq(FzYzM6Ri=|i|uTs&0k5L_?9+# zQx^==TG1v?7XpfAlX4glMtk*uOo;?*G9W&E1C8CXWy_REqah_c=`-z^_Cu3NScpWd z8#(1oEJwVL<%(s&g*fy*!Z1ZM-Vf2~*w%vf*r+0TAf%K%BE zOtrELCA-|Bp@#SzIU+mLu2~zFEnOBakb#z3A62 zxdI+&wHX{>ms#3`!2=~^l4d+oNR!t;;7gp;;KusRuYK+7xpt0&OlTY_6PU*Smd3&! 
z+ujCpaIrdAe?hy8=9zLa2*f}MeS)dvU;5G)wO;fW>c=`dcCe=3e*W{nA^FCB60>jC zK^&eF2EgzWwI|(}uBN*wW5tjbKR1qmBj5-)0*-(q;0Tn9fD_Pi5$i_ytWer{QdL46 z>O>HXJBlLkBB-KCi3DSa5*0`y1|gRpL=@o@y9$xwXIH{OLfK%&+z@DWU6)FfqLmY} zGD3vV(M@AiAdQ+&V#Yk$Ue!4v0!XEK;oIu0NynhMziq>4IGnm6)KU*HVydl?gQz(k z-I_DR#>gDytQF#PMznd5QiOax9z+X^AaBuHE#}aArQ8tM+4c1j7iYm=gn!o1%5FQ7 zMmcNcBB2~3I~cQW@wjX(1(DEHC2`uGx^Y|v=^9*6D>nAAdJ~TNm?m1o=E{K^yn?<_ zt|8sCXB#zd$QT+_`XZWlt*<1Awq20ZF!Zx%Ah3}Do|d(oteJgVHXpj;vP*Q=+$VqW zq^#wfD!c4131cRX4@;LUlzLM?E2G%7zSJ0Z^NIQ7GEu+0T|#QJ>7Q0-R$KC)0i1K9 zEo7bRIMIyLZkSvXYWZghVPsIja5T)~HTI7bMsUXDE^4noD;&?K<=N&rjDsQ! zK3JV&OiSO=&Zie`#Jfi}{$dSxe4cp+#P-FHDYUc+GXyZW(LV{ppoNlHxgoAJZ>yoL zEpKy-^)9VHCrMd!zd8cvKLR|$-g(decJdNB0*=6dN5BbauQp*@0U6@cnMuilP5DHF z1UX{bBSNDItfYm2%sRPf5E>z?Op}b$C?Q}b5Hr=%;*t--C`1kI6oM%C(T!<~lBc*u z0@PL*MG%d`X=}!seOm!NS0WA4HXM0dSju*RXl?83>%+NsR^{q}0NrXW*+QD6ZA3lh z!)8jc9k}NQCZsP`l{KhibR&KwN-Opbm(j6q+GbMxe|R0hDU#jg8*py zSQm+D(doskFTL@G>*e^B%XR(YyK+7!n?#M19h1=nGd(@#z?MYB%9UG_5L|QThH~59 z7$R+baBcOGFSL;v!^7_L3FKX{8SW7m z859zXz{`lb`L~UwG21y2mNJmmxoW0X%Wj@xr{Cbmk2RZ3(mqN$Wl$;_w9wJkEa3$S zX}4S(<@#Op&DSs9OL^5I?yW$Yk6#^uUP6Fw*jN3neV6a0N_n!5fFs}tI08;U`>+qB zq=0PO3$9j}NR&i|6cdF>Y;l@t)Mg?Zn-CoN0C7kY!Wl^lQ#n&ZdgN*DgjYdT1W>;6 z_;ZipFJYrFi(Q>HjQZ!oMOWgPEqkT>S+|(Bxw0|Jg)+5tXDpCrp3BaHlyo&bYUR91 z6{G#*bIrO#;j@77LY$ei8MDO+d{Q+_gYIqrwbL3YPTBf zLYkX(xg^--(kqZGL1SGZY3b{-aKCQNqktv0VbpdSj`Y>mwk!|_C+kzGcZ^RN=nq0X zTbIMeHX;sqw?0xpsP)SOLEoUC!b9=mwZS#kIp#)uTw9|@=aT;BbIJv8=TK5I9%!&5 zK)=y`IM{*k`a1%SfFn>j2&58K<gLK-9$SV}r&iKS5xSYbgzx4dxQ)y>75C_BqK?muOG- zq-IlFW$e<*fj)AFTa}Gld*ljT55M!OD*5w>qcVy@0*hzXCSm=M_LkPVlT+&-OQ$g& z1ZSYa0nZBufE5rNqtlz8O9P+X@vgMF?T9)sUGFAH4j(Imf9_h@A8T@fRp>H%nMi#X zm4r&;$z2+cjl}7`DZKWIj(Eo8MHvN4@UjhFt^<>p%cGGnC7=$^NPxLoRUDHyKWUev zOp1O_#1wFU3R&H(Fppbm+Ff(Sw|87)Z_D=OVdvv5w&c1ShNv7d77Jc zD=vD34ne9ZL0Fc{O(6osdM6M0z^Oakl#0#?%y#FLx*p&};Q-R%F=A}&ps`F7MOtCpad%~6W)c1z5HbfwFmwMN`Tb$Dtsf&LQOBYjqx-tC?d`!OKxwv(p}R^U zKJZnfNx@%L)q43%ik`%aoY z{Na09M9Mwho_+UIDd7g&@|Cn{3CoK@uBsIi*KAN9tl!;W?-Vv?J>7x*l=mSFL9tFH zlu7H^~IYlF&q zvb)=b97MP*AKiLKB1qi6Ded~}(Bvg7amrAWMJr9(n%-d6sincxR`qzi5cCMMx=Ysi zbbME+o{6X4;X{lJ{!C?&E)pG|>!27h{xEAFC|-csn2$WESrdk2SF#nJd|ldTPc9Cz z_x+U|nif4WdT6{(LwJpJMVy=wz5>T)CN<0AT^`Yj!@T$QuZWZJ)^9v-J$|C)`<1%w ziPI(90*^KZ?C2fvYJ&m5g_xQv^}-FH$(`Actv7(W@v#!@+bBU{(W|das6K^dW;H58 z!#>Td`Mkv8ykZq7?^_Lr?;*vHnvhwMCqazKgC4p3*F_zm0vVE$wyRMkev>B~ntt0O znUA07o}fogwcqkPKiLZtKMmVhS}qQ*n(*QT&bq6xy+u(xs=Q0#I$%o1J%urBVsG94 z%xBi~cBAfe?DX1_c|WP&QHLL;?@oDo5RiQ3=Q603j!?LU4yo8FJePJ3YJmQ6-^vVl z&!%%dAQbAqSZZ#{K3;0v9?V&+$J6R$mLJU!M1QR>aAeYh8^}Z~XFfzN_=2-8lM#W7 z7%6>EX8UJX7GSWB2-`9{7yHp_N8{@0(9ce5%V=UYM~gYs=PXJf0K8usH^r9d7ItZ? 
za?GC0Z+CA}c6=9EYTS|PZx;1S_q=}u7)WnfS9@~(e#Vu`=qHC3i#r|#ZhV&{3m#kF z%fFte%CNIO$#l{U6aoQf6n8z5Fqc$NaHTa65gqFC>V2=qOAI81NSx<;3EYeH=g`u6_l9I{>IYEr9kkY zXf$^_^nAn6QxNFEMNL6MBR5ZdOgB@_VC-@sU7B3J6 z3~TkjA{-V?69Uzn^cVzfzuioxB_Wc$8LmT+fCyn*{RWn7nZ5l6IRIushVdTVsV|?y z@VWP;U_+s?9%Xc&U(-FfS4CN}_23-WZ;4|g!A&Ifc-Tdb3U^B^NIpeS3zgR~9%_m) zrt^@9fG*-Md{e-H^$vUy_D-K!w4WMlGvq2JUu!)2!5bugkR={d{_xba`q`I@uzAGl zRA@JXxC-M|5Dei=rcMQXiq_oV7FwZASRBJ>e!{kALA3EmQ<%A-1oSigHFuWG8&tak}A1>e4aH_T&bSUOoyw_3w zc)l;0p>bzhN`R>)icHOiTYP-{oXx@@%wquWyHjlK0el`H!Z$&`!?>qXxrl>w`vNNY3NOGvKe#kB0-I|A5q@O&3?f(-!AtSjA?Qha^{)pj z4D#{!w}u|#nDn;*9d$j-xZBeClQK2vHoG|4@;`Gr8`FR$HonK14PgS>x^*GI2(SDt zoK6O=3_Q~6;m^9^*jZYB{-siLKs)?>31omZF~Us%*S&lzLK52lorV`Jt~79{n<+RQ zn{?k=a0I^tr-+PQ?@NP>An$uRol_$fYntu`KG;lW*1KS}))9Y??UAXVrQDHT+oH(9=R1^1htGdBTFe2r z_+dyVuI%s3RaLWhjo3zs2K>4dkhW|+TwXSk)}48MppfcE>(3IOFzD&l>&7DE$FuFo zN5<%{9C1DaNU3@sh>G zS`RObRP%wg=W(VomvgtlZDafGx#?~8j?JZ~qTIWS40wfk9wOVMjgUkuIcGUx-xZJ1 zQfI0B_wNbrn)TpkJsA%AkmTeXJsfXY-4Ve=QlL>pc(}7h1nf^Fbe`)Wt(w^R%x69P z+Ud&H#`#e79|3xh*nE!&Z4>Q>&!_T|!f`M=F(oIIgp~4%`BLR2S#Pfi!rNTWb-Ab+F^!5A?>0XA)&flUrV0%n3k=cuTDE+6L&p!G&r^( zLKiLs%&jowdDpoUM*AH7WD4%) ztg`X>X0TUp4CQ>EfsG+?-M2z`;F`NML*|5#7^^xGh$#-7#aeBk+3cGBrql z4gP{K=hwS3fJnrC2YG&!g7r=O0^L;K;_dH+)UD}IEVW0(Y*Kqoh;X>c(F%HNRnJ6- zv;ezO+3_klM0ZBrk=+Fbq`*N+2$xoMMiy#q{F zy;pPD?HrI@#bbUY8BD5fYp&qxO3QELk&lR9FIjn}+h|E}gVx6mZeG8 zzTtj~sOycD(bBA=S<&G0uzLcz1Bpe#vvO&%*LKo-Unyv5@%K<&%@*vuIV|K9Z||7O z?j27aH*Bze0mWjd=K2zKt*jXPew&_*WUT#S^N+j6 z@TJqNv38sqo})3etPaK>kR&3BQ-&-zEPh>~=|E6}LNt*S`FMC+k|vWk5}i1AWyQ7= zd1ew>)4Y2CyhrDIy6eZ=gD56jKRLiSEN#6nF>32_7*r6teZL3~RZn^*pTu~8c_$6j zSjxMzG_ODFLT@7&-H;wSgMI)?Wqt-3!c(i8PdZhvxL+Wx;k%!BJ@xz2L!?po^m6$vJeLD@RCBGx5A#?>HL>v0;&gOZdpIrnTzuhp zUg{URhG^0!zA(DDI|4!_3i2tEv2U6yH_skSlVObv8@nFxFcL@2#%t-*jns%Qn~S0^ z8td-k0CQ-zF0-%JEv8hqm&UF_qWH1gO{^`?j1+JUuL^XmV0!jwZOk=- zSo&gx+_NQq{t9SurjSKc|CV0TldJFGRA)_`rNwT7x&(Ac{nk(>NRxGu&}Cm>e0&=e zB|?GvA|F&Zq;hv6rd`%FWr6EVrolj8-$j9wY%dcwH&N~Pl?vwCoUj(*ySD^$UEm$K zVSZswV5tWyg)&i!LHM(8_qLtu0D=)K;>pHs<7pAw?dlYt_WT%vjHVNX-}j4 zGL%L#zw7e?sv)2{o%kj(C2&W9-h&5tSGg141xP{3d3=MA1=sF}ArpPz9CX=Yg}OTh z?kCX)Ntwu&g|>vi%o%nU_oH`Yj@!RoxR?0ZH}6)sUrraq`$!%jk5Pq7CB|PvWC@HH z&y!2bSoG&C-9Cfqd{Gn&94eiZX`6icbYh~fn~s*GAJry2^`z6ZS>Sr9Z#!au_fmTV zZM0+LmiFdmmT1>-+8)snFP zS-oP1Z;BjI`Sf6uQ-mc_`3|V9(RE1-r|~PR<`CdB0Pp@uZUB%A8VMet@LNU|r+{3N z{bRH%`NqvLUAKq!87gTTM4IGQbDZ(|gz&t%X1%Q`c$JjE89rm;)R2pLdHvt@zo%e2z? zvy8ZK$_Jda!QBuZ$l*||&T|_AcqJ8j==j82GNa(Qb?Kq^0kP)3V_z^&S+T|vx`m7R z`4SV?zY(c`7yi*bD-~KKTzdSch44=}!5>2Pz zn;uOyM}Dy|>__D4D2jGBqzJFdE%Cg#g1ZXY^m!D{l!r0OB3%e4QS_)EK9`&e z#JDt3wC?4c>e8!dj>L5R@{^!7xf>ZlH)38{iu8)RmP@2S;2OvrSg)OCSo%bs+b^Z! 
z)crgdyvr9+bH69~-fiMyJ5myoJd+o%=7hUBSWG^eBS(WvY2Fjp^WOpbkjs72MpB?D_ z9kc?BV%y7%8fS&o@3F3O$GycYLUKrirPjErvG03bS65f{1c8j3mk2%Op^J>6SH|MHzbQIr->5g<(mefs;$rlJ+(Wy zIXFSmnWh#e6@OY#Hc46d!Bco-UZHFj{i_G3J^{ZE-ph!VF8YSGfF)1j@K4^2M$5K5^GqH}Cb zOB%wgpgE_??a+r3Fx4vHu9CmgZ8R_iV$QZ;St5>484NAvI&Gn9*377+*~SNaa?sRX(M< ztGc59#b2l82_eubdR2$ZW-0yK;D(0w5m6AoU)T)xnE!u+8rUZex&WhkR5itknUbS4 zrYEgIXHQ}+Xq?O2tHR+uz1#SYk)Kq0Aes-%Yi2!@i0Whvf~GRc@ETi1o?|Gwo5YZd z!TsO3%O4?p#>|Wp&DoJX)td$gGxa9tY8r=1=Pw-i@<>uJ^r}} z%y@}0S0TD4cdDj?%Bee-TJpf~FC)D~1`6^}8RiD=^$k{C(&({+)j6*^5F%LhULGYf zjSMwW!Aq;r9|sz9qV7P!cKRl@sJyo7nL6xv7*{Jsrp(_6poV$Z%(G$d>(Tox1Omxc&i?kQB) z;oVW->_x8rKc-;$?GpmEcPL-7`B|4t0DPZR%KKtnng93hfDrx`ULI#V!KvTnX)D#H T2{$er;3M-`@exAe+4KJcQXU45 literal 0 HcmV?d00001 diff --git a/site2/doc-guides/assets/syntax-10.png b/site2/doc-guides/assets/syntax-10.png new file mode 100644 index 0000000000000000000000000000000000000000..e01122a5ca0be9d8d2625b85bd8ebe9d119f4be3 GIT binary patch literal 149702 zcmZ_01yq|$*C-qa65NUvhX6&2Lvb$i6nA$CPAP819SRhR7ndN#rNv8er?`9a z^M3ES_h0LNJ;|Ci^Q>nkdu03Glc+bU@;I2!F#!Mo4n#p#0{}pIY9jT3(4Hz6)V8eu zJh^GeO9QILDfgZp!YtlFELBthtWRwa02PT40Q^svr!N5MIRN#4+5mte62<>*YaqS& zrw=j!5NY#Y08jn&p2~kd2~Xu|^glH)AL;-9#eC%d(;Eel5BxuEq@Mo_Lw=+Sda5v- z6?EMI020Rk6eNfS_yhou1VCh^w7iiHI?#it27H)q)W(J6N3mU*KtU$s&x*iGDYANX z7GKSUhW6{{&X#_8+1)nSeVwzrx6j)GyKTL#A%w8Kd8WXP9t4)eCnv)W8WP$Obz8Z< zuE@xJPX4#I{~1DW zf(^(5{blIWOBjDp5I3MISgMiZ`QOJUjs%-9E}>PL>yGS2Z*ikwBgw$XClX z-3|P}PGa%*y>E9S05J`RimV@g=`o;M+A6{7a`y&Xp&sqM_t=3ZJs;lhHb+S zHe@?hO93MPvdMEi$Z?W*;)Fy{$_dg|Pr+9A?m4;~z#WE1?md(}XpV@8NE4+)iH36L ztVPjK*vtLYlkfjIf`87XiV=k}@hI*BA1kc-+4r%1neLSM5>NqkL|g9pPPZtzkC9 z!@$5mD#q&Mt&6Km@wcUxwrXsEd{`TU)0yba4)fKLQ2fzi30|k?X_WyYafjrun1dZ` zhl(VhejhJ-jgTfl6o1X=hfZ*#KFqhc8v1%=mmtBE<4mq;9%aQ^aGqw1SS=dandlXS z{_-U#8_D_o^M9e@|3GQhBXABAy5IQ_fM{VjS`s6?KHAx}H#!%krAdt>ye^Ff-Z0P4^ZI2cYYe*}y| zbar;?qPAfkSpS_o$dE_mVL$&AcSEA@LmAF;e1h6MV*p-bqQd~^aPE+uhs2v}<9b5) z6AT6*mYYOujl|!PFNm(D5R`DlM0I5C0$ppD7tWQRJmW)ko>Sf#x3U48L@2@BEOBR1 zoMQmXUVHq>kN?7^u!D)^gBqHq0M63hliT1cNPU~2PHoW~`Ek!PAZ%$9M}AslJx{(A zY4rnCrX{E6ig2pOaW^!H zwc{Gl5Wu9TC9sNc0ASLZ3SytLA+D75VKa)@XNEQY8||^8D0of5^Z0nl0dX98(T-lH z-G|ZcDD4u#(Pr;Sj~f8q<|UO0J86u9@BLX)jX`U;nG&L`+CdG5AcF9^BnE@*M!e3f+JeYejvCvSKqt zYzu|>qIZi=FER0m3#+xZIEBBKRa(xCS1|XlYoH2%+&!NNB3&(iJDM|c^j85+hRhZT z*gI*lc9+1Jcad>Md|>URXl@24rOxI2ojUp?t2NdGTDGCJ0;Zxlw0iO&y?){jwb9BNRwMB4^yN; zc>-P+z9-yTuZIC9(r8{6%ffZ(ryzihlYFR1=Hk!YI7d8= zxKnJ=FXymTl%oaWRf#$tIN1P03tk#e%^t&Jjz-rpBnmQQ%Guz%uxWMduaE?LMz&{@ z_K9I)NWr*N{s0Ijr%6m)ZlqnO@wx2MiQqv?_w0n-wQuTOKa(*n@UK$d+>t=57C%;I zIEeo{>2cv9Ke4C5|4e8nur+7LEu4i2kTz&x9?`N+$q+Q4L^vOqzBn-OePnet-#vTz zmCU)xdwba=r6wg;xGy=?!KYaUPRXS%60OGiDS4!1+9OCA8_8;*?0}-XMy6~(SVrFj z8#~04{P0*V6SV=wIWGo1lU}yeeCXy)HMZ9`T3gW%-CVKQF4)mAN$JJzQ87x4m+}5U zp2gp+$qX_^ZOB7r@6F8Z-R(cNxR&#?f7W-f`YZ105F!Wa`E&T@9OAs5;|=_X1jB-Y zI7W$AmkS@it(Tu;!lY9AfLf) z}M&`soG!raCvq34tKedY2)_ZKD~dqSGM@Q|?<-270rNlGSnJ#LdE3!klb6 zGnq%8X>GQ#$UvsY%L^Q?`m>r7kV;m|p-~is69A;e_>5`FBOpp!6hJkp>7kX0Ctf6i zRWqUAWS`pCCtF@!otByToSd9IL*pFui#GbNnoaW`0@{w<(q12`2L@0~7Tf3$0F?UE z*t_iB?QPw>_or%44BESm^>y=^FK;#U^jIMfi0{(O48P2o1hj=0!=<`|DCu2|5$4g2 z#R-tfl$;ApG(Q_pCvKM-CZm|wYFVmt;P6&g^39k7dI+__xAQqPEUh9JUspcISY9- zi-LR5GbbQW&2V=en&j?AVe7|G?l10fnT0JHgr`dNZ#+9t(OWnP^*y>?q zNW2jP9Dh*-o4*j8Os&J+JQse>bngU8O7j2=kfBIb_)x~~Ys1bGN=V=5($alSPENI% z$MU-gN$IpOsFG6jmtH(<>@r=L#_)A1PN-Utc-dYWVztX#AK8@@i5~}luUw2}0~k=Z zErWopPen5dGpI$}i=+s8S^RKWnRm|9l0fjkU1I*VtQjCpzo7|s+h0o3Xw+O^`%rVg zNarDePe8@+Gm6~WE#+c{I<5Z|G+*S|59R!>z3c{RKixvjdb$T1DdHDY4&`NKZ^PTZ;GzvvHniF63E>b-drlw{!}r8?brY&PvYj7 
[GIT binary patch payload omitted — base85-encoded binary data from the patch series; not reproducible as readable text]
zhFN>Q`l0^^xg|NLMA|!+ zS`tMQrVYL050;wWT}ug4qb!OCd7T50H3~E%mP>jE6{+~H(R&$~f**SUfJ@x#bO8-^ z&+BDXOv+&Q{wI6 z7$=oY#JA>>lu920jVq?2JcwXoeMRph@D_09q0$n`e&zSGO(Ne)-ay19J}mqpqr!vi zP`P{q%b|oIrNfhp4H@;;s|q73e+Lt00U=xR+}ltQ1OOR%H7%9L)3~FX)+3=dlvA6T z5D65|&X)OUQ5fsBU*rpfkfeEAz@6Rrp}d#l`w7{aGCtMhce``I@VOU=qbIJ@xM#-) z-81m~YoFVJo@^*q| z`fhAm1u%V!MMf=wzd0%&%ubC{C{x>P+n5q4Iug@1G)$b@d6dQJzaf#Z4wH9zitR|F zu?DZPIVYZsHrlxy-0dG|JO0*bYX_s#+6q-FcQ;A`b01?0qx`SMllMcEV=*hgpY{e* zd)e2)0>U$>d=H_mDnCg@*1aG(4c`RSrZ8&f%8TM8!y!;>kNX3DH??krxqnEzTGUE{ z1aMKXZf3{K;u|AP8Qz0ed%UAjMbiG?Hu(V=KzoquX5x7`H;bk1(C{eM3_3iF{cXX+ zPbZ$dj7OWK5JghGD3OSF2EpJt+ck~GXxVo z3)||e>H=!2Z=98(v(vD>bQ1WFI=T=(iAMG{?$JqztMwU;4_@Rr^}-;B>s|$=QhEtd zR^2bo_YzzU`b}t)GanXX%(l^m+2B0=Ye4z}b;QW75p}#P@5SYCx53$FXc(|NzGyE@ zVfV!I-&M*ptO*{zobE-7ZO#e}hgq%}ZVkKos>u0+7OhGUZt-hlzKJC6tyE zLPQkn3Ua?Jq>%JaHh!z$qZm{DUBt6I#BhTJ%E}wRCxR|2-zP})x3Zh`Dyz^APb$Zs z#5Ts~kpsFPj>aK=RT?a=SUwcDm4A?|NIC3`)5PjWH)(j zBw1F)=>OXE|Neu!fx=&J_M_y&u=D@!;h!TaW+kg`BP=H&TD4z(bLHlVJL{j8eIH@^ zZ-wsvIlR0V_-m0DWQ9?=S{TF|66*l3kA`L1jzk^wKK=V}|KGc6pNzOUid*C}w??$C zrU}abe_MYJ2t2(TKrBCfN93O?|9_qge!Ipvxr`kG9e>kFFV=x#|D;9{D$Z}WA((I8yn zRRqhp-*jyoA5$kjeQ(56cQ2OVi={{VV`XK-&extzhL4C=_DJ_!A0C|qcIU<@WE0+k z%trH2En0sOCdzx2k#3D?jI+R9X6{^-*tr%Bapm2}MH=z8kkr8v$=IELRsuT};JbI1lgQB@D(l;c$u8_smLsc;e1rV4Z!)KX3!$ zb{RXufxv5zO;D4Jf!B%eQ%$|vS@V=#1fGt6))-KC!=W>#>wS0_7_yNips`$^8zx%c zaktUh=yGn6m`=ZZ=uypd%dV^Btun6C`YcJj`+C3Q`@iQVLxTtmnr8>?v=H^xUvCVe z+qf&?=1`ReH+>-CiH{E}2pmjs?x{vE9>Cr2Vtv8D&xIxk7=7AP%ea~0! zn;>2T@kpD-&Ac@D!P@U|Uc=d;IMhrrsyq76vf?8T#8-?z zt9jSTu^0y};=A5W(x)@o`08itbjwlNUh@Y=s?g3IJNKsFK87x%uNH5diYk*08K_dT zZ`_s5vSe=bKu1H8_QD*WKCtksj9gx}U0+7BJZ|XrxWH5Ki!FP;dt3-b{9J>%o5o|) z&9>JL#L=mmKx|`b?M`Js&*BX=Oql@39&;?FEFZ6rQSGph9-faSwEj(*RCH8kwy;X& z{?rFi?*7-x`S3M%7*FFxG&y?cYwilZaL0?8cJoAAtVZz|!Aa5LT7nVS{z!D*rfL^| z*!W2_i+g@`mz(PTA6!iE9g=4=H-k-hy&U>D^PlUqq0K%9(gW5!{3o)R-Ut14Q+RL0 zk5xc#1K|`lhFn@K#!`PV`(ei6hLNR$3Vv%`Mh$al=(kDhUxh1wZgd{Iv~~5b$R`ZF zHggweE#QkWU1k2b?MXgnF34GgolExaR9&kpIV#cOkA@%`?lx!jpu^j_EHmy0^+M_i+E+x(TjecOS~AS zNbq#0ZrQJkw*1F14?p-6=16vp*9aNBxx)V@C9R>ZH2Z%hW{x+HtC;B=?(UsiuoDD> zy*?%REdun(v-8PO9U6>0(_!@PjmY8*?NR4u35Zo~<>nj&`%#O^SyYFL%vUEx=!doFO)dqW=dftDC*FU(!y!YyW>TEysuGvXV^+S*h} zzFs^e6Y;@|zV@#0J|Owe`G~m9pe*=;gg3pQ%t@hoyyGt^Rvi=3v##v z+ETGfG4R>)?Ur16XPF!6X7iud`}pG#^D3*DPZv|(L}Clv&m(9e=(R+DF3Ps`F(4&-|AEultH|{1Hc%(3~UH zU2DblbrMNCywIS$a=Zt?y3eii{QSA4ZB0u$Z!e;XLu!?Lkri~&^PssV`6?TPD-gR1 z8cki_shzhI7JmdE=pZC&yBM_BvSRafXoEvT@IngrLTX#*jQDHYy2aZ40daIW2oLU> zr4InCQu}=XzK73|a0_HC|2P&`5#?L#jYojTxbrMnKxNi~oegzId8HBdSOE zT!JR{D?3bMS2fufbs*)bVy;md#;lqIgc&8psW+f{`Omf?>1#9o>&czGjdqtAUnhQn zzjlw14WjP)H>kA&*Ak?{%nrlU4#gW2u;Qoiv!fIEv`6wl)2qxHPC`lvhJF{A#7dkgGj>$kId6UEwHiCR5){*U|3a@kH;={(8QB}oKJdh4 z>k?OaQswBhmC##Lx%DH;vYyT+who*(w6G0{zC=d$b*|r7%3E+`<12Q+Nr6$G-gr%|B~ zy+9*X;jGR|a8O53UF%6n5pmV*-I|hnOgEyz`1U~CYkMCgL%*Bfa;)c;Cj0{-irS<7 zc4L%37HO9HXS!y@`Y2BO7P07`&5V~I9-#KxDl zx3<#yfiw3byQ0!F179KaR*434=A^iiRIj=SmQ}`OI1X#E*hN>uB(A+=TC;dM@0w!U zs5{GJAy73q=I!$Mc*z;9e?IeFLW*W5fj3llFw8l|eg3OzwCl@MN7kS4Ny0Up$`|3wTXV8@ z1L-YxQkh|T;Vt&IoQXX#5D6!;!1e~f?-M1DJK|aY*sAn)ymYt((H_-F2C=QS9s92A zwMxt0gf+QXVJ;#D-S!ZefQDI;a1V1o^8loaY0nC6B%q|+HJVX%R;iG z^^SafmR!`=RWv}O-h8xAZi z&C>qN+&a&iZY4j&U&JPqFxZk>`m4>)eExS18q$YA?>`l0Pz|`0+f_`UvrFiM^yclT z?|;n=U7D@P`k#d3=MZKua^TgaV>~z(kz0mPUl;E}R3!CIep9Q#3R;hl`=7*0_lA|% zy+2QIWM6w>z6kc-(_9LH*zl*S_)(fD&>>fHd3~*tbObq*f6<$H@6>2>JQ&K*9P*`U zQX1~I0kH7*$fCvNPvO^_@O~}vy=rp4FXv6eBYc6$(3YcJR`D<l zM;#+l8yUfU-fPfU3w(Iq^;Cnd^=cJ#l_|D=?w~FZ=tZY1o%x>W?`wV$TZ7t3b-iF^ zN++XL-|Nd|qb&_e#?X!%4eZCiI6D(Zm_palGS?RZx<$^Mv6$jg@UCMNS7PUUQxv6X 
z^BZ6+ug@xlSxD9%bg1w4AHW2dpgY6lttxC^oC+_ezt|wOfUOQBA}Ixg#HoSg0)G8^ zOB%--o<1GP>3mErWf%Bdq<$k74HXJaw1xG-ByZQ+PxMME2Zcn@XO;LhhSSXypeHod zx$70t8z9JR(|)QXNtVDa?5<+(xBJ1LhzHiI2cKhvl^6*8_E`3#!L9=<>itTIHo9R8wF|MQ*K$dFGf zjDPafM(p_5E8pN!*#Ly_Y6^w{+Wyt=%GShs9sd(o%6Q3uihQlXZIHJ$eGNQ5B5%L8AB_?xTFq!FrQ{vAPxeT@jIOq(=qD&QGLvYA3klY@Ql?z7kYr zam;kOrS`^qjJH>?*_v+YAj3!6EkCsaB<;0&$@ai7{+(7*a@j;mwgsxD%e8KUr{O%d zLtyL0xU~a@Y%VBsSDAY8Eb|qqJ_GfxYnk^$2D#Rp3y;^Wnc0;H-$2?S8DCnW&`GvN z`RchAd}6hd9MNv1uxu?tH9a$xlDg-W(B!7>^P8sM^yj{CpdfSc$Ha5`320J|O3;n2 z=UbKe3)2n9AFAz8LYh|NpGElblrPHMgi-|`S8drBrIItj7dMiWCHS^t395&K2t82EhYv`r@3Br!d!?C@3blO9l zBi0Mx-&#H#;As0wTiT(WwRvR&DGM#>V)yy^G9#$MZ{#W2$M~2(#)U2EtI93VKV57d z`~ssvoQe(_KjH9SfHzK26~Du@btb=JOJ4pq2zuBZv#`M$mU91k`5L$Uis#UhE43GK zM``P1KwLg4^JQC{flUxGN9>W3Oi!}O9(bur!00G5?Km0oiy197O6^(Q|1C4y4WskW zW-EsY7Yf~!6t|?)h1njffJW)8gWpC|iX3ah`?hRU{TIYrPh=v+rK=bdq!1?{mue+R z{7b?+?#hNZ(pkC`dI=PFbsR7zr|`h-EMM7n1krpz7BXSV)VIrk=76njzEQtnv6`Xy zkh}RKcd}pFr7|8KdLP2z`VN2jq{fW|)fvORHbH23=kFv;_19FNcZMydE>pS|5A$z* z2XcV^7BY8gj@2+6c?Dv&7@}zyXMn_b&yldMos@l8$`oFHXW$l%9hbprTFFQt#7_XC z>k8*ZDbIbSRQ(P|wXn{WE$+T}c7ZPc?eZ80HsoF+*#g;biSA`J(Zxb&6!3Kjz%BQ1$`Tck~(JV5ktwJt9SdHsL-xJF$p6Q?uqgEbszRV6>vJ^xT`c&On4?ukF77+!J|)VBqC!#4syKxBsI zJH`#=GOZnp>NIdr8;ID^TQD{61uHLquB(^@#9&aOS8qJD+0ms(MHO81Ue~?(((9zh z5oR7h(c05*b^ZUTQM|o}|E^irKTeDjJBKP!eVu&7WzNn53s;IWG0RdHxRn&=Yn#x) z@!M{)J!e1+LU6%w(8oU(T5FSSB48?%C~G@gl{LWkai4!kWr|%qGEWL>7-|M|#Imhq zP%_Ejr#=)5tfC`1{Qo-(Kw!7HCkaZ=LmH84%0;{dN>CZlQo*4BDs&quIQb0sID9|F-at0C3%V#YAzgaIMsQ#=HPH|L(B^kC_~(3 zoX@I-rq9IX+}()2%`y8d*-!C|ev9wxkzT@G5%H;E_O$+CGP;p$Maqwn9?&@@v(4>0N1hCbo z%P4HepNI5`4K@(gg6=p0erxMOvNH<7+@qa!cyoi0?*gt&P94;Yy+*2hBF}iEErG)8 zr`iW&BK6~fxH&r}Nb>0?Qqq@3#r)(VM~tJ2VKi1MNU|{wF^WdVt^J;k3>d-$)c^PN zO>|*rm(Z{qb17X34N2t!MAPf8CLHlv-dmbbB(+Im=TO6&ipgLPOl;U$&$Br>(TdAE zsF7bWIq0D1zNI-)%Ke<*?F!9dOP*bgwdGCwmQA6^ma>2Ua_Yo-7UCbAP zkc<9O8fPY)y12k!J^bo=xb>&73UH2L$6K-Ay#ULl$OVd+qAlu{g>6v0@Y2fU#WHSj z`JuFXdgZ!*;F-_o3EvK4k7VzNOgC0bNhU157FpxuQomd_spyW)iHn)`$gyz{D!UWg z1Ca{KtuSPM8_?gZ2;T^9b~V_Vc89tmG=@jYI~7;H;UI>Iy!2JooPn~~x_=m7b2(#2 z!I9^}HO<>>gg_64j9w_rc1%5Pfemtk3(xnfX!@h>pERQSiCr?3Gi<#c8Ofy(L6ZDM zEELM8*FZ1D-qO0tQzoIBUOSroPBI8gYwjdeCpGg51DjDCEZF8sm7 zIsG%Za}>o*mJxAsgzCS44RjNt7fa;|ZGrTWTk@$*M#z{EG26s^2T;uA@O+gw|7hXC z&j-*sJl6J5UMIdHQJ|8dQ>XH5#W5j2|%VipP&Fy-DjO~O#`H25?DJ&9g}nX=JDp}Jp@abaj~Qjb{oJd zr5RNyf$U-%j92ma*nleZq15zNZ(kuR699*+&17k-cvG(amJ{EK(x1GmO%>_KO^sGG zVxd#0o16_+xb*_Jyvr2WnCa$a7q1t6?-ck!Z@9O5sLckozq!X(ogJ(R@VrCt<1A^| z$K27O+WzSZjgOeoP5n+8J|^O_-+e0)UxAKi^{sd%-o$b&jQI<#dhNQ)b*?2v%mww{ z;*}dZgOBXEg)iR8OLdpv{6ee(p#u#SU7wywD0u7(H<3&I8ma`iX5Y`5dZH-9v#SNY5`Q<~%^vG`raQctTpy_0E3M@@j}k z%2U@uR*V(+v40?Sp#089-f~hRQdmN(6)Yp1RYmFu55FX8zjIo4)IU^dnR`9GsjvJ` zJ4owh+=JFHk#cP6Em*qJ0NbYTp$$Dsho7gJ+9K-*4Qba))1Q#%^Y>RIXtZn3`Ps2f zZF#FeUoO23zAv!-C!7|CW(4RO!pLg!@t1uwrcYO*U>#&T(`yn~EAzWmT!CM$yvI+YsC zu~_x{=zO|-E7`rccl6?WDly3eiM~-{L>(4ad^z2DQURjxTS@wrZ`8-Wu2N$ZYC+o;WdYkQqQB`G)v0AX%LkV}4#G zC73Km$o19UwV^4bX3Ci=-#`*0FT=@|#M|?NnYWQ?_INuUbc%`d>ZNHb4^}-k^oFX( z6gtH=YxQjU?+2%N2UYa_k*KrA*R-)tYex^CRrFjnQI=+DLPXG~RDVdG-+5(1 zOY#0a4jr5g>2l|NvCu#_Z{bc4(_%yLTt^%MF!hhyJxZaxPcS09Q7lo?=F_I0y7$#| z_M3F2Ca(AiXWuI{2vk?tB;lxsInHre{%ctnY6ZV>_MIS6FTbIG`-D}@*oGe76SCOebi$SQvs+)rAUzE5LHU*F@iyz6>4!4&CQ{y= z9Pi`kcP)PdrcTr)x=C+CsXl~MO-_VWim?L?a9`zyHbZqCepcBYXephDV$r%==|UvA%f9K^9f|$5FFypS!lkyNOt^W&go^S#H0oNd zL*DtVjXEv6%EXkJrdBe1{Nld0K)28yj=AJSzP)bU&h!d9yt}B$tI|uus1|ON_G0;6 zKtT3GjwR{9zwkVSGqcWTf4RqjF<_q3L1JK?8LJ`>e#}oVGT#W<-+=m3E3taZTR?bM z3B^mzxV@d<3|#UXqgq?FGgLCb`5YN=De{VW#S&C%D}_E-+8`$V-bwEhcBs}!Y7KZr 
zf+Um^_Qm4s0_5wHO5nhWKLdPM6S(&b@ybq&&2fXSE}R!YX;B_bZvxg=yrGQhuq*!n z&PvVU8DHbO^rq>dyBP8vXu@xJP4LPK_fl`4Lcv983lDs7OTQY6ouJSN*qs<^G%=0#b!tur9*+v&YOHuU|UM*s7h=F)S^ z5)RKx7=3NG>OJ=R^J8p##g3-*cPIgR{`9XY&u^vxbOG=zYF=vZ6s>HeyT1T0k>Yhy zYTI@2!}(*UGVq62KkkNObcX~B1w~4{X0-ZcEtoy%K)hOa0K&*Q`VZs48jvc<3wF#! zUsvM$&}_DH{z$q=5`A+ee)iD50#dkTWRY{h!t=9{Tna}5`a!-AFeUF5T@JCA?363} zd`3;ua9(wTsH053IH~i>k`%}8B(yAFpD)3fWC7HwNRCnmljaA>?fH@Itwm6R*}9#| zppa*A^u%oQ>bcVVI8(P(c>p((b_k=j)R6W!t-V&$YDRNLut+2WXpBUPA8S1~v>5rt@JLpJg)%a!NY1HZG6%!|el=mqk__ zPLDE5z_*6P>PxrT2neS$D_vVt&-$BWN^LwQ^48#nOSsdZhk7XaBCyl14FgqWgL7w< zmhwp%%!RSn$i_p84i?Xq7ot$k(i6}v%X%&buT_@m@@S`>#G^(v&len0jR~I>8(*nU zwj{*Xh8~sVoOzP_c1YUp$*42L4p+dtr7tS=e%91lb?02Mz|laXHNntE-oh|S zo)(*~AxSSuwd?p{`&XNrSK10rz)uMl^z%g2o5=?Q@QQf`%*kE&f*xs2k3)&vwIf(0Q%A375C76XMWraGa~xieu*q?}rW_P;xEs~aTjqmwI0g1t9UW|krUTs!-eW=Ui3GN_N&v)~^xYn)|`mBelTa;eP@wfX+6kdAsb;@!9^JVO-Na}irRy;F7s6PDonErcnU zuW#-4_!&1LMXS zd!=6Lp-pEPdY(&)2^Y#a<5O#UJUvo;@C8iVzh4G+wH{}iWs{)~QN(U*_cg(dv%O1O zvJ$l`_ff(!o(J3Yapcms9voI~TV#Rnhh%kLEe<HyKs#`UJ#sFwOmbWsOR9+Y{?d>WnXH*`I(a6&%`v>XDuxnb zYeV}TKh_(j{|IC-qC@}ndr{KMVl^V$IaAy-eCV9BVnsowc8 zFGdR<60Flg-H_}JR z561x4-sP1R4GHf0rR9VD5rXLz0uI^5D?1eU`A%{c8j>c)z?x69psJA)PZjCij{S_$ zZ><$N4k2pfA7}3(vjZ?84QyD7?||WB`lcahQcT^c#q{efogdj?VbMFUS6OigPoSNe zV{H=kWRhb7<>(ei^~N{9SNX@cY=M2sQyMO8)hOT7D!4&8%BF3b1>-`FBHwyy(mI<*j z@0CVVETj`|Wt)>2?=oHSz1|pVl%9&%wiGE;s%uNe{-+yxSSanM_V>zzb9p0#9bfa? z6Rb|iMWHPxM<^dxg|St^A9xeouL7a`$& z(E_fruOi}?;}VZ7=H2_(xya)ztbwHwrAE<9 z`}wJiUrZpM@YS~l&v=o|lnuS43C0#sLDBM-rNAv&VoV?!Oj~@uB8S9zn$-fJ zV5fc063UBAk^q)$; z-NoQ2EZ;~p3JRacyx-|T*aa@VaxElL0p%Wh`ssX)8+UtL0+xAYlFLn5o)7{Lj775T z)e0}PHP{gTowI=?KUIjl`>I+_-5Ctmujaa2(&!dK4H-zcS60_dE8JOfh*8l-6_`64 z$@}pT?>h9{U_-oN*nGM8)PlrZkUD?#w!+tOJi`{I;-@9&QSQ7)GCw4E2L>f=SI(ur zT6Zd*>x`JQtzFV>67Li?B{87Eoy2Bv9_;O>>o0G<@3F*k^3ZgH;ew^@`D*l%HlDa& zxw>yn@3w@`IF(sET_$dQgrs|9kp74^HDy_{Dg?e;!k?`YbsVe`kXttXT)5@Qk-+?~ zvd;S0qgkt0YwRCV2erqygEv~*U3!`wr@jshdx>NfqgF)N1o{WE^bAoSeJTHBVAwL- zPd$)8phUVzZsc4+{gqw0^6sw~Pz+p1)t{Cp<<;OzW6awV9BZ2HbVdeg^!`P2I3(F$ z+1<|?`eA6oUWH$aXI1cH67vlieN2``L&<8gy)b4fQ{|dmrx^WA`}H^NAK#vKk%(d} zwOyT5F%+l~xM-JP%)QZPn!qY1-*wO08_+I3FMScA6Dk;V)*3Fy7C15eRe4eCWV8i* zV7?`!Fzq;K5f`r6#ysrm!pB<~iS(<~z8@^HQ`**ZlqbGxxk@G`{J~xtSJE&C&$v0A z3%ZPA7(v+|<%UZMD=`$a%vzLhigfWZ7iPLYc~N1D#pigN(sYqBEZ1K!M%b^C`mQq0 z}Z)iKnePr;Z1-K!JVRE2=%BZ<+9@#8dn>;(lJyRA_P zXEL{5xlU>soKp(&C;fTQ>8+BGV;}?Jlc~66MkAxXU}bA1$7eps8b&u zMf7uwvt&-By5hvt?Zg6yT^}^puI?X)q^_on8t#6vGDOESUQ%{^Uu3vMJ#aB%cnO?0 z-h_NuL@tadJC>};3cp;lL>;SOoUXpog?Q|kpt+1kSeE7T{g~5>uGM`~vM(@f8!Syu-k?YHe zrp99?pCoJHOJwm*t_I96$>!Iak{6L~_Q`5@UhFCj%6gYJJ0x^|3s?kiLgNgcIrmY- zIDERbXo8*zvu4?CyG`dxQhLK%%w~rt`kpu1J*^%ZYdUyxdRe6XJ%TIMvOHF8;la-# zwM}-h#fLYs6Jj6to6fg(*Q^Uh_BU(l_t6zax^UrMs|T;Z;Bh;VlDcR0&x z6jR;Zy25Q47!8qmJyYDRSvmct=lqCp_f(aho<)HyG_pzjIyO%%hjc_@zCFNRZWCUz2yRpHo&0H+BS$`#ayY%4{G=oLwGKCTTnjlweJ&y()?wjWpeJd31o{aN zyLb%sLKXRCGagFzg6f3teV*~bQ%*jVgln>s^G@pJR-Y~rkIwgxtbq2Wh}6d)H`3aZ z7NbAi-+?thDUg)NzrnJ{q0eKz*T_r21SRCgVDdC!t7t}lnEJFc^%i6r74=fEJbuEd zuET-Oyt@bY2Vr9A%@3TY=e$2AtQG7YiU2T{oyBt&znq*;1d?oW-oiOpgrg zKc-_#r%-LFseQf}nU)_uOL}_wP@Zb;(REJo&(0H%CXo`++u-H!pss+E_11{URHSmC z^VL*)Z*R!_nD4}aH{xSH;oF;F-`@y0$T>vZP=RH%iz*CtsccanDZgoQlYN z^T=KLZnILr@gdA?hu}d#!9_3%ID2kBV;nO?Vlm6o_4{oYfaS@6iLsgfC zO-1JdQ4a6N*h*fyUbY`7HQ%$9QJ^}aHXk_uYGBd)zU@~)XSiE48Yto}aA6Jmq(=$R zn}(Am41S&AkD^|MA(tRHrp@-aiLZPEGs0c-!*@_q=>#jkx24k<2w+-7FVjLOG%jDQ zG1-)b%lM1%zQ-BA4%Tk4NN?C1^AJ<&aRQX-2DY-y&F_wPT7xch7oVmR;AN9o6V+^Y z`A!o{bW77H8iz$xsu}Syj+nUy9gx^=Dd@IvoFr=Vj*obFyfy+`Gc*Cbf4F{_iqVgg 
zQk-Jj@|uHuX~F*e8jmI|o62-4k-s?e?HYxNJdFH(Bp%eq z@Q?o5CbEVX%26wd_g0n~KR{uf9}WaR3p4F8b(@d1p}tvC(>+s(>T!&>4rqg1DQuaR zCPv$wRlIt(u%@)7CK2=fGOo*1vU}V7wK*#H4q5*EvSvTenGnRjwWOh^g+Y`~KPD$@ zy%eaT^VPVAGMx>zG0AkSltk2Z`}2huxhtgcJzGR0xkpzh21$_$hIYWzC|w+FLIMc` zLdPs|tR;;MawQUQp~zU%T_&dI7x~bksPyDCr%vpw?r->Mo1mnLi!IN$!ty~e!2;Hi zT*ptv;oqk@2bZ=lBeq93-=p)IS+~tX3lpGP3fH76_HQ3Rmq{phA$by+MV(BNo;_vs zdXhCUnh~C7bmpPC%{XaoSq{TnnmwYd;qp4^?!zBK80W61Cz`a*zm^f=(b6wU1!;k1?i)-lEyz&!Ve<|4`LR4M@as)TCyChn>uSB&G0(a;M8s4I z8)QMFTrDVb5eF;boR{#h+pHqB?fT2o#VktkLx=K}pg5i>@fPCc!k0R`+45@-#4tu% zd~6>W;|YyY`IVt?oztt;cyIO^vEuLx*GR_BYFB%X6i0mQu@t>t3BoQs=DJ~ax|g&Z zMS`7qy#;@9l{NYD2}3hdn&Au=oxky*8# zpKAA<$nqA z0YN*Q`G{lsD52N$ao#un4LFk5@YgF%7aVqb^JnvCk*20`SMG}SPNV2TWtXHuo}X^% z$Meb*(NIV%MVHe@;y}R%s(SBIH5wWd4n5*c50BIH3@!z`6ns)+j`}4_u1+r4Zo}jb zgSDcTE!Hk-OnW@$*Ui5wZwca(u5Tc*N9g)K<{TL;J{lGK+U1_p6wNcWm38U-sT?Nk zJ$5E7>Y>3YKXAgx}Hz&6HdfXnR8(rV?)pqp}e%Lih zenBQm@POMeDJCLhm&hMmS(_JQV7gG*Gvdpm%1)KU?&W7>vV$s>3P*9jo_@X_p&}V> z>=-3#ByGV$w$zvjBth|i(y@_JqMXKu)7#T!aYB2>I2I>%-^M6E`_k@}hN|4i((TRhJ!Rm$^)@8Z~Rv_9>B2OS%-=tF78B@ZDI`PM~y?Xzx;=7fHYPlBPC7{VC7L zmtx_@TV|r?bbWDjVj%r;bD95AGOFbEMjzs9V3#&_mMdQ3@Km%Z^DtjD;HAeW$$`+G zk1-BREN1#@tn0P4^R^%ngM=gNd_wZxC|c3J)Pb%UjfsVg-+k0L2y`6mr{fTZZr{rH z#PaaB-W+&3&#uO`lD-dKk9{*%_B%FZyNxAea<|B_{4O0{!_%KgNmq+E3)xWivZFoW z$nLAWW@k(vV=Ksoza3OHtqEtmvdHBqac>aH;S9V|B$Wmu_XsAEv%Y+V~Gb9*mz}}xX z^{YG+HaJ~jLOa+wg{_oU^cXv!fDpi8)rbt)xggjS3TT?2otfDgiJR|Keg+jZk8mO~ zZ5BLkSNG4WKKptHJU!mW7GYmFl>P-P6mUm8Y!Hcql`(+ollC9kpV0XIPaS6DKv1Li z>|dY$LOK@E4HoXaJh0IIeHQ))l=3f#;3N&Aml#{4u;}^!x zXmUmtP$4_W6|?jwp4xu3F_@SMZ9>50W}HV<>U+7G8tUP#_b2pp*>e}H%Q}eX7*ni= z4G{P(h6Mfc`PYNwiqmHQnpnh>Sn&gE-#(J~?sCXIiW?Ha=h&T-;GAISxOs0kA{zkT zImn&?0xFg_c*C*!whafJvQgTG9%c^gdF)kM6$a}f6Po~4f0qjaqE$DdDq4hqSxwYd zH|9hNOn8_$=*MKAen2+rLB8JXGL?-PayU6&tw2oe&vAz2ce!5iCG)~_0W z_r(1Z{JPDD*p(7WcH(PuJ#Vy_gjttAi6H<@)d0xCRq)5ANUXiOTbXu0_C^uTd6V?%sI7`Uw zNw=DB`4u2$G^Mg-^;qrjv&*>doQkbu%<#S{LTwDu|E+vjy!rhvEaE?(2|K_ACBUWk z(om4wQoqmyRp=*x@H5agSb4)?)Yx!!0QD0a7yby*^ak3z%dLBFH{>Pj`tN}VN=rT8 zjhzF!Li%TI)VKE5GocxcGjn@d6xB}s!g*IqB#k<$tGX1sw8=^iYBQ)NzrkM#2Cl!= zxB4o@(=7MRnec?ux*;>fYR~9I7E$iw-oJ zt1TY3+#w)i4(*ahNdwN#Z=MN$a+@kQGwbS5^F55&+Xm2vqn^wlc%+?o)>V&oqsAqR zvi|*9tg-5i&c}$T?B7ATR%nP##l~H&_zqQw2>~4qqqhj>&65Th>wii=vd}wZpGoW* z(i=Iep7KoD!-u**bwg;6T>;R?m#K4>+bjAv!2Gc9>wBH)drc5Q!m$vL*K+ZBSNi~R zvHnjD!x@Dfo7=CaO(2wtr5I~WY3{Gxx;wV}>-TlX^7ITL!hqgeT{<9V^m&3c^wafm z*T6-gLK3d`BFY>B0kQ`mQ+=wpi|ga(m@9@YFaCi8|ECEVr$ksG00TMJk0Gz;7%`Hx zjOWxX$*HEov^sZG(fad48QQF(uz6OlW_q5rkMdltppoyzTb7uH%>8!xdu@zDGf4anp8N5(bOYA(N}b7@` zMf3%NWr%_-4^iEzV}KTfkgFoepM(lFn7ARdp1g&207@x?DZ`x&kA=nc_x6cm72kTF z1N}uILFNTD>4$r9ymRUG9L4n=Jv$O`x0|Bk1)ItPxw(y^!@VB1Vs=%z1>NT~-uAz;pY7_LEDLy32Q!IWz3l)fa*mDGx9`8|2eq1DCe_k+ z0LA-~iojgIeUz}M{c6sTf9f8j)ibJe+<&B(~iqk1AVPXG!>OQ*wQXquAY-K6FE=enZerk5C@ z(J7P!mTm@7szCZPv3KjdnXX9HFpSR+WUO+Kt@4fT{g;GwHeNT`yLonwzIO$ zR3gx~%d5>TVs#vKw+}!*U%G!iT3#1yx=^|BZML!VngOKK>J6%QU1zGGNaHmBVUM=S z5H>aq!#aOz)!IETaoJR_(Jez*n!*SNj|qk_HS3g1k8nacS{h^fFZ_UfWYmF17%R;o zEDc~4b*<9$12Ai>)VRw#=U>i)sE#w1o;mi6((hIbNQ9(0@fS8CG*OsUe|WhpfiGw> zxGL{!*+;=^y(=l_s*x_EggxtbtMtyIA!6A=aEiw^RjtJ~h=&MwBF$5f0!g}m|8b*a zbg5B&wO8*}zf-Q{P-sOCzA93rsM z^$6e{+viq*ZZ;pVh&NgyMXBzpJo31Eh?P-~0PDyWZ>?ZiwTS~Q`gGMcRa3c@N|bMu zi_fD6s&VE4Wx8>8;%Rg$cVh=C*z@XL1hqs%vBu)J`$(AXmlhe#_k3W5n9~-7=;Gya zBL$Dt+*=7TLsyo1Pa0p+d_|)dS@Obi_L!a6v2(3RiOaZ0jLP~mz4$oha`E)5y!bb7 z3*5Eh?pvS<*=CAyKgdf$>x22^GiRcBU@Ojr<4^*P_-IN4#xhap6!ucWOm1S&R#s;) zwn9QA{JAQSyaVqRUMaN~D2jK|8tuqmek1G#9a&muyW5HpQqpDM5G=k_ zBkK+;9(%Sr`s<(q*$Yr}YY>9u0R&Yq8MLe~)@!H$fOZE`SQc9wf;vGO9g2XXhqsYR 
z6n#Pb1gGjJZwfJ0`uVlBHAQkEZ!Y(i`(ZH~ja3)~Lh@bsPzS@Y&19LcJ}#dYx=t4R zbP4emtv^4;R8MstRwyu@;WF1d1#r9@1PuY`7Y@@x-pJJMckN6P&o{nN`%=hXJJ=s_ zO4LrSsmupTA$(|f8&|pjGrD9$4B9Ju0`8A1gw7jG9;-5XhL^cMsm?aLOc$#jON8k< zfwB~S&YNq1mt3YZKb(;8Ot=(Ycq6;-eUm29rcOSTLH`EZ59y=tTyflLHXHpy_8SHR zZY4?oOkR68qZ{NmPV@$_6)(n*d=zMpQ8kPcv*+HdPTiUBi-C4nzuigJr_2Wo(gqI} zWYBzmbbKfiRyi2qPa})Jy>RbZ{Q*2LkcNg86%~p083sda@VpGhj}W|JFzWEt-fdHQ z?fy=^m(#*bb+_$S(#q}+x4ph3zJ>htzNFegSi4UuOzi@X>NX=NOaCiGsP!Yd7m4@@ zdR^>Pn%5qggOmySTQ~-l{s5rn`t`a@Xp_xSBTqds)1KaF6x%JzpMDW6g#3>`y-di( zARUR`qK+Cli$y-C!l>7tQO2FDY=LM6(-ff2%`1S$Pv?A@`ff6sGY~ECsa7-{Xuo_P zUtLG{?c$b+ZsHn-!DS_yiN3cQA!B)Ng)B=vF_tgDrsg#Ecejd90<*?Q)KQHc@LTgujeDp zWbH7D2H%ua?*=g8(#uCh)Y$yk9ft)ktG-wK@dtJ2&;UF|uy{Wr;R6lU|%SgE8>2}akY z??8kzTOeu=6VrP)(0_m7QO&%O{%oN5%_U2NwZwMQX|KDRGx?SWeDLFTDkMA;Oq2VW zShTLsDV?9Q!)1Ox!IQ;nr7fiWIk{!Vf`sxAya`SN`-wFj&5N4sejkqt!whx|rlhFE zchnJd@Vn(#f+CQy%8<#w$-ZumzN9smh z#J;j7)$WBsUwR(`JqW$MWORKCou z%H$)eR>^S+W-PFkS(ppD^nUGn8hN#1Dev{|)qd9_7HZ$%E{!1XCxm=%sMc9ejOTMG z_tn`C+5$l)4XN~En0R^@O@NHp!w2-)@)v$*^$U@$m)d~#;=MY#m}C5QY}}0iw3EaE z-144&R*}_2*HD&&{FJG)&C*#-C{W0pet+NQz55_YK2`;Qj}FjB{m4;AVk?-@6;TnF zA(Tatgm=9PWjih{e)5$|yE+5+UaaU0-y|`z$`%$T6g%gA)?BR9;*&XmtM4UeLz5FK z1(JL-zWrZ+bX@W6(VENk1eAjLf(f;_@OO5wK*5Tr%JV!h`5IUl$R9p59d)cItd)wt z0dBKfi%5Eo*P($FW9I~qqKT9S1w>yXhz_s=NgZ?}nIsb%QdqustcT(K3~*t_!>joz zDJ>C3pnUovVZB}VI22-SSUx~3xjP{4pd^ecq-CpX)A`)DaOYD3Kcyais2#;V-ZkDA z7-eSS&EnfATkXSFii7kvPMei|T>Gyqg_a27FXd~09CVToA%i4+S5N{He%!{AtR3rj z=Lxr({0cY>f{OuSpz1jLEB6^dE`e^A<}Q>0X)h0VuAFt|vnpwQHV#mfEc~f0@zZ56 zuq5OGU*C!#5sj={tULWe1eV!(!=k=pVxxaqUSkAaffg9bma8uWXc;e-GLq9@66s> zO4F-A+`%{2y8s82SQbC+9dUoK_I1T^gsqhU3@VmHhrYvO31Mr5Vi z?*0_xjZ0{Q&c*rxm`Y{rq``e}{r{S7z=D_V(5m~!*Y6x`Q@ngH^=gs%ajiV_YlbU; zx*?)qHCit5joFCuvDew&oH1QPHd?E*mP%5EX=kL^N>EG(F@ygrpyTy~j| zNi#8;f04$K@>DJnXfRJ1zP>1XKSecxXx};M0O!>;d|-H4{FWIz<9?N~3Ctti^*oDR z6o?bE=t7Ac?VsYCU_OdPN&N+iVn%PSCP3efcU}CL;G1LlT)Wg6lp-d=@vCIy&n;ph zj;fC|=sC$r)C1-PQ(DrZ(|$A>$0z3V5Isn1>k*>CCVZ!L`Jv5nXUAip4(`?|!r zvC(GxBBh22bGI^j zt>WvXD2AUU-Z+7&U_2%3_QJ|-Q`hbaVj>>Cqh7&g5AK5-awrO5i|^6Uv@qQkeB5S31HU7i1vcLS8g&A!=)uzR{`b;;+`j zyr-cUqG6y@h!Z#sgBtJkYbZ@kCJvH}v5B3}Y06Qeq7$doIKJCy;Ts03OWjOVBn*l2 zTMuWj%q7QWM)wp5Y`Q`gfRk1Zp?Hs0A1jfKEFozm|JQ*>g6ucr$`h8N{|sGz&q^f@a@?;|JtmvH=~gf| zkG%iJSb?_K_w<-6;e6j+m+`1pX+uZGr-}B+*jER;t`d(Ev?xfiXWUdg!5NI#aH>_5 zw{3$gt2z#E$3DAz8)Ctbsh+WM{m^h$yr3T}i$kgdzX5o(CLUra!{tQ4?TcpXXt7@= zDDEsZ^JJ&tAfMvZ!?z7O^VojF{F23?Oo?SL+xnRnv;CBZN1UHQc}w@Zj&LL*a#`ee z$Y`Mhy_th$&aW5ysOJM%Dm|^dz?=Mnc5=L*1^2oquBR`ampyqnYOs|N(gqz6_yTG+ zafNn(4qmmIu)OZMcX~qKzHaiw^V(frd=0WpCac`!d-FQwSr>zt9$IPDB9+NP@H@B+ zth8;y^{2|fp`2K(#G;f)ZXXNqh73m8cFC2{r_r^TxSFmV!(l(Zig}3rU&jOZp8#3; z46_77lFzu%sa9~}VEbc^_d2{_!osve51zd#FCRc3v*V?D@Ppu`AY2I{Oueb|V0rg0 zRC3xCcpENM8iTF;hu{t+2OgfiqVSTbqOyDN{l0#l{7zhxdryJ}@@Y5NzsX2-v{SaE zJK+Ffl*Q7B>rRsV`$xMRRg-jkV{4jz-VaL{C?+R;9dTw+m?WCye1FbZ_&mfj!9%ZN zRu__`LSL8-!k3=NO|hNMcc@>-07(6^A3|xmwXn5{Gb8QQ<|pR^9MPPdHv(8T z>8uYUJ-28Ypk@P_Z4pe0<2nDM^sW_aa4+;@`>W1E1$Fg7+w(qJ*yPnhliyupiFG{^eNQxR#x14a^#Kf^ zAAA2MKaaVec}4^05-u~>DiV0~uoSh-6~#tkqDE+!@w33SCP_E0_jA8z1E2r&cZcDK z5N@HItyehk5utl!k=#m^!2s`3FBGDoy`= zDFuH#pH4FKr!c&h;L^gdWP9_?DIyUtL85+z2o&TE=TDt?8ZYba(1fm5XI> znk^hel(tue#;8dMAMeeHrUJyo*z4pp?DP@k*vOeghAPHNdFGbcC8udt<1mQ-8L%86 zNGSK(374Kr(Fg?zwoW5z2Z{2npNv#LRkxkNWysoyZdFA#vzZ5DsNsw3D6s;CeG*4~ zeE5>0A8rBOD5xS)DHCTaxiooHLz(CG_0d2V7YOL6!+Bl{QYp=tUBqrltvsX<3m6OM zY9u|K_u98G8`i6#it2wUjgT0de7Tx(_lH@R-p#s~Dl`)-*j81E;;~=Ggwa!YCNOwHO15QhM z?+0Q4JDK$h4LBP17LiN5Te}RB4Cw>5&EscxK&d}M+S3jeYOFHTGBnRDSW2rZl9mHa 
z|4#lul$qQ@vQnEO{qzc`%s$)(0U}mn311+7^P)@3$_n?4WJ0k%m>F<-6ZGDR{9)YK z3&1tV#=0CyvMb>qL<+L1G$IrP0}Xzbb;=7Z;?YFu+SBNBC5J>lef^WVsj`BP5KeEGP2H% z;&$*3;EW|iYeJw-dF*oiVU)z!%$7gdU@afv*-&JVNxIM7BE(p+_Otua2Oc|PnbgYV zBu^fT!HoBUS}5?~ptk~IzHqDSBdRB9*RY?07Yq~)csTOHw?K~iUMS{X7&EFNn|VK{ z@oK7ld&s>+P0mr2VoSpsCU$9&n)qo`TuQCM=k~`jbXZCtcI2-pmY#4h?&!tM*@7hI zCAbGJ|IlsDMf=5u@sEHnSo8L9kypeI1?SyOgtqagiuCr613r0`_B*xn zMC^!P2cVzpiC0uELasET;8VOz><7$z{XE7Y6ggP&$jr1qV{%NLvAY^XkU0vvRahP)vC822fJm(;AogX^5h!uLDvxa84X6su4vJr*? zBp_7uPiM&xaSvX(c&>ApXMrlAr)5#y1i z{F8)~8onv>oMD{PEXU^x>`^Q{JHjgDCOan+Mo`IX4h~J?m5#FW26~(OR=09pX&6Dq zmYhn2Pd>z%>7Rb?pp|G6i`lcLIJ8m>-w`cUI7;!c+l1j7S;9nb`uA#ckK#`U>IZVW zDomc~M#Lj`QtThL(m(pLShkB4TwOX?y6zrx1SwtV-I5h~CITQGJ|43=Y}N!bVL#y4 z@tYJaKJ6V=UbcX7C?_xMh~96D^8{3CS~<-%OS`DqJ{2N^9>#Ko^*Vp%osS^#Qq6nf z@sTT*e~n#V^JKC$@5t0wKhu=c;BNT#bl!bM_=<{37!q(@_TArxZ13N5TYJ=aKOl53 zy-t8h3;$GIBf*Sbev+zjpm-sZ`=Y_op}bq0pemm`uF;M)^+1bLzt)X-p*SwM=uyNF zn@fmXj&n~)QS3eM7Z0n3?L9o`ZMDw(?`va7#vKL>zlJZiz&^@TeNY7R=jBlFA8o8% z#M}5nF5|e*^`bw)X&;ea zHPg}h8~5oV?R?~7_{v29TV*D=5GbJ8GdMSj!W__b`L~=s0}BYonNgye9DxWbjxZqZ zC0s3`$~0Q8ZJ+Yr6MG&!`;>m7EDCXwKZE&yev;A!=0=^E%5U?J1n*y!xD`B$Nj?H(d);OQB1g8lt3#L6}kfy`hB3I80@zi$0| z5Y1r$K+K^P;dre7uY3M^r5RAWkkWO(&iVTRmcFK0#Pt7vPr-Do&o&n@Zy87db#-;_ zMlW9yoyR%Gg|ZPX-2D4={-a!CfsM!oAAaRD=ilDmCKuvlniN2N2P0Q%(w`14v)^1v zA?O@2sW_9XAb1!&WswyreRaCqHd$uccDy;_6bG`IslojUXcc(1`*bMie%=`+di zJ`gg6T8`vCJ~VnSg#mr3rOV*FJN-D?&KDlNt~0doCPt+d*B2f%SZq*l-$A4u{1HIz zKA-|jH#!z6UfY7xfHdmT zxqRGTJA`ET0{pb?$JCx1nTdA8-A>tVD<9?MGH>IwjWCwGphnNmnW!*>bt~S6I4pOD z%(>2R>@?lo!hqNZ3y3TBq&<3+x^Dj5^1nY>sGvSndR3J4X~d8be`B`RY0`Z;vW-~J z8h_l-YnP#)p6v=?&Q+Q8z1o!-NE26LrGEGB-Iq`PMDAL<&-H;<#dN$elr`1*7+Jja zaxKM<>gY%3L#x<6HyaxppqW6N&JTPxAadg92fzApMHR{iruMl%m1dUUsAcbIv0*tatsQ`|WKPc>W{E`AJLk zkDS0pS~OkC>4Zl-De2-~uJn(D-3R-YGP?sjv!RQrs#(t;iNiU)B%}nC=lR%9t%|;D zO#+e7@;0D0!9jJIKq9b>*5u}70v9-y zw-ft{`$}kd3RwAV2zKdnW%n!Why2V`8`}Nt5^~?*xXxw7LQC)Imkh5}KZ|rE@|pKz zsfnMwkXqziWv6DAUPOo8EDup95G_U$d%O{C1KMARbZ(5yKf8Qsr7r-N!S$bK>;UaF zFo51B{A{phHD0eZnpURAR`}Tn-0N>1u?Mjl9XAS}?(68T(@4zbrTaeg0~Z4jqf`gE zV`>p+i#AL#OyppzQ@GO?-{~jeL~b|}Z3G|_;(%nfsFAA=kQ31U1s2VIFUE@HW!$`J z0DRZZPxN?uf-CnIm9NBEmk9(xrWo^P5gUnWzWI^Ws(O^v3sUcE2Q2AEPrFG`TWV$^ z2*0|z|7IZCeY$>T4Y=FiEMX}ngcXmMazM)n7>p{t*r^x@t>3F-Q%>S#um;`)44CmK z1AO@>1G_6{KDmPG>@QIiVU8^>I2jqtP9%InBy27E^f zgRG}{ajxBeLRE1M>}i3*>UmEFg%(N7;4v*dj?Q#$6X~`d@9t+wBs$`7(;2CB?5p4j zLmv*35@-Ilk>X7^-`}qLKb+PQVzeDk?9jr(qa&idMpOht0CGN>Vks!4s6g>E1kdkt z41ew$VAKLSNbHKI3g`^2!1*#I5l9EObm>2erih_&kQ6kvqBD58>kb4la zPsFz?aRZopfbBJ_ro0(r~1Ajz4h*JkGYeY!)I>D}^R%Y>ypZBYuvc?@yLb#~gp8gQ4YX?TvYpKJ*A6{pn zV_xi_pAZ&-XURn%S8fv)t>)mD-i06`T#yMxlOkoHq~wx);_HVq?AR8-z^-6z&#CX( zWPo2@YHRCFyu23IQii8EJy>B=%#Rz}?^$@YlCE3lECS zVB+XowD3HKXGbCG{q-6+tLP>>eJF((aH}XjG*JA5biZ2<+2>XyqoH+ZE&&C9hUtHq zk5>dp)gTK6{sjD2-b!giUa0!PS`uDen9joTLcwo~En5-^ckr+ALo-dKxfPN2b`x?V zjcR|KY?g+5rJK{KT^3%NRt<=U`Lj!c*F79Yu4%5JZt}s0^8uXRx4T5Yj4&Vo+hz4=$GwLk65i zWp25j{Z7_;G+6xjix-bz=IK7f9<3#A`|}P)zJ~hOi$2qpmay+1OG1D{BF;3XW(nU| zOcmjcpcbGVkst^leZc7vKCA@P1ww94XG%;8T7jKN+++yq{EtZfw_DfINA5K;?4@)2G&Xl7MGzna9Bj)#1F5CeV`srvo)zC|IK4j#(WzTlo?@3HWR} zJK(%QP0YDWpTh6&UV{^!{eAznjPDd7k4+|vRmxO_8a?~~C9se&Z6onwU;68PKrMiW zm1z@7Azc0h$zOUAs5!ka*oIL1y7RlN^92^ZrgmQpz%H2FbXGtKM@BV$fW~1rrwtn?6agF4mHyV;GvLd2Ggx(h|GoQ%AebzV#`^y+o;2oiw z7=9f1Ji)1inC4q0{+g^;0@n+Pe{fipFRhWfl+v9dJ2v1H$R4_0Fh)3ATY5kxRnWA4g*En zN5UI{JtJUEXgo%BcYAXsAC4T04(@1&K`tba`DX~=T~lKl%U$B*AWO+>;3NdE2T*)y zl7mtAqUps^i@I9NP?VC2x!b?c+5zftLzn*N;1j6it^1^7FGm&{%Ei;gekb@T(3JF4 zDe&=xK3`gqJrJv`quIb^cz~uJ(#+;&R$4AoTkonDqZOniUIh`~VU7H@z!QNqVlp#yA=KkG7KaPWiD 
zLdVjV`)Ykq>H$%z*ax4LYF_9A1%NAP|0h{N47xhRn0OQLbh!u;Vr$Oz(xV7|W#ZHK7)4Es1vPjI} zn;gIJ=Na1Xd_EciWVwgx4wCZ5EHsOOcN(C%80UX0bo`DM=up3De`)a4`kP!F$S?xp z5H*OxO4)rfxBvL&Uk)P)53oaQ@l`3#)pRwg)BhEX09R$`g3rl7zXA(3n^Tbo)@N%j z74}q?S80FW;NSB%#CYNn#TN^-)QgCezw`URcZ_wV5gC_K4y@sSR4s~vf#*)fV9&kz z<n-@)`pilQ`SwZ{Hh zR8+)lsG_1`vC!!Af&ysUGXMAhO!W3wknkV1{eKLn5Eb#R0;S)cG&-&K!!m+!Fv+b; zpO8L&_X2P#=&gmi6+M_o?b|>$F9g^-nQGg))`F78hX{2@e<>qX4|Q#AlB(%<1*O2* z#8b02IIOLW+^zvX4wtBO`^n$V8bv?|@E)3_quF-?O8Y0z_%@n&0`bK$eV8ccx(DdUn|08*$HL zusoH=8y&#t_a+}_aRcAMZOwQR)Y|@mFzYespw4yom4G3O#O%V~tsP)Vph<)8NkTP0BvN3W*en9SL#z0H0{OeM|)w0wRoZbR+_(P^vf_ zU}A^3&3iC8Zw%7fvNSLJd3^Y<9BGgO;VLL9HB6cSB#5!JcMh^JOsAV#(%1XJ9@__b z(tc`EaZhJuO0?krvA;5E5zDvsJy9!tqBdQ#c{kTPF_P+abl8!g(HJ?8VR#NQl zgS8NuwjJ)WI!3Wq3lg)G(-5AMF@lCInaFuAmPiarAM=}J){Kj#f=CI4nv6z6eQg%G z#_bIIT_W3Qx+of(2TkD{YJ?YS%7&5T_S@eUux`{V3xC=M@X9Nw8?ibB?-m=?exdDi zz7g5Z%`eyDb`u6CF3Q&Wo2OZ^4X9oQoO~HQ5=&MZDam?Ao`?CS;~7yId1m!SM6*+%s#H>;))aQbm(;I9B{nx&OhhaRJ)95 zS-otDSGLp5lAUkK$gh1V+*q-yAUPw4{l?R0`+E$LjWsKz1X1w{7?(Fz=&mSAk0Dls zDXH|ldrYSDy^)iG?c-$>-0a-#q|*NFNouLtO9SufN9-u1a<=)&1fefgaG+W~_*6st zNH?n(&SHAf5ap#46V1w5<(3DGHfKsRNZ)Ch zZ*fmgTvCj~_v*z&30~k!jIYc5hxV&3ODr3Lkk3VcZX7nJHf3`+$Cs&g%mm5-)B7MpYwYM zJ7esz_R6)^{LJxcyT}jv(a);p0old;E|q!ILG|5s=PKq%$9(=3Xy+Aq6kqhE0u&$j zW;8JVmLYV&msMHh+m4`Jto2~c-5IOV(GRCmFQjdV@tRQ`wOluy|RWuoy~>OCuBp_f7H!tVWhGQ+f7!)o^*;2nyuCXye*hlwhor5931cQ zA}v^i)C0PW7dag?+kEdjT7pGCvSfiG8(Y{4ZTLBGrsJ~g0M25{v#>Kn<~Fp!(KH5mpe9T* zl&v<-Gg~DI4hp9;6{T6*ofA(jN=YQ=NrM>dfU1>w)x-H-<%R7U|oC>EyY_Kp%rg zOk3>wRVAagd>ud8z}te3Ter@Tvf0TphbN4=Q4q|9NC|h}-elQ*UnWDv-jJ+61(k>$ zy&FzMwY|dH>u^n~Pt{KdiE1V>Q@|885|2$y_RdHDyA%4xnK zdDHx9&S+4a`=O?|k4m=@ea9{_ps@GK){880Cpm@zh_t@f7@4>2U(`Q6FHkSDU z_Pw6Va?$s)zBkU;T^8~ec92IObshXF=iVlFBa8d8J;1S$KEGjCAESfA9J|!zajB~X zBnypE->S{$ya?-NVbx9Nh_=DoOTwBJy5b+m-Put2+HRR%! 
zl^$T^MX(?i?c#gYQ-VA`P5iu#c;oA8R@|X3Y`A2QT3mioWkW7g92I0OF*l%6`V;oL zZ5P8feSzdPosWJ*D`tBT3$+v}5;s!PSC~aXr_Q;<2njye{e#U#B(3GV*>QcHmIvm2 zoxYDM>4vh6`s|^oym$am$sj4Bg(sz@!=N0Pn(DBM#T72J^ub|DpHd|RHju#8B26vuw(bu{u5KDwSQq%eI}j&N$%GBZn-#w?$fuYG)Hp-tsxGdSBzl+K{1SNeZa zqocZ50#RgFI37vvv))%(FgrW-EGVya>KSdz=4O;XFrM|eZ8)S54${=}nwhuwB9f_N z1!P^0R4csGP<{FsHW~^i=5MkQxNl1l6V{vBG`oQS$1SCC0L?Yq$6ed|c;E{BCrHYn zWRMW0WMf)ZTXdlUZOo$pzs%aHvSGRIn@6R`L{7y3;A9ay%w3O1v={R`4j1 zJ1LT&zfD-;(!Bh6*}?`P+t=ahSkwJFvu#jU&qMLGl>MU4iS%4KAwNFYQdH3vRrAbC z>Vs4Rt#~40aJi*Shh0En{-6veae|TmIPcnunC&S)ahfVj3UI(jk|wv;K|eVmSl;SjmKF~1k*}H#fSedkWr(B#=p44o|4%bv{J`C z)+Iq&AH$pP`JBF zk+P$KvoWwB8Bp%}_?5^vuf&T=E3N8gKYsP@22yqFcd7W4*RihYNysia zVoQ5jDXJ?f5fUy{SBXXR74=ES-~|@K&n=|5jAgjT-W4~KPgP&kpnG~v+zG}$_r(5+ z)Iq}J-}@PtR9$juxgTIxi+BQx6$CTx4`hPxhgrFRtN&sT3~X2YN(YB593d^zBm}6K z*|Hxg?gMM1y;!z*75d>uH)h4wp}eJQ!OSTKvL}ZsNJYwFc_Yfm9)&)T%EL|OB-%M$ z6D4X;g{PI*(q#pa;4y6eL;M0;;&B^vpisON*3HJ*?>oaC9=(3|DI8>Pg8iecjG(H5XTF<*Jj{*%aVHk5p-Xl`n-f`i}Tp5FR54=u*`%b#T65z z14L)q8W${!<8?4AXfem5x-82tIN#7-)P`$d@(rFKwXU~%S$`2WvsKLe@Gnf}w{&#L z8^|J%3%ItvShxI1?w6$L`Uk5N*zB$hj;MvfMlhq5hd`!~^rFZNuZ{ zk&+dk%iDIt1WgKn2jr_1WlIA}IiC!jVc!*;n*a|$+viV(odi#1QR|RH#SdYKheGg) zftOj)7RvgWwFR#PYddok3u}}G^ZJrp*dbrlK_tC?b#vgsBZSY5sZvv1InbK)CT2#K z_IF!HGh(o9`wM8>n1aht;Gr3^$qbtG^~&u}Sn2laS{a#SwOGMQ`NtRD|3k)AC_$@q z26?kOd@`$QK5D@`>0ylRHh8IG<2HGkr>cIhnFxE%Unf?PedShXf__obWyVoz70WMf zsRw=2dr{Ga)ikJnt)_fVZN*EUtcth8(Dn^)*Y(o{cvC7h$wcQ~*yLWg^E)o&_rjWg6mp89?8H7tG$@}t(M&tc>j0SOr>42kUq_cVa6R|jn?H>n8s{S28S~B{H4K;A` zpdT5rxBpkhLf4E8cb=hegydUD6F|7m5yafEUdZd$((Sx#Y%?Pmk(bnIQG?*E(aP7* z4HPTW+O$mJL9l|xx+pX&+~VbXw7R6UZX_K%^tVxPOD+a zn0W6>VDD1-aJdnefg$h>v7tL%)_RMoPBNp4;D1N!c>DW?W`(Mm=Cz%7T4o~3fs{eb zpmcx!?uYzg0l&iFDv1Wve9>otM@G$-pJJGg=vQ687dJeA)VIwmv-hZbS(@DJzN@x? ze+tBf=3m>T9_KG|ITv=))IX%H%E|jJ*~e*^+*lzK--oB>7n@y)wl8t z5mwOFtei`FR14@Typ;(KCWebQVg$Zr!$CjQ1jng~quZy}zN; zw4YG%6ZY-+?;kPoTcqLX((|K|xn1t*2iLJK1u~JRqWg~-(k3~kjO|X+CuP)e4tS{V zymS@U^n1QQEb*~opIT#6JY(bMhdjw}2u3ixveHAn&m)A@tnt$hDz1V|);~7EQr~qh zif?;+W5-_DyLQV_wf*@!;LV?!TaA7nmqp2I&y94#u103*J@d+~2|&+f{|!+Om{|`$ zm5z9h2!BbKeYlq)SH;sb1msi65q6T3;Ynwbb=Dp)?fZpox~-%?d0+$Tf%R3&>;13s zkOiTCJ9@)&j~|~XngGh*f14620(~?W^X5HJo_Mff0|IEZwHa7X z>5i6?8vO_actn68Jk!&ZDb|oKKsKdJS6=oCd}*MLMP^9;EttbtZ7MR(qD_74XUAo` zbl5gi)|;LZ9dT)uSWHk9lK@?iRtg_GVWO^I7{8miN~?Y1PJU_3cnr+{gB7(_=a(Wr zE%7HK@Xrd%U=RZ-ITt=2n056S?~5qvCC%*J28S;W?A*8w zAzgiiJy$k#n00nlj$Maj*mI_KWD-JYB;P{bJJ9w7UUkcidu{#rEc*?qf{0V@1 zS>vh`5vPvf<uR-Be%O%*v2 z-_*X>y0G9(#eLjp+$sOvpBOX0kLOn{X~CAqG~BYRASZdKWW>*brDuGB@$=T&Q+DY7 zmku>b5_ol+0Lx*uWgtpGp-YdaShzQPqakEX5?#ZrsII#2PsoN@uL-${?l-cCv@Tio11f-NAj%2a!F z2A~~ERU4qLP4yh+bH=nkNWXKTT)JdpWj~UxgeFl~XR>U%uWSMPu`*n$LpiglotGZh zVZ`!VH4&73mPCKWRF{9OGdd@O3hSy_r8uMFucRdFAt;OFT z>R@+iu2R7Ecwi>>oS+ap;EDWc}_r!>OOro zr>_mxvJl_foFbmBDAwTNS1W%HB5r>eaE?@)HfVe(`1`^43Aeqf=cHQVUsPOACFFv0 ziA-AJx4d0>FTItdPT;b(XVP~(P(?0<2Wp}VTfTQiUBOFh_`i|CX}OoT>|8yf`#_^B zST*Q(#rbJ@2MpIt6XZc!D~*#)_;tm2X)+MKRfEbK{R;J7l+qjmr*-VjMk_|@MOMxa2Cw>+X}$h-r@$;yYk!$slC9YrY}fezry*e~dVxDiYHPxG*e9Fa=p{Sz85#l*T1Sasrpfs2Su5U88fqt3*2ZQn11TxsfL zkQ01z(Rp{D{%{9;G?TS*(S6#~dXglWj7tRuM|3GZQaQ(6;dhsbMzrFPaBUfap^?6f z&N2SrG?bX)@JZ|{mtB5x5gb+{D!>84bE+1@ z9=&V}g-;YXHgn3$Vy+p+8p=#)+j5u|T=&lL^|ui|aGzP$Uf-1u_V>+&Z;QV;d7u*j zTWDY87|zvWn(Z)3#*q?oq~$kBE-4;it2D^%&wf?%~#3gL~)5{aZasFi17*?4_Mr znJ{1`5m&EWnPT)(uQe`o5t*Xy*#MdG-cO5g-5_-%)0nsf$Li@m4-QWOm@%_a|JjCvA?bt`#oH+Z$C5s${H*0Z`tcrHs>Z5SFOI=ixH^z+8x9l zIQ644Y^D8N8uI9^qp((^Okpc#m=fnS^oGq?h;7N+zQeuJapH@NXTy%JH7e#f7coL| zsL(8PHxd2XA*+GMNJ9M{AijT7*-uYT## 
z5B;sP{C$cKA!d5>WnOrH)Wl9HRk=QTnuf*M5oWshOI=`W^FzyFWn_L8MR(d;#}aZ!UwUl@LT0nE40VNbZTrQ~kud}{2O@b@Wy9`CYhh~)K_tv`j9G)aWlT#q z-+wvIDPrq(?Kg}J=s(vR_(Fqfzz_aAsJ2+P-FYzg?;Bx?Od#CYMr{n)bw;d%RW1je z=w8hLM45|fdl6?|4Tv+GjWTrSQRM^;)aMee%I*T9CV7>v5>1OcpwE{_Kx4;RzrX`j zA5oI-lvW9mh&Zdp@M>@ks+t&cIy`bJyqy1W@tzG%HaG?5{MUq8pseh<^WMO z?2|NF-tw*jZ;!uz+-8mPzwxYSxF=$aB$zzmTdkfxt$j+fik5V1fG~MEr5<6YHuNpa zo8*+cujsa%YtcX>5!I7}@9sxEN*_4b&yKW9ufmpHRB3fPTFTe;sjp-1*svSy!P-LH zf1eT%ajJU2U~QUF1MaCbS}|4UBemL3UkSUO%d_Hih)yG!*WXz`%JwPHm{2Y?AA=5G z%{v*9IAHaPXZdT0-$x98!`8E5Vpe2+!s_(FCceHyuh zTU6<|Al1+8EVbQ1aeURVlt=aqh6|a_|NcJF4^|{|YrwVmo0Erw2?ry!SucntG~~RT zEn$2C*|~7jkWN8D0M(2r&%biBLpgAZ5x79 zGXI83?b@fpgQAgdF%UtKzuNmszp0A4^SIT;tQoiAn9rn@8aTwygEW<*JaRW)s?dK$ zu2NSBG{$cZ^huS^vE4UoNO}9y-X=Zj7&m%;&F|fXTze;;4klL!m%koihEu0VA(Z z4PcL%qFt)mJ}8M-?47+e>QB$D+P9ym=~<*i57IwQ3Fvx6exxxAgsYKTUbu#q(0#{J ztGDNtvHDil-4nieC~%bUeN(fzqX62yqsI$PYf*6go?k(kUx0qQMEvY96s%wIpqGL& z;iZZWbTC%a*jy?JOa{aq_=#C!ByJh(&KiyQAhmpHo=urZH8<|PF4u?#s6Zo~SY6UmuWH4w1m3&xu zD^{khhVMP|F|bA0%3VTEB-{f0ZsT0<$%W7sjuJcrki3Xbq&69pe~5VZPqem%X{bqq z-GtkyME!HM>6aIrZNc^pD-gMj7=Y{e`o>1xCxdLW2%W&h8sYH0hun0?V+GpMz0=qa z&DaRz+E?8$R9-iRh^>S*aavVos~LMKL;^#&={8jRaoM&VvQmczc5>WlxoW0#4r;zJ zaY*Y8HAyTb>)|dH_)a0Wp7CoF&d6>^TfFrhRiRrVz`|LWSARf+)rCO zeB6IL13U4Gu%|Wx+@%&7yi~X2i>v0KD0du^_lB6eRP5dO1){yDn>xA3#@^y_hKpFS%G`L(kJ&I9I; z??Bk7#bv8Qc#F;Xm3jLjM*#(GNtc@zk7x8crV?~WR{}3(ZFITQrTZi7I>=@#!e{5^ zbSWCy`tv4w<1exiC|=YrU26x!401sXQGH^U0#*rWlSz7cyH`;jF_p~K?0OBXvf@aZIi)l-?oSMb?&c+(%#styG^G2Fi(f_l z<_329SFRV+%AqkPfrKGo*N71;f1%k9{m{M>h_14Y|KbN6{y?Q0DSWCUo0A*HFq~~~ z6({<|I5!YuGag&&lE33peib#t;|Dja0=2#vkP3J|_4BVIDlpD4-Q`*FKMQ7d@c|qU zoSKAX^X4CQr>((iUZGDp+$Dke;Olpd-sy_N+ETR=?zm|)rS+NGM^3%S#($P*_2!qv zO^~Q(KE_S8tT0^6Y_XF~$rmJcm%$1wgk^26sjDh6bCEgX}~6d zHJU91>W^hqmJ(=u@j^yqJ{x*x@gCuSNP?v_p_4#1_^{}Y?%3pyc`{w!?10;Sr_QAj zG%Z@D+oNKMytW$9s{ZAGZ;}uo=-)4bxW+w%mKV0@U}3;7_uE;m{a53pL_{}%iEW3yHS51aRT`~Su;uR=PXdgPnrnKn$Q{u)vACVR2^Rt%I)b6&4D7S}taG)R|! 
zoH{>qq!PR!XP(SwlLGp8>)nl*A8{?xy&nv&lslT?rpabSM z6A}-=W2$QGf2tME)&9G>_12&Lz&u>^@yZf8OS#uVk%n5Wk)tK(}poT%~Gw)f>j`z4|*wMka_ ze$0KGqYSbc6slHSWVHYWPo*}5GNM#ddA&aUpdQ_P|LyfbMoBtH9K)tE#Yt?;xUI~x zB3rE?L4*7@Il6R|;+Ocia05sZ^z)K+^n)0Xe%|WW&oyn_12DbDGiF}r2tP=gnKFAf zdVDapT7-E*s~b2E3Znh%Wz2@EIxw#oDZ(rxnltl<%h??k_e{L21r>waF8FhWaQ2B?;AIsr90iv z2hbt^8j4m_c^cXZKdIqaNF_h}`)$r8Hgy`Lz-7Z%^Hfq~55C4%Z%6-9o?=x`5fzs4 zJ!X5#H=wdLEuJnu<4!C)A-=O;LUnZb=DPRKtL50VX}P4GjJ}ao8A?MtW1sY51N4H*WZrq3{w=e# zVOOD=iYSnrjSAMhNeBuH@F*K8t5mSNAD(zOH={D#t@2ByX($ujU{o$p_vWxdcvDBJ zU$AcXJ?VR6SLPz+c}Ubg9hbmEh2~{KCE`FNK-c_Wj{kI{js)Oex~aCL0Po{X@Toqz zK+P1D4)7yq|CFdFW-ZC55WCQ=6aFYs6_=%gkAW!$%eWh&myyAbKz+m}@pl*@G$CIt zS_B{SSqEK%UEA3BYq~B4duZ4q-*KAwq&1dH4`ZMHdv2>`z$^_d1b-|$JWd3SEi8+C?$ug3@K9h&A`ba zo)YSv+?w4$!WrsVDQW0i$12w<0PE&_3pQOPi$Sa%_)Q-*LD$eB8n%oYH<_LY9>lN| zDpVWZ(BG%htS+gP)aOX3ZhSc)=btMh!uk0qb3q;Q&I*5Ri>%rBwspl=c)t^5@0#VR ziSltObd0Wu=%P18zd@HF=+VH?otBz2+yC8xNzv3thmkiosNyMC*w-(~EvKI1whtR< zJ&~>^nkm(|g95hW+YXx~iJRXqTYp{dTuMobWE2#tiWTzJov!mF^v8GAT7$9a95D% z&h4zqBAV4lxb}zveiy#6x=ojJWJ92$*R>}VL&oxVhJ~MhV#x`sJ8zM3uk>sxMrQVX zbIwH}=d@IVC{zzy6r!dH8Z1xHXT8#4&tzn^+JL=U_qjY*5XT>l@Zv8GF+06##wIKr z;ck2=0JGGh!R@#=GksW@TrrkjGWWCHje>M%tsrvPz+WUyNFs?e*==Ke>9rIvIM=IF zG5UjYK)q|X(e;>=olg!+cyP()BL@Djf?50ziA_eT@ldp2G~Z5hsd<3WSaHxMXH|_3 zftAhTv=Hc>Pbro#`-y$TFXKS;SZQ~+j0WXYu_1Tq5cQ2eS`gnrfXs7G_-Kctg#l|9 zo2`CJR)(l@@rqkfaA;jmbmWc7`+bVjuu&zt;A@M|7g}Cl=K7#&=h+`WV4LY!AHRp-i@RC`u1T^iaF+HuHej5S_{EIxVvfQ>iIyVIz?PE$KFN2B6-G zOvzBRVhMAz8wv^vDpGJkz%7sxzH9kg+@LD<2VtA&Ot3loKJltYL6UKd2ge+=z@n0# z-9n;}os(emGY9vB_71OkXL0J+1(FDPx64JvR=!C>FOm~#1X_2em6y0Wja1uSK*`sZ zD2Qn6mo;R_8P?L#tW0o$#_zt^Q)a3LFdF*wB^X1}R@g}LIN(S>B{7GOq-bK_a%3=% z&FE+uc!q*P7AWrdA%g~fHL>01bo_+|8g}z0s4(y1?<`MqI=U+bb-ar|7jEiG<|&i&BNTmvgbfH`ZmW$*BO)qFDZrHH#>3IxD05E< zADj99;^^#T;b&U*YvGdfC0=JqZB+SndOjt9jG(7g6NdVu>%bl$fyaCSgwUEb7r8<~c zV4G12Ux%v{{|4M_&VP&;qwV3~x1=inJ6hTPK7mcC`yG47*Ux5Y|e$Z_i?o zf_-gxvufgXM0nugVo!|6X92J}VNQ>kt5Ol4N@GL;c(#AsD$Bd1rm!3~AjCtf(5>C? zJ(EY*-SSMsQ7OUZc1_uNR{r1?c(WiZ@tzWu#tJxPWn>cP4G>dS>Qei~P$T)wAkHJO zVF*v=x+TY%kyMy4|7D?D`JOBZPco)M+JEv!@((u>7EbN<)`j+d>TPf$IHYsH!r)%l zyFFN*P#vhqY}cQnnHpb8Nl1h9cUM|q=jvbKQxT=F@gr{k^E;KR6M|4-*Z9s59*dlI z_4$#om&GX>8MEBdD^cij@HSID`~>QlqJYY`yUJjWrPakLKf;_sgY&gNH7~47c;d%b zX(O!fjD#|{BIRb1mA6feK9br`X(UV$&x#JY%t*YsX;vs_ceaxu?yh|36MHGV62{vu z+P{1K!oxZZkKekq1(JV~!hb$~U^Nb-4{W=?+(%c1Pk8=J1Mk&K&Y@QqwADHhNS*d`VdZ zgX$#u4c~mD%3l%xu-3Hrjq6vh(Eg995e+cvA2iSby>cU*G+-2Yzsa`D6z_>8y;z^~ zvzZNJey#8Ve%zhE1g_5d&b@N2>HP29NZ!%GuTW$b_)BM=6gN@nzA1?OTq|`Hr zZU})FT5vbDr%GYHud>G*$o`J7Pyb*(&k8V_sZy=dWu}ZHOw8i7hzZb?Ih%d+P?^p@ z3`Qj-Vr2QBUpYO46jxJR_dZ95DW9}kuUP#m?J>)cLejOxw9WzbJqjg@iou`QqFGC9FHnO`eh+)}M zq@d-RE3}Vt+B~w0oi2aqVA)13xwMjU{iA)LR$@&ReK*#dg5%(T_m=Rq=cHLAAo*en z!m^^dq*3jPDZ^E(vc@b=u!{X9U8bT3@GFO+yIN8go(VvRIpEn?K8FEs)Q(iks|{w! 
zNp96LHmmE*8F`*au>H6&Na4#yDN#cFl?wNA>w6_%CGFre*b7{$+{UUw+02vwYz5(; ztmJK~d^QzKvE~`z))9yN9^nc4{Nz@fl@1RaXQu*vNHudu3}=8!e zG~x0E;TN>3fKQI<=xFIG4<9;}?Hd%kb0rI-xhpu3cG&9Ilw(-_%7an!*%3bhrS#$0 zY?f;L!Z*2R++ruaLO3*gy54GS-BtWzj=aO7+po~$DBnkovthzR?!-~w4l4Jc68i|S z!$1Fd1YaS6uaF74?G1>==03}-?;*3U2l=&OxfqrcTvG#%3xc|QX655R4Z5A`Ev+R zo2^@uRJHu?Si9S$CshZdt}VyJcG4KZ@*BRG60_cU4M$S?1y1T&2(7K>ugfi3HA&6; zWXS9McoUQT=|TLbW}KdMMy!HxQFZ87d?_0u7+Y*(9lYDv3*@<+)|4{C=q%|y)R$y9 zk{leQ=KfsKGEJDUis?)s|0*_-PL%z`Y*8I@(X*JIQ`EiJqZ8T%0B2X93QwG|(#iYp zvEiS(J=KD@$w0^%T6u@azU#00St` zffFN?`^DZSAm#r9@LJ_lyH3U7oF)EL=5#p3O}QkkRYGK%U!|3S%ssDM4G#vrZ*WEYkw znZ?i3X#yv|77Z;?3hOziHjLh~%xHjXGv*mC5_DUeQ0}%VY${#So#F${Z}P6Fzv>5O zu^|RPF5oK{WBKPsBHVzL#Cnth=2bOYk7^>Be3cIkkwnQQvgQ+RNug*~P)*0`i-n4p z`4z!XQCN~eYW*X>;Dh?vlRo0B)r#&}#M7P@2=!5Fosfw< zl?$sIl$;xPv%db2X|}w|Uq_Ig!+hg!tWy>7e$XuScPo(*AR8V}cg591DyzR6{AxGR ze6onD3x5ldPdGz%p!-gTykj*%^$MNhO}WoH2x(aOx?Twq#!?u6N7~+JH;ylt)qyu`#c+G8AC4kY(UupfpC3s7Y z+#OxCmW_Ubaw)40O>^H+fgOfO|8nu@Xf6vqR(F{1YXFgy_9>PuL4!iCXE0STqomr! z*-g<}JLYy9Mt=|9GiHg?j)so^gYC9>Y(m!I%M+djZJpc8)3H~Z<+p;lip zf&*CSSdw4|fv$sXB$9-7pUPl5Dr4m(O}6Ti<&`yk*wi9w*nOeIBsheA7KWyDYLR8& zw>3QZ2ZW)xjxuREmmJfSqywzD0VGN2M~+}2!}Tg3gHe^J>-*Fhek!Av1Wc{q(%lNSF@GiQ(mLe@>w*kMT_q(D=|3P+8S((7w7z-dc)ouUjK z>wbIJfKm{V2HD;B+hHSB8!e2>+$DmJ7RKZ=*DlG4WK&O5QjXkZFjbrLRDN2;k5YR2 zK=@^lx8IV{GyehJEq?Rr`5%Y?@QdW?mZ zzW@ykd8$YIj$Ae$Pir0w_-X!mozLmw;j22Qp}S!Hd}g7x(g2-`@eFh+#?2ly{BC#Y zTxmmqbK57wY)Yo5WI-eG(H~C<9xPg4XVazcC`vt&AlUaZ=+YvWwiMtHacr(sWVWc5 z0i8g==8}TXVS@l7USIzikH#FBW%>QIGTk-{HfmseRE)>j=TFx9Q!7f#6n5Vf70z)J z*P%`(c`U3VbE@R#c+kgH6*SKU(O2_lLicATbMV-TwRwKw=)4B|0WPrTxkS99e8S-4 zAiE0JdsAm5$$}|T5l-!H;Ww!WjHChL;g#3-@NzKP5USJV9kD*rSa5p5 z5Rz-om;7@RB5_g3tx5lDOyruqBkx~PB!TL7>+?&U)kH^5%YYQ72ibTiVd?GF4q?D@ zk#zRnIMr4ZJy(3f9%>M^ZeY=x;kF%lglItKxw-a}(xKN->4vvX-tt_OB_4#N02w=cUUq~C)!FYlG&vN!W5s9tO*`H+-2O7tP5 zj-L2khwu(BPBQ9B_x(sZ)T{`;n^I<`_jD>zeqm*Lcxk?qY>-N{@UD9N`pz=HZhQ+R z5GR^3*ibDY?LEXA{i4N9a`L|EdxxBWP54{^buWzvbK?D@-XMNwEmkjrO(z4Y|NUyg zPf2XNr?Ut-SYulAfVD*%V#ZsBccoj3`c~L}S^NwI8pb`99i^`kU{(_K`m&Vm(oBiR{aS}r%^^8)FC&Vtl9jdP+0UKx51 zHN|Abs%9k)Uedj6M-X$x|Kc}bw3gX>-^YJzcQLAmv*Sl0tx7B}0Kb}VAOiC=V zqC=Dawf1QCccMR{!5m*mxihS9FeT2XhRRmxHM8T=qqY|AfrBTxo^5=x48T~1#jUt$~%?VP9TNFjhCgv1)s79msFx* z$wSS_O_3D$yQl!8waby~tM;m2h|VRfOt@EEb>rU_qM9Gg0gO(Z=q|?{_WRP@Z@ms7nwhmEtYMbZY%IVIR@!M~To*^9`< znRRRm>1+ibN>;4y%*@RvwQ>Z&x%mcvFNPu#r%)yhk)kXY2pn=lyyd((M-IR97$9} zT90Sm<7v*KSe3^4 z8MTVxs$#pSJ^Y%3yLLW%rHYgu!@<@_I^F6fqLC~HL4OYB+6VGmMaBs`x7dR!VS5r& z5FMD_gOhFEFk2t}@xZT<((!uJEajs;7m;ql8j35B#?ldk1LH13|3Ayxqv8rg;*@RS zb6ZnL5C5k@Rd8K23s{y27kA9@T(UAcDBV#JW|oBp zX`tF%37rSiIduD$7{pWn(UGnr0Zj*mmLiDSXk6%+1pz(-8c zf8CXxwoJlmL$Emxc_!tgLKZh)w<{mkGLv{G{!`)=v~yql0z^wIsJo8_pV{L7Ha0)yBS-u-R0$vJ@J@ddg&X zjs7rab5UQfJ|C8}sC$Mb>!;W!`JR9$G^4#nGOob%I<7sTOoeh;qKmjrkVZXOhv&+S zG8|l<5~bP59?L@X2s<_$b$0;umnU~72fYs<5i(YHpg}dgVO~t9+w^i zw&RxdlkeV_IjAIDY;&z1u*>~|55ZbyFfHP&VgpjBWwbpP)eW8tbguiObG{ z94EN$c%;7dybs@2jdSfqljwzoy?XoSu|+DE&y+KHj2AFzWMz3jmf!(FV;6mm_Vih? 
zfWvjubjjch&57b=^VRyQxyXT&3itnKoWeq1Lhfnl$?lR=Atch@-Uk0vql|I{g}IEc z_Q`QbuPDC=6}<#9-}EJMLdCulk}f3P#}{KhtD*&He#(@674le)2;lmn9ex)E5&6a7 zpcqFW1vm}2=W$$ry1^9sf7s_Wxf3pVjRxUVo5Pm&Q_8JZQ|4ZFi?#sMj0Bi=n z{$Ku2P5`L$pHCI5cgc&Iz{z=mpty+DYsn-1Kb#5@(*M{2&d><>>Chn5dFbeRw7V%b zHxwxEA1_a|1`5ial92JrZnNs^G%EapqbR3;5P#=hC}JY?MUaDZbo2j|J=mLHXaq~1 zxVPKCY(^5T7WEAhQURKIlm}DUq@s1*LH0||H}3h28l~|*X74_!J=MBV`EVf6|HZtD~1{G{EPUSp}>A;El@iJPgcFQ|KpWlC++i z_NeHCt=KFKO>TWU1+%do;dbbMY1Vv|GhltH1Z&y#ueR8a>Q25;cigN1CMfT^=k|*Q z)tO&unR}W^)GS??6rT8*3<}?i1EVU7EzmEppK^7KGJdvx6=v`>`kmC@Ne$?_{ho>x z)FLNa_%GrAuvW*>{_`8+LoIK5URq16!mtI=a>cxHUi491-7>WwYwMB=92xH%52t@n zCekL(+bxOSpgFI2r)`FEyZj;Q%4fTfD@Y7}*AL*y`uEe^qN%oiH*N;xB2iJTLkZOu zbEzY_aFSicF_T$MT&K^yZ!Ql<@iSngK50MTmNTyY-0gHUYv*5TP9^E@oFQYB=kdNh z{r`B%#9p2M912FupMg+pe}qRrekADmWx0<0&qhsB2qrq1&Zlu?2u||<>Iy%d9AVsI z>K6{ZUlF6C_WLp`G?6bM5g!a`<5mUS#5~k{dDSjY!#(vJZ!Dm2%>Xe2Y3eca2XSrj zA+}tWeO$wpq7^d<516+d)@QwI?>B=w(@)0zVx5X;jbg*S}*nWx8AEF zcd42SC0Z!oml{F+|Bp8Z==q|Ad}p-^ABI36@RPYxRl^3$4;ef*3hy(@`rTBQU+A~+ zn?EL8k@+ud7TLeCWuK%5sl#TT6~1RT*&4*ky_J4#Dm||a6}R*0s?{))i&8X4)-hAr z^9oJ@=sK&-`J>W|fTy8z?o9W%3Un>Q9Gog zaNanuKU9RTzeZCE>&i|Yx3JBv(@$bLV>AuvG2K?CTtF~frXuBe0(ue2rPx(d{xPYi zG>W1q@MqqR^6TF_i!Ba>&^e~YodAA z-h`@DMFi>>uATe~Q(v&705do1VolpYOk*a`*2q?XU-a}9jq)9JQL+{lPN+3CT z&$+(eIj{QdHGgcoc4ubqdCEPr&y$gvt0=ISe;V1>P+u83mM`Yqp7PaJ<_2?;cb%2y zq^IH@U_uud${kLhS?EeE8RZcbj)K` zdj|@|HCu^MC4E)wtlXZSl-HNgkSbCdM(d{|kRv=|DufC+A&WX9snW$bbsQxhN zo4$zXQlfRvQ;jO&$f^hDNSQ8}0#@l)EXgZ(H2N?zcL|ejzB3pMz)YzKzeUivv+ch# zy>&L@cbq}YpF1KkA`L~gEx2|TT9M3dXtlGk#@sy@5B#fa>Vt0st0@w5oYCL1%gc3R z+3wD_o#kkxEOsRc*`T0pPhqge+mezR>aXx-sDSBnn#}a3gDYvIw&pu9gIfRmJRa}v z03XIeMw*hrgVfn%b;cdgv^bhBtXiXP4smxUXzj3B;En8YEJH7=f{*|E37Krr*-7(> zb56}l3kf#fJ{1sDsO)cja6**d&*WCH3TR!*b=WbRJa%qiVLZ z90HdWGygD{zOdM^NZDYWE<^VwK4U5wgxR7ae~NZ)=5|XlA1WQCVHaawz4Iz;!8Tn* z>6-E9_Nen)cRo+j8`xm?<80<*PXwheaz5OHp3ntunjUZm z2!Fw4?v+>S%uaYJjOK<{p*#!h))Ti^0!^G%p3a}vHfa{GAy*o{e$>37p-d*zboPo- zq7+r^6!?ABf9l2y{eEE(i;1PRm{};DO3trGC>}>`Bp*mvz@0JB(aJPf5OcEw8;8mk zIxP{$7zSnv@Z89sYZ1+QuWWX_6K|=HFSViQ~g%xJM)&VTh6SCGG)e?P zg=tz5r<_vfQ(#99Em@0_-A4^>ES)yA) z|3Y0C*7nm9gq%$G^YZd)O(~vE);_nlcQpa;I$D)oVb@=H^>IoL0f;^0`3-s-q}U`7 zAcTvUn4KIG>ecLZT9RdnQ5R2rdwhKSX_rW%#V6%33P^XC(6>4O+eKY|xgq+Z3JYI?H= zFGj5z#2f{G%+3sAkSLWmk0BFFCgovyBnRbhdFJEuI5|1_gCxa033T8=Ul^Ap2O_K@pl%1@Vj)~1a7jrQ2nel}!R!&s z0bPGW(PAfd=ZMH_=<8C6MB=S<; z+FxP(9um;c9Y>5I^)kb-pCgP=K+bTenxJD-a^yjPQqyQXhE41k>vqHx=F=JI6(#qr z*^~GWo`ud%GIQ)?d6LKm(<-!eJa;#di^(q*)w*7kMlH9`uRO zn%2S@Yi9=_5#L)#WQ(_3g@9Rhb5=@>_k7#Al?y;0J!v3yeh`@exmd!#^=5pFIbWGG zHDDAJW~5>%RTtDs4@ly}!pK+1X>}ub2|CUYuYh9wAr`Ct=+PU7x8sVjBR0%r9D4f& z`Oe4VD@!fEP)nrviLW8WQ%(6D3%&VLn$PtJEcVi8pOkKkoO}lx4kPxj(Dd> zZCl<@y2JFk?KZQejy6euo-aZEs_Fs6V`a>hL|)?F(~1hI{g1_Vr{Ln56b&)0;vcjQ z;jBAo(&kD4&H9Dq(_PX=+SkruUP3)Blh3Rj!q_Jx;BjK)`g&+gP}48cpqt_IsHqe* z^f!r&RQT@!uq|TVv0;STK>g%J9m!?HzYJ*r z`q+poK&|VJTanjneUvVn>zLn3^u>PUX0#&HQ%tpN(ntY!Oew1H>)p=@`;hFt#Xb$0 zis@@Udn(+!iiJm#h-Vb;HtaP~RSknH^klQYoF)%EcMi)300lpsQ>8<6q8dV7b)`l; zKrt-vNI*K2ro7qy#y475&M1Ay2D7^HBmSB2na1ZV@tN(KEt0M4GR>~mk3J8Nt|p&H zhS$*@{TRmwIdbjN!c$sE&OXj`mOlXV$^EO)rM5KR1mwo$G2GyLVAOP-YwP{+aBe_; zzD)3SlE107e&FhIJT)!#<-P2fbF~aUZ45i(QPYvm@;8!&k^_0?<2L396T9pqB`TWN9(~8n|#$86Hw>45+1< zLW#ow%T0QLlwJsL#4Y2!n|TYI`5Nuw!qm62|8i~c>e!{C_WV~{BzCZ^iR$Uc!eOi5 z#2cdK;oCMH0p1b5Q&jf-sG7XyPdp}uhqW5Uh(e@AAtbJAGwD|;yTQ*ij1jU-c}`m( z`D(QkQGLVV&0~;OsoZ+kQ#iE6pUKgI^Pw9#53N_Qf#AUUf82fGo)-i<*IBuet zqcKkfZ$${y?9eTc_BKrpPeK%B9}FnX%8t6|_%+<;mbqJJQwOUDS@&jr4qqbXiMTi% zfbI?2;-o*7n!g}Ku#ED3#}rCH>e#UVhVMA(Ab(=GN4jU9=-L6cg>Wc({34(Y$6`GX 
zU&$#28@1h1idKOXbF@-g`Wt1?@$@-lHn$~mgfkT+kH!V+zgiyoK3_t|m){N#TgXT} z`P}Cifq6-?Zdx9bNwlZ ztu}kk-2aB!pc!Y;jm&B^Y`-*DQ1r4a|$i+Z*;kz;qui9sDf|1vo)nSn_RiFt$YGXd>{rhrHl>ST@d$pimLXI3^Mi zaq^t{o%af49VC`xP6{%*gLzzi8;TFsJG{4TuqW(d$)(4wDe%!t=$L-rZES{~8K~jU zcz`-2DUjWNt+(DKGcn8=puE7CJ#9*k`b8n$xU1;DqTo*WHh1`Rb1FllLm=RBp)|+P zXHX*_qlJ5WIGjb~Yq_sLk72vhs?LTv>GNWjE$FM2z0#Q@kj|-e80~ohFjIz=5(#8J z?a4;CbsUq`AI5^eI-DHn3ztlwOmSt1@qRr*Xl|Af;JE++Zi02ZRMa*QnsKWi@sd}O z;zD;BmqR_)dy#0V##B1IImtR%w)v~GP{EB6;6i*z!D#!mdS@CTbhMT za(?@{Rq4kSA%$tCnX=3x6+*u%q&zf6pE(tj&*|IBeJ==C_B*Oj7J0C-7dBKXz~)-# z{tJ5Gl0SGDDiHQVK88hu<@SX!9|@VZM-CS)%sgh-*9#!4`$I_3p4FMNQ)M#)BD8zr@-F3suK zJ*&+2dl^2CmNd8XU2u6Bm0qKV}3 zJ1KKk-W3e!`Q_y&m6lrnQZ2+j;4}mGZw}b$HRBHujgNZK>9Ip=H&puF+Da-ppJdH3 z^?$RlGhFNs#sGEQvp}X51OSuI7^J*xqpEY>i!1*ihwRSWNUCUHx3s}T6;Xe2n4eag z5uWE9!JHGkj87<5p0a2-mbZFvHa^NB5!E>|Ce$5G=V^4UVWIISBdhaRN&SxP$XVe1 zKvk{X#zTcpm=SFw0Tx+E&uadzuP*%xqiTfE|A2z`S{t*_ov~$2KSV3 z&>rVWK`o3k4m03gw!|KoD;aQmCbIH3FPJMxB_5ywypFHARJYVn@D*pKnB7NdaQgkB zc8PsMmqxtxaF^`b$qwjZ(Qixz@$lXtDZOM&dSc>UVq+je^~N8i2a2LssG6YU971t#N-a-PR?5lHh$$ zWX{D%Jhf*VbzSAJuXupYx&oXApiG;*buB5sFTE6*?U)Th`~eMO4I0(|xp&uq9{=Dk z^&wu@MrU;Z<7bA8lz+W`8`yRJzqa}R+sA@HEP%}0xDR8xyVk#pu7CO=ToIks4jk!N z+(m$=^02hCv?})-aSwuA^b{m|=c!2fs>OtVE=FOihR#i`pkFBe4r^j^I(g zWWuMC@+7UtpvnvBvSJroyPBc`G-LwhmP=9G-IB8Lru=iy$17o79@K%^3ZG&q_?Qb4 zb&A(NzAwM{7szaq|MM~+duGO50AKmD_!f|T>q;k|0iTQnUyx3f9YCq{p zs!`(bRQ}>v96#pxEb)DY5o$(^dUKUGKmE%UX3MAJWp_98sSNo4?W6yLlpqRuG_N(# z($(BosbI0aq$_YaTFuZfB}9QcH8nNi%NO}`q`_LJbp9)>EaaK=hx7`l3Z%9#3ggUo zsU$fmLE|NEM0Ril?>@l5^~PAA1a98AINY@EjoKi26vQhplflT@c4oG`d;<_8wPOQ& zGcCG3wq@Jp7Wv|-M$um@@;Y2vx?LHA3f1B!@sM_3UJva@FYa& z85l^91#GVLl^_OF0$7+SjZI9ZMbovqiOH79%z*&LGap85Wr`NqY zd?j!R z);jiw3_YbrRkI?|EXw&*KvY1=PaU7@tYn*$ANM4C`mjr~73tH3zS?Fgd>(Hqc)cL! z`eR*fa#F^gocU7rnMNdX`WqyoV?m^&&g<5@Oj?FN8;0lFRb>Gl9Pvh2C@Q`(8IW~B z!x?HGJ$jV*fWJF{o%#F;4tJHAx@&PUd#c%gGLrDbCgS*$T%W!D0*g&8i^a41Gb?mz zxV|%M`+)gh&q?z=bkB5|w$FB%RYe|r-{QoV_gKIb9f9SWI&34oS=+lqgC`in^b0MV zaCsiywUPT*e73Zoo*;K3jRm7<_UoH`1u#Swh!8gHE?e3jZfE7ROR2-FcqcV}&XAy! 
z8?bNY_{PvVyzGONtHS%Vv}p+M_Lr%t4>v?a8f5G#uYIx!`du($g@aVu0n1OanJUZj z-)pL?)og9^s;a8;sZb3)MMt5T)xJy3MnwNK8XsqCSZaE`G$f@6wsScANX`uF0MpW_ zs=04qt8z8#BY5rT6V<-q(7`*sYPQ4~QM|*A`tD5%L}8HStx>sa{0Ax@lZs2nMQ5A1 z*7Ot#A)IgQ%+lOZN03A_lZ2ldvhF4%7_zH+kA>L#&K}v{!>V=W7}{9vva3D62WG0w zlt)rZ$Ewve=weh!B-oy^y?M=Y>C-?y+ufH+wCvK0)Lhoq*5WsBat{p+1+S>yCZEEk z*lPYRl*ctl?+A^A?>?i)xD0fxL$TXlUS1mh{>b6c(aWL2*!@|eCl_g4hfUE~T{zlX z_+bPeA?)qBj^S;Ynq;P5#wVp_H1iMbpm_EMiV}@elT4pzy04i%aYVj6;yi0^W1&-D z*wgMDM3Z$E3p=Mi;lGel&UseQ(?lTYTo!*=B!r`o#X%lsO@b#ro<#MSLG$uxmffMU zq6sIS1rW^J?ZdwBZ6<-j3(luGvGtiCYQ^Vlb~-z7-o)X%EhL~F3VtRLQ+i4~QR3L% z4g!%)f|FBH(j|c=u+wzI()tsVw!R-W-9r21e?pm;sA5=vvI+|`Ghu&!UnL9exXBex z+VFH$Gb1Ax9*(b^LWBanuB|gs0n-Z%zzleoP3}q%!N9Q0@ z}6} z+o$?~24H-C4_-u=Z-JDWaa|f(d8*wTQF-fEN zb;W}=HM_K@R_(q?hLYgM)ohU?mrlbZ5i6SW`ISS+6UKQuWfzA>!4zNKlXLZ~>b{a; z+!k3g#~*EJB>HYLG$hHz@#>A?zXjEfsLUcv$PBKQ2%RRZ8`%5ocR$xLN@_dJZ)bV; zQXaS<%eCW`MHamPr7Sg~!QSU5 zZwh<%fdBloAGvmQh^)#ceL+nJXAw3<)WQ=ts@czjdZX%5L|{)B*_Z92Cs4w@^6o(J zF9R|f76VNuJ|2@5?D#)4B>HC`=|JZ3CsGgTS|lX7hAY(#)=>4W_*exq&Hcif?)1=%`7Y_T=Bq^*-I}NCyGMU)VeN#Q&nfIl*+j01y<5b~jp6 z2??ZpSXW@J0gwva1Gy}68dS{{Hcv;%@Zb;K59>U~cVa&iIG%o$!4)R+4wm2!giKD{ zMe!ec%mNkX7sjrB);;w>4Sd7j@>$Qnup(#cPux$=A9I1mt2q z(wCK-jGa@Arw}ZxUaY2HyvkhSDnS0!!#VKT2!o^?^4#Da8PhFGV{HmbWkHF3!UcQj z<6kU7=&BGQ))CgZtB)UQ!>iX!Tv%Op*rLC&$+qbM45qDe!CXOTuuMb&uonlTd;8|F zwd_wLeE0hWZ#!~;HwYbYgd`^Wae}l2y`sqeT+({KBK#(rm@{?T2(oF<$;|8`OJz>s z8m;f-P17>XUPP!l?{r2ezIzX#pX%G8>)gjkJZ?A+dithF@Z(uSCPD?fHUJyQeI+7> z_O#}FIp59RP4*bH^OSE zJB^3@z(={u9BN#2v=b#guSXFq*RlgXJ2*UKiqj0K>qtKTAp*{MPuNoD%P8>2Y#dk6 znF>0K6`;Nwa(!wDVcslAGDbn#G0t5mYfzEU3dyXf81x|-PX>xz7J6W8AWH2Zisq9Y z{*^bc&j&Dr6OkB;oZN{fFS^rXXl z^0T5Xe}y<~QT#}K`2lW2Idq*ovOw3$XY>8ykl@4!7y40FoHc03A(}-{j_@je45#XH z7@kFH&T!@60X$mP0LOhQz%Rh}=11Ml`gv!IkUwW_Wx;W;LBJ}v&fcq=0NB&d&y+lz zJSVI9ONk4wCD%oRWA2l=_1F=@Q}H!wlE%5w?IBkKCxTumK5v8p4afM+d3jK^#7PJe z>hw6JxO?-==`%Pc^D1Au3E|>?)GADR2I4KcHgO2Mw*(3MeqKOmgbU6EuN?cK@x;pX z6hz9@Q{NKkFM0-foX!@%G#6cQ=hnOyQ~oye>OYnM+?Tg!qda(RiA;(?iesGPrkE7h zE0R~+s&X$<#hfY2p8V;7jL(vCTql7aK5C%{K#f&(ivbbabllNfU{#~Da1emoD3?Gt z$5uVs+w8#mC~c3e;+8HN?AVHCyfJ2+p$V52sh-wW)mK49gV1!L)2{v+t(TvYfP=)_YG`l0e5Gq@Wyq}t%*wU zAx{S?2JwR*-CA`WDhM?tGJl&DUFbf-T)eyTVg^S-0hK7zM6p{tj{lj8%@GB`goMsh z)hLdFx{Y2=4DoBk2>8XY9^_(fpe$1F2zn<#O>RDC@c}~hr*jDlgRi-CBSoK0{DdK@ z{8UKb9_U_2u5Jy;)Rsep8Jm6hI6w)OA{uGS>}8+J_3iR5=h`U64Y_q=eYe24q_wsy zRLT4{o5Y~h!Hg1u=#3A$T-CP??TUf}=i1YZl{v}w;+zuK8!T35p}`Nwn8Cy_@)pW* z3_Zo%`+?lSvg?i6TSG#X;@Ab}24<2kUB>5>3N0V@pAj^+`M_c>%3%lmJ-@Nr#6zqM zXlQPR&3Vf`Utl&?=`jghLS)OXQNCuwuXdaFGQ8x}g1D%m1P@#B>syc>K>cjBnyjJ6 zz87ankgY6)?PzX7>|-^im37{$rZ`XJa21aI9lnq@64jyvZsE!e=x3VjKr4|A z$<)872oi8E3^_3c>!TlsHr6y1gIYM8#=b+|jhvOed<~VA6g|nKHyxxCK#@!BTwBKvAUvQ)WM*AT^*SM{7 zeUj`81(%@1cG9D=jur3!-21;fyH*#kjm+hCAm6k9`f}h|;4NcXukP75nN|Cm^7Q}5 zIshzFMV&~S*RdOei^$q81>tJ;$%x$bbi zEZR7K8ve(~Bhnm}hLmz?is3Mc`Zf2Vt(m4Nz8YqrALyLymzJP$|MhR z&6AaJOf~25RoxuTvZ3(+%R3toIY1r7Ig#`5x*Go*6X}|O7vMW}hwp8EcUg!ez<3{N zlkb$2_fFPp&MI%>>W~RmU$(K_A8Lq1EqV4uyX;?>^d2tv_vbsDYq-KPK}~AFbg*gbZ_+r( z=Fdn2)X9;#zn~eoK>kI7XuQ@b(^4rGz-xUCnBD}h){X>L9pKd`t!e)T#xkaJ-qkhd zybu9lVeNFjtkG-9pI?mCQD{2JRvls;Y<3lYrro&$G%*$@Xtv*=WhP}1-;15PIAP?E zI0`2QF@XEf7+H*lSKI!LeS;;z4Q*iux7_bkRgD3bVZbdgS4ZEvQQFHeU{{flQ(y;_ zMasRkww%m}6DMst2V>a!F8sK|fIyY^Z87g%f{ZmMfW=JDS~Ia0la zRC=bbs6ZEc-uLa6?K}#@B0V+n7He|`yXzzl7?Ck7-?`}?1xvODiX1#%RPbYv%0SgO z|HaYT+Rb$O?!V#N@Z7twcHHAD#;i7rN2&RT=4nT@Q|mCdx57T%o}v+H{9UdLV>40A z^iMeIp{>=mrJ7xBvL`~qY3SN=b+<8Y0{5TdAqcA8QdJ`1(*rk>%Yo42U2y9n!^d9`R=4IpQikGkdVEJCxDzH zLad|u>f7?cosvhsj*bbi?$y@P64}3suZ(l=ma)q-;^X7h!e3JwYhdjan2CeWpPK4! 
z>|A%cuz{uasTY?P@_$A$d22`FZwQeCvw~-h-g&Dy+1&eOo|0TrGE(W_3~TK8@E7c> dTGBBb5yJJ=Ai+9^j{xGPc3(@WRKYU%zW{`{Vw(T} literal 0 HcmV?d00001 diff --git a/site2/doc-guides/assets/syntax-11.png b/site2/doc-guides/assets/syntax-11.png new file mode 100644 index 0000000000000000000000000000000000000000..d8ba64d99bdd47724aaee479e9c4f946fd8656eb GIT binary patch literal 106124 zcmaI81yCG8*Dj1}aCc`R36=oCU4py2yNAWyS=@b*;KAM9LU4E2u(&Vs=eu?9cYnF> zKT}=jOxM#-pFZ6^J=J|`qLdY-G0;fSU|?V{WWGzN!oa}i!@$6LqagjuL4N0&`6poA zRHemWswc@${yl_RXvMC6`F|PoO$~ z*Kvb^A!PWk1S_LTe+dI40wW_Ks_qGU)`i^3C^^$&kUd&h7>I(4k0^nPj>`~9oirCr zjgI~qF3Am75<7vOnuGx$>6#LgBvFq`A7w0}5}ZG%u=QSXnswrK!P|i@)V&4u0iSI6 zw0F2_EZxd0@r5v!X=}Bk{J=S;otDf8i4X;`!Wkg^?~L964!96?q<5kNkHU`s3=_c% za)ezCEwuapA^Hlp0e9O&vY`n4e^>I~wQ}Md6Xq*0wM+d+#LG82_S+ujPA%X=SJ7tV zkJtaFx&O`d@=gTnM|AvNcKy%=10hU)mmVosTl_z@@L!SlE0j*K`$j;$wVx9D^n;nc zQ|KGO{Py?R^Z(ry|IPhhZM@v1z4Rway30E6`kiqG*GMSDaNeo)%$&F~^PgStZjZp& z0?bYtvS2ZjJx10KTnuJBI82#Ukd=z~`+xTTe_7W-JKeFEy$<*F1R%L|0mn%KICx8$ z{xJ9szQm+&2=VxjeU~C>tO^yI>rI%Wx$S@fY%4H zHL3vQ|KO0`_hFQIQ1@+*hhOUJ&tgUGYx7*rm5ZvKY;uKbc_1ub0uFOvdrA4)-blQM zxUKfY()b!E8`n2Lfa2HxcX z=q{xTUKjlrQTR&9Bye`IwmsMLbbpM`rRz0|+ot_L)2;HSxqo(@Nv}OqJM#WyXN)?< z?Hg5Uu!o0v)|KYJ1*;I0jGM+ezY^Ykz62S(TjXMEZI?&WnsbNzr&;C{E7IloThat$ zymcNl{g=7b7TS1xZFQ;$&tKBeFVGEF~ zHX_4^zuZRM#V~T7P94IXUkSSHANaEB@wOKJLmxeEk;5*^yN_`Z*OI0?oZ5@S`}No7 zUT(7OmYlsk?v%^b{Mz@L@({tvif+h#wG4V3CT>XI+QsYGNx9Q35-cABxd+E4qjcsQ?D&;kHgU(9a2M6X zys6xrQ@L;7LvLOhn5?{wl<*D5mR95w5KS*RdHa#lEl>q!$-$VpCVa`knn@E9c?=?nWbD9^_+yci6?Ihcs*O^IO(MkiZ%<{8x@)Jk*@mCtMo^$*QbTgHM;M)Q6zfj%>i zQr4?e^7;yy@OI5{)W*UAs(M?P3wzCie}z=8F!FL-0j~+7+x0+YYq7NtTgLBy`I}^e z;2H1?6jVx^=hlLLB)bNbho$DOO&#UTGcOcM<%Omsm+7CmtdoBBr0gNZ2o_Z|37Eoc z{1KrjxE%h0!v5EVE6!noyakQ)w)+6h+DAK78!pb@As zEdx%7SomK#?Zg6lV*2OM{vomUR9FN9j?8b!MZ;+#O89u%Iq~hsg_(%c#!p{G@A<2q z>9u8ossFN7Gv|Z({(SyIU#Y3rq8A2%L%JIjdlx8(irSYn@9)J=bl2V9K`NUCqauL) zCJs_pZG6^7tU5?bw6=CR;S`vd1I@Fl+DMtdNw$4FsqzV%OAKkMLEHL5!x4(Isc0VjJE1j@ zK1pb(l)Z(w?+@x_;n>4BlEaVvuZ*!EhOxu$cDx}EEZGnTvN36ev180-MK}_imB033 z$+AhLCOF?N`1Jo!ep*m)q<{bU_m5SXuVDf%(ZTD!!rrk}wl^-rSdIh*V8oO(>WA?s zgdZ2ayNVCfP2|ZAPxJGGRcHk~c>3sL&Wr4;j=JS`H`bGJ;Vc6PLSHN;aiXO&`4on=T|gwQZ;CC^_zHHZdPL@X^Is5qN> z^v=qVVR|xPPT}N#4hH)XnGG!Vac)fJ5JyY&QJW1T1h@qN9}wJl3KK;QGBzxl zl9ia800QEw@>k{SUO0uj_1?D9DHxvSKZSU14c?E57JVI^=S6{MO4U70-c$1G!8=%q$`omH`9Iy*_>SxJuGfD`mDVHiIn&OMxIRSBr@KLka>dm$}h*DR>>Q%ihYy_8&k-nYWTYh_mj_UfkJ@=mzw zkBuv}%{!U9N1~K6w`IVz1uS2d1;s=!+XPvQmUq8+o6-99(u6lJ zq9*K%`~pOsd2~&~&ebTD@2w&iC}MFKm@DhooQ4+LKgfyv=wP|V)@>;tOsIZ4=wk^D z{Z2F?)hQXgbdqE?YmOtb7m_*XJj>|&(J}6s63t%0a0IE7v&*%?PQEI( zMQd0pY1Z2*Wu1(LL9cOFz5&!`pj~s!d|2^#PKem@6NSGumiUgEG|nu$-Ag6RSAJhe zO8SmtamA~I>W|&kQuR0om|9Y2OH_C!x>EQz55bieGbkTGbGFls$=oi6baaWfBmIVh zpIJ>(Kv5+-H}jrc=E-|aQ7$r7G1856SozT$zx|kb#_;eF;&6zkN~}Q~l9+r&B^%Ah zcUs=e`Z6?UeQh?Y`!Jy)qIy{67XgI>A_oYdMrT7G)-U)mqU0S5K@P#Y$yvj8%GpC! 
[base85-encoded GIT binary patch payload omitted — not human-readable]
z)B|1)pKph8Qd&CSo;I5y@AX7B&x*(1sfOgNe;&DCoX7I<=CeUZ&a4BGJV~ICb+CUM z-z@J>w1#9#B9_+zmKbcMmopFIX!TkD-1=yt!vb1ljSVD;2eHHg1qZ28ye2$M;@~@n zQsHmgt_+<*r$hl!#cjBzSC*%D3{%USFGF6oQZ0#AKCib6d((%oz>}L9L1{eof~KR} z`qu%nnh-b%kT*FkR2e8l7Y0+#k(rPPc}Rs(E8Cr-Iwee;r9U!|%@-ymg3(pFR((Is zZnjVHik+D%5Ge#tmg8O;BRr{e0_bN?)a~dE#X#sf$wsEWlHIGg*lZ1;Ql#m+I;h(I zI3f$9F2kig?duLa9b>53FuB+s9KH!5ph2bZ=@-ZC;&4+G4&+0f(`{iJM_YGq2=W}^ z-UafiPnRi|J9yDvQP(wZkAJWoVVnHG*ArYq-75~6CgL-6D%2njAbBJ+Ea~!|>TSm@ zb8t@=a_@~l;&TCvN&3wFfKTYJEfF!v`OX*#CwlhkGC0jW*#G)c%L7RSN}&?Fy;Dfb zB<-1oWW4w|#DT481W>b)IJ6mPI$ZW%z{S0=iN;-F-s4r^iOdNN25NJEE+LxP{S~*) zXgRu@NFi4soSp+7awd=Wu-I63ptPkr+OQ}hF=-hsc&Q2jGUc`cHU@00)e;<31qCA| z%iuK*E;4*bhI}Sf6b%vUS#wEcvM3=%w1SCVI$nWU2>>v4a49 zj0N)2b(nB7F&7rZZ%&meN`?u@xcHH$8{UMFVy0G~HA-OtGRF6VUzADL6_4jhC;<`n zO&FM{>f;6_vXvUZp6?j&hC8jp>E9=%&ShG2$q1capNz2$s30=XDXyg%;h8ZoTX=eK?77*z^>7}v^>m)SdL#FX zq!hn&dY=LLfb58SDb2raO>Ne$Gdwr(vsYN79v308$myDBW1CBC31*txyRzImh5V23 zGxrG_mDDICI6aF(*8Is2<5qX0%xBk?&n90m&3u2XCs#UyV3=9NLeWtKbbWOoKJ3^Yc<_%^ zC)-`;ewLs}gO_;-&tP|R$OS?NJ(E!~kpKlr#2`67rqT3f(_-QTFMDBzx9SQj^X5pt z2`M*}ggXHkOfGvhyC3_fv%cM_*ywC!+SzS?+lF5C22WIdn?^pjUM=x8K?0CkZ+9LL+2y1Yc8TS{_KOW}lr0F%0_0=0&_;S7UaON8^ zODicc`2e9C-5pM0(?r1oyw^zbx(om?s*&1+DUTzIWBxSF zP4^v!M5RbTI-$>%jZ`fe?0bEJJHr6fkj|Q9Q=K#-ZURrR<=Zu;41snCmI3lnIIw35 zy4CNe^FK*Lnv0`P{%$$?(>8 zHKUhXEdtTv8~?h6FTUKuC^RsZkfb?3p2LN9)UjzcGCWQf-7n^o!Lvrt+t%XS%Du+6 z#|H0VbLtwC|2bjD8f+>%JHoAEMQ^fwAXCzzW@9YszAIbo%ump zBNH_EusK$0vkjL4wP?P319KTIhr*-j2h|@XuoeVXR4q&kEtQ$*jiR_(YlSH;thkE2 z2sN=M)am3C8KYqFJLa)Q8}JF{ro2nGXlIpLy1m{>vvv-{F_>6BWeR)6F?Wr2{JSBm z-b{Xx@WUl7%h#V9HuTP{^(JM#6T`9R3SM?b=_;A_1cFJ~sAXuQKK`C@9R50W04aVK ztY#zs#*FIUrZTnkRRY`WW=PCCq82ZEFwbymp|IFl>)rowJ?OotsHoF0W&>qPg0W?Fcw z;B3wHSHU{-BTcWT#ukdRWtg&--m$7}O99s}r=r4iOCwsO4VIQ7Pc1O#;6zvZd`zEFvHi^^ZddX`Sfzj!=PI}9t zAe1*!#g@0l$9w)_kCW6H2grSM5e={5dC#mBQ$Yw|talGEs3>#`|4LqLZa;%u*=Y|D z^7iRfN^Q<_YQcyhz|tsV7Si^1-Ogt0IKX9JE%d72v4Dd2J|Tyc9J$n zAG{J{zSTehILfnjdAe{vL1|4|ICL5bD=op484 z=i9b9R1S;x(~ZB8>#!j$#`JUzS50*`ug}@xYPGq2?8CH14ws@qCqa>nE3LV7<4-kg zs08Z8?9k^l@zqvKRdGYidz!tXtA3W2IQcloCb@&`kaH-qkQekh|42`~vjtpJO7|jG z-Bw|PBA~qFEb z*Ov%K_`5}$@>?4bh@n0Wo0ooOo(t9J}TAu=U) zZJeamT!=jCd6BDbZ*~sbALzYx{I0(5X>`iTO}o9-!}uzCj~5vZeU%-Thy@t@&KsXU ztJDSK_M*n1)p6gq>?toKg!H|9Ta*5hsnd?=>5CCj4}gMJEj@uRrVwkCuyLn)Zf!lO zEf=FYf5?O{yQBeE>@Ca;F8CMoXqDXQ^6D?@X-ZD(9T`d6%!YhC)M z7D;<)ZtdvtHqn>efs&Q$OzBQQt|{6nYL(^D0A8MAe6Z@1EB#jTPN#>A+he?Ak8Bbv zB0kGvci2n_flul?_;p-Bq8#iT@wW+mm_YwoVE_;iBxzsWa3C#M{ z7iD(nQ_#_w26bPi8M3B+MU3rnhhgp=S6{N@9mhUH-qaKY z8H34Tq|tijK$cWFOj^fdpdb>Dk#mj|OJPJ`sL2h2Jn%azxX;5dx4rMzm7zvQKT8*JOe_Y2qo}lr%GdP~`fxk?R6G5Uz1No1ZqEtRG^y65`-B`1H{J2qy=UtI zgtBK8U%+B^3FY(TvkAOpjZS00d7$=;-z4)nbPSISrqqR=Tb{gBsIYRRp}?IunyNLzX>ao)F^ZMlIu$pU;yut@Ip z2U}w*mZW*1H+8ppeS}-8qsD?2L$L7`x+@2FtU3ugls?u|;gY|6+wwFuM>jqxUoYpA zIbRPx9N*z-ZH9Y0+K`c21FzJLJg>SM=n1%JFl1jVvfL=3>|*X zRMszo7JQ3&NFLF0R90SLOG%btv_m;;gc&9p*@8B@Ma(cGUl$Db{xY%5S~t{|-iR}x zLOZRgkBe~{FGBC2=B@c7J-upV(be{)sO?44A z2{z+MJIJagZou5vMr>;&VkJ&Mf1EZ>s?yEpG$SRLgyG|OPH2u8w#zBd@%l8pXy4PD zL{!D80HUtu&3WZjNbiaXR-dXH++C1`5K!W2zt>s1CMj3Xv~sK+_}0H>fip?IdvpW> zkroJ8?t*E?HhW-v2%>-rS0R0%4zlGw5`y~Pz(K$) z6rw_XiB*4K7q+N439#Mkq294%W~*zT&k{udC*xn5+y@p%)8>xgRiq&FN4N;bfO}8>)NzA#v6$WdcP~ zzg{q}ASg@@=*SZ$k<#?J?2|GkDK;Rg_O6!5di&-KTI;`?HZ1XN(gwhZo*<=Jx^9am zc<*h)E7v`)w@Rv~h3M}=91lxWc(!zCp;2_8ouNs4>TD0W`o)7q3nzA5={d+$ahs#s zoosVuWTZ20fUVoE>)vH0$aGlNQ=3nSvsx2ggPi>?KmDTTmlk_2t5h@_uH}PFbNNA? 
zw$z}{b=pp=Tq2yP7Y3EP!I~HJgG{hWl|Phg6u(gd?RXG%3WAW`t3zIRT7%E}=Ghjd zZd3c5w?;qS&!Z3XJnxtaF+|Z1ahftfDt8HrQV|-+EYf7fWDjzH4CnBr{1^t8SR$ z)JC~c1i=~k2F5keMi#&1rbOVzsuAhcPp18oRzvA%28TsPsBE#dSkg<#2_ofKhgf?3 z`qDI1n*)vt1V+&KVv)i32s_efbtbiNHOD4@cm0*!+YSOCMAzsQ(rD(kAsE{7{>>B{ zNE9FL5rLJfaplH^6I@$jMjiuO8)Wk7H2)d8VF*Cf zg4q*D)V2%*`qzA&faIcoiLW8M%mUWg?4fGy>D4KM)hOIbRI1R-f(P!eghiD@yTn=;MDmc&Sv(& z3hSVVMGT5#KBOUmX&4dE{oRZ`8i+Bd0r;N1C%8A}V$mE6R=BGPSajLux%)b~eLwV; zMar94=Acz<|Lv*8Z0xJunQ16E!uHTPW##hS+1g4v+ZE1XShnw*`|+b*rz1)9L3NFbT$i@<%4y;aJol~fq~AkzvDH#W)e_5kf5Juj|adM0ro0#B8-N8 zz<6(d7>A^m_0)6RyLdf3?~`;G zhZB$rP&Zo55zEkj!Xqb`KbD9HE0Y>B9&#UdxGW%{z`%d%|FMf^caZmKVqSQ`)<3&0 znZ0^0|37W|@Rj^W z2=t{PZ~`#J=>zRrlbDl+@#O{eY3r${S7@-a~ymKgLL=X29*ku@4sZheb%ZvY59$i^Q?Eb@R<1xjT+n(3=HtZGPcJ6&-d7o zCkH|Q8N$MjO{xS?0EO}wlhuELs1k7Ci6`jyH8zEiaMs1|g#19h;7RH*l_ji+JgX2{ z_>y)9&;8*jMJs6oY}>;V_u1;d-o$^=n-6H<+_nCqxPKOY(SQdJGXGdFsFB*IaaZ2F zQ*pC_+%Mi<9hqYPH|VKfyl^hzi?)tefhyE&f={*i2W=n)>G}&9T*~}EQKBGhYOL!rUhu22@r(uVt-7Ym{rJ+@8=h?E=|3D}Ifusf1 zlX2;q-e)9Z?A@TnsMx88Weu~SyO(9!ldl%g$gnXpqrKF47Xd4BGPJLkFElbwr)&S^ zM(h~KAm-{e?beWt4o^7KHc_ZqBbKW64^Id{Q`g~_G|PnQ60Ky(>qQWFQuK7WLB7So z_(Cfa1cut4EmahgmhK0^cWP`mctJ%+iw1J1*7RjAW*q;9=tY8w&DoxWd4%`fJwt^< z+k>O$z?3s-WRAbd5Qy5)TG6J7dog@_Vd>hr-iyDi(Cy^qbUlmQ*)i(t?^lmB@$lf3 z$z=BnN2ejCrG*FKDFr~7Pg)UY5P*EK)3>uf)PqK~_g^|F_mP`qb7CgV5}2-aXZbYV zU@y%H+|%Qi7SgGrT4gNXTecf=-@PZ}rKW~W(ea=H002Khb$Jy!?aT3A%UpsY34Jbd ztWBCtHec#Sj>FI>h{?$MQ_5JP{_?#4DMSkYPRQ>Q;gz*@3IA?f1qTs7DH7VIY=Nz> z^>v_tPO7)WLUQkVD30U-gp&N}^Xeh_>#)))7#b2%|8TnS;c-8Q&iwGFelP7ObN%HX z?*Gym_8eqJZVc_`Nj<$DH|y#2=C)NWEIGwg`!zV`X&E$IEwr=j^@Zbu1G7J{$26){ zeVH7Nv;*<~PhD3U)zr0xNfj1pgSL+@4Ol5>)rJEWQht-SpAen*kpOJjoHEsu-ITs;hemf1Q5M8^MR>0+ zX@Z|rn>-`N`l!7H5sAqBdQ`~*&!y`nkhtJx?A?XRF|D|G(fq~i@TdLv=?{PTSAWre zsukDwMBF@BdB_}W;s@3+?>QLp(_m^DSe-u7QR)Xr$L8L5u-=esBWx3c=V`LqLANdb zcLrnA`Rtmy0I$yA=6VlUF?0;Gnv$QfJS)xKbba@2xy;OMs7@3|+#)7z{ycZCkh+m_fP-j!R59MX2vI%9h3|vT4PWKE_HU$&OWl0NZPT+fXf8_I)3z^|& zJk#9X8m@EK`Touhy7;n?X;BkvO@rf5r3+`C`@ZC(U^00EQ7ED5L7DQKI{J04=Uz53 z?9M}UYiaDDGFrm~Namq&In_9(m4AOVIk0$kggDOr6 zVM8lua6EUYi`tIWEzr1uksD=hP&ClP1dP+&ic!@8)X;fN?ea7ef5eFs&71>}Sp6jk z#`H$$WZu{1w3Rp)xj=XC0=MQDpw#7@&WU5N7z^1eC+)y~eqe0_Jwb{2hY&AeOi-&$ z{1Qqg(_kn`27od%D%!$giauebH!YpxYb+Vm7QCa2q8y+!ii8#pmZugA1A&2yNl))z z0QP2bu+cR(B7466&IhZnmEUhAhgWlv*oT3%?UsdZ4QwhfilaViSjPLXcQiZ8&@H6? zF?`V=MpqxYkKd+(7X%iZor2lzh=vD^VJ$G6Th8hSIPTTam2012#*3evul$=BWKr}u z^C~=?3HY%IfG!CS?}Rsd5AdSYQSE>@BueaE|BAve7lqq7(QyxsqHGy6=dLPWEKjAi zM_G$7C-vIC8LaNvo0&rE5iT)56DZZjaFQ42`a0FxsF_33xNjj1D8^XrSyvy%1{0KZr<1U z&)!InWK^*f+BmzK$-}ElPh3k#IF2)EyS_5o2oFpB(eQ2T070p`I?h<*bpoNUEd13N z*JNn-V_uc_GyiWhAQdX)S0r_ElaK5CAXA?8v+dTLBMu_9%>71X{rGq8H>XY@G0c=f zfr9x4&1+jDAtU3ZsPCu|a}#V9xA!!Eg70svYQ38ItMYiEM8$%xUKyVX#?y0XS67Kb z8zAscU)GWISjG~A!^gKJpP0}{;&!s|j9)U5T$u9IWgm9{A@Ej&&)1GEo1wpV%aOFfj0ZZy6cmqRE?>?Q z=d0F_^G^tkzP82hAN#jqyhx>K$WOb&FSKKMEv)`eHIx;jRc*dGZRcA3Y^p*%U`;T3 z()YX$#T)p?EGAmVPNM41T^k7a!xQB-kDRW8D5WQjBAfpnjhGYjXQ{ZuYc>f6{!W_qwV@9Z4nOHY~(6pEipeB{r^!D2`%! 
z;r6OxczR`~DY?1!?Q_jO?U BpM?Mb literal 0 HcmV?d00001 diff --git a/site2/doc-guides/assets/syntax-12.png b/site2/doc-guides/assets/syntax-12.png new file mode 100644 index 0000000000000000000000000000000000000000..d50bfb007167ea50717956193b3c83ac647f622a GIT binary patch literal 44427 zcmYJb1yr0pv^6}qySuwvafjm0V8tnJgS!!0ARYGfgBe4v!p@DD)Tb}b5fNO15`~A9(`tl zOtoap6chmTpJ`YC6c{=H;$M}|HvkMD0QFxQ0FVVE_89rP8Zw!$G_W$>o1O9)t!E|#V{$HB>Uo*5d48zX|4kWGZ1OQ-D{R?0+s+5-i zfCxZFLR8%y>{JiN5NlvI`xa6f7EH`PXhX7$lomD|N?qM%$;)2r_d``SE%vD&zpG?- z2$m?+QAg7h#8F45y{}W#!}&Uc2i-*>2!W-^v~hg>&Cb=;mAk(u%EsKOd*6Q?EFA)D zPXxAY%74KS$L{}%pkZvfxwMSDY><}-Yj{z$keB=kwBA<=S3h%xPTgDnNX^g;HqrlD ze}MrX1tS1Kj2g5#|I67R;}XJhd%JA?Mxs3SKXOg{0)uRXT$3x3dEYtcZyjisxgp%< z_^)PJcR<<}(AM9j0nkJpSp{xyqxwG`%n=}`I8?W8o#1SxF_dw4G!Yp8H?RP{9e*s0 zH>fRHutgn^?LAA3>7p6`S?#}3S`ZRm;^3CAd^1cHHhqCyTY>IhVXiKd|9xcp`6*95 zui*fxpU8|s)DuK2G1%e?tsevD?EF`}{cD@9|CxqWIz$$zjGDA!th7hAz`Ja9OeZH(gFbWcPU5@>Pgn-gzdRC&4|WZAcTZ zukni)t{ODa4VNpleL6)%jsN7f@4w?ubE#(eu3xSe!i(}EzDB9*#{i?J;6Iby{ywE% z*zu+E-(VusA2|E{5GGx zcIWniK?QZz^kPJY-z2Nic9nUInG(=L86a_)`*(e@@JZZe7+=BuukPm)XMphz$ZO)X zzgnn8t3qZvC~cMPF<`xwFpsPY0S>hflsmsz{g1qlK>!iUK^zWHnY&X)6Q9TlN+y?I zBBky;sWsVtO=3}*k&|}k>w=@t^#6*27+D?WaLUWNvn)i=VM-^21;6OdptlmaJM|{z z%_4#8v^4j9%6-IqX!%o-rrdeA5OBP~az=_R1cHHSU2Ecaz~SBGpmO~Eax@KmEDg=S zTrZWT+Ori#3_hlN9(hf>|2^s@!b_O73eJU^RET}fJxxtXbEIxDUq=@wEbZJfm%i+~~s?=1+L_^yy^#O|>9UIOk@|v|2sh$9<9SE|{+UE$CdDbEP3O@{>(a)KhQuciR z?NzQX!nD8C;U=u$aYl5!+6JTNyUp96<@m-h0-vz}Qqa-0?FQzQRarp3?$?Bu%Rr#a zx7YbTXZpU|G6x3-4IYN)GEEpEO^N3S_C46p*Y%^v0us;2`2@opy=CjW5U-6mMh zk$3X0d%fI+^iQX}M0Wx&1zDIw{sL7F^q5TMbsZew5|fhVJ-$u(3%=aX?oM{d(w)U0 zcJkX*IHb34CjncbGQ-Oi3-z1K>%ZjG_6kmWg_lg7{u;9h@g^ z7pIB1^<`{VDOHVB*gEddha%Gyf`cU!dQ^R?iS-JyW9*!+uS?WI|KbatMy6SY8XjEw zs;lFA+0UmyWL=J^WmJdL=Rd$VzhjocH|fVNeipEwanisfF&j_??2nsR{=FodD{^6w zm9dk;nB+?&wX|7jAD3B!^*Q-sn|oOU!e@JxK3XTad5^_8DKQEp(6j#){!2SfsrYt` zF?Fl5A`nj=220Hpd4~2Y>@sglVyrA5b+XEr6;);Bzna97kfk_J+K*07GhMfcBg1za z$PsI-J!iv-iOB?7Kb+?R4Yaj+({WG}o%Ef&d1p0;JbD?dGq`j!J5I6%!LXdQ_2`ZpQf9 zcE^-?l;v}zK+mukavK{Pd)p7;b{#sbXXbgn$8GuX9+(%W+_wpH-X4pt)8Cf(>nDov zOOi%~{?LVOu)i6tut6q&(UgLw!A|T3;nURths9C&Jm%m7LRnRjf^o#(U`O!sJCPUJ0opfDI*1!`uAQq6z#PpVyTiH}o!%bX? 
zgv}W?v1SewLxt(Ro%pTBA{C(W(r8=47Qpn3gM}pt2Y2;q&8#Xw{D>_e3a!VLmH+1t ztW}tERG@wv8NYO0{JC$0s@p}bJMjRa%U?tMr3=>cxWM-(`_ylXhI==E5oSw#+P-i) zUjiI2rEoC9?fouG3C?_?1(o>S&agHjaa|#a7Rvcg^nV>~7aJIZ91|nRpoHJo80rnL z*y-u$3IpC=W>mysR8BOQj=HEAB+O4V0_b)|FMM-8lB#md#?xj3-PTXR8zW)uoz1a@QMDvtwdYgN%0M;J0Ly0-#+NI^UvF`%bCoTFf#+e{6JD<8sJY zMW8?xhi{4ra_f`u>nQ>BY9eMI$%xlQvuIT~?Ac1A56!@hp!e;@H&bSC%h|4H#VpIU z$)WQ$g;>oa&tN;U-z;Z(B}7MVTayQGsmlJhwCsqho9Yt#>eR-Q`qmQgF^!|Z*CiZ!5ed)%;TsNtSzd#n%n_yP4@UhCmRa`O7F}0kAdkhk!{PlwYzJLx7V4&Qq~Yf9*3oMH zY%gD8gk>fbzfoU%&S=n}F#^4XWC}<0tX|$H1$E@}a_cMuZaBbguV`UyW@Z)%SS;Mq zL8M(%-LhiBKicQCUXry(dO>QY&$mMD)e!Do;7Oe9e4Y#_wfF8GqdMNK=FW5oSDQJU zZKGLte4wbe*|U;e_uzDqw!HAmy-T^y{z$rxd7uHyBmd%MjgBF%)b-d}kIOVu~7Kl-|=rK$$IBnPc)bPi#I78FXglZjxl(B; ztGjmcRv2drD;i}gO_#DqFupY$PqH+6ggsdGl2jX=LCPZ)tXi_f{JD(cdOX-OXjQd(V_Wmoxkv_svc{lL&;En!jE* zo2^#XSE!r(shP#NUWHy{6gd#dXGN>v^GufJeZw_V$e|<{TO{BhjI_IK%V_U&AKPX* z68|vEfugie2D+mheeNws9v(JcNn~y;O?xu1t!E06g38S`G7j@MoqliDc)c2gv3)BV ze~&>>(6!-c$#NKhn`<=%7wvl8pfPCqt7Ml=YpY%F+#)AXu%oLHDLV5dy@boU>?i?& zXZ>`+p>3?&Tyd5WNC=tfY|G1`PuJDDph4sdR#i6V4fJ&qA9FD4N*%n{oCu&?xCp)) zz3IykmYh-J^?h;@>NG}Q%>RaiwQf z!G^^#r2W>U@7E&zXH>J&-Cd)7Bfj=qV<=b^Fgxie$R_};YSOF6a%$%sg9NI zMks6KU)a>_u!Iq6sP3Vg$4yMG=xdmH((emf(o##=%bd7&la{MT#4^I0^syqV%hP3h zaDG>F0$cLfxa9|yH9vSj90dNG504*L%GLho9*^3`b8OnD%xh}KRA+msyI*w4u~l#h znG|~a{sn7QW>w8?Ru4sRyxp1s(C+4kUwCu5(Rzh)s^R5&A;z#+QS3$H6V=MT$)??G zJ*mIabOE5lU^GXn>fEou=5gyCzCD*Rs2yIkIi3&`VOR)gvReKDjdCbAs#%o}lbs>9 z&gWK8kwLgJxj^zrs_PSGwoxQ|G8EstIi4l{E4i{ruk9gB5${VoDt+Ubbm9}@r$sWA zOTu;uYW%A`(b%uH?8+m0d!CZec6Iyy5ze+r-6{$`Gk z*8;;Gy5pKx7*X#Khe66=8zN-FhS|IngWaoYx%Ft)i0x>_@N_~TS@ z2(lgWy!lqu7J;M8iH0T0Ck%6NC1rWI)n}?AsTR*^GvT%jWgeEWLH(P{uY^IXLq#k_ zprTamF}S>!3#)jfF$Cysx`_H-+v+VMh*huy;(AeO4p-B}DmJ6frLbC%pM6nZNLf)# zegi!aT1>rc|A?N7-=?~>EmL1zs~bhYhA|XJc(b;(Pl!H%E}ywv3-Ztcw>cxx$P(>> z7i<}$(t;~PgKida`!&pKzkpq)C2*p3rJ;v_3RHBd@@VHpKZfZBuS-z>RsIxOTWPt^ zGIN;FI$xW&m)J&hlHI({BuctR|+1T|=*gYPansA2C0t4>iSI*ai3+u!$fA(zc z8e-nT+`TXx4@C%JSXNGh-w#Z(8`m^V{Luoi-?ED0ei`?${a;w(7y)LC$MimZ^?+$Z zUM;mt$KSdN*Y~wzr!KXq_UqEoKL}!pumpv$W+y4m@zbF71Fq^ab>FFn6@PUPze!k( zs{0Rm5Ycn$#?bfpwDI_@kw8F-DG?`%abHOjgMau)4X<)GkKrI4L&&@tzuqZmJh4pn z-)bcL;L=cr=lyBk_w0%|+{-#9SeD1;?W|mtHHLI_*j)q=~|?iB8^0?C}9FVeOD&?M`O1v8A&A05{R zjhA|*dia_;P#BM*ZWPjIM2@AaW_0eaM*fNV2T|pSIsqQLN0cEzB$lwvs$it_VL6g2GQ49NLptZH3%MBYA`TjmyEfdE7rcHS=q(uD)Y> z66WSlZa_6$^=bR3;m-_{fu73ke}`?5D5OaMvW73)z)#Ielq>Z@QI-=rDfJp^9K;zS z@Tu9E-h@cytv5XovKkQ99%Z$uI|PtVK>0rurt>jrG`*qVTqZy2 zY*kt8rd1pHw9-d8isl+DblkmeKXTRWR~lI<5W)VhYx_TR2q^|)12F$hp!JYpeB_4? zH$t7pmOXB|l8xfJ)Y(dXnVX^Yxp;gE?eR2ks_U830a2w@r%FhRO*sh+_)d+0jbGD8 zx9GK7wwg6B5}U4$MvvEa5}pDJGu=U?Oh$1NK3wC_z6ivQzX7@(>vLeF8LQ}gk?Gr} zYmJOEWy4Qv+0u3c^XRC6WueQauJ=L`zbCb*=D{NTr8P=&@`wo`+YG{Tq6jZla~Fx= zS%lgB+0HkQ+ZmIgQE0|V#c$nSC0}O>7@}ilA&)C;T?f9vSak<@Qf&9)=h~hG;X6|` zYv=0w2O+Rl*d|XDr}ShQ0-sC*T( z?4QD<{c~6y%pW}22YqeP+ci|W;f4_%`7}+(;&w>~hh?`Pt-&|XR@0L~a;2e@qMA20 zvLTu{sgqT9qTFmH0Q=epv{#3pTJCu*&*i}_1rB{#%6>QcP`2q7>~S_`xsDWimQK4_ z6vULFDs%?Z7eXG9?RywD%#EU&a=rq-E4K<)7NRb0-P;n_Ib62 zOtaiHj$TCk7IZhBf0I}$o6}6LPpNi)+uf^(XUX-lClMv_uFYt?7(*v=nN5K`UgVD* z%wCEN*A9Ifd663sx~=*aMNPlpyy72>(xofuh4Ob+kQ;ecNX&b1eX(wFcCZr9`F>c? 
z@~G^s-c}Pk8(&qH_9jEkOZ9=-J0~9txV&~f?}mLtmbs^{sw29nClG_;UXRlYdHhte zPe4$zuU@&JPBiTI`KadA7P!~FS>o>wJK6?|9lfn4S-2V< zSO<%lK(0!w4p1R8-JT@zt7?usA&xOK0gVtTAu;iw_$g;ETp7@f>^fIDj(4Pwdfw#> zz})t5a4p3hajjk0{ViXW4}yhKSUyRT9rZ#uWs=}KKdrFRs#aUos_y#pbn&{EQ%g40 z?I8ch^mPNwqeU8SxY7Du&zP7_trbb6aA&!6@v58Y?kF>`^d-ovG5+RaheVk1N3MI+ zL-F49xlaqH(*(7Q{bCWGFaueA@~**$1)iY80UIX9*>lwvui|=qAAujo)^&BvJomV^ z=J{}`X+_q|YgBnCiC5REK0;8{T6HjZAh3V%t&sCJj7Xh$C#PTuv~v61ys1q1ojcU- z)uyYxp!hDj@JN%oB|`cN5SdvUzi-Qs1G?|>)8ds5u1W&TgpE0kgBtMWQ^_D=0r8YQ7Ha zQ12G6Q%%DDa=}uh%>X(sp~fqN*-g!!(Qd8GVfX#%r;{nA&3It|;~ zS4JKbzO-|=I6p*+Vh(s)n-DVfnJtFHlRwG1o3cD<2l3xWEjS7hq!Q8{?Zq8xf9*OEYVZS z^8_0_Vo@j;ial4PkU@wQI3K&tXMMlZ=Dri#wLc*cm&_EX!(G^~@w#j69>le-T3uGn zr&*u;n>dbdLq$orfZJ@*kZO$;jGstEdt{)``VC}qSi~zt4}nr4o~66%`-6Cr&OFh) zRY+RVV)jl`SDIVNHwXn=Kss9ZgZ>jvBB<2w3PUE`8m1H)L!nt6tg+N_A=*fM{#zjU z*vD;h{^M+>-b6c%N`sxp@{C@WMgN{K`^ZAk(m-%Zi#Bzyqhe)MQD9_W$2c@{^zfxSkX`0>u&>FDhK}~ zQOgbV0)%ymV#N@v`JMK+M2sFzp8CiSFUMDGYfJeRT&ito_-16!mpBQ&(`oO83*eN0 z6r1z2$XSNxje_@M{ets5L7OCu>+o*}%uP)6(peX)fq1yUox{E^_OQp%_W>lCxRNXc znnIjx0nD`St>Cu`8}Sf=z%u7A~YN1_!-3wbphMGI(8$lXzL6O{qle?nt5=D^aZ*BkeG~VZf^N z1WvS=7M<|kHFQCdahT@H8c)@(;aGMOn(fB`Z2DRKfgtg^gSJ^8Q9-_vls4LD1&uZl z^a7hRPy)p#BY#PTCr3FB?gm$HljDqHfeEsW<`Bt;qj+u^Lo&N_(6~up#gLS;Q7#7YMF<6L>Jy^(-^q z_m_I4@5B|R0oLBI{Lcx&Qkm;^IEEg7p+)iiJDYs~Ay;O1`4wx+0xcm*cC=tC1rdK9 zM&Wh;#4mf6y!avKLX7A$Sp)B;5a0A{S)~mZ#0J?pYJZ6jj6YEjw5qsx?t$B|GU=Q&2=suI;5hh2I$g{qXMTV^}NW!z#}aIh4fL}gm} z$P3H(zKZ>INvUdM@ijGlOvM6?NOst25L#0Nc?C|uE@91^Hb$*-a!NxF#g?q&xostW za2Yj-i3Pj5?}xNr(2w{l@UCV}TJpux8M68-K=f9of znP;vWgZjy%FML;|XS=>hoiv%LVy{x$Zw(+#Iho}CSbk!hxSMA)=a}OWlGZ)+fxi9f z?zss``c?sXkrY)YqM;^Iop~ZkgrsnM7i=f=q_Y~v)~@2NC>o1-;#mg@om1Kse7rz7 zjoExIKfA}iw!}257=oSCY_wIi)>de=o7MQd``T&kJ*=1=oN-AuAiY~~^*e4C*x!?F zM73C83{Si62cn^zaMb-;(hVzI<+Im4;|9!77I+vSFTk6GxaFMdeN)B@JjpZlg0x`~ zZW122+JR(LXXS6l7@#tJ_1^Mv;L|$s#}X)G{VB~r|#0)``&^}4`gv!A!K1w%S8G^Dw*srQ0>4Tl0X zS1t$hR+O*30%5%Ud#F~~{4V1lbV@g=IPRf#E;TV;siUyv#zZT6 z^#}L&&{>V2zzs8}mvP^}V;k>6*_KJv)8p}>m*0VJS|fPjxr`>tP=QQY=`plfV@6Y@ zHPVjz@0=3CSqG>ZINiV+%8OYxTiR9PgWvFA<=r}U9)u_m5)Iw*j>9DIqi_qu#_ZKU zL^WEJwRE=*f|GOi=gt3|1VI9W93)p6T;_dndT>av+!-|_kI$|XH@(`35`a^Y!l$vu zZH$w|O%!vK%B4ImlG+=u`<^2Cgvpu%CiK7woz^r__5iRwL$fa^6u&(3U;%)-az9-S%%-JIR?`zXmgE`uC+nO#S2MkbH$$d zi2*Uf-i@3rq@tn7LONcL6NmAdcZEI;%SUCm<2{YPmr%fj9|w+RDpR?Ulax!<p;!kDBQ`wzZM=0EZP$Y#NK%jBaoSmj)bOebz^Moe^}twGCXd7;7!vbb zJ6eMe)#v;M&Dc6c!)uT^@Sh&Q7zEQC_8*p>_tcDEg!nA`DzQ0-IOEf?g?#NxYF z=wa0G+?P-CTgeXN%`|f6-TS$N@a9T=$g`hok{4_?MD^5Gj^nCQe~n3?K07P&{Fber zz|4knm+xP!Z7xus_{PA*l$({Q_GG~@!8J4vH0S64gThy_P zUMV`2Ea2I_7%WZfAb+?-Je*at=t1e)Yg!K`42pRk5(}i)Gm{t?uDQ-vUr6L#wLD1V z<^H+*=pST?{PvDY;STOvAgE+S`pQ{~bydaHe7nk0BY|As;_p%l!(QtV>C?H?b$_%V z1AlXsA>DdEq{$%vUGyeT@HkX<{5^-Ox2lAhS2_QpJUewBq4HrCozt_rL!A09H#Aum zBwoFc+}iV}jDD8l7~>Ec3Dgl#xfAICvo!l#kGw&Vb{|W+9cn%6P5lo?HL^TrN=74f z7?{dVeHZ-797RAF4S>rm5SUg9JE@Nwltw6zpT7*~(2g&fDjFW^wk$AzHvO!`?!$iZu#+=zQD@Kb^qe&Iu_ooYOf28>*eApQ~iB@DffPfYZvE2riU zN554NMD`)FfKC=rO-;k8KL_=rA%_cI&9rraZh&m$4=Eei3kGJlOQZftg>ePP#t}zg z+Hy310cSL^mKo;OSYTvhq{VoK?8gePiQtPN^3G8OyKtQsbNxm?Uy0?6d-8%vu{D1> zL?_SIN#Kaa)?ZdR|v=6p;^eE?pI^WXhRan%3gTcA(*7}rkKx{|DU>cqLW`O4p$Y%W2_JBkNtu(K4nT9`= zc9F6!<0g6jIBPi6yT^YUhs&`n=r|*LurW-;Q7#m%n@<0#AV#$~2rkVOBF^J}vR1ZG zcm1(yGVD_)U>7x&UNSB7`doe{-`T_G;n@zv<1n0|2!mRy zM7X8>%UyI1evJ-{8Jho4B}w5D(<-l%0K~$Xm?nLCD)KS4Nzh5{k&<1y=VqUwDiAk%p>I3q99n= z)gH)6w0-+VBEn7UZ+*tod;GPz$U}pS$MUX?Vr@{*X`JanF7^_Li%70GH8O=hz;Z-p zY>=+0d0_9kuO?ovL-~Fmaw6nzT z4!J>Nzj@b_K)_~-zZuT+5FN_oHt0*AwUEk7soSpUe-L^K|6;Q==9RraI(=ySI2U>| 
zAr$#AtblwhZfvK?`l|KiP3VD03PB;uIa4d8{iVnJbvi5$VM4IaJuGlKX;3rtXzwcD zDO{&p8A|@?(w82<*_AA2%)ZB7TMKi$VqG>QyZvl^hCeVM4PE(FZ-;A+kXX$2E`-tw+|4V@@KSQJJVNu<(ec1!N%M1I!dPHQ{Tm)jZLC{3h!eevQUU;WZn(nx6fNL z16l{FxI^WA_m_rcM8C4+sNrfAprTe{K((sR?yT4DF& z-h0s0VCCxL=YE1ObPHR0&_y-6H%`n~Vqkb|OIZl+Pf7^O>X2P;ve?(aWAF3#fG48K z4)NdsPO8kyvkEQL9DS0n(GwAAw{EXl;3vELR|@xPGuXQ0AviIH`*l^{W}7UQ$+q?! zfE*_*)wRLv^d@wLNw;sOD#=JJXF;8~$n`J@G9Lz7eqvD}T*v#6%wEL&vD8xLh79nf zGpD1$Y@eqscU{6d8Ot5Noz+3!eGD%~YI<9gfq`LH5GA}cp{Y7v&?FW0WtjYolm%^2 zh3b!5?`Y63Sy>5Vm&QIngC^uNi{1NmUByyq1AV8om5AgV7h`k?U2MUJ_#q2)MU(N(r>k_wPB{XF+h-3yjUy=Pd9SbM*PwWlhn$1lM zm%gzE-4~fEZ+7~Wm<)Yi8GoU?P@2r^41LHA(qOWj^d-b1v6JRTsz?X3wX+k-HXgqa zA)Mrr_$r}#d@fjTzX|q_t#uYBkb&Mise_Dw>UCeF@}+u2-eNjqrAZ-=T0s}63+2rU z*hkwjPMYGcNnF;;YMFea z{#P<};Z7gw01Rq{ep`U+#rlwv2&{eUMlABld>`OD1SuE*)*Z;Q#o^Dr{sb-y`_#^- z3^@d43*qS2?E>?S28lytH?9Y2I~lDqC3OBf*0mUT5NJ2Z<$CCkm3xkt?H~%qmp`SY zt2MSb5T>P!NhOxruDu`(1PM4}jqf+|84*NrzM5Y9+u zaMIG!r8uadJc#LfH{qf?f~KkYWcdPLA1?ylM5%3%t7}p{STEX0uhNrf`Oe%tq4^y} z1>s@h?)8s4uoety;QYB^xhkY(aCEpWAH*FGQ_>AbvFykvnm+LcUcb(N?HVOe3O;qW z@2HKlG}i~2$n9xx#fldQi3DiM{C5vAf8gIfD$w*htI0GI^z5$l`;*cHPF6~P3IGka zP5NcvhTtMii4=D$30cu4i5 z`pQ^1mXdD1&zfz(3N%{QpYH52Zni0{9*uf=0#!j-sYs2$?Gy!Tr|!>_Z-O%2?Zs^5 z7kjzF_s|DZ#4RH5r;edsDzwSRH=L;k@KD?(>sT{Z$q3Sr#0@8mAJ`{)gow%*-H=e3 zW$q+kh73Y5wleEO1PL!Fjb+ULv0p{r_C%T{h)SGf(WCXlP|){y zadU#(U(@Q#!Ahf$4nqpIqeCk2lPun5O(Hm%usygqRg8-JD~!S2kl44Z7^mK?GOuB5 zMB#|RTw|Mlx)kG1xY-nEJ`^T?|7iGkKg9+jO`3PE#7G;ZRs}D4ov|iQ#&8CMtZF{O zbM8S2O;Kug9E5MJDh)t@LIdr6EA$JkN5(U>KFVf|!-MAn&pg) z{f~$&gj{cj#vs+1yfq~s0tTxC&jOOvdNDa}qf-T=&!b$4TuNa1xj_nEb=9bk3$cxYzGXF2dF)@8&lh(LSN5SXj6Ezpn{b%M zWXs90Z2OOJF0%Ub!eRW)e9N!W?A~^TskWq~qc!k$=zH`l9B6H6iNRIzh_}{$e@L+He4fUdd*3IyT z?6d$X#5%S?NGz5!%&DnwV^2mZx6dC!`&cfBZqGm9=1CY2-AzU?=y5Ntl>s;%0FmOyQhq8%+>COu)w9aoY zB4$v(CRQhbf14x$=)c{m4ri10Mlc0(Ztk0JMsr_dq@JC}4lJ;Hh8> z&d-H_#+u1c96dsWnZzlO;gAcJ_en;Oyb*9dn?A1DI|KbebZ@7pT z0{P4K{HK0|e1zQ$lehuGwe8t&=+m%ojrW4pz6HY&vjuh|<(ga7J$5#RM*#bNYiy2~B4K_*UH0`~J|d&?KA| zQ&OFT48!2WtqJ8O5mAI+g~oUJJg8HQb77rXrTtGE&F~{;*?=Jk^0bHx zc;WKLCs-DPDWjV;mK;8UCe%Ln1B>-$>(EZB9ruBn$$Bn=k^E8O%>6WlDSdl@n7!QI$OedP|Y_#xXu6b6v*vW!DX3mgIo`u0s6wy&4PIf*; zDDt-Qo2$0Ac0yK`k#~vjviq(x_g^Tr~nM7oN~JhH4HAR5-`Od50m}gCrI;vPH};srUUZUnK8)$s`iIbC3bE&3 z2ce#NxJX_*Og-=Ir-V_O4BwY^d7V}ps7#0Zr`sKb$r+D>46Mh&Ib)=UO(z_xdK=MX zeP<)4Br6TJy@P^E2=7eW)1J|ep(adGIa)zmn**a34B6SPI}Q}3K(QaCG2Nbl970m&!A{<$ zUDIw@p+dC=xZk71+s`?}a1l}3Uv|pvX_KREh+W2DacY2EDEyAFk-V;)SlHMzn+Vut z7A@~@7h#j6R+O1>3W|LVR&hG=7Q=ue&vgL~vxz_d>65>(# z3MNC@GWK0J#EC}9UanMBRL(DdUo2*b;`Oe}yT_Z>B{5i9BW5_*)zu3Z$GL?^R(`V$lPx3P|+`cE{!Fo$hbd^5*T`MYQUR&h5_tQHKa4Ir)=h2Mx#5043<54 zrU{9e5Z5^^e6b)b6R{NQouLD%AyqD2fPDl@8YtQF2jz~?O|KlHr>XOJVzO5UTD9Vv z@G@>-bQ-q>vCpgJ^VaUsE7ZXL@XFUXulw`-LGy#ZQzRAQP&q<3 zubA{Zg1p=8xIEc=t-_CgInq9cCNc5wWYW3qb4?jEW)v0;pee{!ELl? 
zTl%gd-yY(*SMY00%>7ID?TNkLl#kn{sHUDnsi^g*8`oR{gnL^(QSVaz2>?*@1LGMO zhzz+z)7*pey`Hb%vY&i!XFUK5w3j^` ztKieE6Z4F*yH<*HCr*ZN>_U|;WBteL9YKTTY{>oTGQ|wbNT&CJtvY@8EBKF)^p{0% z_+9p@mR8rsY=ndHR0{H5%e^>1%Vj^m*`zy$63yx?=dEthwD;v?RTh!^+q%ho!ld_$ zwim@BZMw7f+uaWpXJ^cg*A;@VFP|=I0V`YnjU2ei{r)*{<`HeL6Q%22ThV{bxfvri z#fjwCkN4Vh5KKk?Fp{=RLy?j5_*;wV0O&HvuB}DEwAN2PV-%hbnl zHpft9?q+_md>*t>IIfXNTq)N*M>I}*YjXFyjS z6`UX(XLe`L+JZm&($Vo|xhs6b(~ocb`3Pm?uL>SxSO&91qtn>vXtBo!b6!*G`NIxGeQQr|95QLd7A=kG#+Tl-v!W`x+O?AZ0a*^1zO*# zyb8OdyD#mIX0T?G2M|Z{r6M_TOkQk})uSO^&+XS6l(@)*+^E0*Bdjz zvnFE8YNLYV=xn$s=G&i{00hXry}h||a}@5C!xRe@tiO8)1ePXI1WYJX)m@W`%@0D4 zTDM)%HTunsgiG}1LaFf1StP~l#okGnTX}i;mczAs^~Up7PfP|4vd_qu)g@EZukr&+ z`k#i@4j`*_S$J%FcrB0_09VTGazK2%(Sot=wTk3CdA3+bUzG?8BJqBvLVz+-B{}ST zdpAcz3_%pg$nv#dpEXeWt&pzjpd!<-qCP9%1EqV`vG;|5c)eU9(b5mBv6VxnJB>cz zCbgEncBM1RZKH*}l5V}Q_&T{-OY*{xGgG5VpJJGfC=QC2+h*zS*G>wTV(AcD^i^ET zsZ{3r65JTrZwB&9RFw6s7GPsji@sO$i|7vvZ5Jw`3E%O@H86Xe$9IkQ`W)u>l4sTo z{G6ZO_%FDh52_6;ug~ItEK{~seVqU+xqsiC#?be&t(89kaX^nVu<$0wp?^~$&}h+f zx$^KX@Z>#{lUod~M{V2~F34=VN6%Vyn&3(NH0f0w+rBuD@8PJbgf(8XFP$PVZ>4$C zAbz(9{?4FUK{bx0tw3v?^P5hb$;-a^o<97$;4#gtaPjL8@aRIN);H$Xk6bCFx0A>C zAvoGz$L?d_w3}bjB~pm4nL&Y6BFNtuQ`U_eal6^qgepQE>i%53dc91KouS zVEueTug&t@;O408#2&j_aFy$weVp~Qb8(0V?&y5vZ4I14U%(xs=O2O(>Z_R7tkn!q>#DVF z#1TN#{{3VlUWW!l*`DwngN@B2J;qz{(Hn&rN-qA_R&q{vPe1S=Y&xZSH7f z1-;l}0^|8NzzeQ?FQM(eY}AsZn%z22IEP*+j+*!@6;82Bm5R1zfK`-4-9lbf*99A&mBbaYe5A^73a97mJW1%g zQEtB^PKG9(hovQ+TpxlYm%~J=xwSV z&z`BBj25ZG@vzC>L6U{kC;eFgnO1Ad4QjG(maKL{u)#>dSo3d8GmiPkmFsxv5Q@mKgJ~cv!!iW{UJA0VFbLg4{eH>1 zv}&*9sQSds^aBHr^ucWB3dX3%r|2$^{-G)J?{geb+p660X5kE-e6aYLWP^^d16l@8 z_#!&9MGD`SL9&ju29*k?b?haSRn52Pp1bqOuOlWu4IwB0@4o;rmI*Bo32CUQ)YYNt{x%oz0^gXOn=lZWR zTy+Ec;KaTS*iC57?@lliD_Hv~BnSG9_)UD6T|X=s;Sewfn;*GGG-bU>oZg8aY5E$i z6@lg9QUrt+_y^T7{~vK@*%VjTv~38%J-CM?xCeIv!QI{6b#MlEyr?H^Sah~6;uj}lG&OUAO2?yFvZ1H z<8rQcGS|gEaCF&f7E1I3t5L{cD^PPB15En`C2W*{$;4hKhYd?uDuibL1VB`(-8}IN z7fFVSdIE0?hu+sg^_4A0XFpkqD~jv>e?2J(bcZtnaMe7@G$nABop4ttLiC7R*jz?V zNO7`?O^g4paxo6=S5|`Qmv(lA7eMS1NhoEpyaKF;MZEK0hw;tfyj#n-e^M<(719rvH-;Nx@NQr8m3>Dc<>ctgkd zL);LW+dbS-6v)VZbMuroq|TtD*kc%z(9t2c-b-oA_ps~qq@^f@LxFG`sTg8N)D3Gx zfeGelBTp$oIBdMASwt0Cy4#%>F_bl?DX?TkGK`F#qLig+AUG|@jHT;SoW}k}Q26ey z#1zzA1JeIm)qx&j7Xx{ky2uiEVuh#D<>T}XU?ktcf=Z^!zIa;+rHp9HKBDm7DQG-> zqGz__%!C?=`il&Q*#BbVr9Zx3mGr79cDdmt+N}#0@-{0T$DFGqwFOEw+2_sh^N#dx zg)#9(!rmmjdnM|3zsx)CQWB7M*R_5-wvPom%GL)hU2q2ZzQKE4jL5r&DH)=`?rJHh zG-qKi*CK!NDE@U4M><_g{P6@I*Q_hD<430_G5W0N?6rd7f4H1x)SKRR;gKoeGrto| zRD7Bi+*{r4{V70eJVn3SnIPqQUU1V{ssj8T4W49 zs1swB=#S>~sfi#4y zF6J5Y%X~nUo`Zno8{*FLdEI+QSg@}d)%Tdj=L$H-E49R21})|KpCQV3Ilb82PZAL) zK3u{7*rrwSE84CO^G&OI9v1yzCU_AdDP+PtoWcNhm^ljlSQJBuwaE=oVv96eo@VKf z8&)TwWd}EUdOnh+Te-g5g10oj+upQVwPE)~6=eH9xL{{z4Jxpw8YyF>6Is9M(eJ?h zE7+jFh;*3i)m2VRztEyte2_4UGnQkyZbN&|hGt_|k#hca>=zIGiNXKlf zfCX`Ws)I8hBkxFl!dwZkamZC+X0|EM8vZO9xx&N-YWU6@ziOd#`CSub^}U4Q0gze} z*NLJH6BXGrT*T!#5-+3*ch0p-gOZ`}81(-JN8%y1+;Ed)_hCtshz+UK5r)jb85Yavn_q@^c#ab zG((xSL}T+tBe~svUOML7A$6IvUM9iJjb1DOoP=>9ihxF(MES2oCjlQyoooG-UfbMv zuTqm)Pw?wD(Z~0s-$K09WuF)k1Y9Y38iX-t3+X?2YBo|-ahs4);Qdgmig~{lWvwu> zQ~RA#9yu_{IK_EoR}iex5X#6b4+21}yVBriPabq*w{~eu_v*@5`Z=*njdhIWMo!WD zBhhKyc2+2tyep6#HdqK_qp8zR!B#~%-E?kic4sCkVhn{=etm+B)3&b01-{&rqpCQs z(R!&&c=hi(g$Upa$_9B&{8X$s?Q&h#RerSqixhHnRIb_XL{1CqWTAz?I#=AcHpOG| z?OyzUQ~@oHxhLFf5haOXkm_VZ110|gG`{GZwYrJ~tg~;S_eMs$dp{xVaz{9r)gu8# z!ndtJegC1}1Sx@zrM(YLoE{(wxEwcF$^5|!D7>0jcaNt^?ZeVi)QJ3ww4XU5yt~7W zRZ6SA!1X&Q0R1iU=(6pz<}QZf4T;oJ#l%#J8wh3of$;@2+GJa_Q>k6*e#=v4AmWZ> zt}xKq|6Ac5+rP}AB(ET@uKOdbuG>9Uc-e#@PZ~X9>vh3pE`08;W%s!YP$K2X!Qhay 
z5$Jchip|$^zm8U2IyH3led)OD!O-~t*$-VrO9Kv@Aw!xw+ei4>`vh7~t`>g&KAL7)Xi|;fP;*Owfz&To(0utDj8OdCK@mYb6kWk@-u!P0mY@o) z-fmM+w+p5(-(rMaX9nOD%0{LleC3N35?Pz+^VH1{ZmjJ{lkCmtt7C{vn3(I4)HqWM zC@qRN^T$&@0b#Xmo)5x0YIhs$2@DzELwTI1FgB>LvlfN)#`H^x!;t6_?BMqT{n{32 zI;lH^+yq3znE%{Oj9@(Jo3-l`lOJ5m+Y37>=#mu%=|06_kq<2+VcfU}61f?R1A6&_ z0Nu4zSu(OiS#q+mWM0EkU0G0geT(R$zKfIv*9pe$QattZu1lO4Zi`I z1%?I06SS?$aK!SCvYH5cbpfa?U1F+<{G`zGoLx%L)gvOXQ}?x+Y8UX;v;L)%BTniB zq`xf-IjX#WlBWtcVhoO|>}nWUx~?o^b8jJ_<18zpX2H-?h~6Fj>|SD}B3v&u04RUB z=_@U$&zd%7zg!{tlUot&{Qw5nH6rM2*b6FcFm2vamwJQ0KX)bjf@uxh1_ku~n=lKC`OR~D9X`j~$~)fi`cn@cn-ZDKy-ZmQ ztU$4cyRYb}N2#qFGFyO7nhQ&WH=B*_Bf7o{z{%kL_e;k@^Q+P6+p>SB`3L%@(gBF7 zG|ap_v0xk3X28mi0l|O$+bfB9fU3c4Ud%e^iE5YUThj&hzULjzX6c>u686q~)i2X=0VjB_9>eVAbnN zH*?L8rv%6)?-~l}&)cq$6pRdYZBSyFjg35d2&AqS8_O4IgAG`y9@iHwK8@Xo;zPb@ z$KH$R_IwlX!{JnX10+ya$ZG%07Wd?xIq<_n`->_V^6feF5OK8SwkuWc_^mhDhuI~r z(JV1sB}T?~S5)dyQKNo$NI5-u^FWk3bW6#uBNBD&Id`O5oWIVy(`uz|ZE~Y{?!VC4 zE_l0p#54^6MFv3wz<`S9sV9@`$I4I5!05fl_e<1BcNnMh zJKbd9+;7|#5J`{FJKP!+OJh{>Vo?@z#>2WJ!)4O|>qwT3sit@kVni6o4)k`BJZHMD zm(}7PpBIYK{D;F2y+D`k)T;fl;EB$z0K|=4I{L;wXL5IKl*}NV!+;V1M!tpDB z-qw)Q*SfNSj#~#u@bC0g52@?y>p$Y>8rtC<*w2#O~GFxxg9OBaCzbH|sBa}R}X?gjIIwE-5k zCYNeaHG|E8K~Nj=`bw3L#N`Y)sT|w_~g%CsH1Q*x$OYwpv(R^6CUsObTb43_IPA&oApn6RBPf zu$YP`%3T%vI%AbBq~Wv@`UxaVE6DogdC^Be7G9?y7+Ba2U)W3LB4Qat^-8V&eESWz zJi@9taZXWD8x{Rn-7}A5B`{MY^>gMVf9#CL3IlITwri%(51dc_X41!ko1$PxeI)7U zmSVyf8@g&5F6`(Hsowum<#@%=4{n>?S-(ZL%8DvNvmeg$F_3v=^(_xhpZ$YNIo@7b}=2S)z)HSCo-Q#z)EL%_{j<9c7aWG6&~av7Lkyp5b- z9d0UDM*Ca4?hNV@{XzIUUND4aWQ6VFA6D5k<3@P&;<>mHdL-PAu*N)w@9p~Be)m20 z2uGJfbSoFo?s3fl5$A(th@vg`tJVOef|BzhX7RKuu&{P1H{g7G9y|8GgWf=Xe4*m= zAUfZT@r|FZhQL5Jujdt7)3k18z4a!5;ABP>!&*XbQo~Fjo zx*2jCB-NEC#wOj_4nc)A*gluC$x1`dWz{K2R~0Ln10lAF>Z zW2SDzv)i_BgU79!=K4%RXj33OnuBC$jKg*w1!%2`0dT$&{2mFrwwumbyek%)cQEc@ zjOsci!g$~+)Yg>bf45Jkro7Q59&BH~!PQsXv%!rox>EF~plGvGrcQK{YRL__8e1CKTZ zWyD4jzC_~p=S&+Wjnf<7YFPqyDdR7Qd<3*)9(G1k#sPi5;N& zRc??E0d?226VJdEnz4g+vu!@>1P?^Ty!)}zWem@t(UPM6RUCo8YckvCO#qeGDNJT+ zo#k0zrPGVkR;NBA{oICA?;}BuTLg;2_K|~T@dbY9bZvrXd}ii%+wqI_(L(!0u0@7Y zkGvD2J-hqO2-WoJtlob$;C91D+|6 zWrAHna#0s;|BGV=T941$h`A+f;KnEaE1RyvwJG*wdR{;KzG7d8;NeulB%l1O;A?yi zn!^4k^DPlig*;PkfA*!>r%-}jyYz*~ui4zr7U1iC@!X}uC0Rr;N)Z-QM^F~<#Ifz; zgbq?ObI|o1U`II%cUiP;o?_>ei+}a=%&qsiI39Wy(6O1bzW2LWA)=hFxb!X9?hlz- z%=R^+D0SNoTD>Uz-NZEcu_I!;>(vmKJ=7=aZAW^!J9W3mdZL@&$!>AzuRGVInm1&^ zH^ij>@ip^i&g1U%q}yU)hg(J1^sX16Ow(%W1#F_#xyf|=C+RYX3Gro{zqP!%EB?E3 z-v7${70cmT=~gH1!8kl3Hp}V#dW^3;SZaR=oxfU$K8Z})(xyslSX=>z%UxI!=;7fJ zlaWCTf5~t>3yCZ_FDeIfA zGDa8A{tzMVeCBk1h>bxd;^fW(eu&0gUL39ER~wJ70F)M4C*}WZK^J_UE4=Wy{lsj0 zB?GQeY!LT!@@1{g)n9(MQzsjOk!g2*hZ{O013ICQaSu*6stXQDGj*GpEO&av=6hvB z#}t6vthy4X;+?`7fWR`$s|4mD*xC2ba_J&FQ8PTh7(} z@Z6rn98=f#QK>N`Ivb0dKq}iaz>@*Id>f)I5a@3_te8)XHPNcj_vG9^-Yl7!aOpD@ zW=);7ei*}^<$KR~*7|f8GUGiI&#q0J?pDN8=j5D$uQ2T1BePK~q|xb||1+|`2N#!(xFbDf3w;B23!}!)?L}|sgMp_yN2vzK--ovnk z=YZl~RD;R|-vM($gYu8Jx&|}FY(Opur)h`w$Uf!@luQ2QqDonun*(5D#%Dv6kWT_n z+R|SQk4gpwWs(l60#4KAed7O8YP)mF0BnfTf?Hxpe2Y5PpbWU6>|W*gi926WT_K)| z137_*5VQjqs_pP2^cU&(xPP_;YDQU0{?mqjY-+!7Y`9#Tb^HvBZQHY^_TK@Nlce(u5nMtGxj8o{p%y-W zX@U6Sk$AkJ;R`3jI*1AdVhnGo_ibs@_mKy2BZOc0SsM_;e1Ajw41jHs8*hueTCah5 zM*ehSfi_mue89RdRyTb?BDuX<&-TvtxlSAd>M>svbKXQwWK%<2d^DP^T`RR4^Xw}6 z*?ISdraX@862Gk%NTNgw-eZF&MsRdX^D=KotFgyvkx~Rz514O(pZspEaev&_YK_GS z%8c>R;5*sx2B>0r6+>>`gusn%lUZ{afPWC_&WR?F0*+_9Wt!2E)~_kkr4Jtmbc$Ia zvJamDhjw)MO|ZsSP;p)864`C;>gRD%dR1`@36H}&&J5h12)ug=*->Qz%H?U>XFaS9 zaQegaV!eG)0!9P%$hQSkEz3QNInrQ?H4xltp83c3jXU{6_bU4+auL#eK*V8NbXwgh zC8WN8?Lq>rlrVT3#MCg71|#TR(9gqS 
zt5UGTGP*i8`c>)IX?pGu2QNDv@qwB0afS4B_v1DdoAZLrO1+!l{!I|N3z+XECj}p! zD78dJpsJ>_u}veZConv&e_QJ_-gLCE`sk86i`izlYFV;adExMqs7)t@W2Z(&gMEqd ziP*x$Z#1ux2#@`gx7nf^Q@7pK%+e0|{brTSr7y*pf5z||e0j!q{x1dTb-yAs=lRf^ zTAyvC;9z}0mWw~3w8X@Uw3k!1z99T{Y6YWm8^uBOpupusGNwo{b^ey6%(&>qfSqfzRv1f0EUSe3*~R5K)U}){U)xZrvkl?u;$8=B_=B;n#)7mLVaV1yrN;X#EE2 zl!xcT`eq#5QZ3xdo)~`!?q~yN`U=n6A(B0EE_toC|0WJn z-R$rIrwghxD|M+W&~V$xlvxavdUgZ>*w>4jp9-T%hm5JeW74ppGsxqgeJ9uHhj>60MnukF~y38=pC|Vc+*V zO{p#_oKJX@4T%QncB?oGyW^*Qoc*NLIL0Xltxz_SFQs)o-IUwi@e%woK+Yj%7bbGj z^Pc|uA5U@TGuri8>3{I9CDk1dAy607;;5`U+sl(8PfXpZ+uz)V_FiEf#EJRWC_c7d zFNzeJ;Jfx8M*b~e-Hcw8LzS$1j{ zZ!PkoTnCKKptZGg*0{yRUYJ*-O7Zw6mjiA2%*;7BO|bB~9?Z%Fxm<6p{J9BtkmwDH5dJHUUKJyQ+K+ z8o!`dmwoXV^XxibW=lI(N&^a|IFyZ6n6rdfpdoT*5C&{;E?fT!~DB!3UgKxy{25c`xpB7 zQ{S6f_~=<2Q}TCOK%d63o| zrECnbqV~Y*zDgiWv{GL*fzP*K?IdQky~7)U+_xIT(rU7cC1bIf46?5L{16h!^k%!l z!(y4jWPP8&1>2>bwgI%e*)#j&x=1EAsQ~a+|7WKtdhM?WL`sxT^NE+NUe=5(&mGLP z2QYRZF6opE*Sm_R2iUnG;XeQJ;}Rx|demU|C710oHrte@RvHd1isyhDdT%#|M6^=t zx%)3`c;WprYZt_ZJ@dHpQ6P`qdg02dR6}p<>RYriAj~Ol32*E(aKczR90uMUfL^X5 z@O+2dWG0b&O!g7md-+9_HWIFmAgaG*Qw@II70o_$#J*kn8!n-Vsd>9(>Anp?d}^Yk z;hUjf`{f1iv6xQX)tX|PE8neDktR(i5j;5 zU~W49&i6k}rF*d|xj$H{Yyw7*p=_B&9RLCUM;N7Hk-3Qivfh8D3x{Q0C8xTNd_p$;Qk%em z^2!%aecQq#LtVdli6$PnA?uKMuj zK$@t3Ps{ke^TuX+R} zl`$aT(Im+v_vxSEsFhp%z6DA&v@buu@;;TlA5JoQLH`^yE*CF3#3!@mBXeKC49&jD z|AA9nb--0lX1Fg@O`ME3L#>K*WblUSrXX|`KgDe)%cqc79mpNLV#)LA?;tX<0$!~*xUgHCs)UlKE(DO?0VTTo2C4yQtTkP#e zuG37tm6(3Kbxh#q8;0g=M{wuvAn!cJ=M7Lb3`i8yW^n-}0c#W*f30C_; z$f90D%JcIyWG+!`c|ojHNx`k`ydm~kA|78PLhU*xfPRIgj%nZfE+zu$1Sj5H0KJ;r z<7L;#<7W)(Lnx?Px3_pM3u@_-6SSH4?^@Xz_;IM}*pZuz1a(SS46b zz?(@=HZ5HN)kq3+^ckK#SRe(fx9aVqghh zG9F=y3#~AW&oA_PV|m`tZ^a#!6J`()a?ToUua;5pytcjM$Iv?)mL*qgL45IK-cf#R z<>LF*Ld#%DzwI0?Ea0aIM8X?_wJtpl+x=UodhKKi<$6{0ZwYK?efbNVa`Q{I;M|$K z7GiCGf(noS*SY&6N@)mXJAs(704d2~JqV!)QQk)6y!)xNH^8=Y+l_&9FAhrQioU^; z)#~-y4dgy)%$Mu8-B{;6#YNcHRL!+Zzq@a4G1?;s`q&~#+v`EEK8HZa?C|dTLg3E> z6+suRU|Fo(;XQ z&Ve=a<<5YBtV3=U4p5X?Of36s({BZ7dk6IT$E%K7Ug8{1Js9#^P8QIOD2|omYjR#5 z)VQR_rdWPX;DK&ty*ErY^F{%q2I6zD@O*2nZ5Cnx`xDwEE=N2avGpbs1r)M(lXi$JfQZq|7}aYo*6Wk*ir3>Wpu>C0 zNa)uwY2V$+Rt+1U{&?Oe`VF}T?k#k9;*K)L%U!plEo&wP56a7(gHT#zj~)4R35IN+ zVBzPxjll!L-GWrqmKSC4p}E3^diIs31(Pz%DqBe38`x^vz%!3zsU+jEA$re8H2=u} ztdT^Yc7;BqwmNa(CrC{FV$Qh#!ImD8%utY*DE%CCtc2uG+q39AH0n&lm-ely)jv(& zg+18qY)4)dHGU9Iha4UZ>*B#iR*?fO*X*HFcBBlvR_d{Mm7YcxW$>NC}nzRRE#@! 
ze3x$A8jB5x#)WiKn(-J(t=Ssn)cu}OXYS4TvoLO$Iva{E!@ptFbM@-qFDu{$#oSH| zJfUEJODMgB<{E|s>4CK=kS=@BI}#n_n>^Rp{>vph7fJN#sJnC`IVtA=(S4@RIKj{ zR;u6%HtDRvV!|FTAL0DM$%?46m_Wd3PMMkZ`@e~?bWucJ$S$ytZuRx~0XN%BJOw)R zDja=!aXz<#y(&Mav5N0oV+E)k|a@N)X5D7RKKg}~5}l&weG!l@Q^%b&Avg>>@O zggXkDTLIEWqrpGPP>W_+i?2~}ohYLZ$w$fPUb=sTIG>?>!UzBKNXE&OyP znPNtR2hQ$60oO8zsLHr6OMyO152?$Z*Z-KZN~n{;NA-02c_jtQi!eMrYtZ&^{1?vK zG^=^xUEA#~Uu^bCPQnBdJj*P~0m%)1Nra{AQufR+pB$p9?_@&0nj&tck>2RQ25;V> zj%vsK@DL+bZG;n5DU~y%kkS>3mQA|d@8NF3mRPGvnyAtiF;{2E|JTWTymp-ra=Ak& zPg{zBf?+_`K#BNMnwEi7vt?Exn0HTRt8B*) zvjxwwJx4yWL_@PG0|M0;X1)I#&yb@keuJ?0?4S<&SQiBZ{gn7Z9%L~dM%Sz-E~&ka zpvY!)V{_Sk5}Pd1-5nrluj{}6)-->D)j_lB{ZgPJ`ii8UDB7Up{9JD&QHi+2W=a3l z<4{%JLnHZmfy8R%V(RBTN2!(O$oOWvqOSt47iWXM0}m_x#a>%LAzSP|vi}y%tG=N} zn5W>}q8jVkSg3G33n~4*o&i-ZXM!q4t&v8_)`lVjXu;ogafq3}JSuHaW}W|aH18V_ zXC3~=X8DK~k@@n|x7@I!BWC3A++lNI2CIp%mrKv?W(eRwlkD;Ym3a_(Fsb(4T9VEs zwdskT=>{iPA}d-P#2~&+59*Ih&Y^FS0!ezoT@-rH2VXIuT=5$oqCL1N&BWMZ4_V~| zk)&5Nh0%rv=RQ?XI<&?OBlH1!(X8Nc8*;gwYBOzk!9aQ5h!Wk+Iw=IDOW$78^eA;U z_$U%PxDC=2IEov3S|F>w4JoJwh87LRJq5dgkLK)y;sGIO$^*csg7{H|z~scOToByhan~GT+|5Z1#M* zO&#&>6?h@`<>h)t@x?r~;VZtDBuvRY!80qjfp)mKll=IxMnfZ~hO+|oE0B06c{p4k z`P{6QoG_j};z#*>!7zB9y2H~JXxl+(%AoBp{0#GZP?rmStY|s-c4E&2ZA&9NjIePZ z3@K&yW1*kI8ooQ2ZEU(6WOsX%dZhup>0LCSdfWbFq+!x+j$}#+-TlJ|;XuSY;cj9X z6b;ou^u(sLdl=YRMbu{g{-x-*26T?KP@=^vb9h}q(qZ&yh5ibB3Sfi=)pbd9*q)7T zIDhxu%oRp;by|p2<%;DsZdjsAqvxiXE02hC`xFDNP+EdYpG~GR-8bw;uG>*r)eI|7h4r`tK@IG<<(A*bO>s zTpXJAh3&V>kYOHFZ>wPUj0uJg7e)#wA(^k447OXS?mtjZPsX7*9VhwP6`Qt=Ea;Ab!54sJ>gHpkavGm2qO zIVSIvn^+>zsofA7hgnV}_Cf!2s0Q&mlHU8ph4lb*b*0Wrz|P|6N;3-#HxW)VRkj`B z4GBBVw%d`Mu}j~CiJ8(FQ2pNO{G6T-6{}JH3`7yh-7+A+VN?I^qp=Q;=ljVZj!7@{ zQwp?mi7J$mWSY=6qo#Lj&E=5FW$^XF)W};5n}|@^50+IQK=jOnmM$=bRo{hAq!u%o zuIiphP@TA={Vj{jMya-nGim$Vpc(F|ud(LhSh`rv@b@ABnzYkL4*FQml_8l2dm?AQ zg9W=w&W#x<$83%AB#@e8u5alJOjE_Sx9BZg)=I{%g9gV&LBHd}(wK2QZOV3V1ls&Y z%dWcwwtj~tESG8CqTc(a0!-%&H{{t8fuS??KxWQSQaMswD^132G@z&2@)QSgOB%Ps z@tH}L8Dq;y4Ej~U{?#;Tuv>_1a@j(e__&syPqm5b*-W6F^+wu64b4Jui#?UMeB;T6 z5HcQv9P4X%LUD-znZ=Vqv!3M+;n#kD>;GWHw%RUE91q)eU zfwK4i!m&VM&08}LIX*jkl02yQL5?9Xsc|r3g?qA)eRC!c~ zLwCG%y*rl!&3o|xS68Xfy_^u)-$W{o+?i|+ zW!&+mZHNJ?swr6i!l%SQBMeG!6LOS@KW@xNaK2Y|0uh$zJ4$f|)3vrA@Fo~VxkEy> z{kEaB%?T+B_Ijiz`DHP^oi2~6hYB%r;L%BnP>DE{dWB(29rIt}xl`Q^dHI$yU-)0+ zf40Oykz6i0VjbCi{7^afj4mC}wLQLN>M}7r*9CUJtw5bDCCVEE5U0%bj^xoFkEN+R z5Ubi ztnM_jo8Z7+dq;|dcN1Q7MLy4RZW)e5x}ROYm>w>w zxLvG>%$KWTwn3%|;PPMp{5;6l$*tz~MZe<4k^`Rx}r z`zHQZFZQ3%9LGH3h&FId2D$SBV{Uj%AEBJH6H~`*9CN}?!4Zc2;xlL62jB{P);-iu zJCOCYp!9L1yr2`<8D7CPKiw`tlRZdpyDu1m=-=*XwYbwjmTIXj-5BB3Zs2D?OP1rW=6ltpxHas{F~Aa4C~FujY*<7vb34{H1Ndoz-ZA8*&i{T4Wr zDCMe^!lSt3h2y8{Gim#gM0W(Zu{>H!devel}du=~saw#&3^;H_ZmZq9tQ# ztV8~vBdo($VGf=-x&;6vyHMelQSRz3`a>@zdDy1JV?C5D(=7$WB6%#Y$CUYrRQ zDSbsR*<_t$!6$G7I2v4CS*o;p_`X;lQv#b2FuN_|CgsL6e}$l(;#WqIfP(BN(lfW1*JMA|=| z)U2n7$^K72CXYz-X}&@)x+{};+P_-6JL_iPp8_VXm_M-!zJey-{G%eRYFRGq=ZN(em;I5|HyB*2Z7@aAL6PsQ`$NUZz` z=EBGG-oc?J&tkKSq?-PR#WRw$ZqBz{>x-G?*X`^i2Cc9j+qoQp_34UI;e0z?p6hm8 z&7&>NY7NxUMEn8sf~1~@ojvz12}Nbiy+F5(zTJOCH|>?{f(yCq~I!( z5F?ZOX4mIm0|-tlIFheqfD7y2;`8bXn`hR%IT&yc58)cnkqK;5RhA}bJ_+ff99{C6ZFg$dJ>8Kj=@|zwzv3tWV!`ws6X^C24JZLPi z=;hmb;tCXNzwG|Roo7orz4WJv&vzaH8<*R?5mEHDN%+^o?VTr6gt4L~n^ifY<$qGT z@Mt98lp&$zmKXbEzVhc?b1lY3y?yr5!@{m9pG#Y-(?Ono@nnwj+sL4-%;AD|+Hlmw z$QhDQHcD{?+TEV>o2`zo1IeLq-&AK3^#$FP5WK0Vm|3G)3LnCgGjnZw3@5gBb8EI3 zkR37%PD}K{9NA3y{w=!NMg3_82;-eK6BB8I!3|)lwHI-WRNgtX` z8MJj}!fw`!whkPLE>8m?;sUH9H|KNFxK6D+f5 z*DJNTsr$vE@;EnvL*e>1P;@Xh4f6(ipD)gyV>e7qA(Xz%#9QLgF~q#G14N%>kBB9~ 
z!tNBQf3XC8GPKrJGUGK6`h)9vN-dkwk1i=bl}0JqEf4ZgK|Ots?XfRYqawa}>z~() z_ZnAnA3Ks(j>haG37CRrA?ZlT$Th%5@8#bOe)`%Z0#0n{7MW(bQ@<05Hwxk@q^G9y zhb;In7f#XY^=vxEKThr8by8%_^~eCe}oEk+RCs zQ%OCiO7aA8O*J};dn|1b-Z7H!crQu5Kx+9%?PNfgol@AV_ZoLxP^oRr@iV!DxX*j^ z(I3Fm*5d}=AFo^4Xh4m;`~1oL?~-*uF_V6`fj^N0$EgT;iv)L9>nSTPw3kiHH9|SYhKUaTX^syp?q?s?rQqC8+1TIPJm8rE;%tIYfw!oh@8f^&hV$iD>e*v$ zek9W9#%~Z_m7xQN9v-K)14C*iINU;z>vWHdA$X#TT!eYk|4O*PnM_YZwyJ;863zPh-|v@=ITQ zX8kL~lHC0^my_=c-)|3?lJNL?l;`+UPhT8jLlKS#bxDXNYB>4lvqXEeZaWzuHiPlV z|E>st^d>B~!I5T(|EBX7IRAc)gBBFqg6rLL+aRS*LK;uo*uf)`{&a&!-9v5U6FAmg zMnUG;^te5&-#F&lA4!xp%c*e#wCmVpUX6raj?nX@!4_O}kv<0>#TpV3S6KJ+nCr9= zP2wV@^op%gDoO;>;B~pe={7n|4KO}=QFfcuD$LXnqwm?Z-Z9U*Xad7Gf#mk}jS?)0s&Dul3H@998x4VI_C4iH%2jKv8PS*xJ`fRfX2&;a znpTa0_%QZD$X2CYjiF4jvkYAqt5uAvXV><@;&h;};=eapGd;H6L1ISphl&RC!(J5I z63muAHM~=2a;B*J?MwdWV(+aHIN;hihC?PA*t4wmw{ki=i zdLR9y2N$2U70bqoBSx?=tcFp@F#B{e(TIetszFB+cd!L?`mYQmga)JB-t<@?!Khw^ zvOPlr_q{W#X5?O+*XzMX=2nltJSf+<>oIGAW(#Vr+N)Csh=qEUwhN{ABS1uOJ~IQg z1m^d5o3D0o%-laC{o?wNv6iCRS}mn(?G(P?sljCO@(U$u+FNC83z=<0fuy?f9&Q@f z5Y|SAUR`JiMh^swK82afwHn;RJ7oOT)ywPU9>w&!;EhEx7;#wtqQaxV4AQKdgR^`Nc|D`=-`ulnrH+Mf0s=+;Ezm60Ekn1D zxprQ;pb+?{$BeNT{O|V}8Q`x1M8bb~(0IjAW407VZsT~2GP4H3TK}H(MBbmI2wtdC z64t9$Pd9F)#W}nbpi;o@riONtewP9Xx}hPO&&eaKN^3bgF-~uK?fKEPbk`8Uy-LH1 z;1h9hJMA?zEaVLA3;9P+iy<|Ik`$vftPR3&q?9C~4r78_7i~AC=fR0v|BV|F*WsIU z8E`q+IvPB`*cyot4Q%fE5n~Ps2qz6dkHV{w2{CFMXZ-lI(MQGq_?7AtBC?^eM2z7F z7Qcx$8UB9e?}y8XV`c+iG~<%O8`a{Z)>(l+n{ZOX2FWDwA(zfIvENpZP@}hW+z%AL zCoSZ&=M32EgsNOK2?du)0;PDGe-1s6>wAK855k(v2UN4H!+%x?kIWlrXqe1-51qCM0G3T{Q33J{G@H}9kk8k@FAg_zE)jQ)b0xycVLx(l7F-OJ#V`d^uIEK?&cQdVVQ@Ih?V#4 zIh)bkBA&Og>&2QVNxN%4BELN>AYR!IWf*;+$ zqLQE4?ic^d{PiyzB9MFPHO9_`Y*Gs=0ZX(n4wUh?(rTR+W~u8Zyv1`f6cl}!H~c(>j%U7;;gbmibIw)wpAlwQ6ebRCYESFY+Yd98SS)Gh5+5f4p# z5;O%L0p45S4hrR5aQbUnoVdDJW<#)}Z{~lRS+`TuNGDkw?tDQsS1paszcg|vMz7Nm zMCx5a)hXaG1&g@WUs2|l*JYIm7I2XbGQk*?=~G+3C7drW&AFZogsBl5CduqCe{p`8 zb;SQg#?kJTPt2p;rhw>KCdc&3tVQ9B53!wpb{@zO566r3OogqQhdR=7=qrjSHKPfO z&_@LJzucQZ;?otBVLl8cjf0m4+3IJbf5Y~stjoa|0XSQbY~9<~*t78yIRIUEo&TI? 
z>Ux*TQnOt$6Xe7dhU-L$8_u)h_3_QA$iwd@Ci9!}!*L1d2 zBK7vvpZ{y=s^g;io;E3+(kv~~A>EBggLFtqODrWFOE=QpUAlyHE)BxcpyblsNW;5+ z-{1a!=X37tIp@wi&&)G-Zo3=>c&k)$A>n>fOHIz@4K6m7Ugs89kG$$y#o^af#wa@z zrXTWgC?~u`nV%iRQrQiHN7K0!PH_Te(C`vL&Ul3eZqG%15x83JO%B>|Xz8FIH|6y5 znB_fli21mb(vnY)SpLr=5!!{u28#CZukdIlZ$!;23Ch2Wj@GM|pR-cg^ol&u@Jceo zyxj>UV8ui6nuoq88{QPW!oUNvl-07=cH6`FELMiuQ&cUx1+d=n(cm9TM4z3o=Ng|s$_nv-LW_ZD z&Eu>&yB^9PB2d6t7qw`7z_S(W@p`J3rDcg2vOl$VhEsbfM$*8_)B!apf#Nw6De8r{ z`EmvtF|z~@oS!TDp%DosEL!Efc+bV`Fmp&qo;WqUOC^gnW$d7V-2-qR&b z_)AI!yB|kVHErkej$&9E1Uej**WAQXWsdMH1lOem%;G97I<0jj+s}1^NZ5_#^yx{g zdR-4@S?=wB3C=dCn;M*$y`rHaA25&EjtqI<;y?S4$e>W^_@LJ4HS%#^C=83ZXmL@W zLfD7h!?z_tD^>2K0N-aVHe2cx?k7X8d)}9^4Qy0rvvVaAMcH67PFkFi%z9Bd8g__`ax+dM(=ge1E0jxeXe{|3h%C8zkY28@9}N~PFYZ6qn(Khs}8i} zE)qR`-eEs@xkt$NV|X85hUkn_CSGGY6om1UVDxmc6m9&r?@wf zhGgKKM;qRFq90d5^9_>&C~~6f%K|kVh)GDs3vo>@z|dzX;e7L+3-Wrxc03Gk$sqmm3<^JB%rBN zRV6t7CPz$E-#qVgmkk&ctfR%5qZ2)!nv-+nVk(B@d)u2DBN`k*quV->Cw>?e6LWdy zFwqw*{e7lf-B&G7Y#%mpA{_A8vb#4`Onvf(dX?-9oXGQ_62lc^ox}v%uSdA-=lyhR zv-$cq`X)!5@-(WCID%)w#{$~#ZP8k@xX4Rr+U69qe$ki3d9rlA=8Kumd`f;f-psO> zK=Bk;ZoHk9_mzs;eOjN^XH>TeMlSd)3S%VfDSsundca~*R&TRYBaC~3C;4>5qE**s za~~@SAVcCmf2z^XpFTvc|=9Zemvx3+0As>^AgRZkwAQuXE^k^t;I>FNt$?i%f!G-j07DMuDk4 z+1+{5#8tjKkK*c5>!b%UzF@ZnCy=y&%BBtRO}FcK8|#}Etqs~ zBJ#*1`q%rtcj_an3%A~I z+O#;2%BeYw%4AP`-qGxb;Ksv;s|I(yrN37eFAJ74hj%s|yg2OnZrh`ZR5UGXnya4& zg`KX_;RZJoGSz-MbYbD3ZbcYb_w##YF16cu4)LM$nFd~4WhTb08x)^Vz~>(|U+@Sj zll;&{-!53r|6QXr?v+@s44>cZKEc@_lKToG-Y)s(o)a}NI31ed@4T5wxBM~Bit#KK z{~&~Vr48tv_M1?^DniZ!h^MTb~=dlCu*qv z{A9j1hswH!-RC@s+NX`9hW^9cxf^mv+IY6`%`4vCK|#JGa@^T>!0xB#h%;9ru0t#m z_VsTydN)3rVKpWVY4KE&oM-bLnb9Hz;y_8YtY5r3Mb1O&AA>^akwIs z_K~ZbfbcOPB;kfm12Zco{Vu+F4?HhJgNM6)DqN)coA26p^{8VvJ{c&=;WuuY9*T% z5oN6TLCF~or0|{^fy?xL$QE!SQOPqp&t_B9uGYoLx|q3HPGp}y-YBI_aPN4Ol)Y*6 z52A5(j;nJYkaat%nA1ci_J+%9dV?37Z`1!Y9x~%~Lbl0s_N4V{jccVg*!*j8#fFt# zQ`X?1ivyQKCD}JzvV;lxS#yD8#$UcmSj4Oq$@m~l(a(9&n0}&M3cWF+PTk}10OcP8 zfkthaveaP}gW~;Gxz!u25wGB5P$XlU(@bPW-?FV?wfEgtQ<(;kkN+b?E3~dLbamx; zG=*h=7Vq3%De3RYgZdv~_q~@rxA`xuXrMk`lh(DD`hYI;5FdB(YGUrmT9Z!QqcDYp zr!@Q7iuZzi4n6icMMYzs-*r{%>_J_vDw)&K#u^f#eqDav1~;M+_Onj3P~&v9T(3!6 z)?iYP;KSuY@lO8RPX`aPyjMg^r%kOl)T>*^Qrn|xiTA&4rBgB9RB&so+2jRA>F93e z_P*oei=D4C#DG5jrE0HmEsd6*>Co?YWVh8jHB?qmr`eQ%_@z8=7uGyB>1)q_7$rRL zPkT!B_*8k&=rhc2FM4He`11?$v1ps=-oo8!f}H)~9pHz!L&BJ&eOz9RDkO&^5DUEC3j6CW*5e$eGWWLmZ6 zYYak^QrJ&=9#Lt*Fb!*>L*rz^WI$sUf0c3smu6K8G8U1BJMAJER0KFkV^Vjjf@JHt z*i3wk2mIC{Tv)=m$1(egmHuZZWz;JijY#T3_^S>y@uMd9W}bQyAMgW%wRgIh~{2>67T#j#=@0oHw<8tvroM zHhsV^3$-?g1R-1sEe+!yxSkMJpmT~_yK znUb-IEPu9%*OJp0$_}mbK;|8I0+c8o(G&2m|1U_9aC-SbwXO~}piT$aSs?NiY2845Re z2{^1?swT`sD-28ZGQ@@Ixh4gSJ(A z%AEOBCe`H-#78l59en?)I-_~jD=bX^z}hea6{3WkM!5(O_u9k%f#8%X-HZzj7`h05 zds}b0&vP(yXb#GEo}nud^|a?AHHjw^g!$IJSc^ATd@b?@M)avrEEpl1q~Z+qo(4$O zES0C5dWQ_wD=7nR7|=g1D|{N3kuPYskdK};)K%}8e~~9?o4YO;Vj4gvfZ)ONu|6T zu%Q_vyow6n4opn5C(e8BCVXYJw4FB+lJ_^E z;dlD#SN=74*1h0Zz4dd6XIb>WS>+%BIOBPnq8KlriIx|nlOTw|ru^{EAA6*o zSAV*p!MLn{fk}jXj+l*#{baqYQ$h|bCXJ9SOB>w>lu*lLP%$IDg^Q?DB)|NFPkm&zLLu%ZyA=q}0nP?wNcIP`|e; zbs-I;i+8>AoQRFujT@2eq}iuy2!n<%HK1{}DeuM?8QQK@Rxip!{Op%KB_qA(l%Lk{ zALeK6Bdz|hIedv>Bs>jQRxli-vPKHjDH?zQwzSD3=>EKC6R3PEi;i*32 zNz+{xYqj6gyq0(obVIrTOB~88+BNYDd%ku1ig~YxiI8nS5KT<{V6`Z${&4JRFy0{m z#20_LVB<4~$mPck>yA5<&i(Sr^M*{2t}O=jxl$uJVIO+bm2erAZFvh4P+D?*GJDP(CijN}_Y!_grD`#I>xw z`x~u3HDS9Zjxn|er|LSeHCa%qN@As?e|dq4hChH}>XV?1548ecmxufxsQxh+Q{}`q zAy;`T?a@!0KOETO8EFI-&IponI{InupUoD+lZQ-J?~+cRHAoG+EK=RUf~$Zx`Dq~t z69e}h&s-8i_X|q8uXae&A{Mb-MHk~Ha-MlTb_8LIvC)Hmym^{pH55_S?bDA}+obs- 
z28)BEe{wDxD75}YW0H&XF^W|iHaU=Y3YX5?_v4`0eSwWF;TvtT!;pZ_u;5$(RFemOh_h+)z-u zNcz9Pt4GXeZq|G>p2WVZeL{%|6pP);TM_xdim)}dE^})wDj(A7gq9KffYolUq=zB| zu@yo|kf&&DK>^13<1P7waeLB=al1yD47?ALLZBDPE-*sr=d)aC64$OZ#>mqNZLjuK z+{iHRaV?;QYM-SgFZF%{IQG5&hz<2gmCfUD&;f>yM<~@fHS13mN$mH8oIgniAlJaz zY?f75nft&FhOY54%`!_xR>PV|vULU=T!`Po$x4d2(N_T#s59cHZVl(7t(C1>QBMtO zl0~mR>;}Onx@D-xfJ38Y)CLd3$2XIley4QNruxhuTfj&$32}XnM$I!UtAJ|bI2@);Q4j!Y}oP_nc=cErYwmc~XfnKGpcqt<6Sa`9Ns#JcQk6|=M zYYyq6a74w^RM-2)XTQ4nS4KsckIT?90V-jPlal^`AB!Yb6T%MxGM28W_phta{65eTI4 zDFKZE+GKn++Pj(kF)}Tri`WBh(aK?{OW~D#R6Do~-*cXS@pgX0CT+20MyXEd+5Uh& zriX6EX9%PPX1WEB;bW;eypYL!X$JRjy&Jm?M-E3$Pe8_RVT{=-W>%?@Td!vd zPC7(EjI26Opdth;9v&j;;nO;hnR@HSd|PU_OVdQUxi|pQ8Ku5HVcB7Muiu6Ki90qo zPu`0EAK}o$KZ=O1;gscbKB6gfH;g6e*e!!PU`5mddB2k~K+d0hZI#Q9xzBCGHv@X> z@Q+Oyca<_tL$faldtH+9lgSxmTgoIRi|WK3dudlA9(~vI3$Tz*_10PlsOl!AIz0J7 zmDaXi2g~FYx zwj$Ylo#JjRX|Ht|Z_phxfgA@c0!j`Q(JJ~!wD`F7Xv#*{^dzj@ zdHjZHBUxm;r1%lE#6k1$7DtlnK-jNa0cxn0pHT2&?c@qJdOXTF0G&onlR-D{Rq7^duCI(3^+`H* z=8HY#pM&;?5ETd)WguCZ7kTkX6av9eOoM7QrQgAF5mn)t&%N%%@LC^e zWOTI^DCBL9t3`U8;(j2d3dTPh0m5d=%2t(cE>P&DlZ%l0EPu84Sv!s{z2=i%I9u7) z-{&;ge%~f{4!6}iuSr@tyqP=v$Jhep6Hwu7_h4eM{0Rm#8x!>UXG@O+?JSCP7?D5& zps`bIVgOSJzDF#zP1S2D`7nwu6^>=B0L0v7{|j3XRWcZFvhJR*?$o7z2BYA8c_g=b z1}gJW?IyVVkDzl6!b>F1#2?Jh(l*+UKl=7|>CoH=i7i=k%;DE7o;>HgF=%xG%1M~^ zAEtzSO?i|myk(0dX9;V3_@JHeQQz64-3bEu2=$yDJPzN4qSnV6K7MXEm)W8X5b}F` z6NtkdB-Dz**ixJtuKN}3<62$R_g~UJKPdukzDAsiM+)V^`@(f9%ixh0-T!c*Q$#j^3 zutY$Zcqn>S)$ZKr=B-C^;1APb*=FEoz#*0+0)AiW^WHwSW7Lb+>0^ zQRq+Gp(JaLpr92@F}1#1&sAo`)R~uq4qKSrHRQG2*=Pdk7}duXC!r0{G*a)=6go;f zGpIW9fY>F1@w?zEhA5&9zSdQTgd2gH#@)~a67yafYgYb5j%vZRk>||FjLjCeLa(6Y ze}6L$+F9*k!{fUv@rQ)TGO7n==kE?Ff(h5=z7+(8={od5q7u;N4p&5Ae~x?moIp5L zgSa2_!gkH$-;xA7w%xSPm;rUq-ZXU~(G>}xb}V%coY;@nln{XBAlhImwKSRCyNN=m z0gyiQmrrKxI;lRrjW&>0S)YXUQzJq?7#Q4ip+dTS80)$K?f=Iuf0x4r;@K496Hve3 zjwUUr@$-45Mol1Hz4c0@i@}gX8ljhsC*=|{bfj6xPyjg_==BUuz)>DhR!u8OOT$#W zAZVf8Ko^y4omRR@9>BVo(J+rR15-$2=$cdWVs&lA_c0?H%A}rti@*rwQG3o&NIMFN7IEO(gJ_mRF-%X3c2 zg^TJN1EE3~Ft_iPEaCih`S27chm$izA^nW>m>gE)U&VTRF_W@}8lY9&M%1^f-dno@ zjuLQQp<}viYHtr)Yv@9mdMsJ**%%+IaIK5h=C+ZLFj@NJAnx%OyOVeS^Zp=LqjxZU z{n^u$vGJpU&+hKa`4eVw`X4dH-}Ekg!ujok5ZZUYixT^vnPwC|!g*SIm#5>|7MV|K z%9iE?tO|{ftzRYJ6K#_)w?>&#URJ@8JI?y-P}}Tq&yR70skda=?0lcE0U$k?$g0r~ zRL09FrXY2FzRW+fcD+NtX{Ebg;yWW8>tJixE@ppjU^?*5n9HK+YR;K;LYw zvW?OPjkDBrtl zhEuZe|E4qXlX;n+=1FO=blpwDVom1HyzP=~S1j=%cd}D{xan1FemODW zyi=~$2|K)d9MTS{Kd5o5Q7X|Ql@Yj|v#n52`xP6e&#OgWWENvUmp{V;M1r9NG0A_! zE^Tp}t&c^{Wp4|8uUnZk8XFl&jVx-68}igOR-|waW(mzZd{*A8VHJ|7W}kW6YR=Pr zOIms?8AKE_$-7B_9DCc#l(=Jv~WzFhFfHUZqbjkVZ6)P?d zOvj>)*(d%lG*xo5sV76=UXwj(T$F+ zpHLVZJWmL?%aF*qz*t!+K-J+e1gZ7s7Mb&Qm9a0EeT_T$x>ptbnLEM8$8G0V8mP#) zl_%w}&Sb*u``P2&!8XzwpR==jC_6)iFvk7s9QLPUYWv<2Sp)y<8OA}3!aaqfkSZ&P z6{Es`=brrXX`Q7IcLHgC(rsiWf0FWQWFyPgNuHoSR(xCOj|HQM>l zzuTgpjfJzVf7~mdeD?c@bnQ47<0EkV3yFmjRT7bM?g9B8_y>6JzYWN0vJBr+u9=CY z2BUH&Y|UN|K2%8mHoO`pg-)^$K76@0?(O6C<$_vZ4H3E8Zj$70`rgducJ41&biGCF# zaWRxDp<3wY>JGp=+tWL-ktfZIj=UM!+5H{GEq}N0fZYr-r{{L;;EoqH$SAmyZD(*H zq}8P=p{eZ13E50?dZ-{hus2vS$^qBwYlo(2Y<3OPZVg&8X^RxgLn|>@OX@+9Cerf5 ze^ZxbX4#sJKcv$hxJAC%+eeb0LR0BoF;7+KO~s?a2C(LH7zu>lUNkLkpzg_BnU~1? 
z?)gSTX7JQFo4X#@*olU#Ip$Ob746g% zpVE>y6#C>wn-iuj=Y>Kg=8}P%D<)&$U1s>EjJBEp!XojmH}GxvsQ>3>AlZ6D76Rux zmc$Lh2_^j}savyfAF;nE$0BX%uJ3o?s-alJPa$S{&>>RZp=nay_cJ|k^m}OVX0q$^ zgH^aulZK)uF%0F)iq*Er=lQmqJ!ug_-`IWYVu&{;SOe2zmO?}V;^kCSRD=>6KsHv+ z-ky6UNQ&=x{g)ZTkW6jiGm2>Ob|psg3>NuXaABaql~lw-W8{sw$h+gMeN})e3k}RArVL=1L}i0+0b>U*evBaJitLv#;Kg(gtWF*|mB} z1c>nOmLYzk`<@}z`X+mtvhDn1vE4K~?i=qU78bhrdIhUw<#R^Rb*F&^ry(Kw5pzbz{Mw%L`MT`_vd=X z#m_23{=IrLZSRZ8w;r7>O)#UMA6tzX%UMkm3wV=F1VU@gmxgY}o>PKhB>XF|<+m|& zz#AMHQixuUN{`-NFw%whA(P94T0tKEebUgNv`EERoPz!+lD6<*^CfbTQ-@O6kAv@h zDvQ9LF~k6f*V~wPPp*<|Uh`DtI)ewA9}p+e#^7^FnD6s))y3o{#f-c_NI5k)4svsA zh@KU0S)*Ls&zdgPeo^m!5jLIav=%V%6Cp%gp|FHrw%xxwlVrmK3WPsIRu?IN_>J{A|sNo=hk%^Asq=}%ZLL5 zfv%@T;IfvdCI^2jew`Q~E&q1JX{ctSG22E{yRz#Y>c6ycLrUeZm^Qts%P4hbZa$+J zSNcund76{w8y(HLIt|(OHChb);1NlrCJY3HHq;mWTa!^WNiBB(Ae*F*?pvXXOFv8f z5~?Dew##j7Lhoq1CYsz@0IPSW&bqs$8zRcVleCYakck7E`|54YW}-EVj1pO;kRR%d zKk@sv=>?>+^&q-fOal?)Y+Qn_zu;-NBUz%^2+ed&52&aWpu?>&-@v#%s& zCv(DHWR?8eLhr(j=PBX2Y}$6D+*&{z3NITX1mGzaGRoBnZmCQ(w{TYwNy91#0lU zi8v74Yo1o9F=5G#t3eaY>@ zsQy(cGd`rtA8;+c_I%K|QZA(v9Vp4%z-_6MI^{2314NjCp&!O+?qe!k7-TZ$2eZ*A z0wL5MvK-&E;;D!M5ve(d_HlBnjDrgUWCn$Gr)GBCulfFEnS20^CoGAa17AS;oBm@k zvFkl2Sj$amm<8Yo0HTKc;1uk_RUo?<#Bm!UcI?s7zd@~dVirTasQ!qMWVFaq^t^6Y zN=J1!yB&YXw<3f|+O@QQcg5%N1rga4D1x%ebLk7_+redNUSjR9#BPaJJuv+F{Bb! zggrTF$C@!3(Sq4NCYuFsC51PWMj*Fg10$}5go0ZFSU6e4Qc1ACk7AjQ2vDUN3fY2U zFj+R*Xd+wU@c^FGorNFzN;fMXgIJ8^W%2J6wl!V;D~B~+0%+7r{p9)Cq7?EYV#OP| zluc~Kb4JsJis>hC5=oCKSSAJR7D1e+q_F&nC3R;4tA(;`W@ew+IMvDTUk3fF7x5Er z!N5ri?##@xdH-Cgn_BPkEv7;Qn?AV96oeJ5#C#Bn)awBVy~bn_5mA}al+hYk=_;x| z(_a1*jw@*XpOGQNWV>`!T6_fIvTZyf@AY9umGjQ_IJ4*f?q}fo!Io4(z>j;0X&tgI%){C_b0+|? zp3ta50I_bD+NEEggAx;`yaZoQ{tS%u=x?HW#J;&_R=1EDkTsuJnOq&~`*fW1Kg7qw zjrJSZtq*yCSd4!y_f>qWIqUcGR^jT+3i!n<^712$#iETH1>0r2dX$-LgSEy_CsC?P zQcS0*s2mR5q4NF*)BPL)HQu-9eTO*F0S=(~*5s6fllCWi1TK1!M6`&70z0a@YRLnt7Pe+%pmQOC;=OW_qZI^}2Ae`o*bkWL%Iim!(iv zfZ8(rDmTO?_Nm$sZ5Xqmk_{FG<`9+c)qgMW&0Dy@5>2Qef?#1AeFKQe^v&5Ebx>=c z^Z;!Z7F@u2^hjJ#UJYK-yCX6#Kg26L^DrHnf&B*-!nHMVnGp{XD1EXp+15-=Qg{A7 z!P>^OydQgo!#;-U~*gR)9D zAFvL&Cnq=I{_$+<=J7E06q&wNGFqqqY$&6uu4lfDHkxnPmRF`HhHpId?*BJKA0j$W z-~`UXrjP#ioH_i-^*wlidK02(lcUT4)C6;4KZSW@m}vHB8l;}YA2x!@MN<_bGQt0$ zi(?=u9;pa6?;^94mTXaxJS8pe{%f$3=Yt)LHkV6zo!-q5!dB}ph(jf*vG~bO5XZAk zZ7A1z-5r>wrKRE3<*k>M zkAs5$u8}6;f-%sgU8U&S-5|8QMyjRWF1+6iMW0)!ik31=~fiCd(wb-}1H<)qZi>a{qE=0fnw-!4WTK?aOhKBa;TzoXPCTG>1Rg+ROJQ`s- z65^kx&eF>^|8prEvJC^Se;iIzmI(>Vg_7DEPZcbEExcPsHD(oYYJ2MhKP=x`g@7z5 zZ-l-Ux-Z`e1(>s<1~#h_bnAmE_iowxdj1v6!%OC)k@x9hWL8qU7iGS^SlG23TK?zt laAfiUkTP8#(n!w>T9T&fx~5ah69W9FD6b~>UB)c<{{Z0rTG9Xj literal 0 HcmV?d00001 diff --git a/site2/doc-guides/assets/syntax-13.png b/site2/doc-guides/assets/syntax-13.png new file mode 100644 index 0000000000000000000000000000000000000000..56a8cdfd79a3175527fc45ba440b29960b2d80f2 GIT binary patch literal 172770 zcma%j1yCH_w(iW}5F}`Df;$0%yM^HH8a%kW3`q#??v@0%!3GAIB)Ge~ySqNlIrrXI zuj*C(w^w!V)m6J?ckkWn+v{7a!&Q}KG0{oT0RRA|+(&7300438MQ=exe%UVk$}fH? 
zfNtutl7Pw~vfY=RZ!x1n@szdZX7@!)?JgAc*{Kh+x#99Ct5LH{hb_^5D0QKmlEao> zSGidFb4~Pgc#OIHzi46%ei+D>EOoM3Sv z+hp%J{0Xc3pU>Awid3*dY(8x#0YUohlSp;^M>S9vG=!0of_(%B68@SE51s=ZKr zu``mWDJf`hdHuHlEbZk1Y!8PV|8r-W4+l|*pD(xBtDOu|y0M9hic&%m!XiaczHc2P zu|yi5-;%uB4zAnb>kbjK=n;4}n&2ds%{^~_#Q5v=58X*T4}Pc!#4hkL9elo8A#E2x zMu?=4t&-jE7I`v$9=!CA6i~kFsCjmdF^BU%ukuZ~j-HRYk=@4z?fC)(#~l$C-nj8}241t%j4pcn2q z4Z$Ef$Xq5n57?Vv^4c0m_Xo$gJ^ke-J3l;!tm|%#UZ?K7F@U{M)+?BAAIp@~Yga#e z?reGRTrVVL#^y_O{HyFIbs08E{lNAX$&tk2{AL?+v)yl&pQ$}%W@CPvnnxj%Tv+VZ z+45^<+~j>M$DMx-5-gnPiLB2XuZ$Fej*t5P2-FfEuFh^n*Ucg#P88~q#SSQ!5u->` zke7G;sgT;B?n?fO_3hiYlu_$m6Y}i(*&ujUu(gjVNjHVfI*@ha1~71%VDfl%eUyzY zir868v=jy+FwqZz6>qb5^2UY{_E2)~#&8fE_ocAeC5r_;ed!ko8Zvp3Tt3BN7$&%O zv;5IT6&qQAbji?}9N`uozG>8g>aw~<=XGVTT%%|3b?+&%d05QkV9hczoW16 zES0gOV(l`#mBk8^n8>6rnJuyS+~w#CU}C||dXlilH;Hk$N9hB~=3A@qEVerwb-EOm z{kyBZT~BMP=8E0>3dAKwi1Iuo;c7kMmU&c`UQ%cnYiQaYnoZM~2r$_&WQt~a zRZPS#AGq9MQKu1jF^d@OV%e8ezp#t4?H0L%LhZ48Gp3ETZnlNt>RhMR4bpDQ(b59f94(^fOk&6sHgvD?3; zlXMyELXotdHu1|tfoZ95VUNddaY-ayPy;vLc~)!W+g-YKj%Oqhhf9A7TWw3k{dk>R z1gU`Qw=%85U-J#-Fu(2WYeP>@PnX8jl?O|mLMtV&W07mL{_ zybcRVK_yUx-Kx!)iT^`Kfp06-!{iWNszGS78+01D!3(MfKg;wmDIJKU~2)W&D zjfAY7$LQDFH4SA85)U6PHmcwpS*9}PHMu6s;|Bg*+%30HP0qHw{!o^cmbR1Qb7#j+ zBl1@%ows7fS339h*xj(im2SGwz^l>jnmx6~!Jt>-lSj`g%?GHhRw?va^hP`q*IJBd zR@aE6;%T{A3nQ_RGT@=cpn!AjObJnq7$93(ez`a4QuT1Eg8URv`G?TA*mknCD-q^p z-F!$=CfIPi^L@c1wamC8!=S}IJX6>*BD(O2b6>~ny=$!4U6|i_MWd0gZ}X`+Uo@r> z-S)EIGlJL2ibT-8)-s?dn&julr}thg+Lb9}VrIkKH?gXZeaQ;$ls2ocR8Kyrw*YQV z*Sm@1ZhAL*buRNsd-r{>&nkfkNA8#;ORGx(*G9d=`>}_B)MQ#p2GMuF$p!p-4rglC zeXG!>|F7kLGGl_1p?!^w`xl&P^F=pp`M#Rc-R~csmRtB0Oyi>RJ|E8;%>Inda2xl) zi7jj1>*I{oS>_dcbX3w!-}H+q8ZoVdHKQ*nWil-;kqeCzfP~W-#IK_**2j?IBGT;X z>NQbiKFv?rx*T(!Gdv`?%Uk5=zCE1nS@9E`#GxT5*8b=A!}-Z{BuG8oyleY-x+a)bU*2BIYVfL#S{$$SX3cZ?zPNTLJ!Fnf}JSyYq z2M?88^%(2pVx4J^6X75F zKAas#^w;_`JtXMrVL0X<9EThIm9L9B!#CQW>_1Li(2$4IUQQ@H&`duJaVSt=dgr*x z;9aMMD82T%JJYPxloNNgzD(D@DjuHRUJp7CIJi-3bbJ&{Ef%b^8C_8HX*CRAGFcF+ zu~;KP_ok40G`}L}SgDY`rZ_rsSvO=7tW}W!@d{B^fu< zr0ZkWVrq>cs-x%w&HYVvhh@S(RPQ+1C34n7t5c$Gxh^#Ho1^Bq?EcMmO;Tj6f-_Su zC|tj>Z<#*JZdCKe>mozU%6~SaVb)>3ZIS2m*vw?5!8S7?6=m61?O7 zw25*!vUFuw*vbThq7C+@D%a=g?W`m5+FC+#YnN?Plj$;{Z(n@ zPoo$6W`wDSz-9>T$=1vg$%R*$!uOX(HJ^n=;QtACSM283SydT(lkVpNri zYaub#;6OSpF-3GKJx$l2(vQ1SClmx^3FFt1jbOP{DK%Ewu3h}4RKl(?#ZL6IWrTgu zpw78oCi8NC8um97yX>fjFGnzaGo?y*mATAFe}@7$HOlr*Lneiz!l1#^YhiRG$Dlp% ztB~tKnBIt{O6gIv`yMq?U4alZsQqu})c6Efb60Nj<$m*!kzkXKu@BF3bJX?JYmvDK zUR{aJjVyxJ*d}fPSA!uI=Sy2HBfBOJaKuTx@bd`}ZO)dp|g(v_@N`U{}rB8saD|ulF?=#~ie}xgp4l z^!?m{jp|L+7K?04LarNOdbjls;~!Egd5V80|M0h^ovijRi?sGS9CF{Es?-wegZS-5 znbHXmMk@X)->8u!4N^7ly zzJ%gv&2_eu@}}iWOt&hzppT!&>caniv9FlBD%jJ4X)9&h(YFz_#*p(7dxtPplD);_ z>pQzDzk4$Jiaa~xvRR5}nr(AdtyXB^)@}lGY@M%lioJYo;X2RjU}SVzaWFm$5$|{Y z43Yfm2ryOB`lZcgAlJ4LX3d=Ov+gw6J0WnySQyg*+vY$Y_Ni5fj#zRnyVdu40_rit zqxxLJJohKnz#`y4=WkzX)y~~c4o$Ol_Z2xk!}Y=JI%J~l@wkOsA4#Z7y}Vw0fP|zy zg%6Pu01fE`4xQ2^}_Qv~@5i*Y@4){3@`9hx?dw@1vu&*>1P( z*!XGOca(E_j#6}q(U(eRZ?e)3Oaa-2=ObL%2X<1UOkwX$Cwe$=Px43retC0VM_5u4 zuCej3|NaZO_$C5I?x>UD&rXoJ>xw3|jOU*jOd&XysJ+@6vvhMgWNh+~>n_REa1h1u z@!)^;@44oUJYyV7I6vo6mR)J5A4ht7!k2HUPn&9anr+C}sXC3Sh2p_tc<5Q<{+HIN zcuGMu4_Y8(i8{J|jK?a6f*ymI#qn?&@h`y$$WY7vR2biq|9-~lqBoh0 z!^2I@QMEYY08Ij;WvyyMs}y_%YgO( ziCuMZXM=iL->#hybEDsyI(;|OZC5{%x|%`ZKj0sM!U2N6Ebv=uSE|O;^##2SXmRpt zZCIUa1B95QlrDzw9G90KD-9Ye9XqO5f8l?!M8XWbhb{MOaV)~gl1_~mVK4~9UZ~Q# zxC*@_wMR?u^bg>K|0?_p!Cgt+0EAofTb6DVV7iXY(O%v#=z_-}Or^6>2?8jWDsCDg zF}cvmy`v!1*~o43s$!~tSELIcP5FxQ${llWn$HqqvfRe*rt-_6)+!m8hPlV*y!6%-LymyJ4)?mjS%Qte=X7vOSGkZ 
zu@jH$JQ5WN6DW-zla9WscTHCsm~WOE6{Ywc&04?8>P^8#!#qlF{4giq#4I%16~;=WBCt{@sxXmXiOQ#qawHVuFKq9P@^t9Vhp2%yQ~=m9keU79OQHyf1;O`QmV1mi?6D&CY_1E3Vhvc>$-Z z<&Kfm!tv8d0etHcxl*(YB3WyQnbI3gLc%yeT8e=fxUm{)q`(Hxn&Jl)&siCfUU!Ry zw@;|BXfkd;b)OHD-XM&GER@iX#6Ka3qbKsEOnUbdRxBuRw3|f=wrzBfXjwiCKRDgJ zf@B6pYq6Stew7#7sf1`q#%)y>{W>a{TkLVd&|sK}kP$79LN7PZJEqopcn}3u@BIse zt_J=gMMOj(+qQl6lQc&CBPzUCgXC)snB3VqXdPBTQdA=zaI+fvs&3PmFn){UMp4$K zGQUe_VJ|W1dbYM{N_J}?XI=DRhek(f{Ta`8-fJmnxx%oiZ;@sH6TIK#VNVA_rlV}3 zE&Zn;?(}>p@Nt6B3l(*ViOQo)h(RkVLvsL(b-Dk>8BrmyjRNRT~x{zoP9y7%i^`4HHtn%0<;rvVGCw@$o?mzp^h_g ztDskVvu^m`_$?V-$uv%w)k&&F`*SDu{a3#Usi}0X?T!~UGO03tV`N4ksQP|w`r5TZ z`teG^=lQlOXp_X&^?9LoxasNfsE%J8fFm2L4e4gW)<)g@X__$9m$4t+xa2Bi(RrTT zes6#vfAELrNwrGo{?etFiRma*>>*z0E!A;V`y*Q?MMQAMw6|u&nBMi&yi+FK!1B}J zGE|ItizJ0gpkP;~qS;{~ZO_wjnaQr&P)~m{o^CW(t$+E7(E3oW-SWk8(1 z`cf;aWO`@}K#w!>x5L!4euPO)P7_u7MHYY`-^T{2eIg;`s922#bHVSlGu1HBDthtx zDR;T%i$tzO0x>v7>~VJ3;n^ai#$N0-3n6mY zDca1&{3OK~WqAu01^urKM3BB>{tdlM;V) z)#eVj#P9~$Jl;*`y0rRTarM_2xBAYl4PysAn0v{YZe43>50{a5x`B(GY$dEG_a>(p zx_a11h#?;^xGr|*Kz(trlT8<(h%V1q}yUO%dJO{%{$Dwh}l1_`os$yQj2x7&P{LA}hu#dYc;b)F!?^h3p(n!XpVde|wD~ zGA{|(@f$BSf zJ0@G5G@ctCEB$aUXxQNWhpN!^|M*c1lz`vg&#Ljys?3yCy`^yS82xkq-<)k56;j-3 z8h+BG$(7lqH*qsEGKqg$Zh_y~C-&INPM6gr>;acVos<`c=jkevMNc%eB$o%WV%e;R1o&O|>QfmIVh>_=)c8H@KYA^F zWyG2V`Ju8>Dug(W^zMK$$bqy+aTsq>7@*6aJQ}qDuK-fNVds!2|5M==T*uNnsi_is zH?6{aHOD>$CNk=uxOU&UAipF(RTjc;e!1_>+BenQTGPxD1{Lp_5+JV386Sy9Cumr_ z)=~Bd<5NIA>x^)I`h$et5->BEEw=2j?@}M>qQ(KR3Fj;Xn@rMWtdO(T0yqHL)ib9p zJpo`ZfU4Uj^Q#f>CtQ~b_TIU&dcfPmA%>B|oti=rE>XHAs$(t~X;yF6D!jK=c9SCF z&%Cs0*eHWO*evF=Q-Y53qeFOsT&`#<>+_`g)@Rb(;`t1ZeZ}~Jo*63ul02uj{6?)d zJG3Jgw1N1eRccB}R~@+1dNbCqjSXmY0ZI9YI9h_i;6&X~RT9IMoyZeHcjijbB#dUW zMbtDuX+ld~91!3W8d^2CLFSi(xu)^KT)t(Is~(y@Qn)T*I7l7HFoyW%QwZ0G%#b*5 z%0~vBh+`j5Id&q13O6Kugg>>by*|N&0KE^Qi?Kg3@(8Ft`smSOHra)d@;m1l&fAfg zei33i^6!qN^lG(K`alW9*Q3LU6oeD$u6=UQ)$j<~A8ec?f(+2bFGp@>%+u0!ikIg; zZ3ks^j_~F9O^KKMf<)XW!a;`PtFH|mb_*v8gUf?Gt&a%#nvJlZs#A{;M!Ti*+|K6tHiuA)WiAA+;ank( zez+_07j432vuvc14+h0{dd=m+5R1i9)i!Nw@LSwtbB2AIwRNwhKv}Bj^5qWFI*}>6 z6Y|^bo;yy16hmDSbEGgHT0%YDyFjI>V3_9eCX04Nv{_A$w@cX;+hF!}vW|(TkTgTK ztFX5eW%(PvBFFG#jgYs<#AktDZ-@io(G+|t;#91IOL3c<*KN-t`>`DD z7ZI5ZPkp0loXTJ>fZ%5Bz>b+O^7I{1&~rhK$hz{Ruw`8xhPIzx$>-lcjKAI;5rKrp z8|+j1pB(i@DBND)5#GBYNoA&Bcdv|OW(2BL5ZCpA8rgtrWe&K!!eAVUwRcb6q298e z5csV=O3bCZGlCR@7hL-0O@`?T&*^}(EDG2Hk1z)zPU=I=`kv1y1D(JuYlI1&4Hi^u zf=Yn^gEqS-!5$E{zg@a!R``_Imvp++%In>fZcPhK1 z1{o%oYs&vJb7ag&TSpRBdzMzEYUCoVa4 z=DY*C1&Rgz{6R*fdSOZ0a@e7@tib1Tnxx0v0?Jm+_R_2AeS8( z$~H;t<~c)IZaZMrb1NhgGwTXv5d)C(1P+W!qvp<~ZbV@TbaeESy4`WimV$RlZ07g8 z8qkOH&VX(GTl32e3f8cXBR%aDU)G-7f7W^S^E|J3D(s5rN-oZX6J&g>Z?LX8x)o;1 zW%iXt>Xd6Xsf^It$Ptj8asVSFmMTGRq(@r<1PZnvYU;l-@1M(*Ab&%Er!noP{S771 zQRYL~I#3;GZ-c1PX6r04;>z3Bu^n1)1{I5$vT3V8p`CM!?YDhWBRO1dB4WE_j&-xH z&zopRtF796r_>K0CES1!?kHda_yS!pUDBlYN4}Ma4}M9st55nGeL{{=0MaGjSnFe( zkrDBN>v}h8j|loDHSo07cf>nztIoqOU=kO|3V;vNC4L^4J;T?(Bg_Nf=RK=pvnpEk zGQXS3No6CXWM7!JHxY5V-SM$ObpBfC;SVAW?qSo$jG=a-4E_cd;)U*a0_?zoj3)x1 zsh!4}C#6#kQpR`>^~zma1(J}5aE`8kqFOi1iTsD8Y+ZvofkRN<8AB7A*{4wTXg@Aa z*h(zr^M;sFmGL4HR-?0u=A++|ZGv_cu=^KYz%Iu$AL~pC#yiC7K7ql??zQNw z@x7v-lZ3JZAXGkGvv3KW8)DNd+S5&1ZpEiCWNQnpuPu2M${1u&s=1YvA(3s4jSLu> z58iy_m8w#QtTUy@80F-CmNx>R|C|eq?~d?emieUEyjwZK+$VK^qNI?uuWcJ^o6J3t z!nbFQTse`B!QpzRFm~ec=2tdLo<||R_c)kNJkSC|oEr8HK@%_WD5`v$Q7kV_CF(6&sgCv)lb7LJ6xWIN8uYDOvd$lw zjQa|P5jF3H7y*&lGE7371r$UaXsiWU%oxH-dU8pp@L;LA(|hvBKHw9W#2*=@-Zq^jRb%N36>SmyI(^<-cG8dxCyIH zbmNm+4)Af*E^{7g{-IJn?N_zwX9V~KDCO|Md@uJiJO7C4{dmoXTt>d2g_@%r_1;V@ 
z-kV697iyrD6se`vG&fOf{CB}3;1WktF2GblB$7);+>ZXn&r%>%@|$EVoqSWr8S=n;O{0enN;;YQMdCcT5I_k)bOyx8 z?kCg*-|JoeAo)d$hmC>+O=cZU*oX=F38-vOlEA0RBvlp;nUor*hpM;jTMeXr4u*cd zVLbW9wM{^mBo(`Grf(CAYf|%Nhn*xZ&p7TZgGn&YkTUMFJ7SY%vKgWszd7Qf_dY*r z)OawH$77!W79Pz6U{^@+rX-|e^s2X*wAPNM^NI7*G_I}Y%!w3{ZUs=7ULa%cVVv}8 z0?xI6C7w}~#~rA(UPtU%TbmIZZaW1v^}Oa_ci7jy4thF#E6D z`3w~6j>&%4_8XUAq|m?fY%W9xxpohQP7W1C^dQaG*c90zTC(hW%w;~gP`L@UF2Fln zB2)$|#aDN?&SqA9%4oA9pf$m2xThK9UZp_ljn(52lZQCZ#guy`CeZRY$b-%Zys*J0kX&N}G+ z%YxzLaWz-R=3|n0c7wxwf51^*x#~SE5K?M4RUV=@B*_B5_XVN!p}*W~)(5$G3%VRe;Ax=|1RP8V2h?y+2hZd|2#<7n84ZBn%-I3J1CVg*flW9UV zp2taVzKw{}e%fVl^oT8rb8?l&8gx{PX$u(7BQT|5@91!mx`O9ld|nWkMMTu|IW?xNGv{-xtLc-z#io!W2PChd-Cz1yX286HOrn|Wce5noM_YxrNti|5Us`{(9?Q`0 z+g#x7l%N-JMm7})mF4Q9npam>f0d@LhJ<=FexIf1R{IvDooCq@sjIC$m1h24^^s!- zfiup$$FE!|-@t41uA*^?mxZIN#zvCY)FW(@PP>PQy$_;(TohP`;jb8RC`y7*pU2O+ zvd3sl2Pe%$lL*=fH4TX54r?*4^Dc?j8olNe1vZZ(H^6H8-ULZdIZ&X=5%{lB>!NfWNk{wGFG)W%n!A$INrNJ}p`uPo^VIP3YL-IsHZ(B_1F7ND^{$M~yNh)0!PXQd znH0t>L1v?4Q}`Fg^p!a~dKx&3%TyT+SP`rcmp$lg>m z%u}J=+$aU(rvxwc%M5g*lW*_~40NGWtKg;%o@G*YNSVJ+#UkqgUM?O8~u@ z|NSYP4+e3nWcs%#o@;Man!X>!f zYj|p{{yrGlpC<4XWjTv;_8)?nYy?t zQ3M&XEAc4)PG@LT4C}%A^%eP*D*UEsz!FnIu$+CwO#kER@lpj0ko5h7E^qzLOikfh z<{&Qsu=cB{C|wI9_(09rdLV5zrAI49{CW9li_d{UCkAL<{f$v>O_bG=eTzvK)zp)N zF7E-*d1$sh`5kl5z@J$h;XSt@14=Q4mkon#IR-N38yiPCx9y=uex*8=6jKI7Ld6ke zbR(FjOYTrknd$ESf_SH7W}dhCA)os` zLv|Al!pP6Q&2qc#A>_QU&VfE8=Pxa4Hv(R*i0weDvm8bUX7yBs0#W0~J_h)QI;qiF zXX2kNw1#NmMcu~^do%`!RM;AnzXR%Xl1xBpPD;zRl`trHbxlXdak&??qdKR?UUHqp zlxPWjZ|qW=Q=9!cobvnT8#wl5c*<|%a@P^|7Sc{q=ghWJkl-F^r3-jxuSKB!`LYPbMk7KwO zR5N$96-tCXWrZFqb``dk>5y!2;Odsy>2Or_yj3-rO~hw(cO4seL5XZAax?N3`AO|4 zDG7gH;h+PfnAiJ+3c-W7%mIk$qJf+=Pp_i}&Ry=ubS*WT^uRM2>z=KMAJnrN%G&Hg z6Q*j@b{Hj`7CGy-wVf(gu}q#`3ea%xoU3%(WMzJ1XsC5@ z0$HcS>;wWA_~(uAu8+q-8#Sgh&uFf{BZ;7ZVst9^kK(7FfUpa_IscP93FIk&uIB3- zY0}?9`visdBYD9`P1`w~#9B%<2V-MnAyJ)k*n#{s@5uQ+dPi6f^^b)cvAC4b1)5@a zdfH7_R+RU~4MsFF#DyxUUVwv}dsuD=r$4Q|!@Z7e4!oMRUOPwBp_fQWI!zr|B-X>N zrAlO}iE&w%#mW}&n^Vf=9Ta)4aBAAleg37FMyGI8`XS39yUtJ_|5&Uo=;K@qAWaiab`%QKxfB+|pJSLyq)XL)$=$(M6 zs!KKpu`Fv^7rouU`!Ii}uR5DwvsPzwIr}AKj;B6089_BAB=~T?k+hv%mtRGAd{%_s zM~U@=r@vL`Ddj^uB`RuiNE-M(;X453WrpsU9}8ODe*POz`!}3!Tdz*r?^@;%C{EF+ zk_Q<8RWlFPso>pRZmYdV9!znobI_i+zpUr8akG09MThNKtG>wP*gl@@Hx3=nh#&v{AzpdY7JStMD&MK@Ab$ydC8W+t zvGd@GoVOzEUQ72m7jsi0*ko3YgMg!%E#+B+5QCiSnD3}^+{3xB0b%$#+6!KXb1?^Yrx@)$U6nAE~IBP z&w4&_$H8-0FJ$|G=pgy4E+ta77##Ui6iR;^Z8F6Nv z@6?^$SOFsFqLxK%0Hlp*Au?D@iUtM1mbernMKaWq08}iMOvr5fx^AA?G&;@Dw^~&K z<8i`cPw5e03#CMI%iS^7NTHdO{(1@dHJ2v{e9AK|X!(H(kc|JXywq%81RqOOKl=z} z23frJCwpg}x`VA7x`sj(PVfqyB6xCk6}v221X5f^-%sYr-??d%iEmi`soEH743aiu z`27Sza2}+kh^rE;lVW1nnr3&Xe67UjDNu^yQeKVK%(%&Ad~H5kXMV4&*PXv_+wTtY+oaw})s4ySR9TVCbCv&3g6Nj;_zu5b*mhnzU1c7+JAeir!^OorG>4 zD~bW}yEK9FLIvu#*6s(CddAorIDsqlJY ziRIN}g*_7?PkSBHyFz5mm&7U@)ou^F>5r#XOwO?QVAse=q6YMh(&9Jqm?RTK-pd`} zU;@zjvj4df0uaCq#F>pU!#lF~m5o_o9xi$5`Uptk=9cg6rWEHNEFYVr6S(a$}5v^9=Dx|oP@|F{2uTNZA)}hJk?RW1c zXo1zt1RX>nBdn=9k;pDCkZQazkI*|;HsH-YqO|-lOqHI2p^Wjx(-8`oaZ4@ASPrS(U<4#hC5%2I;zK zi@!ZewuizWp#oFtej`I<2+c=}v8??gvaZ_0HFC(N?CwY_i!^r9gdWA%@8y-18?LM3 z222=?yV1!~)h!r=JNW_#bf!}v)Rv9vX73>iOv1>A1!n-!!6a7d6cR97<;^^SEzoY< zB5|+@gwGc_#F~qcx`urE%v1A;JmPgh(*Q^Z)JnmR!2!2^Wo=$ zwjgB!ge%}u-Dv9eYWE-6Glz2tI|J0X;+(nXpk%%G^KX6H)HA()e70XRW(~>rdtq0i zIDn*l@@;R6cB{U{+$gRrGWF^&#Jezw1)f3ms?Bb{XCYBp0z;Z*@GcI^Ww6%%a?x|; zNQ#wX%;sl8Gke~7I$Hfk4P5u_*Jg8&)mezG`5r~T!Alq&Rgu|ful1-R%ke+_i82rq zsWJq%;%g|lj~a8XElKC~!QA_W3g-&}WmItzVKJ#vsjf_EfOGg(6U<8X@5{G}kBybx zy==MFTS8B)nD-DWSH%YG9zj>_jGdp?SRZ!K6Hsot5YwE>X;r0f-hH-87?vhp**6MC 
z*VV&u=OTo$C?~oGBdX(6ukmwG$#fy7T*xDk19;5{)j-|vedxOgktA1Xtrn1)KISU| zY(6=BGh0v8QuNtJ)}S7s`-t}p$yn&#*9+8O#}D0cd-8c6yjKOuF>3U1?GZ*k68!Dy zv$2*j{8BF5_o^SSGP6H^NsE7#Vn;%{5*U%^9{0e5@6D=NoY06~*fP3(5KR4~8wjze zd1L!JgDMQKzL0u(8)+h42IWG4<63^QKlN<}-SMy1fG@;(BG~Izw(cYTza`M9Mil~$1K3Sn$ zJUhmXv^goS-OTKFc=8HwVdAeCfvL>g4k44o!{R8cwfrhVI8p;7)OQJz-E9@0bI+R~ zDG8*+fUU4ke=m0|=k@BHDZ2VWy~5ELsigfBp500~PDV+vm8OMSZA!oTQL?gO#M+Ee zm3kCLSYL&-k2B3WDS+75E1tz@TRHpC_X7JNE+^g!j+Ez9|HD>2&@!D)^jFZ>(}slD z-+;4%op@UKae^nTlMD3(5r;`mTviKc41j{OpvPJXGMIJ3Y+!^D6j6lYE;S zXXJQn!;3i5VeE%mMqsI1R|3W0&|j9F^DXPe03I-_p@~aTRAg#>{+&RIY!AYMsf$nr zYb4*s43onQd<$8Q$4&t?)m4uFEUUQ2`UQrINP0^;As#)h7Eqy15V6}f5~oiaX{Ybz$Eh8 z-UA`KGzq(Y6tBuDdZsW2AT8uhPuD-~%Lz{uG0=nnDIeM_gIlI8v`3+QoiqZ1^+hDEjZ0lq3gIgwK)E z^*p3frk}EYo?Y+F&PNlAO$8LY#3`xA*-dU{IM%3E{Lp!Nx1?%(_G1v(TYblx71!Q_ z-z6xhLOdB<^F}pZcm`hhl0-o7H-lg7EEz}`DCQt1w(ettQANewsz@7j&qh&%ITV>1 zk3kDdMOe&b)L+kdb^Ln%xPTRLm0_chayZWE#MR!u`wu<_fArU=*MatGp1N@QvWgk} zy%)Il+Pm%VOIlt#_uRRloxvbo+%o;~q*HI=w}<_)`p84UhusB%hT9P1#9T9T%curc z)SKpwWgM*k@WPIqNO8sH?71Gx&CY+c;uPD2)FM4IoL;CBXKqGky>;`fuC9OQ$vBE2 ziK{M&1dTBHwh2|b z8Gm08nl#Qr_5@q;fCw2_=qarN2%41(40W}n@h(6!55m1?iPQ%C`R<>5)HTO`mi4Jl z1k_o&TZ_SD`#!liNa_vPRuvCa`eE6nX}F^Ji{&~M#Ntm<7&dYdc2hFjsrg$wBo_?; zi*Ggvgr8y1J#G+;bn70}P?N7#HqU zgJ^E84#uBn(j_eDm|$j5jf6%pASnXl%vbR#t029&*f{H7=rOb|hHdhNf%AYLHC+XY zNzu(f`_p);8aKeJgQwzB=u5^~IugD(dS&_#X}w9!VrON5Swp7Cxkf5`8LYu!ZdSQ$ z1hJoMj>cFLw@KpGg)LeqO9WhneuvZXd^5VPeE*wdr^`p&hn7TA9h&qsK9pOY^W;kV?gz+3b zezUPWfDy3Z0#pTfE5e5`CX`rpM(4g>OPa7Y0}KH)L)ip?La(UJYS<*^Vu~@4L_#?XdR-Ae9+5B z05ey{jeI#<;YFj&NhDa21Deu5OS2FPJBN47#ooA#49C*64B7m28_8B;D42FdYC$*v zqNg%ppbp=-NIa0b6^YU`0~F`J1ot7Uc7O)Z>L=5=XMF<1Ypy$VPf)&E#du#q;ZS?M zhWKNjNty=TPqY0Ve6o4o)H1b@xa$U^`Djnpy_F~V^i|F_42Byc#oQng06#Os8OH{> zlBAc_fBy^M&7nJ4vcX2M!oFERgfB`g;nsM|!9joDiVA%Cb-FHD?)`id)`&hoL5~zs zJOk1O5f^5Y2BQ{16(%XKbPahXY1M$qNjh81?R*>`ctM8F0?F~JF8fp|Khw;7bMlzg z-Qb4!g}wZL#o3K1PfqaMdt8J^XERN}ZWWtj)9_z`9Ji4n?T=rxuG7!C@DE;wuP8QB zLB!Rz&r_LySJK+r`*j1|cdVg=J;`qxYYQU&A|nD&laZ249~s%MO2bH#zEbCQ#tJ=)&*R{1V9~GFn5Dp$p#{vO4|zT z4e&tI9sKwirJP^)Bt%FbVxsznc#P|m7m)Cx_ygb+o;`nz9mIJJ7GN=6=Usr3l#M#!l5gb~ z7){s!cvZK5>FE}{;!Cds?&I%$@V@Z`AT+M=H)`1i?L!I_2owku2owku2owku2owku zxSc5A@iPDIu{Fx*j|+eM^5+9{{zUYfhxkM@&i{=?#BH2>g`-9MOPI`4FJBKV6i}l7 zyCgsxLoH%|Zfdoobmb)?vn5aKoBFbwl3C39&Rlnk$Pd22$_Llk;u3W$Cs)C85uV~+ z&I81-!>T8Qahc8&%L-t|l1>7K*n9FUiU94ga>C*Tz^H-%6u>UP6s+750YLJUYCs{& z%M%-#+pRIKwl>izKoQE5_n5t!KY&8S>#7*XZkv=Pb_q*CjAgZ4}pV;--TYR0k zo9pEP*C7=8?EFH04v>GUT$c+mq$=?HT#ZJ>EGgMq?cRBHj zEHRK&AZE7*pC4rff-Y?n~s8o|gG1ZuGztq=e4_z{adhZw~dWPbo)hlk1k@ z1`As#Kc2?g)7C}3+$?m@b$==ofax0rL$90fZF)mgu11RpEdklFvh?FZ^!k@~%==8B zTzQ4$n1zOrpt0~3-0~GB7ATPKG1`2E1-aI%k^3>htb(c+7SSK9SE|3T)3}Wl*UIlk zuZcd)e`v2*@4wLuuH0X^?D_S&a?79Hku{Cz^WR0z+|&lrf@qvN;8lEai{(}H{e^kb zk9+>x*Pm|ayQM$(g}+PC*T2X+Ox`JjXnns(ez)VZv=a;S=+Wp)c-(zeDatd^W!w15Ck%TgZMa5~k8r=+nVr7E0 z3CiY`B;Q}T*0HnXpKD?gNf2)0f8m12E`7L=;m(IJtgbqvbR1VqsGC8@FG441Usdt9 z&YjIQcfP_x{YLW!9O&4wgKZeHIb*vX|6)8BeLu8fV-j{@B(@fT;4MC%1lKv8!)5WJ z-0up{rK5R#Rmazw5Hj^g0V|h?n_toH;-(%HGm83axb@+p2;bq3NW3*ROQ@?DY?b8R zdid}WZO(Eq=6)#)CW7%-l@-y@p+jvW7oH>W8o)S!mviUk#wiy>dx!o4uo2Jfn5<`z zF9`x{s%x_qTtRW+EslTKAfQ{<&hCIVhuM~q5xI{^yS+C54*(%~5`zx8MJg`e;?DC7 zF#>w`>ScfpZj-B4t+DzSe&%NX_T`21*-QaBcT}4MFu-2#UC55MZQJO+QnrCb47`jH zh}eat$!*)$Gg=-1XaI6HJt|pD*67o*V-;UJsCTn)0;!ufU&V`8 zKXLUL=5F1T#|CZ6vVZ^n{Jsrs0K0LsF3JV@>YTp1GnAEuhVz%SDH(3r#k3cT7Ff_S zw~U5@pyZXbz{Kvp#H7qLptzV!Zelfw+gV98W?aMdpF~|6DV`=fGjo0I+I8BLYjsEeFmarVa;b-R+qP|IR@;8lmFTm#!!hmJu1#CpxQWHUjj5@&aZR*r%!!j5 zC)?(5%a$%P%lhc`FM523;$$|c-@xRWpX1zUp0qu4DLfa{;PyCw&uU)5MrB`BiY>;K 
zwq4*qkABA-u5FvvS`4jZ8d)l8%op0WZf)Zuo4i3w5?%nrw%xBytkc1PjAYI8@gBe<=BVNdBS!#% z7$1rCeM$?UMU9Kss{;!J4eQr;T|0NN{B~%QvDK?rn|H(Yo-N6B=Ej{mcXqjo<;$j! zpQnr^d5^ybk09m&EKpu6{D{B*rhJQrv_m_|b=r(gvsQ|Rs_ae4{zfuk{*Fgc9c=)X3q(T?@sC`PnaikstiI@G~X--8iMO?XvdvyKmdBg(ZEisQV~i z^A^qBx1WFB0`9SkALp0||K>Npbt_h`{L%$lQRVug=cq%21`XZc|L^Z8M%xE&_1e|$ zAOGR+-F)qcH+kw*n_xZr%-7`BRmuJ9zx=ByfnW5UpZ9snnw*^MzVY01ZuXqn?vu$= z3W20ImNT6T#s03rgKQVeEeaNXGX)C4$p7|l|3*7!Zgi8UOm*M=r++k{j`&fB1^7&! zI>q?r&k@umDE)}x!)zB|1~b~`MiTe8Xx_s8)z`o7-gxUR+kvsRg5p0iX_5xBP6`6A z_+_Hg8`@}L(ZWRnG|L)b-&&KhC!c!C{l|}gVzJK{%tG5#vATa@enILg%f4NwebG?CusNN<=SO7JzvQz>Rv#( z@BYI-*zUf+{oU_Aw;W(#X`}kJXwkv|7k0-5n7ma9rR2#c9yb8_-+%A}DV#;eIz(yzaO2cSJ_)Mz(q*f0Z8P@LF(_T^Vz)x`G-;6=TNbvWU^`&_f; z&E3EK+rLXDUcGroN$QUrq~Ebq?xszfY$2{B&-r_hnQV$OapDAb>UXDXM_&3f3E=mk z2On^2)~t2Q6vq$x-mVFSf5?!#-2Q$0H8!s(a=t^GwrktgJ@n9nwlg-nE&K70Zx$A4 zlTT;QvE7ru?7AB7-Fc@QJZP|c;lROiHPF}hPr{sx9Xr}B(SqC-#RvW}==|UQAAjRcW}R>~ z1*kmw$fGipXm9g>Lc2_xHr;*l$tN~8b!gw-{h$B)?~-GOZG*7y{N3Mb@p`rC2o~$Q zWMs&{3;XsY&7r#Y=%%=|&D>JO*vBnxvQG>p;yP7VY$g;ntfqJ|7CK4QwKJv4K%jKW;}d0I0?saiHMQd)t3*dM z$pRJUo#cnfEWyNY)Vv$|4!vuzDLCm;tO*EToj6~jC-i{oxLmQyVU^HtEE1UP(J7A~ z*G`97TCvDOaLKQd>dkJQ$4(y8j*wTZ6)gspp2aK0BFlY5!w>hjrJyQtA~UcdBW>h zmjPZ_q6i)4>q_N8V+8b{g&5d^AE8@aI+oR$;=tlx<;yQ9CCSMXn=hC*5E}*;3oRplnQdal?zJ@2T;yfcd z(jUCMpe#)rH&&Z;b02*8p$%%}YjN+?j#w(P2?Vl|ok~fx3H-Bc*-Bf1@-mD1lv7)^ z63EzB7JyTyPnTthtZ^kn0bEJ&8&+|&2OtnTa#A*Ubo$I$a!Jh8@YG_@X6CuWtdMwq!#S1~hhYE356U>4_t+5vlCi{MUG%K z2>4h{vYFL>Hpz&M73y2+|Ipa;HuQzx7(3{T+>eZ#q5ACCjKYk^%P$*uL@kT_y~ut4E(CL@&K=wx4I8=-)n0x)jJ1h)LGl;Avki4rU*t0Kh5IN2 z8>!&7%W5v0iM;mO>o)fO>%aW7x!OWwpLl;Itz-i@(66wjG`>?Apbb`l)nq-yH~`I} zRt&hjLl=0N=TD38j*0eODe&(o@U482%gB$adhf~Xto$8vxsM%knFoCK*+S7HZ$l8B zXPjirMYgsOP_+ z04KBuAQ{$`(`U^xAf8G}-OL6OTkS~avgwBk!* zxsWZ|c#A~lCs}I$ciwr=?cBB7+MIXu4D=p2V1SJUsoG43Oz4EYV~EJ9%Rr zds+R!-bn_m=iE2l@+fsFT*k1+m6JY;#{%&x*&K@ z0}UF~cd!2OPmVdrG0iD-QS~jWavr%f%;JIVsDE5rL4q5&B)jx{y%NJ5xrj2vCT77KVs~{kH|?ZgXpK;+6VAGHXh2$ zN4#L6Im8$AI}-hfd(LV85ncR>UA_7*3lw-A(7##0X6&XM&=1kpGUR_l@z)t3Dzy#k~VRcKq#ucndD{HPo+|R7+EPFT4w9%Y^ z{xwyA3v+SW+b^mh17lt#rA?R1l~;_(0nvatQw;$($QWor`Mn&xy7mRnkS|_X^jkb9 z*m>StXwe?A`E9n%Z!N}c-s4ywT`r}$u&n+WyH05nt7EdlfuAW4JWc;bA3;X2kcup2 z3?`IHRn0Y+GebWwCz*%hCBi(7a?qd6%OHP_qDAJaRcu~?e(3vpjPJ^tXVY1YAGGTQ z@dD?e2mO_~6zyewRF4NT$QSOx zdJS(BODb7JPDbxX3c7wyLRu|)%B7Nf&4@9y-T2B z7cI)ap?UP7BZq8pje6#4jK4AfmEM(kbqqd8p ztLFoMPvLL+Lp?1<_UR)XNaOtO%-uE@VVw0m0-wWw%-=8u@ic;W*bt8~>#%vsFgBp; zG7cA?m{Zge@1u7uT(rm*LV2IBD?F1VPdxg=k;C8>%spfIag9C%P@7b}hIqi4u}XA~ zU;LZZ1j>hcG5!$iru8a}R!bJ;B&hsvhF(CU0 zA0rNSwUVS?F%HXkhCahwg89v-vu3-4U{m>RDK9PSBj+X0tC%iiI-+Pt;=Rau`0glu zS@Ry~S~@{vtoV|COuJH#8%laVW$@*Q9$zn?>jHB{Wk6xqtY7DrELvidUpi{9o;}^W z+HDgPNfdyQvWh|QV97FW>^Kp;oK>C^haje3$TYP06Ts%@Sqq0L@d+$AqWiM z8@INU#fcLq+Qb2dB_*j+Mpof(i061sE7Ve!OedqJ=RO6QTE{z@(?Ao1)rJ$a&8m-K`^> zlJbIbfolr`UehK`-QDBIxm{Yd#gYcWkEP*9AARiFNEsTX$u3OHz3Vn?bZ@@>h9-$u zzzIONUK-RFEm>l&VRh=%b`$Qu-v$nVvqS1gwQJYX>cnREzS@iScwoW=0}2@!_sLXy z%G7D5?4UFa)yjD5R;{ca^FLeQaAl=U>k1GUC&3Ru0BDrbI9cH35UqgEHcNmfo_O2> z$P+ItNdrFh`&g?w*-~&xDNlCWB zXwIBDE>V0uM5{7F)T!f!Y4VM*VX}TeivBy|b(EQhCq3kr ziXYd?A_4k6`Q#I}(gYnJdT63eAZWmB@eh-mdGqF(5aoF2&>^l>%N8~u@6OD0Q^i;C z5d97zk2!h$dUXY?#0L=;wx(IJJKL7+7J9@Nh+_21)R?Xwlp(OlHpFoiPE9vc#Z)Vr_y@Z2@5Uq^$2| zi~bw6($i2@EhEQ_a`WcSH#b_uWv31ur2N)1#deF7%1>nF$e`|pDHEZ-q7$wG>_*B& zpl{#4a_QYa1fHfq{56N2QL$_NBYokVYXmxU|l%+o!IK(PoyY_7bhJ7MsAEo1r zmGh#?cZFJ34Y_c%ktNMYlXn0{+r*o%z5a%TJFCcjY}Javp2@>TtAW*RKwUH!Us3v)D$~3oP z)k+S15Lg!P#;LoNVkG9 zV7C!%MvoX_zzv&sEL*-plh0-DzI*R++ji^_56(4t!2^KljA2XEF3XlLQ$2{QS3zw$ 
zz->v}>Nbflk!O8+_i~*@FZu+E{6Wc(4<%bO)vr5f6&hZiJb9{FUA1o8!u1y4G6_B< z#frXxWd-z#CTQKVMN2nE{ep5ZSz9eRGkw|&O>j?Y7t~>zFn2W<3c!xkO{tRk;=Al@ zn{*5je>85~MCEgGQLk$X1>-s6MR9FbRu*G-_UrF@_3o*B6Kyx)x#Ei@OPAP0znRA4 z(IbZ&z<_*Rzka>S?Zn?r+?Y|LU8`0tZ36zN+MK>OL;YaK&K=gStkh!(1JJfj@_x#+ zsggfetTB1txUpkQQJ*(|o{jH-977~0yGdyP2!K{orcO8D55TiktJWGH?h-gq*81H# zTv?UsYJ8gX&_k|e>sA^ks<|g0f6V4CGqu}uT6&sIYUu|}#5eRyTqLH*$`6oWrp8S` z0%SO2?(ktlm0z7`)7XT!Yxk~3XLzGt-FjNRzT3U=_S@!0$<5G(`3qwmG6z6=sM@TS zsiFgiB@8Sh5K7eK&{U7APi-?!J4z&mBJ)2o*j)AYW6;T56|{Cr@@AGzaOSaj%K^BS{NjH8q9U~! z0wTc^tz|(qTH_P_4uFf0C)SrVh}5oCNAhr#OO{*dMLnOEmgc5SpCJY5h$+1A&yc|b zrR3ExuywQgGqMt;W%!VxwipR018DN;>`&by(TINOQ*+7nWQ{97`}xnMROZ^oNDpZq z+Dda9fG3n+=8!1gj8S(>Mx9XKoi*!IlZlT$GD&Tg>E_O#>jvD}-*|wsGY zF1dM0KpJ2TuQRt2Io!sLsjAB~S=)AWJ=M-l)z_gNpgQxvKML^2S`RN6c#v^8WBWFZ z&l`+yk;RRhG?6SEWeaH38Ssv|7PP)wb4@HkE^6M5#lv2;$Gh*nC;r-Jbz`2!xW*U< z$no(fldW%EHF$br<$bd_g?COdWer%ymYnEbY}s1Z=3Fd*EB20123L z)6UEl7i<1oSWf$Uqu-Psun2i=3z<3r&_W-j?)B=|bANhW*6gb9aOoeFDwNkE&nMCW zlH@ILpXznqWPXmo)ya}~8#T{`?>b8kvKWBQg`52%ftZwudZ6I3fgj?+V>;1E>>v#?lR$H?k;8AEB(% zwcyDj*hqPVG)_q|MW02S!-tQ^OW{rHBY^8;B$t{uZKj3%9M??zj>Q=XzUtP!i|Z{n zR`jO@@=QTKz&G`k&$w}8Y)(*Le1Z`J?L2M9G#7VCVH-Mjv@wZ#z+?1PEQI|+g}p$3 zFla&ds#uA6cWhxeL;7u6T84$0>87~@iG^9_Pgz-~Y&>G|3h)$Opg=6>%$YYwmdF!~ zZrudxArVzuq~;=2P%CV@XfVL(yYd44b^GS@beX66-p_GC(iOw;!s(NY%Yx*wf{ zF%6!D-YkeLSg=s-IR zjFk}p63l@4`x04oFF}70@Ac^3Q#xBG%?VCxzM5i-gHxrmGUvntqfgIXHck-c1-%P? zMo4uXG`6(RV%kUY|x=ouV25x+?^m!Ie{Py;`8PM)UDHmqL&c(F=E03Q z+*FhEu_9~-caPjVSP^7}oM!nd?iwT|E6t_qGkDNI8>s!@Oruk82Ie(#HQk!7m{;mx zBv5OF!vGDGm8!&oliP(7{DHw4V40ZUE{GCvt;EE%eY+%X-Y#g+k}X1h zaTqAEmO}|xv?zvnGXclCh(VFTtd$0CR#*@iRWuob=B!q+`ngmtGynzn-G8sW52a+g z_$FPQ3>W1_vUq}`v@zC!UDX*`4dG-`wIp@guJ#)%6D+-$VC>kw!<1ayRsgA@;sl@w zP-fm|^YaG@DgzBs<`^8|S(FQe5_#SyUORI1kXt3;j#4px+}*}gD25MeHGl!1m?ujl z9EcUd1dd4-gIj$mWsFR$ob;9uVL-+zvul?wHUWfAwo03GS_*G3vzi(yo*F!OpeYKg z1T0Yo_#R6iEWidycwjZQTzuTDX;Ud*14NEEw_D}L}X)x>F?`qoxWu9z?&oE{P= zUQP6$F2QE8ECli}Ao#?}1Rw$fHIs05dc}I3wx$Pqm zs@Jc%-xx09l*CAoB=B*Jv7lh^Yqu;eOhs2)sV!>cW)_T#I!?L zTDxoT#syKYv`GO#FF?r-eQ?u(2WdkRM*+^LW^%tOpl@NS~vWrq{ zmPHyEur#b70F8X=Np^JV*vVF_c-H65aWpY#j?=`JiR(N823YM9zl^>IcAcm;1Ni%N z?i|UmOaT-csMY37Z0MVfn>4lozq%F#Xir?VSO_6O$2xUoaVR-BON!sYLz=jWnXwWB z^rh{l&zNC=H&!~-n}oG5z;y#DOzqpXw-rLppVB~syghL6fK75YZrCXBdZU3g^j~Dz z08Iooi-*>%TPyk1&B})S&B;-$K~MWSb0F?zl5(?fC1=Y zMUiKBiZ_w>@R12jgl7JSUU*p(y#_`PCQN__Sl_TQL5%BSe79w5x&V|WW?@VFk(kIb zL1Ph*u@&$>HFdKoh49@y0>JRKVXc4#{tn8 zw^LKMNC}htm26=yz?{Is8;CB9d(1~zpkTa+rs^7(kTU=^J7g)vIMQBoC9Eh>Mp(Fl z_p#P%D*gj3c|zbIiq;af#a;mf%zHYh@50Z_2e1mnT6w$V9oGO|0S#HOWTL%X!0_C; z^UP9;ag2nV-M44IEvEL@WE;>19^?FOO~_d&qfhM9`_{;E3xxrIfC>F$0)?tbnP0qk zsXLRE?H&}M)J%N3 z^iRqNMp2-QTSM`o14G~DfZ^nl;GMOxlL_@j!FyhMT5vi%7OpG2@{FpKByhhi@*+8x z(gjDz7Dr`07a6Co_=5F;oJuJ<{fyIU{^UwzEIRd+Q;PTO731ljrLT-x*IsvRInIy( zrT(7#?kz>b8hHN$50p%BQ|MO2hdH!+;+9))X%0E`;O2$H>k{~ID3hP?K2BOuNpHyC z@R)ocdKHdST)cnA>8E9$pI75VYYbKt$nPbZ;qJSahPPxfOnVda7q=qAhZn%Jm!w{} zc*T`hmh502m6m=-`AuCM{zZAdHS^HPS)8-mD^Aw77L$88qFxkQVsJVk*Wir#upT`W zLmikO5V}S0RWcTI2@Wqg^^}VG==jU_+I8Q0@<;D@Tl3y`zo+>Z?|FCeX0&&2jMZqG z!`ar**JK@gFzf!xY$>$vpB@>V+;n~LnZejrbOtgy*gOCDoka&%{`f}~ks=6UUgTef zo-@yURSeXxDWlL8(E%QeoC~hzmFx;PI5uo~<4~cLf{E?pFHdCrZMWTCGUM~I-n}Z0 zXgUNZ@he~Tib`uC;?1t1k;UL~fH0W#()w%*nke4Oue_p+@gfNs5XpqkoHs8#ZDAQ= zt(AudUJBfM-t(Rt$I~k+8uO=dVB8-*Zj2(+=I1@MX)V9@`s-5vU7^{@wI03c+;cPD z72#=D70k{#=M8bf>|ZkZInjqifPMec9~8}I$*D3YwK732*D*NoW@9?@6 zT1ev$Jdhm9i|!~rp`6~K$uO?s$qY6pM$f@pM3I5xAqO5@;~*Ex-@Gs~7(M5eSw|0u zywW={8#fGn6{(iqNyp}xvUa@pz3*vW5}q^fne$4AL$e~$ 
zzjyH^Ww6CBtWg|F8-hz5#&YA@6@*U=Z5)=1F{iAv^qRN6`7MEIoDq?i@Okskc%ol) zL8X;hjvkpuO9b8>{EIJgu z`*=ltcV!NK8JgY@I9lgz>%TMUpvdupIv5KVWexs$Vt-{E8SY+V^aY+Bmzxa#hFaG?`i!TZ|A*&g;bVg^#xqVXfxV?jy3M1WZx8D{$ zZg@Qm|0b8+A04P7I`hmko^fc&`D`o<4gybNANB7@Q4ZH)Ej{HvaI#3D0eB z)%|)V4^^K>uC!m&A#w%GD4a|0X}9B!du+^~#!IL{#5^JQC6~yx*IbuAu1rGTfkbP( zu)Km4qpf*X#0JAE#j=Y^z~FhpSjWw`!XU%IXj{?BRjbRmVk>9Mw%gwG&M_p31iIt4 zBwqxBwBcZdOS8zGa>|RsFsEdZyuQ5f`uVFRzslTSRtl^r4D)l$k%gTOy`~d$y{GoeSBRQ8_%wYppz5E{q663yQ0z{L~V0S|K>)>P_{+RJpQ5~ zkBfj!8HG=jKMM=P8$*b#2x+{80p5YneD1SATgvTMztTqk%%3+u-pBpx#8Awbp_*}x zapmF|ZA6a3=P9#n#h6x#C<8i%DA_Q2QH*il2nU!lh9#N-eltWsR7_N8vPegs_r+oC z-%K&xAn5^#@~Uy5y2e zV<2dgSYH|%bohyThlD@BAd9A?=|b*#Q+d^oje&E&{q`xIp&mFQ8Uig`9)l3!3*;t8 zN#Us3FZVz3=}(nW-t)aB)0Vb+ui)2PE5i88@l1-m5IuIxvyaXqd}BSRZ?}avfA}LG z4UM#0ZzRrs#Tlm;k69fW!87nQc>k3|t5P%=%b%IG!=g;ly8Vtjn+G0FUcj^uFKn45 zz<1q&VH8SyL&RRYwe8Uuq4y}S_mU+`3Z8^g9(z%Aw%b^v?!lw;k3$0Oq8&m+ZxKx{ zl2%mmKKt$$r7K4Gcxg*2tX_0I>3wV4->q%!_S)cGcr#Z_6&W9 zK(bZXmVvH-tF33B%9?lAT@l)1MYi4VjML7D0=Q@QSbd^b>mdHjUO?F$T!Ju7m~|1!>-i!*Pw`3!_V zi4>bRZ(bX*7k*7id42@e|3T^i%xvA_^vdv5sE@mWW1&d!N1$Iv~ql(6HDJFcSU z@D6*dyT+uLF9z2Wq8R++hdx{m2iL6;w(D?a zy(IkU$BDKOp{j^jtI4UqCjJE`#s}6%9&uzD^(RkBl<%>}l>z31X^UJ1#shJ$f$K1sO#6e<$)H*x6#-mHqbFudd+_-;Q&Oq3*x`_rEEFfbF*m z8RE-d`D$d(>51?;c{uWe+%&D$D6%o0V2d0BIEU=%r=4EPP}h!63hbgqOUf{Q@<}IF z)Ql)n#XFG7d_G>mJHDSNCd%#M&ze8sqnVaowp7oc1#-5CCx#)>NpOG+F4CW20^hbq z9~?a9_^p!eYz{8`415<#^ty#Cx64Cc+Bmo;i$WXJtSa+KlAWOD{2u>ZZ2f3Kp^KK+@` zmK=fKGb}O|UmmCXv*vmKz~%>+{2-}!;)KgTJ2yCxZ3DC`C>^Zy7U=({Kf9{d*xSQ9 zzLuR75vhd#&zm#l8n|+>*mbaIQM=OyU;DE^d1po030P3>i-8|aSQFtWXC7nH zvaD+i667a3j3O{X1oJm8x~SwNvN(geEs}N)h=P-IjRP5PCIfIhObS1^D+YhEjdJYK zN0-iJE3JHO`q#oPhQr7n0eKZNGJTYR;_N?seS5LzH^bErGH2fkzkE3T{G+$OBWw2F z!R^TqnmMQvKv>tLVYwu{AX7Dhm)mNVBl+BO z-dIL%YXW@=A0y-8F<<=RmuhDRd3#Fa5YEmE!o%RWKnCj(z4J$L0^WJYtwlE@YkL~y z%(a_?%-uR&^<-P{1t6KdpZclqWt6!7E$@$fAOC1_#6Znntn!{7*da1 z9DP(!$??xQzIe}~yB0Tp_dou=BHC@we!&Z#pMISYnQCF}fRa}n?$-2;l2cEsyZ&{T z40D>}O+WjrKggDtUo=0>Vk9i^tqC0`+_K={ zVRV4B>({O?#c7R2JBag&DE6;^{aJ+=yxJ`Pas*fyo#iat_eCn^LRN|J2|7Mqi^ za8Psz{|s;P7!V41RKbdZb;FG}6wJE%+`r+*8|yj9r!XYg4t7qg`0q$0iTp)EM&%Nt zVDP@5kgXDugMRwc?SkLc2f(YNcGHa(2X;PPsB(aPwY^L24T+W)@hMpgfm+z%nJ{=MT?NWZ-h}s)R>72zdCM;xs?$$jXM?2Hf!Fscm3JB ziY6|Jm+I0>E=|3gYS9$!L*d}5);Ht9$c&A!ZGI@nya+sK-@D{{flHc%FB8mFweQ}@ z;M*S6pz_OHIY7YpC|2^=DI-;N??vF~HE`-j8`>R%e)>-NFb|AXl!0TPeOv_9XVtbN zLTQK3J{i~Qm(Rd67gqSBjN74(9wloy|9FV zH^z&Ip0x!(;f=)wSh|8&5;Q=kx9q#`zU7tXMe<@li4qNwTx(fJC{SC%$45#x0eMdWDM}rF!H7WF1sh5z zqY9Xd_~kw4tu~kV%%c%XJ3|2eYCER z=8W}y^D-_s9W-GZmt^2mqGYW2;zOjt(jKbzk zRLK!X9#IkSj1zZdtM(VZa6zSq7#WvQ48yFu*7BMbjxBVCuS8)&2iiR_Tbymdr$m{1 zO6`98?^}@tz10~zXMhL}faWZ)epo+;5>bH$ZQZnW1UyC6dB-{?g^BHN{ube4Jv-!( zLz>Sg6~OY)ywdh8UuqI-Mn|vZGLQW!r@XkF0k#LicYPLRh~_A-U`3H8k1BB8E{k$j zrd~Oke3uW11IyML&&qdL?WS4dQSOd%$07aV%p*erqdS8cXNza8p=X?aTJc8)ZStA% zDaJA~p5edi#st5*+39YHgtop}^C+I8@)c2l3Wd4e$*<0kYTNV6<7hnf)Ke-l^-FQ+GmJ7^bp5SL z@{7zQ`amwq!~g(507*naR8dDoJ}nI{1z^>0>-6~2wm$x93){zDZLgG*IeZ44ZP*Y8 zz~;6V_*k-DU`p|)B%(JCKU=MqWG%Dv!vT6x7jl|441Dy1e#2n^F6MBgn-BV=zH)2e1Xxb5p%(0v5yV;IpMQ>QKa(Nu}b4w=?=)l+v1`LAWwl(u$jWADL@oqb|?#k#V+U8V=23GR{ z0s}FC*vW!F;GqgxYFT)wy1HVWmcw^ybigF)%h)y)O$3eF=L%$*k-S7F$6Fb={^0{3 zh*9#8atOR8k(Y0M^P97)W?{3uSr(_#^D07#p@nml!`CU(`3Y6F_pLQbpfmc1F4GM_SSclzDd5c{^RM|lq=LVQR3+1kw+dX1NIKJY4V1P>GXcv zkIA)Fe^KlpoU-_wJkn@Ue-ux`AGlc^^=9Ej-JalAocf0xTsnd1+eM2OCvy4LvfGgR zbe&aH98tTbgF|qK;BLX)A-G!zK^vDKjk~)O+#yJiUJh5ISQdsrhgj`(xyK_SyRN0v5O>dn~KO>}C^nxiHIJQ-v8)>ZT#jyPDLfGy+o}6GifPSzD|2C5s&Dzl-%Mbok;wFn_7GjTLU#_BAW%bq{Vh-^? 
zKpgRTOghYb*IA{%P~9Jv<5^yjwVANL_@_3t&g^&bPcf9yb01}d9F3pn-}G98TX}N| zpa12&#oydcH3!7_KWX3yybcm`3T1 z1dI!(f*lTys*J_CIaTjn@qcyRB}@t0YKR%prz}=!uBtZX_2q&u`zyL;$kQQ6$m^L8ef@?;x10lGKBowG@%@Kc&8wBL^wGuy;m)b14G?j}M<`k@NNobGfnmsD!Vl4iQeEEuR;5>6>X3a?f+c{_l*HbHIe%MC z4k3!dS5Yo$L~DR;m!M&cF^L|4Dtb^dHal(MPx&cnW=RvDjkz_%MeVvqW{Tm=(-vsk|PDX!qU;f zUpn5IDBU~JG#C#EU~0LjpLpDe^vGd?)|Vp%fa@LJgkXO8;CHW`7L)kO9LopwR4-hH z)Nh*a%gE<=7@s6JMclmA<-J|j6~a2X42k_|EH_+F*vEYepZ`X~p^Uv*s%^VTVmzNf zHLSEFB!ag{s3HbOaEbxOI&n9#BViBD7bP5J>KCA)DLD$OSx(?Ih0VOU=ZAQI1zL7H_8f`%j8*1Yu{=GZcEdy(x#wH*rFV%r3Ec zi$;NfIe?FH?elOtVN4IRZ99R{`9;IN5>>*dwZLCX-4Y3;O#UvCT^B;)+CbY4kY6%N zp7(8z5BBOMG&4U?o?#XQTW{E`#4!Ln_hjb8frFIf$n)~u3F-oGv-L%RBu)*KRi2$& z^SJM>za>2Z>9aglc~T$e*-o&2QX4HtghiJ7I-C$R^~&Yu`pu ziE}19Sf*W1yvp7iv=;(r@dIjxs^#~B%N+1JOv?>_u>V!_}q>ef*|Eo}RZ5(ucLh zHk*?-yY8>o9?0@f?caNbx&gC{69QhYLS9jbJ!b$lic?In89A}h#ok*G=HFj;TQ|SwU@YQR zvr#AlD_}??I>-YlUQYkKU1O871hJ$=#Iy8Aa&Hf2jr#QMN(ZOVX=4iS5Hl?p2)<5S zo~|*u8ohaYS}KtHf+L@PK%|99BR8<+EXvFta!9bBf*@6x-bS^UL7oR*xfSS~OnxRo zZiPHSWfLDh(~`*Xu&irAu5OZgmG?iaE>m{vVLKcUzbHp$^u1pT^H?by9{(`qTbbJ@ z&kZ^BvYNLR`wC})Jb!FyO!}mvK9>k=w{WcgB1TX$Y9sN~`o4Q;bdIqvIgK5|#nn`a zliDUP^>_ZqUs-OhUrm@vBX+*4|D;g&L;h@UJ>}pWQPjm2flWe_1)}1|Iw8p}nqQtx z5+tm}2BX$hCG`a*y&le!r~g{OmZ@pQxy3BNs;53_TS(98`|OzSAw{=L6UP-T@u9C6 z`j3DIoGW0{biCds3KI0<0; zf@W~}b^!O=#Bk8nbu4_q#HCU)LH2FJ(x->l(ty=z+7S_#ZIXZyUB;Jd7AN{H~Ikw-E>unpnH4b7()s0NZ7~A zDZaK74pDUZZt;M&&$4FEpAy5kNHnVGySOiUc8S}+8-En3OU# zR^@2jvz8o0@J|$KmsF7U(4v-$?EvP0@>etY#h2OL_Eq(`l{pQbSCH~$q8OlYUd@v7 z%spC(0toSUeRPE%jZ0eZhR41>li1oRFL>OJjU8Yk1xV(B--nxy_Dns9{I3XkSUCuQEi5_{(3{mBuSC zhw8pIw+e5ySg>NxKCY6XCg_)lpNKvd5VqP_M7I0)mG1w*ISaZU^-?fhZ+=*K-QNhO z{=(O~I;JXUZAsK--8AfC709kQNb%ai6C8Nu28eH_Te-S=mcogSniXI!+EbzRU((*C zj*HNm-~yLe1O!=wD$u4pzo{}nRq4_YC{?v0!I>_rcoDkv+w0P*F_ojg@+73rF8g}{ z<-?XPL;-is-(5BWcGRH$kFu>o+tGC94=<+r1%u%8^V%Oq*{{H%bhEjO;DgAz7JgZ!3%36 zU9rL;Kk~;M?XHyDN|jRD{@+;^tlmmz+yegeG~@Z11{>b$sLb)Q@$D(xROE0unsyv= zq7zAGn};OQveils;=JB;pAh56*tXOQXx2%!MB*aTqdfnuF>fkha>i;ElT-S(fTnRO z9F!Q1%6gs~m7^-+c;UTP9`+%*fT9TuHs>p!hleXYFp>06h5K6TT?f(M!gQgWKa*%8 z;QaMz9rMn@Ul*O4Y%h!7pPw0{Cd_#fK{|vYttKMcgPIz{&2U8#0q^lN>X1S%>0f0Q z3`d?gz_T&jSybelZp@yg!665}aiB4zXCh&@!F;dAbR>xatByb7S=wOMPY4!%bIDLt zU&zpx?+6VEDzx^LLdnr8B{*?kO;OXCPY&SOyWcj%>Zef=L|0N3Ns0HXk@EkP3V9a4&yFNUF?k50Iz1%tbHE(n!W*#@dn? 
z60n9c!IzT*UC~4w|LsqTgw>KWWWH=p|1fmLq;%E9qHbkzzp<4oCHMR1GJcRc_F{$R z(yXhFr7(brn*+j%yQyT(b&>ksa2StNu50pl;~^ja#FWksKaV_8Ktqh>ff-v|C_tiV zJ|xkt7aN)8hSjDL^<(S@qbP8E8K?uF7BtXc)5%UrW;aNJEx!93^#g>Hrj9t#)=_L- ze*+El#yn?WO6Avef48Gq{*RZELIOk2u;M~qSs2K)lPs~si0CD^&k;-AOyBa`+Bm+4 zJCwr92xKF1ApFya+*wNfA>3@I91)(Y^%wmdgVC&o@7;) zcd2*uY3#_-o@TG_4hhynwuPD1&Nw|I#w$O3?s>u<(}98A7et!0Tb!B)(;ZIrl!>=j zFFg48_sOw&c7ladg5mjfKqUGXapD_rDUnv%C%3?n9PrDi&A*9?=8Pm2f|5-Q;&ut>NsnC>Js8#nXzE%cPSss<7Qjo zO~N2A(7mSDSSm>plZ?F!AYJIQLX>``wG>{IMP4UerTwgPCp<}_JV_k|k5GMDe+cx4 z(}=Z;YNmsR+3aJO#7Gb>-4_$XoJc3{)@;B$(b0SjP90|+bv3A;#C8EM4}PhG2V?sU zg=%=unCdaA^osY5tKXUjS)MLrk`SYMnc%}R5%)(3C0|_7F1t=30Z>Vjt~*mXN0%S@ z{#Q!ejg3G>Jy|29{tX(zW!1Zg1Gp=T+rV7A-}`9S@_XBk8)mG1oR;a{+X)2(kyykLAb_eo zB3t2ECW-Q*2Gh^U5Xcf8h0QcX9i>728@c(EOp%FYpf&7kOCb;QPXDCB$5|UZ;isw( zJK;NIZ_+r4T_nM`A*k^U6R$F&FN|G*+-;pwVZ3u9S{rLQV+iGJsW*t<50E zgQBaGcBMy1kM!e(6|VY~QEZNk4o2$q&+iI7O|EzfR9v3VGk)kp^_HO|ABKVM4fq=V zW?K=c{O40FLO6jSXESmq8^+MK@hHbQdpU3t*~z_Rjg@o8s;WsZ0(P{K{Ja_B-x9g* z50Wz3d9|SRR5=mx+T7;BkLGp`&u#JoVWZQb!AYLgt;MKyUA|_iz;p< z^n9s%S;2Ly@RY0Nk7c;jD)KxXlyd%{XNB_$camLST<7^tfkO-X9_u6rSaU#@&SB{x z!@T)Q(WOGb8xsiDzAS&biPx@R;=O~;c3?M1HX^&Ckk?PuEk=V5%Iwr$@y1gIE359~ol|_HtJqZNv z_}m~}6|LdjxVeq1_r6o^+LRJdKf#b$U)y4EPtXgW-&zSJIo#^QXPH-?P($IG${$K+ zitvKb{yy*5+d+P!VD|(XQTaUY4-H>Ej>;%YrXtKO2CNsltaK{8E&(?Jdc%O&M2+3} zor6L{rcvNkxASSl$<;z1dq;WvkgTGc3X8fU`(_Oy0@PYzvio399Xgq=_Q53Q2!?Z&2W&+<$ zc-OxaGxa#Jq>W)Tsc=%Xt1nhPZ}M3uqozGGrew_4@3Ozjvj`;$D-b~YrRLd(#Kgg| zpZq+s+ES5x3cxrT57ytJ0%1G3X@5G?*k!Ni16 z-)nzWe>_e-K*?>U#*0s~bt{AGN%p9T$8#^hm3Eb?6LyM*B+i6>E`{OYn&{6KjV~uL zKp$61yD>A}eqM|CLd08OIlqZiUVs;2(v?4)o$7}#KIa?m9XO`tQ{*cmz<^+`nVsp+ zb~fBY^WH*<;FmtH%|htd*S%Q<=YJ!b9l>dWs)g_9&6CZhGJZE{F*_po=BHOf&w zqy6j*&gQ|W&X8^l#xGNi4qCD+*OR~)5s=aHQR$t`&-!LaMUv|lRG0XjUHi0=x&I2; z2YhSV*x66MCo9Bqvc4FxQgD-bIb2; zSMXr~;O-?CNfksYP<6JYi7PiD4TR9W)pL}AthAL$A4y<@ameI+xF(dC^~ zy+dpbAu4yH_79P=LpV}CeYNisMf3J|nJLo_Pii|s-q=Qm>|)MWvIgG;c0~i>IgP+C zk!0qLhMLGo%A`r$d(CV_y2BwslHRUPdGH&fL|-26Ng5nZ>GFDRRp`EQ72~6F4qwOH zDo_M`zFh+~6LL>CuW!^?eWts=_4U3Sve-$>k_(3 zW}P_*G2)rkg1OHQnA~DSN%_730q1>`If+5b#j!RKYL%Eb;ssq;VV*oKSGxGYnO4kR z{_()O-mRN``Y?tcLASlTO-g@5=m&t= zl3y<@k;;vB)t!4q)iP2L1hNXjOJV2yYr7rn!5B}P%4EXTN=aRm;L z|IAIX2M06Fa!hR=w=K%YuKb$gQA*<}F}@pT1_uC6A4S+;#%%u(U140Yy{U-(e4iYh zhYLy|F2+{=DQx~0gD*LE7=f5%$-MWglJl09K+ruUU7f#H+Vu(E@W9er+(9a+2MJIs zrmM{n-C=>l-a(e7eS*P6CIfGFJHi&kXYPK~_N_g;jpPl;%6aaUqQNlp;0H&a^1u{rt-aJ z1puE)PnB&Vn(?dahFfG=mmZaSd|#g=HT^qI$|3D3dg ziT+=%6Pvr(7gwC@o}P(QuvvPOJakI4DipDXa)Ix`aV}=pWi=7td)zi!d?CIV>6z!n zE{53Y4lQKn&+4yqqQ6(sP>Eqa{SlC|3YCf`>Q3ANTe9Q~AAt-ue74d~#Pgkr)0dSvO7 zW5S-0W*ZaoS*xnqa*6Aslu_nYZ)hDJ%dFviwOt9*X3xXk@uG(4KXW*a-UX%1Zdb{~H@*J(u}6 zgnnAreMofu1>{?R3m3@t7^n}*EaI#rROXJyV9Kl@7jX`q()tAl1-_Sbg>R=~QE*6a z-tTU$s%vGm92O$#^9LQW1& zzHW|$gL*@EwB2o=URt=c+HLx&86}G;-o*o5ISO!Fogtlf)f6cEY9@_A&y!L0kz$`u zR>P77+xUo%XC$#$uKJ~Hqt-ull~lON_NmmPD^LbX7ci82^1umeE8||2?w%ywUIFywVGXVj1NpQ1(?&8K5ARUGz(viWR(wxg)eF6^nE&0` zB+sTV;5GEJoif1r$ z1S+?71VyGCkvb1`SkA&yRnj`tqnv{&OO~uM0R{LAKo0$=iN^`amz`;=3v1ugJ-z3O z$8TyhqpXCpDd1ZO7OwENkm2OV^$`*!8y;kyEFA_%t@ZCdIfeZH9E5QEi^cN#*=n`H!N+}lrsL$-_ZQCq{INo ze#^*ZHnq-sHmaSlmm6{d%%g(6t`8t}Vc^56`}^ipsz1ukp{bgbPCFZwczM)=AJNpd z79?X6CB?Du;PlB6ukL#NZu*p13TyfGwy3|8&2cE&nRC3nKtpG+b*V_cP{qfG2l~d@ zjxTiHd3eMIXjW?f60R8ha63wJIXbV|78&U1a5kFt@9{euEOZw`j?Oay4Ue^;Aka4n zZoXEH=h4>g=?Lgb{ zMdv~(fsDp9+i}kMs#|>vCHmpRXcO5po+sW;A|@|mM=V?tD9pCYD?&}lG4Ex#_;HB# zowtnGTUpOj88#VwX*=78uzCpwB91|Ne8%M$pc0=xsIu&hiNG;dExyEP_(kWSo+R9q zS6;^SXQ_eMN0CmXk}D2#t}n3h%9-)2u{h>n8R%M7_8+e-w_}4#XOit>t8rKrJ_{fF z@YTV@`)l`BVrt_!#N1 
zZhMsqTjEINYv~pn*+xvA!i0?{*OsF~b^#RmhjY^}ssi_w*u{TQnX=Nwbi;6YzS+Cga|Av~EkW?$9+E zT(KD3aMND-W`>dhl0EfWrDXJu$)J~YXa?<|{n1kG;X>iduqa!~1;Xon5@FX^_h5;} zg)1Nti{=ee;w1f^>MN1aP{w|3x6R5@_`>L){zYJJG5xQO-{DTk3LWxh-4g#uqvk^q z*A08=9adbZvc@G5NWzG^%4~PoxySH$BrYp8rIhoxe0lvhQu3D1CJtKZ*_Hx)N4eomUXU;N-G7TqTinq#){J{4l_Ki8ck_ z>95?5q2)~{4G+0lhkK&>+ET6I-DUf=W6{7z@i_0py42<%pgRA)$R*e2Iph;pD3ft{ z^X2gOdZ>hw4Q`iZ4UxXgyI%AJgjXx0nnxb>;HLc(vQ=bsK6&Q#yq8S`tHY?IEiYFh zEqThKQcDiBi=dPlWl|RhmJ0`50LXqOF(w z#q!2JRXz#&{pQ0x#ZO`lVwOloVdbkyMZFHXtG!cTfPIZHmE>zj&S)}9Bv?~+&;Bpn zCcj+j7F=zgO1XzotuK6(2P{=!Yz6dF1|R{)X_XKQ5?L?Uqg z!D{!`NzcE;oyz?pB*JT6wWW>q3f;HavrqpDb8rYD#8{3Bdx4uI!lSUW&&I9L3?M-- zhFd-Mvz5o1ue-;Z0yq8TmmawIVGuirZ(xzZSZ~08Tgg)0Ue#vzGWekvABuX~Yhe-c z|(%I8+ zh@}?I$E2qpHN<1e77Fmi_BTAny-tAW8*ULUrfLltm{zG(#+F%6$j`?ZAq2Jm8B)z- z^dM1MCVV&-l`su}N2q_&QXr@FIfvw6zqPn*b9K~H00e(K`WrwtMIY1xyotTHh6&ar47e39GM(c2?Ska1FJm)|Iq+v6^WXnys>C5QN%n!n;ZRP8Xi9K%zO> zY8`|%QyZsKDF+||3rypMlsD01L70Z$&biMohKw8Ga}D>P#T0fsl zXz|eB9rTl)u#gzaJUP|Q;dlj#S=g_<0O+{tUL@`wE}`7Leb|y61JcYDtt5ftLM+aR zHLrIYFJ*`l%>#WN{`pCc9@}i|sh>fa@@-6BZ-nv~$B=OOWWp8JUKkTY$hoYe(@8!> z>w4nP09P^p0Wh$=62Fur_*{*C8Z|LK$a!_60RQdZ-i9f)Gp4!{E?s}G6pYU%{yMj{xPSKkb{)3@ik68Y3|I+F0 z6wLD(eOpFMe&=({V`eZAJ-`fkeqo`tTvsO#N+AOgwojlgwh$7-`?x z5;?ofe{?H#Y7}?s8qkv+61f_p$4a`osru=4ek102o3Pn~crpgscPlN_U+^LdYV&Nq zwX5~8xo_{?8zOK{I$;C+OqR~{=Av z#_GcwyS#b5d3_$-cZ*Sky_wyWHoi|wO-)r?hs${t6SZeC62k9yeb3pibe4;S-U{Pf zSDlf0)?e!0JVc(nTM5xAuYgMpz`%^}ts5f1lT!`kR)6h#v@>OezM#5`RlSDaQw+`d zfy~01XQW1%35wFXJ%4!#nxAOQ&u?FEGn|FCqikoRMA>%T@!Nb6#ZaXk&Y)L*?xPy+ zG5_;NY)V^{GV+oB-w8^7v7dolWX?Mrr*9%J!UR_TkmX@)0U?ejVGRc5bnWy67e!AY@(F%;eLjryv`UV{;8la*e;5tP=;aaujTMF?)^Qv+ewr zzZQr-Di^HGf6}!Wg*ozZ*I%6CV8|OfoKz_**zY45WcZ7b?sKH$S_>Fh&CPINsO|AO zKUURf7Ugn>EzjRG9Y=faD*h3#t+6dG2L(Bnj=R84*Z-Mrz}5o31(DUcCpla<1bFFf z6tpbkwqQfw>ul==?iP86HS=bdoU0o_&$>k5uN^;&y-1Wx2}1Ct*DCu#qqSrPf}hZL zswe>m_fIR2?dw&QzW@AB9rDF=n^!$YJ?+@KGR)e%9dcKW4-$>2C`RuJ%P>Z1Koqmt zKZ^6B2|{+-(MrJliKzbmdLyY3sdV=Z2tNpiTaV`jw1LkgvdxR%*ndSvsUsfBTG_ai zR#~_|C!JPsi`{*L?$-t-f?7{JRBx=~yh_C7A#X|Pm7S~84`$F-K=W8FcX}Vzj-RiK=d5@MP4&kLyl(Y7$XLjAlF` zz$5cz`HLSbnWTKsikkCOb5}kAT->kDONe{^qArO;RafnL{zD_hVoa{9_YyZv&dITZ z6;VJVogCo31iqLUs1RRu5%K)r;ZG-Z(A*v9*hj35{Pf)E(B8Xs^;Z<7uVQhwji4ax zDs#MPsf;bht1;j4_Uf{(xsz;Ra4hW5qxGgB^LdnZpv&(P`pxih2T5HlP{rc>?7aq+ zVc!I-sp{sBTaCl3-D7Q|A}yf}lgMn(ThoKL8ZC*%=wc<0(}d^o+Mq&O^d?6nH-jo_S(4bCjo&f@&3IN;%rwFUnt)-GL&Ng z`_N;)tHX0;NBv2hQlG-wNwBO=&t)3XY)Wlw)G<5hL+_BWKcD&@e4ej5qu6L?u^c>swXah^NBxwc>Q4(Z2hFX`y5XT=!YpL)LRwMT6M;uE zrkOOsHeB11$vCIRP93k?h&WlFC-8(PFHXw8{b^#1(lEW5%Z|S=JM9y9|!$$NHIX%n$Pe2d&FR?+Z$!7-O?$(o6}2 zcMm&5Z4M7Q+wM|l^UM94Ol@*Ci!&qBz_ z-^DO0kS4~|BxVab`Be;N4p;Wg%`=yGflz}iZSg&REVdor{{vyg+f)#UoDLISO=lM5 zgkjbG3fQB)ZtLZ0Y53Wk`N*q{fRH@WrtV3Y^yD`t+6RfBo$BQ6IcWLD=)GMTpj4YV z{?P?C9o6j=8N9pEenODHw%Sx~=Mlq0Igxbz8CfdVtrjv}dfOP!tr3@?xq9DK_1D~O@WTk0WQ5?r7ax1L=MwnbR2Vv|i#;t))u#$k*cb ztJMSYLOSOlNxwki`8z!(2cZbnCDACcj-6AWW1`UW1^p$!hzUTRH!~D?d7V4Lb%6NG zzP43hPz1~~&{1{VvQ@kOV)NgVJsBnS{|qipLc22#6`mN@5}m^~f;Zy<3%{RE#q4l! 
zuu{Xo1&xnSgixBvsO}wUQ?KhQV}Y?Hfo0Mu#sfa>R;)nZ+qv=P)Oh!qGVm!YTU-56JR{7tTl^N|2*iG|>kIi)%d8*%PLz3N@hWS7(*cb+b z{BN_rIUFgKnDgx=a7Tu4_QJAxuI|+iF(lA4zI@6O8ugfq9O4N$I6InqtIXu<^bvv> zE?|}y6#U1ftxSvrADmfl_1fp|k!7ZHI}`@o*_rt5XA38s?Z;ZD64$cDTSYM)pNgLc zdYx^|8U>wV`jo3ZQ}~%QC}7wF#h&s=RwHxB8q{v*SDZ$bJPk*OnrU$T{7ySHt|J>D zwH9%8>fuJj&V$}i#YC`im%p6bF0R7d{1V53@q*`T!+(At|G%B&T~94#l~T1_@MEp= zued(!bYbOxR`K@^j%~>ox9&syc)*%fpV5LY@^3Yn+#FH-d`97|WCe<;j(=7*e1jBE zdHWsL5c@YBp^ZF{sWh*}e*Auvp3ypa;NOPh?#QzMSf=P%*V8$WGSFTBCC#LPJ&D6` z{-b_JOSa@|_ISurneRa8Oioh1mx6T;l{1}BD_*=i-+EE=q z;qbEQ0p93%scy&tS{YV_QJdt80XJF>bB9%+&g$CvMv^nGvlq)Ax#k?Xv#Xvfr{fzi z*sJY)-NvND(WiCuf~QN3+HXWA4P1~KAeQLO_x+96m7tZ6UoWQjGn>~9Jcfj!51c07 z72C*i-5@>|r0>A6{B!z))#6Zj-E=@Y*lA&NCa4NKq#O$oC!A~c?F!kn!-hk*DI_Ym zv%N4xeAkXHME+!T@OGZoCi?z0ZzoL3R+xWedc|Z&ZaUfvD&T8gYRm20Ti&{h(KatsF!G>#Gd~uH| zI9vYi6=&u&y3am7-JY3vvsso4bXvdK7ZjXvGpKT`Mxk)mbbCXa34)*6nFI$Z{Tz6+ z9P5BVgDbh5{+Ds%u@3xe9ae6|X99wT0#K(E2prxhCFxeM*S_s`yX*AHn-2NfSQdAu zekQpm?eW=!8NU6OIwvB|)++t)(Vd;pg6%}nGYmSIn)y$(3FDiMJ)w49C5fLP^ZsJjl@BOgk?`P%%W7(p5j)#EK*G9XU5xADlOhd|S%d_@uy-gW1 z+Jntz!#=PeBk*jDly|^=@IJUkP+ajGp^`Sr!SPquDXMofvc_e_u-26TCgz-1lpoA3 zeS44e@9bFr1i=+mD*-g0T%5$XLno2!&tCL>`{;O}3|{PZ5}6eOC^FB_WdJ@ly=vPp z+@CtIzj%{RW*3!{#uveu&+xWq3Mm2kuq0-`=gzErMdT77qS%<7s%H za(TbehJU3c+Omvm64cBU6?()tuT9m{^yUwJzG?Ja%P}iiNbG_^F37DcPuoPEt=5+3 zj-V4W;t_$TIk$Ts7fux5Dl&>{a(g(xoN$%Mvdn?Ihg z8czO=O}FDs{NcaxRrxY?MPyk&)%L21RFdJ_>2j0RM{id{m|Cs4d>Q6R4ra*U{v^|Z zwEmQrTPCRmIi;NbMmFs?fN$csR<~zqwPAag_wLypj1Yp!N%pJD!3nhAkQumYGo%mh zp)nK>pPM*fOrDRSd9vVW`=#!i&nu`Ph^LNhW5BHIPs3>z z_)R{Ou+wowOo9@yhF8!|6d%gD!iGUz_2msI1YG#?3W6p+@#~SQobY`J=<`pBB9(3e zv)0$VoF$s8>EoL;FsP$;IL++Gxa?jon@e%}-}4Vas`P3wYJovmZ7(7*t}_{G;{#*2 z%aadn&loPBwogfqbJBZuD6(d1yEf)E>i-^ZV#?eoqM6sA0hak1?UTu-(Ybx^M!%wN zvauESZ701MjQ__j5hKM}WN47231hO-Hgd)=Q8)Bx>1#t>PTkHI*{`6>Wed6~*)Q4+ z?E8DB%$G~M>;LlnFkbEbo3rz#UVF&9d4Hl{jXZtTolL-*#*`HUay@1r^vXYs?Ip#+ z^E5CtpyINZF3Bpr|G1m&H(XiH2NZ#>?*Bw5DjhG$$#r^^a|5aabjAE+T_@Nzo&-0p zM)Bg0W4=rUvHK#Y6arBf(>Dlr!Kf~W50ObKYu7kJ=DGn*OkkAMoYKuMX zj~JZqbkk1+sJ#-pdT(8^!~@7v2!`gnS>)&cZYUw_nsJJY1QC&E75OYa1MTZ{LT!@5 z1?xRVC8gMos^VzpxkSLaA-E~~FdgLF5LreHhEJAmk43PM9+u0*{$K-bL?=9On81-)!G(Vxezb^3A{2OHqd2*-!G zl5y=I{Dovu0*KajOy_DE`TRkBO?QEsEsX%J@0B`;%Vp0bCj;H{oh-C-kq0CEzXzyD zDP@6b8i+g+tAx0!yXqb%qm8m@*$2nK0kCjyCAeIT*hg6SdZE|dZ9tg8qA&wwT#iwP zZ4y1p$JwbxeOz2xtMNqGDq#PE^gV`PEpbGOKv*Gyc+2A5xTp`~x%4~ledG71^7q8| zxv6&9RXKtXh7@z=9dhz9|2oY`jt~8El9;=MhVO-dM2HRP-?r}BWcU?{wkaF}tEC8}2KhYoz0#j}re@fIbJlyoIm|x!w z;IV?~$hp<05(*qIX_9%5GKWcvp5iTJiX{c3qM|E$B;mV$$>bK)wGvfT?Yv9cH2R$t{*q@N6zx$p4F@< zu?l5u5D-%Bv2MI{E);_@HI}R1lab8jc@f9DYmKBmtp!6u0sVy|ia7UV zWH}#3pglJTg*Es)I(6`n&Ayw)njbDaNzp9+s7|ruh!!nHlD00?_w5X@+}+)WNvgJy z)0}If1%qYchyLHF>K7v|EQ<(z_lc5O8#P67l|}f^b!?u5!6ieszu&NB2w@@xqQ(?H zgC1h;wy6@09UWfr$My^1L`>!3@?}wajTYD%Xk*Cb>ezrBqf^JkOB}3jY1Ht<8?vh( zF&GW8!oh`hU5APQ5s^CKk;My)doYdrN+R7e({*|!vyl`X8N3b5exc~kRIoP*$yeU) z@)8CLsb&l!ey|KL0y``YpXsD1=BoOxoxkMg`It;?ZmI`&eX)l6L82sv702e+FZEkP z#$p_frAce#L`iDk#!-*}|A~znM^~mid7y$Rq8)3aHzB7U@ zFq=DaqG!G~ovScZn9}+qyF@f)evUACg3w=xAO_L`kEFLOSq&k%Diz7~1^tOI$+rFl z)^m5&W{%%u8P(Z|p$)2y)M^ViUwRu8iixdRv8{oamAX61^>2}h47JLjB=P)zK3;l{e8HF5P3(7BFVKpJu|K34Z!@9T{hmjx$D#Mz|Zt>C=~$%mO&_K+p1d7|r- zy75T0n0Hvu7w$qrEUpot%_#f(?AzXn!de2gpBfiM{S1x5HS<)Ff_!wYxL_ci?cCIe z`n{);O2<1^ax=nq=a(P%zPYsD(3*`m6~4BfWxyQZ9RKgrue8(68i|d&hvJF=8Po zQ6hppPeFy_32dJn<7@o_7o@qCkdxDYRBT_L&+Y|fMzTIVu!obYL8WM3;X0*lSVPXQ zl1%bb+`bf5bwGhekt)T{$awyT~dlG;%*#ZB>);C615_Q|2 z6Wewt>DYEoI<`8tosOM!Y}ENMI=jhoy=BsbbljEN5N zTOV7HlT9Kd90@p5e;~a{RveT~gcIED!XqNHW$te%8aR?~NqeJOvMU7>=pq*Em3_-{ 
zC)zSiHB9!FVc4rl0NVTQK3{m3ldbOKT_&{ zp89iO+F$0=+5ddGNkJ$%lnj_R%QXPB!xX67^421mnrPOKVdvM3N%GuSN>AFE6|-SF z!NCqrp~T4BunAcGuiBD4OWp^Cbhc-?m8Pm}gpr&we9Et%KQIIoO_ zc2{Y3*1fee_k9Q7nb4@8lZ(t@E@vqXOP-ap;~Ex0N09^b_4G82D+deOA}^^K*U&q9`=azfK%bQpgI{i}^igk3Y7nS_&fk{KSvBnADqvd#66z z?;8~0qgtR4bqwaE@0Z;VX^b59Fg`;ZaO7~Z0)+Q6D$M<{$M7nN_7j*&)}4%)C5@EZ zB0{(7Ym`9dKUv#11vfEOSy(YFpOM1k=&AeWYlDB zVWF;bq|$z3Y_tYe(2E24fLqD@^Mn)Ja92|QlKtS3aVPY%^eUOYxPV?JgA za)pJ7xeBKQ%(b_-9zi=u8h1Fcr%6BlcvB1WeUlC2Gx1xs9y;G>y%EAdUQ*IqHunYr z(4zm1-LJo)N&be##ylbpgC66v(0h3|g}r3cW{yYm0anjAu*N4vu^LAQ*;P( zE7oJO_B>zzzWXSv7j8=Hc!$(1H`M!a7d0DN&rLro9o5~$XswAOmz#;{(^eL;zIJe; zdB|?a3+k7fxz`Jp%(Lrw2-jU-$Q%Vx+ORYOj$Su&wM2SmhpJPjK%ChNVgGA54q!IR zUE~-+lxQQ?$Df}^8$I1LG|9EM z;4vPyTgA}&tbBMD>;clf>TH?2-rpBR{!FtaD9YQKXxDaRGI6^%A8wMZucOR4kIqn{8)s zNaLrmt`t^aj9<(vtxqD)&7CEi7rH=3z*ZwJ-=~JOKDet-79?R0@4aDk_|;t}B*uY7 z^*^4!h4c;EyA$XZ?fA>l%LW&f0p88g{EJKF(E!()+<;AB0HJl&zZEW+*jIV}!@)+Q z`L$i>?w;Qt>hNEhtOAos*8=~f$IRU(TEd&3OW@kVy8mVuB{5Es0>8nGdp4Hl2eeh9ZEdQjKYtO+p~HJn1AYKL!}^J_7!V4I6>fhvWOF^cA7C+R%+YloXB%dB zaI9Uz75A#UsvtOG@kh9Je{G37`GHm(V964{I_V<$$#FcwcJOwy+=Qvqc+G5K)oAMw z350t|FmeLNDwQ>>Uy`*cDyB!V?$}X_gGtFXAz(}MOJF$HNC#x(1G8Mpb5FnIq^~yk z5+qu+DcEG?7a)K>wxn-Vr{hHKg%H^q^=6`>8+hN`G+<@xNh=~cbei7dlt#4VO`0IY zgwmMFe$ef9rs@RN&b(FNG-Lc&yx^{nK(!46N%mw?-!Xoq^SZ{oX`>2%PT+jPItTJH zl`^whi~54sv^qXpA|feTengFGQci_eXj2eJDh4zHuR&WGa_Liy15e?ju;lZwe988ON?Ka7b0n@MBKC zZrpU4;6K#OLqyw@m#a$=A;_su=b|n%zQ85!G)6d)cFZbvz*MME;EpW;h7**q)ab(m zs?o*Ljy$!Hq6ln<=a3K>G%a3%6?x8*zh>#&)Wd?f`4RuN<5Xsw8PP)Z)ZV6h_TAC7 zhI?_8R9GJQzDdt2YpZA_I&Dfbs=)l5yc-lYUEp{@ZjTKVE0a-$smR3=ZO+XaDL00Z z*}$Yh+o_*yw%@QFfz+23p4s{`R~Zc+1Dq>QTJ1njhJY-QsxkEpMo_Px2-yG^%+MQT z@&!hueL%n)J+Wr~fIs>Yi&Uyvo6^>Mq(LH4<-6=@{F0x8(WY^3T3naHfa&xq zt24_{#aDl$u6?#>CPP1I0OofoHh?-)7g_@}<#)|s(q!(XWfmUDZnNoZLW*3~} zi^u|u!3km$X(!lhCNegeGR;mNFRG7ql)=r%R`Y@9hxK@b035Na~CP4BWR2A9>7?UktY>1AEM#05IRa~F;BFKXe;0CF;_&xm5aW_#hp z@rl`q9po==Jh@WfovfeuZ!7BV(_ba}Z9_FcSJ5eJIS5P-*6S0G!5Z|ST3>tZ?1|9~ zij+Ki(;F9-uGk;2O@IA_8xhEc)9pCuQEbYnIKB$RsRfJTJ1? zzkNjl$cF=WlaTfjz;UO;#s*%QmWrtj6{6y2hC3vBAR8q_cd3yRehhUz8DO5ywrp)~ zh%<2?kT*(WA-pls z>haP}I%2x#YGbSs0$)!5jlZKeJ@~hkjp@;Fi)w{21dLv4;F3RO8Tvma!AL>nXDlEj z-oc^E`Zs}GB(Ve%?_I=~*IU1ky_6GH#3rtu8v?JT&x-~Fq~xFyW~cOjEV0nhI@H;* zo<|Gne5^&8>h^Fwt*7+FI`up+m^jWJKs*-l7}Kob?HjM)35{MxF9bnF&iAy$DZKN@ zUX8(o_9oY(glu5)3wrhl39_aexj^8Oh1wm(4^)MTV$4=b}Iex;S#k#D=DFGspP; zj=8-ggk$7-SZFpkFB5`I0ofgB*I-M?>;TCr{GSAG;lCSk=4faAthQ0_`5!TM^DVgg zxI5O9MZ)18q0gQUU{poq7RXAAkARxcdcA1cpWX(U#G!eIhg6in7R$e=RPjgFvI9JfFu z?^QL7o_eyQ7KG| z8R}?O$(llZ#^gmmHyvF_{cj{nej))_{Wq|K4StwsTq}fVa8xv^jyH*0W)ALF}_Ws?A$VFfjrJH&#nbvC-cD@kh_#gF)V&b znc*2wU5! 
zGLqml;kCMZUff6t?u$o|RlJAqR^|KoTCC<+{h9D!^(yWbe zjn@#1O7Yq)?CgrdJGQLs=?=;iXYr+a9;K!*Jx2PW!W??QCVXwbrTek$94b4hMzKB{ zm5n6Tie{J;PI*R3<-2*fYdkw$XJD=tOfJ|UB(Ns`+~QyJG39$B#}n$l4SAz>jQ!SY zfxdo+c)u-{Vr&cU+3fijXoeeti$rNNl%a`CV-izf!1A5#`r)a*Sr{K&eBPqN&7b3j zgps+e(gXrN68`t3)zN~^{yIkX_*Vj=CZgo>*s`7M!Qn9_4G~ves$7QGE`qr^ z2y<&tWGA&C*6ogtXyUv;fTgiqONpvYVFO zGTU{jjGMMK$xiUDfZAz31BMxp=t(hpBz3;sI>U}=KcSTsepyL?AYN=!&-RXpE)k4g zu61vmEOVeKs9&%@=>S~TvujIQ564Ez<;rGiBk1UsF4%x|?THfk5SdSsaHe%_BvYlk zoUs72mue{8S>yLI#SdoWB%x_lv2)aSsoj0EJGkexT*ea54Mk9nJj>O4 zMpoNlFoZDSzV=zr5{3;AkOu z!(g7uGue{v8gU*PRqygINUPq8`Bz1KIiAZe+IjKi#0d0OSVm z9dvkaiV@0>T+Q8r|0vZ#wKTkx=!)_qyb*v86%t}<)h`X!;_qO2DSPr?r|@XvLUqL% zPKs$QF!Q$5a13p&g~h(6v9esv^0XZ*YaCHFrVcL@aTrl`j0hib>ce6Iq}vaj9Zov+ z(*+{<8ae#AL!>69-)53w*YLaUfBvg!vYY7lcAM~ns4lT48X|OJhS{Qzj;p34UW(jf zOoO(o{N1X{g`L@CI_-&R=O?_tN|<=|*1myuQ- z-SvmqJ)LO3tvf0DX_7Gj%N?sTUcCLU`IO@uvf#T7LiqDWeO1Gon4;BY|9;P(q}h>M z`j5B0%j7OG`_VPU=esgO=VqifzV61Li1U+?g2j}F0J?d9awB}A9Ha)LiF#ZbX!pNW zydQTN2!6v3_k@?1R1B9V_}x#(ZxSz7(60$wB)aMov=c3QO^CpjY1&q~gAp+zn^r%x zBou??AA|RLS}hew{Y5%7ZQYjMrGh;jSk-TTps)Jj!G-Zn60m2N0AmksYXw{=KXJjR z5dyVjm+_QyBF&hcM9kOGj4*%I28HAf&FjpfDW+R@o_46q%{L%cB@Rc*8_U8f^y@TG zB^jcB^P7#ifw2)2iyg9RH@{LWmJS>1led!{SG1jrfg!X)v*F}aB*LJ8>JR@ReJp~j zXu8Itbbhm4sV@bCfjZ(g;>JN{q|7I4LGqrQOPG54t1**VnV_-2TMpSy-&(Sg-=+Q8mwm(}@?hVO0 z8tpE}KK{y|6!FYi8DnUnK0d0z?==foO?L-E5M%r%lG_yG`DmD+vJ?7*eZC#jMhWI|0EJ3i3#;i(s;ywNB z6h0?`i-~FcsmXIiBtvgbm$V0~4~2AgL=FQ@+b=GkaxUs^KFssBy2y2x1`S8yCQcIz zZ4c#xq5a5`8l4!4`cD&gf8{93DuK+(cC+r$(elL7G@pHGuq8Mq`Z@yot0NUhKrTdWvfpC{DWQ1Xt$ zwPfiTyIu|my`5Ipv%!sDy^Jsz-)Pa?J8tojp`=kSBew~(vH}dO=L_rtHU`^+XIEA# zwMeTP#w85b3q2O}6w`04TCN;mitbo4kz!;o?R;zP$CItC5Q&&i(z#K`mvEfJmaviz z*$mzB;eQR?hV9Z2-z)Cp@|NyV8s}Mr-4F~2<6_iRVe}y_?$jEUu{KZ4yPCFo;~!wIYx5Y&OLQc3WSC>gOK0f81ta$u5?x< z$1m}lPT!u%ex)z)y0OJ!A9Cu|?$u$cb2(+~HxX4S`Zp+e$FL!JZ#2&2HUy#;Cf8z` zI{h~$0;`(t*Hb{6U&*+8`JK|e_7h3=h5Wt{)tmIModQmEM^kD0p~e!51zulFooG%6 zmkZ))Tj|{`;nfxYqu{OA-JG>zmz$%VpMB+BA&1>4pM9ao5kgE#3VMZsV}?@Rrsmd^ zXf|w?-9;Sp7fbl2KVHkhbSTZ(hDi@d|H|f@S}I*ukG3*`keNwsdrnPp>;d9k>Nm%ci-ujzp(BtifPP^fC5L|D9pRDq7<7mX^rUNdU5fWO|%SWH?Gk0&mX zN--Y@gXe}qVhw)zr2?d~dH6xw{Ll5?q_g6zyy{Ghvp6goP`>S%7Kh4dDq z7X$sOKbsIZ8C%c5#-B#5E(%ZnWnrEOs8lcM@$f@=5g+Xv{x9EZcr+rcA{J_c*>iM( zf5vDTnu1*KydFjgOHg&qDWU@71Li_~l-;EvqO=sWpG#8695Ui!*?q7Sg#6cBXXdRlC{)p z7F54IUi9lGz$gkasO6a3|IWF(*q`%sceG;hVOgTe%0Om<9#&Pvedit$y>jvm{1^Eq zW$HuE-Zx}N40;Jb6w{%Mfc7fFO)&C5*O^TXYa^I?>rnk%OIl1eQNkV&ce##N~w!@{kfSGT&7l$dhj@hTbL@pB}WdQ`>}@ zCq$f9r^rx&pZb&NxF-bjC_Jt@>*SI)LLe3HjOLnnNciDM=_02ErK?QAX)@J1#nu2; zWa3YtTvEuOR?0iUo#Z-caS~iU7$$gXD7u3$IH!iF9@tU>>|QAAf~Dgz1dTZ4ToWLXcMIXidSDBkQ}fW@klY=w&#E7y+{#G}5~v!9N;!bwb4H zH+|y!ni@rG6fpVSJ19~&<4$ZI>s}>J-RMq-nw1ujO)vNO@LS0*#?GiJku@RZm!J4< zenL6Nha=bV05xBBXiW7^%J_OeHqIVZ2OSb~3T9-xX-sU^t@V$t#}uYEPSQ>hTwK;j z#GY>)CL1{mHXk&!WZ2IHM@J8|J|j5o?+kRaou$~4SDrE+Cc+T0qk1t9Lj=QK!(#2~ z&xE7Ld!?5D2GwwqBVn&7%SE7>^cDc4f%5b#@I2oJ?{N7sZ&a{ei|-Lz*F?7|(D4;H zu2VPjktq{Ap&nJmR8GL|FXfWX@p;M@e+xjL2s|<)nv2b``;O?~wOf0b1{P!&C^*IG+(}Wi1sKx6 zj6_*CHP?`%#SyP`ugEN0NOL=7wWQwqNP)c~2p@gmhg(^ql<3i)33`l}vTu&?S$&RG zIt=idTHpT+@i)$=BqEMy`hNEXSf{p{@g_DGictKu?8k-eRE(5sxmYcBQ_V)4NG*#t#>>1xy_L?0PHrnv%M z2k{Vp_uld(A2WJ~k{XshU_5$8u>Wjb$}Jd{6?=GofIk0eDtgq%HM>my7J1N9a(A1b z%&sW4AIS}i66OiJ?XQxwvFsG$C1+`)L9&eWAY5TH6sWrAAlZ{n4VFgLzdZVhuw1%K z(9a5wu1@Fc_B5lB*spGeh)@#(%W_ZPt|49sX|3q$*Q0jyHC99e-{touUAcX5?K?p_ z5SF`xzSAjfjH%{;vU|saA<=2DY@Lpiog(^k#U3|0GLOLxql$hE6J6tX@EJ$O6buXg z>T|bb1VqsZwdIU6@I+c_%od87xZ}7OA)NraDj4z|pEx9STBfZujTH6kO|Nlc z2x|`nG6{zT9gl`@w0ceH^@`UG$TY5>!apQQ(Cz*}nyTJj_7td8HHZaYKp?7RqBsJE 
z(rrgKV2E2rDP%D=$8$yqZMtJ#1MFbwsc)%%UGNqe3 zN#<%;XgKUi%v)iSw*1de9b?@v2K7MNh~^MPc7dYcpT-WTVO}qo!vFYMib#IRXJbLd zO(%^Q`v=iSOlrRp7YMEt9$}lO(el^SVpnftFN}=+_F!w@a^p>tRrGka`eo zGoIaR`kIICfeLF8I68Piy1-LHLU(_D1cy>$T&p$i_-ud(3baWYW^P` z*NO-x%!x>dPgW1=vs%GtHBGs_cYZThp4R(atE2sB--_XFA8F*RP&fH7s=(0Lf~#)= z#{Oke^!Uv!`49?x_MAW3qa(&`vrGmVNC*wYfX{s3@Kn&XKONcefd7l(o?Uf@*}BT> zG=*8Kc#=x4yOlv6-%MaMK?^l0!s~=Xw_*1gstMsWIN@%m9V$2GFZuQ(uFf>SU&8M* z8hRDphA?nG$QzUW6qyHLu5}^VyMUM?^8J{>_uJx@?Ntb01@d>QI+y})U(-bFQ$ZH~ z^GWRl+8hPnp^b4&uwmS4pXopJQz(gbr^?3Di=^Bv-An39-9|2b#nq)uca*^bfSy+5 z)r#6tX+(>-impMC%4SYW*Rbe$)9B+dHR8t|acVFI=jyVqaBUz^iEMwM;K!c}f-igc z!FkH60d+&7f~Xh2G3?y&PPz73;K$*<_G4{}30bkSKvQgjY!Ks+CnVQMMZ72D6Jd}% ztC_x3PAgaf4Ldf8D@!eo>yGl_4s<*>5RfqG*L@qEDBuOlLzAH0!4-3zkR zQuX1UyG8x%?!16ww6)L)<{Mw#O6gybFHk!3KAm}TfjenbL({l^>%yWN4@-i?f1?B- z#5i;B&6N53QXp@xrMd6-+L4Rm%bm`Nvv0j~Qm{vJR15Rc{~PZfHY1*NG9l8;`>`uu zWOI&&6-;EsSH)6S0aXB+2?-Y+g4*a)>^c(m%^#aB=@#dKhpqe1j@RVygw?1>j;iWd zO7dIa9RZ*O=}}fR>7C?k-rMmGK^c4n;~Rvx(w&oUnhfA8 z!1hsDTs~tQVm~~a+ZEAk@0NT-#otc^;P1BE>oiwM=>rVr{M*X_k%$gCU(SFa0skI$uhlzi?C;C7Ihf+tr z@#>)@n$|OOu_7|?X4OoI#61`+2Z58NPUWtBBX22k86Bs%u;Z}sZOr`Yb<_fhUd z)5IU9L^yKs$RwDA4!;kjn%s1TUIxbe0&}*t`IR*&OBXEzmYnAZyM#|^kNVJ}9~!4dOn+#@%{m7*O6XL74zbEQw$c>IZpn=*g>OBi6?jG3kmj+<$EA3b_UD*w z^mmXb=*8uD20kKMkzEbCT}8j4*oBrhzFv_j2Su8n9X|9g7h5th3gp|DcH-`vEwBQz zZXye+-~Os2nny}aJK>z`>t>7C8hVws!ih(JTQXFKQE-8f^Q8C)cuL?0xen;`b^PF; zl3Elx;{GV`6mm@oG(dW=dW@NN9|N}s_ICLAP{Ntz>@l9HRK>_gdB0ch`3DY2hwNV5 zVslia$Jwa>sW?jQmot{$3;Jg@g{Fb7(mSFE|AZy^Xj6?G*#w{mo1j8Vz>M=3THqMK z-0nLLmf7|h6Zhs4aS0Ekmq+L;sldMnI+aFHh2Ah#;Y=I3{ENHy#2MjlORub%>4L+<(coV$sQ(2&!L^REz_!-4p~RL6 z5M<3zd$V~&GK&H5hou!)=SrMFV~b$fpFf2P9#E5ft!+8VCshWxBHr*qx?k5ID9)dK zCMTnhjfue!5>_ViVR7&}sG-dV1W6-C)hRfV-=pL-pAd-+nFoR?l784Ci8~D9z6?Uu zwvhi>e$k#o^@0FJc~_g>!>>qh2osb{?~l}u+8-EEIp*mlv><&%IwKkdcXQPO_jL^s zW`;-aI(4H$z5#!Y?yugW0kP~gPI8SA2n!p`zZb#P^Auh1MLC0%Rxt4%(*GLF=#TY| zgNeDTPA?#Xjxy%mkY>!hs)1;jVVh-n>~yplsxbzj5R5E5+$u3CZteovxzEncwNO{5JIQ}k*>_0OK2rTvXQ`3Tuazm%@Y0|En(}uJI zVa>OIic!z+0>dTev*VIPqVt+7lt*O0fsR8RNu9yBkP7Jp_^kz31?C&s+*_x@>IkFE zqNA&?KxJ zLvqKhaasKN_7j-|ltW&bGp@DGIbOH37nJE>cFw_N=k#NV7sw9aEx@bj88Vy297;^0 z>E6`lJ|0sfLSU{acR_7s3TJ_`VcFh3bs6tb`7BR(^|4RXLpXdw+_W;NWVf(b)ElL{ zB6N!Un(Q}q3Tj}??0N4CH;=aIUc@$b!2w1^D(*PhA>O9yQbd6=pG(4JvFlKJ%BeO0 z*N(JFNms`0%u?lsJ_g%DG5Q}fGvL`>gEZoPD|#!sGl@D5_P8`t+eXsi z$fST-C$v--{CdxoJ2KwP02mwgeEZiT>3ch)?}er-207X#-Ji?u%rCvR#7lTut zAb;ebSx?IdRK#WInZ7;K&(q9rKC>QRF!pZ+mU3uc56E}g{~=~D+8|g(KaNlNLop^e z``ThUVK*eDt@m>v;~%SR%9fwhQ2ANKFvWUwJ?7r?GsF+^lIcv?sb(LnbS4lJ=S=v7 zhN`69ne`pTbk?trQI;2VXN zZrGqh=E}PP9x|^Ch{HB9WLQ@I@(U{RiMlbH)cS3b7PXIey7X}k@z`2b=m1pwDp0jIA zE>Zq$HIuQ7U|*&*%QKJT#&Q{u$r3|Lg~11jOoY@2kC)7l78-ze2?OAk|7?LvrCOpp zry$^o@UFc-fS4PX0#H}fzmoM2g8dWi4sHci6{c>F0^>1Ij_Ik1hKbd?kieIg0xEeNZUCdAAtuYfMnw|48wEzc!Us&m zoG83Jqb>HV@ODbV&^iR%vM^jdNPd-A5Qv%WP!!D^JkJjOnAho0ZZDBOfX=TrH#zivdb&nu7i-2|Q9kHCPqYm08l2tJH8jI#_-5B5 zlo6^$$H@yED=5so<`s3*l|8Y%<|O7jYC7N=s!Ksuuzn!@kIZ`l=UtJO3n?j<=KUc( zxaoTJWW7Uq-WEQndpq&~|Jd1V~0dWy+nsZI_9k1S zjN+@a+A7lUmpDWsFnRDY%2I8$yjj6FGr+SP8SoV-0Gb+`z5Bs1JMyIY>?_XXgxue|)Vel(mV0$$4k8(@SL?-(e92L<`w?nA8%wTx%` zV@|vdTwYtWYnNaa8&DK_ikTA%2wAzd@=Bb3Is83$W&zm{A&1gygejSx(DwV-`XxV-nm%`|axPrcfM=E#D|jnLUNUjKgV z);~DxK_r0ErCs-*XSs^EQADBpPh<(Ld?(LNpoi6m<1s}erdg|{k6KZ--7!R`%O>W} zz%cL{qx`3;C)^wbHO;U=4X&yX)iA8^g~meYk0vbFgP%UsqYexEO)OsOL^$0XKJTjV45E)XuKnMUxR5^%jVnt{l~`sNa8gS zy+RV|mcS;NIMRNhZWblq>!8HQ-Yk{q=GEz$OOEn&1JS$2Lz%Ze?Im1_+(E6Qq`s~Q zOaXxO&lAos=*L+@P&yHJ$Za6=FE^x+Jj+rm zC$qk<$1w*Rck0W|!Ftq-jB+wc>h%251%SSZGKs#8WUr;eMi_JNz@GHCs0v$(hjsAI 
zvz_HJmX=0*8G#eDE#Anlp_xeh8yA-t(;l}i`14Y+aGcA3A(RI_$W3LGXNr_FMf7lH zd$kF8!;%~i>+K8|DgFG|TEEkf?}>P(e%EIGSz2LmA9xWq(Omn8GR{)LaCoF%@mo+H zb|J&s$ct_k3MsCIf?m99Q-`okCfav1L?YRVD24EFv0K-cTyQuaV>u;}P5@irzt4^T z%oGN3L}ORX6r*$Di~y*u0zAOG`LFB5Z79)KR6u-zgebM>32+CDkxB)|o%Gqs#Yw9+ zA8D5fvj$2YGA>|=03Rb_G=}=!p{-B?6rEXfgtSJ28mB*1MskO;lr5o#BxHOOOtd&p z+l194?w8q?92qVKg))ygQ*16ffJU0hlW0wfgjm*PxX4MBtlezfM{Vuun!>uK1ZFS6 zw6K=0)iIU7XXvVcyt9z!l?dyrUV5kGPIC`XDT~Ew&V+QLjLg2hyze71Hb&0)LD$Ixuk<0)o^S~8%S1zLz(h^jN~gu7C=0E2Y}EHxT61!2}l=! zG7t#}pk_^? zaX{sH#Mh4$>>1_r<7@!#pc3Y>g3zHWwX1xf%atI5kdmEYLxp`}WWM)6a&H`PBEna0 z^YE-?>8O+ILw7orUxJTqZ_QGy0^7nn_(0iQirC98VFplKDy3A3Bb&5#CCJ|JKfw=B zKM!*8b3}tgX#^{+3K@lb06`pi8vzy+acsT{z-rZ#0u7;Frqt{v3{FDdN>_8m}&m~!RV1AF60#8qJ_E= z#HJL7<2-0M0F(pn0VE!sU{BBj%H$7UO8_Yi(Ft&nW0{witVWX=BZ!;tOH}>NiBCFPxJGl{8hDF961EI58A1U1MmWOaa%Lvxv@M@g(T3MM2`PolRXiD!Q&zXBbnR7!3f z;Z%eBQT2;sWeusS+YayB!`UF@8 zwsEBj+4sd}3SoGUJ*bdinwI;@j6%H3(LyOkgHTQ8oC|11jfwu2g0L3pAHI(r{s2_K zBIGmq35P9h5 z`i9*7(rq%ZJ7DkO=sPGAB51iPUPwpy6h&xr)0lmw;7(|Y2)Y?$--MsTfBjgB7~cqf2ZY5hCYR=P&|!klz;od!&4B)Ya5=_uLj6w)oxKQx z%E4LeA~36XjIa-EHSV4eW=)+5^9`JqY2h4YiC;>IR8OfS-|=V}bQ1>C5j zrD4V4A?wYE33di*+d81U0s|@mko=7XY63MENF|)ckkFM7%r*HNOm+wjjR_(cCs!%1 z2Txu3C!9Wust@B&XTu@l$JBY=uO-d}Z&1G@@1WBPXqZT#?=tm{m4Lv0uzcKLl?tI96N>Fr<_q-9Y3Sgx(^ zVPax37240nDF<+PptNSjajyXuXi@|8f|!;K5k=K@$C3FF5l_ZT9bx>^XiamzSnK2e zv@q1k;3bscY(nzXG*tBXbDm3xsfaEJetQ=_QcC^)!AGzX2m>MBx$!V>?CdnWF{4sp0}~CcdwM zZlG|~AYw@1%w5&mGfc5c=>!>Lp?9Vd)?ju5G?groyO!US!MC7NjZKmc@BS?oIUKOc z^q>~k)m(W%zUe=O>CO_u_8Fo_|1T|o83c{C!yF_4%BF}ATzDi1eguL06kxzaLl|=fJww=L7VOjFX|E%hiDhik;>hA_(94y_m&G<&p^!0MWfq z=21>aM?k<)D0dxPUkzbr`znsg;YY9?&GWnSQ0RHn%n!G&#Fwr<=9Gopis_J_seMPH zGaOo9r2j}*Fcrl9xchzY$WD}$0zltD7LF%gg&%l}uvFxagb!gaOYn&vKg!O#fH4*w zX2NY}TcV-))AzK=ek8w5tF52<+Zw!z2#PuHdX5iJdh9OE;=8W^)!=F+5%hM!#% z00XHc%I`x+JJ6;+u@M_7suc0Y0w75r6$l0xyIdw2JXkCR2354nenz}!E|3)5QqtI< zAbupnic&$=KQizNa_AT*fsk|78Yo{y6_cWAa0u#=Bv=w_>i3RA)t?@MzI6kp4FuJt z_fztsXQJlqZUwgCt6&5($Yn#;L>yZ8Cm$S+mkx|1|FQ zW}Vt)QT6z2^A){%1SD8HfGSOy&|FDzVj9!1sQ17~8)zwt&?Mo}m7?~Oi|Ux7u^~=U z#NFc9yk2unM_2c3Kj00XhPDGqDxh(Ag*1tWw|e_}SDSf?^kzx^RS;0EqH;zXwOv|G zx=Q<}JaaS**R^X6jAqRJjqvrgxECMorQUcmYmVZ*OVq@ z3?ui?*MC;ILzR)tWFzuuWnz7^*0p|qKT9h7*AvGpArn~Y|B?~z#4iFopUsEeM#FI+ z;PF~Oc%iyidbxcRIuoMMNBSNMld08+KR0U>It|GQxM~K|SlE?K*A^@s2VzlLD9lz$ z&XG0dPtn(;MFnDl8S###B_-1;TlQ4f+dpKjECdemEA8v#&X_WaPa6lD>`a4f*5=K1 zI?bSsC{#|qr^5Ca42EMatt=Re#}GNgmdNMWx^`a6)!KVpykh$p(9=(YavN%%41SdhCZL(lYqf?JRZje&ChGyvQ?g3`#t^p*57~8UIU=cPOI=1^>m*k*WGnZ-ag#Kk*0m!Yt|fDcm6Th=2yquchs)X=LsXt z59++fWxv|lJO2#w6nc^OF=xhxYw{Bd)4O7shsaW}=cOv-5sN0AO=~a);KnvVlq9){Ijc>%S*NeWysn;a z-jRapzBp>UV0LiX$mAEfh*&f`qxRl#C^0~FznEuKwn`vR?`o2slIxL)sEOx)CdNjD z987J$mh-)=*F&6i+|x`L>FL=LiJt}~N8rxAacj2C@gKLhw?{mg#9c=3#vtH~5Mp3> z-I^l>rGk7WK`$Fqytc6}Eb$_Tj29RuFV2`H<}kgVn}D7n(9rD(O1#PJtTw>s09*J! 
zg^0{B-~1HYG0^d-_CImR-~{7D)scVFK+KXj0FRQQEO6~)sMGmvFtz(G3iwNmg zAoLDN3lmUDs5uf;LaLyd4_DvP9QG;`j}!5`|Mm_%p`CnFL6zz~uP^!b5hE?>k7dXM zN}4o`9~-nzPU3dhNGC{u$We;J#mkZEJXC{0Y?`Zo+p?|+{}|NRKTHRzlFXC8qgY#; z&^gicoRL8wWmv^^rH(H{qHWfb_G|t2dD#)Pp1-`JX5qNQyCTc}0OBF`+OY#5DxJ(h zF%P?VEr*`;>eVXTdgXdkj}SIAb0#IA#-m2QR_}9Zf0#m4*cVALJfV@=_OD-5%;zYa zKSUR-Z&#)ZuER?Y$r3jJQ3*L!$;ihDJBP|#I#jkn4IGpB16RbwTra{UxF(+mr*N8B zlzDK)EP5R;eW4NpeqJ#(`s1@RnvE#@5lO*aSGZ&2yPd38UQpksTiPy=fc%d#%KgGY_X#r+*D-E}SY+Bicg*Naim%N2o6u7y$ z->LDz2ZYgSkvE(n71$15ty+P0w0&TXVKWjA3^wtY(?New0?x0rK?3fd-GmwphiqcfIcCp$Il2-Gk&1%iML6Gd;E_>(slE|?yM)p0ge@{?C&5sfAH{@BA-VL@{iUgt_dR z^x)`gwHASVd0Cq(%TKb)Dz+Z_@2YRBJ{Bu?dDVE%SwlyEMafN6TuOW=60ywinMtbOg0*z`n5%#eTss1}7u28Cw5U^fJ&!(AQTsxb&z?b^mF9AdQH-{_}1-!kGI&vxgegbn{JNhU4*46O?)2 z=;;(es6dS?537$4l%5CLA>v@p%2d{o=7#J??4Y&tQ0d3(Tt*ykZTz(@Ty}6;gjLie?*X zQ8xB@wY~v4=vO!9gB3tf5=ZV(*s^b+v|C9sdD{a5kvh+R9jQ#Szw-MQQID%pK%{3P z)|2hy*MOi{6ZHC0FL5!X17+8s5Zt1^?|)B@w#!rTLRgPOx@l1qcI((*j@NR zT(VIu621Q>R)Zvh4do%A(+dc|fnW8mq~D0&VL}fob)K$4$Sbo(h6M%&D{Bg51uK;O zZ@7p5>ojIMHRcSRUJrZ*#e;Z!j z^`ZHCDfEDNK@Y!3Z^2}K3_{iGTFS{&BV*C+e{$1??1wxPgH3)hLJBPAHF&|M7n70O zbdL4+129f5Ph-dL`fm%#C4WBLg!&_wGUUPe3R4VNFQXj-3YETe6K9=*8&+y6AMYEE zewS6$t)iwWfj4K{kYliiMTbNrO)>dM65^n&vIZq}jb_BmOx;OsQ%t+`(R_Qpf z{>t=5krEe^A)Z&L$c_0GYvj{QDp`j2N|E{o0=8eclw@3$|Z zc_&pgvEl3A4tfcF*ETuIYnpqr?~Vy$FK}imH12;ao>Vh(A@e{Wh_*}c{cUPRZL*$f z?d>K4qAP7|q#S!MElo4^M(P|j4@(yHRI#pT^K4e}oL^*21#ti==znjV5B(}|qC9&Li$ zJ@787+~=ZoO@v08iB4RZb_dm$%DqR*3!Mr_39ua9J%U1vPi1HOj4INcI_@Wx#GsQ- zt9L`(!Hh)+ZHM_Pw}9N8f(Ygk7}`?v<&%KQSN=c@5e>ey5W5^os-I9-e+z1mE)!Y} zFe*$zJo}ogR8S5n{KVN!ZN%X=4R>mjE9bdMW9OPOE0N}ud&uKslAH%Ed~HD6 z3V-yw>a#)UN+u!%%DmvJ*)4Bkcpq}Z_HcfSx%7{cB}jI`lgpxgd*@0&1khB%Khj&p zkr31SLwnV-^tWzMtYsLySBX>a$8eHrj@J5zC)>Z~%rR*3LrgC9^Zjp;3qdpx`?2?8 zRa&Dh0v-^Md>(pAxCx60gP_^l4`IX1Q;{@$q5j_Yyhc2=dvfUhbWLu-60QRbM$Ec9 zDgMz=PI~8D-mTEk!2b_F{crs3|NV7^|8?VDXC^Tzn$$9(sChPQR{vy}gOlxCeZlIf zx=qTv3-#QZA{77N7d`T{eRa@LpV zL7VykOLw`NkLIqeitPt#vqB5th9!$9io|sHYlZ z_};4Cz(z_PlkyjHWXMEy!!#)ujmZGw?Rz|lLgyT}(TXL68aW}k|LW6QGY^%C79j+e zo+mwgn{95{@{>z#n-}k@GLpY;HL%|_8Zi(bE8Q@)`WR%n{{~I8c`TS326i|5+*-PP zG9?SZBEbAVuGS)f|G2r|wW|y8BCS{E*UJkH)2qL`U3d-;WKPe%!UwJ=a3Hq+Vdw@r zcH}T79IlzC+-$w#>6A)iI8N+n?09&fxZ?GS&aE_U$Q^ILUa*I=L#NIBL1i=&bL)j= z0Db}T1QX_8_ZAoD=&v%+V zASTR3nT%E4UjsctoYchjZa)a7xnL90y6+f=pmSwPdMJ(7E0vLR5C~GU^(W@iF$UoJ zy{>V>idg$SUGq8E(QdN%Wpctvhc0R=HsI?cGWKy{OqVV1({rhp{vF?LtcRLq+E1N` zXTXz@SAo(^3Q38Ieurcs$JhTX1y$QG4GoR@w#2}$!%KDbW(ZhdYtL`~qnhhxG_=jPWDYN(^dJvDNS|y?)M?c}aun$B7Hgz}qVP?Xtht7ZL9+Z;pri0kdH#MaB|FZCbm_PfJbI0N#{w;aFhuW^Ke{fr zxG=V7QcBb2n{-B6ialiRD1PcV7*cZP24(sAD(0Gq)}cY-jOz+cXKF{lWrFOXB>EPF zou)0phk?+Z40DSvNoQw0EiM4q@{jxD zAF@&Y(T_8CYWYrRZanQ53I03tQq~<;eLTY~hAr6U#d@cq7=b8qo;ZCh5%Sxc1f z)jXcYK7M^XojsNv$*lzV@Q!=+5A~vr$`f%+?;n&EEry&4cp2U|A15O z{6DZI|LfOxZi9ncrS4~>?un|3Hl#Yq+RumR0IGLkO>YCKSKBR%?o#B|V2j$}fi;=N z>7l={=I2*gSZ?GIr@;s)obiv$9{y<6PRb0rGYoSLM`eFsyBBK1djw+%9qsJT5V5V! 
z==AicmuC*QCH*?v#9a-J$+mB}XMy#8jQlQK=UE7x{~(8sP&3l3L=q5>!PfmjAUl)MC~HFUa>M z*$ifMCMTbr0;RzI31~nfR(af@pXO8b*k=!{?iIzV9)s2gxXe0NzsTwOKNWdf7~n2O zI7$5F~gP0z~8O;owvr`r)&=uzPeEA)KumA-Y9*oPS0u={E>KdY&vZ3YtZt zQ(kbC9^#4>^9h0$H>_!rE1NfRNrFLH$R)Q4I0PZ)H~^vH#(&;>j5V-7m!#h`YbUg1E8--meHE5Q=Un^i6PpEGAPdK%LCYm&a3OhYV*+EwP?KV~Kb zHfgi#0bIg5TYbce|IimvTsYL7h)3mp$@ zVZR=3`rihInXjTqZbUc5`#xG2-GJ3m_iEs(mhk{loFguc+=lhnRJE>=Q9oIZ@dq zF8tc9n)75!%fg$UVE7McX@Z7N`Pqs5>q;ND*=%9Cvp*teQ{icQTDKDFQ(pV%7mDBg z^$kp{MxZJlkFH@zL70}*0#n>gH@{e0_o@2-_POqS`MR^&vCiy_x5gw+<|wSWc}pl8 zti}AHF^JmrFUiXvmL#QRW)x12Y>RF_66iwAO2pwfKUp7WqjzIq*4(y>Jr3BieoWp)okvt;Ui)>FUo6~Ekf~It z4x-y&;)F_a@BW~Dhz)!^`40qub~G_4yRWbBr!xP8MHQj=j}u<$UHv#fWQ6>+7__2U zmnNlC{8JLnb^gq5*6B?j;4IjGr6AM$&6x$Wh^Xr8xJVxD7k~_cGc;)26NlRRY^;4 zFNbV>|N13pG5#^ntl5roQwf`?K{A+WYW^Le*p+oM6g18zT>GRf@J-UM?n;3rNU|>R zQG#GMQo5&{Grf%T%q7N07_;lA`mj<_z1}?0sqXx_^LkH#NX6z~x;E4M5QD!J|IXaW z_d2VN{#mcPeI#hb?_B!9*P8dd4UfIr?U2fKZ?*j2THYp2ve!BfZB15-7d+>2+f3x$ zKoG&co)*RhO&4u@8Gzv)4>k!G+p3$zp&t7lPg(0xMH%s1>_wU{uXUsGaLlTB7vJ{Z zbQO&RMHh{ft)`)XiB=ky6uVJq)XqDJXY#s2iwr#Je*cWc4M8|Kc@z|fVpGdZGn4cR z|LrPtZs(#Q>d*Twx)|`xOO2e;em)~l(g4uVt0|Rpbty6k;W$r2#LaX~*s42yd{A|^ zBp*IHNCPBug+ITTNFRo!k$mQ+LcfKrn9UQmU!&IJ!9oK1qBvky}03-Y^|HOlo|#fz68VG9*UV{tn!qJ z426bflq~PBIAw)|=e84X5|~`B=>=bWD6XxpMbfbYULf;c&fTEc6Wl~r+xyAJ!%qC;na6G z=$@KHt1L>gmfJ+m>%X6%owCYy!+@IuMANQW6%xWaeJX36dE(7HB6i%^L&2kOl>u>t z=$ZMensie!eI5Nxuk#N0L5g?M4buGi_nzgguOaEf0{yT4#-F{?Qc3|~iW9+p_(6LmVVs3l%%#mwS6h2w*=uz>{?|O% zcSO)>sn{MH*hfB7t}IDJG)%ei&3~}_nIJ2ElozAsNYj!jr?CxB`DZG8-~@8UaMH@4 z#`+h)ABAXuXByN34r4a^<5uefxwtKs;+RL%bh6rW~8 z0S*(iqXzXcuZf8_k0UXwUgTF1&&VTUeLQFP1Y>>*m~LZUMlJuSUXFhJB~!~BI<Pw$xKmKj=_ygX%E(iZ|2~ERB zX)D7L*E*!_MEo!o{cM2oVW2eE0lj*rzOH4iApoUnuA{Er&`!w;GL%kr-Np+dA;$TZ zGLUV6E<#@@xLtBY9sH$jFn=@v_$B_t7r^x;wqQnLm@@sHr?Pzc?s;;mgdj>INGxd7 zsp+{)Ag3GlTU){dO`4LA$xlWXSW)C;?8Se{`ji8ZpP2(6Fk`ILli7#8@Y`sUW!l;j zI#hfq-52xL;<2!{T@p>M1~G+ksYtx6+ZDCo&lVgMt_Df^r~rPE(s}@8B3<0I(nD0f zp1X?l;B>CVT*jfJlJq2Cl%UMFo9C$s8X4oLf;Fe>ni`04s^m;A=7o8R1iiC9;4I7H z)3P+dFbny+f8pX_S25rDd+Aey!V8mBaQh1KWQDcpzG&5D*G&tmOR<<~R{%NGh;$07 zPS>FCGyCQa9;~qmL|qFmlbwt+d+SG>oZ^GfD7cKx!C)Y5WAMhoK_P5RX!(y~!GNg_ zF>X-dYq|Kg_2xse+4Vn&7w0F&XyIOO3vKf4YKIa2S3Hoh!x(rx1{ZR{wz8_hOvIjQ zHBRb->T4A%ff*$~3kQ1ofwwK|NMFhYvde#@kBwta?Y`MM^C=7b>a1}xWz6}C`A1#a>OL~ou<(TO^oEo^&+sZ+n?6HtIYx>IUT^)mmGKw0 z7f;1KYQCB(fc&f9gaqnsHfejCdZU4gRCshIX>&Pj(IB%! 
zYfSu7pZ?tb88dsgstYy%UKSkyPqN!=Ih;n_IBkxbZBlN5lZ+1MjjUW#nS!<5G3AD(1||8*G}S*y^7_3D#qXvfi^@dU}T z@iwiQh-k%Z8sZm*vRBip?+g7QlEv3IKXUkHM~Fe7ryz&!hNGeoByF0R2=F-T)gGbz zHeb}n1|V~2K}Rz^vcwJ(1vg5srKTBp-)MFYQ^(u*ps*C|!j1!WerTgPDdHSX%pK5v z460O2mXRi@=^etwgT(^O3G8xCcsf5cCw<}?X!FCSrqPz6M8DP!>*f-D7an*^T@N+` zqDAF90_3dZY$d!I@Pu24+?K=96eJqszT;!SqgA;P8**)gMwQYZmXxb)zbo`~ybzJ` zzMxx>jlTu_WU#ox!})QiO}`f9S@SIH%gcIZL#7#`LE53@lPKBIFbTz&;^nRFjDyL` zpkP3$$ZHG?3#R@E>H}9>Z9X+oEJ+)@@Y){}xbNep?l7DB#c4d|(uzp1U%lhE;esrl zrx?}?cIwP&)!&r6tPvR66~yR#4arnW{)!tjYls-KACyT;}FEnd|_yI@2`OX3iNBWxQdy$ZjqX$~HDJcHf=<)cn$E zP1vlsqfUIgQ)6tFM3wpm{a2yk*u};=idt|P>^0_9m8-EUARnzTinw?M68J-fGOk?Z z-${b!ZVNm>ERc8XMpfmXh_K9|Lj|zk_GKBLZC6n%9SaiyNPaZcUwnSNiB_g{&VQvk z+aIDqK5OJF7PD$>)l6lD`UVEW>}9_k|H3p43Vc7@NO;5TItZEEd!Bl7-#nUyMXNb2 zyn@~Cb!9%iPxE`Z)HYX*t?P+P*rHLx!jw7lw&j~IJPw3ixeQEfx9380m1a^sbl(t* z;}(AMC)Z23w-TRfdUu!G;N}uaO}i*O5lPKnUR}MivT~X$b+J2cT8te4iESaj@$T74 zC5vwAq2MwESI0@rL!tIkK6kjj$i-dpQda}-#j1D0?f329SH&NCW3wcLY@y}rZVG)e z`$ud8DV(PHtRrK{XwX9&P_x&Pf{9ZX0!U47{-B5$TL<$rZrq_-G#Lw?{uqp%f?vcGOWLuZ_N;*TM!hiVJC z)>8{n1&Gf-_HA=~$J;N#AR`^Qf}PA)>1F*l8vZNUy>=8rwE{wqP>c~^NiL&2K2Kj3j?H7@Wa*pb<)LDawm8$8VRv8<2|368(dH%D7d2(_} z1DBG_#SsRkm6;mB=m)I;u3PG@jWVbsrvWbEAmtgLs@fKlvs?W)p`AbX&y*l8sQna7 z<kR(BP2dew?#2viEyWH(oC8p}Ia$4*gt7>1~5w&6ijzmzJOD_7 zcN&?W0!;E)=i>vXq=HNU8SyKaKf2=F&JB411w2G`yyF|mJ&udUSJPgzbfU$N9!b;p zt9x`YF4_Zm%r_qGP!3{FA^Vs!rlvn$BSftCtnLp(#?DvCpuYu!#wM`obRt|A~2?F;*rtW zBKnkp{EGH2(SFF=%3_f0*9$%e`}DE7PoULmL8JF&-4$lMbM{yaOa5byWoDB=DzBUx z83LMSO*h0ala}LUx4qchw1Z9o78FV|1)9+HS5=MXGHPG4drC9mJ{yF9+U_9&@81jm z?)WxkVN`9=q?42iA4N`cywFusnjymzy#k04qbR+;+#S^W_8_M%usUgP8>jewmO;|^ z$?Tn4LImaOsoqyKmqtr0jN5Zfux1ORs(h#o9z7QuM^*j&ga+#NC{8*!WyZwEEz!Y^ za!!fvUYEaMw$N@f;6S?pu{0S6?#G{-P-$8u-3-h#OEdQd)IlJ>>iI>Or>>dQ{GKh7 z?R~&#DQ&Cuzg}^^Z$##k-iK$%LqxR)y}YpA^zyQm;hg>-W0zKPdhRF zYfdpWHmsxhG3Tk^5D9|lt(lmCY@kp(eQo>{UAT62BFxZ->~G10=!N%&e#>osuR)9* z)_59ko_0GO%5it+nUJl!K^WFnR~g-_$#6q7^9mAkzS@e}6`0Gg+WTK^g-JyXd>pA} zd8nchdxs(;k8Lt|lS3Pgcx&YDA1*VW=#}9*c$bXW`xMLvE(3=#a1r7<0t33_It<$i zblsmW1NPU6n3~?HHw?p@FI6*yY(vM*UOkzS=y1Df8~JANhypxo6(yFpHPUT*x=MHsaEli)#zV zImwBQ9psp>pcN0RO$KOAFO8Jys8SBAWxzf%uiMr~-^*Mn=iCmK`6MUwCv*mC zh%-!$>_>LAVH`#WV1EAj?iAk~eGuSrqTR#z92&RWX#dmXuculWcQw_#id!em&Jr|a zY#eG-9MGg@&Jx*_Zch}^Hz+W2;J32qmah{Nx;-P)llkS<|HQgG6$q+mX`lTeDOw1* zzE$1zo?7Or)q&=fms}Glq?`e;E$Y4?i=!2Tk)M}kM8}!?Qb%6khFTb`T7Ew`5&&1a z)nO0(a4%NxOdhW$>`SHEsYsiuk>{c?4;Qgja~j60??)so+B^mYNg#53L{?%CU*8q4 zYavF-z>f@%?ngekCdpRQ&9z1~TgV8HpPaB(z@1!Si9u_~rKGhqx-(7jSkU*75R3HW zq}yg8L#yvOvc6_PRTbRn9i) z_R^>6+16`{)Gxd6RB)rH$?)UvQyM52iJi4KUK@W8UMItE?Jc6kUCJ|ZxUQ&B8z4Q~ zqAiyG2H}W0ikS?G#6O9)RUzlP}KsJWTaWT@OgM58X>fx?=^aj5H8}c zVrEyu2R=l(`aWK|9pMFw5vVRBmy50zvS6nsZ{CHet^@VPs|bOfR7So>d;5ef9%JsP zY0EMJgS~#Jv~e!MW|@(in*LR$h})4{&P`N^qhs$so_g0gB^12lUW@={+;X+ziz&`| zm2nya2i0a+sokaqUP@v535cyBPkZPfPx?QFSd4(Y2At>J4Rhkbpd8eZt%r(G`HJL{ z_bv@`;_*(x*Wk}Gi~|ajw?;0&N>%F)(gcB}8mNy?$f3!)Cf%d9jzzAKOC>JPms+-CMUk7KKKu4q0NC#coH*!$BksTxYNZ~ez@ z0T7*(1J#eY6F1E+S9T6+M8mC3%Hwk6*7DbSnbw_hj7(IvcH<;JD*3|p{%31m&b9U9Edbd6Vux~8AGHEQPAu|~nB{(1fg zhsQ-jc*fCjizI_opgf}UVJyYW_@JW!6gUz?126Iwkdd1&qYWf00m`I@ks)@thzU*I zgvSr*m*q7->45Q1(YoOEpLc|lUU%@FdE@QYTcGPE5#>Bp0iyv|Jr3pV1tZx$n^nLC zk6|_Zx&z(SUN$pxvFEJ;L%hXJyap`ZqQAw;^Vra8ly(Q-yEzq@_|I@g@8V&1{?4~- zu2j0}!kG51JlNZ(b*?>n#XoYCKHw!bauI`~_(g8OaQZxlE);V2s|0!N1@V4d&i&`2 zFAa~70XCbRA^OF37ytEuA=?O2i2fh=9TU1O)^vAod4{QtnBeZ-gI%>@wJzOlFMD+L z-QEM+_!X)|xsxH1@pH4S^ftt1hggd2K5qsv8dZR|A3SkQ@D{(lbUJCQl%rP6y3}Xp z!0uShd^7tK_71D}@%yY9@%!OIQcnfT7~4q#jv|=ANReUs*4nmWmUQnt)(%2++QMUc 
zqT_y+T(GTB*7^eOm|o@OUV5Uv5M5RTH<=%=$#OM1T{4n)LgFGcC-j!l}amy+?ev{(L42cSy@3586FyGl{jj93Ok+fOLVNg{N_h~kC zc28%L43Qx~cBE{2eBNtVh)6WDu-;A?=V6Q0-$l*F-zLVSsP*0vCDZK_-!W^Y43Dx! z-Z>Y27Oj*pi$Q*P_-umcOQpL%@|kIW`D$1xuU)p^H9C24!>+OB)Eb>2Q|KM`OYPhb z$rQtQ3Y78&Y*6}VO<0X+nq!LFczzB@G#p_e|DBJ)jH(d-o)N;~U<!b|&^eLlqV`uw@Ce}N6vXA+28C?s z0P2}}7x*KEBrSO=jlWPqL3Ksh0sYxfx*Hr;=he2WU&)XqQ-6d2_8c|lbW;cFKo1yu z$J?cb5#DlGafN%z=~Y`LcoX7#R6TIiLnLLD(cm;=g;knmt1fv%3g@O+20cmeEb743 zyJmh<*_tL#isc$9ROMse;WgB&eKT>O#(KmZgJ`iZIsVBCS%?{m7cKg2H zth{zyu-~YnGuH;SK1A5^SiQ{`!Cg%k;G40Mxkkp1J`zV+g~pP$ih}u+NkN%v_es1e z9ycZk4}f7S?^!>O;Y%%6o`^u9N!D+o=Zk{MpTd-DsGJ&CVyH#zx%j+W(;|jq0u0!G zZcZ~r0uPZi51wx;+~Bv*;5>h-b!v7UK7%P82p)k!O0uWkroh9*99in0roM-cM*ef| zzT2;EzL5%x+HADXycFi@+$=dcsd6|;JDit2F}#>eny5N-N(Zu#rz49VYqV}TYt$bf z1PC3^6xg*T`uebtc{zw$(&6)|39~mEMdJZCvuJX4z`B#WOQYrvU(6c9MWM$#0xYNa zM~q9-h%n_y>G(v8;Y)UHArC(M({~)nJVS{D@>U;jg#>-aNqjHjSCdZ~LZQ`peRR2) zE`a>*R}PAbp|Hpkj6h=%0BJI}b4*bwiFa+z*5;IK#WkJ(0j-$pb=*s9#FL&=YA;G> zR81Y6CBK1> zsc+U!Lks28d<@+Sjg=DsFww8%e(gkvig}&xVb)$KaD_o@W~1ig<*N&y6-T=+Mv8Zm zGBBeVsNE9 zKMY*}@Aw~jyN=p1P>9<@5HSvv2YIBc?Dc*j`gaB>o+DVqW7~qsYmvwOv7@GXcBwpM5y@vS)!G%M(8Z5)3J(Lf7+VoK4q}=b5>c%YQhDg7p5ezx2its0` z*LhDnRlK$aA1l`Gr;!QXpqeRMKjw75*kn5vY4hjy{oeF)Tlja%hE=`vbl$D2By+5Q zN+`kt9q;XSfvBd3PWdB}kCd+|wxcokRD_Kn>Znc8gv8on4ed4Y(qA*G$G?7mC!7B+ zAq@<0K+icCSXj8uo51MLkq3K0)Zht-wF9W`ce4TPbmLF9#t6_;#l|lAOzKIyt?HOy zUv2-#`8GwM^c4MWk|(WWO`b)myQdI5UMmZokQtlaGcw`!XRcUw!VUY2mysRUSip;- z(oh*ev(W}Br`9RsF1P8IRcnR*zCF*?0=%jI?{#b zzA1IlBH^HWD2Q#cv+Ii>ez6eWs>=YF@MmM7)Q}O=9}UkRPEL^_cMyo@B`{22#4iQc z_2)vs(*Vfl)%gYNZ*0sz0JCA50XmW9;F0`Ocz; zRpu$WDl}Xifbi8$a|3l6tv^@{f9?bG{TBnibO3j8;^u>fg7Ndh7atTeV_$b|;kf-P z%y6m`y#(F6P59Ccq_egK6+xgwq0!pvoLt$%RfK--9tmPnE(=5phm! zT2QRU!g$u9Kw+%iQj(pqJ3e*koVZLhaUNcAD-sBs!5w3t{nIpY%aQvej1tm)iT>}@3`hweG%mS5> zI_#IX0P+hjAaFB;!!cMz;}I_97@v-?XyNM7mGC~xA0ZtfnHRZDDLJ$T|3wv2V z@55dE!brO^&ki+gxRSSfHS3>_vnO14C2^6e-(_MUQYg@&8o&RrNU}aRV|=f$198moWXb zvEL2<{0An}xsWEcI|(!veO&q74r@kzK z<7oA9>@b_Yk@APR^NHIR*3Y+iZ{3_$4Cjgjn`L-z>@w;u+od6}*E+5dr z;__vXkL^FoFrn5*3zi)*znJHAG!JSE>vf|OWDPx7qU`521r8OI{ET>W_#B1GoyJ8E zzp2#su4&Hv1=TYK6VWpmh^;?!rOPr4)=?n^7uC(g9F=!;hpe1ZR0@@j1k%M3Qd2fX zb#^0N30(X`c$4g;;F)dU_Y^Y zqwHnWKcXtX>A%O3^4l7DL`pe1=;h;wcK zbe?(B)%cBZ;?Aad%u;yjfJontD>>e%idvRH!k-OXcr!xdSlkTu+BE#4ju5u8-a6h2X;^Rx7e1ITRt7~c@G z3n+N_AXTyGs7dXtaf!XU$WE8+A!xKSuB#ovDFNiiI_W3%r<(neRyqNxsDZ zvG{@L>@f18Gm?@nY`EK)SOk&$s_J_-XPk0K*^|dL{^V^_L+nEBVQk;V&x!v!?*AWq z{NmvCdIT0HKbgnzYiDx=H4oKTuGqkc46^p?T5GU~U}DfSSzR__oGzR8BBKXzU9}DA z3X9fz$3Fb68CJ8uQ4$FeXS#Fa?({`MT#pJcPV!XUV(L2*Hj+rVusmedDbkCgYhsto ze4;{T3~9T%V(zxhvybGk9UiqSMb-JZ`G)x4px{T1f_{T18@F$7N=ZKNpl+j)u!B$a+|Le`qIz zw(vrklwS9K2$$cPM9#BFWE=eULxCekxl0+ z2bE;>SJ-Eo-y%mZ+pq2}dbn!#4zx`wVH1tc}KkQy^%K)9fJP^^CdE*7I5r`5%Yj;;-?h>7yJo`gXbpV+C z`qE@@!OK~G^$Yml{NjhIFtm&U)~qyjU0pntOkJk|LP(#Sjf)S>s|qD+KHJHnK?<$5 zFrMjD^x)sI%k>KtiTmrRj~-=Uy~$)*+@bQG)hYEHK+AN3QU|ECuNkb9zq{pe-S}2R zrXMZ0Vn{bDk)geEXz z)K|>w*d0k}y;9zOzo~AzzBVg}MSBfZ_grkIxr8zi4}I{j?s_MNy#yvdBtN+R)=7QE zdd?6fl;&4OD1lyj;3eAJ?0QWRpBv_p70>g~neQ9ZSv$+&{mb0MF|=9zv|ho?tKn)e zG>Ac`glEJ7Z<%r#t$I~lQ4U}jdN!cRPO5iH;N2+ng|1<+VI3bEX^=i@NO!ZPg#BHp z-FxX|zwww?ii!374cS$jul1RqIC>Y7M*T_2H)E5*#y=;vZF_1MY&|+9wtarw;kU`f zE9iE#d17YY68j!bC3ZN$`ug5gY>w6WnGbqu@0CSfe*jSKuPW+s(U)kix#E2M3PY3I z2OB326E~~aD{OqL6K;kgZ!|!mu>x_x9{KSOhOA- z+UW6z3m!XJPrF?Aw?l19vwdDLj_t+oxTk?;LF4nvRCB7UzY~6IO(|M(&)P??1Qs-e z#vg;Hc&gk98qD(B`=iup(PD$-I~mtroZ@`L6i25GE(?+-D|2-z;IOKGpWXYrpHx5- z-4C5;lFw{zrvAs^x4W7GTbmb)pzN8`__g@-=#YTBK>_F-z2kFu@q2h0deiHta+t)zfMC>S{uMpqqp9?C^6&tq6lHMf!bu@}9kAb`l}%W+}*znar_UxIm1p?vqh 
zuRRG{46^wq^}QuAtDY&g7@b9zDb>zQZHc)*Lq0#1cDOEz@@t(yj+nSd&37U8O@CbS z0|fOasuUSn(id+;2UXE*{5{P1E$_;7&ft=$CQu=~wOb%C`s_7hx2#rLYe)Y|v{C~m zDEicawetvCy*V~H<^=PL@K-$f?j;gL12DSdwQvjyz$aj%yj?pVDO(R!^%F~g!nz#JjkU6r>n;RzK zrQC1x(nXb?w~nBqAu73jRA&!o$WoAuBW*U4)#nZol-+cE%VYJY0aw7YlVw&Eokcz8 zK&|EumU8g<<#$7eN&PU;&WSbh;8bekJ@mZ$rZ8+9=SuFaNW;4Y#4`j_@xpv`~aLi%veqzltrfPtTSBR;U(F9 znSQkcm|On5n$56=*9EB>udK~o9OAppM*j4Zu+|mPc2OZgd zMpoQ@E%}0)DB+AP|1%*HqxIO7c(qnBKe>Kv1IM`qzq zw*bd2G;ZM5YBf{WPAUq8kUz}nE5AtT%AEkjl|p+-^21*X_5>F`D>+WDRNuR@2>~b| zoD`=8Lo(n`wRGAs^Ze;Lt!ua4>Cq6fyRbYJDM7;#fy$tR&V;S^!<={F4~tFE-#`1U zV1yA_^FV*WaCxTnVNHh z+C@6VPzLu>kv5Gnf2N1WR(88W*#^GyY3U^nWet9*?V+A7%xNjl!J8k^M7dBn#QJov zOLC=yc+*f^re}mg_L`DWO^$aZ_W+3getE z_s^xm9)aMql?7??GmYy-EJn%0W@apltx{?<^;^CnULTs-A7<1|6(iuho?Ob3vunNV|G?XtLsGlg`K*gOCNf z3X0m%+;(c2^)sv3OOSOP9hjA|x3b3n_2rU7A1Y{}nQkw(fo%k_#H-=Lh3C=`WVKY; zzqnml0e(8Np?<;nIcR;prdE7Ud}i<1xYe@49K9>JCnBhTzha+66U;L2mlp#R{u!8F zrPG>&k6OkLA*rD1k=Eh1ZFNmLIDSO#uG5V&^67wjxOPu;n>3X2={F(rMQwNEQfR@0 zu(CItZ=w+RhwFNIT!Q`h?^s-Tn`qq1QYwgwgJeE%5Xjt<1X}m{pZ=V7f>>eZveU@8 zT$`wkO0r@{f-QXJueo)GpH&>}A*O+V!PG_Zz>c5Hj`}OGr}i$^m79$A<1$lD--9wv z8%c}TDv1ebKM@!2$UK0#gMOXK?3rdB+3v5;C-zdI9-LEA_%d5Y;|Sviqt(>^isX@z zSiq0GYQek7~$!Y9(y|##H+@!aOZ+dp7%HQBdPTTj9juH`A&u$uCl15LJ2({7E}M_;Ro4H z#|iHy4lN>61zZM5nz3zDDy^mWK0Aox(5U0;!L58AMqUZq^}_^H`!~vpq$z5_BxX+# z4qvLwdDBh%w~LJ^zjr;c*!ffTzI^1B0pJB9^(5clCdE|chqy7~;8~Bxglu5Y^@>Rh zz`|7x5nrH2dgc}OY$A#t&c+H`y^R80;nm?fzA()4Doc zGURGZZ=kZ|iJZ%l4Fw`$QIjOFng^ld`Fw0?G!hK580|PTInLW3odAmJ%YH0{)M|BL zh>hC%wtUV{Gy*Z-=9P^V9rBe!`(xv@3Nl^$dR~iWs84I0=$(k$%QKMu6e0>?RFsm9 zfY*~7bV&yZq4yPJ2?R&%@VDaYq%df#a7tmcK#@1oYtndT6$ zIEV649dG2>M(pIw>pjL9(yYJnVUToD9IawX3c25%KCkS6hl+9$}Dco(q zmSeuOx0Y*K%gXMn2C9m+A>0D*tOsRWCmLqJ6y<#NbHohI?H*W;a{d()u-VE(88U?* zH91kg(ZX~e&_$s|@OmvEmR|POhawtIf4n7Od8PDnE1dI5CLE*0Co$phItMX)AZ7-N zAX=7P7<{xS%U(ITL1NP~Lf>?ZtpG#GMk@^p`h^b?fhnD<(nw!VVLDVYYc8uvYA)!z5Svy^`}?e`9a zW)#S3+`J`c^6We6Ln?Y$Q4<(>I7=g*YQtB5^4JSsw`k0(Q5ErfF2eLa>#=pT`>&Qb0Qusx4t3> z?;;OiqU&0=-P%2dv$|rN(`n@AR2%fu>~8g>R!1Yu_7t>Ve`R3j?hAwju@QW>W+*fPh?D(QEsSuMPm1VHim^ z>x0?z{3Rf)sPpy}dVA*6*%B3-M(4i&KG#fwCk?J?BM3}%yvN#>EMIb$LPmAj+QF_< zHYWs;=4QQNlr z%WD52Vf`A9oK~OH$$%xnG8VAlRHg1d-J6O(UyGtFFIQ$n>x>(d-cEjON+;VNAAY@~ z2!rPy)k;q`8&q#^qUEP4!wiHtLAgWY%7pcd_0{sN{KUn7_#L$#(+@K{w5s!1wfo%$ zAM028LPx}a_N z!vg`c&q$K-Xr~F<#qzamO)!1PG4J?Go^7$sWUKxBc*^G&1|G8ScKi1~?;$v^QH1kN zolIv-H(jY3(!bph`Ot59&6T+e-WjJYf^%N(L>r`pI{Sm~hIGfX&+6JARk5&rn<|fS z6-RlY&qB~a?manfW`$5V0h-I-(kX7Rw8+<@Mih&`W^XLrT*ohyC! 
z25P-t*R&Zj;+F+Xk`A^*y;hmIjIGGNyGN{J1a6S{VS|qo>%BQ~CZt%B)Gp8Y%)#iEbl+?uS88pE`XX=NcTLt=eHPg5 zLSP1&9*7?o1ws&=mLc@%VeRfQH`n{m&!&kwwcoujmd<+jG{x*OHbUe&A6fM2fn8(c zb9#lAdkKCK^QQ0Hn8R-~I$zbah{x8o8%ny?2n89b-+KtTUGxZ&(|xv0z>}t4PuJop z|FgblE0lepK5BDy`Eau!ZUZzT8G;GVD_t=W@M|_Q4WWVcw}cpT=S+}m!d2)GIR9Gx z^I}mG^n#RnRHExBq=<*9Sde%aB8jNknyoLZtuqzsxl%A%BaXw${&|q}mTpn}`rtI^ z7v$Sz%KbeYCoKkeqXdf>$*0eR!HiRsZs6$dt~lQwhH*5^^;Z~$Lqt>nzRr&mCpK~J zME7vz;v>^{hp`PmpQ=GwXe1Dg&%b^b7O+pkzDEG2`i6+-CmAwH=kSH9UY5&yeB&P1 zsfl#S_KxtpP?j*b<@UscaWd=Xn%0 z?-<>ibWWK*Iai{=V%2J8p@~?%EyVRDvOIUykz4APz}zB6C{ z2xd2x5)u$FFcC@4@?Ijd6()mUJaX(_PS~Si@$3yb_f7LaUe|w^bH<(k?>tKQx|(&j znz2aXAfd5-5LR$hx~VqLWnMf>iz0N%ACa19uuswj=fbzwRqYT-#=JZprQ~JHvnCd_ z5V;5@!@5)aDO6KIh;OjYiMlzKf z{P_J&G*=lcWtGpTFF3T_IM3N|L4>F|;M0FPMMpr=_0*b**PRi@8c{Ts>gs9ot}wbB zdfyGqL_bV=#8~*&VF-wG$2drRMvvg)8&vUGe^VS_t_07Li?O`XH=^qeuzxpwJ@=-j z>xYs9PEAis`n!*lB>eV*ue4-`0#R6F)wX!3)#VF=e%KP@=gemgtB8?@>Vt|uc2g;| zc^5w$pu{d3uF+q-5j4Muib>xQ;bpq>q+G8f8q57bWLfABDkiN4 zH6P?z*WAg)b;t%A9)C5T{dL6%_awdPx00JICz5$|gWCRT8)#lW=3|7bl9`7XP|X+5 z>i_8%9XU1{g=|`+jJ(dfB9xc%TXs2A?&_YZ$1q#fTSUJP3swhPhXd)rcxgX+CqST* z>&mhC!h&Cet`KN7K+wD6Zo^WO7$ln*RO(uu!{DYiQXtafJp{Q**E`27;Y;%10j)qr z!DLa%ju(n(h(DMqxG8^xACH{w9w_)$T9Abz_kkP$2>C64s~s_O1nZmatZRk#F2v%M76elGUAX^QNcNE z9!9c|*Eu=NJmTjKx@JpiqKl2%A&m?k&&zgXZXaNRdN@RR1J05#n2n?XIM{E?eCX^{vuc02e=|O-^X3%pepmaYv~%D z9DFWiEzg>w1`rb)H-B{g>0o4i8|>dz6SI$YS+2NqS}FdeO_=~M#E+^_YCX;Njg1=fl=m=dpXqEodqDeQ?3n9@ z8VvhXD@#C8cZ?n(MOdlb^?#8Q&U-QV6r1_uZ5!V>tF{N#<379#`Qqns_@BAeAkfXV z8bNH1ONcP|Q7*UnZ5T^Hh~v=EQrY4o=^Dx4`!7yID&U?~`^)CfO4xr0C zt&}a15W(_Uq$H<|bVE-REj8PNnAB1euy=lxOmH{-l<#0dDeH$&K)&FU=)~lBka;9W z;H>*Z;^kER8Xk=G18XrWlRcI#*CbmTU%K_SjeKE|_$&GVHM}o=4SjP8y}1}(N7LY# zzwz>C?r^{8)tixr-(GVP447YbeA9N1(qYt{5FYTGK=sElv4Ep=W;#>PJ~Kpjjn@3X zgdL3nuhb~J2aFeJW?ED(EXr6L3%9Ho-a8y(7&nJ)*im=Au~tk?AYq zxYzL-#j#iCN5t&D`^XX5yLmod4k0k5B5fE~M{DlXA{L7ZBr`tb_Z>rZ%y~ZrF>)r< zPcc6tCC0qGj4{c5=5g&S-7LUQ1#s5K=blsnCxGhTN|^Q9tne=9Q1h$DU6JQnly)%m z%3{zffyonJAq$cKs+`$msP$zq)=i!oUoiXs;7d7jPe|CzKFboEUqd0P2FotD( ztb?=~m}I@RMoBzWkGb)V*a6)7U{+_?06v?R;dMgihZh(vAa9fC1tp!dOVQ)MSR!@s zM_kPAlI^n^r7**0NbVy0wd3fj8s2g!C(Ci7%U^)+AB@^#Ck&CRYLLv`jmh{Q+Z#d5 z8IsTfGpc`N3ZPbz3#{&WmoEHYEF}JO%o!8$Cv^XRUnbA2Q?UU2R|;XomU%Nsn6QpH z{`)o7Kc_{hUynON&gP-q9beAs^JAa1Ho+`BQ`0x^ve@5{0<2>PKJ?ww$5S z!LNtan_2&x_8u`N6=@ATS9bBy>x?ARuYu>cH|LA=K84(7*O&wQtCKBmAKG1u6x1z7 zk|g0Bt1<7N{ra|mn70kWk&191@@N(>78wUR>oO1nlL%ezr@R}ch;gD3kF{Ik6 zvo}U(qBe?R%nMh?AXHP9@GM=$(q0YA-bmU>%&I2)4SXJhrde_*M~CNrQ2?oiLDzo9 zpAB&3-g`mJ;&TwtiUD(KnxL2K^qJyLX1n4G*%%pl@KgJZTg!4N<1S{rboW%Fqmvi6 zo#gdijS)-mRP~GjU0@M^s$)Cqn6)9I|Y|@tj+?q`kuntW21- zOaDm;(twIk)Y!Xn@MtCM=oNZ3PIkAfs%_AH z>@T-xh^c;!2Lg(=XB7VMVplvn8xyd@b*V7iBX7&qc3Fkz@y6L-;A_6wg@zz(0iWK$ zgoK{|*L$l34{G1)r=~0TKFe!~8Vw-VZ7_xfIRd(1Oy6#QjMfdI|9o%M!LRY1txGwp z9>ZQ;p2WMaZ@#>k^^-?RE+%UOs01I2KsJ9zn}w>H%KB;6Ag?Zwxek8`KMmQNrv=wT zwg0l+d+%{Lx4Tcky44%M4=dkS<=8i`r=%n&)0z(fP`Q4$HY9Ysg28h-`u4Ot_b^1_ zA4>Xn-?W`L#)mVW)rOKW{-8qIj2Ab@g8DI?H?lX?y5!Ts54bR7(f53&m)_X!b;i?( zo#bm_rFhr@5;(4YcZ+tI^V+GfA57xb!4S@K7^vmOLX5K>!;Vbm$p=k0dq@%jA)hn# zLhD1-UwwyD8+@Fqv`>EW&iRmoeHnQi!inE7XWwYZ%Pj+ykYMP=U|W0U$S%pGIPT~q ztJ-1F+7v$h2icY=2Ah(~uk>T`;+|n0b7oO|28NyOhAnQ*+hb^R)n>{N4D0iC$PazC z*PXh$pXdOx$AGnv(;)v&Az-vRa4+l!TRW*#N6Fg)Ww`9E<*&45JXxr({<2hT(_fmv zb~HakQf56#%fLWwy^xI4Wnx@pL~6A-j@jjx&;aa*Er!5JRiz-nP%ut6A~Ezk9I*|6 zscQWOyYw>sa?7N6FXBepE-j6DmKa7cbzOoLnZq8M?4r{dEA)&6G7G5AuYrE|%=BQ3YaiYoWU38V;75e*})ozh)0xbG$;rKlyYmr~C((D~8_ zLc05Wvvj7OIv)Kua)uqUK$XlwG8c!#a=@ZKY0{zEs&2naU|nn4veZa(R#tsLuFYT2 zlVOC0fr{&r(|FFVHm2-Y+mnrSy}+P{jN_>LIKWm)(%OC5iovNX^g2;=WChdneiVM? 
zntfowfZ-rX^jp1L-D8kM_VFj-Ba4muW(jwI=Qoh=Sfto!p@nAJ?4am*v6Z!7ox*U*F+_+BNk?v9tl?ePBg%%chX`r|JS$mI`* zXS*Y|Knf6oO}pPthm)*`=LOwz{n+u2KiR^X9M#SXx)lgRc6NyNGCOtVz%ynV=qH#vL~yX79V#m1=EN0 zG)@_L_a3jnFS%Qj65YC<*6vicz>^tuCR|hb4edJovEj174`)=?;l>Q6hXt`ON#f-t zk5-m2-0s|C``KL7!6%52^9tq`T=NewEmVVXLo&{;-E27f%V8{TmvZHA$UQiULt_YG=k+gSPAHCtOUpooXe2K3rWMZG6R*O&1$%QhAvQt? zb{?UoIB#Hidgye?cg~~`^t749#BYeNvFV^k1&u+=GS5HFpNBVU2Y$XPvj0rKoedMS zpMBbwhE2Nva+cA8E!XZ2vx?qaXT7=>0(<+2z--!igzc87-&g+rfJ2%VZ|xUg@kjQ& z`%TMGl{Z#|K{TWpzbnJTq>koKaMDUgNv}GnMdzPm1YGyqh`5ISV|NaCoEo@Y*Xp# z_a2RU1~cK{%D-&he8^5(tt~`7BFd|lX7sbgp>0+1@+-x~sDxT0EL;I(it(nTimEL! z-^0F^6hv~u^*_DzK3M8hKM27+xMj&tAr~2!b%Ge!&A*z&rU{;O&%-%i0+_+qT6$l>Nq3ga+K8*l|d-!pKl;v{z_H#jO(Pwq6aL@cw zsfS~hN(=GUk2IWhw>cJ{;`SGyBO$mzI>&3f6z<$7bpFgW%38rO1UMe*8OKlwkZlrf zQp|SJV0}t8J8xCzuuq8h;529Okm>mvm6#`|U6oPm{DkFpMu2Lu3HzO_0^ChKBBf5A%CMX7`6#E^L+~m45$^ZCJ)99k+ zf&>LHQ##fWBM>$6yK!~TT~56UxWP2ae%E*k6+$Z4=l)Mw+>cgSPRieQn|A=9{kOA{ zr$c$w3_SD0__${ttK*K=WVN6 zH-aBbQD$w0%d;RG7oj`K`BW;0S-Hv~FFNDhc~jL?cwVTKq{aFQ6Fvu-Cb7ss5RTzr z)qvUjpccrFYS3@)VvgM*)qe7~fo}>|)n*G-Rp?c~#d`4+#w=Ty1tV@CrW3n|?_`q; z0-w`ib$W%awst~_tyWTF**)_)R3%p^I>X;R%IOC6n`V^e>S2tPr{AX75h15i$p8a9WB&rZpJoMXu`kT4gO9~?UK;&QY|ACsT~+Y-EXHIX z=Ozcfi(Zym&-1xOu=9?{9<9wi8s-k8BK9vE$fw4Cl`w zltP*ev)F{_O2&P;Tk2xHZ}vHWBIo|DRs#g6km}|iBzulsE`@uvwv%wwWml5z7O6DC z`nf7-df=X9QhEbz&c|H#aQ?rq%l!33H5{M4r@d(`1V-ZofgC6Xi<0gRZAInft&8!k z!~9V3%}qh#UEm7Ge1I}*C@|X?E>s^s>Ws$*T<<22a;PB+v$%^AXmdpIanj)SXjp}* z(`kL%Wbhh#_p}YhEPK#_3(rU%{6<6j1wy&}sw9e>5AL3$xgH|Sreo6ou|#eB<6MLN zud%g~XiY-M#HL-M&a|hh+=an+UzziOQ*s`S{Ax&`4HtNTvMh;&r!(CuCw<5)=`j$L zU&qPIpA+H`=6EzI(WT{9TzW`unq;4UA)&#QuoW6^DPXp8sS@1vYI6&^&>=}{!S+{97Iy?&FYktK_pH0_YqpP{< z>@8GIE;}sw$%^k226Amb!6O+y0m{H*5e=n#7@gO9aFI^GT@~0kGZ3XTZbHRl&Pl*`p7;&%G>Fu>rsdA>Y~c2!cbVur{x}23UXD> zb7}i-$+4sJL|J;oN-jECmozEls7suNJ~r2PKT$C3!1cmK+D?glN*=W3%yPVIuIKn2 zEuP)9rt+_OtSIN-8Qj+A-@19@v~&(~_r*T>RFZqR9UzFuKRbeZ2q>8CjG>oBnMaCY z!BpsKrCv`pOmCvOWsp$rG2x!UpmbW1qGKg414&=3t#~sYjw`2UybOy~PxYH3^n++t zTZd%Rvtl&Uvn~I_0dnQe@$|}`q`B&X#8*t=OrU7-?JoD*uZfOaPnc4C@gDoFv0xKO z29JJf+1ohN_2|{Fy1v(H&9uclzGW53l3itulzKoI$XFB7?G?;W;xY(X){d>NE7Esp zeOCH@#o`Arh!A#{R#~t*oQJQqt{L%yfX1iw;-ZwsST>wsI&A0X)C{>~os)d^FOQ*w zINSh5l{D`-@7PidZqG~^ruO`GZzLp5eZcz*bT%(e_D_Rm$kEvVzRKJeU)k#BJ3Gn` zGE6MxvUwIX-UrjCDEX`7hz3B-W}`fnEW)&OC1)kO|EuE(blPhtM>g!m^6h~Z?~P|e zI3WQ=7dbSXbWz4rEOu*fLPF=^T@L5zI-k$-0W=p$blV-6$em~OP?#+3>Nt1ji`^l@ z_VlnuiS?M7{gladExnWJmbi;;!^4k)EvwQzln^I%s}TA-B%obQ$@It8VXP>Mg1VjM zF)7K3_Mu6-5~=Xd{#`*5Xy>e+ug^g?VB)*-wzE_QkG= zk-u-pxD?D^EK5^@>L8YmYB#2)*dXI94su>A*i_2T(dA7Rn0~MyNDWNdoxP*D?4h#J zf@*VFd6>-bkX$)#D@hnfy@Yo@An*CUWWt3-Bn$9No1D!Nqfh)OwYW8G)HV%IwMl|F z-|t2GzU;wg8sI=KlT(1wVTAXXO)2_E`qQbJdxCOffs!C61@YVPC}A3Upe--wRyA44 z^7+sE$z`{naQ}pX;ytrxf4bOvf+!E8sN+S)=PAt)ojBMdD-4BX?gTG4 zm&_JA4=eBI{$cf-z_<5D;|x7AOLV@p`4il^o%1c{1xY~6w1!3UnP<$W8XTzO*|N^F z??0^7Klk5!XVkBPqm_alO7(ryxCEczsSB>OkxLRJn&1kEFNK9zTcZ$c->r`wdsr+A zeHAO}F07XPkyz}ZPfPU)BC3CQhq2^P%v-7bQFng=GOd;wCy!t(yN}BbeakU8=&LHx z1h3V8__OTa{|n%;-PX}oSYxn-Ti}R*lb~SdvCuo;u+^7VxgqJ+W%FXyg{>X_$Yl4z z0W4aJ4k1=T+gW}j<=lKKMi-Yr1Pzz1N<^E#<&gW?Mfd1dL83a>F#1dP z+WX88PjlSUHyzPC09x56EELSB=-+H+u97UF=agGNIh~`Q9OJ^ZThy3XgEn=5L{C# z>~|N4u~KCE@|+=EGxE)s#1L~u83C44<>^tU-fs`;S^I}l8wrFs6G5l3qI4rd)p9D# zBtw3;$1GMr%Vi2Yd?p_zi1TvD^XWwT9$Y^}jX2Tdk_S0e{z&07dr3aU;7=3@Co8@V)I7?UP2=QGuWJ&P4S3d8Ci+yJ8D zyAndoqzLgE8%qoWPSJmmCJaxKrkffLA`xsvT zxTTU86HrRRMRWEIdY#Po_SuwZ6>npWwESfV1!QMD2R)`MX??TKYh( zDS}l<9cAV}J*Kw^2W|7}%0!4TX*~6TVOAT!u8v?&6Lsm3iq|q#YY}HO!?>=mr3GUW 
z#I0ACue&36f;o(g+YgVrn_&|w`Ay>O9=%WIb_U#cmr^qIf$-OKf+)G}X1#xSv@{e!baUw_$n&}qf7!k6TgZ>XZhm7D-l2gmDeSvh@`Z8- zHS{LyZ61t(5xYYNKV1{b3sa2iQN}keVsohDcW~j)4lj6?!gv%GMhU#|#k^GA-94o= zQa|#?y~P`#ZF=pyJ|%VOoGSlh;aOBih6TCLz1`sJDAS0)D&Lr7_dRDel73MBbn+?K zClU^WuzAf{dw8~Xb%>%~^~Y4w^Iu3a8avq3pv}SIK|lrnTwVhaGnROEMNw4B z;^U<)v&PVQRdHm-_2vVjO5&>EWHEZUhd513=-xM|W=eK#>o1NlZ#=2+`OuAB{<7SMq{m z<8kFh|HlC%8V$4(jGyK%{inkK1>&_}*dydI?YhO{u911QWdAmAiMa%4IWKEJo(kVVMr`t5`grNh6u)VS~M&SXjFqY|L((>&~nw|x#HB20YpjrI#>Oz=P3 z2^avTA&lC6TIE8cr?hK(6eO66HTE~(0$)u8#Wn+F;ZE}d6#D=J#8?dPQc>dbsKWdh zoz>sXFmD9!3X5^ItU^kSOIy70`94J7R>A=%*_vSdeN;&$;rCne;Ox~vTj!?A)lWL! zs_z5Q`-VhoD~H@dFaBb+iDZ^vcU+zd7AQl@{A6$+?xR{c2cbqxxcAX31~%BPvinj; z_s1NFLuHdtS$Boi+%)8A*HS-_dWizrJiTURInc=$su|my?M$7G$nk@?p!~L|;j9#w z%^WlDhMJW;N+1EtcmI~}JA>OkZPM+;bEH^|vKB0d(kJwZvHjCZg}l1->E&&|bAm=*%_Ck|aA)OW| zZ=Qjc&N^4t1GMhwX1^NL1V6fmw;WAbO$2wXV(g}ywt1R)wdx&@T24?yc49u{Dx>|= z>>}2M-FPo{w@TZ}g73WA8^g>znf;oS`(*f4f%zES@(rl_{hfCM% z-KwCEKWo9+8yFNzL{Ws|jYOdpdt9H+HpJRg}-#V{u8VoNJ^Bx39|w(%Q-a z_8B!t$4ch0l9#ohc~SE=Vx2*|wi>e7=5KwJ&!AI3xw_biXC1tnA(zHi>3V8TiE2n$ zK?>tSET6eRl21rSP5*0-LA_a=VT+L~&Btg=^j9%ZKxw%`BR|k1muUHSMd?vjwc~Nb&;lKM3Fmq9d<-6J-rG!IclCCrM#<32dwIX-f z%!o5kcfGw_pYyD9+%;Hd9}CrOs8lV|DWN;ro0|ctt{V60sG{L|IT#%Y^k-e}XEOcS zJKn^|)~>5O4#r;5t_C9GDbRBY_kYgZ2s`UK!HS9BNAzh=>M2F8OeSIr>p|(Nw5xpV5q!okgLz* z72|RIBmVZ9^;8RxNTfog(Bzf-jn~sa3MGGmm?gdl%m5rltPaP^OywxyN^tjp>e_LD zZQ>h3LErQ6WUx>7TyEt4-cvpa@F5r(8&4gr_RTF(Vo*pI6FAL&2eV|A?p?k>FAPJ{ z52(7I9aNNoWs1#$a6RZp?gDjaF;t(eQEpa3T^38bd$`=i?Zeu~8Q@y`ysc+$NujNEG4(fs>~Y!+)y9GjJrF*M`lZ${Jz6FIE+eD!yP znWtKwi}SU)&{L=@vP-_aFM|WAf+?dZ6njnoLb9UJSGBP8e=MSU5Tf58E#`Oh<@bch z#VbEzy1A~pCS^=27+}^c`7wSvePY)crJP#$fBl~XJi!+Q)zaM$AhER-`M3tdbu=+^ z^Gi?jZgBp^`@g!}pOw%NsM8hH8^3)tSNob<(0Kgio2UK1?cZOma7o(x4X@NREL2v} z|6Mr$c1STnSW~?B!}#|k{reND0`P&9#XzpspA!xGKKuiSERO_g|A&l_5hIYGqm)^$ zlG{uEZ={2+oM6O!HLJYFtJdDt=+q*eBBVftHw&REg`xun$U5plfZV`D`1HSyJKjRX z#aA=w?E5EQ2jt~524YzAJiHk5ZqryLHG3o-T)@yAPIKS=9-90Ctn0~!KJZ9-NMwWa z=`cKNzbyYKWk1Xdq0kI7i|-wjLvmYvNTYULp-JD52r8X)a8mpDew zvao65Iv4nK(>12aKnRd4;+1?$Nq(OV`NwAH>{ck%*g4`{Kh3h-{Xvd*q}=!+rbxq~ z%?5Esv&yxiK(nIMcU5ERF$iGDBg2h<%y1AmpL+k~ZOu<{s&KV-S5cyV=FHWX>N@$1 z@12-;R~|x8sK91Qvz9$_OyqyQwh|&ht))p}O^j)-2r$a5(h94%(^7kYBk(hr>Q7*0 zMTnW*27XezV7XU-iV%5W7wA6?T%GR|oojrF%JZRqAWgD4Q)T^}(S_^oVmV9`K-1Gg zq0|lP@c$f$j6y#1;bPQ6SovI<^IWg?U3E;|66G;0><5a;|>eLlZ(UU-MzZ~ zO3)-GFYG}sg6gpx*HfIP4;I^&{e&mAK_T9zh@2~cxc+?Jt-y3BHK2ZF^M*8xXXz+6 zZsOQi&imKoguWpr@n)e%V8nYl+S#vbHe=bCusfcc!&dZq>F7lC%wk}o<=GVOmkLHs@naS?2< zJ>zxkle#Ga|B9iG!kz|*<#?pA<4ic3qiL76oi^bh_{IWYA{B#F*|MzhJTH-7q zh&GeTb52gc9!BOgnC@S#T8H5hw zS(*y~7}{E(6)TgvT)z@273>TTT#Q&v`in_k`BlFuRMl}YPF+?K`0uvkT@R)kj66P~QoVj3*cMpKp#(KDz~T zx(eW#K|o)4#vRuK3Q+aSM~;BhF~i)2A4N+=8_@fI_F7DOK=1K#;Xm(l-bQox`{F*Z zH4ZAJiH)B`!gIdlx7PsAp_=ofCtMrEsx0mLp0i+WrO-NqOFVnn`Vq-4MQ&YhV>o?q z2DHo)L!B=F3@cQ6F#V%TiY7r=j0`{HpK42x6ANOG&RTq>P@9x0`(4fR|<9EQAYxIFmC_brLV~7p`aVSq8O_K0k@;QU}q>n(i{2z_`h|- zd(T)m%@v3@fs3sXg3I=3^?LlWJN1K=5EP_&au~-txE_+`r6_k~>Nv$J7bqR^^=;5Q z0Bq-0XmSFWFKb+b+by7z6-&FJs_S74aABVV!`uUvR~@{^{j8gC5&VD53h(##%37^} zf_i(lrcC=?%R_d(=Jz4h?x2x7USXliVX*^vL6sR&ah*5zz!(7(rrECnDP~S`zXWNT z`WUu1pSb<1LN2X}p7LVtXT|mI+cdd&E9>AZ)-w*HAHnF~UqbO!+)y7wU4H=$m&=h+ z9oVsU0jY>S{N}j%@^nIXE{pQQk(f3#6<`UMz@&3DXv-gu)Arl7)$;|pWESJVq$LgD z&BoJ_Bk=p~$WJ2@!#PbEW7PqEkU?HKDYClSaGjtJ+?0;nu5p^jSpwj84H0^ryf#Jr zX8j4Qmj&e4z?1DY4EYe*GkQV7Fx5hB!)=u!wmkWHtJ}BB%BdYb`1Px=3#OS!$gP%D%4Yh6;JN3tdB~%(!R+ft~Vun~hW?Nb( z8$I04=Y$(metEw^#~Yq30A*f&HspoNbYHsI!TwSNHtf9%z?vvdah&dXyi$kJ;ssb~ z>^h84ma7L6y8XFoJt_-!Zy2x_eQe!R8hxul>-$H@GpMuK-m86Jl2rll 
z9JiEy{YU*D)Xz}x^Rx4{ddrpCXI_gDtg0}}P`b15OPi|I*k+WZytMg(H_+S19nkY* znCsnZI(Mr_HnQ5F6UT8%7KV-kFm*y`Y!{$zjNtm6Bt(0L3^Lu zMHZHg{cDTm&Jepe04UMU(CbpSzdaEq7W#Zb^+mkJ{x9C=ue+S#eQzP*3<BHPFsfw(Te!J zAqg;3z5~6;87Mgm=sTV0vLIKdWguCcj3k!|mUGi*-ueDjY=?`xp2PX5hjJSOqb7TU zSdw_(?$KA~&FW!<1Hh}BZzuq+01so~7;(;bXQ~9U-m*MC0qyHbkVw@xA4v$)g_b!V z8pya&$>#=2|BHk`nL%$c9_smVF?a%)CT~D^aR|C8adiW1Fl}b5>P5IZpv+Er$Fm&F zx&tsgYi1wow;^nL`2o@Y@7t)a;dMRGFmkF>En^gd@JMPu2XP(3bz^|1YCj@?Q zq|2lGBbzdg5dfU0*(|b)eAu*5J--Ryo3tmKW@q)Qp9}T8( zSAE0OyrhGHarG}oAf`q9J*xoFr9yz|@l6fmC*Yqi0-l`zAD+0G!Li>1RZD&ya0x?C z+2Rgs!G91+MHSd20XN%qOM(2?{!euehORRrB_$;*9yKzc@nKvR!c%eR<$jd#J~Q|k zi=N1@bV0rpVP(Cp>Jm}2l#03vOstHL0A9bzk%)Qq@BR4~?X!?Uk=ag%)b8oRZ8;v_ z-S@(qpB8odGYfBjeCWV=PUpVw1NXQzKc5bd&)s>EGByk|761+Ri8Wv$(Je+Nf^h=o zK{>cPQ-J0bNFkW8Qmz4qyiwJjVX;*oH(smza>)h>smmsvIaCV zSiNp4W_8*PbvZei)-)3 zH(J?B49Dr-0Kxbwi|Otwr!6y;!g>#88xu4YpUD47Y`#$bg`H;ApVCy1SlEqs*5%v% zTDSkakvB&WM}!g7r@Ud2h>&e*GhOPfT_8^>xXA0j%(WH_jor(Ri^}S1pCJU8hnj^3v0{;2#W>4 zJY+maG3y)cM+VKxe!v;yCVEj|>HmaacAXX7g#rR~Iyi7zOE9Idq|C48dOOKf2adSY z$Jiu|nZfM^czcs~z%Av!spyf=PXHVo3=W20p;S06cuZuuUx$^7OF%Dh@;MFKuD z$_icxLbJf3LGP6xvc#4XAdq$ zKit~452wGfTcw@9zf=p@H=P{MM{Iyunlv>6z|6Vj*Ye&zYjV!K=$gNIf-2ppNHQ+rNWjq!`nFFu=yAD~rvX$o>;jBccB|Q!9ItkudE4Gx-TIa_F{+JnE^Lu(PmDT{GT7vLY9H8K ziO{Na)F|A@8rwcFH8##reh<>CtW`z~Ica_w4v0M@!jr;tWWbujCUZN^-?4PFb`yB7 z{K{K;3Ge5-eQE*NdxQa$mo$s@yi)+N%Mo0R^40XeiiWW0?CL+m8E=AO z$=m(ATgE~Ww1@zzSrEB<36)}Aihk~Y)cOV~ZgQi>Yi4ogMOcg8`br!>V&f~b%7xg( z#yF(lC%=vg-moKjgCce{bqct1Wl}^^#7OP+CqJa1sRAXeX@9&v@Z1V@V?>q9ml@`d zPgyag6=9t((B$BNoB$LYsPn@Hhg9C=;o{1}pxsZ6`v(Jv7BltfY}p!Pe0O%P#l2?3 zFdsUM_W0e4SjI7xwEmqJP));<`1=Zk=_>0qS_{DXt+Xht=}d8hoKDQeiWszpKD=0} zfmVMBy>4vk06l-#7en^by2yGjsy7QQ!pw{Le%U!oH#d)!w6p8kkKPPgW-+v?#U6o$XJ?d41G|E&&?taA=Uglqrt5l!<;T_$J?+ti0 zZEE67J3yCj2?_`Wr`r=lP=iCKP2kQMAcM2o4rShsGc`Z|v|MIDcj5##mq}2jTO8cv zN24nHFs#aZg+lTld5MTlIO64eWra z{X)e;?4$&BAWDw!4PdmF*M(vWt1US$csRKMd_2duj}0oYz0;MJ+o7krQ>5~15A~<= zY-4^Er9q_pOok?~Q=-=&K$*JHj_c#>+G#TCbfW+`lr!_7A zT5qp3j3rnrFIF#7^^}>Rvh~Xt_GwuB(4$RR0kT%Wn4) zwf^o*U;kVB%U^gPkRP-&CTitg?9^@-t}S!~#cdZgUiEsU>MMFQ6QFhZZ(gnlH@Kc^ zJ^M=g>4^S~)?^qr+RZt*UA6pP`2`5hFp0FA5}3=ofH7?fvb{D}6ny0j6>ME4u#*WN%Q#$(ny4&@euie+v~e&E&rB{JoMs+oS2VM1eG)jD8eUi=%WH zcY**Otmp{BqLeY2Dm7*Y-sFF6>;Q9tq9U?|Igl@OFsb(bTd#_K3bFib;{O~3*#QJ}B_yRzfxYq10`_)t5jJir6+Cilt^@v$ne zfid4KP3kCrXw1u|yIZ$B`E((=&7&5lTyFafJ%HdH)^n9|44Q6^f-C_Motb|c%{x*Q z)a$UvvH695C&@{E08O>nntqQ3MeYe;&1(s2!64YN8Y=e%G1O8~Ak^7!F17yzJqgnk zE{9SSzKiC|#Suhz$#kk&VA*R5a=o;_{G36rU=r#-eAMN1TTjaWmmzb95j~6wWER}E zkd_;OyY^)?(WqVnL>y$}!2F2(7vhyKhjo?JmS<_4331oA1 zLob4H*?OnT-z+m6>YpEAD)3c(TqfN|_kHs5mew2A=mU*+OeWoL!~t|iUpBP*L6@T zx^@BSH$R{Hr z# z0uR>n%i~RaGit&?%svB^=Nj)^(O|>+^Lo&m_@8p))02n4jQ-P zA4t1VNsK)&%+jyY*qpOZXKFus*j%Xt&#eVgv6=UuZ@f-=cIIhEr5{FkO`jsWQPkzo z6Y@6}M@iD&CMV->1Nw4>%+ zLM$WhyZBkdY}ko>t{w=Dg=^;*_ka~UZ^7blA;_MddtCqih9{t1+JmY(_AP>v{(FGdN=L11f zHo=Qu`8n6Xb7cXdL2JcJ#!uV%aE_q7IRlP_7E}ldpe2y{ooG-$p;0RRJcpT964T^) zJzye-Ys^jPTB^)Iv7g=uTs$4YBG&||(_7&8=@`T&oxsuo?aNku>C?Zs{_x;Bm{gWg{&9A$TZVg* zQ4?vtT>}uENmy?rQznI|^2UTITTv<0`wMC4%jdCLV4q4B2u?w8ypV>Y4W9?qg1r}z zG^lyM^S+WStR#*1(n!#pFYHAAKndiD+py!5KID}dh6%aVkMA)xORiVvMiib=9T*KN zYKNjD686awR{=K$kA;y-X(o{v>%*yXzVH)Cd(erWf2XFb^A3Ty@u};5r!c@}^mg`i zLGj(^9!TMy08aVqqWRIs@BeK3*n0IMbef`$!=G%9-xOqfn)Z+I7?Lsan!tEb6%6JZ`5%{>h zgiD$qh`bi`&C**={{-^|LGC{Fj|ed|R5XcfKMHXvV&l8xnU`@jNpoZzh$|H5P8V94 zDG4c%z4;_wZcxg}VJ~tvP%Av0+Vwm#pOwL4Oet1D{OTQ_Pz+|W+sStYSdb2HpNJ9N|txlDoCVw{7g6*@66I`oLusGk~w%ddQT$yWsnxeo9 
zZS-vyHCg#%xA0_Vt{M&r+tDI!=OPo!?I0iVtsXLt*hgj&<6BU%p0#x#K4f>w^cYnAn^g^UhxziiXJJv<&iWiUmb0INkQ5FGW<1MK#&;q2yQX)v76EC#X}bRpAP7SgexCJI>U$~ zTCyZHy}h|S^Fd>`V9O8-BRore>0CDcMFfcBISkqnuJwCIkzPDe9bzgxP(k4jR(l)L z;5a)@(0YOU!Yo{m3}O3G2`ZjPPk`QIQe-%M6i723B!Sg*8gD9bcgguezw?2SyL&yP z5{+=6m`@+5g89Z~tmCD0w+t|pjXRk+fM&grares+R_-B@-YoFDVxq#+f?fN`@gM6 zZpI|4hT(X4PF>yl=3b?wU<0V2NPx>I%{K9Jk6OB_r@P_0C(SFO$a;?+BR4uSq4(z% z#Pseo(ew#X^rnY3CZ~(a^HB0WIohVEfSBG|!u<&X{_eeMhaS0NkAIwtMIOB*Y>J02&|kym)2{Ho3(OD1kYQlvL)q9RH=4bH@UH|0FU zeZ1mpFg*@MA4PDSuaCdGrD5ig-X9Q2r#lR-%7(pL^+$0|H`E-u@%?)|Ng z-4vIr%R{RLqHMLDic`R}nX<9~oYlJ3;hML~PFT@t$` z^!l7V@j37we2T&<@g7$EJ*?2D_pZqZmYW>Txue!9+J49T*hE!xZhzA#|4wXA=9U-i^=+DiV{p$MvtD^ z8_J-vM}0)Q291}g5m_A|vc_*P=DA&FSHBu1Ju*2IPdFWh9L*d#{8$;+T)BO?Ad$k? zFu>5AK88{wp&j{}-$hi0-(_P| z!ewiNIcFq2sLASm4r|?cuisE(xNYxC+AqN@ihg_C#Hz^7@~xx9V@DFptE-V6t9|pl zWLD#GnTd>{ul2PQ>(!bO`k^GbN>Bt^K0oxo?mcm6A|an39Y~-kAbg4Jsx#u z;$Cdzr@e0Qy*cg6rAz52qM@ZRV^<;(zRO}O=|t>3{6iW?-)iBi+Z=Ne~E0s=Hrs$_~=3v&D-;|Fk`K@T9&WNF{O|Ol(FnLR~3TJHV?i&DZ9_ai^Pa~@BzlcF`4ul zVB}&*(o2x`UVnV(N&6a}k%0wWS{0ZHl)u*6MSV`{5gTz2F&w-EyeU54GHG0W6^*Am z{<K$s>gDf824r|vtGe65zH>ZioE5wy3>l+hKwcrf(xia?zv2<`Ep_X7(Tgr55 z5dfv#EAzz9x{b0_584=1M;SkwpPOtWw9rQU#eNO9D_tlIrO=ZE|JsWGwY6`vL zYp1{Iy!tF^X<_tDttb}LLNX6-eB1`(=JZzD(&gIHB3Q$ zdSd2id}h3_qy6e6(b3OwPq?YNA&afHSnW_M&UEr;#F#~a9JjV*NS;->#?UKjXMH$&X+ELc*;ur((%3K_>HQ!-PARSV zjlpiRoZ8TvBHrM*2F8_9RvoRj%{h1uK3BbVl|16OgijzuwH#R}ljXq5z^t!(rK zaK;n)M}0$Yo~wIs=oZO$#Gc3|Qm3opg+H)E*Tu|*XcG#@ zh1dLvWF=N}ut?o@3pP@g&qI`NcdCPM(PHe{Wb^3z3q(~#W!9+P-I>dr(Yjp|b*@ES z+pS_l-vhY?OYg1jxF%_8%+pgdl1}_oK8s!3A-?wQeh3>p7_tg((4Ha}; z{PMrAa=96y8pl6MOlU5{O@64yx+nm9jx>&Ku#FK+VWAG07N0svlTgT2c)fYuVbGKl zr_qHYO1&Sk#!)4n5okbeNUap zq+00^qKLsLAT$=mx3mxl*sfIsTi z5LCM2F{e<({Z!=V;Nc_ zot3ro=Mo{cuFB`DZEd)F80&;Gd=KLgbXc1E@H4MN@N+sZkOTInHJ{(g&kmAxm!}dK z+lB<%8ttO^iC&q-YxtNAeS8u41yyz7G23cOH-t5}P)doR7G6#Pu78P{v71vmZs#cO zq|0vQTdrNwLP4Cge5mi)t#u4pUXgg-{Q1=SyEpr8qgP51q$S)&3PBnyb#rWm=iyf9 zZCV|qgH7>*o|uh6xx~si*|H_0a$+gZQj(`QCGT!-y~y%?#rt&5#f8!C_PsURny6nL z6lfS5U(su?4w$a6kJUB9V+Ot{Qad*bRSLClUEbL5^QJ!T@TwAy z4RMiOQq8GwGgoxNmPzqzq6)F9Q=;l{-O^E;N0+J`Q`nwnt7f?R%c8%okJrvSwQKt}HX_&K{j|^c6lthejyASCbZa%k zpv_gG>G<)F#Z?Zn{NmD^JIv&c9=lsc0`xHar!SKNltjalR?Zqf((1T(I$mHN?{QoC zmmSOAoRrZK@Clz4XwguPx~d23RT5onD!Q}Qz;g8j;;s}uC3tG6>VYl!$SSKc=dG*Q2sG^jlin*xf_d6gt7X;e=L>uGSRmU5D z)6}?jU)arqC90v!1sYxH`F+Qb9i>5u+^T)5hOz9y<$&1o%O+Duu;7c&<7ZR5ddpU9 zU9)8}n0gPuLu8u?!x9ihO^UJffqjV)@cF^XRW%z^;A&zONmWpEt_yaN35;Q2)Z|Me zwHBF(DvGvysyIlLXGQ4@^cQB91pC%6ql(Ot% zlukFkQM7ZHqbl(lF{(5pP;xGDHADK%ImZsx z)mls2qxtAIJ3=021A@=CKMbr}e)3f6Sg@+`8Z8xY@ExSaY2wuwY{N+-wQF7^S=IK; z|M+6PP`;^cqFWq_Hh(`IUM?#B34%9#?+yatPhvnGz{Jj&Ji4btx5%cAjqm6rVC2X3_QM~=*8FnX*uE&dC*j(mSdS;y(WMu8fq$_hIu_P? 
zUS0#|QfgZz5mg)Sdo(M%US?#+{JB=wS38yn^J`r}9==C6Q|7~lM^yv-J|zGA2&HPVy+B8+(Ljijhrq1UEX z^_>7)SFPz&SS(;wj?J?RWj#Q&*|BcC6s>jIPaV|n=f6EfY06~64ag~<%=`)}oSw$< zuI7)N6Si>=nQ~GN>C{Vwq3WdEtRN1z6aAJhO7mNiabuCHE#l);XL*IYoJux%I3c5S zsv0-tX+BfFIeyaNDl8XHq|%7_ssqh_=@a+T6~A>W4PPWR9u}z){S;y z(oq&>jDIbJ=+)=DJJtKrawDYyp-Z||>xpQb$L&^K3ZAX2YW|L9m0;q@4$14)URU-@ zy;+hWmMWN+KU({UZam|k_s&xI*N;1jM>3O7t_)x?YqCJiCI&IAleX*y5y--VIu_q$T3= zP!cVWPO5a^q86Orymc>3g%Oo%MkD>`Oe}T<8jR(SU#tp^duFo}Ri+0)eimDyT91`D z5pU6)mz{kIc7!!_L`ONCe2I_O4Q=w9L=be9E0+h=Vmpj>;%wkW z<>M)@tfBOT;HL=g{JU5DEVjhyTC0zell3rBL=B{+gL!xRc6_X`JUSmdXHWL?h9Mo} zWJC54;h%$D-x@DWDjaOPVw;IFUVI*ze`YWPSyeTk)04@WtPM%5R8`Bn6xztjw!qFo z&SP5Tp~^b4U1VZoAwTL)HB`ucn~=;NSID==anNpWTpw&USL;v<9%_ZS;DWtct!o3( zS2L_-(vcM?<|pIRi6l64tPAAKwD1!X?u`h+8;$NKP>|a7QSDv~P$lA!*lff0MN3c& zNMTmwYp4r*QClB1TW&8Qn}k%|c!X7uOhir-R{i?JwfWtLkLb^yQY#k)yjC zQhSA+zSor88y$PPI$Wunao2mQkUrodSHfwjI?>et&)ACny{(TaShfu(nDYu=+dZwM z62}!gr4xzeCBu*N)Z36ByIij}IK3#7xupFG@7G6X15K6yWoVP&l{GT{*On;ykFc`{ z`1Ldz3Qr)M^g0l~r!_ma)aT-#?`Pe+hY=znEU0t|!!hO_;bihdjgxOpL|6Cg){F2! zq>0NP!7GIKo%g#I=x^xTS!snJPTv#8jcc|1YdK63;nFkF1)NKeTan%!Ma}qpwWP96wItXN;rAZ564z5b%$9nAfHd4u-<@Tfj@+7z@> zK-hTGgmrCyQZ7TF10FB${3}ZS{igTs9*Hl-{kIeMx=0AP?7K1UsV(~~$1&_SBrx)L%(l7Z;;Nt?9$D5c&%qA1z(*Q2evxph*~by|Ug6VL&#zse^MeVF)e zhj`aY$6#oopEq}q3uLii#c_m*{sRxlP*-4WS$gB5Y6U|-&Vj^&>NBhv>a$^_zLDpD zo-H)p=tls<3r6kxyDIzl#fAR+`91n)K{WwJI=O!@`LEAdNKt>t8pn1JhO@3m{$Q3= zMry4Sn~;qgLjEg*`Qr#w;;=%~#5?CxLR070h!N44cNVT(uk)+Y&V~#F|GATY|LhT` zHy~|vgXlpHO8-%yl_LSne~vG4O0GG{QGdYcr@vz}Wakta=e4aP%axakSL5mp|IaTx zg7sD@(C7igzHdf7ksmyY{85M)pbQGoQq2%ZCX##Z1|3X!L=NMtB;1%Xv9O9?X9Rsi zOoSQ_9=~RZY_-s+jm!Sl?EC^q2x9Q_5L%#i%mN-EWj~h!p2e+)iOMz1YrFDE4qj2j0yxb+#(iVkbZ9b69>$Se43MFRsMoJwD26aZ6`3vDeFJ+1ZEFvpF78r)GvQlKmmvQ3Sg1;BK3~$s7**q_?>ql& zV1n4c>+dg|juj&Sgc(K)jUK68`9PMXUFo#1qe{^C?h8~PJyl^5rw1V($5s%350}17 z_{oPPTlA4d9Q`Jhe&ip6c@N9L2W1yd!p(wSpF6zQYK-a*b=teavET z%lZ6(#uZ2-F7SJR5=9%#KoX#k%Cp`^YwO}cvAQhHB^+{b`#tYVh0EGVZsj-tIQ2reO zjlHMXCW2NYW1(XKC94uu(0uSP#SlAqdPNP(fBS5aLa8K_7muS4#y;>Z_p; zvybeIpW(~k9nOZ37fnZFoTn7dA4(W!?6$niO{}#m@scMqgvUvHX1IaWdF{{GAt-iF z^OYV^YcMRS$3=AyU?&h?$cr~i5F0&iEfops-Vd82>Dej}SCFKMeJmN#9p156&lpef z$My&6ISBV*NUZOR8IBIkAg>H23gTR=vo#77_Y$8he@#IgG9udNj@EgBS z_8>`1Y!c#|!KHQc}Zw;Nnv}7@t$Rej!1vo1Z;e|Lw!K%*@^|DQ>Gjdstr}_}r`+ zpZ`8V;srD+ZBd${$;AEIfhSm6(2ey;e3|qaJ~zo+5?grDwZozIYP6EnD6Z~*LkKu~ zOfwj)c4Fi{%JI>Y0rPAfnML*xL9N|S?=*ZKUKOtQ{DJ|{DJUMFb;n^K(Ek4^dK3o%wm>kooJ{p^0DJCV$nWU>In^?2u_|NraYeORwb6-k-(c%f5RyW^9{GS^I`{4daC(ClJwUtXP z-PhqEV&rcT0`{z10y?oVj5@-NLEPbmNOfUpku2uX;X?miF#<3xvc#w}{W*Byaj!X0 z&6t|3cx_zfwpes`MVtrxb$iN_xbiG1>1wRUpG)w>k)i0!kM7$48B|E*^gu#`j+3z5 z)g8Vrevd3%8HSpg8l8ZEKF2(XM=`lGMLl-RIku_C%8%%%U$$n`sDjibont?g!M0&K z=-pyygM#Jg^&4&Zb)_;b92W@V?(r9?Kfx_5?|p0^G&D591v$rX*hi1UNh{5X!ttTQ^G#qgiiQ7izf4D`y{Vxn3tx_hvd<8uyT@+*n_UC0LJchbpzo@P zle6^ig^S<{10GX}aK%^ZLna@N(A)^rj^DHpxM{oKNVp#MjpD4uA-(sEqt{S#eUTHT zTifsXdruTdMgO^>m%6mx@+$s4J3@-SZ1^`KUHdmOM8g@~Ly}{zT=pZTgnz`0e>M_5 z*z%B!SEKC9gX1hTtPW*Hs9TTBlsNL2UG0ZQzuWlzx!M2yzUGy{jDgu>8_S?gcgwF2 zZOh*y3sw1@whXXT@suUo{63w3wp#CejCW-vvqz=#28kw_aQY_jf;HpJ%ifN@SO5Ft z^RUuxfI~4h?3k#^c~rKl1owD*QNkBBC2QHyL=YmE;$_gQe!YVpInw&UHJ69+-=jef zVt1E%Iz37TU5eH)nKtXsIF9QzC3Mj<&(NgHHg4%g;8JsPkNbz|3(v`farngRHaBhY zUviW6;R}e(*&{%!OK=&iwi=OubItt5itu{F#Qsg2Ok5IIt+aoptclC; z$9PvROTSNw|Fg>Y(T3sTpL};3aUNO!a%^w5WgKg|#kE}Pkz$>RhGN!B#Rb#$*r0P4 zxyy@+#_Z8zF}q8zlDxp@nHoXCR1kl*@O#1iVf?~ww41ybdn9D2kXKZ zu`ucM0c(uItoj~LcHwqx7>}vPTp>HM!*qVp%&0N)|J^%?)ez$=gUiBd`;i^+Yd<9s zX)pA%o;nZ|tUu#Atq9E%%to3MdhsoU)pT~0!@<6Be8MkHmitvf*l+>C|I183P=P6S 
zkpBM06oFtR;gZWLHi3(ah@^c~uCM3j4y+h6Ej-gJJrfP0-YG=OoAlXW?vYv#ZQZH@ z@ZbN;!|(-!c*hW&E|-mRoJ5;~ugA&jd;LzCDOV<3=)KbyEJ(9U8WYIrS=(-_ui%X{ zKZlMW#VK};PS|+RrI?8S&!!{P-1q-%*~e4uzAV!0Pov!1f6du*>dvRRrKQq&iY2!0 z9Dy@JAD_N*wXh|s#5!%>waW}fm#(IAoHaXBa||QUZg9@{*3&opw#SUW?mPb}#J{$F zoF=<{N96fPxn(`39me4 zqxAyM--IsvJU7PZ)OQ)h60@Xb7ZrGH)(h;4W54?L#V@P1uQLt@E1KChKbj;>sPyP+@g z`kKR-2{N4NR@|j+{eF*YN@Cj26}#+xCr&sRpT6@bZQf1E^nJ!_ zOZR>%wRDDMYgn}kOghURrhU6O^Sw`YEYPh_s*je+%Fe2?I?ofBC*5-5%$X0TvUUdh zFE7m~JJ6FJmiNlY4?T{5&U6UtF`Y9Xr0T^{h1E}Q&NF&tW5CC;*iSw}y29%1mFrKf z?tA-BHwG6WNKR;M@;Z>opE{*%+S3-#Y3>|_0?P%TmnAIkGoBl0owqCRcmhVj`Xrnw zj%{xAKI8K<%aR2;{2Y{9GAxW2pS7_lnVVU1<4mzN<1>^73#iTFs5)V$?Gv5yWyX($ zI`nRR>=jU&ICCSrtZwAhua`cby8d=|>&=yqukn3GFF+nR336P~aJJv`s^Y2j?|GrN z&o^$$6I};;lIaZ2l%pA^yZS!MFt1I1#cq;%<@JF^iGpVb&we{& zwx`6W7Bgk<7dpW^TRr?%K--K|rxKO3i+mniuAH6uI0U^>%_#I?PTVX}6F+4mcjcCI w%(HqgTa;W%oQ0k^H@P3!vUSt-rs}o-**X0`6?L$kS7!hMPgg(SR>_1W0A8u_nE(I) literal 0 HcmV?d00001 diff --git a/site2/doc-guides/assets/syntax-3.png b/site2/doc-guides/assets/syntax-3.png new file mode 100644 index 0000000000000000000000000000000000000000..b1026a6eb6d813c46bdc3fd35fb8c3067ae3909d GIT binary patch literal 358193 zcmZ_01z1#D`~E-l&<%q$NGM24Idpd^Eews)A>AEAx6+{^2#821In;o(qLeU{gv1ci z@!yAZ!*82(6VfZ#S16 z9z_ZR2Yh!`6E6^mhU4Z3LrwoS8U&IBsVT}m^uyRHzzce$^d*+dyIjpcwwEOVho_PD z9SnoMl-uxO=X_w`z1lY4cfkqNdKfquICltMhgr+L6O+|deC`eY{ldYGQ$nIeEP4(U zP8X8nHS%rxx^Qb~WMpT+`0Z`&C{)%E1mhjk-K9vV5oHdzlU%J z>^V>xR`kOE`~=`l*(aA;;?i?XfB`&M>ZKZslmx2NbHyHVnMClTI|(f1ku){N=}uwoM4 zz)qRfUUtuN>1aTaUiCc)l`C%S!|&W!*?ox8(ec2<Jz9PYBCWmLlQL`K1=+-FuHkc%SzzmDwqFU(Ek&H%em;z`;?|{Db5NL5S_A z9(zKid*guk#}9EBjR(c5H5Ni6Sxs?LpYmhqvG6JMySHEc!!Pmf}?%WfIL=Jngh09(p#|$EmCD zoJ59pS|la@rFES%N}WWQ=F7+90tbF9yaA);9YLr2;ZJtTY*&@Kx1@o=Fih3f6cyE* zL>#5mZP07xZ(kgj{p9%Hn>eAs;>ytZ8k9HA>grf0J|S4tnJX38Xt(#T!^AlZ{PnGw*c6WAb2`2{QkXO9scj!#V6p|_ zec4p;kJ|`1sd*VNV15Vw?}rKtRtSst%DVfAz%KE0XQD#G;AI)ni>O!XJ&I5s9u324 zYHpHGe?q&S< zqEV0>=Vw*xPis%3y*^mBc0xy+y)tbq<4hGqbWT>)bdd@%KSx!!c+w_$KIHb$QAV!} z#`C>8{rpSDv4VRvb8+Q=&C5DHHuT7YUDe~AA6_#f&?cYl8HrJ!NGntq9&_fc`%jH^ zN1mPTuX=7&!xl@N`wp&CVk_g}YYIruX9X0Lu+R+}ZW>di=u2&5X~FQ)<;Z z3RM_n?^h?ZHNEdO>=0I``nl)+nQ%8&mRC_`W;ndXt9(L*y#irrprrFE)h^^+{@CkJ8?+^j&mPf4`SwT-|_5by@DWdhg zKn>(>ADoUomN~>vM11c0_jk+s&wQVdt2vg1{Cu}Mu`dRBYKeZ|vuG8+MZL2SeEisd z`@{Eg{R(yeB^q91mVV~YAd;e?{#YV9w!p1-D*6h_g+uS~4x?Oue8gezz595~vhC@! 
zF~*(dWs=jiqR^H0^PP;%`)>%9vwT#VGA@s1`I%`9Z=Z!t<2%yZskq=o?d1=lVRjGe zh-f+|304zejaB$hK5-hTHi+Ii?)KHMJiq+8Kh1xuO43k}4pyO{>s0|u@h zgLj^op7^b18Qm5ZHUNg*qlqPG$KOv^mQF5A%_~3M#SJFUbpN5^0gFN|VMZ+r+haLw z{!H>BBd*!AH27}BV59iY{Vy-IGF)}MGkP^>Vn2?^TnBZ5Q!tjwGqlsMFV=qoAnnxCKmgOUv3qd-wSSl?ywHt z=9h^R;vH(e7x!MEoA>t7^`BfQoEQ?GRYmatk~*x6e8j$Db2yDCGg^5wVDc$FwrF+j zsqNy?(ESvQ2oMj;G`0Pyo?r*}!OEkzvTum!Z$Eltq~j+YC7a3EhOZnqssG-TIPZD@ zY~nMO_At8uL6QCa9=)4-(hT)8LR|Gc`yt)1Lv?jH{^Zj~=?P!X2KI+)ZWR(Tak&5N zo_&vt+z$wNjr1JGfNYZMSsdW4>8-Z)ao%68psO9wGJ9-wwWdptc`@Tuo8{6t&ABoixZ7q2j7;L1=MQ$8 z%jUdBrS`X*76xCqwcqib_kYCzOj=JQYb^4$UYTue+flkxO$(<(y4#ikr$1A`dWm}- z%}#-z84;u8!;tGs-_x!4qX?#P;A#_}gSpu?l54Q>H>nC+2F-9ozUGPjttn^J?a8ASWAImL8 z>d092V7)9Q=JK6i5?SNv&lh;R$@iM(CY~?6=fjjwZC-)b;JdP_-@cy{I~qz zW?K;xWt>VOH~1|caL-e@fcw(!RTe~D79}3k*BdVwYjeOJsFhKkknb1ISs~1EM02+j zzW%HkLwlGktl04D!xtp~WE>R{AR%uc>~{Swy3Ey)x%8eo9#|<*c#|vpUPT0{u^Psc zan>GuKDZEi%{ZdDzR(ehRzCrHKmm$iE_H^JW$u0E?*N}0EGy0oWN^i{6_?XP{oF_J z7s^ypNMgw6_;=fm=XGiAUa~>SU$0FXS!Ss)`7641T>4ZRzq8hEpKlAqYto-NUCGcH z6dr}Lk4m3c4?VxnLxs-|p%i#iM~!8HB(>b0`9Qewc1OEFQmxja&9*0woKj7bK0=JaCP zx)n(Wms@qm;D}D=lP&pR6mA>AFmC0IVj+ZdLAKFeDQt*Q1PpZC%OHlWcZGxX8C5>oy3p9CWl7gIA^SXej`50lcP5r@Sz8`F&kP}GaRCHnq)_P9 z=^APeRs`v;!Ysr2jcVz_u<*0erCBt1rA^(+j|O+4FuRzyOQ9qSXS<=-JeHxC&%~*m zN2QL577h8I_{=?0jl*wATOj9A`nmA`#dc)2; zw|GP!B=oEx6?lFNznC?gGYuI!F)*|>lOA;FAmwTB3E~sb+%mW0x|HcMD#X0#`XgSr zm9Ww3oxc|Ce$3Q4j&@0a*NmA__b9Y63i0^%cCVlMl@Vg4D^$%R5YZ7>8H;L{Ev|yClZ2L?& zodxPw#Dw5I%1FJth*!ep_4Kkz*l@7rnRcS+PPwJhm+^6 zMU-~4Sbg}LFb(`Qc^JrQwy$NakbWcxMmYHF`(s)t8#)H&gu7;-7=J)7@OtWqu<!P!<@I4{gz$wNl|_&d~<{3U3Su`+=1o2Mg*>Bu|f2 z3=S0@PhyZJ`kq#U-B&d!=K+?&*6tW$!kAc5ml^C@Oek%&_Jm>^Yc=m?^i*GXdzde# ze<_qwXATifs6fYUmFeKWNFH+1g}3PSa|plH#HAYk-C@q%M{lqTGd`rW{y|3cOqh&1Te@%h$2Ff(7^vj39 zl!|&AzlXLRMM~2!r9ScilHXSmtW0zej+LKJUKM1se@}C94(!oQsZOMvkIBo+Q(Kkx z%u#qD0h^FMjmNv~ARYXB(n#)V_x-4}m0m=5q2T6=U=|C~`4{Mfi`Cdxr5MjumB%l| z-0yc|ui7sBs!WMaeKayjL@_tZZ|Rd{HohaC*ZO5d-(?I`d2(KxUNXyK%<1B<Fx#K#MBgL3_+{vl!b-xt$xQ?Z=B*7W5p&>BEck8ng-DSM4RMC}l1^D+^N`jb(Ye zy2=zX1d{d3Z*u&kN~*J`P`?Go=jk6@CFqJIKZbv+5l1p8e%XGayO}DA$$BTLq{ztw zUz)$;!qXo}ZX(IUF8o-mQEPLz+ehpibl%->D3%<;?VNPRijk8$ErqTfgcY+0VNu8w znxw&c!yP`Y(Y;(&S871_NU>DnbxX25y%$3J7E8HJ{ZDVe7}>(AV>RQAW2^lp|19IO zm29wipKs{?7<;lwGM-&u~IKm=fTmCxO9Im`Q=3XVjAj`@Jw6Ao~x0}R_*e(?E3Z}@IEVddTXhdv3XRE^dS44G( zt!_2j>}TFej^QKGYks7%uJI3lA~HjkJbmh?i+Z0`tg`%C|LF4n3t`Kd!`GqGmla&< zn)Y~g0#(A(?|Y{KErGyXHgZaRd{6qydGX(A5iwr)$;Cq9wrOVePo@q=wSsC8V(Jrl z(z0|w`m^%5hIvojID=_dscMKSdrmp1&`-`7YE`+S&gv(E8XH`TGS zkYz2*^>1T+>vNPOeD9=>6{X~_My{6{K@V>0*+Jek(E!^6<_Z{n}WWUMJL_G^0tsP$YuOwTP7dzcwnyM+Y8hr*g|*Eym>rwNWAqaG z>J@-S)ZUrpggfO6@*ZM10e0k@q)A{b2s|A>aJQjQ1_*%Yg8= z%Lcu)Sw`Av$we2#goa;r@*?PKZa^8E)&yNGU6X`B+Pjah6{RO;KX9L_ zIvq5YmN$ufJ?Nu%L)La&-8!kZX#1ECWJ@L6*_n4qq|X*+Nm`6OsT342(6vYE_x^S2SS7n zb1LiCmfdJGMV>J5|1Ju>$`eWM6twB1f)oka+>XdWm&7t9MBR2R0@ft9(E9yPyX_hA zlw3b~*b6YOug+`XOj43h(o9TD{HvYv6CUTwpvt2IivP@0G6fWRCvs(6k8-7TzN~yS zYQg3mj?8|19Uy31Iu&Wn-Q~jfl;RnneszH2715cvWAGBmo&rthsxFSz;BRXvUT^66 z0Z8gQpYueX=odCec$Yx&FaJto!#_lI0_1SD;fr<#A%ClYW*?AZ{c;}_#*Hf-D^*Hd z>A1eE&G%c1_w1!MLPp>a5)8+XF9J+8$3REwQ8J*;VUW06`3BUo?MzYus<9-D^7G80 z7w=RP`!jyDtn3dbqgx=j&sJ{binz=?9FbVdcVB4n za>$bo4oKV7F_F8YnE4R{Gx0N^&{hk4=Lb%QUx@J=dmhcYbu?(fXnD<8jXKUNtIe}! 
zN_(-}b&90x{q~kXt1AN;t8bg;DXJB!GX?gp013WYRok9k6mq6^cD&0x0f1AVt~ZAO zfJHhK1=CO^+fusmY6ICd0ss9_UZ}ys*Ghv(YJxJ2e81_C(^Xpk;~&1!S*ZY-}=Y7$UKD|A6Ab-Hls7wxC#S19`vSzl6_S2vAr^_ia5g!E35PXk6ima3kqmQr7 zcRx3KEX!NK3H2-#VCmN}#Pl@sU0cA3+UbbM|AZ+Dwp7;rr5u^hQ!);(=QJ#6l+ouq zpLAk~=qMBA*X<6pfHBF)Zo5| zC_YJIkPuS*y)|uc_WKtX&^T*>zW0bs=STP{k$6mgEGnWh$8%m2pL}=gZpFBIe1GhV z6P+r*{lyNL8v&`;>vwMG#F&BOC{JxG#Z1&lS41z+>jf%q5k+#5xcH|}=v@@QDF7I! z1bV*p{uIiuV0#n_I|e3a>?;7k@}kL-j{j!&-?*^ThdcE5V!xkK>Ojf+rs-Xqo@5?(J+hBp*=h5TjZ1TED_ zd8+_o6}KB}p4Pa+FCGA!9E=ZqGCQuX9(doC4+0)8DFV{90Kj)xW|&c-$@Y{-KV$dt z=Xab+_+_9~n{Dfe(I}=e%xqB$@j{ui^)kn7celVm66q5ZTE9%!O10A|Duz~uI@YYgsaQgbMVe9a37d`20o6_OX$n7 z*@Kc7AEe-ugz!3Q2)!|Hsb>J7R>@3iE&|70;6a-e157&p!l$S8oD z=&9ZUS(JWiR)yTt03B0aWv@(CD3;(y%b>kptI2*EYWG>C>8**1L#oVRis2QY%n3adi|3w4BvK3{(A5`3Q&Npe9Ze}5%{cM1l9b^nlYH){Hcw3f2FGIo zrZ9YhbqqWPG9ohIFV@fbEhl--`>)}YfeGxITz{0Cytl{ffI#>@uCqx1cdrltXF|iJ z-G>DGk)pHClr&`W6=%$enCl$E_Cw&y2$BdnoXeQrc%Z`!Vn~@?2M!*a2zKjsy_p91 z1e14iP(1Xnl5hBLGK`<0DvZw_j^r}hrM;sb^@5N2k-;GLAw4Jaul3lw5+mGy#(F%z z?_`)uD=22o)PE5hG&F3?()T&ZLrNQcaB(mi>oI*|UFbWZ#uY6=FLd(`=>gK#e=ifZ z20ExKOOZC!mbp9>_%1&}rcFPgg%tf+NoL%|zAd}6NNs8Qko5=77dg$`Jfy;-HxCDl za0+q3Pi}oHSIZcyR1#!c;Zemhtg+y5_|3%n=x!$I_#>W2sU-Xz{FfgY(zqm60SNF6 zgCA>Cl!;!?Dk9~ zHL(qz@sg{;RJNHbo#6v?qikhD;A7_xlZIVW@=RC7)ddY0iS)RvS0MegZ3hI|(mzP4 z&Vx0KL4KZvaLR2^9&_LI2F?iGW^_Ef*r%kA%X581LujT6sRR}tNmc)K_u%nTzCYGOTdZ#8mWgx zYee;ZXh^dKxNn3lqe;RsMjm5t zb#z86&fB5hgOm4vEugU3Y%mx2vyqc|@w=_0(7UITZ+h?5AUD3ahRMnbZ`gfcPFAiuAN`eWdhfI-rX7d=__SzrSU5E6DqPBC7{s)Kv%NJ)(@dPs0o+ZiU>`x86PY z@TkfVL5lcI9*voTV`%8Nh~{G}3qChklB;WK-$r~i66C4t!mM(y-|STvCf zf7E$aHgAbEP14cmg{X_XS%y7Y_MhCAm;l3;9v4i-%GpnL%&oprsvRhr5t_rB;UJ>{ z=`PwHyIZsu{&7qgt=7{aw6vDtNH>2xYhlg5=Mc5mArs@U2#|8!VTZw(y)I- zQj0y)%N9;YVme+gs4OTJr%2ItFH%>Frnh;Gi~G-Nb7q+yxa+7Xt#bo9_oc0WRNdQdid46 z!%5l8s}~>>7UdE)HI^7!Mpwf~ltT{EyAQ%oZyhv>7p-u>qn(Iohjw!{*ous+e|6dwzw&&#Xu2D5A&ZFVA`hYDu!f*BCff8|H0+oT*{V zzttGRQg^hBkAbRw4l8HLpA7cHB5WnowLN(!g~3AwA)~!dqswe2(TUAY#7}pw`-GW^ z%8wSUpC+UJl~lXjVM(V2_BoNXW+hx8vy-sZ0I$`$5^+lbhnkbNK0v~e;z#_HX@t1PJ?`QqNx`*^lI?Dq8q1 z+-WFn>bd>Fy*^*)qO+^M?vEk=6Tu@+`ZfJkJ57g?ZsURcH{3n>NqdDJc`bJJE&8nt z6Xu=+9Q&w9Qd}%gr{Bf>Fu5t0r;6IW!Uswtl5C7oP0+Q+ojhK0wq$mK{k(VdaB-Pe zm{woFR|iu3vh!U_vy`_5}r#>0N>Q@3XkqES4!u0$l?M1?u2Y=eq_-_h#{wr0(pT){rQSLwMHyLnB9!V)X=HU0gyi)??Y+D8^ zUMb4P@3tzs2D{rcy<4J<8UGzG{3Z<77n4QX;T%;!-jPNt6pOEe>lN~O=l@f#`M<8R zxO8i!N60ZFNmv6<{AFnCag>Wl(*vo<^G44(dc zmj6pFgEM-Yb^uPf*KlR-Lyr2i`G;jD%QA%bCerptUieF?gt@gp_>Z$^L-wW>xu$in z@S8OC&Z*b3g1RFV@kdVow{!opIn|P>=5U#}{C@LjL{l0&n;)=1@czK?dz;Cb<#XD1 zE<@;PG)ZGe&D?}I(NwWSOB9@NJv*_{KFTw7GP*E@6zv^4?YD|<8dJ7IQ~tUOJ35TM zGw(a@nUpzP9KD=*@Vig)dIv0XdCeQ0O*|6JoSu9sVW#TaT3Y3?-AKq{ofZmCF?=np z$584<7;qfN!`=S+8Ao@i_-^pW6K?ndV*UeSKk$Y+RZC#fM`1*CJ?#v`kl={j;u$q#SRxdFnaw;;OW5yi@(=Jt>_C-;P;O`T`{2@()1b3RQfcm+qK-5ca(%pP zc_8sZlYTz*`Z9CleSLk;VLYJZ%QcPC-IPHm@f7lr-VGU^eJRZR03`9xy{#`(^q@jYZ^kN?@upHcUhJMykIBca3s}6; z%aSGUYNGR|*n;xl?5{OJsYgK7D+7?%s^<7}b~%r2(Wox9Br;gUV&_bfkQ#lp3|Pg# zUXGcF4AF6UWX4n|_QTuYj!{j1(Dmi9KVq4bde9E}gS|c5szd9`A>H0O8cw0i7*qR! z`htauFVi5CTjPP#H{MDB5Z&moh<$=k*bS~ot|LFfF6`}yygLHFo5!) 
z8K&6TFggLs$sGvcjV|2i#E>(fHtjiY_L-=01xTPwUuQIY?KDv>#5}@fQ(I)y9T|mMzolV_461$9+CghA!t0#7w_; zh{dO|LK^gm(ibtJ9t z?K3>2?0fbCJtHn-(+3|DU19SS*M3sr6vI6HVFr{PoZe-izwmj3goR4nl+W{y_8G)>wwe|~5K$0`!skM^ zR^JFD-Z$~pKlxD^=bIwA#fbXxldf>AqBF{uehxM29R@HX6X5sDw)pquS7rd>DBE`H4YIFHBlmr9H6o*|=WC_4j z1d$IS`66#)CdpA_Fc}%5DimT#D30xA-31+m*1PBtU2s~W31tWo3P2}E>C(Yx*wIyn zZ%wU)66J6#AhyetvjFh<43vTCqMm2uqH4?iD6P*8LK^=C|C27(K=-#RmZu= zEqz|8oCbuO$GY-*7h7%-4SH-0N}1k&ueQ}L*tr6%c#!XqD#$N^WHy_$X!u)A?mTQ^ z3n3C~mZrVw6BodomMT0xAigYML93~o3j@l9IAn`^LXg@tTG2F4#*DjuGrkMKkwmh| zCG@hIxWlmUR{x`Ehwf(Spc8bl7)uJ>nu%0Q?!`_0(BBp(!-LZWSw|{!Apj{#yF{8C z(%Pbnmx*v(SWf?{jOm(r1f?eMF9ch65!k; znHD^(izDY)`2ma>UgQk79v+0viRI*#t1aU*AkJP|dBQDvqL}A$9glW?JXn`TV;7=6 z^wOANw~rCtAV7fL1?Q^1*in+Y;4Zp~_3fsXPz#m{I8429Ult?`+}Rk96wocfQ^F4> z99jrE7$iEMw{Gu!Om~k0vS8`jKS%LKD4*`Dbd5klBAx(cgL2Z-Mi-Iwl z`FjUcf|rx~sYSeA?;%XKHC@_nhbh-Vl#wKD8V8HWEJrwP}XRB zko*7))Ex(6JtQwCY9)G3S$~J+?ZUQ>B2B9_fjY{mjBhOZ0Aw zm5CMCQh82ze9ij_oOE1VqdP0;_vR#FaE2Dt~fgEGv0y(AX* zSPlE0F}lpv2}?N>2viRs3YvA+1)0Ka%@J- z61_6z5@rDegV&ItoFlt!dO>qin0%Lpa!T@0@LU;6)KZO2WC( zkz2AabvbQBqQ-Gdx*%@Q_b%y23Jt{ma(R%fQ^5^8YO!fW8iq3HENvAxWpK}oR^G)B z4>pq{za(B%a#$-HuQqBhN?09*SjqMgzN0A3_Ulwo@1+Y`>6-OdUJv^(V=V^+0WGDtj9HV_Te+@u!H++!Z2;Wj988vE`DRVVymVHh%20FNt15H&h@UhI$HafM z2Neto6#&@~Jv6WZ0r`^x3>h0gK0m!p(#bFcu{5mQ2PBuVTgm zQ~td1Uuxz;9+K_9Dhzm<93~9$;86+l-k9L1CNUE?!ex!3wjk59?&M_7kdI=J!Ir^# z;*HQ@+UzTx0jePab*2*NBy>+SQ3$y$Q3KHs(UdY#E?DP8q1jsP6(iAzGSP$wCMoZt z=2jApx%&W7_(BT{hu>4RN1MF*w%OPQ~ z{CQ_=qg8`k9V;0b>cTzChI%RAYXqOt&HTI+y9Uf@c$P6f`PMflQdo>s5ByUl`6{Ks zsW6ZK&bIjV#X)2prZDFWOv;3qLaBoL8?5&z6NttS8t`{9sfe*3L>!6H%})C}=R8LME(q=VBzTy#AJ_+U^Bjd?snEbh-IHaPL_0t${9%p9Fd zZ%JyP%C z*Q z376RMS7q{3zJ^`cPr!*%k`v)wea!XA@92V209i$f4fLjlSF@1#s|Q1EqF%P8YQ4-e zW+jz;wn@e5xXz)6DKo?1BIeR99%ioCR0n-Y2?M_>$Q;0?%oFomL5_)TCfI1q3KEQS z=|#AC-)SZ7OunjchF^1b7mW58?PEMNo!fh^by6`Y-9OEEtH+P3!4X2Ov#agI6We~f2Cq2%RR4FO4f8o|PE+JZeA&zge zguhe_~B%?L@^LT31bd*4xWj+%BY$5T1&a!`qDihtxt6b zAEN^)A}QI4jIWE&8s4b_|EJ8AGvBW8L;P z8*WfTq_i?CCf>ZpyuplCK6$8ypJ;LU{uR0=f^aGlBZ$S8905}a4P#d4i71fN8;X0i zyRI|OI9S1OMihkq$~lT(o%#2k*wClG^Uepb7$$Tv^cU73n+&uUUmB^w98MX`(9coW zUj%4W*fOyA2;}9MQIYrA^fB~t_F45&At1GM3Tb!XmhcBi*FGa#u#KclpHiPX{1~32 ztm#U3UTQ^yXIb4U^d!Q0U)3L8qMX^40A04ocqS?fdNNk) z#I1O!cnjH3_E<;4OXAQ>`d}`K``ifc>vpX1-F&)-k|!fB@U6G6XgPxXMva9y91 z^lebayLFjEMne6SHK{NoL|z?2<{4{k{HtB9LNgQI&eV>&j>PH2?7S-%!L(Y*(yf32^*jcj)gI=Iyx0*D5`Ef4 z!~3ne{4NSj?>kgBVa|d}Zs69f-LHt22Sw5z5{bJV?_#0TQE(risElVBoz7`zo7|W?%`?uX<-;)|<-R@XRsw077Zq>Tsf}4F?EDog)Gyh?O^vsw5E2Rn*p6ZKzHbL`gkM$1k zf8Cy$YS`vAIRma)PDq6XyvKmB2|9_h`>F;4~t=XrD1S({Z2Py=Y z77@{rdj0X~e3QWzFBT6lmpu&8YI*F03npQ)w&cKTavzmbV+f#Acx(5;X@luQaW~UfpAlF!1Tram)qx_TH zM&u{wtRA(Fd@}Rwxc0RwLc2WiZHdP{?T*@$d~osMyf0-20xDQ=ch zY@^7k?sI7C`u^3~uwdD+TGz~|)Wx7}0&9k~UsM%p@S+v-mU z5z)4POx0vYRgng9uxSUGl-&+C$(LSqvOfBdDb5@%m63v@dJ|Zun#qlMaKY`BpI>>L zcw!e#SQRgueaeRnHx=h`yEuyYY#DF-_;hUQL{nrUe}bOY<$SHgI%g~K?g5=3UC532 z;_3xdj?s!xMiw=xf||l!$--&SmziQ(3qRI1DTUX^+p}T~NjjB2yYp?vyaJVL!mr78 zy(PiCCl_XV?b+e)Dp|UT?TDGBq1^oW+HH;Tqw&etV|ZQt}i zY*`lMHhi-`EHAtjHWyDxIZDp1+!L920UsHwGUBI`^nTq6Yz(t{L6D&vSBF>!VBn}) zgljbP4m_06Lk6e`yPr-rsHrzBf?orN$Ip9;7uzBvBUn=W>xLaJ1;}8ujI1J z4g|XRN%qt<4ow}AsjR&N&xP*oQb-ca=HAq^pc}7A0ZBzJY?ua82+@1i0mZSGDbAo( zMBDBz0@{OKI^0zE?4ifHcqA^d`cp-2JI7;4EEfr2=|lbh#8L+3E~d|_T3RHcdlze`bBVdiz`l&!7aU0Ar)8%Vv z*5t}~@*Lo0CnWb5E@McS{AowTm*HcpVD{>zzqWa}?9Vm2%%XqTQIP9M8xHLihFeCl z$h5a;Ll;Vzgn{j-sY{Q$eH)WXgTr-Sk+N`p4mzKS8+ZT-C+rifv#JjQ^5|I4%Gje) z5c;j-j2aze5|vZGjzvTTIF3jyWWX+)u?}a)H+W`2ELMemM#2c z{J2hOpnZ4TMKJ?UmdwW|qF$pMp*<4JsRou*%d;kL#B88I!|{_N8d;Omzzr 
z7DXRpS^#^cH2jPl`EBsFS(800JCqV>N``|1848~ehH?dcnH#t%5QI<2hwXt|2&}ka z_5CNkq;%mqT>Bt-zPzm;06IeCj-Dj)D!<&vcl9p5(ZYM#(B=M=oSP&WX(sG0{IiUd z#feUVmcH`O{{rBrb2`0}6oLd714!AHm@3Ny{y8&+!J%5Y;|&1)QU`9^Lme*cojLE- z+Z?WsBbpPc*$Z3!sze>fSdh%^rV$q9PdqigtYXN zH?q+X+NYDsJq|lY#9b*JN0wd=3Pa_NYNGKjQ+#`8R3Uzm_(p^ByK;sRJRc!0sJ3Dq zq(iXv>NpK4NYD8KtIr50Jyb>(dN)xE)BoyrXoR$~%?$&xLBODMI) zS-~QldF4O=k--+kC5)oXumVSiXT<~dDLq_LCWcIM>^iNHPRdsD_9%8oz=^gJCWuP_ zhrFHSPR+}sGvp#u*9;h-0ESw4=s|bCBaK=M8rk|Jq zU9`iD+BuSt?;>uwUy%X}7eUAarG#ZlU{{d4I7B3*ht>BBZ|F#Fl(S&_+ou!bepR|P zqf5q`tH;^d9ajhJ6T-&Axs1{kQ%aDQ-%vz}0^Gb@fljuwG?#Vjo>cPzQ)=PWaxlv% zus;tov_VY$f)OTwL5h7W*v~*2B^kq1=>u7Izwu{HJ1M)#qyo#2v`Qe|<`hQW+x0k?509(Or67b=V#*Y2LW zALgDkdMA(dEwFp9gujwDq>{Fm=t6f#uR+h0o1P;+0Q}xJ*EjsCH%veO9mwgbAD9VxgO)k&KFx~LVY>DRX& zW)a8w5dgt*jbT;f`$QpWRhrIeX_~H2zStJDMlxCfWS&Ccq@3*xw=Dv?{s8S5Wf8tr z#z#bmylX3@C=-Tt2BLg2(?hC&Md@zK5p#QI#mfaMJ2nIR76M24^nXur%M zC)Gx-i0kx`$wMTYuCh59A`B8K$^_I*lbjX2IYPHl{!-X)U+ zeWbZmHS5Ip9|}zLh`_-a>=TX(Dd(NA-~6=d`CvX0D7QPCgEK$kDb5AZ$L54*cI99U zN9jYgOahh1Sae;M639BwhET(upKnu?L~%eggsQf{q?uVjK_vR8wU#|~I6lX>r_SVPXc!WXqRX=8b7#|b16y))ZRzAi?NIS zdz=&g2%W4K)nPfm)HMYQ&{#nM@iLyFR?t$`|I0SNw>fOF zP|yW{`$RFWwt~xyHpNN4~%CW?mnA!i$MJ}g))oQaLb@&SXWeq%U;%8@*hGp^oIAs_NWP+FJCMr0rJ6jQMA zg>&72=tmqvJoqB=cAvlZ|3}wb2F0~STcfx;0YY$hcZcBauEE{i-QC^Y9YSyq!8N$M zH}0=ecs)qPlDEUN+a9V~sHu5ugvcV-C3IU^?RquLY#olZ{hrb>Z8U@3lY)K^3TB!sOv-FjYrbAOk+i~ zjACxPZ_qR+%@2^mC~q*aTYZ}fiqC^5H-9WDtlj@iPl!cBoZy`+rf?>%{j2Omq}J=k zr3Dsn2C!!7H2OZi1kf)u9y%IKLZ*e%TI3HfpwdR_I+gbYp9qf{8d;1*p0}eEycYQF|4V z2q}*_`~#$6C!i!}fmr7?TA_36M@wyC5bWirQ2bT|!dpVxCrXcGK9jLzLU%g9*T_Kd zJjIewR!TV_hX+Otg8VJ4f1nsQx9>9<3V)%9w#H1qAPDl6jDR*yjd!xZSg8)lGr@i^ zf5FFN1DuEeVf8kKNsL=)CnY!~Rke8mp#emnFzA3-GyetCd^{3a0CO*rNq7xIwaoP8 zd_Vz697-obp~&Y*v?^(_8SWz<`(lPiudpS`0&CRaO87g$d$<0slp&1wzU&Tbj96AN zgMMmwDZWcixa1TIi`PHsW2_)Is)6-aC_@mFL^!5eqQiK^QSfl*=&<01Ar!9O_ONlI zRRX}kY)ICqpNt6hrDTwIEcz}-)zA4~8uFRts31WcdQio}P)tI>Smb?2*wIrN+?pc> zQ^f?i5k>raU(4ij91+Hn;V<{Z_0l)|Q8U>AL7$h-bIry|Yew1kjGPlAq74&3jo6qG z0vU|?jQ9s5xx~=+(1Ur!^w5ymQJ&#)A%n95K#@gw#TikEg3|W}pf?CcziBJLK$1Qo zYf5|sQaMh;e2+LA5DvU2%-hajH9$~APRGH<*Xhd&L7goUpP`Quhb};N2PP$EB)6a} z)kzy0OnW(stl8a>=$$v+{XGlVU9%&j+Kh&%B+vAUnC=4v%veERe8k>{(-y=tZoxSf z2%cd&Xhd)Yo>AklH`p*Y-7gcncGXJt|4H1%^Iw6icF!y}7H-bv-T;uDT&vp=gK__0 znzsEW$3k`T`u4e9Ev%gS1ZZhCbap5ybcvbHwIuS4Dt+GdXTZ2W;*_cLH$qcp$rtO! 
zD8_zK6C{Qw6_A^DiCqXDOFgVXpqU4r2VaO2NV9@+W!VrX=O)Y`2 zG~4q+d2{%*7gykylnS4*P*jSIhVXiXsOf2SCDoibQ-uJ)4KGZH5Px z<3Y8iq@YAl1cfH2v?~Q6)~Q&6Few-L5WX@R1R=Rl{45cK28w)(elBudNP7~7wPE%g zcMIbf%)W|rgzP>5Bd9GglB-ic%=s-iQ1oRBwYm?)P&E1;ReV(eqA#J_C=jX6KyIBf zrTFlz$t|M!3Zk_+fcKW-t&ayz|Fc?on(K7gno%zkG9s^YZ`^ZVF5n3IY6~v_YlGtx zE%1~=`bUg8@+*4SHORPo*|MHUPIJx!Vs>xvYcl~jyiF03J+$41gAU(zk{e`hIf33E z%^a{;@BKXDZe--f92;-vSu=F#49FO0B%ri zaAxvKinmK`RDhavnQ(F-&#m^3jb6h$>^y8oo;0d8Y_x|5w6@vrM`k8i4htjm1`1iv zOvX*dgY8ZXc3_dCE0Tk%%MkYPY=oeS^c7-{5v|jVb;Q`2ZCM<`?2@WR-HhR1iC+>L z#0o`(W|QaY(*e>6-iNkL_d2$l_n?0c#u;CU<|LU4c$zhDN+~Fg$o& zh|b^imb^+7++g0qyg|@5CWYe*d$5jQu%m#7=OiyP30H`U#oadKzWw=(Xth(LYg ziwq{fFuj!sn6urmJ1f#`ghn--xt|Baz3Mr1gV+NwUI)N0L2UAK<{>{iDT;*;1~|5F zk)IyDKyX$De_*nyXyp8%kp( zhovC{CwoA^Tyrw%-HGh)1?tLWzE&XYf>ZDPS zm+e^D1e1_vRJ$tL!J$)^s9xgOb)w!Irn>SS@8r29zBP8%Q6l8|x|3|(A~RJXVZYzXWTzY?wx%@~@%G(qHD-auj7N@f?Z(&E<^R)|y#{RgjF zC-LX^&+Q!~!(aksuImOm;&kr8Fv1!ujq1J65|XFP44_MELF*yNq+h$zA{7Yl?8LqI zi)0POsRUn0@nv__^P_9aY&i%g*+e6I6Pfso`1$y9WOIlc*pT?QIXlzt;2Y68ZbYN% zJblAq+zrMi-wB05oT79Ahe-9V{-};MOc-|jwTD{8d*RNgj$$(oG(oe!<^+N^|Ip8X zPE3E`H0?M}fxm!RY;Pus!WXFU*D{zIF25(o9mmi%h!^+<->oss5}>iuO!NQS!OR3J zK#56kYjMJtSBKVy!D~6B+&z0&OX_94U5R&%x8)yNTCCu*I0!NfCjOPPjF|OxV^e72 zkk|h)ReIB?Es5XyIsLsFoVlG5~vefa&F{jXj!842t&;cw{769kxO^#6ka z)?X7VT!A83`NM0|G2d;Cs%(zx1raZ4rAjIvj_#2CTGaBU@b6btnFRo|XnvybjVpkE5K}A#my!g4^nM!4WU5lzRBT zreZ*LW$%E^eZsbGY=Ual5+1VNzohqn#$@jg;u$G9S92YJwuN7P`J6SOnpm z`d?ob&;Yn+kISIe<*(GKZWt7Z_MWK!ji{g&s{L`c8mDNfd>($czhalC%l1f`U~T&j zoW)YDu6yehP@ygH@%|><8RB5S1Qa{XjwfLKj|UMam{nviuy(r{y#%s9#*(R&`#~1&-2eQ#Vk}Tzv^gO;vCx&;Yys#j|?3_tM|{r6_z$RY%Ujd;_@`#$GXYcz}4eg;%jt1O@W zj{yfK*eO2fga7D7kXY7IA`U&Th=StDLPhI^*D%-rvNC~bpux?F>b^h>M_TIhF5%ei zQt49AAXj@Wl#`}<8!m^;44jn!%ubQTYYeuA_cAUxtX|enRR^2eYA8@u2E?+ z?QTzmkjW*Ub(el?wFDKTwkaLgCR8$;OPp6}?!38cI`$6j5(6c`$Y2pKb~H(_ge;~i zXAe;%m&aH(Jq=Xy$o_YLdxwL2sg1AT*5L}l8VsMnf(@AjHB0h->)l`WD3$oWd4I>0 z|InshswLS+HMc0-|0VjKTj1yu6$qcr!$2fnrAEc6%WY^@j-*xR+UGf(Hz)GNnq27{ zBHMUKlt?Q5g6BzTkP*Yczx|L;3iHQRi)-U;JON1MbiPny)=a~%$Szi8)M&HSwODI! zfGc22{wg^}7nAP<9hb1;)=ij!d>dbb?<1^sJyDA`3&+OX>2#(jDgAWwMg&m4R8JaC zA+^6mUuaQImC3iJOuY_>j?MwdYi0J>&k_K(zaNs(od*4yOw-(qmH*OK8lYSj z-u%7(YOQUZiutUlf3AdNNK!qr-d+JEC!hD@-bsiGF4jwZ)p3ahvg}&p->C|;#q#%H zcihh|b5{87{0WT#9LjCk1Z&waiq3es(c}*p((H~gcq+McgID941lg8L!(w|v?KNLT zRu{_+9$mv-`PTm0$$*sN*5u}6yEa1aIvsrO?PHyDZHdkPx& z2d^1oSbDsFh>*+4cpvKEaT=d}PQd4W%6t4V+9V%Up9v!SM1tySayZEHflNSN82be< zxEn%);rBrY_+3OS9JY7xq2}@AR5=rXX=SN8aR5%jy ze8*&&+-mthC;QSKrV;=SA+PA?if_9uOJ`ILZxR?;kSCx3 zzZJwy-z=Ej@o5TtbdMVs#ii}eI0LBBVro9;#7V*Smv!y`OT_-BY<^+9Qd$_5kjo#ppKY>bm_Xs5gM%Wh?#<97(# z)XD$o`KQoeC8qwrfiy(tz=s+cby#kv1DOi!Wm|*PT!J! zY2(dQVqLpM$tUTEPd{2!gTn52x-3uA%aQ=ps-kk8%muux>QnxkrQtg6Uc~2#vN zRNti$F~vsKsU4p~DvQgLq}IVzXMU<#((-$ryN5{J=!)`xVQBoAiAX=K#j;pdHJ0S75^ngRe@4gzgeWULb+-%ijXFaHYG#f zkA#GwixoOv-uAmt_j^y7ac(V$_d_aNnRRCQ)E`CJv=1%q^4(hkanDDAp7(?*RI4;C zef@c1#8}(7Oacx*);d3NqeL4^YmboeENP0_NYpJJ8SpG$B;xb5ozK@8aB7wqq#uG< z1S;Z>YFLn3x!($Y)~)MfE!+7eNV}jG$bX1r$}L6e+~rHWL5*d7;fOY!z&v)8*E5P3 z60OUkyj?ok8Nozk`)HsxTd)V|npZYP-p?C~6v`t*pjpKuVz$a1-I@|z$;px!`LHHa z{Sk}nojV#!y~Je9$}M3$r$t0}5?z#AkF;^g&l%OQ zp5Br6m+YzA-%JL$Yv;I@=Kc0%(v)5QLy2s+tvw{9H4v+u2Wm-||Ij*_8RcXzf#(@? 
z4TkZxm}__2SW0D5Cd+Nt2V~TZjkbKt2f;|p2Yoz_V^4SM-Lu;<@3TOruklBeGdW&C zopT0(T|%Jw%`-7Yvtrcy4Y4${ibj>o2kt~~@zZN#;p0I&fnn=Ce}Xc-?D56j7~H3ztT%>*t;5e!-`3S$6b<#-{px zrv03!yzuGUS1uuU|Cj9FGO>%+$0H#Rgreh>4t8^tDwhBxE$SvoDr z=KjN^$29oDI*siB>b%!y>HnyOg`%gJQ3kLJ9Alk);#M*7@XBeoD`EbTG^S%-H6d#I zl$Yz_{9{yePq~QKd6dZVNZsi9BpMj!)n_H2`P#x?dDLUz%Ph0Ux_nf&Ormz<<6Tnw z zOtlwingcmfMoMT@T=j_^;{o^Y4;}gWHr6G>Ag*nC1!AG_jX!obxmNV(Uf^Wm+19jU z=*FOLGgIF-U~CTS2XE37R%)q#IC%}X9`0Qa5)v$FVsi>ut;0+ zX-D>YVk<`4dL;7*D8`nsa?yC*i!(g53=xa zdg5{D9=Eb=)a4ESHQ@3O@`Gx%=`^o*?C~cON^&t!=8(8O2_uj!!&umrIUChj~@q6BU!_RF^%pG?`^Z z(N)#)3A}h6@-aLr)z!(&^@usFZENY^|E&ZBmqiU4;2P)% zymZO+)+H@7_kRpd2r=)^R_^~Q_HG5KQBgK~TOQoj0eVU57C3h8dRTroFG#XU;W^wa zWtLSXir!hMB!}xy>a`rFtV6f=X+4bun=h5BLd1bYJa1LMjQt^M+armHzOvC2t0EuMg!Z$j1A=hIw1|8)Jlap+c73}JByb`F#_;SUt}r*2rp2BQv?3gax^abugATG~H4;^=vbrG$c>4E2Z0 za!*5Y^N>C-84g{kL=hd4cdROUa4FV0@xnz4$MKf>Du&dk5Nks zAkk9#n;usRD(lKvXz1@fQ#hi_shNaylBkrr7pGDGe`(bjalQSr8Gm7h`WaZ z@sw3vd4fsY+N*Rl$_9lVy11m-8Zk9wG^H7bReMsbj*4V*ffP zSVp`SS-mJwQOD|K+iM}M74z7&jGmUVb<^DUuYfb)4mvo+LwJfFd|mp ztQ%WS@$_E+k7%t33d%09@%PP)*AVS*z1z1UR=3Q7S8Z4pM})U&ilHyVEeTDKz-d5T ze^cs*X^5%4((%(Rr2W|ZThdt**Nb!!#9A+ET+H~mSKaB8wfOfq>zuMJu{nR+Nz#6& zkv5^tyC(`LC z_i`%iFJmB|l0MO`(=Utem&s)^jX$(&PYH91o7ljo)L16hBs57?l}8edFzg#xhbM7H3-{h{6#NV{lLL~-0dT7dbV#cjYHXaK^U%Cx8-#?148Rlup# z-SS~BjW${PP74r+y)o$Z+6Hq^# zRm80)aw)w$2q{_hZ8mo3Jb4cpo0^8w;XD!Z8gvN?PAw&R| zd9Y>^ncP)S5L!QM2MnSF!Nf{b^|iSTL+3IPp1hGOE6t|3P+Qd!GY) zYp_mJV>3_TIVQvYA{e2ef)&_=SH3jDV``ZQkwA|>9607Ypw4FvOF`Jp0~JCU?I41^~rw&lh%JXls}jg3{3XicbEv? zi2s1F`cRnUF&Tp00icSG=8q3DzT>Sl=S@MUZ<^PAcZUbTZqRJW%TgLP#|5eEiSZAM zS}h`{EL>`uB{o&RkgLB~@&EjxYyu**&=62C{B6r+@_F{9e{4?g=J!rKcyT&> z*rkzHI<$U=Bn0U#a5rWq!K>5$gazRIKOgq;&%>Z)fVh(OV<#!J>U^cD^pw8_HFoxs zK`{lTI&EE;;%^r9^~i7dXfp;q{||38+41E*%>LWv_aTP6m%O{x1Ala!ioZXC4u~|~&H?R{ z|5eJHK>xw%r(dh9X|dU@(-A&13jXoeM*JCXI#8#yox?-rxBZKY*~;PG>3iht)(`w= z#b_lXDw8_in+191qooSK&A6Y*5An@-NvDG{8SVM-zOo$BsP|VIeL%|1d@mX=pJS0{ zt2I$o>Ty_2Q$_6;+xyN(xTk}mGP?4?CD)~JXJ7>?%)w7I`y4pvWAR?6uaQz>Wk4x0 zmc%`kU77cS%dWDQk#_qFN4%o3k=0@>IY6KS^k994s)+Bee%=6F&_HO>@%us0)cz)! 
z^xj#U_-M*yoTPM>-i8e{#^!hKXp9a30fcUC*x}lxU-CNzp3%&9$KAHL~1%gzc(8%EJzxY8x-B+wQ(p~zU@pFtCZr)C=$ zcv?AGAB`3u9c=`Lex6wTm(HPrm2j`NUHWOQQrC_q)(CV9k1;G{P*uS!4~IrqKh2tt z>nc55*3lkob-mkttnXhJJ)~tJZAf!pY{>BERaW6zNzkOxr=VixG0gSgFcj5(*Adh6 zO@LFescL-~kfx82bh#ja!Yr(oT?Fyq}-mCPT)k^hM;0|YZQCCPW7w^?}aCc z#&RjAfMD!Rr*%|1uKEz0=@2_wFRiIY&|IpT6m&^YAm;H>CAc=!24|NKMNSN~2o9CA50`@#j*b*d?qHzvTpxK!}E*mHCaL`%-pb{b_SocQjj2 zQ<<8IS;D4O)bg<5B{q%{fg(c8Y%FPf=r`$e)43rcf(8qF_{e*C!~5|4^r`!L(`& zXz{Cd*J{BDWI>0OHe@9QoTsqdB`((v`SM|z((_-m zzEh$S5kds}ql}7yl`f~dRcapk*yE+R#foNEkR21qnA_tcqbtcMoXW~kf9LY#Y6JvA z4_gZ$dBH1JpD!@ar7$)$i$r<&5#JujvgW+QTT7>v$^~3>X}!NgBl$21Q6bafo_*8o z#jp8z#c4x67q1e)?3<^aQvQY5xf7W**V7jtmD}#N^O^nbG?JeOiOSa6!qCtVFyH*@ z??UJ!>2Zb1wgwa}v6K^;RVZatgOv;SgWH~x3r_Q&j6Z_X#DQI`mfr})JRq}3atR{d z_ulu)NKHv&TPors$AY?)voM-avH`_|JW>L&wRn`9aTP)zf!rXq^G^lW9f@>i6-CM^ z)t?u^M75RqJi&}g74(JzA_%~rG84c8pkBz5cFK4)(R2HgW6gNJut7#DbEnRk`NO~W zX2huYDG}Be7UH=izuqT&-DRJ?2j4oN*VTEa4xN@kU1|V#iM3TWvqW>T<#kiWPou)4 z!jr2jMUtki4@&ipCT9O2LF(Kyni3id8{=vKhbi#-Zy+)th({~y7X$Cww+W(+iaMOI zNahSh->X+>wV!iQSMh=|ychUbye%xB;{09*ziW(4J| z&k4TNC3%mY%;V?QGpAdq);Mc8hllTxTJsVk!^4KFtu|7y2~x~HC>RmkE&=%Sa)_nu z9@pIKB(|qLg{~L+ro79Ad`Q4`_%x8deSDf>}r{$CieCO62Ocq+^EA=cziADk=Xvt&)?m7gM9 zKx4-LT_a_|N_`x@_;JyXk8IaWr%$|_@wqhLP3f0pVlwgxYRi^b=!4|7=5=BJ+}4)c z0cromi(x^LE!0q>1gZ&9ACf>T zRWA#Poocal`G8u+u{r#CAx9#s$d$^>M{7a@xg3oCK*kf} zg`y^1{*&Zs&=cq2kfT{T9cdFDkd1jFBZqE@xBeNYhha9C%CDy3}9 zwEEw1@N*VZRO^W}jg)>!)zy)sg+Amb54PG9Wq!`*zPG6URpBL`iJC7 zwk)|-LP^+o6T+Ot(YEX{KCV#dkKJvqf%uEQL!bvN5Fm2`Z(J(H>dpbh(>ObrsI^t9 zFDpFZKwu^GeBt{#0hEmOLUuKBG>=t;o9v?`;gbbY)e z_(t7~-~d{Ghf?psBa336l|6qk1BV`*5csZHxYVF9U@PJi_PZDsPHsf5E{|+HvIY+> zt@yDI*zmdLM&E2^&pIi30VagMp2N9-VG8P0?YNx;N?#jO!Wyt@-x4f!Z<2ZByMP z=VK+8yaeP6IG+cW3^|8OLx?;{!=dh$V1> zvP?;j+%6f@w6*PPX#S_yYnFj3hVrIfB(j%T2d>eppGiyBm0Ym^9|^c9mmd|KHuA&U zC~D1vzd`eSq@`&(EyzPn#hYVgyumcRZ^^Ak<+FyI8m)d)XW*-;OVmDD|6BA@Iu`q%khP|$HBodn(yUMy7=JJdO}JdcBUn$dsHObr*QB(+j5e-GL* z%IE>nNuw(`b)Uz6fL(d#CD4Yt7hK=oMcwkij%_(D0>l?I4>CZ+d|~Sc`qVPIp8ZB> zpi_@s(X)K`O*e1(B|U4l_uT@ZrVDrzytbCuzS`x^R~EKxsu^`dLtNvDEp^T9-%Uz} zJ=?T^bYU_Nb9ueSqWhMA6-RO6SX#=GfnX`gj{!BKByJg1gqRHTlbCG%%6O#xxY>9% zZslH~QH<6U z7}mp!AoiHk+SSN4Vja2s43#eaY_TxDQhR+!c=(@iEC{@y2DAtw6VpIhfd5DFY##ZP=a)NTECDZgd z#9}d6PnRajjKjKu1WnGRUa}H=DJ+er{wf$H%g&YH>nV#_qK7%QuKSq0USOZgZlebF zx5+U?G*{!F`WdDzecyBrt(GbfDvf^bbad1UMcnIUBd7gx*>bz>6f&z6hCJycqq`LS z%+$zs#Cy%HT*m`qXRFkfI%yg73k~(3pazNudck8a(8pWhBNZb}n$e}%u&m7v`(Oq} zQfR**0D}R)T!g~rtL1geD+*A{YJjp~Yy0WYsm34~{d0bj6`9SE^xD|+v;2$Hmn4sZ z3Zd2U*vefaBNgo_8(-XY=v*G?EHp9$S#l5D;k!fEoYmWA&M&QsYy1st#4j!eRUtHg z9_VEKB%xX5nz+huyO|NefcUyx=CV9Hf0*BzIiQ)0LDZ6j*a2<}?ShGpe60HYt;Afa z-G{Xr`~i|xU2S_FdQ|${cfXX?qakhORHcJrjbp1D+mwTE3f>;|2hNtUUKxody&1!AWEcrkxpJ^ zNvzP{ef^^{_vUS#Z!~fX`b&8dJ=qRFSF~2mDNHfPDvq9U!WG#>GN%H?7Me@uy#zF2 z*BNQL@H+c=j-9+k)8+tI}3d;aj%=mjk)U3v!;>8De*?EI*C}Xuni6Ahwse z1MAN+#s*@`3>+ zx&pwwawd2SL&lqcY7*9iA4HEW5h!>n>D8+m_I<^crMvz&HQG|8AcPB7f5VwPcra=j z`5MBa4^x(gBz5p4DfMreg#xB2>7S#7MVI=7t9h+ZY<@5&{r4mH7TC|#wO%dyDjRYz zz?c40mi=4s{{6?@140D!8t_~C=btM0|Mv|&9I!%viECA8(L}E!>tw=*s_-bnhv|yZ z9PmBrXDT#we5jrzid+EeP2c^@Yy?6nJl%+9fhNMh61!j?Z^Ki+s;R^_Jeu^a-BQKZF*?c@_+VSjjocJfH7Nxcrh z1ZVS=G$L?{yfJUo3XA;P7mYxZPZ~n^sv5xlFSAp%98J+(zty;c;mfK8ObyKxvoY^Z zpaHLnpuw1dOOIQBy2=38RY4nvoZQoM#T3hne;XkB`)OR!rf;GtCTw%rLMSNTY_!qi zG_@TlZssj2{yK&1SE!+1Nv#tn2)D$zAz3rI8-6B?;F0`^u8ZzhguBP2A^!&C&My!b zq5Q$Sw>=I!i(S?}M27@rvxJ=Y5zJq0bw<8F^$4KWLblhvBR|OH&f7MfS=(AbPRZhI zcbj@-dWg-iV?p!drJ3Ts36mJ5lcyVwgo8fkN9)uVZk>g1Sgu~9^Klag-1kJ6oohVI zQ$wW*@iqLABGSo9>^o;K2W!Ln2zaE;wQ=9`Ku;6EuF|`7-n^HH{;35iF4iuo^JiN1 
[GIT binary patch data (base85-encoded literal payload) omitted — not human-readable]
z1^yl(;Hdrg-K{^Mv45}9u0PoU*wURz{@#J%yj=60MO7C4CS^cdnS6H-gl9Kuf8@=K zEN|gS2C^)OT37BiqS#iZ86XPpZKo`kTdp+6v1fCiHbGC?r=F**$+)(!KDfxAwTJso4Z+xKXmh$w-9}FH*O%Cf^6TiZgP zq84pi6#zw7ysRydwgc%WGA#5ROoatJQ`KK?McGQ9^uZR;Pv$yBvZ%#^ zmmO!+OkI~5u<~CzwZF3ey`?{YE#8q`Zd0G7ucTI!TE;ur@OIx+>%Y;}H zS8e_=Yvi$&$r8ZVNni{;TXzHq;!OJ`pp+p2hWyXNG+J6ti5dyghX665$hWI zl52Rf+2|rGS{urLry>@aE&FUenve}m6?3NmhX4Tj&9lK#qSullvD;~kS8C^nLD80G84y;b1f*!8^IlGTaLN*irQrX&kuK=f zz+P&~HDM86SwZRai{m+Yr&{l#o>HBbb&CPn^}+yaDKtB%p3t zf&jEh=fbg${HK}*s_f_13EQb2w#3eFF?=JFy0xb008$G~JR1a_W65@00Ym91;DjO~=(@T*%1RH06j7ci4XVd=7S~Z6;Q~NVaZpQz8z8U$@bVuFXh{^ zbfMSSZwwLy7{`<<0OcOm+>Ic!3Qlj)Re+tL)@f^XN2Ch0Urx&Hq)hn%=UqH7mE&8K zWq0{qe*n}?S3G&SqRE28J)iSwi-06AF)x3hJpQa5 za0oDWHntUc-^iUm(fm?yU6Wi49ITn^a-`)ca~q{l&A;wuWJAUzNa;QCQ676dveKzI z{{`aF#Q2iqT4ta`DtuY;&SMLmckkdYWNo> zn|IvCUp5`cUM%nB51P*Ql8yTQ9#286(smQ{>DR3<+2MNLiRdBll>m~R|0rewr;Uu2 z@L3R^O)XFvx*CflG_JF{Ujyl6skm0>5cczCWSj| z^i^QcsWmMD{BZ99`m+~E-m!4h0z}DyijHD~F%-R&UPs38UFuasSXFHh*$E&OR%WRs z{FR-ji|fS9d|vr5{bH$VcgG5lT@UYJ2~2N|fyx_eIzX{RA#@5|dvCHxZX&{k=KGZ) zx*A{rwFasCH#$G7x$+X7?n8yMw?dz1uzRkkPKWb9&7ULdhtmP?qsK% zxc}_p7d2}EbKVz}0&_4mNolaSD8Y*WPc^f^=Hu#KEK!2jM|)gu8fQq^%3(lwT^PWB zA+=`+?gfv)Agi;6eiZ7)iFZHnR`T$ewu`w$S>-Ey8QENFfiYC4!-Z=WR<6qQ9NBYS zDNCE!t%h6FjzcNnQjl|3@ck^ne(0)6?ad_eDG-+h*1$%Z1J*zgT|$ql@VfFwaH6D= zbqC6ma4BKX&DbC&nI5iLeLR59udF`pxJSuW*VR8`6z5gv_$(lGtATEI@#5e`L7VWy zr5b}qS<=H!=M&IU5>Pjk$+V;j!y23yc@ zqHR~-v?+g6)fRfXVK?6jKkI~q^zvUF1~(p3;&(jUWCvhouX>{BR>s(~(>H%sLo00-WM~6w$?@`p`oQ|m`Uz5ll`?ZPA`9-# zh=Os)h0^cZc|WSJ<6M3gvHI3mtk+xt$^<6v%0&js$bobhxlb@~NzXVtAQ;N#I|k6+ zM@th-*xuH|60pLAv!}K8);(AVxu8Sc17F?YQ1po3S3}0~G+O73{WzAd$WFODD(MOO zO7#kr@C*hhcg=15g;3hz?%?k*E|D+U_OK^tCk9AbC)g&&x--s7-fplGV+6W_R68xr zJg)2LAt@&$?KOIS``+E?Yk2?bm3l6LY|sJ(nUj(|`Lz-T*j!SK-|oQnv$R+=#7~5r zgq!7_a!xrUVp<-U)2C0~&3|(ozS?*+Z#AAg8FgX4?vKPzLI#X7AsB?(<7`+J*ZZ%z zt`xwK&+FAd)Llj?AM=`G1yZdeS`8fy-hQ8!a{(*Vte)t?L^_%1+1-p@e%A8Ew7jDt z^rr$sq9Xx@M96@=%Fe;S0X=hnnt)cRYCo(Da3rNs*=T5|WinxigVPHVdG3E7&$Tt{;C0eR2*oW-d+ZD~I%Jb}(X^A){t1W0Y4|+ZiWQ@@Oj8ydI9w zX4MFZ3+c!H%fOe9o1GS}_)D#PALvs&L$or8dpnA@q)|fyEY_tK72`Q&alZOXpsfi# zwg47-YWS}2E?U&P4bK4yQ>GXM)8Aa&WRY)9&Rtxix9SX1Eb zTDT3VBjvI*y{ZOo*O7|s*8He*s#+T5n)SjJAKhnV3W|zli8g5=e!!xrP7bdj!Xb)= zW8p!Ac!W#J05pWn#Wr!|O@2)d*1J4^v<6IJ`0JXuColp4%h)@>V<1$P7tkR-3gu zsH|Grb9IgXio<5%-65(vb(+mR$(4iw9V=BhjvPDMN>GVk1*J5ooCGQ`nId7-swrYr zfUfaIL)De_XHa{pmeUx)!XK&sY#01z8oVglwF{NM_G;iWc%1l2B}>FiqXF{hZ7{XA zl==KrqYua^gDy!q)MA92BIUohSxRmro(r#*#XQ(p>&k~G1j8Li!u=XZ#G`|8$e2tv zQIEV9X{|m!&+CKEc0SSp#IhYz%hHCDl9m>e4Ktb^pa*&j@77~CDZKs3s{cph%E8B@ z2$@b@v$TC-J>@x1c#hIJ92Loi4{_X&=>l(r?aSA!TouakhYW^*5%6SipKPXwpb^e}BCJSkay*M6YkCL8nL)g$@U2hh4d+H}w$VjIe6y zuVc$g_g?H8Cny=H5Y;!~*3-N)IOdiJ_PcT2w!vO89YwaNmhgXz*)_bWG`MuJcD}t0 zI0b0Ioqts#Q6U)t{vW#D0xZgJZyTLqfT2S`x}_TghAsh-Mna@Z zKtVw1Zj_W3si8Xrq@=r~L%O?r=i%{L-J|@2yhO!YMh6Ms02JTMy1Qs_s-erlzwE_mS+WR(GmN zVfdU$GD)oKW?0JYN}66nBu^+(oMuwb9YWBK;P%4qDDL!oY5i{CW-=l;n(G)=9)RxAVn{`kI?bXrKog3Y>d0 z@`kT><&5nA#yyu)y)pQ4V5G26TXB$!Jj8?LZ+C4T{*}3?k5bsN{42pESf%%C(QgZP z-U5sD6a|u^P5`EC$XRWx$J{B#`}~rY?Y!9cvs%uNCd<&#IlR%L-984k>-TbY%dk_) zKSvxX-C>ra>q?-D<#hL*f_K|~T7Dl{eiB)0|5Y$j2J%sVm6dy}q=+5;HH(^Fc2IKs z{wIOi|CTvJ`egi%vIv-+H-v*Q|G z{^r5Bpsfze8(y%gsNd`2&Ghwbx$G@;BBr0~x2V$wiYS)RrzM74G!w0CJ@(>{>M;(h zm>6h;L==IqD91~Fm+qgc%#9N~6|G8*$l)*2y2?JM5e5ipnQZ36tGNAAC8Y^&1l<4| zEiInE?=?f8PMnyEsH`xB_ZJH1MHybR9uSL`iM$9~+ZcLBEaso#3$@Y>&QB*H)IZ9~ z%C7Uynu{X#9Muez>6NE5#DklUss^r(p}Ie<{Nm=E1gB5WHiWux9zHK1-O74xyht%= z!zN4_lKc&LZF-*w$iOxk2xH$27-P%n<*ICbpt*iB2m#v_M>{4*8Q)3&enBejnIh9h zTUSLL7cv-iB<|LN0bR)a!_a%$cE!zcj)itje=Cpj;zYcdJMQS?-d; 
z`cZaFW2W0Ew>2D#b5K%`E{vN7x@yDuwp^3HC#<5%EY4{t)FS75f>~3&kJ!47P;FnN zMwb7N7ynaY{r5y$6^#a$Mv!J}^95RYuhw$Pt!=o-lx<{kWeXuzu~QQ_mml>QtKntUcrs$E0X~ZzWF-om^C}gU;h@91O;h)h&VGx4C9T0L~*3c;ex?gF;uZAm@4io zIRRQ6cd?C`tUpq~*XJP|pT00nVgZ4fN^-vzLFrJka)t;n!|p4$8|{s6`x zN{b_1u*?k5083;iK0A@Go6ZHJ7T;FH2XV05kRDeD6#wa%{mF>qjY}y!&W`ohJ9_NJ zS5IHT+E_b)?e1BG5-W{p-6elm3#oUIrVWyalRChmTAOb7qu*|^L}b`72O%QX!RHtk zwQkAs>d3e&IMgwyw+e>a5=UW9I1T&&(yC|Ojh%_@%Xh0xq2PV$c|*{YIk=2{5>yf4 zjs&TOsF#(B!l{H{1hTjU7GDrN{qnsaYKGYYYU!PZed~`~LUy-rSemC6KE$I-ZG5P= za7yFzIe9VQ&F;EP*O_e-A!fcD*URg%X3wJQ#x#9ZPG2D_<}>Qb(j0P9cH+kPwsim9 zbQ_9>^X1Nk+r%hCh54isi($2Y@tgWL&VSt!ZCW=X$zwx{ta>#x>faBBqHv#;yd3|m zf@}xRh8tQ#5jh+Il&_G{!l&CHCdbkblBt;~pB93B&}FD4(`6cto_Z>fTnpY-cwwh? zJ;CUt*pAj=y8TwVnJ$`v)e&Jg&SQ(bPOfco77|^72d6ZVMfD@!kGW7XLK&>0PrDt# zJxXSB8Y1v?+pN3zw;zH(?>+TDaB9njobd-X8UdGkGf#X!i7-&CUUe|HUlbue;y3NE zHb;Qk0)t)F-^uijG@37Q1?nR1Tlrors{Y|1TK& z@s0PA#=o^Bx8|t}$bcG?-Tlkk?M|8kyX2F$uO%tN{f3#?_<5w2I>U$Q^WPqn?^M!Y zVu&HUyiSc$+f>-L^9Ot;*C;anJ{7XXx7J;b{{=h)WqkW0$O%=2ruVat^MBKD7e#<1 zggtxNQk#K@S`^<|NrB*6l;C8l`-6@-8ZYl9=#!h8wUjT(uUk-&R4-RmH`vK#(io>Q zKfT5fG_k>oPBiz9aeEnjd3!YXSJa^hGekhHkJ4{n1>7pO{9EiKSkvQ-WBHmh-!|$n zXS?5Etw*Q6k1DE>!R4IfyU2TTM4#9%{gDoS={d1f^0C2WK~*aEE!y9s=8%#5f+2mq zSZ^XtL;6SEs)1YVl>OBopA(gXxf40ck9#z+EL5dOE3CAoMa%EgQ@*H>q)n%710wKm zdLQd7Jc4Nw#Qp{CuZ6Z00D!LiP5s$SQtg`odHk(qsVZYu40f6O6mwD;lGo|e|quF1dzaM}3m|U*kI)F{al;)agP<9P0jFZC$TuLc6ehlU+ zC-(uxzeK&5i-;nd=%4CTRn}v23qS$e4Ny}Fkl;M%H!29LJR)Wku1kK-*ql?;xMa_J z2%qT_D^1r@5wi;fr+HoIl5!ijbh*x#-r2Z>s9pR(RAO0sRqx$3yTn;n?%eyWl4XB~ z5Aq%M|HA%|<89Z68F3CohShj{(yOwf1r*XhZ%VcSTL$f=k@AC@Mej)v5Twz6998X=>m+1841k4$D{l0qQ9{_ap@|&_Dz;YmN@yzqu&K?LMN>2f>2WLyz%&UV5 z#)rAF{?tLogU1*S5&w~enve4{2S3)+tc4}2iXQ{2=k1g&RiJ-?Oc2$!w7CbR21j9L zrrPdmAV=z59Y|mf-3vhQ^A5rWpT6-?TO5*jXqaEd_QqSTO}>Q(;P7&=!7S<89$_6` zo4?$TitJ)~*`92fG9W33745d8afSKCm*i{V(*eHMi+qj7TBm(%W#3OAw{z{~D~f#9j~1rYHLndwqaF&6KX{^y8!a5(kO2EZ2JTkzYYJ+RDu zz4`I8>h8m|2ri5*G~16@ss;3g8TCvO1%)SzS#CfGs+8xtkur3fG|0E7NPrXT}JPA^Mi`d8_RH*Esi$>&?PJTU(p+zTM)$nPYKj>G=HE@DaeMur~9jkAe-6 zUq1jup<9Rf*s<1-M}xd0TInPS22-xCF|yTgf3{15K{Di5RCa(WNzU|EQZ1P~9? 
zWwh)S7o`#PnmlcgboWTJbc?752a6~Xy(ga9=XPX`T5bYcaL(9CKG%77^3T~fAm8n0 zhgDW&55pRRFti?h(8(O*8ih9lsEZ_UIiTWDFauFXRq-JXH$b)LFO~-o@UmVtrc&uT zK$uw`fIBl(chW;O_#G-CvR5vXc}SCI#fIv42~MJsaSOS%T!A7MAMTmby)Kd_E`jik zs`8jy-{vVpk3ZM#7V|(SVB4RqNd!WN$nFjDqzwZwuZiV_g^_DB|IDr0(;PU@+A*OvK z{90T-LE*{8ICS`LH75I@NU)^Omg3g@Bg7v>bly&qRR$8L>r*czN#7!eJ+tDg z-vrL9iQ_=6Bb*}AVQ(}i%pM5?2h-OQA)8w&Z7}lF;iIG?XgkApMxGq*up@5V0>aGm zE|niq=@MIaP34FUr*_4CUj9d5mV_ggYL(iH;5XRd^ie~pxfL8Pb7tPR2n2UVq*+VU z6T4oV6tKC|uV-OqE^0iz7Ir5W4HxYx0{Hma&@wUs@c9*xK+b^{f^0@UCTL(|-Vx<~){l#yLEL|_!a za9MzU8~<)rfX^FCY|S6AKp|TU%5yUqNlFK5G6ev(-?rFe29RMUp+`kPvNuepx@iA5 zMh()+uSuvvqtoQTOoj}jQna;jk6^(s=gCV3^x|qc{;)`6X?Ui2t!3o83z}Sr(kda) z7zvHa3yZbgUpQ0DEPkX68x z$8a-K3;OVn&3DeP%h@aLzdco@!Ab%@cC(qKwhZ5IAuLD(R+avsu5~s^HBRGFv>ne` zDeh_!mjiZY1vCv%0FjjPJdawh3t*nY_{MmJo0ISIH8|df-Uh z+7t9d(tm#*PHD%!i4-e9-tsSW2?snQmke&R|7P3}OROYJu7(;4I;?v_929vCL|lwL zz@Fogpp^sQE*E9auvmC{eE`!;2p)0^fdw2nt&P)P{<-4L>z} z51MP|#+L+;*qWe>EvZE27TcLB9x{J$0OW|r`xfy6)INt_2vbut&hSE>XH2#+#d%`y zYYN7LN_FnwAvd%GgN>d4Z2Bs{DGkuz>^#0HFNA^;TZ_IoLYl63*kW;e*a&UoUV*4W zTLZ9Kj>G0UMlcS8^IA7S6x#r2=J(f|A)ouPHXhC8Mv{mlDRT~M7+|m)swCao3HJ9> z)rA#2=G7^xihbMcIWcga?hOBoMdgUch%ro%|60M1RI;xPefbk9gt(Ks&i9lAPC5VE zMer&j6IYEGi)Yjy6H6N*u`kU@J49Jl73A*!`ivk2lSDu?;Dh}}5rIQW784~f`UdOS z`%6Wn$a7|h)Zv&Cv25U;;6eT{3BL>mt6VLmCFy`Rk-VgvpAcK3IJLm!@6GmdoFbLH za09SX(JVp17(*abVb=_5%8-l9?-H;dR*x7tF>YF-68=?SMu@i)q9%UPzyALNnmiWRio#0n`|YG!$&qK z!E^N0gL!51VMrw@sAXdBP?Xy*ut~k8W+XMQQ*Re$hWrvd`$;zLtePI1cJHDlMWTr} zV-|u#=}+eCT-L^aREAn4_L|9}!w)g87^F5l{n;Z=I!}!pT{Dh9k=aQb_8 zeOdqH(r*`M^zyHzv+jVWacH0uc)vUz)GemlmcACw{n(R|2aSK{z0VKU8U~b(Mav=Y z*?(Y3R)daqs`NF@81e(!5bU>lsi3F0_lmA$-~*2#O*oKlScEx%UR z;kVkO(x<%~#*Ri&!CXR30>Sl0pYo3@haBCu&{p}WWEY{clXyW={p_9Yk)0%y4|)~P zkQ`Bta1BR3I8IKRW7Q6mTClaBa`h5<$wWLGa}!GyHt;8#$ek(xe{9{t1zC)7@{Q$oN02hzkdv0=XxSF;Sx5lch#xofjMx#)N*4PF zjw9fswZ~?WshCL&4u_s$=1E!UGg%c!IU;o&6$Tr@4`f^8bquW?^Ko*YVVub5`cglVJ=a(sQNa8n!#p)&k+PxBA>wSvcWNi+b5gUm<|3dJ-ogsUVBy_(|Ss_93N9pvB6Ub)^N=ciEs7 z50c6g*AA`dpLRy&l}czcJYam+rDL3fL&gmqoH>hWFf)xice={wbR8Om&;Kq$y+PKRDIN;vAC+{ zhx3^Cj;asHkgE&CC#+-O6dxp0vml}JAuchOBkcS2F+j z?u*A$&dUv2^q=Ap`J=?l*Ug^i+|=QLh=VK-JmDjcY>-}P5i~ITM(i2wSeQTE$McS3 zxpN@&bZ|W|kJe7>#HN6+h{p9$G}WbW?~?*NV=%Y!Bxd4s&%4nl>5g&jJ2jq?o}1?# zIi-JGtUn5aY|Skgoq{X_$j!z0mWp43MupLMLn{ot-v)L{1pY2Qt)}LX@G}uXZ7sy@HGy&2iR#d)r(7~M$1}@a&lG=2PJ80i_|Ncw6eMzzZO9%@jgMGnq2jUUNAnCOS<(ie6ZYTnQUCHk=XFn-HNCk+krGQq{Uq zg^O?y@g9eE2jDb=%Rj|F@AP;2LecEy>WXuX*&GDsEOR-Sl?TL)wZ+8JZVfgjZbT>t zq2#F&Zi_z+kW-Vi3b$H1!1*%ogwDv>)JN~S#n|$7dHfgl+1yB6gh>-P7qs2VM=A-%qaajJ-Ehpbro8BuUlX0L{1oMA$eA8wzdpEQr- zZO&NEVTRUBr^X8sx;qK|ns>fjN0@xF(~yP|{i~^Iq#`y&^$a5tZ%#?wi<2!TUt6iO zxztt8wu;>8gVeQ;xB9ZSAvl!<{G)Q z8`eC+!(wk(BahoIxtTEy0@2B4g7JfnBf`x;fK}|x37tU0udRwix<3CRp6FAxL^DaB@aF64g3i=WK^|`PmGvlQlh} zVbIW8&^?+8kD;oI$>>=VyJcvJ%o!$PX|yYXVnfdz=B(dtO zZ8&w;QE^h`KqP!nwpF*D0mXpyN6Tc!$E~>~4B-&V+J#VqPXGu<7CD@^ZRUGCBtS>O zLIf_<4wTECMcf1yoM`b|^%f(N=AjM2D1HiXCZ`l67`-FF(3uu;!-5F9K9Nzxjxqd5h-WxU|^ z=wycki+Ny-g`#8N7#hc+Q1DuJnWOCB`bX9;_=myW6}*@au+;*bbsZ^*oPeqZ9lQ0MoP4E04wc;%OB2rQp$M~U3f>pSNl3v!5$4Ms@3hqps{ zol9DU`b#yX42F{z1uEllLwt|6regTslX>I1m zxyp)IVfa|NKllzJgVJCh6^&&4g)ZGs`@Mb)PEtTcnQ_I)j&jAjdFCxjmEZ|rn&12MedMuxe2p*Vem$%Y~g)dJ&LQc8VR*>^vRHeCzV$?m87Dg0?VieZ|?a&}98ucZ~o5O2RQ*HO^VKAWpPWR;Yb zW#5rV8Gi+?@V6HKe${PB^93{sQ@$&_}n zvhn;^X9hZSP?w}Gj3&WGsc<+yA|eXd@c&<9R>XP%eW7bAxIy`^S@gf>(*O6P@#A)1 z{tq<&&!+$TnMI@@#w{QFx)4P&l~t|56- z%iKf0_`rI3jxg*R(*Lw~EKPK@e~p_!CG)^u;}k{o&-?SgkmTFrd`TE;VDh-lYoq{d zh%1bbk2CBFV*m`Bx>p=qAc0`v;2AX*$y_Vz@ zt;2=H^(dCHec16IYB$BEsUyAL4>XCM*XOJgxo@Vw`2D9%kV3R{HoGi7%qHhiJWHM` 
zVAx$amp*a-n_ck9CN{XjC+pAJ**@f3NVQvA^J;pDiieZ5f3xhMQdI{p8?l@7I!6qS|3nKe9JNPY&J zgz?6v>8nzbn9ihrR-xm)S2o3bwqJ>}xuTr)@|RS>gm#xXHRt-}&(qg4gC|mp&E_M# zYdSOT9ghEer$rD{))WYxUf%rNHn|=V&)&&oZo}O8-HpwN>6`yPVxf+fNY$!%s!b~T zT}{1qZQ+5mX30U%nj%qGM4I;OXo(lImDR7p`1Noxfq3srDw>kZ+hhut(@kE-x#CY> zg5Ko(=d^$EJBjkc*!Mk}_zvk(iKDOiNW)*G6DxLK9eqa4lY4uqUu^PUTlTH;+%Rxc zt9euP*G8sYZvNN%o^YYh>#LQK+?Fp=`HK?WZ%{~aC`Df5Aaqo%9krMW0FD{y*u@L$ zs^uB{(|-;XACl7|8FYjC0(#f7<|XFyt)Oz_Rlnt|;_uhE(LT@ZSW9M|qNyK#JR3SE zr725w1?0t~vHza%{`;}`C6@l^GUnCCaaXsUS@4Nf+5M-Yj|HL?+Z&(qRajG|jur2o zcCcSvO8w@Fg_vI42AwP=&GzCXPrS|b)P81Ys{7B*tP??q2Ejk`4sg9bHToY)BG0$i zssH(vZIb$DJw#D8A%h^dC+IhSEaPsOp5YDKB-{U9?ZEZUgZlBWi=~iDVs55UCgI@Z z{cJt|{+~SC$e6VH9}XPAP7Eg7lq(_LS8tClM!wC(GF$oQUVQl&uTQfc+l{?MP zwD|=7^S3X4S`bL`grPiQ2-RbIWaeMD?EhS-eZ;7&XM!|+IyxOQ|9x8ipU;b%9&aNP zHFjqG|M}qm`)40EY9=Ie&QZ4b<$rzCkCT*tTFXwD-@|7y6Xtl1FLXm1ve(PE(>H9oHcQPSt$p#yrgamll4)|%Y?mkeeP4+rQH{+<*(!JZUytKuwSXl!Gnasx%Eq06G~@b*u-U`zy-@*B6MZ&2$S3;XQc#{oM_5 zQK<##HX!?9@88pTxsa7R_-*lwkj>nK?AzwpMx0R|`|95-%qc*>m|N7We+b1Q`F2!1 zYOXYI=(bMecC%lj7~7zq15gzr2}9D^4`2BJzdG3|DtiTpT@nB@g;N=>E8t~CQ$N2GH{P9Xjqo~d4UayH7hG^{M=aDjtcIrEKNj!In@s{9>+PK? zM(T9hFlYzcB5sWGft#+t_`}t!xfWF)<+$t1MphT+JMA6uqk7=}Wt?>VFmETPu1r{?4Umc{yNdzC^DGHChpA`Tx0rRlV2hiTGeK(7Y7Q38# zeScBYtos)b#C(JVZ2~NGmG|TMw2#7HMXjjNI5e3J;J@fuD^m%5$`--^1T4|xU|E0f zhfRqGmRKe(d|r~lM~nC;TvLCdB<`)`3`$|LjWWHnrePeR?gksn%+FyKC#{NHjn4q< zW8Td_Z$FeU82Q}&Vz6FlsM`Zd$pUgDOw1f4=o5unu^-ABwQ+>j@UDRR_ z$Ma}F^7ohr!2*tm8U!Fk8mKjS6r9~GRN99aPWO`z#iZUFz$zdYKm z=}ON2Q;8DnX0P(}X;eE_x3q)rv7hNT+^r!Cb*ckkmmkwj5?O=mOVNTn3jFzyRDi2D z{F~P9@wIAmf|ubQjb+oU*5u7p7-O@;i@A}KD9z@5p@O@kaT!!29|aH3XPfC@hQW4%3zC952^ z{fZ3ABy#ec;3@kX7G;~&529#|6BqfAITR)JuABu*LHqogyE#->w+>T?+~sIlN! zU2I_6i=O>(c~x^Vu71-)UV@w8V>9)V>gxUUyBx-W6s~4b_b-}7P1)fwD_20ZJCQ&I zCUp-eMBZprmG zcbQbq_o$og3eW^ITH_6flZW6n0Hs5e5H!AV9A>M5HRJFl3w8{=hKK8^<~2aDNc_$o zY`+r>v+W!<&;8*}KukYu*^-1^3y|g40j*Ev1ihgOQ0PJqn!gFM-;tr(o`3=bpQ@5C zFn|YwSt77CS64~1_1kHe(c02)^H66;_5qKWNm3kC6VJv)ItWW_A?m`tvqvn4=sg80 zwKb&Cjt1(&+`*g;N7Bn;zN0eaNjW73zpI0e157auxm(m%DD2-$G66T6EYiX=j`aY$ ztzDl7y%i~We9lNn-~_ni4j2TKw$HUtxLvlO87nPHC%TrY7H87bqnl?;IFK$Z$9<4E z!1Zf2kH-t;e}%q0a5jbtl84;&kt{a@j_g28E$(;ekmJw{j}|br=g}GEa7gr2sU6#_ zE5gDG@?!|~3?GLs#A#Aj-kyC8(uI5e9_a2*;(1=eytddJ;0{OKp`e*Vul~Q8xDHC! 
ze`)9`cm|&6J5O6oN}N&dQ}UPwKDYTh+=X%cCBOkh{it@iq^pGv%#^Zl>~~%nq=(e@*l8I&Gf-c<@JgVr%e1UJ z4tTqi$~|pitaZlsW0C zBYxkedVo4+tA**<$lBf*QXNODlgil0C2II~d{MF<>;AP4FK0~fugU0zO=>2sDdt8B z=Y}&V+cR@ZB>ssb~9xg-)Gvjm>5TBoEAqBM+j|EcV&^r=`jGt@k0 zw)&-IYe*aqb_9Jlg}ouY2^WQNb6MY)aujSHb}E< zS7Z_s4c!YS`khEzkwqaw>y>fgf=5ydJ-9O(gxnj;_)9qM&^MHF?g*SB0g7NOQd2X+ zZdLJm`4Hm)K(=?)1?hz(K@X*jmc(q0^;bl@BpO}WPP)!YCZN>3K5hdzy`cK z5TZq&m*kIPpWly|ECKm1RoGWYzz{*+NJpGXO{aH%_;5dD)gK|K?JY?zDsevk`m2)k zBms_uDx^sB+wS-@xWSjnN}wbF;n6JJeahva+8LK74?$du6ittby z?|+318^z z!eJ}{K!7>ah;7|68it@y)XD~^(Z^!2xF(7E_b9Nol-u-G| zl^G-A#SGx>f%hq>y(5fsL)E$n;H<+GmJD!yrPj%ke}onQ#l`9S>yOb2e~AGk6nXsx zI&J@+k!5;SRHDRq)9DqrU8{6IqIOCgo$LIr^Puc%Z`c0qtV+NBQa;OP>D13639Qsw z;%%tpHh6K8?ZBvNeuC(Y?feRu+|_xC_DKpmx{aNo+1qb~H#uXI!=5wLlqmNl^@ zL$nE(i)I)DM_fGBM4)K31aJmnzK268!!OcY3vI|L3kewVqp38Rk!K*xSv6;@&-^h) z;r@TyD(`H7>>Vk~F9CyUdUqKPqxqY{`KE;9h;s8nfF@m7jI7lYz^d`I37BZ1^7r_Y z4N75ak`s4iNW9Ey6Jbf4rs%nTITF5Fbz4oD(vRWec`buj-62xL#z0@@$T)=GqikZd=;_t zaS)Z35bpaIZ);1_&~|EndVi|--EfS5q1O2#2m4$gfTQXk5Y_4SArhCl)d-x_TQ#{v8g^ zS|ls3^`Z*PDpks*;}4~Exj9Z`4FKaffVEkQ=vOv-SK{CSN}r`oEVK$t?VCUv76BM~ z6)G@VF7XQ&nKy)q=Xst4PHrUvyrI$2BEDx^N~I&g7>D)=Cqf`$&43Iw>ODYmDwV3` z=encDt1I91{&to`&Fs(U6I9y(0tCbYX*j7A5A*%Yq`55Rmga{$yVa2+4Fm~7o4)Tv zE5y*(2b`@V!+A0wJ>257BE4 zoog~GS)*90>vvf8xVAhSECj<7P68?ul9PlpME1$7~VkB>9k*EebDm-H3xwPX7g@^cjHdYcg7PX zN;YsqRtPwp?`XiNa0$-5}syWU$1$xvKBok8`<@f(?MMzcGp4PZzWae9T=PO@wt zkdO<)RTby;bi0!X5GHHOIxn2Pxq3B*J5gYBxeB=d+)fvK{?23b!u4IhyHt%jq3*5k zW;!Op%^gYC#l3)Ng&|Sc<*J!#Aa0Z}-Bw}v5rT0y&_Ut?`!3C$*>m(d`{_#>(iJ8&7+}qEkhJR1(vYlm76E6$K2e{Su~( z@r2Ww)1I}8-J+95Hh~+n+Tob6t-#cg3JEivk;^SekHu(n_63p|YJF?d(OEc(m9EaX z`w6_LUAjMkLs`{-LqS#86}_HSfWM+n6zn)KZ||yU%`uC(pJEAlnAMS?tGq7~BKs z%ZG5qf;O? zDT!BIE({5aDGuEpI*dCE?V}DR@I_j74XDfAhFXHKkXtg)AP%*Fd%zPYa)V#}FjJ(? zs7ZL1xdjqTPR*k(>F`u6tGw;o7^pJWE??$+XChz_XZk}N2F1(m`u3p~WpOx{6|o$> ze#Iyh=$t9T<;P(-cwlOXbYlE9X!|@=75x1x7xL8dUi_nHe;xE2<3L1vN*zlN3{@DNf|gQCIy{l@!fRIcO_T3b&>$sn$SIJX>cD`j~g0kQY4WU|_zgQPJGi3AsB z9g4K|B!tv8!?O#ii?IsaBID@Huykv0@r5Tw`#P-e_p{|;tOhz_*ux*9yzIk4@5@tr zxmhI0AMfLjytzxsp=Crs}q;>MGj#z=g8@M7y=0eZ3M<1=Q`xP7B7LAPj77|1S5)r$NsNIalv*&X{_u|GozihPpjor6mfw*5&VgcGC| z%n20?iSCy$*WvVqB9L*8w;-xnMKYMmE5_k@SM>W1RKbRU0np(Iudnp6mMQXUD1#0! zx86&CwvsHXfBSr!m;wtF`GK*77n}@P*!DGxtOYgT=7D`b2628-iFaQ6+dqzr@b}5| zkxqt@f~@Aw1znox=y^b3SAJnbbsV&nVXpR*9;=mg|0;;TtAyy;OIPoJ`ehUT9w$tH zhP$=!*zuRtq`?h~;*O#SZcM#=+e6QH$w#$a9%>Oc%LezB%hTTPBFM>SYUV^lGaabk z*0lH?FK*}s6$D_;%v=Gu@6WsI?~s-52XXRxLl0|`+l$}J0f09}tPj;2wX};>0&=bq zu)U<{tht$o;Qc0b`;^OEOo|3_Z=~>VDS! zp}LXTN(%`L8WI+_)eRB$34y}-4(m0}T6T5$yvF?u(;rm4eFtCaxmTxU&ldU%AC4-F z7HODm>zc~DUpRmZ0~P1bcc0EYf&2F@;1BhmuyNJqDYF*axaF{_RC7qh<0XvM!f&Q zPrpi5G*$iSb;ohxAdzBhnb>xP7R#%IzTEc8d5`2@ zv=&uuqlW|T*{7CCJgyoR^IeQ?v3XH{^gY>ini@`dzI{z}yqc?2eyQU^qHMv5G8nGd zIO?(2ifIOEQl9tIKkples?VWK)sFw<5#J>*%z3$FXxV35SIvr%$yO3`G9mdY)1%4T z>6Q^+dn8#;#EV{wD5Naa(>SS0auc?m^pQTNcOrGIH8Nmr)#q0XhrpW(FD8R6OycK> zN3FfR54SEVajfx}mv+M5)#69@%Rdyzx2qWHn5fO4w=_nvRGYP8j{C(AE2&?_4DQNN zBw)0QXtfGVe=p%kBVKRiV!`w-K`m@t5fKjesm6^z^LQ4QhnmY#5OT1r9H=gz*c>Ur zpHJYu;?(E#XEMQYcI+|@msgzNOCqqE91!cTYQ;QB%UPBMA5L7^I7Iu{kTRpIY5$;p z_ra|CcJw@ldL)R#75Z@kFe|s0Y}$%AQojx=dH4RvEhgtEy5Z%#nrfc9KTCUAIpJ*2!?@b6_0+Dntc~EqI#GO z{`B&9&HBmL(;V$afALD5hH}YEb!z39?3k1MS6frp^C$)So#IGI4Fk0WngwvKmnDYn z;Wdg>A>Qys!YkYChN_7Kjn~~9Js%DBEQscjd?pZApVo~&o3A!r>c+s4U3S>{QcI8( zFl2LvG9a6{`eV=EGFOOapOY`ns8Fav8es*=D)tst}Cbdq2qt-u7% zkEjUckY*zLv>-EHHI<PRWebk8?EsfJfs$HvZ!V{D0vtdbWT=Dlra8ZRlmhSIjfn)! 
z0|_n3l(pQtQ^0b0nNnlQyW0wvQEa)W*rpR4)G(>8`%cfhDCwoPAKooJ6Vr~f$%TqG z%Czv3(grmh!n7C=2H4e9mC|}TD*R`s3*O%c%rrbilMxZBs+(}71x8+C4xjjj@yTY4 z{uT4HoA)f~V##~wo%Ws9d6l&9^eY!};hrfd^JgC9)=Z$AYbR%J9>THp$_Hu-5|B~- zNzcV*1y`K6g^$FIes7dmxv*ZmO4GQQnU3W$X3%b5Q1Q)GhH|Pd3!&gF1QxW=B6gOb z_;-02u1XTZTqgX%a>$c3RTSP^WtvdZU}M5;J`7%m>{aHJgo|{EU5yRxS({U6JXPt< z1kFiHjN|8!t(pK3e~s7Ro)hX+5>E0!Lq28$+G^KsJZXq!A5zFtvitE6BHKIMBhJEp z1J0KtfV%xzhx^vx`jrn{vgKw}()}WQ2cz(Xu8GZSoi5t-F+&O&Tv3GYIZ*?uS>ZF^ z%5dTGnmDmZS+I)Hjz1*A&U3B(gZfom;?a$}C)XkF_+iXmUE>+KmtTlX-;? z!?t&{!bnn~B0Gopn|R)#QiHIf=-4vr)Co?B=5w>EP`CvN;eZE4Qm4qjQcrFhB|8M} zwGCbLI$LxQH_M>eic;FR=BOITy2UT)meiBS$ z3?-&_{ga{q&GCqS9?)ochPKWV)Ld)B$+HW-rMzpRBs%a9S?n+C7 zZ^mvK5-VblY|k$-H)BsWT-++CZnExQFtp1@?#%~Bmbx*&0TR!oyyi6kXCY$xBcRUX zIB~;rq92vhdow5}9`iGaz+kg3Fs4g`B&_3%4rG>Q4rAWWPEP9>O<N znkm?&k<>Dn)IjaE_8JYQgWns&<1^Tm0?2rb;@7>+NjS{PxT{T+<}38|j;oAR^tp}I znjF>JhpR|v+cbwTjEU*j{V~LP-vmANiIef#fBv_`Ia6w91$~1}r+v#vS4r$Oq?|eF zm7M3hUEl{l-onu?r*|)&u94LfJ&Pa(DuwUC=o^X-5)aQbznh}88h^)($*7y79R#-> z!3+A)8#Uj#_-f%qxxRsBA{hy?bsw1fmB?hATnd`Wz{Z6qyqy-+CSsrJ!H(fa5^EXs zD!}R(iVA6k{G`{m3Q`k|35V?NXt&|7#?WBwVTPj?qOCjWPFWKZZieyBAmuQ(_0L1Z z1yJtoVYoh}Q6M9L7K_CE=(!F22I*x#v;(!1~!I7H-}=n;4@rD$eYHN)`X(Km0|hsj{D`EhgBS=h(0 z-8OW^)>E|ooXE}i>-B?N1f~%m{#4`Mz$%C~MRQs5$?*LjU`QkA zzY|>}1)`v@yUGMzr~xh+4BiCgXh?ho*EO7uFkU{k&<9T|??Q)TW$zN|9A|kkaj{LZoatJ|i++je@Q6Zp z@~}n%aSk)TLi z+N7giO%2;!%WxVIQt1xg%41TPR<(v+)|ZyNxDVbh=`Xe2R+xH@)ufku!#_E7u2XhM z9m~$GMsCRFc?8)M)+w8#MPN31@Z4=oXTC&&2@7j$$I_>LbCp1nRF-tf({Z1S#S9lK;eygBE z%PA&Xb!FZD8Ct1jQLW_rcU5iJ`+nDMfY>S?YkXI+-n2D+W#c zr}CNHUWwyc3?n`(4hC&?%=>=I-AK0*wnbs+!Z)Wzd0tZAWH!ps5 zcr?Y&y>pq#fu`oZ9x;n&9JPAuJe(26wxvwN+5z8%Q1B>uHAUO_^*8L96wzVXnmZhl z2>dg0CqjU<-?7BYlo+>yp(?I7Q}hH)Q1f>Uv|9!?MmE(Tb3BqRnWN)<|u z=;8#`M+V2eg@+F%dCvw83YqR%gh{~FF|EX%5C*r59t_Mbe=_i;w zjn9)f*2HjFW(cA2KtuP(e;T^gRTfQ%@Dr)+ueU>Kw}?*{6COMr{~ID#6` zkV$-Ij}iErGRVPZLkQp0T+$+Wx8L;WA#XC679o(t=M3QlOC?qc(Rrx~(Q{JZ9H&$O zRocBmYjX;1QY1)8Z$IR)7A45Jo;+#k<84{Gdut;OHf>yC3%ClAi%s;S_Np6)@oP(L zU{KsmT=VoLp_+}qCTnTt(n7pHAeva$h`zps?9ty@ZjfrbBD`}=?YHx~cea`Pa! 
z`W=jYdqMYcqRF@hj4XBK2Ti8=em6x8dNYn7iZ!T!G=@f=uJa-I+1f9xlp9QRdk;3( z3uyV(a{WndAqEDLip@V0#fESIFB&kmP=3E@WfRl}f$jM3I`3EvtQYE2RPlng=ca+= zQ#yVkS7~axOltU(^yE9cf}HK)+M?s1RY$R-%~ice#GZH#8Ckx1*7bQG7khi<%bWmvoJwo7gM7q@rcAH0Wx%qq#86yKC@ zRmbme3Q~EB0zhMG!qVbG!SKg3LYEi)C#ueN+SUuF|r_IqBWCR+FJhLty4n8dI+ zu8f@XUSst~ukF2b6XTpiW*#g#lT=x=Q5~Bu%`py(QrLLnUE)f^1J88&jij<3oK(>X zC;-N(r+MP<&frRfC4K~YaE3Eg=mQ7x)Su}=cR059+s>y1a%|3msy0E%a{O>C|Gg4X z|0$^WiC;9GW?q|bbNHBB_|-@RH4?1iza@un?Z{o@ugu(hXSa!R0#wWd968AUWSsTK z0^*>JU9EMDQ3V8xe-g+PnyR)9pDKHHk>t{giUpMKAMB3&?_Os76RbFO5=sp9>YV27 zkJP)Z*0<$^pFt!~n`_=X1O!{{9}`j}nm1B9D4hmMFRx*Jl_vcSh7;0r9Ja39;HqI5 zR^)5@Zjw>gMAfzr+V0OL3@mLo&Ru2njwwK@|8Beg$5ywygx*#5Dvh3^)rm_7Fb|D zX9OdWh=KoA#B&K)k46APirv_i`iRTf6w^Fdb9$7&|pa?fM!2f^_0kr)DF*mc`+(48w5wvQwZ8^to36~LaVcT0FrkOQY3D27vOVRw z0DMj{f9s4I%o&}@5T04pAxwVC&IEdIBsS>F!FU5H1Lm#@sO;oik*P5SonP!8fJD{% z&0b^#mzZ!RSZ-Hc08|f75_G?gsH;xcW99DPKu<;v*_{1C*e(FaRi3|J((ER(?GL=g z0DV~4JM{VqiebMBqP)KyDQSIbo87}T$^i&saudBeDL8^z4@~ zQMko)+01dB_y*Jq1r$^TpzrF}Pp|_R%YASE(lhsf#5(}$ydo!q9r&X@oA_-3E3|ox zdL~)Sd#1+l0n51o)`i5dFi;{{Tz^Bd+r{ZQL2sVh__@=hhk|?jV((CDO{+F+XUA>op|Lk%d09ZLw*Dz7_VruVh^o>I35 z&S+W1KNw1_39?i`FYGKa^ zZBhnjyyE7QO%DluOznFaU=}u3e%VJkRIHT5{S244cI%?A%&z#;QffVbblOv+RWgW-z7OCR&O$kBcj>s4pSeOz zQt#BMd`^EQ$GueKKKY#xT#mg*T;U0LUU0Y^cxfNrJ@*|ucuzXkae+AYo562dE1>mJ zmSkaX7QsB(O&z-Eh@AQ0YH}2~X2}^SIjuBHlEN=D6E<$T>O4fT2lS%(%p#jHkv2*c z@W?W%4Vnpi-Ec0=OQ1iZ}7Eg!+RXfHI$tLek;B8~f7>IBzmA)DC^^!|^6& zULa*qY`8BNIhb-gEYA@mZP^={mBPZK78Ac#4A0^W<4oeeE1-j7>+X>0!rELc+1nVB zdYU8A@U40>pFJay$3ForxQ4b~+Rw};#RJo?@bP6%HhkEiC;p8U66{ejZ=CeZdptG4 z>Fw$LSM?k*f@}jbOo#Hu)oQ(^lJ~^`k;AVZoAO-Q#NTaG0mpHDaF!^8hl9d#5k401 z_(WD5;GWkuIMBhhnnC6-8$M{+iO0j2fHMR%kqgvCAYk};Ui2wY2Awm%RK`+i<0mJ8B*nHDcH!1q!>Y_h+%;gg%k@6Zy00fD~;}{6`(P_$Dkn) zm0FEuZDZuH1s-L`9fK-CNM=CFXZ$Z-3qS$c@hJ8&v`Si(2A#cba|@_7w_o%FEUrvt zU~NLa2oHS4vD@1TW_s{_~D2mcZSDF-39^AiU@j zU|RP%d7s@e|Fe?ts?x8m( z{LOsQ&9$VjIjb%0m;iY*Kv+6)$@dq zK^fMQG18iA`jeYOPsXj^%z`%_!*Gy6%{LM$B>CDM>~9?B&CLrU*oz+r-Tx^EwCOif z9p1sW{asNWbn}uEd4oaB<_ASUO?$e);l?%aheI}p|1``V3 zpr`AR$IC}o#B2SE8<~T$?BV|PwY?Rt_>>P08;$t00e8l2O^f>Uw~xl&5KsN z!SENI-2AZL+3fG!Hq2IbTW7Ve`-<+-d$H4VK(d{G=6fTB?@ftcX1Ek`ra?{Uf&Nbt zm#gNr`(`gG-$z^@%7DWAk=#jpoBIhNTJfaTPj$J!Pqa@!n;3(MJY)ijV0kJ*5^$;d zb*~^MNJL5~r4TEo0FLvO#3g|k)A_NKY87}NR1mywL(COHJ^qCyxEYEjR?M+o@z&zP z&0Tg1*5+1_GlWT80pLiFRfli}74&jdTsYh^HHHelN%{vcCwo6B5OKE*NS>cIqaIJlY@7;P9_-ql=hB?yRpt*Q>k6W1y)DCa)f z-G+TPZT3AB|Ek-5AVm+aySXoLkzA;LV=Iqrgkx$p?CD1CCZY9|>~(Bf=bL*d-tg-P9a~CFsE?lHdT#V zg-a*jjY*_OBfFnIn!A)v8g^4Bx^#%fo4Db+GwP8-8Fsnrre( z;Ako?EL^fVqN~b4u*(r0f9Ih2oo=gRq@DHj)T~}E3l|5K@Vs_t1|g+ z;`zZhKVRG6>dX5kgNUU* zS^d>kTcNKftM8sl1y*f__V%nx{^{&ywX1ysRljACpyj zpKO%epptAvVo+h*1a|9w&<%9Wc_{3i|3ier=th?Sw2t;6{qe8ut^Z4aVa8O0FK%8} z@cK-UL!!;bRQF;Dj9|{m_~9>rJ*y8%rqAls#Pap+0eI(jZn}{wY6K8)9>w-(eMwwX6bm4B^&9r z`F@5(;yq5ur6E!rex<@W-eQ)?XxyV*V9ZkOroPRNUBqNq6q8yV1o7;@_HeR9%B0gc z9r)xM$%II#g{soD>Y}7(1ycVY)30rsSW8+uB4ot_lh*ytnf|^(zc2Vn7fzA*e$&#T zDYX8A>OstGe2(AC$a`?MR&wn)MsC%*&}&Fzg!@jze>u`R^^#)78#m3Ea^EskZRlN# znKnJy6sQX~__h(V0DCp`2UZLMgO}IEV7+3GH7IIw9@i>rMw}9>%3fWTDtmroE+iwm zhys`sYpP!g0$$0Y-f@}tu)>|q>DG_~amrC$mOM5Ci$Ur&ZM74d#ZOyjm+ak~Qav$W zli;|VH&KEHQ>XO;ryXJ3gZ4XESsFf;s~2Vhl%xwOKgU`$=BpyS@SCl|2yL3zMNCZ$ z4EDdd4Ewem4GqJ1X$V|+$YOB1XG1Y;R;|xM-1Z_L55D_9#kW8{HqpLa&?F(qWv@%M zfRJiws%W#Hl%ct6Al_*@AIrccL>dAo$RKSz4Z;Q1+;;p?K5t#WoqAHeIAx1eX?>tW z?zyhS`EMVe6K<(ArVFh0Zl&jBZ*r+-_lzrVm;~%C_>>!HC6IZxWAYWnAh-ZY`VzL; z{vqKY`D?!Ki}vC+2#UXeE-Y*yJh`yrwN%44)!ap%rNs#2r4_pTcj-qDlcdwc>34Wb z0F``4gO)cTt0zUDYhq$H}zA{&ps9LY#HeDDsO~R>h85F*Xh=t z)?Zn1njh9`IIIaS8xIVHgx9a~1)m5J!E+1|Rs&>d5+ 
zk-c=*ZlffaUu!|H($sf&+wJIfq?11TVfMC7aR-;={=t>A&0NU0biRnY1Hx(Za^veV z6*R2?-Ck$O;xYK)Xjs5i(<%c`_r{2$464zfH%l8E&V}Z-cv=VVT)*LAJHpwihO z%D4cu&|0Xnu+%V(Z^iQ8sn=Op!;ISEU(?@XoER2z{|L;Gt8fb5vEB&H5P;}>Z}iamZrGWn zIsNa$`LAS|tNpQjw(cLHTFuOiJ*?8OS;a{CfAcRT4E|*#+qV*i`rJfq^}aL|!AM5( z8@(YUNU$tpxso0UE|`rEawA1RS31?j>UWV0t6)aySHfmQARmYAkugL<8GM^403$zi z`8JLA0sIN(8?rk?r;oG>$yI4GOa5;DUKJ~ZxSx0*MIL70_sI_~<#3EZHFKQnNI}c_ zSZjP~Vlr=^t~`2w%Q|y)yhumul2mSdjx?=a@z$XXnImcHE8!j?Wo*QR(p+P)6s9zt zpK;USqI_1##pRbG`Z!yYQN(XSjm&Qf}W=dmz3PfIR;P1h~*j!TIExd_V@FR{W zd3C;^Sbt}1Fg0@!S*&Q;tM`@K82(;JT&)3(;jaLugA<`5Dk^b({3)MX-;y$bqh#f|Xnh9_zlP$k}ACG#|ELjZ0A34%jQ7ALS=ZoBYr4jV|%Z=)TBrTQa-wCSiO( zwP3u1t}^LbhmD%R)bN`}#9ApmNiJ!Ny|YmVE;)349oCh)3c-%#n@&~&qW}KN(}PC$ z4BpsK=4Oh?(ki9eAS{fKi&&;@`4+-xHHbXiL zL1QfFdV@EV7Qc3Qbk;jQo-syCKA81IF5MV@rWPqolGgX@yJi2(CkO`}+{FOv+Ah`= zZYGW)Lj_i>mXc!OL!CpfFhpX0N(F|3+$89A7g@tp6DM1f^#0PiGmdBT8(&|I2y(QD z&#ON#xrw{8%!%FLfA9Jrvc}0#M3qT;o1N>`1{JuXwQ;(XA#4(V46IgZP65uQ7+i<; zc-I*!Yg#S}hZJ&+3`U_F0or@5^;LR-c$t@WVtVZUWdW{5?fZJ0UW|1hsD-I?_*P8}fm>rgCowqFZCthw`+l56z|_k+gm% zpVq@Jj_;-_x2>mFHTAv=w^skLAR@V$OSWwvPr|8vTDUZ*G=nF3cnGuz7Nn$S1R2T= z(W*D^iE=*OmpA@xu*AX>KJH*dBUa*vY)0S1JeV5b`C*VE7f`=!sU0rD?1Vaq#8DiM z+TbkrdyfyX&w?}^fi*>H-$-F66hqf*o>h8qhV5*b%H=rbQio>@i)q=!!{^9kS@d$-Xh=HLam;BS4O zm4?H!iUk1R`O_n4B)akI=M!M0y}Y7WP`LGP;zO2cc%)3{I1`%_&KmY)$VXw|qesQD z#wC|dfLKqKz&p>V!uR6fGU8@e9^^8I{DEbV_3;Q5Nj@!pO9`I_3rF=d6OW#Qefm#q zs~_+55x47Kk{4?o`|<~A7!0GEgiqX$q7o^n+V3I$XkCGul&VUJd&oOD)VmETRc+!+ zpHdMZcZbvtw4O+Ga5t#^uajalc5SiWgRXOJ_+9;S#e)487#+P%|^_KTzzR zo*WmQMiNchN8(W>#UJ7BE5;ITh$QiQwf32b>p|hYKZtIi6s{3N8>0!x zPJ{O-<&YX9cVH~*Q#p;cVc9j7Qx#%wILENZ$=KK*6+hHFWwpEwub1sK9fgXGVzq?5 z6dF15@mr}eiwCO9-3-p$!gv=j&IEL2jB|lcF_kwu&j^VFU(pcS1{mx&&Imh=VW5() zmg_x{PMjyaoUZks?!an^T%uR~w~3n#5EFdqUG#X2yLWyMQ8e-Fc@Y-~fp!)3p?7}1 za8zn{^n0f4Z+&6#)nF)1@Btj+N5d_nez`74C)%=f*Aa>7!Kw;gtW=RkZC3M}~I3Ex@T9gm6%@6CQ5Sgi+j`!f9IOp9mwp5;4 zoutnlOYXM{sXXZPJXuDM1%6#C+MJthm708F;{J$;R+@dTwTS{B<&mGs^I}Q1f5aZ#NbcujvpkKSy>^rS5J_j05BfFxH+^_4t@rcxLVu=T8&YT+ z&vrQX)|KtJ34t!!A=`MZ-)nr=ch|EUlbt!;Z!dZLqOu#QH2IB!GH@cv!F`ruzu<8B z6Z4V#oWuj>*ANoF!LJFOZZC(I&{p`yBS8|kBM&t0f$1MpA%Vgmqc_Hgj$jY!K&Z#y z+r|;!!FNGgqlS2JZsFKD)OdDz)TleMkgTSgt~<7K!CIwos&Vo)FvwcZJPJ^vQ2X)-O`Wr$4O~ zlIWZ-Hh->0_UwY3py|+ep=oV0jNQbhH(>;DjPUIE_)IC>+FB6m>}{_A9nBNdI+T0p zLa&lOaAMx=u)bnAs-|#dIDg03#@Nj0`d|2$m`GSCEEIvSq`vgiJl$Er1R+BN0X97U zn<+ls%wJ4!NFX%MN=82|3vUNxK{mm;QG_@aa{f${C6i2d{rUP#1hYN3rr}XIs1@`R zx$lueTdG#R?w6jCH;bJ7c$|+2`VzOae%{2>$`f00Q;tiiXxBN~JbJ%HD_!-`0dsNIX$+`xLsgpy!>T2X?^8NV7%$-(s zo>${Fyk_Z-n}_zZV&CqVyx+s=?=ZMKHy9~ElJY_zZTdd71?F|4L73xr(kd^h zi_M0Rf%N6H$ga?D?6YCkP2AViBI^2OCd77Qx6y3B+vWW4_sL!V1FX^&C7!~Dshyc1 zPtn(=X>)1!tJC0Ztz$B?gf{>HCVDy^xoN-rpvXC&nenEwNtAF#v60xnICEOSyUFDs zVUTn(g$|VwB5QSZ!bFS7Mm12>Tm~HUPt-)e}8bO5^;&gg^nw<}E9eV2=TzeagQwT1sTcjxsNNgWVxwAB7x!+hs=@JE-zL6iJd<6+%n{U<>4;8qm_q`B zPLzV>lK4dIC<5DDASe4|&`=87m#$m#_wt3pknx~17b3x;Min}2B`y8kGj$l85?OJXyN8Oa&14{MG zF5av(1`9J1U=rpKTK3Yr(xW_jygp$7U>9!R>$i#ffwV`+PCveDN3(nzX2_;KN08{% z<9&JBFu5P2aY!-b?0GoV)60JZb5HcY9@=@m;vXFWm%%PRS7r4BeTYNa*Y~bDtAEaf zyw9P1yFY6m#+n21b1IFpYV-hpTb7;XH@@1>)YS$&DKzJIMchk^Btt+vRYuXB!3OUs zXgc@6NAYIcFJoZx30;7s#6#yRC1rN|Slc4EKOx$SGa_Wv8Qb-g0?+#N1+` zr$cW%%=VRN#_O4DW$${c@=uMXKai%~47dc|Z~{RmbA$Ed&CqTP9xd0GOHOyGr6VZ$ zznJ8n$DZOYV7S@WEa-NDb=a-)jzH;l$EB&s9 zzqEyz>&;4*X=u9Ex$e6q$*T2%x8VJn3071XZGc))loo z?X&3dH#mXbG=0{FH7dW-0l`~Wq{Ml`C z36wYI4DR9#na;T$1qS7Eqhlp}=SmOsu7(iS2Gym}OOLx$KgL(}cpVj6-^RfY=4(2U zf2zz!4^nJ7xI5y)5XqLIThFlaKK{oxMjtxkEbBz%Jq^zxy4#e4M3RlZ9rx=#m>uxY zyK2|2)RD(R?Cr9n{+|*pv$#jY*ra1|O1fG5BK}Qw`?_!A0e6y6@9^xH+JQUZ${oq` 
zWfgc3`dH?bHQt09{~`b7ZRf!l$DlGPxE2m2h z@#gIJSRgRw>?*qgb&M>Ikl;fkM#&ezG|yKkWHp{a`wS51-!bGkh2#Jk#wC#F8F&Fg zg7BpPj8ee5G}-&|bfqSQo$l)39oh2_xC1UJ-<2pN?*iQvg0MFDhB}~C)K+>>As@7a zpMqdz13Xh4%z(HtesAc5_IQ<$ed}HtZ}QmB$V@D}gwUj(?-b)ruJf#Cu6vG~;9y>Z z$)CJHPG3p41mx|A;W4frnZRmkK*~}wz{*i#ZP|n?%$0$)UIGl?8n1>9Gi{_-b>Ceh-H$tc{q%YF5pf2YgOjG;#&<$3^UJXKC*?eJWUni5>mF){x%b9tmDA0m*141DAP6V18p!6-qL;`AwG?n$wM9`s^)!34jM@gP&MyN*Ks$Asj7gBh?fAjC#r5xbXSlc3Ax;C7Vr(S?CtJ#3=yH z65f|-)))#j;umbByf>JyA~*_U|Lw;{!f-D;U9QK9H0^O{z_;oY%I@)LWQc!^1UqBV`sG^;UWj0#;h~(Kw>^nBooW>e$cMV#7`&+nHpxp=`xgn+(DQLa$qpr zBE3X4NHvHX){!gd&e;s$^VSAY@<jIa%iuizosd-F(h|D>Ks$D?3D!%X=5uJz?16qs& zki$U10h@3x_izqb7ovmRF(dV~IIu~Q;HP6rS5<8bWIgtBpr ztm|W6b==cAb491K>|_m!g^CZwaL*Y?KfR$=Zae)g$QDsY69h zXlIH(AaOX|GJ#RNBEY$#yYdY6W~;m3ZfyxE$Fq2try8byuK)(BH;uM`Mvv#il}^#D zMyrsR{7}K1ih?=18}V)573>S}>MCYMZ|CqVUn?x|xx`3J-0Wh26bth?1l?isL!Pkr zYW0@navV+K)sR+tF>%Cb5b|iwvM+%)!Y*@ZN%A%OM6^yGs481fgjl1{t8S+@Fa(zF z(tz9KUPG6Deb1nP?_-QiLCY7TPWnZ5 zPt~b^{j2-j?PJM39oFrO&vPfWXoS}RY>RE5+}Z0igH5k`>3Y&i$3i9O&{uq4rrChk z8kwkT@mK_$Wg}k8bTc@XV;<+ZOevjpo);Q-B+fALS4;e{Xo7-6wZi@DFXr41WZNV2 zS|G;m0v_ZEgM#IuAdXPy;KmB`K?F<1-mYQ{r2B`q95 zzizuTr6$$(m+DnhxbK^9)yoF|DvVuAvB$(mru5&h zzbvvLu^dC`N)eySWP$Q|HXf|-z7URpJp*sU?w26Crn;$r*w1ePWg(_S7WtL>2uMr_ z9G=TXXt%)kQ{Jq(z{bu(>>FQ<*7tmts#m&Ey(;4jmkg{e6N$ zh#}yfIT);Q3nCgGr#rFO-NDqLNaf7868ty!Q7u}#>@aePyYS*IlXg{x&70JFJWCc_ zuW3b9$he?<6a{;L_VUSEkQWNiJ zyeFFMt;y$b_G;;!*!>W%sSI^q<+s$$s-mrH_V4hviZzR6F@2@cizs^jPO}#Vmn-*w z|7tu{^%+@yCJQ_eD^^}O=$FDh?ROkD!>#x^VYLfu0z`jX0=u=px)ceUvxB40ksFjb zm|94Jw5+D|>sgAvBD z?x)c-EBy9~hG@|jnwy?%qV?ahnv zrDiq#M?u3f-aZmqosn7Y8bj?H6{Q1(qwMnr7Q&hZX%G7i`fMH6H1kWsM7mC~y2cn2 z_u_rC^Z~M3md_G;q*7hQ8PgL*g!H;N(_el{f;RvdA%(2m;%BkVJZE?nkTz```kS*m z+dme9y~=tFM}-k4Bz|@cjUo>i(!K*A|LC7Rzif_wzd!1cjAhp)NF3_&e~dVf3VkXF zrt^OikQ^KJ3X&y6F9M#o;SMl?KOY4vV5EsZG7{bHq8B|q6@`T(lqky)Gk6biqDl6R z&MMDl-2lkG2dVehsp^0e{cJ1+oSJ)z1B8y`2A4G`OE$9E`8KB9$X#_?*sNFNv54^?>&Df z#|(R|A#WFdK4aAUQw#*AxAw86ots_dFFm&phtD1(iCR}Rb~S3gTXc_@-`$GrjT_ZnQdsB^ z?G{o6`-Vn;bUj;WFO-)q3xS4Bie=?z1u9ck2r{o~SV8K-F8PKZ(gH3EQ+&5=^hCT0 z@aP-qAO~i=utB`6B|)Z;FfnzAp@FI#+CKK;c(swY0 zLKZ8g33ljIdM9Qe+_Poi;p>B|^EOiGz+K041y%w#?waQ`Q| zEb*oGqn_xCWz(+z3;eqp<ES4G_(s#R_$VSrWlE(upTX!eY z^ya^;*CgYtJ+Y#0_x+yaprF&t9x_IYMx_EAz?8FDy}A(Uz0h&#D4rSRlsAHPdAOK@ z^GKiX`3Vz8V8a5RdJF1BlSi=eK#kR(*)#fE6fp7F>4iUAL&o|ec~+F0Q02PZ@1q7z z=y7+WM+ot}iM@!wJk_htpY_1tZPH^B@<#_&1zN))vX-20drD-Kd>?5aP}9prS{Ru0 z`Pf@*^hpi53TgWTx0 zf@j$R7}uDoqhMrEVu$yozMdWzS;ExlC)nQIB<@c^0MQ9P8M^iD_hoo;y^Mzy9g=jm`s0JiTF+h>K;t39~qz!nsI+Nf%biVh{Joe!u~>?8GH+K#&FU4 z;Bm|TzLDg?Ug3|bi{+{*jn*>LIIlKtSOQv$VP1>I&}nFIMo&q;iYZC#A=zi05$2P5 z8@`(J7RW9c-vhSy4%)Q!=Sg;!l*}8I`?ZWvkmqF91umVEeDD(@i@)`A5GX7fN@Q); zzXoI{|3%5XAbVad^C>YoHfZ*E4UM9 zgclfl+jRuB8$`MB7ndg?z0)Eeot&P_GnQq-5B$GHxlcmdmm=tVVKCII-{dz_%=yHg*N0;)`O3 zUJNls)kkvoSkD9k#N>fK31Iu+i~=r}aiFP(5$Qvq!vCS`t>dEH`Y%vofFXndq@*1w z>5%S5QcwZu2BndbZjcTMX$eITDW$tRMY=<}yY8Ozp7;FDdq4O7C+aZI%(M4?zTdUh zcdc1&d^1nS2i0EJuRVUF+YJagn@lRz1U`J4W*y7fNmX}xKKsKc-n8#SNj*g8IY*q$ z9=ds3b2n*<<64Y??KYzfyDf9jgG?e>O6#e|WR9jkEa-cn?~;#el}FR$lJV%5O3C}$ zTTz-J{t*dkb-W2$Rb$7|Zr;nkY>chioy3Or50;erSgDG`E+T8Vd28q>IBbvQmL#jN zM^{L5Et8ME2T3v6p3##iHKY^=OFo7d=|0bWo}nDa>e15R@@Rp08gD|4tQVeWAkB)) zh)vp7vlunR$5{xUlVi&B5~FjFJ!-9cdIkpc?{!!}$V%@E4Zxo-c)X_({b`$Hv_Ty) zNI{pOZY;X7G&A4uKsmfWiW+#1+G{AEMXja1zWeLYJq{EB0? 
zCr_FL^+i-G3fx814T>-Oe*6@+Gg1Xi>$@<8yXX#TW=|SZQ4r1&IWInW+Amm@ zWzKE)z&>)FeLS*WaiTjDk5TbAjmldiRv`vcx2FOq(RhT~KO|Nff*d@OH`sWS}%27ez29V{`h?OW>DlvABg>9VjtKYgl-8jKys_V~vb zlp0DH>Ev_u_<+Bq=P{L$^J);@LCk^eUwdR>|KS2F4KDkf=#&36Wk|@03x@cMOgpql z+z@zh8+9Z{AUO`iDlu{;eKgPF+;H-9*!vI_jb=pUkmJQDj7H`Cy+_u4n00q=c_J?8cC zKi2|#*rbn_A&ep?gc9URo&TuXh5D_%G*Oa>m#{lxHT-}wt_`scU&2YLGI$EIevCTtp9(fsa? zuBpYb;$|h@k=6yjoUgzY|F$(|#k-EP0={XrXQ)w$-s{}NFl zw7&Nx5vO=r<;pPr-9SvJa*j^#yS|8!Ng8vfZQhHewunKoJ7Jm0Wu-$MyZR?G-gsk# z&#SA7*URSJ`};cTYwgi1 z$#V0FoZ*_3w0Q@U$N#BNkfWQ1qY+oIBDf~J1#%&JzyuK7=jJ2{L@dCSkg@$irURca zd0Kjk&)toycA05dUxtVdzf*=&$7khKdXSHhQKVZvGEn4V(O+K6zt|Z;M%dA-%09dP(Or&gWSP^Df%k>k^s`iwdC#NBIxquQ z=CU(2G(VIrwF8y1mu_0H++S!uY{6y^bRa ziGk-cDzD>b;maN5c07JEDO_esT7K||JF_G``VY*Syb|cP90ve*B^YSlovW`+bDeYN zR|G$=fN^C`^LjA!(xSb{V#cUnS9P-sP#D5j$rd-7g)doCM7?S}B0OZ=B!rv^ycye;ou$%5YAq5yE-FcY^ZsOAwO{ZS0+Ve1A`RYi^#s={h~YjchU#S9+sSQ^T`2d?Wb%=9{)G+*^KTm)zxVIaZTtZQc8M`leni7)%dqpH}FXvG}2 zbWamtQEOKqU?9(=Gs-3U7pjF;PIc!)6l_cgA;(O)d4F8(ZjIurpVnlF==lxeekh0I zI(8}K7CugQV4|iA+zl2Sc9YF&rxQ`1u!_ z{!~6bzC)+QAd)E&%^u9%wy+;)7L{O$OhT|_!ij&l&3j+j6o62SAS3oD=dIt985AJE zBfk2==fB6CDeeBszJfZV1#iLgT*Ksp6P=$5H@ov)jnOVRv)7RTLHq9+cE^gMkK_Q| z57}^TjQAQRA@BR53HSiNDQJeaGMWT&IIhT#j+b9&+f4uDa@p3R#aTRP#q&dr;PZIQ zZ!YAt)_00a+YA;aEtqTJ$oFaTMefioXF)b<2V4EkN3TSgG>}waz9H^i6{0IR6VE2b z>-dOuDX`46+gS4!yr!#&Lo!6#wxlYQE!}<&W-Yz(p1Byt@&M7Xm)qO-DyLB&E++=j zR7srs(qTxVCu)eZQ5NHV2c*PQ8SawFWLPWJ9n_t|D2&r*&U0h7wsK`Kb3TwB!PV8v z^2ZCg+LjO4-`@O}FBl<4p;k?2ZCz*mz?nW3NH$$&@;SkCmHXo6)FqB@dd^PkPxJg= zP1}E-82gzt3rU6)^b^|e7EV>Fz@%7&)$VlVcTAFo%T4BLz$BZ2Ih_viPIJtT8r#{L zpLeK=jOW?ELQ1%*0J;<8^_RiFElaE+dmsQDrnrd-SX7(Je9g!ZXP5umlH!FBFpGeJ z7|OtWqJ^Sf^LB%S^`S{)Fkt=0mXtRQ03R5q_f?{rm|dR*IB);t@{>}Bh-y7A!Kd$i zMF1~)n6H%bP)o;`sFx!x#-9VqL{Z8DS%fLR()u{u=;L$NQ}iNjRqoFpyTA7gO+U^4 z#doKbQfh{-O3gIptxDVY^viqGRW>0BBlj2^tE$4%64*amk68TA)9@ao4h@`W7(APZ z92vx@GP>rR%Z)?>pDi#>7=zyxqvF6<=?uLZ)J{1)=^w{9!CsRtT1lzF%ANM(Ag{MC=w z+3gss<^^&IT9^L6z zhuZPF$^t;R1=DRW&!5d)b<_SGW&BiSo>w`Y5s43ey=iMW9Q;=a^{;)jfG>{^PMX+< z$?ChnnL7jD`~N=Cf8XVwS5AZ@-i`#@XZ`=b@S>GO6khlxqObmW>bgNdd1XD+PCsis z?)4?C0yhD7m3|51V`T-!+;U>#o6hxq@QLKNpt(ZGZKH5{#huEAgz|UZ42|C|r#IyW zkzj=mvS|LTxLT97Bjl7WN!Pej-Ko#XxWZ-URl$qG#2wOs?q?x!-WRJR)j;}* z$B4T{9zUI-7CN zwG@jUvAb)N2%*jdDUO4g_1+{^#E}Ec2A{}2!I}gg0g?^rOpw&#C^BfATXM9mA{vJCqhp&x6?J?j1(l%~U(-h+>u-wa#7O9pLl! 
zj1gv71hTC3np%{^_NgFn|3R?D zZ_zY{&9+s)voXb|MdlRf!!?e!#VVnxb?OYdls|K&9#qK%sThXpRNTW)3QsD8w*)Bi z2(QxEF4;@pf)m=9p~zuxil$2a<)vhTywH2+j;M<6)~vKj|jT@UzdSrG0E64>naZzjv~fWgR1Q4nP}WLKA(YS*Cs z_2sw6hR)MAl_^P}W>)}{0N;uRMb6Ty5l#gy_=|0+Vm{u45lxRY3)E$btCp~=01Kcs z4$7Zy84^9yRaX=kscjGL_yQK{4XEV1UTEr>`4bjt+X4}en>Nnu(*#DNaXRoXbzm>G z&Jymko6FtKgg1C?JA$kY0D2$Xm8WZY4;@I$??&zT5;tU6J-xc>AN)AKJw zq7l_cgGA_=9H4rxrh(`*cvfR_cEOWDg+6*%W898am$60v&g)^U-~u!WZ((x zRxPt!g93{A&3F-Kvqaw04}tG!sD5Wy0dpj<_OP|Yp;S%fQX#mGEtm{leJR}iBAUgz z<8xoAivlgLDkyvK25hD(FneEod-bGHYN{W!3oOEtkt2DEI<;IFcY|O@^KatISAI6+ z+5DwCXE*xi%XdQq@&8*;JJ4%>pA;p7n@S!=hdLtX*+iSTN+;Lu1g0Eo&!SOel1zBv zUhSReMcH7IxibNu(cDn*u%qkcQ z44I8#z^a7R@Fv+YJ|2-3F0YxvH6>1Kp%6-y3hING%~K3ixsb8tlMUr))I^-lKrJMqz$gT)WHFtZ3{o1Hu*^ zLGhl7vgEO<6nptN9&1EXoos&{26UHPMD=M9^!0dA7%n zXs8#_J1VeQ++nYV$HARvO4tIvN8C7;gJ<(TjfP~9{tS`n5eAVdiTMUE7 z$@K_5eoQQh-1+B*FN%mKzWAj$&8)V#+BZC0Du%qH)r7*YA5^J-Y%6a3_nGOJbKlSX z&3MlX1Dsef+eQ`^qYP{9M{aB87gt8V#zoH>N`8;}ji9X?(17Lmq};Ix{nPz72QV*; zfD_Q|U=kw*O10uOLwT)6r1gDnoVh1m>5uEAM$-sd)KMgxO56^XIH1I^Xa4S;c!AEo zBZLQrh>}$dnesDI71y0v#%Ye-Zx^OZuxwd-OW{Le4(+mA`~69M6c=U#8iQ;3Bvb zBPWcIp$F)sR)NST$DvPTDRg4 zV3Np*VTP>!NT@xr@DSGMg^be(LmbDGXf%jF^AuNd)s37y$)|K7=KD@VKBYHHzcdY= zFq3Y&g7vi)c|MuC46K?QoBxs>@%P|&xg{joWiB~qa!7d_TvU3xb!%TGn||sXIv9=r zr(ghJ6cS4BxNVGv{_XSp*^!rJ)a9wq<_h{o2lQ&ua8pilZubWCFmwq>nMid=`58P7 z8!<@5Ek|+{Fh`mwGJfnv8DV)Wt`-G^yClFcm# zLbClQtL?9iK-}q~BaHM}AjJW6W@ubBmAAj*N>%uvO>|DTo~Fd@Pc|s$d3G$%XgQG2 zRzmZLahVe_`J0s;7eDPOUX%Qk;>JZ~P@|kr=--;YGz0oI$3n|B+ zt|a#fun7WLv?}=(aW@0=lvyGVR=}dp#e=x`d>fV1{+5v5K#{+s`y|y<{~`D~X=={!Yq_k5W63`J03Tg?MZT5^>~orS%7s%pJO~-XExJ z9w_Fh%dD>0QY#YQ{Ir{{kU@9mpFa)dpU8VN-%!sPlf05+6LR1@R_$cLT`C9j%O@Lk zBlYT=G4MQ);g+zFk{bV0?uopGlq?+Trd~!%-GWp^|DKrj^=K zkG>J2bz4W6v=K75}@)sZY*dcPG@`xEhl;9cFen zB~*o=B%Jx&U1g^1=ioC?@^ZCVXP&Ke8euy1h!Ee`vi0TmPY2sf`OPT) z-0{(i?r^0W&hckE0QngSi-rWh_yvXjDc51N2yrN(dD=BVX`#(wy{S&&u6Yq0zo1ST z-j`O>bqO~-T~Yx3khBrI;-cZd1nO}dj10mYMAqZ!RTBzC7dB@o5%2@uz5N}UBFW)8 zhqT%vM_YQl1}wOw68*|JMoh(qwoyxpzc<#rHBpEqb1{XE@>%0Ei3hj) z(J+5Hnr#Qap|I=G#816eU*`X;eF`&Lm=ysqXb!3$&TF>3<)?-L*!dbW-0}{CAMhS( zqH`c<7kd+!MZIxR_pj!_;AjkLy8l<>lML*leM=Q03@tg%M8_(eJ5{FNUaIA{UPJpm zip<38cxLOus^j4$O9FI@ddU+YE73*;vzdn8+zdue)iILtyR#u&bUB(|LmhoBxsNty z2mXXKpGUPqYDB?GJ7?A5tf09kXmb9|>yb~o6=W6%5+5tRG3Fl}9(2K@yTI}ATG8Bi z8r$KawE||lq8Qy`T4us{_R-#Va9B6_gd?Ho2Y!Z`oSz>o?~o59w^lW~wS1s9Asn=g zaGd8mOJ5PnJ|j~1;CM@PLg#TbujWE72L0uS~GZA0OO81#hK^XiAj{DO< zycKxf?SAhHH*v_AVe&{t@^*fH(F$uKE{~_Rr-I05`fqOj{#r`Qc|mBHifl_ilH43R z!ci;7&Lc_Wo^EmIYwfLpy+uKZ04#@}MK|34rwfxWhZaqKVSe;hCwQjDApI~4(Xsip zMj-Xl{RdO`tMC=CnGWP)_1!)Cr-$R`0}nOp?>8gy#WaE zS5AJ<6zg=+ro~;YzWxIdWhzo0m0(sa7qyS5>r75#F+wCB%Y7+j`?J5zny0__Y-ZOE z=Bg-53h~*-Fi}qB)62iVc1$DMLJ@e(lt|^afb0%pGmX&8!RivI_m0_M{!L*YSu*PM zddzM; zS*+D^&4Amc-OS`ECV9D(nlGJ#9tMgdBfR95uteGZ4&>4)AV>pHfkeJjPRNdmQYp3;r4@Ar_10AYLhTqh=&3Z%~FU?$xsq+jdvQ8Zh8u9GGGI zN`uN)S8wVC6LZoFHh6r}H&MnP1nJ4}m@b`c|1|qaGxa(fxydTGhP+Z>pSJpYOQ?H= znLRG^wPvDnPoIr;&0msulR+!S(Y;!S%sLjg9lDewYm3zOZ|Qa* z(Y!uj74p)`uJ(M0G+IsbD0v6yf*lsehsiuvacKZmK;d2OO{%05tnxYRW^n8vv(S14 znuI2%XEm#dI(-IUdc+Kf`q%6L1E*eUEHm1|2W$jqV7J$R#Bvd{n-eB{&=#I>RMqVS zXrMm-EdQU+irD!zD87kq{3YTav$`w|twNaM-5ItX7=cvl2A-PlLyVlW2ul%tKG_tH9M=t?5AAS_N z39raWd|>38OupoLAlOUNqfl!*Q&qYeOEX5aQwHik%qbnEx2MS>9xjAkY6A$~YevgE z$PC|skYxN}7`|H?NNzJf0p?(zwPqa$xV&JxAm7vYjb@vKcj))$1?rNsQiyI{yH~B) zBE!97W?wNipVGj6^@Ag*cTzm=Zf{x-P3lofbtAk(nXZ0OASKjSG%Pd&+y=R{ORYUq@P4j3rmbDk&H)ADG&FD z4&oXO^}Dlt{tX?I-AiILwfK8!`@C(Do- zn%`@o0FYJH=l^DG0=U_DE!?^|%i(U7eeC8(5HWYD|fvEu2RP$yUB(YkNOofb?+Bl 
zH+-+B27^i^Yo9ayZSJlqkHrL3OTNgREv%a>x41Qmz4YIClh1MM+f>-XKnUDPm0l#8#p-gXC!>#=d`WjLVn+?EQ>cGwl%w=YQfBE z=)b0h0QE17NA=OyHEwUw~wl+RBslbqazo9o4E|+CKf2x#L@x;s*7@zdrHt=dl~UE@h_={pplFf3kOj0*e^6?0ZPCHZ=fge!7e}qD#}KD^lm_M zC7&PH?V4uFZ*f+fZP8O|w|E93b1glUG=5Gu5HXLcHv;>T)mB$%4|fCZl{?;`J8K5E z&{TDxwUijl5b0M=84xGatK9=}@%tl+?2{rFXjhD@fDsyLQAQ(a29qmAqKITwaWqs3 zFj~_mWP)dAq-(5%f}xpd_|>2CsMA9(UC0&px#0rUvOWpBK1G`6VfUhtCZ8e3K+iEb zt{bC9zS2IJk4`n1(lM46%3}d@hP>hPT|~G$_4N@&eVj0r?O+G%J4f=Agx` zjGL1`622D@FJ30&8G(rk$D0oET_2H&D(6(0*-vM9&3lEnxQC7B4jk44BLV4Y(Fi;V zA%PG^*+4;F3yVq~E$|;kNa|y-X{dc&rSyGgdeyO^XJ<(6alC3*@ceDe!MDL! z!PX9~$7qF5Vj^OBeh=O$WAV<|V5$|3{wnjH+l#AJN1Lp9m)`eo^;yA8o?LRR(qvih zM2E6+Yka+-!eq7gS$?cgrF9&^Z+cx#pDFlYxBV}IRe^71&%KXCx=B0A_9D6VudI~{ zA_sX^2P&~hrODH^A$8z%{nQuwWud)Z#r{QbSH)M$f#F<|$IrX2KeY!X_*E8i6`3Ez zFsM=~hVx+GXEfZaKu&#ujMo#InwW7s>oGIwirO!BA?KlVZruJ}d&<&3C)K8`4>=sw z?6{A5k@6Wa&zYtF3nbc}g-Pb?Ij2F9fz%-Q_XH+@maSVkJUJ^DU0sY9ExZBks1>?+x{gf7?_voi{>n!8d8G$EIUE1a^d|6 zw2h0iN42r(F~%p+P;Sl^!KY}JKkY)WA_xpsyeaVIw#L871if6tz@fmOict)0vNMZN$!b_jhWqN6H$ zjlXW<$5vZ)T;5!E*!<>%<#5(KMd=55-C2FzSl!Pw&4v|P=214YtSoB=;t$R~eZE4! zGP^k4UAQOM7Fp1+dlJoh=`=5d%N$OvnI_Izj*@n&JV#8gg~`--GkMY*oEvgMJGb@TA74)W=Bs~r$5eNwraDfeWkQ3`k$!ui;^d=`O9s)koV&Jp~DIx(tBba_fmNzLOezu3NL4pA%A-K?nAQ*WP==;74Trdgfd(vvpY-YuI>@iw zoOdWC(EM2ww?Q=^!g63AsD@Lyve|oarWq}64BeinR(#!f^Wm*fWz-nEyT7|sc8pxm zC1u0qv(B;pkr#aFwx4^M%gyJQOO#kd_jdnptut}@DAZ>mj}xfP_HzU8aZR7Rz3 zvt1jf)B1Xoe(1D?uAbZMUUt}Nm)8<**K_K8_kOBaUe=6e(^FJe_DK<{FELm zoLn>3q!3{GowzaFja2m7(VEIXUuiOgz99QX&HN$e6YErWS#zesBraanN(3h+_32C& zt&CFv{gY4hlWSP0L}p_~mWES9D&-v%tnNVbF4Mo!?=!}(G{MR@|FFnz-83RwDkQj% z=Rh`vd#QB^2^#_-UqLme-|U=i<;ZR~`;K1p5Rfgj(IThGG;O|%v?Ajbt^BP;o$zG6 zp6+*EMd*g_C*B~xK|9RK-}pJ6QKeVIMwKy($9InuvB88_kMMEyq72=+m9ZTi6v#X_rG_#+8|HYrpi7-$4YH+(aP?Uk@=IY3G;J zLWU%UjBa-KfC7t5BnX+I-*~zR%fw)d%?xHMf&rXrK&w$|A~hOuI*=Jf*M!g*Vvb^W zSi3=6(^S!3L+hJR^NBjh8!gwI1wt?dO6=_~B0IuiAvePNN%v!2^QTFUm0 zKN{Lgf~t5;258vY@a$-`B{qEQ9kdxgupR44kS5f4S#L`4O1(r5YxVP;fDzU0Bsp4h z9*MYb@Mlzsg>h=x?zAUwRtsX|S4xctS4w4{M1X}Qj``hQaObxHzj!@ArXxDkz8_Y&W7qhjlY{^Jck7I5ykLJ zqYCHrI;W>6L>>2J&(#QVcwX&&L82|?!S9mp-Kdy2mSRz0mLof!$0`;eP!Z z)7paasc{PC#|V1EN{A&&Vkw%hYw46ry(irWvr$rCOI8qdd|@me!%mGe{@}zB$}9iz zSc0KVnDpS1e}osMg0&;-%L^UrD6(k{uJQKZqkl^vDD)vE1Lka4oRzJTX$V2a+D}G| z>carMl@w;Ir^S6})$29uq5(j8IVX5r02&}c6fMstUz5x4sgzTA@tzPl%YvNG#1zME zUV&D`$Wy3_iY2lO%ZnV&(vq0lb{yj{KmDU2EA3t17vnz z{xRs)Ie=vU7S5i-`0t~HzOBS)TP!TjVWu0;uI)dGDvrAtq3DM49~Hx^aYDHE3ljX7OvJtmHsQf>MXT5%xBY)7?7yWb z;C15f5Te@=5A6Nb{{c#YwnGDI{}HE?+$p#GGaMUvvlz6{hke|G6xw8G)hAF2`oHLT zg~7d*m163rVI-mn!`O-BeeBBD%013kT2F=VvURT~7g{0ps{e*!P$ja?5>nK^Zpo={ zU*N4=+R17?*KyfS;GVly|5|q$bR&9UHsJTj$cW^+`q`$4u;kQPa?QMhgS9)TWi%)= zOa=wmQtz2%oA-aj?9~4K^@ZZ&7hg^Hz)&e8#g*5|FHP(-z_~+xfcJ7D;#rk_)eg9O z`Dj&DXXH^!U!mP9sT$Y4+sxHs;1bgEtPiODG?GE2HkIJpFZR+d2?(*=;-BLLwE*;c zfjNz5A@WxMB$&?3!l1*EJ8u9LaCe6@fC@r(ZW48D@p(;y`H3QKB^tA%#nWIicaOEp za>u?rlafdEDYg@6G|Z$Bxmbxm1Qgh0#{{5*0_%VYH5aI7SNtchH1*wOz8tMUuRklL z#HCd}nok0*06o`$CZ*fhQ)@6asAev%dN}zA*=mlanpNmfqY^qL-2*^A!||>$<9FRyV*w01-8eU zy55^(vgR``1w|7j3Jococ8$m<$p0))K#B`PpR=f|1-l9Zvi`?6Pa%)eh*woDq$e_W z#YwT0I=w@u>)mLd6NR(-w$(fC4u3N!mUFAbR2TB?YU)DqKwuFz8JB6O^<3TL(mCkS zA~^k17DPE8ZUan)3X2AC{w6ppUVG@M4zAq zoFotdNFaR0`ZIj?-PX4-I*VN>zaeiWQ1La*M~mGu0%<68wrd(_V_1Pk0Fz;E818d` zO}JwTgbF|wNpNOq)^c}wvc)>D(nattv^s^u$iFXBY+m=E*09-s2;-nkM=0W9-@OQ= z!Dk=&9hO9KXmK8%5X@S7fbu@S8J$}$7mW%o7xh-fg>Dvz3}p~{UQ;w7gwW%Cid-NB zKv8km7e_jR&^Uy`!z8(&5sYLH%rxU=KMyBnZSnNO_b&uS^~*t5;p`2J)sCxJH;&v^ zBagE6+6m0`DlB6Kz5}VZ{ik$4G-veYeHPs+j&5FSsuOBbzeOO8T>(&>^T4SvK%Dkd 
z$fs}_N`@O~H&4K4=Rk&{kWQuLfJ@;%4k33w1k8y%;JJg`tXzjyy5m=YG6a8$(+lJd z)NE}}9FlRHcL9gwM`@(LfRd4!*$S6}f9QH`Abt8eo_b1-C6>CR9xU!1qCWebsUL*L zNfR8qDc2g6B)ax~PGvPyz#PN>6n%G0$Y%OiDYkEFA@j^1>TnYFpTCQX5<-4rNvWpF zE%=K~VI@mn#fQ5Bsr%J&%7pj!b%(ys)ec{8BOy}tSw$fb>td0INk#YoeE~rax!Ho% zG!z>lFuCn2Wahi2#1}gm0iylxMAlTNp9jvM<>&IceCyOoZ6?P7d08PPXFK!It3xrA z7guFB<(Q29@_55!u{#_-vIpd7oAqkAMP1mAtvgesFzqGBchjEaW_|0K2C3akhfkh2)1KXB>pG2% z$!5=kSJoLe6$3P|cI@tVWHmQ6rSW%Qe|tjshNlJ%>wXNt&mWPA#rNXFWdl(nfD}|= zAWgugOdCkrnRKg4I14$7Cd)1O=;QC@I<0gW9jtZ7ck%>UHlx|w`RodCj zC4XtHXsL^}0A>ov-!(o*ApqAyKp|t{cu$s@Nf}{2P@&&eN)Erz^bmmSaX>8cz(k=f z;Q|ciKS8enZVj}&rS4LN+6|uW?f^OJe-AOIsM#*-jSvVNen>@1S|kEA^b80LId6k> z3xSb>=G;P#5A3`fVk+6MU?2N)gm?1Kav+WMf z=6gD%-EC0pWFV&R+$6u};RuOW7LiP#&c{SXk8lT9CVAv-U%L{gxB>t>yc}V0blzM4 zOFj--s9h)zVsx~?pwYYQyb1aOh?#~ODuEUXAx82xvNmip>N0xYdUneW`w)EDK@prUDUa{*%#YDY z3o(J&fdn?aDHiuXy+E{u%_>g?LZ%1>pRobTL>(_$^zutk$c88&rHW>qH^;hXEZLzi z|L6eWGOt8redy61(8t^_3Ka#1QFkiecg)N#!p?k?Z>ZzA8XfFY5 zoAbYj{0ZzON&?KVcc?zq8#+h|e3Bg1P!VXK+NnRAdn3s^Med-Eo5(8yLR=QL5c(<| zgRDq_UYrh!4)hL|4z?EO-I>Wt)N{fl_|&5l=^K7YC?XQ03rKF5&t{yW1vK!^P0|Fx zL__*-uVO-r>0w*@|Eb0S8M!}sTwP|l^R7T|n}4#wDXH$)*M@|?BprRStHKDK zx~K2-$IgAjOXO&O7#wg|nU3JXTTKZ%PLfTcIQzE%d0-cB&ZEcS7l!{)t4mw$(HeV( zQ2Zp&kAF-g><#|&NMKw{iEZlb`1e*8agLWzRl}D?W64T@vDOcyd1mvd1)J2SCunE| zcv$`DJ_X&nWgkNy0ih$0U!a`0955Xnc0TOZQuJw6dLOBn!-hkP zgBkkHP#l)n)Rh%vv4nTzdx9B((HCkm(}kf#`msYx`Js~N10RMW@qJ0ESmkJ=sI;bzJHDQ+o;5W~BBWWv!A!`g5N?ho=*tpY zWLGHbV-yGrtKp_r4HEnu#5w7ZJ4djaH2{VvQad0!Sivcfo}2|UlVXv?hrIt!be{AW>;SY--^T(dTQL+OwDy%!; zp)xrIX+hgWzb1fBq2KT)T3VF01S3!m{*DHqXY6>+&_Ozqfe-6wkefhdQ-R5dls^jL z-72DkbquTELz~7-E*yYVQJXSAYMTId$${dFn1*6O@y86LhLRW>8pi6Jq|=`n1p(Ij9!r+5?vgV9ns>s8w7y~#y*ndC6Nt5NsgcwXy+rYq?HJS zx%_3AUPik|k&P}TR|D-IGm7l*4{@Y?rKdlo2jHQgcZ^!jJ6k!K>B{^G|3GE~X<<)f zj$mM+l!VD&SS_!Fo$g#Fp~>gmapeztQjS=cs#J7f2CXi=X9(G-5dD!5QPU7vQTSYT zmz;zC?b&~wR}gS+?$Y7OwYF z5^s2IlYVVg(vdysBahc7b)XMb>Po3t#$Co~X5_?IexFQn6cDS#YBVqnWdEb~+?*Cm zA4!{Mj8t%Q0Qi>Nm81MPErxetDXfFCK13k+U}ck(HSxY0ee%6zn!^WXuZ)j+%02Lo?tASi z)hZBnpz9WpOa)CFAcvDMm^M>8h;w)h?^g-<;q^Utly(5@5(xK$xswJVt~w zrEQUJHl=@vjxS1OwCYC`2tN#Milmb>{4+&)(4;#W4SOn4fnPBLH7?QeZB}|O->Ln9 z-v{aklWiyH6Lt=Y>G_eOCAKT@(>iK0sy(eHVw_9d}-F4lR6K_Dz3YoV=b zeGH)m>Pu*|%(f#+pr++K8ODBU!mn_`KwoD6dn4KPFTU%~E=6M%{fV#8oFd?_7GB_d z^)e%ezr&c&CUqscMp~^~YmFtF(Pxa>K{8i?+F4qAgFHeH>KoS+k4+ev#2M}=mC*Yk zfH@HxcJhIA5s#TFu)G3I({{v=B^dT8mYqci)+ZU!IqaEi^58mY7XmvwrG|HT&0Fe; z6-CW!-cZi5-)Yj#O{x8lDB~x7a-H>$v@;{0ly^tA8oG|P4%4{Wm1cacTy6u#0(pj1 zSG&~q3S8zf>9qt{T(3lgWFODg|GqD*|3C?5f(I0D=buBTq-|Am(-mslzzc}i(%17mC1xI2)T}fLt=&x0wOIo3R$eOsR~-hVB>fd z;BY<$ezmP5A!je_v8sOtm*CNoy2ZU5+iVk-y*2V{&Hs2Vm4)8yR?CToEK|WM?N48o zH-O(EQR`=|Zk4{3T-!yXWh zDppMGPv;fhpnAjadQGp1dtFk2VuUFvk!U^4K+*1v4Qt>smx2a;a(6ZwnR)`y)yoaj z+`VlX^_3L;xUI0=GOKBq3rS_ZBjJmT6|YN?l&isa?x)+_Bd0);2DVyiae05cyEg;lk6FY+Ko?We`-p@vfA$XQQ_PqTEslHqPef60@;Qy-h+P6 zup>W+r4vbqRQfRLbZRH5DGVjAH-XnR@k3QVnL_kEoE$~$(C8XpT4QZgX#oURq{Ale zo`a)Qp#;Q$%PQKOA7sljDVn!it4d8F-?$}-T3TWZCH$>;TRwe#QTVYy5J%`NVSe=k zk0txCr%VcU@*~m-f*#+NqS{qlXf%wpR2RK2-7xs%LaWmkJ`PNQ<25UPhYHXiMxRpK ztXG&okSy&k1eos!O6U4D;VfdK;D(k!&6_^fI7m+oyOl!iox|({M4^QDL=#eYw-TQ; zF^=-tn}p%I@uSpgJ*50?=Pq|szRmdBW?_nEqvoygvcn)pOY&HG9ZGLTut%VW3OQcG zZU#pMJP|fTGBg5r%U0G{A0s*tk02SdVj1=>u#@B1h`=-KkHg^bpD^*ejYJx>n!%@7 zqHfHko-J=_UoWnBmPmk4ee#q#D{2-~#Bil3nDkD*H!{pHtSRcPR)=`493wg*b-TYe zRCEk^8QY2mDepVCXbka2c|^+(n+^(lboQRrWcN$@JqlV=qcnQlLm1^}Qdh0G4k5YR z06MMD)nFk|4{OX9o3G|W4M206dm=HbVYk~xDVs4kdRGM`)td~_bD>aQKr z>`b0dPZi*^j4wY8HEj3GLX}kE@V<7t3O-7gvkzPdW)?F=6?T$=b`a5GpXr{>SWGZ) 
zDomT5RUiE?Ah-{G6o&Mm^$V|{!XU;uf!PcY#l7x@4GB}TETT>t6AX4kq8Lg1QCVPG zgHcZ;UWcweW+4fCg_}!XcScT)%=&ci>dzql3oLtu<4nf zuTb%>{eASKcPc~zX>A=&*XoMa`rU*7(P*E42Z}%s@$28dSI7TClKw-7{|!&ZP$A@s z$68wregEHpB|x6~5a6Z}`gIB2XL3JE&bN* zLo~+T0{%H3Y=6G>4(V-XE_i_if^iBSG@wuN%ETN9CB2Mu^N|c9E%IUwr%^4Y`dDDJf%SUmeE>|Rzv{u6%rLuj+Vx|%scn#Cb7 zP3Vo#>8zhCAH;muHx@qR~ zCjd{hB>gT&sPyxr43PIc6}7{3kUDqs*W`9Zrl9j{XnW!+vuS5dpo#Q3R}h~GyERKH z@qp?GABqVttPVuqo*s0@CQ&O3khLJ3?j$ zAC(m@t$`D5i8g@+jgvWZC#%ET`|DjDqs8=O+I_u8m!5E)-#75)C=qJu^s(Kc@?g9` z7wfr-`S|7B5fM4EB{82X8UH)b_&%pHnLps1pYKMuh;+*BXUOyl}V{Sa>@@u@i+4FJWL-GJhx*cn)b%6q2M}S7~+bw>$ZS=f2 z+`llKvu8U2W8ar zO6L)D-c0zDve3PX+>y{N`sy-);8vhBopM?v4+l9p7Z@G{W%zh(^J*U*h3>oi*gkgk zQY}l5sqptD#Ips-y;Z|bH4>@3HcJhPVas67h!&tqAAak5u+q)laf@+GguD*uj=j9; z1=HqVdxIBHknLWP;gu#hjMX^5*_sATpzA~@fLK>e*QwY%jei4jOBdl3Ik7cFnawy~ zv$FPdCHi9M)>u9zXBC%w0WpH@#}kh50u%PJwGmjIxOSYh%1aoY7gB5Fc>S?%vLdm_ z8RyFTsd#!aXI{~wu#R_(t120e4ai4`YjvrJun;frzRk3!!A^;NR zJ&5NPiTJ7I8Tf%&fDQ$HMwoLYrjh!T(d_u({g2i@6cCeqJB zQcWoWsyIF`Nq^z^@%?H)lRw-K%U^aL6c0*fqDf>=mlm~86}taXH{C00eHhD8T-fXk z+OC!UGrXpREq_hxm*=xsmHcWw_*;myT@D!e2UY%6^UBgu(-$F+efV0WcJDrf`^C?q z-@hI=o$xKJ_R!8<4wlJXtxg+;_K(e6GLTCoPk~29WWTQk6~g*EM1lOud}H81lbp@#MJZFSHouwi4=CoqhD2 zKZrPDo|$$2ea6NHcUaP4bNMHwzvO%Ikn&C0V6S#9<8$5Z+^JicrWDw3{gky;PV*~8 zEvcuT^z3ii{G&|troL6igNv+3zE8cJ6-<~c4#aIFQ|0x$e~O-8`(I|WxJ#)$RsIID z*NMZYP1i>w2^WSPFSqD6b8X~(+u6H10vK>ZjL*M6>6*&^!!+G2+#AT5gfV--&HjEt zk|5HaQR1ltA5AQv0ccBDUOU4kcu>oO$xJp~&46KO_AvHVwOEPRCJ37MsJwO?7{>oV zWN8^6^(bz?1E3tEUVAO>F$UihukvGst?ftxy>weY)l1 z8SG7!(YgMbdfF4#!n!+ghSo2}(sCYHXS*=DjL(A)B^&!HEn-fT%J;44;U95^Y@}eh z&Y!{2yrMUtMBz&W79EC?o>5!BGAC2y$5D~PNM5-q?XGgIYVrM`?eOX%V7OXj#i*JW zyc2K@heCwc+p8t@v|jE(y;=uX7opb~gJOSX|51DE)H)U&*BWAgW)2ms79Qh7GcYaNC}p8>*74!|wMLuD{4=>0q(wFF!)**pvW3dT!xK zhjDN;AKx6ly=BXLCybvD;NG7sp}&l>Dg%U}fz_QXX3^oKY6cCo zQNi%Xyz=QFOjZ?GMl`ulw?o>XQ3*WtQo`gkfcsNPo?M4MS8vJCBex8E+v^ekgK_KF zkQRj_{UUfa?XVjoN+TG6(h&eI2}CC3hUT|Iu9JX!j6+C-K=@2ZxWA6I-+{Rg$DGP$ zBAt_PBuZlk7=wcv!8EACuJ>-H=lwq}a4u%;mVzcLea-I{vX0UH2VN(eMwWsIP*G8T ziEuK~g71FhoJYR|;VaBh;b&TZhRh@j2HQ6s6s-fegX%qwF~O;-*#PeGaR0xmERU*k zUxXW8uN=lJM0*YUc9}Y1D~3`KEUFvbtTWvi68Y}EcZUs&H>>Pk9u1dUQDGz;4lK`T z+9$d8{Z7o>*0vhAf(-PJaa#O(tvX|vbVn*Q|ELGHgEQ^#C`np4QdcZ^Vzx*;Kkj?~ zlL|oi7+CSsV85&=I>=1mre3aiqbXY%lJ=$9oZ<4V#V(>LcW-jsMa|N z!jYlQ0P|@k+1+fn%Mw_>307&`@)J68fXi@rEBD=>sUny)Vuns;JIwUm5k%^*zHxu= z1n6w=>_tlj#j3tJ7^sR#-P;$(kgwqy#gr9*-A?B18G5oJZ;JoMHkU}T zZruu$x9-QoeM5A!T%O3j9XdI>y)9(hn?-)b=V42IvQ9BvteDeIO;Z-ah-z>ARHb?) 
zSg3>vl;-OC0&2BoMgyPoG$XJV^?{AL_eb?+&$!rN7TP~?*Q6bmdmnNO0GQ2HM7!gf@ z*Y=(Y|NFPJX!$V~F$XeSlqTSGVl!V6Z4FgZDHD{Q{__QPFlt_CXZB2w6_6rr+MC=~ zmbYHE1h!?lqTR|AL?O)zEe{K>ANv} z9uJCdS9fClVGwZUi@{9dFMV1oEuP=JKP&tYrFnI$#I$3;H8WG(=;Mthm4(R;U>>OU zel7j$%{zLf_?iS?<4|>UMV|pbEgv`+fWTPssRWJP&?n%Yc@MB;pRUw;_tmn~B-5O( z@ML+*rntb+yDYvqqP=mrvBk3>qQfHXT;e}bG-%Q^SAjQ zhCb;y)=}8dWqw2$iY>pQA4v6f3k)M?iH#nU`mt@b|58L*xLk1rMI-KpjCKD?M65q( zkSE+dJ<8&V0w!cQk6Y4da!<2T#5A9NVr>s54IKtFP0Xih92E8TkLg3~URbT*nBXDYUd+05hFsew;E#uRaI4u5RznZ1CioEvj z4c+(X#T$J7{;H^-C$gM4@dqr~)6yM%#-S+-!+blk-3ESngxG?zii<(8fx=#ejT*Z4 z?B&*Eeg+r>Ohn3Qq$(Pd<5WOK;g*|5Lo8?~yKS^#9(^Mz9Hq~G7vtJDj61&zuUV5^ z^@IJeM2+c!5S)6aYfhks_gcN(l4uB|8|hPQ^$)?TXiFfSGT!d@w$AoKU7|v%vXi)T z10>av`kZzTeNaF5TY~WnXvf|KH9Oehdo~o-(7i0r)*-|bsU+%TWNgKGkCV$DoGXMB4%0wtyPFU71h_}cR)KIy&1$~@1_Kx}Wc;`$ok{OIa zIE*P@1>6T>5mSb=sUlA`cbzSU7|#q0K3NNKtIfXEN(r~rCTD1Hy8qgoL=BOvg%7?( z8n{cOQ&$XH{ry!SQUM0DGy-$1Z+n0tt;Q_r_!opqrvx5&ubw!*-Ij00v<&AYRjbaEt>1lrRyo z7sLjeJrp`$C5As8{fye55m09Q+R~AlE(ji~uTCcK2U2#nKmUwUN@W#eC6&~P(SQ`M ziA{#8VEhTq$+#FIDWYc1l?O1x<>IX1T>{DZwQ*n0bv+8UZK=N(eXyf#vD_>&(@ zfYD`B%r^2t#~6exsut{53@<8B?Qvl|b@@YCCXteDGxJHRJ4ah;GPWqNCEkxy*HAQT z;*@b58Dm14n3h<#v{akhRFi3T7bbE`63ziLo1K2Pc=@&b`Eyx~=|qdhTTZO02GN_x zvHn7xN3Q9}S4B6wC0YT_Gd#bu#riMutN#C#%m)fw>)i%FrY)RYw8q(FX5Efo)p*^D zYG17I#BT+^)w<(2>R_*siYPD<>UpUwq!8D)nT#%k|Y|0c8xIE|)pa3t+?^tO*I9ziv^ z@hjg*D&+e7Iqc7Ri2T$THmv>8rcE$DUzTp5sIlivxc;E1t+44a{MmdUTe+s!dWrXk z7Toih`nNGfEZ49{7*)7b_-z3-xKy+4B{98+-?s}*8*^ZXE*^)~3{7*pa(Z$_JuOI9 zb!C2Fw)ZD(qX)J2pi}axV|Z1E^gQb}zzi9OFWE(u?Zn^094QUPBrS%@Pq-re?bmQy z$>iv?g1K;-HY8%XYpNB3*`gvo*`g^eMK=9DqFrj^2_vOvAt^M2*?crMD4C$nq&&U9 z48?<5-n{KIywNiTD)jH+l8xA!nmGh}AJSX1&ej?1iZ13LD8i-JG*?rc1q=<%Ix&S7 zO5`vG&(L{R$LW-&Yg?H_&)w__R6RRlL)fWN;dp9qC?~UIR0<|@{BHC{lht9xy39X+ z9=Zol6&DaG^U|x%rRjXM5vK;WXTS#?i#_;ej4u={jK5A7 zd%x|XePNX~oj}BllqC|5h+SaPo{JEk z6NnwOc0Y^-n8hPlnKO1?qS%mc=KEs8DWFzGOmU8D>a>80MxAzo_p zcZ>Z;4mc6Gv|xXUi@(En%xmpltH>zNw&PkPmZ$9T-$1d=2)hAy*L_;ilgW791Gv5) zHmX()d$YJv7jDf-VqWrF*HbR5-{RTl%oSH+ z&FWNW$J&K{5_VS&OxW{c4FTMX%xiW0PWqL}bFoMs>BmYTuj?Ox9~$B5di*Rwv;4H` zl1!xFQ^x%tr>QsX<*MIR8adPWkM?0_-$PPPYVEYH)ZPH$qI!58|A<@U;>2$4#9jG- zBvL%NBu=xOmljo!2nWAMQ^_;q=4dY6vIEFv91@!;#xQCcU}^mi3*fe=GZvs6$T@E; z3xxVhkeQ(Ko?)SJeExerTNlD!xiyjR}SA;*h0}hoTvRZ{75s7yQ`gQ=+dZSplF<%x~MTFHgnf$p!kZ8jHO9!%=`6 zK|r)WPWY9^?{%|A299gLBjyCdQ&F`4r%C{p?J$LX^LoHD9dqE>Ql-&x-d2OMXJ^`ovDPN(lFoBI&4U z%W!)@iS3#;@C!FPpsrol|9UZ2%5+VcY1+_=S#`2zZuYvO6kU}5VBZd5p z{1cOP{+*xF&i{`8U}!g%C{FAUA3Mc!Igp2c*~Q_%9;2>>rbH9Hiqrc02j#vY<%@eF zadM2N-$;M>mw#Rgx7PSNA{nQAsV?d!skm;A&Cc5#R`Cx%4{%^ zuJ<{0#OU1uIDxN}z=b8kdZ*!Jd3D5sXkSOZn2v1m;#8PvvZ};K_DD1tg7*BpM%dY_ z92}He{8l8${dKs3^uTq~3%vRSL+B`d-?^P}p873cPbxLILO$^3oHJovfV` z20^HpiVoX%)Y~p7z|RNGPCAmbjg{L^5n^_g7_O$2AH9Ulv=IA;1M8NSG$)^)D4z%! 
zS1RU|hl$Vkc>-6Cq~%H?Qf7%FI2l8#Ng}{KzIJ3KLG?Hxt6WcqCm#-Dk5ZyR9nAcl z#7jB*4^styj7@Dktx{wUCfUYg|J7vymmdc=I|*)QM;}-mRy!M9_Ep1GF)6}-0thFDP*hUIqigIH zK&s}A52#?UpI2l_ey5WHB?!G{6?khX8WW2?1kiDTi9S9!`~)2OSNmwIvIT%vhg0SU zwE^=QAcsqd6>N7YKLhy9?|Cw=JZ2wS20lYcJ>DF@+Hu)F;P}5%L5-3Lx&KW6`#FI| zqOM3OU9=zxg2?H;q{loxvlImdHMS=N81q@rI&jpcc$xDwspsK3#TixTy)Wvk9)Yi* z`V2P1d-szvI;oZ_4N+&w5&eP;jO{-RmBE+|-jnL(=QD?FmpiFNH`BM&$_Tfqg@uQ6 z$glgR9*_RE$rTl|%r|mPGhTcLSXW^Jk5p7@a6F_=k|a4D>JT7uT2Zs8171_AhpAnH^U608SxndD5)(pNV zZiYA2cOOsgy;{GKNo&>ReL|F>LzT}i#3C-8$B6n6i9%c->-*zPYRI5P_aeboLFx5A zVJ2o!jb?|WO7quICZ`nhX9Sh@lsM${>pxHPHZtWmu_IS|IVm%-uF3qePW4u+r!(F^ z466ATdmM9Mz&^-r+zULuyHQP=UXqn&T$I*HNg9E0veDh71hqhc^FR8`SCzMHKNQzn zgseMO^Km>N0{dftn$Q9e(F05d8$S)liI9>HhuBj}MxiYO3+Me2x}VWN{AZ!3vq1i* zvjM{Wd1xl$5ZT8YM!4633 z4ams|v?lX&fwbi!rCbAm=U?PLQBDIQn=H=-7FE zw4wQ7_vb%IcYsmp&(1AYJ%#o+f4S~rl8~>Dw~xTd8-Js-8%RUo_W%{cq&Y^d%_<2o z(J`~feC+mWdjhmes({Q_y6j47ZPZ2&^Ct$-YdxiWO_w!pkxrjJ}YK)wq9`5uX0Dv1XYl(a;ICMaCOM=kLofwczTj|?b+F4EuzSM28KSn^U zJriwnqN2YAb}yh*Qo3vhbR*dgcd*b&G-q%?E{=ofB8S7O-mhPPY3py($geu%fg-Q_ z3m^;_J6t0%C21m_Qfj;Y2g#*Zi;-H9GX@E;&P$QE=$FRG0AJ8)>W3zsH36HuQFxDTa zM_?TD$`X9RGE2g!spMbMpICB8rZJ37;=d-8{pOrZRh%d)I%AQ5am$t3&!j6S>{xf& zr17KNg_iA`h7G=Iar;Ys_~v?lo7qVor1P$P6!LJc7Q%N!>EGLHMZP62Yz1HOW%8-c zo%KEcewf`Jrw+DWq23~~Wim{b?B;qg)*T~ z+&$UHquD}S_x_NWvrbw2M@4!ZxLNPJT|%41gZw??XE=P#>M|Jc=Q#S>ZsUX|t$!5v zv6Sm4sKfN5VMIL6fAO8zJ!6W4(9_V!&=t=ug6{a;FF-<{cli;64Z1uK`q9M+fDGvw z9MI$3`4W)iJ$(XF0gZtaKnC$Of=Zr_&tfuhGCCp{rRd%0GaJb+GOy0EAUEM@DW`QD z>$kpL8^B+00s?{{M1^U)udSUSB%;VSrQ#%5_5`4aMTwxe0)V`luF|dq(je0$KG4qk zUZgVOG9bcGt<+>1sUA}IygixMCLg4*q>M{AWw$PX5FxHi0DQB6gsHQ@ooo`<&`SmB z&?vwNNremSyNrVdRvO0Dj>qX263+ocx4^<6CYe}=(;?zNQRvZ1f>F`jMgnyT0xlrG zqquvn`@>T)GRnjMAVp-D5i5u6qZPuE1HX%cLge1O2S`Hzz#>@%lXZIou+slTL>|2Y zGIeGUu~yaIgMOiHoOD0hS8t2*NaiSOCIh+0j@}Q#Qi!of1!N z*Q;ZjaMP2F*(gvn;KF|uFdJF{szOasG!lw#U`OwthBf_1c!Z07rk24oZ(oK9Ifu$b zti|EA{Mk$&S8Dc=jX6O4*z|KB&oZgq-up3^lDK<4KJm`uS2hH zx_FxC(zMwPYazHbb_AKozMZk;SVD|i20>MT858F>-8wm*Tygr*tgV2{^z3=^xMFm6 zPX^n->X2-3q#RcS@&wOobuI4cF=~Y3&#*7c4k~c;vaubfe4iZ2i6P2?M~zQff9*g7&H&nNhwbl z;EN?NlmYGPD4`>=N*?qOjD$N1tgeb&=3CTY5Dp387r1t_S!^hFIbGGCni-(fB&qFw zsL+hc1Np(T3=}|%hWj^*1JVb6C&z&cGDZTtM719vQx*x<0BVS9U{ZaT0EbutKhLIt z^?{L7#K|?*L;0>j@I)2jslCQt^6MJx*onB(;s~C;{c*E zmPeLV!~n=m`&j)~7ER>|x|71<>SX0(_rn207xHc(s^Ij!)V&`RyiSk@Tz*vk3adCD z%q!?c+vg22Pau~>HYd}~Q-!JgVthT_>SQ{>{ZcI z5S(8Cz6+Z}q^ATy915`B-TBHJV6g_2=f{C4)$OoE`QW%jyJXNURP$uX?7l0@*NZK| ze5<#0`usk*-$BczCcwdP6h2_hac4jwlEW&!%Xk#N)@wP=t*ckD;R4bip}=dJ|8ff6 zy|~&NZ7=x?>e3;~NDu0_s>J_o%r38j>i8kZG1%>b-oeLAa~b@BOf5AJMN}4 zK(`Ho@{!G8w{fyCFHgsaNfe0AOw8T!)f0roKwhg;oq-lr1i!)P#n4d1vcPkgcYlZG z&X{H?E?u8%iZRqt$u_;Ox-U6>hy38f=M+C-I7&xMkrB?kkU%&U%nvo?{h9gBGZ-g|AD1g0p z!4i=DoH(#0{AY)i?=sz1oRsa+L=o<(vdaG~69#ASMLn*9f7gxXy%+(Kga(c@HnWHz zI=T4Q=Tl#jQ|;hZ1RLTR;dB3H_|^n2e*Eq-<}(R=6G(sO02VmD)6R=5EuDt~IK`K>Q}TX`kcZ6J0_k24oMXym?5h#-mOn!Q$sv86jFddu0 zjztQA?`lwQ5FOv{6=SrKITx#_(O?AxdFY~Ziklu1c^vEKs1ZK%?U409XbZ%kfbDn% z+WG|+7|roA*W)D(q_2!Cy>rA!FexeHncP!uUE8A<=#9&M3^>T!?tsGABLp7_CW#Qu zjrB3s8S1G%%uVz-odxh|bfpde^|OGPA6i&3lv#)oNX6ie+)!^~2J}30NU90Boq9p> zE+x2b4Z;(q4hFXBeG5$BqHhWc+5#hFpEU#$=m3)auVndVvP(fM=sd*Xu8@QRYip&~Pz@&Wgk0qq54hw>iKg*L7qsw_7nZrS0S z#6e*Amh*o5nz1p2COC7c$*Qp0L|HbA&sVVT?Fs6wLUNZ#3;D0Bz?D|Klcyw zC^}<^f!hYGe$Jd(RT^10AEW+nb{J%30`cr=03mtf7~>zV?2B_zuIeS+Cxk_cfWP4O zys>?b`S6)#{%NuSw8?S)WfO zi-*T}?6dcK_uXGTM$?1tUcFYW zs#!JXw1MclX2ZYORr{_(bn8yKS$<`wDAF=VLm$t1obcE$oA{-4Mx+05x!3LNq%)7ta#gD;l6ehlnS9#6}1FM#DPe7yAT6jao)9p_{J%lRx-3mGnr0 zbU{Pt-9o6yfbs8>XGIz#&lvrRK<=FhdLl}(oQ`Qc4h2BUh+$L;v&q-l2JzqR6Lv`P 
z4@;y~s>vo|AW8RRmDhkhB3eMaLGRrep^9P_^u;Eb64h*ISC0vdrN9@ z^9R$TPB&32(Fc_LzhC4#&>b2BH`JW>1#GQ7`METnC-V=vWCbG>Ad=rxP?s&B3mz;L z6iw4-@Ghu8rW^e4np=*A{i6O$=V45XEQB>!Dj?c|UxN!BjTWYxqQE+KgqYLoZ=qsf z#XCEINUJ2|?Qxo!OuAaBjkfVGVH)g!C4_)k6L`1|fx4v9n1VU!lfoB)OB*4VvyQ9sS7fsCUNkg1luYOQPC zYiq0dPpk25>#eW-wO^hdziJ`>Ycn^Y!6K@vBKIU$Dn-KhH7nTw?rMqXJc~NiveHxy zSL9;)oiURA{RV#r4d5#w2?IBWE$UYe{Qs>9!0Fe3gkIGVxrTMNidRo&&u51PCG`e3Lv*r-D#*crTTvU?mfG=p`r z9Pi(YGio=Hf`NfO+{*QQT0DCP1p}j}bg>=f(yr_@FB{;MLZQM?EBXyZWY?KV{E2 z#8|=$UNy-A4C&Qd%>j6hvk(Bc)>zF|N;}@&?`40>%JLoJ0)%aQRscQeA+BTuK;;Mv zD2dJi5RE4fAi|*nu}}nXH74}soJ1ocLekUIlfZtfkv$&o?##u)DLO;)g+t)EfU70x zYf}?fX4Wu5{@@@wS(qKbCJcO|3XoTps{!;>#)}TsI6!;Na5#Z>E{!mq(@q>C4!0y2?_Q|=R4xdaT!$q?FX8M@@U^#r9DCQIDqrECm!a^x+j|T1Ese zp@8@N!eibe;MILpTgy-`8Ul?)StJ`R(G}gj2B`BIe2`LCoqW>|PJAujc@M2hSU&Dg z5fD#7N-CrkjC#7gZ7gAKXqbnM8}aL!o6BJn64))<;RAbAfYRPW)*n+F>LOmp?QjZ4 zqW2ls4Rnqv9Zt(1;AW5R&*z1m!0{x*a5Npt#xwTfVA zU%#&Fi$w!Aho3)a*28+^SaA0`GHNv3MD+j=LVScSg{_^ z)`P)DX9hS_2%?*t8^03?CT1`&?uaE8g$| z<|yZyD8M>B0}ibfSea-9I&w&_RaIM2h&;R)V2s%ffLdVy;$i?0M$?F(>@OeM zc)_!cy(tj30>E>LZW|APV-*EiaQ`m&i~&d!NL+rjpmF9#u?_R+-GVW-W&y5BjV1=t0(Rou8_yf>J(EQZX zRJs2KwEvNfLZWM;>HJ>OWIk0CV|&W&v}-O1UuT^zLE?i6AR05iZ(T3d!*bm={9YHp zr*&~{Mo=>ppx{HadEJTwP$QuNV21rB6ls;^eG`EI66<)IsORA#|7Cy1O(EAF*DoAC z{N8ZD%>Oha7wBB%0b;i;93t$TF zy!7Vy{?J_59Wgxl`m`sgxiQ4b1aNLHf#6L8=mPBDar};7YpkwjuHdQwG{}=qqhu*x z{~hFjkQZ=_Di5KSj-RkNO*&mE{w@lZ zTpG*dwUNSYqy-x(8o9LrjE^V?dKrh9sVx-NZU126f7*!L?5OIzR#i5onmu5UlU6s#Z^O_JK!J(1IiDM z{@`0_n~>LCgh|IiMY@>K7aaR0LY|Jk+Moq@%t4i5{u4dNS}Ob}?M^_92fksGCbXgL zb>euIr@*z7Ir!a&AE~y_87prrJ1qtbhlI>)MQrTF?7QHwuqAZ?^lTl&ACUU#X@459 zJ}Aoz_V7&f2`Z@;YERho(*WgPw$0IOMeO`YPyyyU4wKlEGfw2q3P?^TD%tlM6~qph~|RnqoC7k*C*7^e+}HGtOCPOH5s?UMV1q(g11%^gHG{12!OjCVvU>r zTqkD@xaE%A8}TYJF&?yRV?eI8zR6!w6lLcVHh(WKCVU4KC9uVMIlzLeLf&aR5<1Xs za^9+vx9drzmOcKIZc7LBt6O$0Sx{md?qn!^TlHGqYj(|i=ITH~8Z6v_xY?&KJK<}j zzDF4m9(td%#tugKmu?mBdE-8PY4O_}ZsGn(Hh|pwEcM81#XS@!_`TNk7vpld_gs}; z)xaiUm{x=j?@}6Wx;Nff`;KCYmPeCgSp=rqs>ag@TUSkQLwk>RBP}m19C%=7MUemX zQuitHBhMiiO}ZjervvN2(s2)$>aJUqMTy)x@Ax^V3BRvMN)`{C+Y891{&wa>@_t)` z9iN5BfwuHaNt#jj*Ds#ou-yR(nIrx0izu6(l3>wq+T7y_<7pqI#{n=a!*mRTT6S8R zm6hS|UhPl1WvCdXivAawcofS~suz7Dgc{cn^ zCn;4 z&$hO%cx@vI5LUmGo6pKd!Jsb%nwU0ElL~9~?E724q^3G))=YQ?_bym6p0bEe`5k59 zXQ*(B5-uRuHu>9Y0TLbJ}qlbO5&tPp6;S^opR+e@WmQK zjW1(sxQ*1Fo-8Mrp^X|PnaH6z1s%E0yKdxHVX%-y!^kfizVh4X@!R1C+uwoF&)~`G zHK(T8B33m01|(Kg^h{fTw2kuCv~;DqEcz6i;E~s7&gc2pRDn1?84vS5(+^BTJqgLS zMLXjK`PqrnLP~y|IE>on*R!@4W||+(U4mKk9=?5fY2~!FYNGh&s1CB4)C5^PA(F%= z?PPH~6*bRzIA!D{pe&|+la{FHy&3_1-56lKkg)n}wDVb7B(#oD)lt^3`GG|sxhU;r zaX4x=@rt^==+&6(k?}~~-?wu3Bs!h8tU)Fvski9^eS{CgfHP$*Rm#r>-K(Fg>5a_p zsmv#HeJ_I_OpJNRfMj~m2UdF?ku-UlU(5wD>*VAx|A(xOyfnl2sYwU52zzS*)3v$v z{d+D<*fWp=yZPp*EFf0cKG`}dk;?fbUpKWImJ0evlOs=>m&(%vGYvj)7ml69IV8xwj639@Gj&hr=FS?6PB8s}0xcZL!_K-VBvAAXRXqrB)CQ6m4uTIzW(t8NXOO#Upi6V|f zvpok1Xip2cWCL79Y*bI?!5x%iE*ZRL@l`tG znq^>cX*NtPDPZ?(xx5yrp<+j|ny-Q(co~gYc3AJ5TO4WMB}3ZFOND$VOS>h=s~&dU zYVoB*Dkm_mJtjo0Iac;p;O+urPu^wrD+0x;iEv_ z^Q2+}H8@M{g;PiwJ+X3;{HFmVj^^M_sq$hn0Te=JogubPR;L4aEYxLCU?hcY>mUyr zM7~hX#m`2t%WYQl_N`@52%qGx$NJ2&R@EZgCCP0?&SdbgFg$NExqy>3uRwC|H$gQ~ zu;5d&WH)=-(FRnI?K6uVqs8WTe7e~?LOkuea`}a>fG5Fbu)~IF=}A~{fIryLpj>+? 
z(f7WiSzETxBvhHMu7ji33ts)H(xC&(SMILzOvh00b3pR5icul^$KfyiYO3)ME2SEB zMX_>J5p4yr1Vo?%4X77&MMbr*YJRYBE^|hI2M65y6S#z6Kqo0%8j!bzA-dAkva!@A4 z+#X++Kr$WSyDI{9&OPOlqIc+?oBp=MNd5HP52{voI;J&E%KEoQ<={bj@{53kgZklLO$m!6z>EE2kZiz{jz@<_<=k=d@%%ji0zhKRe; zc=l464>Z?w9fw=Y~oj3Zt zPN~F8fkt003_H4NV2=h_etZS$lWkc@?Ko{aqYuK5aRF=86zlh0^4>ESjF@M)eV5li zNEK;`;e7<~vp=l(f3m^X#sPm?Pe-0*-Ws2pc3Z z+Zu`|aYgsI6ySXTxXxGa{7F3E4OR0dBtYO7oV{F_u?wZC*Qf7Cw3~M`A#jr7Au!qO z4Y;5*Ps}>o$Rsfz%tJnrWeh_WQ7FI~n)L->Ct_n>Av)MWz)u|S51IjmIS&(Z);#xj zbu-tOZBUm#YnR*In?GtxWf8HZMR+BJ*)_M-7+3(kWx?!;((kSqVeUb;rrJwm{*){T z7-V)yofhZckD}EO#iTNM>qDMsyp2qpwv-NnK<{`6oU*ddY){TN2JU-eVTr}e*ISAX9S z6HE$XyT-&1C#(28;$$u(!U>~bjdHO=n*3+XEqa!Ic|0IU!4T9%6X8;`D<+8ifV=li zf_S6iJ@O0JyB+^;k%H}1`&>yP^*dekFN1I+N`m=_U7uA(9D5-l;m za=oj+C|IhG_&n}qCO8}Q_(fiSgH0swo#>|k_j4WgI{V#pv(bUGq%u-KTfP&?u=L%I zTkLa)N1O)J5!?7F;v@PE*78wE?Q3zBk%c5NkG0q)VpWD z2A|W_-IAmkqMS>;^|SlzxIo8WwtF1U2+Z31lyl?*I@>eg;SEI)pt6358l-_Ugt#WH z6p-opPN~1A+vOJ69N~MUxiEWq!mg<#Je6Pn=(Wg|9C6zl{tR`wo5eS4M|bX{O`E%r ztfXGNZnK!CwmBdWW8pr2W75nOYUHz^OBgTT+|5xS9AbnP>%{fDcP|(ohfe6rYd5Px zgcVW#?^DZ&KfMBBJDkI$nxeHU+%Ja2-ecit136%^Cum-DaUK`Z^jg0u<%+ zOBvVecS++^M}76>DA}|EBGy_0YrP@&)q3m%dU`f(lliX(Rr*Xp*$f8u&&w+2`nm00+#)$EMIgw1eE2E%u53;yFKQ>@5h^cq2d!zwGx##)JhZ(vRRdXA?eTZX-OmyhwYy+5(+~%Yx7wq0L-cE(6 z1BGZ~9t!q5`cvHeAtFIR8$(^+6Qc#qWJ?B_Z=;PC6K(f+CdN_C22;lF@tEPEG2+zd z)T32Gf!9&MK)+vB&$T}r2q-Vb=}GPPIQz8o4#xO>Rs(Wc>69RR-tzWW<4r{e;m!5$ zbpV7XQ)6hjSxXeHQFlHCm+_*Y(TgN6L+IopP#7>&GMG1E-^?TbySppiiI44_>&+fw z>d>4f-=#OcAoA0+FVU{5Ug{WPXhhV#DgYYrw;&QGDA2IfscWEX+1?Q=SeS)--i3_M@TdrwiJ6wn`y)T9Y%*cCh9>)Cc& zwcmab@vVQ@6a~LzK5b3?K_6 zCg;~RM$IqPdW92Xr>GG=38i&ILDJ`g@vZmySYRCiy=m9$p*_kumtJDPK@dC=r0X4^QJN6oO6Kc6e&Ipe>9Sw2xbJwwWBhni*WgdGz zvp-CTOhraM8gdt#PD9Se<9KWJVAROwCOs_REF9xSW7;o3wN7O` zm@u_)CoGKnwJjdN#Mjo@M@a|)PT|R`BH?M6V%Td9;C=oowI4Ncgldixc52=Evi7zV2-1HtIaa0@-xP^pL=`VKlzg2#3 zKbxN9FzcxBd!p7vTB5B)$RG<--2$wW5ApZ+=t$(G71*NQphz@EHp=sQGCJ@QWoiY7 z9#(mGvlnF{R3kdGyNc~vRn5}hi@}ekxKs$|hFlfvH0A_H45QNrBp}d@p$4bTVu1r` zDCOAk6i+O?C7rywo2JNQZ2$(QGI))3ePeajuyJh;NpdEtAfrR>*(jgnwgf#lU3Teo z4c$AMPA3cK5Wk0bL6C-`MrNv6_=~lgfR=I^?iZQWZ8<6=29q!N82$7thM5buU&9)7 z@WwbO{RDb|Yi-Zzz*=UY$=mfV#w zN>5XKGMNPJoi8CiaaKL;Uk;-P__gjOlDPPOHNWX^;*2Dpj_v^jEbTD8Ht7EbU17nw zgWnB65K^m~eb3ZTYRap2FGC#=!`#mReF=!(qL@tttnYL&`)TSG{R=h&13!eYwZwcK zxl`@>>M^05X+g`SNLtVbzU9y=#tEgA6x2%F$!#>^T|&9;$ha%gl`QqW$g&h7#O;Fo z8;-|hoXKWNEI2htx+}q(TWg$^8Tpbz0j%fZclMSbZO!o?W(d6cb>{R8h(22d5Qczm=OY}{oVp`S{vji+`n z;m2FMt2?G_B)K%?QIbmvaUpXZnTmu0Nkz|(A?_&>)9IF{Gg#zm))UuoE;EOme?@?A7Fk=;Cj1SF+aHnH^8y{WHr49n zlt89rCiUBogF`L$7SOjngS2@j&Eq&ODcZkx!nPEoDA+gpbR9Z| z|2$=%PHs6EZd|CtDnqu$Trt3(1uk0?hXBaIz>omgD&vlMm6W|^`;)>Jf847 z?RarYr$5+Af-XA4(oLJ1x3;K`6qqM`m47GJyD3KV98S1yacs1HXO+y*j`UxI?e7u9 z)qQ~iszODZzgkd^FyMISaQ3&?8_536`Q3`r|7lDGEMW&|7yG-{Z4kAvfI=ZR{y>%0 zKaPmfU&tj1oB^81v{on77HzI@@8%bdPtSd#fi zYr@?tZ`?lvazSpZ-dyZnjImGG)PdwuJxM8%baH}D3IYd*nEos-Wu!8hw(A%PC?;A0 z;hv{{kUz`h-(Mji3dO7Gi3e4+H-~q)|L>Rhi4NW@3)aQK3F{Fv* za&Co~5uON>FJG$9{7`j=(AeJgtti&#!`T82pL{ds@{EH1oG{+uc5uSwp1u7!ctKWN zU#SalS%q*=mMP)P#@s;{;x}{_1whC@HgM|H&Pk9HFNaYl+4bk@?HT>1w#zlVmH1Yr zY>}~00|6!$Nv@fRh$>G>_-SW206q$%Y_5}&OE3^{u;ZJdjXFD1#YW}RbvRNj9E26t zKi`z(!HQ8L;3T(8og7qKtTg`2<)D~&NysnT@+XYC1# z9spVR<7dvJ&Lv{$Ui$0}M>R3al?YbpI^12BcK|bG1))r3H2L~)+3c*0;1Vpq&e16d zgTra)su3d^IlG0td(3hz9O6Tofo(Q3G`}n6z%pg19`6~JJTS2+a*hksNepxqjb{hb zabir1dDIhuRY?_UcV@dPz5rMy>i12hJ2h9l&A$9k!XjANbT=3De6dxUA<8b=CiXkS z@FIY_Tfl|he@?}^sVJ!$9cd%qD{5nA_@^zjDS&uFy|H7k7Uk6WgW;}HCaBDunk_}4 z=xp_q+Bexec<{L}e+CLXJd4k15!|G2Vfu*y1-8qA(K028g$i5z%c4xN97VgAy`AYVZ)YqJ<( 
zs^=Y&#?%%W(*vX5(^;i2Eg7J<5omtjXJ1hG-r&I~Q=xTZg{#>E;en6`$<4*|7d;P` z6Sk?qE!UF@zmM=mzPc!Xf=4S`k-!xy{7gkO2Eot#z3d}2&jBJ_+}*ewc&zn}y(sum zo}3%6Q-dLpPqptnDrE4knlvQD+4O8jjoqQ~2lr=%-^hJGaM1OF8JRVln(nAgCxhRyOd}NErY{IRTwh4u(HF? zTEXTCe6SzK`g1J~^3+!N4ocxlCao@2isK;zI~&MR>zsIEEANDmewgr|H5k$y0G+7y z8zKF!RgpW_9qa@}Y5JZ7RM2PvNtp&1(}-p)q)8idvlr+{lP0x8QB2u<;qz$!Bl$&@ zO4HtToP5!xNXf%=9j;~t?jj;_kVn-`cip3$IRezRfh*i`t(%?_e7Jgo#i|s;+8~0K z6dj+Bx58=2YtF3J>u(F!IJ~eriMY)eNCMY5f+&wF{*XuL=MoLZ$3r%ETIq1Lys7%W zE|kQmJfQloQ`0CCndcOj8Br*m$rO37A)K%<%b~f!-YLvCw0zXCHduxAEeUrkpCNJZL>3S4OKY@_vG37Yi(!K z6tUjRU!fnUOQ&JCTa38qy?MbZ`o~2Y0y=U&)dXtZcVgqWdJ+g&sqQTjNOR=1x*k*R zs8HxD<%`UTESH&5!Tr@dY$`f&F);!I#_@Ot-;Kilz|6<-NWxOj}du49bgexd6GMcShL9I&SFMqu1kn(hU zHYqyA&_d1`4^W!_b*1*i@0Fyfn&X{#MxYCx1+UN4vmVMwDAmH*|%6f$U#uV z)#}cIE+OSwdePp*BIh4t{|~M!2NorpyWWs?`vB3vO@eV z&9$&OxlEC+LpO5A4e#aGKQs)}r`fX7L0`(aqJ+i~k;hV_(#AWY7wukPC@d4jG8Vna zN(@!0LK;YCx0mC|l>e-t*12&mB0uAOHZ!z0+#%sCB+!BkT}( zrFmM0_u)3Rt}v>)J45Of&z_W~Ta3#n zf94t04lkAZ?C8EiVt(IYH@*8+WjZv8YB~!Qs^;oh4GmWp))(qnn)fLXfiy>-tPBkD z$BifU-pmVjhtJ)2MPiS<%k2y* z>22;|nLwp0JREi;i~mu)rh~CvD@;Wxv*;IVSG2_S>aRq&jqA}`Z8ZZ*hf3k~tH#ga z4|P?ZyL{_7ifDx-8DGl|nG&&ptT7z}ES88%_H0ah{Pafx-0+uGJ=0R1PZYj?7Yv;; zAwsR1wKzE+eoIc{m21?A-)9qZidLO1mp-1y$wqb!Zynk5ZETAjBok}JXnead zYW|Y5I0G?-!qD3b(P3E?TrX-JvrxKna5y+2C|Ay@aQ;}Ma%VdynjtH2bOw@lNCAWy z(Wkywzf5EQF`fT`K~V2?9U;TB8q4N&#s5ka8`a%4-}4|K@DWT)_S8Q3+0z&g`XnnH zS6Vk#h(&rzCDn8f~-=Htx}^B$&Eu9s^8m*05C@KvB2q$7K>y*%Pp1PKI* zQnt=8!u~m>|E!eIUvqnTcdz}1*Bdl^%pPcq(TxbveuUA&-(Ca_2F|Bwp0Bx{JoW=S z)3GX#e&w(G4cv*(^%_5%;F+^GP({9?#y;sSjl)vstFjS`$8*Q6KU1CNOn?2JcI(kV zYuaAbGhrvA=iOXHBA|JLHk+Xy)X3(we>R_RY0fg-JdrsR z;Wfd>gR_-lx7?%XlSLh0V{Yf{1+1kw8==D7od!MES035Z;KTew!_x?ck_RUvqCbfy zr<<#A3@naY%n5|nRY%M_wtN*-k-mDkQ0?$D@ECNEnlToEkG@{S4A>ok$e`6}cuX~q zkGgl|u~YVz;SYw;i|L3k1s^kr2yAgS0Q90$PK>GO;=73ti|3t+7 zVl2|V!9XTMPNt`|?ORisXfB}#Px~@6E^(Hwd*Q>Bt8Y?WRn>clv2{Q7OBZkZIlr_0 z?&1jpPqQyx_N==$uTY=usJjMwU{0RBs~?XAqUZVW|IoYi&nK-NeK2pUKJ2Aya4I#K z6{sVGi6apmUK%>PIO}#pnkj`(W&flh>Rj;(>8mR2QgtXt=LP?m4X5@qZU0xpGdm_c zP58uVl%d=Rabnye)r_co&AvdYGqhLUP4`U}|3*%~D^y5Ob{|^BrtA_A7WHblc#0kv zqT-H}$~8%|FD$tpS2*fvtNr6J&f_={WCcsR-QCBm$_w1IQS*?v*E`5(0-wr77Ua^n za@B#FW(_XA<9hz0SUaWdn;HUFmy9rUkI2JEb}fA5jMcESiQ0K*Pio6B#;o^Tj1{zbvPB(o8TVbu+i>T*wP2;3tkSsUtqv1(mUd|g=XnF(h*d*tQZ zLRwxn6Xj*$`P!w?=d>bb?UtmgZEM#6fK1$|g6+&mLJ(JBYL*!lZt- zb)HUMoQ1eEYU8D{oj;#EFmYrr+s5T8OY%(yE6;b3ZpM!r^?XxtuXdL9YI-=`J2VU1)}T%PaO?m{9}++he>oaA z(P}{Ru^heq!75x6{o5{G+Km=Q8=m}i-bKK6=@?kCK1mrShO)rd6NXhniLs?*j{>v# zWVJ8m+{g&X3{yYi-c<=D5Gf+>BovDEoXZfZBJwzJyiT$*CRpsYFO;af=(Z|vS&S@x zlqo*DI_u?)dJUQs88#dZ=)23i=cGxh!i>#WjuYQX)4vJ0hrOyLQHqzr#_?EFX+n0* zdL{tVgQhu(eRMc`>WbP7|2s@sLAL8Q<3Xs|_05lZpHf#MX` zGfw9pi$r1&J&DDmnjdV?x>8^~PY24MJLnMNxoKLreR^BbZOqoM1Vdb$d|CgPV#TJc z+Uowu8jAJec}l>NY^}Y?=@T>51TS*Qg4WyFD?D?Ikx1Isux-~BeD~URq*FjoNPO3f z_2M-S1Ha_Vr}pTI3LM0D@|!%?gqGlJxlpv150^&CygR|TCuOC!ds`-o>V;=cppogcxg zI7<#--S76G=Dqo#b6*4`W#uH+u&>XDe6Hos>J;2-bWJ4mjwDHu|49Yz2GH!WFJI2V zp9t_~8B}4P+64R(7oAzb;v0{9;@y;m;wQx%bWwb9ij9;bpn{pmuN&G z6C1;5HCD zu)<>%IeC~=yqBysWk5hS29vvsrKr(~=i5pz{VsLhJ8Ydzn z5cKO8!FSdz(~@o|_xl59<#y+?v#&2ZB+4TS9A*-u>%;z#P_X&EWnW0KMf)GUpwP+1MG98_J^B=o1|YZtZQT=opFt?tk)#(Cb>vG>_M zUS+RQUFl)@%K9|@{2YL^JhF2y+1n-6xr3%&@Bufhav@YjBA=zv{nK$AcUAfEG3a@c z)jB*@PT40=1`{6OrF6_7B@h&a6N`rtGbIlf50lgYD4{x8d z)l;j9GS$K(@OF(Far4=0H_=iI`uE2f!3w|LZ^WdvU4KSIgP^Dew3Y@>F`Q>So_zJu zZZcPc&eI3<$LU?g`ic51H$p;~NXKO09@_8a$(q0gEK5u$VX)Prj4Y781)?@QZ5;4E ztWTWV+0)iPH2W zE#k&xaU**1u6E(h>vyRkOyN~T|H7rh!mJ>9HSCWhtLRV+%^afz?8Ab{4(6ZFkj_=h 
z8mUvOF+6=o6r@yeCYQ-GQUQ3p#Rdy6R4Gjkgvjk4%9n*MU5}vEu%Ev;6%$x@AWXWx zays3If}*e6v2MNw9o9!x^KEd=gM7iox8P!!ySZ~nJQ@D&LhOC<2D{_y9m zc$K2hHg^I8YOv$sockT3b>y(;rV7W0Uafzi{TDJwJ^?^`Chm)F#s<~7r7tG)!S2k1 zwAyaWiDg}JsZ#NweGf%()u&IN_JR5H*qWHn(mge7duW@dQ4u;?C8Ibi3UahkA1QMF z)&ijHzJNA-{7B+1!h+V|acBJJ}^h?SL} zZMYz{D4*em2WlAGHx%$kH_tCGsz3?UdJdwQ+yx9sp)Q0$c`v!c ze`0QS7HKzD5OUkQ2DdU{c0}kl8-D2yEcTS4;-iM*p~z0UFs(k=bIh4bix}cD4E90-^ zQ*t;vHmGNGVfqoQEFR`aWY2fb#mBnLQocg7tp*QuadCSlRwKN_a=MWUt7t+MLK}q` z3iU&V+@pd>mmteBU&cznIbDV>=qbovuIAUZX;w!Wg%od+WdQ3v$Xr(ienuEf&TpDV z8N*sh)7iQ7`on7FJs~_zxm@Xupu;ZBFs(L@i$;ONZ@8*;y+UxTiE49^ZP15XEXv;s z=qgM7M7ou9>&Q$qrd?P}jRnn$y6A7sFUWul;m&9rK0a**i2A+nX+LC9E^4)Y?ufQ~ z__Qe$2?;>nxf@(}Z7rsqmo6qw6ey!zS!s|2C8}(tTgF?|=_$%?)AIedz6H9S{_h^G z?M@N2APD(AmEQ`<`+R}U!(D#AW2ezX+sjAwZ|Q)M@Ohxln~|_G{8v4A4$?ns6 z)*Plu+W1Hbq5d*~7%+>L!tZqxW9X;j{S&#Ys8jiou>=Xd>~^&xr0d8q1%&R@(-$dH zN9Q#;{etDC#}^&$-_0G_KVT)$zb7p?&hzg2>eFVKTB9<#1VTK-h^$?DaEajK53?}{ zUny?ohb}noP8IdP!k)bktUX_+L!e9b<_HTJ8Q9AUx0CVtQq*itN}AA_d^}v$s7^#5 z|5C1N4YT*vl4b(pXq+2Pm6~e6$HWlP+4lbCFOE$A6a_dGs3atDz%sml{HL1cPF+7Z zD6Mi<;_UP)Gwjwt%S}MHRpUhRb4cx#Ky5OGB)MYWC|NQjQLJ~Y#N69UPlr#L%4xhH{TH1g z&)+vPsTwIMe7N@S5yhcnRMZ_2*TL|Pzp6=2L2#GI-~d6h!wQZhY@kp5-{d$T9VMWA z0B_lQCm#L(+EcXg_opGX4H8eF``^9sH<<9BCL|~CTaCYzhp%w=vIyDN%jBy|}*jU0VcBvkLsDG?y^I3w( zm-|l+>C5JYn^W2!u_xasZTRtVKC~B^u`Egl1#z`?7K1tCtJJy2S{8cbvcRc`VkQDP z^8eot39w)Rlp&ZW3~jBk83t(>TWrHO-=(arPXD9nDPOQcX8LA`o|@rh%|4-xj1gE4 zGGAXOl1ODL=Y5J`Pw&}on*%4)DrBRZjiltSyI{Da0ErUyA;O-tKBmilZb2);tyzZ* zch=|^;=>e0Ik!jM;#v1u)(JoRUkZaq@#EFg78+7Xx0VNoO4>Ha=>P34k#dsh|EW_y z1!mX}4|yL+_}t}5R8HUt`ekI>na?-q#;)(q2d_lJkabX^=w*+z?cO`@xb-2b^a84c z2S#gE?_(mo9orq23n|=PUG~J*!LX0FzJj9W6S)4|ywEYlQe_QuR25_M1yzG>(6Cq2wBLa$Vc`f> zQA674L%Y|oy5cXh9SUp&whX2pcYL?+Sz$fekwsjixU;nGi7&h{LKQ`Za~|Rvt|w7g zY5A&gBN0CYFkX7~EUh4D$=QTFQlAVK-+8>s!>_(xPI(lX6y%mf&N(_msKi|^o@z^d z@D{ukyWddRIB&**w#F@auDV)FCV-(>Ti;aoI7U%9IhOJ*T&B$W52zw!EADTW^9l%) zX^?XHt*+`JkqGcb`3VaLN`J}#zX(U8s5Tx#M;{0?HT=z&xwAS>o8L0A;15gafKi>wr{|^WEPJ&GmkYuk>*nh>zBovVHIMS1wU3 zf)VcS4OhO!?-z{0>p7mKpr@=<;@xjwDLX$ml<_2cBQL6*1mB~KeoB1WVLZERddF>c zJ6_!1O$hZ_+P_sd06@+W@RFVh_?G#VVNo}+12LG0`dTmu&gY{~)Nk+T3e2?k%crK> zQ7pHe$e5_-20!-bKXw**P!}(m86XR^5Rv(k@f4lDdY^6ZljOSV8<@~+#!kM3R3A3f zNq&)2%yi)yEA9h?eIWy~eD}2;kr3O9t7P>&wln7)OgZ15rUX9pop#sX#GmckF->6# zzgpsy1DJA!^2;6qnI?faN1;kga6%?d?j!#P$70kJ#kdP%8@c#>9 zYC-;&+rq0gnkvL4qc6O&9G|pZksI&aT!eHL0fLN%yy1G1Ln|)CZH^&;x ze*nr796SgL0b3Mco;Da7077({O@J#fn`{Zz6ctPFUmInals}5oim7;}mQh?YV&L!#y8aGNh$Gxp!pS-#k;F zEA4sxc6johzW0S6#vCjQIkl=7`Q1d3Qe%UCnAAdGH|s?(6UdYyc2e292Sbs@tIovT ziukPN3++aA96ytpsa;`qAdxiwBK>J^2rEWR%4PkUKRRJhnL`w5LJiXf3REo z3Rr+NKW3r`KcxzKhY`W{2e6P%5%g=vKBlcE$or~YEKB6`%lqDu@A+y9m*2Aq6vP6n$5yX5|E6lZJ)TVh$TSPJz}0waGLXaceW`z^G=L_IAOH#f+~ z#~-P%oE70TDFY-^#Las7&(rl(O&!|jR^si-?cXKc*|3-Lf98WS1{|KYiM<_O-P{`w zw_5VkXKh=9Zrl0@wpMLph;I99@<4X|2ThgyHE^YSG5JUxpA^%Pih-0R5~qdSlPjMK_` zX{MQPHUSS4)g7FKRDU%X>&@p=-8yh_99VZJ2cY?78>FSX z8>G8KTBN)4&<)?l`_}t;pZEQKc%4hwoU_l`Yt1$19Ak}PIt&B%n4j8L?0>DyM=hpm zLXB8x!DvLPyg2Nmib56}Xq1f>@bpXKSL~Jgd7Q)V6mQe`MJ^z_V66rXYlGU+rST<7g{2MKw17Qe3UFD_T&ivwMU zqBEGldVKKuv7gnUATFEDn-2(9EP^zQpZ%3Y=ZHRe#P^D1Gn>B`b?Pg4=e^a6(OBXm zbTCvVuP+BX-sF=2r-^o^*Bym=WeFC?+X7jzo;}gJ-tB4~d^7EQ&q^nvMx9%{p0B%6 z^uyW4l}k81Sa?f5K4x_|Q>8Itdv$N*t5;R89FXvDGncb~0 za>V)LGTEe^C?=z3bNsQMWVDLx%k|boq;WRZ#y=ZNeYul>Qvu=RzH1x%L+qa*0WTUA zhKWM;y5dM<%(lHw_WSFTAXv7R-FBNb!6ThaelUDKPkiGd@I->~Z@yhzY~{J}5M>w* zOk^%g9@k5S3VEU0qykXBB@y`2ymMM+$#U7kun`b!NpG=gy;u99KY2zoTw7*lcg;!o zz=dXugzb~#0Sb3BGdRG3_--H(VHk>>At=QQaWwXD_wZn~rcv%mvm9XmDBtZK?8CY& 
zg8QeV%O6RaNxS`%`}^Iib5Uu|ibJ`*?f5k6ibbX*zUA|wQD5~9APR822XUxZOBo+_ z6g7@fNk~cdH})9bxVjLFS#p|Dz9SLmI;R5^WJ0TR}>zPEs+!;I`96iMTTPNzDD6x~3 zAgURey&h!_4`YpNorawlXA>2|B5s0un5c7d)WMuvxP_|y`Bt=8!2;xfFfRo97H8a_ zep@;6$yiRUlZHw@phjB|8-=~q5-t1bu!Q)+TSrVJ&eNtVfZz6aq5)zc>8OI}haGhz zRt*#K8Eb;ms7||=2CXGg260v^9Xc!R^`_BGlgH7X zwaA#X;(A51XdOIcq0-DX!wA>IorKT z&64wqXdN&7p-m#Q9wi>JcUl{%JXr5zu!(af+U{))+I1dfLP79P$!4=&+q_Tgnd2hz zMIcHHUACR~K5fYoki6l_b^#%Ei`nj#+2mILL|Pv&G_Wf*jE*wH;~ziY^ntyBa;C8z z-=e19z!`?!Hkg#J(hW`#c*ESqb7Drqzpgvh!?gE9H)d^XNbN&ceh?`VMx$9S33UW} zVJ>)p%Q&a6;Thwpvym2M0~b0bqq07?I)`eAB>xCA%7(QK6CBcA$`;k8*_Qc(w7NZ= zR#WYFLq<4+4eQo#TXcO3Xx1`C~5i7^W;#I?d~cT7H-Ge^s&_ z1{mD(qUNbGDK(SxeqX%8uX3%5_xpTkQIa3a1H#w0)h8w1{QGX^LaS%yugfDUa{nvq z|E=!-zI6Bi6+FIN@c&kW`OEmV!g#6R)qsY3jf5nw1-*T^aa#g+T!dCf|%6M)+zWi&UD5fLE#)0zIKOYjT_ zZh{tDc?=R4UEag}K>K`;Gegw6x&zblv476PEjsIi8F!5d8}oF2MDBD0dk+UQWwOF?|v zP;7ui1NBu#q@+m;hm7ewH)(1&kq?H+>v5}^R^=c$U*M&DgT%bw7?@mK`{71G%i<0&!( zE-5m;T(#|b4i9AA@MaTR6nzO85J4@UFpx?sR=pn$Qd78x+!lys(+RRBxtC8?1Fu+n zr$x};RQ((zCf)E6%3qS8^^kXh)AEbezMmvO2bL#F(>|j%CCCwD$dKQ1e8eVo+Q`5>lpe*=M+CKuV zzP2;ZYGc~z{nejDSRd|D6<@5pU7-?-jM|2ULfqjr<3S&!8I<-|X>M8%vYo{!HNZ^K-v zKZB?x?0w!Y@evN!8%dqRh^>m&?0pUN5cz&3k0?Fmpcx#MR_273MJ{;RM$~_MgiOEN zI{QNMrfO}FXLl|o-K9sFKn{!(50Aw^JW+G%a3nG7vOf|bpwR&L_s9T$BuVtwgh zT7l>LeZK_cTtMIl=HjYnzMBGNbt_8mp@%-0BC;VPyLFa>yaLLXZ1*e5*lQg^ z$a)9%$@L_M`R|j@QJOv4uNdsTxG+kU6`N@s4nslBcJNH8PLm}|C<%^E) zKi&w7o`?=|;pzRdo|4a%hq%F}_TMN`0sBP1dHHeqK}0TFi6(Wq8gcNmPVjm}iosZ@ z3+3E*x`$7TC0W@&PK1_8MLIla5tKeZ!+alk6I4U@E&Iuh$owSx$G451a=H!%e&Lkw zAdpHv%AB0<{FGjwVpK8v6^wj|awo%k1gg)M{jJ>}yZCA|slChza3;zLT9#pipU~pv zs)a<4cJR>jk?Zl2P1{?HhC8^3Uj=#H6L4Y6i2T^m7Czg<=^+dicGX0iaew4!@$m6b z%+Im$3%m-~da&kkPgOR)P?QK3HWFAnA_~61!25=H2S%<@`%%rIzv;;XRiPY%`pOlq zNIXRKY%b;V0z4P*F!Td9A#Qr%Y1i;Kx9O`0(7;X@dF_Rmn8@@DyqoYQjRN?Oz~@a$ zkv!;(j(nX!J-eybg*&yIJ*eXN|F*_CLix)T_*28B{R^78x_Iy9oiq=dhah)ER5SVB zSuX_{So1}8mVDp8FAj~=Si%`1iJba@)S zN94t+rK*N!4V&w;PgsBh0VY`G(fvu4>rK6PJzm0<3V8Kd>a&B&#p01GY#QjWcz(8B7fa@3Ibu?1exZzqdvUyUg)nG_5}IgMrsQfKdkre^v|oxFXs|7#-5?6_~vH)~nzy zK_C~f5`dh!+RK$B$Vt$`cwE7;aD5`7vz`fq>CYwb+6de7%CbIh0@;ux>6kfY%(_CZ zs33?24Z<_!7;ter?YGD@89W)jcYS6V@pxi#NUnmsrH&I-eQd5YT9m`Ep2Y}>p%@1> zpU@{q$i8CC6xJpv01-G&oTca`{j7r+C5rI>F^p(%QW*YWBVX6`e0xBrq;WEKA|M*;6CTg@F!(SisR$!_OP?_i*vsW6+(Q?Y8d-wr%D={Po?f#!z887u;EQL z_;45uQC)xu=NvNM{zYfrkp5#&=WM$nz3cH}6qM`*|MXp*jY>|R8S?i)m2`gYCNt=H z+N22U?(>eRet44jV#>7@BJz~QAT^JstX+_6@O%p)LEjJliTmlVAr_{}vOfcl2?uo8 zA$~5SAl`9y-L+bUFwPBn`%<#=1vIzZVt}T$ZTw!o^ zE=y{6rP)Kk7pgCvLeo1eJ4$=`n51A7EvlD@MN~kmmX+WO*%VHZ{%17?d24(Ooc|_n z7Idp<i?>liiClvkAZituvExc)Dr*%zp&Y%8K<=xZ6$TTtOaM9hgy4AIf?zi72V`JkLNVK-L_9#n|x{xwfcWF`);O zpQR`v`zTxTaHdj@uu{nwROcIM`c&lHGrpvFRFxGuiBP6BobHv+S5>DjYg02kmGXf~ zL>9_(28N9c?Y#_@ga zySYNq-zL#tEyaIYM&~2|G@DW(oh$tBocv!e9hN~aFl+O>`2RqR|MLFV^3l>ZZBCojSLrT&o{I%*y;?%|7khpK17aI_6#xv?J)R|0bJN%q zqPU||q18O2G%n%^6#88N_!JJ{9EPwdydIqSQpvut6tc#riU0K}l|OnHXs0TQnscGc z_M!Y&Tl%Cx`)HPUQ_g!Kd@Nnb(pZ-gWd0WZEy6o0$YAo=@7EG1fIJ8Sij1I6Xd-s! 
z)7hA($NL=HjXoqkPELp2SJbaRSj4SRDciLDB9?TLLdUJ z-CRrc+pquK$~)23TMim|%T-}532L%s^;)cDB1F-f)j>9w!93?bgT#L>Xnx;%P1wo3YW z$y=LN_n=%H8u$13ZNm|8;1&y|U?bLso&(bf6WRW~)gxNKc&oZvLs5HbZbwH)*sP0d z)6DJ7-5uw;OcEm)pywzAJWLzVs=$$KNg&`&M$j9I+1(R@&SbO7e{VONP>f1XswQDjGddxcPidOS+{iGINIk+xG&#(zUby6VdLq+B|P z8OO2j&RuI}+&;mtK)t%R9(M6XmLCWgE_9gyM zkhJ^+drGQr`%ix3_PfWFi+&=nRRD0i0c5`%n4F8DLX=Sg!x#Gdp@HYo2lzW(3}Q=} z1nU6;irs)2TsZcd_om)}039k|lJ({04=0l!r2!LFb*UNM;)ExJ#hULmUtP}xCDHdZ zZa;#36U74EQa)FO%Zz&cqui0d-{>glCGS*CVW1r+67i!?vfyR4$8~SzsdBz1d1%l} zWZmaGmI361qX>D%%eA0*m1zV4F#{j)G_$#}6_T|2G*_a+5R}3GCldN-;DR}^IlT|2JS-Gk!AxQNJVn2V zoHkr#uN-8)8dQKr0U;gtI0dUC$lAUms-KU?|L=+Ylwvu|dCXC|gzu$vSNKu}> zu0|n_5B9<~3R;~MOhvP@#`&obQE_370uFgp`l}U2R<3bAy3xy|$rOg<=ULBh~ zJY9R(dwOK(w6D};xb|_39bCJ&uiUqu+n-!7+mG^GFCQ&D3bQxz%Yud(+HaUrZFyzW zxx2vNHE*-Q_irkQ7$a54^6ehB^jN7$FhBggXB37axa@9T+Hk2$dq%FgW0W z-XyyJ*9&RBcct?G{o{YH-HY_5BAz7Bm;8Uc`9Gg11*T{D|NF~H0w0SK@1zFjcv|&( zDg;pG%2Ql_`=i>^ay8Fk!nYJeybl9GM4n0CDAxm)njA~`B5MHOm};}BPWS$Eha^HV zohV3Y%0U%v7m*h?DgPE>k9zwZnIgF?!D%OPM~j)#!be?WhhTNY;`+G60$IAbtLZ+1&Z&RFb~&ZMTuv{&bPuqHX^mriv!&C|&_tC$dVp zCdXkPV#^*N7`)y%=SO10jyjM47p#=>UHkwnXX^lJm}e%6y|;VvhsKuwWN%(yO2$x2 z9iE~RG|Vm_?iF*q)I3}>#+y>y@ZM&vE5Hc&=W8=sRVexmi$Ocp6l?ba(7jD6Y!^dz zNTN|G4|kB6IS04{NO&?p$Zt&rkmG8AULbVlykJ>R?u<_W#y+ z)=KK+{UO)0=h1h9vuP*Aty+U&v8chC&8v!cy$FeM*cTIQEABN9_b2VuF6YL4u-+Q9 zcNr0GK+cu{-41$;=1 z_9lDc`2aG)S$_@G`5d4ElgwtVUF?6U>mIU{;W(ov-+uyl3W}w1I-c#uoS7p$lM@rFIKBX_D%&~ zot@FN?qxu%kz#MUDE7rS5rA#q0YMe69e$ay<7c6~P*Sq)EBXDU=6XP_T4}!C`sCav zd$o2-=KP7!?U1SHQ_p^BYJDYE4(@ z=)7UG!oR|E@dMwVH?O!H1vWIKUu;8I``=ntY`PKGbVmr+J|_D$`18z0{EljBuY8_Z zp5yFG@X4C52<1i-sWjzyPiEEUf0e&mS+>cNQm-D0!YrbqHc4Kw8>X{6D61a=XgE{= zv~{kGLOH+Z1-Nj1CVB+E(tiKjQg9ri7ni$}(H^izE)QmpHub)UH>^WJCJz$9>0VDR zeDo~zmwQtbhjqH0zCkb`7_kP5fH)%ERN=-mAfnvjDk8c8+IwzmQ0e})R0r`Y+PHn^ zfL8ujKy!;NogCT>n@;`xVNe6khJ%!m#WYh$&6w<=QRb3224yo#U$0 zXTT?5Y@KX3=pNu;xRkDj)oR=HOF$mtdaW_65&`MY&MnsdYeIesG#p9mroDU+Ez{1j zj^j@y;SJbjBp-ZVS<+_^CM;FH-OlJync%AV>Uo^l0ibezVdQu1TbY?xCijxiX|)Dc zY+{omUbH_yjwNEZDaOle9%YBopDDjpjbrAL!OKmj;EH^m7kC3;=Zh5aFdh)+A`^62n|=XCDWGQIp>^6b}xbUi4ZCU?HUx~j@OgZQ!mR2 zvW#O=URcEw%o4zZzrqE5nYQ#A6%mh)u#WNoTjGc03sX-g?E4^1-_l1`~YkY ziOCS5ov{JQHIx4$G|@LcFV6wTV3GnC}{6g!Aw9#+8JrB;o#S3bUTkFkZ+S+TG{o+>y&W-AborUq9!^( z=8_vQTk}Zcd_B-C`b0)+%MYwe=K)7uUIaZeY@t8|SPr9LYZe(05{EgA0|!YxB*`YH zFwv;mXMe!7wQW3%p=~$gRutz~Q4pm$;N#GB9K)6?;9kox{|zs?6?-U{Yp*$SDlzfB zRuEA(trb6VqtEO;Q~@g4$D7K?tTQVsIoL*@t`)8T@qy&_JGcJXfJlz*r0A(>H`-f5 zQzx0m4O>Uw_Upv)c4#7Z#?2A*Zta`$ffK-H6t6rC;FAWVsuf=gr8~c)V{>+PYU@;W z6eKPff#t~W2y%juF9fc5T1Lbosh{rc7t!Je{HqLiFMsJ=Ut70cDi0*D-?`nJyks!^ zuB7}Kz=1^+^64>h4Ep(IRt+cr;#N+^THE__IiG?RuAbpuh>row>CE>i*PFn;KIO!v zN4g1F7LCsCVbceDbe;AJ%`O4jQ`13EZ~*ovA$0G0dGj$`>)n1ydb=H*)$=ITgpdpf zhDn6uonOLpzIdF$xldkIlQDk`wu=YT_cM=o} zz8FZeE~Fy;!hrzqS3eg!3!D!W&s`*}HlbyDcA}_VG$1 zLU?CEnxC5PnZ+0O2qtPF@BWuZ}!`wTsJK-CW zKWDsdLE?tpXj5#sNm}hs6{HmAmT|ta+USo$=8#nMLXJAd(0?LaUXt$e??ALj^TxFs z78eA;qj<7XWoh4E$Q44D*X?9FVo8E9bfjbUJSKi2hBv33v?)+gk;@+Qj-DAr? 
zaRkITk!Q%bU|p@f+Z6D8we`WugTbm#U}If|^TT)1qs{;DS2S zxaGfqJ{1H0W!6NwKdwy;VIWa6e^5>=r}jqvf*Aie0Y9Dr(#mAWJQ48X$7zF|$FRi5 z4YKwI9lH%{_uCxL*Qi)M>HDTS>5HhUz{OQ4T>GQL@DHLPT%dQUj)mCIFwPHWF^utz=9ZhY(enORGev|JC^+eMp4 z=cS|$k~(b(EN*sQ<8^-RWJR*=D*FX#K;W$$PmUeeOGVA!c2+rQk5mGq78(W~TFvev8 zlww|<#UxByspMbz&0wUdD$7-#bry5g+QTE^2rQaOK3{dNPglw1yVeCm+|J%OK%5LI zQQyzKGG+tTZYf0rb9chD8jW-ijiq-T=oJu|JmkolAWYjv=edr2oLpg14CONT@c7{gFmA(Q#?m7h9HB5n9fhSnZF$}Q z&6xqsS=YkgOM==wKpS#7fNo(4;{@UA&tgp2DZ5klShvp`6pCtWmwfh-N)hw zxV@>!%DWB%Pk0(xXS5r7irM?w0lWk>nMwqW0{ywwcUj!@AX?RLrs+j0wA>w7lF~De zEMbwDkO_I}Gn8tVjzexXa+;=2s*&FCj`^0}tx}azVD`%bp){)IH>P{5SAY$~tu?9L zApMpzU^xrv8FiFJcs`4JY5X5U zz1$YH8_TU6_s)(m)V_Px^l*?%rR-zo>D}h!*=C@giN(53R*k|IjWQE4`f?*c`C)F` zBT!et_+*SmMMjPW$@sa!1X76gl=;CQ!we)fFb#Zt?Z?*jewcjMSq&*W6!w9ui~?KM z;R`2Y37!h)Bv7L*Gh{h$M6k6CCf2A%OQ#?+ipu0TitS(`R9rjQ#H8VG`TH8lJ;$i@ zlJBZ?m*{XBCB6NyBzP(ajmX6BN8m>e0X_u7scQnY4(*6op@LX2raR>N1Mv8{2jgZU zhxB{nC-WX|+hyH9#B;pmv=6T}g#4E&yGU5j`#26Es+`ko3%hTkV>gV9RkAP;fLu)0 zEHEBGp-x;vDp#**cI$quA7YP=;I^nbZ@G@JcZ`AN;iW;$Q!pCe=u|6Plii%$@7x9L z!DhT21Lt_Rz1kr18st}r!Z8EcKQO@f&GrV&U#Cy_VX7Va(T=Dnn!iwhBn_prfHIqnHft zPQmH-HXugj^r+@TBKs2W(B4f`ZO7YUE(cV_7@@DeCLw+oL>U`d<8QzL?Oa&_=F5E& zg5Cz+E1?iv2{T>`wR9iiy!sf2;=5)NcR_F&?!p6yfPo7$Q%hl!)e<920J#T5p{rAR zM>qQpBGT!c=}?heY_$pjZkT>%6bvfLjT>Iyqm;^+HL|Jfh|8meUo_yR-5>{n$)@gz z5S|3I$`NrYbwU@>dr`f*5$TR9ggPK2erl!y)|j(eL|>;;fR`hLeH$`{eXT(m&9bQZ zNlZgW#a<7xgkMiED31Iq=>dx8j9Q&DIqa~v?~QPv^dW~T+h(HO+&OHYoJPoQ0V5Lyb-1?G6SBovnpLLDR_GldvwIwar2PbrS$El12~ z$~5~GdI%h%GZS`MHBv5tm_@|-{hH~3ynf|(dM4jN0kcl;pe=Y7VMaK{P8bBDBV!uYPURCr70Tcpd07$vMM6;!Mou+VOoR6$ozCgjuzrvH87wvRc^*~W*H_VMl2qHwC-tf15BIs2%d1un2qTCZq>k7 zJ*8aAXfdHsH$H%$mrDQ_mJdqJ1m$vano)ahrf$V4^Xo5%vlH+HlrXD2#C?j!+Nd`2 zhC<Ib9phNP&&yC-kJA!1b?s6AYH@- zHxl0Guk>zWS?&Th`tKI$VC5=K%fqX}uPNo_iaX#1!pp+z4!3;J{O$O2B|UKp*0feN z3M*jI-`!L*Ng`c^uf<_nNF>nib?b|J%SquTi{yucbSHwFbfsQ9_1TpQklbWtg2cJ> zun4uG(!m2za^11FyjOA_rkTc8!4qI&D9M5uzK#j*2_+{5+ig#TSP2Gyh2X{DM(s1p z_y%1*4dK|MjbJ4nQU6MwF#2Kg_1wL86IoPD0>lj%#F?On-jA%)b9&}ysqjvnQ!RPz zdH&L=?bY@<25?ERcd@1_am`=K=C!Gybf7cxXQVnE#PRHg^ILpF9#uYdt|lnneDJpO z`QWP%;rjD#Bb}&WJaV?c$C`{uuqsI3r`iY44)PL~0q^bn8_VGDFbqhtMHnI2$D8B> zY>Gh_(5e+3K+WeYzX@vy-zIXE%s&SM9D!Q=vWvX_O} zN~eiX;UeIbOK+UaoEVu19R$ z@soJNlBF5Zf(zB&IV?N87NCLUludf|+V1_KgQ0{&{xrH2lDfl}&U^!8D{6nx@H@FA zki4uzn}Z>OOauAH9@y@H{B)YzOakKZjX(vk*M0&R0!hqrDypAQ_#mo2TfvIrB%z*- zWQyS880MfF3DHFI;m|3SuBuGI2l|ot-5O--xxflPAbi*a>%>RseD%?{TAuK%$4zQ$ z{EH&C1SKyB`q+Y3SR0LOAH`js6eYch#B(0Pk9=5L_g&JQZe`YnO7_TzgblenEh%E9 zzyv>mH|uTCT!45UBIR)?JPR6vGG`bXjg>vy?@gl)W_+cSLKxkd)vAxQufFzef;{~~ ztx|CuCAHi2QKFFyY*y=sN2ho+==gBT`tmFnBP&$gm!p((4q{Cha%9VIL|)hX-Mz z3?V~$Bvi>^uT}`sdz#7CqAt_O3bS{0&`|0?m4~3Z{n?+M$4WE^DzQ!y-trL0kePzUP!pk#@5ia;KRJh z_^8Wv3LM%IIA1Nq9sH4GPCQt8+(S_p!UH9`OYRc?=T;$g-PiQg6mriuQ=G8d-N!Y^ z;O^bU{JDr(x!Frho)cq}PPSY9PNOU;^SCPWJlC4f&0{f^m?(Qo7Q1qLZ_&Ovegx~x z%pq`b-5Ff}JyBko@ssFaNlG}&-DQB#B(vJ$?=JXTXs|m0S}{4oN~*L7lzM>O5doWb=SNEe0*U5 zUVyG*Cw2N<>@18{FO2xVVF+Sr)Q)M*MA2Nc9xT9@7z%|kUK6WOn8x@20Yw0}89jl@ z2p=4-EtJ6b2hYgK%X0ps1<--Q&JE+g*k^GlXKDBc$}nVwT+ENg^{Flc*B4#X9*m_a zcs{b{a<#HA&^@$I>BAUJ6g|tYwgV0)?2I)TtLTz{88*V6SF17ZU64-5NtsQn2fFzC zKg)j)({Gq08UgaE>D2AlvWEh@H6c+blV`yK0*lVEre1wd_5zQ6Qz?5zj958@?c42@ zWxPMYO7d$kiW4mE<~)2Z(;n0@yYEW;amO>F+7>_Mj(KywFY;Vr-+IjHC@_2R;~5t zVv0irifno@ReHkw5Gi(mBVS_?L4Yqc!MZWh{}?iMa2Dl?cseep!AKsr)*S^aaJ>{o zVSXNl&(tPuT%V3hy}t5!EpqkUV9&RnuckcT`1v7y`Q6QVfu#%;fQMKpk(I`EaTTmp zlLoH;BjHG)SWciwwI#T81#`!zK*CeCNnQKNEMdeUd?W}vHU1qY+R<#9snube`QOQa z13oC4`J-ck`Fv(LS0XB^+ea$J-vqK 
zFEW;l=z|XhKUgfOKJ;{?!7MdX6A!N6Za`h!_Cf3epNk(ta;w-G4cf()EcuyH2G*{GhQ#8^Fi=X^tP=?|>Lt_5PnxyLmt=m8 zo@Gj{f!n#1eO9OvMtxpbEhW`Xx+%G^ULhs>8N1c{-y|VLxzS%nbFmzvZS=fxwr-R= zIO=4+9~~yIPti5qy7FF4+&I-f(;UA0I!r|#{vUw2IzSo?eDYQpcEWhcwQJA)Skg(N zZl?S0Q+I_zl_3*V0$&C9$#HX?g|ne~ufpG;gWw~Ks`{|{b9t-}z}jXL@7=b#t0OcF zXBWv8Hj;7-*L^V+P8xE^3+NoBFeK0aVvhJokk#?vq%fva#r=cn9;0ww@(Zy5ju$|) zHRi=O>kSc{)*Qsk(W7_>BRaB=@E*+BknA>sB$9p3$N(V85gF&327DnhlAriQt5=I!W z-4@wA35936e~#rqzkOEqZh0WxXa;O@!0i7 zii=9%IBStkyQiS`#)S(g^v%!UVzJVL+kxUW5LB5(1DL$Y~0Au_S{Kx8Um0Ne1@To zNG=)A`y;lwI&&57Yk(jE1KvCdv~{Tf`cndMoO%$M?}1q3yTxh62g~`l=R;)3Ii! z2k@Bm+N-QU_W)M_28%=vI8weN`qtK2Z4Z8 z%?r+k^r-EsS zFclbe2&pt`SA8n2R)Gh>%n5$K#QW|AOa+=zNlN5N;)MVW;;EmfMnMALt~E;mNHoBk z$(8xS`nwdEA{l@F1!X5SA*j$Q6=Jho-+L6F@Ywd%F-n_(TnBG6cMep7;HU_k#jg0& z0jCde%O(MM{1%`u<>_??o&k89I6&EZtVU*joYAm`9SWmJcNTnly0;!oWXNbaALTl| zvh4W0>H|#}4KR!r;qQ!w;$l*)ng$bJcx`dk?T^Yb99tL+vjzk{tUUm>M@5N$w*7+! z@F0)MOmYTr&sH=D@>hGAFiE@~t}V54jNE|bnRZMB8om*{ZFGU@d%I1s_O~19ZpWtv z6J_kKC8 zjSd@F9Ca(d(CUqv564nOW$0DZ(+TdwXcxZ=NpNlT9H%vJl}3FWko{HvBlObJXh;wS zAJ9)IqW%F;8l=Ets3GYpfrx-_2I>0nUNM1VI4H~7W1Ein8v;Ig)UOIEjW2!TgM@yD zHF)O>o*$H-JZ|0$W^nzA8MSJ;P^$bmu+xwB`h%}wJ4~T!mA-E_Zr7wYFHvU;K&)6U zGw6?8yJpX1^j6iEFO(wz^zedJ%Gl)GD@(yj{9o{Kh86=a_ynI!P2G0*V|h*8AGTev z6g`rX8(YOQIF!UNCATnldTQbT( zWzeDM;);AO1gI57uvU7%w>I{5nDF*JeYMTL3$xK`CY|zB=Bpw5Jhh)2nj>m}- z6KEp2ItzVY8T_b-^ubgrw0ZO@U|>qXFp$SVYu%3ukhJ8V%)EWG0oUv`9e}gYs`H@( zpGn`p-J~66H!X+o4CTsAag91B{7cDS$C@7-w5|A zzqBX|;E)!)YY6zg%vOWRw8j`0rZagI9>6Nw0D36%+^m8ol9lR!>w0U*eJeylH~O5 z4|eUTIRFNu(5PTk;oPA`hdo=02E}RxoS_bj4|k$i0ZeS$Ry6IW3+46Nf$bt5h;zUC zU%cMUeip~8P@_;NBfgyQy$;LX07wOAE=GHq>#yFaYFA46-$eYI%2gwQbELXt)Fi>X z)Nn{WHta~5A2(KuqpCIOB|Te@d^=7k#&@BAijAo zX3H^~m!?KV_B+d%gadPizJVM$G?<7a;A*kKJ$Q_<)3H-fn>#t%#v!KqBkWND_;3d04>&B&+e0{gm|Nee6JP4zGFsxV;k7`4 z$_xgM?~(APSh-C7v#*BgKlzZ4gCKF;5Og&rU8Jei=UWuDxLRf2o*_lS;~1A}lYg%n zi2{gi%w7cQu+>!HzqH*fizOnH%l~O&{z=6Dq00rqgDLXk)92C=2oXo1t_D|6j{}o0 z_1h}e%zwG|pX}MkkOasD4Lgm}5q-k__I;=lwaqrHbfs#zyVmlWFuB>-Z^X8}(R}7Umj7$8 z&L?Ytw8g#Y0QLpZa;44VMwJe6;Pw~WqKzN#{7)1yX7En3v9I`)t5xE$xhLLuFvj)& zgj9CO^{eApsAsE-Tm6)D&8^{9$@E7W;IU$#KISJuxV9MSYgk)t70|( zOfdwX!$^Lhezb^MAdA;|RK0t;3#xdwg;x=q;?)^CN6#Ow*(exZek?LT8%ktg7~{E4 z1(Hy*#kN@^*MVq~Gk|Qu05IAPKw%hEq%Wxtx04$qB%07FKxma2k@P(|a$bO3t4aj! 
zGdcvNVZElxPQ7BxBhrb}NcgCp_uSx)WustlTFHE>>RK81`lOE33Ndj~#p2o8wnt)jU_4W15I7OKlAY@Dz)B#dW zE{9e9hEET-lYlr6A8uCi?s9AMi^teN!1gZ&sI*L%X#M>bv|;?; zYrK=bpAm5!B{9Eb1ESU!Y_J@|_GIzC?moi5b2B{nl_Z1J8(x8?dS0^S z*)%Q_8IGp=S5(gme7BD$jzvS9Gf0a0QZc{+A3E(PVmSC!kq9U}+_7H+3@rvL474BL zSkH$-06mz2*6Za1I-;wS6`(R~xtcehY_K($t2K)Slpm4L0OSo$l1U9Mi0Gf#0o(v_ z30X9-YOfkQTc5aujHag2j0R4;hSEJENo-gbuX3rYJ0CNLUwe?9NT<=%vli}U9A%J1 z%cF`#TtXP>&y-2TLmctuButzc{ic5i!YrO&zIE9>bggF4eU=+n6Dr-eOGEx{_-M@0 zzVeItxh;}P8@%&u5Q%_u0Pw{&1X3Vaqke{>9?l~m>4>4z_%1e_K`r#6CLOyFZug^%PCs8w8w?Lf^;hEHg5wf^jd&{x0S4MGeQJNPi0Wo&*`L`Zu68I{>(p0P-IrK>4fM>14T351^JB z=;8JOEXW5t_)?0vS$McEI;Z2sF7c0mT-1fFa*6VfAo=!xa>SR~w3-@xZU}rmNk71F zPw3Z86F%s`U9H_dse)Vgq?AjVakvEJ3Jed`(XIQdK4|(mtS$$OU$D?M82kIHX@%PHBrSJ87O& zA78R^o~%k&-RB_p+cVgENnjBF4TEX`i#X34M%j3Zddq0+34#S(C0#+Q&7MMU)5gpG z_=xzY1itD5?uGSrApb`pU7#Mao0b>NW7hxr;_(cgZ|+m{XREj<0X1&kp+B>OrYE5JI-`g^e< zJmeD)0xjfaXmbCwGi=y_59nFrpPK}F&tN{&ggG3p7rPDv@giow$9_i8TblgurU8Tl z%aeKAw__L?IMU}Ed%*jgAB>yX%mb2-2<(mbs>yXGs9s~H&NDcV!sPDVhN#y2i z#THb^(YPN$yTxVT2qv9cZ?IfK|DNsxFbwhIIew9?W38?q~z2DTt!zOt5UUH?urFJ<5#_}iKX(^O7ZJVv-kT1GM=vFZvZXQ*uF8u1=ZAXt*sG!Tpp*oqJ%3BdKC^ss zc0ncXU-RrI2@BuuC)1n8_7Vj9i%%a8eE+N;*P#QI;MKFA^{lf|!20;`?w$dtpM2Z( ziOF7)*Ht&3)RBE?g%TQF&=^47ug(`|8TQ-2=hP)#6~RoccL8*Q;ivCgnQ;;@GVfq= z@O;W-hk|q}wRaDL&X%J0ac~Y=uMUU1bdqHri2GFIPzrNz5+4VF9`o)d9{}1|*B&do zeF>jQ_*k2D|Giu1tFr@Miw>k5fO{6#Vcp(bz7P3*M_eY6+C|8i<_K{YhEjscK( z1~}+1#9f6W!lvt=z`2Bz#lR#qXp5wm78UtdYNBl=fY-QHB%D8wth{kKw7AS~b?>g; zvmHk<8gBeTIvD?sqRjjLa&J2bj+DCL`RJ}*J^`!_S7b@mm=8@G$C3jPmw9pD<8MYQiO1kd$4nkguw;8Z@AvziS@kLMoRkkzpHm-yA6wsdAe)`L<-1{B z0Bxm9Nw}M<1z6C+CS=}#gf{nsUQ)F$dFaOSK9C_)Hv?4hQScb0jY?i}bk-y{9tx2|ln4dJ9VY+team57CA_9ftovyQ zvbAU?rRqLfse&>2D2ZGkO-mS0V@Ct4P62qh(wd%cTwEv*X_NO!_GR!J#}r{iw*M)uX~k$=T|EZxm3i9Eu92nxcq%kxOfnp$cGVjo zq^Db%GU+8oN;(8a5T8id@nQvuKVv}F+aqO0>Wj3W0B?qQS>WY~I|yrx zT2bAMO2;RgrTfBZkFO9S`Ma_%z_0!KBzMy;8e?u`pU;)vp`RWB9l3{KGf)LcuHzTQ2~Dqz4Aa95-Mhpun`q1O2QOSo61a^+~;xQk%)=7IeKD? 
zA$G{9p*1b96p9)G{0;x8_#d=(7TUB$(-wB|KEEg5 ztmHHo$wgoAQgf!Ok*N_s;;!S?=T=%pe@08p%N^GeO)hIJ+hpMkJg98}^99#@f5j#~ z2G>EB1=Rwz7lYI+KC(|6c2iO=pGs{~IIeTh+Ep0`Psc^?2Y^#WEr-{3^~KXF%<|#0 z(CVS-{<8re<$^@*k?hXKLB5{pUMR#m@e%JV!1vqQHv(rqI<2t&4p(*Iz^V3;E-Y*R zevmteeTH*YjM21N93L~(Y}wjh^96{eP2MZ!+?x5m{qo2?2B`i%WPMbKOV~r@W*}ze ze-B*{`7^@T+L0*LXT)su$zyBzJ)-%p!tg~-4qryGn(b3XktHaxegSyc=rXhwngc#e zlp-5xl@JQgqNSx|P+1eNs|-BnkPW>PVFomW=2p*9t}M?7e1=L>2?NkqxQ7_HWmF(r zj5y-aghX=tQDZ<}{q?^5rPqs%Yi)UPQ1l)kBv4TQXNQBW<3RE_I&$9krgqQT9s4jc>^yzTx>(d_%=F?LkDausH=$^0qNXYhO}mV zKg1{JgZ)QYOhr`7{2T%_&(HB~cxE{Q7K#jw=kz!vyWHO3DRJc11$lt`I8=78R>^y@ z&hD&B?tnXtEE82L-r>L1ROm=TM{c_Puy21> z)|0Zh$oNptm8m)Z)kMKW+tapIce}l%!@U(DWlbTpGNB?HpR0rZYqFl~WZgVTQg&qG zpRj?NpWVxNOGKR+wppDy?h*RXpgxueyBHqbekDY|AB}N7_?U-0zdvS4FacVz@tG&6 zC{}^!bR2+U-N;s`Gtg~b$Za_#MaE1yJ(of{gyN=kkuW#1S+Z%N9_kY6(!s`;E#-sV z7c>!V7I2^tY%AVHGZoR3_>IY$<22zt{@RrP9WY&kyCymWVKy)1`y$-Ef!!-Nn8sua zj!xb7pgsTEf6#_~QID*77URQ#)SGb02GJ{CSk05L&=-GFlC1l#Z5xrme%jb|jb!PNM zgGDV5Qx+U+JK*OE*NNT>chpu~rhDl5y`Deh({Da&UNt!#2=|Tkbx0X9me*lunii-MjJKe;AZXK;7b3RBte^3UQfr3 z62e?ye9sOF-eF<^wrnE1c}tA!(8Zi;{6}X%HoYV#nlGV?HkP&wRW$!Dsu-W$nOTO3 z6Om13^%KJ_xp2thT|`QpS1=gFED2}}d=803=Mw`G>9^J8$FUWlY&5H8ap^QB$`<~>HYPnXNqbZ$NQ{k>%{oHHV^W#` zhgSE(PwXDhLCG7n-9t3Q zf?d=(WWGi|tqgy=pabtW8Zlc<%%Kh^sNQs;9^CW~D+$A|u-sD2Pdq1ZKmMRtp={vc z_t&fGNA%~vUQ#aJFGBsvyY~}Aj%|~iOi9|C@ zsac*pTT;=cJ~gYG z-iB!1(4G)?JV5D-ss+`1U$b=%a(7{vP~A>8gVoTWh2{<2;akg=@Jrwy7vkEi%ZoTJ z7(!Y)4a4T#hrc$rbJDKr5buOrOsGDBGZ!L-#L@?It8?F*M8-np;PaRYpL_MbaTzOd zf{!Fym5{}!8!inavzX-G(IG(zKON@2Sm1l{tVI_ zr`T*?a-KPW6zzrw7C4iE1dGUvtneq&EpM}F@A;r}9K(4*J?e&#z=0=WNF9Judgb#U zWSH>uC}x*?asY1-+Nt6)l^W(E$|93CjJsy6T#H6(zIGC+4J}@=PpQv;NWqMJB|I~~ zCj+1772l7p#>5Zk)>4<8DJ*?6ECm6)`DL&Gd_!y)4F0NX6?3h_9U|nDojiAAZ1I^; zqt9;5S38M&E0wGjbVJm5KB$>hDJ=VF=4XrunYgS&^X;NYnv?pF0fr&`Uo>hHOWo0z zjqq9-e`Qrk!r}7P|3Hg%^eNq#bydM0GXhOCVGoT&L6yua>}unG!mZ($A=Ie>6{Lc1-Q?@B9M(FbP;`#NV^&o&od)l_WY0j89ih_ z_-FJW-87mY%D1QJ7?0$HDw+1w>OQJtZf&pKU;W`k*23{+T&+u;m;@KESU znhg@%Ir@-TR;#0j-CB4>JkH2q!dR@ID)h6p011I;l_FTo&Z%Ze=b+4c`W0&Bp2ck*9vR0GY6W^&y;}xwAo!bXl zW^Ui*u)~~xNgcjGv=Bu#>VCy@pzLHn9b`I8MYuBw&6l2i_ei#(PD>j_oDF`&iWx=^ zWS6t&I?R?3I3abMxWpEjxqk6!0h=gD12c1pn-Zn>KqHTwsuZ-#YSH(o!Zd@*wUfTZ zQJ$^%*8U$v4Y0BgNM#NxTq~WQ4hud@t(&J>@Av3CW^Rp+7u#dEn`St+0=?kPrfO1b z#JHvlJ#JENm0o$$LXPRzmBuf3IRG@*p5Xhqlg955gzGWlNNO!(|ZUn1vyxJRhkY(JV0rho%@mriaGzd$y~waz-`xdd<= ztbtC7pn6q#a+Odi^P|JKEXZ8Xoh_Wd1&-v6Wb3t1>q#QQM=C2{wWKltu%g`TwUhRe zTB+0>E0;PkcN=4Z8%b%v-Q$^+evxdX{0y^Fx7G`O)__WJH+OgBh4=XmDR5EsJo*xt zU_Dh*WAt=)q9A0>t)EIZcZqB8SWamHVztL?QwQ_INDNzJwF}Stna=7qBmY{S&|9gP^%)eYo7BM`#i+bn`N59ng zBf>^5MVU47L9bkM>1GT)?hIX!lB9AHwC6NcckYmE%h^Hn1IIit=RTf)!%p0JXT{jp zuwfd?%+)UT$!>Q}ubf>@nsGns5h))oan>IG{ook&T4~5QW9ZeVB&7s0I{q8l@oJ#r z-S;@8;ilm)m*$@SKl!l8mx`LA&-%r! 
zkOy^aC`HgvlexkNS6?O_!>I8m8_^s=p_eZtea6@ZH08_h+zXVJ55ym0b>h5w5u4l< ziH%+&vYKzKIuV^?hcZB0gcu80(#*TVU!?L?l*xUZ6UFE8HY zy0jgHp)Z9&-z&~CsUk4M_(FTek0#@n<7%V*XKMsyVv-aPG{LJXk?dEx=n*B9_(89O zX9uF!Lj%mc=RB9Z!pwhC?Vtt!Gz)8VEdYy-4cix$@(}UjshqN~y+QUp{l;hWEloyw zuaT4=MHJy!s9#x>dMY;gM2F$fYkm$lb7Q%8C?nh(cY3z*ay_Hb7tQ3bv>$?=jKqAq zN?xV_#P&6yIJ3n1)PwYf1+x#W_$;=#LH*FNk8+pHEg1`j0%6v3BBw~yXBY4CJB=7C0oCq0X?HP>pBs`%>b(J|RF{P+-PFhjtW8yG{M z!8?94SaA~DBjY<@R6SbUGMW7GH_&n~f{=QxQ!vm8lBa}5S4niY!`EJ2iH8%h?7n;4 zYNPiR|BJ*Rga-}&&}uh$IBFUoYwRNcFAg$AZ;tjj&geG8gtLS196rE97P{UJrJ?$k5gQw&R5f0zk(csGBAX{2?FEY zd=Xo_0&cR5aH1b=S?Gk8b%NZH-}b}VJS#z9(mnqki|&RlH5jHA)`b~{t%_2O z%f?o$@cf5u*5F;92D!&ek=_OH29!PD_+l1v14OvMF$$L!9Waz#iIkr2rH>#bG1MP0 zXy4cBH1<1u3weuW&Aq=`@Ix#6=Fa$s?gFA=8W4-f@z~IG_z|F%1{6piufD^$)BW}m z{bWMc9s|#Ix2*Qwd~4uKd?~)y^^p$zmR(S;Gmqa>8cCwRU@~qfJQc|yC58+bye)UZ zT&+wPU4#(q9-UVJ^vlPg5FC;Elibn~K2>3)CF-@kIoAn&_S&eImy&iFE#b}6u0orT zJJf?q-A&4Mv14C7kcmZ_&NP&L2RN2OAHhN>WXa^kY?;q9O`I{Y`&ac)2QB{q|Q}MgF*8AQmiPei`i1V{y!v6Jw z1qJ;*?oO`2Vm2P_?LyaObit!`n1XV6y+g_DSI%?J7^OXiKW|21Gy|jyJaOnu1{RhN zlZ(R}`86l#gYg$w*OWP^5{394;~}{D)`T;BCEE9tx4u^8T-BU}{LtY3#JQlEIsDWR zTl8Nr>MvEU4Q1vqeuzEgl9pA$FUlC53-&?~Hc*xXollwwP!sy%ppSeZkk%G#0Vtzo zD2^JsdYq*@0nDP$KsYG@3ymr)H3~T@`=vq?lHAb999R3*^5^1%aHM{Dv{UX~j27;C zd}oaL?1j^v86&uA(Oye5>=xg>Ts>#`>-cv($+NM(5FSDqLk*VLf%@mEdhY?A2kL@kON~kprrFckzDDzNVqTvKV*yO zj2Vu0+(V>w_EF0>mZq1IZ?{L|aTw1qWi`Jva*6NqP=d&?cJ*x0!q#+A35Ur?4paIM ze%j~=6WA==N=|e3;2{ImczAvWisLLZahlMh{NJLx$kw zpc)iyrz%mWYC5my@X}INprRZwykgvx6yn zM(>p0MBl@vFvaRwbsvf$&dU0}@10Sl(%nSR+y!G?I^QMx(^K_ zcf{-G28YlErTLPhMgMr_)e3WBLHPH=2o+4rZ(lz;nV1{y>nkws=)_*2iy^QUd(HUa zGMj05$5{w=UxPBZ?2XP>GJZpJgES#r~xROLHUctM5`? z*OpvZQ+Ojo)`qx6($b5KAX=nljGUEr^THc?56W*KjE^P+C4q9CI9qs8$w0lyyahvh ztU__Ca*C9=t^4?4n*=nzmh03@LOfOid=$u z^byMtAyZV~lDQ5c(S^|3u)hfoO!8FVMx!9+!z36&=3a@bMY38aQ|I<(xu#7etrXA~ zmT#1Ewbl@ZK?Y(745dL8TPX-02?1goVHU!{P-Q zY8D5$VlIB&-xTEs}d$hJsUz~0fwY8g}POksLBB8dLAG!gALzKmK1@iVqKLeyt( z@HlAnL&NEr@246O(_r*Q#44;>1LWgIU`Hp*Bn3^CXkxWI!r%2gkqx`rszY=MR4{+S zcox_ec&4a3YYFhybc&snDqJcBJkANW{v?K6BMQpT%6MChaGHL#_Ei|o>Zq3@;C3BC zL?e8|$`0RQNVJFtja&1mv1O69@>y>eN!1>@R;(h*;KbG25ZyA|HTzo%VnTS1MQUhm z%mpSJYopQ+MX7ZBBJIMUO?2xew0Q*iQ;s%?wgYMjzcUAh?lGZ*&*?(M^gp>+5YdZb zZI9b*D>*jPzq!YWYkzT2I~+v04z!Ap#T`y1=d)v67wB#qHN?W03eWeJRWK(9)}r)d zvE|ZdsVG!)d;UI8SAY7F|2LKbf8Ig&;@bgB1(O_*imruHm$m5nBQy0(`D;4esAB_>43ie-zuo z2RZ}D*|F?oE&F&_nI*|=?2hn%W=kky2m|;*;Fnw#Op2b|iOgyli|||rIsP=&GAxLI z6W^3zAhzImhx?o(H^hxa+*Wau&MX27_1{IueXyFriokwFiL4%pH*%|z^&6Ca)atzR z5|&|c@jhHAc-$wI3jK#6ws`om80*t8{0;#)*CZ;0e45tP>P?6zq>&r@ULY89cf~>x zsDFG6?$f&V?1)q&`ur0Pl5-581iwGPiCXUm|eRh7yG)np?Hy?ktU zjwbtQJ>16gYd4wkeis2&XeoXs|MtVs9H|HukvKH<4V_|IakSVn!AZKho9%;+2I`GX{l9+oQZ#1~6Ta|oleU-J13-{>XVz^bsR0?=Li4N_1q%}|1CnYGtNV$_(|C;3j3xSMiO(U8*cZRVT zBAXt_`7f(V`e+4~w9rvH8UM~ck-OB4(MQJDm&6wuPLJVI4&aj+j$dpZ@V}6#d<7$2 z5E^#S-Y&Y5FH)KBh@sg*9B$cNA7yZq6jO~w-7!Dawn&C%jU}#KY)tp*5`G^hz=K|M zwUNhM*f>78CUMI8w1hsc2^a0GWg(WRFT%WKt@qI~CVrk@LNGCw$I**0K93-}1)xSW=p%s+#Is-Vn>OAi(p2A($}&-X!-a7R zNA!}4JDXqGapS-L54_b)M!G@@fQ&rm2+L7($mBC@ENx`Cx2hc14-3l7(>FY6ZV`E< zICM*NTub2PzAN80lh2vj0?v*9#Xn^{SQtxFtW;u3MJmXO2c+M7}K% zh7kT0O&7DeSEayp&BWNg3^EH{drQe@+JRT3Ir3L(mtc(e}$NUkVf!2rL%sHj_Q744dU+FG?b>C++7nh%O^7h z9_6iyiJVWkGoKFK;KIVD7m(HNGi63?nbCFbrJRJ@;;EH}<>ozUtZ#dDf2RHm-T_m2 zutxGxNpGwzxhlw#^i%Ms`sVak-u$D8*}Ic@DA%ugqo`4Q5x-7WFI}WzLEb~AX+9vV zkh?ULEE#WK1xs%_#C89xZf<>_QDP5m`kyceKwC|WQvLH3%^$I(PP!R*(70Ct9TJZy z$7lB`61m?S5=KGA8$h<~-6HzNo&vP1qa-8(?Y- zIkf)&DyU-^Qk@HCj!U@;eGY%xA|`!KJw~(*MS5^aDMLpsO^{*^3bNqo$hLP}|<=MPAKR1DFW(5)2_ 
zrLY8&s}clRuuvVVtlo)*$@z~p2BqemXU<9~Pi&B3zw>#!U%c1@TbBvZ&~9Uu%*_9N zEj9sA;iYf}Tms^wn5NbB3t+w4r|4A>tD$3cbIYTK2#d|Lc<56$AN1s8W6Gs zUb0m!v9CwdFR8t+2@GQyR*$mfB7C~uN>#;DCHCL@UHY!4>Q4kVHH)%u*g+Vd{Ecse z-RB3!P3~phE5edVL;)Zb@#?wm&12xGaPK3xKWj;o>_?KN41UWG+D&sVY*J4B^4CVy zMfcKVOM7310QDk%rE(SId!%k;)i$y8VQ&?IOjA5EThnywUn zc|*aJq=RbxOE~6^Fc=haBMJfvYNwIPCl*{+%Y}PGnWoFH6(k-ZKEBkZq;zVs>ctAa zVknn@rK0z0^DI&MJh@yEnbthM?4*K(Tg;%5c7-#sXh{dqP+I%I=>oC7rVImi^99IT3xekFfr`P&>;l>~L%3*H%{pP#6jOSQkB-f_}p({(fhzdoaj z&8iz$WIFy^bOW#XHU=$DtqfTgc9^X8ner1m{QP;J72F(tM_t|Uo>ZU*}t&gXOMRg6*l>5x&v0Q_yiFAvlb6pM0;{3BHS(^_Bv9Y@kT4 zpcC8ay#ITtFX_xlH1eg$Niq>mOnAvWh-u{j z*|aB05otop!5|B?Mf=BF19mRq;%Bs!5!}(-Hz1`nYDIlJ%1^FSyV2E>tI@qjTpw7e zs_D^CmBFv8Kwi=07L6DN9jCaA)CFN@X6;>|;X#DWe)&DHx0MTcmkjozO$WUJcHS}( z!YyN~1-S2PNBp}KRN-0ROyRV64LJgYCL7QY4)<~*xeP1eXLBI6bnCaN{RV-bBOYQ8 z5bl`$|q;T3X9u_P<#Gw+UfOtLa)Nr;|?LDOv-L zfqR!_kN4+rsYEAf0*A1FQto7YB+)+`OP&Q>#X*u%83$#~a0AE>s$> zhrdYsO2vgDaXhErw04wkb3wJUHqLy!+FNk%NekcewMX;+^7hDbivzTUN=Wq5z?H?8 z7Yl5M^VlR93a$R9CtY~59|^mJyGo<_U;ePLYo?1}H+W{T7P3#g6A^9U)5Y^t0(f_v zyGw(PhOm=REbc*jnP1AO#m#J`dm^w0V!XSn7*#Nnu)5jt zWrZK()t22pgWA!cxUUiiDxH~5F&8FKqNKNG8s0b9XrI1szxm0ob?5uriwHwS)u>!S zk7daw{m?abM*m)>9nNkVe0_Y7)or;}WXJ{+(2tFbggTu8wGfBhU-%xct{dvq9m%3aQW0JlmUN;e z7Mp3LVO}?U8eUBM0QEZGd)~M0U3ylb*ttT}D??u3gi60l(%FQqfFfaI@BChp#0mVi zRZ)$7;vM|JYH9D$f@LJ3NxXX&E5*BH;GB)0; z8e>TOty+iIju^d#GZz4~q5Hk)-qN#Gc;SCxKQPT_U>^nZXc^InD4IIPU3k)&$ zS#=q=vnYr`ZYG$KabZ2cHtkcXN4ieUJ35u6Od=wtad*j zb-VaOC{T_{8GNJ+*RSNhhdnrwSn+jVMA_%J0VW!(yOP}RK;h{oL{}wqZC;LJZ&{1{g3yzK2>=WAo}2$xS=K6W0vwY)e*Rfto)?O$ z#^DDvzQ7sj@o-(~EkLxU09iZwp^<~j9kw*u9HrB48m>yR>u3$Ur9>R!+Thx_I*^pS zIswau@juZ)JrE2Hv+nU00?7B^5+O`*CP9=1;IaUGaU56pqx8bcF89cju&$6=kIgZe zjwqL?69o$0r;viw4yiVpL%C+fRGAsa*~S{9sPX&G?blwnRH5usG(z(}C3t~A1n7S% zclH~I?`Jc@8+p$o%D784{ISxo&VQkKm8oTA-nlM&e*%Ilge)@&R9VyvhH;8(*Xy1*Mre|<&tWXCpB+} zKOC;LFx7Ul((a1YC2_!;b$+#r>6Ape4AW;~Mlxe}DD zP}#J0ecU%*Z{CH9=?YkE{e2+uagooE&m!P_pCV*Zq#Fkv>Q_r^YYSRYP09Z)b&ci8 zu(1wV2Zeq&6V&3*-#O{?-+;HOLa5(PkmM=GO9@*+QW>b87e=Tp@Hm+%cU@5+lP=*F zQFX?WTl(<5v`orWvQw!kmJW$iIO|-^^OJvVeJ!p=?RT=x3Rw5DqrHU%bf$a6C@_q> zHRV$ANnHvLiSCGUmsaWQ2#lipzww6@$?R5G#k!av;y#uSk_%BSk$J2yaAZ_c$mocR zm8BgaUx19*vBLZO%QH`_h>+OB7`*M!7hQ zLr43T*Y0MNpx8O{D+$2Ju*k>X%Ez75(Zx_ey3f2yu!hnlS$9+#j${H>*K*ZsQvPvWUrPCSXC$`8t^i<80h0y7q$NOVV=6OmKm8ZP{P zwGm2})K2`B)f;32p*UQGUrh`XV?FyX2A<-o&EZWH?brK(#YiHcc`XC1^gLGBS%(84 z%+1~?Nld(g2k`@I$kw-tEc5yBV-?v5UhFmGA>0o~XCS&f&_t)VAZib+JvD;p81C?O z>;UYLgqQ{<4l7BeE5zVAq`G!H9koYM{3gbyyfj%`yt8ajQy6YH`JEJX*`lnZDJ z%-1lZpiMNN^*H(j_6455(Liq6@A4cDa~8VaYRuSN?h3OW49Z>NeT%Y)M(S zwT!(Dot>}FM6esesmTP46}>;vnl6sD3I71~^?uzP@I%jr)cZzmFVv~Iud4(giA*$o zIQr)jtQCU5$+%78%R*k{TTcY9Wf=Jtk%Z#(CZ=hkZSmnyIj5Kr444HT8%SMNVWAw; z?5I*+5i)2{EUA1RbCa{cek6jA2jNh~gwheCy$w<3z@6v&ok07>kVbT?xA`vRzq(fW zvpOe)v&&+v7KJX+y#pp^p-HU`rA6wZWRY*dn<7%P9bkz=pOEVehsgfH{gW8HFDE1a z;E5qkWqi-XH*wVN*Cw(?h`_fv@^E`1IlH0GkTML8@y%7cwHgO$%y4eJnho?^nl!HE zj@lBif`TXAs_9<#1R5dz3Co>9K2Q?=DjZdVsT@HYbQnQPqw;e;oXnAG%+5gx0Q?DY zPi;)a(?=~#$FJl+vF^fLv{$v?!>J3omvUj|Z7ux?CrIPF4n`L57MP8kNK+5~jO63N z8w&T;67qvC9NSqdup*CGVXWFX_h=-=IXIZ_Cj(F>i5Q?Ccrn%>aAj_Ej#-SRS;nuy ziJnVH>M^HeO&amT6r(=QM{B<=258jE)DO<=V$j((Iyy!JVXJ6F4aD6+(KO*Go<+BK zQ?Z|>|ClzAz3V*`!}klC+-ExXF%;#4jYMJfm~qi@yaf#&hp&olf_VRs6ak zxR1q$(0w)X$uJa|hpv+T|EQ-(%lhV$MDl=hLMIt1hWB5g|%ykUMqhD_mVQ?xY- z`IjLQPUzU#1AYo)0_CQpVebl!QjM3G_WBlXn<3Lj+uWs*o+XZ z6-E~fpDo;^b(Ro7VUGtnpZ>QnLLGUDlWFUl9PyHrF19E^4KfZ!LVEl7{8g8jlVU>~ zK&_Dcq)_EPESY9TWVZ4_llURA6PxZ_Tc9R=JFO9qFykxlfOzX`P2v&!4-R4Cc&KQcs|m$AA6D z02PQYT_`6Lg8#0rXXF*{(?JqGC&WEf457SFA=L>qn!Xl^QmP5|1%XrjozDb@1>vMx 
zSD)4&uf(*_H$M59x_3m*6oRGGNFX)LAk2iaq$N})9~-G z4=g^XJ;?mkL(lyljLj>*&%a!wd?p+h7FUMLK^`)V9qPUKnUFxEpp2#d;T#^c7X{Mz zkz^>;f+oHr+>hZ)&ufMF`_x=eFAQwt%=gs^A#3Os4;T-_JaN~GUKXtuQ6f~bo>ZGI z-Q|b76>h3=(7>?tG0D`~eMX}y0?=Xd+6f`3YM~BB_++&Os5@v@Z?NMChR~OizNNFt zX)hOb#3))Q%^0^kw>H3ZrQn{09atyW0WKW3B5y}@miD5&%C;Qf%@2UnGxS*ly1Xh| z1QXexOQDQw|GA&qO5c74?h@mFyK21_3&7Z(uQ$CPFfqhRi*;t6Sn;?#x4dI%IHc^K z)T$I($cR3J$D1GC{G%94qW+{iz+102DnXm1pHpl7Y1^rTntbF6{UL2NtxRZ<;eDBB z)rQgz@*+j+C!MEWG!)1P77iL|VL??%2#i*EF5TdrL$0X=~O%iCEO))IKSt8B}I@nm~hVw?u zyBPE!(9MXH3%F$=Ze-#{YKVAdUdI zS6LiGRf)i$jd;s%->Ifku(&T>m6o-OF_ifz{&>(=j3{dqeT9l zog|KnC%W!D>b1AmMc2CaVYqYplmRBbg#YV^36?<)0;y}s^HttL(?0g zw@r>U`ZrSk_Q>Yn#*daTF+1lZgnQ@8d`y06;yS$1^L@&-V{-dzcXziQ@lij;O2EO5 zOLz7$k$bzZ;E&w&VGU(kWGue(xjc!Ay4$(uU%qT_?+zc^Y)>3i9c1~DWv(p4Y>gq%8jkecurC23Pu52#JHl4Mt7hOL_^ z*@HE1hHIizt~*fhR*5TqeAxikt9QS1zH1le56xU1h-TrUg9d_v(lUEIHG;ehJ^jwS z2F#T#VCB~+?5E7ZM)v69)&p`9KM-9s2-fYS>Qw&$Ek*xaEE>$eBQnVL@B7n{n$)55 zCdB9XG=~krGF1C{#`Mxmz$1YlrYEat?)8FWu3DijKf}GsMU!7rHJZsp4^4{hv+J+7 zgO?dQuU=A=ddqU1Ux(@NKJQ-OwSWg_jc${q+pZeR*$$yv+Zwi;m-&r9;oZy~w$-j1 zw*CH1&1S%{Kv~zl7)Jyov$~CQ@w%K+l8>GE<_w^Y-Zjv-9QEQ`_A*N8X*{1Kl?19GpK1I2LG@2bp;^ zWQv*Ar~ka8N}Lc1@9nfpMXfaR%u4qR)8@c()7#HJ=hkCC4zGJz2DfMexdty<2U%r( z6m#~$5`sbC#bX3|R60%2nT(mw5~@)T_*mC~Oh*JY&;qBXC*K+H=#-zPrE7|+C?-*E zB_&XveCBqXXpGHW+9-{RruI`Fu-}>}TRH<|NXnhpaZf}v?JGJg7yymHHk0WJAM0#5 z0Yo9DKj4$&8>7vUujJ89GY zk+3rFn#XYAApA&_o|^E*&o%Gv(xX7~fNH+@R&*8OGfxy;*$0vZI7)_>#o>N>=D$Q8 zCm3l2evvTs1L1-V*nO2rH7}iji`0}f=_n64V!<=1gz+bnfU+Q_Yc-0(8Kay<%cm4K z@VbM{7HudRFm}z2ogXn+1f&0|tV;Nht%YSJ?OxuuSm7zzNTynT1_OpwP8KCnIB7-9?T3bYLwL#ZOcZmKT@39Q#~I$9Dh;2 zEp+ZQ=yNK!RAZ8X`t`eddd7^QYAc~c6k$)&->${b|#TCt}cLHg|K>{?tbNP}`J*eHg^rn6bX{l_^`+lPq?`u}mxDS}bBJ6a`%stjo$9GcS}zX5se>6$YNaFzLCwF*Kl?N&$c0H6j(!RSZC( zLlD5M3Vxg~k!UD#PjD5_7z=ytap)07HoMQMo@3esSerGFH{-5TGkXSHr=t#U3m@NGx z-!w8MXL?-|KiBM?bqg#_?%xcI1|K%eEU-eOs%5O7n(@^$Vog10{8q zM4gb3@Ge=y?aCZ%>*BlxySMbBaMfz;LcfFkCBk575AuS43veb)vwVf#R1iG9%>LNy zaAmoGvG1u8znO{wZO?jaMt}bTSF)Cmv@DNX?e%(MQdxGk_+f|3EsL9PJ?nY3(G!Nf zyo)xEfbu%=sV355|daMOTT-36yLhhS8|(?r6L$pfWa=;MH_U7xpLHQj?E^MW%Q0 zdG751#^!#8%=2UtW>2M4XQu%qRIOR}G%)Y0YKZAbthE6f+BAigc*3m;LExfh_1@QV zt!Y){TW~&NHFlsx*dj+jAdygrq;D2XNA@mMZlli%(k1o!wOf8@Fx;i^waoeQLk_>w z-JMJU3zd*VHU{mDYpfJxbPj>wA@ZB3l4xFc4J|GKpzF<7 z)yy)MC}5w~$Qp7o;wuo+%6*!yPi7upLbG0kP&kc!@_btEL3@2C%Lg?&VsC<3h?kTP zxE6Bt+;z0}dHyCLEGkRWYCI4pd{UPzS~*`Uck`&`5%smxr*3i zd*IW~Q=99LA9fqOW*#({T7CQY%~GmMaA$f^a@XiXtIoPvtF&s*?~yM*$vF!oME4)K zohavcEHUPIbjq`)zR%XzS?gFHwpZQ!D53jC*|y2HC%D_zuJ_<<@_U&DOw%vWJ?z>1 z2N~xF&yC`075h=RynUtD@v=Fvt6pST8d7*Ws~f%dH2JzY-GJ^A$IlxGIpQ$hm~ zp65FptR{t}2v$?WxISv66LNPOe``8pK381#f?i|P{EW_T5fwT zExR)n?QfTM`3K_Xp+P3;c&{(Ow8txq|2~$C-tB;T4ZQlE7qpA3eie8vkU zm3=)Aeh@Yj`>n|v5?6L92nna3{4A?1uh)>|O(Q(3FdE4TWVyol*La&XO*3P`=Cg&z z#(e~XdZiyi+fT1*7@NQ^R;6NTMn|(SF&gBiuBN6c8F32s{B?L zs?DFrS)V>$ERKrQI&W|~8nz)QvX7p6C78~O8j7}GWQcppUS2zo>Y}O+x{xM%ZlPcK zyweZ*6lXeqqv-qgPvDpL-rtHrB$lsNU+7ybf=y;&$K?q>Lo=GJ*B1*IIVbO zmhmOH!TA|^u({bA)7rNQEt~dl9`n;ZZk+q?L^p?y+ilF)IXD`3#%pLJePd{c>cQ~B zZ=nip3WjEbj@gXt(yrUk&mSz#7wtdiJhqQ2Si>>H#9zge2>`i6UG9)fMjycCNufiB zKo*G#;ErF~q5#$2-~?(>fD!?Z&}&yh8bXJf1;Iu2-`N?$#bI#(j{2yD7horh!M zfYp&LJoIW@RbWG16S!^Xc(EcJwVsj19wP;m&|3UaLSSM1ftwtGof)i;rh}KjVn3qH z0$P4Okd)_{*A>~a_>gZH=T~xJVkEMgN^;6>mQMn;#3N0KWBt)gJxDc25wKm{W2uy@ zX1ntgQz~+`ofVsy9Rh=<$Wv$+Fa6M~-PE2m{GTbdlt!j*krh%{u#@!=B@QciaT@u! 
zbI;NS&j?2l_cp-2i8)~KpJ8S3|8bfsJpj)PM7EznS6*rx-|*hHK!QP+-CW1po^TgT zIYe{whqTNLz53Z6F?_$e4Zbdy>#9Uf3Q5_|W=_m%H+AYmt6UlHDp#!kU=5+ZHfVZP z>$l-pmO)Nnfrs=sfDMKe9b~n63Ddzo&c` zmEa`7b>!gyRXAv9`kztk!3}^Pcd01NSI$7{%X_Pq7!FNzR!Zypc6TX;C*5-LG-w-$8&|D)?I!>Zca_igF! zmS)i{ARrym9a54~CMDe|DVB|aK+0)5AMOo7ui=V6 zLm&kVm|*MN6Ay(R&He7fP7LvAA?v60K9W|_REZ!n^_Gs_^)aqbpq?&!pn{u7zo+B? z=ELPE6XTs&^&xr*yT3t76g98sd0~jU31YJg_p>%w{7uvgZCIU2!5iX0}+7$Z|z9sg3JmJ*%Pv0P5 zkG|F2-!YKvD!0G}1Brh=l#;8+%ryRqlJu_TauQzts3Qxl9Vd;}&8R-P0$=x8AdKLd zbaB*Qr9vnMB38O?f_S>K(*1#n_WCI;lV8KI;RbfGc$t8lM(c@q^RoA*Vf2UiR1V37 z?YZ~sO~WK|PwYy+9w1e#4D#)gj{Q%c1Mcfnl=cr`EmZr*-$-SGoYfe|>UzPo5#|IKf8uTKu@V zy4YC#WP4ElP~_SARkWIM3;W&Dtkdg#_LH%>uFB(>_GWG`_S9!W$-&F{!)jqbp(4#Q zcK7;sowQB|k1$TgU-k6YT(;0_e9UGDY7n?=Q-^7O-My`}S$R+RCxQ4vs3%97F8`hW zw>!hvIB{QyMFcVozW#k*Lmvh#CYQ!{c0Qdcm3(>k=VPT^kGjY4MCr**AN|*>Em0@O zUn1Mh*JJM7F1t1U9PJ81yUl=Z% z=nGD}2E1%9W7RQ8tnPNsot&C>#_vz##w(0aU?yn%ak|y772tHUe?M_39^$6YO+pm4 z`?m?t6Q3U4(K%uv6`ll_8-fh?-mq(^$T=S-pWlDGc)bY7P9_L(85x}LLP;hBbK6(F zq7ncRfy1%RF2~_zB7siX6NQ~*Hf|mLk`z)NZw94%w*tm*bsN>s-dJ~BQ1cbNv-EIk zPo(mns{A+J06Spr>C%ROxWJLl|B#N{NS$0{TVP9d1Ru-gcbl`7&{`_MhU1T&w?GeXUxdt<_1p`PuCBCz3}W-fH}T z44F}{TsVAk^f9aZ{+{2^`7}gZz^|FpP?E}J$uQ)fW@><}zf`G#)MsxnR{YV~(=z}d zyVhVr6AyBZ`|=c?FK_%Q(t>P*F(P1g$&|+n8#vz=g^x4m$2}2Om|_eCA4Lx z50LSb_A;7893Q1huFv|i&~UnzQ2qA{nASj+-^x-SP0nm67!0!S&rZqONPgaUa*h4< zD-rHy3*S?_!>N8^{Oj;Crhk>eP7ec&wWQ!7#%xsG(bOD{j*jC%4G*8@>j=D^tJPvw zO{D`$fP_p@H+b`49Z<>Kl*2PhjjM0;-p+9I-19x_CuV0IjFp;HAx**?-exOPPi2px zU0mQA3rn85`KKQVbT>z`bQ)RYb(Jo%J>*{EoO-+6S}jAdcQ|kQ&-W>5eX%JO{@LkT zwIZaVIp{Qn%NtkVEe0=%&*sxjUNxX0K&Flm7aL0_?sf4Oz#5oNIDm8~d8I3I0B8g7 zl!?ULoHmQr&q|)j{ck`(YYSF@%(2IMp9FQgs-*kd$nsgLfBqWSSD|#Vt6oxYbr;By z-blpDa1Us=0EJ;`H)oSUqkF}BGC*}Q6#pCyvQ2-^CY~GxbP4~9mjCZZglny_12Ue{ z^^o8LI+$~zajTV$f?aNuzh0>^TUP{$XwkJ)Z6JVR{$G*e-|LS`0NF_wp=~1p)P`;< z+(iiKDlc%f0HL`sN{U_!6p^;Da-VBgeWin^xtxFf-T(2K3p)|?j)~K0^1hv~ zpEMMBJ6+zEa{*M<^_D`E0xy>iD!j@0tQ4nL5OzbsQP%o3m*ZNy{o{WhivRZmMdKnn zO&}unPL-MQ03ef#m&oT>VL)5A0=eVxinb0S^yGF0RMp7}LBAoTG&XshufA4~pae0X2C5<498VQl01dYeUe#DP7B+RGo1aDjxCH zI~@#n5B=7%j(nK5?nrkSH+V9yPyQ|=Y17*bW&q7mSjvl$h^O_tc|;BIHnRgsf6l&h zbAI=?%LPSVD5zrYJvKk=Up($_cmp2y0IH`@n)Oa0Hj^)|mj7&yYYd1_O8u9D2K|eH zkj)8>>;sX@Tp$a8GM@_9oYwgLE6^jj6!3Axp|ryiM16&!e(DDTAV;&gP^ z@vb!}^k~G@ikimb*>I_n`T5yzjAZj<`&^{)zC6A*;T?s7ctZ-+^TdZ z08^e_^5#f3c|~4cW-CVxml-aVfrnoo=!0o+bRv)edS}1dwjb*!m-KCH1jWVj8Ttu8 zyut}$4GH={_N@VW5FyV6J2$@Gp!>m*Ljdp1jT^yt1KO}x4ic=BH-x$6g(8~ypr6QY zGK<;~bO5LMao4TIf7aI(O0TVqV&iyvpKC`!5Tpyuli>7Psf%K4*J2n>ebm8*3F)Zv z9BF-s{XRj{^bU*wX4qeVmhxXcM^GtMO{kCYDTH5v(X$*@f8`?R1I9BUFbz=HKp12k zfqy!WeaxuFj;pZ&Hy9O{YKwSyW(cVnJY>;L@tMx!N>0lmWC!cCL#v>Lw^(dqh&c@$ zy)VZuw|fZ2#dr*M&Ub&?XCFf#M^PpqGdpep%^Us^>uuA-Q*f(!Xn6TUg@fE!M&bh9 zS{Mq>(@k&3+^pTSBlzxT>s0cmo63|dZnmND}x#s5U!53uxyyA{oriyHcbbY=pi{krNvr z589_0+4ntc5Hr8qA1yd$TDouuj^$XJ!B9Fzh`>8ABg}epY)@pS8ahP z?W#gOrIk5Fz+?q|1ij>kg8r9VJ6F3~Z0e-6AyEcf{sNd3sD)yQ5r#rcZaMK^eNDLk z{=9}e)kQ=|h8B_3`$baF3DyFBcM6Z$0kY3gJy4;|-GZJgH@s3x=TdlizP7u1czgaO z-7bF7_nNCQ16__1Gu7Ptk3<-ja?VVZt!M#ND!e-@AzD~+4P2C&e0U>pJ~6Oe8+&s` zFuVeOWG6s%6SgnESby>E5#U+Fsed3)p=vwCAk2v!yZ#4e=7q+jIlNafp7=<#OwB4!f8Z}?vvLD+!JdNiAIdq_LGr&guPbgjK}?008BApKsDSUgLeDDjr2UH z@pBT|UEIEmCwf=TVbvVk?yz{L)8ng1mMgv%(1kDX`iWkcjeL#idlW>t1HK&Be$$(x z-=NDwORA0@>IP+K-i)agGXwFn-vG?;<8y^-q5TMk8KUhrO5;}U_}0%VDcr;9f`xde z&3NpB?#;`;E@>Bn)>InC7XxKzYLB?17#Hkm?6V+Owe~4MDObPMc&X{xayl%!yw-_? 
[... base85-encoded payload of the preceding GIT binary patch (a new image asset under site2/doc-guides/assets/) omitted ...]

literal 0
HcmV?d00001

diff --git a/site2/doc-guides/assets/syntax-4.png b/site2/doc-guides/assets/syntax-4.png
new file mode 100644
index 0000000000000000000000000000000000000000..f13005e39c448f42a1aa43e11f7266ec38562533
GIT binary patch
literal 243160

[... base85-encoded PNG data omitted ...]
zw40QkunI!Mg2Vl-Jr*giJN0`q5%JJ@^Tf5g8!Fc?PO+Dd0uGx2ez3gF(Laz^j8?(0 zLR3W=%nfHI8x>*|k3J5>0J~lxk#50d5-OIR3M0^w5oxJ+BdS41R%^xi_E*Qpi${6I z)Nj`_E!_nvf6Qm&bP*)K$N9r`IQ7V{&00#n7gp)Ue>+^jARgm94<007Fo`HTnt8}YZPBcj+qD2|X8@dwh(w;_CY0@DBoY zbY(93*R6v|90cbEfUpM353(N02+#mrz0gQuMh^xivkTi{2Nh6CM5$HC2g|>KNNr- zBR+m1m3(o|kFbBw|5AIjTv`Ltjfx2Qmi9$b-AkVY;2wIBfVSe-BjkMlSs^RQzt|Su zwrVeyNZR}BPv>gWEmx)?Q5cb+Cw&5qfr3#XZ*sI+ci|L{Rz%*pGRIIQRQftHvyaaw z9CA`=%~I*gRp+|%`zb1Y$*a2~ap|W}6Vr*{v;}@x)rNh53(iD05;4;mV9_z%ihn{)yt8kfpTzE2H(!aUG|-t{H# zow%uYcXK<4X}@VcBA4XoW^$L?aMSB<6&ojKU9Z$Kb2N6DH|?e-l^*y^#vo!%)c#UN z|C9l>9iM^W#cw>&DpC$3gqi{OvUb3G0JqldRsIX5x!j>+RQq%4!44pd>o{1|#Ur>j zE0CAeNVhTGil0A=xRL%Vste>t4g74o)q^9SHZsGU$xw@ADEY@M?b}Ro60=DrpX<#2 zpbs9_=sHMEwHCwQ?jUw}ZWyIW7+FD$Qha!no;1%%`Be@TzYR$2L z!-;3kx`EuK=9b{lDdchr40Xw`JW7ZnLFk(dh?gbwi%AGI5}22wk&t3Fr2ZIX$ZJf; z=^Oh^*W1#kMVxZxj~$RmT96$bXtbpxu|#EWo8LaW>P5m*m%ni^qP)z*m+1|Z zv`(c%P`w1StG$BNSK`XJmj28~J+vGs8;VyGv@@&G9u}W7y7k7)(-yMfL(q$pH={rd ziIEKr9h@&TCh>9taBB^B746RVSsN?$|3^3-m@X|+YTv9cL(Wn{;G6QOI+~+}gainc z=!phm5%8cg3#DsDJ_+Mnk#R!%XQ;poCjMgy%ROeBnJtsCe>As z1dB3Y@;YEW6qi>H-O`_awHgx>g@N#|Z8}|cGn?43AZAhh3K=8+t4h%m4Yy8Jn?}Hh z0+dymd!(6y`tcq@l=;_HbPI`x@*aF!1t`qsc2;NmJ8#s>hudzS%D2X{i37~XOLgZQ z^_VnI#pnH@~!gw&K2Grr1^f&bB8NC5QKKQSjF=2mq7+;~T^TiMOx% z^(QD&UUI)x&Ge=-#%Yt0S@ONugEl!dfpNFnS&iXtkR%r>9<%AAuYipm5k(PYN zXZ-(4oj2`avYGwnDf#7e{77bOd|YIA?U6%UT8$Os^=;c#l>u?Byzh1=&i@qBfh(vG zwGJ(>XI3fr=)oauVWPmKEb)dxh!5ZK=-NYL%QMI;&@JiV5O?6y)sVCl`O~)NK$&tC zwJyaw!XK30Rr;T`-|x@B3`kPi36pUt6^4>M_3;jbN>i2^_XI$A2@uW#iOq=x>4;N8 z&x~o@nt^1Db*{&Nt4amAuDNrru?Q78FHE2uj<7ealQ$_E7S`Ssjz-CglikJjw--Vu zC+7iDaj}GKB~IY}y+R>YgD_xI;R)BQ_#kRNL%XmM4&aMa?-h+jd@=a;uOul;HrS8%GCFeqP5!WwDxGSBxA8h&7F8 z3S>8Fx(Uq`+xN9TG=lP`GUjD{MzaiR-@UK;@ROLx)Z!1$rNJ!q)6r8^Xk zbMBiZ;E9HYmU9naN$yd*;BKW}UfU2ee%G@Q02*G>9ZPz}l0@PF9Ly(duV>oY z(i;UvXRs#Desm8Sh@k~ocF!2_YxejhX|bRd!2?uTO&zx5-nK55DBTBEA>K|Ph8Roj8b&l z#r@s;3d22n~*jGWF;+FE{30z5#E^otd z!mJx^oee)gDi5i#<*&U=rcp8LDL!xYGT>` zT=CxEd=c;BqV*0Q!JQlP{Do)vj4yxo$VV6@ocb-`qsh=x`Sy{G_5S*K_a7mh2FpO8 z?^^kB8BjDi1k-<6xL~?Vzp>nCGhvnRnm{FAW1+0-Q-POQp6;2NqT1G&wHy1_hFaiq z01W()RZ&7rL3uygS1z zZ>=mTn4TLuj@ab^wt}kf{iLjttH+vmKA0b-4q%zq*8_l} z^-K?5>4c~EW)B3A^@$^6oS8qQU7^ML+DN6t5LGD7)qeSp=NlUy1Q{XQBkXGFM7A~u zfgjdb&bEB8qxBUf0n!nbp#3O*l-jSNDe;90(TB6OCmy~R!Ygi@D0X@Cp4kmJGyT)z zYG8Ufq_=?*Z1LvP#Haemz1!n9_vp#k0g3QC17@X3Vo;Ri(8RrwPy6k(c}tr|Bpc`V z42=GpA5XWfCw(Ke(i?zEP1oi>omsG5^&v6`R8a%xBW>*?(^Lj87d3*7;s6LZ75qJ^@2p$wWpe)ONexbf?#HU=bzeKUe)!XvLv z)9WvjIPJt6$7=mfmv%syp_E|$J{~DIPeyg{`!r+$5IxWWFhEX5O@^X>VxxafHz|Mt z?E%{dPIDkrcB#(n6yX3k)?Bi+-`2g)=gqq($9USMYyiDn)ZrB}0oMTVueow<4<&bX zjRl3?CJdD7wz2_Wc!B1Z?^b`_UQ6`ao0Kew{_2SC#@LBH#ih{W&`0Ib&=rs@Hi zZmw)1wbxtHdp&%>5n%oorMocj;P`bsY=0@z=`616eeU;^I4M&6xKY3yHJE8&fS{@0k!RSR4qGAm{b!1@b6N{LAXy)c2P)T~L7(+!~ar=p)N zQK4|Kb5_tTeLc*N!zgstcRs5-2}35pC|nyKeUwDp`o4%9Q^Xi*AtI`$9_qd>x$b@Z zdxxT5YIE#sy(Nn;p409W;6?NH;JjO<3lBebbTpn8Q*t-R(y}*JIt$t6CN(%Ev@m=% zH9^*+%;CROMMf@s5?oHw5pY4qOxkmg4^dfW_q4*%P^pUjlVoWo81|;46}Dc)>P>NL z1KYyR{w0&&^(aY@I>N-A1VA%i?;{sb{@b1b7BHZ#O?uS?W<7MCmml{d2IUY@!cRPX zu?FtVwCks$y1MNUmtzJl>i6cqZt@gai#D9AFwC|-3Ve8Elir?o-{89qR;o-S9vd|A z?#6L2o9Z&Hr=}fbx!a~9BoM^-NoGX5zW>$qnvdFnGqtOVqudR%%4>!%0Gouw?}1E) z?FBf3_y{HzWF;JKOMD)$bVl>@Z~#L3>`JqJ+(EWKG2pqN18=@wob9)KwonUHgOxqb zh1Wnp=ga`ijtp)tKO9y+@KF#hE1U9#`O}MDfZG#~y|$<8`1LsZ)URZhVU8!bM5J1J zFOId-#p>)c{xt{_w?|w)gqf*T>z!L26f=89L;DL&}v(rilAkBixr2SF@kCbP0HTFr3~(^oXC44g=z=LEYrZ_h@MJA6 zSyy$(Rn)r7!->XjDjAjxW$m9rxs(|@4qe_|K`v-8lc-1bp+7J~7<+$STYd#VhQZJX zwMKtR9+XpAL~%Ci%)Z!_17!*+gv1x{ej$rgAD|t$)!q381c8)Qs_>9RwyN(>W|lwm 
zdt4Ykn*nn<-~2u?$bBG#q93?zt&|TS&UugZyL?*RirH*Gu>-6tg-=&C_LCv^-|rcj z`kn{96~07@2E+b-?(?<|Y2g5T0R}=%;f5t=!o+M4Fq9}8WF+E~WSCoPykMMmuR`5R zF_g*$gzS+J7Aj$?U%1%%(ncqS3D*7We%+oCLv2QVOcSwov^1AB(XiN(4od zT~0`pv=c3Y*B_nCn=Aihwwl7CG$DeIB<2g^H_|_^Odo0#2~Hl3K)^&Gil^`){G>^n zUm_DVv{`jiZmuHoBbeIwz@pk@@tYd;w&VE-FX(V?vha*Es|jBoKe>NqKY45aE7T+X zoOr(3A1eBG?+X#aryXysXbj7TdjZT3uTJWK#=E>;l@6G3@hK9*RW+%SdHpW@c4N(= z8R8jZ`mPeWVg}FwRZ@QBExd(FHjuu&zJ{uVw;Y?m5u=|`>+;AfA8#lD*oxur_s0Pm zNDV>ZuUfyVC)LuXD$7sT@d8_Ao&flK`t8w49MT{#2VoC`k5v|f{HP`zS<&osZ3S_LJ{(cw;fEi04pvtQ{CTBIR5SC$7DJ{O( z%f?2nLKsD$pEpl_`zl?PXxt&S_Xtol=jJc{rny&E4H8sOrv6f4B7*_Hc>rJ|DXAfN z+)Y$=F{pI@&XE2>4hfIi^ixW(I`)#td?Kwd5@|tPo9!zSr8AW?Oisju;e0T* z;TweHUCEo@Y8UIVoOWKPM@}2 z#YL(cX|z?Em5i`A1*2C55Kv9)wtpAYOo-_l#IlTH^)1{U#xhBZPW|HYSJ?G=gj&9Yh`-lN!QqDY3!~ z4J0X^SRn0zQb)4DHT0H&YC&CWd$YpsW40>^Az{TDO~nApv)!5CO3gwhU{U33pqRrG z`_*HbDyT)3M+#PsqfW7vlhrRc+0Y1oxTn(V_9*RPThJYIokmA2ewRFOmnU1MB!-9| zduJ%-S=ieG!b0+P@G{uto6EP5NK*O;W4Ju>D9NR@wbWm@zk+^6A!`Bfb5surj4V3L z?=#qqKYr&<#P#*(r(L1VqrA_a{Os;_Z)cg`y(a|^Rvl&zIkl4L)#<%5N8n?v0LT>o z)*@gZhTpK{UCbTD+(%<7&1_VkV(4K`e2US$AhRDWd@tV;GQGRZZ`w46Tu&)yGZr92 z=oar3&tP!E$B5iYGMZfK32!H$tvqf|>u!qA_nysNw+ALa^=c)-_&b`u*Ip?3fnr5Z z!F)=)Q^hAg`B}q2$!A9%kJUSn+5UH144dN?8{Jk`%u=&U@;4#oDdv{FiEoKjG(|;Ds^)g z>I~NxTCdESzJ;I5E*0Ir4CsE+Xc3bvb+U%zeJM9mEj1%}e9jyQg@jp!u2Y*epV@5t zRH)wFJL&gizjSfHIHBX9uRUEUrzzChnPk7LvF}Tm%^F3$?3l;URDnHAjwLhD*4%DX z8-ghHX6=Raw^6WU(nADSw#PkJB?*s3Ae?PLKqwZ4|girB1Y_$%|{!$@}rTh z#4_0BXl@iyKqeSAXyMar|6C9vl1;qfAtwnk4bym+>P1G4gHNj6GFDR~HPJ0nllb*6 zZu!Zkub^gcus&QI_o_oM_oB!LOiE~Qs`w@aq$r~Ak4cQ)ws>Y8B1=%tD`2rUt!69E zA9NrQD%bmROQ`&Cy4*PX11S#lfVMwh`DyX__d9-Qy{)&rw^|T84c3T*@G4I23?;rs zos@|^`Th?7=jE4+dNm=Z;!&yIghJC!8+)&nXKV+tNVdsrnZ$PDlzyQH9jS>zQlqA{yd2`ij`mjowrxaLjD8vZuZ$~S?Elbs_l62h;yWZX& zl?eJqfIWcpe_H`Vx(6C1(!`84F{DS>zKdMZ-bYI*qigO@EHSHgQ|lU?Ml3#s_v8Fw zUY*lWwQ7eQQK{3F72D~OcR@pat$Rk&=d^UUyiU(cHaeXm9kIq0p5tt_%O+c`(^PI8 zHA@-_+~^v=pK5fN!wa0a%2P&d+LG!k4#>(k5!QacQuJs0Wb(X7lvB|e!NZAYWxwTi zW2zz4CbWBztMzvj;*+1WCZCjq9!>54#8wAu4am7q>+A32)J-DrGi=WhnRApF1_o7xmXfmp)$IhdlZ)Vj zS|TxeA+Yy1TKZ$nN?9eNlrLn4AY90>0J55JQp5Tb!Ge5ed3Mbgjv3o~YSnUiZo2JM zeLsLto_uMqcq*Z7oAzTnFGu)8H!ZcHrKHG08mGix3g{ zL$S2?CYx!v{zUTF41^QtR}xgq4{`xasha+}Li)`UcOw7=JEHliW*EGt;=4}x#0XD_ zHwt5PNGRNPEq#m$fL$=;1UPxp@+iin)>h8F6KoU^g)Lr_4-zly*^M7fH2&^t<<4k3{&ur!HsAdHLhz$qvWy6F>D|I-}FA|z4VjS6!8{rG`t(+>`+)0*+f)< z?zXu?s^I^6`^}y`4bifl6f?(`<E$wwWvbQksCYlS8Wj%R0u@~S^t!6_*vecvTSkS>1Q*53SG}7R+Kv$L z{BM_3TleTj%k*RSKenZsY_Fa5zR(OtW*cv+6<#r!RQ%_gm2gNdenc6zonJO?0*9&UY|B@^b>(1Is*9s|yqJ|v z1DVp@`%8L-9(&7m#wQ?cew!$@bC1}DLw*rCE`?)FLkkH0sbwEt)fALJ`U{Ggj4*2m z3%vl$u7gSB6QcUe@pXTylc}6K`s>6O9GO$ZHu?FFlko{LZhUd#3;|efYE?U>dii&# z;$i)KsXkO1(MC;44maW4Z=_qIBv`&LbUhC{{-iP|EP*CsC!5+$%tJlxJi?a&= zwaYqdPEly&fb4pWQNMsm82gZ+UPO4$J8ES1aMCR}h=~{v`U`+WZofKjiR-i+93Rqw zj$4y3ewT`dMBhclaSj0mTY!;AswMi+gMwdiv;!vX2!7Y@9zaljCK~j+pp&KbfyQv2 zSeFiQ-U}KY#7*kv^-ey7Ckcwo)QFQ985SBLRhhIW04;jc5-m`~fzVJtM?!ZwW)u|d zk6Ij=J8)Mhs)*XaqLqn?AqL}P5;FYG5iPH#kP-!dl;jkJ$Oa@gT_tkJxpEp7C?e4) zJpEvZ^wTdxGS3&1RfxqPSAa0i7!v#i5L<0!vIHTgu~`<9et5_4ccKe1dBds$1q@DZ z`tYyvP#usE&VHkdaYIXhK4Va2!5k^9ry`MULMo3tt4;R_HnFH1>ryZV9&$f@Ie;~S z3!=axwtA6>15Kl~=mN4Q6$AWd!RI_~^iOZc4ohvOr5yfGiltspN=#W$0P(!cy-g$~ z?oG6b$2J>^#9LC1Yi$_Ev+tp{uJ5ry``9P5=GDI3&p;;mqw2c0#HRK>p6Uig%m1%OB?v{Z0*qYaS58vu-dssF9Gaa2?OT2+lI`u7IC!U2k(vZQq8cIIx1 z*cYwR8Itm_4CmeA@<>AMO4Guq%*K^GG}Dma+!;91q$A zq%p0dx}~S_EnVhqoCqzTY?`K%V?dO&0J8gLn|f!ROS9Rie*XstjpeVSNdomU(qit? zMC#*gbLRLBY{o?ELOGdX-($~i6f1HnvUj#z&~Z}0?ed>XMns9MM}YC9`->!_haYzN z&9E4%aMzmGZbtTdv7fM$dRh=*o?W##bEDQ)+0FRSiG~U&P>t9ZLqovPEARt9=ijv? 
zJwO|ofDF$Nr*a#mc8G}i5{m!#_A&&Sl0a%xIj|%~og9lZSfNS=$Z{X8S9qCY<(|cL z!_FysmcHUSN~@P{{AEu=p$T#;()BM5ReL5>3k`IeD1PeZ4s|# zQ$#`>`2j*A;8Gy=IwS+~L@nnO%z+EJtNW)G>vH6~G{GaSIzZ?bbRJ$o?vq>d%?=nK z)RjdP?Eb*6{({4NHwj&84-7;T6ROP;*g>Qp{>n0d2 z$DMbec12#Xa`t;bps3`*KYl&QuZ+UGi&m;xorOfm!=jNQiui8A9lDueMs+~&v`Kr{;-sHa~9#5alnFbj??BqT6^=@H?U1UpM#>vQ^w;l3ytJ$X0I>+OtL;FPjMeEONZw4F!xZw^I|J!adgn<=$*vl%Y z3S<{E!oX3^v65BmGzYj!GdmdWjnMufWdVFYx7_!VHH)kpPSe z_M>1e*YUF&U=;cz3X5SE81N@IZ}{=Ie^J))1SVD(4V-ry!j9D*0{owg$GllB-BKFl z%*3U**IW`ayw6%o<@f;WUEg28dl+akT~y1wBX$2KPy3zZ(IxB67080CRsO19pU&0Y zlG81D|Bm8sPFFO6#Z9+NAdCwS*a{g!w^|`Bq=&hW8c8`NAFQu6e-^l|a@J|3qBs5Z z<1sna^CM2FZduFf^((HY<2dI1NnzDA6;qn*`Nx$4lm>?XEnvhKktPt~b_($hY+0Q` zl~Gt5IB1gHmhW^)tPDzw?BZfcCn8UV*4Jbt$dHJ}7tQ+MUy1Tu`d462IeDImeP z^y8sH5}DeK=FQ2#+K&=S^{Z@-BE1LX)&`Kv@ zf?D)jnWz|Goa&TWv!ZlA?i1Lv^?AVH43X&=gyfS!k*VbiMXNJqKcR)ctpXmHFFXiB z7SXn6tV-+te|!UZ543mj!~(8om4!WRzvwhtd!DjSe7izJK_wRWC2S#}xf!&8JcS|Q z{g`-K@6#ztRZ$MR_!#wO`?wEue* ztxNZ?LZNe$T?%unM7Big3@GwY*j-^bA}I}&-bw-LCM@kPAI||rV+y+&9LL1`i1g3W zg!yQHUsv>)eQlMs->qsti)x9~PxFZV8ZGd0(T~o(ba}rjLGAQ>+AQsLck+GzK%t!$YcY4A{Xlz{_)ognkUlSV-%fRmD>adOb(Yh)C>D$dcRV~zTg3KWA1EDy zCon30gDr;89mk98!#|zg-xIMPvzBO7rrD#q=G!V=U@A_HX1cHAIU?zlvAoAM7;xXm z_Cs}QrG*wKYryaC5`e5)9_61q7TzP`0{3p@qx^LR?;dUr?8?i|kd5^Z zBT78!#^X_Qd&f~1;G%pvo?$Oof>|N4m=RQYcxN0XE zveISb8E8Gi?gP$2+@gb(amG zyDb2az&B5aJ{A8B*1NYZ@ZrNfv9FYV zlHaI%L<`;hGU@96{c*jt1KjRZFGckI_A|~s z3>;<_Z{lSC&HLh3=3E5~9=jO|q@X~~@}kLYX~!6ak8r2@UdZ}yg_aN|!q_#$B>E6xF+8dp)i^W?FK{_09IM68 z3P6Jh(V%I2YuHAS6WDH$R<&Ffv@IkJBl<#m0wZe?NIlRD-CWu30R_P0vjEa1EqWl- ztt&$_YrWx&0?}%jaoMxd=@PqjFBhFNK{T%X<~x&94XXU;mQN73W+jGnf`L%c$hQz5 zM=GP9jwN7FKwO4-NvlV;YMVXJP=S*`tag5LCmO~U`G9<6-5S01jnHK>x&+G(hiu+` zlB2xDrU(h&a2ooP+Gu5(cbHc7EI;{4j-3)a8GZ~7C9}eam7K?ryglCr6hLf36N3)P zAE1f@_Q8C&2cC^+FQnsdipL1LQ!cAcn!9mfj}IWe@3#0=-@#GJj}3V7x2N zfw#dps0M?=-2LKo<9Cs{>yzYRl)UdbudeyXmiHCR(+Zr~6dtYb?=v=j(p()zz%MrxklYd^-$XNs|66sCibB`PZEhyv0{u;ec#oo~iFvF}Jx1Af z$vO3x57ip3W@^=}dLqgAs>8u2@s|L`$B6R1vzLa7V#?l^6wV z&dZr={2cy*WwJQFWv_c7OzTOCI}R>ll)}`U3amdfon*e)<8=Y`Sh?Rvd7s9@UNUl?VjlKmI}^Pk4_Z`=Hc8>iy zOmWq^M&N3M!fDNQG4hiqX7NJR&(2Pt5432VwL8!om`a0s#(!gXCsWt;OS_XgIhN67aA(c*rw7KC~Z^>r0K^23{%bQ}IDXSr)>y z9^`^Z_!0uPh>N|>RkIWsELW2~*uuYXI+TO<27f?>#Q{9OnI_WxXl)1e&zwZkN0p1c zSB74hyfSmc0qgyqlwt*c{T?#z0$S!B$<8LYH!4)Ni5rE3(b3+qNCin~F^Y`@UW`== zc&}!8yud&1uh{#jv21y&%ryOO{8V{=ZVg~)kpjsVaXqo{4@eQxEKAYBrsz9gYtb-c zdDyYCI}wH(DkMUE@P+g9{hEVLA-sKIIeZ_!&ss#>^g3b-EH<=zN#F*Np!%>M*30Sl z9(+IEY!9@dBAVqxQL`*`UG(t0ZMh8mkuQYr#nUU^8vZ=i4lep3K3se^d$FB5I+G`5 zMyE>4YCz(Ng<9VH?_@n53v{wRqeN{z5CVH(y+txPmWo!FhutNO%TT-(sY4yrSi&3e z?iG%`g(2zZ05&Xhhg0buG#)ecH9)mIr00FMb3E$vQ(r8-@h(z6cO1CHX9((@x9yyQ zCWecg91eml&_fiUurZ%&LaqfNhXN=+NQpgiXPiw{a`s-I$DXi6%u z3+TGx=ebjFj?qw zYwk6lM9oL3N?{p=H0@kTqE|p_A6Z5NhJ#Itz% zIu#9i)Cc^&)fmV<;^xIVw)uzcdp8exgOC(Bja5*K*HNy++y(&p$Lkpc%dk-xh9ao~ zZpJLv2G`ZmOyQSmR3M3Kz{o}v*LWUZOdRSXss@|Yj`w*d4i-XBP7qGkrR)zCyfH(; zQnJWZ5h&r8Z{R6{mGFv%+xhz1!Gy3axV(MK zjn2d47EY{@Nc5fOths{ZFF^{GHh?wP>8Z$AiG3yUn$5?K(z?(!_oAK} z5(aIKYKbCpBdGA6{-vN;I9Cgj&7a}|bqJGef~k3Ro0D<+0-mk?q$=Pd$|C4;_&vQS zxzH4xSzSHc6nAP31?<4XzFzFz2La=g+QHK6Jq%{KqW+&_o2}+ZmUA+0Fl9;*L`{4HX_U~U;@Oa^oYEZ zB*J9?)#dN!b%1L&pJvX!Hlgm0-iBJRED8ewrC`7AJ1B{Z&r;%lp5YfNO z-%*DIj_IN3Kp3&+EDyc>e0C1F!%VIA@C=YIheV=T|NhmyMf z#nKaRo!i6WDo~%V>#`-e?UsLeU%M4P9rBL*2r*wln7re|=ijI=Hq+89Wy?6r214L_ z)Jn`3I5Zro1a&VE8LYxyW%K&ca}q1qG{FjIg?A=<3(MlD=#v9m*e_!AeYH?ome|y( zX=M=H!|mzH;zJ zra^w<@^D*lJQO^6Sx4+J9Wq}G-xJqnQa!T9x`!TS8~Sph;<`f9y{w+$cAbF~gCqp| zclbSo^PQKcoC8w8kl-fMP&6-c8Hrz^0VrE|w*ekXxD=p; 
z)v}g!Gk@4$%Cr!Z81MKHX@duv(8c7|V}CPrANP|?VYSx-Z4{6;|Sd*wjU#9wPfaCnU z&`>$s;MNuHCzg!Mv+x~fTJ+|X5n?7;bGOcWeM6cfo1=0it##8pYq*({VYY<|N2b1 zfH@-AE|er6Oilj}T{i^nVnSM?!Mz%(pkjaDgt#4X&oGF` z*<209`UMP%KQh%jU2{T~;9viEAGGwLSKSMI%(9a%^ZG$rDUOz3WjXnZ^W38k_N>@F z?ztA%x^I?3LxxqQ*N;RrAT2Ls_98(x&baaWyhelGug9Qg5YA84UszGe{&|lVV$tD> zQm0YsuxX5u(`R$vf!XW%%@JC4XKFI(rldF|1i?OwkrC8m*m10eq}0?qt(jqM*Lg(q?rUG zh^+TY9_rH&#Y5oXC;vn{{eN`5Wl&XbqdrV*VAEZKq=0m)Al)e<-64|F2-4l%Y`T$> zmTr)iMp8NikwzNcoAdnQ`JZ>r7iVxb!&+;vyRO87ul>eRDl+(E(t=^fxr9$6@5X>j zOrWko(kEbm$V)QfUm^l9CfLw-;T`Z&@rjh_wSlCHs!EL&6zl%k)yk1_g(W;F2Xn4V z;y!J*fP!FRaB{66R5u1o`z#@ZYr<kHV;)cXonZkeVV-SG2O9mk%De&)Y zsv-1Al!z#UxyySi_>@{Ix>r?QNE>AJ@&&Q~eqw_ivUueeOC56yUCA;AXtP&v+#dT6 zxxZMZepv`j`5_IGWEk<%B|bu=0|C4BhO7`3;2G}I7 zywGRdf#ePp`-Q1?4!1wlpWn`5i50I@J#4x95g-aHF-Y0kNu;zLC~>V@e2+-~L-vs% z%tdzCd$uR2vwN}xx4@41r!p#LD;c#Tg^5^`AH8m8K(g6oeU|AW-TO_4yfs?O#C}#mtbWy}~L*1_hjwBk-zU#!RkAQ;ucN_h&mk!})Akps|0Y>P=kF|<^30{IpwXpgj%`&lw;I=dQd)14q$99V9Byr zY_K10TW`JqlHRMwU2moOgG8Jem{p&;PO8kyH(&ngPTyu|`k5QP0UDYTq$xdlOl!

    DKu`yuh~RaNJ-fvRAlfEN$x3OwdxB5bXv z%EfA>NqwBLKW1PmG(Vex2&i=>oX?hK$9#Tot|pLgA5cv|`p#+rk(nFwVD7@1wT8r< zOD+E{(L}yP^^I42DGOx!yA01frvCi+Xh8bh3TFEje2ICxsn+wgM_oZ6Pjv{8fH$7% zhVymS%-|Ul1fUeD(`b%i(;|d{$1O8cuD1z**CUXSE`sCR1N#8w1Gwb+x92u(m!`tL zG$$Zr>GCD@qT#VH* z$aUI-be%M{w{iVVhjH!w_K9D=XEq5ZjO|n8RdvI@RTF~oiZ^{RPd)u(<`m=+oft~V z`8MFboVs6lH2~jZ{Gq(q1Nj|*y;MS6H$unz|CjjnB@$Tvu z`zS{BRT(;of#+Fb7D^1mZ)WY5iDvc@t@)TK}G790XUnigf&eQ-# z+Ziz7HBo2tW8UmrOX%mHbJY7jh*)E5%8#{FL!}J|(<3jjZ-L38)aA`&fC^}%&`)P@gyh552jRxeE7^-xJ--qZ$ojyE=i0r9Cg)iyl;-AM|<#tT`3{- zD+8J)ogwxm;Zg~a9ife;$>>e}eOVrRUtAUWf7hDptz~eqzl&hL1%5X#3OW5@V{V$Q zPr#LQiVz&J7IZL450Ur0Iz2j>KfH}7-{}L*gKmo-SeU-&OHxVmIvVwI+!ny>vkpN| z^^hpa<0lWKU{<_8sUI<|IF%7(5gpWl72RzkJN2~s&_rE^@yhDAg z0A)Yt8Y~ai-YG66A$0w6^ZaW)EiJP~;ho|W^}+H!rt=lDSmSHVu+xi;;QmkEH-wj? zw~V(nZ=ED;JYF9EY7h7LX)vTNxv^zN%lzGT>Zz>Y%U-(T%Rp{^K>pH3<(w^xxA}s| zJ*ys842is>-ycega6erFW~n|4QOdRnzfVHqS9!By{FN=+c`>cYiD&z6Nonr+B)<{X3T!-NQawV~ z{wrbIBH;J3Y)}(Lduq(@H%8+U$9G{lk=<45}=l{Wodsje>^t=X2)n=(clWRN*gc@*0y_t z=Ey5I|B4Zd+>{UtDEo5P$*vZV3rbY9Li5{WE!fVcbu>vR^7w7)N;JWh%O? z9GGn_gCb0ic~WlECkcCN;p|bozTyFSlW~Uv8e*v8u`YEs4XZ&)=2yuh5`)p!Tf-4Z z^Lr4zaS2_1c%^(WtnVls_o?_I<3c8%|7md8(8{BXJZ3;Ryftz_U>H5`zCT$Kq43d# zyM#Mh-EoONd@LH(!LMq9F5qnw5o*EfQ_Z_^M2Ei{O55)XdKz_opQ7r z_GiTW`J!^hM(hZXpZ*1ok#_^Rx&iy6>Tmosm445g)-W9qrP=x%G0r*U0_PFVqKZR2 zgDY@r-qI;MDLe}aZz2z1qQUd}Yg6aDL8H`EG!&ghtN)u{2%Qmr?r^z^y1FxE?6%pV zIh}vSo$+EopX;D*U%5)+O|ec}gvV5#Kl#<*C*_XmK=$t^W)!rW^7$}iQ}MGnZ3uHb zjy5KcBd9-$M{CQ73O#1-UnSJ(Q@vC(R>0L_3>?FG2){0|eYP4!8kc6%6sl4|*FVm= z==YuW4wCGpY+eG|@Ahg&-vND8*Ex*PUHvOQT`h(tywngbreCEeze$zbP#D`h zmS{ZMunFz1DX)gK1;G|Pjlni%L-^r1D!2y0xJ&o0MND+H zx=k7K;clZHyYlfX?KM~9HA;U03_GsFx;8CGQtffS@$*j(Kz!uc$6|KJ9FTCDANH+8 z_>LF09Iz0_^fn?KtSpP~3A5+mF-f_4-W!a4m$*WXSLe`RjWfjkxM&}IfZmr5EqvUQ zhfV3e$4ME<=|_Z?N~_fFb$b)*ZEjnseZS(5ePtiqD;Bflaq>zS%Puao4rLv|19qb4 zFhJ)eFd+>HSR5V$Qf%|(w`}$RwnC{HJS2}q9E4hy<|hN3e=FYbWHp#_5Hw-#Y$w4? 
z-aUKsJe5N6>CbR6ezU`sPW`lGxUB*67?B9#Ox!EUFEKI)B=W-g89se|lV%+Tv%=z3 z@DSyOdI*g+fjAX6G}|<2H%&+K$P3S9jzO$wK(y#F(~)QgUo;vfkH8)@)NTs@%@6T{ z{j!>oV@f0aB&ON1==3e$xxghACq4)rpexGn8wv66-+sSv{(Hwv31w-%rXlr|WFU1L z*r{Hx-g_+c{h!ptiL%!M>~ylZCOs-cKBf!@=h~!ZscOdl(^DL(|!=RQphoO?OeSHxn;a zx-IY4V-TF~o$Bhx{&Yl`h=eKt6Gx2rQ!CZO;#ce8-F@4mUZsw5Kf>=rW{%!A9;uY+yTv^?XvcrAsNlb?lLWqUeN2z!{_6}hiS zSyPoJgjz;~b=kt`;s>|KN|ZN86^y9kP#lI@L<*gOu{EeE7X#9|<@V14giZlU3tJx# zHXMW*-XH~vo#MOB05imPI1b!NJV+B5Joj$UgH~}IK=Dq*Av87xFDwfQS}1>!XW_$O zhF5-#Xk9quGNvfh55MsJikL(L_q$1$IK=bc4rmcaL;L zBB&3I@q&9sK}k)|m->QI$jJ8gTEFK;6RcC7 zy$$aJ3bXGOx;0mkNgdpy0{FDfyHq4PdUMnK!_w()mHJZA93@6#_z+gqT4~rxHuc&t zoCgnyK_yxpx~V#^pR6q) zvJyKlXA2wtO!VWOpwxZe&V8R}ZO&4O81VhgmC0FI9}&aXNC}lO3ESq=jtPxp5pB0su5T2}}4o zM1UaEIc~Y~wH)t$McyC~qfCb5jgJc|-Ute2=*wo+?B2j#YDP=ezNm)V%Ub5#EY`c3hCS3uj`Z%3TNS5)?S5(Jq>oRFYX@4Dy zMeallkD4MKY8`1%trwg0aAc^geEzImZ#Y-ZHh68*K4o3&_UKG$#lW)}Su( zHiBwE(>NOhV;Hdky9Oigv5oY_BO@ewbdK*FB{)pKntVE$_!TNn7zx#yvCEmdMT`e= zZSBRU=RAA0PTBPh*6iY&kV&mSBOu^9^%onh(rUg{9NH&^RnbI}WfkASijXU-N7Y9E&E*Qje6`rKrS?bx}EzNOKIMFnlx7S~1T z#nt2Ots`iToQr~ywtpRxb{+Hym-w(2r-~mejh=~Il3Q~Q);Mdc{R@%Od>HW~5K5ua%ZBID^gY># z!_(!sL$L0IM))|#YM9Bd#}c&*L&&EkJwxS_D!b_v^bPYmC?D|P=%S=1%no=Z;I}#3 z_}3X~U()bT*fWRu1p`@1GqIzA4J`+WWpHu)@KbF3#%_=APSD!8A^3N8*Xrv`Pr@X(d-;v zU2*{NbD%^B5|juN2U=tRuYh?Gble(PwaGua=I2)$~8vt;M&bb9bkINbXC$> zv~yUFt>^yxix*p0ITQyh)B7sV6*wYtuPhH^*KWio;k=P;TaJQetXu#C-pY;?Ibtb1 z%wYM`TzsM}dka!X0@WOY*S=Ix+DTKxB1W`|r-6w*bA z)=Eun1lHnUa49^SX93F20;+63*n2LDe*7Q`ZyXjbx^8S@7p~{HQ9guS1YR&AM&Hk9 zVtxwz+5!0MLuR( zx0OKhOd052dxQ{5W%vjyH2a-BS0tRWw!R0)_DJ4mv!(77UiSJfc7-yGl^S?iW;_=) z&MB8ByquX%TFPfnc3W0BvqbE0KENcrKqb5^4{B7lUpzhpqvgY8WiB3RIloxQ3JQN} z?BC~-4N{0)^Jg{&b}dF)Q+6B=^JEr%ReD|Ta-ItPf%+V$-%Q8fHqGRsUk4BP2;arM zRcu~eFaOk!>!hM?aiSnDP8cX_6iB*-kt47L#aWhw>9yH&1bC1cy|rr!*%clh0~%k} zk?VGBX>-cc_dbKq(IHjpqDE5}Hq9A~ErNo}A0R{tNS(t*nQ$s2Vb3kkN-_^<^G~ob zQ6ZS-^kN)J963gB>CWpGY)%%q)^yNTQGb8hGmaAMO@2jYCjk{Nm2lU@8E>-Wm4>jV z5__TIJQ`!8JMQIM0SP+0uMq7NKL37e<2Q;-Vq5PRoTXu?Gy3)w&{4uyV&a?7Xy)7j z@Kd>fb|p~0q-jGZn_CS+L_LBq_ctgHR03F1QfNaVrEy`1Vh}X zG7(=u+Q%aQJ%sicHI2fDc$gncEag2{U!Ld1-P`>TxWFojW?cGW1B6N%jFzcOx^oz_ zsRR^H=Jg%Www;fspwsw>#1kPf+=$~OH!+VpZ^c2Y3Gn9?`TZoccZmo_etMGY(P*xc z=3zkvi`;?m(SzVhbwQfUKmXE_gm;IA@=-f9)rH|uH&^cnAO+F~@j)+R>tm@a`Jn`M z?8xWCE8cvPdg+e(L|5ypE9tbtsy(NDaNEAJs=JN+gx{>R^A}6>6Xt4N1%=r8r|_pw z?DcJ5bNw3afeZ*zs&TCO2Topb2bbG?tr+=d|oyCg--^w$uYwLmZ7Hz|`K?z9R^ihavg znf6i({Le2U$WjS9C9&oQ7E>EvV7#}HM64a3#TrHj##rhuVbn@0Paz+VH=FlvDMCMn%S-gidQaYQol2Pqu}P}@P+cBZyOE1Qm}8Z z**VkTwK+_+$3rKsE9hxKe&(fsxb~*3_GqmaZ8UPYhRd>QH*M0W5w{|}w`cG*+XLRr zmYJl(CJo2%W>))OC>KE1F=|l5x~?d!z05HnH$TvI_(-}_R`T9?bI1>y9-kcF9-69m zR_*X$Hh7QQPl$k`TA{?SUN3$|QxX>Y5vX*MkYA z(u_LB7y%^!y{M9UwZ|1*Q7z_X4I3fY8`KE>`J-24Li!K9{4b3DALq)F7?m^x3yNzO z)uS%sxIIEBa9&Ldr|4cu-0i>Hi9)#-n-21qazal_2}fY z&Vj!zXTNU(mGK#cjxU9;9~$8-N=mE$xQAhBzQttWz*Moy$2pzfnR8#hTmFmL|2IYO z;Db?jM-n*7^f(?k2B_AhrvYbaM*_m9moEEizKGJbQ~(2}@khpt1IV^pBy^4J|Gc?< zX{rStavU1qO748mWGL483?TrBK=sGml#IH;@jiFEqi?BFJP0E;>M_4t7M9EPx5(Oo zX~*6Pweo%b_Zo%fxZu1yJOcNz&hVR?R-=79&wR6P(!sBc62s>I=b|4&V9A^Z^BLAO zlLJCIIM&XRg z)$FAM`7dZD8^RSKs{xFdmV^8m(dRp66}q%Dm=tvWcNg;SuD!X|M<}YJ@n4es`;r@LpjO*2{4wtwsZ_DCanDX*QJxKE)vQr;i%flbvUq+CYkIa3)_oq|<+>RA3|$Do z!4b?#9qDHg+#`qK;ulkcsBi7VtnoqVOK3hpyX%RmR;8iul+%3m>ZOS?TK}p7j34A8 zX#@3gqSC9wlwP7$=!ZJn<^9KB%nHVGfCBxYyjUJ^B-~|`0%G8F96-BM-f}F*2L#?+ z3eo|<3^NE0?#XgWY8Co|JD~GPY$r5;YU@G!t&_6kvL1>0sV3xMhzUKANh|9Q=f-@u zIa(_KfeE+x%RI>jG{;dJ*#&Qa1;{7!TglO$w$z`b_7MJ5ky1ophqs9I@ias$QNNp^ zzt*Jm-Oqx@n++O5PF0cr@>Nuya9mD5p(^LfP#0@g2k7kK)WfJ9BJr81>}L13$evMn 
zsXS6;WM}=Q8lNc~72nqWDtWnLCtYRwm0Tk*8=QMprSe$dakqmk5?OE!D&MBxN)2~) zsFkV@fR{H;9bu#opcG14&Cb?c3a%4qqd^fcelw{UKOck~M3)4{cy06-%q7i$MJj5q zJ3Th*#9b8w%toN3T!8UI`t*xV8-SL&$=(X`WK)8Ya@*WUDWuC-wl}`rKorBE*Yp85 z6W;k-^J)+oSK#{~!dhrh2SvmKYv*zZ%tovri0t*rA&BZ**5ADZl+6_@*K5HuRiGrd zo#@&83hW6O>^Z%~?@aZ}=ce0nLoZ|pgFY|Z1ejF#cz&JyR2!=-i9U2Q?IH1vSkn0{ zG|~7!)M5-j?rg2vyIV#pQy~1iXCx)S{UyxRN23Thq1!ba z3=P=#GA-H%Ke55%U`AC;wY4gx9;R!+9kJG!e#0XQosBF50bD|4L=k|JI(UYe`rJA- zXxxPV!G}>J%ufSD$}-zl_o+V~Y{yb)hkMd_7YXET zJXgz1ebInp28h3jN+aZ^ZM6}{(>@r^m z1dT}H3g_u{uTyzsf!8O?T%cDDz?FhQ;~f}^Qc>hPfu^Jq@{ZSFGZgnAAPzPF_n)#G z4p8}vDkS6x?^=JHeGBgi;s{}|QjyP-i7GHAm{o(8vM$FU*Tz37Td{>*L}13DhE#uj z?n}(Vo_PwLW(&Y-$ya>2^Mnsr&|0?;+1Wx+qJlXuVrAId$s4!zp9b!GN)8GfHYIo$ zRSfHWd9IBafccO05sbgwR~&W@@UE7cg9nYcNcrZSd(x)m+-W5IAFB-i6rEsNR`sJ2 zSthXU1Y+uHuO{T z;aJ07nD^>y^dPd{bjc?LzKAGpg*PB{?Q9E&A=4#yhG|78f=C;m7v*5Fv1W7|WL-_< zPqce`v-j|Br#s#Vl59w%u*{3M8Y!?h?{OK_1AcRP`;|AI7f`ofzaMt6IG@%qc+7fT zkgR{KQxLZJH7dqKaErF$B+4n zMy0E*QEIdbKA;|~Xtl4FzOX*JOAflNu`@ds_|W@jjpZi?CD}8}gqi0ev?8H|TCiUa z^hx9uKL{)+0JqXp;nqDYHY#uuK_5MGV91ejP}P6>yMFoV8fY3$Fj$PrQriJT>M-?2 z+I2gD#iL5(rv>ac9gzwyt@J$LV$(&!s+fbvx>#XXI#IfC)aC6g-(of%tkYh0c9>7s4=7WF@qBElz*MDQxCZ7 zlnGIyK#W_?j2tz4FQkhHuxr9pF??04Y;`bfIhL~)mpNPm1AX4wH)6XXAJh&kTx@ug z*!2}IM9sM|_8j8K54+&Bs%`RAo$vuO_m5hmUl6L$dPiQYPXATl?S?S=M!$w|yx01t z0h7;v63G_GpSA`?ff5FC8e$$>mU{1#nvexoxysxA^@OF|S9VSYX_sJRU8>yJ9r-Jv zdIRoZ6U@wKGgsL3+Nk(QL}489f?F`-+*BlO0XR zPCHCG53wxu{`I-u?fM#!Nn`IZC1{)QSzn(_xpR2Md?#ha;6VV9f%F^1>F?WsH+1~H z`UUl{ToK|C?z;}ETy5-54rS?DCu#`MMumhIj!t`bepgRkv5%u>;G{-cEGv3R{P%eKME$I04GRDqeM7_>j<=$3D$@ zmg6+R5oL&4^SMVitID?yGVDqtu}r-;5hx2`F>hY-uUF6oD9mZ4B4V5s zJtLDT@GM?+2L~q0>SyDXmyad+bUb07TG%!JTg9`73N5Pit*iU2p@&tjrR8<^F_xHD zl}St2^vAfKqQ~|7%lvyo*YfVz zQRR+eZ1B2|}X#{fsiGXXSY#UIEv=YIkLd-)%%qf)mRum~kef_Uk5v6$; zvVm4XIuOrXAHsQryVx#~AwctPZhG4O3i8E^PC;&#i4d8O))x^r!2nfyi>pmtwU6|z zV3m+VIOnz`1CP1Fj%1U`SFj7KR;x&b!#gs5kJcDf2x+2Cg_^jK{leq{P}=OLjXu?u zV$z*(!+8Oc6;r=Q1&P>5Kg@tF3a&uy6-1ZzsCURGS%4iizo%$l^y&>Y#TMb>WaT1c zA6ox%qDh?)h&?1zJ*GAC(rx=gRF|vVa-CQsYUi}HF7lA)$Nv^K&HCeX%vKuF@gS%? z(wqGzRfh}nueDfkX7nfO#wsz$DI0EC74ZJ@wwu23uEywi=aOOPw}9)ryB-fF?f_f( ziIrq(q8gH6W)I37gdTG2RZ&#IM6IiZKApx^PkmmpCc+?x;|mE zyoWjTOup<$^8Of7x7r6sEhR9~6l5=2!7rSFL^>-;=Wwj|62vP1n0!Za^YhR`zo z@x4UMiB0*B0&F^Y(v6e+xD|%YJ_v9-cS+oJHU%9le4&m>#tmWhQ*a1(`T6Dry6yY! 
z?m&MT{a%l;!#NzUBMUW`xtz=w>!YNP5=x^y`xIko&fA&jGKhcqjo5F*Sk!Dr{~C>a znXcVBkm9#^7N@#%R*3IOl;rlhl@`gybtFx9qa;YvSUq-9+8Od-i{Zg^TscelZ~hO%sb+-322Ynjxr1$L`mDw3DR6q~|8AWJ5fC<^v%%$X8NBZF@x8k`1qaRd#@|o=y z-`7tJ)~M2zP!i1Xv)|yVOPYNlS6)WH>$(+>q*OsMx%#A-v3tv^>$;ivh;AQODlw1; z_8v-!wcGCdem5I~X$JNblJddosuO-o1JyARkZg+VAAg0!i`|H3Gh+DSF9;yZA_=T= z;bIsm``x4e1oXmLAp(?!gm`#vT3DJyWj@pyr3(vcy*WGv1X~9me`mAfgitjob^31k ztH=ynbzu=6on_y4iZP^ePQ55f?98Q0McTmSnfu;yCbJxPiCaFdAex6vEtZ@Ah#wr( zPGrq>vV=VGUfCRh%mLqrQ-nxZFj=#J8aGWuutt60D*2AZ1v~=Pu}G!#==*zY=;>iwe$DKseXi$vH16w{ z^Zx{N%_8E86jL)0kfHG#V9a<4>7p&(`4KeKL<^k!U_bWgb9PW(e>tw-aQtgrc_m6n zdE?J^fp+m4*WE6G`;B>kfQw%$U(_iM6GY>;kw>G=QcI)3S#ZwQBjhNgtWbOgG%WaM zkLz{^W#ADJLR|u~P|_85qY$CbzL)Ehu~L4yq!tgl_*GioGGW?+@k!NGVo-0HegUx`dn&?BL)tSajDQTYvFl{<(-ZTP&^tu& zU4|QM2Y%KYYjPI~&(rax9h%isYo&$&ZV8z4eom;bA>3~~r~wl6>EU$8Z4#@_)nVmW z2TIT#HCzWJ@;ogR6^H21_@;OcRd(uiP*U++Or}u2?z~j1(Q%`?Tm{8ND-~N8`*ERO zXKgBq^e)Mh-Qb-dWxK^OqFyw=7n!9%yETmNU;8*UNMHir&s>225rQkOFT(z+bgmG>WD-6uYup{zjdp$TSozswkM8xk2zjH#zr=Dr@MW=LJF~Dq2C8 zef8QS@}JoAsYUJKWku3u0-X@y4|BIs;cCP{*}6vEaf0tR&-y7 zuOE41Lt?2#sl=Ert*)Clm#zIoEB1bWBSi(arRcGZ#E)c zXWUn<(d2Evr1l?bkI~7sS~A#py1lhgA4nto(TSgecD3>vL$}UX?_M>u5KFvmc?wID&bAra^#D(8qQBnRl`XN8T8m0IJ^CrH# zAc@6C2~qes4>fM$&k@mz9We+?|NZ>wrG3Wl)=MIbzvtRKk=MhqCvuUeF;bel*1znaYtH zATUDY#3iD#Dr%*z7-YNMkXMF4Qt#~*^NhLzX>2}f7Vk}n%{*qA7_eYVU|~_V3BDiw z$R9H7ba~FVgi7XIMoiDo2DK5O8^e!gNE>j(_+3QO{qSubW{SUYDagM6+RskFW#>?KUw=4UI0>< z^f@{yzW+0gid5rjwwsg=>$YicRp_%%4<;lamZ;k;U3@J6I;>wMWm=)d{zWc12AS|1 zj^Nydj@MsL?vnCBg;5-R-lg|?ysP@GbI9a2$y#Wp3+bbx_WX1O8{&wd{-27vdKTwY z>~ER3B2C_Z@-TKC9O5ivuB(msi-0?{zf7wpUlehB`l$rS?T;?~s#L5@cL}2OHb7=O z5k7B#PmTP4W*XlgQik6Ozg3TNQ}*{-FXWftKONR@RF~5K{ziMcZ12R&st3||k~Th3 zBNyBmvDAu`MEAkjjrXG@P=WvQwx^xN{Cny%B*rb8yB05Asl_Ay zF-`jl3+A$TImfZH@TFCEs=C;N>(X}Trn4Tz#w=KE5>HSta9b1vct7ZJq-K!)Gss1dp|B2WL?66Xy)rw|#-?wg$7=N| z8=b~F1Lxk~;@?J}p5Q0gQJj0-mkv8U&ufhm=J6OR|CeZ^3dhxKaBf`=s+oZa=b!pl z$nK!%$J4hiCx21AfMeBZRxC+mqAo~B?m-lea72eBzVkNI<~00s!&07NV*AgAhW)|g zk$-1Z`;l+QQCr7CKd%~CO;a$OEn#cwl9jla%A@1>n??+F7iXy>%B@xdE3Ewg^#rU$ z3cNgNYde11mAJT${VhqU082Esg9>Pzy14~ zf&0V{uFL;EJu9g1^-G08-r!Ln!%sZHwq!y4^zF}PGP}O&_m?PR(&uDPM$;Fy!+06S zm|xf=RTo(0Q0ctvNBb-CqyGgIHrb!=ME5YAaed2j@(<~}-cMbcxj14o2qLQbD zf6FQ!!V2I&8B#68PV+kmk=G>35i-)yr6+Z(kSG_X~^*f$1bGz7}q$G z@ZGFU=1I1D?xRlp_CMGteuHn)UC8P%BO5GzF!!+Q`A59&gG!Fq&DgE!QY};{`szNM zu+hz;<~kxUJn$cukk417I9KxBW-IX!(5UmDpvCqk4!VCv`7A%P6}cH8^<{&-zEo9k zJ9(gOJuKq!-^0hZ=V8iO)71KJOBNo&*YxXvI7wwL=2cT#Bi4>5gz+}dO)|!E0vC?C zDAeqAjE}khYueHi2j5$BP+Gx%{ofCfmBvYlO!Rz%rr}0esP01Xzc;ku39-@{Zdnfg zj_~(fMfIB2$JxK}^~j{2dE!6A>Me~SLNKgSGyC7){Rh(ipILqn>tXqPe9H3q|N2Ca z1h_U%qbiZV|I$C0>0cYA*~j3!t>piyrTu?>A|w<%|F>OnPQDBGj+s>v0tleABN=p> z4(v{sI%34So@@;O`%0Yg=JprAol?NU^@HxIccAv5xTM`;l72a>7~#j)u^jQuG~2dV z5cZ`WVw}K>AVB-i689G^FQP9AE>?E8(L!Ebra<~i_A5wsSmO866C%g^c3Wrbl_YlF zMiXe`qK&rrrvvK!jkK4Pl(H#YJnSoq$h%q zK!(rhA&y@jcEE$6t0+Fn%w_<(ssvIFH_{zPV&&NyB>;nxR$>X*xZ~RtrZGJM5HU1S zs=>|&3J^?oCJI&;8n4^{De3@D$Y$>!*IO^pgB0LI1SB+>9I?QM_MGD>mDhB-!JYmn z@__X#0NF;$IbulOEg*pgUvbp&a<{Z%5CE+V4-Sd09hO|`7!BNukJB$nU)Y8Y>wi&y z%ZkdPRWGV;Gko1O0rHx}9zqN+fv0Wa!!!sp%Lh4scb*wrh3FT{_jlXNvfg)>W@QUR zrhku<9?5K_MSf>m6{yt$MJ(6?4NmVBKd>SLfEHw`Uveu2fq7z4LRUX|j=JH^W3852=FS5xb5sF~PR_dT z1VSfo9d$}n89@P&{Gy9%q|{TP$6v69Yoa-uoSn0}IfM{Hq6DE`eo3l8JY z?*@xB9Svk7@k!5^t|Ls@bm)EvQ7{>Au_ScwswNwh?EQNdTkIdEeiffWNNX^uhU~dgXIxoBurUnB<4Gipzb7 zGwoHWYDCI<_@}ziAnp{t9MM-PwRT!OEW#NAAm0rkuQz`qoV_gu^knt-Mv`rU?bDyq zFSUe)>AXO1nB0L;^@uDVWcCdJSK`W~)6RtCOQ(6haevge)gM3<=2haX@ZE*DcAXU? 
zC{zv^_463;Uh4@3S(Av3!0&YvR=Gtp7AWgSTvYRyu9-GI6rGR>_XoJLnE^A)xicR^oav+wEhDpdd%&-zt2g+2>mzY6G)E{F8kkGX=7GVzX zIMdi-+F4z|xjBIJO*co_@DtNTKm5_@U`nr_S)9j{+ZfjLnZDqSnc;pO(f!=oxs7b5 z!hroDl92MXK)2)8@KF~(gsp`+0t6AaN{3=(CQTIL1xnHz#J2wCuU-~Oss50t&|VPE zv(%hEY`0A)SJJX7>AuBI9M|NdBOfc)ZAg- z;?rg5+=KllkL&-f${Q2;D~57ls{R2WPpc*zweLrQsxenLG1tzpGB*>2c4FO z?RP#U^;PUILvQ6};8`0%|I$%ly-+uPm%mJ+Q;Xe+WgHwTFprl)r@(?UOUGnIY>rl8<%_9bZm68S~e&XZe}m-Qa$wI9|4iC1n$!n`dk~f^ez&|jlK<4 zZ-w~>v~-~2PV-v;=Wt}y%GK#k{7&L`3DIyu+rc$Lht*Hr8&kbb8?QeFUEbxVOjLWl zB6UBq82@STI*a!AwI`!Zv0{|Dro5ECgWIV-*K1g9_gIZ~Y>tTyLb=vBpHGEu`y7ws87}tGWa6%M@UGWT$tW0hifPPLO$P65 zyG~5&@DO$QB_3%?JT78jZimuqrc`KF<$|fQ*Z3CDg;(!tQ`|ZfxG4@1J9fEJaXG$C z!LOnZ^9>ibA$FO>Rp7ZyqFB=8thxonN;K2x>zl*hoXr>%L(t4CD%XOT*l&@Dpdn*8 zK5q(a22bDfkq?AZnld|OtRP|fX-a5A1T}Q+16lN2G;s`BfdYUM*Cl6KBn#!4q~9f# z1pKyL+E_^^!B1cbN~V96dht>j*%;OLHPCXjNMRfMgbJpzn_mEGw#DDL%$d4Oyqj7E z>}WD)h*kvU)3|8@Uch{)a76PEe`CWS0mdoR({@mF$m9?rJc^>^_DIIBKGIFBkEYL8 z2%hwEK!U(}W2br%xDqC`b}6AU2y)6h9J^mH+pd1sy>nISre5PAPm&*>a1YKi88}<} zLYBX4e7iZI5}jZc6rC&v+%nwf@2OsAO-P|NolC{cx1S}J*&lq0P0k?2P~7D^Aeb(f zv+PYu`Fm>`LxnKh+)eLj{-yJzTjM9Tz8~fCM?r^?=*!O|_Iyp12!*RLhA8g{te2Sx z-|eZH4S4oZ&>UnswzMqmXp)|v3(VCz>G13@zS@=92>rDYO*R@Yi=H%iN!e=}my7zR z1D}K&=1h*?rP~SZa_Qo!q4_M4zW<~7m&qv6anJ|Pr!>H)S z|C9_MKEgXiX~@;XWuZ(nhX~@p97lL*6kD8g;PIdA3K@35RS?F6RQlFJyE$5OqH#TZ zakUX&Qutk(edX(7>!lCSl09q$_f>YlETnsbqip0x_$0s*sugi8jG#!gn!!`O(dp_u zA;w@gJJjlZaKBG{S?KQWnj@~`dY3GFd^3t)Q}akY>7TFg09WSocIH7^nkS|}u_z|+ z<}rWgZO0;1CTuZC40CY$g}4zo-aaJ75c5=jwuzchWFG-pu-~wgB~ZglD+3E3gSKRw zLz{z54vL^=i;AUW;v+Kn0$nsdG+nNX;_ z7luj3vcc@WM0xf(9t$v{)TN-kOjM7Ptn(gmo;CC3B>w5p zef{bqR#QfdVbwm0s;+C)5jBZ%x5d&Zm&*ajdI$&Vg1&~FF@ykhcoWoxZ)Rcd-pHzR zs_)d9R8bBCxg7t;GHmy6`^5n|IdqVPp=nVW4%e`+;lJ&KmNR$E89VN(Y{KalzAY2H z)q7LrR1;mNpd%&gjqY)Gb$(!5ra|HzD1pfuksgr7>hEpKQqRi!3bz)$^cF5yE_EvY}I+kU#H@oZ2hY z=EeD4>JwRJDa^1Ic}r(mg9J$ zYSofBBoe{%icV{F`27Gn(}`nG&jOm6?&r5MCedf^3+Yai-if}*eBAOH7~9Ixbu5`5 z!!X16EGC|uTQtFQc)Ah@F%1J>xFq0YFd9WCJxbqk2`365@6?FZxVdLsK2^O)hbgV* z;o#ePZZ!@PyzTD@Rm`Dz74Q8^&TyemsOq-oo6f@C?K`dXH`^>i*V1@l0WpM0qaxAC zx-_MIA&C)A1ywVl@auXUY$&V*Kkko2T%!#MVCo?Qh+w8nAR;jJ-&!LLjmAG2H_#6Gd-|NM88J#i^||JVKI1 zKrYQamR`0;J~BS!lN#LGVF2Yr>Q^eu!XK!wY)+(* z;5jJgLw(+G?wt2f>c*o`XyC1S zsvwV;F6eLvO%YcA+w=*zP2ICU8&10z8-i+JrWf~)t5ylb<$-DtUIMu9YV|sN`d~;_ z1)DvpiTbHQ*)^Pvb<2Rw#b^{ixDsm&)CwP$kpkqaVh|&YbHlnfCDk%bRIIRJA$?v3 zijKVo)G1<)s6gzE2Q8go%IKyvqgTj6G9V#JZEgY^&Vw0Z)ncMu~W6`gGU%wfY%$P!F*5Z{Zc%|`n#{Z7`Ly#XD5@tZzR&UDiw5EfF zW-6VSNMfA+CbG0h*DzZ#Qs>t1jPt_~LURnXpwmxT{!T5QyVi_7c!%4%2vKbJ-`S)xb(ZF`rQ+ z9lZ5gqXBFTv87QCf8-ugRFtT3m5B1dB*nv}c~Zg(4P}DIOM7bxZc>J+!I5}|Am+yE zRakZq1d=2^=*g23Az1IzQb=oHEscF9FgHEbHvM*vk2B&Y1~vrS?8A&xfSG`=dZW1u zP#AL6;x|QP-Jbhn-hQ^Q6mTVzwa@f9Kab?syyrwWLd&G8k8dM% zG+)4kV{1j*|3cS-VP@U|br}{P#{t&!8G5RPSt1F3^NrK6OERn>g{dkF)LM!5eaKL0 z6x@cv5qv|aP^d-M5GihTTWC5?N0+h}QMVal$T@AbK^9V$Tsee0d6c0ll6#ZaVg77c z-7uG4H~U&5tWoDOI0fCb)^y=ah9FuBP4my|z!ZAUapLz%uB%fh7S&`i8iD{-CaV>v+1PSOx6DHQ{NonSE;NtBoG54OoWLI2i%}ZN2QG;< zS1S`p6hOWZ3PvsY@%h7ScV2fx{uP1=ErW>;Q`;5!=|LXov)1Z#WfJtZ=$KwMPiqlv zsV>qi_v5%?akOW|r=2yUSKylJ#qVURZZ=9PCbEn20_+9<~cNOV@LQjn>3qCjcg>sKbwG?Qq zDU98##YDY3K3VZ?)=Fa*B`L|)_;c7Tl|=j75!t}Y`8WFk(Zpq)(USPT&(yzrijjo4 zX}c#=Zhr*J3k=EcX>Kz=pUK&)WcF>E7(6O5wTg`;(`yZx5&o_4=HSDRhvIP6xf>swn}BaAET7vJ`Rn+VfoDzW|V zWPhRf6VxElNXm$BedZLIYZJ(YMm)iZjd?v0(_ecTTx&-hTJwqNp8aOHuUzaCc(rDh z=Jbayh0Nq4Wx;340_5OhS<9Wn^Xz%#?EvNq-xNpOpmHE9?DND}ZigMbH9?(MO){ZD z+Y751&%7?2NG)i9tla)g>bbU(ji07xm_+gfif03aV(DFoo$MNmdkl1cIjU}R>}bg9O^4(QR} z&Al1jC#AUnkhuF%&A=HjB1g*3j*+7Km6l32^_`$#2kWJ&nb~H3BqQrHnZu{`A2>6` 
z3QTNOa-FQ%zAqk6fEdsnz}&~yBHF%t{wZMTM^JXW(;Xcq8e1H7dM zvDsckNUz{o*%WC`dA6{_MADJ(9=fPI^#X99dN>P-p1ONL=$gbnN{FEO64f@!~qJ_rm$7n{!#1$K|SJ5L`PnQ}?5Ag;{bR}WAhv3C6yu|J&Bxd)lCd~sz z4*a(>yjU$JFCMvJc$#yhWFn!6?{9wHS$>*o66w|Xh!$S;LQGcG+VwfxOU3)?d2;6z zk;-OuCwgJF13iJN-QSehD>_+lyr&sT0LJ<^6hpgPJW*xTL&sYojtTM2zNC;hEjcSv z6W>+86{3A89+|Glre_dbS`RZNEYw#AW^0F>yus4U6S<@{T|pZ_kpXibe0uNs-hS05<>pBkJK|o17om`6HXfgV_^`TopxTrHTvmCT78>>LMZm z`=W}9ib=P@x6^3vOg?|AC1qhjZCL&A^P}Th_SXYiz_<=;&71mt!!XJmo70Xg0pL!u z_h;tgAUq5V4f*`b2!)Mg?MKU57@*bdYWIG;(VHa)j;e$MB0IAUX5zISVW=e$WtIf2 z%&raN*!We!i_A8I2boRX4(tMMvPez^7_-HRNqLT^2*%wXOw>GDX2^*9=>YD?SuQ-T z^@pQEqH;93XF}qe6t48}&Zb+KsPsLNQ!;)&U<-mLkMTR#l@N6yj}45+O!~*EgCp=fs@a>L=xe-u0Msm_zks^*Z8ck;X$7bCg5TR(dte<7^RY9wb`1E3 zp8$->X#&_K&dyrFCIX&z_?dmhPr!gqSVRO^5-`oq-LG>2Xfb6w+_R5OaODj;zTTch z1Kbky+q7C*h2Tl!CIW4(?FtXQUOU0i&=3{Vgs-7f%q49w>%vy_ZIto!E1 z)~GY>yWhJkIj`QdhN=!v@SgGkJLm8GD0(w*1C9&O#yEfrJo9s2l{Ypu#gn}+pXBNK zQw5L)X=a@_ffJA`JgMVwIh0zklLy@&X=>G_*L<&ktM^R<5c*mi^8WsQiF%dw2)O`! z!Kdi%R2C!Ys=<-pfLGTh0M)Y$g=5VG_y z!>^_QqLxM})Gw0&JB32e7WL%$w4I_SyQ-qV|4aM1-egY~wd5FPnfuwT_3jbQ>+3l! zSC!Qlujh$RTZ3x|+YQGKrX3I2bdzKs5WuNl{8R)|14aibId>1Yd@tnbBv+}ox<45| zu@Vlgc?GmK>X_bSK40fur;H|lIgf^E2z|U&v#9gD#Y~}0Jv5t=e+s+(0uryWXf-7vy@Uk<@oD5s9YJa3W$=rX}ADs!jU`?vAlrc zG!t9ZOdgkmAb|A!L$j?KaeS{L&kYH*r!;u%=K9l0xJCC*04Xeo_wYE--;YI3PS9}Z zC;B0k0=Z;CrOQb7gC}59$`OG-wy8&uQK47p&f&il$vHh{NO|L4G;7r??Vyf93Y)m9JCM>N>CNz|d9 z8|Cm~ud^=bM!;4M!9z6Id22BKlLhcFRW<3(`DQ+w2{a?b&FTwgvH~Dm0qBA#9%Z54 zCWDkO)1s9S9ePzTLB%_EKN29c@am#G02EgCB9!RsAV{n>SVUXTeH}8etqQ@*W9`q| zI3D)xidzmTN~UqpkZQMY_Eu1v+xD+seILWFT|Wvb@#TO82{?xzB}WT2J$&%aa&O!M zX@fcKw{PE!vrX#vfzd3eQz;D%9La+f*CiWwfOB2>JarMqDA$|R3u#ok3=mFt6TUYS z=1u5_{$z?0sP0BxmaPsqu#M$z5;Hw45~u3v{e&}!%jHkbe=N-J?>xigjHJ{0V7c4s!q2hoiZLkLW0i!_>!|R z5x1yJpLc6ggJL#4Gw^D+0^Luq?iajHaVSh8OZ5qpr4p&qR;L2|BF1#wR$PSaEb8UO z)8XP(&I>j?00Y2vV9i+n>+{-cnimB@3A=;Z!o$4NL>+s$+b{s-BR}lSuy(_lIWmL` zi$`t0tw`Ic{d{sC__K?Gz7=+C`%eq0 zlo5eqNr@g{(x%bj(?*JxX|lw({DspWAca$Bo@|yZXX2xBlVHtOD-`b;V)=k^+FUT3 z!{Q7u2F#MDgfP-l{ZsA{qm>)Z)7LL{ZICx<5O5>aSnRSg0lmp1Yw|VEwY!pDe1P>M zj-ZcM4eODMooByyDz955Y0UyOTn=%4(}!Rjd!r*%H1?+!eQQLm<4L?Q) z$Y>vLRvi?qFwg>BSmq=@Z&aDgBSIvw*2XcMhHWmg_xwrxPAd6gdrG*7Jx~{R{sZP% zGTtttm#75Z7<`4ou`kx=wvCf~v&!QPv-X{jm)%fHUro2a*VPu^^XYb*!bPqw*VC3K zI?Iz+!7Q}%s=}b^n2Hh&9o?wt!Gs{0(Wg%UFfh{+w`tl+}lSX4w zxSZBC6g)(`M#A{lrj_2nGgy@Af|v+9F^YiZ8|Ff#9lH)PP)ktBpJ)dnXd(K*ABc8IFO0=^ml#0&A2b$}md zjdY807?v%Aed`L+t8Y>Lj?UoJf&O2KYx5yNKtqO3Hv~Nd<+rm?o#*0>fBbx#n8x zil{}A!76{=20^2LLr5WMA!_6~%G^Q@zrNfGvY{W9GS?ylVVz8vGO-Xc97D^%8L%-+ z4X}9J`eQdvbx9BoLsuXX%K=A+X;k)(ts`rtVgD%G7%mV#!C0S|R`tSTv~8v?r?($4 z_Lalaa#1AmeR3#Jy-W@A%hD-c3g-;)v&qAZPzg8FRY?12l?;T?-TiDuA_IJ;)-w?| zWi9GBMUqm{?mYIqBG?*o2hlvVv33muMTzDPcet8*i1$_J?TVzu_;5*g7cf%vhDJlk z&54s*TY!+3T}<9%nClz6SN(pzSdB2(G5^*tAj1sCSw`guM};9yeChVTHz&+929H1k8u$5)c`B(pdU9bmPsMXm z!I|)6XP2o{cN#HM8D+%-bXvy*yD>zHJ4HAd=d5)I32Fkx{-#^}T~|i#y-zQtI>|wL z2I4)4zO0gf^VOZT9^oKnfrT8!V@M+?heKOi$yHjve?G3wmZ5us;JqGcV!+oc4n`!0 z2A$A;g*M}uTRwqROtyl2eHbb1=QEYCmU~NUUZZz-E00_k#t12ASAl~HvO-Ig8HK&i z=#)j66GC~du@<%Y;)5V`O7}hg%8a7zc`E?tQ2dDs04ND3u{lm2aI}}>n_t&|3##kO zD6?vP64#eHvc0mwi_Rz>?aRKBWx2lWxy!-UZyhbUvQ`$*yOp-IjAjhk0t=o4Q-j^L z`yexp>zQ59y+!LRo=f(L4!(F=A>Xh2B5Ql`vh@>u5HL(#ld9Z*S)NfzqBvD9{% zeW9zE385qzHad$5J4A@apKbU!c1a&=OOv^8mX-ay?P5E+*}8MT)Ss1`fxsY9uK1^z zx$38`XImZrDebAhnGOfC6mqeKwRxLJK=cGcjs==KM|BaKw}*~GrpH@lMF+EWxkCGz zEntIUOFTQ{Y<1Yq-pX#b|30YaOu<(qk4Al_f397q6X7W3`jMblmrYr=ddpjND;v}v zdtrMSuOIk=cExm&uM~#oR@|x@xs0_OZaL?#w`mHq?YdEFes}isy949~NA>M~Xk3kb zyOOrtq*PP4U-CU#3fcZ~O8ngeJj}@|zTE1An4v&j>G^Xx%t1S1{?ykSLu+Ev5e1)1 
z+JugW1>@o}@!68Hsl7DyQM*d8>8mvL8qaRgu$F5)(aE4J7XjASyD?0`z|tp*cQh;d zcja6`YGeZb?_=167MjSXm|dY0bbV1_|Heqf!howXNQFE3+p)vmt49iaez9YSuLrqWJTd6Y=WacPLb=(V{kdXEuujELF1Xa!d;; zVNp##{%%(NWw_+qz%On7nu%atN!mA2P12B< zj8I*?fc9_Xw_F`x37N#awvEu~S+=8rPKOZ<=9VKk>&~2JPGD_hx0d3IX1E zkKa(caqn(i|Gt&@U0#a-7(G1`` zpdUbdq2&>@z?KcgAh`FntLQ^%Ca;mwq(}fGJx6XKLkgoH=c;>wCS|~T{y(~_M#V1R z3`mVzl(5iN`}&ucoQFmTFmUSBqy}3QKX3-)@keN?&axAqHr9|Z--H(Yrw)7Ns891 zS)W?}=oG8>wG;+3oK$%|-@VNrhKwO@@ih*Ipe7zS=-7@7cD4QHF4YTq9w26m>BLd@ z?WW<*6Iz_`U88vLzr-HKELUXkN0$8V@a}u=>oCE;`)K9y?vFn24*8}sariSxieupr z`X}c$_h3jMZD3Gv#2c!{4`DfkUAF_iCZDes`>=)}?;v9Y?+`&>bKO%iBfigDeayav zalFJY>a8Eq><}`%1emN?%D(;~|6t>c4N~@z51t+GUHlEp#jCJg18hIAC0X@(w7IR} zuZH{eJjS2ot;K9_nntya(eISXY!JY0TIlnk#nmO`O%Fzsxnqd?8lOtyuwDnFuPM(7 zi6v|j%eI?YR4g(|5{ChGOzz*UAB#~5a|Uc?NH=;UHY;g^9fJK38^zY?8aHS_A{qG} z)3ra9Yrilt>S81}qjw5U3O|9e{=VM?hoX3Ds?kuhF0)NmWBAUxS5kLtsQadEt*Aym0x)e#eM66Cz-2guj9~vrNlA z2^Nllk1vJrnu1QYTls}Nyx&vU3ceYEMII`gs93`dahIE44I+jef-zA8ar66V;vJ&q zLV9LQS{diMZ%T96!s|;7LDjgP{~Ths!$YVFBD)8g$LsI)0R#IMtdD3vTJLlg7|U$8 z;FUB-(}f2o!Y&zwd!Vy}a_)vb2~%eWi9U-I1kqUDfgsphUmdorH6j}iB}mOKp*W$M zYL@yo@{mRP77P9qSANd9D%!H2h-Pzg-R%4}uoPGoI+t1zhi`jy8=~8w=v^S(Nk&wmUMjdaIu(3Ia|hkPP&t zv0n0}R~}B9`c13jDXPhre3yX7%o5{{S-qVRJ*hT0i%k47Ua!T>&Cxl0zdiZ7KiPmqAg#Qz_eR z4`OD*%@U2R01W5a_cLfgUvIoTQ{$Mvl>G`*uxAYLEPM1khvSbMzXGpO0o&u^swKb$Ktkl)lOd-qA~ zcL<}Z6+ZZ)i%{gyPoWKi$FofxTWZ!-{i8ra?5vOnBfB{LZxgCk8Fj3GnT7*aFpBHX zZw*>9xlrN@5H!I7rzWJ6j#nRaqW}mo6PQi(Wx8`DM$a=szGtUQQbOw{!7!5pg^VL6 z9`yHBlI|fz=Z_QHl6x_nlt^C!jM7fI!b=}6)`V=kUYsx8-VuBD7N%3>alSNs^~ED< z`-P#wh}EO!7&#p4LN6JwTPM~cSMZPYIKTtD_q^2~@nh^8gsP;Ne3F69$bEfvIck0|oVR<{`}eqm;(+>oJ&tr143I3uzGJ{{ z6oCfkVm(hC9ZOL>Y-}y)oe(Vw7Xxd!X+hPhUTn*=L>fc+evB%?9 zPY*)Pfz4#-#~g`}^v9|K&&v0ovy<*rJQrGE=oOXW9?0DzTurd2>Q)~80~Pg*x)2YQ z`C3s{jvXB5E*BP|V6qY8q8K7`&l^mlj2v969PU(yx8AxGm zGiZal@H?yLI#X)j1)M>K=)2oNGB@)AYtBSgUFau0^ULii6F=o-foIObX*g3(o|)uk z)1S7jLy?k@`R!c$*@{@&D^>Oxne-ty-GKOV&&!1L!k6wg#VeDuUkN8qjX(r7_CEG; zvr~G*ukRLU)ev)J@HJjUEcE8xbZoM`)97+u-pXR2%oELJrbAyE=Rykp!_CMEmzEUU ze#!=McZ~}w+T=4b=HHve@DG#_3j9RG z=lYSuOa~TE)9C$N2<<EqzKTyt_NCuMX|M8>m{J+Jn zi9d|)iyMm)<)8X04~0m#+sO$&VZFEM&6Mwz$gdt)^WX>IK)NO@2Kl$=(VrX|=mBCf zJuCP>|BnmzUySf;l&5GtqwoTwob#if(mcJ(D-k>yUtG*cU$P~|)tb0kY1RvU z&KaqY`8?&=NS=sgv`xVSh2uv5dAr+eK>FbfL{cR->Z zmz8yV-TZ&T0U#?StiCW^7@NS6qtP@DwV?5<7Y(C?d1wW|VoE(^>W#6c-!z#GH*)@u-elBH&=pJuEX-8fT37Exj zs5ldUh$2S0&RHre7hw1+K=KFVCRzaUnH4Q3-@}l&3(EM6f4)6@ZJN>W%X~$kbhE(3 z%HeR<)mR z0%l--fW{OAj8vb0)lvALE+RCdv(C3+qxZH$BC%MY^Ze8+v~d>PH%TD3|D&l_t92nZbYVGu~)4Nr>)Ff7dZeggPS7r0oZw9qcq z+OLbeF>r<34)XFyfSalb$Cplkf~z*4BI^bE$-T)u=tjdu=cM+OcUkSEc{GE=-}JNE z2Nf2;J5s5Vbfd9K{)NX+xp(v58E@+%9@Wf0fd%hfKBP#dyqB^S7VyFIj2TU@WB#}c zj~kzm6=-1CoJPV%ZDW*wHt>sll70D;aODP+<@x^!dh}5wN5;j!)3g)>OS;=lS>8n? 
zZB)CCP1Se#JD_rDo{jV$`^jPvAl3RKGLAB#By_o4dOace>~XlC=@(2UQkc({#FusE zzm`tsI8RIMt+a3gG^*MWp6{E_UMirL5V+q$j{>B4-rm68vBvG>FDR&}1{dQQoX7)& zZUn#(5q4sT$<#)CO_s=Eu;B~wC5zjMHo&054h{}>0`{=h8{91CKs!hB3F01y|0{cC z>$Hj7+x+1s;91tCX_gi2ayaLIf?ZQnQ?jco-o5he2eYoGD>T{fsR!-dY_b;qcieYy_#-lAB6ft(k>0OL*PQtOR!L8*4MxS0C zr#M>=zPRZlzDyY^DH^ZGHBi5FLTvMimVs-bxTEW7-UpyzP>k4qjqmIfvcGgrER+Gr zU@En84TAu2w!O=0gPvu|x7AMlhK9h{<<9bFbsb1FVhSwtzCqCB%y&7qpa4o*>m2}z z!L9Z8kkU{z_jdsPpCG_rp?>IxwH}QPeKnma?EWUcX7gv$q>p0oYJY`SSmdCG=GhIT z*>B&WJyWTCyw->OyjtC`AVaM2n${BXbQMWHIa`JPYC-QxL;p`QeG9oI0uZdd&HfP= znBL9$k8SECCoeU7OO?1(qgs2mL9wz}viMF|s~tzq&K79VDLeG8dSxepAl-q0<2(!s zDiYt_m_SPrKw?2kz*m8MwjG!t5zqE_I3!P<&4bfjx}=JyR5V{w!~~&r;-H5_!8ZE-?tQ0B|Le%9xZJ z%Tq=R_klV0^;pc4wI_j`#hRjfZ;vV@nC9KjtxX`}Co>^rxAX%Zn7swuc2uP|DU`5$ zJ^d6HCaU_X;`#yqCI=Th32f&bJY{&XNgx{`040!iIhGq^YeLTg?0q&X7F?9aHU;k8 zQbrsJDfb0%T6nkc8Q<&b7ub_()}Hs;OfSMr#C7?aStDjy|2}wJ8z_qKq0-IUr%!Eo zHILJAbYfuMq7x5KW_Lpp4GtP%rR!6gk#X4>X7qT;C(DscCikOx$6B3LgF!V3>3Mzs zD^DVXaAE3z#`Ec=zqm2E%lN`KQDutCvg4!SIhk1a&1D~yLkpKSn$L;&h8k%ZPq?^A zcO*K}4_tuMC6Wg=Q&!OlO`)v=tvSazPAjH>K{v$D*^fF@!*p}}J^LSoZ($P9A$J`L)2S7#$i3h zRXbjcJfxm*(6c7Mig0I{s-shC{bi3yl+m=NdRc-M&2k=)cXIVt-1eXN^*h)PxmiqN zWASu1S4P$wmLx5fDW#$(NCi3->dbTX11kY}b3;)~tmpR$Xo@-D`;Y4C;K(J^5B|k* z7xxsv?}t8pqv&08A;%MZ?SH>7&HLt_Y;0oU5xrUMcB+f6;7_OIo=7Q^3<7*=Ss77R zD^r|L0DiLf4MV6gGLfaAKwcpSnpt)0q$ngpp46j6Dmm6lbZyAMB|w^Smh<`q4VXf9 z0tESoRYVW{IOzmT`r;3iCm|yUT|GQtCb-kNy542AJPcp{mV-QzfRr)G+3MBvMqif8 zCRc-Hi%iUvxH|xB*%-WR!Q&LC&~%$j1_}-pt%p<$it+zZW&)DTyWKe%hv@;q{E^5U zDR4Lkmsg_3U3Yc;ZZ_Wf?rpt^-atVn=IAnNMw}7bW}L3j^QB|*5m||tcBn@xxFM1V z&ja;D6oDtcASB*j%kPv8m}D|*Ff%QVJrr)Cwdm|UWa#8m;Idl)r|Fun*Opg%>yF~w zn^9~?VO^6oMiQ-nnI{gXEy48k^r$5!;0{ZHvz0(+$qYl8GF%uJ2D-@jOxwaJh~30% zB!?N?#BI)VKJuene&kO?Tni^vxh$ukPtY9fHvNj-E-{ zx?}({hSDP^s}h4qO8VIXHa2if&vHE`Qi@3+I7mI)AuIl7u47#d09&La;Q0U69=Wz; zap~!n8%JE84X(#pKSfZD-u;O_HL!hZU*eB#ig&K>WxI78mni;|6?RZVrU;KZxcn2U z=8{NUNQ3PTvUx5J8OAE+lSH(Cm9Kr$jrm$g4$9+FpgH#ytymq?EB%)w4r%zms@YsH z7X9(V~=OLJ_LThogZmmM= zCs-+~c9D(ri5kx-84dm#LD%PVYsz`G*2eB9f-MuO2+Ua@r^OB0tCat|k_j1TzK~YS zi)BTet6INCV8!9jAIY!#rNiYrgCAs;b=-7av6MU3PI=bX$|1GJ5UeK;G#C*hh%>^G9EHl?2Iib&XhFl3W&ls#IGHe_m*DL?h!xk}#M3O~0wuamgKa zV9(?A1ZD+;wTSRHydMaF341>l{DP=mcI|iY>yROpzdN0gd|t<|`__mq@IuLCr+GV7 zs)6h@SJ4%NN}Wb2VZkE8iJ1dZta~xZJJO+x{#-65SMjfk0Er1P5M-L~0f!cedUg)a z9MMrl^ZJrEW$rQ@*}u#|ZVSu0p|1Zp`WRQ^S@g#)ov(jIrCHqi`QFv~VcqiR>Dgk- z?1XK8x}K>sh8vlj)4PF}0r|!zlhkB6HnYyXrd%U*fjgFj*WxFy+(my*&2dT#!g`I- z4fx=4+oCsv_pael>h(J~$C;H~_iCpT2#Sh~4S|%TMx|x3?9LtH%C4BESecer2fZ2a z|715!?z~+k^XDbjI|)w5F4AGg@$c6Qvjt5Wyada&X%ZNfX}Do6J{a{`3)qnR7CW9& zwRx-I#re-a2+vBfUB$Zcvi|m@xj)_{a{d6Giz0hLxb83XDGjL=Y#7N5Y(sUO6=^9Tlm| zqum*DjF7w~J=*ItT$6}^;>L^D`z4>Y73~RZ^+<30?*^NMcb32OtzD2P3VMA zZiYrg29NxzPz3X!4)g3wHsT@iKH!8)f(GlQT^~AMn5>|k{yuUm2bPCRJ1B7#sF1f0 z9gwCK=AQdq@Nc`z$Rb9 z4=vSr_Cdo$pM+N$y^77VI87S#?hZ~!btMxIkkb5!){uoDceuu`d{WjDmWQfMm?O_W zrbUZWmX)*c`LOMnlZY0{Ke8sF_&mPn+DE*{<;cnt`|vq`2FhUM3cn}{OmbOEN*Ws_ z1wRa5DEa;TA&tR3#T`^o+D#qrBR}n-+f0A&$7!VhRzQX@_D!RvIkLht^VE6F#WdM5 zRm2xJC;j92j05$gZ8fL-^S8Mg{p3dy!pdYfeD2g7w-%AT4SKHL^EiXbVIiHagj3Sm zs`-^|_b{U3EoFaOkKg1VymfVNeyu>@W3I{q=9%IhwlX5ndD3TRaCnumb!Sr}5+>pv zDXjdH{)oz-5dVgNl1_7$jX`RKWew%sSg4$k&zu1I6AN=bWWb>t8g@ug2uaw9ceDo6 zk(h4%$QYUCGVR6Bgvsh9$=KC95{4TlQNB9<&5v%fq3;}y%~{VFReW8Ckz)B)eW8N- z!XU+(HNJR-w??&vESPg{pw|hALXWPZ&iwpaAiOi!)eVZ_okn$?3r3Ty6(Wh(fM8Y0 z`RT%T!}pbwqpmDA$c$Kg^~Od5Nqd7V{+IdtjUe|w3%^(rsN13e{NVnR)H1y0_ZiMQ zqWl5DV6~rE6pS2+EAJ+<$a%vq+9%a5_Y`plu3Zqt_Ozjx6 zHhfbu4=UFK6pA;Od^z%UoH3Mt zI#t^$pk;Fvbv!l`(SU^p2;EwxbSh}0N~0lDL)}4(=Q0ao6EbCz%^`E2Sl0!CexOqj 
z(sUv67~Lk!qpTdV8lUI{?ojjX$+x`bDh@_HD6hUa9{p-97jxPp#$Z&1dgUmj6!#_P zkP{ZJ=MT!!FmY}?lQE{Y{dBQ|i-x>`y!j)(#uIWe>(ZY$3%CH_nX)f|v6h$Zx^h{n z7H(g+!4m`hXxBm6VL^?{ObUEu&@_MN%9-uwgq`rs!S>*b8SM?97qZABqS!*13<|3~bLxbI>9bV^2E~c^%@0h;R~sNj z-obcyZh+7+tjkX%ITYcXYEdU!KXy;Fz#yJ_K)_7?PO@C`p1T3yCZS_7Ay}YOu)C6z zP<(eIV4tM1-vk4_`9oLrW#m#FwDdcp=HNzY4@+fjl;(6~vC8 zULliI5W9#a{Nt_blFaaYD&vGKwQ|C#={1P(4v`{hn3ty_@0PJ&MG4-szu=hC4E*s>BhX z>a4ZP_a4pYJ2Ng*j7C{>tPt1rc|mKH;OVW`uy6^9Ks`So`2s@RG+aj~nM*Q6T*oJh zDFKK1TJWSw__cLID- z0?3xgDJsNQArS!9gAo`(Bx2142sVrv;6X}F6`w=~xsfTFav_|hM+ z7S99`VONbpigeMLfEeuIahB8$;kp;4NQ!eH)8So!H#^B&F!pbB?1cQUNYKb=CN3qm zdSEpP4Swpo4+Pe-y?|l2h#aEl46B260nM!MhPQEKg1A)};wIy#$B+YF4XFm8p9*Fz zkoWyOUY@a;pM>hgRdNwOuxjC>OmAlMBn++@vC`s$`P#9UUa;nzvFHARy}-ULtvP<7 z8HjGgELP8tSRwDxu5mNDXDb=|B`Vd(_JixH#!V`lBSkX&@x&`ZMsZYEWrr@x(dD-G zyY4>ayx$f~*xQ%F;5$$YPUVJ7%I`Kvb_1`B8tGt%^Q+V>pRw7G!~s>2_CvITHRKe zwI9aH11o;?Z^QFEX!H(#8d_?{e57g*ztR*WRW~GLj>|{&q3hfB`*L!^wI%p9-u9)0k=rG2G zr*$tAV0@=4AR&G4agLVhzKgVF{U_?^x=blGo0Jhn(=dx9Irk~~CxR$rV>^2Lb;L6C zezy8n{w_Ki8HFgen64M#$3Ot-hNm8T<@>Kn_3sJtQ}ADST}lOCc1m!MGLJa(*-Np)DF07gthNRt%Ui`OtRM^I2cmHiUjC zHHJ@A+fijRKBEcWW1kg8dd*5resDxqOjwQe;!@efD}nrg6Pl{Fsh^sa#^4Ya zu12x^L3DIF@uDy8mB}4>N5l&j#9f*hD~w1>tOTUKS@ZW z#Me49+DSu_Eo4h4Ziv0DK*rc;0st&c787INR5bDadXeV^rEcB>4;d3}cSs z`B6ao!@-?jme3pOYx@Hv8f)#Oq*@`!{xG6|r@)sk#cofg7`a=@&O4ev&5^&b@lo8Q zuGlr16de6SJ6a?O6ZPlOM{E+0XLjRWCJqY!NEVQOu&T(Eb$CjQF@C|I?a=vv}~g7f84Q}3iK5$w1z5P7nqLH+#37DM)=^diP6YMd`Y=2?oG#^5zu z@ZZ^M{Vj&c@MZBR@nHhr4n>s}?1|%&|2H2yj0|ye|F_kLCMg_2BLM+Ij~|T!-#4dd zooX&?4sYkJhYp|34(8%Ps9zucm^!)AU#3_{zNmrSS87{>DPR>Ahg*oX-|ON>!jMwj z>9|JnGm@nfs4q()ap~iJGM*6K+3rlr&`_S#>XOjr>h2RK^Xlz`+3QLb{x8V$>%D(- z9>Z^;!ehr2V9!B_Jq*H`1h~X#h`mE8i02QaX~hxVg1@CD8Ur>^4T=xXaZ9fKf=N3c zq0rAmfG8bofo3q2*0l7DfTA(|W{570l-OaeXU(i)%xal{m1OODqXEU$s&+M#oQlPZ zyljr7r4dJ6M$7(g6KmFLl^C@G3jYwMpZ33_!zF)U#2_F=j(8o`U=gwp6XN-KKm|os zW3lmiPXR_ezxorhJV7m1RgGv{MlyyUOa~*did`&CeiEQQqSO$9b}%a8;v2m29Vs-7 zR$~|jW0zo|97i|nnNaUx!&&>|KS%PF6GUy3c2v~ao^7Lz8wxK}lRh0r78w-a+(q(+ z=*?iA+5N%uXXq6dy%8W~oL4n_{#yPY#NYF+V@){0G!rEYq?C+LP1zbDG3~aCjHVN( zgGH0#{^bqd+Hs}F=dk!LC_-x3$(LRfJbVoW*UOk;q?Z!=viF$OWTq+%UGO!aRdyYc z!my4K8-k8lkkp7z9E!T1PhxtSu$x-Ql#gOnNMDQ&)_6e$45R*AZfFGx%9N_?8GgDZ z>I7M4$G&4c6V>w{+6}g51t>X@uf|8u&>pxgK@h{@d2vJ(0tihcbZqvgrX&kk;l7G- z86kCqzJoWs8`ykyl3{*d$EL3C9#Q7%s#3kAg(~ukD6?D+YyK7No$KPcNsV%M!)^n; zsvj(+>uCmr#aRdfw!XC6P6%i}uMqU`V-Ih21`wV%J{Akca&>ntWf?{R>r~oeZ(KDe z{zHb-fY}1gt-Yv&IrOI4Q#f*Rx%g$+*RZkCkQ@Z*5?)`D0B_t z!OKSVn!}?bi-E@-s}?YDYn_s?say~&ZB=55&#qK*QrCMP|4ASRq9l^-ZHU`nymIJ0 z?j@E?m1~d%bnB7GJzbfj@9u|xMt%%oAz(b7!a`>%NykCz`!MsSto#Rf5sH9dCSA*R zZRJ|ejQP9_1(>ylLGjbIno(mQX_;65#&N8FO=YyPZ%FZAQdgd?Sm$psu-(f|f*kpz z7ygT@f50iXRSW(?r9uHF0w&hRC1XjNYU-7Y+J!%EK~bVlm{`t`fa%joeGEgWwNUaY zQaz&NkLWSFmH*`l@W>+!th2*C575OjdHo1yup%Nk$Sr^TtoeU<1OA9)zylmC z>x~PG9pJLL(!MKKj`yY3>-qP88PVd_1GELRlsT8 zzxP9m4=Q43ilpJhs2fDdiXp}4xBS0PA3#kH8F;g3ph-IQ_@H>VxoY={9ogE6y&$H0 z5($2i>Eu7wP)p#Q)5yup)a^*8kYPfRu^>-(GD-aakBwty#?;YB7m@Tr zq{$q2{q8~$CZQ=4T+VvU(zdJrF>qVbUHmZg4v=6`!V`4QMm5V26)XP z(*Q*TK6D%O8hrRAtqDON6!Zw3nfVXC+KFg|``{kq$Q|t=db6ZJSDztPi0>*rK+de(CqYW*arISy<{!f2)D$PSpybu$S!R(e-Alvh&MsU2 z^!#+hXH6+d$wAT$H~Y-d@$nniHdXLtDx<=~xS3g~dicM`p#M`x1PTK~|JK*2wW-m+ ziW=LJ70)m7rY8neq_BTozS1Dk1?HSyxJ^CXB=5D{JfMFqlJ5nY^7JZbfgbE3z$F}> zNq;&7l+zIH2YP^%QYZk4OiXH=v$pxef8G%Qqy})W_a~f-*P88#|78biywBWDm-o6f z2;tRafe!B7s_)Cp*IOI+unnKCcwi8mvyv!f@QaLI|KO4`xBsAA&{w7;Qf7U7xdz9k zw+jb{FbZ*U&g-QEJy(J*($5726gI0Z(+Rx?n?Sfg+|%=kH1RMs^>wTM3)C^tOf>)m zGQuby0i|~>U~n})kbT0y#|OZ*58HlY9mW6lAWlwBs4eD7HHO|qXo=GGd;b~pqtOuW 
zXXbXR{+b8XJrTDnD$3%P&fL0_;I)p~PR39ixo&dh2h=Qf3&mR7yOSkpIXQj^l&*4s z810ndbGIZdEzPspmQ|vSunIU}4Z8K!=SKl_tsfXQyBNq9kufRixzLjUpjUcD-aYIL z&!f^!?l!7XAOR5y0Cne5iu7IP)gJ&QthKz?cTq8(v(J1#N<5+_`Hm~nrV98r-~bfJ z3uBcQ{Hz<(U~sh*2{XM6+fxce>}}c|R(Y_;9x9^8B=3RI?a};V=L#oJK3gXcR=GKx zEnbaWYIm#FgpD9h2LRH^?F1Hm|5&&&I3|M;tO2=z@z)qoB~y+3G4zf6m+Jy*^rAo@ z=5o!gu$`G_Bqk19!s4?9zx#=Y?NAiqKh5vz$ji$M%&Yg@IUAHd(>qS@?Fu3hthsv3 zNr8*jdpCP*d|iL$`ij2=A(w*8fk$V9^$9h6Ox7!)l%xlQgo5$hyu3>q&eG5SdftJN z^&@%f+|acJ5A52?KIY$S-TiWf1fY~Hd*|QF&;IR+TOwXaOjz0jmbqOB~Tb=XN-D5PX_Y6nJ!;n+PTQ4@l z>eiE;*}=iifTj@lpUKBvDTnsA=T`MUDWCP&MzleH{x$j|<8vVc&GuA0(i1MBqTYDF0U%r=sDZnz+3g*`qM}-mXZJlu7P_lUI?e{GB&_~b!t7f zSO>((>6iY5-HH6poRp@-ZfC373XK4MsxIcLt032|d@YrL0D{KrqF9Oir z106#}O4(;F5n4K+P7OzxQKH|P9r6<@au`5)hQ0~hu|inx&xEZNr-9$NU)l}OZX-*UzyNsTq2S7WsbYaEfPb9gkCrGhNu+BYH%bdH z0RRp}Lw}2li*FXhD~eiLmMk6psYb)Zr(jv+0u9puF+Vldy^W8Nv-UZ+aMu{7xL@sT2lyE5_#MK-ACSurS&9_33tS`6shH-5G^A z&5O}xzeL8<4PG=AN->G!TC=eWrr*mdN5!)f44I-kN+sWYvjWHS$%)NBqIOZF4ZQ`_ zqVOnySBLA$@yL?pvcO%mTVVCLG94HiO1BPxN0WTMJ5|W4dn4cK0mT^AX?)1?FrUn@ zI>Xge-m0IfHWa>wq!{Vzz}ER>g8@|nB;z)F67R18u1g<|rrx(IEZ-~(Jiucv3V=i> z9)VzhAO1)jkaS3SyXphgy)1RhPS?Jxh(*>U2 zdEOo+YX6Y2-E7a=m>0unLv`|2RiQ1n!`!BfT_@;Hpn5?~7Jnd?_j*KD;kh$jdRd5b zXaO2LE`SL)c>&w`bk!Nm`3#^nHfYi)&1gu#en4vRx10HqB+Hje#}prUrIBA)pN z-L&_XO6uucDOQx=omOB-2>w^KEQ@r2sBe9gdnN3=Z^J=+2lJCJ;^P>6ayTrab1d6E zpDBPO9UDk>0_j;UEVsMadjlNlBK?NHIjTw$X#+^J+Q>`hM1}y(+I%59P)55Jo}&dU?^>y{$Bq|x7i@DIWCN683#QOU<%DzSU4Un{APJ! z+DgGF&2gC(^OD1f67;&|f=LI6#%t$O1dsPg%2e_t&#p5%R{*qmYc$Gd4u}!RyHIf0 zDgKJ6Tl5v;b#3%&z2Hec&KsjvI3gzb-tU$DKb^os0f5T#*0)`6MOY=69Ph4VdG=iiQJ@elCchB}2oL#q|r zYybk&`83rci?wK14a&VzpuXdUina2(^ScG(qfqe$zHFZVhGNm4cn}WeSc782xn-9{ zYVi4=y1)%n%olm^2Z*z5U=cN5;%VA^{6|ASSqcU;fGx8EPk;0G1y+VWgoJ4!U99xg z9l8z)en11g)BL!%o-a#mCsM45iKyA3j56bi+H?zZk&=+W=}+Giyd1#V7rF#ptQ;o* zY0!0V6W%lT-B_uNEwU2;B@cW=+$2kG6cCP3|Q%xtuM62)B2=(cn)A=Lvv~ln~ zc$>>rB1X}-`lnFY(?#mlFSrmLqAj`-{0hHJWQJ*omWUtopo|O zSgYj`Xo4Q4n-U#wqPP{f0OUvQ&zR3~hdc=I6Qy|UOm&yc_supeYjrVl+v%^b4dOPp zHbQZtB0VDS6TIF8`Gf2RfhKTJ|FI2!Wh5jT5mH9>pg!FiSk@7w){mqWy7MU$?#SH|nfKDvNe4p`kX0XpdG4ZMqONV26TAe8*X|e^Q{Z z$R}n%B=k%i;(Fj6F^pPFtcqXSab-4%mTX3fR@x&wI-dOL^;G63502tTB~5g;Hz4+O zp@T0RlM%TOAIEF~ih1namz$-FXwb^RoUA)VZZhjg&Az>Ym1FZmq%jQic68eR7{-MTFh!T;+SWkg?d9h&>N^d0VGm*%RrK&9?7I+K)qh=nMAnIk=?}qPBD$M(tZUi`TU}kIQuw1+c#QncF@HjuE&bE z!4SrNF3@#{$$mX`{+9abc|T9A>X*j%m-VTu@C8!}?Z*{-$8&AGjg6NxZ<<=-id7)U zSwsLTFG;vu120L_y{k=@itDibte>Ae52YkJSx-8&1-mhID@7~~)LHFG!X|$t$hrcS zKNCRAVLsy*D7cp!LevK=GPaZP;sd%cw&XaiyvAFS<2=qvTlW)jNG}sjb=i~W zFgRj5>`tNq8|X@7m^uhopJhDaZYZ28{%J7a#Ma5dBcB&Ug6;r01S=xt5KMzcp+ny< zpWVffM;`w1&TFDYzR`N>yM_jeL&a~|nWx`;l2Vw<1)_Q%Z2lQP_CZ%mTc3Imv*mPX zRJF|r)f`SuQxy8Wt^n}@zFFepbXjA^t+P{ziz@H^c)!zOnKkv6v?oiGs7D}}s*+*} zv@PNkVj@7TSeV$)O2WDoh&U9hw#+y(;Cg8v6K_=!G5{0Ga4o3{ zRCr;ZwUDMez?aw(=TMt8RMM4BNCY_ePupI;&V?bAji0q)b=_OtqgW!>1bTq zXPM|W^$!+pIuCI1eE^IF%b%_Se7?PKC7Dqd*u1t?3&D7oSJPFNQjx`^ci|tKEy*=Q z9pn4m9lWCDvaX@FwyL2%=FwNTi%ZZq5L@p%6D4b)8mj%!-K>D9M9dH+=I%-;u~FT+ zqf%4N?28uoUD|TBn03Gjl0F?;OwC^8xn`}zxf^0w1+f_igq#VD@AU?#tk4}|q{9(W-Fag2 z7N2ZWv${_`+7GTxgI+fWx#H%V`ZzvXCk=tzEO^sw12!8*|x47ygA! 
zbAG_M7Pp&M9oQ}gw>%GRXX{ED|E}sxVX>$k`OAEsObsUdN$3_mXvl34$b>@f@ZuFO z$J`KXVEYj#!Xp~(cRw-nkW2MjP^K6A^jWYKPnD*Lw~MZ0a!0@+)JFXe?cMg24NlAs zHGzzcnwe3`-gm%TQez8KhH5-z8ztee5XDkUB2;kuO0 zF}mh{?I`CMjY0{`N;f3<$IE)Xo({x?fxz3tj%4e-{J|+V^)KlYWt8B~6=sQnvnpa4Wg`1cw zHpC9+m@$U(D)s%=P7*Xvr3mC9D!w=m$Fkvyp$WxVXI|G_uLi(#X+H>x>({&wrab-o zSowYVsgpm!WKm#0n}L~ie*A?gFI9doXMtj}E$v(45Yrbinr%*Q#nFVLwDDbd1`d82 zN86;#qU0%qsPwa#J#|?Psl4$V>-(HF9#ot6E(nBKyg#V&7=6>R4`_&AeIwbVA-s*W_b(7-59impkKVA3i_{;Sx0P{zfIrILH|4km@*~2zwy7 z1l9X0va4Bm9l9&)!K;=m{?e@|QM4h64Qu2-*98_l~;*e&QQF5VI} zsuYU`dlU1bL|1&?%r zR$+<>3MiY_25X2uk?}-og)G@1X@WJmKD%&U3Kw>N?|MW?NB(42*2=>p(xHAe)%b%~#F zvNfB^S}h-9r&(w>!xw$?tSfzixcrPG$5xD;B$Ex2Grl00UEz~`nz**1RyaP9hrn^W zOE>kWCTbF>1A?woZ2tSDcLHPiiyJS}EL~6|0SO6e4M=<+Z=LlbY@a4!rPl?&Qhh-mF*=w}xGfav$eJRvJlu&bC8fWLt-Ga1 zwK$NB+@X7|<6Hmk=|x@tT;aZoi>oNYu>$ z3c6J5g!17o9~PUd_^Ga@_E_l<;P44y(1hsMsc>M;xQ`UK1_&H}C<7K51;VHV zXf}HEHH+3y)@R?K{QLXkbB#p3NN9)q+Evau&gA>_VD(3}eDOYahg zEsjpTYh;G{UPVm!aJyG8)qd+%p^ok{D-J9qD)eC#rJ zIXJk<>dj!GHbUW3Za8sfI$_HR`#CF1mXG;7ZL7KJ&zK{exlar&xCmV6LcHzX_HBVE zl0P2q*S)IWVZW^Vr?q!8?tK8lpTeg%(&6n41Y?(k`ew9jbX~73|S}-b=jHHr~n)Um$U9q{hUk7%tYo15YtXke@{#sP+3O zip{N{s4t^&FwV<W^PG{=96pSY!kE{;4i5L+OJ*wa98;I9KT_L}pQ`*p1>g3*c@w*W? z5J(fFVh`g{zC3}o$s*|!IIt!!<^ISX%bLjt^P?R_A@Gt>COW86(;~ z&e``(M`2E-v>6msliLh~E&>vMxd=E6IjqOqsm!-<)y=bqOE7Aj`c)?NC zgEgv`S?%W15Ti`&cfkx&2e>6dPbDEd$|&I|f{+M^P>=HJe;@d>(dvM|aSfIGAxfu0=JT!^+my+?4k+(%qS zkvklu4w(uW8`|v&c9^;Pi87FX-9V@epgE+p0%ei70@H*t4QWg{ArW*RzjKIe3-8%1 zk(~!SSGp#zP%^0m!F~(G#(#>HhqJ!}N<@v&9((HNDRaYM=`{ z;2-Idst5l_uBf7T5u&*TJ;BP`p8YUCQ6QM)m!UE>J$720haiXi8xDHFZhEwa%m%a; zw~wk}op-4WYqh%TmVXvaZx$-LDK@TO3tx1z<)$@-!S<=02Q5T8TwP}$c`@V~sW{t0pzz)Ng_Nnsr^QUVtmEPnWSS!uOPcpg z7{p(U0S{jY0uC6*4x8o-c_!k0JIlPAv$YijkbhTATsZzH`$tks`;z{CC3+UPK}z3! zR)HY~IOAiRQ7OVHI~MYsH7!WYkQ|23D(3r0TXQZ(Z~lMg&>m-8SxY5a4m!%Krm`7S z%n4HKYD9=0s1QQXOIU*-VZK;H2%}%1<#{(ioec=fj& zwvw|EIBAwpJzN|=Z^8B@Fr8fZbMdRchT~)`F=5Ut>t>ECQOQ{n-ly<*zVii%)Li-0 zLS!!u`M5~-`8(p7#Hi1i>ho-F(+|fC7uj4k(_n@RKGM!UAS7+N1Mb}JS1ry2#SwVO z!9^lnjd7ngW+Hu_TKfS^5Do-9A5a&0gEB9uld-=|*T^EK$TOXd2=!~5ir?9&7MpH6 zI)v~g={;*uTC9jry;!i3qSrVPjf+E)dH{mBK_X#vy`s~aa;UUO z|8Vr&3uge53divUnuRlP5SIQD;nYC)ajsx5vY#i?;g??qK$qy)%U4X1s*(0bYJmemP|Fw(-CrJ*5La`qQly|o&FyhW~XX|=l&E4m4BV@ghe zPN$UWnkn8}X%Xc|<@wK*fk#l&<-2&$v2*iZ7Jx*LKT2x%q_*QAJj54_?@u=RM?Jyg zhW1r~QP9P4(5MbPiK|M*9j9fgHM{TWk74Uk+@g`AhB8jOGD{$KExt2@{hcpwh!YgL zchZeJ24~gCSjgdm!1$9yB`nu9KG& zryP46H`Sbacq0W8SHYVdJI2w&xu?F5&Xo>JUT@BBahyELCnro#WS2tPO^go=)*T66 zNWiT0@A79DgUo<4fj;?rvjyQ2%r1N^eRD(GN?*F;x6@O)=?6`q2>VA#B`d^@A>Ib# z(krT*A8kZCXO)&Nbd0N$M|oS6vvl~Qup_^#d-?!D#QkB6!DH)>e;?TY_ln#0V=t8s zWB;};{DVW|ZMkF3hSx0r<4ES$@7T;XOz(>|>OX|hpN9!pZFsu%WOfMorvrYpEq0rS z6%p7g#+F+uRfk;5qf<(C-$2kcez- zB>&z0aQlCT$N&8`g8)7wxPRmlOzd?#+3BRxtunJ4KmE3WwE?~pwimRl0Q{x1z7Y+N zHXQ`&_$?860V|ky5 zZ!1@hE^E=6|oT=S8?Ja(XB0oH~j4x4?nyLqph)%v(Gxor>}c)7m^G;g=pUu zvnb9JnVF6)CZum(0+D`)fjkO}k&mb3C7Z4Gd|&3xaL2$(g8SWyho+bC#KTgBNF#Be z>;i0#d9X8mimBCmk?=_=Ki~h}3jg=tYDxqAtztTLy0+*()$7#LwU;D6?T$~2^*h^$ zn=5cf@bE}oo=}sW<#6LydD)5(9Vk9|S%!2Ac*t?yWLpfzGddqgpTvGDUCj*`cR!>p zU3?H2i52edF2OffOolR}m#s3EFg<ZSFyp+Xm?CiJ{XlWyh^r*&mHdOvsqdnh%f|R)YKqtW5iD?E$tJTCm6P7!EgU# z1Aqs%#nm-2*Vu5=-eUW`aE@g7@>|HK;{-LYmOoNcWjIh<^oBV(jAeV0rxPr_DfL0S z?f?Jo$^YJ)|6XJK35GveaYGx@u$u;Li8I0CbYSnb8q|Abztnpa-yAQl;#2s5eH@)m z{(^joRD$TA!Lv^kE0r6gQq?6?IAJ9m#BhXK-6B*!S`A#Q4b1c~B5YmVRf~j zOC6Y9WHm|5^CikEC9iOFMBQXiuGSfxXnVasdJ&F4Qi}kae^Z`*vn$B97Ya|IYg!;s zgLEB&u=FsCojUbuzEp1;B5wKO$?Xy=_k@;kqM|~*8^0Pk>`nO+ZGviho86A}zcUXo z_ZSfv$&D`6Tcod%0l5Y$O??;j8l$tU5NKd;k_c4y<(lo*1l+(UvneT`9swi^ijq^2 
[GIT binary patch payload omitted: base85-encoded literal data, not human-readable]
zHGQf3j`fMJcfxYJg>HGk@K)@rCCrZ;xFGu%sT0afGR+AB3dtNbjf!7w$g>(vc@%%> z$cr&2pzN*{jZ~p6-=F>KQV{6{#?RGv{6k4_J;d{RWBm`0Ibz11S zOp-FCG74-!)E zH-=hi>FbG!ViedMj;Ot0vECo6i;vCn+Gz5QA!eFw zJ|UaF-o-FKQavCfK2w;B@#{y=O%T1h(6M^t?jjrp@1!wFHjQSVT9bU+!EOwdZgUer zlyeb8#{wSHjs%bI1mig0O|t^wYv0K2#c7tnECc<1_Y=-;sC~L(`a~gN;s{)!eT}}b z_gJX|U3E)0JEPIRzvZ{Ge_Te>ziMN%*_s_f-za>Co_$pxyhh!vKAJtt0GS z%8!8g2K=j&rCEg^-ur29&a*TJ)qDVA^yIw%<@&Cbufh^xoW%k%jScAE+grT$GeBn+ob z+Aa154Mhz9BBi4whWlRdh(gJK&92P|bq|IRVCW}O1F8$ncryudOHA7Dg(?6{De7|G zD7v-@M#l~OLGx)z1|&pXM*9mWtVK0!8n!IAzw{XE)E9RLcFgk%U#|p+>`}D$`p4&n zH+JhB=8GizqWhU^?*MWsm8y=V$8yZdk-OR{U2JNkJGa(Tu{ zK-dhUZWS_#m)78ZZtI=KkkRRlph45Ilr8gNf15L8OYI8`a+Sj0yp zO3AD)iFU^tw4j1wBrpa6B*-Ql4ocMFR7R91H_(j>^8u+@-Dv=DFOi_&8tG$hJA@Nk zJhoacs-N(hFSnA(J_TlFg%fg~IFkt1*Ay2CAcK8G4a`%qbfM>~6Nn@kjE+%M{~LX_ zKJJZ+qDlq?|ALT>ku}3rSrD$Qr@4hV#QMt~&WvbB92!$(BiU`%ffLq=3X*dbYcEC{ zJZ4;fJU3cVpouR{u0Q~sj`|8_VXRVsd{h2=Vs80e60v@M^nDD|(fZg4r|eJ?zAflJ zx`JjTKC0n~-anzDd0h7q+?%Uu+gqNi2{J(OP7cRS& zEb%kHfq`$U1Dw9kts)Y&&Ck0fW2OoMQG zJH`}QXjOGRkBN)^K02FEAM!Pbax;()M=L>Tkp%RnUI3{P_Zu!Z>=lcVd=Pjy&(A%l zRFOU`C}INd7!qvd@u}og%o6_K$xLY2Xdl6X60eG7ZSjM@XhtR34DfJsdJ9HB!dfku(iU)HLz907*mrT8y2|7bxTZa$-~3h+PXXEyKZMOusvE< zFz9)xJ_t*>#2`8-dApJpIPUBX4*!lg+?gVGC^7bZxY}UKzM zg;_?>+3h)JrNhI!a%6Zs`gc+9{I=_T`^}o)a*O5YR}6jUZ%>Q1(naf*Rnc=bJjB`n z>G-%!^|w)#5tI|_?gpevkS<1)?Z+)Ix& zCrg=*v{B}-VSWw2Zr10J{hJ%P5Co>4`7Up?Z8O7QT%6x-U3_hFB~s15NjDw6)}w%f z+pghJJvEvVoaO|-rU&$)*~m<<54AnqO7on^ zM_9B`*W`OpJW54*R+|khVq)T2)EiRD^eL(N+LUV($8~M&D!OS7 zTH6dYM1c*w+KM|ekf+?A>fanl4Oa2HU9|U9uLt%_4Q4+kuVd%~nw$}hT#@`DV` z5dWf9Lv_H{rDaGR5fZFsYEGGx(Sz>G<~=H2h4}j`@)Y>>8z`L94M~aiWum+~881bB z@siw7hnk5=eWjgR9HU8fWHRCqjFZz++Eh;?6tlKd3DOGp1KP*M5MPy(-O!|&HyGTB z>g!>Cb??rxk8ct%j)?{cWAtwKhK5PrpDqJ)a|r?969^6t4(W7e@TH}tQni{KI&o~( z0zjfl0T?>%s~|0aSkin~X$BzI&}`1mA}{1Y!NG}IRbDKnN=mmhHK{xW4;uis}6=WK}5nT%5;;^nJrF;Im>9e(|>ZP^ZTp zn_Xt=5!#&eG!I^tQx&(m0XO6Zy-j5>wCTbL@jTNfeSBsD0!wd8Fv{vt^A(<}VyrRO zDIA{2@lQu1#0hYBwloMTjDD&enLX8@;P#@Re0MW6FF97;O#I-hH*cyunL+O8`O>U_ z@eD;A$fx)lwafvV^==70{EE>p;oj}z&@g!QN5ffXwGDN?B8e$Va_-xw8-H%-Plh;l zz3pC5zL?$3GW1J>F>11{6VuPZEP)zh1=A~?9@}9s)~`EA!w+J2gdU+4z}M*d_SJ#RP@YfbYBwjTVC@4Qje7A3<&?#;$}DN7S==o5@x+I|+# zrmncV)}DprCg~nVg+=-{RX8i3LuzT;jomoQSaMoq;-VXko1CizcE7tc%WdwGDEOiO zq}pm;r7(xh94%L0(g(ta-(`8ayO(jjE-1+nuO9sFzMtYTTWh4|tT|tQ=aU{IcEa06(YBFIJy5 z4zbCA?RX!yiz_FkBd58mOQ`QgfvKeX!Br3R_*UDN*$WXin(5+oKnUU6it&lwz-Ak~ zN7a81t~f5eS0Y#0>-B*#e^&b+X=d3`4Uwt&+eKyzr_aznmh&)U$nh3}l1$4;xD1&S zri#B5NvcOe%u+e{7|iD3inJPsdS}4kpkl;!6N;jZ6KZrr6T5icz{UP-2b+QMe}fb+ z(7|&DE>Q84|5_+d-5HI-K5=}a*;;c%xlO&>?pBR$yRYf_iU8;&sr*T4XeKatB`?kg z`umkL8X8ytgZxkcndhI8s<_@x9uRe3UmuKU8oxI;V3hCYd=95cFpJ z`Y$)TG`1OD7x5G_8HIBIs1vx2c_NV}6acuwvOEM53#-`*hySf!{PMZvTTmH@)&R z0DAwI=YXal#@2Lq2`SRYGfv{7wp8V(2jd#OdeCpr>D~UTY))>+jJl5(Al;RTC3{c? 
z^1SmqnU{)ZtuS6E^`eq-N6c(HY>Zp%6W1?pURu==hC@)%8xGEqs5Xp~88Tyu<}jZx1`9s+oS#9H6?vvwFP z0p{=(Sn~yj#@fsn(I^F^$e8H@`ec$o`8`{w5h(+{!0(?^;k2zjdyT5-vUECrA-V`E z_x1m;uD1?~t6SEGkqI8$-GYSR9^4_gy9E!ygUjHq!QFzpyK8WFAKcv;zIpGx=iK{M z{m%TeYxdOaT6?vuUaOye+Rz@m7nP)cGc%jBP}PZ&f*vV-&@1Ts{EwUqZGy$eW$Cby z9Vdq(fc=n`@PZDVV< z*+~MSKx2}OCJs}`M0q%cP%$AX83|74!o&k8IX&#AJeS&BtG=ak+mUuYA2Vb?4iaT0 z3%(`Jc~bN96Bj9F4=%#|jv^DVQ`M zY2z4(wZ&tK32sa1a(4vbB6H0*ikc;8bzp&4W5QDcM367*=Grv@NJ)%x5p!TO; zXYpv}V^acO;Yxazdq#DS)O)Wbnjb}-GD@`MgyGNtuC`GpP8Q^18zqvzTe2uNVI+iJ zMWH|1!V0m{9Q{w#%OSB_J5Nr!8o$BAOI8$@I<7Texo(w4pEVHt;eIkT&|7sdZ(_)B zcN#u|f5%B`NJ6rE45`DN~)oB{+DGyXeBIv(v|DF>PEr8i>f{sgUdBn?3p zk$9m!^Qy{ci>U*gaMluldQ%||yfi+8Pk%}QjIwU7& z^}PR?C`3nx00x6UZ>~SPn-5^BJNx@`SvepUKdhYaG?+pke|%b4+GQx?>S?q$b{qL- zTN9pGa|c43-jDe}r`P%=EjrzIZi%Nq3r$owooh+Zf%5w#Tu)1aQ1m;P?-ZGDd%!t8 z%Gnw1s~O5@E$NSk(patikS$am1+LcWUSCw(Rmw zQ%g&CdmvMn^;-r-Au^8{eOaEXsMXcg2)BQxwebfS41^zTj;S>x3s~l*_A~Qk(#@FB zwUhh-@eq2JS#@E1JT!MW!uOw3zv$n-Ti|#b?|uG5Yi8wc8T3U0tJv)%iO!%a?(19~ z<)m|oMfN~v&@8&^6=p{G%a7%Y(Nr3R%F0T6M@P0*A4m_vZF+@k={Y0vYIG7CC~uxB zcXT@HM`8IQTt*#oi9pUD5^`z$h>$aJ%@ySGA_!m8QiFAJDP)mKy+D4BFiv@M(|8nx z5G||?-+oiSOnGl_w1hQ36Y|5UY7jfyAzdOMhI5q)F!aMI@@I8>(irn@U|dPO>2oCl ziv|3Zjb+(AgO9PkCat@t_fvsqCchI!r=*P%*&UN%+uVCEHCLfX`}J#U_&o>% z>>eTf#cxGn5zMHsw+Afa?_)b)n}L){?H#cyZ@DDxqyPv?=#j*gP+~2{i7|dvR~f)d zY`j4!FlH&=rH>X39sTQQ4b=C~(VEx!odAhdt5-IU8#ZS>JsAxoD<0Z{u1D_ec05~k zb@5usgUs zrHyPx7y;>%L6*TF9KBa;vCCUkO0s=Jf;&i}Io})*_IW|?X)e^9D2*Nt8h(M*^<3{$ zilw_2S`az}dCquTnc<2;;!lzvOwadSmO=K)Iu?o6j?6^29ljC_J!n1odEVafir+bt zx~!G`M?x~ry*;ci^}~pcJ@s=Kc~IupVvXbZpj5ISN^WYyD$GSta=9tk&ewq7SUW6j zHs1NIHN&m;pMRjgriA#yzDU(NGr-7N6X^-Lw+cgrutn*BCX)?)DZ(%c`Xl>^N`C%t#ibLYn(@b{v^CvOp&$EMco1rJD8TnN>HqDjUI8 zF%FDL;(o<&zNK61pA@kjF%>=!)MJe{7}&zoLI!p&Bf11+Y#;AOFr~Y8D|A!O9Bm$$ zEazzAbTLZA=9dO_z$;QEuxj9M>jUnjC~Sp?;u{sV6t*E{O(A z4=WkvuX)1I719vtq{1JKStJPO)4Vzv{793@=*s|gc6K7!v)sM(@_OrYG)e>LIVgE@ zp;1H7$TX%oKr982{$_MTiG_Emg(t?a@cZXNVKQe#`SM0GI9GBl<*uj4@dwJSDSWM= zM_5OrMb_URg;uK%rwD!(e~02x?+v)6VrI9#irKnapQQV`3;!ovC}84rViY6pANxBGGR3bw`o&G-lJzI*Y7VgU0Dkm|R55GDHdT?vR8z zZ!>L-lOpk`|94tY;~yN+(6P&AThDcRpns7hL5Wqh16Mg%TWk1Orf?IcuqZsLb~&@= zkrg)JudfE$WP{K&x$uN}QY(PQM69z-m6~+J<+PEL!8dq-psy%Upc>tV_AJGAwWy zfkl*LrcN)~a8%cg^Fz}$V5=if$^sjaE`S>?3r8NxbNOr=fhKjSh>r(PBbuP@GdmXM zCg%QdrL=J~E^}B`$b>9B0;lP3D;LQdmO-;_je1z=q@;yqIR_Q_$11jz*A!2MY{O1W zNQTV8c@z3|_CkHky%g8uL&)flyiZ`SnvYp-EwqGaWQejD!jnuok!Ba|KUMN6EEyjJ znDgp}7pPElV0dr-m?7*G#f?$@j#tD*5I003#JFWeAlHe^V1MdWpZG~VX?q~hhEA)) zvHksPz40Z~E1?#OcR&>by3sm!fD-7=b$aW>xEh-x&}gn_S+!1Ggkqq;E$r06asH-y zjaE$4sX#llaCk+eV@soz_$!k>thJ}R1}a%^Wbiz--MuLGr8){Q@_|z-F_r%>hUX%a z{6Vf2RYu!C6%mA}ke;O6#Qg*e@@$8&!63Q}#9hm7*wM7K9~i~t(}u`?D5@i6^0WLg zPLDcX5dH+xW+d{ZUn7`Wf9uv|x`Q?v80Z_R!-9V^2HnWj?s!U80+(HPdOi@>PmH9d zA#;6#-QxB5%FX4FAb&K$$rQ# zImNc5UWzoda%XJVMsfOCGsqbc2s><4Sx#E??;|w<_%CFhsD~)c48+yAqyD39jb*Lz zl$G0Q#6=r7aY$32BNixyxs1RJ~rF$vHN@TX* zQZY950HFp z8#-S5xso@Zh}}FA!+z^CjXWET96Hgi9818#)Ra<1^H(%dey!!G*=7=mN?7a#b4T)v&6W;XKPWW7@})1 z48_Np?Hw4kW_2|s$hutn9tthtXmNk5kdzYQXRv;6sf`i&*y!Z1Kk_ycddNee7$+(H zbR%x>hllQgo7pgw3~!W3vd^cqyL2+zrK6TK6dNG5OB+w5QviLRG*q%$l-M(<##$OF z<^J09$To4 zC1v|d#}Tu=pFWQ;dCtLcS~Tuy9Ff)Qb8URk7#@<)KI*<4woyMQj5Yy6c&Sh$4 zlG=gUa$T-YGL1KZEHvK0&>gY1Xphl;4rk>A*5E)!pw_6=(@r%(_stw?3-gk|+;d|h%r@uP=TtKo7fa~)FBPH9X=e$h7QvT#Ik_+p- ztk5Q!*s0o{iDtaSxLXhfLzlXBTy*x7Ysgm@d&`=YdARhM)5qtVSNd)Q;V7(_HrcKd z-q${VllM373nuWU|Aq15ISvr5=r(Vxx(o0jq+^+K6@&0JCt7{b7poMPjZvTJE%aUWVAqwr&_H`6>IFO zd2~G2pOSx{ozY3+NKTjT`|R~r>R=rxdTJvAbGaniprAM;WW+_(5VS7Z%XBKI>x7bS 
z52K|qmf}tFCHh2&vr~s-!4<3I)#{#OmU{BXHk6&bm=0#|z(dbZlBr?{u;Npk7zUrp zugU|LdKTS_xC^;UgH;qclkeHk9qsW-1s=Wp~)Kxrnt)0It zwsk{tbx_7^>~jMrjUG#>-i<)y8KOsg!ec?O?_wDeYqN@dcs+9eo}i>KPMH0}*@#(C zbfR28nzqe_BLznKV}s{?|MA1d;E_6zYUfe8Kf5dR*U9M$;D;rz#zn)Rs~i}k;n}Bn zKL9uR>7vgr?YZ&LFn+ejfDPSIb(AG8Xz={?VW?(OBR zSbJC3U#vE3)KTgQgoA<-%P<(X2F7USw{8L>Wi>V@ZS{g2ZTPx1Uh!hvKMVLJb{Gs? zJ#~$C@Y69Ae}}cTb_7zRUBnzQs(!e+QUwR$UAay+p?{hnh&MI6U#T&<-q?DZG-wXD z6smYSg_SGag7vg;{RKWSyxQD0A5*jFrcHV8b!x75hdr&xm^NFqHCZ`>gCGy0KFKEe zg0&FHIk1F~GqSrP&@lVloPM#@Q54>tt>Oj1eN<7PecH%D!&TL3wx@uk>LJ|59h%Ls z<8g9O(P@gm$X6x3vLOXUKHn|PMJmFC-d|_y2?a6&VAtIJU&&@0F+3JoEjfwXG5<-9 z?AK z+CKqL$gzt6f2KSJLs-)Taeu11tn&FlCEfdXZj*zEP^H(wiFbRSvv1f;h8Umv35Y+; zXY3Mc!M!OMiWe9A78X*6Rh#G6wzv9li-K2a7-kpI>3S7fCU)eQRa5S*=I{mjd$yjU zPPTOlvA@1`XasiP$ssM*%+_kTtxK_`xILq{V-2 z>|tbU!J+Ngn|08GVEx72CVy|VeEU{`SfMbV3iIgh{D486#Y!Oa-Oj+NTPly~f`I`n z6~%e8aQhV_u+EWeIQ}}bHQYU_M=eKhQS&l921;s`8`)Bg&}UpX%(op{;S;<1dBJ7f zO)|OL8=)R=wPMLp6QR&J9e)$pnPv@^fpE3-;Yd@GHY)Nbmj}#|w2oS(lkqzgKi`QWpu!+{$y`XjrE&i#>Sx}P8C?OhwL5eNa3B;T+}uQi8vU;fFn4eC znWhg{=`Wxx2gwRygo|D)DeZeoM6pwm^4XoBeVG!*HZ3aAY)Rw#_c7{b4~q>$laX;) z97AucN*KN1(9dq$DrU;HR!#uBcf4pvL%A@Rxtw91jCa=xo)I}edDI(v2mod$Gr!1T z@6jie^|#gZDk`_t3}D~`)LzDZ&q0PStT(;a<95go((8`yCxxgF&-Pl7?%E@=Aysqs zOEkVX&|(w%{&*{Ew&c-DEuJ4|5v3hpbSE651Ru9xIqWv{cr)LCp03cg*2;I#Ua#Vh z-KZ$#5qWu9OCRC15a>cFU0{&H5ou`ha_Df4azeF=L%%bG`vooe`CS>inw;ACIBA9O zGJ9sX9vDotxKOUKmetawQu*UAx>#n=u5Y~s!qL8Rc@(PWZbJi zXRlA$ec0aK_M|R!6a;U|MTvIq%eWX8y`OYP6*Lw1y$Ex>WB{0c;^Y|8fPKI2565}M zl~INGROVk2z9jMEN*a5(isnkqMrbFF@9`dN|qx@4TMVh!!( zGi6xz`N6oXQs=&^NxiL^)#?tzKB3p}=&b7<%vej}vXq;7=d$6+8$U{B*tkq%R%Q3z z&eoW0sGCfU81JQdvmboB1LQ{2`^wiLJ7qB@43MA zb4G!FPub(KP&`lWyP2(w9-;{!dL=D%Rs7i*+UMC&*7We{6O3%6ZyVNeiT}4ZN1HG# z?a914d?82&_fcqg;;C=8MK^(13R?GcKa~Y*)K%_c;>nAR7!l7s~8h-;T?790%Jx^;W13xlApL^T-tk-VJWHSLP z2`zw+L(6-8n~x&yQg0D*ruQq2;E(%fT$|;uc^e@6Ir^gH4%;?or_t>H66}A;H$@2M z;8=H3NbXqPRw~S0+k6IDaUsoi0qaA)S1drPuUvrss1W8r=kEJ~yR?BsxwV zOhlcuWYh0mSAE_KbmANu)!#ITDVh^9d+Z{Ff7a}{-ugU0nIt9OQTMW(ExX*lzdd1h z!zn@gR-F(r=lPg*8&*xfw>6nudq?0(pDb|oUdZTL>E3#oTj2HJs;U9bF)mw7ay)x} ztxNhK^AgTEZWEpPeE(A5k-TR1qx1fGs>Ox*tMljgiz95Md!Cwvh(*V&`%!veC0X-d zqLaN}oLxXZslPj#K!!cc`ANo&U$%Nhn(vw=KKG|PLW_6pc8i;y*|jh9hCEebE9KIF zwec;(&-~9mK&##GFH>wIw-l~z&hBrvtBFS1N#Gy)LqTV zXXmGv<#(nbfy?tfb-t=>o%mVbU0KRl1ZtQtVajR*fJ&b+MvhtRGDh(VyDInJ$#X`1 zJU`zy;v_7Av-pBFlL!@qt33~b?Ri<=>$Jq}Dwv>{bAu65b!bqObKRxg`%1Ik#a#ih ze$4accC$+7d$G#9PG!PNM^l1+{`*l5?q%8CH$THu{U8{- zVzLGSlAAjfTUAAbj(giXwm!nep_4@NwhqtFKK>ym8oAv>KVI9~%MzIRixZkOCA08sE<|4^JO z<6fauvZIPRWixPm6WPu%2ymx1l_ZtqxTa9BR+MyijYPu@PzfoG2y&OC!D(jIzPRC# z6YG4y17K?M9B}k7SbJobMe>P=zk2S(D>4;;sm$uqk+erDIl;P%ZuezZxrQ1KQo!aU zixZVb-TCq~+&*?8^$}y&QlP%a1Q{vuTa=36bzcda{8OnJxL@rRz$k)Nsr9qLYQDO8 zIpHH?aEL~m)WW0d+V_$3^e+0n5_NphZu?AG#buJ03YF+8|9H@a^Z7Ql4x#-<9uoa; zOuxP8ZpW@Z;}%fzlkC4%j|CbS>toaqY@YIQ9X(mAf^K6*=*++WrR}i@a^mD4P_c4l zLjL8CjFu^?3jwI_b*JCb2)_O68UM2o{riFwBaB3>;e}%k{qI}M`*iKVr-v3oFWS=zZ)El0Jz!4PU4>m@7r4P-N?Ar| z`PZyR%^+bO+3Oa3MOKx^d|72KHrvYoeRmBwn3C;kwqq-ay_r1lAa|$3(~!{dna^k> z%RU6G17uRpLIbt?p1;BOpWxRd5BR$&1vwe(zT;(?RlYz z$s$|dF0%YrFCuM%l)$mFS-MRlJv^~}cgD8G&E@)qPg6r?s3%v~_WkmtPvaV7#+U&5 z@rMP>-c~IK0vEi0-7A(hcWJokyPo7bNF2is2bg^v+;7r1+r?`tGAuja-!7764X%Ih z;Wjl(wDggjDrOu6CUFzotg!26=(fvG6dMW6}qGFd5c*ptUXz?DJSLRQk4yPS4yA4*RJV3f2k4kfl5*>93v+Q(eJ2j@_ws z;Jnan{D4Y7JBrX>pF`%Ze}^6rIcqT?{OLAsnd|?W5<>>Q46$kQh+u4_GkkwuERqey zaO;L8Ra!E^Mg4l1W7XuwdpjkT&L@QCwFQf;{_+&NK8Dx6^?}jAnCvzOK5=O~L($FW z+$sAdAztBY3I9XOBqOAX&V6+}rrKrt0n#=nsVRNqws0@G>(z`W5eUA?wt>eH0}-6x 
zo8TgRCFWHEXBj;9;w8O66rn@?X_r<=``m4aFA4bG2Ik)7KZ=X-GL?P7h;i#jf%A$H zxVIeB5QohVKoe~C{@j;&{;Oy(2V+=C`F%RZ2iqV$ELT&rseO-e)DHNvN?_~cP4_UH zbzgt<98JZDaXy<3{-Uz$j`MkY;Hd5fn++Lrz5TgMQyb@)rEqay*8L{%{dz>Rvvvuih!hA>v?j|%k+m(y zGi$>(FA&ikkd4oO`>qQ3=GbwfUsdwps^HYwEB7)R+<>RUJB@1acC48Usr~OH(e+qe z-B@5z0t&dyBvn=iGKzW3#AnUn-z-85zaol3Y_6rT>Z4^}Q|o-ih@#JRJVDR4PldI9 znE*h(UY1xLDbG*%Mfj0c=;^Pq0JAE=h0jAQ>DZPY21xu`nT! z1~Ux2o=shEXQOWy!1qFMWZXxO_m@q>42KO!)0kCO-{kh)PweNc-3ffQ)ex?O#3I4F zlBk}Sr%51bj|C^|B<^tvtONQlyqqXj>S3XMXpqCxs9`6d4`kW%kELH9N(r^H>ddW|c){~t`-*(3?v@S{8je177$RAm zCOWVl(_#(6Kxp9+~gm<(wUbH}B%KeJh^(f($H0gcr#8~y^HL8@&#C&cM zr^;<$5dT9!M9F~K28i^*a)@Rw%xgS3BUflAluAe6B$D-@r{1L6)^l~0t0#eDi^Wi- zDn%ud^Ew()Jvt{s|MdG=R9x;3`j2l}(8ybRogr63 zgEVc@#1b!Y(xNy^Uo|1UNf`u;z*As6j~aq!u5-&*C~@9{&?C|KFS}{zdb)iet}w&A z6VZYt^B*6gD?ndoXC|@2kFtAL6XX5!1TDJl_D?1zkH7)%q?H`6$jFco_-bEbCB@k` zMb961jK?3NP$v`9lt)JlA7`lO7P=Q%u1+wISizu&fq%REtl9F-8u@wCi{m3g+8?u5a%zXofD3XPspTl` zK+eEF&tkEz{*|g)x4Ypullic z-#rh-$W{WCp0oI5elIWr>YY|x4PRCkb;t@^aNd42{`T2|C8G;;u=XbwiG1{=AQ1V~ zG@#N2jp2Zf&7_uMbIMheWoT-kY~jE?mNpS`Yw*wNcw~x;EyT$Fz!XI_v|cVCc#}F= z&Kg;6rXUDG{{3Zc$!vi3>+1X=Pk53dNMZ86x(U8FFMvp<@*h7r%C`X_rCR#kzN%NZ z5YFqA8%Jnoa22jOrYqeQ`?z@-%$*@M)fszvnrpjcLw?k;zrEVzr*q>*t@aN-_y6gJT~I|5vYkv z15PAK^)vTieJ{Ozwqyczb4EcajLm<%@H8CJl=o*yrX+BVtG%8Jge9f|4s>l8K@ z>U2}>m!8uR2EWvp+tN{0QfHLSTvi?+w)>)N_C+!=EY&6Ztbole|1%1~QQu*Ls)E%~ z82tPwv6MY(sjF7xakS6NT=Kl4S?!AJoS{BcFWJ=orO8M0J`|gEd9@T0utGC@8qw5Ab7o~d`$IC9`F^kOIxMAr z22{Jqv)sHx4ZC^FM&)tC0OFuW?tY?MG4f((U>7JwvA3JwIH?2||DcmwEt9h8u*+5- zW-pC&CtB?@`7`|j(g?!KZ}r@4SEtyD?lo?C6iL;VJ(HJx<4OU%eAq3cbb+G zdY&fmX@K_r!=z>os2BqmBk`Lr4TQ97p7|={xCG9-%k7m!+!NeOk$K-1Bk>iU-4Z=j zMhuzbEd|26@7Q`Fc1>$8t5Xagc0P32-53xw1_Q}LTnvM(G1iNb%yqmjJU3qo&+B0n<3CGd{0YJ#6C{1aN*E9%beJ(foe76!9NVk19;rUv^wo{q}Rd?5egd+w(d zWG%}c;L(77JcI;}PsK>^If zUL<&$WW()BL6|mgE2~xKZEg%VyrnA&);1mL+#e+&rNEEJ=H4=W>B#?>V;1eiXJL}m zhf*VFwTAr_&vK~=kUKQ)AVTwyPSKCLs4)`jfYPMhVx$Khl@UA>pX|n#N~EIn#0CIR z%wG)07Og__g@|UwLuhdfr-ak$Qr7vVNnjc)@Is9=Lt)9$gI%8k?z_~+Br4JfO#u@<4_q&));cN_eY7qOV zGVfB?UN^yHUmoT070uYs_kq>HeRPr-`-&nPDpClDf6iy zm<*fmPf?b~57oNDJ#yO^uZ0hWH#&S;vzEoZ@fww2m67HzQ|F&lY}?)hrVGh4Y0MgR zb@*&n)i2)0?iEyp+lRaObEKQR$F5AcNhLb^nM$AYm`JJ5pLS%3d=2>){j@U*@Uokm z)?HRZTbs2UYUY}9jfp{PyW~l4Rk+UBwyi{Q+mY{TV(105N#U^1p~`EV zO8cOrrAQ+6OP-2x6}(V+EZXPbXOU-1OVsH~;lZrWZdMmQYeczdYQ}pUOfJoq93N%K zP{TXr;s+H{f%7bqQ>Nc^TrupwjOA}yzt_jbX0DN&^fj}J@Xaos*50>Ga8!;NMiWXH z?k~o%+e84pxkCqIuCn)0&uC5RWwlNy>)f}Q+OKB$K9=KSo%Gffs0NWEuS`c?Nx_Vm zpR;YKnOo}gg{MDFN4w*cP7bqk>)_W5B;ZIEv-R5yE$GPyAWqxnEBD(atUi8dBU6`; zNNGwtKosgF%#L#K-f-uJe=K4;A_s-=fB36PjCE%~uzv)cL=uXw{VO#GTzV+1`Lv*Z zEGmE+a~}i*0qdi3$=qaR)25P z*K_9vQa;v1xa$(lg6BKh%NWjAIDf81t2aBD~S&ZSy+%v?h4<@m!`QBM*nD<$Z zdZ$5TS+~q*Wwa(iJ{5+NFMD`d$GU6Ba^S!ayWPJTKPs@dpWSY}A!T9Ve5nQH(@>tw zpi_?rFGT=3I`Y0S6%CpzTZ%A>FD@lgwXiVcFW9|v2*M~w+!eG9WLAMilIM4ti1zQv5)4Zo*;jht>Tb2$>Ry9 zi%*-2$^eA*$R>mUqCX#_O~lt#18N)tBR;oz>IYU4@>d991zSm_keDR5jCv9DR)-q4 z;)|n*|GBWMT;C30sT^#xtQ!2_Kqd8o75rRVk4t%Li!n!b#URV#9yo`OuI z9(wF%R(BCKxTyw~^^8f9CjQMoeXe>viE*e`c5`av`zd-z@I;Un?TJx~&W1Yl#pmvc zeUiTp(ZI_j+A|p41liDX%zZ_>F6frSkrYRjp%Ja${t_f5(%?j2vDIr7*P%}V)uGy$kdX|Xgds!U#J-5DyuNQk#udK##D@*mV56hnz9K1H2RBbro_-mY zbEK*1W{~rdE60@;sX++8cuJw+HSN_$Xvtu~b&jJo|MJ~DVzgJ`MOo2X>jJ?X7blMP zApw#ifzmhn;g<{k66&FHu7!Vpk1OvY?tiL7P!+(6`r=S1k(CMNn`=Y+a)kYK&VoVhb2bjcg&*5(zSKn}XE#G#qlGxuYRYHA0`~Wwk%=Ar z#`8$wX9WS3`D2mPQ{fK;*-G^$CgNmx6*Mkc-2uFKs#sU-QHjU>Oj%!xr2;(;2F0~P zi8SLixM0p3uJH34sPO9Zw$rR1$75XNS%jqzHwZvwol;ao-n4gG+9#U;2r**wC{r89 zS&-3Kx)_A~$(S%wf1<*KD-tBV(%Z@a2b%03PAhr~h?Iwgal6s(s@sH_yczP+jY(>F 
zk*x_YjG~I0>TO4lr{*=ws5jnc=Ka-_U(0WQ;Q&Q5;Rt5!lUNI)bfx(H$$yP=a zT;^AbL{^QgiNPiax{Q#8y@6JlV(gsfo+^vSGkV2#+Sc9bWWSl6SiSUZZ51R}{@lZv*og4xe0)v! ztjFRxaaw6@EiC!)ewaUqZ}Zv8jFP2toaqm>Dg?U~7z)TYW+J!rkOvULFo`=&XAK}* z6|KWX;QrB{wq${76_0I`S^4fMDDJ%ey`S=Lx8%4tacV}agdgIPgu~eKa1N6XR@>+{ z%O~m<+w!556c&9!OPKBi68GJ>+38gL0^^zd7UALVM$vx-_-CeAT8NtEeUT|pqCRBR zw)|2wV2v6!HTxd2*9JKYwc?z4>PHicz=2Rd_of#IoBDlY2_VG5tn;CAiD`07Z=4Zl?_zL1WUT*-BN;ciDB#0 zBoU%*`hld^97Ob3gOdP7s@`4e#qK}0$-U8eGMe72x%XA%H=n$HC4GZpI~}k8(l+V0 zfPafRT-^lH^5@ zEs%(V|F{FO!loaC&bindqkZ{rm~79k?7D8~ks)qtU{=RH3~?f|Y9U%Nyv;zHtj+=D zyM`zaeauRAO~7S7t(&KZvHV5W1p_=Z-j#@S4J=%$_^S+5sdSbR0ru$#$wuiX7m zw`k5pc75d0;0v7vf;1&*p*!9znY*c!wEyk6Rs=@`tB{L#!CrCLJ@4h4&L)u#%VnuxkPIo>Di%Y>Z_myiPj*DlO9Z_SC z80To$vjBT2h1`!l20-Cn_sCU|)rF_&Ryd!*V;&^Tb2<0;Ax;^t@cKA9-Gous>)= zLt&*EWTXn&LByarZG^!+L(dp_`pvj3R~xEm(H~UP%*MyPx8&ek*P}k(l}`8QoLR!q z%uefb@z;WG3Uoc~!slq09dFipBfkJ~>Ov4B*YtoXM_)NH07V7=BBeP6zf!$Sqe0YY zn1u#+AD=5eM*?%E@NrMl1_jm(dLi4!mX!9@h)@ot*MzkHEa}W=2V+irWZcw3c}g*7 z(qo}+1$Br8(Y;P9r%_TC*7*7~m>jF~<^;|RCEUqIz{Edw5dYgOj313yhNlfAR=^5| z)@w(mcrZwbwG1?>MC09!C2-K1lh#dT{W^jf5c0Rc;#MB>m^?bs$Rjbzqg;k4OYJpy z#%GM6!SG0GZNx|lk!o_hU+Sv@kDtaommmF}!tZlpdaqyy?nx}4)8Taf0V=RTC>7aC z0Ah@9w`Zg-E-`MF8f=3QvK~yhPn_9MkzvZMHoD~741Z2x#8E#sr8ktA>>>8&B!D$8 zyHnh&z{~Qa?HK=<{KiUpSR3uTu(4y4YEW1xqG4N8vl>$~Pm=LL6>V>^gL@qAYl8!_ zwSy5?FY~_N<<_Gsi6K|>7Mwg!t{vPyZtF~vom_M#t$$H>`($6UYugW&SyUt5u_hMW zkoXk*Bg@S?L^MS9FF#5;24PgdcO=IpGc>>Kg9W8Y`jk^6wX26v`kRf*9d$cBW*QsP zyXCUJpv-p(8pYXh)17<8!inJY80^x{n0>zt%oQ=#jE}W=oFoH?;Rb?SxL%sCy`QLZ z&LZ=DYK;>A_DZ1@)f&?$jY^kZX4&2UVw%MPcA#ZJI8McL3cV(Z>5p+gEt#aEh~}j@ zP3a&nMC0}tLIdFd2Xlys1}V~lV5Du z)CRf)v%Km8?Nyoa-JR$qRo;pX&-~3JTkCL<{INt@8$Xe{^=`I3TF#;ScXI_60BGJ= z1wqI(+=K0qM2jksnxwz$5z5~+OVmagYUce}2k;hFWED@|M)1|0t$muiI>?1Svrl35sIM(zcMF-4*{Tu+Fn4l%{f6}cwNK%QFU!~B zkoI&|uM2~P#U#bwYgHMw7JX>$L41G#L2O$a#c-_BZhBkpe<|y8b0O{{7v+2FwgrpgWlq zuBX^`Y*6w*tOfD(w@UzH;rSV)jYCZ4F;%r{*BetwJ%rI1`Jj_A zIwHV`fdei0{Ma-jHt%6*vRc52)-p{^YJ{7(&XV$jw)hi~;UtfWW}idJgOv9?do)XA zhp>G{w#-rVO}fh{5p=|ZgEa#R3>>PM;@t+%I+l4T`6iF_l+>XxR1kg5;}E_P2ftIC z`vku@R>&oBzObs*8vbaN{`_GwAUH4yA4RI3rS>f&=jU84qS$V#P-N1r=51s3OVLKl(%(!|6|xaMsPhG!`|Vw2y$H9w-K8 z5tMQMF=A*e{IR8>GIE%zrizJ{5yTi=tp{j6202 zj|JgC)KQ9NnqA66JFJ!8>k~6&VzH9#M~d#5&f={}Bm}G{uS*0yp%t^4t%9nt!K9XglFm z&|hZ1gpa%{EcOwr|iH1v(z(E{TSA;Hlf|Jf*vXdEA5=$ z5voK@{HIKoCZDUHI=wIsR~sJOo{1lPJYL+ojP%7x@X3r(iDECRBV0df4i)rr8UD5J z@8Xgx^(`w^bN8kW$m>}Z{$0<8UcFT7n^$|BGAK%OpQzkBw%6+G>+m5!7_0jVrtKV; z^yEaVjcM*^g|1HzIc_-Mo^zFw@3o?gh9&|17qJ0_c=xW+D`L2nt1&rCdI~j4qJ#iV zNDh*dya{UEE)Qz%2YKkWMSS8TV~$|Clv~g+lsNzt(c%<|B%`_#c!p8&7xbmBqw&S$Fw`s7TTAJ*}3S-h*90SF$57M_kM5x()lp94|loIF!D zDvzgdV$is>ro!^6r#-_*O~w<0=piUo?eTd?{zmrI7?tn=G_2D4({6wgSz} zpp>nZ>aVwv3Q2bsrjqYqD+hP`Qj z5+j5!UqHK1Vr`uX7}4t}AK`I{LwZB1(c;PEDrg~Xi?<@PA(QTPILrTn463~_ScWEDvkXHw_jGs0 zwEqp=a>T&rwg5f6@P2(o8DIXk&J?vvQBRnX1>TX~WNI*J{leDS(T2gv^xJfB@AvkQ zE7>AT#WGvdS6f@X#I8U8J1#9Sf>yg0-~R<2&5*NTqXa_vg*``*fdu66B<_J?a+|wS|{5%*ofsHx5EfF<>D~OtCP_O1p`rxukm1zTu5E6q=#P8{ysV_bi5VlLHVaej>21^>wjDN0c>@?XaB0g(Kjn9?OSe5e~~K zHDhtP(OMb>Z=&-A5^VpM-u&<2|IZW`NRdC@=7uSv<%%+!8XfStJsCGYW+^HRArXsD zS(A$gim6H>;D{b}ldujU5-sHlo`XAh_Z9y&pZ|l1`}d_B7p5mWf_g)&dutWMcPcuJ zZoU+u${>O&9pF<5HNK(gY9j8V|8L#{M-AogfOx#X$6_& z*&f5;G0g4p= zkT(!v4U{S4*3C>22n;dG6ZB%Xmo^p)pauRv^!*c9lrRFuUv<4%P-hT z8bneDY!1qyIPC+zkE7NQ&7k=8O-;L!g?5v7QbD_1U6NhMVS`tt9M)e)abVB66=t%I8$_J2*G`Cpf{OMhCXDQ z{~z^)2jG##{~dp2fSY~eDe7TE@RYK0!*J=&XCi-l%99WY03b%f>;K~GO@Q%kdKUnf_SB5$sK`J`_ZvpM7Dg}Z1 zIYD^zcPGe9OU7J50gC>w41|J*#(;wT59IF`3K|~@{(odBC|PKNe`Hl?+W*DCKtY9C 
zLBahmM(1z&pHJ-H?QiY>En#z@|Gi=k%zx1^z#Q2B%KrgFLAi@tk^U_Z?WMIHp`dW6 z{NPbL7%p}JLwO5A80P|z)(utBgpvir_1qps@c@)H0Uf44iif? z*jg1_jfL-JT#>{taU6W6p7B0M;-tYMX2(M7lK& zG;pBFkl+eoiN*dgDQ6Y9d$aWo2;#s2Ux(AWl9_t^f7Ab`>KKYYq|KuUA~p;I$sOC5 z^ob<;*S{_Z8TiqyG@z#r1|n@Qgj@}FAES9=JNs8LroU?HqwO_Fz_mqy9;^DW5x2;V ziy)2NVV_}59&kndk25h~m|3=)WgO=+CuAhx-sjCdd7B2PKDU2Nahn6DL^MyPSj0#u zx-j?xXb&`R#;P`vsyYD6rY|6Y3Y&qA4;4Sd&}Se_7OJB4WfRsekHC z_6OplmgqpeHEV1^MLHHFdJsD9iyE9(>Kpc<>YqSqMF0y@9n#)a-@XUS6=R>vp`@cG zqk|*Z4rAS#lJKK1kE)MeAXeBOBs~hQnR2T~Gtn0m|I1c#RyDYpNEc=Ysl-vGKOFW5?vMxic5IT}bsQ-KnqCi;>J^)7_Kpujx zGolQK@qkTCN`8llvO&qS%e*A$2hb8Fy;y)y1`~(L>5%PZP z{rqu5_>I{+n}qpKYPFT-$cwwK2~b{GkBnMO`O)WdFdV`gN3q<#0r4X$YD`nSMOBM~ zpzU%^H`l^^i*{bM%1zA5utse>DGIIWBp%xg#g2Du%pmB z3NuiRA71^hu(dQ;8$8O2VthbPT#RlnmhtJ<4oV1~om z8VRn0q3013kATLJP_uQ^Z>`R;c}@AIl?W9?M8D!&xCvg7G^bb$AY}C^2l=b&?Q>xxy*6(71*+K6|7O!K%nhY2L|EK$t!%&ffIe~cVX%U)X z2seBi!q{dbzqJv8PjomV0t#Na4^LKo2miFXI9!n!X{r;0R@!8dY2x<-4IH_W6yLP4 zza;W{7o401kVR><{Ts6}bZHwfrJ4L-a!loz?X~sYJ7>X~BBw4*L|8`gr zaicFlxOaRv+ollpUdO*dWZBbE76Bse(|}+9dh&QFB;d%jo9sjKZ^!%VEvo-8!29Wy zOW;H6{sFjW@8ts)_!>6xIb$N|<<_y(s?YwPxQ79;B>kvDHm{#-o88Yd;DfvZt`Un! zai~9)DF2_4a0UwX*i`^IJSv0xtvkJpkF9skf|0u7zCo59GQ$DCDm!?GZZ-Hl3BuU1D` zmn}N5ErEf=8j1PO$H&P%J_=|u1ueeW)aT+Ip)pUj>;fq6lDpz>5$Gv3)G_0e(= zEf568u%j=r0##LuMCz7?976l8p;`OK_v1<1k>(O4Woi>#tfI2wt7TujN_nh-&;OZ@ zpCC|4{U0J{4IM!4@ML6q#W!dd-}Nne)FKLv-=83m_nU;c8$-Yssz0H_e&}#LXc(~C zYA6#=656pAPKtQPUeWR1HSbnwFtdg7PkXY-f;dFBKTy$_gnH;_@#G>a=F&tEZ)dGg=e>Dp2t4$vg!xAGa%C=M0=9f zEvJ+`hFUmC#5`%rt5TFNe?&a~$4?GZPNdsvPAY$c!|Hw00A-w)F{4y#-45I%)nd7i zjW4VmG|t(Lw4AF9dY|G7=)?eOaR!NHG2|*mmlF%eSpKK;@?c=8V)8zuZL(^1iunWi zMd%NyJd`ve>ICb-eyddbG=Y()bro%N534KOtMU9`5pw~RUrm!+YE(G&%54yzc(@>p z*-2Cn#JE1>AV-CbO(O`LYAdYDSvBTsuyF5FSQxW6y=AqfBPz%I@vw3zufU*m<#uh|v?9hjq8JoNKy>#f zfil4sR{|IxsOz1Wul%I32H$`emr5Hq0+>8D(76fnq@Y1A*DhwN2DF{M-I?A=!OxJ| zBl;mg)|* zpQ+8p3egE=F)@ZPQKoTcjJni&x*s2#>??00wK%l}kqInS6YyFnB{6seJ?ma&46SU>{CK6jelnt?Ql*yT!=)kQF(b; zC2Vwbl)1e$IlVZ9zT{;-?XgCubKaDn!)(8v(uwsg|z;Iuuhe_l^TL} zCyKwLk!-Cd^HEAlY88(2u-#bY(_UG*w$Ze3(8!d zjzdi;^67V~%T>MAI&yQ#a>abBoIY72EOf@h^gLF2)9TqL6%pG#X5xF;-JZDdkyC!|I?D1{Qo(b1ObVdcc|OF@Ikw z-))~SXEEFtL9-Q2zPz7L=A_glq?@GsQ1(&zgv9JBuaDxe`lE z>-JI0-Pbd@>&KmT3cnI}tN#!3ocqCn<9X%kLtYa9#g1!;0{Fw6t5&O~bNM)KSQi<|v9`gBEd zAONdy&bVQ?(Z+JO;Ztm`KvQINq`M^fe!s|we12oG(pb?T(^u{Dv?Q)(fH~pw!5Q_p z*_yikr4(-!Lm*B8W>AEDX_sFKqCDQ8rt70#_TS?B^vWg!`;${#H4u8>$91@;S z2*U~!vRU`n4+$z-(e&9b_)f;FQ3_EZQf3l~6RgI1+~~ zx!sqIT|PekwV(Q}diwg6m$WQ)PjwTE9NwtVhpWww=L1pcMS@0tqCl_ezih3l%q_hQVaX&Q!UP)!Bhvw^X&xL}( zevFQ}@|+F;t>YIdmd?{J>rGq7T!Eo7#s!v-^K&aA?d~_!i_L|-mEZv&dQ)ttAm!vb z&sqFb;rIFvJ>huKv}oo-mKY(QI{01Babde+@OBp}b=&uY^iDgeSU;*2S{$jS2p}Xu zm$zwB38uwb4$)^04}XvkG!lgxp8Q>$95wJFk1J8vQcMy-hAQ_+M2KIv2ba!f)1dJ3 zH38h_{98k1?MO zw8j0T2=VDGo!e?m+q4I|=a$KdQ)~| zQ2WVi;hymQv2knIkmHRaxN<(qAk?*)ZF{y?lJxe@USU>kh~-Vl+pxwjHd_5KOaA*u zPt8~7UzW)GXjZY4^7iVPWrjr9=9~r4MxRV!Agg8`r;m?R;x!bvw! 
zU#?l@q9kw?RGwr%)nJ`iZhg&Rr28F9p?yv8e5AhoygLwKz{&M&g$bY)pS!S_nNs(( z1m?^4kMCZ&oF5_pY{+j*d;w0{t|lj5TZ83sX?wt9j-yFT+^s(=E5=Q#+=wYS$z`@n zRn|Gs4Ht7w5~2>})@dp1fAq$Q2A(^%C^B7UM^{kZsplhJ$WkfbKln#+Zpb5BmPq zJB`CqqkL5zBigyJy{m44&c!i%F4+|N3cTNi{Y+{O}dR=&4K`R zpDZiOK8po$D@>&fz>~Q&C*@4>`mdeoHdYGfBN8g>LTG^y3CoaR+}2PTtinrjG=i-m zhWQ3Ow8;Cus)1e{Ld|GDk)A#Kh0DOU_f_7O)126c5^E>5C~}CxB;Rub z%s@o5_k7W6XE+p1Tj-cr8dU~6S-$UstMrH(mW<#s zurex%5M%dMM9Q3ZJwX!jE1yGD9kNyVw}|5x`z{7!J`c*uP4Mmt&lhzfrrNtbAAqxj zkjHIFokl&>mq6%Ik~Co*&J~FXsJPG2=02OKr505LaYtr;22!}Eq~WbFl`FTdmM>qr1Uoept>>GCdQ{d@%pJ5E z^&y}xoHJ_pl=Mtai4kjbz|HoM(Y!zqRb(S1g+Ax~NpQ$NCgI1_t@%M}uVjb3<>zAV zoy9lXslwDLA6fEw#*z$*_1?+FrG+aE=m&omKir$y>Uw+11jhr<2T zLhOIJo;(1@vW7qWyz@J{=NUi@Gk78DPQC!@y(ra9K_lj7N zUdH1#xYU!ypEmcu#)#Zr9du*tW{sw)prKooIvrPVPmX3SR&|(oDv~az7KJ~@l&$a(7X*d zDmVC22Pte`ZhFFlj)TISs~~VHfXv8Hsj4s-tXu_MB_sKXfy)uWhP7DeT=vX&?-(1? zTrrFH$VG0+>g!h?YRGy?uI5i`N2Z8TT&XiHv0{@z4&1Bi^iAZv{gMtg#BgB5PM%F_ z5~@oWU2dhNe)9}Kk5#<&>YAadY1U$*C9{>84{p$7t8bgWZ9L< z_LIqd5-}?JGTo&d)2IEDeFiY?Mi;RXXlC~yn)!L@1WmKYUCIj6R9)LyLr%F%3-h@f zy-0??cRSlcE-*}Hxrg+F{dL{nWa6$_bXtZie50aahIFS>$Jq2}Xd}`Gy$>9uj=Ey) z>RR=T%un-~w+u2@_9eZ(PNiJCbGab4Dy#(jW-gq^cl4p?P3JwI*-9yr!da)qR6CN2 z&ZR&r3kKj10BKsf$7J$N>6TYB9g}pRmD*xqC}U+6Vmpo`J86{Jk@szb%XA8Zb0(G1 zy8YpsSbaohjXF6Ctg9GlW*W*V+pdUF5{+|SrSqk{~Rpf1WVvx(s=q>2ns>LfI&I) zr=yO?eqpwH;8e?EYLcaCNOz9EY)xaMl%xXBY3j1? z!Ghu<{QzH)C`nPZyIiQ}acc2u>1?sQ*wmt{^5;)s%j;@wCpaI@y0zK(I8h5;8mnF_ z*?@-@3u9yZzuIG(L(%~0+PzvKePPV{EsVWoHs=H5>0hKHq4!5fL)&}riGJI0!$m`* zp*|eVLV1e~XfaxCx?Cis8AKi>)R8W^-HsBQB&NJh5b>|hRx16jfq|@`TS>F_c}zKP z=y$j+pHeGk2w1^t#{=A|Lq7`MWLfBc-wyF#WS`Ckp?K_QE8*JNwS9ux_-R!vQ~~H< z8=op;1I~3ZSDbL__5*D*;er}syyg}>tHN+nDANcpJM&L+7Y%t(CUgY9Y7mr_#s#+n z&ZegkYgu?bPD`wE@2$g43ITU!HJpQB98@Sr4P*IOBmLCW9O8J_mf;^!#Tfu^He}rC z@$|-31!K=o&H)l$RR9;j`+XF_s_DypgvzJ}WK0IWE~Sow!x|g2>+#~X`KDoCA&W8l z49EL#u9grxyJ=0;)ulxrrRbs+nI?HwuW86d5Cw}s+-OE(Ue#Fl1r-*;%I9{OV>6GY zXT`(HK{yJu9S{1^rQW1i^5%fzJwm{2F5PfFD>Y2QnE%aWI#QoEMDU`SbyBg z{+9ZaRSm>wJG=5Wb5!{|Up+Y%Q&WlsMXjVZFO?>PCZl_Jg|dVClWGKOSE9_!1@Ic3M*p69MX|r|2%_c~ia8F_A$FIm{mDIX1Bs zU!zXlMxV@a03RQpZEVpE2e|w4HS+5m3YxXchOni)hvg6t^1?vsUEa5<#ftE@U!R&P z?ykhDh}ids7!z-E5C9Hu$IcwrzZ-q40Gs;cVz8m>y@A6qqDOsKJ4FyGB0!g}4u^!Y zQAo(KQ#(NQ)$SnRCy)}?bOq!LcQzO{Zfi6#mPW^*yESlYN-B?*=8@f=GaCY;GXeXPd4~AILO@LYAu`oD`2wu*|n#4`YM~K&vB;c7ctwWWY@__7)Pz zr9xf~+59uXe{txuq~0@TU}eb4jfonW<>jBY%oAi&(P(ooxCa&ZH`Q&SkU~ZNb5b;1 z&7FUFFqw^c)4BOm#-6wOk6(STj;5V+aJ7qy{HYz2sZqi(M{1YbuaZvhg9ck2=B78# zhu^BGa%z@1>f<3F@rkxg1j79KP6;I=0llYOLD~lz%Zk7cgm$4VY#DbMKXH&naG$QT z@aSL=_vAPTJH2*Bm0D6;)J-DeH!;mS`_7KfzPWzgaG>eijL8--KKm-q@0)Ummi5sN z!ak5qL`3vcOec@;&ijzu4o)rU$ApYukvB#tx4r(@OU?7|qgspA_qP`+fs0SRHaOfE z8tUN-Jg7wM{^6COb+B?u55~hrB0Y>DiDS=JZ;(mEC`thW`w~uUR zAxoC`a+VM;6ngLH;NvBQv4F1D@Br{Ap*h z?d^8yVZ^xQcuNrc?4bz}}x`-Q9Ct43Z_g?0gJf^;gI z9?>RPZ-pNe(!KB3XVq9oc8pE7uTanCQb6ea!o+c}SH3K4hdTZ{j%S@!H-C^ypi(uv z`0Y~(Q7*Ko^z%MX=g~yQ$9&gvdFec_%_!1H>Hz!ldMt6nyp6ER7CD`om+zgA$g??& zU#*;pGSh>#3r~K$jXgPjSmmdmD=>GsQd|ge&BRX>gWE#kxvm#hap@^UHMA()w%78s@E9-3N`v<6_IZ!$5i501Bd6 z)hW#iu@IiC7IiSKL{5r^2yJ0?%nvZSIq5AmAITW?kuEoH0K$87rZ(|$?8iu{Ob%bp z&mtU>V^9O0ax%$QdPX8}$hAo3+BOAyORW!VYA7V}1%J-I`i=selO$Ju^Q=^a32s*$ zaZsSsOnnK_88=v5GE(HN*k^k;mF(6SvM4OFF#iq?Anop&eC8{F@LOx7T)k}k@kxiy zrP>64P~YZnxR~;LEduQ1b3d-OJ1zoD0m=mOw4F0%D9-LT-&`+CY1X2Nen+vn8;g%} zx*^g^H^&U(@ZM6ICV%o-iLh+N;Zfb>jZ$E1p z*BMB(^u_29DL>>RW-tUCv2Ffu%ZU9P~IBYy(Jm{IL#pS`N*=A)OzX9TNdf_#e z1e|)c4}pOh4d0;&nha~Da{kCoTh}Hu9z=)P%GMy0Bi`G!!aH^hOY-RVGU3``szaW{ z07fd^*|coM23qTcNl&=FHt3mmIhks|5`mRENfV)W46SpP*aYD 
znK-8F?nXLw*Qcg*lLpnb6cSw_NAri9P;1rfzyn!q@J>U8A9L?UW7;kQ#viS2#=NRz zs@)GV)y{WI>;iD_$%rXj9cV(%gn;Nu0Y^FSd$GCa%yYRwBogp5Y(*jW;3^+;C(Qw$Q1?Z^Zw|QOgRwmsmP;8{pbaI_U9heRz}) zwjX(T%J{AJVzn7NI2&zLi&f16V1(n_lL9H63yiTLpx{{EnW4;K|ICzuH$zk^iNN`r z1KU_PeH@N}b5_*IkE~Zcg^n#B;yi(!4jYMav^|9+T%T}h* zYmONhBk2Y)o2AUmOp}&!h3{L_kViSjOD^78LbjhIFfsTs(^h|%f^pXf4Bul76};AQ z0Cr?66&&F`9g*OeI}~Ky=1EbCB231h7?l%EMGzdtmSFzOm-)^}aRhErq?n9FxLCEP zP>0wRgvWgeHlMAFF=je&lH)abg0gl;%g_B87!svM+!#(qPI#KwlBvyoF-;7{k!s)8 zR6$hBb$LuaiQMuJ_Sr%J>KiPMe=NXM5wI^L%Bd+PWPLePoISj-ple#d7URtUGkx3t17mUwWT$gr?5O|R{% z7=#VC^7cYGWo<_i@;$c*aj?vj|5SPleE=Op-cl(7GVPFnW-+#YDVGI3Kxn7)7+4%U zOM(n3hlV4WSRdKgDfHmjk{`%F=3f%4HcLQ+mZP(v+%@X#V<%b>OA?uob`)tXUS$ks zH0*%bsoVikb}I=+!v!#AhTpHidutM1$J;Bs~3s_pHlRS zT_h1i#U~G|Te7$Fp3NF+A+T#4lLCd>qudX~%L9d*!N2@L4aBz&nDzzQdiS$x^ z-@4FhDa~tfq*PM+S4dMpq?-{%{&-xGpJM>$OhiOU#fH>V744#rL8;*>6!N7W@{vj1 zG7s2%GAg9csd+PlKcP?TBeyY>sYYONG3~+qlZ)slX$d9Pu_xm0z=~c+SaO|ebuTm8^$qdZD`Rl!%UkVCY%!ZL z!De%#auzDUVFjB8DS!g@FBDCZk=7h0TidU$Hk@LgLnWl7LJ_qZ27cDbILZy; z{HCjsjD#5+ zCwETA<6sLYx3DCrDrgJ`%(Km7JR|I-pO3*{lIIxyfe2W#(L~xmQqTtNp-h0Z+Fj~T zPH&4xAk|#~*&gOEq!5-AhZJ<@0mURG?_;$EZf%_$ z)CzCuQHanI{94j+)|x~yrO`JxyiuapDM_-|Jc*vC=U}(oxH`4)k9b=nFE6e}BT5A+ zq*yKml!$beKcJukW-EQS8PV^A>+x22_@lr`Q}jgqK672HoJK4DRFMJs8+0pe&Bx)E7`Z&MLgl5B+BuFu6$O-YqOX=8jZxeU z0^ob7j&Lz&DBeqs8Tt~m`1R{YNE&(tH4Z)N>g(JYV> zr;yb;nFsNX^cYEfOmSxYZlY|a44a@kqo~(h;bfLGFaswU!D~JODqtm>+vV*IF895) zVyI-LtwBY`G&W){wxz;Y8vIc`!Q)2D$3Zv32(tG{s&F=B^2UW#U7riwqQj5!fag*G zUoqB56q>&(TqnC~=sg9N;+Y(cFnR{6-y&qW=?!F!>#)aGhNSS1K}q%;xiIc9CU>%;fD$Z zF~ZeIZpp4GGq14fK8!hR(fAsMV!-r$9!B8DSqb}fPRYwjMj53^`6i;}Ml&ZP7NecL zo5(z8+LFb{GA*I+fC`nOx`9VC)RpA&IpF&UG{^amX<7)qe9RtnuO{7WsI9w&iFU{n z+AW&pll%%pVN?_V_m_B(9S9%FXX(dOL#Zj8D4i!26kqKjCEVh@{x{l-Y96oi;dg57WEHvyn9b?kzXoGJm_S60i@MRJspF{~hS8M6DS(m%{02 zg#wA>kwinjiIN7Q2_No_2LUEHEGC-nj(?!NzYt-@XNK~}8?41eU*7rjnnyX{fK_nO~1*SP@}i^y_xj9mJO z{yWW%2Owd!H5y33JkhSd-J|=|S&;CC%PapOhpig6h}Q@PAtb<+Rp#QdmOK7RIa4;L z{}MG*xhrTY1W{{7l9H5je#ECpQZjcbx`fFju4-}N%c-PH*PPWXuKrB6{7|Cs-xW_9 zk~11Ubv!;A7d;cteL*g7C4Z&?(&?@Vvkj*s_LvkBEBj>)pIi(=c|7iowG4ogo&G+f zF|T(qZ2>n=)`Tj{(j>+`ZtTvvm%C9SxC8O>-b2E?GkqJJ?$@^=Mb^a_J`GM^9C*HN z)ZK`kk1p&oA#zWRpijEqzV^i{;l34#PPs|Cqq|Irix8xt5}E>bLg#tx{4J)XI^Nd} z5ee2-(OI<;BL_8su=fJZ?%3e{@O^lh1px6$LKzfp0lmMST?fBNz(p{tglX2jWqS`_ zoh0;k*nBo>DV+reEb1yq@eFBfJQvioNb#~bx*&QwydSaNSt9(jk^Q3g4vc_}B~o>SD0vd$4y{cWSosus^w&_{&i z$FwcnAejh=nee0{Ojb7#zUdGFu*!~{S{i88 zalr_>&*~@F8q2Y^wO3ot3glX_)&2T8~9s<;0yJty!Zxd6Tja$Yk!%)W3Gjj>y39K>^MKo1c5L_|&)O=nngyvx%nXM1fKWdn2BFdGD43<&JOt zv_>N!$}v@3H*{R;R(a-lgXON=UA%(FgBuJ)5A#?Lg#&aWX(!)F!PcM24p>qTD+cB$ z_}skvZ)hL3Ox1Lg%wsvMQ0;}Op>;bz2< zPLV=~IDV-9XoMF8!)2L?C)S%UwZYxnn)Pn5NG_&&-R%7mJK0l32dId_IF5A z&cDtH-V$2c3BthKn5dZYe1~rv1JAyC4S9jCsJxy_%uG42L*3y&1i^Z0tktT+Q8b3+ zdKg>v&+c&v;m&st9eK`?Isbx4Ix}dFHYtKoPnnO$5Kd95HT{{75y*f{Vq%#|CX`xa zlztcAmcd4D9-_Z7x|3tX>OWbPXTzx&@vPGUS&Va@{Td%PvR0EuRrcb8o9d<7+}^Auqx|mdt5fj%Yt&5*g-sX`?KSBmO<>3Fm#3HY$R%h=_4ImPZ ze4K=6g!|~egFlX>1gf~ z!bBdeyS<^Doc4;fDwu)tJ4>9!@7HF-UBCRcO8QOqVaDHoagir2n4d;$eQ|6*NWFkj zrFtNOp3pzU)R&?E6<58>zdQ2o%Y8J*sL_;^oP!<$=`Xmg##;BpD$N-7CbvFZy(ZJx z-WbkQ6gKG|w;GXWUff!A|F0M#v+O9>pr4C{=pTL01#0|C-=F-@0N6krxF}mp4+RLnoxELEobRK>WMyIg` zG#ox1Wb-h=xUUrFe;TZlI!4z;0(>KG;b}KT=!`>ay*j*8h-3xi%gG}nNshtn6S`?#Nu{dW?$ga?l|t$e#?<_fCMH{% z=ny%#2T{islAEV01SfmBKcWf;;p3LD?cP-vd)Chn!izRGSM6el-s@X{$t5y=&-&?l zK8mY)Rr+0*opOIpE&9`bTim|6FPv)%RCT)Pq-o6GY-2J#*4O(n#0n*Ap<#YvLJ>1U zo;2rqUgqd)w=vob#TFFPeq+ZS)2zm~=lz|U-^M2l0bfhxo?V^f(EdYGH46{~AC`mJ zvmB=iI@~vC%XgC;r0(>7jINi|N8K_^l00>YpOZ6qWt0r#*#X# 
z37KR@o7@GgA+doM9F12%bCw@?sZ-Zz-knB8`MW~3@p@okg`J@tElM|2i?idX+9QGU zp4#i(w26fwv&BjTvt5w2l)%?>X}_g{J<=cqTWsm?Ug%n1*P89df-jslQ^I>Pa@xyZ z{Oia-XqM<&$Cqb{(nCtVRstHq9)Q|iDL&D#O}=He2TJ&h9xfv0arrI`J9K%xTzlmJ zECf6KIKK!|zsp{!1o2??ku3XU^?5GKF`n0CIDbXStQEKNp#r*kn zplb;u4IPV*pHI)iqF6Dvtw@bN2AA4qzUurto?$X8{Eev;vK*f!Rf$v*mvkr*`P^VR zUj#l1inBlKz`ZHb1T%V4ZVvw~JRH)jWeLk{@|3s{IC4s|ogA*(tVZFh=V>czEvBfYIeE&xbST!9MvIBK|K+ z8ojri44=7!So;!R@d~==cxOfnY1?G zz^^E6W$|pr+`x||$ttO)kJ4Jl)H_%RHQm8x9JnZ2H>bafK9|YCq#`fwsw5XsDd`gQ zi3?7+tdO+d*6?5Nqxa%;hu3oJe3~rzv!P7T_bbsU-^JSAgtFFsrKi^V;Z}lEud4H^ zh0P#{zRtt0{P!w??`hylP0Jz(snoH@q_y{Yyf!V^boC|8#lxs;5RY;x&Nj9ZAtZY< zV*0D)s3l{=$y7yg*35%aUHg?3%iB;x9yg;{*sM+mYp_ghWu2kiw@rx+5}X^!rT6I= zp8~sj&8Y64SSy-3SU5ROSEF$pjS20gDQ17AHv<2R3r{bJfg)F`OJm%q%|0zoGLI)h z?$*Rd>G_)(W@jg-r3r!+GOKuIw+j76$pZ;X%|JL=IQY5{uH!#-hz)@r<0K}&4G6z( z1J5uZOiru=al`?=K}pb+O1k5H!gARE8ESPMO?-~SSYG>-N?;?8S^+0^MNqva%;KS< zv|XfwpmAKez2UAFu|W6`C6Go5+%m>NFn~>oT*s?*%UBk{@dnn08y7@O=63_9%x}sn zZDGbO_w20tm1ES{#fXE**-ehdI1RMTeWQ|s_O=vShuSVztL@#c5QzK^dL`r-{qSV( zB#qLpd z#{H}^ko~qhsgpVsQ*TwR(FIcY7#`ON&;u=KsXlHZ2w4s;emkZNfHm9AAy+IB&y*uh z=Nc6+F(KvQy2u(_W$|iRP}I4N1QKd}-G>txIf$B`f~lBANSR8j#_EZN1xX4s%1>e3 zmA_auSYwz`5hIQCT|DeD(aD4Oofp-}GW#L)#uul$`ZUuP+ zTA`x=NpE36&a`u$Dl2G{h_*NS#bt{Qjzi4MNFlPSnq_RSd?2f$JE2YW4 zhDl!Gb?#>;2#}pY_6~Y$vF=iRL)!bWg236et84?I;P?ihaHQvOu-jDQ8@mqd72b#|LP;@_=aXd#LS6(T z=)j_k83@=Cw}OkNu;qtRW0!bD&?6C6#)Z4H;gAa@wJ~lP1+=zjZsA)lky8Q?1HGo4 zVv~#e@;xWmb8UrDKV4% z;4=#I?JNGo(}Yq5Kf(v3e{cY)1@C1!3_T~XiV?V-3@AMC82(9nY2nlozTprym2<|` z0&Infhe>r>Fm99Vk^6Eqd7V1LTd704> z(ty2v#r)_%qPN0g4Thng)x(2hR|LS_LT_AQ7@qJF+nYvk)Be4W4wvem9^L&Rl;Hkp zBnl-!sD)l*Ee18462LV3F}t3x1!iX03YeSE!B4*n;kDu1wSa#xW5E)740gj`<2dae zgy(#dL`F~8Eb97wY_wa*Bnu+bCv28WB)z~%~ zH%%r+W4p0!+cp~8P8#FQgYWZu|DCn&nsd+HXYaH1*#T9wx5Am;R8qZQ3PVRIcZNR= zaV%gUK2K8Mlfgr0+j2}?=IIqU1O*&3zK~i_U(w;a8nA?nmp#!iXaGjC_d+&^Gv|H*)k4`_@H-@%I&okmE$2t7hq?Swp zwi5Eo(xi2M(Gj2H=G0NkKfO6s-b8*Y_P_2N5hilz9$-4Af6yls7SN*8)jGKwq*uI~*pKF}%pJ1Aayfh$a& zin;*V3UfVO-o{8YA7h0l%))c)s_ zB&))OGPY+Pf^(}xq?&^qFuC45c9AvGAs`}nECaKd8KievGbUBApdW9Q#mx-zC5woF zsaTVPzqajR0m<}{LjBNn)H!m0Mxb40fVmzqU5YlxQauws=VMtZ@|}_|HZ5_GidsLU zDB{~;isRa5{a5mm#Q4Hj$$HUG#e^p((!ObU+y>tno|%1{HcxVo(MY6})B+C!&}?Uv zXXYVl2rDV${19nX`;m-1Hd>{kN6uW`_Jj3k|8a7cSoqj-k=km zXKk*1xV~HzAI|nU7fT?vkF-Ss)3=|RC~s4*ZhZ2)&d(`_uofkMet4D*(;w(bzFUU8 zhr?5B7|gJ$%1x@c9PW8DNdD52!QiWrvvU2KvW>nIk2%zN_K+74Y*5#5F3OQCribcA zTl{%=zk1mp81HCr6VY;yMtIeKNQc@(c$3gO3K08(L&}B&GeK`2vt$Njqi#G4N4KKk zO4WZ159B&Tma0t2btPvG^kFd`srwJ|{kn3nkoidEaI+8EdoYhY-oDmAx&N?hr-0;r zrGTejm8bkVsmUdIJx*?Brofutz4PU{n+PZOTL|55651w;*O02dS+s7G-$Xs(4RyYl z$w=IN&8o*9x&EC*u6khgm-o27Up3~92Ecs_6IH`gGa>6se%s;v>jw?Ipn4eSaNSMC zlC;+lMJq(+)m~8pM_8C*wS)O<<<4Ow`?|C`edBeOChDkUWFFCP2H z*yi6;;#!+2U8+>SYn~tZ%+;a-Lxu*6uo>cQ4P`V!mQx-R;Bu9n5HuK6wV771a1>aQ z7btkkuy=+w;1OIV1!ir?jX&QD&6<*)3v(I0`P$dXTvf|u`2UM7BP!uMr3t59PM%kLha%7KmtzMLS z_sls~+#ioFbtdXy^E2<#EcI9sI;Zx4rfCQn#CC5kEp-K+nVIWG9^Tl_j5@!5{XQ_r z2v-l+TEbs?h=n&?CB-*!E!!0hM~dQs2-6cxZiF2&--U92o%1*l$6nNg;33mv3Djup z$ozFtmb>RU@DEj>DjAuHx*N^J&u=^x6U{d!F;EmN@O9g@`G>pX3{AGdON6@4y@J=& zpO+Ey$C_wM*eT2aeUfNK`6tq^e|Po^!kE-U%_y^2IC4lZM_7oS#z+`k!NG*J!*W&J z?kX+b$Ov2vxLghmZHZJI9PpPQ4C24z>i&%)(GZ4_#AYJy-t{|uMWqoUs}KJR7ZYg^ z`7)o`daeLkl{=($YJ#;y5?|-rg?<~ZY=X_`B>d;uzZf)$BoIo1$ui}RHb+q%CxMVQ zxPG&*o9geY2yb!viG9e#h*kfg7U&27B=@N@x}lGY@1q-Te+2GvII|~b3J8|{eLG)R zIbV479e8&6UTM}P{)fFWm?r&jPFYlyUhQA4)$e--O{e14=2J#@Z)Qg^!4~Z|5)9^d)Az^H3bS{#xJD>T>YR&3(0l}FXY|RZbwVbD;Xf`#82*RL z5nt`ML|AwF{t{cNQ!77;kb)%qjDL#~f3E*pxw_=mL70R$65=6C^Y2sppFW-e=y0Rw 
zp-VIs_b}*hrM^Frt2bA8)ldk@ptQdqI@!G(#uQ0WL7ia#4{}3@3vq!6-rYb{&_Im4 zO3g4?>St)C#5hBu%0}-*|9-{z$Cs2m-3=D!^OxhtPnphWFu;7C8RjSH2;^R||8~+2 zri~T=ct#}_7TkVNoQ>D=L1yyGkavnkl-VZG);(F&bH#`l{kGn9Ncr?#cM|kSePE}T zrWco`P%WR%bxdJLW#jyRdrSma$AA##Pvoebyr(40!=Qc?dO?11%gwIW(Dua}8sJ}I? z{!a!>MTmm+6Y1xJQ<3$V8BL6$$2R+I3hCIO;r3i-$W&in4pwV6UNR|h=)W=aEaf{@ zyf1nWb6djFE?_Yq{*`3#%Uf)OUn}06Pee*rUeS33T?K^m@ zC|zyqNATK{nV6s3H!JZ^4)VSvSadi@EVa;#d1m1MPdKv1U}U*d7@YKO((zestADUv zm%SD?cXq@uQ3&X%Npf{LX4G5~dpBv7^?)QPpGntl8J{ERqyD_#f*c~=H`OUXjY6s5 z{AgeDCMAqB6^G=U1sseAH)w8Yp8oGY%3#p-{KD(U zhhqHqETtf8+(PuIo-Q+@1|YXs>4Nh@!tUX83afr?DH)-$JbOIK2nQ>5uD>5n;Nt6b z?!ERg?^>^{^9+pTQzi!X>#N+WX*OHR!ACU>Y&P*v`Wy(c2 z-uzDnWX1oo$3}JJ-0pS*C{klM;*5q_!egB!1tarwhh)NUM>|qGI~M5Z(uO5uT!%qa zA7(%bBEr8XTL7*7{sv6w=rgsuFEJ(kb-VZcpSQW-)ra;K?#cqsqM}l_SJ2Z% z1|DHUozsoXK?X=bcjW4IMuaNRn;MDT(@Fl)1XZYw#D-5?o#VQxK&hk zZnXau?Ry=dU!3`?^HQyuy>zjuVcxbtv&rc`veosqrPb>u5l=3_Kf7CKv7>nH%j0fb z#Yg{g)}xtBQ&LMedRL(sL|MdIYI;DvOlZJDn@YY;y2YwUVb-R7b&$VX?JRUQDxt3Xq zNh$CCF9o!=SIJj$|LPn%F92qt512j8yrpX-(_oO7ejH0Wf85PAQo@Ygr%#R`|dFaxAKRTgVce^806WJ{SC z#)T7Hu?#v5Jn}63_soXISMwkOv@J;n5Igt3dtaG5JJ%$Bnj#xppg#9GMYejs5^%92 zvZw6JW1AJtzo3MY^~i$?z4f9QYQ1Yvg+G8*?DMLC8F={6>RqdUc?msbBYj;N7>I3Z zojvSq>`e#1JSWZ)7dEf+!_I>RUa~!JhEVPky^T=;Y85(LANleFjQuYaxw$GbNTuT; zDCOX|fiW>CXgqvU{82xy3+C~udX^gB2V6ksg*yE;W}VQnnM1O_zBWydny)>CZMb4w zygQ)K`1psjQxA^V0JEDhkvC2w6xd$Fz5r z{X{EeEhtRKJaptReu?A+d3AwA|gv*==-qRa9qBf3*f0-k*Q5 z`{=+%#Z0Z+js98`9@c9HfUxxsXg9?X6l6b3>R`O=DE|HRb-A*2K?FGPnpGZM-It|WITEj*j z|KJ4$|I&}XGn)E?YfLgwJZ=WEE7;SxW!jAs^Ye?TG25x^);hU$a7zT2BA!fE!E`a% z`|T74506%Af?yo4n=Ke$^WTH=r!8y}&$=pG_Zd5PwK^Wy)$6l$p~Gx6)0@){&%fo` zchD%4EtwqpP-Gl{A;~;we7nl=C*0Rogn3L^5PZ1eQ%CTt^ew|A3B!aK%B2c0k-@(B zdwE^Gf}1fY$fsPU%&^!k+`4B!Aa%LLs&Pq0(&R_=M-g|>mX(Mwd^3`=esjsdu(+wt zIYWGp#}bdi?=%^tI{IY}B8n4;r$)gF2TyWlB%%FNU9f!bwN&nY)TDhC4BEF^zYz+F z0pAHJ?2PZsG2T^pLsI_=XR7ChToF%ppl67J|F9*XFAVW$rsKo1!~C3U+3cA05ry(5EbX!a;>j zH38-nu~*P5gcot8k_RNRg6gk zejXSvmXLP=0y5Y-OvI ztncIEi!b(X{@OSGDiv1;Zylvp+O)h-8A;L8R_H}EIEsib-51A~k|VKg3KiM1LB7nE z?UEzfMEQpp|4|*`Av8m{6ZjtT3B;pe;JHSVI{lyO?Hv)`cP?6=I`la25(J!(yl*m_VZHpbr;5Wc z=GpBoJ@uUs?pxF2h)uL4FHN31(^+iU4w3FH@aQ&nQ$r8hM{kq*h6RT6WQ*nH%{pc? 
z(~WOK|2Enm-k7hzm*c%L+RyM}0yfCEjL97l$C`KHEXs5BV*=%5P1#o?a|(Bm`-P@{ zSFnT6FMJYsiOr#tuQl3^w-u%D=A9=cZl4`6mZ*l7SkAb5HceOAF+Ql z+p*~NU6lj)5!M9Qj-jEa1glsLw=`C7Rl|U9rw19&NBYhQ7j1hb-Upp7XoHB{lp+#N zJY&W8jPLfGH3wc5!uL(EK>YP#B4wm`ma-z%l#+UMrQ9fuVG5ON-y|g9NXBlMCI%io zyelr3_VCnvJVz+95#tQV44htnm`%M(FV+o?DlsoC4kSQc{OG&m^p8iF2h-LH0613u z#n8Fgmjgoh+gVr)t+jZqlxx%v%r9`Z@DpgS&VQj3*F2jPWa`Kjs5P*-QQ4Srb>=&w zd3#0D=`G!G{iFXRq7FK2tuV@3Y<|BNXyJW7Y>Y1tnFmQqyc8#6Cpa*jTm305PH}K~ zYB5dFWc|Jc!xM2p10md_Dz=v)B%Igh(OG7X3_rHfR(rwXI1~*QDiX$8FVlXHg9(&x zQtc5F9&c>KYQ=;kztNpZ_WoG>B^Bu<3@i*VPLdh5ffsjW>Q6tGzVrK$!dqYa(=st< zNiE7-zx{i|^IW0#y4Nbku(d>VbDkV*X7JB zv-J4F(H8WwQgJo;-bMdB^10Ph;MGID_y(UylNJsYwS=woKvb%dvIy15%%l5K@@7_1 zJg`4oV#~c~b4JT=b8tSks)^W_NYo?~c+i;rEvaeD8f1LI`rA}5m7+(Pjqw&eNA{i$ zerrmD&qnRETB^v0FfO)h${44PR`GvcIt>h;AB3>s;T9uPwh)2z+@GwK2x$xf7LH@W zTW6U#CW}+Eo&ZQ743D!U9tNm;UeHY4A5JX zY^urOwlDo3qr;8t4uA?U5u6C0!>vo19{w6~1-d0=YRM}6&{$s4>%BshBs7*lmioOtKbTgubaX{VWy6~ewq92gKazoVzssk#u zD&4>Wss=Uuv}W_n&{rtr_g`u_iVh+w%Lfb(Na|9=`RmoDsX6~QB8JtBI!3jSMx}9* z`XSn+$oBKj50+)KBJTk?9>eF{@oy1mLAPG1{EWuB#I?cS5mceP$5_Ll_LGZ8QKv7K zqcg*?eouGB#9|mNG1Xl;cIW|HNekQvAt$#fPcUk-AqKiB`zE~9ModBliOe;RM($>- z;8EO5%)3ZnT|$PK-{~|lH{PiaQ@#rbG_rky=XG2SuH|s|=0cBg`p1*c;`>Xi{60aO zp|O3s6vL}wPiBqo4QHCGg^Y2whqZ_3CT5N7Vxs4BMv6YM+y<8oKt+>qVYR9VTVgz={jgW6OCEr{%|H1yQ z{}yXb^4Wxs-r27@-y}xVyz5I|X({M)j@b>6AI{^C`vd+NnLJRdK{Sr}RzuRtNbm1k zVAc;9J{AZ+Ab?{Wy;m|>N=p)~hPk+wwv;Q@L-?~ju4`|vNU9Jeba)W;{g{nZ@1(<) ziBMFu)KX87fUi^FL9i#nDNAt4CiP5P=OVG<1TH=GoT_?gaRe~agqqx;eJuV9B|B{v z7F%B@xX7SY{zT^)r;(kF~#~Q<8QZZrvO$kEO1OT@ z<;0&1u9GfT+#|z`C#dv39uJ(xGgnb?!NgwQanp~Ft@3D)=#&gmD^kpcL?z%1{1UgJ zh03J;#Pbm~Hk-K^cI-q8l_HF2vmKWBi1Gw+GIuEmzY^~NK$SlTZ*N;136Q^*8s}SE zTkA&f&z{{7;WN|u5uz5FTy{fg4hlSW^0S?F?Feb52l^8ERldRjseXbA*@zV)*_?Wc zGE^-stsESNT!s(p8w%vvuIm-Oh~Pcp+GTQ{0NWdXyR%gQWOow)KwG{FWi{&5o5@@Q z^JZ}@@&j5=-)Y!7BmtN3

    yG5#Ws_#T*tDmk$;7H0 zC}s5?746vtF$Ui8oc`%-1eo)5bQn?>A#N9MU+x!H?3@2$y+xd*>Bu;Rl}0Z> zwyG=S#~6Z;(R1{O!z9|mJ}z#UAN_&hK4AAgjHXLI%uY*z06=kv5@{{}%jTPx+ts$% zqIvC-p4(9m)MCrG1Z(YDd1)y7T!T-5qup-Dab?Q7*Qn{ty555vnNsn|9BH^`2Vo&% z83?A1uq1|-zleT#ixsW@3jAAINtH~iw?E(G@ZoDK8z(b(-PW315uUI1^o!PDWGtKgKQ;D8lr{l+O-B#D}ay#uk zj-QW+NKuw%3@vfwzy{&uGf2zQ8pV502|F(PPC>lQOhzPOunS>qkc6jUXsBBNr2{fN z9fM^Wnc)YDUcsVgO?~D2^I5J|1&(V(gq&TSG2Ah{wZD+lX7QDW7%@0o&fhyk3lXL- zIK`!t3I}cojr+ac6(QFM!Cgb!!;{t5x+#9|v#Re#4c&U4PS4%YEaEw`)^aPw^+AD_f^!K&spRkiMoYOhXYw90mYX(ylf0GQJ}p3zxA z9>M;OjElHlwsPgk&h@Ri_J+3xgIQ18=#s35aUMfOk?{@KBYhV zmIAoACvu8DiYxrZrSq{6$WF37fHBW3P}fAekB7RoM~)hVte3HWX`(OLR#_riv~Y}+ zy~Z-wHU8|RD`f~aJg@aUe^uku0sZFFlXO9%xU<4C43v$F33n+HqznKJNQ!N&R=NsP zzK_1HR`H_pg7>%khlv3ZrXu)Jkt}Z6X_vvo@o2sfgEnAjVbyhq5;A0)pBxpjf(m=y z!svY!_H@@{T_HwuT3@Q(I{9Ot^}PD^d3^chFcj^)XV3bacjMLfqDD zFkuUytl)fNSGr3+9=4qPKomD!86p@A5u*;+50Nm=U>)1Htrw3Kc~A8QV0@w2DD_I?1=DxDKxU)R zjFf03wI7jx`?6tRzQd%CBt#`dzyS>#{Dsd;gaX{m;O@W1uiKH3zP)Tg(VVF@ddyNZ z8tq9pij3I|PGlHat(YcjoFrOK+39+8)ANEf!?PCIrN$%KI|sh2U^uz@6~{?2_U;Kk z>7em6>6##p^dN2dl&8M!cG?ITOth6w+q8-E!lr7cmbEcCMrM3qwfWDI|DaZWltdO= zjtNfF5Tj9{6b(^8DsU(q!>t6Hj^d8{)G0&2vU-|5@t`axA}HRFW>Vpy%{`gIczh5# znU5LL0;3`$G+ZuCC$yJ696>i`jh3_G%x4_qOZnWmI2?wXABfJHIIe@IdKE};fAUY^ zEZoS)`{UXb?OwTwQmz)wdHr=jbnxDxR=4vexEcj+*Ow-GCq;O8?d<2DHY6PO>)Q>^ z&dIufo*;VDA%ky%8-JQ8eRJ-tGUzRdscx%eL}jR87?8-EPh%HM7teLg+=RTR{|Tw| zhjh)LAX8)~G2t`dQ0zkfPTNH~InI1O-H3x-s-#wDW}T<2E|K;;>i2RzUp{L%UpnF^ zVc^b2!U(Nzj=9oWToa&3s8b#NIf&Vqirb$iDsnkQO5`q(XPpHNWAG+~MAUIaz_{5) zjn&f)Mrp86>_)h0)^$mS(fYx!vG}pNRXOd&)$nM0An>bLsT(a z>yQ$LZGck7O@{Wd7ruPSY8)l--a%;mer81w)!xl6)zrnYbzN=zr0SNjl0_dqDe&Hw z$eXJ3g#)F=zR6>!fcM8kmFL;~V%?9KlD*t|R1%J6ylVlkr@G`jlt$B0Oj{`(Vg&K_ zjhtemTg%~{OT6GUyx5TzIOMbi~zP0S0zMs4YP9yMfPN;o;6KaIgQxT<*N=_wXdk>$SYJW!N z8zWLs&$MYMgMsw>I`Nzz2zfEdn;B_qzB_yj-YX0=)S9XOPEzG<7C;4>kkO!RjcI?X z!n{MR*P`eeieW{munO*dyuqp)zb*iRVQ5L?&fy?@Ab$%mmfB2fJx{9#S24$*k}QFy zqlpZ&_-!xOqZQhQc6yzL8#@-Ngt@V@l1Uk8HXEz?ZFd(d7p^uNcCOHzcB|h?7t@)x z0&egHvm$E|$@o;QrSgh3A*4lewJEhS6Su5ajyY&W1{dVDzX_5$O7!%|0g>@v>U3n- z{!d)D(UJMync5W1DccGFFMaV33hAo2qOx$G_u8q>K00qa(~LCiBzl^L;aA=8utp z+F!p27pHdFfrD(bg9~8USyJ8wYlUF6o7=p2(AJ@uShand4paCirI7u1szW$o{rc)O zB!%JF2AfP~t4hk)7Znwzd$JDEmxB!%77}zreMu4k?@KtrJ{9koaAyE}UG@WNsJ=I# zES?6`kOd_ye@6nkK=mEjrsNJx5`u_F_Ni52a0zpTW@JT8xH8Ug)w(2~_-;EaYT-{7 zj=Hk45^+>h#+3Kx0|^nZ^{qg-(Clo~r3m$tarLUh(Tpja|DCzy*ehW$Bl!t~1(-NF zwrXvw5Uvk94;JKE4-lui2H@wrGtw~cYS+6@Prw6HymqKdj{x4vDBCm>33UBpa65#l z*crdaPMUsSr8A0v?SnWP2|?co+`3Mx80)CHYl73=d*VZ4l zMH`Du`MZ>mYu0X}=FA_=;9X(*HU#nD`DRDyjXqsWM*8WhQe-e}(%J^R5YCgXqKTNd zJ?oA^5VCIr<9KFdTsg*RQnRj_l228pB(2hOXj0g?HBLE7@ZKj2i0z+U7MUi| zVU~DLm+MCXKCZ4(7LC31w&i?`J0cStf~-^1(_0_?YCEnMzilDJ)LKdE;y6r9B`+s9 z$Vk7u{S(X%BlLA^^ieGOzNtqouRnYb!?us7FbYIM8|8t2$4tw`UJkTC90zZ_WS~V@ z91h&xOV>e!=RP{%Gr1m`%3b6SI28RkU6%HnYap7*yTYuHXnzcIbVSs^V6eNZD+?&V zzT7Y}mH`?15Q0A7v&QqucNW<#Y|DPFJdVm0>XHpibj` zs!vj6`!4Ym?(nO9!&YKq4Yo@J`|| z9C7p>q&z2|+j%;@fCD^R>L*vhXthdyl`lDvNwP&uWM48Y6)hR@uWVfDq(^N~(8F22 z8=5?cWjKHTw^o7={-aVVT9`o3^i*}suh(a(zsq3K#-W&pYEfsUGBtiE$;Xg^Tz+tJ z9ZQ;-x75rG$#_2)*aMJ(Ve)vqu{3wZ&RP%vowt=g_ReX-a*=|AZ~3w!E*w7?H!xmT zQ?55!Eehtk3al9d-OZyjatl z$S4B2s!sGmGbQsJ$;W9kb{ID)NaR>>yQ{? 
z_+)$O)75GjEgya$z-{$}Coty1aqdJhHz8esXg{)=q(a{h(iPsuMHI&8NFZM%vWd{+ zCZx3h4f%}9G)8(EhL+d$%vpi&Er1{ZP`;q7%CPg817|DLkQ}cs9551s*#iJTjcps7 z^a=c9td#!hw|+S`h)(+agO0UlMZ~*n^vpcd#xy+JzrwT5{%Xc(tct1g3Txebd=A2L_Uk9JONN)yE7tm<#`(%N`pu8;~`cE5MF--lPz4PQT#v=`G|wAh#@R#x22_UaAdIR5S> zkXqHH10MNjResFm0|TZa0m$g`kOHV+o`{5zsURi0DgwJs8)=pY=L!HP=wSGG4SBD!hMC7WbbqOF|if zF#&MUKoF!d&*y=MSDda+CL5rfdyh3Zc>m#g8=HSM=tv9R4lmakXftQl{?_s5((=Kx z^ZiSP%@T_DK_lo3{!%^tt+j1Sqp^75dFTS-5ob;)4O1{{b}n4-b6qP<7Y_I?$CnkH z^Ysk)`_I-vMU2pv>^x&spU1jxl&DosnMnxurv7{v7tIB1IvY73{z?Oico7(GOuSVU zE7+SC6`WQG79D3hpibrE4iW`$y48pL%4%u>IeuLg|2#+~-pJyDH+q*GAS8oc1f%v- z)FKY&H57*{7$w3|AKVKLeB2vA!&4YVUVRk(1%e9gM@%f63eUUG#}98&Q)ZgJrhsLR z)W;o+Ju(+>1_){3UTn}_2)>)EaF-_a!HltnUcW^ z8z#<)z33R6q@>JMPwUrJGR=w>apgub4i~?FFIU0@lmRE54Nz)$86+ePqc0CI#A!6B3w`ndB0xXWOZVpS*`Zl z{oe8Y4U0Bs6z*%_x?ffAku33%>%-ZVdfEf~wzm}0Q z2^!y)9&hF_n5Qs9_p`*^=1@naVlKETeU3&PbUNO4+A4(;#uOU}C|K}Tgf>GNI*-+E z8_UVkVcxnw3d^8)Azw1DpA$5Ega2aVN7kW^b`+r)sX5T87p|`fkvum&Ug7M}00HZ_ z=YPvtSl1If>6j5 zpnSZ~VbnpPOrXbAf*Kdi&{g6zAP@-OR7lG@}-(sIhM+`2%PS<@z14{G>&(-5BDVhk02 zm{Bcca|>T?bU(H`8byN=P~+!=Eo7$wX|WTG4lwi5g5nf-SutL{A?s>kvy`Uwq-sG? zEuD9^0hcD=ePrbWQ1`_)_50_%}z$+{vr z@PvCI@OyMmIdJ-Oi!~QfSwe7srN{T6pXw@P>9}(_E#HKAU=50UzOkc8NXJ11n?L1{ zniC^`I*~10A39_k`@m#-3<$f92{OBz6^1iGvw~$C$WKM7I|xkF1M+=yaXcF&9O2N>yyywKG8L z$mgG2txd(ZdaZ=)k)xR3?>U0qC~1%NMm0y`@EJrUa-K!|?-4^mdBCNGfx6)^D_^^| z%INOa^E#9;zjEU8gJNy98}Hf~^Mqu#CH|4cqg;`iO=iFr-OJ;L8c+`qI;ap7iE>}a z+-@Hf{(brET45{Ru&p)dqMX)W0{0+^bv5c8kahQXQVQegWMy1gKd+CZ_bWKfDC^Tp z8qmdp^+|Dv&Tg!JRu+SK(8xMO_kAvzva^$@0>C`^&judb-2nYp z)X&TBcsDvk-b$jg8aJDN05=e+WXXi%Y;ydb7(7e2OdEC^ICtm5y6s&_DK$(^ zS|!;4NEtB)JE|zmzWy2cVD{ zdJJYd;i28L5(rt0)ez5H&Fg90-Jzp2>a+HO&y7z@L?8SGi@p0Rra{Aka^D}e`ZCO> zt2S8qlc--KI1axzdS6T23Fq56M&^&MX?*hT@dFUv(w7deV0-7pi3Mhrgu7N^uDYat z$qjl_uPJ}_Vm)QKy~xMRm^B4piW(Txz&Okrm2`sezc!qw&Xij%&TCsg)O5iI}M@J*-`sF#a~Y13ln)AA}Kyie{viyJePtAuq&ZT`5MMSLDZygZ}CX& zQ|;0IDoaZxqNcucbFFu2pq4|3Vy8) zqfOQbk=3{gXG<)Ra9;w^qx4@GjQS4i*)aQmk)To8$zWZR=s;8>&qN%u!J?bCGsd;F#TNgDGDRLt6y>Wx8#@)*Fk})o&=m6;s@olMQ+sR}1(O*LlVIfB zMm<5bhISJ3Wu`($kX|~KZY~0sPXWpFxc0@0Z*_3??WiWLEn|&X+`FD&jczki;}Kk{ zH118!CL6yN(*^90FPtc0b)fk!M3HpE4eAp?OHN@Kw;p@*vmtQ1>#o+`6@+`DFE-U- zP1hlL9wmM|?KPyKUm#w{C*TVgzpesRBU8>VN8Hu=s!|3_3zaIXu`bfk$%%XrCq5=_ zLD9!gzNgY2?sDI~YDu1%u~P=r3t!8vz}UEd4E5P>IH@fP*&VO&4pKK(uwzWy?=5Y= zwMK!%&9wXsktutAsOoeADQc{;^HD3#yET!aZMbrmVR-Qi>$W;u@7c(ZGHf>xY~NUn zzvE2%)5sYAFEaVfNqky9W@gNS0yAL{sFPo|uP_l~I*+thm#X(mSUR0-+3-tXE1Va< zF<5WiPAG=#TOHEaIN}!)8G`G%Vy@b2Qi;hpg)cY52Or^6FT&I|V-MzbF`EbDJZkwd zk>&?Ek+OZH;;N9ZXc0LlTe%WFxAMu@nJ_A|Dt9rIv|>;nwZ(FsXLl*bc3wEK_TqD0 z^dp%&vvWV;3YgRpBUyBu`>&X(R(O<2%3<_I)@ zfWfJ!J%2c)oJO0&jfaBhv-r{BjPo1*O574RNoti z|6%jh;d-~Y%H@OujlX?O*NDLpyZ5CG$BI-#!2yCD`W>EK*X+Qor`;2&XNNMX_ao!c zQ;i?3jzGZ$xNQd)jn(yzv;7eygqxpuuY7MoH}L3TJFaYa5^_v zhaP`*sVeJc-3^ao!KBQa!W?T8j!$eLfD*u2G2pOctPL8+RqSKim*R&3V?O?v-~=1h zYIr)VvbARX6VmH+_i88mWQkd&r{yUS(ogwq3XK55FSjGXIKnKac)*85ARy0BBhSK> zm|MW@sQvj7gV<(l8P*pXHVhFwTv*bJKC;5qH7KXgJPN0>yowCooK-%7o08aUzIa|A z`#6)D^k*L6qJL# zAK8Q+59ZRf*89VFF=nddYVTTB+&O8ZACY+6COon>^3vkL0c@G|uox@7@ssBf5y+jJ2!){4oMkUr>yzsX>Ct!qTA?S*gwZ z5<~{|q7exn?DAzTjvZys&#gSo2L4qxZdMF4veoW`>E6MDw$tCOl<;;lHWiI47LfNi z$9Zg1EBUuD8JdUr@UJao02Jqsu|=Ub89;5Va*NtW=KW^_+ar}u1J~zpejrVXFVK~cJx=|E-A#PrG~+isy@ zdNQOzp!(ObZ#JIkSC_iVc$$h?Q)7zkh$r!vT^!aK40S3bS98LAfjpYRE$-F`zpfy4 zWDZ!xNmc?Fcj)iCKvZcXh8<%qvnDtFg#cz_vRRf;(!=j9oTc^WO{5&*u%0MbE%b~XE}X5PE-BF4C+9hme13^n*kacY-1+3`BX#u2-xw+{1T!Tz zf9i8As={rweK=$<0bFiw?$%SiwNWDL*2fq@HN>BQoIsXhUOJ>8>+rcFMYXY@2+^O> zpuUiv-KrDQqtI=ks>ntx)tp%PmA( 
z@uUDEXG2Zq?yT?@va;@kyKa;yQ?&R`PCt$Jt_mHfn8l%b=9y@&Rk;3ihi}k>Z%xE1 ze_F#P&@eSADUi*oHiaKi(SsS#kXV-RNck3mqm?6U3l2Ernpq$@QFAR>Qc#7K#&`As z7oK}Nf+hHWG`$5|98K4DjWf8rOn?A`g~8q3f_v~VxCM6`+=4@J_u%eMaJL`96PirjITI#gyd|w)j{TNWGI1^A8ZR3 zbqMqZ4t{ArVGk_+;o!z58Sp{XGc#1Ql^?fJtB+2fX5r|Zxa7J-M&LfM7c7$`k95s*AH1wGZ!o!WPWZfr*(mZPF_$$E+)&|4gH<8}%JgS`JVH#sE)&a5 zPsex{a>|$OI%q7Wwr)~HM0yjqS8^!73K1lk0#|KkCsAIc>|SUQY1-6PgKusW3;h?e z4F97{GeY>OO!&E;PP_kzzv+8NW7eZ8>ye4PSgmqs8%;Na&SHLPh`^oyOS_*?$6GIQ zD8ca^2&LWqwCUAyl*WaL6>_%tPdX!oHYw#2ds-Fe2t3yRJ*6Ks%unrFpm52YTbj`< z)_f%J^BUm(Xxp50@cJs;PC9WZ5%|{!hnb~ta9dhe=X+%wgZ~hwUCOzkb{9Nqpwhu-vjtpxH!f7OpEG_we z^IUFkZz5e%qVq51TG!R?EtM;At{pR(YUAZ+iX6qamE}t>2NU2si*o_UYtTom*e+P^ z8UDFtzLBwH82lKkB$FEva(QNjZ^aJj%CGe}??q*wv}K59ac>IdK&F@Nbp(~Aim6&C z$!CQc!34R>KnOY5o}1GIX@8aV%EB5hM3E!Cens92yAlw~#>Nh5R7x`I$) z#VlSx-Y+YJFh~%t9{9Ry)kl`nIQqt1rO!k98bcACG#C5M zStrw}rZA+F(|c6d=8(YPKkPPP-57@0wtu2e8BR76 zd#SM?E7wj{`J(4tq}Bx2LhztgYLWr}|C+O&0$~vri)ytkxrpfYfO9&(surOK*z#-d;RUSE7LX7#A(} zq44f(lQqvcDZT{m z4U-cP_H7D2uaCAH4Z09~OBDk^YE5fZUHVp*urDJz`7#jxW{EiCXnY}^9DOwzmfEys zahP+b5|38a31?&nSKi35mAvbnY|)I9tOH8H)m)rk_S@NeQ=uTA^MQ79I&`0SJ_)h~fUN-T0u&hN9F z#B+|lu!v>rDXlz%pftG<+m?qdHRlJ2%N9gk`e}%!{Z3W)dR_<9pdpTKrP%4)d4_g5 zshH2EIIoTc1{9s;em7JlJ&<&&8$Fo9I2GBNNMHB94ULaCQ3&fPseWY6_C+IVj6d5R zjG}8%J-Ge1Y0;v($bBQUI+C)aL6DRn&|H*My7?JRX8i7Me$Tz(0sqA9bdf!wJgTN> z^IQT&Es(ptSm={MhsN<{dDjLbF?bNx1pURBghy=X7Fj5%R zyF18#S_;p z>$Ov0SE#8?q~R9{R?iiZUG-mLoB0oDK-4EQd_MQz8R)-T9<~t1*5<6QQoj%4#zF5> zc{7#Ki`1-*PFzVakvBm(whp=_G5r(Zae&zPo8@)0$P;rW1qIhw&*2kxxf&*r9*;w!`$Lhr=y<35&m}0l0$WFs<7=O% z0j&mYOFJgTteRqPE-bllChw@dqnAsdRgmIo0QS?vls~1tVLMeZgJ1}i78*Vqbm)L+ zzmJKv8~1Y-KS`4f10vn$E}oy8=oXq&$|4UMc9$y0GruP?6<<^5n|KO&*Al-E_FZmA z_0w@ys)v8rcy3|tbrc(&)->__x5meIZC@V^Mnh7l8v=_M_H4Q2J6qidupq z--rnD)bGQIvBe$?y3%No*nbl7HyVv!EvS(3&C@=m*{Z%|Q{s3!c}zB9LnNr@2#V3Z zvULb$R{Gu}RBwR5OZeUCCSGi!&UYn9I1Hv(IqPk^gS*bp9u*kb9{?#o5fT+^b$smr zd$N&J@ZrW%h){ZQEA(n?w$pToyj*iI0C|)GW-$Ts!iZC30G@@ruKD7%R3l+L!vygt z#{yO})wW`>NLpjR z(ih}tf1Sw?aJ|ISfOFSkDJ3_QSIX7}%HSV1kC9pZSfLdCSXP8DgGyS>r_hSSNw+-A zHF>tyRo2vvFZ^(=t`7Z!#muqZDPTvO%gb){Ug@$GQAnND;*)QrwJ_4gfmg9?RnRGg zIxHe5V#t>IBYqF}^Sq7oR0(;TQt5)mQv1^16P*A@7x z^GM8dI)Z^@*ZBIR6vYVA_hh(CKX>^v_t02FRzn8D0phNlQTv2WsK;vE^sB-YVy&J2 z7v=Lj2?R;F*fUE5VAQ4gPFVH!ecb7IDt*ipik=e5_-UtW+>I^eXN-i|3zcrWDRoDNb!khLX;E)Fd z%iODYFp+MvRGkinpX>B468({T-0W6p^n4yB!u|tEQp7j&!u@UK1?;J;pp8Xo#ZYAA zO|X0&eh`%v*!3KeH0f-VnYgfL%`Hk<}*MTI<$GfWE&6N{6 z3H_0cnlG^tND4O1_M2=)xJ+m-*ERTmi8zr~HUqwm4VlbRr0k+Z5sl(Gl&ld|KO6)b zK~)(5cSZM<0#1Uq@q++&%{0R{ml=FgW^v{+bQEVC8wd0iH9{PkFKgz_NPw@1CK#fI zsnpTw!wERH0A6CGK0p&Dhj|Z>f&|l{35uEWkZtkJC{y*a zbXoeOF4;AhfjHw79y{?~`z>D}GgUO?;xmIk6z}180#Z?#&v?zft5xWwP4{iZ*1H!M zuYcXT;X{w#bpbHcebaYIvAsEY--&_QX{m$w2jqL2M7o@*+c4BSgUePI1+)7le+G(@ zwE~Q;lOQ`VhE~4OT;niqdhQD}#p95IwT*}*9t+O=H&YbT%&A@8TcbUQS?aYB3hc7C zb?ZM{)?At!4jzen-uF7HeqkIe`UrY;9fXZ1Z6yziCczyeG>#~{?RkZ7I=l)`qx`;}iV^C~YHOO^)qUW?))d#uVEfM?1aH&wgP~(qyQ-AIp^00@Ejn zfp0pCZABzPaF9gk#Bu)Jd#TE`W)~p|cUfTc+=rhfgZr_AWUj)iMmvDZgIgmXeeHW+ ze9$d~5z`hbm8#!rzC2&D zzSTAEm@oj0BW52ihJkW?kJGC`ojk&HDs?0rfH@%3 z*Q!spow|6nhHQ&<`e^UH1vP}AYG)Ff63 zW#9DZ;%4yh(9V>yLyOz;EawURr}lAT8XOzd54ax1j2Rmm;<;kQJjP*2foamoh@V*f z&#kSkAEFS7q+qjfiHC7v*p04o(=uNTAC=Esg4Eta{ zC=NeYGNzQ9XKm+%Z)5P=`P@{$S^9<57dZ3~z_>~mwx~=jB^Omg#ufEPW}IAOak6jr zaNaN~aFxJFLuNr{j69!kjbm0E8_NhXdmQ^|K%&fI3gE<};I=Ewc>w z5bcxt)oLomLwykOwbE<>Kydm#*LD@yJM2edLP2-h% zL(2e@8%>LaOn09cAS|5Mi3$3p1DCsX9u?MCv-nDqT4>xcZSDS5%9s_=NtY7NTmB=a zf~rmf=E!WV(U@&?GZlk;QoBJ({b$85^+g3GKfcy{9I5+jw>w<1P;5IwI^cA4t$how 
zN3ewobhS8EpjN0dAPlshtn*Jmy`O`Dl-BZ&rv;YuDLVX2|?>b?HhSx^_8och7WNo z?W6O(-%^%Ke5S_RJ<^mprc;9VT|xNoklwntry8ykn7pJ+h4bh-N(|77)ZLC{&e~AB zYfxNm6XR41#xB70h(OtC1wr(`G!O-FJ!6HB&%WXYlMx>jwOXhO*B3V9{5oP@xs++B zVY@^u#?>-M-ME%tE|2 zD{E-_36ff(keu+cm|nO37Cw2l+Dmf=M}%wW8^QW0?Ij{}-g9TSgCXMrh!WvPF0|n| zMc+dNm{}lhkZUSI){PIb#BtI9v42TtyYJycVG5sZE_+qXklG89fJZ`M#yCU-?pYP- z2KmpM)|#KUPF)0NgJIR~@;jPZXz3yN@Es~muy;+@=^yJ+RBxQ2m&sPL4k&>e4IWV3 zO9g+NYNs~tzld%_i+nHcDRED7FxmEqj$L7ja~-=l4sP3*szxCEwGxZOs)zj$mf$#K O$8VQSE)^I!hyM>X&+3Q( literal 0 HcmV?d00001 diff --git a/site2/doc-guides/assets/syntax-6.png b/site2/doc-guides/assets/syntax-6.png new file mode 100644 index 0000000000000000000000000000000000000000..aec5e6bb13bf824ff6cedf273a72f1c8b88c5a95 GIT binary patch literal 315928 zcmZ_01yCf>{ZvkU-(21f^j{8yDv0t3efgZ{5F7?>V=fdB6k3n2bo8?vAP^513flz%nrt2@s9WN?mObzH!} zu&Mrafy<~;UV(v$fXPURs(XT;cUvSFNT(g8A28W|b^=-rW-|l~5C{eiF;&L1u`uPn zR*N?!3s1LC?#@Lv94_<~?M0(7BlqMa)qRPhT4m-|o}gjOd_fNt>KouvP#N0jD(pt; zV{0tE&R(#K9+Oa}o+HTnVYAChDNn#l|8VL=#gR<{@SkX18n92hzxnINWDX7DKamwc z0-O^m+7p8e^&iPmm<$zq_62T3+dWj|p9vpA1xz7?eDgo{L+Mk-t-lyg8qq(BKqX5E z7x8}vF+22(4!ZV^c7V_s*?j&{R-a%*AaY9gk=ec0c6ZHL3-oM1d_ai8iV=;8?>{#9 z?6NyxUCE0r_YUd zp?T8KJ+nLj4n1p&MT5}3@$hHfBBnE!FUe}p?;l0?kuX5PFhC(g<-d;uoY!)DP}~kH z3i7TJy6!P9o?~bnZZ?cT$#)ymehueqV>k;^|G1F8M5MF%q@MhKa%pTCFOv<)UP+fQ zkuziTf#w)j4Z|uj?0GMfN2cNfZz5Jnmkp&TPS^PVVV1Ya5)_UnV&#NrE|Xd1M_lrs za^wj>$yWAg9*tmp+Tg4uXP5Z&)4oc+Ep4X0quE|aPwabpF8^V06i^Lt0Wx1qdr)R} zWLF(YfMfxlpiBIDstH_L#e-Wckw!Srw_&Qi&d$!{yWf7EGhALX6Sgz1(avq=;R!+< zLPFC2IA96DEqtPkHmUT%;bGP*)_zS8F!M6WOhpk;f?y?1xS}_F^414oS+WTtYG?Ijz%tB-ABc{&c<}+)cKkV^JfPD?_>q>)PS{nPypq6!hXYE!6dQ~}ux}f49f}a~Mf_)*4LC;?i62K4wUn@jw{H~Pgufic0CCj4 zbt9OCJtsYhxl#g_!I<+>!aC;nb;j?obMiKG zhK%KUX9iyW4Hajea_>KyA_r7(0uB%yTY%;x5@klF5aSP%Km@}L&ciF(0EZPFMG3-X zBmz6Z17w&GgykqE{4Ot^w6AL0zQIkV&pRN)hhqx^XVao*kM~sv{F#i2=&Rvir)S(h z4(rh*0&IZPS^WmzIG8c@NxFkH*(pCg-Qr0`npZyx;fByEk^jTqQGI)LRcf1I^{W?xMD#K$y3or(<%5r2h5Y zlWcXF5z04&rd)V%m9sn`fvCQD@p*I3e?9|Ur!6Y;us0cAXLp42czBfRGpxyidKp;; zw_b%re)Xa#@s#_pnmnJ~r7sW^yewJIb%zXg_YoQ=ht5saqbGjl%B=34k*6w3KGt-I zm)M8&BzVrX<#Y0TS$*Kmw;#By}Lk;F2!WpK@CgHLVEGrp~qyeQR zYZ7G2Fp-{~ne>||aVQ%Qt;V-R!RFaOU5$0TaC2vP1Dc;!KOwj22h{Z+q zlLH=nTzeyL#D5Jow<>-=XK@w2t9m&sIguBl`(8nsnw-d%Th*Jb?AR9my0dE3lfCC+ z_bSHA)7!g&D_exM+M95u2u#`82(r&zt7GnoSpW6EZv4}A_dTcB zNp@+uH8gC#+?{W|vwW<^0dcM35H+W_lO|HzNPS`O*yvo*rN55oNSxACgW;oxPU=@u zs`vSTlEo}MsSP4M@Pf^0SI$VFUX=^X#|zf?chQNxG`e$dEUXz&bVvpY)-WkI%jB&XOW@uf2xUP*cbJUpCeD3l(sqPm5j10mvEk0;Ih{|6%PyaC(EwE9`3e zgnVNOzQ|!Tb)uss1G8U&xw*OT?VSud`uRT1IElY^QN?zmoQ8)lmRpM)^qx)QQP3ca z)BqASz4?#`t`0_Rof$H+U`z8d^t?d_li9|Pn6>-T(1Kh0MeKgx-XE)<4=#tKy`5l&pcjfUiNi%3}& z|JS07cP7wz9pv?r209}Jhyo5yFF6egGhwJ8H0nVTD|Tq5{UOx! 
zu5jl8E@421M5sI_ZeG;H>z)Csz+fupIt3Lh*&@8mo9B}n(aAEb!rhNM=gq@-uVu4$A*>|KTPS;*2G=RzPV#xs zXFt)+yFnT|zwU=ih-%z<5~8Z3*EQe9r3ome#aNFJP% z^Na5u}75hFOfBkP@;{h4ef8&b}J!{PfMJtX_$Xj6)WNA+vZu(MQhwZ(M{7mf??9 zt0+(l%nK4{Xp~$Y8+{VWKK}@xP%xp2uAW4Yv;yLWA)*CvLMKT*$^VA6$+ssQkvfL^ z07FqpPR}R;!$%#Fp#Ktn;;PI63s2Co#E_{lEw+LNbALY5*w8mD1%$GY>nwPr=>Hfx zyubua1E+=77|}#RQ1NT>GTF)E;G9~me=9)H;)#Qkxiqjs$>)U>WO^ZK&JxeNRn@3_ zMHb({aC_+c4p60gi#BzxI9@`lR|~2W)%|w13NY3UE~oK~&okl>e9AE7&ZqPi z2Y@Q{Q%Nb}sG@|8=%Ets5>AqWEy^a-6-bfN6mMKefSI}NGGJ5kA|v}GmtMrNr1+sT zlk9?JK{@@zr)nhg zaZ9U4&iI-hUEY8Ibcneb+To z$eVc7eX&g%@dT7)r*#Yx#u7iqf*t4;R57w0Szk+$&z*~FpQBPI4#W@~j{af@n_T7) z|MD>;(*;Nc3_%54qQp?c1ZCRf`A=4A!>KAlRT6sR{d4Tl2MH5oPlOsHke;8PkIfm< zm7W4QXrUFHgKOG9yRjWkkT=-;15DO*6c>IR8)wtZ&?*5gzr*>XAk2779wGm05nW=^ zCh69T;nO8zu;p9)#XmzxFbL7#62J*+Q}gk_eKKbKCa7pkv?969-DI!?V-b=o1KGvd^Y^j6IBnXJV;uf%(k_w^~9-zRC2R z&y$eE^(%Z)((Hly;Jt`B!VTH1Q&G-)c$=@5DHWwuwn91O;dY~B3$Nb{ib5xcywd(JVf%hvMa@~S_ zwFi&zS&80EQoTsj3U8KvnSWDoh(7limtM}%Z~kN7he%p+#HtvrR0Z_b5d0I=2Zly!lXKN}~%`sSZD9~mv{4n&y` zjD0`yvA|ZmHjL$oTRxdXOlIhehcq1Q75}TGnzXh{UAFCvuzO{T==5gt8d4+O4*xS2 zCA{ksD5Ko$&u!X`ZPa^5R&LO*ExgTEjK(VI+6vs2)nK|N@W>Q01dFr8*ACA-TaODIg=TJ2NngwM@cmO+z@8RC-~#{P8r@5vj-K^< zV^-KCiR49p0f8=`vr?CC6VBsy#h}ndV>X;wH!AU-L-_zwg))SPrr>8Q9v7ylt**vh zr0M-*I}RnX$I957DO6#J@;Y984*sax+#7F#bp(cyyTgV|>F!Fw3#i?(1X~C8o`*d2 znuz6B#r|0Dr}XsvvyL@q1L@XZc>)QGkd(#PvKYR%XBPFKZ^V%rF0=>&$1VCigMklz zKK42CA69{gCos6SC@K}TC}o5SqMX}=fBFrK(H&D4$xv=5U8@(Nd+v1dWxP0; zE=T4CSmtwTZo6rIg=#03QO)~b`Q?|pEJ21e(11>!0s;+Q6WK)t`XF3 zrQ_oZ<#g=GCJH~=M8zP%YQ_B|UpnJFCYGlSpYgqqR>1R~cNINJj%%36tbg25UrCaw z2y4yX3q!n_K0_2`;=G!TyUVoyFb{QYT`T!SRT-gw;NdQlR!k|xAjQP#D5NDOf~BA+ z%>xB=%@oD+&tqN{6n*2m;c}@C4nV3d$(?hXRYwq7%GxE=SRK>L-03Xl)yft6`>VsQ z9;>Sg9;5>)Schh9_ggs5QLNDkM@LwZI!P^gf0%IpXi&Mix@M53(J)7Pnb{pgj8{a- z)_nIVhMm9N)C+yP{V`7^Nd;89V>R@GKFvI}*67W2=h4kF7>+~()>nKjH#)EyaBn{( zs0B7R4;ztU5mWKi`QN{vp2eT|2F#xx@BL*pj-22H&P(}3> zvsi~X6`SU0ir8paiurb8YTC4{OSsHd#wVMO3R?8!bfQm!DG*QhbFTy5q1$jwP_p!Z zW-@72E=T%F7R z#gr1hJ=xOrY{2gPrEfIc~noirY)l-%`OT7TQ_y)~j*_Ei>w< z<|rp)dCrPclwgY=nm>s3E_{pqtROKg^5-#p;~^dpigDYRr_{Df-0^Ewo4X*DV8cV6 zOE{bSh9Y;CM+l~-vPdqLxyKtteCXub!oe0-m(BAF>1U&y%&3OxL(&pOP8bJk0|JYs z4c?qh;;8b+;lfHLrVNR>Aaojb#Z3Nq!_peWV$qr7=g`wJI}Z;Jb9>;)Z?PD-<)`m) zZMW@WrejvF>#uBX7B*c3s{|47xmH-31sz9iGi3$rNyVI@A@vC@>FGG=y`3Yl!qwhg zT2*(ebbro$weWkcA~de2$o?RuEGjJnvr}p50myp+X=$Ux{)B|K^z`o%G3Vp$SWOXq z2>ya7N(yB6sd7HwqIA{IFX%Tkx*Kz3^ps=m1*}_G6*Go1*f5j^y23oND@Ll#)hx_a zVyEXAqlBv$1kf9Bh{ntzspEsb<1CXL9U56$yY>KTqAb`k;{l642EOV)v3&~^gfIsK zqX`k$m$PkY@cC;CZYO@33_l~?=J-$7VJotFN>v~S;YVgRiYV4 z*wx$S%Xoj+u$?463n{&V#UYFAXLTq^(eO)V!&R+wk)4 zJxMmb7k+8dN_U7h3r)2{Q@lr(5F8XyA&Zfd5+2>f9+d=x-0gSc8pQp){VViW!blfc zhs2u15dy;SV5Fu8z@a|_qIRm*F1VaeGa;SD*B--{eTphTSzo~D`6Etxl<|(Nlm6OBFkQvmj zjyPa3hBU_l!77bV0?xfqYT{PoRfWH>ZbTRkhTB4Q1y#B+O3CnoY>%cyM(P1gB$;Kv zuSI_B3uw^DGNF6rSd`S(aQ{^6VgjwSB$wD?@G3m(>n{HsflC1BdHFmaRYbSBuLaH~ zJf3Jz>Icx3N}N8_H4NC^yuoGExr;xO=R`EMq92r%89~k34?u1?DB;NMIHf}$-1y0FI5J%(PBt1 zOf|v*=${!$pJ9=`-9Zg0F%((9iHyfLG?l@teD z)NP@CET`@3tTS2&m8Nk=@a;NQQ=3^}3{p)khH;pV$jlX<#<}Th|F%GjcPW7Z+xW#= z>z3WdPw}X}GfV+hK`NU7vZZICx**9w)qI?>2=xH6Xh8nnNc;l%+zvjAfom>Zu(b1| zcukUlu7u82AWf|J{BsWDz2fB{E`<8FD)nKnK7wW^cC*b25RRVvjB8ymOS*V{u|v0+ zZ03T$Z7WxJ%hGe6$tZ*}3Z-5p8Coq4@+<9VpZ=D&3N_P!R2~7hZ)kPqm;N{$5Nr6S zvb*6hb9Xda%$#_Sr7(x-28sjhj`?A}=TS;1s&wKAHYq=TSv1hTPr_f)v3Po&8zTS!8|#S-}lsPUcc}-GAo{do~5Ogd}pFy-`1;GUIV(UFSYj3*OsR7E!Ft`LcRjF$hO9 z+h1)(ouGn4xjJF2IA|V@Sv&9eTWdzMdq1R7uo&Fv-Y?A#9S(~Q@d%X8H&_(>!DIf* ziHFJN?W9wT6;}9?;|29*Xc(L_4x_&t<77qv&e0HHEBt2p^LDbz=>^PIku5J8UpNXu 
znbVkuR-L{RPYEGaLP*b&x{`Y{*HV%6%0`~wi##fdO|TpDl5ZgcZC$B85o&(0BXTYh zI3X1DSoU~QR``2Vhk)P36voY_uR>}9v(Zf9Y=ZW%a&k`v$ho8IB7u@Prg>R84&PQS zxakzk2J`RwpJWqR6Xz(v)y(X`ez+k* z0j}Baf!U*BC*=~qE9hqa$`DXSuQul?ezls#7kRHMyfi0+QKdh%KZeinSRtmS$(6%* zaH^F5vxP0)M|ABWKDAp-uQqTm5{+Ub*vn-jg|0dH+mBA?r-tJdY&{lb?I3J!9x*gm zDy$5z*Z^a>$Z&$=aQ~C6>K;h)tB@ww`8>2d?Im^YwiP?zO+%GjR+j- z(n3n20up`#ZtU(Dl1JrAnbrgQR8yIdyC;hIf;L|yNaSP@3>2nJwbTskU;wL~HZ#N? zyAs{ixEec$GnvKg)W3085SFuBeL>7dJw2BB{95&(IX>UTTt1ia%|v8>6tm(b$eJ*WNt8F;pF=MN1W2f| z@Gt<>LEUieD2C_Jze1(qN2wrOdiNd4m%+0#i3@b%v!i+&oaB#QtO zK6;EW?50$4Ow4Rh_h`OE_yEZ$FJYq*J~5~sd;z7RCMtdcBV}gz!?Lo1`43}j+N)MJ z<=6~{3`VGGR>wld!m!~xcc!j(;T3&0y0*384>8D3oy2T@i@aV+H28k=TtB|9uyn$Q{dIvi!4SU+kU z2P3MPhmCsuH@FsgBH>=wFVYHJ6hg(onJp3xWPcA5=DEm4+GiRRr}Y=T0l zWwXQ5+K6mIsQGo-o4%Wj#5xyC{cvxzm}WI61a}!6Q6Rrxwpjvggx0HyVBQR6Xxn8e z%MXa)ifj`yB65m^{sF!M>=VjK^^M_LD943OE!*`v_P_CXY}$G}j;eSZjBBhcAcLEv z019#5$Hdj0s#d5G{lXiS`S!&RW%2F3uMnv(;g&OomX3-876I}F_FnT+EJiKz@{j5h zKmAthcX*N4$OO4i2x%9R1@q(I#Tj4L#BpRBTdFf-Mep`y>~~(hKZk2ug9`E7YFYfCXtKDlvLjL-?bWqxuB=h@$YObo~>gf_2s^%54zKm(a64Qx(6pi3R z<8PAt6MK=%Ktkm7t7wQo1{E0824SZ{ss+_2+qbPU@g+kdUF&f65G#$u5!rmv47f?* zcYV!(LjMbwwhO1PN+RDDx*2*ikfEHw=sU|Qn#w(^HPe^CjpeF0OF^cNh<1V~d`q8# zp0sfaAI%|+hb$Z6G*!;QDGvT@%ozmE6W4Q^6lk~VU(H2H_2WGSc_hZYz;|wzuIE>c zsK#FP`rekdjUgqgt*V?37ruMH##3mQIzmpTdfHU4y0~TboEd)2N8e9{0j9{sTDP?< zskh2NR8x6ldSMe&mylO70g(2pjU&kqawtDQczsXr(BhUUz!-07`aR3T=Zi2Yq*K{Tt|t%_$r(D zOD*$z(N%5}H={XpVOG>YIHqMrmM@J)CjzUg#^W%MY?8UI8Xl3BW2dPdrW$g{I5rA4 z=%*e5fAb1R$bx>M9p~oT5Fce2HHCdia-S$J8Z?FkSjJ!;EVGvu+Ex#ev-?-jY%6bR zErtezJ@QZi?5~5N zL0JMc%+}pP6yVL*uaa8yEEp-59OX*s!)=psKmm}VRa~2Y_n2E$x;)IF&0UbOA6tf3 zjZ)b}m&FtcxGzHr>~36{Io3P{xM!%3>jC^!x38KjPmZ-RCu?arzDyjO4L(}Umsq6I zrwajpL{j}%473G(7`~pk6l3V8Bt()6tT}n{NIVl$0V*c>)L*sG<0@Vq1z?DvK00_g za3Q>bvf$_j`of>FsikL)%ZcN<<4Cn~AFCIFAJ<%_4A`%5!8Dg%(}>}{QIED(?0-X# zYCaEUK~wG}r-r`>Awk==h=d0%kp3Q!U6Lma{ktL@n^-yOTO^AAw!NB`J`@3)rL&0*)`y;b3ob*#1Usg!hPUACWZ=PEyqdBqxZ% zu;TS&i?i)-_d?%sX*r70g0l+C5tqlY+N2`|hNJXr(3t1hh6mwtClJWpFYy~pLG{XKu~MT+#BLDFW~*P^x!Jyv z6EHU@jhtSuLBCQ$u2+}gF|sal#L3-eku4?)Mk@TR*nJwnq{4*bqW@#oDPcmK$h zftwqF8sr7K3%E9W$ZkIyt}CNN`@jskz6sVA(0{ulR1bGFfkHRf3`OhZ>H8PQ-i>aX zI)0z`9Db*&O>8fyzM@xhp6`#2JmOabTD;2_>&+8He2$}-;tI|l7l{XU4?W$WEY#~L znqImQ$MDd_`KIBe1=owqB~!>46?;(=zTx9<-$;?4`+PZZEBcnW3ZV^sGDY+(HtKcn~=rAT{3Pguk!VKC?+j#^WyIb zKh3}Jq8SZqy8Fwu?-VuJ%9d_Wr`1l&U#cY7Y*Z;#;Y~C(i+Fh!74kU8Qpd6>w@KUZ z?)AhOw4MIKL(^FrSrfZQG5Z|ITNbQO+#zS_QzF0tk8OYU)#hd4p)p?6khPc0<3a&gpL?#j z=GsvZUs|?8bLH?JhEddB273gxXw6Ebs4XrKKvATQ{RZc7tdlT=teqlKUG5{AIsK{qqaT3xZW$*RI%6MetIBd}CtW)2;f*+wRd?!#aDQ|4NN z=jy{~^ZYB_x{-qD*AV1~#n;D*%4#RcdjGXy zqJK+=m@tWaz7jK>32$s~`dKe@`IWrQ^&8%4ARw;jS!&Tvg9MyBj&d!(bcX=s{dT1bVOXk^Rp44yKPo zj4Fjlz%2eto^GaGrry>$o5wCKc)N+8`O%n=Ht9RkyVdFFKlkP zZB~(15Nt}TyJlsQ<_d>TWI-Som8JSV!uhMxME7&NCL8Ha)tc*7>Dn?M8qP6=0E&;Z z9yBD0FMik4Hh=WrXnn)V5k)&lm{}rE#Z^V5MY*fuW3)mRyPL~PP`Yzv5WWMl=y7;2 z`*TV4wVmuZ*LpICS3YCZW8lY_C#Vs=plg&drwx^zVO;(!-_II#zDx%`%m4Tmwe3T> zVqT}+`uq*C0OKn-xs%chu61F_u$#VvyYvI0^DpG(&b@SC`|_)*mVp|@#RR$*n|%#q zNhvUS2?2+Ni%VJ}`HjPsg*P@O4KIb0NPxn4VW}oh`cvmfd~XxkM7KUR3@tzrJE*3Kc3zI!)FhmGYY0Go-n2oI8 zax+-%bI?$rjIlSd+gWl212&f??81ebtW=5ly;M({A2<$Hp%NLGf+BrX#iXOno^WY+ z3?fy6BH3ejmd6*4rpIzyC80Vt6!7``I_lRZvkepNd1=*GMa_inWOr(FH z4RVUcI7cajp*K&3szg@fF2(~1$T9QKR4D`qQ*jPg@AIRt>C&;?OY~VIqS=@USPbiRyB?2p)V(Tt{I1l={PM&CsU{WgVY zLFKk`u-n(r;`_}Mn{*EWZ$#h3y4B_+KJ?LX(T&Sz8@hxtXO*I7c|EJjqmPIMEHn)Z zg3xe`Iur!{79ai^Po#Eo+VpjC?5;xNDp$xY3#xWUmo)2O)w>^CuSOIL3rlL9>=gAk zwvNn<%op3`&8~aRiv^6`h9k2!Tn{LSNJO6(0BR;eTzV6(HHrMlCYxNsGkA*$r@6_m 
zcx(*qGi#eF=m%#WaXV`*pE~_^&@@+GYY)JfnzdjuFe-wSq97M-`lmEfS<;Dc!m zILG|~DRvLK5AY5lcg4m92?kSFGsD?<{u*3@e;G1P4HO@yXw)Y(qV^*~!)vldL{*K>bc@ zU~??6b*fC;vt2{XoVT43;|3}7~|44#e1gkIW8D0b0z z!Sicz!1JVzhv;pQNZJg1dtMOD(g4-^5i@+iwDqQ;2A^ZILCoI_wBmh`!3njxf}_)RMzhiKCiz_A1U&&2kvwcH&gGMmPv(!=IvMn6+RmqEq^Sw1 zMs8!eFfsz_nFtkY9N@!b<5?C<(@Vq1UQyG1N=&Lx=<(Fh_1-#Qs@lBSE#)-VFM*7sN z>TJ4y_4P!g;xOY~*SdzIw<>;Kq8k=6OfcOpKhgJtm67aVAZ%I9y;l#=Y$Jh02%-=2 z0P@DG^pe#vlx^bK{cx7w4l?HamD(-wK(E@;6_-`p!?=fVTGX^tUuyBh!)T7oxJ0i4 z&5fzrOYU(-1@T~1ismHIhQ_zRh%we-udRp0fRDJ1y5eg*DzPvXVT7inAlI5?)#zJt`S{*UnQwm#Q zn?(r(6f8kx1I_gPDW1n`I$oMNy|b1OQ(xoW_OCHwpQJ{EgmMKvHgo4*C6Ad{7*}+8 z=P(UvJwCmAKDyrwoq)4Lzd9#|T@qid#7M zve7+OuhQ1c3Tb+OG5Z^2f>p)ELsHi~uNw|4mYsIK_(DdXp2{p$`W0*hDZO)6K z^tBi6YK)9FhO!rO&8`&(n?)>(8zc6FH?MV4d&h!joU=)vERtn#BK(Rc5#D*7q zpxAKrko%&RdnH664c|HuZq0n=i#SIn@NGIr*yhV1e)!Qc?OZgyU1E@Q@#QSQ6IhMz z+9@f2Q%Y*FK~%~Pv|P_D-%IULcRdOCU2E zCt}7oD&XD?4v@SHOlQ(XKxi(DC+5mQK ze^0|iT)H9Uqay4e)#q&WkE45p^aDBwn?>jIx|nB6qjq5gQ`$*f4|{SJvc$5?5?x`fg5`d{sKm?@=?xP0aRZ0l>mT|C zN<+x+FWwp4d5VU%INs)6%ZE1t@L{Pfx3O90V|?nN%Pm zm}sov&ONE>cd5nGF{XR-BvkG1&jNTewVglakC5TO1-j3^dM+i}4NDU^9x)oSkjh)O8;UZPZ6`ZTfw5ZY9e7CBzWX zKCh=GUGI9C?gZtc2s`Iy{l2{za}WYy%lhcl$i!@iE+aFcOQdp($WM_|N8&41+STOJ z(srt=q({0>m5kX>nN&}$a`}#4Vdg}apc!2HKdcgfMiVJoA}-o3^4urVnH!zwmc=Dd zMync~1}*Fd)0m`O%_q2?U$*?l6tdZ-u)lH6o+~s%3Ua=0$M*O8OXh@jZGYKA${}?f z{;rwMsD0e?mMiSU-(u(DP&LEaZ=)%An&@Xz$5IEJfhW3^0p0YKR|iEGL}@{ zvzz35`Uk9kp^Da=$xc3R?S{`zm?E88#fA{D}6NPx5en1e)Jl>&*xSMR|a zc$oQEhLX{bSOyyGjYRT_8c(Hv4~FdKw*7r<=(mRi7hcY}($uRJe1!6Op34W|q0PoT z$OYTSfAywN>hqTCG4wjN&qS8|PHlxSV|nhBn6ImLM*MN8`w)rCo`5EaH6Gf!>Alwd zvGdEEP3LZu0<=?#lAUB0T4M(2IT$R3LJN7-3V2g%wL4O5{fZq{p65Nkw7m4yGXM{lcOP%MpXBu;+7 zCawr(Fa?!mO0_)l#*=!}Hcwqw(KX0j`1M`RzA{6187W20Q@9Iw{h94}h)pYn4Hg|m ztb)zDpIlICjG1w@ZBD! 
zPeL!Osd6-Xp+ENh_+`t5wLjH_p0WhVX-43w#=#y6c znf1wWP+|)Sc)ENa$Bd`t$C=DNSO)4aJhMu&rey!T`F8Me!NAhF;d~KB7a(fF-d?tf znL(RbA1M0E=;iIwuJ<-|V)Y?Umx9AP+4J_+B*zhC^!}9J{?vGsAxZqaUUmIqyJ!9o z(Yp21NuNS)onuCTO6rf^9QIsY2OIpX&3{=v!P(uGj^Yg0=@&U9xQ&!`{m^Fv)Ye{_FAt#I^8Ht5rPV z0yBe?Go8zSg)-EA z?4?=<0`{d6t*zgM(UkV3!?u>brTwWE#|U+E>gOF%5aPjapCa&&iguYdIZo*w8h_Xc z`}Spv$-0ETP;5PTjgCMn3UAcsM}=^3PXgS=TPjBe5JoU z9i~2*KG`O+!`s-L&dN7~hbbgt#%xOU{~01;Wh~`wu z(vJnb3x4QhvF$TjTfIlC0nKuEISZj*UPq6Md77>bmB8(71bl40wW<=Q;5~`L1=H#n z8C@>d>0Jux_ihR@7+MbBNSD~+nGnO!PPt$Hx$uAgc<)ZJ)!L5=3sVi9Dah}r`1mHD z!7^GrBXp;*R>h?%CTN3b$V02T74YEq;IZqWY&es{J^Xg%vy}|{%U2By03HYXqV4kV z;Lv>nw9M0Gy*aWnhDY%A-CgNbkXXOAJ4Wz3N4gE_okZQ`*a|yxoSIC3@|6QmL2RXG ze9C*-Z`7a-AA*TTYk&)5T|iKEp_4;m1bVh1p}(p(uKC=D++nDi|1KJadBDFX1-pk} z%K&gevo<`_pwkXvxKJVeMMaCO)+&|6Y?u|ux%({~eMj7|4N3Jt)OQum&@GEhmBpwE zS!WNYj8m#I8115u$kuab=`I!ZrQdiePiSf?!H3}eUOJw**~1oKL{Hv%(x@=biM+bYJYeOKC=hVyk2f1c=3w>qu^PqsJ~n=Zo%8MsfG| z$iD9VFQl*EYqK=9&NPAApi_xS;BjEt-w5W6<#J%hh~V3z8nu2dFAW4~uw=K-1Be6M zYNrT#M1aZ9vl2H<@UyU71c3aZ>~b|L+Bhj+lVvR5*#DLF2vfv3-983CJ=yu}RpbEq z-`@f_v%I$-1>E-8{XJxA(9K;DB10Q$wrRpSC7^$(XPRw$6{(PtTXWO&`Rxu`HHkTu z?@t>(FUU~Gjn&ZYDxb+Q*PXA`YbDqd5X)i;HNl24aa1*_is^@YA-G~sXX-~Lv`8c> zdJ91fz=ej;6sciqW8SfcU6{a)BjV7Mja&d(4ze{WFUbS3gAAyUZX}sxVeQa7`n!-kLa@oMzSH^Xv@S zrMW6tSxyR~QKfkCuoy=!M@^n(f#+#$6p*R>wwT2tDT=0l=5-#`9=3=2I3GSHFY>x0}Cji`L zD63QUtFxM4-mzf0+`pj&crI|k zFMp=;T9eT{tft9h6>?R}y>2r)Ddm2Z5AfEIXph&A%o5u~0mwd}m=Pv9LQ~1vOiW61 zv^!jXR##VI_$s!k-67dmVe%tTZnsz#w!Te%Cmi)w(YIVI!NJUKehNFvPc-w6UF&d3w@E-R(w&5dl{K(V zS=Mtw%z@zf@_ztjK$^eEW`^cDo*>7LAG3J_<9|*J8p?*XV({kALui(eAb|_FwCb7~ zvswk~5o>@*g620pNZ^S*$r>rpm{N=hU}D5;*5cI9jpt5tSuNj2IQNUX;@1 zGYK2J)HmqD3okzJmOi}1lq8p~TsC+wUbs;6=W!E0KijCZJNHP*H_Kg>r2+cPJOgKO zUFJGhO#e@Q@?$CWhPmJW;SDkR8%$ZjJY-MyCanW!B`BaMT)upnDJCvy9Pm1T5nR4} zsWG=7eejW+Tvsi3z8R)0DHB6iDy!|(+Nq-XPr4f6$$l}zvk|M8G2+G;mSR;^uoc4-aC(;8c2>pUmI<021T=X7qoPYzvca$7`0snOmX!2#;qo_ozz@n-#DJ_TJEHz>XOpvyeY*;NomDgVc+1My$9TciR0Z<>OaZ@ zXvnr5J6uh*`nKdDDHVR8zFjcaeib9U5yjO_r^e9shN=QnXAvn{$pt*;+9qrMa0wt- z)}Tkw5Y|JKJqR7lKjZV1mhBU)z0m(634_dWRhAk#QBv_vlxYfPy<)8xHEOh5Bv>NQ z?h$??&>}bwmBQ|Vyj;i)4Db-Zqh*r>>1(2uvuDo~ZJDLLu`6W-J0n{Exz_L|S1#I) z6?XcRX`<^F4VSSvf@cWAC@ATF5}VW{3#?)ApG!o`mT9hHaXoV6m|)N>51(mL;xBMN z{?QNI|M8bnvPfa~iWJx^VWT`3R_8&Ox)_?MeS=_KZzSJ$BOg0eBXKbAq)+Fyo5Cs!i)49)t@-D;|IR8zyAJufR{>CeM62 zb*f$#VShj&XIJdrqr;6POhPwT2|iOLlr4}zhFdV!!*5A3Hl?Q49XP1{yvnAW8G6rEh%cP;@(kM< z#D9t3{X}a;i@cv^Xudrn-sn3oeaBpwFPy(L(p#^(XN|5%(a_CwPgzbb&c#d`GH=Bc>ZzV6$rgBqwzu3bluA9Zg^yM;y4$yzdbaMqrl(>2ocpfA{ylK>2Tq3@lnKECU4y zI~qn@c_yg*YhrRwo#ODUZCWI5m|%gM6k~^$|GET>{aQ@kc>PVCSlR3rXaU9j6RQ*M zrn4njl$4iBQL)$DX0L0p8Y64vWVIVB#xV(ShmQ>`fco}oZ;0@0LG@f?G;pP<*1=7~ zrIFmW^>cSpZ7j&ur!=G(0IuLa;~0j_#8(Xz)Byp*++PB9(-%+MM!i;iCFMFzRhu|xFSlkTFJB=Eg?lfeFWi@__F+bMf;HUFke6|;WLz5V9f=Ap6Zq4~0|K5h7n#RT`rb?eq^p6xML zN?dvwpW3=Qv(Ar}B?it5w{P6Xw+YT~y!nQCN}v&EA2OqcKne3wDL1f#VuE%`>4GaX z$|1HXOr0^s+{CwR{~W&4=r5nw*(1Dr<>lYGy@w7-P!hb=hpRWPOUN5)?Kpn&glXpw z9hNdp!U|);{!~H`L^%Y;II84C7d#k4mKBYPQjyxfDNq}mXr2u6eCvjpKIe6%Z zx$_}-!Wh9kzagc-fB9eixhcR7$bJ0Gx%0+!B3yqe#_<=j;A0s^Sh3I0i0OgBR`bpM z3xWNKZ>*JarA(d;FN?w2tZ~3%+G)&mz2*b1Vr0lu8z#yVVVD%3V|42Abp-|JH5YwZp9VKl=XnO`vD5EeeL|Q>RGLHCKv@W13^z+!YA`&BE1TViGx&ihc9t zm$esF3|Cn13~*U~{1lKi!FoV1`UddwfX1Ljdt+zKn5B6=!Ymb^ZrE&sSzEi5Ud+!t z_6Zjn4*ce~zY)WIQOdW|?!Wk#e{O;{f-AABPq{t|g&?Z4u4DmuS zv)~}sDO@PumbD%)7S=xIQPsrp=5E|2Zwmy5bl~1E2I(m1z*y~}Ua@kyz5DB*{f#LE z+dJE&^pe$GILJ1Y2@@u0-KrE#-sXPw%5P;IZ7~b%QVAQA)mALySYdG^Y-{Os|Mcr$ zNdVnyb1Vpmtg(1OVc|glWvwj}?$4C6akd0!Xw`>O@&x=%H!^kOp^nlwM@)39XvGQP 
zSfhlShqQ*5NT4|-#m-I%U9w(E7{FEA7-r2tlqu^!`&5>t&!jw>XMGzcizy0Ll#Ekl zi9ah>;*UQ1$Y}Z{&3n9Gs;jGwcWDr9vAwl|A%Yl$*&r);ViI6G7w0}e=klePo2YFa z|K>0LCpU6bg}LIh#xa*UL~VwcdaUlGryS-xa}l9Z3!s(k``U|VD9lL%zr&a@jO1f& zVth~#b9yW;le7yy!gwOBaUW~#2JX1|MqEb@r1lcjQiC3Idr&O z!Y5BOSM#^uecu!p@FJ!14nPP+=>6@huNu(c1;J0YnSCfF+WWHFv$qu<0HqAxKa4j5 zBOZJ^W%-1Dp^Rv3l;T(GDxX%f0r$h;e+_NGMZMxo1>qF)3J)9=XKurI5#WtBbJi?Z zQ(a@c4mgV9LdKxLRDcG4nE@Ie=y>w}PTF)w z0%(5~g~r&`rcancNq|96^p9Dc3IG}nT33+Jl=q7Y=qP(4ghbQ~6!pBJi@x4V8a;ZH?S5MeH_8&^y&|%Cy<%q%~uY6YhKLIRmaR zdI+SrU9`+@HLVVZh_KIO_wL<>*tn6g2(Uo0k@fufMq^rsNP~y3JwgMss;yi5Agpb6 zu{N@=5iJ~>v|+wEO^prh>`&pS>B`@$80`oJZ z*bULbiMt46gXRl2wVm5{$=ZBE?HMAC^i>nSD1&FSs`0pO^vj9Oa0ynX@0S2q8W!_h5>Jt)uTYf z4Yydd!A;kiq-8uLVd;e2*~(22_Cl7;O0_cdhgSOlT-9PNK5ed5!XEWPpg9Q;b}U#dYPbrs*dX+9C{E6B30HzQR>ZLqHiro}IP9iE3=$DwNRfk*Gph%i@e@35X>P9y4aZMfwJhS9d4yDB%Vm- zGWvL2jIcIiVVNk)!1urRlB{*h-9ZT`>o%;X`|>pC63&d4%h5`6VM8#q{di*VOLXG) zcqx}~v&0&u%B*bmpw*vluoDQ`=K@1!Op&+-$fhYgtg-%wb`pRkn%lC}xHf2UJFPt~ zC>*f52!XW{nRCEnDP-w&<3FoJDtvX#Pi`^BoQt-(T1fd~vr+neL=gd2S zviXJ@2S+hhHoo~eqJ^Rz!krVs*JE;TJDJyDUKk_9kl@M(!&$9;rXI#)?L=hg4!Dl+ z4{nc;n;EnV+57Tk zfIoOdNcd}3TL0}}jiyix5GDBlz-0$gG1@{y zFWv|2<}IV8Z6TT~W-->u^0Y~~TT)$XW6`R)d+PKl^KPlDt&`<*x&wztj~Z=)-%$w$CYUAKOuIv%`KrQ{ z#+S4va~2_-DApP_$SU7 zxSWRvoIm>km<19-qiH#xb>ZZ44x1XHc@FJ8D$tiaTMM)G)NgO%ec{~V`@W(8&*C{ba$kT5#_F~y4o!hk1|>G0$zZ9!+7tZ zSR5$@JBQd|Z9FaSs-wqs*p=EjdW^OqJ^V0GFc^-I#hdgPIu^=uIzvi%wuZp(0T&cU zwr;5XS7mW!4PzZ-8`gf&!5dPn;N5^{%MHDU7hbgJA@}_A&$(st&O?!ag0I`@lWU?g z1$i7!=f9l_4ndy?a7WOHqTC|gITL0dTeZs7RM(h$E%?kHTeeAHwQbc{)JrjaMcbPu zR85q!^jmU)mN%eyqrLm~nx_*=J*&fht(ty*Jf3>2?kCCJMuU5TVfee-;xhB!QvtM3 z;Xn{c9fpR$q)rkPAKT~G#5QJ4Fh%xxd+#1-k+B}CiZMp`$NrslG@C!JjSpJwk%*QzG&TU zh=@(rp$yXs^sOfe1{|KyK5xER7ja8Lz?(LGsu39u5`oZkG`EW}x?zhV3^_!bLxLz5 z7d3(%nJ$3QM$&{&SQwF1yI_!TcR`~>09nxtjuG)){n#qgG~4N#65?SDaKQoo?6WYz zTN=UUZVo=tVI=Asf)=ppl9dtk!FrLNizxt72a}&i`pp40v-B-P_IGe-$}zbd;bJ(x zYOLIc#8hbjfTJ;rDjR~&B8d=%CXh)?i7+S#*M_UgW7=H_Lo8xOcqLdte1N%F2jxLn z_u~qBGd93U%`+yJ1)0tK=28UXA|i%R36sMD>-YFcpjY{vuFR=?2yqZ?Hlt(Rp?xe6 z2%j*;d14;H6*S>6QbvGOA+CdHyq7FqWXcL$U^oPTc=qApipe1;2pQBz-_J_naYX4D zLtqY^Y*EqTfmPNsfsAT0_(8B!B-LMV6C7qSfuXqG#HRkC>a*sV%?;dDJ7h8SpN`@1 z8{h-s#wDIp;n4K6Z)Mxg?OMFFCsJ?*-t14tO^t(*zzt>d{Y^H3-RYTd9+jz%X$T0fN8HJ>4UKBb+n7zyY`dbJ_tu=wg9>(QfA8MIFNN zPyhVS=E74qMU37oSyiO*#`5>(TW{-dkqdG$(;gdXrx9e)G~?<`os2*DMolmVIVsIP z`?{{k{jXHx%fSiuZC8y2^(wJPF5eXYt8zIdoBkloFfZGMXE1fl6TB;6AWgF_9N9o`~Q8ecrKSr)jhUd|`+f|HFq5853`_Rqf*73J$?w&zUKB zWPxaH(LB?;y%GW#$E9Kt_{0z1DAT7-RqJzP1==CJYIo%&zCPx%g@7pj$>1nRrf822 zW)V&Uclyn~d2kwfFlQF~5g#*47nTEDq3Q3;>C~i|9%vKRKyVMYbK2Xa^5{G38bV$>SG9pZ@B$_j?*rxfNNONw)sAYW{Fe140{2x_xU|>mBe2^Cx>teJp|liXN3LMCjr3 zGL@> z|94chENN38Hwmh3pi}@+A~4 zTDVA-K7Aj0!6F@6GuAu{-jF95ia$I9P!=3Idf1eZCV#bch5v%`+>LHCFQ& z993rPL!oFh<4XVSWM@?>YN5JQ6rFn1Uh;ENtd>>b3OuS`D`*(?GFMqYQ8w8gR6T>+ z%p-V4#-7u2p-%)(`-OHsvTT`KC}EE?7#ivuq&z7U->EY`G*4Mn;2F%)s(FfF3hsyl z7H;V9Kj8~&5nG!IrR?H6riTt4v;&M-S8=hWoC|_2+g|{kl`C9loxtMD;ch5GfFB+R zH(K)D$Dgb-`19>=4%-?ht0ju#mtXn4twjbu3S&%Iyi>Q(6650qhYImLxD#`h3HuV9ti)X2L>@YYCNk^okGx7kiR5fZaq2xNdEg%_1B`s zd6%qmSf?taammxSLffPb!rKa>Gh^m-vp6EOZPIbxH*_F@K3Zvo^0|9xe70W0+C$*% zunskdg$JSupwQ?QXmTCdyU&<$2pL4XT-qMK%X#$paXZZp>sE;bO%_p3?`x9F#FQyh zOaq4tPfbm=CetLdSR6ih$eq4$&fFxZlK}B;gD4hTf+yGM;X)k z2OSc^0{6|wA2&Ed2w;L3Ys#=iNcWsRX%-Ad$%G0H?ZJh##jHicWXa_OQx4<6y1*iA z8rW1xc2PNG=hHqc{}xmcOpl!)CxLO4EM92$E?pk3D<>2VvBe2yoB(4JM#Y}_5gb|M z(PA<-P-g^Z4)iD&oGAzQF>8}xgC!Yw1FP+F%i6GcldBkBVeVo%Ib&_ngNcX8nNXu~ zMd0N$TA*T!o``dS_NKD9Ao>p!%n~3%*&!Dd+-O-8fRWCn^Mtw)Xl-#9Y-}``H+lyiQ6_FxSl8M#x6ndk 
zY2_qm4pm~`9r%o!Fmo26fW3_{QF9~^QQ(Ikf9!Vb++{daS6eGLy9rhguG{R9Lhvcq zIN()*yXrU{$^(-}TbK*y&tI?;quD38{Y?(&vO-6Wwfklu47ZIQgq{O9w z3&ROrL>taG@utt1A*Mn5x^(4WDJ+4U0*Vj>dl}}kZnFj$EnG&~b971h`2HZLp?ZNx z-XgNd;jO@au(#fN*Bv^1#4Q#xBXA4Xq!`sWp)6$l@YEPLp-SEZl?FqYM-)onH_RUa z*zVr7+YXBPkq#q43DPLoe7OD-Z9^y#zE?`gQS3evGsFQm9BhU=JqK&yrp~&+H(xn3 zV4854c7g9Og$1e)_<@MPlKt67wpa0j=oA7cpG;!Sgm$Ai!Y!J8jabZB7x^Y4TVlX5?os-2LTs(9{p=mX zo5&(Va8fYfP)9>7;V0I=Hti{9zNkCuJOY?tqVWSi z3E&_!03HDE4|uVPQQ}cFzfkBR#iNk&^c_A^$`#dtLKEc`R>cC1DXK*H3C6_-UsvkE zE*u=_4828Ll$Xj`*MTuOXG%cz3!zJm)JeGTgE~t>^p3ADF!oI?8e8EeUlzb^`nrTH zTw&R^GZ(A!)H*8{B_8Vr^8iaICsdA30*KtdFl*Yv0{tUrmLO`i3SUR#J|`$4!(E%47XxA0dL{ zBeDoWc$lzxa;;*KGup1Un3hZRbG5}r1OftwTaz)g*L6g>+(Vd9Ra%H2UaU{U$o&_h z4`Xsn+C~mZnk69|Vpb&vszOW^M~_1RnBZuRV0;m15r+956-QOGcwv!1JC55f3o^d@ zXd&5;388{fK(o!>i0dkYe@sC%aduc0Fw`HP8bA;Rp17^DZw2Cr`xyH`Ad>Wp_b~eO z6#p29~L(DcX6s3LKAzO z5ax2kl)#|UAC54eI%SGnU9@*d{Y2|a?@>VTy(#d{tXIkhO#EIMF-kCYMqOb9#YF0D zW*~<0(4=G81plzcF4TPBltkQKSSYLxbqRMK2yn2HE33hg0mVHKcTC#4c*!E`4}v?V z)&gr>aZWVolLyy=5SrN<{|s|U-wa;)Fhd$=^}7eShM@9bHoX9t^k9(lT!la?^;WHW`{K1~efEiH(t!}d zHLRN)u!9wd#qh~*tTiTzZ=uo;PVeON039GyDj4%_b3uhNFuku*a;pLz=Tf?AZ+lD&#Z938p;6N0ne>IFJZRP1QskmxXCh)jsB<~J9X0VC!cLF ztLmfL3bAC_VpH~DIba@GT6vnF?MM3$?$RgbD&89iEY*{%-D0`F70F_U0Eu#e&tIt$ zn{Ryf;EWC$8Z}y0)dlm6QDi*XuMSRc-m=Z`hEJj(D6L+z+72?>tnDNXf&;FiCd>+G zfDbCgZ-0Tg!8l*PA#Whf_X(o&xR|o0oIKfJYjB?CDDYCjwno7GLgNr{yi%*v4oP8+ zsHv{j>B~B(Zr>q&Ptc#RiD1fpFs4Jkl$^8+rI@u&eFsMNZEnq*!33=T7tUQU#*@9@ z&p!Q>xphJ}fXhYIhbupaDDl|_+O<+^H*VXtb#*omFJIFqZ-hgoY9j=I`{gT^Nhwg~ zCRS;E5^Z7bBP1cfc8Iw?q&Wz_jMX83oVI&X7I5~{H%SSDTPOi&X9ngu%A-P^nZPGY zBok?FbeX~gyj`gBPM$oKIvneY(J6#I40O<38&AOr1&+5Y%Hk&hk3)CZZU=Fc_9B?Ri9txJM^2v9iYtKNv_`8h`Wn0L@w{zAZ7jPiv2>{qT|5tBb& z2S2S6u5=nb!4e(N3p9qmYZrd7zx{F1%PF;Wa%a}5**Y+-zM;YJYMRCXT7;FBgUBX| zmtmi@(97sb7x*Z;YJ7dl;L2KR>k)uby`10+Pr$z4Cedm3Z*uS%f-if7IiCWB+!d|kc<`-Rz0wpsbHoq8 zW7RjDHa?F7w@`lOcY1!LO5bm0y@1~PLpY@vQonF1<})*R53z<|#ph&MT;Nf(UXo|U z$%c9<%qH6B)i@pgvesy}DBym~IVvbx3JcV3!J9VN`lbGWUyK`|1g7xItm}Nfh^-)e zjtX9N_Dp|R6!5{52O^M!7kJKki{~D=k8y^vM%k&!V!Q#yB>D!PqKIYt1fNivD4gW` zs+V+9KR7l`dxbH=5bV3nDu+W(`vk=$sSR(F?dg;D0eQG@0w^=C2W@03K)~T_x;1E+ z^ii~@!K;^MYf&1{_}*`@dR+pi?{;fqF{rTF2o(_8;X2){Sb}Jmv~r_mfg3)f&beAy z#$Souo%9{ z?lGMh$i&_*H_5}tj%iQKW;^w4!GeV%xKqsqklwEQbiKKGvCwg{V)V?`hsAr@>!E^U!( zk_MgwW)QZpFg*I`N-+*1=$gcAB*!YmLJh1TVC?D9trpbM|DV10Zn7)M4*asb_f_!T zqk(R;@0scD8FCD{M+ED0!g+kvz5i5p5yRlqyhTI`Jj>DdrwrDiG zDVTz*LKVt;r{C}7yH&3WRcN5y%&^JbnrdGh3OnMuzthi7OSFO7A0=9)X* zxvuZU=T6Z#yIF(4<|kNc4fl=kXUbK0%$~vkDZ?dWr{nE=+vjrF`u&ZMT?xzsG&EsYjPCn9yUOQ=~0Rw|pc9lNcEMs=O$~=QnDh!((parqUjr>RI zb|O`=QlPtu(037U)5~n<{?Gp452MfDe&?N7muzUQ3+o$M7pmbv&y{1`Iat-jqyggW z`SYwl+mR0L+s~$0-^Dxd5O2_qSfBAObls2lek|qnc^QgF|4I|~U+gNXp@rU#0k|%! 
z;dN&S&(uomW(D@)Hr63JfoSQD{?oF-fBVB7eXm}>$|P}=@pY1ku$@hC*ft+8RyRf< zceGW((Tmo>GX=z}jGHrzVe`0gjpslfV@utZIq-PO}z>&dZs)lQ+}`)L237{-Ld=O49c%ZtepAi47(@yqQs2wj(teo^p8fY zS-ztNa43V;P~$9Z%zzt>j6y)NlF4M6hWz%j5%S2q~zZgi$H@Xc?16ED}!^oxJ| zMXWJCckX;Txc>l~)i5EW@Qd#$Hp4L-$p{HO5a-WdNUy)y8N1^;3CruciD(Fgnl|W} z)?1}P!nJfBIb=wq=MlSJ)xH;}zk2uGwEw_9He7iocJCY*!ZQo0xlEuBrNTN5bB?e~ z#+bBTQt z)_AVXBfodUrLS=Wlzr}bk5>@F^-|tOKyw47!+nQi(~56=X(d51TKvc zM-Lr|wSm^JcScX2VZyH2{l~Z73jVE^!akFKZbk5RL#*tPTSUp}P4B<=9vh22&k}&w z5DNFev*%+sb3@R7^{cmo7i#EmZEHsdUMTuPo~%$NC22^J)+*c#A}EuuO_Bnh_|O=7 z@X*5u(f;0dem}w)f42A!5wfol*ihgZ=HxI!fq#`y4P|<3$C(2Mzykz1Mr>eB}w5v@QTy_nO43%wUfBkc6#3<`2z z@H;oac?Csg0lHE7b%Pn;v$%@Mx=iYFsdS&9g9rGXVQAb6XfS8K;g-!TBssf3F4?cLGO&qtObUqu$uE~Ej{=3xaXFvUUj88W@GOv*w z3ygyi3>fl94YbZRs~EE$Xc?RuVIu8>!Wm>Cg6JIObzoQdRPj4_Xg>zdqm1Kx=Cbpq z8$MpV#IY_ItN!4--;484RggX3^Ww!zVTg2-$iADm(@QurUV8au_$H2=Zsr2kZ+fP) zGLzAU?=a_o>y6jh>G!o*o?}#o%C8C=#2CgiH|}v`FT>j`z!#M3CTc8T(m@}%;7PI(D{d~9Jf z%R%E~Y+{6Q!U&mD@WP|enR|K2pOxQq$T)8ufd_jGkxm9Zx*F4tm>&ht^54a^-wIBC`X3PIfB9Mz z9ms=tV}dT>Fk3kt0$IpWh;o2&>3=@{{j5{&M&WQGtz=e^pxl{O@R=y>+pIy`5D)*aIB!@gsQOQpuT`m_x|Y*5lC(a zlXQa_7pEJ>c>2so5fWh-iJ@GcGv?6=Dj1&Ebei`R96T>ktF{w{ZBS`6j$h?U#l#6x z!BcOH0`V+uzXy}^d@C(edMjjpQ-u9IV4`!QnZNye&n9&u(HhERz~D8cKV84-zEMJ+Q^lG7(vEv<7Tg$)%$2eeGx zjvYq$KSMk93|zT7$?B{KxhgVogC|O;+JIltm`0 z<<1b<^DtWB6qlaqsmE&!W;>3-KtsqF3WbaQT{|EwOW;5wgZ*Y7DXd<*b}i0_^PuWH zvIwI=SB?2gb=|L{!p5woeD&y9bObpaT9}MgV}^{}aB6X8sGtIm6HXlyp6hc| zGTk2EiL8yy2>R)l(;vEy?miRaSa{8(5rs83%8?HBLfnisphHpiDrArTl0KYVwYuM9 z^PS6A5$aGjWBo=cyX`Z2L7FtAtQs13V(Ytg3wmSip^D>$v$5HVuSDoAlgucg^{yqg zpCc3!dUNv#H@4F1YIu;FDNLZyxJ#l+^R;U?vf?0q7$+*Tq0mBKPM(Iyxtps(gRmsN#mU5|B` zdJjBj*NIVq>H?lF@fBg@3Oc|lDJaD1wN}}12S~%vG-zf1EL~&@(PJ!N8YDbhPs5xO zzT?|*XGpta9)yN>_91I}HS%~q;Di1l`eZ4r=NiaVfDMDxV|4x2jR@@&HrHdiCOES5mJcHtmSJt)o zuEW$j@5W*t+3?Qk_rtI^sWJxs^nUuWtskB7D4v0!sWa!!2FKdRWEV2+;=PqbQ#60XHvESv9QE`COjV+j~8PsOx-RCBj8x z=$IZ_@HdAMK;Gms3%mU)^_{zTAp}1?__GL+@?MXZ(rYq+SH$w%aoP8T^AtwL7oUF# zfvb+TeUyIwk8j1?5+R86Q|!oTs7X%Z$xm zvUBNa<{iDfhQI#HKmF4f+s^Z&+|!7o@#oj?z8!t;+Dwff6cfi)=pnsw@|_i!1UYku;w;m58mm#;Xkg62@*UC$|w#jTqnYFz0e>f;WE6qh}yRhzc!k!$PvDr9=~+)3S;F;7;Ga#2|UYxs_~j@lyPqG>}Tgs_aC@)?OH?#Xn^+P zn3l#qJpEzFU-p-{ysQCln=}v`u;CFm z(!!&_Et{cA?npFl60i}7e*b%nf>au^(&`2*XvktBP_#vLz!fr zb?(-1Vswl<_d9+xhNu+!dm6sf+uIv_O`fCt)hiEU2V8XmBXw-2a;dVH20Q7?W5c8y zz4`)UKf9bkK{#SH6+ROv)9y-K?rugXc{TKUQi0HtrEn_8L;jm|R6;NXK8 zPrvxp+w>1I9_3h=^-y7;MfupdQChL<3>(*|ISE|O#U^P$S=&v}K14RwbDh`B5}`rg z2!phF799QLXa4~IQl`t1Ub(I)W`!2=m47n~tJk09FB_ZFv+M@G4X2pxDL%?q{?5@2 zMmXe;^5bToqAej~O51$?J*k3;2U?UU3{L^kq(aH`y8KfpmkJ$C`t!f~o7@V9O;!`Em|HSNw1Aan!h~Z$2nUb-n}3yF{|vtz!mc~uFXw$4j)3O@ zpyI;>v;ZS;L3enCc&4&q((<$nQ>$Xpa)oVYa4jLcR6>Bu?Dp!lgQ=1MQ^yC-KT0S> z8E+$)(e9P@#(iL5u&V^fv?N9q9`PUn`VqGd6b~QQW8%)N4w%?HfbwMk<}qY4haM8o z&LGsP0Vwdw^kf!>`Z!rQ$qJ8x zjQL!M?#_$~jXAHOmWG|`-9(8c9w#;@;{6(ME2E8#2vFQraJBl)0H@);hE3SEs6zmP zCiM=n(FSW83~`8cJY`rE_L$fykL)2(8yccutx#=#Eu}J5zg2D=xUTK7o>=QcUpkna zjHPvHSSy@;;yoFX^#pzLwsa7?&w@V(Qw~#!PG0>3Fs?_mzdc)41yelOX3NR{=s)1m zJ7{>an+n;7Q!sDs(oWv?iQjra6qbZj8u4IzpE;_mL!~ChB)AWTTE`A}z$V-=no78I z1D0TJJgcWpJx2@dl+lu|qMc#MVQ(5-RQlF?n^yoS{cwFtZGGMVN5mM0 zIuAZ`cy*CGd(2}zq3rpN?TZP?euI{#p-D$BB*a9?B*j`UueE^2dKj+kJB45cFAY}q zleDOEJEyURv87NDFQBd@1E# zb{xBDgce?Y6Di zbLm-;=QIGXV=6pbv`Z*lN2T$@P-kh)elgTeS_AZ44)h?MYe;b1N&|5O3Gd5GG=e8+z99iv|4caAr$_;f1m0@KVLwo7`M$2h5h<=ccWx^CPt;+iwh`qSw0 zSAY4#^cR2qL*|;v*aN`vTMeGVcxD-tC$7R<3V!SHuKthz!@rAjFjbiT@-Kgo{_ZC~ zEsakw2ycka6Q9(zSrR99sXSUbkUnjb^PAp!u=M?#^Ya-o5$KMM73ctZmz?n4Q zn#u@Ov`~`#U!Le0*~$uaj7K-mi#e2juwM5>pg{OC{L4P!y-*UO1RkxxPocpReE_9G 
z+tQ_itII&F(!P`aT)Vi$x3Y(8>CNM~3Zp9hs$tLTeH?4_fBm2T2X!o^fB(Pz?;+$n zCZ!E=?MK}2*|h_2%aQaa-~Z$AuDj97U;Nb%5rPL8i_U=pwt14?g;DyRor6|@$MN8N z<=B(zL*NDn&ZipGrJaxsDAv$rcQuv}^N4$KNb%%BZlIixJbA6JAD#CsG2G|oDy!Cn9WKmbWZK~z>Iv5G-~agk*gY2LY#dpr|vy|eN`6+uU= z+|0Sa_R2f`Exhua2%TiEuXISMk6ZAWl<{P;ysL0)G1DiO_!X{V@xup=kPAemz|sB zul8Zg^ZX7T%2-jZQI2s=6{kk>*uT!X(xJu#hm>uXpXFr=$j^bxJl}|aY1MHM?F8Q; zFELh)oKrRpgDmrNv2TX_t)Y&3YJm$pWz40`3Kp@O&X|1R!SnhNIu?Za1_r!{kL^x#v7ZiFz~fp$m*e?R0GqffB+oqCUY0< z2*}7QvBj{4xMI=#jIG0K;|si2uysR4Z0~AI&5)A}#Elu8EYl`(rh_M2W%HzH_{64( zN>uJVTd07S%kcP>zoVezhA&03H41Kigfvfy{G<@Av%*;z6^eI?Z#}Y`!vsJy-d*>Q z_=;CnzGHHOxmb}nm=F)?(X%H}y<;G5L=$n3I-I~1I_--|&PLk|pVnzod00((@ULly zxC9=ajRfp`$HMuprQ1tvG*(0F4#3Y6Y2FtLhTUVt|$tod0@ImgCh z+)7FxvU%*(+{nhbjPl6pT%iZh&uX%ag6r}V;1=mG@xC?*H6P&g4co5=Bq!)4glhPI zSy7vfUEg=TltLd+mC9KCs=!N8y+}c&@wLdD;+FsJw(B}Fk4RDYU4VY#eX@d9c!is? zsYDiKmH|Ufg%Z2jJi)W=FH&@y&-o`$F*I2IQIz7h^uC_v=V`xdhZz(W6H1&sKh}bQ z4>uk9_IJM>f|vqXc%BeAJiGSF_1^GaYFx7}m2zn?{}V3ed4Me7!)z;~Ay^ z8)IA8ADwx`wYX;vlwO&|U1&BRYuFza@*Y`C-=P|5t zU~~=-&<05X-g@=gBEf}6bP(@~hJzw$^f1E2Z80o>(eY^#&h@ay1OJf4{#u4dDW^@~ zrTe&4z$HfDB7xWPcf6zXLtd`;u}Fr`7ce;&YUJA}Io5?~in+I#b2iH7pj9RsA;N3Z z6ns#lYL153Fdq%kRFyr&8XZ9jANf0~Bh`pe+Jo#dGrqA>Zn-%d@2(#w7+iC^Eho$R zD!$9m<|=k9VQtcD>}R7SF4ade*Ib(;_j+;9s}k)Nw|)n|KUW2`z(+=$1E-XTxp)0# zk;YMd&<)M_Z6Y}u-3}x3^MDxGWORZ2=Uyr~F#mKzv=trJC3Zku7pwXbQ789ALx&MJejuWh# zRP^@thGO1Jn4Z>Tt@Kh?VYL@YVMGeli?C0EG@;3Gg*;m@gRn2H_9U%_&gp z&ALc9o^PwTxXi*3-ivJ5bL9$QdgL)|Tf`L#O~yy5K5!H$1u~VM^Or8fdhY8eL+&S_ zhdL(BfXFv3Yb{3O(Np}%Lx3{kzj&pOqR28F(3X7f@dal{qz%E!Tp;GG{2=i1v3>O< z6*12}Q8w_SC*>{Hxark1)OG!f(xrwB&+jzk&k*h_D4^H-Zqr|QsqkEAT>f|x%VWZ( zhAzGiRpOxn9%A2@rx*dyMefHEqRKOTir=wgEAN*N(N9-{<2RR(DfQ`q%YF7F5M;ja zXzAS7@B+Hl*Z*j#KuX>xO);k+fI4oUCgpmA%GbR1C_MSG^{hTy{k7(0E($!azyJAO zV^qNFC~GfIOm*Q#5^#6vUiY{>E3E^6RcJBL~#wxKdziLg>G_TT^jAaEa?kT zKnrB9c20C@ar;wu+DbuB@x#IHu*_3ZRzPLs&ZSwk%4fRX+>(CcCh>YMSFO>o@EnRMyu)d*KLlhGO;y%9TKVhoLf@?`Xx z;IS}#WWMp_t&-FVn}qFf2h<<@;0Limhsv^Hxq3M~LwOW$W-Kkh+oe||gf!Z{(hLiu zkWhsu%;SMNmCV@z$qQ_Ucuv}PcA=kis8Z^%ZT5U$J*W_~qJj!$Ix{dCydG!-s ztnlIu;rWDx`WoeZ5|aD$b6-*Kqj=n?HBSs<2|rIdC#Cx)_vb@EFzWVUD_g8_FF9O2PuC7P0|3!fR*gEtw zhU4L}xt{#tN@(hp+;HeLBg2b z*va#l4XvC-adC;8>p^ogh3$ab85$ajnTj8ZTj4Sb?;~Xi2Y=i#w2}}tH??tXYphw` z0LB&Ed?!C45Hm>ZEJcY2ZarV#YZbp*2x}naOFJFy%#J%>5@_rWodZs-N z&0T+8OKMWd4cdbS!)d(%-Xmk#pNyrl8|!{<_I$JZ>t6pSars%a^|L5rBV>MF``i&`|zwM}(HdhHMhvEgNkAU#scIZDF_}1cJ zZJDL98#h<+v^LLIS6^Y^D-3*vfqz;UC^+AGu2+6-9(T56BNNu4#im-ztnm{O{4rvq z#`PZ?rd99ShL0b6Qy4OQ`tJ1HMR;sz8$5|e<>h-r^Q%cMhO1DF$%7JObVPW}IQGj- z8f0qgGTfCY*0D~McVZoESXpgcq2yQ^g`t5L3Y=>9ep3gGTcG|7ATS1|SP1g#mFk#KP#?QLm zr)z^)d76}ay{~?KG7OaWyT3grus9Ulk!

    X32P7LHMaO z5&S;xL=IJj*fF?7K&}dPcVBIop?I19N$9BbK85C#K`33wd&{|twUU7fR6w1n3cej1 z?Umk&FUqxD!M#gmhI;AiZeF1G($WQ!?=@hMF~8#cSn&VV9JaHPg{X#xhKG*#^ZH`m z^F|Ju82uZle7V64rt_Ntte z%cZ%sxmK@=^79>QDei3cJLjem;KfiJxR>fnz-U!r8m z6ZXu8Nv3VAU@m%1av>0fTdOr1wTUPV;iiZ2w(-?7TpMcC+|WmvR>t-7RC26?nTY~7M{EUFBFTl;6a=4KY1|utj!@-f|`3A$-uouw2Xy#Abmi^cq5(gqdwZO z2(xjnvt7Y{D9`h&&BNZ1u{J92IBPE9Cdtf9&V|v;(lwjftgMaXnM(#jxZp+Ee3Tz_ zT-x%C*9_B@vj;yaq0FLJn@WpH7t6xR?@(s_CciKRdZR(q5PZhb`e+K=@WAgpn>?CF zY%bcb6kMscYXzhw+sb>n6@KSBJZ8#puowa|3vDl!n09X{noJ&JX z`hS!!jfg|vDWE92!BPXv43?MVp}zv=l@*+EzxX5Fgkd0%OI`~urNC2hvB9@l12tvu z6xjs9ltC%3R`2Cs`Nf=PgF?vUUE3RQ@m-V@g5am2x1uDyK2fc4nL~dbzXg=FoB*&QWNXo(zkND$#>nN@Z1B~yR zH_GvaAQo=M%eFJ#h~XMB7z`M4aYY;R#R_io?cagFWsUeup)KpE&$Qf)3;WV5prS=$4?^Wno~pfKI^dKVr&6C+ zR&rjG<6&h3g1F#@9cB(w*(@6OEyqHcju=!~fr6PZvMVb&Cy3-k5L04=CLuzGcQxO0 zlJT8oo@HCMZ_s6q7Lo%wGY#u86>#L|`Q9h{n}@YHuuYt;M!VxDb!b`lj8D0m>q^{7 zb9DmFu$0DisdAnNKGwliVjek@EiK_I^73pch;xXI|XLU zv3hqkzY~7$e{*K$y!!Oh{faiPt2FS!A1%=0EY?p!A0BrR2{5FyLLvQh5N!5##)|Y4 z@Cc*zic9G`Ki=%+)%#I^m$_$bi|0m8aU_djUl!xes=o^`g;a}&2yra2Ak-KT3)L{5_IUQ+*Jhcc3@jD6hyxj!lnJ@U2Do5C)cl&Z!& z4`r3SvKprm-Uxx3VtjuQn6qH$J6TC}v(q@d- z1$)0mOHsEsa#`FnY?~X^z+*iou^x(U{LW=WAp*{)Py)Kj^%4-?}Y3%HcOv#e4jW6wmnOwdED>*ZQFJu0=&Z@ttAL z`E6?)5ybs@jwOm`j4#`;fEOGR3Is{pMZ1J8V6ifYVu3u>&|J)Gp_FpJ)F0L>U7Bon z3=hb>?9bV{rValT~k+><-y~O1Z=FR1b zd9K!!DLf{h35S<~R~{6w_}r3$5Ax9aw!v1FUN;MA?Nz?_S^l-w-^L`yzF62CPhZLu zgjb@o7;8a{-sJ}qa%Dv=2h_F@%F@7jki(Gs3N}zz&alkvkb?WWpfAzuIHI$)(GrYX(~ozzTp0o5|wSpg*Ye|Vc?-CD|jtV z0@qv|G_K`(5gb&jk@A0enS*$NMFwFiuPw9b(w!AJxK{95lbkeU9Yqq3)sIuX2Vr}* zp9g}sx3z=-qXN8IAHP;QvtXM=dBSDR(qHiF1l`b3hjM`c5nE_y_F7)z>f2BEODkTK z17l~q%Xrx{U4Tkna7GAnUN7s8ph;}IGBm(`o9G|lx~ePe2B zZ%vaA9;V4z4weV+)iqEF^TIN;Kku2Ito1YspqgWoR(mcBLsE=NZ8?9bQXntqg~AdL zkShv+8nx3r;$cZC! zqaD>PwP_k&*wxyUDmV{1&b5V~YkIXU;cW#k8p-5M3fi&;Q@~S#Si@i_aXhO+fUKqg zVx6kMH?88nbX#KqJXoNR1&^zN_tpA@+X~rpFdzkdwkZlF%+8XVbE;>WvoKPv-;|@G zC#EaZX8Bfe6-tF_XJv|XxP)-)w>Xu{`>_b0``fpzhu%v|9FbE?|2bES`%o0g;Cwr0 zKQos*XE(OC1|7wTBWmvfLZ-0tIxMNG#7wHA|2iq(I_GKI ztU1W%k@udW849{DlVWzEEbAoM^k}I zFUb;Rd(RJ-^g)Kh{3~xVs{*bhnM@cbauT*X#w064}?{Ek*^o9%kitZK;#PAvTk9Lrlb|| zJjX|zM;oI;-Xf3i`p5T$XPyj}>pic{Z}OF(Yi`cksDLXQP>Sc4ReVB<|Rp&zQI7z+SD#K~Xt!j(n~fIckzKL~E+*(_(B; z1%!@WXuX2(W#~Rw66K05+a#X6vQF`0dlFIl@5lMa$Z<85 z^%!=DD&frL7RqWuVV%9iK2zP^M@VJ!8X zubYfosAe9mVvbk#sbD8)mH14P<`8l}pJx){-?k)71(v=z*viQIi>vfET~naBjK*Xp zChs?KKzJ1s_x`w)~Ax|1*aLQzyF?`7pXNvGEMxBJbrV}%xlein2 z2-`wfsO1c<*48c4bvGd?0ic&@D z>F!7^P0b-753#%5-Tpz&mRqFE2CB{ze85xuafT6|0?y+8W4~o@nX$|wU^Uj)BAf$v zWnCO@&6rL1?hi64!a%BPzy*^&3}baj$*4>vq!#!Pmau;ENHhgZ*5#Yx(`ms+#bm<; za1;qSB^}hS7}oWIFvH47pH5H z73{opS3KIYepDxxVS1e%UFpE{&!ywfJ(s39&f@;LOX=3d^YG`P)Y2@EtWT4i&nQ1% z40jpbgfwPZ5hX%d(Ua?VaUS0O&9r1hO97dNA>j;u!7r1FoC0@@FJ6;w6%I`WrC5|> z^5r*vKh?IirYaQ6YrR*}rMKTpvjanr4}z+%sQ5J!bX!}OcJJs(r;Z;wOHLK;ue8#otQYp&e!q&}5Ad@(7Ohi%{_cxin4iq-!m5Ru)ZUB%6rMgoyKP6PLbSD- z`OxxJz5@_`uTNwP0UnWJ<=b~`R|EC$*tsihW1g?T3Oxod9-16SQJ)pj@KgTS52eB_ zOBUh#8`T%(nkVuJC2#VAl0osVLVNkU;?CqdJ~H`xIA=;n<6*RaM2850p`UxZd%pAsg zRQM>x_xTYt%eU_p@L~F=FKt7{ZqQ?HSkS#5RiFA?DE#ZYp$Kwee^UHP^0_VZBd%0p zg;gUIqhY09+vK|@1#(Hpa-!TaSS-)v{b;Z7axBs^Mz1OguH(h_hGrB@?n67a!svrJ z&hnLCG{DE|kuzGM-P#F5?}16dHlme)hYBp)dZZD58Ic#)T-gq3RN#DSE6nD5<|(}r z&$uk*$?u7D`I!!!F!q_*`4xETp!x01t*I6y4bDuZ@hC4Fro^^J1wR58^>8ohm(EJi zxw%<+GGEZA-(>b$IDN6XrY^Ojcp6dg5E#HtsV^+Gpay<^Zpk_X#?Nm<5_i8DSiY1B zXwl`5pqv|3Ofc1i$F?$Y@7ceX!Q7O_COOb~U_6A2g_wPr^c_$R$xuEebxSM0`6=TH zPBM?jB&DOrwx<`KKaL_&$0X92&Y!)J&YtgsvCSg*Pb0w0ro#tY(y3F=q_&nVsj;ai z-MKxG-g)ms>K;owx9?8heDf8A(4|yYTa!LIem@T54-=OF#IVAEi6D=#O~Ga^gqn 
z%_slzEqE*mOQY6Y5L!cB&z^WD?cTL3o&M;f)OV+^#E6+l@E&gM>Pp}F_IJ|M)MWb6 zPkx+k-@Z$$SL?S=J+?rFdG+q2FsbZ`i^ru!CY#s5-wQ9CO6~2P&?9H~Aw*rfaU)&6 zbU6)h(EOd-cS;D&waMoF74%*dP8%EW;=lgdYv~(be+~RsrfG!l4^DrO{^NiCFHCv_ z_eIw|4q|#4A=rts*r22+e&yQ^Tc4uH$CtbQmB(G{<$31#vGno_FK}k#So-T9{U}XL zOr?1Q7ek6$T3eu#ug7r)AJTspE?r^b)<&PHWCI8vSKU)R6Zc}j5v0^_y4Iwd*(1 z^!N;8Uk?O)nSz6(p(NRlSv|4v+-j-cw2XvB&y@te5{$g4;N_5ZjOdA6LQ8ULktgWI zM8xDBXe8Jrf|{l>G#R8fy%Y}cMWY3q(1 zDA%KS1>iH3p}@Aiwm$uf=U+_&eRtB^cdw_r3m8oC`Whx%f-(fkIy?&7n%hzfUgCM9 z1yG#kU`CFTIp;+(XK!Un`?hb#TeFnzkBtx_yomP|`bDC4R?_36BIvwjNM0y2@Bm>3 z#k>a|_>OhB6so^WPioA2l#_7{52pP~bdf@KB+9GPJ5A2Osb%?qvTcdKTdHc)UU*__ zQwyHq>2zlh1Ml24o-KrR%2qjQWDM@9Z%K!C?M)9BXH)OUAZej&%bNyn*XU}mP|>d^OLt8}I=OX6I<D{0jks}QJv)?4$@a)ovFj!bNoMfRCbj zf7GDh3woGN$v5Xkk-_|2*FamXbq(o2S5Mm6)SgC}S0@MiphL$k_=c_&eD#J&3*tm& zNSrFyshDs(@G02dw_VkY;c7i^T>G)=NIVn&dO~eXyvt7&X_cRO9{O9ws{y3>o^lrd zqi);jJ6=n#70mSu^!+qtS?&xk%rmZe9k|68*u?>$Epndzotv6T?Ul9B-h;dMM0@XW z?9!NYi90F6Jq!?9US{H{(7F_CBZy!7uOee~tJWj()Zx86dbBGYKGK!?Zr)7~ zhaaSo@%x1HjFNATL%wHIdq+ij>6L@&#pe&EJ9mcC@W@RRt_dd6ITYWSbf1B8>&|rq zp5^rXspry`hs`KqSeYLJtZP-V&J`h?*?XpSO6ye~q%t9=S}mx{hKiX4vKZC=gh}Oi zBQMBZiBJoNuCK31J9f0CV@LM^_bBj;;Q__Y9hUwQo1q*q!;3Ku@y+|(D_DVoQi;7< zI3JSc^!N!^;J3^qrbw8H3ajiV@aW`@qV#ZN1R?Jhic?$Kx^+wH?&?ha0}sGa!Rm6c zkNYc3!dZ);-iWZ&+}wnM(HaxGbp;poD{9`Lq-awK{tX|~COr(!pvtIx=~?^QYp1L^X$Yw>2RW94}R%Jj(-$I~dv&(Gg_E5g2nQ|n|U1`(;vklw2Be&xD_wHyis zMsN6K3Vxh_uK6o|pMU;T`p$R0O}{n-(^p{FyLV4|{^Yai^3^NpFaGkcIJ6wvEEVFU zUdeA=AQ={W5hM1gGwpAUzI!JxF~Sm2j9DM1q6n-ZXg zCDZ%N7UU{2l=2l{r6z??Cxrlxow$;G%{;ltBKOMUO@BoDc)7BMNf0BzvExVMu=bz) z;umR>$x!1)!%}^CgS+u483|BF2(C13pV+Pf%36v8XGi6lrl_-`@24H(pOe!$ax+{g3}CLW5_pa#tao))8T$1iZ|e)%9tmuHq3z zmczXkuC+8&(3cG^-tnIQ#J+v~$rDrp?=i%)D0_WbE3osR9GB(!3feF3S=MS({moU7 zhg8Hr$7m^4z~|zTJkTGdtZqgV;iFYGX$~5_J2;fiUb>pbFcK}!&8426MtI|vRNs`+ z*u?#GqYu8N0L7S(P*w=lHS&3bjZ)E8>xZg}jex2vYQUgl#IXGA%|@I@2uE;&zd3HJ zFhJEXUSjD*(KgD_|7XdLQH^XbpJEza3j)#+J`_W@4u0+&89~lR>A8w39W~f_7 zXBP&I?P(f*^N^^LaRl~dynprZn-1!HdFQ@#eQ+Y(oWhTUBwhzYn1(+W%1Q$@cA~r; z>)x8SwY8^-;qi25;C||#84sb)@Jnm1<-HanE1uiCFD(*jv4r9_H#?E$AlV2rq?~G$ zt6FegN8i*yQZbVP=PXjKpUu`o9J`)cDYdORYw!nnp+ip%7XJ@LFdrWoKP4bK)g% z82Na%i05fq!!d9Qa}5RrH=~*hehhtBEM=NhR>@NO!xy(h;-P#6NR=YGkPqJ!sR1m| z>pQ|0_){4GH#EL7<}qN3IX;gKLrJ3#`BvGhC7)hom3fn2LQ`p9(NO134Kmu&UEAqA zD_v*=@LoH{mR%@Y2chjj4E9&=-hf{5b}6FJPV0!Dl&XT|`wj3+ai~%|%fmVDc`c8# zeZrDwGw*OAK-496HEl#lux?K77>4TjP@Z!9FA6Vt8?Dzed1MhP;aLF{54jEqm@&rh zTvS%}Aa>9}J@h~LwCd;u3aUhba$Ftm=uXFW?M`R=t_Kooz%zby<}Flc=ISQK+}6hG zbfC38&Es+OJl+Q*C}1eFjW}IYj*Ui;)|G-)d?6Dg)S{2LreEr4M?K#{;SvPtXcj^< zytt0OUopb)o_$JZNs^G7;}Ur7YmB=U7*o4hI;VjW_m&@kXV-&L2k+Dn_GV_>XQ!FKN5&9bVJa}QIzpS;+nWigsZ0CzcA-!Vrwdo#Oa1-R zX>wfK4lKWFK6SLg6!vwX;O)fgSefqJzmzUrx{pFqgD^l@4EWo(@1>v2-cQGl)Tg7z zcTy)SWaMbnYiREh%wvhc+SbApiN(IL9?k)*6VnrEdTN5Y2`55eGHiC+)~(TMJ##Wb zmG1G$F+9DK2r-mNQZZ|4tVt~`4Xiz_OkF*72w6<`%>8K; zcok+qC-|e?v|pvSnz~gWO|#%{YVN$4hydSXt$_eX=iO{OL&885eQUphxNAbPXmH9(@eVzb*)8sP~jLEAA@#hPzEOgUzHdQ z^eS)L+7nhwC$tvmz=PIZpE-g;HbkoJ!D4rs55 zHQ9JQf!8&yh4-_qrIj&zkMS{;?%umkT{F=p*K4_+*KyuP#DpGv+bc|l^1E(y;K745 zJvq&IYGxeOLQ9R2-|;v}+k|&|8gDB?gzL&W+S^&*+!ji&%A{j*5Kqhl6dyg#U7Z@m z_N5aip8<~Pbn)UPcvT-^!nJs>heE)v#%tS7h_xRle*1iEVgm2(I9BoLpbz`UwXAw~ zwE%0$c1-ADog|NRY1``S>oHPHMt_ab=bZ?}@&?z(O#$~bA<25NO`DKhJlpKg5(2m5 zt_F$PKH7mIWD-xpG{;({htSFB$S8NsfpAH?nwr5Q?NsSpMZ(=Y}!Jyx#gm&P1}rYJcVAJ;3b;kkm$H1onF{kcpzee}oT$WUrV zA&A{sF+$2$o0_XpNLv{r&=f8YY24@n!_I`!=$b}w-bFZXEi^Ml$nG#6H0KyYPF>$o z#eC4%upGv**?EFhS^Ch_P=ow9&scD*6Dc2ab23hJc$>Oxix>lrTR0MFrh~qip811h^aP5ZYbmJAcar;JcDS~0dRq!_Z#eQyP 
zi9j3m?IFCco;hm*0LGbH#>g{{N3@MF$NdQVN85YS4(5aja#cnI0g5gHTZ1BaB4tQ!8O5>hJ`WM<$z|@eUA9bhthrr#A2e$(pE%IEi@D$JH8KZfVB}oMj3fHyJQCnSO z+9FQj`S&sCxZYEx1reC}2O8o126)GAgypR*ZRz6BJ?8xBR6}^ON_HFd)FLHY)}nJV z_?yOfEFUm}r50Q_w>Go%hNvazsTqZO9z1rqwpeMHds9S1-Rzr4HT|RM=#e9-epgRw z1Q+s9G*AS0R)YdtEkgBv@K}cn+j+|QW&(UoInUdE$f*u3v8|>tc&47#8RUa0@H|U@ z%&G(cv$WK?P?dUE#$(v#0u8N%Pw54mfDXp-NU2z;NED%r4iHC;?Q{04g?8)Y9~dE; zkrfo=RURwBm-|LcQqBYb%`pagz)=T%q9V0K`vb?6bDuEdITRt^?Z(*CgF*xqg)E>l z4~RGcrqeXyNiFhLE%;Iq>TGPzVVvN=jqhf`&3MI3YJn%U08^^ zRmPbkuf9w@koK=@wnOn|d^JI9d*R)Ow(m&o(9%QZfqBB`Tfnh0S_^Uoamb7p6i|3T zJBnRJV|}{Mm~LiH(e1pA``aq&WA0V)9HlP?@$ioU-#qkO!Q51f^43bfw8L99mdqm$ z>8TxX=>b}zG>2N7dWiC~ot4bX3*4Nf?IX-p5Cigw6r+Jg&wN$tVm$A`xNGS7(gF?- za8?5?*E3#PA-GoHGa|zv>M7oz#Mv{$yYjA9;BSSuI$o>5p?y0JFCC-rW*iI9jf%fR zs}?v~;ThXITGBRnmEK^ZwDPfvfJAwvS%B{^GO1UXq^>VtlK*^_)?h$F&~gG(yx)hw zdCQpxB4#pVGJ;(Nu{JeVrGxvo5W05+!P4Lv7P`Yc&R@Qgu3x*4pfkm!-jz;1vp?Si*U2qCYwx+gstSU|95Jr0a@|NgG@>@#}^`(<5;)+m&)T|0L$IZvg*fl-8z z1_p}47{WD^EK>4X zZ%JQ&_4#0$GB`aC;|MR;Z(Jul^aCa`yrYzP^w41x=WPhX&7ov=5^BagAN9glLP6TK zV{6*Ay9=e)HBfGzgit!UkcNh4(g@+PaYVtwG~T`EY0p*!KNMF0Zf)xV-llZrN?&^K z!_#<(YtsHbJJR-T-3V_?;!GGSp6AY9M`6De7BE9ZGLEOx)oa(m6MY2irQ7lBN&fg= zD?;~)XO6MPcNfY)ZMxM*A2R_sk*Ex*D4jTdoOQtaV&ZRXX$0qB9fjfY)vJU*pGyxO z3^D-+s@7ENJ%vs9VuW$Ocfr0ePU%yQUf=`gz z(bb;b_{Qr90(tG>0E+Rur_ZF1&YfkpUkrF&!u#EUl2}jw%CkmLP~SvVb(kvnrdRb`0n)bi!TLCPRczP99j`HA3hjLzxdTJ ziCnmoPSCb}ggJKiAW$L<%_Arc(Esm!a5`PRehs6>EXrjk<-8DrXA^lOZWaCW7cZor z{rng4{^5iB(-8zyJ+d>v^UXKkOufC=)5yp$;oCnAA$dFf_uMllh`2ZcZ$RBZ!5l$3 zKX?9oIs+|DBFL-2>;;~ep~o3KUiH+sb=y`H@VPXE(yW)h2?NA|{Rh$&yy9=Y{T>tc zNJJgjmnv5F&8?evfD`L6LZiy-=9IU-NAk4_yJJ#=i+K4K^sjMAV~fc)UV`4xv1uc^S>u_K4mt1rKpdgx!*@{Wy-V!Y~0@4o+j zx^n$`c)V0_RkCvOyATFSxBP<6H1z2GcddC70#Osn>lCh@rTHnw0Pv$Uw7?_w?B2#) zfWp{_*NO;$o40NgB7HjyD7E#~X4n4B3)=P#U2eYg8jpizbqsN36{ z7$@y0;yVfDMNmf|)-Z$Gl+MF|yFdP~3rHd8l;@-owtc9nalND|5jX#6jBJ&zi2m(T526 zOX*P8mb8mJdr*d#Do~WRY)>`JosDyZ0geuZ;C-^OJ-xW=U}|Cxm{&1pJk3(q?TL{z zbZ-#Bm2g8g2t2a2J9VIZxD#z1yikM55JK)XJdz6<1oRG~upLHGJh^LcJU<7HjP}64 z3!euI#N&km3O*Vc4_)BW5XBK9QThky6BNf;#=ATJHZcF{>1`uKb~k$w7#(rv?wv4< zw4mhdC0c41imBe}FkB*}t8i(6x_|#ZMu1t2H%;ltp(CjegVAkZsYJQ1rTzQbdqN4i zM|8{xIG<$>Zlt|On`*-tFwL6dMeyhbe3Rh64pMEeX-Ti{Ka}bwF_=Ne^VBY%}vyHD$c8Z(o{2;niE&ckgbxPBhy? 
z2At<%wt=U;UE9-kl=Th_r3;j$Qg!bBopgD4AkDzvY^y`Tq~{$0Cgz{4oZ?vnZ>_ja zPqs6U(8gL6ym99E!O5|7olPvyjtqeR1#nZ7w%0eO{hd9~Q#JsE0t#dk7zl3+Um>z0 z8*8?2-Ikhw&5ePS3C1uW+{GJy7pKA;xSivkkpMj`6FUeV_P`Gw;I)@_D^P&ja9tm2 zZcn?rkUapn0%M_cT!&$N8k|?o5u%R_*j7mdR&#ec%GlG3Z0N6acyZuPx(V!4w7dp> z&;{??MVS3|l-fF!zA@&FG3a{n?yXRGU7x)d1LcV=JE#W}1A7+CAp_hV9!$N^`!WiB z1O!6|8Z%mvbGzX^+mR0*VqhEr_GQ}G1f60oOUFC5q#cCjR<(jlXkY?(hoJ3y(36Vo z0StM2fxQ`dVTpM`#d+AVi?R0J1W^ODaY}~@{qAhCm-*%Rj=iZ91HnTg=LQD`iS$5* z1{OWo$_OTD{HyfEVBjkSbOSu+1E6q$;0W?b4Rh>;?2UtG=&#+e!h`5HmA$ISFzGHrLVp690G6~)?t_s zOmZvrjbhOsK#&{8vk|)yGO;NzzGagO z5Ed2cUVP#C^u6zX4>)JjK>uI}@;jMWKK$^bP#kXc-Oip4JODu`rJ5B~U4PVw0Ht@W zn-HK~yLW|$su2(GHWY`sV@L2-Ooe3}tvA(^S1*(s`sfLs1xI>=XYuMaV9~tO$3|F8 z91~EU=zYA=C|$ zCh>OZad6G*3*g{(-`#Zg&b@SV?oK*}@busOn}3sj_}4!`>A8XTW|%OVTwZ8jQ7hwx z_J?&AT+}mZsqBC6yWd4{+m-I!#e;}IdGNq~lz4=*{sBA#wJ3Veq&MF9MtE0kx9eqJ zdj16zxz;p1Jc4Iti19L)o$FRZ^473BB=tpQF8n$;d`t~3s&my zeeXMA4CuexAHvX13_g0@uFPL4T7e;=IqhXl%?mG{LgB%4ed{K5%%!De^NZ(~f!s6pbBDd-pIjEHM$j z_R7oYkN)_N;4M4A%}@xqJC5u?A!|Z;yqNZ55ZHh4VCuvGp=aLy)0lGb&_RUUd*LvHj5q1!;QoE-&2PQ|9UKoKY>7w-y-LF;P9UH!q_^My zfHBue`~NV#^5Tm@mJ;G?b=hM=^HoOy79zYecXd z586?wI@+@rKG&Gue&>BW*#j6q=HaCo{XT!UtS7By}WehB(xzLLM06+jqL_t)SSYn6v zWu;~KzH9#CCydmM(Q4p_dRQhA+83FFocCxL3W94N@wOmnwm{pnJK$CHuc42lc+_VJ zHy%RyZO|hEjn+5Sr#b{?8FWkPV*bMmgkm|wUIBCC)3M{EW8H|7N|ZK3_1l?SI%e>M zAW#!?kiK^KSbCARy8+%P3Kg zGnqqF?bl}Jwf*pk7mgeSx09)t&C#4aH9I&CRQSb(!gLETxs<@QoTGTjT{k$*+^XjV zH%h?JjW=2$zZOIBM;Feg5$df)S=mMO#H;%br#%=ArkLXXf!k1W!_W-3~ zE5_VYM~|c)g#IZ!=vClu*Os1i6hqL&!%;SH>u3EMB0Af2!o|Zo+6cYX0%%q6F6IRk zs&4w?c=z^nYR|q@HHzmOdS0NuYUb1-6up@t%E6no4};p)o_4|x@wOrCSJB3K%DIIC zHge{Du)T~zRE?6#22{Y{Ql1eE;D#}dF&E6CK=fiL^oSuwc4;htKok9<_nAPf5a8+2 z@V@C8Zo{Zy7-a+H%)l4gRiqzaNJj8KzI#VH+PNc?mqEgcyD>=ApsY+WmrOmxxB+eT zXaw!-M&UuRgy%@-^2bi*l0}T^{V3UO7$J^z^rR#7M-y|`Fp*S{5b{<_8d#W2Q-f^g zNB=1+G&V37V-!MDPhHUGR%o@d4}6P6F(oqzXN_a^+@2Z+?xj8=9U!&T z2s!D&KeZEMByxtz^+V`m5?a!z{Gj(HVX4jO(ASLs`bdIr@`!^*Dl3w1rC(TU~ zX|zZb2s~gbbE5`{Ipj(`*vs6LpJ`~>R@;n$%$>f8>KY@GjWX;Mog2}$)NRkk(lEcD zHswN|_N^~<{;w3!b=|MEJD3Y3na||R)~}pR3WKq-bKrsfyATq#VOf8We);y>2$m^* z`&+N$?b=DW?M^(WLwLCE;o)-aW?TB>_BSK+=9jWi-;{A|Sob0nQU`!-=#SJK5xAFvV5EfmihLP8G_`g;sdQXL+! 
z-Aw3%tV2DSo;!I0p>Hs~_ufUq&&E*5o=q>F>SQn@fYi*UEjH8+>}{vUlGA-664;;pyRg>x5EJ08>LpL;$%`|QcsKELE$=iB1n@Zn&n(p#x2_+fL92?99nKvxKM_+S-6uNRQdu@4bT;{Yn}@8FC6yA-4Y> zml_d>4zRh{f&KgNUW}xlKnG)3ssG^H-%H2wcDkvS;X#H3-o!KhH^4hE_yDErPjbkP zZ9^ci78RHRe%B=F9n-7)^MCvWp51|XK8p9^*^|!@1>go=%Z$MfQa|*)XWzc`O%$m= z{a62-wR#^!y>Gt$dV2QwiCBmGgCG8o^|d$Y7sezCxtk2V^ZsdYc9F4@aipQ(Km3P3 zkBy^j_7q;AdX$Mh7(6;z=W+q1^(R05DeH)<(!cnVKS}!!uuhk;e&_BzLi+E7B5b&@ zwEy?e<=gMP6CPI;+hIKG`&buz|3NAfCdXws7&u1@Mbb@F67365M&@b=s95N`I6aO4xL z--TYj{Sf47&`gYyT4@ov2oA${}p`Vc>2jd{2d0S zN|eUKY5%@Gv}2TU&`Wsz2>kX?+PiND5hFu*#m1qtaTHEAFe1{VmZc51@Er8r901-k z@b?Bh1Vd~Vw3|Huo+Y}Xi4DKrhR0lGlc6K&#Hmy8ieqW2pOEhUo6)yK!i8mDgPz;M z$TiVl)Ip=GTwh+tc&I}m8bKjep_&2L3dx2U&MW+bhXy>Sv+%2TuU^E%_As5?dm!z_ zGqD5j^4ZaWbe(x_8Y5jZO64-ED{lAQPVe2ioGwih5|8lcu?{N7r*BGZYB>784>X&iwUB&F@FQ*B1LtwgCDdiWq+L*e$g6_vg%cuDr7yj0y@O4|^sx1qec zDNHX0o?FnfUa?_}Dx(OhdK)!ZY6P2Qj2P|VPN=b?C;Td}spUKRlJ-$|H~iixpIH>* zL6jNSTvAH3zlOFWwx+#!#||+jMo>#H;Tao6csE43j*VXGna5hmH;Q+$4<0%-JDGOx zKaje??PO;g{ldm_D7CE^5?EQ2mdIBN%w5P5Tj<+Lln`QT39VFUh6mAG6?|7S_oF({ zkA|lL5%_4N6WdX2s}a~A+_)az)M?5YZmdjqCm*Ikf}qtFZ%<&{>m}-F48se8_!jWE z2OjXu?w#q<<+I?6UH9qZ4m!CDo;%Fkbq`v+1rL6RvNA^dP;HISjOZW$QDtzpD_nE&?e*ul7JPqzkHmN2}S_EGN32M?x(mZtR1 zjZ5h^JbV(x1gmTsVQD}GMxiC1BieJocQPj4JT(PG!|`*;TXmT$AkR&#n2 zk9#-XzxR4SOoMpMYN4$<6q!!KlJ|9V6OYRNAB3aI3tTv5A_yYFcMZ#4L?&{6dxr76 z1Ea_lOTdt9sf2fTVnEo1C$+n~jqyq}!2_4LOor!tEA#vF+jgdtUE9(ihV`5D-4yR` zXYM)H*^??T;9j|RJ2fz-n=zDK9~ww!;1yGKmFa6IiL~0aGu<4#&&FLCVxU9AZ)=eq zcM#gY1DW_1#vk!IhTNg&6E=y#4(kQKGj(T~hKr|5@>Sq@^5hF&?)0TqK#Q)Ip+A}w zp3RYoNmWNHc>Lv7XFK-Y&Q?4NFdUV+1)nGF1>`zEKNqfd;PtkU``NPHapXb7kCbj zY&{+^*OHB;)^@D12lt|2G;&=*-RIe8g^;7_Mie_HJ0=r7f+~8d-AV__5Cz)+xRoRe z+33C*#J6ke?&8UEO^r&8Yw%Pim`|ciE#xyyWbeNBZu$si^B$JqH{W;@&&!cm%j0@5 zR-sV|>j~`|Ds*j;9x(;l2?RQ~NUtF@@d1LVUKrPq4iAmMnAyNYb1juMD_aXNd?PctYKck z!p1*5ex(tiYyX~|z>7x?xH?c+22h&b$4jQ*ZaefaPE(&Dr;`6HZC1f<24CU1;+`G@ zaqW0fAr>S43%5gaxP$q=)j740HwE?N$OS~A>q{L23v+8iO(t&_vfBD83Jn; zmeCrNN1XBE*rb{}oK&t0Q)^vIDY z_m{u=6}UbV#uLMjcQa0+^C`pfT`M|*cl91t;rspfG5R?E3|qvakM&r%4JA`KS3$W+ zsO?XF_S5tuls=VFj|u9*8`s7}t2f-Wqpb}sX|4x>6-CjtpRK@S2=O)q1*0H_m>l1E z=WUeOx1wElp$U^)>Dy1@Ylds33!ogu>4dV7g{_*obYR3iVy zpJ2cM9=%=-^r2&R3OaS2M{v<~$y!V_3TH&EhQ!4-I_yUjGREn_~zj@#+shNFRN4DZTaU zkMInj^1!n@;Iqe%9A?9yZS)!ZpK=dCv$ZHSvk2dYQFqbSe!Q?_BWxprUgw%R6=%cv zjN^$mId1tFGFJ%=OVjfR-fH@<~2;x0=ndI+iQVl7{6C_>pD z2ZZ{>5egHmk*p?Mj&+aliBXkW=pl9jx3$o#Lh}@`dXDrA-o6<$%qbMBMZ6e|lx^5; z@MnbDUP1sXFup8N-q9`3#rosB@XlFyiE9&EF`Trp#j9wP`#wibKOrHG+gD1A)57g$tqRa=s#f^!=Um2Z`TcY-GDqB zKIbtWOc8-Ychgk~qopyz9Wl~LCG!LN3QAU6+6lj|Q6VEo6Mf@`RcIKorm+#8Sk1Eu z#`ygQ_tU+H_fb}+(~BrThIKX~uT*g@Pu#+oZblBM2j&j;J!nBe_9&XzpoV_zpQUYND2~RJ^6&ZIOPOMghBr;b(ZLKTU3Dw`U@}YM|?S_<2_Y z22wnKEHGsp5_(+Cd_>55D+=QxveFhd(i*^fJ%*71bAN2i<$7&HKwV4g*wNtJ4PV~@ zo|WHv@83;##vj77XBe~4XHRF^%h-3>iJMHR#JXwF49f)!oxl3vcDfIZ8_K+kIlL7- zHbC107@scR=nej182A>V$*gY{MXiduyYUcjuV8->yy4Y&uP@*1%j+>w`05Bzu7}^X z(I1Ac_6^)k4~PJm!wcDgbD$QZ54&x^i|W}ptd-_ZrQW+YQa_s#)lFeQzr|7qjD7aY zqVpGh;AT}*M2g)p$W(ZU>2t@nA-irv*$K~XhF(2ecoAAv`8TAW36dxV zLX)A@TSNCyK&#U!A_BVT-%i@qgh5Q6#th5abi%`pGGy!w6E!tNbj2jfzR`}J#V$e1 z8zaHzKUspGQXidE`fOB6aA2~OzBkJ9CA#Mv^QB7Vq*eaA90~cpL`nDz6Qu+tW2t1q ztYw3UCSpt4+8PidEAfD`NeoO|1$3BQDJKa{a)(ugf@yY(Wo0?uE39ZtAWK+`7Faj5 zh%iw>ctiz@u$S2@eu+>93D-sHM2YYm#;JZXPxoNqa%kxHQ(;N$%DKEyivWU{Kt2V1 zC+*28CNQi@4Xm$m2Sh^w6(AK5=YU}m=Itg$dPd&FQ{cK8L%Q5%ehTK?%!K6zK(@Ap ziFgSC>)!odJRLVsI9Nl(z&E_qGb-!g0F4b*D6|N$1l~`x`O@SlAv-nGtYg5lGS`py z7wZ%2qZB6e+*TykfWXX082qy{)p%IO2n)&UyzU{m;|wE|ac8VOJ#pkY!W~)1fUCB+b^DfouN-{cUtJ&*J=H$quqW_1mkA$|%`@VBhc 
z^G3$&LL75341=@m=mSbrIdh$qA<%kJOz>5N8iYN6khcb*tr|K}C>llioE~totbLBe6Np1WM`z0?*lx&ZS$| zZ(-+zy{ox(d@L1C`t(JRwVQZnd)m}hh z-+}kAJ^=pxPTmVVfK?<#N@|5zS@rPlyEktxCr_R{Tb>Nd`vES#x`S)=6gO1{800L3 zUrN)()`eZ*P($Ep4Hz4gN@2`%Ayo<}q^@HTUS;tgWRIQ#l0im#dSJ9+1l_MGol%IG zW87Do=M5eJR-t3_`aSS{intmXN&*X~f?X}XJY>fy%^8Na8Iw*}#|Nsa)xIfn+& z(C(oAm{*3;UQhwXHMLI^f>+KM?VgC|2OWv5ip*+J%_cgKfU+U zp9ZW|u$XrnfkOq)E`5K7d+-+H=zeH}nz&EY8DPLD-GrT@!gv`0N|)qS-1@h%OiME| z5F6X&wKPUiGrGmMbQatOFIu6CEx7g>=~j`>JX%VtjHeNxr4@BtyxfE4_&8p5Xun|a zhOl&Ov^txAr?yHv+4~1wlvX$6-J*i0g7xCT;|JiTq8&^p{cOdW&;?U3?J<~CmEd@w zecI%Rux_CAd;M^I_b`+n!DO$_BmB`m&*(&EOq&Q}3n+DJ&?qe=M*7t<>Yl-4gugzlT-RZQ zr?77J(0`9SS-^#J+kI+`u7`8r!pe*e~;)J?OWKX?MOO@v=r1p0b}khX`V_kjA2 zv0T^Lq&NzczF4}!gd)yOPVNM#n#i+cp9!5|- zW&T6lfGxW*d2a_n{r=*!^d7?SZT62&A%qTqXYE@s*eJiBtuHYycr=ZUvsZR1bse0f zO{{6sc(-T(*1UgqV-W2Mk{VPsK@@^Ys38^Y*dMN;aK(~G%17bvsQI7k9~iK{%<3MdUjfR7YR37MwrKgJXU%_S#yj)>!vVkN74kH~q zen;SY4O%ykE4a!wU8E6PzyS&af7)GNf%OIgP|ZG}2&*z~DvH*@yG5+Q`_N4lF`9JD89_jA3kxt^WITo8Nm^Q4 zPRkn{g#!Hp(xC`Ez#^*^z8zs{8(v|R{=1GsMZ$erPZ*V+x(m1B1$P5)gkd7uE&|V| zw0Vc}BkD84m+QBYvuo@pUdF|B6Qxx@W9?qlhR z;A1ys@hIHlT_beqWyZ6AS31ag5wxL}o}$#B!pb;=!sVNIYHS1ZUR;KERT6+-Cm_QJ ze8k-z@D@el7ISchd0S)bB5)Tt+D?D_SPzeg4YJC-B{Wee5!SGd?-2)Op7~#8E@&0q z1xHnVZbLKnm^-JeLEX%?A;$Cy!IzBIi+wlt!j7Pr-B?~va|qc`=~Si7Y{OYmE?Ri3N#q|f?~XHg+8+;Dea4EG^Z{5T~A*-0km;I15X-naGlY?l||4vzasaSZV7+^IKyPMPLNsTZuj=MXeIJ$Eb-N zxG-3Y2Aj@K_MReG_Mj~tMXRX0t&uiA{N#Py9^XeehJaSyp}Zbj7tFW86V%X7j1S>1 z2H`uF337iBgc3q!1jja9PTE<#m(X54VnNp3^r#M@>KuXr(A{!@eEpOM3|Iv0FT5_y zCSn2+PrG=x1>2_B;CAulOQaU>F(XT=}-UkPr?f7d+zDeUH2zH`C0lqT-ryC%sDtL*3f;e-&lEU zyouk!ELP5Z=ZpX4UEwAp8O&41O-)JpZg#qb6d#Y8h8HUv*+K8%JId$Q451~o+!Q!z(p$a1B zudvk$FCMB`GQV!F!oq!L(w(emVBVwP5XTM)6)8>|ZMUDk80EH`#oqmoM#dE{w4#RV zEE0i>rR}uc28}XB)K)ob`lW?-iM^#uXwIi5ry}5!mT&j9Dwryai}D6?5GL*yRpYKz zTK8cEd&?0UFDGHqmI zo}Plc|M1hF5$I@lh$Okp9~V@|2K_z>mFd*o5bbt*bPWx&@E%ylo9p4mSC zU2dVRmPz%Qkn#Oi;DwVZ)+A^#pMamw$Pkfky&Yo&t}uTDspVAA_A5c`Fh7m1{#Sqd zUzI0#jcEIny_@J40W!1{LUH{T;0~!5w6@@ zcY;N7a|3vdWnGeffSo>6*>fWfr2jgI7@KNwQ0WX_yEuCm3sw#|S z9~R`RC|de}$0OW?o7nT(Nk2Q8ul*`!XzwQR6oiQ`xYpIf93N(GUqRp*#QNoykrf#A zb(r*h=&>-;>NAAZ_WJNB!BWNufcGfxlLwX|ATq*o1qNd~<)e~Ui(nlEq87sr;Ijvu zSlPiJtUI(tYu+)6!wzVsuEg%+LrqCkSMUqC%W~LGg%XuGP0-_0gz7FUgm%k4=x+bu z(Sx+#iHm$E3_tt$rbh{=MoIf9XWS}rXMB=CJ>6+-aR#gGd}_wqNRJGU!vWdYGm5eX z8m$saFREQ=!zRIZ_Mt;8)AR{zGVO1rU4yiFkh1y_vKZhr%ihSxz!^guXwVvTYXnPG z2ORwxTU=HN2GztK@@76-=d=_XZP;UDj7r;W#9W{Pl`j|Ew&0KglvHl@mGqzfI!8ZgHdI>ECf9c;I^m?Cf z^!C6HT1uPUXPcu2qv!))XP`N2ym!c=YxGnc(byM_c*7cIROah_L+njtKP9y8=R3=3 zZ(||cz3U)r9iGtW{06UU2DWXCR}U6*_a>{j(t69NQX|T11S|KBRW{d=I>%!80K^5&THyqWO8eb0w-;T zU?X(n`2+rFg*0eq8+C7?)c(ab0rl`y)AHUsGMsK*oeG!k)#nJ4)Tg5BDi%p~W2?-$ zHIyUL7kMgaxo-1Du+?nw82Y}2b$=Haad1RBgcs4$t@naQzvzij6L-N^1BS^jsQlH% zUu&w0;cLvVY38nhnVxR1rswSO-bZU9JyS78vq2Q5kC&X9y!x$+03(FkNdGVaocjf{tIkf$JB< z4`Oob<~^9d-Z(yE8!eG4(l!>aD9<1eL?k?o^EzceAjD1=m|Dm`zZGRe2e^PO<6d|-y>|O9c{ak$Z(?j1 zUV!5~HVD9g#;c!2LW`+KGRUl{QSU*3ZfBAnV#Si_aUZ~~Y0f{ra|?Hc<(LfDu8kpl zPz_qZ(_{7l0t4Mx+hB|a`q43Ut`KN~y{K5~t`ZG!n*bsP93qS%Jr$-fPz1zCx({R7 z>|o)cJAekaV6=UGU4+ob8t~xZL+%sS|HfOu4#+?esx0Oe2u6iA>ICQLxCpzzkRhOV znPdpj%lFJ~G93OWF2y|@=b)BQH?%$$IU}mZ`Eb6;f0-s3`m>T~umsNiY7vz;Dky6y zu*sh#V)3nehtY_amsb!@_F~WN?CfkDcc2wUJW=C!-dOu3d+zYFfUtrnEX_QhjVN(O zz8odE#O(9uVO8@iR0H)a5KKqmRYvD!3Hl1NO!>fE*y|qa{ze1wXgMuj?p7MV&d&L-vB<%Fv=TP8`*nAdppoZ zdW6o>%4+%!N(_S>8SG{lmr4aB1MMi#Y_LzoV_$CHy@Na9P*}F?Q7jUCiw|DQi=CyH zclj)f7cz6U@!{jg>ATy^jKJ#|pd;*~nMGK3zpaC)o2;-M#KliRTWzwg zle#<`7)eD<2nXOpnPhlm=t9WlxnsNm99n3w_|=0e)3d|pjMWJN$*#aa5ctDAH1vIS 
zWhGr3pGt4P{f+eW>62n_D;i_w@W&q$d?Lb75Ih5|vhePMQFA|_@X+$PB96hpY}Yup zhJV)yp!EGe`~&9aK)BDZ5|~UkZ!PDbWu$DmE?O_X!o1lV$60sl+2`}=F>~^DT+gQ_ zC$Y*s3fEir>yR5E${C?$wI}`C|Ng%Vm-4^(;a|iUdW4QR^716ELEBKcdBlwSm^_nL z%k^fiHrdDivwi}vtgJHUpuf7$xt(H%08Uz-TM+_JQ8-{C zW1cxkDjpmU=rc#G%raNU$0y>5naRl;>@_C13G_=^R%7lSHtpidj;#n9>d_U=SV>x- z!L9L8<1a0S*|KwdA7Yt5LZHKNmj+l1^mx$XkdxL=1VQr2AZqp0wN(p}7BlIT17ILB zw=%e%$k?ic>O}!EjD_fkzFEK0csoLS9ZT^EW7q@zpF|OQ1MizQylA$;|8~kW`t1Pp z@G5K4boT&bLy(nWm`9jkgVEKooDSle*$>R?o7J>&yh|Vyj+=lPA4E9m0)8jVJJ0^> z29DEM)_(u$bu2Li2V&3R20?l}ilvGA`j`aQjob^~0pK))3-uNPm;yFgBEo1dmdxwS zpDFhC?a=f0nU@i%7BC7MaY6>X1xCG#IXHqKtDtf~TSlRahbZ;dD02q}THMo>{^sD7 z^kzTNJrOYObQ3^^Hgg+`8T&~C-UvnrAX>TGa1R|pxp0S|EIoKoZLo$g4dRT;9{OFI z<-Ao61Mh_ncXW(JP@zs5)(w6S;fC6b7t$eX@d@;z8yD_w@+nlp<#X;MG@i2LJj4gr zevei%h>d4Ksz_7ubxaVIN%_A(HZLF0l8x$&4V)Q{! zd-=EzS(j&z_wdLUl{qR5q#Lgj!Isc|X#zY$oq~%hI02vLnU!=e6fN$eJwfO_;Qa$V zSHRv~q~~>l(X5~VJmtP;(00%tiVp>VYvD1L#y$4;Rp$DCUaXp z%?QP>v!8N=x(EA5!3{hUP{t631gj)gUc4`EkB<|qXew1UP~M>cx!m7FtU{Hghz?Lv4G%te|?#~tKgc73*HsB!3_EB zI7rgsN!<9Q|9}c#EK$#^q}SlrG)b6VoHAx);nN)E^j^MS$3P=xXmO)gFPhVIf2N#=X8%lwLiXpKi#>*@e|Xx5Y}L1 zAUqxmpa#B+&H6|6_VnzD8u-WT3A>rT@r`e#aR{Yuas9o+^rr(Rbw2IDEXmBb673ep zUKTDR{UU6^n7OA@VWJzYltF6l;eOkU5M6^AACnMb9euVs%Nhie594neWI}X5Gf6zY zOtQ}FQxgP?dKcJ?q?0f_rMyn|Lk@ zmJ@x+=)q4e;yS-0lg6;2t$vuk!%-*ieEV&f(B5#R{RBAw75fllza-3uf~|?-qYevo zzoXU9AaLHl|A>gPucv?ZCx3!)F^MLg{W37@x}pB3zxXe>OfQ6G$MRmDsK+KoU20~v zzPjkr7rwRHXzg1=n5j~}!r34H@gK2Q=a4$DVObGBdC&0>TPsA`WFdKmQ2pfbQ*dcC zz5DLF?Df1&a4mz{R0+UCkeJtB53Au3OvN3PB;x*~hmR0a*TYh$Kz9eB^P6vfljy7c zA*c@^^lIHx7&7QlB*srosoKgAqU;R!3ZTEd+r^VA78jKq?&-XB>vsCy_rJ&c2|Ruc zfg2?W41un<7I|a(M)G(X1+U48$-?!RU^zr5oJE-a;eY<|Nh_O^0l8HK6ngm;yEnf01JNbA--#tGch3&9{jS{xO0RTc=i%;F&g z!Zc3bG(b^55qm!hfnBR_;FX16N*P08j;TZVDXJFUR?aO>8RY6RoQkDsNvxy1;k~iUDcq`fYJu@U9FKga-=IdRtvbXte4( z*)G)#S}Z!Cttv?NP)dyABB@p72z*d6YVf54=0lAx?y=8Hi;>Pb?h8DCerTc4I*J?v z?o?pr2_Vx~jU#l%Z{JGIwc*rJT~BFcKCII2b9ZaO9*lA`dn2dVgMLEnf?fiKM1)!e zVd@B`kr)E|w_n5Ja*VYEIsh%2W6pY>i`}DZ7}*r;HZ0Q-2x*$ zcGY$Ni$W(HUta? 
z$I>y;bK6);Tj)a#I=qcAb%aJx=f z^Bg^2(X$q{EQS*n&2s;4x$UJNVaFglNANaUPNj9H;HB=m8z{MVQEZQ4-5R?(!FiGO z)Y03`d6;YqX~L~vk0%r=t-x~-rNCWyi*_P&w;{095kQ}@ z#8W{5sz*u%yxeF0pMqyBLTL$0^~ZBF>GfN8(mVIw!bKAg3iv1kC*6PaFjd)7a18Gi zf(f|lYIRC~4;4OH?`z=A9*f;Jbvvd5Hdepn2A01-(oGP;=l|!dT*r+p$sG<~WrN_SE8X4{dK{j`ee``>M(Vw_Xv`IJ|_k3M}ElWBGa@ z=oare)+f-4H53?gtgm;kf-BI=$%FCKh7zI!yxas{TfkufWzYb)f2VSbHN!JQyF#JR z!#X~TP`ivWa}lLS4MDMQeI(tUoKCl)tHac(%kmC5b9{O~wZQ9jH)CnMaxF~`jj~r1 zyo8Q4BLMDTAUW9HP3_=+C*F8{C?HP3lO;S}7Qt7?Vi&mU9fsZw0rVg}Sz1XqMkmw6 zYp)Z?Xc&6Kd;?Fa(9CAmKD|Ix?j6u)RiV;i&?oI%AN03M-;NmT4(geJR;Wz*;Q6!k zWN(WgVDb*oe(0MKhN(LEZC3)X{q^!yAu_z=4-@89`lTSy@@IecXa7_AhRnIi?-za4 z>SxWqg3l;km`v{WNoqb@c@Ig+jpR7_jA3|-odAUq71`239TWY#+Wq?_1M(f*#@YX~mMnBuQ zDSiA2XMFzrBZL7K9yIh_orAQU^EO+FvdfI$-r9$;SxK`nFtg7&4gd?Qj8KLB2r@&* z90jlg0bbwGC9ld^f=>`;5sFz%w1BnKH(hcoEHd{9OE3S11S5W z$IH~Q0h7l5Iy8&ALUlll#|RZaK0b;t*_1sP4$RHpu|1`Stqe?|%@mbv{fI zNW^`73aL*Cj-rJ$(4_dATTs@NTLTFWVy8>0ducmS3K&#JrhgM6{_N>2aKHtQ(Cu@x zbKuS+;13hWUOkVCSz{i(PaD=*NIc_H#YiVaRafsm_I`i_zG55-OT(+eA zh6aaVFoCwyIs4JapYYxT#s;l5?qIsyD2%(WrbM5~A@83&L zpFM@(!_bod7$(uZZCUt&No8CW8h60SeHS`C-zmo_=3nTEntz_96c;QmcuG+)yC!KIj``pf&BIPwodiw2sQ8h-)6Rq$P8aJ+6QG zqaVRg&0#%YS03;732WdfEI*(P;)EaZ#OIEu@HQf}$_Mv6|MX`+kMmXywi5J-v2KO& z_PClKy!$SKKr4do-$$OXv|89p6AB*SWYT5OV>xaRp?Gw7IL@PWjGn^){@r_jpMJ(1 zI7E4|iUQ3wMr*Qdb3bbbcyo->z;ktVkzB;u=ok!-42305FAsyVC@e4Gx@wf?J<3wh zIB#91$LCE&I0yGVJM#pZ+s}NPo|@uZI+!8GN%v%3)hqO`8;!ofy4=g@{?Zd@>JUe< zxHta&Uwo8Sq3I!|u^zOtH?JEjcC- zP@RvpuLWUC&lsWQm>ZDBEf^gz_$mc=cVQ?uYiVr>W|zq7GX9gdZlwtV;Y{Blax2F` z%sqLS_8GUZcsXMz)3r-q9QW>yz?^R4T3wU56Bod3-AD()*SWP7&M0Ib9k^5h?kCU) zEj6>mKiJVF4@O&;KBMIxqg1RDBVZ4+s?Ur{>fSr0LdVB>o!!SRnuEBiL zH|UiDsKJuDS(mgzKEUd{O`o)I>ehG#!TlCv>RfzCWYLG~2thE^;SLDcyU25!D5!5B zEVba){*$NBAq2^FELY4_LR6>-9g-Dr znh{!B@EF(v#w+ZHT!lW&BBVSsFd0 z6i83lCpiZkcWL_(eNqsP?IM)Z2~Fx{t}%phuIJ;mrL;h7i8=~MnS1xW>uwBE5(|QU zQh$xUtf2UrbRD9oJ2s0Cf-TBT z%0rcpODc+Y+WnaQe+#sG0|9>r_(5=>0X^r!CgZ-uy0gywJ_XME{GJ7_Gu#vR2Ed0S zCeV|0Jbw_{m;`a(V`?ZU_Jy9Z{+tq|rwXpGkVh-w9z0kJw9Oa-tMvPWrRQlDVP+3p z%R)LlFl}r>Ak<|XE-}ufoM*4_M?_41h$3c#IpDS(_cF`-&4I5AoCUn4C6xOr0PD27 z10WlzUg0&|*@3wpF3y=n=G#22G*zG z?jdd8WsKe9EiOJ|&*Mu{~m&@ zCKzz1nQ>PI#k1H$eTZ@tTDgzlc?z$yPkme9%_{5gj?Wm&LtwQ4ZY_g6Y3H*mRtCCfM%HN=?(ti2!FP}0O@`c`a@qcNu z9Me~6%S#LYr5=2f*bfFWHCR;n34;K?{cilIEPmNXMY_r7Uig%LO5(? 
zL;5KK7~brZA(^5qNWDjmaN0C^ar0u~(be==7c3@~)50>GFwGpi+m80EtzD}qT4^RC zLE1<>rCkPZF`}twv&mjLt_GRd#?{1Q8x$(c(+)vWc*qDr)hsfOpNy=oqPl{~xXEm5 zVF-93h;Hw2Mi~UjJu2uIIg=1K8@*}^L0T(n6+ssip}tZ{;r<8nuxAy4To+#W#c(C* z?lm$m3^#PGnKH7~R~$qL0v!7=RvwzHzf(qHh!cWnAaRtM|2SgtX_(jkzMK@UdRSuB(z0}ola{LOepd+;H6y}y74J^#t zFebV~I)3hdqq(0-S`KYX?3=m&^8xYDt2rRQT#k-;jc3W!|ZbspY~z440_}I5RV)Lg$3u2 zLYimk4(W0X4juq=&w^BdHu#5g)LyhRKUKtxz+_i&TNcOkESPylU9`zEZJ*4Ea})U} z7J)-F=SF?i0@@5yRDszWg(>!U9fK6l|tIt@QRAccAk-^zZLD=kq5h?^cP<+Ji@n^wS_hFmUu+ciQM1IP1J{%oX-kuBhyC zU)&*#;Ewx|=+`j{Hm9rs+l~o>qXMkEa|{@&aA09~z6LqKO^}U=1MKPd3{9O9+U;XE_KKncbt9BMJy>5w75$5bp++$nmi%~+|Ga12l0%lGh-iIMRg@*1yA8lS6c&>Y7AHu%B zcZuk$A}jV3g2(#!XjN>-EwmS*y@_|VKpn^^Lvu%2@BYm@-%ArHbbi9o6d$r*3!kBM zjMsv}z!V7Fx8;6kEj_voD@+>(Nt`t>&KlcH5Q{=7)U4-_)BkF3R z&I<4rPO;CEa)sXkc*N1GA=D`}4${wdNQ<*|7ol;77j$8CpKlj%ZpNi@4CSQz>Qvs@ z&Rxo=W{@^0002M$NklhhpA)pdt9hhkSboN_TLT{lQ!B zq$U(4KmFvRG|xsxHXj6ytJ0@+#!k0MgE93`UIzuJ=+je!M`7XH0_U}Iy4LBgJV2zW zF?gg3>(d_ih*}K69$0`2;oi@h1@(x&G=pn{yjS79efIaGW{#+|qj(vh_J&q~lgv#O z+_s~J(s2tqQA3cC4Kq+=C-!mtXQj=};+kRpVhKH$?!M%{*`jj&4u%Q;P(Y}VB=$A+GY~bB?#MpWy$S#7~Ro!<{V2!b+^gt&D5wxyPV)4H+l75B~ z=|emwzWwGm)3qBn(uXro)BCee(xO5#>$M6F*96aJ?F4?^jvH%_USy8Fk*^)sI%Bnq zyYD_UtDUy?A*gjT584%2#S_LuUZRFewLF5f%;V2`P>|`}p@PS<+=jBt+>{2Oib#D} zsQaj^4PjN6X<@KM+vHE=yKbP!^hlZ-LgNa`buHQ37eYbLKo#5j7uw} ztnfbc`w;psUK#9Bymqwoir7G5<$X3{aLy{UR0}isV34tY@y@r{I`Sm_R|0ntcR8KN zKZ{%RDF+OaX1cD^Z*ao?f4Kw)Vj*87$uCDuFRIP9S;!YG>x!szjAFmq))JDP7Yv+u~>)_oPGwy-iuhJz5$cQ-beB9IHsxLCg z@p7Ri%4H7}##+&WE5Kg31DwVrWuicUWQZt*iDMoGP)cdmnn-z;DK%pP<$@`fvaL~< z0+X8Vs7ry&%B@-gOk46oMqY?Vvx2ZoS~Kr4Vbzr5L6laPWi&bZgvpSjvNEakZ6I%* zd(8KAfOZ~dL+Yh&tDw*R%ln!7VCZ1#U}|*}i^fsDf{^=~=oxSEKJToqU`@fr%kc3c znY*N#IJ-39KtCurGEelm_4p0AfI$zjdvlf%J$C_cbXX31OptT8HlHJB){sVm-3O zb1MNF3_X3KeXtMFCt)Za1gHYBXe|rBRr!fOpV5D_v*2?#|ZE{ z>B9%OM?q_r5Ux*XpGp*&2jSlam)OGM5rHWJf24gpqaAH9B9pk=efPV65bo+j!$aZb zegDBjgf3j_U{(}l?WcP;4N_DGr~lvo=Z6t&%DuiS47#y$d}fjkI^O`5u9pgu_i-Us zDY41^*?kl=GEXjYrm+07h|MnG!m20)Tnz##X zE{ZB)^b`=m5%wTP64dnDlq0R@npILtkfd|Sr!>L!Lb_}-nWy0W0m{u44u5y9Iwz%< zDmC3>FRiKbS;K?iufgqs=g-1@ZDnmKZGrC!BFI?LW&6h9e-$&fc8NRls0IGbn$@sS~Um|?aci{WU}oAt6}t4RV}+j znipNgi{L88p_8?YteqJO|&6`+W=_ign~{L zdZGX+9jB`y0PRqoP4v80JLc{$w$LH8EHEK>m% zd*!f(?W4TaV>XwhphmsSr#gEC4`}-w`1-Rsf^;#@s?3oH-U4!;&;UI~Ok32qIQukU z3YNHRgV$Qyq0o$ftofK|cY(_u&mz*gW%v)_CFUpXH=6JYR?l3j!jNmHD+T2M59uY{ zVb#wcF4=R4P{aqc1zvd5h+CB9Q8ng21eTyr^if>%cqxb4y}y(m?$?Lp_XrtU&fZtC z2my+=?h^5K5g4x?ah@aZg@D4lWJ@P_Lrshz)G>FB0K3TCU8ekXXzwme%RcmH*YyTi zDj4p77vh}ovH#8pajp*Vg}?6Mo>?mmI$}*zuvaMDqs$0M!?pFJA;nV*S_eF2ehmnv zpsmZX`!5wVS17B)F~%BgeSnz0Dv-k`Id0;GI7Zo_fMj2Fz!-E?G2}Z(l(hw~l&&?VcYPcPh9jI+uo=f`K7HqFp( zExd6A$_B?JVKmuA`MwQ5B;Ei%5-$Uo1L`?qJP2uqQ0dxk>qs&6#R1#qT&7&gY(<$B z`_gHv^TIu$Dm_$K$ltjq_6dB)43U5_`Op`~mUdex<75}3{Ahy}{if4rRU9q*H8%(V z2>XAeByD<>Isg=hiL!#2kz8$SbT1XYef=Ka_ z8f?^E@hwXeSPx-arHDA$FOOf^acOGl^C3rCtOz&xLYHkOA;Gmu56h zVKL+FT(cKkQJq>OnbK$v47;L=b^8fsm5J;$APcX65@-Y?F?oxbAMYw`oS>l(4ko5E zy^;A*XGwcp7$m(q@+)jd*-?=FoQ>AwA<``?%>x zq8Zngd7gY4#h>3IpS9(WwByT6Nh0QA$5(z=s>*-c7*XYDugqqvLX^bQ zc79G$Xpni(GAZ+JTZMj%58qbdy__h||J-zujYjijYkInIm4di?R$zooFB=Iw?WcKS zTqA${iMwZixP^!mK|_oTsNm|+92R5Xg5V=5D=g#WMQ#k{Lm|;~^kjs!;F#Y(xmH>E z+~(W{`XdL!`;Mh&d%BpZNzFH5wB&xoP5yxbX!Fse!59d8U0+YJ=47k1uocdJ0U?^xX1G*iyp_lSSp+?)Ai_lf5m00tFsKn;BQN?h2NcfvK zkA89Q?>E2k7Uvp&H>~aMSN`D-e~3HqGGr=`i42`qM31kzkD$B2p4I>OpMDqtxB77T zvouoyiTtDZ>of0{-(JYv*~ThCd))78BysmlIi?DZc2e5m{yHgBv^$mc#k2Ak&p(++Q2i${b0r-PyF#FPsJtb;@V>?_EA1PeabOX%fJzOe~5sj zB1c6=waR&Q2vCk?e%JA87&FGnx^z2}snar}B8*=_OQa*VI5%C9Qu<5XgLx``o6PH#IPCN>UsyjWOaaUKRe%YfP)b-e 
z_Z_ElWmCzcQ0UQJ7A9;k9?~9!RpHEwTGmYBg8fs-J!FkEk}R);6}FRgqXWyYF52!J z)QY*xzT-^PyvI}bOgaj6@LP+SFfpGq&!ANPZh@`@Ot^L*sAa+JFup2Yg^c3|p3qL0ILmfm zq!TnTBF%0MBFEq7Sy)NAQTKsSlf}gPEM5Bp`Z||(rHu@c+A$y*@IzblL zx60*XqxsL1_ayp&Ps$l3OUz8>l$n!}&J1|Li1Shi(DnlgWf{`T zfUWLd5IGCv%^;9DT5cxRQLZnLaxGcNi{B9DXb(Sxx#JY~<8Q8qP7CmOH{Y?9E>e!a z;|6|_MZ#`6g-I)liB}zRP~-d_d4!z{MnH(NU8rO(EQhs0ummxVTB2XVzYu0%x+KyH z-eIu=Cb>nqUi%p1BY+qa85ZHA2GS=d3JhWGVq& zM<408oKoO}_sx_S)~G6e~}OS|E0z8tNi2MN{{sU ziy!@EdhaLy0M21ugGqG%uX}yP18I@_FHO2G|M`FXA+wcekXuAKrGL8XNDp*(w3fiT z0v5jGy-@!oo__LoCgu0!t8wPzhfa&=}{XBLLSQ^^Syx!d^Z-aeq6{) zn&;5u9puYJwTo3n07?XsBpxTdm5r$ERp#Oyu0!=#x*Q5Aj?ZoryKbG6I5N z0h8a^t9#lhxTGbkoe!R8QrhU3f{twxPVpLFv8@K^x51>hg0P*ywv9}&8UQ-?O*)SD zv1FR#UfEX53M$#0;6-l+UXT}QTuXjeg3g5NG5Yhd%FfxnQ!m=CG+eEn@Nmi=RELW-pO!Nm=RVz z&-8=^&2jM93gb@3qjuW)TP>kgM_uEh=WsRSa15llQk%)>{4j~z16Uh_DHasZMN$(H@$7POHI z+R5z@9_^A3JZ`52f_6$gicfy}L5>Z8u-<~AzT0|%kvOK(O@Y91JfbWunBq=(Mx#xF zV#dDPfPu*Cx=y34%IdA#Pr%cL@`;H}0I?cS#%RIQOC?}pIrL-5{0M1h#!p(vu>ePD?ba-8F;(oMf1^pc@evP|3RTrRlEvbta6w|$}k1+Klg zBB5D+UX-JROZokxJYVFVb(HC=mQ|MX#lOF{0;nL0N%InzgWlv&uY?sO#>k9JM!quB zDg(mgX0cABnZU zy%V_^<0gn|SivboCSl)kriD^PK`l#|i85~$Wm!ak$D$fizGtg*5+Wvvu5y;+nV;qR zP+S%`ENTj|)GuWG?{`>;XfcgBtJkv1vIDNMn2=kP^P%sOK?Y4sQbTK-goALj_8e$Q zd~zhXCm>@yZLOL4Zz29tXl5}HR-Q8|X%|*5_~tkh`uWi~9=Vm4?HqY=3Pz5GnMF)B z*&hGdhA5vW&I}W`E+n6uf+6Pn^7DAeAA9Y@@wxrdCG8lEy{o18Cc%_RVjU)1?pHA5 zPPSrxlyya;$e(8x@D(vgwpm=!1z8ve&hb(Rh>VfUtw#t5)(9vO?@$i+^1qB@89Q>B zUi(a(b>Y=8CcJIS2`cblTp4$N`$m2%_miI&Qn*&3>7iUY~XY&*iOkG6Rxt9W8i zEP0>U2L(gPk(Pl+!JJ~*h{`OP%rV})W1qa1S(JHIRcFd$D#m1sTLfn#lW>Z6ZFrGa zEK#U-eu(!z@@z>f)LLT6+3SSMNE8w&=Q(Q_FDgh2GXY{sPU zXb$^AL)iaoKsTK3U_wpeh)TqmCUJ+~jEVI|^TZS93I*5)iZAN4Kod8(v>d;kkTH@0 z2AR{d%-K#z%qPYgLI_Q#>Ov;~ec!i^#i~fySET-dF^KE9@V> zLqL%$So4g=87mF>Y=`(Rz6I=ZQ=;pO|@;JI#*&QtNmvIFSmBe|sIdEq$HHa>0KRG`vQE8{I} zWae!Xm!z!vjQPh6Z^gN(&}>*%k`PTmSTQbHqZSspXCTUKejEdedc{WrYIjI@5I8a$M%31guU;n$lx;L zEZ^co9%GA*KzBt&ZxV1&+;=bLS8x{&NQZ2>$zqB#A%;ewzEzO zS&oJ8SY$lsf~@hwud)uu&l32`A0E1%TbBO=7Ceh}h~MI}bvfpaj}}DpbU;@;ibhvv z;S#Q{;8dNUjS)?^peME^3eFdCO)*iP^%XP6JH@j+7fKLVZ@g;f0d!eR{`PN?KVCS0 z&Vy)IJog7#eQGgU$K@MKT#LLQl!_h1;>`ZqHv3GU0^iA`AgD6kiJR-USRq-Ys3Urt zfon%eda%$+%Hn)b@6$uaTjlWJ$?ytCrcyfFZLW{RL{pm zcj?I(52DBT_{WKniH6fkMq3R`DnwQ5i9VXbB_$~8lpXBEOU5t^pEW_nM z;^tzipyzj_qI4?6{f^saJ!af;dwf7vGmtF8D~Xvu zY?sL zAAsgMm~5Z&*m)MYcqe+n4^ujrsYMf8&zcY`L&)b>7oZtI=w$a>33hND1i+ z3wwU~{BOXzp%y9%H~BIL zX05T!obNhq)eSvp4^O3Uk-_}5Aq(XG;2k>U$kWpJTh?oN1vTJX?4TTP#x)ld(g~pm4kDRS+ zfxpgPjJ(m=$nQ_zv#tIU6S)@mc`icbKG1I4${ht<@kreDw@hx(d!9?bPi?bhtJI=T z@V8NNo&9JvBHy}YpoK&3V_YqZLSo;nWziRhKZcXv(QJQPpS`n5T%UjZos&&2QyzJL z2g*=DM&TlJlRpLR`*GjoQpvG}7PYy(s1de~Ragz`2ym@*7jEv5hxA=eqIr z=v#54Tstp6kK(x52)56u;)Rbdm#bpKBu}WTZNS)j%uS6%htN8M#TnqQO2kavk3Rz} zQPywlPc+0HHTLRq?d7wt3c0e}vOItPs>&?Oe)+ew_-mIMLH+X1yX=$$(MTe^c6lX5 zK``jLsd^n0iwhV%|CL&E1&l)6GE?zsY=zg98-gUy8z1q+SG*O(fQw+UL{cn$b!`TXM9-`o!8V&H$g z2jr zT_~me@v@4!AAK(RPj)S9!Xw&DC+%l=6mZjf;-{qrqwWQH9VcBt9kEQAyoYxKIrSXz z9W~Ypu898px|p1M-VR|U;O31>iFO8J6t}Zp9lVLG1v?CW%8+}%OlixMV2wHd+K@onHFw5U3a^)<0d)H`J59^y_Q)eP* z4Gf`{4%&w^-J?|sOYc&Tu!*`Z^~}|$k;p#bx z;jzJc?)%Dal;Dle;*;M~iQqYu&WH11v(8uvqgfgA92xI7r0A2FO@Z-=9v0W;ZG6YR zn9OS`zw@tGfB!KckjFXK>{yk_G42f)rRUT!WBmK(?{GGTTBL*mhwzNSS5XJ%+T&>>*Hq2W(%!B(a&kyn3`ZNyw zV}j#z#`4$Fw6e|;Uw zvOM#b$wksdm1i${rN}1Jc8s=0hN(iG;l@V(VEV`t-~5)n2qvB8hqcer<9;s4c4uak z=R7z?b?lOw7j_%K%=Ig#aY^X>&9z)dbiqF`Z;`KP`z0cm*%aT47rgjXiuWlaTrzx0 z|Js5}ESL}47aBo3N75oBsljMKTx5)zJ+Dh3+0Fux=U2AEK9-i3{5}~YMK{VH zese?E6_@dyOUNbL?uxMEi4#c&;&L3KWZ&mLSzb!}W&W%}SH#>h^26LoIDOf&ec5OJ 
zw@!}`@Sg`+8?ZoL)99DJVzSPcrg)ec#9;ExNcje}vK^;d$jM`Ud0b#jqaFT{4JA{2 z%poKjlPs+kX7d#3>{9GvT&r!c`P8_$rzjWsb$>J zeYS+BZBh8l7EqMIwux6A2WhDE(Y_chx56iT&X}X2jqy`V6Hq2^czyocJLg?6uUODH zGL6?{zL3gtF5G;UQ!SUo-4|f$>JUl zWxmQXY@uz}g6KqrFNx#=8!K)Uz{4mtKl}9GTx;&Zr=R-vXQe2e>fDah0!Pt8W3)P1 zwMQ@Yp)FLnLpQc$}4Sn)GDDX{&U}g%U=6$#8=1fJjOuaUzB`_LbH6Q=A^fI7^I1&(awcYMx>Vd}{){pjb6jXv!w=i= z8EtMX;>FhuWfd)balS8cR~)i<03p7*udW54>xS%Y4^sD`!CQ>BtddNO6W5y&>OJR8 z!CqlT`=V}=dL*4o=-h0%nt z&^x%rtF7$45&_)z>n*kA;zJAGCV|66zL*cpSD)oupPmU;@uGRi?MSwUCVXZxPnk-g zEB`JUQarRB!sc>)h8$2~U?ZIudr7@8Sq?A6>vYB|uOplbc;U5c zfx@SOXgr!lD|-u(difJ=6nng4_*RoH-S&)DkA_kBJktG^=hBrR z=D$^)_}y>J*IEFLHRKD9akD4Jw#oEG1wxswUbRoFY>}&luVlUgUw;GwKM73Vb=3OzwTAblQRyW zrc4<~-*Y)WDu#9nlBrhvf(1qll={zH!e_^tGWZ87|!y)y{Xo%(de_Y_vrq$?eeZ4UYDZX;hmh zi#OQLVbQ$Ph5vC@cpRh+!Z~OSRu~h0_VIE(QKjXOFC=AhHBqpZCN0aLP`z9|SX z&&BBovPHlM&js!3>V|)-k^>^{p0~)&FUq`JpO)wsCChtRa*@cRb1}ySYbuW{!IpY( zdY3<7L)ejd%bQ2jTwgT*=a=}a$dPGEE=jmJ0|Jl1-@s$;X+f?*k+um)=BzOA zh)NYBoeprmT@$cu;|#h(4rHu>>qeFJ5b|#J8=n&WcbDk78wi0_aOtEFLIp!flXd~``N3(>$Dk4O{JAlJw zIq$R5DAN`jm&He~sD<(+L-sq!M4tHZ%+ZM?KuBb}@bLwMAq^63UaPybx2IvYIbr?ndcwMLiebXpZa_<%KY}P3pz4(+^bBIZkqejA zuU#tfQh#t6s8~#e!sS(^$Y>rvkq{PKP}P7Me4^6HG6;j>A@uwEhR^}6U19l#CuSDt zD5`EY?;0Y8=R58PQrPN4m>lM)gAKXwWHs@%-f}O zWN9j(wj9Mr|+}&6SQgv+si~%K9z0d8-WQn$|mOICGhZ{W7dvKS{Ln6a5S1?9VXjz&LnM) zdWJ{QNnSm7Ln}MZH~-s!`9$APx;ZhZ;t&Kh!dv2*ptzu&e>wryYOp!(0cyY=Wh0D+K zn0WW>&FCL>QodpPhl$X6WoS5U?^M$cIJ~RRAt@O8vo;iC!Ec-4*p)5zJ5N_ewYb^L z{EKLWuYCd0X?q@h6D8m)-A;{z);U(TB`z44a|Uv68` zJ@T5EVcb{1i?J>4#aCHh{D`d4Cu^`D7t1L#`CYzPem6ez`n;W&pZMGMnEVQy^D1Z( zV*yQ>7#>Nz9bKsv%hPk_?$h;Uf;@2GIGW}YjuW^tGM;XXa?m+>KP713mX5}n#NCt^H~lbMZa=e}gRg&{5iol&Odm zGq-f*T7~A>M+!tikgMlf zpvJ;yeG!@~pkrerfSg(4ju$|ewz=e--8LT>PbuJVRuf8R^!^nHTH5OXCA0 zw7rs+*SFKg4(g75##AMj^oup_kmJ$%$H&v?)zP#~bl=wQf#jyUP0iu0Rll!oZ3utXLsg}@|5+r_v6SMCvm}MyDqB_ zi)Gt9uhOp_0$)o3G(%N(Ad673wWYWYMli3Y8F9%hGJB4Zw`J4!INV)}XJ>yK2euET z2?T?uPiNBn;u_)jafW1}RH=O0oVEWh&nod*+9@H6py-mrlB3qR3?{=v7HJp8tdW*! 
zZ9=$lO+DeTY|qPL2}Wykk_Njv()eJ1Fm*?}`9oRv z?;?GS^|Ut)b#)O1tcq}Ukd}|uu)b*(g-M~le5{8pGG%~7*#g7x8GnU{Mw@(Y z|IFfK&TaA96w^MgWa@OMjaJe*pWDu?)kecjyKYZSr4GJ#IZVEeV2Q~j!t;;9DTfBIx>+ZLI)P$*FL2%7xS-hp(`bHdqz8|le1N)9O^`E(m=#ZR`YyOJh{N7K;QVA`4k zj*s8ZzymLAaUKQhwbclwbj5%kb~j-#$e81vagN}r*W(Vb8E|KdFBa<4D77< zB)7u&MWT!f*4Q2t9lZ!){o9@C=wJ^#QP|Ebv+%L6)OaDqXw!sX+j*!EC1#tigy)49 ziXV;Tl)il>G=|Wnor^+ac}Fdl++T zq6x)Qz9QwEUfO2gHLrJz-_daI6g0%F4axRK>EMRJZ7N|M%&v@3;VDDhMiN&mb zaG0LYJPzTj!t)4-6WB)yfGo;mFxoipPVa%B2Tm*p$b>K}Y%_fG?<`~y{DQIe{fMyZ zxboY!i<>G9rBh{cbi`5X^xHOP_}hl_Bow?+;3UJmSLap3Z$<%=& zs|#37j*jxXDb1rS*h3lRyD=WLv%*J9!Z6=AD_tnc@VeRBO^dYmka0NXp~oKN`oZLw zd$tJS5qRVk$@VZV{K|9CULXS6rozj>b4g9^^*nWKihmJpy4V)reAhMJbDJyV8$b~` zh!>D^=jrNN+Nf_8F8EDpmvVQYRa4Y^eQY#M-MgE%q0cMeTrhrXUbfN^ADGP(Gu9r33)82d`VQW@RyY$|z8dA(o`p+K!zxjLCCBP?Dw zvA(^A^{T05G1bxV%Ng&&(H!G?7Wr7@bZd-5sROgstoz%VSx|Xaf#J6`!BjDG42kdA ziF=gefzU3--m}gg>g<7mn8NZlg-hm<^>z28Lxf0|`*zwMViOFs?RMfR1|L#i9f5If zaVc%nnPIfWL)Tygx!-2}oaxuvQHeb=opAgfr=ieekQW&cTX9IccWIB7<1Sc}*o(gQ$#pDXdwd^kRnsPQ=?W;EDm5&gxMdyfY^7H6wqfz} zEIwT<)PDELo6!c3sBr(|Q8TW-Frl(|h#rxrV%UD($&@RzBzphGxpi1*x_LIt9;9pC zeGEHph}%1q2gDgG;Uukdg7v^KwZVx;WKez|({HUSM~tsR-!br6Mc{0I#<;^Y^$rce zVA&VoMdj`h^>F$5=d-DE=OA5Y+?)D2dk`VKh5NeaiaXIi`W_a2zHPWiuBa?2nNh8k z{aEgg82^d(?sR9MKeg=cq{lF;hvaJq_Ts%|ROnmmx1wK1%}c3WtxC1VJkbho`}iiW zF2tS_C);h(VsT7ahk#C4G~wFl0o-lO5Ah)frr`$HVoqGxPcF+KBEnAB%R{VBb#O7T znsT*zc7t=>4BBwpahgH_aBUc2e03SzMKA>xDpK;9mCkG6;hc(p{^puF2?UvUwmBzT zp>;?G*!_R-yx=~#*x8w`_4YCkQ8*!(w9}GhaMb;|u-`Dyw2AiE?igSCY+J<g217c)X`*)sEETIQq{sDQ)Kw zect50;*9~kgoEu7ynd-vav{d)0l|S>zAtL~jQUK{3X^}GBaUeYu-7Xg!zqtL%okyV zr$Gfply&uycM6yHn|LmK_#!#v6l8!-Qo;3lm?4i zYrqOM5Q5MqbM>Q-KTKCE-DwQPNIw?IF6wAwu6QI1F<9|9pqaW zcK>L%^S11GbV!Kjxg=d}!RqgR#!%{5E_A1zwoTz`{aXJ(?0b9)UTttb93&#njdU-t z%!+$c6{sDT6MXP%)TC{;U7Ui(NI8tEdkoxit>ADdyRyZ;nK7aZfftUJM+AVdv5i2y zu|JMi9t&}o@gNJe#TwAc`yOQPJTPLgW9lc`LoV^ob3ReMr8Zpkjo7Qusb%xP_EVq7 zOI4r?ea!Dc1hE^v&;Z(ay0gYQBF<%eK48$9V$_FHfCJ=vh>F|FzcyU)J1Ga9C$-RT z#?N4E6ye}RTb+L@L=Pzc0NAzw??Km1`$HQQ?+k3@o^AW<@h*G;YgR8WOtd{dmPID< z&eFFzbN0#8r>PB^-^m&_)Ypq;y+6Eo>YjN^Lu&iGtO>{I>B4dvyE&aEr>>>du_tML zVTpA**J)Y8)du9z%Jd+rr;R~7a!bgU;p|V_C(NBgUn&)^39he{igIk5pD&$$SIZF~ zzhYRw8nc&R*>4#RsS}fYK&FF+iU-aS*Ss=)r8!d;@{;cY94~qL*|+1}uTsEg7yn!J z^lK=9y6E^XGm0)iF5WUMt|v0ny$E*`SfX?%>w{t1MPs^!3(q!giVK@-X$V2+dRKpX zV`wzJd37p{clYByc_YpBUJ1?c->uK5#~al&2-A9x#eHIEBsIhAE65$}V5!^ONOQO{ zu52HqK?JL}u;TS1$evQJ!bTUu>&EtG`VfZfi3>EJUKpTnPE4lh-m%or>Nk#caIm*8 z&F>zh1sEQg#BPL#E2HCSeEe$cUrSg<>-f6Q;R^R5)}bBxxubR&CT5GWPwAII8Jb}X zA~;BQb7;gna3!AV8%)p@&Y*Ac$A>cFA42_x8^ z-k7+S#yWekfOe!sgr0+wWn6f11BB7*!ZmApXe{0CN9%vU=%2K){}Jtfbp<#rGQPNS z?$uLoTPNjAnr>x8D{t>sM&&c)Hd**2Jlm)CQ?k{dan40JrOn2LRQ+;PQ z^-h7G?6q@&SYB92E6Ym=%BSIWJ>Cy9EISC}x_!K#W|=Q5;GM@pbkn!#!NK&}=tRsL zqpc2iW1XeL9Z14$e9P!6ENNBFG#q$vLu;l`aDQ+!{TuJ^^cy-q^&L(d63}x;6S;>-`bN_=;Q2Nd^@;Y*v@$f9wt@8~8lj!_y}->$1k-mWZ=^e&18Iu6 zJ->e#bZQHtxUsr`V3eQ*&?e|c&o0V4aJ!ok>|}m~q5~yV6N1ACgL#d&G6f6GYO}r0)^$_iT?Z!>4_Pr=yP;@b#dxv{zX=^h*!mXW-rzXbe8g%*kz-Ss^ zF0><*4Um73jgZUW!3bkyWavTqb*-Z>jnc0Qb46il=V(7|f#WN+l>-H6Lpq(4g`45(!r@cDXxW4{$3jtRb(j#2xSHYiI#&;h2T!*H1;cZZd&g_Gq z9wkK42s&B0pjSdWz3b)XHQZs}x-yZ*s7K*)ZD%_z?`)(6){IT&ob9}gtMeNJSJKhu zc4{5yjy0@C8&-kA3QDAd)3!9y-j%*Fel1M_&ki&n*TK!D{VH^Tc}L%tXkQ(R;dFa{ zdW*Sm3wo>cwN8C)C?qolC5P zo~wJrx@v%-kps>scC12=1?SLqc5UP0iu0s_uj`T3Z>{E#O2 zC(q`KO_UxuljB)ry!iZ={q>cp!mpqWzigZTvGRWn1<(v?m-oEKhi|Y{t){6BChrC= zj^Da|GY!Gy)X=i-AxO5uFtozxtnY6Vr4`1D#qlay`w38Yggr|{%Lyxmf{=n;6Aa!! 
zM_Za^AL9hpyxRV8>Ss}{Ah@infk!jD-*kwe)tUZqaync&SFo^XDZBzBvJcaH^!!;` zSziri^))Pi-@ADy^`BI*fWoZ9?04ZtyGwgpU<7qvy^4@kz)u_Zy`8KFX|3Z8E@X-TIE%o#|`x^gn`gWRNKTZwCr-~_T z7pAzq1txo8F&!dYn6xnTAo!1C(VWDhde+NX_foXxe1P4;;$3mkOp+9h_HEjXmR^WPy zz)?j441V}{#Bms!S1fXgzg={WLY+F?4`eBxa#$he~sT=AdE^3 zDSHo}*MZZw!QUbJl~7!Cz+6v)f6caa=~>!>nXVuxT?6<3aQtREfw4V;A>IL=5or`5 zw;92$n>O9U?Rp$xc)yApU{`xO>TF5<8(rza{4-?T{c!*L#*G_kVgQc=g2y;lJB1gx zd;*>{G57o(<`iJz98~2AYR6)v8=wqj2l)5qwQK30T)UMH)_2lMZHsfj+5ivpojr+1 zFy6f#1bt#$Mot>+bE%g=LuuR%#R=_qs+OJgQK)d z*+zxE1}*s}g4`fDTSJksi`A`(xm<&;yg&Phu{#Aued#vB1@ZZ?VqsB)4jr@Bu z*yFYj1AdqJ{3c3~KE}idz=H^dBkj%U+1g6_80CmI{2K@g-x|4!i~b0L4^}y>k6P)s z`tgOJoO$NeK5f-H9j>4OCm3vm%wq%#Jvb_qeUrdb-$Bs1L;u_t*$%DRhqg^f$7t`& zHd_wBoi6%}sFy|&(X5dEL z>(|9`zwYOMlxL}{=9m@@h%%OB+R5-_+%Yfkyl9yrEst3w#Rzx7uQV6)+A&@SNGpGO zrBE3Isw4fYKx7zO(vFGoq>2{6SUr{C`2PpUbiDN6|Y zJ&y2dfw8JFa(r?En2Xsx1W~u<<}JneNFx;(L--Npov1brZIF4OPQ{l1rN5k6vT5e8 z20*HgpiSi+A}suJdlTVT*&4i$i_q$y%rDvy^F=j^|LcCi+6uM=uuW6P7y#{qnR)va z^sF;??E;89^j)!?wS&^J6DCFQlU1FQT8u;>%9nOnZXfEaEPXO5!5G@%zhHezk+-z$ zbF}@iGHjW(r}h|sP0{=sf{MSoea9My1zSak(*c!Kvk=6^0McpNJV=|Sc(>!ctB(%%nXlWfujH?H2(C_~A?DfoNF86_T|25Ub29HPAk4do&?DtE z-&kw#-V8{Hp?#e%BxJ zR2e#47wgMeX8QGN#}7CkH9)j{+jhNeJ|b&K+PA4se?C9fKD(@Ty=|X`@5bBY!*qYz z@5;xGfp8$`!pja3;w_H;`oBdsKbc3Fg@2ldWSxO5Q>Ix0W@7whO4YF&NU}ejU2p+@ ze)s4hflayq3$p^Nl(c+Zm>Bu`?I3BNM{|jlXWYh^^6at2&w?XfCiz(A`B{lAwV_^1G2YMv=aWPlW6-!GdFA+#%f`7%ntXr z&@`a;07MArYB83Cy#h)xrENX7JMjG|3m|8P7=jGY8WnXV;c&Vkl#cJ*4w{dsYWjq-o1Nz-=86 zadKR+6Bw2R4B&!V+S<Zo z+OZ9on0&M9%qv$4`mI9)a;0#kWfm{=l>~v4QO8N)L<+jw-{O4oEKvTgD(x+-toRYJ>IdWjiS?% zRna6g*~ERHx)t`TfMFQ*M9x0`XxZ)o-gjVOBt^#D9{m-A zVO|ob&!W{q1F^olW}7@$0(@$W%R9DEE1<2$v=P(95lnuCQ`#|1mG!=0>UI-h!KY{u zmnC3<@!#LtvT?MkLCy)k+lW)tBLnvfV8 zksN{zfyWq(y8I!4|;8F!}0*PVhP~_z@mWWx++Zy`5jY_ zBIckiz-6gka${_@QL=qdR|!oKuOHgM8JnZM&HRb&AS^2LE=~e;?x0;OF}7EUP#ps} zj%Cv}Jqj}ou+C#0vA(x&t01JU6loa2WmDXsz$g0??r26SVd5Ds=)tTt}01P%qdnpK`No+xY`a zVy0amBN{R9;5ORGXPO_;Bo?|Vi}wzCJ>A_R2--Ek7HLXH_6V)lR`nPY4f1_!zk zE#Zcke_4ab#8}$+24ijjt?vwixETTk$scm-7$<2=L#Jo3`oJpS`pJw{4*}QAO$TV8 zQ;e;~K2{M7-08(RTOdsn5cuS%Kv12WeTeX2fVLg+eiqS|WdToF8D}Fr8sPmNVUD~( zTW9b&$41^NXvaHf;Kmo23z^IJu!M-eT4(vSDHY;AYb3Q4(Z&=1VJDV`@35FmSi@eFX+d(Au8U|y7PErQK)6d{@f zme=UtLB{Id`WF3wkZWQTKt6%cu4u)=0j9A{OCwYpNGGhsSp8=8fn{jJ$BPTLJT`)P z^Z|meL#tB1UDlVekrbNUj-%%q*SAXuS{WBL)-qWp>`}&M^uVrR2{MDZYYHu06j9o-OTQMW$1=f+mU-W+Jb#~eAcqif8sW=SI*Z2l7)y(SjX@7xYohfZ zLJ)HTD6jK5Lc<=TU!{dlQs0?{dHdN1H|_5acA2Y6PBYnLPH6)mweNEj+H`n-G&8lN zDF<98_#oC`t;2PNI!ME@(ERHzxof@G!d)e6caXf7 zKKT7Qf3;MH?;5YGo4bQS|2nylhw=$3t?E0PeBb$qD5{gAkLpIf-sErqqW8RO@}~KUO3bRTe6w_`?8-CLl(OP6H-F(PE|DY{8o~U<7tbB^!dFpBc|tayVvBtNZo^ zrnS#t9!>x~NkzYfsi`Bq?-D9L3u6*vCQHBor-ySkT+G2F9BKJ;+Q>2t{gb^-s{nxF z)e#$>8@F^0O(E6FP|h@fL4JWNJDX@_3s#*Rw)PNw8VWfMsDlk}~$q*ctw)&PHB z?;cqWMt^)h3wV(x00xHkp?knAt_)y>A>o=dlz9O0F@0Z$nQEfF+yS6yTZw5H+bT@_ zqmv!`I=^A7-73r-0fR6Rls0@5ur<_1W13aui)kj;cUEi{2H(0c+pu6Vn=Jwcj~M$UG-?fKiM4Vt zR$z>iwBZ)fOg}^ul|vg}U@X*0Q)UchFc+03NCBEgFhkBT!{?}DjB!+khL1O%*dMo^ zIT{TCB0s=9d14%M>LIj2(7ZvTtBmDw2qLC^8Vyett=lXbmI$=}a)bvWQDP)+?R2nFiA$9nIjEtEfQECcwYyiMNeJ^Gbt z0P}393&5KJ$W8%5c&ls`!1c}1j+ICcv_Vh>7+ajDy=nX6>#yyLEuy)?_DZpcUT3hx+}(zaJVv@h2m+1G1ZnAdhJ0|+WYvn2k5CxF~RX@jK(L{O7t91b$} z#+Xxv0PbT5L}a4O;I(I)&+Ho*^25p$npN7$IEX{vER6RuG`L3J9OsV!ZRins=aP)4 zG5SM(zo{X#`P4l{e=Ep~;~qFGfQU=PM z&#n;+Zf)feCfj8MXPY)n{d0i4G}`Di@7N&YbP>Q)Ky!Y5Y!C|x1YOLRUv51`$X0QI z$!-kFw;kBWypd&;EX_pR1n(ulYlt~~Fl`BFAWH!mStye15uo}H_rI{m zCq(kaMSL(tbiGap5xulI*}CEu>tQ$XWw#@^)I z9CHJWWn9aT6p;$=dd`M;=XMZQ{%L&;)A2ovEi@5O5&Q?Bo+cl`!8U7Qm6zbr-iCb! 
zc)pGXdUD{Vi{?JTTGeE{6d2puVocldsPDU6<7=P^WejLelLnL56WSm|Wqr`O%1Emt z9BZ>?OE?iBt^8%R;JLk%8ABNag>l9}x5)g$`X#?(K~A|X2sNoZX*zlB5Y{1#RloAy z=x{DND!g})-J2ajcJ5gL(KpS;q2#{0{Alt~o5IF$?ju*PwXbpNd-&35eOY~WS@}St zkX)}pnILIDecmP?<<_f&-itS`PhURDq_28GeP8GAD?Y5cvfTLNd!oQk4uHaMI!0a$ zj~IH5K53I=Qs)AfV1~4ff6qe}(kZr|xOL~YO`=(mb|ZRK5~NgV(KD ziA7LA>4;B-nNG}te9-Y*C15gzqbu4E#gUL}Nf{zyqc3jNaaE6fczo)^*Ji04$pwKpHCJ7?}EO+y!x!9g@KgZS0Osp~erS?g@| zk-(IA3~ksUrec$raw&46EIQkOt0KVskUAeVOBMz6$y_x{opS;~NVQ#nBI=eRY<_fM z0tO6VN8R!~BlE!YFuq(c07Ll1Eugi@P{&o6%PmZ@PGHQUFt#)J*iO?~iby&n<_X2m z3=;oAn5_t!7j4PXr7bc=mBc`rwGvW$ZQUSHtr8`8kSQwo&q?)5)O)X#r<@a;Y+(uq zgVjb$6Qxdy+*yDL-G*k@MknpY-8-1j*6jY~20m42OX~D9fU$vwphW&Z9{yN~6-EV8#bDvfsy!6JAhvW9h7QJB`Ym^evErV9NNhJ{W7P5K z2JJvgQDFSkJ1M6zDlzt|j5pX$dPmG11QQdi=V9<1GbYZ=0bb?Y1yBP95Hxh9Z2<&L zA$(8-(J}z?2+i>TTEA>L>C7~xaW5gDS>J#`MN4vmR!m!_C?L+w9Hzklnl2isZXK2Wo zTH-Vv(lUu=`!L539zAfvm^nb$*mcY%qf0gh;EHTM!-tIh>IIbXVUww|B5BrOXqyNH zYS4jE#dbx!%Z#d&-c103=8Z$%$%7hW2*$m_n2?$NAmwXY8<}!y8LRo`fv=&SEdj)a7H2Vc2KZ3diU1++Vx*I{6#jZ+2!RxkM?#9=5J-kbb+nmS z7RZcIL32u_6WcmM+sN1}kFu{Ki4#00s|nybV;9Z2-ZJPbVOW88AkVv7V3Lq<%k@3) zXC5g^1v*rDUrJ~}kLbH&Xd43*XexF=oS>J4Vrq93y2{U2Cc81sA(SaFD#1$u;gkeI zQT#hg&_~xtnL9E}X=9w{FbSOj?2a&B4MVdbEGIN(Q3)dm0r-=b#TQ~j2l1*X%%E9%d-Pmd8B92mLl}i-lg~>*e>%IMz%J9^-GpI2;jD$^8wn$?e%pl z;1hg+FKd~1x6WLWPDh=1kkp8D951eJfFedq<%xN?Dv`=ACAQOB6 z+L+#X>Z`eS2f|Y|tQqiE#)=`2JIHPWmk`=iV?L9%ZvuJ*k+#A3ur38(e2t5wzi&5sF3R?d%>skpZYmIP;3H3OGN%=U7`KbG;2=M9H~HfGQp(^^b`{x zG;)!vxfX>;3f`tP0;|#`s~vJ@u!;M!{^^oFPMe%|lzjBhJ+Ftt zgJDE5fd(>sb;5i8>?z_Sw)A;W!*f-CfT6g0y0eOlrO)TOtBJ!n@sgfd^CSkZ^ ztdT81rOaH%@VlD96t7k+*|VoRFv(Gy8cW;I)C3D0%Nq5F!PKQ#eCq%J`Dk{L@V5m_ zVOq7|$`3FHxEWy)J~=vJi-V)UPYs5pOA(P@WPy2+=1fd@9cEa*k zmFAOz(a9olFG#Dz5?z5A%fbMWA<}UC*l1T8CO8Q*^>F=}J$m*CfCgA6UDJg#l%^qO zMKPQ(`%7qSCNZz8!H}%Mz!YIpr4`K3QUBtjPYG}`Yf}IgF?%tX$s9g>hcG3QTQ@&Z+?{p0RyChm?;9zaupk3W&H0F88k=I6<$k>9p| z{Oxa@pP}9-^^x|2j3X4djUZ(X4P{sRAyEef3(HDl%lk*eTuTqgcU=tk<~nU+e^>xe z#4((rgVHcosnY?b(tCxh6P8@3-S=h}?bEqsw0cAzesbS__uwmNj1URNxr3HifI~sJ zG)N>=P!!SzCW0-j0va&naZC_1%pI{j!BrSzklvLmp+V3Cf=raoZ|&8q2+1b^vjXi` zpp{9?81LdAx;Q+Jb|20B7Cx^q>C=G3dyCfr3T@02D>lNrq3Er25ATMAE&wTjxwKgW z%mZ9@+QSO(?)Lt!{o#u*?6)Ty_!y)4!NQ`$yIsYPQJT*97T0K36hP(b%bzH9(s%85 zI>LK=Yx$02=4I}yIa7g?8uYK)T!7XJG9zmXSbpX}x`B2{V;7NN2y;|LQ&r?zMUs{V zDM9~B=q2A(ZC9n=5*)d?P&A`yHGohVVBVm7&3`hZO;Yy?@4fS>Brl;3ZO0<*Rhzmr znLi}R>)`J>2spa7blpDu;GR9k%=P~L#|W%SwuEr&#`WtiD3GEKH>3$;?3Wo^hl~kD zlh(e#4eBFn6G)aX@n|HxPi}Ul<^wZ+ylUE0`_Wv;v=-d$IvB3-LuSj zlhE?h*=yvVMuSaIsb?#O*wThR{)G4f2-SGEG&gDP)A$g`bmqw#f!tFBbwx8yATX#@ z0o1g)Oc_ z1+YYHW^M^n%OQLAw|{RHq910^fXe5NO1Ug${#pmv)X~Ceb9XU*IVANX_{|NItVx>` z@S{YT5wvJ?_@CX%&!d&95xEl6+Z0+V7XK3fUKzKYo^0a~E@Kf)M3gc;re*k~)1QA|3sFks{OEh=zD7YY0b zX2l0>=nZpJDVObpPaP((PTgDd70^PrTpQqOV3zp6BuU!8wF;PdwvT2ZZNL2Ie??%I zNt?lUa|;cYgb-C2FEJ@;+N13sGH6HBe83i%>EV+}`4jj!eFi|=SB!f}=f zh*QDLSrKssBqVSUNNMn_dWT9nt(D~d#0GKZy)~G&CQO^m3Ztm#6(M*SZMT}#CZA5J z3e$-x+8}@gX0pn7P650+nC41@D+ccffGh@20+6Qs0AbiWG`@kBCI-_dz=RPx?~61N z0%B-$#nZX(dXMN67~~SP$N-=f!85;^cUt8bey1%2GTJ)ggu0eVGaMPV+vBsCd_QA5 z83KWgU_J#L-jxM@AUm0$qBqHnujb^yC>D+lTfS zfAJSKv%JVVH34{qu_aBLKCMw#1)v%PM2rENlF&)!lQNAK;K&2c?hxg9l8B_E-6VpY z7^bqL*2Sc<$Y7F?3yJ~5Xg{B>-Z5yPvjS2H43-#YLu|b>f(iQZ%8KnjUIl0yAY~B~ zU3~kF%hn-3%{O{~G=8S1=B#o=bZ8O4`7@#& z!(L@pecP;SuE*`WrsE3gm2TF|k}cvm#vHg%kto43&?57DqNT zXy$@QjG+i)xl8+XU2PVFulbCV6Ld+yLScO80%y&m(mv_kgA8t5C=gMN`e_Rg&7b3# zcM@X9O3YQ#Flr2?m@8%&_xsGPZTu4Ecqa!DbZs5uKMNomk_MPL=ZNiKcAu=;6u(D! 
z-?Gf(sNvDPD;hWBj(Uzl^W~9Qz*CuEPC4dRgpdHv3S(Sb?a10dV7JBIm&ebxm~#kD zb^E$A3so~BShnb$=9S{F+JbOGG-M0~{j4@hz$fiKzOOq^t@_OqtIn<4{O#LlJV$IB zbMY$U`-I>;wHR(5n)_5P<4(e}82u_&fI;S}VT6d7 zRu>R1fRCZvdpmFF^VjRYKp9RXL0o7ih zEF8|aJ;}T^imCjxOv)*tP#}7*#s+PO<1&&MA0kTdiEZpK|I+3G24s`5Dq-Uc)OA{<+EVN>*QZR=J{I8_hWEVu3f6O4^p3*SdH5N^Q z(vRkDu|y5q3ce-x35j1ts$RteE6Ke%0=4WDglOg(+h2^2!Qi8bVo_%u<-@{E9rQ^E z%V26cfMzg?Xjh9en9TApN2{Bg_R+Ol_UpT!IP<>6(H#3_=A8LiRGC7{dlaMLE`t3_KCEwu#WbqKRrgul`LcQyK? z4CAW+896}fjT?8Yn}>5 zV+8D>zuK6+e!O(s#xUdFWb2ei1eF;ln9rSS%l5Bf0v~_&B^n2OxRz!J)Dy99@bxNU z&e;IeYU~MwD(Y$s8cVYytw)?Pt1y~HgcTp$nzg_F%pAcr{+kBlHkFNHZ=KMfny2 zj4?oJ^Hq$)6=_fIp7BJ{?ge}?yMjJZKafm_>XI0dkC z0H?*NMFccxgwkU$$yg=O4<)_3xCV^p@g1gq_ilb@iwGTN3CNR01Gn2@TO~BqRT%y) z1Tyn5?Mw5E_RpY;#;C?blz>H_9d5Bj5T=w2dYPzeg3%U3R17MShOgRos*Cl+6U=;< z(8^9RkNmR_KXDO`XU7RbCLxX@eIq3a=Kd_WsMT)f8k!NePjhGPQ$D9UYlqRM`wvzEJ# zMvTdAa&q2jD(_QYHguuxWwhmGn`Qfuxi~BY9d_f@R4>=<_~VK zPJB4!6RELV$24`6F`zyhB(T*r0{ATA*729i_b}-$+8FaxjWuAM`E4Hoi@>8Kjv3JN zUw`-uI{=7hG5MVtp2Bw&UuS%J70J7fkZy?gdw_{-248j-p*Z=Vp@D%VA@D#Wl3jE{%sFo4&L5kw% zKqw2E7$rdwE%pOkpM)X*Xz7Myn6&q&ShE!t?Ju9M;OB);7z>z8Sc@>gd9(yONNne4 z<{WUHf`>Ty=5rX+E!$-)loc4<+silEQ*sH-QxlC5J~RL;ZD3wup{*(a3Ys)+%|Pr$ zGzUBn04)jRYQe|MwS?)^BZ5QBUc&_I?md_kG!=lwV;G-(v{WS+?-Q7&M_b!y)<$fc z#W;fmI89qBtfFO@%nFQQ8D^pmgH(sHDZ(tRQ@5c6!6{(IZ}5rXleCNGcLjix+1a;` z7M5&b`G%$D0PE0a2Tjbw&27vF69A-97e!IgDH|}J4H#ve3GD!V(HOuyRAIjEKVHM{ zF#$6Tp!@Y-SsSyrHkz(I7>E2ZK#ON4C>vWacWno0y0+*LKvbko^fS-{<^+cAh`KA# zQWM1mrqM`%^G^+MD#_nQL|pEs6S4=4Ur2h`z{3|MS;h+BQJG!1W@` z^DzKOQEIjAKnZ3njwYarPi-6ZQxd?n03By;{R}f#OrptWAIAOB_AWk71%NRAvY4{6 zb*F9e`}Pdm@j$Qi{G25)1#UcjjIU$ScDT0(Xcjzu zEg}&|@R@F+BB}roFpy-;JU)nK+OF}znXg&jcSoaX(MruU3x4Bl-< z4?+$!H;lE#rJMG_JPbFM1p>lif*+v`{c`OI8gi@vupH>(tG(ZXVaF_XB151m-tkWv ztHT6-sXhD#pi~2hmh8dCwtdLG8z0_te$Co?=00Zdj}S~qV5Mznii|PQpxWmB!>l~U z7A(4se%Wou@%6?A(QFAQG)shNqD{BSyM^!I9s-XO+M{`<#<);WpeZ8J&Y=OD96=jO zKRa(WOut*aFGr-mk6<8+ChR6=y*EDmz&=1Q(}hOInDRdQbICf#L}zBK1t!9b6E`~C zFagY?X+&eCy%sl^cfMvUN3b+l{_rkZZZxNmas|`Y9l%Z<)a7VQKMG)%oMw@Ch3~TF zQ1BOu0XXm>p?sMue38byN}0f4-6aqHkw$6NhA!{=W6B+sFlTYjJ|G}l>txqXiU=V9 zZ|nG3YD<%AH*Yg{)BgzZRtbFc6z%3QxwaWI3)FA%)(yK(AItZzPTjO+$}x4`XC96q zSV-bi{{e!fN&0R8tB|xT8mL2wao#52P7ybZ6(V;}0B~*r!0!F(SI&p_n4n5u5#*?Y zU{3QjLzwrNIh%RCz#KwAV&){Z3;IyD%xUGvKZCI5Hefix+!2K~H3;+7_9_B3ws|5= zi@8j`hjbMq6?!?EaE*Gk@QJJe_6`s_Y~cGmmmQ~1?%5=oO^}m4q)#3bdG`PSm!rR$ z%yA9YgeroayZ1h^G+MwD;tD)kU!(tapqa{$Sw8;S2wIOZ8=hqRU1tx@79hX4wS~5J zn>2vf|MtIQZ`F>?Vi}Y~!@M{?YSHV<)VpFUkDu7V`i2XRr|9LQL{A?j7?kqXKDZAy z*s2DB2s(2&j=RvP4sANjK{CHD@;+vu*Aeq;g*9-4`F{bS$HMf72n^78VtuoPu%JvE zYs~X8X)d7+Lz#R_L=)b}GH3{&-$k~}8M0%9lqH0AJG4u)z&KVE)5IA#U~QBofFj+4 zOBsudH2|{q5|w|pOqSccX9HMzD9DoYDMqv0pkCTO%paZto&uf%o&uLkfu9rrWzBJi zB&qr?X{wk5`G88?iQpPW)L1xoot8^JkTO-337*sD6=85#`NW{!O1 z039riWf+kifM*R2#|q|p|Nh}qv<@W}U8K7#W;i^e{bTd+Qo{~)4Q*ci>qpLaY)?!f zz^DzwUbw%4MvH(efDJK^EhO}XuO9+>w%isNQ5X-rmS8xVw$4IaXThxlLN*%T0G?Q+ zVcw!J4PCA`VEFQsTLfHH$+Lr2@C1s~9?X6(Kyjy*Q zX%!J8DWd}uBgV7HqP~4_=(cXrKBa3g*z&twQ`s%~(`r9Cs0;%qfcXtTZU#4IzJi0IYX8(xo74GAI zN${g+hStj<)hTKK{asrvG437_+4KNlji5nh$1?vsAOhzBfC=w1qO39&*0$}zK6}6d zB6ZYBC$sVm0(n)?fM8%pJLw<#?D+#mp87Spcg&uwpRetr<*ztvf&_HY-i1!{k4CB5 zA*c!fPFfG8{dzZVhqO_FYa|po<~_)R#1uYgyI8Zod$Iyx$~#l?7V}970J?`6q67u# za|A=|!7-ZD2Tz$x&=|FNf5^vy7pe&#&im4^zh8M^s{pSIAV&N2HpxeOnHJ~|`9(?? 
zv&MeDQQnbt{A#tw>=@&Ga_m#;hiNeLQIol86QI%FT4nw@bXsh^ce3d$lh1bbggOCo zC{sa$G(I#Ny0OAvwOR9?<~M0&|G<_xYXG|;1Q6;kZP9a3Ik7zg4VB1`*UQZTdFILg z=l)lg+h*RNT!AwA!OJy32SXR3SI7Rmy=IRPl1O-=eh;+bRD=7x=B^$J=Jn%W4e0t{_4zFud!$FKEp_r%js(X5#8_2&cJtP1{Q; zs2%wzE%6e$F9y zs`305KKH--6}ACbPq z`1n5`e1Z8Z5#tF$R>G9IL13icGfq>*1KU9uAT51!{i!`*e55hOjnNk!S(`Ao)p;il zw02S-`KAk8N?7m|nxCvZv>}8NS_Ai>r2>d4;_y7eqC0@k0fZJCUw-MptoF>-D3`m2 zw4nuDK7RWTzq36+xwh-sq>m~9UIj8zph0b2Pl3y)0D53`8r(`Itk z5k%bs-tfEsBj(aEm@rFCm>_)=kfN>$pYJ-?V1o7e^H=xnfBirHugkCTTjVIVKs*;~ zm|MW)#};K_5A-Fp8^%O*h#@(4_jo`|q89b87A!G~EKFTA0b(j8-PYb`186EDfJSX@ zqA&@e*PXnf#Ky|!Nj_h}q2-T-6-E($3`P?fGhfJxim7H=97(}-d2T5$)mz$HdFon9i>D`9J*n$t%12BfBL`4}MvO~po63{PB3U>RgC||NGF=sj(1!G4mPK!AQIwPX zkfnhVS@fLp4nE=E@Vf4)<%}m%3T8xLhE*SzkH#s#)CT!OIXY9GMy19QuZsJ)S%Ul0 zEUA)eyQ{w*B#pFWilQqz1?C14^pJ+}#>p}4ZX5>5UH&QqUO4C%Wvge&OygG11T_oB zQ8+erU5CmO2vG4Y#SX;jDX zewdH0=@X=ZMtWx{C+_-`j5H6Z-=VAK*)CDbTts6^mQT9&e^f+jJ)=YY+yRi65SHW+ z;!L60p9Uz50ZN8=8E3GN_{}Gu*nj%R-`PJLY}h*ab_oomeRW0CUN)R)7Xw7me6{70 znEf_=&rC#}sH6JQCD&QBq0KZ>^$y3R?yKDb;T`f3-GiFSnndGVGZr)yfK?jxrZVDi za|l`MUC_}bjmDD%SnfA}^jnAKUN9#bRK8O~m7|a9ks%&CLVU!!?DnRE8V3oJmgc^pX0(CE(=Fx)WSt#c+A( zUD8MO=j8HffEB5U4FekI(IDPlT6TLX|C@jNZ-V`+iB#<(l9OrJKUyh7XK4l9_+;RU za&6ks;r*8OQfi1ophf#^}~L$D;; zNjT~vm-pHq1Gs9;sU0%YRz7Y{m)*0k*FTprpWo5x>O|8 z0i9$G;DlUy+5O=u;3@E4Qb5a*gj&H2`?gKce0Ul2f$|9E)|b~vGi3OAdCFJ6tNem_ zUeElG|NVbspMLZqfl^UK$wdi)umsvVEAXMd>!a_oSaDV>@WZlH>HlcQ1|7Pl^->?M zu`>R|j&OKr*%4Euvlbflw}MTG0caA$L7+yAm%2lLpqE=awVz%DEtHKRYgB^kBQ;oiwjsBh;NLHn3pg+ii#ZQkYGmbE&I!8>0J6x9JM=`qMn6=(5 zECtn>n5Xi5P_%m_sD>6+0S*p8mcU#T233raJNnXtzfUjeS$$-x^zDO!xh}u}AL=5& zs(dki@1EvnGMRf{Uo;0yV0(A~7}rB**NVXB*H0=A(*Mq=tqf&h;M`E z(MZ8e1k{zkYm>N2iV?-d2&hf+0d#5uzN>Do{y~?xbndGP?mM*CrHny4Xr&~0a0zvm zZyb4qAJJTgPWs4XTeqcIi9ky?Fm!<1bJ1%kgpAfU7x0JKfxhfZ~% zq?gV;r^Gz(Xu$Z8Moi_o`u56nT$MO0o{Nb=a|e0xTQmy~Mb-`Lpo;1dZDpWCSp_U# zuXf>cO2@t4)zkK=ZFGtQy+QkwmvRbeAZD5G&_~-Nxc;FKy*DXn6INa6PwUyc==xYU zC{>Jh|Dk&U80o(nFyy*20HgfUgRvy0)77V!Ff31?M4@$pcja6zeHGU5>ka4AL~gvD zPeX>Hx!&zI?}h6#XrUP=%ZsD!lf*PDI|K)=pP)zkv&D1W=v|TjSZ`Ig6&oiQZ@Xy`i&$;7_1S-h& z*#HZx1g_df=k(AZ(SJvlF0Kv^>QW8@SjyJiki?25p>$NIL7nIl=0(ho0W|>ygWMe1 zdszf2-KacjYPQJx1 znmo!;6m|#hNt>pQI>dabk0po;%2sB2cj;Tks4PF!3koVFO|A}&UzORF{! 
z4O|N9NY_^` zz5nAW;3?oK;3;q!6bQN1%gF0J%tMQ%78n>}K8jwez!18s^+`)r$j-P`7Oo@4g027s zN~A3Xh^Z-snMPK%z~JjaRZ$5{GU%NeAaJA2F6r!ml%YLnlQ6;xGtI zBvn8LqGa)jjbJLQ?I}{2{XrfFzFL@t$)8ghhrEOGACRde7g{vF4MX!fhn`QMt&o4` zMYMQPUSedly@x=&+shT;FQ%pc5D=6^AKvkWhx$H&NwPG(0+<>j4J--7nDrk5ZHf-t zz$8=YTQF^%b7{i#ZaWSQE)MzgsckxRNqK~a+OGP`RPIdN0!9=rU~UB5vm9kLJ!`%L2}&bv=9I>Cyzrf9)X z3y3-)i^Q@V>P2mZAx%w+fK#JHdX+DAmTg;-urgKqnA3@SFbuI1Ao&`Gx8|jRGPau%KUb*1H|#6^vB@0s$J)Q{zll z02*roMNT+D;-Jj%Z4Kr?(ZL;5Erym#bJueR*YysBhgu-~5|Bi18CNJnW$NITYnKyg zP;b{p(#Z5z8eeHJm4^V54%Zg<%}ph}Z{I$-aZ0c%ZNDRsq;hnKgsx0ggMWIbB)Dv1 zwV|!5BuE=TC?M@J#30}sUh(Q{`c^)NT2G$KM?a`kf_&t<(7Wx@xLZg`>j~T~=$AO4 zN^^wfZk4V%!_k71OmVYpe zRK|UWlU~!e4XJ2f@lXz;o648P1|k8jxSnxiBIpT(MXXg?Z=I!84@^rNs^>Kit18a= zg)h={s_nX&gs$OuQPta6TD7sFQJ*-;tWNxMGYNfG1Aw0ZJeg6Q>97RG(gMpmL!{Hu zJ8@Q7j-030f)xI*r+}w`r+}xxRZ!q34nRdx7D~5hvf(vBe=-F5(6$be&^q%R);=x4 zjP|u1f z(r;;IRi5hyomBw$bo5T{yZ_@U@IFw$4SD`JfeaN6#20}E%84F=q3vEUSpq0Rwk!>t zev1S(`KU=(3qU9l;Z{*nXK!9-`wX;o03Y0(Q3Pt{>AfSA;mjRXmR=Y>;k*>kOEuzh z@8dK^y1-@4P3{Y4{nK;uq3;5W1eV>Cdcm*cP&q*>TuUe{m{T+pFgLimg$0vNfK%56 zess_MKl|?5rwY5wM1Pkg%-~z+Ub3K#L3S>U<{xF_K7D!Uw@BgUJAP0NSCq@AFEgdn z5ze9d9p>q7@JGqRd~_|;a)x!+wJ=L9Miqop+RjS5!75sI0#IDVCFUBz%sS2__)7o) zKmbWZK~zM0EfOanM*ui!r)5?vIt52~4r|rBe0pEM&)*BvTy&P!ncSUSo9hi8?+HIH zO0H{EAsml^1<1?FJFBlungwwkp><+hOUNU?|80Dw8$|q78Yk#cA*W-4_lY{nwI;VIxL;3;q=6z~94D^8CS(E^suW-xObA*6e6V|VT;V1Wen zPU8GrH#io-Npj(SxDzgp&n0@VeuPVZZ>@H$huFRTLoL-2uGhgYF#u;2P(I2-wAJ(O zcbKs6d`hui!Rm_BYrW_6ycjfD@ku`&$koZSKVe^<{;Q|J519fQwho*y=H&+{jg3H~ z&gb7D33T*oxOYI~pND1Ith3VQ*e5m18zVovbQBG!G)p`XfJl|}nujvwP?^uYh`o!t z+n;OL4&A=&BdkL|P;+I36(GT@)W_w+gMrIMuWiaG%*SQ=-G9P)Oes|-_q%u2HD%}K z#$LVpzw4epI#eG$tK&uaxb)}pQF`SW=A+-?p*~l>%Flr^z7M*aj14BwX`4?#Z#=U!og7;9qn|g-?K&26Ne6{|d^-fMN%i5L>J<7PzpPFIeHZ4# z_18IEeJDuh7afj_y#%V$MTc&y`&6$N9p9CY`b7CiaOY@nVWm_9fTuLp5KHB`vE!)s zqLSa_x@!@+0zR3rHS4?jzm!rhD2L8=O_7#TgOQ_m#eH{uiKbVP%Vn-B?RR*H4vIRj zAa9C(EFq7A6S>-P?n@ay;?JG}o&uf%o&xWf0#^z^HRXqkhD>bq@BE?ry567ee45Mt z9oE8uCVn{MH!V!~B)NzYgp!xfj`n5~rzf8#`HzIpxh0>~Op;k`k>UcHi0jkRE{@6s3K{jX4^d6Ss)J5rn9A5m_J+87;&YNV=%Q|RR@S~6Kd2zunxD(b#w^g4vt(c833`TQskb(nfv+jIzcGKgF-m0qA8c<$4|Ai;~kg21O9bAlnh=$e~y$VZ{% z^t|RF&7=aMPCG>US3cYz5_nZX%zKhJz^eFaclTef!N?(UKV*I>aNg1bAx-Q6KTAh^4`yL)gC z4h=N!?*4NpcjnFeXI`DE?y9ajz0XP~KGm;iQ=J?23 zUjGp`T74YXh<0-oy{4m%mU1j)+DI1bL`XtaEA^e0HBQ0M5IO@bdIN_4VdwJr(nGR? 
zocn=YmqbCDg7JCqjZ4Y{|34MEF5&MA5D4TlvpC}q_@NgraV~eq0ABnusXgG@1RWED zZ6VsLgh*I1x@m{$p}S&Nzj3)utMZ-fqck)^lR%*bA>X*#()J;I(#pMI%N$?m-f?Bu zL>7z5rE}(^%kt>RAX{D2=eveQ`pH&co3n=t+jRfyul33`gsPxIdq(N(^5J!6@A!=g zMyca7cldT3O#M?weG~{y`65s6`x6uFfi_)gQ?_m-of%v2Yx2YL_ccg69JG;cGs_u5 zjwpC%|57u_HU94B1VU0=1ECHj#P9=F?Hs#7^1UaefzyNaap`znHuWNcg zp2N0P=F?lzzIWmx0mZ~gO{txN%`bNtH8reCX%NBXYmd?@nIrFjp^`xp)j@3 zmotXz*JetOJ5Rj3_1kq`C?2RaMtC#c6PfQDPFFr`pqGl11h6T)RAdPZI{%l|GTNRl zeFP zk0Bzktv&2=B}m-0qfZZ&N{iQLl6N|Hj4WA9jH%=llsu1Gx01(FdVB`NMSJIKWk#5V zI~%W$>AO2!Zt(4Dm>?O$PP^)v2o_ft{w{vC?LIG$vU;}vP>_* zWj60{9!>a!TKXH9a<+UW+pUDIOcaLic7fJ7xFdj+WS4d8&2PN^4mCmeox&DYK#m|^YNU3o?&*Y8P0}}Y zLpT2V4c}F$>w=p+;MrDlELTj47KeytXW7rIogh~4%5fX}gSJcf2dbN^5B>;yR|=_X zY;(fSPxCC=UH8$>lRG3fH>w#<>B!}_Gt^(tH@m>(P+2R*Ps1_#zNNPg2Hh@Kn!%qr z%Tg)Ay>E^vcr4w$Oqf>|H_8i~&5 z5?XK@*Ir#>L-Gj9yQ_$`gp^hJ{LL%`l~M!3$LnVdE)Pv>RZ3Y-j>nx-Gr$LBorGU7 zt`6?}R2Dj1nXt;(&lbqIGJezCFW+7DTXRlEXshzk)Fj=t(0Mwy+#hO)u7-Yja;ILW zy7KZ|XIcjYbv_$ZG|E1?3)WWSPd*)FHE=#t(6k8S+;5CHN9_AgtywpLizey&1Z{W4w8W-MrO}|z$j+I zQ9)&CTde< z&0pC5tRtKSg72x@lyG)AYdf82{doRPf&{e|_Z})Q@nL^VHf#l4=dfQF=(sYZB^BOq z{2B9I>0y73T9P={gNY_js-&;v4+bPWvoiGPfoJMMx;uyO-iIn1)MA_F|7(}kf4k0w zv%9gtrvP7r@nvzyGo*^nV?kgH`1C$f42vj&zMILR)St7+52#ne4If`nVfT*y{K+}r z`#$O81FtdHDu73p=gh+D-xSn$MZM(w^^e48`6rOdF54&J5;h=Tv}!H}d9*b5{`4hU z{NK2qA^u`I-5!-4Pr}h0#anHRD@ptRc)ar$1(0WoLs!@CV=OOJXB_yZf-DrvgcZ>% z}-uT>3#rC zr777KYu#8H zI6h?ij{J3lxGdPD+uIQLBF7mgyjgq5H(eL-m#!?Secyi?Ksz~9r8LCh2+I4^mKI}T zs}*}wdP6P=x(Wd^>a#yDME(nk-pO5fsb|BdN8lfiStEk#rM_ED*q6Hed*1SgS?87s z{yqnP8ja;Va`(2BJ94)uMYu=g<$WT#`3EZa%ftWwYt0wFof~i{?EodzFh0WhCJ#k< zNVTgC;9q5@d$hMmEFYPK1!h@CGCY-Xg zk$u_~ldtxgkecylH7P*?^4~;{y*WbsK;7Il72-CHzu*E;Jy9QR9v71~|05opE9G~$ zL1pvGe~KvT)~?Po1^Bn}YI`ihz4U}VcHN-STsw zSQW3p1Fp<>Do?{xTk-I!-;DLgF4eN*j$zVsG3gyc{7lCJr-<2}hyNuAkRiO;i5Fhc z22=~q%(&joKePc$eJ%?qcHH5~s3^EV%r*@@6sj;)r*>8K$YF-zgV5PC#|F=O4u|!- z>&Lr&cXQ|Xr6sRRNOYpg`QJr=Yg)eGKX%@v zR{d3F>Tb*L<_60)V9F7An!ma|&KXANd2j7wrczF}8YMDioq5R(kb~iSSb+#3f)Qfd za0RH0s4Mx(FQ=hjd9>M7oG~vd{B$ug_oD9=TTF5a4G!3qnQ;F45tT-=pDZB{-}icj zOhM;7|_elP%YTl;yEqcJ1(>9 zpvr~>JszGqx|k{Nsc}u*?VB@HU0)wja_Q+N_o*ZiByRm{-kAxGqeQy>m7(*f%H!|A z8~Rty(F?+unAQ=vBBw8&@!E_}4uxR%Jg35r(%?bc(+mK#>Xa=U&F?-lAX^h(QN1_0 z?l=s$EPaQ{70X8kT{xzL8GvlsA5H_j=v@4A=4whJq{tGK0yG#v7RfHwtv#F4taLS@k|`#Q z@syqj6UiATAml#}O;ehT-ll9NXY4q)ajBLIZlbAbVeF+xB;EX+lt#(IPJiU<6yx7xCg z`f_xG-$b4$QDjA$Sua**ukf3rYo1ac77FXSy=3Kk#kW$s-~V#cf5t0N z7`2@<0FxB;z#nX{sAT;a!w9;uz3bd`b+an12z@a8hk?@TKJ$LveO~w z&XR$gLec7$D`S0(&5K#2pJ9+H`r+a9<*e^juWFb}K!9L2#hp$}Knvk_U4_{M_xXl^Y{b-4506S6BC6%yU{)FgLNC%LBM|^r>Ks0(l@Y^=@D# z+nWi&#u^{Bon2wrw*9L!dkRnoLd-oRo@50*@RUQuVH=*Z=2q)44J3VU|5w!iJN9$K zYm)&A3>D>Wx{>(<*0quS#nf-8M!xh{%^|JMd@~(U(wP-k!TJi~21x*B1tE)Gyoi!9 z5+C`}Kgf?{Xup1NanA}UQ7h)IU^YCE zOa5Ba6`hT|h&sixgV_i#hkGOsIXJQVY=hufmk7v~lv=f7wJx$Vl0&(b$oBzsmq?#N znN&^;B_ITpRDMi=_B&bD|7?8VAK#{HJzuU~QR|JAYA)%zQlCqN)DQ9UGJE(U-x07O zK-(JM=M1LPFb5?^ZtAaIATgDxZ3s|0u5TE+t5!D3QU%mMYv^QLz__eaU)3{BOBqK8 z)Pt{mcy0AM)3E@6JYPzwR*Ab)m1BSK1qPtQ`u)`!Nr0f~ zHEh}zI2~9RJ}?}-y)>egjair&Qb4oKpuq#HZkYjJ+@HelBPk#J{ZBDyx{NV9+0CP+ z8k4es7KgzB_ghUma}q|nrQg5F)maAW>;X0_#Anu8d0vLxeq8?YdF>t@2KIjHJc1|n zFGvC43rbFcWg1OJd6urwgD1<|tN>0!tp^CtF#df8=c8;qnHc`>Gp^TPNIwtgM?eFPSnFT%h2s0W9Mn_c(PkcSjyKh5- zv~$X7oGfC6g%2HbKT<*u#chu6PKTXKWV1w) z%6lO6Th7qLh+raRjX`Yd&3FPU=`QXrg~^$W8G8G7K=ZdVQbPspS02N5Q$A9F!H@BK zK7Km?Kq^S$y3ZAYPy{~|LxBlG#kmJai{?ULLw$ks5(BWJ0b&6AQAhGxP~}xw&S3tdS$p%?kW>(|oa7;pDQ#pd2BmlHLUOcz*8!%wa#BFpn%2BTRDRgF|W-iQ6 zlgS$JmB=f<)p+At4O*A5c`NpV)=*=Z z-;jai-Kg31nlx?t@^KgOFBWFAK76j@579otS)w$Q$WsYS4@A(}1u%d>s|*avD98wj 
zsV;gy5SZrgJdb{8K<2k$Fc@j5Sg8J$YhRj-8lyVv0>$B50xYQk=wphVx1C@4FJAUb z$fJMrS&r5v=hAhC;CKVGlPZkh*PKbx6YVBNhRf!es)NGn7i zyYF7+iBR5Z1^u!{6XF&vc(e69lSVM|?;Sp0$d`{yrM9x)MmS)`vgG!C-TzMP{~jvR z^STqjOTlw$9BbjY(o46#iSQkd)=Pw2g1 zXn8goKg7uaYr0MCAi2Czggm#;DgOgT_~R&oB*BWe7FJ2j@2BvFd43b63nNjb*3=0f z+|Pv!fWu~ocgWoXmxTG(Y*{%oBv*#Ee~x?K=OpAvrSU4nDIu8Qu&}QR6ou%SV#{1y z2!f$I*s;~ZzVhLoHXuXfn4%!9dQoLHyfW>v3TIW|V)H<$}p zLO9JwZuGB=YqqnP^mh=Y2@5~URj$@+ZLuT@5Uen$(I-l)bH~*)y38LV_*uujXje@# zEIyBwW8L1JOy`AQ%M=_3K77*J)-<(Uh=pl$G4)xq8K+wo;WiMqy8ArV95yk&0=cgB zH3?ELDNAijYrIARj0GhHR~pNt>~F_{c&agG0~3X=@$7pmiU^2iAR3@NJ$7)5^;S zNC7tW4Y2L;un6zzE`_{n^^wDJRQQ`q{Fo*Zpxtly7znEFkCd~j{!vAh=kSw1@V;nw z>XiLlYE^8Dd$;?;iEL4Z?_php^NN69jViqs%*pXbKPM+6Xw=M_vDu#mj^ z**q6geHo<;K@i`a&3c`dSz_60PyyxObZOzq@{PM?!e!&zF)b~&Yq>+HP|5|)-WK^o z<`FRzD_4J^sm^ZKe+ydeL4VUWiDY7!^6JoQ<2z5tdMiN1k7ZKD@ps z*EJ*$NfbTyw=|3p$RP;G#rO`F&BHo|ylM`Bnl(A&p*{$Qyuv3Um|hIpg^JO$o=9Z_c2 z{UA%HUnd%*4TUcGRVkuT<;iQ)#=>$CrqR<$U%0Nx8ox`rzg&lrWQl{DppnR09q(!t z-6u7cRryDIn|*TA9Ay+8994TljHmW$%cu+gjwtMg+0s0`@#R@iyo|2aFGSao!}}^( z#e!z$Zl~EEBwDTqd;bcHt zam!x!UX<73!|s2IUKk=yh27x$_wh9IR#s zj{OOe!7)IYE3Fn?!4Y9DY)4b34E$0NFag#m zONJcnytwr#(u5^!pe9VhN(~U#u3{syj~QW>?nDhJ=7ay5Um*+2CgQY|xbwrqQz&SO zVtH5x9Mu{JnFtUu+X7wuGnZE-xdh!moZYpFeh#;hlaN__r);Ds0=fyV`;!Z!d~}VX zIeIkg3FU6vGM}h;5^Q@va!(ZO$5|mME#c@IH6D9XH!OT>rLLi&P$Yx3vfF`aGxY4F z6a7$+v{6lXu7v4jY-XiM2fb@4)8C21`khij3c3o8pO{L}Q<}oN%;5PoT;0y`rh4A~ zR6710_%uVpYg50sm(OH+sD9YIE`XY=Lbc_D*5d~0S}yJ-0dSe{CAko|H#Pc}3qpak|~+cBkjBVO8ekS+|aGWU3X z+C#NO5A0+IPQW>G>@#4}K-h7#ixq$EW9dAY7gwIPE3;`s5w+$pr13jbps%K0H0l-% zc>f*!i+Q9=0oaPDm)N(IQWiZvF0R^(p8A)aNt-1#Nbyw2bf0RTsNgz7f}=^ge)i9w zz&ZN?d!sW-i99KFGSc>=fJX)p4`CsHroxWaf2nG{mz3W@Jnr2dFS}ss=Pf77l+ECX zU2wjT-+WNA&H)ug9DO4Ek5~Ql6q0mKKXN1Jh>urfy&)pL^%Xkc(Gr3BT6`h3h0}gS z?chF@|4K9FUe<0{RUP+vI1q2_s1vRS5~WFsS`*S6K(l6@snl_mxTEmiZ z@G^`0ezfXRt|KxRw14Ii5n_T~Ov_CD0MEFRs{cE+n*;;WECzevcO}WiN&bfwKsO@( zGUr&3ZJy3*(X+#zAH57TU2rfnl3#kCRkKf%xpRHI{p~_|_6-m8HQYI}662mJ*B29x zs&%6*c|XcYXsy$1f_~TJeJNF_d{waT%wSg+=1+DpC;EzbQgLcArkeKw#MTu{5*S^I zhq3s?jr?8@S=KyBAJG{>>_`=`Sz3KwZkEpDTq1)VPjzy1T@9nEY=gn3)TGS_QzlvR ztgQ^Ys3%@NUk(HQvFh|6MbUpmV4|6|EBbgmQ@iU4uJ1`xUR>vRq)ISW0Z8-=!A7>f zhe%#o>Ti@kd8w|@>{f`&khfR%m&3}h43@jN81z8PB3GoXH;q?y)hgx${9uSNX6}>8 zj-X>b|Ng7%vy^d>nQgSo#jGAb*J`w*!p$e0;%o2q=KRu3B3cVaQP9mNE z@Da;lqnh~=AjT*I84ZwPLXd(-zQV3(g%@aT-Kk!$1K1)1IVO<`i%RPaQVs-$wtj9B zjng1gSM;Yh8u^*U1y+j8+@oVmo3H{{xQHRs`bEjk$!Vzp*dbwl(Rq26Dg!Hy3;f_h zAHEy!EsYS>RZoA&@pIDhdxd5sUb3g<oxlH}KaEf}>Y6{PC z$T#X96i)znU{ckBtMqG7S85Y;{>t%JBV8t=(c+g5q|$me-U1Tkp3acmRI zZ$Sh{*)N2Pt2GXI*5*w%dB@fd#j3TNB(MBWNm;K0CeR^^Hu;8=jT|Qzr-i&mqjBB> zCJhP={5fPl^9~42SFoUHxk$UG;QYVOAAO>YFe(Q>Co)Ql8x%>3IOc&!cCk(zYN5Wh zQK<*!#r}df`!p?GD_n8{_BtE_zYc@D4=A{@46p7*Vo zW7d?V?qxRCi)eyjty9giFXZVk=;l(@-qP+n{wivjsV7!i2^|XD73h4$Y_|~ov&r=~ ztQKEJQW{n{7z)5$#NAF`uKyF6gUS7c&EKO8C_gv(mINF?Hg>G!&xHYGI%yGqBzd#< z?aJN=;6@{dGWdBYfN?(!Z^e6ge%@W0fbJ7!!_(^evuC&Q z9G96`-7$g%uq?+~8>X2UEwuQ(!BxnTO_=u=t>kjU&j{~3m#XT<8mZycF;`X_dS%yw z!3;dG?@f$ZlfFWaL@~HeMg2r;a8Q zvC`ZEqCMX>J{+MR@8Jq)P;gyp*615AFF@M8LZmKFo~`dq2eD1NxHwSLe&)Kef>IKn z@sMWKvkpX5jBBffR7I@@^0{5 zrh)9PBI17K0+#PgkACet>cZhcPZ(8GtrX-3kts~^#mU+4T?zVrQLPkynfQ|(%oPYY z*LZ4wll%Vn;^jFwyfzU43isvhxZ^HKXD}gIE5lKU13xDQ z?x8%F_}h2hPAi%cPdK)w$G)m&f0V>@6eD$**iYYj>eRV&TDM^p@H2|TbcS#CR?H~X zX>&PQE;*#D7B-jF@G`a6)28Wt(wP!U)KhnWAUM6y5|Yus(b0bU^s=$V>dLCd)6XTM zLg^Bzf=5J%yYI0Ui7mt3k_E&;s&S2cad% zGlwduoJ-~dRf|!MPF>BbgS`cVPzb?Ng-fDbAKm;_x~&^K+6XtY4iZviAvETm2RpYS zC$TVRW^n2ee%j)8XtZ%}LQzps4@%^yyIo%lO(=8`Aq_Z@TX4wlA4+{_tv{<0Cnv^2 
zdTC_U6zy$n<4b-C{P0JI_yjG97>`TEEN__o*AJ6{{zys!P!oLE!fiWINta$h>D_JKdwq9Jc-5A zInvj5RAhE=vO|wZGQiWq&m)d#5Um5XHwn_j5+h?4E^#i9OM#a+ldQdH&hc$ltiK)* z@qw7{$NlMSw`R&U&E`c zoIe#}P}Lz&cwZbng43!j7=w!6`ZS8=(;ycqG8b(AjqvOOcrI;W%4-E-4G`qnYb&xS-X0!~ z+X>$^ai{yIoc>*KAV+_gDoVyC4%l{WPz_{ z*KNk%gdird6;j_*3M`eTZe0R_VXk3_3&>4g^7| zUuh=kvf8VOJk}k_M`PoLT)=a|5g3(wF6sQJgJQViu7_xPcuo*hr9$IS;#JIo=j-rD zKD;T^k6ppQ@T#g@%+mHl`fGcnRIT<9a#M{QJJmJe5JXWqqAJ}%GPXGpz68aNVya&} zX$9NVn7d_t9AlWBB77h4M~=})wpfW3k=?$z!}KqD?3Sy6*Vti#&IKJR#1ld4%YNXN z58EmatYv5T9>)Vh9u`Ma@;5%07ucC5_Z@l}h|HCB4wc+wtM6&sxH*3x0=D z&HxkT^UFO<*yc?+u}u4!*LQf?bU<|fhq8zuMq7WGGH%+~=JF_Gk~kI*0onRlM16e7 z9vt&>!S=niR1Ie1PgN##XzxvUsg#R=rQoKcRAmD0xX~ozgaF40js;I_PBSo~KB$c)#B*Dryr@i(5cE_~?VRv+bf0roCB3$R zZXasMr)-P`0!Hek2Wg{dI7fv4i!DxugMA<44qBAVSw{qOg20kj4t?rD8=0i}$k??# z8sF0@f(1dODMgk2I^yJtSPZ$F;aF$UpexI)+W=>;+9uDX!oV^Qv1PA!$v9iBS*z(G>xz?dgG1zERWe%Y~yev}4i1=VQo@{6jYtpiQk!KF| zVeK%#RI~7(e(|r|@g4_AAin=B^>XKMZ;TcX-`cCXDWfuYR|1bZ64nMEQ0t)`IU)ch zkgmyj*wzLzvdqhP^vg|9hRyw~n9-&V$Hefk>1){0$jK!tuDaNU&bfkq)#Jso>*oj4 z0=jBx=xM!cUH=E6nrjWJAN>&UjF~D6TP+&%^G!5zI~yKnQWCx0nPQ{V^I>Z^^y=08 zQYGL{Eg$7BUl4_$qE#kG!~e?@>#;(^x72;{ncnf`9a;GTg-N!iqauUweRSl=qf?>f z*jq6Ml$A9R+CDclu~)NY8NV4>Vf@I7sQm4G=5lTFN}G5fC~Q-&0=2S2iaW4`ORm## zPwxu*Fsn)>NrhPp_@KBGd6rtVjD2B#2GmHN#VoL+BV|?#{@Ay@9((vpGUdO78ZR{5 zMJ6can9hN5OYN=UNjbf39ecFL2R$o!%p>FnGh=%0(p}%qA;u%H3!BbJ6@Pb_ImX-_ zV>~?qM92yrQq~Ju!4D|+r;@jdE!sVz?y7%M`@g@v{uc8&C^GNqI~)?3(C6^y8oTH; zOABR*IpfB1uR}hqJoVysYTVN^E;w@szueI@ zkESPe?imd^#dU^K_h)(Qp4HEd06I>jQJOJ^Xva zxXZye4<37429?fw^qgkhtDGa<1Mk$PSV>IcFOB~+l7DE~ zmJk}g;vB&?S<={wecNM_Yu?2HmS`q*5iV&iN6#(8*#H05(@lSOu)DyOEB{M1CGq4A z6Ocv(Gs{>Ir9%zXX1sP@QmnlcTkU_@h<+YZwM#BN(txXBE57*A)?nKazdg$U_l#N# z!GOm(5N;#uAfwH>ch5$a4M~16brgotrw`z5C(pM>U}^5I^Vq89NCHU0uG3Y(z5w(C z)xytI)h;f@W;NnpWVmgL!CR}Vai!<=fjhS4bl;+FuHQ6ibbNF<6%Ibil~d~^{hIHT z*G>92I2`G3sHv%D_ITX?n$q5)Teg64_9~rPNA}((J;u;fkgcUxt}nTzOiJVBHW=VN zDYVe*I63Aki^5;=fzMH^)U!%)Xs$Z%zGj>L; z0goN!q=!BCPRLK&C?m%AYWw6{X4Yj3;pZdn_gaGMhI3L-VjU{!wycF=hYPporX{XW zBw0HrjQ1JbsBVK1#Y}cOr2gEIu1Ya&ZuP&<8f%*gG`h0+Q-X=}s~K}AiSs*QhTE`x zp~@Tj1D~kI%J1U5J9wt6vMyh}XK1m?$58J44wQz3pyufAbk4j%`;%;=!PMEK?*!U% zw@>cdnan({-*Z0>2hfc-QD9}X1W^IhhCHmMz5Dyv&f=5)$Jq3B5P_UG+}y?+xfJ8n z_;xizUU}KG`gppIp&XWBoJYi0iD%DCpS1SUHOHz%A;7s4!Z+K$c&uC?%;UT$~wE!zs-QEdFSXmEd>c4Kw$Vrp4c=IG`dO?KiTa9>7{e!AUSBW zO&ENB3!gS^>RhdIW0_QV?v2QW|6Z8XvTj!A5NBf3QtWQit;~&lvh7(`ocDC3ccmje zrvhH8PKLYka$qg=yWcn)sqam$Ub=Nrxn?YECfcQ%c6f;rzMzXV zb7S^{g#Ez&gDhv7U7eL}cjwE@;ioXZ#ul@tK_TfaRz(F>=gv)yn;NR7N8O9cJNV~H z`Yx`wBu8TUA1_oEIcT}9n=nB<(d>z+*H=DR8YRlsaK^?RY+|gVs`Gg_=NW#iFcO-b zijk#1Ci4})uL?15UcTpe1=z(S6)Neh#A7zkF&D3!540*9YG{T&JIqEf04j;h3aENfV= zn0h(Qd9{)@lA1nFro5Kn+Gop6r3V9>o4TLs=out$wUfurYKoWlavO%!;uWY3AAIb+ zy9RUHZ!XmqPOdhWuMY2iHB3IM(B^1YN{SMMbzT@;WDbgQg)eNhj0%fJz+hJl;R$&aJV0Tqt^pTWj>=TY=gTVO@#j5vJ90U~ zw^lE$XJ7W?VCdSr9qY4PRaAJRYR=fU%Q;8ZE|;EbFNZ}x4S(omP%(E?FuCx$nU;PH zI4bw4_fnxI8@cj&L1**AEwQh{l%p>a>T=ypAV<)3IjcSO;#enG5sQ(E4sW3MZqr;Q_i)TUMo6u9X4%-hX!k!|i^O>+qQEg@bc3rhBMxq4I3 zUR%p>x7yX$!}Rs$wA?Nmy!Xbz8rbnyp+f_cpwfYZ9Mv7q9}1s#a3{kS@Afp?vI^St z*&@2$e&6>z_QNmnc5MIcInnL=?L%$?&&q>w<=G==r)iFyEUT4;aw3rH^jGk={yXOk zOj>VLqMoc=x%Su8JJG@&@N+}>8kx+h0q@R{nT1qr94O%1#c1}7B>z63`}KcT^GEf z`K`nF6d8L{Hm=VO4+sh)^$-VTb(oAkTTxJX2^yRi68i|Scnzp+KWj;c7%|YU5avk|{b7uNdY6%#z__`2C%F>fC zU!JsA){cW=I}PQL)!nvR+UEV?i=2jgYhn&&$92kbb)h8-u6x+M_Kqr8UW}QKJ@Xt3 z)ge`@nMDs+9j>N!1z2%u`X=k$7hgQ5#16nfkeg^v@iwBi$1A73GWBVi57lFTUWdNJ zfi4~1hIX9XttXAPaaxT-q1Trxz!j3>g+u3~!<%B$T*!om%2h3xQ360IVMDiQqD3@p 
zrU8l0tZ8WbYLh5SBBS3qrsxqCJBfxI7T!SF{biDO>3UPG{?pcH&zPeKCWH20=S-$hYRu_8IKKG}gcR%JzE}5}J`Io4I6lN6 zFeRw5oo~*fd|JUAW(XtoQM3^k8inUTo=+)@C`GNPxXpKDTOj|)@6m3h53W!gb5$=b znYQnE6Q9~8ivMl*n>O9s#hOQQMJMIA%1&LM2S(7Z;h07iLrRJw{aW}2XCRnu&yb*? zV1EA%w7zkS+>x}*r2L$rg_T7lm@sstq#?)}fSXk^q~e70Q9Y-Rw1nsU zipW9ri-b&aQCOl^Lw!+ydrJ%T_{1(L3Z-6K`|q%X*D8I=g_zlEipgPVWUjcmzJxegIp0@P004*-fn;vu!Z$uP9u%NNt1n1K3^xwBW-Ke{u#K*=XzeI6cC5^`ar(pWRU>~7;j3jA7M)0`3obt3*c`9QhLN&6i3 zJE_>uN0k0KSeZ5ti9$Uk)HoOa@pAiqhqYFo53AHL*QqzcJ`V2@oQYG-vSXNfow2g> z@Ea99lv4PIfbCu(&1BO#oZe*~TvcYf->zRvK{$v#<|7hiTssoO?Rp=jiYs#-feq$M zkOn)wC|;|%obsD>OPZU ztu|n5r+IX#H)R(0@jqW1_HQ$V4`z-V9m=~|_C20%4Yf43!gRWLnJnhuuNbe35(ywj zJdF9+BgqvUN)pZ69?etOB_M2r$umsMKo1h|Q#y=j+kqpVI}#8>id$MfNwp_xnM2z5 z#mIBz2uAQlwz{8NWzmFjy&vtTV0X+}&k8Y%0kx@)arJoFQ6(SiY4}3?>1a<*S$>w8 zpPz&WVdf(=AviScHF?xT^;Dfzcg`Z_SB{~!mD-( ze7A*u*(fG^+|{6tBABeRx$CgU;K}By*VsKa<{m$1PzT;glig<(WY>5l?>tSZA?$HK{i z-<9VTU_qBS-_PW*zIhw9Qsp$W=b7ZpYn2qeLXyEV0A3BR8`#mvm%tH1{xI z--R~`jSjB`C3?4uYS0?vnBr;mU%~KV7hs;jS;mB!GFRB=+${~OLb1~3&JuyAyXAU& z+CB8&VmDR%322rL?!Oq_fV8b4plIONnommq0K8rx_tq5V%;f(*`k0 zqT!4*eT0fiX=&*;@a_Z8gr$g>OVyva$9u!4(9*+YqWgfF%52%j24ZDn7uTxGM$W$- zT~-EO#jJt1gL^DXBTx~-KhBj5>3W|DPL|o5$jr;n##(lEp_h%;?38&{H`7RVc6~mB z3coSF|D3sysE#CeQEnz@UypK#`5Hsy?4fVTvz>< zq(VPo{EH@BA#2tA?ddBicgtwf6!>Me7@j+~@9Z%3VBkRB$QCsBQ-)s`#%|;YtVoEL z(_Dv{{mm?+_mUv7e!8e@Neykh(4srH6cqnEQ7HXOc6K&-cu5hD5e5h;&d6tRt+Xksx#G0vvmqg&6b(;b)Q5`m{D4^?(FlI?uYx8dTC7XE z-1~h(uu$@iP?P`yoTn|b79@QxRQ)U6(h-O$C1)oq+D4Of_oX=6+^S70R%oa0)?I94 z^{O(SjPP~aDAQh+sPk-f(>9&qPSA&(;!5jb9Lh}JX1SB?89%kSUcnvm(z=c5$0tt$>h z{lU=euX<=Mli$VsB_Xy9W^GeqIGW_(9SFB$z>Q}mo9!Ho zS4?N#MK@O_gjxkRm*F)oO+xf}#jXWBlR@p##JL%}bg3f`k~R$U5+=_1$(NQ(08oqd z+^AYqxv}7;yYpsiaoHo#R)}OF(6s;Q;q-Iln39g&&L@;c@0RF> zhxSB1Ud8Uj@eYx0r^1?&16I~A(%f5py4Q8p$x=TBH0*VC41Dx(6aF6nctD50t4%>8 z`8Fl{esOQ@dy#$BAIr(PK0WWM7r(@d4O2iJR5FwzEUf4rJx!|2-Frr#600&Ld24vmEo}zCJWDQG(Znm;l$w6CFFOIqQbl>$nR%M3z0wRdwWT2r~a??W}L zcfO>ZkdkRzYilEY(3Hlfuw*Y-w0~{k4{}|>onxpA`s(cLNR5mQ$FqGhIyRn0$44tPWx)jn-C2psh2y-5 zkoSS@+fsjje`;@Oj`m!?eLI~ydoIlou4mynj5qgTgkV?qVRg~PSYPLP9)}8erZ9bg zM+M=vyqo%z+E=AwoaCDY??82ye3#p+p(Mt3g)bH58A=R=Knc`9h@>=!N(7$T>c2FMSf6qqZq^_w}VGo;jQze(a$% zeCtNKaP(NZc=A-bIXRj}r^XZVW9sf~g{HcK|Ba5KoWi>dFPuk#vEStsR=t8UZbF;# z=IE>CXXX;G%aD=ByijPs71W#>c{i7Z5C;Iwbr>cQcI0b2wrx$#O-<>;BS%teduy7U zm`savQaI&UEDaZ*77$)OP9s@uMQDp-1Rjf}3{Np$19O2##~58npENYr)Ba}oQ+t}0 z#)roewWb=zv3+LWWfD@X=U**?nF?H#%|r08sdrP_y>m1396WAvJYBkUC0)5PgfdGx zC?E=CwHn#@7c|K`!tbYZ0>gSt#WKpQTqY3D9hFv-ywZlNSn`l-=0={YsLAJBF|77( z>VbETrNN;Qgh$q8hRl-RHxN5xk+KPyl|K<3N|;@1+P-rjJ@?#q)0Qn;)BEp!m`)u# zo^D^gfoV8qK_1Ndp@fm--9`zMrip>p8sinOUZ*yZB zAD>9`7+A9>G1pnoSXSvYT+=ygiqP^o=4Ho0t%cJK_QCFfZRzIiVR-5^T`o*41;QXB z{$i)t1+ay7DQO$=M5h9qFE6Dwc=NWNO{ud6k2n2rV{nLiW@3$Q?1~MI(UJ;o^Q;xP zhyg@{0)-hb<`=_Z%!JmA8WT7ETb4q&C%Ur1b1`)>5B2tT1OE`R70>XTY3z?4EAYip zyz^S%ot8%#9M&Xrh;!a{NI5J7F#h{iPC1viGoQ~?VcE7dr|`^jD$R&DA<`H`=N4H& zuXib&e&-M8w3D&tT}PsKnbD)aG;-8ID;+p6I^a=5$SBjuS{f9>m=3`4*(=`lzx+*z zkpd`Q?5)7Byg-(yhXaWymAK>GXl`(XxB~d#SKP0oPvX6Jkn`r&s0`=1E()*=9-G54 zrNhTMbFaW{qVJvEfC?HCC$_Z#Ua^1Ap0xMiBkAI`Tj}+;-$@G-V;CvtLe@~}RvIy7 z1B7kLaClF;h{qh|Jx$GZsU3W|q(Xz9FlBkvxBO>WEALx|{7s!oLtvC{pM6c!1}dPm z7^2K12$JZ;RjI=GYP5PvDiGh2vC&+u?vhn+{Y70k>C8isT0~tJ*Vf*ae)RGW(lgII zovvNGmVWcw-=$-poB($aoZLoY^eAp6O6iXC+y{3@ov-BX$%Go=ok7^Sh%k9Et9ZW& zW-)~@bLil~^wWR$w`m%N{pufHOCNl6Bw%QRv5ev!Sc-Tg>*uG*EJjvQPo-S%1+((H z0&f8(a68}~`0*eJ3r)5%1XlzBg%>TvHeG(OHmStQGgOf06bNbGBZm&9fBTc4 zBo*m4gtU(lDu4CMU!{)_92Deaiqya)-pJ(dL;}0vUQD*Y5scZU1zR#_X6G4RCmb6u 
z>*HKk1BjNHg~xa;fXmleypf$bk%o}PGdFr_MWEa@upFagk@Ke5{3Dmz?0KM=Zz#%^TDRtXe)paM2)ua9Yn=sLg%7X~*{Msgd)K zjvNm|KnMv|O!-z}3Aua4tHPO+hL3q}MYKsJEXc&{JOmU#Ls+Np<`7C|5n#FzI`-|} zgTlTwoj!9eow;xhT1Lrhur9H0x%U5OOu!>-9VjF(eCIpq`#*RoZQk4$cp4lYN*^Mi zzW&bJ>H77XX#xdieg^y_EUf@usW-~0Nke0!fuCoddMZ8h?Pt<<6j3L>H{W=jKE9nM zr>+Og8hrAYP>4~8EGBM|mob;ax48uO*)YX3Iwu>7ZF4KTS#FdKErg&!85$uLl~m;okvP&;J&2~FQ`no!yP;$-{@zGfzJa{~Jj^`-$1}Q9wU_2ziskg{~nI|v~pj8DMLvJVNr^C|T(b1N6@7k8W z`<-VvZcpzYe;fL$O_Qu?RH#(|bU;|2U1+$c>#f79ME!N4FKAQOvjF&O7DGmB;CF> z9C#4#$|}6czZp;>@T>@4a_ynmSxD3H*-4a}X{6|u_HO!?=hPz&O7Fg6DEBmiVS7(! zYVYq$BaHni_^$J42gumc*_j^MvolRVw3G0Osi`?y2^}I!%}A@3MZeTT3r#3h?aa5c z;9!Etohk0AR$H4?(?EsY_E$qQkv;R&gGZS>O9ncVAAWabfbmPLLQ^cZhk*P0J{S)h+%Js+NM20eVwahyxD&mCJ$BD2k(oOJy6F8y{T9~W5SU=Z+(Yw2=3z}U>=Wkp~LlJEN+?ETTI+*vKKYTbH zKYKA(xJWk(%Ez8(pl(#ZgvV-fFy%Y8?#rs34pyd4Gta4 z?no@XiEH_^b5SwZFB_5K4RhpVK50ZF$P$gIg=b0v^Y#t7!eBtZXFQV6HdBQ-AZPFk z-!nhlC2%Ty5fV!!$}#pyD8G~=51S%FW{D`pCgd5cfoTq7Rz2nB&9Q_m!!B$BYJIlK z6nnc+UsEgefzr7X+Bk6FK)QV8ayoVD4E6gJV&E3RzATA&4BW&;^cl_NfpjB$RjEL- zkA0`=(N}T0VG3vkxl?!rYTsVTyR*nEP2(<~F|0t>`v=B61`D8NkGVM&E?b>M-OVf6O)r3vcOs$^ekahbt8s+GB> zDMSr;ufr8qA+xLGlX2>abYuuyglbWJwlqrzf+q8m(ZJwozn(wqSWubL!@-fu5S$j6 zY!t@p^{T{OM_)0tJT4)CVWy_8j#im}D?(#?+P7~X%KW#&%KrOb{2`RKPfnetO%thw zNlgZ<*wn~B+aV*LCal7VSMP!oL^DE5Ey|6PuGXy(=)j%%n<=9X&x986_O@2~5@CXg zuTD!Wcz4}TczggmPXjV@nJ!o0OUP%YQG{t*3n6LZx(^S*fB5_VEge68Jl(?kHN=Fe z(oqLvmH9c@Z$UX}Kh;JlOa%1FB$Ij*{}dD_fnQ;>5x8m)klc2`zLb835qMnpG=YGv z7qW_10SO8G70J^^);dhl-&1<$So@`NuP4!d>_8|V8=J^5pupDv*CK*a&<1UDvlP7$ z*0X>hRZso9VYV-R|9iBtFO3u4cmjs2cQWV)qLF?y#wa|v)@%_;C5Ij^#Ks}@z&;K0 z*OD2^shVenVawQMEwmJmcXMiEVmO4L{L`QQUHX@w{VM|QjqpqYa0pYOP#2B;j7R%e z3w|TY%Yl8n8J8{4@*Lx)H|^cMH~sjpew_aG=RZ&X^Pm0&WrFD)nMffk1U1?#E!h5s zR$=anvHiP0{xQA(@kb~a-@$nD*U$wM913-&n+&Kz$1K98UMM3L6kgo1+NcT3?`-dk ziEegbhI{k@IGrVgt%?2wP2kLNl#}a>8^GiI!ZhQ-4WbZw2}7NiFF{vE2E>?yri+;_ z^KobyBB?y3-zk%F4ONwH?4el049!Sa;Uy48#*Sm69b=n@6XA{iq@d7OW1^r+3kDWx z&T|#pCPFtIN7kEp4LBsf6MYD7l)KWl7<4?5x@ZCKWO~nniPyeYr$EhXdNHT*irmJt zdGzd=bncU5sZl|glE>yI)1JP8^w{G&(&L8*=%>E)=6i3Z)0b`|w4h}2cM+{wUIMp6 zfkv=JLfo=xGh+hb9fl_w8=>FY+8O$80^Y%Ub1me9w}7`$%W2DOD4OsVyp7Uooog-~ z`;g1zI9_vR2lrx}l|f9^$bo|>BoAzxn6 zP&dn10$qB6N=K*I80%4sWS5y*QqGb#6Pid?^z52&@(wi7&e(>_6>FK8rrJlal5VlmTtf~PX zzX05NSu$2oH1Yl~A(72vEog7+3mQu`(^0nHc4bB*ueep<#*;>$IL|dV)1q45)u3U> zHRU$9&8JQM^{ID|P+}zSS^0Hn!=kb?bj`~Gd586z}p;cx=Og+TSrc& z?|1amO3JIqGN6ZcODQUm;J&Rn{r!Lbzfv7@(8uS`q-*fq7WhOvFl~n~?d{%_u8a?- zO$^267W)$n7)^g)ORhLo02x zeK&K^Qz-S^v&L`zVztMz3KX`n`wN(aAjb$JOuPwl$tK^)54t4jJ#sVGT=yM zl*8Oi!xosMV0hq#_vm;Htnwr0Ea#weXHKX2a~D(B=B;Tr2FbJ0Y{om?KE-)0JgEU? z5$ANOM{(4HHSHWp`HL7!@yWtN8=xvSucE%OT9jXq*MLE!33`?0=8y-(WfQn)f!{hG zwlXhvKqEWq8q+rB!xs59*=EV3;xNH=9eLX^AXw)tZJdGU!9mhAu#h#ikXHp!?|BR< z+Nm+c&Bru$=-Ho{nM_N{P(=OI!ec-!yn8COw6@T9L{3nqa}FdPh5?Cz|HwJu7(Nx= zKJvso3=Zw12Kqp}`2He&I)x187}YS+&bv*>i{e=0h#R!|Y+`0A^>=rt!#j7U4m$Z7 zQBG~p#K_zf_0dAw>W~qKO~{9B)Z->yU^)MRS<6Au~DiUAD>JO2wL@U zl}YNHhCyYz05%#uLpV+^b!#0}=}_sbX@zMp=r`}^jmdb7iPd~gp7pdV55G#wDuYdH zm6MGEUL*Cj3D4#&SnPXvcQ!&8&!4?O_|7=^Ze-#?*oGdw1`o?ThL1 z5~w_ECNyiymi~CZpZW&2Zl%pC7Yle@#zQ%)g`sMh zb#m$N-;5&Ikwyu1xrL%S&aRx|(47UFS%26H+$yv^UEPG7jRG!Zf#+5j^#UPpTH8X2 z$YcXN07#OIoaE3D5Kij}ry+!he(HrLN5{v~ZNjmqsmp$s1`MmUL&vZL)-@tj(bxSb zVRGML!rUxlb59R+)|YyFdIO%#D4~PEtzw~fMZrM@$x!L($r*6ao4Pr_aYGLxN-zS! 
zBuZ;jb4zN6ZcQ`PvuG#^lSwAO0wP|D~7G?|%QsbOYha2nUrsLysLd zhR-=Z#Ft@a{KxA?m>P693NFR1%pogWDk{)Fw4jhD?mIg=!|S?v)28sCo?`-Aq+eB7 z8|w%)6o=&51n#B%D_5@a3?m7`%OWOgyVx-!T!s=i;f?45XWc#Bgn*By>v-3VhOnKg zBQg6!8`6g)qjD-$HzHuUX50`u70O1!?ldGcv7@u}t6*gx>S1bb?g-d-@7xuFkBu@PQP@lmFV{dG!yGZ3B{$726jT(f`?#pUqAS3 zOM`=x@VU9*k9zJKkmj303S*f-6CR{3eeklLbmR67<^nz8V+^(KRM$*9(~Z>JF-C~r zuGG@FCHiZ4WE}dcL6PbJM{U3}lD2k1C*51<-$wYrl{7MbgSFixk=|9856FXBSS}DoB;V65 z;RPtvb@ePyXw%S9$J_`UjF}}B$aV~e2*veFGijKpfhO?1xqEA@->h$(1*c=+jga5P zQR-#=X01EUw}c?v+E$nP`Z2y>6d1Zak_N9qvj{m&j8jlb7(B)ZLMUfcW~nzm9S6-g z-HJ1fy-Vb+@9jygy_*U3yd8X5nB`bajFS$=NF&z`%#A&a_f3u9pD}q09-t|sfxcG( z-HwNKpuRQjCEU5Ott*YRbwK0e=?1&N&M`(Bk*@myZf8SF+D$)pbt6cl=+2{fOaL9L zQ&HgNL%Hct*hX2^+dqrcu9;*J#e4!~cL{zi&)9@qzz7b(co8991!fk3dXn+1w^IW` zlXE6CRf8dD0vqhajUKoTzLM&~Fd-Z%^&e(Cy3D(tT{3})?M@G5hvYic!KDZIBLgXkTtXAZx17&pR z+#Sr7?ck*b1!|PJvXAjUOCL4x>w=EFFAsOl&}cpZuN?#5)3ilBM_#Gt+_rDUK+*_| zqsVb%gwRf*;Ms3}R;U%8--AK92cv5fbM7o-ViJXXl)jyXuA89cZeZO(+ZLhWdhXpI zPo@W%V0@n>-83R*ujFq90(R6_@?(T=`Q_t3c)J%wVjmx_1lT6{k6GEVh zj^lv|xSe^Nc-@nty|XK1Jq_|(k@b{~l#z|FnVe*g4{+TEj{CUZi45)>J2f$tE-=r} z0EdnRWda=?If)y4>g$s>Q~?c)Mq6pL|7pAgq8Vcu5ileBQnUo}zdpG6${|rOVSqAW)-MdrIPZ98#)SF;VDp^1O#V^yFZ@v}Qq9-4JH2oD`+kND7;{}=I(C}dT z)vtb?t`bK4S3mwS0`FXU_0>P5qsLA#=%ECpyJozw-+JPy^ml*zlk}V4yn?XOn7;e% zXHil%rQ5ehQwz$Iaqb^{_)&WEt#|MQUx;;bvZIiki-AUQ%qJn#w}PMT+qb80fuEPZ z|NXQ{rKzhYefNd$66UZky$7Ct3mhu7txV`o9)1i<{0~r^#*;#P@8-=xzsFCUNU!|% zm2?f|=WqV{uLv7Ej4;%SVppH8T)&q7^xB`&tNhx4@xT11|Cn|X;;UCs@8Z-nlc>B4 zru7N!{cr#9j|d^R=myx6gCPG2Km6ej)5C`jr7c@Ghak{{0zC?| zJ9+wa`iFn~NBAMuUM2$-+#mh;N9lVnJRgdTO79>4_{a3~-~1+&Q74AK!Ta{iQ%|Fy z&?d&?2yHxb?sWR)|N2cjar#WQvT@(Bxi+bYYBBV^?>zf#`W{xgJ-c^>LN|n=<1Bsg zPe1z^7Q2h-;QsyTyU%?mCYGQ7@_(UpUkM@fzy8<%8|Ak*{kQ-AKSJsK(T`tFk3RZn z+S0cf#iJ?xm;d~q(=f{X`}B!kKb6bpzx&cH)zC?)>>EQM|dH z6Bd{rd-&n>qaVJUu7IE4zV-)A(Nu|3}|x)X3x=VbaiwN}Xim zy13~n6iaBVx2G$8?>o<@7hZfmJj+f%EeNu2Fed+nw%xpWlYVBH5`d8rKqo`kHVFz^g9DTe415-WiXzOZCovmHz)Y;SNjo0768~bs(ah>pA=r_Ix z3@n+;Lp$K96~S%){sZZ$M;}A!=}f<7EMLNavw!cN^zuvJN12;Szx(a0DCcK^dsF(m zfBTbk_=&?YR-`YD7H_}vUiuZ|&5*SnI~c!DK9Qb>rZmXd1rv;^)6neiUin=*K{?j@ z()XSZdg;RG6%#Q~ojrRN-t?<<4*ogGx}15f;q-(Uw6FZDTIE?0W@rwa0Fm_IXmwZ6 zTEgmFi!h^Sv$L}WUeb+_J)bUJKhGM%ZA4dWN8lce5X&Z%t#3d5RC)vHdIlap#`7k4+`*u;t^NCe2jyv=HHk;wB0T(K1bRX?(MpyGGZm+bbP5cLo1h?) z-c5}@&prD{+Oua<8W?D!&l}U*ZyjZc$VcIE(OZrRN&d04@4&<0;)!g4qMT_wvA1rF z!pBdfqsPw&affUGTNca1b3e0V&)_4X0? 
z!UALd*|cx}j&$|PP1^E4-q;)9yea)3|KXo~O+1-~{ES zVKi;(XAcIydc2}U15Auer`Lb~A>K1WHI2joA%<$sGY;k$ z&;%QDEoFf#WgKW|3E2I#$j+Vnpo5Ndo=AoTqD1u1HbGDIjMG`xy|zKq4>8yGqS(3T z!!$ho?9ChL?A5E$hdqRie*fv`(smT%0TioN6oQV&pG><(ccf!iE<`j(H*0a9+_^U$ z?%JBRz~crnUbH>jln#NjE5ox_GdF8!Qtr6O5|3k7FQ%K| zavGX@cH6G>D8EMJib<4@X$0<3aQ@nd?|~&6UpJSwq73ir>`nXN8HNU*!mIhwxl`#D z^U_2MhHWQ4+KPjnwl$_h-Ti4Viun1#8|e&Q+%diT4U94R-g&1P#c>Ps^JZj_p|Rm~ zY2;cOT%1aC#Ig2K{=V+b=@5LW3uR{!LtWE_8yFaoFHi=j;roqj#J8)rBOTbbJMG;D z&YMuufqM|%`tg}l=`?MeMS%n)XiUWpDy4sy;H9(Bn~?`DIqKr~Foxg9wr@=xcq2Pe zdTVA-I*6J%di8udHgp9;VN*JYXK(Ai{pr`>NcH`ALZSsp|i zGzd95>g7!p{zNbZZGqlsM>7cA+ufVCfxkV-Ok1e84catvYz*VUsjHVUSY3#D`7kuG z2k&ni{M~4nC1_wA#q$V!=achiP>NZy!d$R->-MxCkM(9Y*lD4xNh0swg@(^uzZC1d z-3@w2V|zNhWk=e(V^?ZpUF!(pwKt|F(pBWbljqN-78OtoYKOY<^zS;D9^W>QMv&vK zBLkhkemR{PzLgqL)^}0I_jVjgdst6h56z7cp>vfymv3HAm!O3!L|crYgicc@aX8?^ zwUu%u#u0SPPz&#SAMN=beg6;}e$8M6oIn_yoET5HnFB9j^gTyeGt|EwMgQ4DkKu7g zsRQ4en5*hh+%K_4`^b4{7z4n*_MY_6z5}TR1EFJiiTY>Qi{t#z&GgQ#i|I1s*)-G?aTo=m>FNm?KAzq@Vxtm+Abu3t@1H$T%yZ zhFnuzd@cTLoB}FFNsfs~g|Nopa)K!?<=i4(ueJoNLBKT$4z9oSRIFz*5WRa-cX|Y& z*>8Xz{rD&r_J<-&w3W$Ui&slaW7-3=`|*!|7#@lv>TtANW*GcH}k z8+e6DhzS{Rf@wcad!Ib~B+Aq42wmOjp+_H~{>>=h7g1iXBQ*4+7hm{Z3~W7kDzrg% z(iiPgD2}z4OelszYVmXZ-9-e@GYF%G0Cr%79mFDf_Tq(f>FO1fV>a%ANg4vTwSQ}R z_Ni}$Fg!9jhOlv-3C{0~Ev2nnwxOK&q#yn0hgc|E5vngSdAG5C$@b5qBo8vBEg{_P z4+ZqZsT1_s!<1EzfOjN9AD?~hS(LtW>6fp(5@Vv8Im7~WCc#-4-yoq~F5oxpwhP5; zGfJB6HEiiNuwP&z(fg_=QOk5IVO(uMd6@}9Ci&1q2gBR>&Ij*>GW{%y{SUtX1DMu! zJh`{S3%MDkb>PL9P}0WIW%@~_TOr`C3EELHR??gOB75!UMiEf@GeBSFO$4&d+hN? zVn@0o$BzN$jnr{j&z}PGcGghdN+(X9ig4!ZSFeRJM#V0#g_S?ui3o86d3|a!lv)~; z=S4Ub?dV1z>_9kiUAKy6Bedz{@&w~TkKUW_y%UiOdKw>n_z}j<<;W{@)PSQ}tmWQh zh>%H7kLPZ{F_BI~p)|0MEmR9)cGfPQ%dL6~>4L6X7tz!VpB{%Lwm^^@0IS1z)te9-~w* zllh*#d&ArQ&U^2M0;&-G5K9U)puE=fR(L}-V*KdkA7Wf;VUm3Eq4Nc7}_`nA5`&d1Me!x;`tJuUg2-X zsAUvG-g&Yj$2kGPZ@u!bd@j15@fVDV`ODU+Y&hN;7*jzE9;>gjY~KVHoK z0gQz77r3fzWV5p!o$0a1_5j0NI)Cwa8X7;(_}iBre{xs4dh=46J8=O^`2z1~v=~N7 zW<1bG52LU>{md>r4d?KHoWMIdng(`mO)otE2>nZ)W4F`Eb4OVlzLV$<1k-v#%P}D+zM`kc)&hBjxxWc2cymb=DYgz-be3K9?>nWwdug2E%3~VbmSze|By(ZD$>8W3%cAo_!)=8k!d1Q=4LKtKqsI9C;f<=TzF%N&8TInt@>k zVO>wuEKEVFSukuHx`@zY=tgf8a&5>$b?~=s2>bPTfCjHKj__o3K!>QjL=8`|8>@2? 
z>q^1)1cLDrJWf8BuCJK!!5nX==it+c=Ad z=gRdmqkLhcDeoyL$gLB4s6FD=Z+t-!bkq59!|+QOVYOr(^%@V2pTeN!JeVE!1o zeJS-5;j(|z=1_J=Z;hhBv9_5p+JzyrfpIj?T<(T_MjiCfmR&?K^q@eD*Ii9#=+6=Q z)8Rv$5=w#!d2Q-NZrR=0!*?L((y7ar(rrAI6Yxq+72$0MUv136{m3ni;NvQM=L#O) zQTl%lI&ES;o`5DN=mVDo3~V`&&R-qDbBN&vWG_w3rG7l}Pols)v3Eakwx;2mc(qYv zx-l4a!IxBo-K%0AWx{&rG;$Em9-q~LrWopklEO1k5ceT#Jj`4%dTS^x5VfPXtY_;M zl$YsrW$+qSs)e+-vpekrm+#EK6LVH?V|Uu$iszA?ktY}*()CW1gvW_u!|I-9*rcly zYiS(lJUcqHxy1`?nK8Gq+ZpYS5=^5U0 zt?O-MsgVgHtALxXO)bcao%D|ieJ^v>G_)~6*mYYck#_qIqz>MhNABonj2y;D^W5G8 zY5o>#?T4E(XklIfc_SDqCZOG|L<;O^g^uXM8bZG{c#k1lEnwViMi%G=rbh>M zr|r@;bM_*de-CBW_q3)u`tAJiwR8n#)+HYrP_7~W-30b6dpe0U+MI@vV>Cv$)9xje z9r`K;1!pE8M+Rrcq3_%v@d|QC8+4&lN$>SF<~rr~VGJ8aD!GZFj*lO{^b)-AG5FwM z{I+2v`P;ww>zJd5!N2k@R1tMprIK+>WzK7r6#SN(^4KpsZJYvXOB}%!StJl{OZi2E z*JS!!8XmQOP;} z$tU5Kck*|gDC0GRCO6bD>`X75B$vrfYd%aE#-{SMZ|~j^oIjzA-~aJ7GA%}^rRxrv zD)~o3@K6yt^3f4CA^H%dY-{@KpZpaY6dk}z(h-)M$MJxw+`jSl8|h7~?{4_>*Z=mf z5h4zzcQ?JyW=c0$i+3PAlaHZb8Uh=hM3mqC~SHr;s}V0=TP22A|JD!#Ts`bl&di&(H~5{6l($BegFON zT3)iLBF2DqHya)uO+WwnFYw;Go)!fIx;DQPiC$|%UKL4; zJ`yLv!wUS|jiHfGI+6{F$=w==e)Pi-lvSV%M}HsuPa8sx@GJ!gw`(N*8A`3v zt5Zo>MA$LpTo`SuW!p|`QJLz4S8Tx$Fp47i=38$NdU*@)(lI8n8(~2e4;P@NhPU31 zQ0ZaDg5liX{q}R|ar#tal}d#IeKs^v4~;kpFL>p5ucFX0(K3;1tQjM8H*X39e1+AJ zXVDlS5@^e4U`@C%I=RLiVE=#{Vb;^xj6k{xh1*c>s~A8&{NQ7hq$}_uyjuw7hADPn zd=YO4{h_SK&d|acDVrL=c;z}<80?q8^>U*em)-^6C?D=%sRBAPbA#xCPw1P4*eLAr zNB5_}QN5Oi{319b=*T}lK6)};ymAIFT}^uWiSN*+{!lna#;^0feS=_xVyQ=Xets&w zgkfOU?k%bBL`Oif6YqkClM9#5;nn_tcTTeef}O}mF(wVq!>5gE*aRz-3vc{6N znZ$L+2td5$=y;Gj+0O{@(8&o5l~->rpy*B@aCJeae9nq+=moqdrbdL&Z$14u2Dv?K zP;>*j1nv`Oz$0OD&0UnwQVeK91ujGei9OEj3s-9)bl0@QdlL%mL(ob~?1S(Q5fZ%^ zL7#p4N#?^HY4`4}z|H1IvxM&x!mGmSNCoC7(YAO@nT_kefPsR@SNUe*VJ8U>Ly+w0*YupoP<2U z4|>VeBc)44hB>>_+rFS8wOHYh~vO@%<{}p#@K~ZcqyU%1CJ6tfJapN*GqR3 z1@f(9AEuGUu9yoh;T1XxEnFt_c$oI0OGd=c%*`RZIBZ_Ftv_97vzL<>&!uy>uBUO7 zFnL-pbF|^Z&5VOVHbeX543SL$x}RvJKAvqy7;n|1k09IzLKZO&k1=20z!NuxLa@-& zL0N2mq*6$EhPAru@eP#25xicvfYa6KFo-Zz)iRIjjcLOO-j7gjRMAZgA*b13Wke;A zG3okX*W&ds2W-WAr=e_=$cGDfNpGX1O)0!Hz6{mA#$4XuG7S`=0TiJysw2O+`I3gr zZPc@~1A5Ae~1EFlLqGY;-TD~_VsPS$ag!{qX^H|+{Vy>Z-10!B}AWeQT8tS@djSY zYomj#k8Mx=Ed6RE-1QW5#WC7AG|xJ8JeaP5U4Ygb2%8t~A(YiCS1+d1*Dj|PK5z9X zN~?yBej8h)vL&49k zfsX||@^#2n8bcgM;}{K&oIM%gzXy;{_u>&&UbVg&t!d!SzL_5*)SbCeWBVpT?ll*< z?$@@<2da}Yiz!Qi`T1LKznxBl8@@lc9DqI&yvCFVql~4Oeh2{p9u+1p5+pxsF@a<<|P7cb`tS7~i zS7Cmxt+|fZQAPLdci)F0??)kgF$774_>P=&002M$Nklv3^uS1OV4?lsy${oC zZ@v|Z)Gj8t=LqrLO4z^zAxC;#Riq8QJak}xFphuz=YNgxP`y)n<4g*{moHvQKl|5T zFmX?#ly(wk_(FKuijRhe2uUo8JMhJZ3NiMwwJ?{V@W@&nCeu+SO7Ss*=gjpu+3Sbp z4E(vSNrs>TdF<$Ml-^&{{Paevx>rq`t+o@-V_NvSRY%tI~6zmtziUDGHv=ex9UT=X1Rz+-rFp}%H zZiRB|cjquHQjBZ=nhBko!YgtH#rUIG@A~vJ-wNeNWkv<&;K2j2mhS>qSe3oj&3NNr z)@~3Zj@#Swnm_BgTe6(Jd-o82*h%}3Md-3LqL=UKr=Mp1Fl!U~*bsyX-`#K(uJjnG z-0`(D!ok_V1v*p7b-n92{o#g2Dn6Q~y}pIv!TfscM29r%cMkO$zxUxs;R!!{7@^ko zVC6TQHa3gm9eKL_C-YU|vY%bsD2=;*QF!#|`FWFRnDj6-Fpj&R70bTQtjo^{If?nc z3?KAHxq*|Sd*WjKKW3q4c}Inq&nsY{Q2=(rvLKbuU%&F3G(_FzbqvO)In-ZNqyp;% z87X)t_;WIMGK&YSNn`@5!_#B3{euV$euAe50Z${zwyi_y3gdJbVa-j4q94G6d}83> z09By?LN_8=;Wsv&puH9^Tyq$1A{_et4>3mlA^OO1F^@p$`daa&b+`ERf^bN}_CYAj zz$$IJNtEM?n^|cklZv&tc74zwLbtnntAx8b*v?x!@m`)IjPfeDnPc5Cq_)_Z5+p|uzn&cnA6CNc8t*|iOYet?~oI|*Mtla8GnOuaYiu$GU;I@K0J(HanF zJua|ju!hZd3?nv#TP4{x)DzknUT0hxQ;tb#BdCFMcvz9dgSW+xo1IJ%p%c66BIxNc zoI?R+K?OXu7T&%OgI`nn-K%e=ci%gXfGiI{!JWGaEw*AH=!zOFEbtvi4K1wtf*xsH z^!)%rbOYsGx^j*X@b}YZl>g0aR-|#Fv$Kiy!Msn1-Xc=?(vkss)FTh7Axsl*^dd3T z3+!Oq*SC}F4h#Yxq&MDqn|^3W`}S;RZ864IHc@exVmA$uwkN@jYi6H*;@f!MFQ*$< zhgqXOn5M_!X@tTqHRbt2yhtVfvI4EVHHS)TIT*DvUkQr98=q 
zo2%^j>UlpN*AA42S`_FhhWRjx#yHC6W^gu+J!h)17TP03k@dgBT-PxkaDcJtN<9kc zVkmC6P~HZi)jwFEj6*W+{!kvBsE#zDMEBH&84VZ{LW1z0FS1<5c9Xu(4+ZMjO`-_Cm{B z`uep>q>BjIx8TFm7#0j`)GO72=T1-40C`*kf9W>fCDuO<1H0?6iu(5rZq z7b;xO@+!Nayu-0ps)-iiJuVG~{PRVnow7`p{w@40sga z!2j_EbKZqg)QxgEj)C+vp1x~i>~R6V(gRw{Jl?|CpBbJAC9{JmvkN}g1+8@gNEg6)Z%?UhSL+n3L!IhH?-!y9(t&F#k!(TD6Y#b$`j?HC4_V_kE+2}RJkOi#E5 zgF0aDW@$hxQ3W?<#?yya&PRE3^wZ)fdru&sG&&bErwY?$@*E$!j&~JLAMM&YFc49Y z%`~8iakmBIM?Gt)-JGy}TVJZ_Bov)FV4A3ad1%_T%FdN7*tayG#CL3=Vjs7$e; z4~qQ|xHC-nlbcsClFX+bj1qm&b{j@sy~VW{3!1UK7!h+58T#GJXVXX5&JjLLNIuVZ zFpoC1AzNTXy*@LM&P?E0oI{4_X^%O50Ry+|fniQzjS7XXGA8;Nw2)}+w>ga zcaI~O8`5_b*ZupffqCWCS3{VSZ7Gb$gl>U>R;GBAfGLE!cC2H1Q(yVbZ$i-Ai?a9_ zUQBoWJBlLq%U}I6efaUmFX~o@6E{1WV1ob!7==s)+HS&w6g&+9G>@C#sI=<6 za1v>xeWOE$v=Xw0z|~3qUc9-okllpixrS2ZSr41c#-xJcFAJKq2Bs7DDMMvNB)Rcb z7M9|Xtz0)1jLNb?kcwOi;g-Uri^b0H*NYb~L|CEP|%gKkJPgHrEA zu2oybMNh6Fpst$}$%e2wN!kqSa-z%i!@Pl3wQMC-XnuM8SOh!xI-w#UsLUv=Lx^!xFKBcrf0#m&6Xo9I;s4Ypn2^Rt)wP#AxvUfJW7l4 z9=)K#8oN4Duelv3hKMUb&m!D0i{@l0Bt~$zs=9n*q)$g0JaIX!_9^?qyREF#nUrmC@kIs+OMMZ z`WvsOZxiDEJnNg?K;kk=To)etKfeA(Z07VGp1A>%Mw5$|E`}9VLxFt~h6?&2K1CF; z&qtWHo_yDeO2c$n?EDx=hrHGuL3$G;nYxU@h|sAI$=P6xrsC#Oyl zZvPr<@}43z@@eRxCS5vziA@$Zr9Zv>7CeR3a?r*YmVL)iCk80XlV2S>ehl1VAY#HE zr@Y_){?+vCx1WtQ>ihTYB^-Ph3-KrEx4(UraBC)IV0H2qq0T#cOM@_?4YniyR2t+e zk)nCg&=~8k?cLmiw;Au0o1C;YbrCZABwj$?fldu^uOrm5md$L$sW{WaAx+jI&^m4m z>34VNW(1{uJ^kqiFT4~UKGz8kvSE^nVP3<7q$T1J%Hq3*oEx4$jq$>coCs$F$u$Uj3{yKB%A;T4B{zX$nnwnbY}7j z^nw=)-ZshFwAeW}_!rfXJJ1jafhrZI0)qxI0)r2^wM0MFA%wg0w%%fSP$QdwxZ|V- zpYZxa7kbkD6pgU3|1UI>`K4`C`Id)LNeGkflGM0M7z#o}tY%jAfM|M}L&ySOP72s1)3$!M$8+kukf#cv?VNo5%gL zh@p=t!tnb-4&ar1i#w!h#F_@)B@~Z&%J=(@b@0p<;22^K zyin7U+P303W6bR&LZSKP1LI-jn~hy zaS-j_PFSj+3%bOvz2{jQsDj$U;FyK~s2KMU^12moZwCfiH}AQLf;!3=H$;|!6&@}( zo|%Nk#&J{5IbQ(hG`N|j{o~_=SMzQQoBcFX);x3b82vs140>s6oR^@X*ew>6OVjcx z?4p#7q6Gfg9*w#QKj(QTN_Y$9OrglkAUjPmx6Go!zzoJu_?e%nxfw&Ou-fJPQk zpoVzo`ZS(RlyS@DOV+V@)hvBJMnlGUUh76|(BdZA-`~*#-R>dm6?(!;3*#kxmQ5W| zuG~R8$dvX=e=3Hf;AakHY?}USMbVjt-}E%FDOxX|cREO98`hn1=GkRqQNJmps{K2rzgD=SX-UH;inevMnzK$V7dZa(+Bcz;8_62GtlWY?R377 zZ5iEG3oU5WHfly0LBr25{CSFwn$`QxCziXdbXvcd$Vms4fRq>zO?5wu?@p%sGXsX*uY&AVRGRy#Stgirn5c!IVO z24kGRii^AR#biPeTGdtF+yyVQuu5rEAw$-ht-;|Ki`D<(pMDk!fF4N|0%6sY=|mqi zwjLmrd+{j(`q|B#1_ri8IJKL;=+$+t@+iChDOk%0-RMS7g-Vb+&+28(<)a8Pf%q*^ zy>lwn3Wr-+>uX4^UNH?OJO5x>#+QMCtv~^7LH9D`S%eHH^;W`o{A^GQec>iY zDwKVcrDu3#Sd|=MfwH6(v8`g*#6;&hY3nkKOYebC4MpEEFc4#)4?*A(p`mVKbcwZ9 z-GoEsR7Gr+J1YmU55665dUwyBJ(K?FpZ+C%bo6K_+FD;7kSgjL541+R7S5fl|9|%0 zgUPbvI?&8=@0IseU0v1H-oTO|Xh2dThni7@MzJxm`cb?4TXth+A~rTQb|$n#i6j7t z0zjYvHriLa^4?cHQ)a*KTBuR<;%F_t`jl)E>^I? 
zfX5vOU>)4ku&@Z6jnn6OpDOrW^tX={)DtM^8!*{r%GqR2Xsr~N#bmw^(or73lVe!E zH?gE^^e{@K^HjxE*y{Szu|*ZHEzBRx<*((dj(?L?NjG z^VwhwJyWj}c=WO&C37>Pt-%wvREiN;J>=V97~KS0@Zj}s?z!TrvQmZFH+6A_Bnq?E zSA+7XoN0~L2++ccnabiC!b}ZeM#0u-vI7WP-ITiv6Wzja;gLX17*kX<4SuAw*-PI$ zu$1d2>HcJeH5si3rMuTqH?&Sx?ma@Fl{wjlz@p2pLSXYT;6_=V`D=TPK)ZpzC)HDcHMG)<>IKa6Rn-&h5Z^0}Hn10R=F}#eG+vL2DqT0aos|B7D=}BC9xKaWGjFq~+0gl8#fLuk;YiFEvyW0fk zx1j?)gSaPOz7zo{KK;>;5zsdnJ46xMVxbDd*Ky_Fa^GcxQ^Bb8b@D$Y_qlp@1s)C{* z?^YJ@TF(g!G{BfOkKlnL-B&TT@6r>48MU)Q8Bu%510{+%sFBkZC*O1B+VQRd2Wi6z z&R%_+V{y)(I*oCNqjqlIN*{mrIc~C2SK8-1rR)gc1?(&IOF6@~<}nv9>wRLq?QO-L}D1VOOp-HdcWRprl=l2QZ2NQS?c^ zDO_Erc7!yG7?|p@v;M$0`)*z92-U}jFt8yCZGgjiLJ`#sII}tz^FV_uXEw9yZbbJ@ z@VLfYvwr259>!K!>BV7?mu?g&BU7*9ZV4!%Wv$S=A^J0zzVqw97O@#>@Dtq=jpr~k z=P@3-%tNc_mr;HVs4{XMw|>q|?cn;y-}_4f5iO@C+IkWXr$Gk=WpoX}eTg|%gZEcd zHh_)Fp0rh0#1<5fcHrA3je@2)hm4nC6wozk8gS^~x91+qFU8Vs`Xew_80%Kv|NS@L zP5=JIH(Bw$mHySOkKpI5AXB%o95&&*8`62^dKZ(*0B6cXVx}0PaZN?SpfBzxG`UU3 zXtC~v-b7*1uq@vh>g?g@lVOgx;y5LKfAa7F+YeZ^gJy5*IKsPbTN$D+-@0}kqiA3H z;Pa1Bd{^O_;5>bD`-FR>o0&&zcpMC~?ZGj2-gSW^oxsJdG@H;nm0q{N=rN(kOx1ZH z59FENJRZ+u?1Toqpo90s1&@Q%VATw-Z^0{ShbZOJ9Su-)h4w+W>?e3y1s?8&$NCfV zse?Ry1h4BNpFFUY@o=uL!NWGmCyftdENzHDfQ+N`yoa$iZpF5H-hsOjz5^B@ekXN% zrl;FyOybQBFrk9Q{ZOO|ciA zMk!Bm59;tLy}S?*8}@MwCXV2E8*BAV81}vU_d=MGAVuU+2uVhsqbO8#kJ}PZqkw8@ zS;mF#(H#z|AAJZzLKMb|-$1BqK{?&TLM#b{=8z!TlpT>GqaqC$3Q+Frs30u3 z&ra8$+#8S=qGII19o)C7oT;!bA!UW9*=Ur_5Cn}D5WQ)RmzZ<)QY#~UUB;cS1*@j* zb23Vip%XAhR(!Ck-Td^^w1Ne(3B{}l23&zaG?Cx^eKG^faPO#sj>kG6vIf!jD1|Nv zgYLabUAA?L3Jf^WgkXbg0Ap4F!m_9ENMGw52VwNx3d(~5UONo76(ZTn{x|;_;c0ZRvi&}^(NNf(q1Xmh_cLM>5=SCQmXd1n78=O=C zXki>wa(5UT;qDk}@ntf==UMTDLCJ&-wqWoZ_aC`mQ)7j~K`%JpU}a6AbEUz4Lhz#p zrB5rr0dShgzY4R`^|1}34wDg=V!acngZ>xR?M3pv= zSZe0CQJCvkmshds_733E${5D@(AO0ONnjy^5~;-#Jp}5Eh4W*!!D`3d+gQ;tbYSr} z`feCN8HWfggM#1g3Kt4um6a-2Rm2AaT~rt^1AwU<89YrvY@7O;c3Bk*r36C|xVgqh zcrgTm5CxgNg0y>$_jC(vM{sKg#tNT%z)h?}B?!y@9!7?USWBB_rg2P;-w0zGWsO2B zo^P^Js3A@TrlH#A>aIb4nw!EE;(OT(WM1x|ONu22dkGH%+B5CEaj8 z?$QFsKHVXh*Eq&6R-2b`jZ;}<=EcgYM;W;qYcyyTH%Fn3ITCKd^rZovZlO@|FwVY!tW{|W43`?FDOWY<`Ai77R6J(*~asT z`i}BvxrvotBcQ5uudE?NFUcQ>sM^Z8(&U#%HSjrf8_u77|9y(aV}!ll$_J}>88D-) zn7R;%;s1eLv~dZqp^^XVkiWb(q@kW(FkZHd$di)7<$0< zI;*xG{bnEK8_rW>a@dYgD8Zd=ze7DZWvz=WPG-9pu)_v$M3h4O( zV!_wIb5MT+04%$Wh=1af_>%b%by+v7)#i2Ek91s~BInfTceZahy>{g?@<;`*r13P1 zhm<_a^PHs{$_^SmT5v&LU3Y;4@KL_yHW+(XYwiCw%By>Lb(3~1-MikPF88VWlEEG0 zC_r3k_uvce3*HDGfad@$))20;xWkyXB3Dv$+C!Gnz+6KvkRloHqfLCH>`vy#IxhK} ztk^Pap)qk+Adx#G4k zl~he|j)go*Iv*?OQJXa$O-?2AI4fCHBc-u!+Mwe`ZxHzdT$3EFt zXmQ#__^!~WT^Pdx(MvmF0I0?37{Zp3;UtQY62`axDM= zKmbWZK~w_@1QaLL2m)$Xf{7#S$o71;yTM8u*H9t|IzbXBL#3&~{?$91p9b+X2t>tw zcnD-|^ikz8++BGe@d4PPC{bZveWhp>N0{Deo*FK_Z4X$pL81l&aL-As(9sqNvU>qk zeA-}ct&BG@IYKyc1qtzuOj)t?%rX^Ypd12&WR$bm8I#BYQ^v|juN5XWgwz>Vt<)!2 zU2vsFW>?p0NNROYDPyh$-!|_QLR~*fb}Q}FC_*D>8v;B)+u?fzrlqDqZDR z9`U(nn2JZc#q^3Qc_ZX%vVlF|VxBubvkdR~4R2#?P-Ml+_;AmSE0&Cq>i$RJs_~>5 zMMZ(2)_yEGRT#SHkGW;2{yRp12tXtH7Rk!!A8-oW0GWjsZ-cA!$-?=q@YqSP7vbE5 zi|{V^<}~OPntLk$LW@4ihhuEs zF3wDZ(~}q4L>Zn->b#AYRpeNaYA_%Q%&eVJ4#FV!*Pv;L5eT7KFk%-Z$cm8)V8&UC zz^Njx*d{F1_Sdu3TvZ8dMYw6;x;W$>JcTr290RMOVqbWHI^&XND>w(9-~oaOn_lN~ zJnkx#CT>`-f@>&>^q;OrQq^%R@caTG;5Rflu!_DwHh7T5u__Yg26L_I>J$axor%YU zE7X)pdvxWBB{PLMu4iy#{FDFtpNBv*iP6F1SlWP{*)#+SpTmjUz}((*WkXGr4aJJ@ zXuETar1I%i;HI=10m&n#RF2)3AC)jS%x+suA;crhYq?fXHaX6@cl;omd$#wh<~{2W z?4?@jzOG0V>~R&k_>QrkP_<4P9flz5qkkHD?2p2s=7QV`6?(-%`tMw)y|z{U5y~|5 zCMUjmAq*%sfPrX3IB&&JWrIAvWyhs{-tE9bsA9c>a&h+#?u58%e)#cc1X@}}D8~%v z97VZu6@pgf;u-J?#y$b0caDZ7l?2O^5}6CIf-9`{VAR0+6L9c}3TZBdcfHu3$hqwq 
zep%VmAgJ*t#!1X3zlPjcDQEIoo&z{qF0v~``;KRUuCm2IWd(~sjGykhRg@v4m4?TH zJ!P(1i~P-q@xw%mM~ghb3*HHWhUQ1;LCOMx%j_4WFV^cyl92Dmb=GpsLP!8sx7hay zeVZK}OFx}iqDWTBP(VDZ*yFm4`Or<%jEHVIo^!fK5K!EK{A$y#t>WWLW8rz zzJy%DwLDB?rSfVk`?-hA0Zoa793tF!TJT<3zObi()TjJpQz?WG*hkx<=@>ipTC#ak z^ajeS%5pnAfRcjliSyD;+Y+=KnEczAg!ilTl0pF863a|RJO#TAT#mOV*c$^AZXk(= zWgG<60$&n2QMyFc^828B2;a^I5Z{%VJjrq{r=SWGbATlJlaG?9Dt=2$yyG94=?=uy zNfi?oVj;1X@$VsQE0{_^3mt?!KBH7P*)Gq6h)O(>k`?T77c+sgihM{gQ&GV#dSj#V z$I#eLQ~AE_(IwMIAxugsXou50p4kRUkRb$f@v%~C5C(`5%r^Qbo#L(*?*Uc7g;1AeC`c=adt`}qZ8L2N-})QYmElTtSN_&a!c8!Jg)h7*^aUt^LT9;(u7^P zId`}n$zSHN%qWL^(LcX^kt7Ht36aVQfT1(cWLajloU%Ueo60gnne(}D;J*2N&p!Gu z@W*<*8TT`uWF<4x5BsdK!Iw+rR05+yo!`p>eUx34W8U)8a!dG?^*Cqjr`__InZ;4t z?L6TB{yRYv88DS@t==mbxE`^esTpqp-4eH%ua*-^8wCYjwJ>fv*OpO;*4c-r@=#|r z$yGj$I^xo^gGn%Eyl=VDwrFGSQ9f>q+w|W%Arv}TMau47{@^LM%_X@Bl2hbU@XjQ# zpsN8(GErgVD+?>WZAV#y-}%E|1no?Vc^9Lh_Z1Sw5U~vkLYevQ{#!hh5~@7J$`b+} zSxMrS{HesS=so?*93#I~w!{~pwruwZw>I<82ZJnq_|d27lTYtZ4oVb?u?m{2FX{x+ zaA~PvFu!+V;JK0ArlTRn^V|#b>?)-PRW6!JIhJ8ROu>V_cHAv9Mox^UmmpQ-1&7qp z;*Ds4uv4%mH-lXoR6^Y z_zvu#LqS&oVo9o`4Q6&IJkK#`NfeXsTb|vuoenEzfM|cnH#`s?8VoJZYhk9T#Ax&Z zSMEjPzBuFkxE1)s756P)S`y}XzJoq;Kd_f?U+~Mnc^)5(lfN^^E$35Y;}!2aXELlQ zg4cm^^c|iN0HIAZ(n~*46&a;7j)i%==O^ef7nWoH8f9bVhialsh!m>}hxb3jB<53!R=jWnE?$OUqIPm_um4^Vnq= zy6P)T&0kc>mC4pH7Q%*0$1LKUkWnL+tDfz+@fsv=7DMbLUU{o-AE6LED9Uz3QT_o} zU46CGU*+^l0ezWAPYef9WwQSBjLgz4&SnQn2bPQO<;N!yznHfEcr00tNy09@j^Er8 zmCK@{q`++IcBYXx^2e*>1o8JmyX2ZS#wtQgDGC;&aH(vK9|;#h@}Nze@PQo>Cb$=EHc1{~b~BLT{Uo$lKQew*KSb1L6H zOh{0;c%MaLnu6sGrmYg4IRK47a=)W6htD zbR02kBGFg(<9OtS_k^h*ZH)>=!qwH11|8O@ ztD>R8W^z>*!!?Y>ku;UK6F-VOd|9yg11$XBPuv&u`Q4&zv|Y=qf>HS_W&=NLJ5OVb zN&zVfw*cB=$xZ`T$W;pcS2B9*G@E7GKYxi&mRT3j{EvR~%*P=_Ql>vjA;Y4VPj)Jdtk-urg(8ymSWQ{RVMgm&nF<#ol5eU3Q?yq)wUSFLT6<@u6#6&Az7>^+jt;(7eac|?5$sU|A)ox{oUgr)t^ z9bR~C3n=bsoATR0Sh4!fvk+Rid9)78EX&UI<+gfX*x63u7Uj^%5IVW1Ja3=H#qxZ* zHh}Kfn_pPDYE*@$NS|z4w9QKRU(%yeane)@!rvtdqjpv zw&);Phd3Ao@HbpS85@%>V6LX^C*e#*@z=(aj#ej~u`HFK!!>!^{t5QT61mD+y{Sl? zTf(oLgaKDkhro;~ooV@=dz$^{e2X{t8$(8|kVnePR4TVS7iLNP21(+54W*XOEp(Z* zBM}R0Ca}gMV2@=sk5KYZ`fP}?hN72_L1^L_jJyQt#C5<65|Af^LMT=kS>vZKA4>i( zvtI>`+JEjvKs^){@icx@xU0~jOZ>LpqKKwKX!d?2#||Mr=Tj-A<$7N*D#2Ue%kqI3 zp?HPD0yg9WSv8i00>*pgYHEQdux{ux_oc%+aK`#3L?_P7#S2G6hk?!vN2MZ#794Xn z48FA2adb||OI_j0iT3V;gE`8zcpiP_T3RH&``eGEWmms7T5nV?pb zmSJx-Qrb^@r+Ok(r411c-(P^hxoOqqCrAc)oC?A%8?nSZp~;8-NmC65XT*8$OB4L? 
zGoDF<<$fx?W%=n1{zK^pp&a&uo0UMvV zo+&Wxts@+Ji=!^!XF1-p0^TW&ap4~8eG4WP*bsT+2Jcyxio9)i23d}!`HF_~T$wFU zX(bQ&O79!Nn2S2;Of zU!|8B0%Dm|Km-I!C00%h5jGIg*%w&IFY|)cfzAOawKCM6J3%83c* z`($>4=o%|Qv3ft5cINuz#oo*e&L0v|N`sm2!WfyZ+~YU*oRG{XvZ##mQmm9pl6=ox z)~Q;_HThH!jF@&f@t7oI^qFtSo@`|))D-2>un?Yec3Z`j%rN)M+PKMo31*aN&M2wD zY6U?Q3FXaxJXRvAxS3>YAGKjo7KYYhk^yJyBFr73CMz6TQEgB32x0qBoxV%e)@iak z_E*@bV7ZFdh7#wVELS}|NZX(h21_uYf~9Jebi5R@Lowl7CZe0I8~5E%pX-LnQ;{&w z<15q2^#wGzZ(H~eEcrt!zNn|Tb|A>)zj7oj+i}oRDKhA4T|>B*nHmm16h-$0GQR7) z6|0n#W1aTfK88}jwah^!#~j{;Xa{A;tmzR-46r7HD_o_u*tUkE7(%FZSuPob zm4GoXc`V2G?6cR_6T$@FW%8cNoWhv|0=gM?W+0i@dQ9ezxxlyXVD0=+ zL9}f?>qg!d=M-{uf0b@z!<*M+$VGBdz>~bT$7fNlg;0|7OACI$mC6M{^p%SYbKu5{ z;-jva5kw=1CHvgKjb-MDE_YEKfBX&s(emj~pf=a>lR4Gu{=DL*t8s@@pxWUZT-%-~ z(`yP5FAw@=sn*9WGFh7WeIEBzoFe75e)FSLnuKA{a=s5e zjwXq}Wr}CO)^Ra`a*rkJh;ibs&3UXz`*+H_`J2)rr|-pe@DU2O@E|@s2*s73QMi3U zlveqS-%Jz~HP52tBFZXj;0LMf*`dY;2(6IbMHvN5_ko!Em)|e`ii=!IUv23@%6H>Q zye7^vHgTPMXB)hde@Zzr*76>>7gZL66UL3W=h)g_^ZK!U{x14R*;)P6Cr8&+o)^tJ zXqlPhx7Nd}T!)MRk6c|{O`~{c$-g3)6~c)Da^wrnNAaP9ccdSJM_y!&K|d{v>+-(& zGfcegL*_@ec*NY-mD6Bm#=TJa+$K*ATCIDd?*0|G&G7B~&2L)-^Vc3yv2@DhT(M6gOyYSMy?Ew5jhiZ^o2)LozhBx^Dj=b(kk6tsJ)+J2CZG4p=CygnfLOt^&>~4A?Xz-JP;ttN_L!Gz>$Rqc z)8IQcrbzKB*J6zMd=`C*A33X+zOw&Xl*e!SXZcyy5^po6InMT5#ZLo+tChhQ$?RB# zAxpx=f0b8_%yq_7BYR#k<~^>XPTwiYYlZn#L>%6ehlY&AGxO#C=0-X`Wt(lau=cfR zt963QJ6OlPFI-J z&NszGz0Rc;<0xQ&T*FxD-c#L(#X$VXc;3AIs>2ilm4y6#0zStdKxnlF?PO0k`i81qX2Y)uF_z$8|_e8gRa zM2TMsz93S-$%wNv1a8oc$RHb9$r}hRkb@vH(N6#%kzw07$`B($St=Y&20Cz$2t=ra zXFx^amvUmyQ?$)C*upG6@Dv708FbzKCAwW3Y0^lyt{xe%CHF?70I=e@Vnptp{m0Ff zLCobxkE3513J`d5nm<;oxan`(Ri+@ilqqEm!pN9w@zw|+GZ98AZ-GPnh@dAt7cbq2 zFOF1DmPLtXm3A8G4!}C^o8mnyq;5yocKIZjEBAfRJfV!yINNGI_Xc`DGkfpSF7n6f z8{ZiQIj{MIxy-~CTZdKVWRFck8l2;&MBRC0-$MwXFhEWJT|s1Jng&>gu44)=mKE#n zT<8|95UZl-$mCvnom0Tm_bk8{O@8~^Pnn8m!l6)4qTl8F_SNi%`x#7!o3=H^D<*Fq z8v`2%H(?OMOHq$yL|c3o&*Rm28t{#K1Ye+$zH6VfT9<7VH7vtaG7J^m{nbqF+h+TJ z@Gaii$UJ7^mHn}=Cd(`PS;EFgB`hs1XLXJWW1et>_+%jfL>Uy~QAR$<%CzH)@qk7O zCxu=uvlBF5XkV<|o>6GP7P{7dCiy6YH~?a!E`5X#%UXZg}EVW40X zkwbZ3oUXYiS4A%dEtjbwC5o3YM9P&1)S&aK0eKH!@p%6WcphqJd0Sd$d5(WT_lZor zRJ`YWdTJrO=~xGjMdPBd_+xPyxAMLGxG2+?NdBdDMLRN$5+3%@XXS^>$5K`V^R3*A zgU7bTH)FmY+>Wc<*Zpo4PpQR@BXbCtE)2m^c?p z*YsLJFoHivc?WnZoyp1CRk*;J7@P90eAFJmeVMS%&)~DXr@EknPtZ=XIJF{Dw)2es z8o@tGk3aB7DzG1ZgheQy{K#Gl`P~kdV11swJ7|+tlce)@!!}A=Y@<8@hwgpey6AtEF&KI4#6`qIbv#(Xi~m z2#L;zhm&wADwWneEftTS_UNU#SB}+7)1ZHM8*&uGgoG%Ru`5p$kiEDgF_ADzEZn2x zq<3jwS!uJP%v|y;<6%q=-;;_m zF_r4~lcjNwNpE`;hFvX@GRRPzu&!#k($YjH%vVQ|bAo59ql$-pkSr;ZIb9)aC6T~d zm5G81M<$;!iLIYLIBm^u>q3F!eO-l2GAdi@#~uY-g@^U<%(E^%uglymL^GAg?jqz$ zIDN-Gm~t0`xu}4y81RJq?yKC@x`h_$9`z0su^yB&nM}zHWttM>7#-@8P|37HsYdXz zMSBExFonuXsBl*q+u01P(&t#Y5^H@iZnVgD$8Xw-Csgc@QNU8!6h{2t$AMUd;;tgM+2VH<8Jvx#Ta;la-h`%G3t9|%BKNsLr$-%PeHk6!*A z@8-+gKI`VQL)q39O0v(nM^|%Q{3Ik4ZOF_`#>C_Jl_V~p7-q%gJEQhQ8aqv1P`V5@RYl`}d zJ}V%ViRlr^HVW4A6F0#%aY%f1ABkBl&vLC^m^>}TYAX+N`*WFPoiS4=>u_>BEzi15 zk3Y@zcriap!{M$0hYj2e`+WqpnbSd+OxMLYU6JEOeDGOLerdH6qc%JE!c@*As zjn8v}jJ}Zn^OTDsNpGbySiZ_zU+O#X=FoU)FfYJUtWv1dg7jd|tBN@Knr?noueQ7%A z^OhG*o~J`Dx6FK2_Vhft6zg+V7ArJCCGwMaH`?H=&MEMdK51+%yXG9;hY2?|jEKmx zV&N#S1#%Gx&36LdX_9k5D9HPzp|CGL@H4KJTa*EO*S>fabBXfYLs!ltA)Q0&*iGt$ z>FE(<+q1G7KcZK`8~G_m&vVg!QCL&}AEs`%$vF4?k`9GJ#Q594Frso{_Sil`J|X9C zwSP{v@GLYhO~KX19(LpGW>E(g{O7;Et~_-qJ`uE1ioj% zwp}5i%#-gI&v{S6?=vyfGM=6?4DFM1#&$%1`JML>+d|7xVKC8p^6)=C)+s#8WZ(TQ zd=8Q@&%Yhda>dNCu|882ap*m3^^qg4ouq_S?C~*QJSRs+35z(A96R2PQlj*Do~_nJ zTgoyljrLkcF!!RQ+%|m@=refh+&Y{r)lbQceQKW~7Y{NMFYDp~PwdNnF$W0e!~L}X 
zeE*K{Jb+QI0r)V7_l-=4@_K2l1~$=K!Bl)jTLV|QR}4wC**D7m#0yXRle_Z76W@DU zn~!G9H5C1z5&kn-Y*}}%;-I23Ph|XS{&6h!6BQhkn#W0)lyyJ$aQ~$ja}n?c8O3P( zDat;3K7N@x5BKDG`Gvsakj=Tv_xF2WpeKMV=mT4L(qwwbR?=6P27f;N<|Nf^p{ACrK+Zy9Adj+cDjHkS9eERy3`(yx3g zw=LuB0loCi5W=H)c_c9AX7rqIf5R?=he+UKE|FJ-&^#r5;S%V`*U7{4z@|cD37+FS zlqZ}6x4E{;#}ha03lF00WDdOKmURX6i)){2#4^A454XcM^O$$7!7EZ(KR>{yw|bNHvXqtATX8b3MZT5@rZKPhuqpDAE^@M6CC zl)vULVP&22=_eBRN*UDFXvns+(dGgac_Bc!A~S*zMxQN7Z_)DQ3+1yYqX3q0J*?3Aj>#t3k1U&(cl;*tPFbbpo5bI^7Zq~j=wKZE?MvJf z#)nhMgW|p*Dk~yWnbr0SOCLorPLQH}R+i;671Jl5lsO7m7Uh(84^>y*ih7QgzyI(T zaPdk3{RIH|=dYqrDkwmOLPT+*Y0WEbl)9#b+7<|zEpwu#_kv#Y-1*_h_a!2J!deH_ z%@#58qtwJRVsN{rRY$>Wpgiqs251bfNI7|(K<;UB->6e1A}LBWoD#feNs6*gaLe?l zg%GA~0sr^aOzMnDL_5L?Oa&z&@oX#gEPq?F1+|1y$vZsoEJL5TYM^D+dUVTm51V{e z7a`v_S2Qh`!Y}gCx(G7Bbp%PE5|!EzaQ&t<_Y{PC4=;PxpVmwR1<56{B^xNgNj>kk)~ZHb9mox@wMESsSr>ndY#|y6|xVGosjhn5hs|Q zed0|^2+JsKvE;ILE+d72zyonXTr>jYvr;f63b2Wei6y$PBgUo}d*S1=(zC!iP34@k z4B;8A;b8=b5s^XJPZ_D;r1l!;d5}tSG`EI&s z=YNjqlS$mkO%PhvUiMBS0{#h#O!8$kOlkoP?2F2gX_k%brQutoVjuSzJCHSE__T$oqmL(#!9+y1wB>SGn& z!=-tB6jn!nAI`sDU~ysf`)xT=mYDiwQj`y^%-=;%`C&Vv4Cn4+wLJOr3+tggapNH6 z@!tP^_|g8&e}fmchdS-ERsUjyjH4raw13cq;8(WFkIB}FT`w~2kL(u|dbGVyYsJw# zUr^)|kB_wFG38Pi94CG7r6DK7Ssqi(8STIFE4zh=n}waFWa9dq*A=nI13O6*A^8H_LL5 zmgj5F%Huox^n)^<_HI#ae3`ebr@UXV*o@NU&45B~zwLcmPf8dI&X^AS?>_ya%>N#i=(B+Z}U2S^1Y+!&`=zG^wRzQ#qfrg zuG_yN4Fba=u#ZA@foOhtCYy?&;rSKd0vYWR549kLQzS10`YKr?5SD~QVwd-NkRMEy zIQyTPB2OlEx|ifPtjCkMIuJrV|IwAc8iz`|$8Qg}Pm$d+6=VbPsXqjA3iTKj)eD7n zlYPjd7(bxiw%@u{?sT09QJF$K`_fgUDjzp(6PGsfku{X+76fubuyZ0P=T8C)>RCfs zS$9T~Tw(P{6@RI-3ZK%oBiF$bq~Yixe-!RL8!{A7`(+{2A)&G@Mr11>71lcyURO?A zS*3jK@`ZHm+V!-}A?@$K|3RwF&xXJitE4vAqUe)lTA#Pvm#cCr)Kwgn*HzG)QD{~4 zD(nNTaDa5HVeQ$6x{hL-BOw;Tf4oXNV_dVc>RFzSrD4n+2v@kHCzzv!yRN*RwOJYn zs66u9EVfsR2Ir8cxpJ7uyr(G=k@K_E))zEuLlrgMBLsjsBp(N1KcJXx&9ajqF^WRp>1PtmeILyi-Uu z;l@Kf&R5G1ETaQH4$ui!uKkmZ2tp_)d#Ia!9e=hKB{K48oO1u7M=}XWm1&jeCd2R3 zo*a-Tx7AVt|1F2tpB8*f{5lwmqTfaR@<`D)qL^7ohhF5k=FPnP>+InT=T ztou)vaG3Tz`Sf|b<#{lPkuJOQWU+t4I|WsHYWevl92Q#tH>@UJf3lG0^}DR`uLJ+` zofr1~WRqW5(H0~hmG=HBq*n^)OFT*u0TE0>?o2%#!YezEvoYCU7=j^)ra)N8k%4s~ z#51`SBXx@lLWA_~YDcW3QHrZF9H*Tvm0)YEAXqwbj4O{-_8@I@@O5`fPio~XyFQe> z>u1lW8<(%fD%`*L(|?ubi4<68)#O-rA42U8N0xBZGg7z;%OZ+OGZOO-%8yDxtbP)x zppKH%LLLL{)QOg=f~PXvj^H|g5IcYZW&oZIBf!%B1wwT9v#RP|()X$8%F@8$rE|@?Ei1n^Le9HlauY2@hCq!G+bcL=7+`a8zvk4hyI- zbM^vUNoXAGOtvoiZ-l>{nhG4jfyfu@cbpaK9)w<2K?nMJQ~$%Ew7I&HS~!He6q+GMGPHT*+5wY@5kj-KvHn)F@!>IC&7W2YGJ?E@j*RENPYV`lyOSypP3q+1B#g z)xTJ!<9(Iz3NSf2GLi;|hSJjFQhGQx%3;^+*af$1#b7u-Ew~6+ue}cw4Q&r`tpta- za47k+yral(AGfwxJ9apopHWaY2;{X}^B0~7hhSV5nO7IZG>r!qLhV!#%+b1h$AnHHXb{e0ViHOY`oEg@4tS8kN^Bpxy21d8^Mks?CvT~M-Sb0oD3{d2jBOYtF6G!y2DHb3{~pu;Gah#v@* zDC7#LhAsPL+2+x`*Cgay*gxG$LsXD!T0iXnk))7f%U!$?&d*Br;b?l=y`y=a_xoAz zKkNSU%6itb7g1Ii5MGQl_pD~Vy8j9SuORRW0E_DaC1192xpEU zsXXpJPdER2NP4xt#W{<0N=ETQm_3bq=?D=^juV^w1cK=<3Q#XEZL>PJg`&4k2zFif z6kK-@@>kc_iDJ+CV3WdXBXuH}YwhZAMbaiClqxW}FS3D*RTaKY?vJ3Ho$cvQ7X}8?*k+Z8mnelC_TEhX{=2Pdnt7$FN*NUG;H`lH^xl-wVOE^W(h zjcv*?e`~6@Rnj&aIp$|)(=3MuUw!kR0K z=1@dxoL^c;VLx`Po1;vDS)Da%hFb$biwXjor7P3@C~<3Bn^{Sr6(PLRR+|b)BDr7V zM?d}`&Cs`Ree)aXH-GaxX=r#LR(wk%hUdO*6Qx&0qodM>;(Hqnc|1yn_O``I+FISgV#+{y=saV}p(+UEUM`rXXra@F3lRQN^s%_@C?CbX&C@~^w6 zo3;~91lTt)CTYMDkDi?h_&+*>UqQ)%)}Z(3Zy4VSV)p<_%!u%|rt#<>pELKr|d*t=S%f~YI+;6Q)6c=2MoeC4v> zOJn2X>66btNn>LZY1vg(#z?ouv#k0Kv$EIUpI4se=N8f|=SHd64iEQ7nVrB;fxeEi z?fn7rs`QOdOr$ke6GNfm0c}wsIGkK5^UgM9J{WzNPMjQ0Z!f)#l7a%neOIo`p)%VE z?Alt}QXdM=$>YaUCwR8Fu$b1?*U~!UuW_M?apJay 
zv#Pt9Hr=BOAa@zVcE+<8I8{*OPn|pw3ew{8QraM2gP8r@tZw^`3fzh1Vr9A6V`5$3SRKjcEd|L$r1xA4JiOEpXqmMK)_cXVa)`H9ShgDx-=qi32|HOj< z=9R0B_G@-_9(b;UlRK=m_XNI-j0~kSXHIjkkVcdIKsdS_h zCr$v1kr?~#o~|@LK82Dj92rRvqZ36^+}H%hoy@sjaHPVRF0g7mGdmaM4-BxfPkS_Y zbO5gf##b1v(O%E0jeVt*DBRD|rW*Iw!7-I%-LJQS^)Bb?c2wHa@!=6xX^*ioJrM9) zSX>Nm3H!!0h<1l-v6?n?&BRqC(1;X3%r}0e|2{tFUPxE+J?qqkap8Cl*iXjo=I<>I+yy9A3Iq!P&}(4q}`jH zNS{uO;l7DmD@s8RLijf>UrGI}q#)WNva{0BRbfS?k?tVKZ*Q%nO9-+jPmiGBAt)o_ zRT1vys_W^qsqs`ruxv%hzA`kD-aURUT|nto0qvqR_XJL_ET<-f?BT)xpS?GIt~9ys z^b-4yT7cU3z4s<5lHwvR5+#kO(S*m2neZ1!_^ZSIpBxT59FFjKVkX98jYLU9i4w(T zlilpS_8nC~7504x{5S5`N#zIyE%7TVhM z=82=}way*s5U#BC$aMPXsnms1+C@F{lv6V~#x^dJY^+T=^!lDd>Fn-3;SOki*M(m0yOVlGAVy%|TF`?XJJXxH_NE=k($Llh;jK?M z2K&>U@sTuWU04YBg3+7%j-(C=H0`QIF`t4|Uc(xB1#6mwsIgTtg1hqk94@ibsj0O& zwRJ*LAuV!?SlNeWO;bqH4_$X5aABDQ^on*Zk^7?`{V;v)YhQze9xK*XM9o{hfExw}0;&se`fGC<((NsEcb~Utjvwd+(*6{Lvp_ z(Q8kwj3Y6eXL3b?KRGpl+@GYs`kTKE%Va^Piud(xSip0jiSO?_Z@-oP;Je>Vdm)ax zz&d97A%ed_o~zeyq_byFrZ--DJ-z+rTae4WasQA1_$O(SejOYfOn?2ie+S{4NheMm zPv88#ucz0^+s-)FCAJ@H_WK|FCVlwP$7y(UlAj!+?&p6W*7omw>)TOhE!I!(JMJ%BJfD8|yN}Yf+qYxl;PtcT((f|{x*?dRc`UL1 zZr$p+oj&-0d^fL0fBfJN{vaJYdNg%{_dFQhyna38xp3(cgm#kg>^RgqEs{)gAY-_1 zbIMnlF&Pw;w~cX(%3qo{+xTpfB?(DM)}KwO&#shb*V1PvQ!XhZkP*lTyvhiae=q&p zY4T(JZ?V34>#s%T=k%5d=yL-4$*NhvVg?bbZ%VK4?oRJuVOyFc0MOVZL3nV*#ahz9 z+P{IZk<^Qe?i_@nP2z-7y&vnwz|>soC;v3W6fGDc+nA2F>`kZl9!Ra7JF!yC$~aOB z3Ssxc&NP7q2(Fn9wC+yl_8d&RQL1OLQZ_S^JIUWkefOruLlNwR)EvSBIgPvDIBuct zLD?vfgBZ@6+zjq{y6H`0 zb6TJ zKkk@kAV=-#*3Da3O2=`1txX58l79Dl-${oK91Qnht*PVWKfw6ufB+66&~hv^M3Aba zyKMXz`G%0X*(!tb8yKk!*x{BK+z6}R4Fk*^>FD7jVf`8%9>rR_ z3-b4R`r14162xd2;?^4z5np@f?fB^%X3Px?46^q1?ewD`{16h^l5S(EJpc(ie)LE> z&*#9v0A%(S_^`$l%WkaP)VvH)uElbyu#4b@i8#p59OGJRsA-n*Hp}`*t=hIumt!rm z68<0j;QN$O7i6-tqdA>AaU$*CcOWc@@BQ|-VTpa6_Wa;`-^aS!5|b=iX6GSco$Z}` zCpv@dDu)v=JNbSz@g38WI?I?a&nKUJmU@`9SYV7JURbgHGa#mZD*FNxBh8GhA4AZ7 z$ZLp6p3gr20&8nM7SY}DDH)$8F44FH$EH z5cj~F&Pd<-);F+r?hM&Kgcwib+Is+!{@vgI4ur5h_4e^S!g6n!yRqsN zkQoeTf6ys_i6BvtO4ejdMo&NMs{v=7}Z{O)*GGR8Idi%|^g9)WCm|T!R&tReb z&Nsgej^B&%q^x@wvq!L)c7UOl?u&ds-Ip(dWr0(CXFx}o{Kk})R!*bei|K%HL!Jutur@Q`!zl=mFG=E|kMB271$Ntb)7haF0EGE_LHt zcLbNbp*y2N>N+|aQx_zo8Kt;?2#e0$zBGhFUW*dC3q|(~5CYJFOl}U{j!SC? zZg%m>Xy2{X5YpCy8|~|mh*SIbLsEy+wJVp?N8`8@G9_cA&pMu^NonfZl{DMDBh{Zg zlNxu|qz`W0OqT};t^%Q+h3w2pcpwyPvzd0|e!I7$Gj&5U2ZjezFN*OH3NBnQ?Lq0? 
z;Tp&Bv2^|F73zT`VS#(=#F?}-G@1s+dEsiSJK;`>TdBcdp$?gVW=wt^-mJmQz?f7jUzjBZ$u;?$D!Hpr%l~5AAKj>W2F( zYwaYUs)Q5+spOD`7NpOWpsLg~oc8M$Jc+xi*1$eUfG(k8@~ijXPal5xyEKSfr52@= zCyu9YfAjZ(fc)YY|AaN~N_rdX+#mk%hw0sS-^LY_^`{e;Lc!NcXB#D*AAS5u`sJ^F zm41NrJ*<2Yq(<@9CB<|JW5|aLlC{{=l52K>#K(VHaJ3k|0f~PXA}PW7ITo|e zK0lwn|Gn>~Km73zAf0c7HPOIJi;NE~f;VqmPk;Z5e}tenrvK%C{-4s<-hC(i<1c@S z8*hDB+1eS)|NN_8V-@=ymdrQOL0oB@A@h=*7F^~4>$w>;=T!L;`cD+B1b+sRN zC(lE>A(E-&(d`)h%*p1iQMUvC)4-7`}^t8{)6eCe*Iop zDkV%>AJ3gVoxb+>!uO|YB_E3#{v|I`2b|FNESgmF3>0is+ted5%~Ada7W`dRv$ zzx%sztNmAh{uk*K7SGcsPNXYWu0m!Z>GZK%*0}!jSMU8Z>t#QII8S2{?o4;F@QT&t z%U2+=_rhYmhw98@9^CW~F{$wmedR>OU;ou#r~6o9|Kg`VO<#Na-Ds*- z@(g7w%#OvJY)9K>z3s`usq3{NY z5)qo@SbfxFUduDN=V z@Ny!s`ZsL#sJbQoSy+Fk@)xH%Tl_wk*UQf}8MgT(6VPo2_)&{DGVA&j+linh8pfvu zWzelc2FC`|oss*j#cWP{M!Hi2>+c%P!(aL)2vG~#Y%>$#fl)nVC0 zv93n}XQd|UHwrjp)aazf-XYll06+jqL_t)>WL>}_HG{kp=v3pBJqH0_B3h~BvxQ(l z2T}0%VWF9YoZp}g6GY>6U1|%p8#UN1VkU6Qy$jJA#44=i>CJ;jajET0T}`xUVGb+J z3~sw41jJ#jE0$#==G>mdjTZ~;EUvG*EKc(|3-N1%P#%Rqw@y@}PVGosLj=1+kG{R8O(h}C<)em^y1mHr>c|A$zk*^ZT6 zc@G`hA4KZI4}Y6J#A@ie*t_?BFO9Q4QLEg3tZwgO*^_8@Ko$+W|#8*22!h~R_I-nFL912u3%#M zmo8sS@BQYtkiahjcP*W-pF2z0GwDw6-EwFIpX~#exquM;^qC-2)@O+c>{uH@)|)%2`{1cKIhstQCpP?5DSug>q`d8U=Eaj}d)k>x%?M-! zG6F9W0{&$v|4M{hYfcJDOLe6DeBSbB0{XlW->g2T=GP@C6<1gbhQdX-AdD_pl5i;; zCAy{-x^WcawmIBZ(PSlPi;zMI=sb~Z@7}+gZjJUQ!j7aVTvkz0(@qq^83^4hq-_cl z-NXV>V;Q2c)dJ<~^4H3%1qHN)z$nvL|0bxfp6H$vSeO=2ShYytb&CQC;gqSURP*dS z#B$N7mdeR{kY<$TX7VfdC}oXK4W$LvO)ld;Tt}2#=f_oE7g6>t%gyE`vj)&HVlEp% z;FibpD5VnYw%Uet0E^H8l-XX^mtGj`O(PJ>Wr!UPw7htRykm>gX#_H4{32^pk$xO9$&%d%u$Ypp?nTnBg`qB@ET>Im+;lMt}N6UNZ60hR1-x zh9vM8FJ4X;uU_UhtD^~MW&5>04&gpLPB18~g0??gdZQlPe6fmaZJxtjJGB@0%I!u1 zF2=vfZA!$Ou10pwnP}02Ko?qFoIm~a6Xa=1$B!LNfAQx(1=rbh;nKylL5i@F3n%rwn9;X!XsY+{3s;)owp}&_;k{CdCS|!yu#-gCrYvsNCnGwhcWCSt-8G#oK zflNSOv>vIFCsd3IIa8rCtWXD*nq623cGk3qA}wiD@d#q5lFo_rXM!z57EwlPTABz> zgk=x~rMA6=^_M8D+-t>((M>r!8nH;Ap!&R9cTD5Xs*9=HKfyr|5f zth;p!hD+*Wixr-2!*$oSqD?BOYKyGlRIsp~QFnV=Q`&zqV0NQcDQR@sxN_tdSD zk0hZw`Y9QK$3*a@dI+}pgMg#FyWX#rEi3%I2l=%fTBAC#E;fTf=gy9ZP^#6)sH2UH z4Hb4tmDjY0Nx-BG*p?y+Zxico-Op5FwLU42FH`JK%8AfQe7&xFr|UmmlWG@FKqhAy zFUqoqwSY#oHQH?hB*tw(BxMFak})Zd(N?Y3i2;+P>Y6$M!H|fwB7Zm5x|W+QSl@QX zy5o`2A?hPY)UBSdG)iO?$~{gMs-aouYX~mud9@A}gr3oboRTOb%xYnkSc$yy>pH6y zzmG|b|K~schX`;~uO*vyEwIkgNUjph=m+vjw&sH1@f_vZ9@{_1T2Lo%TtD23`*1z( z!dmv4sH>hj&0~M^&JcEdi)A!C$fve&13I;}@nz!rFR>2a!8QKm$>Tx(&Ye5Qy2zI?3`}+p3k-XugN`bzxb5Go3nlD)sdCr2p`r{!G~Zr$!l|M|cC6~V~{15=6U9B%s4 zd}pk)g|@hU{7--K6RhXEQ$G_Zx^-V;E22g44*)4k2z|Zm$?@>yNd)+j1pg8Y34D3{ z(v`Cw0GbY>se@ohVG&XxuW5{^w0m&NJF%1P70B1s z(M@z(wg*90gAy&YHKc33x7jzaKJ6!J?YYi9sRirED4N%D=kCh1YI~U~6Ns z1h8uevkS7)OdF>pAdq7Nr*vXj9HBi%y&$EN&;eHg7Gj5Sy6E}xRF$}(BdWQlE`Xl6cahQt_4wwEL|QVdhlIH(P>{0>pZ$6I{QPI>>tFwRI(X<%M4KI^uiJE|q#P|HG1x?!EWlaSzFeJ04q7q<*YaX-sgI4((qt z+tE3tAE}lbW+pWB3!cVS0%~>2=k22|8mqmS!fI^ScdnJ@Y?TYaCYXV>y=A29C zLCirZ!tw%L2YUr)?;L=2Q(^AAs~f21>H7ZH`r}u@ zCpIj{W-9l@S@(X$v6W9%`WGOn%OX^R9#8c^Oai0ZGpO|*%Y7p6gv0J>=Y{>#XnPoP zC=s_^k8e&&xqcWMb^ChJdXjYX0;eF7;xv0v{X-$1c&p3}9~SX#&Y#j4i{;LMNzZYe zA~Ic(6$>b6dnn4ahEsZ7R$GN$4dbL97+h^%DTpSDR8-l(Y?MkME_`ePSx1t&&&B&* zIjaOKhP$M@Z-%y}xBS5kQ_orQ-fq?@7b#EfYAm|Qt3mNfE~>DxZL#+J>avyyb?Bg? 
zJ%kfmd@;0M%WGRnVMi&TNGaDI15K=UB#%xJ`-{1O5sf$x$xHjNAfXkKbZO9|4}D%U zmDN!?94$|%lRyBax3d#vR1WiHG$V$?6++UE zfEjG-bG1ABs?WBet__?rXq_=?J2`q-VXknDikjC?E@`&hwZ2f-y0?6=+>ugKnw0Fjw!P?IZ z=ovAAhXfwFVKZO9PoAJ%EQ5D|(-cESmbImzj+fO#kUc9ou*wm;m#&wHf5T<;==?l0 zeHb@?vq|e8nU}A^L+69stCnq7l2A#U9Bi)d;f+P zb}A0E6;#OhD82d%C0dmsg5{RXeg$mZbOjMtJF!*!Y1#CudYXhsEtB~K<3lB z!IOvQ_3YB#6sv&Z$=j8yKFAW>iicos{nI9$p!AN`@ZO^MimLhEoWb+m+^>*NV{XM+ zz7UfxVbo%skp?@p<)8`0Fz5=6a1x)ydO*I4_1>GUJ@m2y88>IWGTasmENYCh2E1bu ztq0$WN5L;GWI^N@@xey{=%n9C9`}ZzWz!WzT=!oBBJg^M>g_7^hU=%YE2D;%PL^Z5 zpRe2RuZ6*WQ8Y8qm)K~Q%=2UWW4-d+50d7oTh#2LF;-oaYBIPQ%IpJk=)13mbzui4 z;IKBIdsndw{?{STEA3BnSPNIXaa1L7$i!8ylbfFv6`LHMMDBja_$59qJ#QNz-u~>o z9u5QC8=*uz%n)%;;p4NAPvu;g`&~_4%r4pY_JqEb(X`i%q3>M()skv0c5Ss~u4mCr zQpW?!i(ECUKBxFe?&w;RmdBlRHh1X8uX0p8yVdC%-~6~Y&0ij0eKt6D7$5&nwHi+J zAGFF%n5q4(VlTVM4{&b5I^a$adKu#8A-b4tWY#o%8KTH_x(+5H_?s&5rEef!ry!(& zyOR2Ge|x63z9t8HST{;%p&(3rZS6sXY-owm7K1qlj4#{B|^KkTg^83nvFadBfX$6C*^Hg|K=fhE}p_h81P0jvZyCjL}s9FLbvvVXm@eAJ4stZO^QETxjD#K8*t z0 zHa!)5^TnbKL7>Ua?~AL@a;nsAFY6QAKrxI%lJU7>`@>G?Hh>3*?8OG$vZQ&Msqn+q7&7UP?roC3 zH?ndH2i5rO5himp_g{iO#Lk`z2UO@ala75Yqc>5tKF|XfKcB|5he}nI_dhZpO+VhE zFAYRi>DT@y^HmW1iK2oB7tSlzR;i8ae!z0_GYuNqt7Evc=?@b-atRMw?O*is2bBO9 zEDhoOn%+=6F16oRtPKNY?=*~$?_{L26KIaj7UekfAW>-EtAq}{Zi=(3EziEyBoa1P z^Ggv(k=8a?<5tAiE7^~8 z?WnS5nwpxCV|q4APREChNB~nz2v+6I5n?;)tj~T=7Pa{xlZM9hqfJ%iZzxE*b`YM>_?cAIt4hZ)M z3rQA^(qHbKp_HZs)h~1t8p04UeL59e#Mudr#h*d20%AQ&j)@o2XH3;xmk3y&`}=9h zi@}O{&ve@T^AQD#Wp-#!9n6@?0->8DkOP|{|0TB6H?<$aaBWmj%ZXxHL>raSAztwo zoo@GcQ$A7~ZxUdSHA|C;6A(G5VSlUL%?WVcmWxXoTg1|YU1c`6?s2z2l7mf;_Xun` zJ`ApLOBj-T>4BOcTlWL$;^HoT2I)!(P#5^>H+l11I@pF7vLE9EwGalQBug z$|7FhMpAm+)8g%P{vzi>FsL4VYh#r6X&--wH3WXRRjSz)B<$WlbxBn3XlZstoABoK zqv+mT?cghsz}JDx*l)=8Z7&Mj=s(w=o-Y&wRuKro*{0Utk-R)UNJLs%P2U+Xo(q~Gnr7$tVaz5ZRx zb;!J30shNqRBb)Q`-P(HqNRGzuJ8ljT=8sU$2nX4h39jFC-pSRJ|o>Yw405)F-KZs ztJIENGt_+F010VK=v}xaC&*Rm#;LV<<3Lj^jz5NT0-{xWVcpA_cJgjhTRTH4_Fa?y zRA@Uw>g43&E1rb$#-}N0R;$+U=im6x8Rfc7;q`Ejc9ve0r_+6N;*_ zUd)xpbb#h5l2ii}KB|3!7$5J|zRC>!$Rqn6vFevRws%>tjwP)nBcr2D{M(6zgNdhB zAD^z{BHCygCR)7$;W*RG<(A*L9mNPd2Cbq}{to!6@jLEURfmEp_#SWbarN>3`Q4+@ z7n&6>OZN<_j|Omskz8bL-*2&L`XR4f%{(or7Dy?jzyA4pS?>ruQ<*VN<7>Szl_kYPD?|jrPN)mOkoM4Z`u%R&)j-?C;OU6 zvT`;@sB~^L4v|QezHvBKm?|W?Qh($!5{lXSdB|SKvF8RKPd2bjO8$HCoG7VvRhC6mnou0EcsXsTFh2^b8iRaZPqb*PRa^;-StgCSrH#Ku?xy9M zDRS4+^iN_Klb$s$yRZ!~Xa>!RxNP%`(t2D?!txFq1yJSC8GtWlhUNBC80YRY6F4*0 zu#4@NCu*v(6nxdUZo;~cacPzf8-vV-wVNysELCxsKYqo33R+)<&A}8Mlt!y&5nji4 zpCL~$i(3EjaJ!Q|&N^j@6Wkol$qmvLYZE_tA)!AB7G;Hdl>gvd7gvLSx0i13`3@A(TnB& zDP}25#uk+Q-!*N6KxIgG`F$djwJnb890dJCC#B<6fZhCTi&X>*{O2X=EN-DGNP2y4 zs|(YrSG|_eDB~p|h31^$xg`b2Z<3)6k$b>^R=SX-0_cBK0n^5c1(UWzV&Q9V({v5v zKL&&qWGi9!Ks6pq3EkwM+maCUfZeJGmn7Bm zqn`7dRzQhRZHM0m7jC=s)xvB^%Srv**^qNpxV;OXzAp1gZvmaB7szuhEG?UC=r0du zk=R-zmeTWh$XShlM=zl5EX0pVd0~S!LovrU?{{x|!^@S+R1iDwSCv)4 zpn^~#lBp?iOdL57A%Za^C#LkHjIg{s6k^02-@sDb;J~AXEk(lj#%a8hXF^HXE$lLr zwkGV7Qb)QtYdU0@Xo!ru0U@HmljD~IWdp>gMozn2v;Y$uM!u3Z1U}IlA~wrR>M)El z)ElL#0&h-bfdM@&@!DdMO2jguX*l+~zqqXG^N#4W_ zph)R5tLeci`ZPiP@fhsQC@vx|#uvQKkKncUiJps)cWcHHD3lJmg-> zYS0-qX7%Z+`+^Pc!tNooH$HZ0d0%(} zCD_vBIANHAV(*)%KlxfQK2la+Ro$N~S`B`$j;OE>!{wbF5uubA+ab#o~ zB_&9t2b%Gy54y_Yl_hM*sVt5Vnn2phk8JzBasxRhs4!0qHqss`?%RG$rZhQAu1*R(pce(%ZR&1ocYpm*~0 zZMfpkULa2X{2w!sxAcFRJ?yY!#Qr-HLfL9g&wLeyQ1h}mgV8+rfwk&nNZ~98dzjZ6 zKEAR1XH^n7j7JK!B^81)p*&UdOc}sVX$ueobwymnkQP@(P+fYCv0GnLUYX}N^hqFv zGIeaT=ytYf-)VN0actQ9QI}O#+b?xa@jP|b&vW!)>+)<_adL?H1fPzEC;UFCe!pn= zV4FpiFRFSq6t^MSqT_=WD5UBM{Y%j-V9@`-^57eja&Vo!u?}2x&SeV|LYtfwedEf3WDmCZnwxB1?JR~^0~g(Y7Q%7oo#m@D^zCOtRcR}dCTxUQQ!6_ 
z%Q-`I$-q|aKp?|mk0CJJ>Ssp>nj4k)$z-}Z5*SAf6xHG=q5Xqh>&IfVxFv=ct7Ob= zEnJFaE9Z=W+X(JiawFFb6JdL2)Vhk9meZM`#9(od3fo~lDbsX<;B)`QAYFDZ9$(Xr zRVL_pK{~APEmMnFw0%z@2yowbnG|F5WRtqF^{7k0N5s+Ts_9F`)zExf-~=w;c8{;B zzjJZCFf2K6-aP+Hmk%KE5&Pq3eGzCuu8@evU8yVO8E(iuo`+JkUM^biU35|j+o9-@ z7DRjh!mih6?N(!$xTqd7zA2VS1@IK)ZkXR1-fI7h7*V(1dMuT7)XHq-J33Wl(9u@A;G9rK813vSv)OFB1~vl$+JHm*G2~hrn~FgafJ^OP48* z9nDA6OU}3+!y#}^ojyYSj_uQF+gX{$d=b`F8H=s7Df1Cra3zUv$qrWO`->Y~2JSvN z+O>)s;MAAVFp`}D`9^iBM7Vzd<}{G48-kKCY|k1Is0g(ip6y;{RIq3cw`pI!TTEZ^ z1|Kj=fxS?y>Cbh_S2@0^>GfXDy;7C9p8=LNH!Z#L@C)PI9?~0VLx=-y2^_jSB6({) zc?}WQ?hU8oX7|bk7Ed z&j#x2=Aq`js=)!R0_NJrNwe*q4AG^-fZLl8}N-lx|(ZInsHSP zQqvU|bQrv5wnpzQu5Q!2Z@?iRqAePCF9qyZt;dk^Lv$nI>@ zb#HN1TQ3F&hir}cKaUI4oW9t^M0Jdn^%!%>suzY*mmNAhgGM+9s-7<^SX*<4C zoC%B(Qrx90u968iO!J>UhC=h&0Ujqgn>B#vDN;KQnA0A!HaVX{u%SGwP1ZM?&a)E4 zzIYRXWQ3X&TPpu5K)WgSS%W0S7IbuY*zJ!%<=lsy_#QZZejqf9!h>CDxVsb)6@*W$r=e zC%_B)GT*!(uru-(VM#&ux25@BCKGB!;U~v^>jgXv#w&ds4qEw;0D7x6uMd~I7m*f8 zgA;@Uzid)r8LWd*y5ATwDI%|#Q=zppau+I{G+X-vL0#87(yjNrfJtW}nzcABviv1*P^&+Zo>DupNqIq^(GxlVRn9819E*53OqrYQf)Xxc(6=3&X z=TDljvU{?I*brUj%*m)3ykV%p`R9t`hXGw zs0WD`=01hyGt2lj8S*o43>{(yC~>m^u2A|tcf{+pD;Tl z2w=hxm90oFGw@(D#6a>XVw_#&hOUeD?sd?OL2TY`9NOX)J%riQvA{&+yQGKRR@jjL zmfAJYqeUQl=Vwhc`pR`c@jHO1=V?Jar|BdzQn<>F1u35vOfv7^YIzYbQ@qt7aCI>qI!JsJaqJ@>ho`M8Nk5up_f z+<;AU1LQY7G?<^*huC(Lk@yK^h_3rL21tIx)OmMAZ(dr!+G?w`{Wp;Sf;rCxYgLfL zE&(?HrNr@b85V4XTj(^eMaZ5UH(9r}ebT%H+ms&gaH||re~cg3n(C{v-a?58RK!E^ zgh`uMmnr+k#B8rj8P2)q2>G^E+C1qo7f)9I=H4dHiK&dTM4OaHI~njTuaId&dZ?=!@ulbLSE{^NSANo5_>faK4_-d9lRdge zvy~RKTZMlhv=nPz02X)~p$RP4V1yKBz!6OB*F>ttx&#-v1w zUyp~<8Xy~yAnB`9lKz^F^{%+ZDwQ;);w!GskdnZ2wx#$Z|3=Q>^4_fvS;82Z=@P`@XysKdQ`oA2O^3rubB2Ck=R zUe3B^0{GCEezG_Dd8>jM+LR^fWqfpg%kaOEyg>8kS8napJSNS=as9PdUYO|4*6tkAbt5ym}WKlzAbG(V9Z!8Kz zXjMXolIoC4^Qz)W%_pFxyXsZwyv%fM)_(|}(n zorMeo_7vg{w3AFM6*XoTU2U1~x83IE9&Kb+TG}AT^br+b;}nPK&ml*+C*BOw^RuG2 zJpnydR$0Eni&s5I0~nwG%%}M~?g+x7fAeu^zKuMS{#yZ!Vj0j33ATA~z)O%zsgD1noI%uJoiK<0pVkzv@fC zP09Y-xc_s}mFWIQXd<=c?25yjSvlL&xo~AXcHhr&#HPr_dsVB)8tOvSO2ksoi?_X? z@$#$&(6w5bwumwsoPskwUyZyq=Yy%q#VEYF)pA}xP!+^#Q2m4kYi|95R;V$BsT z5AZy4<@dmY59(Vnq9^w9#ALH3W>9f?ofK%>6&;V?H@@T&xf$K5nYG@Tbx(#Gk9wI| z^;(PjC1?^c>wT-_Ud>nN#FO{5ua}ngt?xfb!46x22Rjz?rFyPt5R`#kq{737IZ=I0 z{VmCFxJ-_5W;z$0nb~1UUOX30wxng~yqaUo;D=hRE#$Ct0((HIC~vT=QO2y<<~Dqi zzD+h5*6#SSwBdF?=u=E%3iC2?H&-u!2#8YGL zr_YCan%@{(zGp2x8A$7IxJ3pPYjWRSOP2n6iwau(`-EkL5l-MXBD=>!S zO@PGbZF`eKpyEGOZ{6?^*_FR=?flnup!H?KB{FRcOUO~fLPz60&DNxSQorz zk*>$C1aaap4?hYl_Wjx}{O{G>I3On8hB4%*FGyc=Z?Q|7muZ?PDWSsb-+1yj&h-nw z97!YcT<~Z6j3oqNZyvS1T3J>tdL(XmvlTh1*4Oj zX#c;^=|8^~gE7l5KP!`u|XScFuH%J`+5 z4|~yjH)E)-K7FWZ=H^==sN4@`rUKsMndxa| zBD6bI$y{@s8c0u+^XEX_FW%J!qwze}DkiV5DN0e3hXlYHxbTwpxcxKt)qO_~|Bd1O z`^aJacZ7sc_fP_A6;wVZBxh%ENf`R`tE4z)aJs1K@o%z$Q$F@Gw!! z%;nQEvt>VgsIhb{7vP-ZHtcjcI#THhO|z;@o2@B#2^-tn+nW|jg@U#)-v>HrS<$GI zB&VF2|KH93Ap-6ya57iF^T+J(GJ&NqLU68{%J3CvJ zyYg@vL2PVoWG^qF4Z|!2+DO~eBOEVTID@8U<^+aF@0XSqRdk$X(+nI|cKL!?L^T{Y zX7uI&?q&8CGNba2coSjxZ`AUMSbAK|R_t7K;&y}w&yxOi691^H(;K8+l4ky}cxyuP z_wl3TQjA}d?x?w2DqoyRY_R8tIr~lWswS#Fyq-5op4Tt+2|xK)f^|O3jy0v-O^?5F zgzM|Q4R+Za&$;-aV_ztWh@O8Zq17ffY0*@U3}r=LC)K5%`(JIJLIjtbGhS$}q;E zZ@vS8($nF&&1HL;hv{3CrIXW3yAQ>+G@X}m4C|lKVVjD|7+dd;Kb2I}bIU(l0WNZ%2chO6a0(h8K zhdq_Fv@Dq@@D@%l?^(2>e6^me9`8%y7vVc%Ip>;&0|j|uwLPl_9}?>6BcTJ2?cFMq zUqhczV2(~su3i7XYcX8HoEN~?CE=BQ;Xr~uM(Q7E%geKbg*FlxF=>(28aC~8R1Mk! 
zh=_-AE39%$xl=SvIDjA$B#1&+SV#v@CY$`(@by7#0gjj(h#4SLtG{JFK5 zsBW4K+3;1~s`rgi(~2`UEYVA&3^DRN@j5GQ`76W!9Nd~V`DtzY{_XXy_Js(`f3-F% z>>s#0+?Qk}DlFQ4MPN$DlzXUHLQ@+pRWa<{lox6`Xr85^$8g}+D3}u&u?9;Hq~F;# zPcU_U$)K>+*nZvbS*NVs*F~`AuivWinQ7?y{wt;N@xue3Ruh(NDt!#U_Vz-!84~I) z_IK8i(U7ECp{(m~9Q0l}hn;7)epYxf_8zOd()g^-LW%A>$IbX8<))*D zf2PRr7vb^}c}q{L5U*p3@R!uoO-W9eq_`~`ec&vg(bzYG{ z>24MvJNe-Fl)^W^g<^rSJF`r!ufu90YFR{bQZ|~-IBs}VcKdeCo<#%n1bWg}vX=>r zJvF6)8tglPe}d6zF;ZeXzj{0OOgm=`aAfGi6K#^2-9_c~`N^!rz^3&-56vCxk3R>- zDyHdAU}(6br^vw8=JJAxTSAri{ZjE#a^k=-dNlC|rmt8joIyK%C2(qGPV=I3W{f9Z zjd$*Pb>3-z>fkIS|M-s@2@{eG^v|UHC^9^X{{XeV#GmBEsvlTD6@V@5qQ+bsc{gS& zK@G%8l0sei{F3V^c3onY4Agjq>|>LuqG5Dyt*mcQlB}R=^j5F6xg0!0^jG+T?={@R zmiF@^c>a>5P9ViEk4u!%VC*jkS7gU<M6GW1}U^lrPF=JJ) z&s{6(?yGkC563q^hkbyB7>KSJ1Zd%{RXl%gx7S}i{FyQpuWS9!%GqHRzjkn8k0ZjP zrs$w~?h?tu&9gCMOBHFQhjO~i{KUUL)A{SeE#XKuVglThX)!WFFVT;YDcg$m&_Ub3 z4GK*}{~Z~jV0rAOfv)i;TpE8yT=u3!^CpALxFntUKTnWLbPZ;>sHZA&Tklbcvx+IE z>_oF-f&Sz^Roa46=Jw-1u>Ml3v?M9?cz@yxU+i#e+}7GWx(A6MaC2<-6A_=_&GlEL;$=nRkuUIv#P z-i>|_*SJl|rH+}IRj#m)%Uxz4V}F|teHg7msNhN3Yad$8q4$fZCOt@I-~Jcj>`3@i zdHJg$XiT#E)zYL<&v|Mn+8mUZ!KyeYC52}EweK}E334Pup0R<7yVvapN$rU6;!HYC zXRq)9Vq=3t`-1PxW~+sk9ejqyU6Y|Ly4sVS&UZ`SMirm?5@Q_V|GvUMt>>4EmlG<} zQ6G`=qCwgTKl_ywQ2cpIi0Q94%_;~xjM)vmegglx&LndyB)L7RkQ2DOC+iHgF~1!) zkQW%f@{&J$$uBu`N;PMb0UBDf%GwlP^G?@zyEK-I2)7!{N+pBY^MR>vMyej3n@+|Iy)hCqJd8!vhB^r zi2*!Qn&X&A+0q`etKvSgQv|%)3E>BPea?(InJ=ws)55P@k2TKJ@RMTxDP?$Jb9P8A zN^lpAP+rZ8Z%DP44n>v#7v9{l>*evgm+JgcSU{Sb2>?U*y~7pLQ<-fB4{i}^%d z@SGZ?Cio23uhAGse-Hr>_j>8ln62lhpRBI$hi8^-*7_zpCj7R0@eDlVp~a8UXizp7 zrx~lXM@n2-j>os9ahcRoQH{HDt^l2&uS9}e3S^-KzzY(zl5#Vz6r*^H;m){NcxiPw+Q5~e^~(Twt>qK4Yhlf|z#TZGgN+v*iW z(8V_JMG;<6wb4wxhgaUgjK?goW>kz*s{xLlRs$Xdqcu&bXM=YwxPp&c)&l~#ry|iv z`3<)fI#+lmK>jMpHpGOcVk@H2F^tCU zU@gw0lBTAZfnLdQ;J}U!g+VRFP?dK%WVyl0MvlrHy6jJQR>nY01R6nB}rhbG9TIAC+ z&2K^B#+Nrb;)Qn>B(pR1XO919#w-}vB?Te%!Wr6RWlCAbu|OyL!cF%~!4!Ief+VI{ z6n?D^M!y0vsV#^S13oa6SHSCE&841S1H_wnvGfdl7vnV{o;0=6&a+<^e0uv2Ai4VCSe~yYx9cTaP9L|L?x9gv;liL z`wh5^zpXw+(A9ftR;EqQ&tr~_jzPX)^c8Rj7*y?4)5P22G+V@clwnccR^LSpjg#$q zMWCDivArL)#U|ky&#d4o?@gfEg($J-S**e(m+fZzWjFu&JT%}&Ddf%7QI|3NSNbs% zm4atcyBHEOGoI}Ynp@?oVk*-0C_dT5itC~|8j(7G#@{UqJ>TW(%BZt}GjbrfTJ8J_ zx!$Jkk8mcIJAH2v3*IE_5~w684yKr?FZRZ*_ctS!rEKhfwy&dNKaX_!2CgA465nVz zj_JZP>-zvt=>IHIkg04)p+(%cgs=tYy8ffJDGgqV`knLfn2KJo1%Oe%(Lw5Gaal4i z-tOC<8@j{AFpifZ)2JU2}mIg+<>;2JZAoHoW`4djUwaGc&EN9o!&mM@(pW zx~YY{Q528|csT_23CdS3J%GQZhQ}!r3*P8Q?jcs}F#g3WPuVFeFYTXF=5lSRzCTr| z@?`2LshjzxrOndsCvlZs^d?@0OXRP=(#;X#%(dY7p3a>E<_L1C!WoK96Tdtde(lg~ ztwC^MVEYP~@OkI6sLN!6v(&{q{Cx^}S z48T06K%6Amh@nsOSM=FLlm2fm~WZ&%!;F`kavZhu&u;aeNn-AZ;YZ~(I1vZB4 zE>)MVla8ruI>n~0d#NS110}Wl78M|(T!V*@$a?Z%gkbbM0z{Zj` zXd*W0*{39{Dw~#_Da+sLHWmFt0idrSzgBNzE@`!q1^3z53J|5J@I{!SM1B@vd7i5X zPW6N}_(jejE2Y~B?#OM>Z0qmAz8!YjU?}~z)45H))u661#Bi^F+an0L{svsH@3sC} zy>X*vCmoqzgmBeuBsVg0pDyN?4%D&yWR1N;yFE4Xz|_tOip=b5*wIN;;KBYL+l?C8 z?a5?|R#hGsH`;hyfLggDK6J%xWta84k}M~y`2nLZA|n8*C=B4PCpg|VLN^2SCs`LPk9p=1?jQPJ^bmFOh)S&|+x z*bU$zAyIm~;w_1OhCzNd`Tn}F*#taB?83T){NW`N0qb2?@}iw%q{jXGqw)w)cH-CI z8rNuLOdmw#T)XD7y7B?mzL{0UnuSkLQt~oe1CioNZ)av-FXH?mO@bN$tbqQ`sW$CA z2B0c&+uBJ%PkKb^;Li5^=BGcO^4gZj+OPIo13ND+W12CkT#-@iFCxF@8?;W;)!sw{ z{9bKPD)t4cP$bqUNjBij4OQ#AxXbcnKTP4xPV54a@@4jN0ueiMX)k}aXBSeu%_+_q zK#q9F+^B(4aJ*9C$4|mr$*R}l&FOR3yHf^R%%d^6ALp4Sqx))XrAz)Tl(#(b_VvTi zc>y!?&>FD#%LLCwiU)wC;Fm})CkBj_nDlDRju5ZM`p37}mwRVPWhF7LTFI?8t#LQ> z)WSwk*yTN&sg?w6ykhpqcx}GUgkG5IcMP%R+uPf<&XVTzH4)P~4Im)u#_*X&q6%_4 zgh^cux;WGT9gzv&KGiLN0~-|g0VaW4BQG;QaP$S#eUrPAcFMB%e;7j};~|H}l{N-e 
z;?T?_JdLpGyVJBh8T2KL9NmytP%cnmo+Ns~WglY>KF~fi6($JT|N^ zq|jxvqGn4xFGBE+*CJjl-S^s*>jBh@EuRXCc44UcpD_v^;&}P&@!l;>YlB zA3eAEmsGy$m^$hSH-7$nEEv1)*&^=Dk8M8qdwHENIw=xu2$cmZSP8K6bnd6s2LK`X z4i%_JaRGjT-*SQyVG6e_>K+J?+%)IPU2fKLINpDZ{ai?q9RU(%5|!AmWS(vAFlXK< z{^1*5fP=Wo0CI@vfM8cznz@k}|)GVy|KazjAZ1)VhtabPzU3n9d5=!&Cc@bb3a(xnQgKm99 z$(1)`f(6nZbV#!^GHlV2`(g|YCRu38)qaeBAfZPYr)-t-8U7%L7nE)b4>MHN{QCZ# zf<9}X;qO+2s9kP$@jmF8_Ct+ya#S&TSnwyh?;ERtCGRqq)WdwWe$K@Vock9Hoy(wjK&;Q(h9+~mbdmi8cF5&B$!gT zV?H@D%^|}oqeI^eFvj-p<@ltkg{UWJCTJ1I7Q>J$zDS0ro5@!csFBEhJd+oScdK!p zYgL1;w8gF_jvA>cuLVXe6JN+fAdD-X*0~;Gnw%+|O-`=0?*CD1z4L>Eoi`bwD&PmJ zly%GSCm9zusg^FS;^eotDC?c`A_yV!J9&j z!&MrL!e>?C#x^U%n3#jcP^7_KM#B}8~35k|yrVxEmHcF8$2Lt9lN&oM$H(ShXTB|6YaZNoXFx7`K z@%i=ykx(n`l=(RetgWeM5zEx7!f_cMR=d=(Dn^(q;DyZuS(2QDf^9#c`^D)mm+gza zX$Ppl=z9Tk%X)arL2&PR*DA%~R5G?1o`!V!B%SXRB?BU>bgPfT(L@SpP5BMi-Fqe% zabGi(#)tgTy3XGVz713)i*Dqg<9XrqcNhchsERO-x4Q|7s)8FIX;p_jBNkDy9x~UY z@Mk^1nfKSn@)eo<>F6(#N~rTY`3qltU47B--(j`ypT%SQCw&@Vr4#cGeNkz$qf4%~ zuU`6=QusP6s45sCugpE{c!rfz6^*a;fODtvDd zCj^(n5^}l25B`_wu@mt}OzNeSvh>@mr9W_vp38(WtUDg%lZ>#b_TY~Heqc5uKI0q; zG$27!pi#hypv<2oiG+ltB(H`FbhLRe&EuOCTSng$>Rdruuc zNIUPer1)ZAqdIAcNq*h9)r-xYt>dD_-pv@AoA^C0CCKrfE8XGRw!A26dCrRGt84?v zvVg1IN0m?0qk))7hN@>^^P_{|KKe3wDm?3oR?%KPO=U3*S^dm-8i}x#MSVLVb|wu| z0z95BHG?FTiMX7bi_2tTBCVr%wrQ6HzT)ulu0{Wy2WEVop#1taFFS9V1YLhO)XTAo zd=Fhh&y5EB$<3w5%?Jf`i`lZ{_}a^5_#k8y7D$RD?XnI5_M3iA0~|R9Ec*)m%})7vwx2*^Q~r!22m-rsTH` za;r@)0?_)}f)%eOdV7PWul=1L6R;8z3B*)8;fn#c{Eo!OOEvh6Hxp`#WCm^S3=aGr zcWPQ1bhNaSyYud8L?k7ilLEqUeIeOsh8w1muSaUy^iSY?)4zRjVs>0J6QKS0`GG&j zavReBxZo%r{f^r=<}*RJg32E3OK-dB^CeTVuj%}bL%%geRQ zOUz!1yTwEg;>Cm_W=N08cmhChP6T+0q3PDI?XQ=m^QdAa?}Y9<2Tl6BjkWuMD$@IW zL58Ge4i=`=E1hWebVHB7x)D_oH^qiChg$7Ct`QNYTD`6ZnuFl&5H$)_OT64Pms1=- zB8#6HWK|4LxV|6qqSrF@!tYClk{b;{poQ?<BV-W>_>Un>x9(s-~MmqfCmuN{1B#`Tur}U$0?DqZjefK1_ zrGu?0x5i>-vun;;sT4O(KCYRMi#H#R2V~4#Ap>c_Tsih2ZsrONBPRc+mye~56FTfP zEI~)Lc>1V4X_EQi4!^FT{c6)_T6c1-^G@_jbKxJ;(YTWWY1zR8V{(&iaU7wXPX3$~ zlSsLHCJ zVDM0d(f2J6me~&wb}~gm%aNSk;z=QjcxO`}JsgO9X129u2J`3pM8{YniL;to;h4S@V0r~-;syAneb(N^yQ;P2d@s2%)zs=7Ydc6U#d{({vvD~1Vay>BwfI4mM-c0CUGj;Ly|ca`*d zXksOGpX%nD$?z37dndd1j>6qx1S;EA+rky6)J3$`+oK?H>b}UAS~d7BNA}WP$3QL4 zGopYZSdICKs;Y~H+jIYgs+YJdQG<)rK&qB>9%Fouze(3B3{U;z@uu9Tw-kEbWSN+b zX9FkAJ{1K%p(kJLx>eSqUGJjWY}Va|k!k%@@yIi&{)eoy3aTq=wsj!5yL%uI+}#Q8 z?jGFT-9314ciFga+})jx>&Amy4*#uls?NRldA-eAWA^OP-Cvvfj-39zd5-Gsd$<5? zA=iibhEsp-u`}Y~>-^(=xd|&a+I(@l_3y5P1m=It`2T|{G$H%+*%yznf2ufF5a>$N z;Dj|&9n-)nbf?#bp=Yo%re*Z3=NbP&`-Z|x?z7dAS_s9&Av~%Csee_r2cUBGbtN(b za7LDe9-n74wi$DZ^ujT^yegnc&AHlrt`G&HYly|8c;#kWz3hkydW{3+C{VB#3jq4 zULXVjNtO{9m|!HVix`q=nits6l+LXBfQl-RxW$hp-C@vXD0^$+{gqW^%j2dI3B}26 zatg}Rc%iziv%%mK8cO+g_k!}P?H;bkkX5V0l-~+sj1lByZbf1DKPdt(7ce8D3RFBN0*0H!B=T`tk@5$d9=A- zg-aQ9cyn~eG$Nuwj@KqC-7wML#0K$JYdLRYLr&flyu&W&#$vYq$}nLa^d>a=9KF?# z=P1*8ILeT+k`QD_s%6su!dH_m6}My!v(`r-dUMI=z~WSgsHq&u8rqqC?Fcm1S7k_s zk<=2~%;|1(B-d~EW`H{(r~kA4^1xM(GgG|%ap^IqIf6o}a6Fw81h+K!qEK_h{w>b< zuvzYAdrE80n=Wd}uMLr&GE>3U@;&v9DE}PL^Oy079d_5Pf4II9@mWC1`YVKY_w$s) zPoHz$sEp-(EBcyb9k>cWLR7+Mvm5$_HYU;3r>TV}E!zquj|xeRbL8|j-cWMd?886r z%|K2L^DXLXOyUPz&3S!N(>hqDnmYKsYN9(-Ddr68z`g%tC-)ssB!}oEp5diLE? 
z3Anu%lYeMM~JAy4vO)QPtF!1-YG#hmFI@3;Q zw&S}mko6>JKu6DC+wLdQGRD^Gq&BavnBd?LinU~t?6*3wKPjBKu&Bf+ zHc1+b@F`q1>ePxviX#c6vft3-s2WASIC&Q*2dC-derko}@S;v*MmbtRl zG1-h}t)Uo1j@0q=bQs44=e=~O1$m{;$e<-D)IGNiL36c1hckTD0ykK^bD@H*%&U}7 z>fj6b!kql8YL}~nyWM)Tu7K9?C3KW4F#mqFR#EVkh(b;<<;BnE(TGDZ=r$fZa||-# zK2Z#|p5h|7?y<}u7a;yD2>s&iq0CchhyXv;LN* zvXwSeQOACR5?Oo0RyT<^cySg-I2Tci3X<=MXD)F3*z*!LBic2m-NiVR)vPrt`AiNx z-Y*?-bEs~KDi4+iUtPVp|< zC9|k718gpH0sMd2?ZKwOzXN>fvFSci9~-q2Xb(8{4#yL{oc}ZV=3@D5n^HKWBkDD>RZI-A175a0O%^#7oJ)q^?UJ9Q9AtiVFLbjbAL(o7iSD%@@bK0sVL53XDmT3cLk=-Puc07Kx=dqTf$J7*YUEtpE_S%=1g z3!&A%0i)zpJGeEpq)@<3%=Q)L(-hULEu;a-9V@jVzp2y;fqyZak))Wx+aadmvM;w$ z-_HrCw*F*jq-_j441Xc#i6H-?G2G^cf>rvhkMl5|H|idS{=bWSp?O&6gTLMh0mzhfpyt!?@+*JMwfP6 zdznMi*DNAIi7!E#`m21>2D>&Q+hcclV||$<=oh1Yw@l;jJe#fGON9>Eu|b}*)GlZn zFeyjxZJGTDPYKWw{u^=q0lzIGb2Bzz=MJix ze;ugk86%&3GsZF07rCIW8Sa>5oURYX^7SM8@RFm_8Kn^UcuSL^JWV1C*QrRgf^FbY z&R@ExS+1fQPd;Z46vs{e+8N2-S!D|QbF{hhV^gZzcQ;(_pSkd;?1R<i!9m)W+&-C*s5(5`m0e4ANYoi@)sBK_Sr2?ZYKIp~ih z{tRt(L`dW5%MFz$L{W^a6%0f;Dsx%88E>)8Ci@ZBRvdhDsfPWyE=s81L4jSYHO5&f z(ptV4LBwTL2EG4*o3pl_FVwz+qjs%ukm%JnigoD?{gP$_-t9btirz1obC2}72}BJ$ z?D9rc1xP36oM(KkDx6s>g>6UOF=yy5!Zh6lkwDGDO8K{LeppdSVI;$nr%?&F$Rnnd zZMbd&J5Se}ZRFaBqTvZ|Vx8hANnU}ez8l>w8hRe&yGggNa-h}kh-uufudHmRtEci@ z07Fv~B#%dhk$xgs$ZHBCIqt%mt$Kp*htX=;2miaNAw9A;T~t_%T)09NUu6hclR1_4 z9?>l<8v#mmx8FImVQF)YsMdA>gB9xxfAht)-PO;W(h+lna}(N4+2x|-=Qt1qB6>Mqq*|%OM2M-N(6&ZG2bxEEYKJw! zxl&wOTstOnIT;QL%87Y3p|bMARun#M&~oekV`@X-cUvM^U%?_pLg<#Tl0&u+!!MF> zW-ED+2q`-!!B5x*7_0dujfq9>v@*w`VtaU7%4SoQHFJH_5i+@4S$CNFO@cU$7~7TU zEn~0bln5VsFG|%p^KS8u@77+8TOHqL#2P3sU?6Q^4aY+4@I_G=Z0)peOzfC$8O5Pi zR*Or~XnxI`jBsleHR+NP4+OwVeeH{A3dQ$Q$JJUn>*&5lVccdzldYNVaGqviF*fA9 zcGi3_3q@9;C|_RjI%a2x^Ub>G}O6kA0e*5qy&<0nH&+v|}MLpc6cjcIuuj~4oh zqOIMDWMvP@HBtahbt5FPc#CAvu6z0nVv^MOnObZ%c|1{M84ZFpB|3^!z#-3NXSsmD zvO6b|mHk!S<5D($ny^Cuix;$NX1;Ff;BV#?7WU}<9cuRbB{fa*Upu+zX`Yr zRDt_Z8#TPCZ5UsBuVN5!O&fN%JU~&-ug|&c$gH(ifMir0QyzuvKj$lE!+-Y2yO+Ve zR}J@|?hVYooM1~@xzT;qBwaX~cT%VoFiQ)f$yt=fRr{LvcZBe#ZdP<*9O^HnCjktL z59xDhGdryGX$t&9gBy#UqeqX@&fQRJ4EiP1@D1#jSJY3LZKoyXexz#iV! 
z)`nHb6+1@iyQG}*O0K|K$IV+)!MoYzrOkZbq*329NeUEf&iE4!Y#vXUyP_)fp~GCY-HmNo zK#fX|f=YUBow5QGYb*+#o2r8L0>e*wlTj~LJq@tkMPdWl9B4IUS7aLkxn_T;VZ2g# zgu~v!Eh{%rgN<)m1%Im*kMOu;voa=Dhh0~1c!X025)s}EsIR}V*AkA)w8`tJoo#N# zkgD0$ncPGfjDp;tU1pOLl)^yF6iqQ24orr|=!4VyiT_p>kc~%EZJ@&g{#{dw^6Zz^ z^?gN=Kwv4KdoN#|E)xH?W2C&VLT8nAl=C_1IzE0W0^GLGJdAuOwIlcOeO2*Wn<0U% z1S>0ophSS1(4;^c3*U!1{s@K~Lq>=kK^N%Oo7d~j6OAuwW=dCJSHbq*B$A@)vhT~@ z-l6N`KZ`41-Z^gre~&KAVbzX+aQ`gl+OMpP+#Kx-8Nz~p8tC&(Nutq8mgZP=aqMh- z7o(*?NaSXD^?> znAOFy1>mGh=+vR_T=*s>-ZNod`JN{e$s~j z`W6P+z};{?@%!DcBQ(nEmj_*kh%#TlL)rfGp2IH{PDiWg3e>cBBb3r`o#XXBp}6T2 zTCk7i^(M?`bOy!sM4^N4C|gvrtWHbIR*COT2|<2H=cY!apEa-5k={J)uMTn$TisnA zlu*1U(+!-k!HPeF`9TI}GtqH*My7ibCA0`=!EG{OujA8uhj>57Roe8m?{~<$WNE&8 z!>i$8-C-%H!=O^QR=Sqw|B&)dDv=3&{hEG#cS^iuTInJjCa#!_01zvtASj9F@82sB zkN@Mdy%|!ua|x0Rd&E9_KqJ`}cs{QJV1Dq5J(ZP3bxx30I-%xZ)RRi;K_18BmH*lJ4DmeZ+pX zMAK2vFZ_vq^g<3w;|o>H9nRwPvZQ_8N7`-d@gjzjF)|nlR;r<}%NvTh;+KbGu{%gH zcC>w}H=gn)RJWE}U@}*hBgLrnY2t~%C=Pb?eph>Ni#B+rW5hnP7{YA3V~ya@W{Qd8DcpUnBeyuVRV`$Zk{SSsBpM5u2k#@^X|1;DBV2 zkh~|^;cO>ue{F_>;h&K8Xy8ejeo3`PC<9Tu)$IM2qDd@9j8P6%OF#p8n_@q*3Eqc1QBkBJ18TLGls zzFhOdBF*LI%sHz`ie(_*3l(DQvM@RLX z0lT0)YI99yz7)iU?kYfb!iKm1n-59n%-6-X#ulM|{P7_8FIp@up3QyXe5y+xuN791 zrK?xR!0F)aruf(CIUFF6_wxnpB0RKsZPi)3vhhAF^zY=GKY-IvUwfd}jW0aSb#^&> zW^&EiaV`dM)4TG~Hk0xc&3e>Zg}agbr;NMCGlj>DHM$oLGGme8>gkgaG(dR2^OY$@ zO>}T3<7G5axlUE+L2%_CLC5=0ku*Z{oa1b|QO`Ku6>h zSpQd#{54t;q3yj(u2wp@i7`YHcD7kvu=*28+Dn+YtE}#9QS;FnQse0n8AXQm5 zY|!Nx9}EdeB7f=XRzL*OW>v0>QSakwvS*J~xV zSd7r?OBGS=-4_U5Xp1wnvIT<i(S%}{gq>t!m9^~|0hZ#c$*fl}(}kB7kJl>*D?`Zx!BkmYnE z;QBU9mdAIvoedI+c96ugxXcsEjCywnEH1)9wBiwmV>?&85|2I}eWjolrFOl-oiA$a zOhk=(iz4S(Wa2)boEr2zyL*3>oZvr=cgst_AAy?7zlTlh*{c*fm_B~3Yq8cU>HoO( zF>!u{K~dFDPa;H#x1+^yr78 zFJ%)x7_Aw{#T2-3eg3%ZM{Ngc_ke{yqfLZ7gwl;8B#}5U=J6I#wJD zomB6>NJdE1W!*}sxV6>l*e{6Sm}!6L89RaA)(7VSQk^FG$6nY;l;cPw@XGOgc&~3o zok)EgXP6yPr>{}u=n4hE%P!yoeMuBzDtfoB=bezxA@ieXn9_(x5)8;!`ds%70bxUl z{B7ljaHe>prnyc;dktlZ*Xwd2uSu|Cyx{a}#Fiqsz=QxX_d1ixFs=x)wy>;zLSQ!6 zq*JE6MuG{|k9Bgr7`A~Xi#CBKebQw_0kz3RpZz4qzqe0`Kpsh?g#p>Z)HJz5wM4r| z2zE@`w7Zj0zbN>wdSO5QnB;5ZLxPR(D6sko5Ma%D9 zBm4vhEjlCp5I}*3Qq+q)`6$WqbUVI+?v9Od5BCF&ko9#C)6_}~kh`05kPVKjdiK6QfmOa#%ovu#Zqp8;fvC+oxDQv7muGrM@FhqCg7pf0gIw ze-KYN2=ea=Y1*ygK2Lx2V=5$}eO&(}^#%C+{r*bB(jACr9BZ=R36XK*@IAs1>{u0 z)B&Qp0q!RRR7H9cOM%3{o`=5gaS`~ZPKG1kva$~rnFV6dR(r!qx84K5o+y%hShM4) zR#S0)%Bt?mz;CGIR(p}{JN1O3d-)(PY;s_-IV;xPa*gidKgv1i@Hd3c@kLe;h;SQ( zVEZ6c)_}vQ*^x6xV<7x!&i4?$B{(v>qg7rmods`}MkXB}^GcK_tlR3aHT}k=xAAc% zzI1)C+4*C0Hi&%BZ2JN3*}+L`ejpU&qth~0w+R3^QZP#}82ap0#YOi;)%mB)T66{c z-N4TSQI>p`n$e@qA%`rwXfea<5gAxkgU*U;@=s2gtW<@Yx_oJ zjnxs;&@*V&I^Q-NZ8NDYFi+dR=NI0MKU{t6o9Qk;4S)RRhom>dT+E={x=)6lY8lAM z8|1s2Rn2hANF95UUKBay91-*Xz2%7iBOLW{8c5;OWYHu0icvFh=17c6&wQ-5JYGEJ zYWuKRHZfAefhL zfhKMMk;XbXluo6`xE}#5G(cAZS>A6Zxva!&h^9k3EzmaX0gR?Il-vco@O!cQP@V#! 
z9obt@%u*0X_@bD^Sccj`POqnW<9d_{xiqOn+eO+62;b>fGz*z?NoFYx?wcwm)BT-9 z>Gyq!4>wo7tVR3HR#nV{ZJ6%bcyi;5q9^AOWT45*s&>ToC1?+eChchJ5;MyOx&?%# z)vVl@>eyMtq$=Er}hBw%Dm@0I%<)UPjRTZWOWgiWKL4298O5K3gAONc1UPHqt% zd1&CTYOSS(t|Zj8TkNf5Ual8SB31=P_D1XO%J8_4y7hYZy}3XF%GuZYw@=;DanwEU z1Ehs*cFbHoS{;x?o|-{Zsu5S1pbNp%+cUz58b{tvgA|H^1Rfwx$qNkZ+mkikD;m_C z77`?_3DMr6uWFEPF@Uzqn|G*2CjGC$X8eQGDrTE+^zl5;k#sl7H5(_(`XEV0wHqQ;9=QZabQOwuCh`B*wWU1K1f%GmKZ0H)#B=#-*a--HE%B_X>m{io^5e}LJ zGJ#H+_E@yko6<0FB&;2GXX0Zo3Tp0=g`i@P*L}Vn=s(!y3i3L$kF}U2{e}-ypg1z< z>ph#ttSI!lU=gG^ge}kY)I_@Xxys~0tE!?h+iqlh>V4*aCq05N{**{vEYNBq-(U0n zi{sx9IdsEHa>yU8=-+*X9(VF{i(8c;^oItNrW7ckTDX495|D5u2%kPp=6`W-4x+d| z%lNxPIIw}_=EDLcGFB*I2H56j*R`e3GOLO6K00yEjK5RtrFJ+oW8|ik5fP~-QOU+z zrBn*0@GM}+RA4lbhf0Gq3LmJ&-&p0AuBCw#sO@Ee+Nrh9mDUxqnPP_u9BFedmX`-5 z|6s2P=<)?^oM=d_1^i#5Veg?*-LY{nTZ14H$XHToQE1nft)2kU^%Zbk?MS`-w*N+y zCK+!G0&EGqv?xkB;NB>ly5o)|nAI3~=gbr5Ty$Suh`ye&e&hW!lLm z=t@_jorpCQ7#dOa(@vg1^NwJq&aiFA+6Lu<#BtXRK2UaRSPVY+69un%R=7M-;XtrrSVod4G;ORnmvq}iM|JzwCBL`l<5 z_N=96R5be$yh|_>3PTLBbePgUQgo#m7dzH9QHLQrea5ST8*FKp!ioR*x_gjHHiN_u zg;7Q!3AK5&rYX?NBEKag_#sFbilRL{ZLbXqLp0mOcpB zI5o~ql+r*mWqBR@iu-Nc(%;)({nZ%7Vs5;O+SARmt4#C__o3LmKLGzoe6WsMXWZT^ zB;Eei!s$F;J8!H=ToPU@nnd7zcGfZrtn?bEqlw;WGoYM95I*h<$6^xuo`% z+cBJMEw_yW*UK1lD;tz{StLK3CG#E1PLB-0;I^}B!JqA?6jq3CkiM9WZbbEjngQYF1{ZR^mz4G?7faa~qh0w$fq&?JB9TuTl^b zJSWKizO~pANkK|bQe!~mejIHk(_SM_B{W<(OAM3!hrUf4g~NJ6X*BI^DWZ%OhJ}FL z+L+iE-a@(lkZB05kR=QR>&^#A0@fQiL;ELGPJO|O3>~QZS7R08Lug+qPwS{%yOaD{ zIO5%=oWG?brkfPe@{4J*#}wqwc%c>YT(R3BW>+C0E_ zSm6e@0xs5>0WaQ&rPaJYSM-^Y(p7@nx?Fo~f^_=qp{E}8z1-YpG6W#hD6MO*dvU%CzK4=j<2Cq<^{!VdTV=4<{n_w|oTePQvf=@{L z_Zu9Ni~os{dHN6!=_(nYdgba~_;u6jKmJ_ChNxRe$Zhb^)7f|LTg-y9Aug~G62KT$Xe-CL#=>G3Eg35pz6VZkIy{pmy3lY#um+-)r9Yc>cv1 z`10JY>TDk^zm;Hga1?ZeQC1bTU^2E3Kmz*rPV>XBK2ZPd|4Z?!5<3scIc^2M}_D$2e zkR@C~C(q5{tSy=iQA->!j;MevFIxN(`P+wE1hAq2*mFyHk{4+R76>1#8u$07=5yXz zh8TTR3c;Iswxh1hzvXsSm8}LO9*=2B5wC5s83AyI_MDB?s=xAL^t^_3yk^eu-7j83 zKhlcpRzqc(SG*OD)TvkjS>$W#el1Fv^<7HI88^0_x0pi6uRKA6r22h~;PiZ?N zRg!?a(wd$P3#8ipjfx^1^3Jx6Oj^dj^fMl#P0h2iAJe&x8&m<7jG;>m*~jRxPC|(#^8%Iv(Faj9#288HN{6-Gcrd zZJxjmf9rd-u9pkLD8}0cJhIN~2NRFi)ntbW{`K6l*6^tnVAH$z(?vxq@#}QX-I(D+ zBUh^6y8WuBR5h1kOh_(IEAen^lJ*{$O;*YA1FGzM zM2J=w-ShK2>9UF<+XcgcXd_9918Sf*V$(_m6Qa&WUM{~z>$|;T__UcBO20DEr)VNE#&plAIQ9!V zgTZqn@Kln&hMC-h1GTtvPyqFe?`Y8uR6XynY9j*`52AOwwuXWfzTLV5CdJK|4DtCc zWuk6UyNP@CKcOZo?IM^!EJhflMtTt22sWA$2nR0g{&#^O|4_G$8+ONe6@dZtM`w4% zOsl7}p&OH)K!OX;PTNgDEAOP*&4njyp6;;v@`FyrS&5Hd%R%j;yN;HKObj|kzPy9I}ZUAiL>5?m|D_SXqG)*n}`-9yT zrCMi+FcOluNZR#KQZ&H0$BIte2aI;~pe@U1=1`ln=P>bPYt8f3vRmF`gOm@nNp2bz zNYkPfXp{X|BfLBs;~-Q;45+B?3-ZV!~mVNVR>$BF!j&5^xj;&M-tw3Ud2JROG9!A)tSmw>3WB@ z%DM$M0J~dW+u%$x0epwbt=^W&mnMJhWcZxGHz$BNN3Zwk8T$)iHg4TH8-+j zL60-_b&Dc+MZt5GxAybOKWz11;Uua7HS@dH#Zc5Y$6q>Fz`2Z$?HS6uI#KJb9l=-_ z(Y!}58f=U{0MwpnNt?d*D&T8SWG0fQcpH|a zDS&vdM3Vuk2^=aP5tU_McjyeDNP{WpJMS%8UjeJjjBB5|(UJO*r>niT%xIQ~h1w@B z8;Yc`hB>+=2<0Uz%oUNd_KkXSc^&xAx?(_$QX@2Je7bcRvQuOgifCXs`$Wn8fmuAr zQAYH7`zz7w7TRq)hZTfQ05pWIIVGcdR9Gp^sIJZX<#uXCyTuH=Eu1l)V&`u^4`eDy z^Kg5Sp18DvWJAMQo706!rVTQC0u1cF2%u731MB?@q@G z&QuzW+cw-JX4zC~)qb=JBU4i<=4vp4IyxL?xtj<0miB!^`q{S8O=d35D<;*~h4@3q zltcv2=evLy2^l3r>d%Xy);pc>?tQYmMV*}=Zq=%a<9Gbqr>w!Y8^1D;45J``B)dMnpnJ z=wK|pMN~JzamVl7;KdG{Qo2lMz z>Er$H7kjPvMOtoD%lO(qcajbx?X~PHB9YBlPqJ}un=C<+P=(|K24Ycyy?ABV>#bqU zH1wfjst(ma2QZyoPd&j``1as+~Vr$@%+ z$@Y;1;OaF!H|v|t^zUP;y)2I&Wheo5#wThZoH!Zt5H*Xf2bQjEo8y?rd7+e9NG5z+ z-oW3~Lckm6ANRT`v+6l9_^Kj1Ouw7f z`Y}X)G90egMe*o_9ygQtSdnrJ7-9kL~GW#8=mvXYEl>_>Lm41VUPo z4mIfG_4>A}v-Qocz@xi~B(M&+7!Da%M(cC9r?6B`=4;e#v|@D%9DX4@)`44mHr3^G 
zt-g4CAUGo)|1!>bciYB1o3jBZ0ehHZsKZNZTY{4Xy1(iMd9iJ+o1@QTEob}CfjzHe zE_~EhUq{93dKVN&ZIG`{r|M6=HTj$GzszT19(1alPPgbikYCf#)6ygtMbdKEz&ESw zEPrtcoSh5$D&ALxm_g6{OiOb0QRTE!%o^utpD{JCqL98=IolILzHq-sh3cf5?Y-J`&1~MYj@I8rn-3M_ zbbmy5kNAZI6&5ZpQGuYY+1!>V4Vnkq?bqs%qLu2h_^f>@`fXei<0{`OqXtpL%Zg4ma|H<%JL(--2fJJy z89{y6mZOR~jS!!yDwe|O0q>dJAEpQVXq@&+W@l%=)?W+z;kXbLnQS6bwPWsa+HAT} zh+Agd$G@RTbp8?yyqn}ZHrQdfR>uRk-XE{OF^s0Q`a95DplZGDV0R->GrcR_VZfa3 zs3#d=A;qMrJT-&sRp0&z-otegt){R941E;uq$Tv@%I4^jK$!1-*S-3u)U(|NHj8=x z9~J;)CdZd?1}NxNw8!t+%^_z%=j~k>d~JbopqqrXY6fZfBlKZpy#aTpp1xKj^w>3@ zG=QsyT*-`awn9#UG&2rT!Qcu=o4xefsnM>e7!sbECXm1Q+JId&x{mCGvx-J=F=S6t zpL@iqU&=#!3-m{q|6)Xkq#wS>p!0Xp;+7TKbc??^*Z%TSo#8k~r{z@*E5sCM)U8_a zVPc!=>WF1rz9t3%{^Mk7|L7;1TX zSN+os{nR5|>T(88U-=-bltERbCUAj*%X&Q&DQL%GLBM`35(Wt?1H)=CIEg5r0LV)A4y%n3st3S z)me>kBHA_Up-q61ikd-^hxRbD*+AOgnWmCUXEwWo3STM`C8tIX2i*ki>M?a28*7pA zXTnc}l3h)bpJ;tc}@FA9n}r!9|Ub+Tbj z3H?hn+8SCj4|1*w8Npz!L+ZZ!B?DdMd9~%DGmzp2vtcfaqn8M)*i3HyH6(P0QFZ*i zxH-YRdS&cFlI-%0mP4m1fGK%2z4`*j#q>gJ1KAsS^B1x6kRboM*Z)$|dD>n#Cuc3baGOBb&eCpO3{RMc( zQUi;_S8ZH+M>}gr%iupwaBT78m|sdwrsjLov8v;`h8lFMjyA430oADVaMFrHon1+> z44t?L)8pHnKz;M2_ln8b1x7%`D>|k6|S{4M;k~oed~%6PNx=VP&l!qHHLI0VM!^ z|C&jSOI-=96dD*Oct#cFRs$At_t559%_-BVEzY-Qj|Ldb?UHBK_=@&kWvl9gs=Pgn zRqx0aCFvR_ju!r)%0$C+Q40Ujz2DVzBTfK2PHcb8@-4y56 zerUgDl8_`oj;58d4~auW7ccukP7+{wKPT9ExTqE|ZjTawE=f7{Qpo+>h755>49oD8LvbuWtf_rpNTdV=j!Aa za@E9{$;QtP04!PGiB%IC4PKdcmC9aK9Nw!GGZO#-m`(fO=#3S2+sP_P8dAf_woG1mbG%4C?^Fy?pR!9{aRT*^+Cv>=Bx32oTxah33N2jt?ncu+ zI9;LQ-$PBwq#(56EGWH4|tdt4%H)1d|Q~( zx-q7QS1ALFM0`Xzl&xqr&JUu~Zf;hI8qzh-CavgtEaDvfBb|_<*jhgL;(J^InTfdk z>R%LhZ1jLgEtwn`;OQW^no@6%P`5tqQ59Dc| zVYO3=f>aw?IMNg7skTU`yre)1PWL|&J-ZmQ;d-0+1N-~p+A`mWCQxkC8M(V;FsU)J zcRs>pqT}VDW*U9&r^PAqz6c&e9Zenb9YbSP^5bExkO(ZPiN#lvu~gm7U4e2>P60|A zyHhxyq@13v*EE@x(Gb$)%y7bZzd~=1;N!?bR>6Zrgnk@LzHh2G{jJw_CqHnWd&x7d zdnlu9RM%g876#IvM5f%k_rywb`&wxB(a{hXqbU%88M-~v^VRrGuTa2W*t4g z1`l3r-fw8Rcj=^i1(cHG;0G>bENm%%(04{S8aO}gL^r9#U0sFI1Edz~sQ!sj-St*x z&+i>HkRH^Uj}&cZAEAb>Iu`cWo;JDON>XGVw@@&M8xbnCM>Q@DxDEnS=brevl-(O; zZu);MJlQX5qGxf;s0f8bCCvytnSR6R?(Dax=jimY`C&~P9Byn^bR0HTMEKrLiCm*) zmgaIasg97lPBf0IVzUp>RsY=l9j)?{Z}CDS3Z!mo+ODMExTM*zfA@XwQK@C^8?^eCw#By0 zUeHXr}DW%!eM80j6{c-0;gWnaQ{@9C;X1$9@ z)aPy+Ph_x@36hg1T=Aql9Al`tR+WZRBoqt*;^(J_J4c5sH#sn%`} z*lZSqmXi+o-eJ6)fC&2-F`YBX>(;lOE{%Q?TW>S4MaHmp+yShi{kFF0q_GZ6uSY03 zM#c()ViSR~dL2>_8Hfsa^gZDb;hqU6%vW96jDFWVW4yLPB&rUSsTPOu`zklT-MbL6 zATpV{zY-j70@5_2n#wY)cj+`#(S)^ee>aTM`F2R9%Y@c@L)=;u-kOs;6jT^~4)_>n z)H1gS)7WOUK}+M7BGpD4{X!ioLRzs|r%I4*?oVI*Dq3fs?z^1gXVDM)AwyU~B($z} z%d26v#uAbS0~h7z=E-L$wEMBI`ErqBv51iSCK#9XAAc0Ar)IG5@ zPV_Ku(1NRa{l}`{9e+~&PV?%J1F;j@cl}EfwGLjY*SJ;GA4%n*Nl8KnBNy9Ofvt@i0y zVM=1SwX-?G2wL0X02zv}-EYJ~ME@hHsm;H3Tgrk%*cYhU)NVd-3CxG@t-o}1^pFI_ zWYcgpMUYR?M%CM$8_xz<|XrVPBx6F@Ic#Dxrm!xg05?IIxlfjN~`GHDqwcFHQ27381MQj{q9E`w>~m<8Yg=?l4wBuOeG*SXha5_ zIs0B5o&9|r7iTzim%6b8?MQ zPu#CPHiS#wbY64kQabD|mEc_z@5ck9EwUs(6ks&dLYqRMkhvdZHwotw-EY?xxYN@x zJ#LW<1)32=OGKW90qNzfYE*Q{#{HIFU~NUIq!kcOdWqUCkK~4L4lFSBK~IxJdAR9V zWX4R_?d8~RB~$)~uxHt^NO<{e)@|{z)ys}A?YL5F3xeWM>h}H2!cv5m*f+j4oWMK6 z;F^LaNi9^rCp-dh&Gti;Bh$3sCGe*f^J?=2Bl{t3T@x1_{m0~6KBP}Z$6xsoU0=^- zc!t}NHnSRb+zVDtB(yqgNZI$Cn+CaE{pVsvGyKzDh&nmqs#q3NI~Q8_AW>g!fiDAG%_moekTrbzG{Z6ck(NDY8*UDa1Sq5qpL6s!9DzjSl)y$!?tkP$u zol5bPnC{Af;5CqJ_}SFiCspFT`n?(D8PlsOv4>AE?F90Qo2qE3%Y@c0U3B>yJMht< zz9Y~4fw9s6wRVG1@l%)k8nvz(s{e+knNauS84ca#la= zO^z6Bfw~NVw}2*dNEUNlM6KBH1(}?z)@7SFR1`w9;Q1gbdFphbEUUBY=f_LyHtQ*V zXy#9rHjrR@Mlo{)4JJ0p$apoq3c@D_$J|Vp)kpZ-vocvAA5K9QE{O1@uC71kvfbm$ 
z6Sl%DdhxdRYt7N8e?2Ai9`J7x@!Z1LT3o5_`>PFYUivsiy#VYFkTdWb`5aMMbYUoa}vLHqlEq2BOqGU#4kc} z_wwRtS_24w^#_uML^)hhQZ_KkK(9l3JuvVvzD*i`>m~jGv3da$y{li)pPC_C0Zp#W zlWmn`s6lMj+rcsOgT;JKvPf7)>S75^t;Ew< z=32>u{NrQeI4l(Mtf%vMHss{w*%k8yZ9Xnqdt_U9Wt5Rv?C(M*(2)@OGjH$K1~Om} z=f90^#Kh*gj{KdQQ@%xFccC$+o1m3oEZ)f$3L8mGoIA>yNWVQly4aX99cHDjYe74I zf>t=>8;K{j8m1BAa6{Y02}9rY^*8SJx-luu$jOVj?NWX2)lB>aPV;=IN;wXp1+;t4&n^q^pkBz?k$-U;)Y%^y=VoeKKN0ilw8Qm}&-dWsWGG ztUy4;$2JrxQya`;b(OQ;|Im|ByoLH8`dMYDV-SsYMWmGRn-dH_+|r}aK3&oNVNgib zEeWd}OH2y&^fgu;Yx~(O-zb-pwgApdxG0~4g=cP(A*I`sO@h=2a;t#!SsbQyN(H9l zZbSE>+>New; zRVO}_aPqgjWr4dp;{T~f0scbQbokj$^pefVQ%Vy7__3lcU} z2hd0nWnq<>7gdXX)oVk0-7@acp87hCo`>xZT3%Sj{c4Ttv*|{USp1`FB!c7FLe;AF zFemuyAFIYz&m)Or82l4GWW{EZ8~krZy50W;i9mM0leN8YECO#DqYWyQe;ht?7}@Dz z3}V(}KVA3hmYpb{HkWdxuz5@w>x#b?exKNudQ8Q8l#7($UOQ&JW!py-WXZ9qwpsp^{ZCr+G4TD~L&M0p5Lt|p8mnX&B=P3_a}I2h4d{a@ z9z8+cA(c8X&@;v{j8R8TF?zQ3<{oW}k2r4Hv2`%SX_{)s*lXu!Njwh)D<*i|jum;M z$aZouOf9_aExyRpj#c-bC@6Eb+)HbTR4ZfC=B)kgX(#p9`@I$RUOR^TwO#3bvgftv zh%7;_%!NI3|7AWD)7!`BFRL$aQ~^D4;&?MXGmDVF)J#r{!88vxBg4EtArliJnb5#~ zta(qKK5gdbXCN||XuksakCzJL)Mm16_A&W@t22>>5h~9FGVi7bK76(0YPl^r@nmdH z{N?88*)z=t@4lPY8Z_6gG*2HtftkT_wPZ2lv}n)Gynu17HGlKhf7N{a(a)RlD_5Ea z4;~<>E^n$`D_c#0oaUXhGOE%~85sgZ`d4$d-6f;9gCoirVrV`W$_OiPi}{7QQiyAr zu?L(Cz_5>^7`}7iLUa7YvF65&8_mNfC?Rv@-I&(CwLL45!L^m~MII#W;(yifkHR^G z?~DauadEjB#$$Ks!uhoE)zxdwH?*CJA z&)*P+k+IQ=E5-wSuegAD3M+l?$DJB?#cNk!*CT7+RuPO`WMuw^hRmZ7 z6)qyKS=g{_kAo*Mtd-LRMnzviIF;LmWgV=vUuafeV~%tS=$#44{}y z+TAP;#hc9gBZU)~H9Re&WW}Jtf*55dwYvaw!s^3E>6;$bag63XYzJ9nEaSHEr+?hr@{Z0T=@ zuDZ2)OL}t&KYr|J^TFl!P~_>quh2o+e)i&7bNkNy=IYfe%?z)F*(xyAn)FgQE(#}8 z;OWwZi_L|L7+#K_WK71JS-k&84jpNJ`};pM&rysOE_OlV=+f&BpMVW^M5Z zO8*q}?Kpdo5cIT$=erLB4RqhC4iu8^%Gd{Aq0ihZ!YG&0MqT`c!SV3#&z&-JXkUB? zwcoy5P98W&pFloF5)OI75G6e(B)kPO!Ygw*-;`5f>KET2&UtKq>WQV{eez>cUlcJ} zGrtv-xwonB8hd!wi=Uk-Ak}p~DxFxLc&}~O4C36j?PE=)57u5FDD(io;V>oF;wa*6 zW_eWc=2}xX6+Ypb06keZv|-z+(Wblz|2I6DeEjUK!Fkx>u|DIW3cP&Vy~i7P<5m8R z>`Af$lI6saea)@*zR$SDU%|J(lX{*Kg3u}+;?c_5XThC)^d6}y zr&g_^QbPtc_!u=2Zxn5Gd4V}g0wtXkY*BA#%xoj=P*3QlC{(pT8EZRP#@e%*VBSt{uc$ zCwy*={9x5bBCULH{d7Li*OH5QCEuVP>#Zev+o;8*sW>k99;8i^bJ6W3t}!dD5AsE~ z2N>~F{>oZjjY`gbog5-m+6ea2=~Ndyw^t=}xc=Dk4QAp97{x%j!Yihhdn3%+yj$^> z+qP9e@9@aLAq|-LvhNyil{*&LvtN_*&$Q9IWrqCegS^CR$D@p5K3(H7=9Wv@_JT&m zg|y_#8KswYvsSmT_$5tPQK2%*np*gC%`=npDh7!5w|$ZpZKI}2%l6N9cEk^v)d75F zk!dyZhFw{GJJVIu#SN!Hu4&ppAAF_)yES{T}%1zz`I7=gl1A6}``cyeAe!!Rf( zrlG7d+j%@m2blDeQ^%XR+4<)B^=}a7SDFWpALG4PWWiE!D%7esRADK?hOmXt$l_5> z_$iOp((1zMQsH$x22uPJ%?XQ)QU@brJY{@goG7d+bwle1V9G9f{6@J`K^wY zCPyclgHuxx{KpaYg-s9O#!?aZ?5D5w7)03{L4icYr!5&Hi#C6@Ez0%aN~!iw{2BHn zoHHoQv%Jewp`R7B1D~*qn|PN~3nAtRsS79=OWXj*Dt+9ZR4xi$v|x%Kp>_}-Vb~d&EuQEXFJ|%L;oah) zf}h_Jn79}-Istr5Prqnpp%wf4^|hG!6ak3ReWV-ZTep$uY+HU)STl>a{UIJlz24$$o`euQ2^z>!3zPvN%Syr$^;UB{*iPq@({msY#3i}ZA3K2q2@*KvV$9TCX z#zvqa4H?oZuOb;9ZN@OLIFFy>wVj?_Xb!_R7FYY4=L|07dk%Dj1HYeak)TP z{0e;dFf?+PzA3OBV6F`}i%P!{-r6N2q3xS-7$Kww7wOSXc<#ybV_iy#i5>v}$6Lko z(7{8D*ANDQ=`x3yj~b5j?#r7T*CghLbHuqTKS;)LF}85h7_%7J6@<+UK(C?!>u=@oS1`c8!@ zC%k6JFRwwa#x5gM4zM-|&v&KH43G{PQ_0NrzlVS_=6V=4^hm>5f5Ed`2!znzQ7{=3 zWrzdvMd^fk7J^&XQlH_av~T>y(4qW7StEC1WC8~Bi7)S)f-hi7Pb40a3V*=>_n~#y zr-`u%qJ&0}Yv&WiWlpnajceyl3b!B9R@wr-LE5YB*kW(?b=EwUw*&B@@io>`$|)BI z27nL}sZDsqO&@(~N2G13d>K^~nGM*xZFRFmxdhIBr|wM7Sa9cHQaCwSWCz3kKep0xFY} z0Z5QeJcTtGje9;^EVaD)F26G;PZ4f6gm5<=!F~_g{IlRZN|3_VOQo01_fnN@%VY?F z9z8u!=MYv;BmABOUxvgT;Ql-zfsY?QZEoDUi4YINK_NSF^jLG^!{ZT5#s9-cdYY(O zW(E6Fsc}L7yWji&B22Q@fN?= zoZ_Ay;Jf$lwPDLk;OGSPyagT+>rssL)ZM|W_4L^@;GaW5S;T`l)?9k~?J(i=9Uu-L 
[GIT binary patch data omitted]
literal 0
HcmV?d00001

diff --git a/site2/doc-guides/assets/syntax-7.png b/site2/doc-guides/assets/syntax-7.png
new file mode 100644
index 0000000000000000000000000000000000000000..15ad9b013abd5218730350924c254c2f39efdeae
GIT binary patch
literal 83816
[GIT binary patch data omitted]
zN|W(?h}9j@^L<5mj7f{fb{FcaNVT<1nWL_T5Zk1{YyG6%L8^=c>5MgeJCmaK&Hy!x z>s-W_R|IIeTZZe`yEYgaYam7u0fRxooU2CNfsHF|_s6sV$<1Z*7fM?z)IFFil^HBE z>M}G#egivpO(V=><~R8k_J~AKFc+3}?e?4d>8)MWb-%AVY(KPGhnoQ(M(0>Qt2aCt z8zI!YXK01o>5$`##gO^GwhMjmqy`9U**0Y}-)tV|5g#@F?UK7B3zDXRBL9m}E1cyJ z^qyezpsJn0N#ZiL5U}#{e{F@iHctFOkXQf|=03DKI_s!*F2D%&V-oa}iMCAULLrp7 z3zipej(JV<&^mUdizqw_-i9|Ir7_IUj=2Ri1#K#aIz@l$XTa-A=UH%t($~y30_urE z2Sz>jk4Le5<1kQz(82jDpyDGlN4k(p{&Lt-v{+ZGmj&@aMD>73PxV=V1r8zcx9=|2ExAd>^2O zn<$I!t&IN5R+tG5rk?9daaX-e<;tt6ytyE6nEkP?gAL`sStR+N5Zz#CL9puqSYU55 z2~m0VO`}MO-Ln}z?kR94hxay0C;1v*YHc~L$5nHe!+BJ1GfUv%RCzEF39n% zXIiLPp>onF_~TRdH%3%=J-`IK=nL5=Fxfky0QRtNq|gW7PFG^Xm=Q*RPB$1Sa8Mi6 zED|+$Fm2h&Ng-CFRuLV*q0PXX>H5W@k>LNhle8dvC#&kSl6yq6(lh%Sa35bC5nN7 zL%E5%`c+0o%w@z{EBsLG;pzfL?}E=_>01YIfU01lI_OZAaW6$ekbq$?!}z}c1UVxI z&p1h=1iS2$B&k5Fu3LtN^#30I4+Nhr{S{1-FatuHvVE|3lUMX<>I_b7M4#B;JVL_VB51VwEs6XX=@ z4n#BM0T*7kZzQ2BC1>0}u>Q5P0re_67>mp9`EhQW_}RcUMoV1VGzZ2Q=cK2Kp2*fp zq<0pC)7~wRYy4?`u)E&2UiSU}+6bA?oIKp(&w~%Yv=ts9`52;nlUJcEF{COWm-0}M z(yC9U#%fZ=w7UB|I*%#?ni$akL+_oz`-m-GDghZr9xw<>@v}s!AK*ED>XW|ftrg6n za9}TwyCwZYt85MXznkZT?KA9P(qC#y6e?A^9=jm1fLK-999N%$83GVkvEUKa!VH-2|UYY1@lABn;ahcM$uhXLzqJq&rs`YzpcXHO;tjy%LRi*x^Q2B40 z^}+KKINpAzVL!u9?*9Akl=&}*fFF#r_i1}7S%Wcc+G*NRIdJYO{N}VqJ^x`J1S7&K zQt{zJkaoChRM~(c7&%20M2BRES;lfQ&SE?fFS!N}K}Hzo75Ejh21hB@Xa`iC2CgM zo}+Ltt>3nQAD+a$^`?#WxLNIsqe>||g{)$1k^f@XOM|2j(4C6?HO5S9?k+;c~TK&3cQ#8PVS5biRnKjte~0M@oLcYoMPxozlLR*y7X5G8h?0zMr>~8l zxenVT{3wvmDF0Dre0(}#JQ!LKP^s}>8~G1hpPIMorNa)FMsft_5#Y1y6K1>QjODrC6`|Cm0DjF4Tt!= zu15q}mY3I$$bPP}IWW%m(z%gdF;n2Kv3>Aq=#kC8SE`cyeRPk(jyPHDqlNV+t`R$v z4A*ma@OF6&l=A0n}R<(PxY4o+>x45@m%O_={XwolT5rdw|9+ivwtOmch# zOH&Wsv+q=RaR^(A((LFrew{I;-Yf)dJ{hPT{D8 zi1sJ#-PHg0neRZV?RVwnquMGr16XL6MON&)OnZN4X#*I5r}+vvU>!RAvy2lUCmt1G zQ{~Na*8=jb3O+SXgNmNw5tZdc8I7CzFyke0o5277Nm)hx>hw0lizp%rE)7|~P6PeY zIsR(7fIKNCZ-fe@B1S|JNJB3beNv#JKfkvyq(+|OP~Z8Qo|VpKIg!5RCh_3QCGAM? 
z=GIgX^?dSw?ZHam&3wzz&Ep-3`C>*y*b-ZGQ6;ubiKGviBBvCA0t&$Qr$NA_1(p=> z_N9ZPaT<0es-zWHc*z+Z9CB3GV`GhP^IZBny>*E>)7Y_^{$DJd@)NiVU-`0(X*$@G zz)+bwen zdCyygHnCSpx{Mi5Iecn(TT@*qRPe9wtKyHSE=&FOdvJtb2K zU4to-x^`$fFXUDNO{#QAy)$jU<$K&~baT=EUIJi-SpdNe+4>&H<_BzxRDfs++{7!= z=w?n~)z*uAx~3HA4gNZ5_jo0`GpG1M1VH z8jksJ%b03T2^XO43LEJHQeHS7+W;yJc;>~H=EVxn*x0zivJ9Ii>-&biLk-+9 zgCBzq4-XYlQMd6bs-O-ugC81*CisGF0h5s5scGCUm6ppb8j(C_65io8zT;UB$3Jwflz992@&`*=M!SrbUN6ptu^ZJ_?zr#%c8jHzFqSNe* zl=cWnvzr0fx+weA>fV5ix#UX#GX|Ina{_K`7vh#>2#-w%m2nx)^OBMLPp&VeFj&g3 zl`gxZf-fdHriECCuf~P$Z{BwA=f)iYFGf;_9~@j+;|9?Dw$d6QzOcT~0ysS_@+#{K z-U9i)w4q@xDYi|^-@u7c?XPGgIRH+lBrG0H-)v-97(01dyoc z`=seF?nKsYZJ!m~haP{K0FME&4A{aEW{5^8sFGiOjmYl}I{*;RaR4Q{lh=LQNbwd- zcGS{*;k8AIbmaas|CwFX5XL0!B2dTMx`#sJ_E0$X?l*^dY5UV% z*-IBo4e*q>vzib%%#r?WK=Qv)XFs{+(mmai$L6({#kAMwcNOQug0?yKEz?qu+#Uq>4I~sdv)w=E0gaeu8{t3SC#r>6&Kkst}7iTVP2(b{J ziNhkI53rmYA1-I~>=&%+8}07jr@S$G9`SzKxH|?2S7~qNa{z&vSLO|zTMnW6Pm7s9 zs1ki-AOvqJ#v%Mzc0j}5xrYfv5s+a5%on9Oob@>7pMW*taxU166AujUX;rqI4d};k z_kd8=!B>Z^r!Ab%fL!Mhb5!=~H3`hs-O2sWbTvg-4YdOvL6wF@1&Q&MVp!JDE6OZ{}gyb7M>?6r!P#g2EJ`lCoM#lmK`s!F2f_7UxCU<0ZeqU}4z>xF-N7Z6?TQ zd^VC$voA%+DtNLPc+Ot{7A(WAXub84&~>(`ppiZBe7I#%@;6?Z1^VMC2g|D3XkKW_ zq-ZDzbtZsLT$FCtgBuAI7Q#oE>5X1+FKMN!^)-{>){v14BtyI<)j!x)<|HE&{k4`1|q;&lmp6tX%SXr*sw1H?VkSjhZT zFZaW`uR!_ki&uQ4fZ$gsV#O#>G@}Qm&$3xoj2NP2dtQLi4d0pIn0Np}@COajT>%B5 ztRbKKj@nNKF%kV_Yf~LXDrKq$Q)Y1QNSP7Kj42aB1;!cN`y%yZqAA|Ml@LJYe}j)O zzchV$_?tGw_a(TT@Ydtv^p|LOC^W`jb?3axC1^xdo@B&t=5Y4E2=D(2eg^w_g{|iF zF6_%-P4}22X1mXw(5KtOu>2i#(g@|LLO5;29&@}h&C}>o_u!Wr#S?GyVNF>JJZBDa`+ z756;$lNW@Hp1KM)`N#01=BBwI5gN8A-eO4c)wQK@p;E6fIU%wlLycoMzdTo01QDWYRrX^L4r~?LD zyd3}yv;v2Yf*Uk5m1djoAgkM8m zI;URah85#{@&rsSmUhrs1Uw!gGd!lOMc{eqAyCje+;v?Tko6DYerz01TNYcW152-b8W5bJl93Y|_3rvVY&qUdkO z9Dx!aBMO>F*r?_aVAI?-OFuvQ1n9MI6a_>&va>Y|46rf&%t49h(P2si;{0vYqGttC zKbi6z67$RPUdEGBE`}AtpEu{`B8gqe>eYqxAjl$FjWRl%J&G-(OTIHBRm9=HX31Rl ze}eE%lAqxV_riai@I`xGTV}&4j=#iJaxk_l6U_|{gDe8R4Rvgox<`GaX#Cd@vrv?SwyFD zap0>R2?L4Jk-Lcsmmw=D3R*C{!htWMWgSlPrC@_^S18xhk&CTuG-fbD%(kzu7**^m zfwo<2G+&Bfc1guS5+auNb${j@0>hRh?cA^-7{TVAg}ZSuDl&7gaGYOM0=BOZN_-xV z?;Ob{a)bcc+h_(><7AZebC_jlrc|m$o!!kh%xJ~aPzuo2D@u>8&wRI=P!*vs;lpX4 z0uhd~=TU;?tVvsSJ@i!DAL7g1Hpu&zs~KoI#u;NQaX3`*9xn1W(_I>kv%D(1>dkea z=^RUraiqr`NpC)&zY1KX@<{SCsR+QUhWT&FKwyBlRTjA4UURkdELq;xrScQQDc(&( zD}T+|Gk*z_L!#ElFK_z0&`X;IxW2cvT(u%YGwfBH3SHA%B)mcZq_!gR5SkI5^Y9yvQxrC^ zXnx6uYd5RUr%%w{TIhA2#YvNw5wMta)vQ?jhW4Kb zkd225iCr0Yf7?&0eS5_?W#y-CYuxdtqj?R>8H^?u=%Tn+>9$;0yyxkKY zA~-@TghIne*yxnIKSW4)L^HxjgU&w)^67rvBj-G<{;e%xN^&Qt3K5Ut6HA6WD_pdw zg%I+2aNTC01Y?o~c`bXK%VHNqI)h360v+>>2g~UZ*{B=6Q7(eG>P+K&N3d}#O=4lU zQM(>DYB?EP0GfVLD2{zpJ;9|zQXYv_7pLK8&Em1_8UM@gh^*V&01$5`jD>nHjs8Arh>iy5{vU7^)%iftLx6duD22}iUK z{%w^?#twi_q?qWaeN(jjJgZ+(cgim z)9(6s=?h?t^BEB{onwoD+u4yb4@~~vZdI0xxZiKB9sc;$n6E!-f8;Mj@HLTrA2G2k z4tx+XbvV2%%6u9jUWm<#=e3ih{1}Khe8QqzTX2=X&fH%qC>xcXpjIRp^~+7l+M6xd zGa=hBi8aCjKh3-_hNeCPA$_DiOe4R|Xr2@gq0CVMyb;cw52j6h+QiC}0exsOF{^~# z@JE8P5VZ~g66j(wKYp7-9WKUvK=Pg4&2A91UIrLhfL6phAMD7ynQ1*Q-88X|G$)k! 
z6;@slX8ebD^ z_pDAAby5g(oo6lK13})>{c@w|hL!#W2e;mZZ{txT;T&dzmP+{Gf2tzn*NJ5rwDTG4 zER?)~DC!M0MX?te(`EJP)0YYg!ShmsYWN=6BU=f2^|kwJpl4C{^m#BbUidKZ2!>8$ zXUXm}>a{Ii3gYW`@CwR&Dq;XpQ=!nX_W<;hNm!Qc0k%$E@#()#X563DWZ+46_7C1( z4`r?5qf7uc^yH2qK(R{!My%uNrPG1KrL#r+cX6?Ec}ptFsHr4)P4u}z=8gJ8j$2hO zx7kh@AM+1XTt8i<0&md%idPCQ_A(XZ;Gf-?_j5%v#-alsevHhx82ARn=>(a2C=SJ^h z3uEWC05#!W!8iT(Ub?BbQjG-0beTU2;b6frQrd(A(7~bG% zr$NVu+k;8rd_XmoIH+Za72`I`-GB8LxxM$#Ke=P}C!m%U_7zgc8E@4hOZ5!x;c;ao zCC9Yez#2_rCp|o|&AV~zWw$9quVml(ScQCpt(r~To$+ZK3(PK9qZc#-AxW7XBo^|P z)ndM0CR}Inqc+e8Fb&2fBSSed+SE#}`DuDtM(C5+n@-oN=PFkdq|4xT=gcnGuB9NF zWoD^pB#B$#6{VI=?+}zlLv7R+qB-@SU_F}%2J*!6q2^6T4cCo=yK(5x*iyrxr*RHR z!=2+lQKdUdRrn?IiO7HM-V+07NfZocz%y=?1hv|CQGy&k>=JNy*9Hq8t5lOO8O=yX zy`l&NW(S-F1(|rmC+je0(6$pnq{L>ncABm0^VJqpQ35g>6tT_EIpc~y6zrbe#T3Q7 zHOM-IqytKZOon}a?B0bH2v!l~Pw=0_=V#oNN*av0jhA|>PUpP@HA;{@hgCe%iU-Xg zQ48;ixAO3{(M5o~9U4&bacnqViS-D*Rlp32(<$=;4fU4ZG1n9RC-bp_n4ZbN@LjIw zi~L`f+@Ir|h+0YAKn+x#mI;5D(epo#6TZpcTI-dp{T-R$PvurkW0gG}{qis2K{SKC z=I4v;WLtw#@yebgDTq53@(66Vw8rBtm^D0}uamP2M=L(_2ul&dg^fJEGBf8s z=JCCzwD{-%I1}nC=jNBWyGsP0P_(k)ee9+Op9M0{c?{1ErBBy}#7vhQ^lEw_r-lcc zSdZ-hScTV!Q#HO{|1}?yKXzB_MD-Wi{0m+5!UD3;_0;<4N*DghoRFZw(q}%Jz4pib zKGlnK#(j*+w%Eq~s_Lhun3}62wfrBcGpoO+Y@7KaZKpnhEE`@@x!)*#y6vjtrk~^3 zs@kdWdn26ZFAV*5cWuxL2RF|wYE~dXJ|`BUG+_#8c7>!ss$0lUjGy)I$hIW{ktBWP z?7a)ypUy8n?R67a?)eC7YA)PX2KC*8guqOjwIw6zc~K!X0dsOt@+{AoE-d_cUQLUC zq4$!&hh7@1FLg#Yx-MkHcrZ5LedNP83{g_y%r@7{9CMf36&=DhfSwapX{q+k2QTo& zvalM?@Hg{6(UcCXk1*ThWU)6{qV&D~uk$DA6+Fh~lf(y(o;gtw)Nm*;GGMl(N|{M* z6{y4Tw}K7x8*Fp1K>ueXpF2MRDXCCE_MrSqNLyq!5PPxLZq{%SWW2%1zbB3B8C131 z(;d28q))q z=slekiy(KN6AHO?EANu|Kx=m=hWku|#nvsUn@G{~M?o*>$R)N#=Q@6I=}uZV>PY+5 zI~ix07ZKPCBH*v*Wg{_+8ZWCvA+mX>~P<+)YaWoY!Uo^ zSlA)9ukgv>QY^LXwaRme#S=ABvkm2#r+}~vKs1u41F9Dn(qFntXaB#hzB(+bZtWWp za1en3q(M@U5JVZe8>G7iknS2%8cFGfAtj|7q(i#9OFAT^+jryhJ>NMV{$Q?)YuJ0u zntScN*Zr&EAm~3ChI3$v$?Ud0Ru|DO-FMV(`*8PY73pmQ_BWJZ^N|q0^Hl1q?8@w& zN5a{H2`_*p?phr-&-E5)P40@4TTDLf)qW#B<@GBn+x;$T-PlmV^3$qtP6O8Np{R~d z2{rICR^rAnz8cWvT(s1tW#fnk%z;?Z?$5w^numLO_47w}&>?LOImPaX_(^8|*RzFc zo6Lz)Z*bO&AvC<@VMOL>N!DX735z)!<#`fg)^3xEM+Q8aWFug6F8*`8~QPb3-`N)p6&)6Mij%DBM9 z1FkKTCjE2&;0TIXJ&6#R)UlSBDl+lj+-0w*nGTd3Exr6%IcwwG!sH2OIkO_0A`L^x z0Xw-aj%t2ZRo6r5W&UZ|C+A8>pXTEjt(D>!R>Dzm6k-LQ7r^P8U8fwSx?Ft53|j@v z>VJdHoJKz6rqaKFG-yf!P|#a_6-ly}7G3{9xq=AYrvpYIh8r;#?{_bBw_2jQe41F> zi7j#Qpd9(^OE7*(K3F2FLWR)-ac&;`Xi~O#y3uiO=`J0PIP5rQhQCvZgwwJTbixXj zmDcxtclMsoDzT&1H105uu*k4GKmeCgucI$W1=q8&$n4L?s{qgEC;FZ?<&P>dvHbiM z#9p1+sis*bI?D#KxQm7$i%Cs5nNuh$_mMoh`zRE9&M|T(Aj};C$n66JUQ$KIR0xhp z=L}y9D~fyM!?$ftJg8N0px1}YSOO=fH**R^PJ48N5j+Z-FC6t>MohrpAxTAeV*$5- zoZnRw74Y@iM1HSkFv`bt9Xt>f6_FNkLbe@SiDAR)K>+#5c*()S41V?V)xRddCWqmH&P${7sAY> zvL^;gpJ|$y%q4?q2QfLtG)WwzS$C_OLY`_0T{z8xzuE)HFkyMYF{Xwq7y0ZqkbjuD zB;|6OolQ;1 z>z41X^r@1z;~xTws2D&7)vsZa`Z9RPn&;#2z*JFci|0d&N912B`; zf#CWWpgYeyfsr19jE2_(T*GrTDvjT|9O+$LtyCTZ{fmYL=dDy{o$_I(i9}LmZp2yQ zDGQM{TBAFYL$l+R*Vi;Wg+C2e4nOf7@(hNErX{O(r!&d0s;M%firb}?<}Qa+U@jcI zL8{?JJP`F4TU~2NJYF=pZeJtqO*t`{m3F>kJ#FZ@2s?zjCp$&EU%N)TuWAgXOskbQ zQOM(fztVNkH`923F!;&|D$7U(utBZHu0k>qdG(`s)p?Os0k$d?iRlt(4qz|@#LYGM z!?gjq4Hc824L^|4CbXgA8VQHf1AOB^DhKZUs+ea4Z!&rg9-3E4EW)sBQv$P?K2e0R zhj^zv*Qe11gURgu4=wr!RDepUSgRHz%7#x){Do^aRhPTi&Dow>-KUBvdSstVyw^G3 zA?yuSz`d$Bw-;vu%QG6K=~BfWvK4nyPQQa*cW=ECh)eaW!mb@N`uuw>b{4ZKk-2iY zn1@rG828Vgl%eGjZSK8*-^doouvC`;p!87?Ryrm_pl^+($F-GU&{q!b>H4f+oXz|b zC|rpg0&X27-G06dyTOR0aXZLbZsE(ZYTqS9Jj3%jii0)zx6vG63BWoK@iM;$Xz0rG zC_Igi+x%v;+&q>im;UCpaPGY=vqpu8YN-wsPD7uAv1a0FTCntSub46z7ws14dUIP5 z`;BGWW3?(fEY>@Qzq*RQB5?;YY7wtQl{&Ym6k}$Kwb)I#E`bM9g7V|m$rX>7rgL02 
zU!8U9p~%>BxZmZzn&aq2gG5R*asuMJqp-)*+anRX9ihLQ8(oS&zy8wEn{Wd4B8@mz7mmY;R}l8VSOuwUoXIAl+4WH zaCAOJ&2+nF);T+T{HB=0jE9bOEXJdonpljyk7^8p_>751any+>g5kiD-oR_KnyZS6 ze~rd$eFSuSIvw53M?&p}+BsjpmH{xl^&Sk(*UYh{>g7UHMH*`*KLW?g4Z7dh2W);RSk@ghRk0D-GtfnXf56q^~k52~B0I!>BU_igdf4zqMlwM{yT0D-s;}`G8`Z zdHdc=g{C`Eye4M1b~rPQF)4d(1Y~}e++@_4A1u@zRF(mFMgIel5>PmXa&-X0m4__b z4L;bYZRp1sTx%sH&?HHoR-u8co!0r>B-2xkIn1n=?>%@crizr0b3V*UvS}72rJJ`Z z_*Q}@t~r73HJbvP3=wUJO_L5bw^KaL-4iK;m$0zY{g2uqhQT&TMQv{eN@2;K2t_AP zDIzObR4S%R$F0J$fAeXu$r6>qSAwInJeA6?*}FCZ8Okv1NRcB#wr7*j3i7SAV{(T& zB3fAnF?m%rN$gUV6ok~zDCes5fm;N)y7CQReBh1HtOQsnG-dqoF@%h<8z!2H!iURE zfvn-cR4+Wfcb9r!qD&v~E%vppFqkLXg8+aSJ$u*Z$qU;MlV!a72Sdid6abTE^ z1=5M7FnCYpw_+a8ejs^>4Hm!%!S3mttF}NM6d&Pi0Au*>eDS$r}Lj8u_qIljXm2&|Mq^f+qOv{ru}shZ6y{ zRfQsQC z-OQ`X>q5zKouQPn4z^-niZXO8a{P&0Go=(lt1dDni0`sguTG^Vl}(AAppJGcYSDW* z8^idkR55y3r6)U)nJvK}3uIf~<577fbKeIeNd=vECr%3d9KHyjJtTyV4yB4W!w*$>5M zfS-M_00(GU*L7k=0epd8Adus*O2MY(u4aU`W8Du&)>dZxg{LlKg+`)qJ-J?I6utF= zQ36z3%QR|a+qM+HiW^NiOH4$#ia!#8dBx`tchfu)q3a$Xv;uqu_f?5!+4cZ{=rEbY z4Cm|doXJ`Y`&2l3Ge{KP6s+&eFWq#0~>%Zhy5eI#9;s$)V6&x=Lb6G(Q*GJ zI&D0h#!HIA_Xad`L~>jC0l~BkG~wg48~vWi&hK+Gg>B?J#1Km~je3WCw;FJEhPRxV z-Tq~BX=1^S{`m{h>pLsh=x7x>u+{6s8o@V3 zG7*HuHkf76tlden$&!@Lb0jo-58H*_lTMPIjmY;|MtAqqH7$G4t{UXbdJAX!qxhQD z*xbKwlK<}Xur-Pc0LVO2pDL^&9hqt5h=Tf^?e{@c;gJ-=0pEI?@2~f)ecU>6)q-@N zlL2chd`5uvGY;O3si7&EecLATvCk9=dgOF-1}6}Ac>faM!X}I#xafBR@kw}_51}qL znKQg+vm;JC7Ynd(0_0%a6S2onG(uCJtm)q;vI-4>s|-x#f!J@OJjhAMXNxKrka)|D z!!tC>IQW2gbAnLT!|KR9PDiaVnm(Ro!oL$L1~@(Qq7K3ZyqM7_a7E}&*M9-rdS{WH zVSYy}BJgH_290Br69}>21H76!K>k=pxtD?_7(@^+PF|!JFqGJUS};lvvQ~IX_BukoIq5?)%0M*)1~2mbfr`J+SZ?8LHqunf zy8R^?F7iP!apX%3wbQo#=rE6WO(__WQ!1@O`CWt2mEvrLLzJZkD`4;02W+})7`1*5 z=%w<1@ysKI(}gp*#1CI(+^=E zoyH|Hh}3DO4XUsG0C1M7768Vd3nQmsl%?fi^#@i%!$Eza)}=Vgp@2ak9hGk{`Mo?C z*}Y?g0~!CO%D)SZkqfQr*KU?In_-=`5>oOpX8>g}Rbw8qiEMT&j6mXNZ zV3K));glSroI7VJB#~zx+d5&xuXG|ffPz+~d|mgdm}BrcQGsueuLeW>PNs9d<*k|Y zJ+UYHYnkVe-OgHTTFHikA2nSKXE7a2yi)UFlZD-&YzDjK?0b1aBJ_9A!K>r9W28HF zn=aYpM_yWIC&bg=%s8WbrStJr=R-#Eu?I1^6g5dchEku+cVFRROO7|mzcSf#N`81; zAO*IFM%_g^3y(&R;Q6LF0k;AqBWWTib$V@~r7E^8WW`~oMy8$Q$c&9yW~{FaK7>tm z<1ln#aR!)LJ?a1Uq|FO;khob>T$=dN9u|P3C>dzm;mSnoF4D|Cm~G!8<{>1 zUDR3P$pdYpN+OZq`rI_8?qi!s6kF4IZ$c%lw`e3%v;@A-CGXXG%mK$GnTI8V5*Hu7 zp1clVEzLuPR5^NFgVXUn+=35ree}b7>9GX9JWLpImh6pdk>$i|D9-b(S38tg=CW|} zcV`SQ%%oI)GTvRNwHRD}E2A(S9EmOVxWUcC3YugApEvo?h*{0R+!hbXgZH^4ip&)Z zV&D=f;moS+Fqc`PGE4r3_B;-ims^7x#%kNP!y5px1ExZO(Y$*mo0F*wWM)k#|i~TiQ5Nc{$ z^~njnc#Qk7k?QyTMau?2CbT1>x$VsC1*E*tNYG{k>WhwOQ9ff2GWR3FFIrM3ysb9R zVuT0o06irlyVn~6Zl@x&s{`WB1uR7tiz!Bm3pM$X71s*EIxoJ+Hv=?nlOeQvk;_&T zdMHdA6icBAzfP`JVU@+3Wv^{3`Ld`+9MT{Aqf;cv#b2v@@#&MtX_rx=t$$@+SMWM0 zcv_DD^|E+9=6JhUlx}i3XKn}AXQSqj@Qtrt+20YIljr1fgp&yI_X^W zp*GE2bw&kyqN+?VuY`y^!{4U;yc*#VzPeF9d_O;b39n~7(=|Hl5p~d5LQGeDrWV{2 z#a{2%)!dN!B*0s1`uus?Qt)UkW?nMJ^Kq;Qx-G>B4y%IAgX`d!oM^Y%1uwE>v!@krVIsj1u4=B9-6KjRNeu8Cc$+?ZG@m#Km1MIz!_tVAI%<9;yn+- zNT*_@7xa3Gw{}TvhrX|91TqYe->(9+rH5Qf+|pnc*HE#~M=kB4M?gv}>N^#YWMyo^ z*XE6;t>jNH3s{_){k~#)>W(1kvt0tD7Ea2*WkMldUu72|DZ-QE8U2DC8IC#>&rpa& zM{q+7M9P|G7 z8oMMh``k<7p%i}ZW%mMdTY;~N8W^X@fySs~-uPNJLa?^@xKc)^Tpc>(WrjQZojU9{ zcL&s8msP*(%;)pt6EWY`*Tjgtg!=h8FEbRc;qAB&SCJjC)?mTx5wc zXLR|yg(s#h3CffqMVcY>ZEa?gRlYbygdP!Ua(8d2j}m&E&ey_HxYG|!CC$E z?s((}dN%r?SQLPhfAdtUA*9r9lHCH~S(Pl4ZR;vB(^IX@Rez>DJX|3@b<^6ut0t$5 zou;Ukr?U(Rl5@wM(YaqXQoWub$$6O=B!7)ZJ1M~{CgG%>W$Cty6rFVF%x7b)DTfp7 z;>{HrBb^B2!<$(JCPumHlE+`FdEul4r@?nNETzUJ+>#euopw=rWfX7}js>jL891+r zgX&(*$@CqcDmt#$YNq2fH1L8KJ< z>RjG--ag0R&E!pUVw)+|u0frsIy(%o|r1nL=NB9KAOD1y-yLCZGh$mmznPf;Zhe^7Pf8*Si%C}C6!p00kwXeX<6 
ze<-I2IJ|6x4g~bty^ee;8bc;D9$l;uvRf4y)I5pn(ER9-O?9t%_k#?3oY(v#>Uc-x z2aM4UKd9ofLOejrs&+&AKhV4mF7{s$W;7ApEy|dV>qsU=mm5oUJnbQH6CV&V`iNfH zNpz)8Dc2i`O?hTPMXOO`9&>9Kj-5>cif)L{%$ATjo?v87SnfgmO$DMicD-_I2dcCx z!5zb8l9Rq}l>d#^my!W0Ql%h{J%UxOYA4OF*-#anoP6XFV{7|^2rtPb=}JwuQroq*E)D3qaXC@ z5+u)l9OY{Ibsv23d@mToiZ}CM`KWLOngcfmppvgY1=@0?8KYc)O16CfqwdL|f*C|T zlCSFrM9wn}*p?Rb6Fc_N1;_kY%{1JV2%j1%dA1J%|wHR02EX!e3;N$Ao%kF+Dyze+Socw{MYTs zM?t_>MV6_DbQBrujjy|4mwVYWM2`%kf@eaX=xB+!Y=d%L^;1@mmhRuJccV$JUmy(t zz~@t{jAb1BB~4-FHRAC`t0!g{Kjg1JiYEKunM&dm;Nfkz%SUxaNU*AMlTL#}-ze*Z0sQqxT8D1N|k&X{w8Gb0( zc`X#>VW$nb%t89k1Jz~7(G-LW5%V276bD!JS@9J^QRKWQb}&_fYy+@}C%Y_PJ_nNy zP5KIEO>bDw7P>uGoyYAi4*dA-*^r>U%;R6<=T_VJ& zy`X{=-OziEZRYQ)!5o9Xf6aWHCB5SMdAC{Uf{4qwm1wWKIb0Xmi*g5Z=dLM*f-+El zXsE{Sh$lk~eQOF&La?kzqKpOzv6YkBwL`|-1=q>TO)W>neu!`U@kRlZ2;a1In%pkV zJ3rj4&S4a)qthmGu8yN{w%?XB+W)@0;6@DTngF_9)=1>+rb^l?h*h)&lCu?tQJd88 zLe84aKDMttW;M)Urstdk#8m+b+SoEi%dGPZHG@)CLFj)3N?hwq6O6%r0ZQA7deO$^0fF&7o^6^c0!bZFPC7}6YHr~&``R? zRc2)TV=M7Qr~GeS1=pHwLhX(?92ij<1qWVQwZkcnXvk;^gH*=8!I!*d5_bXA;Rki8 z4^8!ci<;sMs}vmq^pK_HnI)y)(>n9n z9arHGm;2{o2%32hB=RfisI1R0R<*>v2Rdu@ZbOw}FSZE^mtL#4;!FB2FU&AfnruK5 z;4o1uDN1Dmz@cy-gs8`r(?zk25V`@)roJoGx&K2$ zXd_B$zs+@^`)CFTNPWK7!BdxGaVQP*WsS=&KkoSKam0a*=r#C2+sJp&-$0F99a?mnHV<_ zBN~Yy*ta@sdc=ahMS6wWDAs_lxh*F95QS>!y-Kg@E6{3Qf0U`15bwoe#6y{5|Gh!x z7rsd8Z9%YJXJJHQprWSsEX5l+%`io2{rsTyZqt9Lnn(3+wPz=Gq&^kp(3F@fT+PbU zchgI1WPK1!XYyHN1%FeTba`!(?oOaGz0fh*o>R>umKKkvFls;M)(^tCIZ=z%0z z9f(oxo4q=Akg00s97m3^rN$@@zXtD+?aNd1;=juB9ywy4XGsCK{itfC_kSN=w81RZ z-b(_bcX|9QYJ9zJX__&8`)lr?I7B&f?UONBgjzL*u^aLWH35Rfeg9rl%(d~vXsjoI z=vDh40p%m`tIz7L$7{NLHy*6mxW$Y?B{uyDqMW59WDLuK3T|9OzdvAk}ppb`&)*gYtC`U(F&Gmi||(&UcoEL>z-oK?S2t!UmvoF+s<7rA#naTWB$IQvxEU{q7$i9ve8n+H(~hR4AT#=GfWFGrnk%xSFDo?V{EP zQ1C%|2WVVL9^rqZQ2ky=tL&;a^^bs2B|v}`TV<}Ik3iX63nX zgSS%we})O{=^jS==VgK)iKq0)4IJbot-<QdSR{a#l!Wh8z7 z$}_&7FE2#p^YE>8MyR7_*U)uL;6La8cPNoajdC;e?8ndq*Y1P4K>R|6@7b!JyvrV_ zyA;4M|9u;R&!J?-0p3}o)XP0p3q|J}GSC#lRU!J~!Y4<-$?Z4aJ?=$qZ7msZ5t+s$ zM}lY9^TRY;88z`|e~s>&SnEw|Q6G_(8TWGeka;?XoATCN0cmI9!OzAj^__~!oDQP- z#+mO0fgW49ZeCu$x0#pu`}%7KUAp*?y1f*&4EPNG9#y(9`31sF4P3(0*@>~ZnxXu< zMneg9W@#1Xyp_{gz(CU3xWZBT!CacNd%Rvu6|xxe4cL^-{oG1}bP@hFwi!9{mgz~} z;Z$7%uAA=hE6ZdL+`Tj+-9^%Rok~DRCG~yNyY4o-RIsShQM$%$hv)+O`Nl5QZHhYw zHU#6HjJpuxXprnYPa=#I+$K*ui%_}}Px~<9YB}B;S;jvl_wRRk#DjPRwQkXr zQ8M7--fT30F4!o;!Vha|@fpzJ{W82aIYv#l>T4-`g<>ZBQ-+vZh7D;+&d{!-zo&kf zD#PZ~{g~%C$61!m;{x8vEwewH3cdIW)hS}VUEJu9-;F-KzP`??WxbCJZXS2Micag~ zypWc;K8Ry+|4(HBMYaM10rS-`J+uAOrFu7KQ82=O=*BYHuJeU!$+-(BUi5FzJlBOO zKfa2rZ_z_xMG&>)DNk*_y6;WJ#ItiY*;E++RjH3a3shwliP>GA)oDV%24>ZkJT-5! 
z+`BfWY4_eucSg+gAE&DuRVI~&RJG3JsAJ{))z6*|H`iX~!xw_DkJ4AFbP-##zG+Q5WQVHAS z@)(2#{I%2$usF1I%$w638w@ITwhWVCVbxPtJQY(p)ocdm4GUQFziCzL$8-sT)N*KQ}rHEGme`BkMpLBRXY@2E~p^m4_+DWzEUmcMT*g|2YnDNIia^Th-D< z3#Q?nh9MTM&i#4S;^E=4gx_Qg=nySE^CiuBcQJ2_U1kLS}zZT!6GNR+o zFC(ZsUjIUm+9A_)Cy#h_elcfi!!4GZt7*C7ck6P61-VUbeghLoUml;ww3Cy{^d$DD zzh?LxYa1`R#BFyhfejc(gh34hq@;K*G%vl0lwUuBS=HRW=9{5s=Y<}Xm9vRCG{XO{ z|MnzBbks;nwUD(*D#vZ1igAQ4mCHJJTz5)|GNARV$n1(YKPh>ay#$|3Jo}&JzJ)`; zNc)?Z(Qzm5%^HK}RBn1Tp`2GX=jsI>19a>)yQ^znFFZ=u&A~&gd~#0qf3Au%!g^If z69=g!Ax6+m@#fQ{{_9gU)=C}mI~X1VdK8*6@38%tlU%XnAY%g8|16w3B;)r3j9YYt zG{ZaZb%zfW_MwXk{5nMrv%9_(1;l4XXC_-u^*6NL0FQj1zdr5@+Oqnx0qd{4LqoeN z5Vh-ws>(ER>tX$z{!&J!)S9l{S!RyU-2XdAR)|?IbG!C5Y!*AkSe?uw_-=q9x;gIb zoI-yVU3@i0isbnFS_1(M#3w71ggBSa5+6*EHEo8(q14q^X;0o5nK(vlE)b*rS49H1 zMJQtB%dOFcpH?Qk4fXHtqP(t(85(g;aw?6!?(rAGWF4z*HtU} zrX?3uSbXW7yJywV*}Ddh@?)bH^-MxT5k-4%(|PON%?nHC3L(``O#Uj~6+1-d=I+CZ zkE^7-3l@g_R^`Y3p26DTDO%MBQP-26P^P z;*niJJ&Y50*TQFk)PTPh8mEs5BTvJD`VP!=$jj9~>Zgvn)5S^3|5TA1wz`IINu2Dnw?hQOH9OA!7{$6nKq;PNkU5A5{gMa`3>MHOI z|Di#EgNv|+L;4Sm&foH1pM<~n-`4*r5x>L#{~W&~{4X^^-gm_Rt;0|LtJ%lM)pvgj z6ek&NS2#GrkN>*iWmRbZz`=>Y$x4W-dBLA_-g@Rr`#`lvC{b`Hc;RBkaLaPqiZ$e2 zDw5j;9ef3bZ_ZAhwrkiQ)4{i7`Stw#J3BihkYk7Tka>q@9UJp+xRR_gMG~yg98qlE z)7#5=-P^9)?Tu)(-@!;DpK!f4oBLEj{!ISIWj=;#7^AHR)Cf31ks=b5)Yv$Y$P!4@ z2zZeqse0E)WAOzv|Mw9I67|J|2#(|bX+wt_BqKnNyUxJ>|Csy_&kzyXJQ9X&#sA{| z_d$;YwGw|~U&zVbi;4)f!9U)jAcS{Bzf(lRjQ5B$vG5XMZ5fiYHboZMIL{AkkY9=ahLF8 zo~TGG$^P(r#|J;}3*LXq@SP1gW!su2aae_vvRUoHc%VLon#Nr1lR&uSUtRr|>J$Qy zV68@s)Y?BesQiDp^+#G#!52RT|50%^(fU8~&r47`b&5&2G<+PR$`f_s6_dhiPUyHO z0n)NR{2#6ths_=h}*?;vTMT}Ar zF8ZR%Ek~tJ#?WESkGGIIZxH_%(U@z%+7tANgC-0*96$A6E&j!Th6YX+Fj$Weh3?$v z$~amWm7S(q94hXG1b^6b^(?a$W`-}PT4l(0?nUt*Gx-k#77rxU#8Hn2rZ5MzvAuDB+y0wsk(A~AdkB5i237|b>-jnwzzNB`838dK#EF?HLy=ytJXF_ znZaQdIDqET=h`9lPrb%a1o;JL24LQ9Wlb{Q5ww@=gg;_)3q&xzJTRWeSjym61JXS5*;Icn&Qom`FGf(km~9qUdD84&KPtC5UaJS$ySSr*%Sc^99%dYIOuW$- zT5O(-Z+1m_HpHJ63(XGF*OPmOXG~G_{de#FF`;?}XpB;d#+IYm=8|7D-}jaiLbpog z)=C!LF?epXxtm(jiZ9pZ;@48edr_?!A2dfGkHsM=hOZ*s&UAO=YmBctdP)}w1-hp} zYWwsUFN0MK76YGY6^N+sVy{1GtMwj6YwEk(ha6XzueA^RtO(ObY?VE;*h zGy4Os9*aG=%q_gr>_PXrs;H_&@`>QTjlswbxkvstN5}&a4=Q*F57d&-5__kSk8Cw1 zlFIM#@-17ab4Gn>!v#7Hs(y6hq?j|)>ASp; zkUSNi$6|q9ZDu?uC6elHt=!T~8|uQGRekKmKgL{pi>cx5@?qMl(?; z8*V2qwhB!#$*ULMf2;Gihlo)yN!(Ah-@evW#7|PuMhUDF+aw6H2qA_oY46kGxN&Z+ zM>>l+?Rt_S%qBf}%F_(&|0M-KdQihX;Fs6zEqsz#0F|BJQqOZACo}V1nC*|628!Fl zfL7$Kdy-J7)2}Hr>O91O1?r4ELCA1QN-BMR$MBaNCuRk4cM(k>b?laYQ51tp3dw(W zk5aU!o*0LKz4(xtXgR)$&tydPH}_T+ZPmydgY=ge6>QQwgGtR8vERr`%6L-^0&HJi zi~8w~$dgEn-kCKm_q!4X1VW$v)r*+u3$sy>)l*ZGP~y4&0ZR^i2tnDA(1S&G{8!$O zTJfwD{#(+9sS__*KS}i60bTA?RFL2`@Uuyy%IN^AY z+w+{0e?*r(Ndd6;(N;o6v~;fVc}S^CCmyXP2HTVRe& zET_S+?C~$gWGT2%`Uyg(8x|&M`2e6 z7VOCr_w7cMj?dU%{wj#?*5u`oXGm&Xgoy-MCqZat!9}eYr2_@xG@XSQ^%xn(bE_sW*S6&9w)91}q5+1`p^iVM~ z{MUrM$KSK(P^9jB?K5smcIPL%Pg-XUqbnilBFWax4wa|2vywb)f zKcsbWhwNxIl4!PVNtVs}N*ePA019fIuu4z|Is+}P6JJQ8uQK*EC`BqIx!rMU^h?bE zUp&718Sg^#=bqVK1CG-8ZOH#U2?P+h6Jtu~COD@4V#<$LaG~pvV4uM4AbM zuDl4(E~^>yz6~I%8c^6I0%`NmLSrnq7+Sl+*w6yDva(GI^2FXC-EU>u zw%k&vqOyK76wf*tFmr3v>3-K4J2&Nox&*oE=u@kRBJCR;S9p4_aOP^!w@ng9O{<%h zfE<@FipS+Q+_trhAKW+jOsXKp42%+lMVBvx`7CjJ51TUri~4sjF5$jLH99xa>NUP# z*hGK7Beh{`Vy{l7|B*s{KwNeA20DJY6Z(j)G3gGlPSBM}cY(F*Z%eEVX(XHWGB-8W z8VhDR$uIHyT;$xy3zwcV-VdZ88W2so?C0;G8}AB{a#LTRo3ZIh)A&(u`&ju)B4zY1 z%)MpEtPTbTvMP4u^!-RkvC*U{Ve2W)#&?&YbrTWhq%I;^iHbZLP_sbdz|)d(DHEu_ zsGUw^+@(C0$u3%YQ&B}Pdx$=FyON^%T1D*8ioBPmdToAiQqqbaozm9?e+C_VG9_ydzBEa`1k69wfPwt3FY-Juq+c ze?J#pSXj^zB6+iWZkouooAyhX$@xxS3C~2oxGpo9ejw`!&82?JMCw#L79IF1W8!jg 
zQF~Dc9xT3}QYz|ymg48KQ&FAbEVo<$7mXlTh27kF^Lu)0`S{a8&cA|$JfegpD-v^6 z?Uz=A1I&row65pkM#!UUa&u!@4Pt@=XvQxBlNH=8`I4)~78YRAG$|UV9!ma5j9T{o z&&A`vr4bFQRJ=dlX`!JUh8VIH&sgGz;y=he+=VZFYLJV@38h_1@NC)6?e3Y7cTpvu zd)T##*Oo_RjPItFGT2z$RCW!bc;#mj2C<51o)jIV-6iifAz9L{Vk^3-!LZf9}>CXUYxX zY?Sg8mOp2V=bWG2el=RbU*4ZEt7K0z&bxloqAjI-hC4h4V$Uzb&H4=iTch>>`kx3lD*T{uO? zRrcsSSUZqab>L^_7EGf_SNS#kNEy>4R$UyPEsx{pd({gzO=}14Z?I{JTC?0$-6h29 zU}ck6@m;KDW@;I-3AwHNN0Rh?S4L?wY?pOv$jIJ_)$}RujcuidpW3^+z6)P`WqCe1 zXQqT|R&|4R=I~;xh>!!`n%;k`WT&w%s^9yWB%3eG`84Ett(RL@n_~NR1RS^GbS9$< z0C4Z$<$XKkAuUTy*t%qj&Bm?;%WdI0k5}^6z%I2fZa7j|3f%@*b*+fFD+F3);`gFz z2jV-0++QxO%;KW{x~|y}od==N%@bP(0-J4PsoAdw69-^tFN5cz(Ci_o*Lim%uj06Ez)f#MS}IX6}!_ zhqlsCAC>fm@n?FHmJeSsNZMDLdc0M;-lDBH2%ZMJ(cL{t*gGsdH!2HtXEB1k^lIcA z|Job>h=`Q)vZd>R__(+C`{_T@%MPaf1H@r ztfr@hudADmm)7plh}Kfa(<>?5J#Ski(TG~Yq6ynb*t5Wna^8DDZ<3r2yY2&&hX;|} z2Osp8GA20P28o2=gOw9IFVs)E-m^=pWhaZPyq-U_+Bh5kun7m4?;TO?ARG?#?>-`B zI)S{{;w2pg;DMxe@Y=8x6^>-2=N^6J*zff~t1Xz>z&b2j%$zJ+Wp78LBX2nEvQoyq z-l@>;$k;|IpjH(us7Rm?y+!yWeoq(ZO+1cxsIVE|$Qc*ywQs4jA(5PK^E8G2jjIfw zc%*Dr=~ut&?Duk%hV*B<*3~Q@Pg%T+1>2>yoR#|Z4)4pg$=vp}a&@Nlq>u73-_bFz zqkm$w94|rEGTBQd42Z-(}uPVUvYDC%7=rsU>OTW_h25s%>{_@oF z=EQ=j!jU*MJp5OuCt-Ntr=O=f*#K_;DrcQSTN@jd0f0$YElMd^Goi>h%l>rjU~;N2 z9&pG)g`Jov@qC`!1G2X>(iE+V*<5X{uG#vXk@S|~5m#v2X70rv>5DyS(i%p4nKP`} zfjzOzPw=Nm_^vX{ zxn3H+KG`~e%PvxwcYgI^sEL)L-6>)exint2m=iIqOnbB@;n?-gV z9U^kG^bBynK=j|FelYRYi(@XS%2l#E!a++=|@C6#^ zF(r7L(+)lJRk*1RytegBKjW>Jr8nIuAX)ts&mDb@Rl#T}SF2#$*gi8;EPE5`d%Nzr zPOsbCX3bvGPIc(l`5f+hG)kXxep#hGkQ}AxTBiSeFPO`yP9i#(A6pM#&gr<|_dZHF zK4_tWH7-Y5X5lAH-%)AOkmb?Je49NYYCERSO$TBGBq*3XG|)D~l(cdPRyV^VgIbh? zJv>0f8qvgFeR8Cr$cb$Wz8|0@S?ttE%=i)T4o}e4kDSLF|Gj{x9f}GEhm3_b!^PuF z_UyGCkici^?rF}$jpNa1b3`4~yI_@o_}HVE9n7Stye^&}O1nqT9U7yKbE$180OY{@ zx0yyV#P==+%Z^@5sNT491>YpJwL8gwpF2~=O-U>N3Tj+vo4s%N5r@r)TUFw@H)ejj zmu}ehMu+FxWKbT{TZQWz7z1upiKL=)vZSymThb6K&9s0*n&i7~N&H2Noo7iH{R$x1MGE{}mE+H1j34<+Gn)c=b<1Ejc!sterzPP*#O7!s_yOo%?{h5ojWJrDb>RF1yx6 zvmq<&N?oR zVx6A#B86?-m^@9@@Gz)+6PDP0XNxE*tcbE(TjeqUhs=BFs zDwlsJjoF{pbkGcTa(I}mei*8lV7F!9=;)aa)K9v0E5p%`#D1mh$~x^3%R0FmwjdV} zp_j97f6W3Xz3!a9Oy;J20d_tm`R?g<)2utRyq;6Mj{09+3}Zf@b)J*IKYH8PIHe(! 
ztOs(82NbW*R!laj@SbZ`vd3wVpGoF%Id0gQ2OO2ruaE7GpP19^#AIQY8Pa@309RG@ zM(Xg~>M*0J1R7Qk8fSNdAdCJ2@@LtS}^*R76>k^u99NAU${nFxst;+dQlUJ_d{#MQh^^Eh3TW3Un>Tfou^q>!(!jFd_ z#idtR{wvnfYMTn-VdvSxQoz=R_oM#p03Il7FMSZvQn;PJP0`cstIX2UQg+AZ&d{M& zL!Q2bBhbgDyDr+g0yMPkx{O*ECPBiBFOBq((mmoE7M$m2XKXO($1ETH-nG>*#ji%D^+aB6z<)P}wwfjdSVQxXcl{Onh$JYDh9mh($nhb7j-n-`wC0LXY z%9DNEDrSpnd?*ssl(nc_<@FDyVc?kR4R}tX1kFo)2mh*bIwsmWzL8c@;1BtE2V{|d zx%atMW&|R6WdY7l@PC)czT*@%J|~L!Rc|rc*Q$)x-2O|h+y0nd!yzu(?beOt%XbM< z87BX)Z$1n&IC%ghQ#Eu;>(^tC=0fUVpA1)dg|)07AA59p%6~1+QmR`ZTjn}_jdgt7 z#q+#7V7(pb&t6;QJjVb0JR+}|7TbZW#k}S-|5m_*yje$EhF4cX@P{F699G*={Ei>X zBUHMM^gZbUv?!l?&XGAPNjZJye`4a)nQr+`pfYlfBEF~S$PBp$gt(4rp=rQPS)&4ZyE6%lLB-;Pr{d9+N z+k^Pt{RI}TDE6EclXx*mWSHeAW0_88xy`CvI|vc*O*=k1Y#MdYaT%Hh2xRpx<8wrp zx|QJ>_q4fIR<2j-wyz4_URT@ddd0;dam*1ho-tK&W9(qB}X1d9E{q!61T z<^NP;vxmxE;hkoy)C*bt+!pcT(duVikLRwQ z(G2rZkr>t{r56Z77#X}N`Fha)jFFkWi9>def!`CYmr`2k%(`?*{`nswu!WQA&Bd#b z1vB4duINYg z5{0l{D)3NX`GMML+*kC71kTnYPJ_ zO38-ze$0{ghn4RJi;gYhlQss^GeHiG$-Fw~f+c)G4GT8%oywu5n%BN+r#tof`PtWg zgRtVWi8A(Zj3UpHk12BHroT$z$||w)0PmW3wl|pKY5bFh+(rsE{@$QqPW=?HKJA+B4}k%C~>|-=E^ybBup#9K~doxH*M5-d8)=I}gWE zG_wW|4(ME8W0XUDCLMHHCBLRlswx>C{bx5pffMP>Xj@kOBVHZ=yZhz(SSl-yjMrpf zYhlTtp1HB9$@VPdR@h0A+O@JmcYT--&+`k=-0x-u>aelaY7t~A7V2cT{^0B~=^E8e zi~FaxX<+Jbg2BvTsVX&s~?CsDI2lL={BQ@CfoUNA_$n$ECKA;Q%i<{`jBt+Y=mqEZ4W9% z5BFZYg-$cLAb)Xb=)GuPjK`ZwM( zHQ)C$SIP5n9quK5#;JRLhq-(|;Mnn}L$}o>^=bm|W0P@Bafj$dn+ye?$$*apl2Mae zXr{;nW!)yvBT0deGRzV_aO=Nh&53tp8^a5m}>A z$S_ng&c_?N-n?$!x!aHCuwI@H>|Q&@j8Za9G_1F%Y|K|9*j+C_AAN`Rnb3*Q+_2nF z`1NqgJ;p3+Z{bH+p9jvSB9<`t&EhRMR`e)@Gi83>c2Tw@gCxn?6ViOn@={eQN!olcG*s+C=+Ukj(&-!Tdr?ee-z%vb8dE<4233uTdo|%@24e zTb&kS7TV8lOnn(CFkS@HT`v%7mp7sp8~w+zK;VM-12{|{(<(a$qlt9gT`X*4={yXG z+nftkH6q*{G6B4EB~vw{0h2kc4&eqpTcD`4+n5H9srhlq)ucx)-Sd3Abqk2kdv1GN<+dpUp$7@w(05oL=6Q#| z_=}a#RO+ZdTL+F1t0hPz74u*4YTA z0rzTT02z!mq23&fr3-Y9D2?U zW(uX&cmp3241%}*#sRGTpDr{;`3pLH2}l}dg?%Pmv6aVyipw>U@5r z@Vo@CoMl8#I3I#K>v)0=l;7Fi3YGI<=ch7h5dRQlh9PE_P67=(%j-V4$)_`OSG;Gl zjz90XZh};>X9BRDe#h>85bMw)Y!)_AJM5~u z+q5Gb@goO$nSR*CnE)De7};aJl14U;xV9f%gJpO%9Fb|HzhI8dqTKIzAuTHb0c`O< z{Xb}Edu1ZGV!!x?U5FVgzTc9E!xFgr3V66}|F~-XGu1sWprlbcdR-LRju18r!f1M4VoLr$~)=k(IA)_2x zU@u=^>b?QrEqqU`3cOFxC$07WEs7(l{d-v|e}WTKaPyApmMxjZ@gUn%)y>@>8FC@c-tT$^oX;<-nUUS8DP0~VhZbvy0qkG2R({90MRU!AR5 zEP$`zf>LwawLsQ^xCror1w+bj2|wk4IzO?Y3@w?lLMX+;QZv#cwZL|pFJ)kTG-lA( zUa|XKJW!OCW8E9XMFX%?_rpT>Av(+&(qD^!;;b0j5OEp0eM;w;q{v`9L7lBWG9)W`t*dy_h9eV?T}nnvoQ;i)_lU{|Z<%E>@8dvcz@e>h;?;3xa!tIGrun|?L>=<@;} z9zaY6K61D(_Gvk-lx+Uh{nW;2Q2*2H?Qg`}V`9r%JDr`gN!7>z8I(bZRFB+E5TszvU-UgoH+i`DvE+ z(M`a8D!?e-+V>?mmuOdyVi6eNwSr{(HklGHEhDm#;#HUL6PYaoFlyd&Rd>LcZ^3k;lQ9PBBn%J(`)Y$Ozd?YtLL0_)tW;Bt*DS?@ z1m*Q&u=a?TvsL&Ql5DUWo9wDQ#<$2Y%%t#Clsc5rV02|dU)e|pn)?x2GTTpPC5F(` ztZZmYS{h;fmpY6IG|e2yga^l_`IAJs@91=3duGYb8>~@S|B3`y8X}R??)FlvuUj)8 z&65f<+dER#s{13qC&zC>eAX9H>E4@c_jqiMn+i;+)G)(FsMoq4SfqEjRl0VOT0%9ei)HsiLhV<-< z=c?L#B7cbs<1x zEw$XmzLSi&I6qkJk>n{n8Uv7fMx*V@`GJo40CiEZ+$3SMIDNBfY)mZ`8P_&d_Nd)uvc}#0MfSFe4 zMTjt=drjx-%i{!Thbm(IbZ?QJ<*c~s%cEIJ zWP`~VBO{BX7h2T$ZeiQ3uL4nFW`QK4*HLwO-kt_=>6t_vC$(Auc_a6PnTja~IhjiJ zRHp4{Fd6;4uEERErADs_6RMrTO0R`5m$>E_D{`~2N0VZbQHqY)I;n4l0&6=`cQSi( zUaFLGTTO&8rUXi)*rTk^$6jWH{mdh~qyb7Sckyy7j3oDQw0W7$C6Zr0>$jb3cs(oT z_O6-h+A2`;d!9A0TXZP>IRYBYp`mDUPzjl(l`I+fFX(sd8h)i@rai8!>LBT`oFY1# zdO4X|5Ae54J{4^eTgeMJP4BKl$*I)cV0o?Re&}8^d^!&-1933pD}UAdc7krTO>{e8 z{p~Uoldd$XP%4>h^A${pW3sJNZ_zmN3-$JD4s?vZZETXbAq+;uMj$Kwc z6RnbN-#2K2;N8PFrU6*wiF}(Zgj6bRx$q=0xma&rCariPE8WvKiiDB5Ye$9jb+pAm zB5hPii+_b~$e1dm$3#2$Zg~Jq(k{O!cP{6~@&-rlr31Y|9cs4e^Y~#L@0cx(dW6SP 
zlJ{MNxp&KDOYt)!jr*Ij-nQ^BJdsI>mV3j|yjnvHo4mPic6s`3UmxUi-z{*pkAMmZ ziQZw6xj?JN`xpgMB#MSxOY^;T(z+!bnkNI#mUyS)g;zhh%vCTSNUuc$15^k0;mc*oi=y&(=tp?|Yf0v`ik+e5Jq(U0Z$-v&1l zP^0s8IVkwg<+e4Tr=YmMrGjR9eYK{Iev>j1zYx(!`Ff4<5;>gECGu4rn^y`bTjJ}) zr5xp!%WeD?ImLSp+qCen;&FmM zs7tt?QV3Gg^fiJzWJIMqhMCYvlDOjv2pnvq^ga{J)Z)5G^;Kf0g*}0ln%ArOlf1%p ziZ(Cd?|jcXbs>d{f8zl9X>l4fBz_bL=K@`Kv+`#_kLL>!Tj?1DVOZ*~H?iE;>k~w4 z&wQ@ldXTf6GQYi4%^0meA7sGEIfA=;**=QFQvuJ~o+^%TQwa-iqZxE(4UYjnYddzpc@j$e<rcDi0W z&yQoTuj?{64?m`ipV157JAYBO7H^1e7;j({K*5y_5fKxsU94|h`^gnWt+lE}n_;f- zlBRLtCT(**&m|dgQf)<&sMy!!*D%=S`j~PQR`*kfZ1FdDz-7s5tc+2TB{ut0xd7Py za#khelz4j5>)tZ8spj4Wjvv8ZKZj<+R+@Q#D&O*SSQq8`ZC70KbT?_^-S1@tjpG0u zd9vp$Qy_M)19+|N{Gi*|*(y6FjDpaAE1FO%l)~?isHc9>CDW9C8D{_DZpR)E{2qesar%rJ+DmG3%j*2;XbCOFQM_$=;Ucma?yG7MF5Ba= zbD}Pj2pUrV5<-MLzc0+WOh1^fK5_QNh|-oN;;ySne*MtGKRsjhjHOXS;1gsLw&9n{ z?O0_WxKU_7;GJc6_nSY|^4F?6-xT*I0Cc$uPRn>qsoUZ0bnk^0$*d>gb-jGfr`oZjE?O)~euX5J=KM?|nv>$m1x*}1#XnIRDO|NAz9o#? zZ{aNmH{Pz)FUzTm45il9tZs6wuG8Z2RsmieEW>9weZDzJEc6iii+hd$)7o&Y5GwZ` zZCL~KtuXuyLg;745lm*}jR2^Ay%hFJP;1%a&WA7td}&?wr^h zW_h6dO>sbv+h&))!{*C#Rb}U|qp2=eYXmhYing>^%_kQusxTwt(ftYx{an7!)bfnd zL_Ob#@_nw?muq5A-D%C8rAcaa>KS?pI>jAx#D(t3l3vHJ4fS^^WR`)j97MSIVN zF>^7XIcyp#e^7`?CL8J!`J|i~OlAI^+n#}ojL$}-h%(4jY*sl5syiJ-8rmI(^ymCd zhe+dD>|1$QEYUu43y^28)_%RCgVvcp%w=SmEwAp-z@jB7mjjx*XY7hXK?7GA1h!Sk z%(O^Mm_;evBfP*!0?a~sB-ChRNLbcQa=|@+B-Momz|dxseXVfn=;;{vx?u3d*c2ZI z$GCfXbw3snZ_-UEPoKZCH3Emn1AJ;|)F*%!I3f=`QGUm3!jx2vA(lre5`p!>BjS4h z!Pkcof#ivCGHp@g(QIB1=K$lOiawtxxJZneDrG6Dz9``f!1CmRP5F|ZM|h^Rn1@hA zAg>m7A;q$bGEQ1Vc!H!ZIvhGd+xe5%8+qzSY$usvl}4ONXB7>WtO_Fay9!w? zG%8W{Mj0NL-N?rFkMe{NZIKaoTia-Dk^2A!$D%opI5I&S4RvH9+7X#(R4H9f`QlZq zal-R+W@`X4T+kb=I|`gFe^uJJhy6K~NAsY)Cx_J0X-}z-L?*I=brp_*vBRGtugg|T z&9pnqFwd3sVN!y+XV^jpblb@wYyr1Wo=e8k;4jxd$P_f4wtzIXJlk`s9oL|CK|cw! 
zPsRfUUtRvJuo5C!UQ-iou9_M5LvH=JP;q651Fri2W)|o{Aiq55vIYm!n6nA~Zimd- z1B0)xA40OOn430gCrt*sye^WoTdkWrio%Lri(m9UTRJ)?*c}C2A93=zg5RXBa!K3V zSpfM-JlE@cnP=nP@d?lzbLT5PT8$GjYfO@sl$tX`Ke8Keg%**b+K68{k_WAVjm!x3 z|82u9ia0GhQKc40^#D1| z?vNf^#M5=R`8*Heb%u(PHSK1o@fH$+EnFM+hKi6#e$JZp?^fvUQ_uiWWwJ5_l6;lX z!qlU3CK}IaOzcL!ROV>SXa60Bm*J0xuaGW<=Us*@+1yT-uDh z**CtgXSWi>XG^6aSI=_h%9d&T^wB5cJ~&6^zK@ljYnxBMHi?7$)o6{HV4)n%R0UiLcD-Vm-=6yr^j)Ojtr^vj38WB&E!q@5YDNv}9c?vu0q+Ed zUH*+(z2s=`dDy4BD=z9zA_;v@*D~)s>vS6I-w&ZpLGFjRH=kk+8tzBjw10wB{}nUg!j<6vUg`G;*&`oywbMqcPju_F%$LA3+j5IayQ04ndUY54 zI*qeD5O;Ioabuz)25)Zz&Tp@B$Cgg2EkqOU*A{eebS5R|s%qu3s^nBhaE1wyunk9Z zs^4Hx%+kkK>I&nW5uGw&obWxxy~q&-))C6uMVlHAxuDAqmCI5Ls*LqnJ%_vBq{r@0 zl+{iN7`q2PUmG6tmprb+2*w+pJGsQK?(5*~PjrujjqV_uU55cTm-#CNZxQZ;C!s=5 ze`uO&%$zOJBpENhcTqd5^J6+56xRpdJqD_+YrN0g1uepE9ub&zUivAXeG_LSZ*p}B zqX8;uhk!$QV5gO|^i!q|vzep?^w)yJx1(_&eos5L$qew{$1T>#k=3k&rzX(vlewo3 zDZWj(QeuUj-EYt)uS@WN;g>)CT@K#28MIhC2|uhmw$N#tT-`dY8F2>ge@Q0R)uN74 zu;W3EU>1C6=49e`xZ^%TI^gkVd%@j}Ev-k;)W^ag!jzl!l-3v23L*ggKR6*gDD zU+x=1;9!}z)Yu$PuWFIXz0X(5#&UiA7ztABe4O;X z*>diCIRU;*80eHQ<(uzc?%ap)OIjHapkBYIqg!haiWqnEnceQtG^pxI(HccePZf<>npU%5u z>-HBp%=;O#iBDBtzv{BMzR@aUrNAFkA0~#NuVUs?ZFhWJ)R?8h%LN|vU-_9JM=pQc zH)oAZzt-GSuJ#yhVp}xZb-Jnh(Hj`lvr&}(;&CNLc#-FjdrJThWq{wys~nW+MCOM~ zHaH8N4jK{{U_hH4YknwbS=F|v%izSuVW{7kGs+I=y?S#1f_uw zxGp@+HmPP=1P_uJF!nle3u|b9bZ4yrpFJ1HV?i=%p7Y-`4D9gDzv+HsSM8|(NNU;J z!OQD+?`Vso7u*BmTO4fdt^5&?YU~h&w}V|U?s5+)4CUIS%gEFnc`z)231=)BSqlur_H?8U2ewp0%r(fT9qpFka0Ht^d2PaUT@Y|&O zvC01)5{NK#k~8Ei2IOfkh~o_TD%wb=vCz4YvZ;?jwMjXxg9(60)7pQxfLntC=(uy` z=V;A#CGq$P;Dh?hhc-p47dP?2OM<_UaK{YEx9q*?;3tPKG<7oO336Ok(vqwLRNxFJ zzv5=1Fsb>8cLu&8pB=T-qr~RfP?9=7q?Szy0o-MDBE$@A_YPPm?v{E(#)drkpkuH; zZ*>6wqEZU)Kv1_U_?s--Nqi`4;{}=)Z3soC% zj}lKLevbqf&RGHv887theaA>p-oTz7b*{ZqAuh%K?Xn!`xCmd9m{Bm7z>kvZiQ^~? z_yV@P%cDi1;`;Gr)KTM7A@N=AMRJTmE@#mO73w_R_`As-u`|kC4W?PqR$>(6E%>+d zf^ESx=G1~y4eyUoU_VVF8@?fr!LH_CX$rNsbiNWnO-NPf$e032-k^u9K|=Vj&cYTv)hQg z?Tz4LTS+n-Mo^&kx9h1PQR3Ol>&=E81@a`L>?g@9Q+8ZAgIhX?t+wWa##mIN-yRLh zbi3^&+sK(%Pq!zA<-~2*;03c($m^3x+N4VFtoiUe%&mbf4@K8mFhxWfJsP@1ggPS4 zoDPWCow#AUD5xl~+$E#iX3>9AaMLQ|V8*drKEZWUN#L!{c1Zy+#6_mk$&*P? zuGz-bTQ=P?#(y(H`YKBjjFM^G{=N^*I8;qkAOo4oN{hW#!(@eX8|{mEOogg6v403)U9>tYZKiehp&X+L6nb&6tR{_V4fS_z{iz#IR1h_csiLR0WRW>L?wtlS|ESACaxa>FoAEWcG5khNzwTRSR^C`3CD|5`SaFS|-d zsA!d?moYv;UHG1^(fp*MkNH&fI08Vr&v756nPl4uu>G5!AczW|n#XsgNSXwTCRjY%0SN6CZNWWy7Y zP#wHRqIFeQuz!z!JFUz@?FmCOXCoRi`(&mPC5iCf zC7SjI(fCgfi~+D-K`$DNBexl1c7D9p@)`_tSizEzkRGJ<}`;H9)oVQ0ziRJT)}`EYt6R;1;u;TkEYeG-Fj%Jn|H)^@m|H8 z?*HTJ9-|`*`nJ(W6FZZRZQGgHX2-T|+s4GUZQGjIb~53_$;o}c>zwmE@3&ri^{(z- zUAuO7)&KgH2HaRGtBm2=^XbW;Fs-Z>yW26TDj@(Aoh-1QSGMsmRJjzwsU#Nv?hWa5bCG7M(6C5(>p^jNZ;f;P2ih+!cfM@^4| z_s1O%rO&TW<6Sms`E~qB5e7e|KxBKCl$T;&LfEn9EA0gQnd*!ofs&sLI#9@bW02t< z4F>g(#`Xmcjtp6IcuEmf$$>3bG*OsL^)px=jU5WuXGW;=lU1IH`JqNGMv1aEBoR&h zP1=Fwzl+k-i>FKPA7v8yO$DyxGyg`EC^0NnIqRbcK|4Y%K?(K5^JcBc-Z>uzLw%&) z_fD_NRnV~s?>E?Y5$Bwk+*p19RVvvo-Re})a~+b7oTzsX>IW~w1D2SYoThl?>%N@Z75G6@>ij7l}VP_@}N}g zQ9{&3X2>M`{N0nvV2PAJP$_gH^&Q4&sZQ&%l!d@xh&>6@K7SH<{2+@o!w}+yQIJCo zrp?m1#Vo#e(c(lYqDE$M^jAMT-gO~#nTzm>MBV?fa9r&pIYb(}voJlvMno>aOo%Cw zKpTeukOk{z2`rAjte6^4Dprtnv^%4ncSJdlO8=%L2lMxUC-=z_B<6Qb?gNNCMZU;462{ zNGakZ-2;yz)Z)|YNrdzRs}9N0LVMrC1$#oWC<^%?@RSx{MH#L+aEJCC;DUph0niif z`Fe6(i47*i(YKIo!tHA_NH^dV!_*h%j ze>z|y@$N~dWW{E7IWR%v?+nCX3#ZTgBn9%3KngkY(GUTboXH~&S1+cKA#(U5CsUhP z`G3#rBSDTZvxfQa0A zR4p}`nVctDEJcW{M^6npqaO4QCuH=iQlm^cR&`ljTA|!InXHSEZJGJ!07>_y33W+N_49qF5!IlvxfRJ7vSp;Z5tWeyL5;55(P)+9B+5gZ#!C3cYJ3iD8@iR+ z@cSrmeWHjubj^6NnVF1UO^P(DdY%S zoITa{lX_6)CUYtcU=ThSccaXzU;RIwBM>=! 
z$SKf2#_)du6(SIhDBrJ#5%>J_e*p@_jDpZ2`&@#3V)g%5zi5Y+DgPZs?Ly0o5jg(o z+$fDVJ0bfIfb#hN1@Ze-LPvz7F3wHm9Z7>oD!d?c`AS_E2b)sIE z*=0e_(@(RsPu_W*)Vf?~Y9zQTXJO|i#WlJBX_)bSULGr0N-GJXEb@Jp(oo}1bZQ+U z4wk0P@gOl~r3fMSw(Tm%=wyPuqtwO)?wty68lBjL!x$APl4AL=9mKS)NaoE*-ttzh z6uXY;p||h$^eu@L#<>Q6dz;(a3lXM$w|NJ>2@95zqtYhNUn-4>t@vN8Vf%#R@N3s4 zA-Y0)e3!Vq!0mfJkr5SacqAxcVfg01pXWQqUqNol+cEfE>!L#cf*S#HtfuhkR1c|0 zyDjm8o}>ozrIqf+^@=fAmDQw7G$XR-*LlE>;)27J39V8%^`G~WNRpRd%Z)lKrKXg& znXcZ(z}6q!7RfcWh*9N7GG5m7si^zEoS6tgeS6ADAL?}BBl$w0Ql})j73Y~qe?)nj zB=Dh3?teVj4H`_CI6DaZHu?llX{$c%e|?R%C}8`U!a-zAB|Y^>Y1 z4pk~jD$XP7WySV2fI6?S`+DXlJ@Ig#$iwz}l)BIiVG5y~dzuouH#8|`;mV)QmSw1S z@k|+asSCBJugQu0Sc71`2lI5hfSQ8@w)8)cRRiCX0^8T_M+eGOn|DXs*Rv^oS8`TX zDVAs`tx`l>GO>9}PoCN~-IRt;b#o40BKsQb!Nrhi?}!R81>w;T@=T(_+X1`iQED}Q z<&Kb8Ly}(*Uxq7EXQA&3+lC%nh@WTNWg;FlL`KxR*!Gnw9yDZT%tjcD!lP0ttxAr> zwnGLx8U6{e-r|6GGA?y405!bA2XlQ?TPoJGCFtPQxidhNZFDQai# zN9S%7Lo^s0JfKnY$xQp~*iGJ>;cKT%B%(uf{~3nJeJbq1>;t#0q%7gS`(`39iR2Za+5PIyFDj&Xy0&u{*Z$E({@D&PDJ0=?k#n*kj z*hO+kJqJ9J;O36$R<4Jx#QT9C?fU-CL&?_IvrAa&W@eHGM#vOE_BX<^2l&B~cB{j3 z4+UqE_PH5c`<4$MoBU_RLTQY&c3!ulBI#kmOXi9=Y4bM|f&4?hb;x`5HCo6Y6UkPs z1)_9QVLX2ulmIxt7z{-R)5hffZ z6nzuIYuFOT2Ts*bo<+C5b&9&3h`B?)zo-+@t$R_{271}i%7V6XZwkJ{6LGjvlQG)u zx0Ht1Y72^b9~HeB5O!#3?@}u!V@O!ZVKj&y%Q=Bz@0ilz!k+fUQ$_Nv%F^2Vza8iJ z|8||;1kWt0*G!ftsVHE-?Z)sGx+J58(TBF?SdhHX({;wng@%pvvjTM4K|Cy)Oy11) zh4Fi2!`T1!6HS%ErIl{jiM5K@Le9wu+~PRTzS}tM$?BBofNR_1Yu(-Pr*_VAi;Qc7i|u8=5V}Yd3e?>_R`1-MKz;ad zQ_&*v#M?OhS$`fMg)D8lA$}69k?oW@bdMf4`C|e(EP}{p@M@oQazn8AT{Ix&i8%Bi1unp6b(YcS)E;$!5dxy6_(KI%(RoTytq zqlqTO9V>!9Q4HSqgu58_&I$a1^sY!hQ^c9ACN(lCP8t$%|8Ob2*(v46FR0+FKK7F6 zq$cttC3luk1f5yOM+aTmqr%rt)pR6?*wmI<^~V?dgI;*!h)A*lQT~DU%$qY+YcRfTgxwo2Q^sR$cg(qn@gg&@$WkI-($|-9$`tW9JwQB! zg^PX6)1-{zoQOoAb%{Eu0NPC>((6$JZtFzL+xvW=x6vc9^<3Mg?&{SgqTCfB1|ig> z`5g5U8i~I{vHlbs9}a+YZ03?R4fg`H1Iq)5f@;MKmo60AmPJv^2tzyyw~(qST7>ON z=QO!hZKS3~C4#j-{#uv_>9~=w&VxW3&`A5(D z*3nT_70_P{{**v~ViBZkx6!SDo0O+FhScm+ zSh%Ir<1BiP*p^bMStSpG@k$)T4)}zHJ;QQ?TvCHd?2Q)MQj9ae2=Gc=N5_>Q;R1+* z#T#scD$!urNWM@KN*bQQQ`e zQO;mr-RkElMK-n!hF}gCK|iB@b_ty1cgtq8^^&ftrirn;0z0D2ck_;Ad4vNIE3%r@ zF2NY_o`3{~gC9Ts#ny*qz|nhQcstfE1B=x7@AI8&l}DO9HWicr%rKm(AqKVpjwh-` zy*41{6Hku&zlVo5meSca>E8@EP1XT(_HV4W5aNS64aO1S$E;5;C%Nt2dOk;=zxh9u zISMU8*&YIjWH~LhhYm(HyXm8GWd5nuXghpc9BxjgvksHL8A>Tg3w)sV`$sVEOG+gG zcrpn5Xv#6%GJKNzUDDy{?v?QHmWOs|COX*!O|m8XQmwu&r>~kW0V=hIfdyy?7B1K= z3*qqOV-4u#53SYG#W<)$!Rl%jjvNRN(&NRIf(lRqcc zpS~=-an?9WII+%9!41WgkCY^XV9U3di=SD9i3-$YR>XZcgc8@FEG!=m|NV?62CssG z3yc$nml?=uK%ET_ASR<+2g}=*-~+eHrO;4qX15_k5>Jpy3F9s2xD&1I`2=;lTy>z! 
zscAQvOl|*O4(})_As<%4ql{CKFWMVLMU4bGE8oH5PqK0t3-WvvE+@Q`8&6-fcF1=O zJjq*ELATL*v`sNaokv9oX2CMQLAsi0brX`1C3Kl6T&dMNGGH?6K&}wbo7OE!5%fo@ z7>GAKqy2@%`FU?U*ZFv{+-AD6jQkE2Dz(uQR zrWwCylk(dSaIJ1&7@#$ZW2kP2$z%vUMu3KHxOey{Z4HpnO{XKfIz%+H@8;_+3~ zm+shjp5{l_5Fw~J8=w3La{!CB6y=@vEcKE<^>#1&xsxLi(CZ6PyG$krg_j0z9W783 z6NUZvbqNv^%`zEDr?jH5Cd~Qa-D|j>&sNf#d#Cq~#8a}uRr2Xx+Rm=FGAY)@ zvej5cr}E*vcbUV*ib{pgnf)DYn({}K+Ca6<*#z1wXOqHY9nLxbSfjwl`Mma@ZQreE z$v?&lhP17>x`)XQ++#4e_qFgAVkIePA32|f-$6&9q+hQ(S)$uR%cb&7w#bTtK0mFB zc0D7is`iZvin|UV&uzzR)~DeSjfBI+1;H{{=Xzpz()RdLqecr2`8w|FS$8e|lNj#oW< zzv_*p_%uJH3B5$&dl> zCpRJQjjaJH_o;rvs=#QUXtqBv6nUBK20hamDNu&%gT0$}GbWCaS!8V}0>ZcsQ2^Q~ z4{$n@N-tgE$f(NM#^0-jVb^$Ez_%Z|XHNGnjym_qlMi-HQ`(isFJL29)1S*7DpWyM z!MTvJ+|x@fy(>c@H{My|I;=4dczmtZb3@|)7d~#CF;6PAZiM#7xfyBze4j2KODz^~J>bF8|QK*GV7H(#Z@;)4lgeO>ZZT~7gD zroccgz>}O}o%L!S5UNkdg*Wl{ap!$x%k6LXb-G!u1C5$oQp!#o1MR=$1Pk@kDtvAj z{Df5BrzF4Eh>d!u(sSh~sYfsJ`Q>HRYo48tn$EvHZ-*x?YQidtkF`YU`X=*@%w$(f zjEOBvsJ#9zkv2-qu(6QEUA-t)dHaun0Ut;p@SbS0s9qX&`HOm8ZTX|?Aox`ARNyHm7a)F|{nk!mCL|E* zl)neSO^ZeW+S6rO5O`iimh-FZ;-Y0aBW^@ya?s=nRPI{K^Yk7jEmP_hd1%M3jo^(0 z@`fzcny6julv|}Ay>51iIINasT3x5H)n>c$7{f_FXdbvoLi`Cg`w_azSE%WxpiZG3 z|5Xi55gqPb=x9(!_wdJ$ke@WkK3fF?J}ep==OT*c-ryX;Xc*-9M)@lfdi_#7TbZm9R9geN;2atRy=s#V(lJxzDlwEXfPFYwn&(}CRVG(17^wQx zWI=4Il(EDy1cK{H*udU)FGC?}+MKz>s|?<+W9l-%{BXM*$69{P+-OiBJRt*NAS9x;F;f%-$SQk0 zknVNPYqJe$7)q=Y zl=P_&V?8)kSz{&(1Jy<2g458jyM0YEpOPh57FkO6^aZ)*?8s{wRv?Cy_*bz_`qHRG zYI2baoO0F8$JXa6g3Cg+sM2qM{R0sL_(wAEtg4r$_=wJf)4nj(^*Pk4l z1~gFfET*mXXcgvci;`2P9iOL;iw2R_=H~RHkPxDxs!C-z%(H^k4o&tSeps`LtX0`K ztRB8sYi3)PvfGzU?|02Bn|>dDpOtSPJ6}Fe`_c^StIS0W#B~>9rFXT8UV}nZsF7RH z5@ozcR5>JpD>mG3SAM<1Or*qUulV&mZ0x#i?t*Z4I0IEH(`ihDZ2Hv;e&@l35W9oK ztE~u25%%ZiD+Ov;%!rIL0s}T+YC-_M0|2(%?9O*9QIzJ|!hs0bR1a-4E1p860w?OlvDXEb394Y#cTWsNv-AB-0$_DfM zPDaZ~>Yr!h^=&vxJ{5FQ^5@}&5Q~$wx^F339p2?$pE$FfKQdOZMK3*<#QHZzC|~KM zBpB==`K#tyD3_OLBbEVCCo7Rb4`fg44#LK_`aBcH}?L zMs2S;T-FV=sqq%WJUd0kO3Ue0Rvv_x*lYxe4=D=DKTVB*;BeH<1bdP@g*7qonm&)6 zd@(rs{YY2YTsYaQ?F07qr+uJL-zhUEzq?Yus`r)Ed04;zo26N3%%LK&FQuM6)@5Qm z(FnO{n`^V|y-!jzPLwq9bn{0yGcWZ1`vIZgc2T>M0@p}uguQa4yBq|F^TQc2I()K> zoLa4}OS}wuL9^9L`R$ZSkQx-qr@AMO{Rs zO=F>WY)33kd?WS7wsx;U5Oz1Y2%9f?^DeZRejl^v^*a}b7DRJ#^HQ~X&!+53-d^_4 zIluKR4@Rr+ z4PerXc_p`fuR3g2bWjkH_`Aku+8*of&^+|nJkJj$X})V@qb@Hu-aX)0kr&swri?~o zEenkE%oTb{fA-8lD~q2X7${U5K~BQ|am`-it^4vvuxDAUN5bbcXa1)5E|<$TznQJp zfH;_5#iP-I#d)H{B2c3Y3Fb;aS}tpOCQTg0rQIcu?k#6tzrC&$lYGb()eL>g%xaMk zC$1VTk>`n}Np|=6!|Tq!?8LG?-((h>;oQPHal2~EkI7)q>r0KsP`z%?>6>g4xHZV; zTv2K?v_jdEfjs5sEN-6(JU;KC<_L4zmDH!u&cBY0nVHo(`2>$p0sgF6+%8h7tu$hm zx$gT89BLaqO`y*GDH;Cnd1ZofXcvS)ns!f=eQ^=K^oLUUOj~e{ATHy&^75I9wGyW( z4Ek^p2`GQ-h2U8*>+392{&?6VdX#r*GZPdQp7Z#}UTGWSnXI*j#p))SE8Oot*={6r>{KZZO&X?25^vA?Ci?maQWONZ`A(lBS!L=T!aG9-&}>m_`AyluLR%w z$^zoe*2fZyqpmO~X*C?v%4#PXyo4GE>6}ic9m1Ae8KhU*olZI)ah5mcvC!p?N?^+U zblcBkvpKJ4EGM-JtF3@ObwPp~`>QaXEQp(PuABYBVbuiF>A%!Y1dux9G%VA2wHCjA zpYl?sR{ooD9p~Ab$>m`nUN+HkkHTPZr9n6AIQJhWGcjN`GYlQ$ZL%mF$pgiStLyj| z@yN8DwiGJ$yHE~pi3=jY`*-{`seGg9%;19O z)JLKX_YXZ7;Pr?mwbHMKEqZFA;=K=-S{14O(&RV=SvO-lIt5hfQ?!eJ&giJqA6oV6rzNzn*oep^9Q>pVVlKp7%}d3K0Wq( zU?@N7fGUXSsFc7AF^zL4cs4w|2JgxpdzlbV>z^}CQTN!5aRmH( z^m^Tfp^(NxSpX>x9!SFW2afea$vroq(GR!+bQ+*~cD+Dp48!8uqbG_&ohdv|e#3%k zCM$n)jYg@5MJxaT;S=}$#?aI|O`Z^5J$N>^*W77QL*wD!^@?2fh~eu!)im5?0q^4! 
zi^}iy0$56T%w8(EM;Wc6oz^n>Syey0g-QEp;z*4-CWLu*I&BmE>YYC^T?arryoy!} zY^UXKO5-~mW*401uwOBA>vuO^di{`|v~0#`*+q5nMf)gNmHV{w<)6U0j(Jahbv~9r z0+f$AC!hh=DDc>+wllVnMxj`Yy3+hH-ir{_n^MEGPg;5HrM!!FdD#Wt>68@M(!;-h&$}Msml) zu1AI>LeE(om58Z`AE;ID38lkaNRyJC0%EW)qV>Spv z{UTHf7ChzqaLzQc&+8{h*~ww(amHD$b(`gTi*^y1Ar?e@uh8l!`?V?sFSy)9^)tqP zPDg3<#e7mc<0q1p{zVha!~HDZioW9lExxo%Xmmt_wAQ6v&r2v|rrgu+NSO7&y5Eie zTdUqjCPE}qcv*wc;rkj8PqYtvily@Yy!(ySoz`BS@1@9Usi=Q>_qszi8t?`LhaQTiESn%qr`WNA;Y6T zaQtp)i()g0II2=pv$|%zMo;mVog365a9uRf!t6gV63VjKH~67d42sVWjk8e`FPE|q z4ixVspZ0m%f@A33{Z0|v*uV(!B%+BNn_~=>z&EnB8?k}dGdBE_1aw}Azp<8 z%(VVMyUOO}h`?kTyH|o_*uLC+Nxx9R{D6RaLTu&%=*3k|(Xg2-^V6#HeLN^XlE^X?4gr+_SF;H2mk(Vn0;LO(mi0py+=7qBEWumoWr@&M{|!l z5>MhX(0i0>8k+t#>Wp(uaP2SE1?r8USBb_0B|d*WO(C;iG_P`9#=>>R;BZ(v@E(5C zhj^{>+%6SgE0zv!B2oFXDlKL?9-0u&F(^rLbd0eP;nr`)l_RD1=T<%~>T89rUK%3j z26Tq%ZCY9dSC)XGKxbVzjP+NPqjs5-`n7>38_NWobB~i)otC*4CK=JSUccT>X}B(N z0-qtkk6nQsQ-e6UEId*6U|f+@1}$5-&358dYa?tHhIxeBzC)&5dJ8B*RTxrxl;lgH`jaXCxWkoFUt{Fz=jsz>sU6A z(~PTmUEkH}HeJ_BJ4LRi5?sP?=+$H8mF<0)mP+buRZRWc@G=M7`*JGaf(1^5*TSmP z8R$A1o9#*)$`vN+mb8Ayk>l52t|^u8rTu4&%|Iq zb$so0?eb?)%v~|HMMkffp70gdsV$$|nCT^Whr>8#pYIWL#M?%{aoU()}rHa z{10~Sw&#TnjvA&GVL|5-0L!lDsdJ~9Te03ot`GBOy|NbTtZ-9DlhYnIEMyQeLBN#= zlK}G%*At@d?^^9;^9QaH^)M&4lpC#`n1YH%dZO*4OrF|136pgimXM5N_BmDRmQ$E;8YUSZ ziwI+6n4iSbCJ4EPXeJb3ZfIaoU}|qFjjiPIxv3$$+h|jFSe&2D?&$?YX+VwuxWY*gF0o`a!WI6r*WpSU%xrwQHp@?Tg zfSqJkR>*b~Fqrh+iG!b{UtKbYMz@5!GwWBYmYQ^SD2iOhaBQ;{2BgEcdllTr!g|Vv(49Qs}au%zS zjY8BlTrPtADU663&ljgTlW(zE(P*Jh7_!y+L4dqhqRb3UtJkwhKKxWPX?yl}mfs>1 zy{jn5#d=Ajwk2|ujtywk{`C_TMdbwuD>**Jb6HltyA&3DA z#qDujo#A}wE~B|{1r{Zt@XC4(*VyDYXJ`XvTCi5!@nw7g+*n`FvnfEh_{&`RgfC0@ z_JkMzX^j_^7WGs3+x=2A7YKn@V4wNAR!qlAP8c27j80V#n zmFtY-dtI#>+T#8_TmTz|0?udPWPGMgau+Qc{3%9I-$Mu1r8RV%B+s`TykeSk`5g|k zo)=k*?to?iXl3n%xC^N8Cq2fuFqv&@XRg-i8thprV4b-wp19bw)^i5YPR$p+eKkJi z2)g2_<(Lhq_Iugs^?M=5;*-Mvf{`kch#+RWQ<^O+6-VLGbLGGGUfWtdi#T0{jw!lM zGmTP`E;tpD2O|{lClr9PgC6(^?H~L^2ulfa`04l6^|JkK6~O)wjwGcH@ci53^(Tkd zV}@ToXR5QhgPT^nZZ3&-66Z0Odrrn;Tlo#-(4U4S>lMJ~?OLg^ZZ09PYg52=c6A~pDD z7JDKNPuu_ReEyF=YSh2$EdsB5V%A936gxZe%=j?%Hqi8oWjpxA+R8&r-o}UNV#k2; z`38$>3QP4JZi}o2W-uSb%N;wAu#o3ye`q6u^m_MO!#IQy5PEd)n+l_Yk;o90Ut+gDi9OBbYFSn+Bp`L~ zG@S)(=|QiW-1%a;2IhY0$!pOp+U)amvN(}v5~3zTOS_5bcO|Ncv_-c1gl<{=_g`Oc z=^uxA@^U83wrTc8=%m7_?_pL8oaSSMe-dCA`m!Zyxg&kSNps?B2DsyDh7qhbpKteb zwK?0q1l*6;dSL~8FUMg=P##_t$Ind?VeFu92zc$)D`|gq4g6H=)5L1}w*vbkiMZ;- z=7%#BOrOv_%4?7vmnfN);c^$LWTV}$Xmz39V(m1{M#pwx8ea8Mhco>jE|m{oEVH~u zI*~FKGDu|Hn*WLj{0i+3$4^jtlmN4nq9CRV4x`nMtOsMig@ck0el6t5?a;H#YAY=m znq!`WzDUbvu#QK!yW9ie;^?$mgDa8bTk8xl-n*Zs9#Ihq17#(v7bBqQCxjXbhBRzV zxQZ(w7p64sx)Yj_1C&!L69YF4#2}yRyeVGvMMbt22qrj}RZ~3=u+<5J$-&k66qeDf={CPX^tWNEUfcdb+DHUvzx8l{%< zSoV!F_De&Oki(P)-bQ*{pOH|H9M|0(U@l8=0S+rVZeB09Y*h7`wmAmHp*LXq`D1C4 zB$o=VJBlyRNL47b8ohUSl?EyPpgrcxeBP9L>oejw^YN2E>#*y+gBcC~y(Zn0R8Y6mUm`~7>K(GX{T3}(i755n1qZ?O-fL*f*IySh@>Op?(}YbZ_PX1*tF0=e zwTi*x0j4&cxptpcm{?TVqienv<$|sV^|5z4IA&7ySHSucQ-oR$1oXd!d? 
z8^ljfD$sQ%gNr5_=vCKebgSNSDJ!3)lJ$~%H2N(mYI3Qn9znEN!gK2)KbxV0NJAt+ z(yrm|B~Kn%?-A1X1a=oe#Gdj^;8oK2ALaQ&C$YWO)}~T}4uDLeXZWKad1iAdUr27b ztZ0)=Zo-2LF5o6TI>i7n)ro(sE{KDOrUu)!U$HYhf+>OEg5Z*)n;q^zB7kn-@i@bB zjGsq012@Spoais#292(H%&lQ7Yq@}7CcYpq*=}1{B@+V^QV>3y9SBK9r#s^)GT~KgfYG*l6Y1t4M&p$ zGjXmV1Rtm32H%qyN)x|+r2WHU?QM5XclDHqQ56;m`tUL_$RHq}ygKGd^Z*GU0XrQF z+WGoKN|bN-UXkusm9dPs!!gPO>AJp-+fF}3UcV6UxW9057S%@oc3y!-u?r}G6)D{f#+zB#8TnsFVy5ok@R{ZeAg zvwH)n;gs-|m8wXG49-u)a^@t2{m1{-Zrd-H&T6Pzs^L@2hF+)HHHLh|ij9fKfNjVIEzBHbdQJ!3RuB}6Q4wm$_gTaujRrSAGl zB0Q(hE2_0&)OBqsC-p?2kq6fwjBIGGi#DLxOLwka4iP+VQs(t2rQRfvh`$`7I_EqV zKl|IUQyVO{9i?*Zg<`EtnuxwizK3~%Ngrtyg*ljk>Vuz8lN5l`Q}Dm3ouol^Cm-;5UezWIozDv+ngmS( zIvR#b!Z4x&n7oG-s%WDM<~+@To+<#uJ3>Ztt+UTkRFG$_JsK$i6vm*zeV8{;>N;;G z@b@b_KO<3g-5}@H@O*=PRWxBO&@xyX34_vhvfu@G__8!0z0?5sUZpom9<8X|2GFLz zTURLBs9*s;p7WUjOuMdyIkWS;Led}+giM-eVf^Jw`HmKW-`VA+|g_U{G=6u z^;*@NBgD!ADiBEwWweNsf1Sp_2rig9Vh0PbYF@w8Y97u&CrxS_Cozy%7}{#VDK&7@ z?FKCKdA5-vwd7(90d!D!l=cq%%4tSMF5cqd+qBKA*L5*&$CQDrprRjJKw`%MLyfZ_?QD|@P@A$P&RhfU*ypg!oG4I%<@&hPnA?9uYAH#>HJ#6*`dU7+0tvg76H3O9 zEtXosH*h2fsajQ*>cqDU!)hL-sT_M00&u;f;bPXFyIC?5rfB%ULAJ`3iS@Tg@AXg( zdy1%EHa2v~JEigrPs5{{Vp{YxM|>}J^RGIc{LHJtu{Q*)ChsmuOFzE`W|YLDpK9)K zjwEWZL5;xr4vYd+f6jp{O-9N|MRc&#sIrwdYWp*>p4REgWpFap8OOr=W7!|{DZ#|K zB}BK~j$D|8>=4I4w!GkdZza{q2e7j!2sU|();2RvkLW(BIOH!gOJ@JpdV2)&QFp_< z^(s-Cg1oTBj@jPf&!A$X&dDqB`*Q0!HLic!(Q6%#gDuziWT@crI+I3Ya!)x1|J&f;iODI`+P;h)++pe%!o)_8K3`SCgmCIrXOQYV7kBtHM^* zj3JMJO#a&}&As)57-CuTmjg-CZNNQoDZ2_?8wrShxn3j=lFs?5WsX+T7X%%lcdh;cd;Kmkd7M$lwLPn!s&juH>v=%PEA=%3YRu$`Z~2uYm6j>4%smbV zIF+X=Of<{Es^b#3?3e*5v7#Fqi&sVCEfu`K`lR(SUUb$M5Y%u!0j24Z!EBP~eYz&& z0p#ft582CP-_Yzd8l}I7<{Pq^dZG?%Foa6$iMSlII8CXXz>?^tJeUO4;H-2M9&BSW z8f>EabBQRrcA78ml8TL{^lqPjU@>3h1GTBBY0*q;TX#VveR}vW34)m;Y~|SMNWdR| zlIR*Oc{ssrjN$?G06l&BcpPTIB3k1;ZeIQd_9OCoH$zoMyO}c~pxDXW1Vnx#IZpT= zy5HiLaU-Tb!Nqaw^%qc`LHge}T>Q&GAB%tk68N@YDlU;qko(wZb%*CpqFO&5(F1`} ztG+8~md>^01Ojubw`rW?5{WMBO2_AXGBB4F5-yr(qDJe-sKA4O?2jQ1Bt)bG8QMiR zwYKq0U>0d}_mI6^8(-C27U`i$#`yH1)$6_;$;?=)ftR;3T~ro2KhCs+ZQb&$9Q45Buu`vZk*QPw014IY0a5 zn{x`sW8v(+8p}d(5KQo&%5#^P6b4^>TlXd@=SG$Iz+oe{QC5XD7af#{B5%+M8%s7V zRibC`j;jI;!h@_T;N>Y~7b^0*dLa5*COrrV^pw2ZBBsOwLle@RMRkGQDULE6r-XT3 z|6v)e?q%Lk>jSC2+!0Dxj8r%2+<0p~@=29vMIfg?_F+Vw>y4YMEQCoU8`30MWR$74 z+u);S>KPueT9-iuVdlMam07N`B`d{;A=`_ohjFag%#?kMj zZl9k1b>jCtt7M8Z@EU_Nm=wp4A}7HJCPnah?RJG3S3H~&vE;h0z8CdYWNmsN^OMsw zYFZ20VThv-f>FlkjN>%q>tY}T4lgVMhR;wPdl%<4HU*z(5;EY;n9H!;X7U(m0F16a zC*IGBpdC@$>&hj=T(+gS%xy=vA$C^fy=FbW;m>Eh-8eOrJ_P)h#|a(~_l9Hg({3zq zyRb8L2J#=04g|SI%SNYJt_MtIc>fP@wPsM6hBHXQpxV6ADvt7O76macCf_|_a>;0? 
zpA@&mIEM_oATjrC5VNa9U>0(dKZXR){$24S10vYJXR=aEEQ)@Id4x8nz{8iRP*|$b z3-nToPW-a@-sX&*0JDYKZz3{Pl`frtXRoI)cU5L2IDH)DBbxdjsj`0H9~^V#rfAE{ z3WAM`q9eZu6)ge>;O7EF)*P&u~ z|DOW>5sZ%db*Y;5HNx^$mXJ;7gDu3Zl9O-y;;$3S9NeS74&i(` z({f(>U}q8xa>sOn#!)8cUmS9X&33A+ha;dW^*#3j-6I_WkQnaJe9IHwb?Grc-p?#X z=PL4Ze!432A0R@+8#tX))@GBAyPnLX*FmJ>{i*I_1!!v1^Z+V_f|} zCJvJ<$W2`fGieqkFKTt?VnV^C0ZdUGDRF@u4SrM8K1c-C>!dVqcQ-?NIq@>rXvf=L zZfV|2s{1KYqb3EBcD((svPVwTB3dz8udOj4q`tT6ll4sp75eQ|g@}&+w}Jf3+-+N> zx43d@(o@qVs^k|}JWWfhrf%LXdIc%Qf#f64EsY&ecAc|Dm7x9Vi&_(>5Vv{lnHfGE z1)>BgDT+I|skafTx!(-Ehea+r2E6>}Qm%DTan5_G-*b@CZ6ReN&F)>~kDOtj%JxVz zu1yx<@L%dBMlB&n5X!uPNo(k^F=Pf{pGZ^QiVaie^~a!9W8k2(C77ly(BeON{3rz> zDqqxI_E^TE!?%2&rn?h-DyN^4WKZDDhl3(AJd=vl6*4F_r-k!Eu4ne8#P!@~@GDnTnFbk?+xOy0=bhnBX33GnZI8bFcJlmC(l~)h3 z4dNgmhLuOktquo}bSOLLCS)L_d?}nkEIcL=2*aRreJe@DWb4y^IrBUQlTtGgnjDO4 zn?%8}o9#wlk|}fK2Z`qQ>P`bxxezDz@w#YDve49Pn9ew%e(n%1>}CED1|)3Vb^gEv zuRDu#dPrZNLZSC(e|LiclzcwSzIQ)FJ3T3QLThLvCp^-X^?st&kS|aby<=>h@WUYkl&n#%REt818kw z*u0+lDV0;Y_p>^FHmTW5y;AtIPeUih*RNy%7_M&Vbz4%MNO&CCgjF1SIW5L43Px@% zKA3=Nea14_6Y^8va_Jw9q|vbEu|IQ7%Ehcx^6pQw`&4{ zu3HB=HH#jYe84~h7ijS9pg(8Z`aK!>nlg@rK!lG*=?!B+vWqOu3Y+`cFB=kZU=cw5 z(}8=EshSZ)Zc0AA!L;g+F8i`QImP?fAyj;b9m|=5v-v$GnzJP2Z|r38kmOvXZ6MxI z)iDz7fq&%F&d#nUBIcsr2PHCDj?oJsw+S|YYC4fEfX4z`zkQej30oLRVBPjb7?Gf# z9LXkHA?Nw9Mx&cvCX00&4G}wF9}@ok-HacOq3gMziVx>X*?knx&E0SqD|Src8Sg1P z)2yLL!$R{9#sl1=(7y#cYi@{SVzVv!c}g4l$iqPUM3mRS>lVU(5ZiLwS8h4-gt$4J z3$Vu_9oUY83z>GgVw`A~s)cV!ZwWK5sE|*c!6O2CH}^hlQwktJUgxhI!FWeguG5$?v<681>fXN_1amUUZIYiC37t3VI9LyoS+ zBq`U{sKpA$L$zqALqShR$oP97&7_a>a~p$@&AJ<{VZ6dz5VAM=v@zN=iahFWaQ=Mh za!`?wZv3bXm3b7Un|@%0bDHz5OgI+iX${}Z-0tR{SgMtQE8cWsBt$<%5u5cR0(UDC&=; zjIf9p06|XLu0QGv(1b)h;~ro;&Z3#sO5sb8Vl`yr2Rn{g(qn0YGJES7CKNifMd`CV z%MzCTWAq$Qgdf!jfnuyv0O2uXb>xwR6o3K?U2aHD3SS#z1mJif03gVXGvONepx#-( zcGPkrAdKU1Sh{E9$@c1MFUwL#pV=vz)2?;v*J@+Td#>w<;UmKhHwf_OaxRjw8QFv8_5JMQv+JrT!NLIK0AEqus2X_-MZBlv!TO=*#!mb`#$vnCr6;H zPf7^_UR7O#7Zf_1n!GDj;|^Z)m{q?dMdRVZ*)HZu#%rzGfz>^Zoqn3!(NEPX8`lQL z?dA`*XuQfg3J}pB1wtb%G|`F}Wd^c=@tE2xE$BXGNg1WcvNXHCp$kW)AyeReTm4L_ z%kx41W3fFW`GkxDxFdJ)V|???H@%5yz~H#F&e&q($RYY!esw#w*$r@k&nGT70A(zU zfHS}+{_3k&tr1`D@#+E_2;Y#E@3n=msV|PYUtqx2vLG?3ohJEp_uY4Ew{&}N_G0vIb#PKRfqdsHI@JBeSzC(_f0n{yOHPnaf z+3-#(jT>BNzx9&|?g32z=wJL?Et|wAEb~V3en`A8;o*3&f4IoFpq54Cn zBkR~$#*s*jb>!6{@iPYlv?vG*KEQTF@BIfn-dv z5s1x_&<5)$ZSY`kV89n@ObcQ|kPfs7y5eWNPvedkpfKshADfAE&z`+QuRc93SD7?n zsYd2)c>jH$KEm#NfF3^I$Ul!IiXi0cqLR4w7?hzweT81*=sZ^4aL>RWmgRh0EKUfB z0_-?%6oUnTkqw9dHI9=4Fftir95Y@{YU0LGSO7{azgXbC(N|-QafsEPzA(?o?F0El z!{HtD6tA@Yuyj5i!HaM-zsLfbAugAhhQ50`J%5G^essPMyFC8ufz z0{0n^416{)U}54uRygvN8uTEf?z-iHrP~fzW3kjsnL0VV`pT;g zwt9QD9p6z7oZ*Li!|K(m4K#31z_N?71DGvcniBwg*RxA3X*hH*)lSf{qsJ&t&(KwZ zikWG-5M^iN2rLqPq(u4)7`~(Y=4t61;ZGenGKn0RF{9IPC`VAPU9-mE2`f^CCN20h zvYVf!7OZ5@ake&EJpcT2+U32(pc`<7A0S8C%$_~VR@+u=a8v%(+qQX@&$4{CZk>I0 zBnpBfO#rP}fe_eM0*Y^bXrOK7+FN=L9_uARiB%E{BMJk)hUZqUT4f(S6f3iKEbRRR z7!cC$zNeE4q!bNM*RHj&2H267F~kH8z=e+!XM+xszvWRWWhO;zdO$H_1t2by3H(M; zh`25QO6fz+H^eOg#o1NXQT%4{fn&!io7{L!B-XXlX9T!ZHscs&fZ{pVIw}?XuTVOS za{w$r1=io<}+Ks<7tWzG0GbVrl?~^s3c-a#x7iE(leQ98;{7vpmfx!>F^0SqW zKjL^Ac9FA_8Q-!_0&(obr=D2uXgibKiWb>R9ZEanB7ZYU4olLYz=OB64XXrpplVcBEQa;}^MqlS=v$6x6axdcdD zxpIa0^`uV?Vd6tSa2f(Y8-7HV5Rk3NX8s5)nb*Q!UOAFAR=^%tg`V2^TrL0d#cwRp zSkKvqb*(JHOr)`TA(KTGAKAp@Yu!3+WYZ*cfaDyjNIQ4#vRf1~6+ks^>^LnW57TPN zYV+5W$&(F)7A?}!t;%Ea7B|U6_H*S*KSUSm55GI;oAEVd@DRzfcf47TV}H;UoaxCV z8u`xKf;Mm7>Qg9^>jknFAU1tEZ#wJfxENK`6&gcW?P4JwOQ}E=?#Y#Chc7X59oac% z%xGJQFm+yD$YPJt=#<cdT4Cq8wV!a+JBM!!cci(y2dS|CV=Z1}&EaylJ#S*vH 
z&FsqgbwvYDx|t;tlp_o(-uKcS0)@e04$;8i8H$;B_~@a4nKf}w4;>8!=->Lqpkvd; zV~;)NSz{Y5?;%K00GuJmK!4zY2hGqN-!Myp9%23bv(NkWVpvo-h#E^5LdZ)<8pNkH zu*jR`bXeCuD7p_AC2+r0tM-1iCrgM}%^v^rpDl!l&(h??NfT^Qtlm}~7B5`3Xp%lbM*8yKFmW;829X*Dhl6?FGh)@s+ z5tfzMpuvDPAeIurxxIb+j;Nq#e-oI#x3dFuw7I8?2<42v7pf zd3yy*itJpUHZ>mB&hMGC)X!?ye6{D<=bm>t%+&9_cdmpoO1XzikV6 zM(SvQVh>H+i0e%hsx$qH>qjqHcUaQJ^4FQu(7~@MmVHJ6vPysNVU>a`S`Vc`*D{le*XB>j@YmFKD}6lTVgtyQw{01WFlm zh>=ST;4px3Z?$db&W{B4UQt`y`8c&)Rh9k4nT+A1h>T!@@r(cTpR{B7Mfu&nX6q=H z&D(ChEzG`Uw#NP-(}zR+@408L+YEnW6=hQ$)~j#n^}kqkkc(_MVS>sgt9GIpKz!z` znU)2wzWj=Lg;#gdkJnyzt^46cO>m!l@=3WS9Cx0?pQEw~j7MHK+eYpK903IV*r3F6 zHQmEX3=Yl!j^4Y?L$eQ6Ob89E^fN%M(=3Gk_O+N)q&~fG|{53 zSp5KKvDuSn@C5*nGf{cz!Ms-&h(ALZpyP9}99FiIJA>-V5-apzL(RdP4*LA(C!cs) zdZfKi2;gi@bP4Vr0C_Bfk38~SdGt*I9Xk1Py{AJwq$_ z?Km`00lMLf&Zaz;xfd;3XzKiX9E(u*&I$A0&iCvO8aE^nMMLvs0g z|7N9(W4l}d)dSgseoQM}l!e<6Jb^yC>yA5h%F5!fWXTerbb!u+x8{ChZWu9axOkSA zTL`de96;k6bP52I_otuvt4~t^F#qzu{L(8!Om^qYnQbG^y!rF2N7y*X1~&{KRE9B$ zk1Uqdw#pB`bNV*w=BUe=3HUeY*<+7Qz5_f)IQR3>fBQEX6way5*UGo{9_wk^l|_!1 z*(VbdbODp17V1ZgA%FjiUxX)~c)~{{wiX~pXE70j9=M0Wr=%&DU!Baq#y?ufG5`e} zif0Gh;CJy~x`#C(-7mdnGXUV|^9TS3M8&Uu_mTu$W%%|tXNQgwQS#c-JDy=+Erlo& z@5oMrlEy_)!Dqgyx|_N_4q>-P?P3729F0E~=Xk{($vR+JZNkdL@Kr6ZLvBRh0>A&Kr4XXbUz0wlvDdn<+fpv-5HoGwahb+Z0bjO3?Kk-i@#W>j=aZOOCb`X)Q4q;)AEgE84jN%gixF%f2u*; zT{Hbn;kW<%FJ2P=mcT|k`GeKdeu@o11>1 zjaVLKvo`o)a{|_ZHf`crcAQK=u>X$Y)(Ofh1%RM84GFNY``G?D>O*GJY)qgmCI&d2 zb2cgh6Uz`*T>N`DIKEPXuC*o`2pN_e@jHS}1kRyk_c}_DI%7rX-d%o$qBVg{VD>u` zA7Vx1aDRj-G@-4u9g7+?g_p(;r`;CPsAHsGwJ{oCjbj75TiNj$n*J5qK*Ou2z!k1=E#m5I29>PD7@^P!79ng?*OW(4= z1pht`mZy9VU45lIBqj7ia!WiA1K$N6$LmI7o`5hroPcx--TataV) zM?Z?*OUH^p+mkC0)-QOLjTLMjqOVxRX(cNWZy)2VN#qH2?x~3eU)P|;Y?(4L;2VL^{8D+c%A9sK{{0ds;Rv! zwewnA>IyBf@^|YgmlpAivW7)1{~K#>=`-5Ossi$d{-?i@WypDCDa+Xa%XoYfE?$zM z;(7df-GAy^Ui-I8$*ru z254NNccNuE`jq15k1K%l5-;)wI%GNZIVt&hLb3!G0s0Sp%QI+19vDB^aZmM-6YLwX zrB?jKq#kfjI};zGpJ_8Xjc;hfna`X)!daoX$+Xc5r8})2&%xo57EuOpEn0}SjD7S0 zjYXfcQm0|0F~FPnwD~rTeNl;aMPlkJA7n*U?vzUzpcu40c%zkM4!zGER#|l5VfaK* z9)0xrFm_m9#{aP^xY;G6a^+0C4N#TX1~fl=1IO;K$*lq%=jF|i-mdPruV~dfBhT3 zlI$lx{gM6Pu#)i%uu1zWG_inYFg|Uj4hRcFZq>~#k6Fh{l&OBlcQMDIija;J(x;w5 z71M?vMZxw1=%V}7ntrDr40cJQdz6i`PoS5+JtJ_-vU+Sh@$nywn;~e617#^4Gj~^l zywoOu3^f8ExIYyKP-P39o_Hp$qz0CH2Kb>IES<^PZeF5KFH^8Dm4$Q02Pz-_pv_PQ zI>5Koi4f_MfnVCFxKNAsxW5$boUBF$`^3-a&IHoW)fQU`)i&~izLB@0Z!r{XAwRcN z9=r%>B0p#ckHH%#PXf1}K~v~Uh+r&ktl&r;W#D+J&!`i8 z%UF%FMPq<^Gx1kk+6rLwYLfUV2IP$LQat*D;o#jRx?f;elV4Ofv8F=38U_KC72dGfCZYG&P zzKObApDzIPm>AK&Pf#KqZy~TG!T4G&uBAkVWNSHk29ccN@_kMGMSo? z@Ngk8t{JQF8HOBWr^lV!n2k}_jZ~4=@D>5;!5hdt0`iZ0ZiCWeVYh5l(ewo=%rB-) z%3Og*runz?mNelZ>~`4!7G7*)O=M83_3glsk@>(qfKXqJ44 zkvp_ib5W~S^O*3n7$De5s+6$jM;@L|6{-Eclx=B6u=XkbY|;9JL0$4r5`FN>nO zJx&|jm9p?J$KwjYm$h9#6V2l}1~G;66>Zo-O3PX*E%)FTtlhxaNS;t>CX4*sub0Syz`dhR5)d&-Yb&Xsh?u=@-b8-3E^BWGuos+P4yN%vD$T& z%-gQhDV8i=B8#3jNNUAt)~uQGpS)VD81XfV2p_6w&L&dVGHhok0`9m5@ zaY7#BV|725`N`5o#y4#O=#~cerJvJc#mcVfM+bcs`kQNHU4j47n5Ip{qkk1aRZt(fwxqLlyA}j(cLUQjT6zx$LMP+7iA64^R@G}@3`JnmRr<;G`<xOboxqm&s>vPxFvz_N{Re@m&i~!^L z4<5~~Q$`-7*`P}I(znG!#;@sK*>47%xvx-W*?77cvm<_XU6elxF^cOrh!EW8&#`Vv zNr=?FoBgt~E&HU2H={s~W#|y^uAS|y&O3+k@AGVw3ac3?!F=6|+8BjI6N_s!TC7X? 
[GIT binary patch payload (base85-encoded binary file data) omitted; not human-readable]
zT)4QopCv@ng@ZLcYD<^A)ja;#T;$^>o1KAU?H7&cp$sH-SiFp!)gV4On@Ba7hcxdX zqaS_b;pU^rlhaS19o~>m%WF~i(oRE&RVJIfrPZ=D``iHzK$c|s$eRm8ymx+jD z2R?Jq3CrknWw)|S9#J>k^anR(Q?7%z_2`2)-AgjZbGR z9Q^5EG1zBEYB)6Jj|i-k=_f}mdi&hntsz*wbdG3TucUFQ$4C#yTsqo5ZKSTe5XbBNIho~_`y+|M@ zjHCx*B5OY#WsMy(Dok_|%yS}MGKKIc6rMs;fnE(-g`es#W~bLho}|=D_FK0XPlkC_ zIvZe;?|$by*|C za0HdV8dh$^QrIe>^lV8-!mly8X6>5hhd=r;_<1$PfL=)lMgRPbZ#FN!^kR%O>(S%> zI9~4`{NM*M?Cd{E4ZC*PyWf6wn$s_uYGM zGauz(c_$$#^@u%6-d7kjX&4ml0I#lM$))($Hp_EhhiKw?Np-coQ3h> zhaYL4#pu-#IF*5%sC^RU|BGMTjp8h8>N-RW9yLOqd+zx>x1TI~%84f>pLJ_^sE9ow zLW9UO(sY7@0oo!xY9QSYPruB7(0)@n*#CD!4?f$)08=l&#>hYa^EZi`7dq4Pq*qbe zcpf;mY~G63RIkVT3wmx|eDS5`t{?w6%1kf113(9+KfLqKX3@e$EM3_dnsF9DufLvI zHTdz&vc&TR-;9FxGcC*jPx+uSbkHzj#OSm?UNG9E9HX$b`8@_uKmF-Xp@}(}U2$;i zpyg4Nx^$tTpjT`Cx((s`AN}x0fl-gL2E4rc-S2)kaSkjk`vjE#N#I1@kfxmpvhIl! zCoq7QPnq_#cYwhlz10w!;a(X0m}Dt*H=n_lhF{i0^ITao@6CC5otLm4avMf}@fZvD z0}m7))vM)N;<=B_Z61N9R=mA3)wn_EMBsCIf>Y=^72JS$3b+w=msc1xP{u-=>IBm` z;F~{xK_kxOA3wee-odgTX-F}y-q653hIV%7G$ppiXs!qUk{>lxG^U<>;t9OAv~S0@ z7;jrKqLmXbgLBiVo3@}=?!W)O<~}}~8PFN0jxybkp{JF!S;I=X;Xr%dYtnJquw@sl z#gXw81GYEcd@}<}hIVx;_wIX}Cm1BXv-+JZ&CpSE z0=(%d6vOE&ntsHFBiiF>&)fM5&f8eeDf8(~+_7~>qv!3atFMj%^1094(4%4CNdMMb zZi%6)5iz}|r}>Bf`yZP57+YuKWtwo@1WN4_HhK}Qs466dOFnJ=rYx{9H9U;9DvrdG zo+tde$)UG*I-i~h_;=rZC!@`qZn`N`i#xV4bxMs{ugR5DNRC{GSP%JGmB<&hn+QfC zDhk$06r6wZCx605rE?-M7cO4de1*mI8j3yBdt%V(?bIvtiBH^~b~`n*5bvwTo9j() z0~dN7RVoUWS+iy~|MUOyzeZ>)NWM?{xpN*v7`)NUX2h;QQm9`8e$S>}r(g88s9+W? zTGT9Dw6M{`c`Sy6Qxh+tob^PXd)|3WDQ#g_?+cl7bEK{S&`_U_@v1jV7=C=$k0V@Y zW=FV)2Ni^a1eNBEH{RIX0^BNUNB<8p`hVcQ`{NbVW2ixkq>bQrI{j0h{#5hP%P()7 zZdP8|f1@$>cOj59q)RBcP3N^S%5%LM(vNiD5{mf?7et`WeRNLq>Cb+qx$?>@Bf#w! z4Wq{qR!--B@k?KX4lfRF^la)OefpVaLT?lBx@v&mhtczqk9;KS&=kZPLQa+5j}l%0 zqCj20j-h1x+`a8+{LItOX#VQ2{wmiBbBzgi)-h9i)(9uZl({18P(mY14x}@J#->Y3E`(q7ATB#*dA!-TfAaR=RVAu{CoVNYj%Uiujc#Y1 zb9M}-#fz6@{iQc#;|M*Y;DauO4Zr|wf*AV-6fAQ~{ zD?auyU>uv75tk6?sn)|&y+ZG2MDJ3R$2nO-n)=f}{*&;sa2-4T*yd+H`&skMbI-;z zqcNue*NeA#6o%&qUV&j85jR%*x zyvzXOA-wIMV@Bo%ctL|qPtFg%k5Pakl~46ZIaMo-oPXi@&8I%~sl@4_a{BXI|MIO& z**ai%aO6~{xc>Rie$Ml=@iyL&a@+u@N;X89gC`Lh+{;YH*Dpn%ykY{LC4Ewqyx!+y z4m0u5FwjW8j{d(1nstEgvIRd5Xx!oNU`9{Vb=O`8E~jJwcOLTOv!D5FP6yFzt$eYGzM$MFEf7E{r^#x7Qx%uXsiJzLmf^GaRhMh2L zY#GE)q0M5{yGB*CS6n*yvptEdNCocTp7=CTyi=AlX_hy0#*F5sn{R494$mm#^(z14 zKmKE8%fr;0{%P~kk6fN2rc9pF%z12X z^V+Mg#XFid*TN6-YyBF=FLTn3%QHUx=}$LqGE-+c%k&L4`n^hdE(dbf!+zHhB5kDFDxw-Xj9nx6&0==y(;zYg7U5ueL^%3_ z@S)UzVbrMMQ7{@>PC0rkjfUQsaZK+~kW-Jsqt}a8V0cI@2VtYSIH+j!7VLm@pj_pun1?rvX(2S7Sl$F!7!-nNgy zU~#};yR1jgm4?gfZ@eClrZnoH%5}T1qof~s@R3YSUWfu#2|G2W^45rWzri+X6mH*+ zXNo%QJ5Z3ed2P@uY+HrJY0OtKi1cDg4{O$}#^7C&!GKCkqe%nR9q5Y}FOH(qYx1{$ z`?t+C*It7MbxioqGHtW#xP;l@vJJgu1qgTr(UZQ8gc#-2vvv$R=QltWH! 
zYaDDuu|<)=@58{?FpOvAlWUd7GZ@hDm5##HRHMVzfqzApcix(})=ByJZ<8g(*-8^x{6<0(~c$TV5 z6s%X+^+LJ}TO5cV<96?%@0@}rPOoA$eCW-{r4EL`Nq8ps>%?|0l(F$H@+r_dXmn#2Ywt#|No43K zJd1e9DPQ`n!2y^Cf8vA%I^rid_gmcCZzM~bI~`ODD5hNJE*BIaZ_SuBBgRv_y%;(t zuuS1~242cwkErXxP}_zBWCe0WFT7LRwo^H%9KR5U$+y1!?Z`3LC>!LN+9%=wjYh0c z0+~S8U>g$M6<43}n^+Q9hDZ4+x5S+^b`FEZ@eGDtcPZaaXON^^y_tc)A_hQ@KlVhH zDJkn+Dz$ZtM+b5~s{D_R7%Zs6{8+DbI(wm`K^X63M&MEghqGqR%FN?m{>5KJKIn~i zfbLq%4e-xWWT~=5S>#fHcUG-R|0@ghOb(_ix?!m_^KOjk=@{HIqu-Pr8u=~{P$oNj zQD++Ux>~LdDhJ-mTFa5{cJI=2OdrrFasU3MX*X2A|JO&X;VHj`r1{musGX>B>CQNJ zAp5t4%?hUS0C^24jiMzG^>@Ghod__!1QMTuSmo(7nMy<@E;F_gs6TVJG@~$3SO;XmD>H?tBS=aGDYuK* z-PqvtkPN6o@cc+ULyq|Mip02LIzi#{cVN>1V_{b+9t2t--8bmT}^9PY#@98NyhJ33vgp>CT2C*|rva~-XCcdfBVbK{IO zpg?ywzWCja_m^z(bgyH8pvOvOXq)s%xopJgx1oG;EEdm%+c?|qG@?^rPGcJ{-FPB^ zhSZyH&ch&GhPV0U7!IEO_%XbEBd3qd?&1Nh9OSlDz_clf9YAfP)2-GOIDk{UNjt7T zbt#85p?6R3nM%Yp%+i-e!5(o=8>DyFOFN~e=XN!U&h>k4wA;3AduW;pvXsTV8r*u% ze-YQr_ga!q>a0k4xZs*x+)_KA6179$L}BcS7eSQW z%%wkt<-jl`M%sq;@4@rKpy{Pqlc)09j+{A_Gko=2JFTZd8W^a-K2jE?L=_mgt9@!6 z!eSZ?BMCBhHNIgwe3H{cGP{)BNjCf0o%B*DY%xE5EY=j|B3syjRQ_)?%@J#O+MkLx_yf8qqgQ37Odye0Pe=q69tH)dMEuIjuQm%7Eo?q<+b81v zbc2S|ps(QC&j^aVGNp#0g1|-Yj8D5Oqr%39@Lw9U3tMX6AB4Uc78 zsb|TR-P3xFTr0=fhNDL`Jw13CnUd5itAbJZQ*1_b5nSNNwpowC=WZm@z|mu58Ws3L zHbA&Rz>Oh#PG-2v5n(BQ9X%?{#D_48Q#Tjr1)Do}PIDFpmBPa*6$M@D1(ymHnOeLl zkj<+?sESZ*t3>p`sr0rm9iw-21V>2i-G^dAa0t7j($X%=vM-DG$T<7Qe$#Wra6lBM z{r(ILs-hSHB(gq(0)0Hw$!DK)R)m=AE2)g)mU0nT9AqiH#e=`rrG%E=#EBo*V7gY) z>D`Oju;MhK>!V#u=d_&Zgd+i^qccQrEy_3&<7f2tMHo0D;YRazH2%S@~15h&4KGEZ($%# z+z%Es-bS$&QU&-CrUUIyX%`V1D4Rq;sDrs){I;zNAO6Rv5ma!WdK_QMS(9?0*28>UFv z4g#}&je26#(PuVa97wYnY~!j7Ou>#wEBKN2b4eEQTV!y-p1fOW$2M3F6_$2U#BSh` zK58hkpgq2cw562NAJU!tF6}$O6AvUveYVRWKE$thtFVSYNN@f2tMDOt1DEX%{}Sh> z9tX4g8K8R94i_o&P+z4@w>|Pr!WXhrTGIZJ2GO9Xv5Pk4xvLB zGJ+ooL>@C}Jj&?C?Zyyt?a13K+E%Hbj~jh9BT%O>Hb9UHLS5{ZA(6^_Bk`a)Wpc(D zXEwXW?81BXPBtg)mT6%Cg9jBW{3KM5aq`zGE`_2@CQ*+YH@a0itoOvrs0U5~qq0_M zDI`@?QoUYFTOd=(v~sP+p3n5O3EObIP6oZFyO;)ZYFEQ=G)CFmD^_NawoI(2#8GCB zsz7K&QO%r@-SY~$wd>YoO`)Djy=W;D&w=Z$Mj$kDmLW!aHf#Z7ge!1~hj&?{GzH`0 zA_Rp3RXD<&v|Ry~{%946>rb7Y*6?&hefBwLC$H0Fwp#--JC~V0Q9)^}2!m7gF48preWdxKr%l7k^?4df(#_mCbMSbw3!Q1B6PUJiW5(=Rv+=AiWzFLRcGI4Y z5;-o@YI;i*+-{7}81U$k2^g{pL8pM-wJPq`uI0^5jQTyc!!`kQgrG{*SHYfeJV$q7 z)Gb}IBnE}+$n>mfyubch*?^-b+v#8J?dsN{pSW5 zgI-6=H3+A?mwO6724ZxOQ{|Vq3g0Jf>(wJqlrCkM&w*fXZ*Ml=$P3Om$QOFprAf<> z*Bx+r3WZa+3o-5;^l1<{U|h0zN%JHFl0W|AKML%_?QV@P3?)6au7`ICj@}@HMzAt5 ztpM)Gg5Ve!iM0>>@)vV0sCOn{?HyE-MIsmY9MA+Nxa6 zK$Kq%F555PdUV(ZHX-V1&+%T!-E7>V$I4)3XJoNFlYumKNlSVXomtW&Y+2$;&!S6V zTy~_~(b?eUAN$Eok8v8ba2@yqMr=3mNi2FvPhu0wOs3pD(c{sFABk+d1iBtS!GX5@ z$1ZMoTO-dC3w)!J18SG$xT#YE&dqu124Qw{pPL!&H>c#qrLsi%NFzG*WPAKXcLW^1 z1_?g*HKA*eGfw+&q^%A@q+6W`C-D|Mm;AWq($gPi%$SjxF^h6$Ciqk}xqig{L1w$T zRG2Jx`-$6V%O=jOUKPW+rF?L|sd)qWBn{kf!wt>ZI3#R?Ywg_>xnRKpWXCyKF64$W z<%~;UPM-dc=W< zvyu|8bgULIfKsYde=wjl?x@GJ<2et4jI^#0I@seA=NqJbWM~;)5IHtlJcmSR~hZ?r)=xgBjD7aYfs!c zzIo$j3<1w1bO-JmFwPE+6&ja?%)+M8;kp{9_;<3?+Z}cqs*Z~F0#2REd56?3{0fH% zGTPTRGxhB!t?Urtic#Skb)4RGZI%YChR2E(%bPEJ;R_LHwxg$~2ZQXi99bgs%$zwh zQ_?SSOoy9B{wD;BJN1OmDIMQp;W$sDLBm~xT&1QU5EvI?pTrq_8j<48h4Bh5jZ3Fa z74|Z%9t0J-)61vff!Vd|q3pJIakoc3*+k3p9l074Z_InM*@t1FvG4jG4HKSkj>ZSw>hY@D9yED%?gX5^~X&yr9 z>;#wY(sepnucMx*Yp=f+#d2y!^yi&-E(+=q`hP+*i+*r=NjxrIytw)I|KUGG@Y>%B zU0*)n&TWsKae==yroguhy#fkIHzm01(|&bf{8su++Ejo`&kAA7Go<@OP9SRcSeSse&*@T>#w|?cGyP_)HPxq_~{K42IXpG zl&Apq`~e65DCaWhlOGo@To|6WZu7VtV;zGh*DrdGq;grB)R>k}?PZK)4CVwb~A19N_72P+vJCp?(r~UN2s=BLz@;pcDcS_4cw_OCSVm|+Z^I3Cs_W$4BnSNPSrD=XM&+{x(1fViH 
zq#`IdAzJ3NN-N9E%1m=*Rn~`ID{ED+?*7!PR(DnRA5dSrx>r|cnnPt(t#m?76%{m{ zXA!{xP*FfY5RiEYh<<+WKJUFJ4(CSPm>?kbhPdaPefIF~XHU<5_i)syaAZoeO*343 z?KQUf2Tji!RYEk5x2?EM9qw$&s)M`rwH?CmD{TlJASzGFIt8C2JCK{HAVzUBq{Try z_P_%VSc62RWcqvg@@2LbhV(F<_Pu+)r!&=_3{S0o%18`E1$%ZsQxRF~#MB&>pUqC- z1-QUV=QBi|BJPj}4a*J7ZqUy8HFj(atsfXNY6L^_0_-8JG~TQ&V+Z?+bYWLlDpKZu6EuG%cz;4>SB{WF``nUh_Z^OvzMk(&;_8HApt5$^vbUX%b z(IC^wv15_-M-T=`6l;f=zJ+w5l~#n77Dx?$w9=%9G=gtLSks<*5O)q6Hv zu||k|K%2Q>{(NbSei-h5;C>@}5CL}DGX43ppZ%3Ju#f2b?i?Ubs4R0>tf?a!`~D&* zbT$kS=+xG(U8BbTal0p++g9GD0ncO;%!xc2S4b*@05*p$RO7r-?&v7cM$(az_mCWR zHq(hwPB`-p0(9MyrS=ZoF=dT49ceSuuKEW>MMH;s&J^WH5-8`Vb3IU3scX!rK+tK&ICIvbw*smk;f1!G z^7qrf{ApPG%v$Z3eo03WtOURaKG9-eVIoj9s)d5 zr1>6wj+TuRRu?Gd`bQ_d;+7R*+t%&j@jpBfp3qTPl%EwVR@f;SL;C4c!z!;NitxdU z%0UO;Bn(xZYyeB%E?l_C22hY=PTHWZbH*s^e|b-yK{725ly1_7iOUt@I(RhaHr1?n z$d7@VEnuL#Lgjw-6RX2R4?bvZ8UwR+pW3Vw~NYvDW9NKL93Jh%+ceHZ$j%Vc$cw^AIZri>s{PGw7Pj!m{TeI!1zxt}H zO+MCB)4qIh({kJ3#gTZl|CAR_Kwu3f^%9Lc^&e%z?wz~sG>#`%uhwkTI4J?<`q|azq!OWPV7;-BA}Aj3RqOW22NVBJvpdA$-anA9SHF zgLGqG5<~RUCBNj>g<<-rQxF`{_y{NejGg~sZ=^ttn%mK-3|ujp*6rKgv!dKerSZ)>52vl=$NKqbc0_C_c&?QXv7HPuog9QUK(0FDj1_a8f{40 zl*v<7aCP{uw3-k;Q+zZIh)*SFq&7*rsZCnN&p${4TGs>e2dN0t3Z9Xrz|oQ+AtN`t{B zD|RAsW+rZlpGwRI7~TUKS|{7kqO{n8b9MS?WS9kOnq?>h+AO)Tfw6FH=Gor{&TN+*5P=Fs}STW|##yu5!)>pnaMl;EPlc2X@d?&j=T?z$O5u2KjQ35en5K>J<6K zi2^KvAfK6B@B21H;*$N$xyy>{#u-<^lt(W(5?|NcLrv7s@1@6UA< zlm>}R)eA|A z&7hI|HN2ItU_jkQcG5|kLfXkUh&3}NOh==wqwKKZ22Is|x#P=E}N`OFbmOpUYYi+aR+VjDNoipT2W#TvW+d(P zP+4a7j&w)karxCF=9#iWDWDaxg(rHZJ&F0jeh%DVb4Ir(G{88d)kyc<_gI)eds4Xj zj>Q`E%M^q5sCieXA>H6jovO6yCHDm9@BNyO0y{Sft52^bo`3)rRVF*SspX?mwa zFMbTEC_m%??Ewq*37byBb5)ACL4A$~O6 z5G2Y$lrKs{*jCmoO(HcaKUOZ}|D-Sl4?3raWo!UR2BUN|osfyY`J0~`fwpo;IusWT zB!{>Y3-KWjodna#JVFGRekMIM=n!erL%y5%2<8Uw*z#Xx3_qk9Igyu+5bZJfOlN0! 
z=-%ShJL(_Am_iIjgdcynX61}`S=uQdq=9f)J4U#;^YCJhkS<}Z@L1*&%Qx7`z0yUR zY;Y-gkrT3E7AuRGFdsvW;94U23f7lDLle;Kh#CY z7SER7dbYA9V8NF*4suJK!!2$Ux!hX|Vi@6{@=BXS*bqO@SKQ_eX(!q72p@=8+|WgM zw)0nx6-1>_o7R&@`NI%4#$_f=#EJGD4K!say0y|_`NezwygZQxo-wWg1xq3(t-L~N z^4jW+(q`$_i-?MOwRmhm47S3LFu@3{Xu~a^DKmP;ztt)4YX-8%-38F64w8)^9=KSYR})_wp& zW)OqFmRs42u{lgclNq zq5j7*MIEMpr5%$3hI+ZQ2jZpwBcBO_|Hy!$4Iw=0IM{gp z@UF6?K<=&m)wP!KxWSupO!*^j+FNVK<RvzJe`$E>CUqJS!&tux9L=2oz*Igq`IuT{vbZL~_^g~$sz&lEZec?O;ez8@9t6x6B ziLr)BS9YKpDWC#til=H~BTl%)%(OFxN=BnXgfzCiH-aoXlhYh2epOBrSMmM;z#y%5_)R{LuTSJ0B-sKJP%4B;PYym&d-tcf= z5ms(v1CFIIl)Qs9J?Ck~R*vJP<}dHFjwo{*8#ZBT|6&NZf>aBU%!0d!J4DEEmHTM$ zi&1)+%9dQn$v?^x9XcJ8895{zI%=>YEE+;PhDUzD-x_wkH$1J8j_J0tEWtpP%?oZ~ zelk@F>5PW>!L0=vvO*?o^NloR2KkX24HBI}zTO}&v7+tI=eMO$9!;}NsE|(5sx*)` zNC7fXh8b>WNQFzD}L8m?ei-#28}fFN0Pya?4vR<$*qbZTulI!4Z$P!~+l-6sjl^gE>D&o39?d0xlF$Fe809cC z_#^IteQhqvs(mDPnQXYr2_EC2JwXPAO1aX1+R*;}DM?jh9ljn3zhbgHHJ1t^&U#r$ z@~eM7h>;)U#;y}ol*)~jLw~1d+L)*b z%OWUTNNZM&Y;7%TklD1g<>QY-LqmgYI&^Z@Uy)PyEe@pvgFMytBg#Y_V77_;CTw{y zw^mpBMSu7rKb z(#yRmW%M2lG1xcJR>=)Zs|**Fb?^mmFT0sz3&Za5s|cvOlpZ9_r-{uzgu!4T8lgp) z879~n4&bG7(yEa^{`I$|M6o``RzhV%9;vs6n9=|iR?e~j8LYQ%JZ?hjxFM8?Xvf1i z7e1>!`nia$lW}whpoAFL?S8~ ze!26?8{M`Yr@UDO$%kF@K5KmP=^;)U1doXZ&Kgw(f^*(mwaMk^eZf zP$f5_`#ou44RXGdC<_Spaw>Q6t!XO>mClH(y@)TJbmEe(miMit$FP(CJcfvD9XEl< zP#!Q4FL2~5Bt`zCK+;;M^Hq^TfT8)kxlU*Gn>Gt%|PnsnBWMGt~ z4O|UlWTpcIfBB)DA(Fo{*|M~?DJK{e92qY1U$n=W#}VH#fQorh2%_>B-BBK*Z+^XA zS5s4Qe%1|xF47tEO?4v%XgJxuo-L0vd2x^79dPE$jlIbIxLbTsm&{l=)WIxVEWsTM zBroD>_PncM6f7Y_u$XS-&`E=R++6E zlQ;q!@3Y@BNs4kCQA9SEufm*jotk>R5X^vsZExj0dU*-13K&hp*eZ4fw=7j$SMr z-Fi(&HjB=xS~X%XL$xDvw;ob$mud`V|e zteBMj4}&G&(HeF1BkPWLwwH7weWW6tT;vrWOhiRdl0mni@{9 zt~44ni1mckW@K$Uh}cD;Dji$f-p+DC<7@dR*{vbVyRags#*M;OVk+fN0eHGAb|vny zA|LqF=%PheSc{AHj&fA=hdYz44m@#-M!lr+RHU#5+FNPs_*~+O6T7j_RCi7sTE4A5 zRr0)Q)#8q_2wCxRSEKl}@WVq92SnCt@m$(sNKC1*E{qLfbLY;Z5SliNse0C#LJ*OW zPt|Hee}@fFSz7dCF}G6LaaS2M=-$(wkGx{1z@*>Kzv}RP%+~r6Ng6C2b;MS22bTEL zj~Ah~yn8w=j7mV!SK+Y|6!zt%vf`w!n;1@APD>q`> z1ihMg7BQpaC^pSrh|HdjMV^YCRX&_z#mK+ws(U)=b8TX!ZCP!3^3hxoHZTo|ZrQ;XONYqaKE-S9fdPJM1@GmXvKOiBc$E58oW{gtJW@)1i4wbbv*T()Kx`$M$qglC&FRsLe>BvfW?2tS&< zJbqs+r*7X`tmoogQ7+=_DYA7Y{gyZUc&2nFux|95h-VI5DiPJYGWxVC<~c}5lXIMI z69PRP0c%KAe2rwP!-xy?Rl?Dn$oOE&$X3>!v_}iknr63$L({5wkxa9G8Iuw8tZgt& zu`rDUKpk_wQIH#)d5dA*yZEO&!|q~HQ*d=4z_5j-8J@3b&iP|W3>r@ruy|EDqo~C- zk8KC+;kE6uwu@M_V~KX{Dv~<&jCG^qw6=E4>CMHCKq8(sa+qlYrahQ$=qeMZ>FjXb*R84ya6WG(~*BAN8EWS(a+9i=&qNan?Xp#EFhdoz^8`OGmNMYiUPkT#2sbtI88(fzF!FIp1Ja9Grt2$O{YLn+pR6 z%Y!ThEvA;?Y8hZ7uC!BNPhV~)WiF|E17cr+eE+zyL@hr_vRV~#p zo09B0g{Ot-2hM7HLTSvkE!E92?IL+Npc+zTXE9377!tqnz@@WTM(sfBcoPY?J3OPf z>mi?Vq@FozMcwmxxgrDDC>_>{>%PLSV%jdgm5OZx?bFF^yF528e6T`n5W0%CGb`Wc zTkGTUA%_u8>wX!W#@a>+<1d+nK-VCkvPD2WjCoQoO)UFOKM8@3A<%`is$&B5L|=3+ zY{1}ZyZF5FS9nzArcLo7upGSGq%Sh~*ApN_N=6u80B|@gU(!{@h$87)rGnYw3K~y*i5j^;^1|qZQy=ds1`P#_Q!$<6uHBt~z zp-S!>Z!y9kB z84uUh`DaWY&YQnL=P52UM@=HORns#5s(tm&=%2!+hr@NVzQ_h_Q2m9Mz&` z*V!M^!VyT#B65tr`w!#OWZHSzB&`(k$l!*!`R4Ub9e&JL8m_+bDw{s?U6@fz)LCh& z{^{)5eavUl)Vha3Lk1b)6JK%g@9h{oQbE5~_wU}uQtv4;}q@zwQzR|D#Bh!&1(kQB*8#!9UYM{bU>qh5ia-;pT6UfWL zt4dwzM5|WSJ9XUn<`@n8r%s=xl1C064Tp{#GBP)G$j~q-`=&B!Kdou?ZJ%!o@9L{A zBS(!2S6*?YEe$xGiQ(Nb@nZZrKDGbX!*QQ{?|sYWE#aMa-VVEV@3ytH6UL1TQ>RS} z3l=U2d?T1V?0PeQQvdIAMawXcRCL(=b*hTl&GZv&Kld0ypx~MtH;qfwzHA z=cDed=x6!QmvJkyXSs+L6?aMl$!IOFjU+0~Xglr8D8l~BFTV`$y#00&0!Gi|d;HFPDh>V+};^#}_x5V2mQu1&S zAwZkrA;yn9SGoH#C5=f8{ z3;o2Rt*;lF#=uJF$K zb;4tQXl!U~5dh*JI&>&(+qNx?IdWJ;wU1E8-W{bti!ONRBZ&DbVu9a{LuUKT?>pk%Gf$*`ugM9eV;kZsV 
z7anmMxq)R%&Pe-n4mxmacnS;4V={_JY8qa$zA>)Vbt1X@9$R}xP;G3y%z3>}6~QJ>WG9)A!{3|FT5(3YX;z{F3~HHYKJPv|{gZIFgT zYb^OrbH9EAZ0#nSYM7qC`>sEeo2Kyk>#v0M>)$of$|uI`AZmq$qNJIrV!J;bjvqTN z_a{W!`YFyqrctD0g!COddd!&HXPE&+^`kv1SR8% zqxQ8LXhJ`&1s$Ta^y5<$df%6X7mRi6ijTB9CTMO>o@fe(4j&3zKmRPOeP)d`QNzRJ zNt429VMTP1Yz%4Rs~dz5cKi14vuXE10|(i(sjay*W^6PfoD$hMDGjcD=U2V=0Q_re zoJ~`PD+I8Hu{Vqr%PBrwWGS$B)~x0R{ueK6J=nz0`VHFqfRR9uyhn?xap3=qFsjcpx~#HZXFF zA-xu!+=J}EfiW%p`x2g{70a_#0ea7uNKObl3~Oc4oDGmRyu!`0_>t8ohXXRAo%0HcW^7JXC?Sw*VRyiLgH!MleI$8Zg>w4lu zEZ^iQVGh(13MYT~pZs|A-?BKf=Bw;g8+P=eUdP*T znqDUa3JCN{;t8p7-esgt#A3wA5n-1!dK)%uu(gUq1`i2y=FGP5(9(HF;;D{F?19b+ zBKgsWo5Hc{r$ne8 z6*(OgMvWeAQ~2}eEij@&hs<>RyYH@-j2~!&M!zs+@|1AfZ8wKeng*vcrn5xG3boU{ z{@Xf9G^?zu!yx~sTR#nJ*RC~q0|ggXPhj*|WoqH{BSXe){R~)!wf{V?(3m z<8hH7$jdt6v+eWGCG+vn*w`4BYe@y=z&C>7ZnrdBZ)@83lPw>I(PKu3Yp%UEELynG z1|)O>{RW6+h`8?BzS{`Nm(pr;05s{nROM&f_z8*=4H)F;bogrDzAO(nihP|8Q>IJ_ zixuV+$v$Xc9N0nSj>MN)r6 z8{1gQfi`rY%HEFcU#O1G(Q=l)nhDq^&399{`s%C0e0@*7pVhB^VXv0Btb6C3z^nrU zhiTKNhbyH0o;rP+#xc$qctPC~}MQeC?<#q(eQ%d6qs! zf1x7KD~V@5vmj+?7vEj?Zg}XyheTrbh4Jb*AxBR?^K{s=caL?l5U~F0j1C?=9Pazo zuQZLjMcSCLronk%MDP`n#=rclzfwol5Vd~!NOVsntyEOi*Hb5(OsfWoct;)Mqau+9 zwRVg(i66ZGzB<@-Vd?ePhwpvw&!kNmC~exYs2w~NiSMb?C!__HrUqgnBB@LE*Btij z*%Kak;K8uzqYuL{b(-TRj1O%Qhee>il9qKy7?Q1>ByB7qI3zOi(aw*;f&B-lTN5e{!#BxWrP!TW`J<{`ma! zHei`LZED!F`^)f>w9?C#-4JfS{dS8R89$T8n0)zC?eEx!xYb6)v@tL^3gUz>d;z$t@LnZjpr_vhl9FjnM;3#&7h5M%~M{Ud;YmFW9p1> z{N!KF>w~J`c znLWqS^3GfDgik;F)W|ZLa@Lk^{%EsJsox`PmM*>C1~OpAk`4wc3?9*NBV)Dpu;DIN z9mqL@y7PIKttBq~oOcA+oS95QAR$mi;DRKcMKZ#)R!0o|H->4y=bn8o?A*CCT>Xu2 zh^TzSrWqk^JJkWkUG8mtbwJeH7Oqm#EetUTLy3g6bV;X(N|)qNl0$cQcS;Hp0z-E< zLwAQrGk|ow@tphKz4zQVfA87xt+n@W?X|wWmgn^DL2}`lFseVyPe`&%HC~{vjD)JG^6|r#(P?+6G+~w}qzqa? z(WT?mY;JBknqrui?>q5f-0tm35_q%cU2K?mnjU3Q6vnc;(IO#7s?-G-9PZ7=o9!f*}vXNox%fl8D<0`LRs> zp69*V(K)mmmZscEL0#3qP^nGlUgea zP|RJvHAg(1?QmW%c1AsaUTPCf`iw36JB~*12vG9*0Xcc<#uwad1irCk{S@TV2en@o zsq|dd1Apju$1tt?(mqq582Ml?x4z=O5Wak9Y)b2SFQcegsu&eIsd<2xc| z@B3Rk^kmu1aERUA6+GxgHAX^p(S-*hYR+%5Fxonim5aLV=(ouj6>R7q9B4_RZ}i}G z9j+flHW;k%v3z{E4FvN3u;A4xx!J0jwn9J4I!|&Xptq^a2sm6;}SK~S9ZG- z1-`f;%7FLRchxWo41==Gr(nU9J^vWR489J5PuyNHOi((<(Kky!SQJyZIrdFrBFfbR zqHtf%?y^hC?_4!N4YVa?#~hX7M58WK`M`eQ0LbCO%d%~T0f#`PdF3>|0d&j-R`e;# z*y`8`dwz~WE@R<8`3uC2Pq~!oaO?Ky%toMKnS-LdFLZa814#fvDT4v-$9{x(JxXIF%opbQ@ndfzB3~@23@6E`N zyR(??UQ6`-SX^D0RI^LOX7Wr0AR_*U1oa4MkKW9+{Xr39$Q# z#v=RL65KZC$U@OWPHwiq`Be5HVIVm`Tk>kSN0fU1?srid{*B&`Ua@PJR&K6-gPXFfMARP8#EP~?=7X3F zBT0c^<}6pO=z0}N_vsRuE^<}{gZq8#*e~eJ`2uXO4Z?1r^Q#{FtVu}^T6Nf|o-Wg` zCaEQ`qrrm|k!&bf?_p0OO)&C-O4LK75X|f$>4=C81-+0%lVn|T8=Y32A$;!V<;@*2 zuur`o3N>xSANarlI*SI<`z+B|u*oorvKzi_45x%EO*H}S$wPHLr$AO+7kD6Pq(N2! zgb*nlLS--PsG*}Epqgg7x7%0ZkVN{5wEKo&Uv@F%lS1JTMGIc;+BmLgom zR^zb5z)M~$_Ob)IY2+l`^I4z$1XB#Hb7ImT0eB->``=kkS?&O>}1eaJjngZqm1! z?Z~)x-0yd)QV;k7NiiVw*BwB7I+8#_QyfXzBJ-u0QE#46u678MaeESjc~}BZN`BX< z5pnU;ro+X$^q^4N)C*(AMkRfVu+&l{iir`^By?&CL?oR{fqLvc&1*V8W)vw>T8GkP zM?Yi^ld@IwvOwd^yhBNnENo8MXVkx-Ic__sAhOvE_&Okqc|mKCi^5qWd9XKwHmx^`V*W-pkFNvm zqxnglb4b;JS25qZUNO{4!(#sGGT3hjwe9EO4?3+P{1rYUgmSw;A{(7xV^66m;* z&l zAW_;1gCg&p_^a_&&TrJ%gAZiNzW7ges;y#kB7~zDUc6U zc2#ToJpA-d33-*?s)&{-6~ezwkBgbWAY1tG(x{%_K{{W7Bg!>o}65usGvE!MdJ#QN$m`MaYe4^Ahp)giC zem%*b3+{+gj>An*PPC=gsWh76VBfPEzxYwSFXrnRzHix}1vAd|Vwp`}^y;?O? 
zCapGa()5PZTRKOQttbS*&1*sKSPL1E?|rzRLuEOAN%jnn{4+ccoIe>TqFuxyVfz96 z)hJK{M9Z1|ZU@`0`6$iOS`F3i$7Bk3DA*)TnP}4M7iVP5J`kiGT&xX^OX)bpYoPO( zFiLw*i?i^2q>z5#STcHaj{T!X(G))|g(bkq%u`x0EE?n17gHg#apSYns!7_-b_)w@ zYpvIu>Tpm}3us?Di5R~K)rZ0oEmg!c9&?wWWlLC+(z|~k4jr0@FTFwE(gfLWhSgYF zVe~r9>XI@ri)u(ejeS(y2g;Eyh(m2+@QoQ;xwPD|tb$%ZumK4+{cjeQ-H%(;_g{%} zM;_dpUH5qB$M?=C4r z1;@=*Cab=oJ^WSk$_NKEr}#)zct7dRULb-)g8PY=GY)Iz&QKqs}%!hyq#jaaj7z!C0q+{NaISLGo+03#PJx!oct!FQ0MkCoy74; z`21UgT~s-d%KadWGf zr;SmH7lIJMpw4$GF=*H+J{;_{Mk_x^aQ#FadWTPTydQ)ogUEeJ*sU7p^z8weQmvvb=t#o}6Vr3bBmcNF3-=(>jL{6<6~5G5ASfA}Y*O z|1esN`eE?khq#&DvlB9d9Yj}aM`n0WnWi`UlmWhiCBGXEN`vQoh4ZbFXCnOwDBQ37 zb545bHd46EqLlV?#aBxUh#&95_2(KDA!pS@*~qC^igaso-rp63RwpXOL{5F}c@?!> zdF3j#Jp)dumO}k2Y$|OCL49i&IwJF@p~}iGOx(P@3sui}dBlT?lrt@^QP}XoXs==f z(Uh?1#^yNrC`9b*Tk#8iH>cqk{20rzeb~AX?A&(P(V92Ix7R_zx27YP3Q3#giU-DA z4-NaYg38?>Z>DGlRRYH_tG|6cN>*yX`yn9L$kb%>(%&X0wqIC&!E}%SISyfx7S5DK z5u0LnLr2xO;x@kn*x$cs|4tax2^xJ#G=Yz}o#krN777_(V!w(oV8({tV{4RAy(=#j zk*t{FQ22gF@Zp%}ddlCL6T>$^?91DB{$A9iqf+<;{c1#JVo2vur;(%bPB|{R(jsHF zhXsY_+`A{Ct{P2C(Z=Nb{ks}UZKKJ8g=G4k+aGnEQk8wjY#`H(i%62`=&S+bm_E%K>_(b9L=$+x~54ZEF&iywMyvt&onhuvu-q)Fs_%=&AB-ZCZ zVIaq4p&auQS+k|mVuf;ZTN+NsDf2s#$)9PSVH6|ObRk&)rA+5Fb65<;c#C@|%o8f& zh)B@B(#raSAzF#f1Icl-7q64{JJmQ^Y%=pYv82Vd>>6iYzHXD(?(YSR7A{1!q(+yi z$eH!3)L^5W6b91;;(5AQ0_(FBj?c1B$yJ`zzWx<5#V(8RC;3>vG>T-P2-hP48~Nkb za}+Gq>1W$cPphZarW@7Ixj<$56yZ*ED(XgW{Ru4Znk$?VqlNB5tjCZXT2Hve@~;zQ zL0iu_dzi4sCD%X&$&tp^B*Xac$EfJasI=fgWH=D=Cnkic8hahF+K&%k9&IeaSUr|XDx{H^wBK(ZQIC<`V;ZY3Stg=hY&8|?ir7oh7yyJ>uK{fJiZDQU0f+VE58WOvE`4%Mb1}wvk zf>yT(b+}MHL^FNm+IMkyLDq-JqN%oZCJQJ9->2I&Xf8kl7w!caS!Qj_Lk1?lXOmRU z8(=oHOyqC~5BK5-e2`^CiF2UY8A+#DnIho86TUKV@!aEfYQC@0{P`+V(B(VO&ub`5 zyTBS;r6PMojFD?3VlPXfGo_jbG;XBwPl~N6i1ODY@%q(24jPB&!gcCxeIce_aM~X4 zj}KEAW{@9#$%s~s$OmjLP~r84_}qOu*fx)VtgYsrJ4}rX3E&1V_xA_P#PFGpWSP*7 zb$xSkYM9TjHXjE8d!XFS6%Gobix&;-`(`^bxYG%zb#3?7N{K3*ofxXVR?P7w9fK1S zOoD}tu}&qnCT90;K=t$V)rau+?P#eCv4{7d51t1A;GNC;`~}dRzSfKsz`pUUs}qIoFWvu+R<&UxX&bMRh!RpZm<% zncGm-ZBmEJTDZEBW9hm~&?&cb6QmBCQ-&CujdgtxfODdM=EBkNW_o}3!cl0}PYR1? 
zF4f>@Mf))e=+F(90B}QClniFyXz9!DBK-9LLsTOa3EV+)O9uWxgX1sW)`E?fLofXC zAOl>#l?Bl8{B^FlIQPiRE>Hkg7=Qd@1$l)K3fCQU2K1x0hsCyHD|1Y5culQbh^~}Xvr1gBSg{k$F(15h#B3io!gvMoEr2^s^>WMBoQs}OK1oepH6w?N5wOD zTTG6B-Ou;&e_i4|p5Q&^wG{#rvTBb%4D{ZPkvw%0 zs`@Bn2n=0usBy152?>YQPu`JsNAcLsd3iGnhn{Q7qX8Fa3s2Iov3P04sf$g7ErpVmSu3fq;qw(sZ$yEEv5T=rkE#J@Q1#5Y+B&3^VwL` zA66B5_|v3S?y42-0~qeXm%<8UmIS`MJBXb|&+Kyp7yI+oMEZ15nF93_@?3zrpj!Hb>I%l6D;lDWh$@oPmO1>K_p ztLoa2?i8uAtAP>|r@iBIWDYd*m}XS0pFDor{W~qsJlJu?;D&sxKZ-hJT^9EuV1qC-$7g|I%OZ+y_Mw>5ZTwvd)B?cTq%0!V^TZd#4 zzMMT4DZX%`g!-C>(0y?keS}Au2~gC%Ua9US@S6al#mujqsNo{3yNnXk1m2+_L#VGUHPkMzx~K zC#O``p9MhkTfoy=6KX5Prd-XMU5!9}c%dzgjlo*IEuN`ozG@HtGd_IWe69CV?pxhB zeg@}RxT}*uR?A0$S`K{>lUPuSdFK#&EDHc?XTU4F@UhSpuHq3RZ+xaZ98x4R0PG?@ z`I-GYpP018FI1bn;o}|)kOT4HW=B(?{i*0DMq{&W>2=K|7Y24I(<52Yl|-~t&bGUb z!;UocmskPfAF!zvrTaL-DM0A_Z`qcJNa1TF5t_KZI*O}raYYfNf|EtW$1%y_?z~8p zCz3bKyq|0fLlhAH(d4UgAWl^#+aBUE?B%}a1efCqWv zL8V*#Ko|)bGTK%=MW=~wX@M>{H0aJ0SR7oz`RtQhamGx=j{%*FECmtojz9j&FOu#1 zPu1ICb!bq0i|^d4&$Z=h)ZGe~cl*+qYbfV3#t$*dbVc)R)?06>u+B`CFufb??Euub z@>~hQb(tTdAPMv;Go#vqWxNW9WQPpiUMj>kJF_0pxrW$m5z>v8esE!TA zKLNl5=T5VR8hgp2S8)ZS-)oCaQEW67Qh)EWHpl3XdnrM>^ z5^Sg7uUBzF76PXa+AZq&%Jz}PK6>C+c!7ORjOQb+3TnD>yB+WU&IF9q!i6zd-@GCt-3IH<1y@2&NASqdW zp#M;EGnpAuvu<%@evp_HbO9kEc z_ZdnRnD%}5qIRr;9Z#%LnUq0!Q7s=i&c52NmTZPmS5}w8tb5|+^7zhQyx-`p$$VHU zW=0YFJWZP#3s6mw4ctI216`XyMX&-C=t!)C(-l6A7~|6hrR>F4Z`9)R0qjr3vZgfK zt?#@2#wQX9imE4rBQhA;p;ghw9l3f$lce58t?2VQxvY3vD9P@{^{^yaUd={3xrIEu zq)_v*ckm;)HO_RmxPUL-kUkp~;4r5BUBlixpd`QuMBAIxX|oEKSSd_fMoxPC{fQ`c;!pkATW zVkcGtdd2blj+`t&c59x1z(_s#$)R|@9)D-E9F6Bog$^I5xYg;p!*R$> zT5q#Ve}4K>nRccpD<24VhDxf6k>gc>@2PBdbErgr8wBO zR+K8g6mocYxk}`b0DoGyZ+_GG?9d^aB)*LW{j=xAx9b}eW=S1y>B{=&o6y%xuKZ8f z66jQ19?RH`Hzxr_j@Xz`pMB2Wnm&QlH7r1tBY92P^LsD0Zl^ZbYs zG&Ya6P8pV#D~}EpTl;(*aq7h7y^0iY{*K6Z#ZObE^qK3y*(DDba|_=y1TL<)Vn^o0 z2k>vO`mWknB1g8HNBda94f21vgo>ACO>&%2n!Ulx)7P%)H_d1M?yjW>-tgvxk(K8{ zm5A>GDC>8i{ew><>h~O=jdp{aexh77_wVu#me(kYhF9(MfF39HuGK8(16!cc&aA!qc5Qxz;ACnt zlea#b()5g-C}~xVOnFJbE*s6e7f8h()4^)?)7todB2r%l5)wwwBE_8JGLPi7Wd!RS z5@#+3XVtd%%I_4`5v*gRWq&dE#jK-PP@tAn&SZDI%4osaO^HuoQ#h)xgOmpwNwb!M zj10v?!n3lkX)8`tPvujRN8R2CW$a2qf5CN=S3?crW*Z4kmdY02+RtGk9o~1_mgVe~ zS=QWTtKEJG7-d>ws`@KP7W3EMCq&W>6D#7$as@Yr;p}uNUjDVSs1an`U$`g&xxURg zP@d(dTA0=9G-JT#ay6%yuirhis&C87TOk5%|F;he-*94Ip|+t_XYK|dD60oyr30Tc zTr2z~+kf|{iB9aQ2X7Fbn2{110@(_-5rQ9Dg(c|f{$GcaVtdA(N|;KPD!O_w2eT0a@)=@G;I$LKmP=zMKwmS$>WG#i=`%{>kRz`APJ zxI4!GY{u7yzICO8mBm;b20J8xe%zSXYA9FDIA=AIbXW7> znH--&mUgm*{T!A48=3kO{FnAkoLTCoJ4~$Zvwkj(U0UjRKHpb2Yp-;MysK3H(cBi% zy!p-{3u`7lw^1Zju6<<0%$sSM_S{)0o8Pl=kN0eX^3|K?4X+?J4v>?FLVN2OH~-0- zbUss!UyI%b<|0lYZl;5uwgTMDM*t;<4q ziWRp(yh~flb#5x1qu>p##KN66$EEv)c={v76}Owag~Z+S)1R|Z0g*b(G!#{iXmwHf z+Y;xSbGJ3j6KmTY6}kkbn6xyuG&F0!2P*orW|4av?HEp8*Q$Lsw7}@8jHo_OY_Zzf zR(!FXX@i@r2q2Kgc|ok4DMaJmF#Jp%3CHlGB1F4#C@UTIQn~)4X+2D|NqAQ4!u5^U z>dr7(UW&z{Y|}7e4BIAKhb_EL#={zc3FR6 zR{pSa@sQy>SywToPj-ENIoDt(*w)yhzg!_Y^d-BIJL{+e*;e8gthtZvMWtxdYA(e} z6Gi#;7W&5!HIC@j)npxnb`vv+^wrGVo(`HNC=E^GR;n|UE@*g6~ zzfXt&v3vBbhSb&D>9=qM@X6jgkfSJPo1LS*7B_$ySjZC9=NV(9S5H~IB;MFP>0$W8 za>ec0_eFO;#Dk5$&xL=@Y)k~@wE5NX|Y$;7}_N7eGTX$!D517MWZsY%0*x}puhQ!-J(VgS)V^>6WgfNP>Kf-k{ zzv+)oeJ@(mpS?KSNT^ZfY@<(ZII3TEIUw?1ph)Ccqh!3CvEMzm)P<|>XD*U<*^KBN z-#CW25QY-8^D7eqEJI@`#|>#e@8#Sq)X+S;Y(UFO|@F5|k9 zg4exj$I5k7G2?!D?1cz%WdH3;{I^hmZ*%&)7Tg#}oYk6tt|rQD9u2KDWP*iUZF7IU zKAm-2-Cn+K(LX~Jc3O1Hc257NZU_X4X?Kqy0MxKpCZ+Fh`ub5zXGj7-G^k~BKKLGn9#7l%$8U9{!{*>DYsiBo60}sn&IdA&y44AK2!&}RKgda^h z{Npe8=zMROZ(c6c{nj~$IgG2MQq+C7ELjHD1yhqOlVl4kw$=V#S^l3rd5;d2!VW(U zhn4kwh}L$_vs{`!dOD-J^L4g3HcUk&M1m 
z3V<_$#&*{>ljCLGf9^I%VWiA;{Y<#_EIF)d%PrrPQ1fT`@aTO~i^AWg>c24;#`7&- z`{H>vuqW$twHwmr(Y*Ulg)jq+sg|qv;ysGW{xYuLCLFPPg80cvDNB}182J4kTQmt2 literal 0 HcmV?d00001 diff --git a/site2/doc-guides/assets/syntax-9.png b/site2/doc-guides/assets/syntax-9.png new file mode 100644 index 0000000000000000000000000000000000000000..6c38d48f8eb80cf17a43125b4bb3163a024c1500 GIT binary patch literal 151067 zcmZsD1z23k(l#z3K+ptFAOs6;3GVLh8iEY&?vkLv-C=Ndx8T8H26rDE2A7}xcC+{X zcXxQ^IYM`z>h9{Qx8JHW-xcJ<(O(e0fPsNQmy{4uf`Net!oa|Zp&&iqk*G>qdOpB9 zD2WTfl#decJzoTxXh?pQm4%^wK1YE;gvEw||D(wB9}FxZ4C4Qt!@x+v68(Ly1WWx_ z88{f25OWxWzshJmAOHCLc>X@u{_6;z1^b^Cv*3Q0h6iTB|2~Hk`=c5*7wmI;FsQZ? znhr29_>_NqVI`H`pTNKfz(|S+s<^@)q$7Lc_P*|4(RIf}rN4+&5yVAP5?8~TDa)L%)Y%1{_F-$f|oBIBWBg}7#Jfo={B6~VPt)EMtyk! z`@fPEQ^Ha;>$$PWA#J~yp{t-yma7a6|8L3vuf5fQ_pgd`hRzto@s$Q!E6Oc)T_Gr-Wj{HaDEDKZwWoluoIep!JRTpkAXHw=A#P-~c z&j3+Sqwc78xe0te#Pjn%=QL1Y24{RIPglIA0CH}?D?}o*KEb|IXhMLaiN17%G_jKwa>|)#jBZIVqvxsYV`!Yi_r3#+Oqpy^|6}T3 z!OevE>K4^zAcx)&`KfX!@D0aRBqbEc?0b5N82hbXMrv#wxY;xOOGV^h0Vc%BlpRKE z*>muM!Og%%)Uc|Smc=z)cz6Ni_|dul&}{(};;#rV(9mYSWmyU# z*OJn4skmhMYpq=*R-7M*`&7q#_!p@|$Y5Le3dbHI*fdBt;c0=7w8AOnOykT7pCw=a zi-bJ^0&l$(3PQMse&8xENx1bZaKAhIM@L}a*_yqgmM5FO$qw%$vBa{d40SZVMbQNC zXlM@N{7bXKvA(LuQ+8f2okX6K zfX{%84z|rZY9L7Haovv)Gw!GYL-OC{7kInGsL&j%7|f0J;(aa=s9(T7M60=r#F@k* zJ>l`6s*UjqcGnk_mHtL^O6YqcfO!TTNtv%sh8q1i%JPcf^$!RZ=jjmWq3DBwCUt%l zBYSpXV#*{hzuZO4od>vM)4iNj_jF|B>upm_IXPHavOSXuXZOZOk9F&Du`Q0UVzU&1RB8$yNs zUvu4xqT$qGz-7S1J6R&ccd8@=<3&4+zjh4mVVH_07J&s!c|yTEk$Ml zxw>6pSHeQz^ex-%`9jfAK+&|j6lO}>V=Aq+YVrTBtyPKl1~XVrI*=%5(#OewKRlUl zeaTqew)V3jtM}tcxA~BO<9AN(3@bubo+6pM9hqm9%7Jw{J%^K;A@Z8U1$}AWxp0Nh z@yft z!F*3P9O;kSMa7wznOxDqj;;QLKS8LIF<<^U+@7drOY#ttPcAKPupc1dTpAk#4H*Ce z2y>y41>SW0p4@6-fZWuMT$F;;S~7mNw}o(X`Iwn|mCNS@$%L|=DgCt{oN#CY^oG+p zce^($8FhKHIY&$^*0XZuJAm`%QEct6b9+lv5$BVS8T)r^y4+r$c-uDF_f=Yile5_S z05TnIN3Sh#6Tbq*8i!9fLggupEosm*9N-^XtzpaNo8yj>T{xFm@2s`#hgtSDa?yFO zzyQaMdu}F|`O+Rbr6k$DT{y0XuwnDd{js*{QYv)is2ubF{Gz-Mr%WXI%7YUkwL#& zYniZpho!<9|6-FN?qDiW#nTM9`Lw2Ji{SpLOb<1M0>UzX{E}pasHLUbP__sOpxo(* zC+lVYrgBCo37&Rp&KfDFfEi)(X1s%Jh#{LjNP>{{3ggy$w?WTs87c_MF5cOR-l076TMnYS->7cF2YMSddD_K7Kv zsjT|Kd9$P?*hQF%#-LaCeZq{T^6^TrzPta>+}baDaoKHJ=+hr&)`v(|j6eEt4VTjN z1<9)20U*#jEw8Q;1Nm;2(+$Vc=URjLp3LHQQFdiAuy<^7QNN%B{Hn+)WwJ6CdFrS2 zRftynz(4U5H3daaS~D6V9xO-^;;3i!dB)8dQUpyKRyBzJ0Dc)Xtrg8487z!m06rF5 z!O+W!>x^y_KVQz9V6&XZi*vOhUb;Qc<(sh@ToNO4J^W&aSg$#eAe|{SEZGR1C7Njm z+Az8Y>zR0&Kl;5>4hix;Y$~|ypNDW)- z%5saBW98RkUrXWr|Nehfg34e99t#{i9BNyH8!=!ZTP8tOYNEHw4)m~Dlv>e$;)Yr* zgoU1YHgFX^8R%N@Wb?IEuTuafXropZ?$*4l07dERE-PaSSegSWMIzd-g40|}g`O78lJBE0b0RV@S zg53Zz>LH*(U23lrlfkfG`VPVErLK0SfKBG5(T^CsVT**>B8m^vksF~2*C30uPVt&!tEb1J>ExU!dRIylWy?91OM!*3h~*IG+F ze~#b_E?zWU3};cos;Gepz4x?TOW{_%eVU+ad<@Q?mg$c@7Y%yd$4$D@B(*$d8VZE} zGy^Z$$s{#Rx`$jXYcBog>ogs(LK7*fD%Ga1wRI_I{I?b*2fZmHN?+2>S7u0^pHr8Ly%1#s?9r=h zN%iif4&vd78nOdoi(SANp=OKWV*td_65amN0<#e*+PSvW#nV&ViPn6=kYw}kS&Tu8 zuPHFSin^-Hel1S6d3@qk*OYc!!ixWTU@~lv9|SJzw|?M_Q*>jRgO@oj-6E3Ot-wv90IOcEGV>ctWX?hME{H3HD%MmO-8Iy|E^J@*_ z+vv!baC77)G4v~ZnM#QJ3^BG33Q;a=A>@W8YYc88;~kj)Dbdg zY?VcQjefR2JnCUnpgYEUG?Q4FhilPli%PfUWVS%xLk$_0*KCqi*ix&LqNM1EOL@HD zbTB(sO-%c%t6hI|wW|YZ68^UbiWh@{$n?Zun+>0Y(X`6(hH`Jc)N90%`@G^QC<;-Z zs)qzC%yYAiokIPAglnI zJBU0@8hvb4+pVHVn^;_n83SJFeH7?&S7hW-qpUQCL2qm?6g9V7^~r9TC3J5*xqCma z3a=g&ahT4eWPctflHA4eqiEfpz(jwBNYHSC!eh(|wbwM>W@IR^(Sbq5YmUR7+GHd} z&~R`N7ksYrqGIL8nRk+W5MAO_3I=-N+@xbD#EUD4ayl!VKzX`-b5OWkHDSiKeqQR& z#pbs!-g(LgR5KNy?ZG8O$b<08VpZ&W$&`=@?M46A;Q(<1)f&Q~8^z(e#UC>&h z!X;;o290zc(8t<Ap|V)H|pbb9ynvTk43HG<^gj 
zfJv|3P@=aimvU<8-EQqDk9IfVdgyQ#a~?|9$XphXDcwSaHQ&i)1AT0O>#C)m8Pj4j zD+XsfxKkw+G^yUu-ZZ2iHRt@a^N4b4ZaKPSGB2^PW!oNZC@ zSQphfHVGYywtmncnl||n>?K2D^V+yfWc_bHBEbH^_jRv&l~AKRji@|@7OChOP3sZc zqAcaYlt{Bbql;kVCRi=z2&ybbakLlNeaT zaH1roSU76TsI=6t56nYrD|m2MrKb8*JOz5x0hMJ7H_bfd-}2JRby73rB#rjpsJ!Y@ zwg_{vrl6(CR?6s;zIUJRg~<2pv!6sq?0Kamt5ut~DREKEJF`tW{)|uh6>3CI*Epy0 z%ic0mep%oA4*$hw%=sy6@suv&N+Af2_${t`u@@;$BimV`|8hIp0jVxpLxT zgstKrh#q&L43mfW5IY*7J*dqhl3v(pZGQ)S47Q zgt?3qEa?93uOVOc#gS-RbFR~4eE=H9-LZ2E4gJcBvC=k&)CIUbO%aD4JMaX}Y|)#J z?vFtvVqGJ4@TJHWGeb2s>9@S(&%HGyv}bV`(mqn^JDht%JNgDvryZ9^)b%9XR=kDt z2|QG}xsO|;>vcGjoWtAV@xX6FH`)|>G+`N*Yf?G%ZD~DTF3TWYar>M`#f^F-HgEJ4 z_0=^bd)YYM3SflSq}}MM*e%5+QYHLHwcjY}Uy+>mt{7GaD`EESnbLFV zL^TyTN|dH2^HZ+QaVi_PUPD<;;WOb$P6iA3@=O$5elg{wqD^Q!C1uten7mPUG5r`( z-2=UMiQT7)4^i7m)lmM&V2+HXh%@h}?;uC>Auk#=DY~kuJVIe{g$vXrnYlpLJ7ti1 zrqb;$KqGlHAISeC8*MeO%yAsBFBe|W`jimUB~?jBGxw3=ua12>d7dcVlVACw|D`(R z{WK~uO(A%V)T*W`cXBFeRUzFLpfhud+eVzhcpc$Gmts~(>oo?YK5vY(+?##?PmQWQ z8(0xCu`}nd$x%gHtt)QPZuI)_4S&j+&r=XMP>3KrKc!(=c4El`vOj>A*{ST~p{xR_ z|C$nsFdQH=zc5APGtp3c{vCOCP!XevoSY_<7Z#jyomMVHLZRJ|y-nvbHG0~hyllf? z)ZUqEvp-@?(=1NLILB4dE{+w&E6`M`(6dmHdKdD;eppwf(H*fjeNjA4iP6aZ|2X$g zpAdBN^Yga>(mh9%QDf0S{^l^9V3PGJFb~Fnu+H#%N=kF)f1_GE4aMG4Pz1CU)^s>N zHoARAjzsykksFXOA)X$Y%MkD>v8wfjXy{*B|33&4IY}Uy%fpu_Vf$~;_BXP5RwLhM z-rt_=UxNvhe*mFg#)v=RZD+IyjCLt*`F)f<-S^6mH)E8e14O>E%De|*JId2CUj=orSe5=VOOtA5U247 z%$=D&&ef;`ld~u&DOGHS3T|covv=*}d$!y|WpDqoAz8$5n#xp)x9I9;7Yj`JzRe$} zY?n@9;}o>h#T2pfdHKKVpnk6>{)`9rdzJpYiC+AB$qv2yVY`QiJJ(8S$W^XNLtl`j z?(BIyMcw`+P{H>3KQR`bXZho~|3>=oaOfX=gv1Q};3c2P&279k`Gu8!4?YU)T))cE z-?J)*64+oNO+#V*wZHj5lQYl&7hkCj%=e11^&JzA;$MS%hVrfVVxVQ4#pur)g>U_) zPooqxgksJoOMiaIaAEGfb*dDh7n)>s`1Hh6EJI`rL|rN$LX6DZ;L_DL$(|wl0^o^Ux@vja zUv>WA{_YD@-~LA%yHE{Ul2qxHifj*Q$rA(k(|Lj#`hKk8dUX$W(Ix2oDaaoRkM%07 zXc6W2fWy6_o~w%INS}%gt+#&XBIKEjOP4Xr1!SWN(LcSVEw}Vf@qK4=m0J4rvOIyyxlkX`-P1$E&OVdjq>*kj zR~@vzzD`X;gD{pKg6Ajf!Nl!+kmc_WXEC1D<&BB35~5n7RwX!bj=@Gw?oY^WQM&?F zb3*J*#*;5N zeWase{(F40bl{Ab(ftdjs%#V*GLDPo$M}@Z2Jl$T=P#b9IUA@Yir`c62(ZM;HPrlb zJn^!Nj|M$r6O}e1!tm|R44>opqMN6IR-C#UwpNCm?3w2BvoCi|HYjBx4U-HF#P?g67>qH zqJ|+_8v6XDs6}x=q!lnuslxdaW4o!o#p8;zGL7m@sbfJEST}co6+qeQ>?K8VSE;V} zUkhxA>Mj1%+wg>sO9JJA)Fy}&%koRms4(EK+tkG%2HIur4-$j$iScP~WFsI{ih|WO zWoWcDrb92VM>A+Br08nqg^SDa=+wsZBx84T!WeR{R(vwMyWu?cE$3@#W=hnLe-LIK z9Vm*Eh$iZ})3jYJdvMXLGBYzrTY$?+nDkkSIbk)R51RLMlaRtfuZ~Qe>+6lEf~6K0 z^}G~#C7>Df#eAolQf{U1B6F3ARdxI{36VHD@K@fzXn%aZwvnAF)HJ`2%!DyX7#%o> ztA|GT{dXk8R%W%bF3Bi#kNfyRnA>F=G3az^Oxx@>d`r-T;Q$<*(Q);X3n;3apO+v; z=(y6eazn4IpwR9=z}iMjzvwt(S9=)+F&_>knu*h*!b64_D=OZkJCFx>%Vbvhdw*%~ z@}Gj>mSTA5*)apz&r<{6!X^+?We0*|5Ye9JMQtuuJPM;YPH1{brDwqd}So_Hky-$M7Y zxok(SAEoK+$Oq82)jNELk>$yR)a+gGF*Fq8BpkQ>wac;-P8nobk0!slV}IaEjlANW zRlOxZAYWw|>;>g5c3W7&UCy&F9ckA4+pm67LLsgX-Rl^3p4jT)druq?RG@zoRNi=W z?o004332i?Zhj0l-UXADlMh|;u$3)WfceZ%;yoLa`6F|((#?z#dDlSlc~-R#d}b*Y-5FndIjR8 zk^mP!?|2n%Dv)M2UiX)_f0k)tcA;0ekxglr@VReF*NMLJxP5a2L$BRD6?b=5!j7BF z6-^#{WSs0#D8Ev~hY_FkrFoqUIPlYBZ?@bfT9Huarp01&=i*|xBgyJI>{FZE1qxZ8Z4T<$ zC+gUvQoesb{&^=YcT=9{RKzK=-sb_Tu8a?ns{-Oe_L)XrtBL!pCF*-K-vya>>3c!( zW=b_Cg~iKa$Lqs}mFWv>UOAa7=bY8O zv)kQGF7vs5(jFUM=?Hi)%eiXGY)?_RA^B7e*%{;y{iGhH`djA_oXZLiXL+ErOi}_Rpz;&^!B2niSb(= zBP5vuvh9yS79E=pON=;YkxlFCMSW#19#ligziGU8t4W=P`ALz)>>(K*kDT5~hRAq# z#MJA)=dbnd(}VSolZ|GQ;%lE0?r2L1JkSbt6V=TRQ(iLqHZ#OGjx7?9kr@s-?wx*! z@VV3`&v?K~BovQ(BPK6<`+eTgYbH*x(j%xk;^B}EZFJTY7aERWThcL8PO$#LSzN1l|K+6OBv;qK#ow+FnAGB)03 zGON<0BMB860ibdyHP~3wjo0DCLuHd8J8AQiJ!S8>e+Nj(=qz9nC{Cc!!4WIBf5TE? 
zx<&d^bi$^DbLxbff8+3e$T}Mx!di65TrsJ)ZHBqJR5^G~JGbXl4aPd~ygpfz!U|Ag zwO;&0$YJ^B0SeY<1E|f_a#3?DByl+(NIctNes=T8KqO33nJAvKZ<=-WQqZF%Z5r%n zJ*4VAOL!$5{PNU&_i&+sQP*ikRZ8*aaM<}6zkYhu8)kotnPL-ANXV`C!|lXgL8crL zhh^4<Y#ngWPv1nV%Igj)@I9L_pw(67`&&CR_Kx7_4FH3mcNk?i~OlkU*S2)l_R zf)qBWIm2;M(xUAxt^jqP*2KfvFyr2{a_6#+@x*4h@6-6_JUll~Ya#{qGw)w@Kdb{6 zzo%|d*Y~O;q-xX@_VnaQci%5^oHs8hvYWp{)eNt(TBw)8YJ0d`z>AIIe@seY2YfA) zinx1LT{Wqzr^hS){mFbZujgxol3I2HlJ}+K6ylM2uk}3@?J~HWidBEw5nmw!wN{)L ztfba{4#S4;jBu`86vGcumR|{D@`~&(F!vMZ;(9J}0>I^})yJ*pkxH9PWp`c3WO-L(fYq9ct?>u)-gkV-yj+R=RTJ;Cv$OEDG-Bu)vS|RB(aVf>(-UD9{u+|j@ycEYqotcJxN?{$+RK0kyNd~FDGQ8O9h*K zviEBF^be*5f~Z%;x866Om*S?PQRNwx&Zu`77umi>7U)_R$ZZrvJUuRz{UlAlj%79- zC8Xf5n9cguf?j-+We0`oyB1TkhZm;cs5ZR=djluS3875|>3Me%nyl(X5H-`2YtoB6 zW~x9toOX8dB(mN}Rz1%>7YgP>yNv^BAi)mwN#ZoJ=<3@kPK}*?x=Ikn(?Y8oYsN=( zZ&xTy7lwxl;gR7$CHWdlrg064awmR(iiU=UHXV=0 zL#^|CC(}nS{^s%S*y{^ZpI`veP2Qcf??<{rKFw0glG_HGD^HiVo%s6k91hzu%A{MT zRY>)dgl{@94nC_5F1ka?xCVLFuS!|q=n{~$U$hlZ9_IKQn9R}1XJ!%8L7$&Fms4;% z%!t%FNoT))%(b>`N@pxud`{EDsHmcUlkb<@AKEBojHr}oRxmq2c1Pw#=(_m&k)gW)Jpw?d7^AS zq|dvkbw-cV8GlA5B0Ih!?iKI*DzTig8F;<8wUh-^Zn%IPdhC$n?}e2-`6Ili@nfqw;CKnZ4p<-JN~a)S z)O6$!)F=Z($h>F8W5_gA=Z`x)ZryIS5&41Dff6;@M!+Mj8a0U>Eodmo#Ogf@k96?c#&Yv+z() zc*5@x#5K5?d0%T6T%$dk=_>F;j)_6`CYhFh4Yh`}(<+%cUhjpO#<(!5TscnZ+G@Zt z%*UHuZXnjNysxMV@-Gfw_DucG!erD+TXF-dBBwMhQv1p*E_6Ee^1R1~JCE-h%f2=A z(=*LZfUQq26SM=ht(!zOgA*Bai@zcm&;cRkEu$r^4}niolb{kLg-e%}6MMvkOulwg zrQnF1IFAprVS`P7;}Bb4l-WlvvMe?F5zD#c;VB|`I^Y6?QX_9Ts@_UH*1A21F2c)_ zv}B2ABo6#Smw)%u{%nz@JEcXF*TreA!KwQ#AV~Z))Jc52=gJ)zF{i!fuEINC#Tfzu z9_P#C`tohl8(X)Om0)XEb=Q=Ctof8VpZ-hP#L5s|9&NO@d76onfHHr&{`-y}p^Yyzh9m$1;GdU@2kgbBG& zCND5yZ45&GuYE~;kLdj@-ne?bH3DbKgBdzQ$Yy${C2@CM6^teIgT3`~9$+!FZ`U91Ce_PjTzZGjE^z7> zVnW?I8-NGFW0ocKyblt%O3C1A`7($-d6mY1xrGG{#!l0tDy{n2{DnH|`mMpj>(Yqw zZ#mY-1&`Ld>}l?n3bn5pbavU_%iPq-30@6dIOKr0_Y)9bFo(Th$k^}^D1C=~4eE6)%E z26k9~7k)e?^TYPa)RMeKZ_KjS6&ayB6nS%-JqDfd{REIcdAib@h45hG=Ds(g%-Zz)D7L(M9PX6>WY@0b4 zjy(~&h3FNTI_>6E5u{gjMM!$AA`QceX3B@@sg*-AZN0QK9cNLAoC5HJn z@cBtp>rxA^8tzYc#4W)FfBd(AbVbX$i`RtHa!Nzo#00tyU3R#nTMw(mqj!1ad3is_ zLBA}YNgid4OWduPuqX2w#~z@}{ycjKamv(z zW@4psyw7l(oVwm#mC`MwhF6w<+(`CA+VM5oZP$FeE@G)7N+lCXDU7MmC42ITB>?jg z_odbMn&EZg7iw9)P65vZfF4BIFneY+?LPh-+|J)5V6JeMb)X3kVj(vJ6tGvce)?!} z-5C?Xo~fWxCI2xFC=i<4!9^vTk{74+iOs)6Y>Briv=+5%d9*$t9@3s zw@)vHCcFl#ed4K|hu8b~wo{k}j>ks}p0pH8NbyVQ79GWMxh^+*I!FurY~2lcRqdz| z=Z*4Nnb8}_VPE7d37>=$u==az$6sd&39>M^;Vp-h3F>2hVktKBoq;~HViY4UmHCBH zlQ2SkX)3uovvDg7rwbyZ8WT`#`;-Z?$-$@zN_sA3DPZFg3H=Khgt(a%oU%aYqeV^C zqIC1mMPx64{5$&8XlV4M6`-s(@*YEcxHQFg;e_m=<9x&sh8B!NbEd=sNBcl1%}(HU z+!Ln!#?I|Xr*atuA!h;p@T4dWd$z=1^n74v7nQ*c!0Uv&rh_+$AHp(UdG9VINxv{n zVj*d7A19^%$&^X^d-uRVV=A!Ktz6zg9RniJtKC2D>sRJ=_QTc^+&sOt8$2ZS=`tHp zYWrwnDb8reNWP^=_Jh(@M(m6lU3npL;o&Lc_>h;fm$=U1=|IyraB;H{Y0hjj=BfXT znQ92M!}-vnWXgsAJ~oBJJ6epNX~C3sxl|KCO?!*g4PwDO<6x()q*@IO>{p#X?eDNE zV=-%0-qD(_b1+`+`w)&4o+Kul+99!%@q{J%UOq2{o~*z;(+zi`gz-p1%ntP`J@XWs z6xhfWHBnu+3^`_KZDfzPYzYh2b*Pn3J#Kf7Tqo@Z8B>7=3;f@rddFxO#X zo>ykbB{2z>2@5OAIVrXk(X0qPW366UT9?ymk`7n1Y=4H46P z{skgq9?I-30_KrI<-xK+G-f`s8nZ?>lbA;L#k}Tb2zJ({KjN+5qk)gAkJUa`wn{gE zF%GYEr>3G!q(dRGp2|&@;xUz^Q*UEKn^zKJe^R znXJnpmP*yaUqJSwJ7RY=7HhoOPMyJWOA3OC8aat8>2TN$s0Y#t4*N8AOmH69g}wf$ zN+cXf=@Dh$%v^VlarE@WsctFQwup{U#PmH0@(;?obwO+L56=bl#|J zQ%Ii@AO3~5e;618?7gdL(dw(eD{>l@{GV7>*8HnarMSv;GiJ-!`AG6RK^r6$2CPgV znTV~uc34oA5t)+S>*6x&!kvjzuYrLL_FP5EgNvSA+{e2fC@vY1BrcZR3Zt(4`IAl$ zMvyCi*EO|TAn#$v$xrh*Y=Yq-67ZmQok*3E`!WqFAg|8n>Zu~c)JGvP6=ZUljW)U| zyE1nsP2FHKX$h-sg+_5sgS;@%7&<+#O3O^`p|2&TIOk~isi}J&g#C!{V#S9wZk|0@ 
z=$F;kgY4uL`hxXJqtNistWA+9#H;%-t*Jd;t)fRFMWii!yz!HomLG~g@xRP)uFMlL zfm3)8X9*8#V&L_t0Pm3PEYaZs7^S{Rgx~7?XN7uhf80Ma9VNV^r>|1e(n{mV5BgC} zK%}K$9lEm1y$EXK_!_X^dCe%7G!$05OJL0}mW-NlH&Jc6lKo|zvdr^y=oRAXgb5&8 z9d}19^6ap^7{*_sS}kZxI-Ii+?jRKjK}w;X7w`{VptlNzukIsI?>vnxm^}I@HK?I+ zs=8!)FS8gv=?M9rz07}Mt|0P?=@9<~;#nsdRKTy-U2sGL^~~q8-I(Wk7iLF9VU^j5 z=piJ=zs#@my^6_8?Uk&Tz+fXS_KzM4l~vHtI@u+T$j3K(p48WsM}{6mOq~VKQH!A> zr7W7x=F*4V3ogN%UVOS|ta~h^DB7AcX9gR_4`u?v9%Mo<$Vay(H>iI3#7H=m4 zI)!t+ZAJLjwJdvkfN3IX4GRg-m5axBMV`{}7fI>|MqYy&bX+8&foSK7NS;g@EAGOH0}T#YrA0qw_Ec7US&FtBppGC9b1DsawUC(#Ht^tu~Z2|7V3zWWKlbvX*s6UEqxl{IzD zJ#gb!IPZAO{szj}+X^{%EpfNx?pu`bSn*EY=egmc)MJTz;jJeE8QV%6xL2x&GMr-- z7q{5WPTTBg#=|c--hPjJI@XWlxyLVdy9!X9Q>m-qK7VjAUno;oKHCB-l%bBP-aSeo zm+-^>vh6Pah z&?io5+DcstK7a0R;^xhmijhY%UsNbY2}p|OcCL&_z5)X~>)Tv>)Z2CB_h|G|nzzym}%p*^B1-grLcGeog9z$0Ib4 z)W;ztsVM+6{1&&Yi+j4zTu!g#LZ$15vYu~#EqNE&!g8AjcSWdv_m|eQmsUhRL`}x0 zEf$JWH$mnyJLM`*v2LRf?w9Y+@jYeTe4fPC4ko$t*9hL;XH2yVi9xjQEfJ=wEcR3%iZsoR z8bdqYlny=4LG)&P(cuob^Lm46D=pR^d%Y~7GS`E-Avsu9zJNhCvDnO=Evpftdk}IM811C_? z@k-e;ALD5%+(pkM6SlQB@bee%Pg~!VWu(s?=30Z)-Mj1;1|N~&edy)iy4Nxp&;|M*8T)Ovo$$py!BwH|E+-OI{8=-W%# z6uwFX17@#{2u>7njm%XQBzzv!XHMptYBltG{z>hvO5=V93#?k-dNVYqmWFkBGBWQ? z6#qs|6m>XQmnzMU)MWJjoBdZWF*{Rt?z)&P>KrsQ{}zLXx$RTaKQ&epGoM-Q=u z$<>CY3Z+k3BESeClvP*GC2*Y&G$VuCZN_dDmgPi|MN)Gcb*cKDP48M54M+@V`f)G9W zqb{yxsp$p)^Y8k-ocg(+w8Nd;8NTi)CAc$KADW`b1f114EA`U_(42EzyO(GpRpw~I zy&>dT=nO$zVa%UP)+rA>U!Q@h^fvPQHMU&d5Lc=kb6PMBE9r38ZwDFj8)|1dz6$_9 z=3=&b|8l6{YSg1PK51L+Q+lEC!s^~ODy~|Y<6!3~QeK6z&d~#uD;{}lQlCJ_0lHfx ze{t2*lGAn0~wHh(WSFEdl}&sx%;Ztn?8_&m9~p;*XS|hoOj>8b4W{P zU1-$+i@Ie2Qw=k6)frjO9+&sR7PI*gX$*uW&J;@Fy=(gdp0WIoP*lV-Js&qo@fi6E zMh<1#$CQgeLFC#G^M&|^{0>=@6b<~ayA3vfV4u>u8$7jPr^0skU60&l`cTVZiePQA zB>5IYnmAwqoXbu^G?`+YI<*gCH=k-cSdB^-jejlWe3)0)&z3!71*zU@{tTY*RpSWH zZ9N_W$|YKAh|>|hpX{rV1GZK0$d+gU$GH-8m+CRB_{NM!MqQzw6TauO>wr^6WzAHz z*y1gh%{cTy;HKJ}`}W$~LapMq8IpawR*v{e9P&UTh8lm8qeC|crBnM|Q@@#N816jF zh3Am#2BLKfQa7 zyXYlQn}I7r?M(0RIqCXv>Ss!1gqaCpN1B0&T!_rq=2q+XWk(8`70Sk1b8OL*j2nq0 zr4oqSMgSqiO%-94?M=q1 zowwDxb?)uA98*|I&U)X~4ZYL4e4SaS61$R2r3|_|F03m3Tu)QBM)Fz>wA0r1+S{0B zWM=2<)Vy58Oj|8bn$_vJypTu`vE%W8a%a;$#VT_t-TD3)NoxtnQMPrgyFKk<1{rv_ zD2ASYvA#7vREzS+cglV{R>SHad+QX^iAj8#j-Y}13zYdPt+5wYLi#RJk)e( zUyZ1@n$ntU-rKIbOhwMjO?xf&oP&JbSsn9qRZ;s8Oy?i@W~vZeE1Mg$)I#0*l|FQD zKD`@~M-#T$2U~yzYAWsoZ0Q^%X4%fz&)+cFJWZA2iIVgl{;Y(%0~=|%2x-rIKON#u z;4x=%SvMy0S3>Rwlu%9p1Tx_2SC+g9VkA=*#LnQEFsMN&0fQC-7dxz0SPf zyOjY&)mY}}G~10wxJ%_YlvOpajrgPHR!(Z=vMfdbTrDkm zSp)JjZBYoqi;TU$ZY4-iGWDvsaC$5?XstVebyA=bu^oq>j1&Lz-8*r|`fWM&&Qx-y@Aur$tQQVd*@sI> zt(H~&WF+MY@0=9)m2tOfP!UB%8txu`sD^`wY?-jD(mOOP%&lG?tK$d5=G-g9?T|J; z!d9~M+mm)5q+=)3O4APkSJfMov7_+&{RP!2vRsEh7`xFYRu!8Rf4(T= z)H5H=U3dBYTL7h&xTXhgho4!|qc05MBc6i*FPF9YS3~qz8K16t`$FBL`_5GA1(8wr zosPW`v?IoY|4ARL?FUpNMKQITdW?3$$8@a*HVcvCN+*o&iWOe zUozr>Wodn(&1+(7p+2h3N089l39+t@C)dGp$ebk9c%h8q*98e?q@p=@rsuef@R##W z)^I#@qwrFKsn$u(^p$>rrO*0fXHpFrmSi3^gu;Eu$Hkj5Z7sNvI$iB#qvlMxB(5z} zHS6yxNK+#_ncd+O;r{tPniiKL>J7523E)kL{;pewQ3=xuj-uFO6k6MtxuMZ3`D(># z<5VfkpGzgTgq5ohvtHIudB_WKD)m1kfZ8QLZY$#qb3(pAAx*RpBsY22o8=6_q0TvKJE%4 z9_7wizFV&NqYL+aK*;fC`S*>s{b{P}BYOGM$H&iQWKNa@j*)~tK(yO^+8Jh-N8(;0 z%l|{|FX6`TvYrGn1NjP_gm=HIwPCrxzmVMZRO7|5D+^dh5xvRMlBwc_=ZnXCe2xbU z6-({^&eO9Ucr|-B3K&{Xc07GmGbWj+I|6_UbLBdQd`)3j+R(bx0#sp(Iq@aXv$gg` zfxjdiQP&V^4!_N~xp`Ulq4IJ1`E9|N;K(f7`Y(9aTJpURaFSaqV|u?u6C-8u26OAN zORc>^oc;M5WsK-q9FG(C4qIBCwjny+y__ywx40`1r@4H4H-H}W+FNLh@ z+0rd|FLR_`K;0A0&I01Nk7j1#JznNNT4ub1^)?@C-&3`bsZe$U`f>6z!ak>K%>=?! 
zASAy1(}+KVGU6cwmV)+t&_Z9P8(){;3!#fDs!ED%>Y=A&GFrWwnC{d%wh)vMCg%Qh zvkV9du@v6%auKdp`d41U7UvUBLFdBmNGlG^rXSbPhGcX^B$FAt^ADkBvl3gPb=>oD z6VXf=9GYpVqK4=xyYHPGscM8NS~*RFdR~grh>{+1o3O?dkP-C;{Z!*tW>a|oSb9E_ zGVqI$KeyIPXl`o7bVsx*&eJ@Zg{-hO){#m^c&QQNecM}IaL&>TQR)aIp(MDTXX1Bx2Leq*3DfgzLwBTzU=sRqflG3aWEH&!oS>#8P55}D`j zZf&3Jk15GshDVLNJrOHlTVYp=&#?a@M*>@p%KpnS??p24CI(TgzcKBh0U{m$sNMs*jF zEa{N$5sE_F+C_23H5NAz1Qq@|6nOo#u~nM9a4AFMrn~nX4iO`m-BG8MG#zWCdrm|7 zX#1{T+#+qg*w|R`$ct;Te5))oc7NSK>t@`)eLv->G*ubbcxYu7lYjWT9f=14S_YJevaz46n%Ye1@qYo+5=e$M7a{Uo%QeTZEkvA%Rb+V! zL)pfownHf_gX;9PN}mO|N|f+!t<{rK#i{3FC9Q6hg$DZ^-l7Gid?AuszK4?6t*dGo zY##W((igwQ?7M|cJ_BE3KO^=E%EG7ci$TmoDmp%0mU3$T~**0XSbNH_V)|wz7TnO z>5rt2Xgg(5V!E}Dpluob9_kp|WfkqmBB{-IRtK##^SbEEsp*>QLg*O5oYHxOyr%J1 z#oYD4-)EQ-taC3JUQ2bW`3mg+2Ut!kCs zVQ%VrX$?Y19Qiw!fFOKs^6C6A?IDS>(yBW*;3@+FH<$xE>_t6~mmH z2$+}3meBf>#BK3OdOJKbXSnQ&T0NE=Iyim)cg1~?T7PpFWur5m`iD@s5hmZ(Zj85U zO!QKUKXChp(+E6?A~yZ$Bl0f#WLB5bc`YFin`hKAj&Y2hQj3FaDr+x`8dhj-C8ha! zsv-n#(N!J$Rm!Sv9d+CvO9-pwiQJTOQ>q2&DKwr~)dPpMI=*Q-F)}~dgSq*Qkvd`s z_be+YKHYbkf+6mLHg7vZaIf`^M0nt<_8;>eVMKjCvkd?VFoG?ZmdCB)(QHr=`K|0#-yaQtgrUpIv#nIR4uKXOD$78Z1^J0v{t&^TdGG~Poqk6O}?b5w_pF{ zMe72nB&k}{S-JG3TC?dC_P{v^n7ebK_FF7EeAqFVU!HgL<<)kwOjfYSa}~ks0>Kc_ z*Ge=|ki(`1emlkG-<6#qEgCA2&`Y%n>&)mM3>MS!NhY4ArE(-OZ{$Oy4Qg_a$sW$F zmgt{2azA&N$Zke!n=D%pD>!y#6xU+;SEED}b_0@}Zh5aqp*%@hu`llE{+D-O4mIY; z$Z~dwx<{&@DQxK;RQoX&>$uk$H*2mln`aah%PJfG(S>>R>ybJ%)bq~9Z;t|x6+#4aws;L-o-3*qdb%wXO?U@ zU|5^&6M7VnztBqel~g^^Zf;XHT5{I18hW(Nt6dTcHM)2!#T`2n#vRw$#d>bKl7Mui z9&+1tH6;7NVp6nJG;(~ncV0+IF=h8&brt$q?A6*i;CydynD*O6kJCpn`zQ2$`N6W& zBA#PUH`0h~&3bJSAsT{pb(B5*33rI6sZV)wT$<*EJ`i~cQvI$jFTFEUct|V!kHc?S zis3F+)0VA;Z#4{N>I~BRtG5pjVql@{dB-5<*jk6F_n$9$O6sQZd3|Jl*geUm+L1#|5&65th7ljzUDpFSSC}d9U9@Q# z08(n4gJPUtl1&1bz3+pwqgCa%=-Yxx0($Q4pOE zZvVljB3hW4(Q#>#GIN`27^}x0E~X9hv`*#u9H$&#tJxzMTOVIua?QH#F0Uyff#+vn z-WP_=q}%1^HOqO(`YT0>*g-D+OoC38j7VyU>O8H3xxZhupoGOiy|XXh0CA^MyB##- z3O`)@wqc+k1c9bOs$WCVwX0cvbw$6rWMVzVn(x3K#W`_$$P?QyF^pH#zhdB7y+83y z?2EHY2azAMxtI;L+Su;jdUcI$OC0Uey)y?Hsww?m#Kz*RKqaj+FQY0Ro%lli$vWYYG4RvM- zjFnaV$dJf3ZQDLWI9oen=b+r_b9&AbSLqba*QFX#2@^!Cg)oET;!3q^`nH3))0vxz zoX+l2Rj_T{!a}YH4z#fp{q>{0#o}-2dm3-mJrMJ;4ZVR zr`h}kRH?i?&$8T$zESRZe{=;u9M&}{$-UPx{1oZDpc<$n)b6!3T&Yx{+Vtd{Ul-ag zE0X*XUZisoUD%4#Lc1<^{%v% zc>&dJ_tCZKez{v(M*g_!?Szn<`&%Z}4_X^)UpIurt8j zlx0H|-N;)+?Y;-Tf6*K#v^Wl#0t5;@Z}D{5Nhe=I@);>VuIRo#Q-HN6v91<6`=CGg z*uC`nqJ`3FHHXSn=>uQ^@ncf+qZ<)Fi~=0^i?x}h<&bAbD+|vqG;A#q1&eU4WlTu8 z{4Gh*f($xtv7WD@c@2)(2D2{|P>X>8))lD5X6=qk>$hs7FV{WaU1o7HG4rP!woVqS z!#Ny|3YZ2&L_}~)E)LAg^%WFiY=)`pvA7&YVkLz ze29K5WaS=pR({?53-Ioqz8b(Kiq+4l;xke=J74Pg#&X!Xo2#-eP!v~>61R!`<-^DW z%}m>QQy4fOcm9L}!MRA<7Ybp7#eN64amACxDl6c7`^>M8%iHI^9tD|&9*@kCf6p#w zKhxAY-v6$ymHZisF-=sbF7Sy3z@ik#7b^8Uh6nu0=3kY_~s`U?WW5@nVH(H z%-gfH9{orE&%Ub^^oAo?L;LJ6K$~Yl<2!q)fDubX?Lgb4Csp#q#&T{2>TCsmKU%EL zPwL9cA8)%FqU=G+>=`1zguqG#NbsSo`C2pOYMM5%0{~Dbm1wn?%7J}7_k1T-3Q~CX ze1ByHCgtH6AzVOn0g31Na_I-ELJ=!ftNjXcR$?gmCI9~RIva>CRcm?Kay>>2x=_%k zqud%Wh}RS@(LtvC!36N|dcA=Un~{By1Xk$sN?*gSHv}c? 
z74$QtzR*IGL-AtlRRaHSbXaBs6nS4#sg$235}#E-SA(R6+(8{;e&ks%8B!%1$nC!} zFsRkYXq5N|*mVQU`e_MZ?mO}%lS>o}*r?nKj5a#FA)y%#F!S;Oti4d{{$m+?p7a$Q zj3VMaWf}XFs>y9peG3cG$Ph#2Rry)9Sy`iSsiNgf0rF)iMm3E@sL;C#x#IoCU%g3Q zN)?LH(3cUIr#{{yQ#HmsmTibJj*X=&af(~+36qC6fwSsyo@vpW0piHd?Z#hM)E^M* z&E_M@+TUCqBx^o@dDXe=U3FS7jc&3(%qlys?8g@eQ27S-Yw*);*R5wHINml-6+N$W z8zG-RZp)&r4e51O>r&VI)U>gRPn7)sk#&=;@{UoxG}YK6a9PcqZt z*@9;FoMa5DVQ#cTW;8L5&F>pwth(TWq`=bAxgEy(0X<7bZ|7T zGG>#NfZaDf>Ju0*Bke=UDc!`b`*FRJ zEt^DJ{r3ZVGS10$X1kU>+%|?whMucS_%Y_Y@qs~@J9OcjG1PQaR6N5j#q)9@i2lA6 zN~?f>v=7aP{&!bRVQpvd&urrxk`j-K{RtQbo8z6Hw9jd9dbguG$Jn{L?B6rM3v2_Xf0b8bAkZu!?t27PF$BCT?< zhaI3^kUOl9Qipi@MA^*U!I2&vttWlwhF>5#g2ne*(R0`l;v=Tt%SVdMy5^iM2BDAl zkQ*YV-s)N__32SX6+_-*TrKheU%mUUrsoMRl$FZqw+A5NDE$iB-uR%3r^3&4V-@tq zvt1t-k3hz*qKUk)8N<XnXA9Tdn$PsJ$YA{6<%(d(Eu_0ZKYxS4D)0XSq#FB7$1 zJ9Qo>!594!^Djmv=FvLlO%>}zbhd|Cp9|E$m#x~zRAgDROmU-@kj%gpb?MRfMP6#; zw7mR>a5hpP%uq|g%%%Vd82fUn2Yf+^8j zVEgtWYf&QCh$yLmeRig(AdMbl!&r5%Y49Gt0EY-5rbXQ0rQe2p!&(C^)ya+r+b-<%BMm_-M39^ zj;V(-bTJmd6BU_DvXzR{l=nbs6%0A14QEZqBS~~-AJ%cXQ9DL9U<@%1th-=I3&^On zfXFjF2)!HB)^a5JhS`E%w11grJ{0oXv`TIkFxxQGgBR2-bX!RDoFKJy{uy8aj z>e7h}LW4R=AHZ8!k3S55H-zZ0jZnp@s5+`#EKO+|dBflI#s#Yu0lg*GQ*VB#lU!8A zYxk1dzgo`}%s4YUM{V9;Jpb+fda9tmK&C2Qzy2-Pr`w;eR7Crm`!^WxqaS=|^3ci< zNNFNn#(Z}X;nto+%X7DLcH6iYl=>WtQi}Wq< zuxie$lH}}C|_s{u~1#lxw&W(NNK_4>}t&SQOiAdp& z8h5vb)4JsYK&hvo-yNg=S_Nx6cV2PU8 z88YvO({3hfvRfET4Qp@Jk557IY!9Oy=a6WMAdix(Q++15QzO69^CG-D%Vdi~id(B} zp7`x3!4G56-~PGu$9GGzQMR^PjNjUw=Bh#o75ZC~+&}j-k|7tKH1~~uD|QUiNg_7c z1{k%BR~N)bETp3fst%Ol=?C!OXD(-c8Qum{A+Xm;E#X__+vC6uuUK`48UqaV&;yPe zPvriz;`*OjVy!kE=?Z=_k{P=qGd zi5)s-{st)7RH^c??7F07?LIzo-Y*byn*yC(_hEQgE(pl$uw=R1t~WBYLDr{HhV?pb z=(g5b<>`uO%PRgT0*&>lGocP6FcysJYaf#-`XIn2|GHCtCD zavKDouuMtn8$GU%)u-fU;f_t8_dovTu(E!RP?nBZ;r{g?o}2?ugc}`e`t3CH>wR4- z;xl_5gT5-n7X+e?%UI?OA*FJyRv7UQ_8p-qS3d37SZegOk&^G0M;w>xx!8wH|5nTb z#=Gf&O~kWHcum*cf_@}$m%`t9srA3LhB=w| z>4GWbQGvC=k79;+3YMm*`?05+x#f=PD7KEKOV zxAWGaKU7Z$eH`i%PZPl$t{AA)VFi zGlCX+VwHK}Cnl{?6l;r$IV@V+8|Y)g#lwgr@-tkC(EU=;nUi23r3p!5wUUwvr>|93-%~XF_}1MdrvZ0kaIw)@1l4>M`osByuotw3hWJYN0fbK*=B3-` zlQpF-fM1zcRu|SKuY)RK>DIYypep3P^X(qUBVa(z7l3^%qY2wX>&STe_>PoD?f64D)>dR2+3r zM|jiz>l|M%GV<*Qc;4bN>nFGZra(^-YH=%Yj5zpS(Mvr`lz(_*$awuCkO!gMV^*w_ zt$*ctPFOP7Ttu^d>OL-n_gpiTMP%SA9oO=A8B;Gwa1%dzy1`E{PgS%fi}a93UWm>* z`sFW|svoJCW?QoLfP^opNp&{`PalM!X^g z!WGO#QkaJ}a*dI3Ph~#k(!}#`({eye;B;TaxZnY7wU2U3%VJv1`xGf!+gh2a=8M#L zBUuG+%l~l0o?~F!s)z39(^qMBEScz@Y6)n~&l|{Na6?ZX)HGP$q^o&t)5Kkt&_O=W{Zx<^C{^X!?N#{|+KiFYO`aC!V~l z1f{9c4kbF@QmFvV$x6$2y>Mu=xKdrHIt86T!OkA{E1%hMvu1d6o6K1C^e&=r8F*Y9 zRYFo0akv7LpF+>d_0*yQQt9-|)1lm`SSnJbTy%l_;0Ry$Sk@&AeQZ92Tzyp$ujOpK zgAGwZCo}=j9i~9crkLh4piK{*(WyU%B{CY}WNMgsu4|VjWORhlFQ5o9(jdz!t@_D3{^9Zu)ta@Edj*X1apcu)DbDnKPQ2zZiKEN}Rou|MJC%)ZmJl2_ec_pja(zFhm{a zNUrpvg%HC$)osJ9beJ#2I!!tSL@KWd0dV=$vB(or?VwkR>>^XmKvctpul;S3QW^ZZ zk{Rs|-rhTu#H;D@MMDLwGI%QeLE};Q6^7Vprh>4YZS4in$0C$h%^T%NQfC&PyA(-| zQ=+7bH3ib_FFaRm6+DM;C>z)~;*{UMq(NpyaHpwR>e1d9cp4UdBmEKiN&z^$F>*Jk zw;4R>3E5kP{djotarNZ_X3uvlEn|WkUba&GdJNsn21br8CgsZ2Z&EW)^HV6}DlTE> zmwQkMw*9J(@Egwp;j0^N!RH9dIY#0fn%b&(4OR)7ptSuqIHN_6<#`j;>j8@dThI~H zu6<4l&&l{}N%HT%F6)3Je0Mw5Z1MC|GI8XGJN_RN{lt-8)yVql1Z?Am}Fq&7P*5{*0oaKt!xlml%hUxnN{rb@_xz_aaJMhaSwI=#!r?%(m$szv(WB;7$GUUUh9UJ=! 
zKg@bnTWt!wpZi*MbzCwg-?klE996{HDU&0|xR&Q=&L95tAN+g-CwE^7{Oans2OE=f zUT*INRol zgt5MWfPftI9wI`gYNoL?5kDa{B#rx0{Mh}ucv;)qj%)*)%e-ZerFMvs?y2?TXR)xB zB(zj?ZyJ0{G&f@c&BPms#wX}#-1KMDa#hK(aHj&Hmxt?p1=lq6;%cSQ$br#VDXzH~ z%Tj5StuL{sLlxy@FR!fZksq!K=fix`PJfi~wAp(sx^t}d{aQ-(=hAUG-sv3+@8eI{?0Od5hhJ{_BzJN zfK)EWxmA446y;v3U{H0Q*A&Wo!87biliJo1>+rV`tEMSAiV_yRybNVh>2`hcuNyL7$~K0z-G@t>Js343%eps%vfAo}v#*iDuKLy! zMQN7mVaiPCL~~y2Qcd>KI)CCfT{Aqf@RPOhm2o0nUN1!a^JmB_GVrendy@fCda&R= zUW9}&FyM!P)72BCCN{5+r0sS=PdXgA`87WUy%B6Iw{?{JF|(47;*W9V(`eYE8Q-C^ z$P9+&aoILyt$Mh?{zs>#iisnkG^!q=)(N5G!jm=_iVIJOf!Nxx%~rgA!4)&Ryh=h8 z>F*??c_`HwvYl4K@{8Td(U9@0oMe{*kPB5Yfnt2`&d+j%;cjFXy<NqZQXoxmsgD+u@XTew->1e^{p4d9EZJGoW_6_!|3Q2J3IZ@iWs-nf!Wr zz=XHj(5anWwBJ(MiK2QNGIr4}f|ds-l$4Mr9B1S-j<2`>bdfb`g})#e$zvf$&_|Q{yu9?dd2c9ic6V$EpBypjjJ7VRhLaCwpii zQn-NBewD*FWuuYA{>pGtK?$+NF`e#!Z;J^D>D?0Q1(WAeZ;|usU(dvcFA3(y8+$mj zS{vHJp6FGX*_5$O$=7UB`LF5Jt`=+SN+#cg*R#N>5^_c6_r?@1#U-?LdU0k-?%a~ z8^1eBtA9V84`5~cRchk?pi22(gP+n%RLW%DUG~bI%<9}EDkLIZ_35fa=?^$3S9wiA z)O5a@|01snS;M`^PS=v0WK;=x015N@w)7wAW0ocLRk%BXAw{R$g7!T1$Nlrkc?yJc zVM$dOgHuYgSyWmQNuD3n3CRj;1f=S{w;nlokIYp(&uoka)^)t<1~f=OOZ+B9wjTeJ z8!hnfzUsr*1k*#n+r>q*BU>AQS8kTswR5y76iibYV|^Ecd?sIAwkb8L+vpy}x)DsI zymoMKl3ZdloBWB$ry2LV#u4@6kZLx$?Y%Mo+nuMERHw@aIN$q#Dm`~Jwa)0$wPBnzsHV3)kS`@jnfB1_rnqW(^|UlCU9-g{ z=d<0_cs6)i>OpN&xwD*62x2`H7YJnR)T zc4I^gVFLq%-ZWt#^m46o$+7YC0fQP6jGxc}vZ9zu!jM~FnB&(KJVJJYeqapYq2Zdt zERj=)k>0Og(I5J{evYhJC<%OO$jI#-D+!)tmHV2g(b>vFcrF&}f}rb!VXR{IJTB36 zcTSbdPRX*zdgXbAt@WH#%O$aBP<*`?{n5T@TVL!tXf;q%kQLHne_plf@pA8Pif7azcF} z!tG2=vRfx9z@>X&dtQC}eoUhfX}h2a9%Px3g3&jPz=d=vSfxK7NvAE^(gkdZ{8%e{aedt zuc=}Vmy>q8Wa2<^_(V^*T&r-Yr?|{yU!RkvA79G>hy2g!!L0Zx7K)1;@!0^_TFp_5 zrb-e%nD>sB7pb&@%jxq$IQpE6fjYKxz9SLP0UNGAf9{+aWC`6?ol$QQa?{E=I4}-j zeYe?-63CJuOaA?jX~u_djPUct2pm<==iUosI!xY^UNI)n#%%^{3&!QlwZ#J)y7bWe zWvzq({>{{L zoCy7Wa&7zCHZi}FK;GjmJrm#MZpNtHci_((QdZ)rTGAeRJeUuE6gSzQM&7U}R*iqX zXp~xsY-pk76%9@(;p5$6`E>sFwHMIfq7!ireXjRJ!lOA8-%m5Oh^^!LJFwS8_#ov; zmkaFn#}rCb=t&#x;z#4>pKi|Ad7T&?zOu%^8la;59q)Xf2go5c3VLGLsRX}G%<_^< zWy*oslTmp^%u<&^!bxk{sccktCJu2u_|gtAmV6;V*p}?aQQWbAJ(M}*YoZGp;obT{j9#J z1FTCsL3*QFFzLsj2}ktL?F*7)zFzO~8%( zhZa6sts%Qrn>|tYZY&HigIGTUU6To7p*cbM#)weMvSP2dYPXkRS6KZg{y(PL9TIS! 
zQGy&>x3NFc9z69k{mOWc{3Ybn48~~wWoxl=XwtM84ek$AV_sM0-7 zzw)ZX)r{@velxdixQOU3134jo|j1dGjw3(OZP^XA{d8BkL~MNee= z^T~t3YkHTCraoEwga4?y_~T>37tD>0Td?oD=Vr0(MaA{3jW(QK3_iXgI!~g-G~eUR z^RkFRB$xzNvukBMAons8O^W7Yw8vT#BeCAKU=A=$OqzV^NA88Vp;I35HRdC14_GYB zS3@d)nq||9{MZ1vd)gi_r6e&cZh=lsHwpxvh#VU@_`HhT<-zyH_(o}4yy#2t_G$we zk12jnM9?z?i#Af$3vD}p*+=K_;=f~kg&wBi8_Cs;8iT62`PHR)8xujl?lA@nrk()X zN)oMhTMRGBX!;WO*RO|R#RC4$B-FUZ%XJnC`G{7rX%I7&`XWiZo;;dO_TOTj=Kjy%HOoiUdeth4~OAxUk<0;iS zjs4M98kkJ*J?7sUj@hj3zVJ}1{`}{}@~?^W-#;3{|IR$Po$)0f{|>MJ`yc=LYDUi9 zoBzLl`;U>>-~rRH-d1uRkrIf2D||dxDqhQAFcgys?7<}g^rSdXp?Q}Va}(gIq+kN5 z`iF*SmQ&~=v=RXmtwl;nC&#_NzDd)Q*&_0z<)VK`X7Z(n_;(5!Y&&KOFR12#< z4pXYA#99AR(5%CHfPnT{0lfX+TzX&J7u;Ad#!9S$(*4EZVkD~v~>HjB<{QGJ} z3-Slu=+`&E-Y%=rZa*(c3|J^q#A1=M-A{$XlLb#doCUm{&~N6oJ)@G7lZor#(-o6a zUH3rIrz2PXH5C$&r>nPM;t#Oj7aG+uU6!28}jfU+>oo_adtmtkzn{ zeG0P!KJ)%Z>k&p0Zm7r&5&JizhR}OF+2@A@Ylfy~TCO&+ckiJ|JMJ|a3}fV9*$0>a zE3|%;+`t)EVv$1roa)l~D5xa=@y~#`=K%TZI$I-9g8}V7pZ(t_MMq5NQ;3aV15~L0 z-4r_NbjWt&bGD$8m=gI{~3a2@BY~SJ0tkl$L<-xX@Y};`@oXx_W8FK-Ayvs5I4bz zzF3aN(`xgK#+?CUM&J^SLV-+E%LuE^Vp-VXXvQW$(fJ?!@ISjOgEt>=%$?j4Kl1L; zktx~5Mqe)M=AQWTwxyDu&6oHX6+@c;y_tY3_5-dMINj(MF;F1>*F9}u0S$$>{k84C z{_?k;{Qv%0y$7jMW#6`ITgJ3rBH#<}X~5sP*3+Qe8f8{u2i!0r3mgC6$II>o+{>ZY z)6`xn8bSI`Mgq$y#Qj!mt~?pKQPb(60{5La2&VH&(u0MRcKG%aNB7+$0dd#1wADdF zw$_FMGPO5Tgr@m|@9&pLs2k8t=T-bq{GcPA);XH207?k4JRPw{u>+nBC`e z>g1`13O!XR(q+w>IhHF?ic*AbAUCWAh4J# z%G`)bDn-WQF}EO2+q(1n9)Ka}9<1jU0uXTF+kkyt75~HPLBua?I)1x(*?ivMGffW1 zDmG&uyMXN>+BBTrbTJ zj3F(hORJO2LWHS{$#;7nhTqRMZEc8M56=}9cHW*9koz$xe}+PP@ZUn;)l**0dOQ#I zZwbw=dGymrxT)xEA3`DBB)3lg@u5(hS|fLQD`;+Lj0!bdOO_64<)68KF|Zvc_(6}U zIzg`G2XcE8J*wQN`xr-=EEfphl&0B6<4bSry*e7Z_Nko@lf}Z&n=IBU)w% z7L-5NXMJCOQG$1Nc8c+JdPyCu@qhTk=Mtr(dI@ke6kx=(akR8?g#Z$F-^2OR{{9+S z<1eYSys6bmRm`?pdTI3@bcD!1*RDW@Zp6s0uJ-=+KB;|Qn}dzSYIH4*xx{*Bq_$kl zihyL3ok)VJ+_TJRixv)VyHDqYgK;8em$Z<4RB|vCo~KwKT<@#{J1u?jCRZ6|QD<*+ z%#x&0%@Dp}eGx_}>o-F6EjEvcpJTFHJmnqyr*x|38_C3otE39t zLEcc^N$!}^=9BIu>NrDWaZ!I`Z1E!JHAJ2Yi+r*|r9BFM*lOJPTYhzA)41^h-{p%6 zA+Hb~hdFv40nO=xC@j^n5<<>tN3q% zomOXEA3MIO^SnocK2iOkkEg-b9lJaAKTXe7MM##_GcR%7mnqu>qp`Pq6eFfSy)e9< zjJ8S zUZAYGne*evkCOl_Xw?sm1RoIkp_j8x~@uC|^k(2Nhh|(xWp;Q#>;>vu3Q5wcYjh2vXjX5sdT(b_X2`>*vq)xUa;-=omn%gH=X2gHVkm1Vk=KF7K?;Kb4d8mtMWXrwFc|QLJ_Z7b z3X;S+zZhPlx;?U3k6kwgUobVl;Vf-T4C8^OfKH z!1deE@UXbNO>S2}+i$jKvpeEF5NKxYY;IYan&ntZWrpO^qx`1ztk=X-WW;_=tD_`InUhE=H+=*?i%* zQd!MA0Y8B&z)hL}#PDIA{Cb-Qg1Ky+)0z#};Oj~Cc+owCgY!<5h`mi1=bBNEkU9`M zf%Xo!u7+3#15qA4w{MYJ_QX6)dmySJD&;1x&F86G3@AHoU80qbuoS!R`B#PMJjZ zZ#OnYj~Af|6*5?*SdZ+ZE?*C~b15%d(^>J>+Y`m&$^^lLK20@1p$|}|BT6=cQ^(!9Z9%EeqhQ7ERVfQe{z^2e0he(c?s zr|a3Uo4ZusZmI!IS%3OO!;gNVN6TF{#mg0^noi-$%U9Q0Qc` zX>%_xwMp>w`_)I=ZA|CcmOFw< zL!O_Wo(7u18w>%G7$zU6J&52j2)`tn0J^iAege;e&QDMrSQ0uqmYfv=?l9I^TzVCO zOtgt4MD|H4(}Dc(588mgjt@3AluEtj>J-%xqBoYA4N-=fE)w`IZ~}o&Z~PhrR4VXR zSa6szEMtM{ouJ?Am)rg5cfk69ypzr^mf!$v(vbzi7Tv8)fQwfW)&1>sG(!%YpAi$w zJ4b%L*19~jgCl@q?Xz*cWL93tVh(u1^9M1?YiOi&feXMu(r(C9{SuAU5{9w|EFSIe zPE?O3GJS_21Q?D29Y4?{IUL?A{ErvF1d?zhe(BRZU`AiK1RD5F^a*Ly?reo!4M1?| zD^36#Z^^%2a^zZd=7}g0Otd5#bQAuB*GI+WC|i1TO;+nXqtO;lEx7M5*V09Yy4*x_ zJZ9i^LB4PhSAgZW%=O-=Lxqhg!E&O2-B*3M95dp@{rTq~dHo#Iw2z1vs zNZy)a>P+p8E>;9oqJv|@uKF+Uq(MjmAm_T*b$%7s1Tbj#`-@dN)HPUlI~fAW&rkHQ zA@5Y~_jESC=UgPKephVZE1AyN%2U0mU3zWKaf4l{BPh7IW}8x08eDFso4o7}m(W99 zp{dkd%fKrG5K?MB0FSF%8(>-Ojt1ZLXyTbn_N2>-7j}9^0Tgcm%@X5fL>kkWqYj4g6iD)H)0k zyYTLHb=b`<)1wL%F3+>dZfBCpZoAXTcpOn!Z$u_Bd|NERz3lG(tw+H6`8}(`u5?&1 z^GlyUhU|jYQd{@M^H4a|pCV$>qvS6N+^Oa(>U|BTfW*4OSeVgQI;pMKb^_N=xS5%x;J=sO6oGTv0F;*A_ 
[GIT binary patch payload: base85-encoded delta data, not human-readable; not reproduced here.]
z#8YOE?Pq1v!!Xi5$i*k?<>~MW>hXN}UJ>z=v#hM1nM?>!Lig9ZrjqBO7KM6z?zH92 zl2o}0|BgF#fUZ*`k-lK9+4Q#;J#!`ckHzMlmLMP%d(7ajSS4NQ8p|-`E7i9F%h^vI zj-42r)^W?YcdC_{I~fVv!uQuB%KQFvKVu94T}He%={~8q?Cf5vKV^Mv% zH-$HYClyPr+3ZlTd6Az?nTb}I6X?rAz8Oc6ute6c5-*7+<0h%sKj@1XVCVA#m$d91 z^{`;oWpZvhT_GV>fZKaD13!_E2xx!zfTEd7GEE_2F|tASTT&SoAUCYbI{=~5djwF6 zC%gjo4qp^!@WN>C8m&SzoE!}7>;gncjFB?2cK}U`;){cAyMGp@PmRa@jwG03D=$iW z4X@3x>PfNa8(?JkCGr8YV32so_)Qg^hmo>D<{?BFio6r(fJ+k_N}_AwN*{~qqeN)v zAob$`GiCq7>zyvE`x_I`kN7B{h8CWrB&y&dls$SC_ks-fD+hotrFKA6X7djsD@M8f zQb1Wo?PZfX>;Qn``bWOK`f?P>gn~Q2#q_mbarPhyKW$34y)(}~Y0kT=r*?H{yLUfq zZ11lLaaWSbm^}eRgxUswAFX94x=5Nd@suG{p#cdrIl0MvCzxL-+`(3*OuC^DsaRnp zKE$Mi_OkXLe*Z{Qa}O?%=}tvc*yU{Q@`+7O3|%r%hU%w-Blxae}@M zf?MYUtW?A?q!K^3I&A7))*pn`(4DOG}^_ zyn&9qn9TP{Y^q*;S0Q;nDh6qIc+LL81r^B3HZOQR>{l|X9PC7+R zAp?PRCC3X>@xxT?ID3r`-}NRpH&#?gezE-GkT4GKmz+l@r3X7tWoym0TkhM4X;)mb z$2)Nz4QNvw%6lPG4ezY}!dcs=TfNYEq@_)3!=z8ft6DMFu;RlRpgN*vL`*Pd0wrQN zw3EgPe@<|Rd`Ni6<}9FzhFULL`VjsslZ;R?F!v0i7nXE3d#O0hn6)ljCtk*dUD7UJ z6!a-IZfSw#FZ6p5U|hCE5}SSEy6B%XRaHG#w(Y)T`Z<3Xp^t}$sU=U=`se-|(k6Ur z^Sl3qMCDALOgJ!@E{EaTqk6O055m9fmO^0)(;r4@$?&?M3ybw@>yF1ZUfV*P(xQkD z0D)!Mby%$f78qoxgWMdncmZ$-Z}0w+E{E*{o~}>T-7)I^5Vg@7f79Szs5PF1ryYgmS5m!k|)PC}FPzxI-kQQ3p?}R##`qVK{oG zJ7i*gAw;Ydf4_8f*(m%?JlUmJTvz!C zYdp;2$<8qp5BDRwksYC)dzdka`^Ifyx=R823GK>ck)(XQ z52>pKUy9^U6?ZD{#){%A#T5<0I~-!BlITtj2VH4lyP;cVw=}OCX7!b>WOS4HtSi+g zs_os6;fUK(-{hLCRR75xydEjC_YFSa+idESm$32#N1!c0wJ88O=2eC>x3W1@xKcWoGa7E5bxPK$j z7T7dxpIxX6TTWaz+QqvviPL98%lLcVM1!)YElSVYZ!;HrgW8qmmxdDgnG~yCKcL`L z)ziH1|H?wjJ>QEMOMmkIf(}5kYT}_>5E_7-ayFiA) zjyC^&roVY)R2K8tVnJK2Sqkr15Bwee<|lI2P7_`9wofx()XlNGLz3UTy$H)|0rJkJ zXnPK+cFD9pm3q&p38!THXVuGE^!f+Ii)=2O^$TNxXsg=O{UbV0hO)r?Sof})jrLe6 z8OJzGm!qcc13k<*@eXIUKE7^#r*U7HF^4%HBA&A2Z{(7u*n-vhe~|}OGhHhFLFTM= zc)lWX-1y6BnfSZ%?V$}6+Idm*({0w~%8l;MLoCinwllMkIFjQ%Y{~~l(0r$}DVCg# zi&8+V{exC?$h|tL6MSVrif!7yISr!mog2$E`pTmg$^)hDFs)Y zeT`yO+8r)YOQ=Ab)lO{*TIJ2$U*pAQoREZZWL9wqWIi zm8E_Z9>!rKbJv{L@EP&7v;F8zT&{YL^Y$Mk8cHb^aziz=WEQi3ynAP5(MKP6)!V`Yh-x46Z&O4<3LsHmt+tYH*3 zTeDSi7=td57{RSAFQGn;GFeu{nJv|-TGUW}-Lxihy?-OD&cggA7SnXHyy@re6aT#2 zFu@0PuOgKA-X7~Ahn zo$ULCz+^HAWWtJ2T8SF`g-Ft*m2duB@Tbq-B%ABgZwAWBr2GhOxf6s^0^gbn#c_q* z|9rp)Aw}x{JY+OMMqy+t4l#_gibKj|f$}qZ`Lis!w;b%?)j4J2s39Vv5$dn;*OLe} zN;iGS@h6CnH<40Y!xdm&#ZQZ7fES`6lMbm?LhO-he|&+tYukSJO(R{bE!XTJv*9Lv z61NK=`EnN52FJqMjj&@{dqkeZ{`^YWmd)%)D^u*yyEUJ84&8l>x9KD)T4s{Bn?y6P zU*dWwCV%gm_7{UdDCEc}A(ZFhU%v3)BYp<=v*JR2mo-IJnrIKBR%=?_3405}v%M1+ zjXCB>ZydcKKh@qa&PIUz=RJ7Ls`r?C=5S|V>;@Y4m;dheqO9v31NtRMjmHQ|q&ul) z_g-tAQ-fk@MLc|0`jBat^)P0syYIj8xg@z$N|$g1Kc*hmji@x{(SdjswC%1|;z*81 zKcWR*^O(~*Nqcx0K~A3G-9<`$=~h^iI){4e-t6dY#Y@(w3?|0up6j5n0Y3O9L=B<2W=+ANL_t zEj0-(2i?h^c-(P&@FEChJAxip)k7mF2BmHO8?~Y|IEdZn0KTckEJSu%@nYmjwOAea zeoMUBeg*AsM*2EzOzpF^`lm+gEfmZv{&DeR>_G8bA`u)Oq}xsZZLpeo=NNsc=|T2@ zow?K)szE>AQc5pP+LyL(A)v?>XXEZqO{wSNTvwoj&<%t?3qevP% zcto&G`h2LHM=5(So?_bmz=tmQ`pMIKq%Eh}^TbkjKAG-ZV{W1%E8F(eJDIA{GZ;a~ zIT%lErM^$-p#OBbNic+2Kdb82&SYuJgU zoGWF*_b`@bKD!^o;R~`dYrA8UJz5A^k0 zee0qMDgA=x?N78O(?i$Bc}e({n-{&D-)FJ%9uB0LPjG!`E)~KeG5n_|zRCV2sy?kbf6~2k1wBeF$Woy|MtA z%kyjm;?|&0sgH#kR0k)R#!FW=57onC4XE?}Zg*8fI5~CZMjGtO!0T7Y67v3?K)o>Y z8HuZQhjFT{cT;4%n|)B9Ectg=DFQtYa(Jwnu$858fok8PZUsvupCMad0{p_6 z^SqQ?x{RVt`O~I0N2S28#d?c$bD_*8YyMNm9>-o6nKs9>j>pPI|G$d{-uH!DU2+KP z{lm@00NDwW@x!b6Ff;+9kopopWg$*RAjC=r!yrotxC`APd+f3Uek$C-$WqWwzuNU7EH+sw|Jl4c>x_)-}$#|`b z0GUlRD&|L8?=9kY-0?X?SF>%9&${Xl#!w_X>M~7tST?)md;}=f22W}U7vGzAK%m>w z(?@npm3<}Ur1xyObm(d;oT))mJBOly{q<$j5+T!&G@k-~w1uQbXn*`Ju|kIZMvK$4 za*yBV&aSK(FI7X!e%@Sd>y^UNP{M<$Vv8QMYGW7QKDIp?fWhh$dB6Cp<2kZnq`BT| 
zQKcicu(;@=fwKtqQUV;g!UZ9+Wpl~8oTcj4)kv?Es#-AZ14GmGi|-x+sFGebns(}G z(PW3!F;!nbX!4lWSJ0n-6O7(10_mD?@oHYEF>9K)Bo5bXF!FlM5(hBg_uAZ8>S%M^ zSi+PV=(yJ8ncAnHuB_y>pgU7$KU`+hhcw-m9Qpp%G43rSo){6flE2RiCmy4Wci;OX zS((%zCe3_!*T047W*~eEo)99N#!P-1zt(_h1_a35N|`W14QB_!J<$0IatOXNFrU5dOG?FYk72dFOt!>}@uc*l?;TGw%iK z3K$h0b9{gt7zKx*HbXUjw1--8JSgMPG;lcEula%h0_``H%|=2E*bDyqo){D?~r zbYhRh+}@bBOE`XZ>jjV3(r6NY@B_Irkkvo=yXpT@|6Awsw3Xti$5;L|&|)8i`lEib zHri^XEn?)GH1&^4uIk9-S376;;ZZZIcEa&wOyOH86nN3c^641F@f*!|P44|813_dQ z+Wy(jga)~c*f(6(i}NhWc3XS|$(OyVJR+NTM$FoGcGgj4qqz2VGnki9dV@dymb71( zlbgU4VOXTopFZ`c{nM*IZ@Is`mtdCSgCfVAqvaeV1&LHK=^DOjx=DuQHeCnroVfCtqOdl4mgmCoc>Z+&DnkSMKhIFwrAxRau2$ zKe?@~0s>a`tm-72!sPAhFfL zIKkJ?`9U4SGuvr@gz3=C`Y)JvI5Uc`dY|g}6Tt3&s-<#`FRW-j{b0Ec?U33Spu!&qOI2 zlF9Qgnyk(qHhA1Ng_xD?$x?OnGBodTh7sS=NzuHQpJ2nHs{I_lc%{xmJ7A`h7pQc; zjZ(2eGQu}2^t9@3hPB3vqV{0fBc3|m&VJ%KjGQA%3v7k)0~NyB9Pa1{@TC0u)P`ww z|3Ox0a&Xh#Bo%D>+81VjyI1#HEV=&>;-IUCw}o2s*H5~i_pSSA;YukM6#LndmZl{X z6aF(*risu3ZEHpMd_-V=?PvF0SBbv0YZ@b~4YmtNxx75%UWU@6p<1&K* zvc#qBK6Gwm?@?Q8kWQT3bv`QDmcuJ?YG_+|-OTXM=1eW!a$ zAl!HNH3^{iQ#AE6djCE5TZwP~*Ma=4-xHS1^&g9O{|Hl@n|M`h;?|@dSZ@HxZzZZTV z>WfnNJ`}QB@P9qd|GIXxFVKn{ZFSw%-zqO~(We&0un4e0z5^J#K!epnY%m&8Y=1ne zo)SP_uUE(Ih}}!PYd!18W727f*51}pKZ;q~84$Dw=+*Qq&T~39w>_xLfu_9X8wFpIl_j}D<1K+N=mH<47_e#Q} z#*Y&E8$CfTR@?xMt6T-hG|I{c+=prCEi5R9KI|{lQD2{~@bfAI9)EL{9Dos{Zxvso z20X-S!En3L9cH@q;+AdAWwUG!1k~%_U22>)x}0eCCY3Ic>FHVl#>579OvJrOglSH{ z5lay|ttM{Qe6lL?t@vD*?L#&YlV&(#)B!TKRm%_JPS_Yf^O5^D$RDt><|fc_iXq~6 zv6`)I+D)~*e7{$Y$08y6q*%K%$e)cH(FGVE%9bb<$m|R!FzTA6+e^~G#4rVo0d|=& z!16S@7kWDHRp=n;b`*~T0-L?|11`f>5*vLdoO-es0ln^8|~9K=WI_z8=M6yJBxu@{{?@O_CM=?CBCkQ(Og zZ+`|oM zWdc9-t88+^3y^+OtJZ|d)Z_~|zHO8jfX0zCye@~X)&nrHQFY#DIOh!8W_jOGoR*sd zIy03HFZbc0A!No^cBp`uxfwtbtlm)c%86-Yho+}f{+WVk&}vm$H`vjVFI5|$R%o_n zs#Pp~TY2!>v#-)Mx0s`+TXgE{oKl9|ee1N`(zT8hgBe&(j4WC(tjx zl0nk?D3F@Sbw_1dGOmChB`Qu^DH8>cZvGq~K~n8F!nxxIRy!|zrDcC&>bNsJoUh5l zo75ql!}zzUpxt2%9b!z|<~1ApR6~_ao#BN2U$I{*o+bth|4O~t!DM`7Z)|`)7M_)S zn~1^QbQFGIXfkL$C9H=9V<5DZGu3R92Cvd)(Pp($$Iahb^{Mw?HH2g!sd6nbf>7Azo(aEt+@`wQ)zdUz}+Da z(tg!-Tkq_UPAV^047tsw2r#4C8IjChid4E@0{T#$h9Nc`^Y(6N-kan;r)PhQDy_!I z*y}lwmu!B96O|t4bfIFva##iU>-1a<-0ukud6ce204-W*j1&{<0M-zV+}i2oUqXay zrz4a$?H+Ms+#FlXOX{r%lz>hzJ!NDrdzRjbcq=79Lgziy?AH92siV?|%B79kO;0Ii z9lt+m4stTf`CCjSb2H0|fm$H>ctBi}t%p5|A1sp?8N5UC7Ei`V|E}}(89Fu&_$#h} z)miv)^BtjrbC=V_ioL0xr_IzCv5>)FSm{1mf!Hx%f4sMA{fU%}0i@^MfWG8?yHe3S z!qhqrNC$Ve{I*(-D(g1X?Ii|%1uizWtg#Xam>&U(h@@(nT7#G8aZL#4SsQFDuwKa> z2>}pfKmeQCL&pV^r2E{wawK3ZBmu;5V3m4v#ZryB!s|G85+p=5q=0YK7WdeOg8=Eo zI>TkjntLQ*!QmX?$X5>WWxe;$V*Bl?<;s8VvCKm%P~?9~_|x}58cX9TKtWT7#mNg@ zOj-g~w^dIf8%TGbD**XV;I@Wn^_oRd#_vWRw}Y-8Edz!sfjj_YT5DPJWWLXyPWs!4 zv3h8_bqahR7ZYp}K~psL0$@g2x;ke7NOjh%1Ntzaqu<1)b6J19xX~I3xe4A2^Vp;I zzF9B}>}Ebh)Hdk2+YHxD|bGFKm-~6gpuDw$7biSrz1RSay*!Ycpi6Ixz~4?qofW z=^tt~^Ej7*;`me;6QY9%+yk-7`E%!djcgwAW7RL*p1Vk1SP*%ORp zdAiOn1)NF0;J=WYXW2{O@9!xu+O z0WJkUraO++xMw)eYRr~Ht>0aoTLKu%qK35g6|LJ~0W$^_hugXS5?OW>HfcPkjwtvp za`Vw+TJigWLmsn4OZQukQlWi7QE+`2Y0IkCpD$!ssMXQrjWxDsr0shFce(i)7a6DT zFR?e(gW|b9+FKlF*8@O+)8O6m=j-@ftJ$k<^A3KGG3O}3PwoZ-|D=*OnLb?Be}w08yEiVvi;G4lUWeMUif zdwh_fXSa412_A?~sMqK1I!84GKV5w9WzH2`dncUPunA^OQm3i)5}-E%@-ZxdcNz6) z6%gPC^`Exxj`g3T<)ZgCPioYe65&smXR*|>nDzc=Gw;c@w-qG}u>l(d_Osvj*Bj8t zhvPG%VG5z4%1Mr$UNGdL4{EgAZw)NeZZN#L160kY1vrELbzL+^K<~5Nf_?983E26r zDF8EeZ>nQ|ieUrBl?KKoV6wHbgNT_A{2@X}uC6w}@7I&Tzbf|uRBq^Q7h7`E4;LW{ z^MCY%+2sY`%%Q)sBkyEW*!s7^Vd;TH_8R(J4BH>oE)7Or=Z&ILK)UV{>Q=f8cOQAf zKM5Fx-ToZ*bBr?=b|6gSU?IgNtCqXozs1N%s;7;*NI&`QQPi_|}HbqQuS5n!otzl7LE4K|#I%%!{Urc{Fb zy|4BsxX|%`Yyet!6IQrY# 
zK+aQ@@F{XTf7=%Ig<*q9diRsQpNo{V95?U+DtxBY^3W2jOizryGqdo|92ze=-!FPy zZ7?XeX?nsuUbC##`+sGO7R+JR-0-K`>1pOfXuLo@U`knJXsR~WkpeYYw4F=r!E_La zz@FYcWT<3uR$$g6-kDjudRQR^zYytBabY(&42F4=p_N9h>22U%_~iR?swgIr#UbHh zVYa1wpBNl=vEcZ^Y5QET)7z<}nl*;byz=i8*jP`jK-2M)T$K{>2)p&6Ua-qAqs96$ z!Cs#}Wbe_9!P^FdlSOc^LB*)kZu4WeyllgBCTHiErs>O*Q3Hh>rt>=zpW!!kgR3j$ zIxiRGqY$(D17Oh6X=7(ozFuNzTX4NxFw1IiuO&bBrayl%7Tq7bzx&O*9W90?j}=1Y z*K&~`dxEME3}p8t144SPcX&!H^j_$jG3nsMQcPSe~ z0!xE=P{ZyqX?harbfMp%1Z#KJ;BIcfo8S9uhXvEdLbfhpS$6tsy<^6=1TRjXcXu-!V4fOvDoqP&9B0ny${axVOWkdYbom(- zgmbj}6C<2p@e}IkET-Aqw*#=Ad+b8t*J}}tHXm!adInK18X-4ZXXW^&==kL+hkFq$uNiRJy05+K>uQL$vu!oD9j^WX$JX*Ss59AR!r*r{oSrvt zP22(V=g)Z~Avo`pK5Z^&q?tx1aZ*xGYrz}ce(s`=zWKa2Abe6kWu^T=X3ATx_%m3< zU}{wCcr=w0bteInTBwgnMF0DJV5Gq14`RJ#mYuCrFwe@l(@8OER?QFY)E$Cuj?&TL zY(|ipUO5l<-T_ zwI0MO#dt%urO$dvx6juEcG$3+N6*CCwrIXsmwl(*A8Wek3pQPu?DR^SqeoU#yeFn_EH?c%h-^Xw9Tdb}`-HAga)4`YUt# zl77%9>gtJKsW+cjO9VcO7_eeBU0ICSJvRrK z|NGuqQ$l(M6qLdwom-nqa*e_WjaJh*#ute>jInF_rAe!WV()WgO}VIGWJm`o$N(u0tdvK>@E{=GSHh<=rI+r9rJ;LE$JWAzQW z<&@Ntjw-^wyr=G;oJgGKgWG9XfnqWp2*^@Wf#h$f@ZULc-2b}BA?+?l{4E|UI z-X-(y+|dodj#mk49^+T`ACq#j)?ecctOhvA788FP5#RyC0QzGI+YrUSXn-)zL! z$F}`Ckuiy%>f)V|rUJ7}c}R?Y#@H<<1Ia}%Jlv(Ucxk@nB6Lkmkwnz5^c3xHGdZr6kmGlYPrxr+TWVPpNvPa ztMEPSx74di;`yz}j)Dku%~u5Y_h1Bwko4dW5H?>q!E2Q&Z5lFEi}(}R2ZMVUgL$&6 zm2l=zT^n_}oj*aydp;-=KA82dx8%9mhM0dv3c$_om=}r%ZFa{qBU|A)Lm4)jF9+8K z1FY>*uw1G8$TT)ZkT1%)FX(c;Zy0{uLlVtmxo`!7Vyq5&@P(4p*9(iejxJ3MlWw8j zWAmE*Wk*8&+D)kl@OUut^r8)9Qimlf%yer)%YRqx)B@zyFHIOUKWv`#)jzpD0+tp< zm}@z`m8}7%schk=&t@biuA}HGwU=Ya6WkXVql;!X)|jeMLb2<&&|~rU#>DqmT{*XF zNzuV>xU?Gnzc~0pE7OG1PP1;QGgj(^N7Fb`JV;xFpLusJA4(VUIPM}xEJlW%o%%zv z6qQ%~UAb&y%DvV_VgsR0BSF3&i(rpKgY=-$%%iPd_)@Pd!oMBtgkR z&T4Kk zPtJmgZRS5#XUtdQ;eHo&lpfscUfYpe3`88JJy9(Kt zK@iJL^aYa4YLJOz(n%S^*HznBxkY1hROE^d_h>QfdBgK%G?&7wC<8ig~#Jm zn_kgmAb7=ml{;_$&)%a_fmm;l@MfVBC3!WzS&q?Yu5dn2Ci2@tw}SJRE+fN{LOgnl zX}H^QTO-L@1u>NlCt5m?qkO6Tw7s#KbABiOSdo0%vgkFmB{}k7dLiPl4)Of~c7OQN zm73$i@Jz|h79)YeKel zqJg~=X_H~A=XB?@XOardtfiYSUfT%=8#rO5R!GGBPSLN{gdDh=wTxi&ED`EYI!gpR zKC2Ft17>qNBW^D{{AR00k&*VcS0@^q_YwlAOgJOA1J`NSPE0B-OdYbC$X z3Iv;ZQmvXwwOuC)y*@cUis}4k9nv;JmG>)L|7%RZv&*v3k4>|sKZgP5D~KX@n_Cpf z3!DF#f5f*Lh%yt2#PK~iB;{8n*<OeokI%}yt`2*@e7M9r*BQn#ya>%Fa45t3Bf0QJ^yM{r6!e~zU2?cAC z*H9HDFd=Q7A^m)SvjB#PVm!%OKu>ewE$Q|LA{( zC{g9+Eo46ekxhG&h8$y~3i4z0sDu!@e})u`$d|GKE;IeLZ!quwcEL$eT}mmsed~UY zbTX^T!|j*4qcI@pn^E>DZ17i%>;?mTRgM9#V`TQaKo_E%iCMh@)T50?Qq6W6{Mw zYSLyhpzJv(tQS>Z2m?4*#Q4kw8|6>?jjW%|cP6fLMzxm|fl z>^+2SjOZ=S7FGBC>kgIHz&PgLsz&w~(-0IF1oz@K`)=;Dc(JLSzS#z2KRZ9aUJ9aD{r#7j!Qp zx|OSa1Q1jp{9d!GYC#$+|VvyVFi`o{$AmVUC&4A2YsF@wTe~Ic2BOt zr8A$X)KyRHb^&v2&W;P&(id@wRL%`1x)Y=$C5S)U&NF&1rSPyX7zre+{|*Drl4i=! 
z@Z0fkyR}=ZoeVj$_zHm)r`!kfl;mZ=J1R6Puu0L&<>1!XaExG|=R5)(EN}8>^X67A zUO>adpKOc7++d8};CNt=)oTcT5Y?U)wX8}e@{#Rt?&r8O%i7}$AwyRzaM#<`Q2mge zAIDE{JA?DA8Yz?LaUEY*+zg%wTl2iXi4c%IQ!dxO8C|1G4$e_!0WaO+{z_ztw-#|D zVj)Jw7Woc&_gAgE%`_B`Z1wC^dKa+Es<$bB$rr1AmOKrRp|-)TDwk;~Q&MbmzYZ}K zjt!dvO)MuyAg~FY4qN2 z1tBUFg%wzszRn5WY__R$KWF&Vn5?b|lZZ$xR5@fUCtyaghMKHwex0rn$MBv#pk1uq zwrw$9$MWzR)#Pi2N@pq@N>;gSD>^Ooo$dR*4?mbL%E~L)@+tJ2OPBX{*-5}LQ~I@- zU9ltnq!F-Mn9E?Bu=?4xwnvtISg6BD>rk_fP8Mv2G$VAQ(&GGN{HyjC&P?lzDSbTU zYGwo7q+GxB6t%66Y^O|H9JP+yOl$$nvT`9Sk=fl}PPhdl!^4n~@3r3BBB~V6VxlT) zvKN!IKRiQ1eVtx{tlcMNg;dK!8kw&~G54F#6611X<@`F1RLt4N_v&Cvt~KzP8GfWA zh~o@=g)>as?bxCfHUYfAuD{|RT#2{#KKqC{k#X{5^P#2$<1;>oPdWfQG&=I9EJ<<}NSDG>7LvBRdj4PoVGo#5VB}_uVDE@K1s= zyJY>t*^0F5*cldNXd>;iOu7(~`@5)hcw@-$W??`m3s!wwA!RvXj*ryrAcbH(nd@A_ z)pabW)N2xX5amnYhE+rfi;d|03W*q7VVuKHK_vsKHNNHV^vOFCBMsY8Cu0U>HAqPU z?;Y<79yL#C}qyjpbeU zg+oKdl^n)Wf%oN*xL0KHo*S(zRl^~}s1xgDt|}(V{K=^x5KVirKJaadx&GAoba~+y ztK-z>f=YM@4Xs^o9|n5Q`1GI_eZ{PbzJI{}d4SqC3iA)*V6U$cMHT2apouqSI*s=i z|B5%{`p>@)w8RwB2#b0g2vGN-^D{8-_}_57*e!BVAfC0loctPLMxi)Aj=|8g|2$DX z6hM_VD`w6IPZqB$a_|AQRQG;LFg}Wng0WtL#vIE8{d_A$M2~lqPyAkyO#*nz>ib$2 z(e@5 z7s(fW4DeGn68(=Z1cULhLC}@@dsc?(kGfxPY6dKpe!r0&izSQi>-&mNKAL|DP1Rfd z=Zz>%Z%aK3=C4upgef=b?l$X4r*Ubk(Y$v+9*$b<5e}SvX`c6TfuXCF+5WI&s1oi> z(--QX0nwMx2>Tlraua_t*lmMVx&xUG<%z4%wB=oC`FJw#m5yep-lw?xds&}n)jETF zlnW^fQOip69oYf2ingSc=(2>l9h{0bM3U#+J@0#2vGPqy5dGQ1^%KS(2UOLiZ_y!K z)u7^Fe0U`6ccv%Up8&*A!~y=CL?9KZ&Z;Qa4=EHYvi&kYT|9T9$Jkc!PmYs6EyL2) z;C~c%mH|<9?fOR;N@D1ap+Ra0K_mx3LP7-Tn4yu7A%};N7={>HLO>7@5rYtr28m(l z6hs>7kPuJ_MLgU8|MvU=`+=e;W}&mhwB`p@wi~3{;k5Vj2|w#yBlW}YLR~+ zW#KRYBQZe@WbP4Gh8&_M>pGH`C z(=dfD+aVrLi*wA#Q*GySvY5PI{i~?F=tD%25qzLocn8H8Hl`3M=aA#{LA%DkB^{*$ zeP9+Js}UxueyhAiuQDr(hF5C~4L|HR(3Eih5)cB{IfH6i?myW$gjFKP@e`b*9IS%H z7h0V`U)RUB^d6p6Hsc+tPY{TkRY}RPu}<-vr6;! zieKRWAniVJ(DKE;Sj@ql((1y4q$pb?mtd3BgAvI_CzA=Bw~Qv>MTO+ zs_y_Xw<-dj`>=kj<<@;eu(w!zVdDdfEB8({m7Qc6B3WGm zYwenlD5@MKz;pXez$rSBmD+pUeZa@Avj#3etafE_<+X`DdJ+Rmw&Br}n~DJtv|{l~ zK-9nc;HF=&fQ$->AJ2ZjAvBH|QYfopvG)9#lcOBrKmKFCpK2(55P(C%q9iBxn|%~i zJ$JlF*WTwHyG`2XrmAAS-z^QezWo8yEN0?i6qb^m|4ZbK3ix{H+JuT@O7-OrlG zxR`hqpF0zumZI-{{z~j4V7SXHeXQF0^&`l(r+TAx-ZIojuJvk((r0 z(Qg`0lMy9%F25(X&8UqhuSfv0WKSBu8oejB!*-^!gbn)JKDpq!aChKMb(~8tx!hoL zS)?NS!2|es`6NirsmbBVYfiUitqN&%k~5TI3}?mzRi8FVoFj(p-lNr_SyCa?^8G~ z_phgv!muay`pROGA^6ot#%g?|VMriz_Nig3FwB>{2L_bSFMIh|)8#F-etcP4yy=mf z{Uw^kMbS?2zCr6NAM~V@1zA%FY4olAXL3P<=Ccp{9E+@lADS2)U%$98lLrlRtDu;4 zr%GF#8P>?qhAk6xF6B=#L)O*$=Dk_Xu~|+|D5@b84PFr-;%QBow9n@%(@`ieh3d2T|xzY8+Ml>r8v- zkCq0-vp0$QJ6vHENYAFt9@ArTo72xev+Wzf30Yz{l)CC2YDu@>d#~5M>3cSzDBb(x z^T=Q45t<=(di`&&T@HyHOGr59aqF`hy)=5PID0SE;<1FKZwpC7@yGPHcUMHQq6!xW zIPM9&VwrKUM9H=_tYpABZnk21`*{N^6xc-iX}FpDK8hSH!^tFWAuesX3=FOxGk>CN z&3(~bBl`Ua8f0)pJ=(fY`x=^SGHdkU=!hpK2y{xK-crDy(7xCD3NKKSkU)s!?ddb= zc&DwiJIh6^!g+@ovVSkT&g-UQ$LqzqCmCSstA-2f9wYDI8pOMPSF$4g8C15Lh9<%6E>rdYpydh+>2G$(lP;gWaeT-hN;6CjDwG75#z?`G zm9QuFqyB4W%PrzXwsZCzTzmzPi_R8wHZ@U76dkR5&b<{WZ z&`J44+JBIBl>(4L%vSP#1{IQaA*q)04z2)JSPAZA@_sW)XaE*Wz(*o^P{5SkQZg%rlnsf(xw6! 
zljfkfCgIXmKv*O5HW!POt*&&-K5+*%!?z&-y%RipDDT1G{1kXMz6F4Bq9P)(M^jE< zJl2MdJ7@~=ORoxVH~f9Kp0^wj6_4EgJfi*#Xmy#cxnxmcpg133N}Td9ATyd2Kz^tO z(l15$X>NuBwAe++v%%^J_+Nnbq8p3SkmWB~Y-Y?8&^W(#{LL!It}%na?{J;@&$;hk zIov(qm2)-dF9Q%5Lw8OZP6dO$W&x;e(6mdyc6t3C#cf0Do~6!M)~O1zFO1kd0?g(9%K#F*4o5UJ0D*fc=U)0A~ijQ32{gr+pAd;R*s)(2gs=b z{VtJR;9Fq0+<`5OX8$XzXw16Coqo4&)ve33c16O~>oR8ONfgII40-2SCj zjet>xAzUbAC2i^w`!mILx)NByWLDuQ&CLKY3A=rQ2j`6}sakPspTA6w06nu*8YQui0VT^c)%{1s)c&0%S-*$&lp_az4*4SmYd)$s3um9Os05{Fue zF|~=xo#f4Z>+&u&@o-!I@bb1ezTbt3kL$-*0I~A4Y7bq;iQEDx(ap2ZBlTebXvqy| z!jJ6rqIRl!%6Nlle_@vLfYWZ%(Tr#LhjC+d?;!x(Faf-{w|*6#cb|nG@4YLxQRFQ7 z`8!S`@mP&>>3dAV^29E0fWvoV&v7F8mBM87Y&-DWXE&zQqG3XK2pz&V1W+Y!l6X%W zezy1K*dG6qYzNSEuN1Cxj)W>FOCo$MA6*SZ&o)$YU^)N;RiWL&k`UI*2qDC=1L#%&r16nA5gL zMUI>3aJ%f%IW8#zC1gnGEk$!0I`CI~`_ZuSCxgWv4K5Y78pLHVUU6=Vc9@RbW*rW+ zj=WosLN9eoH>?3_bO52IVDRx6Ab^B2q-k9#?-TR%@XBuxNPo-*(S;GA zec~wpVWpn;Bt9N)?hcy20H9qKJRmOf9#G>HntEtc=k*xq*qZjGhyVyXH~f(+0fr9F zy*ww%I}?(+7`uVhB=H(!%1X5`cQA=m*;ZmIYp-iH(Hm(`w8HWv~L9Q zY9!+oPK56P%5NZqtuB^b#Xsa+7n}`)4`P8@hDn0=Q~gY@{C1jjR!xi*V9U7^in~gb zPDaB)0bo_ilih(IEI>uHK%2r-651>6GhN{GcCHc19~b6e7~ZhHZxBKsnJb_yUla29 zCoJT?wE1Uv(Wg5d;;~YQT#1ti)dfkFS7N3mC<$bK`a7^|3XiIn?c^swsr6`V08I6f zS$p$4v@rzmln<52vh~B&uI#7?Khywdq4>)#6W;;sr{0@8pM;#Y261u>Fa@)HurE}F zoOfH^K0HzMnu3m0u>4+j3r8XCB*p@5z&zzV?%~z%B$4ZUJsV>|@`jg6&GqZYiKCv4 zX6jcA96X!s_S}nA*U-z%Dp3w~aq5fH`u!!HHdWN>y5GMa4oDB39IViQ<;ywQD7aCJ zkuf7~96YwawEck(Kry1CN#9b1ajS~RvomD2j9tmE7=akrhZ7F>8?XBVgm@01bc;90 zRT$bQ>fDEqn?1kTaU+Tp!3p?8XM0vOcF)kqx=oZnV(zfCSGm!bLMr?jm?0cIEtYMp z4(PXs`y-NzLyol1uPn^hO>tYLxho(C?D7=B6=-ye zpXZN|R9DMQnHHC!L^c_G9#F-NaXY@(@`B6Vp`CbyCFIE-Y#CO!)bXqfM~b%bStLCV zz>(wY_6~Q{&rh7TyI*U+!J?KsN=AqaX;^pWH$V#w*?LWV1}qs~`4XZtZeM^104`?0 zSX5Q%W~ouN^yaUxvhC3po`Zmhu~&#yNIH!&fTIiowie#XB=)dD%jb1rsC9RN7Q4|V zJ2f8PP!%6oyFcJ0(PVFrR*|cbt5IF;&}>_CS{ho$9rCNqh|f^Z2+ZW&K=GiY0Ta_} z0|S_3POn{wlWC~RtG;xRT!4leYO=cd1+}IM3zML>RI!YxMoX}TqLGquQ8g8<-|j>m zVe3Zl50)kn(v%(4=uPE_E^Ssj=aqvXgOrr|fEl%(&+q{(C8GrItLnyu7?D8L^o@~^pxsQS_Y?Kw>*Q>Wr6{YyW#<2Gy@*7^ zi2Qh4zowGN*t1U4$)<@bY9O5g$0FIp&EM^tT=BL&yrXvgAcQPzr z&U`@0M}A<3cxR{qGNoYC1Xl2rBoYK~2CZw_by-u8X)y6-2RUy|2$_frB)jMg5#_#$ zT)nR|WrD57<-?E|}jk+^3vb=i!jLCpr{Ux}c#g#w1n#|26U@>&_ajbPS zh2Jd^_)uxfa*eRCq>0g%CqU%LHm=@~2PZpY#AbM~w+!6(jW}n6ZU#}rYXYK9FU!HN z0yxxs;Je()LGojh@HDajKyYprIt7?c&bAA(<?+-iStAFfS=Z@>E2 zs(7XeQ3sjM_YXK6m=WYY@lcIy+9+q3G+-rTr`}&Qj8{3~w|`W#@!XcDiWdM6^+xz$ zVq_U-a2wuN?jbgURhNx)hh+|4Mtxn*yz1#F&OY3T@inI;h$~{fQx7|HWW5dyV_N|% zbRc=RDvj2_zV`LGM1Y`RPgFIi?d(gtf(4(K3uvjuBaERg`XXXmME7tIB&R3dPYCi- zGNtBG%3=w+eX$PfBp%2X$vZOg6mC8K%(%Y8EkC4W;Cc)3x_?h#uY$f3$yRr#!pRxL z%RlIY!)oMf6olodOzbD>vSrzO0*hJRfKlxJ0-(;;ea{#*hX3~3*U9j7F=9J^cQW$zg|`1BQB}OAFMh<*Q%Q=$TW!B z8}%vD+2))UBR?mz`9|N*qr6D9Qr>v9LkHr?>*_pJUwC-E?dN%OFd5~6?*nS=YY#|X zA8|*4+hjTS#uOc`;r)&aRC8>rz3{2)x`)Pld!2>$SOh}n>XmD(#n3j0vBG@29YxfY zRO%ywdTxmdiJoa?hKgK=!C1U3GU%O<$uN@&=f(VAg>aXz(;>~;7=8#n#CeREkquIF zHw)vVD3QZ`(N4ppzcsCENRaxIpzcQ@58cxPqedpLN%w1q5_YZ5L3z_;FyDZY1U&c? 
zpKygO{&WV?7_xk-X&D8L*W+8LLQFr7z9P*uLB2-rdYIqN)cnO}vlJx~SKbc7BoFjdP(ws+?xJ-bTC*;5Aat2jIcYxoOS;;Aj;gzk*Y=jw*-JF8LGx|V} z%~m8iPvv=bWdx^S|2!&G7IxA#?Ip_4$!>lV_Qpv;?#Hv=ZClQizgiB6`fSF~rDk#b zQRHeLT%&KzdE!E)qwyJwy4lpj1W946wrGv}T!T#GBsrk!9s}&mB)rs2U5D+A2aBR0 z*%}cKB9jKvE9|fa3;LeSr39Qb#+qxvJ4wfvSwc?CK{)$qzeoc{B%K$5xo)O>jfZWn zS=@g=ip)Hf`p(lc{<0~JbJj$W5v9x`h?QMTyLjn9nrlW5>zOtr^DjM~TVXE>PTV>( z`iLk>=nQV%uX*i^ZU_S1qbeY+Bff-M%3_Vb!+9EM_42CY8Xd^Rs3=-NI`D>w)VChK zYcW%lS1Io0?2%CnmK!l^b=IF7 zui`%D>|Nqhsrp88@nY9|xukH*``9DRL}eWm?#KL={$92U9_LmNxd0 zg|`}v1yuTpI2t;zglf4wl~FME;*D996gTq_&Q0TX+t;|_fSaJ8(5I8PioDU^fKPSE zhWQc-SvS~`b~H2kO`0s`OGsf3iw<~dqIa~VRnKBJ^UJAA@GkNVIlgQKk1@hUEktPq zO1e^&W%6F{qRmVa`~o9Js;=3oi2M^3_ClHP<2To?G%Uq%+6K*<^O7VI5m6KLv|wiE ze2IMbG8jYPijj0=OO_^?%wl=l1m(D(giPrc^)i*-^-ldaaka%W+o`^^bfvdVHtq`j zFfd%jonHu9e#z?f`X^w(RkqmmFlqeGHBpi}rUGO)`-0WyO7E5{#>v4XHqLL!msGw1 zO#nKU9wfA5#$7MQ!z0%~e~q&Pp_cVjp7kDgp6AE>WxWiETkU zc_*#02JZrW6!y_F54hikgmzEPsOx_-e zSu4}WlAqWgwluAKgk&5paj-Vt>UMG-czD^9x+08TEn5mECrCBl>bpM&K+#gU6V6oO zERN{&oGX^1dcDT|lPw|sxy9~cf#W}c#vrd2APv&I;4%H!hyCzmX+564mw!6 zQ2TX7&L3zg7;K@?lK`q$B-Wjp@fO1NCRHz+qTZrXy)93TMrSu%0dku>Fw)g{VP ziITiC$}=TJgqT{sk8GPZsp!xaz8nw9Mp%1?r0%Y`T$r+VSslpcE`tx=L1qR-S=)4} z4-#%))f`2CJY0>!fdjFAaKz zhptdUq*a#!4Oxs@e2bUPEs27yye+HMCnY-9mXOYYm>eS({!RiZ z^gEFrk3H!Km@o1+0n+G2Z_Mh5g;$aoW6E*W_b$_hU*aiCc67NI2zuPo*MIS;jO8O= zN~DYVLT$4OB50LV!X?y#%^3QoP??Q<^tp;)FKbWIAtI1*8vnA&izeW+-ZO)^SZ3c}=a!AHRONOn)ShNI!5?;*J1GgUW8p=LJWLQ0-2SRUXfE@lDkCt=(?^opDX20ag0n3ZhvG zl?=+>JD&X-J1;ajg!FT~yQZy0ea9J2!ni8D=~|;YV@jid(e$7z2&mN0w1DnHgZJ4% z%x<90ab%B}!__Tj^93*U@rEmLj+xN&fPU-=(5XjrqkE%A{0p|=2I7>!tbgAq4?TZo z8)~`WgUVKY{`N|u$ZXga#XNGGDNDLcYODK)G_}mR>71@D zMZ2}S-EI2scRhDNy2CHsbdEeUS<}(GiDA(a(uUi>BI;GIFC;aWXP6B9o!a~F@Q#Zezp1cWPas6zEp2^=#%mm zQEb@*VtXq+L)MRG56Q+2Ujg<)Th~p(e$ViKZ@`bFri_0mA4`VcLo>MgmeH9I-WR)u~Z|TtLq8ZP8 zAcXvT)%?f**GYpghXuXSw+0Gr&eC(LNKN~=a2%VPRF3_0-mPof&K4i{qM>G5O(5_qvWza;_4VPUq9cjwuh?z(|OYDpSw%1;8n2=)h(Pc<$BEFYFEuzlcY|M9rPEd7R-;vzgf$;*!{(>M^Ul#1&k^jUZC}CHiog4(T8np(KN$8d6hdYK-p|!&#Rx zN|5XP7JLcWN#IEoIlpu1k7ofB^T*-_g?(1nzmfikdP=`g7|s=pfyWatc&F?FL8)lQ zCDY~jw*3w!9!1p?T-%h8ktGL1$^SM)h$kg%m?6hthr)kNy6Q$lgPj^>dO%bOrw$%} zM)v@ct&fE=;^OFVzAtQ{2cqG6zm(++;U$1U-#~+fU^t!@$ zkO^X2jXqQ3Pq|pmG$;?!PWcS*U0TG;(qWD<(RwjbDJL)f7@aqv6d+PPdRe>hsvgQ~ zFQPw`!!mfkZ!is*LA8qF#9htS)6`CE?N3%j;?w9H#?tXOY>}mZJX0!gM_pLk#v9J^ z_b1y|4&!akY}F|J4X4KW?w|Ud+{q%W-7?Z+!{Fk>GbqCp2Ww-WM*F^SPPI6TI(YZT zIZ4NR;eM@FH-qV_46<1)p4VVfN(I(dXmdiy@I@x{C1fgjP?f|gI2e-V{!8F(hx<;8G3)sGcOUT2|iWj8{ z{3y9J^?2$p#v)sX1)IK@jv3QNkx$qXyL^Dsn5{ z{d4RPuC-^@+BOqd)Mb`F%|xMn8uoEwEn5%oTZahoS?DsUPiFYa(6hJ%uS6fL)yj5! z`E$^w#t}IySe*Oq!hjv|;NCaV75pO!F2vb@o>(?NpT#IvS=R@Z!Ht}&gmZi}|I<(U zlmcBu`)1m9a&_~WV{k~<$YuKBHWWsTznlINpisb*m;z;jY~EG@pdHX^(87)3L|VLj=KB-0#+a{+uj%jrsMeT4=ChbT z%%aLC5C}0sVLR97p_KaU_-sF?#1)2tI>;6aG`pbbcNX)XtBNG3ny-`Dzcd3^$S$E( sM|!&+{pazV+@+`JkJx%3EOGwxLQH+SZ|^gm93tRPPuobVLKBYpFP%Tlh5!Hn literal 0 HcmV?d00001 diff --git a/site2/doc-guides/contribution.md b/site2/doc-guides/contribution.md new file mode 100644 index 00000000000000..14d5e8dcaec55b --- /dev/null +++ b/site2/doc-guides/contribution.md @@ -0,0 +1,215 @@ +# Pulsar Documentation Contribution Guide + +> 👩🏻‍🏫 **Summary** +> +> This guide explains the organization of Pulsar documentation and website repos and the workflow of updating various Pulsar docs. 
+
+**TOC**
+
+- [Pulsar Documentation Contribution Guide](#pulsar-documentation-contribution-guide)
+  - [Documentation and website repos](#documentation-and-website-repos)
+    - [Intro to doc and website repos](#intro-to-doc-and-website-repos)
+    - [Relationships between doc and website repos](#relationships-between-doc-and-website-repos)
+  - [Update versioned docs](#update-versioned-docs)
+  - [Update reference docs](#update-reference-docs)
+    - [Update Pulsar configuration docs](#update-pulsar-configuration-docs)
+    - [Update client configuration docs](#update-client-configuration-docs)
+    - [Update CLI tool docs](#update-cli-tool-docs)
+  - [Update client/function matrix](#update-clientfunction-matrix)
+  - [References](#references)
+
+## Documentation and website repos
+
+This chapter shows the organization of the Pulsar documentation and website repos.
+
+### Intro to doc and website repos
+
+The Pulsar website consists of two parts:
+
+* **Documentation**
+
+  Pulsar documentation repo: [pulsar/site2](https://github.com/apache/pulsar/tree/master/site2)
+
+  ![alt_text](assets/contribution-1.png)
+
+* **Website**
+
+  Pulsar website repo: [pulsar-site](https://github.com/apache/pulsar-site)
+
+  ![alt_text](assets/contribution-2.png)
+
+### Relationships between doc and website repos
+
+Type|Repo|Description|PR example
+|---|---|---|---
+Documentation|[pulsar/site2](https://github.com/apache/pulsar/tree/master/site2)|All files related to Pulsar documentation (English) are stored in this repo.|[[feat][doc] add docs for shedding strategy](https://github.com/apache/pulsar/pull/13811)
+Website|[pulsar-site](https://github.com/apache/pulsar-site)|- All files related to the Pulsar website are stored in the **main** branch in this repo.

<br><br>- The website is built and put in the **asf-site-next** branch in this repo.|- [[feat][workflow] add links of doc contribution guides to Pulsar contribution page](https://github.com/apache/pulsar-site/pull/114)<br><br>- [[improve][website] add download links for previous versions](https://github.com/apache/pulsar-site/pull/108)
+
+Files in [pulsar/site2 (master branch)](https://github.com/apache/pulsar/tree/master/site2) are synced to [pulsar-site/website-next (main branch)](https://github.com/apache/pulsar-site/tree/main/site2/website-next) every 6 hours. You can check the sync status and progress at [pulsar-site Actions](https://github.com/apache/pulsar-site/actions/workflows/ci-pulsar-website-docs-sync.yaml).
+
+> **Summary**
+>
+> * pulsar/site2/**website** = pulsar-site/site2/**website-next**
+>
+> * pulsar/site2/**docs** = pulsar-site/site2/**website-next/docs**
+
+## Update versioned docs
+
+If you want to update versioned docs, go to [pulsar/site2/website/versioned_docs/](https://github.com/apache/pulsar/tree/master/site2/website/versioned_docs) and find the doc set you want to update.
+
+>❗️**BREAKING CHANGE**
+>
+> If you want to update docs for 2.8.x and later versions, follow the steps below.
+
+1. Update the correct doc set.
+
+   For example, [version-2.8.x](https://github.com/apache/pulsar/tree/master/site2/website/versioned_docs/version-2.8.x), [version-2.9.x](https://github.com/apache/pulsar/tree/master/site2/website/versioned_docs/version-2.9.x), or [version-2.10.x](https://github.com/apache/pulsar/tree/master/site2/website/versioned_docs/version-2.10.x).
+
+   For why and how we made this change, see [PIP-190: Simplify documentation release and maintenance strategy](https://github.com/apache/pulsar/issues/16637).
+
+2. Add version-specific instructions where needed.
+
+   For example, if you add docs for an improvement introduced in 2.8.2, you can add the following admonition.
+
+   ```
+   :::note
+
+   This is available for 2.8.2 and later versions.
+
+   :::
+   ```
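+For example, before touching a page, you can check which doc sets actually contain it. A minimal sketch, assuming a local checkout of apache/pulsar and a hypothetical search term:
+
+```shell
+# List the doc sets, then find every versioned copy of a page (illustrative)
+cd pulsar/site2/website/versioned_docs
+ls -d version-*/
+grep -rl "tiered storage" version-2.8.x version-2.9.x version-2.10.x
+```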
+## Update reference docs
+
+If you want to update [Pulsar configuration docs](https://pulsar.apache.org/reference/#/latest/), pay attention to the doc source files.
+
+- Some docs are generated from code **automatically**. If you want to update the docs, you need to update the source code files.
+
+- Some configuration docs are updated **manually** using .md files.
+
+### Update Pulsar configuration docs
+
+Components|Update where…|Notes
+|---|---|---
+Broker|org.apache.pulsar.broker.ServiceConfiguration|These components are internal.<br><br>These configuration docs are generated from code automatically.
+Client|org.apache.pulsar.client.impl.conf.ClientConfigurationData|
+WebSocket|org.apache.pulsar.websocket.service.WebSocketProxyConfiguration|
+Proxy|org.apache.pulsar.proxy.server.ProxyConfiguration|
+Standalone|org.apache.pulsar.broker.ServiceConfiguration|
+BookKeeper|reference-configuration-bookkeeper.md|These components are external.<br><br>These configuration docs are updated manually.
+Log4j|reference-configuration-log4j.md|
+Log4j shell|reference-configuration-log4j-shell.md|
+ZooKeeper|reference-configuration-zookeeper.md|
+
+### Update client configuration docs
+
+Pulsar Java configuration docs are generated from code automatically.
+
+Components|Update where…
+|---|---
+Client|org.apache.pulsar.client.impl.conf.ClientConfigurationData
+Producer|org.apache.pulsar.client.impl.conf.ProducerConfigurationData
+Consumer|org.apache.pulsar.client.impl.conf.ConsumerConfigurationData
+Reader|org.apache.pulsar.client.impl.conf.ReaderConfigurationData
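+The generation pattern is easiest to see on a data class itself. The following is an illustrative sketch, not the actual Pulsar source: it assumes the configuration fields carry Swagger `@ApiModelProperty` annotations whose `name`/`value` pairs become rows of the generated reference table.
+
+```java
+// Illustrative sketch only; the field and wording are assumptions,
+// not copied from the real ClientConfigurationData class.
+import io.swagger.annotations.ApiModelProperty;
+
+public class ClientConfigurationData {
+
+    // A doc generator can read this annotation and emit one table row
+    // per annotated field, e.g. "serviceUrl | Pulsar cluster HTTP URL ...".
+    @ApiModelProperty(
+            name = "serviceUrl",
+            value = "Pulsar cluster HTTP URL to connect to a broker.")
+    private String serviceUrl;
+}
+```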
+
+### Update CLI tool docs
+
+Components|Update where…
+|---|---
+pulsar-admin|[pulsar/pulsar-client-tools/src/main/java/org/apache/pulsar/admin/cli/](https://github.com/apache/pulsar/tree/master/pulsar-client-tools/src/main/java/org/apache/pulsar/admin/cli)
+pulsar|Different commands are updated in different code files.<br><br>For details, see [pulsar/bin/pulsar](https://github.com/apache/pulsar/blob/master/bin/pulsar).
+pulsar-client|[pulsar/pulsar-client-tools/src/main/java/org/apache/pulsar/client/cli/](https://github.com/apache/pulsar/tree/master/pulsar-client-tools/src/main/java/org/apache/pulsar/client/cli)
+pulsar-perf|- `websocket-producer`: [pulsar/pulsar-testclient/src/main/java/org/apache/pulsar/proxy/socket/client/](https://github.com/apache/pulsar/tree/master/pulsar-testclient/src/main/java/org/apache/pulsar/proxy/socket/client)<br><br>- Other commands: [pulsar/pulsar-testclient/src/main/java/org/apache/pulsar/testclient/](https://github.com/apache/pulsar/tree/master/pulsar-testclient/src/main/java/org/apache/pulsar/testclient)
+pulsar-shell|reference-cli-pulsar-shell.md
+pulsar-daemon|reference-cli-pulsar-daemon.md<br><br>(It is rarely updated and contains only 3 commands, so it is maintained in a .md file rather than generated automatically.)
+bookkeeper|reference-cli-bookkeeper.md
+
+## Update client/function matrix
+
+[Pulsar Feature Matrix](https://docs.google.com/spreadsheets/d/1YHYTkIXR8-Ql103u-IMI18TXLlGStK8uJjDsOOA0T20/edit#gid=1784579914) outlines every feature supported by the Pulsar client and function.
+
+> ❗️ **Note**
+>
+> - It's public and everyone has access to edit it. Feel free to reach out to `liuyu@apache.org` if you have problems editing it.
+>
+> - This matrix will be moved to the Pulsar website (instead of the spreadsheet) in the future.
+
+If you want to update the Pulsar Feature Matrix, follow the steps below.
+
+1. Submit your code and doc PRs.
+
+2. Get your PR reviewed and merged.
+
+3. In the [Pulsar Feature Matrix](https://docs.google.com/spreadsheets/d/1YHYTkIXR8-Ql103u-IMI18TXLlGStK8uJjDsOOA0T20/edit#gid=1784579914), check the box in the corresponding cell and add the links of the PRs and the doc site.
+
+   ![alt_text](assets/contribution-3.png)
+
+## References
+
+For more guides on how to make contributions to Pulsar docs, see [Pulsar Documentation Contribution Overview](./../README.md).
\ No newline at end of file
diff --git a/site2/doc-guides/label.md b/site2/doc-guides/label.md
new file mode 100644
index 00000000000000..816c5aec8cfb9f
--- /dev/null
+++ b/site2/doc-guides/label.md
@@ -0,0 +1,19 @@
+# Pulsar Documentation Label Guide
+
+> 👩🏻‍🏫 **Summary**
+>
+> This guide instructs you on how to label your doc PR.
+
+When submitting an issue or PR, you must [provide doc label information](https://github.com/apache/pulsar/blob/master/.github/PULL_REQUEST_TEMPLATE.md#documentation) by **selecting the checkbox**, so that the Bot can label the PR correctly.
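+The checkbox block in the PR template looks roughly like the following (a paraphrased sketch; see the linked template for the exact wording):
+
+```
+### Documentation
+
+- [ ] `doc-required`
+- [x] `doc-not-needed`
+- [ ] `doc`
+- [ ] `doc-complete`
+```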

+Label name|Usage
+|---|---
+`doc-required`|Use this label to indicate that this issue or PR impacts documentation.<br><br>**You have not updated the docs yet**; the docs will be submitted later.
+`doc-not-needed`|The code changes in this PR do not impact documentation.
+`doc`|This PR contains changes that impact documentation, **no matter whether the changes are in markdown or code files**.
+`doc-complete`|Use this label to indicate that the documentation updates are complete.
+`doc-label-missing`|The Bot applies this label when there is no doc label information in the PR, that is, when one of the following conditions is met:<br><br>- You do not provide a doc label.<br><br>- You provide multiple doc labels.<br><br>- You delete the backticks (``) in doc labels.<br>For example,<br>[x] `doc-required` ✅<br>[x] doc-required ❌<br><br>- You add blanks in square brackets.<br>For example,<br>[x] `doc-required` ✅<br>[ x ] `doc-required` ❌
+
+## References
+
+For more guides on how to make contributions to Pulsar docs, see [Pulsar Documentation Contribution Overview](./../README.md).
\ No newline at end of file
diff --git a/site2/doc-guides/naming.md b/site2/doc-guides/naming.md
new file mode 100644
index 00000000000000..f46f7d3a02dfb7
--- /dev/null
+++ b/site2/doc-guides/naming.md
@@ -0,0 +1,211 @@
+# Pulsar Pull Request Naming Convention Guide
+
+> 👩🏻‍🏫 **Summary**
+>
+> This guide explains why you need good PR titles and how to write them, with various self-explanatory examples.
+
+**TOC**
+
+- [Pulsar Pull Request Naming Convention Guide](#pulsar-pull-request-naming-convention-guide)
+  - [Why do PR titles matter?](#why-do-pr-titles-matter)
+  - [How to write good PR titles?](#how-to-write-good-pr-titles)
+    - [💡Quick examples](#💡quick-examples)
+    - [`type`](#type)
+    - [`scope`](#scope)
+      - [Pulsar](#pulsar)
+      - [Client](#client)
+    - [`Summary`](#summary)
+    - [Full examples](#full-examples)
+  - [References](#references)
+
+## Why do PR titles matter?
+
+Engineers and writers submit or review PRs almost every day.
+
+A PR title is a summary of your changes.
+
+* Vague, boring, and unclear PR titles decrease team efficiency and productivity.
+
+* PR titles should be engaging, easy to understand, and readable.
+
+Good titles often bring many benefits, including but not limited to the following:
+
+* Speed up the review process.
+
+  You can tell from the title what changes the PR introduces.
+
+* Facilitate understanding of PR changes.
+
+  * PR titles are shown in Pulsar release notes as items. Concise PR titles make your changes easier to understand.
+
+  * Especially when you read commit logs in command-line tools, clear commit messages show PR changes quickly.
+
+* Increase search efficiency.
+
+  You can skim through hundreds of commits and locate desired information quickly.
+
+* Remind you to think about your PR.
+
+  If you cannot write a PR title in a simple way (for example, [[type](#type)] [[scope](#scope)] [summary](#summary)), or you need to use several types / scopes, consider whether your PR contains **too many** changes across various scopes. If so, consider splitting this big PR into several small PRs. In this way, you might get your PRs reviewed faster.
+
+## How to write good PR titles?
+
+A PR title should be structured as follows:
+
+![alt_text](assets/naming-1.png)
+
+> 💡 **Rule**
+>
+> A good title = clear format ([type](#type) and [scope](#scope)) + self-explanatory [summary](#summary)
+
+### 💡Quick examples
+
+Here are some examples of unclear and good PR titles for your quick reference. Good PR titles are concise and self-explanatory since they tell you the changes in a clear and direct way.
+
+For more examples with correct formats, see [Full examples](#full-examples).
+
+🙌 **Examples**
+
+Vague ❌|Clear ✅
+|---|---
+Producer getting producer busy is removing existing producer from list|[fix][broker] Active producers with the same name are no longer removed from the topic map
+Forbid to read other topic's data in managedLedger layer|[improve][broker] Consumers are not allowed to read data on topics to which they are not subscribed
+Fix kinesis sink backoff class not found|[improve][connector] xx connectors can now use the Kinesis Backoff class
+K8s Function Name Length Check Allows Invalid StatefulSet|[improve][function] Function name length cannot exceed 52 characters when using Kubernetes runtime
+
+> 💡 **Steps**
+>
+> How to write a good PR title?
+>
+> 1. Select a [type](#type).
+>
+> 2. Select a [scope](#scope).
+>
+> 3. Write a [summary](#summary).
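+For instance, applying the three steps to a hypothetical broker bug fix:
+
+```
+1. type:  fix
+2. scope: broker
+3. title: [fix][broker] Fix xx when deleting a partitioned topic
+```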
+
+### `type`
+
+`type` is "what actions do you take".
+
+It must be one of the following.
+
+type|Pulsar PR label|What actions do you take?
+|---|---|---
+cleanup|[type/cleanup](https://github.com/apache/pulsar/labels/type%2Fcleanup)|Remove unused code or doc.
+improve|[type/improvement](https://github.com/apache/pulsar/labels/type%2Fimprovement)|Submit enhancements that are neither new features nor bug fixes.
+feat|[type/feature](https://github.com/apache/pulsar/labels/type%2Ffeature)|Submit new features.
+fix|[type/fix](https://github.com/apache/pulsar/labels/type%2Ffix)|Submit bug fixes.
+refactor|[type/refactor](https://github.com/apache/pulsar/labels/type%2Frefactor)|Restructure existing code while preserving its external behavior.
+revert|To be created|Revert changes.
+
+>❗️ **Note**
+>
+> - Choose correct labels for your PR so that your PR will automatically go to the correct chapter in release notes. If you do not specify a type label, the PR might go to the wrong place or not be included in the release notes at all.
+>
+> - For more information about release note automation for Pulsar and clients, see [PIP 112: Generate Release Notes Automatically](https://docs.google.com/document/d/1Ul2qIChDe8QDlDwJBICq1VviYZhdk1djKJJC5wXAGsI/edit).
+
+### `scope`
+
+`scope` is "where do you make changes".
+
+Pulsar and clients have separate release notes, so they have different scopes.
+
+>❗️ **Note**
+>
+> If your PR affects several scopes, do not choose several scope labels at the same time, since different scopes go to different chapters in release notes and your PR would then appear in several chapters, which causes redundancy. Instead, choose the single most affected label (scope).
+
+#### Pulsar
+
+`scope` and PR labels must be one of the following.
+
+scope|Pulsar PR label|Where do you make changes?
+|---|---|---
+admin|- scope/admin<br>- scope/topic-policy|- pulsar-admin<br>- REST API<br>- Java admin API
+broker|- scope/broker|It's difficult to maintain an exhaustive list since many changes belong to brokers.<br><br>Here are some frequently updated areas, including but not limited to:<br>- key_shared<br>- replication<br>- metadata<br>- compaction
+cli|- scope/tool|Pulsar CLI tools, including:<br>- pulsar<br>- pulsar-client<br>- pulsar-daemon<br>- pulsar-perf<br>- bookkeeper<br>- broker-tool
+io<br>(connector)|- scope/connector<br>- scope/connect<br>- scope/kafka|Connector
+fn<br>(function)|- scope/function|Function
+meta<br>(metadata)|- scope/zookeeper|Metadata
+monitor|- scope/metrics<br>- scope/stats|Monitoring
+proxy|- scope/proxy|Proxy
+schema|- scope/schema<br>- scope/schemaregistry|Schema
+sec<br>(security)|- scope/security<br>- scope/authentication<br>- scope/authorization|Security
+sql|- scope/sql|Pulsar SQL
+storage|- scope/bookkeeper storage|Managed ledger
+offload<br>(tiered storage)|- scope/tieredstorage|Tiered storage
+txn|- scope/transaction<br>- scope/transaction-coordinator|Transaction
+test|- scope/test|Code tests
+ci|- scope/ci|CI workflow changes or debugging
+build|- scope/build|- Dependency (Maven)<br>- Docker<br>- Build or release script
+misc|- scope/misc|Changes that do not belong to any scopes above.
+doc|- doc|Documentation
+site<br>(website)|- website|Website
+
+#### Client
+
+The following changes are shown on the client release notes.
+
+`scope` and PR label must be one of the following.
+
+scope|Pulsar PR label|Where do you make changes?
+|---|---|---
+client<br>(Java client)|scope/client-java|Java client
+ws<br>(WebSocket)|scope/client-websocket|[WebSocket API](https://pulsar.apache.org/docs/next/client-libraries-websocket/)
+
+### `Summary`
+
+`Summary` is a single line that best sums up the changes made in the commit.
+
+Follow the best practices below.
+
+* Keep the summary concise and descriptive.
+
+* Use the second person and present tense.
+
+* Write [complete sentences](https://www.grammarly.com/blog/sentence-fragment/#:~:text=What's%20a%20sentence%20fragment%3F,%2C%20a%20verb%2C%20or%20both.) rather than fragments.
+
+* Capitalize the first letter.
+
+* No period at the end. ❌
+
+* Do not include back quotes (``).
+
+* Limit the length to 50 characters.
+
+* If you cherry-pick changes to branches, name your PR title the same as the original PR title and label your PR with cherry-pick related labels.
+
+* Do not use [GitHub keywords](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword) followed by a #<issue-number>. This information should be provided in PR descriptions or commit messages rather than in PR titles. ❌
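+Putting these rules together, here is an invented summary that satisfies them, next to a variant that breaks them:
+
+```
+[improve][broker] Reduce xx lookup retries ✅
+[improve][broker] reduce xx lookup retries. ❌ (lowercase first letter, trailing period)
+```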
+
+### Full examples
+
+As explained in the [How to write good PR titles](#how-to-write-good-pr-titles) chapter:
+
+> 💡 **Rule**
+>
+> A good title = clear format ([type](#type) and [scope](#scope)) + self-explanatory [summary](#summary)
+
+Here are some format examples. For self-explanatory summary examples, see [Quick examples](#quick-examples).
+
+Changes|Unclear format ❌|Clear format ✅
+---|---|---
+Submit breaking changes|[Breaking change] xxx|[feat][broker]! Support xx
+Submit PIP changes|[PIP-198] Support xx|[feat][broker] PIP-198: Support xx
+Cherry-pick changes|[Branch-2.9] Fix xxx issue.|[fix][broker][branch-2.9] Fix xxx issue
+Revert changes|Revert xxx|[revert][broker] Revert changes about xxx
+Add features|- Adding xx feature<br>- Support delete schema forcefully|- [feat][java client] Add xx feature<br>- [feat][schema] Support xx
+Fix bugs|[Issue 14633][pulsar-broker] Fixed xxx|[fix][broker] Fix xxx
+Submit improvements|- Enhances xx<br>- Bump netty version to 4.1.75|- [improve][sql] Improve xx performance<br>- [improve][build] Bump Netty version to 4.1.75
+Update tests|reduce xx test flakiness|[improve][test] Reduce xxx flaky tests
+Update docs|- [Doc] add explanations for xxx<br>- 2.8.3 Release Notes<br>- Fix typos in xx|- [feat][doc] Add explanations for xxx<br>- [feat][doc] Add 2.8.3 release note<br>- [fix][doc] Fix typos in xx
+Update website|[Website] adjust xxx|[improve][site] Adjust xxx
+Update instructions/guidelines|Update xxx guideline|[improve][doc] Update xx guidelines
+
+## References
+
+For more guides on how to make contributions to Pulsar docs, see [Pulsar Documentation Contribution Overview](./../README.md).
\ No newline at end of file
diff --git a/site2/doc-guides/preview.md b/site2/doc-guides/preview.md
new file mode 100644
index 00000000000000..ae4c561c6b0a60
--- /dev/null
+++ b/site2/doc-guides/preview.md
@@ -0,0 +1,164 @@
+# Pulsar Content Preview Guide
+
+> 👩🏻‍🏫 **Summary**
+>
+> This guide explains why and how to preview Pulsar content locally, with detailed steps and various examples.
+
+**TOC**
+
+- [Pulsar Content Preview Guide](#pulsar-content-preview-guide)
+  - [Why preview changes locally?](#why-preview-changes-locally)
+  - [How to preview changes locally?](#how-to-preview-changes-locally)
+    - [Prerequisites](#prerequisites)
+    - [Preview doc (markdown) changes](#preview-doc-markdown-changes)
+    - [Preview doc (Java API) changes](#preview-doc-java-api-changes)
+    - [Preview website changes](#preview-website-changes)
+    - [Stop preview](#stop-preview)
+    - [Maintenance info](#maintenance-info)
+  - [References](#references)
+
+## Why preview changes locally?
+
+It is **required** to preview your changes locally and attach the preview screenshots in your PR description. Doing so brings many benefits, including but not limited to:
+
+* You can test your writing.
+
+  It is a way to check whether you use the correct [Pulsar Documentation Writing Syntax](./syntax.md) and to debug issues. You **must ensure** docs can be compiled and published correctly.
+
+* You can get your PR merged more quickly.
+
+  Reviewers see your changes clearly, which speeds up the review process.
+
+## How to preview changes locally?
+
+Pulsar documentation is built using Docusaurus. To preview your changes as you edit the files, you can run a local development server that serves your website and reflects the latest changes.
+
+### Prerequisites
+
+To verify docs are built correctly before submitting a contribution, you should set up your local environment to build and display the docs locally.
+
+* Node >= 16.14
+
+* Yarn >= 1.5
+
+* Although you can use Linux, macOS, or Windows to build the Pulsar documentation locally, macOS is the preferred build environment, as it offers the most complete support for documentation building.
+
+### Preview doc (markdown) changes
+
+Follow these steps to build the doc (markdown) preview on your local machine.
+
+1. Go to the correct repo.
+
+   ```
+   cd pulsar/site2/website
+   ```
+
+2. Run the following command to preview changes.
+
+   * Preview **master** changes
+
+     If you update master docs, use the following command.
+
+     ```
+     sh start.sh
+     ```
+
+   * Preview **historical** changes
+
+     If you update versioned docs, use the following command. It may take a few more minutes to complete the process.
+
+     ```
+     sh start.sh <version>
+     ```
+
+     ![alt_text](assets/preview-1.png)
+
+3. By default, a browser window opens at [http://localhost:3000](http://localhost:3000) to show the changes.
+
+   ![alt_text](assets/preview-2.png)
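+As a concrete (hypothetical) session, previewing the 2.9.x doc set end to end, assuming `start.sh` accepts the version as an argument as shown above:
+
+```shell
+cd pulsar/site2/website
+sh start.sh 2.9.x
+# a browser window opens at http://localhost:3000 with the preview
+```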
+### Preview doc (Java API) changes
+
+Follow these steps to build the doc (Java API) preview on your local machine on the **master** branch.
+
+1. Go to the correct repo.
+
+   ```
+   cd pulsar/site2/tools
+   ```
+
+2. Run the following command to generate the `.html` files.
+
+   ```
+   sh javadoc-gen.sh
+   ```
+
+3. Open the target `.html` file to preview the updates.
+
+   For example, if you change [ConsumerBuilder.java](https://github.com/apache/pulsar/blob/master/pulsar-client-api/src/main/java/org/apache/pulsar/client/api/ConsumerBuilder.java) for the [Pulsar Java docs](https://pulsar.apache.org/api/client/2.11.0/org/apache/pulsar/client/api/ConsumerBuilder.html), you can navigate to the `generated-site/api/client/{version}/org/apache/pulsar/client/api/` directory and open the `ConsumerBuilder.html` file to preview the updates.
+
+### Preview website changes
+
+Pulsar website changes refer to all the changes made to the Pulsar website, including but not limited to the following pages:
+
+* [Release Notes page](https://pulsar.apache.org/release-notes/) ✅
+* [Ecosystem page](https://pulsar.apache.org/ecosystem) ✅
+* [Case studies page](https://pulsar.apache.org/case-studies) ✅
+* …
+* (except docs ❌)
+
+Follow these steps to build the website preview on your local machine.
+
+1. Go to the correct repo.
+
+   ```
+   cd pulsar-site/site2/website-next
+   ```
+
+2. Run the following command to preview changes.
+
+   * Preview **master** changes
+
+     If you submit changes to master, use the following command.
+
+     ```
+     ./preview.sh
+     ```
+
+   * Preview **historical** changes
+
+     ```
+     ./preview.sh <version> …
+     ```
+
+     > ❗️ **Note**
+     >
+     > * Use a space between `<version> <version>`.
+     >
+     > * If you want to preview multiple version changes, append each `<version>` separated with blanks.
+     >
+     > For example, `./preview.sh 2.9.1 2.9.2 2.9.3`.
+
+### Stop preview
+
+If you want to stop the preview, use one of the following methods.
+
+* Method 1: Switch to your command-line interface and press **Control+C**.
+
+* Method 2: Switch to your browser and close the preview page.
+
+### Maintenance info
+
+* For the old Pulsar website, `yarn start` previews all (master + historical) changes. However, to speed up the build process, for the new Pulsar website, `./preview.sh` only previews master changes.
+
+## References
+
+For more guides on how to make contributions to Pulsar docs, see [Pulsar Documentation Contribution Overview](./../README.md).
\ No newline at end of file
diff --git a/site2/doc-guides/syntax.md b/site2/doc-guides/syntax.md
new file mode 100644
index 00000000000000..8c1cab39362c27
--- /dev/null
+++ b/site2/doc-guides/syntax.md
@@ -0,0 +1,295 @@
+# Pulsar Documentation Writing Syntax Guide
+
+> 👩🏻‍🏫 **Summary**
+>
+> This guide explains how to write Pulsar documentation using MDX-compatible markdown syntax.
+
+**TOC**
+
+- [Pulsar Documentation Writing Syntax Guide](#pulsar-documentation-writing-syntax-guide)
+  - [Background](#background)
+    - [Why use new markdown syntax?](#why-use-new-markdown-syntax)
+    - [How to test doc changes?](#how-to-test-doc-changes)
+  - [Syntax](#syntax)
+    - [Markdown](#markdown)
+    - [Tab](#tab)
+    - [Code blocks](#code-blocks)
+    - [Admonitions](#admonitions)
+    - [Assets](#assets)
+    - [Indentation & space](#indentation--space)
+    - [Metadata](#metadata)
+    - [Tables](#tables)
+    - [Links](#links)
+      - [Anchor links](#anchor-links)
+      - [Links to internal documentation](#links-to-internal-documentation)
+      - [Links to external documentation](#links-to-external-documentation)
+      - [Link to a specific line of code](#link-to-a-specific-line-of-code)
+    - [Authoritative sources](#authoritative-sources)
+    - [Escape](#escape)
+    - [Headings](#headings)
+  - [References](#references)
+
+## Background
+
+The Pulsar documentation uses [Markdown](https://www.markdownguide.org/basic-syntax/) as its markup language and [Docusaurus](https://docusaurus.io/) for generating the documentation and website.
+
+> 🔴 **BREAKING CHANGE**
+>
+> From 2022/5/18, you need to use **Markdown syntax that is compatible with MDX**. Otherwise, your changes cannot be recognized by MDX and rendered properly, and your PR cannot be merged.
+
+### Why use new markdown syntax?
+
+The new Pulsar website was launched on 2022/5/11. It is upgraded to Docusaurus V2, which uses MDX as the parsing engine. MDX can do much more than parse standard Markdown syntax; it can render React components inside your documents as well. However, **some previous documentation uses Markdown syntax that is incompatible with MDX**. Consequently, you need to change the way you write.
+
+### How to test doc changes?
+
+- You can play with the MDX format in the **[MDX Playground](https://mdxjs.com/playground/)**. Write some MDX to find out what it turns into. You can see the rendered result, the generated code, and the intermediary ASTs, which can be helpful for debugging or exploring.
+
+- For how to test doc changes locally, see [Pulsar Content Preview Guide](./preview.md).
+
+## Syntax
+
+> ❗️**Note**
+>
+> This guide just highlights **some** important rules and frequently used syntax that is **different from the Markdown syntax used in the previous docs**. For the complete syntax guide, see [Docusaurus - Markdown Features](https://docusaurus.io/docs/next/markdown-features) and [MDX - Markdown](https://mdxjs.com/docs/what-is-mdx/#markdown).
+
+### Markdown
+
+* **Use Markdown rather than HTML** as much as possible, or else MDX may not recognize it.
+
+  For example, when constructing complex tables, do not use HTML (`<table>`).
+
+* Use **closing** tags.
+
+  `<li>` and `<br>` are especially useful for constructing complex tables, such as _creating a list_ and _adding a blank line_.
+
+  🙌 **Examples**
+
+  ```
+  <li>xxx ❌
+  <br>xxx ❌
+  <li>xxx</li> ✅
+  ```
+
+  ![alt_text](assets/syntax-1.png)
+
+  ```
+  <br>xxx → wrap text in "next" line ✅
+  <br><br>xxx → wrap text in "next next" line ✅
+  ```
+
+  ![alt_text](assets/syntax-2.png)
+
+* If you need to use HTML, use **React** syntax for HTML tags.
+
+  🙌 **Examples**
+
+  ```
+  ❌
+
+  deleted ✅
+  ```
+
+### Tab
+
+The image below shows the differences in writing multiple tabs before and after. For how to write multiple tabs, see [Tabs](https://docusaurus.io/docs/next/markdown-features/tabs).
+
+![alt_text](assets/syntax-3.png)
+
+### Code blocks
+
+For how to use syntax highlighting and the supported languages, see [Syntax highlighting](https://docusaurus.io/docs/next/markdown-features/code-blocks#syntax-highlighting).
+
+### Admonitions
+
+The image below shows the differences in writing admonitions before and after.
+
+For how to write admonitions, see [Admonitions](https://docusaurus.io/docs/next/markdown-features/admonitions).
+
+![alt_text](assets/syntax-4.png)
+
+### Assets
+
+Add a slash `/` before the asset path.
+
+🙌 **Examples**
+
+```
+![Page Linking](/assets/page-linking.png)
+```
+
+### Indentation & space
+
+* Use the same indentation for running texts and code blocks.
+
+  🙌 **Examples**
+
+  ![alt_text](assets/syntax-5.png)
+
+* For the content block after an **ordered list**, indent the content block by only 3 spaces (not 4 spaces).
+
+* For the content block after an **unordered list**, indent the content block by only 2 spaces.
+
+  🙌 **Examples**
+
+  ![alt_text](assets/syntax-6.png)
+
+  > 💡 **Tip**
+  >
+  > You can set the **Tab Size** in VS Code settings.
+  >
+  > ![alt_text](assets/syntax-7.png)
+
+* Insert **only one** empty line (not two empty lines or more) between code blocks and running texts.
+
+  🙌 **Examples**
+
+  ![alt_text](assets/syntax-8.png)
+
+  ![alt_text](assets/syntax-9.png)
+
+  ![alt_text](assets/syntax-10.png)
+
+### Metadata
+
+If you create a new `.md` file, add quotes around the value of sidebar_label.
+
+🙌 **Examples**
+
+![alt_text](assets/syntax-11.png)
+
+### Tables
+
+To make tables easier to maintain, consider adding additional spaces to the column widths to make them consistent.
+
+🙌 **Examples**
+
+```
+| App name | Description         | Requirements   |
+|:---------|:--------------------|:---------------|
+| App 1    | Description text 1. | Requirements 1 |
+| App 2    | Description text 2. | None           |
+```
+
+To format tables easily, you can install a plugin or extension in your editor:
+
+* Visual Studio Code: [Markdown Table Prettifier](https://marketplace.visualstudio.com/items?itemName=darkriszty.markdown-table-prettify)
+
+* Sublime Text: [Markdown Table Formatter](https://packagecontrol.io/packages/Markdown%20Table%20Formatter)
+
+* Atom: [Markdown Table Formatter](https://atom.io/packages/markdown-table-formatter)
+
+### Links
+
+Use links instead of summarizing to help preserve a single source of truth in Pulsar documentation.
+
+#### Anchor links
+
+Headings generate anchor links when rendered.
+
+🙌 **Examples**
+
+`## This is an example` generates the anchor `#this-is-an-example`.
+
+> ❗️ **Note**
+>
+> * Avoid crosslinking docs to headings unless you need to link to a specific section of the document. This avoids breaking anchors in the future in case the heading is changed.
+>
+> * If possible, avoid changing headings, because they are not only linked internally. There are various links to Pulsar documentation on the internet, such as tutorials, presentations, StackOverflow posts, and other sources.
+
+#### Links to internal documentation
+
+Internal refers to documentation in the same Pulsar project.
+ +General rules: + +* Use relative links rather than absolute URLs. + +* Don’t prepend ./ or ../../ to links to files or directories. + +🙌 **Examples** + +Scenario| ✅| ❌ +|---|---|--- +Crosslink to other markdown file
+
+### Admonitions
+
+The image below shows the differences in writing admonitions before and after.
+
+For how to write admonitions, see [Admonitions](https://docusaurus.io/docs/next/markdown-features/admonitions).
+
+![alt_text](assets/syntax-4.png)
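+
+For reference, a minimal note admonition looks like this (other types such as `tip` and `caution` exist; see the Admonitions page linked above):
+
+```
+:::note
+
+This is a note.
+
+:::
+```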
+
+### Assets
+
+Add a slash `/` before the asset path.
+
+🙌 **Examples**
+
+```
+![Page Linking](/assets/page-linking.png)
+```
+
+### Indentation & space
+
+* Use the same indentation for running texts and code blocks.
+
+  🙌 **Examples**
+
+  ![alt_text](assets/syntax-5.png)
+
+* For the content block after an **ordered list**, indent the content block by only 3 spaces (not 4 spaces).
+
+* For the content block after an **unordered list**, indent the content block by only 2 spaces.
+
+  🙌 **Examples**
+
+  ![alt_text](assets/syntax-6.png)
+
+  > 💡 **Tip**
+  >
+  > You can set the **Tab Size** in VS Code settings.
+  >
+  > ![alt_text](assets/syntax-7.png)
+
+* Insert **only one** empty line (not two empty lines or more) between code blocks and running texts.
+
+  🙌 **Examples**
+
+  ![alt_text](assets/syntax-8.png)
+
+  ![alt_text](assets/syntax-9.png)
+
+  ![alt_text](assets/syntax-10.png)
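+
+For instance, a sketch of the two list rules above:
+
+```
+1. An ordered-list step.
+
+   This block is indented by 3 spaces, so it stays inside step 1.
+
+* An unordered-list item.
+
+  This block is indented by 2 spaces, so it stays inside the item.
+```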
+
+### Metadata
+
+If you create a new `.md` file, add quotes around the value of `sidebar_label`.
+
+🙌 **Examples**
+
+![alt_text](assets/syntax-11.png)
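+
+In other words, the front matter of a new page should look something like this (the `id`, `title`, and label values below are made up for illustration):
+
+```
+---
+id: functions-overview
+title: Pulsar Functions overview
+sidebar_label: "Functions overview"
+---
+```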
+
+### Tables
+
+To make tables easier to maintain, consider padding the columns with extra spaces so that the column widths are consistent.
+
+🙌 **Examples**
+
+```
+| App name | Description          | Requirements   |
+|:---------|:---------------------|:---------------|
+| App 1    | Description text 1.  | Requirements 1 |
+| App 2    | Description text 2.  | None           |
+```
+
+To format tables easily, you can install a plugin or extension in your editor, such as:
+
+* Visual Studio Code: [Markdown Table Prettifier](https://marketplace.visualstudio.com/items?itemName=darkriszty.markdown-table-prettify)
+
+* Sublime Text: [Markdown Table Formatter](https://packagecontrol.io/packages/Markdown%20Table%20Formatter)
+
+* Atom: [Markdown Table Formatter](https://atom.io/packages/markdown-table-formatter)
+
+### Links
+
+Use links instead of summarizing to help preserve a single source of truth in Pulsar documentation.
+
+#### Anchor links
+
+Headings generate anchor links when rendered.
+
+🙌 **Examples**
+
+`## This is an example` generates the anchor `#this-is-an-example`.
+
+> ❗️ **Note**
+>
+> * Avoid crosslinking docs to headings unless you need to link to a specific section of the document. This avoids breaking anchors in the future in case the heading is changed.
+>
+> * If possible, avoid changing headings, because they’re not only linked internally. There are various links to Pulsar documentation on the internet, such as tutorials, presentations, StackOverflow posts, and other sources.
+
+#### Links to internal documentation
+
+Internal refers to documentation in the same Pulsar project.
+
+General rules:
+
+* Use relative links rather than absolute URLs.
+
+* Don’t prepend `./` or `../../` to links to files or directories.
+
+🙌 **Examples**
+
+Scenario| ✅| ❌
+|---|---|---
+Crosslink to other markdown file <br/><br/> (`/path/xx/` is not needed)|`[Function overview](function-overview.md)`|- `[Function overview](functions-overview)` <br/><br/> - `[Function overview](https://pulsar.apache.org/docs/next/functions-overview/)` <br/><br/> - `[Function overview](../../function-overview.md)`
+Crosslink to other chapters in the same markdown file <br/><br/> (`#` and `-` are needed)|`[Install builtin connectors (optional)](#install-builtin-connectors-optional)`|N/A
+
+#### Links to external documentation
+
+When describing interactions with external software, it’s often helpful to include links to external documentation. When possible, make sure that you’re linking to an [authoritative source](#authoritative-sources).
+
+For example, if you’re describing a feature in Microsoft’s Active Directory, include a link to official Microsoft documentation.
+
+#### Link to a specific line of code
+
+Use a **permalink** when linking to a specific line in a file, to ensure users land on the line you’re referring to even as lines of code change over time.
+
+![alt_text](assets/syntax-12.png)
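+
+A permalink pins the commit SHA in the URL, so the link keeps pointing at the same line even after the file changes. For example (hypothetical SHA, path, and line number):
+
+```
+https://github.com/apache/pulsar/blob/<commit-sha>/pom.xml#L42
+```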
+
+### Authoritative sources
+
+When citing external information, use sources that are written by the people who created the item or product in question. These sources are the most likely to be accurate and remain up to date.
+
+🙌 **Examples**
+
+- Authoritative sources include the following ✅
+
+  * Official documentation for a product.
+
+    For example, if you’re setting up an interface with the Google OAuth 2 authorization server, include a link to Google’s documentation.
+
+  * Official documentation for a project.
+
+    For example, if you’re citing NodeJS functionality, refer directly to [NodeJS documentation](https://nodejs.org/en/docs/).
+
+  * Books from an authoritative publisher.
+
+- Authoritative sources do not include the following ❌
+
+  * Personal blog posts.
+
+  * Documentation from a company that describes another company’s product.
+
+  * Non-trustworthy articles.
+
+  * Discussions on forums such as Stack Overflow.
+
+While many of these sources can help you learn skills or features, they can become obsolete quickly. Nobody is obliged to maintain any of these sites. Therefore, we should avoid using them as reference literature.
+
+Non-authoritative sources are acceptable only if there is no equivalent authoritative source. Even then, focus on non-authoritative sources that are extensively cited or peer-reviewed.
+
+### Escape
+
+Use the following characters to escape special characters.
+
+🙌 **Examples**
+
+✅ | ❌
+|---|---
+`List` <br/><br/> This error shows up ![alt_text](assets/syntax-13.png)|List`` <br/><br/> Here is an [example PR](https://github.com/apache/pulsar/pull/15389/files#diff-472b2cb6fc28a0845d2f1d397dc4e6e7fa083dfe4f91d6f9dca88ad01d06a971).
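+
+As a general sketch (independent of the table above), a backslash escapes markdown special characters:
+
+```
+\`List\`        renders as literal backticks around List
+\*not italic\*  renders as literal asterisks
+```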
+
+### Headings
+
+* Each documentation page begins with a **level 2** heading (`##`). This becomes the h1 element when the page is rendered to HTML.
+
+* Do not skip heading levels. For example, do not jump from `##` straight to `####`.
+
+* Leave one blank line before and after the heading.
+
+* Do not use links as part of heading text.
+
+* When you change the heading text, the anchor link changes. To avoid broken links:
+
+  * Do not use step numbers in headings.
+
+  * When possible, do not use words that might change in the future.
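+
+For instance, a page skeleton that follows these rules:
+
+```
+## Page title
+
+### First section
+
+Running text.
+
+### Second section
+```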
+
+## References
+
+For more guides on how to make contributions to Pulsar docs, see [Pulsar Documentation Contribution Overview](./../README.md).
\ No newline at end of file

From afcdbf0e2b5fb905e1f82f0220436f8f9ec0c742 Mon Sep 17 00:00:00 2001
From: tison
Date: Wed, 19 Oct 2022 09:49:47 +0800
Subject: [PATCH 13/28] [cleanup][doc] Remove deprecated docs (#18064)

---
 .../version-2.10.0-deprecated/about.md | 56 - .../adaptors-kafka.md | 276 -- .../adaptors-spark.md | 91 - .../adaptors-storm.md | 96 - .../admin-api-brokers.md | 286 -- .../admin-api-clusters.md | 318 -- .../admin-api-functions.md | 830 ---- .../admin-api-namespaces.md | 1267 ------ .../admin-api-non-partitioned-topics.md | 8 - .../admin-api-non-persistent-topics.md | 8 - .../admin-api-overview.md | 144 - .../admin-api-packages.md | 390 -- .../admin-api-partitioned-topics.md | 8 - .../admin-api-permissions.md | 189 - .../admin-api-persistent-topics.md | 8 - .../admin-api-schemas.md | 7 - .../admin-api-tenants.md | 242 -- .../admin-api-topics.md | 2472 ------------ .../administration-geo.md | 302 -- .../administration-isolation.md | 124 - .../administration-load-balance.md | 280 -- .../administration-proxy.md | 127 - .../administration-pulsar-manager.md | 216 -- .../administration-stats.md | 64 - .../administration-upgrade.md | 168 - .../administration-zk-bk.md | 378 -- .../client-libraries-cgo.md | 581 --- .../client-libraries-cpp.md | 765 ---- .../client-libraries-dotnet.md | 456 --- .../client-libraries-go.md | 1064 ------ .../client-libraries-java.md | 1543 -------- .../client-libraries-node.md | 652 ---- .../client-libraries-python.md | 641 ---- .../client-libraries-rest.md | 134 - .../client-libraries-websocket.md | 664 ---- .../client-libraries.md | 45 - .../concepts-architecture-overview.md | 176 - .../concepts-authentication.md | 9 - .../concepts-clients.md | 92 - .../concepts-messaging.md | 989 ----- .../concepts-multi-tenancy.md | 67 - .../concepts-multiple-advertised-listeners.md | 44 - .../concepts-overview.md | 31 - .../concepts-proxy-sni-routing.md | 180 - .../concepts-replication.md | 69 - .../concepts-tiered-storage.md | 18 - .../concepts-topic-compaction.md | 37 - .../concepts-transactions.md | 30 - .../cookbooks-bookkeepermetadata.md | 21 - .../cookbooks-compaction.md | 142 - .../cookbooks-deduplication.md | 151 - .../cookbooks-encryption.md | 184 - .../cookbooks-message-queue.md | 127 - .../cookbooks-non-persistent.md | 63 - .../cookbooks-partitioned.md | 7 - .../cookbooks-retention-expiry.md | 520 --- .../cookbooks-tiered-storage.md | 344 -- .../version-2.10.0-deprecated/deploy-aws.md | 271 -- .../deploy-bare-metal-multi-cluster.md | 452 --- .../deploy-bare-metal.md | 568 --- .../version-2.10.0-deprecated/deploy-dcos.md | 200 - .../deploy-docker.md | 60 - .../deploy-kubernetes.md | 11 - .../deploy-monitoring.md | 138 - .../develop-load-manager.md | 227 --
.../develop-plugin.md | 139 - .../develop-schema.md | 62 - .../develop-tools.md | 111 - .../developing-binary-protocol.md | 637 ---- .../functions-cli.md | 198 - .../functions-debug.md | 538 --- .../functions-deploy.md | 262 -- .../functions-develop.md | 1678 -------- .../functions-metrics.md | 7 - .../functions-overview.md | 209 - .../functions-package.md | 493 --- .../functions-runtime.md | 406 -- .../functions-worker.md | 405 -- ...tting-started-concepts-and-architecture.md | 16 - .../getting-started-docker.md | 219 -- .../getting-started-helm.md | 447 --- .../getting-started-pulsar.md | 72 - .../getting-started-standalone.md | 326 -- .../version-2.10.0-deprecated/helm-deploy.md | 434 --- .../version-2.10.0-deprecated/helm-install.md | 38 - .../helm-overview.md | 103 - .../version-2.10.0-deprecated/helm-prepare.md | 80 - .../version-2.10.0-deprecated/helm-tools.md | 43 - .../version-2.10.0-deprecated/helm-upgrade.md | 43 - .../io-aerospike-sink.md | 26 - .../io-canal-source.md | 235 -- .../io-cassandra-sink.md | 59 - .../io-cdc-debezium.md | 549 --- .../version-2.10.0-deprecated/io-cdc.md | 26 - .../version-2.10.0-deprecated/io-cli.md | 666 ---- .../io-connectors.md | 249 -- .../io-debezium-source.md | 770 ---- .../version-2.10.0-deprecated/io-debug.md | 407 -- .../version-2.10.0-deprecated/io-develop.md | 421 -- .../io-dynamodb-source.md | 82 - .../io-elasticsearch-sink.md | 244 -- .../io-file-source.md | 173 - .../io-flume-sink.md | 58 - .../io-flume-source.md | 58 - .../io-hbase-sink.md | 69 - .../io-hdfs2-sink.md | 66 - .../io-hdfs3-sink.md | 61 - .../io-influxdb-sink.md | 122 - .../version-2.10.0-deprecated/io-jdbc-sink.md | 165 - .../io-kafka-sink.md | 73 - .../io-kafka-source.md | 240 -- .../io-kinesis-sink.md | 82 - .../io-kinesis-source.md | 83 - .../io-mongo-sink.md | 58 - .../io-netty-source.md | 243 -- .../io-nsq-source.md | 21 - .../version-2.10.0-deprecated/io-overview.md | 163 - .../io-quickstart.md | 963 ----- .../io-rabbitmq-sink.md | 87 - .../io-rabbitmq-source.md | 87 - .../io-redis-sink.md | 158 - .../version-2.10.0-deprecated/io-solr-sink.md | 67 - .../io-twitter-source.md | 28 - .../version-2.10.0-deprecated/io-twitter.md | 7 - .../version-2.10.0-deprecated/io-use.md | 1787 --------- .../performance-pulsar-perf.md | 283 -- .../reference-cli-tools.md | 1039 ----- .../reference-configuration.md | 857 ----- .../reference-connector-admin.md | 12 - .../reference-metrics.md | 617 --- .../reference-pulsar-admin.md | 3394 ----------------- .../reference-rest-api-overview.md | 18 - .../reference-terminology.md | 176 - .../schema-evolution-compatibility.md | 207 - .../schema-get-started.md | 102 - .../schema-manage.md | 850 ----- .../schema-understand.md | 576 --- .../security-athenz.md | 98 - .../security-authorization.md | 130 - .../security-basic-auth.md | 127 - .../security-bouncy-castle.md | 157 - .../security-encryption.md | 335 -- .../security-extending.md | 83 - .../version-2.10.0-deprecated/security-jwt.md | 331 -- .../security-kerberos.md | 443 --- .../security-oauth2.md | 282 -- .../security-overview.md | 37 - .../security-policy-and-supported-versions.md | 65 - .../security-tls-authentication.md | 222 -- .../security-tls-keystore.md | 345 -- .../security-tls-transport.md | 313 -- .../security-token-admin.md | 183 - .../sql-deployment-configurations.md | 277 -- .../sql-getting-started.md | 187 - .../version-2.10.0-deprecated/sql-overview.md | 18 - .../version-2.10.0-deprecated/sql-rest-api.md | 192 - .../version-2.10.0-deprecated/standalone.md | 268 -- 
.../tiered-storage-aliyun.md | 257 -- .../tiered-storage-aws.md | 329 -- .../tiered-storage-azure.md | 264 -- .../tiered-storage-filesystem.md | 631 --- .../tiered-storage-gcs.md | 319 -- .../tiered-storage-overview.md | 52 - .../transaction-api.md | 172 - .../transaction-guarantee.md | 17 - .../version-2.10.0-deprecated/txn-how.md | 151 - .../version-2.10.0-deprecated/txn-monitor.md | 10 - .../version-2.10.0-deprecated/txn-use.md | 105 - .../version-2.10.0-deprecated/txn-what.md | 60 - .../version-2.10.0-deprecated/txn-why.md | 45 - .../window-functions-context.md | 581 --- .../version-2.10.1-deprecated/about.md | 56 - .../adaptors-kafka.md | 276 -- .../adaptors-spark.md | 91 - .../adaptors-storm.md | 96 - .../admin-api-brokers.md | 286 -- .../admin-api-clusters.md | 318 -- .../admin-api-functions.md | 830 ---- .../admin-api-namespaces.md | 1267 ------ .../admin-api-non-partitioned-topics.md | 8 - .../admin-api-non-persistent-topics.md | 8 - .../admin-api-overview.md | 144 - .../admin-api-packages.md | 390 -- .../admin-api-partitioned-topics.md | 8 - .../admin-api-permissions.md | 189 - .../admin-api-persistent-topics.md | 8 - .../admin-api-schemas.md | 7 - .../admin-api-tenants.md | 242 -- .../admin-api-topics.md | 2492 ------------ .../administration-geo.md | 298 -- .../administration-isolation.md | 124 - .../administration-load-balance.md | 278 -- .../administration-proxy.md | 127 - .../administration-pulsar-manager.md | 216 -- .../administration-stats.md | 64 - .../administration-upgrade.md | 168 - .../administration-zk-bk.md | 378 -- .../client-libraries-cgo.md | 581 --- .../client-libraries-cpp.md | 765 ---- .../client-libraries-dotnet.md | 456 --- .../client-libraries-go.md | 1064 ------ .../client-libraries-java.md | 1543 -------- .../client-libraries-node.md | 652 ---- .../client-libraries-python.md | 641 ---- .../client-libraries-rest.md | 134 - .../client-libraries-websocket.md | 664 ---- .../client-libraries.md | 45 - .../concepts-architecture-overview.md | 176 - .../concepts-authentication.md | 9 - .../concepts-clients.md | 92 - .../concepts-messaging.md | 989 ----- .../concepts-multi-tenancy.md | 67 - .../concepts-multiple-advertised-listeners.md | 44 - .../concepts-overview.md | 31 - .../concepts-proxy-sni-routing.md | 180 - .../concepts-replication.md | 69 - .../concepts-tiered-storage.md | 18 - .../concepts-topic-compaction.md | 37 - .../concepts-transactions.md | 30 - .../cookbooks-bookkeepermetadata.md | 21 - .../cookbooks-compaction.md | 142 - .../cookbooks-deduplication.md | 151 - .../cookbooks-encryption.md | 184 - .../cookbooks-message-queue.md | 127 - .../cookbooks-non-persistent.md | 63 - .../cookbooks-partitioned.md | 7 - .../cookbooks-retention-expiry.md | 520 --- .../cookbooks-tiered-storage.md | 344 -- .../version-2.10.1-deprecated/deploy-aws.md | 271 -- .../deploy-bare-metal-multi-cluster.md | 452 --- .../deploy-bare-metal.md | 568 --- .../version-2.10.1-deprecated/deploy-dcos.md | 200 - .../deploy-docker.md | 60 - .../deploy-kubernetes.md | 11 - .../deploy-monitoring.md | 138 - .../develop-load-manager.md | 227 -- .../develop-plugin.md | 139 - .../develop-schema.md | 62 - .../develop-tools.md | 111 - .../developing-binary-protocol.md | 637 ---- .../functions-cli.md | 198 - .../functions-debug.md | 538 --- .../functions-deploy.md | 262 -- .../functions-develop.md | 1678 -------- .../functions-metrics.md | 7 - .../functions-overview.md | 209 - .../functions-package.md | 493 --- .../functions-runtime.md | 406 -- .../functions-worker.md | 405 -- 
...tting-started-concepts-and-architecture.md | 16 - .../getting-started-docker.md | 219 -- .../getting-started-helm.md | 447 --- .../getting-started-pulsar.md | 72 - .../getting-started-standalone.md | 326 -- .../version-2.10.1-deprecated/helm-deploy.md | 434 --- .../version-2.10.1-deprecated/helm-install.md | 38 - .../helm-overview.md | 103 - .../version-2.10.1-deprecated/helm-prepare.md | 80 - .../version-2.10.1-deprecated/helm-tools.md | 43 - .../version-2.10.1-deprecated/helm-upgrade.md | 43 - .../io-aerospike-sink.md | 26 - .../io-canal-source.md | 235 -- .../io-cassandra-sink.md | 59 - .../io-cdc-debezium.md | 549 --- .../version-2.10.1-deprecated/io-cdc.md | 26 - .../version-2.10.1-deprecated/io-cli.md | 666 ---- .../io-connectors.md | 249 -- .../io-debezium-source.md | 770 ---- .../version-2.10.1-deprecated/io-debug.md | 407 -- .../version-2.10.1-deprecated/io-develop.md | 421 -- .../io-dynamodb-source.md | 82 - .../io-elasticsearch-sink.md | 244 -- .../io-file-source.md | 173 - .../io-flume-sink.md | 58 - .../io-flume-source.md | 58 - .../io-hbase-sink.md | 69 - .../io-hdfs2-sink.md | 66 - .../io-hdfs3-sink.md | 61 - .../io-influxdb-sink.md | 122 - .../version-2.10.1-deprecated/io-jdbc-sink.md | 165 - .../io-kafka-sink.md | 73 - .../io-kafka-source.md | 240 -- .../io-kinesis-sink.md | 82 - .../io-kinesis-source.md | 83 - .../io-mongo-sink.md | 58 - .../io-netty-source.md | 243 -- .../io-nsq-source.md | 21 - .../version-2.10.1-deprecated/io-overview.md | 163 - .../io-quickstart.md | 963 ----- .../io-rabbitmq-sink.md | 87 - .../io-rabbitmq-source.md | 87 - .../io-redis-sink.md | 158 - .../version-2.10.1-deprecated/io-solr-sink.md | 67 - .../io-twitter-source.md | 28 - .../version-2.10.1-deprecated/io-twitter.md | 7 - .../version-2.10.1-deprecated/io-use.md | 1787 --------- .../performance-pulsar-perf.md | 282 -- .../reference-cli-tools.md | 1039 ----- .../reference-configuration.md | 885 ----- .../reference-connector-admin.md | 12 - .../reference-metrics.md | 617 --- .../reference-pulsar-admin.md | 3394 ----------------- .../reference-rest-api-overview.md | 18 - .../reference-terminology.md | 176 - .../schema-evolution-compatibility.md | 207 - .../schema-get-started.md | 102 - .../schema-manage.md | 850 ----- .../schema-understand.md | 576 --- .../security-athenz.md | 98 - .../security-authorization.md | 130 - .../security-basic-auth.md | 127 - .../security-bouncy-castle.md | 157 - .../security-encryption.md | 335 -- .../security-extending.md | 83 - .../version-2.10.1-deprecated/security-jwt.md | 331 -- .../security-kerberos.md | 443 --- .../security-oauth2.md | 282 -- .../security-overview.md | 37 - .../security-policy-and-supported-versions.md | 65 - .../security-tls-authentication.md | 222 -- .../security-tls-keystore.md | 345 -- .../security-tls-transport.md | 313 -- .../security-token-admin.md | 183 - .../sql-deployment-configurations.md | 277 -- .../sql-getting-started.md | 187 - .../version-2.10.1-deprecated/sql-overview.md | 18 - .../version-2.10.1-deprecated/sql-rest-api.md | 192 - .../version-2.10.1-deprecated/standalone.md | 268 -- .../tiered-storage-aliyun.md | 257 -- .../tiered-storage-aws.md | 329 -- .../tiered-storage-azure.md | 264 -- .../tiered-storage-filesystem.md | 631 --- .../tiered-storage-gcs.md | 319 -- .../tiered-storage-overview.md | 52 - .../transaction-api.md | 172 - .../transaction-guarantee.md | 17 - .../version-2.10.1-deprecated/txn-how.md | 151 - .../version-2.10.1-deprecated/txn-monitor.md | 10 - .../version-2.10.1-deprecated/txn-use.md | 105 - 
.../version-2.10.1-deprecated/txn-what.md | 60 - .../version-2.10.1-deprecated/txn-why.md | 45 - .../window-functions-context.md | 581 --- .../version-2.8.0-deprecated/about.md | 56 - .../adaptors-kafka.md | 274 -- .../adaptors-spark.md | 91 - .../adaptors-storm.md | 96 - .../admin-api-brokers.md | 276 -- .../admin-api-clusters.md | 308 -- .../admin-api-functions.md | 820 ---- .../admin-api-namespaces.md | 1315 ------- .../admin-api-non-partitioned-topics.md | 8 - .../admin-api-non-persistent-topics.md | 8 - .../admin-api-overview.md | 133 - .../admin-api-packages.md | 381 -- .../admin-api-partitioned-topics.md | 8 - .../admin-api-permissions.md | 174 - .../admin-api-persistent-topics.md | 8 - .../admin-api-schemas.md | 7 - .../admin-api-tenants.md | 228 -- .../admin-api-topics.md | 2132 ----------- .../administration-dashboard.md | 76 - .../administration-geo.md | 215 -- .../administration-isolation.md | 115 - .../administration-load-balance.md | 256 -- .../administration-proxy.md | 86 - .../administration-pulsar-manager.md | 205 - .../administration-stats.md | 64 - .../administration-upgrade.md | 168 - .../administration-zk-bk.md | 386 -- .../client-libraries-cgo.md | 579 --- .../client-libraries-cpp.md | 408 -- .../client-libraries-dotnet.md | 434 --- .../client-libraries-go.md | 885 ----- .../client-libraries-java.md | 1035 ----- .../client-libraries-node.md | 643 ---- .../client-libraries-python.md | 456 --- .../client-libraries-websocket.md | 621 --- .../client-libraries.md | 35 - .../concepts-architecture-overview.md | 172 - .../concepts-authentication.md | 9 - .../concepts-clients.md | 92 - .../concepts-messaging.md | 700 ---- .../concepts-multi-tenancy.md | 67 - .../concepts-multiple-advertised-listeners.md | 44 - .../concepts-overview.md | 31 - .../concepts-proxy-sni-routing.md | 180 - .../concepts-replication.md | 9 - .../concepts-tiered-storage.md | 18 - .../concepts-topic-compaction.md | 37 - .../concepts-transactions.md | 30 - .../cookbooks-bookkeepermetadata.md | 21 - .../cookbooks-compaction.md | 142 - .../cookbooks-deduplication.md | 151 - .../cookbooks-encryption.md | 184 - .../cookbooks-message-queue.md | 127 - .../cookbooks-non-persistent.md | 63 - .../cookbooks-partitioned.md | 7 - .../cookbooks-retention-expiry.md | 413 -- .../cookbooks-tiered-storage.md | 342 -- .../version-2.8.0-deprecated/deploy-aws.md | 271 -- .../deploy-bare-metal-multi-cluster.md | 486 --- .../deploy-bare-metal.md | 541 --- .../version-2.8.0-deprecated/deploy-dcos.md | 200 - .../version-2.8.0-deprecated/deploy-docker.md | 60 - .../deploy-kubernetes.md | 11 - .../deploy-monitoring.md | 148 - .../develop-load-manager.md | 227 -- .../develop-schema.md | 62 - .../version-2.8.0-deprecated/develop-tools.md | 111 - .../developing-binary-protocol.md | 606 --- .../version-2.8.0-deprecated/functions-cli.md | 198 - .../functions-debug.md | 533 --- .../functions-deploy.md | 241 -- .../functions-develop.md | 1600 -------- .../functions-metrics.md | 7 - .../functions-overview.md | 209 - .../functions-package.md | 493 --- .../functions-runtime.md | 399 -- .../functions-worker.md | 386 -- ...tting-started-concepts-and-architecture.md | 16 - .../getting-started-docker.md | 176 - .../getting-started-helm.md | 441 --- .../getting-started-pulsar.md | 72 - .../getting-started-standalone.md | 271 -- .../version-2.8.0-deprecated/helm-deploy.md | 434 --- .../version-2.8.0-deprecated/helm-install.md | 44 - .../version-2.8.0-deprecated/helm-overview.md | 104 - .../version-2.8.0-deprecated/helm-prepare.md | 92 - 
.../version-2.8.0-deprecated/helm-tools.md | 43 - .../version-2.8.0-deprecated/helm-upgrade.md | 43 - .../io-aerospike-sink.md | 26 - .../io-canal-source.md | 235 -- .../io-cassandra-sink.md | 57 - .../io-cdc-debezium.md | 543 --- .../version-2.8.0-deprecated/io-cdc.md | 26 - .../version-2.8.0-deprecated/io-cli.md | 658 ---- .../version-2.8.0-deprecated/io-connectors.md | 232 -- .../io-debezium-source.md | 621 --- .../version-2.8.0-deprecated/io-debug.md | 407 -- .../version-2.8.0-deprecated/io-develop.md | 421 -- .../io-dynamodb-source.md | 80 - .../io-elasticsearch-sink.md | 173 - .../io-file-source.md | 160 - .../version-2.8.0-deprecated/io-flume-sink.md | 56 - .../io-flume-source.md | 56 - .../version-2.8.0-deprecated/io-hbase-sink.md | 67 - .../version-2.8.0-deprecated/io-hdfs2-sink.md | 64 - .../version-2.8.0-deprecated/io-hdfs3-sink.md | 59 - .../io-influxdb-sink.md | 119 - .../version-2.8.0-deprecated/io-jdbc-sink.md | 157 - .../version-2.8.0-deprecated/io-kafka-sink.md | 72 - .../io-kafka-source.md | 226 -- .../io-kinesis-sink.md | 80 - .../io-kinesis-source.md | 81 - .../version-2.8.0-deprecated/io-mongo-sink.md | 56 - .../io-netty-source.md | 241 -- .../version-2.8.0-deprecated/io-nsq-source.md | 21 - .../version-2.8.0-deprecated/io-overview.md | 164 - .../version-2.8.0-deprecated/io-quickstart.md | 963 ----- .../io-rabbitmq-sink.md | 85 - .../io-rabbitmq-source.md | 85 - .../version-2.8.0-deprecated/io-redis-sink.md | 74 - .../version-2.8.0-deprecated/io-solr-sink.md | 65 - .../io-twitter-source.md | 28 - .../version-2.8.0-deprecated/io-twitter.md | 7 - .../version-2.8.0-deprecated/io-use.md | 1787 --------- .../kubernetes-helm.md | 441 --- .../performance-pulsar-perf.md | 227 -- .../reference-cli-tools.md | 958 ----- .../reference-configuration.md | 788 ---- .../reference-connector-admin.md | 11 - .../reference-metrics.md | 552 --- .../reference-pulsar-admin.md | 3335 ---------------- .../reference-rest-api-overview.md | 18 - .../reference-terminology.md | 176 - .../schema-evolution-compatibility.md | 201 - .../schema-get-started.md | 102 - .../version-2.8.0-deprecated/schema-manage.md | 639 ---- .../schema-understand.md | 556 --- .../security-athenz.md | 98 - .../security-authorization.md | 114 - .../security-basic-auth.md | 127 - .../security-bouncy-castle.md | 157 - .../security-encryption.md | 200 - .../security-extending.md | 207 - .../version-2.8.0-deprecated/security-jwt.md | 331 -- .../security-kerberos.md | 443 --- .../security-oauth2.md | 232 -- .../security-overview.md | 36 - .../security-tls-authentication.md | 222 -- .../security-tls-keystore.md | 322 -- .../security-tls-transport.md | 295 -- .../security-token-admin.md | 183 - .../sql-deployment-configurations.md | 199 - .../sql-getting-started.md | 187 - .../version-2.8.0-deprecated/sql-overview.md | 18 - .../version-2.8.0-deprecated/sql-rest-api.md | 192 - .../standalone-docker.md | 214 -- .../version-2.8.0-deprecated/standalone.md | 271 -- .../tiered-storage-aliyun.md | 257 -- .../tiered-storage-aws.md | 329 -- .../tiered-storage-azure.md | 264 -- .../tiered-storage-filesystem.md | 630 --- .../tiered-storage-gcs.md | 319 -- .../tiered-storage-overview.md | 52 - .../transaction-api.md | 172 - .../transaction-guarantee.md | 17 - .../version-2.8.0-deprecated/txn-how.md | 151 - .../version-2.8.0-deprecated/txn-monitor.md | 10 - .../version-2.8.0-deprecated/txn-use.md | 105 - .../version-2.8.0-deprecated/txn-what.md | 60 - .../version-2.8.0-deprecated/txn-why.md | 45 - .../window-functions-context.md | 581 --- 
.../version-2.8.1-deprecated/about.md | 56 - .../adaptors-kafka.md | 274 -- .../adaptors-spark.md | 91 - .../adaptors-storm.md | 96 - .../admin-api-brokers.md | 276 -- .../admin-api-clusters.md | 308 -- .../admin-api-functions.md | 820 ---- .../admin-api-namespaces.md | 1314 ------- .../admin-api-non-partitioned-topics.md | 8 - .../admin-api-non-persistent-topics.md | 8 - .../admin-api-overview.md | 133 - .../admin-api-packages.md | 381 -- .../admin-api-partitioned-topics.md | 8 - .../admin-api-permissions.md | 174 - .../admin-api-persistent-topics.md | 8 - .../admin-api-schemas.md | 7 - .../admin-api-tenants.md | 228 -- .../admin-api-topics.md | 2133 ----------- .../administration-dashboard.md | 76 - .../administration-geo.md | 215 -- .../administration-isolation.md | 115 - .../administration-load-balance.md | 256 -- .../administration-proxy.md | 86 - .../administration-pulsar-manager.md | 205 - .../administration-stats.md | 64 - .../administration-upgrade.md | 168 - .../administration-zk-bk.md | 386 -- .../client-libraries-cgo.md | 579 --- .../client-libraries-cpp.md | 708 ---- .../client-libraries-dotnet.md | 434 --- .../client-libraries-go.md | 885 ----- .../client-libraries-java.md | 1038 ----- .../client-libraries-node.md | 643 ---- .../client-libraries-python.md | 456 --- .../client-libraries-websocket.md | 621 --- .../client-libraries.md | 35 - .../concepts-architecture-overview.md | 172 - .../concepts-authentication.md | 9 - .../concepts-clients.md | 92 - .../concepts-messaging.md | 700 ---- .../concepts-multi-tenancy.md | 67 - .../concepts-multiple-advertised-listeners.md | 44 - .../concepts-overview.md | 31 - .../concepts-proxy-sni-routing.md | 180 - .../concepts-replication.md | 9 - .../concepts-tiered-storage.md | 18 - .../concepts-topic-compaction.md | 37 - .../concepts-transactions.md | 30 - .../cookbooks-bookkeepermetadata.md | 21 - .../cookbooks-compaction.md | 142 - .../cookbooks-deduplication.md | 151 - .../cookbooks-encryption.md | 184 - .../cookbooks-message-queue.md | 127 - .../cookbooks-non-persistent.md | 63 - .../cookbooks-partitioned.md | 7 - .../cookbooks-retention-expiry.md | 413 -- .../cookbooks-tiered-storage.md | 342 -- .../version-2.8.1-deprecated/deploy-aws.md | 271 -- .../deploy-bare-metal-multi-cluster.md | 486 --- .../deploy-bare-metal.md | 541 --- .../version-2.8.1-deprecated/deploy-dcos.md | 200 - .../version-2.8.1-deprecated/deploy-docker.md | 60 - .../deploy-kubernetes.md | 11 - .../deploy-monitoring.md | 148 - .../develop-load-manager.md | 227 -- .../develop-schema.md | 62 - .../version-2.8.1-deprecated/develop-tools.md | 111 - .../developing-binary-protocol.md | 606 --- .../version-2.8.1-deprecated/functions-cli.md | 198 - .../functions-debug.md | 533 --- .../functions-deploy.md | 262 -- .../functions-develop.md | 1600 -------- .../functions-metrics.md | 7 - .../functions-overview.md | 209 - .../functions-package.md | 493 --- .../functions-runtime.md | 399 -- .../functions-worker.md | 386 -- ...tting-started-concepts-and-architecture.md | 16 - .../getting-started-docker.md | 176 - .../getting-started-helm.md | 441 --- .../getting-started-pulsar.md | 72 - .../getting-started-standalone.md | 271 -- .../version-2.8.1-deprecated/helm-deploy.md | 434 --- .../version-2.8.1-deprecated/helm-install.md | 44 - .../version-2.8.1-deprecated/helm-overview.md | 104 - .../version-2.8.1-deprecated/helm-prepare.md | 92 - .../version-2.8.1-deprecated/helm-tools.md | 43 - .../version-2.8.1-deprecated/helm-upgrade.md | 43 - .../io-aerospike-sink.md | 26 - 
.../io-canal-source.md | 235 -- .../io-cassandra-sink.md | 57 - .../io-cdc-debezium.md | 543 --- .../version-2.8.1-deprecated/io-cdc.md | 26 - .../version-2.8.1-deprecated/io-cli.md | 658 ---- .../version-2.8.1-deprecated/io-connectors.md | 232 -- .../io-debezium-source.md | 621 --- .../version-2.8.1-deprecated/io-debug.md | 407 -- .../version-2.8.1-deprecated/io-develop.md | 421 -- .../io-dynamodb-source.md | 80 - .../io-elasticsearch-sink.md | 173 - .../io-file-source.md | 160 - .../version-2.8.1-deprecated/io-flume-sink.md | 56 - .../io-flume-source.md | 56 - .../version-2.8.1-deprecated/io-hbase-sink.md | 67 - .../version-2.8.1-deprecated/io-hdfs2-sink.md | 64 - .../version-2.8.1-deprecated/io-hdfs3-sink.md | 59 - .../io-influxdb-sink.md | 119 - .../version-2.8.1-deprecated/io-jdbc-sink.md | 157 - .../version-2.8.1-deprecated/io-kafka-sink.md | 72 - .../io-kafka-source.md | 226 -- .../io-kinesis-sink.md | 80 - .../io-kinesis-source.md | 81 - .../version-2.8.1-deprecated/io-mongo-sink.md | 56 - .../io-netty-source.md | 241 -- .../version-2.8.1-deprecated/io-nsq-source.md | 21 - .../version-2.8.1-deprecated/io-overview.md | 164 - .../version-2.8.1-deprecated/io-quickstart.md | 963 ----- .../io-rabbitmq-sink.md | 85 - .../io-rabbitmq-source.md | 85 - .../version-2.8.1-deprecated/io-redis-sink.md | 74 - .../version-2.8.1-deprecated/io-solr-sink.md | 65 - .../io-twitter-source.md | 28 - .../version-2.8.1-deprecated/io-twitter.md | 7 - .../version-2.8.1-deprecated/io-use.md | 1787 --------- .../kubernetes-helm.md | 441 --- .../performance-pulsar-perf.md | 227 -- .../reference-cli-tools.md | 958 ----- .../reference-configuration.md | 789 ---- .../reference-connector-admin.md | 11 - .../reference-metrics.md | 555 --- .../reference-pulsar-admin.md | 3338 ---------------- .../reference-rest-api-overview.md | 18 - .../reference-terminology.md | 176 - .../schema-evolution-compatibility.md | 201 - .../schema-get-started.md | 102 - .../version-2.8.1-deprecated/schema-manage.md | 639 ---- .../schema-understand.md | 556 --- .../security-athenz.md | 98 - .../security-authorization.md | 130 - .../security-basic-auth.md | 127 - .../security-bouncy-castle.md | 157 - .../security-encryption.md | 200 - .../security-extending.md | 207 - .../version-2.8.1-deprecated/security-jwt.md | 331 -- .../security-kerberos.md | 443 --- .../security-oauth2.md | 232 -- .../security-overview.md | 36 - .../security-tls-authentication.md | 222 -- .../security-tls-keystore.md | 322 -- .../security-tls-transport.md | 295 -- .../security-token-admin.md | 183 - .../sql-deployment-configurations.md | 199 - .../sql-getting-started.md | 187 - .../version-2.8.1-deprecated/sql-overview.md | 18 - .../version-2.8.1-deprecated/sql-rest-api.md | 192 - .../standalone-docker.md | 214 -- .../version-2.8.1-deprecated/standalone.md | 271 -- .../tiered-storage-aliyun.md | 257 -- .../tiered-storage-aws.md | 329 -- .../tiered-storage-azure.md | 264 -- .../tiered-storage-filesystem.md | 630 --- .../tiered-storage-gcs.md | 319 -- .../tiered-storage-overview.md | 52 - .../transaction-api.md | 172 - .../transaction-guarantee.md | 17 - .../version-2.8.1-deprecated/txn-how.md | 151 - .../version-2.8.1-deprecated/txn-monitor.md | 10 - .../version-2.8.1-deprecated/txn-use.md | 105 - .../version-2.8.1-deprecated/txn-what.md | 60 - .../version-2.8.1-deprecated/txn-why.md | 45 - .../window-functions-context.md | 581 --- .../version-2.8.2-deprecated/about.md | 56 - .../adaptors-kafka.md | 274 -- .../adaptors-spark.md | 91 - .../adaptors-storm.md | 96 - 
.../admin-api-brokers.md | 286 -- .../admin-api-clusters.md | 318 -- .../admin-api-functions.md | 830 ---- .../admin-api-namespaces.md | 1324 ------- .../admin-api-non-partitioned-topics.md | 8 - .../admin-api-non-persistent-topics.md | 8 - .../admin-api-overview.md | 144 - .../admin-api-packages.md | 391 -- .../admin-api-partitioned-topics.md | 8 - .../admin-api-permissions.md | 189 - .../admin-api-persistent-topics.md | 8 - .../admin-api-schemas.md | 7 - .../admin-api-tenants.md | 238 -- .../admin-api-topics.md | 2142 ----------- .../administration-dashboard.md | 76 - .../administration-geo.md | 215 -- .../administration-isolation.md | 115 - .../administration-load-balance.md | 256 -- .../administration-proxy.md | 86 - .../administration-pulsar-manager.md | 205 - .../administration-stats.md | 64 - .../administration-upgrade.md | 168 - .../administration-zk-bk.md | 386 -- .../client-libraries-cgo.md | 579 --- .../client-libraries-cpp.md | 408 -- .../client-libraries-dotnet.md | 434 --- .../client-libraries-go.md | 885 ----- .../client-libraries-java.md | 1038 ----- .../client-libraries-node.md | 643 ---- .../client-libraries-python.md | 456 --- .../client-libraries-websocket.md | 657 ---- .../client-libraries.md | 35 - .../concepts-architecture-overview.md | 172 - .../concepts-authentication.md | 9 - .../concepts-clients.md | 92 - .../concepts-messaging.md | 700 ---- .../concepts-multi-tenancy.md | 67 - .../concepts-multiple-advertised-listeners.md | 44 - .../concepts-overview.md | 31 - .../concepts-proxy-sni-routing.md | 180 - .../concepts-replication.md | 9 - .../concepts-tiered-storage.md | 18 - .../concepts-topic-compaction.md | 37 - .../concepts-transactions.md | 30 - .../cookbooks-bookkeepermetadata.md | 21 - .../cookbooks-compaction.md | 142 - .../cookbooks-deduplication.md | 151 - .../cookbooks-encryption.md | 184 - .../cookbooks-message-queue.md | 127 - .../cookbooks-non-persistent.md | 63 - .../cookbooks-partitioned.md | 7 - .../cookbooks-retention-expiry.md | 413 -- .../cookbooks-tiered-storage.md | 342 -- .../version-2.8.2-deprecated/deploy-aws.md | 271 -- .../deploy-bare-metal-multi-cluster.md | 486 --- .../deploy-bare-metal.md | 541 --- .../version-2.8.2-deprecated/deploy-dcos.md | 200 - .../version-2.8.2-deprecated/deploy-docker.md | 60 - .../deploy-kubernetes.md | 11 - .../deploy-monitoring.md | 148 - .../develop-load-manager.md | 227 -- .../develop-schema.md | 62 - .../version-2.8.2-deprecated/develop-tools.md | 111 - .../developing-binary-protocol.md | 606 --- .../version-2.8.2-deprecated/functions-cli.md | 198 - .../functions-debug.md | 533 --- .../functions-deploy.md | 262 -- .../functions-develop.md | 1600 -------- .../functions-metrics.md | 7 - .../functions-overview.md | 209 - .../functions-package.md | 493 --- .../functions-runtime.md | 399 -- .../functions-worker.md | 386 -- ...tting-started-concepts-and-architecture.md | 16 - .../getting-started-docker.md | 176 - .../getting-started-helm.md | 441 --- .../getting-started-pulsar.md | 72 - .../getting-started-standalone.md | 271 -- .../version-2.8.2-deprecated/helm-deploy.md | 434 --- .../version-2.8.2-deprecated/helm-install.md | 44 - .../version-2.8.2-deprecated/helm-overview.md | 104 - .../version-2.8.2-deprecated/helm-prepare.md | 92 - .../version-2.8.2-deprecated/helm-tools.md | 43 - .../version-2.8.2-deprecated/helm-upgrade.md | 43 - .../io-aerospike-sink.md | 26 - .../io-canal-source.md | 235 -- .../io-cassandra-sink.md | 57 - .../io-cdc-debezium.md | 543 --- .../version-2.8.2-deprecated/io-cdc.md | 26 - 
.../version-2.8.2-deprecated/io-cli.md | 658 ---- .../version-2.8.2-deprecated/io-connectors.md | 232 -- .../io-debezium-source.md | 621 --- .../version-2.8.2-deprecated/io-debug.md | 407 -- .../version-2.8.2-deprecated/io-develop.md | 421 -- .../io-dynamodb-source.md | 80 - .../io-elasticsearch-sink.md | 173 - .../io-file-source.md | 160 - .../version-2.8.2-deprecated/io-flume-sink.md | 56 - .../io-flume-source.md | 56 - .../version-2.8.2-deprecated/io-hbase-sink.md | 67 - .../version-2.8.2-deprecated/io-hdfs2-sink.md | 64 - .../version-2.8.2-deprecated/io-hdfs3-sink.md | 59 - .../io-influxdb-sink.md | 119 - .../version-2.8.2-deprecated/io-jdbc-sink.md | 157 - .../version-2.8.2-deprecated/io-kafka-sink.md | 72 - .../io-kafka-source.md | 226 -- .../io-kinesis-sink.md | 80 - .../io-kinesis-source.md | 81 - .../version-2.8.2-deprecated/io-mongo-sink.md | 56 - .../io-netty-source.md | 241 -- .../version-2.8.2-deprecated/io-nsq-source.md | 21 - .../version-2.8.2-deprecated/io-overview.md | 164 - .../version-2.8.2-deprecated/io-quickstart.md | 963 ----- .../io-rabbitmq-sink.md | 85 - .../io-rabbitmq-source.md | 85 - .../version-2.8.2-deprecated/io-redis-sink.md | 74 - .../version-2.8.2-deprecated/io-solr-sink.md | 65 - .../io-twitter-source.md | 28 - .../version-2.8.2-deprecated/io-twitter.md | 7 - .../version-2.8.2-deprecated/io-use.md | 1787 --------- .../kubernetes-helm.md | 441 --- .../performance-pulsar-perf.md | 227 -- .../reference-cli-tools.md | 958 ----- .../reference-configuration.md | 792 ---- .../reference-connector-admin.md | 11 - .../reference-metrics.md | 564 --- .../reference-pulsar-admin.md | 3338 ---------------- .../reference-rest-api-overview.md | 18 - .../reference-terminology.md | 176 - .../schema-evolution-compatibility.md | 207 - .../schema-get-started.md | 102 - .../version-2.8.2-deprecated/schema-manage.md | 704 ---- .../schema-understand.md | 556 --- .../security-athenz.md | 98 - .../security-authorization.md | 130 - .../security-basic-auth.md | 127 - .../security-bouncy-castle.md | 157 - .../security-encryption.md | 325 -- .../security-extending.md | 207 - .../version-2.8.2-deprecated/security-jwt.md | 331 -- .../security-kerberos.md | 443 --- .../security-oauth2.md | 232 -- .../security-overview.md | 36 - .../security-tls-authentication.md | 222 -- .../security-tls-keystore.md | 342 -- .../security-tls-transport.md | 295 -- .../security-token-admin.md | 183 - .../sql-deployment-configurations.md | 199 - .../sql-getting-started.md | 187 - .../version-2.8.2-deprecated/sql-overview.md | 18 - .../version-2.8.2-deprecated/sql-rest-api.md | 192 - .../standalone-docker.md | 214 -- .../version-2.8.2-deprecated/standalone.md | 271 -- .../tiered-storage-aliyun.md | 257 -- .../tiered-storage-aws.md | 329 -- .../tiered-storage-azure.md | 264 -- .../tiered-storage-filesystem.md | 630 --- .../tiered-storage-gcs.md | 319 -- .../tiered-storage-overview.md | 52 - .../transaction-api.md | 172 - .../transaction-guarantee.md | 17 - .../version-2.8.2-deprecated/txn-how.md | 151 - .../version-2.8.2-deprecated/txn-monitor.md | 10 - .../version-2.8.2-deprecated/txn-use.md | 105 - .../version-2.8.2-deprecated/txn-what.md | 60 - .../version-2.8.2-deprecated/txn-why.md | 45 - .../window-functions-context.md | 581 --- .../version-2.8.3-deprecated/about.md | 56 - .../adaptors-kafka.md | 274 -- .../adaptors-spark.md | 91 - .../adaptors-storm.md | 96 - .../admin-api-brokers.md | 286 -- .../admin-api-clusters.md | 318 -- .../admin-api-functions.md | 830 ---- .../admin-api-namespaces.md | 1324 
------- .../admin-api-non-partitioned-topics.md | 8 - .../admin-api-non-persistent-topics.md | 8 - .../admin-api-overview.md | 144 - .../admin-api-packages.md | 391 -- .../admin-api-partitioned-topics.md | 8 - .../admin-api-permissions.md | 189 - .../admin-api-persistent-topics.md | 8 - .../admin-api-schemas.md | 7 - .../admin-api-tenants.md | 238 -- .../admin-api-topics.md | 2142 ----------- .../administration-dashboard.md | 76 - .../administration-geo.md | 215 -- .../administration-isolation.md | 115 - .../administration-load-balance.md | 256 -- .../administration-proxy.md | 125 - .../administration-pulsar-manager.md | 205 - .../administration-stats.md | 64 - .../administration-upgrade.md | 168 - .../administration-zk-bk.md | 386 -- .../client-libraries-cgo.md | 579 --- .../client-libraries-cpp.md | 408 -- .../client-libraries-dotnet.md | 434 --- .../client-libraries-go.md | 885 ----- .../client-libraries-java.md | 1038 ----- .../client-libraries-node.md | 643 ---- .../client-libraries-python.md | 456 --- .../client-libraries-websocket.md | 657 ---- .../client-libraries.md | 35 - .../concepts-architecture-overview.md | 172 - .../concepts-authentication.md | 9 - .../concepts-clients.md | 92 - .../concepts-messaging.md | 700 ---- .../concepts-multi-tenancy.md | 67 - .../concepts-multiple-advertised-listeners.md | 44 - .../concepts-overview.md | 31 - .../concepts-proxy-sni-routing.md | 180 - .../concepts-replication.md | 9 - .../concepts-tiered-storage.md | 18 - .../concepts-topic-compaction.md | 37 - .../concepts-transactions.md | 30 - .../cookbooks-bookkeepermetadata.md | 21 - .../cookbooks-compaction.md | 142 - .../cookbooks-deduplication.md | 151 - .../cookbooks-encryption.md | 184 - .../cookbooks-message-queue.md | 127 - .../cookbooks-non-persistent.md | 63 - .../cookbooks-partitioned.md | 7 - .../cookbooks-retention-expiry.md | 413 -- .../cookbooks-tiered-storage.md | 342 -- .../version-2.8.3-deprecated/deploy-aws.md | 271 -- .../deploy-bare-metal-multi-cluster.md | 486 --- .../deploy-bare-metal.md | 541 --- .../version-2.8.3-deprecated/deploy-dcos.md | 200 - .../version-2.8.3-deprecated/deploy-docker.md | 60 - .../deploy-kubernetes.md | 11 - .../deploy-monitoring.md | 148 - .../develop-load-manager.md | 227 -- .../develop-schema.md | 62 - .../version-2.8.3-deprecated/develop-tools.md | 111 - .../developing-binary-protocol.md | 606 --- .../version-2.8.3-deprecated/functions-cli.md | 198 - .../functions-debug.md | 533 --- .../functions-deploy.md | 262 -- .../functions-develop.md | 1600 -------- .../functions-metrics.md | 7 - .../functions-overview.md | 209 - .../functions-package.md | 493 --- .../functions-runtime.md | 399 -- .../functions-worker.md | 386 -- ...tting-started-concepts-and-architecture.md | 16 - .../getting-started-docker.md | 176 - .../getting-started-helm.md | 444 --- .../getting-started-pulsar.md | 72 - .../getting-started-standalone.md | 271 -- .../version-2.8.3-deprecated/helm-deploy.md | 434 --- .../version-2.8.3-deprecated/helm-install.md | 44 - .../version-2.8.3-deprecated/helm-overview.md | 104 - .../version-2.8.3-deprecated/helm-prepare.md | 92 - .../version-2.8.3-deprecated/helm-tools.md | 43 - .../version-2.8.3-deprecated/helm-upgrade.md | 43 - .../io-aerospike-sink.md | 26 - .../io-canal-source.md | 235 -- .../io-cassandra-sink.md | 57 - .../io-cdc-debezium.md | 543 --- .../version-2.8.3-deprecated/io-cdc.md | 26 - .../version-2.8.3-deprecated/io-cli.md | 658 ---- .../version-2.8.3-deprecated/io-connectors.md | 232 -- .../io-debezium-source.md | 621 --- 
.../version-2.8.3-deprecated/io-debug.md | 407 -- .../version-2.8.3-deprecated/io-develop.md | 421 -- .../io-dynamodb-source.md | 80 - .../io-elasticsearch-sink.md | 173 - .../io-file-source.md | 160 - .../version-2.8.3-deprecated/io-flume-sink.md | 56 - .../io-flume-source.md | 56 - .../version-2.8.3-deprecated/io-hbase-sink.md | 67 - .../version-2.8.3-deprecated/io-hdfs2-sink.md | 64 - .../version-2.8.3-deprecated/io-hdfs3-sink.md | 59 - .../io-influxdb-sink.md | 119 - .../version-2.8.3-deprecated/io-jdbc-sink.md | 157 - .../version-2.8.3-deprecated/io-kafka-sink.md | 72 - .../io-kafka-source.md | 226 -- .../io-kinesis-sink.md | 80 - .../io-kinesis-source.md | 81 - .../version-2.8.3-deprecated/io-mongo-sink.md | 56 - .../io-netty-source.md | 241 -- .../version-2.8.3-deprecated/io-nsq-source.md | 21 - .../version-2.8.3-deprecated/io-overview.md | 164 - .../version-2.8.3-deprecated/io-quickstart.md | 963 ----- .../io-rabbitmq-sink.md | 85 - .../io-rabbitmq-source.md | 85 - .../version-2.8.3-deprecated/io-redis-sink.md | 74 - .../version-2.8.3-deprecated/io-solr-sink.md | 65 - .../io-twitter-source.md | 28 - .../version-2.8.3-deprecated/io-twitter.md | 7 - .../version-2.8.3-deprecated/io-use.md | 1787 --------- .../performance-pulsar-perf.md | 227 -- .../reference-cli-tools.md | 958 ----- .../reference-configuration.md | 792 ---- .../reference-connector-admin.md | 11 - .../reference-metrics.md | 564 --- .../reference-pulsar-admin.md | 3338 ---------------- .../reference-rest-api-overview.md | 18 - .../reference-terminology.md | 176 - .../schema-evolution-compatibility.md | 207 - .../schema-get-started.md | 102 - .../version-2.8.3-deprecated/schema-manage.md | 704 ---- .../schema-understand.md | 556 --- .../security-athenz.md | 98 - .../security-authorization.md | 114 - .../security-basic-auth.md | 127 - .../security-bouncy-castle.md | 157 - .../security-encryption.md | 335 -- .../security-extending.md | 207 - .../version-2.8.3-deprecated/security-jwt.md | 331 -- .../security-kerberos.md | 443 --- .../security-oauth2.md | 232 -- .../security-overview.md | 36 - .../security-tls-authentication.md | 222 -- .../security-tls-keystore.md | 342 -- .../security-tls-transport.md | 295 -- .../security-token-admin.md | 183 - .../sql-deployment-configurations.md | 199 - .../sql-getting-started.md | 187 - .../version-2.8.3-deprecated/sql-overview.md | 18 - .../version-2.8.3-deprecated/sql-rest-api.md | 192 - .../version-2.8.3-deprecated/standalone.md | 271 -- .../tiered-storage-aliyun.md | 257 -- .../tiered-storage-aws.md | 329 -- .../tiered-storage-azure.md | 264 -- .../tiered-storage-filesystem.md | 630 --- .../tiered-storage-gcs.md | 319 -- .../tiered-storage-overview.md | 52 - .../transaction-api.md | 172 - .../transaction-guarantee.md | 17 - .../version-2.8.3-deprecated/txn-how.md | 151 - .../version-2.8.3-deprecated/txn-monitor.md | 10 - .../version-2.8.3-deprecated/txn-use.md | 105 - .../version-2.8.3-deprecated/txn-what.md | 60 - .../version-2.8.3-deprecated/txn-why.md | 45 - .../window-functions-context.md | 581 --- .../version-2.9.0-deprecated/about.md | 56 - .../adaptors-kafka.md | 276 -- .../adaptors-spark.md | 91 - .../adaptors-storm.md | 96 - .../admin-api-brokers.md | 286 -- .../admin-api-clusters.md | 318 -- .../admin-api-functions.md | 830 ---- .../admin-api-namespaces.md | 1267 ------ .../admin-api-non-partitioned-topics.md | 8 - .../admin-api-non-persistent-topics.md | 8 - .../admin-api-overview.md | 144 - .../admin-api-packages.md | 391 -- .../admin-api-partitioned-topics.md | 8 - 
.../admin-api-permissions.md | 184 - .../admin-api-persistent-topics.md | 8 - .../admin-api-schemas.md | 7 - .../admin-api-tenants.md | 238 -- .../admin-api-topics.md | 2334 ------------ .../administration-dashboard.md | 76 - .../administration-geo.md | 214 -- .../administration-isolation.md | 115 - .../administration-load-balance.md | 250 -- .../administration-proxy.md | 86 - .../administration-pulsar-manager.md | 205 - .../administration-stats.md | 64 - .../administration-upgrade.md | 168 - .../administration-zk-bk.md | 386 -- .../client-libraries-cgo.md | 579 --- .../client-libraries-cpp.md | 708 ---- .../client-libraries-dotnet.md | 434 --- .../client-libraries-go.md | 885 ----- .../client-libraries-java.md | 1038 ----- .../client-libraries-node.md | 643 ---- .../client-libraries-python.md | 481 --- .../client-libraries-websocket.md | 664 ---- .../client-libraries.md | 36 - .../concepts-architecture-overview.md | 172 - .../concepts-authentication.md | 9 - .../concepts-clients.md | 92 - .../concepts-messaging.md | 714 ---- .../concepts-multi-tenancy.md | 67 - .../concepts-multiple-advertised-listeners.md | 44 - .../concepts-overview.md | 31 - .../concepts-proxy-sni-routing.md | 180 - .../concepts-replication.md | 9 - .../concepts-tiered-storage.md | 18 - .../concepts-topic-compaction.md | 37 - .../concepts-transactions.md | 30 - .../cookbooks-bookkeepermetadata.md | 21 - .../cookbooks-compaction.md | 142 - .../cookbooks-deduplication.md | 151 - .../cookbooks-encryption.md | 184 - .../cookbooks-message-queue.md | 127 - .../cookbooks-non-persistent.md | 63 - .../cookbooks-partitioned.md | 7 - .../cookbooks-retention-expiry.md | 498 --- .../cookbooks-tiered-storage.md | 346 -- .../version-2.9.0-deprecated/deploy-aws.md | 271 -- .../deploy-bare-metal-multi-cluster.md | 452 --- .../deploy-bare-metal.md | 559 --- .../version-2.9.0-deprecated/deploy-dcos.md | 200 - .../version-2.9.0-deprecated/deploy-docker.md | 60 - .../deploy-kubernetes.md | 11 - .../deploy-monitoring.md | 148 - .../develop-load-manager.md | 227 -- .../develop-schema.md | 62 - .../version-2.9.0-deprecated/develop-tools.md | 112 - .../developing-binary-protocol.md | 606 --- .../version-2.9.0-deprecated/functions-cli.md | 198 - .../functions-debug.md | 538 --- .../functions-deploy.md | 262 -- .../functions-develop.md | 1600 -------- .../functions-metrics.md | 7 - .../functions-overview.md | 209 - .../functions-package.md | 493 --- .../functions-runtime.md | 403 -- .../functions-worker.md | 386 -- ...tting-started-concepts-and-architecture.md | 16 - .../getting-started-docker.md | 211 - .../getting-started-helm.md | 447 --- .../getting-started-pulsar.md | 72 - .../getting-started-standalone.md | 268 -- .../version-2.9.0-deprecated/helm-deploy.md | 434 --- .../version-2.9.0-deprecated/helm-install.md | 38 - .../version-2.9.0-deprecated/helm-overview.md | 103 - .../version-2.9.0-deprecated/helm-prepare.md | 80 - .../version-2.9.0-deprecated/helm-tools.md | 43 - .../version-2.9.0-deprecated/helm-upgrade.md | 43 - .../io-aerospike-sink.md | 26 - .../io-canal-source.md | 235 -- .../io-cassandra-sink.md | 57 - .../io-cdc-debezium.md | 543 --- .../version-2.9.0-deprecated/io-cdc.md | 26 - .../version-2.9.0-deprecated/io-cli.md | 658 ---- .../version-2.9.0-deprecated/io-connectors.md | 249 -- .../io-debezium-source.md | 768 ---- .../version-2.9.0-deprecated/io-debug.md | 407 -- .../version-2.9.0-deprecated/io-develop.md | 421 -- .../io-dynamodb-source.md | 80 - .../io-elasticsearch-sink.md | 242 -- .../io-file-source.md | 160 - 
.../version-2.9.0-deprecated/io-flume-sink.md | 56 - .../io-flume-source.md | 56 - .../version-2.9.0-deprecated/io-hbase-sink.md | 67 - .../version-2.9.0-deprecated/io-hdfs2-sink.md | 64 - .../version-2.9.0-deprecated/io-hdfs3-sink.md | 59 - .../io-influxdb-sink.md | 119 - .../version-2.9.0-deprecated/io-jdbc-sink.md | 157 - .../version-2.9.0-deprecated/io-kafka-sink.md | 72 - .../io-kafka-source.md | 226 -- .../io-kinesis-sink.md | 80 - .../io-kinesis-source.md | 81 - .../version-2.9.0-deprecated/io-mongo-sink.md | 56 - .../io-netty-source.md | 241 -- .../version-2.9.0-deprecated/io-nsq-source.md | 21 - .../version-2.9.0-deprecated/io-overview.md | 164 - .../version-2.9.0-deprecated/io-quickstart.md | 963 ----- .../io-rabbitmq-sink.md | 85 - .../io-rabbitmq-source.md | 85 - .../version-2.9.0-deprecated/io-redis-sink.md | 156 - .../version-2.9.0-deprecated/io-solr-sink.md | 65 - .../io-twitter-source.md | 28 - .../version-2.9.0-deprecated/io-twitter.md | 7 - .../version-2.9.0-deprecated/io-use.md | 1787 --------- .../kubernetes-helm.md | 441 --- .../performance-pulsar-perf.md | 229 -- .../reference-cli-tools.md | 941 ----- .../reference-configuration.md | 774 ---- .../reference-connector-admin.md | 12 - .../reference-metrics.md | 556 --- .../reference-pulsar-admin.md | 3297 ---------------- .../reference-rest-api-overview.md | 18 - .../reference-terminology.md | 176 - .../schema-evolution-compatibility.md | 201 - .../schema-get-started.md | 102 - .../version-2.9.0-deprecated/schema-manage.md | 639 ---- .../schema-understand.md | 576 --- .../security-athenz.md | 98 - .../security-authorization.md | 130 - .../security-basic-auth.md | 127 - .../security-bouncy-castle.md | 157 - .../security-encryption.md | 200 - .../security-extending.md | 207 - .../version-2.9.0-deprecated/security-jwt.md | 331 -- .../security-kerberos.md | 443 --- .../security-oauth2.md | 232 -- .../security-overview.md | 36 - .../security-tls-authentication.md | 222 -- .../security-tls-keystore.md | 342 -- .../security-tls-transport.md | 295 -- .../security-token-admin.md | 183 - .../sql-deployment-configurations.md | 199 - .../sql-getting-started.md | 187 - .../version-2.9.0-deprecated/sql-overview.md | 18 - .../version-2.9.0-deprecated/sql-rest-api.md | 192 - .../standalone-docker.md | 214 -- .../version-2.9.0-deprecated/standalone.md | 268 -- .../tiered-storage-aliyun.md | 259 -- .../tiered-storage-aws.md | 331 -- .../tiered-storage-azure.md | 266 -- .../tiered-storage-filesystem.md | 317 -- .../tiered-storage-gcs.md | 321 -- .../tiered-storage-overview.md | 52 - .../transaction-api.md | 172 - .../transaction-guarantee.md | 17 - .../version-2.9.0-deprecated/txn-how.md | 151 - .../version-2.9.0-deprecated/txn-monitor.md | 10 - .../version-2.9.0-deprecated/txn-use.md | 105 - .../version-2.9.0-deprecated/txn-what.md | 60 - .../version-2.9.0-deprecated/txn-why.md | 45 - .../window-functions-context.md | 581 --- .../version-2.9.1-deprecated/about.md | 56 - .../adaptors-kafka.md | 276 -- .../adaptors-spark.md | 91 - .../adaptors-storm.md | 96 - .../admin-api-brokers.md | 286 -- .../admin-api-clusters.md | 318 -- .../admin-api-functions.md | 830 ---- .../admin-api-namespaces.md | 1267 ------ .../admin-api-non-partitioned-topics.md | 8 - .../admin-api-non-persistent-topics.md | 8 - .../admin-api-overview.md | 144 - .../admin-api-packages.md | 391 -- .../admin-api-partitioned-topics.md | 8 - .../admin-api-permissions.md | 184 - .../admin-api-persistent-topics.md | 8 - .../admin-api-schemas.md | 7 - .../admin-api-tenants.md | 238 
-- .../admin-api-topics.md | 2334 ------------ .../administration-dashboard.md | 76 - .../administration-geo.md | 238 -- .../administration-isolation.md | 115 - .../administration-load-balance.md | 250 -- .../administration-proxy.md | 86 - .../administration-pulsar-manager.md | 205 - .../administration-stats.md | 64 - .../administration-upgrade.md | 168 - .../administration-zk-bk.md | 386 -- .../client-libraries-cgo.md | 579 --- .../client-libraries-cpp.md | 708 ---- .../client-libraries-dotnet.md | 434 --- .../client-libraries-go.md | 885 ----- .../client-libraries-java.md | 1038 ----- .../client-libraries-node.md | 643 ---- .../client-libraries-python.md | 481 --- .../client-libraries-websocket.md | 662 ---- .../client-libraries.md | 36 - .../concepts-architecture-overview.md | 172 - .../concepts-authentication.md | 9 - .../concepts-clients.md | 92 - .../concepts-messaging.md | 714 ---- .../concepts-multi-tenancy.md | 67 - .../concepts-multiple-advertised-listeners.md | 44 - .../concepts-overview.md | 31 - .../concepts-proxy-sni-routing.md | 180 - .../concepts-replication.md | 9 - .../concepts-tiered-storage.md | 18 - .../concepts-topic-compaction.md | 37 - .../concepts-transactions.md | 30 - .../cookbooks-bookkeepermetadata.md | 21 - .../cookbooks-compaction.md | 142 - .../cookbooks-deduplication.md | 151 - .../cookbooks-encryption.md | 184 - .../cookbooks-message-queue.md | 127 - .../cookbooks-non-persistent.md | 63 - .../cookbooks-partitioned.md | 7 - .../cookbooks-retention-expiry.md | 498 --- .../cookbooks-tiered-storage.md | 346 -- .../version-2.9.1-deprecated/deploy-aws.md | 271 -- .../deploy-bare-metal-multi-cluster.md | 452 --- .../deploy-bare-metal.md | 559 --- .../version-2.9.1-deprecated/deploy-dcos.md | 200 - .../version-2.9.1-deprecated/deploy-docker.md | 60 - .../deploy-kubernetes.md | 11 - .../deploy-monitoring.md | 148 - .../develop-load-manager.md | 227 -- .../develop-schema.md | 62 - .../version-2.9.1-deprecated/develop-tools.md | 112 - .../developing-binary-protocol.md | 606 --- .../version-2.9.1-deprecated/functions-cli.md | 198 - .../functions-debug.md | 538 --- .../functions-deploy.md | 262 -- .../functions-develop.md | 1600 -------- .../functions-metrics.md | 7 - .../functions-overview.md | 209 - .../functions-package.md | 493 --- .../functions-runtime.md | 403 -- .../functions-worker.md | 386 -- ...tting-started-concepts-and-architecture.md | 16 - .../getting-started-docker.md | 211 - .../getting-started-helm.md | 447 --- .../getting-started-pulsar.md | 72 - .../getting-started-standalone.md | 268 -- .../version-2.9.1-deprecated/helm-deploy.md | 434 --- .../version-2.9.1-deprecated/helm-install.md | 38 - .../version-2.9.1-deprecated/helm-overview.md | 103 - .../version-2.9.1-deprecated/helm-prepare.md | 80 - .../version-2.9.1-deprecated/helm-tools.md | 43 - .../version-2.9.1-deprecated/helm-upgrade.md | 43 - .../io-aerospike-sink.md | 26 - .../io-canal-source.md | 235 -- .../io-cassandra-sink.md | 57 - .../io-cdc-debezium.md | 543 --- .../version-2.9.1-deprecated/io-cdc.md | 26 - .../version-2.9.1-deprecated/io-cli.md | 658 ---- .../version-2.9.1-deprecated/io-connectors.md | 249 -- .../io-debezium-source.md | 768 ---- .../version-2.9.1-deprecated/io-debug.md | 407 -- .../version-2.9.1-deprecated/io-develop.md | 421 -- .../io-dynamodb-source.md | 80 - .../io-elasticsearch-sink.md | 242 -- .../io-file-source.md | 160 - .../version-2.9.1-deprecated/io-flume-sink.md | 56 - .../io-flume-source.md | 56 - .../version-2.9.1-deprecated/io-hbase-sink.md | 67 - 
.../version-2.9.1-deprecated/io-hdfs2-sink.md | 64 - .../version-2.9.1-deprecated/io-hdfs3-sink.md | 59 - .../io-influxdb-sink.md | 119 - .../version-2.9.1-deprecated/io-jdbc-sink.md | 157 - .../version-2.9.1-deprecated/io-kafka-sink.md | 72 - .../io-kafka-source.md | 226 -- .../io-kinesis-sink.md | 80 - .../io-kinesis-source.md | 81 - .../version-2.9.1-deprecated/io-mongo-sink.md | 56 - .../io-netty-source.md | 241 -- .../version-2.9.1-deprecated/io-nsq-source.md | 21 - .../version-2.9.1-deprecated/io-overview.md | 164 - .../version-2.9.1-deprecated/io-quickstart.md | 963 ----- .../io-rabbitmq-sink.md | 85 - .../io-rabbitmq-source.md | 85 - .../version-2.9.1-deprecated/io-redis-sink.md | 156 - .../version-2.9.1-deprecated/io-solr-sink.md | 65 - .../io-twitter-source.md | 28 - .../version-2.9.1-deprecated/io-twitter.md | 7 - .../version-2.9.1-deprecated/io-use.md | 1787 --------- .../kubernetes-helm.md | 441 --- .../performance-pulsar-perf.md | 229 -- .../reference-cli-tools.md | 941 ----- .../reference-configuration.md | 774 ---- .../reference-connector-admin.md | 12 - .../reference-metrics.md | 556 --- .../reference-pulsar-admin.md | 3297 ---------------- .../reference-rest-api-overview.md | 18 - .../reference-terminology.md | 176 - .../schema-evolution-compatibility.md | 201 - .../schema-get-started.md | 102 - .../version-2.9.1-deprecated/schema-manage.md | 639 ---- .../schema-understand.md | 576 --- .../security-athenz.md | 98 - .../security-authorization.md | 130 - .../security-basic-auth.md | 127 - .../security-bouncy-castle.md | 157 - .../security-encryption.md | 200 - .../security-extending.md | 207 - .../version-2.9.1-deprecated/security-jwt.md | 331 -- .../security-kerberos.md | 443 --- .../security-oauth2.md | 232 -- .../security-overview.md | 36 - .../security-tls-authentication.md | 222 -- .../security-tls-keystore.md | 342 -- .../security-tls-transport.md | 295 -- .../security-token-admin.md | 183 - .../sql-deployment-configurations.md | 199 - .../sql-getting-started.md | 187 - .../version-2.9.1-deprecated/sql-overview.md | 18 - .../version-2.9.1-deprecated/sql-rest-api.md | 192 - .../standalone-docker.md | 214 -- .../version-2.9.1-deprecated/standalone.md | 268 -- .../tiered-storage-aliyun.md | 259 -- .../tiered-storage-aws.md | 331 -- .../tiered-storage-azure.md | 266 -- .../tiered-storage-filesystem.md | 317 -- .../tiered-storage-gcs.md | 321 -- .../tiered-storage-overview.md | 52 - .../transaction-api.md | 172 - .../transaction-guarantee.md | 17 - .../version-2.9.1-deprecated/txn-how.md | 151 - .../version-2.9.1-deprecated/txn-monitor.md | 10 - .../version-2.9.1-deprecated/txn-use.md | 105 - .../version-2.9.1-deprecated/txn-what.md | 60 - .../version-2.9.1-deprecated/txn-why.md | 45 - .../window-functions-context.md | 581 --- .../version-2.9.2-deprecated/about.md | 56 - .../adaptors-kafka.md | 276 -- .../adaptors-spark.md | 91 - .../adaptors-storm.md | 96 - .../admin-api-brokers.md | 286 -- .../admin-api-clusters.md | 318 -- .../admin-api-functions.md | 830 ---- .../admin-api-namespaces.md | 1267 ------ .../admin-api-non-partitioned-topics.md | 8 - .../admin-api-non-persistent-topics.md | 8 - .../admin-api-overview.md | 144 - .../admin-api-packages.md | 391 -- .../admin-api-partitioned-topics.md | 8 - .../admin-api-permissions.md | 184 - .../admin-api-persistent-topics.md | 8 - .../admin-api-schemas.md | 7 - .../admin-api-tenants.md | 238 -- .../admin-api-topics.md | 2334 ------------ .../administration-dashboard.md | 76 - .../administration-geo.md | 238 -- 
.../administration-isolation.md | 115 - .../administration-load-balance.md | 250 -- .../administration-proxy.md | 125 - .../administration-pulsar-manager.md | 205 - .../administration-stats.md | 64 - .../administration-upgrade.md | 168 - .../administration-zk-bk.md | 386 -- .../client-libraries-cgo.md | 579 --- .../client-libraries-cpp.md | 708 ---- .../client-libraries-dotnet.md | 434 --- .../client-libraries-go.md | 885 ----- .../client-libraries-java.md | 1038 ----- .../client-libraries-node.md | 643 ---- .../client-libraries-python.md | 481 --- .../client-libraries-websocket.md | 662 ---- .../client-libraries.md | 36 - .../concepts-architecture-overview.md | 172 - .../concepts-authentication.md | 9 - .../concepts-clients.md | 92 - .../concepts-messaging.md | 713 ---- .../concepts-multi-tenancy.md | 67 - .../concepts-multiple-advertised-listeners.md | 44 - .../concepts-overview.md | 31 - .../concepts-proxy-sni-routing.md | 180 - .../concepts-replication.md | 9 - .../concepts-tiered-storage.md | 18 - .../concepts-topic-compaction.md | 37 - .../concepts-transactions.md | 30 - .../cookbooks-bookkeepermetadata.md | 21 - .../cookbooks-compaction.md | 142 - .../cookbooks-deduplication.md | 151 - .../cookbooks-encryption.md | 184 - .../cookbooks-message-queue.md | 127 - .../cookbooks-non-persistent.md | 63 - .../cookbooks-partitioned.md | 7 - .../cookbooks-retention-expiry.md | 498 --- .../cookbooks-tiered-storage.md | 346 -- .../version-2.9.2-deprecated/deploy-aws.md | 271 -- .../deploy-bare-metal-multi-cluster.md | 452 --- .../deploy-bare-metal.md | 559 --- .../version-2.9.2-deprecated/deploy-dcos.md | 200 - .../version-2.9.2-deprecated/deploy-docker.md | 60 - .../deploy-kubernetes.md | 11 - .../deploy-monitoring.md | 148 - .../develop-load-manager.md | 227 -- .../develop-schema.md | 62 - .../version-2.9.2-deprecated/develop-tools.md | 112 - .../developing-binary-protocol.md | 606 --- .../version-2.9.2-deprecated/functions-cli.md | 198 - .../functions-debug.md | 538 --- .../functions-deploy.md | 262 -- .../functions-develop.md | 1600 -------- .../functions-metrics.md | 7 - .../functions-overview.md | 209 - .../functions-package.md | 493 --- .../functions-runtime.md | 403 -- .../functions-worker.md | 385 -- ...tting-started-concepts-and-architecture.md | 16 - .../getting-started-docker.md | 211 - .../getting-started-helm.md | 447 --- .../getting-started-pulsar.md | 72 - .../getting-started-standalone.md | 268 -- .../version-2.9.2-deprecated/helm-deploy.md | 434 --- .../version-2.9.2-deprecated/helm-install.md | 38 - .../version-2.9.2-deprecated/helm-overview.md | 103 - .../version-2.9.2-deprecated/helm-prepare.md | 80 - .../version-2.9.2-deprecated/helm-tools.md | 43 - .../version-2.9.2-deprecated/helm-upgrade.md | 43 - .../io-aerospike-sink.md | 26 - .../io-canal-source.md | 235 -- .../io-cassandra-sink.md | 57 - .../io-cdc-debezium.md | 543 --- .../version-2.9.2-deprecated/io-cdc.md | 26 - .../version-2.9.2-deprecated/io-cli.md | 658 ---- .../version-2.9.2-deprecated/io-connectors.md | 249 -- .../io-debezium-source.md | 768 ---- .../version-2.9.2-deprecated/io-debug.md | 407 -- .../version-2.9.2-deprecated/io-develop.md | 421 -- .../io-dynamodb-source.md | 80 - .../io-elasticsearch-sink.md | 242 -- .../io-file-source.md | 160 - .../version-2.9.2-deprecated/io-flume-sink.md | 56 - .../io-flume-source.md | 56 - .../version-2.9.2-deprecated/io-hbase-sink.md | 67 - .../version-2.9.2-deprecated/io-hdfs2-sink.md | 64 - .../version-2.9.2-deprecated/io-hdfs3-sink.md | 59 - .../io-influxdb-sink.md | 
119 - .../version-2.9.2-deprecated/io-jdbc-sink.md | 157 - .../version-2.9.2-deprecated/io-kafka-sink.md | 72 - .../io-kafka-source.md | 240 -- .../io-kinesis-sink.md | 80 - .../io-kinesis-source.md | 81 - .../version-2.9.2-deprecated/io-mongo-sink.md | 56 - .../io-netty-source.md | 241 -- .../version-2.9.2-deprecated/io-nsq-source.md | 21 - .../version-2.9.2-deprecated/io-overview.md | 164 - .../version-2.9.2-deprecated/io-quickstart.md | 963 ----- .../io-rabbitmq-sink.md | 85 - .../io-rabbitmq-source.md | 85 - .../version-2.9.2-deprecated/io-redis-sink.md | 156 - .../version-2.9.2-deprecated/io-solr-sink.md | 65 - .../io-twitter-source.md | 28 - .../version-2.9.2-deprecated/io-twitter.md | 7 - .../version-2.9.2-deprecated/io-use.md | 1787 --------- .../performance-pulsar-perf.md | 229 -- .../reference-cli-tools.md | 941 ----- .../reference-configuration.md | 774 ---- .../reference-connector-admin.md | 12 - .../reference-metrics.md | 556 --- .../reference-pulsar-admin.md | 3297 ---------------- .../reference-rest-api-overview.md | 18 - .../reference-terminology.md | 176 - .../schema-evolution-compatibility.md | 201 - .../schema-get-started.md | 102 - .../version-2.9.2-deprecated/schema-manage.md | 639 ---- .../schema-understand.md | 576 --- .../security-athenz.md | 98 - .../security-authorization.md | 130 - .../security-basic-auth.md | 127 - .../security-bouncy-castle.md | 157 - .../security-encryption.md | 200 - .../security-extending.md | 207 - .../version-2.9.2-deprecated/security-jwt.md | 331 -- .../security-kerberos.md | 443 --- .../security-oauth2.md | 232 -- .../security-overview.md | 37 - .../security-tls-authentication.md | 222 -- .../security-tls-keystore.md | 342 -- .../security-tls-transport.md | 295 -- .../security-token-admin.md | 183 - .../sql-deployment-configurations.md | 199 - .../sql-getting-started.md | 187 - .../version-2.9.2-deprecated/sql-overview.md | 18 - .../version-2.9.2-deprecated/sql-rest-api.md | 192 - .../version-2.9.2-deprecated/standalone.md | 268 -- .../tiered-storage-aliyun.md | 259 -- .../tiered-storage-aws.md | 331 -- .../tiered-storage-azure.md | 266 -- .../tiered-storage-filesystem.md | 317 -- .../tiered-storage-gcs.md | 321 -- .../tiered-storage-overview.md | 52 - .../transaction-api.md | 172 - .../transaction-guarantee.md | 17 - .../version-2.9.2-deprecated/txn-how.md | 151 - .../version-2.9.2-deprecated/txn-monitor.md | 10 - .../version-2.9.2-deprecated/txn-use.md | 105 - .../version-2.9.2-deprecated/txn-what.md | 60 - .../version-2.9.2-deprecated/txn-why.md | 45 - .../window-functions-context.md | 581 --- .../version-2.9.3-deprecated/about.md | 56 - .../adaptors-kafka.md | 276 -- .../adaptors-spark.md | 91 - .../adaptors-storm.md | 96 - .../admin-api-brokers.md | 286 -- .../admin-api-clusters.md | 318 -- .../admin-api-functions.md | 830 ---- .../admin-api-namespaces.md | 1401 ------- .../admin-api-non-partitioned-topics.md | 8 - .../admin-api-non-persistent-topics.md | 8 - .../admin-api-overview.md | 144 - .../admin-api-packages.md | 391 -- .../admin-api-partitioned-topics.md | 8 - .../admin-api-permissions.md | 184 - .../admin-api-persistent-topics.md | 8 - .../admin-api-schemas.md | 7 - .../admin-api-tenants.md | 238 -- .../admin-api-topics.md | 2378 ------------ .../administration-dashboard.md | 76 - .../administration-geo.md | 238 -- .../administration-isolation.md | 115 - .../administration-load-balance.md | 250 -- .../administration-proxy.md | 125 - .../administration-pulsar-manager.md | 205 - .../administration-stats.md | 64 - 
.../administration-upgrade.md | 168 - .../administration-zk-bk.md | 386 -- .../client-libraries-cgo.md | 579 --- .../client-libraries-cpp.md | 708 ---- .../client-libraries-dotnet.md | 434 --- .../client-libraries-go.md | 885 ----- .../client-libraries-java.md | 1040 ----- .../client-libraries-node.md | 643 ---- .../client-libraries-python.md | 481 --- .../client-libraries-websocket.md | 662 ---- .../client-libraries.md | 36 - .../concepts-architecture-overview.md | 172 - .../concepts-authentication.md | 9 - .../concepts-clients.md | 92 - .../concepts-messaging.md | 713 ---- .../concepts-multi-tenancy.md | 67 - .../concepts-multiple-advertised-listeners.md | 44 - .../concepts-overview.md | 31 - .../concepts-proxy-sni-routing.md | 180 - .../concepts-replication.md | 9 - .../concepts-tiered-storage.md | 18 - .../concepts-topic-compaction.md | 37 - .../concepts-transactions.md | 30 - .../cookbooks-bookkeepermetadata.md | 21 - .../cookbooks-compaction.md | 142 - .../cookbooks-deduplication.md | 151 - .../cookbooks-encryption.md | 184 - .../cookbooks-message-queue.md | 127 - .../cookbooks-non-persistent.md | 63 - .../cookbooks-partitioned.md | 7 - .../cookbooks-retention-expiry.md | 498 --- .../cookbooks-tiered-storage.md | 346 -- .../version-2.9.3-deprecated/deploy-aws.md | 271 -- .../deploy-bare-metal-multi-cluster.md | 452 --- .../deploy-bare-metal.md | 559 --- .../version-2.9.3-deprecated/deploy-dcos.md | 200 - .../version-2.9.3-deprecated/deploy-docker.md | 60 - .../deploy-kubernetes.md | 11 - .../deploy-monitoring.md | 148 - .../develop-load-manager.md | 227 -- .../develop-schema.md | 62 - .../version-2.9.3-deprecated/develop-tools.md | 112 - .../developing-binary-protocol.md | 606 --- .../version-2.9.3-deprecated/functions-cli.md | 198 - .../functions-debug.md | 538 --- .../functions-deploy.md | 262 -- .../functions-develop.md | 1600 -------- .../functions-metrics.md | 7 - .../functions-overview.md | 209 - .../functions-package.md | 493 --- .../functions-runtime.md | 403 -- .../functions-worker.md | 385 -- ...tting-started-concepts-and-architecture.md | 16 - .../getting-started-docker.md | 211 - .../getting-started-helm.md | 447 --- .../getting-started-pulsar.md | 72 - .../getting-started-standalone.md | 268 -- .../version-2.9.3-deprecated/helm-deploy.md | 434 --- .../version-2.9.3-deprecated/helm-install.md | 38 - .../version-2.9.3-deprecated/helm-overview.md | 103 - .../version-2.9.3-deprecated/helm-prepare.md | 80 - .../version-2.9.3-deprecated/helm-tools.md | 43 - .../version-2.9.3-deprecated/helm-upgrade.md | 43 - .../io-aerospike-sink.md | 26 - .../io-canal-source.md | 235 -- .../io-cassandra-sink.md | 57 - .../io-cdc-debezium.md | 543 --- .../version-2.9.3-deprecated/io-cdc.md | 26 - .../version-2.9.3-deprecated/io-cli.md | 658 ---- .../version-2.9.3-deprecated/io-connectors.md | 249 -- .../io-debezium-source.md | 725 ---- .../version-2.9.3-deprecated/io-debug.md | 407 -- .../version-2.9.3-deprecated/io-develop.md | 421 -- .../io-dynamodb-source.md | 80 - .../io-elasticsearch-sink.md | 242 -- .../io-file-source.md | 160 - .../version-2.9.3-deprecated/io-flume-sink.md | 56 - .../io-flume-source.md | 56 - .../version-2.9.3-deprecated/io-hbase-sink.md | 67 - .../version-2.9.3-deprecated/io-hdfs2-sink.md | 64 - .../version-2.9.3-deprecated/io-hdfs3-sink.md | 59 - .../io-influxdb-sink.md | 119 - .../version-2.9.3-deprecated/io-jdbc-sink.md | 157 - .../version-2.9.3-deprecated/io-kafka-sink.md | 72 - .../io-kafka-source.md | 240 -- .../io-kinesis-sink.md | 80 - .../io-kinesis-source.md 
| 81 - .../version-2.9.3-deprecated/io-mongo-sink.md | 56 - .../io-netty-source.md | 241 -- .../version-2.9.3-deprecated/io-nsq-source.md | 21 - .../version-2.9.3-deprecated/io-overview.md | 164 - .../version-2.9.3-deprecated/io-quickstart.md | 964 ----- .../io-rabbitmq-sink.md | 85 - .../io-rabbitmq-source.md | 85 - .../version-2.9.3-deprecated/io-redis-sink.md | 156 - .../version-2.9.3-deprecated/io-solr-sink.md | 65 - .../io-twitter-source.md | 28 - .../version-2.9.3-deprecated/io-twitter.md | 7 - .../version-2.9.3-deprecated/io-use.md | 1787 --------- .../performance-pulsar-perf.md | 229 -- .../reference-cli-tools.md | 941 ----- .../reference-configuration.md | 770 ---- .../reference-connector-admin.md | 12 - .../reference-metrics.md | 556 --- .../reference-pulsar-admin.md | 3297 ---------------- .../reference-rest-api-overview.md | 18 - .../reference-terminology.md | 176 - .../schema-evolution-compatibility.md | 201 - .../schema-get-started.md | 102 - .../version-2.9.3-deprecated/schema-manage.md | 639 ---- .../schema-understand.md | 576 --- .../security-athenz.md | 98 - .../security-authorization.md | 130 - .../security-basic-auth.md | 127 - .../security-bouncy-castle.md | 157 - .../security-encryption.md | 200 - .../security-extending.md | 207 - .../version-2.9.3-deprecated/security-jwt.md | 327 -- .../security-kerberos.md | 443 --- .../security-oauth2.md | 232 -- .../security-overview.md | 37 - .../security-tls-authentication.md | 222 -- .../security-tls-keystore.md | 342 -- .../security-tls-transport.md | 295 -- .../security-token-admin.md | 183 - .../sql-deployment-configurations.md | 199 - .../sql-getting-started.md | 187 - .../version-2.9.3-deprecated/sql-overview.md | 18 - .../version-2.9.3-deprecated/sql-rest-api.md | 192 - .../version-2.9.3-deprecated/standalone.md | 268 -- .../tiered-storage-aliyun.md | 259 -- .../tiered-storage-aws.md | 331 -- .../tiered-storage-azure.md | 266 -- .../tiered-storage-filesystem.md | 317 -- .../tiered-storage-gcs.md | 321 -- .../tiered-storage-overview.md | 52 - .../transaction-api.md | 172 - .../transaction-guarantee.md | 17 - .../version-2.9.3-deprecated/txn-how.md | 151 - .../version-2.9.3-deprecated/txn-monitor.md | 10 - .../version-2.9.3-deprecated/txn-use.md | 105 - .../version-2.9.3-deprecated/txn-what.md | 60 - .../version-2.9.3-deprecated/txn-why.md | 45 - .../window-functions-context.md | 581 --- .../version-2.10.0-sidebars.json | 610 --- .../version-2.10.1-sidebars.json | 610 --- .../version-2.8.0-sidebars.json | 598 --- .../version-2.8.1-sidebars.json | 598 --- .../version-2.8.2-sidebars.json | 598 --- .../version-2.8.3-sidebars.json | 598 --- .../version-2.9.0-sidebars.json | 598 --- .../version-2.9.1-sidebars.json | 598 --- .../version-2.9.2-sidebars.json | 598 --- .../version-2.9.3-sidebars.json | 598 --- 1714 files changed, 491623 deletions(-) delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/about.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/adaptors-kafka.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/adaptors-spark.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/adaptors-storm.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-brokers.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-clusters.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-functions.md delete mode 100644 
site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-namespaces.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-non-partitioned-topics.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-non-persistent-topics.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-overview.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-packages.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-partitioned-topics.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-permissions.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-persistent-topics.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-schemas.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-tenants.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-topics.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/administration-geo.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/administration-isolation.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/administration-load-balance.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/administration-proxy.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/administration-pulsar-manager.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/administration-stats.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/administration-upgrade.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/administration-zk-bk.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/client-libraries-cgo.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/client-libraries-cpp.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/client-libraries-dotnet.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/client-libraries-go.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/client-libraries-java.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/client-libraries-node.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/client-libraries-python.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/client-libraries-rest.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/client-libraries-websocket.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/client-libraries.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/concepts-architecture-overview.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/concepts-authentication.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/concepts-clients.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/concepts-messaging.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/concepts-multi-tenancy.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/concepts-multiple-advertised-listeners.md delete mode 100644 
site2/website/versioned_docs/version-2.10.0-deprecated/concepts-overview.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/concepts-proxy-sni-routing.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/concepts-replication.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/concepts-tiered-storage.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/concepts-topic-compaction.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/concepts-transactions.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/cookbooks-bookkeepermetadata.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/cookbooks-compaction.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/cookbooks-deduplication.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/cookbooks-encryption.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/cookbooks-message-queue.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/cookbooks-non-persistent.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/cookbooks-partitioned.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/cookbooks-retention-expiry.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/cookbooks-tiered-storage.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/deploy-aws.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/deploy-bare-metal-multi-cluster.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/deploy-bare-metal.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/deploy-dcos.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/deploy-docker.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/deploy-kubernetes.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/deploy-monitoring.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/develop-load-manager.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/develop-plugin.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/develop-schema.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/develop-tools.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/developing-binary-protocol.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/functions-cli.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/functions-debug.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/functions-deploy.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/functions-develop.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/functions-metrics.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/functions-overview.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/functions-package.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/functions-runtime.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/functions-worker.md delete mode 100644 
site2/website/versioned_docs/version-2.10.0-deprecated/getting-started-concepts-and-architecture.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/getting-started-docker.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/getting-started-helm.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/getting-started-pulsar.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/getting-started-standalone.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/helm-deploy.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/helm-install.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/helm-overview.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/helm-prepare.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/helm-tools.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/helm-upgrade.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/io-aerospike-sink.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/io-canal-source.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/io-cassandra-sink.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/io-cdc-debezium.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/io-cdc.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/io-cli.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/io-connectors.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/io-debezium-source.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/io-debug.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/io-develop.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/io-dynamodb-source.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/io-elasticsearch-sink.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/io-file-source.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/io-flume-sink.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/io-flume-source.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/io-hbase-sink.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/io-hdfs2-sink.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/io-hdfs3-sink.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/io-influxdb-sink.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/io-jdbc-sink.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/io-kafka-sink.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/io-kafka-source.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/io-kinesis-sink.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/io-kinesis-source.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/io-mongo-sink.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/io-netty-source.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/io-nsq-source.md delete mode 100644 
site2/website/versioned_docs/version-2.10.0-deprecated/io-overview.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/io-quickstart.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/io-rabbitmq-sink.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/io-rabbitmq-source.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/io-redis-sink.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/io-solr-sink.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/io-twitter-source.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/io-twitter.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/io-use.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/performance-pulsar-perf.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/reference-cli-tools.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/reference-configuration.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/reference-connector-admin.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/reference-metrics.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/reference-pulsar-admin.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/reference-rest-api-overview.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/reference-terminology.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/schema-evolution-compatibility.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/schema-get-started.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/schema-manage.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/schema-understand.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/security-athenz.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/security-authorization.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/security-basic-auth.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/security-bouncy-castle.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/security-encryption.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/security-extending.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/security-jwt.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/security-kerberos.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/security-oauth2.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/security-overview.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/security-policy-and-supported-versions.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/security-tls-authentication.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/security-tls-keystore.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/security-tls-transport.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/security-token-admin.md delete mode 100644 
site2/website/versioned_docs/version-2.10.0-deprecated/sql-deployment-configurations.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/sql-getting-started.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/sql-overview.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/sql-rest-api.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/standalone.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/tiered-storage-aliyun.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/tiered-storage-aws.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/tiered-storage-azure.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/tiered-storage-filesystem.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/tiered-storage-gcs.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/tiered-storage-overview.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/transaction-api.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/transaction-guarantee.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/txn-how.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/txn-monitor.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/txn-use.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/txn-what.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/txn-why.md delete mode 100644 site2/website/versioned_docs/version-2.10.0-deprecated/window-functions-context.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/about.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/adaptors-kafka.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/adaptors-spark.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/adaptors-storm.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-brokers.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-clusters.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-functions.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-namespaces.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-non-partitioned-topics.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-non-persistent-topics.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-overview.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-packages.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-partitioned-topics.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-permissions.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-persistent-topics.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-schemas.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-tenants.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-topics.md delete mode 100644 
site2/website/versioned_docs/version-2.10.1-deprecated/administration-geo.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/administration-isolation.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/administration-load-balance.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/administration-proxy.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/administration-pulsar-manager.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/administration-stats.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/administration-upgrade.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/administration-zk-bk.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/client-libraries-cgo.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/client-libraries-cpp.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/client-libraries-dotnet.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/client-libraries-go.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/client-libraries-java.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/client-libraries-node.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/client-libraries-python.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/client-libraries-rest.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/client-libraries-websocket.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/client-libraries.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/concepts-architecture-overview.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/concepts-authentication.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/concepts-clients.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/concepts-messaging.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/concepts-multi-tenancy.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/concepts-multiple-advertised-listeners.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/concepts-overview.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/concepts-proxy-sni-routing.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/concepts-replication.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/concepts-tiered-storage.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/concepts-topic-compaction.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/concepts-transactions.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/cookbooks-bookkeepermetadata.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/cookbooks-compaction.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/cookbooks-deduplication.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/cookbooks-encryption.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/cookbooks-message-queue.md delete mode 100644 
site2/website/versioned_docs/version-2.10.1-deprecated/cookbooks-non-persistent.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/cookbooks-partitioned.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/cookbooks-retention-expiry.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/cookbooks-tiered-storage.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/deploy-aws.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/deploy-bare-metal-multi-cluster.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/deploy-bare-metal.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/deploy-dcos.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/deploy-docker.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/deploy-kubernetes.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/deploy-monitoring.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/develop-load-manager.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/develop-plugin.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/develop-schema.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/develop-tools.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/developing-binary-protocol.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/functions-cli.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/functions-debug.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/functions-deploy.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/functions-develop.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/functions-metrics.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/functions-overview.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/functions-package.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/functions-runtime.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/functions-worker.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/getting-started-concepts-and-architecture.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/getting-started-docker.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/getting-started-helm.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/getting-started-pulsar.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/getting-started-standalone.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/helm-deploy.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/helm-install.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/helm-overview.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/helm-prepare.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/helm-tools.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/helm-upgrade.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/io-aerospike-sink.md delete mode 100644 
site2/website/versioned_docs/version-2.10.1-deprecated/io-canal-source.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/io-cassandra-sink.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/io-cdc-debezium.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/io-cdc.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/io-cli.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/io-connectors.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/io-debezium-source.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/io-debug.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/io-develop.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/io-dynamodb-source.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/io-elasticsearch-sink.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/io-file-source.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/io-flume-sink.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/io-flume-source.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/io-hbase-sink.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/io-hdfs2-sink.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/io-hdfs3-sink.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/io-influxdb-sink.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/io-jdbc-sink.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/io-kafka-sink.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/io-kafka-source.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/io-kinesis-sink.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/io-kinesis-source.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/io-mongo-sink.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/io-netty-source.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/io-nsq-source.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/io-overview.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/io-quickstart.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/io-rabbitmq-sink.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/io-rabbitmq-source.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/io-redis-sink.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/io-solr-sink.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/io-twitter-source.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/io-twitter.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/io-use.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/performance-pulsar-perf.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/reference-cli-tools.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/reference-configuration.md delete mode 100644 
site2/website/versioned_docs/version-2.10.1-deprecated/reference-connector-admin.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/reference-metrics.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/reference-pulsar-admin.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/reference-rest-api-overview.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/reference-terminology.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/schema-evolution-compatibility.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/schema-get-started.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/schema-manage.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/schema-understand.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/security-athenz.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/security-authorization.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/security-basic-auth.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/security-bouncy-castle.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/security-encryption.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/security-extending.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/security-jwt.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/security-kerberos.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/security-oauth2.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/security-overview.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/security-policy-and-supported-versions.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/security-tls-authentication.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/security-tls-keystore.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/security-tls-transport.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/security-token-admin.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/sql-deployment-configurations.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/sql-getting-started.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/sql-overview.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/sql-rest-api.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/standalone.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/tiered-storage-aliyun.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/tiered-storage-aws.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/tiered-storage-azure.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/tiered-storage-filesystem.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/tiered-storage-gcs.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/tiered-storage-overview.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/transaction-api.md delete mode 100644 
site2/website/versioned_docs/version-2.10.1-deprecated/transaction-guarantee.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/txn-how.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/txn-monitor.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/txn-use.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/txn-what.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/txn-why.md delete mode 100644 site2/website/versioned_docs/version-2.10.1-deprecated/window-functions-context.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/about.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/adaptors-kafka.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/adaptors-spark.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/adaptors-storm.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-brokers.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-clusters.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-functions.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-namespaces.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-non-partitioned-topics.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-non-persistent-topics.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-overview.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-packages.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-partitioned-topics.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-permissions.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-persistent-topics.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-schemas.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-tenants.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-topics.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/administration-dashboard.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/administration-geo.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/administration-isolation.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/administration-load-balance.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/administration-proxy.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/administration-pulsar-manager.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/administration-stats.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/administration-upgrade.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/administration-zk-bk.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/client-libraries-cgo.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/client-libraries-cpp.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/client-libraries-dotnet.md delete mode 100644 
site2/website/versioned_docs/version-2.8.0-deprecated/client-libraries-go.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/client-libraries-java.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/client-libraries-node.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/client-libraries-python.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/client-libraries-websocket.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/client-libraries.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/concepts-architecture-overview.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/concepts-authentication.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/concepts-clients.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/concepts-messaging.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/concepts-multi-tenancy.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/concepts-multiple-advertised-listeners.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/concepts-overview.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/concepts-proxy-sni-routing.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/concepts-replication.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/concepts-tiered-storage.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/concepts-topic-compaction.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/concepts-transactions.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/cookbooks-bookkeepermetadata.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/cookbooks-compaction.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/cookbooks-deduplication.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/cookbooks-encryption.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/cookbooks-message-queue.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/cookbooks-non-persistent.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/cookbooks-partitioned.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/cookbooks-retention-expiry.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/cookbooks-tiered-storage.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/deploy-aws.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/deploy-bare-metal-multi-cluster.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/deploy-bare-metal.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/deploy-dcos.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/deploy-docker.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/deploy-kubernetes.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/deploy-monitoring.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/develop-load-manager.md delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/develop-schema.md delete mode 100644 
site2/website/versioned_docs/version-2.8.0-deprecated/develop-tools.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/developing-binary-protocol.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/functions-cli.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/functions-debug.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/functions-deploy.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/functions-develop.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/functions-metrics.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/functions-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/functions-package.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/functions-runtime.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/functions-worker.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/getting-started-concepts-and-architecture.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/getting-started-docker.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/getting-started-helm.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/getting-started-pulsar.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/getting-started-standalone.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/helm-deploy.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/helm-install.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/helm-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/helm-prepare.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/helm-tools.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/helm-upgrade.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/io-aerospike-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/io-canal-source.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/io-cassandra-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/io-cdc-debezium.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/io-cdc.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/io-cli.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/io-connectors.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/io-debezium-source.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/io-debug.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/io-develop.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/io-dynamodb-source.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/io-elasticsearch-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/io-file-source.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/io-flume-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/io-flume-source.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/io-hbase-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/io-hdfs2-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/io-hdfs3-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/io-influxdb-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/io-jdbc-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/io-kafka-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/io-kafka-source.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/io-kinesis-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/io-kinesis-source.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/io-mongo-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/io-netty-source.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/io-nsq-source.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/io-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/io-quickstart.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/io-rabbitmq-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/io-rabbitmq-source.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/io-redis-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/io-solr-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/io-twitter-source.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/io-twitter.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/io-use.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/kubernetes-helm.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/performance-pulsar-perf.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/reference-cli-tools.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/reference-configuration.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/reference-connector-admin.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/reference-metrics.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/reference-pulsar-admin.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/reference-rest-api-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/reference-terminology.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/schema-evolution-compatibility.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/schema-get-started.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/schema-manage.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/schema-understand.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/security-athenz.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/security-authorization.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/security-basic-auth.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/security-bouncy-castle.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/security-encryption.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/security-extending.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/security-jwt.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/security-kerberos.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/security-oauth2.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/security-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/security-tls-authentication.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/security-tls-keystore.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/security-tls-transport.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/security-token-admin.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/sql-deployment-configurations.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/sql-getting-started.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/sql-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/sql-rest-api.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/standalone-docker.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/standalone.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/tiered-storage-aliyun.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/tiered-storage-aws.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/tiered-storage-azure.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/tiered-storage-filesystem.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/tiered-storage-gcs.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/tiered-storage-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/transaction-api.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/transaction-guarantee.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/txn-how.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/txn-monitor.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/txn-use.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/txn-what.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/txn-why.md
delete mode 100644 site2/website/versioned_docs/version-2.8.0-deprecated/window-functions-context.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/about.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/adaptors-kafka.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/adaptors-spark.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/adaptors-storm.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-brokers.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-clusters.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-functions.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-namespaces.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-non-partitioned-topics.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-non-persistent-topics.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-packages.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-partitioned-topics.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-permissions.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-persistent-topics.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-schemas.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-tenants.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-topics.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/administration-dashboard.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/administration-geo.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/administration-isolation.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/administration-load-balance.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/administration-proxy.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/administration-pulsar-manager.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/administration-stats.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/administration-upgrade.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/administration-zk-bk.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/client-libraries-cgo.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/client-libraries-cpp.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/client-libraries-dotnet.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/client-libraries-go.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/client-libraries-java.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/client-libraries-node.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/client-libraries-python.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/client-libraries-websocket.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/client-libraries.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/concepts-architecture-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/concepts-authentication.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/concepts-clients.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/concepts-messaging.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/concepts-multi-tenancy.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/concepts-multiple-advertised-listeners.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/concepts-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/concepts-proxy-sni-routing.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/concepts-replication.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/concepts-tiered-storage.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/concepts-topic-compaction.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/concepts-transactions.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/cookbooks-bookkeepermetadata.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/cookbooks-compaction.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/cookbooks-deduplication.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/cookbooks-encryption.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/cookbooks-message-queue.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/cookbooks-non-persistent.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/cookbooks-partitioned.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/cookbooks-retention-expiry.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/cookbooks-tiered-storage.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/deploy-aws.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/deploy-bare-metal-multi-cluster.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/deploy-bare-metal.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/deploy-dcos.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/deploy-docker.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/deploy-kubernetes.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/deploy-monitoring.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/develop-load-manager.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/develop-schema.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/develop-tools.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/developing-binary-protocol.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/functions-cli.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/functions-debug.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/functions-deploy.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/functions-develop.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/functions-metrics.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/functions-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/functions-package.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/functions-runtime.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/functions-worker.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/getting-started-concepts-and-architecture.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/getting-started-docker.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/getting-started-helm.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/getting-started-pulsar.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/getting-started-standalone.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/helm-deploy.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/helm-install.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/helm-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/helm-prepare.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/helm-tools.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/helm-upgrade.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/io-aerospike-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/io-canal-source.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/io-cassandra-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/io-cdc-debezium.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/io-cdc.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/io-cli.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/io-connectors.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/io-debezium-source.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/io-debug.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/io-develop.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/io-dynamodb-source.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/io-elasticsearch-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/io-file-source.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/io-flume-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/io-flume-source.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/io-hbase-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/io-hdfs2-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/io-hdfs3-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/io-influxdb-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/io-jdbc-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/io-kafka-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/io-kafka-source.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/io-kinesis-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/io-kinesis-source.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/io-mongo-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/io-netty-source.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/io-nsq-source.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/io-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/io-quickstart.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/io-rabbitmq-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/io-rabbitmq-source.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/io-redis-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/io-solr-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/io-twitter-source.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/io-twitter.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/io-use.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/kubernetes-helm.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/performance-pulsar-perf.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/reference-cli-tools.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/reference-configuration.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/reference-connector-admin.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/reference-metrics.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/reference-pulsar-admin.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/reference-rest-api-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/reference-terminology.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/schema-evolution-compatibility.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/schema-get-started.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/schema-manage.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/schema-understand.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/security-athenz.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/security-authorization.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/security-basic-auth.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/security-bouncy-castle.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/security-encryption.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/security-extending.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/security-jwt.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/security-kerberos.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/security-oauth2.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/security-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/security-tls-authentication.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/security-tls-keystore.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/security-tls-transport.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/security-token-admin.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/sql-deployment-configurations.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/sql-getting-started.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/sql-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/sql-rest-api.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/standalone-docker.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/standalone.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/tiered-storage-aliyun.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/tiered-storage-aws.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/tiered-storage-azure.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/tiered-storage-filesystem.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/tiered-storage-gcs.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/tiered-storage-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/transaction-api.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/transaction-guarantee.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/txn-how.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/txn-monitor.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/txn-use.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/txn-what.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/txn-why.md
delete mode 100644 site2/website/versioned_docs/version-2.8.1-deprecated/window-functions-context.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/about.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/adaptors-kafka.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/adaptors-spark.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/adaptors-storm.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-brokers.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-clusters.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-functions.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-namespaces.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-non-partitioned-topics.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-non-persistent-topics.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-packages.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-partitioned-topics.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-permissions.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-persistent-topics.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-schemas.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-tenants.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-topics.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/administration-dashboard.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/administration-geo.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/administration-isolation.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/administration-load-balance.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/administration-proxy.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/administration-pulsar-manager.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/administration-stats.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/administration-upgrade.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/administration-zk-bk.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/client-libraries-cgo.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/client-libraries-cpp.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/client-libraries-dotnet.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/client-libraries-go.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/client-libraries-java.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/client-libraries-node.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/client-libraries-python.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/client-libraries-websocket.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/client-libraries.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/concepts-architecture-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/concepts-authentication.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/concepts-clients.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/concepts-messaging.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/concepts-multi-tenancy.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/concepts-multiple-advertised-listeners.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/concepts-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/concepts-proxy-sni-routing.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/concepts-replication.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/concepts-tiered-storage.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/concepts-topic-compaction.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/concepts-transactions.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/cookbooks-bookkeepermetadata.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/cookbooks-compaction.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/cookbooks-deduplication.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/cookbooks-encryption.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/cookbooks-message-queue.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/cookbooks-non-persistent.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/cookbooks-partitioned.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/cookbooks-retention-expiry.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/cookbooks-tiered-storage.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/deploy-aws.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/deploy-bare-metal-multi-cluster.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/deploy-bare-metal.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/deploy-dcos.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/deploy-docker.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/deploy-kubernetes.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/deploy-monitoring.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/develop-load-manager.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/develop-schema.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/develop-tools.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/developing-binary-protocol.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/functions-cli.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/functions-debug.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/functions-deploy.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/functions-develop.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/functions-metrics.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/functions-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/functions-package.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/functions-runtime.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/functions-worker.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/getting-started-concepts-and-architecture.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/getting-started-docker.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/getting-started-helm.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/getting-started-pulsar.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/getting-started-standalone.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/helm-deploy.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/helm-install.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/helm-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/helm-prepare.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/helm-tools.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/helm-upgrade.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/io-aerospike-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/io-canal-source.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/io-cassandra-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/io-cdc-debezium.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/io-cdc.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/io-cli.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/io-connectors.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/io-debezium-source.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/io-debug.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/io-develop.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/io-dynamodb-source.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/io-elasticsearch-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/io-file-source.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/io-flume-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/io-flume-source.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/io-hbase-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/io-hdfs2-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/io-hdfs3-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/io-influxdb-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/io-jdbc-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/io-kafka-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/io-kafka-source.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/io-kinesis-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/io-kinesis-source.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/io-mongo-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/io-netty-source.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/io-nsq-source.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/io-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/io-quickstart.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/io-rabbitmq-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/io-rabbitmq-source.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/io-redis-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/io-solr-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/io-twitter-source.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/io-twitter.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/io-use.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/kubernetes-helm.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/performance-pulsar-perf.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/reference-cli-tools.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/reference-configuration.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/reference-connector-admin.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/reference-metrics.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/reference-pulsar-admin.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/reference-rest-api-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/reference-terminology.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/schema-evolution-compatibility.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/schema-get-started.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/schema-manage.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/schema-understand.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/security-athenz.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/security-authorization.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/security-basic-auth.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/security-bouncy-castle.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/security-encryption.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/security-extending.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/security-jwt.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/security-kerberos.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/security-oauth2.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/security-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/security-tls-authentication.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/security-tls-keystore.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/security-tls-transport.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/security-token-admin.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/sql-deployment-configurations.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/sql-getting-started.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/sql-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/sql-rest-api.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/standalone-docker.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/standalone.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/tiered-storage-aliyun.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/tiered-storage-aws.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/tiered-storage-azure.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/tiered-storage-filesystem.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/tiered-storage-gcs.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/tiered-storage-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/transaction-api.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/transaction-guarantee.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/txn-how.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/txn-monitor.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/txn-use.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/txn-what.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/txn-why.md
delete mode 100644 site2/website/versioned_docs/version-2.8.2-deprecated/window-functions-context.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/about.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/adaptors-kafka.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/adaptors-spark.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/adaptors-storm.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-brokers.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-clusters.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-functions.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-namespaces.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-non-partitioned-topics.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-non-persistent-topics.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-packages.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-partitioned-topics.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-permissions.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-persistent-topics.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-schemas.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-tenants.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-topics.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/administration-dashboard.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/administration-geo.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/administration-isolation.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/administration-load-balance.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/administration-proxy.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/administration-pulsar-manager.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/administration-stats.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/administration-upgrade.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/administration-zk-bk.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/client-libraries-cgo.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/client-libraries-cpp.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/client-libraries-dotnet.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/client-libraries-go.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/client-libraries-java.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/client-libraries-node.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/client-libraries-python.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/client-libraries-websocket.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/client-libraries.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/concepts-architecture-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/concepts-authentication.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/concepts-clients.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/concepts-messaging.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/concepts-multi-tenancy.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/concepts-multiple-advertised-listeners.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/concepts-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/concepts-proxy-sni-routing.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/concepts-replication.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/concepts-tiered-storage.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/concepts-topic-compaction.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/concepts-transactions.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/cookbooks-bookkeepermetadata.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/cookbooks-compaction.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/cookbooks-deduplication.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/cookbooks-encryption.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/cookbooks-message-queue.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/cookbooks-non-persistent.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/cookbooks-partitioned.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/cookbooks-retention-expiry.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/cookbooks-tiered-storage.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/deploy-aws.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/deploy-bare-metal-multi-cluster.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/deploy-bare-metal.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/deploy-dcos.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/deploy-docker.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/deploy-kubernetes.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/deploy-monitoring.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/develop-load-manager.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/develop-schema.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/develop-tools.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/developing-binary-protocol.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/functions-cli.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/functions-debug.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/functions-deploy.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/functions-develop.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/functions-metrics.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/functions-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/functions-package.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/functions-runtime.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/functions-worker.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/getting-started-concepts-and-architecture.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/getting-started-docker.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/getting-started-helm.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/getting-started-pulsar.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/getting-started-standalone.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/helm-deploy.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/helm-install.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/helm-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/helm-prepare.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/helm-tools.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/helm-upgrade.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/io-aerospike-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/io-canal-source.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/io-cassandra-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/io-cdc-debezium.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/io-cdc.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/io-cli.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/io-connectors.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/io-debezium-source.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/io-debug.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/io-develop.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/io-dynamodb-source.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/io-elasticsearch-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/io-file-source.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/io-flume-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/io-flume-source.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/io-hbase-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/io-hdfs2-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/io-hdfs3-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/io-influxdb-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/io-jdbc-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/io-kafka-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/io-kafka-source.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/io-kinesis-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/io-kinesis-source.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/io-mongo-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/io-netty-source.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/io-nsq-source.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/io-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/io-quickstart.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/io-rabbitmq-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/io-rabbitmq-source.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/io-redis-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/io-solr-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/io-twitter-source.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/io-twitter.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/io-use.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/performance-pulsar-perf.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/reference-cli-tools.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/reference-configuration.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/reference-connector-admin.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/reference-metrics.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/reference-pulsar-admin.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/reference-rest-api-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/reference-terminology.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/schema-evolution-compatibility.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/schema-get-started.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/schema-manage.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/schema-understand.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/security-athenz.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/security-authorization.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/security-basic-auth.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/security-bouncy-castle.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/security-encryption.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/security-extending.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/security-jwt.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/security-kerberos.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/security-oauth2.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/security-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/security-tls-authentication.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/security-tls-keystore.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/security-tls-transport.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/security-token-admin.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/sql-deployment-configurations.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/sql-getting-started.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/sql-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/sql-rest-api.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/standalone.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/tiered-storage-aliyun.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/tiered-storage-aws.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/tiered-storage-azure.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/tiered-storage-filesystem.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/tiered-storage-gcs.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/tiered-storage-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/transaction-api.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/transaction-guarantee.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/txn-how.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/txn-monitor.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/txn-use.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/txn-what.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/txn-why.md
delete mode 100644 site2/website/versioned_docs/version-2.8.3-deprecated/window-functions-context.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/about.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/adaptors-kafka.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/adaptors-spark.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/adaptors-storm.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-brokers.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-clusters.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-functions.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-namespaces.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-non-partitioned-topics.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-non-persistent-topics.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-packages.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-partitioned-topics.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-permissions.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-persistent-topics.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-schemas.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-tenants.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-topics.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/administration-dashboard.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/administration-geo.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/administration-isolation.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/administration-load-balance.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/administration-proxy.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/administration-pulsar-manager.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/administration-stats.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/administration-upgrade.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/administration-zk-bk.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/client-libraries-cgo.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/client-libraries-cpp.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/client-libraries-dotnet.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/client-libraries-go.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/client-libraries-java.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/client-libraries-node.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/client-libraries-python.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/client-libraries-websocket.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/client-libraries.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/concepts-architecture-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/concepts-authentication.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/concepts-clients.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/concepts-messaging.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/concepts-multi-tenancy.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/concepts-multiple-advertised-listeners.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/concepts-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/concepts-proxy-sni-routing.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/concepts-replication.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/concepts-tiered-storage.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/concepts-topic-compaction.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/concepts-transactions.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/cookbooks-bookkeepermetadata.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/cookbooks-compaction.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/cookbooks-deduplication.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/cookbooks-encryption.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/cookbooks-message-queue.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/cookbooks-non-persistent.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/cookbooks-partitioned.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/cookbooks-retention-expiry.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/cookbooks-tiered-storage.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/deploy-aws.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/deploy-bare-metal-multi-cluster.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/deploy-bare-metal.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/deploy-dcos.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/deploy-docker.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/deploy-kubernetes.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/deploy-monitoring.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/develop-load-manager.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/develop-schema.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/develop-tools.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/developing-binary-protocol.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/functions-cli.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/functions-debug.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/functions-deploy.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/functions-develop.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/functions-metrics.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/functions-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/functions-package.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/functions-runtime.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/functions-worker.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/getting-started-concepts-and-architecture.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/getting-started-docker.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/getting-started-helm.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/getting-started-pulsar.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/getting-started-standalone.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/helm-deploy.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/helm-install.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/helm-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/helm-prepare.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/helm-tools.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/helm-upgrade.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/io-aerospike-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/io-canal-source.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/io-cassandra-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/io-cdc-debezium.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/io-cdc.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/io-cli.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/io-connectors.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/io-debezium-source.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/io-debug.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/io-develop.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/io-dynamodb-source.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/io-elasticsearch-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/io-file-source.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/io-flume-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/io-flume-source.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/io-hbase-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/io-hdfs2-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/io-hdfs3-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/io-influxdb-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/io-jdbc-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/io-kafka-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/io-kafka-source.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/io-kinesis-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/io-kinesis-source.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/io-mongo-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/io-netty-source.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/io-nsq-source.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/io-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/io-quickstart.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/io-rabbitmq-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/io-rabbitmq-source.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/io-redis-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/io-solr-sink.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/io-twitter-source.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/io-twitter.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/io-use.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/kubernetes-helm.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/performance-pulsar-perf.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/reference-cli-tools.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/reference-configuration.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/reference-connector-admin.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/reference-metrics.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/reference-pulsar-admin.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/reference-rest-api-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/reference-terminology.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/schema-evolution-compatibility.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/schema-get-started.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/schema-manage.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/schema-understand.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/security-athenz.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/security-authorization.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/security-basic-auth.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/security-bouncy-castle.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/security-encryption.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/security-extending.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/security-jwt.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/security-kerberos.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/security-oauth2.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/security-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/security-tls-authentication.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/security-tls-keystore.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/security-tls-transport.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/security-token-admin.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/sql-deployment-configurations.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/sql-getting-started.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/sql-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/sql-rest-api.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/standalone-docker.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/standalone.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/tiered-storage-aliyun.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/tiered-storage-aws.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/tiered-storage-azure.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/tiered-storage-filesystem.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/tiered-storage-gcs.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/tiered-storage-overview.md
delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/transaction-api.md delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/transaction-guarantee.md delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/txn-how.md delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/txn-monitor.md delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/txn-use.md delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/txn-what.md delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/txn-why.md delete mode 100644 site2/website/versioned_docs/version-2.9.0-deprecated/window-functions-context.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/about.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/adaptors-kafka.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/adaptors-spark.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/adaptors-storm.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-brokers.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-clusters.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-functions.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-namespaces.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-non-partitioned-topics.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-non-persistent-topics.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-overview.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-packages.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-partitioned-topics.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-permissions.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-persistent-topics.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-schemas.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-tenants.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-topics.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/administration-dashboard.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/administration-geo.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/administration-isolation.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/administration-load-balance.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/administration-proxy.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/administration-pulsar-manager.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/administration-stats.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/administration-upgrade.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/administration-zk-bk.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/client-libraries-cgo.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/client-libraries-cpp.md delete mode 
100644 site2/website/versioned_docs/version-2.9.1-deprecated/client-libraries-dotnet.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/client-libraries-go.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/client-libraries-java.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/client-libraries-node.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/client-libraries-python.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/client-libraries-websocket.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/client-libraries.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/concepts-architecture-overview.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/concepts-authentication.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/concepts-clients.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/concepts-messaging.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/concepts-multi-tenancy.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/concepts-multiple-advertised-listeners.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/concepts-overview.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/concepts-proxy-sni-routing.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/concepts-replication.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/concepts-tiered-storage.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/concepts-topic-compaction.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/concepts-transactions.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/cookbooks-bookkeepermetadata.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/cookbooks-compaction.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/cookbooks-deduplication.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/cookbooks-encryption.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/cookbooks-message-queue.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/cookbooks-non-persistent.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/cookbooks-partitioned.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/cookbooks-retention-expiry.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/cookbooks-tiered-storage.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/deploy-aws.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/deploy-bare-metal-multi-cluster.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/deploy-bare-metal.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/deploy-dcos.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/deploy-docker.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/deploy-kubernetes.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/deploy-monitoring.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/develop-load-manager.md delete mode 100644 
site2/website/versioned_docs/version-2.9.1-deprecated/develop-schema.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/develop-tools.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/developing-binary-protocol.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/functions-cli.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/functions-debug.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/functions-deploy.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/functions-develop.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/functions-metrics.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/functions-overview.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/functions-package.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/functions-runtime.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/functions-worker.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/getting-started-concepts-and-architecture.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/getting-started-docker.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/getting-started-helm.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/getting-started-pulsar.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/getting-started-standalone.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/helm-deploy.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/helm-install.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/helm-overview.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/helm-prepare.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/helm-tools.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/helm-upgrade.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/io-aerospike-sink.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/io-canal-source.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/io-cassandra-sink.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/io-cdc-debezium.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/io-cdc.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/io-cli.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/io-connectors.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/io-debezium-source.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/io-debug.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/io-develop.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/io-dynamodb-source.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/io-elasticsearch-sink.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/io-file-source.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/io-flume-sink.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/io-flume-source.md delete mode 100644 
site2/website/versioned_docs/version-2.9.1-deprecated/io-hbase-sink.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/io-hdfs2-sink.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/io-hdfs3-sink.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/io-influxdb-sink.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/io-jdbc-sink.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/io-kafka-sink.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/io-kafka-source.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/io-kinesis-sink.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/io-kinesis-source.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/io-mongo-sink.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/io-netty-source.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/io-nsq-source.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/io-overview.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/io-quickstart.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/io-rabbitmq-sink.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/io-rabbitmq-source.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/io-redis-sink.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/io-solr-sink.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/io-twitter-source.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/io-twitter.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/io-use.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/kubernetes-helm.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/performance-pulsar-perf.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/reference-cli-tools.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/reference-configuration.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/reference-connector-admin.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/reference-metrics.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/reference-pulsar-admin.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/reference-rest-api-overview.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/reference-terminology.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/schema-evolution-compatibility.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/schema-get-started.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/schema-manage.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/schema-understand.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/security-athenz.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/security-authorization.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/security-basic-auth.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/security-bouncy-castle.md delete mode 
100644 site2/website/versioned_docs/version-2.9.1-deprecated/security-encryption.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/security-extending.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/security-jwt.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/security-kerberos.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/security-oauth2.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/security-overview.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/security-tls-authentication.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/security-tls-keystore.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/security-tls-transport.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/security-token-admin.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/sql-deployment-configurations.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/sql-getting-started.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/sql-overview.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/sql-rest-api.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/standalone-docker.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/standalone.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/tiered-storage-aliyun.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/tiered-storage-aws.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/tiered-storage-azure.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/tiered-storage-filesystem.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/tiered-storage-gcs.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/tiered-storage-overview.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/transaction-api.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/transaction-guarantee.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/txn-how.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/txn-monitor.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/txn-use.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/txn-what.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/txn-why.md delete mode 100644 site2/website/versioned_docs/version-2.9.1-deprecated/window-functions-context.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/about.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/adaptors-kafka.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/adaptors-spark.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/adaptors-storm.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-brokers.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-clusters.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-functions.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-namespaces.md delete 
mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-non-partitioned-topics.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-non-persistent-topics.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-overview.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-packages.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-partitioned-topics.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-permissions.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-persistent-topics.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-schemas.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-tenants.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-topics.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/administration-dashboard.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/administration-geo.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/administration-isolation.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/administration-load-balance.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/administration-proxy.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/administration-pulsar-manager.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/administration-stats.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/administration-upgrade.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/administration-zk-bk.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/client-libraries-cgo.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/client-libraries-cpp.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/client-libraries-dotnet.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/client-libraries-go.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/client-libraries-java.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/client-libraries-node.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/client-libraries-python.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/client-libraries-websocket.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/client-libraries.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/concepts-architecture-overview.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/concepts-authentication.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/concepts-clients.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/concepts-messaging.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/concepts-multi-tenancy.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/concepts-multiple-advertised-listeners.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/concepts-overview.md delete mode 100644 
site2/website/versioned_docs/version-2.9.2-deprecated/concepts-proxy-sni-routing.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/concepts-replication.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/concepts-tiered-storage.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/concepts-topic-compaction.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/concepts-transactions.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/cookbooks-bookkeepermetadata.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/cookbooks-compaction.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/cookbooks-deduplication.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/cookbooks-encryption.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/cookbooks-message-queue.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/cookbooks-non-persistent.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/cookbooks-partitioned.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/cookbooks-retention-expiry.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/cookbooks-tiered-storage.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/deploy-aws.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/deploy-bare-metal-multi-cluster.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/deploy-bare-metal.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/deploy-dcos.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/deploy-docker.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/deploy-kubernetes.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/deploy-monitoring.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/develop-load-manager.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/develop-schema.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/develop-tools.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/developing-binary-protocol.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/functions-cli.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/functions-debug.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/functions-deploy.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/functions-develop.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/functions-metrics.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/functions-overview.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/functions-package.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/functions-runtime.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/functions-worker.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/getting-started-concepts-and-architecture.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/getting-started-docker.md delete mode 100644 
site2/website/versioned_docs/version-2.9.2-deprecated/getting-started-helm.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/getting-started-pulsar.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/getting-started-standalone.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/helm-deploy.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/helm-install.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/helm-overview.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/helm-prepare.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/helm-tools.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/helm-upgrade.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/io-aerospike-sink.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/io-canal-source.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/io-cassandra-sink.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/io-cdc-debezium.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/io-cdc.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/io-cli.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/io-connectors.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/io-debezium-source.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/io-debug.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/io-develop.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/io-dynamodb-source.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/io-elasticsearch-sink.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/io-file-source.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/io-flume-sink.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/io-flume-source.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/io-hbase-sink.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/io-hdfs2-sink.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/io-hdfs3-sink.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/io-influxdb-sink.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/io-jdbc-sink.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/io-kafka-sink.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/io-kafka-source.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/io-kinesis-sink.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/io-kinesis-source.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/io-mongo-sink.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/io-netty-source.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/io-nsq-source.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/io-overview.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/io-quickstart.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/io-rabbitmq-sink.md delete mode 100644 
site2/website/versioned_docs/version-2.9.2-deprecated/io-rabbitmq-source.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/io-redis-sink.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/io-solr-sink.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/io-twitter-source.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/io-twitter.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/io-use.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/performance-pulsar-perf.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/reference-cli-tools.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/reference-configuration.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/reference-connector-admin.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/reference-metrics.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/reference-pulsar-admin.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/reference-rest-api-overview.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/reference-terminology.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/schema-evolution-compatibility.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/schema-get-started.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/schema-manage.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/schema-understand.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/security-athenz.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/security-authorization.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/security-basic-auth.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/security-bouncy-castle.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/security-encryption.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/security-extending.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/security-jwt.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/security-kerberos.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/security-oauth2.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/security-overview.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/security-tls-authentication.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/security-tls-keystore.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/security-tls-transport.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/security-token-admin.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/sql-deployment-configurations.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/sql-getting-started.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/sql-overview.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/sql-rest-api.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/standalone.md delete mode 100644 
site2/website/versioned_docs/version-2.9.2-deprecated/tiered-storage-aliyun.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/tiered-storage-aws.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/tiered-storage-azure.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/tiered-storage-filesystem.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/tiered-storage-gcs.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/tiered-storage-overview.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/transaction-api.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/transaction-guarantee.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/txn-how.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/txn-monitor.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/txn-use.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/txn-what.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/txn-why.md delete mode 100644 site2/website/versioned_docs/version-2.9.2-deprecated/window-functions-context.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/about.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/adaptors-kafka.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/adaptors-spark.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/adaptors-storm.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-brokers.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-clusters.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-functions.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-namespaces.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-non-partitioned-topics.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-non-persistent-topics.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-overview.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-packages.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-partitioned-topics.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-permissions.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-persistent-topics.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-schemas.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-tenants.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-topics.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/administration-dashboard.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/administration-geo.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/administration-isolation.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/administration-load-balance.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/administration-proxy.md delete mode 100644 
site2/website/versioned_docs/version-2.9.3-deprecated/administration-pulsar-manager.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/administration-stats.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/administration-upgrade.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/administration-zk-bk.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/client-libraries-cgo.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/client-libraries-cpp.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/client-libraries-dotnet.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/client-libraries-go.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/client-libraries-java.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/client-libraries-node.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/client-libraries-python.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/client-libraries-websocket.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/client-libraries.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/concepts-architecture-overview.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/concepts-authentication.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/concepts-clients.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/concepts-messaging.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/concepts-multi-tenancy.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/concepts-multiple-advertised-listeners.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/concepts-overview.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/concepts-proxy-sni-routing.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/concepts-replication.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/concepts-tiered-storage.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/concepts-topic-compaction.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/concepts-transactions.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/cookbooks-bookkeepermetadata.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/cookbooks-compaction.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/cookbooks-deduplication.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/cookbooks-encryption.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/cookbooks-message-queue.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/cookbooks-non-persistent.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/cookbooks-partitioned.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/cookbooks-retention-expiry.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/cookbooks-tiered-storage.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/deploy-aws.md delete mode 100644 
site2/website/versioned_docs/version-2.9.3-deprecated/deploy-bare-metal-multi-cluster.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/deploy-bare-metal.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/deploy-dcos.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/deploy-docker.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/deploy-kubernetes.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/deploy-monitoring.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/develop-load-manager.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/develop-schema.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/develop-tools.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/developing-binary-protocol.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/functions-cli.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/functions-debug.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/functions-deploy.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/functions-develop.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/functions-metrics.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/functions-overview.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/functions-package.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/functions-runtime.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/functions-worker.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/getting-started-concepts-and-architecture.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/getting-started-docker.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/getting-started-helm.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/getting-started-pulsar.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/getting-started-standalone.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/helm-deploy.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/helm-install.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/helm-overview.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/helm-prepare.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/helm-tools.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/helm-upgrade.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/io-aerospike-sink.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/io-canal-source.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/io-cassandra-sink.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/io-cdc-debezium.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/io-cdc.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/io-cli.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/io-connectors.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/io-debezium-source.md delete mode 
100644 site2/website/versioned_docs/version-2.9.3-deprecated/io-debug.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/io-develop.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/io-dynamodb-source.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/io-elasticsearch-sink.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/io-file-source.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/io-flume-sink.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/io-flume-source.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/io-hbase-sink.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/io-hdfs2-sink.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/io-hdfs3-sink.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/io-influxdb-sink.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/io-jdbc-sink.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/io-kafka-sink.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/io-kafka-source.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/io-kinesis-sink.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/io-kinesis-source.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/io-mongo-sink.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/io-netty-source.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/io-nsq-source.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/io-overview.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/io-quickstart.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/io-rabbitmq-sink.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/io-rabbitmq-source.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/io-redis-sink.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/io-solr-sink.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/io-twitter-source.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/io-twitter.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/io-use.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/performance-pulsar-perf.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/reference-cli-tools.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/reference-configuration.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/reference-connector-admin.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/reference-metrics.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/reference-pulsar-admin.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/reference-rest-api-overview.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/reference-terminology.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/schema-evolution-compatibility.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/schema-get-started.md delete mode 100644 
site2/website/versioned_docs/version-2.9.3-deprecated/schema-manage.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/schema-understand.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/security-athenz.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/security-authorization.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/security-basic-auth.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/security-bouncy-castle.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/security-encryption.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/security-extending.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/security-jwt.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/security-kerberos.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/security-oauth2.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/security-overview.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/security-tls-authentication.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/security-tls-keystore.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/security-tls-transport.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/security-token-admin.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/sql-deployment-configurations.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/sql-getting-started.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/sql-overview.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/sql-rest-api.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/standalone.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/tiered-storage-aliyun.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/tiered-storage-aws.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/tiered-storage-azure.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/tiered-storage-filesystem.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/tiered-storage-gcs.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/tiered-storage-overview.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/transaction-api.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/transaction-guarantee.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/txn-how.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/txn-monitor.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/txn-use.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/txn-what.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/txn-why.md delete mode 100644 site2/website/versioned_docs/version-2.9.3-deprecated/window-functions-context.md delete mode 100644 site2/website/versioned_sidebars/version-2.10.0-sidebars.json delete mode 100644 site2/website/versioned_sidebars/version-2.10.1-sidebars.json delete mode 100644 site2/website/versioned_sidebars/version-2.8.0-sidebars.json delete mode 100644 
site2/website/versioned_sidebars/version-2.8.1-sidebars.json delete mode 100644 site2/website/versioned_sidebars/version-2.8.2-sidebars.json delete mode 100644 site2/website/versioned_sidebars/version-2.8.3-sidebars.json delete mode 100644 site2/website/versioned_sidebars/version-2.9.0-sidebars.json delete mode 100644 site2/website/versioned_sidebars/version-2.9.1-sidebars.json delete mode 100644 site2/website/versioned_sidebars/version-2.9.2-sidebars.json delete mode 100644 site2/website/versioned_sidebars/version-2.9.3-sidebars.json
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/about.md b/site2/website/versioned_docs/version-2.10.0-deprecated/about.md
deleted file mode 100644
index 69010822119822..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/about.md
+++ /dev/null
@@ -1,56 +0,0 @@
----
-slug: /
-id: about
-title: Welcome to the doc portal!
-sidebar_label: "About"
----
-
-import BlockLinks from "@site/src/components/BlockLinks";
-import BlockLink from "@site/src/components/BlockLink";
-import { docUrl } from "@site/src/utils/index";
-
-
-# Welcome to the doc portal!
-***
-
-This portal holds a variety of support documents to help you work with Pulsar. If you’re a beginner, there are tutorials and explainers to help you understand Pulsar and how it works.
-
-If you’re an experienced coder, review this page to learn the easiest way to access the specific content you’re looking for.
-
-## Get Started Now
-
-
-
-
-
-
-
-
-
-## Navigation
-***
-
-There are several ways to get around in the doc portal. The index navigation pane is a table of contents for the entire archive. The archive is divided into sections, like chapters in a book. Click the title of the topic to view it.
-
-In-context links provide an easy way to immediately reference related topics. Click the underlined term to view the topic.
-
-Links to related topics can be found at the bottom of each topic page. Click the link to view the topic.
-
-![Page Linking](/assets/page-linking.png)
-
-## Continuous Improvement
-***
-As you probably know, we are working on a new user experience for our documentation portal that will make learning about and building on top of Apache Pulsar a much better experience. Whether you need overview concepts, how-to procedures, curated guides or quick references, we’re building content to support it. This welcome page is just the first step. We will be providing updates every month.
-
-## Help Improve These Documents
-***
-
-You’ll notice an Edit button at the bottom and top of each page. Click it to open a landing page with instructions for requesting changes to posted documents. These are your resources. Participation is not only welcomed – it’s essential!
-
-## Join the Community!
-***
-
-The Pulsar community on GitHub is active, passionate, and knowledgeable. Join discussions, voice opinions, suggest features, and dive into the code itself. Find your Pulsar family here at [apache/pulsar](https://github.com/apache/pulsar).
-
-An equally passionate community can be found in the [Pulsar Slack channel](https://apache-pulsar.slack.com/). You’ll need an invitation to join, but many GitHub Pulsar community members are Slack members too. Join, hang out, learn, and make some new friends.
-
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/adaptors-kafka.md b/site2/website/versioned_docs/version-2.10.0-deprecated/adaptors-kafka.md
deleted file mode 100644
index e738f9d94b6a9c..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/adaptors-kafka.md
+++ /dev/null
@@ -1,276 +0,0 @@
----
-id: adaptors-kafka
-title: Pulsar adaptor for Apache Kafka
-sidebar_label: "Kafka client wrapper"
-original_id: adaptors-kafka
----
-
-
-Pulsar provides an easy option for applications that are currently written using the [Apache Kafka](http://kafka.apache.org) Java client API.
-
-## Using the Pulsar Kafka compatibility wrapper
-
-In an existing application, replace the regular Kafka client dependency with the Pulsar Kafka wrapper. Remove the following dependency from `pom.xml`:
-
-```xml
-
-<dependency>
-  <groupId>org.apache.kafka</groupId>
-  <artifactId>kafka-clients</artifactId>
-  <version>0.10.2.1</version>
-</dependency>
-
-```
-
-Then include this dependency for the Pulsar Kafka wrapper:
-
-```xml
-
-<dependency>
-  <groupId>org.apache.pulsar</groupId>
-  <artifactId>pulsar-client-kafka</artifactId>
-  <version>@pulsar:version@</version>
-</dependency>
-
-```
-
-With the new dependency, the existing code works without any changes. You only need to adjust the configuration so that producers and consumers point to a Pulsar service rather than to Kafka, and use a Pulsar topic.
-
-## Using the Pulsar Kafka compatibility wrapper together with existing Kafka client
-
-When migrating from Kafka to Pulsar, the application might need to use the original Kafka client and the Pulsar Kafka wrapper together during the migration period. In that case, consider using the unshaded Pulsar Kafka client wrapper:
-
-```xml
-
-<dependency>
-  <groupId>org.apache.pulsar</groupId>
-  <artifactId>pulsar-client-kafka-original</artifactId>
-  <version>@pulsar:version@</version>
-</dependency>
-
-```
-
-When using this dependency, construct producers using `org.apache.kafka.clients.producer.PulsarKafkaProducer` instead of `org.apache.kafka.clients.producer.KafkaProducer`, and construct consumers using `org.apache.kafka.clients.consumer.PulsarKafkaConsumer` instead of `org.apache.kafka.clients.consumer.KafkaConsumer`.
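-
-Before the shaded-wrapper examples below, here is a minimal, hypothetical sketch of the unshaded wrapper in use. It assumes `PulsarKafkaProducer` mirrors the standard `KafkaProducer` constructors; the topic name and serializer choices are illustrative, not part of the original document:
-
-```java
-
-import java.util.Properties;
-
-import org.apache.kafka.clients.producer.Producer;
-import org.apache.kafka.clients.producer.ProducerRecord;
-import org.apache.kafka.clients.producer.PulsarKafkaProducer;
-import org.apache.kafka.common.serialization.IntegerSerializer;
-import org.apache.kafka.common.serialization.StringSerializer;
-
-Properties props = new Properties();
-// Point the wrapper at a Pulsar service instead of a Kafka cluster
-props.put("bootstrap.servers", "pulsar://localhost:6650");
-props.put("key.serializer", IntegerSerializer.class.getName());
-props.put("value.serializer", StringSerializer.class.getName());
-
-// The wrapper implements the regular Kafka Producer interface,
-// so the surrounding code stays ordinary Kafka client code.
-Producer<Integer, String> producer = new PulsarKafkaProducer<>(props);
-producer.send(new ProducerRecord<>("persistent://public/default/my-topic", 1, "hello"));
-producer.close();
-
-```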
- -## Producer example - -```java - -// Topic needs to be a regular Pulsar topic -String topic = "persistent://public/default/my-topic"; - -Properties props = new Properties(); -// Point to a Pulsar service -props.put("bootstrap.servers", "pulsar://localhost:6650"); - -props.put("key.serializer", IntegerSerializer.class.getName()); -props.put("value.serializer", StringSerializer.class.getName()); - -Producer producer = new KafkaProducer(props); - -for (int i = 0; i < 10; i++) { - producer.send(new ProducerRecord(topic, i, "hello-" + i)); - log.info("Message {} sent successfully", i); -} - -producer.close(); - -``` - -## Consumer example - -```java - -String topic = "persistent://public/default/my-topic"; - -Properties props = new Properties(); -// Point to a Pulsar service -props.put("bootstrap.servers", "pulsar://localhost:6650"); -props.put("group.id", "my-subscription-name"); -props.put("enable.auto.commit", "false"); -props.put("key.deserializer", IntegerDeserializer.class.getName()); -props.put("value.deserializer", StringDeserializer.class.getName()); - -Consumer consumer = new KafkaConsumer(props); -consumer.subscribe(Arrays.asList(topic)); - -while (true) { - ConsumerRecords records = consumer.poll(100); - records.forEach(record -> { - log.info("Received record: {}", record); - }); - - // Commit last offset - consumer.commitSync(); -} - -``` - -## Complete Examples - -You can find the complete producer and consumer examples [here](https://github.com/apache/pulsar-adapters/tree/master/pulsar-client-kafka-compat/pulsar-client-kafka-tests/src/test/java/org/apache/pulsar/client/kafka/compat/examples). - -## Compatibility matrix - -Currently the Pulsar Kafka wrapper supports most of the operations offered by the Kafka API. - -### Producer - -APIs: - -| Producer Method | Supported | Notes | -|:------------------------------------------------------------------------------|:----------|:-------------------------------------------------------------------------| -| `Future send(ProducerRecord record)` | Yes | | -| `Future send(ProducerRecord record, Callback callback)` | Yes | | -| `void flush()` | Yes | | -| `List partitionsFor(String topic)` | No | | -| `Map metrics()` | No | | -| `void close()` | Yes | | -| `void close(long timeout, TimeUnit unit)` | Yes | | - -Properties: - -| Config property | Supported | Notes | -|:----------------------------------------|:----------|:------------------------------------------------------------------------------| -| `acks` | Ignored | Durability and quorum writes are configured at the namespace level | -| `auto.offset.reset` | Yes | It uses a default value of `earliest` if you do not give a specific setting. | -| `batch.size` | Ignored | | -| `bootstrap.servers` | Yes | | -| `buffer.memory` | Ignored | | -| `client.id` | Ignored | | -| `compression.type` | Yes | Allows `gzip` and `lz4`. No `snappy`. 
| -| `connections.max.idle.ms` | Yes | Only support up to 2,147,483,647,000(Integer.MAX_VALUE * 1000) ms of idle time| -| `interceptor.classes` | Yes | | -| `key.serializer` | Yes | | -| `linger.ms` | Yes | Controls the group commit time when batching messages | -| `max.block.ms` | Ignored | | -| `max.in.flight.requests.per.connection` | Ignored | In Pulsar ordering is maintained even with multiple requests in flight | -| `max.request.size` | Ignored | | -| `metric.reporters` | Ignored | | -| `metrics.num.samples` | Ignored | | -| `metrics.sample.window.ms` | Ignored | | -| `partitioner.class` | Yes | | -| `receive.buffer.bytes` | Ignored | | -| `reconnect.backoff.ms` | Ignored | | -| `request.timeout.ms` | Ignored | | -| `retries` | Ignored | Pulsar client retries with exponential backoff until the send timeout expires. | -| `send.buffer.bytes` | Ignored | | -| `timeout.ms` | Yes | | -| `value.serializer` | Yes | | - - -### Consumer - -The following table lists consumer APIs. - -| Consumer Method | Supported | Notes | -|:--------------------------------------------------------------------------------------------------------|:----------|:------| -| `Set assignment()` | No | | -| `Set subscription()` | Yes | | -| `void subscribe(Collection topics)` | Yes | | -| `void subscribe(Collection topics, ConsumerRebalanceListener callback)` | No | | -| `void assign(Collection partitions)` | No | | -| `void subscribe(Pattern pattern, ConsumerRebalanceListener callback)` | No | | -| `void unsubscribe()` | Yes | | -| `ConsumerRecords poll(long timeoutMillis)` | Yes | | -| `void commitSync()` | Yes | | -| `void commitSync(Map offsets)` | Yes | | -| `void commitAsync()` | Yes | | -| `void commitAsync(OffsetCommitCallback callback)` | Yes | | -| `void commitAsync(Map offsets, OffsetCommitCallback callback)` | Yes | | -| `void seek(TopicPartition partition, long offset)` | Yes | | -| `void seekToBeginning(Collection partitions)` | Yes | | -| `void seekToEnd(Collection partitions)` | Yes | | -| `long position(TopicPartition partition)` | Yes | | -| `OffsetAndMetadata committed(TopicPartition partition)` | Yes | | -| `Map metrics()` | No | | -| `List partitionsFor(String topic)` | No | | -| `Map> listTopics()` | No | | -| `Set paused()` | No | | -| `void pause(Collection partitions)` | No | | -| `void resume(Collection partitions)` | No | | -| `Map offsetsForTimes(Map timestampsToSearch)` | No | | -| `Map beginningOffsets(Collection partitions)` | No | | -| `Map endOffsets(Collection partitions)` | No | | -| `void close()` | Yes | | -| `void close(long timeout, TimeUnit unit)` | Yes | | -| `void wakeup()` | No | | - -Properties: - -| Config property | Supported | Notes | -|:--------------------------------|:----------|:------------------------------------------------------| -| `group.id` | Yes | Maps to a Pulsar subscription name | -| `max.poll.records` | Yes | | -| `max.poll.interval.ms` | Ignored | Messages are "pushed" from broker | -| `session.timeout.ms` | Ignored | | -| `heartbeat.interval.ms` | Ignored | | -| `bootstrap.servers` | Yes | Needs to point to a single Pulsar service URL | -| `enable.auto.commit` | Yes | | -| `auto.commit.interval.ms` | Ignored | With auto-commit, acks are sent immediately to broker | -| `partition.assignment.strategy` | Ignored | | -| `auto.offset.reset` | Yes | Only support earliest and latest. 
| -| `fetch.min.bytes` | Ignored | | -| `fetch.max.bytes` | Ignored | | -| `fetch.max.wait.ms` | Ignored | | -| `interceptor.classes` | Yes | | -| `metadata.max.age.ms` | Ignored | | -| `max.partition.fetch.bytes` | Ignored | | -| `send.buffer.bytes` | Ignored | | -| `receive.buffer.bytes` | Ignored | | -| `client.id` | Ignored | | - - -## Customize Pulsar configurations - -You can configure Pulsar authentication provider directly from the Kafka properties. - -### Pulsar client properties - -| Config property | Default | Notes | -|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------| -| [`pulsar.authentication.class`](/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setAuthentication-org.apache.pulsar.client.api.Authentication-) | | Configure to auth provider. For example, `org.apache.pulsar.client.impl.auth.AuthenticationTls`.| -| [`pulsar.authentication.params.map`](/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setAuthentication-java.lang.String-java.util.Map-) | | Map which represents parameters for the Authentication-Plugin. | -| [`pulsar.authentication.params.string`](/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setAuthentication-java.lang.String-java.lang.String-) | | String which represents parameters for the Authentication-Plugin, for example, `key1:val1,key2:val2`. | -| [`pulsar.use.tls`](/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setUseTls-boolean-) | `false` | Enable TLS transport encryption. | -| [`pulsar.tls.trust.certs.file.path`](/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setTlsTrustCertsFilePath-java.lang.String-) | | Path for the TLS trust certificate store. | -| [`pulsar.tls.allow.insecure.connection`](/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setTlsAllowInsecureConnection-boolean-) | `false` | Accept self-signed certificates from brokers. | -| [`pulsar.operation.timeout.ms`](/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setOperationTimeout-int-java.util.concurrent.TimeUnit-) | `30000` | General operations timeout. | -| [`pulsar.stats.interval.seconds`](/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setStatsInterval-long-java.util.concurrent.TimeUnit-) | `60` | Pulsar client lib stats printing interval. | -| [`pulsar.num.io.threads`](/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setIoThreads-int-) | `1` | The number of Netty IO threads to use. | -| [`pulsar.connections.per.broker`](/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setConnectionsPerBroker-int-) | `1` | The maximum number of connection to each broker. | -| [`pulsar.use.tcp.nodelay`](/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setUseTcpNoDelay-boolean-) | `true` | TCP no-delay. | -| [`pulsar.concurrent.lookup.requests`](/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setConcurrentLookupRequest-int-) | `50000` | The maximum number of concurrent topic lookups. | -| [`pulsar.max.number.rejected.request.per.connection`](/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setMaxNumberOfRejectedRequestPerConnection-int-) | `50` | The threshold of errors to forcefully close a connection. 
| -| [`pulsar.keepalive.interval.ms`](/api/client/org/apache/pulsar/client/api/ClientBuilder.html#keepAliveInterval-int-java.util.concurrent.TimeUnit-)| `30000` | Keep alive interval for each client-broker-connection. | - - -### Pulsar producer properties - -| Config property | Default | Notes | -|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------| -| [`pulsar.producer.name`](/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setProducerName-java.lang.String-) | | Specify the producer name. | -| [`pulsar.producer.initial.sequence.id`](/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setInitialSequenceId-long-) | | Specify baseline for sequence ID of this producer. | -| [`pulsar.producer.max.pending.messages`](/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setMaxPendingMessages-int-) | `1000` | Set the maximum size of the message queue pending to receive an acknowledgment from the broker. | -| [`pulsar.producer.max.pending.messages.across.partitions`](/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setMaxPendingMessagesAcrossPartitions-int-) | `50000` | Set the maximum number of pending messages across all the partitions. | -| [`pulsar.producer.batching.enabled`](/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setBatchingEnabled-boolean-) | `true` | Control whether automatic batching of messages is enabled for the producer. | -| [`pulsar.producer.batching.max.messages`](/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setBatchingMaxMessages-int-) | `1000` | The maximum number of messages in a batch. | -| [`pulsar.block.if.producer.queue.full`](/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setBlockIfQueueFull-boolean-) | | Specify the block producer if queue is full. | -| [`pulsar.crypto.reader.factory.class.name`](/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setCryptoKeyReader-org.apache.pulsar.client.api.CryptoKeyReader-) | | Specify the CryptoReader-Factory(`CryptoKeyReaderFactory`) classname which allows producer to create CryptoKeyReader. | - - -### Pulsar consumer Properties - -| Config property | Default | Notes | -|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------| -| [`pulsar.consumer.name`](/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setConsumerName-java.lang.String-) | | Specify the consumer name. | -| [`pulsar.consumer.receiver.queue.size`](/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setReceiverQueueSize-int-) | 1000 | Set the size of the consumer receiver queue. | -| [`pulsar.consumer.acknowledgments.group.time.millis`](/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#acknowledgmentGroupTime-long-java.util.concurrent.TimeUnit-) | 100 | Set the maximum amount of group time for consumers to send the acknowledgments to the broker. | -| [`pulsar.consumer.total.receiver.queue.size.across.partitions`](/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setMaxTotalReceiverQueueSizeAcrossPartitions-int-) | 50000 | Set the maximum size of the total receiver queue across partitions. 
| -| [`pulsar.consumer.subscription.topics.mode`](/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#subscriptionTopicsMode-Mode-) | PersistentOnly | Set the subscription topic mode for consumers. | -| [`pulsar.crypto.reader.factory.class.name`](/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setCryptoKeyReader-org.apache.pulsar.client.api.CryptoKeyReader-) | | Specify the CryptoReader-Factory(`CryptoKeyReaderFactory`) classname which allows consumer to create CryptoKeyReader. | diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/adaptors-spark.md b/site2/website/versioned_docs/version-2.10.0-deprecated/adaptors-spark.md deleted file mode 100644 index e14f13b5d4b079..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/adaptors-spark.md +++ /dev/null @@ -1,91 +0,0 @@ ---- -id: adaptors-spark -title: Pulsar adaptor for Apache Spark -sidebar_label: "Apache Spark" -original_id: adaptors-spark ---- - -## Spark Streaming receiver -The Spark Streaming receiver for Pulsar is a custom receiver that enables Apache [Spark Streaming](https://spark.apache.org/streaming/) to receive raw data from Pulsar. - -An application can receive data in [Resilient Distributed Dataset](https://spark.apache.org/docs/latest/programming-guide.html#resilient-distributed-datasets-rdds) (RDD) format via the Spark Streaming receiver and can process it in a variety of ways. - -### Prerequisites - -To use the receiver, include a dependency for the `pulsar-spark` library in your Java configuration. - -#### Maven - -If you're using Maven, add this to your `pom.xml`: - -```xml - - -@pulsar:version@ - - - - org.apache.pulsar - pulsar-spark - ${pulsar.version} - - -``` - -#### Gradle - -If you're using Gradle, add this to your `build.gradle` file: - -```groovy - -def pulsarVersion = "@pulsar:version@" - -dependencies { - compile group: 'org.apache.pulsar', name: 'pulsar-spark', version: pulsarVersion -} - -``` - -### Usage - -Pass an instance of `SparkStreamingPulsarReceiver` to the `receiverStream` method in `JavaStreamingContext`: - -```java - - String serviceUrl = "pulsar://localhost:6650/"; - String topic = "persistent://public/default/test_src"; - String subs = "test_sub"; - - SparkConf sparkConf = new SparkConf().setMaster("local[*]").setAppName("Pulsar Spark Example"); - - JavaStreamingContext jsc = new JavaStreamingContext(sparkConf, Durations.seconds(60)); - - ConsumerConfigurationData pulsarConf = new ConsumerConfigurationData(); - - Set set = new HashSet(); - set.add(topic); - pulsarConf.setTopicNames(set); - pulsarConf.setSubscriptionName(subs); - - SparkStreamingPulsarReceiver pulsarReceiver = new SparkStreamingPulsarReceiver( - serviceUrl, - pulsarConf, - new AuthenticationDisabled()); - - JavaReceiverInputDStream lineDStream = jsc.receiverStream(pulsarReceiver); - -``` - -For a complete example, click [here](https://github.com/apache/pulsar-adapters/blob/master/examples/spark/src/main/java/org/apache/spark/streaming/receiver/example/SparkStreamingPulsarReceiverExample.java). In this example, the number of messages that contain the string "Pulsar" in received messages is counted. - -Note that if needed, other Pulsar authentication classes can be used. 
For example, in order to use a token during authentication the following parameters for the `SparkStreamingPulsarReceiver` constructor can be set: - -```java - -SparkStreamingPulsarReceiver pulsarReceiver = new SparkStreamingPulsarReceiver( - serviceUrl, - pulsarConf, - new AuthenticationToken("token:")); - -``` - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/adaptors-storm.md b/site2/website/versioned_docs/version-2.10.0-deprecated/adaptors-storm.md deleted file mode 100644 index 76d507164777db..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/adaptors-storm.md +++ /dev/null @@ -1,96 +0,0 @@ ---- -id: adaptors-storm -title: Pulsar adaptor for Apache Storm -sidebar_label: "Apache Storm" -original_id: adaptors-storm ---- - -Pulsar Storm is an adaptor for integrating with [Apache Storm](http://storm.apache.org/) topologies. It provides core Storm implementations for sending and receiving data. - -An application can inject data into a Storm topology via a generic Pulsar spout, as well as consume data from a Storm topology via a generic Pulsar bolt. - -## Using the Pulsar Storm Adaptor - -Include dependency for Pulsar Storm Adaptor: - -```xml - - - org.apache.pulsar - pulsar-storm - ${pulsar.version} - - -``` - -## Pulsar Spout - -The Pulsar Spout allows for the data published on a topic to be consumed by a Storm topology. It emits a Storm tuple based on the message received and the `MessageToValuesMapper` provided by the client. - -The tuples that fail to be processed by the downstream bolts will be re-injected by the spout with an exponential backoff, within a configurable timeout (the default is 60 seconds) or a configurable number of retries, whichever comes first, after which it is acknowledged by the consumer. Here's an example construction of a spout: - -```java - -MessageToValuesMapper messageToValuesMapper = new MessageToValuesMapper() { - - @Override - public Values toValues(Message msg) { - return new Values(new String(msg.getData())); - } - - @Override - public void declareOutputFields(OutputFieldsDeclarer declarer) { - // declare the output fields - declarer.declare(new Fields("string")); - } -}; - -// Configure a Pulsar Spout -PulsarSpoutConfiguration spoutConf = new PulsarSpoutConfiguration(); -spoutConf.setServiceUrl("pulsar://broker.messaging.usw.example.com:6650"); -spoutConf.setTopic("persistent://my-property/usw/my-ns/my-topic1"); -spoutConf.setSubscriptionName("my-subscriber-name1"); -spoutConf.setMessageToValuesMapper(messageToValuesMapper); - -// Create a Pulsar Spout -PulsarSpout spout = new PulsarSpout(spoutConf); - -``` - -For a complete example, click [here](https://github.com/apache/pulsar-adapters/blob/master/pulsar-storm/src/test/java/org/apache/pulsar/storm/PulsarSpoutTest.java). - -## Pulsar Bolt - -The Pulsar bolt allows data in a Storm topology to be published on a topic. It publishes messages based on the Storm tuple received and the `TupleToMessageMapper` provided by the client. - -A partitioned topic can also be used to publish messages on different topics. In the implementation of the `TupleToMessageMapper`, a "key" will need to be provided in the message which will send the messages with the same key to the same topic. 
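-
-As a minimal illustration of that keying contract (a sketch only: the tuple field names below are assumptions, not part of the original example), a mapper can set the key on the message builder and let the partitioned producer route by it:
-
-```java
-
-TupleToMessageMapper keyedMapper = new TupleToMessageMapper() {
-
-    @Override
-    public TypedMessageBuilder<byte[]> toMessage(TypedMessageBuilder<byte[]> msgBuilder, Tuple tuple) {
-        // Assumed tuple fields: "userId" selects the partition, "body" is the payload
-        String key = tuple.getStringByField("userId");
-        String payload = tuple.getStringByField("body");
-        // Messages carrying the same key are published to the same partition
-        return msgBuilder.key(key).value(payload.getBytes());
-    }
-
-    @Override
-    public void declareOutputFields(OutputFieldsDeclarer declarer) {
-        // No output fields are needed for this sketch
-    }
-};
-
-```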
Here's an example bolt: - -```java - -TupleToMessageMapper tupleToMessageMapper = new TupleToMessageMapper() { - - @Override - public TypedMessageBuilder toMessage(TypedMessageBuilder msgBuilder, Tuple tuple) { - String receivedMessage = tuple.getString(0); - // message processing - String processedMsg = receivedMessage + "-processed"; - return msgBuilder.value(processedMsg.getBytes()); - } - - @Override - public void declareOutputFields(OutputFieldsDeclarer declarer) { - // declare the output fields - } -}; - -// Configure a Pulsar Bolt -PulsarBoltConfiguration boltConf = new PulsarBoltConfiguration(); -boltConf.setServiceUrl("pulsar://broker.messaging.usw.example.com:6650"); -boltConf.setTopic("persistent://my-property/usw/my-ns/my-topic2"); -boltConf.setTupleToMessageMapper(tupleToMessageMapper); - -// Create a Pulsar Bolt -PulsarBolt bolt = new PulsarBolt(boltConf); - -``` - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-brokers.md b/site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-brokers.md deleted file mode 100644 index 2674c7da875f9b..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-brokers.md +++ /dev/null @@ -1,286 +0,0 @@ ---- -id: admin-api-brokers -title: Managing Brokers -sidebar_label: "Brokers" -original_id: admin-api-brokers ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -> **Important** -> -> This page only shows **some frequently used operations**. -> -> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more information, see [Pulsar admin doc](/tools/pulsar-admin/). -> -> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. -> -> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](/api/admin/). - -Pulsar brokers consist of two components: - -1. An HTTP server exposing a {@inject: rest:REST:/} interface administration and [topic](reference-terminology.md#topic) lookup. -2. A dispatcher that handles all Pulsar [message](reference-terminology.md#message) transfers. - -[Brokers](reference-terminology.md#broker) can be managed via: - -* The `brokers` command of the [`pulsar-admin`](/tools/pulsar-admin/) tool -* The `/admin/v2/brokers` endpoint of the admin {@inject: rest:REST:/} API -* The `brokers` method of the `PulsarAdmin` object in the [Java API](client-libraries-java.md) - -In addition to being configurable when you start them up, brokers can also be [dynamically configured](#dynamic-broker-configuration). - -> See the [Configuration](reference-configuration.md#broker) page for a full listing of broker-specific configuration parameters. - -## Brokers resources - -### List active brokers - -Fetch all available active brokers that are serving traffic with cluster name. - -````mdx-code-block - - - -```shell - -$ pulsar-admin brokers list use - -``` - -``` - -broker1.use.org.com:8080 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/brokers/:cluster|operation/getActiveBrokers?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().getActiveBrokers(clusterName) - -``` - - - - -```` - -### Get the information of the leader broker - -Fetch the information of the leader broker, for example, the service url. 
- -````mdx-code-block - - - -```shell - -$ pulsar-admin brokers leader-broker - -``` - -``` - -BrokerInfo(serviceUrl=broker1.use.org.com:8080) - -``` - - - - -{@inject: endpoint|GET|/admin/v2/brokers/leaderBroker|operation/getLeaderBroker?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().getLeaderBroker() - -``` - -For the detail of the code above, see [here](https://github.com/apache/pulsar/blob/master/pulsar-client-admin/src/main/java/org/apache/pulsar/client/admin/internal/BrokersImpl.java#L80) - - - - -```` - -#### list of namespaces owned by a given broker - -It finds all namespaces which are owned and served by a given broker. - -````mdx-code-block - - - -```shell - -$ pulsar-admin brokers namespaces use \ - --url broker1.use.org.com:8080 - -``` - -```json - -{ - "my-property/use/my-ns/0x00000000_0xffffffff": { - "broker_assignment": "shared", - "is_controlled": false, - "is_active": true - } -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/brokers/:cluster/:broker/ownedNamespaces|operation/getOwnedNamespaes?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().getOwnedNamespaces(cluster,brokerUrl); - -``` - - - - -```` - -### Dynamic broker configuration - -One way to configure a Pulsar [broker](reference-terminology.md#broker) is to supply a [configuration](reference-configuration.md#broker) when the broker is [started up](reference-cli-tools.md#pulsar-broker). - -But since all broker configuration in Pulsar is stored in ZooKeeper, configuration values can also be dynamically updated *while the broker is running*. When you update broker configuration dynamically, ZooKeeper will notify the broker of the change and the broker will then override any existing configuration values. - -* The [`brokers`](reference-pulsar-admin.md#brokers) command for the [`pulsar-admin`](reference-pulsar-admin.md) tool has a variety of subcommands that enable you to manipulate a broker's configuration dynamically, enabling you to [update config values](#update-dynamic-configuration) and more. -* In the Pulsar admin {@inject: rest:REST:/} API, dynamic configuration is managed through the `/admin/v2/brokers/configuration` endpoint. - -### Update dynamic configuration - -````mdx-code-block - - - -The [`update-dynamic-config`](reference-pulsar-admin.md#brokers-update-dynamic-config) subcommand will update existing configuration. It takes two arguments: the name of the parameter and the new value using the `config` and `value` flag respectively. Here's an example for the [`brokerShutdownTimeoutMs`](reference-configuration.md#broker-brokerShutdownTimeoutMs) parameter: - -```shell - -$ pulsar-admin brokers update-dynamic-config --config brokerShutdownTimeoutMs --value 100 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/brokers/configuration/:configName/:configValue|operation/updateDynamicConfiguration?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().updateDynamicConfiguration(configName, configValue); - -``` - - - - -```` - -### List updated values - -Fetch a list of all potentially updatable configuration parameters. -````mdx-code-block - - - -```shell - -$ pulsar-admin brokers list-dynamic-config -brokerShutdownTimeoutMs - -``` - - - - -{@inject: endpoint|GET|/admin/v2/brokers/configuration|operation/getDynamicConfigurationName?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().getDynamicConfigurationNames(); - -``` - - - - -```` - -### List all - -Fetch a list of all parameters that have been dynamically updated. 
- -````mdx-code-block - - - -```shell - -$ pulsar-admin brokers get-all-dynamic-config -brokerShutdownTimeoutMs:100 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/brokers/configuration/values|operation/getAllDynamicConfigurations?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().getAllDynamicConfigurations(); - -``` - - - - -```` diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-clusters.md b/site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-clusters.md deleted file mode 100644 index 53cd43187e0697..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-clusters.md +++ /dev/null @@ -1,318 +0,0 @@ ---- -id: admin-api-clusters -title: Managing Clusters -sidebar_label: "Clusters" -original_id: admin-api-clusters ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -> **Important** -> -> This page only shows **some frequently used operations**. -> -> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](/tools/pulsar-admin/) -> -> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. -> -> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](/api/admin/). - -Pulsar clusters consist of one or more Pulsar [brokers](reference-terminology.md#broker), one or more [BookKeeper](reference-terminology.md#bookkeeper) -servers (aka [bookies](reference-terminology.md#bookie)), and a [ZooKeeper](https://zookeeper.apache.org) cluster that provides configuration and coordination management. - -Clusters can be managed via: - -* The `clusters` command of the [`pulsar-admin`](/tools/pulsar-admin/)) tool -* The `/admin/v2/clusters` endpoint of the admin {@inject: rest:REST:/} API -* The `clusters` method of the `PulsarAdmin` object in the [Java API](client-libraries-java.md) - -## Clusters resources - -### Provision - -New clusters can be provisioned using the admin interface. - -> Please note that this operation requires superuser privileges. - -````mdx-code-block - - - -You can provision a new cluster using the [`create`](reference-pulsar-admin.md#clusters-create) subcommand. Here's an example: - -```shell - -$ pulsar-admin clusters create cluster-1 \ - --url http://my-cluster.org.com:8080 \ - --broker-url pulsar://my-cluster.org.com:6650 - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/clusters/:cluster|operation/createCluster?version=@pulsar:version_number@} - - - - -```java - -ClusterData clusterData = new ClusterData( - serviceUrl, - serviceUrlTls, - brokerServiceUrl, - brokerServiceUrlTls -); -admin.clusters().createCluster(clusterName, clusterData); - -``` - - - - -```` - -### Initialize cluster metadata - -When provision a new cluster, you need to initialize that cluster's [metadata](concepts-architecture-overview.md#metadata-store). 
When initializing cluster metadata, you need to specify all of the following: - -* The name of the cluster -* The local metadata store connection string for the cluster -* The configuration store connection string for the entire instance -* The web service URL for the cluster -* A broker service URL enabling interaction with the [brokers](reference-terminology.md#broker) in the cluster - -You must initialize cluster metadata *before* starting up any [brokers](admin-api-brokers.md) that will belong to the cluster. - -> **No cluster metadata initialization through the REST API or the Java admin API** -> -> Unlike most other admin functions in Pulsar, cluster metadata initialization cannot be performed via the admin REST API -> or the admin Java client, as metadata initialization involves communicating with ZooKeeper directly. -> Instead, you can use the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool, in particular -> the [`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata) command. - -Here's an example cluster metadata initialization command: - -```shell - -bin/pulsar initialize-cluster-metadata \ - --cluster us-west \ - --metadata-store zk:zk1.us-west.example.com:2181,zk2.us-west.example.com:2181/my-chroot-path \ - --configuration-metadata-store zk:zk1.us-west.example.com:2181,zk2.us-west.example.com:2181/my-chroot-path \ - --web-service-url http://pulsar.us-west.example.com:8080/ \ - --web-service-url-tls https://pulsar.us-west.example.com:8443/ \ - --broker-service-url pulsar://pulsar.us-west.example.com:6650/ \ - --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651/ - -``` - -You'll need to use `--*-tls` flags only if you're using [TLS authentication](security-tls-authentication.md) in your instance. - -### Get configuration - -You can fetch the [configuration](reference-configuration.md) for an existing cluster at any time. - -````mdx-code-block - - - -Use the [`get`](reference-pulsar-admin.md#clusters-get) subcommand and specify the name of the cluster. Here's an example: - -```shell - -$ pulsar-admin clusters get cluster-1 -{ - "serviceUrl": "http://my-cluster.org.com:8080/", - "serviceUrlTls": null, - "brokerServiceUrl": "pulsar://my-cluster.org.com:6650/", - "brokerServiceUrlTls": null - "peerClusterNames": null -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/clusters/:cluster|operation/getCluster?version=@pulsar:version_number@} - - - - -```java - -admin.clusters().getCluster(clusterName); - -``` - - - - -```` - -### Update - -You can update the configuration for an existing cluster at any time. - -````mdx-code-block - - - -Use the [`update`](reference-pulsar-admin.md#clusters-update) subcommand and specify new configuration values using flags. - -```shell - -$ pulsar-admin clusters update cluster-1 \ - --url http://my-cluster.org.com:4081 \ - --broker-url pulsar://my-cluster.org.com:3350 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/clusters/:cluster|operation/updateCluster?version=@pulsar:version_number@} - - - - -```java - -ClusterData clusterData = new ClusterData( - serviceUrl, - serviceUrlTls, - brokerServiceUrl, - brokerServiceUrlTls -); -admin.clusters().updateCluster(clusterName, clusterData); - -``` - - - - -```` - -### Delete - -Clusters can be deleted from a Pulsar [instance](reference-terminology.md#instance). - -````mdx-code-block - - - -Use the [`delete`](reference-pulsar-admin.md#clusters-delete) subcommand and specify the name of the cluster. 
- -``` - -$ pulsar-admin clusters delete cluster-1 - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/clusters/:cluster|operation/deleteCluster?version=@pulsar:version_number@} - - - - -```java - -admin.clusters().deleteCluster(clusterName); - -``` - - - - -```` - -### List - -You can fetch a list of all clusters in a Pulsar [instance](reference-terminology.md#instance). - -````mdx-code-block - - - -Use the [`list`](reference-pulsar-admin.md#clusters-list) subcommand. - -```shell - -$ pulsar-admin clusters list -cluster-1 -cluster-2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/clusters|operation/getClusters?version=@pulsar:version_number@} - - - - -```java - -admin.clusters().getClusters(); - -``` - - - - -```` - -### Update peer-cluster data - -Peer clusters can be configured for a given cluster in a Pulsar [instance](reference-terminology.md#instance). - -````mdx-code-block - - - -Use the [`update-peer-clusters`](reference-pulsar-admin.md#clusters-update-peer-clusters) subcommand and specify the list of peer-cluster names. - -``` - -$ pulsar-admin update-peer-clusters cluster-1 --peer-clusters cluster-2 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/clusters/:cluster/peers|operation/setPeerClusterNames?version=@pulsar:version_number@} - - - - -```java - -admin.clusters().updatePeerClusterNames(clusterName, peerClusterList); - -``` - - - - -```` \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-functions.md b/site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-functions.md deleted file mode 100644 index 8274a21d68008a..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-functions.md +++ /dev/null @@ -1,830 +0,0 @@ ---- -id: admin-api-functions -title: Manage Functions -sidebar_label: "Functions" -original_id: admin-api-functions ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -> **Important** -> -> This page only shows **some frequently used operations**. -> -> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](/tools/pulsar-admin/) -> -> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. -> -> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](/api/admin/). - -**Pulsar Functions** are lightweight compute processes that - -* consume messages from one or more Pulsar topics -* apply a user-supplied processing logic to each message -* publish the results of the computation to another topic - -Functions can be managed via the following methods. - -Method | Description ----|--- -**Admin CLI** | The `functions` command of the [`pulsar-admin`](/tools/pulsar-admin/) tool. -**REST API** |The `/admin/v3/functions` endpoint of the admin {@inject: rest:REST:/} API. -**Java Admin API**| The `functions` method of the `PulsarAdmin` object in the [Java API](client-libraries-java.md). - -## Function resources - -You can perform the following operations on functions. - -### Create a function - -You can create a Pulsar function in cluster mode (deploy it on a Pulsar cluster) using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`create`](reference-pulsar-admin.md#functions-create) subcommand. 
- -**Example** - -```shell - -$ pulsar-admin functions create \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --inputs test-input-topic \ - --output persistent://public/default/test-output-topic \ - --classname org.apache.pulsar.functions.api.examples.ExclamationFunction \ - --jar /examples/api-examples.jar - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName|operation/registerFunction?version=@pulsar:version_number@} - - - - -```java - -FunctionConfig functionConfig = new FunctionConfig(); -functionConfig.setTenant(tenant); -functionConfig.setNamespace(namespace); -functionConfig.setName(functionName); -functionConfig.setRuntime(FunctionConfig.Runtime.JAVA); -functionConfig.setParallelism(1); -functionConfig.setClassName("org.apache.pulsar.functions.api.examples.ExclamationFunction"); -functionConfig.setProcessingGuarantees(FunctionConfig.ProcessingGuarantees.ATLEAST_ONCE); -functionConfig.setTopicsPattern(sourceTopicPattern); -functionConfig.setSubName(subscriptionName); -functionConfig.setAutoAck(true); -functionConfig.setOutput(sinkTopic); -admin.functions().createFunction(functionConfig, fileName); - -``` - - - - -```` - -### Update a function - -You can update a Pulsar function that has been deployed to a Pulsar cluster using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`update`](reference-pulsar-admin.md#functions-update) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions update \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --output persistent://public/default/update-output-topic \ - # other options - -``` - - - - -{@inject: endpoint|PUT|/admin/v3/functions/:tenant/:namespace/:functionName|operation/updateFunction?version=@pulsar:version_number@} - - - - -```java - -FunctionConfig functionConfig = new FunctionConfig(); -functionConfig.setTenant(tenant); -functionConfig.setNamespace(namespace); -functionConfig.setName(functionName); -functionConfig.setRuntime(FunctionConfig.Runtime.JAVA); -functionConfig.setParallelism(1); -functionConfig.setClassName("org.apache.pulsar.functions.api.examples.ExclamationFunction"); -UpdateOptions updateOptions = new UpdateOptions(); -updateOptions.setUpdateAuthData(updateAuthData); -admin.functions().updateFunction(functionConfig, userCodeFile, updateOptions); - -``` - - - - -```` - -### Start an instance of a function - -You can start a stopped function instance with `instance-id` using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`start`](reference-pulsar-admin.md#functions-start) subcommand. - -```shell - -$ pulsar-admin functions start \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/start|operation/startFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().startFunction(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Start all instances of a function - -You can start all stopped function instances using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`start`](reference-pulsar-admin.md#functions-start) subcommand. 
- -**Example** - -```shell - -$ pulsar-admin functions start \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/start|operation/startFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().startFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### Stop an instance of a function - -You can stop a function instance with `instance-id` using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`stop`](reference-pulsar-admin.md#functions-stop) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions stop \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/stop|operation/stopFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().stopFunction(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Stop all instances of a function - -You can stop all function instances using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`stop`](reference-pulsar-admin.md#functions-stop) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions stop \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/stop|operation/stopFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().stopFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### Restart an instance of a function - -Restart a function instance with `instance-id` using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`restart`](reference-pulsar-admin.md#functions-restart) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions restart \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/restart|operation/restartFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().restartFunction(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Restart all instances of a function - -You can restart all function instances using Admin CLI, REST API or Java admin API. - -````mdx-code-block - - - -Use the [`restart`](reference-pulsar-admin.md#functions-restart) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions restart \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/restart|operation/restartFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().restartFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### List all functions - -You can list all Pulsar functions running under a specific tenant and namespace using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`list`](reference-pulsar-admin.md#functions-list) subcommand. 
- -**Example** - -```shell - -$ pulsar-admin functions list \ - --tenant public \ - --namespace default - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace|operation/listFunctions?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctions(tenant, namespace); - -``` - - - - -```` - -### Delete a function - -You can delete a Pulsar function that is running on a Pulsar cluster using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`delete`](reference-pulsar-admin.md#functions-delete) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions delete \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) - -``` - - - - -{@inject: endpoint|DELETE|/admin/v3/functions/:tenant/:namespace/:functionName|operation/deregisterFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().deleteFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### Get info about a function - -You can get information about a Pulsar function currently running in cluster mode using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`get`](reference-pulsar-admin.md#functions-get) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions get \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName|operation/getFunctionInfo?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### Get status of an instance of a function - -You can get the current status of a Pulsar function instance with `instance-id` using Admin CLI, REST API or Java Admin API. -````mdx-code-block - - - -Use the [`status`](reference-pulsar-admin.md#functions-status) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions status \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/status|operation/getFunctionInstanceStatus?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionStatus(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Get status of all instances of a function - -You can get the current status of a Pulsar function instance using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`status`](reference-pulsar-admin.md#functions-status) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions status \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/status|operation/getFunctionStatus?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionStatus(tenant, namespace, functionName); - -``` - - - - -```` - -### Get stats of an instance of a function - -You can get the current stats of a Pulsar Function instance with `instance-id` using Admin CLI, REST API or Java admin API. -````mdx-code-block - - - -Use the [`stats`](reference-pulsar-admin.md#functions-stats) subcommand. 
- -**Example** - -```shell - -$ pulsar-admin functions stats \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/stats|operation/getFunctionInstanceStats?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionStats(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Get stats of all instances of a function - -You can get the current stats of a Pulsar function using Admin CLI, REST API or Java admin API. - -````mdx-code-block - - - -Use the [`stats`](reference-pulsar-admin.md#functions-stats) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions stats \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/stats|operation/getFunctionStats?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionStats(tenant, namespace, functionName); - -``` - - - - -```` - -### Trigger a function - -You can trigger a specified Pulsar function with a supplied value using Admin CLI, REST API or Java admin API. - -````mdx-code-block - - - -Use the [`trigger`](reference-pulsar-admin.md#functions-trigger) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions trigger \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --topic (the name of input topic) \ - --trigger-value \"hello pulsar\" - # or --trigger-file (the path of trigger file) - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/trigger|operation/triggerFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().triggerFunction(tenant, namespace, functionName, topic, triggerValue, triggerFile); - -``` - - - - -```` - -### Put state associated with a function - -You can put the state associated with a Pulsar function using Admin CLI, REST API or Java admin API. - -````mdx-code-block - - - -Use the [`putstate`](reference-pulsar-admin.md#functions-putstate) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions putstate \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --state "{\"key\":\"pulsar\", \"stringValue\":\"hello pulsar\"}" - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/state/:key|operation/putFunctionState?version=@pulsar:version_number@} - - - - -```java - -TypeReference typeRef = new TypeReference() {}; -FunctionState stateRepr = ObjectMapperFactory.getThreadLocal().readValue(state, typeRef); -admin.functions().putFunctionState(tenant, namespace, functionName, stateRepr); - -``` - - - - -```` - -### Fetch state associated with a function - -You can fetch the current state associated with a Pulsar function using Admin CLI, REST API or Java admin API. - -````mdx-code-block - - - -Use the [`querystate`](reference-pulsar-admin.md#functions-querystate) subcommand. 
- -**Example** - -```shell - -$ pulsar-admin functions querystate \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --key (the key of state) - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/state/:key|operation/getFunctionState?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionState(tenant, namespace, functionName, key); - -``` - - - - -```` \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-namespaces.md b/site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-namespaces.md deleted file mode 100644 index ec1183f6de55f5..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-namespaces.md +++ /dev/null @@ -1,1267 +0,0 @@ ---- -id: admin-api-namespaces -title: Managing Namespaces -sidebar_label: "Namespaces" -original_id: admin-api-namespaces ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -> **Important** -> -> This page only shows **some frequently used operations**. -> -> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more information, see [Pulsar admin doc](/tools/pulsar-admin/). -> -> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. -> -> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](/api/admin/). - -Pulsar [namespaces](reference-terminology.md#namespace) are logical groupings of [topics](reference-terminology.md#topic). - -Namespaces can be managed via: - -* The `namespaces` command of the [`pulsar-admin`](/tools/pulsar-admin/) tool -* The `/admin/v2/namespaces` endpoint of the admin {@inject: rest:REST:/} API -* The `namespaces` method of the `PulsarAdmin` object in the [Java API](client-libraries-java.md) - -## Namespaces resources - -### Create namespaces - -You can create new namespaces under a given [tenant](reference-terminology.md#tenant). - -````mdx-code-block - - - -Use the [`create`](reference-pulsar-admin.md#namespaces-create) subcommand and specify the namespace by name: - -```shell - -$ pulsar-admin namespaces create test-tenant/test-namespace - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace|operation/createNamespace?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().createNamespace(namespace); - -``` - - - - -```` - -### Get policies - -You can fetch the current policies associated with a namespace at any time. 
- -````mdx-code-block - - - -Use the [`policies`](reference-pulsar-admin.md#namespaces-policies) subcommand and specify the namespace: - -```shell - -$ pulsar-admin namespaces policies test-tenant/test-namespace -{ - "auth_policies": { - "namespace_auth": {}, - "destination_auth": {} - }, - "replication_clusters": [], - "bundles_activated": true, - "bundles": { - "boundaries": [ - "0x00000000", - "0xffffffff" - ], - "numBundles": 1 - }, - "backlog_quota_map": {}, - "persistence": null, - "latency_stats_sample_rate": {}, - "message_ttl_in_seconds": 0, - "retention_policies": null, - "deleted": false -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace|operation/getPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getPolicies(namespace); - -``` - - - - -```` - -### List namespaces - -You can list all namespaces within a given Pulsar [tenant](reference-terminology.md#tenant). - -````mdx-code-block - - - -Use the [`list`](reference-pulsar-admin.md#namespaces-list) subcommand and specify the tenant: - -```shell - -$ pulsar-admin namespaces list test-tenant -test-tenant/ns1 -test-tenant/ns2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant|operation/getTenantNamespaces?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getNamespaces(tenant); - -``` - - - - -```` - -### Delete namespaces - -You can delete existing namespaces from a tenant. - -````mdx-code-block - - - -Use the [`delete`](reference-pulsar-admin.md#namespaces-delete) subcommand and specify the namespace: - -```shell - -$ pulsar-admin namespaces delete test-tenant/ns1 - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace|operation/deleteNamespace?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().deleteNamespace(namespace); - -``` - - - - -```` - -### Configure replication clusters - -#### Set replication cluster - -You can set replication clusters for a namespace to enable Pulsar to internally replicate the published messages from one colocation facility to another. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-clusters test-tenant/ns1 \ - --clusters cl1 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/replication|operation/setNamespaceReplicationClusters?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setNamespaceReplicationClusters(namespace, clusters); - -``` - - - - -```` - -#### Get replication cluster - -You can get the list of replication clusters for a given namespace. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-clusters test-tenant/cl1/ns1 - -``` - -``` - -cl2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/replication|operation/getNamespaceReplicationClusters?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getNamespaceReplicationClusters(namespace) - -``` - - - - -```` - -### Configure backlog quota policies - -#### Set backlog quota policies - -Backlog quota helps the broker to restrict bandwidth/storage of a namespace once it reaches a certain threshold limit. Admin can set the limit and take corresponding action after the limit is reached. - - 1. producer_request_hold: broker holds but not persists produce request payload - - 2. producer_exception: broker disconnects with the client by giving an exception - - 3. 
consumer_backlog_eviction: the broker starts discarding backlog messages

Backlog quota restriction is enforced by defining a restriction of backlog-quota-type: `destination_storage`.

````mdx-code-block

```

$ pulsar-admin namespaces set-backlog-quota --limit 10G --limitTime 36000 --policy producer_request_hold test-tenant/ns1

```

{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/setBacklogQuota?version=@pulsar:version_number@}

```java

admin.namespaces().setBacklogQuota(namespace, new BacklogQuota(limit, limitTime, policy))

```

````

#### Get backlog quota policies

You can get the configured backlog quota for a given namespace.

````mdx-code-block

```

$ pulsar-admin namespaces get-backlog-quotas test-tenant/ns1

```

```json

{
  "destination_storage": {
    "limit": 10,
    "policy": "producer_request_hold"
  }
}

```

{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/backlogQuotaMap|operation/getBacklogQuotaMap?version=@pulsar:version_number@}

```java

admin.namespaces().getBacklogQuotaMap(namespace);

```

````

#### Remove backlog quota policies

You can remove backlog quota policies for a given namespace.

````mdx-code-block

```

$ pulsar-admin namespaces remove-backlog-quota test-tenant/ns1

```

{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/removeBacklogQuota?version=@pulsar:version_number@}

```java

admin.namespaces().removeBacklogQuota(namespace, backlogQuotaType)

```

````

### Configure persistence policies

#### Set persistence policies

Persistence policies allow users to configure the persistence level for all topic messages under a given namespace.

  - Bookkeeper-ack-quorum: The number of acknowledgments (guaranteed copies) to wait for on each entry; default: 0

  - Bookkeeper-ensemble: The number of bookies to use for a topic; default: 0

  - Bookkeeper-write-quorum: The number of writes to make for each entry; default: 0

  - Ml-mark-delete-max-rate: The throttling rate of mark-delete operations (0 means no throttle); default: 0.0

````mdx-code-block

```

$ pulsar-admin namespaces set-persistence --bookkeeper-ack-quorum 2 --bookkeeper-ensemble 3 --bookkeeper-write-quorum 2 --ml-mark-delete-max-rate 0 test-tenant/ns1

```

{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/setPersistence?version=@pulsar:version_number@}

```java

admin.namespaces().setPersistence(namespace, new PersistencePolicies(bookkeeperEnsemble, bookkeeperWriteQuorum, bookkeeperAckQuorum, managedLedgerMaxMarkDeleteRate))

```

````

#### Get persistence policies

You can get the configured persistence policies of a given namespace.

````mdx-code-block

```

$ pulsar-admin namespaces get-persistence test-tenant/ns1

```

```json

{
  "bookkeeperEnsemble": 3,
  "bookkeeperWriteQuorum": 2,
  "bookkeeperAckQuorum": 2,
  "managedLedgerMaxMarkDeleteRate": 0
}

```

{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/getPersistence?version=@pulsar:version_number@}

```java

admin.namespaces().getPersistence(namespace)

```

````
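The following is a minimal end-to-end sketch in Java that sets the persistence policies for a namespace and reads them back. It assumes a broker reachable at `http://localhost:8080` with no authentication, and the namespace name is illustrative; the snippet runs inside any method that can throw exceptions.

```java

import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.common.policies.data.PersistencePolicies;

try (PulsarAdmin admin = PulsarAdmin.builder()
        .serviceHttpUrl("http://localhost:8080")
        .build()) {
    // Ensemble of 3 bookies, 2 copies written per entry, wait for 2 acks,
    // and no throttling of mark-delete operations.
    admin.namespaces().setPersistence("test-tenant/ns1",
            new PersistencePolicies(3, 2, 2, 0));
    // Read the policies back to confirm the change was applied.
    PersistencePolicies applied = admin.namespaces().getPersistence("test-tenant/ns1");
    System.out.println(applied);
}

```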
### Configure namespace bundles

#### Unload namespace bundles

The namespace bundle is a virtual group of topics that belong to the same namespace. If a broker gets overloaded by the number of bundles it serves, this command can help unload a bundle from that broker so that it can be served by other, less-loaded brokers. The namespace bundle ID ranges from 0x00000000 to 0xffffffff.

````mdx-code-block

```

$ pulsar-admin namespaces unload --bundle 0x00000000_0xffffffff test-tenant/ns1

```

{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace/{bundle}/unload|operation/unloadNamespaceBundle?version=@pulsar:version_number@}

```java

admin.namespaces().unloadNamespaceBundle(namespace, bundle)

```

````

#### Split namespace bundles

One namespace bundle can contain multiple topics but can be served by only one broker. If a single bundle is creating an excessive load on a broker, an admin can split the bundle using the command below, permitting one or more of the new bundles to be unloaded, thus balancing the load across the brokers.

````mdx-code-block

```

$ pulsar-admin namespaces split-bundle --bundle 0x00000000_0xffffffff test-tenant/ns1

```

{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace/:bundle/split|operation/splitNamespaceBundle?version=@pulsar:version_number@}

```java

admin.namespaces().splitNamespaceBundle(namespace, bundle)

```

````

### Configure message TTL

#### Set message-ttl

You can configure the time-to-live duration (in seconds) for messages. In the example below, the message-ttl is set to 100 seconds.

````mdx-code-block

```

$ pulsar-admin namespaces set-message-ttl --messageTTL 100 test-tenant/ns1

```

{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/setNamespaceMessageTTL?version=@pulsar:version_number@}

```java

admin.namespaces().setNamespaceMessageTTL(namespace, messageTTL)

```

````

#### Get message-ttl

When the message-ttl for a namespace is set, you can use the command below to get the configured value. This example continues the `set message-ttl` example above, so the returned value is 100 (seconds).

````mdx-code-block

```

$ pulsar-admin namespaces get-message-ttl test-tenant/ns1

```

```

100

```

{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/getNamespaceMessageTTL?version=@pulsar:version_number@}

```java

admin.namespaces().getNamespaceMessageTTL(namespace)

```

```

100

```

````

#### Remove message-ttl

Remove the message TTL of the configured namespace.

````mdx-code-block

```

$ pulsar-admin namespaces remove-message-ttl test-tenant/ns1

```

{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/removeNamespaceMessageTTL?version=@pulsar:version_number@}

```java

admin.namespaces().removeNamespaceMessageTTL(namespace)

```

````
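As a quick sanity check, the TTL value can be set and read back with the Java admin client, reusing the `admin` instance from the persistence sketch above; the namespace name is illustrative.

```java

String namespace = "test-tenant/ns1";
// Unacknowledged messages older than 100 seconds become eligible for expiry.
admin.namespaces().setNamespaceMessageTTL(namespace, 100);
int ttl = admin.namespaces().getNamespaceMessageTTL(namespace); // returns 100
// Remove the override to fall back to the broker-level default.
admin.namespaces().removeNamespaceMessageTTL(namespace);

```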
### Clear backlog

#### Clear namespace backlog

This command clears the message backlog for all the topics that belong to a specific namespace. You can also clear the backlog for a specific subscription only.

````mdx-code-block

```

$ pulsar-admin namespaces clear-backlog --sub my-subscription test-tenant/ns1

```

{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/clearBacklog|operation/clearNamespaceBacklogForSubscription?version=@pulsar:version_number@}

```java

admin.namespaces().clearNamespaceBacklogForSubscription(namespace, subscription)

```

````

#### Clear bundle backlog

This command clears the message backlog for all the topics that belong to a specific namespace bundle. You can also clear the backlog for a specific subscription only.

````mdx-code-block

```

$ pulsar-admin namespaces clear-backlog --bundle 0x00000000_0xffffffff --sub my-subscription test-tenant/ns1

```

{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/:bundle/clearBacklog|operation/clearNamespaceBundleBacklogForSubscription?version=@pulsar:version_number@}

```java

admin.namespaces().clearNamespaceBundleBacklogForSubscription(namespace, bundle, subscription)

```

````

### Configure retention

#### Set retention

Each namespace contains multiple topics, and the retention size (storage size) of each topic should not exceed a specific threshold, or messages should be stored only for a certain period. This command configures the retention size and time of topics in a given namespace.

````mdx-code-block

```

$ pulsar-admin namespaces set-retention --size 100 --time 10 test-tenant/ns1

```

{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/retention|operation/setRetention?version=@pulsar:version_number@}

```java

admin.namespaces().setRetention(namespace, new RetentionPolicies(retentionTimeInMin, retentionSizeInMB))

```

````

#### Get retention

This command shows the retention information of a given namespace.

````mdx-code-block

```

$ pulsar-admin namespaces get-retention test-tenant/ns1

```

```json

{
  "retentionTimeInMinutes": 10,
  "retentionSizeInMB": 100
}

```

{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/retention|operation/getRetention?version=@pulsar:version_number@}

```java

admin.namespaces().getRetention(namespace)

```

````
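Note that the Java constructor takes the retention time (in minutes) before the size (in MB), which is the reverse of the `--size`/`--time` flag order in the CLI example above. A short sketch, again reusing the `admin` client from the earlier examples:

```java

String namespace = "test-tenant/ns1";
// 10 minutes of retention time and 100 MB of retention size,
// matching the CLI example above (note the reversed argument order).
admin.namespaces().setRetention(namespace, new RetentionPolicies(10, 100));
RetentionPolicies retention = admin.namespaces().getRetention(namespace);
System.out.println(retention.getRetentionTimeInMinutes() + " min / "
        + retention.getRetentionSizeInMB() + " MB");

```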
### Configure dispatch throttling for topics

#### Set dispatch throttling for topics

This command sets the message dispatch rate for all the topics under a given namespace.
The dispatch rate can be restricted by the number of messages per X seconds (`msg-dispatch-rate`) or by the number of message bytes per X seconds (`byte-dispatch-rate`).
The rate period is expressed in seconds and is configured with `dispatch-rate-period`. The default value of both `msg-dispatch-rate` and `byte-dispatch-rate` is -1, which disables the throttling.

:::note

- If neither `clusterDispatchRate` nor `topicDispatchRate` is configured, dispatch throttling is disabled.
- If `topicDispatchRate` is not configured, `clusterDispatchRate` takes effect.
- If `topicDispatchRate` is configured, `topicDispatchRate` takes effect.

:::

````mdx-code-block

```

$ pulsar-admin namespaces set-dispatch-rate test-tenant/ns1 \
  --msg-dispatch-rate 1000 \
  --byte-dispatch-rate 1048576 \
  --dispatch-rate-period 1

```

{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/dispatchRate|operation/setDispatchRate?version=@pulsar:version_number@}

```java

admin.namespaces().setDispatchRate(namespace, new DispatchRate(1000, 1048576, 1))

```

````

#### Get configured message-rate for topics

This command shows the configured message rate for the namespace (topics under this namespace can dispatch this many messages per second).

````mdx-code-block

```

$ pulsar-admin namespaces get-dispatch-rate test-tenant/ns1

```

```json

{
  "dispatchThrottlingRatePerTopicInMsg" : 1000,
  "dispatchThrottlingRatePerTopicInByte" : 1048576,
  "ratePeriodInSecond" : 1
}

```

{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/dispatchRate|operation/getDispatchRate?version=@pulsar:version_number@}

```java

admin.namespaces().getDispatchRate(namespace)

```

````

### Configure dispatch throttling for subscription

#### Set dispatch throttling for subscription

This command sets the message dispatch rate for all the subscriptions of topics under a given namespace.
The dispatch rate can be restricted by the number of messages per X seconds (`msg-dispatch-rate`) or by the number of message bytes per X seconds (`byte-dispatch-rate`).
The rate period is expressed in seconds and is configured with `dispatch-rate-period`. The default value of both `msg-dispatch-rate` and `byte-dispatch-rate` is -1, which disables the throttling.

````mdx-code-block

```

$ pulsar-admin namespaces set-subscription-dispatch-rate test-tenant/ns1 \
  --msg-dispatch-rate 1000 \
  --byte-dispatch-rate 1048576 \
  --dispatch-rate-period 1

```

{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/subscriptionDispatchRate|operation/setDispatchRate?version=@pulsar:version_number@}

```java

admin.namespaces().setSubscriptionDispatchRate(namespace, new DispatchRate(1000, 1048576, 1))

```

````

#### Get configured message-rate for subscription

This command shows the configured message rate for the namespace (subscriptions under this namespace can dispatch this many messages per second).

````mdx-code-block

```

$ pulsar-admin namespaces get-subscription-dispatch-rate test-tenant/ns1

```

```json

{
  "dispatchThrottlingRatePerTopicInMsg" : 1000,
  "dispatchThrottlingRatePerTopicInByte" : 1048576,
  "ratePeriodInSecond" : 1
}

```

{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/subscriptionDispatchRate|operation/getDispatchRate?version=@pulsar:version_number@}

```java

admin.namespaces().getSubscriptionDispatchRate(namespace)

```

````

### Configure dispatch throttling for replicator

#### Set dispatch throttling for replicator

This command sets the message dispatch rate for all the replicators between replication clusters under a given namespace.
The dispatch rate can be restricted by the number of messages per X seconds (`msg-dispatch-rate`) or by the number of message bytes per X seconds (`byte-dispatch-rate`).
The rate period is expressed in seconds and is configured with `dispatch-rate-period`. The default value of both `msg-dispatch-rate` and `byte-dispatch-rate` is -1, which disables the throttling.
- -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-replicator-dispatch-rate test-tenant/ns1 \ - --msg-dispatch-rate 1000 \ - --byte-dispatch-rate 1048576 \ - --dispatch-rate-period 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/replicatorDispatchRate|operation/setDispatchRate?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setReplicatorDispatchRate(namespace, new DispatchRate(1000, 1048576, 1)) - -``` - - - - -```` - -#### Get configured message-rate for replicator - -It shows configured message-rate for the namespace (topics under this namespace can dispatch this many messages per second) - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-replicator-dispatch-rate test-tenant/ns1 - -``` - -```json - -{ - "dispatchThrottlingRatePerTopicInMsg" : 1000, - "dispatchThrottlingRatePerTopicInByte" : 1048576, - "ratePeriodInSecond" : 1 -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/replicatorDispatchRate|operation/getDispatchRate?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getReplicatorDispatchRate(namespace) - -``` - - - - -```` - -### Configure deduplication snapshot interval - -#### Get deduplication snapshot interval - -It shows configured `deduplicationSnapshotInterval` for a namespace (Each topic under the namespace will take a deduplication snapshot according to this interval) - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-deduplication-snapshot-interval test-tenant/ns1 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/deduplicationSnapshotInterval|operation/getDeduplicationSnapshotInterval?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getDeduplicationSnapshotInterval(namespace) - -``` - - - - -```` - -#### Set deduplication snapshot interval - -Set configured `deduplicationSnapshotInterval` for a namespace. Each topic under the namespace will take a deduplication snapshot according to this interval. -`brokerDeduplicationEnabled` must be set to `true` for this property to take effect. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-deduplication-snapshot-interval test-tenant/ns1 --interval 1000 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/deduplicationSnapshotInterval|operation/setDeduplicationSnapshotInterval?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setDeduplicationSnapshotInterval(namespace, 1000) - -``` - - - - -```` - -#### Remove deduplication snapshot interval - -Remove configured `deduplicationSnapshotInterval` of a namespace (Each topic under the namespace will take a deduplication snapshot according to this interval) - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces remove-deduplication-snapshot-interval test-tenant/ns1 - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/deduplicationSnapshotInterval|operation/deleteDeduplicationSnapshotInterval?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().removeDeduplicationSnapshotInterval(namespace) - -``` - - - - -```` - -### Namespace isolation - -You can use the [Pulsar isolation policy](administration-isolation.md) to allocate resources (broker and bookie) for a namespace. 
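Isolation policies themselves are managed with the `ns-isolation-policy` command group rather than `namespaces`. As a sketch, the following pins namespaces matching a regular expression to a set of primary brokers, with automatic failover to secondary brokers; the cluster name, policy name, and expressions below are placeholders, and the linked isolation guide is the authoritative reference for the flags.

```shell

$ pulsar-admin ns-isolation-policy set test-cluster my-isolation-policy \
  --namespaces 'test-tenant/ns.*' \
  --primary 'broker1-.*' \
  --secondary 'broker2-.*' \
  --auto-failover-policy-type min_available \
  --auto-failover-policy-params min_limit=2,usage_threshold=80

```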
- -### Unload namespaces from a broker - -You can unload a namespace, or a [namespace bundle](reference-terminology.md#namespace-bundle), from the Pulsar [broker](reference-terminology.md#broker) that is currently responsible for it. - -#### pulsar-admin - -Use the [`unload`](reference-pulsar-admin.md#unload) subcommand of the [`namespaces`](reference-pulsar-admin.md#namespaces) command. - -````mdx-code-block - - - -```shell - -$ pulsar-admin namespaces unload my-tenant/my-ns - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace/unload|operation/unloadNamespace?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().unload(namespace) - -``` - - - - -```` diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-non-partitioned-topics.md b/site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-non-partitioned-topics.md deleted file mode 100644 index e6347bb8c363a1..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-non-partitioned-topics.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -id: admin-api-non-partitioned-topics -title: Managing non-partitioned topics -sidebar_label: "Non-partitioned topics" -original_id: admin-api-non-partitioned-topics ---- - -For details of the content, refer to [manage topics](admin-api-topics.md). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-non-persistent-topics.md b/site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-non-persistent-topics.md deleted file mode 100644 index 3126a6494c7153..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-non-persistent-topics.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -id: admin-api-non-persistent-topics -title: Managing non-persistent topics -sidebar_label: "Non-Persistent topics" -original_id: admin-api-non-persistent-topics ---- - -For details of the content, refer to [manage topics](admin-api-topics.md). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-overview.md b/site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-overview.md deleted file mode 100644 index 408f1943fff188..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-overview.md +++ /dev/null @@ -1,144 +0,0 @@ ---- -id: admin-api-overview -title: Pulsar admin interface -sidebar_label: "Overview" -original_id: admin-api-overview ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -The Pulsar admin interface enables you to manage all important entities in a Pulsar instance, such as tenants, topics, and namespaces. - -You can interact with the admin interface via: - -- The `pulsar-admin` CLI tool, which is available in the `bin` folder of your Pulsar installation: - - ```shell - - bin/pulsar-admin - - ``` - - > **Important** - > - > For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more information, see [Pulsar admin doc](/tools/pulsar-admin/). - -- HTTP calls, which are made against the admin {@inject: rest:REST:/} API provided by Pulsar brokers. For some RESTful APIs, they might be redirected to the owner brokers for serving with [`307 Temporary Redirect`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/307), hence the HTTP callers should handle `307 Temporary Redirect`. 
If you use `curl` commands, you should specify `-L` to handle redirections. - - > **Important** - > - > For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. - -- A Java client interface. - - > **Important** - > - > For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](/api/admin/). - -> **The REST API is the admin interface**. Both the `pulsar-admin` CLI tool and the Java client use the REST API. If you implement your own admin interface client, you should use the REST API. - -## Admin setup - -Each of the three admin interfaces (the `pulsar-admin` CLI tool, the {@inject: rest:REST:/} API, and the [Java admin API](/api/admin)) requires some special setup if you have enabled authentication in your Pulsar instance. - -````mdx-code-block - - - -If you have enabled authentication, you need to provide an auth configuration to use the `pulsar-admin` tool. By default, the configuration for the `pulsar-admin` tool is in the [`conf/client.conf`](reference-configuration.md#client) file. The following are the available parameters: - -|Name|Description|Default| -|----|-----------|-------| -|webServiceUrl|The web URL for the cluster.|http://localhost:8080/| -|brokerServiceUrl|The Pulsar protocol URL for the cluster.|pulsar://localhost:6650/| -|authPlugin|The authentication plugin.| | -|authParams|The authentication parameters for the cluster, as a comma-separated string.| | -|useTls|Whether or not TLS authentication will be enforced in the cluster.|false| -|tlsAllowInsecureConnection|Accept untrusted TLS certificate from client.|false| -|tlsTrustCertsFilePath|Path for the trusted TLS certificate file.| | - - - - -You can find details for the REST API exposed by Pulsar brokers in this {@inject: rest:document:/}. - - - - -To use the Java admin API, instantiate a {@inject: javadoc:PulsarAdmin:/admin/org/apache/pulsar/client/admin/PulsarAdmin} object, and specify a URL for a Pulsar broker and a {@inject: javadoc:PulsarAdminBuilder:/admin/org/apache/pulsar/client/admin/PulsarAdminBuilder}. The following is a minimal example using `localhost`: - -```java - -String url = "http://localhost:8080"; -// Pass auth-plugin class fully-qualified name if Pulsar-security enabled -String authPluginClassName = "com.org.MyAuthPluginClass"; -// Pass auth-param if auth-plugin class requires it -String authParams = "param1=value1"; -boolean useTls = false; -boolean tlsAllowInsecureConnection = false; -String tlsTrustCertsFilePath = null; -PulsarAdmin admin = PulsarAdmin.builder() -.authentication(authPluginClassName,authParams) -.serviceHttpUrl(url) -.tlsTrustCertsFilePath(tlsTrustCertsFilePath) -.allowTlsInsecureConnection(tlsAllowInsecureConnection) -.build(); - -``` - -If you use multiple brokers, you can use multi-host like Pulsar service. 
For example, - -```java - -String url = "http://localhost:8080,localhost:8081,localhost:8082"; -// Pass auth-plugin class fully-qualified name if Pulsar-security enabled -String authPluginClassName = "com.org.MyAuthPluginClass"; -// Pass auth-param if auth-plugin class requires it -String authParams = "param1=value1"; -boolean useTls = false; -boolean tlsAllowInsecureConnection = false; -String tlsTrustCertsFilePath = null; -PulsarAdmin admin = PulsarAdmin.builder() -.authentication(authPluginClassName,authParams) -.serviceHttpUrl(url) -.tlsTrustCertsFilePath(tlsTrustCertsFilePath) -.allowTlsInsecureConnection(tlsAllowInsecureConnection) -.build(); - -``` - - - - -```` - -## How to define Pulsar resource names when running Pulsar in Kubernetes -If you run Pulsar Functions or connectors on Kubernetes, you need to follow Kubernetes naming convention to define the names of your Pulsar resources, whichever admin interface you use. - -Kubernetes requires a name that can be used as a DNS subdomain name as defined in [RFC 1123](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names). Pulsar supports more legal characters than Kubernetes naming convention. If you create a Pulsar resource name with special characters that are not supported by Kubernetes (for example, including colons in a Pulsar namespace name), Kubernetes runtime translates the Pulsar object names into Kubernetes resource labels which are in RFC 1123-compliant forms. Consequently, you can run functions or connectors using Kubernetes runtime. The rules for translating Pulsar object names into Kubernetes resource labels are as below: - -- Truncate to 63 characters - -- Replace the following characters with dashes (-): - - - Non-alphanumeric characters - - - Underscores (_) - - - Dots (.) - -- Replace beginning and ending non-alphanumeric characters with 0 - -:::tip - -- If you get an error in translating Pulsar object names into Kubernetes resource labels (for example, you may have a naming collision if your Pulsar object name is too long) or want to customize the translating rules, see [customize Kubernetes runtime](functions-runtime.md#customize-kubernetes-runtime). -- For how to configure Kubernetes runtime, see [here](functions-runtime.md#configure-kubernetes-runtime). - -::: - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-packages.md b/site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-packages.md deleted file mode 100644 index 608dfb7587daff..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-packages.md +++ /dev/null @@ -1,390 +0,0 @@ ---- -id: admin-api-packages -title: Manage packages -sidebar_label: "Packages" -original_id: admin-api-packages ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -> **Important** -> -> This page only shows **some frequently used operations**. -> -> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](/tools/pulsar-admin/). -> -> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. -> -> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](/api/admin/). - -Package managers or package-management systems automatically manage packages in a consistent manner. 
These tools simplify installation tasks, the upgrade process, and deletion operations for users. A package is the minimal unit that a package manager deals with. In Pulsar, packages are organized at the tenant and namespace level to manage Pulsar Functions and Pulsar IO connectors (i.e., sources and sinks).

## What is a package?

A package is a set of elements that the user would like to reuse in later operations. In Pulsar, a package can be a group of functions, sources, and sinks. You can define a package according to your needs.

The package management system in Pulsar stores the data and metadata of each package (as shown in the table below) and tracks the package versions.

|Metadata|Description|
|--|--|
|description|The description of the package.|
|contact|The contact information of a package. For example, an email address of the developer team.|
|create_time|The time when the package is created.|
|modification_time|The time when the package is last modified.|
|properties|A user-defined key/value map to store other information.|

## How to use a package

Packages make it efficient to reuse the same set of functions and IO connectors. For example, you can use the same function, source, and sink in multiple namespaces. The main steps are:

1. Create a package in the package manager by providing the following information: type, tenant, namespace, package name, and version.

   |Component|Description|
   |-|-|
   |type|Specify one of the supported package types: function, sink, and source.|
   |tenant|Specify the tenant where you want to create the package.|
   |namespace|Specify the namespace where you want to create the package.|
   |name|Specify the complete name of the package, using the format `<tenant>/<namespace>/<package name>`.|
   |version|Specify the version of the package using the format `MajorVersion.MinorVersion` in numerals.|

   The information you provide creates a URL for a package, in the format `<type>://<tenant>/<namespace>/<package name>/<version>`.

2. Upload the elements to the package, i.e., the functions, sources, and sinks that you want to use across namespaces.

3. Apply permissions to this package from various namespaces.

Now, you can use the elements you defined in the package by calling this package from within the package manager. The package manager locates it by the URL. For example,

```

sink://public/default/mysql-sink@1.0
function://my-tenant/my-ns/my-function@0.1
source://my-tenant/my-ns/mysql-cdc-source@2.3

```

## Package management in Pulsar

You can use the command line tools, REST API, or the Java client to manage your package resources in Pulsar. More specifically, you can use these tools to [upload](#upload-a-package), [download](#download-a-package), and [delete](#delete-a-package) a package, [get the metadata](#get-the-metadata-of-a-package) and [update the metadata](#update-the-metadata-of-a-package) of a package, [get the versions](#list-all-versions-of-a-package) of a package, and [get all packages of a specific type under a namespace](#list-all-packages-of-a-specific-type-under-a-namespace).

### Upload a package

You can use the following commands to upload a package.

````mdx-code-block

```shell

bin/pulsar-admin packages upload function://public/default/example@v0.1 --path package-file --description package-description

```

{@inject: endpoint|POST|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version|operation/upload?version=@pulsar:version_number@}

Upload a package to the package management service synchronously.
```java

   void upload(PackageMetadata metadata, String packageName, String path) throws PulsarAdminException;

```

Upload a package to the package management service asynchronously.

```java

   CompletableFuture<Void> uploadAsync(PackageMetadata metadata, String packageName, String path);

```

````

### Download a package

You can use the following commands to download a package.

````mdx-code-block

```shell

bin/pulsar-admin packages download function://public/default/example@v0.1 --path package-file

```

{@inject: endpoint|GET|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version|operation/download?version=@pulsar:version_number@}

Download a package from the package management service synchronously.

```java

   void download(String packageName, String path) throws PulsarAdminException;

```

Download a package from the package management service asynchronously.

```java

   CompletableFuture<Void> downloadAsync(String packageName, String path);

```

````

### Delete a package

You can use the following commands to delete a package.

````mdx-code-block

The following command deletes a package of version 0.1.

```shell

bin/pulsar-admin packages delete functions://public/default/example@v0.1

```

{@inject: endpoint|DELETE|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version|operation/delete?version=@pulsar:version_number@}

Delete a specified package synchronously.

```java

   void delete(String packageName) throws PulsarAdminException;

```

Delete a specified package asynchronously.

```java

   CompletableFuture<Void> deleteAsync(String packageName);

```

````

### Get the metadata of a package

You can use the following commands to get the metadata of a package.

````mdx-code-block

```shell

bin/pulsar-admin packages get-metadata function://public/default/test@v1

```

{@inject: endpoint|GET|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version/metadata|operation/getMeta?version=@pulsar:version_number@}

Get the metadata of a package synchronously.

```java

   PackageMetadata getMetadata(String packageName) throws PulsarAdminException;

```

Get the metadata of a package asynchronously.

```java

   CompletableFuture<PackageMetadata> getMetadataAsync(String packageName);

```

````

### Update the metadata of a package

You can use the following commands to update the metadata of a package.

````mdx-code-block

```shell

bin/pulsar-admin packages update-metadata function://public/default/example@v0.1 --description update-description

```

{@inject: endpoint|PUT|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version/metadata|operation/updateMeta?version=@pulsar:version_number@}

Update the metadata of a package synchronously.

```java

   void updateMetadata(String packageName, PackageMetadata metadata) throws PulsarAdminException;

```

Update the metadata of a package asynchronously.

```java

   CompletableFuture<Void> updateMetadataAsync(String packageName, PackageMetadata metadata);

```

````

### List all versions of a package

You can use the following commands to list all versions of a package.
````mdx-code-block

```shell

bin/pulsar-admin packages list-versions type://tenant/namespace/packageName

```

{@inject: endpoint|GET|/admin/v3/packages/:type/:tenant/:namespace/:packageName|operation/listPackageVersion?version=@pulsar:version_number@}

List all versions of a package synchronously.

```java

   List<String> listPackageVersions(String packageName) throws PulsarAdminException;

```

List all versions of a package asynchronously.

```java

   CompletableFuture<List<String>> listPackageVersionsAsync(String packageName);

```

````

### List all packages of a specific type under a namespace

You can use the following commands to list all packages of a specific type under a namespace.

````mdx-code-block

```shell

bin/pulsar-admin packages list --type function public/default

```

{@inject: endpoint|PUT|/admin/v3/packages/:type/:tenant/:namespace|operation/listPackages?version=@pulsar:version_number@}

List all packages of a specific type under a namespace synchronously.

```java

   List<String> listPackages(String type, String namespace) throws PulsarAdminException;

```

List all packages of a specific type under a namespace asynchronously.

```java

   CompletableFuture<List<String>> listPackagesAsync(String type, String namespace);

```

````
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-partitioned-topics.md b/site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-partitioned-topics.md
deleted file mode 100644
index 5ce182282e0324..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-partitioned-topics.md
+++ /dev/null
@@ -1,8 +0,0 @@
---
id: admin-api-partitioned-topics
title: Managing partitioned topics
sidebar_label: "Partitioned topics"
original_id: admin-api-partitioned-topics
---

For details of the content, refer to [manage topics](admin-api-topics.md). \ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-permissions.md b/site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-permissions.md
deleted file mode 100644
index 5ace9d573bdaa2..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-permissions.md
+++ /dev/null
@@ -1,189 +0,0 @@
---
id: admin-api-permissions
title: Managing permissions
sidebar_label: "Permissions"
original_id: admin-api-permissions
---

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````

> **Important**
>
> This page only shows **some frequently used operations**.
>
> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](/tools/pulsar-admin/)
>
> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc.
>
> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](/api/admin/).

Pulsar allows you to grant namespace-level or topic-level permissions to users.

- If you grant a namespace-level permission to a user, then the user can access all the topics under the namespace.

- If you grant a topic-level permission to a user, then the user can access only the topic.

The chapters below demonstrate how to grant namespace-level permissions to users.
For how to grant topic-level permissions to users, see [manage topics](admin-api-topics.md/#grant-permission). - -## Grant permissions - -You can grant permissions to specific roles for lists of operations such as `produce` and `consume`. - -````mdx-code-block - - - -Use the [`grant-permission`](reference-pulsar-admin.md#grant-permission) subcommand and specify a namespace, actions using the `--actions` flag, and a role using the `--role` flag: - -```shell - -$ pulsar-admin namespaces grant-permission test-tenant/ns1 \ - --actions produce,consume \ - --role admin10 - -``` - -Wildcard authorization can be performed when `authorizationAllowWildcardsMatching` is set to `true` in `broker.conf`. - -e.g. - -```shell - -$ pulsar-admin namespaces grant-permission test-tenant/ns1 \ - --actions produce,consume \ - --role 'my.role.*' - -``` - -Then, roles `my.role.1`, `my.role.2`, `my.role.foo`, `my.role.bar`, etc. can produce and consume. - -```shell - -$ pulsar-admin namespaces grant-permission test-tenant/ns1 \ - --actions produce,consume \ - --role '*.role.my' - -``` - -Then, roles `1.role.my`, `2.role.my`, `foo.role.my`, `bar.role.my`, etc. can produce and consume. - -**Note**: A wildcard matching works at **the beginning or end of the role name only**. - -e.g. - -```shell - -$ pulsar-admin namespaces grant-permission test-tenant/ns1 \ - --actions produce,consume \ - --role 'my.*.role' - -``` - -In this case, only the role `my.*.role` has permissions. -Roles `my.1.role`, `my.2.role`, `my.foo.role`, `my.bar.role`, etc. **cannot** produce and consume. - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/permissions/:role|operation/grantPermissionOnNamespace?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().grantPermissionOnNamespace(namespace, role, getAuthActions(actions)); - -``` - - - - -```` - -## Get permissions - -You can see which permissions have been granted to which roles in a namespace. - -````mdx-code-block - - - -Use the [`permissions`](reference-pulsar-admin#permissions) subcommand and specify a namespace: - -```shell - -$ pulsar-admin namespaces permissions test-tenant/ns1 -{ - "admin10": [ - "produce", - "consume" - ] -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/permissions|operation/getPermissions?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getPermissions(namespace); - -``` - - - - -```` - -## Revoke permissions - -You can revoke permissions from specific roles, which means that those roles will no longer have access to the specified namespace. 
- -````mdx-code-block - - - -Use the [`revoke-permission`](reference-pulsar-admin.md#revoke-permission) subcommand and specify a namespace and a role using the `--role` flag: - -```shell - -$ pulsar-admin namespaces revoke-permission test-tenant/ns1 \ - --role admin10 - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/permissions/:role|operation/revokePermissionsOnNamespace?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().revokePermissionsOnNamespace(namespace, role); - -``` - - - - -```` \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-persistent-topics.md b/site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-persistent-topics.md deleted file mode 100644 index 50d135b72f5424..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-persistent-topics.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -id: admin-api-persistent-topics -title: Managing persistent topics -sidebar_label: "Persistent topics" -original_id: admin-api-persistent-topics ---- - -For details of the content, refer to [manage topics](admin-api-topics.md). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-schemas.md b/site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-schemas.md deleted file mode 100644 index 9ffe21f5b0f750..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-schemas.md +++ /dev/null @@ -1,7 +0,0 @@ ---- -id: admin-api-schemas -title: Managing Schemas -sidebar_label: "Schemas" -original_id: admin-api-schemas ---- - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-tenants.md b/site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-tenants.md deleted file mode 100644 index e962ed851e4f0a..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-tenants.md +++ /dev/null @@ -1,242 +0,0 @@ ---- -id: admin-api-tenants -title: Managing Tenants -sidebar_label: "Tenants" -original_id: admin-api-tenants ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -> **Important** -> -> This page only shows **some frequently used operations**. -> -> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](/tools/pulsar-admin/) -> -> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. -> -> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](/api/admin/). - -Tenants, like namespaces, can be managed using the [admin API](admin-api-overview.md). There are currently two configurable aspects of tenants: - -* Admin roles -* Allowed clusters - -## Tenant resources - -### List - -You can list all of the tenants associated with an [instance](reference-terminology.md#instance). - -````mdx-code-block - - - -Use the [`list`](reference-pulsar-admin.md#tenants-list) subcommand. - -```shell - -$ pulsar-admin tenants list -my-tenant-1 -my-tenant-2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/tenants|operation/getTenants?version=@pulsar:version_number@} - - - - -```java - -admin.tenants().getTenants(); - -``` - - - - -```` - -### Create - -You can create a new tenant. 
````mdx-code-block

Use the [`create`](reference-pulsar-admin.md#tenants-create) subcommand:

```shell

$ pulsar-admin tenants create my-tenant

```

When creating a tenant, you can optionally assign admin roles using the `-r`/`--admin-roles`
flag, and clusters using the `-c`/`--allowed-clusters` flag. You can specify multiple values
as a comma-separated list. Here are some examples:

```shell

$ pulsar-admin tenants create my-tenant \
  --admin-roles role1,role2,role3 \
  --allowed-clusters cluster1

$ pulsar-admin tenants create my-tenant \
  -r role1 \
  -c cluster1

```

{@inject: endpoint|PUT|/admin/v2/tenants/:tenant|operation/createTenant?version=@pulsar:version_number@}

```java

admin.tenants().createTenant(tenantName, tenantInfo);

```

````

### Get configuration

You can fetch the [configuration](reference-configuration.md) for an existing tenant at any time.

````mdx-code-block

Use the [`get`](reference-pulsar-admin.md#tenants-get) subcommand and specify the name of the tenant. Here's an example:

```shell

$ pulsar-admin tenants get my-tenant
{
  "adminRoles": [
    "admin1",
    "admin2"
  ],
  "allowedClusters": [
    "cl1",
    "cl2"
  ]
}

```

{@inject: endpoint|GET|/admin/v2/tenants/:tenant|operation/getTenant?version=@pulsar:version_number@}

```java

admin.tenants().getTenantInfo(tenantName);

```

````

### Delete

Tenants can be deleted from a Pulsar [instance](reference-terminology.md#instance).

````mdx-code-block

Use the [`delete`](reference-pulsar-admin.md#tenants-delete) subcommand and specify the name of the tenant.

```shell

$ pulsar-admin tenants delete my-tenant

```

{@inject: endpoint|DELETE|/admin/v2/tenants/:tenant|operation/deleteTenant?version=@pulsar:version_number@}

```java

admin.tenants().deleteTenant(tenantName);

```

````

### Update

You can update a tenant's configuration.

````mdx-code-block

Use the [`update`](reference-pulsar-admin.md#tenants-update) subcommand.

```shell

$ pulsar-admin tenants update my-tenant

```

{@inject: endpoint|POST|/admin/v2/tenants/:tenant|operation/updateTenant?version=@pulsar:version_number@}

```java

admin.tenants().updateTenant(tenantName, tenantInfo);

```

````
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-topics.md b/site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-topics.md
deleted file mode 100644
index ceddd38c0404eb..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/admin-api-topics.md
+++ /dev/null
@@ -1,2472 +0,0 @@
---
id: admin-api-topics
title: Manage topics
sidebar_label: "Topics"
original_id: admin-api-topics
---

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````

> **Important**
>
> This page only shows **some frequently used operations**.
>
> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](/tools/pulsar-admin/)
>
> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc.
>
> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](/api/admin/).

Pulsar has persistent and non-persistent topics.
A persistent topic is a logical endpoint for publishing and consuming messages. The topic name structure for persistent topics is:

```shell

persistent://tenant/namespace/topic

```

Non-persistent topics are used in applications that only consume real-time published messages and do not need a persistence guarantee. Skipping persistence reduces message-publish latency by removing the overhead of persisting messages. The topic name structure for non-persistent topics is:

```shell

non-persistent://tenant/namespace/topic

```

## Manage topic resources

Whether a topic is persistent or non-persistent, you can manage its resources through the `pulsar-admin` tool, the REST API, and Java.

:::note

In the REST API, `:schema` stands for persistent or non-persistent. `:tenant`, `:namespace`, and `:x` are variables; replace them with the real tenant, namespace, and `x` names when using them.
Take {@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getList?version=@pulsar:version_number@} as an example: to get the list of persistent topics in the REST API, use `https://pulsar.apache.org/admin/v2/persistent/my-tenant/my-namespace`. To get the list of non-persistent topics, use `https://pulsar.apache.org/admin/v2/non-persistent/my-tenant/my-namespace`.

:::

### List of topics

You can get the list of topics under a given namespace in the following ways.

````mdx-code-block

```shell

$ pulsar-admin topics list \
  my-tenant/my-namespace

```

{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getList?version=@pulsar:version_number@}

```java

String namespace = "my-tenant/my-namespace";
admin.topics().getList(namespace);

```

````

### Grant permission

You can grant permissions on a client role to perform specific actions on a given topic in the following ways.

````mdx-code-block

```shell

$ pulsar-admin topics grant-permission \
  --actions produce,consume --role application1 \
  persistent://test-tenant/ns1/tp1

```

{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/permissions/:role|operation/grantPermissionsOnTopic?version=@pulsar:version_number@}

```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
String role = "test-role";
Set<AuthAction> actions = Sets.newHashSet(AuthAction.produce, AuthAction.consume);
admin.topics().grantPermission(topic, role, actions);

```

````

### Get permission

You can fetch permissions in the following ways.

````mdx-code-block

```shell

$ pulsar-admin topics permissions \
  persistent://test-tenant/ns1/tp1

{
  "application1": [
    "consume",
    "produce"
  ]
}

```

{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/permissions|operation/getPermissionsOnTopic?version=@pulsar:version_number@}

```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
admin.topics().getPermissions(topic);

```

````
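Since `getPermissions` returns a map from each role to the set of actions granted on the topic (`Map<String, Set<AuthAction>>`), the result can also be inspected programmatically. A brief sketch, assuming the same `admin` client as in the examples above:

```java

String topic = "persistent://test-tenant/ns1/tp1";
Map<String, Set<AuthAction>> permissions = admin.topics().getPermissions(topic);
// Print each role with its granted actions, e.g. "application1 -> [produce, consume]"
permissions.forEach((role, actions) -> System.out.println(role + " -> " + actions));

```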
### Revoke permission

You can revoke a permission granted on a client role in the following ways.

````mdx-code-block

```shell

$ pulsar-admin topics revoke-permission \
  --role application1 \
  persistent://test-tenant/ns1/tp1

{
  "application1": [
    "consume",
    "produce"
  ]
}

```

{@inject: endpoint|DELETE|/admin/v2/:schema/:tenant/:namespace/:topic/permissions/:role|operation/revokePermissionsOnTopic?version=@pulsar:version_number@}

```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
String role = "test-role";
admin.topics().revokePermissions(topic, role);

```

````

### Delete topic

You can delete a topic in the following ways. You cannot delete a topic if any active subscriptions or producers are connected to it.

````mdx-code-block

```shell

$ pulsar-admin topics delete \
  persistent://test-tenant/ns1/tp1

```

{@inject: endpoint|DELETE|/admin/v2/:schema/:tenant/:namespace/:topic|operation/deleteTopic?version=@pulsar:version_number@}

```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
admin.topics().delete(topic);

```

````

### Unload topic

You can unload a topic in the following ways.

````mdx-code-block

```shell

$ pulsar-admin topics unload \
  persistent://test-tenant/ns1/tp1

```

{@inject: endpoint|PUT|/admin/v2/:schema/:tenant/:namespace/:topic/unload|operation/unloadTopic?version=@pulsar:version_number@}

```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
admin.topics().unload(topic);

```

````

### Get stats

You can check the following statistics of a given non-partitioned topic.

  - **msgRateIn**: The sum of all local and replication publishers' publish rates (msg/s).

  - **msgThroughputIn**: The sum of all local and replication publishers' publish rates (bytes/s).

  - **msgRateOut**: The sum of all local and replication consumers' dispatch rates (msg/s).

  - **msgThroughputOut**: The sum of all local and replication consumers' dispatch rates (bytes/s).

  - **averageMsgSize**: The average size (in bytes) of messages published within the last interval.

  - **storageSize**: The sum of the ledgers' storage size for this topic. The space used to store the messages for the topic.

  - **earliestMsgPublishTimeInBacklogs**: The publish time of the earliest message in the backlog (ms).

  - **bytesInCounter**: Total bytes published to the topic.

  - **msgInCounter**: Total messages published to the topic.

  - **bytesOutCounter**: Total bytes delivered to consumers.

  - **msgOutCounter**: Total messages delivered to consumers.

  - **msgChunkPublished**: Whether the topic has had chunked messages published on it.

  - **backlogSize**: Estimated total unconsumed or backlog size (in bytes).

  - **offloadedStorageSize**: Space used to store the offloaded messages for the topic (in bytes).

  - **waitingPublishers**: The number of publishers waiting in a queue in exclusive access mode.

  - **deduplicationStatus**: The status of message deduplication for the topic.

  - **topicEpoch**: The topic epoch, or empty if not set.

  - **nonContiguousDeletedMessagesRanges**: The number of non-contiguous deleted messages ranges.

  - **nonContiguousDeletedMessagesRangesSerializedSize**: The serialized size of non-contiguous deleted messages ranges.

  - **publishers**: The list of all local publishers into the topic. The list ranges from zero to thousands.

  - **accessMode**: The type of access to the topic that the producer requires.
- - - **msgRateIn**: The total rate of messages (msg/s) published by this publisher. - - - **msgThroughputIn**: The total throughput (bytes/s) of the messages published by this publisher. - - - **averageMsgSize**: The average message size in bytes from this publisher within the last interval. - - - **chunkedMessageRate**: The total rate of chunked messages published by this publisher. - - - **producerId**: The internal identifier for this producer on this topic. - - - **producerName**: The internal identifier for this producer, generated by the client library. - - - **address**: The IP address and source port for the connection of this producer. - - - **connectedSince**: The timestamp when this producer is created or reconnected last time. - - - **clientVersion**: The client library version of this producer. - - - **metadata**: Metadata (key/value strings) associated with this publisher. - - - **subscriptions**: The list of all local subscriptions to the topic. - - - **my-subscription**: The name of this subscription. It is defined by the client. - - - **msgRateOut**: The total rate of messages (msg/s) delivered on this subscription. - - - **msgThroughputOut**: The total throughput (bytes/s) delivered on this subscription. - - - **msgBacklog**: The number of messages in the subscription backlog. - - - **type**: The subscription type. - - - **msgRateExpired**: The rate at which messages were discarded instead of dispatched from this subscription due to TTL. - - - **lastExpireTimestamp**: The timestamp of the last message expire execution. - - - **lastConsumedFlowTimestamp**: The timestamp of the last flow command received. - - - **lastConsumedTimestamp**: The latest timestamp of all the consumed timestamp of the consumers. - - - **lastAckedTimestamp**: The latest timestamp of all the acked timestamp of the consumers. - - - **bytesOutCounter**: Total bytes delivered to consumer. - - - **msgOutCounter**: Total messages delivered to consumer. - - - **msgRateRedeliver**: Total rate of messages redelivered on this subscription (msg/s). - - - **chunkedMessageRate**: Chunked message dispatch rate. - - - **backlogSize**: Size of backlog for this subscription (in bytes). - - - **earliestMsgPublishTimeInBacklog**: The publish time of the earliest message in the backlog for the subscription (ms). - - - **msgBacklogNoDelayed**: Number of messages in the subscription backlog that do not contain the delay messages. - - - **blockedSubscriptionOnUnackedMsgs**: Flag to verify if a subscription is blocked due to reaching threshold of unacked messages. - - - **msgDelayed**: Number of delayed messages currently being tracked. - - - **unackedMessages**: Number of unacknowledged messages for the subscription, where an unacknowledged message is one that has been sent to a consumer but not yet acknowledged. This field is only meaningful when using a subscription that tracks individual message acknowledgement. - - - **activeConsumerName**: The name of the consumer that is active for single active consumer subscriptions. For example, failover or exclusive. - - - **totalMsgExpired**: Total messages expired on this subscription. - - - **lastMarkDeleteAdvancedTimestamp**: Last MarkDelete position advanced timestamp. - - - **durable**: Whether the subscription is durable or ephemeral (for example, from a reader). - - - **replicated**: Mark that the subscription state is kept in sync across different regions. - - - **allowOutOfOrderDelivery**: Whether out of order delivery is allowed on the Key_Shared subscription. 
- - - **keySharedMode**: Whether the Key_Shared subscription mode is AUTO_SPLIT or STICKY. - - - **consumersAfterMarkDeletePosition**: This is for Key_Shared subscription to get the recentJoinedConsumers in the Key_Shared subscription. - - - **nonContiguousDeletedMessagesRanges**: The number of non-contiguous deleted messages ranges. - - - **nonContiguousDeletedMessagesRangesSerializedSize**: The serialized size of non-contiguous deleted messages ranges. - - - **consumers**: The list of connected consumers for this subscription. - - - **msgRateOut**: The total rate of messages (msg/s) delivered to the consumer. - - - **msgThroughputOut**: The total throughput (bytes/s) delivered to the consumer. - - - **consumerName**: The internal identifier for this consumer, generated by the client library. - - - **availablePermits**: The number of messages that the consumer has space for in the client library's listen queue. `0` means the client library's queue is full and `receive()` isn't being called. A non-zero value means this consumer is ready for dispatched messages. - - - **unackedMessages**: The number of unacknowledged messages for the consumer, where an unacknowledged message is one that has been sent to the consumer but not yet acknowledged. This field is only meaningful when using a subscription that tracks individual message acknowledgement. - - - **blockedConsumerOnUnackedMsgs**: The flag used to verify if the consumer is blocked due to reaching threshold of the unacknowledged messages. - - - **lastConsumedTimestamp**: The timestamp when the consumer reads a message the last time. - - - **lastAckedTimestamp**: The timestamp when the consumer acknowledges a message the last time. - - - **address**: The IP address and source port for the connection of this consumer. - - - **connectedSince**: The timestamp when this consumer is created or reconnected last time. - - - **clientVersion**: The client library version of this consumer. - - - **bytesOutCounter**: Total bytes delivered to consumer. - - - **msgOutCounter**: Total messages delivered to consumer. - - - **msgRateRedeliver**: Total rate of messages redelivered by this consumer (msg/s). - - - **chunkedMessageRate**: The total rate of chunked messages delivered to this consumer. - - - **avgMessagesPerEntry**: Number of average messages per entry for the consumer consumed. - - - **readPositionWhenJoining**: The read position of the cursor when the consumer joining. - - - **keyHashRanges**: Hash ranges assigned to this consumer if is Key_Shared sub mode. - - - **metadata**: Metadata (key/value strings) associated with this consumer. - - - **replication**: This section gives the stats for cross-colo replication of this topic - - - **msgRateIn**: The total rate (msg/s) of messages received from the remote cluster. - - - **msgThroughputIn**: The total throughput (bytes/s) received from the remote cluster. - - - **msgRateOut**: The total rate of messages (msg/s) delivered to the replication-subscriber. - - - **msgThroughputOut**: The total throughput (bytes/s) delivered to the replication-subscriber. - - - **msgRateExpired**: The total rate of messages (msg/s) expired. - - - **replicationBacklog**: The number of messages pending to be replicated to remote cluster. - - - **connected**: Whether the outbound replicator is connected. - - - **replicationDelayInSeconds**: How long the oldest message has been waiting to be sent through the connection, if connected is `true`. 
- - - **inboundConnection**: The IP and port of the broker in the remote cluster's publisher connection to this broker. - - - **inboundConnectedSince**: The TCP connection being used to publish messages to the remote cluster. If there are no local publishers connected, this connection is automatically closed after a minute. - - - **outboundConnection**: The address of the outbound replication connection. - - - **outboundConnectedSince**: The timestamp of establishing outbound connection. - -The following is an example of a topic status. - -```json - -{ - "msgRateIn" : 0.0, - "msgThroughputIn" : 0.0, - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "bytesInCounter" : 504, - "msgInCounter" : 9, - "bytesOutCounter" : 2296, - "msgOutCounter" : 41, - "averageMsgSize" : 0.0, - "msgChunkPublished" : false, - "storageSize" : 504, - "backlogSize" : 0, - "earliestMsgPublishTimeInBacklogs": 0, - "offloadedStorageSize" : 0, - "publishers" : [ { - "accessMode" : "Shared", - "msgRateIn" : 0.0, - "msgThroughputIn" : 0.0, - "averageMsgSize" : 0.0, - "chunkedMessageRate" : 0.0, - "producerId" : 0, - "metadata" : { }, - "address" : "/127.0.0.1:65402", - "connectedSince" : "2021-06-09T17:22:55.913+08:00", - "clientVersion" : "2.9.0-SNAPSHOT", - "producerName" : "standalone-1-0" - } ], - "waitingPublishers" : 0, - "subscriptions" : { - "sub-demo" : { - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "bytesOutCounter" : 2296, - "msgOutCounter" : 41, - "msgRateRedeliver" : 0.0, - "chunkedMessageRate" : 0, - "msgBacklog" : 0, - "backlogSize" : 0, - "earliestMsgPublishTimeInBacklog": 0, - "msgBacklogNoDelayed" : 0, - "blockedSubscriptionOnUnackedMsgs" : false, - "msgDelayed" : 0, - "unackedMessages" : 0, - "type" : "Exclusive", - "activeConsumerName" : "20b81", - "msgRateExpired" : 0.0, - "totalMsgExpired" : 0, - "lastExpireTimestamp" : 0, - "lastConsumedFlowTimestamp" : 1623230565356, - "lastConsumedTimestamp" : 1623230583946, - "lastAckedTimestamp" : 1623230584033, - "lastMarkDeleteAdvancedTimestamp" : 1623230584033, - "consumers" : [ { - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "bytesOutCounter" : 2296, - "msgOutCounter" : 41, - "msgRateRedeliver" : 0.0, - "chunkedMessageRate" : 0.0, - "consumerName" : "20b81", - "availablePermits" : 959, - "unackedMessages" : 0, - "avgMessagesPerEntry" : 314, - "blockedConsumerOnUnackedMsgs" : false, - "lastAckedTimestamp" : 1623230584033, - "lastConsumedTimestamp" : 1623230583946, - "metadata" : { }, - "address" : "/127.0.0.1:65172", - "connectedSince" : "2021-06-09T17:22:45.353+08:00", - "clientVersion" : "2.9.0-SNAPSHOT" - } ], - "allowOutOfOrderDelivery": false, - "consumersAfterMarkDeletePosition" : { }, - "nonContiguousDeletedMessagesRanges" : 0, - "nonContiguousDeletedMessagesRangesSerializedSize" : 0, - "durable" : true, - "replicated" : false - } - }, - "replication" : { }, - "deduplicationStatus" : "Disabled", - "nonContiguousDeletedMessagesRanges" : 0, - "nonContiguousDeletedMessagesRangesSerializedSize" : 0 -} - -``` - -To get the status of a topic, you can use the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics stats \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/stats|operation/getStats?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().getStats(topic); - -``` - - - - -```` - -### Get internal stats - -You can get the detailed statistics of a topic. 
- - - **entriesAddedCounter**: Messages published since this broker loaded this topic.
- - - **numberOfEntries**: The total number of messages being tracked.
- - - **totalSize**: The total storage size in bytes of all messages.
- - - **currentLedgerEntries**: The count of messages written to the ledger that is currently open for writing.
- - - **currentLedgerSize**: The size in bytes of messages written to the ledger that is currently open for writing.
- - - **lastLedgerCreatedTimestamp**: The time when the last ledger was created.
- - - **lastLedgerCreationFailureTimestamp**: The time when the last ledger creation failed.
- - - **waitingCursorsCount**: The number of cursors that are "caught up" and waiting for a new message to be published.
- - - **pendingAddEntriesCount**: The number of messages whose (asynchronous) write requests are pending completion.
- - - **lastConfirmedEntry**: The ledgerid:entryid of the last message that was written successfully. If the entryid is `-1`, the ledger is open but no entries have been written yet.
- - - **state**: The state of this ledger for writing. The state `LedgerOpened` means that a ledger is open for saving published messages.
- - - **ledgers**: The ordered list of all ledgers holding messages for this topic.
- - - **ledgerId**: The ID of this ledger.
- - - **entries**: The total number of entries that belong to this ledger.
- - - **size**: The size of messages written to this ledger (in bytes).
- - - **offloaded**: Whether this ledger is offloaded.
- - - **metadata**: The ledger metadata.
- - - **schemaLedgers**: The ordered list of all ledgers for this topic schema.
- - - **ledgerId**: The ID of this ledger.
- - - **entries**: The total number of entries that belong to this ledger.
- - - **size**: The size of messages written to this ledger (in bytes).
- - - **offloaded**: Whether this ledger is offloaded.
- - - **metadata**: The ledger metadata.
- - - **compactedLedger**: The ledgers holding un-acked messages after topic compaction.
- - - **ledgerId**: The ID of this ledger.
- - - **entries**: The total number of entries that belong to this ledger.
- - - **size**: The size of messages written to this ledger (in bytes).
- - - **offloaded**: Whether this ledger is offloaded. The value is `false` for the compacted topic ledger.
- - - **cursors**: The list of all cursors on this topic. Each subscription in the topic stats has a cursor.
- - - **markDeletePosition**: All messages before the markDeletePosition are acknowledged by the subscriber.
- - - **readPosition**: The latest position from which the subscriber reads messages.
- - - **waitingReadOp**: This is true when the subscription has read the latest message published to the topic and is waiting for new messages to be published.
- - - **pendingReadOps**: The number of outstanding read requests to BookKeeper currently in progress.
- - - **messagesConsumedCounter**: The number of messages this cursor has acked since this broker loaded this topic.
- - - **cursorLedger**: The ledger being used to persistently store the current markDeletePosition.
- - - **cursorLedgerLastEntry**: The last entryid used to persistently store the current markDeletePosition.
- - - **individuallyDeletedMessages**: If acknowledgments are done out of order, this shows the ranges of messages acknowledged between the markDeletePosition and the read position.
- - - **lastLedgerSwitchTimestamp**: The last time the cursor ledger was rolled over.
- - - **state**: The state of the cursor ledger: `Open` means you have a cursor ledger for saving updates of the markDeletePosition. - -The following is an example of the detailed statistics of a topic. - -```json - -{ - "entriesAddedCounter":0, - "numberOfEntries":0, - "totalSize":0, - "currentLedgerEntries":0, - "currentLedgerSize":0, - "lastLedgerCreatedTimestamp":"2021-01-22T21:12:14.868+08:00", - "lastLedgerCreationFailureTimestamp":null, - "waitingCursorsCount":0, - "pendingAddEntriesCount":0, - "lastConfirmedEntry":"3:-1", - "state":"LedgerOpened", - "ledgers":[ - { - "ledgerId":3, - "entries":0, - "size":0, - "offloaded":false, - "metadata":null - } - ], - "cursors":{ - "test":{ - "markDeletePosition":"3:-1", - "readPosition":"3:-1", - "waitingReadOp":false, - "pendingReadOps":0, - "messagesConsumedCounter":0, - "cursorLedger":4, - "cursorLedgerLastEntry":1, - "individuallyDeletedMessages":"[]", - "lastLedgerSwitchTimestamp":"2021-01-22T21:12:14.966+08:00", - "state":"Open", - "numberOfEntriesSinceFirstNotAckedMessage":0, - "totalNonContiguousDeletedMessagesRange":0, - "properties":{ - - } - } - }, - "schemaLedgers":[ - { - "ledgerId":1, - "entries":11, - "size":10, - "offloaded":false, - "metadata":null - } - ], - "compactedLedger":{ - "ledgerId":-1, - "entries":-1, - "size":-1, - "offloaded":false, - "metadata":null - } -} - -``` - -To get the internal status of a topic, you can use the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics stats-internal \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/internalStats|operation/getInternalStats?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().getInternalStats(topic); - -``` - - - - -```` - -### Peek messages - -You can peek a number of messages for a specific subscription of a given topic in the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics peek-messages \ - --count 10 --subscription my-subscription \ - persistent://test-tenant/ns1/tp1 \ - -Message ID: 315674752:0 -Properties: { "X-Pulsar-publish-time" : "2015-07-13 17:40:28.451" } -msg-payload - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/position/:messagePosition|operation/peekNthMessage?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String subName = "my-subscription"; -int numMessages = 1; -admin.topics().peekMessages(topic, subName, numMessages); - -``` - - - - -```` - -### Get message by ID - -You can fetch the message with the given ledger ID and entry ID in the following ways. - -````mdx-code-block - - - -```shell - -$ ./bin/pulsar-admin topics get-message-by-id \ - persistent://public/default/my-topic \ - -l 10 -e 0 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/ledger/:ledgerId/entry/:entryId|operation/getMessageById?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -long ledgerId = 10; -long entryId = 10; -admin.topics().getMessageById(topic, ledgerId, entryId); - -``` - - - - -```` - -### Examine messages - -You can examine a specific message on a topic by position relative to the earliest or the latest message. 
- -````mdx-code-block - - - -```shell - -./bin/pulsar-admin topics examine-messages \ - persistent://public/default/my-topic \ - -i latest -m 1 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/examinemessage?initialPosition=:initialPosition&messagePosition=:messagePosition|operation/examineMessage?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().examineMessage(topic, "latest", 1); - -``` - - - - -```` - -### Get message ID - -You can get message ID published at or just after the given datetime. - -````mdx-code-block - - - -```shell - -./bin/pulsar-admin topics get-message-id \ - persistent://public/default/my-topic \ - -d 2021-06-28T19:01:17Z - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/messageid/:timestamp|operation/getMessageIdByTimestamp?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -long timestamp = System.currentTimeMillis() -admin.topics().getMessageIdByTimestamp(topic, timestamp); - -``` - - - - -```` - - -### Skip messages - -You can skip a number of messages for a specific subscription of a given topic in the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics skip \ - --count 10 --subscription my-subscription \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/skip/:numMessages|operation/skipMessages?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String subName = "my-subscription"; -int numMessages = 1; -admin.topics().skipMessages(topic, subName, numMessages); - -``` - - - - -```` - -### Skip all messages - -You can skip all the old messages for a specific subscription of a given topic. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics skip-all \ - --subscription my-subscription \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/skip_all|operation/skipAllMessages?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String subName = "my-subscription"; -admin.topics().skipAllMessages(topic, subName); - -``` - - - - -```` - -### Reset cursor - -You can reset a subscription cursor position back to the position which is recorded X minutes before. It essentially calculates time and position of cursor at X minutes before and resets it at that position. You can reset the cursor in the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics reset-cursor \ - --subscription my-subscription --time 10 \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/resetcursor/:timestamp|operation/resetCursor?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String subName = "my-subscription"; -long timestamp = 2342343L; -admin.topics().resetCursor(topic, subName, timestamp); - -``` - - - - -```` - -### Look up topic's owner broker - -You can locate the owner broker of the given topic in the following ways. 
- -````mdx-code-block - - - -```shell - -$ pulsar-admin topics lookup \ - persistent://test-tenant/ns1/tp1 \ - - "pulsar://broker1.org.com:4480" - -``` - - - - -{@inject: endpoint|GET|/lookup/v2/topic/:schema/:tenant:namespace/:topic|/?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.lookup().lookupDestination(topic); - -``` - - - - -```` - -### Look up partitioned topic's owner broker - -You can locate the owner broker of the given partitioned topic in the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics partitioned-lookup \ - persistent://test-tenant/ns1/my-topic \ - - "persistent://test-tenant/ns1/my-topic-partition-0 pulsar://localhost:6650" - "persistent://test-tenant/ns1/my-topic-partition-1 pulsar://localhost:6650" - "persistent://test-tenant/ns1/my-topic-partition-2 pulsar://localhost:6650" - "persistent://test-tenant/ns1/my-topic-partition-3 pulsar://localhost:6650" - -``` - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.lookup().lookupPartitionedTopic(topic); - -``` - -Lookup the partitioned topics sorted by broker URL - -```shell - -$ pulsar-admin topics partitioned-lookup \ - persistent://test-tenant/ns1/my-topic --sort-by-broker \ - - "pulsar://localhost:6650 [persistent://test-tenant/ns1/my-topic-partition-0, persistent://test-tenant/ns1/my-topic-partition-1, persistent://test-tenant/ns1/my-topic-partition-2, persistent://test-tenant/ns1/my-topic-partition-3]" - -``` - - - - -```` - -### Get bundle - -You can get the range of the bundle that the given topic belongs to in the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics bundle-range \ - persistent://test-tenant/ns1/tp1 \ - - "0x00000000_0xffffffff" - -``` - - - - -{@inject: endpoint|GET|/lookup/v2/topic/:topic_domain/:tenant/:namespace/:topic/bundle|/?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.lookup().getBundleRange(topic); - -``` - - - - -```` - -### Get subscriptions - -You can check all subscription names for a given topic in the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics subscriptions \ - persistent://test-tenant/ns1/tp1 \ - - my-subscription - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/subscriptions|operation/getSubscriptions?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().getSubscriptions(topic); - -``` - - - - -```` - -### Last Message Id - -You can get the last committed message ID for a persistent topic. It is available since 2.3.0 release. - -````mdx-code-block - - - -```shell - -pulsar-admin topics last-message-id topic-name - -``` - - - - -{@inject: endpoint|Get|/admin/v2/:schema/:tenant/:namespace/:topic/lastMessageId|operation/getLastMessageId?version=@pulsar:version_number@} - - - - -```Java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().getLastMessage(topic); - -``` - - - - -```` - -### Get backlog size - -You can get the backlog size of a single partition topic or a non-partitioned topic with a given message ID (in bytes). 
- -````mdx-code-block - - - -```shell - -$ pulsar-admin topics get-backlog-size \ - -m 1:1 \ - persistent://test-tenant/ns1/tp1-partition-0 \ - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/:schema/:tenant/:namespace/:topic/backlogSize|operation/getBacklogSizeByMessageId?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -MessageId messageId = MessageId.earliest; -admin.topics().getBacklogSizeByMessageId(topic, messageId); - -``` - - - - -```` - - -### Configure deduplication snapshot interval - -#### Get deduplication snapshot interval - -To get the topic-level deduplication snapshot interval, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics get-deduplication-snapshot-interval options - -``` - - - - -{@inject: endpoint|GET|/admin/v2/topics/:tenant/:namespace/:topic/deduplicationSnapshotInterval|operation/getDeduplicationSnapshotInterval?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getDeduplicationSnapshotInterval(topic) - -``` - - - - -```` - -#### Set deduplication snapshot interval - -To set the topic-level deduplication snapshot interval, use one of the following methods. - -> **Prerequisite** `brokerDeduplicationEnabled` must be set to `true`. - -````mdx-code-block - - - -``` - -pulsar-admin topics set-deduplication-snapshot-interval options - -``` - - - - -{@inject: endpoint|POST|/admin/v2/topics/:tenant/:namespace/:topic/deduplicationSnapshotInterval|operation/setDeduplicationSnapshotInterval?version=@pulsar:version_number@} - - - - -```java - -admin.topics().setDeduplicationSnapshotInterval(topic, 1000) - -``` - - - - -```` - -#### Remove deduplication snapshot interval - -To remove the topic-level deduplication snapshot interval, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics remove-deduplication-snapshot-interval options - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/topics/:tenant/:namespace/:topic/deduplicationSnapshotInterval|operation/deleteDeduplicationSnapshotInterval?version=@pulsar:version_number@} - - - - -```java - -admin.topics().removeDeduplicationSnapshotInterval(topic) - -``` - - - - -```` - - -### Configure inactive topic policies - -#### Get inactive topic policies - -To get the topic-level inactive topic policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics get-inactive-topic-policies options - -``` - - - - -{@inject: endpoint|GET|/admin/v2/topics/:tenant/:namespace/:topic/inactiveTopicPolicies|operation/getInactiveTopicPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getInactiveTopicPolicies(topic) - -``` - - - - -```` - -#### Set inactive topic policies - -To set the topic-level inactive topic policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics set-inactive-topic-policies options - -``` - - - - -{@inject: endpoint|POST|/admin/v2/topics/:tenant/:namespace/:topic/inactiveTopicPolicies|operation/setInactiveTopicPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().setInactiveTopicPolicies(topic, inactiveTopicPolicies) - -``` - - - - -```` - -#### Remove inactive topic policies - -To remove the topic-level inactive topic policies, use one of the following methods. 
- -````mdx-code-block - - - -``` - -pulsar-admin topics remove-inactive-topic-policies options - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/topics/:tenant/:namespace/:topic/inactiveTopicPolicies|operation/removeInactiveTopicPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().removeInactiveTopicPolicies(topic) - -``` - - - - -```` - - -### Configure offload policies - -#### Get offload policies - -To get the topic-level offload policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics get-offload-policies options - -``` - - - - -{@inject: endpoint|GET|/admin/v2/topics/:tenant/:namespace/:topic/offloadPolicies|operation/getOffloadPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getOffloadPolicies(topic) - -``` - - - - -```` - -#### Set offload policies - -To set the topic-level offload policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics set-offload-policies options - -``` - - - - -{@inject: endpoint|POST|/admin/v2/topics/:tenant/:namespace/:topic/offloadPolicies|operation/setOffloadPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().setOffloadPolicies(topic, offloadPolicies) - -``` - - - - -```` - -#### Remove offload policies - -To remove the topic-level offload policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics remove-offload-policies options - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/topics/:tenant/:namespace/:topic/offloadPolicies|operation/removeOffloadPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().removeOffloadPolicies(topic) - -``` - - - - -```` - - -## Manage non-partitioned topics -You can use Pulsar [admin API](admin-api-overview.md) to create, delete and check status of non-partitioned topics. - -### Create -Non-partitioned topics must be explicitly created. When creating a new non-partitioned topic, you need to provide a name for the topic. - -By default, 60 seconds after creation, topics are considered inactive and deleted automatically to avoid generating trash data. To disable this feature, set `brokerDeleteInactiveTopicsEnabled` to `false`. To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to a specific value. - -For more information about the two parameters, see [here](reference-configuration.md#broker). - -You can create non-partitioned topics in the following ways. -````mdx-code-block - - - -When you create non-partitioned topics with the [`create`](reference-pulsar-admin.md#create-3) command, you need to specify the topic name as an argument. - -```shell - -$ bin/pulsar-admin topics create \ - persistent://my-tenant/my-namespace/my-topic - -``` - -:::note - -When you create a non-partitioned topic with the suffix '-partition-' followed by numeric value like 'xyz-topic-partition-x' for the topic name, if a partitioned topic with same suffix 'xyz-topic-partition-y' exists, then the numeric value(x) for the non-partitioned topic must be larger than the number of partitions(y) of the partitioned topic. Otherwise, you cannot create such a non-partitioned topic. 
- -::: - - - - -{@inject: endpoint|PUT|/admin/v2/:schema/:tenant/:namespace/:topic|operation/createNonPartitionedTopic?version=@pulsar:version_number@} - - - - -```java - -String topicName = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().createNonPartitionedTopic(topicName); - -``` - - - - -```` - -### Delete -You can delete non-partitioned topics in the following ways. -````mdx-code-block - - - -```shell - -$ bin/pulsar-admin topics delete \ - persistent://my-tenant/my-namespace/my-topic - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/:schema/:tenant/:namespace/:topic|operation/deleteTopic?version=@pulsar:version_number@} - - - - -```java - -admin.topics().delete(topic); - -``` - - - - -```` - -### List - -You can get the list of topics under a given namespace in the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics list tenant/namespace -persistent://tenant/namespace/topic1 -persistent://tenant/namespace/topic2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getList?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getList(namespace); - -``` - - - - -```` - -### Stats - -You can check the current statistics of a given topic. The following is an example. For description of each stats, refer to [get stats](#get-stats). - -```json - -{ - "msgRateIn": 4641.528542257553, - "msgThroughputIn": 44663039.74947473, - "msgRateOut": 0, - "msgThroughputOut": 0, - "averageMsgSize": 1232439.816728665, - "storageSize": 135532389160, - "publishers": [ - { - "msgRateIn": 57.855383881403576, - "msgThroughputIn": 558994.7078932219, - "averageMsgSize": 613135, - "producerId": 0, - "producerName": null, - "address": null, - "connectedSince": null - } - ], - "subscriptions": { - "my-topic_subscription": { - "msgRateOut": 0, - "msgThroughputOut": 0, - "msgBacklog": 116632, - "type": null, - "msgRateExpired": 36.98245516804671, - "consumers": [] - } - }, - "replication": {} -} - -``` - -You can check the current statistics of a given topic and its connected producers and consumers in the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics stats \ - persistent://test-tenant/namespace/topic \ - --get-precise-backlog - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/stats|operation/getStats?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getStats(topic, false /* is precise backlog */); - -``` - - - - -```` - -## Manage partitioned topics -You can use Pulsar [admin API](admin-api-overview.md) to create, update, delete and check status of partitioned topics. - -### Create - -Partitioned topics must be explicitly created. When creating a new partitioned topic, you need to provide a name and the number of partitions for the topic. - -By default, 60 seconds after creation, topics are considered inactive and deleted automatically to avoid generating trash data. To disable this feature, set `brokerDeleteInactiveTopicsEnabled` to `false`. To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to a specific value. - -For more information about the two parameters, see [here](reference-configuration.md#broker). - -You can create partitioned topics in the following ways. 
-````mdx-code-block - - - -When you create partitioned topics with the [`create-partitioned-topic`](reference-pulsar-admin.md#create-partitioned-topic) -command, you need to specify the topic name as an argument and the number of partitions using the `-p` or `--partitions` flag. - -```shell - -$ bin/pulsar-admin topics create-partitioned-topic \ - persistent://my-tenant/my-namespace/my-topic \ - --partitions 4 - -``` - -:::note - -If a non-partitioned topic with the suffix '-partition-' followed by a numeric value like 'xyz-topic-partition-10', you can not create a partitioned topic with name 'xyz-topic', because the partitions of the partitioned topic could override the existing non-partitioned topic. To create such partitioned topic, you have to delete that non-partitioned topic first. - -::: - - - - -{@inject: endpoint|PUT|/admin/v2/:schema/:tenant/:namespace/:topic/partitions|operation/createPartitionedTopic?version=@pulsar:version_number@} - - - - -```java - -String topicName = "persistent://my-tenant/my-namespace/my-topic"; -int numPartitions = 4; -admin.topics().createPartitionedTopic(topicName, numPartitions); - -``` - - - - -```` - -### Create missed partitions - -When topic auto-creation is disabled, and you have a partitioned topic without any partitions, you can use the [`create-missed-partitions`](reference-pulsar-admin.md#create-missed-partitions) command to create partitions for the topic. - -````mdx-code-block - - - -You can create missed partitions with the [`create-missed-partitions`](reference-pulsar-admin.md#create-missed-partitions) command and specify the topic name as an argument. - -```shell - -$ bin/pulsar-admin topics create-missed-partitions \ - persistent://my-tenant/my-namespace/my-topic \ - -``` - - - - -{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic|operation/createMissedPartitions?version=@pulsar:version_number@} - - - - -```java - -String topicName = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().createMissedPartitions(topicName); - -``` - - - - -```` - -### Get metadata - -Partitioned topics are associated with metadata, you can view it as a JSON object. The following metadata field is available. - -Field | Description -:-----|:------- -`partitions` | The number of partitions into which the topic is divided. - -````mdx-code-block - - - -You can check the number of partitions in a partitioned topic with the [`get-partitioned-topic-metadata`](reference-pulsar-admin.md#get-partitioned-topic-metadata) subcommand. - -```shell - -$ pulsar-admin topics get-partitioned-topic-metadata \ - persistent://my-tenant/my-namespace/my-topic -{ - "partitions": 4 -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/partitions|operation/getPartitionedMetadata?version=@pulsar:version_number@} - - - - -```java - -String topicName = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().getPartitionedTopicMetadata(topicName); - -``` - - - - -```` - -### Update - -You can update the number of partitions for an existing partitioned topic *if* the topic is non-global. However, you can only add the partition number. Decrementing the number of partitions would delete the topic, which is not supported in Pulsar. - -Producers and consumers can find the newly created partitions automatically. - -````mdx-code-block - - - -You can update partitioned topics with the [`update-partitioned-topic`](reference-pulsar-admin.md#update-partitioned-topic) command. 
- -```shell - -$ pulsar-admin topics update-partitioned-topic \ - persistent://my-tenant/my-namespace/my-topic \ - --partitions 8 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:cluster/:namespace/:destination/partitions|operation/updatePartitionedTopic?version=@pulsar:version_number@} - - - - -```java - -admin.topics().updatePartitionedTopic(topic, numPartitions); - -``` - - - - -```` - -### Delete -You can delete partitioned topics with the [`delete-partitioned-topic`](reference-pulsar-admin.md#delete-partitioned-topic) command, REST API and Java. - -````mdx-code-block - - - -```shell - -$ bin/pulsar-admin topics delete-partitioned-topic \ - persistent://my-tenant/my-namespace/my-topic - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/:schema/:topic/:namespace/:destination/partitions|operation/deletePartitionedTopic?version=@pulsar:version_number@} - - - - -```java - -admin.topics().delete(topic); - -``` - - - - -```` - -### List -You can get the list of partitioned topics under a given namespace in the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics list-partitioned-topics tenant/namespace -persistent://tenant/namespace/topic1 -persistent://tenant/namespace/topic2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getPartitionedTopicList?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getPartitionedTopicList(namespace); - -``` - - - - -```` - -### Stats - -You can check the current statistics of a given partitioned topic. The following is an example. For description of each stats, refer to [get stats](#get-stats). - -Note that in the subscription JSON object, `chuckedMessageRate` is deprecated. Please use `chunkedMessageRate`. Both will be sent in the JSON for now. - -```json - -{ - "msgRateIn" : 999.992947159793, - "msgThroughputIn" : 1070918.4635439808, - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "bytesInCounter" : 270318763, - "msgInCounter" : 252489, - "bytesOutCounter" : 0, - "msgOutCounter" : 0, - "averageMsgSize" : 1070.926056966454, - "msgChunkPublished" : false, - "storageSize" : 270316646, - "backlogSize" : 200921133, - "publishers" : [ { - "msgRateIn" : 999.992947159793, - "msgThroughputIn" : 1070918.4635439808, - "averageMsgSize" : 1070.3333333333333, - "chunkedMessageRate" : 0.0, - "producerId" : 0 - } ], - "subscriptions" : { - "test" : { - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "bytesOutCounter" : 0, - "msgOutCounter" : 0, - "msgRateRedeliver" : 0.0, - "chuckedMessageRate" : 0, - "chunkedMessageRate" : 0, - "msgBacklog" : 144318, - "msgBacklogNoDelayed" : 144318, - "blockedSubscriptionOnUnackedMsgs" : false, - "msgDelayed" : 0, - "unackedMessages" : 0, - "msgRateExpired" : 0.0, - "lastExpireTimestamp" : 0, - "lastConsumedFlowTimestamp" : 0, - "lastConsumedTimestamp" : 0, - "lastAckedTimestamp" : 0, - "consumers" : [ ], - "isDurable" : true, - "isReplicated" : false - } - }, - "replication" : { }, - "metadata" : { - "partitions" : 3 - }, - "partitions" : { } -} - -``` - -You can check the current statistics of a given partitioned topic and its connected producers and consumers in the following ways. 
- -````mdx-code-block - - - -```shell - -$ pulsar-admin topics partitioned-stats \ - persistent://test-tenant/namespace/topic \ - --per-partition - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/partitioned-stats|operation/getPartitionedStats?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getPartitionedStats(topic, true /* per partition */, false /* is precise backlog */); - -``` - - - - -```` - -### Internal stats - -You can check the detailed statistics of a topic. The following is an example. For description of each stats, refer to [get internal stats](#get-internal-stats). - -```json - -{ - "entriesAddedCounter": 20449518, - "numberOfEntries": 3233, - "totalSize": 331482, - "currentLedgerEntries": 3233, - "currentLedgerSize": 331482, - "lastLedgerCreatedTimestamp": "2016-06-29 03:00:23.825", - "lastLedgerCreationFailureTimestamp": null, - "waitingCursorsCount": 1, - "pendingAddEntriesCount": 0, - "lastConfirmedEntry": "324711539:3232", - "state": "LedgerOpened", - "ledgers": [ - { - "ledgerId": 324711539, - "entries": 0, - "size": 0 - } - ], - "cursors": { - "my-subscription": { - "markDeletePosition": "324711539:3133", - "readPosition": "324711539:3233", - "waitingReadOp": true, - "pendingReadOps": 0, - "messagesConsumedCounter": 20449501, - "cursorLedger": 324702104, - "cursorLedgerLastEntry": 21, - "individuallyDeletedMessages": "[(324711539:3134‥324711539:3136], (324711539:3137‥324711539:3140], ]", - "lastLedgerSwitchTimestamp": "2016-06-29 01:30:19.313", - "state": "Open" - } - } -} - -``` - -You can get the internal stats for the partitioned topic in the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics stats-internal \ - persistent://test-tenant/namespace/topic - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/internalStats|operation/getInternalStats?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getInternalStats(topic); - -``` - - - - -```` - - -## Publish to partitioned topics - -By default, Pulsar topics are served by a single broker, which limits the maximum throughput of a topic. *Partitioned topics* can span multiple brokers and thus allow for higher throughput. - -You can publish to partitioned topics using Pulsar client libraries. When publishing to partitioned topics, you must specify a routing mode. If you do not specify any routing mode when you create a new producer, the round robin routing mode is used. - -### Routing mode - -You can specify the routing mode in the ProducerConfiguration object that you use to configure your producer. The routing mode determines which partition(internal topic) that each message should be published to. - -The following {@inject: javadoc:MessageRoutingMode:/client/org/apache/pulsar/client/api/MessageRoutingMode} options are available. - -Mode | Description -:--------|:------------ -`RoundRobinPartition` | If no key is provided, the producer publishes messages across all partitions in round-robin policy to achieve the maximum throughput. Round-robin is not done per individual message, round-robin is set to the same boundary of batching delay to ensure that batching is effective. If a key is specified on the message, the partitioned producer hashes the key and assigns message to a particular partition. This is the default mode. -`SinglePartition` | If no key is provided, the producer picks a single partition randomly and publishes all messages into that partition. 
If a key is specified on the message, the partitioned producer hashes the key and assigns message to a particular partition. -`CustomPartition` | Use custom message router implementation that is called to determine the partition for a particular message. You can create a custom routing mode by using the Java client and implementing the {@inject: javadoc:MessageRouter:/client/org/apache/pulsar/client/api/MessageRouter} interface. - -The following is an example: - -```java - -String pulsarBrokerRootUrl = "pulsar://localhost:6650"; -String topic = "persistent://my-tenant/my-namespace/my-topic"; - -PulsarClient pulsarClient = PulsarClient.builder().serviceUrl(pulsarBrokerRootUrl).build(); -Producer producer = pulsarClient.newProducer() - .topic(topic) - .messageRoutingMode(MessageRoutingMode.SinglePartition) - .create(); -producer.send("Partitioned topic message".getBytes()); - -``` - -### Custom message router - -To use a custom message router, you need to provide an implementation of the {@inject: javadoc:MessageRouter:/client/org/apache/pulsar/client/api/MessageRouter} interface, which has just one `choosePartition` method: - -```java - -public interface MessageRouter extends Serializable { - int choosePartition(Message msg); -} - -``` - -The following router routes every message to partition 10: - -```java - -public class AlwaysTenRouter implements MessageRouter { - public int choosePartition(Message msg) { - return 10; - } -} - -``` - -With that implementation, you can send - -```java - -String pulsarBrokerRootUrl = "pulsar://localhost:6650"; -String topic = "persistent://my-tenant/my-cluster-my-namespace/my-topic"; - -PulsarClient pulsarClient = PulsarClient.builder().serviceUrl(pulsarBrokerRootUrl).build(); -Producer producer = pulsarClient.newProducer() - .topic(topic) - .messageRouter(new AlwaysTenRouter()) - .create(); -producer.send("Partitioned topic message".getBytes()); - -``` - -### How to choose partitions when using a key -If a message has a key, it supersedes the round robin routing policy. The following example illustrates how to choose the partition when using a key. - -```java - -// If the message has a key, it supersedes the round robin routing policy - if (msg.hasKey()) { - return signSafeMod(hash.makeHash(msg.getKey()), topicMetadata.numPartitions()); - } - - if (isBatchingEnabled) { // if batching is enabled, choose partition on `partitionSwitchMs` boundary. - long currentMs = clock.millis(); - return signSafeMod(currentMs / partitionSwitchMs + startPtnIdx, topicMetadata.numPartitions()); - } else { - return signSafeMod(PARTITION_INDEX_UPDATER.getAndIncrement(this), topicMetadata.numPartitions()); - } - -``` - -## Manage subscriptions - -You can use [Pulsar admin API](admin-api-overview.md) to create, check, and delete subscriptions. - -### Create subscription - -You can create a subscription for a topic using one of the following methods. 
- -````mdx-code-block - - - - -```shell - -pulsar-admin topics create-subscription \ ---subscription my-subscription \ -persistent://test-tenant/ns1/tp1 - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/persistent/:tenant/:namespace/:topic/subscription/:subscription|operation/createSubscriptions?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String subscriptionName = "my-subscription"; -admin.topics().createSubscription(topic, subscriptionName, MessageId.latest); - -``` - - - - -```` - -### Get subscription - -You can check all subscription names for a given topic using one of the following methods. - -````mdx-code-block - - - - -```shell - -pulsar-admin topics subscriptions \ -persistent://test-tenant/ns1/tp1 \ -my-subscription - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/subscriptions|operation/getSubscriptions?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().getSubscriptions(topic); - -``` - - - - -```` - -### Unsubscribe subscription - -When a subscription does not process messages any more, you can unsubscribe it using one of the following methods. - -````mdx-code-block - - - - -```shell - -pulsar-admin topics unsubscribe \ ---subscription my-subscription \ -persistent://test-tenant/ns1/tp1 - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/:topic/subscription/:subscription|operation/deleteSubscription?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String subscriptionName = "my-subscription"; -admin.topics().deleteSubscription(topic, subscriptionName); - -``` - - - - -```` diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/administration-geo.md b/site2/website/versioned_docs/version-2.10.0-deprecated/administration-geo.md deleted file mode 100644 index 2d64f0b643f1e6..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/administration-geo.md +++ /dev/null @@ -1,302 +0,0 @@ ---- -id: administration-geo -title: Pulsar geo-replication -sidebar_label: "Geo-replication" -original_id: administration-geo ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -## Enable geo-replication for a namespace - -You must enable geo-replication on a [per-tenant basis](#concepts-multi-tenancy) in Pulsar. For example, you can enable geo-replication between two specific clusters only when a tenant has access to both clusters. - -Geo-replication is managed at the namespace level, which means you only need to create and configure a namespace to replicate messages between two or more provisioned clusters that a tenant can access. - -Complete the following tasks to enable geo-replication for a namespace: - -* [Enable a geo-replication namespace](#enable-geo-replication-at-namespace-level) -* [Configure that namespace to replicate across two or more provisioned clusters](admin-api-namespaces.md/#configure-replication-clusters) - -Any message published on *any* topic in that namespace is replicated to all clusters in the specified set. - -## Local persistence and forwarding - -When messages are produced on a Pulsar topic, messages are first persisted in the local cluster, and then forwarded asynchronously to the remote clusters. 
- -In normal cases, when connectivity issues are none, messages are replicated immediately, at the same time as they are dispatched to local consumers. Typically, the network [round-trip time](https://en.wikipedia.org/wiki/Round-trip_delay_time) (RTT) between the remote regions defines end-to-end delivery latency. - -Applications can create producers and consumers in any of the clusters, even when the remote clusters are not reachable (like during a network partition). - -Producers and consumers can publish messages to and consume messages from any cluster in a Pulsar instance. However, subscriptions cannot only be local to the cluster where the subscriptions are created but also can be transferred between clusters after replicated subscription is enabled. Once replicated subscription is enabled, you can keep subscription state in synchronization. Therefore, a topic can be asynchronously replicated across multiple geographical regions. In case of failover, a consumer can restart consuming messages from the failure point in a different cluster. - -![A typical geo-replication example with full-mesh pattern](/assets/geo-replication.png) - -In the aforementioned example, the **T1** topic is replicated among three clusters, **Cluster-A**, **Cluster-B**, and **Cluster-C**. - -All messages produced in any of the three clusters are delivered to all subscriptions in other clusters. In this case, **C1** and **C2** consumers receive all messages that **P1**, **P2**, and **P3** producers publish. Ordering is still guaranteed on a per-producer basis. - -## Configure replication - -This section guides you through the steps to configure geo-replicated clusters. -1. [Connect replication clusters](#connect-replication-clusters) -2. [Grant permissions to properties](#grant-permissions-to-properties) -3. [Enable geo-replication](#enable-geo-replication) -4. [Use topics with geo-replication](#use-topics-with-geo-replication) - -### Connect replication clusters - -To replicate data among clusters, you need to configure each cluster to connect to the other. You can use the [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/) tool to create a connection. - -**Example** - -Suppose that you have 3 replication clusters: `us-west`, `us-cent`, and `us-east`. - -1. Configure the connection from `us-west` to `us-east`. - - Run the following command on `us-west`. - -```shell - -$ bin/pulsar-admin clusters create \ - --broker-url pulsar://: \ - --url http://: \ - us-east - -``` - - :::tip - - - If you want to use a secure connection for a cluster, you can use the flags `--broker-url-secure` and `--url-secure`. For more information, see [pulsar-admin clusters create](https://pulsar.apache.org/tools/pulsar-admin/). - - Different clusters may have different authentications. You can use the authentication flag `--auth-plugin` and `--auth-parameters` together to set cluster authentication, which overrides `brokerClientAuthenticationPlugin` and `brokerClientAuthenticationParameters` if `authenticationEnabled` sets to `true` in `broker.conf` and `standalone.conf`. For more information, see [authentication and authorization](concepts-authentication.md). - - ::: - -2. Configure the connection from `us-west` to `us-cent`. - - Run the following command on `us-west`. - -```shell - -$ bin/pulsar-admin clusters create \ - --broker-url pulsar://: \ - --url http://: \ - us-cent - -``` - -3. Run similar commands on `us-east` and `us-cent` to create connections among clusters. 
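
If you manage clusters programmatically, you can create the same connections with the Java admin API. The following is a minimal sketch only, not code from the Pulsar repository: the broker addresses are placeholders, and it assumes the `pulsar-client-admin` dependency from Pulsar 2.8 or later, where `ClusterData.builder()` is available.

```java

import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.common.policies.data.ClusterData;

public class ConnectClusters {
    public static void main(String[] args) throws Exception {
        // Connect to the admin endpoint of the local cluster (us-west in this example).
        try (PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://us-west-broker.example.com:8080") // placeholder address
                .build()) {

            // Register the remote cluster so that us-west can replicate to it.
            admin.clusters().createCluster("us-east", ClusterData.builder()
                    .serviceUrl("http://us-east-broker.example.com:8080")         // placeholder address
                    .brokerServiceUrl("pulsar://us-east-broker.example.com:6650") // placeholder address
                    .build());
        }
    }
}

```

As with the CLI, you would run the equivalent calls on each cluster so that every cluster knows how to reach the others.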
- -### Grant permissions to properties - -To replicate to a cluster, the tenant needs permission to use that cluster. You can grant permission to the tenant when you create the tenant or grant later. - -Specify all the intended clusters when you create a tenant: - -```shell - -$ bin/pulsar-admin tenants create my-tenant \ - --admin-roles my-admin-role \ - --allowed-clusters us-west,us-east,us-cent - -``` - -To update permissions of an existing tenant, use `update` instead of `create`. - -### Enable geo-replication - -You can enable geo-replication at **namespace** or **topic** level. - -#### Enable geo-replication at namespace level - -You can create a namespace with the following command sample. - -```shell - -$ bin/pulsar-admin namespaces create my-tenant/my-namespace - -``` - -Initially, the namespace is not assigned to any cluster. You can assign the namespace to clusters using the `set-clusters` subcommand: - -```shell - -$ bin/pulsar-admin namespaces set-clusters my-tenant/my-namespace \ - --clusters us-west,us-east,us-cent - -``` - -#### Enable geo-replication at topic level - -You can set geo-replication at topic level using the command `pulsar-admin topics set-replication-clusters`. For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more information, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/). - -```shell - -$ bin/pulsar-admin topics set-replication-clusters --clusters us-west,us-east,us-cent my-tenant/my-namespace/my-topic - -``` - -:::tip - -- You can change the replication clusters for a namespace at any time, without disruption to ongoing traffic. Replication channels are immediately set up or stopped in all clusters as soon as the configuration changes. -- Once you create a geo-replication namespace, any topics that producers or consumers create within that namespace are replicated across clusters. Typically, each application uses the `serviceUrl` for the local cluster. -- If you are using Pulsar version `2.10.x`, to enable geo-replication at topic level, you need to change the following configurations in the `conf/broker.conf` or `conf/standalone.conf` file to enable topic policies service. -```shell -systemTopicEnabled=true -topicLevelPoliciesEnabled=true -``` -::: - -### Use topics with geo-replication - -#### Selective replication - -By default, messages are replicated to all clusters configured for the namespace. You can restrict replication selectively by specifying a replication list for a message, and then that message is replicated only to the subset in the replication list. - -The following is an example for the [Java API](client-libraries-java.md). Note the use of the `setReplicationClusters` method when you construct the {@inject: javadoc:Message:/client/org/apache/pulsar/client/api/Message} object: - -```java - -List restrictReplicationTo = Arrays.asList( - "us-west", - "us-east" -); - -Producer producer = client.newProducer() - .topic("some-topic") - .create(); - -producer.newMessage() - .value("my-payload".getBytes()) - .setReplicationClusters(restrictReplicationTo) - .send(); - -``` - -#### Topic stats - -You can check topic-specific statistics for geo-replication topics using one of the following methods. - -````mdx-code-block - - - -Use the [`pulsar-admin topics stats`](https://pulsar.apache.org/tools/pulsar-admin/) command. 
- -```shell - -$ bin/pulsar-admin topics stats persistent://my-tenant/my-namespace/my-topic - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/stats|operation/getStats?version=@pulsar:version_number@} - - - - -```` - -Each cluster reports its own local stats, including the incoming and outgoing replication rates and backlogs. - -#### Delete a geo-replication topic - -Given that geo-replication topics exist in multiple regions, directly deleting a geo-replication topic is not possible. Instead, you should rely on automatic topic garbage collection. - -In Pulsar, a topic is automatically deleted when the topic meets the following three conditions: -- no producers or consumers are connected to it; -- no subscriptions to it; -- no more messages are kept for retention. -For geo-replication topics, each region uses a fault-tolerant mechanism to decide when deleting the topic locally is safe. - -You can explicitly disable topic garbage collection by setting `brokerDeleteInactiveTopicsEnabled` to `false` in your [broker configuration](reference-configuration.md#broker). - -To delete a geo-replication topic, close all producers and consumers on the topic, and delete all of its local subscriptions in every replication cluster. When Pulsar determines that no valid subscription for the topic remains across the system, it will garbage collect the topic. - -## Replicated subscriptions - -Pulsar supports replicated subscriptions, so you can keep subscription state in sync, within a sub-second timeframe, in the context of a topic that is being asynchronously replicated across multiple geographical regions. - -In case of failover, a consumer can restart consuming from the failure point in a different cluster. - -### Enable replicated subscription - -Replicated subscription is disabled by default. You can enable replicated subscription when creating a consumer. - -```java - -Consumer consumer = client.newConsumer(Schema.STRING) - .topic("my-topic") - .subscriptionName("my-subscription") - .replicateSubscriptionState(true) - .subscribe(); - -``` - -### Advantages - - * It is easy to implement the logic. - * You can choose to enable or disable replicated subscription. - * When you enable it, the overhead is low, and it is easy to configure. - * When you disable it, the overhead is zero. - -### Limitations - -* When you enable replicated subscription, you're creating a consistent distributed snapshot to establish an association between message ids from different clusters. The snapshots are taken periodically. The default value is `1 second`. It means that a consumer failing over to a different cluster can potentially receive 1 second of duplicates. You can also configure the frequency of the snapshot in the `broker.conf` file. -* Only the base line cursor position is synced in replicated subscriptions while the individual acknowledgments are not synced. This means the messages acknowledged out-of-order could end up getting delivered again, in the case of a cluster failover. - -## Migrate data between clusters using geo-replication - -Using geo-replication to migrate data between clusters is a special use case of the [active-active replication pattern](concepts-replication.md/#active-active-replication) when you don't have a large amount of data. - -1. Create your new cluster. -2. Add the new cluster to your old cluster. - -```shell - - bin/pulsar-admin cluster create new-cluster - -``` - -3. Add the new cluster to your tenant. 
- -```shell - - bin/pulsar-admin tenants update my-tenant --cluster old-cluster,new-cluster - -``` - -4. Set the clusters on your namespace. - -```shell - - bin/pulsar-admin namespaces set-clusters my-tenant/my-ns --cluster old-cluster,new-cluster - -``` - -5. Update your applications using [replicated subscriptions](#replicated-subscriptions). -6. Validate subscription replication is active. - -```shell - - bin/pulsar-admin topics stats-internal public/default/t1 - -``` - -7. Move your consumers and producers to the new cluster by modifying the values of `serviceURL`. - -:::note - -* The replication starts from step 4, which means existing messages in your old cluster are not replicated. -* If you have some older messages to migrate, you can pre-create the replication subscriptions for each topic and set it at the earliest position by using `pulsar-admin topics create-subscription -s pulsar.repl.new-cluster -m earliest `. - -::: - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/administration-isolation.md b/site2/website/versioned_docs/version-2.10.0-deprecated/administration-isolation.md deleted file mode 100644 index b176d1f14c20db..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/administration-isolation.md +++ /dev/null @@ -1,124 +0,0 @@ ---- -id: administration-isolation -title: Pulsar isolation -sidebar_label: "Pulsar isolation" -original_id: administration-isolation ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -In an organization, a Pulsar instance provides services to multiple teams. When organizing the resources across multiple teams, you want to make a suitable isolation plan to avoid the resource competition between different teams and applications and provide high-quality messaging service. In this case, you need to take resource isolation into consideration and weigh your intended actions against expected and unexpected consequences. - -To enforce resource isolation, you can use the Pulsar isolation policy, which allows you to allocate resources (**broker** and **bookie**) for the namespace. - -## Broker isolation - -In Pulsar, when namespaces (more specifically, namespace bundles) are assigned dynamically to brokers, the namespace isolation policy limits the set of brokers that can be used for assignment. Before topics are assigned to brokers, you can set the namespace isolation policy with a primary or a secondary regex to select desired brokers. - -You can set a namespace isolation policy for a cluster using one of the following methods. - -````mdx-code-block - - - - -``` - -pulsar-admin ns-isolation-policy set options - -``` - -For more information about the command `pulsar-admin ns-isolation-policy set options`, see [here](https://pulsar.apache.org/tools/pulsar-admin/). - -**Example** - -```shell - -bin/pulsar-admin ns-isolation-policy set \ ---auto-failover-policy-type min_available \ ---auto-failover-policy-params min_limit=1,usage_threshold=80 \ ---namespaces my-tenant/my-namespace \ ---primary 10.193.216.* my-cluster policy-name - -``` - - - - -[PUT /admin/v2/namespaces/{tenant}/{namespace}](https://pulsar.apache.org/admin-rest-api/?version=master&apiversion=v2#operation/createNamespace) - - - - -For how to set namespace isolation policy using Java admin API, see [here](https://github.com/apache/pulsar/blob/master/pulsar-client-admin/src/main/java/org/apache/pulsar/client/admin/internal/NamespacesImpl.java#L251). 
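
As an illustration only, the following sketch mirrors the `pulsar-admin` example above. It assumes an existing `PulsarAdmin` instance named `admin` and the admin API from Pulsar 2.8 or later (where `NamespaceIsolationData.builder()` is available); the cluster name, policy name, and broker regex are placeholders.

```java

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

import org.apache.pulsar.common.policies.data.AutoFailoverPolicyData;
import org.apache.pulsar.common.policies.data.AutoFailoverPolicyType;
import org.apache.pulsar.common.policies.data.NamespaceIsolationData;

// Auto-failover parameters matching the CLI flags above.
Map<String, String> failoverParams = new HashMap<>();
failoverParams.put("min_limit", "1");
failoverParams.put("usage_threshold", "80");

NamespaceIsolationData policy = NamespaceIsolationData.builder()
        .namespaces(Collections.singletonList("my-tenant/my-namespace"))
        .primary(Collections.singletonList("10.193.216.*"))
        .autoFailoverPolicy(AutoFailoverPolicyData.builder()
                .policyType(AutoFailoverPolicyType.min_available)
                .parameters(failoverParams)
                .build())
        .build();

// Create (or replace) the isolation policy on the target cluster.
admin.clusters().createNamespaceIsolationPolicy("my-cluster", "policy-name", policy);

```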
- - - - -```` - -## Bookie isolation - -A namespace can be isolated into user-defined groups of bookies, which guarantees all the data that belongs to the namespace is stored in desired bookies. The bookie affinity group uses the BookKeeper [rack-aware placement policy](https://bookkeeper.apache.org/docs/latest/api/javadoc/org/apache/bookkeeper/client/EnsemblePlacementPolicy.html) and it is a way to feed rack information which is stored as JSON format in znode. - -You can set a bookie affinity group using one of the following methods. - -````mdx-code-block - - - - -``` - -pulsar-admin namespaces set-bookie-affinity-group options - -``` - -For more information about the command `pulsar-admin namespaces set-bookie-affinity-group options`, see [here](https://pulsar.apache.org/tools/pulsar-admin/). - -**Example** - -```shell - -bin/pulsar-admin bookies set-bookie-rack \ ---bookie 127.0.0.1:3181 \ ---hostname 127.0.0.1:3181 \ ---group group-bookie1 \ ---rack rack1 - -bin/pulsar-admin namespaces set-bookie-affinity-group public/default \ ---primary-group group-bookie1 - -``` - -:::note - -- Do not set a bookie rack name to slash (`/`) or an empty string (`""`) if you use Pulsar earlier than 2.7.5, 2.8.3, and 2.9.2. If you use Pulsar 2.7.5, 2.8.3, 2.9.2 or later versions, it falls back to `/default-rack` or `/default-region/default-rack`. -- When `RackawareEnsemblePlacementPolicy` is enabled, the rack name is not allowed to contain slash (`/`) except for the beginning and end of the rack name string. For example, rack name like `/rack0` is okay, but `/rack/0` is not allowed. -- When `RegionawareEnsemblePlacementPolicy` is enabled, the rack name can only contain one slash (`/`) except for the beginning and end of the rack name string. For example, rack name like `/region0/rack0` is okay, but `/region0rack0` and `/region0/rack/0` are not allowed. -For the bookie rack name restrictions, see [pulsar-admin bookies set-bookie-rack](https://pulsar.apache.org/tools/pulsar-admin/). - -::: - - - - -[POST /admin/v2/namespaces/{tenant}/{namespace}/persistence/bookieAffinity](https://pulsar.apache.org/admin-rest-api/?version=master&apiversion=v2#operation/setBookieAffinityGroup) - - - - -For how to set bookie affinity group for a namespace using Java admin API, see [here](https://github.com/apache/pulsar/blob/master/pulsar-client-admin/src/main/java/org/apache/pulsar/client/admin/internal/NamespacesImpl.java#L1164). - - - - -```` diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/administration-load-balance.md b/site2/website/versioned_docs/version-2.10.0-deprecated/administration-load-balance.md deleted file mode 100644 index 397c88c5dc0f7d..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/administration-load-balance.md +++ /dev/null @@ -1,280 +0,0 @@ ---- -id: administration-load-balance -title: Load balance across brokers -sidebar_label: "Load balance" -original_id: administration-load-balance ---- - - -Pulsar is a horizontally scalable messaging system, so the traffic in a logical cluster must be balanced across all the available Pulsar brokers as evenly as possible, which is a core requirement. - -You can use multiple settings and tools to control the traffic distribution which requires a bit of context to understand how the traffic is managed in Pulsar. Though in most cases, the core requirement mentioned above is true out of the box and you should not worry about it. 

The following sections introduce how the load-balanced assignments work across Pulsar brokers and how you can leverage the framework to adjust.

## Dynamic assignments

Topics are dynamically assigned to brokers based on the load conditions of all brokers in the cluster. The assignment of topics to brokers is not done at the topic level but at the **bundle** level (a higher level). Instead of individual topic assignments, each broker takes ownership of a subset of the topics for a namespace. This subset is called a bundle, and it effectively acts as a sharding mechanism.

In other words, each namespace is an "administrative" unit and is sharded into a list of bundles, with each bundle comprising a portion of the overall hash range of the namespace. Topics are assigned to a particular bundle by taking the hash of the topic name and checking in which bundle the hash falls. Each bundle is independent of the others and thus is independently assigned to different brokers.

The benefit of this assignment granularity is to amortize the amount of information that you need to keep track of. Based on CPU, memory, traffic load, and other indicators, topics are assigned to a particular broker dynamically. For example:
* When a client starts using new topics that are not assigned to any broker, a process is triggered to choose the best-suited broker to acquire ownership of these topics according to the load conditions.
* If the broker owning a topic becomes overloaded, the topic is reassigned to a less-loaded broker.
* If the broker owning a topic crashes, the topic is reassigned to another active broker.

:::tip

For partitioned topics, different partitions are assigned to different brokers. Here "topic" means either a non-partitioned topic or one partition of a topic.

:::

## Create namespaces with assigned bundles

When you create a new namespace, a number of bundles are assigned to the namespace. You can set this number in the `conf/broker.conf` file:

```conf

# When a namespace is created without specifying the number of bundles, this
# value will be used as the default
defaultNumberOfNamespaceBundles=4

```

Alternatively, you can override the value when you create a new namespace using [Pulsar admin](/tools/pulsar-admin/):

```shell

bin/pulsar-admin namespaces create my-tenant/my-namespace --clusters us-west --bundles 16

```

With the above command, you create a namespace with 16 initial bundles. Therefore the topics for this namespace can immediately be spread across up to 16 brokers.

In general, if you know the expected traffic and number of topics in advance, it is better to start with a reasonable number of bundles instead of waiting for the system to auto-correct the distribution.

On the same note, it is beneficial to start with more bundles than the number of brokers, due to the hashing nature of the distribution of topics into bundles. For example, for a namespace with 1000 topics, using something like 64 bundles achieves a good distribution of traffic across 16 brokers.


## Split namespace bundles

Since the load for the topics in a bundle might change over time and predicting the load might be hard, bundle split is designed to resolve these challenges. The broker splits a bundle into two, and the new smaller bundles can be reassigned to different brokers.

Pulsar supports the following two bundle split algorithms:
* `range_equally_divide`: split the bundle into two parts with the same hash range size.
* `topic_count_equally_divide`: split the bundle into two parts with the same number of topics.

To enable bundle split, you need to configure the following settings in the `broker.conf` file, and set `defaultNamespaceBundleSplitAlgorithm` based on your needs.

```conf

loadBalancerAutoBundleSplitEnabled=true
loadBalancerAutoUnloadSplitBundlesEnabled=true
defaultNamespaceBundleSplitAlgorithm=range_equally_divide

```

You can configure more parameters for splitting thresholds. Any existing bundle that exceeds any of the thresholds is a candidate to be split. By default, the newly split bundles are immediately reassigned to other brokers to facilitate the traffic distribution.

```conf

# maximum topics in a bundle, otherwise bundle split will be triggered
loadBalancerNamespaceBundleMaxTopics=1000

# maximum sessions (producers + consumers) in a bundle, otherwise bundle split will be triggered
loadBalancerNamespaceBundleMaxSessions=1000

# maximum msgRate (in + out) in a bundle, otherwise bundle split will be triggered
loadBalancerNamespaceBundleMaxMsgRate=30000

# maximum bandwidth (in + out) in a bundle, otherwise bundle split will be triggered
loadBalancerNamespaceBundleMaxBandwidthMbytes=100

# maximum number of bundles in a namespace (for auto-split)
loadBalancerNamespaceMaximumBundles=128

```

## Shed load automatically

Pulsar's load manager supports automatic load shedding. This means that whenever the system recognizes that a particular broker is overloaded, the system forces some traffic to be reassigned to less-loaded brokers.

When a broker is identified as overloaded, the broker is forced to "unload" a subset of its bundles (the ones with the highest traffic) that make up for the overload percentage.

For example, the default threshold is 85%, and if a broker is over quota at 95% CPU usage, then the broker unloads the percent difference plus a 5% margin: `(95% - 85%) + 5% = 15%`. Given that the selection of bundles to unload is based on traffic (as a proxy measure for CPU, network, and memory), the broker unloads bundles for at least 15% of traffic.

:::tip

* Automatic load shedding is enabled by default. To disable it, you can set `loadBalancerSheddingEnabled` to `false`.
* Besides the automatic load shedding, you can [manually unload bundles](#unload-topics-and-bundles).

:::

Additional settings that apply to shedding:

```conf

# Load shedding interval. Broker periodically checks whether some traffic should be offloaded from
# some over-loaded broker to other under-loaded brokers
loadBalancerSheddingIntervalMinutes=1

# Prevent the same topics from being shed and moved to other brokers more than once within this timeframe
loadBalancerSheddingGracePeriodMinutes=30

```

Pulsar supports the following types of automatic load shedding strategies.
* [ThresholdShedder](#thresholdshedder)
* [OverloadShedder](#overloadshedder)
* [UniformLoadShedder](#uniformloadshedder)

:::note

* From Pulsar 2.10, the **default** shedding strategy is `ThresholdShedder`.
* You need to restart brokers if the shedding strategy is [dynamically updated](admin-api-brokers.md/#dynamic-broker-configuration).

:::

### ThresholdShedder
This strategy tends to shed bundles if any broker's usage is above the configured threshold. It does this by first computing the average resource usage per broker for the whole cluster.
The resource usage for each broker is calculated using the method `LocalBrokerData#getMaxResourceUsageWithWeight`. Historical observations are included in the running average based on the broker's setting for `loadBalancerHistoryResourcePercentage`. Once the average resource usage is calculated, a broker's current/historical usage is compared to the average broker usage. If a broker's usage is greater than the average usage per broker plus the `loadBalancerBrokerThresholdShedderPercentage`, this load shedder proposes removing enough bundles to bring the unloaded broker 5% below the current average broker usage. Note that recently unloaded bundles are not unloaded again.

![Shedding strategy - ThresholdShedder](/assets/shedding-strategy-thresholdshedder.svg)

For example, assume you have three brokers: the average broker usage of broker1 is 40%, and the average broker usage of broker2 and broker3 is 10%, so the cluster average usage is 20% ((40% + 10% + 10%) / 3). If you set `loadBalancerBrokerThresholdShedderPercentage` to `10`, then only certain bundles of broker1 get unloaded, because the average usage of broker1 is greater than the sum of the cluster average usage (20%) plus `loadBalancerBrokerThresholdShedderPercentage` (10%).

To use the `ThresholdShedder` strategy, configure brokers with this value.
`loadBalancerLoadSheddingStrategy=org.apache.pulsar.broker.loadbalance.impl.ThresholdShedder`

You can configure the weights for each resource per broker in the `conf/broker.conf` file.

```conf

# The bandwidth-in usage weight when calculating new resource usage. The range is between 0 and 1.0.
loadBalancerBandwithInResourceWeight=1.0

# The bandwidth-out usage weight when calculating new resource usage. The range is between 0 and 1.0.
loadBalancerBandwithOutResourceWeight=1.0

# The CPU usage weight when calculating new resource usage. The range is between 0 and 1.0.
loadBalancerCPUResourceWeight=1.0

# The heap memory usage weight when calculating new resource usage. The range is between 0 and 1.0.
loadBalancerMemoryResourceWeight=1.0

# The direct memory usage weight when calculating new resource usage. The range is between 0 and 1.0.
loadBalancerDirectMemoryResourceWeight=1.0

```

### OverloadShedder
This strategy attempts to shed exactly one bundle on brokers which are overloaded, that is, whose maximum system resource usage exceeds [`loadBalancerBrokerOverloadedThresholdPercentage`](#broker-overload-thresholds). The resources considered when determining the maximum system resource usage are described in [Broker overload thresholds](#broker-overload-thresholds). A bundle is recommended for unloading off that broker if and only if the following conditions hold: the broker has at least two bundles assigned, and the broker has at least one bundle that has not been unloaded recently according to `loadBalancerSheddingGracePeriodMinutes`. The unloaded bundle is the most expensive bundle in terms of message rate that has not been recently unloaded. Note that this strategy does not take "underloaded" brokers into account when determining which bundles to unload. If you are looking for a strategy that spreads load evenly across all brokers, see [ThresholdShedder](#thresholdshedder).

![Shedding strategy - OverloadShedder](/assets/shedding-strategy-overloadshedder.svg)

To use the `OverloadShedder` strategy, configure brokers with this value.
`loadBalancerLoadSheddingStrategy=org.apache.pulsar.broker.loadbalance.impl.OverloadShedder`

#### Broker overload thresholds

The determination of when a broker is overloaded is based on thresholds of CPU, network, and memory usage. Whenever any of those metrics reaches the threshold, the system triggers the shedding (if enabled).

:::note

The overload threshold `loadBalancerBrokerOverloadedThresholdPercentage` only applies to the [`OverloadShedder`](#overloadshedder) shedding strategy. By default, it is set to 85%.

:::

Pulsar gathers the CPU, network, and memory usage stats from the system metrics. In some cases of network utilization, the network interface speed that Linux reports is not correct and needs to be manually overridden. This is the case in AWS EC2 instances with 1Gbps NIC speed for which the OS reports 10Gbps speed.

Because of the incorrect max speed, the load manager might think the broker has not reached the NIC capacity, while in fact the broker already uses all the bandwidth and the traffic is slowed down.

You can set `loadBalancerOverrideBrokerNicSpeedGbps` in the `conf/broker.conf` file to correct the max NIC speed. When the value is empty, Pulsar uses the value that the OS reports.

### UniformLoadShedder
This strategy tends to distribute load uniformly across all brokers. It checks the load difference between the broker with the highest load and the broker with the lowest load. If the difference is higher than the configured thresholds `loadBalancerMsgRateDifferenceShedderThreshold` and `loadBalancerMsgThroughputMultiplierDifferenceShedderThreshold`, it identifies bundles that can be unloaded to distribute traffic evenly across all brokers.

![Shedding strategy - UniformLoadShedder](/assets/shedding-strategy-uniformLoadshedder.svg)

To use the `UniformLoadShedder` strategy, configure brokers with this value.
`loadBalancerLoadSheddingStrategy=org.apache.pulsar.broker.loadbalance.impl.UniformLoadShedder`

## Unload topics and bundles

You can "unload" a topic manually through Pulsar admin operations. Unloading means closing topics, releasing ownership, and reassigning topics to a new broker, based on the current load.

When unloading happens, the client experiences a small latency blip, typically in the order of tens of milliseconds, while the topic is reassigned.

Unloading is the mechanism that the load manager uses to perform the load shedding, but you can also trigger the unloading manually, for example, to correct the assignments and redistribute traffic even before having any broker overloaded.

Unloading a topic has no effect on the assignment, but just closes and reopens the particular topic:

```shell

pulsar-admin topics unload persistent://tenant/namespace/topic

```

To unload all topics for a namespace and trigger reassignments:

```shell

pulsar-admin namespaces unload tenant/namespace

```

## Distribute anti-affinity namespaces across failure domains

When your application has multiple namespaces and you want one of them available all the time to avoid any downtime, you can group these namespaces and distribute them across different [failure domains](reference-terminology.md#failure-domain) and different brokers. Thus, if one of the failure domains is down (due to a release rollout or broker restarts), it only disrupts namespaces owned by that specific failure domain, and the rest of the namespaces owned by other domains remain available without any impact.

Such a group of namespaces has anti-affinity to each other, that is, all the namespaces in this group are [anti-affinity namespaces](reference-terminology.md#anti-affinity-namespaces) and are distributed to different failure domains in a load-balanced manner.

As illustrated in the following figure, Pulsar has 2 failure domains (Domain1 and Domain2) and each domain has 2 brokers in it. You can create an anti-affinity namespace group that has 4 namespaces in it, and all the 4 namespaces have anti-affinity to each other. The load manager tries to distribute namespaces evenly across all the brokers in the same domain. Since each domain has 2 brokers, every broker owns one namespace from this anti-affinity namespace group, so you can see each domain owns 2 namespaces, and each broker owns 1 namespace.

![Distribute anti-affinity namespaces across failure domains](/assets/anti-affinity-namespaces-across-failure-domains.svg)

The load manager follows an even distribution policy across failure domains to assign anti-affinity namespaces. The following table outlines the even-distribution assignment sequence illustrated in the above figure.

| Assignment sequence | Namespace | Failure domain candidates | Broker candidates | Selected broker |
|:---|:------------|:------------------|:------------------------------------|:-----------------|
| 1 | Namespace1 | Domain1, Domain2 | Broker1, Broker2, Broker3, Broker4 | Domain1:Broker1 |
| 2 | Namespace2 | Domain2 | Broker3, Broker4 | Domain2:Broker3 |
| 3 | Namespace3 | Domain1, Domain2 | Broker2, Broker4 | Domain1:Broker2 |
| 4 | Namespace4 | Domain2 | Broker4 | Domain2:Broker4 |

:::tip

* Each namespace belongs to only one anti-affinity group. If a namespace with an existing anti-affinity assignment is assigned to another anti-affinity group, the original assignment is dropped.

* If there are more anti-affinity namespaces than failure domains, the load manager distributes namespaces evenly across all the domains, and every domain also distributes namespaces evenly across all the brokers under that domain.

:::

### Create a failure domain and register brokers

:::note

One broker can only be registered to a single failure domain.

:::

To create a domain under a specific cluster and register brokers, run the following command:

```bash

pulsar-admin clusters create-failure-domain <cluster-name> --domain-name <domain-name> --broker-list <broker-list>

```

You can also view, update, and delete domains under a specific cluster. For more information, refer to [Pulsar admin doc](/tools/pulsar-admin/).

### Create an anti-affinity namespace group

An anti-affinity group is created automatically when the first namespace is assigned to the group. To assign a namespace to an anti-affinity group, run the following command. It sets an anti-affinity group name for a namespace.

```bash

pulsar-admin namespaces set-anti-affinity-group <namespace> --group <group-name>

```

For more information about `anti-affinity-group` related commands, refer to [Pulsar admin doc](/tools/pulsar-admin/).
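
As a concrete illustration of the figure and table above, the following sketch wires up the two failure domains and the four anti-affinity namespaces. The cluster, domain, broker, tenant, and namespace names are hypothetical.

```shell

# Register two brokers per failure domain (hypothetical names).
pulsar-admin clusters create-failure-domain my-cluster --domain-name domain1 \
  --broker-list broker1.example.com:8080,broker2.example.com:8080
pulsar-admin clusters create-failure-domain my-cluster --domain-name domain2 \
  --broker-list broker3.example.com:8080,broker4.example.com:8080

# Put four namespaces in the same anti-affinity group; the load manager
# then spreads them across domains and brokers as in the table above.
for ns in ns1 ns2 ns3 ns4; do
  pulsar-admin namespaces set-anti-affinity-group my-tenant/$ns --group my-group
done

```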

diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/administration-proxy.md b/site2/website/versioned_docs/version-2.10.0-deprecated/administration-proxy.md
deleted file mode 100644
index 202d7d643437bb..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/administration-proxy.md
+++ /dev/null
@@ -1,127 +0,0 @@
---
id: administration-proxy
title: Pulsar proxy
sidebar_label: "Pulsar proxy"
original_id: administration-proxy
---

The Pulsar proxy is an optional gateway. It is used when direct connections between clients and Pulsar brokers are either infeasible or undesirable. For example, when you run Pulsar in a cloud environment or on [Kubernetes](https://kubernetes.io) or an analogous platform, you can run the Pulsar proxy.

The Pulsar proxy is not intended to be exposed on the public internet. The security considerations in the current design expect network perimeter security. The requirement of network perimeter security can be achieved with private networks.

If a proxy deployment cannot be protected with network perimeter security, the alternative is to use [Pulsar's "Proxy SNI routing" feature](concepts-proxy-sni-routing.md) with a properly secured and audited solution. In that case, the Pulsar proxy component is not used at all.

## Configure the proxy

Before using a proxy, you need to configure it with a broker's address in the cluster. You can configure the proxy with broker URLs, or configure it to connect directly using service discovery.

> In a production environment, service discovery is not recommended.

### Use broker URLs

It is more secure to specify a URL to connect to the brokers.

Proxy authorization requires access to ZooKeeper, so if you use these broker URLs to connect to the brokers, you need to disable authorization at the proxy level. Brokers still authorize requests after the proxy forwards them.

You can configure the broker URLs in `conf/proxy.conf` as follows.

```properties
brokerServiceURL=pulsar://brokers.example.com:6650
brokerWebServiceURL=http://brokers.example.com:8080
functionWorkerWebServiceURL=http://function-workers.example.com:8080
```

If you use TLS, configure the broker URLs in the following way:

```properties
brokerServiceURLTLS=pulsar+ssl://brokers.example.com:6651
brokerWebServiceURLTLS=https://brokers.example.com:8443
functionWorkerWebServiceURL=https://function-workers.example.com:8443
```

The hostname in the URLs provided should be a DNS entry that points to multiple brokers, or a virtual IP address backed by multiple broker IP addresses, so that the proxy does not lose connectivity to the Pulsar cluster if a single broker becomes unavailable.

The ports to connect to the brokers (6650 and 8080, or in the case of TLS, 6651 and 8443) should be open in the network ACLs.

Note that if you do not use functions, you do not need to configure `functionWorkerWebServiceURL`.

### Use service discovery

Pulsar uses [ZooKeeper](https://zookeeper.apache.org) for service discovery. To connect the proxy to ZooKeeper, specify the following in `conf/proxy.conf`.

```properties
metadataStoreUrl=my-zk-0:2181,my-zk-1:2181,my-zk-2:2181
configurationMetadataStoreUrl=my-zk-0:2184,my-zk-remote:2184
```

> To use service discovery, you need to open the network ACLs, so that the proxy can connect to the ZooKeeper nodes through the ZooKeeper client port (port `2181`) and the configuration store client port (port `2184`).

> However, it is not secure to use service discovery, because if the network ACL is open and someone compromises a proxy, they have full access to ZooKeeper.

### Restricting target broker addresses to mitigate CVE-2022-24280

The Pulsar proxy trusts clients to provide valid target broker addresses to connect to.
Unless the Pulsar proxy is explicitly configured to limit access, it is vulnerable as described in the security advisory [Apache Pulsar Proxy target broker address isn't validated (CVE-2022-24280)](https://github.com/apache/pulsar/wiki/CVE-2022-24280).

It is necessary to limit proxied broker connections to known broker addresses by specifying the `brokerProxyAllowedHostNames` and `brokerProxyAllowedIPAddresses` settings.

When specifying `brokerProxyAllowedHostNames`, it is possible to use a wildcard.
Note that `*` is a wildcard that matches any character in the hostname, including dot (`.`) characters.

It is recommended to use a pattern that matches only the desired brokers and no other hosts in the local network. Pulsar lookups use the default host name of the broker by default. This can be overridden with the `advertisedAddress` setting in `broker.conf`.

To increase security, it is also possible to restrict access with the `brokerProxyAllowedIPAddresses` setting. It is not mandatory to configure `brokerProxyAllowedIPAddresses` when `brokerProxyAllowedHostNames` is properly configured so that the pattern matches only the target brokers.
The `brokerProxyAllowedIPAddresses` setting supports a comma-separated list of IP addresses, IP address ranges, and IP address networks [(supported format reference)](https://seancfoley.github.io/IPAddress/IPAddress/apidocs/inet/ipaddr/IPAddressString.html).

Example: limiting by host name in a Kubernetes deployment
```yaml
  # example of limiting to Kubernetes statefulset hostnames that contain "broker-"
  PULSAR_PREFIX_brokerProxyAllowedHostNames: '*broker-*.*.*.svc.cluster.local'
```

Example: limiting by both host name and IP address in a `proxy.conf` file for a host deployment.
```properties
# require "broker" in host name
brokerProxyAllowedHostNames=*broker*.localdomain
# limit target ip addresses to a specific network
brokerProxyAllowedIPAddresses=10.0.0.0/8
```

Example: limiting by multiple host name patterns and multiple IP address ranges in a `proxy.conf` file for a host deployment.
```properties
# require "broker" in host name
brokerProxyAllowedHostNames=*broker*.localdomain,*broker*.otherdomain
# limit target ip addresses to a specific network or range demonstrating multiple supported formats
brokerProxyAllowedIPAddresses=10.10.0.0/16,192.168.1.100-120,172.16.2.*,10.1.2.3
```


## Start the proxy

To start the proxy:

```bash

$ cd /path/to/pulsar/directory
$ bin/pulsar proxy \
  --metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181 \
  --configuration-metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181

```

> You can run multiple instances of the Pulsar proxy in a cluster.

## Stop the proxy

The Pulsar proxy runs in the foreground by default. To stop the proxy, simply stop the process in which it is running.

## Proxy frontends

You can run the Pulsar proxy behind some kind of load-distributing frontend, such as an [HAProxy](https://www.digitalocean.com/community/tutorials/an-introduction-to-haproxy-and-load-balancing-concepts) load balancer.
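
For a rough idea of what such a frontend looks like, the following is a minimal HAProxy sketch, not a vetted production configuration: it assumes two hypothetical proxy hosts and passes the Pulsar binary protocol through at the TCP level (`mode tcp`), since HAProxy cannot inspect it as HTTP.

```conf

# Minimal HAProxy sketch for the Pulsar binary protocol (hostnames are hypothetical).
frontend pulsar_binary
    bind *:6650
    mode tcp
    default_backend pulsar_proxies

backend pulsar_proxies
    mode tcp
    balance roundrobin
    server proxy1 proxy1.example.com:6650 check
    server proxy2 proxy2.example.com:6650 check

```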

## Use Pulsar clients with the proxy

Once your Pulsar proxy is up and running, preferably behind a load-distributing [frontend](#proxy-frontends), clients can connect to the proxy via whichever address the frontend uses. If the address is the DNS address `pulsar.cluster.default`, for example, the connection URL for clients is `pulsar://pulsar.cluster.default:6650`.

For more information on proxy configuration, refer to [Pulsar proxy](reference-configuration.md#pulsar-proxy).
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/administration-pulsar-manager.md b/site2/website/versioned_docs/version-2.10.0-deprecated/administration-pulsar-manager.md
deleted file mode 100644
index 40c5a33da6da81..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/administration-pulsar-manager.md
+++ /dev/null
@@ -1,216 +0,0 @@
---
id: administration-pulsar-manager
title: Pulsar Manager
sidebar_label: "Pulsar Manager"
original_id: administration-pulsar-manager
---

Pulsar Manager is a web-based GUI management and monitoring tool that helps administrators and users manage and monitor tenants, namespaces, topics, subscriptions, brokers, clusters, and so on, and supports dynamic configuration of multiple environments.

:::note

If you are monitoring your current stats with [Pulsar dashboard](administration-dashboard.md), we recommend you use Pulsar Manager instead. Pulsar dashboard is deprecated.

:::

## Install

### Quick Install
The easiest way to use Pulsar Manager is to run it inside a [Docker](https://www.docker.com/products/docker) container.

```shell

docker pull apachepulsar/pulsar-manager:v0.2.0
docker run -it \
    -p 9527:9527 -p 7750:7750 \
    -e SPRING_CONFIGURATION_FILE=/pulsar-manager/pulsar-manager/application.properties \
    apachepulsar/pulsar-manager:v0.2.0

```

* Pulsar Manager is divided into a front end and a back end: the front-end service port is `9527` and the back-end service port is `7750`.
* `SPRING_CONFIGURATION_FILE`: the default configuration file for Spring.
* By default, Pulsar Manager uses the `herddb` database. HerdDB is a distributed SQL database implemented in Java; see [herddb.org](https://herddb.org/) for more information.

### Configure Database or JWT authentication
#### Configure Database (optional)

If you have a large amount of data, you should use a custom database; otherwise, some display errors may occur. For example, topic information cannot be displayed when the number of topics exceeds 10000.
The following is an example of PostgreSQL.

1. Initialize the database and table structures using this [file](https://github.com/apache/pulsar-manager/blob/master/src/main/resources/META-INF/sql/postgresql-schema.sql).
2. Download and modify the [configuration file](https://github.com/apache/pulsar-manager/blob/master/src/main/resources/application.properties), then add the PostgreSQL configuration.

```properties

spring.datasource.driver-class-name=org.postgresql.Driver
spring.datasource.url=jdbc:postgresql://127.0.0.1:5432/pulsar_manager
spring.datasource.username=postgres
spring.datasource.password=postgres

```

3. Add a configuration mount and start with the Docker image.

```bash

docker pull apachepulsar/pulsar-manager:v0.2.0
docker run -it \
    -p 9527:9527 -p 7750:7750 \
    -v /your-path/application.properties:/pulsar-manager/pulsar-manager/application.properties \
    -e SPRING_CONFIGURATION_FILE=/pulsar-manager/pulsar-manager/application.properties \
    apachepulsar/pulsar-manager:v0.2.0

```

#### Enable JWT authentication (optional)

If you want to turn on JWT authentication, configure the `application.properties` file.

```properties

backend.jwt.token=token

jwt.broker.token.mode=PRIVATE
jwt.broker.public.key=file:///path/broker-public.key
jwt.broker.private.key=file:///path/broker-private.key

# or
jwt.broker.token.mode=SECRET
jwt.broker.secret.key=file:///path/broker-secret.key

```

- `backend.jwt.token`: token for the superuser. You need to configure this parameter during cluster initialization.
- `jwt.broker.token.mode`: mode for generating the token; one of PUBLIC, PRIVATE, and SECRET.
- `jwt.broker.public.key`: configure this option if you use the PUBLIC mode.
- `jwt.broker.private.key`: configure this option if you use the PRIVATE mode.
- `jwt.broker.secret.key`: configure this option if you use the SECRET mode.

For more information, see [Token Authentication Admin of Pulsar](security-token-admin.md).

The following Docker command adds mounts for the configuration file and the key files.

```bash

docker pull apachepulsar/pulsar-manager:v0.2.0
docker run -it \
    -p 9527:9527 -p 7750:7750 \
    -v /your-path/application.properties:/pulsar-manager/pulsar-manager/application.properties \
    -v /your-path/private.key:/pulsar-manager/private.key \
    -e SPRING_CONFIGURATION_FILE=/pulsar-manager/pulsar-manager/application.properties \
    apachepulsar/pulsar-manager:v0.2.0

```

### Set the administrator account and password

```bash

CSRF_TOKEN=$(curl http://localhost:7750/pulsar-manager/csrf-token)
curl \
    -H "X-XSRF-TOKEN: $CSRF_TOKEN" \
    -H "Cookie: XSRF-TOKEN=$CSRF_TOKEN;" \
    -H "Content-Type: application/json" \
    -X PUT http://localhost:7750/pulsar-manager/users/superuser \
    -d '{"name": "admin", "password": "apachepulsar", "description": "test", "email": "username@test.org"}'

```

The request body in the curl command:

```json

{"name": "admin", "password": "apachepulsar", "description": "test", "email": "username@test.org"}

```

- `name` is the Pulsar Manager login username, currently `admin`.
- `password` is the password of the current user of Pulsar Manager, currently `apachepulsar`. The password must be at least 6 characters.



### Configure the environment
1. Log in to the system: visit http://localhost:9527 and log in. The current default account is `admin/apachepulsar`.

2. Click the "New Environment" button to add an environment.

3. Input the "Environment Name". The environment name is used to identify an environment.

4. Input the "Service URL". The Service URL is the admin service URL of your Pulsar cluster.


## Other Installation
### Bare-metal installation

When using binary packages for direct deployment, you can follow these steps.

- Download and unzip the binary package, which is available on the [Pulsar Download](/download) page.

  ```bash

  wget https://dist.apache.org/repos/dist/release/pulsar/pulsar-manager/pulsar-manager-0.2.0/apache-pulsar-manager-0.2.0-bin.tar.gz
  tar -zxvf apache-pulsar-manager-0.2.0-bin.tar.gz

  ```

- Extract the back-end service binary package and place the front-end resources in the back-end service directory.

  ```bash

  cd pulsar-manager
  tar -zxvf pulsar-manager.tar
  cd pulsar-manager
  cp -r ../dist ui

  ```

- Modify the `application.properties` configuration on demand.

  > If you don't want to modify the `application.properties` file, you can pass the configuration as startup parameters instead, for example `./bin/pulsar-manager --backend.jwt.token=token`. This is a capability of the Spring Boot framework.

- Start Pulsar Manager

  ```bash

  ./bin/pulsar-manager

  ```

### Custom docker image installation

You can find the Dockerfile in the [docker](https://github.com/apache/pulsar-manager/tree/master/docker) directory of the source repository and build an image from the source code as well:

  ```bash

  git clone https://github.com/apache/pulsar-manager
  cd pulsar-manager/front-end
  npm install --save
  npm run build:prod
  cd ..
  ./gradlew build -x test
  cd ..
  docker build -f docker/Dockerfile --build-arg BUILD_DATE=`date -u +"%Y-%m-%dT%H:%M:%SZ"` --build-arg VCS_REF=`latest` --build-arg VERSION=`latest` -t apachepulsar/pulsar-manager .

  ```

## Configuration



| application.properties | System env on Docker Image | Description | Example |
| ----------------------------------- | -------------------------- | ------------------------------------------------------------ | ------------------------------------------------- |
| backend.jwt.token | JWT_TOKEN | Token for the superuser. You need to configure this parameter during cluster initialization. | `token` |
| jwt.broker.token.mode | N/A | Mode for generating the token; one of PUBLIC, PRIVATE, and SECRET. | `PUBLIC`, `PRIVATE`, or `SECRET` |
| jwt.broker.public.key | PUBLIC_KEY | Configure this option if you use the PUBLIC mode. | `file:///path/broker-public.key` |
| jwt.broker.private.key | PRIVATE_KEY | Configure this option if you use the PRIVATE mode. | `file:///path/broker-private.key` |
| jwt.broker.secret.key | SECRET_KEY | Configure this option if you use the SECRET mode. | `file:///path/broker-secret.key` |
| spring.datasource.driver-class-name | DRIVER_CLASS_NAME | The driver class name of the database. | `org.postgresql.Driver` |
| spring.datasource.url | URL | The JDBC URL of your database. | `jdbc:postgresql://127.0.0.1:5432/pulsar_manager` |
| spring.datasource.username | USERNAME | The username of the database. | `postgres` |
| spring.datasource.password | PASSWORD | The password of the database. | `postgres` |
| N/A | LOG_LEVEL | The log level. | DEBUG |
* For more information about backend configurations, see [here](https://github.com/apache/pulsar-manager/blob/master/src/README).
* For more information about frontend configurations, see [here](https://github.com/apache/pulsar-manager/tree/master/front-end).
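
For the Docker image, the environment variables in the table above can be passed directly to `docker run` instead of mounting an `application.properties` file. A minimal sketch, reusing the example values from the table (the database endpoint is illustrative):

```shell

# Sketch: configure the Pulsar Manager backend through environment variables
# listed in the table above (values are illustrative, not production settings).
docker run -it \
    -p 9527:9527 -p 7750:7750 \
    -e JWT_TOKEN=token \
    -e DRIVER_CLASS_NAME=org.postgresql.Driver \
    -e URL='jdbc:postgresql://127.0.0.1:5432/pulsar_manager' \
    -e USERNAME=postgres \
    -e PASSWORD=postgres \
    -e LOG_LEVEL=DEBUG \
    apachepulsar/pulsar-manager:v0.2.0

```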

diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/administration-stats.md b/site2/website/versioned_docs/version-2.10.0-deprecated/administration-stats.md
deleted file mode 100644
index ac0c03602f36d5..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/administration-stats.md
+++ /dev/null
@@ -1,64 +0,0 @@
---
id: administration-stats
title: Pulsar stats
sidebar_label: "Pulsar statistics"
original_id: administration-stats
---

## Partitioned topics

|Stat|Description|
|---|---|
|msgRateIn| The sum of publish rates of all local and replication publishers in messages per second.|
|msgThroughputIn| Same as msgRateIn but in bytes per second instead of messages per second.|
|msgRateOut| The sum of dispatch rates of all local and replication consumers in messages per second.|
|msgThroughputOut| Same as msgRateOut but in bytes per second instead of messages per second.|
|averageMsgSize| Average message size, in bytes, from this publisher within the last interval.|
|storageSize| The sum of the storage size of the ledgers for this topic.|
|publishers| The list of all local publishers into the topic. Publishers can be anywhere from zero to thousands.|
|producerId| Internal identifier for this producer on this topic.|
|producerName| Internal identifier for this producer, generated by the client library.|
|address| IP address and source port for the connection of this producer.|
|connectedSince| Timestamp when this producer was created or last reconnected.|
|subscriptions| The list of all local subscriptions to the topic.|
|my-subscription| The name of this subscription (client defined).|
|msgBacklog| The count of messages in backlog for this subscription.|
|type| The type of this subscription.|
|msgRateExpired| The rate at which messages are discarded instead of dispatched from this subscription due to TTL.|
|consumers| The list of connected consumers for this subscription.|
|consumerName| Internal identifier for this consumer, generated by the client library.|
|availablePermits| The number of messages this consumer has space for in the client library's listen queue. A value of 0 means the client library's queue is full and `receive()` is not being called. A nonzero value means this consumer is ready to be dispatched messages.|
|replication| This section gives the stats for cross-colo replication of this topic.|
|replicationBacklog| The outbound replication backlog in messages.|
|connected| Whether the outbound replicator is connected.|
|replicationDelayInSeconds| How long the oldest message has been waiting to be sent through the connection, if connected is true.|
|inboundConnection| The IP and port of the broker in the publisher connection of the remote cluster to this broker. |
|inboundConnectedSince| The TCP connection being used to publish messages to the remote cluster. If no local publishers are connected, this connection is automatically closed after a minute.|


## Topics

|Stat|Description|
|---|---|
|entriesAddedCounter| Messages published since this broker loaded this topic.|
|numberOfEntries| Total number of messages being tracked.|
|totalSize| Total storage size in bytes of all messages.|
|currentLedgerEntries| Count of messages written to the ledger currently open for writing.|
|currentLedgerSize| Size in bytes of messages written to the ledger currently open for writing.|
|lastLedgerCreatedTimestamp| Time when the last ledger was created.|
|lastLedgerCreationFailureTimestamp| Time when the last ledger creation failed.|
|waitingCursorsCount| How many cursors are caught up and waiting for a new message to be published.|
|pendingAddEntriesCount| How many messages have (asynchronous) write requests that are waiting on completion.|
|lastConfirmedEntry| The ledgerid:entryid of the last message successfully written. If the entryid is -1, then the ledger is open or currently being opened but has no entries written yet.|
|state| The state of the cursor ledger. Open means you have a cursor ledger for saving updates of the markDeletePosition.|
|ledgers| The ordered list of all ledgers for this topic holding its messages.|
|cursors| The list of all cursors on this topic. Every subscription you saw in the topic stats has one.|
|markDeletePosition| The ack position: the last message the subscriber acknowledges receiving.|
|readPosition| The latest position of the subscriber for reading messages.|
|waitingReadOp| True when the subscription has read the latest message published to the topic and is waiting for new messages to be published.|
|pendingReadOps| The counter of outstanding read requests to BookKeeper currently in progress.|
|messagesConsumedCounter| Number of messages this cursor has acked since this broker loaded this topic.|
|cursorLedger| The ledger used to persistently store the current markDeletePosition.|
|cursorLedgerLastEntry| The last entryid used to persistently store the current markDeletePosition.|
|individuallyDeletedMessages| If acks are done out of order, shows the ranges of messages acked between the markDeletePosition and the readPosition.|
|lastLedgerSwitchTimestamp| The last time the cursor ledger was rolled over.|
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/administration-upgrade.md b/site2/website/versioned_docs/version-2.10.0-deprecated/administration-upgrade.md
deleted file mode 100644
index 72d136b6460f62..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/administration-upgrade.md
+++ /dev/null
@@ -1,168 +0,0 @@
---
id: administration-upgrade
title: Upgrade Guide
sidebar_label: "Upgrade"
original_id: administration-upgrade
---

## Upgrade guidelines

Apache Pulsar comprises multiple components: ZooKeeper, bookies, and brokers. These components are either stateful or stateless. You do not have to upgrade ZooKeeper nodes unless you have special requirements. While you upgrade, you need to pay attention to bookies (stateful), brokers, and proxies (stateless).

The following are some guidelines on upgrading a Pulsar cluster. Read the guidelines before upgrading.

- Back up all your configuration files before upgrading.
- Read the guide entirely, make a plan, and then execute the plan. When you make an upgrade plan, take your specific requirements and environment into consideration.
- Pay attention to the upgrading order of components.
In general, you do not need to upgrade your ZooKeeper or configuration store cluster (the global ZooKeeper cluster). You need to upgrade bookies first, and then upgrade brokers, proxies, and your clients.
- If `autorecovery` is enabled, you need to disable `autorecovery` in the upgrade process, and re-enable it after completing the process.
- Read the release notes carefully for each release. They describe features and configuration changes that might impact your upgrade.
- Upgrade a small subset of nodes of each type to canary test the new version before upgrading all nodes of that type in the cluster. When you have upgraded the canary nodes, let them run for a while to ensure that they work correctly.
- Upgrade one data center to verify the new version before upgrading all data centers if your cluster runs in multi-cluster replicated mode.

> Note: Currently, Apache Pulsar is compatible between versions.

## Upgrade sequence

To upgrade an Apache Pulsar cluster, follow the upgrade sequence.

1. Upgrade ZooKeeper (optional)
- Canary test: test an upgraded version in one or a small set of ZooKeeper nodes.
- Rolling upgrade: roll out the upgraded version to all ZooKeeper servers incrementally, one at a time. Monitor your dashboard during the whole rolling upgrade process.
2. Upgrade bookies
- Canary test: test an upgraded version in one or a small set of bookies.
- Rolling upgrade:

  a. Disable `autorecovery` with the following command.

   ```shell

   bin/bookkeeper shell autorecovery -disable

   ```


  b. Roll out the upgraded version to all bookies in the cluster after you determine that a version is safe after canary testing.

  c. After you upgrade all bookies, re-enable `autorecovery` with the following command.

   ```shell

   bin/bookkeeper shell autorecovery -enable

   ```

3. Upgrade brokers
- Canary test: test an upgraded version in one or a small set of brokers.
- Rolling upgrade: roll out the upgraded version to all brokers in the cluster after you determine that a version is safe after canary testing.
4. Upgrade proxies
- Canary test: test an upgraded version in one or a small set of proxies.
- Rolling upgrade: roll out the upgraded version to all proxies in the cluster after you determine that a version is safe after canary testing.

## Upgrade ZooKeeper (optional)
While you upgrade ZooKeeper servers, you can do a canary test first, and then upgrade all ZooKeeper servers in the cluster.

### Canary test

You can test an upgraded version on one of the ZooKeeper servers before upgrading all ZooKeeper servers in your cluster.

To upgrade a ZooKeeper server to a new version, complete the following steps:

1. Stop a ZooKeeper server.
2. Upgrade the binary and configuration files.
3. Start the ZooKeeper server with the new binary files.
4. Use `pulsar zookeeper-shell` to connect to the newly upgraded ZooKeeper server and run a few commands to verify that it works as expected (a sketch follows at the end of this section).
5. Run the ZooKeeper server for a few days, observe it, and make sure the ZooKeeper cluster runs well.

#### Canary rollback

If issues occur during the canary test, you can shut down the problematic ZooKeeper node, revert the binary and configuration, and restart ZooKeeper with the reverted binary.

### Upgrade all ZooKeeper servers

After you canary test one ZooKeeper server in your cluster, you can upgrade all ZooKeeper servers in your cluster.

You can upgrade all ZooKeeper servers one by one, following the steps in the canary test.
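
For the verification in step 4 of the canary test, a minimal sanity check might look like the following sketch. The server address is hypothetical, and `/ledgers` is assumed to be the default BookKeeper ledgers root in your deployment.

```shell

# Connect to the upgraded ZooKeeper server (hypothetical address).
bin/pulsar zookeeper-shell -server zk1.us-west.example.com:2181

# Inside the shell, list a few znodes to confirm the server answers reads;
# Pulsar/BookKeeper metadata such as /ledgers should be present.
ls /
ls /ledgers

```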

## Upgrade bookies

While you upgrade bookies, you can do a canary test first, and then upgrade all bookies in the cluster.
For more details, you can read the Apache BookKeeper [Upgrade guide](http://bookkeeper.apache.org/docs/latest/admin/upgrade).

### Canary test

You can test an upgraded version in one or a small set of bookies before upgrading all bookies in your cluster.

To upgrade a bookie to a new version, complete the following steps:

1. Stop a bookie.
2. Upgrade the binary and configuration files.
3. Start the bookie in `ReadOnly` mode to verify that the bookie of this new version runs well for read workloads.

   ```shell

   bin/pulsar bookie --readOnly

   ```

4. When the bookie runs successfully in `ReadOnly` mode, stop the bookie and restart it in `Write/Read` mode.

   ```shell

   bin/pulsar bookie

   ```

5. Observe and make sure the cluster serves both write and read traffic.

#### Canary rollback

If issues occur during the canary test, you can shut down the problematic bookie node. Other bookies in the cluster replace this problematic bookie node via autorecovery.

### Upgrade all bookies

After you canary test some bookies in your cluster, you can upgrade all bookies in your cluster.

Before upgrading, you have to decide between a downtime upgrade (the whole cluster at once) and a rolling upgrade.

In a rolling upgrade scenario, upgrade one bookie at a time. In a downtime upgrade scenario, shut down the entire cluster, upgrade each bookie, and then start the cluster.

In both scenarios, the upgrade procedure is the same for each bookie.

1. Stop the bookie.
2. Upgrade the software (either new binaries or new configuration files).
3. Start the bookie.

> **Advanced operations**
> When you upgrade a large BookKeeper cluster in a rolling upgrade scenario, upgrading one bookie at a time is slow. If you configure a rack-aware or region-aware placement policy, you can upgrade bookies rack by rack or region by region, which speeds up the whole upgrade process.

## Upgrade brokers and proxies

The upgrade procedure for brokers and proxies is the same. Brokers and proxies are `stateless`, so upgrading the two services is easy.

### Canary test

You can test an upgraded version in one or a small set of nodes before upgrading all nodes in your cluster.

To upgrade to a new version, complete the following steps:

1. Stop a broker (or proxy).
2. Upgrade the binary and configuration file.
3. Start the broker (or proxy).

#### Canary rollback

If issues occur during the canary test, you can shut down the problematic broker (or proxy) node. Revert to the old version and restart the broker (or proxy).

### Upgrade all brokers or proxies

After you canary test some brokers or proxies in your cluster, you can upgrade all brokers or proxies in your cluster.

Before upgrading, you have to decide between a downtime upgrade (the whole cluster at once) and a rolling upgrade.

In a rolling upgrade scenario, you can upgrade one broker or one proxy at a time if the size of the cluster is small. If your cluster is large, you can upgrade brokers or proxies in batches. When you upgrade a batch of brokers or proxies, make sure the remaining brokers and proxies in the cluster have enough capacity to handle the traffic during the upgrade.

In a downtime upgrade scenario, shut down the entire cluster, upgrade each broker or proxy, and then start the cluster.

In both scenarios, the upgrade procedure is the same for each broker or proxy.

1. Stop the broker or proxy.
2. Upgrade the software (either new binaries or new configuration files).
3. Start the broker or proxy.
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/administration-zk-bk.md b/site2/website/versioned_docs/version-2.10.0-deprecated/administration-zk-bk.md
deleted file mode 100644
index 0530b258dca2cf..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/administration-zk-bk.md
+++ /dev/null
@@ -1,378 +0,0 @@
---
id: administration-zk-bk
title: ZooKeeper and BookKeeper administration
sidebar_label: "ZooKeeper and BookKeeper"
original_id: administration-zk-bk
---

Pulsar relies on two external systems for essential tasks:

* [ZooKeeper](https://zookeeper.apache.org/) is responsible for a wide variety of configuration-related and coordination-related tasks.
* [BookKeeper](http://bookkeeper.apache.org/) is responsible for [persistent storage](concepts-architecture-overview.md#persistent-storage) of message data.

ZooKeeper and BookKeeper are both open-source [Apache](https://www.apache.org/) projects.

> Skip to the [How Pulsar uses ZooKeeper and BookKeeper](#how-pulsar-uses-zookeeper-and-bookkeeper) section below for a more schematic explanation of the role of these two systems in Pulsar.


## ZooKeeper

Each Pulsar instance relies on two separate ZooKeeper quorums.

* [Local ZooKeeper](#deploy-local-zookeeper) operates at the cluster level and provides cluster-specific configuration management and coordination. Each Pulsar cluster needs to have a dedicated ZooKeeper cluster.
* [Configuration Store](#deploy-configuration-store) operates at the instance level and provides configuration management for the entire system (and thus across clusters). An independent cluster of machines or the same machines that local ZooKeeper uses can provide the configuration store quorum.

### Deploy local ZooKeeper

ZooKeeper manages a variety of essential coordination-related and configuration-related tasks for Pulsar.

To deploy a Pulsar instance, you need to stand up one local ZooKeeper cluster *per Pulsar cluster*.

To begin, add all ZooKeeper servers to the quorum configuration specified in the [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file. Add a `server.N` line for each node in the cluster to the configuration, where `N` is the number of the ZooKeeper node. The following is an example for a three-node cluster:

```properties

server.1=zk1.us-west.example.com:2888:3888
server.2=zk2.us-west.example.com:2888:3888
server.3=zk3.us-west.example.com:2888:3888

```

On each host, you need to specify the node ID in the `myid` file of each node, which is in the `data/zookeeper` folder of each server by default (you can change the file location via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter).

> See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed information on `myid` and more.


On a ZooKeeper server at `zk1.us-west.example.com`, for example, you can set the `myid` value like this:

```shell

$ mkdir -p data/zookeeper
$ echo 1 > data/zookeeper/myid

```

On `zk2.us-west.example.com` the command is `echo 2 > data/zookeeper/myid` and so on.

Once you add each server to the `zookeeper.conf` configuration and each server has the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:

```shell

$ bin/pulsar-daemon start zookeeper

```

### Deploy configuration store

The ZooKeeper cluster configured and started up in the section above is a *local* ZooKeeper cluster that you can use to manage a single Pulsar cluster. In addition to a local cluster, however, a full Pulsar instance also requires a configuration store for handling some instance-level configuration and coordination tasks.

If you deploy a [single-cluster](#single-cluster-pulsar-instance) instance, you do not need a separate cluster for the configuration store. If, however, you deploy a [multi-cluster](#multi-cluster-pulsar-instance) instance, you need to stand up a separate ZooKeeper cluster for configuration tasks.

#### Single-cluster Pulsar instance

If your Pulsar instance consists of just one cluster, then you can deploy a configuration store on the same machines as the local ZooKeeper quorum but running on different TCP ports.

To deploy a ZooKeeper configuration store in a single-cluster instance, add the same ZooKeeper servers that the local quorum uses to the configuration file in [`conf/global_zookeeper.conf`](reference-configuration.md#configuration-store) using the same method for [local ZooKeeper](#local-zookeeper), but make sure to use a different port (2181 is the default for ZooKeeper). The following is an example that uses port 2184 for a three-node ZooKeeper cluster:

```properties

clientPort=2184
server.1=zk1.us-west.example.com:2185:2186
server.2=zk2.us-west.example.com:2185:2186
server.3=zk3.us-west.example.com:2185:2186

```

As before, create the `myid` files for each server in `data/global-zookeeper/myid`.

#### Multi-cluster Pulsar instance

When you deploy a global Pulsar instance, with clusters distributed across different geographical regions, the configuration store serves as a highly available and strongly consistent metadata store that can tolerate failures and partitions spanning whole regions.

The key here is to make sure the ZK quorum members are spread across at least 3 regions and that other regions run as observers.

Again, given the very low expected load on the configuration store servers, you can share the same hosts used for the local ZooKeeper quorum.

For example, assume a Pulsar instance with the following clusters: `us-west`, `us-east`, `us-central`, `eu-central`, and `ap-south`. Also assume that each cluster has its own local ZK servers named as follows:

```

zk[1-3].${CLUSTER}.example.com

```

In this scenario, you want to pick the quorum participants from a few clusters and let all the others be ZK observers. For example, to form a 7-server quorum, you can pick 3 servers from `us-west`, 2 from `us-central`, and 2 from `us-east`.

This guarantees that writes to the configuration store are possible even if one of these regions is unreachable.
-
-The ZK configuration in all the servers looks like:
-
-```properties
-
-clientPort=2184
-server.1=zk1.us-west.example.com:2185:2186
-server.2=zk2.us-west.example.com:2185:2186
-server.3=zk3.us-west.example.com:2185:2186
-server.4=zk1.us-central.example.com:2185:2186
-server.5=zk2.us-central.example.com:2185:2186
-server.6=zk3.us-central.example.com:2185:2186:observer
-server.7=zk1.us-east.example.com:2185:2186
-server.8=zk2.us-east.example.com:2185:2186
-server.9=zk3.us-east.example.com:2185:2186:observer
-server.10=zk1.eu-central.example.com:2185:2186:observer
-server.11=zk2.eu-central.example.com:2185:2186:observer
-server.12=zk3.eu-central.example.com:2185:2186:observer
-server.13=zk1.ap-south.example.com:2185:2186:observer
-server.14=zk2.ap-south.example.com:2185:2186:observer
-server.15=zk3.ap-south.example.com:2185:2186:observer
-
-```
-
-Additionally, ZK observers need to have:
-
-```properties
-
-peerType=observer
-
-```
-
-##### Start the service
-
-Once your configuration store configuration is in place, you can start up the service using [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon):
-
-```shell
-
-$ bin/pulsar-daemon start configuration-store
-
-```
-
-### ZooKeeper configuration
-
-In Pulsar, ZooKeeper configuration is handled by two separate configuration files in the `conf` directory of your Pulsar installation:
-* The `conf/zookeeper.conf` file handles the configuration for local ZooKeeper.
-* The `conf/global-zookeeper.conf` file handles the configuration for the configuration store.
-See [parameters](reference-configuration.md#zookeeper) for more details.
-
-#### Configure batching operations
-Using batching operations reduces the remote procedure call (RPC) traffic between ZooKeeper clients and servers. It also reduces the number of write transactions, because each batching operation corresponds to a single ZooKeeper transaction, containing multiple read and write operations.
-
-The following figure demonstrates a basic benchmark of the batched read/write operations that can be requested from ZooKeeper in one second:
-
-![Zookeeper batching benchmark](/assets/zookeeper-batching.png)
-
-To enable batching operations, set the [`metadataStoreBatchingEnabled`](reference-configuration.md#broker) parameter to `true` on the broker side.
-
-
-## BookKeeper
-
-BookKeeper stores all durable messages in Pulsar. BookKeeper is a distributed [write-ahead log](https://en.wikipedia.org/wiki/Write-ahead_logging) (WAL) system that guarantees read consistency of independent message logs called *ledgers*. Individual BookKeeper servers are also called *bookies*.
-
-> To manage message persistence, retention, and expiry in Pulsar, refer to the [cookbook](cookbooks-retention-expiry.md).
-
-### Hardware requirements
-
-Bookie hosts store message data on disk. To provide optimal performance, ensure that the bookies have a suitable hardware configuration. The following are two key dimensions of bookie hardware capacity:
-
-- Read/write disk I/O capacity
-- Storage capacity
-
-By default, message entries written to bookies are always synced to disk before an acknowledgement is returned to the Pulsar broker. To ensure low write latency, BookKeeper is designed to use multiple devices:
-
-- A **journal** to ensure durability. For sequential writes, it is critical to have fast [fsync](https://linux.die.net/man/2/fsync) operations on bookie hosts. Typically, small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) should suffice, or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache. Both solutions can reach fsync latency of ~0.4 ms.
-- A **ledger storage device** stores data. Writes happen in the background, so write I/O is not a big concern. Reads happen sequentially most of the time, and the backlog is read from disk only when consumers fall behind. To store large amounts of data, a typical configuration involves multiple HDDs with a RAID controller.
-
-### Configure BookKeeper
-
-You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. When you configure each bookie, ensure that the [`zkServers`](reference-configuration.md#bookkeeper-zkServers) parameter is set to the connection string for the local ZooKeeper of the Pulsar cluster.
-
-The minimum configuration changes required in `conf/bookkeeper.conf` are as follows:
-
-:::note
-
-Set `journalDirectory` and `ledgerDirectories` carefully. It is difficult to change them later.
-
-:::
-
-```properties
-
-# Change to point to journal disk mount point
-journalDirectory=data/bookkeeper/journal
-
-# Point to ledger storage disk mount point
-ledgerDirectories=data/bookkeeper/ledgers
-
-# Point to local ZK quorum
-zkServers=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
-
-# It is recommended to set this parameter. Otherwise, BookKeeper can't start normally in certain environments (for example, Huawei Cloud).
-advertisedAddress=
-
-```
-
-To change the ZooKeeper root path that BookKeeper uses, use `zkLedgersRootPath=/MY-PREFIX/ledgers` instead of `zkServers=localhost:2181/MY-PREFIX`.
-
-> For more information about BookKeeper, refer to the official [BookKeeper docs](http://bookkeeper.apache.org).
-
-### Deploy BookKeeper
-
-BookKeeper provides [persistent message storage](concepts-architecture-overview.md#persistent-storage) for Pulsar. Each Pulsar cluster has its own cluster of bookies. The BookKeeper cluster shares a local ZooKeeper quorum with the Pulsar cluster.
-
-### Start bookies manually
-
-You can start a bookie in the foreground or as a background daemon.
-
-To start a bookie in the foreground, use the [`bookkeeper`](reference-cli-tools.md#bookkeeper) CLI tool:
-
-```bash
-
-$ bin/bookkeeper bookie
-
-```
-
-To start a bookie in the background, use the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
-
-```bash
-
-$ bin/pulsar-daemon start bookie
-
-```
-
-You can verify whether the bookie works properly with the `bookiesanity` command for the [BookKeeper shell](reference-cli-tools.md#bookkeeper-shell):
-
-```shell
-
-$ bin/bookkeeper shell bookiesanity
-
-```
-
-This command creates a new ledger on the local bookie, writes a few entries, reads them back, and finally deletes the ledger.
-
-### Decommission bookies cleanly
-
-Before you decommission a bookie, you need to check your environment and meet the following requirements.
-
-1. Ensure the state of your cluster supports decommissioning the target bookie. Check whether `EnsembleSize >= Write Quorum >= Ack Quorum` still holds with one less bookie. In other words, make sure that at least `EnsembleSize` bookies remain available after removing the target bookie.
-
-2. Ensure the target bookie is listed after using the `listbookies` command.
-
-3. Ensure that no other process is ongoing (such as an upgrade).
-
-Then you can decommission bookies safely. To decommission bookies, complete the following steps.
-
-1. Log in to the bookie node and check whether there are underreplicated ledgers. The decommission command forces replication of the underreplicated ledgers.
-`$ bin/bookkeeper shell listunderreplicated`
-
-2. Stop the bookie by killing the bookie process. If you deploy bookies in a Kubernetes environment, make sure that no liveness/readiness probes are set up that would spin the stopped bookie back up.
-
-3. Run the decommission command.
-   - If you have logged in to the node to be decommissioned, you do not need to provide `-bookieid`.
-   - If you are running the decommission command for the target bookie node from another bookie node, you should mention the target bookie ID in the arguments for `-bookieid`.
-   `$ bin/bookkeeper shell decommissionbookie`
-   or
-   `$ bin/bookkeeper shell decommissionbookie -bookieid <target bookieid>`
-
-4. Validate that no ledgers are on the decommissioned bookie.
-`$ bin/bookkeeper shell listledgers -bookieid <target bookieid>`
-
-You can run the following command to check if the bookie you have decommissioned is listed in the bookies list:
-
-```bash
-
-./bookkeeper shell listbookies -rw -h
-./bookkeeper shell listbookies -ro -h
-
-```
-
-## BookKeeper persistence policies
-
-In Pulsar, you can set *persistence policies* at the namespace level, which determine how BookKeeper handles persistent storage of messages. Policies determine four things:
-
-* The number of acks (guaranteed copies) to wait for on each ledger entry.
-* The number of bookies to use for a topic.
-* The number of writes to make for each ledger entry.
-* The throttling rate for mark-delete operations.
-
-### Set persistence policies
-
-You can set persistence policies for BookKeeper at the [namespace](reference-terminology.md#namespace) level.
-
-#### Pulsar-admin
-
-Use the [`set-persistence`](reference-pulsar-admin.md#namespaces-set-persistence) subcommand and specify a namespace as well as any policies that you want to apply. The available flags are:
-
-Flag | Description | Default
-:----|:------------|:-------
-`-a`, `--bookkeeper-ack-quorum` | The number of acks (guaranteed copies) to wait on for each entry | 0
-`-e`, `--bookkeeper-ensemble` | The number of [bookies](reference-terminology.md#bookie) to use for topics in the namespace | 0
-`-w`, `--bookkeeper-write-quorum` | The number of writes to make for each entry | 0
-`-r`, `--ml-mark-delete-max-rate` | Throttling rate for mark-delete operations (0 means no throttle) | 0
-
-The following is an example (note that the values satisfy `EnsembleSize >= Write Quorum >= Ack Quorum`):
-
-```shell
-
-$ pulsar-admin namespaces set-persistence my-tenant/my-ns \
-  --bookkeeper-ensemble 3 \
-  --bookkeeper-write-quorum 2 \
-  --bookkeeper-ack-quorum 2
-
-```
-
-#### REST API
-
-{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/setPersistence?version=@pulsar:version_number@}
-
-#### Java
-
-```java
-
-int bkEnsemble = 3;
-int bkWriteQuorum = 2;
-int bkAckQuorum = 2;
-double markDeleteRate = 0.7;
-PersistencePolicies policies =
-  new PersistencePolicies(bkEnsemble, bkWriteQuorum, bkAckQuorum, markDeleteRate);
-admin.namespaces().setPersistence(namespace, policies);
-
-```
-
-### List persistence policies
-
-You can see which persistence policy currently applies to a namespace.
-
-#### Pulsar-admin
-
-Use the [`get-persistence`](reference-pulsar-admin.md#namespaces-get-persistence) subcommand and specify the namespace.
-
-The following is an example:
-
-```shell
-
-$ pulsar-admin namespaces get-persistence my-tenant/my-ns
-{
-  "bookkeeperEnsemble": 1,
-  "bookkeeperWriteQuorum": 1,
-  "bookkeeperAckQuorum": 1,
-  "managedLedgerMaxMarkDeleteRate": 0
-}
-
-```
-
-#### REST API
-
-{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/getPersistence?version=@pulsar:version_number@}
-
-#### Java
-
-```java
-
-PersistencePolicies policies = admin.namespaces().getPersistence(namespace);
-
-```
-
-## How Pulsar uses ZooKeeper and BookKeeper
-
-This diagram illustrates the role of ZooKeeper and BookKeeper in a Pulsar cluster:
-
-![ZooKeeper and BookKeeper](/assets/pulsar-system-architecture.png)
-
-Each Pulsar cluster consists of one or more message brokers. Each broker relies on an ensemble of bookies.
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/client-libraries-cgo.md b/site2/website/versioned_docs/version-2.10.0-deprecated/client-libraries-cgo.md
deleted file mode 100644
index feee2cac3bafbd..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/client-libraries-cgo.md
+++ /dev/null
@@ -1,581 +0,0 @@
----
-id: client-libraries-cgo
-title: Pulsar CGo client
-sidebar_label: "CGo(deprecated)"
-original_id: client-libraries-cgo
----
-
-> The CGo client has been deprecated since version 2.7.0. If possible, use the [Go client](client-libraries-go.md) instead.
-
-You can use the Pulsar Go client to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Go (aka Golang).
-
-All the methods in [producers](#producers), [consumers](#consumers), and [readers](#readers) of a Go client are thread-safe.
-
-Currently, the following Go clients are maintained in two repositories.
-
-| Language | Project | Maintainer | License | Description |
-|----------|---------|------------|---------|-------------|
-| CGo | [pulsar-client-go](https://github.com/apache/pulsar/tree/master/pulsar-client-go) | [Apache Pulsar](https://github.com/apache/pulsar) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | CGo client that depends on the C++ client library |
-| Go | [pulsar-client-go](https://github.com/apache/pulsar-client-go) | [Apache Pulsar](https://github.com/apache/pulsar) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | A native golang client |
-
-> **API docs available as well**
-> For standard API docs, consult the [Godoc](https://godoc.org/github.com/apache/pulsar/pulsar-client-go/pulsar).
-
-## Installation
-
-### Requirements
-
-The Pulsar Go client library is based on the C++ client library. Follow
-the instructions for the [C++ library](client-libraries-cpp.md) for installing the binaries through [RPM](client-libraries-cpp.md#rpm), [Deb](client-libraries-cpp.md#deb) or [Homebrew packages](client-libraries-cpp.md#macos).
-
-### Install go package
-
-> **Compatibility Warning**
-> The version number of the Go client **must match** the version number of the Pulsar C++ client library.
-
-You can install the `pulsar` library locally using `go get`. Note that `go get` doesn't support fetching a specific tag - it will always pull in master's version of the Go client. You'll need a C++ client library that matches master.
-
-```bash
-
-$ go get -u github.com/apache/pulsar/pulsar-client-go/pulsar
-
-```
-
-Or you can use [dep](https://github.com/golang/dep) for managing the dependencies.
-
-```bash
-
-$ dep ensure -add github.com/apache/pulsar/pulsar-client-go/pulsar@v@pulsar:version@
-
-```
-
-Once installed locally, you can import it into your project:

-```go
-
-import "github.com/apache/pulsar/pulsar-client-go/pulsar"
-
-```
-
-## Connection URLs
-
-To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL.
-
-Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme and have a default port of 6650. Here's an example for `localhost`:
-
-```http
-
-pulsar://localhost:6650
-
-```
-
-A URL for a production Pulsar cluster may look something like this:
-
-```http
-
-pulsar://pulsar.us-west.example.com:6650
-
-```
-
-If you're using [TLS](security-tls-authentication.md) authentication, the URL will look something like this:
-
-```http
-
-pulsar+ssl://pulsar.us-west.example.com:6651
-
-```
-
-## Create a client
-
-In order to interact with Pulsar, you'll first need a `Client` object. You can create a client object using the `NewClient` function, passing in a `ClientOptions` object (more on configuration [below](#client-configuration)). Here's an example:
-
-```go
-
-import (
-    "log"
-    "runtime"
-
-    "github.com/apache/pulsar/pulsar-client-go/pulsar"
-)
-
-func main() {
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-        URL: "pulsar://localhost:6650",
-        OperationTimeoutSeconds: 5,
-        MessageListenerThreads: runtime.NumCPU(),
-    })
-
-    if err != nil {
-        log.Fatalf("Could not instantiate Pulsar client: %v", err)
-    }
-}
-
-```
-
-The following configurable parameters are available for Pulsar clients:
-
-Parameter | Description | Default
-:---------|:------------|:-------
-`URL` | The connection URL for the Pulsar cluster. See [above](#urls) for more info |
-`IOThreads` | The number of threads to use for handling connections to Pulsar [brokers](reference-terminology.md#broker) | 1
-`OperationTimeoutSeconds` | The timeout for some Go client operations (creating producers, subscribing to and unsubscribing from [topics](reference-terminology.md#topic)). Retries will occur until this threshold is reached, at which point the operation will fail. | 30
-`MessageListenerThreads` | The number of threads used by message listeners ([consumers](#consumers) and [readers](#readers)) | 1
-`ConcurrentLookupRequests` | The number of concurrent lookup requests that can be sent on each broker connection. Setting a maximum helps to keep from overloading brokers. You should set values over the default of 5000 only if the client needs to produce and/or subscribe to thousands of Pulsar topics. | 5000
-`Logger` | A custom logger implementation for the client (as a function that takes a log level, file path, line number, and message). All info, warn, and error messages will be routed to this function. | `nil`
-`TLSTrustCertsFilePath` | The file path for the trusted TLS certificate |
-`TLSAllowInsecureConnection` | Whether the client accepts untrusted TLS certificates from the broker | `false`
-`Authentication` | Configure the authentication provider. (default: no authentication). Example: `Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem")` | `nil`
-`StatsIntervalInSeconds` | The interval (in seconds) at which client stats are published | 60
-
-## Producers
-
-Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Go producers using a `ProducerOptions` object. Here's an example:
-
-```go
-
-producer, err := client.CreateProducer(pulsar.ProducerOptions{
-    Topic: "my-topic",
-})
-
-if err != nil {
-    log.Fatalf("Could not instantiate Pulsar producer: %v", err)
-}
-
-defer producer.Close()
-
-msg := pulsar.ProducerMessage{
-    Payload: []byte("Hello, Pulsar"),
-}
-
-if err := producer.Send(context.Background(), msg); err != nil {
-    log.Fatalf("Producer could not send message: %v", err)
-}
-
-```
-
-> **Blocking operation**
-> When you create a new Pulsar producer, the operation will block (waiting on a go channel) until either a producer is successfully created or an error is thrown.
-
-
-### Producer operations
-
-Pulsar Go producers have the following methods available:
-
-Method | Description | Return type
-:------|:------------|:-----------
-`Topic()` | Fetches the producer's [topic](reference-terminology.md#topic) | `string`
-`Name()` | Fetches the producer's name | `string`
-`Send(context.Context, ProducerMessage)` | Publishes a [message](#messages) to the producer's topic. This call will block until the message is successfully acknowledged by the Pulsar broker, or an error will be thrown if the timeout set using the `SendTimeout` in the producer's [configuration](#producer-configuration) is exceeded. | `error`
-`SendAndGetMsgID(context.Context, ProducerMessage)`| Sends a message; this call blocks until the message is successfully acknowledged by the Pulsar broker. | (MessageID, error)
-`SendAsync(context.Context, ProducerMessage, func(ProducerMessage, error))` | Publishes a [message](#messages) to the producer's topic asynchronously. The third argument is a callback function that specifies what happens either when the message is acknowledged or an error is thrown. |
-`SendAndGetMsgIDAsync(context.Context, ProducerMessage, func(MessageID, error))`| Sends a message in asynchronous mode. The callback reports back the ID of the message being published and any eventual error in publishing. |
-`LastSequenceID()` | Gets the last sequence ID that was published by this producer. This represents either the automatically assigned or custom sequence ID (set on the ProducerMessage) that was published and acknowledged by the broker. | int64
-`Flush()`| Flushes all the messages buffered in the client and waits until all messages have been successfully persisted. | error
-`Close()` | Closes the producer and releases all resources allocated to it. If `Close()` is called then no more messages will be accepted from the publisher. This method will block until all pending publish requests have been persisted by Pulsar. If an error is thrown, no pending writes will be retried. | `error`
-`Schema()` | Fetches the schema used by the producer | Schema
-
-Here's a more involved example usage of a producer:
-
-```go
-
-import (
-    "context"
-    "fmt"
-    "log"
-
-    "github.com/apache/pulsar/pulsar-client-go/pulsar"
-)
-
-func main() {
-    // Instantiate a Pulsar client
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-        URL: "pulsar://localhost:6650",
-    })
-
-    if err != nil { log.Fatal(err) }
-
-    // Use the client to instantiate a producer
-    producer, err := client.CreateProducer(pulsar.ProducerOptions{
-        Topic: "my-topic",
-    })
-
-    if err != nil { log.Fatal(err) }
-
-    ctx := context.Background()
-
-    // Send 10 messages synchronously and 10 messages asynchronously
-    for i := 0; i < 10; i++ {
-        // Create a message
-        msg := pulsar.ProducerMessage{
-            Payload: []byte(fmt.Sprintf("message-%d", i)),
-        }
-
-        // Attempt to send the message
-        if err := producer.Send(ctx, msg); err != nil {
-            log.Fatal(err)
-        }
-
-        // Create a different message to send asynchronously
-        asyncMsg := pulsar.ProducerMessage{
-            Payload: []byte(fmt.Sprintf("async-message-%d", i)),
-        }
-
-        // Attempt to send the message asynchronously and handle the response
-        producer.SendAsync(ctx, asyncMsg, func(msg pulsar.ProducerMessage, err error) {
-            if err != nil { log.Fatal(err) }
-
-            fmt.Printf("Message %s successfully published\n", string(msg.Payload))
-        })
-    }
-}
-
-```
-
-### Producer configuration
-
-Parameter | Description | Default
-:---------|:------------|:-------
-`Topic` | The Pulsar [topic](reference-terminology.md#topic) to which the producer will publish messages |
-`Name` | A name for the producer. If you don't explicitly assign a name, Pulsar will automatically generate a globally unique name that you can access later using the `Name()` method. If you choose to explicitly assign a name, it will need to be unique across *all* Pulsar clusters, otherwise the creation operation will throw an error. |
-`Properties`| Attach a set of application defined properties to the producer. These properties will be visible in the topic stats |
-`SendTimeout` | When publishing a message to a topic, the producer will wait for an acknowledgment from the responsible Pulsar [broker](reference-terminology.md#broker). If a message is not acknowledged within the threshold set by this parameter, an error will be thrown. If you set `SendTimeout` to -1, the timeout will be set to infinity (and thus removed). Removing the send timeout is recommended when using Pulsar's [message de-duplication](cookbooks-deduplication.md) feature. | 30 seconds
-`MaxPendingMessages` | The maximum size of the queue holding pending messages (i.e. messages waiting to receive an acknowledgment from the [broker](reference-terminology.md#broker)). By default, when the queue is full all calls to the `Send` and `SendAsync` methods will fail *unless* `BlockIfQueueFull` is set to `true`. |
-`MaxPendingMessagesAcrossPartitions` | Sets the maximum number of pending messages across all the partitions. This setting lowers the per-partition limit set by `MaxPendingMessages(int)` if the total exceeds the configured value.|
-`BlockIfQueueFull` | If set to `true`, the producer's `Send` and `SendAsync` methods will block when the outgoing message queue is full rather than failing and throwing an error (the size of that queue is dictated by the `MaxPendingMessages` parameter); if set to `false` (the default), `Send` and `SendAsync` operations will fail and throw a `ProducerQueueIsFullError` when the queue is full. | `false`
-`MessageRoutingMode` | The message routing logic (for producers on [partitioned topics](concepts-architecture-overview.md#partitioned-topics)). This logic is applied only when no key is set on messages. The available options are: round robin (`pulsar.RoundRobinDistribution`, the default), publishing all messages to a single partition (`pulsar.UseSinglePartition`), or a custom partitioning scheme (`pulsar.CustomPartition`). | `pulsar.RoundRobinDistribution`
-`HashingScheme` | The hashing function that determines the partition on which a particular message is published (partitioned topics only). The available options are: `pulsar.JavaStringHash` (the equivalent of `String.hashCode()` in Java), `pulsar.Murmur3_32Hash` (applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function), or `pulsar.BoostHash` (applies the hashing function from C++'s [Boost](https://www.boost.org/doc/libs/1_62_0/doc/html/hash.html) library) | `pulsar.JavaStringHash`
-`CompressionType` | The message data compression type used by the producer. The available options are [`LZ4`](https://github.com/lz4/lz4), [`ZLIB`](https://zlib.net/), [`ZSTD`](https://facebook.github.io/zstd/) and [`SNAPPY`](https://google.github.io/snappy/). | No compression
-`MessageRouter` | By default, Pulsar uses a round-robin routing scheme for [partitioned topics](cookbooks-partitioned.md). The `MessageRouter` parameter enables you to specify custom routing logic via a function that takes the Pulsar message and topic metadata as an argument and returns an integer (the index of the partition to publish to), i.e. a function signature of `func(Message, TopicMetadata) int`. |
-`Batching` | Control whether automatic batching of messages is enabled for the producer. | false
-`BatchingMaxPublishDelay` | Set the time period within which the messages sent will be batched (default: 1ms) if batch messages are enabled. If set to a non-zero value, messages will be queued until this time interval elapses or until the `BatchingMaxMessages` threshold is reached. | 1ms
-`BatchingMaxMessages` | Set the maximum number of messages permitted in a batch. (default: 1000) If set to a value greater than 1, messages will be queued until this threshold is reached or the batch interval has elapsed | 1000
-
-## Consumers
-
-Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on those topics. You can [configure](#consumer-configuration) Go consumers using a `ConsumerOptions` object. Here's a basic example that uses channels:
-
-```go
-
-msgChannel := make(chan pulsar.ConsumerMessage)
-
-consumerOpts := pulsar.ConsumerOptions{
-    Topic:            "my-topic",
-    SubscriptionName: "my-subscription-1",
-    Type:             pulsar.Exclusive,
-    MessageChannel:   msgChannel,
-}
-
-consumer, err := client.Subscribe(consumerOpts)
-
-if err != nil {
-    log.Fatalf("Could not establish subscription: %v", err)
-}
-
-defer consumer.Close()
-
-for cm := range msgChannel {
-    msg := cm.Message
-
-    fmt.Printf("Message ID: %s", msg.ID())
-    fmt.Printf("Message value: %s", string(msg.Payload()))
-
-    consumer.Ack(msg)
-}
-
-```
-
-> **Blocking operation**
-> When you create a new Pulsar consumer, the operation will block (on a go channel) until either a consumer is successfully created or an error is thrown.
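-
-Instead of a single topic, a consumer can also subscribe to many topics at once. The following is a minimal sketch based on the `TopicsPattern` option listed in the configuration table below; the namespace and pattern are illustrative:
-
-```go
-
-// Subscribe to every topic in the namespace that matches the pattern
-consumer, err := client.Subscribe(pulsar.ConsumerOptions{
-    TopicsPattern:    "persistent://public/default/finance-.*",
-    SubscriptionName: "my-subscription-2",
-    Type:             pulsar.Shared,
-})
-
-if err != nil {
-    log.Fatalf("Could not establish subscription: %v", err)
-}
-
-defer consumer.Close()
-
-```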
-
-
-### Consumer operations
-
-Pulsar Go consumers have the following methods available:
-
-Method | Description | Return type
-:------|:------------|:-----------
-`Topic()` | Returns the consumer's [topic](reference-terminology.md#topic) | `string`
-`Subscription()` | Returns the consumer's subscription name | `string`
-`Unsubscribe()` | Unsubscribes the consumer from the assigned topic. Throws an error if the unsubscribe operation is somehow unsuccessful. | `error`
-`Receive(context.Context)` | Receives a single message from the topic. This method blocks until a message is available. | `(Message, error)`
-`Ack(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) | `error`
-`AckID(MessageID)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID | `error`
-`AckCumulative(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message. The `AckCumulative` method will block until the ack has been sent to the broker. After that, the messages will *not* be redelivered to the consumer. Cumulative acking cannot be used with a [shared](concepts-messaging.md#shared) subscription type. | `error`
-`AckCumulativeID(MessageID)` | Acknowledges the reception of all the messages in the stream up to (and including) the provided message. This method will block until the acknowledgment has been sent to the broker. After that, the messages will not be re-delivered to this consumer. | error
-`Nack(Message)` | Acknowledges the failure to process a single message. | `error`
-`NackID(MessageID)` | Acknowledges the failure to process a single message by message ID. | `error`
-`Close()` | Closes the consumer, disabling its ability to receive messages from the broker | `error`
-`RedeliverUnackedMessages()` | Redelivers *all* unacknowledged messages on the topic. In [failover](concepts-messaging.md#failover) mode, this request is ignored if the consumer isn't active on the specified topic; in [shared](concepts-messaging.md#shared) mode, redelivered messages are distributed across all consumers connected to the topic. **Note**: this is a *non-blocking* operation that doesn't throw an error. |
-`Seek(msgID MessageID)` | Resets the subscription associated with this consumer to a specific message ID. The message ID can either be a specific message or represent the first or last messages in the topic. | error
-
-#### Receive example
-
-Here's an example usage of a Go consumer that uses the `Receive()` method to process incoming messages:
-
-```go
-
-import (
-    "context"
-    "log"
-
-    "github.com/apache/pulsar/pulsar-client-go/pulsar"
-)
-
-func main() {
-    // Instantiate a Pulsar client
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-            URL: "pulsar://localhost:6650",
-    })
-
-    if err != nil { log.Fatal(err) }
-
-    // Use the client object to instantiate a consumer
-    consumer, err := client.Subscribe(pulsar.ConsumerOptions{
-        Topic:            "my-golang-topic",
-        SubscriptionName: "sub-1",
-        Type: pulsar.Exclusive,
-    })
-
-    if err != nil { log.Fatal(err) }
-
-    defer consumer.Close()
-
-    ctx := context.Background()
-
-    // Listen indefinitely on the topic
-    for {
-        msg, err := consumer.Receive(ctx)
-        if err != nil { log.Fatal(err) }
-
-        // Do something with the message
-        err = processMessage(msg)
-
-        if err == nil {
-            // Message processed successfully
-            consumer.Ack(msg)
-        } else {
-            // Failed to process messages
-            consumer.Nack(msg)
-        }
-    }
-}
-
-```
-
-### Consumer configuration
-
-Parameter | Description | Default
-:---------|:------------|:-------
-`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the consumer will establish a subscription and listen for messages |
-`Topics` | Specify a list of topics this consumer will subscribe on. Either a topic, a list of topics, or a topics pattern is required when subscribing |
-`TopicsPattern` | Specify a regular expression to subscribe to multiple topics under the same namespace. Either a topic, a list of topics, or a topics pattern is required when subscribing |
-`SubscriptionName` | The subscription name for this consumer |
-`Properties` | Attach a set of application defined properties to the consumer. These properties will be visible in the topic stats |
-`Name` | The name of the consumer |
-`AckTimeout` | Set the timeout for unacked messages | 0
-`NackRedeliveryDelay` | The delay after which to redeliver the messages that failed to be processed. Default is 1min. (See `Consumer.Nack()`) | 1 minute
-`Type` | Available options are `Exclusive`, `Shared`, and `Failover` | `Exclusive`
-`SubscriptionInitPos` | The initial position at which the cursor will be set when subscribing | Latest
-`MessageChannel` | The Go channel used by the consumer. Messages that arrive from the Pulsar topic(s) will be passed to this channel. |
-`ReceiverQueueSize` | Sets the size of the consumer's receiver queue, i.e. the number of messages that can be accumulated by the consumer before the application calls `Receive`. A value higher than the default of 1000 could increase consumer throughput, though at the expense of more memory utilization. | 1000
-`MaxTotalReceiverQueueSizeAcrossPartitions` | Set the max total receiver queue size across partitions. This setting will be used to reduce the receiver queue size for individual partitions if the total exceeds this value | 50000
-`ReadCompacted` | If enabled, the consumer will read messages from the compacted topic rather than reading the full message backlog of the topic. This means that, if the topic has been compacted, the consumer will only see the latest value for each key in the topic, up until the point in the topic message backlog that has been compacted. Beyond that point, the messages will be sent as normal. |
-
-## Readers
-
-Pulsar readers process messages from Pulsar topics. Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recent unacked message). You can [configure](#reader-configuration) Go readers using a `ReaderOptions` object. Here's an example:
-
-```go
-
-reader, err := client.CreateReader(pulsar.ReaderOptions{
-    Topic:          "my-golang-topic",
-    StartMessageID: pulsar.LatestMessage,
-})
-
-```
-
-> **Blocking operation**
-> When you create a new Pulsar reader, the operation will block (on a go channel) until either a reader is successfully created or an error is thrown.
-
-
-### Reader operations
-
-Pulsar Go readers have the following methods available:
-
-Method | Description | Return type
-:------|:------------|:-----------
-`Topic()` | Returns the reader's [topic](reference-terminology.md#topic) | `string`
-`Next(context.Context)` | Receives the next message on the topic (analogous to the `Receive` method for [consumers](#consumer-operations)). This method blocks until a message is available. | `(Message, error)`
-`HasNext()` | Checks whether there is any message available to read from the current position | (bool, error)
-`Close()` | Closes the reader, disabling its ability to receive messages from the broker | `error`
-
-#### "Next" example
-
-Here's an example usage of a Go reader that uses the `Next()` method to process incoming messages:
-
-```go
-
-import (
-    "context"
-    "log"
-
-    "github.com/apache/pulsar/pulsar-client-go/pulsar"
-)
-
-func main() {
-    // Instantiate a Pulsar client
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-        URL: "pulsar://localhost:6650",
-    })
-
-    if err != nil { log.Fatalf("Could not create client: %v", err) }
-
-    // Use the client to instantiate a reader
-    reader, err := client.CreateReader(pulsar.ReaderOptions{
-        Topic:          "my-golang-topic",
-        StartMessageID: pulsar.EarliestMessage,
-    })
-
-    if err != nil { log.Fatalf("Could not create reader: %v", err) }
-
-    defer reader.Close()
-
-    ctx := context.Background()
-
-    // Listen on the topic for incoming messages
-    for {
-        msg, err := reader.Next(ctx)
-        if err != nil { log.Fatalf("Error reading from topic: %v", err) }
-
-        // Process the message
-    }
-}
-
-```
-
-In the example above, the reader begins reading from the earliest available message (specified by `pulsar.EarliestMessage`). The reader can also begin reading from the latest message (`pulsar.LatestMessage`) or some other message ID specified by bytes using the `DeserializeMessageID` function, which takes a byte array and returns a `MessageID` object. Here's an example:
-
-```go
-
-var lastSavedId []byte // Read the last saved message ID from an external store
-
-reader, err := client.CreateReader(pulsar.ReaderOptions{
-    Topic:          "my-golang-topic",
-    StartMessageID: DeserializeMessageID(lastSavedId),
-})
-
-```
-
-### Reader configuration
-
-Parameter | Description | Default
-:---------|:------------|:-------
-`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the reader will establish a subscription and listen for messages |
-`Name` | The name of the reader |
-`StartMessageID` | The initial reader position, i.e. the message at which the reader begins processing messages. The options are `pulsar.EarliestMessage` (the earliest available message on the topic), `pulsar.LatestMessage` (the latest available message on the topic), or a `MessageID` object for a position that isn't earliest or latest. |
-`MessageChannel` | The Go channel used by the reader. Messages that arrive from the Pulsar topic(s) will be passed to this channel. |
-`ReceiverQueueSize` | Sets the size of the reader's receiver queue, i.e. the number of messages that can be accumulated by the reader before the application calls `Next`. A value higher than the default of 1000 could increase reader throughput, though at the expense of more memory utilization. | 1000
-`SubscriptionRolePrefix` | The subscription role prefix. | `reader`
-`ReadCompacted` | If enabled, the reader will read messages from the compacted topic rather than reading the full message backlog of the topic. This means that, if the topic has been compacted, the reader will only see the latest value for each key in the topic, up until the point in the topic message backlog that has been compacted. Beyond that point, the messages will be sent as normal. |
-
-## Messages
-
-The Pulsar Go client provides a `ProducerMessage` interface that you can use to construct messages to produce on Pulsar topics. Here's an example message:
-
-```go
-
-msg := pulsar.ProducerMessage{
-    Payload: []byte("Here is some message data"),
-    Key: "message-key",
-    Properties: map[string]string{
-        "foo": "bar",
-    },
-    EventTime: time.Now(),
-    ReplicationClusters: []string{"cluster1", "cluster3"},
-}
-
-if err := producer.Send(context.Background(), msg); err != nil {
-    log.Fatalf("Could not publish message due to: %v", err)
-}
-
-```
-
-The following parameters are available for `ProducerMessage` objects:
-
-Parameter | Description
-:---------|:-----------
-`Payload` | The actual data payload of the message
-`Value` | `Value` and `Payload` are mutually exclusive; use `Value interface{}` for schema messages.
-`Key` | The optional key associated with the message (particularly useful for things like topic compaction)
-`Properties` | A key-value map (both keys and values must be strings) for any application-specific metadata attached to the message
-`EventTime` | The timestamp associated with the message
-`ReplicationClusters` | The clusters to which this message will be replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default.
-`SequenceID` | Set the sequence ID to assign to the current message
-
-## TLS encryption and authentication
-
-In order to use [TLS encryption](security-tls-transport.md), you'll need to configure your client to do so:
-
- * Use the `pulsar+ssl` URL type
- * Set `TLSTrustCertsFilePath` to the path to the TLS certs used by your client and the Pulsar broker
- * Configure the `Authentication` option
-
-Here's an example:
-
-```go
-
-opts := pulsar.ClientOptions{
-    URL: "pulsar+ssl://my-cluster.com:6651",
-    TLSTrustCertsFilePath: "/path/to/certs/my-cert.csr",
-    Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem"),
-}
-
-```
-
-## Schema
-
-This example shows how to create a producer and a consumer with schema.
-
-```go
-
-var exampleSchemaDef = "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," +
-		"\"fields\":[{\"name\":\"ID\",\"type\":\"int\"},{\"name\":\"Name\",\"type\":\"string\"}]}"
-
-// The Go type corresponding to the schema definition above
-type testJson struct {
-	ID   int
-	Name string
-}
-
-jsonSchema := NewJsonSchema(exampleSchemaDef, nil)
-// create producer
-producer, err := client.CreateProducerWithSchema(ProducerOptions{
-	Topic: "jsonTopic",
-}, jsonSchema)
-err = producer.Send(context.Background(), ProducerMessage{
-	Value: &testJson{
-		ID:   100,
-		Name: "pulsar",
-	},
-})
-if err != nil {
-	log.Fatal(err)
-}
-defer producer.Close()
-// create consumer
-var s testJson
-consumerJS := NewJsonSchema(exampleSchemaDef, nil)
-consumer, err := client.SubscribeWithSchema(ConsumerOptions{
-	Topic:            "jsonTopic",
-	SubscriptionName: "sub-2",
-}, consumerJS)
-if err != nil {
-	log.Fatal(err)
-}
-msg, err := consumer.Receive(context.Background())
-if err != nil {
-	log.Fatal(err)
-}
-err = msg.GetValue(&s)
-if err != nil {
-	log.Fatal(err)
-}
-fmt.Println(s.ID)   // output: 100
-fmt.Println(s.Name) // output: pulsar
-defer consumer.Close()
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/client-libraries-cpp.md b/site2/website/versioned_docs/version-2.10.0-deprecated/client-libraries-cpp.md
deleted file mode 100644
index f5b8ae3678de21..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/client-libraries-cpp.md
+++ /dev/null
@@ -1,765 +0,0 @@
----
-id: client-libraries-cpp
-title: Pulsar C++ client
-sidebar_label: "C++"
-original_id: client-libraries-cpp
----
-
-You can use the Pulsar C++ client to create Pulsar producers and consumers in C++.
-
-All the methods in the producer, consumer, and reader of a C++ client are thread-safe.
-
-## Supported platforms
-
-The Pulsar C++ client is supported on **Linux**, **MacOS**, and **Windows** platforms.
-
-[Doxygen](http://www.doxygen.nl/)-generated API docs for the C++ client are available [here](/api/cpp).
-
-
-## Linux
-
-:::note
-
-You can choose one of the following installation methods based on your needs: Compilation, Install RPM, or Install Debian.
-
-:::
-
-### Compilation
-
-#### System requirements
-
-You need to install the following components before using the C++ client:
-
-* [CMake](https://cmake.org/)
-* [Boost](http://www.boost.org/)
-* [Protocol Buffers](https://developers.google.com/protocol-buffers/) >= 3
-* [libcurl](https://curl.se/libcurl/)
-* [Google Test](https://github.com/google/googletest)
-
-1. Clone the Pulsar repository.
-
-```shell
-
-$ git clone https://github.com/apache/pulsar
-
-```
-
-2. Install all necessary dependencies.
-
-```shell
-
-$ apt-get install cmake libssl-dev libcurl4-openssl-dev liblog4cxx-dev \
-  libprotobuf-dev protobuf-compiler libboost-all-dev google-mock libgtest-dev libjsoncpp-dev
-
-```
-
-3. Compile and install [Google Test](https://github.com/google/googletest).
-
-```shell
-
-# libgtest-dev version is 1.8.0 or above
-$ cd /usr/src/googletest
-$ sudo cmake .
-$ sudo make
-$ sudo cp ./googlemock/libgmock.a ./googlemock/gtest/libgtest.a /usr/lib/
-
-# less than 1.8.0
-$ cd /usr/src/gtest
-$ sudo cmake .
-$ sudo make
-$ sudo cp libgtest.a /usr/lib
-
-$ cd /usr/src/gmock
-$ sudo cmake .
-$ sudo make
-$ sudo cp libgmock.a /usr/lib
-
-```
-
-4. Compile the Pulsar client library for C++ inside the Pulsar repository.
-
-```shell
-
-$ cd pulsar-client-cpp
-$ cmake .
-$ make
-
-```
-
-After you install the components successfully, the files `libpulsar.so` and `libpulsar.a` are in the `lib` folder of the repository. The tools `perfProducer` and `perfConsumer` are in the `perf` directory.
-
-### Install Dependencies
-
-> Since the 2.1.0 release, Pulsar ships pre-built RPM and Debian packages. You can download and install those packages directly.
-
-After you download and install RPM or DEB, the `libpulsar.so`, `libpulsarnossl.so`, `libpulsar.a`, and `libpulsarwithdeps.a` libraries are in your `/usr/lib` directory.
-
-By default, they are built under the code path `${PULSAR_HOME}/pulsar-client-cpp`. You can build with the command below.
-
- `cmake . -DBUILD_TESTS=OFF -DLINK_STATIC=ON && make pulsarShared pulsarSharedNossl pulsarStatic pulsarStaticWithDeps -j 3`.
-
-These libraries rely on some other libraries. If you want detailed versions of the dependencies, see the [RPM](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/pkg/rpm/Dockerfile) or [DEB](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/pkg/deb/Dockerfile) files.
-
-1. `libpulsar.so` is a shared library, containing statically linked `boost` and `openssl`. It also dynamically links all other necessary libraries. You can use this Pulsar library with the command below.
-
-```bash
-
- g++ --std=c++11 PulsarTest.cpp -o test /usr/lib/libpulsar.so -I/usr/local/ssl/include
-
-```
-
-2. `libpulsarnossl.so` is a shared library, similar to `libpulsar.so` except that the libraries `openssl` and `crypto` are dynamically linked. You can use this Pulsar library with the command below.
-
-```bash
-
- g++ --std=c++11 PulsarTest.cpp -o test /usr/lib/libpulsarnossl.so -lssl -lcrypto -I/usr/local/ssl/include -L/usr/local/ssl/lib
-
-```
-
-3. `libpulsar.a` is a static library. You need to load dependencies before using this library. You can use this Pulsar library with the command below.
-
-```bash
-
- g++ --std=c++11 PulsarTest.cpp -o test /usr/lib/libpulsar.a -lssl -lcrypto -ldl -lpthread -I/usr/local/ssl/include -L/usr/local/ssl/lib -lboost_system -lboost_regex -lcurl -lprotobuf -lzstd -lz
-
-```
-
-4. `libpulsarwithdeps.a` is a static library, based on `libpulsar.a`. It is archived together with the dependencies `libboost_regex`, `libboost_system`, `libcurl`, `libprotobuf`, `libzstd` and `libz`. You can use this Pulsar library with the command below.
-
-```bash
-
- g++ --std=c++11 PulsarTest.cpp -o test /usr/lib/libpulsarwithdeps.a -lssl -lcrypto -ldl -lpthread -I/usr/local/ssl/include -L/usr/local/ssl/lib
-
-```
-
-`libpulsarwithdeps.a` does not include the OpenSSL-related libraries `libssl` and `libcrypto`, because these two libraries are related to security. It is more reasonable and easier to use the versions provided by the local system to handle security issues and upgrade libraries.
-
-### Install RPM
-
-1. Download an RPM package from the links in the table.
-
-| Link | Crypto files |
-|------|--------------|
-| [client](@pulsar:dist_rpm:client@) | [asc](@pulsar:dist_rpm:client@.asc), [sha512](@pulsar:dist_rpm:client@.sha512) |
-| [client-debuginfo](@pulsar:dist_rpm:client-debuginfo@) | [asc](@pulsar:dist_rpm:client-debuginfo@.asc), [sha512](@pulsar:dist_rpm:client-debuginfo@.sha512) |
-| [client-devel](@pulsar:dist_rpm:client-devel@) | [asc](@pulsar:dist_rpm:client-devel@.asc), [sha512](@pulsar:dist_rpm:client-devel@.sha512) |
-
-2. Install the package using the following command.
-
-```bash
-
-$ rpm -ivh apache-pulsar-client*.rpm
-
-```
-
-After you install RPM successfully, Pulsar libraries are in the `/usr/lib` directory, for example:
-
-```bash
-
-lrwxrwxrwx 1 root root 18 Dec 30 22:21 libpulsar.so -> libpulsar.so.2.9.1
-lrwxrwxrwx 1 root root 23 Dec 30 22:21 libpulsarnossl.so -> libpulsarnossl.so.2.9.1
-
-```
-
-:::note
-
-If you get the error that `libpulsar.so: cannot open shared object file: No such file or directory` when starting a Pulsar client, you may need to run `ldconfig` first.
-
-:::
-
-3. Install GCC and g++ using the following commands; otherwise, compilation errors may occur.
-
-```bash
-
-$ sudo yum -y install gcc automake autoconf libtool make
-$ sudo yum -y install gcc-c++
-
-```
-
-### Install Debian
-
-1. Download a Debian package from the links in the table.
-
-| Link | Crypto files |
-|------|--------------|
-| [client](@pulsar:deb:client@) | [asc](@pulsar:dist_deb:client@.asc), [sha512](@pulsar:dist_deb:client@.sha512) |
-| [client-devel](@pulsar:deb:client-devel@) | [asc](@pulsar:dist_deb:client-devel@.asc), [sha512](@pulsar:dist_deb:client-devel@.sha512) |
-
-2. Install the package using the following command.
-
-```bash
-
-$ apt install ./apache-pulsar-client*.deb
-
-```
-
-After you install DEB successfully, Pulsar libraries are in the `/usr/lib` directory.
-
-### Build
-
-> If you want to build RPM and Debian packages from the latest master, follow the instructions below. You should run all the instructions at the root directory of your cloned Pulsar repository.
-
-There are recipes that build RPM and Debian packages containing a
-statically linked `libpulsar.so` / `libpulsarnossl.so` / `libpulsar.a` / `libpulsarwithdeps.a` with all required dependencies.
-
-To build the C++ library packages, you need to build the Java packages first.
-
-```shell
-
-mvn install -DskipTests
-
-```
-
-#### RPM
-
-To build the RPM inside a Docker container, use the command below. The RPMs are in the `pulsar-client-cpp/pkg/rpm/RPMS/x86_64/` path.
-
-```shell
-
-pulsar-client-cpp/pkg/rpm/docker-build-rpm.sh
-
-```
-
-| Package name | Content |
-|-----|-----|
-| pulsar-client | Shared library `libpulsar.so` and `libpulsarnossl.so` |
-| pulsar-client-devel | Static library `libpulsar.a`, `libpulsarwithdeps.a` and C++ and C headers |
-| pulsar-client-debuginfo | Debug symbols for `libpulsar.so` |
-
-#### Debian
-
-To build Debian packages, enter the following command.
-
-```shell
-
-pulsar-client-cpp/pkg/deb/docker-build-deb.sh
-
-```
-
-Debian packages are created in the `pulsar-client-cpp/pkg/deb/BUILD/DEB/` path.
-
-| Package name | Content |
-|-----|-----|
-| pulsar-client | Shared library `libpulsar.so` and `libpulsarnossl.so` |
-| pulsar-client-dev | Static library `libpulsar.a`, `libpulsarwithdeps.a` and C++ and C headers |
-
-## MacOS
-
-### Compilation
-
-1. Clone the Pulsar repository.
-
-```shell
-
-$ git clone https://github.com/apache/pulsar
-
-```
-
-2. Install all necessary dependencies.
-
-```shell
-
-# OpenSSL installation
-$ brew install openssl
-$ export OPENSSL_INCLUDE_DIR=/usr/local/opt/openssl/include/
-$ export OPENSSL_ROOT_DIR=/usr/local/opt/openssl/
-
-# Protocol Buffers installation
-$ brew install protobuf boost boost-python log4cxx
-# If you are using python3, you need to install boost-python3
-
-# Google Test installation
-$ git clone https://github.com/google/googletest.git
-$ cd googletest
-$ git checkout release-1.12.1
-$ cmake .
-$ make install
-
-```
-
-3. Compile the Pulsar client library in the repository that you cloned.
-
-```shell
-
-$ cd pulsar-client-cpp
-$ cmake .
-$ make
-
-```
-
-### Install `libpulsar`
-
-Pulsar releases are available in the [Homebrew](https://brew.sh/) core repository. You can install the C++ client library with the following command. The package is installed with the library and headers.
-
-```shell
-
-brew install libpulsar
-
-```
-
-## Windows (64-bit)
-
-### Compilation
-
-1. Clone the Pulsar repository.
-
-```shell
-
-$ git clone https://github.com/apache/pulsar
-
-```
-
-2. Install all necessary dependencies.
-
-```shell
-
-cd ${PULSAR_HOME}/pulsar-client-cpp
-vcpkg install --feature-flags=manifests --triplet x64-windows
-
-```
-
-3. Build C++ libraries.
-
-```shell
-
-cmake -B ./build -A x64 -DBUILD_PYTHON_WRAPPER=OFF -DBUILD_TESTS=OFF -DVCPKG_TRIPLET=x64-windows -DCMAKE_BUILD_TYPE=Release -S .
-cmake --build ./build --config Release
-
-```
-
-> **NOTE**
->
-> 1. For Windows 32-bit, you need to use `-A Win32` and `-DVCPKG_TRIPLET=x86-windows`.
-> 2. For MSVC Debug mode, you need to replace `Release` with `Debug` for both the `CMAKE_BUILD_TYPE` variable and the `--config` option.
-
-4. Client libraries are available in the following places.
-
-```
-
-${PULSAR_HOME}/pulsar-client-cpp/build/lib/Release/pulsar.lib
-${PULSAR_HOME}/pulsar-client-cpp/build/lib/Release/pulsar.dll
-
-```
-
-## Connection URLs
-
-To connect to Pulsar using client libraries, you need to specify a Pulsar protocol URL.
-
-Pulsar protocol URLs are assigned to specific clusters and use the `pulsar` URI scheme. The default port is `6650`. The following is an example for localhost.
-
-```http
-
-pulsar://localhost:6650
-
-```
-
-In a Pulsar cluster in production, the URL looks as follows.
-
-```http
-
-pulsar://pulsar.us-west.example.com:6650
-
-```
-
-If you use TLS authentication, you need to use the `pulsar+ssl` scheme, and the default port is `6651`. The following is an example.
-
-```http
-
-pulsar+ssl://pulsar.us-west.example.com:6651
-
-```
-
-## Create a producer
-
-To use Pulsar as a producer, you need to create a producer on the C++ client. There are two main ways of using a producer:
-- [Blocking style](#simple-blocking-example): each call to `send` waits for an ack from the broker.
-- [Non-blocking asynchronous style](#non-blocking-example): `sendAsync` is called instead of `send` and a callback is supplied for when the ack is received from the broker.
-
-### Simple blocking example
-
-This example sends 100 messages using the blocking style. While simple, it does not produce high throughput as it waits for each ack to come back before sending the next message.
-
-```c++
-
-#include <pulsar/Client.h>
-#include <thread>
-
-using namespace pulsar;
-
-int main() {
-    Client client("pulsar://localhost:6650");
-
-    Producer producer;
-    Result result = client.createProducer("persistent://public/default/my-topic", producer);
-    if (result != ResultOk) {
-        std::cout << "Error creating producer: " << result << std::endl;
-        return -1;
-    }
-
-    // Send 100 messages synchronously
-    int ctr = 0;
-    while (ctr < 100) {
-        std::string content = "msg" + std::to_string(ctr);
-        Message msg = MessageBuilder().setContent(content).setProperty("x", "1").build();
-        Result result = producer.send(msg);
-        if (result != ResultOk) {
-            std::cout << "The message " << content << " could not be sent, received code: " << result << std::endl;
-        } else {
-            std::cout << "The message " << content << " sent successfully" << std::endl;
-        }
-
-        std::this_thread::sleep_for(std::chrono::milliseconds(100));
-        ctr++;
-    }
-
-    std::cout << "Finished producing synchronously!" << std::endl;
-
-    client.close();
-    return 0;
-}
-
-```
-
-### Non-blocking example
-
-This example sends 100 messages using the non-blocking style, calling `sendAsync` instead of `send`. This allows the producer to have multiple messages in flight at a time, which increases throughput.
-
-The producer configuration `blockIfQueueFull` is useful here to avoid `ResultProducerQueueIsFull` errors when the internal queue for outgoing send requests becomes full. Once the internal queue is full, `sendAsync` becomes blocking, which can make your code simpler.
-
-Without this configuration, the result code `ResultProducerQueueIsFull` is passed to the callback. You must decide how to deal with that (retry, discard, etc.).
-
-```c++
-
-#include <pulsar/Client.h>
-#include <atomic>
-#include <functional>
-#include <thread>
-
-using namespace pulsar;
-
-std::atomic<int> acksReceived;
-
-void callback(Result code, const MessageId& msgId, std::string msgContent) {
-    // message processing logic here
-    std::cout << "Received ack for msg: " << msgContent << " with code: "
-        << code << " -- MsgID: " << msgId << std::endl;
-    acksReceived++;
-}
-
-int main() {
-    Client client("pulsar://localhost:6650");
-
-    ProducerConfiguration producerConf;
-    producerConf.setBlockIfQueueFull(true);
-    Producer producer;
-    Result result = client.createProducer("persistent://public/default/my-topic",
-                                          producerConf, producer);
-    if (result != ResultOk) {
-        std::cout << "Error creating producer: " << result << std::endl;
-        return -1;
-    }
-
-    // Send 100 messages asynchronously
-    int ctr = 0;
-    while (ctr < 100) {
-        std::string content = "msg" + std::to_string(ctr);
-        Message msg = MessageBuilder().setContent(content).setProperty("x", "1").build();
-        producer.sendAsync(msg, std::bind(callback,
-            std::placeholders::_1, std::placeholders::_2, content));
-
-        std::this_thread::sleep_for(std::chrono::milliseconds(100));
-        ctr++;
-    }
-
-    // wait for 100 messages to be acked
-    while (acksReceived < 100) {
-        std::this_thread::sleep_for(std::chrono::milliseconds(100));
-    }
-
-    std::cout << "Finished producing asynchronously!" << std::endl;
-
-    client.close();
-    return 0;
-}
-
-```
-
-### Partitioned topics and lazy producers
-
-When scaling out a Pulsar topic, you may configure a topic to have hundreds of partitions. Likewise, you may have also scaled out your producers so there are hundreds or even thousands of producers. This can put some strain on the Pulsar brokers, as when you create a producer on a partitioned topic, internally it creates one internal producer per partition, which involves communications to the brokers for each one. So for a topic with 1000 partitions and 1000 producers, it ends up creating 1,000,000 internal producers across the producer applications, each of which has to communicate with a broker to find out which broker it should connect to and then perform the connection handshake.
-
-You can reduce the load caused by this combination of a large number of partitions and many producers by doing the following:
-- use the SinglePartition partition routing mode (this ensures that all messages are only sent to a single, randomly selected partition)
-- use non-keyed messages (when messages are keyed, routing is based on the hash of the key and so messages will end up being sent to multiple partitions)
-- use lazy producers (this ensures that an internal producer is only created on demand when a message needs to be routed to a partition)
-
-With our example above, that reduces the number of internal producers spread out over the 1000 producer apps from 1,000,000 to just 1000.
-
-Note that there can be extra latency for the first message sent. If you set a low send timeout, this timeout could be reached if the initial connection handshake is slow to complete.
-
-```c++
-
-ProducerConfiguration producerConf;
-producerConf.setPartitionsRoutingMode(ProducerConfiguration::UseSinglePartition);
-producerConf.setLazyStartPartitionedProducers(true);
-
-```
-
-### Enable chunking
-
-Message [chunking](concepts-messaging.md#chunking) enables Pulsar to process large payload messages by splitting the message into chunks at the producer side and aggregating chunked messages at the consumer side.
-
-The message chunking feature is OFF by default. The following is an example about how to enable message chunking when creating a producer.
-
-```c++
-
-ProducerConfiguration conf;
-conf.setBatchingEnabled(false);
-conf.setChunkingEnabled(true);
-Producer producer;
-client.createProducer("my-topic", conf, producer);
-
-```
-
-> **Note:** To enable chunking, you need to disable batching (`setBatchingEnabled`=`false`) at the same time.
-
-## Create a consumer
-
-To use Pulsar as a consumer, you need to create a consumer on the C++ client. There are two main ways of using the consumer:
-- [Blocking style](#blocking-example): synchronously calling `receive(msg)`.
-- [Non-blocking](#consumer-with-a-message-listener) (event based) style: using a message listener.
-
-### Blocking example
-
-The benefit of this approach is that it is the simplest code. You simply keep calling `receive(msg)`, which blocks until a message is received.
-
-This example starts a subscription at the earliest offset and consumes 100 messages.
-
-```c++
-
-#include <pulsar/Client.h>
-
-using namespace pulsar;
-
-int main() {
-    Client client("pulsar://localhost:6650");
-
-    Consumer consumer;
-    ConsumerConfiguration config;
-    config.setSubscriptionInitialPosition(InitialPositionEarliest);
-    Result result = client.subscribe("persistent://public/default/my-topic", "consumer-1", config, consumer);
-    if (result != ResultOk) {
-        std::cout << "Failed to subscribe: " << result << std::endl;
-        return -1;
-    }
-
-    Message msg;
-    int ctr = 0;
-    // consume 100 messages
-    while (ctr < 100) {
-        consumer.receive(msg);
-        std::cout << "Received: " << msg
-                  << "  with payload '" << msg.getDataAsString() << "'" << std::endl;
-
-        consumer.acknowledge(msg);
-        ctr++;
-    }
-
-    std::cout << "Finished consuming synchronously!" << std::endl;
<< std::endl; - - client.close(); - return 0; -} - -``` - -### Consumer with a message listener - -You can avoid running a loop with blocking calls with an event based style by using a message listener which is invoked for each message that is received. - -This example starts a subscription at the earliest offset and consumes 100 messages. - -```c++ - -#include -#include -#include - -using namespace pulsar; - -std::atomic messagesReceived; - -void handleAckComplete(Result res) { - std::cout << "Ack res: " << res << std::endl; -} - -void listener(Consumer consumer, const Message& msg) { - std::cout << "Got message " << msg << " with content '" << msg.getDataAsString() << "'" << std::endl; - messagesReceived++; - consumer.acknowledgeAsync(msg.getMessageId(), handleAckComplete); -} - -int main() { - Client client("pulsar://localhost:6650"); - - Consumer consumer; - ConsumerConfiguration config; - config.setMessageListener(listener); - config.setSubscriptionInitialPosition(InitialPositionEarliest); - Result result = client.subscribe("persistent://public/default/my-topic", "consumer-1", config, consumer); - if (result != ResultOk) { - std::cout << "Failed to subscribe: " << result << std::endl; - return -1; - } - - // wait for 100 messages to be consumed - while (messagesReceived < 100) { - std::this_thread::sleep_for(std::chrono::milliseconds(100)); - } - - std::cout << "Finished consuming asynchronously!" << std::endl; - - client.close(); - return 0; -} - -``` - -### Configure chunking - -You can limit the maximum number of chunked messages a consumer maintains concurrently by configuring the `setMaxPendingChunkedMessage` and `setAutoAckOldestChunkedMessageOnQueueFull` parameters. When the threshold is reached, the consumer drops pending messages by silently acknowledging them or asking the broker to redeliver them later. - -The following is an example of how to configure message chunking. - -```c++ - -ConsumerConfiguration conf; -conf.setAutoAckOldestChunkedMessageOnQueueFull(true); -conf.setMaxPendingChunkedMessage(100); -Consumer consumer; -client.subscribe("my-topic", "my-sub", conf, consumer); - -``` - -## Enable authentication in connection URLs -If you use TLS authentication when connecting to Pulsar, you need to add `ssl` in the connection URLs, and the default port is `6651`. The following is an example. - -```cpp - -ClientConfiguration config = ClientConfiguration(); -config.setUseTls(true); -config.setTlsTrustCertsFilePath("/path/to/cacert.pem"); -config.setTlsAllowInsecureConnection(false); -config.setAuth(pulsar::AuthTls::create( - "/path/to/client-cert.pem", "/path/to/client-key.pem");); - -Client client("pulsar+ssl://my-broker.com:6651", config); - -``` - -For complete examples, refer to [C++ client examples](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp/examples). - -## Schema - -This section describes some examples about schema. For more information about -schema, see [Pulsar schema](schema-get-started.md). - -### Avro schema - -- The following example shows how to create a producer with an Avro schema. 

  ```cpp

  static const std::string exampleSchema =
      "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\","
      "\"fields\":[{\"name\":\"a\",\"type\":\"int\"},{\"name\":\"b\",\"type\":\"int\"}]}";
  Producer producer;
  ProducerConfiguration producerConf;
  producerConf.setSchema(SchemaInfo(AVRO, "Avro", exampleSchema));
  client.createProducer("topic-avro", producerConf, producer);

  ```

- The following example shows how to create a consumer with an Avro schema.

  ```cpp

  static const std::string exampleSchema =
      "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\","
      "\"fields\":[{\"name\":\"a\",\"type\":\"int\"},{\"name\":\"b\",\"type\":\"int\"}]}";
  ConsumerConfiguration consumerConf;
  Consumer consumer;
  consumerConf.setSchema(SchemaInfo(AVRO, "Avro", exampleSchema));
  client.subscribe("topic-avro", "sub-2", consumerConf, consumer);

  ```

### ProtobufNative schema

The following example shows how to create a producer and a consumer with a ProtobufNative schema.

1. Generate the `User` class using Protobuf3.

   :::note

   You need to use Protobuf3 or later versions.

   :::

   ```protobuf

   syntax = "proto3";

   message User {
       string name = 1;
       int32 age = 2;
   }

   ```

2. Include `ProtobufNativeSchema.h` in your source code. Ensure the Protobuf dependency has been added to your project.

   ```c++

   #include <pulsar/ProtobufNativeSchema.h>

   ```

3. Create a producer to send a `User` instance.

   ```c++

   ProducerConfiguration producerConf;
   producerConf.setSchema(createProtobufNativeSchema(User::GetDescriptor()));
   Producer producer;
   client.createProducer("topic-protobuf", producerConf, producer);
   User user;
   user.set_name("my-name");
   user.set_age(10);
   std::string content;
   user.SerializeToString(&content);
   producer.send(MessageBuilder().setContent(content).build());

   ```

4. Create a consumer to receive a `User` instance.

   ```c++

   ConsumerConfiguration consumerConf;
   consumerConf.setSchema(createProtobufNativeSchema(User::GetDescriptor()));
   consumerConf.setSubscriptionInitialPosition(InitialPositionEarliest);
   Consumer consumer;
   client.subscribe("topic-protobuf", "my-sub", consumerConf, consumer);
   Message msg;
   consumer.receive(msg);
   User user2;
   user2.ParseFromArray(msg.getData(), msg.getLength());

   ```

diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/client-libraries-dotnet.md b/site2/website/versioned_docs/version-2.10.0-deprecated/client-libraries-dotnet.md
deleted file mode 100644
index 52b6200c478af8..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/client-libraries-dotnet.md
+++ /dev/null
@@ -1,456 +0,0 @@
---
id: client-libraries-dotnet
title: Pulsar C# client
sidebar_label: "C#"
original_id: client-libraries-dotnet
---

You can use the Pulsar C# client (DotPulsar) to create Pulsar producers and consumers in C#. All the methods in the producer, consumer, and reader of a C# client are thread-safe. The official documentation for DotPulsar is available [here](https://github.com/apache/pulsar-dotpulsar/wiki).

## Installation

You can install the Pulsar C# client library either through the dotnet CLI or through Visual Studio. This section describes how to install the Pulsar C# client library through the dotnet CLI. For information about how to install the Pulsar C# client library through Visual Studio, see [here](https://docs.microsoft.com/en-us/visualstudio/mac/nuget-walkthrough?view=vsmac-2019).

### Prerequisites

Install the [.NET Core SDK](https://dotnet.microsoft.com/download/), which provides the dotnet command-line tool. Starting in Visual Studio 2017, the dotnet CLI is automatically installed with any .NET Core related workloads.

### Procedures

To install the Pulsar C# client library, follow these steps:

1. Create a project.

   1. Create a folder for the project.

   2. Open a terminal window and switch to the new folder.

   3. Create the project using the following command.

      ```

      dotnet new console

      ```

   4. Use `dotnet run` to test that the app has been created properly.

2. Add the DotPulsar NuGet package.

   1. Use the following command to install the `DotPulsar` package.

      ```

      dotnet add package DotPulsar

      ```

   2. After the command completes, open the `.csproj` file to see the added reference.

      ```xml

      <ItemGroup>
        <PackageReference Include="DotPulsar" Version="..." />
      </ItemGroup>

      ```

## Client

This section describes some configuration examples for the Pulsar C# client.

### Create client

This example shows how to create a Pulsar C# client connected to localhost.

```c#

using DotPulsar;

var client = PulsarClient.Builder().Build();

```

To create a Pulsar C# client by using the builder, you can specify the following options.

| Option | Description | Default |
| ---- | ---- | ---- |
| ServiceUrl | Set the service URL for the Pulsar cluster. | pulsar://localhost:6650 |
| RetryInterval | Set the time to wait before retrying an operation or a reconnection. | 3s |

### Create producer

This section describes how to create a producer.

- Create a producer by using the builder.

  ```c#

  using DotPulsar;
  using DotPulsar.Extensions;

  var producer = client.NewProducer()
      .Topic("persistent://public/default/mytopic")
      .Create();

  ```

- Create a producer without using the builder.

  ```c#

  using DotPulsar;

  var options = new ProducerOptions("persistent://public/default/mytopic", Schema.ByteArray);
  var producer = client.CreateProducer(options);

  ```

### Create consumer

This section describes how to create a consumer.

- Create a consumer by using the builder.

  ```c#

  using DotPulsar;
  using DotPulsar.Extensions;

  var consumer = client.NewConsumer()
      .SubscriptionName("MySubscription")
      .Topic("persistent://public/default/mytopic")
      .Create();

  ```

- Create a consumer without using the builder.

  ```c#

  using DotPulsar;

  var options = new ConsumerOptions("MySubscription", "persistent://public/default/mytopic", Schema.ByteArray);
  var consumer = client.CreateConsumer(options);

  ```

### Create reader

This section describes how to create a reader.

- Create a reader by using the builder.

  ```c#

  using DotPulsar;
  using DotPulsar.Extensions;

  var reader = client.NewReader()
      .StartMessageId(MessageId.Earliest)
      .Topic("persistent://public/default/mytopic")
      .Create();

  ```

- Create a reader without using the builder.

  ```c#

  using DotPulsar;

  var options = new ReaderOptions(MessageId.Earliest, "persistent://public/default/mytopic", Schema.ByteArray);
  var reader = client.CreateReader(options);

  ```

### Configure encryption policies

The Pulsar C# client supports four kinds of encryption policies:

- `EnforceUnencrypted`: always use unencrypted connections.
- `EnforceEncrypted`: always use encrypted connections.
- `PreferUnencrypted`: use unencrypted connections, if possible.
- `PreferEncrypted`: use encrypted connections, if possible.

This example shows how to set the `EnforceEncrypted` encryption policy.

```c#

using DotPulsar;

var client = PulsarClient.Builder()
    .ConnectionSecurity(EncryptionPolicy.EnforceEncrypted)
    .Build();

```

### Configure authentication

Currently, the Pulsar C# client supports TLS (Transport Layer Security) and JWT (JSON Web Token) authentication.

If you have followed [Authentication using TLS](security-tls-authentication.md), you get a certificate and a key. To use them from the Pulsar C# client, follow these steps:

1. Create an unencrypted and password-less pfx file.

   ```bash

   openssl pkcs12 -export -keypbe NONE -certpbe NONE -out admin.pfx -inkey admin.key.pem -in admin.cert.pem -passout pass:

   ```

2. Use the admin.pfx file to create an X509Certificate2 and pass it to the Pulsar C# client.

   ```c#

   using System.Security.Cryptography.X509Certificates;
   using DotPulsar;

   var clientCertificate = new X509Certificate2("admin.pfx");
   var client = PulsarClient.Builder()
       .AuthenticateUsingClientCertificate(clientCertificate)
       .Build();

   ```

## Producer

A producer is a process that attaches to a topic and publishes messages to a Pulsar broker for processing. This section describes some configuration examples for the producer.

### Send data

This example shows how to send data.

```c#

var data = Encoding.UTF8.GetBytes("Hello World");
await producer.Send(data);

```

### Send messages with customized metadata

- Send messages with customized metadata by using the builder.

  ```c#

  var data = Encoding.UTF8.GetBytes("Hello World");
  var messageId = await producer.NewMessage()
      .Property("SomeKey", "SomeValue")
      .Send(data);

  ```

- Send messages with customized metadata without using the builder.

  ```c#

  var data = Encoding.UTF8.GetBytes("Hello World");
  var metadata = new MessageMetadata();
  metadata["SomeKey"] = "SomeValue";
  var messageId = await producer.Send(metadata, data);

  ```

## Consumer

A consumer is a process that attaches to a topic through a subscription and then receives messages. This section describes some configuration examples for the consumer.

### Receive messages

This example shows how a consumer receives messages from a topic.

```c#

await foreach (var message in consumer.Messages())
{
    Console.WriteLine("Received: " + Encoding.UTF8.GetString(message.Data.ToArray()));
}

```

### Acknowledge messages

Messages can be acknowledged individually or cumulatively. For details about message acknowledgement, see [acknowledgement](concepts-messaging.md#acknowledgement).

- Acknowledge messages individually.

  ```c#

  await consumer.Acknowledge(message);

  ```

- Acknowledge messages cumulatively.

  ```c#

  await consumer.AcknowledgeCumulative(message);

  ```

### Unsubscribe from topics

This example shows how a consumer unsubscribes from a topic.

```c#

await consumer.Unsubscribe();

```

#### Note

> Once a consumer unsubscribes from a topic, it is disposed and can no longer be used.

## Reader

A reader is actually just a consumer without a cursor. This means that Pulsar does not keep track of your progress and there is no need to acknowledge messages.

This example shows how a reader receives messages.

```c#

await foreach (var message in reader.Messages())
{
    Console.WriteLine("Received: " + Encoding.UTF8.GetString(message.Data.ToArray()));
}

```

## Monitoring

This section describes how to monitor the producer, consumer, and reader state.

### Monitor producer state

The following table lists states available for the producer.

| State | Description |
| ---- | ---- |
| Closed | The producer or the Pulsar client has been disposed. |
| Connected | All is well. |
| Disconnected | The connection is lost and attempts are being made to reconnect. |
| Faulted | An unrecoverable error has occurred. |
| PartiallyConnected | Some of the sub-producers are disconnected. |

This example shows how to monitor the producer state.

```c#

private static async ValueTask Monitor(IProducer producer, CancellationToken cancellationToken)
{
    var state = ProducerState.Disconnected;

    while (!cancellationToken.IsCancellationRequested)
    {
        state = (await producer.StateChangedFrom(state, cancellationToken)).ProducerState;

        var stateMessage = state switch
        {
            ProducerState.Connected => "The producer is connected",
            ProducerState.Disconnected => "The producer is disconnected",
            ProducerState.Closed => "The producer has closed",
            ProducerState.Faulted => "The producer has faulted",
            ProducerState.PartiallyConnected => "The producer is partially connected",
            _ => $"The producer has an unknown state '{state}'"
        };

        Console.WriteLine(stateMessage);

        if (producer.IsFinalState(state))
            return;
    }
}

```

### Monitor consumer state

The following table lists states available for the consumer.

| State | Description |
| ---- | ---- |
| Active | All is well. |
| Inactive | All is well. The subscription type is `Failover` and you are not the active consumer. |
| Closed | The consumer or the Pulsar client has been disposed. |
| Disconnected | The connection is lost and attempts are being made to reconnect. |
| Faulted | An unrecoverable error has occurred. |
| ReachedEndOfTopic | No more messages are delivered. |
| Unsubscribed | The consumer has unsubscribed. |

This example shows how to monitor the consumer state.

```c#

private static async ValueTask Monitor(IConsumer consumer, CancellationToken cancellationToken)
{
    var state = ConsumerState.Disconnected;

    while (!cancellationToken.IsCancellationRequested)
    {
        state = (await consumer.StateChangedFrom(state, cancellationToken)).ConsumerState;

        var stateMessage = state switch
        {
            ConsumerState.Active => "The consumer is active",
            ConsumerState.Inactive => "The consumer is inactive",
            ConsumerState.Disconnected => "The consumer is disconnected",
            ConsumerState.Closed => "The consumer has closed",
            ConsumerState.ReachedEndOfTopic => "The consumer has reached end of topic",
            ConsumerState.Faulted => "The consumer has faulted",
            ConsumerState.Unsubscribed => "The consumer is unsubscribed",
            _ => $"The consumer has an unknown state '{state}'"
        };

        Console.WriteLine(stateMessage);

        if (consumer.IsFinalState(state))
            return;
    }
}

```

### Monitor reader state

The following table lists states available for the reader.

| State | Description |
| ---- | ---- |
| Closed | The reader or the Pulsar client has been disposed. |
| Connected | All is well. |
| Disconnected | The connection is lost and attempts are being made to reconnect. |
| Faulted | An unrecoverable error has occurred. |
| ReachedEndOfTopic | No more messages are delivered. |

This example shows how to monitor the reader state.

```c#

private static async ValueTask Monitor(IReader reader, CancellationToken cancellationToken)
{
    var state = ReaderState.Disconnected;

    while (!cancellationToken.IsCancellationRequested)
    {
        state = (await reader.StateChangedFrom(state, cancellationToken)).ReaderState;

        var stateMessage = state switch
        {
            ReaderState.Connected => "The reader is connected",
            ReaderState.Disconnected => "The reader is disconnected",
            ReaderState.Closed => "The reader has closed",
            ReaderState.ReachedEndOfTopic => "The reader has reached end of topic",
            ReaderState.Faulted => "The reader has faulted",
            _ => $"The reader has an unknown state '{state}'"
        };

        Console.WriteLine(stateMessage);

        if (reader.IsFinalState(state))
            return;
    }
}

```

diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/client-libraries-go.md b/site2/website/versioned_docs/version-2.10.0-deprecated/client-libraries-go.md
deleted file mode 100644
index aa36fa786ac5e9..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/client-libraries-go.md
+++ /dev/null
@@ -1,1064 +0,0 @@
---
id: client-libraries-go
title: Pulsar Go client
sidebar_label: "Go"
original_id: client-libraries-go
---

> Tip: The CGo client has been deprecated since version 2.7.0.

You can use the Pulsar [Go client](https://github.com/apache/pulsar-client-go) to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Go (aka Golang).

> **API docs available as well**
> For standard API docs, consult the [Godoc](https://godoc.org/github.com/apache/pulsar-client-go/pulsar).


## Installation

### Install go package

You can install the `pulsar` library using `go get`, or manage it as a dependency with Go modules.

Download the Go client library to your local environment:

```bash

$ go get -u "github.com/apache/pulsar-client-go/pulsar"

```

Once installed locally, you can import it into your project:

```go

import "github.com/apache/pulsar-client-go/pulsar"

```

To use it with Go modules:

```bash

$ mkdir test_dir && cd test_dir

```

Write a sample script in the `test_dir` directory (such as `test_example.go`) and write `package main` at the beginning of the file.

```bash

$ go mod init test_dir
$ go mod tidy && go mod download
$ go build test_example.go
$ ./test_example

```

## Connection URLs

To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL.

Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme and have a default port of 6650. Here's an example for `localhost`:

```http

pulsar://localhost:6650

```

If you have multiple brokers, you can set the URL as below.

```

pulsar://localhost:6650,localhost:6651,localhost:6652

```

A URL for a production Pulsar cluster may look something like this:

```http

pulsar://pulsar.us-west.example.com:6650

```

If you're using [TLS](security-tls-authentication.md) authentication, the URL will look something like this:

```http

pulsar+ssl://pulsar.us-west.example.com:6651

```

## Create a client

In order to interact with Pulsar, you'll first need a `Client` object. You can create a client object using the `NewClient` function, passing in a `ClientOptions` object (more on configuration [below](#client-configuration)).
Here's an example: - -```go - -import ( - "log" - "time" - - "github.com/apache/pulsar-client-go/pulsar" -) - -func main() { - client, err := pulsar.NewClient(pulsar.ClientOptions{ - URL: "pulsar://localhost:6650", - OperationTimeout: 30 * time.Second, - ConnectionTimeout: 30 * time.Second, - }) - if err != nil { - log.Fatalf("Could not instantiate Pulsar client: %v", err) - } - - defer client.Close() -} - -``` - -If you have multiple brokers, you can initiate a client object as below. - -```go - -import ( - "log" - "time" - "github.com/apache/pulsar-client-go/pulsar" -) - -func main() { - client, err := pulsar.NewClient(pulsar.ClientOptions{ - URL: "pulsar://localhost:6650,localhost:6651,localhost:6652", - OperationTimeout: 30 * time.Second, - ConnectionTimeout: 30 * time.Second, - }) - if err != nil { - log.Fatalf("Could not instantiate Pulsar client: %v", err) - } - - defer client.Close() -} - -``` - -The following configurable parameters are available for Pulsar clients: - - Name | Description | Default -| :-------- | :---------- |:---------- | -| URL | Configure the service URL for the Pulsar service.
If you have multiple brokers, you can set multiple Pulsar cluster addresses for a client. This parameter is **required**. | None |
| ConnectionTimeout | Timeout for the establishment of a TCP connection | 30s |
| OperationTimeout | Set the operation timeout. Producer-create, subscribe and unsubscribe operations are retried until this interval elapses, after which the operation is marked as failed | 30s |
| Authentication | Configure the authentication provider. Example: `Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem")` | no authentication |
| TLSTrustCertsFilePath | Set the path to the trusted TLS certificate file | |
| TLSAllowInsecureConnection | Configure whether the Pulsar client accepts untrusted TLS certificates from the broker | false |
| TLSValidateHostname | Configure whether the Pulsar client verifies the validity of the host name from the broker | false |
| ListenerName | Configure the net model for VPC users to connect to the Pulsar broker | |
| MaxConnectionsPerBroker | Max number of connections to a single broker that is kept in the pool | 1 |
| CustomMetricsLabels | Add custom labels to all the metrics reported by this client instance | |
| Logger | Configure the logger used by the client | logrus.StandardLogger |

## Producers

Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Go producers using a `ProducerOptions` object. Here's an example:

```go

producer, err := client.CreateProducer(pulsar.ProducerOptions{
    Topic: "my-topic",
})

if err != nil {
    log.Fatal(err)
}

_, err = producer.Send(context.Background(), &pulsar.ProducerMessage{
    Payload: []byte("hello"),
})

defer producer.Close()

if err != nil {
    fmt.Println("Failed to publish message", err)
}
fmt.Println("Published message")

```

### Producer operations

Pulsar Go producers have the following methods available:

Method | Description | Return type
:------|:------------|:-----------
`Topic()` | Fetches the producer's [topic](reference-terminology.md#topic) | `string`
`Name()` | Fetches the producer's name | `string`
`Send(context.Context, *ProducerMessage)` | Publishes a [message](#messages) to the producer's topic. This call blocks until the message is successfully acknowledged by the Pulsar broker, or an error is returned if the timeout set using `SendTimeout` in the producer's [configuration](#producer-configuration) is exceeded. | (MessageID, error)
`SendAsync(context.Context, *ProducerMessage, func(MessageID, *ProducerMessage, error))` | Publishes a message asynchronously. The callback is invoked once the message is acknowledged by the Pulsar broker or an error occurs. |
`LastSequenceID()` | Get the last sequence id that was published by this producer. This represents either the automatically assigned or custom sequence id (set on the ProducerMessage) that was published and acknowledged by the broker. | int64
`Flush()` | Flush all the messages buffered in the client and wait until all messages have been successfully persisted. | error
`Close()` | Closes the producer and releases all resources allocated to it. Once `Close()` is called, no more messages are accepted from the publisher. This method blocks until all pending publish requests have been persisted by Pulsar. If an error occurs, no pending writes are retried.
| - -### Producer Example - -#### How to use message router in producer - -```go - -client, err := NewClient(pulsar.ClientOptions{ - URL: serviceURL, -}) - -if err != nil { - log.Fatal(err) -} -defer client.Close() - -// Only subscribe on the specific partition -consumer, err := client.Subscribe(pulsar.ConsumerOptions{ - Topic: "my-partitioned-topic-partition-2", - SubscriptionName: "my-sub", -}) - -if err != nil { - log.Fatal(err) -} -defer consumer.Close() - -producer, err := client.CreateProducer(pulsar.ProducerOptions{ - Topic: "my-partitioned-topic", - MessageRouter: func(msg *ProducerMessage, tm TopicMetadata) int { - fmt.Println("Routing message ", msg, " -- Partitions: ", tm.NumPartitions()) - return 2 - }, -}) - -if err != nil { - log.Fatal(err) -} -defer producer.Close() - -``` - -#### How to use schema interface in producer - -```go - -type testJSON struct { - ID int `json:"id"` - Name string `json:"name"` -} - -``` - -```go - -var ( - exampleSchemaDef = "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," + - "\"fields\":[{\"name\":\"ID\",\"type\":\"int\"},{\"name\":\"Name\",\"type\":\"string\"}]}" -) - -``` - -```go - -client, err := NewClient(pulsar.ClientOptions{ - URL: "pulsar://localhost:6650", -}) -if err != nil { - log.Fatal(err) -} -defer client.Close() - -properties := make(map[string]string) -properties["pulsar"] = "hello" -jsonSchemaWithProperties := NewJSONSchema(exampleSchemaDef, properties) -producer, err := client.CreateProducer(ProducerOptions{ - Topic: "jsonTopic", - Schema: jsonSchemaWithProperties, -}) -assert.Nil(t, err) - -_, err = producer.Send(context.Background(), &ProducerMessage{ - Value: &testJSON{ - ID: 100, - Name: "pulsar", - }, -}) -if err != nil { - log.Fatal(err) -} -producer.Close() - -``` - -#### How to use delay relative in producer - -```go - -client, err := NewClient(pulsar.ClientOptions{ - URL: "pulsar://localhost:6650", -}) -if err != nil { - log.Fatal(err) -} -defer client.Close() - -topicName := newTopicName() -producer, err := client.CreateProducer(pulsar.ProducerOptions{ - Topic: topicName, - DisableBatching: true, -}) -if err != nil { - log.Fatal(err) -} -defer producer.Close() - -consumer, err := client.Subscribe(pulsar.ConsumerOptions{ - Topic: topicName, - SubscriptionName: "subName", - Type: Shared, -}) -if err != nil { - log.Fatal(err) -} -defer consumer.Close() - -ID, err := producer.Send(context.Background(), &pulsar.ProducerMessage{ - Payload: []byte(fmt.Sprintf("test")), - DeliverAfter: 3 * time.Second, -}) -if err != nil { - log.Fatal(err) -} -fmt.Println(ID) - -ctx, canc := context.WithTimeout(context.Background(), 1*time.Second) -msg, err := consumer.Receive(ctx) -if err != nil { - log.Fatal(err) -} -fmt.Println(msg.Payload()) -canc() - -ctx, canc = context.WithTimeout(context.Background(), 5*time.Second) -msg, err = consumer.Receive(ctx) -if err != nil { - log.Fatal(err) -} -fmt.Println(msg.Payload()) -canc() - -``` - -#### How to use Prometheus metrics in producer - -Pulsar Go client registers client metrics using Prometheus. This section demonstrates how to create a simple Pulsar producer application that exposes Prometheus metrics via HTTP. - -1. Write a simple producer application. 

```go

// Create a Pulsar client
client, err := pulsar.NewClient(pulsar.ClientOptions{
    URL: "pulsar://localhost:6650",
})
if err != nil {
    log.Fatal(err)
}

defer client.Close()

// Start a separate goroutine for Prometheus metrics
// In this case, Prometheus metrics can be accessed via http://localhost:2112/metrics
go func() {
    prometheusPort := 2112
    log.Printf("Starting Prometheus metrics at http://localhost:%v/metrics\n", prometheusPort)
    http.Handle("/metrics", promhttp.Handler())
    err = http.ListenAndServe(":"+strconv.Itoa(prometheusPort), nil)
    if err != nil {
        log.Fatal(err)
    }
}()

// Create a producer
producer, err := client.CreateProducer(pulsar.ProducerOptions{
    Topic: "topic-1",
})
if err != nil {
    log.Fatal(err)
}

defer producer.Close()

ctx := context.Background()

// Write your business logic here
// In this case, you build a simple Web server. You can produce messages by requesting http://localhost:8082/produce
webPort := 8082
http.HandleFunc("/produce", func(w http.ResponseWriter, r *http.Request) {
    msgId, err := producer.Send(ctx, &pulsar.ProducerMessage{
        Payload: []byte("hello world"),
    })
    if err != nil {
        log.Fatal(err)
    } else {
        log.Printf("Published message: %v", msgId)
        fmt.Fprintf(w, "Published message: %v", msgId)
    }
})

err = http.ListenAndServe(":"+strconv.Itoa(webPort), nil)
if err != nil {
    log.Fatal(err)
}

```

2. To scrape metrics from applications, configure a local running Prometheus instance using a configuration file (`prometheus.yml`).

```yaml

scrape_configs:
- job_name: pulsar-client-go-metrics
  scrape_interval: 10s
  static_configs:
  - targets:
    - localhost:2112

```

Now you can query Pulsar client metrics on Prometheus.

### Producer configuration

 Name | Description | Default
| :-------- | :---------- |:---------- |
| Topic | Topic specifies the topic this producer will publish on. This argument is required when constructing the producer. | |
| Name | Name specifies a name for the producer. If not assigned, the system generates a globally unique name which can be accessed with `Producer.Name()`. | |
| Properties | Properties attaches a set of application-defined properties to the producer. These properties are visible in the topic stats. | |
| SendTimeout | SendTimeout sets the timeout for a message that is not acknowledged by the server | 30s |
| DisableBlockIfQueueFull | DisableBlockIfQueueFull controls whether Send and SendAsync block if the producer's message queue is full | false |
| MaxPendingMessages | MaxPendingMessages sets the max size of the queue holding the messages pending to receive an acknowledgment from the broker. | |
| HashingScheme | HashingScheme changes the `HashingScheme` used to choose the partition on which to publish a particular message. | JavaStringHash |
| CompressionType | CompressionType sets the compression type for the producer. | not compressed |
| CompressionLevel | Define the desired compression level. Options: Default, Faster and Better | Default |
| MessageRouter | MessageRouter sets a custom message routing policy by passing an implementation of MessageRouter | |
| DisableBatching | DisableBatching controls whether automatic batching of messages is enabled for the producer. | false |
| BatchingMaxPublishDelay | BatchingMaxPublishDelay sets the time period within which the messages sent will be batched | 1ms |
| BatchingMaxMessages | BatchingMaxMessages sets the maximum number of messages permitted in a batch. | 1000 |
| BatchingMaxSize | BatchingMaxSize sets the maximum number of bytes permitted in a batch. | 128KB |
| Schema | Schema sets a custom schema type by passing an implementation of `Schema` | bytes[] |
| Interceptors | A chain of interceptors. These interceptors are called at some points defined in the `ProducerInterceptor` interface. | None |
| MaxReconnectToBroker | MaxReconnectToBroker sets the maximum retry number of reconnectToBroker | ultimate |
| BatcherBuilderType | BatcherBuilderType sets the batch builder type. This is used to create a batch container when batching is enabled. Options: DefaultBatchBuilder and KeyBasedBatchBuilder | DefaultBatchBuilder |
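
As a rough sketch of how a few of these options combine in practice — the topic name and tuning values below are illustrative only, and `client` is assumed to be an existing `pulsar.Client` like the ones created earlier:

```go

import (
    "time"

    "github.com/apache/pulsar-client-go/pulsar"
)

func createTunedProducer(client pulsar.Client) (pulsar.Producer, error) {
    return client.CreateProducer(pulsar.ProducerOptions{
        Topic:                   "my-topic",
        Name:                    "my-producer",
        SendTimeout:             10 * time.Second,      // fail sends not acked within 10s
        CompressionType:         pulsar.LZ4,            // compress message payloads
        DisableBatching:         false,                 // keep automatic batching on
        BatchingMaxPublishDelay: 10 * time.Millisecond, // flush a batch at least every 10ms
        BatchingMaxMessages:     500,                   // or once 500 messages accumulate
    })
}

```

With batching left on, `BatchingMaxPublishDelay` and `BatchingMaxMessages` bound the latency and the size of each batch, whichever limit is hit first.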

## Consumers

Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on that topic (or those topics). You can [configure](#consumer-configuration) Go consumers using a `ConsumerOptions` object. Here's a basic example that uses channels:

```go

consumer, err := client.Subscribe(pulsar.ConsumerOptions{
    Topic:            "topic-1",
    SubscriptionName: "my-sub",
    Type:             pulsar.Shared,
})
if err != nil {
    log.Fatal(err)
}
defer consumer.Close()

for i := 0; i < 10; i++ {
    msg, err := consumer.Receive(context.Background())
    if err != nil {
        log.Fatal(err)
    }

    fmt.Printf("Received message msgId: %#v -- content: '%s'\n",
        msg.ID(), string(msg.Payload()))

    consumer.Ack(msg)
}

if err := consumer.Unsubscribe(); err != nil {
    log.Fatal(err)
}

```

### Consumer operations

Pulsar Go consumers have the following methods available:

Method | Description | Return type
:------|:------------|:-----------
`Subscription()` | Returns the consumer's subscription name | `string`
`Unsubscribe()` | Unsubscribes the consumer from the assigned topic. Returns an error if the unsubscribe operation is somehow unsuccessful. | `error`
`Receive(context.Context)` | Receives a single message from the topic. This method blocks until a message is available. | `(Message, error)`
`Chan()` | Chan returns a channel from which to consume messages. | `<-chan ConsumerMessage`
`Ack(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) |
`AckID(MessageID)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID |
`ReconsumeLater(msg Message, delay time.Duration)` | ReconsumeLater marks a message for redelivery after a custom delay |
`Nack(Message)` | Acknowledges the failure to process a single message. |
`NackID(MessageID)` | Acknowledges the failure to process a single message by message ID. |
`Seek(msgID MessageID)` | Reset the subscription associated with this consumer to a specific message id. The message id can either be a specific message or represent the first or last messages in the topic. | `error`
`SeekByTime(time time.Time)` | Reset the subscription associated with this consumer to a specific message publish time.
| `error` -`Close()` | Closes the consumer, disabling its ability to receive messages from the broker | -`Name()` | Name returns the name of consumer | `string` - -### Receive example - -#### How to use regex consumer - -```go - -client, err := pulsar.NewClient(pulsar.ClientOptions{ - URL: "pulsar://localhost:6650", -}) - -defer client.Close() - -p, err := client.CreateProducer(pulsar.ProducerOptions{ - Topic: topicInRegex, - DisableBatching: true, -}) -if err != nil { - log.Fatal(err) -} -defer p.Close() - -topicsPattern := fmt.Sprintf("persistent://%s/foo.*", namespace) -opts := pulsar.ConsumerOptions{ - TopicsPattern: topicsPattern, - SubscriptionName: "regex-sub", -} -consumer, err := client.Subscribe(opts) -if err != nil { - log.Fatal(err) -} -defer consumer.Close() - -``` - -#### How to use multi topics Consumer - -```go - -func newTopicName() string { - return fmt.Sprintf("my-topic-%v", time.Now().Nanosecond()) -} - - -topic1 := "topic-1" -topic2 := "topic-2" - -client, err := NewClient(pulsar.ClientOptions{ - URL: "pulsar://localhost:6650", -}) -if err != nil { - log.Fatal(err) -} -topics := []string{topic1, topic2} -consumer, err := client.Subscribe(pulsar.ConsumerOptions{ - Topics: topics, - SubscriptionName: "multi-topic-sub", -}) -if err != nil { - log.Fatal(err) -} -defer consumer.Close() - -``` - -#### How to use consumer listener - -```go - -import ( - "fmt" - "log" - - "github.com/apache/pulsar-client-go/pulsar" -) - -func main() { - client, err := pulsar.NewClient(pulsar.ClientOptions{URL: "pulsar://localhost:6650"}) - if err != nil { - log.Fatal(err) - } - - defer client.Close() - - channel := make(chan pulsar.ConsumerMessage, 100) - - options := pulsar.ConsumerOptions{ - Topic: "topic-1", - SubscriptionName: "my-subscription", - Type: pulsar.Shared, - } - - options.MessageChannel = channel - - consumer, err := client.Subscribe(options) - if err != nil { - log.Fatal(err) - } - - defer consumer.Close() - - // Receive messages from channel. The channel returns a struct which contains message and the consumer from where - // the message was received. 
    // It's not necessary here since we have one single consumer, but the channel could be
    // shared across multiple consumers as well
    for cm := range channel {
        msg := cm.Message
        fmt.Printf("Received message msgId: %v -- content: '%s'\n",
            msg.ID(), string(msg.Payload()))

        consumer.Ack(msg)
    }
}

```

#### How to use consumer receive timeout

```go

client, err := pulsar.NewClient(pulsar.ClientOptions{
    URL: "pulsar://localhost:6650",
})
if err != nil {
    log.Fatal(err)
}
defer client.Close()

topic := "test-topic-with-no-messages"
ctx, cancel := context.WithTimeout(context.Background(), 500*time.Millisecond)
defer cancel()

// create consumer
consumer, err := client.Subscribe(pulsar.ConsumerOptions{
    Topic:            topic,
    SubscriptionName: "my-sub1",
    Type:             pulsar.Shared,
})
if err != nil {
    log.Fatal(err)
}
defer consumer.Close()

msg, err := consumer.Receive(ctx)
if err != nil {
    log.Fatal(err)
}
fmt.Println(msg.Payload())

```

#### How to use schema in consumer

```go

type testJSON struct {
    ID   int    `json:"id"`
    Name string `json:"name"`
}

```

```go

var (
    exampleSchemaDef = "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," +
        "\"fields\":[{\"name\":\"ID\",\"type\":\"int\"},{\"name\":\"Name\",\"type\":\"string\"}]}"
)

```

```go

client, err := pulsar.NewClient(pulsar.ClientOptions{
    URL: "pulsar://localhost:6650",
})
if err != nil {
    log.Fatal(err)
}
defer client.Close()

var s testJSON

consumerJS := pulsar.NewJSONSchema(exampleSchemaDef, nil)
consumer, err := client.Subscribe(pulsar.ConsumerOptions{
    Topic:                       "jsonTopic",
    SubscriptionName:            "sub-1",
    Schema:                      consumerJS,
    SubscriptionInitialPosition: pulsar.SubscriptionPositionEarliest,
})
if err != nil {
    log.Fatal(err)
}
defer consumer.Close()

msg, err := consumer.Receive(context.Background())
if err != nil {
    log.Fatal(err)
}
err = msg.GetSchemaValue(&s)
if err != nil {
    log.Fatal(err)
}

```

#### How to use Prometheus metrics in consumer

This section demonstrates how to create a simple Pulsar consumer application that exposes Prometheus metrics via HTTP.

1. Write a simple consumer application.

```go

// Create a Pulsar client
client, err := pulsar.NewClient(pulsar.ClientOptions{
    URL: "pulsar://localhost:6650",
})
if err != nil {
    log.Fatal(err)
}

defer client.Close()

// Start a separate goroutine for Prometheus metrics
// In this case, Prometheus metrics can be accessed via http://localhost:2112/metrics
go func() {
    prometheusPort := 2112
    log.Printf("Starting Prometheus metrics at http://localhost:%v/metrics\n", prometheusPort)
    http.Handle("/metrics", promhttp.Handler())
    err = http.ListenAndServe(":"+strconv.Itoa(prometheusPort), nil)
    if err != nil {
        log.Fatal(err)
    }
}()

// Create a consumer
consumer, err := client.Subscribe(pulsar.ConsumerOptions{
    Topic:            "topic-1",
    SubscriptionName: "sub-1",
    Type:             pulsar.Shared,
})
if err != nil {
    log.Fatal(err)
}

defer consumer.Close()

ctx := context.Background()

// Write your business logic here
// In this case, you build a simple Web server. You can consume messages by requesting http://localhost:8083/consume
webPort := 8083
http.HandleFunc("/consume", func(w http.ResponseWriter, r *http.Request) {
    msg, err := consumer.Receive(ctx)
    if err != nil {
        log.Fatal(err)
    } else {
        log.Printf("Received message msgId: %v -- content: '%s'\n", msg.ID(), string(msg.Payload()))
        fmt.Fprintf(w, "Received message msgId: %v -- content: '%s'\n", msg.ID(), string(msg.Payload()))
        consumer.Ack(msg)
    }
})

err = http.ListenAndServe(":"+strconv.Itoa(webPort), nil)
if err != nil {
    log.Fatal(err)
}

```

2. To scrape metrics from applications, configure a local running Prometheus instance using a configuration file (`prometheus.yml`).

```yaml

scrape_configs:
- job_name: pulsar-client-go-metrics
  scrape_interval: 10s
  static_configs:
  - targets:
    - localhost:2112

```

Now you can query Pulsar client metrics on Prometheus.

### Consumer configuration

 Name | Description | Default
| :-------- | :---------- |:---------- |
| Topic | Topic specifies the topic this consumer will subscribe to. Either a topic, a list of topics, or a topics pattern is required when subscribing. | |
| Topics | Specify a list of topics this consumer will subscribe to. Either a topic, a list of topics, or a topics pattern is required when subscribing. | |
| TopicsPattern | Specify a regular expression to subscribe to multiple topics under the same namespace. Either a topic, a list of topics, or a topics pattern is required when subscribing. | |
| AutoDiscoveryPeriod | Specify the interval in which to poll for new partitions or new topics if using a TopicsPattern. | |
| SubscriptionName | Specify the subscription name for this consumer. This argument is required when subscribing. | |
| Name | Set the consumer name | |
| Properties | Properties attaches a set of application-defined properties to the consumer. These properties are visible in the topic stats. | |
| Type | Select the subscription type to be used when subscribing to the topic. | Exclusive |
| SubscriptionInitialPosition | InitialPosition at which the cursor will be set when subscribing | Latest |
| DLQ | Configuration for the Dead Letter Queue consumer policy. | no DLQ |
| MessageChannel | Sets a `MessageChannel` for the consumer. When a message is received, it is pushed to the channel for consumption | |
| ReceiverQueueSize | Sets the size of the consumer receive queue. | 1000 |
| NackRedeliveryDelay | The delay after which to redeliver the messages that failed to be processed | 1min |
| ReadCompacted | If enabled, the consumer reads messages from the compacted topic rather than reading the full message backlog of the topic | false |
| ReplicateSubscriptionState | Mark the subscription as replicated to keep it in sync across clusters | false |
| KeySharedPolicy | Configuration for the Key Shared consumer policy. | |
| RetryEnable | Auto retry sending messages to the default filled DLQPolicy topics | false |
| Interceptors | A chain of interceptors. These interceptors are called at some points defined in the `ConsumerInterceptor` interface. | |
| MaxReconnectToBroker | MaxReconnectToBroker sets the maximum retry number of reconnectToBroker. | ultimate |
| Schema | Schema sets a custom schema type by passing an implementation of `Schema` | bytes[] |
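
As a sketch of how several of these options fit together — the topic name, subscription name, and tuning values are placeholders, and `client` is assumed to be an existing `pulsar.Client` — the following subscribes with a dead letter policy, a custom negative-ack redelivery delay, and a smaller receive queue:

```go

import (
    "time"

    "github.com/apache/pulsar-client-go/pulsar"
)

func subscribeWithPolicies(client pulsar.Client) (pulsar.Consumer, error) {
    // After 3 failed deliveries, a message is routed to the dead letter topic.
    dlq := &pulsar.DLQPolicy{
        MaxDeliveries:   3,
        DeadLetterTopic: "my-topic-dlq",
    }

    return client.Subscribe(pulsar.ConsumerOptions{
        Topic:               "my-topic",
        SubscriptionName:    "my-sub",
        Type:                pulsar.Shared,
        ReceiverQueueSize:   500,              // smaller prefetch queue
        NackRedeliveryDelay: 10 * time.Second, // wait before redelivering nacked messages
        DLQ:                 dlq,
    })
}

```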

## Readers

Pulsar readers process messages from Pulsar topics. Readers are different from consumers: with readers, you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recent unacked message). You can [configure](#reader-configuration) Go readers using a `ReaderOptions` object. Here's an example:

```go

reader, err := client.CreateReader(pulsar.ReaderOptions{
    Topic:          "topic-1",
    StartMessageID: pulsar.EarliestMessageID(),
})
if err != nil {
    log.Fatal(err)
}
defer reader.Close()

```

### Reader operations

Pulsar Go readers have the following methods available:

Method | Description | Return type
:------|:------------|:-----------
`Topic()` | Returns the reader's [topic](reference-terminology.md#topic) | `string`
`Next(context.Context)` | Receives the next message on the topic (analogous to the `Receive` method for [consumers](#consumer-operations)). This method blocks until a message is available. | `(Message, error)`
`HasNext()` | Check if there is any message available to read from the current position | (bool, error)
`Close()` | Closes the reader, disabling its ability to receive messages from the broker | `error`
`Seek(MessageID)` | Reset the subscription associated with this reader to a specific message ID | `error`
`SeekByTime(time time.Time)` | Reset the subscription associated with this reader to a specific message publish time | `error`
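
For example, a sketch of rewinding the `reader` created above, either to a saved position or to a point in time; `savedID` is a hypothetical `pulsar.MessageID` obtained elsewhere (for example via `msg.ID()` on an earlier message), and the one-hour window is arbitrary:

```go

// Rewind to a previously saved position (savedID is a placeholder).
if err := reader.Seek(savedID); err != nil {
    log.Fatal(err)
}

// Or rewind to everything published in the last hour.
if err := reader.SeekByTime(time.Now().Add(-time.Hour)); err != nil {
    log.Fatal(err)
}

```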

### Reader example

#### How to use reader to read 'next' message

Here's an example usage of a Go reader that uses the `Next()` method to process incoming messages:

```go

import (
    "context"
    "fmt"
    "log"

    "github.com/apache/pulsar-client-go/pulsar"
)

func main() {
    client, err := pulsar.NewClient(pulsar.ClientOptions{URL: "pulsar://localhost:6650"})
    if err != nil {
        log.Fatal(err)
    }

    defer client.Close()

    reader, err := client.CreateReader(pulsar.ReaderOptions{
        Topic:          "topic-1",
        StartMessageID: pulsar.EarliestMessageID(),
    })
    if err != nil {
        log.Fatal(err)
    }
    defer reader.Close()

    for reader.HasNext() {
        msg, err := reader.Next(context.Background())
        if err != nil {
            log.Fatal(err)
        }

        fmt.Printf("Received message msgId: %#v -- content: '%s'\n",
            msg.ID(), string(msg.Payload()))
    }
}

```

In the example above, the reader begins reading from the earliest available message (specified by `pulsar.EarliestMessageID()`). The reader can also begin reading from the latest message (`pulsar.LatestMessageID()`) or some other message ID specified by bytes using the `DeserializeMessageID` function, which takes a byte array and returns a `MessageID` object. Here's an example:

```go

// Read the last saved message ID from an external store as a byte slice
var lastSavedId []byte

reader, err := client.CreateReader(pulsar.ReaderOptions{
    Topic:          "my-golang-topic",
    StartMessageID: pulsar.DeserializeMessageID(lastSavedId),
})

```

#### How to use reader to read specific message

```go

client, err := pulsar.NewClient(pulsar.ClientOptions{
    URL: lookupURL,
})
if err != nil {
    log.Fatal(err)
}
defer client.Close()

topic := "topic-1"
ctx := context.Background()

// create producer
producer, err := client.CreateProducer(pulsar.ProducerOptions{
    Topic:           topic,
    DisableBatching: true,
})
if err != nil {
    log.Fatal(err)
}
defer producer.Close()

// send 10 messages
msgIDs := [10]pulsar.MessageID{}
for i := 0; i < 10; i++ {
    msgID, err := producer.Send(ctx, &pulsar.ProducerMessage{
        Payload: []byte(fmt.Sprintf("hello-%d", i)),
    })
    if err != nil {
        log.Fatal(err)
    }
    msgIDs[i] = msgID
}

// create reader on 5th message (not included)
reader, err := client.CreateReader(pulsar.ReaderOptions{
    Topic:          topic,
    StartMessageID: msgIDs[4],
})
if err != nil {
    log.Fatal(err)
}
defer reader.Close()

// receive the remaining 5 messages
for i := 5; i < 10; i++ {
    msg, err := reader.Next(context.Background())
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("Read message: '%s'\n", string(msg.Payload()))
}

// create reader on 5th message (included)
readerInclusive, err := client.CreateReader(pulsar.ReaderOptions{
    Topic:                   topic,
    StartMessageID:          msgIDs[4],
    StartMessageIDInclusive: true,
})
if err != nil {
    log.Fatal(err)
}
defer readerInclusive.Close()

```

### Reader configuration

 Name | Description | Default
| :-------- | :---------- |:---------- |
| Topic | Topic specifies the topic this reader will read from. This argument is required when constructing the reader. | |
| Name | Name sets the reader name. | |
| Properties | Attach a set of application-defined properties to the reader. These properties are visible in the topic stats. | |
| StartMessageID | StartMessageID sets the initial reader position, specified by a message ID. | |
| StartMessageIDInclusive | If true, the reader starts at the `StartMessageID`, included. Default is `false`, and the reader starts from the "next" message | false |
| MessageChannel | MessageChannel sets a `MessageChannel` for the reader. When a message is received, it is pushed to the channel for consumption | |
| ReceiverQueueSize | ReceiverQueueSize sets the size of the reader's receive queue. | 1000 |
| SubscriptionRolePrefix | SubscriptionRolePrefix sets the subscription role prefix. | "reader" |
| ReadCompacted | If enabled, the reader reads messages from the compacted topic rather than reading the full message backlog of the topic. ReadCompacted can only be enabled when reading from a persistent topic. | false |
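
For instance, a sketch of a reader over a compacted topic — the topic name is a placeholder, and `client` is assumed to be an existing `pulsar.Client`:

```go

// Read only the latest value per key from a compacted topic.
reader, err := client.CreateReader(pulsar.ReaderOptions{
    Topic:          "persistent://public/default/my-compacted-topic",
    StartMessageID: pulsar.EarliestMessageID(),
    ReadCompacted:  true, // only valid when reading from a persistent topic
})
if err != nil {
    log.Fatal(err)
}
defer reader.Close()

```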

## Messages

The Pulsar Go client provides a `ProducerMessage` interface that you can use to construct messages to produce on Pulsar topics. Here's an example message:

```go

msg := pulsar.ProducerMessage{
    Payload: []byte("Here is some message data"),
    Key:     "message-key",
    Properties: map[string]string{
        "foo": "bar",
    },
    EventTime:           time.Now(),
    ReplicationClusters: []string{"cluster1", "cluster3"},
}

if _, err := producer.Send(context.Background(), &msg); err != nil {
    log.Fatalf("Could not publish message due to: %v", err)
}

```

The following parameters are available for `ProducerMessage` objects:

Parameter | Description
:---------|:-----------
`Payload` | The actual data payload of the message
`Value` | Value and payload are mutually exclusive; `Value interface{}` is for schema messages.
`Key` | The optional key associated with the message (particularly useful for things like topic compaction)
`OrderingKey` | OrderingKey sets the ordering key of the message.
`Properties` | A key-value map (both keys and values must be strings) for any application-specific metadata attached to the message
`EventTime` | The timestamp associated with the message
`ReplicationClusters` | The clusters to which this message will be replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default.
`SequenceID` | Set the sequence id to assign to the current message
`DeliverAfter` | Request to deliver the message only after the specified relative delay
`DeliverAt` | Deliver the message only at or after the specified absolute timestamp

## TLS encryption and authentication

In order to use [TLS encryption](security-tls-transport.md), you'll need to configure your client to do so:

 * Use the `pulsar+ssl` URL type
 * Set `TLSTrustCertsFilePath` to the path to the TLS certs used by your client and the Pulsar broker
 * Configure the `Authentication` option

Here's an example:

```go

opts := pulsar.ClientOptions{
    URL:                   "pulsar+ssl://my-cluster.com:6651",
    TLSTrustCertsFilePath: "/path/to/certs/my-cert.csr",
    Authentication:        pulsar.NewAuthenticationTLS("my-cert.pem", "my-key.pem"),
}

```

## OAuth2 authentication

To use [OAuth2 authentication](security-oauth2.md), you'll need to configure your client to perform the following operations. This example shows how to configure OAuth2 authentication.

```go

oauth := pulsar.NewAuthenticationOAuth2(map[string]string{
    "type":       "client_credentials",
    "issuerUrl":  "https://dev-kt-aa9ne.us.auth0.com",
    "audience":   "https://dev-kt-aa9ne.us.auth0.com/api/v2/",
    "privateKey": "/path/to/privateKey",
    "clientId":   "0Xx...Yyxeny",
})
client, err := pulsar.NewClient(pulsar.ClientOptions{
    URL:            "pulsar://my-cluster:6650",
    Authentication: oauth,
})

```

diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/client-libraries-java.md b/site2/website/versioned_docs/version-2.10.0-deprecated/client-libraries-java.md
deleted file mode 100644
index c548ff084ea688..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/client-libraries-java.md
+++ /dev/null
@@ -1,1543 +0,0 @@
---
id: client-libraries-java
title: Pulsar Java client
sidebar_label: "Java"
original_id: client-libraries-java
---

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````


You can use a Pulsar Java client to create the Java [producer](#producer), [consumer](#consumer), [readers](#reader) and [TableView](#tableview) of messages and to perform [administrative tasks](admin-api-overview.md).
The current Java client version is **@pulsar:version@**.

All the methods in [producer](#producer), [consumer](#consumer), [readers](#reader) and [TableView](#tableview) of a Java client are thread-safe.

Javadoc for the Pulsar client is divided into two domains by package as follows.

Package | Description | Maven Artifact
:-------|:------------|:--------------
[`org.apache.pulsar.client.api`](/api/client) | [The producer and consumer API](/api/client/) | [org.apache.pulsar:pulsar-client:@pulsar:version@](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client%7C@pulsar:version@%7Cjar)
[`org.apache.pulsar.client.admin`](/api/admin) | The Java [admin API](admin-api-overview.md) | [org.apache.pulsar:pulsar-client-admin:@pulsar:version@](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client-admin%7C@pulsar:version@%7Cjar)
`org.apache.pulsar.client.all` | Includes both `pulsar-client` and `pulsar-client-admin`. Both `pulsar-client` and `pulsar-client-admin` are shaded packages and they shade dependencies independently. Consequently, the applications using both `pulsar-client` and `pulsar-client-admin` have redundant shaded classes. It would be troublesome if you introduce new dependencies but forget to update shading rules. In this case, you can use `pulsar-client-all`, which shades dependencies only one time and reduces the size of dependencies. | [org.apache.pulsar:pulsar-client-all:@pulsar:version@](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client-all%7C@pulsar:version@%7Cjar)

This document focuses only on the client API for producing and consuming messages on Pulsar topics. For how to use the Java admin client, see [Pulsar admin interface](admin-api-overview.md).

## Installation

The latest version of the Pulsar Java client library is available via [Maven Central](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client%7C@pulsar:version@%7Cjar). To use the latest version, add the `pulsar-client` library to your build configuration.

:::tip

- [`pulsar-client`](https://search.maven.org/artifact/org.apache.pulsar/pulsar-client) and [`pulsar-client-admin`](https://search.maven.org/artifact/org.apache.pulsar/pulsar-client-admin) shade dependencies via [maven-shade-plugin](https://maven.apache.org/plugins/maven-shade-plugin/) to avoid conflicts of the underlying dependency packages (such as Netty). If you do not want to manage dependency conflicts manually, you can use them.
- [`pulsar-client-original`](https://search.maven.org/artifact/org.apache.pulsar/pulsar-client-original) and [`pulsar-client-admin-original`](https://search.maven.org/artifact/org.apache.pulsar/pulsar-client-admin-original) **do not** shade dependencies. If you want to manage dependencies manually, you can use them.

:::

### Maven

If you use Maven, add the following information to the `pom.xml` file.

```xml

<!-- in your <properties> block -->
<pulsar.version>@pulsar:version@</pulsar.version>

<!-- in your <dependencies> block -->
<dependency>
  <groupId>org.apache.pulsar</groupId>
  <artifactId>pulsar-client</artifactId>
  <version>${pulsar.version}</version>
</dependency>

```

### Gradle

If you use Gradle, add the following information to the `build.gradle` file.

```groovy

def pulsarVersion = '@pulsar:version@'

dependencies {
    compile group: 'org.apache.pulsar', name: 'pulsar-client', version: pulsarVersion
}

```

## Connection URLs

To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL.

You can assign Pulsar protocol URLs to specific clusters and use the `pulsar` scheme. The default port is `6650`. The following is an example of `localhost`.

```http

pulsar://localhost:6650

```

If you have multiple brokers, the URL is as follows.

```http

pulsar://localhost:6650,localhost:6651,localhost:6652

```

A URL for a production Pulsar cluster is as follows.

```http

pulsar://pulsar.us-west.example.com:6650

```

If you use [TLS](security-tls-authentication.md) authentication, the URL is as follows.

```http

pulsar+ssl://pulsar.us-west.example.com:6651

```

## Client

You can instantiate a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object using just a URL for the target Pulsar [cluster](reference-terminology.md#cluster) like this:

```java

PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650")
        .build();

```

If you have multiple brokers, you can initiate a PulsarClient like this:

```java

PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650,localhost:6651,localhost:6652")
        .build();

```

> ### Default broker URLs for standalone clusters
> If you run a cluster in [standalone mode](getting-started-standalone.md), the broker is available at the `pulsar://localhost:6650` URL by default.

When you create a client, you can use the `loadConf` configuration. The following parameters are available in `loadConf`.

Name | Type | Description | Default
:---|:---|:---|:---
`serviceUrl` | String | Service URL provider for the Pulsar service | None
`authPluginClassName` | String | Name of the authentication plugin | None
`authParams` | String | Parameters for the authentication plugin. <br/><br/>**Example**<br/>`key1:val1,key2:val2` | None
`operationTimeoutMs` | long | Operation timeout | 30000
`statsIntervalSeconds` | long | Interval between each stats info. <br/><br/>Stats are activated with a positive `statsInterval`. <br/><br/>Set `statsIntervalSeconds` to at least 1 second. | 60
`numIoThreads` | int | The number of threads used for handling connections to brokers | 1
`numListenerThreads` | int | The number of threads used for handling message listeners. The listener thread pool is shared across all the consumers and readers using the "listener" model to get messages. For a given consumer, the listener is always invoked from the same thread to ensure ordering. If you want multiple threads to process a single topic, you need to create a [`shared`](concepts-messaging.md#shared) subscription and multiple consumers for this subscription. This does not ensure ordering. | 1
`useTcpNoDelay` | boolean | Whether to use the TCP no-delay flag on the connection to disable the Nagle algorithm | true
`enableTls` | boolean | Whether to use TLS encryption on the connection. Note that this parameter is **deprecated**. If you want to enable TLS, use `pulsar+ssl://` in `serviceUrl` instead. | false
`tlsTrustCertsFilePath` | string | Path to the trusted TLS certificate file | None
`tlsAllowInsecureConnection` | boolean | Whether the Pulsar client accepts untrusted TLS certificates from the broker | false
`tlsHostnameVerificationEnable` | boolean | Whether to enable TLS hostname verification | false
`concurrentLookupRequest` | int | The number of concurrent lookup requests allowed to be sent on each broker connection to prevent overloading a broker | 5000
`maxLookupRequest` | int | The maximum number of lookup requests allowed on each broker connection to prevent overloading a broker | 50000
`maxNumberOfRejectedRequestPerConnection` | int | The maximum number of rejected requests of a broker in a certain time frame (30 seconds) after which the current connection is closed and the client creates a new connection to connect to a different broker | 50
`keepAliveIntervalSeconds` | int | Keep-alive interval for each client-broker connection, in seconds | 30
`connectionTimeoutMs` | int | Duration to wait for a connection to a broker to be established. <br/><br/>If the duration passes without a response from the broker, the connection attempt is dropped. | 10000
`requestTimeoutMs` | int | Maximum duration for completing a request | 60000
`defaultBackoffIntervalNanos` | long | Default duration for a backoff interval | TimeUnit.MILLISECONDS.toNanos(100)
`maxBackoffIntervalNanos` | long | Maximum duration for a backoff interval | TimeUnit.SECONDS.toNanos(30)
`socks5ProxyAddress` | SocketAddress | SOCKS5 proxy address | None
`socks5ProxyUsername` | string | SOCKS5 proxy username | None
`socks5ProxyPassword` | string | SOCKS5 proxy password | None
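For example, the following sketch builds a client from a configuration map using a few of the parameters above; the values shown are illustrative.

```java
import java.util.HashMap;
import java.util.Map;
import org.apache.pulsar.client.api.PulsarClient;

// Keys in the map match the parameter names in the table above.
Map<String, Object> config = new HashMap<>();
config.put("operationTimeoutMs", 30000);
config.put("numIoThreads", 4);
config.put("connectionTimeoutMs", 10000);

PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650")
        .loadConf(config)
        .build();
```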
Check out the Javadoc for the {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} class for a full list of configurable parameters.

> In addition to client-level configuration, you can also apply [producer](#configure-producer) and [consumer](#configure-consumer) specific configuration as described in sections below.

### Client memory allocator configuration

You can set the client memory allocator configurations through Java properties.

Property | Type | Description | Default | Available values
:---|:---|:---|:---|:---
`pulsar.allocator.pooled` | String | If set to `true`, the client uses a direct memory pool. <br/>If set to `false`, the client uses heap memory without a pool. | true | true, false
`pulsar.allocator.exit_on_oom` | String | Whether to exit the JVM when OOM happens | false | true, false
`pulsar.allocator.leak_detection` | String | The leak detection policy for the Pulsar bytebuf allocator. <br/><br/>**Disabled**: No leak detection and no overhead. <br/>**Simple**: Instruments 1% of the allocated buffers to track for leaks. <br/>**Advanced**: Instruments 1% of the allocated buffers to track for leaks, reporting stack traces of places where the buffer is used. <br/>**Paranoid**: Instruments 100% of the allocated buffers to track for leaks, reporting stack traces of places where the buffer is used, and introduces a significant overhead. | Disabled | Disabled, Simple, Advanced, Paranoid
`pulsar.allocator.out_of_memory_policy` | String | When an OOM occurs, the client either throws an exception or falls back to heap memory | FallbackToHeap | ThrowException, FallbackToHeap
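Because these are standard JVM system properties, in addition to passing `-D` flags on the command line (as in the example below), you can set them programmatically at application startup. A minimal sketch, assuming it runs before any Pulsar client classes are loaded:

```java
// Set allocator properties before the first PulsarClient is created.
System.setProperty("pulsar.allocator.pooled", "true");
System.setProperty("pulsar.allocator.exit_on_oom", "false");
System.setProperty("pulsar.allocator.leak_detection", "Disabled");
System.setProperty("pulsar.allocator.out_of_memory_policy", "ThrowException");
```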
**Example**:

```
-Dpulsar.allocator.pooled=true
-Dpulsar.allocator.exit_on_oom=false
-Dpulsar.allocator.leak_detection=Disabled
-Dpulsar.allocator.out_of_memory_policy=ThrowException
```

### Cluster-level failover

This chapter describes the concept, benefits, use cases, constraints, usage, and working principles of cluster-level failover. It contains the following sections:

- [What is cluster-level failover?](#what-is-cluster-level-failover)

  * [Concept of cluster-level failover](#concept-of-cluster-level-failover)

  * [Why use cluster-level failover?](#why-use-cluster-level-failover)

  * [When to use cluster-level failover?](#when-to-use-cluster-level-failover)

  * [When is cluster-level failover triggered?](#when-is-cluster-level-failover-triggered)

  * [Why does cluster-level failover fail?](#why-does-cluster-level-failover-fail)

  * [What are the limitations of cluster-level failover?](#what-are-the-limitations-of-cluster-level-failover)

  * [What are the relationships between cluster-level failover and geo-replication?](#what-are-the-relationships-between-cluster-level-failover-and-geo-replication)

- [How to use cluster-level failover?](#how-to-use-cluster-level-failover)

- [How does cluster-level failover work?](#how-does-cluster-level-failover-work)

> #### What is cluster-level failover

This chapter helps you better understand the concept of cluster-level failover.

> ##### Concept of cluster-level failover

````mdx-code-block

Automatic cluster-level failover supports Pulsar clients switching from a primary cluster to one or several backup clusters automatically and seamlessly when a failover event is detected, based on the detection policy configured by **users**.

![Automatic cluster-level failover](/assets/cluster-level-failover-1.png)

Controlled cluster-level failover supports Pulsar clients switching from a primary cluster to one or several backup clusters. The switchover is manually set by **administrators**.

![Controlled cluster-level failover](/assets/cluster-level-failover-2.png)

````

Once the primary cluster functions again, Pulsar clients can switch back to the primary cluster. Most of the time, users do not even notice the switch and can keep using their applications and services without interruptions or timeouts.

> ##### Why use cluster-level failover?

Cluster-level failover provides fault tolerance, continuous availability, and high availability together. It brings a number of benefits, including but not limited to:

* Reduced cost: services can be switched and recovered automatically with no data loss.

* Simplified management: businesses can operate on an "always-on" basis since no immediate user intervention is required.

* Improved stability and robustness: it ensures continuous performance and minimizes service downtime.

> ##### When to use cluster-level failover?

Cluster-level failover protects your environment in a number of ways, including but not limited to:

* Disaster recovery: cluster-level failover can automatically and seamlessly transfer the production workload on a primary cluster to one or several backup clusters, which ensures minimum data loss and reduced recovery time.

* Planned migration: if you want to migrate production workloads from an old cluster to a new cluster, you can improve the migration efficiency with cluster-level failover.
For example, you can test whether the data migration goes smoothly in case of a failover event, and identify possible issues and risks before the migration.

> ##### When is cluster-level failover triggered?

````mdx-code-block

Automatic cluster-level failover is triggered when Pulsar clients cannot connect to the primary cluster for a prolonged period of time. This can be caused by any number of reasons including, but not limited to:

* Network failure: the internet connection is lost.

* Power failure: the shutdown time of the primary cluster exceeds time limits.

* Service error: errors occur on the primary cluster (for example, the primary cluster does not function because of time limits).

* Crashed storage space: the primary cluster does not have enough storage space, but the corresponding storage space on the backup cluster functions normally.

Controlled cluster-level failover is triggered when administrators set the switchover manually.

````

> ##### Why does cluster-level failover fail?

Obviously, the cluster-level failover does not succeed if the backup cluster is unreachable by active Pulsar clients. This can happen for many reasons, including but not limited to:

* Power failure: the backup cluster is shut down or does not function normally.

* Crashed storage space: the primary and backup clusters do not have enough storage space.

* The failover is initiated, but no cluster can assume the role of an available cluster due to errors, and the primary cluster is not able to provide service normally.

* You manually initiate a switchover, but services cannot be switched to the backup cluster, so the system attempts to switch services back to the primary cluster.

* Authentication or authorization fails 1) between the primary and backup clusters, or 2) between two backup clusters.

> ##### What are the limitations of cluster-level failover?

Currently, cluster-level failover can perform probes to prevent data loss, but it cannot check the status of backup clusters. If backup clusters are not healthy, you cannot produce or consume data.

> #### What are the relationships between cluster-level failover and geo-replication?

Cluster-level failover is an extension of [geo-replication](concepts-replication.md) that improves stability and robustness. Cluster-level failover depends on geo-replication, and they have some **differences**, as below.

Influence | Cluster-level failover | Geo-replication
:---|:---|:---
Do administrators have heavy workloads? | No or maybe. <br/><br/>- For **automatic** cluster-level failover, the cluster switchover is triggered automatically based on the policies set by **users**. <br/><br/>- For **controlled** cluster-level failover, the switchover is triggered manually by **administrators**. | Yes. <br/><br/>If a cluster fails, immediate administrator intervention is required.
Result in data loss? | No. <br/><br/>For both **automatic** and **controlled** cluster-level failover, if the failed primary cluster doesn't replicate messages immediately to the backup cluster, the Pulsar client can't consume the non-replicated messages. After the primary cluster is restored and the Pulsar client switches back, the non-replicated data can still be consumed by the Pulsar client. Consequently, the data is not lost. <br/><br/>- For **automatic** cluster-level failover, services can be switched and recovered automatically with no data loss. <br/><br/>- For **controlled** cluster-level failover, services can be switched and recovered manually, and data loss may happen. | Yes. <br/><br/>Pulsar clients and DNS systems have caches. When administrators switch the DNS from a primary cluster to a backup cluster, it takes some time for the caches to expire, which delays client recovery and causes produce or consume failures in the meantime.
Result in Pulsar client failure? | No or maybe. <br/><br/>- For **automatic** cluster-level failover, services can be switched and recovered automatically and the Pulsar client does not fail. <br/><br/>- For **controlled** cluster-level failover, services can be switched and recovered manually, but the Pulsar client fails before administrators can take action. | Same as above.

> #### How to use cluster-level failover

This section guides you through every step of configuring cluster-level failover.

**Tip**

- Configure cluster-level failover only when the cluster contains sufficient resources to handle all possible consequences. Workload intensity on the backup cluster may increase significantly.

- Connect clusters to an uninterruptible power supply (UPS) unit to reduce the risk of unexpected power loss.

**Requirements**

* Pulsar client 2.10 or later versions.

* For backup clusters:

  * The number of BookKeeper nodes should be equal to or greater than the ensemble quorum.

  * The number of ZooKeeper nodes should be equal to or greater than 3.

* **Turn on geo-replication** between the primary cluster and any dependent cluster (primary to backup or backup to backup) to prevent data loss.

* Set `replicateSubscriptionState` to `true` when creating consumers.

````mdx-code-block

This is an example of how to construct a Java Pulsar client that uses automatic cluster-level failover. The switchover is triggered automatically.

```java
private PulsarClient getAutoFailoverClient() throws PulsarClientException {
    ServiceUrlProvider failover = AutoClusterFailover.builder()
            .primary("pulsar://localhost:6650")
            .secondary(Arrays.asList("pulsar://other1:6650", "pulsar://other2:6650"))
            .failoverDelay(30, TimeUnit.SECONDS)
            .switchBackDelay(60, TimeUnit.SECONDS)
            .checkInterval(1000, TimeUnit.MILLISECONDS)
            .secondaryTlsTrustCertsFilePath("/path/to/ca.cert.pem")
            .secondaryAuthentication("org.apache.pulsar.client.impl.auth.AuthenticationTls",
                    "tlsCertFile:/path/to/my-role.cert.pem,tlsKeyFile:/path/to/my-role.key-pk8.pem")
            .build();

    PulsarClient pulsarClient = PulsarClient.builder()
            .build();

    failover.initialize(pulsarClient);
    return pulsarClient;
}
```

Configure the following parameters:

Parameter | Default value | Required? | Description
:---|:---|:---|:---
`primary` | N/A | Yes | Service URL of the primary cluster.
`secondary` | N/A | Yes | Service URL(s) of one or several backup clusters. <br/><br/>You can specify several backup clusters using a comma-separated list. <br/><br/>Note that: <br/>- The backup cluster is chosen in the sequence shown in the list. <br/>- If all backup clusters are available, the Pulsar client chooses the first backup cluster.
`failoverDelay` | N/A | Yes | The delay before the Pulsar client switches from the primary cluster to the backup cluster. <br/><br/>Automatic failover is controlled by a probe task: <br/>1) The probe task first checks the health status of the primary cluster. <br/>2) If the probe task finds that the continuous failure time of the primary cluster exceeds `failoverDelay`, it switches the Pulsar client to the backup cluster.
`switchBackDelay` | N/A | Yes | The delay before the Pulsar client switches from the backup cluster back to the primary cluster. <br/><br/>Automatic switchback is controlled by a probe task: <br/>1) After the Pulsar client switches from the primary cluster to the backup cluster, the probe task continues to check the status of the primary cluster. <br/>2) If the primary cluster functions well and continuously remains active longer than `switchBackDelay`, the Pulsar client switches back to the primary cluster.
`checkInterval` | 30s | No | Frequency of performing the probe task (in seconds).
`secondaryTlsTrustCertsFilePath` | N/A | No | Path to the trusted TLS certificate file of the backup cluster.
`secondaryAuthentication` | N/A | No | Authentication of the backup cluster.

This is an example of how to construct a Java Pulsar client that uses controlled cluster-level failover. The switchover is triggered by administrators manually.

**Note**: you can have one or several backup clusters but can only specify one.

```java
public PulsarClient getControlledFailoverClient() throws IOException {
    Map<String, String> header = new HashMap<>();
    header.put("service_user_id", "my-user");
    header.put("service_password", "tiger");
    header.put("clusterA", "tokenA");
    header.put("clusterB", "tokenB");

    ServiceUrlProvider provider = ControlledClusterFailover.builder()
            .defaultServiceUrl("pulsar://localhost:6650")
            .checkInterval(1, TimeUnit.MINUTES)
            .urlProvider("http://localhost:8080/test")
            .urlProviderHeader(header)
            .build();

    PulsarClient pulsarClient = PulsarClient.builder()
            .build();

    provider.initialize(pulsarClient);
    return pulsarClient;
}
```

Parameter | Default value | Required? | Description
:---|:---|:---|:---
`defaultServiceUrl` | N/A | Yes | Pulsar service URL.
`checkInterval` | 30s | No | Frequency of performing the probe task (in seconds).
`urlProvider` | N/A | Yes | URL provider service.
`urlProviderHeader` | N/A | No | `urlProviderHeader` is a map containing tokens and credentials. <br/><br/>If you enable authentication or authorization between Pulsar clients and the primary and backup clusters, you need to provide `urlProviderHeader`.

Here is an example of how `urlProviderHeader` works.

![How urlProviderHeader works](/assets/cluster-level-failover-3.png)

Assume that you want to connect Pulsar client 1 to cluster A.

1. Pulsar client 1 sends the token *t1* to the URL provider service.

2. The URL provider service returns the credential *c1* and the cluster A URL to the Pulsar client.

   The URL provider service manages all tokens and credentials. It returns different credentials based on different tokens, and different target cluster URLs to different Pulsar clients.

   **Note**: **the credential must be in a JSON file and contain parameters as shown**.

   ```
   {
     "serviceUrl": "pulsar+ssl://target:6651",
     "tlsTrustCertsFilePath": "/security/ca.cert.pem",
     "authPluginClassName": "org.apache.pulsar.client.impl.auth.AuthenticationTls",
     "authParamsString": "tlsCertFile:/security/client.cert.pem,tlsKeyFile:/security/client-pk8.pem"
   }
   ```

3. Pulsar client 1 connects to cluster A using credential *c1*.

````

> #### How does cluster-level failover work?

This chapter explains the working process of cluster-level failover. For more implementation details, see [PIP-121](https://github.com/apache/pulsar/issues/13315).

````mdx-code-block

In an automatic failover cluster, the primary cluster and backup clusters are aware of each other's availability. The automatic failover cluster performs the following actions without administrator intervention:

1. The Pulsar client runs a probe task at intervals defined in `checkInterval`.

2. If the probe task finds that the failure time of the primary cluster exceeds the time set in the `failoverDelay` parameter, it searches the backup clusters for an available healthy cluster.

   2a) If there is a healthy backup cluster, the Pulsar client switches to a backup cluster in the order defined in `secondary`.

   2b) If there is no healthy backup cluster, the Pulsar client does not perform the switchover, and the probe task continues to look for an available backup cluster.

3. The probe task checks whether the primary cluster functions well or not.

   3a) If the primary cluster comes back and the continuous healthy time exceeds the time set in `switchBackDelay`, the Pulsar client switches back to the primary cluster.

   3b) If the primary cluster does not come back, the Pulsar client does not perform the switchover.

![Workflow of automatic failover cluster](/assets/cluster-level-failover-4.png)

1. The Pulsar client runs a probe task at intervals defined in `checkInterval`.

2. The probe task fetches the service URL configuration from the URL provider service, which is configured by `urlProvider`.

   2a) If the service URL configuration is changed, the probe task switches to the target cluster without checking the health status of the target cluster.

   2b) If the service URL configuration is not changed, the Pulsar client does not perform the switchover.

3. If the Pulsar client switches to the target cluster, the probe task continues to fetch the service URL configuration from the URL provider service at intervals defined in `checkInterval`.

   3a) If the service URL configuration is changed, the probe task switches to the target cluster without checking the health status of the target cluster.

   3b) If the service URL configuration is not changed, it does not perform the switchover.

![Workflow of controlled failover cluster](/assets/cluster-level-failover-5.png)

````

## Producer

In Pulsar, producers write messages to topics. Once you've instantiated a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object (as in the [Client](#client) section above), you can create a {@inject: javadoc:Producer:/client/org/apache/pulsar/client/api/Producer} for a specific Pulsar [topic](reference-terminology.md#topic).

```java
Producer<byte[]> producer = client.newProducer()
        .topic("my-topic")
        .create();

// You can then send messages to the broker and topic you specified:
producer.send("My message".getBytes());
```

By default, producers produce messages that consist of byte arrays. You can produce different types by specifying a message [schema](#schema).

```java
Producer<String> stringProducer = client.newProducer(Schema.STRING)
        .topic("my-topic")
        .create();
stringProducer.send("My message");
```

> Make sure that you close your producers, consumers, and clients when you no longer need them.
> ```java
> producer.close();
> consumer.close();
> client.close();
> ```
>
> Close operations can also be asynchronous:
>
> ```java
> producer.closeAsync()
>         .thenRun(() -> System.out.println("Producer closed"))
>         .exceptionally((ex) -> {
>             System.err.println("Failed to close producer: " + ex);
>             return null;
>         });
> ```

### Configure producer

If you instantiate a `Producer` object by specifying only a topic name as in the example above, the producer uses the default configuration.

When you create a producer, you can use the `loadConf` configuration. The following parameters are available in `loadConf`.

Name | Type | Description | Default
:---|:---|:---|:---
`topicName` | string | Topic name | null
`producerName` | string | Producer name | null
`sendTimeoutMs` | long | Message send timeout in ms. <br/>If a message is not acknowledged by a server before the `sendTimeout` expires, an error occurs. | 30000
`blockIfQueueFull` | boolean | If set to `true`, when the outgoing message queue is full, the `Send` and `SendAsync` methods of the producer block, rather than failing and throwing errors. <br/>If set to `false`, when the outgoing message queue is full, the `Send` and `SendAsync` methods of the producer fail with `ProducerQueueIsFullError` exceptions. <br/><br/>The `MaxPendingMessages` parameter determines the size of the outgoing message queue. | false
`maxPendingMessages` | int | The maximum size of a queue holding pending messages, that is, messages waiting to receive an acknowledgment from a [broker](reference-terminology.md#broker). <br/><br/>By default, when the queue is full, all calls to the `Send` and `SendAsync` methods fail **unless** you set `BlockIfQueueFull` to `true`. | 1000
`maxPendingMessagesAcrossPartitions` | int | The maximum number of pending messages across partitions. <br/><br/>Use this setting to lower the max pending messages for each partition (`setMaxPendingMessages(int)`) if the total number exceeds the configured value. | 50000
`messageRoutingMode` | MessageRoutingMode | Message routing logic for producers on [partitioned topics](concepts-architecture-overview.md#partitioned-topics). <br/>This logic is applied only when no key is set on messages. <br/>Available options are as follows: <br/>- `pulsar.RoundRobinDistribution`: round robin <br/>- `pulsar.UseSinglePartition`: publish all messages to a single partition <br/>- `pulsar.CustomPartition`: a custom partitioning scheme | `pulsar.RoundRobinDistribution`
`hashingScheme` | HashingScheme | Hashing function determining the partition where you publish a particular message (**partitioned topics only**). <br/>Available options are as follows: <br/>- `pulsar.JavastringHash`: the equivalent of `string.hashCode()` in Java <br/>- `pulsar.Murmur3_32Hash`: applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function <br/>- `pulsar.BoostHash`: applies the hashing function from C++'s [Boost](https://www.boost.org/doc/libs/1_62_0/doc/html/hash.html) library | `HashingScheme.JavastringHash`
`cryptoFailureAction` | ProducerCryptoFailureAction | The action the producer takes when encryption fails. <br/>- **FAIL**: if encryption fails, unencrypted messages fail to send. <br/>- **SEND**: if encryption fails, unencrypted messages are sent. | `ProducerCryptoFailureAction.FAIL`
`batchingMaxPublishDelayMicros` | long | Batching time period of sending messages. | TimeUnit.MILLISECONDS.toMicros(1)
`batchingMaxMessages` | int | The maximum number of messages permitted in a batch. | 1000
`batchingEnabled` | boolean | Enable batching of messages. | true
`chunkingEnabled` | boolean | Enable chunking of messages. | false
`compressionType` | CompressionType | Message data compression type used by a producer. <br/>Available options: <br/>- [`LZ4`](https://github.com/lz4/lz4) <br/>- [`ZLIB`](https://zlib.net/) <br/>- [`ZSTD`](https://facebook.github.io/zstd/) <br/>- [`SNAPPY`](https://google.github.io/snappy/) | No compression
`initialSubscriptionName` | string | Use this configuration to automatically create an initial subscription when creating a topic. If this field is not set, the initial subscription is not created. | null
You can configure parameters if you do not want to use the default configuration.

For a full list, see the Javadoc for the {@inject: javadoc:ProducerBuilder:/client/org/apache/pulsar/client/api/ProducerBuilder} class. The following is an example.

```java
Producer<byte[]> producer = client.newProducer()
        .topic("my-topic")
        .batchingMaxPublishDelay(10, TimeUnit.MILLISECONDS)
        .sendTimeout(10, TimeUnit.SECONDS)
        .blockIfQueueFull(true)
        .create();
```

### Message routing

When using partitioned topics, you can specify the routing mode whenever you publish messages using a producer. For more information on specifying a routing mode using the Java client, see the [Partitioned Topics cookbook](cookbooks-partitioned.md).

### Async send

You can publish messages [asynchronously](concepts-messaging.md#send-modes) using the Java client. With async send, the producer puts the message in a blocking queue and returns immediately. Then the client library sends the message to the broker in the background. If the queue is full (max size configurable), the producer is blocked or fails immediately when calling the API, depending on arguments passed to the producer.

The following is an example.

```java
producer.sendAsync("my-async-message".getBytes()).thenAccept(msgId -> {
    System.out.println("Message with ID " + msgId + " successfully sent");
});
```

As you can see from the example above, async send operations return a {@inject: javadoc:MessageId:/client/org/apache/pulsar/client/api/MessageId} wrapped in a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture).

### Configure messages

In addition to a value, you can set additional items on a given message:

```java
producer.newMessage()
        .key("my-message-key")
        .value("my-async-message".getBytes())
        .property("my-key", "my-value")
        .property("my-other-key", "my-other-value")
        .send();
```

You can terminate the builder chain with `sendAsync()` and get a future in return.

### Enable chunking

Message [chunking](concepts-messaging.md#chunking) enables Pulsar to process large payload messages by splitting a message into chunks at the producer side and aggregating the chunked messages at the consumer side.

The message chunking feature is OFF by default. The following is an example of how to enable message chunking when creating a producer.

```java
Producer<byte[]> producer = client.newProducer()
        .topic(topic)
        .enableChunking(true)
        .enableBatching(false)
        .create();
```

By default, the producer chunks a large message based on the max message size (`maxMessageSize`) configured at the broker side (for example, 5 MB). However, the client can also configure the max chunk size using the producer configuration `chunkMaxMessageSize`.

> **Note:** To enable chunking, you need to disable batching (`enableBatching`=`false`) at the same time.
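For example, the following sketch caps each chunk at 1 MB; it assumes the `chunkMaxMessageSize` builder method mentioned above is available in your client version.

```java
// Chunking with a client-side cap on the chunk size (1 MB per chunk).
Producer<byte[]> producer = client.newProducer()
        .topic(topic)
        .enableChunking(true)
        .enableBatching(false)
        .chunkMaxMessageSize(1024 * 1024) // must not exceed the broker's maxMessageSize
        .create();
```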
## Consumer

In Pulsar, consumers subscribe to topics and handle messages that producers publish to those topics. You can instantiate a new [consumer](reference-terminology.md#consumer) by first instantiating a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object and passing it a URL for a Pulsar broker (as [above](#client)).

Once you've instantiated a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object, you can create a {@inject: javadoc:Consumer:/client/org/apache/pulsar/client/api/Consumer} by specifying a [topic](reference-terminology.md#topic) and a [subscription](concepts-messaging.md#subscription-types).

```java
Consumer<byte[]> consumer = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .subscribe();
```

The `subscribe` method subscribes the consumer to the specified topic and subscription. One way to make the consumer listen on the topic is to set up a `while` loop. In this example loop, the consumer listens for messages, prints the contents of any received message, and then [acknowledges](reference-terminology.md#acknowledgment-ack) that the message has been processed. If the processing logic fails, you can use [negative acknowledgement](reference-terminology.md#acknowledgment-ack) to redeliver the message later.

```java
while (true) {
    // Wait for a message
    Message<byte[]> msg = consumer.receive();

    try {
        // Do something with the message
        System.out.println("Message received: " + new String(msg.getData()));

        // Acknowledge the message so that it can be deleted by the message broker
        consumer.acknowledge(msg);
    } catch (Exception e) {
        // Message failed to process, redeliver later
        consumer.negativeAcknowledge(msg);
    }
}
```

If you don't want to block your main thread but rather listen constantly for new messages, consider using a `MessageListener`.

```java
MessageListener<byte[]> myMessageListener = (consumer, msg) -> {
    try {
        System.out.println("Message received: " + new String(msg.getData()));
        consumer.acknowledge(msg);
    } catch (Exception e) {
        consumer.negativeAcknowledge(msg);
    }
};

Consumer<byte[]> consumer = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .messageListener(myMessageListener)
        .subscribe();
```

### Configure consumer

If you instantiate a `Consumer` object by specifying only a topic and subscription name as in the example above, the consumer uses the default configuration.

When you create a consumer, you can use the `loadConf` configuration. The following parameters are available in `loadConf`.

Name | Type | Description | Default
:---|:---|:---|:---
`topicNames` | `Set<String>` | Topic name | Sets.newTreeSet()
`topicsPattern` | Pattern | Topic pattern | None
`subscriptionName` | String | Subscription name | None
`subscriptionType` | SubscriptionType | Subscription type. <br/>Four subscription types are available: <br/>- Exclusive <br/>- Failover <br/>- Shared <br/>- Key_Shared | SubscriptionType.Exclusive
`receiverQueueSize` | int | Size of a consumer's receiver queue, that is, the number of messages that can be accumulated by a consumer before an application calls `Receive`. <br/><br/>A value higher than the default value increases consumer throughput, though at the expense of more memory utilization. | 1000
`acknowledgementsGroupTimeMicros` | long | Group a consumer acknowledgment for a specified time. <br/><br/>By default, a consumer uses a 100ms grouping time to send out acknowledgments to a broker. <br/><br/>Setting a group time of 0 sends out acknowledgments immediately. <br/><br/>A longer ack group time is more efficient at the expense of a slight increase in message re-deliveries after a failure. | TimeUnit.MILLISECONDS.toMicros(100)
`negativeAckRedeliveryDelayMicros` | long | Delay to wait before redelivering messages that failed to be processed. <br/><br/>When an application uses `Consumer#negativeAcknowledge(Message)`, failed messages are redelivered after a fixed timeout. | TimeUnit.MINUTES.toMicros(1)
`maxTotalReceiverQueueSizeAcrossPartitions` | int | The max total receiver queue size across partitions. <br/><br/>This setting reduces the receiver queue size for individual partitions if the total receiver queue size exceeds this value. | 50000
`consumerName` | String | Consumer name | null
`ackTimeoutMillis` | long | Timeout of unacked messages | 0
`tickDurationMillis` | long | Granularity of the ack-timeout redelivery. <br/><br/>Using a higher `tickDurationMillis` reduces the memory overhead of tracking messages when the ack-timeout is set to a bigger value (for example, 1 hour). | 1000
`priorityLevel` | int | Priority level for a consumer to which a broker gives more priority while dispatching messages in the Shared subscription type. <br/><br/>The broker follows descending priorities. For example, 0=max-priority, 1, 2,... <br/><br/>In the Shared subscription type, the broker **first dispatches messages to the consumers on the highest priority level if they have permits**. Otherwise, the broker considers consumers on the next priority level. <br/><br/>**Example 1** <br/>If a subscription has consumerA with `priorityLevel` 0 and consumerB with `priorityLevel` 1, then the broker **only dispatches messages to consumerA until it runs out of permits** and then starts dispatching messages to consumerB. <br/><br/>**Example 2** <br/>Consumer, Priority Level, Permits <br/>C1, 0, 2 <br/>C2, 0, 1 <br/>C3, 0, 1 <br/>C4, 1, 2 <br/>C5, 1, 1 <br/><br/>The order in which the broker dispatches messages to the consumers is: C1, C2, C3, C1, C4, C5, C4. | 0
`cryptoFailureAction` | ConsumerCryptoFailureAction | The action the consumer takes when it receives a message that cannot be decrypted. <br/>- **FAIL**: this is the default option to fail messages until crypto succeeds. <br/>- **DISCARD**: silently acknowledge but do not deliver the message to the application. <br/>- **CONSUME**: deliver encrypted messages to the application. It is the application's responsibility to decrypt the message. <br/><br/>Decompression of the message fails in this case. <br/><br/>If messages contain batch messages, the client is not able to retrieve individual messages from the batch. <br/><br/>The delivered encrypted message contains an `EncryptionContext` with the encryption and compression information, which the application can use to decrypt the consumed message payload. | ConsumerCryptoFailureAction.FAIL
`properties` | `SortedMap<String, String>` | A name or value property of this consumer. <br/><br/>`properties` is application-defined metadata attached to the consumer. <br/><br/>When getting topic stats, this metadata is associated with the consumer stats for easier identification. | new TreeMap()
`readCompacted` | boolean | If enabling `readCompacted`, a consumer reads messages from a compacted topic rather than reading a full message backlog of a topic. <br/><br/>A consumer only sees the latest value for each key in the compacted topic, up until the point in the topic message backlog that has been compacted. Beyond that point, messages are sent as normal. <br/><br/>`readCompacted` can only be enabled on subscriptions to persistent topics that have a single active consumer (for example, failover or exclusive subscriptions). <br/><br/>Attempting to enable it on subscriptions to non-persistent topics or on shared subscriptions leads to the subscription call throwing a `PulsarClientException`. | false
`subscriptionInitialPosition` | SubscriptionInitialPosition | Initial position at which to set the cursor when subscribing to a topic for the first time. | SubscriptionInitialPosition.Latest
`patternAutoDiscoveryPeriod` | int | Topic auto discovery period when using a pattern for the topic's consumer. <br/><br/>The default and minimum value is 1 minute. | 1
`regexSubscriptionMode` | RegexSubscriptionMode | When subscribing to a topic using a regular expression, you can pick a certain type of topics. <br/>- **PersistentOnly**: only subscribe to persistent topics. <br/>- **NonPersistentOnly**: only subscribe to non-persistent topics. <br/>- **AllTopics**: subscribe to both persistent and non-persistent topics. | RegexSubscriptionMode.PersistentOnly
`deadLetterPolicy` | DeadLetterPolicy | Dead letter policy for consumers. <br/><br/>By default, some messages are probably redelivered many times, possibly without ever stopping. <br/><br/>By using the dead letter mechanism, messages have a max redelivery count. **When the maximum number of redeliveries is exceeded, messages are sent to the dead letter topic and acknowledged automatically**. <br/><br/>You can enable the dead letter mechanism by setting `deadLetterPolicy`, as shown in the example after this table. <br/><br/>The default dead letter topic name is `{TopicName}-{Subscription}-DLQ`. You can set a custom dead letter topic name via `DeadLetterPolicy.builder().deadLetterTopic("your-topic-name")`. <br/><br/>When specifying the dead letter policy while not specifying `ackTimeoutMillis`, the ack timeout is set to 30000 milliseconds. | None
`autoUpdatePartitions` | boolean | If `autoUpdatePartitions` is enabled, a consumer subscribes to partition increases automatically. <br/><br/>**Note**: this is only for partitioned consumers. | true
`replicateSubscriptionState` | boolean | If `replicateSubscriptionState` is enabled, the subscription state is replicated to geo-replicated clusters. | false
`negativeAckRedeliveryBackoff` | RedeliveryBackoff | Interface for a custom redelivery policy for negatively acknowledged messages. You can specify a `RedeliveryBackoff` for a consumer. | `MultiplierRedeliveryBackoff`
`ackTimeoutRedeliveryBackoff` | RedeliveryBackoff | Interface for a custom redelivery policy for messages whose ack has timed out. You can specify a `RedeliveryBackoff` for a consumer. | `MultiplierRedeliveryBackoff`
`autoAckOldestChunkedMessageOnQueueFull` | boolean | Whether to automatically acknowledge pending chunked messages when the threshold of `maxPendingChunkedMessage` is reached. If set to `false`, these messages will be redelivered by their broker. | true
`maxPendingChunkedMessage` | int | The maximum size of a queue holding pending chunked messages. When the threshold is reached, the consumer drops pending messages to optimize memory utilization. | 10
`expireTimeOfIncompleteChunkedMessageMillis` | long | The time interval to expire incomplete chunks if a consumer fails to receive all the chunks in the specified time period. The default value is 1 minute. | 60000
`ackReceiptEnabled` | boolean | If `ackReceiptEnabled` is enabled, ACK returns a receipt. Receiving the ack receipt means the broker has processed the ack request; however, without a transaction, the broker does not guarantee persistence of acknowledgments, which means the messages still have a chance to be redelivered after the broker crashes. With a transaction, the client can only get the receipt after the acknowledgments have been persisted. | false
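For example, the dead letter policy described above can be applied as in the following sketch; the subscription type, timeout, and topic names are illustrative values.

```java
import java.util.concurrent.TimeUnit;
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.DeadLetterPolicy;
import org.apache.pulsar.client.api.SubscriptionType;

// Messages redelivered more than 10 times are routed to the custom DLQ topic.
Consumer<byte[]> consumer = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .subscriptionType(SubscriptionType.Shared)
        .ackTimeout(30, TimeUnit.SECONDS)
        .deadLetterPolicy(DeadLetterPolicy.builder()
                .maxRedeliverCount(10)
                .deadLetterTopic("my-topic-my-subscription-DLQ")
                .build())
        .subscribe();
```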
You can configure parameters if you do not want to use the default configuration. For a full list, see the Javadoc for the {@inject: javadoc:ConsumerBuilder:/client/org/apache/pulsar/client/api/ConsumerBuilder} class.

The following is an example.

```java
Consumer<byte[]> consumer = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .ackTimeout(10, TimeUnit.SECONDS)
        .subscriptionType(SubscriptionType.Exclusive)
        .subscribe();
```

### Async receive

The `receive` method receives messages synchronously (the consumer process is blocked until a message is available). You can also use [async receive](concepts-messaging.md#receive-modes), which returns a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture) object immediately once a new message is available.

The following is an example.

```java
CompletableFuture<Message<byte[]>> asyncMessage = consumer.receiveAsync();
```

Async receive operations return a {@inject: javadoc:Message:/client/org/apache/pulsar/client/api/Message} wrapped inside of a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture).

### Batch receive

Use `batchReceive` to receive multiple messages for each call.

The following is an example.

```java
Messages<byte[]> messages = consumer.batchReceive();
for (Message<byte[]> message : messages) {
    // do something
}
consumer.acknowledge(messages);
```

:::note

The batch receive policy limits the number and bytes of messages in a single batch. You can specify a timeout to wait for enough messages.
The batch receive is completed as soon as any of the following conditions is met: enough number of messages, enough bytes of messages, or the wait timeout.

```java
Consumer<byte[]> consumer = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .batchReceivePolicy(BatchReceivePolicy.builder()
                .maxNumMessages(100)
                .maxNumBytes(1024 * 1024)
                .timeout(200, TimeUnit.MILLISECONDS)
                .build())
        .subscribe();
```

The default batch receive policy is:

```java
BatchReceivePolicy.builder()
        .maxNumMessages(-1)
        .maxNumBytes(10 * 1024 * 1024)
        .timeout(100, TimeUnit.MILLISECONDS)
        .build();
```

:::

### Configure chunking

You can limit the maximum number of chunked messages a consumer maintains concurrently by configuring the `maxPendingChunkedMessage` and `autoAckOldestChunkedMessageOnQueueFull` parameters. When the threshold is reached, the consumer drops pending messages by silently acknowledging them or asking the broker to redeliver them later. The `expireTimeOfIncompleteChunkedMessage` parameter decides the time interval to expire incomplete chunks if the consumer fails to receive all chunks of a message within the specified time period.

The following is an example of how to configure message chunking.

```java
Consumer<byte[]> consumer = client.newConsumer()
        .topic(topic)
        .subscriptionName("test")
        .autoAckOldestChunkedMessageOnQueueFull(true)
        .maxPendingChunkedMessage(100)
        .expireTimeOfIncompleteChunkedMessage(10, TimeUnit.MINUTES)
        .subscribe();
```

### Negative acknowledgment redelivery backoff

The `RedeliveryBackoff` interface introduces a redelivery backoff mechanism. You can redeliver messages with different delays based on the `redeliveryCount` of a message by specifying a `RedeliveryBackoff` for a consumer.

```java
Consumer<byte[]> consumer = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .negativeAckRedeliveryBackoff(MultiplierRedeliveryBackoff.builder()
                .minDelayMs(1000)
                .maxDelayMs(60 * 1000)
                .build())
        .subscribe();
```

### Acknowledgement timeout redelivery backoff

The `RedeliveryBackoff` interface introduces a redelivery backoff mechanism. You can redeliver messages with different delays by setting the number of times the message is retried.

```java
Consumer<byte[]> consumer = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .ackTimeout(10, TimeUnit.SECONDS)
        .ackTimeoutRedeliveryBackoff(MultiplierRedeliveryBackoff.builder()
                .minDelayMs(1000)
                .maxDelayMs(60000)
                .multiplier(2)
                .build())
        .subscribe();
```

The message redelivery behavior should be as follows.

Redelivery count | Redelivery delay
:----------------|:----------------
1 | 10 + 1 seconds
2 | 10 + 2 seconds
3 | 10 + 4 seconds
4 | 10 + 8 seconds
5 | 10 + 16 seconds
6 | 10 + 32 seconds
7 | 10 + 60 seconds
8 | 10 + 60 seconds

:::note

- The `negativeAckRedeliveryBackoff` does not work with `consumer.negativeAcknowledge(MessageId messageId)` because you are not able to get the redelivery count from the message ID.
- If a consumer crashes, the redelivery of unacked messages is triggered. In this case, `RedeliveryBackoff` does not take effect and the messages might get redelivered earlier than the delay time set in the backoff.

:::

### Multi-topic subscriptions

In addition to subscribing a consumer to a single Pulsar topic, you can also subscribe to multiple topics simultaneously using [multi-topic subscriptions](concepts-messaging.md#multi-topic-subscriptions). To use multi-topic subscriptions, you can supply either a regular expression (regex) or a `List` of topics. If you select topics via regex, all topics must be within the same Pulsar namespace.

The following are some examples.

```java
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.PulsarClient;

import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

ConsumerBuilder<byte[]> consumerBuilder = pulsarClient.newConsumer()
        .subscriptionName(subscription);

// Subscribe to all topics in a namespace
Pattern allTopicsInNamespace = Pattern.compile("public/default/.*");
Consumer<byte[]> allTopicsConsumer = consumerBuilder
        .topicsPattern(allTopicsInNamespace)
        .subscribe();

// Subscribe to a subset of topics in a namespace, based on regex
Pattern someTopicsInNamespace = Pattern.compile("public/default/foo.*");
Consumer<byte[]> someTopicsConsumer = consumerBuilder
        .topicsPattern(someTopicsInNamespace)
        .subscribe();
```

In the above example, the consumer subscribes to the `persistent` topics that match the topic name pattern. If you want the consumer to subscribe to all `persistent` and `non-persistent` topics that match the topic name pattern, set `subscriptionTopicsMode` to `RegexSubscriptionMode.AllTopics`.

```java
Pattern pattern = Pattern.compile("public/default/.*");
pulsarClient.newConsumer()
        .subscriptionName("my-sub")
        .topicsPattern(pattern)
        .subscriptionTopicsMode(RegexSubscriptionMode.AllTopics)
        .subscribe();
```

:::note

By default, the `subscriptionTopicsMode` of the consumer is `PersistentOnly`. Available options of `subscriptionTopicsMode` are `PersistentOnly`, `NonPersistentOnly`, and `AllTopics`.

:::

You can also subscribe to an explicit list of topics (across namespaces if you wish):

```java
List<String> topics = Arrays.asList(
        "topic-1",
        "topic-2",
        "topic-3"
);

Consumer<byte[]> multiTopicConsumer = consumerBuilder
        .topics(topics)
        .subscribe();

// Alternatively:
Consumer<byte[]> multiTopicConsumer = consumerBuilder
        .topic(
                "topic-1",
                "topic-2",
                "topic-3"
        )
        .subscribe();
```

You can also subscribe to multiple topics asynchronously using the `subscribeAsync` method rather than the synchronous `subscribe` method. The following is an example.

```java
Pattern allTopicsInNamespace = Pattern.compile("persistent://public/default.*");
consumerBuilder
        .topics(topics)
        .subscribeAsync()
        .thenAccept(this::receiveMessageFromConsumer);

private void receiveMessageFromConsumer(Object consumer) {
    ((Consumer) consumer).receiveAsync().thenAccept(message -> {
        // Do something with the received message
        receiveMessageFromConsumer(consumer);
    });
}
```

### Subscription types

Pulsar has various [subscription types](concepts-messaging.md#subscription-types) to match different scenarios. A topic can have multiple subscriptions with different subscription types. However, a subscription can only have one subscription type at a time.

A subscription is identified by its subscription name, and a subscription name can specify only one subscription type at a time. To change the subscription type, you should first stop all consumers of the subscription.

Different subscription types have different message distribution modes. This section describes the differences between subscription types and how to use them.

To better describe their differences, assume you have a topic named "my-topic" and a producer that has published 10 messages.

```java
Producer<String> producer = client.newProducer(Schema.STRING)
        .topic("my-topic")
        .enableBatching(false)
        .create();
// 3 messages with "key-1", 3 messages with "key-2", 2 messages with "key-3" and 2 messages with "key-4"
producer.newMessage().key("key-1").value("message-1-1").send();
producer.newMessage().key("key-1").value("message-1-2").send();
producer.newMessage().key("key-1").value("message-1-3").send();
producer.newMessage().key("key-2").value("message-2-1").send();
producer.newMessage().key("key-2").value("message-2-2").send();
producer.newMessage().key("key-2").value("message-2-3").send();
producer.newMessage().key("key-3").value("message-3-1").send();
producer.newMessage().key("key-3").value("message-3-2").send();
producer.newMessage().key("key-4").value("message-4-1").send();
producer.newMessage().key("key-4").value("message-4-2").send();
```

#### Exclusive

Create a new consumer and subscribe with the `Exclusive` subscription type.

```java
Consumer<byte[]> consumer = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .subscriptionType(SubscriptionType.Exclusive)
        .subscribe();
```

Only the first consumer is allowed to attach to the subscription; other consumers receive an error. The first consumer receives all 10 messages, and the consuming order is the same as the producing order.

:::note

If the topic is a partitioned topic, the first consumer subscribes to all partitions of the topic; other consumers are not assigned any partitions and receive an error.

:::

#### Failover

Create new consumers and subscribe with the `Failover` subscription type.

```java
Consumer<byte[]> consumer1 = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .subscriptionType(SubscriptionType.Failover)
        .subscribe();
Consumer<byte[]> consumer2 = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .subscriptionType(SubscriptionType.Failover)
        .subscribe();
// consumer1 is the active consumer, consumer2 is the standby consumer.
// consumer1 receives 5 messages and then crashes, consumer2 takes over as the active consumer.
```

Multiple consumers can attach to the same subscription, yet only the first consumer is active, and the others are standby. When the active consumer is disconnected, messages will be dispatched to one of the standby consumers, and that standby consumer then becomes the active consumer.

If the first active consumer is disconnected after receiving 5 messages, the standby consumer takes over. consumer1 will have received:

```
("key-1", "message-1-1")
("key-1", "message-1-2")
("key-1", "message-1-3")
("key-2", "message-2-1")
("key-2", "message-2-2")
```

consumer2 will receive:

```
("key-2", "message-2-3")
("key-3", "message-3-1")
("key-3", "message-3-2")
("key-4", "message-4-1")
("key-4", "message-4-2")
```

:::note

If a topic is a partitioned topic, each partition has only one active consumer: messages of one partition are distributed to only one consumer, and messages of multiple partitions are distributed to multiple consumers.

:::

#### Shared

Create new consumers and subscribe with the `Shared` subscription type.

```java
Consumer<byte[]> consumer1 = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .subscriptionType(SubscriptionType.Shared)
        .subscribe();

Consumer<byte[]> consumer2 = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .subscriptionType(SubscriptionType.Shared)
        .subscribe();
// Both consumer1 and consumer2 are active consumers.
```

In the Shared subscription type, multiple consumers can attach to the same subscription and messages are delivered in a round-robin distribution across consumers.

If the broker dispatches only one message at a time, consumer1 receives the following information.

```
("key-1", "message-1-1")
("key-1", "message-1-3")
("key-2", "message-2-2")
("key-3", "message-3-1")
("key-4", "message-4-1")
```

consumer2 receives the following information.

```
("key-1", "message-1-2")
("key-2", "message-2-1")
("key-2", "message-2-3")
("key-3", "message-3-2")
("key-4", "message-4-2")
```

The `Shared` subscription type differs from the `Exclusive` and `Failover` subscription types: it offers better flexibility, but cannot provide an ordering guarantee.

#### Key_shared

This subscription type was introduced in the 2.4.0 release. Create new consumers and subscribe with the `Key_Shared` subscription type.

```java
Consumer<byte[]> consumer1 = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .subscriptionType(SubscriptionType.Key_Shared)
        .subscribe();

Consumer<byte[]> consumer2 = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .subscriptionType(SubscriptionType.Key_Shared)
        .subscribe();
// Both consumer1 and consumer2 are active consumers.
```

Just like in the `Shared` subscription type, all consumers in the `Key_Shared` subscription type can attach to the same subscription. But the `Key_Shared` subscription type differs from `Shared`: messages with the same key are delivered to only one consumer, in order. By default, you do not know in advance which keys will be assigned to which consumer, but a key is only ever assigned to one consumer at a time. One possible distribution of messages between the two consumers follows.

consumer1 receives the following information.

```
("key-1", "message-1-1")
("key-1", "message-1-2")
("key-1", "message-1-3")
("key-3", "message-3-1")
("key-3", "message-3-2")
```

consumer2 receives the following information.

```
("key-2", "message-2-1")
("key-2", "message-2-2")
("key-2", "message-2-3")
("key-4", "message-4-1")
("key-4", "message-4-2")
```

If batching is enabled at the producer side, messages with different keys are added to the same batch by default, and the broker dispatches the batch as a whole to one consumer. The default batching mechanism may therefore break the guaranteed per-key message distribution semantics of the Key_Shared subscription. To preserve them, the producer needs to use the `KeyBasedBatcher`.

```java
Producer<byte[]> producer = client.newProducer()
        .topic("my-topic")
        .batcherBuilder(BatcherBuilder.KEY_BASED)
        .create();
```

Or the producer can disable batching.

```java
Producer<byte[]> producer = client.newProducer()
        .topic("my-topic")
        .enableBatching(false)
        .create();
```

:::note

If the message key is not specified, messages without a key are dispatched to one consumer in order by default.
- -::: - -## Reader - -With the [reader interface](concepts-clients.md#reader-interface), Pulsar clients can "manually position" themselves within a topic and reading all messages from a specified message onward. The Pulsar API for Java enables you to create {@inject: javadoc:Reader:/client/org/apache/pulsar/client/api/Reader} objects by specifying a topic and a {@inject: javadoc:MessageId:/client/org/apache/pulsar/client/api/MessageId}. - -The following is an example. - -```java - -byte[] msgIdBytes = // Some message ID byte array -MessageId id = MessageId.fromByteArray(msgIdBytes); -Reader reader = pulsarClient.newReader() - .topic(topic) - .startMessageId(id) - .create(); - -while (true) { - Message message = reader.readNext(); - // Process message -} - -``` - -In the example above, a `Reader` object is instantiated for a specific topic and message (by ID); the reader iterates over each message in the topic after the message is identified by `msgIdBytes` (how that value is obtained depends on the application). - -The code sample above shows pointing the `Reader` object to a specific message (by ID), but you can also use `MessageId.earliest` to point to the earliest available message on the topic of `MessageId.latest` to point to the most recent available message. - -### Configure reader -When you create a reader, you can use the `loadConf` configuration. The following parameters are available in `loadConf`. - -| Name | Type|
-| Name | Type | Description | Default
-|---|---|---|---
-`topicName`|String|Topic name.|None
-`receiverQueueSize`|int|Size of a consumer's receiver queue, that is, the number of messages that can be accumulated by a consumer before an application calls `receive`. A value higher than the default increases consumer throughput, though at the expense of more memory utilization.|1000
-`readerListener`|ReaderListener<T>|A listener that is called for each received message.|None
-`readerName`|String|Reader name.|null
-`subscriptionName`|String|Subscription name.|When there is a single topic, the default subscription name is `"reader-" + 10-digit UUID`. When there are multiple topics, the default subscription name is `"multiTopicsReader-" + 10-digit UUID`.
-`subscriptionRolePrefix`|String|Prefix of subscription role.|null
-`cryptoKeyReader`|CryptoKeyReader|Interface that abstracts the access to a key store.|null
-`cryptoFailureAction`|ConsumerCryptoFailureAction|The action the consumer takes when it receives a message that cannot be decrypted:<br /><br />- **FAIL**: the default option; fail messages until crypto succeeds.<br />- **DISCARD**: silently acknowledge the message and do not deliver it to the application.<br />- **CONSUME**: deliver encrypted messages to the application. It is the application's responsibility to decrypt the message. If the message cannot be decrypted, message decompression fails, and if the message contains batch messages, the client is not able to retrieve the individual messages in the batch.<br /><br />A delivered encrypted message contains an `EncryptionContext` with the encryption and compression information that the application can use to decrypt the consumed message payload.|ConsumerCryptoFailureAction.FAIL
-`readCompacted`|boolean|If `readCompacted` is enabled, a consumer reads messages from a compacted topic rather than the full message backlog of the topic.<br /><br />A consumer only sees the latest value for each key in the compacted topic, up until the point in the topic where the backlog has been compacted. Beyond that point, messages are sent as normal.<br /><br />`readCompacted` can only be enabled on subscriptions to persistent topics that have a single active consumer (for example, failover or exclusive subscriptions).<br /><br />Attempting to enable it on subscriptions to non-persistent topics or on shared subscriptions leads to the subscription call throwing a `PulsarClientException`.|false
-`resetIncludeHead`|boolean|If set to true, the first message to be returned is the one specified by `messageId`.

    If set to false, the first message to be returned is the one next to the message specified by `messageId`.|false - -### Sticky key range reader - -In sticky key range reader, broker will only dispatch messages which hash of the message key contains by the specified key hash range. Multiple key hash ranges can be specified on a reader. - -The following is an example to create a sticky key range reader. - -```java - -pulsarClient.newReader() - .topic(topic) - .startMessageId(MessageId.earliest) - .keyHashRange(Range.of(0, 10000), Range.of(20001, 30000)) - .create(); - -``` - -Total hash range size is 65536, so the max end of the range should be less than or equal to 65535. - - -## TableView - -The TableView interface serves an encapsulated access pattern, providing a continuously updated key-value map view of the compacted topic data. Messages without keys will be ignored. - -With TableView, Pulsar clients can fetch all the message updates from a topic and construct a map with the latest values of each key. These values can then be used to build a local cache of data. In addition, you can register consumers with the TableView by specifying a listener to perform a scan of the map and then receive notifications when new messages are received. Consequently, event handling can be triggered to serve use cases, such as event-driven applications and message monitoring. - -> **Note:** Each TableView uses one Reader instance per partition, and reads the topic starting from the compacted view by default. It is highly recommended to enable automatic compaction by [configuring the topic compaction policies](cookbooks-compaction.md#configuring-compaction-to-run-automatically) for the given topic or namespace. More frequent compaction results in shorter startup times because less data is replayed to reconstruct the TableView of the topic. - -The following figure illustrates the dynamic construction of a TableView updated with newer values of each key. -![TableView](/assets/tableview.png) - -### Configure TableView - -The following is an example of how to configure a TableView. - -```java - -TableView tv = client.newTableViewBuilder(Schema.STRING) - .topic("my-tableview") - .create() - -``` - -You can use the available parameters in the `loadConf` configuration or related [API](/api/client/2.10.0-SNAPSHOT/org/apache/pulsar/client/api/TableViewBuilder.html) to customize your TableView. - -| Name | Type| Required? |
    Description
    | Default -|---|---|---|---|--- -| `topic` | string | yes | The topic name of the TableView. | N/A -| `autoUpdatePartitionInterval` | int | no | The interval to check for newly added partitions. | 60 (seconds) - -### Register listeners - -You can register listeners for both existing messages on a topic and new messages coming into the topic by using `forEachAndListen`, and specify to perform operations for all existing messages by using `forEach`. - -The following is an example of how to register listeners with TableView. - -```java - -// Register listeners for all existing and incoming messages -tv.forEachAndListen((key, value) -> /*operations on all existing and incoming messages*/) - -// Register action for all existing messages -tv.forEach((key, value) -> /*operations on all existing messages*/) - -``` - -## Schema - -In Pulsar, all message data consists of byte arrays "under the hood." [Message schemas](schema-get-started.md) enable you to use other types of data when constructing and handling messages (from simple types like strings to more complex, application-specific types). If you construct, say, a [producer](#producer) without specifying a schema, then the producer can only produce messages of type `byte[]`. The following is an example. - -```java - -Producer producer = client.newProducer() - .topic(topic) - .create(); - -``` - -The producer above is equivalent to a `Producer` (in fact, you should *always* explicitly specify the type). If you'd like to use a producer for a different type of data, you'll need to specify a **schema** that informs Pulsar which data type will be transmitted over the [topic](reference-terminology.md#topic). - -### AvroBaseStructSchema example - -Let's say that you have a `SensorReading` class that you'd like to transmit over a Pulsar topic: - -```java - -public class SensorReading { - public float temperature; - - public SensorReading(float temperature) { - this.temperature = temperature; - } - - // A no-arg constructor is required - public SensorReading() { - } - - public float getTemperature() { - return temperature; - } - - public void setTemperature(float temperature) { - this.temperature = temperature; - } -} - -``` - -You could then create a `Producer` (or `Consumer`) like this: - -```java - -Producer producer = client.newProducer(JSONSchema.of(SensorReading.class)) - .topic("sensor-readings") - .create(); - -``` - -The following schema formats are currently available for Java: - -* No schema or the byte array schema (which can be applied using `Schema.BYTES`): - - ```java - - Producer bytesProducer = client.newProducer(Schema.BYTES) - .topic("some-raw-bytes-topic") - .create(); - - ``` - - Or, equivalently: - - ```java - - Producer bytesProducer = client.newProducer() - .topic("some-raw-bytes-topic") - .create(); - - ``` - -* `String` for normal UTF-8-encoded string data. Apply the schema using `Schema.STRING`: - - ```java - - Producer stringProducer = client.newProducer(Schema.STRING) - .topic("some-string-topic") - .create(); - - ``` - -* Create JSON schemas for POJOs using `Schema.JSON`. The following is an example. - - ```java - - Producer pojoProducer = client.newProducer(Schema.JSON(MyPojo.class)) - .topic("some-pojo-topic") - .create(); - - ``` - -* Generate Protobuf schemas using `Schema.PROTOBUF`. 
The following example shows how to create the Protobuf schema and use it to instantiate a new producer: - - ```java - - Producer protobufProducer = client.newProducer(Schema.PROTOBUF(MyProtobuf.class)) - .topic("some-protobuf-topic") - .create(); - - ``` - -* Define Avro schemas with `Schema.AVRO`. The following code snippet demonstrates how to create and use Avro schema. - - ```java - - Producer avroProducer = client.newProducer(Schema.AVRO(MyAvro.class)) - .topic("some-avro-topic") - .create(); - - ``` - -### ProtobufNativeSchema example - -For example of ProtobufNativeSchema, see [`SchemaDefinition` in `Complex type`](schema-understand.md#complex-type). - -## Authentication - -Pulsar currently supports three authentication schemes: [TLS](security-tls-authentication.md), [Athenz](security-athenz.md), and [Oauth2](security-oauth2.md). You can use the Pulsar Java client with all of them. - -### TLS Authentication - -To use [TLS](security-tls-authentication.md), `enableTls` method is deprecated and you need to use "pulsar+ssl://" in serviceUrl to enable, point your Pulsar client to a TLS cert path, and provide paths to cert and key files. - -The following is an example. - -```java - -Map authParams = new HashMap(); -authParams.put("tlsCertFile", "/path/to/client-cert.pem"); -authParams.put("tlsKeyFile", "/path/to/client-key.pem"); - -Authentication tlsAuth = AuthenticationFactory - .create(AuthenticationTls.class.getName(), authParams); - -PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar+ssl://my-broker.com:6651") - .tlsTrustCertsFilePath("/path/to/cacert.pem") - .authentication(tlsAuth) - .build(); - -``` - -### Athenz - -To use [Athenz](security-athenz.md) as an authentication provider, you need to [use TLS](#tls-authentication) and provide values for four parameters in a hash: - -* `tenantDomain` -* `tenantService` -* `providerDomain` -* `privateKey` - -You can also set an optional `keyId`. The following is an example. - -```java - -Map authParams = new HashMap(); -authParams.put("tenantDomain", "shopping"); // Tenant domain name -authParams.put("tenantService", "some_app"); // Tenant service name -authParams.put("providerDomain", "pulsar"); // Provider domain name -authParams.put("privateKey", "file:///path/to/private.pem"); // Tenant private key path -authParams.put("keyId", "v1"); // Key id for the tenant private key (optional, default: "0") - -Authentication athenzAuth = AuthenticationFactory - .create(AuthenticationAthenz.class.getName(), authParams); - -PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar+ssl://my-broker.com:6651") - .tlsTrustCertsFilePath("/path/to/cacert.pem") - .authentication(athenzAuth) - .build(); - -``` - -> #### Supported pattern formats -> The `privateKey` parameter supports the following three pattern formats: -> * `file:///path/to/file` -> * `file:/path/to/file` -> * `data:application/x-pem-file;base64,` - -### Oauth2 - -The following example shows how to use [Oauth2](security-oauth2.md) as an authentication provider for the Pulsar Java client. - -You can use the factory method to configure authentication for Pulsar Java client. - -```java - -PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://broker.example.com:6650/") - .authentication( - AuthenticationFactoryOAuth2.clientCredentials(this.issuerUrl, this.credentialsUrl, this.audience)) - .build(); - -``` - -In addition, you can also use the encoded parameters to configure authentication for Pulsar Java client. 
- -```java - -Authentication auth = AuthenticationFactory - .create(AuthenticationOAuth2.class.getName(), "{"type":"client_credentials","privateKey":"...","issuerUrl":"...","audience":"..."}"); -PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://broker.example.com:6650/") - .authentication(auth) - .build(); - -``` - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/client-libraries-node.md b/site2/website/versioned_docs/version-2.10.0-deprecated/client-libraries-node.md deleted file mode 100644 index a023b51d8ceb09..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/client-libraries-node.md +++ /dev/null @@ -1,652 +0,0 @@ ---- -id: client-libraries-node -title: The Pulsar Node.js client -sidebar_label: "Node.js" -original_id: client-libraries-node ---- - -The Pulsar Node.js client can be used to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Node.js. - -All the methods in [producers](#producers), [consumers](#consumers), and [readers](#readers) of a Node.js client are thread-safe. - -For 1.3.0 or later versions, [type definitions](https://github.com/apache/pulsar-client-node/blob/master/index.d.ts) used in TypeScript are available. - -## Installation - -You can install the [`pulsar-client`](https://www.npmjs.com/package/pulsar-client) library via [npm](https://www.npmjs.com/). - -### Requirements -Pulsar Node.js client library is based on the C++ client library. -Follow [these instructions](client-libraries-cpp.md#compilation) and install the Pulsar C++ client library. - -### Compatibility - -Compatibility between each version of the Node.js client and the C++ client is as follows: - -| Node.js client | C++ client | -| :------------- | :------------- | -| 1.0.0 | 2.3.0 or later | -| 1.1.0 | 2.4.0 or later | -| 1.2.0 | 2.5.0 or later | - -If an incompatible version of the C++ client is installed, you may fail to build or run this library. - -### Installation using npm - -Install the `pulsar-client` library via [npm](https://www.npmjs.com/): - -```shell - -$ npm install pulsar-client - -``` - -:::note - -Also, this library works only in Node.js 10.x or later because it uses the [`node-addon-api`](https://github.com/nodejs/node-addon-api) module to wrap the C++ library. - -::: - -## Connection URLs -To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL. - -Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme and have a default port of 6650. Here is an example for `localhost`: - -```http - -pulsar://localhost:6650 - -``` - -A URL for a production Pulsar cluster may look something like this: - -```http - -pulsar://pulsar.us-west.example.com:6650 - -``` - -If you are using [TLS encryption](security-tls-transport.md) or [TLS Authentication](security-tls-authentication.md), the URL looks like this: - -```http - -pulsar+ssl://pulsar.us-west.example.com:6651 - -``` - -## Create a client - -In order to interact with Pulsar, you first need a client object. You can create a client instance using a `new` operator and the `Client` method, passing in a client options object (more on configuration [below](#client-configuration)). 
- -Here is an example: - -```JavaScript - -const Pulsar = require('pulsar-client'); - -(async () => { - const client = new Pulsar.Client({ - serviceUrl: 'pulsar://localhost:6650', - }); - - await client.close(); -})(); - -``` - -### Client configuration - -The following configurable parameters are available for Pulsar clients: - -| Parameter | Description | Default | -| :-------- | :---------- | :------ | -| `serviceUrl` | The connection URL for the Pulsar cluster. See [above](#connection-urls) for more info. | | -| `authentication` | Configure the authentication provider. (default: no authentication). See [TLS Authentication](security-tls-authentication.md) for more info. | | -| `operationTimeoutSeconds` | The timeout for Node.js client operations (creating producers, subscribing to and unsubscribing from [topics](reference-terminology.md#topic)). Retries occur until this threshold is reached, at which point the operation fails. | 30 | -| `ioThreads` | The number of threads to use for handling connections to Pulsar [brokers](reference-terminology.md#broker). | 1 | -| `messageListenerThreads` | The number of threads used by message listeners ([consumers](#consumers) and [readers](#readers)). | 1 | -| `concurrentLookupRequest` | The number of concurrent lookup requests that can be sent on each broker connection. Setting a maximum helps to keep from overloading brokers. You should set values over the default of 50000 only if the client needs to produce and/or subscribe to thousands of Pulsar topics. | 50000 | -| `tlsTrustCertsFilePath` | The file path for the trusted TLS certificate. | | -| `tlsValidateHostname` | The boolean value of setup whether to enable TLS hostname verification. | `false` | -| `tlsAllowInsecureConnection` | The boolean value of setup whether the Pulsar client accepts untrusted TLS certificate from broker. | `false` | -| `statsIntervalInSeconds` | Interval between each stat info. Stats is activated with positive statsInterval. The value should be set to 1 second at least | 600 | -| `log` | A function that is used for logging. | `console.log` | - -## Producers - -Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Node.js producers using a producer configuration object. - -Here is an example: - -```JavaScript - -const producer = await client.createProducer({ - topic: 'my-topic', // or 'my-tenant/my-namespace/my-topic' to specify topic's tenant and namespace -}); - -await producer.send({ - data: Buffer.from("Hello, Pulsar"), -}); - -await producer.close(); - -``` - -> #### Promise operation -> When you create a new Pulsar producer, the operation returns `Promise` object and get producer instance or an error through executor function. -> In this example, using await operator instead of executor function. - -### Producer operations - -Pulsar Node.js producers have the following methods available: - -| Method | Description | Return type | -| :----- | :---------- | :---------- | -| `send(Object)` | Publishes a [message](#messages) to the producer's topic. When the message is successfully acknowledged by the Pulsar broker, or an error is thrown, the Promise object whose result is the message ID runs executor function. | `Promise` | -| `flush()` | Sends message from send queue to Pulsar broker. When the message is successfully acknowledged by the Pulsar broker, or an error is thrown, the Promise object runs executor function. | `Promise` | -| `close()` | Closes the producer and releases all resources allocated to it. 
Once `close()` is called, no more messages are accepted from the publisher. This method returns a Promise object. It runs the executor function when all pending publish requests are persisted by Pulsar. If an error is thrown, no pending writes are retried. | `Promise` | -| `getProducerName()` | Getter method of the producer name. | `string` | -| `getTopic()` | Getter method of the name of the topic. | `string` | - -### Producer configuration - -| Parameter | Description | Default | -| :-------- | :---------- | :------ | -| `topic` | The Pulsar [topic](reference-terminology.md#topic) to which the producer publishes messages. The topic format is `` or `//`. For example, `sample/ns1/my-topic`. | | -| `producerName` | A name for the producer. If you do not explicitly assign a name, Pulsar automatically generates a globally unique name. If you choose to explicitly assign a name, it needs to be unique across *all* Pulsar clusters, otherwise the creation operation throws an error. | | -| `sendTimeoutMs` | When publishing a message to a topic, the producer waits for an acknowledgment from the responsible Pulsar [broker](reference-terminology.md#broker). If a message is not acknowledged within the threshold set by this parameter, an error is thrown. If you set `sendTimeoutMs` to -1, the timeout is set to infinity (and thus removed). Removing the send timeout is recommended when using Pulsar's [message de-duplication](cookbooks-deduplication.md) feature. | 30000 | -| `initialSequenceId` | The initial sequence ID of the message. When producer send message, add sequence ID to message. The ID is increased each time to send. | | -| `maxPendingMessages` | The maximum size of the queue holding pending messages (i.e. messages waiting to receive an acknowledgment from the [broker](reference-terminology.md#broker)). By default, when the queue is full all calls to the `send` method fails *unless* `blockIfQueueFull` is set to `true`. | 1000 | -| `maxPendingMessagesAcrossPartitions` | The maximum size of the sum of partition's pending queue. | 50000 | -| `blockIfQueueFull` | If set to `true`, the producer's `send` method waits when the outgoing message queue is full rather than failing and throwing an error (the size of that queue is dictated by the `maxPendingMessages` parameter); if set to `false` (the default), `send` operations fails and throw a error when the queue is full. | `false` | -| `messageRoutingMode` | The message routing logic (for producers on [partitioned topics](concepts-messaging.md#partitioned-topics)). This logic is applied only when no key is set on messages. The available options are: round robin (`RoundRobinDistribution`), or publishing all messages to a single partition (`UseSinglePartition`, the default). | `UseSinglePartition` | -| `hashingScheme` | The hashing function that determines the partition on which a particular message is published (partitioned topics only). The available options are: `JavaStringHash` (the equivalent of `String.hashCode()` in Java), `Murmur3_32Hash` (applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function), or `BoostHash` (applies the hashing function from C++'s [Boost](https://www.boost.org/doc/libs/1_62_0/doc/html/hash.html) library). | `BoostHash` | -| `compressionType` | The message data compression type used by the producer. The available options are [`LZ4`](https://github.com/lz4/lz4), and [`Zlib`](https://zlib.net/), [ZSTD](https://github.com/facebook/zstd/), [SNAPPY](https://github.com/google/snappy/). 
| Compression None | -| `batchingEnabled` | If set to `true`, the producer send message as batch. | `true` | -| `batchingMaxPublishDelayMs` | The maximum time of delay sending message in batching. | 10 | -| `batchingMaxMessages` | The maximum size of sending message in each time of batching. | 1000 | -| `properties` | The metadata of producer. | | - -### Producer example - -This example creates a Node.js producer for the `my-topic` topic and sends 10 messages to that topic: - -```JavaScript - -const Pulsar = require('pulsar-client'); - -(async () => { - // Create a client - const client = new Pulsar.Client({ - serviceUrl: 'pulsar://localhost:6650', - }); - - // Create a producer - const producer = await client.createProducer({ - topic: 'my-topic', - }); - - // Send messages - for (let i = 0; i < 10; i += 1) { - const msg = `my-message-${i}`; - producer.send({ - data: Buffer.from(msg), - }); - console.log(`Sent message: ${msg}`); - } - await producer.flush(); - - await producer.close(); - await client.close(); -})(); - -``` - -## Consumers - -Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on that topic/those topics. You can [configure](#consumer-configuration) Node.js consumers using a consumer configuration object. - -Here is an example: - -```JavaScript - -const consumer = await client.subscribe({ - topic: 'my-topic', - subscription: 'my-subscription', -}); - -const msg = await consumer.receive(); -console.log(msg.getData().toString()); -consumer.acknowledge(msg); - -await consumer.close(); - -``` - -> #### Promise operation -> When you create a new Pulsar consumer, the operation returns `Promise` object and get consumer instance or an error through executor function. -> In this example, using await operator instead of executor function. - -### Consumer operations - -Pulsar Node.js consumers have the following methods available: - -| Method | Description | Return type | -| :----- | :---------- | :---------- | -| `receive()` | Receives a single message from the topic. When the message is available, the Promise object run executor function and get message object. | `Promise` | -| `receive(Number)` | Receives a single message from the topic with specific timeout in milliseconds. | `Promise` | -| `acknowledge(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message object. | `void` | -| `acknowledgeId(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID object. | `void` | -| `acknowledgeCumulative(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message. The `acknowledgeCumulative` method returns void, and send the ack to the broker asynchronously. After that, the messages are *not* redelivered to the consumer. Cumulative acking can not be used with a [shared](concepts-messaging.md#shared) subscription type. | `void` | -| `acknowledgeCumulativeId(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message ID. | `void` | -| `negativeAcknowledge(Message)`| [Negatively acknowledges](reference-terminology.md#negative-acknowledgment-nack) a message to the Pulsar broker by message object. 
| `void` | -| `negativeAcknowledgeId(MessageId)` | [Negatively acknowledges](reference-terminology.md#negative-acknowledgment-nack) a message to the Pulsar broker by message ID object. | `void` | -| `close()` | Closes the consumer, disabling its ability to receive messages from the broker. | `Promise` | -| `unsubscribe()` | Unsubscribes the subscription. | `Promise` | - -### Consumer configuration - -| Parameter | Description | Default | -| :-------- | :---------- | :------ | -| `topic` | The Pulsar topic on which the consumer establishes a subscription and listen for messages. | | -| `topics` | The array of topics. | | -| `topicsPattern` | The regular expression for topics. | | -| `subscription` | The subscription name for this consumer. | | -| `subscriptionType` | Available options are `Exclusive`, `Shared`, `Key_Shared`, and `Failover`. | `Exclusive` | -| `subscriptionInitialPosition` | Initial position at which to set cursor when subscribing to a topic at first time. | `SubscriptionInitialPosition.Latest` | -| `ackTimeoutMs` | Acknowledge timeout in milliseconds. | 0 | -| `nAckRedeliverTimeoutMs` | Delay to wait before redelivering messages that failed to be processed. | 60000 | -| `receiverQueueSize` | Sets the size of the consumer's receiver queue, i.e. the number of messages that can be accumulated by the consumer before the application calls `receive`. A value higher than the default of 1000 could increase consumer throughput, though at the expense of more memory utilization. | 1000 | -| `receiverQueueSizeAcrossPartitions` | Set the max total receiver queue size across partitions. This setting is used to reduce the receiver queue size for individual partitions if the total exceeds this value. | 50000 | -| `consumerName` | The name of consumer. Currently(v2.4.1), [failover](concepts-messaging.md#failover) mode use consumer name in ordering. | | -| `properties` | The metadata of consumer. | | -| `listener`| A listener that is called for a message received. | | -| `readCompacted`| If enabling `readCompacted`, a consumer reads messages from a compacted topic rather than reading a full message backlog of a topic.

A consumer only sees the latest value for each key in the compacted topic, up until the point in the topic where the backlog has been compacted. Beyond that point, messages are sent as normal.<br /><br />`readCompacted` can only be enabled on subscriptions to persistent topics that have a single active consumer (such as failover or exclusive subscriptions).<br /><br />
    Attempting to enable it on subscriptions to non-persistent topics or on shared subscriptions leads to a subscription call throwing a `PulsarClientException`. | false | - -### Consumer example - -This example creates a Node.js consumer with the `my-subscription` subscription on the `my-topic` topic, receives messages, prints the content that arrive, and acknowledges each message to the Pulsar broker for 10 times: - -```JavaScript - -const Pulsar = require('pulsar-client'); - -(async () => { - // Create a client - const client = new Pulsar.Client({ - serviceUrl: 'pulsar://localhost:6650', - }); - - // Create a consumer - const consumer = await client.subscribe({ - topic: 'my-topic', - subscription: 'my-subscription', - subscriptionType: 'Exclusive', - }); - - // Receive messages - for (let i = 0; i < 10; i += 1) { - const msg = await consumer.receive(); - console.log(msg.getData().toString()); - consumer.acknowledge(msg); - } - - await consumer.close(); - await client.close(); -})(); - -``` - -Instead a consumer can be created with `listener` to process messages. - -```JavaScript - -// Create a consumer -const consumer = await client.subscribe({ - topic: 'my-topic', - subscription: 'my-subscription', - subscriptionType: 'Exclusive', - listener: (msg, msgConsumer) => { - console.log(msg.getData().toString()); - msgConsumer.acknowledge(msg); - }, -}); - -``` - -:::note - -Pulsar Node.js client uses [AsyncWorker](https://github.com/nodejs/node-addon-api/blob/main/doc/async_worker). Asynchronous operations such as creating consumers/producers and receiving/sending messages are performed in worker threads. -Until completion of these operations, worker threads are blocked. -Since there are only 4 worker threads by default, a called method may never complete. -To avoid this situation, you can set `UV_THREADPOOL_SIZE` to increase the number of worker threads, or define `listener` instead of calling `receive()` many times. - -::: - -## Readers - -Pulsar readers process messages from Pulsar topics. Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recently unacked message). You can [configure](#reader-configuration) Node.js readers using a reader configuration object. - -Here is an example: - -```JavaScript - -const reader = await client.createReader({ - topic: 'my-topic', - startMessageId: Pulsar.MessageId.earliest(), -}); - -const msg = await reader.readNext(); -console.log(msg.getData().toString()); - -await reader.close(); - -``` - -### Reader operations - -Pulsar Node.js readers have the following methods available: - -| Method | Description | Return type | -| :----- | :---------- | :---------- | -| `readNext()` | Receives the next message on the topic (analogous to the `receive` method for [consumers](#consumer-operations)). When the message is available, the Promise object run executor function and get message object. | `Promise` | -| `readNext(Number)` | Receives a single message from the topic with specific timeout in milliseconds. | `Promise` | -| `hasNext()` | Return whether the broker has next message in target topic. | `Boolean` | -| `close()` | Closes the reader, disabling its ability to receive messages from the broker. 
| `Promise` | - -### Reader configuration - -| Parameter | Description | Default | -| :-------- | :---------- | :------ | -| `topic` | The Pulsar [topic](reference-terminology.md#topic) on which the reader establishes a subscription and listen for messages. | | -| `startMessageId` | The initial reader position, i.e. the message at which the reader begins processing messages. The options are `Pulsar.MessageId.earliest` (the earliest available message on the topic), `Pulsar.MessageId.latest` (the latest available message on the topic), or a message ID object for a position that is not earliest or latest. | | -| `receiverQueueSize` | Sets the size of the reader's receiver queue, i.e. the number of messages that can be accumulated by the reader before the application calls `readNext`. A value higher than the default of 1000 could increase reader throughput, though at the expense of more memory utilization. | 1000 | -| `readerName` | The name of the reader. | | -| `subscriptionRolePrefix` | The subscription role prefix. | | -| `readCompacted` | If enabling `readCompacted`, a consumer reads messages from a compacted topic rather than reading a full message backlog of a topic.

A consumer only sees the latest value for each key in the compacted topic, up until the point in the topic where the backlog has been compacted. Beyond that point, messages are sent as normal.<br /><br />`readCompacted` can only be enabled on subscriptions to persistent topics that have a single active consumer (such as failover or exclusive subscriptions).<br /><br />
    Attempting to enable it on subscriptions to non-persistent topics or on shared subscriptions leads to a subscription call throwing a `PulsarClientException`. | `false` | - - -### Reader example - -This example creates a Node.js reader with the `my-topic` topic, reads messages, and prints the content that arrive for 10 times: - -```JavaScript - -const Pulsar = require('pulsar-client'); - -(async () => { - // Create a client - const client = new Pulsar.Client({ - serviceUrl: 'pulsar://localhost:6650', - operationTimeoutSeconds: 30, - }); - - // Create a reader - const reader = await client.createReader({ - topic: 'my-topic', - startMessageId: Pulsar.MessageId.earliest(), - }); - - // read messages - for (let i = 0; i < 10; i += 1) { - const msg = await reader.readNext(); - console.log(msg.getData().toString()); - } - - await reader.close(); - await client.close(); -})(); - -``` - -## Messages - -In Pulsar Node.js client, you have to construct producer message object for producer. - -Here is an example message: - -```JavaScript - -const msg = { - data: Buffer.from('Hello, Pulsar'), - partitionKey: 'key1', - properties: { - 'foo': 'bar', - }, - eventTimestamp: Date.now(), - replicationClusters: [ - 'cluster1', - 'cluster2', - ], -} - -await producer.send(msg); - -``` - -The following keys are available for producer message objects: - -| Parameter | Description | -| :-------- | :---------- | -| `data` | The actual data payload of the message. | -| `properties` | A Object for any application-specific metadata attached to the message. | -| `eventTimestamp` | The timestamp associated with the message. | -| `sequenceId` | The sequence ID of the message. | -| `partitionKey` | The optional key associated with the message (particularly useful for things like topic compaction). | -| `replicationClusters` | The clusters to which this message is replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default. | -| `deliverAt` | The absolute timestamp at or after which the message is delivered. | | -| `deliverAfter` | The relative delay after which the message is delivered. | | - -### Message object operations - -In Pulsar Node.js client, you can receive (or read) message object as consumer (or reader). - -The message object have the following methods available: - -| Method | Description | Return type | -| :----- | :---------- | :---------- | -| `getTopicName()` | Getter method of topic name. | `String` | -| `getProperties()` | Getter method of properties. | `Array` | -| `getData()` | Getter method of message data. | `Buffer` | -| `getMessageId()` | Getter method of [message id object](#message-id-object-operations). | `Object` | -| `getPublishTimestamp()` | Getter method of publish timestamp. | `Number` | -| `getEventTimestamp()` | Getter method of event timestamp. | `Number` | -| `getRedeliveryCount()` | Getter method of redelivery count. | `Number` | -| `getPartitionKey()` | Getter method of partition key. | `String` | - -### Message ID object operations - -In Pulsar Node.js client, you can get message id object from message object. - -The message id object have the following methods available: - -| Method | Description | Return type | -| :----- | :---------- | :---------- | -| `serialize()` | Serialize the message id into a Buffer for storing. | `Buffer` | -| `toString()` | Get message id as String. | `String` | - -The client has static method of message id object. 
You can access it as `Pulsar.MessageId.someStaticMethod` too. - -The following static methods are available for the message id object: - -| Method | Description | Return type | -| :----- | :---------- | :---------- | -| `earliest()` | MessageId representing the earliest, or oldest available message stored in the topic. | `Object` | -| `latest()` | MessageId representing the latest, or last published message in the topic. | `Object` | -| `deserialize(Buffer)` | Deserialize a message id object from a Buffer. | `Object` | - -## End-to-end encryption - -[End-to-end encryption](cookbooks-encryption.md#docsNav) allows applications to encrypt messages at producers and decrypt at consumers. - -### Configuration - -If you want to use the end-to-end encryption feature in the Node.js client, you need to configure `publicKeyPath` and `privateKeyPath` for both producer and consumer. - -``` - -publicKeyPath: "./public.pem" -privateKeyPath: "./private.pem" - -``` - -### Tutorial - -This section provides step-by-step instructions on how to use the end-to-end encryption feature in the Node.js client. - -**Prerequisite** - -- Pulsar C++ client 2.7.1 or later - -**Step** - -1. Create both public and private key pairs. - - **Input** - - ```shell - - openssl genrsa -out private.pem 2048 - openssl rsa -in private.pem -pubout -out public.pem - - ``` - -2. Create a producer to send encrypted messages. - - **Input** - - ```nodejs - - const Pulsar = require('pulsar-client'); - - (async () => { - // Create a client - const client = new Pulsar.Client({ - serviceUrl: 'pulsar://localhost:6650', - operationTimeoutSeconds: 30, - }); - - // Create a producer - const producer = await client.createProducer({ - topic: 'persistent://public/default/my-topic', - sendTimeoutMs: 30000, - batchingEnabled: true, - publicKeyPath: "./public.pem", - privateKeyPath: "./private.pem", - encryptionKey: "encryption-key" - }); - - console.log(producer.ProducerConfig) - // Send messages - for (let i = 0; i < 10; i += 1) { - const msg = `my-message-${i}`; - producer.send({ - data: Buffer.from(msg), - }); - console.log(`Sent message: ${msg}`); - } - await producer.flush(); - - await producer.close(); - await client.close(); - })(); - - ``` - -3. Create a consumer to receive encrypted messages. - - **Input** - - ```nodejs - - const Pulsar = require('pulsar-client'); - - (async () => { - // Create a client - const client = new Pulsar.Client({ - serviceUrl: 'pulsar://172.25.0.3:6650', - operationTimeoutSeconds: 30 - }); - - // Create a consumer - const consumer = await client.subscribe({ - topic: 'persistent://public/default/my-topic', - subscription: 'sub1', - subscriptionType: 'Shared', - ackTimeoutMs: 10000, - publicKeyPath: "./public.pem", - privateKeyPath: "./private.pem" - }); - - console.log(consumer) - // Receive messages - for (let i = 0; i < 10; i += 1) { - const msg = await consumer.receive(); - console.log(msg.getData().toString()); - consumer.acknowledge(msg); - } - - await consumer.close(); - await client.close(); - })(); - - ``` - -4. Run the consumer to receive encrypted messages. - - **Input** - - ```shell - - node consumer.js - - ``` - -5. In a new terminal tab, run the producer to produce encrypted messages. - - **Input** - - ```shell - - node producer.js - - ``` - - Now you can see the producer sends messages and the consumer receives messages successfully. - - **Output** - - This is from the producer side. 
- - ``` - - Sent message: my-message-0 - Sent message: my-message-1 - Sent message: my-message-2 - Sent message: my-message-3 - Sent message: my-message-4 - Sent message: my-message-5 - Sent message: my-message-6 - Sent message: my-message-7 - Sent message: my-message-8 - Sent message: my-message-9 - - ``` - - This is from the consumer side. - - ``` - - my-message-0 - my-message-1 - my-message-2 - my-message-3 - my-message-4 - my-message-5 - my-message-6 - my-message-7 - my-message-8 - my-message-9 - - ``` - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/client-libraries-python.md b/site2/website/versioned_docs/version-2.10.0-deprecated/client-libraries-python.md deleted file mode 100644 index 10000237990443..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/client-libraries-python.md +++ /dev/null @@ -1,641 +0,0 @@ ---- -id: client-libraries-python -title: Pulsar Python client -sidebar_label: "Python" -original_id: client-libraries-python ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -Pulsar Python client library is a wrapper over the existing [C++ client library](client-libraries-cpp.md) and exposes all of the [same features](/api/cpp). You can find the code in the [Python directory](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp/python) of the C++ client code. - -All the methods in producer, consumer, and reader of a Python client are thread-safe. - -[pdoc](https://github.com/BurntSushi/pdoc)-generated API docs for the Python client are available [here](/api/python). - -## Install - -You can install the [`pulsar-client`](https://pypi.python.org/pypi/pulsar-client) library either via [PyPi](https://pypi.python.org/pypi), using [pip](#installation-using-pip), or by building the library from [source](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp). - -### Install using pip - -To install the `pulsar-client` library as a pre-built package using the [pip](https://pip.pypa.io/en/stable/) package manager: - -```shell - -$ pip install pulsar-client==@pulsar:version_number@ - -``` - -### Optional dependencies -If you install the client libraries on Linux to support services like Pulsar functions or Avro serialization, you can install optional components alongside the `pulsar-client` library. - -```shell - -# avro serialization -$ pip install pulsar-client[avro]=='@pulsar:version_number@' - -# functions runtime -$ pip install pulsar-client[functions]=='@pulsar:version_number@' - -# all optional components -$ pip install pulsar-client[all]=='@pulsar:version_number@' - -``` - -Installation via PyPi is available for the following Python versions: - -Platform | Supported Python versions -:--------|:------------------------- -MacOS
    10.13 (High Sierra), 10.14 (Mojave)
    | 2.7, 3.7, 3.8, 3.9 -Linux | 2.7, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9 - -### Install from source - -To install the `pulsar-client` library by building from source, follow [instructions](client-libraries-cpp.md#compilation) and compile the Pulsar C++ client library. That builds the Python binding for the library. - -To install the built Python bindings: - -```shell - -$ git clone https://github.com/apache/pulsar -$ cd pulsar/pulsar-client-cpp/python -$ sudo python setup.py install - -``` - -## API Reference - -The complete Python API reference is available at [api/python](/api/python). - -## Examples - -You can find a variety of Python code examples for the [pulsar-client](/pulsar-client-cpp/python) library. - -### Producer example - -The following example creates a Python producer for the `my-topic` topic and sends 10 messages on that topic: - -```python - -import pulsar - -client = pulsar.Client('pulsar://localhost:6650') - -producer = client.create_producer('my-topic') - -for i in range(10): - producer.send(('Hello-%d' % i).encode('utf-8')) - -client.close() - -``` - -### Consumer example - -The following example creates a consumer with the `my-subscription` subscription name on the `my-topic` topic, receives incoming messages, prints the content and ID of messages that arrive, and acknowledges each message to the Pulsar broker. - -```python - -import pulsar - -client = pulsar.Client('pulsar://localhost:6650') - -consumer = client.subscribe('my-topic', 'my-subscription') - -while True: - msg = consumer.receive() - try: - print("Received message '{}' id='{}'".format(msg.data(), msg.message_id())) - # Acknowledge successful processing of the message - consumer.acknowledge(msg) - except Exception: - # Message failed to be processed - consumer.negative_acknowledge(msg) - -client.close() - -``` - -This example shows how to configure negative acknowledgement. - -```python - -from pulsar import Client, schema -client = Client('pulsar://localhost:6650') -consumer = client.subscribe('negative_acks','test',schema=schema.StringSchema()) -producer = client.create_producer('negative_acks',schema=schema.StringSchema()) -for i in range(10): - print('send msg "hello-%d"' % i) - producer.send_async('hello-%d' % i, callback=None) -producer.flush() -for i in range(10): - msg = consumer.receive() - consumer.negative_acknowledge(msg) - print('receive and nack msg "%s"' % msg.data()) -for i in range(10): - msg = consumer.receive() - consumer.acknowledge(msg) - print('receive and ack msg "%s"' % msg.data()) -try: - # No more messages expected - msg = consumer.receive(100) -except: - print("no more msg") - pass - -``` - -### Reader interface example - -You can use the Pulsar Python API to use the Pulsar [reader interface](concepts-clients.md#reader-interface). Here's an example: - -```python - -# MessageId taken from a previously fetched message -msg_id = msg.message_id() - -reader = client.create_reader('my-topic', msg_id) - -while True: - msg = reader.read_next() - print("Received message '{}' id='{}'".format(msg.data(), msg.message_id())) - # No acknowledgment - -``` - -### Multi-topic subscriptions - -In addition to subscribing a consumer to a single Pulsar topic, you can also subscribe to multiple topics simultaneously. To use multi-topic subscriptions, you can supply a regular expression (regex) or a `List` of topics. If you select topics via regex, all topics must be within the same Pulsar namespace. 
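-
-For the list form, the following is a minimal sketch, not part of the original docs: the topic and subscription names are illustrative, and it assumes an existing `pulsar.Client` named `client` (the regex form is shown in the example after this sketch).
-
-```python
-
-# Subscribe one consumer to an explicit list of topics in the same namespace
-consumer = client.subscribe(['my-topic-1', 'my-topic-2'], 'my-subscription')
-while True:
-    msg = consumer.receive()
-    try:
-        print("Received message '{}' id='{}'".format(msg.data(), msg.message_id()))
-        # Acknowledge successful processing of the message
-        consumer.acknowledge(msg)
-    except Exception:
-        # Message failed to be processed
-        consumer.negative_acknowledge(msg)
-
-```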
- -The following is an example: - -```python - -import re -consumer = client.subscribe(re.compile('persistent://public/default/topic-*'), 'my-subscription') -while True: - msg = consumer.receive() - try: - print("Received message '{}' id='{}'".format(msg.data(), msg.message_id())) - # Acknowledge successful processing of the message - consumer.acknowledge(msg) - except Exception: - # Message failed to be processed - consumer.negative_acknowledge(msg) -client.close() - -``` - -## Schema - -### Supported schema types - -You can use different builtin schema types in Pulsar. All the definitions are in the `pulsar.schema` package. - -| Schema | Notes | -| ------ | ----- | -| `BytesSchema` | Get the raw payload as a `bytes` object. No serialization/deserialization are performed. This is the default schema mode | -| `StringSchema` | Encode/decode payload as a UTF-8 string. Uses `str` objects | -| `JsonSchema` | Require record definition. Serializes the record into standard JSON payload | -| `AvroSchema` | Require record definition. Serializes in AVRO format | - -### Schema definition reference - -The schema definition is done through a class that inherits from `pulsar.schema.Record`. - -This class has a number of fields which can be of either -`pulsar.schema.Field` type or another nested `Record`. All the -fields are specified in the `pulsar.schema` package. The fields -are matching the AVRO fields types. - -| Field Type | Python Type | Notes | -| ---------- | ----------- | ----- | -| `Boolean` | `bool` | | -| `Integer` | `int` | | -| `Long` | `int` | | -| `Float` | `float` | | -| `Double` | `float` | | -| `Bytes` | `bytes` | | -| `String` | `str` | | -| `Array` | `list` | Need to specify record type for items. | -| `Map` | `dict` | Key is always `String`. Need to specify value type. | - -Additionally, any Python `Enum` type can be used as a valid field type. - -#### Fields parameters - -When adding a field, you can use these parameters in the constructor. - -| Argument | Default | Notes | -| ---------- | --------| ----- | -| `default` | `None` | Set a default value for the field. Eg: `a = Integer(default=5)` | -| `required` | `False` | Mark the field as "required". It is set in the schema accordingly. | - -#### Schema definition examples - -##### Simple definition - -```python - -class Example(Record): - a = String() - b = Integer() - c = Array(String()) - i = Map(String()) - -``` - -##### Using enums - -```python - -from enum import Enum - -class Color(Enum): - red = 1 - green = 2 - blue = 3 - -class Example(Record): - name = String() - color = Color - -``` - -##### Complex types - -```python - -class MySubRecord(Record): - x = Integer() - y = Long() - z = String() - -class Example(Record): - a = String() - sub = MySubRecord() - -``` - -##### Set namespace for Avro schema - -Set the namespace for Avro Record schema using the special field `_avro_namespace`. - -```python - -class NamespaceDemo(Record): - _avro_namespace = 'xxx.xxx.xxx' - x = String() - y = Integer() - -``` - -The schema definition is like this. - -``` - -{ - 'name': 'NamespaceDemo', 'namespace': 'xxx.xxx.xxx', 'type': 'record', 'fields': [ - {'name': 'x', 'type': ['null', 'string']}, - {'name': 'y', 'type': ['null', 'int']} - ] -} - -``` - -### Declare and validate schema - -You can send messages using `BytesSchema`, `StringSchema`, `AvroSchema`, and `JsonSchema`. 
- -Before the producer is created, the Pulsar broker validates that the existing topic schema is the correct type and that the format is compatible with the schema definition of a class. If the format of the topic schema is incompatible with the schema definition, an exception occurs in the producer creation. - -Once a producer is created with a certain schema definition, it only accepts objects that are instances of the declared schema class. - -Similarly, for a consumer or reader, the consumer returns an object (which is an instance of the schema record class) rather than raw bytes. - -**Example** - -```python - -consumer = client.subscribe( - topic='my-topic', - subscription_name='my-subscription', - schema=AvroSchema(Example) ) - -while True: - msg = consumer.receive() - ex = msg.value() - try: - print("Received message a={} b={} c={}".format(ex.a, ex.b, ex.c)) - # Acknowledge successful processing of the message - consumer.acknowledge(msg) - except Exception: - # Message failed to be processed - consumer.negative_acknowledge(msg) - -``` - -````mdx-code-block - - - - -You can send byte data using a `BytesSchema`. - -**Example** - -```python - -producer = client.create_producer( - 'bytes-schema-topic', - schema=BytesSchema()) -producer.send(b"Hello") - -consumer = client.subscribe( - 'bytes-schema-topic', - 'sub', - schema=BytesSchema()) -msg = consumer.receive() -data = msg.value() - -``` - - - - -You can send string data using a `StringSchema`. - -**Example** - -```python - -producer = client.create_producer( - 'string-schema-topic', - schema=StringSchema()) -producer.send("Hello") - -consumer = client.subscribe( - 'string-schema-topic', - 'sub', - schema=StringSchema()) -msg = consumer.receive() -str = msg.value() - -``` - - - - -You can declare an `AvroSchema` using one of the following methods. - -#### Method 1: Record - -You can declare an `AvroSchema` by passing a class that inherits -from `pulsar.schema.Record` and defines the fields as -class variables. - -**Example** - -```python - -class Example(Record): - a = Integer() - b = Integer() - -producer = client.create_producer( - 'avro-schema-topic', - schema=AvroSchema(Example)) -r = Example(a=1, b=2) -producer.send(r) - -consumer = client.subscribe( - 'avro-schema-topic', - 'sub', - schema=AvroSchema(Example)) -msg = consumer.receive() -e = msg.value() - -``` - -#### Method 2: JSON definition - -You can declare an `AvroSchema` using JSON. In this case, Avro schemas are defined using JSON. - -**Example** - -Below is an `AvroSchema` defined using a JSON file (_company.avsc_). - -```json - -{ - "doc": "this is doc", - "namespace": "example.avro", - "type": "record", - "name": "Company", - "fields": [ - {"name": "name", "type": ["null", "string"]}, - {"name": "address", "type": ["null", "string"]}, - {"name": "employees", "type": ["null", {"type": "array", "items": { - "type": "record", - "name": "Employee", - "fields": [ - {"name": "name", "type": ["null", "string"]}, - {"name": "age", "type": ["null", "int"]} - ] - }}]}, - {"name": "labels", "type": ["null", {"type": "map", "values": "string"}]} - ] -} - -``` - -You can load a schema definition from file by using [`avro.schema`]((http://avro.apache.org/docs/current/gettingstartedpython.html) or [`fastavro.schema`](https://fastavro.readthedocs.io/en/latest/schema.html#fastavro._schema_py.load_schema). 
- -If you use the "JSON definition" method to declare an `AvroSchema`, pay attention to the following points: - -- You need to use [Python dict](https://developers.google.com/edu/python/dict-files) to produce and consume messages, which is different from using the "Record" method. - -- When generating an `AvroSchema` object, set `_record_cls` parameter to `None`. - -**Example** - -``` - -from fastavro.schema import load_schema -from pulsar.schema import * -schema_definition = load_schema("examples/company.avsc") -avro_schema = AvroSchema(None, schema_definition=schema_definition) -producer = client.create_producer( - topic=topic, - schema=avro_schema) -consumer = client.subscribe(topic, 'test', schema=avro_schema) -company = { - "name": "company-name" + str(i), - "address": 'xxx road xxx street ' + str(i), - "employees": [ - {"name": "user" + str(i), "age": 20 + i}, - {"name": "user" + str(i), "age": 30 + i}, - {"name": "user" + str(i), "age": 35 + i}, - ], - "labels": { - "industry": "software" + str(i), - "scale": ">100", - "funds": "1000000.0" - } -} -producer.send(company) -msg = consumer.receive() -# Users could get a dict object by `value()` method. -msg.value() - -``` - - - - -#### Record - -You can declare a `JsonSchema` by passing a class that inherits -from `pulsar.schema.Record` and defines the fields as class variables. This is similar to using `AvroSchema`. The only difference is to use `JsonSchema` instead of `AvroSchema` when defining schema type as shown below. For how to use `AvroSchema` via record, see [here](client-libraries-python.md#method-1-record). - -``` - -producer = client.create_producer( - 'avro-schema-topic', - schema=JsonSchema(Example)) - -consumer = client.subscribe( - 'avro-schema-topic', - 'sub', - schema=JsonSchema(Example)) - -``` - - - - -```` - -## End-to-end encryption - -[End-to-end encryption](cookbooks-encryption.md#docsNav) allows applications to encrypt messages at producers and decrypt messages at consumers. - -### Configuration - -To use the end-to-end encryption feature in the Python client, you need to configure `publicKeyPath` and `privateKeyPath` for both producer and consumer. - -``` - -publicKeyPath: "./public.pem" -privateKeyPath: "./private.pem" - -``` - -### Tutorial - -This section provides step-by-step instructions on how to use the end-to-end encryption feature in the Python client. - -**Prerequisite** - -- Pulsar Python client 2.7.1 or later - -**Step** - -1. Create both public and private key pairs. - - **Input** - - ```shell - - openssl genrsa -out private.pem 2048 - openssl rsa -in private.pem -pubout -out public.pem - - ``` - -2. Create a producer to send encrypted messages. - - **Input** - - ```python - - import pulsar - - publicKeyPath = "./public.pem" - privateKeyPath = "./private.pem" - crypto_key_reader = pulsar.CryptoKeyReader(publicKeyPath, privateKeyPath) - client = pulsar.Client('pulsar://localhost:6650') - producer = client.create_producer(topic='encryption', encryption_key='encryption', crypto_key_reader=crypto_key_reader) - producer.send('encryption message'.encode('utf8')) - print('sent message') - producer.close() - client.close() - - ``` - -3. Create a consumer to receive encrypted messages. 
- - **Input** - - ```python - - import pulsar - - publicKeyPath = "./public.pem" - privateKeyPath = "./private.pem" - crypto_key_reader = pulsar.CryptoKeyReader(publicKeyPath, privateKeyPath) - client = pulsar.Client('pulsar://localhost:6650') - consumer = client.subscribe(topic='encryption', subscription_name='encryption-sub', crypto_key_reader=crypto_key_reader) - msg = consumer.receive() - print("Received msg '{}' id = '{}'".format(msg.data(), msg.message_id())) - consumer.close() - client.close() - - ``` - -4. Run the consumer to receive encrypted messages. - - **Input** - - ```shell - - python consumer.py - - ``` - -5. In a new terminal tab, run the producer to produce encrypted messages. - - **Input** - - ```shell - - python producer.py - - ``` - - Now you can see the producer sends messages and the consumer receives messages successfully. - - **Output** - - This is from the producer side. - - ``` - - sent message - - ``` - - This is from the consumer side. - - ``` - - Received msg 'encryption message' id = '(0,0,-1,-1)' - - ``` - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/client-libraries-rest.md b/site2/website/versioned_docs/version-2.10.0-deprecated/client-libraries-rest.md deleted file mode 100644 index 1b26eedc01836a..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/client-libraries-rest.md +++ /dev/null @@ -1,134 +0,0 @@ ---- -id: client-libraries-rest -title: Pulsar REST -sidebar_label: "REST" -original_id: client-libraries-rest ---- - -Pulsar not only provides REST endpoints to manage resources in Pulsar clusters, but also provides methods to query the state for those resources. In addition, Pulsar REST provides a simple way to interact with Pulsar **without using client libraries**, which is convenient for applications to use HTTP to interact with Pulsar. - -## Connection - -To connect to Pulsar, you need to specify a URL. - -- Produce messages to non-partitioned or partitioned topics - - ``` - - brokerUrl:{8080/8081}/topics/{persistent/non-persistent}/{my-tenant}/{my-namespace}/{my-topic} - - ``` - -- Produce messages to specific partitions of partitioned topics - - ``` - - brokerUrl:{8080/8081}/topics/{persistent/non-persistent}/{my-tenant}/{my-namespace}/{my-topic}/partitions/{partition-number} - - ``` - -## Producer - -Currently, you can produce messages to the following destinations with tools like cURL or Postman via REST. - -- Non-partitioned or partitioned topics - -- Specific partitions of partitioned topics - -:::note - -You can only produce messages to **topics that already exist** in Pulsar via REST. - -::: - -Consuming and reading messages via REST will be supported in the future. - -### Message - -- Below is the structure of a request payload. - - Parameter|Required?|Description - |---|---|--- - `schemaVersion`|No| Schema version of existing schema used for this message

You need to provide one of the following:<br /><br />- `schemaVersion`<br />- `keySchema`/`valueSchema`<br /><br />If both of them are provided, then `schemaVersion` is used
  `keySchema/valueSchema`|No|Key schema / Value schema used for this message
  `producerName`|No|Producer name
  `Messages[] SingleMessage`|Yes|Messages to be sent

- Below is the structure of a message.

  Parameter|Required?|Type|Description
  |---|---|---|---
  `payload`|Yes|`String`|Actual message payload<br /><br />Messages are sent as strings and encoded with the given schemas on the server side
  `properties`|No|`Map`|Custom properties
  `key`|No|`String`|Partition key
  `replicationClusters`|No|`List`|Clusters to which messages replicate
  `eventTime`|No|`String`|Message event time
  `sequenceId`|No|`long`|Message sequence ID
  `disableReplication`|No|`boolean`|Whether to disable replication of messages
  `deliverAt`|No|`long`|Deliver messages only at or after the specified absolute timestamp
  `deliverAfterMs`|No|`long`|Deliver messages only after the specified relative delay (in milliseconds)

### Schema

- Currently, Primitive, Avro, JSON, and KeyValue schemas are supported.

- For Primitive, Avro, and JSON schemas, schemas should be provided as the full schema encoded as a string.

- If the schema is not set, messages are encoded with the string schema.

### Example

Below is an example of sending messages to topics using JSON schema via REST.

Assume that you send messages representing the following classes.

```java

class Seller {
    public String state;
    public String street;
    public long zipCode;
}

class PC {
    public String brand;
    public String model;
    public int year;
    public GPU gpu;      // GPU is an enum with the symbols AMD and NVIDIA (see the schema below)
    public Seller seller;
}

```

Send messages to topics with JSON schema using the command below.

```shell

curl --location --request POST 'brokerUrl:{8080/8081}/topics/{persistent/non-persistent}/{my-tenant}/{my-namespace}/{my-topic}' \
--header 'Content-Type: application/json' \
--data-raw '{
  "valueSchema": "{\"name\":\"\",\"schema\":\"eyJ0eXBlIjoicmVjb3JkIiwibmFtZSI6IlBDIiwibmFtZXNwYWNlIjoib3JnLmFwYWNoZS5wdWxzYXIuYnJva2VyLmFkbWluLlRvcGljc1Rlc3QiLCJmaWVsZHMiOlt7Im5hbWUiOiJicmFuZCIsInR5cGUiOlsibnVsbCIsInN0cmluZyJdLCJkZWZhdWx0IjpudWxsfSx7Im5hbWUiOiJncHUiLCJ0eXBlIjpbIm51bGwiLHsidHlwZSI6ImVudW0iLCJuYW1lIjoiR1BVIiwic3ltYm9scyI6WyJBTUQiLCJOVklESUEiXX1dLCJkZWZhdWx0IjpudWxsfSx7Im5hbWUiOiJtb2RlbCIsInR5cGUiOlsibnVsbCIsInN0cmluZyJdLCJkZWZhdWx0IjpudWxsfSx7Im5hbWUiOiJzZWxsZXIiLCJ0eXBlIjpbIm51bGwiLHsidHlwZSI6InJlY29yZCIsIm5hbWUiOiJTZWxsZXIiLCJmaWVsZHMiOlt7Im5hbWUiOiJzdGF0ZSIsInR5cGUiOlsibnVsbCIsInN0cmluZyJdLCJkZWZhdWx0IjpudWxsfSx7Im5hbWUiOiJzdHJlZXQiLCJ0eXBlIjpbIm51bGwiLCJzdHJpbmciXSwiZGVmYXVsdCI6bnVsbH0seyJuYW1lIjoiemlwQ29kZSIsInR5cGUiOiJsb25nIn1dfV0sImRlZmF1bHQiOm51bGx9LHsibmFtZSI6InllYXIiLCJ0eXBlIjoiaW50In1dfQ==\",\"type\":\"JSON\",\"properties\":{\"__jsr310ConversionEnabled\":\"false\",\"__alwaysAllowNull\":\"true\"},\"schemaDefinition\":\"{\\\"type\\\":\\\"record\\\",\\\"name\\\":\\\"PC\\\",\\\"namespace\\\":\\\"org.apache.pulsar.broker.admin.TopicsTest\\\",\\\"fields\\\":[{\\\"name\\\":\\\"brand\\\",\\\"type\\\":[\\\"null\\\",\\\"string\\\"],\\\"default\\\":null},{\\\"name\\\":\\\"gpu\\\",\\\"type\\\":[\\\"null\\\",{\\\"type\\\":\\\"enum\\\",\\\"name\\\":\\\"GPU\\\",\\\"symbols\\\":[\\\"AMD\\\",\\\"NVIDIA\\\"]}],\\\"default\\\":null},{\\\"name\\\":\\\"model\\\",\\\"type\\\":[\\\"null\\\",\\\"string\\\"],\\\"default\\\":null},{\\\"name\\\":\\\"seller\\\",\\\"type\\\":[\\\"null\\\",{\\\"type\\\":\\\"record\\\",\\\"name\\\":\\\"Seller\\\",\\\"fields\\\":[{\\\"name\\\":\\\"state\\\",\\\"type\\\":[\\\"null\\\",\\\"string\\\"],\\\"default\\\":null},{\\\"name\\\":\\\"street\\\",\\\"type\\\":[\\\"null\\\",\\\"string\\\"],\\\"default\\\":null},{\\\"name\\\":\\\"zipCode\\\",\\\"type\\\":\\\"long\\\"}]}],\\\"default\\\":null},{\\\"name\\\":\\\"year\\\",\\\"type\\\":\\\"int\\\"}]}\"}",
// Schema data is just the base 64 encoded schemaDefinition.
  "producerName": "rest-producer",
  "messages": [
    {
      "key":"my-key",
      "payload":"{\"brand\":\"dell\",\"model\":\"alienware\",\"year\":2021,\"gpu\":\"AMD\",\"seller\":{\"state\":\"WA\",\"street\":\"main street\",\"zipCode\":98004}}",
      "eventTime":1603045262772,
      "sequenceId":1
    },
    {
      "key":"my-key",
      "payload":"{\"brand\":\"asus\",\"model\":\"rog\",\"year\":2020,\"gpu\":\"NVIDIA\",\"seller\":{\"state\":\"CA\",\"street\":\"back street\",\"zipCode\":90232}}",
      "eventTime":1603045262772,
      "sequenceId":2
    }
  ]
}'
// Sample message

```

diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/client-libraries-websocket.md b/site2/website/versioned_docs/version-2.10.0-deprecated/client-libraries-websocket.md
deleted file mode 100644
index 145866e41644bd..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/client-libraries-websocket.md
+++ /dev/null
@@ -1,664 +0,0 @@
---
id: client-libraries-websocket
title: Pulsar WebSocket API
sidebar_label: "WebSocket"
original_id: client-libraries-websocket
---

The Pulsar [WebSocket](https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API) API provides a simple way to interact with Pulsar using languages that do not have an official [client library](getting-started-clients.md). Through WebSocket, you can publish and consume messages and use the features available on the [Client Features Matrix](https://github.com/apache/pulsar/wiki/Client-Features-Matrix) page.


> You can use the Pulsar WebSocket API with any WebSocket client library. See examples for Python and Node.js [below](#client-examples).

## Running the WebSocket service

The standalone variant of Pulsar that we recommend using for [local development](getting-started-standalone.md) already has the WebSocket service enabled.

In non-standalone mode, there are two ways to deploy the WebSocket service:

* [embedded](#embedded-with-a-pulsar-broker) with a Pulsar broker
* as a [separate component](#as-a-separate-component)

### Embedded with a Pulsar broker

In this mode, the WebSocket service runs within the same HTTP service that's already running in the broker. To enable this mode, set the [`webSocketServiceEnabled`](reference-configuration.md#broker-webSocketServiceEnabled) parameter in the [`conf/broker.conf`](reference-configuration.md#broker) configuration file in your installation.

```properties

webSocketServiceEnabled=true

```

### As a separate component

In this mode, the WebSocket service runs as a separate process from the Pulsar [broker](reference-terminology.md#broker). Configuration for this mode is handled in the [`conf/websocket.conf`](reference-configuration.md#websocket) configuration file.
You'll need to set *at least* the following parameters:

* [`configurationMetadataStoreUrl`](reference-configuration.md#websocket)
* [`webServicePort`](reference-configuration.md#websocket-webServicePort)
* [`clusterName`](reference-configuration.md#websocket-clusterName)

Here's an example:

```properties

configurationMetadataStoreUrl=zk1:2181,zk2:2181,zk3:2181
webServicePort=8080
clusterName=my-cluster

```

### Security settings

To enable TLS encryption on the WebSocket service:

```properties

tlsEnabled=true
tlsAllowInsecureConnection=false
tlsCertificateFilePath=/path/to/client-websocket.cert.pem
tlsKeyFilePath=/path/to/client-websocket.key-pk8.pem
tlsTrustCertsFilePath=/path/to/ca.cert.pem

```

### Starting the WebSocket service

When the configuration is set, you can start the service using the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) tool:

```shell

$ bin/pulsar-daemon start websocket

```

## API Reference

Pulsar's WebSocket API offers three endpoints for [producing](#producer-endpoint) messages, [consuming](#consumer-endpoint) messages, and [reading](#reader-endpoint) messages.

All exchanges via the WebSocket API use JSON.

### Authentication

#### Browser JavaScript WebSocket client

Use the query param `token` to transport the authentication token.

```http

ws://broker-service-url:8080/path?token=token

```

### Producer endpoint

The producer endpoint requires you to specify a tenant, namespace, and topic in the URL:

```http

ws://broker-service-url:8080/ws/v2/producer/persistent/:tenant/:namespace/:topic

```

##### Query param

Key | Type | Required? | Explanation
:---|:-----|:----------|:-----------
`sendTimeoutMillis` | long | no | Send timeout (default: 30 secs)
`batchingEnabled` | boolean | no | Enable batching of messages (default: false)
`batchingMaxMessages` | int | no | Maximum number of messages permitted in a batch (default: 1000)
`maxPendingMessages` | int | no | Set the max size of the internal queue holding the messages (default: 1000)
`batchingMaxPublishDelay` | long | no | Time period within which the messages will be batched (default: 10ms)
`messageRoutingMode` | string | no | Message [routing mode](/api/client/index.html?org/apache/pulsar/client/api/ProducerConfiguration.MessageRoutingMode.html) for the partitioned producer: `SinglePartition`, `RoundRobinPartition`
`compressionType` | string | no | Compression [type](/api/client/index.html?org/apache/pulsar/client/api/CompressionType.html): `LZ4`, `ZLIB`
`producerName` | string | no | Specify the name for the producer. Pulsar enforces that only one producer with the same name can be publishing on a topic
`initialSequenceId` | long | no | Set the baseline for the sequence ids for messages published by the producer
`hashingScheme` | string | no | [Hashing function](/api/client/org/apache/pulsar/client/api/ProducerConfiguration.HashingScheme.html) to use when publishing on a partitioned topic: `JavaStringHash`, `Murmur3_32Hash`
`token` | string | no | Authentication token, this is used for the browser JavaScript client


#### Publishing a message

```json

{
  "payload": "SGVsbG8gV29ybGQ=",
  "properties": {"key1": "value1", "key2": "value2"},
  "context": "1"
}

```

Key | Type | Required? | Explanation
:---|:-----|:----------|:-----------
`payload` | string | yes | Base-64 encoded payload
`properties` | key-value pairs | no | Application-defined properties
`context` | string | no | Application-defined request identifier
`key` | string | no | For partitioned topics, decides which partition to use
`replicationClusters` | array | no | Restrict replication to this list of [clusters](reference-terminology.md#cluster), specified by name


##### Example success response

```json

{
  "result": "ok",
  "messageId": "CAAQAw==",
  "context": "1"
}

```

##### Example failure response

```json

{
  "result": "send-error:3",
  "errorMsg": "Failed to de-serialize from JSON",
  "context": "1"
}

```

Key | Type | Required? | Explanation
:---|:-----|:----------|:-----------
`result` | string | yes | `ok` if successful or an error message if unsuccessful
`messageId` | string | yes | Message ID assigned to the published message
`context` | string | no | Application-defined request identifier


### Consumer endpoint

The consumer endpoint requires you to specify a tenant, namespace, and topic, as well as a subscription, in the URL:

```http

ws://broker-service-url:8080/ws/v2/consumer/persistent/:tenant/:namespace/:topic/:subscription

```

##### Query param

Key | Type | Required? | Explanation
:---|:-----|:----------|:-----------
`ackTimeoutMillis` | long | no | Set the timeout for unacked messages (default: 0)
`subscriptionType` | string | no | [Subscription type](/api/client/index.html?org/apache/pulsar/client/api/SubscriptionType.html): `Exclusive`, `Failover`, `Shared`, `Key_Shared`
`receiverQueueSize` | int | no | Size of the consumer receive queue (default: 1000)
`consumerName` | string | no | Consumer name
`priorityLevel` | int | no | Define a [priority](/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setPriorityLevel-int-) for the consumer
`maxRedeliverCount` | int | no | Define a [maxRedeliverCount](/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#deadLetterPolicy-org.apache.pulsar.client.api.DeadLetterPolicy-) for the consumer (default: 0). Activates the [Dead Letter Topic](https://github.com/apache/pulsar/wiki/PIP-22%3A-Pulsar-Dead-Letter-Topic) feature.
`deadLetterTopic` | string | no | Define a [deadLetterTopic](/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#deadLetterPolicy-org.apache.pulsar.client.api.DeadLetterPolicy-) for the consumer (default: {topic}-{subscription}-DLQ). Activates the [Dead Letter Topic](https://github.com/apache/pulsar/wiki/PIP-22%3A-Pulsar-Dead-Letter-Topic) feature.
`pullMode` | boolean | no | Enable pull mode (default: false). See "Flow Control" below.
`negativeAckRedeliveryDelay` | int | no | When a message is negatively acknowledged, the delay time before the message is redelivered (in milliseconds). The default value is 60000.
`token` | string | no | Authentication token, this is used for the browser JavaScript client

NB: these parameters (except `pullMode`) apply to the internal consumer of the WebSocket service,
so messages are subject to the redelivery settings as soon as they get into the receive queue,
even if the client doesn't consume them on the WebSocket.
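
For example, here is a minimal sketch that subscribes with a few of these query params set. It uses the [`websocket-client`](https://pypi.python.org/pypi/websocket-client) package from the Python examples below and assumes a standalone broker on `localhost:8080`; the topic, subscription, and parameter values are placeholders.

```python

import json
import websocket  # pip install websocket-client

# Consumer endpoint with a few of the query params described above.
url = ('ws://localhost:8080/ws/v2/consumer/persistent/public/default/my-topic/my-sub'
       '?subscriptionType=Shared'
       '&receiverQueueSize=500'
       '&negativeAckRedeliveryDelay=30000')

ws = websocket.create_connection(url)
msg = json.loads(ws.recv())
print('received message', msg['messageId'])
ws.send(json.dumps({'messageId': msg['messageId']}))  # acknowledge it
ws.close()

```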
- -##### Receiving messages - -Server will push messages on the WebSocket session: - -```json - -{ - "messageId": "CAMQADAA", - "payload": "hvXcJvHW7kOSrUn17P2q71RA5SdiXwZBqw==", - "properties": {}, - "publishTime": "2021-10-29T16:01:38.967-07:00", - "redeliveryCount": 0, - "encryptionContext": { - "keys": { - "client-rsa.pem": { - "keyValue": "jEuwS+PeUzmCo7IfLNxqoj4h7txbLjCQjkwpaw5AWJfZ2xoIdMkOuWDkOsqgFmWwxiecakS6GOZHs94x3sxzKHQx9Oe1jpwBg2e7L4fd26pp+WmAiLm/ArZJo6JotTeFSvKO3u/yQtGTZojDDQxiqFOQ1ZbMdtMZA8DpSMuq+Zx7PqLo43UdW1+krjQfE5WD+y+qE3LJQfwyVDnXxoRtqWLpVsAROlN2LxaMbaftv5HckoejJoB4xpf/dPOUqhnRstwQHf6klKT5iNhjsY4usACt78uILT0pEPd14h8wEBidBz/vAlC/zVMEqiDVzgNS7dqEYS4iHbf7cnWVCn3Hxw==", - "metadata": {} - } - }, - "param": "Tfu1PxVm6S9D3+Hk", - "compressionType": "NONE", - "uncompressedMessageSize": 0, - "batchSize": { - "empty": false, - "present": true - } - } -} - -``` - -Below are the parameters in the WebSocket consumer response. - -- General parameters - - Key | Type | Required? | Explanation - :---|:-----|:----------|:----------- - `messageId` | string | yes | Message ID - `payload` | string | yes | Base-64 encoded payload - `publishTime` | string | yes | Publish timestamp - `redeliveryCount` | number | yes | Number of times this message was already delivered - `properties` | key-value pairs | no | Application-defined properties - `key` | string | no | Original routing key set by producer - `encryptionContext` | EncryptionContext | no | Encryption context that consumers can use to decrypt received messages - `param` | string | no | Initialization vector for cipher (Base64 encoding) - `batchSize` | string | no | Number of entries in a message (if it is a batch message) - `uncompressedMessageSize` | string | no | Message size before compression - `compressionType` | string | no | Algorithm used to compress the message payload - -- `encryptionContext` related parameter - - Key | Type | Required? | Explanation - :---|:-----|:----------|:----------- - `keys` |key-EncryptionKey pairs | yes | Key in `key-EncryptionKey` pairs is an encryption key name. Value in `key-EncryptionKey` pairs is an encryption key object. - -- `encryptionKey` related parameters - - Key | Type | Required? | Explanation - :---|:-----|:----------|:----------- - `keyValue` | string | yes | Encryption key (Base64 encoding) - `metadata` | key-value pairs | no | Application-defined metadata - -#### Acknowledging the message - -Consumer needs to acknowledge the successful processing of the message to -have the Pulsar broker delete it. - -```json - -{ - "messageId": "CAAQAw==" -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`messageId`| string | yes | Message ID of the processed message - -#### Negatively acknowledging messages - -```json - -{ - "type": "negativeAcknowledge", - "messageId": "CAAQAw==" -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`messageId`| string | yes | Message ID of the processed message - -#### Flow control - -##### Push Mode - -By default (`pullMode=false`), the consumer endpoint will use the `receiverQueueSize` parameter both to size its -internal receive queue and to limit the number of unacknowledged messages that are passed to the WebSocket client. -In this mode, if you don't send acknowledgements, the Pulsar WebSocket service will stop sending messages after reaching -`receiverQueueSize` unacked messages sent to the WebSocket client. 
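
As a sketch of this push-mode behavior (same assumptions as the Python examples below: the `websocket-client` package and a standalone broker on `localhost:8080`; topic, subscription, and queue size are placeholders):

```python

import json
import websocket  # pip install websocket-client

# Push mode (the default, pullMode=false).
ws = websocket.create_connection(
    'ws://localhost:8080/ws/v2/consumer/persistent/public/default/my-topic/my-sub'
    '?receiverQueueSize=10')

while True:
    msg = json.loads(ws.recv())
    # ... process msg['payload'] here ...
    # Without this acknowledgement, the service stops pushing new messages
    # once 10 (receiverQueueSize) messages are outstanding.
    ws.send(json.dumps({'messageId': msg['messageId']}))

```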
- -##### Pull Mode - -If you set `pullMode` to `true`, the WebSocket client will need to send `permit` commands to permit the -Pulsar WebSocket service to send more messages. - -```json - -{ - "type": "permit", - "permitMessages": 100 -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`type`| string | yes | Type of command. Must be `permit` -`permitMessages`| int | yes | Number of messages to permit - -NB: in this mode it's possible to acknowledge messages in a different connection. - -#### Check if reach end of topic - -Consumer can check if it has reached end of topic by sending `isEndOfTopic` request. - -**Request** - -```json - -{ - "type": "isEndOfTopic" -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`type`| string | yes | Type of command. Must be `isEndOfTopic` - -**Response** - -```json - -{ - "endOfTopic": "true/false" - } - -``` - -### Reader endpoint - -The reader endpoint requires you to specify a tenant, namespace, and topic in the URL: - -```http - -ws://broker-service-url:8080/ws/v2/reader/persistent/:tenant/:namespace/:topic - -``` - -##### Query param - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`readerName` | string | no | Reader name -`receiverQueueSize` | int | no | Size of the consumer receive queue (default: 1000) -`messageId` | int or enum | no | Message ID to start from, `earliest` or `latest` (default: `latest`) -`token` | string | no | Authentication token, this is used for the browser javascript client - -##### Receiving messages - -Server will push messages on the WebSocket session: - -```json - -{ - "messageId": "CAAQAw==", - "payload": "SGVsbG8gV29ybGQ=", - "properties": {"key1": "value1", "key2": "value2"}, - "publishTime": "2016-08-30 16:45:57.785", - "redeliveryCount": 4 -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`messageId` | string | yes | Message ID -`payload` | string | yes | Base-64 encoded payload -`publishTime` | string | yes | Publish timestamp -`redeliveryCount` | number | yes | Number of times this message was already delivered -`properties` | key-value pairs | no | Application-defined properties -`key` | string | no | Original routing key set by producer - -#### Acknowledging the message - -**In WebSocket**, Reader needs to acknowledge the successful processing of the message to -have the Pulsar WebSocket service update the number of pending messages. -If you don't send acknowledgements, Pulsar WebSocket service will stop sending messages after reaching the pendingMessages limit. - -```json - -{ - "messageId": "CAAQAw==" -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`messageId`| string | yes | Message ID of the processed message - -#### Check if reach end of topic - -Consumer can check if it has reached end of topic by sending `isEndOfTopic` request. - -**Request** - -```json - -{ - "type": "isEndOfTopic" -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`type`| string | yes | Type of command. 
Must be `isEndOfTopic` - -**Response** - -```json - -{ - "endOfTopic": "true/false" - } - -``` - -### Error codes - -In case of error the server will close the WebSocket session using the -following error codes: - -Error Code | Error Message -:----------|:------------- -1 | Failed to create producer -2 | Failed to subscribe -3 | Failed to deserialize from JSON -4 | Failed to serialize to JSON -5 | Failed to authenticate client -6 | Client is not authorized -7 | Invalid payload encoding -8 | Unknown error - -> The application is responsible for re-establishing a new WebSocket session after a backoff period. - -## Client examples - -Below you'll find code examples for the Pulsar WebSocket API in [Python](#python) and [Node.js](#nodejs). - -### Python - -This example uses the [`websocket-client`](https://pypi.python.org/pypi/websocket-client) package. You can install it using [pip](https://pypi.python.org/pypi/pip): - -```shell - -$ pip install websocket-client - -``` - -You can also download it from [PyPI](https://pypi.python.org/pypi/websocket-client). - -#### Python producer - -Here's an example Python producer that sends a simple message to a Pulsar [topic](reference-terminology.md#topic): - -```python - -import websocket, base64, json - -# If set enableTLS to true, your have to set tlsEnabled to true in conf/websocket.conf. -enable_TLS = False -scheme = 'ws' -if enable_TLS: - scheme = 'wss' - -TOPIC = scheme + '://localhost:8080/ws/v2/producer/persistent/public/default/my-topic' - -ws = websocket.create_connection(TOPIC) - -# encode message -s = "Hello World" -firstEncoded = s.encode("UTF-8") -binaryEncoded = base64.b64encode(firstEncoded) -payloadString = binaryEncoded.decode('UTF-8') - -# Send one message as JSON -ws.send(json.dumps({ - 'payload' : payloadString, - 'properties': { - 'key1' : 'value1', - 'key2' : 'value2' - }, - 'context' : 5 -})) - -response = json.loads(ws.recv()) -if response['result'] == 'ok': - print( 'Message published successfully') -else: - print('Failed to publish message:', response) -ws.close() - -``` - -#### Python consumer - -Here's an example Python consumer that listens on a Pulsar topic and prints the message ID whenever a message arrives: - -```python - -import websocket, base64, json - -# If set enableTLS to true, your have to set tlsEnabled to true in conf/websocket.conf. -enable_TLS = False -scheme = 'ws' -if enable_TLS: - scheme = 'wss' - -TOPIC = scheme + '://localhost:8080/ws/v2/consumer/persistent/public/default/my-topic/my-sub' - -ws = websocket.create_connection(TOPIC) - -while True: - msg = json.loads(ws.recv()) - if not msg: break - - print( "Received: {} - payload: {}".format(msg, base64.b64decode(msg['payload']))) - - # Acknowledge successful processing - ws.send(json.dumps({'messageId' : msg['messageId']})) - -ws.close() - -``` - -#### Python reader - -Here's an example Python reader that listens on a Pulsar topic and prints the message ID whenever a message arrives: - -```python - -import websocket, base64, json - -# If set enableTLS to true, your have to set tlsEnabled to true in conf/websocket.conf. 
-enable_TLS = False -scheme = 'ws' -if enable_TLS: - scheme = 'wss' - -TOPIC = scheme + '://localhost:8080/ws/v2/reader/persistent/public/default/my-topic' -ws = websocket.create_connection(TOPIC) - -while True: - msg = json.loads(ws.recv()) - if not msg: break - - print ( "Received: {} - payload: {}".format(msg, base64.b64decode(msg['payload']))) - - # Acknowledge successful processing - ws.send(json.dumps({'messageId' : msg['messageId']})) - -ws.close() - -``` - -### Node.js - -This example uses the [`ws`](https://websockets.github.io/ws/) package. You can install it using [npm](https://www.npmjs.com/): - -```shell - -$ npm install ws - -``` - -#### Node.js producer - -Here's an example Node.js producer that sends a simple message to a Pulsar topic: - -```javascript - -const WebSocket = require('ws'); - -// If set enableTLS to true, your have to set tlsEnabled to true in conf/websocket.conf. -const enableTLS = false; -const topic = `${enableTLS ? 'wss' : 'ws'}://localhost:8080/ws/v2/producer/persistent/public/default/my-topic`; -const ws = new WebSocket(topic); - -var message = { - "payload" : new Buffer("Hello World").toString('base64'), - "properties": { - "key1" : "value1", - "key2" : "value2" - }, - "context" : "1" -}; - -ws.on('open', function() { - // Send one message - ws.send(JSON.stringify(message)); -}); - -ws.on('message', function(message) { - console.log('received ack: %s', message); -}); - -``` - -#### Node.js consumer - -Here's an example Node.js consumer that listens on the same topic used by the producer above: - -```javascript - -const WebSocket = require('ws'); - -// If set enableTLS to true, your have to set tlsEnabled to true in conf/websocket.conf. -const enableTLS = false; -const topic = `${enableTLS ? 'wss' : 'ws'}://localhost:8080/ws/v2/consumer/persistent/public/default/my-topic/my-sub`; -const ws = new WebSocket(topic); - -ws.on('message', function(message) { - var receiveMsg = JSON.parse(message); - console.log('Received: %s - payload: %s', message, new Buffer(receiveMsg.payload, 'base64').toString()); - var ackMsg = {"messageId" : receiveMsg.messageId}; - ws.send(JSON.stringify(ackMsg)); -}); - -``` - -#### NodeJS reader - -```javascript - -const WebSocket = require('ws'); - -// If set enableTLS to true, your have to set tlsEnabled to true in conf/websocket.conf. -const enableTLS = false; -const topic = `${enableTLS ? 'wss' : 'ws'}://localhost:8080/ws/v2/reader/persistent/public/default/my-topic`; -const ws = new WebSocket(topic); - -ws.on('message', function(message) { - var receiveMsg = JSON.parse(message); - console.log('Received: %s - payload: %s', message, new Buffer(receiveMsg.payload, 'base64').toString()); - var ackMsg = {"messageId" : receiveMsg.messageId}; - ws.send(JSON.stringify(ackMsg)); -}); - -``` - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/client-libraries.md b/site2/website/versioned_docs/version-2.10.0-deprecated/client-libraries.md deleted file mode 100644 index 6cdc1e615c81fd..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/client-libraries.md +++ /dev/null @@ -1,45 +0,0 @@ ---- -id: client-libraries -title: Pulsar client libraries -sidebar_label: "Overview" -original_id: client-libraries ---- - -Pulsar supports the following client libraries: - -|Language|Documentation|Release note|Code repo -|---|---|---|--- -Java |- [User doc](client-libraries-java.md)

- [API doc](/api/client/)|[Here](/release-notes/)|[Here](https://github.com/apache/pulsar/tree/master/pulsar-client)
C++ | - [User doc](client-libraries-cpp.md)<br />- [API doc](/api/cpp/)|[Here](/release-notes/)|[Here](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp)
Python | - [User doc](client-libraries-python.md)<br />- [API doc](/api/python/)|[Here](/release-notes/)|[Here](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp/python)
WebSocket| [User doc](client-libraries-websocket.md) | [Here](/release-notes/)|[Here](https://github.com/apache/pulsar/tree/master/pulsar-websocket)
Go client|[User doc](client-libraries-go.md)|[Here](https://github.com/apache/pulsar-client-go/blob/master/CHANGELOG) |[Here](https://github.com/apache/pulsar-client-go)
Node.js|[User doc](client-libraries-node.md)|[Here](https://github.com/apache/pulsar-client-node/releases) |[Here](https://github.com/apache/pulsar-client-node)
C# |[User doc](client-libraries-dotnet.md)| [Here](https://github.com/apache/pulsar-dotpulsar/blob/master/CHANGELOG)|[Here](https://github.com/apache/pulsar-dotpulsar)

:::note

- The code repos of the **Java, C++, Python,** and **WebSocket** clients are hosted in the [Pulsar main repo](https://github.com/apache/pulsar) and these clients are released with Pulsar, so their release notes are part of the [Pulsar release notes](/release-notes/).
- The code repos of the **Go, Node.js,** and **C#** clients are hosted outside of the [Pulsar main repo](https://github.com/apache/pulsar) and these clients are not released with Pulsar, so they have independent release notes.

:::

## Feature matrix
The Pulsar client feature matrix for different languages is listed on the [Pulsar Feature Matrix (Client and Function)](https://docs.google.com/spreadsheets/d/1YHYTkIXR8-Ql103u-IMI18TXLlGStK8uJjDsOOA0T20/edit#gid=1784579914) page.

## Third-party clients

Besides the officially released clients, multiple projects on developing Pulsar clients are available in different languages.

> If you have developed a new Pulsar client, feel free to submit a pull request and add your client to the list below.
| Language | Project | Maintainer | License | Description |
|----------|---------|------------|---------|-------------|
| Go | [pulsar-client-go](https://github.com/Comcast/pulsar-client-go) | [Comcast](https://github.com/Comcast) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | A native golang client |
| Go | [go-pulsar](https://github.com/t2y/go-pulsar) | [t2y](https://github.com/t2y) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) |
| Haskell | [supernova](https://github.com/cr-org/supernova) | [Chatroulette](https://github.com/cr-org) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Native Pulsar client for Haskell |
| Scala | [neutron](https://github.com/cr-org/neutron) | [Chatroulette](https://github.com/cr-org) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Purely functional Apache Pulsar client for Scala built on top of Fs2 |
| Scala | [pulsar4s](https://github.com/sksamuel/pulsar4s) | [sksamuel](https://github.com/sksamuel) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Idiomatic, typesafe, and reactive Scala client for Apache Pulsar |
| Rust | [pulsar-rs](https://github.com/wyyerd/pulsar-rs) | [Wyyerd Group](https://github.com/wyyerd) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Future-based Rust bindings for Apache Pulsar |
| .NET | [pulsar-client-dotnet](https://github.com/fsharplang-ru/pulsar-client-dotnet) | [Lanayx](https://github.com/Lanayx) | [![GitHub](https://img.shields.io/badge/license-MIT-green.svg)](https://opensource.org/licenses/MIT) | Native .NET client for C#/F#/VB |
| Node.js | [pulsar-flex](https://github.com/ayeo-flex-org/pulsar-flex) | [Daniel Sinai](https://github.com/danielsinai), [Ron Farkash](https://github.com/ronfarkash), [Gal Rosenberg](https://github.com/galrose)| [![GitHub](https://img.shields.io/badge/license-MIT-green.svg)](https://opensource.org/licenses/MIT) | Native Node.js client |

diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/concepts-architecture-overview.md b/site2/website/versioned_docs/version-2.10.0-deprecated/concepts-architecture-overview.md
deleted file mode 100644
index 5f2fb2ea991670..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/concepts-architecture-overview.md
+++ /dev/null
@@ -1,176 +0,0 @@
---
id: concepts-architecture-overview
title: Architecture Overview
sidebar_label: "Architecture"
original_id: concepts-architecture-overview
---

At the highest level, a Pulsar instance is composed of one or more Pulsar clusters. Clusters within an instance can [replicate](concepts-replication.md) data amongst themselves.

In a Pulsar cluster:

* One or more brokers handle and load balance incoming messages from producers, dispatch messages to consumers, communicate with the Pulsar configuration store to handle various coordination tasks, store messages in BookKeeper instances (aka bookies), rely on a cluster-specific ZooKeeper cluster for certain tasks, and more.
* A BookKeeper cluster consisting of one or more bookies handles [persistent storage](#persistent-storage) of messages.
-* A ZooKeeper cluster specific to that cluster handles coordination tasks between Pulsar clusters. - -The diagram below provides an illustration of a Pulsar cluster: - -![Pulsar architecture diagram](/assets/pulsar-system-architecture.png) - -At the broader instance level, an instance-wide ZooKeeper cluster called the configuration store handles coordination tasks involving multiple clusters, for example [geo-replication](concepts-replication.md). - -## Brokers - -The Pulsar message broker is a stateless component that's primarily responsible for running two other components: - -* An HTTP server that exposes a {@inject: rest:REST:/} API for both administrative tasks and [topic lookup](concepts-clients.md#client-setup-phase) for producers and consumers. The producers connect to the brokers to publish messages and the consumers connect to the brokers to consume the messages. -* A dispatcher, which is an asynchronous TCP server over a custom [binary protocol](developing-binary-protocol.md) used for all data transfers - -Messages are typically dispatched out of a [managed ledger](#managed-ledgers) cache for the sake of performance, *unless* the backlog exceeds the cache size. If the backlog grows too large for the cache, the broker will start reading entries from BookKeeper. - -Finally, to support geo-replication on global topics, the broker manages replicators that tail the entries published in the local region and republish them to the remote region using the Pulsar [Java client library](client-libraries-java.md). - -> For a guide to managing Pulsar brokers, see the [brokers](admin-api-brokers.md) guide. - -## Clusters - -A Pulsar instance consists of one or more Pulsar *clusters*. Clusters, in turn, consist of: - -* One or more Pulsar [brokers](#brokers) -* A ZooKeeper quorum used for cluster-level configuration and coordination -* An ensemble of bookies used for [persistent storage](#persistent-storage) of messages - -Clusters can replicate amongst themselves using [geo-replication](concepts-replication.md). - -> For a guide to managing Pulsar clusters, see the [clusters](admin-api-clusters.md) guide. - -## Metadata store - -The Pulsar metadata store maintains all the metadata of a Pulsar cluster, such as topic metadata, schema, broker load data, and so on. Pulsar uses [Apache ZooKeeper](https://zookeeper.apache.org/) for metadata storage, cluster configuration, and coordination. The Pulsar metadata store can be deployed on a separate ZooKeeper cluster or deployed on an existing ZooKeeper cluster. You can use one ZooKeeper cluster for both Pulsar metadata store and BookKeeper metadata store. If you want to deploy Pulsar brokers connected to an existing BookKeeper cluster, you need to deploy separate ZooKeeper clusters for Pulsar metadata store and BookKeeper metadata store respectively. - -> Pulsar also supports more metadata backend services, including [ETCD](https://etcd.io/) and [RocksDB](http://rocksdb.org/) (for standalone Pulsar only). - - -In a Pulsar instance: - -* A configuration store quorum stores configuration for tenants, namespaces, and other entities that need to be globally consistent. -* Each cluster has its own local ZooKeeper ensemble that stores cluster-specific configuration and coordination such as which brokers are responsible for which topics as well as ownership metadata, broker load reports, BookKeeper ledger metadata, and more. 
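
For example, assuming the `metadataStoreUrl` and `configurationMetadataStoreUrl` parameters available in recent `conf/broker.conf` files, a broker can point the local metadata store and the configuration store at ZooKeeper like this (a sketch only; hostnames are placeholders, and both stores may share one ensemble):

```properties

# Local metadata store shared by Pulsar and BookKeeper.
metadataStoreUrl=zk:zk1:2181,zk2:2181,zk3:2181
# Instance-wide configuration store.
configurationMetadataStoreUrl=zk:zk1:2181,zk2:2181,zk3:2181

```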
- -## Configuration store - -The configuration store maintains all the configurations of a Pulsar instance, such as clusters, tenants, namespaces, partitioned topic related configurations, and so on. A Pulsar instance can have a single local cluster, multiple local clusters, or multiple cross-region clusters. Consequently, the configuration store can share the configurations across multiple clusters under a Pulsar instance. The configuration store can be deployed on a separate ZooKeeper cluster or deployed on an existing ZooKeeper cluster. - -## Persistent storage - -Pulsar provides guaranteed message delivery for applications. If a message successfully reaches a Pulsar broker, it will be delivered to its intended target. - -This guarantee requires that non-acknowledged messages are stored in a durable manner until they can be delivered to and acknowledged by consumers. This mode of messaging is commonly called *persistent messaging*. In Pulsar, N copies of all messages are stored and synced on disk, for example 4 copies across two servers with mirrored [RAID](https://en.wikipedia.org/wiki/RAID) volumes on each server. - -### Apache BookKeeper - -Pulsar uses a system called [Apache BookKeeper](http://bookkeeper.apache.org/) for persistent message storage. BookKeeper is a distributed [write-ahead log](https://en.wikipedia.org/wiki/Write-ahead_logging) (WAL) system that provides a number of crucial advantages for Pulsar: - -* It enables Pulsar to utilize many independent logs, called [ledgers](#ledgers). Multiple ledgers can be created for topics over time. -* It offers very efficient storage for sequential data that handles entry replication. -* It guarantees read consistency of ledgers in the presence of various system failures. -* It offers even distribution of I/O across bookies. -* It's horizontally scalable in both capacity and throughput. Capacity can be immediately increased by adding more bookies to a cluster. -* Bookies are designed to handle thousands of ledgers with concurrent reads and writes. By using multiple disk devices---one for journal and another for general storage--bookies are able to isolate the effects of read operations from the latency of ongoing write operations. - -In addition to message data, *cursors* are also persistently stored in BookKeeper. Cursors are [subscription](reference-terminology.md#subscription) positions for [consumers](reference-terminology.md#consumer). BookKeeper enables Pulsar to store consumer position in a scalable fashion. - -At the moment, Pulsar supports persistent message storage. This accounts for the `persistent` in all topic names. Here's an example: - -```http - -persistent://my-tenant/my-namespace/my-topic - -``` - -> Pulsar also supports ephemeral ([non-persistent](concepts-messaging.md#non-persistent-topics)) message storage. - - -You can see an illustration of how brokers and bookies interact in the diagram below: - -![Brokers and bookies](/assets/broker-bookie.png) - - -### Ledgers - -A ledger is an append-only data structure with a single writer that is assigned to multiple BookKeeper storage nodes, or bookies. Ledger entries are replicated to multiple bookies. Ledgers themselves have very simple semantics: - -* A Pulsar broker can create a ledger, append entries to the ledger, and close the ledger. -* After the ledger has been closed---either explicitly or because the writer process crashed---it can then be opened only in read-only mode. 
-* Finally, when entries in the ledger are no longer needed, the whole ledger can be deleted from the system (across all bookies). - -#### Ledger read consistency - -The main strength of Bookkeeper is that it guarantees read consistency in ledgers in the presence of failures. Since the ledger can only be written to by a single process, that process is free to append entries very efficiently, without need to obtain consensus. After a failure, the ledger will go through a recovery process that will finalize the state of the ledger and establish which entry was last committed to the log. After that point, all readers of the ledger are guaranteed to see the exact same content. - -#### Managed ledgers - -Given that Bookkeeper ledgers provide a single log abstraction, a library was developed on top of the ledger called the *managed ledger* that represents the storage layer for a single topic. A managed ledger represents the abstraction of a stream of messages with a single writer that keeps appending at the end of the stream and multiple cursors that are consuming the stream, each with its own associated position. - -Internally, a single managed ledger uses multiple BookKeeper ledgers to store the data. There are two reasons to have multiple ledgers: - -1. After a failure, a ledger is no longer writable and a new one needs to be created. -2. A ledger can be deleted when all cursors have consumed the messages it contains. This allows for periodic rollover of ledgers. - -### Journal storage - -In BookKeeper, *journal* files contain BookKeeper transaction logs. Before making an update to a [ledger](#ledgers), a bookie needs to ensure that a transaction describing the update is written to persistent (non-volatile) storage. A new journal file is created once the bookie starts or the older journal file reaches the journal file size threshold (configured using the [`journalMaxSizeMB`](reference-configuration.md#bookkeeper-journalMaxSizeMB) parameter). - -## Pulsar proxy - -One way for Pulsar clients to interact with a Pulsar [cluster](#clusters) is by connecting to Pulsar message [brokers](#brokers) directly. In some cases, however, this kind of direct connection is either infeasible or undesirable because the client doesn't have direct access to broker addresses. If you're running Pulsar in a cloud environment or on [Kubernetes](https://kubernetes.io) or an analogous platform, for example, then direct client connections to brokers are likely not possible. - -The **Pulsar proxy** provides a solution to this problem by acting as a single gateway for all of the brokers in a cluster. If you run the Pulsar proxy (which, again, is optional), all client connections with the Pulsar cluster will flow through the proxy rather than communicating with brokers. - -> For the sake of performance and fault tolerance, you can run as many instances of the Pulsar proxy as you'd like. - -Architecturally, the Pulsar proxy gets all the information it requires from ZooKeeper. When starting the proxy on a machine, you only need to provide metadata store connection strings for the cluster-specific and instance-wide configuration store clusters. Here's an example: - -```bash - -$ cd /path/to/pulsar/directory -$ bin/pulsar proxy \ - --metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181 \ - --configuration-metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181 - -``` - -> #### Pulsar proxy docs -> For documentation on using the Pulsar proxy, see the [Pulsar proxy admin documentation](administration-proxy.md). 
- - -Some important things to know about the Pulsar proxy: - -* Connecting clients don't need to provide *any* specific configuration to use the Pulsar proxy. You won't need to update the client configuration for existing applications beyond updating the IP used for the service URL (for example if you're running a load balancer over the Pulsar proxy). -* [TLS encryption](security-tls-transport.md) and [authentication](security-tls-authentication.md) is supported by the Pulsar proxy - -## Service discovery - -[Clients](getting-started-clients.md) connecting to Pulsar brokers need to be able to communicate with an entire Pulsar instance using a single URL. - -You can use your own service discovery system if you'd like. If you use your own system, there is just one requirement: when a client performs an HTTP request to an endpoint, such as `http://pulsar.us-west.example.com:8080`, the client needs to be redirected to *some* active broker in the desired cluster, whether via DNS, an HTTP or IP redirect, or some other means. - -The diagram below illustrates Pulsar service discovery: - -![alt-text](/assets/pulsar-service-discovery.png) - -In this diagram, the Pulsar cluster is addressable via a single DNS name: `pulsar-cluster.acme.com`. A [Python client](client-libraries-python.md), for example, could access this Pulsar cluster like this: - -```python - -from pulsar import Client - -client = Client('pulsar://pulsar-cluster.acme.com:6650') - -``` - -:::note - -In Pulsar, each topic is handled by only one broker. Initial requests from a client to read, update or delete a topic are sent to a broker that may not be the topic owner. If the broker cannot handle the request for this topic, it redirects the request to the appropriate broker. - -::: - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/concepts-authentication.md b/site2/website/versioned_docs/version-2.10.0-deprecated/concepts-authentication.md deleted file mode 100644 index f6307890c904a7..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/concepts-authentication.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -id: concepts-authentication -title: Authentication and Authorization -sidebar_label: "Authentication and Authorization" -original_id: concepts-authentication ---- - -Pulsar supports a pluggable [authentication](security-overview.md) mechanism which can be configured at the proxy and/or the broker. Pulsar also supports a pluggable [authorization](security-authorization.md) mechanism. These mechanisms work together to identify the client and its access rights on topics, namespaces and tenants. - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/concepts-clients.md b/site2/website/versioned_docs/version-2.10.0-deprecated/concepts-clients.md deleted file mode 100644 index 4040624f7d6366..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/concepts-clients.md +++ /dev/null @@ -1,92 +0,0 @@ ---- -id: concepts-clients -title: Pulsar Clients -sidebar_label: "Clients" -original_id: concepts-clients ---- - -Pulsar exposes a client API with language bindings for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python.md), [C++](client-libraries-cpp.md) and [C#](client-libraries-dotnet.md). The client API optimizes and encapsulates Pulsar's client-broker communication protocol and exposes a simple and intuitive API for use by applications. 
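
For example, a minimal produce-and-consume round trip with the Python client might look like this (a sketch assuming the `pulsar-client` package and a broker at `pulsar://localhost:6650`; the topic and subscription names are placeholders):

```python

import pulsar

client = pulsar.Client('pulsar://localhost:6650')

# Create the subscription before producing so the message is retained for it.
consumer = client.subscribe('my-topic', 'my-subscription')

producer = client.create_producer('my-topic')
producer.send('Hello Pulsar'.encode('utf-8'))

msg = consumer.receive()
print('Received: {}'.format(msg.data()))
consumer.acknowledge(msg)

client.close()

```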
- -Under the hood, the current official Pulsar client libraries support transparent reconnection and/or connection failover to brokers, queuing of messages until acknowledged by the broker, and heuristics such as connection retries with backoff. - -> **Custom client libraries** -> If you'd like to create your own client library, we recommend consulting the documentation on Pulsar's custom [binary protocol](developing-binary-protocol.md). - - -## Client setup phase - -Before an application creates a producer/consumer, the Pulsar client library needs to initiate a setup phase including two steps: - -1. The client attempts to determine the owner of the topic by sending an HTTP lookup request to the broker. The request could reach one of the active brokers which, by looking at the (cached) zookeeper metadata knows who is serving the topic or, in case nobody is serving it, tries to assign it to the least loaded broker. -1. Once the client library has the broker address, it creates a TCP connection (or reuse an existing connection from the pool) and authenticates it. Within this connection, client and broker exchange binary commands from a custom protocol. At this point the client sends a command to create producer/consumer to the broker, which will comply after having validated the authorization policy. - -Whenever the TCP connection breaks, the client immediately re-initiates this setup phase and keeps trying with exponential backoff to re-establish the producer or consumer until the operation succeeds. - -## Reader interface - -In Pulsar, the "standard" [consumer interface](concepts-messaging.md#consumers) involves using consumers to listen on [topics](reference-terminology.md#topic), process incoming messages, and finally acknowledge those messages when they are processed. Whenever a new subscription is created, it is initially positioned at the end of the topic (by default), and consumers associated with that subscription begin reading with the first message created afterwards. Whenever a consumer connects to a topic using a pre-existing subscription, it begins reading from the earliest message un-acked within that subscription. In summary, with the consumer interface, subscription cursors are automatically managed by Pulsar in response to [message acknowledgements](concepts-messaging.md#acknowledgement). - -The **reader interface** for Pulsar enables applications to manually manage cursors. When you use a reader to connect to a topic---rather than a consumer---you need to specify *which* message the reader begins reading from when it connects to a topic. When connecting to a topic, the reader interface enables you to begin with: - -* The **earliest** available message in the topic -* The **latest** available message in the topic -* Some other message between the earliest and the latest. If you select this option, you'll need to explicitly provide a message ID. Your application will be responsible for "knowing" this message ID in advance, perhaps fetching it from a persistent data store or cache. - -The reader interface is helpful for use cases like using Pulsar to provide effectively-once processing semantics for a stream processing system. For this use case, it's essential that the stream processing system be able to "rewind" topics to a specific message and begin reading there. The reader interface provides Pulsar clients with the low-level abstraction necessary to "manually position" themselves within a topic. 
- -Internally, the reader interface is implemented as a consumer using an exclusive, non-durable subscription to the topic with a randomly-allocated name. - -[ **IMPORTANT** ] - -Unlike subscription/consumer, readers are non-durable in nature and does not prevent data in a topic from being deleted, thus it is ***strongly*** advised that [data retention](cookbooks-retention-expiry.md) be configured. If data retention for a topic is not configured for an adequate amount of time, messages that the reader has not yet read might be deleted . This causes the readers to essentially skip messages. Configuring the data retention for a topic guarantees the reader with a certain duration to read a message. - -Please also note that a reader can have a "backlog", but the metric is only used for users to know how behind the reader is. The metric is not considered for any backlog quota calculations. - -![The Pulsar consumer and reader interfaces](/assets/pulsar-reader-consumer-interfaces.png) - -Here's a Java example that begins reading from the earliest available message on a topic: - -```java - -import org.apache.pulsar.client.api.Message; -import org.apache.pulsar.client.api.MessageId; -import org.apache.pulsar.client.api.Reader; - -// Create a reader on a topic and for a specific message (and onward) -Reader reader = pulsarClient.newReader() - .topic("reader-api-test") - .startMessageId(MessageId.earliest) - .create(); - -while (true) { - Message message = reader.readNext(); - - // Process the message -} - -``` - -To create a reader that reads from the latest available message: - -```java - -Reader reader = pulsarClient.newReader() - .topic(topic) - .startMessageId(MessageId.latest) - .create(); - -``` - -To create a reader that reads from some message between the earliest and the latest: - -```java - -byte[] msgIdBytes = // Some byte array -MessageId id = MessageId.fromByteArray(msgIdBytes); -Reader reader = pulsarClient.newReader() - .topic(topic) - .startMessageId(id) - .create(); - -``` - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/concepts-messaging.md b/site2/website/versioned_docs/version-2.10.0-deprecated/concepts-messaging.md deleted file mode 100644 index 3470a8e9480e53..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/concepts-messaging.md +++ /dev/null @@ -1,989 +0,0 @@ ---- -id: concepts-messaging -title: Messaging -sidebar_label: "Messaging" -original_id: concepts-messaging ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -Pulsar is built on the [publish-subscribe](https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern) pattern (often abbreviated to pub-sub). In this pattern, [producers](#producers) publish messages to [topics](#topics); [consumers](#consumers) [subscribe](#subscription-types) to those topics, process incoming messages, and send [acknowledgements](#acknowledgement) to the broker when processing is finished. - -When a subscription is created, Pulsar [retains](concepts-architecture-overview.md#persistent-storage) all messages, even if the consumer is disconnected. The retained messages are discarded only when a consumer acknowledges that all these messages are processed successfully. - -If the consumption of a message fails and you want this message to be consumed again, you can enable [message redelivery mechanism](#message-redelivery) to request the broker to resend this message. - -## Messages - -Messages are the basic "unit" of Pulsar. 
The following table lists the components of messages.

Component | Description
:---------|:-------
Value / data payload | The data carried by the message. All Pulsar messages contain raw bytes, although message data can also conform to data [schemas](schema-get-started.md).
Key | Messages are optionally tagged with keys, which is useful for things like [topic compaction](concepts-topic-compaction.md).
Properties | An optional key/value map of user-defined properties.
Producer name | The name of the producer who produces the message. If you do not specify a producer name, the default name is used.
Topic name | The name of the topic that the message is published to.
Schema version | The version number of the schema that the message is produced with.
Sequence ID | Each Pulsar message belongs to an ordered sequence on its topic. The sequence ID of a message is initially assigned by its producer, indicating its order in that sequence, and can also be customized.<br /><br />Sequence ID can be used for message deduplication. If `brokerDeduplicationEnabled` is set to `true`, the sequence ID of each message is unique within a producer of a topic (non-partitioned) or a partition.
Message ID | The message ID of a message is assigned by bookies as soon as the message is persistently stored. A message ID indicates a message's specific position in a ledger and is unique within a Pulsar cluster.
Publish time | The timestamp of when the message is published. The timestamp is automatically applied by the producer.
Event time | An optional timestamp attached to a message by applications. For example, applications attach a timestamp on when the message is processed. If no event time is set, the value is `0`.

The default maximum size of a message is 5 MB. You can configure the maximum size of a message with the following configurations.

- In the `broker.conf` file.

  ```bash

  # The max size of a message (in bytes).
  maxMessageSize=5242880

  ```

- In the `bookkeeper.conf` file.

  ```bash

  # The max size of the netty frame (in bytes). Any messages received larger than this value are rejected. The default value is 5 MB.
  nettyMaxFrameSizeBytes=5253120

  ```

> For more information on Pulsar messages, see Pulsar [binary protocol](developing-binary-protocol.md).

## Producers

A producer is a process that attaches to a topic and publishes messages to a Pulsar [broker](reference-terminology.md#broker). The Pulsar broker processes the messages.

### Send modes

Producers send messages to brokers synchronously (sync) or asynchronously (async).

| Mode | Description |
|:-----------|-----------|
| Sync send | The producer waits for an acknowledgement from the broker after sending every message. If the acknowledgement is not received, the producer treats the sending operation as a failure. |
| Async send | The producer puts a message in a blocking queue and returns immediately. The client library sends the message to the broker in the background. If the queue is full (you can [configure](reference-configuration.md#broker) the maximum size), the producer is blocked or fails immediately when calling the API, depending on the arguments passed to the producer. |

### Access mode

You can have different types of access modes on topics for producers.

|Access mode | Description
|---|---
`Shared`|Multiple producers can publish on a topic.<br /><br />This is the **default** setting.

    This is the **default** setting. -`Exclusive`|Only one producer can publish on a topic.

    If there is already a producer connected, other producers trying to publish on this topic get errors immediately.

    The "old" producer is evicted and a "new" producer is selected to be the next exclusive producer if the "old" producer experiences a network partition with the broker. -`WaitForExclusive`|If there is already a producer connected, the producer creation is pending (rather than timing out) until the producer gets the `Exclusive` access.

    The producer that succeeds in becoming the exclusive one is treated as the leader. Consequently, if you want to implement the leader election scheme for your application, you can use this access mode. - -:::note - -Once an application creates a producer with `Exclusive` or `WaitForExclusive` access mode successfully, the instance of this application is guaranteed to be the **only writer** to the topic. Any other producers trying to produce messages on this topic will either get errors immediately or have to wait until they get the `Exclusive` access. -For more information, see [PIP 68: Exclusive Producer](https://github.com/apache/pulsar/wiki/PIP-68:-Exclusive-Producer). - -::: - -You can set producer access mode through Java Client API. For more information, see `ProducerAccessMode` in [ProducerBuilder.java](https://github.com/apache/pulsar/blob/fc5768ca3bbf92815d142fe30e6bfad70a1b4fc6/pulsar-client-api/src/main/java/org/apache/pulsar/client/api/ProducerBuilder.java) file. - - -### Compression - -You can compress messages published by producers during transportation. Pulsar currently supports the following types of compression: - -* [LZ4](https://github.com/lz4/lz4) -* [ZLIB](https://zlib.net/) -* [ZSTD](https://facebook.github.io/zstd/) -* [SNAPPY](https://google.github.io/snappy/) - -### Batching - -When batching is enabled, the producer accumulates and sends a batch of messages in a single request. The batch size is defined by the maximum number of messages and the maximum publish latency. Therefore, the backlog size represents the total number of batches instead of the total number of messages. - -In Pulsar, batches are tracked and stored as single units rather than as individual messages. Consumer unbundles a batch into individual messages. However, scheduled messages (configured through the `deliverAt` or the `deliverAfter` parameter) are always sent as individual messages even batching is enabled. - -In general, a batch is acknowledged when all of its messages are acknowledged by a consumer. It means that when **not all** batch messages are acknowledged, then unexpected failures, negative acknowledgements, or acknowledgement timeouts can result in a redelivery of all messages in this batch. - -To avoid redelivering acknowledged messages in a batch to the consumer, Pulsar introduces batch index acknowledgement since Pulsar 2.6.0. When batch index acknowledgement is enabled, the consumer filters out the batch index that has been acknowledged and sends the batch index acknowledgement request to the broker. The broker maintains the batch index acknowledgement status and tracks the acknowledgement status of each batch index to avoid dispatching acknowledged messages to the consumer. The batch is deleted when all indices of the messages in it are acknowledged. - -By default, batch index acknowledgement is disabled (`acknowledgmentAtBatchIndexLevelEnabled=false`). You can enable batch index acknowledgement by setting the `acknowledgmentAtBatchIndexLevelEnabled` parameter to `true` at the broker side. Enabling batch index acknowledgement results in more memory overheads. - -### Chunking -Message chunking enables Pulsar to process large payload messages by splitting the message into chunks at the producer side and aggregating chunked messages at the consumer side. - -With message chunking enabled, when the size of a message exceeds the allowed maximum payload size (the `maxMessageSize` parameter of broker), the workflow of messaging is as follows: -1. 
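As a brief illustration of the batching knobs described above, the following Java sketch enables batching on a producer (the topic name and limits are illustrative, and `pulsarClient` is assumed to be an existing `PulsarClient`):

```java

import java.util.concurrent.TimeUnit;
import org.apache.pulsar.client.api.Producer;

// A batch is flushed when either limit is reached:
// 1000 buffered messages, or 10 ms of publish delay.
Producer<byte[]> producer = pulsarClient.newProducer()
        .topic("my-topic")
        .enableBatching(true)
        .batchingMaxMessages(1000)
        .batchingMaxPublishDelay(10, TimeUnit.MILLISECONDS)
        .create();

```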
### Chunking

Message chunking enables Pulsar to process large payload messages by splitting the message into chunks at the producer side and aggregating chunked messages at the consumer side.

With message chunking enabled, when the size of a message exceeds the allowed maximum payload size (the `maxMessageSize` parameter of the broker), the workflow of messaging is as follows:

1. The producer splits the original message into chunked messages and publishes them with chunked metadata to the broker separately and in order.
2. The broker stores the chunked messages in one managed-ledger in the same way as ordinary messages, and it uses the `chunkedMessageRate` parameter to record the chunked message rate on the topic.
3. The consumer buffers the chunked messages and aggregates them into the receiver queue when it receives all the chunks of a message.
4. The client consumes the aggregated message from the receiver queue.

**Limitations:**
- Chunking is only available for persisted topics.
- Chunking is only available for the exclusive and failover subscription types.
- Chunking cannot be enabled simultaneously with batching.

#### Handle consecutive chunked messages with one ordered consumer

The following figure shows a topic with one producer which publishes a large message payload in chunked messages along with regular non-chunked messages. The producer publishes message M1 in three chunks labeled M1-C1, M1-C2 and M1-C3. The broker stores all three chunked messages in the managed-ledger and dispatches them to the ordered (exclusive/failover) consumer in the same order. The consumer buffers all the chunked messages in memory until it receives all of them, aggregates them into one message, and then hands the original message M1 over to the client.

![](/assets/chunking-01.png)

#### Handle interwoven chunked messages with one ordered consumer

When multiple producers publish chunked messages into a single topic, the broker stores all the chunked messages coming from different producers in the same managed-ledger. The chunked messages in the managed-ledger can be interwoven with each other. As shown below, Producer 1 publishes message M1 in three chunks M1-C1, M1-C2 and M1-C3. Producer 2 publishes message M2 in three chunks M2-C1, M2-C2 and M2-C3. All chunked messages of a specific message are still in order but might not be consecutive in the managed-ledger.

![](/assets/chunking-02.png)

:::note

In this case, interwoven chunked messages may put some memory pressure on the consumer because the consumer keeps a separate buffer for each large message to aggregate all its chunks into one message. You can limit the maximum number of chunked messages a consumer maintains concurrently by configuring the `maxPendingChunkedMessage` parameter. When the threshold is reached, the consumer drops pending messages by silently acknowledging them or asking the broker to redeliver them later, optimizing memory utilization.

:::

#### Enable Message Chunking

**Prerequisite:** Disable batching by setting the `enableBatching` parameter to `false`.

The message chunking feature is OFF by default.
To enable message chunking, set the `chunkingEnabled` parameter to `true` when creating a producer.

:::note

If the consumer fails to receive all chunks of a message within a specified time period, it expires incomplete chunks. The default value is 1 minute. For more information about the `expireTimeOfIncompleteChunkedMessage` parameter, refer to [org.apache.pulsar.client.api](/api/client/).

:::

## Consumers

A consumer is a process that attaches to a topic via a subscription and then receives messages.

A consumer sends a [flow permit request](developing-binary-protocol.md#flow-control) to a broker to get messages. There is a queue at the consumer side to receive messages pushed from the broker.
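For orientation, here is a minimal Java sketch of creating a consumer and draining its receiver queue (the topic and subscription names are placeholders, and `pulsarClient` is assumed to be an existing `PulsarClient`):

```java

import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Message;

Consumer<byte[]> consumer = pulsarClient.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .receiverQueueSize(1000) // size of the client-side receiver queue
        .subscribe();

// Each receive() dequeues one message from the receiver queue.
Message<byte[]> msg = consumer.receive();
consumer.acknowledge(msg);

```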
You can configure the queue size with the [`receiverQueueSize`](client-libraries-java.md#configure-consumer) parameter. The default size is `1000`. Each time `consumer.receive()` is called, a message is dequeued from the buffer.

### Receive modes

Messages are received from [brokers](reference-terminology.md#broker) either synchronously (sync) or asynchronously (async).

| Mode          | Description |
|:--------------|:----------------|
| Sync receive  | A sync receive is blocked until a message is available. |
| Async receive | An async receive returns immediately with a future value—for example, a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture) in Java—that completes once a new message is available. |

### Listeners

Client libraries provide listener implementations for consumers. For example, the [Java client](client-libraries-java.md) provides a {@inject: javadoc:MessageListener:/client/org/apache/pulsar/client/api/MessageListener} interface. In this interface, the `received` method is called whenever a new message is received.

### Acknowledgement

The consumer sends an acknowledgement request to the broker after it consumes a message successfully. A consumed message remains stored and is deleted only after all the subscriptions on the topic have acknowledged it. If you want to keep messages that have already been acknowledged by a consumer, you need to configure the [message retention policy](concepts-messaging.md#message-retention-and-expiry).

For batch messages, you can enable batch index acknowledgement to avoid dispatching acknowledged messages to the consumer. For details about batch index acknowledgement, see [batching](#batching).

Messages can be acknowledged in one of the following two ways:

- Being acknowledged individually. With individual acknowledgement, the consumer acknowledges each message and sends an acknowledgement request to the broker.
- Being acknowledged cumulatively. With cumulative acknowledgement, the consumer **only** acknowledges the last message it received. All messages in the stream up to (and including) the provided message are then not redelivered to that consumer.

If you want to acknowledge messages individually, you can use the following API.

```java

consumer.acknowledge(msg);

```

If you want to acknowledge messages cumulatively, you can use the following API.

```java

consumer.acknowledgeCumulative(msg);

```

:::note

Cumulative acknowledgement cannot be used with the [Shared subscription type](#subscription-types), because the Shared subscription type involves multiple consumers which have access to the same subscription. In the Shared subscription type, messages are acknowledged individually.

:::

### Negative acknowledgement

The negative acknowledgement mechanism allows you to send a notification to the broker indicating that the consumer did not process a message. When a consumer fails to consume a message and needs to re-consume it, the consumer sends a negative acknowledgement (nack) to the broker, triggering the broker to redeliver this message to the consumer.

Messages are negatively acknowledged individually or cumulatively, depending on the subscription type.
In the Exclusive and Failover subscription types, consumers only negatively acknowledge the last message they receive.

In the Shared and Key_Shared subscription types, consumers can negatively acknowledge messages individually.

Be aware that negative acknowledgments on ordered subscription types, such as Exclusive, Failover and Key_Shared, might cause failed messages to be sent to consumers out of the original order.

If you are going to use negative acknowledgment on a message, make sure it is negatively acknowledged before the acknowledgment timeout.

Use the following API to negatively acknowledge message consumption.

```java

Consumer consumer = pulsarClient.newConsumer()
        .topic(topic)
        .subscriptionName("sub-negative-ack")
        .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest)
        .negativeAckRedeliveryDelay(2, TimeUnit.SECONDS) // the default value is 1 min
        .subscribe();

Message message = consumer.receive();

// call the API to send negative acknowledgement
consumer.negativeAcknowledge(message);

message = consumer.receive();
consumer.acknowledge(message);

```

To redeliver messages with different delays, you can use the **redelivery backoff mechanism**, which computes the delay from the number of times a message has been retried.
Use the following API to enable `Negative Redelivery Backoff`.

```java

Consumer consumer = pulsarClient.newConsumer()
        .topic(topic)
        .subscriptionName("sub-negative-ack")
        .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest)
        .negativeAckRedeliveryBackoff(MultiplierRedeliveryBackoff.builder()
            .minDelayMs(1000)
            .maxDelayMs(60 * 1000)
            .build())
        .subscribe();

```

With a minimum delay of 1 second, a maximum delay of 60 seconds, and the default multiplier of 2, the message redelivery behavior should be as follows.

Redelivery count | Redelivery delay
:--------------------|:-----------
1 | 1 second
2 | 2 seconds
3 | 4 seconds
4 | 8 seconds
5 | 16 seconds
6 | 32 seconds
7 | 60 seconds
8 | 60 seconds

:::note

If batching is enabled, all messages in one batch are redelivered to the consumer.

:::

### Acknowledgement timeout

The acknowledgement timeout mechanism allows you to set a time range during which the client tracks the unacknowledged messages. After this acknowledgement timeout (`ackTimeout`) period, the client sends a `redeliver unacknowledged messages` request to the broker, and the broker resends the unacknowledged messages to the consumer.

You can configure the acknowledgement timeout mechanism to redeliver a message if it is not acknowledged after `ackTimeout`, or to execute a timer task that checks for acknowledgement-timeout messages during every `ackTimeoutTickTime` period.

You can also combine this with the redelivery backoff mechanism to redeliver messages with different delays depending on the number of times a message has been retried.

If you want to use redelivery backoff, you can use the following API.

```java

consumer.ackTimeout(10, TimeUnit.SECONDS)
        .ackTimeoutRedeliveryBackoff(MultiplierRedeliveryBackoff.builder()
            .minDelayMs(1000)
            .maxDelayMs(60000)
            .multiplier(2).build())

```

With a 10-second acknowledgement timeout and this backoff policy, the message redelivery behavior should be as follows.

Redelivery count | Redelivery delay
:--------------------|:-----------
1 | 10 + 1 seconds
2 | 10 + 2 seconds
3 | 10 + 4 seconds
4 | 10 + 8 seconds
5 | 10 + 16 seconds
6 | 10 + 32 seconds
7 | 10 + 60 seconds
8 | 10 + 60 seconds

:::note

- If batching is enabled, all messages in one batch are redelivered to the consumer.
- Compared with acknowledgement timeout, negative acknowledgement is preferred. First, it is difficult to set a suitable timeout value. Second, a broker resends messages when the message processing time exceeds the acknowledgement timeout, but these messages might not need to be re-consumed.

:::

Use the following API to enable acknowledgement timeout.

```java

Consumer consumer = pulsarClient.newConsumer()
        .topic(topic)
        .ackTimeout(2, TimeUnit.SECONDS) // the default value is 0
        .ackTimeoutTickTime(1, TimeUnit.SECONDS)
        .subscriptionName("sub")
        .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest)
        .subscribe();

Message message = consumer.receive();

// wait at least 2 seconds
message = consumer.receive();
consumer.acknowledge(message);

```

### Retry letter topic

The retry letter topic allows you to store messages that failed to be consumed and retry consuming them later. With this method, you can customize the interval at which the messages are redelivered. Consumers on the original topic are automatically subscribed to the retry letter topic as well. Once the maximum number of retries has been reached, the unconsumed messages are moved to a [dead letter topic](#dead-letter-topic) for manual processing.

The diagram below illustrates the concept of the retry letter topic.

![](/assets/retry-letter-topic.svg)

The intention of using a retry letter topic is different from using [delayed message delivery](#delayed-message-delivery), even though both aim to consume a message later. The retry letter topic serves failure handling through message redelivery to ensure critical data is not lost, while delayed message delivery is intended to deliver a message with a specified delay.

By default, automatic retry is disabled. You can set `enableRetry` to `true` to enable automatic retry on the consumer.

Use the following API to consume messages from a retry letter topic. When the value of `maxRedeliverCount` is reached, the unconsumed messages are moved to a dead letter topic.

```java

Consumer consumer = pulsarClient.newConsumer(Schema.BYTES)
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .subscriptionType(SubscriptionType.Shared)
        .enableRetry(true)
        .deadLetterPolicy(DeadLetterPolicy.builder()
            .maxRedeliverCount(maxRedeliveryCount)
            .build())
        .subscribe();

```

The default retry letter topic uses this format:

```

<topicname>-<subscriptionname>-RETRY

```

Use the Java client to specify the name of the retry letter topic.

```java

Consumer consumer = pulsarClient.newConsumer(Schema.BYTES)
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .subscriptionType(SubscriptionType.Shared)
        .enableRetry(true)
        .deadLetterPolicy(DeadLetterPolicy.builder()
            .maxRedeliverCount(maxRedeliveryCount)
            .retryLetterTopic("my-retry-letter-topic-name")
            .build())
        .subscribe();

```

The messages in the retry letter topic contain some special properties that are automatically created by the client.

Special property | Description
:--------------------|:-----------
`REAL_TOPIC` | The real topic name.
`ORIGIN_MESSAGE_ID` | The origin message ID. It is crucial for message tracking.
`RECONSUMETIMES` | The number of times the message has been retried.
`DELAY_TIME` | The message retry interval in milliseconds.

**Example**

```

REAL_TOPIC = persistent://public/default/my-topic
ORIGIN_MESSAGE_ID = 1:0:-1:0
RECONSUMETIMES = 6
DELAY_TIME = 3000

```

Use the following API to send a message back to the retry letter topic so that it is retried later.
```java

consumer.reconsumeLater(msg, 3, TimeUnit.SECONDS);

```

Use the following API to add custom properties for the `reconsumeLater` function. In the next attempt to consume, the custom properties can be retrieved via `message#getProperty`.

```java

Map<String, String> customProperties = new HashMap<String, String>();
customProperties.put("custom-key-1", "custom-value-1");
customProperties.put("custom-key-2", "custom-value-2");
consumer.reconsumeLater(msg, customProperties, 3, TimeUnit.SECONDS);

```

:::note

* Currently, the retry letter topic is enabled only in the Shared subscription type.
* Compared with negative acknowledgment, the retry letter topic is more suitable for messages that require a large number of retries with a configurable retry interval, because messages in the retry letter topic are persisted to BookKeeper, while messages that need to be retried due to negative acknowledgment are cached on the client side.

:::

### Dead letter topic

The dead letter topic allows you to continue message consumption even when some messages are not consumed successfully. Messages that fail to be consumed are stored in a specific topic, which is called the dead letter topic. You can decide how to handle the messages in the dead letter topic.

Enable the dead letter topic in a Java client using the default dead letter topic.

```java

Consumer consumer = pulsarClient.newConsumer(Schema.BYTES)
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .subscriptionType(SubscriptionType.Shared)
        .deadLetterPolicy(DeadLetterPolicy.builder()
            .maxRedeliverCount(maxRedeliveryCount)
            .build())
        .subscribe();

```

The default dead letter topic uses this format:

```

<topicname>-<subscriptionname>-DLQ

```

Use the Java client to specify the name of the dead letter topic.

```java

Consumer consumer = pulsarClient.newConsumer(Schema.BYTES)
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .subscriptionType(SubscriptionType.Shared)
        .deadLetterPolicy(DeadLetterPolicy.builder()
            .maxRedeliverCount(maxRedeliveryCount)
            .deadLetterTopic("my-dead-letter-topic-name")
            .build())
        .subscribe();

```

By default, there is no subscription during a DLQ topic creation. Without a just-in-time subscription to the DLQ topic, you may lose messages. To automatically create an initial subscription for the DLQ, you can specify the `initialSubscriptionName` parameter. If this parameter is set but the broker's `allowAutoSubscriptionCreation` is disabled, the DLQ producer fails to be created.

```java

Consumer consumer = pulsarClient.newConsumer(Schema.BYTES)
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .subscriptionType(SubscriptionType.Shared)
        .deadLetterPolicy(DeadLetterPolicy.builder()
            .maxRedeliverCount(maxRedeliveryCount)
            .deadLetterTopic("my-dead-letter-topic-name")
            .initialSubscriptionName("init-sub")
            .build())
        .subscribe();

```

The dead letter topic serves message redelivery, which is triggered by [acknowledgement timeout](#acknowledgement-timeout), [negative acknowledgement](#negative-acknowledgement), or [retry letter topic](#retry-letter-topic).

:::note

* Currently, the dead letter topic is enabled only in the Shared and Key_Shared subscription types.

:::
## Topics

As in other pub-sub systems, topics in Pulsar are named channels for transmitting messages from producers to consumers. Topic names are URLs that have a well-defined structure:

```http

{persistent|non-persistent}://tenant/namespace/topic

```

Topic name component | Description
:--------------------|:-----------
`persistent` / `non-persistent` | This identifies the type of topic. Pulsar supports two kinds of topics: [persistent](concepts-architecture-overview.md#persistent-storage) and [non-persistent](#non-persistent-topics). The default is persistent, so if you do not specify a type, the topic is persistent. With persistent topics, all messages are durably persisted on disks (if the broker is not standalone, messages are durably persisted on multiple disks), whereas data for non-persistent topics is not persisted to storage disks.
`tenant` | The topic tenant within the instance. Tenants are essential to multi-tenancy in Pulsar, and spread across clusters.
`namespace` | The administrative unit of the topic, which acts as a grouping mechanism for related topics. Most topic configuration is performed at the [namespace](#namespaces) level. Each tenant has one or multiple namespaces.
`topic` | The final part of the name. Topic names have no special meaning in a Pulsar instance.

> **No need to explicitly create new topics**
> You do not need to explicitly create topics in Pulsar. If a client attempts to write or receive messages to/from a topic that does not yet exist, Pulsar creates that topic under the namespace provided in the [topic name](#topics) automatically.
> If no tenant or namespace is specified when a client creates a topic, the topic is created in the default tenant and namespace. You can also create a topic in a specified tenant and namespace, such as `persistent://my-tenant/my-namespace/my-topic`. `persistent://my-tenant/my-namespace/my-topic` means the `my-topic` topic is created in the `my-namespace` namespace of the `my-tenant` tenant.

## Namespaces

A namespace is a logical nomenclature within a tenant. A tenant can create multiple namespaces via the [admin API](admin-api-namespaces.md#create). For instance, a tenant with different applications can create a separate namespace for each application. A namespace allows the application to create and manage a hierarchy of topics. For example, `my-tenant/app1` is the namespace for the application `app1` of the tenant `my-tenant`. You can create any number of [topics](#topics) under a namespace.

## Subscriptions

A subscription is a named configuration rule that determines how messages are delivered to consumers. Four subscription types are available in Pulsar: [exclusive](#exclusive), [shared](#shared), [failover](#failover), and [key_shared](#key_shared). These types are illustrated in the figure below.

![Subscription types](/assets/pulsar-subscription-types.png)

> **Pub-Sub or Queuing**
> In Pulsar, you can use different subscriptions flexibly.
> * If you want to achieve traditional "fan-out pub-sub messaging" among consumers, specify a unique subscription name for each consumer. This is the exclusive subscription type.
> * If you want to achieve "message queuing" among consumers, share the same subscription name among multiple consumers (shared, failover, key_shared).
> * If you want to achieve both effects simultaneously, combine the exclusive subscription type with other subscription types for consumers.

### Subscription types

When a subscription has no consumers, its subscription type is undefined. The type of a subscription is defined when a consumer connects to it, and the type can be changed by restarting all consumers with a different configuration.
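The subscription type is chosen per consumer at subscribe time. A minimal, illustrative Java sketch (the topic and subscription names are placeholders, and `pulsarClient` is assumed to be an existing `PulsarClient`):

```java

import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.SubscriptionType;

// All consumers sharing "my-subscription" must use the same type.
Consumer<byte[]> consumer = pulsarClient.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .subscriptionType(SubscriptionType.Shared) // or Exclusive, Failover, Key_Shared
        .subscribe();

```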
#### Exclusive

In the *Exclusive* type, only a single consumer is allowed to attach to the subscription. If multiple consumers subscribe to a topic using the same subscription, an error occurs.

In the diagram below, only **Consumer A-0** is allowed to consume messages.

> Exclusive is the default subscription type.

![Exclusive subscriptions](/assets/pulsar-exclusive-subscriptions.png)

#### Failover

In the *Failover* type, multiple consumers can attach to the same subscription. A master consumer is picked for a non-partitioned topic or for each partition of a partitioned topic and receives messages. When the master consumer disconnects, all (non-acknowledged and subsequent) messages are delivered to the next consumer in line.

For partitioned topics, the broker sorts consumers by priority level and, within a priority level, by the lexicographical order of consumer names. The broker then tries to evenly assign partitions to the consumers with the highest priority level.

For non-partitioned topics, the broker picks consumers in the order in which they subscribe to the topic.

In the diagram below, **Consumer-B-0** is the master consumer while **Consumer-B-1** would be the next consumer in line to receive messages if **Consumer-B-0** is disconnected.

![Failover subscriptions](/assets/pulsar-failover-subscriptions.png)

#### Shared

In the *shared* or *round robin* type, multiple consumers can attach to the same subscription. Messages are delivered in a round-robin distribution across consumers, and any given message is delivered to only one consumer. When a consumer disconnects, all the messages that were sent to it and not acknowledged are rescheduled for sending to the remaining consumers.

In the diagram below, **Consumer-C-1** and **Consumer-C-2** are able to subscribe to the topic, but **Consumer-C-3** and others could as well.

> **Limitations of Shared type**
> When using Shared type, be aware that:
> * Message ordering is not guaranteed.
> * You cannot use cumulative acknowledgment with Shared type.

![Shared subscriptions](/assets/pulsar-shared-subscriptions.png)

#### Key_Shared

In the *Key_Shared* type, multiple consumers can attach to the same subscription. Messages are distributed across consumers, and messages with the same key or the same ordering key are delivered to only one consumer. No matter how many times a message is redelivered, it is delivered to the same consumer. When a consumer connects or disconnects, the consumer serving some message keys changes.

![Key_Shared subscriptions](/assets/pulsar-key-shared-subscriptions.png)

Note that when consumers use the Key_Shared subscription type, you need to **disable batching** or **use key-based batching** for the producers. There are two reasons why key-based batching is necessary for the Key_Shared subscription type:
1. The broker dispatches messages according to the keys of the messages, but the default batching approach might fail to pack messages with the same key into the same batch.
2. Since it is the consumers instead of the broker who dispatch the messages from the batches, the key of the first message in a batch is considered the key of all messages in that batch, thereby leading to context errors.

Key-based batching aims at resolving the above-mentioned issues. This batching method ensures that the producers pack messages with the same key into the same batch. Messages without a key are packed into one batch and this batch has no key. When the broker dispatches messages from this batch, it uses `NON_KEY` as the key. In addition, each consumer is associated with **only one** key and should receive **only one message batch** for the connected key. By default, you can limit batching by configuring the number of messages that producers are allowed to send.
Below are examples of enabling key-based batching under the Key_Shared subscription type, with `client` being the Pulsar client that you created.

````mdx-code-block
<Tabs groupId="lang-choice"
  defaultValue="Java"
  values={[{"label":"Java","value":"Java"},{"label":"C++","value":"C++"},{"label":"Python","value":"Python"}]}>
<TabItem value="Java">

```java

Producer producer = client.newProducer()
        .topic("my-topic")
        .batcherBuilder(BatcherBuilder.KEY_BASED)
        .create();

```

</TabItem>
<TabItem value="C++">

```cpp

ProducerConfiguration producerConfig;
producerConfig.setBatchingType(ProducerConfiguration::BatchingType::KeyBasedBatching);
Producer producer;
client.createProducer("my-topic", producerConfig, producer);

```

</TabItem>
<TabItem value="Python">

```python

producer = client.create_producer(topic='my-topic', batching_type=pulsar.BatchingType.KeyBased)

```

</TabItem>
</Tabs>
````

> **Limitations of Key_Shared type**
> When you use Key_Shared type, be aware that:
> * You need to specify a key or orderingKey for messages.
> * You cannot use cumulative acknowledgment with Key_Shared type.

### Subscription modes

#### What is a subscription mode

The subscription mode indicates the cursor type.

- When a subscription is created, an associated cursor is created to record the last consumed position.

- When a consumer of the subscription restarts, it can continue consuming from the last message it consumed.

Subscription mode | Description | Note
|---|---|---
`Durable`|The cursor is durable, which retains messages and persists the current position. <br /><br />If a broker restarts from a failure, it can recover the cursor from the persistent storage (BookKeeper), so that messages can continue to be consumed from the last consumed position.|`Durable` is the **default** subscription mode.
`NonDurable`|The cursor is non-durable. <br /><br />Once a broker stops, the cursor is lost and can never be recovered, so that messages **cannot** continue to be consumed from the last consumed position.|A reader's subscription mode is `NonDurable` in nature and it does not prevent data in a topic from being deleted. A reader's subscription mode **cannot** be changed.

A [subscription](#subscriptions) can have one or more consumers. When a consumer subscribes to a topic, it must specify the subscription name. A durable subscription and a non-durable subscription can have the same name; they are independent of each other. If a consumer specifies a subscription that does not already exist, the subscription is automatically created.

#### When to use

By default, messages of a topic without any durable subscriptions are marked as deleted. If you want to prevent the messages from being marked as deleted, you can create a durable subscription for this topic. In this case, only acknowledged messages are marked as deleted. For more information, see [message retention and expiry](cookbooks-retention-expiry.md).

#### How to use

After a consumer is created, the default subscription mode of the consumer is `Durable`. You can change the subscription mode to `NonDurable` by making changes to the consumer's configuration.

````mdx-code-block
<Tabs groupId="sub-mode"
  defaultValue="Durable"
  values={[{"label":"Durable","value":"Durable"},{"label":"Non-durable","value":"Non-durable"}]}>
<TabItem value="Durable">

```java

Consumer consumer = pulsarClient.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-sub")
        .subscriptionMode(SubscriptionMode.Durable)
        .subscribe();

```

</TabItem>
<TabItem value="Non-durable">

```java

Consumer consumer = pulsarClient.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-sub")
        .subscriptionMode(SubscriptionMode.NonDurable)
        .subscribe();

```

</TabItem>
</Tabs>
````

For how to create, check, or delete a durable subscription, see [manage subscriptions](admin-api-topics.md#manage-subscriptions).

## Multi-topic subscriptions

When a consumer subscribes to a Pulsar topic, by default it subscribes to one specific topic, such as `persistent://public/default/my-topic`. As of Pulsar version 1.23.0-incubating, however, Pulsar consumers can simultaneously subscribe to multiple topics. You can define a list of topics in two ways:

* On the basis of a [**reg**ular **ex**pression](https://en.wikipedia.org/wiki/Regular_expression) (regex), for example `persistent://public/default/finance-.*`
* By explicitly defining a list of topics

> When subscribing to multiple topics by regex, all topics must be in the same [namespace](#namespaces).

When subscribing to multiple topics, the Pulsar client automatically makes a call to the Pulsar API to discover the topics that match the regex pattern/list, and then subscribes to all of them. If any of the topics do not exist, the consumer auto-subscribes to them once the topics are created.

> **No ordering guarantees across multiple topics**
> When a producer sends messages to a single topic, all messages are guaranteed to be read from that topic in the same order. However, these guarantees do not hold across multiple topics. So when a producer sends messages to multiple topics, the order in which messages are read from those topics is not guaranteed to be the same.
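As a brief sketch of the explicit-list variant (the topic names are placeholders, and `pulsarClient` is assumed to be an existing `PulsarClient`):

```java

import java.util.Arrays;
import org.apache.pulsar.client.api.Consumer;

// Subscribe to an explicit list of topics in one call.
Consumer<byte[]> multiTopicConsumer = pulsarClient.newConsumer()
        .topics(Arrays.asList(
                "persistent://public/default/finance-eu",
                "persistent://public/default/finance-us"))
        .subscriptionName("subscription-1")
        .subscribe();

```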
The following are multi-topic subscription examples for Java using regex patterns.

```java

import java.util.regex.Pattern;

import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.PulsarClient;

PulsarClient pulsarClient = // Instantiate Pulsar client object

// Subscribe to all topics in a namespace
Pattern allTopicsInNamespace = Pattern.compile("persistent://public/default/.*");
Consumer allTopicsConsumer = pulsarClient.newConsumer()
        .topicsPattern(allTopicsInNamespace)
        .subscriptionName("subscription-1")
        .subscribe();

// Subscribe to a subset of topics in a namespace, based on regex
Pattern someTopicsInNamespace = Pattern.compile("persistent://public/default/foo.*");
Consumer someTopicsConsumer = pulsarClient.newConsumer()
        .topicsPattern(someTopicsInNamespace)
        .subscriptionName("subscription-1")
        .subscribe();

```

For code examples, see [Java](client-libraries-java.md#multi-topic-subscriptions).

## Partitioned topics

Normal topics are served only by a single broker, which limits the maximum throughput of the topic. *Partitioned topics* are a special type of topic that are handled by multiple brokers, thus allowing for higher throughput.

A partitioned topic is actually implemented as N internal topics, where N is the number of partitions. When publishing messages to a partitioned topic, each message is routed to one of several brokers. The distribution of partitions across brokers is handled automatically by Pulsar.

The diagram below illustrates this:

![](/assets/partitioning.png)

The **Topic1** topic has five partitions (**P0** through **P4**) split across three brokers. Because there are more partitions than brokers, two brokers handle two partitions apiece, while the third handles only one (again, Pulsar handles this distribution of partitions automatically).

Messages for this topic are broadcast to two consumers. The [routing mode](#routing-modes) determines which partition each message is published to, while the [subscription type](#subscription-types) determines which messages go to which consumers.

Decisions about routing and subscription modes can be made separately in most cases. In general, throughput concerns should guide partitioning/routing decisions while subscription decisions should be guided by application semantics.

There is no difference between partitioned topics and normal topics in terms of how subscription types work, as partitioning only determines what happens between when a message is published by a producer and when it is processed and acknowledged by a consumer.

Partitioned topics need to be explicitly created via the [admin API](admin-api-overview.md). The number of partitions can be specified when creating the topic.

### Routing modes

When publishing to partitioned topics, you must specify a *routing mode*. The routing mode determines which partition---that is, which internal topic---each message should be published to.

There are three {@inject: javadoc:MessageRoutingMode:/client/org/apache/pulsar/client/api/MessageRoutingMode} options available:

Mode | Description
:--------|:------------
`RoundRobinPartition` | If no key is provided, the producer publishes messages across all partitions in round-robin fashion to achieve maximum throughput. Note that round-robin is not done per individual message; instead, it is applied at the same boundary as the batching delay, to ensure batching is effective. If a key is specified on the message, the partitioned producer hashes the key and assigns the message to a particular partition. This is the default mode.
`SinglePartition` | If no key is provided, the producer randomly picks one single partition and publishes all messages into that partition. If a key is specified on the message, the partitioned producer hashes the key and assigns the message to a particular partition.
`CustomPartition` | Uses a custom message router implementation that is called to determine the partition for a particular message. You can create a custom routing mode by using the [Java client](client-libraries-java.md) and implementing the {@inject: javadoc:MessageRouter:/client/org/apache/pulsar/client/api/MessageRouter} interface.

### Ordering guarantee

The ordering of messages is related to the MessageRoutingMode and the message key. Usually, users want a per-key-partition ordering guarantee.

If a key is attached to a message, the messages are routed to corresponding partitions based on the hashing scheme specified by {@inject: javadoc:HashingScheme:/client/org/apache/pulsar/client/api/HashingScheme} in {@inject: javadoc:ProducerBuilder:/client/org/apache/pulsar/client/api/ProducerBuilder}, when using either the `SinglePartition` or the `RoundRobinPartition` mode.

Ordering guarantee | Description | Routing Mode and Key
:------------------|:------------|:------------
Per-key-partition | All messages with the same key are in order and placed in the same partition. | Use either `SinglePartition` or `RoundRobinPartition` mode, and provide a key with each message.
Per-producer | All messages from the same producer are in order. | Use `SinglePartition` mode, and provide no key with messages.

### Hashing scheme

{@inject: javadoc:HashingScheme:/client/org/apache/pulsar/client/api/HashingScheme} is an enum that represents the set of standard hashing functions available when choosing the partition to use for a particular message.

There are two types of standard hashing functions available: `JavaStringHash` and `Murmur3_32Hash`.
The default hashing function for a producer is `JavaStringHash`.
Note that `JavaStringHash` is not useful when producers come from multiple different language clients; in that case, it is recommended to use `Murmur3_32Hash`.

## Non-persistent topics

By default, Pulsar persistently stores *all* unacknowledged messages on multiple [BookKeeper](concepts-architecture-overview.md#persistent-storage) bookies (storage nodes). Data for messages on persistent topics can thus survive broker restarts and subscriber failover.

Pulsar also, however, supports **non-persistent topics**, which are topics on which messages are *never* persisted to disk and live only in memory. When using non-persistent delivery, killing a Pulsar broker or disconnecting a subscriber from a topic means that all in-transit messages are lost on that (non-persistent) topic, meaning that clients may see message loss.

Non-persistent topics have names of this form (note the `non-persistent` in the name):

```http

non-persistent://tenant/namespace/topic

```

> For more info on using non-persistent topics, see the [Non-persistent messaging cookbook](cookbooks-non-persistent.md).

In non-persistent topics, brokers immediately deliver messages to all connected subscribers *without persisting them* in [BookKeeper](concepts-architecture-overview.md#persistent-storage). If a subscriber is disconnected, the broker cannot deliver those in-transit messages, and subscribers will never be able to receive those messages again. Eliminating the persistent storage step makes messaging on non-persistent topics slightly faster than on persistent topics in some cases, but with the caveat that some of the core benefits of Pulsar are lost.
> With non-persistent topics, message data lives only in memory. If a message broker fails or message data can otherwise not be retrieved from memory, your message data may be lost. Use non-persistent topics only if you're *certain* that your use case requires it and can sustain it.

By default, non-persistent topics are enabled on Pulsar brokers. You can disable them in the broker's [configuration](reference-configuration.md#broker-enableNonPersistentTopics). You can manage non-persistent topics using the `pulsar-admin topics` command. For more information, see [`pulsar-admin`](/tools/pulsar-admin/).

### Performance

Non-persistent messaging is usually faster than persistent messaging because brokers don't persist messages; they send acks back to the producer as soon as the message is delivered to connected subscribers. Producers thus see comparatively low publish latency with non-persistent topics.

### Client API

Producers and consumers can connect to non-persistent topics in the same way as persistent topics, with the crucial difference that the topic name must start with `non-persistent`. All three subscription types---[exclusive](#exclusive), [shared](#shared), and [failover](#failover)---are supported for non-persistent topics.

Here's an example [Java consumer](client-libraries-java.md#consumers) for a non-persistent topic:

```java

PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650")
        .build();
String npTopic = "non-persistent://public/default/my-topic";
String subscriptionName = "my-subscription-name";

Consumer consumer = client.newConsumer()
        .topic(npTopic)
        .subscriptionName(subscriptionName)
        .subscribe();

```

Here's an example [Java producer](client-libraries-java.md#producer) for the same non-persistent topic:

```java

Producer producer = client.newProducer()
        .topic(npTopic)
        .create();

```

## System topic

A system topic is a predefined topic for internal use within Pulsar. It can be either a persistent or a non-persistent topic.

System topics serve to implement certain features and eliminate dependencies on third-party components, such as transactions, heartbeat detections, topic-level policies, and resource group services. System topics make the implementation of these features simpler, less dependent on external components, and more flexible. Take heartbeat detections for example: you can leverage the system topic for health checks to internally enable a producer/reader to produce/consume messages under the heartbeat namespace, which can detect whether the current service is still alive.
There are diverse system topics depending on namespaces. The following table outlines the available system topics for each specific namespace.

| Namespace | TopicName | Domain | Count | Usage |
|-----------|-----------|--------|-------|-------|
| pulsar/system | `transaction_coordinator_assign_${id}` | Persistent | Default 16 | Transaction coordinator |
| pulsar/system | `_transaction_log${tc_id}` | Persistent | Default 16 | Transaction log |
| pulsar/system | `resource-usage` | Non-persistent | Default 4 | Resource group service |
| host/port | `heartbeat` | Persistent | 1 | Heartbeat detection |
| User-defined-ns | [`__change_events`](concepts-multi-tenancy.md#namespace-change-events-and-topic-level-policies) | Persistent | Default 4 | Topic events |
| User-defined-ns | `__transaction_buffer_snapshot` | Persistent | One per namespace | Transaction buffer snapshots |
| User-defined-ns | `${topicName}__transaction_pending_ack` | Persistent | One per every topic subscription acknowledged with transactions | Acknowledgements with transactions |

:::note

* You cannot create any system topics.
* By default, system topics are disabled. To enable system topics, you need to change the following configurations in the `conf/broker.conf` or `conf/standalone.conf` file.

  ```conf
  systemTopicEnabled=true
  topicLevelPoliciesEnabled=true
  ```

:::

## Message redelivery

Apache Pulsar supports graceful failure handling and ensures critical data is not lost. Software always has unexpected conditions, and at times messages may not be delivered successfully. Therefore, it is important to have a built-in mechanism that handles failure, particularly in asynchronous messaging, as highlighted in the following examples.

- Consumers get disconnected from the database or the HTTP server. When this happens, the database is temporarily offline while the consumer is writing data to it, or the external HTTP server that the consumer calls is momentarily unavailable.
- Consumers get disconnected from a broker due to consumer crashes, broken connections, etc. As a consequence, the unacknowledged messages are delivered to other available consumers.

Apache Pulsar avoids these and other message delivery failures using at-least-once delivery semantics, which ensure that Pulsar processes a message at least once.

To utilize message redelivery, you need to enable this mechanism in the Apache Pulsar client before the broker can resend the unacknowledged messages. You can activate the message redelivery mechanism in Apache Pulsar using three methods.

- [Negative Acknowledgment](#negative-acknowledgement)
- [Acknowledgement Timeout](#acknowledgement-timeout)
- [Retry letter topic](#retry-letter-topic)

## Message retention and expiry

By default, Pulsar message brokers:

* immediately delete *all* messages that have been acknowledged by a consumer, and
* [persistently store](concepts-architecture-overview.md#persistent-storage) all unacknowledged messages in a message backlog.

Pulsar has two features, however, that enable you to override this default behavior:

* Message **retention** enables you to store messages that have been acknowledged by a consumer
* Message **expiry** enables you to set a time to live (TTL) for messages that have not yet been acknowledged

> All message retention and expiry is managed at the [namespace](#namespaces) level. For a how-to, see the [Message retention and expiry](cookbooks-retention-expiry.md) cookbook.
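As a hedged sketch of configuring retention programmatically with the Java admin client (the namespace and limits are illustrative, and `admin` is assumed to be an existing `PulsarAdmin` instance):

```java

import org.apache.pulsar.common.policies.data.RetentionPolicies;

// Retain acknowledged messages for up to 60 minutes or 512 MB,
// whichever limit is hit first, for every topic in the namespace.
admin.namespaces().setRetention("my-tenant/my-namespace",
        new RetentionPolicies(60, 512));

```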
- -The diagram below illustrates both concepts: - -![Message retention and expiry](/assets/retention-expiry.png) - -With message retention, shown at the top, a retention policy applied to all topics in a namespace dictates that some messages are durably stored in Pulsar even though they've already been acknowledged. Acknowledged messages that are not covered by the retention policy are deleted. Without a retention policy, *all* of the acknowledged messages would be deleted. - -With message expiry, shown at the bottom, some messages are deleted, even though they haven't been acknowledged, because they've expired according to the TTL applied to the namespace (for example because a TTL of 5 minutes has been applied and the messages haven't been acknowledged but are 10 minutes old). - -## Message deduplication - -Message duplication occurs when a message is [persisted](concepts-architecture-overview.md#persistent-storage) by Pulsar more than once. Message deduplication is an optional Pulsar feature that prevents unnecessary message duplication by processing each message only once, even if the message is received more than once. - -The following diagram illustrates what happens when message deduplication is disabled vs. enabled: - -![Pulsar message deduplication](/assets/message-deduplication.png) - - -Message deduplication is disabled in the scenario shown at the top. Here, a producer publishes message 1 on a topic; the message reaches a Pulsar broker and is [persisted](concepts-architecture-overview.md#persistent-storage) to BookKeeper. The producer then sends message 1 again (in this case due to some retry logic), and the message is received by the broker and stored in BookKeeper again, which means that duplication has occurred. - -In the second scenario at the bottom, the producer publishes message 1, which is received by the broker and persisted, as in the first scenario. When the producer attempts to publish the message again, however, the broker knows that it has already seen message 1 and thus does not persist the message. - -> Message deduplication is handled at the namespace level or the topic level. For more instructions, see the [message deduplication cookbook](cookbooks-deduplication.md). - - -### Producer idempotency - -The other available approach to message deduplication is to ensure that each message is *only produced once*. This approach is typically called **producer idempotency**. The drawback of this approach is that it defers the work of message deduplication to the application. In Pulsar, this is handled at the [broker](reference-terminology.md#broker) level, so you do not need to modify your Pulsar client code. Instead, you only need to make administrative changes. For details, see [Managing message deduplication](cookbooks-deduplication.md). - -### Deduplication and effectively-once semantics - -Message deduplication makes Pulsar an ideal messaging system to be used in conjunction with stream processing engines (SPEs) and other systems seeking to provide effectively-once processing semantics. Messaging systems that do not offer automatic message deduplication require the SPE or other system to guarantee deduplication, which means that strict message ordering comes at the cost of burdening the application with the responsibility of deduplication. With Pulsar, strict ordering guarantees come at no application-level cost. - -> You can find more in-depth information in [this post](https://www.splunk.com/en_us/blog/it/exactly-once-is-not-exactly-the-same.html). 
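On the client side, the deduplication cookbook generally advises giving producers a stable name and disabling the send timeout so that retries cannot create gaps. A hedged Java sketch (the names are illustrative, deduplication is assumed to be enabled for the namespace, and `pulsarClient` is assumed to be an existing `PulsarClient`):

```java

import java.util.concurrent.TimeUnit;
import org.apache.pulsar.client.api.Producer;

// A stable producer name plus an unlimited send timeout lets the broker
// recognize retried messages by (producer name, sequence ID) and drop duplicates.
Producer<byte[]> producer = pulsarClient.newProducer()
        .topic("persistent://my-tenant/my-namespace/my-topic")
        .producerName("dedup-producer-1")
        .sendTimeout(0, TimeUnit.SECONDS)
        .create();

```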
## Delayed message delivery

Delayed message delivery enables you to consume a message later. In this mechanism, a message is stored in BookKeeper. The `DelayedDeliveryTracker` maintains the time index (time -> messageId) in memory after the message is published to a broker. This message is delivered to a consumer once the specified delay is over.

Delayed message delivery only works in the Shared subscription type. In the Exclusive and Failover subscription types, the delayed message is dispatched immediately.

The diagram below illustrates the concept of delayed message delivery:

![Delayed Message Delivery](/assets/message_delay.png)

A broker saves a message without any check. When a consumer consumes a message, if the message is set to be delayed, then the message is added to the `DelayedDeliveryTracker`. A subscription checks and gets timed-out messages from the `DelayedDeliveryTracker`.

### Broker

Delayed message delivery is enabled by default. You can change it in the broker configuration file as below:

```

# Whether to enable the delayed delivery for messages.
# If disabled, messages are immediately delivered and there is no tracking overhead.
delayedDeliveryEnabled=true

# Control the ticking time for the retry of delayed message delivery,
# affecting the accuracy of the delivery time compared to the scheduled time.
# Default is 1 second.
delayedDeliveryTickTimeMillis=1000

```

### Producer

The following is an example of delayed message delivery for a producer in Java:

```java

// message to be delivered at the configured delay interval
producer.newMessage().deliverAfter(3L, TimeUnit.MINUTES).value("Hello Pulsar!").send();

```

diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/concepts-multi-tenancy.md b/site2/website/versioned_docs/version-2.10.0-deprecated/concepts-multi-tenancy.md
deleted file mode 100644
index 93a59557b2efca..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/concepts-multi-tenancy.md
+++ /dev/null
@@ -1,67 +0,0 @@
---
id: concepts-multi-tenancy
title: Multi Tenancy
sidebar_label: "Multi Tenancy"
original_id: concepts-multi-tenancy
---

Pulsar was created from the ground up as a multi-tenant system. To support multi-tenancy, Pulsar has a concept of tenants. Tenants can be spread across clusters and can each have their own [authentication and authorization](security-overview.md) scheme applied to them. They are also the administrative unit at which storage quotas, [message TTL](cookbooks-retention-expiry.md#time-to-live-ttl), and isolation policies can be managed.

The multi-tenant nature of Pulsar is reflected most visibly in topic URLs, which have this structure:

```http

persistent://tenant/namespace/topic

```

As you can see, the tenant is the most basic unit of categorization for topics (more fundamental than the namespace and topic name).

## Tenants

To each tenant in a Pulsar instance you can assign:

* An [authorization](security-authorization.md) scheme
* The set of [clusters](reference-terminology.md#cluster) to which the tenant's configuration applies

## Namespaces

Tenants and namespaces are two key concepts of Pulsar to support multi-tenancy.

* Pulsar is provisioned for specified tenants with appropriate capacity allocated to each tenant.
* A namespace is the administrative unit nomenclature within a tenant. The configuration policies set on a namespace apply to all the topics created in that namespace. A tenant may create multiple namespaces via self-administration using the REST API and the [`pulsar-admin`](reference-pulsar-admin.md) CLI tool. For instance, a tenant with different applications can create a separate namespace for each application.

Names for topics in the same namespace will look like this:

```http

persistent://tenant/app1/topic-1

persistent://tenant/app1/topic-2

persistent://tenant/app1/topic-3

```
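For illustration, namespaces can also be created with the Java admin client (a hedged sketch; the names are placeholders, and `admin` is assumed to be an existing `PulsarAdmin` instance):

```java

// Create one namespace per application under the tenant.
admin.namespaces().createNamespace("tenant/app1");
admin.namespaces().createNamespace("tenant/app2");

```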
### Namespace change events and topic-level policies

Pulsar is a multi-tenant event streaming system. Administrators can manage tenants and namespaces by setting policies at different levels. However, some policies, such as the retention policy and the storage quota policy, are only available at the namespace level. In many use cases, users need to set a policy at the topic level. The namespace change events approach is proposed to support topic-level policies in an efficient way. In this approach, Pulsar is used as an event log to store namespace change events (such as topic policy changes). This approach has a few benefits:
- It avoids using ZooKeeper and putting more load on ZooKeeper.
- It uses Pulsar as an event log for propagating the policy cache, which scales efficiently.
- It allows using Pulsar SQL to query the namespace changes and audit the system.

Each namespace has a [system topic](concepts-messaging.md#system-topic) named `__change_events`. This system topic stores change events for a given namespace. The following figure illustrates how to leverage it to update topic-level policies.

![Leverage the system topic to update topic-level policies](/assets/system-topic-for-topic-level-policies.svg)

1. Pulsar Admin clients communicate with the Admin RESTful API to update topic-level policies.
2. Any broker that receives the Admin HTTP request publishes a topic policy change event to the corresponding system topic (`__change_events`) of the namespace.
3. Each broker that owns a namespace bundle(s) subscribes to the system topic (`__change_events`) to receive the change events of the namespace.
4. Each broker applies the change events to its policy cache.
5. Once the policy cache is updated, the broker sends the response back to the Pulsar Admin clients.

:::note

By default, the system topic is disabled. To enable topic-level policies (`topicLevelPoliciesEnabled`=`true`), you need to enable the system topic by setting `systemTopicEnabled` to `true` in the `conf/broker.conf` or `conf/standalone.conf` file.

:::
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/concepts-multiple-advertised-listeners.md b/site2/website/versioned_docs/version-2.10.0-deprecated/concepts-multiple-advertised-listeners.md
deleted file mode 100644
index f2e1ae0aadc7ca..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/concepts-multiple-advertised-listeners.md
+++ /dev/null
@@ -1,44 +0,0 @@
---
id: concepts-multiple-advertised-listeners
title: Multiple advertised listeners
sidebar_label: "Multiple advertised listeners"
original_id: concepts-multiple-advertised-listeners
---

When a Pulsar cluster is deployed in a production environment, you may need to expose multiple advertised addresses for a broker. For example, when you deploy a Pulsar cluster in Kubernetes and want clients outside the Kubernetes cluster to connect to Pulsar, you need to assign a broker URL to those external clients, while clients inside the same Kubernetes cluster can still connect to the Pulsar cluster through the internal network of Kubernetes.
But clients in the same Kubernetes cluster can still connect to the Pulsar cluster through the internal network of Kubernetes.

## Advertised listeners

To ensure clients in both internal and external networks can connect to a Pulsar cluster, Pulsar introduces the `advertisedListeners` and `internalListenerName` configuration options into the [broker configuration file](reference-configuration.md#broker). These options allow the broker to expose multiple advertised listeners and to separate internal and external network traffic.

- The `advertisedListeners` is used to specify multiple advertised listeners. The broker uses the listener as the broker identifier in the load manager and the bundle owner data. The `advertisedListeners` is formatted as `<listener_name>:pulsar://<host>:<port>, <listener_name>:pulsar+ssl://<host>:<port>`. You can set up the `advertisedListeners` like
`advertisedListeners=internal:pulsar://192.168.1.11:6660,internal:pulsar+ssl://192.168.1.11:6651`.

- The `internalListenerName` is used to specify the internal service URL that the broker uses. You can specify the `internalListenerName` by choosing one of the `advertisedListeners`. The broker uses the listener name of the first advertised listener as the `internalListenerName` if the `internalListenerName` is absent.

After setting up the `advertisedListeners`, clients can choose one of the listeners as the service URL to create a connection to the broker as long as the network is accessible. However, if the client creates a producer or consumer on a topic, it must first send a lookup request to the broker to find the owner broker, and then connect to the owner broker to publish or consume messages. Therefore, you must make sure the client can get the service URL associated with the same advertised listener name as the one the client uses. This keeps the client side simple and secure.

## Use multiple advertised listeners

This example shows how a Pulsar client uses multiple advertised listeners.

1. Configure multiple advertised listeners in the broker configuration file.

```shell

advertisedListeners={listenerName}:pulsar://xxxx:6650,
{listenerName}:pulsar+ssl://xxxx:6651

```

2. Specify the listener name for the client.

```java

PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar://xxxx:6650")
    .listenerName("external")
    .build();

```

diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/concepts-overview.md b/site2/website/versioned_docs/version-2.10.0-deprecated/concepts-overview.md
deleted file mode 100644
index e8a2f4b9d321a7..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/concepts-overview.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-id: concepts-overview
-title: Pulsar Overview
-sidebar_label: "Overview"
-original_id: concepts-overview
----

Pulsar is a multi-tenant, high-performance solution for server-to-server messaging. Originally developed by Yahoo, Pulsar is under the stewardship of the [Apache Software Foundation](https://www.apache.org/).

Key features of Pulsar are listed below:

* Native support for multiple clusters in a Pulsar instance, with seamless [geo-replication](administration-geo.md) of messages across clusters.
* Very low publish and end-to-end latency.
* Seamless scalability to over a million topics.
* A simple [client API](concepts-clients.md) with bindings for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python.md) and [C++](client-libraries-cpp.md). 
-
* Multiple [subscription types](concepts-messaging.md#subscription-types) ([exclusive](concepts-messaging.md#exclusive), [shared](concepts-messaging.md#shared), and [failover](concepts-messaging.md#failover)) for topics.
* Guaranteed message delivery with [persistent message storage](concepts-architecture-overview.md#persistent-storage) provided by [Apache BookKeeper](http://bookkeeper.apache.org/).
* A serverless lightweight computing framework, [Pulsar Functions](functions-overview.md), offers the capability for stream-native data processing.
* A serverless connector framework, [Pulsar IO](io-overview.md), which is built on Pulsar Functions, makes it easier to move data in and out of Apache Pulsar.
* [Tiered Storage](concepts-tiered-storage.md) offloads data from hot/warm storage to cold/long-term storage (such as S3 and GCS) when the data is aging out.

## Contents

- [Messaging Concepts](concepts-messaging.md)
- [Architecture Overview](concepts-architecture-overview.md)
- [Pulsar Clients](concepts-clients.md)
- [Geo Replication](concepts-replication.md)
- [Multi Tenancy](concepts-multi-tenancy.md)
- [Authentication and Authorization](concepts-authentication.md)
- [Topic Compaction](concepts-topic-compaction.md)
- [Tiered Storage](concepts-tiered-storage.md)
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/concepts-proxy-sni-routing.md b/site2/website/versioned_docs/version-2.10.0-deprecated/concepts-proxy-sni-routing.md
deleted file mode 100644
index 51419a66cefe6e..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/concepts-proxy-sni-routing.md
+++ /dev/null
@@ -1,180 +0,0 @@
----
-id: concepts-proxy-sni-routing
-title: Proxy support with SNI routing
-sidebar_label: "Proxy support with SNI routing"
-original_id: concepts-proxy-sni-routing
----

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````


A proxy server is an intermediary server that forwards requests from multiple clients to different servers across the Internet. The proxy server acts as a "traffic cop" in both forward and reverse proxy scenarios, and benefits your system with load balancing, performance, security, auto-scaling, and so on.

The proxy in Pulsar acts as a reverse proxy, and creates a gateway in front of brokers. Besides the Pulsar proxy, Pulsar also supports proxies such as Apache Traffic Server (ATS), HAProxy, Nginx, and Envoy. These proxy servers support **SNI routing**. SNI routing is used to route traffic to a destination without terminating the SSL connection. Layer 4 routing provides greater transparency because the outbound connection is determined by examining the destination address in the client TCP packets.

Pulsar clients (Java, C++, Python) support the [SNI routing protocol](https://github.com/apache/pulsar/wiki/PIP-60:-Support-Proxy-server-with-SNI-routing), so you can connect to brokers through the proxy. This document walks you through how to set up the ATS proxy, enable SNI routing, and connect a Pulsar client to the broker through the ATS proxy.

## ATS-SNI Routing in Pulsar
To support [layer-4 SNI routing](https://docs.trafficserver.apache.org/en/latest/admin-guide/layer-4-routing.en.html) with ATS, the inbound connection must be a TLS connection. Pulsar clients support the SNI routing protocol on TLS connections, so when Pulsar clients connect to a broker through the ATS proxy, Pulsar uses ATS as a reverse proxy. 
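
Because the inbound connection to the proxy must be a TLS connection, the brokers behind ATS also need TLS enabled. The following is a minimal sketch of the relevant `broker.conf` settings; the certificate paths are placeholders, and a complete setup is described in the TLS security docs:

```

# TLS listener that the ATS proxy tunnels to
brokerServicePortTls=6651
# Broker certificate and private key (placeholder paths)
tlsCertificateFilePath=/path/to/broker-cert.pem
tlsKeyFilePath=/path/to/broker-key.pem
# CA used to validate clients, if TLS authentication is enabled
tlsTrustCertsFilePath=/path/to/ca-cert.pem

```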
-

Pulsar supports SNI routing for geo-replication, so brokers can connect to brokers in other clusters through the ATS proxy.

This section explains how to set up and use ATS as a reverse proxy, so Pulsar clients can connect to brokers through the ATS proxy using the SNI routing protocol on a TLS connection.

### Set up ATS Proxy for layer-4 SNI routing
To support layer-4 SNI routing, you need to configure the `records.config` and `ssl_server_name.config` files.

![Pulsar client SNI](/assets/pulsar-sni-client.png)

The [records.config](https://docs.trafficserver.apache.org/en/latest/admin-guide/files/records.config.en.html) file is located in the `/usr/local/etc/trafficserver/` directory by default. The file lists configurable variables used by ATS.

To configure the `records.config` file, complete the following steps.
1. Update the TLS port (`http.server_ports`) on which the proxy listens, and update the proxy certificates (`ssl.client.cert.path` and `ssl.client.cert.filename`) to secure TLS tunneling.
2. Configure the server ports (`http.connect_ports`) used for tunneling to the broker. If Pulsar brokers are listening on the `4443` and `6651` ports, add the broker service ports to the `http.connect_ports` configuration.

The following is an example.

```

# PROXY TLS PORT
CONFIG proxy.config.http.server_ports STRING 4443:ssl 4080
# PROXY CERTS FILE PATH
CONFIG proxy.config.ssl.client.cert.path STRING /proxy-cert.pem
# PROXY KEY FILE PATH
CONFIG proxy.config.ssl.client.cert.filename STRING /proxy-key.pem


# The range of origin server ports that can be used for tunneling via CONNECT.
# Traffic Server allows tunnels only to the specified ports. Supports both wildcards (*) and ranges (e.g. 0-1023).
CONFIG proxy.config.http.connect_ports STRING 4443 6651

```

The `ssl_server_name.config` file is used to configure TLS connection handling for inbound and outbound connections. The configuration is determined by the SNI values provided by the inbound connection. The file consists of a set of configuration items, and each is identified by an SNI value (`fqdn`). When an inbound TLS connection is made, the SNI value from the TLS negotiation is matched with the items specified in this file. If the values match, the values specified in that item override the default values.

The following example shows the mapping from the inbound SNI hostname coming from the client to the broker service URL to which the request should be redirected. For example, if the client sends the SNI header `pulsar-broker1`, the proxy creates a TLS tunnel by redirecting the request to the `pulsar-broker1:6651` service URL.

```

server_config = {
  {
     fqdn = 'pulsar-broker-vip',
     # Forward to Pulsar broker which is listening on 6651
     tunnel_route = 'pulsar-broker-vip:6651'
  },
  {
     fqdn = 'pulsar-broker1',
     # Forward to Pulsar broker-1 which is listening on 6651
     tunnel_route = 'pulsar-broker1:6651'
  },
  {
     fqdn = 'pulsar-broker2',
     # Forward to Pulsar broker-2 which is listening on 6651
     tunnel_route = 'pulsar-broker2:6651'
  },
}

```

After you configure the `ssl_server_name.config` and `records.config` files, the ATS proxy server handles SNI routing and creates a TCP tunnel between the client and the broker.

### Configure Pulsar-client with SNI routing
ATS SNI-routing works only with TLS. You need to enable TLS for the ATS proxy and brokers first, configure the SNI routing protocol, and then connect Pulsar clients to brokers through the ATS proxy. 
Pulsar clients support SNI routing by connecting to the proxy and sending the target broker URL in the SNI header. This process is handled internally. You only need to provide the following proxy configuration when you create a Pulsar client that uses the SNI routing protocol.

````mdx-code-block
<Tabs
  defaultValue="Java"
  values={[{"label":"Java","value":"Java"},{"label":"C++","value":"C++"},{"label":"Python","value":"Python"}]}>
<TabItem value="Java">

```java

String brokerServiceUrl = "pulsar+ssl://pulsar-broker-vip:6651/";
String proxyUrl = "pulsar+ssl://ats-proxy:443";
ClientBuilder clientBuilder = PulsarClient.builder()
    .serviceUrl(brokerServiceUrl)
    .tlsTrustCertsFilePath(TLS_TRUST_CERT_FILE_PATH)
    .enableTls(true)
    .allowTlsInsecureConnection(false)
    .proxyServiceUrl(proxyUrl, ProxyProtocol.SNI)
    .operationTimeout(1000, TimeUnit.MILLISECONDS);

Map<String, String> authParams = new HashMap<>();
authParams.put("tlsCertFile", TLS_CLIENT_CERT_FILE_PATH);
authParams.put("tlsKeyFile", TLS_CLIENT_KEY_FILE_PATH);
clientBuilder.authentication(AuthenticationTls.class.getName(), authParams);

PulsarClient pulsarClient = clientBuilder.build();

```

</TabItem>
<TabItem value="C++">

```c++

ClientConfiguration config = ClientConfiguration();
config.setUseTls(true);
config.setTlsTrustCertsFilePath("/path/to/cacert.pem");
config.setTlsAllowInsecureConnection(false);
config.setAuth(pulsar::AuthTls::create(
    "/path/to/client-cert.pem", "/path/to/client-key.pem"));

Client client("pulsar+ssl://ats-proxy:443", config);

```

</TabItem>
<TabItem value="Python">

```python

from pulsar import Client, AuthenticationTLS

auth = AuthenticationTLS("/path/to/my-role.cert.pem", "/path/to/my-role.key-pk8.pem")
client = Client("pulsar+ssl://ats-proxy:443",
                tls_trust_certs_file_path="/path/to/ca.cert.pem",
                tls_allow_insecure_connection=False,
                authentication=auth)

```

</TabItem>
</Tabs>
````

### Pulsar geo-replication with SNI routing
You can use the ATS proxy for geo-replication. Pulsar brokers can connect to brokers in other clusters for geo-replication by using SNI routing. To enable SNI routing for broker connections across clusters, you need to configure the SNI proxy URL in the cluster metadata. If you have configured the SNI proxy URL in the cluster metadata, brokers can connect to brokers in other clusters through the proxy over SNI routing.

![Pulsar client SNI](/assets/pulsar-sni-geo.png)

In this example, a Pulsar cluster is deployed into two separate regions, `us-west` and `us-east`. Both regions are configured with an ATS proxy, and brokers in each region run behind the ATS proxy. We configure the cluster metadata for both clusters, so brokers in one cluster can use SNI routing and connect to brokers in other clusters through the ATS proxy.

(a) Configure the cluster metadata for `us-east` with the `us-east` broker service URL and the `us-east` ATS proxy URL with the SNI proxy-protocol.

```

./pulsar-admin clusters update \
--broker-url-secure pulsar+ssl://east-broker-vip:6651 \
--url http://east-broker-vip:8080 \
--proxy-protocol SNI \
--proxy-url pulsar+ssl://east-ats-proxy:443

```

(b) Configure the cluster metadata for `us-west` with the `us-west` broker service URL and the `us-west` ATS proxy URL with the SNI proxy-protocol. 
-

```

./pulsar-admin clusters update \
--broker-url-secure pulsar+ssl://west-broker-vip:6651 \
--url http://west-broker-vip:8080 \
--proxy-protocol SNI \
--proxy-url pulsar+ssl://west-ats-proxy:443

```

diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/concepts-replication.md b/site2/website/versioned_docs/version-2.10.0-deprecated/concepts-replication.md
deleted file mode 100644
index 1ac455c7028325..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/concepts-replication.md
+++ /dev/null
@@ -1,69 +0,0 @@
----
-id: concepts-replication
-title: Geo Replication
-sidebar_label: "Geo Replication"
-original_id: concepts-replication
----

Regardless of industry, when an unforeseen event occurs and brings day-to-day operations to a halt, an organization needs a well-prepared disaster recovery plan to quickly restore service to clients. However, a disaster recovery plan usually requires a multi-datacenter deployment with geographically dispersed data centers. Such a multi-datacenter deployment requires a geo-replication mechanism to provide additional redundancy in case a data center fails.

Pulsar's geo-replication mechanism is typically used for disaster recovery, enabling the replication of persistently stored message data across multiple data centers. For instance, your application may be publishing data in one region while you would like to process it for consumption in other regions. With Pulsar's geo-replication mechanism, messages can be produced and consumed in different geo-locations.

The diagram below illustrates the process of [geo-replication](administration-geo.md). When three producers (P1, P2, and P3) publish messages to the T1 topic in their respective clusters, those messages are instantly replicated across clusters. Once the messages are replicated, two consumers (C1 and C2) can consume those messages from their clusters.

![A typical geo-replication example with full-mesh pattern](/assets/full-mesh-replication.svg)

## Replication mechanisms

Geo-replication mechanisms can be categorized into synchronous and asynchronous strategies. Pulsar supports both replication mechanisms.

### Asynchronous geo-replication in Pulsar

An asynchronous geo-replicated cluster is composed of multiple physical clusters set up in different datacenters. Messages produced on a Pulsar topic are first persisted to the local cluster and then replicated asynchronously to the remote clusters by brokers.

![An example of asynchronous geo-replication mechanism](/assets/geo-replication-async.svg)

In normal cases, when there are no connectivity issues, messages are replicated immediately, at the same time as they are dispatched to local consumers. Typically, end-to-end delivery latency is defined by the network round-trip time (RTT) between the data centers. Applications can create producers and consumers in any of the clusters, even when the remote clusters are not reachable (for example, during a network partition).

Asynchronous geo-replication provides lower latency but may result in weaker consistency guarantees, since replication lag means some data may not yet have been replicated.

### Synchronous geo-replication via BookKeeper

In synchronous geo-replication, data is synchronously replicated to multiple data centers and the client has to wait for an acknowledgment from the other data centers. 
As illustrated below, when the client issues a write request to one cluster, the written data is replicated to the other two data centers. The write request is only acknowledged to the client when the majority of data centers (in this example, at least 2 data centers) have acknowledged that the write has been persisted.

![An example of synchronous geo-replication mechanism](/assets/geo-replication-sync.svg)

Synchronous geo-replication in Pulsar is achieved by BookKeeper. A synchronous geo-replicated cluster consists of a cluster of bookies and a cluster of brokers that run in multiple data centers, and a global ZooKeeper installation (a ZooKeeper ensemble running across multiple data centers). You need to configure a BookKeeper region-aware placement policy to store data across multiple data centers and guarantee availability constraints on writes.

Synchronous geo-replication provides the highest availability and also guarantees stronger data consistency between different data centers. However, your applications pay an extra latency penalty across data centers.


## Replication patterns

Pulsar provides a great degree of flexibility for customizing your replication strategy. You can set up different replication patterns to serve your replication strategy for an application between multiple data centers.

### Full-mesh replication

Using full-mesh replication and applying the [selective message replication](administration-geo.md/#selective-replication), you can customize your replication strategies and topologies between any number of datacenters.

![An example of full-mesh replication pattern](/assets/full-mesh-replication.svg)

### Active-active replication

Active-active replication is a variation of full-mesh replication, with only two data centers. Producers can run in any data center to produce messages, and consumers can consume all messages from all data centers.

![An example of active-active replication pattern](/assets/active-active-replication.svg)

For how to use active-active replication to migrate data between clusters, see [Migrate data between clusters using geo-replication](administration-geo.md/#migrate-data-between-clusters-using-geo-replication).

### Active-standby replication

Active-standby replication is a variation of active-active replication. Producers send messages to the active data center, while messages are replicated to the standby data center for backup. If the active data center goes down, the standby data center takes over and becomes the active one.

![An example of active-standby replication pattern](/assets/active-standby-replication.svg)

### Aggregation replication

The aggregation replication pattern is typically used when replicating messages from the edge to the cloud. For example, assume you have 3 clusters in 3 fronting datacenters and one aggregated cluster in a central data center, and you want to replicate messages from multiple fronting datacenters to the central data center for aggregation purposes. You can then create an individual namespace for the topics used by each fronting data center and assign the aggregated data center to those namespaces, as sketched below. 
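
A minimal sketch of that assignment, using hypothetical names (tenant `edge`, fronting cluster `dc1`, aggregation cluster `central`):

```

./pulsar-admin namespaces create edge/dc1-ns
./pulsar-admin namespaces set-clusters edge/dc1-ns --clusters dc1,central

```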
-

![An example of aggregation replication pattern](/assets/aggregation-replication.svg)
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/concepts-tiered-storage.md b/site2/website/versioned_docs/version-2.10.0-deprecated/concepts-tiered-storage.md
deleted file mode 100644
index b45ccea5888bf9..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/concepts-tiered-storage.md
+++ /dev/null
@@ -1,18 +0,0 @@
----
-id: concepts-tiered-storage
-title: Tiered Storage
-sidebar_label: "Tiered Storage"
-original_id: concepts-tiered-storage
----

Pulsar's segment-oriented architecture allows for topic backlogs to grow very large, effectively without limit. However, this can become expensive over time.

One way to alleviate this cost is to use Tiered Storage. With tiered storage, older messages in the backlog can be moved from BookKeeper to a cheaper storage mechanism, while still allowing clients to access the backlog as if nothing had changed.

![Tiered Storage](/assets/pulsar-tiered-storage.png)

> Data written to BookKeeper is replicated to 3 physical machines by default. However, once a segment is sealed in BookKeeper, it becomes immutable and can be copied to long-term storage. Long-term storage can achieve cost savings by using mechanisms such as [Reed-Solomon error correction](https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction) to require fewer physical copies of data.

Pulsar currently supports S3, Google Cloud Storage (GCS), and filesystem for [long-term storage](cookbooks-tiered-storage.md). Offloading to long-term storage is triggered via a REST API or the command-line interface. The user passes in the amount of topic data they wish to retain on BookKeeper, and the broker copies the backlog data to long-term storage. The original data is then deleted from BookKeeper after a configured delay (4 hours by default).

> For a guide to setting up tiered storage, see the [Tiered storage cookbook](cookbooks-tiered-storage.md).
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/concepts-topic-compaction.md b/site2/website/versioned_docs/version-2.10.0-deprecated/concepts-topic-compaction.md
deleted file mode 100644
index 34b7ed7fbbd31e..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/concepts-topic-compaction.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-id: concepts-topic-compaction
-title: Topic Compaction
-sidebar_label: "Topic Compaction"
-original_id: concepts-topic-compaction
----

Pulsar was built with highly scalable [persistent storage](concepts-architecture-overview.md#persistent-storage) of message data as a primary objective. Pulsar topics enable you to persistently store as many unacknowledged messages as you need while preserving message ordering. By default, Pulsar stores *all* unacknowledged/unprocessed messages produced on a topic. Accumulating many unacknowledged messages on a topic is necessary for many Pulsar use cases, but it can also be very time-intensive for Pulsar consumers to "rewind" through the entire log of messages.

> For a more practical guide to topic compaction, see the [Topic compaction cookbook](cookbooks-compaction.md).

For some use cases consumers don't need a complete "image" of the topic log. They may only need a few values to construct a more "shallow" image of the log, perhaps even just the most recent value. For these kinds of use cases Pulsar offers **topic compaction**. 
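
For example, messages intended for compaction carry keys, and compaction keeps only the latest message per key. A minimal sketch in Java, assuming an existing `client` (the topic name, keys, and values are illustrative):

```java

Producer<byte[]> producer = client.newProducer()
        .topic("persistent://public/default/stock-ticker")
        .create();

// After compaction, only the latest message per key remains readable in the
// compacted view: AAPL -> 101 and GOOG -> 200 in this sketch.
producer.newMessage().key("AAPL").value("100".getBytes()).send();
producer.newMessage().key("GOOG").value("200".getBytes()).send();
producer.newMessage().key("AAPL").value("101".getBytes()).send();

```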
When you run compaction on a topic, Pulsar goes through a topic's backlog and removes messages that are *obscured* by later messages, i.e. it goes through the topic on a per-key basis and leaves only the most recent message associated with that key.

Pulsar's topic compaction feature:

* Allows for faster "rewind" through topic logs
* Applies only to [persistent topics](concepts-architecture-overview.md#persistent-storage)
* Is triggered automatically when the backlog reaches a certain size, or can be triggered manually via the command line. See the [Topic compaction cookbook](cookbooks-compaction.md)
* Is conceptually and operationally distinct from [retention and expiry](concepts-messaging.md#message-retention-and-expiry). Topic compaction *does*, however, respect retention. If retention has removed a message from the message backlog of a topic, the message will also not be readable from the compacted topic ledger.

> #### Topic compaction example: the stock ticker
> An example use case for a compacted Pulsar topic would be a stock ticker topic. On a stock ticker topic, each message bears a timestamped dollar value for stocks for purchase (with the message key holding the stock symbol, e.g. `AAPL` or `GOOG`). With a stock ticker you may care only about the most recent value(s) of the stock and have no interest in historical data (i.e. you don't need to construct a complete image of the topic's sequence of messages per key). Compaction would be highly beneficial in this case because it would keep consumers from needing to rewind through obscured messages.


## How topic compaction works

When topic compaction is triggered [via the CLI](cookbooks-compaction.md), Pulsar will iterate over the entire topic from beginning to end. For each key that it encounters, the compaction routine will keep a record of the latest occurrence of that key.

After that, the broker will create a new [BookKeeper ledger](concepts-architecture-overview.md#ledgers) and make a second iteration through each message on the topic. For each message, if the key matches the latest occurrence of that key, then the key's data payload, message ID, and metadata will be written to the newly created ledger. If the key doesn't match the latest occurrence, then the message will be skipped and left alone. If any given message has an empty payload, it will be skipped and considered deleted (akin to the concept of [tombstones](https://en.wikipedia.org/wiki/Tombstone_(data_store)) in key-value databases). At the end of this second iteration through the topic, the newly created BookKeeper ledger is closed and two things are written to the topic's metadata: the ID of the BookKeeper ledger and the message ID of the last compacted message (this is known as the **compaction horizon** of the topic). Once this metadata is written, compaction is complete.

After the initial compaction operation, the Pulsar [broker](reference-terminology.md#broker) that owns the topic is notified whenever any future changes are made to the compaction horizon and compacted backlog. 
When such changes occur:

* Clients (consumers and readers) that have `readCompacted` enabled will attempt to read messages from the topic and either:
  * Read from the topic as normal (if the message ID is greater than or equal to the compaction horizon) or
  * Read beginning at the compaction horizon (if the message ID is lower than the compaction horizon)


diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/concepts-transactions.md b/site2/website/versioned_docs/version-2.10.0-deprecated/concepts-transactions.md
deleted file mode 100644
index 08490ba06b5d7e..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/concepts-transactions.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-id: transactions
-title: Transactions
-sidebar_label: "Overview"
-original_id: transactions
----

Transactional semantics enable event streaming applications to consume, process, and produce messages in one atomic operation. In Pulsar, a producer or consumer can work with messages across multiple topics and partitions and ensure those messages are processed as a single unit.

The following concepts help you understand Pulsar transactions.

## Transaction coordinator and transaction log
The transaction coordinator maintains the topics and subscriptions that interact in a transaction. When a transaction is committed, the transaction coordinator interacts with the topic owner broker to complete the transaction.

The transaction coordinator maintains the entire life cycle of transactions, and prevents a transaction from ending up in an incorrect status.

The transaction coordinator handles transaction timeouts, and ensures that a transaction is aborted after it times out.

All the transaction metadata is persisted in the transaction log. The transaction log is backed by a Pulsar topic. After the transaction coordinator crashes, it can restore the transaction metadata from the transaction log.

## Transaction ID
The transaction ID (TxnID) identifies a unique transaction in Pulsar. The transaction ID is 128-bit. The highest 16 bits are reserved for the ID of the transaction coordinator, and the remaining bits are used for monotonically increasing numbers in each transaction coordinator. The TxnID makes it easy to locate which coordinator a crashed transaction belongs to.

## Transaction buffer
Messages produced within a transaction are stored in the transaction buffer. The messages in the transaction buffer are not materialized (visible) to consumers until the transactions are committed. The messages in the transaction buffer are discarded when the transactions are aborted.

## Pending acknowledge state
Message acknowledgments within a transaction are maintained in the pending acknowledge state before the transaction completes. If a message is in the pending acknowledge state, the message cannot be acknowledged by other transactions until the message is removed from the pending acknowledge state.

The pending acknowledge state is persisted to the pending acknowledge log. The pending acknowledge log is backed by a Pulsar topic. A new broker can restore the state from the pending acknowledge log to ensure the acknowledgment is not lost. 
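
To make these concepts concrete, the following is a minimal sketch of the client-side transaction flow in Java. It assumes a broker with `transactionCoordinatorEnabled=true` and a `producer` and `consumer` already created from the same client; the service URL and timeout values are illustrative:

```java

import java.util.concurrent.TimeUnit;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.transaction.Transaction;

PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650")
        .enableTransaction(true)
        .build();

// The transaction coordinator assigns a TxnID and tracks the transaction state
Transaction txn = client.newTransaction()
        .withTransactionTimeout(5, TimeUnit.MINUTES)
        .build()
        .get();

// The produced message is held in the transaction buffer, and the ack is held
// in the pending acknowledge state, until the transaction commits.
Message<byte[]> msg = consumer.receive();
producer.newMessage(txn).value(msg.getData()).send();
consumer.acknowledgeAsync(msg.getMessageId(), txn);

txn.commit().get();

```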
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/cookbooks-bookkeepermetadata.md b/site2/website/versioned_docs/version-2.10.0-deprecated/cookbooks-bookkeepermetadata.md
deleted file mode 100644
index b0fa98dc3b65d5..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/cookbooks-bookkeepermetadata.md
+++ /dev/null
@@ -1,21 +0,0 @@
----
-id: cookbooks-bookkeepermetadata
-title: BookKeeper Ledger Metadata
-original_id: cookbooks-bookkeepermetadata
----

Pulsar stores data on BookKeeper ledgers; you can understand the contents of a ledger by inspecting the metadata attached to the ledger.
Such metadata is stored on ZooKeeper and is readable using the BookKeeper APIs.

Description of current metadata:

| Scope | Metadata name | Metadata value |
| ------------- | ------------- | ------------- |
| All ledgers | application | 'pulsar' |
| All ledgers | component | 'managed-ledger', 'schema', 'compacted-topic' |
| Managed ledgers | pulsar/managed-ledger | name of the ledger |
| Cursor | pulsar/cursor | name of the cursor |
| Compacted topic | pulsar/compactedTopic | name of the original topic |
| Compacted topic | pulsar/compactedTo | id of the last compacted message |


diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/cookbooks-compaction.md b/site2/website/versioned_docs/version-2.10.0-deprecated/cookbooks-compaction.md
deleted file mode 100644
index dfa314727241a8..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/cookbooks-compaction.md
+++ /dev/null
@@ -1,142 +0,0 @@
----
-id: cookbooks-compaction
-title: Topic compaction
-sidebar_label: "Topic compaction"
-original_id: cookbooks-compaction
----

Pulsar's [topic compaction](concepts-topic-compaction.md#compaction) feature enables you to create **compacted** topics in which older, "obscured" entries are pruned from the topic, allowing for faster reads through the topic's history (which messages are deemed obscured/outdated/irrelevant will depend on your use case).

To use compaction:

* You need to give messages keys, as topic compaction in Pulsar takes place on a *per-key basis* (i.e. messages are compacted based on their key). For a stock ticker use case, the stock symbol---e.g. `AAPL` or `GOOG`---could serve as the key (more on this [below](#when-should-i-use-compacted-topics)). Messages without keys will be left alone by the compaction process.
* Compaction can be configured to run [automatically](#configuring-compaction-to-run-automatically), or you can manually [trigger](#triggering-compaction-manually) compaction using the Pulsar administrative API.
* Your consumers must be [configured](#consumer-configuration) to read from compacted topics ([Java consumers](#java), for example, have a `readCompacted` setting that must be set to `true`). If this configuration is not set, consumers will still be able to read from the non-compacted topic.


> Compaction only works on messages that have keys (as in the stock ticker example the stock symbol serves as the key for each message). Keys can thus be thought of as the axis along which compaction is applied. Messages that don't have keys are simply ignored by compaction.

## When should I use compacted topics?

The classic example of a topic that could benefit from compaction would be a stock ticker topic through which consumers can access up-to-date values for specific stocks. 
Imagine a scenario in which messages carrying stock value data use the stock symbol as the key (`GOOG`, `AAPL`, `TWTR`, etc.). Compacting this topic would give consumers on the topic two options: - -* They can read from the "original," non-compacted topic in case they need access to "historical" values, i.e. the entirety of the topic's messages. -* They can read from the compacted topic if they only want to see the most up-to-date messages. - -Thus, if you're using a Pulsar topic called `stock-values`, some consumers could have access to all messages in the topic (perhaps because they're performing some kind of number crunching of all values in the last hour) while the consumers used to power the real-time stock ticker only see the compacted topic (and thus aren't forced to process outdated messages). Which variant of the topic any given consumer pulls messages from is determined by the consumer's [configuration](#consumer-configuration). - -> One of the benefits of compaction in Pulsar is that you aren't forced to choose between compacted and non-compacted topics, as the compaction process leaves the original topic as-is and essentially adds an alternate topic. In other words, you can run compaction on a topic and consumers that need access to the non-compacted version of the topic will not be adversely affected. - - -## Configuring compaction to run automatically - -Tenant administrators can configure a policy for compaction at the namespace level. The policy specifies how large the topic backlog can grow before compaction is triggered. - -For example, to trigger compaction when the backlog reaches 100MB: - -```bash - -$ bin/pulsar-admin namespaces set-compaction-threshold \ - --threshold 100M my-tenant/my-namespace - -``` - -Configuring the compaction threshold on a namespace will apply to all topics within that namespace. - -## Triggering compaction manually - -In order to run compaction on a topic, you need to use the [`topics compact`](reference-pulsar-admin.md#topics-compact) command for the [`pulsar-admin`](reference-pulsar-admin.md) CLI tool. Here's an example: - -```bash - -$ bin/pulsar-admin topics compact \ - persistent://my-tenant/my-namespace/my-topic - -``` - -The `pulsar-admin` tool runs compaction via the Pulsar {@inject: rest:REST:/} API. To run compaction in its own dedicated process, i.e. *not* through the REST API, you can use the [`pulsar compact-topic`](reference-cli-tools.md#pulsar-compact-topic) command. Here's an example: - -```bash - -$ bin/pulsar compact-topic \ - --topic persistent://my-tenant-namespace/my-topic - -``` - -> Running compaction in its own process is recommended when you want to avoid interfering with the broker's performance. Broker performance should only be affected, however, when running compaction on topics with a large keyspace (i.e when there are many keys on the topic). The first phase of the compaction process keeps a copy of each key in the topic, which can create memory pressure as the number of keys grows. Using the `pulsar-admin topics compact` command to run compaction through the REST API should present no issues in the overwhelming majority of cases; using `pulsar compact-topic` should correspondingly be considered an edge case. - -The `pulsar compact-topic` command communicates with [ZooKeeper](https://zookeeper.apache.org) directly. In order to establish communication with ZooKeeper, though, the `pulsar` CLI tool will need to have a valid [broker configuration](reference-configuration.md#broker). 
You can either supply a proper configuration in `conf/broker.conf` or specify a non-default location for the configuration:

```bash

$ bin/pulsar compact-topic \
  --broker-conf /path/to/broker.conf \
  --topic persistent://my-tenant/my-namespace/my-topic

# If the configuration is in conf/broker.conf
$ bin/pulsar compact-topic \
  --topic persistent://my-tenant/my-namespace/my-topic

```

#### When should I trigger compaction?

How often you [trigger compaction](#triggering-compaction-manually) will vary widely based on the use case. If you want a compacted topic to be extremely speedy on read, then you should run compaction fairly frequently.

## Consumer configuration

Pulsar consumers and readers need to be configured to read from compacted topics. The sections below show you how to enable compacted topic reads for Pulsar's language clients.

### Java

In order to read from a compacted topic using a Java consumer, the `readCompacted` parameter must be set to `true`. Here's an example consumer for a compacted topic:

```java

Consumer<byte[]> compactedTopicConsumer = client.newConsumer()
        .topic("some-compacted-topic")
        .readCompacted(true)
        .subscribe();

```

As mentioned above, topic compaction in Pulsar works on a *per-key basis*. That means that messages that you produce on compacted topics need to have keys (the content of the key will depend on your use case). Messages that don't have keys will be ignored by the compaction process. Here's an example Pulsar message with a key:

```java

import org.apache.pulsar.client.api.TypedMessageBuilder;

TypedMessageBuilder<byte[]> msg = producer.newMessage()
        .key("some-key")
        .value(someByteArray);

```

The example below shows a message with a key being produced on a compacted Pulsar topic:

```java

import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;

PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650")
        .build();

Producer<byte[]> compactedTopicProducer = client.newProducer()
        .topic("some-compacted-topic")
        .create();

compactedTopicProducer.newMessage()
        .key("some-key")
        .value(someByteArray)
        .send();

```

diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/cookbooks-deduplication.md b/site2/website/versioned_docs/version-2.10.0-deprecated/cookbooks-deduplication.md
deleted file mode 100644
index f7f9e3d7bb425b..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/cookbooks-deduplication.md
+++ /dev/null
@@ -1,151 +0,0 @@
----
-id: cookbooks-deduplication
-title: Message deduplication
-sidebar_label: "Message deduplication"
-original_id: cookbooks-deduplication
----

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````


When **Message deduplication** is enabled, it ensures that each message produced on Pulsar topics is persisted to disk *only once*, even if the message is produced more than once. Message deduplication is handled automatically on the server side.

To use message deduplication in Pulsar, you need to configure your Pulsar brokers and clients.

## How it works

You can enable or disable message deduplication at the namespace level or the topic level. By default, it is disabled on all namespaces or topics. 
You can enable it in the following ways: - -* Enable deduplication for all namespaces/topics at the broker-level. -* Enable deduplication for a specific namespace with the `pulsar-admin namespaces` interface. -* Enable deduplication for a specific topic with the `pulsar-admin topics` interface. - -## Configure message deduplication - -You can configure message deduplication in Pulsar using the [`broker.conf`](reference-configuration.md#broker) configuration file. The following deduplication-related parameters are available. - -Parameter | Description | Default -:---------|:------------|:------- -`brokerDeduplicationEnabled` | Sets the default behavior for message deduplication in the Pulsar broker. If it is set to `true`, message deduplication is enabled on all namespaces/topics. If it is set to `false`, you have to enable or disable deduplication at the namespace level or the topic level. | `false` -`brokerDeduplicationMaxNumberOfProducers` | The maximum number of producers for which information is stored for deduplication purposes. | `10000` -`brokerDeduplicationEntriesInterval` | The number of entries after which a deduplication informational snapshot is taken. A larger interval leads to fewer snapshots being taken, though this lengthens the topic recovery time (the time required for entries published after the snapshot to be replayed). | `1000` -`brokerDeduplicationSnapshotIntervalSeconds`| The time period after which a deduplication informational snapshot is taken. It runs simultaneously with `brokerDeduplicationEntriesInterval`. |`120` -`brokerDeduplicationProducerInactivityTimeoutMinutes` | The time of inactivity (in minutes) after which the broker discards deduplication information related to a disconnected producer. | `360` (6 hours) - -### Set default value at the broker-level - -By default, message deduplication is *disabled* on all Pulsar namespaces/topics. To enable it on all namespaces/topics, set the `brokerDeduplicationEnabled` parameter to `true` and re-start the broker. - -Even if you set the value for `brokerDeduplicationEnabled`, enabling or disabling via Pulsar admin CLI overrides the default settings at the broker-level. - -### Enable message deduplication - -Though message deduplication is disabled by default at the broker level, you can enable message deduplication for a specific namespace or topic using the [`pulsar-admin namespaces set-deduplication`](reference-pulsar-admin.md#namespace-set-deduplication) or the [`pulsar-admin topics set-deduplication`](reference-pulsar-admin.md#topic-set-deduplication) command. You can use the `--enable`/`-e` flag and specify the namespace/topic. - -The following example shows how to enable message deduplication at the namespace level. - -```bash - -$ bin/pulsar-admin namespaces set-deduplication \ - public/default \ - --enable # or just -e - -``` - -### Disable message deduplication - -Even if you enable message deduplication at the broker level, you can disable message deduplication for a specific namespace or topic using the [`pulsar-admin namespace set-deduplication`](reference-pulsar-admin.md#namespace-set-deduplication) or the [`pulsar-admin topics set-deduplication`](reference-pulsar-admin.md#topic-set-deduplication) command. Use the `--disable`/`-d` flag and specify the namespace/topic. - -The following example shows how to disable message deduplication at the namespace level. 
-

```bash

$ bin/pulsar-admin namespaces set-deduplication \
  public/default \
  --disable # or just -d

```

## Pulsar clients

If you enable message deduplication in Pulsar brokers, you need to complete the following tasks for your client producers:

1. Specify a name for the producer.
1. Set the message timeout to `0` (namely, no timeout).

The instructions for Java, Python, and C++ clients are different.

````mdx-code-block
<Tabs
  defaultValue="Java"
  values={[{"label":"Java","value":"Java"},{"label":"Python","value":"Python"},{"label":"C++","value":"C++"}]}>
<TabItem value="Java">

To enable message deduplication on a [Java producer](client-libraries-java.md#producers), set the producer name using the `producerName` setter, and set the timeout to `0` using the `sendTimeout` setter.

```java

import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;
import java.util.concurrent.TimeUnit;

PulsarClient pulsarClient = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650")
        .build();
Producer producer = pulsarClient.newProducer()
        .producerName("producer-1")
        .topic("persistent://public/default/topic-1")
        .sendTimeout(0, TimeUnit.SECONDS)
        .create();

```

</TabItem>
<TabItem value="Python">

To enable message deduplication on a [Python producer](client-libraries-python.md#producers), set the producer name using `producer_name`, and set the timeout to `0` using `send_timeout_millis`.

```python

import pulsar

client = pulsar.Client("pulsar://localhost:6650")
producer = client.create_producer(
    "persistent://public/default/topic-1",
    producer_name="producer-1",
    send_timeout_millis=0)

```

</TabItem>
<TabItem value="C++">

To enable message deduplication on a [C++ producer](client-libraries-cpp.md#producer), set the producer name using `producer_name`, and set the timeout to `0` using `send_timeout_millis`.

```cpp

#include <pulsar/Client.h>

std::string serviceUrl = "pulsar://localhost:6650";
std::string topic = "persistent://some-tenant/ns1/topic-1";
std::string producerName = "producer-1";

Client client(serviceUrl);

ProducerConfiguration producerConfig;
producerConfig.setSendTimeout(0);
producerConfig.setProducerName(producerName);

Producer producer;

Result result = client.createProducer(topic, producerConfig, producer);

```

</TabItem>
</Tabs>
```` \ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/cookbooks-encryption.md b/site2/website/versioned_docs/version-2.10.0-deprecated/cookbooks-encryption.md
deleted file mode 100644
index f0d8fb8735eb63..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/cookbooks-encryption.md
+++ /dev/null
@@ -1,184 +0,0 @@
----
-id: cookbooks-encryption
-title: Pulsar Encryption
-sidebar_label: "Encryption"
-original_id: cookbooks-encryption
----

Pulsar encryption allows applications to encrypt messages at the producer and decrypt them at the consumer. Encryption is performed using the public/private key pair configured by the application. Encrypted messages can only be decrypted by consumers with a valid key.

## Asymmetric and symmetric encryption

Pulsar uses a dynamically generated symmetric AES key to encrypt messages (data). The AES key (data key) is encrypted using an application-provided ECDSA/RSA key pair, so there is no need to share the secret with everyone.

The key is a public/private key pair used for encryption/decryption. The producer key is the public key, and the consumer key is the private key of the key pair.

The application configures the producer with the public key. This key is used to encrypt the AES data key. The encrypted data key is sent as part of the message header. 
Only entities with the private key (in this case, the consumer) will be able to decrypt the data key, which is used to decrypt the message.

A message can be encrypted with more than one key. Any one of the keys used for encrypting the message is sufficient to decrypt the message.

Pulsar does not store the encryption key anywhere in the Pulsar service. If you lose or delete the private key, your messages are irretrievably lost and unrecoverable.

## Producer
![alt text](/assets/pulsar-encryption-producer.jpg "Pulsar Encryption Producer")

## Consumer
![alt text](/assets/pulsar-encryption-consumer.jpg "Pulsar Encryption Consumer")

## Here are the steps to get started:

1. Create your ECDSA or RSA public/private key pair.

```shell

openssl ecparam -name secp521r1 -genkey -param_enc explicit -out test_ecdsa_privkey.pem
openssl ec -in test_ecdsa_privkey.pem -pubout -outform pkcs8 -out test_ecdsa_pubkey.pem

```

2. Add the public and private keys to your key management system, and configure your producer clients to retrieve public keys and your consumer clients to retrieve private keys.
3. Implement the CryptoKeyReader::getPublicKey() interface on the producer side and the CryptoKeyReader::getPrivateKey() interface on the consumer side; these are invoked by the Pulsar client to load the keys.
4. Add the encryption key to the producer configuration: conf.addEncryptionKey("myapp.key")
5. Add the CryptoKeyReader implementation to the producer/consumer config: conf.setCryptoKeyReader(keyReader)
6. Sample producer application:

```java

class RawFileKeyReader implements CryptoKeyReader {

    String publicKeyFile = "";
    String privateKeyFile = "";

    RawFileKeyReader(String pubKeyFile, String privKeyFile) {
        publicKeyFile = pubKeyFile;
        privateKeyFile = privKeyFile;
    }

    @Override
    public EncryptionKeyInfo getPublicKey(String keyName, Map<String, String> keyMeta) {
        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
        try {
            keyInfo.setKey(Files.readAllBytes(Paths.get(publicKeyFile)));
        } catch (IOException e) {
            System.out.println("ERROR: Failed to read public key from file " + publicKeyFile);
            e.printStackTrace();
        }
        return keyInfo;
    }

    @Override
    public EncryptionKeyInfo getPrivateKey(String keyName, Map<String, String> keyMeta) {
        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
        try {
            keyInfo.setKey(Files.readAllBytes(Paths.get(privateKeyFile)));
        } catch (IOException e) {
            System.out.println("ERROR: Failed to read private key from file " + privateKeyFile);
            e.printStackTrace();
        }
        return keyInfo;
    }
}

PulsarClient pulsarClient = PulsarClient.create("http://localhost:8080");

ProducerConfiguration prodConf = new ProducerConfiguration();
prodConf.setCryptoKeyReader(new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem"));
prodConf.addEncryptionKey("myappkey");

Producer producer = pulsarClient.createProducer("persistent://my-tenant/my-ns/my-topic", prodConf);

for (int i = 0; i < 10; i++) {
    producer.send("my-message".getBytes());
}

pulsarClient.close();

```

7. 
Sample consumer application:

```java

class RawFileKeyReader implements CryptoKeyReader {

    String publicKeyFile = "";
    String privateKeyFile = "";

    RawFileKeyReader(String pubKeyFile, String privKeyFile) {
        publicKeyFile = pubKeyFile;
        privateKeyFile = privKeyFile;
    }

    @Override
    public EncryptionKeyInfo getPublicKey(String keyName, Map<String, String> keyMeta) {
        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
        try {
            keyInfo.setKey(Files.readAllBytes(Paths.get(publicKeyFile)));
        } catch (IOException e) {
            System.out.println("ERROR: Failed to read public key from file " + publicKeyFile);
            e.printStackTrace();
        }
        return keyInfo;
    }

    @Override
    public EncryptionKeyInfo getPrivateKey(String keyName, Map<String, String> keyMeta) {
        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
        try {
            keyInfo.setKey(Files.readAllBytes(Paths.get(privateKeyFile)));
        } catch (IOException e) {
            System.out.println("ERROR: Failed to read private key from file " + privateKeyFile);
            e.printStackTrace();
        }
        return keyInfo;
    }
}

ConsumerConfiguration consConf = new ConsumerConfiguration();
consConf.setCryptoKeyReader(new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem"));
PulsarClient pulsarClient = PulsarClient.create("http://localhost:8080");
Consumer consumer = pulsarClient.subscribe("persistent://my-tenant/my-ns/my-topic", "my-subscriber-name", consConf);
Message msg = null;

for (int i = 0; i < 10; i++) {
    msg = consumer.receive();
    // do something
    System.out.println("Received: " + new String(msg.getData()));
}

// Acknowledge the consumption of all messages at once
consumer.acknowledgeCumulative(msg);
pulsarClient.close();

```

## Key rotation
Pulsar generates a new AES data key every 4 hours or after a certain number of messages are published. The asymmetric public key is automatically fetched by the producer every 4 hours by calling CryptoKeyReader::getPublicKey() to retrieve the latest version.

## Enabling encryption at the producer application:
If you produce messages that are consumed across application boundaries, you need to ensure that consumers in other applications have access to one of the private keys that can decrypt the messages. This can be done in two ways:
1. The consumer application provides you access to its public key, which you add to your producer keys
1. You grant access to one of the private keys from the pairs used by the producer

In some cases, the producer may want to encrypt the messages with multiple keys. For this, add all such keys to the config. The consumer will be able to decrypt the message as long as it has access to at least one of the keys.

For example, if messages need to be encrypted using two keys, myapp.messagekey1 and myapp.messagekey2:

```java

conf.addEncryptionKey("myapp.messagekey1");
conf.addEncryptionKey("myapp.messagekey2");

```

## Decrypting encrypted messages at the consumer application:
Consumers require access to one of the private keys to decrypt messages produced by the producer. If you would like to receive encrypted messages, create a public/private key pair and give your public key to the producer application so that it can encrypt messages using your public key.

## Handling Failures:
* Producer/consumer loses access to the key
  * The producer action will fail, indicating the cause of the failure. The application has the option to proceed with sending unencrypted messages in such cases. Call conf.setCryptoFailureAction(ProducerCryptoFailureAction) to control the producer behavior. 
The default behavior is to fail the request.
  * If consumption fails due to a decryption failure or missing keys on the consumer, the application has the option to consume the encrypted message or discard it. Call conf.setCryptoFailureAction(ConsumerCryptoFailureAction) to control the consumer behavior. The default behavior is to fail the request. The application will never be able to decrypt the messages if the private key is permanently lost.
* Batch messaging
  * If decryption fails and the message contains batch messages, the client will not be able to retrieve individual messages in the batch; hence, message consumption fails even if conf.setCryptoFailureAction() is set to CONSUME.
* If decryption fails, the message consumption stops and the application will notice backlog growth in addition to decryption failure messages in the client log. If the application does not have access to the private key to decrypt the message, the only option is to skip/discard backlogged messages.

diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/cookbooks-message-queue.md b/site2/website/versioned_docs/version-2.10.0-deprecated/cookbooks-message-queue.md
deleted file mode 100644
index eb43cbde5fb818..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/cookbooks-message-queue.md
+++ /dev/null
@@ -1,127 +0,0 @@
----
-id: cookbooks-message-queue
-title: Using Pulsar as a message queue
-sidebar_label: "Message queue"
-original_id: cookbooks-message-queue
----

Message queues are essential components of many large-scale data architectures. If every single work object that passes through your system absolutely *must* be processed in spite of the slowness or downright failure of this or that system component, there's a good chance that you'll need a message queue to step in and ensure that unprocessed data is retained---with correct ordering---until the required actions are taken.

Pulsar is a great choice for a message queue because:

* it was built with [persistent message storage](concepts-architecture-overview.md#persistent-storage) in mind
* it offers automatic load balancing across [consumers](reference-terminology.md#consumer) for messages on a topic (or custom load balancing if you wish)

> You can use the same Pulsar installation to act as a real-time message bus and as a message queue if you wish (or just one or the other). You can set aside some topics for real-time purposes and other topics for message queue purposes (or use specific namespaces for either purpose if you wish).


# Client configuration changes

To use a Pulsar [topic](reference-terminology.md#topic) as a message queue, you should distribute the receiver load on that topic across several consumers (the optimal number of consumers will depend on the load). Each consumer must:

* Establish a [shared subscription](concepts-messaging.md#shared) and use the same subscription name as the other consumers (otherwise the subscription is not shared and the consumers can't act as a processing ensemble)
* If you'd like to have tight control over message dispatching across consumers, set the consumers' **receiver queue** size very low (potentially even to 0 if necessary). Each Pulsar [consumer](reference-terminology.md#consumer) has a receiver queue that determines how many messages the consumer will attempt to fetch at a time. A receiver queue of 1000 (the default), for example, means that the consumer will attempt to process 1000 messages from the topic's backlog upon connection. 
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/cookbooks-message-queue.md b/site2/website/versioned_docs/version-2.10.0-deprecated/cookbooks-message-queue.md
deleted file mode 100644
index eb43cbde5fb818..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/cookbooks-message-queue.md
+++ /dev/null
@@ -1,127 +0,0 @@
---
id: cookbooks-message-queue
title: Using Pulsar as a message queue
sidebar_label: "Message queue"
original_id: cookbooks-message-queue
---

Message queues are essential components of many large-scale data architectures. If every single work object that passes through your system absolutely *must* be processed in spite of the slowness or outright failure of one system component or another, there's a good chance that you'll need a message queue to step in and ensure that unprocessed data is retained---with correct ordering---until the required actions are taken.

Pulsar is a great choice for a message queue because:

* it was built with [persistent message storage](concepts-architecture-overview.md#persistent-storage) in mind
* it offers automatic load balancing across [consumers](reference-terminology.md#consumer) for messages on a topic (or custom load balancing if you wish)

> You can use the same Pulsar installation to act as a real-time message bus and as a message queue if you wish (or just one or the other). You can set aside some topics for real-time purposes and other topics for message queue purposes (or use specific namespaces for either purpose if you wish).


# Client configuration changes

To use a Pulsar [topic](reference-terminology.md#topic) as a message queue, you should distribute the receiver load on that topic across several consumers (the optimal number of consumers depends on the load). Each consumer must:

* Establish a [shared subscription](concepts-messaging.md#shared) and use the same subscription name as the other consumers (otherwise the subscription is not shared and the consumers can't act as a processing ensemble)
* If you'd like to have tight control over message dispatching across consumers, set the consumers' **receiver queue** size very low (potentially even to 0 if necessary). Each Pulsar [consumer](reference-terminology.md#consumer) has a receiver queue that determines how many messages the consumer attempts to fetch at a time. A receiver queue of 1000 (the default), for example, means that the consumer attempts to fetch 1000 messages from the topic's backlog upon connection. Setting the receiver queue to zero essentially ensures that each consumer is doing only one thing at a time.

  The downside to restricting the receiver queue size of consumers is that it limits the potential throughput of those consumers, and a receiver queue size of zero cannot be used with [partitioned topics](reference-terminology.md#partitioned-topic). Whether the performance/control trade-off is worthwhile will depend on your use case.

## Java clients

Here's an example Java consumer configuration that uses a shared subscription:

```java

import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.SubscriptionType;

String SERVICE_URL = "pulsar://localhost:6650";
String TOPIC = "persistent://public/default/mq-topic-1";
String subscription = "sub-1";

PulsarClient client = PulsarClient.builder()
        .serviceUrl(SERVICE_URL)
        .build();

Consumer consumer = client.newConsumer()
        .topic(TOPIC)
        .subscriptionName(subscription)
        .subscriptionType(SubscriptionType.Shared)
        // If you'd like to restrict the receiver queue size
        .receiverQueueSize(10)
        .subscribe();

```

## Python clients

Here's an example Python consumer configuration that uses a shared subscription:

```python

from pulsar import Client, ConsumerType

SERVICE_URL = "pulsar://localhost:6650"
TOPIC = "persistent://public/default/mq-topic-1"
SUBSCRIPTION = "sub-1"

client = Client(SERVICE_URL)
consumer = client.subscribe(
    TOPIC,
    SUBSCRIPTION,
    # If you'd like to restrict the receiver queue size
    receiver_queue_size=10,
    consumer_type=ConsumerType.Shared)

```

## C++ clients

Here's an example C++ consumer configuration that uses a shared subscription:

```cpp

#include <pulsar/Client.h>

using namespace pulsar;

std::string serviceUrl = "pulsar://localhost:6650";
std::string topic = "persistent://public/default/mq-topic-1";
std::string subscription = "sub-1";

Client client(serviceUrl);

ConsumerConfiguration consumerConfig;
consumerConfig.setConsumerType(ConsumerShared);
// If you'd like to restrict the receiver queue size
consumerConfig.setReceiverQueueSize(10);

Consumer consumer;

Result result = client.subscribe(topic, subscription, consumerConfig, consumer);

```

## Go clients

Here is an example of a Go consumer configuration that uses a shared subscription:

```go

import (
	"log"

	"github.com/apache/pulsar-client-go/pulsar"
)

client, err := pulsar.NewClient(pulsar.ClientOptions{
	URL: "pulsar://localhost:6650",
})
if err != nil {
	log.Fatal(err)
}
consumer, err := client.Subscribe(pulsar.ConsumerOptions{
	Topic:             "persistent://public/default/mq-topic-1",
	SubscriptionName:  "sub-1",
	Type:              pulsar.Shared,
	ReceiverQueueSize: 10, // If you'd like to restrict the receiver queue size
})
if err != nil {
	log.Fatal(err)
}

```

diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/cookbooks-non-persistent.md b/site2/website/versioned_docs/version-2.10.0-deprecated/cookbooks-non-persistent.md
deleted file mode 100644
index 178301e86eb8df..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/cookbooks-non-persistent.md
+++ /dev/null
@@ -1,63 +0,0 @@
---
id: cookbooks-non-persistent
title: Non-persistent messaging
sidebar_label: "Non-persistent messaging"
original_id: cookbooks-non-persistent
---

**Non-persistent topics** are Pulsar topics in which message data is *never* [persistently stored](concepts-architecture-overview.md#persistent-storage) and kept only in memory.
This cookbook provides:

* A basic [conceptual overview](#overview) of non-persistent topics
* Information about [configurable parameters](#configuration) related to non-persistent topics
* A guide to the [CLI interface](#cli) for managing non-persistent topics

## Overview

By default, Pulsar persistently stores *all* unacknowledged messages on multiple [BookKeeper](#persistent-storage) bookies (storage nodes). Data for messages on persistent topics can thus survive broker restarts and subscriber failover.

Pulsar also, however, supports **non-persistent topics**, which are topics on which messages are *never* persisted to disk and live only in memory. When using non-persistent delivery, killing a Pulsar [broker](reference-terminology.md#broker) or disconnecting a subscriber from a topic means that all in-transit messages are lost on that (non-persistent) topic, so clients may see message loss.

Non-persistent topics have names of this form (note the `non-persistent` in the name):

```http

non-persistent://tenant/namespace/topic

```

> For more high-level information about non-persistent topics, see the [Concepts and Architecture](concepts-messaging.md#non-persistent-topics) documentation.

## Using

> In order to use non-persistent topics, they must be [enabled](#enabling) in your Pulsar broker configuration.

Once enabled, you only need to differentiate non-persistent topics by name when interacting with them. This [`pulsar-client produce`](reference-cli-tools.md#pulsar-client-produce) command, for example, would produce one message on a non-persistent topic in a standalone cluster:

```bash

$ bin/pulsar-client produce non-persistent://public/default/example-np-topic \
  --num-produce 1 \
  --messages "This message will be stored only in memory"

```

> For a more thorough guide to non-persistent topics from an administrative perspective, see the [Non-persistent topics](admin-api-topics.md) guide.

## Enabling

In order to enable non-persistent topics in a Pulsar broker, the [`enableNonPersistentTopics`](reference-configuration.md#broker-enableNonPersistentTopics) parameter must be set to `true`. This is the default, so you won't need to take any action to enable non-persistent messaging.


> #### Configuration for standalone mode
> If you're running Pulsar in standalone mode, the same configurable parameters are available but in the [`standalone.conf`](reference-configuration.md#standalone) configuration file.

If you'd like to enable *only* non-persistent topics in a broker, you can set the [`enablePersistentTopics`](reference-configuration.md#broker-enablePersistentTopics) parameter to `false` and the `enableNonPersistentTopics` parameter to `true`.

## Managing with CLI

Non-persistent topics can be managed using the [`pulsar-admin non-persistent`](reference-pulsar-admin.md#non-persistent) command-line interface. With that interface you can perform actions like [creating a partitioned non-persistent topic](reference-pulsar-admin.md#non-persistent-create-partitioned-topic), getting [stats](reference-pulsar-admin.md#non-persistent-stats) for a non-persistent topic, [listing](reference-pulsar-admin.md) non-persistent topics under a namespace, and more.

## Using with Pulsar clients

You shouldn't need to make any changes to your Pulsar clients to use non-persistent messaging beyond making sure that you use proper [topic names](#using) with `non-persistent` as the topic type, as shown in the sketch below.
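To make the naming convention concrete, here is a minimal Java sketch (using the standard client builder API; the service URL and topic are placeholders) that publishes to the same non-persistent topic as the CLI example above:

```java

PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650")
        .build();

// Only the "non-persistent" topic prefix differs from the persistent case
Producer<byte[]> producer = client.newProducer()
        .topic("non-persistent://public/default/example-np-topic")
        .create();

producer.send("This message will be stored only in memory".getBytes());
client.close();

```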
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/cookbooks-partitioned.md b/site2/website/versioned_docs/version-2.10.0-deprecated/cookbooks-partitioned.md
deleted file mode 100644
index fb9ac354cc6d60..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/cookbooks-partitioned.md
+++ /dev/null
@@ -1,7 +0,0 @@
---
id: cookbooks-partitioned
title: Partitioned topics
sidebar_label: "Partitioned Topics"
original_id: cookbooks-partitioned
---
For details of the content, refer to [manage topics](admin-api-topics.md).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/cookbooks-retention-expiry.md b/site2/website/versioned_docs/version-2.10.0-deprecated/cookbooks-retention-expiry.md
deleted file mode 100644
index bb268ecf671664..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/cookbooks-retention-expiry.md
+++ /dev/null
@@ -1,520 +0,0 @@
---
id: cookbooks-retention-expiry
title: Message retention and expiry
sidebar_label: "Message retention and expiry"
original_id: cookbooks-retention-expiry
---

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````


Pulsar brokers are responsible for handling messages that pass through Pulsar, including [persistent storage](concepts-architecture-overview.md#persistent-storage) of messages. By default, for each topic, brokers only retain messages that are in at least one backlog. A backlog is the set of unacknowledged messages for a particular subscription. As a topic can have multiple subscriptions, a topic can have multiple backlogs.

As a consequence, no messages are retained (by default) on a topic that has not had any subscriptions created for it.

(Note that messages that are no longer being stored are not necessarily immediately deleted, and may in fact still be accessible until the next ledger rollover. Because clients cannot predict when rollovers may happen, it is not wise to rely on a rollover not happening at an inconvenient point in time.)

In Pulsar, you can modify this behavior, with namespace granularity, in two ways:

* You can persistently store messages that are not within a backlog (because they've been acknowledged on every existing subscription, or because there are no subscriptions) by setting [retention policies](#retention-policies).
* Messages that are not acknowledged within a specified timeframe can be automatically acknowledged by specifying the [time to live](#time-to-live-ttl) (TTL).

Pulsar's [admin interface](admin-api-overview.md) enables you to manage both retention policies and TTL with namespace granularity (and thus within a specific tenant and either on a specific cluster or in the [`global`](concepts-architecture-overview.md#global-cluster) cluster).


> #### Retention and TTL solve two different problems
> * Message retention: Keep the data for at least X hours (even if acknowledged)
> * Time-to-live: Discard data after some time (by automatically acknowledging)
>
> Most applications will want to use at most one of these.


## Retention policies

By default, when a Pulsar message arrives at a broker, the message is stored until it has been acknowledged on all subscriptions, at which point it is marked for deletion. You can override this behavior and retain messages that have already been acknowledged on all subscriptions by setting a *retention policy* for all topics in a given namespace.
Retention is based on both a *size limit* and a *time limit*.

The diagram below illustrates the concept of message retention.
![](/assets/retention.svg)

Retention policies are useful when you use the Reader interface. The Reader interface does not use acknowledgements, and messages do not exist within backlogs. You must configure retention for Reader-only use cases.

When you set a retention policy on topics in a namespace, you must set **both** a *size limit* (via `defaultRetentionSizeInMB`) and a *time limit* (via `defaultRetentionTimeInMinutes`). You can refer to the following table when setting retention policies in `pulsar-admin`, the REST API, and Java.

|Time limit|Size limit| Message retention |
|----------|----------|------------------------|
| -1 | -1 | Infinite retention |
| -1 | >0 | Based on the size limit |
| >0 | -1 | Based on the time limit |
| 0 | 0 | Disable message retention (by default) |
| 0 | >0 | Invalid |
| >0 | 0 | Invalid |
| >0 | >0 | Acknowledged messages or messages with no active subscription will not be retained when either time or size reaches the limit. |

The retention settings apply to all messages on topics that do not have any subscriptions, or to messages that have been acknowledged by all subscriptions. The retention policy settings do not affect unacknowledged messages on topics with subscriptions. The unacknowledged messages are controlled by the backlog quota.

When a retention limit on a topic is exceeded, the oldest messages are marked for deletion until the set of retained messages falls within the specified limits again.

### Defaults

You can set message retention at the instance level with the following two parameters: `defaultRetentionTimeInMinutes` and `defaultRetentionSizeInMB`. Both parameters are set to `0` by default.

For more information about the two parameters, refer to the [`broker.conf`](reference-configuration.md#broker) configuration file.

### Set retention policy

You can set a retention policy for a namespace by specifying the namespace, a size limit, and a time limit using `pulsar-admin`, the REST API, or Java.

````mdx-code-block


You can use the [`set-retention`](reference-pulsar-admin.md#namespaces-set-retention) subcommand and specify a namespace, a size limit using the `-s`/`--size` flag, and a time limit using the `-t`/`--time` flag.

In the following example, the size limit is set to 10 GB and the time limit is set to 3 hours for each topic within the `my-tenant/my-ns` namespace.
- When the size of messages reaches 10 GB on a topic within 3 hours, the acknowledged messages will not be retained.
- After 3 hours, even if the message size is less than 10 GB, the acknowledged messages will not be retained.

```shell

$ pulsar-admin namespaces set-retention my-tenant/my-ns \
  --size 10G \
  --time 3h

```

In the following example, the time is not limited and the size limit is set to 1 TB. The size limit determines the retention.

```shell

$ pulsar-admin namespaces set-retention my-tenant/my-ns \
  --size 1T \
  --time -1

```

In the following example, the size is not limited and the time limit is set to 3 hours. The time limit determines the retention.

```shell

$ pulsar-admin namespaces set-retention my-tenant/my-ns \
  --size -1 \
  --time 3h

```

To achieve infinite retention, set both values to `-1`.

```shell

$ pulsar-admin namespaces set-retention my-tenant/my-ns \
  --size -1 \
  --time -1

```

To disable the retention policy, set both values to `0`.

```shell

$ pulsar-admin namespaces set-retention my-tenant/my-ns \
  --size 0 \
  --time 0

```


{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/retention|operation/setRetention?version=@pulsar:version_number@}

:::note

To disable the retention policy, you need to set both the size and time limits to `0`. Setting only one of them to `0` is invalid.

:::


```java

int retentionTime = 10; // 10 minutes
int retentionSize = 500; // 500 megabytes
RetentionPolicies policies = new RetentionPolicies(retentionTime, retentionSize);
admin.namespaces().setRetention(namespace, policies);

```


````

### Get retention policy

You can fetch the retention policy for a namespace by specifying the namespace. The output is a JSON object with two keys: `retentionTimeInMinutes` and `retentionSizeInMB`.

````mdx-code-block


Use the [`get-retention`](reference-pulsar-admin.md#namespaces) subcommand and specify the namespace.

##### Example

```shell

$ pulsar-admin namespaces get-retention my-tenant/my-ns
{
  "retentionTimeInMinutes": 10,
  "retentionSizeInMB": 500
}

```


{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/retention|operation/getRetention?version=@pulsar:version_number@}


```java

admin.namespaces().getRetention(namespace);

```


````

## Backlog quotas

*Backlogs* are sets of unacknowledged messages for a topic that have been stored by bookies. Pulsar stores all unacknowledged messages in backlogs until they are processed and acknowledged.

You can control the allowable size and/or time of backlogs, at the namespace level, using *backlog quotas*. Pulsar uses a quota to enforce a hard limit on the logical size of the backlogs in a topic. The backlog quota triggers an alert policy (for example, a producer exception) once the quota limit is reached.

The diagram below illustrates the concept of backlog quota.
![](/assets/backlog-quota.svg)

Setting a backlog quota involves setting:

* an allowable *size and/or time threshold* for each topic in the namespace
* a *retention policy* that determines which action the [broker](reference-terminology.md#broker) takes if the threshold is exceeded.

The following retention policies are available:

Policy | Action
:------|:------
`producer_request_hold` | The broker will hold and not persist the produce request payload
`producer_exception` | The broker will disconnect from the client by throwing an exception
`consumer_backlog_eviction` | The broker will begin discarding backlog messages


> #### Beware the distinction between retention policy types
> As you may have noticed, there are two definitions of the term "retention policy" in Pulsar, one that applies to persistent storage of messages not in backlogs, and one that applies to messages within backlogs.


Backlog quotas are handled at the namespace level. They can be managed via the following operations.

### Set size/time thresholds and backlog retention policies

You can set a size and/or time threshold and backlog retention policy for all of the topics in a [namespace](reference-terminology.md#namespace) by specifying the namespace, a size limit and/or a time limit in seconds, and a policy by name.
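Before the per-interface examples below, here is a minimal Java sketch of a *time-based* quota. It assumes a recent client version in which the `BacklogQuota.builder()` API and the three-argument `setBacklogQuota` overload (with the `message_age` quota type) are available:

```java

BacklogQuota timeQuota = BacklogQuota.builder()
        .limitTime(3600) // backlog age limit in seconds, mirroring --limitTime 3600 below
        .retentionPolicy(BacklogQuota.RetentionPolicy.producer_request_hold)
        .build();

admin.namespaces().setBacklogQuota(namespace, timeQuota,
        BacklogQuota.BacklogQuotaType.message_age);

```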

````mdx-code-block


Use the [`set-backlog-quota`](reference-pulsar-admin.md#namespaces) subcommand and specify a namespace, a size limit using the `-l`/`--limit` flag or a time limit using the `-lt`/`--limitTime` flag, a retention policy using the `-p`/`--policy` flag, and a policy type using the `-t`/`--type` flag (the default is `destination_storage`).

##### Example

```shell

$ pulsar-admin namespaces set-backlog-quota my-tenant/my-ns \
  --limit 2G \
  --policy producer_request_hold

```

```shell

$ pulsar-admin namespaces set-backlog-quota my-tenant/my-ns \
--limitTime 3600 \
--policy producer_request_hold \
--type message_age

```


{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/setBacklogQuota?version=@pulsar:version_number@}


```java

long sizeLimit = 2147483648L;
BacklogQuota.RetentionPolicy policy = BacklogQuota.RetentionPolicy.producer_request_hold;
BacklogQuota quota = new BacklogQuota(sizeLimit, policy);
admin.namespaces().setBacklogQuota(namespace, quota);

```


````

### Get backlog threshold and backlog retention policy

You can see which size threshold and backlog retention policy have been applied to a namespace.

````mdx-code-block


Use the [`get-backlog-quotas`](reference-pulsar-admin.md#pulsar-admin-namespaces-get-backlog-quotas) subcommand and specify a namespace. Here's an example:

```shell

$ pulsar-admin namespaces get-backlog-quotas my-tenant/my-ns
{
  "destination_storage": {
    "limit" : 2147483648,
    "policy" : "producer_request_hold"
  }
}

```


{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/backlogQuotaMap|operation/getBacklogQuotaMap?version=@pulsar:version_number@}


```java

Map<BacklogQuota.BacklogQuotaType, BacklogQuota> quotas =
  admin.namespaces().getBacklogQuotas(namespace);

```


````

### Remove backlog quotas

````mdx-code-block


Use the [`remove-backlog-quota`](reference-pulsar-admin.md#pulsar-admin-namespaces-remove-backlog-quota) subcommand and specify a namespace; use the `-t`/`--type` flag to specify the backlog type to remove (the default is `destination_storage`). Here's an example:

```shell

$ pulsar-admin namespaces remove-backlog-quota my-tenant/my-ns

```


{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/removeBacklogQuota?version=@pulsar:version_number@}


```java

admin.namespaces().removeBacklogQuota(namespace);

```


````

### Clear backlog

#### pulsar-admin

Use the [`clear-backlog`](reference-pulsar-admin.md#pulsar-admin-namespaces-clear-backlog) subcommand.

##### Example

```shell

$ pulsar-admin namespaces clear-backlog my-tenant/my-ns

```

By default, you will be prompted to confirm that you really want to clear the backlog for the namespace. You can override the prompt using the `-f`/`--force` flag.

## Time to live (TTL)

By default, Pulsar stores all unacknowledged messages forever. This can lead to heavy disk space usage in cases where a lot of messages are going unacknowledged. If disk space is a concern, you can set a time to live (TTL) that determines how long unacknowledged messages will be retained.

The TTL parameter is like a stopwatch attached to each message that defines the amount of time a message is allowed to stay in the unacknowledged state. When the TTL expires, Pulsar automatically moves the message to the acknowledged state (and thus makes it ready for deletion).

The diagram below illustrates the concept of TTL.
![](/assets/ttl.svg)

### Set the TTL for a namespace

````mdx-code-block


Use the [`set-message-ttl`](reference-pulsar-admin.md#pulsar-admin-namespaces-set-message-ttl) subcommand and specify a namespace and a TTL (in seconds) using the `-ttl`/`--messageTTL` flag.

##### Example

```shell

$ pulsar-admin namespaces set-message-ttl my-tenant/my-ns \
  --messageTTL 120 # TTL of 2 minutes

```


{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/setNamespaceMessageTTL?version=@pulsar:version_number@}


```java

admin.namespaces().setNamespaceMessageTTL(namespace, ttlInSeconds);

```


````

### Get the TTL configuration for a namespace

````mdx-code-block


Use the [`get-message-ttl`](reference-pulsar-admin.md#pulsar-admin-namespaces-get-message-ttl) subcommand and specify a namespace.

##### Example

```shell

$ pulsar-admin namespaces get-message-ttl my-tenant/my-ns
60

```


{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/getNamespaceMessageTTL?version=@pulsar:version_number@}


```java

admin.namespaces().getNamespaceMessageTTL(namespace);

```


````

### Remove the TTL configuration for a namespace

````mdx-code-block


Use the [`remove-message-ttl`](reference-pulsar-admin.md#pulsar-admin-namespaces-remove-message-ttl) subcommand and specify a namespace.

##### Example

```shell

$ pulsar-admin namespaces remove-message-ttl my-tenant/my-ns

```


{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/removeNamespaceMessageTTL?version=@pulsar:version_number@}


```java

admin.namespaces().removeNamespaceMessageTTL(namespace);

```


````

## Delete messages from namespaces

When it comes to the physical storage size, message expiry and retention are two sides of the same coin.
* The backlog quota and TTL parameters prevent disk size from growing indefinitely, as Pulsar's default behavior is to persist unacknowledged messages.
* The retention policy allocates storage space to accommodate the messages that Pulsar would otherwise delete by default.

In conclusion, the size of your physical storage should accommodate the sum of the backlog quota and the retention size.

The message deletion rate (the rate at which disk space is released) is determined by multiple factors.

- **Segment rollover period**: basically, the segment rollover period is how often a new segment is created. Once a new segment is created, the old segment will be deleted. By default, this happens either when you have written 50,000 entries (messages) or have waited 240 minutes. You can tune this in your broker.

- **Entry log rollover period**: multiple ledgers in BookKeeper are interleaved into an [entry log](https://bookkeeper.apache.org/docs/4.11.1/getting-started/concepts/#entry-logs). Before the data of a deleted ledger can be reclaimed, the entry log containing it must first be rolled over.
The entry log rollover period is configurable, but is purely based on the entry log size. For details, see [here](https://bookkeeper.apache.org/docs/4.11.1/reference/config/#entry-log-settings). Once the entry log is rolled over, the entry log can be garbage collected.

- **Garbage collection interval**: because entry logs have interleaved ledgers, to free up space, the entry logs need to be rewritten. The garbage collection interval is how often BookKeeper performs garbage collection.
This is related to the minor compaction and major compaction of entry logs. For details, see [here](https://bookkeeper.apache.org/docs/4.11.1/reference/config/#entry-log-compaction-settings).

The diagram below illustrates one of the cases in which the consumed storage size is larger than the given limits for backlog and retention. Messages over the retention limit are kept because other messages in the same segment are still within the retention period.
![](/assets/retention-storage-size.svg)

If you do not have any retention period and you never have much of a backlog, the upper limit for retained (acknowledged) messages equals the Pulsar segment rollover period + the entry log rollover period + (the garbage collection interval * the garbage collection ratios).
\ No newline at end of file
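To make that formula concrete, here is a purely illustrative calculation; the numbers below are hypothetical and not Pulsar or BookKeeper defaults:

```

upper bound ≈ segment rollover period + entry log rollover period + (GC interval * GC ratios)
            ≈ 4 hours + 1 hour + (15 minutes * 1)
            ≈ 5.25 hours

```

Under those assumed settings, an acknowledged message could continue to occupy disk space for up to roughly five and a quarter hours after every subscription has acknowledged it.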
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/cookbooks-tiered-storage.md b/site2/website/versioned_docs/version-2.10.0-deprecated/cookbooks-tiered-storage.md
deleted file mode 100644
index b1deb135209a98..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/cookbooks-tiered-storage.md
+++ /dev/null
@@ -1,344 +0,0 @@
---
id: cookbooks-tiered-storage
title: Tiered Storage
sidebar_label: "Tiered Storage"
original_id: cookbooks-tiered-storage
---

Pulsar's **Tiered Storage** feature allows older backlog data to be offloaded to long term storage, thereby freeing up space in BookKeeper and reducing storage costs. This cookbook walks you through using tiered storage in your Pulsar cluster.

* Tiered storage uses [Apache jclouds](https://jclouds.apache.org) to support [Amazon S3](https://aws.amazon.com/s3/) and [Google Cloud Storage](https://cloud.google.com/storage/) (GCS for short)
for long term storage. With jclouds, it is easy to add support for more [cloud storage providers](https://jclouds.apache.org/reference/providers/#blobstore-providers) in the future.

* Tiered storage uses [Apache Hadoop](http://hadoop.apache.org/) to support filesystems for long term storage.
With Hadoop, it is easy to add support for more filesystems in the future.

## When should I use Tiered Storage?

Tiered storage should be used when you have a topic for which you want to keep a very long backlog for a long time. For example, if you have a topic containing user actions which you use to train your recommendation systems, you may want to keep that data for a long time, so that if you change your recommendation algorithm you can rerun it against your full user history.

## The offloading mechanism

A topic in Pulsar is backed by a log, known as a managed ledger. This log is composed of an ordered list of segments. Pulsar only ever writes to the final segment of the log. All previous segments are sealed. The data within a segment is immutable. This is known as a segment-oriented architecture.

![Tiered storage](/assets/pulsar-tiered-storage.png "Tiered Storage")

The Tiered Storage offloading mechanism takes advantage of this segment-oriented architecture. When offloading is requested, the segments of the log are copied, one-by-one, to tiered storage. All segments of the log, apart from the segment currently being written to, can be offloaded.

On the broker, the administrator must configure the bucket and credentials for the cloud storage service.
The configured bucket must exist before attempting to offload. If it does not exist, the offload operation will fail.

Pulsar uses multi-part objects to upload the segment data. It is possible that a broker could crash while uploading the data.
We recommend you add a lifecycle rule to your bucket to expire incomplete multi-part uploads after a day or two to avoid
getting charged for incomplete uploads.

When ledgers are offloaded to long term storage, you can still query the data in the offloaded ledgers with Pulsar SQL.

## Configuring the offload driver

Offloading is configured in ```broker.conf```.

At a minimum, the administrator must configure the driver, the bucket, and the authenticating credentials.
There are also some other knobs to configure, like the bucket region, the maximum block size in the backing storage, and so on.

Currently, the following driver types are supported:

- `aws-s3`: [Simple Cloud Storage Service](https://aws.amazon.com/s3/)
- `google-cloud-storage`: [Google Cloud Storage](https://cloud.google.com/storage/)
- `filesystem`: [Filesystem Storage](http://hadoop.apache.org/)

> Driver names are case-insensitive. There is a third driver type, `s3`, which is identical to `aws-s3`,
> though it requires that you specify an endpoint URL using `s3ManagedLedgerOffloadServiceEndpoint`. This is useful if
> you use an S3-compatible data store other than AWS.

```conf

managedLedgerOffloadDriver=aws-s3

```

### "aws-s3" Driver configuration

#### Bucket and Region

Buckets are the basic containers that hold your data.
Everything that you store in Cloud Storage must be contained in a bucket.
You can use buckets to organize your data and control access to your data,
but unlike directories and folders, you cannot nest buckets.

```conf

s3ManagedLedgerOffloadBucket=pulsar-topic-offload

```

The bucket region is the region where the bucket is located. The bucket region is not a required
but a recommended configuration. If it is not configured, the default region is used.

With AWS S3, the default region is `US East (N. Virginia)`. The [AWS Regions and Endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html) page contains more information.

```conf

s3ManagedLedgerOffloadRegion=eu-west-3

```

#### Authentication with AWS

To be able to access AWS S3, you need to authenticate with AWS S3.
Pulsar does not provide any direct means of configuring authentication for AWS S3,
but relies on the mechanisms supported by the [DefaultAWSCredentialsProviderChain](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html).

Once you have created a set of credentials in the AWS IAM console, they can be configured in a number of ways.

1. Using EC2 instance metadata credentials

If you are on an AWS instance with an instance profile that provides credentials, Pulsar uses these credentials
if no other mechanism is provided.

2. Set the environment variables **AWS_ACCESS_KEY_ID** and **AWS_SECRET_ACCESS_KEY** in ```conf/pulsar_env.sh```.

```bash

export AWS_ACCESS_KEY_ID=ABC123456789
export AWS_SECRET_ACCESS_KEY=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c

```

> `export` is important so that the variables are made available in the environment of spawned processes.


3. Add the Java system properties *aws.accessKeyId* and *aws.secretKey* to **PULSAR_EXTRA_OPTS** in `conf/pulsar_env.sh`.

```bash

PULSAR_EXTRA_OPTS="${PULSAR_EXTRA_OPTS} ${PULSAR_MEM} ${PULSAR_GC} -Daws.accessKeyId=ABC123456789 -Daws.secretKey=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c -Dio.netty.leakDetectionLevel=disabled -Dio.netty.recycler.maxCapacity.default=1000 -Dio.netty.recycler.linkCapacity=1024"

```

4. Set the access credentials in ```~/.aws/credentials```.

```conf

[default]
aws_access_key_id=ABC123456789
aws_secret_access_key=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c

```

5. Assuming an IAM role

If you want to assume an IAM role, specify the following:

```conf

s3ManagedLedgerOffloadRole=
s3ManagedLedgerOffloadRoleSessionName=pulsar-s3-offload

```

This uses the `DefaultAWSCredentialsProviderChain` for assuming this role.

> The broker must be rebooted for credentials specified in `pulsar_env.sh` to take effect.

#### Configuring the size of block read/write

Pulsar also provides some knobs to configure the size of requests sent to AWS S3.

- ```s3ManagedLedgerOffloadMaxBlockSizeInBytes``` configures the maximum size of
  a "part" sent during a multipart upload. This cannot be smaller than 5 MB. The default is 64 MB.
- ```s3ManagedLedgerOffloadReadBufferSizeInBytes``` configures the block size for
  each individual read when reading back data from AWS S3. The default is 1 MB.

In both cases, these should not be touched unless you know what you are doing.

### "google-cloud-storage" Driver configuration

Buckets are the basic containers that hold your data. Everything that you store in
Cloud Storage must be contained in a bucket. You can use buckets to organize your data and
control access to your data, but unlike directories and folders, you cannot nest buckets.

```conf

gcsManagedLedgerOffloadBucket=pulsar-topic-offload

```

The bucket region is the region where the bucket is located. The bucket region is not a required but
a recommended configuration. If it is not configured, the default region is used.

Regarding GCS, buckets are created in the `us` multi-regional location by default;
the [Bucket Locations](https://cloud.google.com/storage/docs/bucket-locations) page contains more information.

```conf

gcsManagedLedgerOffloadRegion=europe-west3

```

#### Authentication with GCS

The administrator needs to configure `gcsManagedLedgerOffloadServiceAccountKeyFile` in `broker.conf`
for the broker to be able to access the GCS service. `gcsManagedLedgerOffloadServiceAccountKeyFile` is
a JSON file containing the GCS credentials of a service account.
The [Service Accounts section of this page](https://support.google.com/googleapi/answer/6158849) contains
more information on how to create this key file for authentication. More information about Google Cloud IAM
is available [here](https://cloud.google.com/storage/docs/access-control/iam).

To generate service account credentials or view the public credentials that you've already generated, follow these steps:

1. Open the [Service accounts page](https://console.developers.google.com/iam-admin/serviceaccounts).
2. Select a project or create a new one.
3. Click **Create service account**.
4. In the **Create service account** window, type a name for the service account, and select **Furnish a new private key**. If you want to [grant G Suite domain-wide authority](https://developers.google.com/identity/protocols/OAuth2ServiceAccount#delegatingauthority) to the service account, also select **Enable G Suite Domain-wide Delegation**.
5. Click **Create**.

> Note: Make sure that the service account you create has permission to operate GCS. You need to assign the **Storage Admin** permission to your service account, as described [here](https://cloud.google.com/storage/docs/access-control/iam).

```conf

gcsManagedLedgerOffloadServiceAccountKeyFile="/Users/hello/Downloads/project-804d5e6a6f33.json"

```

#### Configuring the size of block read/write

Pulsar also provides some knobs to configure the size of requests sent to GCS.

- ```gcsManagedLedgerOffloadMaxBlockSizeInBytes``` configures the maximum size of a "part" sent
  during a multipart upload. This cannot be smaller than 5 MB. The default is 64 MB.
- ```gcsManagedLedgerOffloadReadBufferSizeInBytes``` configures the block size for each individual
  read when reading back data from GCS. The default is 1 MB.

In both cases, these should not be touched unless you know what you are doing.

### "filesystem" Driver configuration


#### Configure connection address

You can configure the connection address in the `broker.conf` file.

```conf

fileSystemURI="hdfs://127.0.0.1:9000"

```

#### Configure Hadoop profile path

The configuration file is stored in the Hadoop profile path. It contains various settings, such as the base path, authentication, and so on.

```conf

fileSystemProfilePath="../conf/filesystem_offload_core_site.xml"

```

The model for storing topic data uses `org.apache.hadoop.io.MapFile`. You can use all of the configurations in `org.apache.hadoop.io.MapFile` for Hadoop.

**Example**

```conf

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value></value>
    </property>

    <property>
        <name>hadoop.tmp.dir</name>
        <value>pulsar</value>
    </property>

    <property>
        <name>io.file.buffer.size</name>
        <value>4096</value>
    </property>

    <property>
        <name>io.seqfile.compress.blocksize</name>
        <value>1000000</value>
    </property>

    <property>
        <name>io.seqfile.compression.type</name>
        <value>BLOCK</value>
    </property>

    <property>
        <name>io.map.index.interval</name>
        <value>128</value>
    </property>
</configuration>

```

For more information about the configurations in `org.apache.hadoop.io.MapFile`, see [Filesystem Storage](http://hadoop.apache.org/).
## Configuring offload to run automatically

Namespace policies can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that the topic has stored on the Pulsar cluster. Once the topic reaches the threshold, an offload operation is triggered. Setting the threshold to a negative value disables automatic offloading. Setting the threshold to 0 causes the broker to offload data as soon as it possibly can.

```bash

$ bin/pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace

```

> Automatic offload runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, offload will not be triggered until the current segment is full.

## Configuring read priority for offloaded messages

By default, once messages are offloaded to long term storage, brokers read them from long term storage, but the messages still exist in BookKeeper for a period that depends on the administrator's configuration. For
messages that exist in both BookKeeper and long term storage, if you prefer to read them from BookKeeper, you can use the following commands to change this configuration.

```bash

# default value for -orp is tiered-storage-first
$ bin/pulsar-admin namespaces set-offload-policies my-tenant/my-namespace -orp bookkeeper-first
$ bin/pulsar-admin topics set-offload-policies my-tenant/my-namespace/topic1 -orp bookkeeper-first

```

## Triggering offload manually

Offloading can be manually triggered through a REST endpoint on the Pulsar broker.
We provide a CLI which calls this REST endpoint for you.

When triggering offload, you must specify the maximum size, in bytes, of backlog which will be retained locally in BookKeeper. The offload mechanism offloads segments from the start of the topic backlog until this condition is met.

```bash

$ bin/pulsar-admin topics offload --size-threshold 10M my-tenant/my-namespace/topic1
Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1

```

The command that triggers an offload does not wait until the offload operation has completed. To check the status of the offload, use `offload-status`.

```bash

$ bin/pulsar-admin topics offload-status my-tenant/my-namespace/topic1
Offload is currently running

```

To wait for offload to complete, add the `-w` flag.

```bash

$ bin/pulsar-admin topics offload-status -w my-tenant/my-namespace/topic1
Offload was a success

```

If there is an error offloading, the error is propagated to the `offload-status` command.

```bash

$ bin/pulsar-admin topics offload-status persistent://public/default/topic1
Error in offload
null

Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=

```

diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/deploy-aws.md b/site2/website/versioned_docs/version-2.10.0-deprecated/deploy-aws.md
deleted file mode 100644
index 5497aadd7865f8..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/deploy-aws.md
+++ /dev/null
@@ -1,271 +0,0 @@
---
id: deploy-aws
title: Deploying a Pulsar cluster on AWS using Terraform and Ansible
sidebar_label: "Amazon Web Services"
original_id: deploy-aws
---

> For instructions on deploying a single Pulsar cluster manually rather than using Terraform and Ansible, see [Deploying a Pulsar cluster on bare metal](deploy-bare-metal.md). For instructions on manually deploying a multi-cluster Pulsar instance, see [Deploying a Pulsar instance on bare metal](deploy-bare-metal-multi-cluster.md).

One of the easiest ways to get a Pulsar [cluster](reference-terminology.md#cluster) running on [Amazon Web Services](https://aws.amazon.com/) (AWS) is to use the [Terraform](https://terraform.io) infrastructure provisioning tool and the [Ansible](https://www.ansible.com) server automation tool. Terraform can create the resources necessary for running the Pulsar cluster---[EC2](https://aws.amazon.com/ec2/) instances, networking and security infrastructure, etc.---while Ansible can install and run Pulsar on the provisioned resources.

## Requirements and setup

In order to install a Pulsar cluster on AWS using Terraform and Ansible, you need to prepare the following things:

* An [AWS account](https://aws.amazon.com/account/) and the [`aws`](https://aws.amazon.com/cli/) command-line tool
* Python and [pip](https://pip.pypa.io/en/stable/)
* The [`terraform-inventory`](https://github.com/adammck/terraform-inventory) tool, which enables Ansible to use Terraform artifacts

You also need to make sure that you are currently logged into your AWS account via the `aws` tool:

```bash

$ aws configure

```

## Installation

You can install Ansible on Linux or macOS using pip.

```bash

$ pip install ansible

```

You can install Terraform using the instructions [here](https://learn.hashicorp.com/tutorials/terraform/install-cli).

You also need to have the Terraform and Ansible configuration for Pulsar locally on your machine. You can find them in the [GitHub repository](https://github.com/apache/pulsar) of Pulsar, which you can fetch using Git commands:

```bash

$ git clone https://github.com/apache/pulsar
$ cd pulsar/deployment/terraform-ansible/aws

```

## SSH setup

> If you already have an SSH key and want to use it, you can skip the step of generating an SSH key and update the `private_key_file` setting
> in the `ansible.cfg` file and the `public_key_path` setting in the `terraform.tfvars` file.
>
> For example, if you already have a private SSH key in `~/.ssh/pulsar_aws` and a public key in `~/.ssh/pulsar_aws.pub`,
> follow the steps below:
>
> 1. update `ansible.cfg` with the following values:
>

> ```shell
>
> private_key_file=~/.ssh/pulsar_aws
>
>
> ```

>
> 2. update `terraform.tfvars` with the following values:
>

> ```shell
>
> public_key_path=~/.ssh/pulsar_aws.pub
>
>
> ```


In order to create the necessary AWS resources using Terraform, you need to create an SSH key. Enter the following commands to create a private SSH key in `~/.ssh/id_rsa` and a public key in `~/.ssh/id_rsa.pub`:

```bash

$ ssh-keygen -t rsa

```

Do *not* enter a passphrase (hit **Enter** when prompted instead). Enter the following command to verify that a key has been created:

```bash

$ ls ~/.ssh
id_rsa id_rsa.pub

```

## Create AWS resources using Terraform

To start building AWS resources with Terraform, you need to install all Terraform dependencies. Enter the following command:

```bash

$ terraform init
# This will create a .terraform folder

```

After that, you can apply the default Terraform configuration by entering this command:

```bash

$ terraform apply

```

You then see the following prompt:

```bash

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value:

```

Type `yes` and hit **Enter**. Applying the configuration could take several minutes. When the apply finishes, you can see `Apply complete!` along with some other information, including the number of resources created.

### Apply a non-default configuration

You can apply a non-default Terraform configuration by changing the values in the `terraform.tfvars` file. The following variables are available:

Variable name | Description | Default
:-------------|:------------|:-------
`public_key_path` | The path of the public key that you have generated | `~/.ssh/id_rsa.pub`
`region` | The AWS region in which the Pulsar cluster runs | `us-west-2`
`availability_zone` | The AWS availability zone in which the Pulsar cluster runs | `us-west-2a`
`aws_ami` | The [Amazon Machine Image](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html) (AMI) that the cluster uses | `ami-9fa343e7`
`num_zookeeper_nodes` | The number of [ZooKeeper](https://zookeeper.apache.org) nodes in the ZooKeeper cluster | 3
`num_bookie_nodes` | The number of bookies that run in the cluster | 3
`num_broker_nodes` | The number of Pulsar brokers that run in the cluster | 2
`num_proxy_nodes` | The number of Pulsar proxies that run in the cluster | 1
`base_cidr_block` | The root [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) that network assets use for the cluster | `10.0.0.0/16`
`instance_types` | The EC2 instance types to be used. This variable is a map with four keys: `zookeeper` for the ZooKeeper instances, `bookie` for the BookKeeper bookies, and `broker` and `proxy` for the Pulsar brokers and proxies | `t2.small` (ZooKeeper), `i3.xlarge` (BookKeeper) and `c5.2xlarge` (Brokers/Proxies)

### What is installed

When you run the Ansible playbook, the following AWS resources are used:

* 9 total [Elastic Compute Cloud](https://aws.amazon.com/ec2) (EC2) instances running the [ami-9fa343e7](https://access.redhat.com/articles/3135091) Amazon Machine Image (AMI), which runs [Red Hat Enterprise Linux (RHEL) 7.4](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/7.4_release_notes/index). By default, that includes:
  * 3 small VMs for ZooKeeper ([t3.small](https://www.ec2instances.info/?selected=t3.small) instances)
  * 3 larger VMs for BookKeeper [bookies](reference-terminology.md#bookie) ([i3.xlarge](https://www.ec2instances.info/?selected=i3.xlarge) instances)
  * 2 larger VMs for Pulsar [brokers](reference-terminology.md#broker) ([c5.2xlarge](https://www.ec2instances.info/?selected=c5.2xlarge) instances)
  * 1 larger VM for the Pulsar [proxy](reference-terminology.md#proxy) (a [c5.2xlarge](https://www.ec2instances.info/?selected=c5.2xlarge) instance)
* An EC2 [security group](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html)
* A [virtual private cloud](https://aws.amazon.com/vpc/) (VPC) for security
* An [API Gateway](https://aws.amazon.com/api-gateway/) for connections from the outside world
* A [route table](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Route_Tables.html) for the Pulsar cluster's VPC
* A [subnet](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html) for the VPC

All EC2 instances for the cluster run in the [us-west-2](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html) region.

### Fetch your Pulsar connection URL

When you apply the Terraform configuration by entering the command `terraform apply`, Terraform outputs a value for the `pulsar_service_url`.
The value should look something like this:

```

pulsar://pulsar-elb-1800761694.us-west-2.elb.amazonaws.com:6650

```

You can fetch that value at any time by entering the command `terraform output pulsar_service_url` or parsing the `terraform.tfstate` file (which is JSON, even though the filename does not reflect that):

```bash

$ cat terraform.tfstate | jq .modules[0].outputs.pulsar_service_url.value

```

### Destroy your cluster

At any point, you can destroy all AWS resources associated with your cluster using Terraform's `destroy` command:

```bash

$ terraform destroy

```

## Setup Disks

Before you run the Pulsar playbook, you need to mount the disks to the correct directories on those bookie nodes. Since different types of machines have different disk layouts, you need to update the task defined in the `setup-disk.yaml` file after changing the `instance_types` in your Terraform config.

To setup disks on bookie nodes, enter this command:

```bash

$ ansible-playbook \
  --user='ec2-user' \
  --inventory=`which terraform-inventory` \
  setup-disk.yaml

```

After that, the disks are mounted under `/mnt/journal` as the journal disk, and `/mnt/storage` as the ledger disk.
Remember to enter this command only once. If you attempt to enter this command again after you have run the Pulsar playbook, your disks might potentially be erased again, causing the bookies to fail to start up.

## Run the Pulsar playbook

Once you have created the necessary AWS resources using Terraform, you can install and run Pulsar on the Terraform-created EC2 instances using Ansible.

(Optional) If you want to use any [built-in IO connectors](io-connectors.md), edit the `Download Pulsar IO packages` task in the `deploy-pulsar.yaml` file and uncomment the connectors you want to use.

To run the playbook, enter this command:

```bash

$ ansible-playbook \
  --user='ec2-user' \
  --inventory=`which terraform-inventory` \
  ../deploy-pulsar.yaml

```

If you have created a private SSH key at a location different from `~/.ssh/id_rsa`, you can specify the different location using the `--private-key` flag in the following command:

```bash

$ ansible-playbook \
  --user='ec2-user' \
  --inventory=`which terraform-inventory` \
  --private-key="~/.ssh/some-non-default-key" \
  ../deploy-pulsar.yaml

```

## Access the cluster

You can now access your running Pulsar using the unique Pulsar connection URL for your cluster, which you can obtain following the instructions [above](#fetch-your-pulsar-connection-url).

For a quick demonstration of accessing the cluster, we can use the Python client for Pulsar and the Python shell. First, install the Pulsar Python module using pip:

```bash

$ pip install pulsar-client

```

Now, open up the Python shell using the `python` command:

```bash

$ python

```

Once you are in the shell, enter the following command:

```python

>>> import pulsar
>>> client = pulsar.Client('pulsar://pulsar-elb-1800761694.us-west-2.elb.amazonaws.com:6650')
# Make sure to use your connection URL
>>> producer = client.create_producer('persistent://public/default/test-topic')
>>> producer.send(('Hello world').encode('utf-8'))
>>> client.close()

```

If all of these commands are successful, Pulsar clients can now use your cluster!
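The equivalent check from a Java client is a useful sanity test as well. The sketch below uses the standard client builder API; replace the service URL with your own `pulsar_service_url` value:

```java

PulsarClient client = PulsarClient.builder()
        // Make sure to use your connection URL
        .serviceUrl("pulsar://pulsar-elb-1800761694.us-west-2.elb.amazonaws.com:6650")
        .build();

Producer<byte[]> producer = client.newProducer()
        .topic("persistent://public/default/test-topic")
        .create();

producer.send("Hello world".getBytes());
client.close();

```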
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/deploy-bare-metal-multi-cluster.md b/site2/website/versioned_docs/version-2.10.0-deprecated/deploy-bare-metal-multi-cluster.md
deleted file mode 100644
index 9ac1a85580ffa8..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/deploy-bare-metal-multi-cluster.md
+++ /dev/null
@@ -1,452 +0,0 @@
---
id: deploy-bare-metal-multi-cluster
title: Deploying a multi-cluster on bare metal
sidebar_label: "Bare metal multi-cluster"
original_id: deploy-bare-metal-multi-cluster
---

:::tip

1. A single-cluster Pulsar installation is sufficient in most use cases, such as experimenting with Pulsar or using Pulsar in a startup or in a single team. If you need to run a multi-cluster Pulsar instance, see the [guide](deploy-bare-metal-multi-cluster.md).
2. If you want to use all built-in [Pulsar IO](io-overview.md) connectors, you need to download the `apache-pulsar-io-connectors` package and install `apache-pulsar-io-connectors` under the `connectors` directory in the Pulsar directory on every broker node, or on every function-worker node if you run a separate cluster of function workers for [Pulsar Functions](functions-overview.md).
3. If you want to use the [Tiered Storage](concepts-tiered-storage.md) feature in your Pulsar deployment, you need to download the `apache-pulsar-offloaders` package and install `apache-pulsar-offloaders` under the `offloaders` directory in the Pulsar directory on every broker node. For more details of how to configure this feature, you can refer to the [Tiered storage cookbook](cookbooks-tiered-storage.md).

:::

A Pulsar instance consists of multiple Pulsar clusters working in unison. You can distribute clusters across data centers or geographical regions and replicate the clusters amongst themselves using [geo-replication](administration-geo.md). Deploying a multi-cluster Pulsar instance consists of the following steps:

1. Deploying two separate ZooKeeper quorums: a local quorum for each cluster in the instance and a configuration store quorum for instance-wide tasks
2. Initializing cluster metadata for each cluster
3. Deploying a BookKeeper cluster of bookies in each Pulsar cluster
4. Deploying brokers in each Pulsar cluster


> #### Run Pulsar locally or on Kubernetes?
> This guide shows you how to deploy Pulsar in production in a non-Kubernetes environment. If you want to run a standalone Pulsar cluster on a single machine for development purposes, see the [Setting up a local cluster](getting-started-standalone.md) guide. If you want to run Pulsar on [Kubernetes](https://kubernetes.io), see the [Pulsar on Kubernetes](deploy-kubernetes.md) guide, which includes sections on running Pulsar on Kubernetes, on Google Kubernetes Engine and on Amazon Web Services.

## System requirement

Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. You need to install a 64-bit JRE/JDK 8 or later version; JRE/JDK 11 is recommended.

:::note

Broker is only supported on 64-bit JVM.
- -::: - -## Install Pulsar - -To get started running Pulsar, download a binary tarball release in one of the following ways: - -* by clicking the link below and downloading the release from an Apache mirror: - - * Pulsar @pulsar:version@ binary release - -* from the Pulsar [downloads page](pulsar:download_page_url) -* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) -* using [wget](https://www.gnu.org/software/wget): - - ```shell - - $ wget 'https://www.apache.org/dyn/mirrors/mirrors.cgi?action=download&filename=pulsar/pulsar-@pulsar:version@/apache-pulsar-@pulsar:version@-bin.tar.gz' -O apache-pulsar-@pulsar:version@-bin.tar.gz - - ``` - -Once you download the tarball, untar it and `cd` into the resulting directory: - -```bash - -$ tar xvfz apache-pulsar-@pulsar:version@-bin.tar.gz -$ cd apache-pulsar-@pulsar:version@ - -``` - -The Pulsar binary package initially contains the following directories: - -Directory | Contains -:---------|:-------- -`bin` | [Command-line tools](reference-cli-tools.md) of Pulsar, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](/tools/pulsar-admin/) -`conf` | Configuration files for Pulsar, including for [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more -`examples` | A Java JAR file containing example [Pulsar Functions](functions-overview.md) -`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files that Pulsar uses -`licenses` | License files, in `.txt` form, for various components of the Pulsar codebase - -The following directories are created once you begin running Pulsar: - -Directory | Contains -:---------|:-------- -`data` | The data storage directory that ZooKeeper and BookKeeper use -`instances` | Artifacts created for [Pulsar Functions](functions-overview.md) -`logs` | Logs that the installation creates - - -## Deploy ZooKeeper - -Each Pulsar instance relies on two separate ZooKeeper quorums. - -* Local ZooKeeper operates at the cluster level and provides cluster-specific configuration management and coordination. Each Pulsar cluster needs a dedicated ZooKeeper cluster. -* Configuration Store operates at the instance level and provides configuration management for the entire system (and thus across clusters). An independent cluster of machines or the same machines that local ZooKeeper uses can provide the configuration store quorum. - -You can use an independent cluster of machines or the same machines used by local ZooKeeper to provide the configuration store quorum. - - -### Deploy local ZooKeeper - -ZooKeeper manages a variety of essential coordination-related and configuration-related tasks for Pulsar. - -You need to stand up one local ZooKeeper cluster per Pulsar cluster for deploying a Pulsar instance. - -To begin, add all ZooKeeper servers to the quorum configuration specified in the [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file. Add a `server.N` line for each node in the cluster to the configuration, where `N` is the number of the ZooKeeper node. 
The following is an example for a three-node cluster:
-
-```properties
-
-server.1=zk1.us-west.example.com:2888:3888
-server.2=zk2.us-west.example.com:2888:3888
-server.3=zk3.us-west.example.com:2888:3888
-
-```
-
-On each host, you need to specify the ID of the node in the `myid` file of each node, which is in the `data/zookeeper` folder of each server by default (you can change the file location via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter).
-
-:::tip
-
-See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed information on `myid` and more.
-
-:::
-
-On a ZooKeeper server at `zk1.us-west.example.com`, for example, you could set the `myid` value like this:
-
-```shell
-
-$ mkdir -p data/zookeeper
-$ echo 1 > data/zookeeper/myid
-
-```
-
-On `zk2.us-west.example.com` the command looks like `echo 2 > data/zookeeper/myid` and so on.
-
-Once you add each server to the `zookeeper.conf` configuration and each server has the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
-
-```shell
-
-$ bin/pulsar-daemon start zookeeper
-
-```
-
-### Deploy the configuration store
-
-The ZooKeeper cluster configured and started up in the section above is a local ZooKeeper cluster that you can use to manage a single Pulsar cluster. In addition to a local cluster, however, a full Pulsar instance also requires a configuration store for handling some instance-level configuration and coordination tasks.
-
-If you deploy a single-cluster instance, you do not need a separate cluster for the configuration store. If, however, you deploy a multi-cluster instance, you should stand up a separate ZooKeeper cluster for configuration tasks.
-
-#### Single-cluster Pulsar instance
-
-If your Pulsar instance consists of just one cluster, then you can deploy a configuration store on the same machines as the local ZooKeeper quorum but run on different TCP ports.
-
-To deploy a ZooKeeper configuration store in a single-cluster instance, add the same ZooKeeper servers as the local quorum uses. You need to use the configuration file in [`conf/global_zookeeper.conf`](reference-configuration.md#configuration-store) using the same method as for [local ZooKeeper](#local-zookeeper), but make sure to use a different port (2181 is the default for ZooKeeper). The following is an example that uses port 2184 for a three-node ZooKeeper cluster:
-
-```properties
-
-clientPort=2184
-server.1=zk1.us-west.example.com:2185:2186
-server.2=zk2.us-west.example.com:2185:2186
-server.3=zk3.us-west.example.com:2185:2186
-
-```
-
-As before, create the `myid` files for each server in `data/global-zookeeper/myid`.
-
-#### Multi-cluster Pulsar instance
-
-When you deploy a global Pulsar instance, with clusters distributed across different geographical regions, the configuration store serves as a highly available and strongly consistent metadata store that can tolerate failures and partitions spanning whole regions.
-
-The key here is to make sure the ZK quorum members are spread across at least 3 regions, and other regions run as observers.
-
-Again, given the very low expected load on the configuration store servers, you can share the same hosts used for the local ZooKeeper quorum.
-
-For example, assume a Pulsar instance with the following clusters: `us-west`, `us-east`, `us-central`, `eu-central`, `ap-south`.
Also assume that each cluster has its own local ZK servers named as follows:
-
-```
-
-zk[1-3].${CLUSTER}.example.com
-
-```
-
-In this scenario, you want to pick the quorum participants from a few clusters and let all the others be ZK observers. For example, to form a 7-server quorum, you can pick 3 servers from `us-west`, 2 from `us-central`, and 2 from `us-east`.
-
-This method guarantees that writes to the configuration store are possible even if one of these regions is unreachable.
-
-The ZK configuration in all the servers looks like:
-
-```properties
-
-clientPort=2184
-server.1=zk1.us-west.example.com:2185:2186
-server.2=zk2.us-west.example.com:2185:2186
-server.3=zk3.us-west.example.com:2185:2186
-server.4=zk1.us-central.example.com:2185:2186
-server.5=zk2.us-central.example.com:2185:2186
-server.6=zk3.us-central.example.com:2185:2186:observer
-server.7=zk1.us-east.example.com:2185:2186
-server.8=zk2.us-east.example.com:2185:2186
-server.9=zk3.us-east.example.com:2185:2186:observer
-server.10=zk1.eu-central.example.com:2185:2186:observer
-server.11=zk2.eu-central.example.com:2185:2186:observer
-server.12=zk3.eu-central.example.com:2185:2186:observer
-server.13=zk1.ap-south.example.com:2185:2186:observer
-server.14=zk2.ap-south.example.com:2185:2186:observer
-server.15=zk3.ap-south.example.com:2185:2186:observer
-
-```
-
-Additionally, ZK observers need to have the following parameters:
-
-```properties
-
-peerType=observer
-
-```
-
-##### Start the service
-
-Once your configuration store configuration is in place, you can start up the service using [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon):
-
-```shell
-
-$ bin/pulsar-daemon start configuration-store
-
-```
-
-## Cluster metadata initialization
-
-Once you set up the cluster-specific ZooKeeper and configuration store quorums for your instance, you need to write some metadata to ZooKeeper for each cluster in your instance. **You only need to write this metadata once.**
-
-You can initialize this metadata using the [`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata) command of the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool. The following is an example:
-
-```shell
-
-$ bin/pulsar initialize-cluster-metadata \
-  --cluster us-west \
-  --metadata-store zk:zk1.us-west.example.com:2181,zk2.us-west.example.com:2181/my-chroot-path \
-  --configuration-metadata-store zk:zk1.us-west.example.com:2181,zk2.us-west.example.com:2181/my-chroot-path \
-  --web-service-url http://pulsar.us-west.example.com:8080/ \
-  --web-service-url-tls https://pulsar.us-west.example.com:8443/ \
-  --broker-service-url pulsar://pulsar.us-west.example.com:6650/ \
-  --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651/
-
-```
-
-As you can see from the example above, you need to specify the following:
-
-* The name of the cluster
-* The local metadata store connection string for the cluster
-* The configuration store connection string for the entire instance
-* The web service URL for the cluster
-* A broker service URL enabling interaction with the [brokers](reference-terminology.md#broker) in the cluster
-
-If you use [TLS](security-tls-transport.md), you also need to specify a TLS web service URL for the cluster as well as a TLS broker service URL for the brokers in the cluster.
-
-Make sure to run `initialize-cluster-metadata` for each cluster in your instance.
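-
-If you manage several clusters, you can script this per-cluster step with a simple loop. The following is a minimal sketch only: the three cluster names and the hostname patterns are the hypothetical examples used above, so substitute your own connection strings and service URLs.
-
-```shell
-
-# Sketch: run the one-time metadata initialization once per cluster.
-# Cluster names and hostnames follow the examples in this guide; the
-# configuration store (port 2184 here) is shared by the whole instance.
-for CLUSTER in us-west us-central us-east; do
-  bin/pulsar initialize-cluster-metadata \
-    --cluster "${CLUSTER}" \
-    --metadata-store "zk:zk1.${CLUSTER}.example.com:2181,zk2.${CLUSTER}.example.com:2181" \
-    --configuration-metadata-store "zk:zk1.us-west.example.com:2184,zk2.us-west.example.com:2184" \
-    --web-service-url "http://pulsar.${CLUSTER}.example.com:8080/" \
-    --web-service-url-tls "https://pulsar.${CLUSTER}.example.com:8443/" \
-    --broker-service-url "pulsar://pulsar.${CLUSTER}.example.com:6650/" \
-    --broker-service-url-tls "pulsar+ssl://pulsar.${CLUSTER}.example.com:6651/"
-done
-
-```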
-
-## Deploy BookKeeper
-
-BookKeeper provides [persistent message storage](concepts-architecture-overview.md#persistent-storage) for Pulsar.
-
-Each Pulsar broker needs its own cluster of bookies. The BookKeeper cluster shares a local ZooKeeper quorum with the Pulsar cluster.
-
-### Configure bookies
-
-You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. The most important aspect of configuring each bookie is ensuring that the [`zkServers`](reference-configuration.md#bookkeeper-zkServers) parameter is set to the connection string for the local ZooKeeper of the Pulsar cluster.
-
-### Start bookies
-
-You can start a bookie in two ways: in the foreground or as a background daemon.
-
-To start a bookie in the background, use the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
-
-```bash
-
-$ bin/pulsar-daemon start bookie
-
-```
-
-You can verify that the bookie works properly using the `bookiesanity` command for the [BookKeeper shell](reference-cli-tools.md#bookkeeper-shell):
-
-```bash
-
-$ bin/bookkeeper shell bookiesanity
-
-```
-
-This command creates a new ledger on the local bookie, writes a few entries, reads them back, and finally deletes the ledger.
-
-After you have started all bookies, you can use the `simpletest` command for the [BookKeeper shell](reference-cli-tools.md#shell) on any bookie node to verify that all bookies in the cluster are running.
-
-```bash
-
-$ bin/bookkeeper shell simpletest --ensemble <num-bookies> --writeQuorum <num-bookies> --ackQuorum <num-bookies> --numEntries <num-entries>
-
-```
-
-Bookie hosts are responsible for storing message data on disk. For bookies to provide optimal performance, a suitable hardware configuration is essential. The following are key dimensions for bookie hardware capacity.
-
-* Disk I/O capacity read/write
-* Storage capacity
-
-Message entries written to bookies are always synced to disk before returning an acknowledgement to the Pulsar broker. To ensure low write latency, BookKeeper is
-designed to use multiple devices:
-
-* A **journal** to ensure durability. For sequential writes, having fast [fsync](https://linux.die.net/man/2/fsync) operations on bookie hosts is critical. Typically, small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) should suffice, or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache. Both solutions can reach fsync latency of ~0.4 ms.
-* A **ledger storage device** is where data is stored until all consumers acknowledge the message. Writes happen in the background, so write I/O is not a big concern. Reads happen sequentially most of the time, and the backlog is read back only when a consumer drains it. To store large amounts of data, a typical configuration involves multiple HDDs with a RAID controller.
-
-
-
-## Deploy brokers
-
-Once you set up ZooKeeper, initialize cluster metadata, and spin up BookKeeper bookies, you can deploy brokers.
-
-### Broker configuration
-
-You can configure brokers using the [`conf/broker.conf`](reference-configuration.md#broker) configuration file.
-
-The most important element of broker configuration is ensuring that each broker is aware of its local ZooKeeper quorum as well as the configuration store quorum.
Make sure that you set the [`metadataStoreUrl`](reference-configuration.md#broker) parameter to reflect the local quorum and the [`configurationMetadataStoreUrl`](reference-configuration.md#broker) parameter to reflect the configuration store quorum (although you need to specify only those ZooKeeper servers located in the same cluster).
-
-You also need to specify the name of the [cluster](reference-terminology.md#cluster) to which the broker belongs using the [`clusterName`](reference-configuration.md#broker-clusterName) parameter. In addition, you need to match the broker and web service ports to those provided when you initialized the metadata of the cluster (especially when you use non-default ports).
-
-The following is an example configuration:
-
-```properties
-
-# Local ZooKeeper servers
-metadataStoreUrl=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
-
-# Configuration store quorum connection string.
-configurationMetadataStoreUrl=zk1.us-west.example.com:2184,zk2.us-west.example.com:2184,zk3.us-west.example.com:2184
-
-clusterName=us-west
-
-# Broker data port
-brokerServicePort=6650
-
-# Broker data port for TLS
-brokerServicePortTls=6651
-
-# Port to use to serve HTTP requests
-webServicePort=8080
-
-# Port to use to serve HTTPS requests
-webServicePortTls=8443
-
-```
-
-### Broker hardware
-
-Pulsar brokers do not require any special hardware since they do not use the local disk. Fast CPUs and a 10Gbps [NIC](https://en.wikipedia.org/wiki/Network_interface_controller) are recommended so that the software can take full advantage of them.
-
-### Start the broker service
-
-You can start a broker in the background by using [nohup](https://en.wikipedia.org/wiki/Nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
-
-```shell
-
-$ bin/pulsar-daemon start broker
-
-```
-
-You can also start brokers in the foreground by using [`pulsar broker`](reference-cli-tools.md#broker):
-
-```shell
-
-$ bin/pulsar broker
-
-```
-
-## Service discovery
-
-[Clients](getting-started-clients.md) connecting to Pulsar brokers need to communicate with an entire Pulsar instance using a single URL.
-
-You can use your own service discovery system. If you do, you need to satisfy just one requirement: when a client performs an HTTP request to an [endpoint](reference-configuration.md) for a Pulsar cluster, such as `http://pulsar.us-west.example.com:8080`, the client needs to be redirected to an active broker in the desired cluster, whether via DNS, an HTTP or IP redirect, or some other means.
-
-> **Service discovery already provided by many scheduling systems**
-> Many large-scale deployment systems, such as [Kubernetes](deploy-kubernetes.md), have service discovery systems built in. If you run Pulsar on such a system, you may not need to provide your own service discovery mechanism.
-
-## Admin client and verification
-
-At this point your Pulsar instance should be ready to use. You can now configure client machines that can serve as [administrative clients](admin-api-overview.md) for each cluster. You can use the [`conf/client.conf`](reference-configuration.md#client) configuration file to configure admin clients.
-
-The most important thing is that you point the [`serviceUrl`](reference-configuration.md#client-serviceUrl) parameter to the correct service URL for the cluster:
-
-```properties
-
-serviceUrl=http://pulsar.us-west.example.com:8080/
-
-```
-
-## Provision new tenants
-
-Pulsar is built as a fundamentally multi-tenant system.
-
-
-To onboard a new tenant into the system, you need to create it first. You can create a new tenant by using the [`pulsar-admin`](reference-pulsar-admin.md#tenants) CLI tool:
-
-```shell
-
-$ bin/pulsar-admin tenants create test-tenant \
-  --allowed-clusters us-west \
-  --admin-roles test-admin-role
-
-```
-
-In this command, users who identify with the `test-admin-role` role can administer the configuration for the `test-tenant` tenant. The `test-tenant` tenant can only use the `us-west` cluster. From now on, this tenant can manage its resources.
-
-Once you create a tenant, you need to create [namespaces](reference-terminology.md#namespace) for topics within that tenant.
-
-
-The first step is to create a namespace. A namespace is an administrative unit that can contain many topics. A common practice is to create a namespace for each different use case from a single tenant.
-
-```shell
-
-$ bin/pulsar-admin namespaces create test-tenant/ns1
-
-```
-
-##### Test producer and consumer
-
-
-Everything is now ready to send and receive messages. The quickest way to test the system is through the [`pulsar-perf`](reference-cli-tools.md#pulsar-perf) client tool.
-
-
-You can use a topic in the namespace that you have just created. Topics are automatically created the first time a producer or a consumer tries to use them.
-
-The topic name in this case could be:
-
-```http
-
-persistent://test-tenant/ns1/my-topic
-
-```
-
-Start a consumer that creates a subscription on the topic and waits for messages:
-
-```shell
-
-$ bin/pulsar-perf consume persistent://test-tenant/ns1/my-topic
-
-```
-
-Start a producer that publishes messages at a fixed rate and reports stats every 10 seconds:
-
-```shell
-
-$ bin/pulsar-perf produce persistent://test-tenant/ns1/my-topic
-
-```
-
-To report the topic stats:
-
-```shell
-
-$ bin/pulsar-admin topics stats persistent://test-tenant/ns1/my-topic
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/deploy-bare-metal.md b/site2/website/versioned_docs/version-2.10.0-deprecated/deploy-bare-metal.md
deleted file mode 100644
index 25dc04458613b5..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/deploy-bare-metal.md
+++ /dev/null
@@ -1,568 +0,0 @@
----
-id: deploy-bare-metal
-title: Deploy a cluster on bare metal
-sidebar_label: "Bare metal"
-original_id: deploy-bare-metal
----
-
-:::tip
-
-1. You can use a single-cluster Pulsar installation in most use cases, such as experimenting with Pulsar or using Pulsar in a startup or in a single team. If you need to run a multi-cluster Pulsar instance, see the [guide](deploy-bare-metal-multi-cluster.md).
-2. If you want to use all built-in [Pulsar IO](io-overview.md) connectors, you need to download the `apache-pulsar-io-connectors` package and install `apache-pulsar-io-connectors` under the `connectors` directory in the Pulsar directory on every broker node, or on every function-worker node if you run a separate cluster of function workers for [Pulsar Functions](functions-overview.md).
-3. If you want to use the [Tiered Storage](concepts-tiered-storage.md) feature in your Pulsar deployment, you need to download the `apache-pulsar-offloaders` package and install `apache-pulsar-offloaders` under the `offloaders` directory in the Pulsar directory on every broker node. For details on how to configure this feature, refer to the [Tiered storage cookbook](cookbooks-tiered-storage.md).
-
-:::
-
-Deploying a Pulsar cluster consists of the following steps:
-
-1. Deploy a [ZooKeeper](#deploy-a-zookeeper-cluster) cluster (optional)
-2. Initialize [cluster metadata](#initialize-cluster-metadata)
-3. Deploy a [BookKeeper](#deploy-a-bookkeeper-cluster) cluster
-4. Deploy one or more Pulsar [brokers](#deploy-pulsar-brokers)
-
-## Preparation
-
-### Requirements
-
-Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install a 64-bit JRE/JDK 8 or later version; JRE/JDK 11 is recommended.
-
-:::tip
-
-You can reuse existing ZooKeeper clusters.
-
-:::
-
-To run Pulsar on bare metal, the following configuration is recommended:
-
-* At least 6 Linux machines or VMs
-  * 3 for running [ZooKeeper](https://zookeeper.apache.org)
-  * 3 for running a Pulsar broker, and a [BookKeeper](https://bookkeeper.apache.org) bookie
-* A single [DNS](https://en.wikipedia.org/wiki/Domain_Name_System) name covering all of the Pulsar broker hosts
-
-:::note
-
-* Broker is only supported on 64-bit JVM.
-* If you do not have enough machines, or you want to test Pulsar in cluster mode (and expand the cluster later), you can deploy Pulsar fully on a single node on which ZooKeeper, a bookie, and a broker all run.
-* If you do not have a DNS server, you can use the multi-host format in the service URL instead.
-
-:::
-
-Each machine in your cluster needs to have [Java 8](https://adoptium.net/?variant=openjdk8) or [Java 11](https://adoptium.net/?variant=openjdk11) installed.
-
-The following is a diagram showing the basic setup:
-
-![alt-text](/assets/pulsar-basic-setup.png)
-
-In this diagram, connecting clients need to communicate with the Pulsar cluster using a single URL. In this case, `pulsar-cluster.acme.com` abstracts over all of the message-handling brokers. Pulsar message brokers run on machines alongside BookKeeper bookies; brokers and bookies, in turn, rely on ZooKeeper.
-
-### Hardware considerations
-
-When you deploy a Pulsar cluster, keep in mind the following basic guidelines when you do the capacity planning.
-
-#### ZooKeeper
-
-For machines running ZooKeeper, it is recommended to use less powerful machines or VMs. Pulsar uses ZooKeeper only for periodic coordination-related and configuration-related tasks, not for basic operations. If you run Pulsar on [Amazon Web Services](https://aws.amazon.com/) (AWS), for example, a [t2.small](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html) instance would likely suffice.
-
-#### Bookies and Brokers
-
-For machines running a bookie and a Pulsar broker, more powerful machines are required. For an AWS deployment, for example, [i3.4xlarge](https://aws.amazon.com/blogs/aws/now-available-i3-instances-for-demanding-io-intensive-applications/) instances may be appropriate.
On those machines you can use the following:
-
-* Fast CPUs and 10Gbps [NIC](https://en.wikipedia.org/wiki/Network_interface_controller) (for Pulsar brokers)
-* Small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache (for BookKeeper bookies)
-
-#### Hardware recommendations
-
-Below are the minimum and the recommended hardware settings for starting a Pulsar instance.
-
-A cluster consists of 3 broker nodes, 3 bookie nodes, and 3 ZooKeeper nodes. The following recommendations are per node.
-
-- The minimum hardware settings (**250 Pulsar topics**)
-
-  Component | CPU | Memory | Storage | Throughput | Rate
-  ---|---|---|---|---|---
-  Broker | 0.2 | 256 MB | / | Write throughput: 3 MB/s<br />Read throughput: 6 MB/s | Write rate: 350 entries/s<br />Read rate: 650 entries/s
-  Bookie | 0.2 | 256 MB | Journal: 8 GB, PD-SSD<br />Ledger: 16 GB, PD-STANDARD | Write throughput: 2 MB/s<br />Read throughput: 2 MB/s | Write rate: 200 entries/s<br />Read rate: 200 entries/s
-  ZooKeeper | 0.05 | 256 MB | Log: 8 GB, PD-SSD<br />Data: 2 GB, PD-STANDARD | / | /
-
-- The recommended hardware settings (**1000 Pulsar topics**)
-
-  Component | CPU | Memory | Storage | Throughput | Rate
-  ---|---|---|---|---|---
-  Broker | 8 | 8 GB | / | Write throughput: 100 MB/s<br />Read throughput: 200 MB/s | Write rate: 10,000 entries/s<br />Read rate: 20,000 entries/s
-  Bookie | 4 | 8 GB | Journal: 256 GB, PD-SSD<br />Ledger: 2 TB, PD-STANDARD | Write throughput: 75 MB/s<br />Read throughput: 75 MB/s | Write rate: 7,500 entries/s<br />Read rate: 7,500 entries/s
-  ZooKeeper | 1 | 2 GB | Log: 64 GB, PD-SSD<br />Data: 256 GB, PD-STANDARD | / | /
-
-## Install the Pulsar binary package
-
-> You need to install the Pulsar binary package on each machine in the cluster, including machines running ZooKeeper and BookKeeper.
-
-To get started deploying a Pulsar cluster on bare metal, you need to download a binary tarball release in one of the following ways:
-
-* By clicking on the link below directly, which automatically triggers a download:
-  * Pulsar @pulsar:version@ binary release
-* From the Pulsar [downloads page](pulsar:download_page_url)
-* From the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) on GitHub
-* Using [wget](https://www.gnu.org/software/wget):
-
-```bash
-
-$ wget pulsar:binary_release_url
-
-```
-
-Once you download the tarball, untar it and `cd` into the resulting directory:
-
-```bash
-
-$ tar xvzf apache-pulsar-@pulsar:version@-bin.tar.gz
-$ cd apache-pulsar-@pulsar:version@
-
-```
-
-The extracted directory contains the following subdirectories:
-
-Directory | Contains
-:---------|:--------
-`bin` | [Command-line tools](reference-cli-tools.md) of Pulsar, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](/tools/pulsar-admin/)
-`conf` | Configuration files for Pulsar, including for [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more
-`data` | The data storage directory that ZooKeeper and BookKeeper use
-`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files that Pulsar uses
-`logs` | Logs that the installation creates
-
-## [Install Builtin Connectors (optional)](standalone.md#install-builtin-connectors-optional)
-
-> Since Pulsar release `2.1.0-incubating`, Pulsar provides a separate binary distribution, containing all the `builtin` connectors.
-> To enable the `builtin` connectors (optional), you can follow the instructions below.
-
-To use `builtin` connectors, you need to download the connectors tarball release on every broker node in one of the following ways:
-
-* by clicking the link below and downloading the release from an Apache mirror:
-
-  * Pulsar IO Connectors @pulsar:version@ release
-
-* from the Pulsar [downloads page](pulsar:download_page_url)
-* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
-* using [wget](https://www.gnu.org/software/wget):
-
-  ```shell
-
-  $ wget pulsar:connector_release_url/{connector}-@pulsar:version@.nar
-
-  ```
-
-Once you download the .nar file, copy the file to the `connectors` directory in the Pulsar directory.
-For example, if you download the connector file `pulsar-io-aerospike-@pulsar:version@.nar`:
-
-```bash
-
-$ mkdir connectors
-$ mv pulsar-io-aerospike-@pulsar:version@.nar connectors
-
-$ ls connectors
-pulsar-io-aerospike-@pulsar:version@.nar
-...
-
-```
-
-## [Install Tiered Storage Offloaders (optional)](standalone.md#install-tiered-storage-offloaders-optional)
-
-> Since Pulsar release `2.2.0`, Pulsar releases a separate binary distribution, containing the tiered storage offloaders.
-> If you want to enable the tiered storage feature, you can follow the instructions below; otherwise you can
-> skip this section for now.
-
-To use tiered storage offloaders, you need to download the offloaders tarball release on every broker node in one of the following ways:
-
-* by clicking the link below and downloading the release from an Apache mirror:
-
-  * Pulsar Tiered Storage Offloaders @pulsar:version@ release
-
-* from the Pulsar [downloads page](pulsar:download_page_url)
-* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
-* using [wget](https://www.gnu.org/software/wget):
-
-  ```shell
-
-  $ wget pulsar:offloader_release_url
-
-  ```
-
-Once you download the tarball, untar the offloaders package in the Pulsar directory and copy the offloaders into an `offloaders` directory there:
-
-```bash
-
-$ tar xvfz apache-pulsar-offloaders-@pulsar:version@-bin.tar.gz
-
-// you can find a directory named `apache-pulsar-offloaders-@pulsar:version@` in the pulsar directory
-// then copy the offloaders
-
-$ mv apache-pulsar-offloaders-@pulsar:version@/offloaders offloaders
-
-$ ls offloaders
-tiered-storage-jcloud-@pulsar:version@.nar
-
-```
-
-For more details on how to configure the tiered storage feature, refer to the [Tiered storage cookbook](cookbooks-tiered-storage.md).
-
-
-## Deploy a ZooKeeper cluster
-
-> If you already have an existing ZooKeeper cluster and want to use it, you can skip this section.
-
-[ZooKeeper](https://zookeeper.apache.org) manages a variety of essential coordination-related and configuration-related tasks for Pulsar. To deploy a Pulsar cluster, you need to deploy ZooKeeper first. A 3-node ZooKeeper cluster is the recommended configuration. Pulsar does not make heavy use of ZooKeeper, so lightweight machines or VMs should suffice for running ZooKeeper.
-
-To begin, add all ZooKeeper servers to the configuration specified in [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) (in the Pulsar directory that you created [above](#install-the-pulsar-binary-package)). The following is an example:
-
-```properties
-
-server.1=zk1.us-west.example.com:2888:3888
-server.2=zk2.us-west.example.com:2888:3888
-server.3=zk3.us-west.example.com:2888:3888
-
-```
-
-> If you only have one machine on which to deploy Pulsar, you only need to add one server entry in the configuration file.
-
-> If your machines are behind NAT, use 0.0.0.0 as the server entry for the local address. If a node behind NAT uses its external IP in its own configuration, the ZooKeeper service won't start because it tries to open a listener on an external IP that the Linux box doesn't own. Using 0.0.0.0 starts a listener on ALL IPs, so that NAT network traffic can reach it.
-
-Example of configuration on _server.3_:
-
-```properties
-
-server.1=zk1.us-west.example.com:2888:3888
-server.2=zk2.us-west.example.com:2888:3888
-server.3=0.0.0.0:2888:3888
-
-```
-
-On each host, you need to specify the ID of the node in the `myid` file, which is in the `data/zookeeper` folder of each server by default (you can change the file location via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter).
-
-> See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed information on `myid` and more.
-
-For example, on a ZooKeeper server like `zk1.us-west.example.com`, you can set the `myid` value as follows:
-
-```bash
-
-$ mkdir -p data/zookeeper
-$ echo 1 > data/zookeeper/myid
-
-```
-
-On `zk2.us-west.example.com`, the command is `echo 2 > data/zookeeper/myid` and so on.
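-
-Rather than logging in to each host by hand, you can script the `myid` assignment. This is a minimal sketch only: it assumes passwordless SSH to the three example hosts and that Pulsar is unpacked at the same (hypothetical) path on each of them, so the numbering matches the `server.N` lines in `conf/zookeeper.conf`.
-
-```bash
-
-# Sketch: write a unique myid on each ZooKeeper host, matching server.N above.
-PULSAR_DIR="/opt/apache-pulsar-@pulsar:version@"   # hypothetical install path
-N=1
-for HOST in zk1.us-west.example.com zk2.us-west.example.com zk3.us-west.example.com; do
-  ssh "${HOST}" "cd ${PULSAR_DIR} && mkdir -p data/zookeeper && echo ${N} > data/zookeeper/myid"
-  N=$((N + 1))
-done
-
-```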
-
-Once you add each server to the `zookeeper.conf` configuration and have the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
-
-```bash
-
-$ bin/pulsar-daemon start zookeeper
-
-```
-
-> If you plan to deploy ZooKeeper and a bookie on the same node, you need to start ZooKeeper with a different stats
-> port by configuring `metricsProvider.httpPort` in zookeeper.conf.
-
-## Initialize cluster metadata
-
-Once you deploy ZooKeeper for your cluster, you need to write some metadata to ZooKeeper. You only need to write this data **once**.
-
-You can initialize this metadata using the [`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata) command of the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool. This command can be run on any machine in your Pulsar cluster, so the metadata can be initialized from a ZooKeeper, broker, or bookie machine. The following is an example:
-
-```shell
-
-$ bin/pulsar initialize-cluster-metadata \
-  --cluster pulsar-cluster-1 \
-  --metadata-store zk:zk1.us-west.example.com:2181,zk2.us-west.example.com:2181/my-chroot-path \
-  --configuration-metadata-store zk:zk1.us-west.example.com:2181,zk2.us-west.example.com:2181/my-chroot-path \
-  --web-service-url http://pulsar.us-west.example.com:8080 \
-  --web-service-url-tls https://pulsar.us-west.example.com:8443 \
-  --broker-service-url pulsar://pulsar.us-west.example.com:6650 \
-  --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651
-
-```
-
-As you can see from the example above, you will need to specify the following:
-
-Flag | Description
-:----|:-----------
-`--cluster` | A name for the cluster
-`--metadata-store` | A "local" metadata store connection string for the cluster. This connection string only needs to include *one* machine in the ZooKeeper cluster.
-`--configuration-metadata-store` | The configuration metadata store connection string for the entire instance. As with the `--metadata-store` flag, this connection string only needs to include *one* machine in the ZooKeeper cluster.
-`--web-service-url` | The web service URL for the cluster, plus a port. This URL should be a standard DNS name. The default port is 8080 (it is best not to use a different port).
-`--web-service-url-tls` | If you use [TLS](security-tls-transport.md), you also need to specify a TLS web service URL for the cluster. The default port is 8443 (it is best not to use a different port).
-`--broker-service-url` | A broker service URL enabling interaction with the brokers in the cluster. This URL should not use the same DNS name as the web service URL but should use the `pulsar` scheme instead. The default port is 6650 (it is best not to use a different port).
-`--broker-service-url-tls` | If you use [TLS](security-tls-transport.md), you also need to specify a TLS web service URL for the cluster as well as a TLS broker service URL for the brokers in the cluster. The default port is 6651 (it is best not to use a different port).
-
-
-> If you do not have a DNS server, you can use the multi-host format in the service URL with the following settings:
->
-> ```shell
->
-> --web-service-url http://host1:8080,host2:8080,host3:8080 \
-> --web-service-url-tls https://host1:8443,host2:8443,host3:8443 \
-> --broker-service-url pulsar://host1:6650,host2:6650,host3:6650 \
-> --broker-service-url-tls pulsar+ssl://host1:6651,host2:6651,host3:6651
->
->
-> ```
-
->
-> If you want to use an existing BookKeeper cluster, you can add the `--existing-bk-metadata-service-uri` flag as follows:
->
-> ```shell
->
-> --existing-bk-metadata-service-uri "zk+null://zk1:2181;zk2:2181/ledgers" \
-> --web-service-url http://host1:8080,host2:8080,host3:8080 \
-> --web-service-url-tls https://host1:8443,host2:8443,host3:8443 \
-> --broker-service-url pulsar://host1:6650,host2:6650,host3:6650 \
-> --broker-service-url-tls pulsar+ssl://host1:6651,host2:6651,host3:6651
->
->
-> ```
-
-> You can obtain the metadata service URI of the existing BookKeeper cluster by using the `bin/bookkeeper shell whatisinstanceid` command. You must enclose the value in double quotes since the multiple metadata service URIs are separated with semicolons.
-
-## Deploy a BookKeeper cluster
-
-[BookKeeper](https://bookkeeper.apache.org) handles all persistent data storage in Pulsar. You need to deploy a cluster of BookKeeper bookies to use Pulsar. You can choose to run a **3-bookie BookKeeper cluster**.
-
-You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. The most important step in configuring bookies for our purposes here is ensuring that [`zkServers`](reference-configuration.md#bookkeeper-zkServers) is set to the connection string for the ZooKeeper cluster. The following is an example:
-
-```properties
-
-zkServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
-
-```
-
-Once you appropriately modify the `zkServers` parameter, you can make any other configuration changes that you require. You can find a full listing of the available BookKeeper configuration parameters [here](reference-configuration.md#bookkeeper). However, consulting the [BookKeeper documentation](http://bookkeeper.apache.org/docs/latest/reference/config/) for a more in-depth guide might be a better choice.
-
-Once you apply the desired configuration in `conf/bookkeeper.conf`, you can start up a bookie on each of your BookKeeper hosts. You can start up each bookie either in the background, using [nohup](https://en.wikipedia.org/wiki/Nohup), or in the foreground.
-
-To start the bookie in the background, use the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
-
-```bash
-
-$ bin/pulsar-daemon start bookie
-
-```
-
-To start the bookie in the foreground:
-
-```bash
-
-$ bin/pulsar bookie
-
-```
-
-You can verify that a bookie works properly by running the `bookiesanity` command on the [BookKeeper shell](reference-cli-tools.md#shell):
-
-```bash
-
-$ bin/bookkeeper shell bookiesanity
-
-```
-
-This command creates an ephemeral BookKeeper ledger on the local bookie, writes a few entries, reads them back, and finally deletes the ledger.
-
-After you start all the bookies, you can use the `simpletest` command for the [BookKeeper shell](reference-cli-tools.md#shell) on any bookie node to verify that all the bookies in the cluster are up and running.
-
-```bash
-
-$ bin/bookkeeper shell simpletest --ensemble <num-bookies> --writeQuorum <num-bookies> --ackQuorum <num-bookies> --numEntries <num-entries>
-
-```
-
-This command creates a `num-bookies` sized ledger on the cluster, writes a few entries, and finally deletes the ledger.
-
-
-## Deploy Pulsar brokers
-
-Pulsar brokers are the last thing you need to deploy in your Pulsar cluster. Brokers handle Pulsar messages and provide the administrative interface of Pulsar. A good choice is to run **3 brokers**, one for each machine that already runs a BookKeeper bookie.
-
-### Configure Brokers
-
-The most important element of broker configuration is ensuring that each broker is aware of the ZooKeeper cluster that you have deployed. Ensure that the [`metadataStoreUrl`](reference-configuration.md#broker) and [`configurationMetadataStoreUrl`](reference-configuration.md#broker) parameters are correct. In this case, since you only have 1 cluster and no configuration store setup, the `configurationMetadataStoreUrl` points to the same value as `metadataStoreUrl`.
-
-```properties
-
-metadataStoreUrl=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
-configurationMetadataStoreUrl=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
-
-```
-
-You also need to specify the cluster name (matching the name that you provided when you [initialized the metadata of the cluster](#initialize-cluster-metadata)):
-
-```properties
-
-clusterName=pulsar-cluster-1
-
-```
-
-In addition, you need to match the broker and web service ports provided when you initialized the metadata of the cluster (especially when you use a different port than the default):
-
-```properties
-
-brokerServicePort=6650
-brokerServicePortTls=6651
-webServicePort=8080
-webServicePortTls=8443
-
-```
-
-> If you deploy Pulsar in a one-node cluster, you should update the replication settings in `conf/broker.conf` to `1`.
->
-> ```properties
->
-> # Number of bookies to use when creating a ledger
-> managedLedgerDefaultEnsembleSize=1
->
-> # Number of copies to store for each message
-> managedLedgerDefaultWriteQuorum=1
->
-> # Number of guaranteed copies (acks to wait before write is complete)
-> managedLedgerDefaultAckQuorum=1
->
->
-> ```
-
-
-### Enable Pulsar Functions (optional)
-
-If you want to enable [Pulsar Functions](functions-overview.md), you can follow the instructions below:
-
-1. Edit `conf/broker.conf` to enable the functions worker, by setting `functionsWorkerEnabled` to `true`.
-
-   ```conf
-
-   functionsWorkerEnabled=true
-
-   ```
-
-2. Edit `conf/functions_worker.yml` and set `pulsarFunctionsCluster` to the cluster name that you provided when you [initialized the metadata of the cluster](#initialize-cluster-metadata).
-
-   ```conf
-
-   pulsarFunctionsCluster: pulsar-cluster-1
-
-   ```
-
-If you want to learn more options about deploying the functions worker, check out [Deploy and manage functions worker](functions-worker.md).
-
-### Start Brokers
-
-You can then provide any other configuration changes that you want in the [`conf/broker.conf`](reference-configuration.md#broker) file. Once you decide on a configuration, you can start up the brokers for your Pulsar cluster. Like ZooKeeper and BookKeeper, you can start brokers either in the foreground or in the background, using nohup.
-
-You can start a broker in the foreground using the [`pulsar broker`](reference-cli-tools.md#pulsar-broker) command:
-
-```bash
-
-$ bin/pulsar broker
-
-```
-
-You can start a broker in the background using the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
-
-```bash
-
-$ bin/pulsar-daemon start broker
-
-```
-
-Once you successfully start up all the brokers that you intend to use, your Pulsar cluster should be ready to go!
-
-## Connect to the running cluster
-
-Once your Pulsar cluster is up and running, you should be able to connect with it using Pulsar clients. One such client is the [`pulsar-client`](reference-cli-tools.md#pulsar-client) tool, which is included with the Pulsar binary package. The `pulsar-client` tool can publish messages to and consume messages from Pulsar topics and thus provides a simple way to make sure that your cluster runs properly.
-
-To use the `pulsar-client` tool, first modify the client configuration file in [`conf/client.conf`](reference-configuration.md#client) in your binary package. You need to change the values for `webServiceUrl` and `brokerServiceUrl`, substituting `localhost` (which is the default) with the DNS name that you assign to your broker/bookie hosts. The following is an example:
-
-```properties
-
-webServiceUrl=http://us-west.example.com:8080
-brokerServiceUrl=pulsar://us-west.example.com:6650
-
-```
-
-> If you do not have a DNS server, you can specify multiple hosts in the service URL as follows:
->
-> ```properties
->
-> webServiceUrl=http://host1:8080,host2:8080,host3:8080
-> brokerServiceUrl=pulsar://host1:6650,host2:6650,host3:6650
->
->
-> ```
-
-
-Once that is complete, you can publish a message to the Pulsar topic:
-
-```bash
-
-$ bin/pulsar-client produce \
-  persistent://public/default/test \
-  -n 1 \
-  -m "Hello Pulsar"
-
-```
-
-> You may need to use a different cluster name in the topic if you specify a cluster name other than `pulsar-cluster-1`.
-
-This command publishes a single message to the Pulsar topic. In addition, you can subscribe to the Pulsar topic in a different terminal before publishing messages as below:
-
-```bash
-
-$ bin/pulsar-client consume \
-  persistent://public/default/test \
-  -n 100 \
-  -s "consumer-test" \
-  -t "Exclusive"
-
-```
-
-Once you successfully publish the above message to the topic, you should see it in the standard output:
-
-```bash
-
------ got message -----
-Hello Pulsar
-
-```
-
-## Run Functions
-
-> If you have [enabled](#enable-pulsar-functions-optional) Pulsar Functions, you can try out the Pulsar Functions now.
-
-Create an ExclamationFunction `exclamation`.
-
-```bash
-
-bin/pulsar-admin functions create \
-  --jar examples/api-examples.jar \
-  --classname org.apache.pulsar.functions.api.examples.ExclamationFunction \
-  --inputs persistent://public/default/exclamation-input \
-  --output persistent://public/default/exclamation-output \
-  --tenant public \
-  --namespace default \
-  --name exclamation
-
-```
-
-Check whether the function runs as expected by [triggering](functions-deploying.md#triggering-pulsar-functions) the function.
-
-```bash
-
-bin/pulsar-admin functions trigger --name exclamation --trigger-value "hello world"
-
-```
-
-You should see the following output:
-
-```shell
-
-hello world!
- -``` - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/deploy-dcos.md b/site2/website/versioned_docs/version-2.10.0-deprecated/deploy-dcos.md deleted file mode 100644 index 35a0a83d716ade..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/deploy-dcos.md +++ /dev/null @@ -1,200 +0,0 @@ ---- -id: deploy-dcos -title: Deploy Pulsar on DC/OS -sidebar_label: "DC/OS" -original_id: deploy-dcos ---- - -:::tip - -To enable all built-in [Pulsar IO](io-overview.md) connectors in your Pulsar deployment, we recommend you use `apachepulsar/pulsar-all` image instead of `apachepulsar/pulsar` image; the former has already bundled [all built-in connectors](io-overview.md#working-with-connectors). - -::: - -[DC/OS](https://dcos.io/) (the DataCenter Operating System) is a distributed operating system for deploying and managing applications and systems on [Apache Mesos](http://mesos.apache.org/). DC/OS is an open-source tool created and maintained by [Mesosphere](https://mesosphere.com/). - -Apache Pulsar is available as a [Marathon Application Group](https://mesosphere.github.io/marathon/docs/application-groups.html), which runs multiple applications as manageable sets. - -## Prerequisites - -You need to prepare your environment before running Pulsar on DC/OS. - -* DC/OS version [1.9](https://docs.mesosphere.com/1.9/) or higher -* A [DC/OS cluster](https://docs.mesosphere.com/1.9/installing/) with at least three agent nodes -* The [DC/OS CLI tool](https://docs.mesosphere.com/1.9/cli/install/) installed -* The [`PulsarGroups.json`](https://github.com/apache/pulsar/blob/master/deployment/dcos/PulsarGroups.json) configuration file from the Pulsar GitHub repo. - - ```bash - - $ curl -O https://raw.githubusercontent.com/apache/pulsar/master/deployment/dcos/PulsarGroups.json - - ``` - -Each node in the DC/OS-managed Mesos cluster must have at least: - -* 4 CPU -* 4 GB of memory -* 60 GB of total persistent disk - -Alternatively, you can change the configuration in `PulsarGroups.json` accordingly to match your resources of the DC/OS cluster. - -## Deploy Pulsar using the DC/OS command interface - -You can deploy Pulsar on DC/OS using this command: - -```bash - -$ dcos marathon group add PulsarGroups.json - -``` - -This command deploys Docker container instances in three groups, which together comprise a Pulsar cluster: - -* 3 bookies (1 [bookie](reference-terminology.md#bookie) on each agent node and 1 [bookie recovery](http://bookkeeper.apache.org/docs/latest/admin/autorecovery/) instance) -* 3 Pulsar [brokers](reference-terminology.md#broker) (1 broker on each node and 1 admin instance) -* 1 [Prometheus](http://prometheus.io/) instance and 1 [Grafana](https://grafana.com/) instance - - -> When you run DC/OS, a ZooKeeper cluster will be running at `master.mesos:2181`, thus you do not have to install or start up ZooKeeper separately. - -After executing the `dcos` command above, click the **Services** tab in the DC/OS [GUI interface](https://docs.mesosphere.com/latest/gui/), which you can access at [http://m1.dcos](http://m1.dcos) in this example. You should see several applications during the deployment. - -![DC/OS command executed](/assets/dcos_command_execute.png) - -![DC/OS command executed2](/assets/dcos_command_execute2.png) - -## The BookKeeper group - -To monitor the status of the BookKeeper cluster deployment, click the **bookkeeper** group in the parent **pulsar** group. 
-
-![DC/OS bookkeeper status](/assets/dcos_bookkeeper_status.png)
-
-At this point, the statuses of the 3 [bookies](reference-terminology.md#bookie) are green, which means that the bookies have been deployed successfully and are running.
-
-![DC/OS bookkeeper running](/assets/dcos_bookkeeper_run.png)
-
-You can also click each bookie instance to get more detailed information, such as the bookie running log.
-
-![DC/OS bookie log](/assets/dcos_bookie_log.png)
-
-To display information about the BookKeeper in ZooKeeper, you can visit [http://m1.dcos/exhibitor](http://m1.dcos/exhibitor). In this example, 3 bookies are under the `available` directory.
-
-![DC/OS bookkeeper in zk](/assets/dcos_bookkeeper_in_zookeeper.png)
-
-## The Pulsar broker group
-
-Similar to the BookKeeper group above, click **brokers** to check the status of the Pulsar brokers.
-
-![DC/OS broker status](/assets/dcos_broker_status.png)
-
-![DC/OS broker running](/assets/dcos_broker_run.png)
-
-You can also click each broker instance to get more detailed information, such as the broker running log.
-
-![DC/OS broker log](/assets/dcos_broker_log.png)
-
-Broker cluster information in ZooKeeper is also available through the web UI. In this example, you can see that the `loadbalance` and `managed-ledgers` directories have been created.
-
-![DC/OS broker in zk](/assets/dcos_broker_in_zookeeper.png)
-
-## Monitor group
-
-The **monitor** group consists of Prometheus and Grafana.
-
-![DC/OS monitor status](/assets/dcos_monitor_status.png)
-
-### Prometheus
-
-Click the instance of `prom` to get the endpoint of Prometheus, which is `192.168.65.121:9090` in this example.
-
-![DC/OS prom endpoint](/assets/dcos_prom_endpoint.png)
-
-If you click that endpoint, you can see the Prometheus dashboard. All the bookies and brokers are listed on [http://192.168.65.121:9090/targets](http://192.168.65.121:9090/targets).
-
-![DC/OS prom targets](/assets/dcos_prom_targets.png)
-
-### Grafana
-
-Click `grafana` to get the endpoint for Grafana, which is `192.168.65.121:3000` in this example.
-
-![DC/OS grafana endpoint](/assets/dcos_grafana_endpoint.png)
-
-If you click that endpoint, you can access the Grafana dashboard.
-
-![DC/OS grafana targets](/assets/dcos_grafana_dashboard.png)
-
-## Run a simple Pulsar consumer and producer on DC/OS
-
-Now that you have a fully deployed Pulsar cluster, you can run a simple consumer and producer to show Pulsar on DC/OS in action.
-
-### Download and prepare the Pulsar Java tutorial
-
-You can clone a [Pulsar Java tutorial](https://github.com/streamlio/pulsar-java-tutorial) repo. This repo contains a simple Pulsar consumer and producer (you can find more information in the `README` file in this repo).
-
-```bash
-
-$ git clone https://github.com/streamlio/pulsar-java-tutorial
-
-```
-
-Change the `SERVICE_URL` from `pulsar://localhost:6650` to `pulsar://a1.dcos:6650` in both the [`ConsumerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ConsumerTutorial.java) file and the [`ProducerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ProducerTutorial.java) file.
-
-The `pulsar://a1.dcos:6650` endpoint is for the broker service. You can fetch the endpoint details for each broker instance from the DC/OS GUI. `a1.dcos` is a DC/OS client agent that runs a broker, and you can replace it with the client agent IP address.
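-
-If you prefer to make this change from the shell, the following is one way to do it: a sketch only, which assumes GNU sed and that you run it from the cloned `pulsar-java-tutorial` directory, and that the service URL appears in the files as the literal string shown above.
-
-```bash
-
-# Sketch: point both tutorial programs at the DC/OS broker endpoint.
-sed -i 's|pulsar://localhost:6650|pulsar://a1.dcos:6650|g' \
-  src/main/java/tutorial/ConsumerTutorial.java \
-  src/main/java/tutorial/ProducerTutorial.java
-
-```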
- -Now, you can change the message number from 10 to 10000000 in the main method in [`ProducerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ProducerTutorial.java) file to produce more messages. - -Then, you can compile the project code using the command below: - -```bash - -$ mvn clean package - -``` - -### Run the consumer and producer - -Execute this command to run the consumer: - -```bash - -$ mvn exec:java -Dexec.mainClass="tutorial.ConsumerTutorial" - -``` - -Execute this command to run the producer: - -```bash - -$ mvn exec:java -Dexec.mainClass="tutorial.ProducerTutorial" - -``` - -You see that the producer is producing messages and the consumer is consuming messages through the DC/OS GUI. - -![DC/OS pulsar producer](/assets/dcos_producer.png) - -![DC/OS pulsar consumer](/assets/dcos_consumer.png) - -### View Grafana metric output - -While the producer and consumer are running, you can access the running metrics from Grafana. - -![DC/OS pulsar dashboard](/assets/dcos_metrics.png) - - -## Uninstall Pulsar - -You can shut down and uninstall the `pulsar` application from DC/OS at any time in one of the following two ways: - -1. Click the three dots at the right end of Pulsar group and choose **Delete** on the DC/OS GUI. - - ![DC/OS pulsar uninstall](/assets/dcos_uninstall.png) - -2. Use the command below. - - ```bash - - $ dcos marathon group remove /pulsar - - ``` - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/deploy-docker.md b/site2/website/versioned_docs/version-2.10.0-deprecated/deploy-docker.md deleted file mode 100644 index 8348d78deb2378..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/deploy-docker.md +++ /dev/null @@ -1,60 +0,0 @@ ---- -id: deploy-docker -title: Deploy a cluster on Docker -sidebar_label: "Docker" -original_id: deploy-docker ---- - -To deploy a Pulsar cluster on Docker, complete the following steps: -1. Deploy a ZooKeeper cluster (optional) -2. Initialize cluster metadata -3. Deploy a BookKeeper cluster -4. Deploy one or more Pulsar brokers - -## Prepare - -To run Pulsar on Docker, you need to create a container for each Pulsar component: ZooKeeper, BookKeeper and broker. You can pull the images of ZooKeeper and BookKeeper separately on [Docker Hub](https://hub.docker.com/), and pull a [Pulsar image](https://hub.docker.com/r/apachepulsar/pulsar-all/tags) for the broker. You can also pull only one [Pulsar image](https://hub.docker.com/r/apachepulsar/pulsar-all/tags) and create three containers with this image. This tutorial takes the second option as an example. - -### Pull a Pulsar image -You can pull a Pulsar image from [Docker Hub](https://hub.docker.com/r/apachepulsar/pulsar-all/tags) with the following command. - -``` - -docker pull apachepulsar/pulsar-all:latest - -``` - -### Create three containers -Create containers for ZooKeeper, BookKeeper and broker. In this example, they are named as `zookeeper`, `bookkeeper` and `broker` respectively. You can name them as you want with the `--name` flag. By default, the container names are created randomly. - -``` - -docker run -it --name bookkeeper apachepulsar/pulsar-all:latest /bin/bash -docker run -it --name zookeeper apachepulsar/pulsar-all:latest /bin/bash -docker run -it --name broker apachepulsar/pulsar-all:latest /bin/bash - -``` - -### Create a network -To deploy a Pulsar cluster on Docker, you need to create a `network` and connect the containers of ZooKeeper, BookKeeper and broker to this network. 
The following command creates the network `pulsar`: - -``` - -docker network create pulsar - -``` - -### Connect containers to network -Connect the containers of ZooKeeper, BookKeeper and broker to the `pulsar` network with the following commands. - -``` - -docker network connect pulsar zookeeper -docker network connect pulsar bookkeeper -docker network connect pulsar broker - -``` - -To check whether the containers are successfully connected to the network, enter the `docker network inspect pulsar` command. - -For detailed information about how to deploy ZooKeeper cluster, BookKeeper cluster, brokers, see [deploy a cluster on bare metal](deploy-bare-metal.md). diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/deploy-kubernetes.md b/site2/website/versioned_docs/version-2.10.0-deprecated/deploy-kubernetes.md deleted file mode 100644 index 1aefc6ad79f716..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/deploy-kubernetes.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -id: deploy-kubernetes -title: Deploy Pulsar on Kubernetes -sidebar_label: "Kubernetes" -original_id: deploy-kubernetes ---- - -To get up and running with these charts as fast as possible, in a **non-production** use case, we provide -a [quick start guide](getting-started-helm.md) for Proof of Concept (PoC) deployments. - -To configure and install a Pulsar cluster on Kubernetes for production usage, follow the complete [Installation Guide](helm-install.md). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/deploy-monitoring.md b/site2/website/versioned_docs/version-2.10.0-deprecated/deploy-monitoring.md deleted file mode 100644 index 69d994b7d586f3..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/deploy-monitoring.md +++ /dev/null @@ -1,138 +0,0 @@ ---- -id: deploy-monitoring -title: Monitor -sidebar_label: "Monitor" -original_id: deploy-monitoring ---- - -You can use different ways to monitor a Pulsar cluster, exposing both metrics related to the usage of topics and the overall health of the individual components of the cluster. - -## Collect metrics - -You can collect broker stats, ZooKeeper stats, and BookKeeper stats. - -### Broker stats - -You can collect Pulsar broker metrics from brokers and export the metrics in JSON format. The Pulsar broker metrics mainly have two types: - -* *Destination dumps*, which contain stats for each individual topic. You can fetch the destination dumps using the command below: - - ```shell - - bin/pulsar-admin broker-stats destinations - - ``` - -* Broker metrics, which contain the broker information and topics stats aggregated at namespace level. You can fetch the broker metrics by using the following command: - - ```shell - - bin/pulsar-admin broker-stats monitoring-metrics - - ``` - -All the message rates are updated every minute. - -The aggregated broker metrics are also exposed in the [Prometheus](https://prometheus.io) format at: - -```shell - -http://$BROKER_ADDRESS:8080/metrics/ - -``` - -### ZooKeeper stats - -The local ZooKeeper, configuration store server and clients that are shipped with Pulsar can expose detailed stats through Prometheus. - -```shell - -http://$LOCAL_ZK_SERVER:8000/metrics -http://$GLOBAL_ZK_SERVER:8001/metrics - -``` - -The default port of local ZooKeeper is `8000` and the default port of the configuration store is `8001`. You can use a different stats port by configuring `metricsProvider.httpPort` in the `conf/zookeeper.conf` file. 
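-
-A quick way to spot-check that these endpoints respond is to fetch them with [curl](https://curl.se/). This is a sketch only; the `$BROKER_ADDRESS`, `$LOCAL_ZK_SERVER`, and `$GLOBAL_ZK_SERVER` placeholders stand for your own hostnames, and the ports are the defaults described above.
-
-```shell
-
-# Sketch: print the first few Prometheus-format metrics from each component.
-curl -s "http://$BROKER_ADDRESS:8080/metrics/" | head
-curl -s "http://$LOCAL_ZK_SERVER:8000/metrics" | head
-curl -s "http://$GLOBAL_ZK_SERVER:8001/metrics" | head
-
-```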
-
-### BookKeeper stats
-
-You can configure the stats frameworks for BookKeeper by modifying the `statsProviderClass` in the `conf/bookkeeper.conf` file.
-
-The default BookKeeper configuration enables the Prometheus exporter. The configuration is included with the Pulsar distribution.
-
-```shell
-
-http://$BOOKIE_ADDRESS:8000/metrics
-
-```
-
-The default port for a bookie is `8000`. You can change the port by configuring `prometheusStatsHttpPort` in the `conf/bookkeeper.conf` file.
-
-### Managed cursor acknowledgment state
-
-The acknowledgment state is persisted to the ledger first. When the acknowledgment state fails to be persisted to the ledger, it is persisted to ZooKeeper. To track the stats of acknowledgment, you can configure the metrics for the managed cursor.
-
-```
-
-brk_ml_cursor_persistLedgerSucceed(namespace="", ledger_name="", cursor_name:"")
-brk_ml_cursor_persistLedgerErrors(namespace="", ledger_name="", cursor_name:"")
-brk_ml_cursor_persistZookeeperSucceed(namespace="", ledger_name="", cursor_name:"")
-brk_ml_cursor_persistZookeeperErrors(namespace="", ledger_name="", cursor_name:"")
-brk_ml_cursor_nonContiguousDeletedMessagesRange(namespace="", ledger_name="", cursor_name:"")
-
-```
-
-Those metrics are added to the Prometheus interface, so you can monitor and check the metric stats in Grafana.
-
-### Function and connector stats
-
-You can collect functions worker stats from `functions-worker` and export the metrics in JSON format, which contains functions worker JVM metrics.
-
-```
-
-pulsar-admin functions-worker monitoring-metrics
-
-```
-
-You can collect functions and connectors metrics from `functions-worker` and export the metrics in JSON format.
-
-```
-
-pulsar-admin functions-worker function-stats
-
-```
-
-The aggregated functions and connectors metrics can be exposed in Prometheus format as below. You can get [`FUNCTIONS_WORKER_ADDRESS`](functions-worker.md) and `WORKER_PORT` from the `functions_worker.yml` file.
-
-```
-
-http://$FUNCTIONS_WORKER_ADDRESS:$WORKER_PORT/metrics
-
-```
-
-## Configure Prometheus
-
-You can use Prometheus to collect all the metrics exposed for Pulsar components and set up [Grafana](https://grafana.com/) dashboards to display the metrics and monitor your Pulsar cluster. For details, refer to the [Prometheus guide](https://prometheus.io/docs/introduction/getting_started/).
-
-When you run Pulsar on bare metal, you can provide the list of nodes to be probed. When you deploy Pulsar in a Kubernetes cluster, the monitoring is set up automatically. For details, refer to the [Kubernetes instructions](helm-deploy.md).
-
-## Dashboards
-
-When you collect time series statistics, the major problem is to make sure the number of dimensions attached to the data does not explode. Thus you only need to collect time series of metrics aggregated at the namespace level.
-
-### Pulsar per-topic dashboard
-
-The per-topic dashboard instructions are available at [Pulsar manager](administration-pulsar-manager.md).
-
-### Grafana
-
-You can use Grafana to create dashboards driven by the data that is stored in Prometheus.
-
-When you deploy Pulsar on Kubernetes with the Pulsar Helm Chart, a `pulsar-grafana` Docker image is enabled by default. You can use the Docker image with the principal dashboards.
-
-The following are some Grafana dashboard examples:
-
-- [pulsar-grafana](deploy-monitoring.md#grafana): a Grafana dashboard that displays metrics collected in Prometheus for Pulsar clusters running on Kubernetes.
-- [apache-pulsar-grafana-dashboard](https://github.com/streamnative/apache-pulsar-grafana-dashboard): a collection of Grafana dashboard templates for different Pulsar components running on both Kubernetes and on-premise machines.
-
-## Alerting rules
-You can set alerting rules according to your Pulsar environment. To configure alerting rules for Apache Pulsar, refer to [alerting rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/develop-load-manager.md b/site2/website/versioned_docs/version-2.10.0-deprecated/develop-load-manager.md
deleted file mode 100644
index 509209b6a852d8..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/develop-load-manager.md
+++ /dev/null
@@ -1,227 +0,0 @@
----
-id: develop-load-manager
-title: Modular load manager
-sidebar_label: "Modular load manager"
-original_id: develop-load-manager
----
-
-The *modular load manager*, implemented in [`ModularLoadManagerImpl`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/impl/ModularLoadManagerImpl.java), is a flexible alternative to the previously implemented load manager, [`SimpleLoadManagerImpl`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/impl/SimpleLoadManagerImpl.java). It attempts to simplify how load is managed while also providing abstractions so that complex load management strategies may be implemented.
-
-## Usage
-
-There are two ways that you can enable the modular load manager:
-
-1. Change the value of the `loadManagerClassName` parameter in `conf/broker.conf` from `org.apache.pulsar.broker.loadbalance.impl.SimpleLoadManagerImpl` to `org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl`.
-2. Use the `pulsar-admin` tool. Here's an example:
-
-   ```shell
-
-   $ pulsar-admin brokers update-dynamic-config \
-    --config loadManagerClassName \
-    --value org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl
-
-   ```
-
-   You can use the same method to change back to the original value. In either case, any mistake in specifying the load manager will cause Pulsar to default to `SimpleLoadManagerImpl`.
-
-## Verification
-
-There are a few different ways to determine which load manager is being used:
-
-1. Use `pulsar-admin` to examine the `loadManagerClassName` element:
-
-   ```shell
-
-   $ bin/pulsar-admin brokers get-all-dynamic-config
-   {
-     "loadManagerClassName" : "org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl"
-   }
-
-   ```
-
-   If there is no `loadManagerClassName` element, then the default load manager is used.
-
-2. Consult a ZooKeeper load report. With the modular load manager, the load report in `/loadbalance/brokers/...` will have many differences. For example, the `systemResourceUsage` sub-elements (`bandwidthIn`, `bandwidthOut`, etc.) are now all at the top level.
-   Here is an example load report from the modular load manager:
-
-   ```json
-
-   {
-     "bandwidthIn": {
-       "limit": 10240000.0,
-       "usage": 4.256510416666667
-     },
-     "bandwidthOut": {
-       "limit": 10240000.0,
-       "usage": 5.287239583333333
-     },
-     "bundles": [],
-     "cpu": {
-       "limit": 2400.0,
-       "usage": 5.7353247655435915
-     },
-     "directMemory": {
-       "limit": 16384.0,
-       "usage": 1.0
-     }
-   }
-
-   ```
-
-   With the simple load manager, the load report in `/loadbalance/brokers/...` will look like this:
-
-   ```json
-
-   {
-     "systemResourceUsage": {
-       "bandwidthIn": {
-         "limit": 10240000.0,
-         "usage": 0.0
-       },
-       "bandwidthOut": {
-         "limit": 10240000.0,
-         "usage": 0.0
-       },
-       "cpu": {
-         "limit": 2400.0,
-         "usage": 0.0
-       },
-       "directMemory": {
-         "limit": 16384.0,
-         "usage": 1.0
-       },
-       "memory": {
-         "limit": 8192.0,
-         "usage": 3903.0
-       }
-     }
-   }
-
-   ```
-
-3. The command-line [broker monitor](reference-cli-tools.md#monitor-brokers) will have a different output format depending on which load manager implementation is being used.
-
-   Here is an example from the modular load manager:
-
-   ```
-
-   =====================================================================================================
-   ||SYSTEM       |CPU %        |MEMORY %     |DIRECT %     |BW IN %      |BW OUT %     |MAX %        ||
-   ||             |0.00         |48.33        |0.01         |0.00         |0.00         |48.33        ||
-   ||COUNT        |TOPIC        |BUNDLE       |PRODUCER     |CONSUMER     |BUNDLE +     |BUNDLE -     ||
-   ||             |4            |4            |0            |2            |4            |0            ||
-   ||LATEST       |MSG/S IN     |MSG/S OUT    |TOTAL        |KB/S IN      |KB/S OUT     |TOTAL        ||
-   ||             |0.00         |0.00         |0.00         |0.00         |0.00         |0.00         ||
-   ||SHORT        |MSG/S IN     |MSG/S OUT    |TOTAL        |KB/S IN      |KB/S OUT     |TOTAL        ||
-   ||             |0.00         |0.00         |0.00         |0.00         |0.00         |0.00         ||
-   ||LONG         |MSG/S IN     |MSG/S OUT    |TOTAL        |KB/S IN      |KB/S OUT     |TOTAL        ||
-   ||             |0.00         |0.00         |0.00         |0.00         |0.00         |0.00         ||
-   =====================================================================================================
-
-   ```
-
-   Here is an example from the simple load manager:
-
-   ```
-
-   =====================================================================================================
-   ||COUNT        |TOPIC        |BUNDLE       |PRODUCER     |CONSUMER     |BUNDLE +     |BUNDLE -     ||
-   ||             |4            |4            |0            |2            |0            |0            ||
-   ||RAW SYSTEM   |CPU %        |MEMORY %     |DIRECT %     |BW IN %      |BW OUT %     |MAX %        ||
-   ||             |0.25         |47.94        |0.01         |0.00         |0.00         |47.94        ||
-   ||ALLOC SYSTEM |CPU %        |MEMORY %     |DIRECT %     |BW IN %      |BW OUT %     |MAX %        ||
-   ||             |0.20         |1.89         |             |1.27         |3.21         |3.21         ||
-   ||RAW MSG      |MSG/S IN     |MSG/S OUT    |TOTAL        |KB/S IN      |KB/S OUT     |TOTAL        ||
-   ||             |0.00         |0.00         |0.00         |0.01         |0.01         |0.01         ||
-   ||ALLOC MSG    |MSG/S IN     |MSG/S OUT    |TOTAL        |KB/S IN      |KB/S OUT     |TOTAL        ||
-   ||             |54.84        |134.48       |189.31       |126.54       |320.96       |447.50       ||
-   =====================================================================================================
-
-   ```
-
-It is important to note that the modular load manager is _centralized_, meaning that all requests to assign a bundle (whether it's been seen before or whether this is the first time) only get handled by the _lead_ broker (which can change over time). To determine the current lead broker, examine the `/loadbalance/leader` node in ZooKeeper.
-
-## Implementation
-
-### Data
-
-The data monitored by the modular load manager is contained in the [`LoadData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/LoadData.java) class.
-Here, the available data is subdivided into the bundle data and the broker data.
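-
-Because all of this load data is persisted in ZooKeeper, you can inspect it directly while debugging. A minimal sketch using the ZooKeeper CLI, assuming a ZooKeeper installation's `zkCli.sh` is on the path and the server is reachable at `localhost:2181` (the node layout is described below):
-
-```shell
-
-# Show which broker currently holds the lead role
-zkCli.sh -server localhost:2181 get /loadbalance/leader
-
-# List the brokers that have published a local load report
-zkCli.sh -server localhost:2181 ls /loadbalance/brokers
-
-```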
-
-#### Broker
-
-The broker data is contained in the [`BrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/BrokerData.java) class. It is further subdivided into two parts,
-one being the local data which every broker individually writes to ZooKeeper, and the other being the historical broker
-data which is written to ZooKeeper by the leader broker.
-
-##### Local Broker Data
-The local broker data is contained in the class [`LocalBrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/java/org/apache/pulsar/policies/data/loadbalancer/LocalBrokerData.java) and provides information about the following resources:
-
-* CPU usage
-* JVM heap memory usage
-* Direct memory usage
-* Bandwidth in/out usage
-* Most recent total message rate in/out across all bundles
-* Total number of topics, bundles, producers, and consumers
-* Names of all bundles assigned to this broker
-* Most recent changes in bundle assignments for this broker
-
-The local broker data is updated periodically according to the service configuration
-`loadBalancerReportUpdateMaxIntervalMinutes`. After any broker updates its local broker data, the leader broker will
-receive the update immediately via a ZooKeeper watch, where the local data is read from the ZooKeeper node
-`/loadbalance/brokers/<broker host:port>`.
-
-##### Historical Broker Data
-
-The historical broker data is contained in the [`TimeAverageBrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/TimeAverageBrokerData.java) class.
-
-In order to reconcile the need to make good decisions in a steady-state scenario and make reactive decisions in a critical scenario, the historical data is split into two parts: the short-term data for reactive decisions, and the long-term data for steady-state decisions. Both time frames maintain the following information:
-
-* Message rate in/out for the entire broker
-* Message throughput in/out for the entire broker
-
-Unlike the bundle data, the broker data does not maintain samples for the global broker message rates and throughputs, which are not expected to remain steady as new bundles are removed or added. Instead, this data is aggregated over the short-term and long-term data for the bundles. See the section on bundle data to understand how that data is collected and maintained.
-
-The historical broker data is updated for each broker in memory by the leader broker whenever any broker writes its local data to ZooKeeper. Then, the historical data is written to ZooKeeper by the leader broker periodically according to the configuration `loadBalancerResourceQuotaUpdateIntervalMinutes`.
-
-##### Bundle Data
-
-The bundle data is contained in the [`BundleData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/BundleData.java) class. Like the historical broker data, the bundle data is split into a short-term and a long-term time frame. Each time frame maintains the following information:
-
-* Message rate in/out for this bundle
-* Message throughput in/out for this bundle
-* Current number of samples for this bundle
-
-The time frames are implemented by maintaining the average of these values over a set, limited number of samples, where
-the samples are obtained through the message rate and throughput values in the local data.
-Thus, if the update interval for the local data is 2 minutes, the number of short samples is 10 and the number of long
-samples is 1000, the short-term data is maintained over a period of `10 samples * 2 minutes / sample = 20 minutes`,
-while the long-term data is similarly maintained over a period of 2000 minutes. Whenever there are not enough samples
-to satisfy a given time frame, the average is taken only over the existing samples. When no samples are available,
-default values are assumed until they are overwritten by the first sample. Currently, the default values are:
-
-* Message rate in/out: 50 messages per second both ways
-* Message throughput in/out: 50KB per second both ways
-
-The bundle data is updated in memory on the leader broker whenever any broker writes its local data to ZooKeeper.
-Then, the bundle data is written to ZooKeeper by the leader broker periodically at the same time as the historical
-broker data, according to the configuration `loadBalancerResourceQuotaUpdateIntervalMinutes`.
-
-### Traffic Distribution
-
-The modular load manager uses the abstraction provided by [`ModularLoadManagerStrategy`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/ModularLoadManagerStrategy.java) to make decisions about bundle assignment. The strategy makes a decision by considering the service configuration, the entire load data, and the bundle data for the bundle to be assigned. Currently, the only supported strategy is [`LeastLongTermMessageRate`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/impl/LeastLongTermMessageRate.java), though soon users will have the ability to inject their own strategies if desired.
-
-#### Least Long Term Message Rate Strategy
-
-As its name suggests, the least long term message rate strategy attempts to distribute bundles across brokers so that
-the message rate in the long-term time window for each broker is roughly the same. However, simply balancing load based
-on message rate does not handle the issue of asymmetric resource burden per message on each broker. Thus, the system
-resource usages, which are CPU, memory, direct memory, bandwidth in, and bandwidth out, are also considered in the
-assignment process. This is done by weighting the final message rate according to
-`1 / (overload_threshold - max_usage)`, where `overload_threshold` corresponds to the configuration
-`loadBalancerBrokerOverloadedThresholdPercentage` and `max_usage` is the maximum proportion among the system resources
-that is being utilized by the candidate broker. This multiplier ensures that machines that are more heavily taxed
-by the same message rates will receive less load. In particular, it tries to ensure that if one machine is overloaded,
-then all machines are approximately overloaded. In the case in which a broker's max usage exceeds the overload
-threshold, that broker is not considered for bundle assignment. If all brokers are overloaded, the bundle is randomly
-assigned.
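-
-To make the weighting concrete, consider a small illustrative example (this is not the `ModularLoadManagerImpl` code; the broker numbers are invented). Two brokers report the same long-term message rate but different maximum resource usage:
-
-```java
-
-public class WeightedRateExample {
-    // Weight a broker's long-term message rate by how close the broker is to the
-    // overload threshold: rate / (threshold - maxUsage) grows as maxUsage
-    // approaches the threshold, making heavily taxed brokers look "heavier".
-    static double weightedRate(double longTermRate, double overloadThreshold, double maxUsage) {
-        return longTermRate / (overloadThreshold - maxUsage);
-    }
-
-    public static void main(String[] args) {
-        double threshold = 0.85; // loadBalancerBrokerOverloadedThresholdPercentage = 85
-
-        // broker-1: 1000 msg/s long-term rate, 40% max resource usage
-        // broker-2: 1000 msg/s long-term rate, 80% max resource usage
-        System.out.println(weightedRate(1000, threshold, 0.40)); // ~2222
-        System.out.println(weightedRate(1000, threshold, 0.80)); // 20000
-
-        // The strategy prefers broker-1, whose weighted rate is far lower.
-    }
-}
-
-```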
-
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/develop-plugin.md b/site2/website/versioned_docs/version-2.10.0-deprecated/develop-plugin.md
deleted file mode 100644
index 28d8de8ae375dc..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/develop-plugin.md
+++ /dev/null
@@ -1,139 +0,0 @@
----
-id: develop-plugin
-title: Pulsar plugin development
-sidebar_label: "Plugin"
-original_id: develop-plugin
----
-
-You can develop various plugins for Pulsar, such as entry filters, protocol handlers, interceptors, and so on.
-
-## Entry filter
-
-This chapter describes what an entry filter is and how to use it.
-
-### What is an entry filter?
-
-The entry filter is an extension point for implementing a custom message entry strategy. With an entry filter, you can decide **whether to send messages to consumers** (brokers can use the return values of entry filters to determine whether the messages need to be sent or discarded) or **send messages to specific consumers**.
-
-To implement features such as tagged messages or custom delayed messages, use [`subscriptionProperties`](https://github.com/apache/pulsar/blob/ec0a44058d249a7510bb3d05685b2ee5e0874eb6/pulsar-client-api/src/main/java/org/apache/pulsar/client/api/ConsumerBuilder.java#L174), [`properties`](https://github.com/apache/pulsar/blob/ec0a44058d249a7510bb3d05685b2ee5e0874eb6/pulsar-client-api/src/main/java/org/apache/pulsar/client/api/ConsumerBuilder.java#L533), and entry filters.
-
-### How to use an entry filter?
-
-Follow the steps below:
-
-1. Create a Maven project.
-
-2. Implement the `EntryFilter` interface.
-
-3. Package the implementation class into a NAR file.
-
-4. Configure the `broker.conf` file (or the `standalone.conf` file) and restart your broker.
-
-#### Step 1: Create a Maven project
-
-For how to create a Maven project, see [here](https://maven.apache.org/guides/getting-started/maven-in-five-minutes.html).
-
-#### Step 2: Implement the `EntryFilter` interface
-
-1. Add the Pulsar broker dependency to the `pom.xml` file; otherwise, the [`EntryFilter` interface](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/plugin/EntryFilter.java) cannot be resolved.
-
-   ```xml
-
-   <dependency>
-       <groupId>org.apache.pulsar</groupId>
-       <artifactId>pulsar-broker</artifactId>
-       <version>${pulsar.version}</version>
-       <scope>provided</scope>
-   </dependency>
-
-   ```
-
-2. Implement the [`FilterResult filterEntry(Entry entry, FilterContext context);` method](https://github.com/apache/pulsar/blob/2adb6661d5b82c5705ee00ce3ebc9941c99635d5/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/plugin/EntryFilter.java#L34).
-
-   - If the method returns `ACCEPT` or `null`, this message is sent to consumers.
-
-   - If the method returns `REJECT`, this message is filtered out and it does not consume message permits.
-
-   - If there are multiple entry filters, the message passes through every filter in the pipeline, in order. If any entry filter returns `REJECT`, the message is discarded.
-
-   You can get entry metadata, subscriptions, and other information through `FilterContext`.
-
-3. Describe a NAR file.
-
-   Create an `entry_filter.yml` file in the `resources/META-INF/services` directory to describe a NAR file.
-
-   ```conf
-
-   # Entry filter name, which should be configured in the broker.conf file later
-   name: entryFilter
-   # Entry filter description
-   description: entry filter
-   # Implementation class name of entry filter
-   entryFilterClass: com.xxxx.xxxx.xxxx.DefaultEntryFilterImpl
-
-   ```
-
-#### Step 3: Package the entry filter implementation into a NAR file
-
-1. Add the NAR packaging plugin to your `pom.xml` file.
-
-   ```xml
-
-   <build>
-     <finalName>${project.artifactId}</finalName>
-     <plugins>
-       <plugin>
-         <groupId>org.apache.nifi</groupId>
-         <artifactId>nifi-nar-maven-plugin</artifactId>
-         <version>1.2.0</version>
-         <extensions>true</extensions>
-         <configuration>
-           <finalName>${project.artifactId}-${project.version}</finalName>
-         </configuration>
-         <executions>
-           <execution>
-             <id>default-nar</id>
-             <phase>package</phase>
-             <goals>
-               <goal>nar</goal>
-             </goals>
-           </execution>
-         </executions>
-       </plugin>
-     </plugins>
-   </build>
-
-   ```
-
-2. Generate a NAR file in the `target` directory.
-
-   ```shell
-
-   mvn clean install
-
-   ```
-
-#### Step 4: Configure and restart the broker
-
-1. Configure the following parameters in the `broker.conf` file (or the `standalone.conf` file).
-
-   ```conf
-
-   # Class name of pluggable entry filters
-   # Multiple classes need to be separated by commas.
-   entryFilterNames=entryFilter1,entryFilter2,entryFilter3
-   # The directory for all entry filter implementations
-   entryFiltersDirectory=tempDir
-
-   ```
-
-2. Restart your broker.
-
-   You can see the following broker log if the plugin is successfully loaded.
-
-   ```text
-
-   Successfully loaded entry filter for name `{name of your entry filter}`
-
-   ```
-
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/develop-schema.md b/site2/website/versioned_docs/version-2.10.0-deprecated/develop-schema.md
deleted file mode 100644
index 2d4461a5ea2b55..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/develop-schema.md
+++ /dev/null
@@ -1,62 +0,0 @@
----
-id: develop-schema
-title: Custom schema storage
-sidebar_label: "Custom schema storage"
-original_id: develop-schema
----
-
-By default, Pulsar stores data type [schemas](concepts-schema-registry.md) in [Apache BookKeeper](https://bookkeeper.apache.org) (which is deployed alongside Pulsar). You can, however, use another storage system if you wish. This doc walks you through creating your own schema storage implementation.
-
-In order to use a non-default (i.e. non-BookKeeper) storage system for Pulsar schemas, you need to implement two Java interfaces: [`SchemaStorage`](#schemastorage-interface) and [`SchemaStorageFactory`](#schemastoragefactory-interface).
-
-## SchemaStorage interface
-
-The `SchemaStorage` interface has the following methods:
-
-```java
-
-public interface SchemaStorage {
-    // How schemas are updated
-    CompletableFuture<SchemaVersion> put(String key, byte[] value, byte[] hash);
-
-    // How schemas are fetched from storage
-    CompletableFuture<StoredSchema> get(String key, SchemaVersion version);
-
-    // How schemas are deleted
-    CompletableFuture<SchemaVersion> delete(String key);
-
-    // Utility method for converting a schema version byte array to a SchemaVersion object
-    SchemaVersion versionFromBytes(byte[] version);
-
-    // Startup behavior for the schema storage client
-    void start() throws Exception;
-
-    // Shutdown behavior for the schema storage client
-    void close() throws Exception;
-}
-
-```
-
-> For a full-fledged example schema storage implementation, see the [`BookKeeperSchemaStorage`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/schema/BookkeeperSchemaStorage.java) class.
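-
-Before looking at the factory, it can help to see the shape of a storage implementation in isolation. The following is a minimal, illustrative in-memory sketch: it mirrors the methods above rather than importing the broker module (so a plain `long` stands in for `SchemaVersion`), the class name is invented, and it is not durable or otherwise production-ready:
-
-```java
-
-import java.nio.ByteBuffer;
-import java.util.List;
-import java.util.Map;
-import java.util.concurrent.CompletableFuture;
-import java.util.concurrent.ConcurrentHashMap;
-import java.util.concurrent.CopyOnWriteArrayList;
-
-// Dependency-free sketch of the storage logic behind a custom SchemaStorage:
-// schemas are appended per key, and each append yields a monotonically
-// increasing version that can round-trip through a byte[].
-public class InMemorySchemaStorageSketch {
-    private final Map<String, List<byte[]>> schemas = new ConcurrentHashMap<>();
-
-    // Mirrors put(String key, byte[] value, byte[] hash): store and return the new version.
-    public CompletableFuture<Long> put(String key, byte[] value) {
-        List<byte[]> versions = schemas.computeIfAbsent(key, k -> new CopyOnWriteArrayList<>());
-        versions.add(value);
-        return CompletableFuture.completedFuture((long) versions.size() - 1);
-    }
-
-    // Mirrors get(String key, SchemaVersion version): fetch one stored schema.
-    public CompletableFuture<byte[]> get(String key, long version) {
-        List<byte[]> versions = schemas.get(key);
-        if (versions == null || version < 0 || version >= versions.size()) {
-            return CompletableFuture.completedFuture(null);
-        }
-        return CompletableFuture.completedFuture(versions.get((int) version));
-    }
-
-    // Mirrors delete(String key): drop all versions for the key.
-    public CompletableFuture<Void> delete(String key) {
-        schemas.remove(key);
-        return CompletableFuture.completedFuture(null);
-    }
-
-    // Mirrors versionFromBytes(byte[]): versions travel as 8-byte big-endian longs.
-    public long versionFromBytes(byte[] version) {
-        return ByteBuffer.wrap(version).getLong();
-    }
-
-    public byte[] versionToBytes(long version) {
-        return ByteBuffer.allocate(Long.BYTES).putLong(version).array();
-    }
-}
-
-```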
-
-## SchemaStorageFactory interface
-
-```java
-
-public interface SchemaStorageFactory {
-    @NotNull
-    SchemaStorage create(PulsarService pulsar) throws Exception;
-}
-
-```
-
-> For a full-fledged example schema storage factory implementation, see the [`BookKeeperSchemaStorageFactory`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/schema/BookkeeperSchemaStorageFactory.java) class.
-
-## Deployment
-
-In order to use your custom schema storage implementation, you'll need to:
-
-1. Package the implementation in a [JAR](https://docs.oracle.com/javase/tutorial/deployment/jar/basicsindex.html) file.
-1. Add that JAR to the `lib` folder in your Pulsar [binary or source distribution](getting-started-standalone.md#installing-pulsar).
-1. Change the `schemaRegistryStorageClassName` configuration in [`broker.conf`](reference-configuration.md#broker) to your custom factory class (i.e. the `SchemaStorageFactory` implementation, not the `SchemaStorage` implementation).
-1. Start up Pulsar.
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/develop-tools.md b/site2/website/versioned_docs/version-2.10.0-deprecated/develop-tools.md
deleted file mode 100644
index b5457790b80810..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/develop-tools.md
+++ /dev/null
@@ -1,111 +0,0 @@
----
-id: develop-tools
-title: Simulation tools
-sidebar_label: "Simulation tools"
-original_id: develop-tools
----
-
-It is sometimes necessary to create a test environment and incur artificial load to observe how well load managers
-handle the load. The load simulation controller, the load simulation client, and the broker monitor were created to
-make creating this load and observing its effects on the load managers easier.
-
-## Simulation Client
-The simulation client is a machine which will create and subscribe to topics with configurable message rates and sizes.
-Because it is sometimes necessary in simulating large load to use multiple client machines, the user does not interact
-with the simulation client directly, but instead delegates their requests to the simulation controller, which will then
-send signals to clients to start incurring load. The client implementation is in the class
-`org.apache.pulsar.testclient.LoadSimulationClient`.
-
-### Usage
-To start a simulation client, use the `pulsar-perf` script with the command `simulation-client` as follows:
-
-```
-
-pulsar-perf simulation-client --port <listen port> --service-url <pulsar service url>
-
-```
-
-The client will then be ready to receive controller commands.
-## Simulation Controller
-The simulation controller sends signals to the simulation clients, requesting them to create new topics, stop old
-topics, change the load incurred by topics, as well as several other tasks. It is implemented in the class
-`org.apache.pulsar.testclient.LoadSimulationController` and presents a shell to the user as an interface to send
-commands with.
-
-### Usage
-To start a simulation controller, use the `pulsar-perf` script with the command `simulation-controller` as follows:
-
-```
-
-pulsar-perf simulation-controller --cluster <cluster to simulate on> --client-port <listen port for the clients>
---clients <comma-separated list of client hostnames>
-
-```
-
-The clients should already be started before the controller is started. You will then be presented with a simple prompt,
-where you can issue commands to simulation clients. Arguments often refer to tenant names, namespace names, and topic
-names. In all cases, the base names of the tenants, namespaces, and topics are used.
-For example, for the topic `persistent://my_tenant/my_cluster/my_namespace/my_topic`, the tenant name is `my_tenant`,
-the namespace name is `my_namespace`, and the topic name is `my_topic`. The controller can perform the following actions:
-
-* Create a topic with a producer and a consumer
-  * `trade [--rate <message rate>] [--rand-rate <min>,<max>] [--size <message size>] <tenant> <namespace> <topic>`
-* Create a group of topics with a producer and a consumer
-  * `trade_group [--rate <message rate>] [--rand-rate <min>,<max>] [--separation <separation between topic creations>] [--size <message size>] [--topics-per-namespace <topics per namespace>] <tenant> <group> <num namespaces>`
-* Change the configuration of an existing topic
-  * `change [--rate <message rate>] [--rand-rate <min>,<max>] [--size <message size>] <tenant> <namespace> <topic>`
-* Change the configuration of a group of topics
-  * `change_group [--rate <message rate>] [--rand-rate <min>,<max>] [--size <message size>] [--topics-per-namespace <topics per namespace>] <tenant> <group>`
-* Shutdown a previously created topic
-  * `stop <tenant> <namespace> <topic>`
-* Shutdown a previously created group of topics
-  * `stop_group <tenant> <group>`
-* Copy the historical data from one ZooKeeper to another and simulate based on the message rates and sizes in that history
-  * `copy <source zookeeper> <target zookeeper> [--rate-multiplier value]`
-* Simulate the load of the historical data on the current ZooKeeper (should be same ZooKeeper being simulated on)
-  * `simulate <zookeeper> [--rate-multiplier value]`
-* Stream the latest data from the given active ZooKeeper to simulate the real-time load of that ZooKeeper.
-  * `stream <zookeeper> [--rate-multiplier value]`
-
-The "group" arguments in these commands allow the user to create or affect multiple topics at once. Groups are created
-when calling the `trade_group` command, and all topics from these groups may be subsequently modified or stopped
-with the `change_group` and `stop_group` commands respectively. All ZooKeeper arguments are of the form
-`zookeeper_host:port`.
-
-### Difference Between Copy, Simulate, and Stream
-The commands `copy`, `simulate`, and `stream` are very similar but have significant differences. `copy` is used when
-you want to simulate the load of a static, external ZooKeeper on the ZooKeeper you are simulating on. Thus,
-`<source zookeeper>` should be the ZooKeeper you want to copy and `<target zookeeper>` should be the ZooKeeper you are
-simulating on, and then it will get the full benefit of the historical data of the source in both load manager
-implementations. `simulate` on the other hand takes in only one ZooKeeper, the one you are simulating on. It assumes
-that you are simulating on a ZooKeeper that has historical data for `SimpleLoadManagerImpl` and creates equivalent
-historical data for `ModularLoadManagerImpl`. Then, the load according to the historical data is simulated by the
-clients. Finally, `stream` takes in an active ZooKeeper different from the ZooKeeper being simulated on and streams
-load data from it and simulates the real-time load. In all cases, the optional `rate-multiplier` argument allows the
-user to simulate some proportion of the load. For instance, using `--rate-multiplier 0.05` will cause messages to
-be sent at only `5%` of the rate of the load that is being simulated.
-
-## Broker Monitor
-To observe the behavior of the load manager in these simulations, one may utilize the broker monitor, which is
-implemented in `org.apache.pulsar.testclient.BrokerMonitor`. The broker monitor will print tabular load data to the
-console as it is updated using watchers.
-
-### Usage
-To start a broker monitor, use the `monitor-brokers` command in the `pulsar-perf` script:
-
-```
-
-pulsar-perf monitor-brokers --connect-string <zookeeper host:port>
-
-```
-
-The console will then continuously print load data until it is interrupted.
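-
-As a quick end-to-end illustration, here is a hypothetical session (the host names, ports, tenant, namespace, topic, and rates below are invented for the example): start a client, point the controller at it, and drive load from the controller prompt.
-
-```
-
-# On the client machine:
-pulsar-perf simulation-client --port 9989 --service-url pulsar://broker.example.com:6650
-
-# On the controller machine:
-pulsar-perf simulation-controller --cluster my_cluster --client-port 9989 --clients client1.example.com
-
-# At the controller prompt: create load, raise the rate, then stop the topic.
-trade --rate 100 my_tenant my_namespace my_topic
-change --rate 500 my_tenant my_namespace my_topic
-stop my_tenant my_namespace my_topic
-
-```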
- diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/developing-binary-protocol.md b/site2/website/versioned_docs/version-2.10.0-deprecated/developing-binary-protocol.md deleted file mode 100644 index 11394505ac8cca..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/developing-binary-protocol.md +++ /dev/null @@ -1,637 +0,0 @@ ---- -id: developing-binary-protocol -title: Pulsar binary protocol specification -sidebar_label: "Binary protocol" -original_id: developing-binary-protocol ---- - -Pulsar uses a custom binary protocol for communications between producers/consumers and brokers. This protocol is designed to support required features, such as acknowledgements and flow control, while ensuring maximum transport and implementation efficiency. - -Clients and brokers exchange *commands* with each other. Commands are formatted as binary [protocol buffer](https://developers.google.com/protocol-buffers/) (aka *protobuf*) messages. The format of protobuf commands is specified in the [`PulsarApi.proto`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/proto/PulsarApi.proto) file and also documented in the [Protobuf interface](#protobuf-interface) section below. - -> ### Connection sharing -> Commands for different producers and consumers can be interleaved and sent through the same connection without restriction. - -All commands associated with Pulsar's protocol are contained in a [`BaseCommand`](#pulsar.proto.BaseCommand) protobuf message that includes a [`Type`](#pulsar.proto.Type) [enum](https://developers.google.com/protocol-buffers/docs/proto#enum) with all possible subcommands as optional fields. `BaseCommand` messages can specify only one subcommand. - -## Framing - -Since protobuf doesn't provide any sort of message frame, all messages in the Pulsar protocol are prepended with a 4-byte field that specifies the size of the frame. The maximum allowable size of a single frame is 5 MB. - -The Pulsar protocol allows for two types of commands: - -1. **Simple commands** that do not carry a message payload. -2. **Payload commands** that bear a payload that is used when publishing or delivering messages. In payload commands, the protobuf command data is followed by protobuf [metadata](#message-metadata) and then the payload, which is passed in raw format outside of protobuf. All sizes are passed as 4-byte unsigned big endian integers. - -> Message payloads are passed in raw format rather than protobuf format for efficiency reasons. 
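-
-As an illustration of the framing, the following is a minimal sketch of reading one simple-command frame from a stream. It is not the Pulsar client code; error handling is reduced to the essentials:
-
-```java
-
-import java.io.DataInputStream;
-import java.io.IOException;
-import java.io.InputStream;
-
-// Reads one simple (payload-free) command frame:
-// [totalSize:4][commandSize:4][command bytes], all sizes big-endian.
-public class FrameReaderSketch {
-    private static final int MAX_FRAME_SIZE = 5 * 1024 * 1024; // 5 MB limit
-
-    public static byte[] readCommand(InputStream in) throws IOException {
-        DataInputStream data = new DataInputStream(in);
-        int totalSize = data.readInt();   // size of everything after this field
-        if (totalSize < 4 || totalSize > MAX_FRAME_SIZE) {
-            throw new IOException("Invalid frame size: " + totalSize);
-        }
-        int commandSize = data.readInt(); // size of the serialized BaseCommand
-        byte[] command = new byte[commandSize];
-        data.readFully(command);          // protobuf-encoded BaseCommand bytes
-        // For a payload command, (totalSize - 4 - commandSize) bytes of
-        // metadata and payload would still remain on the stream here.
-        return command;
-    }
-}
-
-```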
- -### Simple commands - -Simple (payload-free) commands have this basic structure: - -| Component | Description | Size (in bytes) | -|:--------------|:----------------------------------------------------------------------------------------|:----------------| -| `totalSize` | The size of the frame, counting everything that comes after it (in bytes) | 4 | -| `commandSize` | The size of the protobuf-serialized command | 4 | -| `message` | The protobuf message serialized in a raw binary format (rather than in protobuf format) | | - -### Payload commands - -Payload commands have this basic structure: - -| Component | Required or optional| Description | Size (in bytes) | -|:-----------------------------------|:----------|:--------------------------------------------------------------------------------------------|:----------------| -| `totalSize` | Required | The size of the frame, counting everything that comes after it (in bytes) | 4 | -| `commandSize` | Required | The size of the protobuf-serialized command | 4 | -| `message` | Required | The protobuf message serialized in a raw binary format (rather than in protobuf format) | | -| `magicNumberOfBrokerEntryMetadata` | Optional | A 2-byte byte array (`0x0e02`) identifying the broker entry metadata
**Note**: `magicNumberOfBrokerEntryMetadata`, `brokerEntryMetadataSize`, and `brokerEntryMetadata` should be used **together**. | 2 |
-| `brokerEntryMetadataSize` | Optional | The size of the broker entry metadata | 4 |
-| `brokerEntryMetadata` | Optional | The broker entry metadata stored as a binary protobuf message | |
-| `magicNumber` | Required | A 2-byte byte array (`0x0e01`) identifying the current format | 2 |
-| `checksum` | Required | A [CRC32-C checksum](http://www.evanjones.ca/crc32c.html) of everything that comes after it | 4 |
-| `metadataSize` | Required | The size of the message [metadata](#message-metadata) | 4 |
-| `metadata` | Required | The message [metadata](#message-metadata) stored as a binary protobuf message | |
-| `payload` | Required | Anything left in the frame is considered the payload and can include any sequence of bytes | |
-
-## Broker entry metadata
-
-Broker entry metadata is stored alongside the message metadata as a serialized protobuf message.
-It is created by the broker when the message arrives at the broker and, if so configured, is passed to the consumer without changes.
-
-| Field | Required or optional | Description |
-|:-------------------|:---------------------|:------------|
-| `broker_timestamp` | Optional | The timestamp when a message arrived at the broker (i.e. as the number of milliseconds since January 1st, 1970 in UTC) |
-| `index` | Optional | The index of the message. It is assigned by the broker. |
-
-If you want to use broker entry metadata for **brokers**, configure the [`brokerEntryMetadataInterceptors`](reference-configuration.md#broker) parameter in the `broker.conf` file.
-
-If you want to use broker entry metadata for **consumers**:
-
-1. Use the client protocol version [18 or later](https://github.com/apache/pulsar/blob/ca37e67211feda4f7e0984e6414e707f1c1dfd07/pulsar-common/src/main/proto/PulsarApi.proto#L259).
-
-2. Configure the [`brokerEntryMetadataInterceptors`](reference-configuration.md#broker) parameter and set the [`exposingBrokerEntryMetadataToClientEnabled`](reference-configuration-broker.md#exposingbrokerentrymetadatatoclientenabled) parameter to `true` in the `broker.conf` file.
-
-## Message metadata
-
-Message metadata is stored alongside the application-specified payload as a serialized protobuf message. Metadata is created by the producer and passed without changes to the consumer.
-
-| Field | Required or optional | Description |
-|:------|:---------------------|:------------|
-| `producer_name` | Required | The name of the producer that published the message |
-| `sequence_id` | Required | The sequence ID of the message, assigned by the producer |
-| `publish_time` | Required | The publish timestamp in Unix time (i.e. as the number of milliseconds since January 1st, 1970 in UTC) |
-| `properties` | Required | A sequence of key/value pairs (using the [`KeyValue`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/proto/PulsarApi.proto#L32) message). These are application-defined keys and values with no special meaning to Pulsar. |
| `replicated_from` | Optional | Indicates that the message has been replicated and specifies the name of the [cluster](reference-terminology.md#cluster) where the message was originally published |
-| `partition_key` | Optional | While publishing on a partitioned topic, if the key is present, the hash of the key is used to determine which partition to choose. The partition key is used as the message key. |
-| `compression` | Optional | Signals that the payload has been compressed, and with which compression library |
-| `uncompressed_size` | Optional | If compression is used, the producer must fill the uncompressed size field with the original payload size |
-| `num_messages_in_batch` | Optional | If this message is really a [batch](#batch-messages) of multiple entries, this field must be set to the number of messages in the batch |
-
-### Batch messages
-
-When using batch messages, the payload contains a list of entries, each of them with its
-individual metadata, defined by the `SingleMessageMetadata` object.
-
-For a single batch, the payload format looks like this:
-
-| Field | Required or optional | Description |
-|:----------------|:---------------------|:------------|
-| `metadataSizeN` | Required | The size of the single message metadata serialized as protobuf |
-| `metadataN` | Required | Single message metadata |
-| `payloadN` | Required | Message payload passed by the application |
-
-Each metadata field looks like this:
-
-| Field | Required or optional | Description |
-|:----------------|:---------------------|:------------|
-| `properties` | Required | Application-defined properties |
-| `partition key` | Optional | Key to indicate the hashing to a particular partition |
-| `payload_size` | Required | Size of the payload for the single message in the batch |
-
-When compression is enabled, the whole batch is compressed at once.
-
-## Interactions
-
-### Connection establishment
-
-After opening a TCP connection to a broker, typically on port 6650, the client
-is responsible for initiating the session.
-
-![Connect interaction](/assets/binary-protocol-connect.png)
-
-After receiving a `Connected` response from the broker, the client can
-consider the connection ready to use. Alternatively, if the broker fails to
-validate the client's authentication, it will reply with an `Error` command and
-close the TCP connection.
-
-Example:
-
-```protobuf
-
-message CommandConnect {
-  "client_version" : "Pulsar-Client-Java-v1.15.2",
-  "auth_method_name" : "my-authentication-plugin",
-  "auth_data" : "my-auth-data",
-  "protocol_version" : 6
-}
-
-```
-
-Fields:
- * `client_version` → String based identifier. Format is not enforced
- * `auth_method_name` → *(optional)* Name of the authentication plugin if auth is enabled
- * `auth_data` → *(optional)* Plugin specific authentication data
- * `protocol_version` → Indicates the protocol version supported by the client.
-   The broker will not send commands introduced in newer revisions of the
-   protocol. The broker might be enforcing a minimum version
-
-```protobuf
-
-message CommandConnected {
-  "server_version" : "Pulsar-Broker-v1.15.2",
-  "protocol_version" : 6
-}
-
-```
-
-Fields:
- * `server_version` → String identifier of the broker version
- * `protocol_version` → Protocol version supported by the broker.
-   The client must not attempt to send commands introduced in newer revisions of the protocol.
-
-### Keep Alive
-
-To identify prolonged network partitions between clients and brokers, or cases
-in which a machine crashes without interrupting the TCP connection on the remote
-end (e.g. power outage, kernel panic, hard reboot...), we have introduced a
-mechanism to probe for the availability status of the remote peer.
-
-Both clients and brokers send `Ping` commands periodically, and they will
-close the socket if a `Pong` response is not received within a timeout (the default
-used by the broker is 60s).
-
-A valid implementation of a Pulsar client is not required to send the `Ping`
-probe, though it is required to promptly reply after receiving one from the
-broker in order to prevent the remote side from forcibly closing the TCP connection.
-
-### Producer
-
-In order to send messages, a client needs to establish a producer. When creating
-a producer, the broker will first verify that this particular client is
-authorized to publish on the topic.
-
-Once the client gets confirmation of the producer creation, it can publish
-messages to the broker, referring to the producer ID negotiated before.
-
-![Producer interaction](/assets/binary-protocol-producer.png)
-
-If the client does not receive a response indicating producer creation success or failure,
-the client should first send a command to close the original producer before sending a
-command to re-attempt producer creation.
-
-##### Command Producer
-
-```protobuf
-
-message CommandProducer {
-  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
-  "producer_id" : 1,
-  "request_id" : 1
-}
-
-```
-
-Parameters:
- * `topic` → Complete name of the topic on which to create the producer
- * `producer_id` → Client generated producer identifier. Needs to be unique
-   within the same connection
- * `request_id` → Identifier for this request. Used to match the response with
-   the originating request. Needs to be unique within the same connection
- * `producer_name` → *(optional)* If a producer name is specified, the name will
-   be used, otherwise the broker will generate a unique name. The generated
-   producer name is guaranteed to be globally unique. Implementations are
-   expected to let the broker generate a new producer name when the producer
-   is initially created, then reuse it when recreating the producer after
-   reconnections.
-
-The broker will reply with either `ProducerSuccess` or `Error` commands.
-
-##### Command ProducerSuccess
-
-```protobuf
-
-message CommandProducerSuccess {
-  "request_id" : 1,
-  "producer_name" : "generated-unique-producer-name"
-}
-
-```
-
-Parameters:
- * `request_id` → The original ID of the `CreateProducer` request
- * `producer_name` → Generated globally unique producer name or the name
-   specified by the client, if any.
-
-##### Command Send
-
-Command `Send` is used to publish a new message within the context of an
-already existing producer. If a producer has not yet been created for the
-connection, the broker will terminate the connection. This command is used
-in a frame that includes the command as well as the message payload, for which the
-complete format is specified in the [payload commands](#payload-commands) section.
-
-```protobuf
-
-message CommandSend {
-  "producer_id" : 1,
-  "sequence_id" : 0,
-  "num_messages" : 1
-}
-
-```
-
-Parameters:
- * `producer_id` → The ID of an existing producer
- * `sequence_id` → Each message has an associated sequence ID which is expected
-   to be implemented with a counter starting at 0. The `SendReceipt` that
-   acknowledges the effective publishing of a message will refer to it by
-   its sequence ID.
- * `num_messages` → *(optional)* Used when publishing a batch of messages at
-   once.
-
-##### Command SendReceipt
-
-After a message has been persisted on the configured number of replicas, the
-broker will send the acknowledgment receipt to the producer.
-
-```protobuf
-
-message CommandSendReceipt {
-  "producer_id" : 1,
-  "sequence_id" : 0,
-  "message_id" : {
-    "ledgerId" : 123,
-    "entryId" : 456
-  }
-}
-
-```
-
-Parameters:
- * `producer_id` → The ID of the producer originating the send request
- * `sequence_id` → The sequence ID of the published message
- * `message_id` → The message ID assigned by the system to the published message.
-   It is unique within a single cluster. The message ID is composed of 2 longs, `ledgerId`
-   and `entryId`, reflecting that this unique ID is assigned when the message is appended
-   to a BookKeeper ledger
-
-##### Command CloseProducer
-
-**Note**: *This command can be sent by either producer or broker*.
-
-When receiving a `CloseProducer` command, the broker will stop accepting any
-more messages for the producer, wait until all pending messages are persisted
-and then reply `Success` to the client.
-
-If the client does not receive a response to a `Producer` command within a timeout,
-the client must first send a `CloseProducer` command before sending another
-`Producer` command. The client does not need to await a response to the `CloseProducer`
-command before sending the next `Producer` command.
-
-The broker can send a `CloseProducer` command to the client when it is performing
-a graceful failover (e.g. the broker is being restarted, or the topic is being unloaded
-by the load balancer to be transferred to a different broker).
-
-When receiving the `CloseProducer`, the client is expected to go through the
-service discovery lookup again and recreate the producer. The TCP
-connection is not affected.
-
-### Consumer
-
-A consumer is used to attach to a subscription and consume messages from it.
-After every reconnection, a client needs to subscribe to the topic. If a
-subscription is not already there, a new one will be created.
-
-![Consumer](/assets/binary-protocol-consumer.png)
-
-#### Flow control
-
-After the consumer is ready, the client needs to *give permission* to the
-broker to push messages. This is done with the `Flow` command.
-
-A `Flow` command gives additional *permits* to send messages to the consumer.
-A typical consumer implementation will use a queue to accumulate these messages
-before the application is ready to consume them.
-
-After the application has dequeued half of the messages in the queue, the consumer
-sends permits to the broker to ask for more messages (equal to half of the queue size).
-
-For example, if the queue size is 1000 and the consumer has consumed 500 messages from
-the queue, the consumer sends permits to the broker to ask for 500 more messages.
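-
-The permit accounting described above is easy to model. The sketch below is illustrative only; it is not the Pulsar client implementation, and `sendFlowCommand` stands in for serializing a `Flow` command to the connection:
-
-```java
-
-import java.util.concurrent.ArrayBlockingQueue;
-import java.util.concurrent.BlockingQueue;
-
-// Illustrative model of consumer-side flow control: grant the broker an
-// initial batch of permits, then top up by half the queue size every time
-// half of the queued messages have been handed to the application.
-public class FlowControlSketch {
-    private static final int QUEUE_SIZE = 1000;
-    private final BlockingQueue<byte[]> incoming = new ArrayBlockingQueue<>(QUEUE_SIZE);
-    private int dequeuedSinceLastFlow = 0;
-
-    void start() {
-        sendFlowCommand(QUEUE_SIZE); // initial permits for a full queue
-    }
-
-    byte[] receive() throws InterruptedException {
-        byte[] msg = incoming.take();
-        if (++dequeuedSinceLastFlow >= QUEUE_SIZE / 2) {
-            sendFlowCommand(QUEUE_SIZE / 2); // ask for 500 more messages
-            dequeuedSinceLastFlow = 0;
-        }
-        return msg;
-    }
-
-    private void sendFlowCommand(int permits) {
-        // Placeholder: a real client would serialize a CommandFlow with
-        // messagePermits = permits and write it to the TCP connection.
-        System.out.println("Flow: granting " + permits + " permits");
-    }
-}
-
-```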
-
-##### Command Subscribe
-
-```protobuf
-
-message CommandSubscribe {
-  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
-  "subscription" : "my-subscription-name",
-  "subType" : "Exclusive",
-  "consumer_id" : 1,
-  "request_id" : 1
-}
-
-```
-
-Parameters:
- * `topic` → Complete name of the topic on which to create the consumer
- * `subscription` → Subscription name
- * `subType` → Subscription type: Exclusive, Shared, Failover, Key_Shared
- * `consumer_id` → Client generated consumer identifier. Needs to be unique
-   within the same connection
- * `request_id` → Identifier for this request. Used to match the response with
-   the originating request. Needs to be unique within the same connection
- * `consumer_name` → *(optional)* Clients can specify a consumer name. This
-   name can be used to track a particular consumer in the stats. Also, in the
-   Failover subscription type, the name is used to decide which consumer is
-   elected as *master* (the one receiving messages): consumers are sorted by
-   their consumer name and the first one is elected master.
-
-##### Command Flow
-
-```protobuf
-
-message CommandFlow {
-  "consumer_id" : 1,
-  "messagePermits" : 1000
-}
-
-```
-
-Parameters:
-* `consumer_id` → Id of an already established consumer
-* `messagePermits` → Number of additional permits to grant to the broker for
-  pushing more messages
-
-##### Command Message
-
-Command `Message` is used by the broker to push messages to an existing consumer,
-within the limits of the given permits.
-
-This command is used in a frame that includes the message payload as well, for
-which the complete format is specified in the [payload commands](#payload-commands)
-section.
-
-```protobuf
-
-message CommandMessage {
-  "consumer_id" : 1,
-  "message_id" : {
-    "ledgerId" : 123,
-    "entryId" : 456
-  }
-}
-
-```
-
-##### Command Ack
-
-An `Ack` is used to signal to the broker that a given message has been
-successfully processed by the application and can be discarded by the broker.
-
-In addition, the broker maintains the consumer position based on the
-acknowledged messages.
-
-```protobuf
-
-message CommandAck {
-  "consumer_id" : 1,
-  "ack_type" : "Individual",
-  "message_id" : {
-    "ledgerId" : 123,
-    "entryId" : 456
-  }
-}
-
-```
-
-Parameters:
- * `consumer_id` → Id of an already established consumer
- * `ack_type` → Type of acknowledgment: `Individual` or `Cumulative`
- * `message_id` → Id of the message to acknowledge
- * `validation_error` → *(optional)* Indicates that the consumer has discarded
-   the messages due to: `UncompressedSizeCorruption`,
-   `DecompressionError`, `ChecksumMismatch`, `BatchDeSerializeError`
- * `properties` → *(optional)* Reserved configuration items
- * `txnid_most_bits` → *(optional)* Same as the Transaction Coordinator ID; `txnid_most_bits` and `txnid_least_bits`
-   uniquely identify a transaction.
- * `txnid_least_bits` → *(optional)* The ID of the transaction opened in a transaction coordinator;
-   `txnid_most_bits` and `txnid_least_bits` uniquely identify a transaction.
- * `request_id` → *(optional)* The ID for handling the response and timeout.
-
-##### Command AckResponse
-
-An `AckResponse` is the broker's response to an acknowledgment request sent by the client. It contains the `consumer_id` sent in the request.
-If a transaction is used, it contains both the Transaction ID and the Request ID that are sent in the request. The client finishes the specific request according to the Request ID.
-If the `error` field is set, it indicates that the request has failed.
-
-An example of an `AckResponse` for a transaction:
-
-```protobuf
-
-message CommandAckResponse {
-  "consumer_id" : 1,
-  "txnid_least_bits" : 0,
-  "txnid_most_bits" : 1,
-  "request_id" : 5
-}
-
-```
-
-##### Command CloseConsumer
-
-**Note**: *This command can be sent by either producer or broker*.
-
-This command behaves the same as [`CloseProducer`](#command-closeproducer).
-
-##### Command RedeliverUnacknowledgedMessages
-
-A consumer can ask the broker to redeliver some or all of the pending messages
-that were pushed to that particular consumer and not yet acknowledged.
-
-The protobuf object accepts a list of message IDs that the consumer wants to
-be redelivered. If the list is empty, the broker will redeliver all the
-pending messages.
-
-On redelivery, messages can be sent to the same consumer or, in the case of a
-shared subscription, spread across all available consumers.
-
-##### Command ReachedEndOfTopic
-
-This is sent by a broker to a particular consumer, whenever the topic
-has been "terminated" and all the messages on the subscription were
-acknowledged.
-
-The client should use this command to notify the application that no more
-messages are coming from the consumer.
-
-##### Command ConsumerStats
-
-This command is sent by the client to retrieve Subscriber and Consumer level
-stats from the broker.
-Parameters:
- * `request_id` → Id of the request, used to correlate the request
-   and the response.
- * `consumer_id` → Id of an already established consumer.
-
-##### Command ConsumerStatsResponse
-
-This is the broker's response to a `ConsumerStats` request by the client.
-It contains the Subscriber and Consumer level stats of the `consumer_id` sent in the request.
-If the `error_code` or the `error_message` field is set, it indicates that the request has failed.
-
-##### Command Unsubscribe
-
-This command is sent by the client to unsubscribe the `consumer_id` from the associated topic.
-Parameters:
- * `request_id` → Id of the request.
- * `consumer_id` → Id of an already established consumer which needs to unsubscribe.
-
-## Service discovery
-
-### Topic lookup
-
-Topic lookup needs to be performed each time a client needs to create or
-reconnect a producer or a consumer. Lookup is used to discover which particular
-broker is serving the topic we are about to use.
-
-Lookup can be done with a REST call as described in the [admin API](admin-api-topics.md#look-up-topics-owner-broker) docs.
-
-Since Pulsar 1.16, it is also possible to perform the lookup within the binary
-protocol.
-
-For the sake of example, let's assume we have a service discovery component
-running at `pulsar://broker.example.com:6650`
-
-Individual brokers will be running at `pulsar://broker-1.example.com:6650`,
-`pulsar://broker-2.example.com:6650`, ...
-
-A client can use a connection to the discovery service host to issue a
-`LookupTopic` command. The response can either be a broker hostname to
-connect to, or a broker hostname to which the lookup should be retried.
-
-The `LookupTopic` command has to be used in a connection that has already
-gone through the `Connect` / `Connected` initial handshake.
-
-![Topic lookup](/assets/binary-protocol-topic-lookup.png)
-
-```protobuf
-
-message CommandLookupTopic {
-  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
-  "request_id" : 1,
-  "authoritative" : false
-}
-
-```
-
-Fields:
- * `topic` → Topic name to look up
- * `request_id` → Id of the request that will be passed with its response
- * `authoritative` → The initial lookup request should use false. When following a
-   redirect response, the client should pass the same value contained in the
-   response
-
-##### LookupTopicResponse
-
-Example of a response with a successful lookup:
-
-```protobuf
-
-message CommandLookupTopicResponse {
-  "request_id" : 1,
-  "response" : "Connect",
-  "brokerServiceUrl" : "pulsar://broker-1.example.com:6650",
-  "brokerServiceUrlTls" : "pulsar+ssl://broker-1.example.com:6651",
-  "authoritative" : true
-}
-
-```
-
-Example of a lookup response with redirection:
-
-```protobuf
-
-message CommandLookupTopicResponse {
-  "request_id" : 1,
-  "response" : "Redirect",
-  "brokerServiceUrl" : "pulsar://broker-2.example.com:6650",
-  "brokerServiceUrlTls" : "pulsar+ssl://broker-2.example.com:6651",
-  "authoritative" : true
-}
-
-```
-
-In this second case, we need to reissue the `LookupTopic` command request
-to `broker-2.example.com` and this broker will be able to give a definitive
-answer to the lookup request.
-
-### Partitioned topics discovery
-
-Partitioned topics metadata discovery is used to find out if a topic is a
-"partitioned topic" and how many partitions were set up.
-
-If the topic is marked as "partitioned", the client is expected to create
-multiple producers or consumers, one for each partition, using the `partition-X`
-suffix.
-
-This information only needs to be retrieved the first time a producer or
-consumer is created. There is no need to do this after reconnections.
-
-The discovery of partitioned topics metadata works very similarly to topic
-lookup. The client sends a request to the service discovery address, and the
-response contains the actual metadata.
-
-##### Command PartitionedTopicMetadata
-
-```protobuf
-
-message CommandPartitionedTopicMetadata {
-  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
-  "request_id" : 1
-}
-
-```
-
-Fields:
- * `topic` → The topic for which to check the partitions metadata
- * `request_id` → Id of the request that will be passed with its response
-
-##### Command PartitionedTopicMetadataResponse
-
-Example of a response with metadata:
-
-```protobuf
-
-message CommandPartitionedTopicMetadataResponse {
-  "request_id" : 1,
-  "response" : "Success",
-  "partitions" : 32
-}
-
-```
-
-## Protobuf interface
-
-All Pulsar's Protobuf definitions can be found {@inject: github:here:/pulsar-common/src/main/proto/PulsarApi.proto}.
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/functions-cli.md b/site2/website/versioned_docs/version-2.10.0-deprecated/functions-cli.md
deleted file mode 100644
index c9fcfa201525f0..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/functions-cli.md
+++ /dev/null
@@ -1,198 +0,0 @@
----
-id: functions-cli
-title: Pulsar Functions command line tool
-sidebar_label: "Reference: CLI"
-original_id: functions-cli
----
-
-The following tables list the Pulsar Functions command-line tools, including their modes, commands, and parameters.
-
-## localrun
-
-Run a Pulsar Function locally, rather than deploying it to the Pulsar cluster.
- -Name | Description | Default ----|---|--- -auto-ack | Whether or not the framework acknowledges messages automatically. | true | -broker-service-url | The URL for the Pulsar broker. | | -classname | The class name of a Pulsar Function.| | -client-auth-params | Client authentication parameter. | | -client-auth-plugin | Client authentication plugin using which function-process can connect to broker. | | -CPU | The CPU in cores that need to be allocated per function instance (applicable only to docker runtime).| | -custom-schema-inputs | The map of input topics to Schema class names (as a JSON string). | | -custom-serde-inputs | The map of input topics to SerDe class names (as a JSON string). | | -dead-letter-topic | The topic where all messages that were not processed successfully are sent. This parameter is not supported in Python Functions. | | -disk | The disk in bytes that need to be allocated per function instance (applicable only to docker runtime). | | -fqfn | The Fully Qualified Function Name (FQFN) for the function. | | -function-config-file | The path to a YAML config file specifying the configuration of a Pulsar Function. | | -go | Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -hostname-verification-enabled | Enable hostname verification. | false -inputs | The input topic or topics of a Pulsar Function (multiple topics can be specified as a comma-separated list). | | -jar | Path to the jar file for the function (if the function is written in Java). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -instance-id-offset | Start the instanceIds from this offset. | 0 -log-topic | The topic to which the logs a Pulsar Function are produced. | | -max-message-retries | How many times should we try to process a message before giving up. | | -name | The name of a Pulsar Function. | | -namespace | The namespace of a Pulsar Function. | | -output | The output topic of a Pulsar Function (If none is specified, no output is written). | | -output-serde-classname | The SerDe class to be used for messages output by the function. | | -parallelism | The parallelism factor of a Pulsar Function (i.e. the number of function instances to run). | | -processing-guarantees | The processing guarantees (delivery semantics) applied to the function. Available values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]. | ATLEAST_ONCE -py | Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -ram | The ram in bytes that need to be allocated per function instance (applicable only to process/docker runtime). | | -retain-ordering | Function consumes and processes messages in order. | | -schema-type | The builtin schema type or custom schema class name to be used for messages output by the function. | | -sliding-interval-count | The number of messages after which the window slides. 
| | -sliding-interval-duration-ms | The time duration after which the window slides. | | -subs-name | Pulsar source subscription name if user wants a specific subscription-name for the input-topic consumer. | | -tenant | The tenant of a Pulsar Function. | | -timeout-ms | The message timeout in milliseconds. | | -tls-allow-insecure | Allow insecure tls connection. | false -tls-trust-cert-path | tls trust cert file path. | | -topics-pattern | The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (only supported in Java Function). | | -use-tls | Use tls connection. | false -user-config | User-defined config key/values. | | -window-length-count | The number of messages per window. | | -window-length-duration-ms | The time duration of the window in milliseconds. | | - - -## create - -Create and deploy a Pulsar Function in cluster mode. - -Name | Description | Default ----|---|--- -auto-ack | Whether or not the framework acknowledges messages automatically. | true | -classname | The class name of a Pulsar Function. | | -CPU | The CPU in cores that need to be allocated per function instance (applicable only to docker runtime).| | -custom-runtime-options | A string that encodes options to customize the runtime, see docs for configured runtime for details | | -custom-schema-inputs | The map of input topics to Schema class names (as a JSON string). | | -custom-serde-inputs | The map of input topics to SerDe class names (as a JSON string). | | -dead-letter-topic | The topic where all messages that were not processed successfully are sent. This parameter is not supported in Python Functions. | | -disk | The disk in bytes that need to be allocated per function instance (applicable only to docker runtime). | | -fqfn | The Fully Qualified Function Name (FQFN) for the function. | | -function-config-file | The path to a YAML config file specifying the configuration of a Pulsar Function. | | -go | Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -inputs | The input topic or topics of a Pulsar Function (multiple topics can be specified as a comma-separated list). | | -jar | Path to the jar file for the function (if the function is written in Java). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -log-topic | The topic to which the logs of a Pulsar Function are produced. | | -max-message-retries | How many times should we try to process a message before giving up. | | -name | The name of a Pulsar Function. | | -namespace | The namespace of a Pulsar Function. | | -output | The output topic of a Pulsar Function (If none is specified, no output is written). | | -output-serde-classname | The SerDe class to be used for messages output by the function. | | -parallelism | The parallelism factor of a Pulsar Function (i.e. the number of function instances to run). | | -processing-guarantees | The processing guarantees (delivery semantics) applied to the function. Available values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]. 
| ATLEAST_ONCE -py | Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -ram | The ram in bytes that need to be allocated per function instance (applicable only to process/docker runtime). | | -retain-ordering | Function consumes and processes messages in order. | | -schema-type | The builtin schema type or custom schema class name to be used for messages output by the function. | | -sliding-interval-count | The number of messages after which the window slides. | | -sliding-interval-duration-ms | The time duration after which the window slides. | | -subs-name | Pulsar source subscription name if user wants a specific subscription-name for the input-topic consumer. | | -tenant | The tenant of a Pulsar Function. | | -timeout-ms | The message timeout in milliseconds. | | -topics-pattern | The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (only supported in Java Function). | | -user-config | User-defined config key/values. | | -window-length-count | The number of messages per window. | | -window-length-duration-ms | The time duration of the window in milliseconds. | | - -## delete - -Delete a Pulsar Function that is running on a Pulsar cluster. - -Name | Description | Default ----|---|--- -fqfn | The Fully Qualified Function Name (FQFN) for the function. | | -name | The name of a Pulsar Function. | | -namespace | The namespace of a Pulsar Function. | | -tenant | The tenant of a Pulsar Function. | | - -## update - -Update a Pulsar Function that has been deployed to a Pulsar cluster. - -Name | Description | Default ----|---|--- -auto-ack | Whether or not the framework acknowledges messages automatically. | true | -classname | The class name of a Pulsar Function. | | -CPU | The CPU in cores that need to be allocated per function instance (applicable only to docker runtime). | | -custom-runtime-options | A string that encodes options to customize the runtime, see docs for configured runtime for details | | -custom-schema-inputs | The map of input topics to Schema class names (as a JSON string). | | -custom-serde-inputs | The map of input topics to SerDe class names (as a JSON string). | | -dead-letter-topic | The topic where all messages that were not processed successfully are sent. This parameter is not supported in Python Functions. | | -disk | The disk in bytes that need to be allocated per function instance (applicable only to docker runtime). | | -fqfn | The Fully Qualified Function Name (FQFN) for the function. | | -function-config-file | The path to a YAML config file specifying the configuration of a Pulsar Function. | | -go | Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -inputs | The input topic or topics of a Pulsar Function (multiple topics can be specified as a comma-separated list). | | -jar | Path to the jar file for the function (if the function is written in Java). 
 It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | |
-log-topic | The topic to which the logs of a Pulsar Function are produced. | |
-max-message-retries | How many times should we try to process a message before giving up. | |
-name | The name of a Pulsar Function. | |
-namespace | The namespace of a Pulsar Function. | |
-output | The output topic of a Pulsar Function (If none is specified, no output is written). | |
-output-serde-classname | The SerDe class to be used for messages output by the function. | |
-parallelism | The parallelism factor of a Pulsar Function (i.e. the number of function instances to run). | |
-processing-guarantees | The processing guarantees (delivery semantics) applied to the function. Available values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]. | ATLEAST_ONCE
-py | Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | |
-ram | The ram in bytes that need to be allocated per function instance (applicable only to process/docker runtime). | |
-retain-ordering | Function consumes and processes messages in order. | |
-schema-type | The builtin schema type or custom schema class name to be used for messages output by the function. | |
-sliding-interval-count | The number of messages after which the window slides. | |
-sliding-interval-duration-ms | The time duration after which the window slides. | |
-subs-name | Pulsar source subscription name if user wants a specific subscription-name for the input-topic consumer. | |
-tenant | The tenant of a Pulsar Function. | |
-timeout-ms | The message timeout in milliseconds. | |
-topics-pattern | The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (only supported in Java Function). | |
-update-auth-data | Whether or not to update the auth data. | false
-user-config | User-defined config key/values. | |
-window-length-count | The number of messages per window. | |
-window-length-duration-ms | The time duration of the window in milliseconds. | |
-
-## get
-
-Fetch information about a Pulsar Function.
-
-Name | Description | Default
----|---|---
-fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
-name | The name of a Pulsar Function. | |
-namespace | The namespace of a Pulsar Function. | |
-tenant | The tenant of a Pulsar Function. | |
-
-## restart
-
-Restart a function instance.
-
-Name | Description | Default
----|---|---
-fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
-instance-id | The function instanceId (restarts all instances if instance-id is not provided). | |
-name | The name of a Pulsar Function. | |
-namespace | The namespace of a Pulsar Function. | |
-tenant | The tenant of a Pulsar Function. | |
-
-## stop
-
-Stop a function instance.
-
-Name | Description | Default
----|---|---
-fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
-instance-id | The function instanceId (stops all instances if instance-id is not provided). | |
-name | The name of a Pulsar Function.
 | |
-namespace | The namespace of a Pulsar Function. | |
-tenant | The tenant of a Pulsar Function. | |
-
-## start
-
-Start a stopped function instance.
-
-Name | Description | Default
----|---|---
-fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
-instance-id | The function instanceId (starts all instances if instance-id is not provided). | |
-name | The name of a Pulsar Function. | |
-namespace | The namespace of a Pulsar Function. | |
-tenant | The tenant of a Pulsar Function. | |
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/functions-debug.md b/site2/website/versioned_docs/version-2.10.0-deprecated/functions-debug.md
deleted file mode 100644
index c1f19abda64657..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/functions-debug.md
+++ /dev/null
@@ -1,538 +0,0 @@
----
-id: functions-debug
-title: Debug Pulsar Functions
-sidebar_label: "How-to: Debug"
-original_id: functions-debug
----
-
-You can use the following methods to debug Pulsar Functions:
-
-* [Captured stderr](functions-debug.md#captured-stderr)
-* [Use unit test](functions-debug.md#use-unit-test)
-* [Debug with localrun mode](functions-debug.md#debug-with-localrun-mode)
-* [Use log topic](functions-debug.md#use-log-topic)
-* [Use Functions CLI](functions-debug.md#use-functions-cli)
-
-## Captured stderr
-
-Function startup information and captured stderr output is written to `logs/functions/<tenant>/<namespace>/<function_name>/<function_name>-<instance_id>.log`.
-
-This is useful for debugging why a function fails to start.
-
-## Use unit test
-
-A Pulsar Function is a function with inputs and outputs, so you can test a Pulsar Function in a similar way as you test any function.
-
-For example, if you have the following Pulsar Function:
-
-```java
-
-import java.util.function.Function;
-
-public class JavaNativeExclamationFunction implements Function<String, String> {
-    @Override
-    public String apply(String input) {
-        return String.format("%s!", input);
-    }
-}
-
-```
-
-You can write a simple unit test to test this Pulsar Function.
-
-:::tip
-
-Pulsar uses TestNG for testing.
-
-:::
-
-```java
-
-@Test
-public void testJavaNativeExclamationFunction() {
-    JavaNativeExclamationFunction exclamation = new JavaNativeExclamationFunction();
-    String output = exclamation.apply("foo");
-    Assert.assertEquals(output, "foo!");
-}
-
-```
-
-The following Pulsar Function implements the `org.apache.pulsar.functions.api.Function` interface.
-
-```java
-
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-
-public class ExclamationFunction implements Function<String, String> {
-    @Override
-    public String process(String input, Context context) {
-        return String.format("%s!", input);
-    }
-}
-
-```
-
-In this situation, you can write a unit test for this function as well. Remember to mock the `Context` parameter. The following is an example.
-
-:::tip
-
-Pulsar uses TestNG for testing.
-
-:::
-
-```java
-
-@Test
-public void testExclamationFunction() {
-    ExclamationFunction exclamation = new ExclamationFunction();
-    String output = exclamation.process("foo", mock(Context.class));
-    Assert.assertEquals(output, "foo!");
-}
-
-```
-
-## Debug with localrun mode
-When you run a Pulsar Function in localrun mode, it launches an instance of the function on your local machine as a thread.
-
-In this mode, a Pulsar Function consumes and produces actual data to a Pulsar cluster, and mirrors how the function actually runs in a Pulsar cluster.
-
-:::note
-
-Currently, debugging with localrun mode is only supported by Pulsar Functions written in Java. You need Pulsar version 2.4.0 or later to do the following. Even though localrun is available in versions earlier than Pulsar 2.4.0, you cannot debug with localrun mode programmatically or run Functions as threads.
-
-:::
-
-You can launch your function in the following manner.
-
-```java
-
-FunctionConfig functionConfig = new FunctionConfig();
-functionConfig.setName(functionName);
-functionConfig.setInputs(Collections.singleton(sourceTopic));
-functionConfig.setClassName(ExclamationFunction.class.getName());
-functionConfig.setRuntime(FunctionConfig.Runtime.JAVA);
-functionConfig.setOutput(sinkTopic);
-
-LocalRunner localRunner = LocalRunner.builder().functionConfig(functionConfig).build();
-localRunner.start(true);
-
-```
-
-This way, you can easily debug functions using an IDE: set breakpoints and manually step through a function to debug with real data.
-
-The following example illustrates how to programmatically launch a function in localrun mode.
-
-```java
-
-public class ExclamationFunction implements Function<String, String> {
-
-    @Override
-    public String process(String s, Context context) throws Exception {
-        return s + "!";
-    }
-
-    public static void main(String[] args) throws Exception {
-        FunctionConfig functionConfig = new FunctionConfig();
-        functionConfig.setName("exclamation");
-        functionConfig.setInputs(Collections.singleton("input"));
-        functionConfig.setClassName(ExclamationFunction.class.getName());
-        functionConfig.setRuntime(FunctionConfig.Runtime.JAVA);
-        functionConfig.setOutput("output");
-
-        LocalRunner localRunner = LocalRunner.builder().functionConfig(functionConfig).build();
-        localRunner.start(false);
-    }
-}
-
-```
-
-To use localrun mode programmatically, add the following dependency.
-
-```xml
-
-<dependency>
-    <groupId>org.apache.pulsar</groupId>
-    <artifactId>pulsar-functions-local-runner</artifactId>
-    <version>${pulsar.version}</version>
-</dependency>
-
-```
-
-For complete code samples, see [here](https://github.com/jerrypeng/pulsar-functions-demos/tree/master/debugging).
-
-:::note
-
-Debugging with localrun mode for Pulsar Functions written in other languages will be supported soon.
-
-:::
-
-## Use log topic
-
-In Pulsar Functions, you can generate log information defined in functions to a specified log topic. You can configure consumers to consume messages from a specified log topic to check the log information.
-
-![Pulsar Functions core programming model](/assets/pulsar-functions-overview.png)
-
-**Example**
-
-```java
-
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-import org.slf4j.Logger;
-
-public class LoggingFunction implements Function<String, Void> {
-    @Override
-    public Void apply(String input, Context context) {
-        Logger LOG = context.getLogger();
-        String messageId = new String(context.getMessageId());
-
-        if (input.contains("danger")) {
-            LOG.warn("A warning was received in message {}", messageId);
-        } else {
-            LOG.info("Message {} received\nContent: {}", messageId, input);
-        }
-
-        return null;
-    }
-}
-
-```
-
-As shown in the example above, you can get the logger via `context.getLogger()` and assign the logger to the `LOG` variable of `slf4j`, so you can define your desired log information in a function using the `LOG` variable. Meanwhile, you need to specify the topic to which the log information is produced.
-
-**Example**
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --log-topic persistent://public/default/logging-function-logs \
-  # Other function configs
-
-```
-
-The message published to the log topic contains several properties that help you reason about each log entry:
-- `loglevel` -- the level of the log message.
-- `fqn` -- the fully qualified name of the function that pushes this log message.
-- `instance` -- the ID of the function instance that pushes this log message.
-
-## Use Functions CLI
-
-With [Pulsar Functions CLI](reference-pulsar-admin.md#functions), you can debug Pulsar Functions with the following subcommands:
-
-* `get`
-* `status`
-* `stats`
-* `list`
-* `trigger`
-
-:::tip
-
-For complete commands of **Pulsar Functions CLI**, see [here](reference-pulsar-admin.md#functions).
-
-:::
-
-### `get`
-
-Get information about a Pulsar Function.
-
-**Usage**
-
-```bash
-
-$ pulsar-admin functions get options
-
-```
-
-**Options**
-
-|Flag|Description
-|---|---
-|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function.
-|`--name`|The name of a Pulsar Function.
-|`--namespace`|The namespace of a Pulsar Function.
-|`--tenant`|The tenant of a Pulsar Function.
-
-:::tip
-
-`--fqfn` consists of `--name`, `--namespace` and `--tenant`, so you can specify either `--fqfn` or the combination of `--name`, `--namespace`, and `--tenant`.
-
-:::
-
-**Example**
-
-You can specify `--fqfn` to get information about a Pulsar Function.
-
-```bash
-
-$ ./bin/pulsar-admin functions get public/default/ExclamationFunctio6
-
-```
-
-Optionally, you can specify `--name`, `--namespace` and `--tenant` to get information about a Pulsar Function.
-
-```bash
-
-$ ./bin/pulsar-admin functions get \
-    --tenant public \
-    --namespace default \
-    --name ExclamationFunctio6
-
-```
-
-As shown below, the `get` command shows input, output, runtime, and other information about the _ExclamationFunctio6_ function.
-
-```json
-
-{
-  "tenant": "public",
-  "namespace": "default",
-  "name": "ExclamationFunctio6",
-  "className": "org.example.test.ExclamationFunction",
-  "inputSpecs": {
-    "persistent://public/default/my-topic-1": {
-      "isRegexPattern": false
-    }
-  },
-  "output": "persistent://public/default/test-1",
-  "processingGuarantees": "ATLEAST_ONCE",
-  "retainOrdering": false,
-  "userConfig": {},
-  "runtime": "JAVA",
-  "autoAck": true,
-  "parallelism": 1
-}
-
-```
-
-### `status`
-
-Check the current status of a Pulsar Function.
-
-**Usage**
-
-```bash
-
-$ pulsar-admin functions status options
-
-```
-
-**Options**
-
-|Flag|Description
-|---|---
-|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function.
-|`--instance-id`|The instance ID of a Pulsar Function.
    If the `--instance-id` is not specified, it gets the IDs of all instances.
    -|`--name`|The name of a Pulsar Function. -|`--namespace`|The namespace of a Pulsar Function. -|`--tenant`|The tenant of a Pulsar Function. - -**Example** - -```bash - -$ ./bin/pulsar-admin functions status \ - --tenant public \ - --namespace default \ - --name ExclamationFunctio6 \ - -``` - -As shown below, the `status` command shows the number of instances, running instances, the instance running under the _ExclamationFunctio6_ function, received messages, successfully processed messages, system exceptions, the average latency and so on. - -```json - -{ - "numInstances" : 1, - "numRunning" : 1, - "instances" : [ { - "instanceId" : 0, - "status" : { - "running" : true, - "error" : "", - "numRestarts" : 0, - "numReceived" : 1, - "numSuccessfullyProcessed" : 1, - "numUserExceptions" : 0, - "latestUserExceptions" : [ ], - "numSystemExceptions" : 0, - "latestSystemExceptions" : [ ], - "averageLatency" : 0.8385, - "lastInvocationTime" : 1557734137987, - "workerId" : "c-standalone-fw-23ccc88ef29b-8080" - } - } ] -} - -``` - -### `stats` - -Get the current stats of a Pulsar Function. - -**Usage** - -```bash - -$ pulsar-admin functions stats options - -``` - -**Options** - -|Flag|Description -|---|--- -|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function. -|`--instance-id`|The instance ID of a Pulsar Function.
    If the `--instance-id` is not specified, it gets the IDs of all instances.
    -|`--name`|The name of a Pulsar Function. -|`--namespace`|The namespace of a Pulsar Function. -|`--tenant`|The tenant of a Pulsar Function. - -**Example** - -```bash - -$ ./bin/pulsar-admin functions stats \ - --tenant public \ - --namespace default \ - --name ExclamationFunctio6 \ - -``` - -The output is shown as follows: - -```json - -{ - "receivedTotal" : 1, - "processedSuccessfullyTotal" : 1, - "systemExceptionsTotal" : 0, - "userExceptionsTotal" : 0, - "avgProcessLatency" : 0.8385, - "1min" : { - "receivedTotal" : 0, - "processedSuccessfullyTotal" : 0, - "systemExceptionsTotal" : 0, - "userExceptionsTotal" : 0, - "avgProcessLatency" : null - }, - "lastInvocation" : 1557734137987, - "instances" : [ { - "instanceId" : 0, - "metrics" : { - "receivedTotal" : 1, - "processedSuccessfullyTotal" : 1, - "systemExceptionsTotal" : 0, - "userExceptionsTotal" : 0, - "avgProcessLatency" : 0.8385, - "1min" : { - "receivedTotal" : 0, - "processedSuccessfullyTotal" : 0, - "systemExceptionsTotal" : 0, - "userExceptionsTotal" : 0, - "avgProcessLatency" : null - }, - "lastInvocation" : 1557734137987, - "userMetrics" : { } - } - } ] -} - -``` - -### `list` - -List all Pulsar Functions running under a specific tenant and namespace. - -**Usage** - -```bash - -$ pulsar-admin functions list options - -``` - -**Options** - -|Flag|Description -|---|--- -|`--namespace`|The namespace of a Pulsar Function. -|`--tenant`|The tenant of a Pulsar Function. - -**Example** - -```bash - -$ ./bin/pulsar-admin functions list \ - --tenant public \ - --namespace default - -``` - -As shown below, the `list` command returns three functions running under the _public_ tenant and the _default_ namespace. - -```text - -ExclamationFunctio1 -ExclamationFunctio2 -ExclamationFunctio3 - -``` - -### `trigger` - -Trigger a specified Pulsar Function with a supplied value. This command simulates the execution process of a Pulsar Function and verifies it. - -**Usage** - -```bash - -$ pulsar-admin functions trigger options - -``` - -**Options** - -|Flag|Description -|---|--- -|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function. -|`--name`|The name of a Pulsar Function. -|`--namespace`|The namespace of a Pulsar Function. -|`--tenant`|The tenant of a Pulsar Function. -|`--topic`|The topic name that a Pulsar Function consumes from. -|`--trigger-file`|The path to a file that contains the data to trigger a Pulsar Function. -|`--trigger-value`|The value to trigger a Pulsar Function. - -**Example** - -```bash - -$ ./bin/pulsar-admin functions trigger \ - --tenant public \ - --namespace default \ - --name ExclamationFunctio6 \ - --topic persistent://public/default/my-topic-1 \ - --trigger-value "hello pulsar functions" - -``` - -As shown below, the `trigger` command returns the following result: - -```text - -This is my function! - -``` - -:::note - -You must specify the [entire topic name](getting-started-pulsar.md#topic-names) when using the `--topic` option. Otherwise, the following error occurs. 
- -```text - -Function in trigger function has unidentified topic -Reason: Function in trigger function has unidentified topic - -``` - -::: - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/functions-deploy.md b/site2/website/versioned_docs/version-2.10.0-deprecated/functions-deploy.md deleted file mode 100644 index 826804db6bbb73..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/functions-deploy.md +++ /dev/null @@ -1,262 +0,0 @@ ---- -id: functions-deploy -title: Deploy Pulsar Functions -sidebar_label: "How-to: Deploy" -original_id: functions-deploy ---- - -## Requirements - -To deploy and manage Pulsar Functions, you need to have a Pulsar cluster running. There are several options for this: - -* You can run a [standalone cluster](getting-started-standalone.md) locally on your own machine. -* You can deploy a Pulsar cluster on [Kubernetes](deploy-kubernetes.md), [Amazon Web Services](deploy-aws.md), [bare metal](deploy-bare-metal.md), DC/OS, and more. - -If you run a non-[standalone](reference-terminology.md#standalone) cluster, you need to obtain the service URL for the cluster. How you obtain the service URL depends on how you deploy your Pulsar cluster. - -If you want to deploy and trigger Python user-defined functions, you need to install [the pulsar python client](client-libraries-python.md) on all the machines running [functions workers](functions-worker.md). - -## Command-line interface - -Pulsar Functions are deployed and managed using the [`pulsar-admin functions`](reference-pulsar-admin.md#functions) interface, which contains commands such as [`create`](reference-pulsar-admin.md#functions-create) for deploying functions in [cluster mode](#cluster-mode), [`trigger`](reference-pulsar-admin.md#trigger) for [triggering](#triggering-pulsar-functions) functions, [`list`](reference-pulsar-admin.md#list-2) for listing deployed functions. - -To learn more commands, refer to [`pulsar-admin functions`](reference-pulsar-admin.md#functions). - -### Default arguments - -When managing Pulsar Functions, you need to specify a variety of information about functions, including tenant, namespace, input and output topics, and so on. However, some parameters have default values if you do not specify values for them. The following table lists the default values. - -Parameter | Default -:---------|:------- -Function name | You can specify any value for the class name (except org, library, or similar class names). For example, when you specify the flag `--classname org.example.MyFunction`, the function name is `MyFunction`. -Tenant | Derived from names of the input topics. If the input topics are under the `marketing` tenant, which means the topic names have the form `persistent://marketing/{namespace}/{topicName}`, the tenant is `marketing`. -Namespace | Derived from names of the input topics. If the input topics are under the `asia` namespace under the `marketing` tenant, which means the topic names have the form `persistent://marketing/asia/{topicName}`, then the namespace is `asia`. -Output topic | `{input topic}-{function name}-output`. For example, if an input topic name of a function is `incoming`, and the function name is `exclamation`, then the name of the output topic is `incoming-exclamation-output`. 
-Subscription type | For `at-least-once` and `at-most-once` [processing guarantees](functions-overview.md#processing-guarantees), the [`SHARED`](concepts-messaging.md#shared) mode is applied by default; for `effectively-once` guarantees, the [`FAILOVER`](concepts-messaging.md#failover) mode is applied. -Processing guarantees | [`ATLEAST_ONCE`](functions-overview.md#processing-guarantees) -Pulsar service URL | `pulsar://localhost:6650` - -### Example of default arguments - -Take the `create` command as an example. - -```bash - -$ bin/pulsar-admin functions create \ - --jar my-pulsar-functions.jar \ - --classname org.example.MyFunction \ - --inputs my-function-input-topic1,my-function-input-topic2 - -``` - -The function has default values for the function name (`MyFunction`), tenant (`public`), namespace (`default`), subscription type (`SHARED`), processing guarantees (`ATLEAST_ONCE`), and Pulsar service URL (`pulsar://localhost:6650`). - -## Local run mode - -If you run a Pulsar Function in **local run** mode, it runs on the machine from which you enter the commands (on your laptop, an [AWS EC2](https://aws.amazon.com/ec2/) instance, and so on). The following is a [`localrun`](reference-pulsar-admin.md#localrun) command example. - -```bash - -$ bin/pulsar-admin functions localrun \ - --py myfunc.py \ - --classname myfunc.SomeFunction \ - --inputs persistent://public/default/input-1 \ - --output persistent://public/default/output-1 - -``` - -By default, the function connects to a Pulsar cluster running on the same machine, via a local [broker](reference-terminology.md#broker) service URL of `pulsar://localhost:6650`. If you use local run mode to run a function but connect it to a non-local Pulsar cluster, you can specify a different broker URL using the `--brokerServiceUrl` flag. The following is an example. - -```bash - -$ bin/pulsar-admin functions localrun \ - --broker-service-url pulsar://my-cluster-host:6650 \ - # Other function parameters - -``` - -## Cluster mode - -When you run a Pulsar Function in **cluster** mode, the function code is uploaded to a Pulsar broker and runs *alongside the broker* rather than in your [local environment](#local-run-mode). You can run a function in cluster mode using the [`create`](reference-pulsar-admin.md#create-1) command. - -```bash - -$ bin/pulsar-admin functions create \ - --py myfunc.py \ - --classname myfunc.SomeFunction \ - --inputs persistent://public/default/input-1 \ - --output persistent://public/default/output-1 - -``` - -### Update functions in cluster mode - -You can use the [`update`](reference-pulsar-admin.md#update-1) command to update a Pulsar Function running in cluster mode. The following command updates the function created in the [cluster mode](#cluster-mode) section. - -```bash - -$ bin/pulsar-admin functions update \ - --py myfunc.py \ - --classname myfunc.SomeFunction \ - --inputs persistent://public/default/new-input-topic \ - --output persistent://public/default/new-output-topic - -``` - -### Parallelism - -Pulsar Functions run as processes or threads, which are called **instances**. When you run a Pulsar Function, it runs as a single instance by default. With one localrun command, you can only run a single instance of a function. If you want to run multiple instances, you can use localrun command multiple times. - -When you create a function, you can specify the *parallelism* of a function (the number of instances to run). 
You can set the parallelism factor using the `--parallelism` flag of the [`create`](reference-pulsar-admin.md#functions-create) command. - -```bash - -$ bin/pulsar-admin functions create \ - --parallelism 3 \ - # Other function info - -``` - -You can adjust the parallelism of an already created function using the [`update`](reference-pulsar-admin.md#update-1) interface. - -```bash - -$ bin/pulsar-admin functions update \ - --parallelism 5 \ - # Other function - -``` - -If you specify a function configuration via YAML, use the `parallelism` parameter. The following is a config file example. - -```yaml - -# function-config.yaml -parallelism: 3 -inputs: -- persistent://public/default/input-1 -output: persistent://public/default/output-1 -# other parameters - -``` - -The following is corresponding update command. - -```bash - -$ bin/pulsar-admin functions update \ - --function-config-file function-config.yaml - -``` - -### Function instance resources - -When you run Pulsar Functions in [cluster mode](#cluster-mode), you can specify the resources that are assigned to each function [instance](#parallelism). - -Resource | Specified as | Runtimes -:--------|:----------------|:-------- -CPU | The number of cores | Kubernetes -RAM | The number of bytes | Process, Docker -Disk space | The number of bytes | Docker - -The following function creation command allocates 8 cores, 8 GB of RAM, and 10 GB of disk space to a function. - -```bash - -$ bin/pulsar-admin functions create \ - --jar target/my-functions.jar \ - --classname org.example.functions.MyFunction \ - --cpu 8 \ - --ram 8589934592 \ - --disk 10737418240 - -``` - -> #### Resources are *per instance* -> The resources that you apply to a given Pulsar Function are applied to each instance of the function. For example, if you apply 8 GB of RAM to a function with a parallelism of 5, you are applying 40 GB of RAM for the function in total. Make sure that you take the parallelism (the number of instances) factor into your resource calculations. - -### Use Package management service - -Package management enables version management and simplifies the upgrade and rollback processes for Functions, Sinks, and Sources. When you use the same function, sink and source in different namespaces, you can upload them to a common package management system. - -To use [Package management service](admin-api-packages.md), ensure that the package management service has been enabled in your cluster by setting the following properties in `broker.conf`. - -> Note: Package management service is not enabled by default. - -```yaml - -enablePackagesManagement=true -packagesManagementStorageProvider=org.apache.pulsar.packages.management.storage.bookkeeper.BookKeeperPackagesStorageProvider -packagesReplicas=1 -packagesManagementLedgerRootPath=/ledgers - -``` - -With Package management service enabled, you can upload your function packages by [upload a package](admin-api-packages.md#upload-a-package) to the service and get the [package URL](admin-api-packages.md#package-url). - -When you have a ready to use package URL, you can create the function with package URL by setting `--jar`, `--py`, or `--go` to the package URL with `pulsar-admin functions create`. - -## Trigger Pulsar Functions - -If a Pulsar Function is running in [cluster mode](#cluster-mode), you can **trigger** it at any time using the command line. Triggering a function means that you send a message with a specific value to the function and get the function output (if any) via the command line. 
- -> Triggering a function is to invoke a function by producing a message on one of the input topics. With the [`pulsar-admin functions trigger`](reference-pulsar-admin.md#trigger) command, you can send messages to functions without using the [`pulsar-client`](reference-cli-tools.md#pulsar-client) tool or a language-specific client library. - -To learn how to trigger a function, you can start with Python function that returns a simple string based on the input. - -```python - -# myfunc.py -def process(input): - return "This function has been triggered with a value of {0}".format(input) - -``` - -You can run the function in [local run mode](functions-deploy.md#local-run-mode). - -```bash - -$ bin/pulsar-admin functions create \ - --tenant public \ - --namespace default \ - --name myfunc \ - --py myfunc.py \ - --classname myfunc \ - --inputs persistent://public/default/in \ - --output persistent://public/default/out - -``` - -Then assign a consumer to listen on the output topic for messages from the `myfunc` function with the [`pulsar-client consume`](reference-cli-tools.md#consume) command. - -```bash - -$ bin/pulsar-client consume persistent://public/default/out \ - --subscription-name my-subscription - --num-messages 0 # Listen indefinitely - -``` - -And then you can trigger the function. - -```bash - -$ bin/pulsar-admin functions trigger \ - --tenant public \ - --namespace default \ - --name myfunc \ - --trigger-value "hello world" - -``` - -The consumer listening on the output topic produces something as follows in the log. - -``` - ------ got message ----- -This function has been triggered with a value of hello world - -``` - -> #### Topic info is not required -> In the `trigger` command, you only need to specify basic information about the function (tenant, namespace, and name). To trigger the function, you do not need to know the function input topics. diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/functions-develop.md b/site2/website/versioned_docs/version-2.10.0-deprecated/functions-develop.md deleted file mode 100644 index c32199517cfcc2..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/functions-develop.md +++ /dev/null @@ -1,1678 +0,0 @@ ---- -id: functions-develop -title: Develop Pulsar Functions -sidebar_label: "How-to: Develop" -original_id: functions-develop ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -You learn how to develop Pulsar Functions with different APIs for Java, Python and Go. - -## Available APIs -In Java and Python, you have two options to write Pulsar Functions. In Go, you can use Pulsar Functions SDK for Go. - -Interface | Description | Use cases -:---------|:------------|:--------- -Language-native interface | No Pulsar-specific libraries or special dependencies required (only core libraries from Java/Python). | Functions that do not require access to the function [context](#context). -Pulsar Function SDK for Java/Python/Go | Pulsar-specific libraries that provide a range of functionality not provided by "native" interfaces. | Functions that require access to the function [context](#context). -Extended Pulsar Function SDK for Java | An extension to Pulsar-specific libraries, providing the initialization and close interfaces in Java. | Functions that require initializing and releasing external resources. 
- -### Language-native interface -The language-native function, which adds an exclamation point to all incoming strings and publishes the resulting string to a topic, has no external dependencies. The following example is language-native function. - -````mdx-code-block - - - -```Java - -import java.util.function.Function; - -public class JavaNativeExclamationFunction implements Function { - @Override - public String apply(String input) { - return String.format("%s!", input); - } -} - -``` - -For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/JavaNativeExclamationFunction.java). - - - - -```python - -def process(input): - return "{}!".format(input) - -``` - -For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/python-examples/native_exclamation_function.py). - -:::note - -You can write Pulsar Functions in python2 or python3. However, Pulsar only looks for `python` as the interpreter. -If you're running Pulsar Functions on an Ubuntu system that only supports python3, you might fail to -start the functions. In this case, you can create a symlink. Your system will fail if -you subsequently install any other package that depends on Python 2.x. A solution is under development in [Issue 5518](https://github.com/apache/pulsar/issues/5518). - -```bash - -sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 10 - -``` - -::: - - - - -```` - -### Pulsar Function SDK for Java/Python/Go -The following example uses Pulsar Functions SDK. -````mdx-code-block - - - -```Java - -import org.apache.pulsar.functions.api.Context; -import org.apache.pulsar.functions.api.Function; - -public class ExclamationFunction implements Function { - @Override - public String process(String input, Context context) { - return String.format("%s!", input); - } -} - -``` - -For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/ExclamationFunction.java). - - - - -```python - -from pulsar import Function - -class ExclamationFunction(Function): - def __init__(self): - pass - - def process(self, input, context): - return input + '!' - -``` - -For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/python-examples/exclamation_function.py). - - - - -```Go - -package main - -import ( - "context" - "fmt" - - "github.com/apache/pulsar/pulsar-function-go/pf" -) - -func HandleRequest(ctx context.Context, in []byte) error{ - fmt.Println(string(in) + "!") - return nil -} - -func main() { - pf.Start(HandleRequest) -} - -``` - -For complete code, see [here](https://github.com/apache/pulsar/blob/77cf09eafa4f1626a53a1fe2e65dd25f377c1127/pulsar-function-go/examples/inputFunc/inputFunc.go#L20-L36). - - - - -```` - -### Extended Pulsar Function SDK for Java -This extended Pulsar Function SDK provides two additional interfaces to initialize and release external resources. -- By using the `initialize` interface, you can initialize external resources which only need one-time initialization when the function instance starts. -- By using the `close` interface, you can close the referenced external resources when the function instance closes. - -:::note - -The extended Pulsar Function SDK for Java is available in Pulsar 2.10.0 and later versions. -Before using it, you need to set up Pulsar Function worker 2.10.0 or later versions. 
-
-:::
-
-The following example uses the extended interface of Pulsar Function SDK for Java to initialize a RedisClient when the function instance starts and release it when the function instance closes.
-
-````mdx-code-block
-
-
-```Java
-
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-import io.lettuce.core.RedisClient;
-import io.lettuce.core.api.sync.RedisCommands;
-
-public class InitializableFunction implements Function<String, String> {
-    private RedisClient redisClient;
-    private RedisCommands<String, String> commands;
-
-    private void initRedisClient(Map<String, Object> connectInfo) {
-        redisClient = RedisClient.create(String.valueOf(connectInfo.get("redisURI")));
-        // Open one synchronous connection when the instance starts and reuse it per message
-        commands = redisClient.connect().sync();
-    }
-
-    @Override
-    public void initialize(Context context) {
-        Map<String, Object> connectInfo = context.getUserConfigMap();
-        initRedisClient(connectInfo);
-    }
-
-    @Override
-    public String process(String input, Context context) {
-        // Look up the incoming value as a Redis key
-        String value = commands.get(input);
-        return String.format("%s-%s", input, value);
-    }
-
-    @Override
-    public void close() {
-        redisClient.close();
-    }
-}
-
-```
-
-
-````
-
-## Schema registry
-Pulsar has a built-in schema registry and is bundled with popular schema types, such as Avro, JSON and Protobuf. Pulsar Functions can leverage the existing schema information from input topics and derive the input type. The schema registry applies to output topics as well.
-
-## SerDe
-SerDe stands for **Ser**ialization and **De**serialization. Pulsar Functions uses SerDe when publishing data to and consuming data from Pulsar topics. How SerDe works by default depends on the language you use for a particular function.
-
-````mdx-code-block
-
-
-When you write Pulsar Functions in Java, the following basic Java types are built in and supported by default: `String`, `Double`, `Integer`, `Float`, `Long`, `Short`, and `Byte`.
-
-To customize Java types, you need to implement the following interface.
-
-```java
-
-public interface SerDe<T> {
-    T deserialize(byte[] input);
-    byte[] serialize(T input);
-}
-
-```
-
-SerDe works in the following ways in Java Functions.
-- If the input and output topics have schema, Pulsar Functions use schema for SerDe.
-- If the input or output topics do not exist, Pulsar Functions adopt the following rules to determine SerDe:
-  - If the schema type is specified, Pulsar Functions use the specified schema type.
-  - If SerDe is specified, Pulsar Functions use the specified SerDe, and the schema type for input and output topics is `Byte`.
-  - If neither the schema type nor SerDe is specified, Pulsar Functions use the built-in SerDe. For non-primitive schema type, the built-in SerDe serializes and deserializes objects in the `JSON` format.
-
-
-In Python, the default SerDe is identity, meaning that the type is serialized as whatever type the producer function returns.
-
-You can specify the SerDe when [creating](functions-deploy.md#cluster-mode) or [running](functions-deploy.md#local-run-mode) functions.
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --tenant public \
-  --namespace default \
-  --name my_function \
-  --py my_function.py \
-  --classname my_function.MyFunction \
-  --custom-serde-inputs '{"input-topic-1":"Serde1","input-topic-2":"Serde2"}' \
-  --output-serde-classname Serde3 \
-  --output output-topic-1
-
-```
-
-This case contains two input topics: `input-topic-1` and `input-topic-2`, each of which is mapped to a different SerDe class (the map must be specified as a JSON string). The output topic, `output-topic-1`, uses the `Serde3` class for SerDe.
At the moment, all Pulsar Functions logic, include processing function and SerDe classes, must be contained within a single Python file. - -When using Pulsar Functions for Python, you have three SerDe options: - -1. You can use the [`IdentitySerde`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L70), which leaves the data unchanged. The `IdentitySerDe` is the **default**. Creating or running a function without explicitly specifying SerDe means that this option is used. -2. You can use the [`PickleSerDe`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L62), which uses Python [`pickle`](https://docs.python.org/3/library/pickle.html) for SerDe. -3. You can create a custom SerDe class by implementing the baseline [`SerDe`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L50) class, which has just two methods: [`serialize`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L53) for converting the object into bytes, and [`deserialize`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L58) for converting bytes into an object of the required application-specific type. - -The table below shows when you should use each SerDe. - -SerDe option | When to use -:------------|:----------- -`IdentitySerde` | When you work with simple types like strings, Booleans, integers. -`PickleSerDe` | When you work with complex, application-specific types and are comfortable with the "best effort" approach of `pickle`. -Custom SerDe | When you require explicit control over SerDe, potentially for performance or data compatibility purposes. - - - - -Currently, the feature is not available in Go. - - - - -```` - -### Example -Imagine that you're writing Pulsar Functions that are processing tweet objects, you can refer to the following example of `Tweet` class. - -````mdx-code-block - - - -```java - -public class Tweet { - private String username; - private String tweetContent; - - public Tweet(String username, String tweetContent) { - this.username = username; - this.tweetContent = tweetContent; - } - - // Standard setters and getters -} - -``` - -To pass `Tweet` objects directly between Pulsar Functions, you need to provide a custom SerDe class. In the example below, `Tweet` objects are basically strings in which the username and tweet content are separated by a `|`. - -```java - -package com.example.serde; - -import org.apache.pulsar.functions.api.SerDe; - -import java.util.regex.Pattern; - -public class TweetSerde implements SerDe { - public Tweet deserialize(byte[] input) { - String s = new String(input); - String[] fields = s.split(Pattern.quote("|")); - return new Tweet(fields[0], fields[1]); - } - - public byte[] serialize(Tweet input) { - return "%s|%s".format(input.getUsername(), input.getTweetContent()).getBytes(); - } -} - -``` - -To apply this customized SerDe to a particular Pulsar Function, you need to: - -* Package the `Tweet` and `TweetSerde` classes into a JAR. -* Specify a path to the JAR and SerDe class name when deploying the function. - -The following is an example of [`create`](reference-pulsar-admin.md#create-1) operation. 
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --jar /path/to/your.jar \
-  --output-serde-classname com.example.serde.TweetSerde \
-  # Other function attributes
-
-```
-
-> #### Custom SerDe classes must be packaged with your function JARs
-> Pulsar does not store your custom SerDe classes separately from your Pulsar Functions. So you need to include your SerDe classes in your function JARs. If not, Pulsar returns an error.
-
-
-
-```python
-
-class Tweet(object):
-    def __init__(self, username, tweet_content):
-        self.username = username
-        self.tweet_content = tweet_content
-
-```
-
-In order to use this class in Pulsar Functions, you have two options:
-
-1. You can specify `PickleSerDe`, which applies the [`pickle`](https://docs.python.org/3/library/pickle.html) library SerDe.
-2. You can create your own SerDe class. The following is an example.
-
-  ```python
-
-  from pulsar import SerDe
-
-  class TweetSerDe(SerDe):
-
-      def serialize(self, input):
-          return bytes("{0}|{1}".format(input.username, input.tweet_content))
-
-      def deserialize(self, input_bytes):
-          tweet_components = str(input_bytes).split('|')
-          return Tweet(tweet_components[0], tweet_components[1])
-
-  ```
-
-For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/python-examples/custom_object_function.py).
-
-
-
-In both languages, however, you can write custom SerDe logic for more complex, application-specific types.
-
-## Context
-Java, Python and Go SDKs provide access to a **context object** that can be used by a function. This context object provides a wide variety of information and functionality to the function.
-
-* The name and ID of a Pulsar Function.
-* The message ID of each message. Each Pulsar message is automatically assigned with an ID.
-* The key, event time, properties and partition key of each message.
-* The name of the topic to which the message is sent.
-* The names of all input topics as well as the output topic associated with the function.
-* The name of the class used for [SerDe](#serde).
-* The [tenant](reference-terminology.md#tenant) and namespace associated with the function.
-* The ID of the Pulsar Functions instance running the function.
-* The version of the function.
-* The [logger object](functions-develop.md#logger) used by the function, which can be used to create function log messages.
-* Access to arbitrary [user configuration](#user-config) values supplied via the CLI.
-* An interface for recording [metrics](#metrics).
-* An interface for storing and retrieving state in [state storage](#state-storage).
-* A function to publish new messages onto arbitrary topics.
-* A function to ack the message being processed (if auto-ack is disabled).
-* (Java) Get the Pulsar admin client.
-
-````mdx-code-block
-
-
-The [Context](https://github.com/apache/pulsar/blob/master/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/Context.java) interface provides a number of methods that you can use to access the function [context](#context). The various method signatures for the `Context` interface are listed as follows.
-
-```java
-
-public interface Context {
-    Record<?> getCurrentRecord();
-    Collection<String> getInputTopics();
-    String getOutputTopic();
-    String getOutputSchemaType();
-    String getTenant();
-    String getNamespace();
-    String getFunctionName();
-    String getFunctionId();
-    String getInstanceId();
-    String getFunctionVersion();
-    Logger getLogger();
-    void incrCounter(String key, long amount);
-    CompletableFuture<Void> incrCounterAsync(String key, long amount);
-    long getCounter(String key);
-    CompletableFuture<Long> getCounterAsync(String key);
-    void putState(String key, ByteBuffer value);
-    CompletableFuture<Void> putStateAsync(String key, ByteBuffer value);
-    void deleteState(String key);
-    ByteBuffer getState(String key);
-    CompletableFuture<ByteBuffer> getStateAsync(String key);
-    Map<String, Object> getUserConfigMap();
-    Optional<Object> getUserConfigValue(String key);
-    Object getUserConfigValueOrDefault(String key, Object defaultValue);
-    void recordMetric(String metricName, double value);
-    <O> CompletableFuture<Void> publish(String topicName, O object, String schemaOrSerdeClassName);
-    <O> CompletableFuture<Void> publish(String topicName, O object);
-    <O> TypedMessageBuilder<O> newOutputMessage(String topicName, Schema<O> schema) throws PulsarClientException;
-    <X> ConsumerBuilder<X> newConsumerBuilder(Schema<X> schema) throws PulsarClientException;
-    PulsarAdmin getPulsarAdmin();
-    PulsarAdmin getPulsarAdmin(String clusterName);
-}
-
-```
-
-The following example uses several methods available via the `Context` object.
-
-```java
-
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-import org.slf4j.Logger;
-
-import java.util.stream.Collectors;
-
-public class ContextFunction implements Function<String, Void> {
-    public Void process(String input, Context context) {
-        Logger LOG = context.getLogger();
-        String inputTopics = context.getInputTopics().stream().collect(Collectors.joining(", "));
-        String functionName = context.getFunctionName();
-
-        String logMessage = String.format("A message with a value of \"%s\" has arrived on one of the following topics: %s\n",
-                input,
-                inputTopics);
-
-        LOG.info(logMessage);
-
-        String metricName = String.format("function-%s-messages-received", functionName);
-        context.recordMetric(metricName, 1);
-
-        return null;
-    }
-}
-
-```
-
-
-```
-
-class ContextImpl(pulsar.Context):
-  def get_message_id(self):
-    ...
-  def get_message_key(self):
-    ...
-  def get_message_eventtime(self):
-    ...
-  def get_message_properties(self):
-    ...
-  def get_current_message_topic_name(self):
-    ...
-  def get_partition_key(self):
-    ...
-  def get_function_name(self):
-    ...
-  def get_function_tenant(self):
-    ...
-  def get_function_namespace(self):
-    ...
-  def get_function_id(self):
-    ...
-  def get_instance_id(self):
-    ...
-  def get_function_version(self):
-    ...
-  def get_logger(self):
-    ...
-  def get_user_config_value(self, key):
-    ...
-  def get_user_config_map(self):
-    ...
-  def record_metric(self, metric_name, metric_value):
-    ...
-  def get_input_topics(self):
-    ...
-  def get_output_topic(self):
-    ...
-  def get_output_serde_class_name(self):
-    ...
-  def publish(self, topic_name, message, serde_class_name="serde.IdentitySerDe",
-              properties=None, compression_type=None, callback=None, message_conf=None):
-    ...
-  def ack(self, msgid, topic):
-    ...
-  def get_and_reset_metrics(self):
-    ...
-  def reset_metrics(self):
-    ...
-  def get_metrics(self):
-    ...
-  def incr_counter(self, key, amount):
-    ...
-  def get_counter(self, key):
-    ...
-  def del_counter(self, key):
-    ...
-  def put_state(self, key, value):
-    ...
-  def get_state(self, key):
-    ...
- -``` - - - - -``` - -func (c *FunctionContext) GetInstanceID() int { - return c.instanceConf.instanceID -} - -func (c *FunctionContext) GetInputTopics() []string { - return c.inputTopics -} - -func (c *FunctionContext) GetOutputTopic() string { - return c.instanceConf.funcDetails.GetSink().Topic -} - -func (c *FunctionContext) GetFuncTenant() string { - return c.instanceConf.funcDetails.Tenant -} - -func (c *FunctionContext) GetFuncName() string { - return c.instanceConf.funcDetails.Name -} - -func (c *FunctionContext) GetFuncNamespace() string { - return c.instanceConf.funcDetails.Namespace -} - -func (c *FunctionContext) GetFuncID() string { - return c.instanceConf.funcID -} - -func (c *FunctionContext) GetFuncVersion() string { - return c.instanceConf.funcVersion -} - -func (c *FunctionContext) GetUserConfValue(key string) interface{} { - return c.userConfigs[key] -} - -func (c *FunctionContext) GetUserConfMap() map[string]interface{} { - return c.userConfigs -} - -func (c *FunctionContext) SetCurrentRecord(record pulsar.Message) { - c.record = record -} - -func (c *FunctionContext) GetCurrentRecord() pulsar.Message { - return c.record -} - -func (c *FunctionContext) NewOutputMessage(topic string) pulsar.Producer { - return c.outputMessage(topic) -} - -``` - -The following example uses several methods available via the `Context` object. - -``` - -import ( - "context" - "fmt" - - "github.com/apache/pulsar/pulsar-function-go/pf" -) - -func contextFunc(ctx context.Context) { - if fc, ok := pf.FromContext(ctx); ok { - fmt.Printf("function ID is:%s, ", fc.GetFuncID()) - fmt.Printf("function version is:%s\n", fc.GetFuncVersion()) - } -} - -``` - -For complete code, see [here](https://github.com/apache/pulsar/blob/77cf09eafa4f1626a53a1fe2e65dd25f377c1127/pulsar-function-go/examples/contextFunc/contextFunc.go#L29-L34). - - - - -```` - -### User config -When you run or update Pulsar Functions created using SDK, you can pass arbitrary key/values to them with the command line with the `--user-config` flag. Key/values must be specified as JSON. The following function creation command passes a user configured key/value to a function. - -```bash - -$ bin/pulsar-admin functions create \ - --name word-filter \ - # Other function configs - --user-config '{"forbidden-word":"rosebud"}' - -``` - -````mdx-code-block - - - -The Java SDK [`Context`](#context) object enables you to access key/value pairs provided to Pulsar Functions via the command line (as JSON). The following example passes a key/value pair. - -```bash - -$ bin/pulsar-admin functions create \ - # Other function configs - --user-config '{"word-of-the-day":"verdure"}' - -``` - -To access that value in a Java function: - -```java - -import org.apache.pulsar.functions.api.Context; -import org.apache.pulsar.functions.api.Function; -import org.slf4j.Logger; - -import java.util.Optional; - -public class UserConfigFunction implements Function { - @Override - public void apply(String input, Context context) { - Logger LOG = context.getLogger(); - Optional wotd = context.getUserConfigValue("word-of-the-day"); - if (wotd.isPresent()) { - LOG.info("The word of the day is {}", wotd); - } else { - LOG.warn("No word of the day provided"); - } - return null; - } -} - -``` - -The `UserConfigFunction` function will log the string `"The word of the day is verdure"` every time the function is invoked (which means every time a message arrives). 
The `word-of-the-day` user config will be changed only when the function is updated with a new config value via the command line. - -You can also access the entire user config map or set a default value in case no value is present: - -```java - -// Get the whole config map -Map allConfigs = context.getUserConfigMap(); - -// Get value or resort to default -String wotd = context.getUserConfigValueOrDefault("word-of-the-day", "perspicacious"); - -``` - -> For all key/value pairs passed to Java functions, both the key *and* the value are `String`. To set the value to be a different type, you need to deserialize from the `String` type. - - - - -In Python function, you can access the configuration value like this. - -```python - -from pulsar import Function - -class WordFilter(Function): - def process(self, context, input): - forbidden_word = context.user_config()["forbidden-word"] - - # Don't publish the message if it contains the user-supplied - # forbidden word - if forbidden_word in input: - pass - # Otherwise publish the message - else: - return input - -``` - -The Python SDK [`Context`](#context) object enables you to access key/value pairs provided to Pulsar Functions via the command line (as JSON). The following example passes a key/value pair. - -```bash - -$ bin/pulsar-admin functions create \ - # Other function configs \ - --user-config '{"word-of-the-day":"verdure"}' - -``` - -To access that value in a Python function: - -```python - -from pulsar import Function - -class UserConfigFunction(Function): - def process(self, input, context): - logger = context.get_logger() - wotd = context.get_user_config_value('word-of-the-day') - if wotd is None: - logger.warn('No word of the day provided') - else: - logger.info("The word of the day is {0}".format(wotd)) - -``` - - - - -The Go SDK [`Context`](#context) object enables you to access key/value pairs provided to Pulsar Functions via the command line (as JSON). The following example passes a key/value pair. - -```bash - -$ bin/pulsar-admin functions create \ - --go path/to/go/binary - --user-config '{"word-of-the-day":"lackadaisical"}' - -``` - -To access that value in a Go function: - -```go - -func contextFunc(ctx context.Context) { - fc, ok := pf.FromContext(ctx) - if !ok { - logutil.Fatal("Function context is not defined") - } - - wotd := fc.GetUserConfValue("word-of-the-day") - - if wotd == nil { - logutil.Warn("The word of the day is empty") - } else { - logutil.Infof("The word of the day is %s", wotd.(string)) - } -} - -``` - - - - -```` - -### Logger - -````mdx-code-block - - - -Pulsar Functions that use the Java SDK have access to an [SLF4j](https://www.slf4j.org/) [`Logger`](https://www.slf4j.org/api/org/apache/log4j/Logger.html) object that can be used to produce logs at the chosen log level. The following example logs either a `WARNING`- or `INFO`-level log based on whether the incoming string contains the word `danger`. 

```java

import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;
import org.slf4j.Logger;

public class LoggingFunction implements Function<String, Void> {
    @Override
    public Void process(String input, Context context) {
        Logger LOG = context.getLogger();
        String messageId = new String(context.getMessageId());

        if (input.contains("danger")) {
            LOG.warn("A warning was received in message {}", messageId);
        } else {
            LOG.info("Message {} received\nContent: {}", messageId, input);
        }

        return null;
    }
}

```

If you want your function to produce logs, you need to specify a log topic when creating or running the function. The following is an example.

```bash

$ bin/pulsar-admin functions create \
  --jar my-functions.jar \
  --classname my.package.LoggingFunction \
  --log-topic persistent://public/default/logging-function-logs \
  # Other function configs

```

All logs produced by `LoggingFunction` above can be accessed via the `persistent://public/default/logging-function-logs` topic.

#### Customize Function log level
Additionally, you can use the XML file, `functions_log4j2.xml`, to customize the function log level.
To customize the function log level, create or update `functions_log4j2.xml` in your Pulsar conf directory (for example, `/etc/pulsar/` on bare-metal, or `/pulsar/conf` on Kubernetes) to contain contents such as:

```xml

<Configuration>
    <name>pulsar-functions-instance</name>
    <monitorInterval>30</monitorInterval>
    <Properties>
        <Property>
            <name>pulsar.log.appender</name>
            <value>RollingFile</value>
        </Property>
        <Property>
            <name>pulsar.log.level</name>
            <value>debug</value>
        </Property>
        <Property>
            <name>bk.log.level</name>
            <value>debug</value>
        </Property>
    </Properties>
    <Appenders>
        <Console>
            <name>Console</name>
            <target>SYSTEM_OUT</target>
            <PatternLayout>
                <Pattern>%d{ISO8601_OFFSET_DATE_TIME_HHMM} [%t] %-5level %logger{36} - %msg%n</Pattern>
            </PatternLayout>
        </Console>
        <RollingFile>
            <name>RollingFile</name>
            <fileName>${sys:pulsar.function.log.dir}/${sys:pulsar.function.log.file}.log</fileName>
            <filePattern>${sys:pulsar.function.log.dir}/${sys:pulsar.function.log.file}-%d{MM-dd-yyyy}-%i.log.gz</filePattern>
            <immediateFlush>true</immediateFlush>
            <PatternLayout>
                <Pattern>%d{ISO8601_OFFSET_DATE_TIME_HHMM} [%t] %-5level %logger{36} - %msg%n</Pattern>
            </PatternLayout>
            <Policies>
                <TimeBasedTriggeringPolicy>
                    <interval>1</interval>
                    <modulate>true</modulate>
                </TimeBasedTriggeringPolicy>
                <SizeBasedTriggeringPolicy>
                    <size>1 GB</size>
                </SizeBasedTriggeringPolicy>
                <CronTriggeringPolicy>
                    <schedule>0 0 0 * * ?</schedule>
                </CronTriggeringPolicy>
            </Policies>
            <DefaultRolloverStrategy>
                <Delete>
                    <basePath>${sys:pulsar.function.log.dir}</basePath>
                    <maxDepth>2</maxDepth>
                    <IfFileName>
                        <glob>*/${sys:pulsar.function.log.file}*log.gz</glob>
                    </IfFileName>
                    <IfLastModified>
                        <age>30d</age>
                    </IfLastModified>
                </Delete>
            </DefaultRolloverStrategy>
        </RollingFile>
        <RollingRandomAccessFile>
            <name>BkRollingFile</name>
            <fileName>${sys:pulsar.function.log.dir}/${sys:pulsar.function.log.file}.bk</fileName>
            <filePattern>${sys:pulsar.function.log.dir}/${sys:pulsar.function.log.file}.bk-%d{MM-dd-yyyy}-%i.log.gz</filePattern>
            <immediateFlush>true</immediateFlush>
            <PatternLayout>
                <Pattern>%d{ISO8601_OFFSET_DATE_TIME_HHMM} [%t] %-5level %logger{36} - %msg%n</Pattern>
            </PatternLayout>
            <Policies>
                <TimeBasedTriggeringPolicy>
                    <interval>1</interval>
                    <modulate>true</modulate>
                </TimeBasedTriggeringPolicy>
                <SizeBasedTriggeringPolicy>
                    <size>1 GB</size>
                </SizeBasedTriggeringPolicy>
                <CronTriggeringPolicy>
                    <schedule>0 0 0 * * ?</schedule>
                </CronTriggeringPolicy>
            </Policies>
            <DefaultRolloverStrategy>
                <Delete>
                    <basePath>${sys:pulsar.function.log.dir}</basePath>
                    <maxDepth>2</maxDepth>
                    <IfFileName>
                        <glob>*/${sys:pulsar.function.log.file}.bk*log.gz</glob>
                    </IfFileName>
                    <IfLastModified>
                        <age>30d</age>
                    </IfLastModified>
                </Delete>
            </DefaultRolloverStrategy>
        </RollingRandomAccessFile>
    </Appenders>
    <Loggers>
        <Logger>
            <name>org.apache.pulsar.functions.runtime.shaded.org.apache.bookkeeper</name>
            <level>${sys:bk.log.level}</level>
            <additivity>false</additivity>
            <AppenderRef>
                <ref>BkRollingFile</ref>
            </AppenderRef>
        </Logger>
        <Root>
            <level>${sys:pulsar.log.level}</level>
            <AppenderRef>
                <ref>${sys:pulsar.log.appender}</ref>
                <level>${sys:pulsar.log.level}</level>
            </AppenderRef>
        </Root>
    </Loggers>
</Configuration>

```

The properties set like:

```xml

<Property>
    <name>pulsar.log.level</name>
    <value>debug</value>
</Property>

```

propagate to places where they are referenced, such as:

```xml

<Root>
    <level>${sys:pulsar.log.level}</level>
    <AppenderRef>
        <ref>${sys:pulsar.log.appender}</ref>
        <level>${sys:pulsar.log.level}</level>
    </AppenderRef>
</Root>

```

In the above example, debug level logging would be applied to ALL function logs.
This may be more verbose than you desire. To be more selective, you can apply different log levels to different classes or modules.
For example:

```xml

<Logger>
    <name>com.example.module</name>
    <level>info</level>
    <additivity>false</additivity>
    <AppenderRef>
        <ref>${sys:pulsar.log.appender}</ref>
    </AppenderRef>
</Logger>

```

You can be more specific as well, such as applying a more verbose log level to a class in the module, such as:

```xml

<Logger>
    <name>com.example.module.className</name>
    <level>debug</level>
    <additivity>false</additivity>
    <AppenderRef>
        <ref>Console</ref>
    </AppenderRef>
</Logger>

```

Each `<AppenderRef>` entry allows you to output the log to a target specified in the definition of the Appender.

Additivity pertains to whether log messages will be duplicated if multiple Logger entries overlap.
To disable additivity, specify

```xml

<additivity>false</additivity>

```

as shown in the examples above. Disabling additivity prevents duplication of log messages when one or more `<Logger>` entries contain classes or modules that overlap.

The `<Appender>` is defined in the `<Appenders>` section, such as:

```xml

<Appenders>
    <Console>
        <name>Console</name>
        <target>SYSTEM_OUT</target>
        <PatternLayout>
            <Pattern>%d{ISO8601_OFFSET_DATE_TIME_HHMM} [%t] %-5level %logger{36} - %msg%n</Pattern>
        </PatternLayout>
    </Console>
</Appenders>

```




Pulsar Functions that use the Python SDK have access to a logging object that can be used to produce logs at the chosen log level. The following example function logs either a `WARNING`- or `INFO`-level log based on whether the incoming string contains the word `danger`.

```python

from pulsar import Function

class LoggingFunction(Function):
    def process(self, input, context):
        logger = context.get_logger()
        msg_id = context.get_message_id()
        if 'danger' in input:
            logger.warn("A warning was received in message {0}".format(context.get_message_id()))
        else:
            logger.info("Message {0} received\nContent: {1}".format(msg_id, input))

```

If you want your function to produce logs on a Pulsar topic, you need to specify a **log topic** when creating or running the function. The following is an example.

```bash

$ bin/pulsar-admin functions create \
  --py logging_function.py \
  --classname logging_function.LoggingFunction \
  --log-topic logging-function-logs \
  # Other function configs

```

All logs produced by `LoggingFunction` above can be accessed via the `logging-function-logs` topic.
Additionally, you can specify the function log level through the `functions_log4j2.xml` file, as described in [Customize Function log level](#customize-function-log-level).




The following Go Function example shows different log levels based on the function input.

```go

import (
	"context"

	"github.com/apache/pulsar/pulsar-function-go/pf"

	log "github.com/apache/pulsar/pulsar-function-go/logutil"
)

func loggerFunc(ctx context.Context, input []byte) {
	if len(input) <= 100 {
		log.Infof("This input has a length of: %d", len(input))
	} else {
		log.Warnf("This input is getting too long! It has {%d} characters", len(input))
	}
}

func main() {
	pf.Start(loggerFunc)
}

```

When you use `logTopic`-related functionalities in a Go function, import `github.com/apache/pulsar/pulsar-function-go/logutil`; you do not have to use the `getLogger()` context object.

Additionally, you can specify the function log level through the `functions_log4j2.xml` file, as described here: [Customize Function log level](#customize-function-log-level)




````

### Pulsar admin

Pulsar Functions that use the Java SDK have access to the Pulsar admin client, which allows them to make admin API calls to the current Pulsar cluster or to external clusters (if `external-pulsars` is provided).

````mdx-code-block



Below is an example of how to use the Pulsar admin client exposed from the Function `context`.
- -``` - -import org.apache.pulsar.client.admin.PulsarAdmin; -import org.apache.pulsar.functions.api.Context; -import org.apache.pulsar.functions.api.Function; - -/** - * In this particular example, for every input message, - * the function resets the cursor of the current function's subscription to a - * specified timestamp. - */ -public class CursorManagementFunction implements Function { - - @Override - public String process(String input, Context context) throws Exception { - PulsarAdmin adminClient = context.getPulsarAdmin(); - if (adminClient != null) { - String topic = context.getCurrentRecord().getTopicName().isPresent() ? - context.getCurrentRecord().getTopicName().get() : null; - String subName = context.getTenant() + "/" + context.getNamespace() + "/" + context.getFunctionName(); - if (topic != null) { - // 1578188166 below is a random-pick timestamp - adminClient.topics().resetCursor(topic, subName, 1578188166); - return "reset cursor successfully"; - } - } - return null; - } -} - -``` - -If you want your function to get access to the Pulsar admin client, you need to enable this feature by setting `exposeAdminClientEnabled=true` in the `functions_worker.yml` file. You can test whether this feature is enabled or not using the command `pulsar-admin functions localrun` with the flag `--web-service-url`. - -``` - -$ bin/pulsar-admin functions localrun \ - --jar my-functions.jar \ - --classname my.package.CursorManagementFunction \ - --web-service-url http://pulsar-web-service:8080 \ - # Other function configs - -``` - - - - -```` - -## Metrics - -Pulsar Functions allows you to deploy and manage processing functions that consume messages from and publish messages to Pulsar topics easily. It is important to ensure that the running functions are healthy at any time. Pulsar Functions can publish arbitrary metrics to the metrics interface which can be queried. - -:::note - -If a Pulsar Function uses the language-native interface for Java or Python, that function is not able to publish metrics and stats to Pulsar. - -::: - -You can monitor Pulsar Functions that have been deployed with the following methods: - -- Check the metrics provided by Pulsar. - - Pulsar Functions expose the metrics that can be collected and used for monitoring the health of **Java, Python, and Go** functions. You can check the metrics by following the [monitoring](deploy-monitoring.md) guide. - - For the complete list of the function metrics, see [here](reference-metrics.md#pulsar-functions). - -- Set and check your customized metrics. - - In addition to the metrics provided by Pulsar, Pulsar allows you to customize metrics for **Java and Python** functions. Function workers collect user-defined metrics to Prometheus automatically and you can check them in Grafana. - -Here are examples of how to customize metrics for Java and Python functions. - -````mdx-code-block - - - -You can record metrics using the [`Context`](#context) object on a per-key basis. For example, you can set a metric for the `process-count` key and a different metric for the `elevens-count` key every time the function processes a message. 

```java

import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;

public class MetricRecorderFunction implements Function<Integer, Void> {
    @Override
    public Void process(Integer input, Context context) {
        // Records the metric 1 every time a message arrives
        context.recordMetric("hit-count", 1);

        // Records the metric only if the arriving number equals 11
        if (input == 11) {
            context.recordMetric("elevens-count", 1);
        }

        return null;
    }
}

```




You can record metrics using the [`Context`](#context) object on a per-key basis. For example, you can set a metric for the `hit-count` key and a different metric for the `elevens-count` key every time the function processes a message. The following is an example.

```python

from pulsar import Function

class MetricRecorderFunction(Function):
    def process(self, input, context):
        context.record_metric('hit-count', 1)

        if input == 11:
            context.record_metric('elevens-count', 1)

```




The Go SDK [`Context`](#context) object enables you to record metrics on a per-key basis. For example, you can set a metric for the `hit-count` key and a different metric for the `elevens-count` key every time the function processes a message:

```go

func metricRecorderFunction(ctx context.Context, in []byte) error {
	inputstr := string(in)
	fctx, ok := pf.FromContext(ctx)
	if !ok {
		return errors.New("get Go Functions Context error")
	}
	fctx.RecordMetric("hit-count", 1)
	if inputstr == "eleven" {
		fctx.RecordMetric("elevens-count", 1)
	}
	return nil
}

```




````

## Security

If you want to enable security on Pulsar Functions, first you should enable security on [Functions Workers](functions-worker.md). For more details, refer to [Security settings](functions-worker.md#security-settings).

Pulsar Functions can support the following providers:

- ClearTextSecretsProvider
- EnvironmentBasedSecretsProvider

> Pulsar Functions support `ClearTextSecretsProvider` by default.

At the same time, Pulsar Functions provides two interfaces, **SecretsProvider** and **SecretsProviderConfigurator**, allowing users to customize the secret provider.

````mdx-code-block



You can get the secret provider using the [`Context`](#context) object. The following is an example:

```java

import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;
import org.slf4j.Logger;

public class GetSecretProviderFunction implements Function<String, Void> {

    @Override
    public Void process(String input, Context context) throws Exception {
        Logger LOG = context.getLogger();
        String secretProvider = context.getSecret(input);

        if (!secretProvider.isEmpty()) {
            LOG.info("The secret provider is {}", secretProvider);
        } else {
            LOG.warn("No secret provider");
        }

        return null;
    }
}

```




You can get the secret provider using the [`Context`](#context) object. The following is an example:

```python

from pulsar import Function

class GetSecretProviderFunction(Function):
    def process(self, input, context):
        logger = context.get_logger()
        secret_provider = context.get_secret(input)
        if secret_provider is None:
            logger.warn('No secret provider')
        else:
            logger.info("The secret provider is {0}".format(secret_provider))

```




Currently, the feature is not available in Go.




````

## State storage
Pulsar Functions use [Apache BookKeeper](https://bookkeeper.apache.org) as a state storage interface.
Pulsar installation, including the local standalone installation, includes deployment of BookKeeper bookies. - -Since Pulsar 2.1.0 release, Pulsar integrates with Apache BookKeeper [table service](https://docs.google.com/document/d/155xAwWv5IdOitHh1NVMEwCMGgB28M3FyMiQSxEpjE-Y/edit#heading=h.56rbh52koe3f) to store the `State` for functions. For example, a `WordCount` function can store its `counters` state into BookKeeper table service via Pulsar Functions State API. - -States are key-value pairs, where the key is a string and the value is arbitrary binary data - counters are stored as 64-bit big-endian binary values. Keys are scoped to an individual Pulsar Function, and shared between instances of that function. - -You can access states within Pulsar Java Functions using the `putState`, `putStateAsync`, `getState`, `getStateAsync`, `incrCounter`, `incrCounterAsync`, `getCounter`, `getCounterAsync` and `deleteState` calls on the context object. You can access states within Pulsar Python Functions using the `putState`, `getState`, `incrCounter`, `getCounter` and `deleteState` calls on the context object. You can also manage states using the [querystate](#query-state) and [putstate](#putstate) options to `pulsar-admin functions`. - -:::note - -State storage is not available in Go. - -::: - -### API - -````mdx-code-block - - - -Currently Pulsar Functions expose the following APIs for mutating and accessing State. These APIs are available in the [Context](functions-develop.md#context) object when you are using Java SDK functions. - -#### incrCounter - -```java - - /** - * Increment the builtin distributed counter referred by key - * @param key The name of the key - * @param amount The amount to be incremented - */ - void incrCounter(String key, long amount); - -``` - -The application can use `incrCounter` to change the counter of a given `key` by the given `amount`. - -#### incrCounterAsync - -```java - - /** - * Increment the builtin distributed counter referred by key - * but dont wait for the completion of the increment operation - * - * @param key The name of the key - * @param amount The amount to be incremented - */ - CompletableFuture incrCounterAsync(String key, long amount); - -``` - -The application can use `incrCounterAsync` to asynchronously change the counter of a given `key` by the given `amount`. - -#### getCounter - -```java - - /** - * Retrieve the counter value for the key. - * - * @param key name of the key - * @return the amount of the counter value for this key - */ - long getCounter(String key); - -``` - -The application can use `getCounter` to retrieve the counter of a given `key` mutated by `incrCounter`. - -Except the `counter` API, Pulsar also exposes a general key/value API for functions to store -general key/value state. - -#### getCounterAsync - -```java - - /** - * Retrieve the counter value for the key, but don't wait - * for the operation to be completed - * - * @param key name of the key - * @return the amount of the counter value for this key - */ - CompletableFuture getCounterAsync(String key); - -``` - -The application can use `getCounterAsync` to asynchronously retrieve the counter of a given `key` mutated by `incrCounterAsync`. - -#### putState - -```java - - /** - * Update the state value for the key. 
- * - * @param key name of the key - * @param value state value of the key - */ - void putState(String key, ByteBuffer value); - -``` - -#### putStateAsync - -```java - - /** - * Update the state value for the key, but don't wait for the operation to be completed - * - * @param key name of the key - * @param value state value of the key - */ - CompletableFuture putStateAsync(String key, ByteBuffer value); - -``` - -The application can use `putStateAsync` to asynchronously update the state of a given `key`. - -#### getState - -```java - - /** - * Retrieve the state value for the key. - * - * @param key name of the key - * @return the state value for the key. - */ - ByteBuffer getState(String key); - -``` - -#### getStateAsync - -```java - - /** - * Retrieve the state value for the key, but don't wait for the operation to be completed - * - * @param key name of the key - * @return the state value for the key. - */ - CompletableFuture getStateAsync(String key); - -``` - -The application can use `getStateAsync` to asynchronously retrieve the state of a given `key`. - -#### deleteState - -```java - - /** - * Delete the state value for the key. - * - * @param key name of the key - */ - -``` - -Counters and binary values share the same keyspace, so this deletes either type. - - - - -Currently Pulsar Functions expose the following APIs for mutating and accessing State. These APIs are available in the [Context](#context) object when you are using Python SDK functions. - -#### incr_counter - -```python - - def incr_counter(self, key, amount): - ""incr the counter of a given key in the managed state"" - -``` - -Application can use `incr_counter` to change the counter of a given `key` by the given `amount`. -If the `key` does not exist, a new key is created. - -#### get_counter - -```python - - def get_counter(self, key): - """get the counter of a given key in the managed state""" - -``` - -Application can use `get_counter` to retrieve the counter of a given `key` mutated by `incrCounter`. - -Except the `counter` API, Pulsar also exposes a general key/value API for functions to store -general key/value state. - -#### put_state - -```python - - def put_state(self, key, value): - """update the value of a given key in the managed state""" - -``` - -The key is a string, and the value is arbitrary binary data. - -#### get_state - -```python - - def get_state(self, key): - """get the value of a given key in the managed state""" - -``` - -#### del_counter - -```python - - def del_counter(self, key): - """delete the counter of a given key in the managed state""" - -``` - -Counters and binary values share the same keyspace, so this deletes either type. - - - - -```` - -### Query State - -A Pulsar Function can use the [State API](#api) for storing state into Pulsar's state storage -and retrieving state back from Pulsar's state storage. Additionally Pulsar also provides -CLI commands for querying its state. - -```shell - -$ bin/pulsar-admin functions querystate \ - --tenant \ - --namespace \ - --name \ - --state-storage-url \ - --key \ - [---watch] - -``` - -If `--watch` is specified, the CLI will watch the value of the provided `state-key`. - -### Example - -````mdx-code-block - - - -{@inject: github:WordCountFunction:/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/WordCountFunction.java} is a very good example -demonstrating on how Application can easily store `state` in Pulsar Functions. 
- -```java - -import org.apache.pulsar.functions.api.Context; -import org.apache.pulsar.functions.api.Function; - -import java.util.Arrays; - -public class WordCountFunction implements Function { - @Override - public Void process(String input, Context context) throws Exception { - Arrays.asList(input.split("\\.")).forEach(word -> context.incrCounter(word, 1)); - return null; - } -} - -``` - -The logic of this `WordCount` function is pretty simple and straightforward: - -1. The function first splits the received `String` into multiple words using regex `\\.`. -2. For each `word`, the function increments the corresponding `counter` by 1 (via `incrCounter(key, amount)`). - - - - -```python - -from pulsar import Function - -class WordCount(Function): - def process(self, item, context): - for word in item.split(): - context.incr_counter(word, 1) - -``` - -The logic of this `WordCount` function is pretty simple and straightforward: - -1. The function first splits the received string into multiple words on space. -2. For each `word`, the function increments the corresponding `counter` by 1 (via `incr_counter(key, amount)`). - - - - -```` diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/functions-metrics.md b/site2/website/versioned_docs/version-2.10.0-deprecated/functions-metrics.md deleted file mode 100644 index 8add6693160929..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/functions-metrics.md +++ /dev/null @@ -1,7 +0,0 @@ ---- -id: functions-metrics -title: Metrics for Pulsar Functions -sidebar_label: "Metrics" -original_id: functions-metrics ---- - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/functions-overview.md b/site2/website/versioned_docs/version-2.10.0-deprecated/functions-overview.md deleted file mode 100644 index 816d301e0fd0e7..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/functions-overview.md +++ /dev/null @@ -1,209 +0,0 @@ ---- -id: functions-overview -title: Pulsar Functions overview -sidebar_label: "Overview" -original_id: functions-overview ---- - -**Pulsar Functions** are lightweight compute processes that - -* consume messages from one or more Pulsar topics, -* apply a user-supplied processing logic to each message, -* publish the results of the computation to another topic. - - -## Goals -With Pulsar Functions, you can create complex processing logic without deploying a separate neighboring system (such as [Apache Storm](http://storm.apache.org/), [Apache Heron](https://heron.incubator.apache.org/), [Apache Flink](https://flink.apache.org/)). Pulsar Functions are computing infrastructure of Pulsar messaging system. 
The core goal is tied to a series of other goals: - -* Developer productivity (language-native vs Pulsar Functions SDK functions) -* Easy troubleshooting -* Operational simplicity (no need for an external processing system) - -## Inspirations -Pulsar Functions are inspired by (and take cues from) several systems and paradigms: - -* Stream processing engines such as [Apache Storm](http://storm.apache.org/), [Apache Heron](https://apache.github.io/incubator-heron), and [Apache Flink](https://flink.apache.org) -* "Serverless" and "Function as a Service" (FaaS) cloud platforms like [Amazon Web Services Lambda](https://aws.amazon.com/lambda/), [Google Cloud Functions](https://cloud.google.com/functions/), and [Azure Cloud Functions](https://azure.microsoft.com/en-us/services/functions/) - -Pulsar Functions can be described as - -* [Lambda](https://aws.amazon.com/lambda/)-style functions that are -* specifically designed to use Pulsar as a message bus. - -## Programming model -Pulsar Functions provide a wide range of functionality, and the core programming model is simple. Functions receive messages from one or more **input [topics](reference-terminology.md#topic)**. Each time a message is received, the function will complete the following tasks. - - * Apply some processing logic to the input and write output to: - * An **output topic** in Pulsar - * [Apache BookKeeper](functions-develop.md#state-storage) - * Write logs to a **log topic** (potentially for debugging purposes) - * Increment a [counter](#word-count-example) - -![Pulsar Functions core programming model](/assets/pulsar-functions-overview.png) - -You can use Pulsar Functions to set up the following processing chain: - -* A Python function listens for the `raw-sentences` topic and "sanitizes" incoming strings (removing extraneous whitespace and converting all characters to lowercase) and then publishes the results to a `sanitized-sentences` topic. -* A Java function listens for the `sanitized-sentences` topic, counts the number of times each word appears within a specified time window, and publishes the results to a `results` topic -* Finally, a Python function listens for the `results` topic and writes the results to a MySQL table. - - -### Word count example - -If you implement the classic word count example using Pulsar Functions, it looks something like this: - -![Pulsar Functions word count example](/assets/pulsar-functions-word-count.png) - -To write the function in Java with [Pulsar Functions SDK for Java](functions-develop.md#available-apis), you can write the function as follows. - -```java - -package org.example.functions; - -import org.apache.pulsar.functions.api.Context; -import org.apache.pulsar.functions.api.Function; - -import java.util.Arrays; - -public class WordCountFunction implements Function { - // This function is invoked every time a message is published to the input topic - @Override - public Void process(String input, Context context) throws Exception { - Arrays.asList(input.split(" ")).forEach(word -> { - String counterKey = word.toLowerCase(); - context.incrCounter(counterKey, 1); - }); - return null; - } -} - -``` - -Bundle and build the JAR file to be deployed, and then deploy it in your Pulsar cluster using the [command line](functions-deploy.md#command-line-interface) as follows. 
- -```bash - -$ bin/pulsar-admin functions create \ - --jar target/my-jar-with-dependencies.jar \ - --classname org.example.functions.WordCountFunction \ - --tenant public \ - --namespace default \ - --name word-count \ - --inputs persistent://public/default/sentences \ - --output persistent://public/default/count - -``` - -### Content-based routing example - -Pulsar Functions are used in many cases. The following is a sophisticated example that involves content-based routing. - -For example, a function takes items (strings) as input and publishes them to either a `fruits` or `vegetables` topic, depending on the item. Or, if an item is neither fruit nor vegetable, a warning is logged to a [log topic](functions-develop.md#logger). The following is a visual representation. - -![Pulsar Functions routing example](/assets/pulsar-functions-routing-example.png) - -If you implement this routing functionality in Python, it looks something like this: - -```python - -from pulsar import Function - -class RoutingFunction(Function): - def __init__(self): - self.fruits_topic = "persistent://public/default/fruits" - self.vegetables_topic = "persistent://public/default/vegetables" - - @staticmethod - def is_fruit(item): - return item in [b"apple", b"orange", b"pear", b"other fruits..."] - - @staticmethod - def is_vegetable(item): - return item in [b"carrot", b"lettuce", b"radish", b"other vegetables..."] - - def process(self, item, context): - if self.is_fruit(item): - context.publish(self.fruits_topic, item) - elif self.is_vegetable(item): - context.publish(self.vegetables_topic, item) - else: - warning = "The item {0} is neither a fruit nor a vegetable".format(item) - context.get_logger().warn(warning) - -``` - -If this code is stored in `~/router.py`, then you can deploy it in your Pulsar cluster using the [command line](functions-deploy.md#command-line-interface) as follows. - -```bash - -$ bin/pulsar-admin functions create \ - --py ~/router.py \ - --classname router.RoutingFunction \ - --tenant public \ - --namespace default \ - --name route-fruit-veg \ - --inputs persistent://public/default/basket-items - -``` - -### Functions, messages and message types -Pulsar Functions take byte arrays as inputs and spit out byte arrays as output. However in languages that support typed interfaces(Java), you can write typed Functions, and bind messages to types in the following ways. -* [Schema Registry](functions-develop.md#schema-registry) -* [SerDe](functions-develop.md#serde) - - -## Fully Qualified Function Name (FQFN) -Each Pulsar Function has a **Fully Qualified Function Name** (FQFN) that consists of three elements: the function tenant, namespace, and function name. FQFN looks like this: - -```http - -tenant/namespace/name - -``` - -FQFNs enable you to create multiple functions with the same name provided that they are in different namespaces. - -## Supported languages -Currently, you can write Pulsar Functions in Java, Python, and Go. For details, refer to [Develop Pulsar Functions](functions-develop.md). - -## Processing guarantees -Pulsar Functions provide three different messaging semantics that you can apply to any function. - -Delivery semantics | Description -:------------------|:------- -**At-most-once** delivery | Each message sent to the function is likely to be processed, or not to be processed (hence "at most"). -**At-least-once** delivery | Each message sent to the function can be processed more than once (hence the "at least"). 
**Effectively-once** delivery | Each message sent to the function will have one output associated with it.


### Apply processing guarantees to a function
You can set the processing guarantees for a Pulsar Function when you create the Function. The following [`pulsar-admin functions create`](reference-pulsar-admin.md#create-1) command creates a function with effectively-once guarantees applied.

```bash

$ bin/pulsar-admin functions create \
  --name my-effectively-once-function \
  --processing-guarantees EFFECTIVELY_ONCE \
  # Other function configs

```

The available options for `--processing-guarantees` are:

* `ATMOST_ONCE`
* `ATLEAST_ONCE`
* `EFFECTIVELY_ONCE`

> By default, Pulsar Functions provide at-least-once delivery guarantees. So if you create a function without supplying a value for the `--processing-guarantees` flag, the function provides at-least-once guarantees.

### Update the processing guarantees of a function
You can change the processing guarantees applied to a function using the [`update`](reference-pulsar-admin.md#update-1) command. The following is an example.

```bash

$ bin/pulsar-admin functions update \
  --processing-guarantees ATMOST_ONCE \
  # Other function configs

```

diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/functions-package.md b/site2/website/versioned_docs/version-2.10.0-deprecated/functions-package.md
deleted file mode 100644
index a995d5c1588771..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/functions-package.md
+++ /dev/null
@@ -1,493 +0,0 @@
---
id: functions-package
title: Package Pulsar Functions
sidebar_label: "How-to: Package"
original_id: functions-package
---

You can package Pulsar functions in Java, Python, and Go. Packaging the window function in Java is the same as [packaging a function in Java](#java).

:::note

Currently, the window function is not available in Python and Go.

:::

## Prerequisite

Before running a Pulsar function, you need to start Pulsar. You can [run a standalone Pulsar in Docker](getting-started-docker.md), or [run Pulsar in Kubernetes](getting-started-helm.md).

To check whether the Docker image starts, you can use the `docker ps` command.

## Java

To package a function in Java, complete the following steps.

1. Create a new Maven project with a pom file. In the following code sample, the value of `mainClass` is your package name.

   ```xml

   <project>
       <modelVersion>4.0.0</modelVersion>

       <groupId>java-function</groupId>
       <artifactId>java-function</artifactId>
       <version>1.0-SNAPSHOT</version>

       <dependencies>
           <dependency>
               <groupId>org.apache.pulsar</groupId>
               <artifactId>pulsar-functions-api</artifactId>
               <version>2.6.0</version>
           </dependency>
       </dependencies>

       <build>
           <plugins>
               <plugin>
                   <artifactId>maven-assembly-plugin</artifactId>
                   <configuration>
                       <appendAssemblyId>false</appendAssemblyId>
                       <descriptorRefs>
                           <descriptorRef>jar-with-dependencies</descriptorRef>
                       </descriptorRefs>
                       <archive>
                           <manifest>
                               <mainClass>org.example.test.ExclamationFunction</mainClass>
                           </manifest>
                       </archive>
                   </configuration>
                   <executions>
                       <execution>
                           <id>make-assembly</id>
                           <phase>package</phase>
                           <goals>
                               <goal>assembly</goal>
                           </goals>
                       </execution>
                   </executions>
               </plugin>
               <plugin>
                   <groupId>org.apache.maven.plugins</groupId>
                   <artifactId>maven-compiler-plugin</artifactId>
                   <configuration>
                       <source>8</source>
                       <target>8</target>
                   </configuration>
               </plugin>
           </plugins>
       </build>

   </project>

   ```

2. Write a Java function.

   ```java

   package org.example.test;

   import java.util.function.Function;

   public class ExclamationFunction implements Function<String, String> {
       @Override
       public String apply(String s) {
           return "This is my function!";
       }
   }

   ```

   For the imported package, you can use one of the following interfaces:
   - Function interface provided by Java 8: `java.util.function.Function`
   - Pulsar Function interface: `org.apache.pulsar.functions.api.Function`

   The main difference between the two interfaces is that the `org.apache.pulsar.functions.api.Function` interface provides the context interface.
When you write a function and want to interact with it, you can use context to obtain a wide variety of information and functionality for Pulsar Functions. - - The following example uses `org.apache.pulsar.functions.api.Function` interface with context. - - ``` - - package org.example.functions; - import org.apache.pulsar.functions.api.Context; - import org.apache.pulsar.functions.api.Function; - - import java.util.Arrays; - public class WordCountFunction implements Function { - // This function is invoked every time a message is published to the input topic - @Override - public Void process(String input, Context context) throws Exception { - Arrays.asList(input.split(" ")).forEach(word -> { - String counterKey = word.toLowerCase(); - context.incrCounter(counterKey, 1); - }); - return null; - } - } - - ``` - -3. Package the Java function. - - ```bash - - mvn package - - ``` - - After the Java function is packaged, a `target` directory is created automatically. Open the `target` directory to check if there is a JAR package similar to `java-function-1.0-SNAPSHOT.jar`. - - -4. Run the Java function. - - (1) Copy the packaged jar file to the Pulsar image. - - ```bash - - docker exec -it [CONTAINER ID] /bin/bash - docker cp CONTAINER ID:/pulsar - - ``` - - (2) Run the Java function using the following command. - - ```bash - - ./bin/pulsar-admin functions localrun \ - --classname org.example.test.ExclamationFunction \ - --jar java-function-1.0-SNAPSHOT.jar \ - --inputs persistent://public/default/my-topic-1 \ - --output persistent://public/default/test-1 \ - --tenant public \ - --namespace default \ - --name JavaFunction - - ``` - - The following log indicates that the Java function starts successfully. - - ```text - - ... - 07:55:03.724 [main] INFO org.apache.pulsar.functions.runtime.ProcessRuntime - Started process successfully - ... - - ``` - -## Python - -Python Function supports the following three formats: - -- One python file -- ZIP file -- PIP - -### One python file - -To package a function with **one python file** in Python, complete the following steps. - -1. Write a Python function. - - ``` - - from pulsar import Function // import the Function module from Pulsar - - # The classic ExclamationFunction that appends an exclamation at the end - # of the input - class ExclamationFunction(Function): - def __init__(self): - pass - - def process(self, input, context): - return input + '!' - - ``` - - In this example, when you write a Python function, you need to inherit the Function class and implement the `process()` method. - - `process()` mainly has two parameters: - - - `input` represents your input. - - - `context` represents an interface exposed by the Pulsar Function. You can get the attributes in the Python function based on the provided context object. - -2. Install a Python client. - - The implementation of a Python function depends on the Python client, so before deploying a Python function, you need to install the corresponding version of the Python client. - - ```bash - - pip install pulsar-client==2.6.0 - - ``` - -3. Run the Python Function. - - (1) Copy the Python function file to the Pulsar image. - - ```bash - - docker exec -it [CONTAINER ID] /bin/bash - docker cp CONTAINER ID:/pulsar - - ``` - - (2) Run the Python function using the following command. - - ```bash - - ./bin/pulsar-admin functions localrun \ - --classname . 
\ - --py \ - --inputs persistent://public/default/my-topic-1 \ - --output persistent://public/default/test-1 \ - --tenant public \ - --namespace default \ - --name PythonFunction - - ``` - - The following log indicates that the Python function starts successfully. - - ```text - - ... - 07:55:03.724 [main] INFO org.apache.pulsar.functions.runtime.ProcessRuntime - Started process successfully - ... - - ``` - -### ZIP file - -To package a function with the **ZIP file** in Python, complete the following steps. - -1. Prepare the ZIP file. - - The following is required when packaging the ZIP file of the Python Function. - - ```text - - Assuming the zip file is named as `func.zip`, unzip the `func.zip` folder: - "func/src" - "func/requirements.txt" - "func/deps" - - ``` - - Take [exclamation.zip](https://github.com/apache/pulsar/tree/master/tests/docker-images/latest-version-image/python-examples) as an example. The internal structure of the example is as follows. - - ```text - - . - ├── deps - │   └── sh-1.12.14-py2.py3-none-any.whl - └── src - └── exclamation.py - - ``` - -2. Run the Python Function. - - (1) Copy the ZIP file to the Pulsar image. - - ```bash - - docker exec -it [CONTAINER ID] /bin/bash - docker cp CONTAINER ID:/pulsar - - ``` - - (2) Run the Python function using the following command. - - ```bash - - ./bin/pulsar-admin functions localrun \ - --classname exclamation \ - --py \ - --inputs persistent://public/default/in-topic \ - --output persistent://public/default/out-topic \ - --tenant public \ - --namespace default \ - --name PythonFunction - - ``` - - The following log indicates that the Python function starts successfully. - - ```text - - ... - 07:55:03.724 [main] INFO org.apache.pulsar.functions.runtime.ProcessRuntime - Started process successfully - ... - - ``` - -### PIP - -The PIP method is only supported in Kubernetes runtime. To package a function with **PIP** in Python, complete the following steps. - -1. Configure the `functions_worker.yml` file. - - ```text - - #### Kubernetes Runtime #### - installUserCodeDependencies: true - - ``` - -2. Write your Python Function. - - ``` - - from pulsar import Function - import js2xml - - # The classic ExclamationFunction that appends an exclamation at the end - # of the input - class ExclamationFunction(Function): - def __init__(self): - pass - - def process(self, input, context): - // add your logic - return input + '!' - - ``` - - You can introduce additional dependencies. When Python Function detects that the file currently used is `whl` and the `installUserCodeDependencies` parameter is specified, the system uses the `pip install` command to install the dependencies required in Python Function. - -3. Generate the `whl` file. - - ```shell script - - $ cd $PULSAR_HOME/pulsar-functions/scripts/python - $ chmod +x generate.sh - $ ./generate.sh - # e.g: ./generate.sh /path/to/python /path/to/python/output 1.0.0 - - ``` - - The output is written in `/path/to/python/output`: - - ```text - - -rw-r--r-- 1 root staff 1.8K 8 27 14:29 pulsarfunction-1.0.0-py2-none-any.whl - -rw-r--r-- 1 root staff 1.4K 8 27 14:29 pulsarfunction-1.0.0.tar.gz - -rw-r--r-- 1 root staff 0B 8 27 14:29 pulsarfunction.whl - - ``` - -## Go - -To package a function in Go, complete the following steps. - -1. Write a Go function. - - Currently, Go function can be **only** implemented using SDK and the interface of the function is exposed in the form of SDK. Before using the Go function, you need to import "github.com/apache/pulsar/pulsar-function-go/pf". 
- - ``` - - import ( - "context" - "fmt" - - "github.com/apache/pulsar/pulsar-function-go/pf" - ) - - func HandleRequest(ctx context.Context, input []byte) error { - fmt.Println(string(input) + "!") - return nil - } - - func main() { - pf.Start(HandleRequest) - } - - ``` - - You can use context to connect to the Go function. - - ``` - - if fc, ok := pf.FromContext(ctx); ok { - fmt.Printf("function ID is:%s, ", fc.GetFuncID()) - fmt.Printf("function version is:%s\n", fc.GetFuncVersion()) - } - - ``` - - When writing a Go function, remember that - - In `main()`, you **only** need to register the function name to `Start()`. **Only** one function name is received in `Start()`. - - Go function uses Go reflection, which is based on the received function name, to verify whether the parameter list and returned value list are correct. The parameter list and returned value list **must be** one of the following sample functions: - - ``` - - func () - func () error - func (input) error - func () (output, error) - func (input) (output, error) - func (context.Context) error - func (context.Context, input) error - func (context.Context) (output, error) - func (context.Context, input) (output, error) - - ``` - -2. Build the Go function. - - ``` - - go build .go - - ``` - -3. Run the Go Function. - - (1) Copy the Go function file to the Pulsar image. - - ```bash - - docker exec -it [CONTAINER ID] /bin/bash - docker cp CONTAINER ID:/pulsar - - ``` - - (2) Run the Go function with the following command. - - ``` - - ./bin/pulsar-admin functions localrun \ - --go [your go function path] - --inputs [input topics] \ - --output [output topic] \ - --tenant [default:public] \ - --namespace [default:default] \ - --name [custom unique go function name] - - ``` - - The following log indicates that the Go function starts successfully. - - ```text - - ... - 07:55:03.724 [main] INFO org.apache.pulsar.functions.runtime.ProcessRuntime - Started process successfully - ... - - ``` - -## Start Functions in cluster mode -If you want to start a function in cluster mode, replace `localrun` with `create` in the commands above. The following log indicates that your function starts successfully. - - ```text - - "Created successfully" - - ``` - -For information about parameters on `--classname`, `--jar`, `--py`, `--go`, `--inputs`, run the command `./bin/pulsar-admin functions` or see [here](reference-pulsar-admin.md#functions). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/functions-runtime.md b/site2/website/versioned_docs/version-2.10.0-deprecated/functions-runtime.md deleted file mode 100644 index 9a01dbf4da1d1d..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/functions-runtime.md +++ /dev/null @@ -1,406 +0,0 @@ ---- -id: functions-runtime -title: Configure Functions runtime -sidebar_label: "Setup: Configure Functions runtime" -original_id: functions-runtime ---- - -You can use the following methods to run functions. - -- *Thread*: Invoke functions threads in functions worker. -- *Process*: Invoke functions in processes forked by functions worker. -- *Kubernetes*: Submit functions as Kubernetes StatefulSets by functions worker. - -:::note - -Pulsar supports adding labels to the Kubernetes StatefulSets and services while launching functions, which facilitates selecting the target Kubernetes objects. 
- -::: - -The differences of the thread and process modes are: -- Thread mode: when a function runs in thread mode, it runs on the same Java virtual machine (JVM) with functions worker. -- Process mode: when a function runs in process mode, it runs on the same machine that functions worker runs. - -## Configure thread runtime -It is easy to configure *Thread* runtime. In most cases, you do not need to configure anything. You can customize the thread group name with the following settings: - -```yaml - -functionRuntimeFactoryClassName: org.apache.pulsar.functions.runtime.thread.ThreadRuntimeFactory -functionRuntimeFactoryConfigs: - threadGroupName: "Your Function Container Group" - -``` - -*Thread* runtime is only supported in Java function. - -## Configure process runtime -When you enable *Process* runtime, you do not need to configure anything. - -```yaml - -functionRuntimeFactoryClassName: org.apache.pulsar.functions.runtime.process.ProcessRuntimeFactory -functionRuntimeFactoryConfigs: - # the directory for storing the function logs - logDirectory: - # change the jar location only when you put the java instance jar in a different location - javaInstanceJarLocation: - # change the python instance location only when you put the python instance jar in a different location - pythonInstanceLocation: - # change the extra dependencies location: - extraFunctionDependenciesDir: - -``` - -*Process* runtime is supported in Java, Python, and Go functions. - -## Configure Kubernetes runtime - -When the functions worker generates Kubernetes manifests and apply the manifests, the Kubernetes runtime works. If you have run functions worker on Kubernetes, you can use the `serviceAccount` associated with the pod that the functions worker is running in. Otherwise, you can configure it to communicate with a Kubernetes cluster. - -The manifests, generated by the functions worker, include a `StatefulSet`, a `Service` (used to communicate with the pods), and a `Secret` for auth credentials (when applicable). The `StatefulSet` manifest (by default) has a single pod, with the number of replicas determined by the "parallelism" of the function. On pod boot, the pod downloads the function payload (via the functions worker REST API). The pod's container image is configurable, but must have the functions runtime. - -The Kubernetes runtime supports secrets, so you can create a Kubernetes secret and expose it as an environment variable in the pod. The Kubernetes runtime is extensible, you can implement classes and customize the way how to generate Kubernetes manifests, how to pass auth data to pods, and how to integrate secrets. - -:::tip - -For the rules of translating Pulsar object names into Kubernetes resource labels, see [here](admin-api-overview.md#how-to-define-pulsar-resource-names-when-running-pulsar-in-kubernetes). - -::: - -### Basic configuration - -It is easy to configure Kubernetes runtime. You can just uncomment the settings of `kubernetesContainerFactory` in the `functions_worker.yaml` file. The following is an example. - -```yaml - -functionRuntimeFactoryClassName: org.apache.pulsar.functions.runtime.kubernetes.KubernetesRuntimeFactory -functionRuntimeFactoryConfigs: - # uri to kubernetes cluster, leave it to empty and it will use the kubernetes settings in function worker - k8Uri: - # the kubernetes namespace to run the function instances. it is `default`, if this setting is left to be empty - jobNamespace: - # The Kubernetes pod name to run the function instances. 
It is set to - # `pf----` if this setting is left to be empty - jobName: - # the docker image to run function instance. by default it is `apachepulsar/pulsar` - pulsarDockerImageName: - # the docker image to run function instance according to different configurations provided by users. - # By default it is `apachepulsar/pulsar`. - # e.g: - # functionDockerImages: - # JAVA: JAVA_IMAGE_NAME - # PYTHON: PYTHON_IMAGE_NAME - # GO: GO_IMAGE_NAME - functionDockerImages: - # "The image pull policy for image used to run function instance. By default it is `IfNotPresent` - imagePullPolicy: IfNotPresent - # the root directory of pulsar home directory in `pulsarDockerImageName`. by default it is `/pulsar`. - # if you are using your own built image in `pulsarDockerImageName`, you need to set this setting accordingly - pulsarRootDir: - # The config admin CLI allows users to customize the configuration of the admin cli tool, such as: - # `/bin/pulsar-admin and /bin/pulsarctl`. By default it is `/bin/pulsar-admin`. If you want to use `pulsarctl` - # you need to set this setting accordingly - configAdminCLI: - # this setting only takes effects if `k8Uri` is set to null. if your function worker is running as a k8 pod, - # setting this to true is let function worker to submit functions to the same k8s cluster as function worker - # is running. setting this to false if your function worker is not running as a k8 pod. - submittingInsidePod: false - # setting the pulsar service url that pulsar function should use to connect to pulsar - # if it is not set, it will use the pulsar service url configured in worker service - pulsarServiceUrl: - # setting the pulsar admin url that pulsar function should use to connect to pulsar - # if it is not set, it will use the pulsar admin url configured in worker service - pulsarAdminUrl: - # The flag indicates to install user code dependencies. (applied to python package) - installUserCodeDependencies: - # The repository that pulsar functions use to download python dependencies - pythonDependencyRepository: - # The repository that pulsar functions use to download extra python dependencies - pythonExtraDependencyRepository: - # the custom labels that function worker uses to select the nodes for pods - customLabels: - # The expected metrics collection interval, in seconds - expectedMetricsCollectionInterval: 30 - # Kubernetes Runtime will periodically checkback on - # this configMap if defined and if there are any changes - # to the kubernetes specific stuff, we apply those changes - changeConfigMap: - # The namespace for storing change config map - changeConfigMapNamespace: - # The ratio cpu request and cpu limit to be set for a function/source/sink. - # The formula for cpu request is cpuRequest = userRequestCpu / cpuOverCommitRatio - cpuOverCommitRatio: 1.0 - # The ratio memory request and memory limit to be set for a function/source/sink. - # The formula for memory request is memoryRequest = userRequestMemory / memoryOverCommitRatio - memoryOverCommitRatio: 1.0 - # The port inside the function pod which is used by the worker to communicate with the pod - grpcPort: 9093 - # The port inside the function pod on which prometheus metrics are exposed - metricsPort: 9094 - # The directory inside the function pod where nar packages will be extracted - narExtractionDirectory: - # The classpath where function instance files stored - functionInstanceClassPath: - # Upload the builtin sources/sinks to BookKeeper. - # True by default. 
  uploadBuiltinSinksSources: true
  # the directory for dropping extra function dependencies
  # if it is not an absolute path, it is relative to `pulsarRootDir`
  extraFunctionDependenciesDir:
  # Additional memory padding added on top of the memory requested by the function, on a per-instance basis
  percentMemoryPadding: 10
  # The duration (in seconds) before the StatefulSet is deleted after a function stops or restarts.
  # Value must be a non-negative integer. 0 indicates the StatefulSet is deleted immediately.
  # Default is 5 seconds.
  gracePeriodSeconds: 5

```

If you run the functions worker embedded in a broker on Kubernetes, you can use the default settings.

### Run standalone functions worker on Kubernetes

If you run the functions worker standalone (that is, not embedded) on Kubernetes, you need to configure `pulsarServiceUrl` to be the URL of the broker and `pulsarAdminUrl` as the URL to the functions worker.

For example, suppose both the Pulsar brokers and the functions workers run in the `pulsar` K8S namespace, the brokers have a service called `broker`, and the functions worker has a service called `func-worker`. The settings are as follows:

```yaml

pulsarServiceUrl: pulsar://broker.pulsar:6650 # or pulsar+ssl://broker.pulsar:6651 if using TLS
pulsarAdminUrl: http://func-worker.pulsar:8080 # or https://func-worker:8443 if using TLS

```

### Run RBAC in Kubernetes clusters

If you run RBAC in your Kubernetes cluster, make sure that the service account you use for running functions workers (or brokers, if functions workers run along with brokers) has permissions on the following Kubernetes APIs.

- services
- configmaps
- pods
- apps.statefulsets

The following is sufficient:

```yaml

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: functions-worker
rules:
- apiGroups: [""]
  resources:
  - services
  - configmaps
  - pods
  verbs:
  - '*'
- apiGroups:
  - apps
  resources:
  - statefulsets
  verbs:
  - '*'
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: functions-worker
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: functions-worker
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: functions-worker
subjects:
- kind: ServiceAccount
  name: functions-worker

```

If the service account is not properly configured, an error message similar to the following is displayed:

```bash

22:04:27.696 [Timer-0] ERROR org.apache.pulsar.functions.runtime.KubernetesRuntimeFactory - Error while trying to fetch configmap example-pulsar-4qvmb5gur3c6fc9dih0x1xn8b-function-worker-config at namespace pulsar
io.kubernetes.client.ApiException: Forbidden
	at io.kubernetes.client.ApiClient.handleResponse(ApiClient.java:882) ~[io.kubernetes-client-java-2.0.0.jar:?]
	at io.kubernetes.client.ApiClient.execute(ApiClient.java:798) ~[io.kubernetes-client-java-2.0.0.jar:?]
	at io.kubernetes.client.apis.CoreV1Api.readNamespacedConfigMapWithHttpInfo(CoreV1Api.java:23673) ~[io.kubernetes-client-java-api-2.0.0.jar:?]
	at io.kubernetes.client.apis.CoreV1Api.readNamespacedConfigMap(CoreV1Api.java:23655) ~[io.kubernetes-client-java-api-2.0.0.jar:?]
	at org.apache.pulsar.functions.runtime.KubernetesRuntimeFactory.fetchConfigMap(KubernetesRuntimeFactory.java:284) [org.apache.pulsar-pulsar-functions-runtime-2.4.0-42c3bf949.jar:2.4.0-42c3bf949]
	at org.apache.pulsar.functions.runtime.KubernetesRuntimeFactory$1.run(KubernetesRuntimeFactory.java:275) [org.apache.pulsar-pulsar-functions-runtime-2.4.0-42c3bf949.jar:2.4.0-42c3bf949]
	at java.util.TimerThread.mainLoop(Timer.java:555) [?:1.8.0_212]
	at java.util.TimerThread.run(Timer.java:505) [?:1.8.0_212]

```
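
If you hit the `Forbidden` error above, one way to diagnose it, assuming you have `kubectl` access and are using the `pulsar` namespace and the `functions-worker` service account from the example, is to ask Kubernetes directly whether the service account holds the required permissions:

```bash

# Check whether the functions-worker service account can read config maps;
# repeat with pods, services, and statefulsets to cover all required APIs.
kubectl auth can-i get configmaps \
  --as=system:serviceaccount:pulsar:functions-worker \
  -n pulsar

```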

### Integrate Kubernetes secrets

In order to safely distribute secrets, Pulsar Functions can reference Kubernetes secrets. To enable this, set the `secretsProviderConfiguratorClassName` to `org.apache.pulsar.functions.secretsproviderconfigurator.KubernetesSecretsProviderConfigurator`.

You can create a secret in the namespace where your functions are deployed. For example, suppose you deploy functions to the `pulsar-func` Kubernetes namespace, and you have a secret named `database-creds` with a field name `password`, which you want to mount in the pod as an environment variable called `DATABASE_PASSWORD`. The following functions configuration enables you to reference that secret and mount the value as an environment variable in the pod.

```Yaml

tenant: "mytenant"
namespace: "mynamespace"
name: "myfunction"
topicName: "persistent://mytenant/mynamespace/myfuncinput"
className: "com.company.pulsar.myfunction"

secrets:
  # the secret will be mounted from the `password` field in the `database-creds` secret as an env var called `DATABASE_PASSWORD`
  DATABASE_PASSWORD:
    path: "database-creds"
    key: "password"

```

### Enable token authentication

When you enable authentication for your Pulsar cluster, you need a mechanism for the pod running your function to authenticate with the broker.

The `org.apache.pulsar.functions.auth.KubernetesFunctionAuthProvider` interface provides support for any authentication mechanism. The `functionAuthProviderClassName` in `function-worker.yml` is used to specify the implementation to use.

Pulsar includes an implementation of this interface for token authentication, and distributes the certificate authority via the same implementation. The configuration is similar to the following:

```Yaml

functionAuthProviderClassName: org.apache.pulsar.functions.auth.KubernetesSecretsTokenAuthProvider

```

For token authentication, the functions worker captures the token that is used to deploy (or update) the function. The token is saved as a secret and mounted into the pod.

For custom authentication or TLS, you need to implement this interface or use an alternative mechanism to provide authentication. If you use token authentication and TLS encryption to secure the communication with the cluster, Pulsar passes your certificate authority (CA) to the client, so the client obtains what it needs to authenticate the cluster, and trusts the cluster with your signed certificate.

:::note

If the token you use to deploy a function has an expiry time, the copy saved for the function will expire at the same time.

:::

### Run clusters with authentication

When you run a functions worker in a standalone process (that is, not embedded in the broker) in a cluster with authentication, you must configure your functions worker to interact with the broker and authenticate incoming requests. So you need to configure the properties that the broker requires for authentication or authorization.
-
-For example, if you use token authentication, you need to configure the following properties in the `functions_worker.yml` file.
-
-```Yaml
-
-clientAuthenticationPlugin: org.apache.pulsar.client.impl.auth.AuthenticationToken
-clientAuthenticationParameters: file:///etc/pulsar/token/admin-token.txt
-configurationMetadataStoreUrl: zk:zookeeper-cluster:2181 # auth requires a connection to zookeeper
-authenticationProviders:
-  - "org.apache.pulsar.broker.authentication.AuthenticationProviderToken"
-authorizationEnabled: true
-authenticationEnabled: true
-superUserRoles:
-  - superuser
-  - proxy
-properties:
-  tokenSecretKey: file:///etc/pulsar/jwt/secret # if using a secret token, key file must be DER-encoded
-  tokenPublicKey: file:///etc/pulsar/jwt/public.key # if using public/private key tokens, key file must be DER-encoded
-
-```
-
-:::note
-
-You must configure both sides: enable authentication and authorization on the Functions Worker so that the server can authenticate incoming requests, and configure the client credentials so that the worker can authenticate itself to the broker.
-
-:::
-
-### Customize Kubernetes runtime
-
-The Kubernetes integration enables you to implement a class that customizes how manifests are generated. You can configure it by setting `runtimeCustomizerClassName` in the `functions_worker.yml` file to the fully qualified class name. You must implement the `org.apache.pulsar.functions.runtime.kubernetes.KubernetesManifestCustomizer` interface.
-
-The functions (and sinks/sources) API provides a flag, `customRuntimeOptions`, which is passed to this interface.
-
-To initialize the `KubernetesManifestCustomizer`, you can provide `runtimeCustomizerConfig` in the `functions_worker.yml` file. `runtimeCustomizerConfig` is passed to the `public void initialize(Map<String, Object> config)` function of the interface. `runtimeCustomizerConfig` is different from `customRuntimeOptions` in that `runtimeCustomizerConfig` is the same across all functions. If you provide both `runtimeCustomizerConfig` and `customRuntimeOptions`, you need to decide how to manage these two configurations in your implementation of `KubernetesManifestCustomizer`.
-
-Pulsar includes a built-in implementation. To use the basic implementation, set `runtimeCustomizerClassName` to `org.apache.pulsar.functions.runtime.kubernetes.BasicKubernetesManifestCustomizer`. The built-in implementation initialized with `runtimeCustomizerConfig` enables you to pass a JSON document as `customRuntimeOptions` with certain properties that augment how the manifests are generated. If both `runtimeCustomizerConfig` and `customRuntimeOptions` are provided, `BasicKubernetesManifestCustomizer` uses `customRuntimeOptions` to override `runtimeCustomizerConfig` where the two conflict.
-
-Below is an example of `customRuntimeOptions`.
-
-```json
-
-{
-  "jobName": "jobname", // the k8s pod name to run this function instance
-  "jobNamespace": "namespace", // the k8s namespace to run this function in
-  "extraLabels": { // extra labels to attach to the statefulSet, service, and pods
-    "extraLabel": "value"
-  },
-  "extraAnnotations": { // extra annotations to attach to the statefulSet, service, and pods
-    "extraAnnotation": "value"
-  },
-  "nodeSelectorLabels": { // node selector labels to add on to the pod spec
-    "customLabel": "value"
-  },
-  "tolerations": [ // tolerations to add to the pod spec
-    {
-      "key": "custom-key",
-      "value": "value",
-      "effect": "NoSchedule"
-    }
-  ],
-  "resourceRequirements": { // values for cpu and memory should be defined as described here: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container
-    "requests": {
-      "cpu": 1,
-      "memory": "4G"
-    },
-    "limits": {
-      "cpu": 2,
-      "memory": "8G"
-    }
-  }
-}
-
-```
-
-## Run clusters with geo-replication
-
-If you run multiple clusters tied together with geo-replication, it is important to use a different function namespace for each cluster. Otherwise, the functions share one namespace and may be scheduled across clusters.
-
-For example, if you have two clusters, `east-1` and `west-1`, you can configure the functions workers for `east-1` and `west-1` respectively as follows.
-
-```Yaml
-
-pulsarFunctionsCluster: east-1
-pulsarFunctionsNamespace: public/functions-east-1
-
-```
-
-```Yaml
-
-pulsarFunctionsCluster: west-1
-pulsarFunctionsNamespace: public/functions-west-1
-
-```
-
-This ensures the two different Functions Workers use distinct sets of topics for their internal coordination.
-
-## Configure standalone functions worker
-
-When configuring a standalone functions worker, you need to configure the properties that the broker requires, especially if you use TLS, so that the functions worker can communicate with the broker.
-
-You need to configure the following required properties.
-
-```Yaml
-
-workerPort: 8080
-workerPortTls: 8443 # when using TLS
-tlsCertificateFilePath: /etc/pulsar/tls/tls.crt # when using TLS
-tlsKeyFilePath: /etc/pulsar/tls/tls.key # when using TLS
-tlsTrustCertsFilePath: /etc/pulsar/tls/ca.crt # when using TLS
-pulsarServiceUrl: pulsar://broker.pulsar:6650/ # or pulsar+ssl://pulsar-prod-broker.pulsar:6651/ when using TLS
-pulsarWebServiceUrl: http://broker.pulsar:8080/ # or https://pulsar-prod-broker.pulsar:8443/ when using TLS
-useTls: true # when using TLS, critical!
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/functions-worker.md b/site2/website/versioned_docs/version-2.10.0-deprecated/functions-worker.md
deleted file mode 100644
index 60eb84657919be..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/functions-worker.md
+++ /dev/null
@@ -1,405 +0,0 @@
----
-id: functions-worker
-title: Deploy and manage functions worker
-sidebar_label: "Setup: Pulsar Functions Worker"
-original_id: functions-worker
----
-Before using Pulsar Functions, you need to learn how to set up Pulsar Functions worker and how to [configure Functions runtime](functions-runtime.md).
-
-Pulsar `functions-worker` is a logical component that runs Pulsar Functions in cluster mode. Two options are available, and you can select either based on your requirements.
-- [run with brokers](#run-functions-worker-with-brokers)
-- [run it separately](#run-functions-worker-separately) on separate machines
-
-:::note
-
-The `--- Service Urls---` lines in the following diagrams represent Pulsar service URLs that Pulsar client and admin use to connect to a Pulsar cluster.
-
-:::
-
-## Run Functions-worker with brokers
-
-The following diagram illustrates the deployment of functions-workers running along with brokers.
-
-![assets/functions-worker-corun.png](/assets/functions-worker-corun.png)
-
-To enable functions-worker running as part of a broker, you need to set `functionsWorkerEnabled` to `true` in the `broker.conf` file.
-
-```conf
-
-functionsWorkerEnabled=true
-
-```
-
-If `functionsWorkerEnabled` is set to `true`, the functions-worker is started as part of a broker. You need to configure the `conf/functions_worker.yml` file to customize your functions worker.
-
-Before you run Functions-worker with brokers, you have to configure Functions-worker, and then start it with brokers.
-
-### Configure Functions-Worker to run with brokers
-
-In this mode, most of the settings are already inherited from your broker configuration (for example, configurationStore settings, authentication settings, and so on) since `functions-worker` is running as part of the broker.
-
-Pay attention to the following required settings when configuring functions-worker in this mode.
-
-- `numFunctionPackageReplicas`: The number of replicas to store function packages. The default value is `1`, which is good for standalone deployment. For production deployment, to ensure high availability, set it to be larger than `2`.
-- `initializedDlogMetadata`: Whether the distributed log metadata has been initialized. If it is set to `true`, you must ensure that it has been initialized by the `bin/pulsar initialize-cluster-metadata` command.
-
-If authentication is enabled on the BookKeeper cluster, configure the following BookKeeper authentication settings.
-
-- `bookkeeperClientAuthenticationPlugin`: the BookKeeper client authentication plugin name.
-- `bookkeeperClientAuthenticationParametersName`: the BookKeeper client authentication plugin parameters name.
-- `bookkeeperClientAuthenticationParameters`: the BookKeeper client authentication plugin parameters.
-
-### Configure Stateful-Functions to run with broker
-
-If you want to use Stateful-Functions related functions (for example, the `putState()` and `queryState()` related interfaces), follow the steps below.
-
-1. Enable the **streamStorage** service in BookKeeper.
-
-   Currently, the service uses the NAR package, so you need to set the configuration in `bookkeeper.conf`.
-
-   ```text
-
-   extraServerComponents=org.apache.bookkeeper.stream.server.StreamStorageLifecycleComponent
-
-   ```
-
-   After starting the bookie, use the following method to check whether the streamStorage service is started correctly.
-
-   Input:
-
-   ```shell
-
-   telnet localhost 4181
-
-   ```
-
-   Output:
-
-   ```text
-
-   Trying 127.0.0.1...
-   Connected to localhost.
-   Escape character is '^]'.
-
-   ```
-
-2. Turn on this feature in `functions_worker.yml`.
-
-   ```text
-
-   stateStorageServiceUrl: bk://<bk-service-url>:4181
-
-   ```
-
-   `<bk-service-url>` is the service URL pointing to the BookKeeper table service.
-
-### Start Functions-worker with broker
-
-Once you have configured the `functions_worker.yml` file, you can start or restart your broker.
-
-And then you can use the following command to verify that `functions-worker` is running well.
-
-```bash
-
-curl <broker-hostname>:8080/admin/v2/worker/cluster
-
-```
-
-After entering the command above, a list of active function workers in the cluster is returned. The output is similar to the following.
-
-```json
-
-[{"workerId":"","workerHostname":"","port":8080}]
-
-```
-
-## Run Functions-worker separately
-
-This section illustrates how to run `functions-worker` as a separate process on separate machines.
-
-![assets/functions-worker-separated.png](/assets/functions-worker-separated.png)
-
-:::note
-
-In this mode, make sure `functionsWorkerEnabled` is set to `false`, so you won't start `functions-worker` with brokers by mistake. Also, while accessing the `functions-worker` to manage any of the functions, the `pulsar-admin` CLI tool or any of the clients should use the `workerHostname` and `workerPort` that you set in [Worker parameters](#worker-parameters) to construct the `--admin-url`.
-
-:::
-
-### Configure Functions-worker to run separately
-
-To run function-worker separately, you have to configure the following parameters.
-
-#### Worker parameters
-
-- `workerId`: A string that identifies a worker machine. It must be unique across the cluster.
-- `workerHostname`: The hostname of the worker machine.
-- `workerPort`: The port that the worker server listens on. Keep it as default if you don't customize it.
-- `workerPortTls`: The TLS port that the worker server listens on. Keep it as default if you don't customize it.
-
-#### Function package parameter
-
-- `numFunctionPackageReplicas`: The number of replicas to store function packages. The default value is `1`.
-
-#### Function metadata parameter
-
-- `pulsarServiceUrl`: The Pulsar service URL for your broker cluster.
-- `pulsarWebServiceUrl`: The Pulsar web service URL for your broker cluster.
-- `pulsarFunctionsCluster`: Set the value to your Pulsar cluster name (same as the `clusterName` setting in the broker configuration).
-
-If authentication is enabled for your broker cluster, you *should* configure the authentication plugin and parameters for the functions worker to communicate with the brokers.
-
-- `clientAuthenticationPlugin`
-- `clientAuthenticationParameters`
-
-#### Customize Java runtime options
-
-If you want to pass additional arguments to the JVM command line for every process started by a functions worker,
-you can configure the `additionalJavaRuntimeArguments` parameter.
-
-```
-
-additionalJavaRuntimeArguments: ['-XX:+ExitOnOutOfMemoryError','-Dfoo=bar']
-
-```
-
-This is very useful in case you want to:
-- add JVM flags, like `-XX:+ExitOnOutOfMemoryError`
-- pass custom system properties, like `-Dlog4j2.formatMsgNoLookups`
-
-:::note
-
-This feature applies only to Process and Kubernetes runtimes.
-
-:::
-
-#### Security settings
-
-If you want to enable security on functions workers, you *should*:
-- [Enable TLS transport encryption](#enable-tls-transport-encryption)
-- [Enable Authentication Provider](#enable-authentication-provider)
-- [Enable Authorization Provider](#enable-authorization-provider)
-- [Enable End-to-End Encryption](#enable-end-to-end-encryption)
-
-##### Enable TLS transport encryption
-
-To enable TLS transport encryption, configure the following settings.
-
-```
-
-useTLS: true
-pulsarServiceUrl: pulsar+ssl://localhost:6651/
-pulsarWebServiceUrl: https://localhost:8443
-
-tlsEnabled: true
-tlsCertificateFilePath: /path/to/functions-worker.cert.pem
-tlsKeyFilePath: /path/to/functions-worker.key-pk8.pem
-tlsTrustCertsFilePath: /path/to/ca.cert.pem
-
-# The path to trusted certificates used by the Pulsar client to authenticate with Pulsar brokers
-brokerClientTrustCertsFilePath: /path/to/ca.cert.pem
-
-```
-
-For details on TLS encryption, refer to [Transport Encryption using TLS](security-tls-transport.md).
-
-##### Enable Authentication Provider
-
-To enable authentication on Functions Worker, you need to configure the following settings.
-
-:::note
-
-Substitute the *providers list* with the providers you want to enable.
-
-:::
-
-```
-
-authenticationEnabled: true
-authenticationProviders: [ provider1, provider2 ]
-
-```
-
-For the *TLS Authentication* provider, follow the example below to add the necessary settings.
-See [TLS Authentication](security-tls-authentication.md) for more details.
-
-```
-
-brokerClientAuthenticationPlugin: org.apache.pulsar.client.impl.auth.AuthenticationTls
-brokerClientAuthenticationParameters: tlsCertFile:/path/to/admin.cert.pem,tlsKeyFile:/path/to/admin.key-pk8.pem
-
-authenticationEnabled: true
-authenticationProviders: ['org.apache.pulsar.broker.authentication.AuthenticationProviderTls']
-
-```
-
-For the *SASL Authentication* provider, add `saslJaasClientAllowedIds` and `saslJaasBrokerSectionName`
-under `properties` if needed.
-
-```
-
-properties:
-  saslJaasClientAllowedIds: .*pulsar.*
-  saslJaasBrokerSectionName: Broker
-
-```
-
-For the *Token Authentication* provider, add the necessary settings under `properties` if needed.
-See [Token Authentication](security-jwt.md) for more details.
-Note: key files must be DER-encoded.
-
-```
-
-properties:
-  tokenSecretKey: file://my/secret.key
-  # If using public/private keys
-  # tokenPublicKey: file:///path/to/public.key
-
-```
-
-##### Enable Authorization Provider
-
-To enable authorization on Functions Worker, you need to configure `authorizationEnabled`, `authorizationProvider` and `configurationMetadataStoreUrl`. The authorization provider connects to `configurationMetadataStoreUrl` to receive namespace policies.
-
-```yaml
-
-authorizationEnabled: true
-authorizationProvider: org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider
-configurationMetadataStoreUrl: <zookeeper-host>:<zookeeper-port>
-
-```
-
-You should also configure a list of superuser roles. The superuser roles are able to access any admin API. The following is a configuration example.
-
-```yaml
-
-superUserRoles:
-  - role1
-  - role2
-  - role3
-
-```
-
-##### Enable End-to-End Encryption
-
-You can use the public and private key pair that the application configures to perform encryption. Only the consumers with a valid key can decrypt the encrypted messages.
-
-To enable end-to-end encryption on Functions Worker, you can set it by specifying `--producer-config` on the command line; for more information, please refer to [here](security-encryption.md).
-
-We include the relevant configuration information of `CryptoConfig` into `ProducerConfig`.
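-
-For instance, here is a hedged sketch of what this can look like when deploying a function; the key reader class `com.company.pulsar.MyKeyReader` and the key name `myAppKey` are placeholders, and the JSON fields mirror the `CryptoConfig` class described below:
-
-```bash
-
-bin/pulsar-admin functions create \
-  --tenant mytenant \
-  --namespace mynamespace \
-  --name myfunction \
-  --inputs persistent://mytenant/mynamespace/myfuncinput \
-  --classname com.company.pulsar.myfunction \
-  --jar /path/to/my-function.jar \
-  --producer-config '{"cryptoConfig": {"cryptoKeyReaderClassName": "com.company.pulsar.MyKeyReader", "encryptionKeys": ["myAppKey"], "producerCryptoFailureAction": "FAIL"}}'
-
-```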
-
-The specific configurable field information about `CryptoConfig` is as follows:
-
-```text
-
-public class CryptoConfig {
-    private String cryptoKeyReaderClassName;
-    private Map<String, Object> cryptoKeyReaderConfig;
-
-    private String[] encryptionKeys;
-    private ProducerCryptoFailureAction producerCryptoFailureAction;
-
-    private ConsumerCryptoFailureAction consumerCryptoFailureAction;
-}
-
-```
-
-- `producerCryptoFailureAction`: defines the action to take if the producer fails to encrypt data; one of `FAIL`, `SEND`.
-- `consumerCryptoFailureAction`: defines the action to take if the consumer fails to decrypt data; one of `FAIL`, `DISCARD`, `CONSUME`.
-
-#### BookKeeper Authentication
-
-If authentication is enabled on the BookKeeper cluster, you need to configure the BookKeeper authentication settings as follows:
-
-- `bookkeeperClientAuthenticationPlugin`: the plugin name of BookKeeper client authentication.
-- `bookkeeperClientAuthenticationParametersName`: the plugin parameters name of BookKeeper client authentication.
-- `bookkeeperClientAuthenticationParameters`: the plugin parameters of BookKeeper client authentication.
-
-### Start Functions-worker
-
-Once you have finished configuring the `functions_worker.yml` configuration file, you can start a `functions-worker` in the background by using [nohup](https://en.wikipedia.org/wiki/Nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
-
-```bash
-
-bin/pulsar-daemon start functions-worker
-
-```
-
-You can also start `functions-worker` in the foreground by using the `pulsar` CLI tool:
-
-```bash
-
-bin/pulsar functions-worker
-
-```
-
-### Configure Proxies for Functions-workers
-
-When you are running `functions-worker` in a separate cluster, the admin REST endpoints are split into two clusters. The `functions`, `function-worker`, `source` and `sink` endpoints are now served
-by the `functions-worker` cluster, while all the remaining endpoints are served by the broker cluster.
-Hence you need to configure your `pulsar-admin` to use the right service URL accordingly.
-
-To address this inconvenience, you can start a proxy cluster for routing the admin REST requests, which gives you one central entry point for your admin service.
-
-If you already have a proxy cluster, continue reading. If you haven't set up a proxy cluster before, you can follow the [instructions](administration-proxy.md) to start proxies.
-
-![assets/functions-worker-separated-proxy.png](/assets/functions-worker-separated-proxy.png)
-
-To enable routing functions-related admin requests to `functions-worker` in a proxy, you can edit the `proxy.conf` file to modify the following settings:
-
-```conf
-
-functionWorkerWebServiceURL=
-functionWorkerWebServiceURLTLS=
-
-```
-
-## Compare the Run-with-Broker and Run-separately modes
-
-As described above, you can run Functions-worker with brokers, or run it separately. It is more convenient to run functions-workers along with brokers. However, running functions-workers in a separate cluster provides better resource isolation for running functions in `Process` or `Thread` mode.
-
-To determine which mode suits your case, refer to the following guidelines.
-
-Use the `Run-with-Broker` mode in the following cases:
-- a) if resource isolation is not required when running functions in `Process` or `Thread` mode;
-- b) if you configure the functions-worker to run functions on Kubernetes (where the resource isolation problem is addressed by Kubernetes).
-
-Use the `Run-separately` mode in the following cases:
-- a) if you don't have a Kubernetes cluster;
-- b) if you want to run functions and brokers separately.
-
-## Troubleshooting
-
-**Error message: Namespace missing local cluster name in clusters list**
-
-```
-
-Failed to get partitioned topic metadata: org.apache.pulsar.client.api.PulsarClientException$BrokerMetadataException: Namespace missing local cluster name in clusters list: local_cluster=xyz ns=public/functions clusters=[standalone]
-
-```
-
-This error message appears when either of the following cases occurs:
-- a) a broker is started with `functionsWorkerEnabled=true`, but `pulsarFunctionsCluster` is not set to the correct cluster in the `conf/functions_worker.yml` file;
-- b) you set up a geo-replicated Pulsar cluster with `functionsWorkerEnabled=true`; brokers in one cluster run well, while brokers in the other cluster do not.
-
-**Workaround**
-
-If either of these cases happens, follow the instructions below to fix the problem:
-
-1. Disable Functions Worker by setting `functionsWorkerEnabled=false`, and restart brokers.
-
-2. Get the current clusters list of the `public/functions` namespace.
-
-```bash
-
-bin/pulsar-admin namespaces get-clusters public/functions
-
-```
-
-3. Check if the cluster is in the clusters list. If the cluster is not in the list, add it to the list and update the clusters list.
-
-```bash
-
-bin/pulsar-admin namespaces set-clusters --clusters <existing-clusters>,<local-cluster> public/functions
-
-```
-
-4. After setting the cluster successfully, enable functions worker by setting `functionsWorkerEnabled=true`.
-
-5. Set the correct cluster name in `pulsarFunctionsCluster` in the `conf/functions_worker.yml` file, and restart brokers.
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/getting-started-concepts-and-architecture.md b/site2/website/versioned_docs/version-2.10.0-deprecated/getting-started-concepts-and-architecture.md
deleted file mode 100644
index fe9c3fbc553b2c..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/getting-started-concepts-and-architecture.md
+++ /dev/null
@@ -1,16 +0,0 @@
----
-id: concepts-architecture
-title: Pulsar concepts and architecture
-sidebar_label: "Concepts and architecture"
-original_id: concepts-architecture
----
-
-
-
-
-
-
-
-
-
-
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/getting-started-docker.md b/site2/website/versioned_docs/version-2.10.0-deprecated/getting-started-docker.md
deleted file mode 100644
index 441a7b897278f3..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/getting-started-docker.md
+++ /dev/null
@@ -1,219 +0,0 @@
----
-id: getting-started-docker
-title: Set up a standalone Pulsar in Docker
-sidebar_label: "Run Pulsar in Docker"
-original_id: getting-started-docker
----
-
-For local development and testing, you can run Pulsar in standalone mode on your own machine within a Docker container.
-
-If you have not installed Docker, download the [Community edition](https://www.docker.com/community-edition) and follow the instructions for your OS.
-
-## Start Pulsar in Docker
-
-For macOS, Linux, and Windows, run the following command to start Pulsar within a Docker container.
-
-```shell
-
-$ docker run -it -p 6650:6650 -p 8080:8080 --mount source=pulsardata,target=/pulsar/data --mount source=pulsarconf,target=/pulsar/conf apachepulsar/pulsar:@pulsar:version@ bin/pulsar standalone
-
-```
-
-If you want to change Pulsar configurations and start Pulsar, run the following command by passing environment variables with the `PULSAR_PREFIX_` prefix. See the [default configuration file](https://github.com/apache/pulsar/blob/e6b12c64b043903eb5ff2dc5186fe8030f157cfc/conf/standalone.conf) for more details.
-
-```shell
-
-$ docker run -it -e PULSAR_PREFIX_xxx=yyy -p 6650:6650 -p 8080:8080 --mount source=pulsardata,target=/pulsar/data --mount source=pulsarconf,target=/pulsar/conf apachepulsar/pulsar:2.10.0 sh -c "bin/apply-config-from-env.py conf/standalone.conf && bin/pulsar standalone"
-
-```
-
-:::tip
-
-* The docker container runs as UID 10000 and GID 0 by default. You need to ensure the mounted volumes give write permission to either UID 10000 or GID 0. Note that UID 10000 is arbitrary, so it is recommended to make these mounts writable for the root group (GID 0).
-* The data, metadata, and configuration are persisted on Docker volumes in order to not start "fresh" every time the container is restarted. For details on the volumes, you can use `docker volume inspect <sourcename>`.
-* For Docker on Windows, make sure to configure it to use Linux containers.
-
-:::
-
-After starting Pulsar successfully, you can see `INFO`-level log messages like this:
-
-```
-
-08:18:30.970 [main] INFO org.apache.pulsar.broker.web.WebService - HTTP Service started at http://0.0.0.0:8080
-...
-07:53:37.322 [main] INFO org.apache.pulsar.broker.PulsarService - messaging service is ready, bootstrap service port = 8080, broker url= pulsar://localhost:6650, cluster=standalone, configs=org.apache.pulsar.broker.ServiceConfiguration@98b63c1
-...
-
-```
-
-:::tip
-
-When you start a local standalone cluster, a `public/default` namespace is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics).
-
-:::
-
-## Use Pulsar in Docker
-
-Pulsar offers a variety of [client libraries](client-libraries.md), such as [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python.md), and [C++](client-libraries-cpp.md).
-
-If you're running a local standalone cluster, you can use one of these root URLs to interact with your cluster:
-* `pulsar://localhost:6650`
-* `http://localhost:8080`
-
-The following example guides you to get started with Pulsar by using the [Python client API](client-libraries-python.md).
- -Install the Pulsar Python client library directly from [PyPI](https://pypi.org/project/pulsar-client/): - -```shell - -$ pip install pulsar-client - -``` - -### Consume a message - -Create a consumer and subscribe to the topic: - -```python - -import pulsar - -client = pulsar.Client('pulsar://localhost:6650') -consumer = client.subscribe('my-topic', - subscription_name='my-sub') - -while True: - msg = consumer.receive() - print("Received message: '%s'" % msg.data()) - consumer.acknowledge(msg) - -client.close() - -``` - -### Produce a message - -Start a producer to send some test messages: - -```python - -import pulsar - -client = pulsar.Client('pulsar://localhost:6650') -producer = client.create_producer('my-topic') - -for i in range(10): - producer.send(('hello-pulsar-%d' % i).encode('utf-8')) - -client.close() - -``` - -## Get the topic statistics - -In Pulsar, you can use REST API, Java, or command-line tools to control every aspect of the system. For details on APIs, refer to [Admin API Overview](admin-api-overview.md). - -In the simplest example, you can use curl to probe the stats for a particular topic: - -```shell - -$ curl http://localhost:8080/admin/v2/persistent/public/default/my-topic/stats | python -m json.tool - -``` - -The output is something like this: - -```json - -{ - "msgRateIn": 0.0, - "msgThroughputIn": 0.0, - "msgRateOut": 1.8332950480217471, - "msgThroughputOut": 91.33142602871978, - "bytesInCounter": 7097, - "msgInCounter": 143, - "bytesOutCounter": 6607, - "msgOutCounter": 133, - "averageMsgSize": 0.0, - "msgChunkPublished": false, - "storageSize": 7097, - "backlogSize": 0, - "offloadedStorageSize": 0, - "publishers": [ - { - "accessMode": "Shared", - "msgRateIn": 0.0, - "msgThroughputIn": 0.0, - "averageMsgSize": 0.0, - "chunkedMessageRate": 0.0, - "producerId": 0, - "metadata": {}, - "address": "/127.0.0.1:35604", - "connectedSince": "2021-07-04T09:05:43.04788Z", - "clientVersion": "2.8.0", - "producerName": "standalone-2-5" - } - ], - "waitingPublishers": 0, - "subscriptions": { - "my-sub": { - "msgRateOut": 1.8332950480217471, - "msgThroughputOut": 91.33142602871978, - "bytesOutCounter": 6607, - "msgOutCounter": 133, - "msgRateRedeliver": 0.0, - "chunkedMessageRate": 0, - "msgBacklog": 0, - "backlogSize": 0, - "msgBacklogNoDelayed": 0, - "blockedSubscriptionOnUnackedMsgs": false, - "msgDelayed": 0, - "unackedMessages": 0, - "type": "Exclusive", - "activeConsumerName": "3c544f1daa", - "msgRateExpired": 0.0, - "totalMsgExpired": 0, - "lastExpireTimestamp": 0, - "lastConsumedFlowTimestamp": 1625389101290, - "lastConsumedTimestamp": 1625389546070, - "lastAckedTimestamp": 1625389546162, - "lastMarkDeleteAdvancedTimestamp": 1625389546163, - "consumers": [ - { - "msgRateOut": 1.8332950480217471, - "msgThroughputOut": 91.33142602871978, - "bytesOutCounter": 6607, - "msgOutCounter": 133, - "msgRateRedeliver": 0.0, - "chunkedMessageRate": 0.0, - "consumerName": "3c544f1daa", - "availablePermits": 867, - "unackedMessages": 0, - "avgMessagesPerEntry": 6, - "blockedConsumerOnUnackedMsgs": false, - "lastAckedTimestamp": 1625389546162, - "lastConsumedTimestamp": 1625389546070, - "metadata": {}, - "address": "/127.0.0.1:35472", - "connectedSince": "2021-07-04T08:58:21.287682Z", - "clientVersion": "2.8.0" - } - ], - "isDurable": true, - "isReplicated": false, - "allowOutOfOrderDelivery": false, - "consumersAfterMarkDeletePosition": {}, - "nonContiguousDeletedMessagesRanges": 0, - "nonContiguousDeletedMessagesRangesSerializedSize": 0, - "durable": true, - "replicated": false 
- } - }, - "replication": {}, - "deduplicationStatus": "Disabled", - "nonContiguousDeletedMessagesRanges": 0, - "nonContiguousDeletedMessagesRangesSerializedSize": 0 -} - -``` - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/getting-started-helm.md b/site2/website/versioned_docs/version-2.10.0-deprecated/getting-started-helm.md deleted file mode 100644 index 5d5401cc86a08b..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/getting-started-helm.md +++ /dev/null @@ -1,447 +0,0 @@ ---- -id: getting-started-helm -title: Get started in Kubernetes -sidebar_label: "Run Pulsar in Kubernetes" -original_id: getting-started-helm ---- - -This section guides you through every step of installing and running Apache Pulsar with Helm on Kubernetes quickly, including the following sections: - -- Install the Apache Pulsar on Kubernetes using Helm -- Start and stop Apache Pulsar -- Create topics using `pulsar-admin` -- Produce and consume messages using Pulsar clients -- Monitor Apache Pulsar status with Prometheus and Grafana - -For deploying a Pulsar cluster for production usage, read the documentation on [how to configure and install a Pulsar Helm chart](helm-deploy.md). - -## Prerequisite - -- Kubernetes server 1.14.0+ -- kubectl 1.14.0+ -- Helm 3.0+ - -:::tip - -For the following steps, step 2 and step 3 are for **developers** and step 4 and step 5 are for **administrators**. - -::: - -## Step 0: Prepare a Kubernetes cluster - -Before installing a Pulsar Helm chart, you have to create a Kubernetes cluster. You can follow [the instructions](helm-prepare.md) to prepare a Kubernetes cluster. - -We use [Minikube](https://minikube.sigs.k8s.io/docs/start/) in this quick start guide. To prepare a Kubernetes cluster, follow these steps: - -1. Create a Kubernetes cluster on Minikube. - - ```bash - - minikube start --memory=8192 --cpus=4 --kubernetes-version= - - ``` - - The `` can be any [Kubernetes version supported by your Minikube installation](https://minikube.sigs.k8s.io/docs/reference/configuration/kubernetes/), such as `v1.16.1`. - -2. Set `kubectl` to use Minikube. - - ```bash - - kubectl config use-context minikube - - ``` - -3. To use the [Kubernetes Dashboard](https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/) with the local Kubernetes cluster on Minikube, enter the command below: - - ```bash - - minikube dashboard - - ``` - - The command automatically triggers opening a webpage in your browser. - -## Step 1: Install Pulsar Helm chart - -1. Add Pulsar charts repo. - - ```bash - - helm repo add apache https://pulsar.apache.org/charts - - ``` - - ```bash - - helm repo update - - ``` - -2. Clone the Pulsar Helm chart repository. - - ```bash - - git clone https://github.com/apache/pulsar-helm-chart - cd pulsar-helm-chart - - ``` - -3. Run the script `prepare_helm_release.sh` to create secrets required for installing the Apache Pulsar Helm chart. The username `pulsar` and password `pulsar` are used for logging into the Grafana dashboard and Pulsar Manager. - - :::note - - When running the script, you can use `-n` to specify the Kubernetes namespace where the Pulsar Helm chart is installed, `-k` to define the Pulsar Helm release name, and `-c` to create the Kubernetes namespace. For more information about the script, run `./scripts/pulsar/prepare_helm_release.sh --help`. - - ::: - - ```bash - - ./scripts/pulsar/prepare_helm_release.sh \ - -n pulsar \ - -k pulsar-mini \ - -c - - ``` - -4. 
Use the Pulsar Helm chart to install a Pulsar cluster to Kubernetes. - - :::note - - You need to specify `--set initialize=true` when installing Pulsar the first time. This command installs and starts Apache Pulsar. - - ::: - - ```bash - - helm install \ - --values examples/values-minikube.yaml \ - --set initialize=true \ - --namespace pulsar \ - pulsar-mini apache/pulsar - - ``` - -5. Check the status of all pods. - - ```bash - - kubectl get pods -n pulsar - - ``` - - If all pods start up successfully, you can see that the `STATUS` is changed to `Running` or `Completed`. - - **Output** - - ```bash - - NAME READY STATUS RESTARTS AGE - pulsar-mini-bookie-0 1/1 Running 0 9m27s - pulsar-mini-bookie-init-5gphs 0/1 Completed 0 9m27s - pulsar-mini-broker-0 1/1 Running 0 9m27s - pulsar-mini-grafana-6b7bcc64c7-4tkxd 1/1 Running 0 9m27s - pulsar-mini-prometheus-5fcf5dd84c-w8mgz 1/1 Running 0 9m27s - pulsar-mini-proxy-0 1/1 Running 0 9m27s - pulsar-mini-pulsar-init-t7cqt 0/1 Completed 0 9m27s - pulsar-mini-pulsar-manager-9bcbb4d9f-htpcs 1/1 Running 0 9m27s - pulsar-mini-toolset-0 1/1 Running 0 9m27s - pulsar-mini-zookeeper-0 1/1 Running 0 9m27s - - ``` - -6. Check the status of all services in the namespace `pulsar`. - - ```bash - - kubectl get services -n pulsar - - ``` - - **Output** - - ```bash - - NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - pulsar-mini-bookie ClusterIP None 3181/TCP,8000/TCP 11m - pulsar-mini-broker ClusterIP None 8080/TCP,6650/TCP 11m - pulsar-mini-grafana LoadBalancer 10.106.141.246 3000:31905/TCP 11m - pulsar-mini-prometheus ClusterIP None 9090/TCP 11m - pulsar-mini-proxy LoadBalancer 10.97.240.109 80:32305/TCP,6650:31816/TCP 11m - pulsar-mini-pulsar-manager LoadBalancer 10.103.192.175 9527:30190/TCP 11m - pulsar-mini-toolset ClusterIP None 11m - pulsar-mini-zookeeper ClusterIP None 2888/TCP,3888/TCP,2181/TCP 11m - - ``` - -## Step 2: Use pulsar-admin to create Pulsar tenants/namespaces/topics - -`pulsar-admin` is the CLI (command-Line Interface) tool for Pulsar. In this step, you can use `pulsar-admin` to create resources, including tenants, namespaces, and topics. - -1. Enter the `toolset` container. - - ```bash - - kubectl exec -it -n pulsar pulsar-mini-toolset-0 -- /bin/bash - - ``` - -2. In the `toolset` container, create a tenant named `apache`. - - ```bash - - bin/pulsar-admin tenants create apache - - ``` - - Then you can list the tenants to see if the tenant is created successfully. - - ```bash - - bin/pulsar-admin tenants list - - ``` - - You should see a similar output as below. The tenant `apache` has been successfully created. - - ```bash - - "apache" - "public" - "pulsar" - - ``` - -3. In the `toolset` container, create a namespace named `pulsar` in the tenant `apache`. - - ```bash - - bin/pulsar-admin namespaces create apache/pulsar - - ``` - - Then you can list the namespaces of tenant `apache` to see if the namespace is created successfully. - - ```bash - - bin/pulsar-admin namespaces list apache - - ``` - - You should see a similar output as below. The namespace `apache/pulsar` has been successfully created. - - ```bash - - "apache/pulsar" - - ``` - -4. In the `toolset` container, create a topic `test-topic` with `4` partitions in the namespace `apache/pulsar`. - - ```bash - - bin/pulsar-admin topics create-partitioned-topic apache/pulsar/test-topic -p 4 - - ``` - -5. In the `toolset` container, list all the partitioned topics in the namespace `apache/pulsar`. 
- - ```bash - - bin/pulsar-admin topics list-partitioned-topics apache/pulsar - - ``` - - Then you can see all the partitioned topics in the namespace `apache/pulsar`. - - ```bash - - "persistent://apache/pulsar/test-topic" - - ``` - -## Step 3: Use Pulsar client to produce and consume messages - -You can use the Pulsar client to create producers and consumers to produce and consume messages. - -By default, the Pulsar Helm chart exposes the Pulsar cluster through a Kubernetes `LoadBalancer`. In Minikube, you can use the following command to check the proxy service. - -```bash - -kubectl get services -n pulsar | grep pulsar-mini-proxy - -``` - -You will see a similar output as below. - -```bash - -pulsar-mini-proxy LoadBalancer 10.97.240.109 80:32305/TCP,6650:31816/TCP 28m - -``` - -This output tells what are the node ports that Pulsar cluster's binary port and HTTP port are mapped to. The port after `80:` is the HTTP port while the port after `6650:` is the binary port. - -Then you can find the IP address and exposed ports of your Minikube server by running the following command. - -```bash - -minikube service pulsar-mini-proxy -n pulsar - -``` - -**Output** - -```bash - -|-----------|-------------------|-------------|-------------------------| -| NAMESPACE | NAME | TARGET PORT | URL | -|-----------|-------------------|-------------|-------------------------| -| pulsar | pulsar-mini-proxy | http/80 | http://172.17.0.4:32305 | -| | | pulsar/6650 | http://172.17.0.4:31816 | -|-----------|-------------------|-------------|-------------------------| -🏃 Starting tunnel for service pulsar-mini-proxy. -|-----------|-------------------|-------------|------------------------| -| NAMESPACE | NAME | TARGET PORT | URL | -|-----------|-------------------|-------------|------------------------| -| pulsar | pulsar-mini-proxy | | http://127.0.0.1:61853 | -| | | | http://127.0.0.1:61854 | -|-----------|-------------------|-------------|------------------------| - -``` - -At this point, you can get the service URLs to connect to your Pulsar client. Here are URL examples: - -``` - -webServiceUrl=http://127.0.0.1:61853/ -brokerServiceUrl=pulsar://127.0.0.1:61854/ - -``` - -Then you can proceed with the following steps: - -1. Download the Apache Pulsar tarball from the [downloads page](/download/). - -2. Decompress the tarball based on your download file. - - ```bash - - tar -xf .tar.gz - - ``` - -3. Expose `PULSAR_HOME`. - - (1) Enter the directory of the decompressed download file. - - (2) Expose `PULSAR_HOME` as the environment variable. - - ```bash - - export PULSAR_HOME=$(pwd) - - ``` - -4. Configure the Pulsar client. - - In the `${PULSAR_HOME}/conf/client.conf` file, replace `webServiceUrl` and `brokerServiceUrl` with the service URLs you get from the above steps. - -5. Create a subscription to consume messages from `apache/pulsar/test-topic`. - - ```bash - - bin/pulsar-client consume -s sub apache/pulsar/test-topic -n 0 - - ``` - -6. Open a new terminal. In the new terminal, create a producer and send 10 messages to the `test-topic` topic. - - ```bash - - bin/pulsar-client produce apache/pulsar/test-topic -m "---------hello apache pulsar-------" -n 10 - - ``` - -7. Verify the results. - - - From the producer side - - **Output** - - The messages have been produced successfully. 
- - ```bash - - 18:15:15.489 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 10 messages successfully produced - - ``` - - - From the consumer side - - **Output** - - At the same time, you can receive the messages as below. - - ```bash - - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - - ``` - -## Step 4: Use Pulsar Manager to manage the cluster - -[Pulsar Manager](administration-pulsar-manager.md) is a web-based GUI management tool for managing and monitoring Pulsar. - -1. By default, the `Pulsar Manager` is exposed as a separate `LoadBalancer`. You can open the Pulsar Manager UI using the following command: - - ```bash - - minikube service -n pulsar pulsar-mini-pulsar-manager - - ``` - -2. The Pulsar Manager UI will be open in your browser. You can use the username `pulsar` and password `pulsar` to log into Pulsar Manager. - -3. In Pulsar Manager UI, you can create an environment. - - - Click `New Environment` button in the top-left corner. - - Type `pulsar-mini` for the field `Environment Name` in the popup window. - - Type `http://pulsar-mini-broker:8080` for the field `Service URL` in the popup window. - - Click `Confirm` button in the popup window. - -4. After successfully creating an environment, you are redirected to the `tenants` page of that environment. Then you can create `tenants`, `namespaces` and `topics` using the Pulsar Manager. - -## Step 5: Use Prometheus and Grafana to monitor cluster - -Grafana is an open-source visualization tool, which can be used for visualizing time series data into dashboards. - -1. By default, the Grafana is exposed as a separate `LoadBalancer`. You can open the Grafana UI using the following command: - - ```bash - - minikube service pulsar-mini-grafana -n pulsar - - ``` - -2. The Grafana UI is open in your browser. You can use the username `pulsar` and password `pulsar` to log into the Grafana Dashboard. - -3. You can view dashboards for different components of a Pulsar cluster. diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/getting-started-pulsar.md b/site2/website/versioned_docs/version-2.10.0-deprecated/getting-started-pulsar.md deleted file mode 100644 index 752590f57b5585..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/getting-started-pulsar.md +++ /dev/null @@ -1,72 +0,0 @@ ---- -id: pulsar-2.0 -title: Pulsar 2.0 -sidebar_label: "Pulsar 2.0" -original_id: pulsar-2.0 ---- - -Pulsar 2.0 is a major new release for Pulsar that brings some bold changes to the platform, including [simplified topic names](#topic-names), the addition of the [Pulsar Functions](functions-overview.md) feature, some terminology changes, and more. 
-
-## New features in Pulsar 2.0
-
-Feature | Description
-:-------|:-----------
-[Pulsar Functions](functions-overview.md) | A lightweight compute option for Pulsar
-
-## Major changes
-
-There are a few major changes that you should be aware of, as they may significantly impact your day-to-day usage.
-
-### Properties versus tenants
-
-Previously, Pulsar had a concept of properties. A property is essentially the exact same thing as a tenant, so the "property" terminology has been removed in version 2.0. The [`pulsar-admin properties`](reference-pulsar-admin.md#pulsar-admin) command-line interface, for example, has been replaced with the [`pulsar-admin tenants`](reference-pulsar-admin.md#pulsar-admin-tenants) interface. In some cases the properties terminology is still used but is now considered deprecated and will be removed entirely in a future release.
-
-### Topic names
-
-Prior to version 2.0, *all* Pulsar topics had the following form:
-
-```http
-
-{persistent|non-persistent}://property/cluster/namespace/topic
-
-```
-
-Several important changes have been made in Pulsar 2.0:
-
-* There is no longer a [cluster component](#no-cluster-component)
-* Properties have been [renamed to tenants](#properties-versus-tenants)
-* You can use a [flexible](#flexible-topic-naming) naming system to shorten many topic names
-* `/` is not allowed in topic names
-
-#### No cluster component
-
-The cluster component has been removed from topic names. Thus, all topic names now have the following form:
-
-```http
-
-{persistent|non-persistent}://tenant/namespace/topic
-
-```
-
-> Existing topics that use the legacy name format will continue to work without any change, and there are no plans to change that.
-
-
-#### Flexible topic naming
-
-All topic names in Pulsar 2.0 internally have the form shown [above](#no-cluster-component), but you can now use shorthand names in many cases (for the sake of simplicity). The flexible naming system stems from the fact that there is now a default topic type, tenant, and namespace:
-
-Topic aspect | Default
-:------------|:-------
-topic type | `persistent`
-tenant | `public`
-namespace | `default`
-
-The table below shows some example topic name translations that use implicit defaults:
-
-Input topic name | Translated topic name
-:----------------|:---------------------
-`my-topic` | `persistent://public/default/my-topic`
-`my-tenant/my-namespace/my-topic` | `persistent://my-tenant/my-namespace/my-topic`
-
-> For [non-persistent topics](concepts-messaging.md#non-persistent-topics) you'll need to continue to specify the entire topic name, as the default-based rules for persistent topic names don't apply. Thus you cannot use a shorthand name like `non-persistent://my-topic` and would need to use `non-persistent://public/default/my-topic` instead.
-
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/getting-started-standalone.md b/site2/website/versioned_docs/version-2.10.0-deprecated/getting-started-standalone.md
deleted file mode 100644
index c60ba7e5cedba8..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/getting-started-standalone.md
+++ /dev/null
@@ -1,326 +0,0 @@
----
-id: getting-started-standalone
-title: Set up a standalone Pulsar locally
-sidebar_label: "Run Pulsar locally"
-original_id: getting-started-standalone
----
-
-For local development and testing, you can run Pulsar in standalone mode on your machine.
-The standalone mode includes a Pulsar broker, the necessary [RocksDB](http://rocksdb.org/) and BookKeeper components running inside of a single Java Virtual Machine (JVM) process.
-
-> **Pulsar in production?**
-> If you're looking to run a full production Pulsar installation, see the [Deploying a Pulsar instance](deploy-bare-metal.md) guide.
-
-## Install Pulsar standalone
-
-This tutorial guides you through every step of installing Pulsar locally.
-
-### System requirements
-
-Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install a 64-bit JRE/JDK 8 or later version.
-
-:::tip
-
-By default, Pulsar allocates 2G JVM heap memory to start. It can be changed in the `conf/pulsar_env.sh` file under `PULSAR_MEM`. These are extra options passed into the JVM.
-
-:::
-
-:::note
-
-Broker is only supported on 64-bit JVM.
-
-:::
-
-#### Install JDK on M1
-
-In the current version, Pulsar uses a BookKeeper version which in turn uses RocksDB. RocksDB is compiled to work on x86 architecture and not ARM. Therefore, Pulsar can only work with an x86 JDK. This is planned to be fixed in future versions of Pulsar.
-
-One of the ways to easily install an x86 JDK is to use [SDKMan](http://sdkman.io) as outlined in the following steps:
-
-1. Install [SDKMan](http://sdkman.io).
-
-   * Method 1: follow the instructions on the SDKMan website.
-
-   * Method 2: if you have [Homebrew](https://brew.sh) installed, enter the following command.
-
-```shell
-
-brew install sdkman
-
-```
-
-2. Turn on Rosetta2 compatibility for SDKMan by editing `~/.sdkman/etc/config` and changing the following property from `false` to `true`.
-
-```properties
-
-sdkman_rosetta2_compatible=true
-
-```
-
-3. Close the current shell / terminal window and open a new one.
-4. Make sure you don't have any previously installed JVM of the same version by listing existing installed versions.
-
-```shell
-
-sdk list java|grep installed
-
-```
-
-Example output:
-
-```text
-
- | >>> | 17.0.3.6.1 | amzn | installed | 17.0.3.6.1-amzn
-
-```
-
-If you have any Java 17 version installed, uninstall it.
-
-```shell
-
-sdk uninstall java 17.0.3.6.1
-
-```
-
-5. Install any Java version greater than Java 8.
-
-```shell
-
- sdk install java 17.0.3.6.1-amzn
-
-```
-
-### Install Pulsar using binary release
-
-To get started with Pulsar, download a binary tarball release in one of the following ways:
-
-* download from the Apache mirror (Pulsar @pulsar:version@ binary release)
-
-* download from the Pulsar [downloads page](pulsar:download_page_url)
-
-* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
-
-* use [wget](https://www.gnu.org/software/wget):
-
-  ```shell
-
-  $ wget pulsar:binary_release_url
-
-  ```
-
-After you download the tarball, untar it and use the `cd` command to navigate to the resulting directory:
-
-```bash
-
-$ tar xvfz apache-pulsar-@pulsar:version@-bin.tar.gz
-$ cd apache-pulsar-@pulsar:version@
-
-```
-
-#### What your package contains
-
-The Pulsar binary package initially contains the following directories:
-
-Directory | Contains
-:---------|:--------
-`bin` | Pulsar's command-line tools, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](/tools/pulsar-admin/).
-`conf` | Configuration files for Pulsar, including [broker configuration](reference-configuration.md#broker) and more.<br />**Note:** Pulsar standalone uses RocksDB as the local metadata store and its configuration file path [`metadataStoreConfigPath`](reference-configuration.md) is configurable in the `standalone.conf` file. For more information about the configurations of RocksDB, see [here](https://github.com/facebook/rocksdb/blob/main/examples/rocksdb_option_file_example.ini) and related [documentation](https://github.com/facebook/rocksdb/wiki/RocksDB-Tuning-Guide).
-`examples` | A Java JAR file containing [Pulsar Functions](functions-overview.md) examples.
-`instances` | Artifacts created for [Pulsar Functions](functions-overview.md).
-`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files used by Pulsar.
-`licenses` | License files, in the `.txt` form, for various components of the Pulsar [codebase](https://github.com/apache/pulsar).
-
-These directories are created once you begin running Pulsar.
-
-Directory | Contains
-:---------|:--------
-`data` | The data storage directory used by RocksDB and BookKeeper.
-`logs` | Logs created by the installation.
-
-:::tip
-
-If you want to use builtin connectors and tiered storage offloaders, you can install them according to the following instructions:
-* [Install builtin connectors (optional)](#install-builtin-connectors-optional)
-* [Install tiered storage offloaders (optional)](#install-tiered-storage-offloaders-optional)
-Otherwise, skip this step and perform the next step [Start Pulsar standalone](#start-pulsar-standalone). Pulsar can be successfully installed without installing builtin connectors and tiered storage offloaders.
-
-:::
-
-### Install builtin connectors (optional)
-
-Since the `2.1.0-incubating` release, Pulsar releases a separate binary distribution, containing all the `builtin` connectors.
-To enable those `builtin` connectors, you can download the connectors tarball release in one of the following ways:
-
-* download from the Apache mirror Pulsar IO Connectors @pulsar:version@ release
-
-* download from the Pulsar [downloads page](pulsar:download_page_url)
-
-* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
-
-* use [wget](https://www.gnu.org/software/wget):
-
-  ```shell
-
-  $ wget pulsar:connector_release_url/{connector}-@pulsar:version@.nar
-
-  ```
-
-After you download the NAR file, copy the file to the `connectors` directory in the pulsar directory.
-For example, if you download the `pulsar-io-aerospike-@pulsar:version@.nar` connector file, enter the following commands:
-
-```bash
-
-$ mkdir connectors
-$ mv pulsar-io-aerospike-@pulsar:version@.nar connectors
-
-$ ls connectors
-pulsar-io-aerospike-@pulsar:version@.nar
-...
-
-```
-
-:::note
-
-* If you are running Pulsar in a bare metal cluster, make sure the `connectors` tarball is unzipped in every pulsar directory of the broker (or in every pulsar directory of function-worker if you are running a separate worker cluster for Pulsar Functions).
-* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DC/OS](https://dcos.io/)), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. The `apachepulsar/pulsar-all` image has already bundled [all builtin connectors](io-overview.md#working-with-connectors).
-
-:::
-
-### Install tiered storage offloaders (optional)
-
-:::tip
-
-- Since the `2.2.0` release, Pulsar releases a separate binary distribution, containing the tiered storage offloaders.
-
-- To enable the tiered storage feature, follow the instructions below; otherwise skip this section.
-
-:::
-
-To get started with [tiered storage offloaders](concepts-tiered-storage.md), you need to download the offloaders tarball release on every broker node in one of the following ways:
-
-* download from the Apache mirror Pulsar Tiered Storage Offloaders @pulsar:version@ release
-
-* download from the Pulsar [downloads page](pulsar:download_page_url)
-
-* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
-
-* use [wget](https://www.gnu.org/software/wget):
-
-  ```shell
-
-  $ wget pulsar:offloader_release_url
-
-  ```
-
-After you download the tarball, untar the offloaders package and copy the offloaders as `offloaders`
-in the pulsar directory:
-
-```bash
-
-$ tar xvfz apache-pulsar-offloaders-@pulsar:version@-bin.tar.gz
-
-# you will find a directory named `apache-pulsar-offloaders-@pulsar:version@` in the pulsar directory
-# then copy the offloaders
-
-$ mv apache-pulsar-offloaders-@pulsar:version@/offloaders offloaders
-
-$ ls offloaders
-tiered-storage-jcloud-@pulsar:version@.nar
-
-```
-
-For more information on how to configure tiered storage, see [Tiered storage cookbook](cookbooks-tiered-storage.md).
-
-:::note
-
-* If you are running Pulsar in a bare metal cluster, make sure that the `offloaders` tarball is unzipped in every broker's pulsar directory.
-* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or DC/OS), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. The `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders.
-
-:::
-
-## Start Pulsar standalone
-
-Once you have an up-to-date local copy of the release, you can start a local cluster using the [`pulsar`](reference-cli-tools.md#pulsar) command, which is stored in the `bin` directory, specifying that you want to start Pulsar in standalone mode.
-
-```bash
-
-$ bin/pulsar standalone
-
-```
-
-If you have started Pulsar successfully, you will see `INFO`-level log messages like this:
-
-```bash
-
-21:59:29.327 [DLM-/stream/storage-OrderedScheduler-3-0] INFO org.apache.bookkeeper.stream.storage.impl.sc.StorageContainerImpl - Successfully started storage container (0).
-21:59:34.576 [main] INFO org.apache.pulsar.broker.authentication.AuthenticationService - Authentication is disabled
-21:59:34.576 [main] INFO org.apache.pulsar.websocket.WebSocketService - Pulsar WebSocket Service started
-
-```
-
-:::tip
-
-* The service is running on your terminal, which is under your direct control. If you need to run other commands, open a new terminal window.
-
-:::
-
-You can also run the service as a background process using the `bin/pulsar-daemon start standalone` command. For more information, see [pulsar-daemon](reference-cli-tools.md#pulsar-daemon).
-
-> * By default, there is no encryption, authentication, or authorization configured. Apache Pulsar can be accessed from remote servers without any authorization. Please check the [Security Overview](security-overview.md) document to secure your deployment.
->
-> * When you start a local standalone cluster, a `public/default` [namespace](concepts-messaging.md#namespaces) is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics).
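-
-Before moving on, you can confirm that the standalone broker is serving requests. This is a minimal sanity check, assuming the default ports on `localhost`:
-
-```bash
-
-$ curl http://localhost:8080/admin/v2/clusters
-# a standalone deployment returns: ["standalone"]
-
-```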
-
-## Use Pulsar standalone
-
-Pulsar provides a CLI tool called [`pulsar-client`](reference-cli-tools.md#pulsar-client). The pulsar-client tool enables you to consume and produce messages to a Pulsar topic in a running cluster.
-
-### Consume a message
-
-The following command consumes a message from the `my-topic` topic with the subscription name `first-subscription`:
-
-```bash
-
-$ bin/pulsar-client consume my-topic -s "first-subscription"
-
-```
-
-If the message has been successfully consumed, you will see a confirmation like the following in the `pulsar-client` logs:
-
-```
-
-22:17:16.781 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully consumed
-
-```
-
-:::tip
-
-As you may have noticed, we do not explicitly create the `my-topic` topic, from which we consume the message. When you consume a message from a topic that does not yet exist, Pulsar creates that topic for you automatically. Producing a message to a topic that does not exist will automatically create that topic for you as well.
-
-:::
-
-### Produce a message
-
-The following command produces a message saying `hello-pulsar` to the `my-topic` topic:
-
-```bash
-
-$ bin/pulsar-client produce my-topic --messages "hello-pulsar"
-
-```
-
-If the message has been successfully published to the topic, you will see a confirmation like the following in the `pulsar-client` logs:
-
-```
-
-22:21:08.693 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully produced
-
-```
-
-## Stop Pulsar standalone
-
-Press `Ctrl+C` to stop a local standalone Pulsar.
-
-:::tip
-
-If the service runs as a background process using the `bin/pulsar-daemon start standalone` command, then use the `bin/pulsar-daemon stop standalone` command to stop the service.
-For more information, see [pulsar-daemon](reference-cli-tools.md#pulsar-daemon).
-
-:::
-
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/helm-deploy.md b/site2/website/versioned_docs/version-2.10.0-deprecated/helm-deploy.md
deleted file mode 100644
index 0e7815e4f4d90b..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/helm-deploy.md
+++ /dev/null
@@ -1,434 +0,0 @@
----
-id: helm-deploy
-title: Deploy Pulsar cluster using Helm
-sidebar_label: "Deployment"
-original_id: helm-deploy
----
-
-Before running `helm install`, you need to decide how to run Pulsar.
-Options can be specified using Helm's `--set option.name=value` command line option.
-
-## Select configuration options
-
-In each section, collect the options to combine with the `helm install` command.
-
-### Kubernetes namespace
-
-By default, the Pulsar Helm chart is installed to a namespace called `pulsar`.
-
-```yaml
-
-namespace: pulsar
-
-```
-
-To install the Pulsar Helm chart into a different Kubernetes namespace, you can include this option in the `helm install` command.
-
-```bash
-
---set namespace=<k8s-namespace>
-
-```
-
-By default, the Pulsar Helm chart doesn't create the namespace.
-
-```yaml
-
-namespaceCreate: false
-
-```
-
-To use the Pulsar Helm chart to create the Kubernetes namespace automatically, you can include this option in the `helm install` command.
-
-```bash
-
---set namespaceCreate=true
-
-```
-
-### Persistence
-
-By default, the Pulsar Helm chart creates Volume Claims with the expectation that a dynamic provisioner creates the underlying Persistent Volumes.
- -```yaml - -volumes: - persistence: true - # configure the components to use local persistent volume - # the local provisioner should be installed prior to enable local persistent volume - local_storage: false - -``` - -To use local persistent volumes as the persistent storage for Helm release, you can install the [local storage provisioner](#install-local-storage-provisioner) and include the following option in the `helm install` command. - -```bash - ---set volumes.local_storage=true - -``` - -:::note - -Before installing the production instance of Pulsar, ensure to plan the storage settings to avoid extra storage migration work. Because after initial installation, you must edit Kubernetes objects manually if you want to change storage settings. - -::: - -The Pulsar Helm chart is designed for production use. To use the Pulsar Helm chart in a development environment (such as Minikube), you can disable persistence by including this option in your `helm install` command. - -```bash - ---set volumes.persistence=false - -``` - -### Affinity - -By default, `anti-affinity` is enabled to ensure pods of the same component can run on different nodes. - -```yaml - -affinity: - anti_affinity: true - -``` - -To use the Pulsar Helm chart in a development environment (such as Minikube), you can disable `anti-affinity` by including this option in your `helm install` command. - -```bash - ---set affinity.anti_affinity=false - -``` - -### Components - -The Pulsar Helm chart is designed for production usage. It deploys a production-ready Pulsar cluster, including Pulsar core components and monitoring components. - -You can customize the components to be deployed by turning on/off individual components. - -```yaml - -## Components -## -## Control what components of Apache Pulsar to deploy for the cluster -components: - # zookeeper - zookeeper: true - # bookkeeper - bookkeeper: true - # bookkeeper - autorecovery - autorecovery: true - # broker - broker: true - # functions - functions: true - # proxy - proxy: true - # toolset - toolset: true - # pulsar manager - pulsar_manager: true - -## Monitoring Components -## -## Control what components of the monitoring stack to deploy for the cluster -monitoring: - # monitoring - prometheus - prometheus: true - # monitoring - grafana - grafana: true - -``` - -### Docker images - -The Pulsar Helm chart is designed to enable controlled upgrades. So it can configure independent image versions for components. You can customize the images by setting individual component. 
- -```yaml - -## Images -## -## Control what images to use for each component -images: - zookeeper: - repository: apachepulsar/pulsar-all - tag: 2.5.0 - pullPolicy: IfNotPresent - bookie: - repository: apachepulsar/pulsar-all - tag: 2.5.0 - pullPolicy: IfNotPresent - autorecovery: - repository: apachepulsar/pulsar-all - tag: 2.5.0 - pullPolicy: IfNotPresent - broker: - repository: apachepulsar/pulsar-all - tag: 2.5.0 - pullPolicy: IfNotPresent - proxy: - repository: apachepulsar/pulsar-all - tag: 2.5.0 - pullPolicy: IfNotPresent - functions: - repository: apachepulsar/pulsar-all - tag: 2.5.0 - prometheus: - repository: prom/prometheus - tag: v1.6.3 - pullPolicy: IfNotPresent - grafana: - repository: streamnative/apache-pulsar-grafana-dashboard-k8s - tag: 0.0.4 - pullPolicy: IfNotPresent - pulsar_manager: - repository: apachepulsar/pulsar-manager - tag: v0.1.0 - pullPolicy: IfNotPresent - hasCommand: false - -``` - -### TLS - -The Pulsar Helm chart can be configured to enable TLS (Transport Layer Security) to protect all the traffic between components. Before enabling TLS, you have to provision TLS certificates for the required components. - -#### Provision TLS certificates using cert-manager - -To use the `cert-manager` to provision the TLS certificates, you have to install the [cert-manager](#install-cert-manager) before installing the Pulsar Helm chart. After successfully installing the cert-manager, you can set `certs.internal_issuer.enabled` to `true`. Therefore, the Pulsar Helm chart can use the `cert-manager` to generate `selfsigning` TLS certificates for the configured components. - -```yaml - -certs: - internal_issuer: - enabled: false - component: internal-cert-issuer - type: selfsigning - -``` - -You can also customize the generated TLS certificates by configuring the fields as the following. - -```yaml - -tls: - # common settings for generating certs - common: - # 90d - duration: 2160h - # 15d - renewBefore: 360h - organization: - - pulsar - keySize: 4096 - keyAlgorithm: rsa - keyEncoding: pkcs8 - -``` - -#### Enable TLS - -After installing the `cert-manager`, you can set `tls.enabled` to `true` to enable TLS encryption for the entire cluster. - -```yaml - -tls: - enabled: false - -``` - -You can also configure whether to enable TLS encryption for individual component. - -```yaml - -tls: - # settings for generating certs for proxy - proxy: - enabled: false - cert_name: tls-proxy - # settings for generating certs for broker - broker: - enabled: false - cert_name: tls-broker - # settings for generating certs for bookies - bookie: - enabled: false - cert_name: tls-bookie - # settings for generating certs for zookeeper - zookeeper: - enabled: false - cert_name: tls-zookeeper - # settings for generating certs for recovery - autorecovery: - cert_name: tls-recovery - # settings for generating certs for toolset - toolset: - cert_name: tls-toolset - -``` - -### Authentication - -By default, authentication is disabled. You can set `auth.authentication.enabled` to `true` to enable authentication. -Currently, the Pulsar Helm chart only supports JWT authentication provider. You can set `auth.authentication.provider` to `jwt` to use the JWT authentication provider. - -```yaml - -# Enable or disable broker authentication and authorization. -auth: - authentication: - enabled: false - provider: "jwt" - jwt: - # Enable JWT authentication - # If the token is generated by a secret key, set the usingSecretKey as true. - # If the token is generated by a private key, set the usingSecretKey as false. 
usingSecretKey: false - superUsers: - # broker to broker communication - broker: "broker-admin" - # proxy to broker communication - proxy: "proxy-admin" - # pulsar-admin client to broker/proxy communication - client: "admin" - -``` - -To enable authentication, you can run [prepare helm release](#prepare-the-helm-release) to generate token secret keys and tokens for three super users specified in the `auth.superUsers` field. The generated token keys and super user tokens are uploaded and stored as Kubernetes secrets prefixed with `<pulsar-release-name>-token-`. You can use the following command to find those secrets. - -```bash - -kubectl get secrets -n <k8s-namespace> - -``` - -### Authorization - -By default, authorization is disabled. Authorization can be enabled only when authentication is enabled. - -```yaml - -auth: - authorization: - enabled: false - -``` - -To enable authorization, you can include this option in the `helm install` command. - -```bash - ---set auth.authorization.enabled=true - -``` - -### CPU and RAM resource requirements - -By default, the resource requests and the number of replicas for the Pulsar components in the Pulsar Helm chart are adequate for a small production deployment. If you deploy a non-production instance, you can reduce the defaults to fit into a smaller cluster. - -Once you have all of your configuration options collected, you can install dependent charts before installing the Pulsar Helm chart. - -## Install dependent charts - -### Install local storage provisioner - -To use local persistent volumes as the persistent storage, you need to install a storage provisioner for [local persistent volumes](https://kubernetes.io/blog/2019/04/04/kubernetes-1.14-local-persistent-volumes-ga/). - -One of the easiest ways to get started is to use the local storage provisioner provided along with the Pulsar Helm chart. - -``` - -helm repo add streamnative https://charts.streamnative.io -helm repo update -helm install pulsar-storage-provisioner streamnative/local-storage-provisioner - -``` - -### Install cert-manager - -The Pulsar Helm chart uses the [cert-manager](https://github.com/jetstack/cert-manager) to provision and manage TLS certificates automatically. To enable TLS encryption for brokers or proxies, you need to install the cert-manager in advance. - -For details about how to install the cert-manager, follow the [official instructions](https://cert-manager.io/docs/installation/kubernetes/#installing-with-helm). - -Alternatively, we provide a bash script [install-cert-manager.sh](https://github.com/apache/pulsar-helm-chart/blob/master/scripts/cert-manager/install-cert-manager.sh) to install a cert-manager release to the namespace `cert-manager`. - -```bash - -git clone https://github.com/apache/pulsar-helm-chart -cd pulsar-helm-chart -./scripts/cert-manager/install-cert-manager.sh - -``` - -## Prepare Helm release - -Once you have installed all the dependent charts and collected all of your configuration options, you can run [prepare_helm_release.sh](https://github.com/apache/pulsar-helm-chart/blob/master/scripts/pulsar/prepare_helm_release.sh) to prepare the Helm release. - -```bash - -git clone https://github.com/apache/pulsar-helm-chart -cd pulsar-helm-chart -./scripts/pulsar/prepare_helm_release.sh -n <k8s-namespace> -k <helm-release-name> - -``` - -The `prepare_helm_release` creates the following resources: - -- A Kubernetes namespace for installing the Pulsar release -- JWT secret keys and tokens for three super users: `broker-admin`, `proxy-admin`, and `admin`. By default, it generates an asymmetric public/private key pair. 
You can choose to generate a symmetric secret key by specifying `--symmetric`. - - `proxy-admin` role is used for proxies to communicate to brokers. - - `broker-admin` role is used for inter-broker communications. - - `admin` role is used by the admin tools. - -## Deploy Pulsar cluster using Helm - -Once you have finished the following three things, you can install a Helm release. - -- Collect all of your configuration options. -- Install dependent charts. -- Prepare the Helm release. - -In this example, the Helm release is named `pulsar`. - -```bash - -helm repo add apache https://pulsar.apache.org/charts -helm repo update -helm install pulsar apache/pulsar \ - --timeout 10m \ - --set initialize=true \ - --set [your configuration options] - -``` - -:::note - -For the first deployment, add `--set initialize=true` option to initialize bookie and Pulsar cluster metadata. - -::: - -You can also use the `--version ` option if you want to install a specific version of Pulsar Helm chart. - -## Monitor deployment - -A list of installed resources are output once the Pulsar cluster is deployed. This may take 5-10 minutes. - -The status of the deployment can be checked by running the `helm status pulsar` command, which can also be done while the deployment is taking place if you run the command in another terminal. - -## Access Pulsar cluster - -The default values will create a `ClusterIP` for the following resources, which you can use to interact with the cluster. - -- Proxy: You can use the IP address to produce and consume messages to the installed Pulsar cluster. -- Pulsar Manager: You can access the Pulsar Manager UI at `http://:9527`. -- Grafana Dashboard: You can access the Grafana dashboard at `http://:3000`. - -To find the IP addresses of those components, run the following command: - -```bash - -kubectl get service -n - -``` - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/helm-install.md b/site2/website/versioned_docs/version-2.10.0-deprecated/helm-install.md deleted file mode 100644 index 9f81f52e0dab18..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/helm-install.md +++ /dev/null @@ -1,38 +0,0 @@ ---- -id: helm-install -title: Install Apache Pulsar using Helm -sidebar_label: "Install" -original_id: helm-install ---- - -Install Apache Pulsar on Kubernetes with the official Pulsar Helm chart. - -## Requirements - -To deploy Apache Pulsar on Kubernetes, the followings are required. - -- kubectl 1.14 or higher, compatible with your cluster ([+/- 1 minor release from your cluster](https://kubernetes.io/docs/tasks/tools/install-kubectl/#before-you-begin)) -- Helm v3 (3.0.2 or higher) -- A Kubernetes cluster, version 1.14 or higher - -## Environment setup - -Before deploying Pulsar, you need to prepare your environment. - -### Tools - -Install [`helm`](helm-tools.md) and [`kubectl`](helm-tools.md) on your computer. - -## Cloud cluster preparation - -To create and connect to the Kubernetes cluster, follow the instructions: - -- [Google Kubernetes Engine](helm-prepare.md#google-kubernetes-engine) - -## Pulsar deployment - -Once the environment is set up and configuration is generated, you can now proceed to the [deployment of Pulsar](helm-deploy.md). - -## Pulsar upgrade - -To upgrade an existing Kubernetes installation, follow the [upgrade documentation](helm-upgrade.md). 
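Because the kubectl and Helm version requirements above are easy to overlook, it is worth verifying both tools before going further. A quick check, assuming both binaries are already on your `PATH`:

```bash

# kubectl should be 1.14 or higher, and within one minor release
# of your cluster's server version.
kubectl version --client

# The chart requires Helm v3 (3.0.2 or higher).
helm version --short

```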
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/helm-overview.md b/site2/website/versioned_docs/version-2.10.0-deprecated/helm-overview.md deleted file mode 100644 index 125f595cbe68a3..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/helm-overview.md +++ /dev/null @@ -1,103 +0,0 @@ ---- -id: helm-overview -title: Apache Pulsar Helm Chart -sidebar_label: "Overview" -original_id: helm-overview ---- - -[Helm chart](https://github.com/apache/pulsar-helm-chart) supports you to install Apache Pulsar in a cloud-native environment. - -## Introduction - -The Apache Pulsar Helm chart provides one of the most convenient ways to operate Pulsar on Kubernetes. With all the required components, Helm chart is scalable and thus being suitable for large-scale deployments. - -The Apache Pulsar Helm chart contains all components to support the features and functions that Pulsar delivers. You can install and configure these components separately. - -- Pulsar core components: - - ZooKeeper - - Bookies - - Brokers - - Function workers - - Proxies -- Control center: - - Pulsar Manager - - Prometheus - - Grafana - -Moreover, Helm chart supports: - -- Security - - Automatically provisioned TLS certificates, using [Jetstack](https://www.jetstack.io/)'s [cert-manager](https://cert-manager.io/docs/) - - self-signed - - [Let's Encrypt](https://letsencrypt.org/) - - TLS Encryption - - Proxy - - Broker - - Toolset - - Bookie - - ZooKeeper - - Authentication - - JWT - - Authorization -- Storage - - Non-persistence storage - - Persistent volume - - Local persistent volumes -- Functions - - Kubernetes Runtime - - Process Runtime - - Thread Runtime -- Operations - - Independent image versions for all components, enabling controlled upgrades - -## Quick start - -To run with Apache Pulsar Helm chart as fast as possible in a **non-production** use case, we provide a [quick start guide](getting-started-helm.md) for Proof of Concept (PoC) deployments. - -This guide walks you through deploying Apache Pulsar Helm chart with default values and features, but it is *not* suitable for deployments in production-ready environments. To deploy the charts in production under sustained load, you can follow the complete [Installation Guide](helm-install.md). - -## Troubleshooting - -Although we have done our best to make these charts as seamless as possible, troubles do go out of our control occasionally. We have been collecting tips and tricks for troubleshooting common issues. Please check it first before raising an [issue](https://github.com/apache/pulsar/issues/new/choose), and feel free to add your solutions by creating a [Pull Request](https://github.com/apache/pulsar/compare). - -## Installation - -The Apache Pulsar Helm chart contains all required dependencies. - -If you deploy a PoC for testing, we strongly suggest you follow this [Quick Start Guide](getting-started-helm.md) for your first iteration. - -1. [Preparation](helm-prepare.md) -2. [Deployment](helm-deploy.md) - -## Upgrading - -Once the Apache Pulsar Helm chart is installed, you can use `helm upgrade` command to configure and update it. - -```bash - -helm repo add apache https://pulsar.apache.org/charts -helm repo update -helm get values > pulsar.yaml -helm upgrade apache/pulsar -f pulsar.yaml - -``` - -For more detailed information, see [Upgrading](helm-upgrade.md). 
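As a concrete sketch of the upgrade flow above, assuming a release named `pulsar` (substitute your own release name):

```bash

# Refresh the local chart index so newer chart versions are visible.
helm repo update

# Capture the values currently applied to the release.
helm get values pulsar > pulsar.yaml

# Upgrade the release in place, re-applying the captured values.
helm upgrade pulsar apache/pulsar -f pulsar.yaml

```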
- -## Uninstallation - -To uninstall the Apache Pulsar Helm chart, run the following command: - -```bash - -helm delete - -``` - -For the purposes of continuity, some Kubernetes objects in these charts cannot be removed by `helm delete` command. It is recommended to *consciously* remove these items, as they affect re-deployment. - -* PVCs for stateful data: remove these items. - - ZooKeeper: This is your metadata. - - BookKeeper: This is your data. - - Prometheus: This is your metrics data, which can be safely removed. -* Secrets: if the secrets are generated by the [prepare release script](https://github.com/apache/pulsar-helm-chart/blob/master/scripts/pulsar/prepare_helm_release.sh), they contain secret keys and tokens. You can use the [cleanup release script](https://github.com/apache/pulsar-helm-chart/blob/master/scripts/pulsar/cleanup_helm_release.sh) to remove these secrets and tokens as needed. diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/helm-prepare.md b/site2/website/versioned_docs/version-2.10.0-deprecated/helm-prepare.md deleted file mode 100644 index e5d56c7e95e34b..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/helm-prepare.md +++ /dev/null @@ -1,80 +0,0 @@ ---- -id: helm-prepare -title: Prepare Kubernetes resources -sidebar_label: "Prepare" -original_id: helm-prepare ---- - -For a fully functional Pulsar cluster, you need a few resources before deploying the Apache Pulsar Helm chart. The following provides instructions to prepare the Kubernetes cluster before deploying the Pulsar Helm chart. - -- [Google Kubernetes Engine](#google-kubernetes-engine) - - [Manual cluster creation](#manual-cluster-creation) - - [Scripted cluster creation](#scripted-cluster-creation) - - [Create cluster with local SSDs](#create-cluster-with-local-ssds) - -## Google Kubernetes Engine - -To get started easier, a script is provided to create the cluster automatically. Alternatively, a cluster can be created manually as well. - -### Manual cluster creation - -To provision a Kubernetes cluster manually, follow the [GKE instructions](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-cluster). - -### Scripted cluster creation - -A [bootstrap script](https://github.com/streamnative/charts/tree/master/scripts/pulsar/gke_bootstrap_script.sh) has been created to automate much of the setup process for users on GCP/GKE. - -The script can: - -1. Create a new GKE cluster. -2. Allow the cluster to modify DNS (Domain Name Server) records. -3. Setup `kubectl`, and connect it to the cluster. - -Google Cloud SDK is a dependency of this script, so ensure it is [set up correctly](helm-tools.md#connect-to-a-gke-cluster) for the script to work. - -The script reads various parameters from environment variables and an argument `up` or `down` for bootstrap and clean-up respectively. - -The following table describes all variables. - -| **Variable** | **Description** | **Default value** | -| ------------ | --------------- | ----------------- | -| PROJECT | ID of your GCP project | No default value. It requires to be set. 
| -| CLUSTER_NAME | Name of the GKE cluster | `pulsar-dev` | -| CONFDIR | Configuration directory to store Kubernetes configuration | ${HOME}/.config/streamnative | -| INT_NETWORK | IP space to use within this cluster | `default` | -| LOCAL_SSD_COUNT | Number of local SSD counts | 4 | -| MACHINE_TYPE | Type of machine to use for nodes | `n1-standard-4` | -| NUM_NODES | Number of nodes to be created in each of the cluster's zones | 4 | -| PREEMPTIBLE | Create nodes using preemptible VM instances in the new cluster. | false | -| REGION | Compute region for the cluster | `us-east1` | -| USE_LOCAL_SSD | Flag to create a cluster with local SSDs | false | -| ZONE | Compute zone for the cluster | `us-east1-b` | -| ZONE_EXTENSION | The extension (`a`, `b`, `c`) of the zone name of the cluster | `b` | -| EXTRA_CREATE_ARGS | Extra arguments passed to create command | | - -Run the script, by passing in your desired parameters. It can work with the default parameters except for `PROJECT` which is required: - -```bash - -PROJECT= scripts/pulsar/gke_bootstrap_script.sh up - -``` - -The script can also be used to clean up the created GKE resources. - -```bash - -PROJECT= scripts/pulsar/gke_bootstrap_script.sh down - -``` - -#### Create cluster with local SSDs - -To install the Pulsar Helm chart using local persistent volumes, you need to create a GKE cluster with local SSDs. You can do so by specifying `USE_LOCAL_SSD` to be `true` in the following command to create a Pulsar cluster with local SSDs. - -``` - -PROJECT= USE_LOCAL_SSD=true LOCAL_SSD_COUNT= scripts/pulsar/gke_bootstrap_script.sh up - -``` - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/helm-tools.md b/site2/website/versioned_docs/version-2.10.0-deprecated/helm-tools.md deleted file mode 100644 index 6ba89006913b64..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/helm-tools.md +++ /dev/null @@ -1,43 +0,0 @@ ---- -id: helm-tools -title: Required tools for deploying Pulsar Helm Chart -sidebar_label: "Required Tools" -original_id: helm-tools ---- - -Before deploying Pulsar to your Kubernetes cluster, there are some tools you must have installed locally. - -## kubectl - -kubectl is the tool that talks to the Kubernetes API. kubectl 1.14 or higher is required and it needs to be compatible with your cluster ([+/- 1 minor release from your cluster](https://kubernetes.io/docs/tasks/tools/install-kubectl/#before-you-begin)). - -To Install kubectl locally, follow the [Kubernetes documentation](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl). - -The server version of kubectl cannot be obtained until we connect to a cluster. - -## Helm - -Helm is the package manager for Kubernetes. The Apache Pulsar Helm Chart is tested and supported with Helm v3. - -### Get Helm - -You can get Helm from the project's [releases page](https://github.com/helm/helm/releases), or follow other options under the official documentation of [installing Helm](https://helm.sh/docs/intro/install/). - -### Next steps - -Once kubectl and Helm are configured, you can configure your [Kubernetes cluster](helm-prepare.md). - -## Additional information - -### Templates - -Templating in Helm is done through Golang's [text/template](https://golang.org/pkg/text/template/) and [sprig](https://godoc.org/github.com/Masterminds/sprig). 
- -For more information about how all the inner workings behave, check these documents: - -- [Functions and Pipelines](https://helm.sh/docs/chart_template_guide/functions_and_pipelines/) -- [Subcharts and Globals](https://helm.sh/docs/chart_template_guide/subcharts_and_globals/) - -### Tips and tricks - -For additional information on developing with Helm, check [tips and tricks section](https://helm.sh/docs/howto/charts_tips_and_tricks/) in the Helm repository. \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/helm-upgrade.md b/site2/website/versioned_docs/version-2.10.0-deprecated/helm-upgrade.md deleted file mode 100644 index 7d671e6bfb3c10..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/helm-upgrade.md +++ /dev/null @@ -1,43 +0,0 @@ ---- -id: helm-upgrade -title: Upgrade Pulsar Helm release -sidebar_label: "Upgrade" -original_id: helm-upgrade ---- - -Before upgrading your Pulsar installation, you need to check the change log corresponding to the specific release you want to upgrade to and look for any release notes that might pertain to the new Pulsar helm chart version. - -We also recommend that you need to provide all values using the `helm upgrade --set key=value` syntax or the `-f values.yml` instead of using `--reuse-values`, because some of the current values might be deprecated. - -:::note - -You can retrieve your previous `--set` arguments cleanly, with `helm get values `. If you direct this into a file (`helm get values > pulsar.yml`), you can safely pass this file through `-f`, namely `helm upgrade apache/pulsar -f pulsar.yaml`. This safely replaces the behavior of `--reuse-values`. - -::: - -## Steps - -To upgrade Apache Pulsar to a newer version, follow these steps: - -1. Check the change log for the specific version you would like to upgrade to. -2. Go through [deployment documentation](helm-deploy.md) step by step. -3. Extract your previous `--set` arguments with the following command. - - ```bash - - helm get values > pulsar.yaml - - ``` - -4. Decide all the values you need to set. -5. Perform the upgrade, with all `--set` arguments extracted in step 4. - - ```bash - - helm upgrade apache/pulsar \ - --version \ - -f pulsar.yaml \ - --set ... - - ``` - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/io-aerospike-sink.md b/site2/website/versioned_docs/version-2.10.0-deprecated/io-aerospike-sink.md deleted file mode 100644 index 63d7338a3ba91c..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/io-aerospike-sink.md +++ /dev/null @@ -1,26 +0,0 @@ ---- -id: io-aerospike-sink -title: Aerospike sink connector -sidebar_label: "Aerospike sink connector" -original_id: io-aerospike-sink ---- - -The Aerospike sink connector pulls messages from Pulsar topics to Aerospike clusters. - -## Configuration - -The configuration of the Aerospike sink connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `seedHosts` |String| true | No default value| The comma-separated list of one or more Aerospike cluster hosts.
    Each host can be specified as a valid IP address or hostname followed by an optional port number. | -| `keyspace` | String| true |No default value |The Aerospike namespace. | -| `columnName` | String | true| No default value|The Aerospike column name. | -|`userName`|String|false|NULL|The Aerospike username.| -|`password`|String|false|NULL|The Aerospike password.| -| `keySet` | String|false |NULL | The Aerospike set name. | -| `maxConcurrentRequests` |int| false | 100 | The maximum number of concurrent Aerospike transactions that a sink can open. | -| `timeoutMs` | int|false | 100 | This property controls `socketTimeout` and `totalTimeout` for Aerospike transactions. | -| `retries` | int|false | 1 |The maximum number of retries before aborting a write transaction to Aerospike. | diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/io-canal-source.md b/site2/website/versioned_docs/version-2.10.0-deprecated/io-canal-source.md deleted file mode 100644 index d1fd43bb0f74e4..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/io-canal-source.md +++ /dev/null @@ -1,235 +0,0 @@ ---- -id: io-canal-source -title: Canal source connector -sidebar_label: "Canal source connector" -original_id: io-canal-source ---- - -The Canal source connector pulls messages from MySQL to Pulsar topics. - -## Configuration - -The configuration of Canal source connector has the following properties. - -### Property - -| Name | Required | Default | Description | -|------|----------|---------|-------------| -| `username` | true | None | Canal server account (not MySQL).| -| `password` | true | None | Canal server password (not MySQL). | -|`destination`|true|None|Source destination that Canal source connector connects to. -| `singleHostname` | false | None | Canal server address.| -| `singlePort` | false | None | Canal server port.| -| `cluster` | true | false | Whether to enable cluster mode based on Canal server configuration or not.
<li>true: **cluster** mode.<br/>If set to true, it talks to `zkServers` to figure out the actual database host.</li><li>false: **standalone** mode.<br/>If set to false, it connects to the database specified by `singleHostname` and `singlePort`.</li>
  65. | -| `zkServers` | true | None | Address and port of the Zookeeper that Canal source connector talks to figure out the actual database host.| -| `batchSize` | false | 1000 | Batch size to fetch from Canal. | - -### Example - -Before using the Canal connector, you can create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "zkServers": "127.0.0.1:2181", - "batchSize": "5120", - "destination": "example", - "username": "", - "password": "", - "cluster": false, - "singleHostname": "127.0.0.1", - "singlePort": "11111", - } - - ``` - -* YAML - - You can create a YAML file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/canal/src/main/resources/canal-mysql-source-config.yaml) below to your YAML file. - - ```yaml - - configs: - zkServers: "127.0.0.1:2181" - batchSize: 5120 - destination: "example" - username: "" - password: "" - cluster: false - singleHostname: "127.0.0.1" - singlePort: 11111 - - ``` - -## Usage - -Here is an example of storing MySQL data using the configuration file as above. - -1. Start a MySQL server. - - ```bash - - $ docker pull mysql:5.7 - $ docker run -d -it --rm --name pulsar-mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=canal -e MYSQL_USER=mysqluser -e MYSQL_PASSWORD=mysqlpw mysql:5.7 - - ``` - -2. Create a configuration file `mysqld.cnf`. - - ```bash - - [mysqld] - pid-file = /var/run/mysqld/mysqld.pid - socket = /var/run/mysqld/mysqld.sock - datadir = /var/lib/mysql - #log-error = /var/log/mysql/error.log - # By default we only accept connections from localhost - #bind-address = 127.0.0.1 - # Disabling symbolic-links is recommended to prevent assorted security risks - symbolic-links=0 - log-bin=mysql-bin - binlog-format=ROW - server_id=1 - - ``` - -3. Copy the configuration file `mysqld.cnf` to MySQL server. - - ```bash - - $ docker cp mysqld.cnf pulsar-mysql:/etc/mysql/mysql.conf.d/ - - ``` - -4. Restart the MySQL server. - - ```bash - - $ docker restart pulsar-mysql - - ``` - -5. Create a test database in MySQL server. - - ```bash - - $ docker exec -it pulsar-mysql /bin/bash - $ mysql -h 127.0.0.1 -uroot -pcanal -e 'create database test;' - - ``` - -6. Start a Canal server and connect to MySQL server. - - ``` - - $ docker pull canal/canal-server:v1.1.2 - $ docker run -d -it --link pulsar-mysql -e canal.auto.scan=false -e canal.destinations=test -e canal.instance.master.address=pulsar-mysql:3306 -e canal.instance.dbUsername=root -e canal.instance.dbPassword=canal -e canal.instance.connectionCharset=UTF-8 -e canal.instance.tsdb.enable=true -e canal.instance.gtidon=false --name=pulsar-canal-server -p 8000:8000 -p 2222:2222 -p 11111:11111 -p 11112:11112 -m 4096m canal/canal-server:v1.1.2 - - ``` - -7. Start Pulsar standalone. - - ```bash - - $ docker pull apachepulsar/pulsar:2.3.0 - $ docker run -d -it --link pulsar-canal-server -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-standalone apachepulsar/pulsar:2.3.0 bin/pulsar standalone - - ``` - -8. Modify the configuration file `canal-mysql-source-config.yaml`. - - ```yaml - - configs: - zkServers: "" - batchSize: "5120" - destination: "test" - username: "" - password: "" - cluster: false - singleHostname: "pulsar-canal-server" - singlePort: "11111" - - ``` - -9. Create a consumer file `pulsar-client.py`. 
- - ```python - - import pulsar - - client = pulsar.Client('pulsar://localhost:6650') - consumer = client.subscribe('my-topic', - subscription_name='my-sub') - - while True: - msg = consumer.receive() - print("Received message: '%s'" % msg.data()) - consumer.acknowledge(msg) - - client.close() - - ``` - -10. Copy the configuration file `canal-mysql-source-config.yaml` and the consumer file `pulsar-client.py` to Pulsar server. - - ```bash - - $ docker cp canal-mysql-source-config.yaml pulsar-standalone:/pulsar/conf/ - $ docker cp pulsar-client.py pulsar-standalone:/pulsar/ - - ``` - -11. Download a Canal connector and start it. - - ```bash - - $ docker exec -it pulsar-standalone /bin/bash - $ wget https://archive.apache.org/dist/pulsar/pulsar-2.3.0/connectors/pulsar-io-canal-2.3.0.nar -P connectors - $ ./bin/pulsar-admin source localrun \ - --archive ./connectors/pulsar-io-canal-2.3.0.nar \ - --classname org.apache.pulsar.io.canal.CanalStringSource \ - --tenant public \ - --namespace default \ - --name canal \ - --destination-topic-name my-topic \ - --source-config-file /pulsar/conf/canal-mysql-source-config.yaml \ - --parallelism 1 - - ``` - -12. Consume data from MySQL. - - ```bash - - $ docker exec -it pulsar-standalone /bin/bash - $ python pulsar-client.py - - ``` - -13. Open another window to log in MySQL server. - - ```bash - - $ docker exec -it pulsar-mysql /bin/bash - $ mysql -h 127.0.0.1 -uroot -pcanal - - ``` - -14. Create a table, and insert, delete, and update data in MySQL server. - - ```bash - - mysql> use test; - mysql> show tables; - mysql> CREATE TABLE IF NOT EXISTS `test_table`(`test_id` INT UNSIGNED AUTO_INCREMENT,`test_title` VARCHAR(100) NOT NULL, - `test_author` VARCHAR(40) NOT NULL, - `test_date` DATE,PRIMARY KEY ( `test_id` ))ENGINE=InnoDB DEFAULT CHARSET=utf8; - mysql> INSERT INTO test_table (test_title, test_author, test_date) VALUES("a", "b", NOW()); - mysql> UPDATE test_table SET test_title='c' WHERE test_title='a'; - mysql> DELETE FROM test_table WHERE test_title='c'; - - ``` - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/io-cassandra-sink.md b/site2/website/versioned_docs/version-2.10.0-deprecated/io-cassandra-sink.md deleted file mode 100644 index d7f0e55abaa31e..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/io-cassandra-sink.md +++ /dev/null @@ -1,59 +0,0 @@ ---- -id: io-cassandra-sink -title: Cassandra sink connector -sidebar_label: "Cassandra sink connector" -original_id: io-cassandra-sink ---- - -The Cassandra sink connector pulls messages from Pulsar topics to Cassandra clusters. - -## Configuration - -The configuration of the Cassandra sink connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `roots` | String|true | " " (empty string) | A comma-separated list of Cassandra hosts to connect to.| -| `keyspace` | String|true| " " (empty string)| The key space used for writing pulsar messages.
**Note: `keyspace` should be created prior to a Cassandra sink.**|
-| `keyname` | String|true| " " (empty string)| The key name of the Cassandra column family.<br/><br/>The column is used for storing Pulsar message keys.<br/><br/>If a Pulsar message doesn't have any key associated, the message value is used as the key. |
-| `columnFamily` | String|true| " " (empty string)| The Cassandra column family name.<br/><br/>**Note: `columnFamily` should be created prior to a Cassandra sink.**|
-| `columnName` | String|true| " " (empty string) | The column name of the Cassandra column family.<br/><br/>
    The column is used for storing Pulsar message values. | - -### Example - -Before using the Cassandra sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "configs": { - "roots": "localhost:9042", - "keyspace": "pulsar_test_keyspace", - "columnFamily": "pulsar_test_table", - "keyname": "key", - "columnName": "col" - } - } - - ``` - -* YAML - - ``` - - configs: - roots: "localhost:9042" - keyspace: "pulsar_test_keyspace" - columnFamily: "pulsar_test_table" - keyname: "key" - columnName: "col" - - ``` - -## Usage - -For more information about **how to connect Pulsar with Cassandra**, see [here](io-quickstart.md#connect-pulsar-to-apache-cassandra). diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/io-cdc-debezium.md b/site2/website/versioned_docs/version-2.10.0-deprecated/io-cdc-debezium.md deleted file mode 100644 index 4558ae41d211b2..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/io-cdc-debezium.md +++ /dev/null @@ -1,549 +0,0 @@ ---- -id: io-cdc-debezium -title: Debezium source connector -sidebar_label: "Debezium source connector" -original_id: io-cdc-debezium ---- - -The Debezium source connector pulls messages from MySQL or PostgreSQL -and persists the messages to Pulsar topics. - -## Configuration - -The configuration of Debezium source connector has the following properties. - -| Name | Required | Default | Description | -|------|----------|---------|-------------| -| `task.class` | true | null | A source task class that implemented in Debezium. | -| `database.hostname` | true | null | The address of a database server. | -| `database.port` | true | null | The port number of a database server.| -| `database.user` | true | null | The name of a database user that has the required privileges. | -| `database.password` | true | null | The password for a database user that has the required privileges. | -| `database.server.id` | true | null | The connector’s identifier that must be unique within a database cluster and similar to the database’s server-id configuration property. | -| `database.server.name` | true | null | The logical name of a database server/cluster, which forms a namespace and it is used in all the names of Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the Avro Connector is used. | -| `database.whitelist` | false | null | A list of all databases hosted by this server which is monitored by the connector.
This is optional, and there are other properties for listing databases and tables to include or exclude from monitoring. |
-| `key.converter` | true | null | The converter provided by Kafka Connect to convert record key. |
-| `value.converter` | true | null | The converter provided by Kafka Connect to convert record value. |
-| `database.history` | true | null | The name of the database history class. |
-| `database.history.pulsar.topic` | true | null | The name of the database history topic where the connector writes and recovers DDL statements.<br/><br/>
    **Note: this topic is for internal use only and should not be used by consumers.** | -| `database.history.pulsar.service.url` | true | null | Pulsar cluster service URL for history topic. | -| `pulsar.service.url` | true | null | Pulsar cluster service URL for the offset topic used in Debezium. You can use the `bin/pulsar-admin --admin-url http://pulsar:8080 sources localrun --source-config-file configs/pg-pulsar-config.yaml` command to point to the target Pulsar cluster.| -| `offset.storage.topic` | true | null | Record the last committed offsets that the connector successfully completes. | -| `mongodb.hosts` | true | null | The comma-separated list of hostname and port pairs (in the form 'host' or 'host:port') of the MongoDB servers in the replica set. The list contains a single hostname and a port pair. If mongodb.members.auto.discover is set to false, the host and port pair are prefixed with the replica set name (e.g., rs0/localhost:27017). | -| `mongodb.name` | true | null | A unique name that identifies the connector and/or MongoDB replica set or shared cluster that this connector monitors. Each server should be monitored by at most one Debezium connector, since this server name prefixes all persisted Kafka topics emanating from the MongoDB replica set or cluster. | -| `mongodb.user` | true | null | Name of the database user to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. | -| `mongodb.password` | true | null | Password to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. | -| `mongodb.task.id` | true | null | The taskId of the MongoDB connector that attempts to use a separate task for each replica set. | - - - -## Example of MySQL - -You need to create a configuration file before using the Pulsar Debezium connector. - -### Configuration - -You can use one of the following methods to create a configuration file. - -* JSON - - ```json - - { - "configs": { - "database.hostname": "localhost", - "database.port": "3306", - "database.user": "debezium", - "database.password": "dbz", - "database.server.id": "184054", - "database.server.name": "dbserver1", - "database.whitelist": "inventory", - "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory", - "database.history.pulsar.topic": "history-topic", - "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650", - "key.converter": "org.apache.kafka.connect.json.JsonConverter", - "value.converter": "org.apache.kafka.connect.json.JsonConverter", - "pulsar.service.url": "pulsar://127.0.0.1:6650", - "offset.storage.topic": "offset-topic" - } - } - - ``` - -* YAML - - You can create a `debezium-mysql-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/resources/debezium-mysql-source-config.yaml) below to the `debezium-mysql-source-config.yaml` file. 
- - ```yaml - - tenant: "public" - namespace: "default" - name: "debezium-mysql-source" - topicName: "debezium-mysql-topic" - archive: "connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar" - parallelism: 1 - - configs: - - ## config for mysql, docker image: debezium/example-mysql:0.8 - database.hostname: "localhost" - database.port: "3306" - database.user: "debezium" - database.password: "dbz" - database.server.id: "184054" - database.server.name: "dbserver1" - database.whitelist: "inventory" - database.history: "org.apache.pulsar.io.debezium.PulsarDatabaseHistory" - database.history.pulsar.topic: "history-topic" - database.history.pulsar.service.url: "pulsar://127.0.0.1:6650" - - ## KEY_CONVERTER_CLASS_CONFIG, VALUE_CONVERTER_CLASS_CONFIG - key.converter: "org.apache.kafka.connect.json.JsonConverter" - value.converter: "org.apache.kafka.connect.json.JsonConverter" - - ## PULSAR_SERVICE_URL_CONFIG - pulsar.service.url: "pulsar://127.0.0.1:6650" - - ## OFFSET_STORAGE_TOPIC_CONFIG - offset.storage.topic: "offset-topic" - - ``` - -### Usage - -This example shows how to change the data of a MySQL table using the Pulsar Debezium connector. - -1. Start a MySQL server with a database from which Debezium can capture changes. - - ```bash - - $ docker run -it --rm \ - --name mysql \ - -p 3306:3306 \ - -e MYSQL_ROOT_PASSWORD=debezium \ - -e MYSQL_USER=mysqluser \ - -e MYSQL_PASSWORD=mysqlpw debezium/example-mysql:0.8 - - ``` - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - -3. Start the Pulsar Debezium connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - Make sure the nar file is available at `connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar`. - - ```bash - - $ bin/pulsar-admin source localrun \ - --archive connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar \ - --name debezium-mysql-source --destination-topic-name debezium-mysql-topic \ - --tenant public \ - --namespace default \ - --source-config '{"database.hostname": "localhost","database.port": "3306","database.user": "debezium","database.password": "dbz","database.server.id": "184054","database.server.name": "dbserver1","database.whitelist": "inventory","database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory","database.history.pulsar.topic": "history-topic","database.history.pulsar.service.url": "pulsar://127.0.0.1:6650","key.converter": "org.apache.kafka.connect.json.JsonConverter","value.converter": "org.apache.kafka.connect.json.JsonConverter","pulsar.service.url": "pulsar://127.0.0.1:6650","offset.storage.topic": "offset-topic"}' - - ``` - - * Use the **YAML** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin source localrun \ - --source-config-file debezium-mysql-source-config.yaml - - ``` - -4. Subscribe the topic _sub-products_ for the table _inventory.products_. - - ```bash - - $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0 - - ``` - -5. Start a MySQL client in docker. - - ```bash - - $ docker run -it --rm \ - --name mysqlterm \ - --link mysql \ - --rm mysql:5.7 sh \ - -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"' - - ``` - -6. A MySQL client pops out. - - Use the following commands to change the data of the table _products_. 
- - ``` - - mysql> use inventory; - mysql> show tables; - mysql> SELECT * FROM products; - mysql> UPDATE products SET name='1111111111' WHERE id=101; - mysql> UPDATE products SET name='1111111111' WHERE id=107; - - ``` - - In the terminal window of subscribing topic, you can find the data changes have been kept in the _sub-products_ topic. - -## Example of PostgreSQL - -You need to create a configuration file before using the Pulsar Debezium connector. - -### Configuration - -You can use one of the following methods to create a configuration file. - -* JSON - - ```json - - { - "configs": { - "database.hostname": "localhost", - "database.port": "5432", - "database.user": "postgres", - "database.password": "postgres", - "database.dbname": "postgres", - "database.server.name": "dbserver1", - "schema.whitelist": "inventory", - "pulsar.service.url": "pulsar://127.0.0.1:6650" - } - } - - ``` - -* YAML - - You can create a `debezium-postgres-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/resources/debezium-postgres-source-config.yaml) below to the `debezium-postgres-source-config.yaml` file. - - ```yaml - - tenant: "public" - namespace: "default" - name: "debezium-postgres-source" - topicName: "debezium-postgres-topic" - archive: "connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar" - parallelism: 1 - - configs: - - ## config for pg, docker image: debezium/example-postgress:0.8 - database.hostname: "localhost" - database.port: "5432" - database.user: "postgres" - database.password: "postgres" - database.dbname: "postgres" - database.server.name: "dbserver1" - schema.whitelist: "inventory" - - ## PULSAR_SERVICE_URL_CONFIG - pulsar.service.url: "pulsar://127.0.0.1:6650" - - ``` - -### Usage - -This example shows how to change the data of a PostgreSQL table using the Pulsar Debezium connector. - - -1. Start a PostgreSQL server with a database from which Debezium can capture changes. - - ```bash - - $ docker pull debezium/example-postgres:0.8 - $ docker run -d -it --rm --name pulsar-postgresql -p 5432:5432 debezium/example-postgres:0.8 - - ``` - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - -3. Start the Pulsar Debezium connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - Make sure the nar file is available at `connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar`. - - ```bash - - $ bin/pulsar-admin source localrun \ - --archive connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar \ - --name debezium-postgres-source \ - --destination-topic-name debezium-postgres-topic \ - --tenant public \ - --namespace default \ - --source-config '{"database.hostname": "localhost","database.port": "5432","database.user": "postgres","database.password": "postgres","database.dbname": "postgres","database.server.name": "dbserver1","schema.whitelist": "inventory","pulsar.service.url": "pulsar://127.0.0.1:6650"}' - - ``` - - * Use the **YAML** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin source localrun \ - --source-config-file debezium-postgres-source-config.yaml - - ``` - -4. Subscribe the topic _sub-products_ for the _inventory.products_ table. - - ``` - - $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0 - - ``` - -5. Start a PostgreSQL client in docker. 
- - ```bash - - $ docker exec -it pulsar-postgresql /bin/bash - - ``` - -6. A PostgreSQL client pops out. - - Use the following commands to change the data of the table _products_. - - ``` - - psql -U postgres postgres - postgres=# \c postgres; - You are now connected to database "postgres" as user "postgres". - postgres=# SET search_path TO inventory; - SET - postgres=# select * from products; - id | name | description | weight - -----+--------------------+---------------------------------------------------------+-------- - 102 | car battery | 12V car battery | 8.1 - 103 | 12-pack drill bits | 12-pack of drill bits with sizes ranging from #40 to #3 | 0.8 - 104 | hammer | 12oz carpenter's hammer | 0.75 - 105 | hammer | 14oz carpenter's hammer | 0.875 - 106 | hammer | 16oz carpenter's hammer | 1 - 107 | rocks | box of assorted rocks | 5.3 - 108 | jacket | water resistent black wind breaker | 0.1 - 109 | spare tire | 24 inch spare tire | 22.2 - 101 | 1111111111 | Small 2-wheel scooter | 3.14 - (9 rows) - - postgres=# UPDATE products SET name='1111111111' WHERE id=107; - UPDATE 1 - - ``` - - In the terminal window of subscribing topic, you can receive the following messages. - - ```bash - - ----- got message ----- - {"schema":{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":107}}�{"schema":{"type":"struct","fields":[{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":false,"field":"name"},{"type":"string","optional":true,"field":"description"},{"type":"double","optional":true,"field":"weight"}],"optional":true,"name":"dbserver1.inventory.products.Value","field":"before"},{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":false,"field":"name"},{"type":"string","optional":true,"field":"description"},{"type":"double","optional":true,"field":"weight"}],"optional":true,"name":"dbserver1.inventory.products.Value","field":"after"},{"type":"struct","fields":[{"type":"string","optional":true,"field":"version"},{"type":"string","optional":true,"field":"connector"},{"type":"string","optional":false,"field":"name"},{"type":"string","optional":false,"field":"db"},{"type":"int64","optional":true,"field":"ts_usec"},{"type":"int64","optional":true,"field":"txId"},{"type":"int64","optional":true,"field":"lsn"},{"type":"string","optional":true,"field":"schema"},{"type":"string","optional":true,"field":"table"},{"type":"boolean","optional":true,"default":false,"field":"snapshot"},{"type":"boolean","optional":true,"field":"last_snapshot_record"}],"optional":false,"name":"io.debezium.connector.postgresql.Source","field":"source"},{"type":"string","optional":false,"field":"op"},{"type":"int64","optional":true,"field":"ts_ms"}],"optional":false,"name":"dbserver1.inventory.products.Envelope"},"payload":{"before":{"id":107,"name":"rocks","description":"box of assorted rocks","weight":5.3},"after":{"id":107,"name":"1111111111","description":"box of assorted rocks","weight":5.3},"source":{"version":"0.9.2.Final","connector":"postgresql","name":"dbserver1","db":"postgres","ts_usec":1559208957661080,"txId":577,"lsn":23862872,"schema":"inventory","table":"products","snapshot":false,"last_snapshot_record":null},"op":"u","ts_ms":1559208957692}} - - ``` - -## Example of MongoDB - -You need to create a configuration file before using the Pulsar Debezium connector. 
- -* JSON - - ```json - - { - "configs": { - "mongodb.hosts": "rs0/mongodb:27017", - "mongodb.name": "dbserver1", - "mongodb.user": "debezium", - "mongodb.password": "dbz", - "mongodb.task.id": "1", - "database.whitelist": "inventory", - "pulsar.service.url": "pulsar://127.0.0.1:6650" - } - } - - ``` - -* YAML - - You can create a `debezium-mongodb-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mongodb/src/main/resources/debezium-mongodb-source-config.yaml) below to the `debezium-mongodb-source-config.yaml` file. - - ```yaml - - tenant: "public" - namespace: "default" - name: "debezium-mongodb-source" - topicName: "debezium-mongodb-topic" - archive: "connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar" - parallelism: 1 - - configs: - - ## config for pg, docker image: debezium/example-postgress:0.10 - mongodb.hosts: "rs0/mongodb:27017" - mongodb.name: "dbserver1" - mongodb.user: "debezium" - mongodb.password: "dbz" - mongodb.task.id: "1" - database.whitelist: "inventory" - - ## PULSAR_SERVICE_URL_CONFIG - pulsar.service.url: "pulsar://127.0.0.1:6650" - - ``` - -### Usage - -This example shows how to change the data of a MongoDB table using the Pulsar Debezium connector. - - -1. Start a MongoDB server with a database from which Debezium can capture changes. - - ```bash - - $ docker pull debezium/example-mongodb:0.10 - $ docker run -d -it --rm --name pulsar-mongodb -e MONGODB_USER=mongodb -e MONGODB_PASSWORD=mongodb -p 27017:27017 debezium/example-mongodb:0.10 - - ``` - - Use the following commands to initialize the data. - - ``` bash - - ./usr/local/bin/init-inventory.sh - - ``` - - If the local host cannot access the container network, you can update the file ```/etc/hosts``` and add a rule ```127.0.0.1 6 f114527a95f```. f114527a95f is container id, you can try to get by ```docker ps -a``` - - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - -3. Start the Pulsar Debezium connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - Make sure the nar file is available at `connectors/pulsar-io-mongodb-@pulsar:version@.nar`. - - ```bash - - $ bin/pulsar-admin source localrun \ - --archive connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar \ - --name debezium-mongodb-source \ - --destination-topic-name debezium-mongodb-topic \ - --tenant public \ - --namespace default \ - --source-config '{"mongodb.hosts": "rs0/mongodb:27017","mongodb.name": "dbserver1","mongodb.user": "debezium","mongodb.password": "dbz","mongodb.task.id": "1","database.whitelist": "inventory","pulsar.service.url": "pulsar://127.0.0.1:6650"}' - - ``` - - * Use the **YAML** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin source localrun \ - --source-config-file debezium-mongodb-source-config.yaml - - ``` - -4. Subscribe the topic _sub-products_ for the _inventory.products_ table. - - ``` - - $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0 - - ``` - -5. Start a MongoDB client in docker. - - ```bash - - $ docker exec -it pulsar-mongodb /bin/bash - - ``` - -6. A MongoDB client pops out. - - ```bash - - mongo -u debezium -p dbz --authenticationDatabase admin localhost:27017/inventory - db.products.update({"_id":NumberLong(104)},{$set:{weight:1.25}}) - - ``` - - In the terminal window of subscribing topic, you can receive the following messages. 
- - ```bash - - ----- got message ----- - {"schema":{"type":"struct","fields":[{"type":"string","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":"104"}}, value = {"schema":{"type":"struct","fields":[{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"after"},{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"patch"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"version"},{"type":"string","optional":false,"field":"connector"},{"type":"string","optional":false,"field":"name"},{"type":"int64","optional":false,"field":"ts_ms"},{"type":"string","optional":true,"name":"io.debezium.data.Enum","version":1,"parameters":{"allowed":"true,last,false"},"default":"false","field":"snapshot"},{"type":"string","optional":false,"field":"db"},{"type":"string","optional":false,"field":"rs"},{"type":"string","optional":false,"field":"collection"},{"type":"int32","optional":false,"field":"ord"},{"type":"int64","optional":true,"field":"h"}],"optional":false,"name":"io.debezium.connector.mongo.Source","field":"source"},{"type":"string","optional":true,"field":"op"},{"type":"int64","optional":true,"field":"ts_ms"}],"optional":false,"name":"dbserver1.inventory.products.Envelope"},"payload":{"after":"{\"_id\": {\"$numberLong\": \"104\"},\"name\": \"hammer\",\"description\": \"12oz carpenter's hammer\",\"weight\": 1.25,\"quantity\": 4}","patch":null,"source":{"version":"0.10.0.Final","connector":"mongodb","name":"dbserver1","ts_ms":1573541905000,"snapshot":"true","db":"inventory","rs":"rs0","collection":"products","ord":1,"h":4983083486544392763},"op":"r","ts_ms":1573541909761}}. - - ``` - -## FAQ - -### Debezium postgres connector will hang when create snap - -```$xslt - -#18 prio=5 os_prio=31 tid=0x00007fd83096f800 nid=0xa403 waiting on condition [0x000070000f534000] - java.lang.Thread.State: WAITING (parking) - at sun.misc.Unsafe.park(Native Method) - - parking to wait for <0x00000007ab025a58> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject) - at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) - at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) - at java.util.concurrent.LinkedBlockingDeque.putLast(LinkedBlockingDeque.java:396) - at java.util.concurrent.LinkedBlockingDeque.put(LinkedBlockingDeque.java:649) - at io.debezium.connector.base.ChangeEventQueue.enqueue(ChangeEventQueue.java:132) - at io.debezium.connector.postgresql.PostgresConnectorTask$Lambda$203/385424085.accept(Unknown Source) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.sendCurrentRecord(RecordsSnapshotProducer.java:402) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.readTable(RecordsSnapshotProducer.java:321) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$takeSnapshot$6(RecordsSnapshotProducer.java:226) - at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$240/1347039967.accept(Unknown Source) - at io.debezium.jdbc.JdbcConnection.queryWithBlockingConsumer(JdbcConnection.java:535) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.takeSnapshot(RecordsSnapshotProducer.java:224) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$start$0(RecordsSnapshotProducer.java:87) - at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$206/589332928.run(Unknown Source) - at 
java.util.concurrent.CompletableFuture.uniRun(CompletableFuture.java:705) - at java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:717) - at java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2010) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.start(RecordsSnapshotProducer.java:87) - at io.debezium.connector.postgresql.PostgresConnectorTask.start(PostgresConnectorTask.java:126) - at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:47) - at org.apache.pulsar.io.kafka.connect.KafkaConnectSource.open(KafkaConnectSource.java:127) - at org.apache.pulsar.io.debezium.DebeziumSource.open(DebeziumSource.java:100) - at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupInput(JavaInstanceRunnable.java:690) - at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupJavaInstance(JavaInstanceRunnable.java:200) - at org.apache.pulsar.functions.instance.JavaInstanceRunnable.run(JavaInstanceRunnable.java:230) - at java.lang.Thread.run(Thread.java:748) - -``` - -If you encounter this problem when synchronizing data, refer to [this issue](https://github.com/apache/pulsar/issues/4075) and add the `max.queue.size` option to the configuration file, setting it to a value large enough for your snapshot: - -```properties - -max.queue.size= - -``` - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/io-cdc.md b/site2/website/versioned_docs/version-2.10.0-deprecated/io-cdc.md deleted file mode 100644 index e6e662884826de..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/io-cdc.md +++ /dev/null @@ -1,26 +0,0 @@ ---- -id: io-cdc -title: CDC connector -sidebar_label: "CDC connector" -original_id: io-cdc ---- - -CDC source connectors capture log changes of databases (such as MySQL, MongoDB, and PostgreSQL) into Pulsar. - -> CDC source connectors are built on top of [Canal](https://github.com/alibaba/canal) and [Debezium](https://debezium.io/) and store all data in a Pulsar cluster in a persistent, replicated, and partitioned way. - -Currently, Pulsar has the following CDC connectors. - -Name|Java Class -|---|--- -[Canal source connector](io-canal-source.md)|[org.apache.pulsar.io.canal.CanalStringSource.java](https://github.com/apache/pulsar/blob/master/pulsar-io/canal/src/main/java/org/apache/pulsar/io/canal/CanalStringSource.java) -[Debezium source connector](io-cdc-debezium.md)|
[org.apache.pulsar.io.debezium.DebeziumSource.java](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/core/src/main/java/org/apache/pulsar/io/debezium/DebeziumSource.java)
[org.apache.pulsar.io.debezium.mysql.DebeziumMysqlSource.java](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/java/org/apache/pulsar/io/debezium/mysql/DebeziumMysqlSource.java)
[org.apache.pulsar.io.debezium.postgres.DebeziumPostgresSource.java](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/java/org/apache/pulsar/io/debezium/postgres/DebeziumPostgresSource.java)
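Each of these connectors ships as a NAR package that Pulsar loads from its `connectors` directory. As a quick sanity check, you can ask the cluster which built-in sources it has loaded. The commands below are a minimal sketch that assumes a running Pulsar with the connector NAR files already copied into `connectors/`; both subcommands are documented in the Connector Admin CLI reference that follows.

```bash
# List the built-in source connectors the cluster has loaded; the Canal and
# Debezium sources should appear here once their NAR files are in place.
bin/pulsar-admin sources available-sources

# Reload the built-in connectors after adding or upgrading a NAR file.
bin/pulsar-admin sources reload
```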
- -For more information about Canal and Debezium, see the resources below. - -Subject | Reference -|---|--- -How to use Canal source connector with MySQL|[Canal guide](https://github.com/alibaba/canal/wiki) -How does Canal work | [Canal tutorial](https://github.com/alibaba/canal/wiki) -How to use Debezium source connector with MySQL | [Debezium guide](https://debezium.io/docs/connectors/mysql/) -How does Debezium work | [Debezium tutorial](https://debezium.io/docs/tutorial/) diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/io-cli.md b/site2/website/versioned_docs/version-2.10.0-deprecated/io-cli.md deleted file mode 100644 index f79d301c30b3f9..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/io-cli.md +++ /dev/null @@ -1,666 +0,0 @@ ---- -id: io-cli -title: Connector Admin CLI -sidebar_label: "CLI" -original_id: io-cli ---- - -:::note - -**Important** - -This page is deprecated and no longer updated. For the latest and complete information about `Pulsar-admin`, including commands, flags, descriptions, and more, see [Pulsar admin docs](/tools/pulsar-admin/). - -::: - -The `pulsar-admin` tool helps you manage Pulsar connectors. - -## `sources` - -An interface for managing Pulsar IO sources (ingress data into Pulsar). - -```bash - -$ pulsar-admin sources subcommands - -``` - -Subcommands are: - -* `create` - -* `update` - -* `delete` - -* `get` - -* `status` - -* `list` - -* `stop` - -* `start` - -* `restart` - -* `localrun` - -* `available-sources` - -* `reload` - - -### `create` - -Submit a Pulsar IO source connector to run in a Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sources create options - -``` - -#### Options - -|Flag|Description| -|----|---| -| `-a`, `--archive` | The path to the NAR archive for the source.
    It also supports url-path (http/https/file [file protocol assumes that file already exists on worker host]) from which worker can download the package. -| `--classname` | The source's class name if `archive` is file-url-path (file://). -| `--cpu` | The CPU (in cores) that needs to be allocated per source instance (applicable only to Docker runtime). -| `--deserialization-classname` | The SerDe classname for the source. -| `--destination-topic-name` | The Pulsar topic to which data is sent. -| `--disk` | The disk (in bytes) that needs to be allocated per source instance (applicable only to Docker runtime). -|`--name` | The source's name. -| `--namespace` | The source's namespace. -| ` --parallelism` | The source's parallelism factor, that is, the number of source instances to run. -| `--processing-guarantees` | The processing guarantees (also named as delivery semantics) applied to the source. A source connector receives messages from external system and writes messages to a Pulsar topic. The `--processing-guarantees` is used to ensure the processing guarantees for writing messages to the Pulsar topic.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE. -| `--ram` | The RAM (in bytes) that needs to be allocated per source instance (applicable only to the process and Docker runtimes). -| `-st`, `--schema-type` | The schema type.
Either a built-in schema (for example, AVRO or JSON) or a custom schema class name used to encode messages emitted from the source. -| `--source-config` | Source config key/values. -| `--source-config-file` | The path to a YAML config file specifying the source's configuration. -| `-t`, `--source-type` | The source's connector provider. -| `--tenant` | The source's tenant. -|`--producer-config`| The custom producer configuration (as a JSON string). - -### `update` - -Update an already submitted Pulsar IO source connector. - -#### Usage - -```bash - -$ pulsar-admin sources update options - -``` - -#### Options - -|Flag|Description| -|----|---| -| `-a`, `--archive` | The path to the NAR archive for the source.
    It also supports url-path (http/https/file [file protocol assumes that file already exists on worker host]) from which worker can download the package. -| `--classname` | The source's class name if `archive` is file-url-path (file://). -| `--cpu` | The CPU (in cores) that needs to be allocated per source instance (applicable only to Docker runtime). -| `--deserialization-classname` | The SerDe classname for the source. -| `--destination-topic-name` | The Pulsar topic to which data is sent. -| `--disk` | The disk (in bytes) that needs to be allocated per source instance (applicable only to Docker runtime). -|`--name` | The source's name. -| `--namespace` | The source's namespace. -| ` --parallelism` | The source's parallelism factor, that is, the number of source instances to run. -| `--processing-guarantees` | The processing guarantees (also named as delivery semantics) applied to the source. A source connector receives messages from external system and writes messages to a Pulsar topic. The `--processing-guarantees` is used to ensure the processing guarantees for writing messages to the Pulsar topic.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE. -| `--ram` | The RAM (in bytes) that needs to be allocated per source instance (applicable only to the process and Docker runtimes). -| `-st`, `--schema-type` | The schema type.
Either a built-in schema (for example, AVRO or JSON) or a custom schema class name used to encode messages emitted from the source. -| `--source-config` | Source config key/values. -| `--source-config-file` | The path to a YAML config file specifying the source's configuration. -| `-t`, `--source-type` | The source's connector provider. The `source-type` parameter of the currently built-in connectors is determined by the setting of the `name` parameter specified in the pulsar-io.yaml file. -| `--tenant` | The source's tenant. -| `--update-auth-data` | Whether or not to update the auth data.
    **Default value: false.** - - -### `delete` - -Delete a Pulsar IO source connector. - -#### Usage - -```bash - -$ pulsar-admin sources delete options - -``` - -#### Option - -|Flag|Description| -|---|---| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - -### `get` - -Get the information about a Pulsar IO source connector. - -#### Usage - -```bash - -$ pulsar-admin sources get options - -``` - -#### Options -|Flag|Description| -|---|---| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - - -### `status` - -Check the current status of a Pulsar Source. - -#### Usage - -```bash - -$ pulsar-admin sources status options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The source ID.
If `instance-id` is not provided, Pulsar gets the status of all instances.| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - -### `list` - -List all running Pulsar IO source connectors. - -#### Usage - -```bash - -$ pulsar-admin sources list options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - - -### `stop` - -Stop a source instance. - -#### Usage - -```bash - -$ pulsar-admin sources stop options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The source instance ID.
    If `instance-id` is not provided, Pulsar stops all instances.| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - -### `start` - -Start a source instance. - -#### Usage - -```bash - -$ pulsar-admin sources start options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The source instanceID.
    If `instance-id` is not provided, Pulsar starts all instances.| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - - -### `restart` - -Restart a source instance. - -#### Usage - -```bash - -$ pulsar-admin sources restart options - -``` - -#### Options -|Flag|Description| -|---|---| -|`--instance-id`|The source instanceID.
    If `instance-id` is not provided, Pulsar restarts all instances. -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - - -### `localrun` - -Run a Pulsar IO source connector locally rather than deploying it to the Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sources localrun options - -``` - -#### Options - -|Flag|Description| -|----|---| -| `-a`, `--archive` | The path to the NAR archive for the Source.
    It also supports url-path (http/https/file [file protocol assumes that file already exists on worker host]) from which worker can download the package. -| `--broker-service-url` | The URL for the Pulsar broker. -|`--classname`|The source's class name if `archive` is file-url-path (file://). -| `--client-auth-params` | Client authentication parameter. -| `--client-auth-plugin` | Client authentication plugin using which function-process can connect to broker. -|`--cpu`|The CPU (in cores) that needs to be allocated per source instance (applicable only to the Docker runtime).| -|`--deserialization-classname`|The SerDe classname for the source. -|`--destination-topic-name`|The Pulsar topic to which data is sent. -|`--disk`|The disk (in bytes) that needs to be allocated per source instance (applicable only to the Docker runtime).| -|`--hostname-verification-enabled`|Enable hostname verification.
**Default value: false**. -|`--name`|The source’s name.| -|`--namespace`|The source’s namespace.| -|`--parallelism`|The source’s parallelism factor, that is, the number of source instances to run.| -|`--processing-guarantees` | The processing guarantees (also known as delivery semantics) applied to the source. A source connector receives messages from an external system and writes messages to a Pulsar topic. The `--processing-guarantees` option is used to ensure the processing guarantees for writing messages to the Pulsar topic.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE. -|`--ram`|The RAM (in bytes) that needs to be allocated per source instance (applicable only to the Docker runtime).| -| `-st`, `--schema-type` | The schema type.
Either a built-in schema (for example, AVRO or JSON) or a custom schema class name used to encode messages emitted from the source. -|`--source-config`|Source config key/values. -|`--source-config-file`|The path to a YAML config file specifying the source’s configuration. -|`--source-type`|The source's connector provider. -|`--tenant`|The source’s tenant. -|`--tls-allow-insecure`|Allow an insecure TLS connection.
**Default value: false**. -|`--tls-trust-cert-path`|The TLS trust certificate file path. -|`--use-tls`|Use a TLS connection.
    **Default value: false**. -|`--producer-config`| The custom producer configuration (as a JSON string). - -### `available-sources` - -Get the list of Pulsar IO connector sources supported by Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sources available-sources - -``` - -### `reload` - -Reload the available built-in connectors. - -#### Usage - -```bash - -$ pulsar-admin sources reload - -``` - -## `sinks` - -An interface for managing Pulsar IO sinks (egress data from Pulsar). - -```bash - -$ pulsar-admin sinks subcommands - -``` - -Subcommands are: - -* `create` - -* `update` - -* `delete` - -* `get` - -* `status` - -* `list` - -* `stop` - -* `start` - -* `restart` - -* `localrun` - -* `available-sinks` - -* `reload` - - -### `create` - -Submit a Pulsar IO sink connector to run in a Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sinks create options - -``` - -#### Options - -|Flag|Description| -|----|---| -| `-a`, `--archive` | The path to the archive file for the sink.
    It also supports url-path (http/https/file [file protocol assumes that file already exists on worker host]) from which worker can download the package. -| `--auto-ack` | Whether or not the framework will automatically acknowledge messages. -| `--classname` | The sink's class name if `archive` is file-url-path (file://). -| `--cpu` | The CPU (in cores) that needs to be allocated per sink instance (applicable only to Docker runtime). -| `--custom-schema-inputs` | The map of input topics to schema types or class names (as a JSON string). -| `--custom-serde-inputs` | The map of input topics to SerDe class names (as a JSON string). -| `--disk` | The disk (in bytes) that needs to be allocated per sink instance (applicable only to Docker runtime). -|`-i, --inputs` | The sink's input topic or topics (multiple topics can be specified as a comma-separated list). -|`--name` | The sink's name. -| `--namespace` | The sink's namespace. -| ` --parallelism` | The sink's parallelism factor, that is, the number of sink instances to run. -| `--processing-guarantees` | The processing guarantees (also known as delivery semantics) applied to the sink. The `--processing-guarantees` implementation in Pulsar also relies on sink implementation.
The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE. -| `--ram` | The RAM (in bytes) that needs to be allocated per sink instance (applicable only to the process and Docker runtimes). -| `--retain-ordering` | The sink consumes and sinks messages in order. -| `--sink-config` | Sink config key/values. -| `--sink-config-file` | The path to a YAML config file specifying the sink's configuration. -| `-t`, `--sink-type` | The sink's connector provider. The `sink-type` parameter of the currently built-in connectors is determined by the setting of the `name` parameter specified in the pulsar-io.yaml file. -| `--subs-name` | The Pulsar subscription name to use for the input-topic consumer, if a specific name is wanted. -| `--tenant` | The sink's tenant. -| `--timeout-ms` | The message timeout in milliseconds. -| `--topics-pattern` | The topics pattern used to consume from a list of topics under a namespace that matches the pattern.
`--inputs` and `--topics-pattern` are mutually exclusive.
Add the SerDe class name for a pattern in `--customSerdeInputs` (supported for Java functions only). - -### `update` - -Update a Pulsar IO sink connector. - -#### Usage - -```bash - -$ pulsar-admin sinks update options - -``` - -#### Options - -|Flag|Description| -|----|---| -| `-a`, `--archive` | The path to the archive file for the sink.
    It also supports url-path (http/https/file [file protocol assumes that file already exists on worker host]) from which worker can download the package. -| `--auto-ack` | Whether or not the framework will automatically acknowledge messages. -| `--classname` | The sink's class name if `archive` is file-url-path (file://). -| `--cpu` | The CPU (in cores) that needs to be allocated per sink instance (applicable only to Docker runtime). -| `--custom-schema-inputs` | The map of input topics to schema types or class names (as a JSON string). -| `--custom-serde-inputs` | The map of input topics to SerDe class names (as a JSON string). -| `--disk` | The disk (in bytes) that needs to be allocated per sink instance (applicable only to Docker runtime). -|`-i, --inputs` | The sink's input topic or topics (multiple topics can be specified as a comma-separated list). -|`--name` | The sink's name. -| `--namespace` | The sink's namespace. -| ` --parallelism` | The sink's parallelism factor, that is, the number of sink instances to run. -| `--processing-guarantees` | The processing guarantees (also known as delivery semantics) applied to the sink. The `--processing-guarantees` implementation in Pulsar also relies on sink implementation.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE. -| `--ram` | The RAM (in bytes) that needs to be allocated per sink instance (applicable only to the process and Docker runtimes). -| `--retain-ordering` | Sink consumes and sinks messages in order. -| `--sink-config` | sink config key/values. -| `--sink-config-file` | The path to a YAML config file specifying the sink's configuration. -| `-t`, `--sink-type` | The sink's connector provider. -| `--subs-name` | Pulsar source subscription name if user wants a specific subscription-name for input-topic consumer. -| `--tenant` | The sink's tenant. -| `--timeout-ms` | The message timeout in milliseconds. -| `--topics-pattern` | TopicsPattern to consume from list of topics under a namespace that match the pattern.
`--inputs` and `--topics-pattern` are mutually exclusive.
Add the SerDe class name for a pattern in `--customSerdeInputs` (supported for Java functions only). -| `--update-auth-data` | Whether or not to update the auth data.
    **Default value: false.** - -### `delete` - -Delete a Pulsar IO sink connector. - -#### Usage - -```bash - -$ pulsar-admin sinks delete options - -``` - -#### Option - -|Flag|Description| -|---|---| -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - -### `get` - -Get the information about a Pulsar IO sink connector. - -#### Usage - -```bash - -$ pulsar-admin sinks get options - -``` - -#### Options -|Flag|Description| -|---|---| -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - - -### `status` - -Check the current status of a Pulsar sink. - -#### Usage - -```bash - -$ pulsar-admin sinks status options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The sink ID.
If `instance-id` is not provided, Pulsar gets the status of all instances.| -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - - -### `list` - -List all running Pulsar IO sink connectors. - -#### Usage - -```bash - -$ pulsar-admin sinks list options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - - -### `stop` - -Stop a sink instance. - -#### Usage - -```bash - -$ pulsar-admin sinks stop options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The sink instance ID.
    If `instance-id` is not provided, Pulsar stops all instances.| -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - -### `start` - -Start a sink instance. - -#### Usage - -```bash - -$ pulsar-admin sinks start options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The sink instanceID.
    If `instance-id` is not provided, Pulsar starts all instances.| -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - - -### `restart` - -Restart a sink instance. - -#### Usage - -```bash - -$ pulsar-admin sinks restart options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The sink instanceID.
    If `instance-id` is not provided, Pulsar restarts all instances. -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - - -### `localrun` - -Run a Pulsar IO sink connector locally rather than deploying it to the Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sinks localrun options - -``` - -#### Options - -|Flag|Description| -|----|---| -| `-a`, `--archive` | The path to the archive file for the sink.
    It also supports url-path (http/https/file [file protocol assumes that file already exists on worker host]) from which worker can download the package. -| `--auto-ack` | Whether or not the framework will automatically acknowledge messages. -| `--broker-service-url` | The URL for the Pulsar broker. -|`--classname`|The sink's class name if `archive` is file-url-path (file://). -| `--client-auth-params` | Client authentication parameter. -| `--client-auth-plugin` | Client authentication plugin using which function-process can connect to broker. -|`--cpu`|The CPU (in cores) that needs to be allocated per sink instance (applicable only to the Docker runtime). -| `--custom-schema-inputs` | The map of input topics to Schema types or class names (as a JSON string). -| `--max-redeliver-count` | Maximum number of times that a message is redelivered before being sent to the dead letter queue. -| `--dead-letter-topic` | Name of the dead letter topic where the failing messages are sent. -| `--custom-serde-inputs` | The map of input topics to SerDe class names (as a JSON string). -|`--disk`|The disk (in bytes) that needs to be allocated per sink instance (applicable only to the Docker runtime).| -|`--hostname-verification-enabled`|Enable hostname verification.
**Default value: false**. -| `-i`, `--inputs` | The sink's input topic or topics (multiple topics can be specified as a comma-separated list). -|`--name`|The sink’s name.| -|`--namespace`|The sink’s namespace.| -|`--parallelism`|The sink’s parallelism factor, that is, the number of sink instances to run.| -|`--processing-guarantees`|The processing guarantees (also known as delivery semantics) applied to the sink. The `--processing-guarantees` implementation in Pulsar also relies on the sink implementation.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE. -|`--ram`|The RAM (in bytes) that needs to be allocated per sink instance (applicable only to the Docker runtime).| -|`--retain-ordering` | Sink consumes and sinks messages in order. -|`--sink-config`|sink config key/values. -|`--sink-config-file`|The path to a YAML config file specifying the sink’s configuration. -|`--sink-type`|The sink's connector provider. -|`--subs-name` | Pulsar source subscription name if user wants a specific subscription-name for input-topic consumer. -|`--tenant`|The sink’s tenant. -| `--timeout-ms` | The message timeout in milliseconds. -| `--negative-ack-redelivery-delay-ms` | The negatively-acknowledged message redelivery delay in milliseconds. | -|`--tls-allow-insecure`|Allow insecure tls connection.
**Default value: false**. -|`--tls-trust-cert-path`|The TLS trust certificate file path. -| `--topics-pattern` | The topics pattern used to consume from a list of topics under a namespace that matches the pattern.
`--inputs` and `--topics-pattern` are mutually exclusive.
Add the SerDe class name for a pattern in `--customSerdeInputs` (supported for Java functions only). -|`--use-tls`|Use a TLS connection.
    **Default value: false**. - -### `available-sinks` - -Get the list of Pulsar IO connector sinks supported by Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sinks available-sinks - -``` - -### `reload` - -Reload the available built-in connectors. - -#### Usage - -```bash - -$ pulsar-admin sinks reload - -``` - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/io-connectors.md b/site2/website/versioned_docs/version-2.10.0-deprecated/io-connectors.md deleted file mode 100644 index 957a02a5a1964a..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/io-connectors.md +++ /dev/null @@ -1,249 +0,0 @@ ---- -id: io-connectors -title: Built-in connector -sidebar_label: "Built-in connector" -original_id: io-connectors ---- - -Pulsar distribution includes a set of common connectors that have been packaged and tested with the rest of Apache Pulsar. These connectors import and export data from some of the most commonly used data systems. - -Using any of these connectors is as easy as writing a simple connector and running the connector locally or submitting the connector to a Pulsar Functions cluster. - -## Source connector - -Pulsar has various source connectors, which are sorted alphabetically as below. - -### Canal - -* [Configuration](io-canal-source.md#configuration) - -* [Example](io-canal-source.md#usage) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/canal/src/main/java/org/apache/pulsar/io/canal/CanalStringSource.java) - - -### Debezium MySQL - -* [Configuration](io-debezium-source.md#configuration) - -* [Example](io-debezium-source.md#example-of-mysql) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/java/org/apache/pulsar/io/debezium/mysql/DebeziumMysqlSource.java) - -### Debezium PostgreSQL - -* [Configuration](io-debezium-source.md#configuration) - -* [Example](io-debezium-source.md#example-of-postgresql) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/java/org/apache/pulsar/io/debezium/postgres/DebeziumPostgresSource.java) - -### Debezium MongoDB - -* [Configuration](io-debezium-source.md#configuration) - -* [Example](io-debezium-source.md#example-of-mongodb) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mongodb/src/main/java/org/apache/pulsar/io/debezium/mongodb/DebeziumMongoDbSource.java) - -### Debezium Oracle - -* [Configuration](io-debezium-source.md#configuration) - -* [Example](io-debezium-source.md#example-of-oracle) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/oracle/src/main/java/org/apache/pulsar/io/debezium/oracle/DebeziumOracleSource.java) - -### Debezium Microsoft SQL Server - -* [Configuration](io-debezium-source.md#configuration) - -* [Example](io-debezium-source.md#example-of-microsoft-sql) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mssql/src/main/java/org/apache/pulsar/io/debezium/mssql/DebeziumMsSqlSource.java) - - -### DynamoDB - -* [Configuration](io-dynamodb-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/dynamodb/src/main/java/org/apache/pulsar/io/dynamodb/DynamoDBSource.java) - -### File - -* [Configuration](io-file-source.md#configuration) - -* [Example](io-file-source.md#usage) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/file/src/main/java/org/apache/pulsar/io/file/FileSource.java) - -### Flume - 
-* [Configuration](io-flume-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/java/org/apache/pulsar/io/flume/FlumeConnector.java) - -### Twitter firehose - -* [Configuration](io-twitter-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/twitter/src/main/java/org/apache/pulsar/io/twitter/TwitterFireHose.java) - -### Kafka - -* [Configuration](io-kafka-source.md#configuration) - -* [Example](io-kafka-source.md#usage) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSource.java) - -### Kinesis - -* [Configuration](io-kinesis-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/kinesis/src/main/java/org/apache/pulsar/io/kinesis/KinesisSource.java) - -### Netty - -* [Configuration](io-netty-source.md#configuration) - -* [Example of TCP](io-netty-source.md#tcp) - -* [Example of HTTP](io-netty-source.md#http) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/netty/src/main/java/org/apache/pulsar/io/netty/NettySource.java) - -### NSQ - -* [Configuration](io-nsq-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/nsq/src/main/java/org/apache/pulsar/io/nsq/NSQSource.java) - -### RabbitMQ - -* [Configuration](io-rabbitmq-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/rabbitmq/src/main/java/org/apache/pulsar/io/rabbitmq/RabbitMQSource.java) - -## Sink connector - -Pulsar has various sink connectors, which are sorted alphabetically as below. - -### Aerospike - -* [Configuration](io-aerospike-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/aerospike/src/main/java/org/apache/pulsar/io/aerospike/AerospikeStringSink.java) - -### Cassandra - -* [Configuration](io-cassandra-sink.md#configuration) - -* [Example](io-cassandra-sink.md#usage) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/cassandra/src/main/java/org/apache/pulsar/io/cassandra/CassandraStringSink.java) - -### ElasticSearch - -* [Configuration](io-elasticsearch-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/elastic-search/src/main/java/org/apache/pulsar/io/elasticsearch/ElasticSearchSink.java) - -### Flume - -* [Configuration](io-flume-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/java/org/apache/pulsar/io/flume/sink/StringSink.java) - -### HBase - -* [Configuration](io-hbase-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/hbase/src/main/java/org/apache/pulsar/io/hbase/HbaseAbstractConfig.java) - -### HDFS2 - -* [Configuration](io-hdfs2-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/hdfs2/src/main/java/org/apache/pulsar/io/hdfs2/AbstractHdfsConnector.java) - -### HDFS3 - -* [Configuration](io-hdfs3-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/hdfs3/src/main/java/org/apache/pulsar/io/hdfs3/AbstractHdfsConnector.java) - -### InfluxDB - -* [Configuration](io-influxdb-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/influxdb/src/main/java/org/apache/pulsar/io/influxdb/InfluxDBGenericRecordSink.java) - -### JDBC ClickHouse 
- -* [Configuration](io-jdbc-sink.md#configuration) - -* [Example](io-jdbc-sink.md#example-for-clickhouse) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/jdbc/clickhouse/src/main/java/org/apache/pulsar/io/jdbc/ClickHouseJdbcAutoSchemaSink.java) - -### JDBC MariaDB - -* [Configuration](io-jdbc-sink.md#configuration) - -* [Example](io-jdbc-sink.md#example-for-mariadb) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/jdbc/mariadb/src/main/java/org/apache/pulsar/io/jdbc/MariadbJdbcAutoSchemaSink.java) - -### JDBC PostgreSQL - -* [Configuration](io-jdbc-sink.md#configuration) - -* [Example](io-jdbc-sink.md#example-for-postgresql) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/jdbc/postgres/src/main/java/org/apache/pulsar/io/jdbc/PostgresJdbcAutoSchemaSink.java) - -### JDBC SQLite - -* [Configuration](io-jdbc-sink.md#configuration) - -* [Example](io-jdbc-sink.md#example-for-sqlite) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/jdbc/sqlite/src/main/java/org/apache/pulsar/io/jdbc/SqliteJdbcAutoSchemaSink.java) - -### Kafka - -* [Configuration](io-kafka-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSink.java) - -### Kinesis - -* [Configuration](io-kinesis-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/kinesis/src/main/java/org/apache/pulsar/io/kinesis/KinesisSink.java) - -### MongoDB - -* [Configuration](io-mongo-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/mongo/src/main/java/org/apache/pulsar/io/mongodb/MongoSink.java) - -### RabbitMQ - -* [Configuration](io-rabbitmq-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/rabbitmq/src/main/java/org/apache/pulsar/io/rabbitmq/RabbitMQSink.java) - -### Redis - -* [Configuration](io-redis-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/redis/src/main/java/org/apache/pulsar/io/redis/RedisAbstractConfig.java) - -### Solr - -* [Configuration](io-solr-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/solr/src/main/java/org/apache/pulsar/io/solr/SolrSinkConfig.java) - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/io-debezium-source.md b/site2/website/versioned_docs/version-2.10.0-deprecated/io-debezium-source.md deleted file mode 100644 index 122c4024068ad8..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/io-debezium-source.md +++ /dev/null @@ -1,770 +0,0 @@ ---- -id: io-debezium-source -title: Debezium source connector -sidebar_label: "Debezium source connector" -original_id: io-debezium-source ---- - -The Debezium source connector pulls messages from MySQL or PostgreSQL -and persists the messages to Pulsar topics. - -## Configuration - -The configuration of Debezium source connector has the following properties. - -| Name | Required | Default | Description | -|------|----------|---------|-------------| -| `task.class` | true | null | A source task class that implemented in Debezium. | -| `database.hostname` | true | null | The address of a database server. | -| `database.port` | true | null | The port number of a database server.| -| `database.user` | true | null | The name of a database user that has the required privileges. 
| -| `database.password` | true | null | The password for a database user that has the required privileges. | -| `database.server.id` | true | null | The connector’s identifier, which must be unique within a database cluster and similar to the database’s server-id configuration property. | -| `database.server.name` | true | null | The logical name of a database server/cluster, which forms a namespace and is used in all the names of Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the Avro Connector is used. | -| `database.whitelist` | false | null | A list of the databases hosted by this server that are monitored by the connector.

    This is optional, and there are other properties for listing databases and tables to include or exclude from monitoring. | -| `key.converter` | true | null | The converter provided by Kafka Connect to convert record key. | -| `value.converter` | true | null | The converter provided by Kafka Connect to convert record value. | -| `database.history` | true | null | The name of the database history class. | -| `database.history.pulsar.topic` | true | null | The name of the database history topic where the connector writes and recovers DDL statements.

**Note: this topic is for internal use only and should not be used by consumers.** | -| `database.history.pulsar.service.url` | true | null | Pulsar cluster service URL for the history topic. | -| `offset.storage.topic` | true | null | Records the last committed offsets that the connector successfully completed. | -| `json-with-envelope` | false | false | Whether the message consists of both schema and payload (true) or the payload only (false). - -### Converter Options - -1. org.apache.kafka.connect.json.JsonConverter - -The `json-with-envelope` config is valid only for the JsonConverter. Its default value is false: the consumer uses the schema `Schema.KeyValue(Schema.AUTO_CONSUME(), Schema.AUTO_CONSUME(), KeyValueEncodingType.SEPARATED)`, and the message consists of the payload only. - -If `json-with-envelope` is set to true, the consumer uses the schema `Schema.KeyValue(Schema.BYTES, Schema.BYTES)`, and the message consists of both schema and payload. - -2. org.apache.pulsar.kafka.shade.io.confluent.connect.avro.AvroConverter - -If you select the AvroConverter, the Pulsar consumer should use the schema `Schema.KeyValue(Schema.AUTO_CONSUME(), Schema.AUTO_CONSUME(), KeyValueEncodingType.SEPARATED)`, and the message consists of the payload. - -### MongoDB Configuration -| Name | Required | Default | Description | -|------|----------|---------|-------------| -| `mongodb.hosts` | true | null | The comma-separated list of hostname and port pairs (in the form 'host' or 'host:port') of the MongoDB servers in the replica set. The list contains a single hostname and a port pair. If mongodb.members.auto.discover is set to false, the host and port pair are prefixed with the replica set name (e.g., rs0/localhost:27017). | -| `mongodb.name` | true | null | A unique name that identifies the connector and/or MongoDB replica set or sharded cluster that this connector monitors. Each server should be monitored by at most one Debezium connector, since this server name prefixes all persisted Kafka topics emanating from the MongoDB replica set or cluster. | -| `mongodb.user` | true | null | Name of the database user to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. | -| `mongodb.password` | true | null | Password to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. | -| `mongodb.task.id` | true | null | The taskId of the MongoDB connector that attempts to use a separate task for each replica set. | - - - -## Example of MySQL - -You need to create a configuration file before using the Pulsar Debezium connector. - -### Configuration - -You can use one of the following methods to create a configuration file. 
- -* JSON - - ```json - - { - "configs": { - "database.hostname": "localhost", - "database.port": "3306", - "database.user": "debezium", - "database.password": "dbz", - "database.server.id": "184054", - "database.server.name": "dbserver1", - "database.whitelist": "inventory", - "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory", - "database.history.pulsar.topic": "history-topic", - "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650", - "key.converter": "org.apache.kafka.connect.json.JsonConverter", - "value.converter": "org.apache.kafka.connect.json.JsonConverter", - "offset.storage.topic": "offset-topic" - } - } - - ``` - -* YAML - - You can create a `debezium-mysql-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/resources/debezium-mysql-source-config.yaml) below to the `debezium-mysql-source-config.yaml` file. - - ```yaml - - tenant: "public" - namespace: "default" - name: "debezium-mysql-source" - topicName: "debezium-mysql-topic" - archive: "connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar" - parallelism: 1 - - configs: - - ## config for mysql, docker image: debezium/example-mysql:0.8 - database.hostname: "localhost" - database.port: "3306" - database.user: "debezium" - database.password: "dbz" - database.server.id: "184054" - database.server.name: "dbserver1" - database.whitelist: "inventory" - database.history: "org.apache.pulsar.io.debezium.PulsarDatabaseHistory" - database.history.pulsar.topic: "history-topic" - database.history.pulsar.service.url: "pulsar://127.0.0.1:6650" - - ## KEY_CONVERTER_CLASS_CONFIG, VALUE_CONVERTER_CLASS_CONFIG - key.converter: "org.apache.kafka.connect.json.JsonConverter" - value.converter: "org.apache.kafka.connect.json.JsonConverter" - - ## OFFSET_STORAGE_TOPIC_CONFIG - offset.storage.topic: "offset-topic" - - ``` - -### Usage - -This example shows how to change the data of a MySQL table using the Pulsar Debezium connector. - -1. Start a MySQL server with a database from which Debezium can capture changes. - - ```bash - - $ docker run -it --rm \ - --name mysql \ - -p 3306:3306 \ - -e MYSQL_ROOT_PASSWORD=debezium \ - -e MYSQL_USER=mysqluser \ - -e MYSQL_PASSWORD=mysqlpw debezium/example-mysql:0.8 - - ``` - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - -3. Start the Pulsar Debezium connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - Make sure the nar file is available at `connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar`. 
- - ```bash - - $ bin/pulsar-admin source localrun \ - --archive connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar \ - --name debezium-mysql-source --destination-topic-name debezium-mysql-topic \ - --tenant public \ - --namespace default \ - --source-config '{"database.hostname": "localhost","database.port": "3306","database.user": "debezium","database.password": "dbz","database.server.id": "184054","database.server.name": "dbserver1","database.whitelist": "inventory","database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory","database.history.pulsar.topic": "history-topic","database.history.pulsar.service.url": "pulsar://127.0.0.1:6650","key.converter": "org.apache.kafka.connect.json.JsonConverter","value.converter": "org.apache.kafka.connect.json.JsonConverter","pulsar.service.url": "pulsar://127.0.0.1:6650","offset.storage.topic": "offset-topic"}' - - ``` - - :::note - - Currently, the destination topic (specified by the `destination-topic-name` option ) is a required configuration but it is not used for the Debezium connector to save data. The Debezium connector saves data in the following 4 types of topics: - - - One topic named with the database server name ( `database.server.name`) for storing the database metadata messages, such as `public/default/database.server.name`. - - One topic (`database.history.pulsar.topic`) for storing the database history information. The connector writes and recovers DDL statements on this topic. - - One topic (`offset.storage.topic`) for storing the offset metadata messages. The connector saves the last successfully-committed offsets on this topic. - - One per-table topic. The connector writes change events for all operations that occur in a table to a single Pulsar topic that is specific to that table. - - If the automatic topic creation is disabled on your broker, you need to manually create the above 4 types of topics and the destination topic. - - ::: - - * Use the **YAML** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin source localrun \ - --source-config-file debezium-mysql-source-config.yaml - - ``` - -4. Subscribe the topic _sub-products_ for the table _inventory.products_. - - ```bash - - $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0 - - ``` - -5. Start a MySQL client in docker. - - ```bash - - $ docker run -it --rm \ - --name mysqlterm \ - --link mysql \ - --rm mysql:5.7 sh \ - -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"' - - ``` - -6. A MySQL client pops out. - - Use the following commands to change the data of the table _products_. - - ``` - - mysql> use inventory; - mysql> show tables; - mysql> SELECT * FROM products; - mysql> UPDATE products SET name='1111111111' WHERE id=101; - mysql> UPDATE products SET name='1111111111' WHERE id=107; - - ``` - - In the terminal window of subscribing topic, you can find the data changes have been kept in the _sub-products_ topic. - -## Example of PostgreSQL - -You need to create a configuration file before using the Pulsar Debezium connector. - -### Configuration - -You can use one of the following methods to create a configuration file. 
- -* JSON - - ```json - - { - "database.hostname": "localhost", - "database.port": "5432", - "database.user": "postgres", - "database.password": "changeme", - "database.dbname": "postgres", - "database.server.name": "dbserver1", - "plugin.name": "pgoutput", - "schema.whitelist": "public", - "table.whitelist": "public.users", - "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650" - } - - ``` - -* YAML - - You can create a `debezium-postgres-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/resources/debezium-postgres-source-config.yaml) below to the `debezium-postgres-source-config.yaml` file. - - ```yaml - - tenant: "public" - namespace: "default" - name: "debezium-postgres-source" - topicName: "debezium-postgres-topic" - archive: "connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar" - parallelism: 1 - - configs: - - ## config for postgres version 10+, official docker image: postgres:<10+> - database.hostname: "localhost" - database.port: "5432" - database.user: "postgres" - database.password: "changeme" - database.dbname: "postgres" - database.server.name: "dbserver1" - plugin.name: "pgoutput" - schema.whitelist: "public" - table.whitelist: "public.users" - - ## PULSAR_SERVICE_URL_CONFIG - database.history.pulsar.service.url: "pulsar://127.0.0.1:6650" - - ``` - -Notice that `pgoutput` is a standard plugin of Postgres introduced in version 10 - [see Postgres architecture docu](https://www.postgresql.org/docs/10/logical-replication-architecture.html). You don't need to install anything, just make sure the WAL level is set to `logical` (see docker command below and [Postgres docu](https://www.postgresql.org/docs/current/runtime-config-wal.html)). - -### Usage - -This example shows how to change the data of a PostgreSQL table using the Pulsar Debezium connector. - - -1. Start a PostgreSQL server with a database from which Debezium can capture changes. - - ```bash - - $ docker run -d -it --rm \ - --name pulsar-postgres \ - -p 5432:5432 \ - -e POSTGRES_PASSWORD=changeme \ - postgres:13.3 -c wal_level=logical - - ``` - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - -3. Start the Pulsar Debezium connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - Make sure the nar file is available at `connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar`. - - ```bash - - $ bin/pulsar-admin source localrun \ - --archive connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar \ - --name debezium-postgres-source \ - --destination-topic-name debezium-postgres-topic \ - --tenant public \ - --namespace default \ - --source-config '{"database.hostname": "localhost","database.port": "5432","database.user": "postgres","database.password": "changeme","database.dbname": "postgres","database.server.name": "dbserver1","schema.whitelist": "public","table.whitelist": "public.users","pulsar.service.url": "pulsar://127.0.0.1:6650"}' - - ``` - - :::note - - Currently, the destination topic (specified by the `destination-topic-name` option ) is a required configuration but it is not used for the Debezium connector to save data. The Debezium connector saves data in the following 4 types of topics: - - - One topic named with the database server name ( `database.server.name`) for storing the database metadata messages, such as `public/default/database.server.name`. 
- - One topic (`database.history.pulsar.topic`) for storing the database history information. The connector writes and recovers DDL statements on this topic. - - One topic (`offset.storage.topic`) for storing the offset metadata messages. The connector saves the last successfully-committed offsets on this topic. - - One per-table topic. The connector writes change events for all operations that occur in a table to a single Pulsar topic that is specific to that table. - - If the automatic topic creation is disabled on your broker, you need to manually create the above 4 types of topics and the destination topic. - - ::: - - * Use the **YAML** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin source localrun \ - --source-config-file debezium-postgres-source-config.yaml - - ``` - -4. Subscribe the topic _sub-users_ for the _public.users_ table. - - ``` - - $ bin/pulsar-client consume -s "sub-users" public/default/dbserver1.public.users -n 0 - - ``` - -5. Start a PostgreSQL client in docker. - - ```bash - - $ docker exec -it pulsar-postgresql /bin/bash - - ``` - -6. A PostgreSQL client pops out. - - Use the following commands to create sample data in the table _users_. - - ``` - - psql -U postgres -h localhost -p 5432 - Password for user postgres: - - CREATE TABLE users( - id BIGINT GENERATED ALWAYS AS IDENTITY, PRIMARY KEY(id), - hash_firstname TEXT NOT NULL, - hash_lastname TEXT NOT NULL, - gender VARCHAR(6) NOT NULL CHECK (gender IN ('male', 'female')) - ); - - INSERT INTO users(hash_firstname, hash_lastname, gender) - SELECT md5(RANDOM()::TEXT), md5(RANDOM()::TEXT), CASE WHEN RANDOM() < 0.5 THEN 'male' ELSE 'female' END FROM generate_series(1, 100); - - postgres=# select * from users; - - id | hash_firstname | hash_lastname | gender - -------+----------------------------------+----------------------------------+-------- - 1 | 02bf7880eb489edc624ba637f5ab42bd | 3e742c2cc4217d8e3382cc251415b2fb | female - 2 | dd07064326bb9119189032316158f064 | 9c0e938f9eddbd5200ba348965afbc61 | male - 3 | 2c5316fdd9d6595c1cceb70eed12e80c | 8a93d7d8f9d76acfaaa625c82a03ea8b | female - 4 | 3dfa3b4f70d8cd2155567210e5043d2b | 32c156bc28f7f03ab5d28e2588a3dc19 | female - - - postgres=# UPDATE users SET hash_firstname='maxim' WHERE id=1; - UPDATE 1 - - ``` - - In the terminal window of subscribing topic, you can receive the following messages. - - ```bash - - ----- got message ----- - {"before":null,"after":{"id":1,"hash_firstname":"maxim","hash_lastname":"292113d30a3ccee0e19733dd7f88b258","gender":"male"},"source:{"version":"1.0.0.Final","connector":"postgresql","name":"foobar","ts_ms":1624045862644,"snapshot":"false","db":"postgres","schema":"public","table":"users","txId":595,"lsn":24419784,"xmin":null},"op":"u","ts_ms":1624045862648} - ...many more - - ``` - -## Example of MongoDB - -You need to create a configuration file before using the Pulsar Debezium connector. - -### Configuration - -You can use one of the following methods to create a configuration file. 
- -* JSON - - ```json - - { - "mongodb.hosts": "rs0/mongodb:27017", - "mongodb.name": "dbserver1", - "mongodb.user": "debezium", - "mongodb.password": "dbz", - "mongodb.task.id": "1", - "database.whitelist": "inventory", - "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650" - } - - ``` - -* YAML - - You can create a `debezium-mongodb-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mongodb/src/main/resources/debezium-mongodb-source-config.yaml) below to the `debezium-mongodb-source-config.yaml` file. - - ```yaml - - tenant: "public" - namespace: "default" - name: "debezium-mongodb-source" - topicName: "debezium-mongodb-topic" - archive: "connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar" - parallelism: 1 - - configs: - - ## config for pg, docker image: debezium/example-mongodb:0.10 - mongodb.hosts: "rs0/mongodb:27017" - mongodb.name: "dbserver1" - mongodb.user: "debezium" - mongodb.password: "dbz" - mongodb.task.id: "1" - database.whitelist: "inventory" - database.history.pulsar.service.url: "pulsar://127.0.0.1:6650" - - ``` - -### Usage - -This example shows how to change the data of a MongoDB table using the Pulsar Debezium connector. - - -1. Start a MongoDB server with a database from which Debezium can capture changes. - - ```bash - - $ docker pull debezium/example-mongodb:0.10 - $ docker run -d -it --rm --name pulsar-mongodb -e MONGODB_USER=mongodb -e MONGODB_PASSWORD=mongodb -p 27017:27017 debezium/example-mongodb:0.10 - - ``` - - Use the following commands to initialize the data. - - ``` bash - - ./usr/local/bin/init-inventory.sh - - ``` - - If the local host cannot access the container network, you can update the file ```/etc/hosts``` and add a rule ```127.0.0.1 6 f114527a95f```. f114527a95f is container id, you can try to get by ```docker ps -a``` - - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - -3. Start the Pulsar Debezium connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - Make sure the nar file is available at `connectors/pulsar-io-mongodb-@pulsar:version@.nar`. - - ```bash - - $ bin/pulsar-admin source localrun \ - --archive connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar \ - --name debezium-mongodb-source \ - --destination-topic-name debezium-mongodb-topic \ - --tenant public \ - --namespace default \ - --source-config '{"mongodb.hosts": "rs0/mongodb:27017","mongodb.name": "dbserver1","mongodb.user": "debezium","mongodb.password": "dbz","mongodb.task.id": "1","database.whitelist": "inventory","database.history.pulsar.service.url": "pulsar://127.0.0.1:6650"}' - - ``` - - :::note - - Currently, the destination topic (specified by the `destination-topic-name` option ) is a required configuration but it is not used for the Debezium connector to save data. The Debezium connector saves data in the following 4 types of topics: - - - One topic named with the database server name ( `database.server.name`) for storing the database metadata messages, such as `public/default/database.server.name`. - - One topic (`database.history.pulsar.topic`) for storing the database history information. The connector writes and recovers DDL statements on this topic. - - One topic (`offset.storage.topic`) for storing the offset metadata messages. The connector saves the last successfully-committed offsets on this topic. - - One per-table topic. 
The connector writes change events for all operations that occur in a table to a single Pulsar topic that is specific to that table. - - If the automatic topic creation is disabled on your broker, you need to manually create the above 4 types of topics and the destination topic. - - ::: - - * Use the **YAML** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin source localrun \ - --source-config-file debezium-mongodb-source-config.yaml - - ``` - -4. Subscribe the topic _sub-products_ for the _inventory.products_ table. - - ``` - - $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0 - - ``` - -5. Start a MongoDB client in docker. - - ```bash - - $ docker exec -it pulsar-mongodb /bin/bash - - ``` - -6. A MongoDB client pops out. - - ```bash - - mongo -u debezium -p dbz --authenticationDatabase admin localhost:27017/inventory - db.products.update({"_id":NumberLong(104)},{$set:{weight:1.25}}) - - ``` - - In the terminal window of subscribing topic, you can receive the following messages. - - ```bash - - ----- got message ----- - {"schema":{"type":"struct","fields":[{"type":"string","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":"104"}}, value = {"schema":{"type":"struct","fields":[{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"after"},{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"patch"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"version"},{"type":"string","optional":false,"field":"connector"},{"type":"string","optional":false,"field":"name"},{"type":"int64","optional":false,"field":"ts_ms"},{"type":"string","optional":true,"name":"io.debezium.data.Enum","version":1,"parameters":{"allowed":"true,last,false"},"default":"false","field":"snapshot"},{"type":"string","optional":false,"field":"db"},{"type":"string","optional":false,"field":"rs"},{"type":"string","optional":false,"field":"collection"},{"type":"int32","optional":false,"field":"ord"},{"type":"int64","optional":true,"field":"h"}],"optional":false,"name":"io.debezium.connector.mongo.Source","field":"source"},{"type":"string","optional":true,"field":"op"},{"type":"int64","optional":true,"field":"ts_ms"}],"optional":false,"name":"dbserver1.inventory.products.Envelope"},"payload":{"after":"{\"_id\": {\"$numberLong\": \"104\"},\"name\": \"hammer\",\"description\": \"12oz carpenter's hammer\",\"weight\": 1.25,\"quantity\": 4}","patch":null,"source":{"version":"0.10.0.Final","connector":"mongodb","name":"dbserver1","ts_ms":1573541905000,"snapshot":"true","db":"inventory","rs":"rs0","collection":"products","ord":1,"h":4983083486544392763},"op":"r","ts_ms":1573541909761}}. - - ``` - -## Example of Oracle - -### Packaging - -Oracle connector does not include Oracle JDBC driver and you need to package it with the connector. -Major reasons for not including the drivers are the variety of versions and Oracle licensing. It is recommended to use the driver provided with your Oracle DB installation, or you can [download](https://www.oracle.com/database/technologies/appdev/jdbc.html) one. -Integration test have an [example](https://github.com/apache/pulsar/blob/e2bc52d40450fa00af258c4432a5b71d50a5c6e0/tests/docker-images/latest-version-image/Dockerfile#L110-L122) of packaging the driver into the connector nar file. 
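A NAR file is a ZIP archive whose bundled jars live under `META-INF/bundled-dependencies`, so one way to repackage it is to add the driver jar at that path. The sketch below is an assumption-laden example rather than the project's official procedure: it presumes a downloaded `ojdbc8.jar` and the Debezium Oracle connector NAR in the `connectors` directory; adjust both names to your setup.

```bash
# Place the Oracle JDBC driver at the path the NAR class loader scans,
# then append it to the existing NAR (a NAR is a plain ZIP archive).
mkdir -p META-INF/bundled-dependencies
cp ojdbc8.jar META-INF/bundled-dependencies/
zip connectors/pulsar-io-debezium-oracle-@pulsar:version@.nar META-INF/bundled-dependencies/ojdbc8.jar
```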

### Configuration

Debezium [requires](https://debezium.io/documentation/reference/1.5/connectors/oracle.html#oracle-overview) Oracle DB with the LogMiner or XStream API enabled.
The supported options and the steps for enabling them vary from version to version of Oracle DB.
The steps outlined in the [documentation](https://debezium.io/documentation/reference/1.5/connectors/oracle.html#oracle-overview) and used in the [integration test](https://github.com/apache/pulsar/blob/master/tests/integration/src/test/java/org/apache/pulsar/tests/integration/io/sources/debezium/DebeziumOracleDbSourceTester.java) may or may not work for the version and edition of Oracle DB you are using.
Refer to the [documentation for Oracle DB](https://docs.oracle.com/en/database/oracle/oracle-database/) as needed.

As with other connectors, you can use JSON or YAML to configure the connector. For example, you can create a `debezium-oracle-source-config.yaml` file like the YAML example below.

* JSON

```json

{
    "database.hostname": "localhost",
    "database.port": "1521",
    "database.user": "dbzuser",
    "database.password": "dbz",
    "database.dbname": "XE",
    "database.server.name": "XE",
    "schema.exclude.list": "system,dbzuser",
    "snapshot.mode": "initial",
    "topic.namespace": "public/default",
    "task.class": "io.debezium.connector.oracle.OracleConnectorTask",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "typeClassName": "org.apache.pulsar.common.schema.KeyValue",
    "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory",
    "database.tcpKeepAlive": "true",
    "decimal.handling.mode": "double",
    "database.history.pulsar.topic": "debezium-oracle-source-history-topic",
    "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650"
}

```

* YAML

```yaml

tenant: "public"
namespace: "default"
name: "debezium-oracle-source"
topicName: "debezium-oracle-topic"
parallelism: 1

className: "org.apache.pulsar.io.debezium.oracle.DebeziumOracleSource"

configs:
    database.hostname: "localhost"
    database.port: "1521"
    database.user: "dbzuser"
    database.password: "dbz"
    database.dbname: "XE"
    database.server.name: "XE"
    schema.exclude.list: "system,dbzuser"
    snapshot.mode: "initial"
    topic.namespace: "public/default"
    task.class: "io.debezium.connector.oracle.OracleConnectorTask"
    value.converter: "org.apache.kafka.connect.json.JsonConverter"
    key.converter: "org.apache.kafka.connect.json.JsonConverter"
    typeClassName: "org.apache.pulsar.common.schema.KeyValue"
    database.history: "org.apache.pulsar.io.debezium.PulsarDatabaseHistory"
    database.tcpKeepAlive: "true"
    decimal.handling.mode: "double"
    database.history.pulsar.topic: "debezium-oracle-source-history-topic"
    database.history.pulsar.service.url: "pulsar://127.0.0.1:6650"

```

For the full list of configuration properties supported by Debezium, see [Debezium Connector for Oracle](https://debezium.io/documentation/reference/1.5/connectors/oracle.html#oracle-connector-properties).

## Example of Microsoft SQL

### Configuration

Debezium [requires](https://debezium.io/documentation/reference/1.5/connectors/sqlserver.html#sqlserver-overview) SQL Server with CDC enabled.
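As a concrete illustration, CDC is typically enabled per database and per table with the standard `sys.sp_cdc_enable_db` and `sys.sp_cdc_enable_table` stored procedures. The sketch below is a hedged example only: the database name, table name, and credentials are illustrative assumptions matching the configuration shown later, not values from the original guide.

```bash
# A minimal sketch: enable CDC on an example database and table via sqlcmd.
# MyTestDB, dbo.customers, and the credentials are hypothetical values.
sqlcmd -S localhost -U sa -P 'MyP@ssw0rd!' -d MyTestDB \
  -Q "EXEC sys.sp_cdc_enable_db;
      EXEC sys.sp_cdc_enable_table
           @source_schema = N'dbo',
           @source_name   = N'customers',
           @role_name     = NULL;"
```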
Follow the steps outlined in the [documentation](https://debezium.io/documentation/reference/1.5/connectors/sqlserver.html#setting-up-sqlserver) and used in the [integration test](https://github.com/apache/pulsar/blob/master/tests/integration/src/test/java/org/apache/pulsar/tests/integration/io/sources/debezium/DebeziumMsSqlSourceTester.java).
For more information, see [Enable and disable change data capture in Microsoft SQL Server](https://docs.microsoft.com/en-us/sql/relational-databases/track-changes/enable-and-disable-change-data-capture-sql-server).

As with other connectors, you can use JSON or YAML to configure the connector.

* JSON

```json

{
    "database.hostname": "localhost",
    "database.port": "1433",
    "database.user": "sa",
    "database.password": "MyP@ssw0rd!",
    "database.dbname": "MyTestDB",
    "database.server.name": "mssql",
    "snapshot.mode": "schema_only",
    "topic.namespace": "public/default",
    "task.class": "io.debezium.connector.sqlserver.SqlServerConnectorTask",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "typeClassName": "org.apache.pulsar.common.schema.KeyValue",
    "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory",
    "database.tcpKeepAlive": "true",
    "decimal.handling.mode": "double",
    "database.history.pulsar.topic": "debezium-mssql-source-history-topic",
    "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650"
}

```

* YAML

```yaml

tenant: "public"
namespace: "default"
name: "debezium-mssql-source"
topicName: "debezium-mssql-topic"
parallelism: 1

className: "org.apache.pulsar.io.debezium.mssql.DebeziumMsSqlSource"

configs:
    database.hostname: "localhost"
    database.port: "1433"
    database.user: "sa"
    database.password: "MyP@ssw0rd!"
    database.dbname: "MyTestDB"
    database.server.name: "mssql"
    snapshot.mode: "schema_only"
    topic.namespace: "public/default"
    task.class: "io.debezium.connector.sqlserver.SqlServerConnectorTask"
    value.converter: "org.apache.kafka.connect.json.JsonConverter"
    key.converter: "org.apache.kafka.connect.json.JsonConverter"
    typeClassName: "org.apache.pulsar.common.schema.KeyValue"
    database.history: "org.apache.pulsar.io.debezium.PulsarDatabaseHistory"
    database.tcpKeepAlive: "true"
    decimal.handling.mode: "double"
    database.history.pulsar.topic: "debezium-mssql-source-history-topic"
    database.history.pulsar.service.url: "pulsar://127.0.0.1:6650"

```

For the full list of configuration properties supported by Debezium, see [Debezium Connector for MS SQL](https://debezium.io/documentation/reference/1.5/connectors/sqlserver.html#sqlserver-connector-properties).
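As with the MongoDB example earlier on this page, you would typically start the connector in local run mode, pointing it at the configuration file above. The command below is a sketch: the archive path follows the naming convention of the other Debezium connectors, and the configuration file name `debezium-mssql-source-config.yaml` is an assumption.

```bash
# A sketch mirroring the MongoDB localrun example above; the archive path and
# the configuration file name are assumptions, not verified values.
bin/pulsar-admin source localrun \
  --archive connectors/pulsar-io-debezium-mssql-@pulsar:version@.nar \
  --name debezium-mssql-source \
  --destination-topic-name debezium-mssql-topic \
  --tenant public \
  --namespace default \
  --source-config-file debezium-mssql-source-config.yaml
```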

## FAQ

### The Debezium PostgreSQL connector hangs when creating a snapshot

```

#18 prio=5 os_prio=31 tid=0x00007fd83096f800 nid=0xa403 waiting on condition [0x000070000f534000]
   java.lang.Thread.State: WAITING (parking)
    at sun.misc.Unsafe.park(Native Method)
    - parking to wait for  <0x00000007ab025a58> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.LinkedBlockingDeque.putLast(LinkedBlockingDeque.java:396)
    at java.util.concurrent.LinkedBlockingDeque.put(LinkedBlockingDeque.java:649)
    at io.debezium.connector.base.ChangeEventQueue.enqueue(ChangeEventQueue.java:132)
    at io.debezium.connector.postgresql.PostgresConnectorTask$Lambda$203/385424085.accept(Unknown Source)
    at io.debezium.connector.postgresql.RecordsSnapshotProducer.sendCurrentRecord(RecordsSnapshotProducer.java:402)
    at io.debezium.connector.postgresql.RecordsSnapshotProducer.readTable(RecordsSnapshotProducer.java:321)
    at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$takeSnapshot$6(RecordsSnapshotProducer.java:226)
    at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$240/1347039967.accept(Unknown Source)
    at io.debezium.jdbc.JdbcConnection.queryWithBlockingConsumer(JdbcConnection.java:535)
    at io.debezium.connector.postgresql.RecordsSnapshotProducer.takeSnapshot(RecordsSnapshotProducer.java:224)
    at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$start$0(RecordsSnapshotProducer.java:87)
    at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$206/589332928.run(Unknown Source)
    at java.util.concurrent.CompletableFuture.uniRun(CompletableFuture.java:705)
    at java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:717)
    at java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2010)
    at io.debezium.connector.postgresql.RecordsSnapshotProducer.start(RecordsSnapshotProducer.java:87)
    at io.debezium.connector.postgresql.PostgresConnectorTask.start(PostgresConnectorTask.java:126)
    at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:47)
    at org.apache.pulsar.io.kafka.connect.KafkaConnectSource.open(KafkaConnectSource.java:127)
    at org.apache.pulsar.io.debezium.DebeziumSource.open(DebeziumSource.java:100)
    at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupInput(JavaInstanceRunnable.java:690)
    at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupJavaInstance(JavaInstanceRunnable.java:200)
    at org.apache.pulsar.functions.instance.JavaInstanceRunnable.run(JavaInstanceRunnable.java:230)
    at java.lang.Thread.run(Thread.java:748)

```

If you encounter this problem when synchronizing data, refer to [this issue](https://github.com/apache/pulsar/issues/4075) and add the following property to the configuration file, setting a value that suits your workload:

```

max.queue.size=

```

diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/io-debug.md b/site2/website/versioned_docs/version-2.10.0-deprecated/io-debug.md
deleted file mode 100644
index 890d5f692f7b16..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/io-debug.md
+++ /dev/null
@@ -1,407 +0,0 @@
---
id: io-debug
title: How to debug Pulsar connectors
sidebar_label: "Debug"
original_id: io-debug
---
This guide explains how to debug connectors in
localrun or cluster mode and gives a debugging checklist.
To demonstrate how to debug Pulsar connectors, this guide takes a Mongo sink connector as an example.

**Deploy a Mongo sink environment**
1. Start a Mongo service.

   ```bash

   docker pull mongo:4
   docker run -d -p 27017:27017 --name pulsar-mongo -v $PWD/data:/data/db mongo:4

   ```

2. Create a DB and a collection.

   ```bash

   docker exec -it pulsar-mongo /bin/bash
   mongo
   > use pulsar
   > db.createCollection('messages')
   > exit

   ```

3. Start Pulsar standalone.

   ```bash

   docker pull apachepulsar/pulsar:2.4.0
   docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --link pulsar-mongo --name pulsar-mongo-standalone apachepulsar/pulsar:2.4.0 bin/pulsar standalone

   ```

4. Configure the Mongo sink with the `mongo-sink-config.yaml` file.

   ```yaml

   configs:
    mongoUri: "mongodb://pulsar-mongo:27017"
    database: "pulsar"
    collection: "messages"
    batchSize: 2
    batchTimeMs: 500

   ```

   ```bash

   docker cp mongo-sink-config.yaml pulsar-mongo-standalone:/pulsar/

   ```

5. Download the Mongo sink NAR package.

   ```bash

   docker exec -it pulsar-mongo-standalone /bin/bash
   curl -O http://apache.01link.hk/pulsar/pulsar-2.4.0/connectors/pulsar-io-mongo-2.4.0.nar

   ```

## Debug in localrun mode
Start the Mongo sink in localrun mode using the `localrun` command.
:::tip

For more information about the `localrun` command, see [`localrun`](reference-connector-admin.md/#localrun-1).

:::

```bash

./bin/pulsar-admin sinks localrun \
--archive connectors/pulsar-io-mongo-@pulsar:version@.nar \
--tenant public --namespace default \
--inputs test-mongo \
--name pulsar-mongo-sink \
--sink-config-file mongo-sink-config.yaml \
--parallelism 1

```

### Use connector log
Use one of the following methods to get a connector log in localrun mode:
* After executing the `localrun` command, the **log is automatically printed on the console**.
* The log is located at:

  ```bash

  logs/functions/tenant/namespace/function-name/function-name-instance-id.log

  ```

  **Example**

  The path of the Mongo sink connector is:

  ```bash

  logs/functions/public/default/pulsar-mongo-sink/pulsar-mongo-sink-0.log

  ```

To explain the log clearly, the large block of log output is broken down into smaller blocks below, with a description for each block.
* This piece of log information shows the storage path of the NAR package after decompression.

  ```

  08:21:54.132 [main] INFO org.apache.pulsar.common.nar.NarClassLoader - Created class loader with paths: [file:/tmp/pulsar-nar/pulsar-io-mongo-2.4.0.nar-unpacked/, file:/tmp/pulsar-nar/pulsar-io-mongo-2.4.0.nar-unpacked/META-INF/bundled-dependencies/,

  ```

  :::tip

  If a "class cannot be found" exception is thrown, check whether the NAR file has been decompressed into the folder `file:/tmp/pulsar-nar/pulsar-io-mongo-2.4.0.nar-unpacked/META-INF/bundled-dependencies/`.

  :::

* This piece of log information illustrates the basic information about the Mongo sink connector, such as tenant, namespace, name, parallelism, and resources, which can be used to **check whether the Mongo sink connector is configured correctly**.
- - ```bash - - 08:21:55.390 [main] INFO org.apache.pulsar.functions.runtime.ThreadRuntime - ThreadContainer starting function with instance config InstanceConfig(instanceId=0, functionId=853d60a1-0c48-44d5-9a5c-6917386476b2, functionVersion=c2ce1458-b69e-4175-88c0-a0a856a2be8c, functionDetails=tenant: "public" - namespace: "default" - name: "pulsar-mongo-sink" - className: "org.apache.pulsar.functions.api.utils.IdentityFunction" - autoAck: true - parallelism: 1 - source { - typeClassName: "[B" - inputSpecs { - key: "test-mongo" - value { - } - } - cleanupSubscription: true - } - sink { - className: "org.apache.pulsar.io.mongodb.MongoSink" - configs: "{\"mongoUri\":\"mongodb://pulsar-mongo:27017\",\"database\":\"pulsar\",\"collection\":\"messages\",\"batchSize\":2,\"batchTimeMs\":500}" - typeClassName: "[B" - } - resources { - cpu: 1.0 - ram: 1073741824 - disk: 10737418240 - } - componentType: SINK - , maxBufferedTuples=1024, functionAuthenticationSpec=null, port=38459, clusterName=local) - - ``` - -* This piece of log information demonstrates the status of the connections to Mongo and configuration information. - - ```bash - - 08:21:56.231 [cluster-ClusterId{value='5d6396a3c9e77c0569ff00eb', description='null'}-pulsar-mongo:27017] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:1, serverValue:8}] to pulsar-mongo:27017 - 08:21:56.326 [cluster-ClusterId{value='5d6396a3c9e77c0569ff00eb', description='null'}-pulsar-mongo:27017] INFO org.mongodb.driver.cluster - Monitor thread successfully connected to server with description ServerDescription{address=pulsar-mongo:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[4, 2, 0]}, minWireVersion=0, maxWireVersion=8, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=89058800} - - ``` - -* This piece of log information explains the configuration of consumers and clients, including the topic name, subscription name, subscription type, and so on. 
- - ```bash - - 08:21:56.719 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl - Starting Pulsar consumer status recorder with config: { - "topicNames" : [ "test-mongo" ], - "topicsPattern" : null, - "subscriptionName" : "public/default/pulsar-mongo-sink", - "subscriptionType" : "Shared", - "receiverQueueSize" : 1000, - "acknowledgementsGroupTimeMicros" : 100000, - "negativeAckRedeliveryDelayMicros" : 60000000, - "maxTotalReceiverQueueSizeAcrossPartitions" : 50000, - "consumerName" : null, - "ackTimeoutMillis" : 0, - "tickDurationMillis" : 1000, - "priorityLevel" : 0, - "cryptoFailureAction" : "CONSUME", - "properties" : { - "application" : "pulsar-sink", - "id" : "public/default/pulsar-mongo-sink", - "instance_id" : "0" - }, - "readCompacted" : false, - "subscriptionInitialPosition" : "Latest", - "patternAutoDiscoveryPeriod" : 1, - "regexSubscriptionMode" : "PersistentOnly", - "deadLetterPolicy" : null, - "autoUpdatePartitions" : true, - "replicateSubscriptionState" : false, - "resetIncludeHead" : false - } - 08:21:56.726 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl - Pulsar client config: { - "serviceUrl" : "pulsar://localhost:6650", - "authPluginClassName" : null, - "authParams" : null, - "operationTimeoutMs" : 30000, - "statsIntervalSeconds" : 60, - "numIoThreads" : 1, - "numListenerThreads" : 1, - "connectionsPerBroker" : 1, - "useTcpNoDelay" : true, - "useTls" : false, - "tlsTrustCertsFilePath" : null, - "tlsAllowInsecureConnection" : false, - "tlsHostnameVerificationEnable" : false, - "concurrentLookupRequest" : 5000, - "maxLookupRequest" : 50000, - "maxNumberOfRejectedRequestPerConnection" : 50, - "keepAliveIntervalSeconds" : 30, - "connectionTimeoutMs" : 10000, - "requestTimeoutMs" : 60000, - "defaultBackoffIntervalNanos" : 100000000, - "maxBackoffIntervalNanos" : 30000000000 - } - - ``` - -## Debug in cluster mode -You can use the following methods to debug a connector in cluster mode: -* [Use connector log](#use-connector-log) -* [Use admin CLI](#use-admin-cli) -### Use connector log -In cluster mode, multiple connectors can run on a worker. To find the log path of a specified connector, use the `workerId` to locate the connector log. -### Use admin CLI -Pulsar admin CLI helps you debug Pulsar connectors with the following subcommands: -* [`get`](#get) - -* [`status`](#status) -* [`topics stats`](#topics-stats) - -**Create a Mongo sink** - -```bash - -./bin/pulsar-admin sinks create \ ---archive pulsar-io-mongo-2.4.0.nar \ ---tenant public \ ---namespace default \ ---inputs test-mongo \ ---name pulsar-mongo-sink \ ---sink-config-file mongo-sink-config.yaml \ ---parallelism 1 - -``` - -### `get` -Use the `get` command to get the basic information about the Mongo sink connector, such as tenant, namespace, name, parallelism, and so on. 

```bash

./bin/pulsar-admin sinks get --tenant public --namespace default --name pulsar-mongo-sink
{
  "tenant": "public",
  "namespace": "default",
  "name": "pulsar-mongo-sink",
  "className": "org.apache.pulsar.io.mongodb.MongoSink",
  "inputSpecs": {
    "test-mongo": {
      "isRegexPattern": false
    }
  },
  "configs": {
    "mongoUri": "mongodb://pulsar-mongo:27017",
    "database": "pulsar",
    "collection": "messages",
    "batchSize": 2.0,
    "batchTimeMs": 500.0
  },
  "parallelism": 1,
  "processingGuarantees": "ATLEAST_ONCE",
  "retainOrdering": false,
  "autoAck": true
}

```

:::tip

For more information about the `get` command, see [`get`](reference-connector-admin.md/#get-1).

:::

### `status`
Use the `status` command to get the current status of the Mongo sink connector, such as the number of instances, the number of running instances, the `instanceId`, the `workerId`, and so on.

```bash

./bin/pulsar-admin sinks status \
--tenant public \
--namespace default \
--name pulsar-mongo-sink
{
"numInstances" : 1,
"numRunning" : 1,
"instances" : [ {
  "instanceId" : 0,
  "status" : {
    "running" : true,
    "error" : "",
    "numRestarts" : 0,
    "numReadFromPulsar" : 0,
    "numSystemExceptions" : 0,
    "latestSystemExceptions" : [ ],
    "numSinkExceptions" : 0,
    "latestSinkExceptions" : [ ],
    "numWrittenToSink" : 0,
    "lastReceivedTime" : 0,
    "workerId" : "c-standalone-fw-5d202832fd18-8080"
  }
} ]
}

```

:::tip

For more information about the `status` command, see [`status`](reference-connector-admin.md/#stauts-1).
If there are multiple connectors running on a worker, the `workerId` can locate the worker on which the specified connector is running.

:::

### `topics stats`
Use the `topics stats` command to get the stats for a topic and its connected producers and consumers, such as whether the topic has received messages, whether there is a backlog of messages, the available permits, and other key information. All rates are computed over a 1-minute window and are relative to the last completed 1-minute period.

```bash

./bin/pulsar-admin topics stats test-mongo
{
  "msgRateIn" : 0.0,
  "msgThroughputIn" : 0.0,
  "msgRateOut" : 0.0,
  "msgThroughputOut" : 0.0,
  "averageMsgSize" : 0.0,
  "storageSize" : 1,
  "publishers" : [ ],
  "subscriptions" : {
    "public/default/pulsar-mongo-sink" : {
      "msgRateOut" : 0.0,
      "msgThroughputOut" : 0.0,
      "msgRateRedeliver" : 0.0,
      "msgBacklog" : 0,
      "blockedSubscriptionOnUnackedMsgs" : false,
      "msgDelayed" : 0,
      "unackedMessages" : 0,
      "type" : "Shared",
      "msgRateExpired" : 0.0,
      "consumers" : [ {
        "msgRateOut" : 0.0,
        "msgThroughputOut" : 0.0,
        "msgRateRedeliver" : 0.0,
        "consumerName" : "dffdd",
        "availablePermits" : 999,
        "unackedMessages" : 0,
        "blockedConsumerOnUnackedMsgs" : false,
        "metadata" : {
          "instance_id" : "0",
          "application" : "pulsar-sink",
          "id" : "public/default/pulsar-mongo-sink"
        },
        "connectedSince" : "2019-08-26T08:48:07.582Z",
        "clientVersion" : "2.4.0",
        "address" : "/172.17.0.3:57790"
      } ],
      "isReplicated" : false
    }
  },
  "replication" : { },
  "deduplicationStatus" : "Disabled"
}

```

:::tip

For more information about the `topics stats` command, see [`topic stats`](/tools/pulsar-admin/).

:::

## Checklist
This checklist indicates the major areas to check when you debug connectors. It is a reminder of what to look for to ensure a thorough review, and an evaluation tool to get the status of connectors.
* Does Pulsar start successfully?

* Does the external service run normally?

* Is the NAR package complete?

* Is the connector configuration file correct?

* In localrun mode, run a connector and check the printed information (connector log) on the console.

* In cluster mode:

  * Use the `get` command to get the basic information.
  * Use the `status` command to get the current status.
  * Use the `topics stats` command to get the stats for a specified topic and its connected producers and consumers.
  * Check the connector log.

* Check the external system and verify the result.
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/io-develop.md b/site2/website/versioned_docs/version-2.10.0-deprecated/io-develop.md
deleted file mode 100644
index d6f4f8261ac820..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/io-develop.md
+++ /dev/null
@@ -1,421 +0,0 @@
---
id: io-develop
title: How to develop Pulsar connectors
sidebar_label: "Develop"
original_id: io-develop
---

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````


This guide describes how to develop Pulsar connectors to move data
between Pulsar and other systems.

Pulsar connectors are special [Pulsar Functions](functions-overview.md), so creating
a Pulsar connector is similar to creating a Pulsar function.

Pulsar connectors come in two types:

| Type | Description | Example
|---|---|---
{@inject: github:Source:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java}|Import data from another system to Pulsar.|[RabbitMQ source connector](io-rabbitmq.md) imports the messages of a RabbitMQ queue to a Pulsar topic.
{@inject: github:Sink:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java}|Export data from Pulsar to another system.|[Kinesis sink connector](io-kinesis.md) exports the messages of a Pulsar topic to a Kinesis stream.

## Develop

You can develop Pulsar source connectors and sink connectors.

### Source

To develop a source connector, implement the {@inject: github:Source:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java}
interface, which means implementing the {@inject: github:open:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} method and the {@inject: github:read:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} method.

1. Implement the {@inject: github:open:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} method.

   ```java

   /**
    * Open connector with configuration
    *
    * @param config initialization config
    * @param sourceContext
    * @throws Exception IO type exceptions when opening a connector
    */
   void open(final Map<String, Object> config, SourceContext sourceContext) throws Exception;

   ```

   This method is called when the source connector is initialized.

   In this method, you can retrieve all connector-specific settings through the passed-in `config` parameter and initialize all necessary resources.

   For example, a Kafka connector can create a Kafka client in this `open` method.

   Besides, the Pulsar runtime also provides a `SourceContext` for the
   connector to access runtime resources for tasks like collecting metrics. The implementation can save the `SourceContext` for future use.

2. Implement the {@inject: github:read:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} method.

   ```java

   /**
    * Reads the next message from source.
    * If source does not have any new messages, this call should block.
    * @return next message from source. The return result should never be null
    * @throws Exception
    */
   Record<T> read() throws Exception;

   ```

   If there is nothing to return, the implementation should block rather than return `null`.

   The returned {@inject: github:Record:/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/Record.java} should encapsulate the following information, which is needed by the Pulsar IO runtime.

   * {@inject: github:Record:/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/Record.java} should provide the following variables:

     |Variable|Required|Description
     |---|---|---
     `TopicName`|No|The Pulsar topic from which the record originates.
     `Key`|No|Messages can optionally be tagged with keys. For more information, see [Routing modes](concepts-messaging.md#routing-modes).
     `Value`|Yes|Actual data of the record.
     `EventTime`|No|Event time of the record from the source.
     `PartitionId`|No|If the record originates from a partitioned source, it returns its `PartitionId`. `PartitionId` is used as part of the unique identifier by the Pulsar IO runtime to deduplicate messages and achieve the exactly-once processing guarantee.
     `RecordSequence`|No|If the record originates from a sequential source, it returns its `RecordSequence`. `RecordSequence` is used as part of the unique identifier by the Pulsar IO runtime to deduplicate messages and achieve the exactly-once processing guarantee.
     `Properties`|No|If the record carries user-defined properties, it returns those properties.
     `DestinationTopic`|No|The topic to which the message should be written.
     `Message`|No|A class that carries data sent by users. For more information, see [Message.java](https://github.com/apache/pulsar/blob/master/pulsar-client-api/src/main/java/org/apache/pulsar/client/api/Message.java).

   * {@inject: github:Record:/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/Record.java} should provide the following methods:

     Method|Description
     |---|---
     `ack`|Acknowledge that the record is fully processed.
     `fail`|Indicate that the record fails to be processed.

## Handle schema information

Pulsar IO automatically handles the schema and provides a strongly typed API based on Java generics.
If you know the schema type that you are producing, you can declare the Java class relative to that type in your source declaration.

```java

public class MySource implements Source<String> {
    public Record<String> read() {}
}

```

If you want to implement a source that works with any schema, you can go with `byte[]` (or `ByteBuffer`) and use `Schema.AUTO_PRODUCE_BYTES()`.

```java

public class MySource implements Source<byte[]> {
    public Record<byte[]> read() {

        Schema wantedSchema = ....
        Record<byte[]> myRecord = new MyRecordImplementation();
        ....
    }
    class MyRecordImplementation implements Record<byte[]> {
        public byte[] getValue() {
            return ....encoded byte[]...that represents the value
        }
        public Schema<byte[]> getSchema() {
            return Schema.AUTO_PRODUCE_BYTES(wantedSchema);
        }
    }
}

```

To handle the `KeyValue` type properly, follow these guidelines for your record implementation:
- It must implement the {@inject: github:KVRecord:/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/KVRecord.java} interface and implement `getKeySchema`, `getValueSchema`, and `getKeyValueEncodingType`
- It must return a `KeyValue` object as `Record.getValue()`
- It may return null in `Record.getSchema()`

When the Pulsar IO runtime encounters a `KVRecord`, it automatically applies the following changes:
- Sets the `KeyValueSchema` properly
- Encodes the Message Key and the Message Value according to the `KeyValueEncoding` (SEPARATED or INLINE)

:::tip

For more information about **how to create a source connector**, see {@inject: github:KafkaSource:/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSource.java}.

:::

### Sink

Developing a sink connector **is similar to** developing a source connector, that is, you need to implement the {@inject: github:Sink:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} interface, which means implementing the {@inject: github:open:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} method and the {@inject: github:write:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} method.

1. Implement the {@inject: github:open:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} method.

   ```java

   /**
    * Open connector with configuration
    *
    * @param config initialization config
    * @param sinkContext
    * @throws Exception IO type exceptions when opening a connector
    */
   void open(final Map<String, Object> config, SinkContext sinkContext) throws Exception;

   ```

2. Implement the {@inject: github:write:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} method.

   ```java

   /**
    * Write a message to Sink
    * @param record record to write to sink
    * @throws Exception
    */
   void write(Record<T> record) throws Exception;

   ```

   During the implementation, you can decide how to write the `Value` and
   the `Key` to the actual source, and leverage all the provided information such as
   `PartitionId` and `RecordSequence` to achieve different processing guarantees.

   You also need to ack records (if messages are sent successfully) or fail records (if messages fail to send).

## Handling schema information

Pulsar IO automatically handles the schema and provides a strongly typed API based on Java generics.
If you know the schema type that you are consuming, you can declare the Java class relative to that type in your sink declaration.

```java

public class MySink implements Sink<String> {
    public void write(Record<String> record) {}
}

```

If you want to implement a sink that works with any schema, you can go with the special `GenericObject` interface.

```java

public class MySink implements Sink<GenericObject> {
    public void write(Record<GenericObject> record) {
        Schema<GenericObject> schema = record.getSchema();
        GenericObject genericObject = record.getValue();
        if (genericObject != null) {
            SchemaType type = genericObject.getSchemaType();
            Object nativeObject = genericObject.getNativeObject();
            ...
        }
        ....
    }
}

```

In the case of AVRO, JSON, and Protobuf records (schemaType=AVRO,JSON,PROTOBUF_NATIVE), you can cast the
`genericObject` variable to `GenericRecord` and use the `getFields()` and `getField()` APIs.
You are able to access the native AVRO record using `genericObject.getNativeObject()`.

In the case of the KeyValue type, you can access both the schema for the key and the schema for the value using the following code.

```java

public class MySink implements Sink<GenericObject> {
    public void write(Record<GenericObject> record) {
        Schema<GenericObject> schema = record.getSchema();
        GenericObject genericObject = record.getValue();
        SchemaType type = genericObject.getSchemaType();
        Object nativeObject = genericObject.getNativeObject();
        if (type == SchemaType.KEY_VALUE) {
            KeyValue<GenericObject, GenericObject> keyValue = (KeyValue<GenericObject, GenericObject>) nativeObject;
            Object key = keyValue.getKey();
            Object value = keyValue.getValue();

            KeyValueSchema<GenericObject, GenericObject> keyValueSchema = (KeyValueSchema<GenericObject, GenericObject>) schema;
            Schema<GenericObject> keySchema = keyValueSchema.getKeySchema();
            Schema<GenericObject> valueSchema = keyValueSchema.getValueSchema();
        }
        ....
    }
}

```

## Test

Testing connectors can be challenging because Pulsar IO connectors interact with two systems
that may be difficult to mock: Pulsar and the system to which the connector is connecting.

It is recommended to write dedicated tests, as described below, that verify the connector's
functionality while mocking the external service.

### Unit test

You can create unit tests for your connector.

### Integration test

Once you have written sufficient unit tests, you can add
separate integration tests to verify end-to-end functionality.

Pulsar uses [testcontainers](https://www.testcontainers.org/) **for all integration tests**.

:::tip

For more information about **how to create integration tests for Pulsar connectors**, see {@inject: github:IntegrationTests:/tests/integration/src/test/java/org/apache/pulsar/tests/integration/io}.

:::

## Package

Once you've developed and tested your connector, you need to package it so that it can be submitted
to a [Pulsar Functions](functions-overview.md) cluster.

There are two methods to package a connector for the Pulsar Functions runtime: [NAR](#nar) and [uber JAR](#uber-jar).

:::note

If you plan to package and distribute your connector for others to use, you are obligated to
license and copyright your own code properly. Remember to add the license and copyright to
all libraries your code uses and to your distribution.

If you use the [NAR](#nar) method, the NAR plugin
automatically creates a `DEPENDENCIES` file in the generated NAR package, including the proper
licensing and copyrights of all libraries of your connector.

:::

### NAR

**NAR** stands for NiFi Archive, which is a custom packaging mechanism used by Apache NiFi to provide
a bit of Java ClassLoader isolation.

:::tip

For more information about **how NAR works**, see [here](https://medium.com/hashmapinc/nifi-nar-files-explained-14113f7796fd).

:::

Pulsar uses the same mechanism for packaging **all** [built-in connectors](io-connectors.md).

The easiest approach to package a Pulsar connector is to create a NAR package using [nifi-nar-maven-plugin](https://mvnrepository.com/artifact/org.apache.nifi/nifi-nar-maven-plugin).

Include this [nifi-nar-maven-plugin](https://mvnrepository.com/artifact/org.apache.nifi/nifi-nar-maven-plugin) in your maven project for your connector as below.

```xml

<plugins>
  <plugin>
    <groupId>org.apache.nifi</groupId>
    <artifactId>nifi-nar-maven-plugin</artifactId>
    <version>1.2.0</version>
  </plugin>
</plugins>

```

You must also create a `resources/META-INF/services/pulsar-io.yaml` file with the following contents:

```yaml

name: connector name
description: connector description
sourceClass: fully qualified class name (only if source connector)
sinkClass: fully qualified class name (only if sink connector)

```

For Gradle users, there is a [Gradle Nar plugin available on the Gradle Plugin Portal](https://plugins.gradle.org/plugin/io.github.lhotari.gradle-nar-plugin).

:::tip

For more information about **how to use NAR for Pulsar connectors**, see {@inject: github:TwitterFirehose:/pulsar-io/twitter/pom.xml}.

:::

### Uber JAR

An alternative approach is to create an **uber JAR** that contains all of the connector's JAR files
and other resource files. No internal directory structure is necessary.

You can use [maven-shade-plugin](https://maven.apache.org/plugins/maven-shade-plugin/examples/includes-excludes.html) to create an uber JAR as below:

```xml

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>3.1.1</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <filters>
          <filter>
            <artifact>*:*</artifact>
          </filter>
        </filters>
      </configuration>
    </execution>
  </executions>
</plugin>

```

## Monitor

Pulsar connectors enable you to move data in and out of Pulsar easily. It is important to ensure that the running connectors are healthy at any time. You can monitor Pulsar connectors that have been deployed with the following methods:

- Check the metrics provided by Pulsar.

  Pulsar connectors expose the metrics that can be collected and used for monitoring the health of **Java** connectors. You can check the metrics by following the [monitoring](deploy-monitoring.md) guide.

- Set and check your customized metrics.

  In addition to the metrics provided by Pulsar, Pulsar allows you to customize metrics for **Java** connectors. Function workers collect user-defined metrics to Prometheus automatically and you can check them in Grafana.

Here is an example of how to customize metrics for a Java connector.

```java

public class TestMetricSink implements Sink<String> {

    @Override
    public void open(Map<String, Object> config, SinkContext sinkContext) throws Exception {
        sinkContext.recordMetric("foo", 1);
    }

    @Override
    public void write(Record<String> record) throws Exception {

    }

    @Override
    public void close() throws Exception {

    }
}

```

diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/io-dynamodb-source.md b/site2/website/versioned_docs/version-2.10.0-deprecated/io-dynamodb-source.md
deleted file mode 100644
index 0314be2529b4ca..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/io-dynamodb-source.md
+++ /dev/null
@@ -1,82 +0,0 @@
---
id: io-dynamodb-source
title: AWS DynamoDB source connector
sidebar_label: "AWS DynamoDB source connector"
original_id: io-dynamodb-source
---

The DynamoDB source connector pulls data from DynamoDB table streams and persists data into Pulsar.

This connector uses the [DynamoDB Streams Kinesis Adapter](https://github.com/awslabs/dynamodb-streams-kinesis-adapter),
which uses the [Kinesis Consumer Library](https://github.com/awslabs/amazon-kinesis-client) (KCL) to do the actual
consuming of messages. The KCL uses DynamoDB to track state for consumers and requires CloudWatch access to log metrics.


## Configuration

The configuration of the DynamoDB source connector has the following properties.

### Property

| Name | Type|Required | Default | Description
|------|----------|----------|---------|-------------|
`initialPositionInStream`|InitialPositionInStream|false|LATEST|The position where the connector starts from. The available options are `AT_TIMESTAMP` (start from the record at or after the specified timestamp), `LATEST` (start after the most recent data record), and `TRIM_HORIZON` (start from the oldest available data record).
`startAtTime`|Date|false|" " (empty string)|If set to `AT_TIMESTAMP`, it specifies the point in time to start consumption.
`applicationName`|String|false|Pulsar IO connector|The name of the KCL application. Must be unique, as it is used to define the table name for the dynamo table used for state tracking. By default, the application name is included in the user agent string used to make AWS requests, which can assist with troubleshooting (for example, distinguishing requests made by separate connector instances).
`checkpointInterval`|long|false|60000|The frequency of the KCL checkpoint in milliseconds.
`backoffTime`|long|false|3000|The amount of time to delay between requests, in milliseconds, when the connector encounters a throttling exception from AWS Kinesis.
`numRetries`|int|false|3|The number of re-attempts when the connector encounters an exception while trying to set a checkpoint.
`receiveQueueSize`|int|false|1000|The maximum number of AWS records that can be buffered inside the connector. Once `receiveQueueSize` is reached, the connector does not consume any messages from Kinesis until some messages in the queue are successfully consumed.
`dynamoEndpoint`|String|false|" " (empty string)|The DynamoDB end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
`cloudwatchEndpoint`|String|false|" " (empty string)|The CloudWatch end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
`awsEndpoint`|String|false|" " (empty string)|The DynamoDB Streams end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
`awsRegion`|String|false|" " (empty string)|The AWS region. **Example:** us-west-1, us-west-2.
`awsDynamodbStreamArn`|String|true|" " (empty string)|The DynamoDB stream ARN.
`awsCredentialPluginName`|String|false|" " (empty string)|The fully-qualified class name of the implementation of {@inject: github:AwsCredentialProviderPlugin:/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java}. `awsCredentialProviderPlugin` has the following built-in plugins: `org.apache.pulsar.io.kinesis.AwsDefaultProviderChainPlugin`, which uses the default AWS provider chain (for more information, see [using the default credential provider chain](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html#credentials-default)), and `org.apache.pulsar.io.kinesis.STSAssumeRoleProviderPlugin`, which takes a configuration via `awsCredentialPluginParam` that describes a role to assume when running the KCL. **JSON configuration example:** `{"roleArn": "arn...", "roleSessionName": "name"}`. `awsCredentialPluginName` is a factory class that creates an AWSCredentialsProvider used by the Kinesis sink. If `awsCredentialPluginName` is set to empty, the Kinesis sink creates a default AWSCredentialsProvider which accepts a JSON map of credentials in `awsCredentialPluginParam`.
`awsCredentialPluginParam`|String |false|" " (empty string)|The JSON parameter to initialize `awsCredentialsProviderPlugin`.

### Example

Before using the DynamoDB source connector, you need to create a configuration file through one of the following methods.

* JSON

  ```json

  {
     "configs": {
        "awsEndpoint": "https://some.endpoint.aws",
        "awsRegion": "us-east-1",
        "awsDynamodbStreamArn": "arn:aws:dynamodb:us-west-2:111122223333:table/TestTable/stream/2015-05-11T21:21:33.291",
        "awsCredentialPluginParam": "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}",
        "applicationName": "My test application",
        "checkpointInterval": "30000",
        "backoffTime": "4000",
        "numRetries": "3",
        "receiveQueueSize": 2000,
        "initialPositionInStream": "TRIM_HORIZON",
        "startAtTime": "2019-03-05T19:28:58.000Z"
     }
  }

  ```

* YAML

  ```yaml

  configs:
     awsEndpoint: "https://some.endpoint.aws"
     awsRegion: "us-east-1"
     awsDynamodbStreamArn: "arn:aws:dynamodb:us-west-2:111122223333:table/TestTable/stream/2015-05-11T21:21:33.291"
     awsCredentialPluginParam: "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}"
     applicationName: "My test application"
     checkpointInterval: 30000
     backoffTime: 4000
     numRetries: 3
     receiveQueueSize: 2000
     initialPositionInStream: "TRIM_HORIZON"
     startAtTime: "2019-03-05T19:28:58.000Z"

  ```

diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/io-elasticsearch-sink.md b/site2/website/versioned_docs/version-2.10.0-deprecated/io-elasticsearch-sink.md
deleted file mode 100644
index 4a5e3494138147..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/io-elasticsearch-sink.md
+++ /dev/null
@@ -1,244 +0,0 @@
---
id: io-elasticsearch-sink
title: Elasticsearch sink connector
sidebar_label: "Elasticsearch sink connector"
original_id: io-elasticsearch-sink
---

The Elasticsearch sink connector pulls messages from Pulsar topics and persists the messages to indexes.


## Feature

### Handle data

Since Pulsar 2.9.0, the Elasticsearch sink connector has the following ways of
working. You can choose one of them.

Name | Description
---|---
Raw processing | The sink reads from topics and passes the raw content to Elasticsearch. This is the **default** behavior. Raw processing was already available **in Pulsar 2.8.x**.
Schema aware | The sink uses the schema and handles AVRO, JSON, and KeyValue schema types while mapping the content to the Elasticsearch document. If you set `schemaEnable` to `true`, the sink interprets the contents of the message, and you can define a **primary key** that is in turn used as the special `_id` field on Elasticsearch. This allows you to perform `UPDATE`, `INSERT`, and `DELETE` operations to Elasticsearch driven by the logical primary key of the message. This is very useful in a typical Change Data Capture scenario, in which you follow the changes on your database, write them to Pulsar (using the Debezium adapter, for instance), and then write to Elasticsearch. You configure the mapping of the primary key using the `primaryFields` configuration entry. The `DELETE` operation can be performed when the primary key is not empty and the remaining value is empty; use `nullValueAction` to configure this behavior. The default configuration simply ignores such empty values.

### Map multiple indexes

Since Pulsar 2.9.0, the `indexName` property is no longer required. If you omit it, the sink writes to an index named after the Pulsar topic.

### Enable bulk writes

Since Pulsar 2.9.0, you can use bulk writes by setting the `bulkEnabled` property to `true`.

### Enable secure connections via TLS

Since Pulsar 2.9.0, you can enable secure connections with TLS.

## Configuration

The configuration of the Elasticsearch sink connector has the following properties.

### Property

| Name | Type|Required | Default | Description
|------|----------|----------|---------|-------------|
| `elasticSearchUrl` | String| true |" " (empty string)| The URL of the Elasticsearch cluster to which the connector connects. |
| `indexName` | String| false |" " (empty string)| The index name to which the connector writes messages. The default value is the topic name. It accepts date formats in the name to support event-time-based indexes with the pattern `%{+<date-format>}`. For example, suppose the event time of the record is 1645182000000L and the indexName is `logs-%{+yyyy-MM-dd}`; then the formatted index name is `logs-2022-02-18`. |
| `schemaEnable` | Boolean | false | false | Turn on the Schema Aware mode. |
| `createIndexIfNeeded` | Boolean | false | false | Manage index if missing. |
| `maxRetries` | Integer | false | 1 | The maximum number of retries for Elasticsearch requests. Use -1 to disable it. |
| `retryBackoffInMs` | Integer | false | 100 | The base time to wait when retrying an Elasticsearch request (in milliseconds). |
| `maxRetryTimeInSec` | Integer| false | 86400 | The maximum retry time interval in seconds for retrying an Elasticsearch request. |
| `bulkEnabled` | Boolean | false | false | Enable the Elasticsearch bulk processor to flush write requests based on the number or size of requests, or after a given period. |
| `bulkActions` | Integer | false | 1000 | The maximum number of actions per Elasticsearch bulk request. Use -1 to disable it. |
| `bulkSizeInMb` | Integer | false |5 | The maximum size in megabytes of Elasticsearch bulk requests. Use -1 to disable it. |
| `bulkConcurrentRequests` | Integer | false | 0 | The maximum number of in-flight Elasticsearch bulk requests. The default 0 allows the execution of a single request. A value of 1 means 1 concurrent request is allowed to be executed while accumulating new bulk requests. |
| `bulkFlushIntervalInMs` | Integer | false | -1 | The maximum period of time to wait for flushing pending writes when bulk writes are enabled. The default is -1, meaning not set. |
| `compressionEnabled` | Boolean | false |false | Enable Elasticsearch request compression. |
| `connectTimeoutInMs` | Integer | false |5000 | The Elasticsearch client connection timeout in milliseconds. |
| `connectionRequestTimeoutInMs` | Integer | false |1000 | The time in milliseconds for getting a connection from the Elasticsearch connection pool. |
| `connectionIdleTimeoutInMs` | Integer | false |5 | Idle connection timeout to prevent a read timeout. |
| `keyIgnore` | Boolean | false |true | Whether to ignore the record key to build the Elasticsearch document `_id`. If `primaryFields` is defined, the connector extracts the primary fields from the payload to build the document `_id`. If no `primaryFields` are provided, Elasticsearch auto-generates a random document `_id`. |
| `primaryFields` | String | false | "id" | The comma-separated ordered list of field names used to build the Elasticsearch document `_id` from the record value. If this list is a singleton, the field is converted to a string. If this list has 2 or more fields, the generated `_id` is a string representation of a JSON array of the field values. |
| `nullValueAction` | enum (IGNORE,DELETE,FAIL) | false | IGNORE | How to handle records with null values. Possible options are IGNORE, DELETE, or FAIL. The default is to IGNORE the message. |
| `malformedDocAction` | enum (IGNORE,WARN,FAIL) | false | FAIL | How to handle documents that Elasticsearch rejects due to some malformation. Possible options are IGNORE, WARN, or FAIL. The default is to FAIL the Elasticsearch document. |
| `stripNulls` | Boolean | false |true | If stripNulls is false, the Elasticsearch `_source` includes 'null' for empty fields (for example {"foo": null}); otherwise null fields are stripped. |
| `socketTimeoutInMs` | Integer | false |60000 | The socket timeout in milliseconds waiting to read the Elasticsearch response. |
| `typeName` | String | false | "_doc" | The type name to which the connector writes messages. The value should be set explicitly to a valid type name other than "_doc" for Elasticsearch versions before 6.2, and left at the default otherwise. |
| `indexNumberOfShards` | int| false |1| The number of shards of the index. |
| `indexNumberOfReplicas` | int| false |1 | The number of replicas of the index. |
| `username` | String| false |" " (empty string)| The username used by the connector to connect to the Elasticsearch cluster. If `username` is set, `password` should also be provided. |
| `password` | String| false | " " (empty string)|The password used by the connector to connect to the Elasticsearch cluster. If `username` is set, `password` should also be provided. |
| `ssl` | ElasticSearchSslConfig | false | | Configuration for TLS encrypted communication. |

### Definition of the ElasticSearchSslConfig structure

| Name | Type|Required | Default | Description
|------|----------|----------|---------|-------------|
| `enabled` | Boolean| false | false | Enable SSL/TLS. |
| `hostnameVerification` | Boolean| false | true | Whether or not to validate node hostnames when using SSL. |
| `truststorePath` | String| false |" " (empty string)| The path to the truststore file. |
| `truststorePassword` | String| false |" " (empty string)| Truststore password. |
| `keystorePath` | String| false |" " (empty string)| The path to the keystore file. |
| `keystorePassword` | String| false |" " (empty string)| Keystore password. |
| `cipherSuites` | String| false |" " (empty string)| SSL/TLS cipher suites. |
| `protocols` | String| false |"TLSv1.2" | Comma-separated list of enabled SSL/TLS protocols. |

## Example

Before using the Elasticsearch sink connector, you need to create a configuration file through one of the following methods.

### Configuration

#### For Elasticsearch After 6.2

* JSON

  ```json

  {
     "configs": {
        "elasticSearchUrl": "http://localhost:9200",
        "indexName": "my_index",
        "username": "scooby",
        "password": "doobie"
     }
  }

  ```

* YAML

  ```yaml

  configs:
     elasticSearchUrl: "http://localhost:9200"
     indexName: "my_index"
     username: "scooby"
     password: "doobie"

  ```

#### For Elasticsearch Before 6.2

* JSON

  ```json

  {
     "elasticSearchUrl": "http://localhost:9200",
     "indexName": "my_index",
     "typeName": "doc",
     "username": "scooby",
     "password": "doobie"
  }

  ```

* YAML

  ```yaml

  configs:
     elasticSearchUrl: "http://localhost:9200"
     indexName: "my_index"
     typeName: "doc"
     username: "scooby"
     password: "doobie"

  ```

### Usage

1. Start a single-node Elasticsearch cluster.

   ```bash

   $ docker run -p 9200:9200 -p 9300:9300 \
     -e "discovery.type=single-node" \
     docker.elastic.co/elasticsearch/elasticsearch:7.13.3

   ```

2. Start a Pulsar service locally in standalone mode.

   ```bash

   $ bin/pulsar standalone

   ```

   Make sure the NAR file is available at `connectors/pulsar-io-elastic-search-@pulsar:version@.nar`.

3. Start the Pulsar Elasticsearch connector in local run mode using one of the following methods.
   * Use the **JSON** configuration as shown previously.

     ```bash

     $ bin/pulsar-admin sinks localrun \
     --archive connectors/pulsar-io-elastic-search-@pulsar:version@.nar \
     --tenant public \
     --namespace default \
     --name elasticsearch-test-sink \
     --sink-config '{"elasticSearchUrl":"http://localhost:9200","indexName": "my_index","username": "scooby","password": "doobie"}' \
     --inputs elasticsearch_test

     ```

   * Use the **YAML** configuration file as shown previously.

     ```bash

     $ bin/pulsar-admin sinks localrun \
     --archive connectors/pulsar-io-elastic-search-@pulsar:version@.nar \
     --tenant public \
     --namespace default \
     --name elasticsearch-test-sink \
     --sink-config-file elasticsearch-sink.yml \
     --inputs elasticsearch_test

     ```

4. Publish records to the topic.

   ```bash

   $ bin/pulsar-client produce elasticsearch_test --messages "{\"a\":1}"

   ```

5. Check documents in Elasticsearch.
- - * refresh the index - - ```bash - - $ curl -s http://localhost:9200/my_index/_refresh - - ``` - - - * search documents - - ```bash - - $ curl -s http://localhost:9200/my_index/_search - - ``` - - You can see the record that published earlier has been successfully written into Elasticsearch. - - ```json - - {"took":2,"timed_out":false,"_shards":{"total":1,"successful":1,"skipped":0,"failed":0},"hits":{"total":{"value":1,"relation":"eq"},"max_score":1.0,"hits":[{"_index":"my_index","_type":"_doc","_id":"FSxemm8BLjG_iC0EeTYJ","_score":1.0,"_source":{"a":1}}]}} - - ``` - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/io-file-source.md b/site2/website/versioned_docs/version-2.10.0-deprecated/io-file-source.md deleted file mode 100644 index ba0f467a443146..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/io-file-source.md +++ /dev/null @@ -1,173 +0,0 @@ ---- -id: io-file-source -title: File source connector -sidebar_label: "File source connector" -original_id: io-file-source ---- - -The File source connector pulls messages from files in directories and persists the messages to Pulsar topics. - -## Configuration - -The configuration of the File source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `inputDirectory` | String|true | No default value|The input directory to pull files. | -| `recurse` | Boolean|false | true | Whether to pull files from subdirectory or not.| -| `keepFile` |Boolean|false | false | If set to true, the file is not deleted after it is processed, which means the file can be picked up continually. | -| `fileFilter` | String|false| [^\\.].* | The file whose name matches the given regular expression is picked up. | -| `pathFilter` | String |false | NULL | If `recurse` is set to true, the subdirectory whose path matches the given regular expression is scanned. | -| `minimumFileAge` | Integer|false | 0 | The minimum age that a file can be processed.
Any file younger than `minimumFileAge` (according to the last modification date) is ignored. |
| `maximumFileAge` | Long|false |Long.MAX_VALUE | The maximum age that a file can be processed.<br/><br/>Any file older than `maximumFileAge` (according to the last modification date) is ignored. |
| `minimumSize` |Integer| false |1 | The minimum size (in bytes) that a file can be processed. |
| `maximumSize` | Double|false |Double.MAX_VALUE| The maximum size (in bytes) that a file can be processed. |
| `ignoreHiddenFiles` |Boolean| false | true| Whether the hidden files should be ignored or not. |
| `pollingInterval`|Long | false | 10000L | Indicates how long to wait before performing a directory listing. |
| `numWorkers` | Integer | false | 1 | The number of worker threads that process files.<br/><br/>This allows you to process a larger number of files concurrently.<br/><br/>However, setting this to a value greater than 1 makes the data from multiple files mixed in the target topic. |
| `processedFileSuffix` | String | false | NULL | If set, do not delete but only rename the file that has been processed.<br/><br/>
    This config only work when 'keepFile' property is false. | - -### Example - -Before using the File source connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "configs": { - "inputDirectory": "/Users/david", - "recurse": true, - "keepFile": true, - "fileFilter": "[^\\.].*", - "pathFilter": "*", - "minimumFileAge": 0, - "maximumFileAge": 9999999999, - "minimumSize": 1, - "maximumSize": 5000000, - "ignoreHiddenFiles": true, - "pollingInterval": 5000, - "numWorkers": 1, - "processedFileSuffix": ".processed_done" - } - } - - ``` - -* YAML - - ```yaml - - configs: - inputDirectory: "/Users/david" - recurse: true - keepFile: true - fileFilter: "[^\\.].*" - pathFilter: "*" - minimumFileAge: 0 - maximumFileAge: 9999999999 - minimumSize: 1 - maximumSize: 5000000 - ignoreHiddenFiles: true - pollingInterval: 5000 - numWorkers: 1 - processedFileSuffix: ".processed_done" - - ``` - -## Usage - -Here is an example of using the File source connecter. - -1. Pull a Pulsar image. - - ```bash - - $ docker pull apachepulsar/pulsar:{version} - - ``` - -2. Start Pulsar standalone. - - ```bash - - $ docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-standalone apachepulsar/pulsar:{version} bin/pulsar standalone - - ``` - -3. Create a configuration file _file-connector.yaml_. - - ```yaml - - configs: - inputDirectory: "/opt" - - ``` - -4. Copy the configuration file _file-connector.yaml_ to the container. - - ```bash - - $ docker cp connectors/file-connector.yaml pulsar-standalone:/pulsar/ - - ``` - -5. Download the File source connector. - - ```bash - - $ curl -O https://mirrors.tuna.tsinghua.edu.cn/apache/pulsar/pulsar-{version}/connectors/pulsar-io-file-{version}.nar - - ``` - -6. Copy it to the `connectors` folder, then restart the container. - - ```bash - - $ docker cp pulsar-io-file-{version}.nar pulsar-standalone:/pulsar/connectors/ - $ docker restart pulsar-standalone - - ``` - -7. Start the File source connector. - - ```bash - - $ docker exec -it pulsar-standalone /bin/bash - - $ ./bin/pulsar-admin sources localrun \ - --archive /pulsar/connectors/pulsar-io-file-{version}.nar \ - --name file-test \ - --destination-topic-name pulsar-file-test \ - --source-config-file /pulsar/file-connector.yaml - - ``` - -8. Start a consumer. - - ```bash - - ./bin/pulsar-client consume -s file-test -n 0 pulsar-file-test - - ``` - -9. Write the message to the file _test.txt_. - - ```bash - - echo "hello world!" > /opt/test.txt - - ``` - - The following information appears on the consumer terminal window. - - ```bash - - ----- got message ----- - hello world! - - ``` - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/io-flume-sink.md b/site2/website/versioned_docs/version-2.10.0-deprecated/io-flume-sink.md deleted file mode 100644 index 591681315bc264..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/io-flume-sink.md +++ /dev/null @@ -1,58 +0,0 @@ ---- -id: io-flume-sink -title: Flume sink connector -sidebar_label: "Flume sink connector" -original_id: io-flume-sink ---- - -The Flume sink connector pulls messages from Pulsar topics to logs. - -## Configuration - -The configuration of the Flume sink connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -`name`|String|true|"" (empty string)|The name of the agent. 
-`confFile`|String|true|"" (empty string)|The configuration file. -`noReloadConf`|Boolean|false|false|Whether to reload configuration file if changed. -`zkConnString`|String|true|"" (empty string)|The ZooKeeper connection. -`zkBasePath`|String|true|"" (empty string)|The base path in ZooKeeper for agent configuration. - -### Example - -Before using the Flume sink connector, you need to create a configuration file through one of the following methods. - -> For more information about the `sink.conf` in the example below, see [here](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/resources/flume/sink.conf). - -* JSON - - ```json - - { - "configs": { - "name": "a1", - "confFile": "sink.conf", - "noReloadConf": "false", - "zkConnString": "", - "zkBasePath": "" - } - } - - ``` - -* YAML - - ```yaml - - configs: - name: a1 - confFile: sink.conf - noReloadConf: false - zkConnString: "" - zkBasePath: "" - - ``` - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/io-flume-source.md b/site2/website/versioned_docs/version-2.10.0-deprecated/io-flume-source.md deleted file mode 100644 index ba384560111fd0..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/io-flume-source.md +++ /dev/null @@ -1,58 +0,0 @@ ---- -id: io-flume-source -title: Flume source connector -sidebar_label: "Flume source connector" -original_id: io-flume-source ---- - -The Flume source connector pulls messages from logs to Pulsar topics. - -## Configuration - -The configuration of the Flume source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -`name`|String|true|"" (empty string)|The name of the agent. -`confFile`|String|true|"" (empty string)|The configuration file. -`noReloadConf`|Boolean|false|false|Whether to reload configuration file if changed. -`zkConnString`|String|true|"" (empty string)|The ZooKeeper connection. -`zkBasePath`|String|true|"" (empty string)|The base path in ZooKeeper for agent configuration. - -### Example - -Before using the Flume source connector, you need to create a configuration file through one of the following methods. - -> For more information about the `source.conf` in the example below, see [here](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/resources/flume/source.conf). - -* JSON - - ```json - - { - "configs": { - "name": "a1", - "confFile": "source.conf", - "noReloadConf": "false", - "zkConnString": "", - "zkBasePath": "" - } - } - - ``` - -* YAML - - ```yaml - - configs: - name: a1 - confFile: source.conf - noReloadConf: false - zkConnString: "" - zkBasePath: "" - - ``` - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/io-hbase-sink.md b/site2/website/versioned_docs/version-2.10.0-deprecated/io-hbase-sink.md deleted file mode 100644 index 4fcd59a2c27504..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/io-hbase-sink.md +++ /dev/null @@ -1,69 +0,0 @@ ---- -id: io-hbase-sink -title: HBase sink connector -sidebar_label: "HBase sink connector" -original_id: io-hbase-sink ---- - -The HBase sink connector pulls the messages from Pulsar topics -and persists the messages to HBase tables - -## Configuration - -The configuration of the HBase sink connector has the following properties. 
- -### Property - -| Name | Type|Default | Required | Description | -|------|---------|----------|-------------|--- -| `hbaseConfigResources` | String|None | false | HBase system configuration `hbase-site.xml` file. | -| `zookeeperQuorum` | String|None | true | HBase system configuration about `hbase.zookeeper.quorum` value. | -| `zookeeperClientPort` | String|2181 | false | HBase system configuration about `hbase.zookeeper.property.clientPort` value. | -| `zookeeperZnodeParent` | String|/hbase | false | HBase system configuration about `zookeeper.znode.parent` value. | -| `tableName` | None |String | true | HBase table, the value is `namespace:tableName`. | -| `rowKeyName` | String|None | true | HBase table rowkey name. | -| `familyName` | String|None | true | HBase table column family name. | -| `qualifierNames` |String| None | true | HBase table column qualifier names. | -| `batchTimeMs` | Long|1000l| false | HBase table operation timeout in milliseconds. | -| `batchSize` | int|200| false | Batch size of updates made to the HBase table. | - -### Example - -Before using the HBase sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "configs": { - "hbaseConfigResources": "hbase-site.xml", - "zookeeperQuorum": "localhost", - "zookeeperClientPort": "2181", - "zookeeperZnodeParent": "/hbase", - "tableName": "pulsar_hbase", - "rowKeyName": "rowKey", - "familyName": "info", - "qualifierNames": [ 'name', 'address', 'age'] - } - } - - ``` - -* YAML - - ```yaml - - configs: - hbaseConfigResources: "hbase-site.xml" - zookeeperQuorum: "localhost" - zookeeperClientPort: "2181" - zookeeperZnodeParent: "/hbase" - tableName: "pulsar_hbase" - rowKeyName: "rowKey" - familyName: "info" - qualifierNames: [ 'name', 'address', 'age'] - - ``` - - \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/io-hdfs2-sink.md b/site2/website/versioned_docs/version-2.10.0-deprecated/io-hdfs2-sink.md deleted file mode 100644 index 54ab3f918bb55d..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/io-hdfs2-sink.md +++ /dev/null @@ -1,66 +0,0 @@ ---- -id: io-hdfs2-sink -title: HDFS2 sink connector -sidebar_label: "HDFS2 sink connector" -original_id: io-hdfs2-sink ---- - -The HDFS2 sink connector pulls the messages from Pulsar topics -and persists the messages to HDFS files. - -## Configuration - -The configuration of the HDFS2 sink connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `hdfsConfigResources` | String|true| None | A file or a comma-separated list containing the Hadoop file system configuration.
<br/><br/>**Example**<br/>'core-site.xml'<br/>'hdfs-site.xml' |
| `directory` | String | true | None|The HDFS directory from which files are read or to which files are written. |
| `encoding` | String |false |None |The character encoding for the files.<br/><br/>**Example**<br/>UTF-8<br/>ASCII |
| `compression` | Compression |false |None |The compression codec used to compress or decompress the files on HDFS.<br/><br/>Below are the available options:<br/>BZIP2<br/>DEFLATE<br/>GZIP<br/>LZ4<br/>SNAPPY |
| `kerberosUserPrincipal` |String| false| None|The principal account of the Kerberos user used for authentication. |
| `keytab` | String|false|None| The full pathname of the Kerberos keytab file used for authentication. |
| `filenamePrefix` |String| true, if `compression` is set to `None`. | None |The prefix of the files created inside the HDFS directory.<br/><br/>**Example**<br/>The value topicA results in files named topicA-. |
| `fileExtension` | String| true | None | The extension added to the files written to HDFS.<br/><br/>**Example**<br/>'.txt'<br/>'.seq' |
| `separator` | char|false |None |The character used to separate records in a text file.<br/><br/>If no value is provided, the contents of all records are concatenated together in one continuous byte array. |
| `syncInterval` | long| false |0| The interval, in milliseconds, between calls to flush data to HDFS disk. |
| `maxPendingRecords` |int| false|Integer.MAX_VALUE | The maximum number of records held in memory before acking.<br/><br/>Setting this property to 1 makes every record be sent to disk before it is acked.<br/><br/>Setting this property to a higher value allows buffering records before flushing them to disk. |
| `subdirectoryPattern` | String | false | None | A subdirectory associated with the created time of the sink.<br/>The pattern is the formatted pattern of `directory`'s subdirectory.<br/><br/>
    See [DateTimeFormatter](https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html) for pattern's syntax. | - -### Example - -Before using the HDFS2 sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "configs": { - "hdfsConfigResources": "core-site.xml", - "directory": "/foo/bar", - "filenamePrefix": "prefix", - "fileExtension": ".log", - "compression": "SNAPPY", - "subdirectoryPattern": "yyyy-MM-dd" - } - } - - ``` - -* YAML - - ```yaml - - configs: - hdfsConfigResources: "core-site.xml" - directory: "/foo/bar" - filenamePrefix: "prefix" - fileExtension: ".log" - compression: "SNAPPY" - subdirectoryPattern: "yyyy-MM-dd" - - ``` - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/io-hdfs3-sink.md b/site2/website/versioned_docs/version-2.10.0-deprecated/io-hdfs3-sink.md deleted file mode 100644 index 91f06153d5d771..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/io-hdfs3-sink.md +++ /dev/null @@ -1,61 +0,0 @@ ---- -id: io-hdfs3-sink -title: HDFS3 sink connector -sidebar_label: "HDFS3 sink connector" -original_id: io-hdfs3-sink ---- - -The HDFS3 sink connector pulls the messages from Pulsar topics -and persists the messages to HDFS files. - -## Configuration - -The configuration of the HDFS3 sink connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `hdfsConfigResources` | String|true| None | A file or a comma-separated list containing the Hadoop file system configuration.
<br/><br/>**Example**<br/>'core-site.xml'<br/>'hdfs-site.xml' |
| `directory` | String | true | None|The HDFS directory from which files are read or to which files are written. |
| `encoding` | String |false |None |The character encoding for the files.<br/><br/>**Example**<br/>UTF-8<br/>ASCII |
| `compression` | Compression |false |None |The compression codec used to compress or decompress the files on HDFS.<br/><br/>Below are the available options:<br/>BZIP2<br/>DEFLATE<br/>GZIP<br/>LZ4<br/>SNAPPY |
| `kerberosUserPrincipal` |String| false| None|The principal account of the Kerberos user used for authentication. |
| `keytab` | String|false|None| The full pathname of the Kerberos keytab file used for authentication. |
| `filenamePrefix` |String| false |None |The prefix of the files created inside the HDFS directory.<br/><br/>**Example**<br/>The value topicA results in files named topicA-. |
| `fileExtension` | String| false | None| The extension added to the files written to HDFS.<br/><br/>**Example**<br/>'.txt'<br/>'.seq' |
| `separator` | char|false |None |The character used to separate records in a text file.<br/><br/>If no value is provided, the contents of all records are concatenated together in one continuous byte array. |
| `syncInterval` | long| false |0| The interval, in milliseconds, between calls to flush data to HDFS disk. |
| `maxPendingRecords` |int| false|Integer.MAX_VALUE | The maximum number of records held in memory before acking.<br/><br/>Setting this property to 1 makes every record be sent to disk before it is acked.<br/><br/>
    Setting this property to a higher value allows buffering records before flushing them to disk. - -### Example - -Before using the HDFS3 sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "configs": { - "hdfsConfigResources": "core-site.xml", - "directory": "/foo/bar", - "filenamePrefix": "prefix", - "compression": "SNAPPY" - } - } - - ``` - -* YAML - - ```yaml - - configs: - hdfsConfigResources: "core-site.xml" - directory: "/foo/bar" - filenamePrefix: "prefix" - compression: "SNAPPY" - - ``` - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/io-influxdb-sink.md b/site2/website/versioned_docs/version-2.10.0-deprecated/io-influxdb-sink.md deleted file mode 100644 index 8492aa482b50ae..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/io-influxdb-sink.md +++ /dev/null @@ -1,122 +0,0 @@ ---- -id: io-influxdb-sink -title: InfluxDB sink connector -sidebar_label: "InfluxDB sink connector" -original_id: io-influxdb-sink ---- - -The InfluxDB sink connector pulls messages from Pulsar topics -and persists the messages to InfluxDB. - -The InfluxDB sink provides different configurations for InfluxDBv1 and v2 respectively. - -## Configuration - -The configuration of the InfluxDB sink connector has the following properties. - -### Property -#### InfluxDBv2 -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `influxdbUrl` |String| true|" " (empty string) | The URL of the InfluxDB instance. | -| `token` | String|true| " " (empty string) |The authentication token used to authenticate to InfluxDB. | -| `organization` | String| true|" " (empty string) | The InfluxDB organization to write to. | -| `bucket` |String| true | " " (empty string)| The InfluxDB bucket to write to. | -| `precision` | String|false| ns | The timestamp precision for writing data to InfluxDB.
<br/><br/>Below are the available options:<br/>ns<br/>us<br/>ms<br/>s |
| `logLevel` | String|false| NONE|The log level for InfluxDB requests and responses.<br/><br/>Below are the available options:<br/>NONE<br/>BASIC<br/>HEADERS<br/>FULL |
| `gzipEnable` | boolean|false | false | Whether to enable gzip or not. |
| `batchTimeMs` |long|false| 1000L | The InfluxDB operation time in milliseconds. |
| `batchSize` | int|false|200| The batch size of writing to InfluxDB. |

#### InfluxDBv1
| Name | Type|Required | Default | Description
|------|----------|----------|---------|-------------|
| `influxdbUrl` |String| true|" " (empty string) | The URL of the InfluxDB instance. |
| `username` | String|false| " " (empty string) |The username used to authenticate to InfluxDB. |
| `password` | String| false|" " (empty string) | The password used to authenticate to InfluxDB. |
| `database` |String| true | " " (empty string)| The InfluxDB database to which messages are written. |
| `consistencyLevel` | String|false|ONE | The consistency level for writing data to InfluxDB.<br/><br/>Below are the available options:<br/>ALL<br/>ANY<br/>ONE<br/>QUORUM |
| `logLevel` | String|false| NONE|The log level for InfluxDB requests and responses.<br/><br/>Below are the available options:<br/>NONE<br/>BASIC<br/>HEADERS<br/>FULL<br/>
  108. | -| `retentionPolicy` | String|false| autogen| The retention policy for InfluxDB. | -| `gzipEnable` | boolean|false | false | Whether to enable gzip or not. | -| `batchTimeMs` |long|false| 1000L | The InfluxDB operation time in milliseconds. | -| `batchSize` | int|false|200| The batch size of writing to InfluxDB. | - -### Example -Before using the InfluxDB sink connector, you need to create a configuration file through one of the following methods. -#### InfluxDBv2 - -* JSON - - ```json - - { - "configs": { - "influxdbUrl": "http://localhost:9999", - "organization": "example-org", - "bucket": "example-bucket", - "token": "xxxx", - "precision": "ns", - "logLevel": "NONE", - "gzipEnable": false, - "batchTimeMs": 1000, - "batchSize": 100 - } - } - - ``` - -* YAML - - ```yaml - - configs: - influxdbUrl: "http://localhost:9999" - organization: "example-org" - bucket: "example-bucket" - token: "xxxx" - precision: "ns" - logLevel: "NONE" - gzipEnable: false - batchTimeMs: 1000 - batchSize: 100 - - ``` - -#### InfluxDBv1 - -* JSON - - ```json - - { - "configs": { - "influxdbUrl": "http://localhost:8086", - "database": "test_db", - "consistencyLevel": "ONE", - "logLevel": "NONE", - "retentionPolicy": "autogen", - "gzipEnable": false, - "batchTimeMs": 1000, - "batchSize": 100 - } - } - - ``` - -* YAML - - ```yaml - - configs: - influxdbUrl: "http://localhost:8086" - database: "test_db" - consistencyLevel: "ONE" - logLevel: "NONE" - retentionPolicy: "autogen" - gzipEnable: false - batchTimeMs: 1000 - batchSize: 100 - - ``` - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/io-jdbc-sink.md b/site2/website/versioned_docs/version-2.10.0-deprecated/io-jdbc-sink.md deleted file mode 100644 index fe03d4a1e441eb..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/io-jdbc-sink.md +++ /dev/null @@ -1,165 +0,0 @@ ---- -id: io-jdbc-sink -title: JDBC sink connector -sidebar_label: "JDBC sink connector" -original_id: io-jdbc-sink ---- - -The JDBC sink connectors allow pulling messages from Pulsar topics -and persists the messages to ClickHouse, MariaDB, PostgreSQL, and SQLite. - -> Currently, INSERT, DELETE and UPDATE operations are supported. - -## Configuration - -The configuration of all JDBC sink connectors has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `userName` | String|false | " " (empty string) | The username used to connect to the database specified by `jdbcUrl`.
<br/><br/>**Note: `userName` is case-sensitive.**|
| `password` | String|false | " " (empty string)| The password used to connect to the database specified by `jdbcUrl`.<br/><br/>
    **Note: `password` is case-sensitive.**| -| `jdbcUrl` | String|true | " " (empty string) | The JDBC URL of the database to which the connector connects. | -| `tableName` | String|true | " " (empty string) | The name of the table to which the connector writes. | -| `nonKey` | String|false | " " (empty string) | A comma-separated list contains the fields used in updating events. | -| `key` | String|false | " " (empty string) | A comma-separated list contains the fields used in `where` condition of updating and deleting events. | -| `timeoutMs` | int| false|500 | The JDBC operation timeout in milliseconds. | -| `batchSize` | int|false | 200 | The batch size of updates made to the database. | - -### Example for ClickHouse - -* JSON - - ```json - - { - "configs": { - "userName": "clickhouse", - "password": "password", - "jdbcUrl": "jdbc:clickhouse://localhost:8123/pulsar_clickhouse_jdbc_sink", - "tableName": "pulsar_clickhouse_jdbc_sink" - } - } - - ``` - -* YAML - - ```yaml - - tenant: "public" - namespace: "default" - name: "jdbc-clickhouse-sink" - topicName: "persistent://public/default/jdbc-clickhouse-topic" - sinkType: "jdbc-clickhouse" - configs: - userName: "clickhouse" - password: "password" - jdbcUrl: "jdbc:clickhouse://localhost:8123/pulsar_clickhouse_jdbc_sink" - tableName: "pulsar_clickhouse_jdbc_sink" - - ``` - -### Example for MariaDB - -* JSON - - ```json - - { - "configs": { - "userName": "mariadb", - "password": "password", - "jdbcUrl": "jdbc:mariadb://localhost:3306/pulsar_mariadb_jdbc_sink", - "tableName": "pulsar_mariadb_jdbc_sink" - } - } - - ``` - -* YAML - - ```yaml - - tenant: "public" - namespace: "default" - name: "jdbc-mariadb-sink" - topicName: "persistent://public/default/jdbc-mariadb-topic" - sinkType: "jdbc-mariadb" - configs: - userName: "mariadb" - password: "password" - jdbcUrl: "jdbc:mariadb://localhost:3306/pulsar_mariadb_jdbc_sink" - tableName: "pulsar_mariadb_jdbc_sink" - - ``` - -### Example for PostgreSQL - -Before using the JDBC PostgreSQL sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "configs": { - "userName": "postgres", - "password": "password", - "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink", - "tableName": "pulsar_postgres_jdbc_sink" - } - } - - ``` - -* YAML - - ```yaml - - tenant: "public" - namespace: "default" - name: "jdbc-postgres-sink" - topicName: "persistent://public/default/jdbc-postgres-topic" - sinkType: "jdbc-postgres" - configs: - userName: "postgres" - password: "password" - jdbcUrl: "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink" - tableName: "pulsar_postgres_jdbc_sink" - - ``` - -For more information on **how to use this JDBC sink connector**, see [connect Pulsar to PostgreSQL](io-quickstart.md#connect-pulsar-to-postgresql). 
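As an illustration only, here is a minimal sketch of submitting the PostgreSQL configuration above with `pulsar-admin`. It assumes the YAML file is saved as `pulsar-postgres-jdbc-sink.yaml`, that the JDBC PostgreSQL NAR sits under `connectors/`, and that the input topic name is an arbitrary choice; adjust these to your deployment.

```bash

# Hedged sketch: NAR path, topic, and sink name below are assumptions,
# not values mandated by the connector itself.
$ bin/pulsar-admin sinks create \
  --archive connectors/pulsar-io-jdbc-postgres-{version}.nar \
  --inputs pulsar-postgres-jdbc-sink-topic \
  --name jdbc-postgres-sink \
  --sink-config-file pulsar-postgres-jdbc-sink.yaml \
  --parallelism 1

```
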
- -### Example for SQLite - -* JSON - - ```json - - { - "configs": { - "jdbcUrl": "jdbc:sqlite:db.sqlite", - "tableName": "pulsar_sqlite_jdbc_sink" - } - } - - ``` - -* YAML - - ```yaml - - tenant: "public" - namespace: "default" - name: "jdbc-sqlite-sink" - topicName: "persistent://public/default/jdbc-sqlite-topic" - sinkType: "jdbc-sqlite" - configs: - jdbcUrl: "jdbc:sqlite:db.sqlite" - tableName: "pulsar_sqlite_jdbc_sink" - - ``` - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/io-kafka-sink.md b/site2/website/versioned_docs/version-2.10.0-deprecated/io-kafka-sink.md deleted file mode 100644 index ce8967e0461073..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/io-kafka-sink.md +++ /dev/null @@ -1,73 +0,0 @@ ---- -id: io-kafka-sink -title: Kafka sink connector -sidebar_label: "Kafka sink connector" -original_id: io-kafka-sink ---- - -The Kafka sink connector pulls messages from Pulsar topics and persists the messages -to Kafka topics. - -This guide explains how to configure and use the Kafka sink connector. - -## Configuration - -The configuration of the Kafka sink connector has the following parameters. - -### Property - -| Name | Type| Required | Default | Description -|------|----------|---------|-------------|-------------| -| `bootstrapServers` |String| true | " " (empty string) | A comma-separated list of host and port pairs for establishing the initial connection to the Kafka cluster. | -|`acks`|String|true|" " (empty string) |The number of acknowledgments that the producer requires the leader to receive before a request completes.
This controls the durability of the sent records. |
|`batchSize`|long|false|16384L|The batch size, in bytes, in which the Kafka producer attempts to batch records together before sending them to brokers. |
|`maxRequestSize`|long|false|1048576L|The maximum size of a Kafka request in bytes. |
|`topic`|String|true|" " (empty string) |The Kafka topic that receives messages from Pulsar. |
| `keyDeserializationClass` | String|false | org.apache.kafka.common.serialization.StringSerializer | The serializer class for Kafka producers to serialize keys. |
| `valueDeserializationClass` | String|false | org.apache.kafka.common.serialization.ByteArraySerializer | The serializer class for Kafka producers to serialize values.<br/><br/>The serializer is set by a specific implementation of [`KafkaAbstractSink`](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSink.java). |
|`producerConfigProperties`|Map|false|" " (empty string)|The producer configuration properties to be passed to producers.<br/><br/>
**Note: other properties specified in the connector configuration file take precedence over this configuration**.


### Example

Before using the Kafka sink connector, you need to create a configuration file through one of the following methods.

* JSON

  ```json

  {
     "configs": {
        "bootstrapServers": "localhost:6667",
        "topic": "test",
        "acks": "1",
        "batchSize": "16384",
        "maxRequestSize": "1048576",
        "producerConfigProperties": {
           "client.id": "test-pulsar-producer",
           "security.protocol": "SASL_PLAINTEXT",
           "sasl.mechanism": "GSSAPI",
           "sasl.kerberos.service.name": "kafka",
           "acks": "all"
        }
     }
  }

  ```

* YAML

  ```yaml

  configs:
     bootstrapServers: "localhost:6667"
     topic: "test"
     acks: "1"
     batchSize: "16384"
     maxRequestSize: "1048576"
     producerConfigProperties:
        client.id: "test-pulsar-producer"
        security.protocol: "SASL_PLAINTEXT"
        sasl.mechanism: "GSSAPI"
        sasl.kerberos.service.name: "kafka"
        acks: "all"

  ```

diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/io-kafka-source.md b/site2/website/versioned_docs/version-2.10.0-deprecated/io-kafka-source.md
deleted file mode 100644
index dd6000aa0bd35e..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/io-kafka-source.md
+++ /dev/null
@@ -1,240 +0,0 @@
---
id: io-kafka-source
title: Kafka source connector
sidebar_label: "Kafka source connector"
original_id: io-kafka-source
---

The Kafka source connector pulls messages from Kafka topics and persists the messages
to Pulsar topics.

This guide explains how to configure and use the Kafka source connector.

## Configuration

The configuration of the Kafka source connector has the following properties.

### Property

| Name | Type| Required | Default | Description
|------|----------|---------|-------------|-------------|
| `bootstrapServers` |String| true | " " (empty string) | A comma-separated list of host and port pairs for establishing the initial connection to the Kafka cluster. |
| `groupId` |String| true | " " (empty string) | A unique string that identifies the group of consumer processes to which this consumer belongs. |
| `fetchMinBytes` | long|false | 1 | The minimum bytes expected for each fetch response. |
| `autoCommitEnabled` | boolean |false | true | If set to true, the consumer's offset is periodically committed in the background.<br/><br/>
This committed offset is used as the position from which a new consumer begins when the process fails. |
| `autoCommitIntervalMs` | long|false | 5000 | The frequency in milliseconds at which the consumer offsets are auto-committed to Kafka if `autoCommitEnabled` is set to true. |
| `heartbeatIntervalMs` | long| false | 3000 | The interval between heartbeats to the consumer when using Kafka's group management facilities.<br/><br/>**Note: `heartbeatIntervalMs` must be smaller than `sessionTimeoutMs`**.|
| `sessionTimeoutMs` | long|false | 30000 | The timeout used to detect consumer failures when using Kafka's group management facility. |
| `topic` | String|true | " " (empty string)| The Kafka topic that sends messages to Pulsar. |
| `consumerConfigProperties` | Map| false | " " (empty string) | The consumer configuration properties to be passed to consumers.<br/><br/>**Note: other properties specified in the connector configuration file take precedence over this configuration**. |
| `keyDeserializationClass` | String|false | org.apache.kafka.common.serialization.StringDeserializer | The deserializer class for Kafka consumers to deserialize keys.<br/>
    The deserializer is set by a specific implementation of [`KafkaAbstractSource`](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSource.java). -| `valueDeserializationClass` | String|false | org.apache.kafka.common.serialization.ByteArrayDeserializer | The deserializer class for Kafka consumers to deserialize values. -| `autoOffsetReset` | String | false | earliest | The default offset reset policy. | - -### Schema Management - -This Kafka source connector applies the schema to the topic depending on the data type that is present on the Kafka topic. -You can detect the data type from the `keyDeserializationClass` and `valueDeserializationClass` configuration parameters. - -If the `valueDeserializationClass` is `org.apache.kafka.common.serialization.StringDeserializer`, you can set Schema.STRING() as schema type on the Pulsar topic. - -If `valueDeserializationClass` is `io.confluent.kafka.serializers.KafkaAvroDeserializer`, Pulsar downloads the AVRO schema from the Confluent Schema Registry® -and sets it properly on the Pulsar topic. - -In this case, you need to set `schema.registry.url` inside of the `consumerConfigProperties` configuration entry -of the source. - -If `keyDeserializationClass` is not `org.apache.kafka.common.serialization.StringDeserializer`, it means -that you do not have a String as key and the Kafka Source uses the KeyValue schema type with the SEPARATED encoding. - -Pulsar supports AVRO format for keys. - -In this case, you can have a Pulsar topic with the following properties: -- Schema: KeyValue schema with SEPARATED encoding -- Key: the content of key of the Kafka message (base64 encoded) -- Value: the content of value of the Kafka message -- KeySchema: the schema detected from `keyDeserializationClass` -- ValueSchema: the schema detected from `valueDeserializationClass` - -Topic compaction and partition routing use the Pulsar key, that contains the Kafka key, and so they are driven by the same value that you have on Kafka. - -When you consume data from Pulsar topics, you can use the `KeyValue` schema. In this way, you can decode the data properly. -If you want to access the raw key, you can use the `Message#getKeyBytes()` API. - -### Example - -Before using the Kafka source connector, you need to create a configuration file through one of the following methods. - -- JSON - - ```json - - { - "bootstrapServers": "pulsar-kafka:9092", - "groupId": "test-pulsar-io", - "topic": "my-topic", - "sessionTimeoutMs": "10000", - "autoCommitEnabled": false - } - - ``` - -- YAML - - ```yaml - - configs: - bootstrapServers: "pulsar-kafka:9092" - groupId: "test-pulsar-io" - topic: "my-topic" - sessionTimeoutMs: "10000" - autoCommitEnabled: false - - ``` - -## Usage - -You can make the Kafka source connector as a Pulsar built-in connector and use it on a standalone cluster or an on-premises cluster. - -### Standalone cluster - -This example describes how to use the Kafka source connector to feed data from Kafka and write data to Pulsar topics in the standalone mode. - -#### Prerequisites - -- Install [Docker](https://docs.docker.com/get-docker/)(Community Edition). - -#### Steps - -1. Download and start the Confluent Platform. - -For details, see the [documentation](https://docs.confluent.io/platform/current/quickstart/ce-docker-quickstart.html#step-1-download-and-start-cp) to install the Kafka service locally. - -2. Pull a Pulsar image and start Pulsar in standalone mode. 
- - ```bash - - docker pull apachepulsar/pulsar:latest - - docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-kafka-standalone apachepulsar/pulsar:latest bin/pulsar standalone - - ``` - -3. Create a producer file _kafka-producer.py_. - - ```python - - from kafka import KafkaProducer - producer = KafkaProducer(bootstrap_servers='localhost:9092') - future = producer.send('my-topic', b'hello world') - future.get() - - ``` - -4. Create a consumer file _pulsar-client.py_. - - ```python - - import pulsar - - client = pulsar.Client('pulsar://localhost:6650') - consumer = client.subscribe('my-topic', subscription_name='my-aa') - - while True: - msg = consumer.receive() - print msg - print dir(msg) - print("Received message: '%s'" % msg.data()) - consumer.acknowledge(msg) - - client.close() - - ``` - -5. Copy the following files to Pulsar. - - ```bash - - docker cp pulsar-io-kafka.nar pulsar-kafka-standalone:/pulsar - docker cp kafkaSourceConfig.yaml pulsar-kafka-standalone:/pulsar/conf - - ``` - -6. Open a new terminal window and start the Kafka source connector in local run mode. - - ```bash - - docker exec -it pulsar-kafka-standalone /bin/bash - - ./bin/pulsar-admin source localrun \ - --archive ./pulsar-io-kafka.nar \ - --tenant public \ - --namespace default \ - --name kafka \ - --destination-topic-name my-topic \ - --source-config-file ./conf/kafkaSourceConfig.yaml \ - --parallelism 1 - - ``` - -7. Open a new terminal window and run the Kafka producer locally. - - ```bash - - python3 kafka-producer.py - - ``` - -8. Open a new terminal window and run the Pulsar consumer locally. - - ```bash - - python3 pulsar-client.py - - ``` - -The following information appears on the consumer terminal window. - - ```bash - - Received message: 'hello world' - - ``` - -### On-premises cluster - -This example explains how to create a Kafka source connector in an on-premises cluster. - -1. Copy the NAR package of the Kafka connector to the Pulsar connectors directory. - - ``` - - cp pulsar-io-kafka-{{connector:version}}.nar $PULSAR_HOME/connectors/pulsar-io-kafka-{{connector:version}}.nar - - ``` - -2. Reload all [built-in connectors](io-connectors.md). - - ``` - - PULSAR_HOME/bin/pulsar-admin sources reload - - ``` - -3. Check whether the Kafka source connector is available on the list or not. - - ``` - - PULSAR_HOME/bin/pulsar-admin sources available-sources - - ``` - -4. Create a Kafka source connector on a Pulsar cluster using the [`pulsar-admin sources create`](/tools/pulsar-admin/) command. - - ``` - - PULSAR_HOME/bin/pulsar-admin sources create \ - --source-config-file - - ``` - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/io-kinesis-sink.md b/site2/website/versioned_docs/version-2.10.0-deprecated/io-kinesis-sink.md deleted file mode 100644 index 810068958d13f3..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/io-kinesis-sink.md +++ /dev/null @@ -1,82 +0,0 @@ ---- -id: io-kinesis-sink -title: Kinesis sink connector -sidebar_label: "Kinesis sink connector" -original_id: io-kinesis-sink ---- - -The Kinesis sink connector pulls data from Pulsar and persists data into Amazon Kinesis. - -## Configuration - -The configuration of the Kinesis sink connector has the following property. 
- -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -`messageFormat`|MessageFormat|true|ONLY_RAW_PAYLOAD|Message format in which Kinesis sink converts Pulsar messages and publishes to Kinesis streams.
<br/><br/>Below are the available options:<br/><br/>`ONLY_RAW_PAYLOAD`: Kinesis sink directly publishes the Pulsar message payload as a message into the configured Kinesis stream.<br/><br/>`FULL_MESSAGE_IN_JSON`: Kinesis sink creates a JSON payload with the Pulsar message payload, properties, and encryptionCtx, and publishes the JSON payload into the configured Kinesis stream.<br/><br/>`FULL_MESSAGE_IN_FB`: Kinesis sink creates a flatbuffer-serialized payload with the Pulsar message payload, properties, and encryptionCtx, and publishes the flatbuffer payload into the configured Kinesis stream.<br/><br/>`FULL_MESSAGE_IN_JSON_EXPAND_VALUE`: Kinesis sink sends a JSON structure containing the record topic name, key, payload, properties, and event time. The record schema is used to convert the value to JSON.
`retainOrdering`|boolean|false|false|Whether the Pulsar connector retains ordering when moving messages from Pulsar to Kinesis.
`awsEndpoint`|String|false|" " (empty string)|The Kinesis end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
`awsRegion`|String|false|" " (empty string)|The AWS region.<br/><br/>**Example**<br/>us-west-1, us-west-2
`awsKinesisStreamName`|String|true|" " (empty string)|The Kinesis stream name.
`awsCredentialPluginName`|String|false|" " (empty string)|The fully-qualified class name of an implementation of {@inject: github:AwsCredentialProviderPlugin:/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java}.<br/><br/>It is a factory class which creates an AWSCredentialsProvider that is used by the Kinesis sink.<br/><br/>
    If it is empty, the Kinesis sink creates a default AWSCredentialsProvider which accepts json-map of credentials in `awsCredentialPluginParam`. -`awsCredentialPluginParam`|String |false|" " (empty string)|The JSON parameter to initialize `awsCredentialsProviderPlugin`. - -### Built-in plugins - -The following are built-in `AwsCredentialProviderPlugin` plugins: - -* `org.apache.pulsar.io.aws.AwsDefaultProviderChainPlugin` - - This plugin takes no configuration, it uses the default AWS provider chain. - - For more information, see [AWS documentation](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html#credentials-default). - -* `org.apache.pulsar.io.aws.STSAssumeRoleProviderPlugin` - - This plugin takes a configuration (via the `awsCredentialPluginParam`) that describes a role to assume when running the KCL. - - This configuration takes the form of a small json document like: - - ```json - - {"roleArn": "arn...", "roleSessionName": "name"} - - ``` - -### Example - -Before using the Kinesis sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "configs": { - "awsEndpoint": "some.endpoint.aws", - "awsRegion": "us-east-1", - "awsKinesisStreamName": "my-stream", - "awsCredentialPluginParam": "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}", - "messageFormat": "ONLY_RAW_PAYLOAD", - "retainOrdering": "true" - } - } - - ``` - -* YAML - - ```yaml - - configs: - awsEndpoint: "some.endpoint.aws" - awsRegion: "us-east-1" - awsKinesisStreamName: "my-stream" - awsCredentialPluginParam: "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}" - messageFormat: "ONLY_RAW_PAYLOAD" - retainOrdering: "true" - - ``` - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/io-kinesis-source.md b/site2/website/versioned_docs/version-2.10.0-deprecated/io-kinesis-source.md deleted file mode 100644 index 1b45e264680e61..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/io-kinesis-source.md +++ /dev/null @@ -1,83 +0,0 @@ ---- -id: io-kinesis-source -title: Kinesis source connector -sidebar_label: "Kinesis source connector" -original_id: io-kinesis-source ---- - -The Kinesis source connector pulls data from Amazon Kinesis and persists data into Pulsar. - -This connector uses the [Kinesis Consumer Library](https://github.com/awslabs/amazon-kinesis-client) (KCL) to do the actual consuming of messages. The KCL uses DynamoDB to track state for consumers. - -> Note: currently, the Kinesis source connector only supports raw messages. If you use KMS encrypted messages, the encrypted messages are sent to downstream. This connector will support decrypting messages in the future release. - - -## Configuration - -The configuration of the Kinesis source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -`initialPositionInStream`|InitialPositionInStream|false|LATEST|The position where the connector starts from.
<br/><br/>Below are the available options:<br/><br/>`AT_TIMESTAMP`: start from the record at or after the specified timestamp.<br/><br/>`LATEST`: start after the most recent data record.<br/><br/>`TRIM_HORIZON`: start from the oldest available data record.
`startAtTime`|Date|false|" " (empty string)|If `initialPositionInStream` is set to `AT_TIMESTAMP`, this specifies the point in time at which to start consumption.
`applicationName`|String|false|Pulsar IO connector|The name of the Amazon Kinesis application.<br/><br/>By default, the application name is included in the user agent string used to make AWS requests. This can assist with troubleshooting, for example, by distinguishing requests made by separate connector instances.
`checkpointInterval`|long|false|60000|The frequency of the Kinesis stream checkpoint in milliseconds.
`backoffTime`|long|false|3000|The amount of time, in milliseconds, to delay between requests when the connector encounters a throttling exception from AWS Kinesis.
`numRetries`|int|false|3|The number of re-attempts when the connector encounters an exception while trying to set a checkpoint.
`receiveQueueSize`|int|false|1000|The maximum number of AWS records that can be buffered inside the connector.<br/><br/>Once `receiveQueueSize` is reached, the connector does not consume any messages from Kinesis until some messages in the queue are successfully consumed.
`dynamoEndpoint`|String|false|" " (empty string)|The Dynamo end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
`cloudwatchEndpoint`|String|false|" " (empty string)|The Cloudwatch end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
`useEnhancedFanOut`|boolean|false|true|If set to true, the connector uses Kinesis enhanced fan-out.<br/><br/>If set to false, it uses polling.
`awsEndpoint`|String|false|" " (empty string)|The Kinesis end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
`awsRegion`|String|false|" " (empty string)|The AWS region.<br/><br/>**Example**<br/>us-west-1, us-west-2
`awsKinesisStreamName`|String|true|" " (empty string)|The Kinesis stream name.
`awsCredentialPluginName`|String|false|" " (empty string)|The fully-qualified class name of an implementation of {@inject: github:AwsCredentialProviderPlugin:/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java}.<br/><br/>`awsCredentialProviderPlugin` has the following built-in plugins:<br/><br/>`org.apache.pulsar.io.kinesis.AwsDefaultProviderChainPlugin`:<br/>this plugin uses the default AWS provider chain.<br/>For more information, see [using the default credential provider chain](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html#credentials-default).<br/><br/>`org.apache.pulsar.io.kinesis.STSAssumeRoleProviderPlugin`:<br/>this plugin takes a configuration via `awsCredentialPluginParam` that describes a role to assume when running the KCL.<br/>**JSON configuration example**<br/>`{"roleArn": "arn...", "roleSessionName": "name"}`<br/><br/>`awsCredentialPluginName` is a factory class which creates an AWSCredentialsProvider that is used by the Kinesis source.<br/><br/>If `awsCredentialPluginName` is set to empty, the Kinesis source creates a default AWSCredentialsProvider which accepts a JSON map of credentials in `awsCredentialPluginParam`.
  120. -`awsCredentialPluginParam`|String |false|" " (empty string)|The JSON parameter to initialize `awsCredentialsProviderPlugin`. - -### Example - -Before using the Kinesis source connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "configs": { - "awsEndpoint": "https://some.endpoint.aws", - "awsRegion": "us-east-1", - "awsKinesisStreamName": "my-stream", - "awsCredentialPluginParam": "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}", - "applicationName": "My test application", - "checkpointInterval": "30000", - "backoffTime": "4000", - "numRetries": "3", - "receiveQueueSize": 2000, - "initialPositionInStream": "TRIM_HORIZON", - "startAtTime": "2019-03-05T19:28:58.000Z" - } - } - - ``` - -* YAML - - ```yaml - - configs: - awsEndpoint: "https://some.endpoint.aws" - awsRegion: "us-east-1" - awsKinesisStreamName: "my-stream" - awsCredentialPluginParam: "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}" - applicationName: "My test application" - checkpointInterval: 30000 - backoffTime: 4000 - numRetries: 3 - receiveQueueSize: 2000 - initialPositionInStream: "TRIM_HORIZON" - startAtTime: "2019-03-05T19:28:58.000Z" - - ``` - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/io-mongo-sink.md b/site2/website/versioned_docs/version-2.10.0-deprecated/io-mongo-sink.md deleted file mode 100644 index 7fc77ec80cc680..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/io-mongo-sink.md +++ /dev/null @@ -1,58 +0,0 @@ ---- -id: io-mongo-sink -title: MongoDB sink connector -sidebar_label: "MongoDB sink connector" -original_id: io-mongo-sink ---- - -The MongoDB sink connector pulls messages from Pulsar topics -and persists the messages to collections. - -## Configuration - -The configuration of the MongoDB sink connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `mongoUri` | String| true| " " (empty string) | The MongoDB URI to which the connector connects.

    For more information, see [connection string URI format](https://docs.mongodb.com/manual/reference/connection-string/). | -| `database` | String| true| " " (empty string)| The database name to which the collection belongs. | -| `collection` | String| true| " " (empty string)| The collection name to which the connector writes messages. | -| `batchSize` | int|false|100 | The batch size of writing messages to collections. | -| `batchTimeMs` |long|false|1000| The batch operation interval in milliseconds. | - - -### Example - -Before using the Mongo sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "configs": { - "mongoUri": "mongodb://localhost:27017", - "database": "pulsar", - "collection": "messages", - "batchSize": "2", - "batchTimeMs": "500" - } - } - - ``` - -* YAML - - ```yaml - - configs: - mongoUri: "mongodb://localhost:27017" - database: "pulsar" - collection: "messages" - batchSize: 2 - batchTimeMs: 500 - - ``` - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/io-netty-source.md b/site2/website/versioned_docs/version-2.10.0-deprecated/io-netty-source.md deleted file mode 100644 index 2caedf2bce69bc..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/io-netty-source.md +++ /dev/null @@ -1,243 +0,0 @@ ---- -id: io-netty-source -title: Netty source connector -sidebar_label: "Netty source connector" -original_id: io-netty-source ---- - -The Netty source connector opens a port that accepts incoming data via the configured network protocol -and publish it to user-defined Pulsar topics. - -This connector can be used in a containerized (for example, k8s) deployment. Otherwise, if the connector is running in process or thread mode, the instance may be conflicting on listening to ports. - -## Configuration - -The configuration of the Netty source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `type` |String| true |tcp | The network protocol over which data is transmitted to netty.
<br/><br/>Below are the available options:<br/>tcp<br/>http<br/>udp<br/>
  124. | -| `host` | String|true | 127.0.0.1 | The host name or address on which the source instance listen. | -| `port` | int|true | 10999 | The port on which the source instance listen. | -| `numberOfThreads` |int| true |1 | The number of threads of Netty TCP server to accept incoming connections and handle the traffic of accepted connections. | - - -### Example - -Before using the Netty source connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "configs": { - "type": "tcp", - "host": "127.0.0.1", - "port": "10911", - "numberOfThreads": "1" - } - } - - ``` - -* YAML - - ```yaml - - configs: - type: "tcp" - host: "127.0.0.1" - port: 10999 - numberOfThreads: 1 - - ``` - -## Usage - -The following examples show how to use the Netty source connector with TCP and HTTP. - -### TCP - -1. Start Pulsar standalone. - - ```bash - - $ docker pull apachepulsar/pulsar:{version} - - $ docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-netty-standalone apachepulsar/pulsar:{version} bin/pulsar standalone - - ``` - -2. Create a configuration file _netty-source-config.yaml_. - - ```yaml - - configs: - type: "tcp" - host: "127.0.0.1" - port: 10999 - numberOfThreads: 1 - - ``` - -3. Copy the configuration file _netty-source-config.yaml_ to Pulsar server. - - ```bash - - $ docker cp netty-source-config.yaml pulsar-netty-standalone:/pulsar/conf/ - - ``` - -4. Download the Netty source connector. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - curl -O http://mirror-hk.koddos.net/apache/pulsar/pulsar-{version}/connectors/pulsar-io-netty-{version}.nar - - ``` - -5. Start the Netty source connector. - - ```bash - - $ ./bin/pulsar-admin sources localrun \ - --archive pulsar-io-@pulsar:version@.nar \ - --tenant public \ - --namespace default \ - --name netty \ - --destination-topic-name netty-topic \ - --source-config-file netty-source-config.yaml \ - --parallelism 1 - - ``` - -6. Consume data. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - - $ ./bin/pulsar-client consume -t Exclusive -s netty-sub netty-topic -n 0 - - ``` - -7. Open another terminal window to send data to the Netty source. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - - $ apt-get update - - $ apt-get -y install telnet - - $ root@1d19327b2c67:/pulsar# telnet 127.0.0.1 10999 - Trying 127.0.0.1... - Connected to 127.0.0.1. - Escape character is '^]'. - hello - world - - ``` - -8. The following information appears on the consumer terminal window. - - ```bash - - ----- got message ----- - hello - - ----- got message ----- - world - - ``` - -### HTTP - -1. Start Pulsar standalone. - - ```bash - - $ docker pull apachepulsar/pulsar:{version} - - $ docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-netty-standalone apachepulsar/pulsar:{version} bin/pulsar standalone - - ``` - -2. Create a configuration file _netty-source-config.yaml_. - - ```yaml - - configs: - type: "http" - host: "127.0.0.1" - port: 10999 - numberOfThreads: 1 - - ``` - -3. Copy the configuration file _netty-source-config.yaml_ to Pulsar server. - - ```bash - - $ docker cp netty-source-config.yaml pulsar-netty-standalone:/pulsar/conf/ - - ``` - -4. Download the Netty source connector. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - curl -O http://mirror-hk.koddos.net/apache/pulsar/pulsar-{version}/connectors/pulsar-io-netty-{version}.nar - - ``` - -5. 
Start the Netty source connector. - - ```bash - - $ ./bin/pulsar-admin sources localrun \ - --archive pulsar-io-@pulsar:version@.nar \ - --tenant public \ - --namespace default \ - --name netty \ - --destination-topic-name netty-topic \ - --source-config-file netty-source-config.yaml \ - --parallelism 1 - - ``` - -6. Consume data. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - - $ ./bin/pulsar-client consume -t Exclusive -s netty-sub netty-topic -n 0 - - ``` - -7. Open another terminal window to send data to the Netty source. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - - $ curl -X POST --data 'hello, world!' http://127.0.0.1:10999/ - - ``` - -8. The following information appears on the consumer terminal window. - - ```bash - - ----- got message ----- - hello, world! - - ``` - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/io-nsq-source.md b/site2/website/versioned_docs/version-2.10.0-deprecated/io-nsq-source.md deleted file mode 100644 index b61e7e100c22e1..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/io-nsq-source.md +++ /dev/null @@ -1,21 +0,0 @@ ---- -id: io-nsq-source -title: NSQ source connector -sidebar_label: "NSQ source connector" -original_id: io-nsq-source ---- - -The NSQ source connector receives messages from NSQ topics -and writes messages to Pulsar topics. - -## Configuration - -The configuration of the NSQ source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `lookupds` |String| true | " " (empty string) | A comma-separated list of nsqlookupds to connect to. | -| `topic` | String|true | " " (empty string) | The NSQ topic to transport. | -| `channel` | String |false | pulsar-transport-{$topic} | The channel to consume from on the provided NSQ topic. | \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/io-overview.md b/site2/website/versioned_docs/version-2.10.0-deprecated/io-overview.md deleted file mode 100644 index 82d0cd04a31d7a..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/io-overview.md +++ /dev/null @@ -1,163 +0,0 @@ ---- -id: io-overview -title: Pulsar connector overview -sidebar_label: "Overview" -original_id: io-overview ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -Messaging systems are most powerful when you can easily use them with external systems like databases and other messaging systems. - -**Pulsar IO connectors** enable you to easily create, deploy, and manage connectors that interact with external systems, such as [Apache Cassandra](https://cassandra.apache.org), [Aerospike](https://www.aerospike.com), and many others. - - -## Concept - -Pulsar IO connectors come in two types: **source** and **sink**. - -This diagram illustrates the relationship between source, Pulsar, and sink: - -![Pulsar IO diagram](/assets/pulsar-io.png "Pulsar IO connectors (sources and sinks)") - - -### Source - -> Sources **feed data from external systems into Pulsar**. - -Common sources include other messaging systems and firehose-style data pipeline APIs. - -For the complete list of Pulsar built-in source connectors, see [source connector](io-connectors.md#source-connector). - -### Sink - -> Sinks **feed data from Pulsar into external systems**. - -Common sinks include other messaging systems and SQL and NoSQL databases. 
- -For the complete list of Pulsar built-in sink connectors, see [sink connector](io-connectors.md#sink-connector). - -## Processing guarantee - -Processing guarantees are used to handle errors when writing messages to Pulsar topics. - -> Pulsar connectors and Functions use the **same** processing guarantees as below. - -Delivery semantic | Description -:------------------|:------- -`at-most-once` | Each message sent to a connector is to be **processed once** or **not to be processed**. -`at-least-once` | Each message sent to a connector is to be **processed once** or **more than once**. -`effectively-once` | Each message sent to a connector has **one output associated** with it. - -> Processing guarantees for connectors not just rely on Pulsar guarantee but also **relate to external systems**, that is, **the implementation of source and sink**. - -* Source: Pulsar ensures that writing messages to Pulsar topics respects to the processing guarantees. It is within Pulsar's control. - -* Sink: the processing guarantees rely on the sink implementation. If the sink implementation does not handle retries in an idempotent way, the sink does not respect to the processing guarantees. - -### Set - -When creating a connector, you can set the processing guarantee with the following semantics: - -* ATLEAST_ONCE - -* ATMOST_ONCE - -* EFFECTIVELY_ONCE - -> If `--processing-guarantees` is not specified when creating a connector, the default semantic is `ATLEAST_ONCE`. - -Here takes **Admin CLI** as an example. For more information about **REST API** or **JAVA Admin API**, see [here](io-use.md#create). - -````mdx-code-block - - - - -```bash - -$ bin/pulsar-admin sources create \ - --processing-guarantees ATMOST_ONCE \ - # Other source configs - -``` - -For more information about the options of `pulsar-admin sources create`, see [here](reference-connector-admin.md#create). - - - - -```bash - -$ bin/pulsar-admin sinks create \ - --processing-guarantees EFFECTIVELY_ONCE \ - # Other sink configs - -``` - -For more information about the options of `pulsar-admin sinks create`, see [here](reference-connector-admin.md#create-1). - - - - -```` - -### Update - -After creating a connector, you can update the processing guarantee with the following semantics: - -* ATLEAST_ONCE - -* ATMOST_ONCE - -* EFFECTIVELY_ONCE - -Here takes **Admin CLI** as an example. For more information about **REST API** or **JAVA Admin API**, see [here](io-use.md#create). - -````mdx-code-block - - - - -```bash - -$ bin/pulsar-admin sources update \ - --processing-guarantees EFFECTIVELY_ONCE \ - # Other source configs - -``` - -For more information about the options of `pulsar-admin sources update`, see [here](reference-connector-admin.md#update). - - - - -```bash - -$ bin/pulsar-admin sinks update \ - --processing-guarantees ATMOST_ONCE \ - # Other sink configs - -``` - -For more information about the options of `pulsar-admin sinks update`, see [here](reference-connector-admin.md#update-1). - - - - -```` - - -## Work with connector - -You can manage Pulsar connectors (for example, create, update, start, stop, restart, reload, delete and perform other operations on connectors) via the `Connector Admin CLI` with sources and sinks subcommands. For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/). - -Connectors (sources and sinks) and Functions are components of instances, and they all run on Functions workers. 
When managing a source, sink or function via the `Connector Admin CLI` or [Functions Admin CLI](functions-cli.md), an instance is started on a worker. For more information, see [Functions worker](functions-worker.md#run-functions-worker-separately). diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/io-quickstart.md b/site2/website/versioned_docs/version-2.10.0-deprecated/io-quickstart.md deleted file mode 100644 index 1b6528d49541ba..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/io-quickstart.md +++ /dev/null @@ -1,963 +0,0 @@ ---- -id: io-quickstart -title: How to connect Pulsar to database -sidebar_label: "Get started" -original_id: io-quickstart ---- - -This tutorial provides a hands-on look at how you can move data out of Pulsar without writing a single line of code. - -It is helpful to review the [concepts](io-overview.md) for Pulsar I/O with running the steps in this guide to gain a deeper understanding. - -At the end of this tutorial, you are able to: - -- [Connect Pulsar to Cassandra](#connect-pulsar-to-cassandra) - -- [Connect Pulsar to PostgreSQL](#connect-pulsar-to-postgreSQL) - -:::tip - -* These instructions assume you are running Pulsar in [standalone mode](getting-started-standalone.md). However, all -the commands used in this tutorial can be used in a multi-node Pulsar cluster without any changes. -* All the instructions are assumed to run at the root directory of a Pulsar binary distribution. - -::: - -## Install Pulsar and built-in connector - -Before connecting Pulsar to a database, you need to install Pulsar and the desired built-in connector. - -For more information about **how to install a standalone Pulsar and built-in connectors**, see [here](getting-started-standalone.md/#installing-pulsar). - -## Start Pulsar standalone - -1. Start Pulsar locally. - - ```bash - - bin/pulsar standalone - - ``` - - All the components of a Pulsar service are started in order. - - You can curl those pulsar service endpoints to make sure Pulsar service is up and running correctly. - -2. Check Pulsar binary protocol port. - - ```bash - - telnet localhost 6650 - - ``` - -3. Check Pulsar Function cluster. - - ```bash - - curl -s http://localhost:8080/admin/v2/worker/cluster - - ``` - - **Example output** - - ```json - - [{"workerId":"c-standalone-fw-localhost-6750","workerHostname":"localhost","port":6750}] - - ``` - -4. Make sure a public tenant and a default namespace exist. - - ```bash - - curl -s http://localhost:8080/admin/v2/namespaces/public - - ``` - - **Example output** - - ```json - - ["public/default","public/functions"] - - ``` - -5. All built-in connectors should be listed as available. 
- - ```bash - - curl -s http://localhost:8080/admin/v2/functions/connectors - - ``` - - **Example output** - - ```json - - [{"name":"aerospike","description":"Aerospike database sink","sinkClass":"org.apache.pulsar.io.aerospike.AerospikeStringSink"},{"name":"cassandra","description":"Writes data into Cassandra","sinkClass":"org.apache.pulsar.io.cassandra.CassandraStringSink"},{"name":"kafka","description":"Kafka source and sink connector","sourceClass":"org.apache.pulsar.io.kafka.KafkaStringSource","sinkClass":"org.apache.pulsar.io.kafka.KafkaBytesSink"},{"name":"kinesis","description":"Kinesis sink connector","sinkClass":"org.apache.pulsar.io.kinesis.KinesisSink"},{"name":"rabbitmq","description":"RabbitMQ source connector","sourceClass":"org.apache.pulsar.io.rabbitmq.RabbitMQSource"},{"name":"twitter","description":"Ingest data from Twitter firehose","sourceClass":"org.apache.pulsar.io.twitter.TwitterFireHose"}] - - ``` - - If an error occurs when starting Pulsar service, you may see an exception at the terminal running `pulsar/standalone`, - or you can navigate to the `logs` directory under the Pulsar directory to view the logs. - -## Connect Pulsar to Cassandra - -This section demonstrates how to connect Pulsar to Cassandra. - -:::tip - -* Make sure you have Docker installed. If you do not have one, see [install Docker](https://docs.docker.com/docker-for-mac/install/). -* The Cassandra sink connector reads messages from Pulsar topics and writes the messages into Cassandra tables. For more information, see [Cassandra sink connector](io-cassandra-sink.md). - -::: - -### Setup a Cassandra cluster - -This example uses `cassandra` Docker image to start a single-node Cassandra cluster in Docker. - -1. Start a Cassandra cluster. - - ```bash - - docker run -d --rm --name=cassandra -p 9042:9042 cassandra - - ``` - - :::note - - Before moving to the next steps, make sure the Cassandra cluster is running. - - ::: - -2. Make sure the Docker process is running. - - ```bash - - docker ps - - ``` - -3. Check the Cassandra logs to make sure the Cassandra process is running as expected. - - ```bash - - docker logs cassandra - - ``` - -4. Check the status of the Cassandra cluster. - - ```bash - - docker exec cassandra nodetool status - - ``` - - **Example output** - - ``` - - Datacenter: datacenter1 - ======================= - Status=Up/Down - |/ State=Normal/Leaving/Joining/Moving - -- Address Load Tokens Owns (effective) Host ID Rack - UN 172.17.0.2 103.67 KiB 256 100.0% af0e4b2f-84e0-4f0b-bb14-bd5f9070ff26 rack1 - - ``` - -5. Use `cqlsh` to connect to the Cassandra cluster. - - ```bash - - $ docker exec -ti cassandra cqlsh localhost - Connected to Test Cluster at localhost:9042. - [cqlsh 5.0.1 | Cassandra 3.11.2 | CQL spec 3.4.4 | Native protocol v4] - Use HELP for help. - cqlsh> - - ``` - -6. Create a keyspace `pulsar_test_keyspace`. - - ```bash - - cqlsh> CREATE KEYSPACE pulsar_test_keyspace WITH replication = {'class':'SimpleStrategy', 'replication_factor':1}; - - ``` - -7. Create a table `pulsar_test_table`. - - ```bash - - cqlsh> USE pulsar_test_keyspace; - cqlsh:pulsar_test_keyspace> CREATE TABLE pulsar_test_table (key text PRIMARY KEY, col text); - - ``` - -### Configure a Cassandra sink - -Now that we have a Cassandra cluster running locally. - -In this section, you need to configure a Cassandra sink connector. - -To run a Cassandra sink connector, you need to prepare a configuration file including the information that Pulsar connector runtime needs to know. 
- -For example, how Pulsar connector can find the Cassandra cluster, what is the keyspace and the table that Pulsar connector uses for writing Pulsar messages to, and so on. - -You can create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "roots": "localhost:9042", - "keyspace": "pulsar_test_keyspace", - "columnFamily": "pulsar_test_table", - "keyname": "key", - "columnName": "col" - } - - ``` - -* YAML - - ```yaml - - configs: - roots: "localhost:9042" - keyspace: "pulsar_test_keyspace" - columnFamily: "pulsar_test_table" - keyname: "key" - columnName: "col" - - ``` - -For more information, see [Cassandra sink connector](io-cassandra-sink.md). - -### Create a Cassandra sink - -You can use the [Connector Admin CLI](/tools/pulsar-admin/) -to create a sink connector and perform other operations on them. - -Run the following command to create a Cassandra sink connector with sink type _cassandra_ and the config file _examples/cassandra-sink.yml_ created previously. - -#### Note -> The `sink-type` parameter of the currently built-in connectors is determined by the setting of the `name` parameter specified in the pulsar-io.yaml file. - -```bash - -bin/pulsar-admin sinks create \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink \ - --sink-type cassandra \ - --sink-config-file examples/cassandra-sink.yml \ - --inputs test_cassandra - -``` - -Once the command is executed, Pulsar creates the sink connector _cassandra-test-sink_. - -This sink connector runs -as a Pulsar Function and writes the messages produced in the topic _test_cassandra_ to the Cassandra table _pulsar_test_table_. - -### Inspect a Cassandra sink - -You can use the [Connector Admin CLI](/tools/pulsar-admin/) -to monitor a connector and perform other operations on it. - -* Get the information of a Cassandra sink. - - ```bash - - bin/pulsar-admin sinks get \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink - - ``` - - **Example output** - - ```json - - { - "tenant": "public", - "namespace": "default", - "name": "cassandra-test-sink", - "className": "org.apache.pulsar.io.cassandra.CassandraStringSink", - "inputSpecs": { - "test_cassandra": { - "isRegexPattern": false - } - }, - "configs": { - "roots": "localhost:9042", - "keyspace": "pulsar_test_keyspace", - "columnFamily": "pulsar_test_table", - "keyname": "key", - "columnName": "col" - }, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "autoAck": true, - "archive": "builtin://cassandra" - } - - ``` - -* Check the status of a Cassandra sink. - - ```bash - - bin/pulsar-admin sinks status \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink - - ``` - - **Example output** - - ```json - - { - "numInstances" : 1, - "numRunning" : 1, - "instances" : [ { - "instanceId" : 0, - "status" : { - "running" : true, - "error" : "", - "numRestarts" : 0, - "numReadFromPulsar" : 0, - "numSystemExceptions" : 0, - "latestSystemExceptions" : [ ], - "numSinkExceptions" : 0, - "latestSinkExceptions" : [ ], - "numWrittenToSink" : 0, - "lastReceivedTime" : 0, - "workerId" : "c-standalone-fw-localhost-8080" - } - } ] - } - - ``` - -### Verify a Cassandra sink - -1. Produce some messages to the input topic of the Cassandra sink _test_cassandra_. - - ```bash - - for i in {0..9}; do bin/pulsar-client produce -m "key-$i" -n 1 test_cassandra; done - - ``` - -2. Inspect the status of the Cassandra sink _test_cassandra_. 
- - ```bash - - bin/pulsar-admin sinks status \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink - - ``` - - You can see 10 messages are processed by the Cassandra sink _test_cassandra_. - - **Example output** - - ```json - - { - "numInstances" : 1, - "numRunning" : 1, - "instances" : [ { - "instanceId" : 0, - "status" : { - "running" : true, - "error" : "", - "numRestarts" : 0, - "numReadFromPulsar" : 10, - "numSystemExceptions" : 0, - "latestSystemExceptions" : [ ], - "numSinkExceptions" : 0, - "latestSinkExceptions" : [ ], - "numWrittenToSink" : 10, - "lastReceivedTime" : 1551685489136, - "workerId" : "c-standalone-fw-localhost-8080" - } - } ] - } - - ``` - -3. Use `cqlsh` to connect to the Cassandra cluster. - - ```bash - - docker exec -ti cassandra cqlsh localhost - - ``` - -4. Check the data of the Cassandra table _pulsar_test_table_. - - ```bash - - cqlsh> use pulsar_test_keyspace; - cqlsh:pulsar_test_keyspace> select * from pulsar_test_table; - - key | col - --------+-------- - key-5 | key-5 - key-0 | key-0 - key-9 | key-9 - key-2 | key-2 - key-1 | key-1 - key-3 | key-3 - key-6 | key-6 - key-7 | key-7 - key-4 | key-4 - key-8 | key-8 - - ``` - -### Delete a Cassandra Sink - -You can use the [Connector Admin CLI](/tools/pulsar-admin/) -to delete a connector and perform other operations on it. - -```bash - -bin/pulsar-admin sinks delete \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink - -``` - -## Connect Pulsar to PostgreSQL - -This section demonstrates how to connect Pulsar to PostgreSQL. - -:::tip - -* Make sure you have Docker installed. If you do not have one, see [install Docker](https://docs.docker.com/docker-for-mac/install/). -* The JDBC sink connector pulls messages from Pulsar topics and persists the messages to ClickHouse, MariaDB, PostgreSQL, or SQlite. - -::: - ->For more information, see [JDBC sink connector](io-jdbc-sink.md). - - -### Setup a PostgreSQL cluster - -This example uses the PostgreSQL 12 docker image to start a single-node PostgreSQL cluster in Docker. - -1. Pull the PostgreSQL 12 image from Docker. - - ```bash - - $ docker pull postgres:12 - - ``` - -2. Start PostgreSQL. - - ```bash - - $ docker run -d -it --rm \ - --name pulsar-postgres \ - -p 5432:5432 \ - -e POSTGRES_PASSWORD=password \ - -e POSTGRES_USER=postgres \ - postgres:12 - - ``` - - #### Tip - - Flag | Description | This example - ---|---|---| - `-d` | To start a container in detached mode. | / - `-it` | Keep STDIN open even if not attached and allocate a terminal. | / - `--rm` | Remove the container automatically when it exits. | / - `-name` | Assign a name to the container. | This example specifies _pulsar-postgres_ for the container. - `-p` | Publish the port of the container to the host. | This example publishes the port _5432_ of the container to the host. - `-e` | Set environment variables. | This example sets the following variables:
    - The password for the user is _password_.
    - The name for the user is _postgres_. - - :::tip - - For more information about Docker commands, see [Docker CLI](https://docs.docker.com/engine/reference/commandline/run/). - - ::: - -3. Check if PostgreSQL has been started successfully. - - ```bash - - $ docker logs -f pulsar-postgres - - ``` - - PostgreSQL has been started successfully if the following message appears. - - ```text - - 2020-05-11 20:09:24.492 UTC [1] LOG: starting PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit - 2020-05-11 20:09:24.492 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 - 2020-05-11 20:09:24.492 UTC [1] LOG: listening on IPv6 address "::", port 5432 - 2020-05-11 20:09:24.499 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" - 2020-05-11 20:09:24.523 UTC [55] LOG: database system was shut down at 2020-05-11 20:09:24 UTC - 2020-05-11 20:09:24.533 UTC [1] LOG: database system is ready to accept connections - - ``` - -4. Access to PostgreSQL. - - ```bash - - $ docker exec -it pulsar-postgres /bin/bash - - ``` - -5. Create a PostgreSQL table _pulsar_postgres_jdbc_sink_. - - ```bash - - $ psql -U postgres postgres - - postgres=# create table if not exists pulsar_postgres_jdbc_sink - ( - id serial PRIMARY KEY, - name VARCHAR(255) NOT NULL - ); - - ``` - -### Configure a JDBC sink - -Now we have a PostgreSQL running locally. - -In this section, you need to configure a JDBC sink connector. - -1. Add a configuration file. - - To run a JDBC sink connector, you need to prepare a YAML configuration file including the information that Pulsar connector runtime needs to know. - - For example, how Pulsar connector can find the PostgreSQL cluster, what is the JDBC URL and the table that Pulsar connector uses for writing messages. - - Create a _pulsar-postgres-jdbc-sink.yaml_ file, copy the following contents to this file, and place the file in the `pulsar/connectors` folder. - - ```yaml - - configs: - userName: "postgres" - password: "password" - jdbcUrl: "jdbc:postgresql://localhost:5432/postgres" - tableName: "pulsar_postgres_jdbc_sink" - - ``` - -2. Create a schema. - - Create a _avro-schema_ file, copy the following contents to this file, and place the file in the `pulsar/connectors` folder. - - ```json - - { - "type": "AVRO", - "schema": "{\"type\":\"record\",\"name\":\"Test\",\"fields\":[{\"name\":\"id\",\"type\":[\"null\",\"int\"]},{\"name\":\"name\",\"type\":[\"null\",\"string\"]}]}", - "properties": {} - } - - ``` - - :::tip - - For more information about AVRO, see [Apache Avro](https://avro.apache.org/docs/1.9.1/). - - ::: - -3. Upload a schema to a topic. - - This example uploads the _avro-schema_ schema to the _pulsar-postgres-jdbc-sink-topic_ topic. - - ```bash - - $ bin/pulsar-admin schemas upload pulsar-postgres-jdbc-sink-topic -f ./connectors/avro-schema - - ``` - -4. Check if the schema has been uploaded successfully. - - ```bash - - $ bin/pulsar-admin schemas get pulsar-postgres-jdbc-sink-topic - - ``` - - The schema has been uploaded successfully if the following message appears. - - ```json - - {"name":"pulsar-postgres-jdbc-sink-topic","schema":"{\"type\":\"record\",\"name\":\"Test\",\"fields\":[{\"name\":\"id\",\"type\":[\"null\",\"int\"]},{\"name\":\"name\",\"type\":[\"null\",\"string\"]}]}","type":"AVRO","properties":{}} - - ``` - -### Create a JDBC sink - -You can use the [Connector Admin CLI](/tools/pulsar-admin/) -to create a sink connector and perform other operations on it. 
This example creates a sink connector and specifies the desired information.

```bash

$ bin/pulsar-admin sinks create \
--archive ./connectors/pulsar-io-jdbc-postgres-@pulsar:version@.nar \
--inputs pulsar-postgres-jdbc-sink-topic \
--name pulsar-postgres-jdbc-sink \
--sink-config-file ./connectors/pulsar-postgres-jdbc-sink.yaml \
--parallelism 1

```

Once the command is executed, Pulsar creates a sink connector _pulsar-postgres-jdbc-sink_. This sink connector runs as a Pulsar Function and writes the messages produced in the topic _pulsar-postgres-jdbc-sink-topic_ to the PostgreSQL table _pulsar_postgres_jdbc_sink_.

#### Tip

Flag | Description | This example
---|---|---
`--archive` | The path to the archive file for the sink. | _pulsar-io-jdbc-postgres-@pulsar:version@.nar_
`--inputs` | The input topic(s) of the sink.

    Multiple topics can be specified as a comma-separated list.|| - `--name` | The name of the sink. | _pulsar-postgres-jdbc-sink_ | - `--sink-config-file` | The path to a YAML config file specifying the configuration of the sink. | _pulsar-postgres-jdbc-sink.yaml_ | - `--parallelism` | The parallelism factor of the sink.

    For example, the number of sink instances to run. | _1_ | - -:::tip - -For more information about `pulsar-admin sinks create options`, see [Pulsar admin docs](/tools/pulsar-admin/). - -::: - -The sink has been created successfully if the following message appears. - -```bash - -Created successfully - -``` - -### Inspect a JDBC sink - -You can use the [Connector Admin CLI](/tools/pulsar-admin/) -to monitor a connector and perform other operations on it. - -* List all running JDBC sink(s). - - ```bash - - $ bin/pulsar-admin sinks list \ - --tenant public \ - --namespace default - - ``` - - :::tip - - For more information about `pulsar-admin sinks list options`, see [Pulsar admin docs](/tools/pulsar-admin/). - - ::: - - The result shows that only the _postgres-jdbc-sink_ sink is running. - - ```json - - [ - "pulsar-postgres-jdbc-sink" - ] - - ``` - -* Get the information of a JDBC sink. - - ```bash - - $ bin/pulsar-admin sinks get \ - --tenant public \ - --namespace default \ - --name pulsar-postgres-jdbc-sink - - ``` - - :::tip - - For more information about `pulsar-admin sinks get options`, see [Pulsar admin docs](/tools/pulsar-admin/). - - ::: - - The result shows the information of the sink connector, including tenant, namespace, topic and so on. - - ```json - - { - "tenant": "public", - "namespace": "default", - "name": "pulsar-postgres-jdbc-sink", - "className": "org.apache.pulsar.io.jdbc.PostgresJdbcAutoSchemaSink", - "inputSpecs": { - "pulsar-postgres-jdbc-sink-topic": { - "isRegexPattern": false - } - }, - "configs": { - "password": "password", - "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink", - "userName": "postgres", - "tableName": "pulsar_postgres_jdbc_sink" - }, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "autoAck": true - } - - ``` - -* Get the status of a JDBC sink - - ```bash - - $ bin/pulsar-admin sinks status \ - --tenant public \ - --namespace default \ - --name pulsar-postgres-jdbc-sink - - ``` - - :::tip - - For more information about `pulsar-admin sinks status options`, see [Pulsar admin docs](/tools/pulsar-admin/). - - ::: - - The result shows the current status of sink connector, including the number of instances, running status, worker ID and so on. - - ```json - - { - "numInstances" : 1, - "numRunning" : 1, - "instances" : [ { - "instanceId" : 0, - "status" : { - "running" : true, - "error" : "", - "numRestarts" : 0, - "numReadFromPulsar" : 0, - "numSystemExceptions" : 0, - "latestSystemExceptions" : [ ], - "numSinkExceptions" : 0, - "latestSinkExceptions" : [ ], - "numWrittenToSink" : 0, - "lastReceivedTime" : 0, - "workerId" : "c-standalone-fw-192.168.2.52-8080" - } - } ] - } - - ``` - -### Stop a JDBC sink - -You can use the [Connector Admin CLI](/tools/pulsar-admin/) -to stop a connector and perform other operations on it. - -```bash - -$ bin/pulsar-admin sinks stop \ ---tenant public \ ---namespace default \ ---name pulsar-postgres-jdbc-sink - -``` - -:::tip - -For more information about `pulsar-admin sinks stop options`, see [Pulsar admin docs](/tools/pulsar-admin/). - -::: - -The sink instance has been stopped successfully if the following message disappears. - -```bash - -Stopped successfully - -``` - -### Restart a JDBC sink - -You can use the [Connector Admin CLI](/tools/pulsar-admin/) -to restart a connector and perform other operations on it. 
- -```bash - -$ bin/pulsar-admin sinks restart \ ---tenant public \ ---namespace default \ ---name pulsar-postgres-jdbc-sink - -``` - -:::tip - -For more information about `pulsar-admin sinks restart options`, see [Pulsar admin docs](/tools/pulsar-admin/). - -::: - -The sink instance has been started successfully if the following message disappears. - -```bash - -Started successfully - -``` - -:::tip - -* Optionally, you can run a standalone sink connector using `pulsar-admin sinks localrun options`. -Note that `pulsar-admin sinks localrun options` **runs a sink connector locally**, while `pulsar-admin sinks start options` **starts a sink connector in a cluster**. -* For more information about `pulsar-admin sinks localrun options`, see [Pulsar admin docs](/tools/pulsar-admin/). - -::: - -### Update a JDBC sink - -You can use the [Connector Admin CLI](/tools/pulsar-admin/) -to update a connector and perform other operations on it. - -This example updates the parallelism of the _pulsar-postgres-jdbc-sink_ sink connector to 2. - -```bash - -$ bin/pulsar-admin sinks update \ ---name pulsar-postgres-jdbc-sink \ ---parallelism 2 - -``` - -:::tip - -For more information about `pulsar-admin sinks update options`, see [Pulsar admin docs](/tools/pulsar-admin/). - -::: - -The sink connector has been updated successfully if the following message disappears. - -```bash - -Updated successfully - -``` - -This example double-checks the information. - -```bash - -$ bin/pulsar-admin sinks get \ ---tenant public \ ---namespace default \ ---name pulsar-postgres-jdbc-sink - -``` - -The result shows that the parallelism is 2. - -```json - -{ - "tenant": "public", - "namespace": "default", - "name": "pulsar-postgres-jdbc-sink", - "className": "org.apache.pulsar.io.jdbc.PostgresJdbcAutoSchemaSink", - "inputSpecs": { - "pulsar-postgres-jdbc-sink-topic": { - "isRegexPattern": false - } - }, - "configs": { - "password": "password", - "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink", - "userName": "postgres", - "tableName": "pulsar_postgres_jdbc_sink" - }, - "parallelism": 2, - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "autoAck": true -} - -``` - -### Delete a JDBC sink - -You can use the [Connector Admin CLI](/tools/pulsar-admin/) -to delete a connector and perform other operations on it. - -This example deletes the _pulsar-postgres-jdbc-sink_ sink connector. - -```bash - -$ bin/pulsar-admin sinks delete \ ---tenant public \ ---namespace default \ ---name pulsar-postgres-jdbc-sink - -``` - -:::tip - -For more information about `pulsar-admin sinks delete options`, see [Pulsar admin docs](/tools/pulsar-admin/). - -::: - -The sink connector has been deleted successfully if the following message appears. - -```text - -Deleted successfully - -``` - -This example double-checks the status of the sink connector. - -```bash - -$ bin/pulsar-admin sinks get \ ---tenant public \ ---namespace default \ ---name pulsar-postgres-jdbc-sink - -``` - -The result shows that the sink connector does not exist. 
```text

HTTP 404 Not Found

Reason: Sink pulsar-postgres-jdbc-sink doesn't exist

```
    0 means unlimited. | -| `requestedFrameMax` | int|false |0 | The initially requested maximum frame size in octets.

    0 means unlimited. | -| `connectionTimeout` | int|false | 60000 | The timeout of TCP connection establishment in milliseconds.

    0 means infinite. | -| `handshakeTimeout` | int|false | 10000 | The timeout of AMQP0-9-1 protocol handshake in milliseconds. | -| `requestedHeartbeat` | int|false | 60 | The exchange to publish messages. | -| `exchangeName` | String|true | " " (empty string) | The maximum number of messages that the server delivers.

    0 means unlimited. | -| `prefetchGlobal` |String|true | " " (empty string) |The routing key used to publish messages. | - - -### Example - -Before using the RabbitMQ sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "configs": { - "host": "localhost", - "port": "5672", - "virtualHost": "/", - "username": "guest", - "password": "guest", - "queueName": "test-queue", - "connectionName": "test-connection", - "requestedChannelMax": "0", - "requestedFrameMax": "0", - "connectionTimeout": "60000", - "handshakeTimeout": "10000", - "requestedHeartbeat": "60", - "exchangeName": "test-exchange", - "routingKey": "test-key" - } - } - - ``` - -* YAML - - ```yaml - - configs: - host: "localhost" - port: 5672 - virtualHost: "/", - username: "guest" - password: "guest" - queueName: "test-queue" - connectionName: "test-connection" - requestedChannelMax: 0 - requestedFrameMax: 0 - connectionTimeout: 60000 - handshakeTimeout: 10000 - requestedHeartbeat: 60 - exchangeName: "test-exchange" - routingKey: "test-key" - - ``` - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/io-rabbitmq-source.md b/site2/website/versioned_docs/version-2.10.0-deprecated/io-rabbitmq-source.md deleted file mode 100644 index 0dbf51e15856eb..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/io-rabbitmq-source.md +++ /dev/null @@ -1,87 +0,0 @@ ---- -id: io-rabbitmq-source -title: RabbitMQ source connector -sidebar_label: "RabbitMQ source connector" -original_id: io-rabbitmq-source ---- - -The RabbitMQ source connector receives messages from RabbitMQ clusters -and writes messages to Pulsar topics. - -## Configuration - -The configuration of the RabbitMQ source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `connectionName` |String| true | " " (empty string) | The connection name. | -| `host` | String| true | " " (empty string) | The RabbitMQ host. | -| `port` | int |true | 5672 | The RabbitMQ port. | -| `virtualHost` |String|true | / | The virtual host used to connect to RabbitMQ. | -| `username` | String|false | guest | The username used to authenticate to RabbitMQ. | -| `password` | String|false | guest | The password used to authenticate to RabbitMQ. | -| `queueName` | String|true | " " (empty string) | The RabbitMQ queue name that messages should be read from or written to. | -| `requestedChannelMax` | int|false | 0 | The initially requested maximum channel number.

    0 means unlimited. | -| `requestedFrameMax` | int|false |0 | The initially requested maximum frame size in octets.

    0 means unlimited. | -| `connectionTimeout` | int|false | 60000 | The timeout of TCP connection establishment in milliseconds.

    0 means infinite. | -| `handshakeTimeout` | int|false | 10000 | The timeout of AMQP0-9-1 protocol handshake in milliseconds. | -| `requestedHeartbeat` | int|false | 60 | The requested heartbeat timeout in seconds. | -| `prefetchCount` | int|false | 0 | The maximum number of messages that the server delivers.

    0 means unlimited. | -| `prefetchGlobal` | boolean|false | false |Whether the setting should be applied to the entire channel rather than each consumer. | -| `passive` | boolean|false | false | Whether the rabbitmq consumer should create its own queue or bind to an existing one. | - -### Example - -Before using the RabbitMQ source connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "configs": { - "host": "localhost", - "port": "5672", - "virtualHost": "/", - "username": "guest", - "password": "guest", - "queueName": "test-queue", - "connectionName": "test-connection", - "requestedChannelMax": "0", - "requestedFrameMax": "0", - "connectionTimeout": "60000", - "handshakeTimeout": "10000", - "requestedHeartbeat": "60", - "prefetchCount": "0", - "prefetchGlobal": "false", - "passive": "false" - } - } - - ``` - -* YAML - - ```yaml - - configs: - host: "localhost" - port: 5672 - virtualHost: "/" - username: "guest" - password: "guest" - queueName: "test-queue" - connectionName: "test-connection" - requestedChannelMax: 0 - requestedFrameMax: 0 - connectionTimeout: 60000 - handshakeTimeout: 10000 - requestedHeartbeat: 60 - prefetchCount: 0 - prefetchGlobal: "false" - passive: "false" - - ``` - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/io-redis-sink.md b/site2/website/versioned_docs/version-2.10.0-deprecated/io-redis-sink.md deleted file mode 100644 index 9efd6ed8637694..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/io-redis-sink.md +++ /dev/null @@ -1,158 +0,0 @@ ---- -id: io-redis-sink -title: Redis sink connector -sidebar_label: "Redis sink connector" -original_id: io-redis-sink ---- - -The Redis sink connector pulls messages from Pulsar topics -and persists the messages to a Redis database. - - - -## Configuration - -The configuration of the Redis sink connector has the following properties. - - - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `redisHosts` |String|true|" " (empty string) | A comma-separated list of Redis hosts to connect to. | -| `redisPassword` |String|false|" " (empty string) | The password used to connect to Redis. | -| `redisDatabase` | int|true|0 | The Redis database to connect to. | -| `clientMode` |String| false|Standalone | The client mode when interacting with Redis cluster.

    Below are the available options:
  125. Standalone
  126. Cluster
  127. | -| `autoReconnect` | boolean|false|true | Whether the Redis client automatically reconnect or not. | -| `requestQueue` | int|false|2147483647 | The maximum number of queued requests to Redis. | -| `tcpNoDelay` |boolean| false| false | Whether to enable TCP with no delay or not. | -| `keepAlive` | boolean|false | false |Whether to enable a keepalive to Redis or not. | -| `connectTimeout` |long| false|10000 | The time to wait before timing out when connecting in milliseconds. | -| `operationTimeout` | long|false|10000 | The time before an operation is marked as timed out in milliseconds . | -| `batchTimeMs` | int|false|1000 | The Redis operation time in milliseconds. | -| `batchSize` | int|false|200 | The batch size of writing to Redis database. | - - -### Example - -Before using the Redis sink connector, you need to create a configuration file in the path you will start Pulsar service (i.e. `PULSAR_HOME`) through one of the following methods. - -* JSON - - ```json - - { - "configs": { - "redisHosts": "localhost:6379", - "redisPassword": "mypassword", - "redisDatabase": "0", - "clientMode": "Standalone", - "operationTimeout": "2000", - "batchSize": "1", - "batchTimeMs": "1000", - "connectTimeout": "3000" - } - } - - ``` - -* YAML - - ```yaml - - configs: - redisHosts: "localhost:6379" - redisPassword: "mypassword" - redisDatabase: 0 - clientMode: "Standalone" - operationTimeout: 2000 - batchSize: 1 - batchTimeMs: 1000 - connectTimeout: 3000 - - ``` - -### Usage - -This example shows how to write records to a Redis database using the Pulsar Redis connector. - -1. Start a Redis server. - - ```bash - - $ docker pull redis:5.0.5 - $ docker run -d -p 6379:6379 --name my-redis redis:5.0.5 --requirepass "mypassword" - - ``` - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - - Make sure the NAR file is available at `connectors/pulsar-io-redis-@pulsar:version@.nar`. - -3. Start the Pulsar Redis connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin sinks localrun \ - --archive connectors/pulsar-io-redis-@pulsar:version@.nar \ - --tenant public \ - --namespace default \ - --name my-redis-sink \ - --sink-config '{"redisHosts": "localhost:6379","redisPassword": "mypassword","redisDatabase": "0","clientMode": "Standalone","operationTimeout": "3000","batchSize": "1"}' \ - --inputs my-redis-topic - - ``` - - * Use the **YAML** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin sinks localrun \ - --archive connectors/pulsar-io-redis-@pulsar:version@.nar \ - --tenant public \ - --namespace default \ - --name my-redis-sink \ - --sink-config-file redis-sink-config.yaml \ - --inputs my-redis-topic - - ``` - -4. Publish records to the topic. - - ```bash - - $ bin/pulsar-client produce \ - persistent://public/default/my-redis-topic \ - -k "streaming" \ - -m "Pulsar" - - ``` - -5. Start a Redis client in Docker. - - ```bash - - $ docker exec -it my-redis redis-cli -a "mypassword" - - ``` - -6. Check the key/value in Redis. 
   ```

   127.0.0.1:6379> keys *
   1) "streaming"
   127.0.0.1:6379> get "streaming"
   "Pulsar"

   ```
    **Example**
    `localhost:2181,localhost:2182/chroot`

  129. URL to connect to Solr used in standalone mode.
    **Example**
    `localhost:8983/solr`
  130. | -| `solrMode` | String|true|SolrCloud| The client mode when interacting with the Solr cluster.

    Below are the available options:
  131. Standalone
  132. SolrCloud
  133. | -| `solrCollection` |String|true| " " (empty string) | Solr collection name to which records need to be written. | -| `solrCommitWithinMs` |int| false|10 | The time within million seconds for Solr updating commits.| -| `username` |String|false| " " (empty string) | The username for basic authentication.

    **Note: `usename` is case-sensitive.** | -| `password` | String|false| " " (empty string) | The password for basic authentication.

    **Note: `password` is case-sensitive.** | - - - -### Example - -Before using the Solr sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "configs": { - "solrUrl": "localhost:2181,localhost:2182/chroot", - "solrMode": "SolrCloud", - "solrCollection": "techproducts", - "solrCommitWithinMs": 100, - "username": "fakeuser", - "password": "fake@123" - } - } - - ``` - -* YAML - - ```yaml - - { - solrUrl: "localhost:2181,localhost:2182/chroot" - solrMode: "SolrCloud" - solrCollection: "techproducts" - solrCommitWithinMs: 100 - username: "fakeuser" - password: "fake@123" - } - - ``` - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/io-twitter-source.md b/site2/website/versioned_docs/version-2.10.0-deprecated/io-twitter-source.md deleted file mode 100644 index 8de3504dd0fef2..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/io-twitter-source.md +++ /dev/null @@ -1,28 +0,0 @@ ---- -id: io-twitter-source -title: Twitter Firehose source connector -sidebar_label: "Twitter Firehose source connector" -original_id: io-twitter-source ---- - -The Twitter Firehose source connector receives tweets from Twitter Firehose and -writes the tweets to Pulsar topics. - -## Configuration - -The configuration of the Twitter Firehose source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `consumerKey` | String|true | " " (empty string) | The twitter OAuth consumer key.

    For more information, see [Access tokens](https://developer.twitter.com/en/docs/basics/authentication/guides/access-tokens). | -| `consumerSecret` | String |true | " " (empty string) | The twitter OAuth consumer secret. | -| `token` | String|true | " " (empty string) | The twitter OAuth token. | -| `tokenSecret` | String|true | " " (empty string) | The twitter OAuth secret. | -| `guestimateTweetTime`|Boolean|false|false|Most firehose events have null createdAt time.

    If `guestimateTweetTime` set to true, the connector estimates the createdTime of each firehose event to be current time. -| `clientName` | String |false | openconnector-twitter-source| The twitter firehose client name. | -| `clientHosts` |String| false | Constants.STREAM_HOST | The twitter firehose hosts to which client connects. | -| `clientBufferSize` | int|false | 50000 | The buffer size for buffering tweets fetched from twitter firehose. | - -> For more information about OAuth credentials, see [Twitter developers portal](https://developer.twitter.com/en.html). diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/io-twitter.md b/site2/website/versioned_docs/version-2.10.0-deprecated/io-twitter.md deleted file mode 100644 index 3b2f6325453c3c..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/io-twitter.md +++ /dev/null @@ -1,7 +0,0 @@ ---- -id: io-twitter -title: Twitter Firehose Connector -sidebar_label: "Twitter Firehose Connector" -original_id: io-twitter ---- - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/io-use.md b/site2/website/versioned_docs/version-2.10.0-deprecated/io-use.md deleted file mode 100644 index 5746faea4eaffa..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/io-use.md +++ /dev/null @@ -1,1787 +0,0 @@ ---- -id: io-use -title: How to use Pulsar connectors -sidebar_label: "Use" -original_id: io-use ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -This guide describes how to use Pulsar connectors. - -## Install a connector - -Pulsar bundles several [builtin connectors](io-connectors.md) used to move data in and out of commonly used systems (such as database and messaging system). Optionally, you can create and use your desired non-builtin connectors. - -:::note - -When using a non-builtin connector, you need to specify the path of an archive file for the connector. - -::: - -To set up a builtin connector, follow -the instructions [here](getting-started-standalone.md#installing-builtin-connectors). - -After the setup, the builtin connector is automatically discovered by Pulsar brokers (or function-workers), so no additional installation steps are required. - -## Configure a connector - -You can configure the following information: - -* [Configure a default storage location for a connector](#configure-a-default-storage-location-for-a-connector) - -* [Configure a connector with a YAML file](#configure-a-connector-with-yaml-file) - -### Configure a default storage location for a connector - -To configure a default folder for builtin connectors, set the `connectorsDirectory` parameter in the `./conf/functions_worker.yml` configuration file. - -**Example** - -Set the `./connectors` folder as the default storage location for builtin connectors. - -``` - -######################## -# Connectors -######################## - -connectorsDirectory: ./connectors - -``` - -### Configure a connector with a YAML file - -To configure a connector, you need to provide a YAML configuration file when creating a connector. - -The YAML configuration file tells Pulsar where to locate connectors and how to connect connectors with Pulsar topics. 
- -**Example 1** - -Below is a YAML configuration file of a Cassandra sink, which tells Pulsar: - -* Which Cassandra cluster to connect - -* What is the `keyspace` and `columnFamily` to be used in Cassandra for collecting data - -* How to map Pulsar messages into Cassandra table key and columns - -```shell - -tenant: public -namespace: default -name: cassandra-test-sink -... -# cassandra specific config -configs: - roots: "localhost:9042" - keyspace: "pulsar_test_keyspace" - columnFamily: "pulsar_test_table" - keyname: "key" - columnName: "col" - -``` - -**Example 2** - -Below is a YAML configuration file of a Kafka source. - -```shell - -configs: - bootstrapServers: "pulsar-kafka:9092" - groupId: "test-pulsar-io" - topic: "my-topic" - sessionTimeoutMs: "10000" - autoCommitEnabled: "false" - -``` - -**Example 3** - -Below is a YAML configuration file of a PostgreSQL JDBC sink. - -```shell - -configs: - userName: "postgres" - password: "password" - jdbcUrl: "jdbc:postgresql://localhost:5432/test_jdbc" - tableName: "test_jdbc" - -``` - -## Get available connectors - -Before starting using connectors, you can perform the following operations: - -* [Reload connectors](#reload) - -* [Get a list of available connectors](#get-available-connectors) - -### `reload` - -If you add or delete a nar file in a connector folder, reload the available builtin connector before using it. - -#### Source - -Use the `reload` subcommand. - -```shell - -$ pulsar-admin sources reload - -``` - -For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/). - -#### Sink - -Use the `reload` subcommand. - -```shell - -$ pulsar-admin sinks reload - -``` - -For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/). - -### `available` - -After reloading connectors (optional), you can get a list of available connectors. - -#### Source - -Use the `available-sources` subcommand. - -```shell - -$ pulsar-admin sources available-sources - -``` - -#### Sink - -Use the `available-sinks` subcommand. - -```shell - -$ pulsar-admin sinks available-sinks - -``` - -## Run a connector - -To run a connector, you can perform the following operations: - -* [Create a connector](#create) - -* [Start a connector](#start) - -* [Run a connector locally](#localrun) - -### `create` - -You can create a connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Create a source connector. - -````mdx-code-block - - - - -Use the `create` subcommand. - -``` - -$ pulsar-admin sources create options - -``` - -For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/). - - - - -Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/registerSource?version=@pulsar:version_number@} - - - - -* Create a source connector with a **local file**. - - ```java - - void createSource(SourceConfig sourceConfig, - String fileName) - throws PulsarAdminException - - ``` - - **Parameter** - - |Name|Description - |---|--- - `sourceConfig` | The source configuration object - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`createSource`](/api/admin/org/apache/pulsar/client/admin/Source.html#createSource-SourceConfig-java.lang.String-). - -* Create a source connector using a **remote file** with a URL from which fun-pkg can be downloaded. 
- - ```java - - void createSourceWithUrl(SourceConfig sourceConfig, - String pkgUrl) - throws PulsarAdminException - - ``` - - Supported URLs are `http` and `file`. - - **Example** - - * HTTP: http://www.repo.com/fileName.jar - - * File: file:///dir/fileName.jar - - **Parameter** - - Parameter| Description - |---|--- - `sourceConfig` | The source configuration object - `pkgUrl` | URL from which pkg can be downloaded - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`createSourceWithUrl`](/api/admin/org/apache/pulsar/client/admin/Source.html#createSourceWithUrl-SourceConfig-java.lang.String-). - - - - -```` - -#### Sink - -Create a sink connector. - -````mdx-code-block - - - - -Use the `create` subcommand. - -``` - -$ pulsar-admin sinks create options - -``` - -For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/). - - - - -Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sinkName|operation/registerSink?version=@pulsar:version_number@} - - - - -* Create a sink connector with a **local file**. - - ```java - - void createSink(SinkConfig sinkConfig, - String fileName) - throws PulsarAdminException - - ``` - - **Parameter** - - |Name|Description - |---|--- - `sinkConfig` | The sink configuration object - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`createSink`](/api/admin/org/apache/pulsar/client/admin/Sink.html#createSink-SinkConfig-java.lang.String-). - -* Create a sink connector using a **remote file** with a URL from which fun-pkg can be downloaded. - - ```java - - void createSinkWithUrl(SinkConfig sinkConfig, - String pkgUrl) - throws PulsarAdminException - - ``` - - Supported URLs are `http` and `file`. - - **Example** - - * HTTP: http://www.repo.com/fileName.jar - - * File: file:///dir/fileName.jar - - **Parameter** - - Parameter| Description - |---|--- - `sinkConfig` | The sink configuration object - `pkgUrl` | URL from which pkg can be downloaded - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`createSinkWithUrl`](/api/admin/org/apache/pulsar/client/admin/Sink.html#createSinkWithUrl-SinkConfig-java.lang.String-). - - - - -```` - -### `start` - -You can start a connector using **Admin CLI** or **REST API**. - -#### Source - -Start a source connector. - -````mdx-code-block - - - - -Use the `start` subcommand. - -``` - -$ pulsar-admin sources start options - -``` - -For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/). - - - - -* Start **all** source connectors. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/start|operation/startSource?version=@pulsar:version_number@} - -* Start a **specified** source connector. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/:instanceId/start|operation/startSource?version=@pulsar:version_number@} - - - - -```` - -#### Sink - -Start a sink connector. - -````mdx-code-block - - - - -Use the `start` subcommand. - -``` - -$ pulsar-admin sinks start options - -``` - -For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/). - - - - -* Start **all** sink connectors. 
- - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sinkName/start|operation/startSink?version=@pulsar:version_number@} - -* Start a **specified** sink connector. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sourceName/:instanceId/start|operation/startSink?version=@pulsar:version_number@} - - - - -```` - -### `localrun` - -You can run a connector locally rather than deploying it on a Pulsar cluster using **Admin CLI**. - -#### Source - -Run a source connector locally. - -````mdx-code-block - - - - -Use the `localrun` subcommand. - -``` - -$ pulsar-admin sources localrun options - -``` - -For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/). - - - - -```` - -#### Sink - -Run a sink connector locally. - -````mdx-code-block - - - - -Use the `localrun` subcommand. - -``` - -$ pulsar-admin sinks localrun options - -``` - -For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/). - - - - -```` - -## Monitor a connector - -To monitor a connector, you can perform the following operations: - -* [Get the information of a connector](#get) - -* [Get the list of all running connectors](#list) - -* [Get the current status of a connector](#status) - -### `get` - -You can get the information of a connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Get the information of a source connector. - -````mdx-code-block - - - - -Use the `get` subcommand. - -``` - -$ pulsar-admin sources get options - -``` - -For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/). - - - - -Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/getSourceInfo?version=@pulsar:version_number@} - - - - -```java - -SourceConfig getSource(String tenant, - String namespace, - String source) - throws PulsarAdminException - -``` - -**Example** - -This is a sourceConfig. - -```java - -{ - "tenant": "tenantName", - "namespace": "namespaceName", - "name": "sourceName", - "className": "className", - "topicName": "topicName", - "configs": {}, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "resources": { - "cpu": 1.0, - "ram": 1073741824, - "disk": 10737418240 - } -} - -``` - -This is a sourceConfig example. 
- -``` - -{ - "tenant": "public", - "namespace": "default", - "name": "debezium-mysql-source", - "className": "org.apache.pulsar.io.debezium.mysql.DebeziumMysqlSource", - "topicName": "debezium-mysql-topic", - "configs": { - "database.user": "debezium", - "database.server.id": "184054", - "database.server.name": "dbserver1", - "database.port": "3306", - "database.hostname": "localhost", - "database.password": "dbz", - "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650", - "value.converter": "org.apache.kafka.connect.json.JsonConverter", - "database.whitelist": "inventory", - "key.converter": "org.apache.kafka.connect.json.JsonConverter", - "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory", - "pulsar.service.url": "pulsar://127.0.0.1:6650", - "database.history.pulsar.topic": "history-topic2" - }, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "resources": { - "cpu": 1.0, - "ram": 1073741824, - "disk": 10737418240 - } -} - -``` - -**Exception** - -Exception name | Description -|---|--- -`PulsarAdminException.NotAuthorizedException` | You don't have the admin permission -`PulsarAdminException.NotFoundException` | Cluster doesn't exist -`PulsarAdminException` | Unexpected error - -For more information, see [`getSource`](/api/admin/org/apache/pulsar/client/admin/Source.html#getSource-java.lang.String-java.lang.String-java.lang.String-). - - - - -```` - -#### Sink - -Get the information of a sink connector. - -````mdx-code-block - - - - -Use the `get` subcommand. - -``` - -$ pulsar-admin sinks get options - -``` - -For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/). - - - - -Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sinks/:tenant/:namespace/:sinkName|operation/getSinkInfo?version=@pulsar:version_number@} - - - - -```java - -SinkConfig getSink(String tenant, - String namespace, - String sink) - throws PulsarAdminException - -``` - -**Example** - -This is a sinkConfig. - -```json - -{ -"tenant": "tenantName", -"namespace": "namespaceName", -"name": "sinkName", -"className": "className", -"inputSpecs": { -"topicName": { - "isRegexPattern": false -} -}, -"configs": {}, -"parallelism": 1, -"processingGuarantees": "ATLEAST_ONCE", -"retainOrdering": false, -"autoAck": true -} - -``` - -This is a sinkConfig example. - -```json - -{ - "tenant": "public", - "namespace": "default", - "name": "pulsar-postgres-jdbc-sink", - "className": "org.apache.pulsar.io.jdbc.PostgresJdbcAutoSchemaSink", - "inputSpecs": { - "pulsar-postgres-jdbc-sink-topic": { - "isRegexPattern": false - } - }, - "configs": { - "password": "password", - "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink", - "userName": "postgres", - "tableName": "pulsar_postgres_jdbc_sink" - }, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "autoAck": true -} - -``` - -**Parameter description** - -Name| Description -|---|--- -`tenant` | Tenant name -`namespace` | Namespace name -`sink` | Sink name - -For more information, see [`getSink`](/api/admin/org/apache/pulsar/client/admin/Sink.html#getSink-java.lang.String-java.lang.String-java.lang.String-). - - - - -```` - -### `list` - -You can get the list of all running connectors using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Get the list of all running source connectors. - -````mdx-code-block - - - - -Use the `list` subcommand. 
- -``` - -$ pulsar-admin sources list options - -``` - -For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/). - - - - -Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sources/:tenant/:namespace|operation/listSources?version=@pulsar:version_number@} - - - - -```java - -List listSources(String tenant, - String namespace) - throws PulsarAdminException - -``` - -**Response example** - -```java - -["f1", "f2", "f3"] - -``` - -**Exception** - -Exception name | Description -|---|--- -`PulsarAdminException.NotAuthorizedException` | You don't have the admin permission -`PulsarAdminException` | Unexpected error - -For more information, see [`listSource`](/api/admin/org/apache/pulsar/client/admin/Source.html#listSources-java.lang.String-java.lang.String-). - - - - -```` - -#### Sink - -Get the list of all running sink connectors. - -````mdx-code-block - - - - -Use the `list` subcommand. - -``` - -$ pulsar-admin sinks list options - -``` - -For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/). - - - - -Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sinks/:tenant/:namespace|operation/listSinks?version=@pulsar:version_number@} - - - - -```java - -List listSinks(String tenant, - String namespace) - throws PulsarAdminException - -``` - -**Response example** - -```java - -["f1", "f2", "f3"] - -``` - -**Exception** - -Exception name | Description -|---|--- -`PulsarAdminException.NotAuthorizedException` | You don't have the admin permission -`PulsarAdminException` | Unexpected error - -For more information, see [`listSource`](/api/admin/org/apache/pulsar/client/admin/Sink.html#listSinks-java.lang.String-java.lang.String-). - - - - -```` - -### `status` - -You can get the current status of a connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Get the current status of a source connector. - -````mdx-code-block - - - - -Use the `status` subcommand. - -``` - -$ pulsar-admin sources status options - -``` - -For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/). - - - - -* Get the current status of **all** source connectors. - - Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sources/:tenant/:namespace/:sourceName/status|operation/getSourceStatus?version=@pulsar:version_number@} - -* Gets the current status of a **specified** source connector. - - Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sources/:tenant/:namespace/:sourceName/:instanceId/status|operation/getSourceStatus?version=@pulsar:version_number@} - - - - -* Get the current status of **all** source connectors. - - ```java - - SourceStatus getSourceStatus(String tenant, - String namespace, - String source) - throws PulsarAdminException - - ``` - - **Parameter** - - Parameter| Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `sink` | Source name - - **Exception** - - Name | Description - |---|--- - `PulsarAdminException` | Unexpected error - - For more information, see [`getSourceStatus`](/api/admin/org/apache/pulsar/client/admin/Source.html#getSource-java.lang.String-java.lang.String-java.lang.String-). - -* Gets the current status of a **specified** source connector. 
- - ```java - - SourceStatus.SourceInstanceStatus.SourceInstanceStatusData getSourceStatus(String tenant, - String namespace, - String source, - int id) - throws PulsarAdminException - - ``` - - **Parameter** - - Parameter| Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `sink` | Source name - `id` | Source instanceID - - **Exception** - - Exception name | Description - |---|--- - `PulsarAdminException` | Unexpected error - - For more information, see [`getSourceStatus`](/api/admin/org/apache/pulsar/client/admin/Source.html#getSourceStatus-java.lang.String-java.lang.String-java.lang.String-int-). - - - - -```` - -#### Sink - -Get the current status of a Pulsar sink connector. - -````mdx-code-block - - - - -Use the `status` subcommand. - -``` - -$ pulsar-admin sinks status options - -``` - -For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/). - - - - -* Get the current status of **all** sink connectors. - - Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sinks/:tenant/:namespace/:sinkName/status|operation/getSinkStatus?version=@pulsar:version_number@} - -* Gets the current status of a **specified** sink connector. - - Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sinks/:tenant/:namespace/:sourceName/:instanceId/status|operation/getSinkInstanceStatus?version=@pulsar:version_number@} - - - - -* Get the current status of **all** sink connectors. - - ```java - - SinkStatus getSinkStatus(String tenant, - String namespace, - String sink) - throws PulsarAdminException - - ``` - - **Parameter** - - Parameter| Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `sink` | Source name - - **Exception** - - Exception name | Description - |---|--- - `PulsarAdminException` | Unexpected error - - For more information, see [`getSinkStatus`](/api/admin/org/apache/pulsar/client/admin/Sink.html#getSinkStatus-java.lang.String-java.lang.String-java.lang.String-). - -* Gets the current status of a **specified** source connector. - - ```java - - SinkStatus.SinkInstanceStatus.SinkInstanceStatusData getSinkStatus(String tenant, - String namespace, - String sink, - int id) - throws PulsarAdminException - - ``` - - **Parameter** - - Parameter| Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `sink` | Source name - `id` | Sink instanceID - - **Exception** - - Exception name | Description - |---|--- - `PulsarAdminException` | Unexpected error - - For more information, see [`getSinkStatusWithInstanceID`](/api/admin/org/apache/pulsar/client/admin/Sink.html#getSinkStatus-java.lang.String-java.lang.String-java.lang.String-int-). - - - - -```` - -## Update a connector - -### `update` - -You can update a running connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Update a running Pulsar source connector. - -````mdx-code-block - - - - -Use the `update` subcommand. - -``` - -$ pulsar-admin sources update options - -``` - -For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/). - - - - -Send a `PUT` request to this endpoint: {@inject: endpoint|PUT|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/updateSource?version=@pulsar:version_number@} - - - - -* Update a running source connector with a **local file**. 
-
-  ```java
-  
-  void updateSource(SourceConfig sourceConfig,
-                    String fileName)
-             throws PulsarAdminException
-  
-  ```
-
-  **Parameter**
-
-  | Name | Description
-  |---|---
-  |`sourceConfig` | The source configuration object
-
-  **Exception**
-
-  |Name|Description|
-  |---|---
-  |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission
-  | `PulsarAdminException.NotFoundException` | Cluster doesn't exist
-  | `PulsarAdminException` | Unexpected error
-
-  For more information, see [`updateSource`](/api/admin/org/apache/pulsar/client/admin/Source.html#updateSource-SourceConfig-java.lang.String-).
-
-* Update a source connector using a **remote file**, specifying a URL from which the connector package can be downloaded.
-
-  ```java
-  
-  void updateSourceWithUrl(SourceConfig sourceConfig,
-                           String pkgUrl)
-                    throws PulsarAdminException
-  
-  ```
-
-  Supported URLs are `http` and `file`.
-
-  **Example**
-
-  * HTTP: http://www.repo.com/fileName.jar
-
-  * File: file:///dir/fileName.jar
-
-  **Parameter**
-
-  | Name | Description
-  |---|---
-  | `sourceConfig` | The source configuration object
-  | `pkgUrl` | URL from which the package can be downloaded
-
-  **Exception**
-
-  |Name|Description|
-  |---|---
-  |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission
-  | `PulsarAdminException.NotFoundException` | Cluster doesn't exist
-  | `PulsarAdminException` | Unexpected error
-
-For more information, see [`updateSourceWithUrl`](/api/admin/org/apache/pulsar/client/admin/Source.html#updateSourceWithUrl-SourceConfig-java.lang.String-).
-
-
-
-
-````
-
-#### Sink
-
-Update a running Pulsar sink connector.
-
-````mdx-code-block
-
-
-
-
-Use the `update` subcommand.
-
-```
-
-$ pulsar-admin sinks update options
-
-```
-
-For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/).
-
-
-
-
-Send a `PUT` request to this endpoint: {@inject: endpoint|PUT|/admin/v3/sinks/:tenant/:namespace/:sinkName|operation/updateSink?version=@pulsar:version_number@}
-
-
-
-
-* Update a running sink connector with a **local file**.
-
-  ```java
-  
-  void updateSink(SinkConfig sinkConfig,
-                  String fileName)
-       throws PulsarAdminException
-  
-  ```
-
-  **Parameter**
-
-  | Name | Description
-  |---|---
-  |`sinkConfig` | The sink configuration object
-
-  **Exception**
-
-  |Name|Description|
-  |---|---
-  |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission
-  | `PulsarAdminException.NotFoundException` | Cluster doesn't exist
-  | `PulsarAdminException` | Unexpected error
-
-  For more information, see [`updateSink`](/api/admin/org/apache/pulsar/client/admin/Sink.html#updateSink-SinkConfig-java.lang.String-).
-
-* Update a sink connector using a **remote file**, specifying a URL from which the connector package can be downloaded.
-
-  ```java
-  
-  void updateSinkWithUrl(SinkConfig sinkConfig,
-                         String pkgUrl)
-                  throws PulsarAdminException
-  
-  ```
-
-  Supported URLs are `http` and `file`.
-
-  **Example**
-
-  * HTTP: http://www.repo.com/fileName.jar
-
-  * File: file:///dir/fileName.jar
-
-  **Parameter**
-
-  | Name | Description
-  |---|---
-  | `sinkConfig` | The sink configuration object
-  | `pkgUrl` | URL from which the package can be downloaded
-
-  **Exception**
-
-  |Name|Description|
-  |---|---
-  |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission
-  |`PulsarAdminException.NotFoundException` | Cluster doesn't exist
-  |`PulsarAdminException` | Unexpected error
-
-For more information, see [`updateSinkWithUrl`](/api/admin/org/apache/pulsar/client/admin/Sink.html#updateSinkWithUrl-SinkConfig-java.lang.String-).
-
-
-
-
-````
-
-## Stop a connector
-
-### `stop`
-
-You can stop a connector using **Admin CLI**, **REST API** or **JAVA admin API**.
-
-#### Source
-
-Stop a source connector.
-
-````mdx-code-block
-
-
-
-
-Use the `stop` subcommand.
-
-```
-
-$ pulsar-admin sources stop options
-
-```
-
-For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/).
-
-
-
-
-* Stop **all** source connectors.
-
-  Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/stop|operation/stopSource?version=@pulsar:version_number@}
-
-* Stop a **specified** source connector.
-
-  Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/:instanceId/stop|operation/stopSource?version=@pulsar:version_number@}
-
-
-
-
-* Stop **all** source connectors.
-
-  ```java
-  
-  void stopSource(String tenant,
-                  String namespace,
-                  String source)
-           throws PulsarAdminException
-  
-  ```
-
-  **Parameter**
-
-  | Name | Description
-  |---|---
-  `tenant` | Tenant name
-  `namespace` | Namespace name
-  `source` | Source name
-
-  **Exception**
-
-  |Name|Description|
-  |---|---
-  | `PulsarAdminException` | Unexpected error
-
-  For more information, see [`stopSource`](/api/admin/org/apache/pulsar/client/admin/Source.html#stopSource-java.lang.String-java.lang.String-java.lang.String-).
-
-* Stop a **specified** source connector.
-
-  ```java
-  
-  void stopSource(String tenant,
-                  String namespace,
-                  String source,
-                  int instanceId)
-           throws PulsarAdminException
-  
-  ```
-
-  **Parameter**
-
-  | Name | Description
-  |---|---
-  `tenant` | Tenant name
-  `namespace` | Namespace name
-  `source` | Source name
-  `instanceId` | Source instanceID
-
-  **Exception**
-
-  |Name|Description|
-  |---|---
-  | `PulsarAdminException` | Unexpected error
-
-  For more information, see [`stopSource`](/api/admin/org/apache/pulsar/client/admin/Source.html#stopSource-java.lang.String-java.lang.String-java.lang.String-int-).
-
-
-
-
-````
-
-#### Sink
-
-Stop a sink connector.
-
-````mdx-code-block
-
-
-
-
-Use the `stop` subcommand.
-
-```
-
-$ pulsar-admin sinks stop options
-
-```
-
-For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/).
-
-
-
-
-* Stop **all** sink connectors.
-
-  Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sinkName/stop|operation/stopSink?version=@pulsar:version_number@}
-
-* Stop a **specified** sink connector.
-
-  Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sinkName/:instanceId/stop|operation/stopSink?version=@pulsar:version_number@}
-
-
-
-
-* Stop **all** sink connectors.
- - ```java - - void stopSink(String tenant, - String namespace, - String sink) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `source` | Source name - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`stopSink`](/api/admin/org/apache/pulsar/client/admin/Sink.html#stopSink-java.lang.String-java.lang.String-java.lang.String-). - -* Stop a **specified** sink connector. - - ```java - - void stopSink(String tenant, - String namespace, - String sink, - int instanceId) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `source` | Source name - `instanceId` | Source instanceID - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`stopSink`](/api/admin/org/apache/pulsar/client/admin/Sink.html#stopSink-java.lang.String-java.lang.String-java.lang.String-int-). - - - - -```` - -## Restart a connector - -### `restart` - -You can restart a connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Restart a source connector. - -````mdx-code-block - - - - -Use the `restart` subcommand. - -``` - -$ pulsar-admin sources restart options - -``` - -For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/). - - - - -* Restart **all** source connectors. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/restart|operation/restartSource?version=@pulsar:version_number@} - -* Restart a **specified** source connector. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/:instanceId/restart|operation/restartSource?version=@pulsar:version_number@} - - - - -* Restart **all** source connectors. - - ```java - - void restartSource(String tenant, - String namespace, - String source) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `source` | Source name - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`restartSource`](/api/admin/org/apache/pulsar/client/admin/Source.html#restartSource-java.lang.String-java.lang.String-java.lang.String-). - -* Restart a **specified** source connector. - - ```java - - void restartSource(String tenant, - String namespace, - String source, - int instanceId) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `source` | Source name - `instanceId` | Source instanceID - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`restartSource`](/api/admin/org/apache/pulsar/client/admin/Source.html#restartSource-java.lang.String-java.lang.String-java.lang.String-int-). - - - - -```` - -#### Sink - -Restart a sink connector. - -````mdx-code-block - - - - -Use the `restart` subcommand. - -``` - -$ pulsar-admin sinks restart options - -``` - -For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/). - - - - -* Restart **all** sink connectors. 
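-
-  For example, with `curl` (a sketch; the admin address `localhost:8080` and the sink `public/default/my-sink` are illustrative):
-
-  ```shell
-  
-  $ curl -X POST http://localhost:8080/admin/v3/sinks/public/default/my-sink/restart
-  
-  ```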
-
-  Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sinkName/restart|operation/restartSink?version=@pulsar:version_number@}
-
-* Restart a **specified** sink connector.
-
-  Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sinkName/:instanceId/restart|operation/restartSink?version=@pulsar:version_number@}
-
-
-
-
-* Restart **all** sink connectors.
-
-  ```java
-  
-  void restartSink(String tenant,
-                   String namespace,
-                   String sink)
-            throws PulsarAdminException
-  
-  ```
-
-  **Parameter**
-
-  | Name | Description
-  |---|---
-  `tenant` | Tenant name
-  `namespace` | Namespace name
-  `sink` | Sink name
-
-  **Exception**
-
-  |Name|Description|
-  |---|---
-  | `PulsarAdminException` | Unexpected error
-
-  For more information, see [`restartSink`](/api/admin/org/apache/pulsar/client/admin/Sink.html#restartSink-java.lang.String-java.lang.String-java.lang.String-).
-
-* Restart a **specified** sink connector.
-
-  ```java
-  
-  void restartSink(String tenant,
-                   String namespace,
-                   String sink,
-                   int instanceId)
-            throws PulsarAdminException
-  
-  ```
-
-  **Parameter**
-
-  | Name | Description
-  |---|---
-  `tenant` | Tenant name
-  `namespace` | Namespace name
-  `sink` | Sink name
-  `instanceId` | Sink instanceID
-
-  **Exception**
-
-  |Name|Description|
-  |---|---
-  | `PulsarAdminException` | Unexpected error
-
-  For more information, see [`restartSink`](/api/admin/org/apache/pulsar/client/admin/Sink.html#restartSink-java.lang.String-java.lang.String-java.lang.String-int-).
-
-
-
-
-````
-
-## Delete a connector
-
-### `delete`
-
-You can delete a connector using **Admin CLI**, **REST API** or **JAVA admin API**.
-
-#### Source
-
-Delete a source connector.
-
-````mdx-code-block
-
-
-
-
-Use the `delete` subcommand.
-
-```
-
-$ pulsar-admin sources delete options
-
-```
-
-For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/).
-
-
-
-
-Delete a Pulsar source connector.
-
-Send a `DELETE` request to this endpoint: {@inject: endpoint|DELETE|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/deregisterSource?version=@pulsar:version_number@}
-
-
-
-
-Delete a source connector.
-
-```java
-
-void deleteSource(String tenant,
-                  String namespace,
-                  String source)
-           throws PulsarAdminException
-
-```
-
-**Parameter**
-
-| Name | Description
-|---|---
-`tenant` | Tenant name
-`namespace` | Namespace name
-`source` | Source name
-
-**Exception**
-
-|Name|Description|
-|---|---
-|`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission
-| `PulsarAdminException.NotFoundException` | Cluster doesn't exist
-| `PulsarAdminException.PreconditionFailedException` | Cluster is not empty
-| `PulsarAdminException` | Unexpected error
-
-For more information, see [`deleteSource`](/api/admin/org/apache/pulsar/client/admin/Source.html#deleteSource-java.lang.String-java.lang.String-java.lang.String-).
-
-
-
-
-````
-
-#### Sink
-
-Delete a sink connector.
-
-````mdx-code-block
-
-
-
-
-Use the `delete` subcommand.
-
-```
-
-$ pulsar-admin sinks delete options
-
-```
-
-For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/).
-
-
-
-
-Delete a sink connector.
-
-Send a `DELETE` request to this endpoint: {@inject: endpoint|DELETE|/admin/v3/sinks/:tenant/:namespace/:sinkName|operation/deregisterSink?version=@pulsar:version_number@}
-
-
-
-
-Delete a Pulsar sink connector.
- -```java - -void deleteSink(String tenant, - String namespace, - String source) - throws PulsarAdminException - -``` - -**Parameter** - -| Name | Description -|---|--- -`tenant` | Tenant name -`namespace` | Namespace name -`sink` | Sink name - -**Exception** - -|Name|Description| -|---|--- -|`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission -| `PulsarAdminException.NotFoundException` | Cluster doesn't exist -| `PulsarAdminException.PreconditionFailedException` | Cluster is not empty -| `PulsarAdminException` | Unexpected error - -For more information, see [`deleteSource`](/api/admin/org/apache/pulsar/client/admin/Sink.html#deleteSink-java.lang.String-java.lang.String-java.lang.String-). - - - - -```` diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/performance-pulsar-perf.md b/site2/website/versioned_docs/version-2.10.0-deprecated/performance-pulsar-perf.md deleted file mode 100644 index 4441d1470819f7..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/performance-pulsar-perf.md +++ /dev/null @@ -1,283 +0,0 @@ ---- -id: performance-pulsar-perf -title: Pulsar Perf -sidebar_label: "Pulsar Perf" -original_id: performance-pulsar-perf ---- - -The Pulsar Perf is a built-in performance test tool for Apache Pulsar. You can use the Pulsar Perf to test message writing or reading performance. For detailed information about performance tuning, see [here](https://streamnative.io/en/blog/tech/2021-01-14-pulsar-architecture-performance-tuning). - -## Produce messages - -:::tip - -For the latest and complete information about `pulsar-perf`, including commands, flags, descriptions, and more, see [`pulsar-perf`](/tools/pulsar-perf/) or [here](reference-cli-tools.md#pulsar-perf). - -::: - -- This example shows how the Pulsar Perf produces messages with **default** options. - - **Input** - - ``` - - bin/pulsar-perf produce my-topic - - ``` - - After the command is executed, the test data is continuously output on the Console. - - **Output** - - ``` - - 19:53:31.459 [pulsar-perf-producer-exec-1-1] INFO org.apache.pulsar.testclient.PerformanceProducer - Created 1 producers - 19:53:31.482 [pulsar-timer-5-1] WARN com.scurrilous.circe.checksum.Crc32cIntChecksum - Failed to load Circe JNI library. 
Falling back to Java based CRC32c provider - 19:53:40.861 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 93.7 msg/s --- 0.7 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.575 ms - med: 3.460 - 95pct: 4.790 - 99pct: 5.308 - 99.9pct: 5.834 - 99.99pct: 6.609 - Max: 6.609 - 19:53:50.909 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.437 ms - med: 3.328 - 95pct: 4.656 - 99pct: 5.071 - 99.9pct: 5.519 - 99.99pct: 5.588 - Max: 5.588 - 19:54:00.926 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.376 ms - med: 3.276 - 95pct: 4.520 - 99pct: 4.939 - 99.9pct: 5.440 - 99.99pct: 5.490 - Max: 5.490 - 19:54:10.940 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.298 ms - med: 3.220 - 95pct: 4.474 - 99pct: 4.926 - 99.9pct: 5.645 - 99.99pct: 5.654 - Max: 5.654 - 19:54:20.956 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.1 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.308 ms - med: 3.199 - 95pct: 4.532 - 99pct: 4.871 - 99.9pct: 5.291 - 99.99pct: 5.323 - Max: 5.323 - 19:54:30.972 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.249 ms - med: 3.144 - 95pct: 4.437 - 99pct: 4.970 - 99.9pct: 5.329 - 99.99pct: 5.414 - Max: 5.414 - 19:54:40.987 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.435 ms - med: 3.361 - 95pct: 4.772 - 99pct: 5.150 - 99.9pct: 5.373 - 99.99pct: 5.837 - Max: 5.837 - ^C19:54:44.325 [Thread-1] INFO org.apache.pulsar.testclient.PerformanceProducer - Aggregated throughput stats --- 7286 records sent --- 99.140 msg/s --- 0.775 Mbit/s - 19:54:44.336 [Thread-1] INFO org.apache.pulsar.testclient.PerformanceProducer - Aggregated latency stats --- Latency: mean: 3.383 ms - med: 3.293 - 95pct: 4.610 - 99pct: 5.059 - 99.9pct: 5.588 - 99.99pct: 5.837 - 99.999pct: 6.609 - Max: 6.609 - - ``` - - From the above test data, you can get the throughput statistics and the write latency statistics. The aggregated statistics are printed when the Pulsar Perf is stopped. You can press **Ctrl**+**C** to stop the Pulsar Perf. If you specify a filename with the `--histogram-file` parameter, a file with the [HdrHistogram](http://hdrhistogram.github.io/HdrHistogram/) formatted test result appears under your directory after Pulsar Perf is stopped. You can also check the test result through [HdrHistogram Plotter](https://hdrhistogram.github.io/HdrHistogram/plotFiles.html). For details about how to check the test result through [HdrHistogram Plotter](https://hdrhistogram.github.io/HdrHistogram/plotFiles.html), see [HdrHistogram Plotter](#hdrhistogram-plotter). - -- This example shows how the Pulsar Perf produces messages with `transaction` option. 
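-
-  Transactions must be enabled on the broker before this example can run. A minimal sketch of the prerequisite steps (the metadata store address `127.0.0.1:2181` and the cluster name `standalone` are illustrative; see [Pulsar transactions](txn-why.md) for details):
-
-  ```shell
-  
-  # In conf/standalone.conf (or conf/broker.conf), enable the coordinator:
-  # transactionCoordinatorEnabled=true
-  
-  # Initialize the transaction coordinator metadata once per cluster
-  bin/pulsar initialize-transaction-coordinator-metadata -cs 127.0.0.1:2181 -c standalone
-  
-  ```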
- - **Input** - - ```shell - - bin/pulsar-perf produce my-topic -r 10 -m 100 -txn - - ``` - - **Output** - - ```shell - - 2021-10-11T13:36:15,595+0800 INFO [Thread-3] o.a.p.t.PerformanceProducer@499 - --- Transaction : 2 transaction end successfully ---0 transaction end failed --- 0.200 Txn/s - - 2021-10-11T13:36:15,614+0800 INFO [Thread-3] o.a.p.t.PerformanceProducer@503 - Throughput produced: 100 msg --- 0.0 msg/s --- 0.1 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.067 ms - med: 3.104 - 95pct: 3.747 - 99pct: 4.619 - 99.9pct: 6.760 - 99.99pct: 6.760 - Max: 6.760 - - 2021-10-11T13:36:15,710+0800 INFO [pulsar-perf-producer-exec-46-1] o.a.p.t.PerformanceProducer@834 - Aggregated latency stats --- Latency: mean: 3.067 ms - med: 3.104 - 95pct: 3.747 - 99pct: 4.619 - 99.9pct: 6.760 - 99.99pct: 6.760 - 99.999pct: 6.760 - Max: 6.760 - - 2021-10-11T13:36:29,976+0800 INFO [Thread-4] o.a.p.t.PerformanceProducer@815 - --- Transaction : 2 transaction end successfully --- 0 transaction end failed --- 2 transaction open successfully --- 0 transaction open failed --- 12.237 Txn/s - - 2021-10-11T13:36:29,976+0800 INFO [Thread-4] o.a.p.t.PerformanceProducer@824 - Aggregated throughput stats --- 102 records sent --- 4.168 msg/s --- 0.033 Mbit/s - - ``` - -## Consume messages - -:::tip - -For the latest and complete information about `pulsar-perf`, including commands, flags, descriptions, and more, see [`pulsar-perf`](/tools/pulsar-perf/) or [here](reference-cli-tools.md#pulsar-perf). - -::: - -- This example shows how the Pulsar Perf consumes messages with **default** options. - - **Input** - - :::note - - If you have not created a topic (in this example, it is _my-topic_) before, the broker creates a new topic without partitions and messages, then the consumer can not receive any messages. Consequently, before using `pulsar-perf consume`, make sure your topic has enough messages to consume. - - ::: - - ``` - - bin/pulsar-perf consume my-topic - - ``` - - After the command is executed, the test data is continuously output on the Console. - - **Output** - - ``` - - 20:35:37.071 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Start receiving from 1 consumers on 1 topics - 20:35:41.150 [pulsar-client-io-1-9] WARN com.scurrilous.circe.checksum.Crc32cIntChecksum - Failed to load Circe JNI library. 
Falling back to Java based CRC32c provider - 20:35:47.092 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 59.572 msg/s -- 0.465 Mbit/s --- Latency: mean: 11.298 ms - med: 10 - 95pct: 15 - 99pct: 98 - 99.9pct: 137 - 99.99pct: 152 - Max: 152 - 20:35:57.104 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 99.958 msg/s -- 0.781 Mbit/s --- Latency: mean: 9.176 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 18 - Max: 18 - 20:36:07.115 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 100.006 msg/s -- 0.781 Mbit/s --- Latency: mean: 9.316 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 17 - Max: 17 - 20:36:17.125 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 100.085 msg/s -- 0.782 Mbit/s --- Latency: mean: 9.327 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 17 - Max: 17 - 20:36:27.136 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 99.900 msg/s -- 0.780 Mbit/s --- Latency: mean: 9.404 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 17 - Max: 17 - 20:36:37.147 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 99.985 msg/s -- 0.781 Mbit/s --- Latency: mean: 8.998 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 17 - Max: 17 - ^C20:36:42.755 [Thread-1] INFO org.apache.pulsar.testclient.PerformanceConsumer - Aggregated throughput stats --- 6051 records received --- 92.125 msg/s --- 0.720 Mbit/s - 20:36:42.759 [Thread-1] INFO org.apache.pulsar.testclient.PerformanceConsumer - Aggregated latency stats --- Latency: mean: 9.422 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 98 - 99.99pct: 137 - 99.999pct: 152 - Max: 152 - - ``` - - From the output test data, you can get the throughput statistics and the end-to-end latency statistics. The aggregated statistics is printed after the Pulsar Perf is stopped. You can press **Ctrl**+**C** to stop the Pulsar Perf. - -- This example shows how the Pulsar Perf consumes messages with `transaction` option. - - **Input** - - ```shell - - bin/pulsar-perf consume my-topic -r 10 -txn -ss mysubName -st Exclusive -sp Earliest -ntxn 10 - - ``` - - :::note - - If you have not created a topic (in this example, it is _my-topic_) before, the broker creates a new topic without partitions and messages, then the consumer can not receive any messages. Consequently, before using `pulsar-perf consume`, make sure your topic has enough messages to consume. 
- - ::: - - - **Output** - - ```shell - - 2021-10-11T13:43:36,052+0800 INFO [Thread-3] o.a.p.t.PerformanceConsumer@538 - --- Transaction: 6 transaction end successfully --- 0 transaction end failed --- 0.199 Txn/s --- AckRate: 9.952 msg/s - - 2021-10-11T13:43:36,065+0800 INFO [Thread-3] o.a.p.t.PerformanceConsumer@545 - Throughput received: 306 msg --- 9.952 msg/s -- 0.000 Mbit/s --- Latency: mean: 26177.380 ms - med: 26128 - 95pct: 30531 - 99pct: 30923 - 99.9pct: 31021 - 99.99pct: 31021 - Max: 31021 - - 2021-10-11T13:43:59,854+0800 INFO [Thread-5] o.a.p.t.PerformanceConsumer@579 - -- Transaction: 10 transaction end successfully --- 0 transaction end failed --- 10 transaction open successfully --- 0 transaction open failed --- 0.185 Txn/s - - 2021-10-11T13:43:59,854+0800 INFO [Thread-5] o.a.p.t.PerformanceConsumer@588 - Aggregated throughput stats --- 505 records received --- 9.345 msg/s --- 0.000 Mbit/s--- AckRate: 9.27065308842743 msg/s --- ack failed 4 msg - - 2021-10-11T13:43:59,882+0800 INFO [Thread-5] o.a.p.t.PerformanceConsumer@601 - Aggregated latency stats --- Latency: mean: 50593.000 ms - med: 50593 - 95pct: 50593 - 99pct: 50593 - 99.9pct: 50593 - 99.99pct: 50593 - 99.999pct: 50593 - Max: 50593 - - ``` - -## Transactions - -This section shows how Pulsar Perf runs transactions. For more information, see [Pulsar transactions](txn-why.md). - -### Use transaction - -This example executes 50 transactions. Each transaction sends and receives 1 message (default). - -**Input** - -```shell - -bin/pulsar-perf transaction --topics-c myConsumerTopic --topics-p MyproduceTopic -threads 1 -ntxn 50 -ss testSub -nmp 1 -nmc 1 - -``` - -:::note - -If you have not created a topic (in this example, it is _myConsumerTopic_) before, the broker creates a new topic without partitions and messages, then the consumer can not receive any messages. Consequently, before using `pulsar-perf transaction`, make sure your topic has enough messages to consume. - -::: - -**Output** - -```shell - -2021-10-11T14:37:27,863+0800 INFO [Thread-5] o.a.p.t.PerformanceProducer@613 - Messages ack aggregated latency stats --- Latency: mean: 29.239 ms - med: 26.799 - 95pct: 46.696 - 99pct: 55.660 - 99.9pct: 55.660 - 99.99pct: 55.660 - 99.999pct: 55.660 - Max: 55.660 {} - -2021-10-11T14:37:19,391+0800 INFO [Thread-4] o.a.p.t.PerformanceProducer@525 - Throughput transaction: 50 transaction executes --- 4.999 transaction/s ---send Latency: mean: 31.368 ms - med: 28.369 - 95pct: 55.631 - 99pct: 57.764 - 99.9pct: 57.764 - 99.99pct: 57.764 - Max: 57.764---ack Latency: mean: 29.239 ms - med: 26.799 - 95pct: 46.696 - 99pct: 55.660 - 99.9pct: 55.660 - 99.99pct: 55.660 - Max: 55.660 {} - -2021-10-11T14:37:26,625+0800 INFO [Thread-5] o.a.p.t.PerformanceProducer@571 - Aggregated throughput stats --- 50 transaction executed --- 2.718 transaction/s --- 50 transaction open successfully --- 0 transaction open failed --- 50 transaction end successfully --- 0 transaction end failed--- 0 message ack failed --- 0 message send failed--- 50 message ack success --- 50 message send success {} - -``` - -### Disable Transaction - -This example disables transactions. - -**Input** - -```shell - -bin/pulsar-perf transaction --topics-c myConsumerTopic --topics-p myproduceTopic -threads 1 -ntxn 50 -ss testSub --txn-disEnable - -``` - -:::note - -If you have not created a topic (in this example, it is _myConsumerTopic_) before, the broker creates a new topic without partitions and messages, then the consumer can not receive any messages. 
Consequently, before using `pulsar-perf transaction --txn-disEnable`, make sure your topic has enough messages to consume.
-
-:::
-
-**Output**
-
-```shell
-
-2021-10-11T16:48:26,876+0800 INFO [Thread-4] o.a.p.t.PerformanceProducer@529 - Throughput task: 50 task executes --- 4.999 task/s ---send Latency: mean: 10.002 ms - med: 9.875 - 95pct: 11.733 - 99pct: 15.995 - 99.9pct: 15.995 - 99.99pct: 15.995 - Max: 15.995---ack Latency: mean: 0.051 ms - med: 0.020 - 95pct: 0.059 - 99pct: 1.377 - 99.9pct: 1.377 - 99.99pct: 1.377 - Max: 1.377
-
-2021-10-11T16:48:29,222+0800 INFO [Thread-5] o.a.p.t.PerformanceProducer@617 - Messages ack aggregated latency stats --- Latency: mean: 0.051 ms - med: 0.020 - 95pct: 0.059 - 99pct: 1.377 - 99.9pct: 1.377 - 99.99pct: 1.377 - 99.999pct: 1.377 - Max: 1.377
-
-2021-10-11T16:48:29,246+0800 INFO [Thread-5] o.a.p.t.PerformanceProducer@629 - Messages send aggregated latency stats --- Latency: mean: 10.002 ms - med: 9.875 - 95pct: 11.733 - 99pct: 15.995 - 99.9pct: 15.995 - 99.99pct: 15.995 - 99.999pct: 15.995 - Max: 15.995
-
-2021-10-11T16:48:29,117+0800 INFO [Thread-5] o.a.p.t.PerformanceProducer@602 - Aggregated throughput stats --- 50 task executed --- 4.025 task/s --- 0 message ack failed --- 0 message send failed--- 50 message ack success --- 50 message send success
-
-```
-
-## Configurations
-
-By default, Pulsar Perf reads its client configuration from `conf/client.conf` and its Log4j configuration from `conf/log4j2.yaml`. If you want to connect to other Pulsar clusters, you can update the `brokerServiceUrl` in the client configuration.
-
-You can use the following commands to change the configuration file and the Log4j configuration file.
-
-```
-
-export PULSAR_CLIENT_CONF=<path to your client configuration file>
-export PULSAR_LOG_CONF=<path to your Log4j configuration file>
-
-```
-
-In addition, you can use the following command to set JVM options through an environment variable:
-
-```
-
-export PULSAR_EXTRA_OPTS='-Xms4g -Xmx4g -XX:MaxDirectMemorySize=4g'
-
-```
-
-## HdrHistogram Plotter
-
-The [HdrHistogram Plotter](https://hdrhistogram.github.io/HdrHistogram/plotFiles.html) is a visualization tool for Pulsar Perf test results, which makes the results easier to inspect.
-
-To check test results through the HdrHistogram Plotter, follow these steps:
-
-1. Clone the HdrHistogram repository from GitHub to your local machine.
-
-   ```
-   
-   git clone https://github.com/HdrHistogram/HdrHistogram.git
-   
-   ```
-
-2. Switch to the HdrHistogram folder.
-
-   ```
-   
-   cd HdrHistogram
-   
-   ```
-
-3. Install the HdrHistogram Plotter.
-
-   ```
-   
-   mvn clean install -DskipTests
-   
-   ```
-
-4. Transform the file generated by Pulsar Perf.
-
-   ```
-   
-   ./HistogramLogProcessor -i <input file> -o <output file>
-   
-   ```
-
-5. You will get two output files. Upload the output file with the filename extension of .hgrm to the [HdrHistogram Plotter](https://hdrhistogram.github.io/HdrHistogram/plotFiles.html).
-
-6. Check the test result through the Graphical User Interface of the HdrHistogram Plotter, as shown below.
-
-  ![](/assets/perf-produce.png)
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/reference-cli-tools.md b/site2/website/versioned_docs/version-2.10.0-deprecated/reference-cli-tools.md
deleted file mode 100644
index 1e426501e23a3d..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/reference-cli-tools.md
+++ /dev/null
@@ -1,1039 +0,0 @@
----
-id: reference-cli-tools
-title: Pulsar command-line tools
-sidebar_label: "Pulsar CLI tools"
-original_id: reference-cli-tools
----
-
-Pulsar offers several command-line tools that you can use for managing Pulsar installations, performance testing, using command-line producers and consumers, and more.
-
-All Pulsar command-line tools can be run from the `bin` directory of your [installed Pulsar package](getting-started-standalone.md). The following tools are currently documented:
-
-* [`pulsar`](#pulsar)
-* [`pulsar-client`](#pulsar-client)
-* [`pulsar-daemon`](#pulsar-daemon)
-* [`pulsar-perf`](#pulsar-perf)
-* [`bookkeeper`](#bookkeeper)
-* [`broker-tool`](#broker-tool)
-
-> **Important**
->
-> - This page only shows **some frequently used commands**. For the latest information about `pulsar`, `pulsar-client`, and `pulsar-perf`, including commands, flags, descriptions, and more, see [Pulsar tools](/tools/).
->
-> - You can get help for any CLI tool, command, or subcommand using the `--help` flag, or `-h` for short. Here's an example:
->
-> ```shell
->
-> $ bin/pulsar broker --help
->
-> ```
-
-
-## `pulsar`
-
-The `pulsar` tool is used to start Pulsar components, such as bookies and ZooKeeper, in the foreground.
-
-These processes can also be started in the background, using nohup, via the `pulsar-daemon` tool, which has the same command interface as `pulsar`.
-
-Usage:
-
-```bash
-
-$ pulsar command
-
-```
-
-Commands:
-* `bookie`
-* `broker`
-* `compact-topic`
-* `configuration-store`
-* `initialize-cluster-metadata`
-* `proxy`
-* `standalone`
-* `websocket`
-* `zookeeper`
-* `zookeeper-shell`
-* `autorecovery`
-
-Example:
-
-```bash
-
-$ PULSAR_BROKER_CONF=/path/to/broker.conf pulsar broker
-
-```
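-
-You can combine several of these environment variables on a single command line; for example, a sketch that also raises the JVM heap through `PULSAR_EXTRA_OPTS` (the values are illustrative):
-
-```bash
-
-$ PULSAR_BROKER_CONF=/path/to/broker.conf \
-  PULSAR_EXTRA_OPTS='-Xms4g -Xmx4g' \
-  bin/pulsar broker
-
-```
-
-The table below lists the environment variables that you can use to configure the `pulsar` tool.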
-
-|Variable|Description|Default|
-|---|---|---|
-|`PULSAR_LOG_CONF`|Log4j configuration file|`conf/log4j2.yaml`|
-|`PULSAR_BROKER_CONF`|Configuration file for broker|`conf/broker.conf`|
-|`PULSAR_BOOKKEEPER_CONF`|Configuration file for bookie|`conf/bookkeeper.conf`|
-|`PULSAR_ZK_CONF`|Configuration file for ZooKeeper|`conf/zookeeper.conf`|
-|`PULSAR_CONFIGURATION_STORE_CONF`|Configuration file for the configuration store|`conf/global_zookeeper.conf`|
-|`PULSAR_WEBSOCKET_CONF`|Configuration file for websocket proxy|`conf/websocket.conf`|
-|`PULSAR_STANDALONE_CONF`|Configuration file for standalone|`conf/standalone.conf`|
-|`PULSAR_EXTRA_OPTS`|Extra options to be passed to the JVM||
-|`PULSAR_EXTRA_CLASSPATH`|Extra paths for Pulsar's classpath||
-|`PULSAR_PID_DIR`|Folder where the pulsar server PID file should be stored||
-|`PULSAR_STOP_TIMEOUT`|Wait time before forcefully killing the Bookie server instance if attempts to stop it are not successful||
-|`PULSAR_GC_LOG`|GC options to be passed to the JVM||
-
-
-### `bookie`
-
-Starts up a bookie server
-
-Usage:
-
-```bash
-
-$ pulsar bookie options
-
-```
-
-Options
-
-|Option|Description|Default|
-|---|---|---|
-|`-readOnly`|Force start a read-only bookie server|false|
-|`-withAutoRecovery`|Start the bookie server with the auto-recovery service|false|
-
-
-Example
-
-```bash
-
-$ PULSAR_BOOKKEEPER_CONF=/path/to/bookkeeper.conf pulsar bookie \
-  -readOnly \
-  -withAutoRecovery
-
-```
-
-### `broker`
-
-Starts up a Pulsar broker
-
-Usage
-
-```bash
-
-$ pulsar broker options
-
-```
-
-Options
-
-|Option|Description|Default|
-|---|---|---|
-|`-bc` , `--bookie-conf`|Configuration file for BookKeeper||
-|`-rb` , `--run-bookie`|Run a BookKeeper bookie on the same host as the Pulsar broker|false|
-|`-ra` , `--run-bookie-autorecovery`|Run a BookKeeper autorecovery daemon on the same host as the Pulsar broker|false|
-
-Example
-
-```bash
-
-$ PULSAR_BROKER_CONF=/path/to/broker.conf pulsar broker
-
-```
-
-### `compact-topic`
-
-Run compaction against a Pulsar topic (in a new process)
-
-Usage
-
-```bash
-
-$ pulsar compact-topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-t` , `--topic`|The Pulsar topic that you would like to compact||
-
-Example
-
-```bash
-
-$ pulsar compact-topic --topic topic-to-compact
-
-```
-
-### `configuration-store`
-
-Starts up the Pulsar configuration store
-
-Usage
-
-```bash
-
-$ pulsar configuration-store
-
-```
-
-Example
-
-```bash
-
-$ PULSAR_CONFIGURATION_STORE_CONF=/path/to/configuration_store.conf pulsar configuration-store
-
-```
-
-### `initialize-cluster-metadata`
-
-One-time cluster metadata initialization
-
-Usage
-
-```bash
-
-$ pulsar initialize-cluster-metadata options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-ub` , `--broker-service-url`|The broker service URL for the new cluster||
-|`-tb` , `--broker-service-url-tls`|The broker service URL for the new cluster with TLS encryption||
-|`-c` , `--cluster`|Cluster name||
-|`-cms` , `--configuration-metadata-store`|The configuration metadata store quorum connection string||
-|`--existing-bk-metadata-service-uri`|The metadata service URI of the existing BookKeeper cluster that you want to use||
-|`-h` , `--help`|Help message|false|
-|`--initial-num-stream-storage-containers`|The number of storage containers of BookKeeper stream storage|16|
-|`--initial-num-transaction-coordinators`|The number of transaction coordinators assigned in a cluster|16|
-|`-uw` , `--web-service-url`|The web service URL for the
new cluster|| -|`-tw` , `--web-service-url-tls`|The web service URL for the new cluster with TLS encryption|| -|`-md` , `--metadata-store`|The metadata store service url|| -|`--zookeeper-session-timeout-ms`|The local ZooKeeper session timeout. The time unit is in millisecond(ms)|30000| - - -### `proxy` - -Manages the Pulsar proxy - -Usage - -```bash - -$ pulsar proxy options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-cms`, `--configuration-metadata-store`|Configuration metadata store connection string|| -|`-md` , `--metadata-store`|Metadata Store service url|| - -Example - -```bash - -$ PULSAR_PROXY_CONF=/path/to/proxy.conf pulsar proxy \ - --metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181 \ - --configuration-metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181 - -``` - -### `standalone` - -Run a broker service with local bookies and local ZooKeeper - -Usage - -```bash - -$ pulsar standalone options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-a` , `--advertised-address`|The standalone broker advertised address|| -|`--bookkeeper-dir`|Local bookies’ base data directory|data/standalone/bookkeeper| -|`--bookkeeper-port`|Local bookies’ base port|3181| -|`--no-broker`|Only start ZooKeeper and BookKeeper services, not the broker|false| -|`--num-bookies`|The number of local bookies|1| -|`--only-broker`|Only start the Pulsar broker service (not ZooKeeper or BookKeeper)|| -|`--wipe-data`|Clean up previous ZooKeeper/BookKeeper data|| -|`--zookeeper-dir`|Local ZooKeeper’s data directory|data/standalone/zookeeper| -|`--zookeeper-port` |Local ZooKeeper’s port|2181| - -Example - -```bash - -$ PULSAR_STANDALONE_CONF=/path/to/standalone.conf pulsar standalone - -``` - -### `websocket` - -Usage - -```bash - -$ pulsar websocket - -``` - -Example - -```bash - -$ PULSAR_WEBSOCKET_CONF=/path/to/websocket.conf pulsar websocket - -``` - -### `zookeeper` - -Starts up a ZooKeeper cluster - -Usage - -```bash - -$ pulsar zookeeper - -``` - -Example - -```bash - -$ PULSAR_ZK_CONF=/path/to/zookeeper.conf pulsar zookeeper - -``` - -### `zookeeper-shell` - -Connects to a running ZooKeeper cluster using the ZooKeeper shell - -Usage - -```bash - -$ pulsar zookeeper-shell options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-c`, `--conf`|Configuration file for ZooKeeper|| -|`-server`|Configuration zk address, eg: `127.0.0.1:2181`|| - -### `autorecovery` - -Runs an auto-recovery service. - -Usage - -```bash - -$ pulsar autorecovery options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-c`, `--conf`|Configuration for the autorecovery|N/A| - - -## `pulsar-client` - -The pulsar-client tool - -Usage - -```bash - -$ pulsar-client command - -``` - -Commands -* `produce` -* `consume` - - -Options - -|Flag|Description|Default| -|---|---|---| -|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class, for example "key1:val1,key2:val2" or "{\"key1\":\"val1\",\"key2\":\"val2\"}"|{"saslJaasClientSectionName":"PulsarClient", "serverType":"broker"}| -|`--auth-plugin`|Authentication plugin class name|org.apache.pulsar.client.impl.auth.AuthenticationSasl| -|`--listener-name`|Listener name for the broker|| -|`--proxy-protocol`|Proxy protocol to select type of routing at proxy|| -|`--proxy-url`|Proxy-server URL to which to connect|| -|`--url`|Broker URL to which to connect|pulsar://localhost:6650/
    ws://localhost:8080 | -| `-v`, `--version` | Get the version of the Pulsar client -|`-h`, `--help`|Show this help - - -### `produce` -Send a message or messages to a specific broker and topic - -Usage - -```bash - -$ pulsar-client produce topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-f`, `--files`|Comma-separated file paths to send; either -m or -f must be specified|[]| -|`-m`, `--messages`|Comma-separated string of messages to send; either -m or -f must be specified|[]| -|`-n`, `--num-produce`|The number of times to send the message(s); the count of messages/files * num-produce should be below 1000|1| -|`-r`, `--rate`|Rate (in messages per second) at which to produce; a value 0 means to produce messages as fast as possible|0.0| -|`-db`, `--disable-batching`|Disable batch sending of messages|false| -|`-c`, `--chunking`|Split the message and publish in chunks if the message size is larger than the allowed max size|false| -|`-s`, `--separator`|Character to split messages string with.|","| -|`-k`, `--key`|Message key to add|key=value string, like k1=v1,k2=v2.| -|`-p`, `--properties`|Properties to add. If you want to add multiple properties, use the comma as the separator, e.g. `k1=v1,k2=v2`.| | -|`-ekn`, `--encryption-key-name`|The public key name to encrypt payload.| | -|`-ekv`, `--encryption-key-value`|The URI of public key to encrypt payload. For example, `file:///path/to/public.key` or `data:application/x-pem-file;base64,*****`.| | - - -### `consume` -Consume messages from a specific broker and topic - -Usage - -```bash - -$ pulsar-client consume topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--hex`|Display binary messages in hexadecimal format.|false| -|`-n`, `--num-messages`|Number of messages to consume, 0 means to consume forever.|1| -|`-r`, `--rate`|Rate (in messages per second) at which to consume; a value 0 means to consume messages as fast as possible|0.0| -|`--regex`|Indicate the topic name is a regex pattern|false| -|`-s`, `--subscription-name`|Subscription name|| -|`-t`, `--subscription-type`|The type of the subscription. Possible values: Exclusive, Shared, Failover, Key_Shared.|Exclusive| -|`-p`, `--subscription-position`|The position of the subscription. Possible values: Latest, Earliest.|Latest| -|`-m`, `--subscription-mode`|Subscription mode. Possible values: Durable, NonDurable.|Durable| -|`-q`, `--queue-size`|The size of consumer's receiver queue.|0| -|`-mc`, `--max_chunked_msg`|Max pending chunk messages.|0| -|`-ac`, `--auto_ack_chunk_q_full`|Auto ack for the oldest message in consumer's receiver queue if the queue full.|false| -|`--hide-content`|Do not print the message to the console.|false| -|`-st`, `--schema-type`|Set the schema type. Use `auto_consume` to dump AVRO and other structured data types. Possible values: bytes, auto_consume.|bytes| -|`-ekv`, `--encryption-key-value`|The URI of public key to encrypt payload. For example, `file:///path/to/public.key` or `data:application/x-pem-file;base64,*****`.| | -|`-pm`, `--pool-messages`|Use the pooled message.|true| - -## `pulsar-daemon` -A wrapper around the pulsar tool that’s used to start and stop processes, such as ZooKeeper, bookies, and Pulsar brokers, in the background using nohup. - -pulsar-daemon has a similar interface to the pulsar command but adds start and stop commands for various services. For a listing of those services, run pulsar-daemon to see the help output or see the documentation for the pulsar command. 
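-
-For example, a typical standalone lifecycle looks like this (a sketch; `standalone` can be any of the services listed in the help output):
-
-```bash
-
-$ pulsar-daemon start standalone
-# ... later, once you are done ...
-$ pulsar-daemon stop standalone
-
-```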
- -Usage - -```bash - -$ pulsar-daemon command - -``` - -Commands -* `start` -* `stop` -* `restart` - - -### `start` -Start a service in the background using nohup. - -Usage - -```bash - -$ pulsar-daemon start service - -``` - -### `stop` -Stop a service that’s already been started using start. - -Usage - -```bash - -$ pulsar-daemon stop service options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|-force|Stop the service forcefully if not stopped by normal shutdown.|false| - -### `restart` -Restart a service that has already been started. - -```bash - -$ pulsar-daemon restart service - -``` - -## `pulsar-perf` -A tool for performance testing a Pulsar broker. - -Usage - -```bash - -$ pulsar-perf command - -``` - -Commands -* `consume` -* `produce` -* `read` -* `websocket-producer` -* `managed-ledger` -* `monitor-brokers` -* `simulation-client` -* `simulation-controller` -* `transaction` -* `help` - -Environment variables - -The table below lists the environment variables that you can use to configure the pulsar-perf tool. - -|Variable|Description|Default| -|---|---|---| -|`PULSAR_LOG_CONF`|Log4j configuration file|conf/log4j2.yaml| -|`PULSAR_CLIENT_CONF`|Configuration file for the client|conf/client.conf| -|`PULSAR_EXTRA_OPTS`|Extra options to be passed to the JVM|| -|`PULSAR_EXTRA_CLASSPATH`|Extra paths for Pulsar's classpath|| -|`PULSAR_GC_LOG`|Gc options to be passed to the jvm|| - - -### `consume` -Run a consumer - -Usage - -``` - -$ pulsar-perf consume options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.|| -|`--auth-plugin`|Authentication plugin class name|| -|`-ac`, `--auto_ack_chunk_q_full`|Auto ack for the oldest message in consumer's receiver queue if the queue full|false| -|`--listener-name`|Listener name for the broker|| -|`--acks-delay-millis`|Acknowledgements grouping delay in millis|100| -|`--batch-index-ack`|Enable or disable the batch index acknowledgment|false| -|`-bw`, `--busy-wait`|Enable or disable Busy-Wait on the Pulsar client|false| -|`-v`, `--encryption-key-value-file`|The file which contains the private key to decrypt payload|| -|`-h`, `--help`|Help message|false| -|`-cf`, `--conf-file`|Configuration file|| -|`-m`, `--num-messages`|Number of messages to consume in total. 
If the value is equal to or smaller than 0, it keeps consuming messages.|0| -|`-e`, `--expire_time_incomplete_chunked_messages`|The expiration time for incomplete chunk messages (in milliseconds)|0| -|`-c`, `--max-connections`|Max number of TCP connections to a single broker|100| -|`-mc`, `--max_chunked_msg`|Max pending chunk messages|0| -|`-n`, `--num-consumers`|Number of consumers (per topic)|1| -|`-ioThreads`, `--num-io-threads`|Set the number of threads to be used for handling connections to brokers|1| -|`-lt`, `--num-listener-threads`|Set the number of threads to be used for message listeners|1| -|`-ns`, `--num-subscriptions`|Number of subscriptions (per topic)|1| -|`-t`, `--num-topics`|The number of topics|1| -|`-pm`, `--pool-messages`|Use the pooled message|true| -|`-r`, `--rate`|Simulate a slow message consumer (rate in msg/s)|0| -|`-q`, `--receiver-queue-size`|Size of the receiver queue|1000| -|`-p`, `--receiver-queue-size-across-partitions`|Max total size of the receiver queue across partitions|50000| -|`--replicated`|Whether the subscription status should be replicated|false| -|`-u`, `--service-url`|Pulsar service URL|| -|`-i`, `--stats-interval-seconds`|Statistics interval seconds. If 0, statistics will be disabled|0| -|`-s`, `--subscriber-name`|Subscriber name prefix|| -|`-ss`, `--subscriptions`|A list of subscriptions to consume on (e.g. sub1,sub2)|sub| -|`-st`, `--subscription-type`|Subscriber type. Possible values are Exclusive, Shared, Failover, Key_Shared.|Exclusive| -|`-sp`, `--subscription-position`|Subscriber position. Possible values are Latest, Earliest.|Latest| -|`-time`, `--test-duration`|Test duration (in seconds). If this value is less than or equal to 0, it keeps consuming messages.|0| -|`--trust-cert-file`|Path for the trusted TLS certificate file|| -|`--tls-allow-insecure`|Allow insecure TLS connection|| - -Below are **transaction** related options. - -If you want `--txn-timeout`, `--numMessage-perTransaction`, `-nmt`, `-ntxn`, or `-abort` take effect, set `--txn-enable` to true. - -|Flag|Description|Default| -|---|---|---| -`-tto`, `--txn-timeout`|Set the time of transaction timeout (in second). |10 -`-nmt`, `--numMessage-perTransaction`|The number of messages acknowledged by a transaction. |50 -`-txn`, `--txn-enable`|Enable or disable a transaction.|false -`-ntxn`|The number of opened transactions. 0 means the number of transactions is unlimited. |0 -`-abort`|Abort a transaction. |true - -### `produce` -Run a producer - -Usage - -```bash - -$ pulsar-perf produce options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-am`, `--access-mode`|Producer access mode. Valid values are `Shared`, `Exclusive` and `WaitForExclusive`|Shared| -|`-au`, `--admin-url`|Pulsar admin URL|| -|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. 
For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.|| -|`--auth-plugin`|Authentication plugin class name|| -|`--listener-name`|Listener name for the broker|| -|`-b`, `--batch-time-window`|Batch messages in a window of the specified number of milliseconds|1| -|`-bb`, `--batch-max-bytes`|Maximum number of bytes per batch|4194304| -|`-bm`, `--batch-max-messages`|Maximum number of messages per batch|1000| -|`-bw`, `--busy-wait`|Enable or disable Busy-Wait on the Pulsar client|false| -|`-ch`, `--chunking`|Split the message and publish in chunks if the message size is larger than allowed max size|false| -|`-d`, `--delay`|Mark messages with a given delay in seconds|0s| -|`-z`, `--compression`|Compress messages’ payload. Possible values are NONE, LZ4, ZLIB, ZSTD or SNAPPY.|| -|`-cf`, `--conf-file`|Configuration file|| -|`-k`, `--encryption-key-name`|The public key name to encrypt payload|| -|`-v`, `--encryption-key-value-file`|The file which contains the public key to encrypt payload|| -|`-ef`, `--exit-on-failure`|Exit from the process on publish failure|false| -|`-fc`, `--format-class`|Custom Formatter class name|org.apache.pulsar.testclient.DefaultMessageFormatter| -|`-fp`, `--format-payload`|Format %i as a message index in the stream from producer and/or %t as the timestamp nanoseconds|false| -|`-h`, `--help`|Help message|false| -|`-c`, `--max-connections`|Max number of TCP connections to a single broker|100| -|`-o`, `--max-outstanding`|Max number of outstanding messages|1000| -|`-p`, `--max-outstanding-across-partitions`|Max number of outstanding messages across partitions|50000| -|`-m`, `--num-messages`|Number of messages to publish in total. If this value is less than or equal to 0, it keeps publishing messages.|0| -|`-mk`, `--message-key-generation-mode`|The generation mode of message key. Valid options are `autoIncrement`, `random`|| -|`-ioThreads`, `--num-io-threads`|Set the number of threads to be used for handling connections to brokers|1| -|`-n`, `--num-producers`|The number of producers (per topic)|1| -|`-threads`, `--num-test-threads`|Number of test threads|1| -|`-t`, `--num-topic`|The number of topics|1| -|`-np`, `--partitions`|Create partitioned topics with the given number of partitions. Setting this value to 0 means not trying to create a topic|| -|`-f`, `--payload-file`|Use payload from an UTF-8 encoded text file and a payload will be randomly selected when publishing messages|| -|`-e`, `--payload-delimiter`|The delimiter used to split lines when using payload from a file|\n| -|`-pn`, `--producer-name`|Producer Name|| -|`-r`, `--rate`|Publish rate msg/s across topics|100| -|`--send-timeout`|Set the sendTimeout|0| -|`--separator`|Separator between the topic and topic number|-| -|`-u`, `--service-url`|Pulsar service URL|| -|`-s`, `--size`|Message size (in bytes)|1024| -|`-i`, `--stats-interval-seconds`|Statistics interval seconds. If 0, statistics will be disabled.|0| -|`-time`, `--test-duration`|Test duration (in seconds). If this value is less than or equal to 0, it keeps publishing messages.|0| -|`--trust-cert-file`|Path for the trusted TLS certificate file|| -|`--warmup-time`|Warm-up time in seconds|1| -|`--tls-allow-insecure`|Allow insecure TLS connection|| - -Below are **transaction** related options. - -If you want `--txn-timeout`, `--numMessage-perTransaction`, or `-abort` take effect, set `--txn-enable` to true. - -|Flag|Description|Default| -|---|---|---| -`-tto`, `--txn-timeout`|Set the time of transaction timeout (in second). 
|5 -`-nmt`, `--numMessage-perTransaction`|The number of messages acknowledged by a transaction. |50 -`-txn`, `--txn-enable`|Enable or disable a transaction.|true -`-abort`|Abort a transaction. |true - -### `read` -Run a topic reader - -Usage - -```bash - -$ pulsar-perf read options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.|| -|`--auth-plugin`|Authentication plugin class name|| -|`--listener-name`|Listener name for the broker|| -|`-cf`, `--conf-file`|Configuration file|| -|`-h`, `--help`|Help message|false| -|`-n`, `--num-messages`|Number of messages to consume in total. If the value is equal to or smaller than 0, it keeps consuming messages.|0| -|`-c`, `--max-connections`|Max number of TCP connections to a single broker|100| -|`-ioThreads`, `--num-io-threads`|Set the number of threads to be used for handling connections to brokers|1| -|`-lt`, `--num-listener-threads`|Set the number of threads to be used for message listeners|1| -|`-t`, `--num-topics`|The number of topics|1| -|`-r`, `--rate`|Simulate a slow message reader (rate in msg/s)|0| -|`-q`, `--receiver-queue-size`|Size of the receiver queue|1000| -|`-u`, `--service-url`|Pulsar service URL|| -|`-m`, `--start-message-id`|Start message id. This can be either 'earliest', 'latest' or a specific message id by using 'lid:eid'|earliest| -|`-i`, `--stats-interval-seconds`|Statistics interval seconds. If 0, statistics will be disabled.|0| -|`-time`, `--test-duration`|Test duration (in seconds). If this value is less than or equal to 0, it keeps consuming messages.|0| -|`--trust-cert-file`|Path for the trusted TLS certificate file|| -|`--use-tls`|Use TLS encryption on the connection|false| -|`--tls-allow-insecure`|Allow insecure TLS connection|| - -### `websocket-producer` -Run a websocket producer - -Usage - -```bash - -$ pulsar-perf websocket-producer options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.|| -|`--auth-plugin`|Authentication plugin class name|| -|`-cf`, `--conf-file`|Configuration file|| -|`-h`, `--help`|Help message|false| -|`-m`, `--num-messages`|Number of messages to publish in total. If this value is less than or equal to 0, it keeps publishing messages.|0| -|`-t`, `--num-topic`|The number of topics|1| -|`-f`, `--payload-file`|Use payload from a file instead of empty buffer|| -|`-e`, `--payload-delimiter`|The delimiter used to split lines when using payload from a file|\n| -|`-fp`, `--format-payload`|Format %i as a message index in the stream from producer and/or %t as the timestamp nanoseconds|false| -|`-fc`, `--format-class`|Custom formatter class name|`org.apache.pulsar.testclient.DefaultMessageFormatter`| -|`-u`, `--proxy-url`|Pulsar Proxy URL, e.g., "ws://localhost:8080/"|| -|`-r`, `--rate`|Publish rate msg/s across topics|100| -|`-s`, `--size`|Message size in byte|1024| -|`-time`, `--test-duration`|Test duration (in seconds). 
-
-
-### `managed-ledger`
-Write directly to managed ledgers
-
-Usage
-
-```bash
-
-$ pulsar-perf managed-ledger options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-a`, `--ack-quorum`|Ledger ack quorum|1|
-|`-dt`, `--digest-type`|BookKeeper digest type. Possible values: [CRC32, MAC, CRC32C, DUMMY]|CRC32C|
-|`-e`, `--ensemble-size`|Ledger ensemble size|1|
-|`-h`, `--help`|Help message|false|
-|`-c`, `--max-connections`|Max number of TCP connections to a single bookie|1|
-|`-o`, `--max-outstanding`|Max number of outstanding requests|1000|
-|`-m`, `--num-messages`|Number of messages to publish in total. If this value is less than or equal to 0, it keeps publishing messages.|0|
-|`-t`, `--num-topic`|Number of managed ledgers|1|
-|`-r`, `--rate`|Write rate msg/s across managed ledgers|100|
-|`-s`, `--size`|Message size in bytes|1024|
-|`-time`, `--test-duration`|Test duration (in seconds). If this value is less than or equal to 0, it keeps publishing messages.|0|
-|`--threads`|Number of writing threads|1|
-|`-w`, `--write-quorum`|Ledger write quorum|1|
-|`-md`, `--metadata-store`|Metadata store service URL. For example: zk:my-zk:2181||
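-
-Example
-
-A minimal sketch, assuming a local ZooKeeper metadata store and a single-bookie setup such as standalone (the URL and quorum values are placeholders; adjust them for your cluster):
-
-```bash
-
-# Illustrative invocation; all flags are documented in the options table above.
-$ pulsar-perf managed-ledger --metadata-store zk:localhost:2181 --ensemble-size 1 --write-quorum 1 --ack-quorum 1 --rate 100
-
-```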

-
-
-### `monitor-brokers`
-Continuously receive broker data and/or load reports
-
-Usage
-
-```bash
-
-$ pulsar-perf monitor-brokers options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--connect-string`|A connection string for one or more ZooKeeper servers||
-|`-h`, `--help`|Help message|false|
-
-
-### `simulation-client`
-Run a simulation server acting as a Pulsar client. Uses the client configuration specified in `conf/client.conf`.
-
-Usage
-
-```bash
-
-$ pulsar-perf simulation-client options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--port`|Port to listen on for the controller|0|
-|`--service-url`|Pulsar Service URL||
-|`-h`, `--help`|Help message|false|
-
-### `simulation-controller`
-Run a simulation controller to give commands to servers
-
-Usage
-
-```bash
-
-$ pulsar-perf simulation-controller options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--client-port`|The port that the clients are listening on|0|
-|`--clients`|Comma-separated list of client hostnames||
-|`--cluster`|The cluster to test on||
-|`-h`, `--help`|Help message|false|
-
-### `transaction`
-
-Run a transaction. For more information, see [Pulsar transactions](txn-why.md).
-
-**Usage**
-
-```bash
-
-$ pulsar-perf transaction options
-
-```
-
-**Options**
-
-|Flag|Description|Default|
-|---|---|---|
-`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in the authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.|N/A
-`--auth-plugin`|Authentication plugin class name.|N/A
-`-au`, `--admin-url`|Pulsar admin URL.|N/A
-`-cf`, `--conf-file`|Configuration file.|N/A
-`-h`, `--help`|Help message.|N/A
-`-c`, `--max-connections`|Maximum number of TCP connections to a single broker.|100
-`-ioThreads`, `--num-io-threads`|Set the number of threads to be used for handling connections to brokers.|1
-`-ns`, `--num-subscriptions`|Number of subscriptions per topic.|1
-`-threads`, `--num-test-threads`|Number of test threads.

    Each test thread opens a new transaction to acknowledge messages from the consumer topics, produce messages to the producer topics, and commit or abort that transaction.

    Increasing the number of threads increases the parallelism of the performance test; consequently, it increases the intensity of the stress test.|1
-`-nmc`, `--numMessage-perTransaction-consume`|Set the number of messages consumed in a transaction.

    If transactions are disabled, this is the number of messages consumed in a task instead of in a transaction.|1
-`-nmp`, `--numMessage-perTransaction-produce`|Set the number of messages produced in a transaction.

    If transactions are disabled, this is the number of messages produced in a task instead of in a transaction.|1
-`-ntxn`, `--number-txn`|Set the number of transactions.

    0 means the number of transactions is unlimited.

    If transactions are disabled, this is the number of tasks instead of transactions.|0
-`-np`, `--partitions`|Create partitioned topics with a given number of partitions.

    0 means not trying to create a topic.|N/A
-`-q`, `--receiver-queue-size`|Size of the receiver queue.|1000
-`-u`, `--service-url`|Pulsar service URL.|N/A
-`-sp`, `--subscription-position`|Subscription position.|Earliest
-`-st`, `--subscription-type`|Subscription type.|Shared
-`-ss`, `--subscriptions`|A list of subscriptions to consume.

    For example, sub1,sub2.|[sub]
-`-time`, `--test-duration`|Test duration (in seconds).

    0 means it keeps publishing messages.|0
-`--topics-c`|All topics assigned to consumers.|[test-consume]
-`--topics-p`|All topics assigned to producers.|[test-produce]
-`--txn-disEnable`|Disable transaction.|true
-`-tto`, `--txn-timeout`|Set the time of transaction timeout (in seconds).

    If you want `--txn-timeout` to take effect, set `--txn-disEnable` to false.|5
-`-abort`|Abort the transaction.

    If you want `-abort` to take effect, set `--txn-disEnable` to false.|true
-`-txnRate`|Set the rate of opened transactions or tasks.

    0 means no limit.|0
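-
-Example
-
-A minimal sketch that opens transactions consuming from the default consumer topic and producing to the default producer topic, at a capped transaction rate (the URLs are placeholders for a local deployment):
-
-```bash
-
-# Illustrative invocation; all flags are documented in the options table above.
-$ pulsar-perf transaction --service-url pulsar://localhost:6650 --admin-url http://localhost:8080 -txnRate 10
-
-```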
-
-### `help`
-This help message
-
-Usage
-
-```bash
-
-$ pulsar-perf help
-
-```
-
-## `bookkeeper`
-A tool for managing BookKeeper.
-
-Usage
-
-```bash
-
-$ bookkeeper command
-
-```
-
-Commands
-* `autorecovery`
-* `bookie`
-* `localbookie`
-* `upgrade`
-* `shell`
-
-
-Environment variables
-
-The table below lists the environment variables that you can use to configure the bookkeeper tool.
-
-|Variable|Description|Default|
-|---|---|---|
-|BOOKIE_LOG_CONF|Log4j configuration file|conf/log4j2.yaml|
-|BOOKIE_CONF|BookKeeper configuration file|conf/bk_server.conf|
-|BOOKIE_EXTRA_OPTS|Extra options to be passed to the JVM||
-|BOOKIE_EXTRA_CLASSPATH|Extra paths for BookKeeper's classpath||
-|ENTRY_FORMATTER_CLASS|The Java class used to format entries||
-|BOOKIE_PID_DIR|Folder where the BookKeeper server PID file should be stored||
-|BOOKIE_STOP_TIMEOUT|Wait time before forcefully killing the bookie server instance if attempts to stop it are not successful||
-|BOOKIE_GC_LOG|GC options to be passed to the JVM||
-
-
-### `autorecovery`
-Runs an auto-recovery service
-
-Usage
-
-```bash
-
-$ bookkeeper autorecovery options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-c`, `--conf`|Configuration for the auto-recovery||
-
-
-### `bookie`
-Starts up a BookKeeper server (aka bookie)
-
-Usage
-
-```bash
-
-$ bookkeeper bookie options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-c`, `--conf`|Configuration file for the bookie||
-|`-readOnly`|Force start a read-only bookie server|false|
-|`-withAutoRecovery`|Start the bookie server with the auto-recovery service|false|
-
-
-### `localbookie`
-Runs a test ensemble of N bookies locally
-
-Usage
-
-```bash
-
-$ bookkeeper localbookie N
-
-```
-
-### `upgrade`
-Upgrade the bookie's filesystem
-
-Usage
-
-```bash
-
-$ bookkeeper upgrade options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-c`, `--conf`|Configuration file for the bookie||
-|`-u`, `--upgrade`|Upgrade the bookie's directories||
-
-
-### `shell`
-Run shell for admin commands. To see a full listing of those commands, run `bookkeeper shell` without an argument.
-
-Usage
-
-```bash
-
-$ bookkeeper shell
-
-```
-
-Example
-
-```bash
-
-$ bookkeeper shell bookiesanity
-
-```
-
-## `broker-tool`
-
-The `broker-tool` is used for operations on a specific broker.
-
-Usage
-
-```bash
-
-$ broker-tool command
-
-```
-
-Commands
-* `load-report`
-* `help`
-
-Example
-You can get more information about a command in either of the following ways:
-
-```bash
-
-$ broker-tool help command
-$ broker-tool command --help
-
-```
-
-### `load-report`
-
-Collect the load report of a specific broker.
-The command runs on a broker and is used to troubleshoot why a broker cannot collect the right load report.
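-
-Example
-
-An illustrative invocation on a broker host, collecting a load report every second (the interval value is arbitrary):
-
-```bash
-
-$ broker-tool load-report --interval 1000
-
-```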
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-i`, `--interval`| Interval to collect the load report, in milliseconds ||
-|`-h`, `--help`| Display help information ||
-
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/reference-configuration.md b/site2/website/versioned_docs/version-2.10.0-deprecated/reference-configuration.md
deleted file mode 100644
index b31d8d0f0a9c44..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/reference-configuration.md
+++ /dev/null
@@ -1,857 +0,0 @@
----
-id: reference-configuration
-title: Pulsar configuration
-sidebar_label: "Pulsar configuration"
-original_id: reference-configuration
----
-
-
-
-You can manage Pulsar configuration by configuration files in the [`conf`](https://github.com/apache/pulsar/tree/master/conf) directory of a Pulsar [installation](getting-started-standalone.md).
-
-- [BookKeeper](#bookkeeper)
-- [Broker](#broker)
-- [Client](#client)
-- [Log4j](#log4j)
-- [Log4j shell](#log4j-shell)
-- [Standalone](#standalone)
-- [WebSocket](#websocket)
-- [Pulsar proxy](#pulsar-proxy)
-- [ZooKeeper](#zookeeper)
-
-## BookKeeper
-
-BookKeeper is a replicated log storage system that Pulsar uses for durable storage of all messages.
-
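-As a quick orientation before the reference table, the following sketch shows how a few of these settings might appear in `conf/bookkeeper.conf` (an illustrative excerpt; the values shown are the documented defaults, and the exact keys present may vary by release):
-
-```bash
-
-$ grep -E '^(bookiePort|journalDirectory|ledgerDirectories)=' conf/bookkeeper.conf
-bookiePort=3181
-journalDirectory=data/bookkeeper/journal
-ledgerDirectories=data/bookkeeper/ledgers
-
-```
-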
-|Name|Description|Default|
-|---|---|---|
-|bookiePort|The port on which the bookie server listens.|3181|
-|allowLoopback|Whether the bookie is allowed to use a loopback interface as its primary interface (that is, the interface used to establish its identity). By default, loopback interfaces are not allowed to work as the primary interface. Using a loopback interface as the primary interface usually indicates a configuration error. For example, it's fairly common in some VPS setups to not configure a hostname or to have the hostname resolve to `127.0.0.1`. If this is the case, then all bookies in the cluster will establish their identities as `127.0.0.1:3181` and only one will be able to join the cluster. For VPSs configured like this, you should explicitly set the listening interface.|false|
-|listeningInterface|The network interface on which the bookie listens. By default, the bookie listens on all interfaces.|eth0|
-|advertisedAddress|Configure a specific hostname or IP address that the bookie should use to advertise itself to clients. By default, the bookie advertises either its own IP address or hostname according to the `listeningInterface` and `useHostNameAsBookieID` settings.|N/A|
-|allowMultipleDirsUnderSameDiskPartition|Configure the bookie to enable/disable multiple ledger/index/journal directories in the same filesystem disk partition.|false|
-|minUsableSizeForIndexFileCreation|The minimum safe usable size (in bytes) that must be available in the index directory for the bookie to create index files while replaying the journal when the bookie starts in read-only mode.|1073741824|
-|journalDirectory|The directory where BookKeeper outputs its write-ahead log (WAL).|data/bookkeeper/journal|
-|journalDirectories|Directories where BookKeeper outputs its write-ahead log. Multiple directories are available, separated by `,`. For example: `journalDirectories=/tmp/bk-journal1,/tmp/bk-journal2`. If `journalDirectories` is set, the bookie skips `journalDirectory` and uses the directories in this setting.|/tmp/bk-journal|
-|ledgerDirectories|The directory where BookKeeper outputs ledger snapshots. This could define multiple directories to store snapshots, separated by `,`, for example `ledgerDirectories=/tmp/bk1-data,/tmp/bk2-data`. Ideally, ledger dirs and the journal dir are each on a different device, which reduces the contention between random I/O and sequential writes. It is possible to run with a single disk, but performance will be significantly lower.|data/bookkeeper/ledgers|
-|ledgerManagerType|The type of ledger manager used to manage how ledgers are stored, managed, and garbage collected. See [BookKeeper Internals](http://bookkeeper.apache.org/docs/latest/getting-started/concepts) for more info.|hierarchical|
-|zkLedgersRootPath|The root ZooKeeper path used to store ledger metadata. This parameter is used by the ZooKeeper-based ledger manager as a root znode to store all ledgers.|/ledgers|
-|ledgerStorageClass|Ledger storage implementation class|org.apache.bookkeeper.bookie.storage.ldb.DbLedgerStorage|
-|entryLogFilePreallocationEnabled|Enable or disable entry logger preallocation|true|
-|logSizeLimit|Max file size of the entry logger, in bytes. A new entry log file will be created when the old one reaches the file size limit.|1073741824|
-|minorCompactionThreshold|Threshold of minor compaction. Entry log files whose remaining size percentage reaches below this threshold will be compacted in a minor compaction. If set to less than zero, the minor compaction is disabled.|0.2|
-|minorCompactionInterval|Time interval to run minor compaction, in seconds. If set to less than zero, the minor compaction is disabled. Note: should be greater than gcWaitTime.|3600|
-|majorCompactionThreshold|The threshold of major compaction. Entry log files whose remaining size percentage reaches below this threshold will be compacted in a major compaction. Those entry log files whose remaining size percentage is still higher than the threshold will never be compacted. If set to less than zero, the major compaction is disabled.|0.5|
-|majorCompactionInterval|The time interval to run major compaction, in seconds. If set to less than zero, the major compaction is disabled. Note: should be greater than gcWaitTime.|86400|
-|readOnlyModeEnabled|If `readOnlyModeEnabled=true`, then on all full ledger disks, the bookie will be converted to read-only mode and serve only read requests. Otherwise the bookie will be shut down.|true|
-|forceReadOnlyBookie|Whether the bookie is force started in read-only mode.|false|
-|persistBookieStatusEnabled|Persist the bookie status locally on the disks, so the bookies can retain their status upon restarts.|false|
-|compactionMaxOutstandingRequests|Sets the maximum number of entries that can be compacted without flushing. When compacting, the entries are written to the entry log and the new offsets are cached in memory. Once the entry log is flushed, the index is updated with the new offsets. This parameter controls the number of entries added to the entry log before a flush is forced. A higher value for this parameter means more memory will be used for offsets. Each offset consists of 3 longs. This parameter should not be modified unless you're fully aware of the consequences.|100000|
-|compactionRate|The rate at which compaction will read entries, in adds per second.|1000|
-|isThrottleByBytes|Throttle compaction by bytes or by entries.|false|
-|compactionRateByEntries|The rate at which compaction will read entries, in entries added per second.|1000|
-|compactionRateByBytes|Set the rate at which compaction reads entries. The unit is bytes added per second.|1000000|
-|journalMaxSizeMB|Max file size of the journal file, in megabytes. A new journal file will be created when the old one reaches the file size limit.|2048|
-|journalMaxBackups|The max number of old journal files to keep. Keeping a number of old journal files would help data recovery in special cases.|5|
-|journalPreAllocSizeMB|How much space to pre-allocate at a time in the journal.|16|
-|journalWriteBufferSizeKB|The size of the write buffers used for the journal.|64|
-|journalRemoveFromPageCache|Whether pages should be removed from the page cache after force write.|true|
-|journalAdaptiveGroupWrites|Whether to group journal force writes, which optimizes group commit for higher throughput.|true|
-|journalMaxGroupWaitMSec|The maximum latency to impose on a journal write to achieve grouping.|1|
-|journalAlignmentSize|All the journal writes and commits should be aligned to the given size|4096|
-|journalBufferedWritesThreshold|Maximum writes to buffer to achieve grouping|524288|
-|journalFlushWhenQueueEmpty|Whether to flush the journal when the journal queue is empty|false|
-|numJournalCallbackThreads|The number of threads that should handle journal callbacks|8|
-|openLedgerRereplicationGracePeriod | The grace period, in milliseconds, that the replication worker waits before fencing and replicating a ledger fragment that's still being written to upon bookie failure. | 30000 |
-|rereplicationEntryBatchSize|The maximum number of entries to keep in a fragment for re-replication|100|
-|autoRecoveryDaemonEnabled|Whether the bookie itself can start the auto-recovery service.|true|
-|lostBookieRecoveryDelay|How long to wait, in seconds, before starting auto recovery of a lost bookie.|0|
-|gcWaitTime|The interval at which to trigger the next garbage collection, in milliseconds. Since garbage collection runs in the background, too-frequent GC hurts performance. It is better to use a higher GC interval if there is enough disk capacity.|900000|
-|gcOverreplicatedLedgerWaitTime|The interval at which to trigger the next garbage collection of over-replicated ledgers, in milliseconds. This should not run very frequently, since the metadata for all the ledgers on the bookie is read from ZooKeeper.|86400000|
-|flushInterval|The interval at which to flush ledger index pages to disk, in milliseconds. Flushing index files introduces a lot of random disk I/O. If the journal dir and the ledger dirs are on different devices, flushing does not affect performance. But if they are on the same device, performance degrades significantly with too-frequent flushing. You can increase the flush interval to get better performance, but bookie server restarts after a failure will take longer.|60000|
-|bookieDeathWatchInterval|Interval at which to check whether the bookie is dead or not, in milliseconds|1000|
-|allowStorageExpansion|Allow the bookie storage to expand. Newly added ledger and index dirs must be empty.|false|
-|zkServers|A list of one or more servers on which ZooKeeper is running. The server list can be comma-separated values, for example: zkServers=zk1:2181,zk2:2181,zk3:2181.|localhost:2181|
-|zkTimeout|ZooKeeper client session timeout in milliseconds. The bookie server will exit if it receives SESSION_EXPIRED because it was partitioned off from ZooKeeper for longer than the session timeout. JVM garbage collection and disk I/O can cause SESSION_EXPIRED; incrementing this value can help avoid this issue.|30000|
-|zkRetryBackoffStartMs|The start time that the ZooKeeper client backoff retries, in milliseconds.|1000|
-|zkRetryBackoffMaxMs|The maximum time that the ZooKeeper client backoff retries, in milliseconds.|10000|
-|zkEnableSecurity|Set ACLs on every node written on ZooKeeper, allowing users to read and write BookKeeper metadata stored on ZooKeeper. In order to make ACLs work you need to set up ZooKeeper JAAS authentication. All the bookies and clients need to share the same user, and this is usually done using Kerberos authentication. See the ZooKeeper documentation.|false|
-|httpServerEnabled|The flag enables/disables starting the admin http server.|false|
-|httpServerPort|The HTTP server port to listen on. By default, the value is `8080`. If you want to keep it consistent with the Prometheus stats provider, you can set it to `8000`.|8080
-|httpServerClass|The http server class.|org.apache.bookkeeper.http.vertx.VertxHttpServer|
-|serverTcpNoDelay|This setting is used to enable/disable Nagle's algorithm, which is a means of improving the efficiency of TCP/IP networks by reducing the number of packets that need to be sent over the network. If you are sending many small messages, such that more than one can fit in a single IP packet, setting server.tcpnodelay to false to enable the Nagle algorithm can provide better performance.|true|
-|serverSockKeepalive|This setting is used to send keep-alive messages on connection-oriented sockets.|true|
-|serverTcpLinger|The socket linger timeout on close. When enabled, a close or shutdown will not return until all queued messages for the socket have been successfully sent or the linger timeout has been reached. Otherwise, the call returns immediately and the closing is done in the background.|0|
-|byteBufAllocatorSizeMax|The maximum buf size of the received ByteBuf allocator.|1048576|
-|nettyMaxFrameSizeBytes|The maximum netty frame size in bytes. Any message received larger than this will be rejected.|5253120|
-|openFileLimit|Max number of ledger index files that can be opened in the bookie server. If the number of ledger index files reaches this limit, the bookie server starts to swap some ledgers from memory to disk. Too-frequent swapping affects performance. You can tune this number to gain performance according to your requirements.|0|
-|pageSize|Size of an index page in the ledger cache, in bytes. A larger index page can improve the performance of writing pages to disk, which is efficient when you have a small number of ledgers and these ledgers have a similar number of entries. If you have a large number of ledgers and each ledger has fewer entries, a smaller index page improves memory usage.|8192|
-|pageLimit|How many index pages are provided in the ledger cache. If the number of index pages reaches this limit, the bookie server starts to swap some ledgers from memory to disk. You can increment this value when you find that swapping becomes more frequent. But make sure pageLimit*pageSize is not more than the JVM max memory limit, otherwise you would get an OutOfMemoryException. In general, incrementing pageLimit and using a smaller index page gives better performance when there is a large number of ledgers with fewer entries. If pageLimit is -1, the bookie server uses 1/3 of the JVM memory to compute the limit on the number of index pages.|0|
-|readOnlyModeEnabled|If all configured ledger directories are full, then support only read requests for clients. If "readOnlyModeEnabled=true", then when all ledger disks are full, the bookie will be converted to read-only mode and serve only read requests. Otherwise the bookie will be shut down. By default, this is enabled.|true|
If "readOnlyModeEnabled=true" then on all ledger disks full, bookie will be converted to read-only mode and serve only read requests. Otherwise the bookie will be shutdown. By default this will be disabled.|true| -|diskUsageThreshold|For each ledger dir, maximum disk space which can be used. Default is 0.95f. i.e. 95% of disk can be used at most after which nothing will be written to that partition. If all ledger dir partitions are full, then bookie will turn to readonly mode if ‘readOnlyModeEnabled=true’ is set, else it will shutdown. Valid values should be in between 0 and 1 (exclusive).|0.95| -|diskCheckInterval|Disk check interval in milli seconds, interval to check the ledger dirs usage.|10000| -|auditorPeriodicCheckInterval|Interval at which the auditor will do a check of all ledgers in the cluster. By default this runs once a week. The interval is set in seconds. To disable the periodic check completely, set this to 0. Note that periodic checking will put extra load on the cluster, so it should not be run more frequently than once a day.|604800| -|sortedLedgerStorageEnabled|Whether sorted-ledger storage is enabled.|true| -|auditorPeriodicBookieCheckInterval|The interval between auditor bookie checks. The auditor bookie check, checks ledger metadata to see which bookies should contain entries for each ledger. If a bookie which should contain entries is unavailable, thea the ledger containing that entry is marked for recovery. Setting this to 0 disabled the periodic check. Bookie checks will still run when a bookie fails. The interval is specified in seconds.|86400| -|numAddWorkerThreads|The number of threads that should handle write requests. if zero, the writes would be handled by netty threads directly.|0| -|numReadWorkerThreads|The number of threads that should handle read requests. if zero, the reads would be handled by netty threads directly.|8| -|numHighPriorityWorkerThreads|The umber of threads that should be used for high priority requests (i.e. recovery reads and adds, and fencing).|8| -|maxPendingReadRequestsPerThread|If read workers threads are enabled, limit the number of pending requests, to avoid the executor queue to grow indefinitely.|2500| -|maxPendingAddRequestsPerThread|The limited number of pending requests, which is used to avoid the executor queue to grow indefinitely when add workers threads are enabled.|10000| -|isForceGCAllowWhenNoSpace|Whether force compaction is allowed when the disk is full or almost full. Forcing GC could get some space back, but could also fill up the disk space more quickly. This is because new log files are created before GC, while old garbage log files are deleted after GC.|false| -|verifyMetadataOnGC|True if the bookie should double check `readMetadata` prior to GC.|false| -|flushEntrylogBytes|Entry log flush interval in bytes. Flushing in smaller chunks but more frequently reduces spikes in disk I/O. Flushing too frequently may also affect performance negatively.|268435456| -|readBufferSizeBytes|The number of bytes we should use as capacity for BufferedReadChannel.|4096| -|writeBufferSizeBytes|The number of bytes used as capacity for the write buffer|65536| -|useHostNameAsBookieID|Whether the bookie should use its hostname to register with the coordination service (e.g.: zookeeper service). When false, bookie will use its ip address for the registration.|false| -|bookieId | If you want to custom a bookie ID or use a dynamic network address for the bookie, you can set the `bookieId`.

    Bookie advertises itself using the `bookieId` rather than the `BookieSocketAddress` (`hostname:port` or `IP:port`). If you set the `bookieId`, then the `useHostNameAsBookieID` does not take effect.

    The `bookieId` is a non-empty string that can contain ASCII digits and letters ([a-zA-Z0-9]), colons, dashes, and dots.

    For more information about `bookieId`, see [here](http://bookkeeper.apache.org/bps/BP-41-bookieid/).|N/A|
-|allowEphemeralPorts|Whether the bookie is allowed to use an ephemeral port (port 0) as its server port. By default, an ephemeral port is not allowed. Using an ephemeral port as the service port usually indicates a configuration error. However, in unit tests, using an ephemeral port will address port conflict problems and allow running tests in parallel.|false|
-|enableLocalTransport|Whether the bookie is allowed to listen for the BookKeeper clients executed on the local JVM.|false|
-|disableServerSocketBind|Whether the bookie is allowed to disable bind on network interfaces. This bookie will be available only to BookKeeper clients executed on the local JVM.|false|
-|skipListArenaChunkSize|The number of bytes that we should use as chunk allocation for `org.apache.bookkeeper.bookie.SkipListArena`.|4194304|
-|skipListArenaMaxAllocSize|The maximum size that we should allocate from the skiplist arena. Allocations larger than this should be allocated directly by the VM to avoid fragmentation.|131072|
-|bookieAuthProviderFactoryClass|The factory class name of the bookie authentication provider. If this is null, then there is no authentication.|null|
-|statsProviderClass||org.apache.bookkeeper.stats.prometheus.PrometheusMetricsProvider|
-|prometheusStatsHttpPort||8000|
-|dbStorage_writeCacheMaxSizeMb|Size of the write cache. Memory is allocated from JVM direct memory. The write cache is used to buffer entries before flushing into the entry log. For good performance, it should be big enough to hold a substantial amount of entries in the flush interval.|25% of direct memory|
-|dbStorage_readAheadCacheMaxSizeMb|Size of the read cache. Memory is allocated from JVM direct memory. This read cache is pre-filled by doing read-ahead whenever a cache miss happens. By default, it is allocated to 25% of the available direct memory.|N/A|
-|dbStorage_readAheadCacheBatchSize|How many entries to pre-fill in the cache after a read cache miss|1000|
-|dbStorage_rocksDB_blockCacheSize|Size of the RocksDB block cache. For best performance, this cache should be big enough to hold a significant portion of the index database, which can reach ~2GB in some cases. By default, it uses 10% of direct memory.|N/A|
-|dbStorage_rocksDB_writeBufferSizeMB||64|
-|dbStorage_rocksDB_sstSizeInMB||64|
-|dbStorage_rocksDB_blockSize||65536|
-|dbStorage_rocksDB_bloomFilterBitsPerKey||10|
-|dbStorage_rocksDB_numLevels||-1|
-|dbStorage_rocksDB_numFilesInLevel0||4|
-|dbStorage_rocksDB_maxSizeInLevel1MB||256|
-
-## Broker
-
-Pulsar brokers are responsible for handling incoming messages from producers, dispatching messages to consumers, replicating data between clusters, and more.
-
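-A couple of commonly checked entries as they might appear in `conf/broker.conf` (an illustrative excerpt; the values shown are the documented defaults from the table below, and the exact keys present may vary by release):
-
-```bash
-
-$ grep -E '^(brokerServicePort|webServicePort|metadataStoreSessionTimeoutMillis)=' conf/broker.conf
-brokerServicePort=6650
-webServicePort=8080
-metadataStoreSessionTimeoutMillis=30000
-
-```
-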

-|Name|Description|Default|
-|---|---|---|
-|advertisedListeners|Specify multiple advertised listeners for the broker.

    The format is `<listener_name>:pulsar://<host>:<port>`.

    If there are multiple listeners, separate them with commas.

    **Note**: do not use this configuration with `advertisedAddress` and `brokerServicePort`. If the value of this configuration is empty, the broker uses `advertisedAddress` and `brokerServicePort`.|/|
-|internalListenerName|Specify the internal listener name for the broker.

    **Note**: the listener name must be contained in `advertisedListeners`.

    If the value of this configuration is empty, the broker uses the first listener as the internal listener.|/|
-|authenticateOriginalAuthData| If this flag is set to `true`, the broker authenticates the original Auth data; else it just accepts the originalPrincipal and authorizes it (if required). |false|
-|enablePersistentTopics| Whether persistent topics are enabled on the broker |true|
-|enableNonPersistentTopics| Whether non-persistent topics are enabled on the broker |true|
-|functionsWorkerEnabled| Whether the Pulsar Functions worker service is enabled in the broker |false|
-|exposePublisherStats|Whether to enable topic-level metrics.|true|
-|statsUpdateFrequencyInSecs||60|
-|statsUpdateInitialDelayInSecs||60|
-|metadataStoreUrl| Metadata store quorum connection string ||
-| metadataStoreConfigPath | The configuration file path of the local metadata store. See [Configure metadata store](administration-metadata-store.md) for details. |N/A|
-|metadataStoreCacheExpirySeconds|Metadata store cache expiry time in seconds|300|
-|configurationMetadataStoreUrl| Configuration store connection string (as a comma-separated list) ||
-|brokerServicePort| Broker data port |6650|
-|brokerServicePortTls| Broker data port for TLS |6651|
-|webServicePort| Port to use to serve HTTP requests |8080|
-|webServicePortTls| Port to use to serve HTTPS requests |8443|
-|webSocketServiceEnabled| Enable the WebSocket API service in the broker |false|
-|webSocketNumIoThreads|The number of IO threads in the Pulsar client used in the WebSocket proxy.|Runtime.getRuntime().availableProcessors()|
-|webSocketConnectionsPerBroker|The number of connections per broker in the Pulsar client used in the WebSocket proxy.|Runtime.getRuntime().availableProcessors()|
-|webSocketSessionIdleTimeoutMillis|Time in milliseconds after which an idle WebSocket session times out.|300000|
-|webSocketMaxTextFrameSize|The maximum size of a text message during parsing in the WebSocket proxy.|1048576|
-|exposeTopicLevelMetricsInPrometheus|Whether to enable topic-level metrics.|true|
-|exposeConsumerLevelMetricsInPrometheus|Whether to enable consumer-level metrics.|false|
-|jvmGCMetricsLoggerClassName|Classname of the pluggable JVM GC metrics logger that can log GC-specific metrics.|N/A|
-|bindAddress| Hostname or IP address the service binds on; default is 0.0.0.0. |0.0.0.0|
-|bindAddresses| Additional hostnames or IP addresses the service binds on: `listener_name:scheme://host:port,...`. ||
-|advertisedAddress| Hostname or IP address the service advertises to the outside world. If not set, the value of `InetAddress.getLocalHost().getHostName()` is used. ||
-|clusterName| Name of the cluster to which this broker belongs ||
-|maxTenants|The maximum number of tenants that can be created in each Pulsar cluster. When the number of tenants reaches the threshold, the broker rejects the request to create a new tenant. The default value 0 disables the check. |0|
-|brokerDeduplicationEnabled| Sets the default behavior for message deduplication in the broker. If enabled, the broker will reject messages that were already stored in the topic. This setting can be overridden on a per-namespace basis. |false|
-|brokerDeduplicationMaxNumberOfProducers| The maximum number of producers for which information will be stored for deduplication purposes. |10000|
-|brokerDeduplicationEntriesInterval| The number of entries after which a deduplication informational snapshot is taken. A larger interval will lead to fewer snapshots being taken, though this would also lengthen the topic recovery time (the time required for entries published after the snapshot to be replayed). |1000|
-|brokerDeduplicationSnapshotIntervalSeconds| The time period after which a deduplication informational snapshot is taken. It runs simultaneously with `brokerDeduplicationEntriesInterval`. |120|
-|brokerDeduplicationProducerInactivityTimeoutMinutes| The time of inactivity (in minutes) after which the broker will discard deduplication information related to a disconnected producer. |360|
-|brokerDeduplicationSnapshotFrequencyInSeconds| How often the thread pool is scheduled to check whether a snapshot needs to be taken. The value of `0` means it is disabled. |120|
-|dispatchThrottlingRateInMsg| Dispatch throttling-limit of messages for a broker (per second). 0 means the dispatch throttling-limit is disabled. |0|
-|dispatchThrottlingRateInByte| Dispatch throttling-limit of bytes for a broker (per second). 0 means the dispatch throttling-limit is disabled. |0|
-|dispatchThrottlingRatePerTopicInMsg| Dispatch throttling-limit of messages for every topic (per second). 0 means the dispatch throttling-limit is disabled. |0|
-|dispatchThrottlingRatePerTopicInByte| Dispatch throttling-limit of bytes for every topic (per second). 0 means the dispatch throttling-limit is disabled. |0|
-|dispatchThrottlingOnBatchMessageEnabled|Apply dispatch rate limiting on the batch message instead of the individual messages within the batch. (Disabled by default.) | false|
-|dispatchThrottlingRateRelativeToPublishRate| Enable dispatch rate-limiting relative to publish rate. | false |
-|dispatchThrottlingRatePerSubscriptionInMsg| Dispatch throttling-limit of messages for a subscription. 0 means the dispatch throttling-limit is disabled. |0|
-|dispatchThrottlingRatePerSubscriptionInByte|Dispatch throttling-limit of bytes for a subscription. 0 means the dispatch throttling-limit is disabled.|0|
-|dispatchThrottlingRatePerReplicatorInMsg| The default messages-per-second dispatch throttling-limit for every replicator in replication. The value of `0` means replication message dispatch-throttling is disabled| 0 |
-|dispatchThrottlingRatePerReplicatorInByte| The default bytes-per-second dispatch throttling-limit for every replicator in replication. The value of `0` means replication message-byte dispatch-throttling is disabled| 0 |
-|metadataStoreSessionTimeoutMillis| Metadata store session timeout in milliseconds |30000|
-|brokerShutdownTimeoutMs| Time to wait for broker graceful shutdown. After this time elapses, the process will be killed |60000|
-|skipBrokerShutdownOnOOM| Flag to skip broker shutdown when the broker handles an out-of-memory error. |false|
-|backlogQuotaCheckEnabled| Enable backlog quota check. Enforces action on the topic when the quota is reached |true|
-|backlogQuotaCheckIntervalInSeconds| How often to check for topics that have reached the quota |60|
-|backlogQuotaDefaultLimitBytes| The default per-topic backlog quota limit. Being less than 0 means no limitation. By default, it is -1. | -1 |
-|backlogQuotaDefaultRetentionPolicy|The default backlog quota retention policy. By default, it is `producer_request_hold`.
    • 'producer_request_hold': policy which holds the producer's send request until the resource becomes available (or holding times out)
    • 'producer_exception': policy which throws `javax.jms.ResourceAllocationException` to the producer
    • 'consumer_backlog_eviction': policy which evicts the oldest message from the slowest consumer's backlog
    |producer_request_hold|
-|allowAutoTopicCreation| Enable topic auto-creation if a new producer or consumer is connected |true|
-|allowAutoTopicCreationType| The type of topic that is allowed to be automatically created. (partitioned/non-partitioned) |non-partitioned|
-|allowAutoSubscriptionCreation| Enable subscription auto-creation if a new consumer is connected |true|
-|defaultNumPartitions| The number of partitions of a partitioned topic that is automatically created if `allowAutoTopicCreationType` is partitioned |1|
-|brokerDeleteInactiveTopicsEnabled| Enable the deletion of inactive topics. If topics are not consumed for a while, these inactive topics might be cleaned up. Deleting inactive topics is enabled by default. The default period is 1 minute. |true|
-|brokerDeleteInactiveTopicsFrequencySeconds| How often to check for inactive topics |60|
-| brokerDeleteInactiveTopicsMode | Set the mode to delete inactive topics.
    • `delete_when_no_subscriptions`: delete the topic which has no subscriptions or active producers.
    • `delete_when_subscriptions_caught_up`: delete the topic whose subscriptions have no backlogs and which has no active producers or consumers.
    | `delete_when_no_subscriptions` |
-| brokerDeleteInactiveTopicsMaxInactiveDurationSeconds | Set the maximum duration for inactive topics. If it is not specified, the `brokerDeleteInactiveTopicsFrequencySeconds` parameter is adopted. | N/A |
-|forceDeleteTenantAllowed| Enables you to delete a tenant forcefully. |false|
-|forceDeleteNamespaceAllowed| Enables you to delete a namespace forcefully. |false|
-|messageExpiryCheckIntervalInMinutes| The frequency of proactively checking and purging expired messages. |5|
-|brokerServiceCompactionMonitorIntervalInSeconds| Interval between checks to determine whether topics with compaction policies need compaction. |60|
-brokerServiceCompactionThresholdInBytes|If the estimated backlog size is greater than this threshold, compaction is triggered.

    Setting this threshold to 0 disables the compaction check.|N/A
-|delayedDeliveryEnabled| Whether to enable delayed delivery for messages. If disabled, messages will be immediately delivered and there will be no tracking overhead.|true|
-|delayedDeliveryTickTimeMillis|Control the tick time for retrying on delayed delivery, which affects the accuracy of the delivery time compared to the scheduled time. By default, it is 1 second.|1000|
-|activeConsumerFailoverDelayTimeMillis| How long to delay rewinding the cursor and dispatching messages when the active consumer is changed. |1000|
-|clientLibraryVersionCheckEnabled| Enable check for the minimum allowed client library version |false|
-|clientLibraryVersionCheckAllowUnversioned| Allow client libraries with no version information |true|
-|statusFilePath| Path for the file used to determine the rotation status for the broker when responding to service discovery health checks ||
-|preferLaterVersions| If true (and ModularLoadManagerImpl is being used), the load manager will attempt to use only brokers running the latest software version (to minimize impact to bundles) |false|
-|maxNumPartitionsPerPartitionedTopic|Max number of partitions per partitioned topic. Use 0 or a negative number to disable the check|0|
-| maxSubscriptionsPerTopic | Maximum number of subscriptions allowed to subscribe to a topic. Once this limit is reached, the broker rejects new subscriptions until the number of subscriptions decreases. When the value is set to 0, the limit check is disabled. | 0 |
-| maxProducersPerTopic | Maximum number of producers allowed to connect to a topic. Once this limit is reached, the broker rejects new producers until the number of connected producers decreases. When the value is set to 0, the limit check is disabled. | 0 |
-| maxConsumersPerTopic | Maximum number of consumers allowed to connect to a topic. Once this limit is reached, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 |
-| maxConsumersPerSubscription | Maximum number of consumers allowed to connect to a subscription. Once this limit is reached, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 |
-|tlsCertificateFilePath| Path for the TLS certificate file ||
-|tlsKeyFilePath| Path for the TLS private key file ||
-|tlsTrustCertsFilePath| Path for the trusted TLS certificate file. This cert is used to verify that any certs presented by connecting clients are signed by a certificate authority. If this verification fails, then the certs are untrusted and the connections are dropped. ||
-|tlsAllowInsecureConnection| Accept untrusted TLS certificates from clients. If it is set to `true`, a client with a cert which cannot be verified with the 'tlsTrustCertsFilePath' cert will be allowed to connect to the server, though the cert will not be used for client authentication. |false|
-|tlsProtocols|Specify the TLS protocols the broker will use to negotiate during the TLS handshake. Multiple values can be specified, separated by commas. Example: ```TLSv1.3```, ```TLSv1.2``` ||
-|tlsCiphers|Specify the TLS ciphers the broker will use to negotiate during the TLS handshake. Multiple values can be specified, separated by commas. Example: ```TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256```||
-|tlsEnabledWithKeyStore| Enable TLS with KeyStore type configuration in the broker |false|
-|tlsProvider| TLS provider for KeyStore type ||
-|tlsKeyStoreType| TLS KeyStore type configuration in the broker: JKS, PKCS12 |JKS|
-|tlsKeyStore| TLS KeyStore path in the broker ||
-|tlsKeyStorePassword| TLS KeyStore password for the broker ||
-|brokerClientTlsEnabledWithKeyStore| Whether the internal client uses KeyStore type to authenticate with Pulsar brokers |false|
-|brokerClientSslProvider| The TLS provider used by the internal client to authenticate with other Pulsar brokers ||
-|brokerClientTlsTrustStoreType| TLS TrustStore type configuration for the internal client: JKS, PKCS12, used by the internal client to authenticate with Pulsar brokers |JKS|
-|brokerClientTlsTrustStore| TLS TrustStore path for the internal client, used by the internal client to authenticate with Pulsar brokers ||
-|brokerClientTlsTrustStorePassword| TLS TrustStore password for the internal client, used by the internal client to authenticate with Pulsar brokers ||
-|brokerClientTlsCiphers| Specify the TLS ciphers the internal client will use to negotiate during the TLS handshake (a comma-separated list of ciphers), e.g. [TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256]||
-|brokerClientTlsProtocols|Specify the TLS protocols the broker will use to negotiate during the TLS handshake (a comma-separated list of protocol names), e.g. `TLSv1.3`, `TLSv1.2` ||
-| metadataStoreBatchingEnabled | Enable metadata operations batching. | true |
-| metadataStoreBatchingMaxDelayMillis | Maximum delay to impose on batching grouping. | 5 |
-| metadataStoreBatchingMaxOperations | Maximum number of operations to include in a singular batch. | 1000 |
-| metadataStoreBatchingMaxSizeKb | Maximum size of a batch. | 128 |
-|ttlDurationDefaultInSeconds|The default Time to Live (TTL) for namespaces if the TTL is not configured in namespace policies. When the value is set to `0`, TTL is disabled. By default, TTL is disabled. |0|
-|tokenSettingPrefix| Configure the prefix of the token-related settings, such as `tokenSecretKey`, `tokenPublicKey`, `tokenAuthClaim`, `tokenPublicAlg`, `tokenAudienceClaim`, and `tokenAudience`. ||
-|tokenSecretKey| Configure the secret key to be used to validate auth tokens. The key can be specified like: `tokenSecretKey=data:;base64,xxxxxxxxx` or `tokenSecretKey=file:///my/secret.key`. Note: the key file must be DER-encoded.||
-|tokenPublicKey| Configure the public key to be used to validate auth tokens. The key can be specified like: `tokenPublicKey=data:;base64,xxxxxxxxx` or `tokenPublicKey=file:///my/secret.key`. Note: the key file must be DER-encoded.||
-|tokenPublicAlg| Configure the algorithm to be used to validate auth tokens. This can be any of the asymmetric algorithms supported by Java JWT (https://github.com/jwtk/jjwt#signature-algorithms-keys) |RS256|
-|tokenAuthClaim| Specify which of the token's claims will be used as the authentication "principal" or "role". The default "sub" claim will be used if this is left blank ||
-|tokenAudienceClaim| The token audience "claim" name, e.g. "aud", that will be used to get the audience from the token. If not set, the audience will not be verified. ||
-|tokenAudience| The token audience stands for this broker. The field `tokenAudienceClaim` of a valid token needs to contain this. ||
-|maxUnackedMessagesPerConsumer| Max number of unacknowledged messages allowed for a consumer on a shared subscription. The broker will stop sending messages to the consumer once this limit is reached, until the consumer starts acknowledging messages back. Using a value of 0 disables the unacked-message limit check, and the consumer can receive messages without any restriction |50000|
-|maxUnackedMessagesPerSubscription| Max number of unacknowledged messages allowed per shared subscription. The broker will stop dispatching messages to all consumers of the subscription once this limit is reached, until consumers start acknowledging messages back and the unacked count falls to limit/2. Using a value of 0 disables the unacked-message limit check, and the dispatcher can dispatch messages without any restriction |200000|
-|subscriptionRedeliveryTrackerEnabled| Enable subscription message redelivery tracker |true|
-|subscriptionExpirationTimeMinutes | How long, counted from the last consumption, before inactive subscriptions are deleted.

    Setting this configuration to a value **greater than 0** deletes inactive subscriptions automatically.
    Setting this configuration to **0** does not delete inactive subscriptions automatically.

    Since this configuration takes effect on all topics, if there is even one topic whose subscriptions should not be deleted automatically, you need to set it to 0.
    Instead, you can set a subscription expiration time for each **namespace** using the [`pulsar-admin namespaces set-subscription-expiration-time options` command](/tools/pulsar-admin/). | 0 |
-|maxConcurrentLookupRequest| Max number of concurrent lookup requests the broker allows, to throttle heavy incoming lookup traffic |50000|
-|maxConcurrentTopicLoadRequest| Max number of concurrent topic loading requests the broker allows, to control the number of ZooKeeper operations |5000|
-|authenticationEnabled| Enable authentication |false|
-|authenticationProviders| Authentication provider name list, which is a comma-separated list of class names ||
-| authenticationRefreshCheckSeconds | Interval of time for checking for expired authentication credentials | 60 |
-|authorizationEnabled| Enforce authorization |false|
-|superUserRoles| Role names that are treated as "super-users", meaning they will be able to do all admin operations and publish/consume from all topics ||
-|brokerClientAuthenticationPlugin| Authentication settings of the broker itself. Used when the broker connects to other brokers, either in the same or other clusters ||
-|brokerClientAuthenticationParameters|||
-|athenzDomainNames| Supported Athenz provider domain names (comma-separated) for authentication ||
-|exposePreciseBacklogInPrometheus| Enable exposing the precise backlog stats. Set to false to calculate using the published counter and consumed counter instead, which is more efficient but may be inaccurate. |false|
-|schemaRegistryStorageClassName|The schema storage implementation used by this broker.|org.apache.pulsar.broker.service.schema.BookkeeperSchemaStorageFactory|
-|isSchemaValidationEnforced| Whether to enable schema validation. When schema validation is enabled, if a producer without a schema attempts to produce a message to a topic with a schema, the producer is rejected and disconnected.|false|
-|isAllowAutoUpdateSchemaEnabled|Allow schemas to be auto-updated at broker level.|true|
-|schemaCompatibilityStrategy| The schema compatibility strategy at broker level; see [here](schema-evolution-compatibility.md#schema-compatibility-check-strategy) for available values.|FULL|
-|systemTopicSchemaCompatibilityStrategy| The schema compatibility strategy used for system topics; see [here](schema-evolution-compatibility.md#schema-compatibility-check-strategy) for available values.|ALWAYS_COMPATIBLE|
-| topicFencingTimeoutSeconds | If a topic remains fenced for a certain time period (in seconds), it is closed forcefully. If set to 0 or a negative number, the fenced topic is not closed. | 0 |
-|offloadersDirectory|The directory for all the offloader implementations.|./offloaders|
-|bookkeeperMetadataServiceUri| Metadata service URI that BookKeeper uses for loading the corresponding metadata driver and resolving its metadata service location. This value can be fetched using the `bookkeeper shell whatisinstanceid` command in a BookKeeper cluster. For example: zk+hierarchical://localhost:2181/ledgers. The metadata service URI list can also be semicolon-separated values like: zk+hierarchical://zk1:2181;zk2:2181;zk3:2181/ledgers ||
-|bookkeeperClientAuthenticationPlugin| Authentication plugin to use when connecting to bookies ||
-|bookkeeperClientAuthenticationParametersName| BookKeeper auth plugin implementation-specific parameters name and values ||
-|bookkeeperClientAuthenticationParameters|||
-|bookkeeperClientNumWorkerThreads| Number of BookKeeper client worker threads. Default is Runtime.getRuntime().availableProcessors() ||
-|bookkeeperClientTimeoutInSeconds| Timeout for BK add / read operations |30|
-|bookkeeperClientSpeculativeReadTimeoutInMillis| Speculative reads are initiated if a read request doesn't complete within a certain time. Using a value of 0 disables speculative reads |0|
-|bookkeeperNumberOfChannelsPerBookie| Number of channels per bookie |16|
-|bookkeeperClientHealthCheckEnabled| Enable bookie health checks. Bookies that have more than the configured number of failures within the interval will be quarantined for some time. During this period, new ledgers won't be created on these bookies |true|
-|bookkeeperClientHealthCheckIntervalSeconds||60|
-|bookkeeperClientHealthCheckErrorThresholdPerInterval||5|
-|bookkeeperClientHealthCheckQuarantineTimeInSeconds ||1800|
-|bookkeeperClientRackawarePolicyEnabled| Enable the rack-aware bookie selection policy. BK will choose bookies from different racks when forming a new bookie ensemble |true|
-|bookkeeperClientRegionawarePolicyEnabled| Enable the region-aware bookie selection policy. BK will choose bookies from different regions and racks when forming a new bookie ensemble. If enabled, the value of bookkeeperClientRackawarePolicyEnabled is ignored |false|
-|bookkeeperClientMinNumRacksPerWriteQuorum| Minimum number of racks per write quorum. The BK rack-aware bookie selection policy will try to get bookies from at least 'bookkeeperClientMinNumRacksPerWriteQuorum' racks for a write quorum. |2|
-|bookkeeperClientEnforceMinNumRacksPerWriteQuorum| Enforces the rack-aware bookie selection policy to pick bookies from 'bookkeeperClientMinNumRacksPerWriteQuorum' racks for a write quorum. If BK can't find such bookies, it throws BKNotEnoughBookiesException instead of picking a random one. |false|
-|bookkeeperClientReorderReadSequenceEnabled| Enable/disable reordering the read sequence on reading entries. |false|
-|bookkeeperClientIsolationGroups| Enable bookie isolation by specifying a list of bookie groups to choose from. Any bookie outside the specified groups will not be used by the broker ||
-|bookkeeperClientSecondaryIsolationGroups| Enable the bookie secondary-isolation group if bookkeeperClientIsolationGroups doesn't have enough bookies available. ||
-|bookkeeperClientMinAvailableBookiesInIsolationGroups| Minimum number of bookies that should be available as part of bookkeeperClientIsolationGroups, otherwise the broker will include bookkeeperClientSecondaryIsolationGroups bookies in the isolated list. ||
-|bookkeeperClientGetBookieInfoIntervalSeconds| Set the interval to periodically check bookie info |86400|
-|bookkeeperClientGetBookieInfoRetryIntervalSeconds| Set the interval to retry a failed bookie info lookup |60|
-|bookkeeperEnableStickyReads | Enable/disable having read operations for a ledger be sticky to a single bookie. If this flag is enabled, the client will use one single bookie (by preference) to read all entries for a ledger. | true |
-|managedLedgerDefaultEnsembleSize| Number of bookies to use when creating a ledger |2|
-|managedLedgerDefaultWriteQuorum| Number of copies to store for each message |2|
-|managedLedgerDefaultAckQuorum| Number of guaranteed copies (acks to wait for before a write is complete) |2|
-|managedLedgerCacheSizeMB| Amount of memory to use for caching data payloads in the managed ledger. This memory is allocated from JVM direct memory and it's shared across all the topics running in the same broker. By default, it uses 1/5 of the available direct memory ||
-|managedLedgerCacheCopyEntries| Whether to make a copy of the entry payloads when inserting into the cache| false|
-|managedLedgerCacheEvictionWatermark| Threshold to which the cache level is brought down when eviction is triggered |0.9|
-|managedLedgerCacheEvictionFrequency| Configure the cache eviction frequency for the managed ledger cache (evictions/sec) | 100.0 |
-|managedLedgerCacheEvictionTimeThresholdMillis| All entries that have stayed in the cache for more than the configured time will be evicted | 1000 |
-|managedLedgerCursorBackloggedThreshold| Configure the threshold (in number of entries) from which a cursor should be considered 'backlogged' and thus should be set as inactive. | 1000|
-|managedLedgerDefaultMarkDeleteRateLimit| Rate limit the amount of writes per second generated by consumers acknowledging messages |1.0|
-|managedLedgerMaxEntriesPerLedger| The max number of entries to append to a ledger before triggering a rollover. A ledger rollover is triggered after the min rollover time has passed and one of the following conditions is true:
    • The max rollover time has been reached
    • The max entries have been written to the ledger
    • The max ledger size has been written to the ledger
    |50000|
-|managedLedgerMinLedgerRolloverTimeMinutes| Minimum time between ledger rollovers for a topic |10|
-|managedLedgerMaxLedgerRolloverTimeMinutes| Maximum time before forcing a ledger rollover for a topic |240|
-|managedLedgerInactiveLedgerRolloverTimeSeconds| Time to roll over the ledger for an inactive topic |0|
-|managedLedgerCursorMaxEntriesPerLedger| Max number of entries to append to a cursor ledger |50000|
-|managedLedgerCursorRolloverTimeInSeconds| Max time before triggering a rollover on a cursor ledger |14400|
-|managedLedgerMaxUnackedRangesToPersist| Max number of "acknowledgment holes" that are going to be persistently stored. When acknowledging out of order, a consumer will leave holes that are supposed to be quickly filled by acking all the messages. The information about which messages are acknowledged is persisted by compressing it into "ranges" of messages that were acknowledged. After the max number of ranges is reached, the information will only be tracked in memory and messages will be redelivered in case of crashes. |1000|
-|autoSkipNonRecoverableData| Skip reading non-recoverable/unreadable data ledgers in the managed ledger's list. It helps when data ledgers get corrupted in BookKeeper and the managed cursor is stuck at that ledger. |false|
-|loadBalancerEnabled| Enable load balancer |true|
-|loadBalancerPlacementStrategy| Strategy to assign a new bundle |weightedRandomSelection|
-|loadBalancerReportUpdateThresholdPercentage| Percentage of change to trigger a load report update |10|
-|loadBalancerReportUpdateMaxIntervalMinutes| Maximum interval to update the load report |15|
-|loadBalancerHostUsageCheckIntervalMinutes| Frequency of report to collect |1|
-|loadBalancerSheddingIntervalMinutes| Load shedding interval. The broker periodically checks whether some traffic should be offloaded from over-loaded brokers to under-loaded brokers |30|
-|loadBalancerSheddingGracePeriodMinutes| Prevent the same topics from being shed and moved to another broker more than once within this timeframe |30|
-|loadBalancerBrokerMaxTopics| Usage threshold to allocate the max number of topics to a broker |50000|
-|loadBalancerBrokerUnderloadedThresholdPercentage| Usage threshold to determine a broker as under-loaded |1|
-|loadBalancerBrokerOverloadedThresholdPercentage| Usage threshold to determine a broker as over-loaded |85|
-|loadBalancerResourceQuotaUpdateIntervalMinutes| Interval to update namespace bundle resource quota |15|
-|loadBalancerBrokerComfortLoadLevelPercentage| Usage threshold to determine that a broker has just the right level of load |65|
-|loadBalancerAutoBundleSplitEnabled| Enable/disable namespace bundle auto-split |false|
-|loadBalancerNamespaceBundleMaxTopics| Maximum number of topics in a bundle, otherwise bundle split will be triggered |1000|
-|loadBalancerNamespaceBundleMaxSessions| Maximum number of sessions (producers + consumers) in a bundle, otherwise bundle split will be triggered |1000|
-|loadBalancerNamespaceBundleMaxMsgRate| Maximum msgRate (in + out) in a bundle, otherwise bundle split will be triggered |1000|
-|loadBalancerNamespaceBundleMaxBandwidthMbytes| Maximum bandwidth (in + out) in a bundle, otherwise bundle split will be triggered |100|
-|loadBalancerNamespaceMaximumBundles| Maximum number of bundles in a namespace |128|
-|loadBalancerLoadSheddingStrategy | The load shedding strategy of the load balancer.

-|loadBalancerLoadSheddingStrategy | The shedding strategy of load balance.
    Available values:
  - `org.apache.pulsar.broker.loadbalance.impl.ThresholdShedder`
  - `org.apache.pulsar.broker.loadbalance.impl.OverloadShedder`
  - `org.apache.pulsar.broker.loadbalance.impl.UniformLoadShedder`
    For a comparison of the shedding strategies, see [here](administration-load-balance.md#shed-load-automatically).|`org.apache.pulsar.broker.loadbalance.impl.ThresholdShedder`|
-|replicationMetricsEnabled| Enable replication metrics |true|
-|replicationConnectionsPerBroker| Max number of connections to open for each broker in a remote cluster. More connections host-to-host lead to better throughput over high-latency links. |16|
-|replicationProducerQueueSize| Replicator producer queue size |1000|
-|replicatorPrefix| Replicator prefix used for replicator producer name and cursor name |pulsar.repl|
-|transactionCoordinatorEnabled|Whether to enable the transaction coordinator in the broker.|true|
-|transactionMetadataStoreProviderClassName| |org.apache.pulsar.transaction.coordinator.impl.InMemTransactionMetadataStoreProvider|
-|defaultRetentionTimeInMinutes| Default message retention time. 0 means retention is disabled. -1 means data is not removed by time quota |0|
-|defaultRetentionSizeInMB| Default retention size. 0 means retention is disabled. -1 means data is not removed by size quota |0|
-|keepAliveIntervalSeconds| How often to check whether the connections are still alive |30|
-|bootstrapNamespaces| The bootstrap namespaces. | N/A |
-|loadManagerClassName| Name of load manager to use |org.apache.pulsar.broker.loadbalance.impl.SimpleLoadManagerImpl|
-|supportedNamespaceBundleSplitAlgorithms| Supported algorithm names for namespace bundle split |[range_equally_divide,topic_count_equally_divide]|
-|defaultNamespaceBundleSplitAlgorithm| Default algorithm name for namespace bundle split |range_equally_divide|
-|managedLedgerOffloadDriver| Driver to use to offload old data to long-term storage (possible values: S3, aws-s3, google-cloud-storage). The directory for all the offloader implementations is `offloadersDirectory=./offloaders`. When using google-cloud-storage, make sure both Google Cloud Storage and the Google Cloud Storage JSON API are enabled for the project (check from Developers Console -> Api&auth -> APIs). ||
-|managedLedgerOffloadMaxThreads| Maximum number of thread pool threads for ledger offloading |2|
-|managedLedgerOffloadPrefetchRounds|The maximum prefetch rounds for ledger reading for offloading.|1|
-|managedLedgerUnackedRangesOpenCacheSetEnabled| Use Open Range-Set to cache unacknowledged messages |true|
-|managedLedgerOffloadDeletionLagMs|Delay between a ledger being successfully offloaded to long-term storage and the ledger being deleted from BookKeeper | 14400000|
-|managedLedgerOffloadAutoTriggerSizeThresholdBytes|The number of bytes before triggering automatic offload to long-term storage |-1 (disabled)|
-|s3ManagedLedgerOffloadRegion| For Amazon S3 ledger offload, AWS region ||
-|s3ManagedLedgerOffloadBucket| For Amazon S3 ledger offload, bucket to place offloaded ledgers into ||
-|s3ManagedLedgerOffloadServiceEndpoint| For Amazon S3 ledger offload, alternative endpoint to connect to (useful for testing) ||
-|s3ManagedLedgerOffloadMaxBlockSizeInBytes| For Amazon S3 ledger offload, max block size in bytes. (64MB by default, 5MB minimum) |67108864|
-|s3ManagedLedgerOffloadReadBufferSizeInBytes| For Amazon S3 ledger offload, read buffer size in bytes (1MB by default) |1048576|
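Putting the S3 offload parameters above together, a minimal `broker.conf` sketch might look like this; the bucket name and region are hypothetical placeholders:

```properties
managedLedgerOffloadDriver=aws-s3
s3ManagedLedgerOffloadRegion=us-west-2
s3ManagedLedgerOffloadBucket=pulsar-topic-offload
# trigger automatic offload once a topic's ledgers exceed ~1 GB
managedLedgerOffloadAutoTriggerSizeThresholdBytes=1073741824
```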
-|gcsManagedLedgerOffloadRegion|For Google Cloud Storage ledger offload, the region where the offload bucket is located. Go to this page for more details: https://cloud.google.com/storage/docs/bucket-locations .|N/A|
-|gcsManagedLedgerOffloadBucket|For Google Cloud Storage ledger offload, bucket to place offloaded ledgers into.|N/A|
-|gcsManagedLedgerOffloadMaxBlockSizeInBytes|For Google Cloud Storage ledger offload, the maximum block size in bytes. (64MB by default, 5MB minimum)|67108864|
-|gcsManagedLedgerOffloadReadBufferSizeInBytes|For Google Cloud Storage ledger offload, read buffer size in bytes. (1MB by default)|1048576|
-|gcsManagedLedgerOffloadServiceAccountKeyFile|For Google Cloud Storage, path to a JSON file containing service account credentials. For more details, see the "Service Accounts" section of https://support.google.com/googleapi/answer/6158849 .|N/A|
-|fileSystemProfilePath|For File System Storage, file system profile path.|../conf/filesystem_offload_core_site.xml|
-|fileSystemURI|For File System Storage, file system URI.|N/A|
-|s3ManagedLedgerOffloadRole| For Amazon S3 ledger offload, provide a role to assume before writing to S3 ||
-|s3ManagedLedgerOffloadRoleSessionName| For Amazon S3 ledger offload, provide a role session name when using a role |pulsar-s3-offload|
-| acknowledgmentAtBatchIndexLevelEnabled | Enable or disable batch index acknowledgement. | false |
-|enableReplicatedSubscriptions|Whether to enable tracking of replicated subscriptions state across clusters.|true|
-|replicatedSubscriptionsSnapshotFrequencyMillis|The frequency of snapshots for replicated subscriptions tracking.|1000|
-|replicatedSubscriptionsSnapshotTimeoutSeconds|The timeout for building a consistent snapshot for tracking replicated subscriptions state.|30|
-|replicatedSubscriptionsSnapshotMaxCachedPerSubscription|The maximum number of snapshots to be cached per subscription.|10|
-|maxMessagePublishBufferSizeInMB|The maximum memory size for a broker to handle messages that are sent by producers. If the processing message size exceeds this value, the broker stops reading data from the connection. The processing messages are the messages that have been sent to the broker but for which the broker has not yet sent a response to the client; usually these messages are waiting to be written to bookies. The limit is shared across all the topics running on the same broker. The value `-1` disables the memory limitation. By default, it is 50% of direct memory.|N/A|
-|messagePublishBufferCheckIntervalInMillis|Interval between checks to see if the message publish buffer size exceeds the maximum. Use `0` or a negative number to disable the max publish buffer limiting.|100|
-|retentionCheckIntervalInSeconds|Interval between checks to see if consumed ledgers need to be trimmed. Use 0 or a negative number to disable the check.|120|
-| maxMessageSize | Set the maximum size of a message. | 5242880 |
-| preciseTopicPublishRateLimiterEnable | Enable precise topic publish rate limiting. | false |
-| lazyCursorRecovery | Whether to recover cursors lazily when trying to recover a managed ledger backing a persistent topic. It can improve write availability of topics. The caveat is that when a recovered ledger is ready to write, it is not certain whether all old consumers' last mark delete positions (ack positions) can be recovered. So users can make that trade-off, or add custom logic in the application to checkpoint consumer state.| false |
-|haProxyProtocolEnabled | Enable or disable the [HAProxy](http://www.haproxy.org/) protocol. |false|
-| maxNamespacesPerTenant | The maximum number of namespaces that can be created in each tenant. When the number of namespaces reaches this threshold, the broker rejects requests to create a new namespace. The default value 0 disables the check. |0|
-| maxTopicsPerNamespace | The maximum number of persistent topics that can be created in the namespace. When the number of topics reaches this threshold, the broker rejects requests to create a new topic, including topics auto-created by producers or consumers, until the number of topics decreases. The default value 0 disables the check. | 0 |
-|subscriptionTypesEnabled| Enabled subscription types, which are exclusive, shared, failover, and key_shared. | Exclusive, Shared, Failover, Key_Shared |
-| managedLedgerInfoCompressionType | Compression type of managed ledger information.

    Available options are `NONE`, `LZ4`, `ZLIB`, `ZSTD`, and `SNAPPY`.

    If this value is `NONE` or invalid, the `managedLedgerInfo` is not compressed.

    **Note** that after enabling this configuration, if you want to downgrade a broker, you need to change the value to `NONE` and make sure all ledger metadata is saved without compression. | None | -| additionalServlets | Additional servlet name.

    If you have multiple additional servlets, separate them by commas.

    For example, additionalServlet_1, additionalServlet_2 | N/A | -| additionalServletDirectory | Location of broker additional servlet NAR directory | ./brokerAdditionalServlet | -| brokerEntryMetadataInterceptors | Set broker entry metadata interceptors.

    Multiple interceptors should be separated by commas.

    Available values:
  - org.apache.pulsar.common.intercept.AppendBrokerTimestampMetadataInterceptor
  - org.apache.pulsar.common.intercept.AppendIndexMetadataInterceptor


    Example:
    brokerEntryMetadataInterceptors=org.apache.pulsar.common.intercept.AppendBrokerTimestampMetadataInterceptor, org.apache.pulsar.common.intercept.AppendIndexMetadataInterceptor|N/A | -| enableExposingBrokerEntryMetadataToClient|Whether to expose broker entry metadata to client or not.

    Available values:
  - true
  - false

    Example:
    enableExposingBrokerEntryMetadataToClient=true | false |
-| strictBookieAffinityEnabled | Enable or disable the strict bookie isolation strategy. If enabled,
    - `bookie-ensemble` first tries to choose bookies that belong to a namespace's affinity group. If the number of bookies is not enough, the remaining bookies are chosen.
    - If a namespace has no affinity group, `bookie-ensemble` only chooses bookies that belong to no region. If the number of bookies is not enough, `BKNotEnoughBookiesException` is thrown.| false |
-|narExtractionDirectory | The extraction directory of the nar package.
    Available for Protocol Handler, Additional Servlets, Entry Filter, Offloaders, Broker Interceptor. | System.getProperty("java.io.tmpdir") | - - -#### Deprecated parameters of Broker -The following parameters have been deprecated in the `conf/broker.conf` file. - -|Name|Description|Default| -|---|---|---| -|backlogQuotaDefaultLimitGB| Use `backlogQuotaDefaultLimitBytes` instead. |-1| -|brokerServicePurgeInactiveFrequencyInSeconds| Use `brokerDeleteInactiveTopicsFrequencySeconds`.|60| -|tlsEnabled| Use `webServicePortTls` and `brokerServicePortTls` instead. |false| -|replicationTlsEnabled| Enable TLS when talking with other clusters to replicate messages. Use `brokerClientTlsEnabled` instead. |false| -|subscriptionKeySharedEnable| Whether to enable the Key_Shared subscription. Use `subscriptionTypesEnabled` instead. |true| -|zookeeperServers| Zookeeper quorum connection string. Use `metadataStoreUrl` instead. |N/A| -|configurationStoreServers| Configuration store connection string (as a comma-separated list). Use `configurationMetadataStoreUrl` instead. |N/A| -|zooKeeperSessionTimeoutMillis| Zookeeper session timeout in milliseconds. Use `metadataStoreSessionTimeoutMillis` instead. |30000| -|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds. Use `metadataStoreCacheExpirySeconds` instead.|300| - - -## Client - -You can use the [`pulsar-client`](reference-cli-tools.md#pulsar-client) CLI tool to publish messages to and consume messages from Pulsar topics. You can use this tool in place of a client library. - -|Name|Description|Default| -|---|---|---| -|webServiceUrl| The web URL for the cluster. |http://localhost:8080/| -|brokerServiceUrl| The Pulsar protocol URL for the cluster. |pulsar://localhost:6650/| -|authPlugin| The authentication plugin. || -|authParams| The authentication parameters for the cluster, as a comma-separated string. || -|useTls| Whether to enforce the TLS authentication in the cluster. |false| -| tlsAllowInsecureConnection | Allow TLS connections to servers whose certificate cannot be verified to have been signed by a trusted certificate authority. | false | -| tlsEnableHostnameVerification | Whether the server hostname must match the common name of the certificate that is used by the server. | false | -|tlsTrustCertsFilePath||| -| useKeyStoreTls | Enable TLS with KeyStore type configuration in the broker. | false | -| tlsTrustStoreType | TLS TrustStore type configuration.
  151. JKS
  152. PKCS12
  153. |JKS| -| tlsTrustStore | TLS TrustStore path. | | -| tlsTrustStorePassword | TLS TrustStore password. | | - - - - - - -## Log4j - -You can set the log level and configuration in the [log4j2.yaml](https://github.com/apache/pulsar/blob/d557e0aa286866363bc6261dec87790c055db1b0/conf/log4j2.yaml#L155) file. The following logging configuration parameters are available. - -|Name|Default| -|---|---| -|pulsar.root.logger| WARN,CONSOLE| -|pulsar.log.dir| logs| -|pulsar.log.file| pulsar.log| -|log4j.rootLogger| ${pulsar.root.logger}| -|log4j.appender.CONSOLE| org.apache.log4j.ConsoleAppender| -|log4j.appender.CONSOLE.Threshold| DEBUG| -|log4j.appender.CONSOLE.layout| org.apache.log4j.PatternLayout| -|log4j.appender.CONSOLE.layout.ConversionPattern| %d{ISO8601} - %-5p - [%t:%C{1}@%L] - %m%n| -|log4j.appender.ROLLINGFILE| org.apache.log4j.DailyRollingFileAppender| -|log4j.appender.ROLLINGFILE.Threshold| DEBUG| -|log4j.appender.ROLLINGFILE.File| ${pulsar.log.dir}/${pulsar.log.file}| -|log4j.appender.ROLLINGFILE.layout| org.apache.log4j.PatternLayout| -|log4j.appender.ROLLINGFILE.layout.ConversionPattern| %d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n| -|log4j.appender.TRACEFILE| org.apache.log4j.FileAppender| -|log4j.appender.TRACEFILE.Threshold| TRACE| -|log4j.appender.TRACEFILE.File| pulsar-trace.log| -|log4j.appender.TRACEFILE.layout| org.apache.log4j.PatternLayout| -|log4j.appender.TRACEFILE.layout.ConversionPattern| %d{ISO8601} - %-5p [%t:%C{1}@%L][%x] - %m%n| - -> Note: 'topic' in log4j2.appender is configurable. -> - If you want to append all logs to a single topic, set the same topic name. -> - If you want to append logs to different topics, you can set different topic names. - -## Log4j shell - -|Name|Default| -|---|---| -|bookkeeper.root.logger| ERROR,CONSOLE| -|log4j.rootLogger| ${bookkeeper.root.logger}| -|log4j.appender.CONSOLE| org.apache.log4j.ConsoleAppender| -|log4j.appender.CONSOLE.Threshold| DEBUG| -|log4j.appender.CONSOLE.layout| org.apache.log4j.PatternLayout| -|log4j.appender.CONSOLE.layout.ConversionPattern| %d{ABSOLUTE} %-5p %m%n| -|log4j.logger.org.apache.zookeeper| ERROR| -|log4j.logger.org.apache.bookkeeper| ERROR| -|log4j.logger.org.apache.bookkeeper.bookie.BookieShell| INFO| - - -## Standalone - -|Name|Description|Default| -|---|---|---| -|authenticateOriginalAuthData| If this flag is set to `true`, the broker authenticates the original Auth data; else it just accepts the originalPrincipal and authorizes it (if required). |false| -|metadataStoreUrl| The quorum connection string for local metadata store || -|metadataStoreCacheExpirySeconds| Metadata store cache expiry time in seconds|300| -|configurationMetadataStoreUrl| Configuration store connection string (as a comma-separated list) || -|brokerServicePort| The port on which the standalone broker listens for connections |6650| -|webServicePort| The port used by the standalone broker for HTTP requests |8080| -|bindAddress| The hostname or IP address on which the standalone service binds |0.0.0.0| -|bindAddresses| Additional Hostname or IP addresses the service binds on: `listener_name:scheme://host:port,...`. || -|advertisedAddress| The hostname or IP address that the standalone service advertises to the outside world. If not set, the value of `InetAddress.getLocalHost().getHostName()` is used. 
||
-| numAcceptorThreads | Number of threads to use for the Netty acceptor | 1 |
-| numIOThreads | Number of threads to use for Netty IO | 2 * Runtime.getRuntime().availableProcessors() |
-| numHttpServerThreads | Number of threads to use for HTTP request processing | 2 * Runtime.getRuntime().availableProcessors()|
-|isRunningStandalone|This flag controls features that are meant to be used when running in standalone mode.|N/A|
-|clusterName| The name of the cluster that this broker belongs to. |standalone|
-| failureDomainsEnabled | Enable the cluster's failure domains, which can distribute brokers into logical regions. | false |
-|metadataStoreSessionTimeoutMillis| Metadata store session timeout, in milliseconds. |30000|
-|metadataStoreOperationTimeoutSeconds|Metadata store operation timeout in seconds.|30|
-|brokerShutdownTimeoutMs| The time to wait for graceful broker shutdown. After this time elapses, the process is killed. |60000|
-|skipBrokerShutdownOnOOM| Flag to skip broker shutdown when the broker handles an out-of-memory error. |false|
-|backlogQuotaCheckEnabled| Enable the backlog quota check, which enforces a specified action when the quota is reached. |true|
-|backlogQuotaCheckIntervalInSeconds| How often to check for topics that have reached the backlog quota. |60|
-|backlogQuotaDefaultLimitBytes| The default per-topic backlog quota limit. A value less than 0 means no limitation. By default, it is -1. |-1|
-|ttlDurationDefaultInSeconds|The default Time to Live (TTL) for namespaces if the TTL is not configured in namespace policies. When the value is set to `0`, TTL is disabled. By default, TTL is disabled. |0|
-|brokerDeleteInactiveTopicsEnabled| Enable the deletion of inactive topics. If topics are not consumed for a while, these inactive topics might be cleaned up. Deleting inactive topics is enabled by default. The default period is 1 minute. |true|
-|brokerDeleteInactiveTopicsFrequencySeconds| How often to check for inactive topics, in seconds. |60|
-| maxPendingPublishRequestsPerConnection | Maximum pending publish requests per connection, to avoid keeping a large number of pending requests in memory | 1000|
-|messageExpiryCheckIntervalInMinutes| How often to proactively check for and purge expired messages. |5|
-|activeConsumerFailoverDelayTimeMillis| How long to delay rewinding the cursor and dispatching messages when the active consumer changes. |1000|
-| subscriptionExpirationTimeMinutes | How long to wait before deleting inactive subscriptions, counted from the last consumption. When it is set to 0, inactive subscriptions are not deleted automatically | 0 |
-| subscriptionRedeliveryTrackerEnabled | Enable the subscription message redelivery tracker to send the redelivery count to consumers. | true |
-| subscriptionKeySharedUseConsistentHashing | In the Key_Shared subscription type, with the default AUTO_SPLIT mode, use splitting ranges or consistent hashing to reassign keys to new consumers. | false |
-| subscriptionKeySharedConsistentHashingReplicaPoints | In the Key_Shared subscription type, the number of points in the consistent-hashing ring. The greater the number, the more even the assignment of keys to consumers. | 100 |
-| subscriptionExpiryCheckIntervalInMinutes | How frequently to proactively check for and purge expired subscriptions |5 |
-| brokerDeduplicationEnabled | Set the default behavior for message deduplication in the broker. This can be overridden per-namespace. If it is enabled, the broker rejects messages that are already stored in the topic. | false |
-| brokerDeduplicationMaxNumberOfProducers | Maximum number of producers whose information is persisted for deduplication purposes | 10000 |
-| brokerDeduplicationEntriesInterval | Number of entries after which a deduplication information snapshot is taken. A greater interval leads to fewer snapshots being taken, though it increases the topic recovery time when the entries published after the snapshot need to be replayed. | 1000 |
-| brokerDeduplicationProducerInactivityTimeoutMinutes | The time of inactivity (in minutes) after which the broker discards deduplication information related to a disconnected producer. | 360 |
-| defaultNumberOfNamespaceBundles | When a namespace is created without specifying the number of bundles, this value is used as the default setting.| 4 |
-|clientLibraryVersionCheckEnabled| Enable checks for the minimum allowed client library version. |false|
-|clientLibraryVersionCheckAllowUnversioned| Allow client libraries with no version information |true|
-|statusFilePath| The path for the file used to determine the rotation status for the broker when responding to service discovery health checks |/usr/local/apache/htdocs|
-|maxUnackedMessagesPerConsumer| The maximum number of unacknowledged messages allowed to be received by consumers on a shared subscription. The broker stops sending messages to a consumer once this limit is reached, until the consumer begins acknowledging messages. A value of 0 disables the unacked message limit check and thus allows consumers to receive messages without any restrictions. |50000|
-|maxUnackedMessagesPerSubscription| The same as above, except per subscription rather than per consumer. |200000|
-| maxUnackedMessagesPerBroker | Maximum number of unacknowledged messages allowed per broker. Once this limit is reached, the broker stops dispatching messages to all shared subscriptions that have a higher number of unacknowledged messages, until subscriptions start acknowledging messages back and the unacknowledged message count drops to limit/2. When the value is set to 0, the unacknowledged message limit check is disabled and the broker does not block dispatchers. | 0 |
-| maxUnackedMessagesPerSubscriptionOnBrokerBlocked | Once the broker reaches the maxUnackedMessagesPerBroker limit, it blocks subscriptions that have more unacknowledged messages than this percentage limit, and those subscriptions do not receive any new messages until they acknowledge messages back. | 0.16 |
-| unblockStuckSubscriptionEnabled|If this flag is enabled, the broker periodically checks whether a subscription is stuck and unblocks it.|false|
-| topicPublisherThrottlingTickTimeMillis | Tick time for the task that checks topic publish rate limiting across all topics. A lower value improves accuracy while throttling publishing, but uses more CPU to perform frequent checks. (Disable publish throttling with the value 0) | 10|
-| brokerPublisherThrottlingTickTimeMillis | Tick time for the task that checks broker publish rate limiting across all topics. A lower value improves accuracy while throttling publishing, but uses more CPU to perform frequent checks. When the value is set to 0, publish throttling is disabled. |50 |
-| brokerPublisherThrottlingMaxMessageRate | Maximum rate (per second) of messages allowed to be published for a broker if message rate limiting is enabled. When the value is set to 0, message rate limiting is disabled. | 0|
-| brokerPublisherThrottlingMaxByteRate | Maximum rate (per second) of bytes allowed to be published for a broker if byte rate limiting is enabled. When the value is set to 0, byte rate limiting is disabled. | 0 |
-|subscribeThrottlingRatePerConsumer|Too many subscribe requests from a consumer can cause the broker to rewind consumer cursors and load data from bookies, causing high network bandwidth usage. When a positive value is set, the broker throttles the subscribe requests for one consumer. Otherwise, the throttling is disabled. By default, throttling is disabled.|0|
-|subscribeRatePeriodPerConsumerInSecond|Rate period for {subscribeThrottlingRatePerConsumer}. By default, it is 30s.|30|
-|dispatchThrottlingRateInMsg| Dispatch throttling limit in messages for a broker (per second). 0 means the dispatch throttling limit is disabled. |0|
-|dispatchThrottlingRateInByte| Dispatch throttling limit in bytes for a broker (per second). 0 means the dispatch throttling limit is disabled. |0|
-| dispatchThrottlingRatePerTopicInMsg | Default message (per second) dispatch throttling limit for every topic. When the value is set to 0, the default message dispatch throttling limit is disabled. |0 |
-| dispatchThrottlingRatePerTopicInByte | Default byte (per second) dispatch throttling limit for every topic. When the value is set to 0, the default byte dispatch throttling limit is disabled. | 0|
-| dispatchThrottlingOnBatchMessageEnabled |Apply dispatch rate limiting to batch messages instead of the individual messages within a batch message. (Default is disabled). | false|
-| dispatchThrottlingRateRelativeToPublishRate | Enable dispatch rate limiting relative to the publish rate. | false |
-|dispatchThrottlingRatePerSubscriptionInMsg|The default message dispatch throttling limit for a subscription. The value of 0 disables message dispatch throttling.|0|
-|dispatchThrottlingRatePerSubscriptionInByte|The default message-byte dispatch throttling limit for a subscription. The value of 0 disables message-byte dispatch throttling.|0|
-|dispatchThrottlingRatePerReplicatorInMsg| Dispatch throttling limit in messages for every replicator in replication (per second). 0 means the dispatch throttling limit in replication is disabled. |0|
-|dispatchThrottlingRatePerReplicatorInByte| Dispatch throttling limit in bytes for every replicator in replication (per second). 0 means the dispatch throttling limit is disabled. |0|
-| dispatchThrottlingOnNonBacklogConsumerEnabled | Enable dispatch throttling for both caught-up consumers and consumers who have backlogs. | true |
-|dispatcherMaxReadBatchSize|The maximum number of entries to read from BookKeeper. By default, it is 100 entries.|100|
-|dispatcherMaxReadSizeBytes|The maximum size in bytes of entries to read from BookKeeper. By default, it is 5MB.|5242880|
-|dispatcherMinReadBatchSize|The minimum number of entries to read from BookKeeper. By default, it is 1 entry. When an error occurs while reading entries from BookKeeper, the broker backs off the batch size to this minimum number.|1|
-|dispatcherMaxRoundRobinBatchSize|The maximum number of entries to dispatch for a shared subscription. By default, it is 20 entries.|20|
-| preciseDispatcherFlowControl | Precise dispatcher flow control according to the history message number of each entry. | false |
-| streamingDispatch | Whether to use the streaming read dispatcher. It can be useful when there is a huge backlog to drain: instead of reading in micro batches, the read from BookKeeper can be streamlined to make the most of consumer capacity until the BookKeeper read limit or the consumer process limit is hit, and consumer flow control can then be used to tune the speed. This feature is currently in preview and may change in a subsequent release. | false |
-| maxConcurrentLookupRequest | Maximum number of concurrent lookup requests that the broker allows, to throttle heavy incoming lookup traffic. | 50000 |
-| maxConcurrentTopicLoadRequest | Maximum number of concurrent topic loading requests that the broker allows, to control the number of zk-operations. | 5000 |
-| maxConcurrentNonPersistentMessagePerConnection | Maximum number of concurrent non-persistent messages that can be processed per connection. | 1000 |
-| numWorkerThreadsForNonPersistentTopic | Number of worker threads to serve non-persistent topics. | 8 |
-| enablePersistentTopics | Enable the broker to load persistent topics. | true |
-| enableNonPersistentTopics | Enable the broker to load non-persistent topics. | true |
-| maxSubscriptionsPerTopic | Maximum number of subscriptions allowed to subscribe to a topic. Once this limit is reached, the broker rejects new subscriptions until the number of subscriptions decreases. When the value is set to 0, the limit check is disabled. | 0 |
-| maxProducersPerTopic | Maximum number of producers allowed to connect to a topic. Once this limit is reached, the broker rejects new producers until the number of connected producers decreases. When the value is set to 0, the limit check is disabled. | 0 |
-| maxConsumersPerTopic | Maximum number of consumers allowed to connect to a topic. Once this limit is reached, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 |
-| maxConsumersPerSubscription | Maximum number of consumers allowed to connect to a subscription. Once this limit is reached, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 |
-| maxNumPartitionsPerPartitionedTopic | Maximum number of partitions per partitioned topic. When the value is set to a negative number or to 0, the check is disabled. | 0 |
-| metadataStoreBatchingEnabled | Enable metadata operations batching. | true |
-| metadataStoreBatchingMaxDelayMillis | Maximum delay to impose on batching grouping. | 5 |
-| metadataStoreBatchingMaxOperations | Maximum number of operations to include in a single batch. | 1000 |
-| metadataStoreBatchingMaxSizeKb | Maximum size of a batch. | 128 |
-| tlsCertRefreshCheckDurationSec | TLS certificate refresh duration in seconds. When the value is set to 0, the TLS certificate is checked on every new connection. | 300 |
-| tlsCertificateFilePath | Path for the TLS certificate file. | |
-| tlsKeyFilePath | Path for the TLS private key file. | |
-| tlsTrustCertsFilePath | Path for the trusted TLS certificate file.| |
-| tlsAllowInsecureConnection | Accept untrusted TLS certificates from clients. If it is set to true, a client with a certificate that cannot be verified with the 'tlsTrustCertsFilePath' certificate is allowed to connect to the server, though the certificate is not used for client authentication. | false |
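A minimal sketch of the file-based (PEM) TLS settings just above; the certificate paths are hypothetical placeholders:

```properties
tlsCertificateFilePath=/etc/pulsar/certs/broker.cert.pem
tlsKeyFilePath=/etc/pulsar/certs/broker.key-pk8.pem
tlsTrustCertsFilePath=/etc/pulsar/certs/ca.cert.pem
# keep strict verification of client certificates
tlsAllowInsecureConnection=false
```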
-| tlsProtocols | Specify the TLS protocols the broker uses to negotiate during the TLS handshake. | |
-| tlsCiphers | Specify the TLS ciphers the broker uses to negotiate during the TLS handshake. | |
-| tlsRequireTrustedClientCertOnConnect | Trusted client certificates are required to connect over TLS. The connection is rejected if the client certificate is not trusted. In effect, this requires that all connecting clients perform TLS client authentication. | false |
-| tlsEnabledWithKeyStore | Enable TLS with KeyStore type configuration in the broker. | false |
-| tlsProvider | TLS provider for the KeyStore type. | |
-| tlsKeyStoreType | TLS KeyStore type configuration in the broker.
  - JKS
  - PKCS12
  |JKS|
-| tlsKeyStore | TLS KeyStore path in the broker. | |
-| tlsKeyStorePassword | TLS KeyStore password for the broker. | |
-| tlsTrustStoreType | TLS TrustStore type configuration in the broker.
  - JKS
  - PKCS12
  |JKS|
-| tlsTrustStore | TLS TrustStore path in the broker. | |
-| tlsTrustStorePassword | TLS TrustStore password for the broker. | |
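For comparison, the KeyStore-based variant of the same settings might look as follows; paths and passwords are hypothetical placeholders, and this path is only taken when `tlsEnabledWithKeyStore` is set:

```properties
tlsEnabledWithKeyStore=true
tlsKeyStoreType=JKS
tlsKeyStore=/etc/pulsar/broker.keystore.jks
tlsKeyStorePassword=changeit
tlsTrustStoreType=JKS
tlsTrustStore=/etc/pulsar/broker.truststore.jks
tlsTrustStorePassword=changeit
```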
-| brokerClientTlsTrustStoreType | TLS TrustStore type configuration for the internal client to authenticate with Pulsar brokers.
  - JKS
  - PKCS12
  | JKS |
-| brokerClientTlsTrustStore | TLS TrustStore path for the internal client to authenticate with Pulsar brokers. | |
-| brokerClientTlsTrustStorePassword | TLS TrustStore password for the internal client to authenticate with Pulsar brokers. | |
-| brokerClientTlsCiphers | Specify the TLS ciphers that the internal client uses to negotiate during the TLS handshake. | |
-| brokerClientTlsProtocols | Specify the TLS protocols that the internal client uses to negotiate during the TLS handshake. | |
-| systemTopicEnabled | Enable/disable system topics. | false |
-| topicLevelPoliciesEnabled | Enable or disable topic-level policies. Topic-level policies depend on the system topic, so please enable the system topic first. | false |
-| topicFencingTimeoutSeconds | If a topic remains fenced for a certain time period (in seconds), it is closed forcefully. If set to 0 or a negative number, the fenced topic is not closed. | 0 |
-| proxyRoles | Role names that are treated as "proxy roles". If the broker sees a request with a proxy role, it demands to see a valid original principal. | |
-|authenticationEnabled| Enable authentication for the broker. |false|
-|authenticationProviders| A comma-separated list of class names for authentication providers. |false|
-|authorizationEnabled| Enforce authorization in brokers. |false|
-| authorizationProvider | Authorization provider fully qualified class name. | org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider |
-| authorizationAllowWildcardsMatching | Allow wildcard matching in authorization. Wildcard matching is applicable only when the wildcard character (*) appears at the **first** or **last** position. | false |
-|superUserRoles| Role names that are treated as "superusers." Superusers are authorized to perform all admin tasks. | |
-|brokerClientAuthenticationPlugin| The authentication settings of the broker itself. Used when the broker connects to other brokers, either in the same cluster or from other clusters. | |
-|brokerClientAuthenticationParameters| The parameters that go along with the plugin specified using brokerClientAuthenticationPlugin. | |
-|athenzDomainNames| Supported Athenz authentication provider domain names, as a comma-separated list. | |
-| anonymousUserRole | When this parameter is not empty, unauthenticated users act as the anonymousUserRole. | |
-|tokenSettingPrefix| Configure the prefix of token-related settings like `tokenSecretKey`, `tokenPublicKey`, `tokenAuthClaim`, `tokenPublicAlg`, `tokenAudienceClaim`, and `tokenAudience`. ||
-|tokenSecretKey| Configure the secret key to be used to validate auth tokens. The key can be specified like: `tokenSecretKey=data:;base64,xxxxxxxxx` or `tokenSecretKey=file:///my/secret.key`. Note: the key file must be DER-encoded.||
-|tokenPublicKey| Configure the public key to be used to validate auth tokens. The key can be specified like: `tokenPublicKey=data:;base64,xxxxxxxxx` or `tokenPublicKey=file:///my/secret.key`. Note: the key file must be DER-encoded.||
-|tokenAuthClaim| Specify the token claim that is used as the authentication "principal" or "role". The "subject" field is used if this is left blank ||
-|tokenAudienceClaim| The token audience "claim" name, e.g. "aud". It is used to get the audience from the token. If it is not set, the audience is not verified. ||
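As an illustration of the token settings in this table (the key path and audience value are hypothetical placeholders):

```properties
# validate tokens against an asymmetric public key (DER-encoded)
tokenPublicKey=file:///etc/pulsar/keys/token-public.key
# leave tokenAuthClaim unset to use the default "sub" claim as the role
tokenAudienceClaim=aud
tokenAudience=my-pulsar-cluster
```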
-| tokenAudience | The token audience stands for this broker. The field `tokenAudienceClaim` of a valid token needs to contain this parameter.| |
-|saslJaasClientAllowedIds|This is a regexp that limits the range of possible ids that can connect to the broker using SASL. By default, it is set to `SaslConstants.JAAS_CLIENT_ALLOWED_IDS_DEFAULT`, which is ".*pulsar.*", so only clients whose id contains 'pulsar' are allowed to connect.|N/A|
-|saslJaasBrokerSectionName|Service principal, for the login context name. By default, it is set to `SaslConstants.JAAS_DEFAULT_BROKER_SECTION_NAME`, which is "Broker".|N/A|
-|httpMaxRequestSize|If the value is larger than 0, the broker rejects all HTTP requests with bodies larger than the configured limit.|-1|
-|exposePreciseBacklogInPrometheus| Enable exposing the precise backlog stats. Set to false to use the published counter and consumed counter for the calculation; this is more efficient but may be inaccurate. |false|
-|bookkeeperMetadataServiceUri|The metadata service URI that BookKeeper uses for loading the corresponding metadata driver and resolving its metadata service location. This value can be fetched using the `bookkeeper shell whatisinstanceid` command in the BookKeeper cluster. For example: `zk+hierarchical://localhost:2181/ledgers`. The metadata service URI list can also be semicolon-separated values like: `zk+hierarchical://zk1:2181;zk2:2181;zk3:2181/ledgers`.|N/A|
-|bookkeeperClientAuthenticationPlugin| Authentication plugin to be used when connecting to bookies (BookKeeper servers). ||
-|bookkeeperClientAuthenticationParametersName| BookKeeper authentication plugin implementation parameters and values. ||
-|bookkeeperClientAuthenticationParameters| Parameters associated with the bookkeeperClientAuthenticationParametersName ||
-|bookkeeperClientNumWorkerThreads| Number of BookKeeper client worker threads. Default is Runtime.getRuntime().availableProcessors() ||
-|bookkeeperClientTimeoutInSeconds| Timeout for BookKeeper add and read operations. |30|
-|bookkeeperClientSpeculativeReadTimeoutInMillis| Speculative reads are initiated if a read request doesn't complete within a certain time. A value of 0 disables speculative reads. |0|
-|bookkeeperUseV2WireProtocol|Use the older BookKeeper wire protocol with bookies.|true|
-|bookkeeperClientHealthCheckEnabled| Enable bookie health checks. |true|
-|bookkeeperClientHealthCheckIntervalSeconds| The time interval, in seconds, at which health checks are performed. New ledgers are not created during health checks. |60|
-|bookkeeperClientHealthCheckErrorThresholdPerInterval| Error threshold for health checks. |5|
-|bookkeeperClientHealthCheckQuarantineTimeInSeconds| If bookies have more than the allowed number of failures within the time interval specified by bookkeeperClientHealthCheckIntervalSeconds, they are quarantined for this amount of time (in seconds). |1800|
-|bookkeeperClientGetBookieInfoIntervalSeconds|Specify options for the GetBookieInfo check. This setting helps ensure the list of bookies is up to date on the brokers.|86400|
-|bookkeeperClientGetBookieInfoRetryIntervalSeconds|Specify options for the GetBookieInfo check. This setting helps ensure the list of bookies is up to date on the brokers.|60|
-|bookkeeperClientRackawarePolicyEnabled| |true|
-|bookkeeperClientRegionawarePolicyEnabled| |false|
-|bookkeeperClientMinNumRacksPerWriteQuorum| |2|
-|bookkeeperClientEnforceMinNumRacksPerWriteQuorum| |false|
-|bookkeeperClientReorderReadSequenceEnabled| |false|
-|bookkeeperClientIsolationGroups|||
-|bookkeeperClientSecondaryIsolationGroups| Enable bookie secondary isolation groups if bookkeeperClientIsolationGroups doesn't have enough bookies available. ||
-|bookkeeperClientMinAvailableBookiesInIsolationGroups| Minimum number of bookies that should be available as part of bookkeeperClientIsolationGroups; otherwise the broker includes bookkeeperClientSecondaryIsolationGroups bookies in the isolated list. ||
-| bookkeeperTLSProviderFactoryClass | Set the client security provider factory class name. | org.apache.bookkeeper.tls.TLSContextFactory |
-| bookkeeperTLSClientAuthentication | Enable TLS authentication with bookies. | false |
-| bookkeeperTLSKeyFileType | Supported types: PEM, JKS, PKCS12. | PEM |
-| bookkeeperTLSTrustCertTypes | Supported types: PEM, JKS, PKCS12. | PEM |
-| bookkeeperTLSKeyStorePasswordPath | Path to the file containing the keystore password, if the client keystore is password protected. | |
-| bookkeeperTLSTrustStorePasswordPath | Path to the file containing the truststore password, if the client truststore is password protected. | |
-| bookkeeperTLSKeyFilePath | Path for the TLS private key file. | |
-| bookkeeperTLSCertificateFilePath | Path for the TLS certificate file. | |
-| bookkeeperTLSTrustCertsFilePath | Path for the trusted TLS certificate file. | |
-| bookkeeperTlsCertFilesRefreshDurationSeconds | TLS cert refresh duration for the BookKeeper client, in seconds (0 to disable the check). | |
-| bookkeeperDiskWeightBasedPlacementEnabled | Enable/disable disk-weight-based placement. | false |
-| bookkeeperExplicitLacIntervalInMills | Set the interval to check the need for sending an explicit LAC. When the value is set to 0, no explicit LAC is sent. | 0 |
-| bookkeeperClientExposeStatsToPrometheus | Expose BookKeeper client managed ledger stats to Prometheus. | false |
-|managedLedgerDefaultEnsembleSize| |1|
-|managedLedgerDefaultWriteQuorum| |1|
-|managedLedgerDefaultAckQuorum| |1|
-| managedLedgerDigestType | Default type of checksum to use when writing to BookKeeper. | CRC32C |
-| managedLedgerNumSchedulerThreads | Number of threads to be used for managed ledger scheduled tasks. | Runtime.getRuntime().availableProcessors() |
-|managedLedgerCacheSizeMB| |N/A|
-|managedLedgerCacheCopyEntries| Whether to copy the entry payloads when inserting them in the cache.| false|
-|managedLedgerCacheEvictionWatermark| |0.9|
-|managedLedgerCacheEvictionFrequency| Configure the cache eviction frequency for the managed ledger cache (evictions/sec) | 100.0 |
-|managedLedgerCacheEvictionTimeThresholdMillis| All entries that have stayed in the cache for longer than the configured time are evicted | 1000 |
-|managedLedgerCursorBackloggedThreshold| Configure the threshold (in number of entries) beyond which a cursor is considered 'backlogged' and thus set as inactive.
| 1000|
-|managedLedgerUnackedRangesOpenCacheSetEnabled| Use Open Range-Set to cache unacknowledged messages |true|
-|managedLedgerDefaultMarkDeleteRateLimit| |0.1|
-|managedLedgerMaxEntriesPerLedger| |50000|
-|managedLedgerMinLedgerRolloverTimeMinutes| |10|
-|managedLedgerMaxLedgerRolloverTimeMinutes| |240|
-|managedLedgerCursorMaxEntriesPerLedger| |50000|
-|managedLedgerCursorRolloverTimeInSeconds| |14400|
-| managedLedgerMaxSizePerLedgerMbytes | Maximum ledger size before triggering a rollover for a topic. | 2048 |
-| managedLedgerMaxUnackedRangesToPersist | Maximum number of "acknowledgment holes" that are going to be persistently stored. When acknowledging out of order, a consumer leaves holes that are supposed to be quickly filled by acknowledging all the messages. The information about which messages are acknowledged is persisted by compressing it into "ranges" of acknowledged messages. After the max number of ranges is reached, the information is only tracked in memory and messages are redelivered in case of crashes. | 10000 |
-| managedLedgerMaxUnackedRangesToPersistInZooKeeper | Maximum number of "acknowledgment holes" that can be stored in ZooKeeper. If the number of unacknowledged message ranges is higher than this limit, the broker persists unacknowledged ranges into BookKeeper to avoid additional data overhead in ZooKeeper. | 1000 |
-|autoSkipNonRecoverableData| |false|
-| managedLedgerMetadataOperationsTimeoutSeconds | Operation timeout while updating managed ledger metadata. | 60 |
-| managedLedgerReadEntryTimeoutSeconds | Read entry timeout when the broker tries to read messages from BookKeeper. | 0 |
-| managedLedgerAddEntryTimeoutSeconds | Add entry timeout when the broker tries to publish a message to BookKeeper. | 0 |
-| managedLedgerNewEntriesCheckDelayInMillis | New entries check delay for a cursor under the managed ledger. If there are no new messages in the topic, the cursor tries to check again after the delay time. For consumption-latency-sensitive scenarios, you can set the value to a smaller value or 0; of course, a smaller value may degrade consumption throughput. | 10 |
-| managedLedgerPrometheusStatsLatencyRolloverSeconds | Managed ledger Prometheus stats latency rollover seconds. | 60 |
-| managedLedgerTraceTaskExecution | Whether to trace managed ledger task execution time. | true |
-|loadBalancerEnabled| |false|
-|loadBalancerPlacementStrategy| |weightedRandomSelection|
-|loadBalancerReportUpdateThresholdPercentage| |10|
-|loadBalancerReportUpdateMaxIntervalMinutes| |15|
-|loadBalancerHostUsageCheckIntervalMinutes| |1|
-|loadBalancerSheddingIntervalMinutes| |30|
-|loadBalancerSheddingGracePeriodMinutes| |30|
-|loadBalancerBrokerMaxTopics| |50000|
-|loadBalancerBrokerUnderloadedThresholdPercentage| |1|
-|loadBalancerBrokerOverloadedThresholdPercentage| |85|
-|loadBalancerResourceQuotaUpdateIntervalMinutes| |15|
-|loadBalancerBrokerComfortLoadLevelPercentage| |65|
-|loadBalancerAutoBundleSplitEnabled| |false|
-| loadBalancerAutoUnloadSplitBundlesEnabled | Enable/disable automatic unloading of split bundles. | true |
-|loadBalancerNamespaceBundleMaxTopics| |1000|
-|loadBalancerNamespaceBundleMaxSessions| Maximum sessions (producers + consumers) in a bundle; otherwise a bundle split is triggered. To disable the threshold check, set the value to -1. |1000|
-|loadBalancerNamespaceBundleMaxMsgRate| |1000|
-|loadBalancerNamespaceBundleMaxBandwidthMbytes| |100|
-|loadBalancerNamespaceMaximumBundles| |128|
-| loadBalancerBrokerThresholdShedderPercentage | The broker resource usage threshold. When the broker resource usage is greater than the Pulsar cluster's average resource usage, the threshold shedder is triggered to offload bundles from the broker. It only takes effect in the ThresholdShedder strategy. | 10 |
-| loadBalancerMsgRateDifferenceShedderThreshold | Message-rate percentage threshold between the highest and least loaded brokers for uniform load shedding. | 50 |
-| loadBalancerMsgThroughputMultiplierDifferenceShedderThreshold | Message-throughput threshold between the highest and least loaded brokers for uniform load shedding. | 4 |
-| loadBalancerHistoryResourcePercentage | The history usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 0.9 |
-| loadBalancerBandwithInResourceWeight | The inbound bandwidth usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 |
-| loadBalancerBandwithOutResourceWeight | The outbound bandwidth usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 |
-| loadBalancerCPUResourceWeight | The CPU usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 |
-| loadBalancerMemoryResourceWeight | The heap memory usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 |
-| loadBalancerDirectMemoryResourceWeight | The direct memory usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 |
-| loadBalancerBundleUnloadMinThroughputThreshold | Minimum throughput threshold for bundle unloading, to avoid unloading bundles too frequently. It only takes effect in the ThresholdShedder strategy. | 10 |
-| namespaceBundleUnloadingTimeoutMs | Time to wait for the unloading of a namespace bundle, in milliseconds. | 60000 |
-|replicationMetricsEnabled| |true|
-|replicationConnectionsPerBroker| |16|
-|replicationProducerQueueSize| |1000|
-| replicationPolicyCheckDurationSeconds | Duration to check the replication policy, to avoid replicator inconsistency due to a missing ZooKeeper watch. When the value is set to 0, checking the replication policy is disabled. | 600 |
-|defaultRetentionTimeInMinutes| |0|
-|defaultRetentionSizeInMB| |0|
-|keepAliveIntervalSeconds| |30|
-|haProxyProtocolEnabled | Enable or disable the [HAProxy](http://www.haproxy.org/) protocol. |false|
-|bookieId | If you want to customize a bookie ID or use a dynamic network address for a bookie, you can set the `bookieId`.

    Bookie advertises itself using the `bookieId` rather than the `BookieSocketAddress` (`hostname:port` or `IP:port`).

    The `bookieId` is a non-empty string that can contain ASCII digits and letters ([a-zA-Z0-9]), colons, dashes, and dots.

    For more information about `bookieId`, see [here](http://bookkeeper.apache.org/bps/BP-41-bookieid/).|/| -| maxTopicsPerNamespace | The maximum number of persistent topics that can be created in the namespace. When the number of topics reaches this threshold, the broker rejects the request of creating a new topic, including the auto-created topics by the producer or consumer, until the number of connected consumers decreases. The default value 0 disables the check. | 0 | -| metadataStoreConfigPath | The configuration file path of the local metadata store. See [Configure metadata store](administration-metadata-store.md) for details. |N/A| -|schemaRegistryStorageClassName|The schema storage implementation used by this broker.|org.apache.pulsar.broker.service.schema.BookkeeperSchemaStorageFactory| -|isSchemaValidationEnforced| Whether to enable schema validation, when schema validation is enabled, if a producer without a schema attempts to produce the message to a topic with schema, the producer is rejected and disconnected.|false| -|isAllowAutoUpdateSchemaEnabled|Allow schema to be auto updated at broker level.|true| -|schemaCompatibilityStrategy| The schema compatibility strategy at broker level, see [here](schema-evolution-compatibility.md#schema-compatibility-check-strategy) for available values.|FULL| -|systemTopicSchemaCompatibilityStrategy| The schema compatibility strategy is used for system topics, see [here](schema-evolution-compatibility.md#schema-compatibility-check-strategy) for available values.|ALWAYS_COMPATIBLE| - -#### Deprecated parameters of standalone Pulsar -The following parameters have been deprecated in the `conf/standalone.conf` file. - -|Name|Description|Default| -|---|---|---| -|zookeeperServers| The quorum connection string for local metadata store. Use `metadataStoreUrl` instead. |N/A| -|configurationStoreServers| Configuration store connection string (as a comma-separated list). Use `configurationMetadataStoreUrl` instead. |N/A| -|zooKeeperOperationTimeoutSeconds|ZooKeeper operation timeout in seconds. Use `metadataStoreOperationTimeoutSeconds` instead. |30| -|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds. Use `metadataStoreCacheExpirySeconds` instead. |300| -|zooKeeperSessionTimeoutMillis| The ZooKeeper session timeout, in milliseconds. Use `metadataStoreSessionTimeoutMillis` instead. |30000| - -## WebSocket - -|Name|Description|Default| -|---|---|---| -|configurationMetadataStoreUrl |Configuration store connection string. |N/A| -|metadataStoreSessionTimeoutMillis|Metadata store session timeout in milliseconds. |30000| -|metadataStoreCacheExpirySeconds|Metadata store cache expiry time in seconds|300| -|serviceUrl||| -|serviceUrlTls||| -|brokerServiceUrl||| -|brokerServiceUrlTls||| -|webServicePort||8080| -|webServicePortTls||8443| -|bindAddress||0.0.0.0| -|clusterName ||| -|authenticationEnabled||false| -|authenticationProviders||| -|authorizationEnabled||false| -|superUserRoles ||| -|brokerClientAuthenticationPlugin||| -|brokerClientAuthenticationParameters||| -|tlsEnabled||false| -|tlsAllowInsecureConnection||false| -|tlsCertificateFilePath||| -|tlsKeyFilePath ||| -|tlsTrustCertsFilePath||| - -#### Deprecated parameters of WebSocket -The following parameters have been deprecated in the `conf/websocket.conf` file. - -|Name|Description|Default| -|---|---|---| -|zooKeeperSessionTimeoutMillis|The ZooKeeper session timeout in milliseconds. Use `metadataStoreSessionTimeoutMillis` instead. 
|30000| -|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds. Use `metadataStoreCacheExpirySeconds` instead.|300| -|configurationStoreServers| Configuration Store connection string. Use `configurationMetadataStoreUrl` instead.|N/A| - -## Pulsar proxy - -The [Pulsar proxy](concepts-architecture-overview.md#pulsar-proxy) can be configured in the `conf/proxy.conf` file. - - -|Name|Description|Default| -|---|---|---| -|forwardAuthorizationCredentials| Forward client authorization credentials to Broker for re-authorization, and make sure authentication is enabled for this to take effect. |false| -|metadataStoreUrl| Metadata store quorum connection string (as a comma-separated list) || -|configurationMetadataStoreUrl| Configuration store connection string (as a comma-separated list) || -| brokerServiceURL | The service URL pointing to the broker cluster. Must begin with `pulsar://`. | | -| brokerServiceURLTLS | The TLS service URL pointing to the broker cluster. Must begin with `pulsar+ssl://`. | | -| brokerWebServiceURL | The Web service URL pointing to the broker cluster | | -| brokerWebServiceURLTLS | The TLS Web service URL pointing to the broker cluster | | -| functionWorkerWebServiceURL | The Web service URL pointing to the function worker cluster. It is only configured when you setup function workers in a separate cluster. | | -| functionWorkerWebServiceURLTLS | The TLS Web service URL pointing to the function worker cluster. It is only configured when you setup function workers in a separate cluster. | | -|metadataStoreSessionTimeoutMillis| Metadata store session timeout (in milliseconds) |30000| -|metadataStoreCacheExpirySeconds|Metadata store cache expiry time in seconds|300| -|advertisedAddress|Hostname or IP address the service advertises to the outside world. If not set, the value of `InetAddress.getLocalHost().getHostname()` is used.|N/A| -|servicePort| The port to use for server binary Protobuf requests |6650| -|servicePortTls| The port to use to server binary Protobuf TLS requests |6651| -|statusFilePath| Path for the file used to determine the rotation status for the proxy instance when responding to service discovery health checks || -| proxyLogLevel | Proxy log level
  - 0: Do not log any TCP channel information.
  - 1: Parse and log any TCP channel information and command information, without message bodies.
  - 2: Parse and log channel information, command information, and message bodies.
  | 0 |
-|authenticationEnabled| Whether authentication is enabled for the Pulsar proxy |false|
-|authenticateMetricsEndpoint| Whether the '/metrics' endpoint requires authentication. Defaults to true. 'authenticationEnabled' must also be set for this to take effect. |true|
-|authenticationProviders| Authentication provider name list (a comma-separated list of class names) ||
-|authorizationEnabled| Whether authorization is enforced by the Pulsar proxy |false|
-|authorizationProvider| Authorization provider as a fully qualified class name |org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider|
-| anonymousUserRole | When this parameter is not empty, unauthenticated users act as the anonymousUserRole. | |
-|brokerClientAuthenticationPlugin| The authentication plugin used by the Pulsar proxy to authenticate with Pulsar brokers ||
-|brokerClientAuthenticationParameters| The authentication parameters used by the Pulsar proxy to authenticate with Pulsar brokers ||
-|brokerClientTrustCertsFilePath| The path to trusted certificates used by the Pulsar proxy to authenticate with Pulsar brokers ||
-|superUserRoles| Role names that are treated as "super-users," meaning that they are able to perform all admin tasks ||
-|maxConcurrentInboundConnections| Max concurrent inbound connections. The proxy rejects requests beyond that. |10000|
-|maxConcurrentLookupRequests| Max concurrent outbound connections. The proxy errors out requests beyond that. |50000|
-|tlsEnabledWithBroker| Whether TLS is enabled when communicating with Pulsar brokers. |false|
-| tlsCertRefreshCheckDurationSec | TLS certificate refresh duration in seconds. If the value is set to 0, the TLS certificate is checked on every new connection. | 300 |
-|tlsCertificateFilePath| Path for the TLS certificate file ||
-|tlsKeyFilePath| Path for the TLS private key file ||
-|tlsTrustCertsFilePath| Path for the trusted TLS certificate PEM file ||
-|tlsHostnameVerificationEnabled| Whether the hostname is validated when the proxy creates a TLS connection with brokers |false|
-|tlsRequireTrustedClientCertOnConnect| Whether client certificates are required for TLS. Connections are rejected if the client certificate isn't trusted. |false|
-|tlsProtocols|Specify the TLS protocols the proxy uses to negotiate during the TLS handshake. Multiple values can be specified, separated by commas. Example: ```TLSv1.3```, ```TLSv1.2``` ||
-|tlsCiphers|Specify the TLS ciphers the proxy uses to negotiate during the TLS handshake. Multiple values can be specified, separated by commas. Example: ```TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256```||
-| httpReverseProxyConfigs | HTTP directives to redirect to non-Pulsar services | |
-| httpOutputBufferSize | HTTP output buffer size. The amount of data that is buffered for HTTP requests before it is flushed to the channel. A larger buffer size may result in higher HTTP throughput, though it may take longer for the client to see data. If using HTTP streaming via the reverse proxy, this should be set to the minimum value (1) so that clients see the data as soon as possible. | 32768 |
-| httpNumThreads | Number of threads to use for HTTP request processing| 2 * Runtime.getRuntime().availableProcessors() |
-|tokenSettingPrefix| Configure the prefix of token-related settings like `tokenSecretKey`, `tokenPublicKey`, `tokenAuthClaim`, `tokenPublicAlg`, `tokenAudienceClaim`, and `tokenAudience`. ||
-|tokenSecretKey| Configure the secret key to be used to validate auth tokens.
The key can be specified like: `tokenSecretKey=data:;base64,xxxxxxxxx` or `tokenSecretKey=file:///my/secret.key`. Note: key file must be DER-encoded.|| -|tokenPublicKey| Configure the public key to be used to validate auth tokens. The key can be specified like: `tokenPublicKey=data:;base64,xxxxxxxxx` or `tokenPublicKey=file:///my/secret.key`. Note: key file must be DER-encoded.|| -|tokenAuthClaim| Specify the token claim that will be used as the authentication "principal" or "role". The "subject" field will be used if this is left blank || -|tokenAudienceClaim| The token audience "claim" name, e.g. "aud". It is used to get the audience from token. If it is not set, the audience is not verified. || -| tokenAudience | The token audience stands for this broker. The field `tokenAudienceClaim` of a valid token need contains this parameter.| | -|haProxyProtocolEnabled | Enable or disable the [HAProxy](http://www.haproxy.org/) protocol. |false| - -#### Deprecated parameters of Pulsar proxy -The following parameters have been deprecated in the `conf/proxy.conf` file. - -|Name|Description|Default| -|---|---|---| -|tlsEnabledInProxy| Deprecated - use `servicePortTls` and `webServicePortTls` instead. |false| -|zookeeperSessionTimeoutMs| ZooKeeper session timeout (in milliseconds). Use `metadataStoreSessionTimeoutMillis` instead. |30000| -|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds. Use `metadataStoreCacheExpirySeconds` instead.|300| - -## ZooKeeper - -ZooKeeper handles a broad range of essential configuration- and coordination-related tasks for Pulsar. The default configuration file for ZooKeeper is in the `conf/zookeeper.conf` file in your Pulsar installation. The following parameters are available: - - -|Name|Description|Default| -|---|---|---| -|tickTime| The tick is the basic unit of time in ZooKeeper, measured in milliseconds and used to regulate things like heartbeats and timeouts. tickTime is the length of a single tick. |2000| -|initLimit| The maximum time, in ticks, that the leader ZooKeeper server allows follower ZooKeeper servers to successfully connect and sync. The tick time is set in milliseconds using the tickTime parameter. |10| -|syncLimit| The maximum time, in ticks, that a follower ZooKeeper server is allowed to sync with other ZooKeeper servers. The tick time is set in milliseconds using the tickTime parameter. |5| -|dataDir| The location where ZooKeeper will store in-memory database snapshots as well as the transaction log of updates to the database. |data/zookeeper| -|clientPort| The port on which the ZooKeeper server will listen for connections. |2181| -|admin.enableServer|The port at which the admin listens.|true| -|admin.serverPort|The port at which the admin listens.|9990| -|autopurge.snapRetainCount| In ZooKeeper, auto purge determines how many recent snapshots of the database stored in dataDir to retain within the time interval specified by autopurge.purgeInterval (while deleting the rest). |3| -|autopurge.purgeInterval| The time interval, in hours, by which the ZooKeeper database purge task is triggered. Setting to a non-zero number will enable auto purge; setting to 0 will disable. Read this guide before enabling auto purge. |1| -|forceSync|Requires updates to be synced to media of the transaction log before finishing processing the update. If this option is set to 'no', ZooKeeper will not require updates to be synced to the media. 
WARNING: it's not recommended to run a production ZK cluster with `forceSync` disabled.|yes|
-|maxClientCnxns| The maximum number of client connections. Increase this if you need to handle more ZooKeeper clients. |60|
-
-
-
-
-In addition to the parameters in the table above, configuring ZooKeeper for Pulsar involves adding
-a `server.N` line to the `conf/zookeeper.conf` file for each node in the ZooKeeper cluster, where `N` is the number of the ZooKeeper node. Here's an example for a three-node ZooKeeper cluster:
-
-```properties
-
-server.1=zk1.us-west.example.com:2888:3888
-server.2=zk2.us-west.example.com:2888:3888
-server.3=zk3.us-west.example.com:2888:3888
-
-```
-
-> We strongly recommend consulting the [ZooKeeper Administrator's Guide](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html) for a more thorough and comprehensive introduction to ZooKeeper configuration.
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/reference-connector-admin.md b/site2/website/versioned_docs/version-2.10.0-deprecated/reference-connector-admin.md
deleted file mode 100644
index 2a7c1d82adba24..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/reference-connector-admin.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-id: reference-connector-admin
-title: Connector Admin CLI
-sidebar_label: "Connector Admin CLI"
-original_id: reference-connector-admin
----
-
-> **Important**
->
-> For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more information, see [Pulsar admin doc](/tools/pulsar-admin/).
->
-
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/reference-metrics.md b/site2/website/versioned_docs/version-2.10.0-deprecated/reference-metrics.md
deleted file mode 100644
index 44b03a5069ca06..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/reference-metrics.md
+++ /dev/null
@@ -1,617 +0,0 @@
----
-id: reference-metrics
-title: Pulsar Metrics
-sidebar_label: "Pulsar Metrics"
-original_id: reference-metrics
----
-
-
-
-Pulsar exposes the following metrics in Prometheus format. You can monitor your clusters with those metrics.
-
-* [ZooKeeper](#zookeeper)
-* [BookKeeper](#bookkeeper)
-* [Broker](#broker)
-* [Pulsar Functions](#pulsar-functions)
-* [Proxy](#proxy)
-* [Pulsar SQL Worker](#pulsar-sql-worker)
-* [Pulsar transaction](#pulsar-transaction)
-
-The following types of metrics are available:
-
-- [Counter](https://prometheus.io/docs/concepts/metric_types/#counter): a cumulative metric that represents a single monotonically increasing counter. The value can only increase, or be reset to zero on restart.
-- [Gauge](https://prometheus.io/docs/concepts/metric_types/#gauge): a metric that represents a single numerical value that can arbitrarily go up and down.
-- [Histogram](https://prometheus.io/docs/concepts/metric_types/#histogram): a histogram samples observations (usually things like request durations or response sizes) and counts them in configurable buckets. The `_bucket` suffix is the number of observations within a histogram bucket, configured with parameter `{le=""}`. The `_count` suffix is the number of observations, shown as a time series and behaves like a counter. The `_sum` suffix is the sum of observed values, also shown as a time series and behaves like a counter. These suffixes are together denoted by `_*` in this doc. 
-- [Summary](https://prometheus.io/docs/concepts/metric_types/#summary): similar to a histogram, a summary samples observations (usually things like request durations and response sizes). While it also provides a total count of observations and a sum of all observed values, it calculates configurable quantiles over a sliding time window. - -## ZooKeeper - -The ZooKeeper metrics are exposed under "/metrics" at port `8000`. You can use a different port by configuring the `metricsProvider.httpPort` in conf/zookeeper.conf. - -ZooKeeper provides a New Metrics System since 3.6.0. For more detailed metrics, refer to the [ZooKeeper Monitor Guide](https://zookeeper.apache.org/doc/r3.7.0/zookeeperMonitor.html). - -## BookKeeper - -The BookKeeper metrics are exposed under "/metrics" at port `8000`. You can change the port by updating `prometheusStatsHttpPort` -in the `bookkeeper.conf` configuration file. - -### Server metrics - -| Name | Type | Description | -|---|---|---| -| bookie_SERVER_STATUS | Gauge | The server status for bookie server.
    • 1: the bookie is running in writable mode.
    • 0: the bookie is running in readonly mode.
    |
-| bookkeeper_server_ADD_ENTRY_count | Counter | The total number of ADD_ENTRY requests received at the bookie. The `success` label is used to distinguish successes and failures. |
-| bookkeeper_server_READ_ENTRY_count | Counter | The total number of READ_ENTRY requests received at the bookie. The `success` label is used to distinguish successes and failures. |
-| bookie_WRITE_BYTES | Counter | The total number of bytes written to the bookie. |
-| bookie_READ_BYTES | Counter | The total number of bytes read from the bookie. |
-| bookkeeper_server_ADD_ENTRY_REQUEST | Summary | The summary of request latency of ADD_ENTRY requests at the bookie. The `success` label is used to distinguish successes and failures. |
-| bookkeeper_server_READ_ENTRY_REQUEST | Summary | The summary of request latency of READ_ENTRY requests at the bookie. The `success` label is used to distinguish successes and failures. |
-| bookkeeper_server_BookieReadThreadPool_queue_{thread_id}|Gauge|The number of requests to be processed in a read thread queue.|
-| bookkeeper_server_BookieReadThreadPool_task_queued|Summary | The waiting time of a task to be processed in a read thread queue. |
-| bookkeeper_server_BookieReadThreadPool_task_execution|Summary | The execution time of a task in a read thread queue.|
-
-### Journal metrics
-
-| Name | Type | Description |
-|---|---|---|
-| bookie_journal_JOURNAL_SYNC_count | Counter | The total number of journal fsync operations happening at the bookie. The `success` label is used to distinguish successes and failures. |
-| bookie_journal_JOURNAL_QUEUE_SIZE | Gauge | The total number of requests pending in the journal queue. |
-| bookie_journal_JOURNAL_FORCE_WRITE_QUEUE_SIZE | Gauge | The total number of force write (fsync) requests pending in the force-write queue. |
-| bookie_journal_JOURNAL_CB_QUEUE_SIZE | Gauge | The total number of callbacks pending in the callback queue. |
-| bookie_journal_JOURNAL_ADD_ENTRY | Summary | The summary of request latency of adding entries to the journal. |
-| bookie_journal_JOURNAL_SYNC | Summary | The summary of fsync latency of syncing data to the journal disk. |
-| bookie_journal_JOURNAL_CREATION_LATENCY| Summary | The latency of creating a journal log file. |
-
-### Storage metrics
-
-| Name | Type | Description |
-|---|---|---|
-| bookie_ledgers_count | Gauge | The total number of ledgers stored in the bookie. |
-| bookie_entries_count | Gauge | The total number of entries stored in the bookie. |
-| bookie_write_cache_size | Gauge | The bookie write cache size (in bytes). |
-| bookie_read_cache_size | Gauge | The bookie read cache size (in bytes). |
-| bookie_DELETED_LEDGER_COUNT | Counter | The total number of ledgers deleted since the bookie has started. |
-| bookie_ledger_writable_dirs | Gauge | The number of writable directories in the bookie. |
-| bookie_flush | Gauge| The flush latency of the bookie memtable. |
-| bookie_throttled_write_requests | Counter | The number of throttled write requests. |
-
-## Broker
-
-The broker metrics are exposed under "/metrics" at port `8080`. You can change the port by updating `webServicePort` to a different port
-in the `broker.conf` configuration file.
-
-All the metrics exposed by a broker are labelled with `cluster=${pulsar_cluster}`. The name of the Pulsar cluster is the value of `${pulsar_cluster}`, which you have configured in the `broker.conf` file. 
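-
-As a quick sanity check, you can fetch the metrics endpoint directly and inspect the exposed metric names and labels. A minimal sketch, assuming a broker running locally on the default web service port:
-
-```bash
-# Fetch all Prometheus metrics exposed by the broker
-curl -s http://localhost:8080/metrics
-
-# Keep only the Pulsar metrics, e.g. the rate and throughput gauges described below
-curl -s http://localhost:8080/metrics | grep '^pulsar_'
-```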
- 
-The following metrics are available for broker:
-
-- [ZooKeeper](#zookeeper)
-  - [Server metrics](#server-metrics)
-  - [Request metrics](#request-metrics)
-- [BookKeeper](#bookkeeper)
-  - [Server metrics](#server-metrics-1)
-  - [Journal metrics](#journal-metrics)
-  - [Storage metrics](#storage-metrics)
-- [Broker](#broker)
-  - [Namespace metrics](#namespace-metrics)
-  - [Replication metrics](#replication-metrics)
-  - [Topic metrics](#topic-metrics)
-  - [Replication metrics](#replication-metrics-1)
-  - [ManagedLedgerCache metrics](#managedledgercache-metrics)
-  - [ManagedLedger metrics](#managedledger-metrics)
-  - [LoadBalancing metrics](#loadbalancing-metrics)
-  - [BundleUnloading metrics](#bundleunloading-metrics)
-  - [BundleSplit metrics](#bundlesplit-metrics)
-  - [Subscription metrics](#subscription-metrics)
-  - [Consumer metrics](#consumer-metrics)
-  - [Managed ledger bookie client metrics](#managed-ledger-bookie-client-metrics)
-  - [Token metrics](#token-metrics)
-  - [Authentication metrics](#authentication-metrics)
-  - [Connection metrics](#connection-metrics)
-  - [Jetty metrics](#jetty-metrics)
-- [Pulsar Functions](#pulsar-functions)
-- [Proxy](#proxy)
-- [Pulsar SQL Worker](#pulsar-sql-worker)
-- [Pulsar transaction](#pulsar-transaction)
-
-### BookKeeper client metrics
-
-All the BookKeeper client metrics are labelled with the following label:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you configured in `broker.conf`.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_managedLedger_client_bookkeeper_client_BOOKIE_QUARANTINE | Counter | The number of bookie clients to be quarantined.

    If you want to expose this metric, set `bookkeeperClientExposeStatsToPrometheus` to `true` in the `broker.conf` file.|
-
-### Namespace metrics
-
-> Namespace metrics are only exposed when `exposeTopicLevelMetricsInPrometheus` is set to `false`.
-
-All the namespace metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you configured in `broker.conf`.
-- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_topics_count | Gauge | The number of Pulsar topics of the namespace owned by this broker. |
-| pulsar_subscriptions_count | Gauge | The number of Pulsar subscriptions of the namespace served by this broker. |
-| pulsar_producers_count | Gauge | The number of active producers of the namespace connected to this broker. |
-| pulsar_consumers_count | Gauge | The number of active consumers of the namespace connected to this broker. |
-| pulsar_rate_in | Gauge | The total message rate of the namespace coming into this broker (messages/second). |
-| pulsar_rate_out | Gauge | The total message rate of the namespace going out from this broker (messages/second). |
-| pulsar_throughput_in | Gauge | The total throughput of the namespace coming into this broker (bytes/second). |
-| pulsar_throughput_out | Gauge | The total throughput of the namespace going out from this broker (bytes/second). |
-| pulsar_storage_size | Gauge | The total storage size of the topics in this namespace owned by this broker (bytes). |
-| pulsar_storage_logical_size | Gauge | The storage size of topics in the namespace owned by the broker without replicas (in bytes). |
-| pulsar_storage_backlog_size | Gauge | The total backlog size of the topics of this namespace owned by this broker (messages). |
-| pulsar_storage_offloaded_size | Gauge | The total amount of the data in this namespace offloaded to the tiered storage (bytes). |
-| pulsar_storage_write_rate | Gauge | The total message batches (entries) written to the storage for this namespace (message batches / second). |
-| pulsar_storage_read_rate | Gauge | The total message batches (entries) read from the storage for this namespace (message batches / second). |
-| pulsar_subscription_delayed | Gauge | The total number of message batches (entries) delayed for dispatching. |
-| pulsar_storage_write_latency_le_* | Histogram | The entry rate of a namespace where the storage write latency is smaller than a given threshold.
    Available thresholds:
    • pulsar_storage_write_latency_le_0_5: <= 0.5ms
    • pulsar_storage_write_latency_le_1: <= 1ms
    • pulsar_storage_write_latency_le_5: <= 5ms
    • pulsar_storage_write_latency_le_10: <= 10ms
    • pulsar_storage_write_latency_le_20: <= 20ms
    • pulsar_storage_write_latency_le_50: <= 50ms
    • pulsar_storage_write_latency_le_100: <= 100ms
    • pulsar_storage_write_latency_le_200: <= 200ms
    • pulsar_storage_write_latency_le_1000: <= 1s
    • pulsar_storage_write_latency_le_overflow: > 1s
    |
-| pulsar_entry_size_le_* | Histogram | The entry rate of a namespace where the entry size is smaller than a given threshold.
    Available thresholds:
    • pulsar_entry_size_le_128: <= 128 bytes
    • pulsar_entry_size_le_512: <= 512 bytes
    • pulsar_entry_size_le_1_kb: <= 1 KB
    • pulsar_entry_size_le_2_kb: <= 2 KB
    • pulsar_entry_size_le_4_kb: <= 4 KB
    • pulsar_entry_size_le_16_kb: <= 16 KB
    • pulsar_entry_size_le_100_kb: <= 100 KB
    • pulsar_entry_size_le_1_mb: <= 1 MB
    • pulsar_entry_size_le_overflow: > 1 MB
    |
-
-#### Replication metrics
-
-If a namespace is configured to be replicated among multiple Pulsar clusters, the corresponding replication metrics are also exposed when `replicationMetricsEnabled` is enabled.
-
-All the replication metrics are also labelled with `remoteCluster=${pulsar_remote_cluster}`.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_replication_rate_in | Gauge | The total message rate of the namespace replicating from the remote cluster (messages/second). |
-| pulsar_replication_rate_out | Gauge | The total message rate of the namespace replicating to the remote cluster (messages/second). |
-| pulsar_replication_throughput_in | Gauge | The total throughput of the namespace replicating from the remote cluster (bytes/second). |
-| pulsar_replication_throughput_out | Gauge | The total throughput of the namespace replicating to the remote cluster (bytes/second). |
-| pulsar_replication_backlog | Gauge | The total backlog of the namespace replicating to the remote cluster (messages). |
-| pulsar_replication_rate_expired | Gauge | Total rate of messages expired (messages/second). |
-| pulsar_replication_connected_count | Gauge | The number of replication subscribers that are up and running to replicate to the remote cluster. |
-| pulsar_replication_delay_in_seconds | Gauge | Time in seconds from the time a message was produced to the time when it is about to be replicated. |
-
-
-### Topic metrics
-
-> Topic metrics are only exposed when `exposeTopicLevelMetricsInPrometheus` is set to `true`.
-
-All the topic metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you configured in `broker.conf`.
-- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.
-- *topic*: `topic=${pulsar_topic}`. `${pulsar_topic}` is the topic name.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_subscriptions_count | Gauge | The number of Pulsar subscriptions of the topic served by this broker. |
-| pulsar_producers_count | Gauge | The number of active producers of the topic connected to this broker. |
-| pulsar_consumers_count | Gauge | The number of active consumers of the topic connected to this broker. |
-| pulsar_rate_in | Gauge | The total message rate of the topic coming into this broker (messages/second). |
-| pulsar_rate_out | Gauge | The total message rate of the topic going out from this broker (messages/second). |
-| pulsar_publish_rate_limit_times | Gauge | The number of times the publish rate limit is triggered. |
-| pulsar_throughput_in | Gauge | The total throughput of the topic coming into this broker (bytes/second). |
-| pulsar_throughput_out | Gauge | The total throughput of the topic going out from this broker (bytes/second). |
-| pulsar_storage_size | Gauge | The total storage size of this topic owned by this broker (bytes). |
-| pulsar_storage_logical_size | Gauge | The storage size of this topic owned by the broker, excluding replicas (in bytes). |
-| pulsar_storage_backlog_size | Gauge | The total backlog size of this topic owned by this broker (messages). |
-| pulsar_storage_offloaded_size | Gauge | The total amount of the data in this topic offloaded to the tiered storage (bytes). |
-| pulsar_storage_backlog_quota_limit | Gauge | The backlog quota limit of this topic (bytes). 
|
-|pulsar_storage_write_rate | Gauge | The total message batches (entries) written to the storage for this topic (message batches / second). |
-| pulsar_storage_read_rate | Gauge | The total message batches (entries) read from the storage for this topic (message batches / second). |
-| pulsar_subscription_delayed | Gauge | The total number of message batches (entries) delayed for dispatching. |
-| pulsar_storage_write_latency_le_* | Histogram | The entry rate of a topic where the storage write latency is smaller than a given threshold.
    Available thresholds:
    • pulsar_storage_write_latency_le_0_5: <= 0.5ms
    • pulsar_storage_write_latency_le_1: <= 1ms
    • pulsar_storage_write_latency_le_5: <= 5ms
    • pulsar_storage_write_latency_le_10: <= 10ms
    • pulsar_storage_write_latency_le_20: <= 20ms
    • pulsar_storage_write_latency_le_50: <= 50ms
    • pulsar_storage_write_latency_le_100: <= 100ms
    • pulsar_storage_write_latency_le_200: <= 200ms
    • pulsar_storage_write_latency_le_1000: <= 1s
    • pulsar_storage_write_latency_le_overflow: > 1s
    |
-| pulsar_entry_size_le_* | Histogram | The entry rate of a topic where the entry size is smaller than a given threshold.
    Available thresholds:
    • pulsar_entry_size_le_128: <= 128 bytes
    • pulsar_entry_size_le_512: <= 512 bytes
    • pulsar_entry_size_le_1_kb: <= 1 KB
    • pulsar_entry_size_le_2_kb: <= 2 KB
    • pulsar_entry_size_le_4_kb: <= 4 KB
    • pulsar_entry_size_le_16_kb: <= 16 KB
    • pulsar_entry_size_le_100_kb: <= 100 KB
    • pulsar_entry_size_le_1_mb: <= 1 MB
    • pulsar_entry_size_le_overflow: > 1 MB
    |
-| pulsar_in_bytes_total | Counter | The total size in bytes of messages received for this topic. |
-| pulsar_in_messages_total | Counter | The total number of messages received for this topic. |
-| pulsar_out_bytes_total | Counter | The total size in bytes of messages read from this topic. |
-| pulsar_out_messages_total | Counter | The total number of messages read from this topic. |
-| pulsar_compaction_removed_event_count | Gauge | The total number of events removed by compaction. |
-| pulsar_compaction_succeed_count | Gauge | The total number of successful compaction runs. |
-| pulsar_compaction_failed_count | Gauge | The total number of failed compaction runs. |
-| pulsar_compaction_duration_time_in_mills | Gauge | The duration of the compaction in milliseconds. |
-| pulsar_compaction_read_throughput | Gauge | The read throughput of the compaction. |
-| pulsar_compaction_write_throughput | Gauge | The write throughput of the compaction. |
-| pulsar_compaction_latency_le_* | Histogram | The compaction latency with a given quantile.
    Available thresholds:
    • pulsar_compaction_latency_le_0_5: <= 0.5ms
    • pulsar_compaction_latency_le_1: <= 1ms
    • pulsar_compaction_latency_le_5: <= 5ms
    • pulsar_compaction_latency_le_10: <= 10ms
    • pulsar_compaction_latency_le_20: <= 20ms
    • pulsar_compaction_latency_le_50: <= 50ms
    • pulsar_compaction_latency_le_100: <= 100ms
    • pulsar_compaction_latency_le_200: <= 200ms
    • pulsar_compaction_latency_le_1000: <= 1s
    • pulsar_compaction_latency_le_overflow: > 1s
    |
-| pulsar_compaction_compacted_entries_count | Gauge | The total number of compacted entries. |
-| pulsar_compaction_compacted_entries_size |Gauge | The total size of compacted entries. |
-
-#### Replication metrics
-
-If a namespace that a topic belongs to is configured to be replicated among multiple Pulsar clusters, the corresponding replication metrics are also exposed when `replicationMetricsEnabled` is enabled.
-
-All the replication metrics are labelled with `remoteCluster=${pulsar_remote_cluster}`.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_replication_rate_in | Gauge | The total message rate of the topic replicating from the remote cluster (messages/second). |
-| pulsar_replication_rate_out | Gauge | The total message rate of the topic replicating to the remote cluster (messages/second). |
-| pulsar_replication_throughput_in | Gauge | The total throughput of the topic replicating from the remote cluster (bytes/second). |
-| pulsar_replication_throughput_out | Gauge | The total throughput of the topic replicating to the remote cluster (bytes/second). |
-| pulsar_replication_backlog | Gauge | The total backlog of the topic replicating to the remote cluster (messages). |
-
-#### Topic lookup metrics
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_broker_load_manager_bundle_assignment | Gauge | The summary of the latency of bundle ownership operations. |
-| pulsar_broker_lookup | Gauge | The latency of all lookup operations. |
-| pulsar_broker_lookup_redirects | Gauge | The number of redirected lookup requests. |
-| pulsar_broker_lookup_answers | Gauge | The number of lookup responses (i.e. not redirected requests). |
-| pulsar_broker_lookup_failures | Gauge | The number of lookup failures. |
-| pulsar_broker_lookup_pending_requests | Gauge | The number of pending lookups in the broker. When it is up to the threshold, new requests are rejected. |
-| pulsar_broker_topic_load_pending_requests | Gauge | The number of pending topic load operations. |
-
-### ManagedLedgerCache metrics
-All the ManagedLedgerCache metrics are labelled with the following labels:
-- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file.
-
-| Name | Type | Description |
-| --- | --- | --- |
-| pulsar_ml_cache_evictions | Gauge | The number of cache evictions during the last minute. |
-| pulsar_ml_cache_hits_rate | Gauge | The number of cache hits per second on the broker side. |
-| pulsar_ml_cache_hits_throughput | Gauge | The amount of data retrieved from the cache on the broker side (in bytes/s). |
-| pulsar_ml_cache_misses_rate | Gauge | The number of cache misses per second on the broker side. |
-| pulsar_ml_cache_misses_throughput | Gauge | The amount of data not retrieved from the cache on the broker side (in bytes/s). 
|
-| pulsar_ml_cache_pool_active_allocations | Gauge | The number of currently active allocations in direct arena |
-| pulsar_ml_cache_pool_active_allocations_huge | Gauge | The number of currently active huge allocations in direct arena |
-| pulsar_ml_cache_pool_active_allocations_normal | Gauge | The number of currently active normal allocations in direct arena |
-| pulsar_ml_cache_pool_active_allocations_small | Gauge | The number of currently active small allocations in direct arena |
-| pulsar_ml_cache_pool_allocated | Gauge | The total allocated memory of chunk lists in direct arena |
-| pulsar_ml_cache_pool_used | Gauge | The total used memory of chunk lists in direct arena |
-| pulsar_ml_cache_used_size | Gauge | The size in bytes used to store the entry payloads |
-| pulsar_ml_count | Gauge | The number of currently opened managed ledgers |
-
-### ManagedLedger metrics
-All the managedLedger metrics are labelled with the following labels:
-- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file.
-- namespace: namespace=${pulsar_namespace}. ${pulsar_namespace} is the namespace name.
-- quantile: quantile=${quantile}. Quantile is only for `Histogram` type metrics, and represents the threshold for the given buckets.
-
-| Name | Type | Description |
-| --- | --- | --- |
-| pulsar_ml_AddEntryBytesRate | Gauge | The bytes/s rate of messages added |
-| pulsar_ml_AddEntryWithReplicasBytesRate | Gauge | The bytes/s rate of messages added with replicas |
-| pulsar_ml_AddEntryErrors | Gauge | The number of addEntry requests that failed |
-| pulsar_ml_AddEntryLatencyBuckets | Histogram | The latency of adding a ledger entry with a given quantile (threshold), including the time spent waiting in the queue on the broker side.
    Available quantile:
    • quantile="0.0_0.5" is AddEntryLatency between (0.0ms, 0.5ms]
    • quantile="0.5_1.0" is AddEntryLatency between (0.5ms, 1.0ms]
    • quantile="1.0_5.0" is AddEntryLatency between (1ms, 5ms]
    • quantile="5.0_10.0" is AddEntryLatency between (5ms, 10ms]
    • quantile="10.0_20.0" is AddEntryLatency between (10ms, 20ms]
    • quantile="20.0_50.0" is AddEntryLatency between (20ms, 50ms]
    • quantile="50.0_100.0" is AddEntryLatency between (50ms, 100ms]
    • quantile="100.0_200.0" is AddEntryLatency between (100ms, 200ms]
    • quantile="200.0_1000.0" is AddEntryLatency between (200ms, 1s]
    | -| pulsar_ml_AddEntryLatencyBuckets_OVERFLOW | Gauge | The number of times the AddEntryLatency is longer than 1 second | -| pulsar_ml_AddEntryMessagesRate | Gauge | The msg/s rate of messages added | -| pulsar_ml_AddEntrySucceed | Gauge | The number of addEntry requests that succeeded | -| pulsar_ml_EntrySizeBuckets | Histogram | The added entry size of a ledger with a given quantile.
    Available quantile:
    • quantile="0.0_128.0" is EntrySize between (0byte, 128byte]
    • quantile="128.0_512.0" is EntrySize between (128byte, 512byte]
    • quantile="512.0_1024.0" is EntrySize between (512byte, 1KB]
    • quantile="1024.0_2048.0" is EntrySize between (1KB, 2KB]
    • quantile="2048.0_4096.0" is EntrySize between (2KB, 4KB]
    • quantile="4096.0_16384.0" is EntrySize between (4KB, 16KB]
    • quantile="16384.0_102400.0" is EntrySize between (16KB, 100KB]
    • quantile="102400.0_1232896.0" is EntrySize between (100KB, 1MB]
    | -| pulsar_ml_EntrySizeBuckets_OVERFLOW |Gauge | The number of times the EntrySize is larger than 1MB | -| pulsar_ml_LedgerSwitchLatencyBuckets | Histogram | The ledger switch latency with a given quantile.
    Available quantile:
    • quantile="0.0_0.5" is EntrySize between (0ms, 0.5ms]
    • quantile="0.5_1.0" is EntrySize between (0.5ms, 1ms]
    • quantile="1.0_5.0" is EntrySize between (1ms, 5ms]
    • quantile="5.0_10.0" is EntrySize between (5ms, 10ms]
    • quantile="10.0_20.0" is EntrySize between (10ms, 20ms]
    • quantile="20.0_50.0" is EntrySize between (20ms, 50ms]
    • quantile="50.0_100.0" is EntrySize between (50ms, 100ms]
    • quantile="100.0_200.0" is EntrySize between (100ms, 200ms]
    • quantile="200.0_1000.0" is EntrySize between (200ms, 1000ms]
    | -| pulsar_ml_LedgerSwitchLatencyBuckets_OVERFLOW | Gauge | The number of times the ledger switch latency is longer than 1 second | -| pulsar_ml_LedgerAddEntryLatencyBuckets | Histogram | The latency for bookie client to persist a ledger entry from broker to BookKeeper service with a given quantile (threshold).
    Available quantile:
    • quantile="0.0_0.5" is LedgerAddEntryLatency between (0.0ms, 0.5ms]
    • quantile="0.5_1.0" is LedgerAddEntryLatency between (0.5ms, 1.0ms]
    • quantile="1.0_5.0" is LedgerAddEntryLatency between (1ms, 5ms]
    • quantile="5.0_10.0" is LedgerAddEntryLatency between (5ms, 10ms]
    • quantile="10.0_20.0" is LedgerAddEntryLatency between (10ms, 20ms]
    • quantile="20.0_50.0" is LedgerAddEntryLatency between (20ms, 50ms]
    • quantile="50.0_100.0" is LedgerAddEntryLatency between (50ms, 100ms]
    • quantile="100.0_200.0" is LedgerAddEntryLatency between (100ms, 200ms]
    • quantile="200.0_1000.0" is LedgerAddEntryLatency between (200ms, 1s]
    |
-| pulsar_ml_LedgerAddEntryLatencyBuckets_OVERFLOW | Gauge | The number of times the LedgerAddEntryLatency is longer than 1 second |
-| pulsar_ml_MarkDeleteRate | Gauge | The rate of mark-delete ops/s |
-| pulsar_ml_NumberOfMessagesInBacklog | Gauge | The number of backlog messages for all the consumers |
-| pulsar_ml_ReadEntriesBytesRate | Gauge | The bytes/s rate of messages read |
-| pulsar_ml_ReadEntriesErrors | Gauge | The number of readEntries requests that failed |
-| pulsar_ml_ReadEntriesRate | Gauge | The msg/s rate of messages read |
-| pulsar_ml_ReadEntriesSucceeded | Gauge | The number of readEntries requests that succeeded |
-| pulsar_ml_StoredMessagesSize | Gauge | The total size of the messages in active ledgers (accounting for the multiple copies stored) |
-
-### Managed cursor acknowledgment state
-
-The acknowledgment state is first persisted to the ledger. When it fails to be persisted to the ledger, it is persisted to ZooKeeper. To track the acknowledgment stats, you can configure the metrics for the managed cursor.
-
-All the cursor acknowledgment state metrics are labelled with the following labels:
-
-- namespace: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.
-
-- ledger_name: `ledger_name=${pulsar_ledger_name}`. `${pulsar_ledger_name}` is the ledger name.
-
-- cursor_name: `cursor_name=${pulsar_cursor_name}`. `${pulsar_cursor_name}` is the cursor name.
-
-Name |Type |Description
-|---|---|---
-brk_ml_cursor_persistLedgerSucceed|Gauge|The number of acknowledgment states that are persisted to a ledger.|
-brk_ml_cursor_persistLedgerErrors|Gauge|The number of ledger errors that occurred when acknowledgment states failed to be persisted to the ledger.|
-brk_ml_cursor_persistZookeeperSucceed|Gauge|The number of acknowledgment states that are persisted to ZooKeeper.
-brk_ml_cursor_persistZookeeperErrors|Gauge|The number of ledger errors that occurred when acknowledgment states failed to be persisted to ZooKeeper.
-brk_ml_cursor_nonContiguousDeletedMessagesRange|Gauge|The number of non-contiguous deleted message ranges.
-brk_ml_cursor_writeLedgerSize|Gauge|The size of data written to the ledger.
-brk_ml_cursor_writeLedgerLogicalSize|Gauge|The size of data written to the ledger, excluding replicas.
-brk_ml_cursor_readLedgerSize|Gauge|The size of data read from the ledger.
-
-### LoadBalancing metrics
-All the loadbalancing metrics are labelled with the following labels:
-- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file.
-- broker: broker=${broker}. ${broker} is the IP address of the broker.
-- metric: metric="loadBalancing".
-
-| Name | Type | Description |
-| --- | --- | --- |
-| pulsar_lb_bandwidth_in_usage | Gauge | The broker inbound bandwidth usage (in percent). |
-| pulsar_lb_bandwidth_out_usage | Gauge | The broker outbound bandwidth usage (in percent). |
-| pulsar_lb_cpu_usage | Gauge | The broker CPU usage (in percent). |
-| pulsar_lb_directMemory_usage | Gauge | The broker process direct memory usage (in percent). |
-| pulsar_lb_memory_usage | Gauge | The broker process memory usage (in percent). |
-
-#### BundleUnloading metrics
-All the bundleUnloading metrics are labelled with the following labels:
-- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file.
-- metric: metric="bundleUnloading". 
-
-| Name | Type | Description |
-| --- | --- | --- |
-| pulsar_lb_unload_broker_count | Counter | The number of brokers unloaded in this bundle unloading |
-| pulsar_lb_unload_bundle_count | Counter | The number of bundles unloaded in this bundle unloading |
-
-#### BundleSplit metrics
-All the bundleSplit metrics are labelled with the following labels:
-- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file.
-- metric: metric="bundlesSplit".
-
-| Name | Type | Description |
-| --- | --- | --- |
-| pulsar_lb_bundles_split_count | Counter | The total count of bundle splits in this leader broker |
-
-#### Bundle metrics
-All the bundle metrics are labelled with the following labels:
-- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file.
-- broker: broker=${broker}. ${broker} is the IP address of the broker.
-- bundle: bundle=${bundle}. ${bundle} is the bundle range on this broker.
-- metric: metric="bundle".
-
-| Name | Type | Description |
-| --- | --- | --- |
-| pulsar_bundle_msg_rate_in | Gauge | The total message rate coming into the topics in this bundle (messages/second). |
-| pulsar_bundle_msg_rate_out | Gauge | The total message rate going out from the topics in this bundle (messages/second). |
-| pulsar_bundle_topics_count | Gauge | The topic count in this bundle. |
-| pulsar_bundle_consumer_count | Gauge | The consumer count of the topics in this bundle. |
-| pulsar_bundle_producer_count | Gauge | The producer count of the topics in this bundle. |
-| pulsar_bundle_msg_throughput_in | Gauge | The total throughput coming into the topics in this bundle (bytes/second). |
-| pulsar_bundle_msg_throughput_out | Gauge | The total throughput going out from the topics in this bundle (bytes/second). |
-
-### Subscription metrics
-
-> Subscription metrics are only exposed when `exposeTopicLevelMetricsInPrometheus` is set to `true`.
-
-All the subscription metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.
-- *topic*: `topic=${pulsar_topic}`. `${pulsar_topic}` is the topic name.
-- *subscription*: `subscription=${subscription}`. `${subscription}` is the topic subscription name.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_subscription_back_log | Gauge | The total backlog of a subscription (entries). |
-| pulsar_subscription_back_log_no_delayed | Gauge | The backlog of a subscription that does not contain delayed messages (entries). |
-| pulsar_subscription_delayed | Gauge | The total number of messages that are delayed to be dispatched for a subscription (messages). |
-| pulsar_subscription_msg_rate_redeliver | Gauge | The total message rate for messages being redelivered (messages/second). |
-| pulsar_subscription_unacked_messages | Gauge | The total number of unacknowledged messages of a subscription (messages). |
-| pulsar_subscription_blocked_on_unacked_messages | Gauge | Indicates whether a subscription is blocked on unacknowledged messages.
    • 1 means the subscription is blocked waiting for unacknowledged messages to be acked.
    • 0 means the subscription is not blocked waiting for unacknowledged messages to be acked.
    |
-| pulsar_subscription_msg_rate_out | Gauge | The total message dispatch rate for a subscription (messages/second). |
-| pulsar_subscription_msg_throughput_out | Gauge | The total message dispatch throughput for a subscription (bytes/second). |
-
-### Consumer metrics
-
-> Consumer metrics are only exposed when both `exposeTopicLevelMetricsInPrometheus` and `exposeConsumerLevelMetricsInPrometheus` are set to `true`.
-
-All the consumer metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.
-- *topic*: `topic=${pulsar_topic}`. `${pulsar_topic}` is the topic name.
-- *subscription*: `subscription=${subscription}`. `${subscription}` is the topic subscription name.
-- *consumer_name*: `consumer_name=${consumer_name}`. `${consumer_name}` is the topic consumer name.
-- *consumer_id*: `consumer_id=${consumer_id}`. `${consumer_id}` is the topic consumer id.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_consumer_msg_rate_redeliver | Gauge | The total message rate for messages being redelivered (messages/second). |
-| pulsar_consumer_unacked_messages | Gauge | The total number of unacknowledged messages of a consumer (messages). |
-| pulsar_consumer_blocked_on_unacked_messages | Gauge | Indicates whether a consumer is blocked on unacknowledged messages.
    • 1 means the consumer is blocked waiting for unacknowledged messages to be acked.
    • 0 means the consumer is not blocked waiting for unacknowledged messages to be acked.
    |
-| pulsar_consumer_msg_rate_out | Gauge | The total message dispatch rate for a consumer (messages/second). |
-| pulsar_consumer_msg_throughput_out | Gauge | The total message dispatch throughput for a consumer (bytes/second). |
-| pulsar_consumer_available_permits | Gauge | The available permits for a consumer. |
-
-### Managed ledger bookie client metrics
-
-All the managed ledger bookie client metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-
-| Name | Type | Description |
-| --- | --- | --- |
-| pulsar_managedLedger_client_bookkeeper_ml_scheduler_completed_tasks_* | Gauge | The number of tasks the scheduler executor has completed.
    The number of metrics is determined by the number of scheduler executor threads, configured by `managedLedgerNumSchedulerThreads` in `broker.conf`.
    |
-| pulsar_managedLedger_client_bookkeeper_ml_scheduler_queue_* | Gauge | The number of tasks queued in the scheduler executor's queue.
    The number of metrics is determined by the number of scheduler executor threads, configured by `managedLedgerNumSchedulerThreads` in `broker.conf`.
    |
-| pulsar_managedLedger_client_bookkeeper_ml_scheduler_total_tasks_* | Gauge | The total number of tasks the scheduler executor received.
    The number of metrics is determined by the number of scheduler executor threads, configured by `managedLedgerNumSchedulerThreads` in `broker.conf`.
    |
-| pulsar_managedLedger_client_bookkeeper_ml_scheduler_task_execution | Summary | The scheduler task execution latency calculated in milliseconds. |
-| pulsar_managedLedger_client_bookkeeper_ml_scheduler_task_queued | Summary | The scheduler task queued latency calculated in milliseconds. |
-
-### Token metrics
-
-All the token metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_expired_token_count | Counter | The number of expired tokens in Pulsar. |
-| pulsar_expiring_token_minutes | Histogram | The remaining time of expiring tokens in minutes. |
-
-### Authentication metrics
-
-All the authentication metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-- *provider_name*: `provider_name=${provider_name}`. `${provider_name}` is the class name of the authentication provider.
-- *auth_method*: `auth_method=${auth_method}`. `${auth_method}` is the authentication method of the authentication provider.
-- *reason*: `reason=${reason}`. `${reason}` is the reason for the failed authentication operation. (This label is only for `pulsar_authentication_failures_count`.)
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_authentication_success_count| Counter | The number of successful authentication operations. |
-| pulsar_authentication_failures_count | Counter | The number of failed authentication operations. |
-
-### Connection metrics
-
-All the connection metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-- *broker*: `broker=${advertised_address}`. `${advertised_address}` is the advertised address of the broker.
-- *metric*: `metric=${metric}`. `${metric}` is the connection metric collective name.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_active_connections| Gauge | The number of active connections. |
-| pulsar_connection_created_total_count | Gauge | The total number of created connections. |
-| pulsar_connection_create_success_count | Gauge | The number of successfully created connections. |
-| pulsar_connection_create_fail_count | Gauge | The number of failed connections. |
-| pulsar_connection_closed_total_count | Gauge | The total number of closed connections. |
-| pulsar_broker_throttled_connections | Gauge | The number of throttled connections. |
-| pulsar_broker_throttled_connections_global_limit | Gauge | The number of throttled connections because of the global connection limit. |
-
-### Jetty metrics
-
-> For a functions-worker running separately from brokers, its Jetty metrics are only exposed when `includeStandardPrometheusMetrics` is set to `true`.
-
-All the Jetty metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-
-| Name | Type | Description |
-|---|---|---|
-| jetty_requests_total | Counter | Number of requests. |
-| jetty_requests_active | Gauge | Number of requests currently active. |
-| jetty_requests_active_max | Gauge | Maximum number of requests that have been active at once. |
-| jetty_request_time_max_seconds | Gauge | Maximum time spent handling requests. 
|
-| jetty_request_time_seconds_total | Counter | Total time spent in all request handling. |
-| jetty_dispatched_total | Counter | Number of dispatches. |
-| jetty_dispatched_active | Gauge | Number of dispatches currently active. |
-| jetty_dispatched_active_max | Gauge | Maximum number of active dispatches being handled. |
-| jetty_dispatched_time_max | Gauge | Maximum time spent in dispatch handling. |
-| jetty_dispatched_time_seconds_total | Counter | Total time spent in dispatch handling. |
-| jetty_async_requests_total | Counter | Total number of async requests. |
-| jetty_async_requests_waiting | Gauge | Currently waiting async requests. |
-| jetty_async_requests_waiting_max | Gauge | Maximum number of waiting async requests. |
-| jetty_async_dispatches_total | Counter | Number of requests that have been asynchronously dispatched. |
-| jetty_expires_total | Counter | Number of async requests that have expired. |
-| jetty_responses_total | Counter | Number of responses, labeled by status code. The `code` label can be "1xx", "2xx", "3xx", "4xx", or "5xx". |
-| jetty_stats_seconds | Gauge | Time in seconds stats have been collected for. |
-| jetty_responses_bytes_total | Counter | Total number of bytes across all responses. |
-
-## Pulsar Functions
-
-All the Pulsar Functions metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_function_processed_successfully_total | Counter | The total number of messages processed successfully. |
-| pulsar_function_processed_successfully_total_1min | Counter | The total number of messages processed successfully in the last 1 minute. |
-| pulsar_function_system_exceptions_total | Counter | The total number of system exceptions. |
-| pulsar_function_system_exceptions_total_1min | Counter | The total number of system exceptions in the last 1 minute. |
-| pulsar_function_user_exceptions_total | Counter | The total number of user exceptions. |
-| pulsar_function_user_exceptions_total_1min | Counter | The total number of user exceptions in the last 1 minute. |
-| pulsar_function_process_latency_ms | Summary | The process latency in milliseconds. |
-| pulsar_function_process_latency_ms_1min | Summary | The process latency in milliseconds in the last 1 minute. |
-| pulsar_function_last_invocation | Gauge | The timestamp of the last invocation of the function. |
-| pulsar_function_received_total | Counter | The total number of messages received from source. |
-| pulsar_function_received_total_1min | Counter | The total number of messages received from source in the last 1 minute. |
-pulsar_function_user_metric_ | Summary|The user-defined metrics.
-
-## Connectors
-
-All the Pulsar connector metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.
-
-Connector metrics contain **source** metrics and **sink** metrics.
-
-- **Source** metrics
-
-  | Name | Type | Description |
-  |---|---|---|
-  pulsar_source_written_total|Counter|The total number of records written to a Pulsar topic. 
-  pulsar_source_written_total_1min|Counter|The total number of records written to a Pulsar topic in the last 1 minute.
-  pulsar_source_received_total|Counter|The total number of records received from source.
-  pulsar_source_received_total_1min|Counter|The total number of records received from source in the last 1 minute.
-  pulsar_source_last_invocation|Gauge|The timestamp of the last invocation of the source.
-  pulsar_source_source_exception|Gauge|The exception from a source.
-  pulsar_source_source_exceptions_total|Counter|The total number of source exceptions.
-  pulsar_source_source_exceptions_total_1min |Counter|The total number of source exceptions in the last 1 minute.
-  pulsar_source_system_exception|Gauge|The exception from system code.
-  pulsar_source_system_exceptions_total|Counter|The total number of system exceptions.
-  pulsar_source_system_exceptions_total_1min|Counter|The total number of system exceptions in the last 1 minute.
-  pulsar_source_user_metric_ | Summary|The user-defined metrics.
-
-- **Sink** metrics
-
-  | Name | Type | Description |
-  |---|---|---|
-  pulsar_sink_written_total|Counter| The total number of records processed by a sink.
-  pulsar_sink_written_total_1min|Counter| The total number of records processed by a sink in the last 1 minute.
-  pulsar_sink_received_total_1min|Counter| The total number of messages that a sink has received from Pulsar topics in the last 1 minute.
-  pulsar_sink_received_total|Counter| The total number of records that a sink has received from Pulsar topics.
-  pulsar_sink_last_invocation|Gauge|The timestamp of the last invocation of the sink.
-  pulsar_sink_sink_exception|Gauge|The exception from a sink.
-  pulsar_sink_sink_exceptions_total|Counter|The total number of sink exceptions.
-  pulsar_sink_sink_exceptions_total_1min |Counter|The total number of sink exceptions in the last 1 minute.
-  pulsar_sink_system_exception|Gauge|The exception from system code.
-  pulsar_sink_system_exceptions_total|Counter|The total number of system exceptions.
-  pulsar_sink_system_exceptions_total_1min|Counter|The total number of system exceptions in the last 1 minute.
-  pulsar_sink_user_metric_ | Summary|The user-defined metrics.
-
-## Proxy
-
-All the proxy metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-- *kubernetes_pod_name*: `kubernetes_pod_name=${kubernetes_pod_name}`. `${kubernetes_pod_name}` is the Kubernetes pod name.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_proxy_active_connections | Gauge | Number of connections currently active in the proxy. |
-| pulsar_proxy_new_connections | Counter | Counter of connections being opened in the proxy. |
-| pulsar_proxy_rejected_connections | Counter | Counter for connections rejected due to throttling. |
-| pulsar_proxy_binary_ops | Counter | Counter of proxy operations. |
-| pulsar_proxy_binary_bytes | Counter | Counter of proxy bytes. |
-
-## Pulsar SQL Worker
-
-| Name | Type | Description |
-|---|---|---|
-| split_bytes_read | Counter | Number of bytes read from BookKeeper. |
-| split_num_messages_deserialized | Counter | Number of messages deserialized. |
-| split_num_record_deserialized | Counter | Number of records deserialized. |
-| split_bytes_read_per_query | Summary | Total number of bytes read per query. |
-| split_entry_deserialize_time | Summary | Time spent on deserializing entries. 
|
-| split_entry_deserialize_time_per_query | Summary | Time spent on deserializing entries per query. |
-| split_entry_queue_dequeue_wait_time | Summary | Time spent waiting to get an entry from the entry queue because it is empty. |
-| split_entry_queue_dequeue_wait_time_per_query | Summary | Total time spent waiting to get an entry from the entry queue per query. |
-| split_message_queue_dequeue_wait_time_per_query | Summary | Time spent waiting to dequeue from the message queue because it is empty, per query. |
-| split_message_queue_enqueue_wait_time | Summary | Time spent waiting to enqueue to the message queue because it is full. |
-| split_message_queue_enqueue_wait_time_per_query | Summary | Time spent waiting to enqueue to the message queue because it is full, per query. |
-| split_num_entries_per_batch | Summary | Number of entries per batch. |
-| split_num_entries_per_query | Summary | Number of entries per query. |
-| split_num_messages_deserialized_per_entry | Summary | Number of messages deserialized per entry. |
-| split_num_messages_deserialized_per_query | Summary | Number of messages deserialized per query. |
-| split_read_attempts | Summary | Number of read attempts (fail if queues are full). |
-| split_read_attempts_per_query | Summary | Number of read attempts per query. |
-| split_read_latency_per_batch | Summary | Latency of reads per batch. |
-| split_read_latency_per_query | Summary | Total read latency per query. |
-| split_record_deserialize_time | Summary | Time spent on deserializing message to record. For example, Avro, JSON, and so on. |
-| split_record_deserialize_time_per_query | Summary | Time spent on deserializing message to record per query. |
-| split_total_execution_time | Summary | The total execution time. |
-
-## Pulsar transaction
-
-All the transaction metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-- *coordinator_id*: `coordinator_id=${coordinator_id}`. `${coordinator_id}` is the coordinator id.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_txn_active_count | Gauge | Number of active transactions. |
-| pulsar_txn_created_count | Counter | Number of created transactions. |
-| pulsar_txn_committed_count | Counter | Number of committed transactions. |
-| pulsar_txn_aborted_count | Counter | Number of aborted transactions of this coordinator. |
-| pulsar_txn_timeout_count | Counter | Number of timeout transactions. |
-| pulsar_txn_append_log_count | Counter | Number of appended transaction logs. |
-| pulsar_txn_execution_latency_le_* | Histogram | Transaction execution latency.
    Available latencies are as below:
    • latency="10" is TransactionExecutionLatency between (0ms, 10ms]
    • latency="20" is TransactionExecutionLatency between (10ms, 20ms]
    • latency="50" is TransactionExecutionLatency between (20ms, 50ms]
    • latency="100" is TransactionExecutionLatency between (50ms, 100ms]
    • latency="500" is TransactionExecutionLatency between (100ms, 500ms]
    • latency="1000" is TransactionExecutionLatency between (500ms, 1000ms]
    • latency="5000" is TransactionExecutionLatency between (1s, 5s]
    • latency="15000" is TransactionExecutionLatency between (5s, 15s]
    • latency="30000" is TransactionExecutionLatency between (15s, 30s]
    • latency="60000" is TransactionExecutionLatency between (30s, 60s]
    • latency="300000" is TransactionExecutionLatency between (1m,5m]
    • latency="1500000" is TransactionExecutionLatency between (5m,15m]
    • latency="3000000" is TransactionExecutionLatency between (15m,30m]
    • latency="overflow" is TransactionExecutionLatency between (30m,∞]
    | diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/reference-pulsar-admin.md b/site2/website/versioned_docs/version-2.10.0-deprecated/reference-pulsar-admin.md deleted file mode 100644 index 5ec74a86e432b7..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/reference-pulsar-admin.md +++ /dev/null @@ -1,3394 +0,0 @@ ---- -id: pulsar-admin -title: Pulsar admin CLI -sidebar_label: "Pulsar Admin CLI" -original_id: pulsar-admin ---- - -> **Important** -> -> This page is deprecated and not updated anymore. For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](/tools/pulsar-admin/) - -The `pulsar-admin` tool enables you to manage Pulsar installations, including clusters, brokers, namespaces, tenants, and more. - -Usage - -```bash - -$ pulsar-admin command - -``` - -Commands -* `broker-stats` -* `brokers` -* `clusters` -* `functions` -* `functions-worker` -* `namespaces` -* `ns-isolation-policy` -* `sources` - - For more information, see [here](io-cli.md#sources) -* `sinks` - - For more information, see [here](io-cli.md#sinks) -* `topics` -* `tenants` -* `resource-quotas` -* `schemas` - -## `broker-stats` - -Operations to collect broker statistics - -```bash - -$ pulsar-admin broker-stats subcommand - -``` - -Subcommands -* `allocator-stats` -* `topics(destinations)` -* `mbeans` -* `monitoring-metrics` -* `load-report` - - -### `allocator-stats` - -Dump allocator stats - -Usage - -```bash - -$ pulsar-admin broker-stats allocator-stats allocator-name - -``` - -### `topics(destinations)` - -Dump topic stats - -Usage - -```bash - -$ pulsar-admin broker-stats topics options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-i`, `--indent`|Indent JSON output|false| - -### `mbeans` - -Dump Mbean stats - -Usage - -```bash - -$ pulsar-admin broker-stats mbeans options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-i`, `--indent`|Indent JSON output|false| - - -### `monitoring-metrics` - -Dump metrics for monitoring - -Usage - -```bash - -$ pulsar-admin broker-stats monitoring-metrics options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-i`, `--indent`|Indent JSON output|false| - - -### `load-report` - -Dump broker load-report - -Usage - -```bash - -$ pulsar-admin broker-stats load-report - -``` - -## `brokers` - -Operations about brokers - -```bash - -$ pulsar-admin brokers subcommand - -``` - -Subcommands -* `list` -* `namespaces` -* `update-dynamic-config` -* `list-dynamic-config` -* `get-all-dynamic-config` -* `get-internal-config` -* `get-runtime-config` -* `healthcheck` - -### `list` -List active brokers of the cluster - -Usage - -```bash - -$ pulsar-admin brokers list cluster-name - -``` - -### `leader-broker` -Get the information of the leader broker - -Usage - -```bash - -$ pulsar-admin brokers leader-broker - -``` - -### `namespaces` -List namespaces owned by the broker - -Usage - -```bash - -$ pulsar-admin brokers namespaces cluster-name options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--url`|The URL for the broker|| - - -### `update-dynamic-config` -Update a broker's dynamic service configuration - -Usage - -```bash - -$ pulsar-admin brokers update-dynamic-config options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--config`|Service configuration parameter name|| -|`--value`|Value for the configuration parameter value specified using the `--config` flag|| - - -### 
`list-dynamic-config` -Get list of updatable configuration name - -Usage - -```bash - -$ pulsar-admin brokers list-dynamic-config - -``` - -### `delete-dynamic-config` -Delete dynamic-serviceConfiguration of broker - -Usage - -```bash - -$ pulsar-admin brokers delete-dynamic-config options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--config`|Service configuration parameter name|| - - -### `get-all-dynamic-config` -Get all overridden dynamic-configuration values - -Usage - -```bash - -$ pulsar-admin brokers get-all-dynamic-config - -``` - -### `get-internal-config` -Get internal configuration information - -Usage - -```bash - -$ pulsar-admin brokers get-internal-config - -``` - -### `get-runtime-config` -Get runtime configuration values - -Usage - -```bash - -$ pulsar-admin brokers get-runtime-config - -``` - -### `healthcheck` -Run a health check against the broker - -Usage - -```bash - -$ pulsar-admin brokers healthcheck - -``` - -## `clusters` -Operations about clusters - -Usage - -```bash - -$ pulsar-admin clusters subcommand - -``` - -Subcommands -* `get` -* `create` -* `update` -* `delete` -* `list` -* `update-peer-clusters` -* `get-peer-clusters` -* `get-failure-domain` -* `create-failure-domain` -* `update-failure-domain` -* `delete-failure-domain` -* `list-failure-domains` - - -### `get` -Get the configuration data for the specified cluster - -Usage - -```bash - -$ pulsar-admin clusters get cluster-name - -``` - -### `create` -Provisions a new cluster. This operation requires Pulsar super-user privileges. - -Usage - -```bash - -$ pulsar-admin clusters create cluster-name options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--broker-url`|The URL for the broker service.|| -|`--broker-url-secure`|The broker service URL for a secure connection|| -|`--url`|service-url|| -|`--url-secure`|service-url for secure connection|| - - -### `update` -Update the configuration for a cluster - -Usage - -```bash - -$ pulsar-admin clusters update cluster-name options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--broker-url`|The URL for the broker service.|| -|`--broker-url-secure`|The broker service URL for a secure connection|| -|`--url`|service-url|| -|`--url-secure`|service-url for secure connection|| - - -### `delete` -Deletes an existing cluster - -Usage - -```bash - -$ pulsar-admin clusters delete cluster-name - -``` - -### `list` -List the existing clusters - -Usage - -```bash - -$ pulsar-admin clusters list - -``` - -### `update-peer-clusters` -Update peer cluster names - -Usage - -```bash - -$ pulsar-admin clusters update-peer-clusters cluster-name options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--peer-clusters`|Comma separated peer cluster names (Pass empty string "" to delete list)|| - -### `get-peer-clusters` -Get list of peer clusters - -Usage - -```bash - -$ pulsar-admin clusters get-peer-clusters - -``` - -### `get-failure-domain` -Get the configuration brokers of a failure domain - -Usage - -```bash - -$ pulsar-admin clusters get-failure-domain cluster-name options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster|| - -### `create-failure-domain` -Create a new failure domain for a cluster (updates it if already created) - -Usage - -```bash - -$ pulsar-admin clusters create-failure-domain cluster-name options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| 
-### `create-failure-domain` -Create a new failure domain for a cluster (updates it if already created) - -Usage - -```bash - -$ pulsar-admin clusters create-failure-domain cluster-name options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--broker-list`|Comma-separated broker list|| -|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster|| - -### `update-failure-domain` -Update failure domain for a cluster (creates a new one if it does not exist) - -Usage - -```bash - -$ pulsar-admin clusters update-failure-domain cluster-name options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--broker-list`|Comma-separated broker list|| -|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster|| - -### `delete-failure-domain` -Delete an existing failure domain - -Usage - -```bash - -$ pulsar-admin clusters delete-failure-domain cluster-name options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster|| - -### `list-failure-domains` -List the existing failure domains for a cluster - -Usage - -```bash - -$ pulsar-admin clusters list-failure-domains cluster-name - -``` - -## `functions` - -A command-line interface for Pulsar Functions - -Usage - -```bash - -$ pulsar-admin functions subcommand - -``` - -Subcommands -* `localrun` -* `create` -* `delete` -* `update` -* `get` -* `restart` -* `stop` -* `start` -* `status` -* `stats` -* `list` -* `querystate` -* `putstate` -* `trigger` - - -### `localrun` -Run the Pulsar Function locally (rather than deploying it to the Pulsar cluster) - - -Usage - -```bash - -$ pulsar-admin functions localrun options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--cpu`|The CPU in cores that needs to be allocated per function instance (applicable only to the docker runtime)|| -|`--ram`|The RAM in bytes that needs to be allocated per function instance (applicable only to the process/docker runtime)|| -|`--disk`|The disk in bytes that needs to be allocated per function instance (applicable only to the docker runtime)|| -|`--auto-ack`|Whether or not the framework will automatically acknowledge messages|| -|`--subs-name`|The Pulsar source subscription name, if the user wants a specific subscription name for the input-topic consumer|| -|`--broker-service-url`|The URL of the Pulsar broker|| -|`--classname`|The function's class name|| -|`--custom-serde-inputs`|The map of input topics to SerDe class names (as a JSON string)|| -|`--custom-schema-inputs`|The map of input topics to Schema class names (as a JSON string)|| -|`--client-auth-params`|Client authentication param|| -|`--client-auth-plugin`|The client authentication plugin that the function process uses to connect to the broker|| -|`--function-config-file`|The path to a YAML config file specifying the function's configuration|| -|`--hostname-verification-enabled`|Enable hostname verification|false| -|`--instance-id-offset`|Start the instanceIds from this offset|0| -|`--inputs`|The function's input topic or topics (multiple topics can be specified as a comma-separated list)|| -|`--log-topic`|The topic to which the function's logs are produced|| -|`--jar`|Path to the jar file for the function (if the function is written in Java). 
It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--output`|The function's output topic (If none is specified, no output is written)|| -|`--output-serde-classname`|The SerDe class to be used for messages output by the function|| -|`--parallelism`|The function's parallelism factor, i.e. the number of instances of the function to run|1| -|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the function. Possible Values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]|ATLEAST_ONCE| -|`--py`|Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--go`|Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--schema-type`|The builtin schema type or custom schema class name to be used for messages output by the function|| -|`--sliding-interval-count`|The number of messages after which the window slides|| -|`--sliding-interval-duration-ms`|The time duration after which the window slides|| -|`--state-storage-service-url`|The URL for the state storage service. By default, it is set to the service URL of Apache BookKeeper. This service URL must be added manually when the Pulsar Function runs locally.|| -|`--tenant`|The function's tenant|| -|`--topics-pattern`|The topic pattern to consume from a list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (supported for Java functions only)|| -|`--user-config`|User-defined config key/values|| -|`--window-length-count`|The number of messages per window|| -|`--window-length-duration-ms`|The time duration of the window in milliseconds|| -|`--dead-letter-topic`|The topic where all messages which could not be processed successfully are sent|| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--max-message-retries`|How many times should we try to process a message before giving up|| -|`--retain-ordering`|Function consumes and processes messages in order|| -|`--retain-key-ordering`|Function consumes and processes messages in key order|| -|`--timeout-ms`|The message timeout in milliseconds|| -|`--tls-allow-insecure`|Allow insecure tls connection|false| -|`--tls-trust-cert-path`|The tls trust cert file path|| -|`--use-tls`|Use tls connection|false| -|`--producer-config`| The custom producer configuration (as a JSON string) | | - -
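-Example: a minimal localrun sketch (the jar, class, and topic names below are hypothetical): - -```bash - -$ pulsar-admin functions localrun \ ---jar my-function.jar \ ---classname org.example.MyFunction \ ---inputs persistent://public/default/in \ ---output persistent://public/default/out - -``` - -### `create` -Create a Pulsar Function in cluster mode (i.e. 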
deploy it on a Pulsar cluster) - -Usage - -```bash - -$ pulsar-admin functions create options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--cpu`|The CPU in cores that needs to be allocated per function instance (applicable only to the docker runtime)|| -|`--ram`|The RAM in bytes that needs to be allocated per function instance (applicable only to the process/docker runtime)|| -|`--disk`|The disk in bytes that needs to be allocated per function instance (applicable only to the docker runtime)|| -|`--auto-ack`|Whether or not the framework will automatically acknowledge messages|| -|`--subs-name`|The Pulsar source subscription name, if the user wants a specific subscription name for the input-topic consumer|| -|`--classname`|The function's class name|| -|`--custom-serde-inputs`|The map of input topics to SerDe class names (as a JSON string)|| -|`--custom-schema-inputs`|The map of input topics to Schema class names (as a JSON string)|| -|`--function-config-file`|The path to a YAML config file specifying the function's configuration|| -|`--inputs`|The function's input topic or topics (multiple topics can be specified as a comma-separated list)|| -|`--log-topic`|The topic to which the function's logs are produced|| -|`--jar`|Path to the jar file for the function (if the function is written in Java). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--output`|The function's output topic (If none is specified, no output is written)|| -|`--output-serde-classname`|The SerDe class to be used for messages output by the function|| -|`--parallelism`|The function's parallelism factor, i.e. the number of instances of the function to run|1| -|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the function. Possible Values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]|ATLEAST_ONCE| -|`--py`|Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--go`|Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--schema-type`|The builtin schema type or custom schema class name to be used for messages output by the function|| -|`--sliding-interval-count`|The number of messages after which the window slides|| -|`--sliding-interval-duration-ms`|The time duration after which the window slides|| -|`--tenant`|The function's tenant|| -|`--topics-pattern`|The topic pattern to consume from a list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. 
Add SerDe class name for a pattern in --custom-serde-inputs (supported for Java functions only)|| -|`--user-config`|User-defined config key/values|| -|`--window-length-count`|The number of messages per window|| -|`--window-length-duration-ms`|The time duration of the window in milliseconds|| -|`--dead-letter-topic`|The topic where all messages which could not be processed successfully are sent|| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--max-message-retries`|How many times should we try to process a message before giving up|| -|`--retain-ordering`|Function consumes and processes messages in order|| -|`--retain-key-ordering`|Function consumes and processes messages in key order|| -|`--timeout-ms`|The message timeout in milliseconds|| -|`--producer-config`| The custom producer configuration (as a JSON string) | | - - -### `delete` -Delete a Pulsar Function that's running on a Pulsar cluster - -Usage - -```bash - -$ pulsar-admin functions delete options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `update` -Update a Pulsar Function that's been deployed to a Pulsar cluster - -Usage - -```bash - -$ pulsar-admin functions update options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--cpu`|The CPU in cores that needs to be allocated per function instance (applicable only to the docker runtime)|| -|`--ram`|The RAM in bytes that needs to be allocated per function instance (applicable only to the process/docker runtime)|| -|`--disk`|The disk in bytes that needs to be allocated per function instance (applicable only to the docker runtime)|| -|`--auto-ack`|Whether or not the framework will automatically acknowledge messages|| -|`--subs-name`|The Pulsar source subscription name, if the user wants a specific subscription name for the input-topic consumer|| -|`--classname`|The function's class name|| -|`--custom-serde-inputs`|The map of input topics to SerDe class names (as a JSON string)|| -|`--custom-schema-inputs`|The map of input topics to Schema class names (as a JSON string)|| -|`--function-config-file`|The path to a YAML config file specifying the function's configuration|| -|`--inputs`|The function's input topic or topics (multiple topics can be specified as a comma-separated list)|| -|`--log-topic`|The topic to which the function's logs are produced|| -|`--jar`|Path to the jar file for the function (if the function is written in Java). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--output`|The function's output topic (If none is specified, no output is written)|| -|`--output-serde-classname`|The SerDe class to be used for messages output by the function|| -|`--parallelism`|The function's parallelism factor, i.e. the number of instances of the function to run|1| -|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the function. Possible Values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]|ATLEAST_ONCE| -|`--py`|Path to the main Python file/Python Wheel file for the function (if the function is written in Python). 
It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--go`|Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--schema-type`|The builtin schema type or custom schema class name to be used for messages output by the function|| -|`--sliding-interval-count`|The number of messages after which the window slides|| -|`--sliding-interval-duration-ms`|The time duration after which the window slides|| -|`--tenant`|The function's tenant|| -|`--topics-pattern`|The topic pattern to consume from a list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (supported for Java functions only)|| -|`--user-config`|User-defined config key/values|| -|`--window-length-count`|The number of messages per window|| -|`--window-length-duration-ms`|The time duration of the window in milliseconds|| -|`--dead-letter-topic`|The topic where all messages which could not be processed successfully are sent|| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--max-message-retries`|How many times should we try to process a message before giving up|| -|`--retain-ordering`|Function consumes and processes messages in order|| -|`--retain-key-ordering`|Function consumes and processes messages in key order|| -|`--timeout-ms`|The message timeout in milliseconds|| -|`--producer-config`| The custom producer configuration (as a JSON string) | | - - -### `get` -Fetch information about a Pulsar Function - -Usage - -```bash - -$ pulsar-admin functions get options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `restart` -Restart a function instance - -Usage - -```bash - -$ pulsar-admin functions restart options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--instance-id`|The function instanceId (restart all instances if instance-id is not provided)|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `stop` -Stop a function instance - -Usage - -```bash - -$ pulsar-admin functions stop options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--instance-id`|The function instanceId (stop all instances if instance-id is not provided)|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `start` -Start a stopped function instance - -Usage - -```bash - -$ pulsar-admin functions start options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--instance-id`|The function instanceId (start all instances if instance-id is not provided)|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The 
function's tenant|| - - -### `status` -Check the current status of a Pulsar Function - -Usage - -```bash - -$ pulsar-admin functions status options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--instance-id`|The function instanceId (Get-status of all instances if instance-id is not provided)|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `stats` -Get the current stats of a Pulsar Function - -Usage - -```bash - -$ pulsar-admin functions stats options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--instance-id`|The function instanceId (Get-stats of all instances if instance-id is not provided)|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - -### `list` -List all of the Pulsar Functions running under a specific tenant and namespace - -Usage - -```bash - -$ pulsar-admin functions list options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `querystate` -Fetch the current state associated with a Pulsar Function running in cluster mode - -Usage - -```bash - -$ pulsar-admin functions querystate options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`-k`, `--key`|The key for the state you want to fetch|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| -|`-w`, `--watch`|Watch for changes in the value associated with a key for a Pulsar Function|false| - -### `putstate` -Put a key/value pair to the state associated with a Pulsar Function - -Usage - -```bash - -$ pulsar-admin functions putstate options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the Pulsar Function|| -|`--name`|The name of a Pulsar Function|| -|`--namespace`|The namespace of a Pulsar Function|| -|`--tenant`|The tenant of a Pulsar Function|| -|`-s`, `--state`|The FunctionState that needs to be put|| - -### `trigger` -Triggers the specified Pulsar Function with a supplied value - -Usage - -```bash - -$ pulsar-admin functions trigger options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| -|`--topic`|The specific topic name that the function consumes from that you want to inject the data to|| -|`--trigger-file`|The path to the file that contains the data with which you'd like to trigger the function|| -|`--trigger-value`|The value with which you want to trigger the function|| - - -## `functions-worker` -Operations to collect function-worker statistics - -```bash - -$ pulsar-admin functions-worker subcommand - -``` - -Subcommands - -* `function-stats` -* `get-cluster` -* `get-cluster-leader` -* `get-function-assignments` -* `monitoring-metrics` - -### `function-stats` - -Dump all functions stats running on this broker - -Usage - -```bash - -$ pulsar-admin functions-worker function-stats - -``` - -### `get-cluster` - -Get all workers belonging to this cluster - -Usage - -```bash - -$ 
pulsar-admin functions-worker get-cluster - -``` - -### `get-cluster-leader` - -Get the leader of the worker cluster - -Usage - -```bash - -$ pulsar-admin functions-worker get-cluster-leader - -``` - -### `get-function-assignments` - -Get the assignments of the functions across the worker cluster - -Usage - -```bash - -$ pulsar-admin functions-worker get-function-assignments - -``` - -### `monitoring-metrics` - -Dump metrics for monitoring - -Usage - -```bash - -$ pulsar-admin functions-worker monitoring-metrics - -``` - -## `namespaces` - -Operations for managing namespaces - -```bash - -$ pulsar-admin namespaces subcommand - -``` - -Subcommands -* `list` -* `topics` -* `policies` -* `create` -* `delete` -* `set-deduplication` -* `set-auto-topic-creation` -* `remove-auto-topic-creation` -* `set-auto-subscription-creation` -* `remove-auto-subscription-creation` -* `permissions` -* `grant-permission` -* `revoke-permission` -* `grant-subscription-permission` -* `revoke-subscription-permission` -* `set-clusters` -* `get-clusters` -* `get-backlog-quotas` -* `set-backlog-quota` -* `remove-backlog-quota` -* `get-persistence` -* `set-persistence` -* `get-message-ttl` -* `set-message-ttl` -* `remove-message-ttl` -* `get-anti-affinity-group` -* `set-anti-affinity-group` -* `get-anti-affinity-namespaces` -* `delete-anti-affinity-group` -* `get-retention` -* `set-retention` -* `unload` -* `split-bundle` -* `set-dispatch-rate` -* `get-dispatch-rate` -* `set-replicator-dispatch-rate` -* `get-replicator-dispatch-rate` -* `set-subscribe-rate` -* `get-subscribe-rate` -* `set-subscription-dispatch-rate` -* `get-subscription-dispatch-rate` -* `clear-backlog` -* `unsubscribe` -* `set-encryption-required` -* `set-delayed-delivery` -* `get-delayed-delivery` -* `set-subscription-auth-mode` -* `get-max-producers-per-topic` -* `set-max-producers-per-topic` -* `get-max-consumers-per-topic` -* `set-max-consumers-per-topic` -* `get-max-consumers-per-subscription` -* `set-max-consumers-per-subscription` -* `get-max-unacked-messages-per-subscription` -* `set-max-unacked-messages-per-subscription` -* `get-max-unacked-messages-per-consumer` -* `set-max-unacked-messages-per-consumer` -* `get-compaction-threshold` -* `set-compaction-threshold` -* `get-offload-threshold` -* `set-offload-threshold` -* `get-offload-deletion-lag` -* `set-offload-deletion-lag` -* `clear-offload-deletion-lag` -* `get-schema-autoupdate-strategy` -* `set-schema-autoupdate-strategy` -* `set-offload-policies` -* `get-offload-policies` -* `set-max-subscriptions-per-topic` -* `get-max-subscriptions-per-topic` -* `remove-max-subscriptions-per-topic` - - -### `list` -Get the namespaces for a tenant - -Usage - -```bash - -$ pulsar-admin namespaces list tenant-name - -``` - -### `topics` -Get the list of topics for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces topics tenant/namespace - -``` - -### `policies` -Get the configuration policies of a namespace - -Usage - -```bash - -$ pulsar-admin namespaces policies tenant/namespace - -``` - -### `create` -Create a new namespace - -Usage - -```bash - -$ pulsar-admin namespaces create tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-b`, `--bundles`|The number of bundles to activate|0| -|`-c`, `--clusters`|List of clusters this namespace will be assigned to|| - -
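-Example (the tenant, namespace, and cluster names below are hypothetical): - -```bash - -$ pulsar-admin namespaces create my-tenant/my-ns --clusters cluster-1 --bundles 4 - -``` -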
-### `delete` -Deletes a namespace. The namespace needs to be empty. - -Usage - -```bash - -$ pulsar-admin namespaces delete tenant/namespace - -``` - -### `set-deduplication` -Enable or disable message deduplication on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-deduplication tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--enable`, `-e`|Enable message deduplication on the specified namespace|false| -|`--disable`, `-d`|Disable message deduplication on the specified namespace|false| - -### `set-auto-topic-creation` -Enable or disable autoTopicCreation for a namespace, overriding broker settings - -Usage - -```bash - -$ pulsar-admin namespaces set-auto-topic-creation tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--enable`, `-e`|Enable allowAutoTopicCreation on namespace|false| -|`--disable`, `-d`|Disable allowAutoTopicCreation on namespace|false| -|`--type`, `-t`|Type of topic to be auto-created. Possible values: (partitioned, non-partitioned)|non-partitioned| -|`--num-partitions`, `-n`|Default number of partitions of topic to be auto-created, applicable to partitioned topics only|| - -### `remove-auto-topic-creation` -Remove override of autoTopicCreation for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces remove-auto-topic-creation tenant/namespace - -``` - -### `set-auto-subscription-creation` -Enable autoSubscriptionCreation for a namespace, overriding broker settings - -Usage - -```bash - -$ pulsar-admin namespaces set-auto-subscription-creation tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--enable`, `-e`|Enable allowAutoSubscriptionCreation on namespace|false| - -### `remove-auto-subscription-creation` -Remove override of autoSubscriptionCreation for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces remove-auto-subscription-creation tenant/namespace - -``` - -### `permissions` -Get the permissions on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces permissions tenant/namespace - -``` - -### `grant-permission` -Grant permissions on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces grant-permission tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--actions`|Actions to be granted (`produce` or `consume`)|| -|`--role`|The client role to which to grant the permissions|| - - -### `revoke-permission` -Revoke permissions on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces revoke-permission tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--role`|The client role to which to revoke the permissions|| - -### `grant-subscription-permission` -Grant permissions to access subscription admin-api - -Usage - -```bash - -$ pulsar-admin namespaces grant-subscription-permission tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--roles`|The client roles to which to grant the permissions (comma-separated roles)|| -|`--subscription`|The subscription name for which permission will be granted to roles|| - -### `revoke-subscription-permission` -Revoke permissions to access subscription admin-api - -Usage - -```bash - -$ pulsar-admin namespaces revoke-subscription-permission tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--role`|The client role to which to revoke the permissions|| -|`--subscription`|The subscription name for which permission will be revoked from roles|| -
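-Example (the tenant, namespace, subscription, and role names below are hypothetical): - -```bash - -$ pulsar-admin namespaces grant-subscription-permission my-tenant/my-ns \ ---subscription my-sub \ ---roles role-a,role-b - -``` -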
-### `set-clusters` -Set replication clusters for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-clusters tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-c`, `--clusters`|Replication clusters ID list (comma-separated values)|| - - -### `get-clusters` -Get replication clusters for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-clusters tenant/namespace - -``` - -### `get-backlog-quotas` -Get the backlog quota policies for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-backlog-quotas tenant/namespace - -``` - -### `set-backlog-quota` -Set a backlog quota policy for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-backlog-quota tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-l`, `--limit`|The backlog size limit (for example `10M` or `16G`)|| -|`-lt`, `--limitTime`|Time limit in seconds; a non-positive number disables the time limit (for example, 3600 for 1 hour)|| -|`-p`, `--policy`|The retention policy to enforce when the limit is reached. The valid options are: `producer_request_hold`, `producer_exception` or `consumer_backlog_eviction`|| -|`-t`, `--type`|Backlog quota type to set. The valid options are: `destination_storage`, `message_age` |destination_storage| - -Example - -```bash - -$ pulsar-admin namespaces set-backlog-quota my-tenant/my-ns \ ---limit 2G \ ---policy producer_request_hold - -``` - -```bash - -$ pulsar-admin namespaces set-backlog-quota my-tenant/my-ns \ ---limitTime 3600 \ ---policy producer_request_hold \ ---type message_age - -``` - -### `remove-backlog-quota` -Remove a backlog quota policy from a namespace - -|Flag|Description|Default| -|---|---|---| -|`-t`, `--type`|Backlog quota type to remove. The valid options are: `destination_storage`, `message_age` |destination_storage| - -Usage - -```bash - -$ pulsar-admin namespaces remove-backlog-quota tenant/namespace - -``` - -### `get-persistence` -Get the persistence policies for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-persistence tenant/namespace - -``` - -### `set-persistence` -Set the persistence policies for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-persistence tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-a`, `--bookkeeper-ack-quorum`|The number of acks (guaranteed copies) to wait for each entry|0| -|`-e`, `--bookkeeper-ensemble`|The number of bookies to use for a topic|0| -|`-w`, `--bookkeeper-write-quorum`|The number of writes to make for each entry|0| -|`-r`, `--ml-mark-delete-max-rate`|Throttling rate of mark-delete operation (0 means no throttle)|| - - -### `get-message-ttl` -Get the message TTL for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-message-ttl tenant/namespace - -``` - -### `set-message-ttl` -Set the message TTL for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-message-ttl tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-ttl`, `--messageTTL`|Message TTL in seconds. When the value is set to `0`, TTL is disabled. TTL is disabled by default. |0| -
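-Example (hypothetical tenant and namespace; sets a one-hour TTL): - -```bash - -$ pulsar-admin namespaces set-message-ttl my-tenant/my-ns --messageTTL 3600 - -``` - -### `remove-message-ttl` -Remove the message TTL for a namespace. 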
- -Usage - -```bash - -$ pulsar-admin namespaces remove-message-ttl tenant/namespace - -``` - -### `get-anti-affinity-group` -Get Anti-affinity group name for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-anti-affinity-group tenant/namespace - -``` - -### `set-anti-affinity-group` -Set Anti-affinity group name for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-anti-affinity-group tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-g`, `--group`|Anti-affinity group name|| - -### `get-anti-affinity-namespaces` -Get Anti-affinity namespaces grouped with the given anti-affinity group name - -Usage - -```bash - -$ pulsar-admin namespaces get-anti-affinity-namespaces options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-c`, `--cluster`|Cluster name|| -|`-g`, `--group`|Anti-affinity group name|| -|`-p`, `--tenant`|Tenant is only used for authorization. The client has to be an admin of one of the tenants to access this API|| - -### `delete-anti-affinity-group` -Remove Anti-affinity group name for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces delete-anti-affinity-group tenant/namespace - -``` - -### `get-retention` -Get the retention policy that is applied to each topic within the specified namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-retention tenant/namespace - -``` - -### `set-retention` -Set the retention policy for each topic within the specified namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-retention tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-s`, `--size`|The retention size limits (for example 10M, 16G or 3T) for each topic in the namespace. 0 means no retention and -1 means infinite size retention|| -|`-t`, `--time`|The retention time in minutes, hours, days, or weeks. Examples: 100m, 13h, 2d, 5w. 0 means no retention and -1 means infinite time retention|| - - -### `unload` -Unload a namespace or namespace bundle from the current serving broker. - -Usage - -```bash - -$ pulsar-admin namespaces unload tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-b`, `--bundle`|{start-boundary}_{end-boundary} (e.g. 0x00000000_0xffffffff)|| -
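-Example (hypothetical namespace and bundle range): - -```bash - -$ pulsar-admin namespaces unload my-tenant/my-ns --bundle 0x00000000_0xffffffff - -``` - -### `split-bundle` -Split a namespace-bundle from the current serving broker - -Usage - -```bash - -$ pulsar-admin namespaces split-bundle tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-b`, `--bundle`|{start-boundary}_{end-boundary} (e.g. 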
0x00000000_0xffffffff)|| -|`-u`, `--unload`|Unload newly split bundles after splitting old bundle|false| - -### `set-dispatch-rate` -Set message-dispatch-rate for all topics of the namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-dispatch-rate tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-bd`, `--byte-dispatch-rate`|The byte dispatch rate (the default -1 is used if not passed)|-1| -|`-dt`, `--dispatch-rate-period`|The dispatch rate period, in seconds (the default of 1 second is used if not passed)|1| -|`-md`, `--msg-dispatch-rate`|The message dispatch rate (the default -1 is used if not passed)|-1| - -### `get-dispatch-rate` -Get configured message-dispatch-rate for all topics of the namespace (Disabled if value < 0) - -Usage - -```bash - -$ pulsar-admin namespaces get-dispatch-rate tenant/namespace - -``` - -### `set-replicator-dispatch-rate` -Set replicator message-dispatch-rate for all topics of the namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-replicator-dispatch-rate tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-bd`, `--byte-dispatch-rate`|The byte dispatch rate (the default -1 is used if not passed)|-1| -|`-dt`, `--dispatch-rate-period`|The dispatch rate period, in seconds (the default of 1 second is used if not passed)|1| -|`-md`, `--msg-dispatch-rate`|The message dispatch rate (the default -1 is used if not passed)|-1| - -### `get-replicator-dispatch-rate` -Get replicator configured message-dispatch-rate for all topics of the namespace (Disabled if value < 0) - -Usage - -```bash - -$ pulsar-admin namespaces get-replicator-dispatch-rate tenant/namespace - -``` - -### `set-subscribe-rate` -Set subscribe-rate per consumer for all topics of the namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-subscribe-rate tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-sr`, `--subscribe-rate`|The subscribe rate (the default -1 is used if not passed)|-1| -|`-st`, `--subscribe-rate-period`|The subscribe rate period, in seconds (the default of 30 seconds is used if not passed)|30| - -### `get-subscribe-rate` -Get configured subscribe-rate per consumer for all topics of the namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-subscribe-rate tenant/namespace - -``` - -### `set-subscription-dispatch-rate` -Set subscription message-dispatch-rate for all subscriptions of the namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-subscription-dispatch-rate tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-bd`, `--byte-dispatch-rate`|The byte dispatch rate (the default -1 is used if not passed)|-1| -|`-dt`, `--dispatch-rate-period`|The dispatch rate period, in seconds (the default of 1 second is used if not passed)|1| -|`-md`, `--sub-msg-dispatch-rate`|The message dispatch rate (the default -1 is used if not passed)|-1| - -### `get-subscription-dispatch-rate` -Get subscription configured message-dispatch-rate for all topics of the namespace (Disabled if value < 0) - -Usage - -```bash - -$ pulsar-admin namespaces get-subscription-dispatch-rate tenant/namespace - -``` - -
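-Example (hypothetical namespace; limits each subscription to 1000 messages per second): - -```bash - -$ pulsar-admin namespaces set-subscription-dispatch-rate my-tenant/my-ns --sub-msg-dispatch-rate 1000 - -``` - -### `clear-backlog` -Clear the backlog for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces clear-backlog tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-b`, 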
`--bundle`|{start-boundary}_{end-boundary} (e.g. 0x00000000_0xffffffff)|| -|`-force`, `--force`|Whether to force a clear backlog without prompt|false| -|`-s`, `--sub`|The subscription name|| - - -### `unsubscribe` -Unsubscribe the given subscription on all destinations on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces unsubscribe tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-b`, `--bundle`|{start-boundary}_{end-boundary} (e.g. 0x00000000_0xffffffff)|| -|`-s`, `--sub`|The subscription name|| - -### `set-encryption-required` -Enable or disable message encryption required for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-encryption-required tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-d`, `--disable`|Disable message encryption required|false| -|`-e`, `--enable`|Enable message encryption required|false| - -### `set-delayed-delivery` -Set the delayed delivery policy on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-delayed-delivery tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-d`, `--disable`|Disable delayed delivery messages|false| -|`-e`, `--enable`|Enable delayed delivery messages|false| -|`-t`, `--time`|The tick time used when retrying delayed delivery messages|1s| - - -### `get-delayed-delivery` -Get the delayed delivery policy on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-delayed-delivery tenant/namespace - -``` - -
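-Example (hypothetical namespace; enables delayed delivery with a 5-second tick): - -```bash - -$ pulsar-admin namespaces set-delayed-delivery my-tenant/my-ns --enable --time 5s - -``` - -### `set-subscription-auth-mode` -Set subscription auth mode on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-subscription-auth-mode tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-m`, `--subscription-auth-mode`|Subscription authorization mode for Pulsar policies. 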
Valid options are: [None, Prefix]|| - -### `get-max-producers-per-topic` -Get maxProducersPerTopic for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-max-producers-per-topic tenant/namespace - -``` - -### `set-max-producers-per-topic` -Set maxProducersPerTopic for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-max-producers-per-topic tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-p`, `--max-producers-per-topic`|maxProducersPerTopic for a namespace|0| - -### `get-max-consumers-per-topic` -Get maxConsumersPerTopic for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-max-consumers-per-topic tenant/namespace - -``` - -### `set-max-consumers-per-topic` -Set maxConsumersPerTopic for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-max-consumers-per-topic tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-c`, `--max-consumers-per-topic`|maxConsumersPerTopic for a namespace|0| - -### `get-max-consumers-per-subscription` -Get maxConsumersPerSubscription for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-max-consumers-per-subscription tenant/namespace - -``` - -### `set-max-consumers-per-subscription` -Set maxConsumersPerSubscription for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-max-consumers-per-subscription tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-c`, `--max-consumers-per-subscription`|maxConsumersPerSubscription for a namespace|0| - -### `get-max-unacked-messages-per-subscription` -Get maxUnackedMessagesPerSubscription for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-max-unacked-messages-per-subscription tenant/namespace - -``` - -### `set-max-unacked-messages-per-subscription` -Set maxUnackedMessagesPerSubscription for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-max-unacked-messages-per-subscription tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-c`, `--max-unacked-messages-per-subscription`|maxUnackedMessagesPerSubscription for a namespace|-1| - -### `get-max-unacked-messages-per-consumer` -Get maxUnackedMessagesPerConsumer for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-max-unacked-messages-per-consumer tenant/namespace - -``` - -### `set-max-unacked-messages-per-consumer` -Set maxUnackedMessagesPerConsumer for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-max-unacked-messages-per-consumer tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-c`, `--max-unacked-messages-per-consumer`|maxUnackedMessagesPerConsumer for a namespace|-1| - - -### `get-compaction-threshold` -Get compactionThreshold for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-compaction-threshold tenant/namespace - -``` - -### `set-compaction-threshold` -Set compactionThreshold for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-compaction-threshold tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-t`, `--threshold`|Maximum number of bytes in a topic backlog before compaction is triggered (eg: 10M, 16G, 3T). 
0 disables automatic compaction|0| - - -### `get-offload-threshold` -Get offloadThreshold for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-offload-threshold tenant/namespace - -``` - -### `set-offload-threshold` -Set offloadThreshold for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-offload-threshold tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-s`, `--size`|Maximum number of bytes stored in the pulsar cluster for a topic before data will start being automatically offloaded to long-term storage (eg: 10M, 16G, 3T, 100). Negative values disable automatic offload. 0 triggers offloading as soon as possible.|-1| - -### `get-offload-deletion-lag` -Get offloadDeletionLag, in minutes, for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-offload-deletion-lag tenant/namespace - -``` - -### `set-offload-deletion-lag` -Set offloadDeletionLag for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-offload-deletion-lag tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-l`, `--lag`|Duration to wait after offloading a ledger segment, before deleting the copy of that segment from cluster local storage. (eg: 10m, 5h, 3d, 2w).|-1| - -### `clear-offload-deletion-lag` -Clear offloadDeletionLag for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces clear-offload-deletion-lag tenant/namespace - -``` - -### `get-schema-autoupdate-strategy` -Get the schema auto-update strategy for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-schema-autoupdate-strategy tenant/namespace - -``` - -### `set-schema-autoupdate-strategy` -Set the schema auto-update strategy for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-schema-autoupdate-strategy tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-c`, `--compatibility`|Compatibility level required for new schemas created via a Producer. Possible values (Full, Backward, Forward, None).|Full| -|`-d`, `--disabled`|Disable automatic schema updates.|false| - -### `get-publish-rate` -Get the message publish rate for each topic in a namespace, in bytes as well as messages per second - -Usage - -```bash - -$ pulsar-admin namespaces get-publish-rate tenant/namespace - -``` - -### `set-publish-rate` -Set the message publish rate for each topic in a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-publish-rate tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-m`, `--msg-publish-rate`|Threshold for number of messages per second per topic in the namespace (-1 implies not set, 0 for no limit).|-1| -|`-b`, `--byte-publish-rate`|Threshold for number of bytes per second per topic in the namespace (-1 implies not set, 0 for no limit).|-1| - -
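-Example (hypothetical namespace; caps each topic at 1000 messages and 10485760 bytes per second): - -```bash - -$ pulsar-admin namespaces set-publish-rate my-tenant/my-ns -m 1000 -b 10485760 - -``` - -### `set-offload-policies` -Set the offload policy for a namespace. 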
- -Usage - -```bash - -$ pulsar-admin namespaces set-offload-policies tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-d`, `--driver`|Driver to use to offload old data to long-term storage (possible values: S3, aws-s3, google-cloud-storage)|| -|`-r`, `--region`|The long-term storage region|| -|`-b`, `--bucket`|Bucket to place offloaded ledger into|| -|`-e`, `--endpoint`|Alternative endpoint to connect to|| -|`-i`, `--aws-id`|AWS Credential Id to use when using driver S3 or aws-s3|| -|`-s`, `--aws-secret`|AWS Credential Secret to use when using driver S3 or aws-s3|| -|`-ro`, `--s3-role`|S3 Role used for STSAssumeRoleSessionCredentialsProvider using driver S3 or aws-s3|| -|`-rsn`, `--s3-role-session-name`|S3 role session name used for STSAssumeRoleSessionCredentialsProvider using driver S3 or aws-s3|| -|`-mbs`, `--maxBlockSize`|Max block size|64MB| -|`-rbs`, `--readBufferSize`|Read buffer size|1MB| -|`-oat`, `--offloadAfterThreshold`|Offload after threshold size (eg: 1M, 5M)|| -|`-oae`, `--offloadAfterElapsed`|Offload after elapsed time in milliseconds (or minutes, hours, days, weeks, eg: 100m, 3h, 2d, 5w).|| - -### `get-offload-policies` -Get the offload policy for a namespace. - -Usage - -```bash - -$ pulsar-admin namespaces get-offload-policies tenant/namespace - -``` - -### `set-max-subscriptions-per-topic` -Set the maximum number of subscriptions per topic for a namespace. - -Usage - -```bash - -$ pulsar-admin namespaces set-max-subscriptions-per-topic tenant/namespace - -``` - -### `get-max-subscriptions-per-topic` -Get the maximum number of subscriptions per topic for a namespace. - -Usage - -```bash - -$ pulsar-admin namespaces get-max-subscriptions-per-topic tenant/namespace - -``` - -### `remove-max-subscriptions-per-topic` -Remove the maximum number of subscriptions per topic for a namespace. - -Usage - -```bash - -$ pulsar-admin namespaces remove-max-subscriptions-per-topic tenant/namespace - -``` - -## `ns-isolation-policy` -Operations for managing namespace isolation policies. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy subcommand - -``` - -Subcommands -* `set` -* `get` -* `list` -* `delete` -* `brokers` -* `broker` - -### `set` -Create/update a namespace isolation policy for a cluster. This operation requires Pulsar superuser privileges. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy set cluster-name policy-name options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`--auto-failover-policy-params`|Comma-separated name=value auto failover policy parameters|[]| -|`--auto-failover-policy-type`|Auto failover policy type name. Currently available options: min_available.|[]| -|`--namespaces`|Comma-separated namespaces regex list|[]| -|`--primary`|Comma-separated primary broker regex list|[]| -|`--secondary`|Comma-separated secondary broker regex list|[]| - - -### `get` -Get the namespace isolation policy of a cluster. This operation requires Pulsar superuser privileges. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy get cluster-name policy-name - -``` - -### `list` -List all namespace isolation policies of a cluster. This operation requires Pulsar superuser privileges. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy list cluster-name - -``` - -### `delete` -Delete namespace isolation policy of a cluster. This operation requires Pulsar superuser privileges. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy delete cluster-name policy-name - -``` - -### `brokers` -List all brokers with namespace-isolation policies attached to them. 
This operation requires Pulsar super-user privileges. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy brokers cluster-name - -``` - -### `broker` -Get broker with namespace-isolation policies attached to it. This operation requires Pulsar super-user privileges. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy broker cluster-name options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`--broker`|Broker name to get namespace-isolation policies attached to it|| - -## `topics` -Operations for managing Pulsar topics (both persistent and non-persistent). - -Usage - -```bash - -$ pulsar-admin topics subcommand - -``` - -From Pulsar 2.7.0, some namespace-level policies are available on topic level. To enable topic-level policy in Pulsar, you need to configure the following parameters in the `broker.conf` file. - -```shell - -systemTopicEnabled=true -topicLevelPoliciesEnabled=true - -``` - -Subcommands -* `compact` -* `compaction-status` -* `offload` -* `offload-status` -* `create-partitioned-topic` -* `create-missed-partitions` -* `delete-partitioned-topic` -* `create` -* `get-partitioned-topic-metadata` -* `update-partitioned-topic` -* `list-partitioned-topics` -* `list` -* `terminate` -* `permissions` -* `grant-permission` -* `revoke-permission` -* `lookup` -* `bundle-range` -* `delete` -* `unload` -* `create-subscription` -* `subscriptions` -* `unsubscribe` -* `stats` -* `stats-internal` -* `info-internal` -* `partitioned-stats` -* `partitioned-stats-internal` -* `skip` -* `clear-backlog` -* `expire-messages` -* `expire-messages-all-subscriptions` -* `peek-messages` -* `reset-cursor` -* `get-message-by-id` -* `last-message-id` -* `get-backlog-quotas` -* `set-backlog-quota` -* `remove-backlog-quota` -* `get-persistence` -* `set-persistence` -* `remove-persistence` -* `get-message-ttl` -* `set-message-ttl` -* `remove-message-ttl` -* `get-deduplication` -* `set-deduplication` -* `remove-deduplication` -* `get-retention` -* `set-retention` -* `remove-retention` -* `get-dispatch-rate` -* `set-dispatch-rate` -* `remove-dispatch-rate` -* `get-max-unacked-messages-per-subscription` -* `set-max-unacked-messages-per-subscription` -* `remove-max-unacked-messages-per-subscription` -* `get-max-unacked-messages-per-consumer` -* `set-max-unacked-messages-per-consumer` -* `remove-max-unacked-messages-per-consumer` -* `get-delayed-delivery` -* `set-delayed-delivery` -* `remove-delayed-delivery` -* `get-max-producers` -* `set-max-producers` -* `remove-max-producers` -* `get-max-consumers` -* `set-max-consumers` -* `remove-max-consumers` -* `get-compaction-threshold` -* `set-compaction-threshold` -* `remove-compaction-threshold` -* `get-offload-policies` -* `set-offload-policies` -* `remove-offload-policies` -* `get-inactive-topic-policies` -* `set-inactive-topic-policies` -* `remove-inactive-topic-policies` -* `set-max-subscriptions` -* `get-max-subscriptions` -* `remove-max-subscriptions` - -### `compact` -Run compaction on the specified topic (persistent topics only) - -Usage - -``` - -$ pulsar-admin topics compact persistent://tenant/namespace/topic - -``` - -### `compaction-status` -Check the status of a topic compaction (persistent topics only) - -Usage - -```bash - -$ pulsar-admin topics compaction-status persistent://tenant/namespace/topic - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-w`, `--wait-complete`|Wait for compaction to complete|false| - - -### `offload` -Trigger offload of data from a topic to long-term storage (e.g. 
Amazon S3) - -Usage - -```bash - -$ pulsar-admin topics offload persistent://tenant/namespace/topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-s`, `--size-threshold`|The maximum amount of data to keep in BookKeeper for the specific topic|| - - -### `offload-status` -Check the status of data offloading from a topic to long-term storage - -Usage - -```bash - -$ pulsar-admin topics offload-status persistent://tenant/namespace/topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-w`, `--wait-complete`|Wait for offloading to complete|false| - - -### `create-partitioned-topic` -Create a partitioned topic. A partitioned topic must be created before producers can publish to it. - -:::note - -By default, after 60 seconds of creation, topics are considered inactive and deleted automatically to prevent generating trash data. -To disable this feature, set `brokerDeleteInactiveTopicsEnabled` to `false`. -To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to your desired value. -For more information about these two parameters, see [here](reference-configuration.md#broker). - -::: - -Usage - -```bash - -$ pulsar-admin topics create-partitioned-topic {persistent|non-persistent}://tenant/namespace/topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-p`, `--partitions`|The number of partitions for the topic|0| - -### `create-missed-partitions` -Try to create missing partitions for a partitioned topic. Partitions must be created before they can be used; -this command can repair missing partitions when topic auto-creation is disabled - -Usage - -```bash - -$ pulsar-admin topics create-missed-partitions persistent://tenant/namespace/topic - -``` - -### `delete-partitioned-topic` -Delete a partitioned topic. This will also delete all the partitions of the topic if they exist. - -Usage - -```bash - -$ pulsar-admin topics delete-partitioned-topic {persistent|non-persistent}://tenant/namespace/topic - -``` - -### `create` -Creates a non-partitioned topic. A non-partitioned topic must explicitly be created by the user if allowAutoTopicCreation or createIfMissing is disabled. - -:::note - -By default, after 60 seconds of creation, topics are considered inactive and deleted automatically to prevent generating trash data. -To disable this feature, set `brokerDeleteInactiveTopicsEnabled` to `false`. -To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to your desired value. -For more information about these two parameters, see [here](reference-configuration.md#broker). - -::: - -Usage - -```bash - -$ pulsar-admin topics create {persistent|non-persistent}://tenant/namespace/topic - -``` - -### `get-partitioned-topic-metadata` -Get the partitioned topic metadata. If the topic is not created or is a non-partitioned topic, this will return an empty topic with zero partitions. - -Usage - -```bash - -$ pulsar-admin topics get-partitioned-topic-metadata {persistent|non-persistent}://tenant/namespace/topic - -``` - -
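-Example (hypothetical topic; creates four partitions, then inspects the metadata): - -```bash - -$ pulsar-admin topics create-partitioned-topic persistent://my-tenant/my-ns/my-topic -p 4 -$ pulsar-admin topics get-partitioned-topic-metadata persistent://my-tenant/my-ns/my-topic - -``` - -### `update-partitioned-topic` -Update an existing non-global partitioned topic. The new number of partitions must be greater than the existing number of partitions. 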
- -Usage - -```bash - -$ pulsar-admin topics update-partitioned-topic {persistent|non-persistent}://tenant/namespace/topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-p`, `--partitions`|The number of partitions for the topic|0| - -### `list-partitioned-topics` -Get the list of partitioned topics under a namespace. - -Usage - -```bash - -$ pulsar-admin topics list-partitioned-topics tenant/namespace - -``` - -### `list` -Get the list of topics under a namespace - -Usage - -```bash - -$ pulsar-admin topics list tenant/namespace - -``` - -### `terminate` -Terminate a persistent topic (disallow further messages from being published on the topic) - -Usage - -```bash - -$ pulsar-admin topics terminate persistent://tenant/namespace/topic - -``` - -### `partitioned-terminate` -Terminate a partitioned persistent topic (disallow further messages from being published on the topic) - -Usage - -```bash - -$ pulsar-admin topics partitioned-terminate persistent://tenant/namespace/topic - -``` - -### `permissions` -Get the permissions on a topic. Retrieve the effective permissions for a destination. These permissions are defined by the permissions set at the namespace level combined (union) with any eventual specific permissions set on the topic. - -Usage - -```bash - -$ pulsar-admin topics permissions topic - -``` - -### `grant-permission` -Grant a new permission to a client role on a single topic - -Usage - -```bash - -$ pulsar-admin topics grant-permission {persistent|non-persistent}://tenant/namespace/topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--actions`|Actions to be granted (`produce` or `consume`)|| -|`--role`|The client role to which to grant the permissions|| - - -### `revoke-permission` -Revoke permissions from a client role on a single topic. If the permission was not set at the topic level, but rather at the namespace level, this operation will return an error (HTTP status code 412). - -Usage - -```bash - -$ pulsar-admin topics revoke-permission topic - -``` - -### `lookup` -Look up a topic from the current serving broker - -Usage - -```bash - -$ pulsar-admin topics lookup topic - -``` - -### `bundle-range` -Get the namespace bundle which contains the given topic - -Usage - -```bash - -$ pulsar-admin topics bundle-range topic - -``` - -### `delete` -Delete a topic. The topic cannot be deleted if there are any active subscriptions or producers connected to the topic. - -Usage - -```bash - -$ pulsar-admin topics delete topic - -``` - -### `unload` -Unload a topic - -Usage - -```bash - -$ pulsar-admin topics unload topic - -``` - -### `create-subscription` -Create a new subscription on a topic. - -Usage - -```bash - -$ pulsar-admin topics create-subscription [options] persistent://tenant/namespace/topic - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-m`, `--messageId`|The message ID at which to create the subscription. 
-
-### `subscriptions`
-Get the list of subscriptions on the topic
-
-Usage
-
-```bash
-
-$ pulsar-admin topics subscriptions topic
-
-```
-
-### `unsubscribe`
-Delete a durable subscriber from a topic
-
-Usage
-
-```bash
-
-$ pulsar-admin topics unsubscribe topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-s`, `--subscription`|The subscription to delete||
-|`-f`, `--force`|Disconnect and close all consumers and delete the subscription forcefully|false|
-
-
-### `stats`
-Get the stats for the topic and its connected producers and consumers. All rates are computed over a 1-minute window and are relative to the last completed 1-minute period.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics stats topic
-
-```
-
-:::note
-
-The unit of `storageSize` and `averageMsgSize` is bytes.
-
-:::
-
-### `stats-internal`
-Get the internal stats for the topic
-
-Usage
-
-```bash
-
-$ pulsar-admin topics stats-internal topic
-
-```
-
-### `info-internal`
-Get the internal metadata info for the topic
-
-Usage
-
-```bash
-
-$ pulsar-admin topics info-internal topic
-
-```
-
-### `partitioned-stats`
-Get the stats for the partitioned topic and its connected producers and consumers. All rates are computed over a 1-minute window and are relative to the last completed 1-minute period.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics partitioned-stats topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--per-partition`|Get per-partition stats|false|
-
-### `partitioned-stats-internal`
-Get the internal stats for the partitioned topic and its connected producers and consumers. All rates are computed over a 1-minute window and are relative to the last completed 1-minute period.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics partitioned-stats-internal topic
-
-```
-
-### `skip`
-Skip some messages for the subscription
-
-Usage
-
-```bash
-
-$ pulsar-admin topics skip topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-n`, `--count`|The number of messages to skip|0|
-|`-s`, `--subscription`|The subscription on which to skip messages||
-
-
-### `clear-backlog`
-Clear backlog (skip all the messages) for the subscription
-
-Usage
-
-```bash
-
-$ pulsar-admin topics clear-backlog topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-s`, `--subscription`|The subscription to clear||
-
-
-### `expire-messages`
-Expire messages that are older than the given expiry time (in seconds) for the subscription.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics expire-messages topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-t`, `--expireTime`|Expire messages older than this time (in seconds)|0|
-|`-s`, `--subscription`|The subscription on which to expire messages||
-
-
-### `expire-messages-all-subscriptions`
-Expire messages older than the given expiry time (in seconds) for all subscriptions
-
-Usage
-
-```bash
-
-$ pulsar-admin topics expire-messages-all-subscriptions topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-t`, `--expireTime`|Expire messages older than this time (in seconds)|0|
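-
-As an illustration of the backlog-maintenance subcommands above, here is a hedged Java admin sketch (URL, topic, and subscription names are placeholders):
-
-```java
-
-import org.apache.pulsar.client.admin.PulsarAdmin;
-
-public class BacklogMaintenanceExample {
-    public static void main(String[] args) throws Exception {
-        try (PulsarAdmin admin = PulsarAdmin.builder()
-                .serviceHttpUrl("http://localhost:8080") // placeholder admin URL
-                .build()) {
-            String topic = "persistent://my-tenant/my-ns/my-topic";
-            // Equivalent of `clear-backlog -s my-sub`: skip all messages.
-            admin.topics().skipAllMessages(topic, "my-sub");
-            // Equivalent of `expire-messages -s my-sub -t 3600`: expire
-            // messages older than one hour for the subscription.
-            admin.topics().expireMessages(topic, "my-sub", 3600);
-        }
-    }
-}
-
-```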
-
-### `peek-messages`
-Peek some messages for the subscription.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics peek-messages topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-n`, `--count`|The number of messages to peek|0|
-|`-s`, `--subscription`|The subscription to get messages from||
-
-
-### `reset-cursor`
-Reset the subscription position to the position closest to the given timestamp or message ID.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics reset-cursor topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-s`, `--subscription`|The subscription to reset the position on||
-|`-t`, `--time`|The amount of time to reset back to, expressed with a unit suffix (minutes, hours, days, weeks, and so on). Examples: `100m`, `3h`, `2d`, `5w`.||
-|`-m`, `--messageId`|The message ID to reset back to (`ledgerId:entryId`, `earliest`, or `latest`).||
-
-### `get-message-by-id`
-Get a message by ledger ID and entry ID
-
-Usage
-
-```bash
-
-$ pulsar-admin topics get-message-by-id topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-l`, `--ledgerId`|The ledger ID|0|
-|`-e`, `--entryId`|The entry ID|0|
-
-### `last-message-id`
-Get the last committed message ID of the topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics last-message-id persistent://tenant/namespace/topic
-
-```
-
-### `get-backlog-quotas`
-Get the backlog quota policies for a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics get-backlog-quotas tenant/namespace/topic
-
-```
-
-### `set-backlog-quota`
-Set a backlog quota policy for a topic.
-
-|Flag|Description|Default|
-|----|---|---|
-|`-l`, `--limit`|The backlog size limit (for example, `10M` or `16G`)||
-|`-lt`, `--limitTime`|The time limit in seconds; a non-positive number disables the time limit (for example, 3600 for 1 hour)||
-|`-p`, `--policy`|The retention policy to enforce when the limit is reached. The valid options are: `producer_request_hold`, `producer_exception`, or `consumer_backlog_eviction`||
-|`-t`, `--type`|The backlog quota type to set. The valid options are: `destination_storage`, `message_age`|destination_storage|
-
-Usage
-
-```bash
-
-$ pulsar-admin topics set-backlog-quota tenant/namespace/topic options
-
-```
-
-Example
-
-```bash
-
-$ pulsar-admin topics set-backlog-quota my-tenant/my-ns/my-topic \
---limit 2G \
---policy producer_request_hold
-
-```
-
-```bash
-
-$ pulsar-admin topics set-backlog-quota my-tenant/my-ns/my-topic \
---limitTime 3600 \
---policy producer_request_hold \
---type message_age
-
-```
-
-### `remove-backlog-quota`
-Remove a backlog quota policy from a topic.
-
-|Flag|Description|Default|
-|---|---|---|
-|`-t`, `--type`|The backlog quota type to remove. The valid options are: `destination_storage`, `message_age`|destination_storage|
-
-Usage
-
-```bash
-
-$ pulsar-admin topics remove-backlog-quota tenant/namespace/topic
-
-```
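-
-For reference, the backlog quota above can also be set programmatically. A hedged sketch; in recent clients the call lives on `admin.topicPolicies()`, while older versions expose a similar call on `admin.topics()` (URL and topic are placeholders):
-
-```java
-
-import org.apache.pulsar.client.admin.PulsarAdmin;
-import org.apache.pulsar.common.policies.data.BacklogQuota;
-
-public class BacklogQuotaExample {
-    public static void main(String[] args) throws Exception {
-        try (PulsarAdmin admin = PulsarAdmin.builder()
-                .serviceHttpUrl("http://localhost:8080") // placeholder admin URL
-                .build()) {
-            // Equivalent of `set-backlog-quota --limit 2G --policy producer_request_hold`.
-            BacklogQuota quota = BacklogQuota.builder()
-                    .limitSize(2L * 1024 * 1024 * 1024)
-                    .retentionPolicy(BacklogQuota.RetentionPolicy.producer_request_hold)
-                    .build();
-            admin.topicPolicies().setBacklogQuota(
-                    "persistent://my-tenant/my-ns/my-topic", quota,
-                    BacklogQuota.BacklogQuotaType.destination_storage);
-        }
-    }
-}
-
-```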
-
-### `get-persistence`
-Get the persistence policies for a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics get-persistence tenant/namespace/topic
-
-```
-
-### `set-persistence`
-Set the persistence policies for a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics set-persistence tenant/namespace/topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-e`, `--bookkeeper-ensemble`|The number of bookies to use for a topic|0|
-|`-w`, `--bookkeeper-write-quorum`|How many writes to make for each entry|0|
-|`-a`, `--bookkeeper-ack-quorum`|The number of acks (guaranteed copies) to wait for on each entry|0|
-|`-r`, `--ml-mark-delete-max-rate`|The throttling rate of the mark-delete operation (0 means no throttle)||
-
-### `remove-persistence`
-Remove the persistence policy for a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics remove-persistence tenant/namespace/topic
-
-```
-
-### `get-message-ttl`
-Get the message TTL for a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics get-message-ttl tenant/namespace/topic
-
-```
-
-### `set-message-ttl`
-Set the message TTL for a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics set-message-ttl tenant/namespace/topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-ttl`, `--messageTTL`|The message TTL for a topic, in seconds; the allowed range is from 1 to `Integer.MAX_VALUE`|0|
-
-### `remove-message-ttl`
-Remove the message TTL for a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics remove-message-ttl tenant/namespace/topic
-
-```
-
-### `get-deduplication`
-Get the deduplication policy for a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics get-deduplication tenant/namespace/topic
-
-```
-
-### `set-deduplication`
-Set a deduplication policy for a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics set-deduplication tenant/namespace/topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--enable`, `-e`|Enable message deduplication on the specified topic.|false|
-|`--disable`, `-d`|Disable message deduplication on the specified topic.|false|
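-
-A hedged Java sketch of the deduplication policy above; as with other topic-level policies, the exact method location varies across Pulsar versions (shown here on `topicPolicies()`, available in recent clients):
-
-```java
-
-import org.apache.pulsar.client.admin.PulsarAdmin;
-
-public class DeduplicationExample {
-    public static void main(String[] args) throws Exception {
-        try (PulsarAdmin admin = PulsarAdmin.builder()
-                .serviceHttpUrl("http://localhost:8080") // placeholder admin URL
-                .build()) {
-            // Equivalent of `set-deduplication --enable` for the topic.
-            admin.topicPolicies().setDeduplicationStatus(
-                    "persistent://my-tenant/my-ns/my-topic", true);
-        }
-    }
-}
-
-```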
-
-### `remove-deduplication`
-Remove the deduplication policy for a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics remove-deduplication tenant/namespace/topic
-
-```
-
-## `tenants`
-Operations for managing tenants
-
-Usage
-
-```bash
-
-$ pulsar-admin tenants subcommand
-
-```
-
-Subcommands
-* `list`
-* `get`
-* `create`
-* `update`
-* `delete`
-
-### `list`
-List the existing tenants
-
-Usage
-
-```bash
-
-$ pulsar-admin tenants list
-
-```
-
-### `get`
-Get the configuration of a tenant
-
-Usage
-
-```bash
-
-$ pulsar-admin tenants get tenant-name
-
-```
-
-### `create`
-Create a new tenant
-
-Usage
-
-```bash
-
-$ pulsar-admin tenants create tenant-name options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-r`, `--admin-roles`|Comma-separated admin roles||
-|`-c`, `--allowed-clusters`|Comma-separated allowed clusters||
-
-### `update`
-Update a tenant
-
-Usage
-
-```bash
-
-$ pulsar-admin tenants update tenant-name options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-r`, `--admin-roles`|Comma-separated admin roles||
-|`-c`, `--allowed-clusters`|Comma-separated allowed clusters||
-
-
-### `delete`
-Delete an existing tenant
-
-Usage
-
-```bash
-
-$ pulsar-admin tenants delete tenant-name
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-f`, `--force`|Delete a tenant forcefully by deleting all namespaces under it.|false|
-
-
-## `resource-quotas`
-Operations for managing resource quotas
-
-Usage
-
-```bash
-
-$ pulsar-admin resource-quotas subcommand
-
-```
-
-Subcommands
-* `get`
-* `set`
-* `reset-namespace-bundle-quota`
-
-
-### `get`
-Get the resource quota for a specified namespace bundle, or the default quota if no namespace/bundle is specified.
-
-Usage
-
-```bash
-
-$ pulsar-admin resource-quotas get options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-b`, `--bundle`|A bundle of the form {start-boundary}_{end-boundary}. This must be specified together with -n/--namespace.||
-|`-n`, `--namespace`|The namespace||
-
-
-### `set`
-Set the resource quota for the specified namespace bundle, or the default quota if no namespace/bundle is specified.
-
-Usage
-
-```bash
-
-$ pulsar-admin resource-quotas set options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-bi`, `--bandwidthIn`|The expected inbound bandwidth (in bytes/second)|0|
-|`-bo`, `--bandwidthOut`|The expected outbound bandwidth (in bytes/second)|0|
-|`-b`, `--bundle`|A bundle of the form {start-boundary}_{end-boundary}. This must be specified together with -n/--namespace.||
-|`-d`, `--dynamic`|Allow the quota to be dynamically recalculated (or not)|false|
-|`-mem`, `--memory`|The expected memory usage (in megabytes)|0|
-|`-mi`, `--msgRateIn`|The expected number of incoming messages per second|0|
-|`-mo`, `--msgRateOut`|The expected number of outgoing messages per second|0|
-|`-n`, `--namespace`|The namespace as tenant/namespace, for example my-tenant/my-ns. Must be specified together with -b/--bundle.||
-
-
-### `reset-namespace-bundle-quota`
-Reset the specified namespace bundle's resource quota to the default value.
-
-Usage
-
-```bash
-
-$ pulsar-admin resource-quotas reset-namespace-bundle-quota options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-b`, `--bundle`|A bundle of the form {start-boundary}_{end-boundary}. This must be specified together with -n/--namespace.||
-|`-n`, `--namespace`|The namespace||
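-
-The same quota can be applied from Java. A hedged sketch (namespace, bundle range, and rates are illustrative values only):
-
-```java
-
-import org.apache.pulsar.client.admin.PulsarAdmin;
-import org.apache.pulsar.common.policies.data.ResourceQuota;
-
-public class ResourceQuotaExample {
-    public static void main(String[] args) throws Exception {
-        try (PulsarAdmin admin = PulsarAdmin.builder()
-                .serviceHttpUrl("http://localhost:8080") // placeholder admin URL
-                .build()) {
-            ResourceQuota quota = new ResourceQuota();
-            quota.setMsgRateIn(1000);  // expected incoming messages per second
-            quota.setMsgRateOut(2000); // expected outgoing messages per second
-            quota.setDynamic(false);   // do not recalculate dynamically
-            // Equivalent of `resource-quotas set -n my-tenant/my-ns -b <bundle> ...`.
-            admin.resourceQuotas().setNamespaceBundleResourceQuota(
-                    "my-tenant/my-ns", "0x00000000_0xffffffff", quota);
-        }
-    }
-}
-
-```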
-
-
-## `schemas`
-Operations related to schemas associated with Pulsar topics.
-
-Usage
-
-```bash
-
-$ pulsar-admin schemas subcommand
-
-```
-
-Subcommands
-* `upload`
-* `delete`
-* `get`
-* `extract`
-
-
-### `upload`
-Upload the schema definition for a topic
-
-Usage
-
-```bash
-
-$ pulsar-admin schemas upload persistent://tenant/namespace/topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`--filename`|The path to the schema definition file. An example schema file is available under the `conf` directory.||
-
-
-### `delete`
-Delete the schema definition associated with a topic
-
-Usage
-
-```bash
-
-$ pulsar-admin schemas delete persistent://tenant/namespace/topic
-
-```
-
-### `get`
-Retrieve the schema definition associated with a topic (at a given version, if a version is supplied).
-
-Usage
-
-```bash
-
-$ pulsar-admin schemas get persistent://tenant/namespace/topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`--version`|The version of the schema definition to retrieve for a topic.||
-
-### `extract`
-Extract the schema definition for a topic from a Java class contained in a JAR file
-
-Usage
-
-```bash
-
-$ pulsar-admin schemas extract persistent://tenant/namespace/topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-c`, `--classname`|The Java class name||
-|`-j`, `--jar`|The path to the JAR file which contains the above Java class||
-|`-t`, `--type`|The type of the schema (avro or json)||
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/reference-rest-api-overview.md b/site2/website/versioned_docs/version-2.10.0-deprecated/reference-rest-api-overview.md
deleted file mode 100644
index 8e3d410112b878..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/reference-rest-api-overview.md
+++ /dev/null
@@ -1,18 +0,0 @@
----
-id: reference-rest-api-overview
-title: Pulsar REST APIs
-sidebar_label: "Pulsar REST APIs"
----
-
-A REST API (also known as a RESTful API, or REpresentational State Transfer Application Programming Interface) is a set of definitions and protocols for building and integrating application software, using HTTP requests to GET, PUT, POST, and DELETE data following the REST standards. In essence, a REST API is a set of remote calls using standard methods to request and return data in a specific format between two systems.
-
-Pulsar provides a variety of REST APIs that enable you to interact with Pulsar to retrieve information or perform an action.
-
-| REST API category | Description |
-| --- | --- |
-| [Admin](/admin-rest-api/?version=master) | REST APIs for administrative operations.|
-| [Functions](/functions-rest-api/?version=master) | REST APIs for function-specific operations.|
-| [Sources](/source-rest-api/?version=master) | REST APIs for source-specific operations.|
-| [Sinks](/sink-rest-api/?version=master) | REST APIs for sink-specific operations.|
-| [Packages](/packages-rest-api/?version=master) | REST APIs for package-specific operations. A package can be a group of functions, sources, and sinks.|
-
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/reference-terminology.md b/site2/website/versioned_docs/version-2.10.0-deprecated/reference-terminology.md
deleted file mode 100644
index e5099141c3231e..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/reference-terminology.md
+++ /dev/null
@@ -1,176 +0,0 @@
----
-id: reference-terminology
-title: Pulsar Terminology
-sidebar_label: "Terminology"
-original_id: reference-terminology
----
-
-Here is a glossary of terms related to Apache Pulsar:
-
-### Concepts
-
-#### Pulsar
-
-Pulsar is a distributed messaging system originally created by Yahoo but now under the stewardship of the Apache Software Foundation.
-
-#### Message
-
-Messages are the basic unit of Pulsar. They're what [producers](#producer) publish to [topics](#topic)
-and what [consumers](#consumer) then consume from topics.
-
-#### Topic
-
-A named channel used to pass messages published by [producers](#producer) to [consumers](#consumer) who
-process those [messages](#message).
-
-#### Partitioned Topic
-
-A topic that is served by multiple Pulsar [brokers](#broker), which enables higher throughput.
-
-#### Namespace
-
-A grouping mechanism for related [topics](#topic).
-
-#### Namespace Bundle
-
-A virtual group of [topics](#topic) that belong to the same [namespace](#namespace). A namespace bundle
-is defined as a range between two 32-bit hashes, such as 0x00000000 and 0xffffffff.
-
-#### Tenant
-
-An administrative unit for allocating capacity and enforcing an authentication/authorization scheme.
-
-#### Subscription
-
-A lease on a [topic](#topic) established by a group of [consumers](#consumer). Pulsar has four subscription
-modes (exclusive, shared, failover and key_shared).
-
-#### Pub-Sub
-
-A messaging pattern in which [producer](#producer) processes publish messages on [topics](#topic) that
-are then consumed (processed) by [consumer](#consumer) processes.
-
-#### Producer
-
-A process that publishes [messages](#message) to a Pulsar [topic](#topic).
-
-#### Consumer
-
-A process that establishes a subscription to a Pulsar [topic](#topic) and processes messages published
-to that topic by [producers](#producer).
-
-#### Reader
-
-Pulsar readers are message processors much like Pulsar [consumers](#consumer) but with two crucial differences:
-
-- you can specify *where* on a topic readers begin processing messages (consumers always begin with the latest
-  available unacked message);
-- readers don't retain data or acknowledge messages.
-
-#### Cursor
-
-The subscription position for a [consumer](#consumer).
-
-#### Acknowledgment (ack)
-
-A message sent to a Pulsar broker by a [consumer](#consumer) to signal that a message has been successfully processed.
-An acknowledgment (ack) is Pulsar's way of knowing that the message can be deleted from the system;
-if no acknowledgment is received, the message is retained until it is processed.
-
-#### Negative Acknowledgment (nack)
-
-When an application fails to process a particular message, it can send a "negative ack" to Pulsar
-to signal that the message should be replayed at a later time. (By default, failed messages are
-replayed after a 1-minute delay.) Be aware that negative acknowledgment on ordered subscription types,
-such as Exclusive, Failover and Key_Shared, can cause failed messages to arrive at consumers out of the original order.
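-
-To make the acknowledgment mechanics above concrete, here is a hedged Java client sketch (service URL, topic, subscription, and delay are placeholders):
-
-```java
-
-import java.util.concurrent.TimeUnit;
-import org.apache.pulsar.client.api.Consumer;
-import org.apache.pulsar.client.api.Message;
-import org.apache.pulsar.client.api.PulsarClient;
-
-public class NackExample {
-    public static void main(String[] args) throws Exception {
-        PulsarClient client = PulsarClient.builder()
-                .serviceUrl("pulsar://localhost:6650") // placeholder service URL
-                .build();
-        Consumer<byte[]> consumer = client.newConsumer()
-                .topic("persistent://my-tenant/my-ns/my-topic")
-                .subscriptionName("my-sub")
-                .negativeAckRedeliveryDelay(30, TimeUnit.SECONDS) // override the 1-minute default
-                .subscribe();
-        Message<byte[]> msg = consumer.receive();
-        try {
-            // ... process the message ...
-            consumer.acknowledge(msg);          // ack: the message can be deleted
-        } catch (Exception e) {
-            consumer.negativeAcknowledge(msg);  // nack: ask the broker to redeliver later
-        }
-        client.close();
-    }
-}
-
-```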
-
-#### Unacknowledged
-
-A message that has been delivered to a consumer for processing but not yet confirmed as processed by the consumer.
-
-#### Retention Policy
-
-Size and time limits that you can set on a [namespace](#namespace) to configure retention of [messages](#message)
-that have already been [acknowledged](#acknowledgment-ack).
-
-#### Multi-Tenancy
-
-The ability to isolate [namespaces](#namespace), specify quotas, and configure authentication and authorization
-on a per-[tenant](#tenant) basis.
-
-#### Failure Domain
-
-A logical domain under a Pulsar cluster. Each logical domain contains a pre-configured list of brokers.
-
-#### Anti-affinity Namespaces
-
-A group of namespaces that have anti-affinity to each other.
-
-### Architecture
-
-#### Standalone
-
-A lightweight Pulsar broker in which all components run in a single Java Virtual Machine (JVM) process. Standalone
-clusters can be run on a single machine and are useful for development purposes.
-
-#### Cluster
-
-A set of Pulsar [brokers](#broker) and [BookKeeper](#bookkeeper) servers (aka [bookies](#bookie)).
-Clusters can reside in different geographical regions and replicate messages to one another
-in a process called [geo-replication](#geo-replication).
-
-#### Instance
-
-A group of Pulsar [clusters](#cluster) that act together as a single unit.
-
-#### Geo-Replication
-
-Replication of messages across Pulsar [clusters](#cluster), potentially in different datacenters
-or geographical regions.
-
-#### Configuration Store
-
-Pulsar's configuration store (previously known as global ZooKeeper) is a ZooKeeper quorum that
-is used for configuration-specific tasks. A multi-cluster Pulsar installation requires just one
-configuration store across all [clusters](#cluster).
-
-#### Topic Lookup
-
-A service provided by Pulsar [brokers](#broker) that enables connecting clients to automatically determine
-which Pulsar [cluster](#cluster) is responsible for a [topic](#topic) (and thus where message traffic for
-the topic needs to be routed).
-
-#### Service Discovery
-
-A mechanism provided by Pulsar that enables connecting clients to use just a single URL to interact
-with all the [brokers](#broker) in a [cluster](#cluster).
-
-#### Broker
-
-A stateless component of Pulsar [clusters](#cluster) that runs two other components: an HTTP server
-exposing a REST interface for administration and topic lookup, and a [dispatcher](#dispatcher) that
-handles all message transfers. Pulsar clusters typically consist of multiple brokers.
-
-#### Dispatcher
-
-An asynchronous TCP server used for all data transfers in and out of a Pulsar [broker](#broker). The Pulsar
-dispatcher uses a custom binary protocol for all communications.
-
-### Storage
-
-#### BookKeeper
-
-[Apache BookKeeper](http://bookkeeper.apache.org/) is a scalable, low-latency persistent log storage
-service that Pulsar uses to store data.
-
-#### Bookie
-
-Bookie is the name of an individual BookKeeper server. It is effectively the storage server of Pulsar.
-
-#### Ledger
-
-An append-only data structure in [BookKeeper](#bookkeeper) that is used to persistently store messages in Pulsar [topics](#topic).
-
-### Functions
-
-Pulsar Functions are lightweight functions that can consume messages from Pulsar topics, apply custom processing logic, and, if desired, publish results to topics.
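-
-As a small illustration of the definition above, a native Java Pulsar Function only needs to implement `java.util.function.Function`; this hedged sketch reverses each input string (class name and deployment command are illustrative):
-
-```java
-
-import java.util.function.Function;
-
-// Deployed with, for example, `pulsar-admin functions create --classname ReverseFunction ...`.
-public class ReverseFunction implements Function<String, String> {
-    @Override
-    public String apply(String input) {
-        return new StringBuilder(input).reverse().toString();
-    }
-}
-
-```
-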
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/schema-evolution-compatibility.md b/site2/website/versioned_docs/version-2.10.0-deprecated/schema-evolution-compatibility.md
deleted file mode 100644
index 04bd0129a74b20..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/schema-evolution-compatibility.md
+++ /dev/null
@@ -1,207 +0,0 @@
----
-id: schema-evolution-compatibility
-title: Schema evolution and compatibility
-sidebar_label: "Schema evolution and compatibility"
-original_id: schema-evolution-compatibility
----
-
-Normally, schemas do not stay the same over a long period of time. Instead, they undergo evolutions to satisfy new needs.
-
-This chapter examines how Pulsar schemas evolve and what the Pulsar schema compatibility check strategies are.
-
-## Schema evolution
-
-Pulsar schema is defined in a data structure called `SchemaInfo`.
-
-Each `SchemaInfo` stored with a topic has a version. The version is used to manage the schema changes happening within a topic.
-
-A message produced with a given `SchemaInfo` is tagged with its schema version. When a message is consumed by a Pulsar client, the client can use the schema version to retrieve the corresponding `SchemaInfo` and use the correct schema information to deserialize data.
-
-### What is schema evolution?
-
-Schemas store the details of attributes and types. To satisfy new business requirements, you inevitably need to update schemas over time, which is called **schema evolution**.
-
-Any schema changes affect downstream consumers. Schema evolution ensures that the downstream consumers can seamlessly handle data encoded with both old schemas and new schemas.
-
-### How should Pulsar schemas evolve?
-
-The answer is the Pulsar schema compatibility check strategy, which determines how a broker compares new schemas with a topic's existing schemas.
-
-For more information, see [Schema compatibility check strategy](#schema-compatibility-check-strategy).
-
-### How does Pulsar support schema evolution?
-
-1. When a producer/consumer/reader connects to a broker, the broker deploys the schema compatibility checker configured by `schemaRegistryCompatibilityCheckers` to enforce the schema compatibility check.
-
-   The schema compatibility checker is one instance per schema type.
-
-   Currently, Avro and JSON have their own compatibility checkers, while all the other schema types share the default compatibility checker, which disables schema evolution.
-
-2. The producer/consumer/reader sends its client `SchemaInfo` to the broker.
-
-3. The broker knows the schema type and locates the schema compatibility checker for that type.
-
-4. The broker uses the checker to check if the `SchemaInfo` is compatible with the latest schema of the topic by applying its compatibility check strategy.
-
-   Currently, the compatibility check strategy is configured at the namespace level and applied to all the topics within that namespace.
-
-## Schema compatibility check strategy
-
-Pulsar has 8 schema compatibility check strategies, which are summarized in the following table.
-
-Suppose that you have a topic containing three schemas (V1, V2, and V3), where V1 is the oldest and V3 is the latest:
-
-| Compatibility check strategy | Definition | Changes allowed | Check against which schema | Upgrade first |
-| --- | --- | --- | --- | --- |
-| `ALWAYS_COMPATIBLE` | Disable schema compatibility check. | All changes are allowed | All previous versions | Any order |
-| `ALWAYS_INCOMPATIBLE` | Disable schema evolution. | All changes are disabled | None | None |
-| `BACKWARD` | Consumers using the schema V3 can process data written by producers using the schema V3 or V2. | Add optional fields<br />Delete fields | Latest version | Consumers |
-| `BACKWARD_TRANSITIVE` | Consumers using the schema V3 can process data written by producers using the schema V3, V2 or V1. | Add optional fields<br />Delete fields | All previous versions | Consumers |
-| `FORWARD` | Consumers using the schema V3 or V2 can process data written by producers using the schema V3. | Add fields<br />Delete optional fields | Latest version | Producers |
-| `FORWARD_TRANSITIVE` | Consumers using the schema V3, V2 or V1 can process data written by producers using the schema V3. | Add fields<br />Delete optional fields | All previous versions | Producers |
-| `FULL` | Backward and forward compatible between the schema V3 and V2. | Modify optional fields | Latest version | Any order |
-| `FULL_TRANSITIVE` | Backward and forward compatible among the schema V3, V2, and V1. | Modify optional fields | All previous versions | Any order |
-
-### ALWAYS_COMPATIBLE and ALWAYS_INCOMPATIBLE
-
-| Compatibility check strategy | Definition | Note |
-| --- | --- | --- |
-| `ALWAYS_COMPATIBLE` | Disable schema compatibility check. | None |
-| `ALWAYS_INCOMPATIBLE` | Disable schema evolution, that is, any schema change is rejected. | For all schema types except Avro and JSON, the default schema compatibility check strategy is `ALWAYS_INCOMPATIBLE`.<br />For Avro and JSON, the default schema compatibility check strategy is `FULL`. |
-
-#### Example
-
-* Example 1
-
-  In some situations, an application needs to store events of several different types in the same Pulsar topic.
-
-  In particular, when developing a data model in an `Event Sourcing` style, you might have several kinds of events that affect the state of an entity.
-
-  For example, for a user entity, there are `userCreated`, `userAddressChanged` and `userEnquiryReceived` events. The application requires that those events are always read in the same order.
-
-  Consequently, those events need to go in the same Pulsar partition to maintain order. This application can use `ALWAYS_COMPATIBLE` to allow different kinds of events to co-exist in the same topic.
-
-* Example 2
-
-  Sometimes you also need to make incompatible changes, for example, modifying a field type from `string` to `int`.
-
-  In this case, you need to:
-
-  * Upgrade all producers and consumers to the new schema versions at the same time.
-
-  * Optionally, create a new topic and start migrating applications to use the new topic and the new schema, avoiding the need to handle two incompatible versions in the same topic.
-
-### BACKWARD and BACKWARD_TRANSITIVE
-
-Suppose that you have a topic containing three schemas (V1, V2, and V3), where V1 is the oldest and V3 is the latest:
-
-| Compatibility check strategy | Definition | Description |
-|---|---|---|
-`BACKWARD` | Consumers using the new schema can process data written by producers using the **last schema**. | The consumers using the schema V3 can process data written by producers using the schema V3 or V2. |
-`BACKWARD_TRANSITIVE` | Consumers using the new schema can process data written by producers using **all previous schemas**. | The consumers using the schema V3 can process data written by producers using the schema V3, V2, or V1. |
-
-#### Example
-
-* Example 1
-
-  Remove a field.
-
-  A consumer constructed to process events without one field can process events written with the old schema containing the field, and the consumer will ignore that field.
-
-* Example 2
-
-  You want to load all Pulsar data into a Hive data warehouse and run SQL queries against the data.
-
-  The same SQL queries must continue to work even when the data changes. To support this, you can evolve the schemas using the `BACKWARD` strategy.
-
-### FORWARD and FORWARD_TRANSITIVE
-
-Suppose that you have a topic containing three schemas (V1, V2, and V3), where V1 is the oldest and V3 is the latest:
-
-| Compatibility check strategy | Definition | Description |
-|---|---|---|
-`FORWARD` | Consumers using the **last schema** can process data written by producers using a new schema, even though they may not be able to use the full capabilities of the new schema. | The consumers using the schema V3 or V2 can process data written by producers using the schema V3. |
-`FORWARD_TRANSITIVE` | Consumers using **all previous schemas** can process data written by producers using a new schema. | The consumers using the schema V3, V2, or V1 can process data written by producers using the schema V3. |
-
-#### Example
-
-* Example 1
-
-  Add a field.
-
-  In most data formats, consumers written to process events without new fields can continue doing so even when they receive new events containing new fields.
-
-* Example 2
-
-  If a consumer has application logic tied to a full version of a schema, the application logic may not be updated instantly when the schema evolves.
-
-  In this case, you need to project data with a new schema onto an old schema that the application understands.
-
-  Consequently, you can evolve the schemas using the `FORWARD` strategy to ensure that the old schema can process data encoded with the new schema.
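-
-Whichever strategy you pick, it can be applied per namespace with the Java admin client as well as with `pulsar-admin`. A hedged sketch (admin URL and namespace are placeholders):
-
-```java
-
-import org.apache.pulsar.client.admin.PulsarAdmin;
-import org.apache.pulsar.common.policies.data.SchemaCompatibilityStrategy;
-
-public class CompatibilityStrategyExample {
-    public static void main(String[] args) throws Exception {
-        try (PulsarAdmin admin = PulsarAdmin.builder()
-                .serviceHttpUrl("http://localhost:8080") // placeholder admin URL
-                .build()) {
-            // Apply the BACKWARD strategy to every topic in the namespace.
-            admin.namespaces().setSchemaCompatibilityStrategy(
-                    "my-tenant/my-ns", SchemaCompatibilityStrategy.BACKWARD);
-        }
-    }
-}
-
-```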
-
-### FULL and FULL_TRANSITIVE
-
-Suppose that you have a topic containing three schemas (V1, V2, and V3), where V1 is the oldest and V3 is the latest:
-
-| Compatibility check strategy | Definition | Description | Note |
-| --- | --- | --- | --- |
-| `FULL` | Schemas are both backward and forward compatible, which means: Consumers using the last schema can process data written by producers using the new schema. AND Consumers using the new schema can process data written by producers using the last schema. | Consumers using the schema V3 can process data written by producers using the schema V3 or V2. AND Consumers using the schema V3 or V2 can process data written by producers using the schema V3. | For Avro and JSON, the default schema compatibility check strategy is `FULL`.<br />For all schema types except Avro and JSON, the default schema compatibility check strategy is `ALWAYS_INCOMPATIBLE`. |
-| `FULL_TRANSITIVE` | The new schema is backward and forward compatible with all previously registered schemas. | Consumers using the schema V3 can process data written by producers using the schema V3, V2 or V1. AND Consumers using the schema V3, V2 or V1 can process data written by producers using the schema V3. | None |
-
-#### Example
-
-In some data formats, for example, Avro, you can define fields with default values. Consequently, adding or removing a field with a default value is a fully compatible change.
-
-:::tip
-
-You can set the schema compatibility check strategy at the topic, namespace or broker level. For how to set the strategy, see [here](schema-manage.md/#set-schema-compatibility-check-strategy).
-
-:::
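-
-To illustrate, here is a hedged Avro-based sketch of such a fully compatible change (class and field names are illustrative; `@AvroDefault` comes from the Avro reflect API used by Pulsar's Avro schemas):
-
-```java
-
-import org.apache.avro.reflect.AvroDefault;
-
-// Version 1 of the schema, e.g. used with Schema.AVRO(User.class).
-class User {
-    public String name;
-    public int age;
-}
-
-// Version 2 adds a field with a default value. Under FULL compatibility,
-// old consumers ignore the new field, and new consumers fall back to the
-// default when reading data written with version 1.
-class UserV2 {
-    public String name;
-    public int age;
-    @AvroDefault("\"unknown\"")
-    public String city;
-}
-
-```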
-
-## Schema verification
-
-When a producer or a consumer tries to connect to a topic, a broker performs some checks to verify a schema.
-
-### Producer
-
-When a producer tries to connect to a topic (assuming schema auto-creation is disabled), a broker does the following checks:
-
-* Check if the schema carried by the producer exists in the schema registry or not.
-
-  * If the schema is already registered, then the producer is connected to a broker and produces messages with that schema.
-
-  * If the schema is not registered, then Pulsar verifies if the schema is allowed to be registered based on the configured compatibility check strategy.
-
-### Consumer
-When a consumer tries to connect to a topic, a broker checks if a carried schema is compatible with a registered schema based on the configured schema compatibility check strategy.
-
-| Compatibility check strategy | Check logic |
-| --- | --- |
-| `ALWAYS_COMPATIBLE` | All pass |
-| `ALWAYS_INCOMPATIBLE` | No pass |
-| `BACKWARD` | Can read the last schema |
-| `BACKWARD_TRANSITIVE` | Can read all schemas |
-| `FORWARD` | Can read the last schema |
-| `FORWARD_TRANSITIVE` | Can read the last schema |
-| `FULL` | Can read the last schema |
-| `FULL_TRANSITIVE` | Can read all schemas |
-
-## Order of upgrading clients
-
-The order of upgrading client applications is determined by the compatibility check strategy.
-
-For example, suppose the producers use schemas to write data to Pulsar and the consumers use schemas to read data from Pulsar.
-
-| Compatibility check strategy | Upgrade first | Description |
-| --- | --- | --- |
-| `ALWAYS_COMPATIBLE` | Any order | The compatibility check is disabled. Consequently, you can upgrade the producers and consumers in **any order**. |
-| `ALWAYS_INCOMPATIBLE` | None | The schema evolution is disabled. |
-| `BACKWARD`<br />`BACKWARD_TRANSITIVE` | Consumers | There is no guarantee that consumers using the old schema can read data produced using the new schema. Consequently, **upgrade all consumers first**, and then start producing new data. |
-| `FORWARD`<br />`FORWARD_TRANSITIVE` | Producers | There is no guarantee that consumers using the new schema can read data produced using the old schema. Consequently, **upgrade all producers first** to use the new schema and ensure that the data already produced using the old schemas are not available to consumers, and then upgrade the consumers. |
-| `FULL`<br />`FULL_TRANSITIVE` | Any order | There is no guarantee that consumers using the old schema can read data produced using the new schema, or that consumers using the new schema can read data produced using the old schema. Consequently, you can upgrade the producers and consumers in **any order**. |
-
-
-
-
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/schema-get-started.md b/site2/website/versioned_docs/version-2.10.0-deprecated/schema-get-started.md
deleted file mode 100644
index 73a05d96d7f10d..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/schema-get-started.md
+++ /dev/null
@@ -1,102 +0,0 @@
----
-id: schema-get-started
-title: Get started
-sidebar_label: "Get started"
-original_id: schema-get-started
----
-
-This chapter introduces Pulsar schemas and explains why they are important.
-
-## Schema Registry
-
-Type safety is extremely important in any application built around a message bus like Pulsar.
-
-Producers and consumers need some kind of mechanism for coordinating types at the topic level to avoid various potential problems, such as serialization and deserialization issues.
-
-Applications typically adopt one of the following approaches to guarantee type safety in messaging. Both approaches are available in Pulsar, and you're free to adopt one or the other or to mix and match on a per-topic basis.
-
-#### Note
->
-> Currently, the Pulsar schema registry is only available for the [Java client](client-libraries-java.md), [Go client](client-libraries-go.md), [Python client](client-libraries-python.md), and [C++ client](client-libraries-cpp.md).
-
-### Client-side approach
-
-Producers and consumers are responsible for not only serializing and deserializing messages (which consist of raw bytes) but also "knowing" which types are being transmitted via which topics.
-
-If a producer is sending temperature sensor data on the topic `topic-1`, consumers of that topic will run into trouble if they attempt to parse that data as moisture sensor readings.
-
-Producers and consumers can send and receive messages consisting of raw byte arrays and leave all type safety enforcement to the application on an "out-of-band" basis.
-
-### Server-side approach
-
-Producers and consumers inform the system which data types can be transmitted via the topic.
-
-With this approach, the messaging system enforces type safety and ensures that producers and consumers remain synced.
-
-Pulsar has a built-in **schema registry** that enables clients to upload data schemas on a per-topic basis. Those schemas dictate which data types are recognized as valid for that topic.
-
-## Why use schema
-
-When no schema is enabled, Pulsar does not parse data; it takes bytes as inputs and sends bytes as outputs. While data has meaning beyond bytes, you need to parse the data and might encounter parse exceptions, which mainly occur in the following situations:
-
-* The field does not exist
-
-* The field type has changed (for example, `string` is changed to `int`)
-
-There are a few ways to prevent and overcome these exceptions. For example, you can catch exceptions when parse errors occur, but this makes the code hard to maintain. Alternatively, you can adopt a schema management system that performs schema evolution without breaking downstream applications and that enforces type safety to the maximum extent in the language you are using. That solution is Pulsar schema.
-
-Pulsar schema enables you to use language-specific types of data when constructing and handling messages, from simple types like `string` to more complex application-specific types.
-
-**Example**
-
-You can use the _User_ class to define the messages sent to Pulsar topics.
-
-```java
-
-public class User {
-    String name;
-    int age;
-}
-
-```
-
-When constructing a producer with the _User_ class, you can either specify a schema or not, as shown below.
-
-### Without schema
-
-If you construct a producer without specifying a schema, then the producer can only produce messages of type `byte[]`. If you have a POJO class, you need to serialize the POJO into bytes before sending messages.
-
-**Example**
-
-```java
-
-Producer<byte[]> producer = client.newProducer()
-        .topic(topic)
-        .create();
-User user = new User("Tom", 28);
-byte[] message = … // serialize the `user` by yourself;
-producer.send(message);
-
-```
-
-### With schema
-
-If you construct a producer and specify a schema, then you can send a class to a topic directly without worrying about how to serialize POJOs into bytes.
-
-**Example**
-
-This example constructs a producer with the _JSONSchema_, and you can send the _User_ class to topics directly without worrying about how to serialize it into bytes.
-
-```java
-
-Producer<User> producer = client.newProducer(JSONSchema.of(User.class))
-        .topic(topic)
-        .create();
-User user = new User("Tom", 28);
-producer.send(user);
-
-```
-
-### Summary
-
-When constructing a producer with a schema, you do not need to serialize messages into bytes; instead, Pulsar schema does this job in the background.
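-
-On the consuming side, the same schema does the reverse job. A hedged sketch in the style of the examples above (the client, topic, and subscription name are assumed to exist):
-
-```java
-
-Consumer<User> consumer = client.newConsumer(JSONSchema.of(User.class))
-        .topic(topic)
-        .subscriptionName("my-sub") // placeholder subscription name
-        .subscribe();
-Message<User> msg = consumer.receive();
-User user = msg.getValue(); // deserialized by the schema, no manual parsing
-consumer.acknowledge(msg);
-
-```
-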
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/schema-manage.md b/site2/website/versioned_docs/version-2.10.0-deprecated/schema-manage.md
deleted file mode 100644
index e62818c7e823f8..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/schema-manage.md
+++ /dev/null
@@ -1,850 +0,0 @@
----
-id: schema-manage
-title: Manage schema
-sidebar_label: "Manage schema"
-original_id: schema-manage
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-This guide demonstrates the ways to manage schemas:
-
-* Automatically
-
-  * [Schema AutoUpdate](#schema-autoupdate)
-
-* Manually
-
-  * [Schema manual management](#schema-manual-management)
-
-  * [Custom schema storage](#custom-schema-storage)
-
-## Schema AutoUpdate
-
-If a schema passes the schema compatibility check, the Pulsar producer automatically registers this schema to the topic it produces to by default.
-
-### AutoUpdate for producer
-
-For a producer, the `AutoUpdate` happens in the following cases:
-
-* If a **topic doesn't have a schema**, Pulsar registers a schema automatically.
-
-* If a **topic has a schema**:
-
-  * If a **producer doesn't carry a schema**:
-
-    * If `isSchemaValidationEnforced` or `schemaValidationEnforced` is **disabled** in the namespace to which the topic belongs, the producer is allowed to connect to the topic and produce data.
-
-    * If `isSchemaValidationEnforced` or `schemaValidationEnforced` is **enabled** in the namespace to which the topic belongs, the producer is rejected and disconnected.
-
-  * If a **producer carries a schema**:
-
-    A broker performs the compatibility check based on the configured compatibility check strategy of the namespace to which the topic belongs.
-
-    * If the schema is registered, the producer is connected to a broker.
-
-    * If the schema is not registered:
-
-      * If `isAllowAutoUpdateSchema` is set to **false**, the producer is rejected and cannot connect to a broker.
-
-      * If `isAllowAutoUpdateSchema` is set to **true**:
-
-        * If the schema passes the compatibility check, then the broker registers a new schema automatically for the topic and the producer is connected.
-
-        * If the schema does not pass the compatibility check, then the broker does not register a schema and the producer is rejected and cannot connect to a broker.
-
-![AutoUpdate Producer](/assets/schema-producer.png)
-
-### AutoUpdate for consumer
-
-For a consumer, the `AutoUpdate` happens in the following cases:
-
-* If a **consumer connects to a topic without a schema** (which means the consumer receives raw bytes), the consumer can connect to the topic successfully without doing any compatibility check.
-
-* If a **consumer connects to a topic with a schema**:
-
-  * If the topic has none of the following: a schema, data, a local consumer, or a local producer:
-
-    * If `isAllowAutoUpdateSchema` is set to **true**, then the consumer registers a schema and it is connected to a broker.
-
-    * If `isAllowAutoUpdateSchema` is set to **false**, then the consumer is rejected and cannot connect to a broker.
-
-  * If the topic has at least one of the following: a schema, data, a local consumer, or a local producer, then the schema compatibility check is performed.
-
-    * If the schema passes the compatibility check, then the consumer is connected to the broker.
-
-    * If the schema does not pass the compatibility check, then the consumer is rejected and cannot connect to the broker.
-
-![AutoUpdate Consumer](/assets/schema-consumer.png)
-
-
-### Manage AutoUpdate strategy
-
-You can use the `pulsar-admin` command to manage the `AutoUpdate` strategy as below:
-
-* [Enable AutoUpdate](#enable-autoupdate)
-
-* [Disable AutoUpdate](#disable-autoupdate)
-
-* [Adjust compatibility](#adjust-compatibility)
-
-#### Enable AutoUpdate
-
-To enable `AutoUpdate` on a namespace, you can use the `pulsar-admin` command.
-
-```bash
-
-bin/pulsar-admin namespaces set-is-allow-auto-update-schema --enable tenant/namespace
-
-```
-
-#### Disable AutoUpdate
-
-To disable `AutoUpdate` on a namespace, you can use the `pulsar-admin` command.
-
-```bash
-
-bin/pulsar-admin namespaces set-is-allow-auto-update-schema --disable tenant/namespace
-
-```
-
-Once `AutoUpdate` is disabled, you can only register a new schema using the `pulsar-admin` command.
-
-#### Adjust compatibility
-
-To adjust the schema compatibility level on a namespace, you can use the `pulsar-admin` command.
-
-```bash
-
-bin/pulsar-admin namespaces set-schema-compatibility-strategy --compatibility <compatibility-level> tenant/namespace
-
-```
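-
-The namespace-level switches above are also exposed in the Java admin client. A hedged sketch (admin URL and namespace are placeholders; method names follow the `Namespaces` admin interface):
-
-```java
-
-import org.apache.pulsar.client.admin.PulsarAdmin;
-
-public class SchemaNamespaceSettingsExample {
-    public static void main(String[] args) throws Exception {
-        try (PulsarAdmin admin = PulsarAdmin.builder()
-                .serviceHttpUrl("http://localhost:8080") // placeholder admin URL
-                .build()) {
-            // Equivalent of `set-is-allow-auto-update-schema --disable`.
-            admin.namespaces().setIsAllowAutoUpdateSchema("my-tenant/my-ns", false);
-            // Equivalent of `set-schema-validation-enforce --enable` (see below).
-            admin.namespaces().setSchemaValidationEnforced("my-tenant/my-ns", true);
-        }
-    }
-}
-
-```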
-
-### Schema validation
-
-By default, `schemaValidationEnforced` is **disabled** for producers:
-
-* This means a producer without a schema can produce any kind of message to a topic with schemas, which may result in producing trash data to the topic.
-
-* This allows clients in non-Java languages that don't support schemas to produce messages to a topic with schemas.
-
-However, if you want a stronger guarantee on the topics with schemas, you can enable `schemaValidationEnforced` across the whole cluster or on a per-namespace basis.
-
-#### Enable schema validation
-
-To enable `schemaValidationEnforced` on a namespace, you can use the `pulsar-admin` command.
-
-```bash
-
-bin/pulsar-admin namespaces set-schema-validation-enforce --enable tenant/namespace
-
-```
-
-#### Disable schema validation
-
-To disable `schemaValidationEnforced` on a namespace, you can use the `pulsar-admin` command.
-
-```bash
-
-bin/pulsar-admin namespaces set-schema-validation-enforce --disable tenant/namespace
-
-```
-
-## Schema manual management
-
-To manage schemas, you can use one of the following methods.
-
-| Method | Description |
-| --- | --- |
-| **Admin CLI** | You can use the `pulsar-admin` tool to manage Pulsar schemas, brokers, clusters, sources, sinks, topics, tenants and so on. For more information about how to use the `pulsar-admin` tool, see [here](reference-pulsar-admin.md). |
-| **REST API** | Pulsar exposes schema-related management APIs in Pulsar's admin RESTful API. You can access the admin RESTful endpoint directly to manage schemas. For more information about how to use the Pulsar REST API, see [here](/admin-rest-api/). |
-| **Java Admin API** | Pulsar provides a Java admin library. |
-
-### Upload a schema
-
-To upload (register) a new schema for a topic, you can use one of the following methods.
-
-````mdx-code-block
-
-Use the `upload` subcommand.
-
-```bash
-
-$ pulsar-admin schemas upload <topic-name> --filename <schema-definition-file>
-
-```
-
-The `<schema-definition-file>` is in JSON format.
-
-```json
-
-{
-    "type": "<schema-type>",
-    "schema": "<an-utf8-encoded-string-of-schema-definition-data>",
-    "properties": {} // the properties associated with the schema
-}
-
-```
-
-The `schema-definition-file` includes the following fields:
-
-| Field | Description |
-| --- | --- |
-| `type` | The schema type. |
-| `schema` | The schema definition data, which is encoded in the UTF-8 charset. If the schema is a **primitive** schema, this field should be blank. If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition. |
-| `properties` | The additional properties associated with the schema. |
-
-Here are examples of the `schema-definition-file` for a JSON schema.
-
-**Example 1**
-
-```json
-
-{
-    "type": "JSON",
-    "schema": "{\"type\":\"record\",\"name\":\"User\",\"namespace\":\"com.foo\",\"fields\":[{\"name\":\"file1\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"file2\",\"type\":\"string\",\"default\":null},{\"name\":\"file3\",\"type\":[\"null\",\"string\"],\"default\":\"dfdf\"}]}",
-    "properties": {}
-}
-
-```
-
-**Example 2**
-
-```json
-
-{
-    "type": "STRING",
-    "schema": "",
-    "properties": {
-        "key1": "value1"
-    }
-}
-
-```
-
-Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v2/schemas/:tenant/:namespace/:topic/schema|operation/uploadSchema?version=@pulsar:version_number@}
-
-The post payload is in JSON format.
-
-```json
-
-{
-    "type": "<schema-type>",
-    "schema": "<an-utf8-encoded-string-of-schema-definition-data>",
-    "properties": {} // the properties associated with the schema
-}
-
-```
-
-The post payload includes the following fields:
-
-| Field | Description |
-| --- | --- |
-| `type` | The schema type. |
-| `schema` | The schema definition data, which is encoded in the UTF-8 charset. If the schema is a **primitive** schema, this field should be blank. If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition. |
-| `properties` | The additional properties associated with the schema. |
-
-
-```java
-
-void createSchema(String topic, PostSchemaPayload schemaPayload)
-
-```
-
-The `PostSchemaPayload` includes the following fields:
-
-| Field | Description |
-| --- | --- |
-| `type` | The schema type. |
-| `schema` | The schema definition data, which is encoded in the UTF-8 charset. If the schema is a **primitive** schema, this field should be blank. If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition. |
-| `properties` | The additional properties associated with the schema. |
-
-Here is an example of `PostSchemaPayload`:
-
-```java
-
-PulsarAdmin admin = …;
-
-PostSchemaPayload payload = new PostSchemaPayload();
-payload.setType("INT8");
-payload.setSchema("");
-
-admin.createSchema("my-tenant/my-ns/my-topic", payload);
-
-```
-
-
-````
-
-### Get a schema (latest)
-
-To get the latest schema for a topic, you can use one of the following methods.
-
-````mdx-code-block
-
-Use the `get` subcommand.
-
-```bash
-
-$ pulsar-admin schemas get <topic-name>
-
-{
-    "version": 0,
-    "type": "String",
-    "timestamp": 0,
-    "data": "string",
-    "properties": {
-        "property1": "string",
-        "property2": "string"
-    }
-}
-
-```
-
-Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v2/schemas/:tenant/:namespace/:topic/schema|operation/getSchema?version=@pulsar:version_number@}
-
-Here is an example of a response, which is returned in JSON format.
-
-```json
-
-{
-    "version": "<the-version-number-of-the-schema>",
-    "type": "<the-schema-type>",
-    "timestamp": "<the-creation-timestamp-of-the-version-of-the-schema>",
-    "data": "<an-utf8-encoded-string-of-schema-definition-data>",
-    "properties": {} // the properties associated with the schema
-}
-
-```
-
-The response includes the following fields:
-
-| Field | Description |
-| --- | --- |
-| `version` | The schema version, which is a long number. |
-| `type` | The schema type. |
-| `timestamp` | The timestamp of creating this version of the schema. |
-| `data` | The schema definition data, which is encoded in the UTF-8 charset. If the schema is a **primitive** schema, this field should be blank. If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition. |
-| `properties` | The additional properties associated with the schema. |
-
-
-```java
-
-SchemaInfo getSchema(String topic)
-
-```
-
-The `SchemaInfo` includes the following fields:

-| Field | Description |
-| --- | --- |
-| `name` | The schema name. |
-| `type` | The schema type. |
-| `schema` | A byte array of the schema definition data, which is encoded in the UTF-8 charset. If the schema is a **primitive** schema, this byte array should be empty. If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition converted to a byte array. |
-| `properties` | The additional properties associated with the schema. |
-
-Here is an example of `SchemaInfo`:
-
-```java
-
-PulsarAdmin admin = …;
-
-SchemaInfo si = admin.getSchema("my-tenant/my-ns/my-topic");
-
-```
-
-
-````
-
-### Get a schema (specific)
-
-To get a specific version of a schema, you can use one of the following methods.
-
-````mdx-code-block
-
-Use the `get` subcommand.
-
-```bash
-
-$ pulsar-admin schemas get <topic-name> --version=<version>
-
-```
-
-Send a `GET` request to a schema endpoint: {@inject: endpoint|GET|/admin/v2/schemas/:tenant/:namespace/:topic/schema/:version|operation/getSchema?version=@pulsar:version_number@}
-
-Here is an example of a response, which is returned in JSON format.
-
-```json
-
-{
-    "version": "<the-version-number-of-the-schema>",
-    "type": "<the-schema-type>",
-    "timestamp": "<the-creation-timestamp-of-the-version-of-the-schema>",
-    "data": "<an-utf8-encoded-string-of-schema-definition-data>",
-    "properties": {} // the properties associated with the schema
-}
-
-```
-
-The response includes the following fields:
-
-| Field | Description |
-| --- | --- |
-| `version` | The schema version, which is a long number. |
-| `type` | The schema type. |
-| `timestamp` | The timestamp of creating this version of the schema. |
-| `data` | The schema definition data, which is encoded in the UTF-8 charset. If the schema is a **primitive** schema, this field should be blank. If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition. |
-| `properties` | The additional properties associated with the schema. |
-
-
-```java
-
-SchemaInfo getSchema(String topic, long version)
-
-```
-
-The `SchemaInfo` includes the following fields:
-
-| Field | Description |
-| --- | --- |
-| `name` | The schema name. |
-| `type` | The schema type. |
-| `schema` | A byte array of the schema definition data, which is encoded in UTF-8. If the schema is a **primitive** schema, this byte array should be empty. If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition converted to a byte array. |
-| `properties` | The additional properties associated with the schema. |
-
-Here is an example of `SchemaInfo`:
-
-```java
-
-PulsarAdmin admin = …;
-
-SchemaInfo si = admin.getSchema("my-tenant/my-ns/my-topic", 1L);
-
-```
-
    - -
    -```` - -### Extract a schema - -To provide a schema via a topic, you can use the following method. - -````mdx-code-block - - - - -Use the `extract` subcommand. - -```bash - -$ pulsar-admin schemas extract --classname --jar --type - -``` - - - - -```` - -### Delete a schema - -To delete a schema for a topic, you can use one of the following methods. - -:::note - -In any case, the **delete** action deletes **all versions** of a schema registered for a topic. - -::: - -````mdx-code-block - - - - -Use the `delete` subcommand. - -```bash - -$ pulsar-admin schemas delete - -``` - - - - -Send a `DELETE` request to a schema endpoint: {@inject: endpoint|DELETE|/admin/v2/schemas/:tenant/:namespace/:topic/schema|operation/deleteSchema?version=@pulsar:version_number@} - -Here is an example of a response, which is returned in JSON format. - -```json - -{ - "version": "", -} - -``` - -The response includes the following field: - -Field | Description | ----|---| -`version` | The schema version, which is a long number. | - - - - -```java - -void deleteSchema(String topic) - -``` - -Here is an example of deleting a schema. - -```java - -PulsarAdmin admin = …; - -admin.deleteSchema("my-tenant/my-ns/my-topic"); - -``` - - - - -```` - -## Custom schema storage - -By default, Pulsar stores various data types of schemas in [Apache BookKeeper](https://bookkeeper.apache.org) deployed alongside Pulsar. - -However, you can use another storage system if needed. - -### Implement - -To use a non-default (non-BookKeeper) storage system for Pulsar schemas, you need to implement the following Java interfaces: - -* [SchemaStorage interface](#schemastorage-interface) - -* [SchemaStorageFactory interface](#schemastoragefactory-interface) - -#### SchemaStorage interface - -The `SchemaStorage` interface has the following methods: - -```java - -public interface SchemaStorage { - // How schemas are updated - CompletableFuture put(String key, byte[] value, byte[] hash); - - // How schemas are fetched from storage - CompletableFuture get(String key, SchemaVersion version); - - // How schemas are deleted - CompletableFuture delete(String key); - - // Utility method for converting a schema version byte array to a SchemaVersion object - SchemaVersion versionFromBytes(byte[] version); - - // Startup behavior for the schema storage client - void start() throws Exception; - - // Shutdown behavior for the schema storage client - void close() throws Exception; -} - -``` - -:::tip - -For a complete example of **schema storage** implementation, see [BookKeeperSchemaStorage](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/schema/BookkeeperSchemaStorage.java) class. - -::: - -#### SchemaStorageFactory interface - -The `SchemaStorageFactory` interface has the following method: - -```java - -public interface SchemaStorageFactory { - @NotNull - SchemaStorage create(PulsarService pulsar) throws Exception; -} - -``` - -:::tip - -For a complete example of **schema storage factory** implementation, see [BookKeeperSchemaStorageFactory](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/schema/BookkeeperSchemaStorageFactory.java) class. - -::: - -### Deploy - -To use your custom schema storage implementation, perform the following steps. - -1. Package the implementation in a [JAR](https://docs.oracle.com/javase/tutorial/deployment/jar/basicsindex.html) file. - -2. 
Add the JAR file to the `lib` folder in your Pulsar binary or source distribution. - -3. Change the `schemaRegistryStorageClassName` configuration in `broker.conf` to your custom factory class. - -4. Start Pulsar. - -## Set schema compatibility check strategy - -You can set [schema compatibility check strategy](schema-evolution-compatibility.md#schema-compatibility-check-strategy) at the topic, namespace or broker level. - -The schema compatibility check strategy set at different levels has priority: topic level > namespace level > broker level. - -- If you set the strategy at both topic and namespace level, it uses the topic-level strategy. - -- If you set the strategy at both namespace and broker level, it uses the namespace-level strategy. - -- If you do not set the strategy at any level, it uses the `FULL` strategy. For all available values, see [here](schema-evolution-compatibility.md#schema-compatibility-check-strategy). - - -### Topic level - -To set a schema compatibility check strategy at the topic level, use one of the following methods. - -````mdx-code-block - - - - -Use the [`pulsar-admin topicPolicies set-schema-compatibility-strategy`](/tools/pulsar-admin/) command. - -```shell - -pulsar-admin topicPolicies set-schema-compatibility-strategy - -``` - - - - -Send a `PUT` request to this endpoint: {@inject: endpoint|PUT|/admin/v2/topics/:tenant/:namespace/:topic|operation/schemaCompatibilityStrategy?version=@pulsar:version_number@} - - - - -```java - -void setSchemaCompatibilityStrategy(String topic, SchemaCompatibilityStrategy strategy) - -``` - -Here is an example of setting a schema compatibility check strategy at the topic level. - -```java - -PulsarAdmin admin = …; - -admin.topicPolicies().setSchemaCompatibilityStrategy("my-tenant/my-ns/my-topic", SchemaCompatibilityStrategy.ALWAYS_INCOMPATIBLE); - -``` - - - - -```` -
    -To get the topic-level schema compatibility check strategy, use one of the following methods. - -````mdx-code-block - - - - -Use the [`pulsar-admin topicPolicies get-schema-compatibility-strategy`](/tools/pulsar-admin/) command. - -```shell - -pulsar-admin topicPolicies get-schema-compatibility-strategy - -``` - - - - -Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v2/topics/:tenant/:namespace/:topic|operation/schemaCompatibilityStrategy?version=@pulsar:version_number@} - - - - -```java - -SchemaCompatibilityStrategy getSchemaCompatibilityStrategy(String topic, boolean applied) - -``` - -Here is an example of getting the topic-level schema compatibility check strategy. - -```java - -PulsarAdmin admin = …; - -// get the current applied schema compatibility strategy -admin.topicPolicies().getSchemaCompatibilityStrategy("my-tenant/my-ns/my-topic", true); - -// only get the schema compatibility strategy from topic policies -admin.topicPolicies().getSchemaCompatibilityStrategy("my-tenant/my-ns/my-topic", false); - -``` - - - - -```` -
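For example, using the same illustrative topic name as above, you can check which strategy is currently in effect:

```shell

pulsar-admin topicPolicies get-schema-compatibility-strategy persistent://my-tenant/my-ns/my-topic

```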
    -To remove the topic-level schema compatibility check strategy, use one of the following methods. - -````mdx-code-block - - - - -Use the [`pulsar-admin topicPolicies remove-schema-compatibility-strategy`](/tools/pulsar-admin/) command. - -```shell - -pulsar-admin topicPolicies remove-schema-compatibility-strategy - -``` - - - - -Send a `DELETE` request to this endpoint: {@inject: endpoint|DELETE|/admin/v2/topics/:tenant/:namespace/:topic|operation/schemaCompatibilityStrategy?version=@pulsar:version_number@} - - - - -```java - -void removeSchemaCompatibilityStrategy(String topic) - -``` - -Here is an example of removing the topic-level schema compatibility check strategy. - -```java - -PulsarAdmin admin = …; - -admin.removeSchemaCompatibilityStrategy("my-tenant/my-ns/my-topic"); - -``` - - - - -```` - - -### Namespace level - -You can set schema compatibility check strategy at namespace level using one of the following methods. - -````mdx-code-block - - - - -Use the [`pulsar-admin namespaces set-schema-compatibility-strategy`](/tools/pulsar-admin/) command. - -```shell - -pulsar-admin namespaces set-schema-compatibility-strategy options - -``` - - - - -Send a `PUT` request to this endpoint: {@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace|operation/schemaCompatibilityStrategy?version=@pulsar:version_number@} - - - - -Use the [`setSchemaCompatibilityStrategy`](/api/admin/)method. - -```java - -admin.namespaces().setSchemaCompatibilityStrategy("test", SchemaCompatibilityStrategy.FULL); - -``` - - - - -```` - -### Broker level - -You can set schema compatibility check strategy at broker level by setting `schemaCompatibilityStrategy` in [`broker.conf`](https://github.com/apache/pulsar/blob/f24b4890c278f72a67fe30e7bf22dc36d71aac6a/conf/broker.conf#L1240) or [`standalone.conf`](https://github.com/apache/pulsar/blob/master/conf/standalone.conf) file. - -**Example** - -``` - -schemaCompatibilityStrategy=ALWAYS_INCOMPATIBLE - -``` - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/schema-understand.md b/site2/website/versioned_docs/version-2.10.0-deprecated/schema-understand.md deleted file mode 100644 index 55bc662c666338..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/schema-understand.md +++ /dev/null @@ -1,576 +0,0 @@ ---- -id: schema-understand -title: Understand schema -sidebar_label: "Understand schema" -original_id: schema-understand ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -This chapter explains the basic concepts of Pulsar schema, focuses on the topics of particular importance, and provides additional background. - -## SchemaInfo - -Pulsar schema is defined in a data structure called `SchemaInfo`. - -The `SchemaInfo` is stored and enforced on a per-topic basis and cannot be stored at the namespace or tenant level. - -A `SchemaInfo` consists of the following fields: - -| Field | Description | -| --- | --- | -| `name` | Schema name (a string). | -| `type` | Schema type, which determines how to interpret the schema data.
<li>Predefined schema: see [here](schema-understand.md#schema-type).</li>
<li>Customized schema: it is left as an empty string.</li>
  254. | -| `schema`(`payload`) | Schema data, which is a sequence of 8-bit unsigned bytes and schema-type specific. | -| `properties` | It is a user defined properties as a string/string map. Applications can use this bag for carrying any application specific logics. Possible properties might be the Git hash associated with the schema, an environment string like `dev` or `prod`. | - -**Example** - -This is the `SchemaInfo` of a string. - -```json - -{ - "name": "test-string-schema", - "type": "STRING", - "schema": "", - "properties": {} -} - -``` - -## Schema type - -Pulsar supports various schema types, which are mainly divided into two categories: - -* Primitive type - -* Complex type - -### Primitive type - -Currently, Pulsar supports the following primitive types: - -| Primitive Type | Description | -|---|---| -| `BOOLEAN` | A binary value | -| `INT8` | A 8-bit signed integer | -| `INT16` | A 16-bit signed integer | -| `INT32` | A 32-bit signed integer | -| `INT64` | A 64-bit signed integer | -| `FLOAT` | A single precision (32-bit) IEEE 754 floating-point number | -| `DOUBLE` | A double-precision (64-bit) IEEE 754 floating-point number | -| `BYTES` | A sequence of 8-bit unsigned bytes | -| `STRING` | A Unicode character sequence | -| `TIMESTAMP` (`DATE`, `TIME`) | A logic type represents a specific instant in time with millisecond precision.
    It stores the number of milliseconds since `January 1, 1970, 00:00:00 GMT` as an `INT64` value | -| INSTANT | A single instantaneous point on the time-line with nanoseconds precision| -| LOCAL_DATE | An immutable date-time object that represents a date, often viewed as year-month-day| -| LOCAL_TIME | An immutable date-time object that represents a time, often viewed as hour-minute-second. Time is represented to nanosecond precision.| -| LOCAL_DATE_TIME | An immutable date-time object that represents a date-time, often viewed as year-month-day-hour-minute-second | - -For primitive types, Pulsar does not store any schema data in `SchemaInfo`. The `type` in `SchemaInfo` is used to determine how to serialize and deserialize the data. - -Some of the primitive schema implementations can use `properties` to store implementation-specific tunable settings. For example, a `string` schema can use `properties` to store the encoding charset to serialize and deserialize strings. - -The conversions between **Pulsar schema types** and **language-specific primitive types** are as below. - -| Schema Type | Java Type| Python Type | Go Type | -|---|---|---|---| -| BOOLEAN | boolean | bool | bool | -| INT8 | byte | | int8 | -| INT16 | short | | int16 | -| INT32 | int | | int32 | -| INT64 | long | | int64 | -| FLOAT | float | float | float32 | -| DOUBLE | double | float | float64| -| BYTES | byte[], ByteBuffer, ByteBuf | bytes | []byte | -| STRING | string | str | string| -| TIMESTAMP | java.sql.Timestamp | | | -| TIME | java.sql.Time | | | -| DATE | java.util.Date | | | -| INSTANT | java.time.Instant | | | -| LOCAL_DATE | java.time.LocalDate | | | -| LOCAL_TIME | java.time.LocalDateTime | | -| LOCAL_DATE_TIME | java.time.LocalTime | | - -**Example** - -This example demonstrates how to use a string schema. - -1. Create a producer with a string schema and send messages. - - ```java - - Producer producer = client.newProducer(Schema.STRING).create(); - producer.newMessage().value("Hello Pulsar!").send(); - - ``` - -2. Create a consumer with a string schema and receive messages. - - ```java - - Consumer consumer = client.newConsumer(Schema.STRING).subscribe(); - consumer.receive(); - - ``` - -### Complex type - -Currently, Pulsar supports the following complex types: - -| Complex Type | Description | -|---|---| -| `keyvalue` | Represents a complex type of a key/value pair. | -| `struct` | Handles structured data. It supports `AvroBaseStructSchema` and `ProtobufNativeSchema`. | - -#### keyvalue - -`Keyvalue` schema helps applications define schemas for both key and value. - -For `SchemaInfo` of `keyvalue` schema, Pulsar stores the `SchemaInfo` of key schema and the `SchemaInfo` of value schema together. - -Pulsar provides the following methods to encode a key/value pair in messages: - -* `INLINE` - -* `SEPARATED` - -You can choose the encoding type when constructing the key/value schema. - -````mdx-code-block - - - - -Key/value pairs are encoded together in the message payload. - - - - -Key is encoded in the message key and the value is encoded in the message payload. - -**Example** - -This example shows how to construct a key/value schema and then use it to produce and consume messages. - -1. Construct a key/value schema with `INLINE` encoding type. - - ```java - - Schema> kvSchema = Schema.KeyValue( - Schema.INT32, - Schema.STRING, - KeyValueEncodingType.INLINE - ); - - ``` - -2. Optionally, construct a key/value schema with `SEPARATED` encoding type. 
- - ```java - - Schema> kvSchema = Schema.KeyValue( - Schema.INT32, - Schema.STRING, - KeyValueEncodingType.SEPARATED - ); - - ``` - -3. Produce messages using a key/value schema. - - ```java - - Schema> kvSchema = Schema.KeyValue( - Schema.INT32, - Schema.STRING, - KeyValueEncodingType.SEPARATED - ); - - Producer> producer = client.newProducer(kvSchema) - .topic(TOPIC) - .create(); - - final int key = 100; - final String value = "value-100"; - - // send the key/value message - producer.newMessage() - .value(new KeyValue(key, value)) - .send(); - - ``` - -4. Consume messages using a key/value schema. - - ```java - - Schema> kvSchema = Schema.KeyValue( - Schema.INT32, - Schema.STRING, - KeyValueEncodingType.SEPARATED - ); - - Consumer> consumer = client.newConsumer(kvSchema) - ... - .topic(TOPIC) - .subscriptionName(SubscriptionName).subscribe(); - - // receive key/value pair - Message> msg = consumer.receive(); - KeyValue kv = msg.getValue(); - - ``` - - - - -```` - -#### struct - -This section describes the details of type and usage of the `struct` schema. - -##### Type - -`struct` schema supports `AvroBaseStructSchema` and `ProtobufNativeSchema`. - -|Type|Description| ----|---| -`AvroBaseStructSchema`|Pulsar uses [Avro Specification](http://avro.apache.org/docs/current/spec.html) to declare the schema definition for `AvroBaseStructSchema`, which supports `AvroSchema`, `JsonSchema`, and `ProtobufSchema`.

    This allows Pulsar:
    - to use the same tools to manage schema definitions
    - to use different serialization or deserialization methods to handle data| -`ProtobufNativeSchema`|`ProtobufNativeSchema` is based on protobuf native Descriptor.

    This allows Pulsar:
    - to use native protobuf-v3 to serialize or deserialize data
    - to use `AutoConsume` to deserialize data. - -##### Usage - -Pulsar provides the following methods to use the `struct` schema: - -* `static` - -* `generic` - -* `SchemaDefinition` - -````mdx-code-block - - - - -You can predefine the `struct` schema, which can be a POJO in Java, a `struct` in Go, or classes generated by Avro or Protobuf tools. - -**Example** - -Pulsar gets the schema definition from the predefined `struct` using an Avro library. The schema definition is the schema data stored as a part of the `SchemaInfo`. - -1. Create the _User_ class to define the messages sent to Pulsar topics. - - ```java - - @Builder - @AllArgsConstructor - @NoArgsConstructor - public static class User { - String name; - int age; - } - - ``` - -2. Create a producer with a `struct` schema and send messages. - - ```java - - Producer producer = client.newProducer(Schema.AVRO(User.class)).create(); - producer.newMessage().value(User.builder().name("pulsar-user").age(1).build()).send(); - - ``` - -3. Create a consumer with a `struct` schema and receive messages - - ```java - - Consumer consumer = client.newConsumer(Schema.AVRO(User.class)).subscribe(); - User user = consumer.receive(); - - ``` - - - - -Sometimes applications do not have pre-defined structs, and you can use this method to define schema and access data. - -You can define the `struct` schema using the `GenericSchemaBuilder`, generate a generic struct using `GenericRecordBuilder` and consume messages into `GenericRecord`. - -**Example** - -1. Use `RecordSchemaBuilder` to build a schema. - - ```java - - RecordSchemaBuilder recordSchemaBuilder = SchemaBuilder.record("schemaName"); - recordSchemaBuilder.field("intField").type(SchemaType.INT32); - SchemaInfo schemaInfo = recordSchemaBuilder.build(SchemaType.AVRO); - - Producer producer = client.newProducer(Schema.generic(schemaInfo)).create(); - - ``` - -2. Use `RecordBuilder` to build the struct records. - - ```java - - producer.newMessage().value(schema.newRecordBuilder() - .set("intField", 32) - .build()).send(); - - ``` - - - - -You can define the `schemaDefinition` to generate a `struct` schema. - -**Example** - -1. Create the _User_ class to define the messages sent to Pulsar topics. - - ```java - - @Builder - @AllArgsConstructor - @NoArgsConstructor - public static class User { - String name; - int age; - } - - ``` - -2. Create a producer with a `SchemaDefinition` and send messages. - - ```java - - SchemaDefinition schemaDefinition = SchemaDefinition.builder().withPojo(User.class).build(); - Producer producer = client.newProducer(Schema.AVRO(schemaDefinition)).create(); - producer.newMessage().value(User.builder().name("pulsar-user").age(1).build()).send(); - - ``` - -3. Create a consumer with a `SchemaDefinition` schema and receive messages - - ```java - - SchemaDefinition schemaDefinition = SchemaDefinition.builder().withPojo(User.class).build(); - Consumer consumer = client.newConsumer(Schema.AVRO(schemaDefinition)).subscribe(); - User user = consumer.receive().getValue(); - - ``` - - - - -```` - -### Auto Schema - -If you don't know the schema type of a Pulsar topic in advance, you can use AUTO schema to produce or consume generic records to or from brokers. - -| Auto Schema Type | Description | -|---|---| -| `AUTO_PRODUCE` | This is useful for transferring data **from a producer to a Pulsar topic that has a schema**. | -| `AUTO_CONSUME` | This is useful for transferring data **from a Pulsar topic that has a schema to a consumer**. 
| - -#### AUTO_PRODUCE - -`AUTO_PRODUCE` schema helps a producer validate whether the bytes sent by the producer is compatible with the schema of a topic. - -**Example** - -Suppose that: - -* You have a producer processing messages from a Kafka topic _K_. - -* You have a Pulsar topic _P_, and you do not know its schema type. - -* Your application reads the messages from _K_ and writes the messages to _P_. - -In this case, you can use `AUTO_PRODUCE` to verify whether the bytes produced by _K_ can be sent to _P_ or not. - -```java - -Produce pulsarProducer = client.newProducer(Schema.AUTO_PRODUCE()) - … - .create(); - -byte[] kafkaMessageBytes = … ; - -pulsarProducer.produce(kafkaMessageBytes); - -``` - -#### AUTO_CONSUME - -`AUTO_CONSUME` schema helps a Pulsar topic validate whether the bytes sent by a Pulsar topic is compatible with a consumer, that is, the Pulsar topic deserializes messages into language-specific objects using the `SchemaInfo` retrieved from broker-side. - -Currently, `AUTO_CONSUME` supports AVRO, JSON and ProtobufNativeSchema schemas. It deserializes messages into `GenericRecord`. - -**Example** - -Suppose that: - -* You have a Pulsar topic _P_. - -* You have a consumer (for example, MySQL) receiving messages from the topic _P_. - -* Your application reads the messages from _P_ and writes the messages to MySQL. - -In this case, you can use `AUTO_CONSUME` to verify whether the bytes produced by _P_ can be sent to MySQL or not. - -```java - -Consumer pulsarConsumer = client.newConsumer(Schema.AUTO_CONSUME()) - … - .subscribe(); - -Message msg = consumer.receive() ; -GenericRecord record = msg.getValue(); - -``` - -### Native Avro Schema - -When migrating or ingesting event or message data from external systems (such as Kafka and Cassandra), the events are often already serialized in Avro format. The applications producing the data typically have validated the data against their schemas (including compatibility checks) and stored them in a database or a dedicated service (such as a schema registry). The schema of each serialized data record is usually retrievable by some metadata attached to that record. In such cases, a Pulsar producer doesn't need to repeat the schema validation step when sending the ingested events to a topic. All it needs to do is passing each message or event with its schema to Pulsar. - -Hence, we provide `Schema.NATIVE_AVRO` to wrap a native Avro schema of type `org.apache.avro.Schema`. The result is a schema instance of Pulsar that accepts a serialized Avro payload without validating it against the wrapped Avro schema. - -**Example** - -```java - -org.apache.avro.Schema nativeAvroSchema = … ; - -Producer producer = pulsarClient.newProducer().topic("ingress").create(); - -byte[] content = … ; - -producer.newMessage(Schema.NATIVE_AVRO(nativeAvroSchema)).value(content).send(); - -``` - -## Schema version - -Each `SchemaInfo` stored with a topic has a version. Schema version manages schema changes happening within a topic. - -Messages produced with a given `SchemaInfo` is tagged with a schema version, so when a message is consumed by a Pulsar client, the Pulsar client can use the schema version to retrieve the corresponding `SchemaInfo` and then use the `SchemaInfo` to deserialize data. - -Schemas are versioned in succession. Schema storage happens in a broker that handles the associated topics so that version assignments can be made. 
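As a minimal illustration (assuming a consumer already created with `Schema.AUTO_CONSUME()` as shown earlier), a client can inspect the version that the broker stamped on a message:

```java

Message<GenericRecord> msg = consumer.receive();
// Raw version bytes assigned by the broker; the client library uses these
// internally to look up the matching SchemaInfo before deserializing.
byte[] schemaVersionBytes = msg.getSchemaVersion();

```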
Once a version is assigned to or fetched for a schema, all subsequent messages produced by that producer are tagged with the appropriate version.

**Example**

The following example illustrates how the schema version works.

Suppose that a Pulsar [Java client](client-libraries-java.md) created using the code below attempts to connect to Pulsar and begins to send messages:

```java

PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar://localhost:6650")
    .build();

Producer<SensorReading> producer = client.newProducer(JSONSchema.of(SensorReading.class))
    .topic("sensor-data")
    .sendTimeout(3, TimeUnit.SECONDS)
    .create();

```

The table below lists the possible scenarios when this connection attempt occurs and what happens in each scenario:

| Scenario | What happens |
| --- | --- |
| No schema exists for the topic. | (1) The producer is created using the given schema. (2) Since no existing schema is compatible with the `SensorReading` schema, the schema is transmitted to the broker and stored. (3) Any consumer created using the same schema or topic can consume messages from the `sensor-data` topic. |
| A schema already exists.<br />The producer connects using the same schema that is already stored. | (1) The schema is transmitted to the broker. (2) The broker determines that the schema is compatible. (3) The broker attempts to store the schema in [BookKeeper](concepts-architecture-overview.md#persistent-storage) but then determines that it's already stored, so it is used to tag produced messages. |
| A schema already exists.<br />The producer connects using a new schema that is compatible.
  262. | (1) The schema is transmitted to the broker. (2) The broker determines that the schema is compatible and stores the new schema as the current version (with a new version number). | - -## How does schema work - -Pulsar schemas are applied and enforced at the **topic** level (schemas cannot be applied at the namespace or tenant level). - -Producers and consumers upload schemas to brokers, so Pulsar schemas work on the producer side and the consumer side. - -### Producer side - -This diagram illustrates how does schema work on the Producer side. - -![Schema works at the producer side](/assets/schema-producer.png) - -1. The application uses a schema instance to construct a producer instance. - - The schema instance defines the schema for the data being produced using the producer instance. - - Take AVRO as an example, Pulsar extracts schema definition from the POJO class and constructs the `SchemaInfo` that the producer needs to pass to a broker when it connects. - -2. The producer connects to the broker with the `SchemaInfo` extracted from the passed-in schema instance. - -3. The broker looks up the schema in the schema storage to check if it is already a registered schema. - -4. If yes, the broker skips the schema validation since it is a known schema, and returns the schema version to the producer. - -5. If no, the broker verifies whether a schema can be automatically created in this namespace: - - * If `isAllowAutoUpdateSchema` sets to **true**, then a schema can be created, and the broker validates the schema based on the schema compatibility check strategy defined for the topic. - - * If `isAllowAutoUpdateSchema` sets to **false**, then a schema can not be created, and the producer is rejected to connect to the broker. - -**Tip**: - -`isAllowAutoUpdateSchema` can be set via **Pulsar admin API** or **REST API.** - -For how to set `isAllowAutoUpdateSchema` via Pulsar admin API, see [Manage AutoUpdate Strategy](schema-manage.md/#manage-autoupdate-strategy). - -6. If the schema is allowed to be updated, then the compatible strategy check is performed. - - * If the schema is compatible, the broker stores it and returns the schema version to the producer. - - All the messages produced by this producer are tagged with the schema version. - - * If the schema is incompatible, the broker rejects it. - -### Consumer side - -This diagram illustrates how does Schema work on the consumer side. - -![Schema works at the consumer side](/assets/schema-consumer.png) - -1. The application uses a schema instance to construct a consumer instance. - - The schema instance defines the schema that the consumer uses for decoding messages received from a broker. - -2. The consumer connects to the broker with the `SchemaInfo` extracted from the passed-in schema instance. - -3. The broker determines whether the topic has one of them (a schema/data/a local consumer and a local producer). - -4. If a topic does not have all of them (a schema/data/a local consumer and a local producer): - - * If `isAllowAutoUpdateSchema` sets to **true**, then the consumer registers a schema and it is connected to a broker. - - * If `isAllowAutoUpdateSchema` sets to **false**, then the consumer is rejected to connect to a broker. - -5. If a topic has one of them (a schema/data/a local consumer and a local producer), then the schema compatibility check is performed. - - * If the schema passes the compatibility check, then the consumer is connected to the broker. 
- - * If the schema does not pass the compatibility check, then the consumer is rejected to connect to the broker. - -6. The consumer receives messages from the broker. - - If the schema used by the consumer supports schema versioning (for example, AVRO schema), the consumer fetches the `SchemaInfo` of the version tagged in messages and uses the passed-in schema and the schema tagged in messages to decode the messages. diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/security-athenz.md b/site2/website/versioned_docs/version-2.10.0-deprecated/security-athenz.md deleted file mode 100644 index 8a39fe25316d07..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/security-athenz.md +++ /dev/null @@ -1,98 +0,0 @@ ---- -id: security-athenz -title: Authentication using Athenz -sidebar_label: "Authentication using Athenz" -original_id: security-athenz ---- - -[Athenz](https://github.com/AthenZ/athenz) is a role-based authentication/authorization system. In Pulsar, you can use Athenz role tokens (also known as *z-tokens*) to establish the identify of the client. - -## Athenz authentication settings - -A [decentralized Athenz system](https://github.com/AthenZ/athenz/blob/master/docs/decent_authz_flow.md) contains an [authori**Z**ation **M**anagement **S**ystem](https://github.com/AthenZ/athenz/blob/master/docs/setup_zms.md) (ZMS) server and an [authori**Z**ation **T**oken **S**ystem](https://github.com/AthenZ/athenz/blob/master/docs/setup_zts) (ZTS) server. - -To begin, you need to set up Athenz service access control. You need to create domains for the *provider* (which provides some resources to other services with some authentication/authorization policies) and the *tenant* (which is provisioned to access some resources in a provider). In this case, the provider corresponds to the Pulsar service itself and the tenant corresponds to each application using Pulsar (typically, a [tenant](reference-terminology.md#tenant) in Pulsar). - -### Create the tenant domain and service - -On the [tenant](reference-terminology.md#tenant) side, you need to do the following things: - -1. Create a domain, such as `shopping` -2. Generate a private/public key pair -3. Create a service, such as `some_app`, on the domain with the public key - -Note that you need to specify the private key generated in step 2 when the Pulsar client connects to the [broker](reference-terminology.md#broker) (see client configuration examples for [Java](client-libraries-java.md#tls-authentication) and [C++](client-libraries-cpp.md#tls-authentication)). - -For more specific steps involving the Athenz UI, refer to [Example Service Access Control Setup](https://github.com/AthenZ/athenz/blob/master/docs/example_service_athenz_setup.md#client-tenant-domain). - -### Create the provider domain and add the tenant service to some role members - -On the provider side, you need to do the following things: - -1. Create a domain, such as `pulsar` -2. Create a role -3. Add the tenant service to members of the role - -Note that you can specify any action and resource in step 2 since they are not used on Pulsar. In other words, Pulsar uses the Athenz role token only for authentication, *not* for authorization. - -For more specific steps involving UI, refer to [Example Service Access Control Setup](https://github.com/AthenZ/athenz/blob/master/docs/example_service_athenz_setup.md#server-provider-domain). 
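If you script the setup instead of using the Athenz UI, the steps above map roughly onto the Athenz `zms-cli` tool. The following is only a sketch: the domain name `pulsar`, the role name `brokers`, and the member `shopping.some_app` are the examples from this page, and you should confirm the exact subcommand syntax against the Athenz CLI documentation for your version:

```shell

# Create the provider domain (requires Athenz system administrator rights)
zms-cli add-domain pulsar

# Create a role in the provider domain and add the tenant service as a member
zms-cli -d pulsar add-group-role brokers shopping.some_app

```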
- -## Configure the broker for Athenz - -> ### TLS encryption -> -> Note that when you are using Athenz as an authentication provider, you had better use TLS encryption -> as it can protect role tokens from being intercepted and reused. (for more details involving TLS encryption see [Architecture - Data Model](https://github.com/AthenZ/athenz/blob/master/docs/data_model)). - -In the `conf/broker.conf` configuration file in your Pulsar installation, you need to provide the class name of the Athenz authentication provider as well as a comma-separated list of provider domain names. - -```properties - -# Add the Athenz auth provider -authenticationEnabled=true -authorizationEnabled=true -authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderAthenz -athenzDomainNames=pulsar - -# Enable TLS -tlsEnabled=true -tlsCertificateFilePath=/path/to/broker-cert.pem -tlsKeyFilePath=/path/to/broker-key.pem - -# Authentication settings of the broker itself. Used when the broker connects to other brokers, either in same or other clusters -brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationAthenz -brokerClientAuthenticationParameters={"tenantDomain":"shopping","tenantService":"some_app","providerDomain":"pulsar","privateKey":"file:///path/to/private.pem","keyId":"v1"} - -``` - -> A full listing of parameters is available in the `conf/broker.conf` file, you can also find the default -> values for those parameters in [Broker Configuration](reference-configuration.md#broker). - -## Configure clients for Athenz - -For more information on Pulsar client authentication using Athenz, see the following language-specific docs: - -* [Java client](client-libraries-java.md#athenz) - -## Configure CLI tools for Athenz - -[Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-pulsar-admin.md), [`pulsar-perf`](reference-cli-tools.md#pulsar-perf), and [`pulsar-client`](reference-cli-tools.md#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation. - -You need to add the following authentication parameters to the `conf/client.conf` config file to use Athenz with CLI tools of Pulsar: - -```properties - -# URL for the broker -serviceUrl=https://broker.example.com:8443/ - -# Set Athenz auth plugin and its parameters -authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationAthenz -authParams={"tenantDomain":"shopping","tenantService":"some_app","providerDomain":"pulsar","privateKey":"file:///path/to/private.pem","keyId":"v1"} - -# Enable TLS -useTls=true -tlsAllowInsecureConnection=false -tlsTrustCertsFilePath=/path/to/cacert.pem - -``` - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/security-authorization.md b/site2/website/versioned_docs/version-2.10.0-deprecated/security-authorization.md deleted file mode 100644 index 9cfd7c8c203f63..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/security-authorization.md +++ /dev/null @@ -1,130 +0,0 @@ ---- -id: security-authorization -title: Authentication and authorization in Pulsar -sidebar_label: "Authorization and ACLs" -original_id: security-authorization ---- - - -In Pulsar, the [authentication provider](security-overview.md#authentication-providers) is responsible for properly identifying clients and associating the clients with [role tokens](security-overview.md#role-tokens). If you only enable authentication, an authenticated role token has the ability to access all resources in the cluster. 
*Authorization* is the process that determines *what* clients are able to do. - -The role tokens with the most privileges are the *superusers*. The *superusers* can create and destroy tenants, along with having full access to all tenant resources. - -When a superuser creates a [tenant](reference-terminology.md#tenant), that tenant is assigned an admin role. A client with the admin role token can then create, modify and destroy namespaces, and grant and revoke permissions to *other role tokens* on those namespaces. - -## Broker and Proxy Setup - -### Enable authorization and assign superusers -You can enable the authorization and assign the superusers in the broker ([`conf/broker.conf`](reference-configuration.md#broker)) configuration files. - -```properties - -authorizationEnabled=true -superUserRoles=my-super-user-1,my-super-user-2 - -``` - -> A full list of parameters is available in the `conf/broker.conf` file. -> You can also find the default values for those parameters in [Broker Configuration](reference-configuration.md#broker). - -Typically, you use superuser roles for administrators, clients as well as broker-to-broker authorization. When you use [geo-replication](concepts-replication.md), every broker needs to be able to publish to all the other topics of clusters. - -You can also enable the authorization for the proxy in the proxy configuration file (`conf/proxy.conf`). Once you enable the authorization on the proxy, the proxy does an additional authorization check before forwarding the request to a broker. -If you enable authorization on the broker, the broker checks the authorization of the request when the broker receives the forwarded request. - -### Proxy Roles - -By default, the broker treats the connection between a proxy and the broker as a normal user connection. The broker authenticates the user as the role configured in `proxy.conf`(see ["Enable TLS Authentication on Proxies"](security-tls-authentication.md#enable-tls-authentication-on-proxies)). However, when the user connects to the cluster through a proxy, the user rarely requires the authentication. The user expects to be able to interact with the cluster as the role for which they have authenticated with the proxy. - -Pulsar uses *Proxy roles* to enable the authentication. Proxy roles are specified in the broker configuration file, [`conf/broker.conf`](reference-configuration.md#broker). If a client that is authenticated with a broker is one of its ```proxyRoles```, all requests from that client must also carry information about the role of the client that is authenticated with the proxy. This information is called the *original principal*. If the *original principal* is absent, the client is not able to access anything. - -You must authorize both the *proxy role* and the *original principal* to access a resource to ensure that the resource is accessible via the proxy. Administrators can take two approaches to authorize the *proxy role* and the *original principal*. - -The more secure approach is to grant access to the proxy roles each time you grant access to a resource. For example, if you have a proxy role named `proxy1`, when the superuser creates a tenant, you should specify `proxy1` as one of the admin roles. When a role is granted permissions to produce or consume from a namespace, if that client wants to produce or consume through a proxy, you should also grant `proxy1` the same permissions. - -Another approach is to make the proxy role a superuser. This allows the proxy to access all resources. 
The client still needs to authenticate with the proxy, and all requests made through the proxy have their role downgraded to the *original principal* of the authenticated client. However, if the proxy is compromised, a bad actor could get full access to your cluster. - -You can specify the roles as proxy roles in [`conf/broker.conf`](reference-configuration.md#broker). - -```properties - -proxyRoles=my-proxy-role - -# if you want to allow superusers to use the proxy (see above) -superUserRoles=my-super-user-1,my-super-user-2,my-proxy-role - -``` - -## Administer tenants - -Pulsar [instance](reference-terminology.md#instance) administrators or some kind of self-service portal typically provisions a Pulsar [tenant](reference-terminology.md#tenant). - -You can manage tenants using the [`pulsar-admin`](reference-pulsar-admin.md) tool. - -### Create a new tenant - -The following is an example tenant creation command: - -```shell - -$ bin/pulsar-admin tenants create my-tenant \ - --admin-roles my-admin-role \ - --allowed-clusters us-west,us-east - -``` - -This command creates a new tenant `my-tenant` that is allowed to use the clusters `us-west` and `us-east`. - -A client that successfully identifies itself as having the role `my-admin-role` is allowed to perform all administrative tasks on this tenant. - -The structure of topic names in Pulsar reflects the hierarchy between tenants, clusters, and namespaces: - -```shell - -persistent://tenant/namespace/topic - -``` - -### Manage permissions - -You can use [Pulsar Admin Tools](admin-api-permissions.md) for managing permission in Pulsar. - -### Pulsar admin authentication - -```java - -PulsarAdmin admin = PulsarAdmin.builder() - .serviceHttpUrl("http://broker:8080") - .authentication("com.org.MyAuthPluginClass", "param1:value1") - .build(); - -``` - -To use TLS: - -```java - -PulsarAdmin admin = PulsarAdmin.builder() - .serviceHttpUrl("https://broker:8080") - .authentication("com.org.MyAuthPluginClass", "param1:value1") - .tlsTrustCertsFilePath("/path/to/trust/cert") - .build(); - -``` - -## Authorize an authenticated client with multiple roles - -When a client is identified with multiple roles in a token (the type of role claim in the token is an array) during the authentication process, Pulsar supports to check the permissions of all the roles and further authorize the client as long as one of its roles has the required permissions. - -> **Note**
    -> This authorization method is only compatible with [JWT authentication](security-jwt.md). - -To enable this authorization method, configure the authorization provider as `MultiRolesTokenAuthorizationProvider` in the `conf/broker.conf` file. - - ```properties - - # Authorization provider fully qualified class-name - authorizationProvider=org.apache.pulsar.broker.authorization.MultiRolesTokenAuthorizationProvider - - ``` - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/security-basic-auth.md b/site2/website/versioned_docs/version-2.10.0-deprecated/security-basic-auth.md deleted file mode 100644 index 2585526bb478af..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/security-basic-auth.md +++ /dev/null @@ -1,127 +0,0 @@ ---- -id: security-basic-auth -title: Authentication using HTTP basic -sidebar_label: "Authentication using HTTP basic" ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - -[Basic authentication](https://en.wikipedia.org/wiki/Basic_access_authentication) is a simple authentication scheme built into the HTTP protocol, which uses base64-encoded username and password pairs as credentials. - -## Prerequisites - -Install [`htpasswd`](https://httpd.apache.org/docs/2.4/programs/htpasswd.html) in your environment to create a password file for storing username-password pairs. - -* For Ubuntu/Debian, run the following command to install `htpasswd`. - - ``` - apt install apache2-utils - ``` - -* For CentOS/RHEL, run the following command to install `htpasswd`. - - ``` - yum install httpd-tools - ``` - -## Create your authentication file - -:::note -Currently, you can use MD5 (recommended) and CRYPT encryption to authenticate your password. -::: - -Create a password file named `.htpasswd` with a user account `superuser/admin`: -* Use MD5 encryption (recommended): - - ``` - htpasswd -cmb /path/to/.htpasswd superuser admin - ``` - -* Use CRYPT encryption: - - ``` - htpasswd -cdb /path/to/.htpasswd superuser admin - ``` - -You can preview the content of your password file by running the following command: - -``` -cat path/to/.htpasswd -superuser:$apr1$GBIYZYFZ$MzLcPrvoUky16mLcK6UtX/ -``` - -## Enable basic authentication on brokers - -To configure brokers to authenticate clients, complete the following steps. - -1. Add the following parameters to the `conf/broker.conf` file. If you use a standalone Pulsar, you need to add these parameters to the `conf/standalone.conf` file. - - ``` - # Configuration to enable Basic authentication - authenticationEnabled=true - authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderBasic - - # Authentication settings of the broker itself. Used when the broker connects to other brokers, either in same or other clusters - brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationBasic - brokerClientAuthenticationParameters={"userId":"superuser","password":"admin"} - - # If this flag is set then the broker authenticates the original Auth data - # else it just accepts the originalPrincipal and authorizes it (if required). - authenticateOriginalAuthData=true - ``` - -2. Set an environment variable named `PULSAR_EXTRA_OPTS` and the value is `-Dpulsar.auth.basic.conf=/path/to/.htpasswd`. Pulsar reads this environment variable to implement HTTP basic authentication. - -## Enable basic authentication on proxies - -To configure proxies to authenticate clients, complete the following steps. - -1. 
Add the following parameters to the `conf/proxy.conf` file: - - ``` - # For clients connecting to the proxy - authenticationEnabled=true - authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderBasic - - # For the proxy to connect to brokers - brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationBasic - brokerClientAuthenticationParameters={"userId":"superuser","password":"admin"} - - # Whether client authorization credentials are forwarded to the broker for re-authorization. - # Authentication must be enabled via authenticationEnabled=true for this to take effect. - forwardAuthorizationCredentials=true - ``` - -2. Set an environment variable named `PULSAR_EXTRA_OPTS` and the value is `-Dpulsar.auth.basic.conf=/path/to/.htpasswd`. Pulsar reads this environment variable to implement HTTP basic authentication. - -## Configure basic authentication in CLI tools - -[Command-line tools](/docs/next/reference-cli-tools), such as [Pulsar-admin](/tools/pulsar-admin/), [Pulsar-perf](/tools/pulsar-perf/) and [Pulsar-client](/tools/pulsar-client/), use the `conf/client.conf` file in your Pulsar installation. To configure basic authentication in Pulsar CLI tools, you need to add the following parameters to the `conf/client.conf` file. - -``` -authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationBasic -authParams={"userId":"superuser","password":"admin"} -``` - - -## Configure basic authentication in Pulsar clients - -The following example shows how to configure basic authentication when using Pulsar clients. - - - - - ```java - AuthenticationBasic auth = new AuthenticationBasic(); - auth.configure("{\"userId\":\"superuser\",\"password\":\"admin\"}"); - PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://broker.example.com:6650") - .authentication(auth) - .build(); - ``` - - - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/security-bouncy-castle.md b/site2/website/versioned_docs/version-2.10.0-deprecated/security-bouncy-castle.md deleted file mode 100644 index be937055d8e311..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/security-bouncy-castle.md +++ /dev/null @@ -1,157 +0,0 @@ ---- -id: security-bouncy-castle -title: Bouncy Castle Providers -sidebar_label: "Bouncy Castle Providers" -original_id: security-bouncy-castle ---- - -## BouncyCastle Introduce - -`Bouncy Castle` is a Java library that complements the default Java Cryptographic Extension (JCE), -and it provides more cipher suites and algorithms than the default JCE provided by Sun. - -In addition to that, `Bouncy Castle` has lots of utilities for reading arcane formats like PEM and ASN.1 that no sane person would want to rewrite themselves. - -In Pulsar, security and crypto have dependencies on BouncyCastle Jars. For the detailed installing and configuring Bouncy Castle FIPS, see [BC FIPS Documentation](https://www.bouncycastle.org/documentation.html), especially the **User Guides** and **Security Policy** PDFs. - -`Bouncy Castle` provides both [FIPS](https://www.bouncycastle.org/fips_faq.html) and non-FIPS version. But in a JVM, you can not include both of the 2 versions, and you need to exclude the current version before include the other. - -In Pulsar, the security and crypto methods also depends on `Bouncy Castle`, especially in [TLS Authentication](security-tls-authentication.md) and [Transport Encryption](security-encryption.md). 
This document contains the configuration between BouncyCastle FIPS(BC-FIPS) and non-FIPS(BC-non-FIPS) version while using Pulsar. - -## How BouncyCastle modules packaged in Pulsar - -In Pulsar's `bouncy-castle` module, We provide 2 sub modules: `bouncy-castle-bc`(for non-FIPS version) and `bouncy-castle-bcfips`(for FIPS version), to package BC jars together to make the include and exclude of `Bouncy Castle` easier. - -To achieve this goal, we will need to package several `bouncy-castle` jars together into `bouncy-castle-bc` or `bouncy-castle-bcfips` jar. -Each of the original bouncy-castle jar is related with security, so BouncyCastle dutifully supplies signed of each JAR. -But when we do the re-package, Maven shade explodes the BouncyCastle jar file which puts the signatures into META-INF, -these signatures aren't valid for this new, uber-jar (signatures are only for the original BC jar). -Usually, You will meet error like `java.lang.SecurityException: Invalid signature file digest for Manifest main attributes`. - -You could exclude these signatures in mvn pom file to avoid above error, by - -```access transformers - -META-INF/*.SF -META-INF/*.DSA -META-INF/*.RSA - -``` - -But it can also lead to new, cryptic errors, e.g. `java.security.NoSuchAlgorithmException: PBEWithSHA256And256BitAES-CBC-BC SecretKeyFactory not available` -By explicitly specifying where to find the algorithm like this: `SecretKeyFactory.getInstance("PBEWithSHA256And256BitAES-CBC-BC","BC")` -It will get the real error: `java.security.NoSuchProviderException: JCE cannot authenticate the provider BC` - -So, we used a [executable packer plugin](https://github.com/nthuemmel/executable-packer-maven-plugin) that uses a jar-in-jar approach to preserve the BouncyCastle signature in a single, executable jar. - -### Include dependencies of BC-non-FIPS - -Pulsar module `bouncy-castle-bc`, which defined by `bouncy-castle/bc/pom.xml` contains the needed non-FIPS jars for Pulsar, and packaged as a jar-in-jar(need to provide `pkg`). - -```xml - - - org.bouncycastle - bcpkix-jdk15on - ${bouncycastle.version} - - - - org.bouncycastle - bcprov-ext-jdk15on - ${bouncycastle.version} - - -``` - -By using this `bouncy-castle-bc` module, you can easily include and exclude BouncyCastle non-FIPS jars. - -### Modules that include BC-non-FIPS module (`bouncy-castle-bc`) - -For Pulsar client, user need the bouncy-castle module, so `pulsar-client-original` will include the `bouncy-castle-bc` module, and have `pkg` set to reference the `jar-in-jar` package. -It is included as following example: - -```xml - - - org.apache.pulsar - bouncy-castle-bc - ${pulsar.version} - pkg - - -``` - -By default `bouncy-castle-bc` already included in `pulsar-client-original`, And `pulsar-client-original` has been included in a lot of other modules like `pulsar-client-admin`, `pulsar-broker`. -But for the above shaded jar and signatures reason, we should not package Pulsar's `bouncy-castle` module into `pulsar-client-all` other shaded modules directly, such as `pulsar-client-shaded`, `pulsar-client-admin-shaded` and `pulsar-broker-shaded`. -So in the shaded modules, we will exclude the `bouncy-castle` modules. - -```xml - - - - org.apache.pulsar:pulsar-client-original - - ** - - - org/bouncycastle/** - - - - -``` - -That means, `bouncy-castle` related jars are not shaded in these fat jars. - -### Module BC-FIPS (`bouncy-castle-bcfips`) - -Pulsar module `bouncy-castle-bcfips`, which defined by `bouncy-castle/bcfips/pom.xml` contains the needed FIPS jars for Pulsar. 
-Similar to `bouncy-castle-bc`, `bouncy-castle-bcfips` also packaged as a `jar-in-jar` package for easy include/exclude. - -```xml - - - org.bouncycastle - bc-fips - ${bouncycastlefips.version} - - - - org.bouncycastle - bcpkix-fips - ${bouncycastlefips.version} - - -``` - -### Exclude BC-non-FIPS and include BC-FIPS - -If you want to switch from BC-non-FIPS to BC-FIPS version, Here is an example for `pulsar-broker` module: - -```xml - - - org.apache.pulsar - pulsar-broker - ${pulsar.version} - - - org.apache.pulsar - bouncy-castle-bc - - - - - - org.apache.pulsar - bouncy-castle-bcfips - ${pulsar.version} - pkg - - -``` - - -For more example, you can reference module `bcfips-include-test`. - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/security-encryption.md b/site2/website/versioned_docs/version-2.10.0-deprecated/security-encryption.md deleted file mode 100644 index 10e6285b990a78..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/security-encryption.md +++ /dev/null @@ -1,335 +0,0 @@ ---- -id: security-encryption -title: Pulsar Encryption -sidebar_label: "End-to-End Encryption" -original_id: security-encryption ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -Applications can use Pulsar encryption to encrypt messages on the producer side and decrypt messages on the consumer side. You can use the public and private key pair that the application configures to perform encryption. Only the consumers with a valid key can decrypt the encrypted messages. - -## Asymmetric and symmetric encryption - -Pulsar uses a dynamically generated symmetric AES key to encrypt messages(data). You can use the application-provided ECDSA (Elliptic Curve Digital Signature Algorithm) or RSA (Rivest–Shamir–Adleman) key pair to encrypt the AES key(data key), so you do not have to share the secret with everyone. - -Key is a public and private key pair used for encryption or decryption. The producer key is the public key of the key pair, and the consumer key is the private key of the key pair. - -The application configures the producer with the public key. You can use this key to encrypt the AES data key. The encrypted data key is sent as part of message header. Only entities with the private key (in this case the consumer) are able to decrypt the data key which is used to decrypt the message. - -You can encrypt a message with more than one key. Any one of the keys used for encrypting the message is sufficient to decrypt the message. - -Pulsar does not store the encryption key anywhere in the Pulsar service. If you lose or delete the private key, your message is irretrievably lost, and is unrecoverable. - -## Producer -![alt text](/assets/pulsar-encryption-producer.jpg "Pulsar Encryption Producer") - -## Consumer -![alt text](/assets/pulsar-encryption-consumer.jpg "Pulsar Encryption Consumer") - -## Get started - -1. Create your ECDSA or RSA public and private key pair by using the following commands. - * ECDSA(for Java clients only) - - ```shell - - openssl ecparam -name secp521r1 -genkey -param_enc explicit -out test_ecdsa_privkey.pem - openssl ec -in test_ecdsa_privkey.pem -pubout -outform pem -out test_ecdsa_pubkey.pem - - ``` - - * RSA (for C++, Python and Node.js clients) - - ```shell - - openssl genrsa -out test_rsa_privkey.pem 2048 - openssl rsa -in test_rsa_privkey.pem -pubout -outform pkcs8 -out test_rsa_pubkey.pem - - ``` - -2. 
Add the public and private key to the key management and configure your producers to retrieve public keys and consumers clients to retrieve private keys. - -3. Implement the `CryptoKeyReader` interface, specifically `CryptoKeyReader.getPublicKey()` for producer and `CryptoKeyReader.getPrivateKey()` for consumer, which Pulsar client invokes to load the key. - -4. Add the encryption key name to the producer builder: PulsarClient.newProducer().addEncryptionKey("myapp.key"). - -5. Configure a `CryptoKeyReader` to a producer, consumer or reader. - -````mdx-code-block - - - -```java - -PulsarClient pulsarClient = PulsarClient.builder().serviceUrl("pulsar://localhost:6650").build(); -String topic = "persistent://my-tenant/my-ns/my-topic"; -// RawFileKeyReader is just an example implementation that's not provided by Pulsar -CryptoKeyReader keyReader = new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem"); - -Producer producer = pulsarClient.newProducer() - .topic(topic) - .cryptoKeyReader(keyReader) - .addEncryptionKey("myappkey") - .create(); - -Consumer consumer = pulsarClient.newConsumer() - .topic(topic) - .subscriptionName("my-subscriber-name") - .cryptoKeyReader(keyReader) - .subscribe(); - -Reader reader = pulsarClient.newReader() - .topic(topic) - .startMessageId(MessageId.earliest) - .cryptoKeyReader(keyReader) - .create(); - -``` - - - - -```c++ - -Client client("pulsar://localhost:6650"); -std::string topic = "persistent://my-tenant/my-ns/my-topic"; -// DefaultCryptoKeyReader is a built-in implementation that reads public key and private key from files -auto keyReader = std::make_shared("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem"); - -Producer producer; -ProducerConfiguration producerConf; -producerConf.setCryptoKeyReader(keyReader); -producerConf.addEncryptionKey("myappkey"); -client.createProducer(topic, producerConf, producer); - -Consumer consumer; -ConsumerConfiguration consumerConf; -consumerConf.setCryptoKeyReader(keyReader); -client.subscribe(topic, "my-subscriber-name", consumerConf, consumer); - -Reader reader; -ReaderConfiguration readerConf; -readerConf.setCryptoKeyReader(keyReader); -client.createReader(topic, MessageId::earliest(), readerConf, reader); - -``` - - - - -```python - -from pulsar import Client, CryptoKeyReader - -client = Client('pulsar://localhost:6650') -topic = 'persistent://my-tenant/my-ns/my-topic' -# CryptoKeyReader is a built-in implementation that reads public key and private key from files -key_reader = CryptoKeyReader('test_ecdsa_pubkey.pem', 'test_ecdsa_privkey.pem') - -producer = client.create_producer( - topic=topic, - encryption_key='myappkey', - crypto_key_reader=key_reader -) - -consumer = client.subscribe( - topic=topic, - subscription_name='my-subscriber-name', - crypto_key_reader=key_reader -) - -reader = client.create_reader( - topic=topic, - start_message_id=MessageId.earliest, - crypto_key_reader=key_reader -) - -client.close() - -``` - - - - -```nodejs - -const Pulsar = require('pulsar-client'); - -(async () => { -// Create a client -const client = new Pulsar.Client({ - serviceUrl: 'pulsar://localhost:6650', - operationTimeoutSeconds: 30, -}); - -// Create a producer -const producer = await client.createProducer({ - topic: 'persistent://public/default/my-topic', - sendTimeoutMs: 30000, - batchingEnabled: true, - publicKeyPath: "public-key.client-rsa.pem", - encryptionKey: "encryption-key" -}); - -// Create a consumer -const consumer = await client.subscribe({ - topic: 'persistent://public/default/my-topic', - 
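  // privateKeyPath below must correspond to one of the public keys the
  // producer encrypts with; without it this consumer cannot decrypt messages.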
subscription: 'sub1', - subscriptionType: 'Shared', - ackTimeoutMs: 10000, - privateKeyPath: "private-key.client-rsa.pem" -}); - -// Send messages -for (let i = 0; i < 10; i += 1) { - const msg = `my-message-${i}`; - producer.send({ - data: Buffer.from(msg), - }); - console.log(`Sent message: ${msg}`); -} -await producer.flush(); - -// Receive messages -for (let i = 0; i < 10; i += 1) { - const msg = await consumer.receive(); - console.log(msg.getData().toString()); - consumer.acknowledge(msg); -} - -await consumer.close(); -await producer.close(); -await client.close(); -})(); - -``` - - - - -```` - -6. Below is an example of a **customized** `CryptoKeyReader` implementation. - -````mdx-code-block - - - -```java - -class RawFileKeyReader implements CryptoKeyReader { - - String publicKeyFile = ""; - String privateKeyFile = ""; - - RawFileKeyReader(String pubKeyFile, String privKeyFile) { - publicKeyFile = pubKeyFile; - privateKeyFile = privKeyFile; - } - - @Override - public EncryptionKeyInfo getPublicKey(String keyName, Map keyMeta) { - EncryptionKeyInfo keyInfo = new EncryptionKeyInfo(); - try { - keyInfo.setKey(Files.readAllBytes(Paths.get(publicKeyFile))); - } catch (IOException e) { - System.out.println("ERROR: Failed to read public key from file " + publicKeyFile); - e.printStackTrace(); - } - return keyInfo; - } - - @Override - public EncryptionKeyInfo getPrivateKey(String keyName, Map keyMeta) { - EncryptionKeyInfo keyInfo = new EncryptionKeyInfo(); - try { - keyInfo.setKey(Files.readAllBytes(Paths.get(privateKeyFile))); - } catch (IOException e) { - System.out.println("ERROR: Failed to read private key from file " + privateKeyFile); - e.printStackTrace(); - } - return keyInfo; - } -} - -``` - - - - -```c++ - -class CustomCryptoKeyReader : public CryptoKeyReader { - public: - Result getPublicKey(const std::string& keyName, std::map& metadata, - EncryptionKeyInfo& encKeyInfo) const override { - // TODO: - return ResultOk; - } - - Result getPrivateKey(const std::string& keyName, std::map& metadata, - EncryptionKeyInfo& encKeyInfo) const override { - // TODO: - return ResultOk; - } -}; - -auto keyReader = std::make_shared(/* ... */); -// TODO: create producer, consumer or reader based on keyReader here - -``` - -Besides, you can use the **default** implementation of `CryptoKeyReader` by specifying the paths of `private key` and `public key`. - - - - -Currently, **customized** `CryptoKeyReader` implementation is not supported in Python. However, you can use the **default** implementation by specifying the path of `private key` and `public key`. - - - - -Currently, **customized** `CryptoKeyReader` implementation is not supported in Node.js. However, you can use the **default** implementation by specifying the path of `private key` and `public key`. - - - - -```` - -## Key rotation -Pulsar generates a new AES data key every 4 hours or after publishing a certain number of messages. A producer fetches the asymmetric public key every 4 hours by calling CryptoKeyReader.getPublicKey() to retrieve the latest version. - -## Enable encryption at the producer application -If you produce messages that are consumed across application boundaries, you need to ensure that consumers in other applications have access to one of the private keys that can decrypt the messages. You can do this in two ways: -1. The consumer application provides you access to their public key, which you add to your producer keys. -2. You grant access to one of the private keys from the pairs that producer uses. 
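A brief sketch of the first option follows (the key name and file path are placeholders, and it assumes a Java client recent enough to support `defaultCryptoKeyReader`): the producer adds the consuming application's public key and encrypts with it.

```java

Producer<byte[]> producer = client.newProducer()
        .topic("my-topic")
        // Public key handed over by the consuming application
        .defaultCryptoKeyReader("file:///path/to/consumer-app-pubkey.pem")
        .addEncryptionKey("consumer-app.key")
        .create();

```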
-
-When a producer wants to encrypt messages with multiple keys, it adds all such keys to its configuration. A consumer can decrypt a message as long as it has access to at least one of the keys.
-
-If you need to encrypt the messages using 2 keys (`myapp.messagekey1` and `myapp.messagekey2`), refer to the following example.
-
-```java
-
-PulsarClient.newProducer().addEncryptionKey("myapp.messagekey1").addEncryptionKey("myapp.messagekey2");
-
-```
-
-## Decrypt encrypted messages at the consumer application
-Consumers need access to one of the private keys to decrypt messages that the producer produces. If you want to receive encrypted messages, create a public/private key pair and give your public key to the producer application to encrypt messages using your public key.
-
-## Handle failures
-* Producer/Consumer loses access to the key
-  * The producer action fails and indicates the cause of the failure. The application has the option to proceed with sending unencrypted messages in such cases. Call `PulsarClient.newProducer().cryptoFailureAction(ProducerCryptoFailureAction)` to control the producer behavior. The default behavior is to fail the request.
-  * If consumption fails due to a decryption failure or missing keys in the consumer, the application has the option to consume the encrypted message or discard it. Call `PulsarClient.newConsumer().cryptoFailureAction(ConsumerCryptoFailureAction)` to control the consumer behavior. The default behavior is to fail the request. The application is never able to decrypt the messages if the private key is permanently lost.
-* Batch messaging
-  * If decryption fails and the message contains batch messages, the client is not able to retrieve individual messages in the batch, so message consumption fails even if `cryptoFailureAction()` is set to `ConsumerCryptoFailureAction.CONSUME`.
-* If decryption fails, the message consumption stops and the application notices backlog growth in addition to decryption failure messages in the client log. If the application does not have access to the private key to decrypt the message, the only option is to skip or discard backlogged messages.
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/security-extending.md b/site2/website/versioned_docs/version-2.10.0-deprecated/security-extending.md
deleted file mode 100644
index 9c641623f83480..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/security-extending.md
+++ /dev/null
@@ -1,83 +0,0 @@
----
-id: security-extending
-title: Extend Authentication and Authorization in Pulsar
-sidebar_label: "Extend Authentication and Authorization"
-original_id: security-extending
----
-
-Pulsar provides a way to use custom authentication and authorization mechanisms.
-
-## Authentication
-
-You can use a custom authentication mechanism by providing the implementation in the form of two plugins.
-* Client authentication plugin
-* Proxy/Broker authentication plugin
-
-### Client authentication plugin
-
-For the client library, you need to implement `org.apache.pulsar.client.api.Authentication`. You can pass this class when you create a Pulsar client, as shown in the example below.
- -```java - -PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://localhost:6650") - .authentication(new MyAuthentication()) - .build(); - -``` - -You can implement 2 interfaces on the client side: - * [`Authentication`](/api/client/org/apache/pulsar/client/api/Authentication.html) - * [`AuthenticationDataProvider`](/api/client/org/apache/pulsar/client/api/AuthenticationDataProvider.html) - -This in turn requires you to provide the client credentials in the form of `org.apache.pulsar.client.api.AuthenticationDataProvider` and also leaves the chance to return different kinds of authentication token for different types of connection or by passing a certificate chain to use for TLS. - -You can find the following examples for different client authentication plugins: - * [Mutual TLS](https://github.com/apache/pulsar/blob/master/pulsar-client/src/main/java/org/apache/pulsar/client/impl/auth/AuthenticationTls.java) - * [Athenz](https://github.com/apache/pulsar/blob/master/pulsar-client-auth-athenz/src/main/java/org/apache/pulsar/client/impl/auth/AuthenticationAthenz.java) - * [Kerberos](https://github.com/apache/pulsar/blob/master/pulsar-client-auth-sasl/src/main/java/org/apache/pulsar/client/impl/auth/AuthenticationSasl.java) - * [JSON Web Token (JWT)](https://github.com/apache/pulsar/blob/master/pulsar-client/src/main/java/org/apache/pulsar/client/impl/auth/AuthenticationToken.java) - * [OAuth 2.0](https://github.com/apache/pulsar/blob/master/pulsar-client/src/main/java/org/apache/pulsar/client/impl/auth/oauth2/AuthenticationOAuth2.java) - * [Basic auth](https://github.com/apache/pulsar/blob/master/pulsar-client/src/main/java/org/apache/pulsar/client/impl/auth/AuthenticationBasic.java) - -### Proxy/Broker authentication plugin - -On the proxy/broker side, you need to configure the corresponding plugin to validate the credentials that the client sends. The proxy and broker can support multiple authentication providers at the same time. - -In `conf/broker.conf`, you can choose to specify a list of valid providers: - -```properties - -# Authentication provider name list, which is comma separated list of class names -authenticationProviders= - -``` - -For the implementation of the `org.apache.pulsar.broker.authentication.AuthenticationProvider` interface, refer to [here](https://github.com/apache/pulsar/blob/master/pulsar-broker-common/src/main/java/org/apache/pulsar/broker/authentication/AuthenticationProvider.java). 
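-
-For illustration, the following is a minimal sketch of such a provider. This is a hypothetical example, not an official one: the class name, the `my-auth` method name, and the hard-coded shared-secret check are placeholders for real validation logic.
-
-```java
-
-import java.io.IOException;
-import javax.naming.AuthenticationException;
-
-import org.apache.pulsar.broker.ServiceConfiguration;
-import org.apache.pulsar.broker.authentication.AuthenticationDataSource;
-import org.apache.pulsar.broker.authentication.AuthenticationProvider;
-
-public class MyAuthenticationProvider implements AuthenticationProvider {
-
-    @Override
-    public void initialize(ServiceConfiguration config) throws IOException {
-        // Read any provider-specific settings from the broker configuration here.
-    }
-
-    @Override
-    public String getAuthMethodName() {
-        // Must match the auth method name used by the client-side Authentication plugin.
-        return "my-auth";
-    }
-
-    @Override
-    public String authenticate(AuthenticationDataSource authData) throws AuthenticationException {
-        // Validate the supplied credentials and return the authenticated role (principal).
-        if (authData.hasDataFromCommand() && "my-shared-secret".equals(authData.getCommandData())) {
-            return "my-role";
-        }
-        throw new AuthenticationException("Invalid credentials");
-    }
-
-    @Override
-    public void close() throws IOException {
-        // Release any resources held by the provider.
-    }
-}
-
-```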
-
-You can find the following examples for different broker authentication plugins:
-
- * [Mutual TLS](https://github.com/apache/pulsar/blob/master/pulsar-broker-common/src/main/java/org/apache/pulsar/broker/authentication/AuthenticationProviderTls.java)
- * [Athenz](https://github.com/apache/pulsar/blob/master/pulsar-broker-auth-athenz/src/main/java/org/apache/pulsar/broker/authentication/AuthenticationProviderAthenz.java)
- * [Kerberos](https://github.com/apache/pulsar/blob/master/pulsar-broker-auth-sasl/src/main/java/org/apache/pulsar/broker/authentication/AuthenticationProviderSasl.java)
- * [JSON Web Token (JWT)](https://github.com/apache/pulsar/blob/master/pulsar-broker-common/src/main/java/org/apache/pulsar/broker/authentication/AuthenticationProviderToken.java)
- * [Basic auth](https://github.com/apache/pulsar/blob/master/pulsar-broker-common/src/main/java/org/apache/pulsar/broker/authentication/AuthenticationProviderBasic.java)
-
-## Authorization
-
-Authorization is the operation that checks whether a particular "role" or "principal" has permission to perform a certain operation.
-
-By default, you can use the embedded authorization provider provided by Pulsar. You can also configure a different authorization provider through a plugin. Note that although the Authentication plugin is designed for use in both the proxy and broker, the Authorization plugin is designed only for use on the broker.
-
-### Broker authorization plugin
-
-To provide a custom authorization provider, you need to implement the `org.apache.pulsar.broker.authorization.AuthorizationProvider` interface, put this class in the Pulsar broker classpath, and configure the class in `conf/broker.conf`:
-
- ```properties
- 
- # Authorization provider fully qualified class-name
- authorizationProvider=org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider
- 
- ```
-
-For the implementation of the `org.apache.pulsar.broker.authorization.AuthorizationProvider` interface, refer to [here](https://github.com/apache/pulsar/blob/master/pulsar-broker-common/src/main/java/org/apache/pulsar/broker/authorization/AuthorizationProvider.java).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/security-jwt.md b/site2/website/versioned_docs/version-2.10.0-deprecated/security-jwt.md
deleted file mode 100644
index d5cbf1553b92e8..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/security-jwt.md
+++ /dev/null
@@ -1,331 +0,0 @@
----
-id: security-jwt
-title: Client authentication using tokens based on JSON Web Tokens
-sidebar_label: "Authentication using JWT"
-original_id: security-jwt
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-## Token authentication overview
-
-Pulsar supports authenticating clients using security tokens that are based on [JSON Web Tokens](https://jwt.io/introduction/) ([RFC-7519](https://tools.ietf.org/html/rfc7519)).
-
-You can use tokens to identify a Pulsar client and associate it with some "principal" (or "role") that
-is permitted to do some actions (e.g., publish to a topic or consume from a topic).
-
-A user typically gets a token string from the administrator (or some automated service).
-
-The compact representation of a signed JWT is a string that looks like the following:
-
-```
-
-eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY
-
-```
-
-The application specifies the token when you create the client instance. An alternative is to pass a "token supplier" (a function that returns the token when the client library needs one).
-
-> #### Always use TLS transport encryption
-> Sending a token is equivalent to sending a password over the wire. Always use TLS encryption when you connect to the Pulsar service. See
-> [Transport Encryption using TLS](security-tls-transport.md) for more details.
-
-### CLI Tools
-
-[Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-pulsar-admin.md), [`pulsar-perf`](reference-cli-tools.md#pulsar-perf), and [`pulsar-client`](reference-cli-tools.md#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation.
-
-You need to add the following parameters to that file to use token authentication with Pulsar's CLI tools:
-
-```properties
-
-webServiceUrl=http://broker.example.com:8080/
-brokerServiceUrl=pulsar://broker.example.com:6650/
-authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken
-authParams=token:eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY
-
-```
-
-The token string can also be read from a file, for example:
-
-```
-
-authParams=file:///path/to/token/file
-
-```
-
-### Pulsar client
-
-You can use tokens to authenticate the following Pulsar clients.
-
-````mdx-code-block
-
-
-
-```java
-
-PulsarClient client = PulsarClient.builder()
-    .serviceUrl("pulsar://broker.example.com:6650/")
-    .authentication(
-        AuthenticationFactory.token("eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY"))
-    .build();
-
-```
-
-Similarly, you can also pass a `Supplier<String>`:
-
-```java
-
-PulsarClient client = PulsarClient.builder()
-    .serviceUrl("pulsar://broker.example.com:6650/")
-    .authentication(
-        AuthenticationFactory.token(() -> {
-            // Read token from custom source
-            return readToken();
-        }))
-    .build();
-
-```
-
-
-
-
-```python
-
-from pulsar import Client, AuthenticationToken
-
-client = Client('pulsar://broker.example.com:6650/',
-                authentication=AuthenticationToken('eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY'))
-
-```
-
-Alternatively, you can also pass a token supplier function:
-
-```python
-
-def read_token():
-    with open('/path/to/token.txt') as tf:
-        return tf.read().strip()
-
-client = Client('pulsar://broker.example.com:6650/',
-                authentication=AuthenticationToken(read_token))
-
-```
-
-
-
-
-```go
-
-client, err := NewClient(ClientOptions{
-	URL:            "pulsar://localhost:6650",
-	Authentication: NewAuthenticationToken("eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY"),
-})
-
-```
-
-Similarly, you can also pass a token supplier:
-
-```go
-
-client, err := NewClient(ClientOptions{
-	URL: "pulsar://localhost:6650",
-	Authentication: NewAuthenticationTokenSupplier(func() string {
-		// Read token from custom source
-		return readToken()
-	}),
-})
-
-```
-
-
-
-
-```c++
-
-#include <pulsar/Client.h>
-
-pulsar::ClientConfiguration config;
-config.setAuth(pulsar::AuthToken::createWithToken("eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY"));
-
-pulsar::Client client("pulsar://broker.example.com:6650/", config);
-
-```
-
-
-
-
-```c#
-
-var client = PulsarClient.Builder()
-                         .AuthenticateUsingToken("eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY")
-                         .Build();
-
-```
-
-
-
-
-````
-
-## Enable token authentication
-
-To enable token authentication on a Pulsar cluster, refer to the guide below.
-
-JWT supports two different kinds of keys in order to generate and validate the tokens:
-
- * Symmetric:
-    - You can use a single ***Secret*** key to generate and validate tokens.
- * Asymmetric: A key pair consists of a Private key and a Public key.
-    - You can use the ***Private*** key to generate tokens.
-    - You can use the ***Public*** key to validate tokens.
-
-### Create a secret key
-
-When you use a secret key, the administrator creates the key and uses it to generate the client tokens. You can also configure this key on the brokers to validate the clients.
-
-The output file is generated in the root of your Pulsar installation directory. You can also provide an absolute path for the output file using the command below.
-
-```shell
-
-$ bin/pulsar tokens create-secret-key --output my-secret.key
-
-```
-
-Enter this command to generate a base64-encoded secret key.
-
-```shell
-
-$ bin/pulsar tokens create-secret-key --output /opt/my-secret.key --base64
-
-```
-
-### Create a key pair
-
-When you use public and private keys, you need to create a key pair. Pulsar supports all algorithms that the Java JWT library (shown [here](https://github.com/jwtk/jjwt#signature-algorithms-keys)) supports.
-
-The output file is generated in the root of your Pulsar installation directory. You can also provide an absolute path for the output file using the command below.
-
-```shell
-
-$ bin/pulsar tokens create-key-pair --output-private-key my-private.key --output-public-key my-public.key
-
-```
-
- * Store `my-private.key` in a safe location; only the administrator can use `my-private.key` to generate new tokens.
- * `my-public.key` is distributed to all Pulsar brokers. You can publicly share this file without any security concern.
-
-### Generate tokens
-
-A token is a credential associated with a user. The association is done through the "principal" or "role". In the case of JWT tokens, this field is typically referred to as the **subject**, though they are exactly the same concept.
-
-Use the following command to generate a token with the **subject** field set.
-
-```shell
-
-$ bin/pulsar tokens create --secret-key file:///path/to/my-secret.key \
-            --subject test-user
-
-```
-
-This command prints the token string on stdout.
-
-Similarly, you can create a token by passing the "private" key using the command below:
-
-```shell
-
-$ bin/pulsar tokens create --private-key file:///path/to/my-private.key \
-            --subject test-user
-
-```
-
-Finally, you can enter the following command to create a token with a pre-defined TTL, after which the token is automatically invalidated.
-
-```shell
-
-$ bin/pulsar tokens create --secret-key file:///path/to/my-secret.key \
-            --subject test-user \
-            --expiry-time 1y
-
-```
-
-### Authorization
-
-The token itself does not have any permissions associated with it. The authorization engine determines whether the token should have permissions or not. Once you have created the token, you can grant permission for this token to do certain actions. The following is an example.
- -```shell - -$ bin/pulsar-admin namespaces grant-permission my-tenant/my-namespace \ - --role test-user \ - --actions produce,consume - -``` - -### Enable token authentication on Brokers - -To configure brokers to authenticate clients, add the following parameters to `broker.conf`: - -```properties - -# Configuration to enable authentication and authorization -authenticationEnabled=true -authorizationEnabled=true -authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken - -# Authentication settings of the broker itself. Used when the broker connects to other brokers, either in same or other clusters -brokerClientTlsEnabled=true -brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken -brokerClientAuthenticationParameters={"token":"eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0LXVzZXIifQ.9OHgE9ZUDeBTZs7nSMEFIuGNEX18FLR3qvy8mqxSxXw"} -# Either configure the token string or specify to read it from a file. The following three available formats are all valid: -# brokerClientAuthenticationParameters={"token":"your-token-string"} -# brokerClientAuthenticationParameters=token:your-token-string -# brokerClientAuthenticationParameters=file:///path/to/token -brokerClientTrustCertsFilePath=/path/my-ca/certs/ca.cert.pem - -# If this flag is set then the broker authenticates the original Auth data -# else it just accepts the originalPrincipal and authorizes it (if required). -authenticateOriginalAuthData=true - -# If using secret key (Note: key files must be DER-encoded) -tokenSecretKey=file:///path/to/secret.key -# The key can also be passed inline: -# tokenSecretKey=data:;base64,FLFyW0oLJ2Fi22KKCm21J18mbAdztfSHN/lAT5ucEKU= - -# If using public/private (Note: key files must be DER-encoded) -# tokenPublicKey=file:///path/to/public.key - -``` - -### Enable token authentication on Proxies - -To configure proxies to authenticate clients, add the following parameters to `proxy.conf`: - -The proxy uses its own token when connecting to brokers. You need to configure the role token for this key pair in the `proxyRoles` of the brokers. For more details, see the [authorization guide](security-authorization.md). - -```properties - -# For clients connecting to the proxy -authenticationEnabled=true -authorizationEnabled=true -authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken -tokenSecretKey=file:///path/to/secret.key - -# For the proxy to connect to brokers -brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken -brokerClientAuthenticationParameters={"token":"eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0LXVzZXIifQ.9OHgE9ZUDeBTZs7nSMEFIuGNEX18FLR3qvy8mqxSxXw"} -# Either configure the token string or specify to read it from a file. The following three available formats are all valid: -# brokerClientAuthenticationParameters={"token":"your-token-string"} -# brokerClientAuthenticationParameters=token:your-token-string -# brokerClientAuthenticationParameters=file:///path/to/token - -# Whether client authorization credentials are forwarded to the broker for re-authorization. -# Authentication must be enabled via authenticationEnabled=true for this to take effect. 
-
-forwardAuthorizationCredentials=true
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/security-kerberos.md b/site2/website/versioned_docs/version-2.10.0-deprecated/security-kerberos.md
deleted file mode 100644
index c49fa3bea1fce0..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/security-kerberos.md
+++ /dev/null
@@ -1,443 +0,0 @@
----
-id: security-kerberos
-title: Authentication using Kerberos
-sidebar_label: "Authentication using Kerberos"
-original_id: security-kerberos
----
-
-[Kerberos](https://web.mit.edu/kerberos/) is a network authentication protocol. By using secret-key cryptography, [Kerberos](https://web.mit.edu/kerberos/) is designed to provide strong authentication for client applications and server applications.
-
-In Pulsar, you can use Kerberos with [SASL](https://en.wikipedia.org/wiki/Simple_Authentication_and_Security_Layer) as a choice for authentication, and Pulsar uses the [Java Authentication and Authorization Service (JAAS)](https://en.wikipedia.org/wiki/Java_Authentication_and_Authorization_Service) for SASL configuration. You need to provide JAAS configurations for Kerberos authentication.
-
-This document introduces how to configure `Kerberos` with `SASL` between Pulsar clients and brokers and how to configure Kerberos for Pulsar proxy in detail.
-
-## Configuration for Kerberos between Client and Broker
-
-### Prerequisites
-
-To begin, you need to set up (or already have) a [Key Distribution Center (KDC)](https://en.wikipedia.org/wiki/Key_distribution_center), and you need to configure and run the KDC in advance.
-
-If your organization already uses a Kerberos server (for example, by using `Active Directory`), you do not have to install a new server for Pulsar. If your organization does not use a Kerberos server, you need to install one. Your Linux vendor might have packages for `Kerberos`. For how to install and configure Kerberos, refer to [Ubuntu](https://help.ubuntu.com/community/Kerberos) and
-[Redhat](https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Managing_Smart_Cards/installing-kerberos.html).
-
-Note that if you use Oracle Java, you need to download JCE policy files for your Java version and copy them to the `$JAVA_HOME/jre/lib/security` directory.
-
-#### Kerberos principals
-
-If you use an existing Kerberos system, ask your Kerberos administrator for a principal for each broker in your cluster and for every operating system user that accesses Pulsar with Kerberos authentication (via clients and tools).
-
-If you have installed your own Kerberos system, you can create these principals with the following commands:
-
-```shell
-
-### add Principals for broker
-sudo /usr/sbin/kadmin.local -q 'addprinc -randkey broker/{hostname}@{REALM}'
-sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{broker-keytabname}.keytab broker/{hostname}@{REALM}"
-### add Principals for client
-sudo /usr/sbin/kadmin.local -q 'addprinc -randkey client/{hostname}@{REALM}'
-sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{client-keytabname}.keytab client/{hostname}@{REALM}"
-
-```
-
-Note that *Kerberos* requires that all your hosts can be resolved with their FQDNs.
-
-The first part of the broker principal (for example, `broker` in `broker/{hostname}@{REALM}`) is the `serverType` of each host. The suggested values of `serverType` are `broker` (the host machine runs the Pulsar Broker service) and `proxy` (the host machine runs the Pulsar Proxy service).
-
-#### Configure how to connect to KDC
-
-You need to enter the command below to specify the path to the `krb5.conf` file for the client side and the broker side. The content of the `krb5.conf` file indicates the default Realm and KDC information. See [JDK’s Kerberos Requirements](https://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/KerberosReq.html) for more details.
-
-```shell
-
--Djava.security.krb5.conf=/etc/pulsar/krb5.conf
-
-```
-
-Here is an example of the krb5.conf file:
-
-In the configuration file, `EXAMPLE.COM` is the default realm; `kdc = localhost:62037` is the KDC server URL for the realm `EXAMPLE.COM`:
-
-```
-
-[libdefaults]
- default_realm = EXAMPLE.COM
-
-[realms]
- EXAMPLE.COM = {
-  kdc = localhost:62037
- }
-
-```
-
-Machines configured with Kerberos usually already have a system-wide configuration, so this configuration is optional.
-
-#### JAAS configuration file
-
-You need a JAAS configuration file for the client side and the broker side. The JAAS configuration file provides the sections of information used to connect to the KDC. Here is an example named `pulsar_jaas.conf`:
-
-```
-
- PulsarBroker {
-   com.sun.security.auth.module.Krb5LoginModule required
-   useKeyTab=true
-   storeKey=true
-   useTicketCache=false
-   keyTab="/etc/security/keytabs/pulsarbroker.keytab"
-   principal="broker/localhost@EXAMPLE.COM";
-};
-
- PulsarClient {
-   com.sun.security.auth.module.Krb5LoginModule required
-   useKeyTab=true
-   storeKey=true
-   useTicketCache=false
-   keyTab="/etc/security/keytabs/pulsarclient.keytab"
-   principal="client/localhost@EXAMPLE.COM";
-};
-
-```
-
-You need to set the `JAAS` configuration file path as a JVM parameter for the client and the broker. For example:
-
-```shell
-
- -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf
-
-```
-
-In the `pulsar_jaas.conf` file above:
-
-1. `PulsarBroker` is a section name in the JAAS file that each broker uses. This section tells the broker which principal inside Kerberos to use and the location of the keytab where the principal is stored. `PulsarBroker` allows the broker to use the keytab specified in this section.
-2. `PulsarClient` is a section name in the JAAS file that each client uses. This section tells the client which principal inside Kerberos to use and the location of the keytab where the principal is stored. `PulsarClient` allows the client to use the keytab specified in this section.
-   The following example also reuses this `PulsarClient` section in both the Pulsar internal admin configuration and in the CLI commands `bin/pulsar-client`, `bin/pulsar-perf`, and `bin/pulsar-admin`. You can also add different sections for different use cases.
-
-You can have 2 separate JAAS configuration files:
-* the file for a broker that has sections of both `PulsarBroker` and `PulsarClient`;
-* the file for a client that only has a `PulsarClient` section.
-
-
-### Kerberos configuration for Brokers
-
-#### Configure the `broker.conf` file
-
- In the `broker.conf` file, set Kerberos-related configurations.
-
- - Set `authenticationEnabled` to `true`;
- - Set `authenticationProviders` to `AuthenticationProviderSasl`;
- - Set the `saslJaasClientAllowedIds` regex for principals that are allowed to connect to the broker;
- - Set `saslJaasBrokerSectionName` to the broker section in the JAAS configuration file;
-
- To make the Pulsar internal admin client work properly, you need to set the configuration in the `broker.conf` file as below:
- - Set `brokerClientAuthenticationPlugin` to the client plugin `AuthenticationSasl`;
- - Set `brokerClientAuthenticationParameters` to the JSON string `{"saslJaasClientSectionName":"PulsarClient", "serverType":"broker"}`, in which `PulsarClient` is the section name in the `pulsar_jaas.conf` file, and `"serverType":"broker"` indicates that the internal admin client connects to a Pulsar Broker;
-
- Here is an example:
-
-```
-
-authenticationEnabled=true
-authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderSasl
-saslJaasClientAllowedIds=.*client.*
-saslJaasBrokerSectionName=PulsarBroker
-
-## Authentication settings of the broker itself. Used when the broker connects to other brokers
-brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationSasl
-brokerClientAuthenticationParameters={"saslJaasClientSectionName":"PulsarClient", "serverType":"broker"}
-
-```
-
-#### Set Broker JVM parameter
-
- Set the JVM parameters for the JAAS configuration file and the krb5 configuration file with additional options.
-
-```shell
-
-   -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf
-
-```
-
-You can add this at the end of `PULSAR_EXTRA_OPTS` in the file [`pulsar_env.sh`](https://github.com/apache/pulsar/blob/master/conf/pulsar_env.sh).
-
-You must ensure that the operating system user who starts the broker can reach the keytabs configured in the `pulsar_jaas.conf` file and the KDC server configured in the `krb5.conf` file.
-
-### Kerberos configuration for clients
-
-#### Java Client and Java Admin Client
-
-In your client application, include `pulsar-client-auth-sasl` in your project dependencies.
-
-```
-
-<dependency>
-    <groupId>org.apache.pulsar</groupId>
-    <artifactId>pulsar-client-auth-sasl</artifactId>
-    <version>${pulsar.version}</version>
-</dependency>
-
-```
-
-Configure the authentication type to use `AuthenticationSasl`, and also provide the authentication parameters to it.
-
-You need 2 parameters:
-- `saslJaasClientSectionName`. This parameter corresponds to the client section in the JAAS configuration file;
-- `serverType`. This parameter indicates whether the client connects to a broker or a proxy. The client uses this parameter to determine which server-side principal to use.
-
-To authenticate between the client and broker with the settings in the above JAAS configuration file, set `saslJaasClientSectionName` to `PulsarClient` and set `serverType` to `broker`.
- -The following is an example of creating a Java client: - - ```java - - System.setProperty("java.security.auth.login.config", "/etc/pulsar/pulsar_jaas.conf"); - System.setProperty("java.security.krb5.conf", "/etc/pulsar/krb5.conf"); - - Map authParams = Maps.newHashMap(); - authParams.put("saslJaasClientSectionName", "PulsarClient"); - authParams.put("serverType", "broker"); - - Authentication saslAuth = AuthenticationFactory - .create(org.apache.pulsar.client.impl.auth.AuthenticationSasl.class.getName(), authParams); - - PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://my-broker.com:6650") - .authentication(saslAuth) - .build(); - - ``` - -> The first two lines in the example above are hard coded, alternatively, you can set additional JVM parameters for JAAS and krb5 configuration file when you run the application like below: - -``` - -java -cp -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf $APP-jar-with-dependencies.jar $CLASSNAME - -``` - -You must ensure that the operating system user who starts pulsar client can reach the keytabs configured in the `pulsar_jaas.conf` file and kdc server in the `krb5.conf` file. - -#### Configure CLI tools - -If you use a command-line tool (such as `bin/pulsar-client`, `bin/pulsar-perf` and `bin/pulsar-admin`), you need to perform the following steps: - -Step 1. Enter the command below to configure your `client.conf`. - -```shell - -authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationSasl -authParams={"saslJaasClientSectionName":"PulsarClient", "serverType":"broker"} - -``` - -Step 2. Enter the command below to set JVM parameters for JAAS configuration file and krb5 configuration file with additional options. - -```shell - - -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf - -``` - -You can add this at the end of `PULSAR_EXTRA_OPTS` in the file [`pulsar_tools_env.sh`](https://github.com/apache/pulsar/blob/master/conf/pulsar_tools_env.sh), -or add this line `OPTS="$OPTS -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf "` directly to the CLI tool script. - -The meaning of configurations is the same as the meaning of configurations in Java client section. - -## Kerberos configuration for working with Pulsar Proxy - -With the above configuration, client and broker can do authentication using Kerberos. - -A client that connects to Pulsar Proxy is a little different. Pulsar Proxy (as a SASL Server in Kerberos) authenticates Client (as a SASL client in Kerberos) first; and then Pulsar broker authenticates Pulsar Proxy. - -Now in comparison with the above configuration between client and broker, we show you how to configure Pulsar Proxy as follows. - -### Create principal for Pulsar Proxy in Kerberos - -You need to add new principals for Pulsar Proxy comparing with the above configuration. If you already have principals for client and broker, you only need to add the proxy principal here. 
- -```shell - -### add Principals for Pulsar Proxy -sudo /usr/sbin/kadmin.local -q 'addprinc -randkey proxy/{hostname}@{REALM}' -sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{proxy-keytabname}.keytab proxy/{hostname}@{REALM}" -### add Principals for broker -sudo /usr/sbin/kadmin.local -q 'addprinc -randkey broker/{hostname}@{REALM}' -sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{broker-keytabname}.keytab broker/{hostname}@{REALM}" -### add Principals for client -sudo /usr/sbin/kadmin.local -q 'addprinc -randkey client/{hostname}@{REALM}' -sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{client-keytabname}.keytab client/{hostname}@{REALM}" - -``` - -### Add a section in JAAS configuration file for Pulsar Proxy - -In comparison with the above configuration, add a new section for Pulsar Proxy in JAAS configuration file. - -Here is an example named `pulsar_jaas.conf`: - -``` - - PulsarBroker { - com.sun.security.auth.module.Krb5LoginModule required - useKeyTab=true - storeKey=true - useTicketCache=false - keyTab="/etc/security/keytabs/pulsarbroker.keytab" - principal="broker/localhost@EXAMPLE.COM"; -}; - - PulsarProxy { - com.sun.security.auth.module.Krb5LoginModule required - useKeyTab=true - storeKey=true - useTicketCache=false - keyTab="/etc/security/keytabs/pulsarproxy.keytab" - principal="proxy/localhost@EXAMPLE.COM"; -}; - - PulsarClient { - com.sun.security.auth.module.Krb5LoginModule required - useKeyTab=true - storeKey=true - useTicketCache=false - keyTab="/etc/security/keytabs/pulsarclient.keytab" - principal="client/localhost@EXAMPLE.COM"; -}; - -``` - -### Proxy client configuration - -Pulsar client configuration is similar with client and broker configuration, except that you need to set `serverType` to `proxy` instead of `broker`, for the reason that you need to do the Kerberos authentication between client and proxy. - - ```java - - System.setProperty("java.security.auth.login.config", "/etc/pulsar/pulsar_jaas.conf"); - System.setProperty("java.security.krb5.conf", "/etc/pulsar/krb5.conf"); - - Map authParams = Maps.newHashMap(); - authParams.put("saslJaasClientSectionName", "PulsarClient"); - authParams.put("serverType", "proxy"); // ** here is the different ** - - Authentication saslAuth = AuthenticationFactory - .create(org.apache.pulsar.client.impl.auth.AuthenticationSasl.class.getName(), authParams); - - PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://my-broker.com:6650") - .authentication(saslAuth) - .build(); - - ``` - -> The first two lines in the example above are hard coded, alternatively, you can set additional JVM parameters for JAAS and krb5 configuration file when you run the application like below: - -``` - -java -cp -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf $APP-jar-with-dependencies.jar $CLASSNAME - -``` - -### Kerberos configuration for Pulsar proxy service - -In the `proxy.conf` file, set Kerberos related configuration. Here is an example: - -```shell - -## related to authenticate client. 
-authenticationEnabled=true -authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderSasl -saslJaasClientAllowedIds=.*client.* -saslJaasBrokerSectionName=PulsarProxy - -## related to be authenticated by broker -brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationSasl -brokerClientAuthenticationParameters={"saslJaasClientSectionName":"PulsarProxy", "serverType":"broker"} -forwardAuthorizationCredentials=true - -``` - -The first part relates to authenticating between client and Pulsar Proxy. In this phase, client works as SASL client, while Pulsar Proxy works as SASL server. - -The second part relates to authenticating between Pulsar Proxy and Pulsar Broker. In this phase, Pulsar Proxy works as SASL client, while Pulsar Broker works as SASL server. - -### Broker side configuration. - -The broker side configuration file is the same with the above `broker.conf`, you do not need special configuration for Pulsar Proxy. - -``` - -authenticationEnabled=true -authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderSasl -saslJaasClientAllowedIds=.*client.* -saslJaasBrokerSectionName=PulsarBroker - -``` - -## Regarding authorization and role token - -For Kerberos authentication, we usually use the authenticated principal as the role token for Pulsar authorization. For more information of authorization in Pulsar, see [security authorization](security-authorization.md). - -If you enable 'authorizationEnabled', you need to set `superUserRoles` in `broker.conf` that corresponds to the name registered in kdc. - -For example: - -```bash - -superUserRoles=client/{clientIp}@EXAMPLE.COM - -``` - -## Regarding authentication between ZooKeeper and Broker - -Pulsar Broker acts as a Kerberos client when you authenticate with Zookeeper. According to [ZooKeeper document](https://cwiki.apache.org/confluence/display/ZOOKEEPER/Client-Server+mutual+authentication), you need these settings in `conf/zookeeper.conf`: - -``` - -authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider -requireClientAuthScheme=sasl - -``` - -Enter the following commands to add a section of `Client` configurations in the file `pulsar_jaas.conf`, which Pulsar Broker uses: - -``` - - Client { - com.sun.security.auth.module.Krb5LoginModule required - useKeyTab=true - storeKey=true - useTicketCache=false - keyTab="/etc/security/keytabs/pulsarbroker.keytab" - principal="broker/localhost@EXAMPLE.COM"; -}; - -``` - -In this setting, the principal of Pulsar Broker and keyTab file indicates the role of Broker when you authenticate with ZooKeeper. - -## Regarding authentication between BookKeeper and Broker - -Pulsar Broker acts as a Kerberos client when you authenticate with Bookie. According to [BookKeeper document](http://bookkeeper.apache.org/docs/latest/security/sasl/), you need to add `bookkeeperClientAuthenticationPlugin` parameter in `broker.conf`: - -``` - -bookkeeperClientAuthenticationPlugin=org.apache.bookkeeper.sasl.SASLClientProviderFactory - -``` - -In this setting, `SASLClientProviderFactory` creates a BookKeeper SASL client in a Broker, and the Broker uses the created SASL client to authenticate with a Bookie node. 
- -Enter the following commands to add a section of `BookKeeper` configurations in the `pulsar_jaas.conf` that Pulsar Broker uses: - -``` - - BookKeeper { - com.sun.security.auth.module.Krb5LoginModule required - useKeyTab=true - storeKey=true - useTicketCache=false - keyTab="/etc/security/keytabs/pulsarbroker.keytab" - principal="broker/localhost@EXAMPLE.COM"; -}; - -``` - -In this setting, the principal of Pulsar Broker and keyTab file indicates the role of Broker when you authenticate with Bookie. diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/security-oauth2.md b/site2/website/versioned_docs/version-2.10.0-deprecated/security-oauth2.md deleted file mode 100644 index 4ff8bf10b38334..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/security-oauth2.md +++ /dev/null @@ -1,282 +0,0 @@ ---- -id: security-oauth2 -title: Client authentication using OAuth 2.0 access tokens -sidebar_label: "Authentication using OAuth 2.0 access tokens" -original_id: security-oauth2 ---- - -Pulsar supports authenticating clients using OAuth 2.0 access tokens. You can use OAuth 2.0 access tokens to identify a Pulsar client and associate the Pulsar client with some "principal" (or "role"), which is permitted to do some actions, such as publishing messages to a topic or consume messages from a topic. - -This module is used to support the [Pulsar client authentication plugin](security-extending.md/#client-authentication-plugin) for OAuth 2.0. After communicating with the OAuth 2.0 server, the Pulsar client gets an `access token` from the OAuth 2.0 server, and passes this `access token` to the Pulsar broker to do the authentication. The broker can use the `org.apache.pulsar.broker.authentication.AuthenticationProviderToken`. Or, you can add your own `AuthenticationProvider` to make it with this module. - -## Authentication provider configuration - -This library allows you to authenticate the Pulsar client by using an access token that is obtained from an OAuth 2.0 authorization service, which acts as a _token issuer_. - -### Authentication types - -The authentication type determines how to obtain an access token through an OAuth 2.0 authorization flow. - -:::note - -Currently, the Pulsar Java client only supports the `client_credentials` authentication type. - -::: - -#### Client credentials - -The following table lists parameters supported for the `client credentials` authentication type. - -| Parameter | Description | Example | Required or not | -| --- | --- | --- | --- | -| `type` | OAuth 2.0 authentication type. | `client_credentials` (default) | Optional | -| `issuerUrl` | URL of the authentication provider which allows the Pulsar client to obtain an access token | `https://accounts.google.com` | Required | -| `privateKey` | URL to a JSON credentials file | Support the following pattern formats:
  1. `file:///path/to/file`<br />
  2. `file:/path/to/file`<br />
  3. `data:application/json;base64,` | Required |
-| `audience` | An OAuth 2.0 "resource server" identifier for the Pulsar cluster | `https://broker.example.com` | Optional |
-| `scope` | Scope of an access request.<br />
    For more information, see [access token scope](https://datatracker.ietf.org/doc/html/rfc6749#section-3.3). | `api://pulsar-cluster-1/.default` | Optional |
-
-The credentials file contains service account credentials used with the client authentication type. The following shows an example of a credentials file `credentials_file.json`.
-
-```json
-
-{
-  "type": "client_credentials",
-  "client_id": "d9ZyX97q1ef8Cr81WHVC4hFQ64vSlDK3",
-  "client_secret": "on1uJ...k6F6R",
-  "client_email": "1234567890-abcdefghijklmnopqrstuvwxyz@developer.gserviceaccount.com",
-  "issuer_url": "https://accounts.google.com"
-}
-
-```
-
-In the above example, the authentication type is set to `client_credentials` by default, and the fields `client_id` and `client_secret` are required.
-
-### Typical original OAuth2 request mapping
-
-The following shows a typical original OAuth2 request, which is used to obtain the access token from the OAuth2 server.
-
-```bash
-
-curl --request POST \
-  --url https://dev-kt-aa9ne.us.auth0.com/oauth/token \
-  --header 'content-type: application/json' \
-  --data '{
-  "client_id":"Xd23RHsUnvUlP7wchjNYOaIfazgeHd9x",
-  "client_secret":"rT7ps7WY8uhdVuBTKWZkttwLdQotmdEliaM5rLfmgNibvqziZ-g07ZH52N_poGAb",
-  "audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/",
-  "grant_type":"client_credentials"}'
-
-```
-
-In the above example, the mapping relationship is as follows.
-
-- The `issuerUrl` parameter in this plugin is mapped to `--url https://dev-kt-aa9ne.us.auth0.com`.
-- The `privateKey` file parameter in this plugin should at least contain the `client_id` and `client_secret` fields.
-- The `audience` parameter in this plugin is mapped to `"audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/"`. This field is only used by some identity providers.
-
-## Client Configuration
-
-You can use the OAuth2 authentication provider with the following Pulsar clients.
-
-### Java client
-
-You can use the factory method to configure authentication for the Pulsar Java client.
-
-```java
-
-import org.apache.pulsar.client.impl.auth.oauth2.AuthenticationFactoryOAuth2;
-
-URL issuerUrl = new URL("https://dev-kt-aa9ne.us.auth0.com");
-URL credentialsUrl = new URL("file:///path/to/KeyFile.json");
-String audience = "https://dev-kt-aa9ne.us.auth0.com/api/v2/";
-
-PulsarClient client = PulsarClient.builder()
-    .serviceUrl("pulsar://broker.example.com:6650/")
-    .authentication(
-        AuthenticationFactoryOAuth2.clientCredentials(issuerUrl, credentialsUrl, audience))
-    .build();
-
-```
-
-In addition, you can also use the encoded parameters to configure authentication for the Pulsar Java client.
-
-```java
-
-Authentication auth = AuthenticationFactory
-    .create(AuthenticationOAuth2.class.getName(), "{\"type\":\"client_credentials\",\"privateKey\":\"./key/path/..\",\"issuerUrl\":\"...\",\"audience\":\"...\"}");
-PulsarClient client = PulsarClient.builder()
-    .serviceUrl("pulsar://broker.example.com:6650/")
-    .authentication(auth)
-    .build();
-
-```
-
-### C++ client
-
-The C++ client is similar to the Java client. You need to provide the parameters of `issuerUrl`, `private_key` (the credentials file path), and `audience`.
-
-```c++
-
-#include <pulsar/Client.h>
-
-pulsar::ClientConfiguration config;
-std::string params = R"({
-    "issuer_url": "https://dev-kt-aa9ne.us.auth0.com",
-    "private_key": "../../pulsar-broker/src/test/resources/authentication/token/cpp_credentials_file.json",
-    "audience": "https://dev-kt-aa9ne.us.auth0.com/api/v2/"})";
-
-config.setAuth(pulsar::AuthOauth2::create(params));
-
-pulsar::Client client("pulsar://broker.example.com:6650/", config);
-
-```
-
-### Go client
-
-To enable OAuth2 authentication in the Go client, configure OAuth2 authentication as shown in the following example.
-
-```go
-
-oauth := pulsar.NewAuthenticationOAuth2(map[string]string{
-	"type":       "client_credentials",
-	"issuerUrl":  "https://dev-kt-aa9ne.us.auth0.com",
-	"audience":   "https://dev-kt-aa9ne.us.auth0.com/api/v2/",
-	"privateKey": "/path/to/privateKey",
-	"clientId":   "0Xx...Yyxeny",
-})
-client, err := pulsar.NewClient(pulsar.ClientOptions{
-	URL:            "pulsar://my-cluster:6650",
-	Authentication: oauth,
-})
-
-```
-
-### Python client
-
-To enable OAuth2 authentication in the Python client, configure OAuth2 authentication as shown in the following example.
-
-```python
-
-from pulsar import Client, AuthenticationOauth2
-
-params = '''
-{
-    "issuer_url": "https://dev-kt-aa9ne.us.auth0.com",
-    "private_key": "/path/to/privateKey",
-    "audience": "https://dev-kt-aa9ne.us.auth0.com/api/v2/"
-}
-'''
-
-client = Client("pulsar://my-cluster:6650", authentication=AuthenticationOauth2(params))
-
-```
-
-### Node.js client
-
-To enable OAuth2 authentication in the Node.js client, configure OAuth2 authentication as shown in the following example.
-
-```JavaScript
-
-  const Pulsar = require('pulsar-client');
-  const issuer_url = process.env.ISSUER_URL;
-  const private_key = process.env.PRIVATE_KEY;
-  const audience = process.env.AUDIENCE;
-  const scope = process.env.SCOPE;
-  const service_url = process.env.SERVICE_URL;
-  const client_id = process.env.CLIENT_ID;
-  const client_secret = process.env.CLIENT_SECRET;
-  (async () => {
-    const params = {
-      issuer_url: issuer_url
-    }
-    if (private_key.length > 0) {
-      params['private_key'] = private_key
-    } else {
-      params['client_id'] = client_id
-      params['client_secret'] = client_secret
-    }
-    if (audience.length > 0) {
-      params['audience'] = audience
-    }
-    if (scope.length > 0) {
-      params['scope'] = scope
-    }
-    const auth = new Pulsar.AuthenticationOauth2(params);
-    // Create a client
-    const client = new Pulsar.Client({
-      serviceUrl: service_url,
-      tlsAllowInsecureConnection: true,
-      authentication: auth,
-    });
-    await client.close();
-  })();
-
-```
-
-:::note
-
-The support for OAuth2 authentication is only available in Node.js client 1.6.2 and later versions.
-
-:::
-
-## CLI configuration
-
-This section describes how to use Pulsar CLI tools to connect to a cluster through the OAuth2 authentication plugin.
-
-### pulsar-admin
-
-This example shows how to use pulsar-admin to connect to a cluster through the OAuth2 authentication plugin.
-
-```shell script
-
-bin/pulsar-admin --admin-url https://streamnative.cloud:443 \
---auth-plugin org.apache.pulsar.client.impl.auth.oauth2.AuthenticationOAuth2 \
---auth-params '{"privateKey":"file:///path/to/key/file.json",
-    "issuerUrl":"https://dev-kt-aa9ne.us.auth0.com",
-    "audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/"}' \
-tenants list
-
-```
-
-Set the `admin-url` parameter to the Web service URL.
A Web service URL is a combination of the protocol, hostname, and port, such as `http://localhost:8080`.
-Set the `privateKey`, `issuerUrl`, and `audience` parameters to the values based on the configuration in the key file. For details, see [authentication types](#authentication-types).
-
-### pulsar-client
-
-This example shows how to use pulsar-client to connect to a cluster through the OAuth2 authentication plugin.
-
-```shell script
-
-bin/pulsar-client \
---url SERVICE_URL \
---auth-plugin org.apache.pulsar.client.impl.auth.oauth2.AuthenticationOAuth2 \
---auth-params '{"privateKey":"file:///path/to/key/file.json",
-    "issuerUrl":"https://dev-kt-aa9ne.us.auth0.com",
-    "audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/"}' \
-produce test-topic -m "test-message" -n 10
-
-```
-
-Set the `url` parameter to the broker service URL of your cluster. A broker service URL is a combination of the protocol, hostname, and port, such as `pulsar://localhost:6650`.
-Set the `privateKey`, `issuerUrl`, and `audience` parameters to the values based on the configuration in the key file. For details, see [authentication types](#authentication-types).
-
-### pulsar-perf
-
-This example shows how to use pulsar-perf to connect to a cluster through the OAuth2 authentication plugin.
-
-```shell script
-
-bin/pulsar-perf produce --service-url pulsar+ssl://streamnative.cloud:6651 \
---auth-plugin org.apache.pulsar.client.impl.auth.oauth2.AuthenticationOAuth2 \
---auth-params '{"privateKey":"file:///path/to/key/file.json",
-    "issuerUrl":"https://dev-kt-aa9ne.us.auth0.com",
-    "audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/"}' \
--r 1000 -s 1024 test-topic
-
-```
-
-Set the `service-url` parameter to the broker service URL of your cluster. A broker service URL is a combination of the protocol, hostname, and port, such as `pulsar://localhost:6650`.
-Set the `privateKey`, `issuerUrl`, and `audience` parameters to the values based on the configuration in the key file. For details, see [authentication types](#authentication-types).
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/security-overview.md b/site2/website/versioned_docs/version-2.10.0-deprecated/security-overview.md
deleted file mode 100644
index 93a766ee89fb8e..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/security-overview.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-id: security-overview
-title: Pulsar security overview
-sidebar_label: "Overview"
-original_id: security-overview
----
-
-As the central message bus for a business, Apache Pulsar is frequently used for storing mission-critical data. Therefore, enabling security features in Pulsar is crucial.
-
-By default, Pulsar configures no encryption, authentication, or authorization. Any client can communicate with Apache Pulsar via plain-text service URLs, so you must ensure that access to Pulsar via these plain-text service URLs is restricted to trusted clients only. In such cases, you can use network segmentation and/or authorization ACLs to restrict access to trusted IPs. If you use neither, the cluster is wide open and anyone can access it.
-
-Pulsar supports a pluggable authentication mechanism, and Pulsar clients use this mechanism to authenticate with brokers and proxies. You can also configure Pulsar to support multiple authentication sources.
-
-The Pulsar broker validates the authentication credentials when a connection is established.
After the initial connection is authenticated, the "principal" token is stored for authorization, and the connection is not re-authenticated until the credential expires. The broker periodically checks the expiration status of every `ServerCnx` object. You can set `authenticationRefreshCheckSeconds` on the broker to control how frequently the broker checks the expiration status. By default, `authenticationRefreshCheckSeconds` is set to 60s. When the authentication expires, the broker forces the connection to re-authenticate. If the re-authentication fails, the broker disconnects the client.
-
-The broker supports learning whether a particular client supports authentication refreshing. If a client supports authentication refreshing and the credential is expired, the authentication provider calls the `refreshAuthentication` method to initiate the refreshing process. If a client does not support authentication refreshing and the credential is expired, the broker disconnects the client.
-
-You should secure the service components in your Apache Pulsar deployment.
-
-## Role tokens
-
-In Pulsar, a *role* is a string, like `admin` or `app1`, which can represent a single client or multiple clients. You can use roles to control clients' permission to produce to or consume from certain topics, administer the configuration for tenants, and so on.
-
-Apache Pulsar uses an [Authentication Provider](#authentication-providers) to establish the identity of a client and then assign a *role token* to that client. This role token is then used for [Authorization and ACLs](security-authorization.md) to determine what the client is authorized to do.
-
-## Authentication providers
-
-Currently, Pulsar supports the following authentication providers:
-
-- [TLS authentication](security-tls-authentication.md)
-- [Athenz authentication](security-athenz.md)
-- [Kerberos authentication](security-kerberos.md)
-- [JSON Web Token (JWT) authentication](security-jwt.md)
-- [OAuth 2.0 authentication](security-oauth2.md)
-- [HTTP basic authentication](security-basic-auth.md)
-
-
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/security-policy-and-supported-versions.md b/site2/website/versioned_docs/version-2.10.0-deprecated/security-policy-and-supported-versions.md
deleted file mode 100644
index 637147a5dc27c7..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/security-policy-and-supported-versions.md
+++ /dev/null
@@ -1,65 +0,0 @@
----
-id: security-policy-and-supported-versions
-title: Security Policy and Supported Versions
-sidebar_label: "Security Policy and Supported Versions"
----
-
-## Using Pulsar's Security Features
-
-You can find documentation on Pulsar's available security features and how to use them here:
-https://pulsar.apache.org/docs/en/security-overview/.
-
-## Security Vulnerability Process
-
-The Pulsar community follows the ASF [security vulnerability handling process](https://apache.org/security/#vulnerability-handling).
-
-To report a new vulnerability you have discovered, please follow the [ASF security vulnerability reporting process](https://apache.org/security/#reporting-a-vulnerability). To report a vulnerability for Pulsar, contact the [Apache Security Team](https://www.apache.org/security/). When reporting a vulnerability to [security@apache.org](mailto:security@apache.org), you can copy your email to [private@pulsar.apache.org](mailto:private@pulsar.apache.org) to send your report to the Apache Pulsar Project Management Committee. This is a private mailing list.
- -It is the responsibility of the security vulnerability handling project team (Apache Pulsar PMC in most cases) to make public security vulnerability announcements. You can follow announcements on the [users@pulsar.apache.org](mailto:users@pulsar.apache.org) mailing list. For instructions on how to subscribe, please see https://pulsar.apache.org/contact/. - -## Versioning Policy - -The Pulsar project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.html). Existing releases can expect -patches for bugs and security vulnerabilities. New features will target minor releases. - -When upgrading an existing cluster, it is important to upgrade components linearly through each minor version. For -example, when upgrading from 2.8.x to 2.10.x, it is important to upgrade to 2.9.x before going to 2.10.x. - -## Supported Versions - -Feature release branches will be maintained with security fix and bug fix releases for a period of at least 12 months -after initial release. For example, branch 2.5.x is no longer considered maintained as of January 2021, 12 months after -the release of 2.5.0 in January 2020. No more 2.5.x releases should be expected at this point, even to fix security -vulnerabilities. - -Note that a minor version can be maintained past it's 12 month initial support period. For example, version 2.7 is still -actively maintained. - -Security fixes will be given priority when it comes to back porting fixes to older versions that are within the -supported time window. It is challenging to decide which bug fixes to back port to old versions. As such, the latest -versions will have the most bug fixes. - -When 3.0.0 is released, the community will decide how to continue supporting 2.x. It is possible that the last minor -release within 2.x will be maintained for longer as an "LTS" release, but it has not been officially decided. - -The following table shows version support timelines and will be updated with each release. - -| Version | Supported | Initial Release | At Least Until | -|:-------:|:------------------:|:---------------:|:--------------:| -| 2.10.x | :white_check_mark: | April 2022 | April 2023 | -| 2.9.x | :white_check_mark: | November 2021 | November 2022 | -| 2.8.x | :white_check_mark: | June 2021 | June 2022 | -| 2.7.x | :white_check_mark: | November 2020 | November 2021 | -| 2.6.x | :x: | June 2020 | June 2021 | -| 2.5.x | :x: | January 2020 | January 2021 | -| 2.4.x | :x: | July 2019 | July 2020 | -| < 2.3.x | :x: | - | - | - -If there is ambiguity about which versions of Pulsar are actively supported, please ask on the [users@pulsar.apache.org](mailto:users@pulsar.apache.org) -mailing list. - -## Release Frequency - -With the acceptance of [PIP-47 - A Time Based Release Plan](https://github.com/apache/pulsar/wiki/PIP-47%3A-Time-Based-Release-Plan), -the Pulsar community aims to complete 4 minor releases each year. Patch releases are completed based on demand as well -as need, in the event of security fixes. 
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/security-tls-authentication.md b/site2/website/versioned_docs/version-2.10.0-deprecated/security-tls-authentication.md
deleted file mode 100644
index 17da5aab34d471..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/security-tls-authentication.md
+++ /dev/null
@@ -1,222 +0,0 @@
---
id: security-tls-authentication
title: Authentication using TLS
sidebar_label: "Authentication using TLS"
original_id: security-tls-authentication
---

## TLS authentication overview

TLS authentication is an extension of [TLS transport encryption](security-tls-transport.md). Not only do servers have keys and certs that the client uses to verify their identity, but clients also have keys and certs that the server uses to verify theirs. You must have TLS transport encryption configured on your cluster before you can use TLS authentication. This guide assumes you already have TLS transport encryption configured.

`Bouncy Castle Provider` provides TLS-related cipher suites and algorithms in Pulsar. If you need the [FIPS](https://www.bouncycastle.org/fips_faq.html) version of `Bouncy Castle Provider`, please refer to the [Bouncy Castle page](security-bouncy-castle.md).

### Create client certificates

Client certificates are generated using the same certificate authority that generates the server certificates.

The biggest difference between client certs and server certs is that the **common name** for the client certificate is the **role token** which that client is authenticated as.

To use client certificates, you need to set `tlsRequireTrustedClientCertOnConnect=true` on the broker side. For details, refer to [TLS broker configuration](security-tls-transport.md#configure-broker).

First, you need to enter the following command to generate the key:

```bash
$ openssl genrsa -out admin.key.pem 2048
```

Similar to the broker, the client expects the key to be in [PKCS 8](https://en.wikipedia.org/wiki/PKCS_8) format, so you need to convert it by entering the following command:

```bash
$ openssl pkcs8 -topk8 -inform PEM -outform PEM \
      -in admin.key.pem -out admin.key-pk8.pem -nocrypt
```

Next, enter the command below to generate the certificate request. When you are asked for a **common name**, enter the **role token** that you want this key pair to authenticate a client as.

```bash
$ openssl req -config openssl.cnf \
      -key admin.key.pem -new -sha256 -out admin.csr.pem
```

:::note

If openssl.cnf is not specified, read [Certificate authority](security-tls-transport.md#certificate-authority) to get the openssl.cnf.

:::

Then, enter the command below to sign the request with the certificate authority. Note that the client cert uses the **usr_cert** extension, which allows the cert to be used for client authentication.

```bash
$ openssl ca -config openssl.cnf -extensions usr_cert \
      -days 1000 -notext -md sha256 \
      -in admin.csr.pem -out admin.cert.pem
```

This command gives you a cert, `admin.cert.pem`, and a key, `admin.key-pk8.pem`. With `ca.cert.pem`, clients can use this cert and this key to authenticate themselves to brokers and proxies as the role token ``admin``.

:::note

If the "unable to load CA private key" error occurs in this step because "/etc/pki/CA/private/cakey.pem" does not exist,
try the commands below to generate `cakey.pem`:

```bash
$ cd /etc/pki/tls/misc/CA
$ ./CA -newca
```

:::

## Enable TLS authentication on brokers

To configure brokers to authenticate clients, add the following parameters to `broker.conf`, alongside [the configuration to enable tls transport](security-tls-transport.md#broker-configuration):

```properties
# Configuration to enable authentication
authenticationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderTls

# Role names that are treated as "super-users", which can do all admin
# operations and publish/consume from all topics
superUserRoles=admin

# Authentication settings of the broker itself. Used when the broker connects to other brokers, either in same or other clusters
brokerClientTlsEnabled=true
brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationTls
brokerClientAuthenticationParameters={"tlsCertFile":"/path/my-ca/admin.cert.pem","tlsKeyFile":"/path/my-ca/admin.key-pk8.pem"}
brokerClientTrustCertsFilePath=/path/my-ca/certs/ca.cert.pem
```

## Enable TLS authentication on proxies

The proxy should have its own client key pair for connecting to brokers. You need to configure the role token for this key pair in the ``proxyRoles`` of the brokers. See the [authorization guide](security-authorization.md) for more details.

To configure proxies to authenticate clients, add the following parameters to `proxy.conf`, alongside [the configuration to enable tls transport](security-tls-transport.md#proxy-configuration):

```properties
# For clients connecting to the proxy
authenticationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderTls

# For the proxy to connect to brokers
brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationTls
brokerClientAuthenticationParameters=tlsCertFile:/path/to/proxy.cert.pem,tlsKeyFile:/path/to/proxy.key-pk8.pem
```

## Client configuration

When you use TLS authentication, the client connects via TLS transport. You need to configure the client to use ```https://``` and port 8443 for the web service URL, and ```pulsar+ssl://``` and port 6651 for the broker service URL.

### CLI tools

[Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-pulsar-admin.md), [`pulsar-perf`](reference-cli-tools.md#pulsar-perf), and [`pulsar-client`](reference-cli-tools.md#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation.
You need to add the following parameters to that file to use TLS authentication with the CLI tools of Pulsar:

```properties
webServiceUrl=https://broker.example.com:8443/
brokerServiceUrl=pulsar+ssl://broker.example.com:6651/
useTls=true
tlsAllowInsecureConnection=false
tlsTrustCertsFilePath=/path/to/ca.cert.pem
authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationTls
authParams=tlsCertFile:/path/to/my-role.cert.pem,tlsKeyFile:/path/to/my-role.key-pk8.pem
```

### Java client

```java
import org.apache.pulsar.client.api.PulsarClient;

PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar+ssl://broker.example.com:6651/")
    .enableTls(true)
    .tlsTrustCertsFilePath("/path/to/ca.cert.pem")
    .authentication("org.apache.pulsar.client.impl.auth.AuthenticationTls",
                    "tlsCertFile:/path/to/my-role.cert.pem,tlsKeyFile:/path/to/my-role.key-pk8.pem")
    .build();
```

### Python client

```python
from pulsar import Client, AuthenticationTLS

auth = AuthenticationTLS("/path/to/my-role.cert.pem", "/path/to/my-role.key-pk8.pem")
client = Client("pulsar+ssl://broker.example.com:6651/",
                tls_trust_certs_file_path="/path/to/ca.cert.pem",
                tls_allow_insecure_connection=False,
                authentication=auth)
```

### C++ client

```c++
#include <pulsar/Client.h>

pulsar::ClientConfiguration config;
config.setUseTls(true);
config.setTlsTrustCertsFilePath("/path/to/ca.cert.pem");
config.setTlsAllowInsecureConnection(false);

pulsar::AuthenticationPtr auth = pulsar::AuthTls::create("/path/to/my-role.cert.pem",
                                                         "/path/to/my-role.key-pk8.pem");
config.setAuth(auth);

pulsar::Client client("pulsar+ssl://broker.example.com:6651/", config);
```

### Node.js client

```JavaScript
const Pulsar = require('pulsar-client');

(async () => {
  const auth = new Pulsar.AuthenticationTls({
    certificatePath: '/path/to/my-role.cert.pem',
    privateKeyPath: '/path/to/my-role.key-pk8.pem',
  });

  const client = new Pulsar.Client({
    serviceUrl: 'pulsar+ssl://broker.example.com:6651/',
    authentication: auth,
    tlsTrustCertsFilePath: '/path/to/ca.cert.pem',
  });
})();
```

### C# client

```c#
var clientCertificate = new X509Certificate2("admin.pfx");
var client = PulsarClient.Builder()
    .AuthenticateUsingClientCertificate(clientCertificate)
    .Build();
```
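Because the common name doubles as the role token, it is worth sanity-checking a client certificate before distributing it. A minimal check with `openssl` (assuming the `admin.cert.pem` generated above) might look like this:

```bash
# Print the subject of the client certificate.
# The CN field is the role token the client will authenticate as.
$ openssl x509 -in admin.cert.pem -noout -subject
```

If the CN is not the role you expect, regenerate the certificate request with the correct common name.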
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/security-tls-keystore.md b/site2/website/versioned_docs/version-2.10.0-deprecated/security-tls-keystore.md
deleted file mode 100644
index 8a4654a0c33ae9..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/security-tls-keystore.md
+++ /dev/null
@@ -1,345 +0,0 @@
---
id: security-tls-keystore
title: Using TLS with KeyStore configure
sidebar_label: "Using TLS with KeyStore configure"
original_id: security-tls-keystore
---

## Overview

Apache Pulsar supports [TLS encryption](security-tls-transport.md) and [TLS authentication](security-tls-authentication.md) between clients and the Apache Pulsar service. By default, it uses PEM-format file configuration. This page describes how to use the [KeyStore](https://en.wikipedia.org/wiki/Java_KeyStore) type configuration for TLS.

## TLS encryption with KeyStore configure

### Generate TLS key and certificate

The first step of deploying TLS is to generate the key and the certificate for each machine in the cluster. You can use Java's `keytool` utility to accomplish this task. We will initially generate the key into a temporary keystore for the broker, so that we can export and sign it later with the CA.

```shell
keytool -keystore broker.keystore.jks -alias localhost -validity {validity} -genkeypair -keyalg RSA
```

You need to specify two parameters in the above command:

1. `keystore`: the keystore file that stores the certificate. The *keystore* file contains the private key of the certificate; hence, it needs to be kept safely.
2. `validity`: the valid time of the certificate in days.

> Ensure that the common name (CN) matches exactly with the fully qualified domain name (FQDN) of the server.
The client compares the CN with the DNS domain name to ensure that it is indeed connecting to the desired server, not a malicious one.

### Creating your own CA

After the first step, each broker in the cluster has a public-private key pair and a certificate to identify the machine. The certificate, however, is unsigned, which means that an attacker can create such a certificate to pretend to be any machine.

Therefore, it is important to prevent forged certificates by signing them for each machine in the cluster. A `certificate authority (CA)` is responsible for signing certificates. A CA works like a government that issues passports — the government stamps (signs) each passport so that the passport becomes difficult to forge. Other governments verify the stamps to ensure the passport is authentic. Similarly, the CA signs the certificates, and the cryptography guarantees that a signed certificate is computationally difficult to forge. Thus, as long as the CA is a genuine and trusted authority, the clients have high assurance that they are connecting to the authentic machines.

```shell
openssl req -new -x509 -keyout ca-key -out ca-cert -days 365
```

The generated CA is simply a *public-private* key pair and certificate, and it is intended to sign other certificates.

The next step is to add the generated CA to the clients' truststore so that the clients can trust this CA:

```shell
keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert
```

NOTE: If you configure the brokers to require client authentication by setting `tlsRequireTrustedClientCertOnConnect` to `true` in the broker configuration, then you must also provide a truststore for the brokers, and it should contain all the CA certificates that client keys were signed by.

```shell
keytool -keystore broker.truststore.jks -alias CARoot -import -file ca-cert
```

In contrast to the keystore, which stores each machine's own identity, the truststore of a client stores all the certificates that the client should trust. Importing a certificate into one's truststore also means trusting all certificates that are signed by that certificate. As in the analogy above, trusting the government (CA) also means trusting all passports (certificates) that it has issued. This attribute is called the chain of trust, and it is particularly useful when deploying TLS on a large BookKeeper cluster. You can sign all certificates in the cluster with a single CA, and have all machines share the same truststore that trusts the CA. That way all machines can authenticate all other machines.

### Signing the certificate

The next step is to sign all certificates in the keystore with the CA we generated.
First, you need to export the certificate from the keystore:

```shell
keytool -keystore broker.keystore.jks -alias localhost -certreq -file cert-file
```

Then sign it with the CA:

```shell
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days {validity} -CAcreateserial -passin pass:{ca-password}
```

Finally, you need to import both the certificate of the CA and the signed certificate into the keystore:

```shell
keytool -keystore broker.keystore.jks -alias CARoot -import -file ca-cert
keytool -keystore broker.keystore.jks -alias localhost -import -file cert-signed
```

The definitions of the parameters are the following:

1. `keystore`: the location of the keystore
2. `ca-cert`: the certificate of the CA
3. `ca-key`: the private key of the CA
4. `ca-password`: the passphrase of the CA
5. `cert-file`: the exported, unsigned certificate of the broker
6. `cert-signed`: the signed certificate of the broker

### Configuring brokers

Brokers enable TLS by providing valid `brokerServicePortTls` and `webServicePortTls` values, and they also need `tlsEnabledWithKeyStore` set to `true` to use the KeyStore type configuration. In addition, the KeyStore path, KeyStore password, TrustStore path, and TrustStore password need to be provided. Since the broker creates an internal client/admin client to communicate with other brokers, you also need to provide configuration for them; this is similar to how you configure an external client/admin client. If `tlsRequireTrustedClientCertOnConnect` is `true`, the broker rejects the connection if the client certificate is not trusted.

The following TLS configs are needed on the broker side:

```properties
tlsEnabledWithKeyStore=true
# key store
tlsKeyStoreType=JKS
tlsKeyStore=/var/private/tls/broker.keystore.jks
tlsKeyStorePassword=brokerpw

# trust store
tlsTrustStoreType=JKS
tlsTrustStore=/var/private/tls/broker.truststore.jks
tlsTrustStorePassword=brokerpw

# internal client/admin-client config
brokerClientTlsEnabled=true
brokerClientTlsEnabledWithKeyStore=true
brokerClientTlsTrustStoreType=JKS
brokerClientTlsTrustStore=/var/private/tls/client.truststore.jks
brokerClientTlsTrustStorePassword=clientpw
```

NOTE: it is important to restrict access to the store files via filesystem permissions.

If you have configured TLS on the broker, you can disable the non-TLS ports by setting the values of the following configurations to empty, as below.

```
brokerServicePort=
webServicePort=
```

In this case, you need to set the following configurations.

```conf
brokerClientTlsEnabled=true // Set this to true
brokerClientTlsEnabledWithKeyStore=true  // Set this to true
brokerClientTlsTrustStore= // Set this to your desired value
brokerClientTlsTrustStorePassword= // Set this to your desired value
```

Optional settings that may be worth considering:

1. tlsClientAuthentication=false: Enable/Disable using TLS for authentication. This config, when enabled, will authenticate the other end of the communication channel. It should be enabled on both brokers and clients for mutual TLS.
2. tlsCiphers=[TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256]: A cipher suite is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using the TLS network protocol. By default, it is null.
   [OpenSSL Ciphers](https://www.openssl.org/docs/man1.0.2/apps/ciphers.html)
   [JDK Ciphers](http://docs.oracle.com/javase/8/docs/technotes/guides/security/StandardNames.html#ciphersuites)
3. tlsProtocols=[TLSv1.3,TLSv1.2]: list out the TLS protocols that you are going to accept from clients. By default, it is not set.

### Configuring Clients

This is similar to [TLS encryption configuring for clients with the PEM type](security-tls-transport.md#client-configuration). For a minimal configuration, you need to provide the TrustStore information.

For example:
1. for [Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-cli-tools#pulsar-admin), [`pulsar-perf`](reference-cli-tools#pulsar-perf), and [`pulsar-client`](reference-cli-tools#pulsar-client), use the `conf/client.conf` config file in a Pulsar installation.

   ```properties
   webServiceUrl=https://broker.example.com:8443/
   brokerServiceUrl=pulsar+ssl://broker.example.com:6651/
   useKeyStoreTls=true
   tlsTrustStoreType=JKS
   tlsTrustStorePath=/var/private/tls/client.truststore.jks
   tlsTrustStorePassword=clientpw
   ```

1. for the Java client

   ```java
   import org.apache.pulsar.client.api.PulsarClient;

   PulsarClient client = PulsarClient.builder()
       .serviceUrl("pulsar+ssl://broker.example.com:6651/")
       .enableTls(true)
       .useKeyStoreTls(true)
       .tlsTrustStorePath("/var/private/tls/client.truststore.jks")
       .tlsTrustStorePassword("clientpw")
       .allowTlsInsecureConnection(false)
       .build();
   ```

1. for the Java admin client

   ```java
   PulsarAdmin admin = PulsarAdmin.builder().serviceHttpUrl("https://broker.example.com:8443")
       .useKeyStoreTls(true)
       .tlsTrustStorePath("/var/private/tls/client.truststore.jks")
       .tlsTrustStorePassword("clientpw")
       .allowTlsInsecureConnection(false)
       .build();
   ```

> **Note:** Please configure `tlsTrustStorePath` when you set `useKeyStoreTls` to `true`.

## TLS authentication with KeyStore configure

This is similar to [TLS authentication with the PEM type](security-tls-authentication.md).

### broker authentication config

`broker.conf`

```properties
# Configuration to enable authentication
authenticationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderTls

# this should be the CN for one of the client keystores.
superUserRoles=admin

# Enable KeyStore type
tlsEnabledWithKeyStore=true
requireTrustedClientCertOnConnect=true

# key store
tlsKeyStoreType=JKS
tlsKeyStore=/var/private/tls/broker.keystore.jks
tlsKeyStorePassword=brokerpw

# trust store
tlsTrustStoreType=JKS
tlsTrustStore=/var/private/tls/broker.truststore.jks
tlsTrustStorePassword=brokerpw

# internal client/admin-client config
brokerClientTlsEnabled=true
brokerClientTlsEnabledWithKeyStore=true
brokerClientTlsTrustStoreType=JKS
brokerClientTlsTrustStore=/var/private/tls/client.truststore.jks
brokerClientTlsTrustStorePassword=clientpw
# internal auth config
brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationKeyStoreTls
brokerClientAuthenticationParameters={"keyStoreType":"JKS","keyStorePath":"/var/private/tls/client.keystore.jks","keyStorePassword":"clientpw"}
# currently the websocket service does not support the keystore type
webSocketServiceEnabled=false
```

### client authentication configuring

Besides the TLS encryption configuration, the main work is configuring the client's KeyStore, which contains a valid CN as the client role.

For example:
1. for [Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-cli-tools#pulsar-admin), [`pulsar-perf`](reference-cli-tools#pulsar-perf), and [`pulsar-client`](reference-cli-tools#pulsar-client), use the `conf/client.conf` config file in a Pulsar installation.

   ```properties
   webServiceUrl=https://broker.example.com:8443/
   brokerServiceUrl=pulsar+ssl://broker.example.com:6651/
   useKeyStoreTls=true
   tlsTrustStoreType=JKS
   tlsTrustStorePath=/var/private/tls/client.truststore.jks
   tlsTrustStorePassword=clientpw
   authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationKeyStoreTls
   authParams={"keyStoreType":"JKS","keyStorePath":"/path/to/keystorefile","keyStorePassword":"keystorepw"}
   ```

1. for the Java client

   ```java
   import org.apache.pulsar.client.api.PulsarClient;

   PulsarClient client = PulsarClient.builder()
       .serviceUrl("pulsar+ssl://broker.example.com:6651/")
       .enableTls(true)
       .useKeyStoreTls(true)
       .tlsTrustStorePath("/var/private/tls/client.truststore.jks")
       .tlsTrustStorePassword("clientpw")
       .allowTlsInsecureConnection(false)
       .authentication(
               "org.apache.pulsar.client.impl.auth.AuthenticationKeyStoreTls",
               "keyStoreType:JKS,keyStorePath:/var/private/tls/client.keystore.jks,keyStorePassword:clientpw")
       .build();
   ```

1. for the Java admin client

   ```java
   PulsarAdmin admin = PulsarAdmin.builder().serviceHttpUrl("https://broker.example.com:8443")
       .useKeyStoreTls(true)
       .tlsTrustStorePath("/var/private/tls/client.truststore.jks")
       .tlsTrustStorePassword("clientpw")
       .allowTlsInsecureConnection(false)
       .authentication(
               "org.apache.pulsar.client.impl.auth.AuthenticationKeyStoreTls",
               "keyStoreType:JKS,keyStorePath:/var/private/tls/client.keystore.jks,keyStorePassword:clientpw")
       .build();
   ```

> **Note:** Please configure `tlsTrustStorePath` when you set `useKeyStoreTls` to `true`.

## Enabling TLS Logging

You can enable TLS debug logging at the JVM level by starting the brokers and/or clients with the `javax.net.debug` system property. For example:

```shell
-Djavax.net.debug=all
```

You can find more details in the Oracle documentation on [debugging SSL/TLS connections](http://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/ReadDebug.html).
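If `all` is too noisy, `javax.net.debug` accepts narrower values. As a sketch, assuming the standard distribution scripts (where `conf/pulsar_env.sh` feeds extra JVM options to the broker via `PULSAR_EXTRA_OPTS`), you might enable only handshake tracing:

```shell
# In conf/pulsar_env.sh: trace TLS handshakes only, instead of all TLS activity.
PULSAR_EXTRA_OPTS="${PULSAR_EXTRA_OPTS} -Djavax.net.debug=ssl:handshake"
```

Handshake tracing is usually enough to diagnose keystore and truststore mismatches without flooding the logs.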
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/security-tls-transport.md b/site2/website/versioned_docs/version-2.10.0-deprecated/security-tls-transport.md
deleted file mode 100644
index c3fc81e7393adf..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/security-tls-transport.md
+++ /dev/null
@@ -1,313 +0,0 @@
---
id: security-tls-transport
title: Transport Encryption using TLS
sidebar_label: "Transport Encryption using TLS"
original_id: security-tls-transport
---

## TLS overview

By default, Apache Pulsar clients communicate with the Apache Pulsar service in plain text. This means that all data is sent in the clear. You can use TLS to encrypt this traffic to protect it from the snooping of a man-in-the-middle attacker.

You can also configure TLS for both encryption and authentication. Use this guide to configure just TLS transport encryption and refer to [here](security-tls-authentication.md) for TLS authentication configuration. Alternatively, you can use [another authentication mechanism](security-athenz.md) on top of TLS transport encryption.

> Note that enabling TLS may impact performance due to encryption overhead.

## TLS concepts

TLS is a form of [public key cryptography](https://en.wikipedia.org/wiki/Public-key_cryptography). Encryption is performed using key pairs consisting of a public key and a private key: the public key encrypts the messages and the private key decrypts them.

To use TLS transport encryption, you need two kinds of key pairs, **server key pairs** and a **certificate authority**.

You can use a third kind of key pair, **client key pairs**, for [client authentication](security-tls-authentication.md).

You should store the **certificate authority** private key in a very secure location (a fully encrypted, disconnected, air-gapped computer). As for the certificate authority public key, the **trust cert**, you can share it freely.

For both client and server key pairs, the administrator first generates a private key and a certificate request, then uses the certificate authority private key to sign the certificate request, and finally generates a certificate. This certificate is the public key for the server/client key pair.

For TLS transport encryption, the clients can use the **trust cert** to verify that the server has a key pair that the certificate authority signed when the clients are talking to the server. A man-in-the-middle attacker does not have access to the certificate authority, so they cannot create a server with such a key pair.

For TLS authentication, the server uses the **trust cert** to verify that the client has a key pair that the certificate authority signed. The common name of the **client cert** is then used as the client's role token (see [Overview](security-overview.md)).

`Bouncy Castle Provider` provides cipher suites and algorithms in Pulsar. If you need the [FIPS](https://www.bouncycastle.org/fips_faq.html) version of `Bouncy Castle Provider`, please refer to the [Bouncy Castle page](security-bouncy-castle.md).

## Create TLS certificates

Creating TLS certificates for Pulsar involves creating a [certificate authority](#certificate-authority) (CA), [server certificate](#server-certificate), and [client certificate](#client-certificate).

Follow the guide below to set up a certificate authority. You can also refer to plenty of resources on the internet for more details. We recommend [this guide](https://jamielinux.com/docs/openssl-certificate-authority/index.html) for your detailed reference.

### Certificate authority

1. Create the certificate for the CA. You can use the CA to sign both the broker and client certificates. This ensures that each party will trust the others. You should store the CA in a very secure location (ideally completely disconnected from networks, air-gapped, and fully encrypted).

2. Enter the following command to create a directory for your CA, and place [this openssl configuration file](https://github.com/apache/pulsar/tree/master/site2/website/static/examples/openssl.cnf) in the directory. You may want to modify the default answers for company name and department in the configuration file. Export the location of the CA directory to the environment variable, CA_HOME. The configuration file uses this environment variable to find the rest of the files and directories that the CA needs.

```bash
mkdir my-ca
cd my-ca
wget https://raw.githubusercontent.com/apache/pulsar-site/main/site2/website/static/examples/openssl.cnf
export CA_HOME=$(pwd)
```
3. Enter the commands below to create the necessary directories, keys and certs.

```bash
mkdir certs crl newcerts private
chmod 700 private/
touch index.txt
echo 1000 > serial
openssl genrsa -aes256 -out private/ca.key.pem 4096
# You need to enter a password in the command above
chmod 400 private/ca.key.pem
openssl req -config openssl.cnf -key private/ca.key.pem \
    -new -x509 -days 7300 -sha256 -extensions v3_ca \
    -out certs/ca.cert.pem
# You must enter the same password in the previous openssl command
chmod 444 certs/ca.cert.pem
```

:::tip

The default `openssl` on macOS doesn't work for the commands above. You must upgrade the `openssl` via Homebrew:

```bash
brew install openssl
export PATH="/usr/local/Cellar/openssl@3/3.0.1/bin:$PATH"
```

The version `3.0.1` might change in the future. Use the actual path from the output of the `brew install` command.

:::

4. After you answer the question prompts, CA-related files are stored in the `./my-ca` directory. Within that directory:

* `certs/ca.cert.pem` is the public certificate. This public certificate is meant to be distributed to all parties involved.
* `private/ca.key.pem` is the private key. You only need it when signing a new certificate for either brokers or clients, and you must guard this private key safely.

### Server certificate

Once you have created a CA certificate, you can create certificate requests and sign them with the CA.

The following commands ask you a few questions and then create the certificates. When you are asked for the common name, you should match the hostname of the broker. You can also use a wildcard to match a group of broker hostnames, for example, `*.broker.usw.example.com`. This ensures that multiple machines can reuse the same certificate.

:::tip

Sometimes matching the hostname is not possible or makes no sense, such as when you create the brokers with random hostnames, or you plan to connect to the hosts via their IP. In these cases, you should configure the client to disable TLS hostname verification. For more details, you can see [the hostname verification section in client configuration](#hostname-verification).

:::

1. Enter the command below to generate the key.

```bash
openssl genrsa -out broker.key.pem 2048
```

The broker expects the key to be in [PKCS 8](https://en.wikipedia.org/wiki/PKCS_8) format, so enter the following command to convert it.

```bash
openssl pkcs8 -topk8 -inform PEM -outform PEM \
    -in broker.key.pem -out broker.key-pk8.pem -nocrypt
```

2. Enter the following command to generate the certificate request.

```bash
openssl req -config openssl.cnf \
    -key broker.key.pem -new -sha256 -out broker.csr.pem
```

3. Sign it with the certificate authority by entering the command below.

```bash
openssl ca -config openssl.cnf -extensions server_cert \
    -days 1000 -notext -md sha256 \
    -in broker.csr.pem -out broker.cert.pem
```

At this point, you have a cert, `broker.cert.pem`, and a key, `broker.key-pk8.pem`, which you can use along with `ca.cert.pem` to configure TLS transport encryption for your broker and proxy nodes.
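Before wiring these files into the broker, it can be worth confirming that the signed certificate actually chains back to your CA. A quick sanity check with `openssl` (assuming the file layout created above, run from the CA directory) might look like this:

```bash
# Verify that broker.cert.pem was signed by the CA generated earlier.
# A successful check prints "broker.cert.pem: OK".
$ openssl verify -CAfile certs/ca.cert.pem broker.cert.pem
```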
## Configure broker

To configure a Pulsar [broker](reference-terminology.md#broker) to use TLS transport encryption, you need to make some changes to `broker.conf`, which is located in the `conf` directory of your [Pulsar installation](getting-started-standalone.md).

Add these values to the configuration file (substituting the appropriate certificate paths where necessary):

```properties
brokerServicePortTls=6651
webServicePortTls=8081
tlsRequireTrustedClientCertOnConnect=true
tlsCertificateFilePath=/path/to/broker.cert.pem
tlsKeyFilePath=/path/to/broker.key-pk8.pem
tlsTrustCertsFilePath=/path/to/ca.cert.pem
```

> You can find a full list of parameters available in the `conf/broker.conf` file,
> as well as the default values for those parameters, in [Broker Configuration](reference-configuration.md#broker)

### TLS Protocol Version and Cipher

You can configure the broker (and proxy) to require specific TLS protocol versions and ciphers for TLS negotiation. You can use the TLS protocol versions and ciphers to stop clients from requesting downgraded TLS protocol versions or ciphers that may have weaknesses.

Both the TLS protocol versions and cipher properties can take multiple values, separated by commas. The possible values for protocol version and ciphers depend on the TLS provider that you are using. Pulsar uses OpenSSL if it is available, and otherwise falls back to the JDK implementation.

```properties
tlsProtocols=TLSv1.3,TLSv1.2
tlsCiphers=TLS_DH_RSA_WITH_AES_256_GCM_SHA384,TLS_DH_RSA_WITH_AES_256_CBC_SHA
```

OpenSSL currently supports ```TLSv1.1```, ```TLSv1.2``` and ```TLSv1.3``` for the protocol version. You can acquire a list of supported ciphers from the openssl ciphers command, for example ```openssl ciphers -tls1_3```.

For JDK 11, you can obtain a list of supported values from the documentation:
- [TLS protocol](https://docs.oracle.com/en/java/javase/11/security/oracle-providers.html#GUID-7093246A-31A3-4304-AC5F-5FB6400405E2__SUNJSSEPROVIDERPROTOCOLPARAMETERS-BBF75009)
- [Ciphers](https://docs.oracle.com/en/java/javase/11/security/oracle-providers.html#GUID-7093246A-31A3-4304-AC5F-5FB6400405E2__SUNJSSE_CIPHER_SUITES)

## Proxy Configuration

Proxies need to configure TLS in two directions: for clients connecting to the proxy, and for the proxy connecting to brokers.

```properties
# For clients connecting to the proxy
tlsEnabledInProxy=true
tlsCertificateFilePath=/path/to/broker.cert.pem
tlsKeyFilePath=/path/to/broker.key-pk8.pem
tlsTrustCertsFilePath=/path/to/ca.cert.pem

# For the proxy to connect to brokers
tlsEnabledWithBroker=true
brokerClientTrustCertsFilePath=/path/to/ca.cert.pem
```

## Client configuration

When you enable TLS transport encryption, you need to configure the client to use ```https://``` and port 8443 for the web service URL, and ```pulsar+ssl://``` and port 6651 for the broker service URL.

As the server certificate that you generated above does not belong to any of the default trust chains, you also need to either specify the path to the **trust cert** (recommended), or tell the client to allow untrusted server certs.

### Hostname verification

Hostname verification is a TLS security feature whereby a client can refuse to connect to a server if the "CommonName" does not match the hostname to which the client is connecting. By default, Pulsar clients disable hostname verification, as it requires that each broker has a DNS record and a unique cert.

Moreover, as the administrator has full control of the certificate authority, a bad actor is unlikely to be able to pull off a man-in-the-middle attack.

"allowInsecureConnection" allows the client to connect to servers whose cert has not been signed by an approved CA. The client disables "allowInsecureConnection" by default, and you should always disable it in production environments. As long as you disable "allowInsecureConnection", a man-in-the-middle attack requires that the attacker has access to the CA.

One scenario where you may want to enable hostname verification is where you have multiple proxy nodes behind a VIP, and the VIP has a DNS record, for example, pulsar.mycompany.com. In this case, you can generate a TLS cert with pulsar.mycompany.com as the "CommonName," and then enable hostname verification on the client.

The examples below show that hostname verification is disabled for the CLI tools/Java/Python/C++/Node.js/C# clients by default.

### CLI tools

[Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-cli-tools.md#pulsar-admin), [`pulsar-perf`](reference-cli-tools.md#pulsar-perf), and [`pulsar-client`](reference-cli-tools.md#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation.

You need to add the following parameters to that file to use TLS transport with the CLI tools of Pulsar:

```properties
webServiceUrl=https://broker.example.com:8443/
brokerServiceUrl=pulsar+ssl://broker.example.com:6651/
useTls=true
tlsAllowInsecureConnection=false
tlsTrustCertsFilePath=/path/to/ca.cert.pem
tlsEnableHostnameVerification=false
```

#### Java client

```java
import org.apache.pulsar.client.api.PulsarClient;

PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar+ssl://broker.example.com:6651/")
    .enableTls(true)
    .tlsTrustCertsFilePath("/path/to/ca.cert.pem")
    .enableTlsHostnameVerification(false) // false by default, in any case
    .allowTlsInsecureConnection(false) // false by default, in any case
    .build();
```

#### Python client

```python
from pulsar import Client

client = Client("pulsar+ssl://broker.example.com:6651/",
                tls_hostname_verification=False,
                tls_trust_certs_file_path="/path/to/ca.cert.pem",
                tls_allow_insecure_connection=False)  # defaults to false from v2.2.0 onwards
```

#### C++ client

```c++
#include <pulsar/Client.h>

pulsar::ClientConfiguration config = pulsar::ClientConfiguration();
config.setUseTls(true); // shouldn't be needed soon
config.setTlsTrustCertsFilePath(caPath);
config.setTlsAllowInsecureConnection(false);
config.setAuth(pulsar::AuthTls::create(clientPublicKeyPath, clientPrivateKeyPath));
config.setValidateHostName(false);
```

#### Node.js client

```JavaScript
const Pulsar = require('pulsar-client');

(async () => {
  const client = new Pulsar.Client({
    serviceUrl: 'pulsar+ssl://broker.example.com:6651/',
    tlsTrustCertsFilePath: '/path/to/ca.cert.pem',
    useTls: true,
    tlsValidateHostname: false,
    tlsAllowInsecureConnection: false,
  });
})();
```

#### C# client

```c#
var certificate = new X509Certificate2("ca.cert.pem");
var client = PulsarClient.Builder()
    .TrustedCertificateAuthority(certificate) // If the CA is not trusted on the host, you can add it explicitly.
    .VerifyCertificateAuthority(true) // Default is 'true'
    .VerifyCertificateName(false) // Default is 'false'
    .Build();
```

> Note that `VerifyCertificateName` refers to the configuration of hostname verification in the C# client.
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/security-token-admin.md b/site2/website/versioned_docs/version-2.10.0-deprecated/security-token-admin.md
deleted file mode 100644
index a265f6320d28fe..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/security-token-admin.md
+++ /dev/null
@@ -1,183 +0,0 @@
---
id: security-token-admin
title: Token authentication admin
sidebar_label: "Token authentication admin"
original_id: security-token-admin
---

## Token Authentication Overview

Pulsar supports authenticating clients using security tokens that are based on [JSON Web Tokens](https://jwt.io/introduction/) ([RFC-7519](https://tools.ietf.org/html/rfc7519)).

Tokens are used to identify a Pulsar client and associate it with some "principal" (or "role"), which is then granted permissions to do some actions (for example, publish to or consume from a topic).

A user will typically be given a token string by an administrator (or some automated service).

The compact representation of a signed JWT is a string that looks like:

```
eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY
```

The application specifies the token when creating the client instance. An alternative is to pass a "token supplier", that is to say, a function that returns the token when the client library needs one.

> #### Always use TLS transport encryption
> Sending a token is equivalent to sending a password over the wire. It is strongly recommended to
> always use TLS encryption when talking to the Pulsar service. See
> [Transport Encryption using TLS](security-tls-transport.md)

## Secret vs Public/Private keys

JWT supports two different kinds of keys in order to generate and validate the tokens:

 * Symmetric: there is a single ***Secret*** key that is used both to generate and validate tokens.
 * Asymmetric: there is a pair of keys.
   - The ***Private*** key is used to generate tokens.
   - The ***Public*** key is used to validate tokens.

### Secret key

When using a secret key, the administrator creates the key and uses it to generate the client tokens. This key is also configured on the brokers to allow them to validate the clients.

#### Creating a secret key

> The output file is generated in the root of your Pulsar installation directory. You can also provide an absolute path for the output file.

```shell
$ bin/pulsar tokens create-secret-key --output my-secret.key
```

To generate a base64-encoded secret key:

```shell
$ bin/pulsar tokens create-secret-key --output /opt/my-secret.key --base64
```

### Public/Private keys

With public/private keys, you need to create a pair of keys. Pulsar supports all algorithms supported by the Java JWT library shown [here](https://github.com/jwtk/jjwt#signature-algorithms-keys).

#### Creating a key pair

> The output files are generated in the root of your Pulsar installation directory. You can also provide an absolute path for the output files.

```shell
$ bin/pulsar tokens create-key-pair --output-private-key my-private.key --output-public-key my-public.key
```

 * `my-private.key` should be stored in a safe location and used only by the administrator to generate new tokens.
 * `my-public.key` is distributed to all Pulsar brokers. This file can be publicly shared without any security concern.

## Generating tokens

A token is the credential associated with a user. The association is done through the "principal", or "role".
In the case of JWT tokens, this field is typically referred to as the **subject**, though it is exactly the same concept.

The generated token is then required to have a **subject** field set.

```shell
$ bin/pulsar tokens create --secret-key file:///path/to/my-secret.key \
    --subject test-user
```

This command prints the token string on stdout.

Similarly, one can create a token by passing the "private" key:

```shell
$ bin/pulsar tokens create --private-key file:///path/to/my-private.key \
    --subject test-user
```

Finally, a token can also be created with a pre-defined TTL. After that time, the token is automatically invalidated.

```shell
$ bin/pulsar tokens create --secret-key file:///path/to/my-secret.key \
    --subject test-user \
    --expiry-time 1y
```

## Authorization

The token itself doesn't have any permissions associated with it. Those are determined by the authorization engine. Once the token is created, one can grant permissions for this token to do certain actions. For example:

```shell
$ bin/pulsar-admin namespaces grant-permission my-tenant/my-namespace \
    --role test-user \
    --actions produce,consume
```

## Enabling Token Authentication ...

### ... on Brokers

To configure brokers to authenticate clients, put the following in `broker.conf`:

```properties
# Configuration to enable authentication and authorization
authenticationEnabled=true
authorizationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken

# If using secret key (Note: key files must be DER-encoded)
tokenSecretKey=file:///path/to/secret.key
# The key can also be passed inline:
# tokenSecretKey=data:;base64,FLFyW0oLJ2Fi22KKCm21J18mbAdztfSHN/lAT5ucEKU=

# If using public/private (Note: key files must be DER-encoded)
# tokenPublicKey=file:///path/to/public.key
```

### ... on Proxies

The proxy has its own token used when talking to brokers. The role of this token should be configured in the ``proxyRoles`` of the brokers. See the [authorization guide](security-authorization.md) for more details.

To configure proxies to authenticate clients, put the following in `proxy.conf`:

```properties
# For clients connecting to the proxy
authenticationEnabled=true
authorizationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken
tokenSecretKey=file:///path/to/secret.key

# For the proxy to connect to brokers
brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken
brokerClientAuthenticationParameters={"token":"eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0LXVzZXIifQ.9OHgE9ZUDeBTZs7nSMEFIuGNEX18FLR3qvy8mqxSxXw"}
# Or, alternatively, read token from file
# brokerClientAuthenticationParameters=file:///path/to/proxy-token.txt
```
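On the application side, the client presents the token through the token authentication plugin. A minimal Java sketch (reusing the illustrative token string from the proxy example above):

```java
import org.apache.pulsar.client.api.AuthenticationFactory;
import org.apache.pulsar.client.api.PulsarClient;

// Pass the token directly ...
PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar+ssl://broker.example.com:6651/")
    .authentication(AuthenticationFactory.token(
            "eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0LXVzZXIifQ.9OHgE9ZUDeBTZs7nSMEFIuGNEX18FLR3qvy8mqxSxXw"))
    .build();

// ... or pass a token supplier, so a fresh token can be fetched when needed:
// .authentication(AuthenticationFactory.token(() -> readTokenFromFileOrService()))
```

The supplier variant corresponds to the "token supplier" mentioned in the overview above; `readTokenFromFileOrService()` is a hypothetical helper you would implement.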
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/sql-deployment-configurations.md b/site2/website/versioned_docs/version-2.10.0-deprecated/sql-deployment-configurations.md
deleted file mode 100644
index 0306af30ee159f..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/sql-deployment-configurations.md
+++ /dev/null
@@ -1,277 +0,0 @@
---
id: sql-deployment-configurations
title: Pulsar SQL configuration and deployment
sidebar_label: "Configuration and deployment"
original_id: sql-deployment-configurations
---

You can configure the Presto Pulsar connector and deploy a cluster with the following instructions.

## Configure Presto Pulsar Connector

You can configure the Presto Pulsar connector in the `${project.root}/conf/presto/catalog/pulsar.properties` properties file. The configuration for the connector and the default values are as follows.

```properties
# name of the connector to be displayed in the catalog
connector.name=pulsar

# the url of Pulsar broker service
pulsar.web-service-url=http://localhost:8080

# URI of Zookeeper cluster
pulsar.zookeeper-uri=localhost:2181

# minimum number of entries to read at a single time
pulsar.entry-read-batch-size=100

# default number of splits to use per query
pulsar.target-num-splits=4

# max size of one batch message (default value is 5MB)
pulsar.max-message-size=5242880

# size of queue to buffer entries read from pulsar
pulsar.max-split-entry-queue-size=1000

# size of queue to buffer messages extracted from entries
pulsar.max-split-message-queue-size=10000

# stats provider used to record connector metrics
pulsar.stats-provider=org.apache.bookkeeper.stats.NullStatsProvider

# config in map format for stats provider e.g. {"key1":"val1","key2":"val2"}
pulsar.stats-provider-configs={}

# whether to rewrite Pulsar's default topic delimiter '/'
pulsar.namespace-delimiter-rewrite-enable=false

# delimiter used to rewrite Pulsar's default delimiter '/', use if the default is causing incompatibility with other systems like Superset
pulsar.rewrite-namespace-delimiter="/"

# maximum thread pool size for the ledger offloader
pulsar.managed-ledger-offload-max-threads=2

# driver used to offload or read cold data to or from long-term storage
pulsar.managed-ledger-offload-driver=null

# directory to load offloaders nar files from
pulsar.offloaders-directory="./offloaders"

# properties and configurations related to a specific offloader implementation as a map e.g. {"key1":"val1","key2":"val2"}
pulsar.offloader-properties={}

# authentication plugin used to authenticate to the Pulsar cluster
pulsar.auth-plugin=null

# authentication parameters used to authenticate to the Pulsar cluster as a string e.g. "key1:val1,key2:val2"
pulsar.auth-params=null

# whether the Pulsar client accepts an untrusted TLS certificate from the broker
pulsar.tls-allow-insecure-connection=null

# whether to allow hostname verification when a client connects to a broker over TLS
pulsar.tls-hostname-verification-enable=null

# path for the trusted TLS certificate file of the Pulsar broker
pulsar.tls-trust-cert-file-path=null

# set the threshold for BookKeeper request throttling, default is disabled
pulsar.bookkeeper-throttle-value=0

# set the number of IO threads
pulsar.bookkeeper-num-io-threads=2 * Runtime.getRuntime().availableProcessors()

# set the number of worker threads
pulsar.bookkeeper-num-worker-threads=Runtime.getRuntime().availableProcessors()

# whether to use the BookKeeper V2 wire protocol
pulsar.bookkeeper-use-v2-protocol=true

# interval to check the need for sending an explicit LAC, default is disabled
pulsar.bookkeeper-explicit-interval=0

# size for managed ledger entry cache (in MB).
-pulsar.managed-ledger-cache-size-MB=0 - -# number of threads to be used for managed ledger tasks dispatching -pulsar.managed-ledger-num-worker-threads=Runtime.getRuntime().availableProcessors() - -# number of threads to be used for managed ledger scheduled tasks -pulsar.managed-ledger-num-scheduler-threads=Runtime.getRuntime().availableProcessors() - -# directory used to store extraction NAR file -pulsar.nar-extraction-directory=System.getProperty("java.io.tmpdir") - -``` - -You can connect Presto to a Pulsar cluster with multiple hosts. To configure multiple hosts for brokers, add multiple URLs to `pulsar.web-service-url`. To configure multiple hosts for ZooKeeper, add multiple URIs to `pulsar.zookeeper-uri`. The following is an example. - -``` - -pulsar.web-service-url=http://localhost:8080,localhost:8081,localhost:8082 -pulsar.zookeeper-uri=localhost1,localhost2:2181 - -``` - -**Note: by default, Pulsar SQL does not get the last message in a topic**. It is by design and controlled by settings. By default, BookKeeper LAC only advances when subsequent entries are added. If there is no subsequent entry added, the last written entry is not visible to readers until the ledger is closed. This is not a problem for Pulsar which uses managed ledger, but Pulsar SQL directly reads from BookKeeper ledger. - -If you want to get the last message in a topic, set the following configurations: - -1. For the broker configuration, set `bookkeeperExplicitLacIntervalInMills` > 0 in `broker.conf` or `standalone.conf`. - -2. For the Presto configuration, set `pulsar.bookkeeper-explicit-interval` > 0 and `pulsar.bookkeeper-use-v2-protocol=false`. - -However, using BookKeeper V3 protocol introduces additional GC overhead to BK as it uses Protobuf. - -## Query data from existing Presto clusters - -If you already have a Presto cluster, you can copy the Presto Pulsar connector plugin to your existing cluster. Download the archived plugin package with the following command. - -```bash - -$ wget pulsar:binary_release_url - -``` - -## Deploy a new cluster - -Since Pulsar SQL is powered by [Trino (formerly Presto SQL)](https://trino.io), the configuration for deployment is the same for the Pulsar SQL worker. - -:::note - -For how to set up a standalone single node environment, refer to [Query data](sql-getting-started.md). - -::: - -You can use the same CLI args as the Presto launcher. - -```bash - -$ ./bin/pulsar sql-worker --help -Usage: launcher [options] command - -Commands: run, start, stop, restart, kill, status - -Options: - -h, --help show this help message and exit - -v, --verbose Run verbosely - --etc-dir=DIR Defaults to INSTALL_PATH/etc - --launcher-config=FILE - Defaults to INSTALL_PATH/bin/launcher.properties - --node-config=FILE Defaults to ETC_DIR/node.properties - --jvm-config=FILE Defaults to ETC_DIR/jvm.config - --config=FILE Defaults to ETC_DIR/config.properties - --log-levels-file=FILE - Defaults to ETC_DIR/log.properties - --data-dir=DIR Defaults to INSTALL_PATH - --pid-file=FILE Defaults to DATA_DIR/var/run/launcher.pid - --launcher-log-file=FILE - Defaults to DATA_DIR/var/log/launcher.log (only in - daemon mode) - --server-log-file=FILE - Defaults to DATA_DIR/var/log/server.log (only in - daemon mode) - -D NAME=VALUE Set a Java system property - -``` - -The default configuration for the cluster is located in `${project.root}/conf/presto`. You can customize your deployment by modifying the default configuration. 
You can set the worker to read from a different configuration directory, or set a different directory to write data.

```bash
$ ./bin/pulsar sql-worker run --etc-dir /tmp/incubator-pulsar/conf/presto --data-dir /tmp/presto-1
```

You can start the worker as a daemon process.

```bash
$ ./bin/pulsar sql-worker start
```

### Deploy a cluster on multiple nodes

You can deploy a Pulsar SQL cluster or Presto cluster on multiple nodes. The following example shows how to deploy a cluster on three nodes.

1. Copy the Pulsar binary distribution to three nodes.

The first node runs as the Presto coordinator. The minimal configuration in the `${project.root}/conf/presto/config.properties` file is as follows.

```properties
coordinator=true
node-scheduler.include-coordinator=true
http-server.http.port=8080
query.max-memory=50GB
query.max-memory-per-node=1GB
discovery-server.enabled=true
discovery.uri=
```

The other two nodes serve as worker nodes; you can use the following configuration for the worker nodes.

```properties
coordinator=false
http-server.http.port=8080
query.max-memory=50GB
query.max-memory-per-node=1GB
discovery.uri=
```

2. Modify the `pulsar.web-service-url` and `pulsar.zookeeper-uri` configuration in the `${project.root}/conf/presto/catalog/pulsar.properties` file accordingly for the three nodes.

3. Start the coordinator node.

```
$ ./bin/pulsar sql-worker run
```

4. Start the worker nodes.

```
$ ./bin/pulsar sql-worker run
```

5. Start the SQL CLI and check the status of your cluster.

```bash
$ ./bin/pulsar sql --server
```

6. Check the status of your nodes.

```bash
presto> SELECT * FROM system.runtime.nodes;
 node_id |        http_uri         | node_version | coordinator | state
---------+-------------------------+--------------+-------------+--------
 1       | http://192.168.2.1:8081 | testversion  | true        | active
 3       | http://192.168.2.2:8081 | testversion  | false       | active
 2       | http://192.168.2.3:8081 | testversion  | false       | active
```

For more information about deployment in Presto, refer to [Presto deployment](https://trino.io/docs/current/installation/deployment.html).

:::note

The broker does not advance the LAC by default, so when Pulsar SQL bypasses the broker to query data, it can only read entries up to the LAC that all the bookies have learned. You can enable periodic LAC writes on the broker by setting "bookkeeperExplicitLacIntervalInMills" in the broker.conf.

:::
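Putting the two "last message" settings from this page together, a minimal sketch of the relevant lines (the interval value is illustrative) might be:

```properties
# In broker.conf (or standalone.conf): make the broker write an explicit LAC periodically
bookkeeperExplicitLacIntervalInMills=1000

# In conf/presto/catalog/pulsar.properties: have Pulsar SQL read the explicit LAC
pulsar.bookkeeper-explicit-interval=1000
pulsar.bookkeeper-use-v2-protocol=false
```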
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/sql-getting-started.md b/site2/website/versioned_docs/version-2.10.0-deprecated/sql-getting-started.md
deleted file mode 100644
index 8a5cd7199b365a..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/sql-getting-started.md
+++ /dev/null
@@ -1,187 +0,0 @@
---
id: sql-getting-started
title: Query data with Pulsar SQL
sidebar_label: "Query data"
original_id: sql-getting-started
---

Before querying data in Pulsar, you need to install Pulsar and the built-in connectors.

## Requirements
1. Install [Pulsar](getting-started-standalone.md#install-pulsar-standalone).
2. Install Pulsar [built-in connectors](getting-started-standalone.md#install-builtin-connectors-optional).

## Query data in Pulsar
To query data in Pulsar with Pulsar SQL, complete the following steps.

1. Start a Pulsar standalone cluster.

```bash
./bin/pulsar standalone
```

2. Start a Pulsar SQL worker.

```bash
./bin/pulsar sql-worker run
```

3. After the Pulsar standalone cluster and the SQL worker are initialized, run the SQL CLI.

```bash
./bin/pulsar sql
```

4. Test with SQL commands.

```bash
presto> show catalogs;
 Catalog
---------
 pulsar
 system
(2 rows)

Query 20180829_211752_00004_7qpwh, FINISHED, 1 node
Splits: 19 total, 19 done (100.00%)
0:00 [0 rows, 0B] [0 rows/s, 0B/s]


presto> show schemas in pulsar;
        Schema
-----------------------
 information_schema
 public/default
 public/functions
 sample/standalone/ns1
(4 rows)

Query 20180829_211818_00005_7qpwh, FINISHED, 1 node
Splits: 19 total, 19 done (100.00%)
0:00 [4 rows, 89B] [21 rows/s, 471B/s]


presto> show tables in pulsar."public/default";
 Table
-------
(0 rows)

Query 20180829_211839_00006_7qpwh, FINISHED, 1 node
Splits: 19 total, 19 done (100.00%)
0:00 [0 rows, 0B] [0 rows/s, 0B/s]
```

Since there is no data in Pulsar yet, no records are returned.

5. Start the built-in connector _DataGeneratorSource_ and ingest some mock data.

```bash
./bin/pulsar-admin sources create --name generator --destinationTopicName generator_test --source-type data-generator
```

Then you can query a topic in the namespace "public/default".

```bash
presto> show tables in pulsar."public/default";
     Table
----------------
 generator_test
(1 row)

Query 20180829_213202_00000_csyeu, FINISHED, 1 node
Splits: 19 total, 19 done (100.00%)
0:02 [1 rows, 38B] [0 rows/s, 17B/s]
```

You can now query the data within the topic "generator_test".

```bash
presto> select * from pulsar."public/default".generator_test;

  firstname | middlename |  lastname   |              email               |   username   | password | telephonenumber | age |                 companyemail                  | nationalidentitycardnumber |
-------------+-------------+-------------+----------------------------------+--------------+----------+-----------------+-----+-----------------------------------------------+----------------------------+
 Genesis    | Katherine  | Wiley       | genesis.wiley@gmail.com          | genesisw     | y9D2dtU3 | 959-197-1860    |  71 | genesis.wiley@interdemconsulting.eu           | 880-58-9247                |
 Brayden    |            | Stanton     | brayden.stanton@yahoo.com        | braydens     | ZnjmhXik | 220-027-867     |  81 | brayden.stanton@supermemo.eu                  | 604-60-7069                |
 Benjamin   | Julian     | Velasquez   | benjamin.velasquez@yahoo.com     | benjaminv    | 8Bc7m3eb | 298-377-0062    |  21 | benjamin.velasquez@hostesltd.biz              | 213-32-5882                |
 Michael    | Thomas     | Donovan     | donovan@mail.com                 | michaeld     | OqBm9MLs | 078-134-4685    |  55 | michael.donovan@memortech.eu                  | 443-30-3442                |
 Brooklyn   | Avery      | Roach       | brooklynroach@yahoo.com          | broach       | IxtBLafO | 387-786-2998    |  68 | brooklyn.roach@warst.biz                      | 085-88-3973                |
 Skylar     |            | Bradshaw    | skylarbradshaw@yahoo.com         | skylarb      | p6eC6cKy | 210-872-608     |  96 | skylar.bradshaw@flyhigh.eu                    | 453-46-0334                |
.
.
.
```

You can query the mock data.

## Query your own data
If you want to query your own data, you need to ingest your own data first. You can write a simple producer and write custom-defined data to Pulsar. The following is an example.
-
-```java
-
-import org.apache.pulsar.client.api.Producer;
-import org.apache.pulsar.client.api.PulsarClient;
-import org.apache.pulsar.client.impl.schema.AvroSchema;
-
-public class TestProducer {
-
-    public static class Foo {
-        private int field1 = 1;
-        private String field2;
-        private long field3;
-
-        public Foo() {
-        }
-
-        public int getField1() {
-            return field1;
-        }
-
-        public void setField1(int field1) {
-            this.field1 = field1;
-        }
-
-        public String getField2() {
-            return field2;
-        }
-
-        public void setField2(String field2) {
-            this.field2 = field2;
-        }
-
-        public long getField3() {
-            return field3;
-        }
-
-        public void setField3(long field3) {
-            this.field3 = field3;
-        }
-    }
-
-    public static void main(String[] args) throws Exception {
-        // Build a client against a local standalone cluster
-        PulsarClient pulsarClient = PulsarClient.builder().serviceUrl("pulsar://localhost:6650").build();
-        // Create a producer that publishes Foo records with an Avro schema
-        Producer<Foo> producer = pulsarClient.newProducer(AvroSchema.of(Foo.class)).topic("test_topic").create();
-
-        for (int i = 0; i < 1000; i++) {
-            Foo foo = new Foo();
-            foo.setField1(i);
-            foo.setField2("foo" + i);
-            foo.setField3(System.currentTimeMillis());
-            producer.newMessage().value(foo).send();
-        }
-        producer.close();
-        pulsarClient.close();
-    }
-}
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/sql-overview.md b/site2/website/versioned_docs/version-2.10.0-deprecated/sql-overview.md
deleted file mode 100644
index 8ba19d053003dd..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/sql-overview.md
+++ /dev/null
@@ -1,18 +0,0 @@
----
-id: sql-overview
-title: Pulsar SQL Overview
-sidebar_label: "Overview"
-original_id: sql-overview
----
-
-Apache Pulsar is used to store streams of event data, and the event data is structured with predefined fields. With the implementation of the [Schema Registry](schema-get-started.md), you can store structured data in Pulsar and query the data by using [Trino (formerly Presto SQL)](https://trino.io/).
-
-As the core of Pulsar SQL, the Presto Pulsar connector enables Presto workers within a Presto cluster to query data from Pulsar.
-
-![The Pulsar consumer and reader interfaces](/assets/pulsar-sql-arch-2.png)
-
-The query performance is efficient and highly scalable because Pulsar adopts a [two-level segment-based architecture](concepts-architecture-overview.md#apache-bookkeeper).
-
-Topics in Pulsar are stored as segments in [Apache BookKeeper](https://bookkeeper.apache.org/). Each topic segment is replicated to some BookKeeper nodes, which enables concurrent reads and high read throughput. You can configure the number of BookKeeper nodes, and the default number is `3`. In the Presto Pulsar connector, data is read directly from BookKeeper, so Presto workers can read concurrently from a horizontally scalable number of BookKeeper nodes.
-
-![The Pulsar consumer and reader interfaces](/assets/pulsar-sql-arch-1.png)
diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/sql-rest-api.md b/site2/website/versioned_docs/version-2.10.0-deprecated/sql-rest-api.md
deleted file mode 100644
index c92fd62f7d8703..00000000000000
--- a/site2/website/versioned_docs/version-2.10.0-deprecated/sql-rest-api.md
+++ /dev/null
@@ -1,192 +0,0 @@
----
-id: sql-rest-api
-title: Pulsar SQL REST APIs
-sidebar_label: "REST APIs"
-original_id: sql-rest-api
----
-
-This section lists the resources that make up the Presto REST API v1.
-
-## Request for Presto services
-
-All requests for Presto services should use version v1 of the Presto REST API.
-
-To request services, use the explicit URL `http://presto.service:8081/v1`. Replace `presto.service:8081` with your real Presto address before sending requests.
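-
-For example, if the coordinator runs locally on port 8081, a statement is submitted by sending a `POST` request with the SQL text as the body. This is a minimal sketch, reusing the same request that appears in the [Schema](#schema) section below; the required header is described next.
-
-```bash
-
-curl --request POST --header "X-Presto-User: test-user" --data 'show catalogs' http://localhost:8081/v1/statement
-
-```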
- -`POST` requests require the `X-Presto-User` header. If you use authentication, you must use the same `username` that is specified in the authentication configuration. If you do not use authentication, you can specify anything for `username`. - -```properties - -X-Presto-User: username - -``` - -For more information about headers, refer to [PrestoHeaders](https://github.com/trinodb/trino). - -## Schema - -You can use statement in the HTTP body. All data is received as JSON document that might contain a `nextUri` link. If the received JSON document contains a `nextUri` link, the request continues with the `nextUri` link until the received data does not contain a `nextUri` link. If no error is returned, the query completes successfully. If an `error` field is displayed in `stats`, it means the query fails. - -The following is an example of `show catalogs`. The query continues until the received JSON document does not contain a `nextUri` link. Since no `error` is displayed in `stats`, it means that the query completes successfully. - -```powershell - -➜ ~ curl --header "X-Presto-User: test-user" --request POST --data 'show catalogs' http://localhost:8081/v1/statement -{ - "infoUri" : "http://localhost:8081/ui/query.html?20191113_033653_00006_dg6hb", - "stats" : { - "queued" : true, - "nodes" : 0, - "userTimeMillis" : 0, - "cpuTimeMillis" : 0, - "wallTimeMillis" : 0, - "processedBytes" : 0, - "processedRows" : 0, - "runningSplits" : 0, - "queuedTimeMillis" : 0, - "queuedSplits" : 0, - "completedSplits" : 0, - "totalSplits" : 0, - "scheduled" : false, - "peakMemoryBytes" : 0, - "state" : "QUEUED", - "elapsedTimeMillis" : 0 - }, - "id" : "20191113_033653_00006_dg6hb", - "nextUri" : "http://localhost:8081/v1/statement/20191113_033653_00006_dg6hb/1" -} - -➜ ~ curl http://localhost:8081/v1/statement/20191113_033653_00006_dg6hb/1 -{ - "infoUri" : "http://localhost:8081/ui/query.html?20191113_033653_00006_dg6hb", - "nextUri" : "http://localhost:8081/v1/statement/20191113_033653_00006_dg6hb/2", - "id" : "20191113_033653_00006_dg6hb", - "stats" : { - "state" : "PLANNING", - "totalSplits" : 0, - "queued" : false, - "userTimeMillis" : 0, - "completedSplits" : 0, - "scheduled" : false, - "wallTimeMillis" : 0, - "runningSplits" : 0, - "queuedSplits" : 0, - "cpuTimeMillis" : 0, - "processedRows" : 0, - "processedBytes" : 0, - "nodes" : 0, - "queuedTimeMillis" : 1, - "elapsedTimeMillis" : 2, - "peakMemoryBytes" : 0 - } -} - -➜ ~ curl http://localhost:8081/v1/statement/20191113_033653_00006_dg6hb/2 -{ - "id" : "20191113_033653_00006_dg6hb", - "data" : [ - [ - "pulsar" - ], - [ - "system" - ] - ], - "infoUri" : "http://localhost:8081/ui/query.html?20191113_033653_00006_dg6hb", - "columns" : [ - { - "typeSignature" : { - "rawType" : "varchar", - "arguments" : [ - { - "kind" : "LONG_LITERAL", - "value" : 6 - } - ], - "literalArguments" : [], - "typeArguments" : [] - }, - "name" : "Catalog", - "type" : "varchar(6)" - } - ], - "stats" : { - "wallTimeMillis" : 104, - "scheduled" : true, - "userTimeMillis" : 14, - "progressPercentage" : 100, - "totalSplits" : 19, - "nodes" : 1, - "cpuTimeMillis" : 16, - "queued" : false, - "queuedTimeMillis" : 1, - "state" : "FINISHED", - "peakMemoryBytes" : 0, - "elapsedTimeMillis" : 111, - "processedBytes" : 0, - "processedRows" : 0, - "queuedSplits" : 0, - "rootStage" : { - "cpuTimeMillis" : 1, - "runningSplits" : 0, - "state" : "FINISHED", - "completedSplits" : 1, - "subStages" : [ - { - "cpuTimeMillis" : 14, - "runningSplits" : 0, - "state" : "FINISHED", - "completedSplits" : 
17, - "subStages" : [ - { - "wallTimeMillis" : 7, - "subStages" : [], - "stageId" : "2", - "done" : true, - "nodes" : 1, - "totalSplits" : 1, - "processedBytes" : 22, - "processedRows" : 2, - "queuedSplits" : 0, - "userTimeMillis" : 1, - "cpuTimeMillis" : 1, - "runningSplits" : 0, - "state" : "FINISHED", - "completedSplits" : 1 - } - ], - "wallTimeMillis" : 92, - "nodes" : 1, - "done" : true, - "stageId" : "1", - "userTimeMillis" : 12, - "processedRows" : 2, - "processedBytes" : 51, - "queuedSplits" : 0, - "totalSplits" : 17 - } - ], - "wallTimeMillis" : 5, - "done" : true, - "nodes" : 1, - "stageId" : "0", - "userTimeMillis" : 1, - "processedRows" : 2, - "processedBytes" : 22, - "totalSplits" : 1, - "queuedSplits" : 0 - }, - "runningSplits" : 0, - "completedSplits" : 19 - } -} - -``` - -:::note - -Since the response data is not in sync with the query state from the perspective of clients, you cannot rely on the response data to determine whether the query completes. - -::: - -For more information about Presto REST API, refer to [Presto HTTP Protocol](https://github.com/prestosql/presto/wiki/HTTP-Protocol). diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/standalone.md b/site2/website/versioned_docs/version-2.10.0-deprecated/standalone.md deleted file mode 100644 index 3d463d635558bf..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/standalone.md +++ /dev/null @@ -1,268 +0,0 @@ ---- -id: standalone -title: Set up a standalone Pulsar locally -sidebar_label: "Run Pulsar locally" -original_id: standalone ---- - -For local development and testing, you can run Pulsar in standalone mode on your machine. The standalone mode includes a Pulsar broker, the necessary [RocksDB](http://rocksdb.org/) and BookKeeper components running inside of a single Java Virtual Machine (JVM) process. - -> **Pulsar in production?** -> If you're looking to run a full production Pulsar installation, see the [Deploying a Pulsar instance](deploy-bare-metal.md) guide. - -## Install Pulsar standalone - -This tutorial guides you through every step of installing Pulsar locally. - -### System requirements - -Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install 64-bit JRE/JDK 8 or later versions - -:::tip - -By default, Pulsar allocates 2G JVM heap memory to start. It can be changed in `conf/pulsar_env.sh` file under `PULSAR_MEM`. This is extra options passed into JVM. - -::: - -:::note - -Broker is only supported on 64-bit JVM. - -::: - -### Install Pulsar using binary release - -To get started with Pulsar, download a binary tarball release in one of the following ways: - -* download from the Apache mirror (Pulsar @pulsar:version@ binary release) - -* download from the Pulsar [downloads page](pulsar:download_page_url) - -* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) - -* use [wget](https://www.gnu.org/software/wget): - - ```shell - - $ wget pulsar:binary_release_url - - ``` - -After you download the tarball, untar it and use the `cd` command to navigate to the resulting directory: - -```bash - -$ tar xvfz apache-pulsar-@pulsar:version@-bin.tar.gz -$ cd apache-pulsar-@pulsar:version@ - -``` - -#### What your package contains - -The Pulsar binary package initially contains the following directories: - -Directory | Contains -:---------|:-------- -`bin` | Pulsar's command-line tools, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](/tools/pulsar-admin/). 
-`conf` | Configuration files for Pulsar, including [broker configuration](reference-configuration.md#broker) and more.
    **Note:** Pulsar standalone uses RocksDB as the local metadata store and its configuration file path [`metadataStoreConfigPath`](reference-configuration.md) is configurable in the `standalone.conf` file. For more information about the configurations of RocksDB, see [here](https://github.com/facebook/rocksdb/blob/main/examples/rocksdb_option_file_example.ini) and related [documentation](https://github.com/facebook/rocksdb/wiki/RocksDB-Tuning-Guide).
-`examples` | A Java JAR file containing a [Pulsar Functions](functions-overview.md) example.
-`instances` | Artifacts created for [Pulsar Functions](functions-overview.md).
-`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files used by Pulsar.
-`licenses` | License files, in `.txt` form, for various components of the Pulsar [codebase](https://github.com/apache/pulsar).
-
-These directories are created once you begin running Pulsar.
-
-Directory | Contains
-:---------|:--------
-`data` | The data storage directory used by RocksDB and BookKeeper.
-`logs` | Logs created by the installation.
-
-:::tip
-
-If you want to use builtin connectors and tiered storage offloaders, you can install them according to the following instructions:
-* [Install builtin connectors (optional)](#install-builtin-connectors-optional)
-* [Install tiered storage offloaders (optional)](#install-tiered-storage-offloaders-optional)
-Otherwise, skip this step and perform the next step [Start Pulsar standalone](#start-pulsar-standalone). Pulsar can be successfully installed without installing builtin connectors and tiered storage offloaders.
-
-:::
-
-### Install builtin connectors (optional)
-
-Since the `2.1.0-incubating` release, Pulsar releases a separate binary distribution, containing all the `builtin` connectors.
-To enable those `builtin` connectors, you can download the connectors tarball release in one of the following ways:
-
-* download from the Apache mirror Pulsar IO Connectors @pulsar:version@ release
-
-* download from the Pulsar [downloads page](pulsar:download_page_url)
-
-* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
-
-* use [wget](https://www.gnu.org/software/wget):
-
-  ```shell
-
-  $ wget pulsar:connector_release_url/{connector}-@pulsar:version@.nar
-
-  ```
-
-After you download the NAR file, copy it to the `connectors` directory in the Pulsar directory.
-For example, if you download the `pulsar-io-aerospike-@pulsar:version@.nar` connector file, enter the following commands:
-
-```bash
-
-$ mkdir connectors
-$ mv pulsar-io-aerospike-@pulsar:version@.nar connectors
-
-$ ls connectors
-pulsar-io-aerospike-@pulsar:version@.nar
-...
-
-```
-
-:::note
-
-* If you are running Pulsar in a bare metal cluster, make sure the `connectors` tarball is unzipped in every Pulsar directory of the broker (or in every Pulsar directory of the function worker if you are running a separate worker cluster for Pulsar Functions).
-* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DC/OS](https://dcos.io/)), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. The `apachepulsar/pulsar-all` image has already bundled [all builtin connectors](io-overview.md#working-with-connectors).
-
-:::
-
-### Install tiered storage offloaders (optional)
-
-:::tip
-
-- Since the `2.2.0` release, Pulsar releases a separate binary distribution, containing the tiered storage offloaders.
-- To enable the tiered storage feature, follow the instructions below; otherwise, skip this section.
-
-:::
-
-To get started with [tiered storage offloaders](concepts-tiered-storage.md), you need to download the offloaders tarball release on every broker node in one of the following ways:
-
-* download from the Apache mirror Pulsar Tiered Storage Offloaders @pulsar:version@ release
-
-* download from the Pulsar [downloads page](pulsar:download_page_url)
-
-* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
-
-* use [wget](https://www.gnu.org/software/wget):
-
-  ```shell
-
-  $ wget pulsar:offloader_release_url
-
-  ```
-
-After you download the tarball, untar the offloaders package and copy the `offloaders` directory into the Pulsar directory:
-
-```bash
-
-$ tar xvfz apache-pulsar-offloaders-@pulsar:version@-bin.tar.gz
-
-# you will find a directory named `apache-pulsar-offloaders-@pulsar:version@` in the Pulsar directory,
-# then copy the offloaders
-
-$ mv apache-pulsar-offloaders-@pulsar:version@/offloaders offloaders
-
-$ ls offloaders
-tiered-storage-jcloud-@pulsar:version@.nar
-
-```
-
-For more information on how to configure tiered storage, see [Tiered storage cookbook](cookbooks-tiered-storage.md).
-
-:::note
-
-* If you are running Pulsar in a bare metal cluster, make sure that the `offloaders` tarball is unzipped in every broker's Pulsar directory.
-* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or DC/OS), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. The `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders.
-
-:::
-
-## Start Pulsar standalone
-
-Once you have an up-to-date local copy of the release, you can start a local cluster using the [`pulsar`](reference-cli-tools.md#pulsar) command, which is stored in the `bin` directory, specifying that you want to start Pulsar in standalone mode.
-
-```bash
-
-$ bin/pulsar standalone
-
-```
-
-If you have started Pulsar successfully, you will see `INFO`-level log messages like this:
-
-```bash
-
-21:59:29.327 [DLM-/stream/storage-OrderedScheduler-3-0] INFO org.apache.bookkeeper.stream.storage.impl.sc.StorageContainerImpl - Successfully started storage container (0).
-21:59:34.576 [main] INFO org.apache.pulsar.broker.authentication.AuthenticationService - Authentication is disabled
-21:59:34.576 [main] INFO org.apache.pulsar.websocket.WebSocketService - Pulsar WebSocket Service started
-
-```
-
-:::tip
-
-* The service is running on your terminal, which is under your direct control. If you need to run other commands, open a new terminal window.
-
-:::
-
-You can also run the service as a background process using the `bin/pulsar-daemon start standalone` command. For more information, see [pulsar-daemon](reference-cli-tools.md#pulsar-daemon).
-
-:::note
-
-* By default, there is no encryption, authentication, or authorization configured. Apache Pulsar can be accessed from a remote server without any authorization. Check the [Security Overview](security-overview.md) document to secure your deployment.
-
-* When you start a local standalone cluster, a `public/default` [namespace](concepts-messaging.md#namespaces) is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics).
-
-:::
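-
-Before you move on, you can optionally check that the standalone cluster is up. The following is a minimal sketch, assuming the default ports and the default `public` tenant:
-
-```bash
-
-# Run a broker health check; it prints "ok" when the broker is healthy
-$ bin/pulsar-admin brokers healthcheck
-
-# List the namespaces of the public tenant; public/default should appear
-$ bin/pulsar-admin namespaces list public
-
-```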
- -## Use Pulsar standalone - -Pulsar provides a CLI tool called [`pulsar-client`](reference-cli-tools.md#pulsar-client). The pulsar-client tool enables you to consume and produce messages to a Pulsar topic in a running cluster. - -### Consume a message - -The following command consumes a message with the subscription name `first-subscription` to the `my-topic` topic: - -```bash - -$ bin/pulsar-client consume my-topic -s "first-subscription" - -``` - -If the message has been successfully consumed, you will see a confirmation like the following in the `pulsar-client` logs: - -``` - -22:17:16.781 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully consumed - -``` - -:::tip - -As you have noticed that we do not explicitly create the `my-topic` topic, from which we consume the message. When you consume a message from a topic that does not yet exist, Pulsar creates that topic for you automatically. Producing a message to a topic that does not exist will automatically create that topic for you as well. - -::: - -### Produce a message - -The following command produces a message saying `hello-pulsar` to the `my-topic` topic: - -```bash - -$ bin/pulsar-client produce my-topic --messages "hello-pulsar" - -``` - -If the message has been successfully published to the topic, you will see a confirmation like the following in the `pulsar-client` logs: - -``` - -22:21:08.693 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully produced - -``` - -## Stop Pulsar standalone - -Press `Ctrl+C` to stop a local standalone Pulsar. - -:::tip - -If the service runs as a background process using the `bin/pulsar-daemon start standalone` command, then use the `bin/pulsar-daemon stop standalone` command to stop the service. -For more information, see [pulsar-daemon](reference-cli-tools.md#pulsar-daemon). - -::: - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/tiered-storage-aliyun.md b/site2/website/versioned_docs/version-2.10.0-deprecated/tiered-storage-aliyun.md deleted file mode 100644 index 89dc53cda76042..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/tiered-storage-aliyun.md +++ /dev/null @@ -1,257 +0,0 @@ ---- -id: tiered-storage-aliyun -title: Use Aliyun OSS offloader with Pulsar -sidebar_label: "Aliyun OSS offloader" -original_id: tiered-storage-aliyun ---- - -This chapter guides you through every step of installing and configuring the Aliyun Object Storage Service (OSS) offloader and using it with Pulsar. - -## Installation - -Follow the steps below to install the Aliyun OSS offloader. - -### Prerequisite - -- Pulsar: 2.8.0 or later versions - -### Step - -This example uses Pulsar 2.8.0. - -1. Download the Pulsar tarball, see [here](standalone.md#install-pulsar-using-binary-release). - -2. Download and untar the Pulsar offloaders package, then copy the Pulsar offloaders as `offloaders` in the Pulsar directory, see [here](standalone.md#install-tiered-storage-offloaders-optional). - - **Output** - - As shown from the output, Pulsar uses [Apache jclouds](https://jclouds.apache.org) to support [AWS S3](https://aws.amazon.com/s3/), [GCS](https://cloud.google.com/storage/), [Azure](https://portal.azure.com/#home), and [Aliyun OSS](https://www.aliyun.com/product/oss) for long-term storage. 
- - ``` - - tiered-storage-file-system-2.8.0.nar - tiered-storage-jcloud-2.8.0.nar - - ``` - - :::note - - * If you are running Pulsar in a bare-metal cluster, make sure that `offloaders` tarball is unzipped in every broker's Pulsar directory. - * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8s and DCOS), you can use the `apachepulsar/pulsar-all` image. The `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - - ::: - -## Configuration - -:::note - -Before offloading data from BookKeeper to Aliyun OSS, you need to configure some properties of the Aliyun OSS offload driver. - -::: - -Besides, you can also configure the Aliyun OSS offloader to run it automatically or trigger it manually. - -### Configure Aliyun OSS offloader driver - -You can configure the Aliyun OSS offloader driver in the configuration file `broker.conf` or `standalone.conf`. - -- **Required** configurations are as below. - - | Required configuration | Description | Example value | - | --- | --- |--- | - | `managedLedgerOffloadDriver` | Offloader driver name, which is case-insensitive. | aliyun-oss | - | `offloadersDirectory` | Offloader directory | offloaders | - | `managedLedgerOffloadBucket` | Bucket | pulsar-topic-offload | - | `managedLedgerOffloadServiceEndpoint` | Endpoint | http://oss-cn-hongkong.aliyuncs.com | - -- **Optional** configurations are as below. - - | Optional | Description | Example value | - | --- | --- | --- | - | `managedLedgerOffloadReadBufferSizeInBytes` | Size of block read | 1 MB | - | `managedLedgerOffloadMaxBlockSizeInBytes` | Size of block write | 64 MB | - | `managedLedgerMinLedgerRolloverTimeMinutes` | Minimum time between ledger rollover for a topic

    **Note**: it is not recommended that you set this configuration in the production environment. | 2 | - | `managedLedgerMaxEntriesPerLedger` | Maximum number of entries to append to a ledger before triggering a rollover.

    **Note**: it is not recommended that you set this configuration in the production environment. | 5000 | - -#### Bucket (required) - -A bucket is a basic container that holds your data. Everything you store in Aliyun OSS must be contained in a bucket. You can use a bucket to organize your data and control access to your data, but unlike directory and folder, you cannot nest a bucket. - -##### Example - -This example names the bucket as _pulsar-topic-offload_. - -```conf - -managedLedgerOffloadBucket=pulsar-topic-offload - -``` - -#### Endpoint (required) - -The endpoint is the region where a bucket is located. - -:::tip - -For more information about Aliyun OSS regions and endpoints, see [International website](https://www.alibabacloud.com/help/doc-detail/31837.htm) or [Chinese website](https://help.aliyun.com/document_detail/31837.html). - -::: - - -##### Example - -This example sets the endpoint as _oss-us-west-1-internal_. - -``` - -managedLedgerOffloadServiceEndpoint=http://oss-us-west-1-internal.aliyuncs.com - -``` - -#### Authentication (required) - -To be able to access Aliyun OSS, you need to authenticate with Aliyun OSS. - -Set the environment variables `ALIYUN_OSS_ACCESS_KEY_ID` and `ALIYUN_OSS_ACCESS_KEY_SECRET` in `conf/pulsar_env.sh`. - -"export" is important so that the variables are made available in the environment of spawned processes. - -```bash - -export ALIYUN_OSS_ACCESS_KEY_ID=ABC123456789 -export ALIYUN_OSS_ACCESS_KEY_SECRET=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c - -``` - -#### Size of block read/write - -You can configure the size of a request sent to or read from Aliyun OSS in the configuration file `broker.conf` or `standalone.conf`. - -| Configuration | Description | Default value | -| --- | --- | --- | -| `managedLedgerOffloadReadBufferSizeInBytes` | Block size for each individual read when reading back data from Aliyun OSS. | 1 MB | -| `managedLedgerOffloadMaxBlockSizeInBytes` | Maximum size of a "part" sent during a multipart upload to Aliyun OSS. It **cannot** be smaller than 5 MB. | 64 MB | - -### Run Aliyun OSS offloader automatically - -Namespace policy can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic reaches the threshold, an offloading operation is triggered automatically. - -| Threshold value | Action | -| --- | --- | -| > 0 | It triggers the offloading operation if the topic storage reaches its threshold. | -| = 0 | It causes a broker to offload data as soon as possible. | -| < 0 | It disables automatic offloading operation. | - -Automatic offloading runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, the offloader does not work until the current segment is full. - -You can configure the threshold size using CLI tools, such as pulsar-admin. - -The offload configurations in `broker.conf` and `standalone.conf` are used for the namespaces that do not have namespace level offload policies. Each namespace can have its own offload policy. If you want to set offload policy for each namespace, use the command [`pulsar-admin namespaces set-offload-policies options`](/tools/pulsar-admin/) command. - -#### Example - -This example sets the Aliyun OSS offloader threshold size to 10 MB using pulsar-admin. 
- -```bash - -bin/pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace - -``` - -:::tip - -For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, and default values, see [here](/tools/pulsar-admin/). - -::: - -### Run Aliyun OSS offloader manually - -For individual topics, you can trigger the Aliyun OSS offloader manually using one of the following methods: - -- Use REST endpoint. - -- Use CLI tools (such as pulsar-admin). - - To trigger it via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are moved to Aliyun OSS until the threshold is no longer exceeded. Older segments are moved first. - -#### Example - -- This example triggers the Aliyun OSS offloader to run manually using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload --size-threshold 10M my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1 - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, and default values, see [here](/tools/pulsar-admin/). - - ::: - -- This example checks the Aliyun OSS offloader status using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload is currently running - - ``` - - To wait for the Aliyun OSS offloader to complete the job, add the `-w` flag. - - ```bash - - bin/pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Offload was a success - - ``` - - If there is an error in offloading, the error is propagated to the `pulsar-admin topics offload-status` command. - - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Error in offload - null - - Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g= - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, and default values, see [here](/tools/pulsar-admin/). - - ::: - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/tiered-storage-aws.md b/site2/website/versioned_docs/version-2.10.0-deprecated/tiered-storage-aws.md deleted file mode 100644 index 11905bbb09ea40..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/tiered-storage-aws.md +++ /dev/null @@ -1,329 +0,0 @@ ---- -id: tiered-storage-aws -title: Use AWS S3 offloader with Pulsar -sidebar_label: "AWS S3 offloader" -original_id: tiered-storage-aws ---- - -This chapter guides you through every step of installing and configuring the AWS S3 offloader and using it with Pulsar. 
- -## Installation - -Follow the steps below to install the AWS S3 offloader. - -### Prerequisite - -- Pulsar: 2.4.2 or later versions - -### Step - -This example uses Pulsar 2.5.1. - -1. Download the Pulsar tarball using one of the following ways: - - * Download from the [Apache mirror](https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz) - - * Download from the Pulsar [downloads page](/download) - - * Use [wget](https://www.gnu.org/software/wget): - - ```shell - - wget https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz - - ``` - -2. Download and untar the Pulsar offloaders package. - - ```bash - - wget https://downloads.apache.org/pulsar/pulsar-2.5.1/apache-pulsar-offloaders-2.5.1-bin.tar.gz - tar xvfz apache-pulsar-offloaders-2.5.1-bin.tar.gz - - ``` - -3. Copy the Pulsar offloaders as `offloaders` in the Pulsar directory. - - ``` - - mv apache-pulsar-offloaders-2.5.1/offloaders apache-pulsar-2.5.1/offloaders - - ls offloaders - - ``` - - **Output** - - As shown from the output, Pulsar uses [Apache jclouds](https://jclouds.apache.org) to support [AWS S3](https://aws.amazon.com/s3/) and [GCS](https://cloud.google.com/storage/) for long term storage. - - ``` - - tiered-storage-file-system-2.5.1.nar - tiered-storage-jcloud-2.5.1.nar - - ``` - - :::note - - * If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's Pulsar directory. - * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8s and DCOS), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - - ::: - -## Configuration - -:::note - -Before offloading data from BookKeeper to AWS S3, you need to configure some properties of the AWS S3 offload driver. - -::: - -Besides, you can also configure the AWS S3 offloader to run it automatically or trigger it manually. - -### Configure AWS S3 offloader driver - -You can configure the AWS S3 offloader driver in the configuration file `broker.conf` or `standalone.conf`. - -- **Required** configurations are as below. - - Required configuration | Description | Example value - |---|---|--- - `managedLedgerOffloadDriver` | Offloader driver name, which is case-insensitive.

    **Note**: there is a third driver type, S3, which is identical to AWS S3, though S3 requires that you specify an endpoint URL using `s3ManagedLedgerOffloadServiceEndpoint`. This is useful if using an S3 compatible data store other than AWS S3. | aws-s3 - `offloadersDirectory` | Offloader directory | offloaders - `s3ManagedLedgerOffloadBucket` | Bucket | pulsar-topic-offload - -- **Optional** configurations are as below. - - Optional | Description | Example value - |---|---|--- - `s3ManagedLedgerOffloadRegion` | Bucket region

    **Note**: before specifying a value for this parameter, you need to set the following configurations. Otherwise, you might get an error.

    - Set [`s3ManagedLedgerOffloadServiceEndpoint`](https://docs.aws.amazon.com/general/latest/gr/s3.html).

    Example
    `s3ManagedLedgerOffloadServiceEndpoint=https://s3.YOUR_REGION.amazonaws.com`

    - Grant `GetBucketLocation` permission to a user.

    For how to grant `GetBucketLocation` permission to a user, see [here](https://docs.aws.amazon.com/AmazonS3/latest/dev/using-with-s3-actions.html#using-with-s3-actions-related-to-buckets).| eu-west-3 - `s3ManagedLedgerOffloadReadBufferSizeInBytes`|Size of block read|1 MB - `s3ManagedLedgerOffloadMaxBlockSizeInBytes`|Size of block write|64 MB - `managedLedgerMinLedgerRolloverTimeMinutes`|Minimum time between ledger rollover for a topic

    **Note**: it is not recommended that you set this configuration in the production environment.|2 - `managedLedgerMaxEntriesPerLedger`|Maximum number of entries to append to a ledger before triggering a rollover.

    **Note**: it is not recommended that you set this configuration in the production environment.|5000 - -#### Bucket (required) - -A bucket is a basic container that holds your data. Everything you store in AWS S3 must be contained in a bucket. You can use a bucket to organize your data and control access to your data, but unlike directory and folder, you cannot nest a bucket. - -##### Example - -This example names the bucket as _pulsar-topic-offload_. - -```conf - -s3ManagedLedgerOffloadBucket=pulsar-topic-offload - -``` - -#### Bucket region - -A bucket region is a region where a bucket is located. If a bucket region is not specified, the **default** region (`US East (N. Virginia)`) is used. - -:::tip - -For more information about AWS regions and endpoints, see [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). - -::: - - -##### Example - -This example sets the bucket region as _europe-west-3_. - -``` - -s3ManagedLedgerOffloadRegion=eu-west-3 - -``` - -#### Authentication (required) - -To be able to access AWS S3, you need to authenticate with AWS S3. - -Pulsar does not provide any direct methods of configuring authentication for AWS S3, -but relies on the mechanisms supported by the [DefaultAWSCredentialsProviderChain](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html). - -Once you have created a set of credentials in the AWS IAM console, you can configure credentials using one of the following methods. - -* Use EC2 instance metadata credentials. - - If you are on AWS instance with an instance profile that provides credentials, Pulsar uses these credentials if no other mechanism is provided. - -* Set the environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` in `conf/pulsar_env.sh`. - - "export" is important so that the variables are made available in the environment of spawned processes. - - ```bash - - export AWS_ACCESS_KEY_ID=ABC123456789 - export AWS_SECRET_ACCESS_KEY=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c - - ``` - -* Add the Java system properties `aws.accessKeyId` and `aws.secretKey` to `PULSAR_EXTRA_OPTS` in `conf/pulsar_env.sh`. - - ```bash - - PULSAR_EXTRA_OPTS="${PULSAR_EXTRA_OPTS} ${PULSAR_MEM} ${PULSAR_GC} -Daws.accessKeyId=ABC123456789 -Daws.secretKey=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c -Dio.netty.leakDetectionLevel=disabled -Dio.netty.recycler.maxCapacity.default=1000 -Dio.netty.recycler.linkCapacity=1024" - - ``` - -* Set the access credentials in `~/.aws/credentials`. - - ```conf - - [default] - aws_access_key_id=ABC123456789 - aws_secret_access_key=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c - - ``` - -* Assume an IAM role. - - This example uses the `DefaultAWSCredentialsProviderChain` for assuming this role. - - The broker must be rebooted for credentials specified in `pulsar_env` to take effect. - - ```conf - - s3ManagedLedgerOffloadRole= - s3ManagedLedgerOffloadRoleSessionName=pulsar-s3-offload - - ``` - -#### Size of block read/write - -You can configure the size of a request sent to or read from AWS S3 in the configuration file `broker.conf` or `standalone.conf`. - -Configuration|Description|Default value -|---|---|--- -`s3ManagedLedgerOffloadReadBufferSizeInBytes`|Block size for each individual read when reading back data from AWS S3.|1 MB -`s3ManagedLedgerOffloadMaxBlockSizeInBytes`|Maximum size of a "part" sent during a multipart upload to AWS S3. It **cannot** be smaller than 5 MB. 
|64 MB - -### Configure AWS S3 offloader to run automatically - -Namespace policy can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic reaches the threshold, an offloading operation is triggered automatically. - -Threshold value|Action -|---|--- -> 0 | It triggers the offloading operation if the topic storage reaches its threshold. -= 0|It causes a broker to offload data as soon as possible. -< 0 |It disables automatic offloading operation. - -Automatic offloading runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, offloader does not work until the current segment is full. - -You can configure the threshold size using CLI tools, such as pulsar-admin. - -The offload configurations in `broker.conf` and `standalone.conf` are used for the namespaces that do not have namespace level offload policies. Each namespace can have its own offload policy. If you want to set offload policy for each namespace, use the command [`pulsar-admin namespaces set-offload-policies options`](/tools/pulsar-admin/) command. - -#### Example - -This example sets the AWS S3 offloader threshold size to 10 MB using pulsar-admin. - -```bash - -bin/pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace - -``` - -:::tip - -For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, and default values, see [here](/tools/pulsar-admin/). - -::: - -### Configure AWS S3 offloader to run manually - -For individual topics, you can trigger AWS S3 offloader manually using one of the following methods: - -- Use REST endpoint. - -- Use CLI tools (such as pulsar-admin). - - To trigger it via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are moved to AWS S3 until the threshold is no longer exceeded. Older segments are moved first. - -#### Example - -- This example triggers the AWS S3 offloader to run manually using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload --size-threshold 10M my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1 - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, and default values, see [here](/tools/pulsar-admin/). - - ::: - -- This example checks the AWS S3 offloader status using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload is currently running - - ``` - - To wait for the AWS S3 offloader to complete the job, add the `-w` flag. - - ```bash - - bin/pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Offload was a success - - ``` - - If there is an error in offloading, the error is propagated to the `pulsar-admin topics offload-status` command. 
- - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Error in offload - null - - Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g= - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, and default values, see [here](/tools/pulsar-admin/). - - ::: - -## Tutorial - -For the complete and step-by-step instructions on how to use the AWS S3 offloader with Pulsar, see [here](https://hub.streamnative.io/offloaders/aws-s3/2.5.1#usage). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/tiered-storage-azure.md b/site2/website/versioned_docs/version-2.10.0-deprecated/tiered-storage-azure.md deleted file mode 100644 index e65356355ccc23..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/tiered-storage-azure.md +++ /dev/null @@ -1,264 +0,0 @@ ---- -id: tiered-storage-azure -title: Use Azure BlobStore offloader with Pulsar -sidebar_label: "Azure BlobStore offloader" -original_id: tiered-storage-azure ---- - -This chapter guides you through every step of installing and configuring the Azure BlobStore offloader and using it with Pulsar. - -## Installation - -Follow the steps below to install the Azure BlobStore offloader. - -### Prerequisite - -- Pulsar: 2.6.2 or later versions - -### Step - -This example uses Pulsar 2.6.2. - -1. Download the Pulsar tarball using one of the following ways: - - * Download from the [Apache mirror](https://archive.apache.org/dist/pulsar/pulsar-2.6.2/apache-pulsar-2.6.2-bin.tar.gz) - - * Download from the Pulsar [downloads page](/download) - - * Use [wget](https://www.gnu.org/software/wget): - - ```shell - - wget https://archive.apache.org/dist/pulsar/pulsar-2.6.2/apache-pulsar-2.6.2-bin.tar.gz - - ``` - -2. Download and untar the Pulsar offloaders package. - - ```bash - - wget https://downloads.apache.org/pulsar/pulsar-2.6.2/apache-pulsar-offloaders-2.6.2-bin.tar.gz - tar xvfz apache-pulsar-offloaders-2.6.2-bin.tar.gz - - ``` - -3. Copy the Pulsar offloaders as `offloaders` in the Pulsar directory. - - ``` - - mv apache-pulsar-offloaders-2.6.2/offloaders apache-pulsar-2.6.2/offloaders - - ls offloaders - - ``` - - **Output** - - As shown from the output, Pulsar uses [Apache jclouds](https://jclouds.apache.org) to support [AWS S3](https://aws.amazon.com/s3/), [GCS](https://cloud.google.com/storage/) and [Azure](https://portal.azure.com/#home) for long term storage. - - ``` - - tiered-storage-file-system-2.6.2.nar - tiered-storage-jcloud-2.6.2.nar - - ``` - - :::note - - * If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's Pulsar directory. - * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8s and DCOS), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. 
`apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - - ::: - -## Configuration - -:::note - -Before offloading data from BookKeeper to Azure BlobStore, you need to configure some properties of the Azure BlobStore offload driver. - -::: - -Besides, you can also configure the Azure BlobStore offloader to run it automatically or trigger it manually. - -### Configure Azure BlobStore offloader driver - -You can configure the Azure BlobStore offloader driver in the configuration file `broker.conf` or `standalone.conf`. - -- **Required** configurations are as below. - - Required configuration | Description | Example value - |---|---|--- - `managedLedgerOffloadDriver` | Offloader driver name | azureblob - `offloadersDirectory` | Offloader directory | offloaders - `managedLedgerOffloadBucket` | Bucket | pulsar-topic-offload - -- **Optional** configurations are as below. - - Optional | Description | Example value - |---|---|--- - `managedLedgerOffloadReadBufferSizeInBytes`|Size of block read|1 MB - `managedLedgerOffloadMaxBlockSizeInBytes`|Size of block write|64 MB - `managedLedgerMinLedgerRolloverTimeMinutes`|Minimum time between ledger rollover for a topic

    **Note**: it is not recommended that you set this configuration in the production environment.|2 - `managedLedgerMaxEntriesPerLedger`|Maximum number of entries to append to a ledger before triggering a rollover.

    **Note**: it is not recommended that you set this configuration in the production environment.|5000 - -#### Bucket (required) - -A bucket is a basic container that holds your data. Everything you store in Azure BlobStore must be contained in a bucket. You can use a bucket to organize your data and control access to your data, but unlike directory and folder, you cannot nest a bucket. - -##### Example - -This example names the bucket as _pulsar-topic-offload_. - -```conf - -managedLedgerOffloadBucket=pulsar-topic-offload - -``` - -#### Authentication (required) - -To be able to access Azure BlobStore, you need to authenticate with Azure BlobStore. - -* Set the environment variables `AZURE_STORAGE_ACCOUNT` and `AZURE_STORAGE_ACCESS_KEY` in `conf/pulsar_env.sh`. - - "export" is important so that the variables are made available in the environment of spawned processes. - - ```bash - - export AZURE_STORAGE_ACCOUNT=ABC123456789 - export AZURE_STORAGE_ACCESS_KEY=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c - - ``` - -#### Size of block read/write - -You can configure the size of a request sent to or read from Azure BlobStore in the configuration file `broker.conf` or `standalone.conf`. - -Configuration|Description|Default value -|---|---|--- -`managedLedgerOffloadReadBufferSizeInBytes`|Block size for each individual read when reading back data from Azure BlobStore store.|1 MB -`managedLedgerOffloadMaxBlockSizeInBytes`|Maximum size of a "part" sent during a multipart upload to Azure BlobStore store. It **cannot** be smaller than 5 MB. |64 MB - -### Configure Azure BlobStore offloader to run automatically - -Namespace policy can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic reaches the threshold, an offloading operation is triggered automatically. - -Threshold value|Action -|---|--- -> 0 | It triggers the offloading operation if the topic storage reaches its threshold. -= 0|It causes a broker to offload data as soon as possible. -< 0 |It disables automatic offloading operation. - -Automatic offloading runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, offloader does not work until the current segment is full. - -You can configure the threshold size using CLI tools, such as pulsar-admin. - -The offload configurations in `broker.conf` and `standalone.conf` are used for the namespaces that do not have namespace level offload policies. Each namespace can have its own offload policy. If you want to set offload policy for each namespace, use the command [`pulsar-admin namespaces set-offload-policies options`](/tools/pulsar-admin/) command. - -#### Example - -This example sets the Azure BlobStore offloader threshold size to 10 MB using pulsar-admin. - -```bash - -bin/pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace - -``` - -:::tip - -For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, and default values, see [here](/tools/pulsar-admin/). - -::: - -### Configure Azure BlobStore offloader to run manually - -For individual topics, you can trigger Azure BlobStore offloader manually using one of the following methods: - -- Use REST endpoint. - -- Use CLI tools (such as pulsar-admin). 
- - To trigger it via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are moved to Azure BlobStore until the threshold is no longer exceeded. Older segments are moved first. - -#### Example - -- This example triggers the Azure BlobStore offloader to run manually using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload --size-threshold 10M my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1 - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, and default values, see [here](/tools/pulsar-admin/). - - ::: - -- This example checks the Azure BlobStore offloader status using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload is currently running - - ``` - - To wait for the Azure BlobStore offloader to complete the job, add the `-w` flag. - - ```bash - - bin/pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Offload was a success - - ``` - - If there is an error in offloading, the error is propagated to the `pulsar-admin topics offload-status` command. - - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Error in offload - null - - Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, and default values, see [here](/tools/pulsar-admin/). - - ::: - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/tiered-storage-filesystem.md b/site2/website/versioned_docs/version-2.10.0-deprecated/tiered-storage-filesystem.md deleted file mode 100644 index bb399b500cb022..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/tiered-storage-filesystem.md +++ /dev/null @@ -1,631 +0,0 @@ ---- -id: tiered-storage-filesystem -title: Use filesystem offloader with Pulsar -sidebar_label: "Filesystem offloader" -original_id: tiered-storage-filesystem ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -This chapter guides you through every step of installing and configuring the filesystem offloader and using it with Pulsar. - -## Installation - -This section describes how to install the filesystem offloader. - -### Prerequisite - -- Pulsar: 2.4.2 or higher versions - -### Step - -This example uses Pulsar 2.5.1. - -1. Download the Pulsar tarball using one of the following ways: - - * Download the Pulsar tarball from the [Apache mirror](https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz) - - * Download the Pulsar tarball from the Pulsar [download page](/download/) - - * Use the [wget](https://www.gnu.org/software/wget) command to dowload the Pulsar tarball. - - ```shell - - wget https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz - - ``` - -2. Download and untar the Pulsar offloaders package. 
- - ```bash - - wget https://downloads.apache.org/pulsar/pulsar-2.5.1/apache-pulsar-offloaders-2.5.1-bin.tar.gz - - tar xvfz apache-pulsar-offloaders-2.5.1-bin.tar.gz - - ``` - - :::note - - * If you run Pulsar in a bare metal cluster, ensure that the `offloaders` tarball is unzipped in every broker's Pulsar directory. - * If you run Pulsar in Docker or deploying Pulsar using a Docker image (such as K8S and DCOS), you can use the `apachepulsar/pulsar-all` image. The `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - - ::: - -3. Copy the Pulsar offloaders as `offloaders` in the Pulsar directory. - - ``` - - mv apache-pulsar-offloaders-2.5.1/offloaders apache-pulsar-2.5.1/offloaders - - ls offloaders - - ``` - - **Output** - - ``` - - tiered-storage-file-system-2.5.1.nar - tiered-storage-jcloud-2.5.1.nar - - ``` - - :::note - - * If you run Pulsar in a bare metal cluster, ensure that `offloaders` tarball is unzipped in every broker's Pulsar directory. - * If you run Pulsar in Docker or deploying Pulsar using a Docker image (such as K8s and DCOS), you can use the `apachepulsar/pulsar-all` image. The `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - - ::: - -## Configuration - -:::note - -Before offloading data from BookKeeper to filesystem, you need to configure some properties of the filesystem offloader driver. - -::: - -Besides, you can also configure the filesystem offloader to run it automatically or trigger it manually. - -### Configure filesystem offloader driver - -You can configure the filesystem offloader driver in the `broker.conf` or `standalone.conf` configuration file. - -````mdx-code-block - - - -- **Required** configurations are as below. - - Parameter | Description | Example value - |---|---|--- - `managedLedgerOffloadDriver` | Offloader driver name, which is case-insensitive. | filesystem - `fileSystemURI` | Connection address, which is the URI to access the default Hadoop distributed file system. | hdfs://127.0.0.1:9000 - `offloadersDirectory` | Offloader directory | offloaders - `fileSystemProfilePath` | Hadoop profile path. The configuration file is stored in the Hadoop profile path. It contains various settings for Hadoop performance tuning. | conf/filesystem_offload_core_site.xml - - -- **Optional** configurations are as below. - - Parameter| Description | Example value - |---|---|--- - `managedLedgerMinLedgerRolloverTimeMinutes`|Minimum time between ledger rollover for a topic.

    **Note**: it is not recommended to set this parameter in the production environment.|2 - `managedLedgerMaxEntriesPerLedger`|Maximum number of entries to append to a ledger before triggering a rollover.

    **Note**: it is not recommended to set this parameter in the production environment.|5000 - -
    - - -- **Required** configurations are as below. - - Parameter | Description | Example value - |---|---|--- - `managedLedgerOffloadDriver` | Offloader driver name, which is case-insensitive. | filesystem - `offloadersDirectory` | Offloader directory | offloaders - `fileSystemProfilePath` | NFS profile path. The configuration file is stored in the NFS profile path. It contains various settings for performance tuning. | conf/filesystem_offload_core_site.xml - -- **Optional** configurations are as below. - - Parameter| Description | Example value - |---|---|--- - `managedLedgerMinLedgerRolloverTimeMinutes`|Minimum time between ledger rollover for a topic.

    **Note**: it is not recommended to set this parameter in the production environment.|2 - `managedLedgerMaxEntriesPerLedger`|Maximum number of entries to append to a ledger before triggering a rollover.

    **Note**: it is not recommended to set this parameter in the production environment.|5000 - -
    - -
    -```` - -### Run filesystem offloader automatically - -You can configure the namespace policy to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic storage reaches the threshold, an offload operation is triggered automatically. - -Threshold value|Action -|---|--- -| > 0 | It triggers the offloading operation if the topic storage reaches its threshold. -= 0|It causes a broker to offload data as soon as possible. -< 0 |It disables automatic offloading operation. - -Automatic offload runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, the filesystem offloader does not work until the current segment is full. - -You can configure the threshold using CLI tools, such as pulsar-admin. - -#### Example - -This example sets the filesystem offloader threshold to 10 MB using pulsar-admin. - -```bash - -pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace - -``` - -:::tip - -For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#set-offload-threshold). - -::: - -### Run filesystem offloader manually - -For individual topics, you can trigger the filesystem offloader manually using one of the following methods: - -- Use the REST endpoint. - -- Use CLI tools (such as pulsar-admin). - -To manually trigger the filesystem offloader via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are offloaded to the filesystem until the threshold is no longer exceeded. Older segments are offloaded first. - -#### Example - -- This example manually run the filesystem offloader using pulsar-admin. - - ```bash - - pulsar-admin topics offload --size-threshold 10M persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1 - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#offload). - - ::: - -- This example checks filesystem offloader status using pulsar-admin. - - ```bash - - pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload is currently running - - ``` - - To wait for the filesystem to complete the job, add the `-w` flag. - - ```bash - - pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Offload was a success - - ``` - - If there is an error in the offloading operation, the error is propagated to the `pulsar-admin topics offload-status` command. - - ```bash - - pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Error in offload - null - - Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. 
(Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g= - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#offload-status). - - ::: - -## Tutorial - -This section provides step-by-step instructions on how to use the filesystem offloader to move data from Pulsar to Hadoop Distributed File System (HDFS) or Network File system (NFS). - -````mdx-code-block - - - -To move data from Pulsar to HDFS, follow these steps. - -### Step 1: Prepare the HDFS environment - -This tutorial sets up a Hadoop single node cluster and uses Hadoop 3.2.1. - -:::tip - -For details about how to set up a Hadoop single node cluster, see [here](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html). - -::: - -1. Download and uncompress Hadoop 3.2.1. - - ``` - - wget https://mirrors.bfsu.edu.cn/apache/hadoop/common/hadoop-3.2.1/hadoop-3.2.1.tar.gz - - tar -zxvf hadoop-3.2.1.tar.gz -C $HADOOP_HOME - - ``` - -2. Configure Hadoop. - - ``` - - # $HADOOP_HOME/etc/hadoop/core-site.xml - - - fs.defaultFS - hdfs://localhost:9000 - - - - # $HADOOP_HOME/etc/hadoop/hdfs-site.xml - - - dfs.replication - 1 - - - - ``` - -3. Set passphraseless ssh. - - ``` - - # Now check that you can ssh to the localhost without a passphrase: - $ ssh localhost - # If you cannot ssh to localhost without a passphrase, execute the following commands - $ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa - $ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys - $ chmod 0600 ~/.ssh/authorized_keys - - ``` - -4. Start HDFS. - - ``` - - # don't execute this command repeatedly, repeat execute will cauld the clusterId of the datanode is not consistent with namenode - $HADOOP_HOME/bin/hadoop namenode -format - $HADOOP_HOME/sbin/start-dfs.sh - - ``` - -5. Navigate to the [HDFS website](http://localhost:9870/). - - You can see the **Overview** page. - - ![](/assets/FileSystem-1.png) - - 1. At the top navigation bar, click **Datanodes** to check DataNode information. - - ![](/assets/FileSystem-2.png) - - 2. Click **HTTP Address** to get more detailed information about localhost:9866. - - As can be seen below, the size of **Capacity Used** is 4 KB, which is the initial value. - - ![](/assets/FileSystem-3.png) - -### Step 2: Install the filesystem offloader - -For details, see [installation](#installation). - -### Step 3: Configure the filesystem offloader - -As indicated in the [configuration](#configuration) section, you need to configure some properties for the filesystem offloader driver before using it. This tutorial assumes that you have configured the filesystem offloader driver as below and run Pulsar in **standalone** mode. - -Set the following configurations in the `conf/standalone.conf` file. - -```conf - -managedLedgerOffloadDriver=filesystem -fileSystemURI=hdfs://127.0.0.1:9000 -fileSystemProfilePath=conf/filesystem_offload_core_site.xml - -``` - -:::note - -For testing purposes, you can set the following two configurations to speed up ledger rollover, but it is not recommended that you set them in the production environment. 
- -::: - -``` - -managedLedgerMinLedgerRolloverTimeMinutes=1 -managedLedgerMaxEntriesPerLedger=100 - -``` - - - - -:::note - -In this section, it is assumed that you have enabled NFS service and set the shared path of your NFS service. In this section, `/Users/test` is used as the shared path of NFS service. - -::: - -To offload data to NFS, follow these steps. - -### Step 1: Install the filesystem offloader - -For details, see [installation](#installation). - -### Step 2: Mont your NFS to your local filesystem - -This example mounts mounts */Users/pulsar_nfs* to */Users/test*. - -``` - -mount -e 192.168.0.103:/Users/test/Users/pulsar_nfs - -``` - -### Step 3: Configure the filesystem offloader driver - -As indicated in the [configuration](#configuration) section, you need to configure some properties for the filesystem offloader driver before using it. This tutorial assumes that you have configured the filesystem offloader driver as below and run Pulsar in **standalone** mode. - -1. Set the following configurations in the `conf/standalone.conf` file. - - ```conf - - managedLedgerOffloadDriver=filesystem - fileSystemProfilePath=conf/filesystem_offload_core_site.xml - - ``` - -2. Modify the *filesystem_offload_core_site.xml* as follows. - - ``` - - - fs.defaultFS - file:/// - - - - hadoop.tmp.dir - file:///Users/pulsar_nfs - - - - io.file.buffer.size - 4096 - - - - io.seqfile.compress.blocksize - 1000000 - - - - io.seqfile.compression.type - BLOCK - - - - io.map.index.interval - 128 - - - ``` - - - - -```` - -### Step 4: Offload data from BookKeeper to filesystem - -Execute the following commands in the repository where you download Pulsar tarball. For example, `~/path/to/apache-pulsar-2.5.1`. - -1. Start Pulsar standalone. - - ``` - - bin/pulsar standalone -a 127.0.0.1 - - ``` - -2. To ensure the data generated is not deleted immediately, it is recommended to set the [retention policy](cookbooks-retention-expiry.md#retention-policies), which can be either a **size** limit or a **time** limit. The larger value you set for the retention policy, the longer the data can be retained. - - ``` - - bin/pulsar-admin namespaces set-retention public/default --size 100M --time 2d - - ``` - - :::tip - - For more information about the `pulsarctl namespaces set-retention options` command, including flags, descriptions, default values, and shorthands, see [here](https://docs.streamnative.io/pulsarctl/v2.7.0.6/#-em-set-retention-em-). - - ::: - -3. Produce data using pulsar-client. - - ``` - - bin/pulsar-client produce -m "Hello FileSystem Offloader" -n 1000 public/default/fs-test - - ``` - -4. The offloading operation starts after a ledger rollover is triggered. To ensure offload data successfully, it is recommended that you wait until several ledger rollovers are triggered. In this case, you might need to wait for a second. You can check the ledger status using pulsarctl. - - ``` - - bin/pulsar-admin topics stats-internal public/default/fs-test - - ``` - - **Output** - - The data of the ledger 696 is not offloaded. - - ``` - - { - "version": 1, - "creationDate": "2020-06-16T21:46:25.807+08:00", - "modificationDate": "2020-06-16T21:46:25.821+08:00", - "ledgers": [ - { - "ledgerId": 696, - "isOffloaded": false - } - ], - "cursors": {} - } - - ``` - -5. Wait a second and send more messages to the topic. - - ``` - - bin/pulsar-client produce -m "Hello FileSystem Offloader" -n 1000 public/default/fs-test - - ``` - -6. Check the ledger status using pulsarctl. 
- - ``` - - bin/pulsar-admin topics stats-internal public/default/fs-test - - ``` - - **Output** - - The ledger 696 is rolled over. - - ``` - - { - "version": 2, - "creationDate": "2020-06-16T21:46:25.807+08:00", - "modificationDate": "2020-06-16T21:48:52.288+08:00", - "ledgers": [ - { - "ledgerId": 696, - "entries": 1001, - "size": 81695, - "isOffloaded": false - }, - { - "ledgerId": 697, - "isOffloaded": false - } - ], - "cursors": {} - } - - ``` - -7. Trigger the offloading operation manually using pulsarctl. - - ``` - - bin/pulsar-admin topics offload -s 0 public/default/fs-test - - ``` - - **Output** - - Data in ledgers before the ledge 697 is offloaded. - - ``` - - # offload info, the ledgers before 697 will be offloaded - Offload triggered for persistent://public/default/fs-test3 for messages before 697:0:-1 - - ``` - -8. Check the ledger status using pulsarctl. - - ``` - - bin/pulsar-admin topics stats-internal public/default/fs-test - - ``` - - **Output** - - The data of the ledger 696 is offloaded. - - ``` - - { - "version": 4, - "creationDate": "2020-06-16T21:46:25.807+08:00", - "modificationDate": "2020-06-16T21:52:13.25+08:00", - "ledgers": [ - { - "ledgerId": 696, - "entries": 1001, - "size": 81695, - "isOffloaded": true - }, - { - "ledgerId": 697, - "isOffloaded": false - } - ], - "cursors": {} - } - - ``` - - And the **Capacity Used** is changed from 4 KB to 116.46 KB. - - ![](/assets/FileSystem-8.png) \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/tiered-storage-gcs.md b/site2/website/versioned_docs/version-2.10.0-deprecated/tiered-storage-gcs.md deleted file mode 100644 index df1b4f6fb7edbf..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/tiered-storage-gcs.md +++ /dev/null @@ -1,319 +0,0 @@ ---- -id: tiered-storage-gcs -title: Use GCS offloader with Pulsar -sidebar_label: "GCS offloader" -original_id: tiered-storage-gcs ---- - -This chapter guides you through every step of installing and configuring the GCS offloader and using it with Pulsar. - -## Installation - -Follow the steps below to install the GCS offloader. - -### Prerequisite - -- Pulsar: 2.4.2 or later versions - -### Step - -This example uses Pulsar 2.5.1. - -1. Download the Pulsar tarball using one of the following ways: - - * Download from the [Apache mirror](https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz) - - * Download from the Pulsar [download page](/download) - - * Use [wget](https://www.gnu.org/software/wget) - - ```shell - - wget https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz - - ``` - -2. Download and untar the Pulsar offloaders package. - - ```bash - - wget https://downloads.apache.org/pulsar/pulsar-2.5.1/apache-pulsar-offloaders-2.5.1-bin.tar.gz - - tar xvfz apache-pulsar-offloaders-2.5.1-bin.tar.gz - - ``` - - :::note - - * If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's Pulsar directory. - * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8S and DCOS), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - - ::: - -3. Copy the Pulsar offloaders as `offloaders` in the Pulsar directory. 
- - ``` - - mv apache-pulsar-offloaders-2.5.1/offloaders apache-pulsar-2.5.1/offloaders - - ls offloaders - - ``` - - **Output** - - As shown in the output, Pulsar uses [Apache jclouds](https://jclouds.apache.org) to support GCS and AWS S3 for long term storage. - - ``` - - tiered-storage-file-system-2.5.1.nar - tiered-storage-jcloud-2.5.1.nar - - ``` - -## Configuration - -:::note - -Before offloading data from BookKeeper to GCS, you need to configure some properties of the GCS offloader driver. - -::: - -Besides, you can also configure the GCS offloader to run it automatically or trigger it manually. - -### Configure GCS offloader driver - -You can configure GCS offloader driver in the configuration file `broker.conf` or `standalone.conf`. - -- **Required** configurations are as below. - - **Required** configuration | Description | Example value - |---|---|--- - `managedLedgerOffloadDriver`|Offloader driver name, which is case-insensitive.|google-cloud-storage - `offloadersDirectory`|Offloader directory|offloaders - `gcsManagedLedgerOffloadBucket`|Bucket|pulsar-topic-offload - `gcsManagedLedgerOffloadRegion`|Bucket region|europe-west3 - `gcsManagedLedgerOffloadServiceAccountKeyFile`|Authentication |/Users/user-name/Downloads/project-804d5e6a6f33.json - -- **Optional** configurations are as below. - - Optional configuration|Description|Example value - |---|---|--- - `gcsManagedLedgerOffloadReadBufferSizeInBytes`|Size of block read|1 MB - `gcsManagedLedgerOffloadMaxBlockSizeInBytes`|Size of block write|64 MB - `managedLedgerMinLedgerRolloverTimeMinutes`|Minimum time between ledger rollover for a topic.|2 - `managedLedgerMaxEntriesPerLedger`|The max number of entries to append to a ledger before triggering a rollover.|5000 - -#### Bucket (required) - -A bucket is a basic container that holds your data. Everything you store in GCS **must** be contained in a bucket. You can use a bucket to organize your data and control access to your data, but unlike directory and folder, you can not nest a bucket. - -##### Example - -This example names the bucket as _pulsar-topic-offload_. - -```conf - -gcsManagedLedgerOffloadBucket=pulsar-topic-offload - -``` - -#### Bucket region (required) - -Bucket region is the region where a bucket is located. If a bucket region is not specified, the **default** region (`us multi-regional location`) is used. - -:::tip - -For more information about bucket location, see [here](https://cloud.google.com/storage/docs/bucket-locations). - -::: - -##### Example - -This example sets the bucket region as _europe-west3_. - -``` - -gcsManagedLedgerOffloadRegion=europe-west3 - -``` - -#### Authentication (required) - -To enable a broker access GCS, you need to configure `gcsManagedLedgerOffloadServiceAccountKeyFile` in the configuration file `broker.conf`. - -`gcsManagedLedgerOffloadServiceAccountKeyFile` is -a JSON file, containing GCS credentials of a service account. - -##### Example - -To generate service account credentials or view the public credentials that you've already generated, follow the following steps. - -1. Navigate to the [Service accounts page](https://console.developers.google.com/iam-admin/serviceaccounts). - -2. Select a project or create a new one. - -3. Click **Create service account**. - -4. In the **Create service account** window, type a name for the service account and select **Furnish a new private key**. 
- - If you want to [grant G Suite domain-wide authority](https://developers.google.com/identity/protocols/OAuth2ServiceAccount#delegatingauthority) to the service account, select **Enable G Suite Domain-wide Delegation**. - -5. Click **Create**. - - :::note - - Make sure the service account you create has permission to operate GCS, you need to assign **Storage Admin** permission to your service account [here](https://cloud.google.com/storage/docs/access-control/iam). - - ::: - -6. You can get the following information and set this in `broker.conf`. - - ```conf - - gcsManagedLedgerOffloadServiceAccountKeyFile="/Users/user-name/Downloads/project-804d5e6a6f33.json" - - ``` - - :::tip - - - For more information about how to create `gcsManagedLedgerOffloadServiceAccountKeyFile`, see [here](https://support.google.com/googleapi/answer/6158849). - - For more information about Google Cloud IAM, see [here](https://cloud.google.com/storage/docs/access-control/iam). - - ::: - -#### Size of block read/write - -You can configure the size of a request sent to or read from GCS in the configuration file `broker.conf`. - -Configuration|Description -|---|--- -`gcsManagedLedgerOffloadReadBufferSizeInBytes`|Block size for each individual read when reading back data from GCS.
    The **default** value is 1 MB.
`gcsManagedLedgerOffloadMaxBlockSizeInBytes`|Maximum size of a "part" sent during a multipart upload to GCS.<br/>
    It **cannot** be smaller than 5 MB.<br/>
    The **default** value is 64 MB. - -### Configure GCS offloader to run automatically - -Namespace policy can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic reaches the threshold, an offload operation is triggered automatically. - -Threshold value|Action -|---|--- -> 0 | It triggers the offloading operation if the topic storage reaches its threshold. -= 0|It causes a broker to offload data as soon as possible. -< 0 |It disables automatic offloading operation. - -Automatic offloading runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, offloader does not work until the current segment is full. - -You can configure the threshold size using CLI tools, such as pulsar-admin. - -The offload configurations in `broker.conf` and `standalone.conf` are used for the namespaces that do not have namespace level offload policies. Each namespace can have its own offload policy. If you want to set offload policy for each namespace, use the command [`pulsar-admin namespaces set-offload-policies options`](/tools/pulsar-admin/) command. - -#### Example - -This example sets the GCS offloader threshold size to 10 MB using pulsar-admin. - -```bash - -pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace - -``` - -:::tip - -For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#set-offload-threshold). - -::: - -### Configure GCS offloader to run manually - -For individual topics, you can trigger GCS offloader manually using one of the following methods: - -- Use REST endpoint. - -- Use CLI tools (such as pulsar-admin). - - To trigger the GCS via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are moved to GCS until the threshold is no longer exceeded. Older segments are moved first. - -#### Example - -- This example triggers the GCS offloader to run manually using pulsar-admin with the command `pulsar-admin topics offload (topic-name) (threshold)`. - - ```bash - - pulsar-admin topics offload persistent://my-tenant/my-namespace/topic1 10M - - ``` - - **Output** - - ```bash - - Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1 - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#offload). - - ::: - -- This example checks the GCS offloader status using pulsar-admin with the command `pulsar-admin topics offload-status options`. - - ```bash - - pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload is currently running - - ``` - - To wait for GCS to complete the job, add the `-w` flag. - - ```bash - - pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Offload was a success - - ``` - - If there is an error in offloading, the error is propagated to the `pulsar-admin topics offload-status` command. 
- - ```bash - - pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Error in offload - null - - Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g= - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#offload-status). - - ::: - -## Tutorial - -For the complete and step-by-step instructions on how to use the GCS offloader with Pulsar, see [here](https://hub.streamnative.io/offloaders/gcs/2.5.1#usage). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/tiered-storage-overview.md b/site2/website/versioned_docs/version-2.10.0-deprecated/tiered-storage-overview.md deleted file mode 100644 index c635034f463b46..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/tiered-storage-overview.md +++ /dev/null @@ -1,52 +0,0 @@ ---- -id: tiered-storage-overview -title: Overview of tiered storage -sidebar_label: "Overview" -original_id: tiered-storage-overview ---- - -Pulsar's **Tiered Storage** feature allows older backlog data to be moved from BookKeeper to long term and cheaper storage, while still allowing clients to access the backlog as if nothing has changed. - -* Tiered storage uses [Apache jclouds](https://jclouds.apache.org) to support [Amazon S3](https://aws.amazon.com/s3/) and [GCS (Google Cloud Storage)](https://cloud.google.com/storage/) for long term storage. - - With jclouds, it is easy to add support for more [cloud storage providers](https://jclouds.apache.org/reference/providers/#blobstore-providers) in the future. - - :::tip - - - For more information about how to use the AWS S3 offloader with Pulsar, see [here](tiered-storage-aws.md). - - - For more information about how to use the GCS offloader with Pulsar, see [here](tiered-storage-gcs.md). - - ::: - -* Tiered storage uses [Apache Hadoop](http://hadoop.apache.org/) to support filesystems for long term storage. - - With Hadoop, it is easy to add support for more filesystems in the future. - - :::tip - - For more information about how to use the filesystem offloader with Pulsar, see [here](tiered-storage-filesystem.md). - - ::: - -## When to use tiered storage? - -Tiered storage should be used when you have a topic for which you want to keep a very long backlog for a long time. - -For example, if you have a topic containing user actions which you use to train your recommendation systems, you may want to keep that data for a long time, so that if you change your recommendation algorithm, you can rerun it against your full user history. - -## How does tiered storage work? - -A topic in Pulsar is backed by a **log**, known as a **managed ledger**. This log is composed of an ordered list of segments. Pulsar only writes to the final segment of the log. All previous segments are sealed. The data within the segment is immutable. 
This is known as a **segment oriented architecture**. - -![Tiered storage](/assets/pulsar-tiered-storage.png "Tiered Storage") - -The tiered storage offloading mechanism takes advantage of segment oriented architecture. When offloading is requested, the segments of the log are copied one-by-one to tiered storage. All segments of the log (apart from the current segment) written to tiered storage can be offloaded. - -Data written to BookKeeper is replicated to 3 physical machines by default. However, once a segment is sealed in BookKeeper, it becomes immutable and can be copied to long term storage. Long term storage can achieve cost savings by using mechanisms such as [Reed-Solomon error correction](https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction) to require fewer physical copies of data. - -Before offloading ledgers to long term storage, you need to configure buckets, credentials, and other properties for the cloud storage service. Additionally, Pulsar uses multi-part objects to upload the segment data and brokers may crash while uploading the data. It is recommended that you add a life cycle rule for your bucket to expire incomplete multi-part upload after a day or two days to avoid getting charged for incomplete uploads. Moreover, you can trigger the offloading operation manually (via REST API or CLI) or automatically (via CLI). - -After offloading ledgers to long term storage, you can still query data in the offloaded ledgers with Pulsar SQL. - -For more information about tiered storage for Pulsar topics, see [here](https://github.com/apache/pulsar/wiki/PIP-17:-Tiered-storage-for-Pulsar-topics). diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/transaction-api.md b/site2/website/versioned_docs/version-2.10.0-deprecated/transaction-api.md deleted file mode 100644 index 25a99479639bdb..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/transaction-api.md +++ /dev/null @@ -1,172 +0,0 @@ ---- -id: transactions-api -title: Transactions API -sidebar_label: "Transactions API" -original_id: transactions-api ---- - -All messages in a transaction are available only to consumers after the transaction has been committed. If a transaction has been aborted, all the writes and acknowledgments in this transaction roll back. - -## Prerequisites -1. To enable transactions in Pulsar, you need to configure the parameter in `broker.conf` file or `standalone.conf` file. - -``` - -transactionCoordinatorEnabled=true - -``` - -2. Initialize transaction coordinator metadata, so the transaction coordinators can leverage advantages of the partitioned topic, such as load balance. - -``` - -bin/pulsar initialize-transaction-coordinator-metadata -cs 127.0.0.1:2181 -c standalone - -``` - -After initializing transaction coordinator metadata, you can use the transactions API. The following APIs are available. - -## Initialize Pulsar client - -You can enable transaction for transaction client and initialize transaction coordinator client. - -``` - -PulsarClient pulsarClient = PulsarClient.builder() - .serviceUrl("pulsar://localhost:6650") - .enableTransaction(true) - .build(); - -``` - -## Start transactions -You can start transaction in the following way. - -``` - -Transaction txn = pulsarClient - .newTransaction() - .withTransactionTimeout(5, TimeUnit.MINUTES) - .build() - .get(); - -``` - -## Produce transaction messages - -A transaction parameter is required when producing new transaction messages. 
The semantic of the transaction messages in Pulsar is `read-committed`, so the consumer cannot receive the ongoing transaction messages before the transaction is committed. - -``` - -producer.newMessage(txn).value("Hello Pulsar Transaction".getBytes()).sendAsync(); - -``` - -## Acknowledge the messages with the transaction - -The transaction acknowledgement requires a transaction parameter. The transaction acknowledgement marks the messages state to pending-ack state. When the transaction is committed, the pending-ack state becomes ack state. If the transaction is aborted, the pending-ack state becomes unack state. - -``` - -Message message = consumer.receive(); -consumer.acknowledgeAsync(message.getMessageId(), txn); - -``` - -## Commit transactions - -When the transaction is committed, consumers receive the transaction messages and the pending-ack state becomes ack state. - -``` - -txn.commit().get(); - -``` - -## Abort transaction - -When the transaction is aborted, the transaction acknowledgement is canceled and the pending-ack messages are redelivered. - -``` - -txn.abort().get(); - -``` - -### Example -The following example shows how messages are processed in transaction. - -``` - -PulsarClient pulsarClient = PulsarClient.builder() - .serviceUrl(getPulsarServiceList().get(0).getBrokerServiceUrl()) - .statsInterval(0, TimeUnit.SECONDS) - .enableTransaction(true) - .build(); - -String sourceTopic = "public/default/source-topic"; -String sinkTopic = "public/default/sink-topic"; - -Producer sourceProducer = pulsarClient - .newProducer(Schema.STRING) - .topic(sourceTopic) - .create(); -sourceProducer.newMessage().value("hello pulsar transaction").sendAsync(); - -Consumer sourceConsumer = pulsarClient - .newConsumer(Schema.STRING) - .topic(sourceTopic) - .subscriptionName("test") - .subscriptionType(SubscriptionType.Shared) - .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest) - .subscribe(); - -Producer sinkProducer = pulsarClient - .newProducer(Schema.STRING) - .topic(sinkTopic) - .sendTimeout(0, TimeUnit.MILLISECONDS) - .create(); - -Transaction txn = pulsarClient - .newTransaction() - .withTransactionTimeout(5, TimeUnit.MINUTES) - .build() - .get(); - -// source message acknowledgement and sink message produce belong to one transaction, -// they are combined into an atomic operation. -Message message = sourceConsumer.receive(); -sourceConsumer.acknowledgeAsync(message.getMessageId(), txn); -sinkProducer.newMessage(txn).value("sink data").sendAsync(); - -txn.commit().get(); - -``` - -## Enable batch messages in transactions - -To enable batch messages in transactions, you need to enable the batch index acknowledgement feature. The transaction acks check whether the batch index acknowledgement conflicts. - -To enable batch index acknowledgement, you need to set `acknowledgmentAtBatchIndexLevelEnabled` to `true` in the `broker.conf` or `standalone.conf` file. - -``` - -acknowledgmentAtBatchIndexLevelEnabled=true - -``` - -And then you need to call the `enableBatchIndexAcknowledgment(true)` method in the consumer builder. 
- -``` - -Consumer sinkConsumer = pulsarClient - .newConsumer() - .topic(transferTopic) - .subscriptionName("sink-topic") - .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest) - .subscriptionType(SubscriptionType.Shared) - .enableBatchIndexAcknowledgment(true) // enable batch index acknowledgement - .subscribe(); - -``` - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/transaction-guarantee.md b/site2/website/versioned_docs/version-2.10.0-deprecated/transaction-guarantee.md deleted file mode 100644 index 9db2d254e159f6..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/transaction-guarantee.md +++ /dev/null @@ -1,17 +0,0 @@ ---- -id: transactions-guarantee -title: Transactions Guarantee -sidebar_label: "Transactions Guarantee" -original_id: transactions-guarantee ---- - -Pulsar transactions support the following guarantee. - -## Atomic multi-partition writes and multi-subscription acknowledges -Transactions enable atomic writes to multiple topics and partitions. A batch of messages in a transaction can be received from, produced to, and acknowledged by many partitions. All the operations involved in a transaction succeed or fail as a single unit. - -## Read transactional message -All the messages in a transaction are available only for consumers until the transaction is committed. - -## Acknowledge transactional message -A message is acknowledged successfully only once by a consumer under the subscription when acknowledging the message with the transaction ID. \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/txn-how.md b/site2/website/versioned_docs/version-2.10.0-deprecated/txn-how.md deleted file mode 100644 index add072448aeb34..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/txn-how.md +++ /dev/null @@ -1,151 +0,0 @@ ---- -id: txn-how -title: How transactions work? -sidebar_label: "How transactions work?" -original_id: txn-how ---- - -This section describes transaction components and how the components work together. For the complete design details, see [PIP-31: Transactional Streaming](https://docs.google.com/document/d/145VYp09JKTw9jAT-7yNyFU255FptB2_B2Fye100ZXDI/edit#heading=h.bm5ainqxosrx). - -## Key concept - -It is important to know the following key concepts, which is a prerequisite for understanding how transactions work. - -### Transaction coordinator - -The transaction coordinator (TC) is a module running inside a Pulsar broker. - -* It maintains the entire life cycle of transactions and prevents a transaction from getting into an incorrect status. - -* It handles transaction timeout, and ensures that the transaction is aborted after a transaction timeout. - -### Transaction log - -All the transaction metadata persists in the transaction log. The transaction log is backed by a Pulsar topic. If the transaction coordinator crashes, it can restore the transaction metadata from the transaction log. - -The transaction log stores the transaction status rather than actual messages in the transaction (the actual messages are stored in the actual topic partitions). - -### Transaction buffer - -Messages produced to a topic partition within a transaction are stored in the transaction buffer (TB) of that topic partition. The messages in the transaction buffer are not visible to consumers until the transactions are committed. The messages in the transaction buffer are discarded when the transactions are aborted. 
- -Transaction buffer stores all ongoing and aborted transactions in memory. All messages are sent to the actual partitioned Pulsar topics. After transactions are committed, the messages in the transaction buffer are materialized (visible) to consumers. When the transactions are aborted, the messages in the transaction buffer are discarded. - -### Transaction ID - -Transaction ID (TxnID) identifies a unique transaction in Pulsar. The transaction ID is 128-bit. The highest 16 bits are reserved for the ID of the transaction coordinator, and the remaining bits are used for monotonically increasing numbers in each transaction coordinator. It is easy to locate the transaction crash with the TxnID. - -### Pending acknowledge state - -Pending acknowledge state maintains message acknowledgments within a transaction before a transaction completes. If a message is in the pending acknowledge state, the message cannot be acknowledged by other transactions until the message is removed from the pending acknowledge state. - -The pending acknowledge state is persisted to the pending acknowledge log (cursor ledger). A new broker can restore the state from the pending acknowledge log to ensure the acknowledgement is not lost. - -## Data flow - -At a high level, the data flow can be split into several steps: - -1. Begin a transaction. - -2. Publish messages with a transaction. - -3. Acknowledge messages with a transaction. - -4. End a transaction. - -To help you debug or tune the transaction for better performance, review the following diagrams and descriptions. - -### 1. Begin a transaction - -Before introducing the transaction in Pulsar, a producer is created and then messages are sent to brokers and stored in data logs. - -![](/assets/txn-3.png) - -Let’s walk through the steps for _beginning a transaction_. - -| Step | Description | -| --- | --- | -| 1.1 | The first step is that the Pulsar client finds the transaction coordinator. | -| 1.2 | The transaction coordinator allocates a transaction ID for the transaction. In the transaction log, the transaction is logged with its transaction ID and status (OPEN), which ensures the transaction status is persisted regardless of transaction coordinator crashes. | -| 1.3 | The transaction log sends the result of persisting the transaction ID to the transaction coordinator. | -| 1.4 | After the transaction status entry is logged, the transaction coordinator brings the transaction ID back to the Pulsar client. | - -### 2. Publish messages with a transaction - -In this stage, the Pulsar client enters a transaction loop, repeating the `consume-process-produce` operation for all the messages that comprise the transaction. This is a long phase and is potentially composed of multiple produce and acknowledgement requests. - -![](/assets/txn-4.png) - -Let’s walk through the steps for _publishing messages with a transaction_. - -| Step | Description | -| --- | --- | -| 2.1.1 | Before the Pulsar client produces messages to a new topic partition, it sends a request to the transaction coordinator to add the partition to the transaction. | -| 2.1.2 | The transaction coordinator logs the partition changes of the transaction into the transaction log for durability, which ensures the transaction coordinator knows all the partitions that a transaction is handling. The transaction coordinator can commit or abort changes on each partition at the end-partition phase. 
| -| 2.1.3 | The transaction log sends the result of logging the new partition (used for producing messages) to the transaction coordinator. | -| 2.1.4 | The transaction coordinator sends the result of adding a new produced partition to the transaction. | -| 2.2.1 | The Pulsar client starts producing messages to partitions. The flow of this part is the same as the normal flow of producing messages except that the batch of messages produced by a transaction contains transaction IDs. | -| 2.2.2 | The broker writes messages to a partition. | - -### 3. Acknowledge messages with a transaction - -In this phase, the Pulsar client sends a request to the transaction coordinator and a new subscription is acknowledged as a part of a transaction. - -![](/assets/txn-5.png) - -Let’s walk through the steps for _acknowledging messages with a transaction_. - -| Step | Description | -| --- | --- | -| 3.1.1 | The Pulsar client sends a request to add an acknowledged subscription to the transaction coordinator. | -| 3.1.2 | The transaction coordinator logs the addition of subscription, which ensures that it knows all subscriptions handled by a transaction and can commit or abort changes on each subscription at the end phase. | -| 3.1.3 | The transaction log sends the result of logging the new partition (used for acknowledging messages) to the transaction coordinator. | -| 3.1.4 | The transaction coordinator sends the result of adding the new acknowledged partition to the transaction. | -| 3.2 | The Pulsar client acknowledges messages on the subscription. The flow of this part is the same as the normal flow of acknowledging messages except that the acknowledged request carries a transaction ID. | -| 3.3 | The broker receiving the acknowledgement request checks if the acknowledgment belongs to a transaction or not. | - -### 4. End a transaction - -At the end of a transaction, the Pulsar client decides to commit or abort the transaction. The transaction can be aborted when a conflict is detected on acknowledging messages. - -#### 4.1 End transaction request - -When the Pulsar client finishes a transaction, it issues an end transaction request. - -![](/assets/txn-6.png) - -Let’s walk through the steps for _ending the transaction_. - -| Step | Description | -| --- | --- | -| 4.1.1 | The Pulsar client issues an end transaction request (with a field indicating whether the transaction is to be committed or aborted) to the transaction coordinator. | -| 4.1.2 | The transaction coordinator writes a COMMITTING or ABORTING message to its transaction log. | -| 4.1.3 | The transaction log sends the result of logging the committing or aborting status. | - -#### 4.2 Finalize a transaction - -The transaction coordinator starts the process of committing or aborting messages to all the partitions involved in this transaction. - -![](/assets/txn-7.png) - -Let’s walk through the steps for _finalizing a transaction_. - -| Step | Description | -| --- | --- | -| 4.2.1 | The transaction coordinator commits transactions on subscriptions and commits transactions on partitions at the same time. | -| 4.2.2 | The broker (produce) writes produced committed markers to the actual partitions. At the same time, the broker (ack) writes acked committed marks to the subscription pending ack partitions. | -| 4.2.3 | The data log sends the result of writing produced committed marks to the broker. At the same time, pending ack data log sends the result of writing acked committed marks to the broker. The cursor moves to the next position. 
| - -#### 4.3 Mark a transaction as COMMITTED or ABORTED - -The transaction coordinator writes the final transaction status to the transaction log to complete the transaction. - -![](/assets/txn-8.png) - -Let’s walk through the steps for _marking a transaction as COMMITTED or ABORTED_. - -| Step | Description | -| --- | --- | -| 4.3.1 | After all produced messages and acknowledgements to all partitions involved in this transaction have been successfully committed or aborted, the transaction coordinator writes the final COMMITTED or ABORTED transaction status messages to its transaction log, indicating that the transaction is complete. All the messages associated with the transaction in its transaction log can be safely removed. | -| 4.3.2 | The transaction log sends the result of the committed transaction to the transaction coordinator. | -| 4.3.3 | The transaction coordinator sends the result of the committed transaction to the Pulsar client. | diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/txn-monitor.md b/site2/website/versioned_docs/version-2.10.0-deprecated/txn-monitor.md deleted file mode 100644 index 08e4a1be320367..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/txn-monitor.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -id: txn-monitor -title: How to monitor transactions? -sidebar_label: "How to monitor transactions?" -original_id: txn-monitor ---- - -You can monitor the status of the transactions in Prometheus and Grafana using the [transaction metrics](reference-metrics.md#pulsar-transaction). - -For how to configure Prometheus and Grafana, see [here](deploy-monitoring.md). diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/txn-use.md b/site2/website/versioned_docs/version-2.10.0-deprecated/txn-use.md deleted file mode 100644 index 8468917f3ba348..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/txn-use.md +++ /dev/null @@ -1,105 +0,0 @@ ---- -id: txn-use -title: How to use transactions? -sidebar_label: "How to use transactions?" -original_id: txn-use ---- - -## Transaction API - -The transaction feature is primarily a server-side and protocol-level feature. You can use the transaction feature via the [transaction API](/api/admin/), which is available in **Pulsar 2.8.0 or later**. - -To use the transaction API, you do not need any additional settings in the Pulsar client. **By default**, transactions is **disabled**. - -Currently, transaction API is only available for **Java** clients. Support for other language clients will be added in the future releases. - -## Quick start - -This section provides an example of how to use the transaction API to send and receive messages in a Java client. - -1. Start Pulsar 2.8.0 or later. - -2. Enable transaction. - - Change the configuration in the `broker.conf` file. - - ``` - - transactionCoordinatorEnabled=true - - ``` - - If you want to enable batch messages in transactions, follow the steps below. - - Set `acknowledgmentAtBatchIndexLevelEnabled` to `true` in the `broker.conf` or `standalone.conf` file. - - ``` - - acknowledgmentAtBatchIndexLevelEnabled=true - - ``` - -3. Initialize transaction coordinator metadata. - - The transaction coordinator can leverage the advantages of partitioned topics (such as load balance). - - **Input** - - ``` - - bin/pulsar initialize-transaction-coordinator-metadata -cs 127.0.0.1:2181 -c standalone - - ``` - - **Output** - - ``` - - Transaction coordinator metadata setup success - - ``` - -4. Initialize a Pulsar client. 
- - ``` - - PulsarClient client = PulsarClient.builder() - - .serviceUrl("pulsar://localhost:6650") - - .enableTransaction(true) - - .build(); - - ``` - -Now you can start using the transaction API to send and receive messages. Below is an example of a `consume-process-produce` application written in Java. - -![](/assets/txn-9.png) - -Let’s walk through this example step by step. - -| Step | Description | -| --- | --- | -| 1. Start a transaction. | The application opens a new transaction by calling PulsarClient.newTransaction. It specifics the transaction timeout as 1 minute. If the transaction is not committed within 1 minute, the transaction is automatically aborted. | -| 2. Receive messages from topics. | The application creates two normal consumers to receive messages from topic input-topic-1 and input-topic-2 respectively. | -| 3. Publish messages to topics with the transaction. | The application creates two producers to produce the resulting messages to the output topic _output-topic-1_ and output-topic-2 respectively. The application applies the processing logic and generates two output messages. The application sends those two output messages as part of the transaction opened in the first step via Producer.newMessage(Transaction). | -| 4. Acknowledge the messages with the transaction. | In the same transaction, the application acknowledges the two input messages. | -| 5. Commit the transaction. | The application commits the transaction by calling Transaction.commit() on the open transaction. The commit operation ensures the two input messages are marked as acknowledged and the two output messages are written successfully to the output topics. | - -[1] Example of enabling batch messages ack in transactions in the consumer builder. - -``` - -Consumer sinkConsumer = pulsarClient - .newConsumer() - .topic(transferTopic) - .subscriptionName("sink-topic") - -.subscriptionInitialPosition(SubscriptionInitialPosition.Earliest) - .subscriptionType(SubscriptionType.Shared) - .enableBatchIndexAcknowledgment(true) // enable batch index acknowledgement - .subscribe(); - -``` - diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/txn-what.md b/site2/website/versioned_docs/version-2.10.0-deprecated/txn-what.md deleted file mode 100644 index f8bf3eb7e56b80..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/txn-what.md +++ /dev/null @@ -1,60 +0,0 @@ ---- -id: txn-what -title: What are transactions? -sidebar_label: "What are transactions?" -original_id: txn-what ---- - -Transactions strengthen the message delivery semantics of Apache Pulsar and [processing guarantees of Pulsar Functions](functions-overview.md#processing-guarantees). The Pulsar Transaction API supports atomic writes and acknowledgments across multiple topics. - -Transactions allow: - -- A producer to send a batch of messages to multiple topics where all messages in the batch are eventually visible to any consumer, or none are ever visible to consumers. - -- End-to-end exactly-once semantics (execute a `consume-process-produce` operation exactly once). - -## Transaction semantics - -Pulsar transactions have the following semantics: - -* All operations within a transaction are committed as a single unit. - - * Either all messages are committed, or none of them are. - - * Each message is written or processed exactly once, without data loss or duplicates (even in the event of failures). - - * If a transaction is aborted, all the writes and acknowledgments in this transaction rollback. 
- -* A group of messages in a transaction can be received from, produced to, and acknowledged by multiple partitions. - - * Consumers are only allowed to read committed (acked) messages. In other words, the broker does not deliver transactional messages which are part of an open transaction or messages which are part of an aborted transaction. - - * Message writes across multiple partitions are atomic. - - * Message acks across multiple subscriptions are atomic. A message is acked successfully only once by a consumer under the subscription when acknowledging the message with the transaction ID. - -## Transactions and stream processing - -Stream processing on Pulsar is a `consume-process-produce` operation on Pulsar topics: - -* `Consume`: a source operator that runs a Pulsar consumer reads messages from one or multiple Pulsar topics. - -* `Process`: a processing operator transforms the messages. - -* `Produce`: a sink operator that runs a Pulsar producer writes the resulting messages to one or multiple Pulsar topics. - -![](/assets/txn-2.png) - -Pulsar transactions support end-to-end exactly-once stream processing, which means messages are not lost from a source operator and messages are not duplicated to a sink operator. - -## Use case - -Prior to Pulsar 2.8.0, there was no easy way to build stream processing applications with Pulsar to achieve exactly-once processing guarantees. With the transaction introduced in Pulsar 2.8.0, the following services support exactly-once semantics: - -* [Pulsar Flink connector](https://flink.apache.org/2021/01/07/pulsar-flink-connector-270.html) - - Prior to Pulsar 2.8.0, if you want to build stream applications using Pulsar and Flink, the Pulsar Flink connector only supported exactly-once source connector and at-least-once sink connector, which means the highest processing guarantee for end-to-end was at-least-once, there was possibility that the resulting messages from streaming applications produce duplicated messages to the resulting topics in Pulsar. - - With the transaction introduced in Pulsar 2.8.0, the Pulsar Flink sink connector can support exactly-once semantics by implementing the designated `TwoPhaseCommitSinkFunction` and hooking up the Flink sink message lifecycle with Pulsar transaction API. - -* Support for Pulsar Functions and other connectors will be added in the future releases. diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/txn-why.md b/site2/website/versioned_docs/version-2.10.0-deprecated/txn-why.md deleted file mode 100644 index e7273379f79493..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/txn-why.md +++ /dev/null @@ -1,45 +0,0 @@ ---- -id: txn-why -title: Why transactions? -sidebar_label: "Why transactions?" -original_id: txn-why ---- - -Pulsar transactions (txn) enable event streaming applications to consume, process, and produce messages in one atomic operation. The reason for developing this feature can be summarized as below. - -## Demand of stream processing - -The demand for stream processing applications with stronger processing guarantees has grown along with the rise of stream processing. For example, in the financial industry, financial institutions use stream processing engines to process debits and credits for users. This type of use case requires that every message is processed exactly once, without exception. 
In other words, if a stream processing application consumes message A and produces the result as a message B (B = f(A)), then the exactly-once processing guarantee means that A can be marked as consumed if and only if B is successfully produced, and vice versa.

![](/assets/txn-1.png)

The Pulsar transactions API strengthens the message delivery semantics and the processing guarantees for stream processing. It enables stream processing applications to consume, process, and produce messages in one atomic operation. That means a batch of messages in a transaction can be received from, produced to, and acknowledged by many topic partitions. All the operations involved in a transaction succeed or fail as one single unit.

## Limitation of idempotent producer

Avoiding data loss or duplication can be achieved by using the Pulsar idempotent producer, but it does not provide guarantees for writes across multiple partitions.

In Pulsar, the highest level of message delivery guarantee is using an [idempotent producer](concepts-messaging.md#producer-idempotency) with the exactly-once semantic at one single partition, that is, each message is persisted exactly once without data loss or duplication. However, there are some limitations in this solution:

- Due to the monotonically increasing sequence ID, this solution only works on a single partition and within a single producer session (that is, for producing one message), so there is no atomicity when producing multiple messages to one or multiple partitions.

  In this case, if there are some failures (for example, client / broker / bookie crashes, network failure, and more) in the process of producing and receiving messages, messages are re-processed and re-delivered, which may cause data loss or data duplication:

  - For the producer: if the producer retries sending messages, some messages are persisted multiple times; if the producer does not retry sending messages, some messages are persisted once and other messages are lost.

  - For the consumer: since the consumer does not know whether the broker has received messages or not, the consumer may not retry sending acks, which causes it to receive duplicate messages.

- Similarly, Pulsar Functions only guarantees exactly-once semantics for an idempotent function on a single event; it offers no such guarantee for processing multiple events or producing multiple results exactly once.

  For example, if a function accepts multiple events and produces one result (for example, a window function), the function may fail between producing the result and acknowledging the incoming messages, or even between acknowledging individual events, which causes all (or some) incoming messages to be re-delivered and reprocessed, and a new result to be generated.

  However, many scenarios need atomic guarantees across multiple partitions and sessions.

- Consumers need to rely on more mechanisms to acknowledge (ack) messages once.

  For example, consumers are required to store the MessageID along with its acked state. After the topic is unloaded, the subscription can recover the acked state of this MessageID in memory when the topic is loaded again.
\ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.10.0-deprecated/window-functions-context.md b/site2/website/versioned_docs/version-2.10.0-deprecated/window-functions-context.md deleted file mode 100644 index f80fea57989ef0..00000000000000 --- a/site2/website/versioned_docs/version-2.10.0-deprecated/window-functions-context.md +++ /dev/null @@ -1,581 +0,0 @@ ---- -id: window-functions-context -title: Window Functions Context -sidebar_label: "Window Functions: Context" -original_id: window-functions-context ---- - -Java SDK provides access to a **window context object** that can be used by a window function. This context object provides a wide variety of information and functionality for Pulsar window functions as below. - -- [Spec](#spec) - - * Names of all input topics and the output topic associated with the function. - * Tenant and namespace associated with the function. - * Pulsar window function name, ID, and version. - * ID of the Pulsar function instance running the window function. - * Number of instances that invoke the window function. - * Built-in type or custom class name of the output schema. - -- [Logger](#logger) - - * Logger object used by the window function, which can be used to create window function log messages. - -- [User config](#user-config) - - * Access to arbitrary user configuration values. - -- [Routing](#routing) - - * Routing is supported in Pulsar window functions. Pulsar window functions send messages to arbitrary topics as per the `publish` interface. - -- [Metrics](#metrics) - - * Interface for recording metrics. - -- [State storage](#state-storage) - - * Interface for storing and retrieving state in [state storage](#state-storage). - -## Spec - -Spec contains the basic information of a function. - -### Get input topics - -The `getInputTopics` method gets the **name list** of all input topics. - -This example demonstrates how to get the name list of all input topics in a Java window function. - -```java - -public class GetInputTopicsWindowFunction implements WindowFunction { - @Override - public Void process(Collection> inputs, WindowContext context) throws Exception { - Collection inputTopics = context.getInputTopics(); - System.out.println(inputTopics); - - return null; - } - -} - -``` - -### Get output topic - -The `getOutputTopic` method gets the **name of a topic** to which the message is sent. - -This example demonstrates how to get the name of an output topic in a Java window function. - -```java - -public class GetOutputTopicWindowFunction implements WindowFunction { - @Override - public Void process(Collection> inputs, WindowContext context) throws Exception { - String outputTopic = context.getOutputTopic(); - System.out.println(outputTopic); - - return null; - } -} - -``` - -### Get tenant - -The `getTenant` method gets the tenant name associated with the window function. - -This example demonstrates how to get the tenant name in a Java window function. - -```java - -public class GetTenantWindowFunction implements WindowFunction { - @Override - public Void process(Collection> inputs, WindowContext context) throws Exception { - String tenant = context.getTenant(); - System.out.println(tenant); - - return null; - } - -} - -``` - -### Get namespace - -The `getNamespace` method gets the namespace associated with the window function. - -This example demonstrates how to get the namespace in a Java window function. 
- -```java - -public class GetNamespaceWindowFunction implements WindowFunction { - @Override - public Void process(Collection> inputs, WindowContext context) throws Exception { - String ns = context.getNamespace(); - System.out.println(ns); - - return null; - } - -} - -``` - -### Get function name - -The `getFunctionName` method gets the window function name. - -This example demonstrates how to get the function name in a Java window function. - -```java - -public class GetNameOfWindowFunction implements WindowFunction { - @Override - public Void process(Collection> inputs, WindowContext context) throws Exception { - String functionName = context.getFunctionName(); - System.out.println(functionName); - - return null; - } - -} - -``` - -### Get function ID - -The `getFunctionId` method gets the window function ID. - -This example demonstrates how to get the function ID in a Java window function. - -```java - -public class GetFunctionIDWindowFunction implements WindowFunction { - @Override - public Void process(Collection> inputs, WindowContext context) throws Exception { - String functionID = context.getFunctionId(); - System.out.println(functionID); - - return null; - } - -} - -``` - -### Get function version - -The `getFunctionVersion` method gets the window function version. - -This example demonstrates how to get the function version of a Java window function. - -```java - -public class GetVersionOfWindowFunction implements WindowFunction { - @Override - public Void process(Collection> inputs, WindowContext context) throws Exception { - String functionVersion = context.getFunctionVersion(); - System.out.println(functionVersion); - - return null; - } - -} - -``` - -### Get instance ID - -The `getInstanceId` method gets the instance ID of a window function. - -This example demonstrates how to get the instance ID in a Java window function. - -```java - -public class GetInstanceIDWindowFunction implements WindowFunction { - @Override - public Void process(Collection> inputs, WindowContext context) throws Exception { - int instanceId = context.getInstanceId(); - System.out.println(instanceId); - - return null; - } - -} - -``` - -### Get num instances - -The `getNumInstances` method gets the number of instances that invoke the window function. - -This example demonstrates how to get the number of instances in a Java window function. - -```java - -public class GetNumInstancesWindowFunction implements WindowFunction { - @Override - public Void process(Collection> inputs, WindowContext context) throws Exception { - int numInstances = context.getNumInstances(); - System.out.println(numInstances); - - return null; - } - -} - -``` - -### Get output schema type - -The `getOutputSchemaType` method gets the built-in type or custom class name of the output schema. - -This example demonstrates how to get the output schema type of a Java window function. - -```java - -public class GetOutputSchemaTypeWindowFunction implements WindowFunction { - - @Override - public Void process(Collection> inputs, WindowContext context) throws Exception { - String schemaType = context.getOutputSchemaType(); - System.out.println(schemaType); - - return null; - } -} - -``` - -## Logger - -Pulsar window functions using Java SDK has access to an [SLF4j](https://www.slf4j.org/) [`Logger`](https://www.slf4j.org/api/org/apache/log4j/Logger.html) object that can be used to produce logs at the chosen log level. 
-
-If you need your function to produce logs, specify a log topic when creating or running the function.
-
-```bash
-
-bin/pulsar-admin functions create \
-  --jar my-functions.jar \
-  --classname my.package.LoggingFunction \
-  --log-topic persistent://public/default/logging-function-logs \
-  # Other function configs
-
-```
-
-You can access all logs produced by `LoggingFunction` via the `persistent://public/default/logging-function-logs` topic.
-
-## Metrics
-
-Pulsar window functions can publish arbitrary metrics to the metrics interface, which can be queried.
-
-:::note
-
-If a Pulsar window function uses the language-native interface for Java, that function is not able to publish metrics and stats to Pulsar.
-
-:::
-
-You can record metrics using the context object on a per-key basis.
-
-This example records the event time of each message under the `MessageEventTime` metric key every time the function processes a message in a Java window function.
-
-```java
-
-import java.util.Collection;
-import org.apache.pulsar.functions.api.Record;
-import org.apache.pulsar.functions.api.WindowContext;
-import org.apache.pulsar.functions.api.WindowFunction;
-
-
-/**
- * Example function that wants to keep track of
- * the event time of each message sent.
- */
-public class UserMetricWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-
-        for (Record<String> record : inputs) {
-            if (record.getEventTime().isPresent()) {
-                context.recordMetric("MessageEventTime", record.getEventTime().get().doubleValue());
-            }
-        }
-
-        return null;
-    }
-}
-
-```
-
-## User config
-
-When you run or update Pulsar Functions that are created using the SDK, you can pass arbitrary key/value pairs to them with the `--user-config` flag. Key/value pairs **must** be specified as JSON.
-
-This example passes a user-configured key/value to a function.
-
-```bash
-
-bin/pulsar-admin functions create \
-  --name word-filter \
-  --user-config '{"forbidden-word":"rosebud"}' \
-  # Other function configs
-
-```
-
-### API
-
-You can use the following APIs to get user-defined information for window functions.
-
-#### getUserConfigMap
-
-The `getUserConfigMap` API gets a map of all user-defined key/value configurations for the window function.
-
-```java
-
-/**
- * Get a map of all user-defined key/value configs for the function.
- *
- * @return The full map of user-defined config values
- */
-Map<String, Object> getUserConfigMap();
-
-```
-
-#### getUserConfigValue
-
-The `getUserConfigValue` API gets a user-defined key/value.
-
-```java
-
-/**
- * Get any user-defined key/value.
- *
- * @param key The key
- * @return The Optional value specified by the user for that key.
- */
-Optional<Object> getUserConfigValue(String key);
-
-```
-
-#### getUserConfigValueOrDefault
-
-The `getUserConfigValueOrDefault` API gets a user-defined key/value or a default value if none is present.
-
-```java
-
-/**
- * Get any user-defined key/value or a default value if none is present.
- *
- * @param key
- * @param defaultValue
- * @return Either the user config value associated with a given key or a supplied default value
- */
-Object getUserConfigValueOrDefault(String key, Object defaultValue);
-
-```
-
-The Java SDK context object enables you to access key/value pairs provided to Pulsar window functions via the command line (as JSON).
-
-:::tip
-
-For all key/value pairs passed to Java window functions, both the `key` and the `value` are `String`. To set the value to be a different type, you need to deserialize it from the `String` type, as the sketch below shows.
-
-:::
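-
-For instance, here is a minimal sketch of that deserialization (the `max-length` user-config key is a made-up example, not part of the original page):
-
-```java
-
-// A sketch only: user-config values arrive as strings, so parse them
-// into the type you need. The "max-length" key is hypothetical.
-Object raw = context.getUserConfigValueOrDefault("max-length", "128");
-int maxLength = Integer.parseInt(raw.toString());
-
-```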
-
-This example passes a key/value pair in a Java window function.
-
-```bash
-
-bin/pulsar-admin functions create \
-  --user-config '{"word-of-the-day":"verdure"}' \
-  # Other function configs
-
-```
-
-This example accesses values in a Java window function.
-
-The `UserConfigWindowFunction` below returns the configured value every time the function is invoked (which means every time a message arrives). The user config of `word-of-the-day` is changed **only** when the function is updated with a new config value via
-multiple ways, such as the command line tool or REST API.
-
-```java
-
-import java.util.Collection;
-import java.util.Optional;
-
-import org.apache.pulsar.functions.api.Record;
-import org.apache.pulsar.functions.api.WindowContext;
-import org.apache.pulsar.functions.api.WindowFunction;
-
-public class UserConfigWindowFunction implements WindowFunction<String, String> {
-    @Override
-    public String process(Collection<Record<String>> input, WindowContext context) throws Exception {
-        Optional<Object> whatToWrite = context.getUserConfigValue("WhatToWrite");
-        if (whatToWrite.isPresent()) {
-            return (String) whatToWrite.get();
-        } else {
-            return "Not a nice way";
-        }
-    }
-
-}
-
-```
-
-If no value is provided, you can access the entire user config map or set a default value.
-
-```java
-
-// Get the whole config map
-Map<String, Object> allConfigs = context.getUserConfigMap();
-
-// Get value or resort to default
-String wotd = (String) context.getUserConfigValueOrDefault("word-of-the-day", "perspicacious");
-
-```
-
-## Routing
-
-You can use the `context.publish()` interface to publish as many results as you want.
-
-This example shows how the `PublishWindowFunction` class uses the built-in function in the context to publish messages to the `publishTopic` in a Java function.
-
-```java
-
-public class PublishWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> input, WindowContext context) throws Exception {
-        String publishTopic = (String) context.getUserConfigValueOrDefault("publish-topic", "publishtopic");
-        String output = String.format("%s!", input);
-        context.publish(publishTopic, output);
-
-        return null;
-    }
-
-}
-
-```
-
-## State storage
-
-Pulsar window functions use [Apache BookKeeper](https://bookkeeper.apache.org) as a state storage interface. An Apache Pulsar installation (including the standalone installation) includes the deployment of BookKeeper bookies.
-
-Apache Pulsar integrates with the Apache BookKeeper `table service` to store the `state` for functions. For example, the `WordCount` function can store its `counters` state into the BookKeeper table service via the Pulsar Functions state APIs.
-
-States are key-value pairs, where the key is a string and the value is arbitrary binary data; counters are stored as 64-bit big-endian binary values. Keys are scoped to an individual Pulsar Function and shared between instances of that function.
-
-Currently, Pulsar window functions expose a Java API to access, update, and manage states. These APIs are available in the context object when you use the Java SDK functions.
-
-| Java API | Description
-|---|---
-|`incrCounter`| Increases a built-in distributed counter referred to by the key.
-|`getCounter`| Gets the counter value for the key.
-|`putState`| Updates the state value for the key.
-
-You can use the following APIs to access, update, and manage states in Java window functions.
-
-#### incrCounter
-
-The `incrCounter` API increases a built-in distributed counter referred to by the key.
-
-Applications use the `incrCounter` API to change the counter of a given `key` by the given `amount`. If the `key` does not exist, a new key is created.
-
-```java
-
-    /**
-     * Increment the builtin distributed counter referred by key
-     * @param key The name of the key
-     * @param amount The amount to be incremented
-     */
-    void incrCounter(String key, long amount);
-
-```
-
-#### getCounter
-
-The `getCounter` API gets the counter value for the key.
-
-Applications use the `getCounter` API to retrieve the counter of a given `key` changed by the `incrCounter` API.
-
-```java
-
-    /**
-     * Retrieve the counter value for the key.
-     *
-     * @param key name of the key
-     * @return the amount of the counter value for this key
-     */
-    long getCounter(String key);
-
-```
-
-In addition to the `getCounter` API, Pulsar also exposes a general key/value API (`putState`) for functions to store arbitrary key/value state.
-
-#### putState
-
-The `putState` API updates the state value for the key.
-
-```java
-
-    /**
-     * Update the state value for the key.
-     *
-     * @param key name of the key
-     * @param value state value of the key
-     */
-    void putState(String key, ByteBuffer value);
-
-```
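-
-As a quick illustration, here is a minimal sketch of calling `putState` (the `last-seen` key and the stored value are made-up examples): a string is encoded into a `ByteBuffer` before being stored.
-
-```java
-
-// A sketch only: encode a string value into a ByteBuffer and store it
-// under a hypothetical "last-seen" key.
-String value = "hello pulsar";
-context.putState("last-seen", ByteBuffer.wrap(value.getBytes(StandardCharsets.UTF_8)));
-
-```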
-
-This example demonstrates how applications store states in Pulsar window functions.
-
-The logic of the `WordCountWindowFunction` is simple and straightforward.
-
-1. The function first splits the received string into tokens using the regex `\\.` (that is, it splits on the `.` character).
-
-2. For each `word`, the function increments the corresponding `counter` by 1 via `incrCounter(key, amount)`.
-
-```java
-
-import java.util.Arrays;
-import java.util.Collection;
-
-import org.apache.pulsar.functions.api.Record;
-import org.apache.pulsar.functions.api.WindowContext;
-import org.apache.pulsar.functions.api.WindowFunction;
-
-public class WordCountWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        for (Record<String> input : inputs) {
-            Arrays.asList(input.getValue().split("\\.")).forEach(word -> context.incrCounter(word, 1));
-        }
-        return null;
-
-    }
-}
-
-```
-
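-For completeness, a small sketch (not part of the original example) of reading one of these counters back with the `getCounter` API described above:
-
-```java
-
-// A sketch only: read back the counter maintained by WordCountWindowFunction
-// for a hypothetical token "hello".
-long count = context.getCounter("hello");
-context.getLogger().info("hello has been counted {} times", count);
-
-```
-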
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/about.md b/site2/website/versioned_docs/version-2.10.1-deprecated/about.md
deleted file mode 100644
index 478ac8dd053e89..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/about.md
+++ /dev/null
@@ -1,56 +0,0 @@
----
-slug: /
-id: about
-title: Welcome to the doc portal!
-sidebar_label: "About"
----
-
-import BlockLinks from "@site/src/components/BlockLinks";
-import BlockLink from "@site/src/components/BlockLink";
-import { docUrl } from "@site/src/utils/index";
-
-
-# Welcome to the doc portal!
-***
-
-This portal holds a variety of support documents to help you work with Pulsar. If you’re a beginner, there are tutorials and explainers to help you understand Pulsar and how it works.
-
-If you’re an experienced coder, review this page to learn the easiest way to access the specific content you’re looking for.
-
-## Get Started Now
-
-
-
-
-
-
-
-
-
-## Navigation
-***
-
-There are several ways to get around in the doc portal. The index navigation pane is a table of contents for the entire archive. The archive is divided into sections, like chapters in a book. Click the title of the topic to view it.
-
-In-context links provide an easy way to immediately reference related topics. Click the underlined term to view the topic.
-
-Links to related topics can be found at the bottom of each topic page. Click the link to view the topic.
-
-![Page Linking](/assets/page-linking.png)
-
-## Continuous Improvement
-***
-As you probably know, we are working on a new user experience for our documentation portal that will make learning about and building on top of Apache Pulsar a much better experience. Whether you need overview concepts, how-to procedures, curated guides or quick references, we’re building content to support it. This welcome page is just the first step. We will be providing updates every month.
-
-## Help Improve These Documents
-***
-
-You’ll notice an Edit button at the bottom and top of each page. Click it to open a landing page with instructions for requesting changes to posted documents. These are your resources. Participation is not only welcomed – it’s essential!
-
-## Join the Community!
-***
-
-The Pulsar community on GitHub is active, passionate, and knowledgeable. Join discussions, voice opinions, suggest features, and dive into the code itself. Find your Pulsar family here at [apache/pulsar](https://github.com/apache/pulsar).
-
-An equally passionate community can be found in the [Pulsar Slack channel](https://apache-pulsar.slack.com/). You’ll need an invitation to join, but many GitHub Pulsar community members are Slack members too. Join, hang out, learn, and make some new friends.
-
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/adaptors-kafka.md b/site2/website/versioned_docs/version-2.10.1-deprecated/adaptors-kafka.md
deleted file mode 100644
index e738f9d94b6a9c..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/adaptors-kafka.md
+++ /dev/null
@@ -1,276 +0,0 @@
----
-id: adaptors-kafka
-title: Pulsar adaptor for Apache Kafka
-sidebar_label: "Kafka client wrapper"
-original_id: adaptors-kafka
----
-
-
-Pulsar provides an easy option for applications that are currently written using the [Apache Kafka](http://kafka.apache.org) Java client API.
-
-## Using the Pulsar Kafka compatibility wrapper
-
-In an existing application, change the regular Kafka client dependency and replace it with the Pulsar Kafka wrapper. Remove the following dependency in `pom.xml`:
-
-```xml
-
-<dependency>
-  <groupId>org.apache.kafka</groupId>
-  <artifactId>kafka-clients</artifactId>
-  <version>0.10.2.1</version>
-</dependency>
-
-```
-
-Then include this dependency for the Pulsar Kafka wrapper:
-
-```xml
-
-<dependency>
-  <groupId>org.apache.pulsar</groupId>
-  <artifactId>pulsar-client-kafka</artifactId>
-  <version>@pulsar:version@</version>
-</dependency>
-
-```
-
-With the new dependency, the existing code works without any changes. You only need to adjust the configuration, making sure it points the producers and consumers to a Pulsar service rather than to Kafka and uses a particular Pulsar topic, as the sketch below shows.
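-
-For example, here is a minimal sketch of that configuration change (the service URL assumes a local standalone broker, and the topic name is a made-up example):
-
-```java
-
-// A sketch only: the same Kafka Properties object, re-pointed at Pulsar.
-Properties props = new Properties();
-// A Pulsar service URL replaces the list of Kafka bootstrap brokers.
-props.put("bootstrap.servers", "pulsar://localhost:6650");
-props.put("key.serializer", IntegerSerializer.class.getName());
-props.put("value.serializer", StringSerializer.class.getName());
-
-// Topics follow the Pulsar naming convention: persistent://tenant/namespace/topic
-String topic = "persistent://public/default/my-topic";
-
-```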
-
-## Using the Pulsar Kafka compatibility wrapper together with the existing Kafka client
-
-When migrating from Kafka to Pulsar, the application might use the original Kafka client
-and the Pulsar Kafka wrapper together during migration. In that case, you should consider using the
-unshaded Pulsar Kafka client wrapper.
-
-```xml
-
-<dependency>
-  <groupId>org.apache.pulsar</groupId>
-  <artifactId>pulsar-client-kafka-original</artifactId>
-  <version>@pulsar:version@</version>
-</dependency>
-
-```
-
-When using this dependency, construct producers using `org.apache.kafka.clients.producer.PulsarKafkaProducer`
-instead of `org.apache.kafka.clients.producer.KafkaProducer`, and consumers using `org.apache.kafka.clients.consumer.PulsarKafkaConsumer`
-instead of `org.apache.kafka.clients.consumer.KafkaConsumer`.
-
-## Producer example
-
-```java
-
-// Topic needs to be a regular Pulsar topic
-String topic = "persistent://public/default/my-topic";
-
-Properties props = new Properties();
-// Point to a Pulsar service
-props.put("bootstrap.servers", "pulsar://localhost:6650");
-
-props.put("key.serializer", IntegerSerializer.class.getName());
-props.put("value.serializer", StringSerializer.class.getName());
-
-Producer<Integer, String> producer = new KafkaProducer<>(props);
-
-for (int i = 0; i < 10; i++) {
-    producer.send(new ProducerRecord<>(topic, i, "hello-" + i));
-    log.info("Message {} sent successfully", i);
-}
-
-producer.close();
-
-```
-
-## Consumer example
-
-```java
-
-String topic = "persistent://public/default/my-topic";
-
-Properties props = new Properties();
-// Point to a Pulsar service
-props.put("bootstrap.servers", "pulsar://localhost:6650");
-props.put("group.id", "my-subscription-name");
-props.put("enable.auto.commit", "false");
-props.put("key.deserializer", IntegerDeserializer.class.getName());
-props.put("value.deserializer", StringDeserializer.class.getName());
-
-Consumer<Integer, String> consumer = new KafkaConsumer<>(props);
-consumer.subscribe(Arrays.asList(topic));
-
-while (true) {
-    ConsumerRecords<Integer, String> records = consumer.poll(100);
-    records.forEach(record -> {
-        log.info("Received record: {}", record);
-    });
-
-    // Commit last offset
-    consumer.commitSync();
-}
-
-```
-
-## Complete Examples
-
-You can find the complete producer and consumer examples [here](https://github.com/apache/pulsar-adapters/tree/master/pulsar-client-kafka-compat/pulsar-client-kafka-tests/src/test/java/org/apache/pulsar/client/kafka/compat/examples).
-
-## Compatibility matrix
-
-Currently, the Pulsar Kafka wrapper supports most of the operations offered by the Kafka API.
-
-### Producer
-
-APIs:
-
-| Producer Method | Supported | Notes |
-|:---|:---|:---|
-| `Future<RecordMetadata> send(ProducerRecord<K, V> record)` | Yes | |
-| `Future<RecordMetadata> send(ProducerRecord<K, V> record, Callback callback)` | Yes | |
-| `void flush()` | Yes | |
-| `List<PartitionInfo> partitionsFor(String topic)` | No | |
-| `Map<MetricName, ? extends Metric> metrics()` | No | |
-| `void close()` | Yes | |
-| `void close(long timeout, TimeUnit unit)` | Yes | |
-
-Properties:
-
-| Config property | Supported | Notes |
-|:---|:---|:---|
-| `acks` | Ignored | Durability and quorum writes are configured at the namespace level |
-| `auto.offset.reset` | Yes | It uses a default value of `earliest` if you do not give a specific setting. |
-| `batch.size` | Ignored | |
-| `bootstrap.servers` | Yes | |
-| `buffer.memory` | Ignored | |
-| `client.id` | Ignored | |
-| `compression.type` | Yes | Allows `gzip` and `lz4`. No `snappy`. |
| `connections.max.idle.ms` | Yes | Only supports up to 2,147,483,647,000 (Integer.MAX_VALUE * 1000) ms of idle time |
-| `interceptor.classes` | Yes | |
-| `key.serializer` | Yes | |
-| `linger.ms` | Yes | Controls the group commit time when batching messages |
-| `max.block.ms` | Ignored | |
-| `max.in.flight.requests.per.connection` | Ignored | In Pulsar ordering is maintained even with multiple requests in flight |
-| `max.request.size` | Ignored | |
-| `metric.reporters` | Ignored | |
-| `metrics.num.samples` | Ignored | |
-| `metrics.sample.window.ms` | Ignored | |
-| `partitioner.class` | Yes | |
-| `receive.buffer.bytes` | Ignored | |
-| `reconnect.backoff.ms` | Ignored | |
-| `request.timeout.ms` | Ignored | |
-| `retries` | Ignored | Pulsar client retries with exponential backoff until the send timeout expires. |
-| `send.buffer.bytes` | Ignored | |
-| `timeout.ms` | Yes | |
-| `value.serializer` | Yes | |
-
-
-### Consumer
-
-The following table lists consumer APIs.
-
-| Consumer Method | Supported | Notes |
-|:---|:---|:---|
-| `Set<TopicPartition> assignment()` | No | |
-| `Set<String> subscription()` | Yes | |
-| `void subscribe(Collection<String> topics)` | Yes | |
-| `void subscribe(Collection<String> topics, ConsumerRebalanceListener callback)` | No | |
-| `void assign(Collection<TopicPartition> partitions)` | No | |
-| `void subscribe(Pattern pattern, ConsumerRebalanceListener callback)` | No | |
-| `void unsubscribe()` | Yes | |
-| `ConsumerRecords<K, V> poll(long timeoutMillis)` | Yes | |
-| `void commitSync()` | Yes | |
-| `void commitSync(Map<TopicPartition, OffsetAndMetadata> offsets)` | Yes | |
-| `void commitAsync()` | Yes | |
-| `void commitAsync(OffsetCommitCallback callback)` | Yes | |
-| `void commitAsync(Map<TopicPartition, OffsetAndMetadata> offsets, OffsetCommitCallback callback)` | Yes | |
-| `void seek(TopicPartition partition, long offset)` | Yes | |
-| `void seekToBeginning(Collection<TopicPartition> partitions)` | Yes | |
-| `void seekToEnd(Collection<TopicPartition> partitions)` | Yes | |
-| `long position(TopicPartition partition)` | Yes | |
-| `OffsetAndMetadata committed(TopicPartition partition)` | Yes | |
-| `Map<MetricName, ? extends Metric> metrics()` | No | |
-| `List<PartitionInfo> partitionsFor(String topic)` | No | |
-| `Map<String, List<PartitionInfo>> listTopics()` | No | |
-| `Set<TopicPartition> paused()` | No | |
-| `void pause(Collection<TopicPartition> partitions)` | No | |
-| `void resume(Collection<TopicPartition> partitions)` | No | |
-| `Map<TopicPartition, OffsetAndTimestamp> offsetsForTimes(Map<TopicPartition, Long> timestampsToSearch)` | No | |
-| `Map<TopicPartition, Long> beginningOffsets(Collection<TopicPartition> partitions)` | No | |
-| `Map<TopicPartition, Long> endOffsets(Collection<TopicPartition> partitions)` | No | |
-| `void close()` | Yes | |
-| `void close(long timeout, TimeUnit unit)` | Yes | |
-| `void wakeup()` | No | |
-
-Properties:
-
-| Config property | Supported | Notes |
-|:---|:---|:---|
-| `group.id` | Yes | Maps to a Pulsar subscription name |
-| `max.poll.records` | Yes | |
-| `max.poll.interval.ms` | Ignored | Messages are "pushed" from broker |
-| `session.timeout.ms` | Ignored | |
-| `heartbeat.interval.ms` | Ignored | |
-| `bootstrap.servers` | Yes | Needs to point to a single Pulsar service URL |
-| `enable.auto.commit` | Yes | |
-| `auto.commit.interval.ms` | Ignored | With auto-commit, acks are sent immediately to broker |
-| `partition.assignment.strategy` | Ignored | |
-| `auto.offset.reset` | Yes | Only supports `earliest` and `latest`. |
| -| `fetch.min.bytes` | Ignored | | -| `fetch.max.bytes` | Ignored | | -| `fetch.max.wait.ms` | Ignored | | -| `interceptor.classes` | Yes | | -| `metadata.max.age.ms` | Ignored | | -| `max.partition.fetch.bytes` | Ignored | | -| `send.buffer.bytes` | Ignored | | -| `receive.buffer.bytes` | Ignored | | -| `client.id` | Ignored | | - - -## Customize Pulsar configurations - -You can configure Pulsar authentication provider directly from the Kafka properties. - -### Pulsar client properties - -| Config property | Default | Notes | -|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------| -| [`pulsar.authentication.class`](/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setAuthentication-org.apache.pulsar.client.api.Authentication-) | | Configure to auth provider. For example, `org.apache.pulsar.client.impl.auth.AuthenticationTls`.| -| [`pulsar.authentication.params.map`](/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setAuthentication-java.lang.String-java.util.Map-) | | Map which represents parameters for the Authentication-Plugin. | -| [`pulsar.authentication.params.string`](/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setAuthentication-java.lang.String-java.lang.String-) | | String which represents parameters for the Authentication-Plugin, for example, `key1:val1,key2:val2`. | -| [`pulsar.use.tls`](/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setUseTls-boolean-) | `false` | Enable TLS transport encryption. | -| [`pulsar.tls.trust.certs.file.path`](/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setTlsTrustCertsFilePath-java.lang.String-) | | Path for the TLS trust certificate store. | -| [`pulsar.tls.allow.insecure.connection`](/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setTlsAllowInsecureConnection-boolean-) | `false` | Accept self-signed certificates from brokers. | -| [`pulsar.operation.timeout.ms`](/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setOperationTimeout-int-java.util.concurrent.TimeUnit-) | `30000` | General operations timeout. | -| [`pulsar.stats.interval.seconds`](/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setStatsInterval-long-java.util.concurrent.TimeUnit-) | `60` | Pulsar client lib stats printing interval. | -| [`pulsar.num.io.threads`](/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setIoThreads-int-) | `1` | The number of Netty IO threads to use. | -| [`pulsar.connections.per.broker`](/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setConnectionsPerBroker-int-) | `1` | The maximum number of connection to each broker. | -| [`pulsar.use.tcp.nodelay`](/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setUseTcpNoDelay-boolean-) | `true` | TCP no-delay. | -| [`pulsar.concurrent.lookup.requests`](/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setConcurrentLookupRequest-int-) | `50000` | The maximum number of concurrent topic lookups. | -| [`pulsar.max.number.rejected.request.per.connection`](/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setMaxNumberOfRejectedRequestPerConnection-int-) | `50` | The threshold of errors to forcefully close a connection. 
| -| [`pulsar.keepalive.interval.ms`](/api/client/org/apache/pulsar/client/api/ClientBuilder.html#keepAliveInterval-int-java.util.concurrent.TimeUnit-)| `30000` | Keep alive interval for each client-broker-connection. | - - -### Pulsar producer properties - -| Config property | Default | Notes | -|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------| -| [`pulsar.producer.name`](/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setProducerName-java.lang.String-) | | Specify the producer name. | -| [`pulsar.producer.initial.sequence.id`](/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setInitialSequenceId-long-) | | Specify baseline for sequence ID of this producer. | -| [`pulsar.producer.max.pending.messages`](/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setMaxPendingMessages-int-) | `1000` | Set the maximum size of the message queue pending to receive an acknowledgment from the broker. | -| [`pulsar.producer.max.pending.messages.across.partitions`](/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setMaxPendingMessagesAcrossPartitions-int-) | `50000` | Set the maximum number of pending messages across all the partitions. | -| [`pulsar.producer.batching.enabled`](/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setBatchingEnabled-boolean-) | `true` | Control whether automatic batching of messages is enabled for the producer. | -| [`pulsar.producer.batching.max.messages`](/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setBatchingMaxMessages-int-) | `1000` | The maximum number of messages in a batch. | -| [`pulsar.block.if.producer.queue.full`](/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setBlockIfQueueFull-boolean-) | | Specify the block producer if queue is full. | -| [`pulsar.crypto.reader.factory.class.name`](/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setCryptoKeyReader-org.apache.pulsar.client.api.CryptoKeyReader-) | | Specify the CryptoReader-Factory(`CryptoKeyReaderFactory`) classname which allows producer to create CryptoKeyReader. | - - -### Pulsar consumer Properties - -| Config property | Default | Notes | -|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------| -| [`pulsar.consumer.name`](/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setConsumerName-java.lang.String-) | | Specify the consumer name. | -| [`pulsar.consumer.receiver.queue.size`](/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setReceiverQueueSize-int-) | 1000 | Set the size of the consumer receiver queue. | -| [`pulsar.consumer.acknowledgments.group.time.millis`](/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#acknowledgmentGroupTime-long-java.util.concurrent.TimeUnit-) | 100 | Set the maximum amount of group time for consumers to send the acknowledgments to the broker. | -| [`pulsar.consumer.total.receiver.queue.size.across.partitions`](/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setMaxTotalReceiverQueueSizeAcrossPartitions-int-) | 50000 | Set the maximum size of the total receiver queue across partitions. 
| [`pulsar.consumer.subscription.topics.mode`](/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#subscriptionTopicsMode-Mode-) | PersistentOnly | Set the subscription topic mode for consumers. |
-| [`pulsar.crypto.reader.factory.class.name`](/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setCryptoKeyReader-org.apache.pulsar.client.api.CryptoKeyReader-) | | Specify the CryptoReader-Factory (`CryptoKeyReaderFactory`) class name which allows the consumer to create a CryptoKeyReader. |
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/adaptors-spark.md b/site2/website/versioned_docs/version-2.10.1-deprecated/adaptors-spark.md
deleted file mode 100644
index e14f13b5d4b079..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/adaptors-spark.md
+++ /dev/null
@@ -1,91 +0,0 @@
----
-id: adaptors-spark
-title: Pulsar adaptor for Apache Spark
-sidebar_label: "Apache Spark"
-original_id: adaptors-spark
----
-
-## Spark Streaming receiver
-The Spark Streaming receiver for Pulsar is a custom receiver that enables Apache [Spark Streaming](https://spark.apache.org/streaming/) to receive raw data from Pulsar.
-
-An application can receive data in [Resilient Distributed Dataset](https://spark.apache.org/docs/latest/programming-guide.html#resilient-distributed-datasets-rdds) (RDD) format via the Spark Streaming receiver and can process it in a variety of ways.
-
-### Prerequisites
-
-To use the receiver, include a dependency for the `pulsar-spark` library in your Java configuration.
-
-#### Maven
-
-If you're using Maven, add this to your `pom.xml`:
-
-```xml
-
-<properties>
-  <pulsar.version>@pulsar:version@</pulsar.version>
-</properties>
-
-<dependency>
-  <groupId>org.apache.pulsar</groupId>
-  <artifactId>pulsar-spark</artifactId>
-  <version>${pulsar.version}</version>
-</dependency>
-
-```
-
-#### Gradle
-
-If you're using Gradle, add this to your `build.gradle` file:
-
-```groovy
-
-def pulsarVersion = "@pulsar:version@"
-
-dependencies {
-    compile group: 'org.apache.pulsar', name: 'pulsar-spark', version: pulsarVersion
-}
-
-```
-
-### Usage
-
-Pass an instance of `SparkStreamingPulsarReceiver` to the `receiverStream` method in `JavaStreamingContext`:
-
-```java
-
-String serviceUrl = "pulsar://localhost:6650/";
-String topic = "persistent://public/default/test_src";
-String subs = "test_sub";
-
-SparkConf sparkConf = new SparkConf().setMaster("local[*]").setAppName("Pulsar Spark Example");
-
-JavaStreamingContext jsc = new JavaStreamingContext(sparkConf, Durations.seconds(60));
-
-ConsumerConfigurationData<byte[]> pulsarConf = new ConsumerConfigurationData();
-
-Set<String> set = new HashSet<>();
-set.add(topic);
-pulsarConf.setTopicNames(set);
-pulsarConf.setSubscriptionName(subs);
-
-SparkStreamingPulsarReceiver pulsarReceiver = new SparkStreamingPulsarReceiver(
-    serviceUrl,
-    pulsarConf,
-    new AuthenticationDisabled());
-
-JavaReceiverInputDStream<byte[]> lineDStream = jsc.receiverStream(pulsarReceiver);
-
-```
-
-For a complete example, click [here](https://github.com/apache/pulsar-adapters/blob/master/examples/spark/src/main/java/org/apache/spark/streaming/receiver/example/SparkStreamingPulsarReceiverExample.java). In this example, the number of received messages that contain the string "Pulsar" is counted.
-
-Note that if needed, other Pulsar authentication classes can be used. For example, in order to use a token during authentication, the following parameters for the `SparkStreamingPulsarReceiver` constructor can be set:
-
-```java
-
-SparkStreamingPulsarReceiver pulsarReceiver = new SparkStreamingPulsarReceiver(
-    serviceUrl,
-    pulsarConf,
-    new AuthenticationToken("token:<valid-token>"));
-
-```
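-
-Once the stream is wired up, the processing itself is ordinary Spark Streaming code. Here is a minimal sketch (not from the original page) that, in the spirit of the complete example linked above, counts the received messages containing the string "Pulsar"; it assumes the `lineDStream` and `jsc` variables from the usage snippet:
-
-```java
-
-// A sketch only: decode each byte[] payload, keep the lines that mention
-// "Pulsar", and print a count for every micro-batch.
-JavaDStream<String> lines = lineDStream.map(bytes -> new String(bytes, StandardCharsets.UTF_8));
-lines.filter(line -> line.contains("Pulsar")).count().print();
-
-jsc.start();
-jsc.awaitTermination();
-
-```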
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/adaptors-storm.md b/site2/website/versioned_docs/version-2.10.1-deprecated/adaptors-storm.md
deleted file mode 100644
index 76d507164777db..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/adaptors-storm.md
+++ /dev/null
@@ -1,96 +0,0 @@
----
-id: adaptors-storm
-title: Pulsar adaptor for Apache Storm
-sidebar_label: "Apache Storm"
-original_id: adaptors-storm
----
-
-Pulsar Storm is an adaptor for integrating with [Apache Storm](http://storm.apache.org/) topologies. It provides core Storm implementations for sending and receiving data.
-
-An application can inject data into a Storm topology via a generic Pulsar spout, as well as consume data from a Storm topology via a generic Pulsar bolt.
-
-## Using the Pulsar Storm Adaptor
-
-Include the dependency for the Pulsar Storm adaptor:
-
-```xml
-
-<dependency>
-  <groupId>org.apache.pulsar</groupId>
-  <artifactId>pulsar-storm</artifactId>
-  <version>${pulsar.version}</version>
-</dependency>
-
-```
-
-## Pulsar Spout
-
-The Pulsar spout allows data published on a topic to be consumed by a Storm topology. It emits a Storm tuple based on the message received and the `MessageToValuesMapper` provided by the client.
-
-The tuples that fail to be processed by the downstream bolts are re-injected by the spout with an exponential backoff, within a configurable timeout (the default is 60 seconds) or a configurable number of retries, whichever comes first, after which they are acknowledged by the consumer. Here's an example construction of a spout:
-
-```java
-
-MessageToValuesMapper messageToValuesMapper = new MessageToValuesMapper() {
-
-    @Override
-    public Values toValues(Message<byte[]> msg) {
-        return new Values(new String(msg.getData()));
-    }
-
-    @Override
-    public void declareOutputFields(OutputFieldsDeclarer declarer) {
-        // declare the output fields
-        declarer.declare(new Fields("string"));
-    }
-};
-
-// Configure a Pulsar Spout
-PulsarSpoutConfiguration spoutConf = new PulsarSpoutConfiguration();
-spoutConf.setServiceUrl("pulsar://broker.messaging.usw.example.com:6650");
-spoutConf.setTopic("persistent://my-property/usw/my-ns/my-topic1");
-spoutConf.setSubscriptionName("my-subscriber-name1");
-spoutConf.setMessageToValuesMapper(messageToValuesMapper);
-
-// Create a Pulsar Spout
-PulsarSpout spout = new PulsarSpout(spoutConf);
-
-```
-
-For a complete example, click [here](https://github.com/apache/pulsar-adapters/blob/master/pulsar-storm/src/test/java/org/apache/pulsar/storm/PulsarSpoutTest.java).
-
-## Pulsar Bolt
-
-The Pulsar bolt allows data in a Storm topology to be published on a topic. It publishes messages based on the Storm tuple received and the `TupleToMessageMapper` provided by the client.
-
-A partitioned topic can also be used to publish messages on different topics. In the implementation of the `TupleToMessageMapper`, a "key" needs to be provided in the message, which sends the messages with the same key to the same topic.
Here's an example bolt: - -```java - -TupleToMessageMapper tupleToMessageMapper = new TupleToMessageMapper() { - - @Override - public TypedMessageBuilder toMessage(TypedMessageBuilder msgBuilder, Tuple tuple) { - String receivedMessage = tuple.getString(0); - // message processing - String processedMsg = receivedMessage + "-processed"; - return msgBuilder.value(processedMsg.getBytes()); - } - - @Override - public void declareOutputFields(OutputFieldsDeclarer declarer) { - // declare the output fields - } -}; - -// Configure a Pulsar Bolt -PulsarBoltConfiguration boltConf = new PulsarBoltConfiguration(); -boltConf.setServiceUrl("pulsar://broker.messaging.usw.example.com:6650"); -boltConf.setTopic("persistent://my-property/usw/my-ns/my-topic2"); -boltConf.setTupleToMessageMapper(tupleToMessageMapper); - -// Create a Pulsar Bolt -PulsarBolt bolt = new PulsarBolt(boltConf); - -``` - diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-brokers.md b/site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-brokers.md deleted file mode 100644 index 2674c7da875f9b..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-brokers.md +++ /dev/null @@ -1,286 +0,0 @@ ---- -id: admin-api-brokers -title: Managing Brokers -sidebar_label: "Brokers" -original_id: admin-api-brokers ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -> **Important** -> -> This page only shows **some frequently used operations**. -> -> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more information, see [Pulsar admin doc](/tools/pulsar-admin/). -> -> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. -> -> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](/api/admin/). - -Pulsar brokers consist of two components: - -1. An HTTP server exposing a {@inject: rest:REST:/} interface administration and [topic](reference-terminology.md#topic) lookup. -2. A dispatcher that handles all Pulsar [message](reference-terminology.md#message) transfers. - -[Brokers](reference-terminology.md#broker) can be managed via: - -* The `brokers` command of the [`pulsar-admin`](/tools/pulsar-admin/) tool -* The `/admin/v2/brokers` endpoint of the admin {@inject: rest:REST:/} API -* The `brokers` method of the `PulsarAdmin` object in the [Java API](client-libraries-java.md) - -In addition to being configurable when you start them up, brokers can also be [dynamically configured](#dynamic-broker-configuration). - -> See the [Configuration](reference-configuration.md#broker) page for a full listing of broker-specific configuration parameters. - -## Brokers resources - -### List active brokers - -Fetch all available active brokers that are serving traffic with cluster name. - -````mdx-code-block - - - -```shell - -$ pulsar-admin brokers list use - -``` - -``` - -broker1.use.org.com:8080 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/brokers/:cluster|operation/getActiveBrokers?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().getActiveBrokers(clusterName) - -``` - - - - -```` - -### Get the information of the leader broker - -Fetch the information of the leader broker, for example, the service url. 
- -````mdx-code-block - - - -```shell - -$ pulsar-admin brokers leader-broker - -``` - -``` - -BrokerInfo(serviceUrl=broker1.use.org.com:8080) - -``` - - - - -{@inject: endpoint|GET|/admin/v2/brokers/leaderBroker|operation/getLeaderBroker?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().getLeaderBroker() - -``` - -For the detail of the code above, see [here](https://github.com/apache/pulsar/blob/master/pulsar-client-admin/src/main/java/org/apache/pulsar/client/admin/internal/BrokersImpl.java#L80) - - - - -```` - -#### list of namespaces owned by a given broker - -It finds all namespaces which are owned and served by a given broker. - -````mdx-code-block - - - -```shell - -$ pulsar-admin brokers namespaces use \ - --url broker1.use.org.com:8080 - -``` - -```json - -{ - "my-property/use/my-ns/0x00000000_0xffffffff": { - "broker_assignment": "shared", - "is_controlled": false, - "is_active": true - } -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/brokers/:cluster/:broker/ownedNamespaces|operation/getOwnedNamespaes?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().getOwnedNamespaces(cluster,brokerUrl); - -``` - - - - -```` - -### Dynamic broker configuration - -One way to configure a Pulsar [broker](reference-terminology.md#broker) is to supply a [configuration](reference-configuration.md#broker) when the broker is [started up](reference-cli-tools.md#pulsar-broker). - -But since all broker configuration in Pulsar is stored in ZooKeeper, configuration values can also be dynamically updated *while the broker is running*. When you update broker configuration dynamically, ZooKeeper will notify the broker of the change and the broker will then override any existing configuration values. - -* The [`brokers`](reference-pulsar-admin.md#brokers) command for the [`pulsar-admin`](reference-pulsar-admin.md) tool has a variety of subcommands that enable you to manipulate a broker's configuration dynamically, enabling you to [update config values](#update-dynamic-configuration) and more. -* In the Pulsar admin {@inject: rest:REST:/} API, dynamic configuration is managed through the `/admin/v2/brokers/configuration` endpoint. - -### Update dynamic configuration - -````mdx-code-block - - - -The [`update-dynamic-config`](reference-pulsar-admin.md#brokers-update-dynamic-config) subcommand will update existing configuration. It takes two arguments: the name of the parameter and the new value using the `config` and `value` flag respectively. Here's an example for the [`brokerShutdownTimeoutMs`](reference-configuration.md#broker-brokerShutdownTimeoutMs) parameter: - -```shell - -$ pulsar-admin brokers update-dynamic-config --config brokerShutdownTimeoutMs --value 100 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/brokers/configuration/:configName/:configValue|operation/updateDynamicConfiguration?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().updateDynamicConfiguration(configName, configValue); - -``` - - - - -```` - -### List updated values - -Fetch a list of all potentially updatable configuration parameters. -````mdx-code-block - - - -```shell - -$ pulsar-admin brokers list-dynamic-config -brokerShutdownTimeoutMs - -``` - - - - -{@inject: endpoint|GET|/admin/v2/brokers/configuration|operation/getDynamicConfigurationName?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().getDynamicConfigurationNames(); - -``` - - - - -```` - -### List all - -Fetch a list of all parameters that have been dynamically updated. 
- -````mdx-code-block - - - -```shell - -$ pulsar-admin brokers get-all-dynamic-config -brokerShutdownTimeoutMs:100 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/brokers/configuration/values|operation/getAllDynamicConfigurations?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().getAllDynamicConfigurations(); - -``` - - - - -```` diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-clusters.md b/site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-clusters.md deleted file mode 100644 index 53cd43187e0697..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-clusters.md +++ /dev/null @@ -1,318 +0,0 @@ ---- -id: admin-api-clusters -title: Managing Clusters -sidebar_label: "Clusters" -original_id: admin-api-clusters ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -> **Important** -> -> This page only shows **some frequently used operations**. -> -> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](/tools/pulsar-admin/) -> -> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. -> -> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](/api/admin/). - -Pulsar clusters consist of one or more Pulsar [brokers](reference-terminology.md#broker), one or more [BookKeeper](reference-terminology.md#bookkeeper) -servers (aka [bookies](reference-terminology.md#bookie)), and a [ZooKeeper](https://zookeeper.apache.org) cluster that provides configuration and coordination management. - -Clusters can be managed via: - -* The `clusters` command of the [`pulsar-admin`](/tools/pulsar-admin/)) tool -* The `/admin/v2/clusters` endpoint of the admin {@inject: rest:REST:/} API -* The `clusters` method of the `PulsarAdmin` object in the [Java API](client-libraries-java.md) - -## Clusters resources - -### Provision - -New clusters can be provisioned using the admin interface. - -> Please note that this operation requires superuser privileges. - -````mdx-code-block - - - -You can provision a new cluster using the [`create`](reference-pulsar-admin.md#clusters-create) subcommand. Here's an example: - -```shell - -$ pulsar-admin clusters create cluster-1 \ - --url http://my-cluster.org.com:8080 \ - --broker-url pulsar://my-cluster.org.com:6650 - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/clusters/:cluster|operation/createCluster?version=@pulsar:version_number@} - - - - -```java - -ClusterData clusterData = new ClusterData( - serviceUrl, - serviceUrlTls, - brokerServiceUrl, - brokerServiceUrlTls -); -admin.clusters().createCluster(clusterName, clusterData); - -``` - - - - -```` - -### Initialize cluster metadata - -When provision a new cluster, you need to initialize that cluster's [metadata](concepts-architecture-overview.md#metadata-store). 
When initializing cluster metadata, you need to specify all of the following: - -* The name of the cluster -* The local metadata store connection string for the cluster -* The configuration store connection string for the entire instance -* The web service URL for the cluster -* A broker service URL enabling interaction with the [brokers](reference-terminology.md#broker) in the cluster - -You must initialize cluster metadata *before* starting up any [brokers](admin-api-brokers.md) that will belong to the cluster. - -> **No cluster metadata initialization through the REST API or the Java admin API** -> -> Unlike most other admin functions in Pulsar, cluster metadata initialization cannot be performed via the admin REST API -> or the admin Java client, as metadata initialization involves communicating with ZooKeeper directly. -> Instead, you can use the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool, in particular -> the [`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata) command. - -Here's an example cluster metadata initialization command: - -```shell - -bin/pulsar initialize-cluster-metadata \ - --cluster us-west \ - --metadata-store zk:zk1.us-west.example.com:2181,zk2.us-west.example.com:2181/my-chroot-path \ - --configuration-metadata-store zk:zk1.us-west.example.com:2181,zk2.us-west.example.com:2181/my-chroot-path \ - --web-service-url http://pulsar.us-west.example.com:8080/ \ - --web-service-url-tls https://pulsar.us-west.example.com:8443/ \ - --broker-service-url pulsar://pulsar.us-west.example.com:6650/ \ - --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651/ - -``` - -You'll need to use `--*-tls` flags only if you're using [TLS authentication](security-tls-authentication.md) in your instance. - -### Get configuration - -You can fetch the [configuration](reference-configuration.md) for an existing cluster at any time. - -````mdx-code-block - - - -Use the [`get`](reference-pulsar-admin.md#clusters-get) subcommand and specify the name of the cluster. Here's an example: - -```shell - -$ pulsar-admin clusters get cluster-1 -{ - "serviceUrl": "http://my-cluster.org.com:8080/", - "serviceUrlTls": null, - "brokerServiceUrl": "pulsar://my-cluster.org.com:6650/", - "brokerServiceUrlTls": null - "peerClusterNames": null -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/clusters/:cluster|operation/getCluster?version=@pulsar:version_number@} - - - - -```java - -admin.clusters().getCluster(clusterName); - -``` - - - - -```` - -### Update - -You can update the configuration for an existing cluster at any time. - -````mdx-code-block - - - -Use the [`update`](reference-pulsar-admin.md#clusters-update) subcommand and specify new configuration values using flags. - -```shell - -$ pulsar-admin clusters update cluster-1 \ - --url http://my-cluster.org.com:4081 \ - --broker-url pulsar://my-cluster.org.com:3350 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/clusters/:cluster|operation/updateCluster?version=@pulsar:version_number@} - - - - -```java - -ClusterData clusterData = new ClusterData( - serviceUrl, - serviceUrlTls, - brokerServiceUrl, - brokerServiceUrlTls -); -admin.clusters().updateCluster(clusterName, clusterData); - -``` - - - - -```` - -### Delete - -Clusters can be deleted from a Pulsar [instance](reference-terminology.md#instance). - -````mdx-code-block - - - -Use the [`delete`](reference-pulsar-admin.md#clusters-delete) subcommand and specify the name of the cluster. 
- -``` - -$ pulsar-admin clusters delete cluster-1 - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/clusters/:cluster|operation/deleteCluster?version=@pulsar:version_number@} - - - - -```java - -admin.clusters().deleteCluster(clusterName); - -``` - - - - -```` - -### List - -You can fetch a list of all clusters in a Pulsar [instance](reference-terminology.md#instance). - -````mdx-code-block - - - -Use the [`list`](reference-pulsar-admin.md#clusters-list) subcommand. - -```shell - -$ pulsar-admin clusters list -cluster-1 -cluster-2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/clusters|operation/getClusters?version=@pulsar:version_number@} - - - - -```java - -admin.clusters().getClusters(); - -``` - - - - -```` - -### Update peer-cluster data - -Peer clusters can be configured for a given cluster in a Pulsar [instance](reference-terminology.md#instance). - -````mdx-code-block - - - -Use the [`update-peer-clusters`](reference-pulsar-admin.md#clusters-update-peer-clusters) subcommand and specify the list of peer-cluster names. - -``` - -$ pulsar-admin update-peer-clusters cluster-1 --peer-clusters cluster-2 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/clusters/:cluster/peers|operation/setPeerClusterNames?version=@pulsar:version_number@} - - - - -```java - -admin.clusters().updatePeerClusterNames(clusterName, peerClusterList); - -``` - - - - -```` \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-functions.md b/site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-functions.md deleted file mode 100644 index 8274a21d68008a..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-functions.md +++ /dev/null @@ -1,830 +0,0 @@ ---- -id: admin-api-functions -title: Manage Functions -sidebar_label: "Functions" -original_id: admin-api-functions ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -> **Important** -> -> This page only shows **some frequently used operations**. -> -> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](/tools/pulsar-admin/) -> -> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. -> -> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](/api/admin/). - -**Pulsar Functions** are lightweight compute processes that - -* consume messages from one or more Pulsar topics -* apply a user-supplied processing logic to each message -* publish the results of the computation to another topic - -Functions can be managed via the following methods. - -Method | Description ----|--- -**Admin CLI** | The `functions` command of the [`pulsar-admin`](/tools/pulsar-admin/) tool. -**REST API** |The `/admin/v3/functions` endpoint of the admin {@inject: rest:REST:/} API. -**Java Admin API**| The `functions` method of the `PulsarAdmin` object in the [Java API](client-libraries-java.md). - -## Function resources - -You can perform the following operations on functions. - -### Create a function - -You can create a Pulsar function in cluster mode (deploy it on a Pulsar cluster) using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`create`](reference-pulsar-admin.md#functions-create) subcommand. 
- -**Example** - -```shell - -$ pulsar-admin functions create \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --inputs test-input-topic \ - --output persistent://public/default/test-output-topic \ - --classname org.apache.pulsar.functions.api.examples.ExclamationFunction \ - --jar /examples/api-examples.jar - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName|operation/registerFunction?version=@pulsar:version_number@} - - - - -```java - -FunctionConfig functionConfig = new FunctionConfig(); -functionConfig.setTenant(tenant); -functionConfig.setNamespace(namespace); -functionConfig.setName(functionName); -functionConfig.setRuntime(FunctionConfig.Runtime.JAVA); -functionConfig.setParallelism(1); -functionConfig.setClassName("org.apache.pulsar.functions.api.examples.ExclamationFunction"); -functionConfig.setProcessingGuarantees(FunctionConfig.ProcessingGuarantees.ATLEAST_ONCE); -functionConfig.setTopicsPattern(sourceTopicPattern); -functionConfig.setSubName(subscriptionName); -functionConfig.setAutoAck(true); -functionConfig.setOutput(sinkTopic); -admin.functions().createFunction(functionConfig, fileName); - -``` - - - - -```` - -### Update a function - -You can update a Pulsar function that has been deployed to a Pulsar cluster using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`update`](reference-pulsar-admin.md#functions-update) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions update \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --output persistent://public/default/update-output-topic \ - # other options - -``` - - - - -{@inject: endpoint|PUT|/admin/v3/functions/:tenant/:namespace/:functionName|operation/updateFunction?version=@pulsar:version_number@} - - - - -```java - -FunctionConfig functionConfig = new FunctionConfig(); -functionConfig.setTenant(tenant); -functionConfig.setNamespace(namespace); -functionConfig.setName(functionName); -functionConfig.setRuntime(FunctionConfig.Runtime.JAVA); -functionConfig.setParallelism(1); -functionConfig.setClassName("org.apache.pulsar.functions.api.examples.ExclamationFunction"); -UpdateOptions updateOptions = new UpdateOptions(); -updateOptions.setUpdateAuthData(updateAuthData); -admin.functions().updateFunction(functionConfig, userCodeFile, updateOptions); - -``` - - - - -```` - -### Start an instance of a function - -You can start a stopped function instance with `instance-id` using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`start`](reference-pulsar-admin.md#functions-start) subcommand. - -```shell - -$ pulsar-admin functions start \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/start|operation/startFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().startFunction(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Start all instances of a function - -You can start all stopped function instances using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`start`](reference-pulsar-admin.md#functions-start) subcommand. 
- -**Example** - -```shell - -$ pulsar-admin functions start \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/start|operation/startFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().startFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### Stop an instance of a function - -You can stop a function instance with `instance-id` using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`stop`](reference-pulsar-admin.md#functions-stop) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions stop \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/stop|operation/stopFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().stopFunction(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Stop all instances of a function - -You can stop all function instances using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`stop`](reference-pulsar-admin.md#functions-stop) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions stop \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/stop|operation/stopFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().stopFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### Restart an instance of a function - -Restart a function instance with `instance-id` using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`restart`](reference-pulsar-admin.md#functions-restart) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions restart \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/restart|operation/restartFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().restartFunction(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Restart all instances of a function - -You can restart all function instances using Admin CLI, REST API or Java admin API. - -````mdx-code-block - - - -Use the [`restart`](reference-pulsar-admin.md#functions-restart) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions restart \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/restart|operation/restartFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().restartFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### List all functions - -You can list all Pulsar functions running under a specific tenant and namespace using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`list`](reference-pulsar-admin.md#functions-list) subcommand. 
- -**Example** - -```shell - -$ pulsar-admin functions list \ - --tenant public \ - --namespace default - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace|operation/listFunctions?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctions(tenant, namespace); - -``` - - - - -```` - -### Delete a function - -You can delete a Pulsar function that is running on a Pulsar cluster using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`delete`](reference-pulsar-admin.md#functions-delete) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions delete \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) - -``` - - - - -{@inject: endpoint|DELETE|/admin/v3/functions/:tenant/:namespace/:functionName|operation/deregisterFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().deleteFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### Get info about a function - -You can get information about a Pulsar function currently running in cluster mode using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`get`](reference-pulsar-admin.md#functions-get) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions get \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName|operation/getFunctionInfo?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### Get status of an instance of a function - -You can get the current status of a Pulsar function instance with `instance-id` using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`status`](reference-pulsar-admin.md#functions-status) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions status \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/status|operation/getFunctionInstanceStatus?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionStatus(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Get status of all instances of a function - -You can get the current status of all instances of a Pulsar function using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`status`](reference-pulsar-admin.md#functions-status) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions status \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/status|operation/getFunctionStatus?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionStatus(tenant, namespace, functionName); - -``` - - - - -```` - -### Get stats of an instance of a function - -You can get the current stats of a Pulsar function instance with `instance-id` using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`stats`](reference-pulsar-admin.md#functions-stats) subcommand.
- -**Example** - -```shell - -$ pulsar-admin functions stats \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/stats|operation/getFunctionInstanceStats?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionStats(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Get stats of all instances of a function - -You can get the current stats of all instances of a Pulsar function using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`stats`](reference-pulsar-admin.md#functions-stats) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions stats \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/stats|operation/getFunctionStats?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionStats(tenant, namespace, functionName); - -``` - - - - -```` - -### Trigger a function - -You can trigger a specified Pulsar function with a supplied value using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`trigger`](reference-pulsar-admin.md#functions-trigger) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions trigger \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --topic (the name of input topic) \ - --trigger-value "hello pulsar" - # or --trigger-file (the path of trigger file) - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/trigger|operation/triggerFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().triggerFunction(tenant, namespace, functionName, topic, triggerValue, triggerFile); - -``` - - - - -```` - -### Put state associated with a function - -You can put the state associated with a Pulsar function using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`putstate`](reference-pulsar-admin.md#functions-putstate) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions putstate \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --state "{\"key\":\"pulsar\", \"stringValue\":\"hello pulsar\"}" - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/state/:key|operation/putFunctionState?version=@pulsar:version_number@} - - - - -```java - -TypeReference<FunctionState> typeRef = new TypeReference<FunctionState>() {}; -FunctionState stateRepr = ObjectMapperFactory.getThreadLocal().readValue(state, typeRef); -admin.functions().putFunctionState(tenant, namespace, functionName, stateRepr); - -``` - - - - -````
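The `putstate` and `querystate` tools operate on the same state store that a running function sees through its `Context`. For orientation, a minimal sketch of a function reading and updating its own state; the class name and state keys are illustrative, and it assumes the `pulsar-functions-api` artifact is on the classpath:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;

/**
 * Counts the messages it has seen, keeping state under the same keys
 * that `pulsar-admin functions putstate` and `querystate` can access.
 */
public class CountingFunction implements Function<String, Void> {
    @Override
    public Void process(String input, Context context) {
        // incrCounter/getCounter manage a numeric value under a state key
        context.incrCounter("message-count", 1);

        // putState/getState store raw bytes under a state key
        context.putState("last-message",
                ByteBuffer.wrap(input.getBytes(StandardCharsets.UTF_8)));
        return null;
    }
}
```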
- -### Fetch state associated with a function - -You can fetch the current state associated with a Pulsar function using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`querystate`](reference-pulsar-admin.md#functions-querystate) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions querystate \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --key (the key of state) - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/state/:key|operation/getFunctionState?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionState(tenant, namespace, functionName, key); - -``` - - - - -```` \ No newline at end of file
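Putting the operations on this page together, a hedged end-to-end sketch with the Java admin client: register a function, trigger it once, inspect its status, then delete it. The service URL, tenant/namespace/topic values, and JAR path are placeholder assumptions; the method signatures follow the examples shown above:

```java
import java.util.Collections;
import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.common.functions.FunctionConfig;

public class FunctionLifecycle {
    public static void main(String[] args) throws Exception {
        PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080")
                .build();

        String tenant = "public";
        String namespace = "default";
        String name = "exclamation";

        // Register the function from a local JAR, mirroring the create example
        FunctionConfig config = new FunctionConfig();
        config.setTenant(tenant);
        config.setNamespace(namespace);
        config.setName(name);
        config.setRuntime(FunctionConfig.Runtime.JAVA);
        config.setClassName("org.apache.pulsar.functions.api.examples.ExclamationFunction");
        config.setInputs(Collections.singleton("test-input-topic"));
        config.setOutput("persistent://public/default/test-output-topic");
        admin.functions().createFunction(config, "/examples/api-examples.jar");

        // Exercise it once and inspect it
        String result = admin.functions().triggerFunction(
                tenant, namespace, name, "test-input-topic", "hello pulsar", null);
        System.out.println("trigger returned: " + result);
        System.out.println(admin.functions().getFunctionStatus(tenant, namespace, name));

        // Clean up
        admin.functions().deleteFunction(tenant, namespace, name);
        admin.close();
    }
}
```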
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-namespaces.md b/site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-namespaces.md deleted file mode 100644 index ec1183f6de55f5..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-namespaces.md +++ /dev/null @@ -1,1267 +0,0 @@ ---- -id: admin-api-namespaces -title: Managing Namespaces -sidebar_label: "Namespaces" -original_id: admin-api-namespaces ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -> **Important** -> -> This page only shows **some frequently used operations**. -> -> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more information, see [Pulsar admin doc](/tools/pulsar-admin/). -> -> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. -> -> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](/api/admin/). - -Pulsar [namespaces](reference-terminology.md#namespace) are logical groupings of [topics](reference-terminology.md#topic). - -Namespaces can be managed via: - -* The `namespaces` command of the [`pulsar-admin`](/tools/pulsar-admin/) tool -* The `/admin/v2/namespaces` endpoint of the admin {@inject: rest:REST:/} API -* The `namespaces` method of the `PulsarAdmin` object in the [Java API](client-libraries-java.md) - -## Namespaces resources - -### Create namespaces - -You can create new namespaces under a given [tenant](reference-terminology.md#tenant). - -````mdx-code-block - - - -Use the [`create`](reference-pulsar-admin.md#namespaces-create) subcommand and specify the namespace by name: - -```shell - -$ pulsar-admin namespaces create test-tenant/test-namespace - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace|operation/createNamespace?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().createNamespace(namespace); - -``` - - - - -```` - -### Get policies - -You can fetch the current policies associated with a namespace at any time. - -````mdx-code-block - - - -Use the [`policies`](reference-pulsar-admin.md#namespaces-policies) subcommand and specify the namespace: - -```shell - -$ pulsar-admin namespaces policies test-tenant/test-namespace -{ - "auth_policies": { - "namespace_auth": {}, - "destination_auth": {} - }, - "replication_clusters": [], - "bundles_activated": true, - "bundles": { - "boundaries": [ - "0x00000000", - "0xffffffff" - ], - "numBundles": 1 - }, - "backlog_quota_map": {}, - "persistence": null, - "latency_stats_sample_rate": {}, - "message_ttl_in_seconds": 0, - "retention_policies": null, - "deleted": false -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace|operation/getPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getPolicies(namespace); - -``` - - - - -```` - -### List namespaces - -You can list all namespaces within a given Pulsar [tenant](reference-terminology.md#tenant). - -````mdx-code-block - - - -Use the [`list`](reference-pulsar-admin.md#namespaces-list) subcommand and specify the tenant: - -```shell - -$ pulsar-admin namespaces list test-tenant -test-tenant/ns1 -test-tenant/ns2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant|operation/getTenantNamespaces?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getNamespaces(tenant); - -``` - - - - -```` - -### Delete namespaces - -You can delete existing namespaces from a tenant. - -````mdx-code-block - - - -Use the [`delete`](reference-pulsar-admin.md#namespaces-delete) subcommand and specify the namespace: - -```shell - -$ pulsar-admin namespaces delete test-tenant/ns1 - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace|operation/deleteNamespace?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().deleteNamespace(namespace); - -``` - - - - -```` - -### Configure replication clusters - -#### Set replication cluster - -You can set replication clusters for a namespace to enable Pulsar to internally replicate the published messages from one colocation facility to another. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-clusters test-tenant/ns1 \ - --clusters cl1 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/replication|operation/setNamespaceReplicationClusters?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setNamespaceReplicationClusters(namespace, clusters); - -``` - - - - -```` - -#### Get replication cluster - -You can get the list of replication clusters for a given namespace. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-clusters test-tenant/cl1/ns1 - -``` - -``` - -cl2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/replication|operation/getNamespaceReplicationClusters?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getNamespaceReplicationClusters(namespace) - -``` - - - - -```` - -### Configure backlog quota policies - -#### Set backlog quota policies - -A backlog quota lets the broker restrict the bandwidth/storage of a namespace once it reaches a certain threshold limit. The admin can set the limit and the corresponding action to take after the limit is reached. - - 1. producer_request_hold: the broker holds the produce request and does not persist its payload - - 2. producer_exception: the broker disconnects from the client by throwing an exception - - 3. consumer_backlog_eviction: the broker starts discarding backlog messages - -Backlog quota restriction is enforced by defining the backlog-quota-type `destination_storage`. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-backlog-quota --limit 10G --limitTime 36000 --policy producer_request_hold test-tenant/ns1 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/setBacklogQuota?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setBacklogQuota(namespace, new BacklogQuota(limit, limitTime, policy)) - -``` - - - - -````
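The Java snippet above uses the `BacklogQuota` constructor form from older releases. In more recent client versions the class is built through a builder; a minimal sketch, with size/time values matching the CLI example (verify the builder methods against the client version you ship):

```java
import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.common.policies.data.BacklogQuota;
import org.apache.pulsar.common.policies.data.BacklogQuota.RetentionPolicy;

public class SetBacklogQuota {
    public static void main(String[] args) throws Exception {
        PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080")
                .build();

        BacklogQuota quota = BacklogQuota.builder()
                .limitSize(10 * 1024 * 1024 * 1024L) // 10G of backlog
                .limitTime(36000)                    // or 10 hours of backlog
                .retentionPolicy(RetentionPolicy.producer_request_hold)
                .build();

        admin.namespaces().setBacklogQuota("test-tenant/ns1", quota);
        admin.close();
    }
}
```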
- -#### Get backlog quota policies - -You can get a configured backlog quota for a given namespace. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-backlog-quotas test-tenant/ns1 - -``` - -```json - -{ - "destination_storage": { - "limit": 10, - "policy": "producer_request_hold" - } -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/backlogQuotaMap|operation/getBacklogQuotaMap?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getBacklogQuotaMap(namespace); - -``` - - - - -```` - -#### Remove backlog quota policies - -You can remove backlog quota policies for a given namespace. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces remove-backlog-quota test-tenant/ns1 - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/removeBacklogQuota?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().removeBacklogQuota(namespace, backlogQuotaType) - -``` - - - - -```` - -### Configure persistence policies - -#### Set persistence policies - -Persistence policies allow users to configure the persistence level for all topic messages under a given namespace. - - - Bookkeeper-ack-quorum: Number of acks (guaranteed copies) to wait for each entry, default: 0 - - - Bookkeeper-ensemble: Number of bookies to use for a topic, default: 0 - - - Bookkeeper-write-quorum: Number of writes to make for each entry, default: 0 - - - Ml-mark-delete-max-rate: Throttling rate of mark-delete operation (0 means no throttle), default: 0.0 - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-persistence --bookkeeper-ack-quorum 2 --bookkeeper-ensemble 3 --bookkeeper-write-quorum 2 --ml-mark-delete-max-rate 0 test-tenant/ns1 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/setPersistence?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setPersistence(namespace, new PersistencePolicies(bookkeeperEnsemble, bookkeeperWriteQuorum, bookkeeperAckQuorum, managedLedgerMaxMarkDeleteRate)) - -``` - - - - -```` - -#### Get persistence policies - -You can get the configured persistence policies of a given namespace. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-persistence test-tenant/ns1 - -``` - -```json - -{ - "bookkeeperEnsemble": 3, - "bookkeeperWriteQuorum": 2, - "bookkeeperAckQuorum": 2, - "managedLedgerMaxMarkDeleteRate": 0 -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/getPersistence?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getPersistence(namespace) - -``` - - - - -```` - -### Configure namespace bundles - -#### Unload namespace bundles - -The namespace bundle is a virtual group of topics which belong to the same namespace.
If a broker is overloaded by the number of bundles it serves, this command helps unload a bundle from that broker, so that it can be served by some other, less-loaded broker. The namespace bundle ID ranges from 0x00000000 to 0xffffffff. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces unload --bundle 0x00000000_0xffffffff test-tenant/ns1 - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace/{bundle}/unload|operation/unloadNamespaceBundle?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().unloadNamespaceBundle(namespace, bundle) - -``` - - - - -```` - -#### Split namespace bundles - -One namespace bundle can contain multiple topics but can be served by only one broker. If a single bundle is creating an excessive load on a broker, an admin can split the bundle using the command below, permitting one or more of the new bundles to be unloaded, thus balancing the load across the brokers. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces split-bundle --bundle 0x00000000_0xffffffff test-tenant/ns1 - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace/:bundle/split|operation/splitNamespaceBundle?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().splitNamespaceBundle(namespace, bundle) - -``` - - - - -```` - -### Configure message TTL - -#### Set message-ttl - -You can configure the time-to-live (TTL) duration, in seconds, for messages. In the example below, the message-ttl is set to 100s. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-message-ttl --messageTTL 100 test-tenant/ns1 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/setNamespaceMessageTTL?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setNamespaceMessageTTL(namespace, messageTTL) - -``` - - - - -```` - -#### Get message-ttl - -When the message-ttl for a namespace is set, you can use the command below to get the configured value. This example continues the example of the command `set message-ttl`, so the returned value is 100 (seconds). - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-message-ttl test-tenant/ns1 - -``` - -``` - -100 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/getNamespaceMessageTTL?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getNamespaceMessageTTL(namespace) - -``` - -``` - -100 - -``` - - - - -```` - -#### Remove message-ttl - -Remove the message TTL of the configured namespace. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces remove-message-ttl test-tenant/ns1 - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/removeNamespaceMessageTTL?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().removeNamespaceMessageTTL(namespace) - -``` - - - - -```` - - -### Clear backlog - -#### Clear namespace backlog - -It clears all message backlog for all the topics that belong to a specific namespace. You can also clear the backlog for a specific subscription.
- -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces clear-backlog --sub my-subscription test-tenant/ns1 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/clearBacklog|operation/clearNamespaceBacklogForSubscription?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().clearNamespaceBacklogForSubscription(namespace, subscription) - -``` - - - - -```` - -#### Clear bundle backlog - -It clears all message backlog for all the topics that belong to a specific NamespaceBundle. You can also clear the backlog for a specific subscription. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces clear-backlog --bundle 0x00000000_0xffffffff --sub my-subscription test-tenant/ns1 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/:bundle/clearBacklog|operation/clearNamespaceBundleBacklogForSubscription?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().clearNamespaceBundleBacklogForSubscription(namespace, bundle, subscription) - -``` - - - - -```` - -### Configure retention - -#### Set retention - -Each namespace contains multiple topics, and the retention size (storage size) of each topic should not exceed a specific threshold, or its messages should be stored only for a certain period. This command helps configure the retention size and time of topics in a given namespace. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-retention --size 100 --time 10 test-tenant/ns1 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/retention|operation/setRetention?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setRetention(namespace, new RetentionPolicies(retentionTimeInMin, retentionSizeInMB)) - -``` - - - - -```` - -#### Get retention - -It shows retention information of a given namespace. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-retention test-tenant/ns1 - -``` - -```json - -{ - "retentionTimeInMinutes": 10, - "retentionSizeInMB": 100 -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/retention|operation/getRetention?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getRetention(namespace) - -``` - - - - -```` - -### Configure dispatch throttling for topics - -#### Set dispatch throttling for topics - -It sets the message dispatch rate for all the topics under a given namespace. -The dispatch rate can be restricted by the number of messages per X seconds (`msg-dispatch-rate`) or by the number of message-bytes per X seconds (`byte-dispatch-rate`). -The rate period defaults to one second and can be configured with `dispatch-rate-period`. The default value of `msg-dispatch-rate` and `byte-dispatch-rate` is -1, which -disables the throttling. - -:::note - -- If neither `clusterDispatchRate` nor `topicDispatchRate` is configured, dispatch throttling is disabled. -- If `topicDispatchRate` is not configured, `clusterDispatchRate` takes effect. -- If `topicDispatchRate` is configured, `topicDispatchRate` takes effect.
- -::: - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-dispatch-rate test-tenant/ns1 \ - --msg-dispatch-rate 1000 \ - --byte-dispatch-rate 1048576 \ - --dispatch-rate-period 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/dispatchRate|operation/setDispatchRate?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setDispatchRate(namespace, new DispatchRate(1000, 1048576, 1)) - -``` - - - - -```` - -#### Get configured message-rate for topics - -It shows the configured message rate for the namespace (topics under this namespace can dispatch this many messages per second). - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-dispatch-rate test-tenant/ns1 - -``` - -```json - -{ - "dispatchThrottlingRatePerTopicInMsg" : 1000, - "dispatchThrottlingRatePerTopicInByte" : 1048576, - "ratePeriodInSecond" : 1 -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/dispatchRate|operation/getDispatchRate?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getDispatchRate(namespace) - -``` - - - - -```` - -### Configure dispatch throttling for subscription - -#### Set dispatch throttling for subscription - -It sets the message dispatch rate for all the subscriptions of topics under a given namespace. -The dispatch rate can be restricted by the number of messages per X seconds (`msg-dispatch-rate`) or by the number of message-bytes per X seconds (`byte-dispatch-rate`). -The rate period defaults to one second and can be configured with `dispatch-rate-period`. The default value of `msg-dispatch-rate` and `byte-dispatch-rate` is -1, which -disables the throttling. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-subscription-dispatch-rate test-tenant/ns1 \ - --msg-dispatch-rate 1000 \ - --byte-dispatch-rate 1048576 \ - --dispatch-rate-period 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/subscriptionDispatchRate|operation/setDispatchRate?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setSubscriptionDispatchRate(namespace, new DispatchRate(1000, 1048576, 1)) - -``` - - - - -```` - -#### Get configured message-rate for subscription - -It shows the configured message rate for the namespace (topics under this namespace can dispatch this many messages per second). - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-subscription-dispatch-rate test-tenant/ns1 - -``` - -```json - -{ - "dispatchThrottlingRatePerTopicInMsg" : 1000, - "dispatchThrottlingRatePerTopicInByte" : 1048576, - "ratePeriodInSecond" : 1 -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/subscriptionDispatchRate|operation/getDispatchRate?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getSubscriptionDispatchRate(namespace) - -``` - - - - -```` - -### Configure dispatch throttling for replicator - -#### Set dispatch throttling for replicator - -It sets the message dispatch rate for all the replicators between replication clusters under a given namespace. -The dispatch rate can be restricted by the number of messages per X seconds (`msg-dispatch-rate`) or by the number of message-bytes per X seconds (`byte-dispatch-rate`). -The rate period defaults to one second and can be configured with `dispatch-rate-period`. The default value of `msg-dispatch-rate` and `byte-dispatch-rate` is -1, which -disables the throttling.
- -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-replicator-dispatch-rate test-tenant/ns1 \ - --msg-dispatch-rate 1000 \ - --byte-dispatch-rate 1048576 \ - --dispatch-rate-period 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/replicatorDispatchRate|operation/setDispatchRate?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setReplicatorDispatchRate(namespace, new DispatchRate(1000, 1048576, 1)) - -``` - - - - -```` - -#### Get configured message-rate for replicator - -It shows configured message-rate for the namespace (topics under this namespace can dispatch this many messages per second) - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-replicator-dispatch-rate test-tenant/ns1 - -``` - -```json - -{ - "dispatchThrottlingRatePerTopicInMsg" : 1000, - "dispatchThrottlingRatePerTopicInByte" : 1048576, - "ratePeriodInSecond" : 1 -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/replicatorDispatchRate|operation/getDispatchRate?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getReplicatorDispatchRate(namespace) - -``` - - - - -```` - -### Configure deduplication snapshot interval - -#### Get deduplication snapshot interval - -It shows configured `deduplicationSnapshotInterval` for a namespace (Each topic under the namespace will take a deduplication snapshot according to this interval) - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-deduplication-snapshot-interval test-tenant/ns1 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/deduplicationSnapshotInterval|operation/getDeduplicationSnapshotInterval?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getDeduplicationSnapshotInterval(namespace) - -``` - - - - -```` - -#### Set deduplication snapshot interval - -Set configured `deduplicationSnapshotInterval` for a namespace. Each topic under the namespace will take a deduplication snapshot according to this interval. -`brokerDeduplicationEnabled` must be set to `true` for this property to take effect. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-deduplication-snapshot-interval test-tenant/ns1 --interval 1000 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/deduplicationSnapshotInterval|operation/setDeduplicationSnapshotInterval?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setDeduplicationSnapshotInterval(namespace, 1000) - -``` - - - - -```` - -#### Remove deduplication snapshot interval - -Remove configured `deduplicationSnapshotInterval` of a namespace (Each topic under the namespace will take a deduplication snapshot according to this interval) - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces remove-deduplication-snapshot-interval test-tenant/ns1 - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/deduplicationSnapshotInterval|operation/deleteDeduplicationSnapshotInterval?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().removeDeduplicationSnapshotInterval(namespace) - -``` - - - - -```` - -### Namespace isolation - -You can use the [Pulsar isolation policy](administration-isolation.md) to allocate resources (broker and bookie) for a namespace. 
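This page carries no inline example for isolation policies. For orientation, a minimal sketch of creating one with the Java admin client; the cluster name, broker regexes, namespace regex, and failover parameters are illustrative assumptions, and the builder-style API should be checked against your client version:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.common.policies.data.AutoFailoverPolicyData;
import org.apache.pulsar.common.policies.data.AutoFailoverPolicyType;
import org.apache.pulsar.common.policies.data.NamespaceIsolationData;

public class CreateIsolationPolicy {
    public static void main(String[] args) throws Exception {
        PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080")
                .build();

        // Fail over to the secondary broker group when fewer than
        // min_limit primary brokers are available.
        Map<String, String> failoverParams = new HashMap<>();
        failoverParams.put("min_limit", "1");
        failoverParams.put("usage_threshold", "80");

        // Pin every namespace matching the regex to a primary broker group
        NamespaceIsolationData policy = NamespaceIsolationData.builder()
                .namespaces(Collections.singletonList("test-tenant/ns.*"))
                .primary(Collections.singletonList("broker1.*"))
                .secondary(Collections.singletonList("broker2.*"))
                .autoFailoverPolicy(AutoFailoverPolicyData.builder()
                        .policyType(AutoFailoverPolicyType.min_available)
                        .parameters(failoverParams)
                        .build())
                .build();

        admin.clusters().createNamespaceIsolationPolicy("standalone", "my-policy", policy);
        admin.close();
    }
}
```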
- -### Unload namespaces from a broker - -You can unload a namespace, or a [namespace bundle](reference-terminology.md#namespace-bundle), from the Pulsar [broker](reference-terminology.md#broker) that is currently responsible for it. - -#### pulsar-admin - -Use the [`unload`](reference-pulsar-admin.md#unload) subcommand of the [`namespaces`](reference-pulsar-admin.md#namespaces) command. - -````mdx-code-block - - - -```shell - -$ pulsar-admin namespaces unload my-tenant/my-ns - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace/unload|operation/unloadNamespace?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().unload(namespace) - -``` - - - - -```` diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-non-partitioned-topics.md b/site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-non-partitioned-topics.md deleted file mode 100644 index e6347bb8c363a1..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-non-partitioned-topics.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -id: admin-api-non-partitioned-topics -title: Managing non-partitioned topics -sidebar_label: "Non-partitioned topics" -original_id: admin-api-non-partitioned-topics ---- - -For details of the content, refer to [manage topics](admin-api-topics.md). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-non-persistent-topics.md b/site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-non-persistent-topics.md deleted file mode 100644 index 3126a6494c7153..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-non-persistent-topics.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -id: admin-api-non-persistent-topics -title: Managing non-persistent topics -sidebar_label: "Non-Persistent topics" -original_id: admin-api-non-persistent-topics ---- - -For details of the content, refer to [manage topics](admin-api-topics.md). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-overview.md b/site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-overview.md deleted file mode 100644 index 408f1943fff188..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-overview.md +++ /dev/null @@ -1,144 +0,0 @@ ---- -id: admin-api-overview -title: Pulsar admin interface -sidebar_label: "Overview" -original_id: admin-api-overview ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -The Pulsar admin interface enables you to manage all important entities in a Pulsar instance, such as tenants, topics, and namespaces. - -You can interact with the admin interface via: - -- The `pulsar-admin` CLI tool, which is available in the `bin` folder of your Pulsar installation: - - ```shell - - bin/pulsar-admin - - ``` - - > **Important** - > - > For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more information, see [Pulsar admin doc](/tools/pulsar-admin/). - -- HTTP calls, which are made against the admin {@inject: rest:REST:/} API provided by Pulsar brokers. For some RESTful APIs, they might be redirected to the owner brokers for serving with [`307 Temporary Redirect`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/307), hence the HTTP callers should handle `307 Temporary Redirect`. 
If you use `curl` commands, you should specify `-L` to handle redirections. - - > **Important** - > - > For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. - -- A Java client interface. - - > **Important** - > - > For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](/api/admin/). - -> **The REST API is the admin interface**. Both the `pulsar-admin` CLI tool and the Java client use the REST API. If you implement your own admin interface client, you should use the REST API. - -## Admin setup - -Each of the three admin interfaces (the `pulsar-admin` CLI tool, the {@inject: rest:REST:/} API, and the [Java admin API](/api/admin)) requires some special setup if you have enabled authentication in your Pulsar instance. - -````mdx-code-block - - - -If you have enabled authentication, you need to provide an auth configuration to use the `pulsar-admin` tool. By default, the configuration for the `pulsar-admin` tool is in the [`conf/client.conf`](reference-configuration.md#client) file. The following are the available parameters: - -|Name|Description|Default| -|----|-----------|-------| -|webServiceUrl|The web URL for the cluster.|http://localhost:8080/| -|brokerServiceUrl|The Pulsar protocol URL for the cluster.|pulsar://localhost:6650/| -|authPlugin|The authentication plugin.| | -|authParams|The authentication parameters for the cluster, as a comma-separated string.| | -|useTls|Whether or not TLS authentication will be enforced in the cluster.|false| -|tlsAllowInsecureConnection|Accept untrusted TLS certificate from client.|false| -|tlsTrustCertsFilePath|Path for the trusted TLS certificate file.| | - - - - -You can find details for the REST API exposed by Pulsar brokers in this {@inject: rest:document:/}. - - - - -To use the Java admin API, instantiate a {@inject: javadoc:PulsarAdmin:/admin/org/apache/pulsar/client/admin/PulsarAdmin} object, and specify a URL for a Pulsar broker and a {@inject: javadoc:PulsarAdminBuilder:/admin/org/apache/pulsar/client/admin/PulsarAdminBuilder}. The following is a minimal example using `localhost`: - -```java - -String url = "http://localhost:8080"; -// Pass auth-plugin class fully-qualified name if Pulsar-security enabled -String authPluginClassName = "com.org.MyAuthPluginClass"; -// Pass auth-param if auth-plugin class requires it -String authParams = "param1=value1"; -boolean useTls = false; -boolean tlsAllowInsecureConnection = false; -String tlsTrustCertsFilePath = null; -PulsarAdmin admin = PulsarAdmin.builder() -.authentication(authPluginClassName,authParams) -.serviceHttpUrl(url) -.tlsTrustCertsFilePath(tlsTrustCertsFilePath) -.allowTlsInsecureConnection(tlsAllowInsecureConnection) -.build(); - -``` - -If you use multiple brokers, you can specify a multi-host service URL, just as with the Pulsar service URL.
For example, - -```java - -String url = "http://localhost:8080,localhost:8081,localhost:8082"; -// Pass auth-plugin class fully-qualified name if Pulsar-security enabled -String authPluginClassName = "com.org.MyAuthPluginClass"; -// Pass auth-param if auth-plugin class requires it -String authParams = "param1=value1"; -boolean useTls = false; -boolean tlsAllowInsecureConnection = false; -String tlsTrustCertsFilePath = null; -PulsarAdmin admin = PulsarAdmin.builder() -.authentication(authPluginClassName,authParams) -.serviceHttpUrl(url) -.tlsTrustCertsFilePath(tlsTrustCertsFilePath) -.allowTlsInsecureConnection(tlsAllowInsecureConnection) -.build(); - -``` - - - - -```` - -## How to define Pulsar resource names when running Pulsar in Kubernetes -If you run Pulsar Functions or connectors on Kubernetes, you need to follow Kubernetes naming convention to define the names of your Pulsar resources, whichever admin interface you use. - -Kubernetes requires a name that can be used as a DNS subdomain name as defined in [RFC 1123](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names). Pulsar supports more legal characters than Kubernetes naming convention. If you create a Pulsar resource name with special characters that are not supported by Kubernetes (for example, including colons in a Pulsar namespace name), Kubernetes runtime translates the Pulsar object names into Kubernetes resource labels which are in RFC 1123-compliant forms. Consequently, you can run functions or connectors using Kubernetes runtime. The rules for translating Pulsar object names into Kubernetes resource labels are as below: - -- Truncate to 63 characters - -- Replace the following characters with dashes (-): - - - Non-alphanumeric characters - - - Underscores (_) - - - Dots (.) - -- Replace beginning and ending non-alphanumeric characters with 0 - -:::tip - -- If you get an error in translating Pulsar object names into Kubernetes resource labels (for example, you may have a naming collision if your Pulsar object name is too long) or want to customize the translating rules, see [customize Kubernetes runtime](functions-runtime.md#customize-kubernetes-runtime). -- For how to configure Kubernetes runtime, see [here](functions-runtime.md#configure-kubernetes-runtime). - -::: - diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-packages.md b/site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-packages.md deleted file mode 100644 index 608dfb7587daff..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-packages.md +++ /dev/null @@ -1,390 +0,0 @@ ---- -id: admin-api-packages -title: Manage packages -sidebar_label: "Packages" -original_id: admin-api-packages ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -> **Important** -> -> This page only shows **some frequently used operations**. -> -> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](/tools/pulsar-admin/). -> -> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. -> -> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](/api/admin/). - -Package managers or package-management systems automatically manage packages in a consistent manner. 
These tools simplify the installation tasks, upgrade process, and deletion operations for users. A package is a minimal unit that a package manager deals with. In Pulsar, packages are organized at the tenant- and namespace-level to manage Pulsar Functions and Pulsar IO connectors (i.e., source and sink). - -## What is a package? - -A package is a set of elements that the user would like to reuse in later operations. In Pulsar, a package can be a group of functions, sources, and sinks. You can define a package according to your needs. - -The package management system in Pulsar stores the data and metadata of each package (as shown in the table below) and tracks the package versions. - -|Metadata|Description| -|--|--| -|description|The description of the package.| -|contact|The contact information of a package. For example, an email address of the developer team.| -|create_time|The time when the package is created.| -|modification_time|The time when the package is last modified.| -|properties|A user-defined key/value map to store other information.| - -## How to use a package - -Packages let you efficiently reuse the same set of functions and IO connectors. For example, you can use the same function, source, and sink in multiple namespaces. The main steps are: - -1. Create a package in the package manager by providing the following information: type, tenant, namespace, package name, and version. - - |Component|Description| - |-|-| - |type|Specify one of the supported package types: function, sink and source.| - |tenant|Specify the tenant where you want to create the package.| - |namespace|Specify the namespace where you want to create the package.| - |name|Specify the complete name of the package, using the format `<tenant>/<namespace>/<package name>`.| - |version|Specify the version of the package using the format `MajorVersion.MinorVersion` in numerals.| - - The information you provide creates a URL for a package, in the format `<type>://<tenant>/<namespace>/<package name>/<version>`. - -2. Upload the elements to the package, i.e., the functions, sources, and sinks that you want to use across namespaces. - -3. Apply permissions to this package from various namespaces. - -Now, you can use the elements you defined in the package by calling this package from within the package manager. The package manager locates it by the URL. For example, - -``` - -sink://public/default/mysql-sink@1.0 -function://my-tenant/my-ns/my-function@0.1 -source://my-tenant/my-ns/mysql-cdc-source@2.3 - -``` - -## Package management in Pulsar - -You can use the command line tools, REST API, or the Java client to manage your package resources in Pulsar. More specifically, you can use these tools to [upload](#upload-a-package), [download](#download-a-package), and [delete](#delete-a-package) a package, [get the metadata](#get-the-metadata-of-a-package) and [update the metadata](#update-the-metadata-of-a-package) of a package, [get the versions](#list-all-versions-of-a-package) of a package, and [get all packages of a specific type under a namespace](#list-all-packages-of-a-specific-type-under-a-namespace). - -### Upload a package - -You can use the following commands to upload a package. - -````mdx-code-block - - - -```shell - -bin/pulsar-admin packages upload function://public/default/example@v0.1 --path package-file --description package-description - -``` - - - - -{@inject: endpoint|POST|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version|operation/upload?version=@pulsar:version_number@} - - - - -Upload a package to the package management service synchronously.
- -```java - - void upload(PackageMetadata metadata, String packageName, String path) throws PulsarAdminException; - -``` - -Upload a package to the package management service asynchronously. - -```java - - CompletableFuture<Void> uploadAsync(PackageMetadata metadata, String packageName, String path); - -``` - - - - -```` - -### Download a package - -You can use the following commands to download a package. - -````mdx-code-block - - - -```shell - -bin/pulsar-admin packages download function://public/default/example@v0.1 --path package-file - -``` - - - - -{@inject: endpoint|GET|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version|operation/download?version=@pulsar:version_number@} - - - - -Download a package from the package management service synchronously. - -```java - - void download(String packageName, String path) throws PulsarAdminException; - -``` - -Download a package from the package management service asynchronously. - -```java - - CompletableFuture<Void> downloadAsync(String packageName, String path); - -``` - - - - -```` - -### Delete a package - -You can use the following commands to delete a package. - -````mdx-code-block - - - -The following command deletes a package of version 0.1. - -```shell - -bin/pulsar-admin packages delete function://public/default/example@v0.1 - -``` - - - - -{@inject: endpoint|DELETE|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version|operation/delete?version=@pulsar:version_number@} - - - - -Delete a specified package synchronously. - -```java - - void delete(String packageName) throws PulsarAdminException; - -``` - -Delete a specified package asynchronously. - -```java - - CompletableFuture<Void> deleteAsync(String packageName); - -``` - - - - -```` - -### Get the metadata of a package - -You can use the following commands to get the metadata of a package. - -````mdx-code-block - - - -```shell - -bin/pulsar-admin packages get-metadata function://public/default/test@v1 - -``` - - - - -{@inject: endpoint|GET|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version/metadata|operation/getMeta?version=@pulsar:version_number@} - - - - -Get the metadata of a package synchronously. - -```java - - PackageMetadata getMetadata(String packageName) throws PulsarAdminException; - -``` - -Get the metadata of a package asynchronously. - -```java - - CompletableFuture<PackageMetadata> getMetadataAsync(String packageName); - -``` - - - - -```` - -### Update the metadata of a package - -You can use the following commands to update the metadata of a package. - -````mdx-code-block - - - -```shell - -bin/pulsar-admin packages update-metadata function://public/default/example@v0.1 --description update-description - -``` - - - - -{@inject: endpoint|PUT|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version/metadata|operation/updateMeta?version=@pulsar:version_number@} - - - - -Update the metadata of a package synchronously. - -```java - - void updateMetadata(String packageName, PackageMetadata metadata) throws PulsarAdminException; - -``` - -Update the metadata of a package asynchronously. - -```java - - CompletableFuture<Void> updateMetadataAsync(String packageName, PackageMetadata metadata); - -``` - - - - -```` - -### List all versions of a package - -You can use the following commands to list all versions of a package.
- -````mdx-code-block - - - -```shell - -bin/pulsar-admin packages list-versions type://tenant/namespace/packageName - -``` - - - - -{@inject: endpoint|GET|/admin/v3/packages/:type/:tenant/:namespace/:packageName|operation/listPackageVersion?version=@pulsar:version_number@} - - - - -List all versions of a package synchronously. - -```java - - List<String> listPackageVersions(String packageName) throws PulsarAdminException; - -``` - -List all versions of a package asynchronously. - -```java - - CompletableFuture<List<String>> listPackageVersionsAsync(String packageName); - -``` - - - - -```` - -### List all packages of a specific type under a namespace - -You can use the following commands to list all packages of a specific type under a namespace. - -````mdx-code-block - - - - -```shell - -bin/pulsar-admin packages list --type function public/default - -``` - - - - -{@inject: endpoint|PUT|/admin/v3/packages/:type/:tenant/:namespace|operation/listPackages?version=@pulsar:version_number@} - - - - -List all packages of a specific type under a namespace synchronously. - -```java - - List<String> listPackages(String type, String namespace) throws PulsarAdminException; - -``` - -List all packages of a specific type under a namespace asynchronously. - -```java - - CompletableFuture<List<String>> listPackagesAsync(String type, String namespace); - -``` - - - - -````
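To tie the operations above together, a hedged end-to-end sketch with the Java admin client: build the metadata, upload a local artifact under a versioned package name, then download it elsewhere. The paths and names are placeholders, and the `PackageMetadata` builder form shown here should be checked against your client version:

```java
import java.util.HashMap;
import java.util.Map;
import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.packages.management.core.common.PackageMetadata;

public class PackageRoundTrip {
    public static void main(String[] args) throws Exception {
        PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080")
                .build();

        Map<String, String> props = new HashMap<>();
        props.put("owner", "data-team");

        PackageMetadata metadata = PackageMetadata.builder()
                .description("example function package")
                .contact("dev@example.com")
                .properties(props)
                .build();

        // Upload a local artifact under a versioned package URL ...
        String pkg = "function://public/default/example@v0.1";
        admin.packages().upload(metadata, pkg, "/tmp/example-function.jar");

        // ... and fetch it back somewhere else.
        admin.packages().download(pkg, "/tmp/example-function-copy.jar");
        admin.close();
    }
}
```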
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-partitioned-topics.md b/site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-partitioned-topics.md deleted file mode 100644 index 5ce182282e0324..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-partitioned-topics.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -id: admin-api-partitioned-topics -title: Managing partitioned topics -sidebar_label: "Partitioned topics" -original_id: admin-api-partitioned-topics ---- - -For details of the content, refer to [manage topics](admin-api-topics.md). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-permissions.md b/site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-permissions.md deleted file mode 100644 index 5ace9d573bdaa2..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-permissions.md +++ /dev/null @@ -1,189 +0,0 @@ ---- -id: admin-api-permissions -title: Managing permissions -sidebar_label: "Permissions" -original_id: admin-api-permissions ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -> **Important** - -> -> This page only shows **some frequently used operations**. -> -> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](/tools/pulsar-admin/) -> -> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. -> -> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](/api/admin/). - -Pulsar allows you to grant namespace-level or topic-level permission to users. - -- If you grant a namespace-level permission to a user, then the user can access all the topics under the namespace. - -- If you grant a topic-level permission to a user, then the user can access only the topic. - -The chapters below demonstrate how to grant namespace-level permissions to users. For how to grant topic-level permissions to users, see [manage topics](admin-api-topics.md#grant-permission). - -## Grant permissions - -You can grant permissions to specific roles for lists of operations such as `produce` and `consume`. - -````mdx-code-block - - - -Use the [`grant-permission`](reference-pulsar-admin.md#grant-permission) subcommand and specify a namespace, actions using the `--actions` flag, and a role using the `--role` flag: - -```shell - -$ pulsar-admin namespaces grant-permission test-tenant/ns1 \ - --actions produce,consume \ - --role admin10 - -``` - -Wildcard authorization can be performed when `authorizationAllowWildcardsMatching` is set to `true` in `broker.conf`. - -e.g. - -```shell - -$ pulsar-admin namespaces grant-permission test-tenant/ns1 \ - --actions produce,consume \ - --role 'my.role.*' - -``` - -Then, roles `my.role.1`, `my.role.2`, `my.role.foo`, `my.role.bar`, etc. can produce and consume. - -```shell - -$ pulsar-admin namespaces grant-permission test-tenant/ns1 \ - --actions produce,consume \ - --role '*.role.my' - -``` - -Then, roles `1.role.my`, `2.role.my`, `foo.role.my`, `bar.role.my`, etc. can produce and consume. - -**Note**: Wildcard matching works at **the beginning or end of the role name only**. - -e.g. - -```shell - -$ pulsar-admin namespaces grant-permission test-tenant/ns1 \ - --actions produce,consume \ - --role 'my.*.role' - -``` - -In this case, only the role `my.*.role` has permissions. -Roles `my.1.role`, `my.2.role`, `my.foo.role`, `my.bar.role`, etc. **cannot** produce and consume. - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/permissions/:role|operation/grantPermissionOnNamespace?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().grantPermissionOnNamespace(namespace, role, getAuthActions(actions)); - -``` - - - - -```` - -## Get permissions - -You can see which permissions have been granted to which roles in a namespace. - -````mdx-code-block - - - -Use the [`permissions`](reference-pulsar-admin.md#permissions) subcommand and specify a namespace: - -```shell - -$ pulsar-admin namespaces permissions test-tenant/ns1 -{ - "admin10": [ - "produce", - "consume" - ] -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/permissions|operation/getPermissions?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getPermissions(namespace); - -``` - - - - -```` - -## Revoke permissions - -You can revoke permissions from specific roles, which means that those roles will no longer have access to the specified namespace.
- -````mdx-code-block - - - -Use the [`revoke-permission`](reference-pulsar-admin.md#revoke-permission) subcommand and specify a namespace and a role using the `--role` flag: - -```shell - -$ pulsar-admin namespaces revoke-permission test-tenant/ns1 \ - --role admin10 - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/permissions/:role|operation/revokePermissionsOnNamespace?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().revokePermissionsOnNamespace(namespace, role); - -``` - - - - -```` \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-persistent-topics.md b/site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-persistent-topics.md deleted file mode 100644 index 50d135b72f5424..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-persistent-topics.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -id: admin-api-persistent-topics -title: Managing persistent topics -sidebar_label: "Persistent topics" -original_id: admin-api-persistent-topics ---- - -For details of the content, refer to [manage topics](admin-api-topics.md). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-schemas.md b/site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-schemas.md deleted file mode 100644 index 9ffe21f5b0f750..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-schemas.md +++ /dev/null @@ -1,7 +0,0 @@ ---- -id: admin-api-schemas -title: Managing Schemas -sidebar_label: "Schemas" -original_id: admin-api-schemas ---- - diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-tenants.md b/site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-tenants.md deleted file mode 100644 index e962ed851e4f0a..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-tenants.md +++ /dev/null @@ -1,242 +0,0 @@ ---- -id: admin-api-tenants -title: Managing Tenants -sidebar_label: "Tenants" -original_id: admin-api-tenants ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -> **Important** -> -> This page only shows **some frequently used operations**. -> -> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](/tools/pulsar-admin/) -> -> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. -> -> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](/api/admin/). - -Tenants, like namespaces, can be managed using the [admin API](admin-api-overview.md). There are currently two configurable aspects of tenants: - -* Admin roles -* Allowed clusters - -## Tenant resources - -### List - -You can list all of the tenants associated with an [instance](reference-terminology.md#instance). - -````mdx-code-block - - - -Use the [`list`](reference-pulsar-admin.md#tenants-list) subcommand. - -```shell - -$ pulsar-admin tenants list -my-tenant-1 -my-tenant-2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/tenants|operation/getTenants?version=@pulsar:version_number@} - - - - -```java - -admin.tenants().getTenants(); - -``` - - - - -```` - -### Create - -You can create a new tenant. 
- -````mdx-code-block - - - -Use the [`create`](reference-pulsar-admin.md#tenants-create) subcommand: - -```shell - -$ pulsar-admin tenants create my-tenant - -``` - -When creating a tenant, you can optionally assign admin roles using the `-r`/`--admin-roles` -flag, and clusters using the `-c`/`--allowed-clusters` flag. You can specify multiple values -as a comma-separated list. Here are some examples: - -```shell - -$ pulsar-admin tenants create my-tenant \ - --admin-roles role1,role2,role3 \ - --allowed-clusters cluster1 - -$ pulsar-admin tenants create my-tenant \ - -r role1 \ - -c cluster1 - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/tenants/:tenant|operation/createTenant?version=@pulsar:version_number@} - - - - -```java - -admin.tenants().createTenant(tenantName, tenantInfo); - -``` - - - - -```` - -### Get configuration - -You can fetch the [configuration](reference-configuration.md) for an existing tenant at any time. - -````mdx-code-block - - - -Use the [`get`](reference-pulsar-admin.md#tenants-get) subcommand and specify the name of the tenant. Here's an example: - -```shell - -$ pulsar-admin tenants get my-tenant -{ - "adminRoles": [ - "admin1", - "admin2" - ], - "allowedClusters": [ - "cl1", - "cl2" - ] -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/tenants/:tenant|operation/getTenant?version=@pulsar:version_number@} - - - - -```java - -admin.tenants().getTenantInfo(tenantName); - -``` - - - - -```` - -### Delete - -Tenants can be deleted from a Pulsar [instance](reference-terminology.md#instance). - -````mdx-code-block - - - -Use the [`delete`](reference-pulsar-admin.md#tenants-delete) subcommand and specify the name of the tenant. - -```shell - -$ pulsar-admin tenants delete my-tenant - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/tenants/:tenant|operation/deleteTenant?version=@pulsar:version_number@} - - - - -```java - -admin.tenants().deleteTenant(tenantName); - -``` - - - - -```` - -### Update - -You can update a tenant's configuration. - -````mdx-code-block - - - -Use the [`update`](reference-pulsar-admin.md#tenants-update) subcommand. - -```shell - -$ pulsar-admin tenants update my-tenant - -``` - - - - -{@inject: endpoint|POST|/admin/v2/tenants/:tenant|operation/updateTenant?version=@pulsar:version_number@} - - - - -```java - -admin.tenants().updateTenant(tenantName, tenantInfo); - -``` - - - - -````
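The Java snippets above pass a `tenantInfo` object without showing how to build one. A minimal sketch, assuming the builder-style `TenantInfo` of recent clients (older releases used a setter-based form instead); the role and cluster names are placeholders:

```java
import java.util.Collections;
import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.common.policies.data.TenantInfo;

public class CreateTenant {
    public static void main(String[] args) throws Exception {
        PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080")
                .build();

        // Admin roles may manage the tenant; allowed clusters bound
        // where its namespaces can live.
        TenantInfo info = TenantInfo.builder()
                .adminRoles(Collections.singleton("admin1"))
                .allowedClusters(Collections.singleton("standalone"))
                .build();

        admin.tenants().createTenant("my-tenant", info);
        admin.close();
    }
}
```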
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-topics.md b/site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-topics.md
deleted file mode 100644
index f74c58b35ab783..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/admin-api-topics.md
+++ /dev/null
@@ -1,2492 +0,0 @@
---
id: admin-api-topics
title: Manage topics
sidebar_label: "Topics"
original_id: admin-api-topics
---

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````


> **Important**
>
> This page only shows **some frequently used operations**.
>
> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](/tools/pulsar-admin/)
>
> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc.
>
> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](/api/admin/).

Pulsar has persistent and non-persistent topics. A persistent topic is a logical endpoint for publishing and consuming messages. The topic name structure for persistent topics is:

```shell

persistent://tenant/namespace/topic

```

Non-persistent topics are used by applications that only consume messages published in real time and do not need a persistence guarantee. This reduces message-publish latency by removing the overhead of persisting messages. The topic name structure for non-persistent topics is:

```shell

non-persistent://tenant/namespace/topic

```

## Manage topic resources
Whether a topic is persistent or non-persistent, you can manage its resources through the `pulsar-admin` tool, the REST API, or the Java admin API.

:::note

In REST API, `:schema` stands for persistent or non-persistent. `:tenant`, `:namespace`, and `:topic` are variables; replace them with the actual tenant, namespace, and topic names when using them.
Take {@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getList?version=@pulsar:version_number@} as an example. To get the list of persistent topics in REST API, use `https://pulsar.apache.org/admin/v2/persistent/my-tenant/my-namespace`. To get the list of non-persistent topics in REST API, use `https://pulsar.apache.org/admin/v2/non-persistent/my-tenant/my-namespace`.

:::

### List of topics

You can get the list of topics under a given namespace in the following ways.

````mdx-code-block


```shell

$ pulsar-admin topics list \
  my-tenant/my-namespace

```


{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getList?version=@pulsar:version_number@}


```java

String namespace = "my-tenant/my-namespace";
admin.topics().getList(namespace);

```


````

### Grant permission

You can grant permissions on a client role to perform specific actions on a given topic in the following ways.

````mdx-code-block


```shell

$ pulsar-admin topics grant-permission \
  --actions produce,consume --role application1 \
  persistent://test-tenant/ns1/tp1

```


{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/permissions/:role|operation/grantPermissionsOnTopic?version=@pulsar:version_number@}


```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
String role = "test-role";
Set<AuthAction> actions = Sets.newHashSet(AuthAction.produce, AuthAction.consume);
admin.topics().grantPermission(topic, role, actions);

```


````

### Get permission

You can fetch permissions in the following ways.

````mdx-code-block


```shell

$ pulsar-admin topics permissions \
  persistent://test-tenant/ns1/tp1

{
  "application1": [
    "consume",
    "produce"
  ]
}

```


{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/permissions|operation/getPermissionsOnTopic?version=@pulsar:version_number@}


```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
admin.topics().getPermissions(topic);

```


````
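As a usage note, the Java `getPermissions` call returns the same role-to-actions mapping that the CLI prints above. A minimal sketch of iterating over it (the map shape is assumed from the JSON output shown in the shell example):

```java

import java.util.Map;
import java.util.Set;

import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.common.policies.data.AuthAction;

public class TopicPermissionsExample {
    public static void main(String[] args) throws Exception {
        try (PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080") // placeholder URL
                .build()) {
            String topic = "persistent://test-tenant/ns1/tp1";
            // Each entry maps a role name to the set of actions granted on the topic.
            Map<String, Set<AuthAction>> permissions = admin.topics().getPermissions(topic);
            permissions.forEach((role, actions) ->
                    System.out.println(role + " -> " + actions));
        }
    }
}

```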
### Revoke permission

You can revoke a permission granted on a client role in the following ways.

````mdx-code-block


```shell

$ pulsar-admin topics revoke-permission \
  --role application1 \
  persistent://test-tenant/ns1/tp1

```


{@inject: endpoint|DELETE|/admin/v2/:schema/:tenant/:namespace/:topic/permissions/:role|operation/revokePermissionsOnTopic?version=@pulsar:version_number@}


```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
String role = "test-role";
admin.topics().revokePermissions(topic, role);

```


````

### Delete topic

You can delete a topic in the following ways. You cannot delete a topic if any active subscription or producer is connected to it.

````mdx-code-block


```shell

$ pulsar-admin topics delete \
  persistent://test-tenant/ns1/tp1

```


{@inject: endpoint|DELETE|/admin/v2/:schema/:tenant/:namespace/:topic|operation/deleteTopic?version=@pulsar:version_number@}


```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
admin.topics().delete(topic);

```


````

### Unload topic

You can unload a topic in the following ways.

````mdx-code-block


```shell

$ pulsar-admin topics unload \
  persistent://test-tenant/ns1/tp1

```


{@inject: endpoint|PUT|/admin/v2/:schema/:tenant/:namespace/:topic/unload|operation/unloadTopic?version=@pulsar:version_number@}


```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
admin.topics().unload(topic);

```


````

### Get stats

You can check the following statistics of a given non-partitioned topic.

  - **msgRateIn**: The sum of all local and replication publishers' publish rates (msg/s).

  - **msgThroughputIn**: The sum of all local and replication publishers' publish throughput (bytes/s).

  - **msgRateOut**: The sum of all local and replication consumers' dispatch rates (msg/s).

  - **msgThroughputOut**: The sum of all local and replication consumers' dispatch throughput (bytes/s).

  - **averageMsgSize**: The average size (in bytes) of messages published within the last interval.

  - **storageSize**: The sum of the ledgers' storage size for this topic. The space used to store the messages for the topic.

  - **earliestMsgPublishTimeInBacklogs**: The publish time of the earliest message in the backlog (ms).

  - **bytesInCounter**: Total bytes published to the topic.

  - **msgInCounter**: Total messages published to the topic.

  - **bytesOutCounter**: Total bytes delivered to consumers.

  - **msgOutCounter**: Total messages delivered to consumers.

  - **msgChunkPublished**: Whether chunked messages have been published on this topic.

  - **backlogSize**: Estimated total unconsumed or backlog size (in bytes).

  - **offloadedStorageSize**: Space used to store the offloaded messages for the topic (in bytes).

  - **waitingPublishers**: The number of publishers waiting in a queue in exclusive access mode.

  - **deduplicationStatus**: The status of message deduplication for the topic.

  - **topicEpoch**: The topic epoch, or empty if not set.

  - **nonContiguousDeletedMessagesRanges**: The number of non-contiguous deleted messages ranges.

  - **nonContiguousDeletedMessagesRangesSerializedSize**: The serialized size of non-contiguous deleted messages ranges.

  - **publishers**: The list of all local publishers into the topic. The list can range from zero to thousands of entries.

  - **accessMode**: The type of access to the topic that the producer requires.

  - **msgRateIn**: The total rate of messages (msg/s) published by this publisher.

  - **msgThroughputIn**: The total throughput (bytes/s) of the messages published by this publisher.

  - **averageMsgSize**: The average message size in bytes from this publisher within the last interval.

  - **chunkedMessageRate**: The total rate of chunked messages published by this publisher.

  - **producerId**: The internal identifier for this producer on this topic.

  - **producerName**: The internal identifier for this producer, generated by the client library.

  - **address**: The IP address and source port for the connection of this producer.

  - **connectedSince**: The timestamp when this producer was created or last reconnected.

  - **clientVersion**: The client library version of this producer.

  - **metadata**: Metadata (key/value strings) associated with this publisher.

  - **subscriptions**: The list of all local subscriptions to the topic.

  - **my-subscription**: The name of this subscription. It is defined by the client.

  - **msgRateOut**: The total rate of messages (msg/s) delivered on this subscription.

  - **msgThroughputOut**: The total throughput (bytes/s) delivered on this subscription.

  - **msgBacklog**: The number of messages in the subscription backlog.

  - **type**: The subscription type.

  - **msgRateExpired**: The rate at which messages were discarded instead of dispatched from this subscription due to TTL.

  - **lastExpireTimestamp**: The timestamp of the last message expiry execution.

  - **lastConsumedFlowTimestamp**: The timestamp of the last flow command received.

  - **lastConsumedTimestamp**: The latest consume timestamp across all consumers of this subscription.

  - **lastAckedTimestamp**: The latest acknowledgment timestamp across all consumers of this subscription.

  - **bytesOutCounter**: Total bytes delivered to consumers.

  - **msgOutCounter**: Total messages delivered to consumers.

  - **msgRateRedeliver**: Total rate of messages redelivered on this subscription (msg/s).

  - **chunkedMessageRate**: Chunked message dispatch rate.

  - **backlogSize**: Size of backlog for this subscription (in bytes).

  - **earliestMsgPublishTimeInBacklog**: The publish time of the earliest message in the backlog for the subscription (ms).

  - **msgBacklogNoDelayed**: The number of messages in the subscription backlog, excluding delayed messages.

  - **blockedSubscriptionOnUnackedMsgs**: Whether the subscription is blocked because it reached the threshold of unacked messages.

  - **msgDelayed**: The number of delayed messages currently being tracked.

  - **unackedMessages**: The number of unacknowledged messages for the subscription, where an unacknowledged message is one that has been sent to a consumer but not yet acknowledged. This field is only meaningful when using a subscription that tracks individual message acknowledgement.

  - **activeConsumerName**: The name of the consumer that is active for single-active-consumer subscriptions, for example, failover or exclusive.

  - **totalMsgExpired**: Total messages expired on this subscription.

  - **lastMarkDeleteAdvancedTimestamp**: The timestamp when the mark-delete position last advanced.

  - **durable**: Whether the subscription is durable or ephemeral (for example, from a reader).

  - **replicated**: Mark that the subscription state is kept in sync across different regions.

  - **allowOutOfOrderDelivery**: Whether out-of-order delivery is allowed on the Key_Shared subscription.
- - - **keySharedMode**: Whether the Key_Shared subscription mode is AUTO_SPLIT or STICKY. - - - **consumersAfterMarkDeletePosition**: This is for Key_Shared subscription to get the recentJoinedConsumers in the Key_Shared subscription. - - - **nonContiguousDeletedMessagesRanges**: The number of non-contiguous deleted messages ranges. - - - **nonContiguousDeletedMessagesRangesSerializedSize**: The serialized size of non-contiguous deleted messages ranges. - - - **consumers**: The list of connected consumers for this subscription. - - - **msgRateOut**: The total rate of messages (msg/s) delivered to the consumer. - - - **msgThroughputOut**: The total throughput (bytes/s) delivered to the consumer. - - - **consumerName**: The internal identifier for this consumer, generated by the client library. - - - **availablePermits**: The number of messages that the consumer has space for in the client library's listen queue. `0` means the client library's queue is full and `receive()` isn't being called. A non-zero value means this consumer is ready for dispatched messages. - - - **unackedMessages**: The number of unacknowledged messages for the consumer, where an unacknowledged message is one that has been sent to the consumer but not yet acknowledged. This field is only meaningful when using a subscription that tracks individual message acknowledgement. - - - **blockedConsumerOnUnackedMsgs**: The flag used to verify if the consumer is blocked due to reaching threshold of the unacknowledged messages. - - - **lastConsumedTimestamp**: The timestamp when the consumer reads a message the last time. - - - **lastAckedTimestamp**: The timestamp when the consumer acknowledges a message the last time. - - - **address**: The IP address and source port for the connection of this consumer. - - - **connectedSince**: The timestamp when this consumer is created or reconnected last time. - - - **clientVersion**: The client library version of this consumer. - - - **bytesOutCounter**: Total bytes delivered to consumer. - - - **msgOutCounter**: Total messages delivered to consumer. - - - **msgRateRedeliver**: Total rate of messages redelivered by this consumer (msg/s). - - - **chunkedMessageRate**: The total rate of chunked messages delivered to this consumer. - - - **avgMessagesPerEntry**: Number of average messages per entry for the consumer consumed. - - - **readPositionWhenJoining**: The read position of the cursor when the consumer joining. - - - **keyHashRanges**: Hash ranges assigned to this consumer if is Key_Shared sub mode. - - - **metadata**: Metadata (key/value strings) associated with this consumer. - - - **replication**: This section gives the stats for cross-colo replication of this topic - - - **msgRateIn**: The total rate (msg/s) of messages received from the remote cluster. - - - **msgThroughputIn**: The total throughput (bytes/s) received from the remote cluster. - - - **msgRateOut**: The total rate of messages (msg/s) delivered to the replication-subscriber. - - - **msgThroughputOut**: The total throughput (bytes/s) delivered to the replication-subscriber. - - - **msgRateExpired**: The total rate of messages (msg/s) expired. - - - **replicationBacklog**: The number of messages pending to be replicated to remote cluster. - - - **connected**: Whether the outbound replicator is connected. - - - **replicationDelayInSeconds**: How long the oldest message has been waiting to be sent through the connection, if connected is `true`. 
- - - **inboundConnection**: The IP and port of the broker in the remote cluster's publisher connection to this broker. - - - **inboundConnectedSince**: The TCP connection being used to publish messages to the remote cluster. If there are no local publishers connected, this connection is automatically closed after a minute. - - - **outboundConnection**: The address of the outbound replication connection. - - - **outboundConnectedSince**: The timestamp of establishing outbound connection. - -The following is an example of a topic status. - -```json - -{ - "msgRateIn" : 0.0, - "msgThroughputIn" : 0.0, - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "bytesInCounter" : 504, - "msgInCounter" : 9, - "bytesOutCounter" : 2296, - "msgOutCounter" : 41, - "averageMsgSize" : 0.0, - "msgChunkPublished" : false, - "storageSize" : 504, - "backlogSize" : 0, - "earliestMsgPublishTimeInBacklogs": 0, - "offloadedStorageSize" : 0, - "publishers" : [ { - "accessMode" : "Shared", - "msgRateIn" : 0.0, - "msgThroughputIn" : 0.0, - "averageMsgSize" : 0.0, - "chunkedMessageRate" : 0.0, - "producerId" : 0, - "metadata" : { }, - "address" : "/127.0.0.1:65402", - "connectedSince" : "2021-06-09T17:22:55.913+08:00", - "clientVersion" : "2.9.0-SNAPSHOT", - "producerName" : "standalone-1-0" - } ], - "waitingPublishers" : 0, - "subscriptions" : { - "sub-demo" : { - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "bytesOutCounter" : 2296, - "msgOutCounter" : 41, - "msgRateRedeliver" : 0.0, - "chunkedMessageRate" : 0, - "msgBacklog" : 0, - "backlogSize" : 0, - "earliestMsgPublishTimeInBacklog": 0, - "msgBacklogNoDelayed" : 0, - "blockedSubscriptionOnUnackedMsgs" : false, - "msgDelayed" : 0, - "unackedMessages" : 0, - "type" : "Exclusive", - "activeConsumerName" : "20b81", - "msgRateExpired" : 0.0, - "totalMsgExpired" : 0, - "lastExpireTimestamp" : 0, - "lastConsumedFlowTimestamp" : 1623230565356, - "lastConsumedTimestamp" : 1623230583946, - "lastAckedTimestamp" : 1623230584033, - "lastMarkDeleteAdvancedTimestamp" : 1623230584033, - "consumers" : [ { - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "bytesOutCounter" : 2296, - "msgOutCounter" : 41, - "msgRateRedeliver" : 0.0, - "chunkedMessageRate" : 0.0, - "consumerName" : "20b81", - "availablePermits" : 959, - "unackedMessages" : 0, - "avgMessagesPerEntry" : 314, - "blockedConsumerOnUnackedMsgs" : false, - "lastAckedTimestamp" : 1623230584033, - "lastConsumedTimestamp" : 1623230583946, - "metadata" : { }, - "address" : "/127.0.0.1:65172", - "connectedSince" : "2021-06-09T17:22:45.353+08:00", - "clientVersion" : "2.9.0-SNAPSHOT" - } ], - "allowOutOfOrderDelivery": false, - "consumersAfterMarkDeletePosition" : { }, - "nonContiguousDeletedMessagesRanges" : 0, - "nonContiguousDeletedMessagesRangesSerializedSize" : 0, - "durable" : true, - "replicated" : false - } - }, - "replication" : { }, - "deduplicationStatus" : "Disabled", - "nonContiguousDeletedMessagesRanges" : 0, - "nonContiguousDeletedMessagesRangesSerializedSize" : 0 -} - -``` - -To get the status of a topic, you can use the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics stats \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/stats|operation/getStats?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().getStats(topic); - -``` - - - - -```` - -### Get internal stats - -You can get the detailed statistics of a topic. 

  - **entriesAddedCounter**: Messages published since this broker loaded this topic.

  - **numberOfEntries**: The total number of messages being tracked.

  - **totalSize**: The total storage size in bytes of all messages.

  - **currentLedgerEntries**: The count of messages written to the ledger that is currently open for writing.

  - **currentLedgerSize**: The size in bytes of messages written to the ledger that is currently open for writing.

  - **lastLedgerCreatedTimestamp**: The time when the last ledger was created.

  - **lastLedgerCreationFailureTimestamp**: The time when the last ledger creation failed.

  - **waitingCursorsCount**: The number of cursors that are "caught up" and waiting for a new message to be published.

  - **pendingAddEntriesCount**: The number of messages whose (asynchronous) write requests are pending completion.

  - **lastConfirmedEntry**: The ledgerid:entryid of the last message that was written successfully. If the entryid is `-1`, the ledger is open but no entries have been written yet.

  - **state**: The state of this ledger for writing. The state `LedgerOpened` means that a ledger is open for saving published messages.

  - **ledgers**: The ordered list of all ledgers for this topic holding messages.

  - **ledgerId**: The ID of this ledger.

  - **entries**: The total number of entries that belong to this ledger.

  - **size**: The size of messages written to this ledger (in bytes).

  - **offloaded**: Whether this ledger is offloaded.

  - **metadata**: The ledger metadata.

  - **schemaLedgers**: The ordered list of all ledgers for this topic schema.

  - **ledgerId**: The ID of this ledger.

  - **entries**: The total number of entries that belong to this ledger.

  - **size**: The size of messages written to this ledger (in bytes).

  - **offloaded**: Whether this ledger is offloaded.

  - **metadata**: The ledger metadata.

  - **compactedLedger**: The ledgers holding un-acked messages after topic compaction.

  - **ledgerId**: The ID of this ledger.

  - **entries**: The total number of entries that belong to this ledger.

  - **size**: The size of messages written to this ledger (in bytes).

  - **offloaded**: Whether this ledger is offloaded. The value is `false` for the compacted topic ledger.

  - **cursors**: The list of all cursors on this topic. Each subscription in the topic stats has a cursor.

  - **markDeletePosition**: All messages before the markDeletePosition are acknowledged by the subscriber.

  - **readPosition**: The latest position from which the subscriber reads messages.

  - **waitingReadOp**: This is true when the subscription has read the latest message published to the topic and is waiting for new messages to be published.

  - **pendingReadOps**: The number of outstanding read requests to BookKeeper currently in progress.

  - **messagesConsumedCounter**: The number of messages this cursor has acked since this broker loaded this topic.

  - **cursorLedger**: The ledger being used to persistently store the current markDeletePosition.

  - **cursorLedgerLastEntry**: The last entryid used to persistently store the current markDeletePosition.

  - **individuallyDeletedMessages**: If acknowledgments are done out of order, this shows the ranges of messages acknowledged between the markDeletePosition and the read position.

  - **lastLedgerSwitchTimestamp**: The last time the cursor ledger was rolled over.
- - - **state**: The state of the cursor ledger: `Open` means you have a cursor ledger for saving updates of the markDeletePosition. - -The following is an example of the detailed statistics of a topic. - -```json - -{ - "entriesAddedCounter":0, - "numberOfEntries":0, - "totalSize":0, - "currentLedgerEntries":0, - "currentLedgerSize":0, - "lastLedgerCreatedTimestamp":"2021-01-22T21:12:14.868+08:00", - "lastLedgerCreationFailureTimestamp":null, - "waitingCursorsCount":0, - "pendingAddEntriesCount":0, - "lastConfirmedEntry":"3:-1", - "state":"LedgerOpened", - "ledgers":[ - { - "ledgerId":3, - "entries":0, - "size":0, - "offloaded":false, - "metadata":null - } - ], - "cursors":{ - "test":{ - "markDeletePosition":"3:-1", - "readPosition":"3:-1", - "waitingReadOp":false, - "pendingReadOps":0, - "messagesConsumedCounter":0, - "cursorLedger":4, - "cursorLedgerLastEntry":1, - "individuallyDeletedMessages":"[]", - "lastLedgerSwitchTimestamp":"2021-01-22T21:12:14.966+08:00", - "state":"Open", - "numberOfEntriesSinceFirstNotAckedMessage":0, - "totalNonContiguousDeletedMessagesRange":0, - "properties":{ - - } - } - }, - "schemaLedgers":[ - { - "ledgerId":1, - "entries":11, - "size":10, - "offloaded":false, - "metadata":null - } - ], - "compactedLedger":{ - "ledgerId":-1, - "entries":-1, - "size":-1, - "offloaded":false, - "metadata":null - } -} - -``` - -To get the internal status of a topic, you can use the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics stats-internal \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/internalStats|operation/getInternalStats?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().getInternalStats(topic); - -``` - - - - -```` - -### Peek messages - -You can peek a number of messages for a specific subscription of a given topic in the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics peek-messages \ - --count 10 --subscription my-subscription \ - persistent://test-tenant/ns1/tp1 \ - -Message ID: 315674752:0 -Properties: { "X-Pulsar-publish-time" : "2015-07-13 17:40:28.451" } -msg-payload - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/position/:messagePosition|operation/peekNthMessage?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String subName = "my-subscription"; -int numMessages = 1; -admin.topics().peekMessages(topic, subName, numMessages); - -``` - - - - -```` - -### Get message by ID - -You can fetch the message with the given ledger ID and entry ID in the following ways. - -````mdx-code-block - - - -```shell - -$ ./bin/pulsar-admin topics get-message-by-id \ - persistent://public/default/my-topic \ - -l 10 -e 0 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/ledger/:ledgerId/entry/:entryId|operation/getMessageById?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -long ledgerId = 10; -long entryId = 10; -admin.topics().getMessageById(topic, ledgerId, entryId); - -``` - - - - -```` - -### Examine messages - -You can examine a specific message on a topic by position relative to the earliest or the latest message. 

````mdx-code-block


```shell

./bin/pulsar-admin topics examine-messages \
  persistent://public/default/my-topic \
  -i latest -m 1

```


{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/examinemessage?initialPosition=:initialPosition&messagePosition=:messagePosition|operation/examineMessage?version=@pulsar:version_number@}


```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
admin.topics().examineMessage(topic, "latest", 1);

```


````

### Get message ID

You can get the message ID of a message published at or just after the given datetime.

````mdx-code-block


```shell

./bin/pulsar-admin topics get-message-id \
  persistent://public/default/my-topic \
  -d 2021-06-28T19:01:17Z

```


{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/messageid/:timestamp|operation/getMessageIdByTimestamp?version=@pulsar:version_number@}


```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
long timestamp = System.currentTimeMillis();
admin.topics().getMessageIdByTimestamp(topic, timestamp);

```


````


### Skip messages

You can skip a number of messages for a specific subscription of a given topic in the following ways.

````mdx-code-block


```shell

$ pulsar-admin topics skip \
  --count 10 --subscription my-subscription \
  persistent://test-tenant/ns1/tp1

```


{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/skip/:numMessages|operation/skipMessages?version=@pulsar:version_number@}


```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
String subName = "my-subscription";
int numMessages = 1;
admin.topics().skipMessages(topic, subName, numMessages);

```


````

### Skip all messages

You can skip all the old messages for a specific subscription of a given topic.

````mdx-code-block


```shell

$ pulsar-admin topics skip-all \
  --subscription my-subscription \
  persistent://test-tenant/ns1/tp1

```


{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/skip_all|operation/skipAllMessages?version=@pulsar:version_number@}


```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
String subName = "my-subscription";
admin.topics().skipAllMessages(topic, subName);

```


````

### Reset cursor

You can reset a subscription cursor back to the position it was at X minutes ago. The command computes the time and cursor position X minutes before the current time and resets the cursor to that position. You can reset the cursor in the following ways.

````mdx-code-block


```shell

$ pulsar-admin topics reset-cursor \
  --subscription my-subscription --time 10 \
  persistent://test-tenant/ns1/tp1

```


{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/resetcursor/:timestamp|operation/resetCursor?version=@pulsar:version_number@}


```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
String subName = "my-subscription";
long timestamp = 2342343L;
admin.topics().resetCursor(topic, subName, timestamp);

```


````
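Since the command resets the cursor by time, the Java variant usually computes the target timestamp from the current clock. The following is a small sketch using the timestamp-based overload shown above; the 10-minute window mirrors the `--time 10` CLI example, and the service URL is a placeholder:

```java

import java.util.concurrent.TimeUnit;

import org.apache.pulsar.client.admin.PulsarAdmin;

public class ResetCursorExample {
    public static void main(String[] args) throws Exception {
        try (PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080") // placeholder URL
                .build()) {
            String topic = "persistent://test-tenant/ns1/tp1";
            String subName = "my-subscription";
            // Rewind the subscription to where it was 10 minutes ago.
            long timestamp = System.currentTimeMillis() - TimeUnit.MINUTES.toMillis(10);
            admin.topics().resetCursor(topic, subName, timestamp);
        }
    }
}

```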
### Look up topic's owner broker

You can locate the owner broker of the given topic in the following ways.

````mdx-code-block


```shell

$ pulsar-admin topics lookup \
  persistent://test-tenant/ns1/tp1

 "pulsar://broker1.org.com:4480"

```


{@inject: endpoint|GET|/lookup/v2/topic/:schema/:tenant/:namespace/:topic|/?version=@pulsar:version_number@}


```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
admin.lookups().lookupTopic(topic);

```


````

### Look up partitioned topic's owner broker

You can locate the owner broker of the given partitioned topic in the following ways.

````mdx-code-block


```shell

$ pulsar-admin topics partitioned-lookup \
  persistent://test-tenant/ns1/my-topic

 "persistent://test-tenant/ns1/my-topic-partition-0 pulsar://localhost:6650"
 "persistent://test-tenant/ns1/my-topic-partition-1 pulsar://localhost:6650"
 "persistent://test-tenant/ns1/my-topic-partition-2 pulsar://localhost:6650"
 "persistent://test-tenant/ns1/my-topic-partition-3 pulsar://localhost:6650"

```


```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
admin.lookups().lookupPartitionedTopic(topic);

```

Look up the partitioned topic sorted by broker URL:

```shell

$ pulsar-admin topics partitioned-lookup \
  persistent://test-tenant/ns1/my-topic --sort-by-broker

 "pulsar://localhost:6650 [persistent://test-tenant/ns1/my-topic-partition-0, persistent://test-tenant/ns1/my-topic-partition-1, persistent://test-tenant/ns1/my-topic-partition-2, persistent://test-tenant/ns1/my-topic-partition-3]"

```


````

### Get bundle

You can get the range of the bundle that the given topic belongs to in the following ways.

````mdx-code-block


```shell

$ pulsar-admin topics bundle-range \
  persistent://test-tenant/ns1/tp1

 "0x00000000_0xffffffff"

```


{@inject: endpoint|GET|/lookup/v2/topic/:topic_domain/:tenant/:namespace/:topic/bundle|/?version=@pulsar:version_number@}


```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
admin.lookups().getBundleRange(topic);

```


````

### Get subscriptions

You can check all subscription names for a given topic in the following ways.

````mdx-code-block


```shell

$ pulsar-admin topics subscriptions \
  persistent://test-tenant/ns1/tp1

 my-subscription

```


{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/subscriptions|operation/getSubscriptions?version=@pulsar:version_number@}


```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
admin.topics().getSubscriptions(topic);

```


````

### Get last message ID

You can get the last committed message ID for a persistent topic. It is available since the 2.3.0 release.

````mdx-code-block


```shell

pulsar-admin topics last-message-id topic-name

```


{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/lastMessageId|operation/getLastMessageId?version=@pulsar:version_number@}


```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
admin.topics().getLastMessageId(topic);

```


````

### Get backlog size

You can get the backlog size of a single partition topic or a non-partitioned topic with a given message ID (in bytes).
- -````mdx-code-block - - - -```shell - -$ pulsar-admin topics get-backlog-size \ - -m 1:1 \ - persistent://test-tenant/ns1/tp1-partition-0 \ - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/:schema/:tenant/:namespace/:topic/backlogSize|operation/getBacklogSizeByMessageId?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -MessageId messageId = MessageId.earliest; -admin.topics().getBacklogSizeByMessageId(topic, messageId); - -``` - - - - -```` - - -### Configure deduplication snapshot interval - -#### Get deduplication snapshot interval - -To get the topic-level deduplication snapshot interval, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics get-deduplication-snapshot-interval options - -``` - - - - -``` - -{@inject: endpoint|GET|/admin/v2/topics/:tenant/:namespace/:topic/deduplicationSnapshotInterval|operation/getDeduplicationSnapshotInterval?version=@pulsar:version_number@} - -``` - - - - -```java - -admin.topics().getDeduplicationSnapshotInterval(topic) - -``` - - - - -```` - -#### Set deduplication snapshot interval - -To set the topic-level deduplication snapshot interval, use one of the following methods. - -> **Prerequisite** `brokerDeduplicationEnabled` must be set to `true`. - -````mdx-code-block - - - -``` - -pulsar-admin topics set-deduplication-snapshot-interval options - -``` - - - - -``` - -{@inject: endpoint|POST|/admin/v2/topics/:tenant/:namespace/:topic/deduplicationSnapshotInterval|operation/setDeduplicationSnapshotInterval?version=@pulsar:version_number@} - -``` - -```json - -{ - "interval": 1000 -} - -``` - - - - -```java - -admin.topics().setDeduplicationSnapshotInterval(topic, 1000) - -``` - - - - -```` - -#### Remove deduplication snapshot interval - -To remove the topic-level deduplication snapshot interval, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics remove-deduplication-snapshot-interval options - -``` - - - - -``` - -{@inject: endpoint|DELETE|/admin/v2/topics/:tenant/:namespace/:topic/deduplicationSnapshotInterval|operation/deleteDeduplicationSnapshotInterval?version=@pulsar:version_number@} - -``` - - - - -```java - -admin.topics().removeDeduplicationSnapshotInterval(topic) - -``` - - - - -```` - - -### Configure inactive topic policies - -#### Get inactive topic policies - -To get the topic-level inactive topic policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics get-inactive-topic-policies options - -``` - - - - -{@inject: endpoint|GET|/admin/v2/topics/:tenant/:namespace/:topic/inactiveTopicPolicies|operation/getInactiveTopicPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getInactiveTopicPolicies(topic) - -``` - - - - -```` - -#### Set inactive topic policies - -To set the topic-level inactive topic policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics set-inactive-topic-policies options - -``` - - - - -{@inject: endpoint|POST|/admin/v2/topics/:tenant/:namespace/:topic/inactiveTopicPolicies|operation/setInactiveTopicPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().setInactiveTopicPolicies(topic, inactiveTopicPolicies) - -``` - - - - -```` - -#### Remove inactive topic policies - -To remove the topic-level inactive topic policies, use one of the following methods. 
- -````mdx-code-block - - - -``` - -pulsar-admin topics remove-inactive-topic-policies options - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/topics/:tenant/:namespace/:topic/inactiveTopicPolicies|operation/removeInactiveTopicPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().removeInactiveTopicPolicies(topic) - -``` - - - - -```` - - -### Configure offload policies - -#### Get offload policies - -To get the topic-level offload policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics get-offload-policies options - -``` - - - - -{@inject: endpoint|GET|/admin/v2/topics/:tenant/:namespace/:topic/offloadPolicies|operation/getOffloadPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getOffloadPolicies(topic) - -``` - - - - -```` - -#### Set offload policies - -To set the topic-level offload policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics set-offload-policies options - -``` - - - - -{@inject: endpoint|POST|/admin/v2/topics/:tenant/:namespace/:topic/offloadPolicies|operation/setOffloadPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().setOffloadPolicies(topic, offloadPolicies) - -``` - - - - -```` - -#### Remove offload policies - -To remove the topic-level offload policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics remove-offload-policies options - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/topics/:tenant/:namespace/:topic/offloadPolicies|operation/removeOffloadPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().removeOffloadPolicies(topic) - -``` - - - - -```` - - -## Manage non-partitioned topics -You can use Pulsar [admin API](admin-api-overview.md) to create, delete and check status of non-partitioned topics. - -### Create -Non-partitioned topics must be explicitly created. When creating a new non-partitioned topic, you need to provide a name for the topic. - -By default, 60 seconds after creation, topics are considered inactive and deleted automatically to avoid generating trash data. To disable this feature, set `brokerDeleteInactiveTopicsEnabled` to `false`. To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to a specific value. - -For more information about the two parameters, see [here](reference-configuration.md#broker). - -You can create non-partitioned topics in the following ways. -````mdx-code-block - - - -When you create non-partitioned topics with the [`create`](reference-pulsar-admin.md#create-3) command, you need to specify the topic name as an argument. - -```shell - -$ bin/pulsar-admin topics create \ - persistent://my-tenant/my-namespace/my-topic - -``` - -:::note - -When you create a non-partitioned topic with the suffix '-partition-' followed by numeric value like 'xyz-topic-partition-x' for the topic name, if a partitioned topic with same suffix 'xyz-topic-partition-y' exists, then the numeric value(x) for the non-partitioned topic must be larger than the number of partitions(y) of the partitioned topic. Otherwise, you cannot create such a non-partitioned topic. 
- -::: - - - - -{@inject: endpoint|PUT|/admin/v2/:schema/:tenant/:namespace/:topic|operation/createNonPartitionedTopic?version=@pulsar:version_number@} - - - - -```java - -String topicName = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().createNonPartitionedTopic(topicName); - -``` - - - - -```` - -### Delete -You can delete non-partitioned topics in the following ways. -````mdx-code-block - - - -```shell - -$ bin/pulsar-admin topics delete \ - persistent://my-tenant/my-namespace/my-topic - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/:schema/:tenant/:namespace/:topic|operation/deleteTopic?version=@pulsar:version_number@} - - - - -```java - -admin.topics().delete(topic); - -``` - - - - -```` - -### List - -You can get the list of topics under a given namespace in the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics list tenant/namespace -persistent://tenant/namespace/topic1 -persistent://tenant/namespace/topic2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getList?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getList(namespace); - -``` - - - - -```` - -### Stats - -You can check the current statistics of a given topic. The following is an example. For description of each stats, refer to [get stats](#get-stats). - -```json - -{ - "msgRateIn": 4641.528542257553, - "msgThroughputIn": 44663039.74947473, - "msgRateOut": 0, - "msgThroughputOut": 0, - "averageMsgSize": 1232439.816728665, - "storageSize": 135532389160, - "publishers": [ - { - "msgRateIn": 57.855383881403576, - "msgThroughputIn": 558994.7078932219, - "averageMsgSize": 613135, - "producerId": 0, - "producerName": null, - "address": null, - "connectedSince": null - } - ], - "subscriptions": { - "my-topic_subscription": { - "msgRateOut": 0, - "msgThroughputOut": 0, - "msgBacklog": 116632, - "type": null, - "msgRateExpired": 36.98245516804671, - "consumers": [] - } - }, - "replication": {} -} - -``` - -You can check the current statistics of a given topic and its connected producers and consumers in the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics stats \ - persistent://test-tenant/namespace/topic \ - --get-precise-backlog - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/stats|operation/getStats?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getStats(topic, false /* is precise backlog */); - -``` - - - - -```` - -## Manage partitioned topics -You can use Pulsar [admin API](admin-api-overview.md) to create, update, delete and check status of partitioned topics. - -### Create - -Partitioned topics must be explicitly created. When creating a new partitioned topic, you need to provide a name and the number of partitions for the topic. - -By default, 60 seconds after creation, topics are considered inactive and deleted automatically to avoid generating trash data. To disable this feature, set `brokerDeleteInactiveTopicsEnabled` to `false`. To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to a specific value. - -For more information about the two parameters, see [here](reference-configuration.md#broker). - -You can create partitioned topics in the following ways. 

````mdx-code-block


When you create partitioned topics with the [`create-partitioned-topic`](reference-pulsar-admin.md#create-partitioned-topic)
command, you need to specify the topic name as an argument and the number of partitions using the `-p` or `--partitions` flag.

```shell

$ bin/pulsar-admin topics create-partitioned-topic \
  persistent://my-tenant/my-namespace/my-topic \
  --partitions 4

```

:::note

If a non-partitioned topic with the suffix '-partition-' followed by a numeric value, such as 'xyz-topic-partition-10', already exists, you cannot create a partitioned topic named 'xyz-topic', because the partitions of the partitioned topic could override the existing non-partitioned topic. To create such a partitioned topic, you have to delete the non-partitioned topic first.

:::


{@inject: endpoint|PUT|/admin/v2/:schema/:tenant/:namespace/:topic/partitions|operation/createPartitionedTopic?version=@pulsar:version_number@}


```java

String topicName = "persistent://my-tenant/my-namespace/my-topic";
int numPartitions = 4;
admin.topics().createPartitionedTopic(topicName, numPartitions);

```


````

### Create missed partitions

When topic auto-creation is disabled, and you have a partitioned topic without any partitions, you can use the [`create-missed-partitions`](reference-pulsar-admin.md#create-missed-partitions) command to create partitions for the topic.

````mdx-code-block


You can create missed partitions with the [`create-missed-partitions`](reference-pulsar-admin.md#create-missed-partitions) command and specify the topic name as an argument.

```shell

$ bin/pulsar-admin topics create-missed-partitions \
  persistent://my-tenant/my-namespace/my-topic

```


{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic|operation/createMissedPartitions?version=@pulsar:version_number@}


```java

String topicName = "persistent://my-tenant/my-namespace/my-topic";
admin.topics().createMissedPartitions(topicName);

```


````

### Get metadata

Partitioned topics are associated with metadata that you can view as a JSON object. The following metadata field is available.

Field | Description
:-----|:-------
`partitions` | The number of partitions into which the topic is divided.

````mdx-code-block


You can check the number of partitions in a partitioned topic with the [`get-partitioned-topic-metadata`](reference-pulsar-admin.md#get-partitioned-topic-metadata) subcommand.

```shell

$ pulsar-admin topics get-partitioned-topic-metadata \
  persistent://my-tenant/my-namespace/my-topic
{
  "partitions": 4
}

```


{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/partitions|operation/getPartitionedMetadata?version=@pulsar:version_number@}


```java

String topicName = "persistent://my-tenant/my-namespace/my-topic";
admin.topics().getPartitionedTopicMetadata(topicName);

```


````

### Update

You can update the number of partitions for an existing partitioned topic *if* the topic is non-global. However, you can only increase the number of partitions. Decreasing the number of partitions would require deleting the topic, which is not supported in Pulsar.

Producers and consumers automatically discover the newly created partitions.

````mdx-code-block


You can update partitioned topics with the [`update-partitioned-topic`](reference-pulsar-admin.md#update-partitioned-topic) command.
- -```shell - -$ pulsar-admin topics update-partitioned-topic \ - persistent://my-tenant/my-namespace/my-topic \ - --partitions 8 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:cluster/:namespace/:destination/partitions|operation/updatePartitionedTopic?version=@pulsar:version_number@} - - - - -```java - -admin.topics().updatePartitionedTopic(topic, numPartitions); - -``` - - - - -```` - -### Delete -You can delete partitioned topics with the [`delete-partitioned-topic`](reference-pulsar-admin.md#delete-partitioned-topic) command, REST API and Java. - -````mdx-code-block - - - -```shell - -$ bin/pulsar-admin topics delete-partitioned-topic \ - persistent://my-tenant/my-namespace/my-topic - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/:schema/:topic/:namespace/:destination/partitions|operation/deletePartitionedTopic?version=@pulsar:version_number@} - - - - -```java - -admin.topics().delete(topic); - -``` - - - - -```` - -### List -You can get the list of partitioned topics under a given namespace in the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics list-partitioned-topics tenant/namespace -persistent://tenant/namespace/topic1 -persistent://tenant/namespace/topic2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getPartitionedTopicList?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getPartitionedTopicList(namespace); - -``` - - - - -```` - -### Stats - -You can check the current statistics of a given partitioned topic. The following is an example. For description of each stats, refer to [get stats](#get-stats). - -Note that in the subscription JSON object, `chuckedMessageRate` is deprecated. Please use `chunkedMessageRate`. Both will be sent in the JSON for now. - -```json - -{ - "msgRateIn" : 999.992947159793, - "msgThroughputIn" : 1070918.4635439808, - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "bytesInCounter" : 270318763, - "msgInCounter" : 252489, - "bytesOutCounter" : 0, - "msgOutCounter" : 0, - "averageMsgSize" : 1070.926056966454, - "msgChunkPublished" : false, - "storageSize" : 270316646, - "backlogSize" : 200921133, - "publishers" : [ { - "msgRateIn" : 999.992947159793, - "msgThroughputIn" : 1070918.4635439808, - "averageMsgSize" : 1070.3333333333333, - "chunkedMessageRate" : 0.0, - "producerId" : 0 - } ], - "subscriptions" : { - "test" : { - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "bytesOutCounter" : 0, - "msgOutCounter" : 0, - "msgRateRedeliver" : 0.0, - "chuckedMessageRate" : 0, - "chunkedMessageRate" : 0, - "msgBacklog" : 144318, - "msgBacklogNoDelayed" : 144318, - "blockedSubscriptionOnUnackedMsgs" : false, - "msgDelayed" : 0, - "unackedMessages" : 0, - "msgRateExpired" : 0.0, - "lastExpireTimestamp" : 0, - "lastConsumedFlowTimestamp" : 0, - "lastConsumedTimestamp" : 0, - "lastAckedTimestamp" : 0, - "consumers" : [ ], - "isDurable" : true, - "isReplicated" : false - } - }, - "replication" : { }, - "metadata" : { - "partitions" : 3 - }, - "partitions" : { } -} - -``` - -You can check the current statistics of a given partitioned topic and its connected producers and consumers in the following ways. 

````mdx-code-block


```shell

$ pulsar-admin topics partitioned-stats \
  persistent://test-tenant/namespace/topic \
  --per-partition

```


{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/partitioned-stats|operation/getPartitionedStats?version=@pulsar:version_number@}


```java

admin.topics().getPartitionedStats(topic, true /* per partition */, false /* is precise backlog */);

```


````

### Internal stats

You can check the detailed statistics of a topic. The following is an example. For a description of each stat, refer to [get internal stats](#get-internal-stats).

```json

{
  "entriesAddedCounter": 20449518,
  "numberOfEntries": 3233,
  "totalSize": 331482,
  "currentLedgerEntries": 3233,
  "currentLedgerSize": 331482,
  "lastLedgerCreatedTimestamp": "2016-06-29 03:00:23.825",
  "lastLedgerCreationFailureTimestamp": null,
  "waitingCursorsCount": 1,
  "pendingAddEntriesCount": 0,
  "lastConfirmedEntry": "324711539:3232",
  "state": "LedgerOpened",
  "ledgers": [
    {
      "ledgerId": 324711539,
      "entries": 0,
      "size": 0
    }
  ],
  "cursors": {
    "my-subscription": {
      "markDeletePosition": "324711539:3133",
      "readPosition": "324711539:3233",
      "waitingReadOp": true,
      "pendingReadOps": 0,
      "messagesConsumedCounter": 20449501,
      "cursorLedger": 324702104,
      "cursorLedgerLastEntry": 21,
      "individuallyDeletedMessages": "[(324711539:3134‥324711539:3136], (324711539:3137‥324711539:3140], ]",
      "lastLedgerSwitchTimestamp": "2016-06-29 01:30:19.313",
      "state": "Open"
    }
  }
}

```

You can get the internal stats for the partitioned topic in the following ways.

````mdx-code-block


```shell

$ pulsar-admin topics stats-internal \
  persistent://test-tenant/namespace/topic

```


{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/internalStats|operation/getInternalStats?version=@pulsar:version_number@}


```java

admin.topics().getInternalStats(topic);

```


````


## Publish to partitioned topics

By default, Pulsar topics are served by a single broker, which limits the maximum throughput of a topic. *Partitioned topics* can span multiple brokers and thus allow for higher throughput.

You can publish to partitioned topics using Pulsar client libraries. When publishing to partitioned topics, you must specify a routing mode. If you do not specify any routing mode when you create a new producer, the round robin routing mode is used.

### Routing mode

You can specify the routing mode in the ProducerConfiguration object that you use to configure your producer. The routing mode determines which partition (internal topic) each message should be published to.

The following {@inject: javadoc:MessageRoutingMode:/client/org/apache/pulsar/client/api/MessageRoutingMode} options are available.

Mode | Description
:--------|:------------
`RoundRobinPartition` | If no key is provided, the producer publishes messages across all partitions in a round-robin fashion to achieve maximum throughput. Round-robin is not done per individual message; it is applied at the batching-delay boundary to ensure that batching remains effective. If a key is specified on the message, the partitioned producer hashes the key and assigns the message to a particular partition. This is the default mode.
`SinglePartition` | If no key is provided, the producer picks a single partition randomly and publishes all messages into that partition. If a key is specified on the message, the partitioned producer hashes the key and assigns the message to a particular partition.
`CustomPartition` | Use a custom message router implementation that is called to determine the partition for a particular message. You can create a custom routing mode by using the Java client and implementing the {@inject: javadoc:MessageRouter:/client/org/apache/pulsar/client/api/MessageRouter} interface.

The following is an example:

```java

String pulsarBrokerRootUrl = "pulsar://localhost:6650";
String topic = "persistent://my-tenant/my-namespace/my-topic";

PulsarClient pulsarClient = PulsarClient.builder().serviceUrl(pulsarBrokerRootUrl).build();
Producer<byte[]> producer = pulsarClient.newProducer()
        .topic(topic)
        .messageRoutingMode(MessageRoutingMode.SinglePartition)
        .create();
producer.send("Partitioned topic message".getBytes());

```

### Custom message router

To use a custom message router, you need to provide an implementation of the {@inject: javadoc:MessageRouter:/client/org/apache/pulsar/client/api/MessageRouter} interface, which has just one `choosePartition` method:

```java

public interface MessageRouter extends Serializable {
    int choosePartition(Message<?> msg);
}

```

The following router routes every message to partition 10:

```java

public class AlwaysTenRouter implements MessageRouter {
    public int choosePartition(Message<?> msg) {
        return 10;
    }
}

```

With that implementation in place, you can send messages as follows:

```java

String pulsarBrokerRootUrl = "pulsar://localhost:6650";
String topic = "persistent://my-tenant/my-cluster-my-namespace/my-topic";

PulsarClient pulsarClient = PulsarClient.builder().serviceUrl(pulsarBrokerRootUrl).build();
Producer<byte[]> producer = pulsarClient.newProducer()
        .topic(topic)
        .messageRouter(new AlwaysTenRouter())
        .create();
producer.send("Partitioned topic message".getBytes());

```

### How to choose partitions when using a key
If a message has a key, it supersedes the round robin routing policy. The following example illustrates how the partition is chosen when a key is present.

```java

// If the message has a key, it supersedes the round robin routing policy
    if (msg.hasKey()) {
        return signSafeMod(hash.makeHash(msg.getKey()), topicMetadata.numPartitions());
    }

    if (isBatchingEnabled) { // if batching is enabled, choose partition on `partitionSwitchMs` boundary.
        long currentMs = clock.millis();
        return signSafeMod(currentMs / partitionSwitchMs + startPtnIdx, topicMetadata.numPartitions());
    } else {
        return signSafeMod(PARTITION_INDEX_UPDATER.getAndIncrement(this), topicMetadata.numPartitions());
    }

```

## Manage subscriptions

You can use [Pulsar admin API](admin-api-overview.md) to create, check, and delete subscriptions.

### Create subscription

You can create a subscription for a topic using one of the following methods.
- -````mdx-code-block - - - - -```shell - -pulsar-admin topics create-subscription \ ---subscription my-subscription \ -persistent://test-tenant/ns1/tp1 - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/persistent/:tenant/:namespace/:topic/subscription/:subscription|operation/createSubscriptions?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String subscriptionName = "my-subscription"; -admin.topics().createSubscription(topic, subscriptionName, MessageId.latest); - -``` - - - - -```` - -### Get subscription - -You can check all subscription names for a given topic using one of the following methods. - -````mdx-code-block - - - - -```shell - -pulsar-admin topics subscriptions \ -persistent://test-tenant/ns1/tp1 \ -my-subscription - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/subscriptions|operation/getSubscriptions?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().getSubscriptions(topic); - -``` - - - - -```` - -### Unsubscribe subscription - -When a subscription does not process messages any more, you can unsubscribe it using one of the following methods. - -````mdx-code-block - - - - -```shell - -pulsar-admin topics unsubscribe \ ---subscription my-subscription \ -persistent://test-tenant/ns1/tp1 - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/:topic/subscription/:subscription|operation/deleteSubscription?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String subscriptionName = "my-subscription"; -admin.topics().deleteSubscription(topic, subscriptionName); - -``` - - - - -```` diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/administration-geo.md b/site2/website/versioned_docs/version-2.10.1-deprecated/administration-geo.md deleted file mode 100644 index 4f5b5565eca903..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/administration-geo.md +++ /dev/null @@ -1,298 +0,0 @@ ---- -id: administration-geo -title: Pulsar geo-replication -sidebar_label: "Geo-replication" -original_id: administration-geo ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -## Enable geo-replication for a namespace - -You must enable geo-replication on a [per-tenant basis](#concepts-multi-tenancy) in Pulsar. For example, you can enable geo-replication between two specific clusters only when a tenant has access to both clusters. - -Geo-replication is managed at the namespace level, which means you only need to create and configure a namespace to replicate messages between two or more provisioned clusters that a tenant can access. - -Complete the following tasks to enable geo-replication for a namespace: - -* [Enable a geo-replication namespace](#enable-geo-replication-at-namespace-level) -* [Configure that namespace to replicate across two or more provisioned clusters](admin-api-namespaces.md/#configure-replication-clusters) - -Any message published on *any* topic in that namespace is replicated to all clusters in the specified set. - -## Local persistence and forwarding - -When messages are produced on a Pulsar topic, messages are first persisted in the local cluster, and then forwarded asynchronously to the remote clusters. 
-
-In normal cases, when there are no connectivity issues, messages are replicated immediately, at the same time as they are dispatched to local consumers. Typically, the network [round-trip time](https://en.wikipedia.org/wiki/Round-trip_delay_time) (RTT) between the remote regions defines end-to-end delivery latency.
-
-Applications can create producers and consumers in any of the clusters, even when the remote clusters are not reachable (like during a network partition).
-
-Producers and consumers can publish messages to and consume messages from any cluster in a Pulsar instance. However, subscriptions are not limited to the cluster where they are created: once replicated subscriptions are enabled, they can also be transferred between clusters, and their state is kept in synchronization. Therefore, a topic can be asynchronously replicated across multiple geographical regions. In case of failover, a consumer can restart consuming messages from the failure point in a different cluster.
-
-![A typical geo-replication example with full-mesh pattern](/assets/geo-replication.png)
-
-In the aforementioned example, the **T1** topic is replicated among three clusters, **Cluster-A**, **Cluster-B**, and **Cluster-C**.
-
-All messages produced in any of the three clusters are delivered to all subscriptions in other clusters. In this case, **C1** and **C2** consumers receive all messages that **P1**, **P2**, and **P3** producers publish. Ordering is still guaranteed on a per-producer basis.
-
-## Configure replication
-
-This section guides you through the steps to configure geo-replicated clusters.
-
-1. [Connect replication clusters](#connect-replication-clusters)
-2. [Grant permissions to properties](#grant-permissions-to-properties)
-3. [Enable geo-replication](#enable-geo-replication)
-4. [Use topics with geo-replication](#use-topics-with-geo-replication)
-
-### Connect replication clusters
-
-To replicate data among clusters, you need to configure each cluster to connect to the others. You can use the [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/) tool to create a connection.
-
-**Example**
-
-Suppose that you have 3 replication clusters: `us-west`, `us-cent`, and `us-east`.
-
-1. Configure the connection from `us-west` to `us-east`.
-
-   Run the following command on `us-west`.
-
-```shell
-
-$ bin/pulsar-admin clusters create \
-  --broker-url pulsar://<dns-of-us-east-brokers>:<port> \
-  --url http://<dns-of-us-east-brokers>:<http-port> \
-  us-east
-
-```
-
-   :::tip
-
-   - If you want to use a secure connection for a cluster, you can use the flags `--broker-url-secure` and `--url-secure`. For more information, see [pulsar-admin clusters create](https://pulsar.apache.org/tools/pulsar-admin/).
-   - Different clusters may have different authentication settings. You can use the authentication flags `--auth-plugin` and `--auth-parameters` together to set cluster authentication, which overrides `brokerClientAuthenticationPlugin` and `brokerClientAuthenticationParameters` if `authenticationEnabled` is set to `true` in `broker.conf` and `standalone.conf`. For more information, see [authentication and authorization](concepts-authentication.md).
-
-   :::
-
-2. Configure the connection from `us-west` to `us-cent`.
-
-   Run the following command on `us-west`.
-
-```shell
-
-$ bin/pulsar-admin clusters create \
-  --broker-url pulsar://<dns-of-us-cent-brokers>:<port> \
-  --url http://<dns-of-us-cent-brokers>:<http-port> \
-  us-cent
-
-```
-
-3. Run similar commands on `us-east` and `us-cent` to create connections among clusters, or create them programmatically as shown in the sketch below.
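-
-A minimal sketch of the equivalent call with the Java admin client (the host names are placeholders, and `ClusterData.builder()` assumes a recent client version):
-
-```java
-
-import org.apache.pulsar.client.admin.PulsarAdmin;
-import org.apache.pulsar.common.policies.data.ClusterData;
-
-PulsarAdmin admin = PulsarAdmin.builder()
-        .serviceHttpUrl("http://us-west-broker.example.com:8080") // admin endpoint of us-west
-        .build();
-
-// Register us-east so that us-west can replicate to it
-admin.clusters().createCluster("us-east", ClusterData.builder()
-        .serviceUrl("http://us-east-broker.example.com:8080")
-        .brokerServiceUrl("pulsar://us-east-broker.example.com:6650")
-        .build());
-
-```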
- -### Grant permissions to properties - -To replicate to a cluster, the tenant needs permission to use that cluster. You can grant permission to the tenant when you create the tenant or grant later. - -Specify all the intended clusters when you create a tenant: - -```shell - -$ bin/pulsar-admin tenants create my-tenant \ - --admin-roles my-admin-role \ - --allowed-clusters us-west,us-east,us-cent - -``` - -To update permissions of an existing tenant, use `update` instead of `create`. - -### Enable geo-replication - -You can enable geo-replication at **namespace** or **topic** level. - -#### Enable geo-replication at namespace level - -You can create a namespace with the following command sample. - -```shell - -$ bin/pulsar-admin namespaces create my-tenant/my-namespace - -``` - -Initially, the namespace is not assigned to any cluster. You can assign the namespace to clusters using the `set-clusters` subcommand: - -```shell - -$ bin/pulsar-admin namespaces set-clusters my-tenant/my-namespace \ - --clusters us-west,us-east,us-cent - -``` - -#### Enable geo-replication at topic level - -You can set geo-replication at topic level using the command `pulsar-admin topics set-replication-clusters`. For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more information, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/). - -```shell - -$ bin/pulsar-admin topics set-replication-clusters --clusters us-west,us-east,us-cent my-tenant/my-namespace/my-topic - -``` - -:::tip - -- You can change the replication clusters for a namespace at any time, without disruption to ongoing traffic. Replication channels are immediately set up or stopped in all clusters as soon as the configuration changes. -- Once you create a geo-replication namespace, any topics that producers or consumers create within that namespace are replicated across clusters. Typically, each application uses the `serviceUrl` for the local cluster. - -::: - -### Use topics with geo-replication - -#### Selective replication - -By default, messages are replicated to all clusters configured for the namespace. You can restrict replication selectively by specifying a replication list for a message, and then that message is replicated only to the subset in the replication list. - -The following is an example for the [Java API](client-libraries-java.md). Note the use of the `setReplicationClusters` method when you construct the {@inject: javadoc:Message:/client/org/apache/pulsar/client/api/Message} object: - -```java - -List restrictReplicationTo = Arrays.asList( - "us-west", - "us-east" -); - -Producer producer = client.newProducer() - .topic("some-topic") - .create(); - -producer.newMessage() - .value("my-payload".getBytes()) - .setReplicationClusters(restrictReplicationTo) - .send(); - -``` - -#### Topic stats - -You can check topic-specific statistics for geo-replication topics using one of the following methods. - -````mdx-code-block - - - -Use the [`pulsar-admin topics stats`](https://pulsar.apache.org/tools/pulsar-admin/) command. - -```shell - -$ bin/pulsar-admin topics stats persistent://my-tenant/my-namespace/my-topic - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/stats|operation/getStats?version=@pulsar:version_number@} - - - - -```` - -Each cluster reports its own local stats, including the incoming and outgoing replication rates and backlogs. 
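-
-For example, these per-cluster replication stats can be read programmatically with the Java admin client (a sketch assuming an already-built `PulsarAdmin` instance):
-
-```java
-
-import org.apache.pulsar.common.policies.data.TopicStats;
-
-TopicStats stats = admin.topics().getStats("persistent://my-tenant/my-namespace/my-topic");
-
-// One entry per remote cluster that this topic replicates to
-stats.getReplication().forEach((remoteCluster, repl) ->
-        System.out.printf("%s: rateOut=%.1f msg/s, backlog=%d, connected=%b%n",
-                remoteCluster,
-                repl.getMsgRateOut(),
-                repl.getReplicationBacklog(),
-                repl.isConnected()));
-
-```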
-
-#### Delete a geo-replication topic
-
-Given that geo-replication topics exist in multiple regions, directly deleting a geo-replication topic is not possible. Instead, you should rely on automatic topic garbage collection.
-
-In Pulsar, a topic is automatically deleted when the topic meets all of the following three conditions:
-- no producers or consumers are connected to it;
-- it has no subscriptions;
-- no more messages are kept for retention.
-
-For geo-replication topics, each region uses a fault-tolerant mechanism to decide when deleting the topic locally is safe.
-
-You can explicitly disable topic garbage collection by setting `brokerDeleteInactiveTopicsEnabled` to `false` in your [broker configuration](reference-configuration.md#broker).
-
-To delete a geo-replication topic, close all producers and consumers on the topic, and delete all of its local subscriptions in every replication cluster. When Pulsar determines that no valid subscription for the topic remains across the system, it garbage collects the topic.
-
-## Replicated subscriptions
-
-Pulsar supports replicated subscriptions, so you can keep subscription state in sync, within a sub-second timeframe, in the context of a topic that is being asynchronously replicated across multiple geographical regions.
-
-In case of failover, a consumer can restart consuming from the failure point in a different cluster.
-
-### Enable replicated subscription
-
-Replicated subscription is disabled by default. You can enable replicated subscription when creating a consumer.
-
-```java
-
-Consumer<String> consumer = client.newConsumer(Schema.STRING)
-            .topic("my-topic")
-            .subscriptionName("my-subscription")
-            .replicateSubscriptionState(true)
-            .subscribe();
-
-```
-
-### Advantages
-
- * It is easy to implement the logic.
- * You can choose to enable or disable replicated subscription.
- * When you enable it, the overhead is low, and it is easy to configure.
- * When you disable it, the overhead is zero.
-
-### Limitations
-
-* When you enable replicated subscription, you're creating a consistent distributed snapshot to establish an association between message IDs from different clusters. The snapshots are taken periodically; the default interval is `1 second`. This means that a consumer failing over to a different cluster can potentially receive up to 1 second of duplicates. You can also configure the frequency of the snapshot in the `broker.conf` file.
-* Only the baseline cursor position is synced in replicated subscriptions; the individual acknowledgments are not. This means that messages acknowledged out of order could end up being delivered again in the case of a cluster failover.
-
-## Migrate data between clusters using geo-replication
-
-Using geo-replication to migrate data between clusters is a special use case of the [active-active replication pattern](concepts-replication.md/#active-active-replication) when you don't have a large amount of data.
-
-1. Create your new cluster.
-2. Add the new cluster to your old cluster.
-
-```shell
-
-  bin/pulsar-admin clusters create new-cluster
-
-```
-
-3. Add the new cluster to your tenant.
-
-```shell
-
-  bin/pulsar-admin tenants update my-tenant --allowed-clusters old-cluster,new-cluster
-
-```
-
-4. Set the clusters on your namespace.
-
-```shell
-
-  bin/pulsar-admin namespaces set-clusters my-tenant/my-ns --clusters old-cluster,new-cluster
-
-```
-
-5. Update your applications using [replicated subscriptions](#replicated-subscriptions).
-6. Validate subscription replication is active.
- -```shell - - bin/pulsar-admin topics stats-internal public/default/t1 - -``` - -7. Move your consumers and producers to the new cluster by modifying the values of `serviceURL`. - -:::note - -* The replication starts from step 4, which means existing messages in your old cluster are not replicated. -* If you have some older messages to migrate, you can pre-create the replication subscriptions for each topic and set it at the earliest position by using `pulsar-admin topics create-subscription -s pulsar.repl.new-cluster -m earliest `. - -::: - diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/administration-isolation.md b/site2/website/versioned_docs/version-2.10.1-deprecated/administration-isolation.md deleted file mode 100644 index b176d1f14c20db..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/administration-isolation.md +++ /dev/null @@ -1,124 +0,0 @@ ---- -id: administration-isolation -title: Pulsar isolation -sidebar_label: "Pulsar isolation" -original_id: administration-isolation ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -In an organization, a Pulsar instance provides services to multiple teams. When organizing the resources across multiple teams, you want to make a suitable isolation plan to avoid the resource competition between different teams and applications and provide high-quality messaging service. In this case, you need to take resource isolation into consideration and weigh your intended actions against expected and unexpected consequences. - -To enforce resource isolation, you can use the Pulsar isolation policy, which allows you to allocate resources (**broker** and **bookie**) for the namespace. - -## Broker isolation - -In Pulsar, when namespaces (more specifically, namespace bundles) are assigned dynamically to brokers, the namespace isolation policy limits the set of brokers that can be used for assignment. Before topics are assigned to brokers, you can set the namespace isolation policy with a primary or a secondary regex to select desired brokers. - -You can set a namespace isolation policy for a cluster using one of the following methods. - -````mdx-code-block - - - - -``` - -pulsar-admin ns-isolation-policy set options - -``` - -For more information about the command `pulsar-admin ns-isolation-policy set options`, see [here](https://pulsar.apache.org/tools/pulsar-admin/). - -**Example** - -```shell - -bin/pulsar-admin ns-isolation-policy set \ ---auto-failover-policy-type min_available \ ---auto-failover-policy-params min_limit=1,usage_threshold=80 \ ---namespaces my-tenant/my-namespace \ ---primary 10.193.216.* my-cluster policy-name - -``` - - - - -[PUT /admin/v2/namespaces/{tenant}/{namespace}](https://pulsar.apache.org/admin-rest-api/?version=master&apiversion=v2#operation/createNamespace) - - - - -For how to set namespace isolation policy using Java admin API, see [here](https://github.com/apache/pulsar/blob/master/pulsar-client-admin/src/main/java/org/apache/pulsar/client/admin/internal/NamespacesImpl.java#L251). - - - - -```` - -## Bookie isolation - -A namespace can be isolated into user-defined groups of bookies, which guarantees all the data that belongs to the namespace is stored in desired bookies. 
The bookie affinity group uses the BookKeeper [rack-aware placement policy](https://bookkeeper.apache.org/docs/latest/api/javadoc/org/apache/bookkeeper/client/EnsemblePlacementPolicy.html) and it is a way to feed rack information which is stored as JSON format in znode. - -You can set a bookie affinity group using one of the following methods. - -````mdx-code-block - - - - -``` - -pulsar-admin namespaces set-bookie-affinity-group options - -``` - -For more information about the command `pulsar-admin namespaces set-bookie-affinity-group options`, see [here](https://pulsar.apache.org/tools/pulsar-admin/). - -**Example** - -```shell - -bin/pulsar-admin bookies set-bookie-rack \ ---bookie 127.0.0.1:3181 \ ---hostname 127.0.0.1:3181 \ ---group group-bookie1 \ ---rack rack1 - -bin/pulsar-admin namespaces set-bookie-affinity-group public/default \ ---primary-group group-bookie1 - -``` - -:::note - -- Do not set a bookie rack name to slash (`/`) or an empty string (`""`) if you use Pulsar earlier than 2.7.5, 2.8.3, and 2.9.2. If you use Pulsar 2.7.5, 2.8.3, 2.9.2 or later versions, it falls back to `/default-rack` or `/default-region/default-rack`. -- When `RackawareEnsemblePlacementPolicy` is enabled, the rack name is not allowed to contain slash (`/`) except for the beginning and end of the rack name string. For example, rack name like `/rack0` is okay, but `/rack/0` is not allowed. -- When `RegionawareEnsemblePlacementPolicy` is enabled, the rack name can only contain one slash (`/`) except for the beginning and end of the rack name string. For example, rack name like `/region0/rack0` is okay, but `/region0rack0` and `/region0/rack/0` are not allowed. -For the bookie rack name restrictions, see [pulsar-admin bookies set-bookie-rack](https://pulsar.apache.org/tools/pulsar-admin/). - -::: - - - - -[POST /admin/v2/namespaces/{tenant}/{namespace}/persistence/bookieAffinity](https://pulsar.apache.org/admin-rest-api/?version=master&apiversion=v2#operation/setBookieAffinityGroup) - - - - -For how to set bookie affinity group for a namespace using Java admin API, see [here](https://github.com/apache/pulsar/blob/master/pulsar-client-admin/src/main/java/org/apache/pulsar/client/admin/internal/NamespacesImpl.java#L1164). - - - - -```` diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/administration-load-balance.md b/site2/website/versioned_docs/version-2.10.1-deprecated/administration-load-balance.md deleted file mode 100644 index 3bb295d25cb1f4..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/administration-load-balance.md +++ /dev/null @@ -1,278 +0,0 @@ ---- -id: administration-load-balance -title: Load balance across brokers -sidebar_label: "Load balance" -original_id: administration-load-balance ---- - - -Pulsar is a horizontally scalable messaging system, so the traffic in a logical cluster must be balanced across all the available Pulsar brokers as evenly as possible, which is a core requirement. - -You can use multiple settings and tools to control the traffic distribution which requires a bit of context to understand how the traffic is managed in Pulsar. Though in most cases, the core requirement mentioned above is true out of the box and you should not worry about it. - -The following sections introduce how the load-balanced assignments work across Pulsar brokers and how you can leverage the framework to adjust. - -## Dynamic assignments - -Topics are dynamically assigned to brokers based on the load conditions of all brokers in the cluster. 
The assignment of topics to brokers is not done at the topic level but at the **bundle** level (a higher level). Instead of individual topic assignments, each broker takes ownership of a subset of the topics for a namespace. This subset is called a bundle and effectively this subset is a sharding mechanism. - -In other words, each namespace is an "administrative" unit and sharded into a list of bundles, with each bundle comprising a portion of the overall hash range of the namespace. Topics are assigned to a particular bundle by taking the hash of the topic name and checking in which bundle the hash falls. Each bundle is independent of the others and thus is independently assigned to different brokers. - -The benefit of the assignment granularity is to amortize the amount of information that you need to keep track of. Based on CPU, memory, traffic load, and other indexes, topics are assigned to a particular broker dynamically. For example: -* When a client starts using new topics that are not assigned to any broker, a process is triggered to choose the best-suited broker to acquire ownership of these topics according to the load conditions. -* If the broker owning a topic becomes overloaded, the topic is reassigned to a less-loaded broker. -* If the broker owning a topic crashes, the topic is reassigned to another active broker. - -:::tip - -For partitioned topics, different partitions are assigned to different brokers. Here "topic" means either a non-partitioned topic or one partition of a topic. - -::: - -## Create namespaces with assigned bundles - -When you create a new namespace, a number of bundles are assigned to the namespace. You can set this number in the `conf/broker.conf` file: - -```conf - -# When a namespace is created without specifying the number of bundles, this -# value will be used as the default -defaultNumberOfNamespaceBundles=4 - -``` - -Alternatively, you can override the value when you create a new namespace using [Pulsar admin](/tools/pulsar-admin/): - -```shell - -bin/pulsar-admin namespaces create my-tenant/my-namespace --clusters us-west --bundles 16 - -``` - -With the above command, you create a namespace with 16 initial bundles. Therefore the topics for this namespace can immediately be spread across up to 16 brokers. - -In general, if you know the expected traffic and number of topics in advance, you had better start with a reasonable number of bundles instead of waiting for the system to auto-correct the distribution. - -On the same note, it is beneficial to start with more bundles than the number of brokers, due to the hashing nature of the distribution of topics into bundles. For example, for a namespace with 1000 topics, using something like 64 bundles achieves a good distribution of traffic across 16 brokers. - - -## Split namespace bundles - -Since the load for the topics in a bundle might change over time and predicting the load might be hard, bundle split is designed to resolve these challenges. The broker splits a bundle into two and the new smaller bundles can be reassigned to different brokers. - -Pulsar supports the following two bundle split algorithms: -* `range_equally_divide`: split the bundle into two parts with the same hash range size. -* `topic_count_equally_divide`: split the bundle into two parts with the same number of topics. - -To enable bundle split, you need to configure the following settings in the `broker.conf` file, and set `defaultNamespaceBundleSplitAlgorithm` based on your needs. 
-
-```conf
-
-loadBalancerAutoBundleSplitEnabled=true
-loadBalancerAutoUnloadSplitBundlesEnabled=true
-defaultNamespaceBundleSplitAlgorithm=range_equally_divide
-
-```
-
-You can configure more parameters for splitting thresholds. Any existing bundle that exceeds any of the thresholds is a candidate to be split. By default, the newly split bundles are immediately reassigned to other brokers, to facilitate the traffic distribution.
-
-```conf
-
-# maximum topics in a bundle, otherwise bundle split will be triggered
-loadBalancerNamespaceBundleMaxTopics=1000
-
-# maximum sessions (producers + consumers) in a bundle, otherwise bundle split will be triggered
-loadBalancerNamespaceBundleMaxSessions=1000
-
-# maximum msgRate (in + out) in a bundle, otherwise bundle split will be triggered
-loadBalancerNamespaceBundleMaxMsgRate=30000
-
-# maximum bandwidth (in + out) in a bundle, otherwise bundle split will be triggered
-loadBalancerNamespaceBundleMaxBandwidthMbytes=100
-
-# maximum number of bundles in a namespace (for auto-split)
-loadBalancerNamespaceMaximumBundles=128
-
-```
-
-## Shed load automatically
-
-The load manager in Pulsar supports automatic load shedding. This means that whenever the system recognizes a particular broker is overloaded, the system forces some traffic to be reassigned to less-loaded brokers.
-
-When a broker is identified as overloaded, it is forced to "unload" a subset of its bundles (the ones with the highest traffic) that accounts for the overload percentage.
-
-For example, the default threshold is 85%, so if a broker is over quota at 95% CPU usage, the broker unloads the percent difference plus a 5% margin: `(95% - 85%) + 5% = 15%`. Given that the selection of bundles to unload is based on traffic (as a proxy measure for CPU, network, and memory), the broker unloads bundles for at least 15% of traffic.
-
-:::tip
-
-* The automatic load shedding is enabled by default. To disable it, you can set `loadBalancerSheddingEnabled` to `false`.
-* Besides the automatic load shedding, you can [manually unload bundles](#unload-topics-and-bundles).
-
-:::
-
-Additional settings that apply to shedding:
-
-```conf
-
-# Load shedding interval. Broker periodically checks whether some traffic should be offload from
-# some over-loaded broker to other under-loaded brokers
-loadBalancerSheddingIntervalMinutes=1
-
-# Prevent the same topics to be shed and moved to other brokers more than once within this timeframe
-loadBalancerSheddingGracePeriodMinutes=30
-
-```
-
-Pulsar supports the following types of automatic load shedding strategies.
-* [ThresholdShedder](#thresholdshedder)
-* [OverloadShedder](#overloadshedder)
-* [UniformLoadShedder](#uniformloadshedder)
-
-:::note
-
-* From Pulsar 2.10, the **default** shedding strategy is `ThresholdShedder`.
-* You need to restart brokers if the shedding strategy is [dynamically updated](admin-api-brokers.md/#dynamic-broker-configuration).
-
-:::
-
-### ThresholdShedder
-
-This strategy tends to shed bundles if any broker's usage is above the configured threshold. It does this by first computing the average resource usage per broker for the whole cluster. The resource usage for each broker is calculated using the `LocalBrokerData#getMaxResourceUsageWithWeight` method. Historical observations are included in the running average based on the broker's setting for `loadBalancerHistoryResourcePercentage`.
Once the average resource usage is calculated, a broker's current/historical usage is compared to the average broker usage. If a broker's usage is greater than the average usage per broker plus the `loadBalancerBrokerThresholdShedderPercentage`, this load shedder proposes removing enough bundles to bring the unloaded broker 5% below the current average broker usage. Note that recently unloaded bundles are not unloaded again.
-
-![Shedding strategy - ThresholdShedder](/assets/ThresholdShedder.png)
-
-To use the `ThresholdShedder` strategy, configure brokers with this value.
-`loadBalancerLoadSheddingStrategy=org.apache.pulsar.broker.loadbalance.impl.ThresholdShedder`
-
-You can configure the weights for each resource per broker in the `conf/broker.conf` file.
-
-```conf
-
-# The BandWithIn usage weight when calculating new resource usage.
-loadBalancerBandwithInResourceWeight=1.0
-
-# The BandWithOut usage weight when calculating new resource usage.
-loadBalancerBandwithOutResourceWeight=1.0
-
-# The CPU usage weight when calculating new resource usage.
-loadBalancerCPUResourceWeight=1.0
-
-# The heap memory usage weight when calculating new resource usage.
-loadBalancerMemoryResourceWeight=1.0
-
-# The direct memory usage weight when calculating new resource usage.
-loadBalancerDirectMemoryResourceWeight=1.0
-
-```
-
-### OverloadShedder
-
-This strategy attempts to shed exactly one bundle on brokers which are overloaded, that is, whose maximum system resource usage exceeds [`loadBalancerBrokerOverloadedThresholdPercentage`](#broker-overload-thresholds) (see [Broker overload thresholds](#broker-overload-thresholds) below for which resources are considered). A bundle is recommended for unloading off that broker if and only if the following conditions hold: the broker has at least two bundles assigned, and the broker has at least one bundle that has not been unloaded recently according to `loadBalancerSheddingGracePeriodMinutes`. The unloaded bundle is the most expensive bundle in terms of message rate that has not been recently unloaded. Note that this strategy does not take into account "underloaded" brokers when determining which bundles to unload. If you are looking for a strategy that spreads load evenly across all brokers, see [ThresholdShedder](#thresholdshedder).
-
-![Shedding strategy - OverloadShedder](/assets/OverloadShedder.png)
-
-To use the `OverloadShedder` strategy, configure brokers with this value.
-`loadBalancerLoadSheddingStrategy=org.apache.pulsar.broker.loadbalance.impl.OverloadShedder`
-
-#### Broker overload thresholds
-
-The determination of when a broker is overloaded is based on the thresholds of CPU, network, and memory usage. Whenever any of those metrics reaches the threshold, the system triggers the shedding (if enabled).
-
-:::note
-
-The overload threshold `loadBalancerBrokerOverloadedThresholdPercentage` only applies to the [`OverloadShedder`](#overloadshedder) shedding strategy. By default, it is set to 85%.
-
-:::
-
-Pulsar gathers the CPU, network, and memory usage stats from the system metrics. In some cases of network utilization, the network interface speed that Linux reports is not correct and needs to be manually overridden. This is the case on AWS EC2 instances with a 1 Gbps NIC speed, for which the OS reports a 10 Gbps speed.
-
-Because of the incorrect max speed, the load manager might think the broker has not reached the NIC capacity, while in fact the broker already uses all the bandwidth and the traffic is slowed down.
-
-You can set `loadBalancerOverrideBrokerNicSpeedGbps` in the `conf/broker.conf` file to correct the max NIC speed. When the value is empty, Pulsar uses the value that the OS reports.
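-
-For example (a sketch; the right value depends on your actual hardware):
-
-```conf
-
-# Override the NIC speed (in Gbps) reported by the OS, e.g. on an AWS EC2
-# instance with a 1 Gbps NIC that Linux reports as 10 Gbps:
-loadBalancerOverrideBrokerNicSpeedGbps=1
-
-```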
-
-### UniformLoadShedder
-
-This strategy tends to distribute load uniformly across all brokers. It checks the load difference between the broker with the highest load and the broker with the lowest load. If the difference is higher than the configured thresholds `loadBalancerMsgRateDifferenceShedderThreshold` and `loadBalancerMsgThroughputMultiplierDifferenceShedderThreshold`, it finds bundles that can be unloaded to distribute traffic evenly across all brokers.
-
-![Shedding strategy - UniformLoadShedder](/assets/UniformLoadShedder.png)
-
-To use the `UniformLoadShedder` strategy, configure brokers with this value.
-`loadBalancerLoadSheddingStrategy=org.apache.pulsar.broker.loadbalance.impl.UniformLoadShedder`
-
-## Unload topics and bundles
-
-You can manually "unload" a topic through Pulsar admin operations. Unloading means closing topics, releasing ownership, and reassigning topics to a new broker, based on the current load.
-
-When unloading happens, the client experiences a small latency blip, typically in the order of tens of milliseconds, while the topic is reassigned.
-
-Unloading is the mechanism that the load manager uses to perform the load shedding, but you can also trigger the unloading manually, for example, to correct the assignments and redistribute traffic even before having any broker overloaded.
-
-Unloading a topic has no effect on the assignment, but just closes and reopens the particular topic:
-
-```shell
-
-pulsar-admin topics unload persistent://tenant/namespace/topic
-
-```
-
-To unload all topics for a namespace and trigger reassignments:
-
-```shell
-
-pulsar-admin namespaces unload tenant/namespace
-
-```
-
-## Distribute anti-affinity namespaces across failure domains
-
-When your application has multiple namespaces and you want one of them available all the time to avoid any downtime, you can group these namespaces and distribute them across different [failure domains](reference-terminology.md#failure-domain) and different brokers. Thus, if one of the failure domains is down (due to a release rollout or a broker restart), it only disrupts namespaces owned by that specific failure domain, and the rest of the namespaces owned by other domains remain available without any impact.
-
-Such a group of namespaces has anti-affinity to each other, that is, all the namespaces in this group are [anti-affinity namespaces](reference-terminology.md#anti-affinity-namespaces) and are distributed to different failure domains in a load-balanced manner.
-
-As illustrated in the following figure, Pulsar has 2 failure domains (Domain1 and Domain2) and each domain has 2 brokers in it. You can create an anti-affinity namespace group that has 4 namespaces in it, and all the 4 namespaces have anti-affinity to each other. The load manager tries to distribute namespaces evenly across all the brokers in the same domain. Since each domain has 2 brokers, every broker owns one namespace from this anti-affinity namespace group, so each domain owns 2 namespaces, and each broker owns 1 namespace.
-
-![Distribute anti-affinity namespaces across failure domains](/assets/anti-affinity-namespaces-across-failure-domains.svg)
-
-The load manager follows an even distribution policy across failure domains to assign anti-affinity namespaces.
The following table outlines the even-distributed assignment sequence illustrated in the above figure.
-
-| Assignment sequence | Namespace | Failure domain candidates | Broker candidates | Selected broker |
-|:---|:------------|:------------------|:------------------------------------|:-----------------|
-| 1 | Namespace1 | Domain1, Domain2 | Broker1, Broker2, Broker3, Broker4 | Domain1:Broker1 |
-| 2 | Namespace2 | Domain2 | Broker3, Broker4 | Domain2:Broker3 |
-| 3 | Namespace3 | Domain1, Domain2 | Broker2, Broker4 | Domain1:Broker2 |
-| 4 | Namespace4 | Domain2 | Broker4 | Domain2:Broker4 |
-
-:::tip
-
-* Each namespace belongs to only one anti-affinity group. If a namespace with an existing anti-affinity assignment is assigned to another anti-affinity group, the original assignment is dropped.
-
-* If there are more anti-affinity namespaces than failure domains, the load manager distributes namespaces evenly across all the domains, and every domain also distributes namespaces evenly across all the brokers under that domain.
-
-:::
-
-### Create a failure domain and register brokers
-
-:::note
-
-One broker can only be registered to a single failure domain.
-
-:::
-
-To create a domain under a specific cluster and register brokers, run the following command:
-
-```bash
-
-pulsar-admin clusters create-failure-domain <cluster-name> --domain-name <domain-name> --broker-list <broker-list>
-
-```
-
-You can also view, update, and delete domains under a specific cluster. For more information, refer to [Pulsar admin doc](/tools/pulsar-admin/).
-
-### Create an anti-affinity namespace group
-
-An anti-affinity group is created automatically when the first namespace is assigned to the group. To assign a namespace to an anti-affinity group, run the following command. It sets an anti-affinity group name for a namespace.
-
-```bash
-
-pulsar-admin namespaces set-anti-affinity-group <tenant/namespace> --group <group-name>
-
-```
-
-For more information about `anti-affinity-group` related commands, refer to [Pulsar admin doc](/tools/pulsar-admin/).
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/administration-proxy.md b/site2/website/versioned_docs/version-2.10.1-deprecated/administration-proxy.md
deleted file mode 100644
index 202d7d643437bb..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/administration-proxy.md
+++ /dev/null
@@ -1,127 +0,0 @@
----
-id: administration-proxy
-title: Pulsar proxy
-sidebar_label: "Pulsar proxy"
-original_id: administration-proxy
----
-
-The Pulsar proxy is an optional gateway that is used when direct connections between clients and Pulsar brokers are either infeasible or undesirable. For example, when you run Pulsar in a cloud environment or on [Kubernetes](https://kubernetes.io) or an analogous platform, you can run the Pulsar proxy.
-
-The Pulsar proxy is not intended to be exposed on the public internet. The security considerations in the current design expect network perimeter security. The requirement of network perimeter security can be achieved with private networks.
-
-If a proxy deployment cannot be protected with network perimeter security, the alternative is to use [Pulsar's "Proxy SNI routing" feature](concepts-proxy-sni-routing.md) with a properly secured and audited solution. In that case, the Pulsar proxy component is not used at all.
-
-## Configure the proxy
-
-Before using a proxy, you need to configure it with a broker's address in the cluster. You can either configure the broker URLs in the proxy configuration, or let the proxy connect to the brokers directly using service discovery.
-
-> In a production environment, service discovery is not recommended.
-
-### Use broker URLs
-
-It is more secure to specify a URL to connect to the brokers.
-
-Proxy authorization requires access to ZooKeeper, so if you use these broker URLs to connect to the brokers, you need to disable authorization at the proxy level. Brokers still authorize requests after the proxy forwards them.
-
-You can configure the broker URLs in `conf/proxy.conf` as follows.
-
-```properties
-brokerServiceURL=pulsar://brokers.example.com:6650
-brokerWebServiceURL=http://brokers.example.com:8080
-functionWorkerWebServiceURL=http://function-workers.example.com:8080
-```
-
-If you use TLS, configure the broker URLs in the following way:
-
-```properties
-brokerServiceURLTLS=pulsar+ssl://brokers.example.com:6651
-brokerWebServiceURLTLS=https://brokers.example.com:8443
-functionWorkerWebServiceURL=https://function-workers.example.com:8443
-```
-
-The hostname in the URLs provided should be a DNS entry that points to multiple brokers, or a virtual IP address that is backed by multiple broker IP addresses, so that the proxy does not lose connectivity to the Pulsar cluster if a single broker becomes unavailable.
-
-The ports to connect to the brokers (6650 and 8080, or in the case of TLS, 6651 and 8443) should be open in the network ACLs.
-
-Note that if you do not use functions, you do not need to configure `functionWorkerWebServiceURL`.
-
-### Use service discovery
-
-Pulsar uses [ZooKeeper](https://zookeeper.apache.org) for service discovery. To connect the proxy to ZooKeeper, specify the following in `conf/proxy.conf`.
-
-```properties
-metadataStoreUrl=my-zk-0:2181,my-zk-1:2181,my-zk-2:2181
-configurationMetadataStoreUrl=my-zk-0:2184,my-zk-remote:2184
-```
-
-> To use service discovery, you need to open the network ACLs so that the proxy can connect to the ZooKeeper nodes through the ZooKeeper client port (port `2181`) and the configuration store client port (port `2184`).
-
-> However, service discovery is not secure: if the network ACL is open and someone compromises a proxy, they gain full access to ZooKeeper.
-
-### Restricting target broker addresses to mitigate CVE-2022-24280
-
-The Pulsar proxy trusts clients to provide valid target broker addresses to connect to.
-Unless the Pulsar proxy is explicitly configured to limit access, it is vulnerable as described in the security advisory [Apache Pulsar Proxy target broker address isn't validated (CVE-2022-24280)](https://github.com/apache/pulsar/wiki/CVE-2022-24280).
-
-It is necessary to limit proxied broker connections to known broker addresses by specifying the `brokerProxyAllowedHostNames` and `brokerProxyAllowedIPAddresses` settings.
-
-When specifying `brokerProxyAllowedHostNames`, it's possible to use a wildcard.
-Note that `*` is a wildcard that matches any character in the hostname. It also matches dot `.` characters.
-
-It is recommended to use a pattern that matches only the desired brokers and no other hosts in the local network. Pulsar lookups use the default host name of the broker by default. This can be overridden with the `advertisedAddress` setting in `broker.conf`.
-
-To increase security, it is also possible to restrict access with the `brokerProxyAllowedIPAddresses` setting. It is not mandatory to configure `brokerProxyAllowedIPAddresses` when `brokerProxyAllowedHostNames` is properly configured so that the pattern matches only the target brokers.
-The `brokerProxyAllowedIPAddresses` setting supports a comma-separated list of IP addresses, IP address ranges, and IP address networks [(supported format reference)](https://seancfoley.github.io/IPAddress/IPAddress/apidocs/inet/ipaddr/IPAddressString.html).
-
-Example: limiting by host name in a Kubernetes deployment
-
-```yaml
-  # example of limiting to Kubernetes statefulset hostnames that contain "broker-"
-  PULSAR_PREFIX_brokerProxyAllowedHostNames: '*broker-*.*.*.svc.cluster.local'
-```
-
-Example: limiting by both host name and IP address in a `proxy.conf` file for a host deployment.
-
-```properties
-# require "broker" in host name
-brokerProxyAllowedHostNames=*broker*.localdomain
-# limit target ip addresses to a specific network
-brokerProxyAllowedIPAddresses=10.0.0.0/8
-```
-
-Example: limiting by multiple host name patterns and multiple IP address ranges in a `proxy.conf` file for a host deployment.
-
-```properties
-# require "broker" in host name
-brokerProxyAllowedHostNames=*broker*.localdomain,*broker*.otherdomain
-# limit target ip addresses to a specific network or range demonstrating multiple supported formats
-brokerProxyAllowedIPAddresses=10.10.0.0/16,192.168.1.100-120,172.16.2.*,10.1.2.3
-```
-
-
-## Start the proxy
-
-To start the proxy:
-
-```bash
-
-$ cd /path/to/pulsar/directory
-$ bin/pulsar proxy \
-  --metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181 \
-  --configuration-metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181
-
-```
-
-> You can run multiple instances of the Pulsar proxy in a cluster.
-
-## Stop the proxy
-
-The Pulsar proxy runs in the foreground by default. To stop the proxy, simply stop the process in which the proxy is running.
-
-## Proxy frontends
-
-You can run the Pulsar proxy behind some kind of load-distributing frontend, such as an [HAProxy](https://www.digitalocean.com/community/tutorials/an-introduction-to-haproxy-and-load-balancing-concepts) load balancer.
-
-## Use Pulsar clients with the proxy
-
-Once your Pulsar proxy is up and running, preferably behind a load-distributing [frontend](#proxy-frontends), clients can connect to the proxy via whichever address the frontend uses. If the address is the DNS address `pulsar.cluster.default`, for example, the connection URL for clients is `pulsar://pulsar.cluster.default:6650`.
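-
-For example, a minimal Java client connecting through the proxy (assuming the `pulsar.cluster.default` DNS name above):
-
-```java
-
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar://pulsar.cluster.default:6650") // the proxy/frontend address, not a broker
-        .build();
-
-```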
-
-For more information on proxy configuration, refer to [Pulsar proxy](reference-configuration.md#pulsar-proxy).
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/administration-pulsar-manager.md b/site2/website/versioned_docs/version-2.10.1-deprecated/administration-pulsar-manager.md
deleted file mode 100644
index 2134ae70b180bd..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/administration-pulsar-manager.md
+++ /dev/null
@@ -1,216 +0,0 @@
----
-id: administration-pulsar-manager
-title: Pulsar Manager
-sidebar_label: "Pulsar Manager"
-original_id: administration-pulsar-manager
----
-
-Pulsar Manager is a web-based GUI management and monitoring tool that helps administrators and users manage and monitor tenants, namespaces, topics, subscriptions, brokers, clusters, and so on, and supports dynamic configuration of multiple environments.
-
-:::note
-
-If you are monitoring your current stats with [Pulsar dashboard](administration-dashboard.md), we recommend you use Pulsar Manager instead. Pulsar dashboard is deprecated.
-
-:::
-
-## Install
-
-### Quick Install
-
-The easiest way to use Pulsar Manager is to run it inside a [Docker](https://www.docker.com/products/docker) container.
-
-```shell
-
-docker pull apachepulsar/pulsar-manager:v0.2.0
-docker run -it \
-    -p 9527:9527 -p 7750:7750 \
-    -e SPRING_CONFIGURATION_FILE=/pulsar-manager/pulsar-manager/application.properties \
-    apachepulsar/pulsar-manager:v0.2.0
-
-```
-
-* Pulsar Manager is divided into a front end and a back end; the front-end service port is `9527` and the back-end service port is `7750`.
-* `SPRING_CONFIGURATION_FILE`: the default configuration file for Spring.
-* By default, Pulsar Manager uses the `herddb` database. HerdDB is a distributed SQL database implemented in Java; see [herddb.org](https://herddb.org/) for more information.
-
-### Configure Database or JWT authentication
-
-#### Configure Database (optional)
-
-If you have a large amount of data, you can use a custom database. Otherwise, some display errors may occur; for example, the topic information cannot be displayed when the number of topics exceeds 10,000.
-The following is an example of PostgreSQL.
-
-1. Initialize the database and table structures using the [file](https://github.com/apache/pulsar-manager/blob/master/src/main/resources/META-INF/sql/postgresql-schema.sql).
-2. Download and modify the [configuration file](https://github.com/apache/pulsar-manager/blob/master/src/main/resources/application.properties), then add the PostgreSQL configuration.
-
-```properties
-
-spring.datasource.driver-class-name=org.postgresql.Driver
-spring.datasource.url=jdbc:postgresql://127.0.0.1:5432/pulsar_manager
-spring.datasource.username=postgres
-spring.datasource.password=postgres
-
-```
-
-3. Add a configuration mount and start with a docker image.
-
-```bash
-
-docker pull apachepulsar/pulsar-manager:v0.2.0
-docker run -it \
-    -p 9527:9527 -p 7750:7750 \
-    -v /your-path/application.properties:/pulsar-manager/pulsar-manager/application.properties \
-    -e SPRING_CONFIGURATION_FILE=/pulsar-manager/pulsar-manager/application.properties \
-    apachepulsar/pulsar-manager:v0.2.0
-
-```
-
-#### Enable JWT authentication (optional)
-
-If you want to turn on JWT authentication, configure the `application.properties` file.
-
-```properties
-
-backend.jwt.token=token
-
-jwt.broker.token.mode=PRIVATE
-jwt.broker.public.key=file:///path/broker-public.key
-jwt.broker.private.key=file:///path/broker-private.key
-
-# or
-jwt.broker.token.mode=SECRET
-jwt.broker.secret.key=file:///path/broker-secret.key
-
-```
-
-- `backend.jwt.token`: token for the superuser. You need to configure this parameter during cluster initialization.
-- `jwt.broker.token.mode`: one of the token-generation modes PUBLIC, PRIVATE, and SECRET.
-- `jwt.broker.public.key`: configure this option if you use the PUBLIC mode.
-- `jwt.broker.private.key`: configure this option if you use the PRIVATE mode.
-- `jwt.broker.secret.key`: configure this option if you use the SECRET mode.
-
-For more information, see [Token Authentication Admin of Pulsar](security-token-admin.md).
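-
-For example, with the SECRET mode, the superuser token referenced by `backend.jwt.token` can be generated with the Pulsar CLI (a sketch; the key path and subject are placeholders):
-
-```shell
-
-bin/pulsar tokens create \
-    --secret-key file:///path/broker-secret.key \
-    --subject admin
-
-```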
-
-The following Docker command mounts the configuration file and the key files.
-
-```bash
-
-docker pull apachepulsar/pulsar-manager:v0.2.0
-docker run -it \
-    -p 9527:9527 -p 7750:7750 \
-    -v /your-path/application.properties:/pulsar-manager/pulsar-manager/application.properties \
-    -v /your-path/private.key:/pulsar-manager/private.key \
-    -e SPRING_CONFIGURATION_FILE=/pulsar-manager/pulsar-manager/application.properties \
-    apachepulsar/pulsar-manager:v0.2.0
-
-```
-
-### Set the administrator account and password
-
-```bash
-
-CSRF_TOKEN=$(curl http://localhost:7750/pulsar-manager/csrf-token)
-curl \
-    -H "X-XSRF-TOKEN: $CSRF_TOKEN" \
-    -H "Cookie: XSRF-TOKEN=$CSRF_TOKEN;" \
-    -H "Content-Type: application/json" \
-    -X PUT http://localhost:7750/pulsar-manager/users/superuser \
-    -d '{"name": "admin", "password": "apachepulsar", "description": "test", "email": "username@test.org"}'

-```
-
-The request parameters in the curl command:
-
-```json
-
-{"name": "admin", "password": "apachepulsar", "description": "test", "email": "username@test.org"}
-
-```
-
-- `name` is the Pulsar Manager login username, here `admin`.
-- `password` is the password of the current user of Pulsar Manager, here `apachepulsar`. The password must be at least 6 characters.
-
-
-
-### Configure the environment
-
-1. Log in to the system: visit http://localhost:9527. The default account is `admin/apachepulsar`.
-
-2. Click the "New Environment" button to add an environment.
-
-3. Input the "Environment Name". The environment name is used for identifying an environment.
-
-4. Input the "Service URL". The Service URL is the admin service URL of your Pulsar cluster.
-
-
-## Other Installation
-
-### Bare-metal installation
-
-When using binary packages for direct deployment, you can follow these steps.
-
-- Download and unzip the binary package, which is available on the [Pulsar Download](/download) page.
-
-  ```bash
-
-  wget https://dist.apache.org/repos/dist/release/pulsar/pulsar-manager/pulsar-manager-0.2.0/apache-pulsar-manager-0.2.0-bin.tar.gz
-  tar -zxvf apache-pulsar-manager-0.2.0-bin.tar.gz
-
-  ```
-
-- Extract the back-end service binary package and place the front-end resources in the back-end service directory.
-
-  ```bash
-
-  cd pulsar-manager
-  tar -zxvf pulsar-manager.tar
-  cd pulsar-manager
-  cp -r ../dist ui
-
-  ```
-
-- Modify the `application.properties` configuration on demand.
-
-  > If you don't want to modify the `application.properties` file, you can add the configuration to the startup parameters via `./bin/pulsar-manager --backend.jwt.token=token`. This is a capability of the Spring Boot framework.
-
-- Start Pulsar Manager
-
-  ```bash
-
-  ./bin/pulsar-manager
-
-  ```
-
-### Custom docker image installation
-
-You can find the docker image in the [Docker Hub](https://github.com/apache/pulsar-manager/tree/master/docker) directory and build an image from the source code as well:
-
-  ```bash
-
-  git clone https://github.com/apache/pulsar-manager
-  cd pulsar-manager/front-end
-  npm install --save
-  npm run build:prod
-  cd ..
-  ./gradlew build -x test
-  cd ..
-  docker build -f docker/Dockerfile --build-arg BUILD_DATE=`date -u +"%Y-%m-%dT%H:%M:%SZ"` --build-arg VCS_REF=latest --build-arg VERSION=latest -t apachepulsar/pulsar-manager .
-
-  ```
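-
-Once built, the local image can be run the same way as the published image in [Quick Install](#quick-install) (the tag is whatever you passed to `-t` above):
-
-```bash
-
-docker run -it -p 9527:9527 -p 7750:7750 apachepulsar/pulsar-manager
-
-```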
-
-## Configuration
-
-
-
-| application.properties | System env on Docker Image | Description | Example |
-| ----------------------------------- | -------------------------- | ------------------------------------------------------------ | ------------------------------------------------- |
-| backend.jwt.token | JWT_TOKEN | token for the superuser. You need to configure this parameter during cluster initialization. | `token` |
-| jwt.broker.token.mode | N/A | one of the token-generation modes PUBLIC, PRIVATE, and SECRET. | `PUBLIC`, `PRIVATE`, or `SECRET` |
-| jwt.broker.public.key | PUBLIC_KEY | configure this option if you use the PUBLIC mode. | `file:///path/broker-public.key` |
-| jwt.broker.private.key | PRIVATE_KEY | configure this option if you use the PRIVATE mode. | `file:///path/broker-private.key` |
-| jwt.broker.secret.key | SECRET_KEY | configure this option if you use the SECRET mode. | `file:///path/broker-secret.key` |
-| spring.datasource.driver-class-name | DRIVER_CLASS_NAME | the driver class name of the database. | `org.postgresql.Driver` |
-| spring.datasource.url | URL | the JDBC URL of your database. | `jdbc:postgresql://127.0.0.1:5432/pulsar_manager` |
-| spring.datasource.username | USERNAME | the username of the database. | `postgres` |
-| spring.datasource.password | PASSWORD | the password of the database. | `postgres` |
-| N/A | LOG_LEVEL | the log level. | `DEBUG` |
-
-* For more information about backend configurations, see [here](https://github.com/apache/pulsar-manager/blob/master/src/README).
-* For more information about frontend configurations, see [here](https://github.com/apache/pulsar-manager/tree/master/front-end).
-
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/administration-stats.md b/site2/website/versioned_docs/version-2.10.1-deprecated/administration-stats.md
deleted file mode 100644
index ac0c03602f36d5..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/administration-stats.md
+++ /dev/null
@@ -1,64 +0,0 @@
----
-id: administration-stats
-title: Pulsar stats
-sidebar_label: "Pulsar statistics"
-original_id: administration-stats
----
-
-## Partitioned topics
-
-|Stat|Description|
-|---|---|
-|msgRateIn| The sum of publish rates of all local and replication publishers in messages per second.|
-|msgThroughputIn| Same as msgRateIn but in bytes per second instead of messages per second.|
-|msgRateOut| The sum of dispatch rates of all local and replication consumers in messages per second.|
-|msgThroughputOut| Same as msgRateOut but in bytes per second instead of messages per second.|
-|averageMsgSize| The average message size, in bytes, from this publisher within the last interval.|
-|storageSize| The sum of the storage size of the ledgers for this topic.|
-|publishers| The list of all local publishers into the topic. Publishers can be anywhere from zero to thousands.|
-|producerId| Internal identifier for this producer on this topic.|
-|producerName| Internal identifier for this producer, generated by the client library.|
-|address| IP address and source port for the connection of this producer.|
-|connectedSince| The timestamp when this producer was created or last reconnected.|
-|subscriptions| The list of all local subscriptions to the topic.|
-|my-subscription| The name of this subscription (client defined).|
-|msgBacklog| The count of messages in backlog for this subscription.|
-|type| This subscription type.|
-|msgRateExpired| The rate at which messages are discarded instead of dispatched from this subscription due to TTL.|
-|consumers| The list of connected consumers for this subscription.|
-|consumerName| Internal identifier for this consumer, generated by the client library.|
-|availablePermits| The number of messages this consumer has space for in the client library's listen queue. A value of 0 means the client library's queue is full and `receive()` is not being called. A nonzero value means this consumer is ready to be dispatched messages.|
-|replication| This section gives the stats for cross-colo replication of this topic.|
-|replicationBacklog| The outbound replication backlog in messages.|
-|connected| Whether the outbound replicator is connected.|
-|replicationDelayInSeconds| How long the oldest message has been waiting to be sent through the connection, if connected is true.|
-|inboundConnection| The IP and port of the broker in the publisher connection of the remote cluster to this broker. |
-|inboundConnectedSince| The TCP connection being used to publish messages to the remote cluster. If no local publishers are connected, this connection is automatically closed after a minute.|
-
-
-## Topics
-
-|Stat|Description|
-|---|---|
-|entriesAddedCounter| Messages published since this broker loaded this topic.|
-|numberOfEntries| The total number of messages being tracked.|
-|totalSize| The total storage size in bytes of all messages.|
-|currentLedgerEntries| The count of messages written to the ledger currently open for writing.|
-|currentLedgerSize| The size in bytes of messages written to the ledger currently open for writing.|
-|lastLedgerCreatedTimestamp| The time when the last ledger was created.|
-|lastLedgerCreationFailureTimestamp| The time when the last ledger creation failed.|
-|waitingCursorsCount| How many cursors are caught up and waiting for a new message to be published.|
-|pendingAddEntriesCount| How many (asynchronous) write requests are pending completion.|
-|lastConfirmedEntry| The ledgerid:entryid of the last message successfully written. If the entryid is -1, then the ledger is opened or is being currently opened but has no entries written yet.|
-|state| The state of the cursor ledger. `Open` means there is a cursor ledger for saving updates of the markDeletePosition.|
-|ledgers| The ordered list of all ledgers for this topic holding its messages.|
-|cursors| The list of all cursors on this topic. Every subscription you saw in the topic stats has one.|
-|markDeletePosition| The ack position: the last message the subscriber acknowledges receiving.|
-|readPosition| The latest position of the subscriber for reading messages.|
-|waitingReadOp| This is true when the subscription has read the latest message published to the topic and is waiting for new messages to be published.|
-|pendingReadOps| A counter of outstanding read requests to BookKeeper currently in progress.|
-|messagesConsumedCounter| The number of messages this cursor has acknowledged since this broker loaded this topic.|
-|cursorLedger| The ledger used to persistently store the current markDeletePosition.|
-|cursorLedgerLastEntry| The last entryid used to persistently store the current markDeletePosition.|
-|individuallyDeletedMessages| If acknowledgments are done out of order, the ranges of messages acknowledged between the markDeletePosition and the read position.|
-|lastLedgerSwitchTimestamp| The last time the cursor ledger was rolled over.|
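-
-These per-topic internals can be inspected with, for example, the `stats-internal` admin command:
-
-```shell
-
-bin/pulsar-admin topics stats-internal persistent://my-tenant/my-namespace/my-topic
-
-```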
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/administration-upgrade.md b/site2/website/versioned_docs/version-2.10.1-deprecated/administration-upgrade.md
deleted file mode 100644
index 72d136b6460f62..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/administration-upgrade.md
+++ /dev/null
@@ -1,168 +0,0 @@
----
-id: administration-upgrade
-title: Upgrade Guide
-sidebar_label: "Upgrade"
-original_id: administration-upgrade
----
-
-## Upgrade guidelines
-
-Apache Pulsar comprises multiple components: ZooKeeper, bookies, and brokers. These components are either stateful or stateless. You do not have to upgrade ZooKeeper nodes unless you have special requirements. While you upgrade, you need to pay attention to bookies (stateful), and brokers and proxies (stateless).
-
-The following are some guidelines on upgrading a Pulsar cluster. Read the guidelines before upgrading.
-
-- Back up all your configuration files before upgrading.
-- Read the guide entirely, make a plan, and then execute the plan. When you make your upgrade plan, take your specific requirements and environment into consideration.
-- Pay attention to the upgrade order of the components. In general, you do not need to upgrade your ZooKeeper or configuration store cluster (the global ZooKeeper cluster). You need to upgrade bookies first, and then upgrade brokers, proxies, and your clients.
-- If `autorecovery` is enabled, disable `autorecovery` during the upgrade process, and re-enable it after completing the process.
-- Read the release notes carefully for each release. Release notes contain features and configuration changes that might impact your upgrade.
-- Upgrade a small subset of nodes of each type to canary test the new version before upgrading all nodes of that type in the cluster. When you have upgraded the canary nodes, let them run for a while to ensure that they work correctly.
-- Upgrade one data center to verify the new version before upgrading all data centers if your cluster runs in multi-cluster replicated mode.
-
-> Note: Currently, Apache Pulsar is compatible between versions.
-
-## Upgrade sequence
-
-To upgrade an Apache Pulsar cluster, follow the upgrade sequence.
-
-1. Upgrade ZooKeeper (optional)
-- Canary test: test an upgraded version in one or a small set of ZooKeeper nodes.
-- Rolling upgrade: roll out the upgraded version to all ZooKeeper servers incrementally, one at a time. Monitor your dashboard during the whole rolling upgrade process.
-2.
-2. Upgrade bookies
-- Canary test: test an upgraded version in one or a small set of bookies.
-- Rolling upgrade:
-
-  a. Disable `autorecovery` with the following command.
-
-   ```shell
-
-   bin/bookkeeper shell autorecovery -disable
-
-   ```
-
-  b. Roll out the upgraded version to all bookies in the cluster after you determine that a version is safe after canary.
-
-  c. After you upgrade all bookies, re-enable `autorecovery` with the following command.
-
-   ```shell
-
-   bin/bookkeeper shell autorecovery -enable
-
-   ```
-
-3. Upgrade brokers
-- Canary test: test an upgraded version in one or a small set of brokers.
-- Rolling upgrade: roll out the upgraded version to all brokers in the cluster after you determine that a version is safe after canary.
-4. Upgrade proxies
-- Canary test: test an upgraded version in one or a small set of proxies.
-- Rolling upgrade: roll out the upgraded version to all proxies in the cluster after you determine that a version is safe after canary.
-
-## Upgrade ZooKeeper (optional)
-While you upgrade ZooKeeper servers, you can do a canary test first, and then upgrade all ZooKeeper servers in the cluster.
-
-### Canary test
-
-You can test an upgraded version in one of the ZooKeeper servers before upgrading all ZooKeeper servers in your cluster.
-
-To upgrade a ZooKeeper server to a new version, complete the following steps:
-
-1. Stop a ZooKeeper server.
-2. Upgrade the binary and configuration files.
-3. Start the ZooKeeper server with the new binary files.
-4. Use `pulsar zookeeper-shell` to connect to the newly upgraded ZooKeeper server and run a few commands to verify that it works as expected.
-5. Run the ZooKeeper server for a few days, then observe it and make sure the ZooKeeper cluster runs well.
-
-#### Canary rollback
-
-If issues occur during the canary test, you can shut down the problematic ZooKeeper node, revert the binary and configuration, and restart ZooKeeper with the reverted binary.
-
-### Upgrade all ZooKeeper servers
-
-After you canary test one upgraded ZooKeeper server in your cluster, you can upgrade all ZooKeeper servers in your cluster.
-
-You can upgrade all ZooKeeper servers one by one, following the steps in the canary test.
-
-## Upgrade bookies
-
-While you upgrade bookies, you can do a canary test first, and then upgrade all bookies in the cluster.
-For more details, you can read the Apache BookKeeper [Upgrade guide](http://bookkeeper.apache.org/docs/latest/admin/upgrade).
-
-### Canary test
-
-You can test an upgraded version in one or a small set of bookies before upgrading all bookies in your cluster.
-
-To upgrade a bookie to a new version, complete the following steps:
-
-1. Stop a bookie.
-2. Upgrade the binary and configuration files.
-3. Start the bookie in `ReadOnly` mode to verify that the new version of the bookie runs well for read workloads.
-
-   ```shell
-
-   bin/pulsar bookie --readOnly
-
-   ```
-
-4. When the bookie runs successfully in `ReadOnly` mode, stop the bookie and restart it in `Write/Read` mode.
-
-   ```shell
-
-   bin/pulsar bookie
-
-   ```
-
-5. Observe and make sure the cluster serves both write and read traffic.
-
-#### Canary rollback
-
-If issues occur during the canary test, you can shut down the problematic bookie node. Other bookies in the cluster replace this problematic bookie node through autorecovery.
-
-### Upgrade all bookies
-
-After you canary test some upgraded bookies in your cluster, you can upgrade all bookies in your cluster.
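-
-During the rolling upgrade described below, it can help to confirm that the cluster has fully recovered before restarting the next bookie. A minimal sketch using commands that ship with Pulsar's bundled BookKeeper:
-
-```shell
-
-# Confirm that no ledgers are underreplicated before restarting the next bookie
-bin/bookkeeper shell listunderreplicated
-
-# Optionally, run a write/read sanity check against the local bookie
-bin/bookkeeper shell bookiesanity
-
-```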
-
-Before upgrading, you have to choose between two scenarios: a downtime upgrade or a rolling upgrade.
-
-In a rolling upgrade scenario, upgrade one bookie at a time. In a downtime upgrade scenario, shut down the entire cluster, upgrade each bookie, and then start the cluster.
-
-In both scenarios, the procedure is the same for each bookie.
-
-1. Stop the bookie.
-2. Upgrade the software (either new binary or new configuration files).
-3. Start the bookie.
-
-> **Advanced operations**
-> When you upgrade a large BookKeeper cluster in a rolling upgrade scenario, upgrading one bookie at a time is slow. If you configure a rack-aware or region-aware placement policy, you can upgrade bookies rack by rack or region by region, which speeds up the whole upgrade process.
-
-## Upgrade brokers and proxies
-
-The upgrade procedure for brokers and proxies is the same. Brokers and proxies are `stateless`, so upgrading the two services is easy.
-
-### Canary test
-
-You can test an upgraded version in one or a small set of nodes before upgrading all nodes in your cluster.
-
-To upgrade to a new version, complete the following steps:
-
-1. Stop a broker (or proxy).
-2. Upgrade the binary and configuration file.
-3. Start the broker (or proxy).
-
-#### Canary rollback
-
-If issues occur during the canary test, you can shut down the problematic broker (or proxy) node, revert to the old version, and restart the broker (or proxy).
-
-### Upgrade all brokers or proxies
-
-After you canary test some upgraded brokers or proxies in your cluster, you can upgrade all brokers or proxies in your cluster.
-
-Before upgrading, you have to choose between two scenarios: a downtime upgrade or a rolling upgrade.
-
-In a rolling upgrade scenario, you can upgrade one broker or one proxy at a time if the size of the cluster is small. If your cluster is large, you can upgrade brokers or proxies in batches. When you upgrade a batch of brokers or proxies, make sure the remaining brokers and proxies in the cluster have enough capacity to handle the traffic during the upgrade.
-
-In a downtime upgrade scenario, shut down the entire cluster, upgrade each broker or proxy, and then start the cluster.
-
-In both scenarios, the procedure is the same for each broker or proxy.
-
-1. Stop the broker or proxy.
-2. Upgrade the software (either new binary or new configuration files).
-3. Start the broker or proxy.
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/administration-zk-bk.md b/site2/website/versioned_docs/version-2.10.1-deprecated/administration-zk-bk.md
deleted file mode 100644
index 0530b258dca2cf..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/administration-zk-bk.md
+++ /dev/null
@@ -1,378 +0,0 @@
----
-id: administration-zk-bk
-title: ZooKeeper and BookKeeper administration
-sidebar_label: "ZooKeeper and BookKeeper"
-original_id: administration-zk-bk
----
-
-Pulsar relies on two external systems for essential tasks:
-
-* [ZooKeeper](https://zookeeper.apache.org/) is responsible for a wide variety of configuration-related and coordination-related tasks.
-* [BookKeeper](http://bookkeeper.apache.org/) is responsible for [persistent storage](concepts-architecture-overview.md#persistent-storage) of message data.
-
-ZooKeeper and BookKeeper are both open-source [Apache](https://www.apache.org/) projects.
- -> Skip to the [How Pulsar uses ZooKeeper and BookKeeper](#how-pulsar-uses-zookeeper-and-bookkeeper) section below for a more schematic explanation of the role of these two systems in Pulsar. - - -## ZooKeeper - -Each Pulsar instance relies on two separate ZooKeeper quorums. - -* [Local ZooKeeper](#deploy-local-zookeeper) operates at the cluster level and provides cluster-specific configuration management and coordination. Each Pulsar cluster needs to have a dedicated ZooKeeper cluster. -* [Configuration Store](#deploy-configuration-store) operates at the instance level and provides configuration management for the entire system (and thus across clusters). An independent cluster of machines or the same machines that local ZooKeeper uses can provide the configuration store quorum. - -### Deploy local ZooKeeper - -ZooKeeper manages a variety of essential coordination-related and configuration-related tasks for Pulsar. - -To deploy a Pulsar instance, you need to stand up one local ZooKeeper cluster *per Pulsar cluster*. - -To begin, add all ZooKeeper servers to the quorum configuration specified in the [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file. Add a `server.N` line for each node in the cluster to the configuration, where `N` is the number of the ZooKeeper node. The following is an example for a three-node cluster: - -```properties - -server.1=zk1.us-west.example.com:2888:3888 -server.2=zk2.us-west.example.com:2888:3888 -server.3=zk3.us-west.example.com:2888:3888 - -``` - -On each host, you need to specify the node ID in `myid` file of each node, which is in `data/zookeeper` folder of each server by default (you can change the file location via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter). - -> See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed information on `myid` and more. - - -On a ZooKeeper server at `zk1.us-west.example.com`, for example, you can set the `myid` value like this: - -```shell - -$ mkdir -p data/zookeeper -$ echo 1 > data/zookeeper/myid - -``` - -On `zk2.us-west.example.com` the command is `echo 2 > data/zookeeper/myid` and so on. - -Once you add each server to the `zookeeper.conf` configuration and each server has the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool: - -```shell - -$ bin/pulsar-daemon start zookeeper - -``` - -### Deploy configuration store - -The ZooKeeper cluster configured and started up in the section above is a *local* ZooKeeper cluster that you can use to manage a single Pulsar cluster. In addition to a local cluster, however, a full Pulsar instance also requires a configuration store for handling some instance-level configuration and coordination tasks. - -If you deploy a [single-cluster](#single-cluster-pulsar-instance) instance, you do not need a separate cluster for the configuration store. If, however, you deploy a [multi-cluster](#multi-cluster-pulsar-instance) instance, you need to stand up a separate ZooKeeper cluster for configuration tasks. - -#### Single-cluster Pulsar instance - -If your Pulsar instance consists of just one cluster, then you can deploy a configuration store on the same machines as the local ZooKeeper quorum but run on different TCP ports. 
-
-To deploy a ZooKeeper configuration store in a single-cluster instance, add the same ZooKeeper servers that the local quorum uses to the configuration file in [`conf/global_zookeeper.conf`](reference-configuration.md#configuration-store) using the same method as for [local ZooKeeper](#local-zookeeper), but make sure to use a different port (2181 is the default for ZooKeeper). The following is an example that uses port 2184 for a three-node ZooKeeper cluster:
-
-```properties
-
-clientPort=2184
-server.1=zk1.us-west.example.com:2185:2186
-server.2=zk2.us-west.example.com:2185:2186
-server.3=zk3.us-west.example.com:2185:2186
-
-```
-
-As before, create the `myid` files for each server, this time under `data/global-zookeeper/myid`.
-
-#### Multi-cluster Pulsar instance
-
-When you deploy a global Pulsar instance, with clusters distributed across different geographical regions, the configuration store serves as a highly available and strongly consistent metadata store that can tolerate failures and partitions spanning whole regions.
-
-The key here is to make sure the ZK quorum members are spread across at least 3 regions and that the other regions run as observers.
-
-Again, given the very low expected load on the configuration store servers, you can share the same hosts used for the local ZooKeeper quorum.
-
-For example, assume a Pulsar instance with the clusters `us-west`, `us-east`, `us-central`, `eu-central`, and `ap-south`, where each cluster has its own local ZK servers named as follows:
-
-```
-
-zk[1-3].${CLUSTER}.example.com
-
-```
-
-In this scenario, you want to pick the quorum participants from a few clusters and let all the others be ZK observers. For example, to form a 7-server quorum, you can pick 3 servers from `us-west`, 2 from `us-central`, and 2 from `us-east`.
-
-This guarantees that writes to the configuration store are possible even if one of these regions is unreachable.
-
-The ZK configuration on all the servers looks like:
-
-```properties
-
-clientPort=2184
-server.1=zk1.us-west.example.com:2185:2186
-server.2=zk2.us-west.example.com:2185:2186
-server.3=zk3.us-west.example.com:2185:2186
-server.4=zk1.us-central.example.com:2185:2186
-server.5=zk2.us-central.example.com:2185:2186
-server.6=zk3.us-central.example.com:2185:2186:observer
-server.7=zk1.us-east.example.com:2185:2186
-server.8=zk2.us-east.example.com:2185:2186
-server.9=zk3.us-east.example.com:2185:2186:observer
-server.10=zk1.eu-central.example.com:2185:2186:observer
-server.11=zk2.eu-central.example.com:2185:2186:observer
-server.12=zk3.eu-central.example.com:2185:2186:observer
-server.13=zk1.ap-south.example.com:2185:2186:observer
-server.14=zk2.ap-south.example.com:2185:2186:observer
-server.15=zk3.ap-south.example.com:2185:2186:observer
-
-```
-
-Additionally, ZK observers need to have:
-
-```properties
-
-peerType=observer
-
-```
-
-##### Start the service
-
-Once your configuration store configuration is in place, you can start up the service using [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon):
-
-```shell
-
-$ bin/pulsar-daemon start configuration-store
-
-```
-
-### ZooKeeper configuration
-
-In Pulsar, ZooKeeper configuration is handled by two separate configuration files in the `conf` directory of your Pulsar installation:
-* The `conf/zookeeper.conf` file handles the configuration for local ZooKeeper.
-* The `conf/global-zookeeper.conf` file handles the configuration for the configuration store.
-See [parameters](reference-configuration.md#zookeeper) for more details.
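-
-After starting local ZooKeeper or the configuration store, you may want to confirm that each server has joined its quorum. A minimal sketch, assuming default ports and, on ZooKeeper 3.5+, that the `srvr` four-letter command is whitelisted via `4lw.commands.whitelist`:
-
-```shell
-
-# Ask a server for its role (leader, follower, or observer) and basic status
-echo srvr | nc zk1.us-west.example.com 2181
-
-# Or browse the metadata tree with the shell bundled with Pulsar
-bin/pulsar zookeeper-shell -server zk1.us-west.example.com:2181 ls /
-
-```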
-
-#### Configure batching operations
-Using batching operations reduces the remote procedure call (RPC) traffic between the ZooKeeper client and servers. It also reduces the number of write transactions, because each batching operation corresponds to a single ZooKeeper transaction containing multiple read and write operations.
-
-The following figure demonstrates a basic benchmark of how many batched read/write operations can be requested from ZooKeeper in one second:
-
-![Zookeeper batching benchmark](/assets/zookeeper-batching.png)
-
-To enable batching operations, set the [`metadataStoreBatchingEnabled`](reference-configuration.md#broker) parameter to `true` on the broker side.
-
-
-## BookKeeper
-
-BookKeeper stores all durable messages in Pulsar. BookKeeper is a distributed [write-ahead log](https://en.wikipedia.org/wiki/Write-ahead_logging) (WAL) system that guarantees read consistency of independent message logs called ledgers. Individual BookKeeper servers are also called *bookies*.
-
-> To manage message persistence, retention, and expiry in Pulsar, refer to [cookbook](cookbooks-retention-expiry.md).
-
-### Hardware requirements
-
-Bookie hosts store message data on disk. To provide optimal performance, ensure that the bookies have a suitable hardware configuration. The following are two key dimensions of bookie hardware capacity:
-
-- Disk I/O capacity (read/write)
-- Storage capacity
-
-By default, message entries written to bookies are always synced to disk before an acknowledgement is returned to the Pulsar broker. To ensure low write latency, BookKeeper is designed to use multiple devices:
-
-- A **journal** to ensure durability. For sequential writes, it is critical to have fast [fsync](https://linux.die.net/man/2/fsync) operations on bookie hosts. Typically, small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) should suffice, or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache. Both solutions can reach fsync latency of ~0.4 ms.
-- A **ledger storage device** to store data. Writes happen in the background, so write I/O is not a big concern. Reads happen sequentially most of the time, and the backlog is read back only when a consumer drains it. To store large amounts of data, a typical configuration involves multiple HDDs with a RAID controller.
-
-### Configure BookKeeper
-
-You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. When you configure each bookie, ensure that the [`zkServers`](reference-configuration.md#bookkeeper-zkServers) parameter is set to the connection string for the Pulsar cluster's local ZooKeeper.
-
-The minimum configuration changes required in `conf/bookkeeper.conf` are as follows:
-
-:::note
-
-Set `journalDirectory` and `ledgerDirectories` carefully. It is difficult to change them later.
-
-:::
-
-```properties
-
-# Change to point to journal disk mount point
-journalDirectory=data/bookkeeper/journal
-
-# Point to ledger storage disk mount point
-ledgerDirectories=data/bookkeeper/ledgers
-
-# Point to local ZK quorum
-zkServers=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
-
-# It is recommended to set this parameter. Otherwise, BookKeeper can't start normally in certain environments (for example, Huawei Cloud).
-advertisedAddress=
-
-```
-
-To change the ZooKeeper root path that BookKeeper uses, use `zkLedgersRootPath=/MY-PREFIX/ledgers` instead of `zkServers=localhost:2181/MY-PREFIX`.
-
-> For more information about BookKeeper, refer to the official [BookKeeper docs](http://bookkeeper.apache.org).
-
-### Deploy BookKeeper
-
-BookKeeper provides [persistent message storage](concepts-architecture-overview.md#persistent-storage) for Pulsar. Each Pulsar broker has its own cluster of bookies. The BookKeeper cluster shares a local ZooKeeper quorum with the Pulsar cluster.
-
-### Start bookies manually
-
-You can start a bookie in the foreground or as a background daemon.
-
-To start a bookie in the foreground, use the [`bookkeeper`](reference-cli-tools.md#bookkeeper) CLI tool:
-
-```bash
-
-$ bin/bookkeeper bookie
-
-```
-
-To start a bookie in the background, use the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
-
-```bash
-
-$ bin/pulsar-daemon start bookie
-
-```
-
-You can verify whether the bookie works properly with the `bookiesanity` command for the [BookKeeper shell](reference-cli-tools.md#bookkeeper-shell):
-
-```shell
-
-$ bin/bookkeeper shell bookiesanity
-
-```
-
-This command creates a new ledger on the local bookie, writes a few entries, reads them back, and finally deletes the ledger.
-
-### Decommission bookies cleanly
-
-Before you decommission a bookie, you need to check your environment and meet the following requirements.
-
-1. Ensure the state of your cluster supports decommissioning the target bookie. Check whether `EnsembleSize >= Write Quorum >= Ack Quorum` still holds with one less bookie.
-
-2. Ensure the target bookie is listed in the output of the `listbookies` command.
-
-3. Ensure that no other process is ongoing (upgrade, etc.).
-
-Then you can decommission bookies safely. To decommission bookies, complete the following steps.
-
-1. Log in to the bookie node and check whether there are underreplicated ledgers. The decommission command forces replication of the underreplicated ledgers.
-`$ bin/bookkeeper shell listunderreplicated`
-
-2. Stop the bookie by killing the bookie process. If you deploy in a Kubernetes environment, make sure that no liveness/readiness probes are set up for the bookies that would spin them back up.
-
-3. Run the decommission command.
-   - If you have logged in to the node to be decommissioned, you do not need to provide `-bookieid`.
-   - If you are running the decommission command for the target bookie node from another bookie node, you should pass the target bookie ID via the `-bookieid` argument.
-   `$ bin/bookkeeper shell decommissionbookie`
-   or
-   `$ bin/bookkeeper shell decommissionbookie -bookieid `
-
-4. Validate that no ledgers are on the decommissioned bookie.
-`$ bin/bookkeeper shell listledgers -bookieid `
-
-You can run the following command to check whether the bookie you have decommissioned is listed in the bookies list:
-
-```bash
-
-./bookkeeper shell listbookies -rw -h
-./bookkeeper shell listbookies -ro -h
-
-```
-
-## BookKeeper persistence policies
-
-In Pulsar, you can set *persistence policies* at the namespace level, which determine how BookKeeper handles persistent storage of messages. Policies determine four things:
-
-* The number of acks (guaranteed copies) to wait for on each ledger entry.
-* The number of bookies to use for a topic.
-* The number of writes to make for each ledger entry.
-* The throttling rate for mark-delete operations.
-
-### Set persistence policies
-
-You can set persistence policies for BookKeeper at the [namespace](reference-terminology.md#namespace) level.
-
-#### Pulsar-admin
-
-Use the [`set-persistence`](reference-pulsar-admin.md#namespaces-set-persistence) subcommand and specify a namespace as well as any policies that you want to apply. The available flags are:
-
-Flag | Description | Default
-:----|:------------|:-------
-`-a`, `--bookkeeper-ack-quorum` | The number of acks (guaranteed copies) to wait on for each entry | 0
-`-e`, `--bookkeeper-ensemble` | The number of [bookies](reference-terminology.md#bookie) to use for topics in the namespace | 0
-`-w`, `--bookkeeper-write-quorum` | The number of writes to make for each entry | 0
-`-r`, `--ml-mark-delete-max-rate` | Throttling rate for mark-delete operations (0 means no throttle) | 0
-
-The following is an example (note that the values must satisfy `ensemble >= write quorum >= ack quorum`):
-
-```shell
-
-$ pulsar-admin namespaces set-persistence my-tenant/my-ns \
-  --bookkeeper-ensemble 3 \
-  --bookkeeper-write-quorum 2 \
-  --bookkeeper-ack-quorum 2
-
-```
-
-#### REST API
-
-{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/setPersistence?version=@pulsar:version_number@}
-
-#### Java
-
-```java
-
-int bkEnsemble = 3;
-int bkQuorum = 2;
-int bkAckQuorum = 2;
-double markDeleteRate = 0.7;
-PersistencePolicies policies =
-  new PersistencePolicies(bkEnsemble, bkQuorum, bkAckQuorum, markDeleteRate);
-admin.namespaces().setPersistence(namespace, policies);
-
-```
-
-### List persistence policies
-
-You can see which persistence policy currently applies to a namespace.
-
-#### Pulsar-admin
-
-Use the [`get-persistence`](reference-pulsar-admin.md#namespaces-get-persistence) subcommand and specify the namespace.
-
-The following is an example:
-
-```shell
-
-$ pulsar-admin namespaces get-persistence my-tenant/my-ns
-{
-  "bookkeeperEnsemble": 1,
-  "bookkeeperWriteQuorum": 1,
-  "bookkeeperAckQuorum": 1,
-  "managedLedgerMaxMarkDeleteRate": 0
-}
-
-```
-
-#### REST API
-
-{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/getPersistence?version=@pulsar:version_number@}
-
-#### Java
-
-```java
-
-PersistencePolicies policies = admin.namespaces().getPersistence(namespace);
-
-```
-
-## How Pulsar uses ZooKeeper and BookKeeper
-
-This diagram illustrates the role of ZooKeeper and BookKeeper in a Pulsar cluster:
-
-![ZooKeeper and BookKeeper](/assets/pulsar-system-architecture.png)
-
-Each Pulsar cluster consists of one or more message brokers. Each broker relies on an ensemble of bookies.
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/client-libraries-cgo.md b/site2/website/versioned_docs/version-2.10.1-deprecated/client-libraries-cgo.md
deleted file mode 100644
index feee2cac3bafbd..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/client-libraries-cgo.md
+++ /dev/null
@@ -1,581 +0,0 @@
----
-id: client-libraries-cgo
-title: Pulsar CGo client
-sidebar_label: "CGo(deprecated)"
-original_id: client-libraries-cgo
----
-
-> The CGo client has been deprecated since version 2.7.0. If possible, use the [Go client](client-libraries-go.md) instead.
-
-You can use the Pulsar Go client to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Go (aka Golang).
-
-All the methods in [producers](#producers), [consumers](#consumers), and [readers](#readers) of a Go client are thread-safe.
-
-Currently, the following Go clients are maintained in two repositories.
- -| Language | Project | Maintainer | License | Description | -|----------|---------|------------|---------|-------------| -| CGo | [pulsar-client-go](https://github.com/apache/pulsar/tree/master/pulsar-client-go) | [Apache Pulsar](https://github.com/apache/pulsar) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | CGo client that depends on C++ client library | -| Go | [pulsar-client-go](https://github.com/apache/pulsar-client-go) | [Apache Pulsar](https://github.com/apache/pulsar) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | A native golang client | - -> **API docs available as well** -> For standard API docs, consult the [Godoc](https://godoc.org/github.com/apache/pulsar/pulsar-client-go/pulsar). - -## Installation - -### Requirements - -Pulsar Go client library is based on the C++ client library. Follow -the instructions for [C++ library](client-libraries-cpp.md) for installing the binaries through [RPM](client-libraries-cpp.md#rpm), [Deb](client-libraries-cpp.md#deb) or [Homebrew packages](client-libraries-cpp.md#macos). - -### Install go package - -> **Compatibility Warning** -> The version number of the Go client **must match** the version number of the Pulsar C++ client library. - -You can install the `pulsar` library locally using `go get`. Note that `go get` doesn't support fetching a specific tag - it will always pull in master's version of the Go client. You'll need a C++ client library that matches master. - -```bash - -$ go get -u github.com/apache/pulsar/pulsar-client-go/pulsar - -``` - -Or you can use [dep](https://github.com/golang/dep) for managing the dependencies. - -```bash - -$ dep ensure -add github.com/apache/pulsar/pulsar-client-go/pulsar@v@pulsar:version@ - -``` - -Once installed locally, you can import it into your project: - -```go - -import "github.com/apache/pulsar/pulsar-client-go/pulsar" - -``` - -## Connection URLs - -To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL. - -Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme and have a default port of 6650. Here's an example for `localhost`: - -```http - -pulsar://localhost:6650 - -``` - -A URL for a production Pulsar cluster may look something like this: - -```http - -pulsar://pulsar.us-west.example.com:6650 - -``` - -If you're using [TLS](security-tls-authentication.md) authentication, the URL will look like something like this: - -```http - -pulsar+ssl://pulsar.us-west.example.com:6651 - -``` - -## Create a client - -In order to interact with Pulsar, you'll first need a `Client` object. You can create a client object using the `NewClient` function, passing in a `ClientOptions` object (more on configuration [below](#client-configuration)). Here's an example: - -```go - -import ( - "log" - "runtime" - - "github.com/apache/pulsar/pulsar-client-go/pulsar" -) - -func main() { - client, err := pulsar.NewClient(pulsar.ClientOptions{ - URL: "pulsar://localhost:6650", - OperationTimeoutSeconds: 5, - MessageListenerThreads: runtime.NumCPU(), - }) - - if err != nil { - log.Fatalf("Could not instantiate Pulsar client: %v", err) - } -} - -``` - -The following configurable parameters are available for Pulsar clients: - -Parameter | Description | Default -:---------|:------------|:------- -`URL` | The connection URL for the Pulsar cluster. 
See [above](#urls) for more info | -`IOThreads` | The number of threads to use for handling connections to Pulsar [brokers](reference-terminology.md#broker) | 1 -`OperationTimeoutSeconds` | The timeout for some Go client operations (creating producers, subscribing to and unsubscribing from [topics](reference-terminology.md#topic)). Retries will occur until this threshold is reached, at which point the operation will fail. | 30 -`MessageListenerThreads` | The number of threads used by message listeners ([consumers](#consumers) and [readers](#readers)) | 1 -`ConcurrentLookupRequests` | The number of concurrent lookup requests that can be sent on each broker connection. Setting a maximum helps to keep from overloading brokers. You should set values over the default of 5000 only if the client needs to produce and/or subscribe to thousands of Pulsar topics. | 5000 -`Logger` | A custom logger implementation for the client (as a function that takes a log level, file path, line number, and message). All info, warn, and error messages will be routed to this function. | `nil` -`TLSTrustCertsFilePath` | The file path for the trusted TLS certificate | -`TLSAllowInsecureConnection` | Whether the client accepts untrusted TLS certificates from the broker | `false` -`Authentication` | Configure the authentication provider. (default: no authentication). Example: `Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem")` | `nil` -`StatsIntervalInSeconds` | The interval (in seconds) at which client stats are published | 60 - -## Producers - -Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Go producers using a `ProducerOptions` object. Here's an example: - -```go - -producer, err := client.CreateProducer(pulsar.ProducerOptions{ - Topic: "my-topic", -}) - -if err != nil { - log.Fatalf("Could not instantiate Pulsar producer: %v", err) -} - -defer producer.Close() - -msg := pulsar.ProducerMessage{ - Payload: []byte("Hello, Pulsar"), -} - -if err := producer.Send(context.Background(), msg); err != nil { - log.Fatalf("Producer could not send message: %v", err) -} - -``` - -> **Blocking operation** -> When you create a new Pulsar producer, the operation will block (waiting on a go channel) until either a producer is successfully created or an error is thrown. - - -### Producer operations - -Pulsar Go producers have the following methods available: - -Method | Description | Return type -:------|:------------|:----------- -`Topic()` | Fetches the producer's [topic](reference-terminology.md#topic)| `string` -`Name()` | Fetches the producer's name | `string` -`Send(context.Context, ProducerMessage)` | Publishes a [message](#messages) to the producer's topic. This call will block until the message is successfully acknowledged by the Pulsar broker, or an error will be thrown if the timeout set using the `SendTimeout` in the producer's [configuration](#producer-configuration) is exceeded. | `error` -`SendAndGetMsgID(context.Context, ProducerMessage)`| Send a message, this call will be blocking until is successfully acknowledged by the Pulsar broker. | (MessageID, error) -`SendAsync(context.Context, ProducerMessage, func(ProducerMessage, error))` | Publishes a [message](#messages) to the producer's topic asynchronously. The third argument is a callback function that specifies what happens either when the message is acknowledged or an error is thrown. 
|
-`SendAndGetMsgIDAsync(context.Context, ProducerMessage, func(MessageID, error))`| Send a message in asynchronous mode. The callback will report back the message being published and the eventual error in publishing |
-`LastSequenceID()` | Get the last sequence id that was published by this producer. This represents either the automatically assigned or custom sequence id (set on the ProducerMessage) that was published and acknowledged by the broker. | int64
-`Flush()`| Flush all the messages buffered in the client and wait until all messages have been successfully persisted. | error
-`Close()` | Closes the producer and releases all resources allocated to it. If `Close()` is called then no more messages will be accepted from the publisher. This method will block until all pending publish requests have been persisted by Pulsar. If an error is thrown, no pending writes will be retried. | `error`
-`Schema()` | Fetches the producer's schema | Schema
-
-Here's a more involved example usage of a producer:
-
-```go
-
-import (
-    "context"
-    "fmt"
-    "log"
-
-    "github.com/apache/pulsar/pulsar-client-go/pulsar"
-)
-
-func main() {
-    // Instantiate a Pulsar client
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-        URL: "pulsar://localhost:6650",
-    })
-
-    if err != nil { log.Fatal(err) }
-
-    // Use the client to instantiate a producer
-    producer, err := client.CreateProducer(pulsar.ProducerOptions{
-        Topic: "my-topic",
-    })
-
-    if err != nil { log.Fatal(err) }
-
-    ctx := context.Background()
-
-    // Send 10 messages synchronously and 10 messages asynchronously
-    for i := 0; i < 10; i++ {
-        // Create a message
-        msg := pulsar.ProducerMessage{
-            Payload: []byte(fmt.Sprintf("message-%d", i)),
-        }
-
-        // Attempt to send the message
-        if err := producer.Send(ctx, msg); err != nil {
-            log.Fatal(err)
-        }
-
-        // Create a different message to send asynchronously
-        asyncMsg := pulsar.ProducerMessage{
-            Payload: []byte(fmt.Sprintf("async-message-%d", i)),
-        }
-
-        // Attempt to send the message asynchronously and handle the response
-        producer.SendAsync(ctx, asyncMsg, func(msg pulsar.ProducerMessage, err error) {
-            if err != nil { log.Fatal(err) }
-
-            fmt.Printf("the %s successfully published", string(msg.Payload))
-        })
-    }
-}
-
-```
-
-### Producer configuration
-
-Parameter | Description | Default
-:---------|:------------|:-------
-`Topic` | The Pulsar [topic](reference-terminology.md#topic) to which the producer will publish messages |
-`Name` | A name for the producer. If you don't explicitly assign a name, Pulsar will automatically generate a globally unique name that you can access later using the `Name()` method. If you choose to explicitly assign a name, it will need to be unique across *all* Pulsar clusters, otherwise the creation operation will throw an error. |
-`Properties`| Attach a set of application defined properties to the producer. These properties will be visible in the topic stats |
-`SendTimeout` | When publishing a message to a topic, the producer will wait for an acknowledgment from the responsible Pulsar [broker](reference-terminology.md#broker). If a message is not acknowledged within the threshold set by this parameter, an error will be thrown. If you set `SendTimeout` to -1, the timeout will be set to infinity (and thus removed). Removing the send timeout is recommended when using Pulsar's [message de-duplication](cookbooks-deduplication.md) feature. | 30 seconds
-`MaxPendingMessages` | The maximum size of the queue holding pending messages (i.e. messages waiting to receive an acknowledgment from the [broker](reference-terminology.md#broker)). By default, when the queue is full all calls to the `Send` and `SendAsync` methods will fail *unless* `BlockIfQueueFull` is set to `true`. |
-`MaxPendingMessagesAcrossPartitions` | Set the number of max pending messages across all the partitions. This setting will be used to lower the max pending messages for each partition `MaxPendingMessages(int)`, if the total exceeds the configured value.|
-`BlockIfQueueFull` | If set to `true`, the producer's `Send` and `SendAsync` methods will block when the outgoing message queue is full rather than failing and throwing an error (the size of that queue is dictated by the `MaxPendingMessages` parameter); if set to `false` (the default), `Send` and `SendAsync` operations will fail and throw a `ProducerQueueIsFullError` when the queue is full. | `false`
-`MessageRoutingMode` | The message routing logic (for producers on [partitioned topics](concepts-architecture-overview.md#partitioned-topics)). This logic is applied only when no key is set on messages. The available options are: round robin (`pulsar.RoundRobinDistribution`, the default), publishing all messages to a single partition (`pulsar.UseSinglePartition`), or a custom partitioning scheme (`pulsar.CustomPartition`). | `pulsar.RoundRobinDistribution`
-`HashingScheme` | The hashing function that determines the partition on which a particular message is published (partitioned topics only). The available options are: `pulsar.JavaStringHash` (the equivalent of `String.hashCode()` in Java), `pulsar.Murmur3_32Hash` (applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function), or `pulsar.BoostHash` (applies the hashing function from C++'s [Boost](https://www.boost.org/doc/libs/1_62_0/doc/html/hash.html) library) | `pulsar.JavaStringHash`
-`CompressionType` | The message data compression type used by the producer. The available options are [`LZ4`](https://github.com/lz4/lz4), [`ZLIB`](https://zlib.net/), [`ZSTD`](https://facebook.github.io/zstd/) and [`SNAPPY`](https://google.github.io/snappy/). | No compression
-`MessageRouter` | By default, Pulsar uses a round-robin routing scheme for [partitioned topics](cookbooks-partitioned.md). The `MessageRouter` parameter enables you to specify custom routing logic via a function that takes the Pulsar message and topic metadata as an argument and returns an integer (the index of the partition to route to), i.e. a function signature of `func(Message, TopicMetadata) int`. |
-`Batching` | Control whether automatic batching of messages is enabled for the producer. | false
-`BatchingMaxPublishDelay` | Set the time period within which the messages sent will be batched (default: 1ms) if batch messages are enabled. If set to a non-zero value, messages will be queued until this time interval elapses or until the batch is full (see `BatchingMaxMessages`). | 1ms
-`BatchingMaxMessages` | Set the maximum number of messages permitted in a batch. (default: 1000) If set to a value greater than 1, messages will be queued until this threshold is reached or the batch interval has elapsed | 1000
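-
-As a sketch of how the batching knobs above fit together, the following assumes `BatchingMaxPublishDelay` takes a `time.Duration` and `BatchingMaxMessages` an integer count, with the field names as listed in the table:
-
-```go
-
-producer, err := client.CreateProducer(pulsar.ProducerOptions{
-    Topic:    "my-topic",
-    Batching: true,
-    // Flush a batch after 10ms or 500 queued messages, whichever comes first
-    BatchingMaxPublishDelay: 10 * time.Millisecond,
-    BatchingMaxMessages:     500,
-})
-if err != nil {
-    log.Fatalf("Could not instantiate Pulsar producer: %v", err)
-}
-
-```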
-
-## Consumers
-
-Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on that topic/those topics. You can [configure](#consumer-configuration) Go consumers using a `ConsumerOptions` object. Here's a basic example that uses channels:
-
-```go
-
-msgChannel := make(chan pulsar.ConsumerMessage)
-
-consumerOpts := pulsar.ConsumerOptions{
-    Topic:            "my-topic",
-    SubscriptionName: "my-subscription-1",
-    Type:             pulsar.Exclusive,
-    MessageChannel:   msgChannel,
-}
-
-consumer, err := client.Subscribe(consumerOpts)
-
-if err != nil {
-    log.Fatalf("Could not establish subscription: %v", err)
-}
-
-defer consumer.Close()
-
-for cm := range msgChannel {
-    msg := cm.Message
-
-    fmt.Printf("Message ID: %s", msg.ID())
-    fmt.Printf("Message value: %s", string(msg.Payload()))
-
-    consumer.Ack(msg)
-}
-
-```
-
-> **Blocking operation**
-> When you create a new Pulsar consumer, the operation will block (on a go channel) until either a consumer is successfully created or an error is thrown.
-
-
-### Consumer operations
-
-Pulsar Go consumers have the following methods available:
-
-Method | Description | Return type
-:------|:------------|:-----------
-`Topic()` | Returns the consumer's [topic](reference-terminology.md#topic) | `string`
-`Subscription()` | Returns the consumer's subscription name | `string`
-`Unsubscribe()` | Unsubscribes the consumer from the assigned topic. Throws an error if the unsubscribe operation is somehow unsuccessful. | `error`
-`Receive(context.Context)` | Receives a single message from the topic. This method blocks until a message is available. | `(Message, error)`
-`Ack(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) | `error`
-`AckID(MessageID)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID | `error`
-`AckCumulative(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message. The `AckCumulative` method will block until the ack has been sent to the broker. After that, the messages will *not* be redelivered to the consumer. Cumulative acking cannot be used with a [shared](concepts-messaging.md#shared) subscription type. | `error`
-`AckCumulativeID(MessageID)` | Ack the reception of all the messages in the stream up to (and including) the provided message. This method will block until the acknowledge has been sent to the broker. After that, the messages will not be re-delivered to this consumer. | error
-`Nack(Message)` | Acknowledge the failure to process a single message. | `error`
-`NackID(MessageID)` | Acknowledge the failure to process a single message. | `error`
-`Close()` | Closes the consumer, disabling its ability to receive messages from the broker | `error`
-`RedeliverUnackedMessages()` | Redelivers *all* unacknowledged messages on the topic. In [failover](concepts-messaging.md#failover) mode, this request is ignored if the consumer isn't active on the specified topic; in [shared](concepts-messaging.md#shared) mode, redelivered messages are distributed across all consumers connected to the topic. **Note**: this is a *non-blocking* operation that doesn't throw an error. |
-`Seek(msgID MessageID)` | Reset the subscription associated with this consumer to a specific message id. The message id can either be a specific message or represent the first or last messages in the topic. 
| error - -#### Receive example - -Here's an example usage of a Go consumer that uses the `Receive()` method to process incoming messages: - -```go - -import ( - "context" - "log" - - "github.com/apache/pulsar/pulsar-client-go/pulsar" -) - -func main() { - // Instantiate a Pulsar client - client, err := pulsar.NewClient(pulsar.ClientOptions{ - URL: "pulsar://localhost:6650", - }) - - if err != nil { log.Fatal(err) } - - // Use the client object to instantiate a consumer - consumer, err := client.Subscribe(pulsar.ConsumerOptions{ - Topic: "my-golang-topic", - SubscriptionName: "sub-1", - Type: pulsar.Exclusive, - }) - - if err != nil { log.Fatal(err) } - - defer consumer.Close() - - ctx := context.Background() - - // Listen indefinitely on the topic - for { - msg, err := consumer.Receive(ctx) - if err != nil { log.Fatal(err) } - - // Do something with the message - err = processMessage(msg) - - if err == nil { - // Message processed successfully - consumer.Ack(msg) - } else { - // Failed to process messages - consumer.Nack(msg) - } - } -} - -``` - -### Consumer configuration - -Parameter | Description | Default -:---------|:------------|:------- -`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the consumer will establish a subscription and listen for messages | -`Topics` | Specify a list of topics this consumer will subscribe on. Either a topic, a list of topics or a topics pattern are required when subscribing | -`TopicsPattern` | Specify a regular expression to subscribe to multiple topics under the same namespace. Either a topic, a list of topics or a topics pattern are required when subscribing | -`SubscriptionName` | The subscription name for this consumer | -`Properties` | Attach a set of application defined properties to the consumer. This properties will be visible in the topic stats| -`Name` | The name of the consumer | -`AckTimeout` | Set the timeout for unacked messages | 0 -`NackRedeliveryDelay` | The delay after which to redeliver the messages that failed to be processed. Default is 1min. (See `Consumer.Nack()`) | 1 minute -`Type` | Available options are `Exclusive`, `Shared`, and `Failover` | `Exclusive` -`SubscriptionInitPos` | InitialPosition at which the cursor will be set when subscribe | Latest -`MessageChannel` | The Go channel used by the consumer. Messages that arrive from the Pulsar topic(s) will be passed to this channel. | -`ReceiverQueueSize` | Sets the size of the consumer's receiver queue, i.e. the number of messages that can be accumulated by the consumer before the application calls `Receive`. A value higher than the default of 1000 could increase consumer throughput, though at the expense of more memory utilization. | 1000 -`MaxTotalReceiverQueueSizeAcrossPartitions` |Set the max total receiver queue size across partitions. This setting will be used to reduce the receiver queue size for individual partitions if the total exceeds this value | 50000 -`ReadCompacted` | If enabled, the consumer will read messages from the compacted topic rather than reading the full message backlog of the topic. This means that, if the topic has been compacted, the consumer will only see the latest value for each key in the topic, up until the point in the topic message backlog that has been compacted. Beyond that point, the messages will be sent as normal. | - -## Readers - -Pulsar readers process messages from Pulsar topics. 
Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recent unacked message). You can [configure](#reader-configuration) Go readers using a `ReaderOptions` object. Here's an example: - -```go - -reader, err := client.CreateReader(pulsar.ReaderOptions{ - Topic: "my-golang-topic", - StartMessageId: pulsar.LatestMessage, -}) - -``` - -> **Blocking operation** -> When you create a new Pulsar reader, the operation will block (on a go channel) until either a reader is successfully created or an error is thrown. - - -### Reader operations - -Pulsar Go readers have the following methods available: - -Method | Description | Return type -:------|:------------|:----------- -`Topic()` | Returns the reader's [topic](reference-terminology.md#topic) | `string` -`Next(context.Context)` | Receives the next message on the topic (analogous to the `Receive` method for [consumers](#consumer-operations)). This method blocks until a message is available. | `(Message, error)` -`HasNext()` | Check if there is any message available to read from the current position| (bool, error) -`Close()` | Closes the reader, disabling its ability to receive messages from the broker | `error` - -#### "Next" example - -Here's an example usage of a Go reader that uses the `Next()` method to process incoming messages: - -```go - -import ( - "context" - "log" - - "github.com/apache/pulsar/pulsar-client-go/pulsar" -) - -func main() { - // Instantiate a Pulsar client - client, err := pulsar.NewClient(pulsar.ClientOptions{ - URL: "pulsar://localhost:6650", - }) - - if err != nil { log.Fatalf("Could not create client: %v", err) } - - // Use the client to instantiate a reader - reader, err := client.CreateReader(pulsar.ReaderOptions{ - Topic: "my-golang-topic", - StartMessageID: pulsar.EarliestMessage, - }) - - if err != nil { log.Fatalf("Could not create reader: %v", err) } - - defer reader.Close() - - ctx := context.Background() - - // Listen on the topic for incoming messages - for { - msg, err := reader.Next(ctx) - if err != nil { log.Fatalf("Error reading from topic: %v", err) } - - // Process the message - } -} - -``` - -In the example above, the reader begins reading from the earliest available message (specified by `pulsar.EarliestMessage`). The reader can also begin reading from the latest message (`pulsar.LatestMessage`) or some other message ID specified by bytes using the `DeserializeMessageID` function, which takes a byte array and returns a `MessageID` object. Here's an example: - -```go - -lastSavedId := // Read last saved message id from external store as byte[] - -reader, err := client.CreateReader(pulsar.ReaderOptions{ - Topic: "my-golang-topic", - StartMessageID: DeserializeMessageID(lastSavedId), -}) - -``` - -### Reader configuration - -Parameter | Description | Default -:---------|:------------|:------- -`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the reader will establish a subscription and listen for messages -`Name` | The name of the reader -`StartMessageID` | The initial reader position, i.e. the message at which the reader begins processing messages. The options are `pulsar.EarliestMessage` (the earliest available message on the topic), `pulsar.LatestMessage` (the latest available message on the topic), or a `MessageID` object for a position that isn't earliest or latest. | -`MessageChannel` | The Go channel used by the reader. 
Messages that arrive from the Pulsar topic(s) will be passed to this channel.|
-`ReceiverQueueSize` | Sets the size of the reader's receiver queue, i.e. the number of messages that can be accumulated by the reader before the application calls `Next`. A value higher than the default of 1000 could increase reader throughput, though at the expense of more memory utilization. | 1000
-`SubscriptionRolePrefix` | The subscription role prefix. | `reader`
-`ReadCompacted` | If enabled, the reader will read messages from the compacted topic rather than reading the full message backlog of the topic. This means that, if the topic has been compacted, the reader will only see the latest value for each key in the topic, up until the point in the topic message backlog that has been compacted. Beyond that point, the messages will be sent as normal.|
-
-## Messages
-
-The Pulsar Go client provides a `ProducerMessage` interface that you can use to construct messages to produce on Pulsar topics. Here's an example message:
-
-```go
-
-msg := pulsar.ProducerMessage{
-    Payload: []byte("Here is some message data"),
-    Key: "message-key",
-    Properties: map[string]string{
-        "foo": "bar",
-    },
-    EventTime: time.Now(),
-    ReplicationClusters: []string{"cluster1", "cluster3"},
-}
-
-if err := producer.Send(context.Background(), msg); err != nil {
-    log.Fatalf("Could not publish message due to: %v", err)
-}
-
-```
-
-The following parameters are available for `ProducerMessage` objects:
-
-Parameter | Description
-:---------|:-----------
-`Payload` | The actual data payload of the message
-`Value` | Value and payload are mutually exclusive; use `Value interface{}` for schema messages.
-`Key` | The optional key associated with the message (particularly useful for things like topic compaction)
-`Properties` | A key-value map (both keys and values must be strings) for any application-specific metadata attached to the message
-`EventTime` | The timestamp associated with the message
-`ReplicationClusters` | The clusters to which this message will be replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default.
-`SequenceID` | Set the sequence id to assign to the current message
-
-## TLS encryption and authentication
-
-In order to use [TLS encryption](security-tls-transport.md), you'll need to configure your client to do so:
-
- * Use the `pulsar+ssl` URL type
- * Set `TLSTrustCertsFilePath` to the path to the TLS certs used by your client and the Pulsar broker
 - * Configure the `Authentication` option
-
-Here's an example:
-
-```go
-
-opts := pulsar.ClientOptions{
-    URL: "pulsar+ssl://my-cluster.com:6651",
-    TLSTrustCertsFilePath: "/path/to/certs/my-cert.csr",
-    Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem"),
-}
-
-```
-
-## Schema
-
-This example shows how to create a producer and consumer with schema. 
- -```go - -var exampleSchemaDef = "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," + - "\"fields\":[{\"name\":\"ID\",\"type\":\"int\"},{\"name\":\"Name\",\"type\":\"string\"}]}" -jsonSchema := NewJsonSchema(exampleSchemaDef, nil) -// create producer -producer, err := client.CreateProducerWithSchema(ProducerOptions{ - Topic: "jsonTopic", -}, jsonSchema) -err = producer.Send(context.Background(), ProducerMessage{ - Value: &testJson{ - ID: 100, - Name: "pulsar", - }, -}) -if err != nil { - log.Fatal(err) -} -defer producer.Close() -//create consumer -var s testJson -consumerJS := NewJsonSchema(exampleSchemaDef, nil) -consumer, err := client.SubscribeWithSchema(ConsumerOptions{ - Topic: "jsonTopic", - SubscriptionName: "sub-2", -}, consumerJS) -if err != nil { - log.Fatal(err) -} -msg, err := consumer.Receive(context.Background()) -if err != nil { - log.Fatal(err) -} -err = msg.GetValue(&s) -if err != nil { - log.Fatal(err) -} -fmt.Println(s.ID) // output: 100 -fmt.Println(s.Name) // output: pulsar -defer consumer.Close() - -``` - diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/client-libraries-cpp.md b/site2/website/versioned_docs/version-2.10.1-deprecated/client-libraries-cpp.md deleted file mode 100644 index f5b8ae3678de21..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/client-libraries-cpp.md +++ /dev/null @@ -1,765 +0,0 @@ ---- -id: client-libraries-cpp -title: Pulsar C++ client -sidebar_label: "C++" -original_id: client-libraries-cpp ---- - -You can use Pulsar C++ client to create Pulsar producers and consumers in C++. - -All the methods in producer, consumer, and reader of a C++ client are thread-safe. - -## Supported platforms - -Pulsar C++ client is supported on **Linux** ,**MacOS** and **Windows** platforms. - -[Doxygen](http://www.doxygen.nl/)-generated API docs for the C++ client are available [here](/api/cpp). - - -## Linux - -:::note - -You can choose one of the following installation methods based on your needs: Compilation, Install RPM or Install Debian. - -::: - -### Compilation - -#### System requirements - -You need to install the following components before using the C++ client: - -* [CMake](https://cmake.org/) -* [Boost](http://www.boost.org/) -* [Protocol Buffers](https://developers.google.com/protocol-buffers/) >= 3 -* [libcurl](https://curl.se/libcurl/) -* [Google Test](https://github.com/google/googletest) - -1. Clone the Pulsar repository. - -```shell - -$ git clone https://github.com/apache/pulsar - -``` - -2. Install all necessary dependencies. - -```shell - -$ apt-get install cmake libssl-dev libcurl4-openssl-dev liblog4cxx-dev \ - libprotobuf-dev protobuf-compiler libboost-all-dev google-mock libgtest-dev libjsoncpp-dev - -``` - -3. Compile and install [Google Test](https://github.com/google/googletest). - -```shell - -# libgtest-dev version is 1.18.0 or above -$ cd /usr/src/googletest -$ sudo cmake . -$ sudo make -$ sudo cp ./googlemock/libgmock.a ./googlemock/gtest/libgtest.a /usr/lib/ - -# less than 1.18.0 -$ cd /usr/src/gtest -$ sudo cmake . -$ sudo make -$ sudo cp libgtest.a /usr/lib - -$ cd /usr/src/gmock -$ sudo cmake . -$ sudo make -$ sudo cp libgmock.a /usr/lib - -``` - -4. Compile the Pulsar client library for C++ inside the Pulsar repository. - -```shell - -$ cd pulsar-client-cpp -$ cmake . -$ make - -``` - -After you install the components successfully, the files `libpulsar.so` and `libpulsar.a` are in the `lib` folder of the repository. 
The tools `perfProducer` and `perfConsumer` are in the `perf` directory. - -### Install Dependencies - -> Since 2.1.0 release, Pulsar ships pre-built RPM and Debian packages. You can download and install those packages directly. - -After you download and install RPM or DEB, the `libpulsar.so`, `libpulsarnossl.so`, `libpulsar.a`, and `libpulsarwithdeps.a` libraries are in your `/usr/lib` directory. - -By default, they are built in code path `${PULSAR_HOME}/pulsar-client-cpp`. You can build with the command below. - - `cmake . -DBUILD_TESTS=OFF -DLINK_STATIC=ON && make pulsarShared pulsarSharedNossl pulsarStatic pulsarStaticWithDeps -j 3`. - -These libraries rely on some other libraries. If you want to get detailed version of dependencies, see [RPM](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/pkg/rpm/Dockerfile) or [DEB](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/pkg/deb/Dockerfile) files. - -1. `libpulsar.so` is a shared library, containing statically linked `boost` and `openssl`. It also dynamically links all other necessary libraries. You can use this Pulsar library with the command below. - -```bash - - g++ --std=c++11 PulsarTest.cpp -o test /usr/lib/libpulsar.so -I/usr/local/ssl/include - -``` - -2. `libpulsarnossl.so` is a shared library, similar to `libpulsar.so` except that the libraries `openssl` and `crypto` are dynamically linked. You can use this Pulsar library with the command below. - -```bash - - g++ --std=c++11 PulsarTest.cpp -o test /usr/lib/libpulsarnossl.so -lssl -lcrypto -I/usr/local/ssl/include -L/usr/local/ssl/lib - -``` - -3. `libpulsar.a` is a static library. You need to load dependencies before using this library. You can use this Pulsar library with the command below. - -```bash - - g++ --std=c++11 PulsarTest.cpp -o test /usr/lib/libpulsar.a -lssl -lcrypto -ldl -lpthread -I/usr/local/ssl/include -L/usr/local/ssl/lib -lboost_system -lboost_regex -lcurl -lprotobuf -lzstd -lz - -``` - -4. `libpulsarwithdeps.a` is a static library, based on `libpulsar.a`. It is archived in the dependencies of `libboost_regex`, `libboost_system`, `libcurl`, `libprotobuf`, `libzstd` and `libz`. You can use this Pulsar library with the command below. - -```bash - - g++ --std=c++11 PulsarTest.cpp -o test /usr/lib/libpulsarwithdeps.a -lssl -lcrypto -ldl -lpthread -I/usr/local/ssl/include -L/usr/local/ssl/lib - -``` - -The `libpulsarwithdeps.a` does not include library openssl related libraries `libssl` and `libcrypto`, because these two libraries are related to security. It is more reasonable and easier to use the versions provided by the local system to handle security issues and upgrade libraries. - -### Install RPM - -1. Download a RPM package from the links in the table. - -| Link | Crypto files | -|------|--------------| -| [client](@pulsar:dist_rpm:client@) | [asc](@pulsar:dist_rpm:client@.asc), [sha512](@pulsar:dist_rpm:client@.sha512) | -| [client-debuginfo](@pulsar:dist_rpm:client-debuginfo@) | [asc](@pulsar:dist_rpm:client-debuginfo@.asc), [sha512](@pulsar:dist_rpm:client-debuginfo@.sha512) | -| [client-devel](@pulsar:dist_rpm:client-devel@) | [asc](@pulsar:dist_rpm:client-devel@.asc), [sha512](@pulsar:dist_rpm:client-devel@.sha512) | - -2. Install the package using the following command. 
- -```bash - -$ rpm -ivh apache-pulsar-client*.rpm - -``` - -After you install RPM successfully, Pulsar libraries are in the `/usr/lib` directory, for example: - -```bash - -lrwxrwxrwx 1 root root 18 Dec 30 22:21 libpulsar.so -> libpulsar.so.2.9.1 -lrwxrwxrwx 1 root root 23 Dec 30 22:21 libpulsarnossl.so -> libpulsarnossl.so.2.9.1 - -``` - -:::note - -If you get the error that `libpulsar.so: cannot open shared object file: No such file or directory` when starting Pulsar client, you may need to run `ldconfig` first. - -::: - -2. Install the GCC and g++ using the following command, otherwise errors would occur in installing Node.js. - -```bash - -$ sudo yum -y install gcc automake autoconf libtool make -$ sudo yum -y install gcc-c++ - -``` - -### Install Debian - -1. Download a Debian package from the links in the table. - -| Link | Crypto files | -|------|--------------| -| [client](@pulsar:deb:client@) | [asc](@pulsar:dist_deb:client@.asc), [sha512](@pulsar:dist_deb:client@.sha512) | -| [client-devel](@pulsar:deb:client-devel@) | [asc](@pulsar:dist_deb:client-devel@.asc), [sha512](@pulsar:dist_deb:client-devel@.sha512) | - -2. Install the package using the following command. - -```bash - -$ apt install ./apache-pulsar-client*.deb - -``` - -After you install DEB successfully, Pulsar libraries are in the `/usr/lib` directory. - -### Build - -> If you want to build RPM and Debian packages from the latest master, follow the instructions below. You should run all the instructions at the root directory of your cloned Pulsar repository. - -There are recipes that build RPM and Debian packages containing a -statically linked `libpulsar.so` / `libpulsarnossl.so` / `libpulsar.a` / `libpulsarwithdeps.a` with all required dependencies. - -To build the C++ library packages, you need to build the Java packages first. - -```shell - -mvn install -DskipTests - -``` - -#### RPM - -To build the RPM inside a Docker container, use the command below. The RPMs are in the `pulsar-client-cpp/pkg/rpm/RPMS/x86_64/` path. - -```shell - -pulsar-client-cpp/pkg/rpm/docker-build-rpm.sh - -``` - -| Package name | Content | -|-----|-----| -| pulsar-client | Shared library `libpulsar.so` and `libpulsarnossl.so` | -| pulsar-client-devel | Static library `libpulsar.a`, `libpulsarwithdeps.a`and C++ and C headers | -| pulsar-client-debuginfo | Debug symbols for `libpulsar.so` | - -#### Debian - -To build Debian packages, enter the following command. - -```shell - -pulsar-client-cpp/pkg/deb/docker-build-deb.sh - -``` - -Debian packages are created in the `pulsar-client-cpp/pkg/deb/BUILD/DEB/` path. - -| Package name | Content | -|-----|-----| -| pulsar-client | Shared library `libpulsar.so` and `libpulsarnossl.so` | -| pulsar-client-dev | Static library `libpulsar.a`, `libpulsarwithdeps.a` and C++ and C headers | - -## MacOS - -### Compilation - -1. Clone the Pulsar repository. - -```shell - -$ git clone https://github.com/apache/pulsar - -``` - -2. Install all necessary dependencies. - -```shell - -# OpenSSL installation -$ brew install openssl -$ export OPENSSL_INCLUDE_DIR=/usr/local/opt/openssl/include/ -$ export OPENSSL_ROOT_DIR=/usr/local/opt/openssl/ - -# Protocol Buffers installation -$ brew install protobuf boost boost-python log4cxx -# If you are using python3, you need to install boost-python3 - -# Google Test installation -$ git clone https://github.com/google/googletest.git -$ cd googletest -$ git checkout release-1.12.1 -$ cmake . -$ make install - -``` - -3. 
Compile the Pulsar client library in the repository that you cloned. - -```shell - -$ cd pulsar-client-cpp -$ cmake . -$ make - -``` - -### Install `libpulsar` - -Pulsar releases are available in the [Homebrew](https://brew.sh/) core repository. You can install the C++ client library with the following command. The package is installed with the library and headers. - -```shell - -brew install libpulsar - -``` - -## Windows (64-bit) - -### Compilation - -1. Clone the Pulsar repository. - -```shell - -$ git clone https://github.com/apache/pulsar - -``` - -2. Install all necessary dependencies. - -```shell - -cd ${PULSAR_HOME}/pulsar-client-cpp -vcpkg install --feature-flags=manifests --triplet x64-windows - -``` - -3. Build C++ libraries. - -```shell - -cmake -B ./build -A x64 -DBUILD_PYTHON_WRAPPER=OFF -DBUILD_TESTS=OFF -DVCPKG_TRIPLET=x64-windows -DCMAKE_BUILD_TYPE=Release -S . -cmake --build ./build --config Release - -``` - -> **NOTE** -> -> 1. For Windows 32-bit, you need to use `-A Win32` and `-DVCPKG_TRIPLET=x86-windows`. -> 2. For MSVC Debug mode, you need to replace `Release` with `Debug` for both `CMAKE_BUILD_TYPE` variable and `--config` option. - -4. Client libraries are available in the following places. - -``` - -${PULSAR_HOME}/pulsar-client-cpp/build/lib/Release/pulsar.lib -${PULSAR_HOME}/pulsar-client-cpp/build/lib/Release/pulsar.dll - -``` - -## Connection URLs - -To connect Pulsar using client libraries, you need to specify a Pulsar protocol URL. - -Pulsar protocol URLs are assigned to specific clusters, you can use the Pulsar URI scheme. The default port is `6650`. The following is an example for localhost. - -```http - -pulsar://localhost:6650 - -``` - -In a Pulsar cluster in production, the URL looks as follows. - -```http - -pulsar://pulsar.us-west.example.com:6650 - -``` - -If you use TLS authentication, you need to add `ssl`, and the default port is `6651`. The following is an example. - -```http - -pulsar+ssl://pulsar.us-west.example.com:6651 - -``` - -## Create a producer - -To use Pulsar as a producer, you need to create a producer on the C++ client. There are two main ways of using a producer: -- [Blocking style](#simple-blocking-example) : each call to `send` waits for an ack from the broker. -- [Non-blocking asynchronous style](#non-blocking-example) : `sendAsync` is called instead of `send` and a callback is supplied for when the ack is received from the broker. - -### Simple blocking example - -This example sends 100 messages using the blocking style. While simple, it does not produce high throughput as it waits for each ack to come back before sending the next message. 

```c++

#include <pulsar/Client.h>
#include <chrono>
#include <iostream>
#include <thread>

using namespace pulsar;

int main() {
    Client client("pulsar://localhost:6650");

    Producer producer;
    Result result = client.createProducer("persistent://public/default/my-topic", producer);
    if (result != ResultOk) {
        std::cout << "Error creating producer: " << result << std::endl;
        return -1;
    }

    // Send 100 messages synchronously
    int ctr = 0;
    while (ctr < 100) {
        std::string content = "msg" + std::to_string(ctr);
        Message msg = MessageBuilder().setContent(content).setProperty("x", "1").build();
        Result result = producer.send(msg);
        if (result != ResultOk) {
            std::cout << "The message " << content << " could not be sent, received code: " << result << std::endl;
        } else {
            std::cout << "The message " << content << " sent successfully" << std::endl;
        }

        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        ctr++;
    }

    std::cout << "Finished producing synchronously!" << std::endl;

    client.close();
    return 0;
}

```

### Non-blocking example

This example sends 100 messages using the non-blocking style, calling `sendAsync` instead of `send`. This allows the producer to have multiple messages in flight at a time, which increases throughput.

The producer configuration `blockIfQueueFull` is useful here to avoid `ResultProducerQueueIsFull` errors when the internal queue for outgoing send requests becomes full. Once the internal queue is full, `sendAsync` blocks instead, which can make your code simpler.

Without this configuration, the result code `ResultProducerQueueIsFull` is passed to the callback, and you must decide how to deal with that (retry, discard, etc.).

```c++

#include <pulsar/Client.h>
#include <atomic>
#include <chrono>
#include <functional>
#include <iostream>
#include <thread>

using namespace pulsar;

std::atomic<int> acksReceived;

void callback(Result code, const MessageId& msgId, std::string msgContent) {
    // message processing logic here
    std::cout << "Received ack for msg: " << msgContent << " with code: "
        << code << " -- MsgID: " << msgId << std::endl;
    acksReceived++;
}

int main() {
    Client client("pulsar://localhost:6650");

    ProducerConfiguration producerConf;
    producerConf.setBlockIfQueueFull(true);
    Producer producer;
    Result result = client.createProducer("persistent://public/default/my-topic",
                                          producerConf, producer);
    if (result != ResultOk) {
        std::cout << "Error creating producer: " << result << std::endl;
        return -1;
    }

    // Send 100 messages asynchronously
    int ctr = 0;
    while (ctr < 100) {
        std::string content = "msg" + std::to_string(ctr);
        Message msg = MessageBuilder().setContent(content).setProperty("x", "1").build();
        producer.sendAsync(msg, std::bind(callback,
                                          std::placeholders::_1, std::placeholders::_2, content));

        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        ctr++;
    }

    // wait for 100 messages to be acked
    while (acksReceived < 100) {
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }

    std::cout << "Finished producing asynchronously!" << std::endl;

    client.close();
    return 0;
}

```

### Partitioned topics and lazy producers

When scaling out a Pulsar topic, you may configure a topic to have hundreds of partitions. Likewise, you may have also scaled out your producers so that there are hundreds or even thousands of producers. This can put some strain on the Pulsar brokers, because when you create a producer on a partitioned topic, the client internally creates one internal producer per partition, each of which involves communication with the brokers.
So for a topic with 1000 partitions and 1000 producers, it ends up creating 1,000,000 internal producers across the producer applications, each of which has to communicate with a broker to find out which broker it should connect to and then perform the connection handshake. - -You can reduce the load caused by this combination of a large number of partitions and many producers by doing the following: -- use SinglePartition partition routing mode (this ensures that all messages are only sent to a single, randomly selected partition) -- use non-keyed messages (when messages are keyed, routing is based on the hash of the key and so messages will end up being sent to multiple partitions) -- use lazy producers (this ensures that an internal producer is only created on demand when a message needs to be routed to a partition) - -With our example above, that reduces the number of internal producers spread out over the 1000 producer apps from 1,000,000 to just 1000. - -Note that there can be extra latency for the first message sent. If you set a low send timeout, this timeout could be reached if the initial connection handshake is slow to complete. - -```c++ - -ProducerConfiguration producerConf; -producerConf.setPartitionsRoutingMode(ProducerConfiguration::UseSinglePartition); -producerConf.setLazyStartPartitionedProducers(true); - -``` - -### Enable chunking - -Message [chunking](concepts-messaging.md#chunking) enables Pulsar to process large payload messages by splitting the message into chunks at the producer side and aggregating chunked messages at the consumer side. - -The message chunking feature is OFF by default. The following is an example about how to enable message chunking when creating a producer. - -```c++ - -ProducerConfiguration conf; -conf.setBatchingEnabled(false); -conf.setChunkingEnabled(true); -Producer producer; -client.createProducer("my-topic", conf, producer); - -``` - -> **Note:** To enable chunking, you need to disable batching (`setBatchingEnabled`=`false`) concurrently. - -## Create a consumer - -To use Pulsar as a consumer, you need to create a consumer on the C++ client. There are two main ways of using the consumer: -- [Blocking style](#blocking-example): synchronously calling `receive(msg)`. -- [Non-blocking](#consumer-with-a-message-listener) (event based) style: using a message listener. - -### Blocking example - -The benefit of this approach is that it is the simplest code. Simply keeps calling `receive(msg)` which blocks until a message is received. - -This example starts a subscription at the earliest offset and consumes 100 messages. - -```c++ - -#include - -using namespace pulsar; - -int main() { - Client client("pulsar://localhost:6650"); - - Consumer consumer; - ConsumerConfiguration config; - config.setSubscriptionInitialPosition(InitialPositionEarliest); - Result result = client.subscribe("persistent://public/default/my-topic", "consumer-1", config, consumer); - if (result != ResultOk) { - std::cout << "Failed to subscribe: " << result << std::endl; - return -1; - } - - Message msg; - int ctr = 0; - // consume 100 messages - while (ctr < 100) { - consumer.receive(msg); - std::cout << "Received: " << msg - << " with payload '" << msg.getDataAsString() << "'" << std::endl; - - consumer.acknowledge(msg); - ctr++; - } - - std::cout << "Finished consuming synchronously!" 
<< std::endl; - - client.close(); - return 0; -} - -``` - -### Consumer with a message listener - -You can avoid running a loop with blocking calls with an event based style by using a message listener which is invoked for each message that is received. - -This example starts a subscription at the earliest offset and consumes 100 messages. - -```c++ - -#include -#include -#include - -using namespace pulsar; - -std::atomic messagesReceived; - -void handleAckComplete(Result res) { - std::cout << "Ack res: " << res << std::endl; -} - -void listener(Consumer consumer, const Message& msg) { - std::cout << "Got message " << msg << " with content '" << msg.getDataAsString() << "'" << std::endl; - messagesReceived++; - consumer.acknowledgeAsync(msg.getMessageId(), handleAckComplete); -} - -int main() { - Client client("pulsar://localhost:6650"); - - Consumer consumer; - ConsumerConfiguration config; - config.setMessageListener(listener); - config.setSubscriptionInitialPosition(InitialPositionEarliest); - Result result = client.subscribe("persistent://public/default/my-topic", "consumer-1", config, consumer); - if (result != ResultOk) { - std::cout << "Failed to subscribe: " << result << std::endl; - return -1; - } - - // wait for 100 messages to be consumed - while (messagesReceived < 100) { - std::this_thread::sleep_for(std::chrono::milliseconds(100)); - } - - std::cout << "Finished consuming asynchronously!" << std::endl; - - client.close(); - return 0; -} - -``` - -### Configure chunking - -You can limit the maximum number of chunked messages a consumer maintains concurrently by configuring the `setMaxPendingChunkedMessage` and `setAutoAckOldestChunkedMessageOnQueueFull` parameters. When the threshold is reached, the consumer drops pending messages by silently acknowledging them or asking the broker to redeliver them later. - -The following is an example of how to configure message chunking. - -```c++ - -ConsumerConfiguration conf; -conf.setAutoAckOldestChunkedMessageOnQueueFull(true); -conf.setMaxPendingChunkedMessage(100); -Consumer consumer; -client.subscribe("my-topic", "my-sub", conf, consumer); - -``` - -## Enable authentication in connection URLs -If you use TLS authentication when connecting to Pulsar, you need to add `ssl` in the connection URLs, and the default port is `6651`. The following is an example. - -```cpp - -ClientConfiguration config = ClientConfiguration(); -config.setUseTls(true); -config.setTlsTrustCertsFilePath("/path/to/cacert.pem"); -config.setTlsAllowInsecureConnection(false); -config.setAuth(pulsar::AuthTls::create( - "/path/to/client-cert.pem", "/path/to/client-key.pem");); - -Client client("pulsar+ssl://my-broker.com:6651", config); - -``` - -For complete examples, refer to [C++ client examples](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp/examples). - -## Schema - -This section describes some examples about schema. For more information about -schema, see [Pulsar schema](schema-get-started.md). - -### Avro schema - -- The following example shows how to create a producer with an Avro schema. 
- - ```cpp - - static const std::string exampleSchema = - "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," - "\"fields\":[{\"name\":\"a\",\"type\":\"int\"},{\"name\":\"b\",\"type\":\"int\"}]}"; - Producer producer; - ProducerConfiguration producerConf; - producerConf.setSchema(SchemaInfo(AVRO, "Avro", exampleSchema)); - client.createProducer("topic-avro", producerConf, producer); - - ``` - -- The following example shows how to create a consumer with an Avro schema. - - ```cpp - - static const std::string exampleSchema = - "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," - "\"fields\":[{\"name\":\"a\",\"type\":\"int\"},{\"name\":\"b\",\"type\":\"int\"}]}"; - ConsumerConfiguration consumerConf; - Consumer consumer; - consumerConf.setSchema(SchemaInfo(AVRO, "Avro", exampleSchema)); - client.subscribe("topic-avro", "sub-2", consumerConf, consumer) - - ``` - -### ProtobufNative schema - -The following example shows how to create a producer and a consumer with a ProtobufNative schema. -​ -1. Generate the `User` class using Protobuf3. - - :::note - - You need to use Protobuf3 or later versions. - - ::: - -​ - - ```protobuf - - syntax = "proto3"; - - message User { - string name = 1; - int32 age = 2; - } - - ``` - -​ -2. Include the `ProtobufNativeSchema.h` in your source code. Ensure the Protobuf dependency has been added to your project. -​ - - ```c++ - - #include - - ``` - -​ -3. Create a producer to send a `User` instance. -​ - - ```c++ - - ProducerConfiguration producerConf; - producerConf.setSchema(createProtobufNativeSchema(User::GetDescriptor())); - Producer producer; - client.createProducer("topic-protobuf", producerConf, producer); - User user; - user.set_name("my-name"); - user.set_age(10); - std::string content; - user.SerializeToString(&content); - producer.send(MessageBuilder().setContent(content).build()); - - ``` - -​ -4. Create a consumer to receive a `User` instance. -​ - - ```c++ - - ConsumerConfiguration consumerConf; - consumerConf.setSchema(createProtobufNativeSchema(User::GetDescriptor())); - consumerConf.setSubscriptionInitialPosition(InitialPositionEarliest); - Consumer consumer; - client.subscribe("topic-protobuf", "my-sub", consumerConf, consumer); - Message msg; - consumer.receive(msg); - User user2; - user2.ParseFromArray(msg.getData(), msg.getLength()); - - ``` - diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/client-libraries-dotnet.md b/site2/website/versioned_docs/version-2.10.1-deprecated/client-libraries-dotnet.md deleted file mode 100644 index 52b6200c478af8..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/client-libraries-dotnet.md +++ /dev/null @@ -1,456 +0,0 @@ ---- -id: client-libraries-dotnet -title: Pulsar C# client -sidebar_label: "C#" -original_id: client-libraries-dotnet ---- - -You can use the Pulsar C# client (DotPulsar) to create Pulsar producers and consumers in C#. All the methods in the producer, consumer, and reader of a C# client are thread-safe. The official documentation for DotPulsar is available [here](https://github.com/apache/pulsar-dotpulsar/wiki). - -## Installation - -You can install the Pulsar C# client library either through the dotnet CLI or through the Visual Studio. This section describes how to install the Pulsar C# client library through the dotnet CLI. For information about how to install the Pulsar C# client library through the Visual Studio, see [here](https://docs.microsoft.com/en-us/visualstudio/mac/nuget-walkthrough?view=vsmac-2019). 

### Prerequisites

Install the [.NET Core SDK](https://dotnet.microsoft.com/download/), which provides the dotnet command-line tool. Starting in Visual Studio 2017, the dotnet CLI is automatically installed with any .NET Core related workloads.

### Procedures

To install the Pulsar C# client library, follow these steps:

1. Create a project.

   1. Create a folder for the project.

   2. Open a terminal window and switch to the new folder.

   3. Create the project using the following command.

      ```

      dotnet new console

      ```

   4. Use `dotnet run` to test that the app has been created properly.

2. Add the DotPulsar NuGet package.

   1. Use the following command to install the `DotPulsar` package.

      ```

      dotnet add package DotPulsar

      ```

   2. After the command completes, open the `.csproj` file to see the added reference (the `Version` value reflects the release you installed).

      ```xml

      <ItemGroup>
        <PackageReference Include="DotPulsar" Version="x.y.z" />
      </ItemGroup>

      ```

## Client

This section describes some configuration examples for the Pulsar C# client.

### Create client

This example shows how to create a Pulsar C# client connected to localhost.

```c#

using DotPulsar;

var client = PulsarClient.Builder().Build();

```

To create a Pulsar C# client by using the builder, you can specify the following options.

| Option | Description | Default |
| ---- | ---- | ---- |
| ServiceUrl | Set the service URL for the Pulsar cluster. | pulsar://localhost:6650 |
| RetryInterval | Set the time to wait before retrying an operation or a reconnection. | 3s |

### Create producer

This section describes how to create a producer.

- Create a producer by using the builder.

  ```c#

  using DotPulsar;
  using DotPulsar.Extensions;

  var producer = client.NewProducer()
      .Topic("persistent://public/default/mytopic")
      .Create();

  ```

- Create a producer without using the builder.

  ```c#

  using DotPulsar;

  var options = new ProducerOptions("persistent://public/default/mytopic", Schema.ByteArray);
  var producer = client.CreateProducer(options);

  ```

### Create consumer

This section describes how to create a consumer.

- Create a consumer by using the builder.

  ```c#

  using DotPulsar;
  using DotPulsar.Extensions;

  var consumer = client.NewConsumer()
      .SubscriptionName("MySubscription")
      .Topic("persistent://public/default/mytopic")
      .Create();

  ```

- Create a consumer without using the builder.

  ```c#

  using DotPulsar;

  var options = new ConsumerOptions("MySubscription", "persistent://public/default/mytopic", Schema.ByteArray);
  var consumer = client.CreateConsumer(options);

  ```

### Create reader

This section describes how to create a reader.

- Create a reader by using the builder.

  ```c#

  using DotPulsar;
  using DotPulsar.Extensions;

  var reader = client.NewReader()
      .StartMessageId(MessageId.Earliest)
      .Topic("persistent://public/default/mytopic")
      .Create();

  ```

- Create a reader without using the builder.

  ```c#

  using DotPulsar;

  var options = new ReaderOptions(MessageId.Earliest, "persistent://public/default/mytopic", Schema.ByteArray);
  var reader = client.CreateReader(options);

  ```

### Configure encryption policies

The Pulsar C# client supports four kinds of encryption policies:

- `EnforceUnencrypted`: always use unencrypted connections.
- `EnforceEncrypted`: always use encrypted connections.
- `PreferUnencrypted`: use unencrypted connections, if possible.
- `PreferEncrypted`: use encrypted connections, if possible.

This example shows how to set the `EnforceEncrypted` encryption policy.

```c#

using DotPulsar;

var client = PulsarClient.Builder()
    .ConnectionSecurity(EncryptionPolicy.EnforceEncrypted)
    .Build();

```

### Configure authentication

Currently, the Pulsar C# client supports the TLS (Transport Layer Security) and JWT (JSON Web Token) authentication.

If you have followed [Authentication using TLS](security-tls-authentication.md), you have a certificate and a key. To use them from the Pulsar C# client, follow these steps:

1. Create an unencrypted and password-less pfx file.

   ```bash

   openssl pkcs12 -export -keypbe NONE -certpbe NONE -out admin.pfx -inkey admin.key.pem -in admin.cert.pem -passout pass:

   ```

2. Use the admin.pfx file to create an X509Certificate2 and pass it to the Pulsar C# client.

   ```c#

   using System.Security.Cryptography.X509Certificates;
   using DotPulsar;

   var clientCertificate = new X509Certificate2("admin.pfx");
   var client = PulsarClient.Builder()
       .AuthenticateUsingClientCertificate(clientCertificate)
       .Build();

   ```

## Producer

A producer is a process that attaches to a topic and publishes messages to a Pulsar broker for processing. This section describes some configuration examples for the producer.

### Send data

This example shows how to send data.

```c#

var data = Encoding.UTF8.GetBytes("Hello World");
await producer.Send(data);

```

### Send messages with customized metadata

- Send messages with customized metadata by using the builder.

  ```c#

  var messageId = await producer.NewMessage()
      .Property("SomeKey", "SomeValue")
      .Send(data);

  ```

- Send messages with customized metadata without using the builder.

  ```c#

  var data = Encoding.UTF8.GetBytes("Hello World");
  var metadata = new MessageMetadata();
  metadata["SomeKey"] = "SomeValue";
  var messageId = await producer.Send(metadata, data);

  ```

## Consumer

A consumer is a process that attaches to a topic through a subscription and then receives messages. This section describes some configuration examples for the consumer.

### Receive messages

This example shows how a consumer receives messages from a topic.

```c#

await foreach (var message in consumer.Messages())
{
    Console.WriteLine("Received: " + Encoding.UTF8.GetString(message.Data.ToArray()));
}

```

### Acknowledge messages

Messages can be acknowledged individually or cumulatively. For details about message acknowledgement, see [acknowledgement](concepts-messaging.md#acknowledgement).

- Acknowledge messages individually.

  ```c#

  await consumer.Acknowledge(message);

  ```

- Acknowledge messages cumulatively.

  ```c#

  await consumer.AcknowledgeCumulative(message);

  ```

### Unsubscribe from topics

This example shows how a consumer unsubscribes from a topic.

```c#

await consumer.Unsubscribe();

```

#### Note

> Once a consumer unsubscribes from a topic, it is disposed and cannot be used again.

## Reader

A reader is essentially a consumer without a cursor. This means that Pulsar does not keep track of your progress, and there is no need to acknowledge messages.

This example shows how a reader receives messages.

```c#

await foreach (var message in reader.Messages())
{
    Console.WriteLine("Received: " + Encoding.UTF8.GetString(message.Data.ToArray()));
}

```

## Monitoring

This section describes how to monitor the producer, consumer, and reader state.
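
All three monitors below follow the same pattern: start from a known state, await `StateChangedFrom`, and react to the state that is returned. As a minimal sketch of that pattern on its own (assuming an existing `producer` instance; the 30-second timeout and the one-shot wait are illustrative choices, not part of the examples below), a single wait for the producer to leave the `Disconnected` state looks like this:

```c#

using System;
using System.Threading;
using DotPulsar;
using DotPulsar.Extensions;

// wait at most 30 seconds for the producer to move out of the Disconnected state
using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(30));
var stateChanged = await producer.StateChangedFrom(ProducerState.Disconnected, cts.Token);
Console.WriteLine($"Producer is now in state '{stateChanged.ProducerState}'");

```

The full examples below wrap this call in a loop so that every state transition is observed until a final state is reached.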
- -### Monitor producer - -The following table lists states available for the producer. - -| State | Description | -| ---- | ----| -| Closed | The producer or the Pulsar client has been disposed. | -| Connected | All is well. | -| Disconnected | The connection is lost and attempts are being made to reconnect. | -| Faulted | An unrecoverable error has occurred. | -| PartiallyConnected | Some of the sub-producers are disconnected. | - -This example shows how to monitor the producer state. - -```c# - -private static async ValueTask Monitor(IProducer producer, CancellationToken cancellationToken) -{ - var state = ProducerState.Disconnected; - - while (!cancellationToken.IsCancellationRequested) - { - state = (await producer.StateChangedFrom(state, cancellationToken)).ProducerState; - - var stateMessage = state switch - { - ProducerState.Connected => $"The producer is connected", - ProducerState.Disconnected => $"The producer is disconnected", - ProducerState.Closed => $"The producer has closed", - ProducerState.Faulted => $"The producer has faulted", - ProducerState.PartiallyConnected => $"The producer is partially connected.", - _ => $"The producer has an unknown state '{state}'" - }; - - Console.WriteLine(stateMessage); - - if (producer.IsFinalState(state)) - return; - } -} - -``` - -### Monitor consumer state - -The following table lists states available for the consumer. - -| State | Description | -| ---- | ----| -| Active | All is well. | -| Inactive | All is well. The subscription type is `Failover` and you are not the active consumer. | -| Closed | The consumer or the Pulsar client has been disposed. | -| Disconnected | The connection is lost and attempts are being made to reconnect. | -| Faulted | An unrecoverable error has occurred. | -| ReachedEndOfTopic | No more messages are delivered. | -| Unsubscribed | The consumer has unsubscribed. | - -This example shows how to monitor the consumer state. - -```c# - -private static async ValueTask Monitor(IConsumer consumer, CancellationToken cancellationToken) -{ - var state = ConsumerState.Disconnected; - - while (!cancellationToken.IsCancellationRequested) - { - state = (await consumer.StateChangedFrom(state, cancellationToken)).ConsumerState; - - var stateMessage = state switch - { - ConsumerState.Active => "The consumer is active", - ConsumerState.Inactive => "The consumer is inactive", - ConsumerState.Disconnected => "The consumer is disconnected", - ConsumerState.Closed => "The consumer has closed", - ConsumerState.ReachedEndOfTopic => "The consumer has reached end of topic", - ConsumerState.Faulted => "The consumer has faulted", - ConsumerState.Unsubscribed => "The consumer is unsubscribed.", - _ => $"The consumer has an unknown state '{state}'" - }; - - Console.WriteLine(stateMessage); - - if (consumer.IsFinalState(state)) - return; - } -} - -``` - -### Monitor reader state - -The following table lists states available for the reader. - -| State | Description | -| ---- | ----| -| Closed | The reader or the Pulsar client has been disposed. | -| Connected | All is well. | -| Disconnected | The connection is lost and attempts are being made to reconnect. -| Faulted | An unrecoverable error has occurred. | -| ReachedEndOfTopic | No more messages are delivered. | - -This example shows how to monitor the reader state. 
- -```c# - -private static async ValueTask Monitor(IReader reader, CancellationToken cancellationToken) -{ - var state = ReaderState.Disconnected; - - while (!cancellationToken.IsCancellationRequested) - { - state = (await reader.StateChangedFrom(state, cancellationToken)).ReaderState; - - var stateMessage = state switch - { - ReaderState.Connected => "The reader is connected", - ReaderState.Disconnected => "The reader is disconnected", - ReaderState.Closed => "The reader has closed", - ReaderState.ReachedEndOfTopic => "The reader has reached end of topic", - ReaderState.Faulted => "The reader has faulted", - _ => $"The reader has an unknown state '{state}'" - }; - - Console.WriteLine(stateMessage); - - if (reader.IsFinalState(state)) - return; - } -} - -``` - diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/client-libraries-go.md b/site2/website/versioned_docs/version-2.10.1-deprecated/client-libraries-go.md deleted file mode 100644 index aa36fa786ac5e9..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/client-libraries-go.md +++ /dev/null @@ -1,1064 +0,0 @@ ---- -id: client-libraries-go -title: Pulsar Go client -sidebar_label: "Go" -original_id: client-libraries-go ---- - -> Tips: The CGo client has been deprecated since version 2.7.0. - -You can use Pulsar [Go client](https://github.com/apache/pulsar-client-go) to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Go (aka Golang). - -> **API docs available as well** -> For standard API docs, consult the [Godoc](https://godoc.org/github.com/apache/pulsar-client-go/pulsar). - - -## Installation - -### Install go package - -You can get the `pulsar` library by using `go get` or use it with `go module`. - -Download the library of Go client to local environment: - -```bash - -$ go get -u "github.com/apache/pulsar-client-go/pulsar" - -``` - -Once installed locally, you can import it into your project: - -```go - -import "github.com/apache/pulsar-client-go/pulsar" - -``` - -Use with go module: - -```bash - -$ mkdir test_dir && cd test_dir - -``` - -Write a sample script in the `test_dir` directory (such as `test_example.go`) and write `package main` at the beginning of the file. - -```bash - -$ go mod init test_dir -$ go mod tidy && go mod download -$ go build test_example.go -$ ./test_example - -``` - -## Connection URLs - -To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL. - -Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme and have a default port of 6650. Here's an example for `localhost`: - -```http - -pulsar://localhost:6650 - -``` - -If you have multiple brokers, you can set the URL as below. - -``` - -pulsar://localhost:6550,localhost:6651,localhost:6652 - -``` - -A URL for a production Pulsar cluster may look something like this: - -```http - -pulsar://pulsar.us-west.example.com:6650 - -``` - -If you're using [TLS](security-tls-authentication.md) authentication, the URL will look like something like this: - -```http - -pulsar+ssl://pulsar.us-west.example.com:6651 - -``` - -## Create a client - -In order to interact with Pulsar, you'll first need a `Client` object. You can create a client object using the `NewClient` function, passing in a `ClientOptions` object (more on configuration [below](#client-configuration)). 
Here's an example:

```go

import (
    "log"
    "time"

    "github.com/apache/pulsar-client-go/pulsar"
)

func main() {
    client, err := pulsar.NewClient(pulsar.ClientOptions{
        URL:               "pulsar://localhost:6650",
        OperationTimeout:  30 * time.Second,
        ConnectionTimeout: 30 * time.Second,
    })
    if err != nil {
        log.Fatalf("Could not instantiate Pulsar client: %v", err)
    }

    defer client.Close()
}

```

If you have multiple brokers, you can instantiate a client object as below.

```go

import (
    "log"
    "time"

    "github.com/apache/pulsar-client-go/pulsar"
)

func main() {
    client, err := pulsar.NewClient(pulsar.ClientOptions{
        URL:               "pulsar://localhost:6650,localhost:6651,localhost:6652",
        OperationTimeout:  30 * time.Second,
        ConnectionTimeout: 30 * time.Second,
    })
    if err != nil {
        log.Fatalf("Could not instantiate Pulsar client: %v", err)
    }

    defer client.Close()
}

```

The following configurable parameters are available for Pulsar clients:

 Name | Description | Default
| :-------- | :---------- |:---------- |
| URL | Configure the service URL for the Pulsar service. If you have multiple brokers, you can set multiple Pulsar cluster addresses for a client. This parameter is **required**. | None |
| ConnectionTimeout | Timeout for the establishment of a TCP connection | 30s |
| OperationTimeout | Set the operation timeout. Producer-create, subscribe, and unsubscribe operations are retried until this interval elapses, after which the operation is marked as failed | 30s |
| Authentication | Configure the authentication provider. Example: `Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem")` | no authentication |
| TLSTrustCertsFilePath | Set the path to the trusted TLS certificate file | |
| TLSAllowInsecureConnection | Configure whether the Pulsar client accepts untrusted TLS certificates from the broker | false |
| TLSValidateHostname | Configure whether the Pulsar client verifies the validity of the hostname presented by the broker | false |
| ListenerName | Configure the net model for VPC users to connect to the Pulsar broker | |
| MaxConnectionsPerBroker | Max number of connections to a single broker that is kept in the pool | 1 |
| CustomMetricsLabels | Add custom labels to all the metrics reported by this client instance | |
| Logger | Configure the logger used by the client | logrus.StandardLogger |

## Producers

Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Go producers using a `ProducerOptions` object. Here's an example:

```go

producer, err := client.CreateProducer(pulsar.ProducerOptions{
    Topic: "my-topic",
})

if err != nil {
    log.Fatal(err)
}

defer producer.Close()

_, err = producer.Send(context.Background(), &pulsar.ProducerMessage{
    Payload: []byte("hello"),
})

if err != nil {
    fmt.Println("Failed to publish message", err)
} else {
    fmt.Println("Published message")
}

```

### Producer operations

Pulsar Go producers have the following methods available:

Method | Description | Return type
:------|:------------|:-----------
`Topic()` | Fetches the producer's [topic](reference-terminology.md#topic) | `string`
`Name()` | Fetches the producer's name | `string`
`Send(context.Context, *ProducerMessage)` | Publishes a [message](#messages) to the producer's topic. This call blocks until the message is successfully acknowledged by the Pulsar broker, or an error is returned if the timeout set using `SendTimeout` in the producer's [configuration](#producer-configuration) is exceeded. | (MessageID, error)
`SendAsync(context.Context, *ProducerMessage, func(MessageID, *ProducerMessage, error))` | Publishes a message asynchronously. The provided callback is invoked once the message is acknowledged by the Pulsar broker or an error occurs. |
`LastSequenceID()` | Get the last sequence ID that was published by this producer. This represents either the automatically assigned or custom sequence ID (set on the ProducerMessage) that was published and acknowledged by the broker. | int64
`Flush()` | Flush all the messages buffered in the client and wait until all messages have been successfully persisted. | error
`Close()` | Closes the producer and releases all resources allocated to it. Once `Close()` is called, no more messages are accepted from the publisher. This method blocks until all pending publish requests have been persisted by Pulsar. If an error is thrown, no pending writes are retried.
| - -### Producer Example - -#### How to use message router in producer - -```go - -client, err := NewClient(pulsar.ClientOptions{ - URL: serviceURL, -}) - -if err != nil { - log.Fatal(err) -} -defer client.Close() - -// Only subscribe on the specific partition -consumer, err := client.Subscribe(pulsar.ConsumerOptions{ - Topic: "my-partitioned-topic-partition-2", - SubscriptionName: "my-sub", -}) - -if err != nil { - log.Fatal(err) -} -defer consumer.Close() - -producer, err := client.CreateProducer(pulsar.ProducerOptions{ - Topic: "my-partitioned-topic", - MessageRouter: func(msg *ProducerMessage, tm TopicMetadata) int { - fmt.Println("Routing message ", msg, " -- Partitions: ", tm.NumPartitions()) - return 2 - }, -}) - -if err != nil { - log.Fatal(err) -} -defer producer.Close() - -``` - -#### How to use schema interface in producer - -```go - -type testJSON struct { - ID int `json:"id"` - Name string `json:"name"` -} - -``` - -```go - -var ( - exampleSchemaDef = "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," + - "\"fields\":[{\"name\":\"ID\",\"type\":\"int\"},{\"name\":\"Name\",\"type\":\"string\"}]}" -) - -``` - -```go - -client, err := NewClient(pulsar.ClientOptions{ - URL: "pulsar://localhost:6650", -}) -if err != nil { - log.Fatal(err) -} -defer client.Close() - -properties := make(map[string]string) -properties["pulsar"] = "hello" -jsonSchemaWithProperties := NewJSONSchema(exampleSchemaDef, properties) -producer, err := client.CreateProducer(ProducerOptions{ - Topic: "jsonTopic", - Schema: jsonSchemaWithProperties, -}) -assert.Nil(t, err) - -_, err = producer.Send(context.Background(), &ProducerMessage{ - Value: &testJSON{ - ID: 100, - Name: "pulsar", - }, -}) -if err != nil { - log.Fatal(err) -} -producer.Close() - -``` - -#### How to use delay relative in producer - -```go - -client, err := NewClient(pulsar.ClientOptions{ - URL: "pulsar://localhost:6650", -}) -if err != nil { - log.Fatal(err) -} -defer client.Close() - -topicName := newTopicName() -producer, err := client.CreateProducer(pulsar.ProducerOptions{ - Topic: topicName, - DisableBatching: true, -}) -if err != nil { - log.Fatal(err) -} -defer producer.Close() - -consumer, err := client.Subscribe(pulsar.ConsumerOptions{ - Topic: topicName, - SubscriptionName: "subName", - Type: Shared, -}) -if err != nil { - log.Fatal(err) -} -defer consumer.Close() - -ID, err := producer.Send(context.Background(), &pulsar.ProducerMessage{ - Payload: []byte(fmt.Sprintf("test")), - DeliverAfter: 3 * time.Second, -}) -if err != nil { - log.Fatal(err) -} -fmt.Println(ID) - -ctx, canc := context.WithTimeout(context.Background(), 1*time.Second) -msg, err := consumer.Receive(ctx) -if err != nil { - log.Fatal(err) -} -fmt.Println(msg.Payload()) -canc() - -ctx, canc = context.WithTimeout(context.Background(), 5*time.Second) -msg, err = consumer.Receive(ctx) -if err != nil { - log.Fatal(err) -} -fmt.Println(msg.Payload()) -canc() - -``` - -#### How to use Prometheus metrics in producer - -Pulsar Go client registers client metrics using Prometheus. This section demonstrates how to create a simple Pulsar producer application that exposes Prometheus metrics via HTTP. - -1. Write a simple producer application. 
- -```go - -// Create a Pulsar client -client, err := pulsar.NewClient(pulsar.ClientOptions{ - URL: "pulsar://localhost:6650", -}) -if err != nil { - log.Fatal(err) -} - -defer client.Close() - -// Start a separate goroutine for Prometheus metrics -// In this case, Prometheus metrics can be accessed via http://localhost:2112/metrics -go func() { - prometheusPort := 2112 - log.Printf("Starting Prometheus metrics at http://localhost:%v/metrics\n", prometheusPort) - http.Handle("/metrics", promhttp.Handler()) - err = http.ListenAndServe(":"+strconv.Itoa(prometheusPort), nil) - if err != nil { - log.Fatal(err) - } -}() - -// Create a producer -producer, err := client.CreateProducer(pulsar.ProducerOptions{ - Topic: "topic-1", -}) -if err != nil { - log.Fatal(err) -} - -defer producer.Close() - -ctx := context.Background() - -// Write your business logic here -// In this case, you build a simple Web server. You can produce messages by requesting http://localhost:8082/produce -webPort := 8082 -http.HandleFunc("/produce", func(w http.ResponseWriter, r *http.Request) { - msgId, err := producer.Send(ctx, &pulsar.ProducerMessage{ - Payload: []byte(fmt.Sprintf("hello world")), - }) - if err != nil { - log.Fatal(err) - } else { - log.Printf("Published message: %v", msgId) - fmt.Fprintf(w, "Published message: %v", msgId) - } -}) - -err = http.ListenAndServe(":"+strconv.Itoa(webPort), nil) -if err != nil { - log.Fatal(err) -} - -``` - -2. To scrape metrics from applications, configure a local running Prometheus instance using a configuration file (`prometheus.yml`). - -```yaml - -scrape_configs: -- job_name: pulsar-client-go-metrics - scrape_interval: 10s - static_configs: - - targets: - - localhost:2112 - -``` - -Now you can query Pulsar client metrics on Prometheus. - -### Producer configuration - - Name | Description | Default -| :-------- | :---------- |:---------- | -| Topic | Topic specify the topic this consumer will subscribe to. This argument is required when constructing the reader. | | -| Name | Name specify a name for the producer. If not assigned, the system will generate a globally unique name which can be access with Producer.ProducerName(). | | -| Properties | Properties attach a set of application defined properties to the producer This properties will be visible in the topic stats | | -| SendTimeout | SendTimeout set the timeout for a message that is not acknowledged by the server | 30s | -| DisableBlockIfQueueFull | DisableBlockIfQueueFull control whether Send and SendAsync block if producer's message queue is full | false | -| MaxPendingMessages| MaxPendingMessages set the max size of the queue holding the messages pending to receive an acknowledgment from the broker. | | -| HashingScheme | HashingScheme change the `HashingScheme` used to chose the partition on where to publish a particular message. | JavaStringHash | -| CompressionType | CompressionType set the compression type for the producer. | not compressed | -| CompressionLevel | Define the desired compression level. Options: Default, Faster and Better | Default | -| MessageRouter | MessageRouter set a custom message routing policy by passing an implementation of MessageRouter | | -| DisableBatching | DisableBatching control whether automatic batching of messages is enabled for the producer. | false | -| BatchingMaxPublishDelay | BatchingMaxPublishDelay set the time period within which the messages sent will be batched | 1ms | -| BatchingMaxMessages | BatchingMaxMessages set the maximum number of messages permitted in a batch. 
| 1000 | -| BatchingMaxSize | BatchingMaxSize sets the maximum number of bytes permitted in a batch. | 128KB | -| Schema | Schema set a custom schema type by passing an implementation of `Schema` | bytes[] | -| Interceptors | A chain of interceptors. These interceptors are called at some points defined in the `ProducerInterceptor` interface. | None | -| MaxReconnectToBroker | MaxReconnectToBroker set the maximum retry number of reconnectToBroker | ultimate | -| BatcherBuilderType | BatcherBuilderType sets the batch builder type. This is used to create a batch container when batching is enabled. Options: DefaultBatchBuilder and KeyBasedBatchBuilder | DefaultBatchBuilder | - -## Consumers - -Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on that topic/those topics. You can [configure](#consumer-configuration) Go consumers using a `ConsumerOptions` object. Here's a basic example that uses channels: - -```go - -consumer, err := client.Subscribe(pulsar.ConsumerOptions{ - Topic: "topic-1", - SubscriptionName: "my-sub", - Type: pulsar.Shared, -}) -if err != nil { - log.Fatal(err) -} -defer consumer.Close() - -for i := 0; i < 10; i++ { - msg, err := consumer.Receive(context.Background()) - if err != nil { - log.Fatal(err) - } - - fmt.Printf("Received message msgId: %#v -- content: '%s'\n", - msg.ID(), string(msg.Payload())) - - consumer.Ack(msg) -} - -if err := consumer.Unsubscribe(); err != nil { - log.Fatal(err) -} - -``` - -### Consumer operations - -Pulsar Go consumers have the following methods available: - -Method | Description | Return type -:------|:------------|:----------- -`Subscription()` | Returns the consumer's subscription name | `string` -`Unsubcribe()` | Unsubscribes the consumer from the assigned topic. Throws an error if the unsubscribe operation is somehow unsuccessful. | `error` -`Receive(context.Context)` | Receives a single message from the topic. This method blocks until a message is available. | `(Message, error)` -`Chan()` | Chan returns a channel from which to consume messages. | `<-chan ConsumerMessage` -`Ack(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) | -`AckID(MessageID)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID | -`ReconsumeLater(msg Message, delay time.Duration)` | ReconsumeLater mark a message for redelivery after custom delay | -`Nack(Message)` | Acknowledge the failure to process a single message. | -`NackID(MessageID)` | Acknowledge the failure to process a single message. | -`Seek(msgID MessageID)` | Reset the subscription associated with this consumer to a specific message id. The message id can either be a specific message or represent the first or last messages in the topic. | `error` -`SeekByTime(time time.Time)` | Reset the subscription associated with this consumer to a specific message publish time. 
| `error` -`Close()` | Closes the consumer, disabling its ability to receive messages from the broker | -`Name()` | Name returns the name of consumer | `string` - -### Receive example - -#### How to use regex consumer - -```go - -client, err := pulsar.NewClient(pulsar.ClientOptions{ - URL: "pulsar://localhost:6650", -}) - -defer client.Close() - -p, err := client.CreateProducer(pulsar.ProducerOptions{ - Topic: topicInRegex, - DisableBatching: true, -}) -if err != nil { - log.Fatal(err) -} -defer p.Close() - -topicsPattern := fmt.Sprintf("persistent://%s/foo.*", namespace) -opts := pulsar.ConsumerOptions{ - TopicsPattern: topicsPattern, - SubscriptionName: "regex-sub", -} -consumer, err := client.Subscribe(opts) -if err != nil { - log.Fatal(err) -} -defer consumer.Close() - -``` - -#### How to use multi topics Consumer - -```go - -func newTopicName() string { - return fmt.Sprintf("my-topic-%v", time.Now().Nanosecond()) -} - - -topic1 := "topic-1" -topic2 := "topic-2" - -client, err := NewClient(pulsar.ClientOptions{ - URL: "pulsar://localhost:6650", -}) -if err != nil { - log.Fatal(err) -} -topics := []string{topic1, topic2} -consumer, err := client.Subscribe(pulsar.ConsumerOptions{ - Topics: topics, - SubscriptionName: "multi-topic-sub", -}) -if err != nil { - log.Fatal(err) -} -defer consumer.Close() - -``` - -#### How to use consumer listener - -```go - -import ( - "fmt" - "log" - - "github.com/apache/pulsar-client-go/pulsar" -) - -func main() { - client, err := pulsar.NewClient(pulsar.ClientOptions{URL: "pulsar://localhost:6650"}) - if err != nil { - log.Fatal(err) - } - - defer client.Close() - - channel := make(chan pulsar.ConsumerMessage, 100) - - options := pulsar.ConsumerOptions{ - Topic: "topic-1", - SubscriptionName: "my-subscription", - Type: pulsar.Shared, - } - - options.MessageChannel = channel - - consumer, err := client.Subscribe(options) - if err != nil { - log.Fatal(err) - } - - defer consumer.Close() - - // Receive messages from channel. The channel returns a struct which contains message and the consumer from where - // the message was received. 
It's not necessary here since we have 1 single consumer, but the channel could be - // shared across multiple consumers as well - for cm := range channel { - msg := cm.Message - fmt.Printf("Received message msgId: %v -- content: '%s'\n", - msg.ID(), string(msg.Payload())) - - consumer.Ack(msg) - } -} - -``` - -#### How to use consumer receive timeout - -```go - -client, err := NewClient(pulsar.ClientOptions{ - URL: "pulsar://localhost:6650", -}) -if err != nil { - log.Fatal(err) -} -defer client.Close() - -topic := "test-topic-with-no-messages" -ctx, cancel := context.WithTimeout(context.Background(), 500*time.Millisecond) -defer cancel() - -// create consumer -consumer, err := client.Subscribe(pulsar.ConsumerOptions{ - Topic: topic, - SubscriptionName: "my-sub1", - Type: Shared, -}) -if err != nil { - log.Fatal(err) -} -defer consumer.Close() - -msg, err := consumer.Receive(ctx) -fmt.Println(msg.Payload()) -if err != nil { - log.Fatal(err) -} - -``` - -#### How to use schema in consumer - -```go - -type testJSON struct { - ID int `json:"id"` - Name string `json:"name"` -} - -``` - -```go - -var ( - exampleSchemaDef = "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," + - "\"fields\":[{\"name\":\"ID\",\"type\":\"int\"},{\"name\":\"Name\",\"type\":\"string\"}]}" -) - -``` - -```go - -client, err := NewClient(pulsar.ClientOptions{ - URL: "pulsar://localhost:6650", -}) -if err != nil { - log.Fatal(err) -} -defer client.Close() - -var s testJSON - -consumerJS := NewJSONSchema(exampleSchemaDef, nil) -consumer, err := client.Subscribe(ConsumerOptions{ - Topic: "jsonTopic", - SubscriptionName: "sub-1", - Schema: consumerJS, - SubscriptionInitialPosition: SubscriptionPositionEarliest, -}) -assert.Nil(t, err) -msg, err := consumer.Receive(context.Background()) -assert.Nil(t, err) -err = msg.GetSchemaValue(&s) -if err != nil { - log.Fatal(err) -} - -defer consumer.Close() - -``` - -#### How to use Prometheus metrics in consumer - -In this guide, This section demonstrates how to create a simple Pulsar consumer application that exposes Prometheus metrics via HTTP. -1. Write a simple consumer application. - -```go - -// Create a Pulsar client -client, err := pulsar.NewClient(pulsar.ClientOptions{ - URL: "pulsar://localhost:6650", -}) -if err != nil { - log.Fatal(err) -} - -defer client.Close() - -// Start a separate goroutine for Prometheus metrics -// In this case, Prometheus metrics can be accessed via http://localhost:2112/metrics -go func() { - prometheusPort := 2112 - log.Printf("Starting Prometheus metrics at http://localhost:%v/metrics\n", prometheusPort) - http.Handle("/metrics", promhttp.Handler()) - err = http.ListenAndServe(":"+strconv.Itoa(prometheusPort), nil) - if err != nil { - log.Fatal(err) - } -}() - -// Create a consumer -consumer, err := client.Subscribe(pulsar.ConsumerOptions{ - Topic: "topic-1", - SubscriptionName: "sub-1", - Type: pulsar.Shared, -}) -if err != nil { - log.Fatal(err) -} - -defer consumer.Close() - -ctx := context.Background() - -// Write your business logic here -// In this case, you build a simple Web server. 
You can consume messages by requesting http://localhost:8083/consume -webPort := 8083 -http.HandleFunc("/consume", func(w http.ResponseWriter, r *http.Request) { - msg, err := consumer.Receive(ctx) - if err != nil { - log.Fatal(err) - } else { - log.Printf("Received message msgId: %v -- content: '%s'\n", msg.ID(), string(msg.Payload())) - fmt.Fprintf(w, "Received message msgId: %v -- content: '%s'\n", msg.ID(), string(msg.Payload())) - consumer.Ack(msg) - } -}) - -err = http.ListenAndServe(":"+strconv.Itoa(webPort), nil) -if err != nil { - log.Fatal(err) -} - -``` - -2. To scrape metrics from applications, configure a local running Prometheus instance using a configuration file (`prometheus.yml`). - -```yaml - -scrape_configs: -- job_name: pulsar-client-go-metrics - scrape_interval: 10s - static_configs: - - targets: - - localhost:2112 - -``` - -Now you can query Pulsar client metrics on Prometheus. - -### Consumer configuration - - Name | Description | Default -| :-------- | :---------- |:---------- | -| Topic | Topic specify the topic this consumer will subscribe to. This argument is required when constructing the reader. | | -| Topics | Specify a list of topics this consumer will subscribe on. Either a topic, a list of topics or a topics pattern are required when subscribing| | -| TopicsPattern | Specify a regular expression to subscribe to multiple topics under the same namespace. Either a topic, a list of topics or a topics pattern are required when subscribing | | -| AutoDiscoveryPeriod | Specify the interval in which to poll for new partitions or new topics if using a TopicsPattern. | | -| SubscriptionName | Specify the subscription name for this consumer. This argument is required when subscribing | | -| Name | Set the consumer name | | -| Properties | Properties attach a set of application defined properties to the producer This properties will be visible in the topic stats | | -| Type | Select the subscription type to be used when subscribing to the topic. | Exclusive | -| SubscriptionInitialPosition | InitialPosition at which the cursor will be set when subscribe | Latest | -| DLQ | Configuration for Dead Letter Queue consumer policy. | no DLQ | -| MessageChannel | Sets a `MessageChannel` for the consumer. When a message is received, it will be pushed to the channel for consumption | | -| ReceiverQueueSize | Sets the size of the consumer receive queue. | 1000| -| NackRedeliveryDelay | The delay after which to redeliver the messages that failed to be processed | 1min | -| ReadCompacted | If enabled, the consumer will read messages from the compacted topic rather than reading the full message backlog of the topic | false | -| ReplicateSubscriptionState | Mark the subscription as replicated to keep it in sync across clusters | false | -| KeySharedPolicy | Configuration for Key Shared consumer policy. | | -| RetryEnable | Auto retry send messages to default filled DLQPolicy topics | false | -| Interceptors | A chain of interceptors. These interceptors are called at some points defined in the `ConsumerInterceptor` interface. | | -| MaxReconnectToBroker | MaxReconnectToBroker set the maximum retry number of reconnectToBroker. | ultimate | -| Schema | Schema set a custom schema type by passing an implementation of `Schema` | bytes[] | - -## Readers - -Pulsar readers process messages from Pulsar topics. 
Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recent unacked message). You can [configure](#reader-configuration) Go readers using a `ReaderOptions` object. Here's an example: - -```go - -reader, err := client.CreateReader(pulsar.ReaderOptions{ - Topic: "topic-1", - StartMessageID: pulsar.EarliestMessageID(), -}) -if err != nil { - log.Fatal(err) -} -defer reader.Close() - -``` - -### Reader operations - -Pulsar Go readers have the following methods available: - -Method | Description | Return type -:------|:------------|:----------- -`Topic()` | Returns the reader's [topic](reference-terminology.md#topic) | `string` -`Next(context.Context)` | Receives the next message on the topic (analogous to the `Receive` method for [consumers](#consumer-operations)). This method blocks until a message is available. | `(Message, error)` -`HasNext()` | Check if there is any message available to read from the current position| (bool, error) -`Close()` | Closes the reader, disabling its ability to receive messages from the broker | `error` -`Seek(MessageID)` | Reset the subscription associated with this reader to a specific message ID | `error` -`SeekByTime(time time.Time)` | Reset the subscription associated with this reader to a specific message publish time | `error` - -### Reader example - -#### How to use reader to read 'next' message - -Here's an example usage of a Go reader that uses the `Next()` method to process incoming messages: - -```go - -import ( - "context" - "fmt" - "log" - - "github.com/apache/pulsar-client-go/pulsar" -) - -func main() { - client, err := pulsar.NewClient(pulsar.ClientOptions{URL: "pulsar://localhost:6650"}) - if err != nil { - log.Fatal(err) - } - - defer client.Close() - - reader, err := client.CreateReader(pulsar.ReaderOptions{ - Topic: "topic-1", - StartMessageID: pulsar.EarliestMessageID(), - }) - if err != nil { - log.Fatal(err) - } - defer reader.Close() - - for reader.HasNext() { - msg, err := reader.Next(context.Background()) - if err != nil { - log.Fatal(err) - } - - fmt.Printf("Received message msgId: %#v -- content: '%s'\n", - msg.ID(), string(msg.Payload())) - } -} - -``` - -In the example above, the reader begins reading from the earliest available message (specified by `pulsar.EarliestMessage`). The reader can also begin reading from the latest message (`pulsar.LatestMessage`) or some other message ID specified by bytes using the `DeserializeMessageID` function, which takes a byte array and returns a `MessageID` object. 
-Here's an example:
-
-```go
-
-var lastSavedId []byte // read the last saved message ID from an external store
-
-reader, err := client.CreateReader(pulsar.ReaderOptions{
-	Topic:          "my-golang-topic",
-	StartMessageID: pulsar.DeserializeMessageID(lastSavedId),
-})
-
-```
-
-#### How to use reader to read specific message
-
-```go
-
-client, err := pulsar.NewClient(pulsar.ClientOptions{
-	URL: lookupURL,
-})
-if err != nil {
-	log.Fatal(err)
-}
-defer client.Close()
-
-topic := "topic-1"
-ctx := context.Background()
-
-// create producer
-producer, err := client.CreateProducer(pulsar.ProducerOptions{
-	Topic:           topic,
-	DisableBatching: true,
-})
-if err != nil {
-	log.Fatal(err)
-}
-defer producer.Close()
-
-// send 10 messages
-msgIDs := [10]pulsar.MessageID{}
-for i := 0; i < 10; i++ {
-	msgID, err := producer.Send(ctx, &pulsar.ProducerMessage{
-		Payload: []byte(fmt.Sprintf("hello-%d", i)),
-	})
-	if err != nil {
-		log.Fatal(err)
-	}
-	msgIDs[i] = msgID
-}
-
-// create reader on 5th message (not included)
-reader, err := client.CreateReader(pulsar.ReaderOptions{
-	Topic:          topic,
-	StartMessageID: msgIDs[4],
-})
-if err != nil {
-	log.Fatal(err)
-}
-defer reader.Close()
-
-// receive the remaining 5 messages
-for i := 5; i < 10; i++ {
-	msg, err := reader.Next(context.Background())
-	if err != nil {
-		log.Fatal(err)
-	}
-	fmt.Printf("Read message: '%s'\n", string(msg.Payload()))
-}
-
-// create reader on 5th message (included)
-readerInclusive, err := client.CreateReader(pulsar.ReaderOptions{
-	Topic:                   topic,
-	StartMessageID:          msgIDs[4],
-	StartMessageIDInclusive: true,
-})
-if err != nil {
-	log.Fatal(err)
-}
-defer readerInclusive.Close()
-
-```
-
-### Reader configuration
-
- Name | Description | Default
-| :-------- | :---------- |:---------- |
-| Topic | Specify the topic this reader reads from. This argument is required when constructing the reader. | |
-| Name | Set the reader name. | |
-| Properties | Attach a set of application-defined properties to the reader. These properties are visible in the topic stats. | |
-| StartMessageID | The initial reader position, specified by a message ID. | |
-| StartMessageIDInclusive | If true, the reader starts at the `StartMessageID`, included. By default (`false`), the reader starts from the "next" message. | false |
-| MessageChannel | Set a `MessageChannel` for the reader. When a message is received, it is pushed to the channel for consumption. | |
-| ReceiverQueueSize | Set the size of the reader receive queue. | 1000 |
-| SubscriptionRolePrefix | Set the subscription role prefix. | "reader" |
-| ReadCompacted | If enabled, the reader reads messages from the compacted topic rather than the full message backlog of the topic. ReadCompacted can only be enabled when reading from a persistent topic. | false |
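-
-The `Seek` and `SeekByTime` operations listed above can also reposition an existing reader. As a small, hedged sketch, this rewinds the reader from the previous examples to messages published during the last hour (the offset is illustrative):
-
-```go
-
-// Rewind the reader by publish time; subsequent Next() calls start there.
-if err := reader.SeekByTime(time.Now().Add(-1 * time.Hour)); err != nil {
-	log.Fatal(err)
-}
-
-```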
-
-## Messages
-
-The Pulsar Go client provides a `ProducerMessage` interface that you can use to construct messages to produce on Pulsar topics. Here's an example message:
-
-```go
-
-msg := pulsar.ProducerMessage{
-	Payload: []byte("Here is some message data"),
-	Key:     "message-key",
-	Properties: map[string]string{
-		"foo": "bar",
-	},
-	EventTime:           time.Now(),
-	ReplicationClusters: []string{"cluster1", "cluster3"},
-}
-
-if _, err := producer.Send(context.Background(), &msg); err != nil {
-	log.Fatalf("Could not publish message due to: %v", err)
-}
-
-```
-
-The following parameters are available for `ProducerMessage` objects:
-
-Parameter | Description
-:---------|:-----------
-`Payload` | The actual data payload of the message
-`Value` | `Value` and `Payload` are mutually exclusive; use `Value interface{}` for schema messages.
-`Key` | The optional key associated with the message (particularly useful for things like topic compaction)
-`OrderingKey` | Sets the ordering key of the message.
-`Properties` | A key-value map (both keys and values must be strings) for any application-specific metadata attached to the message
-`EventTime` | The timestamp associated with the message
-`ReplicationClusters` | The clusters to which this message will be replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default.
-`SequenceID` | Sets the sequence ID to assign to the current message
-`DeliverAfter` | Requests to deliver the message only after the specified relative delay
-`DeliverAt` | Delivers the message only at or after the specified absolute timestamp
-
-## TLS encryption and authentication
-
-In order to use [TLS encryption](security-tls-transport.md), you'll need to configure your client to do so:
-
- * Use the `pulsar+ssl` URL type
- * Set `TLSTrustCertsFilePath` to the path to the TLS certs used by your client and the Pulsar broker
- * Configure the `Authentication` option
-
-Here's an example:
-
-```go
-
-opts := pulsar.ClientOptions{
-	URL:                   "pulsar+ssl://my-cluster.com:6651",
-	TLSTrustCertsFilePath: "/path/to/certs/my-cert.csr",
-	Authentication:        pulsar.NewAuthenticationTLS("my-cert.pem", "my-key.pem"),
-}
-
-```
-
-## OAuth2 authentication
-
-To use [OAuth2 authentication](security-oauth2.md), you'll need to configure your client with the issuer and credential details. This example shows how to configure OAuth2 authentication.
-
-```go
-
-oauth := pulsar.NewAuthenticationOAuth2(map[string]string{
-	"type":       "client_credentials",
-	"issuerUrl":  "https://dev-kt-aa9ne.us.auth0.com",
-	"audience":   "https://dev-kt-aa9ne.us.auth0.com/api/v2/",
-	"privateKey": "/path/to/privateKey",
-	"clientId":   "0Xx...Yyxeny",
-})
-client, err := pulsar.NewClient(pulsar.ClientOptions{
-	URL:            "pulsar://my-cluster:6650",
-	Authentication: oauth,
-})
-
-```
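-
-As a quick, hedged end-to-end check that an authenticated client works, the sketch below publishes a single message with it; the topic name is illustrative and error handling is kept minimal.
-
-```go
-
-producer, err := client.CreateProducer(pulsar.ProducerOptions{
-	Topic: "my-topic",
-})
-if err != nil {
-	log.Fatal(err)
-}
-defer producer.Close()
-
-// Publish one message over the authenticated connection.
-if _, err := producer.Send(context.Background(), &pulsar.ProducerMessage{
-	Payload: []byte("authenticated hello"),
-}); err != nil {
-	log.Fatal(err)
-}
-
-```
-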
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/client-libraries-java.md b/site2/website/versioned_docs/version-2.10.1-deprecated/client-libraries-java.md
deleted file mode 100644
index dbccbb42b1c0b6..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/client-libraries-java.md
+++ /dev/null
@@ -1,1543 +0,0 @@
----
-id: client-libraries-java
-title: Pulsar Java client
-sidebar_label: "Java"
-original_id: client-libraries-java
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-You can use a Pulsar Java client to create Java [producers](#producer), [consumers](#consumer), [readers](#reader), and [TableViews](#tableview) of messages and to perform [administrative tasks](admin-api-overview.md). The current Java client version is **@pulsar:version@**.
-
-All the methods in the [producer](#producer), [consumer](#consumer), [reader](#reader), and [TableView](#tableview) of a Java client are thread-safe.
-
-Javadoc for the Pulsar client is divided into two domains by package, as follows.
-
-Package | Description | Maven Artifact
-:-------|:------------|:--------------
-[`org.apache.pulsar.client.api`](/api/client) | [The producer and consumer API](/api/client/) | [org.apache.pulsar:pulsar-client:@pulsar:version@](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client%7C@pulsar:version@%7Cjar)
-[`org.apache.pulsar.client.admin`](/api/admin) | The Java [admin API](admin-api-overview.md) | [org.apache.pulsar:pulsar-client-admin:@pulsar:version@](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client-admin%7C@pulsar:version@%7Cjar)
-`org.apache.pulsar.client.all` | Includes both `pulsar-client` and `pulsar-client-admin`.<br/><br/>Both `pulsar-client` and `pulsar-client-admin` are shaded packages and they shade dependencies independently. Consequently, applications using both `pulsar-client` and `pulsar-client-admin` have redundant shaded classes. It would be troublesome if you introduce new dependencies but forget to update the shading rules.<br/><br/>In this case, you can use `pulsar-client-all`, which shades dependencies only one time and reduces the size of dependencies. | [org.apache.pulsar:pulsar-client-all:@pulsar:version@](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client-all%7C@pulsar:version@%7Cjar)
-
-This document focuses only on the client API for producing and consuming messages on Pulsar topics. For how to use the Java admin client, see [Pulsar admin interface](admin-api-overview.md).
-
-## Installation
-
-The latest version of the Pulsar Java client library is available via [Maven Central](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client%7C@pulsar:version@%7Cjar). To use the latest version, add the `pulsar-client` library to your build configuration.
-
-:::tip
-
-- [`pulsar-client`](https://search.maven.org/artifact/org.apache.pulsar/pulsar-client) and [`pulsar-client-admin`](https://search.maven.org/artifact/org.apache.pulsar/pulsar-client-admin) shade dependencies via [maven-shade-plugin](https://maven.apache.org/plugins/maven-shade-plugin/) to avoid conflicts of the underlying dependency packages (such as Netty). If you do not want to manage dependency conflicts manually, you can use them.
-- [`pulsar-client-original`](https://search.maven.org/artifact/org.apache.pulsar/pulsar-client-original) and [`pulsar-client-admin-original`](https://search.maven.org/artifact/org.apache.pulsar/pulsar-client-admin-original) **do not** shade dependencies. If you want to manage dependencies manually, you can use them.
-
-:::
-
-### Maven
-
-If you use Maven, add the following information to the `pom.xml` file.
-
-```xml
-
-<!-- in your <properties> block -->
-<pulsar.version>@pulsar:version@</pulsar.version>
-
-<!-- in your <dependencies> block -->
-<dependency>
-  <groupId>org.apache.pulsar</groupId>
-  <artifactId>pulsar-client</artifactId>
-  <version>${pulsar.version}</version>
-</dependency>
-
-```
-
-### Gradle
-
-If you use Gradle, add the following information to the `build.gradle` file.
-
-```groovy
-
-def pulsarVersion = '@pulsar:version@'
-
-dependencies {
-    compile group: 'org.apache.pulsar', name: 'pulsar-client', version: pulsarVersion
-}
-
-```
-
-## Connection URLs
-
-To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL.
-
-You can assign Pulsar protocol URLs to specific clusters and use the `pulsar` scheme. The default port is `6650`. The following is an example of `localhost`.
-
-```http
-
-pulsar://localhost:6650
-
-```
-
-If you have multiple brokers, the URL is as follows.
-
-```http
-
-pulsar://localhost:6650,localhost:6651,localhost:6652
-
-```
-
-A URL for a production Pulsar cluster is as follows.
-
-```http
-
-pulsar://pulsar.us-west.example.com:6650
-
-```
-
-If you use [TLS](security-tls-authentication.md) authentication, the URL is as follows.
-
-```http
-
-pulsar+ssl://pulsar.us-west.example.com:6651
-
-```
-
-## Client
-
-You can instantiate a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object using just a URL for the target Pulsar [cluster](reference-terminology.md#cluster), like this:
-
-```java
-
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar://localhost:6650")
-        .build();
-
-```
-
-If you have multiple brokers, you can initiate a PulsarClient like this:
-
-```java
-
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar://localhost:6650,localhost:6651,localhost:6652")
-        .build();
-
-```
-
-> ### Default broker URLs for standalone clusters
-> If you run a cluster in [standalone mode](getting-started-standalone.md), the broker is available at the `pulsar://localhost:6650` URL by default.
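-
-The builder also exposes programmatic equivalents for many of the `loadConf` keys documented below. As a hedged sketch (all values illustrative; assumes the usual `java.util.concurrent.TimeUnit` import):
-
-```java
-
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar://localhost:6650")
-        .operationTimeout(30, TimeUnit.SECONDS) // operationTimeoutMs
-        .ioThreads(4)                           // numIoThreads
-        .listenerThreads(2)                     // numListenerThreads
-        .build();
-
-```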
-
-If you create a client, you can use the `loadConf` configuration. The following parameters are available in `loadConf`.
-
-| Name | Type | Description | Default
-|---|---|---|---
-`serviceUrl` | String | Service URL provider for Pulsar service | None
-`authPluginClassName` | String | Name of the authentication plugin | None
-`authParams` | String | Parameters for the authentication plugin<br/><br/>**Example**<br/>key1:val1,key2:val2 | None
-`operationTimeoutMs` | long | Operation timeout | 30000
-`statsIntervalSeconds` | long | Interval between each stats report<br/><br/>Stats are activated with a positive `statsInterval`<br/><br/>Set `statsIntervalSeconds` to at least 1 second | 60
-`numIoThreads` | int | The number of threads used for handling connections to brokers | 1
-`numListenerThreads` | int | The number of threads used for handling message listeners. The listener thread pool is shared across all the consumers and readers using the "listener" model to get messages. For a given consumer, the listener is always invoked from the same thread to ensure ordering. If you want multiple threads to process a single topic, you need to create a [`shared`](concepts-messaging.md#shared) subscription and multiple consumers for this subscription. This does not ensure ordering. | 1
-`useTcpNoDelay` | boolean | Whether to use the TCP no-delay flag on the connection to disable the Nagle algorithm | true
-`useTls` | boolean | Whether to use TLS encryption on the connection | false
-`tlsTrustCertsFilePath` | string | Path to the trusted TLS certificate file | None
-`tlsAllowInsecureConnection` | boolean | Whether the Pulsar client accepts an untrusted TLS certificate from a broker | false
-`tlsHostnameVerificationEnable` | boolean | Whether to enable TLS hostname verification | false
-`concurrentLookupRequest` | int | The number of concurrent lookup requests allowed to be sent on each broker connection, to prevent overloading a broker | 5000
-`maxLookupRequest` | int | The maximum number of lookup requests allowed on each broker connection, to prevent overloading a broker | 50000
-`maxNumberOfRejectedRequestPerConnection` | int | The maximum number of rejected requests from a broker in a certain time frame (30 seconds), after which the current connection is closed and the client creates a new connection to connect to a different broker | 50
-`keepAliveIntervalSeconds` | int | Keep-alive interval for each client-broker connection, in seconds | 30
-`connectionTimeoutMs` | int | Duration to wait for a connection to a broker to be established<br/><br/>If the duration passes without a response from a broker, the connection attempt is dropped | 10000
-`requestTimeoutMs` | int | Maximum duration for completing a request | 60000
-`defaultBackoffIntervalNanos` | int | Default duration for a backoff interval | TimeUnit.MILLISECONDS.toNanos(100)
-`maxBackoffIntervalNanos` | long | Maximum duration for a backoff interval | TimeUnit.SECONDS.toNanos(30)
-`socks5ProxyAddress` | SocketAddress | SOCKS5 proxy address | None
-`socks5ProxyUsername` | string | SOCKS5 proxy username | None
-`socks5ProxyPassword` | string | SOCKS5 proxy password | None
-
-Check out the Javadoc for the {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} class for a full list of configurable parameters.
-
-> In addition to client-level configuration, you can also apply [producer](#configure-producer) and [consumer](#configure-consumer) specific configuration as described in the sections below.
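-
-As a hedged illustration of how these keys can be supplied as a map via `loadConf` (only a few keys shown; values illustrative):
-
-```java
-
-Map<String, Object> config = new HashMap<>();
-config.put("serviceUrl", "pulsar://localhost:6650");
-config.put("operationTimeoutMs", 30000);
-config.put("numIoThreads", 4);
-
-PulsarClient client = PulsarClient.builder()
-        .loadConf(config)
-        .build();
-
-```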
-
-### Client memory allocator configuration
-
-You can set the client memory allocator configurations through Java properties.
-
-| Property | Type | Description | Default | Available values
-|---|---|---|---|---
-`pulsar.allocator.pooled` | String | If set to `true`, the client uses a direct memory pool.<br/>If set to `false`, the client uses heap memory without a pool | true | <li>true</li> <li>false</li>
-`pulsar.allocator.exit_on_oom` | String | Whether to exit the JVM when OOM happens | false | <li>true</li> <li>false</li>
-`pulsar.allocator.leak_detection` | String | The leak detection policy for the Pulsar bytebuf allocator.<br/><li>**Disabled**: No leak detection and no overhead.</li> <li>**Simple**: Instruments 1% of the allocated buffers to track for leaks.</li> <li>**Advanced**: Instruments 1% of the allocated buffers to track for leaks, reporting stack traces of the places where the buffers are used.</li> <li>**Paranoid**: Instruments 100% of the allocated buffers to track for leaks, reporting stack traces of the places where the buffers are used, and introduces significant overhead.</li> | Disabled | <li>Disabled</li> <li>Simple</li> <li>Advanced</li> <li>Paranoid</li>
-`pulsar.allocator.out_of_memory_policy` | String | Whether the client throws an exception or falls back to heap allocation when an OOM occurs | FallbackToHeap | <li>ThrowException</li> <li>FallbackToHeap</li>
-
-**Example**:
-
-```
-
--Dpulsar.allocator.pooled=true
--Dpulsar.allocator.exit_on_oom=false
--Dpulsar.allocator.leak_detection=Disabled
--Dpulsar.allocator.out_of_memory_policy=ThrowException
-
-```
-
-### Cluster-level failover
-
-This chapter describes the concept, benefits, use cases, constraints, usage, and working principles of cluster-level failover. It contains the following sections:
-
-- [What is cluster-level failover?](#what-is-cluster-level-failover)
-
-  * [Concept of cluster-level failover](#concept-of-cluster-level-failover)
-
-  * [Why use cluster-level failover?](#why-use-cluster-level-failover)
-
-  * [When to use cluster-level failover?](#when-to-use-cluster-level-failover)
-
-  * [When is cluster-level failover triggered?](#when-is-cluster-level-failover-triggered)
-
-  * [Why does cluster-level failover fail?](#why-does-cluster-level-failover-fail)
-
-  * [What are the limitations of cluster-level failover?](#what-are-the-limitations-of-cluster-level-failover)
-
-  * [What are the relationships between cluster-level failover and geo-replication?](#what-are-the-relationships-between-cluster-level-failover-and-geo-replication)
-
-- [How to use cluster-level failover?](#how-to-use-cluster-level-failover)
-
-- [How does cluster-level failover work?](#how-does-cluster-level-failover-work)
-
-> #### What is cluster-level failover
-
-This chapter helps you better understand the concept of cluster-level failover.
-
-> ##### Concept of cluster-level failover
-
-````mdx-code-block
-
-Automatic cluster-level failover supports Pulsar clients switching from a primary cluster to one or several backup clusters automatically and seamlessly when it detects a failover event, based on the detection policy configured by **users**.
-
-![Automatic cluster-level failover](/assets/cluster-level-failover-1.png)
-
-Controlled cluster-level failover supports Pulsar clients switching from a primary cluster to one or several backup clusters. The switchover is set manually by **administrators**.
-
-![Controlled cluster-level failover](/assets/cluster-level-failover-2.png)
-
-````
-
-Once the primary cluster functions again, Pulsar clients can switch back to the primary cluster. Most of the time, users do not even notice the switchover and can keep using applications and services without interruptions or timeouts.
-
-> ##### Why use cluster-level failover?
-
-Cluster-level failover provides fault tolerance, continuous availability, and high availability together. It brings a number of benefits, including but not limited to:
-
-* Reduced cost: services can be switched and recovered automatically with no data loss.
-
-* Simplified management: businesses can operate on an "always-on" basis since no immediate user intervention is required.
-
-* Improved stability and robustness: it ensures continuous performance and minimizes service downtime.
-
-> ##### When to use cluster-level failover?
-
-Cluster-level failover protects your environment in a number of ways, including but not limited to:
-
-* Disaster recovery: cluster-level failover can automatically and seamlessly transfer the production workload on a primary cluster to one or several backup clusters, which ensures minimum data loss and reduced recovery time.
-* Planned migration: if you want to migrate production workloads from an old cluster to a new cluster, you can improve the migration efficiency with cluster-level failover. For example, you can test whether the data migration goes smoothly in case of a failover event, and identify possible issues and risks before the migration.
-
-> ##### When is cluster-level failover triggered?
-
-````mdx-code-block
-
-Automatic cluster-level failover is triggered when Pulsar clients cannot connect to the primary cluster for a prolonged period of time. This can be caused by any number of reasons including, but not limited to:
-
-* Network failure: the internet connection is lost.
-
-* Power failure: the shutdown time of the primary cluster exceeds time limits.
-
-* Service error: errors occur on the primary cluster (for example, the primary cluster does not function because of time limits).
-
-* Crashed storage space: the primary cluster does not have enough storage space, but the corresponding storage space on the backup server functions normally.
-
-Controlled cluster-level failover is triggered when administrators set the switchover manually.
-
-````
-
-> ##### Why does cluster-level failover fail?
-
-Obviously, cluster-level failover does not succeed if the backup cluster is unreachable by active Pulsar clients. This can happen for many reasons, including but not limited to:
-
-* Power failure: the backup cluster is shut down or does not function normally.
-
-* Crashed storage space: the primary and backup clusters do not have enough storage space.
-
-* The failover is initiated, but no cluster can assume the role of an available cluster due to errors, and the primary cluster is not able to provide service normally.
-
-* You manually initiate a switchover, but services cannot be switched to the backup cluster server; the system then attempts to switch services back to the primary cluster.
-
-* Authentication or authorization fails 1) between the primary and backup clusters, or 2) between two backup clusters.
-
-> ##### What are the limitations of cluster-level failover?
-
-Currently, cluster-level failover can perform probes to prevent data loss, but it cannot check the status of backup clusters. If backup clusters are not healthy, you cannot produce or consume data.
-
-> #### What are the relationships between cluster-level failover and geo-replication?
-
-Cluster-level failover is an extension of [geo-replication](concepts-replication.md) that improves stability and robustness. Cluster-level failover depends on geo-replication, and they have some **differences**, as below.
-
-Influence |Cluster-level failover|Geo-replication
-|---|---|---
-Do administrators have heavy workloads?|No or maybe.<br/><br/>- For **automatic** cluster-level failover, the cluster switchover is triggered automatically based on the policies set by **users**.<br/><br/>- For **controlled** cluster-level failover, the switchover is triggered manually by **administrators**.|Yes.<br/><br/>If a cluster fails, immediate administrator intervention is required.
-Result in data loss?|No.<br/><br/>For both **automatic** and **controlled** cluster-level failover, if the failed primary cluster doesn't replicate messages immediately to the backup cluster, the Pulsar client can't consume the non-replicated messages. After the primary cluster is restored and the Pulsar client switches back, the non-replicated data can still be consumed by the Pulsar client. Consequently, the data is not lost.<br/><br/>- For **automatic** cluster-level failover, services can be switched and recovered automatically with no data loss.<br/><br/>- For **controlled** cluster-level failover, services are switched and recovered manually, and data loss may happen.|Yes.<br/><br/>Pulsar clients and DNS systems have caches. When administrators switch the DNS from a primary cluster to a backup cluster, it takes some time for the caches to expire, which delays client recovery and causes failures to produce or consume messages in the meantime.
-Result in Pulsar client failure? |No or maybe.<br/><br/>- For **automatic** cluster-level failover, services can be switched and recovered automatically and the Pulsar client does not fail.<br/><br/>- For **controlled** cluster-level failover, services are switched and recovered manually, but the Pulsar client fails before administrators can take action. |Same as above.
-
-> #### How to use cluster-level failover
-
-This section guides you through every step of configuring cluster-level failover.
-
-**Tip**
-
-- You should configure cluster-level failover only when the cluster contains sufficient resources to handle all possible consequences. Workload intensity on the backup cluster may increase significantly.
-
-- Connect clusters to an uninterruptible power supply (UPS) unit to reduce the risk of unexpected power loss.
-
-**Requirements**
-
-* Pulsar client 2.10 or later versions.
-
-* For backup clusters:
-
-  * The number of BookKeeper nodes should be equal to or greater than the ensemble quorum.
-
-  * The number of ZooKeeper nodes should be equal to or greater than 3.
-
-* **Turn on geo-replication** between the primary cluster and any dependent cluster (primary to backup or backup to backup) to prevent data loss.
-
-* Set `replicateSubscriptionState` to `true` when creating consumers.
-
-````mdx-code-block
-
-This is an example of how to construct a Java Pulsar client to use automatic cluster-level failover. The switchover is triggered automatically.
-
-```
-
-private PulsarClient getAutoFailoverClient() throws PulsarClientException {
-    ServiceUrlProvider failover = AutoClusterFailover.builder()
-            .primary("pulsar://localhost:6650")
-            .secondary(Arrays.asList("pulsar://other1:6650", "pulsar://other2:6650"))
-            .failoverDelay(30, TimeUnit.SECONDS)
-            .switchBackDelay(60, TimeUnit.SECONDS)
-            .checkInterval(1000, TimeUnit.MILLISECONDS)
-            .secondaryTlsTrustCertsFilePath("/path/to/ca.cert.pem")
-            .secondaryAuthentication("org.apache.pulsar.client.impl.auth.AuthenticationTls",
-                    "tlsCertFile:/path/to/my-role.cert.pem,tlsKeyFile:/path/to/my-role.key-pk8.pem")
-            .build();
-
-    PulsarClient pulsarClient = PulsarClient.builder()
-            .serviceUrlProvider(failover)
-            .build();
-
-    return pulsarClient;
-}
-
-```
-
-Configure the following parameters:
-
-Parameter|Default value|Required?|Description
-|---|---|---|---
-`primary`|N/A|Yes|Service URL of the primary cluster.
-`secondary`|N/A|Yes|Service URL(s) of one or several backup clusters.<br/><br/>You can specify several backup clusters using a comma-separated list.<br/><br/>Note that:<br/>- A backup cluster is chosen in the sequence shown in the list.<br/>- If all backup clusters are available, the Pulsar client chooses the first backup cluster.
-`failoverDelay`|N/A|Yes|The delay before the Pulsar client switches from the primary cluster to the backup cluster.<br/><br/>Automatic failover is controlled by a probe task:<br/>1) The probe task first checks the health status of the primary cluster.<br/>2) If the probe task finds that the continuous failure time of the primary cluster exceeds `failoverDelayMs`, it switches the Pulsar client to the backup cluster.
-`switchBackDelay`|N/A|Yes|The delay before the Pulsar client switches from the backup cluster back to the primary cluster.<br/><br/>Automatic failover switchover is controlled by a probe task:<br/>1) After the Pulsar client switches from the primary cluster to the backup cluster, the probe task continues to check the status of the primary cluster.<br/>2) If the primary cluster functions well and continuously remains active longer than `switchBackDelay`, the Pulsar client switches back to the primary cluster.
-`checkInterval`|30s|No|Frequency of performing the probe task (in seconds).
-`secondaryTlsTrustCertsFilePath`|N/A|No|Path to the trusted TLS certificate file of the backup cluster.
-`secondaryAuthentication`|N/A|No|Authentication of the backup cluster.
-
-This is an example of how to construct a Java Pulsar client to use controlled cluster-level failover. The switchover is triggered by administrators manually.
-
-**Note**: you can have one or several backup clusters but can only specify one.
-
-```
-
-public PulsarClient getControlledFailoverClient() throws IOException {
-    Map<String, String> header = new HashMap<>();
-    header.put("service_user_id", "my-user");
-    header.put("service_password", "tiger");
-    header.put("clusterA", "tokenA");
-    header.put("clusterB", "tokenB");
-
-    ServiceUrlProvider provider = ControlledClusterFailover.builder()
-            .defaultServiceUrl("pulsar://localhost:6650")
-            .checkInterval(1, TimeUnit.MINUTES)
-            .urlProvider("http://localhost:8080/test")
-            .urlProviderHeader(header)
-            .build();
-
-    PulsarClient pulsarClient = PulsarClient.builder()
-            .serviceUrlProvider(provider)
-            .build();
-
-    return pulsarClient;
-}
-
-```
-
-Parameter|Default value|Required?|Description
-|---|---|---|---
-`defaultServiceUrl`|N/A|Yes|Pulsar service URL.
-`checkInterval`|30s|No|Frequency of performing the probe task (in seconds).
-`urlProvider`|N/A|Yes|URL provider service.
-`urlProviderHeader`|N/A|No|`urlProviderHeader` is a map containing tokens and credentials.<br/><br/>If you enable authentication or authorization between Pulsar clients and the primary and backup clusters, you need to provide `urlProviderHeader`.
-
-Here is an example of how `urlProviderHeader` works.
-
-![How urlProviderHeader works](/assets/cluster-level-failover-3.png)
-
-Assume that you want to connect Pulsar client 1 to cluster A.
-
-1. Pulsar client 1 sends the token *t1* to the URL provider service.
-
-2. The URL provider service returns the credential *c1* and the cluster A URL to the Pulsar client.
-
-   The URL provider service manages all tokens and credentials. It returns different credentials based on different tokens, and different target cluster URLs to different Pulsar clients.
-
-   **Note**: **the credential must be in a JSON file and contain parameters as shown**.
-
-   ```
-
-   {
-     "serviceUrl": "pulsar+ssl://target:6651",
-     "tlsTrustCertsFilePath": "/security/ca.cert.pem",
-     "authPluginClassName": "org.apache.pulsar.client.impl.auth.AuthenticationTls",
-     "authParamsString": " \"tlsCertFile\": \"/security/client.cert.pem\" \"tlsKeyFile\": \"/security/client-pk8.pem\" "
-   }
-
-   ```
-
-3. Pulsar client 1 connects to cluster A using credential *c1*.
-
-````
-
-> #### How does cluster-level failover work?
-
-This chapter explains the working process of cluster-level failover. For more implementation details, see [PIP-121](https://github.com/apache/pulsar/issues/13315).
-
-````mdx-code-block
-
-In an automatic failover cluster, the primary cluster and backup clusters are aware of each other's availability. The automatic failover cluster performs the following actions without administrator intervention:
-
-1. The Pulsar client runs a probe task at intervals defined in `checkInterval`.
-
-2. If the probe task finds that the failure time of the primary cluster exceeds the time set in the `failoverDelay` parameter, it searches the backup clusters for an available healthy cluster.
-
-   2a) If there are healthy backup clusters, the Pulsar client switches to a backup cluster in the order defined in `secondary`.
-
-   2b) If there is no healthy backup cluster, the Pulsar client does not perform the switchover, and the probe task continues to look for an available backup cluster.
-
-3. The probe task checks whether the primary cluster functions well or not.
-
-   3a) If the primary cluster comes back and the continuous healthy time exceeds the time set in `switchBackDelay`, the Pulsar client switches back to the primary cluster.
-
-   3b) If the primary cluster does not come back, the Pulsar client does not perform the switchover.
-
-![Workflow of automatic failover cluster](/assets/cluster-level-failover-4.png)
-
-1. The Pulsar client runs a probe task at intervals defined in `checkInterval`.
-
-2. The probe task fetches the service URL configuration from the URL provider service, which is configured by `urlProvider`.
-
-   2a) If the service URL configuration is changed, the probe task switches to the target cluster without checking the health status of the target cluster.
-
-   2b) If the service URL configuration is not changed, the Pulsar client does not perform the switchover.
-
-3. If the Pulsar client switches to the target cluster, the probe task continues to fetch the service URL configuration from the URL provider service at intervals defined in `checkInterval`.
-
-   3a) If the service URL configuration is changed, the probe task switches to the target cluster without checking the health status of the target cluster.
-
-   3b) If the service URL configuration is not changed, it does not perform the switchover.
-
-![Workflow of controlled failover cluster](/assets/cluster-level-failover-5.png)
-
-````
-
-## Producer
-
-In Pulsar, producers write messages to topics. Once you've instantiated a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object (as in the section [above](#client)), you can create a {@inject: javadoc:Producer:/client/org/apache/pulsar/client/api/Producer} for a specific Pulsar [topic](reference-terminology.md#topic).
-
-```java
-
-Producer<byte[]> producer = client.newProducer()
-        .topic("my-topic")
-        .create();
-
-// You can then send messages to the broker and topic you specified:
-producer.send("My message".getBytes());
-
-```
-
-By default, producers produce messages that consist of byte arrays. You can produce different types by specifying a message [schema](#schema).
-
-```java
-
-Producer<String> stringProducer = client.newProducer(Schema.STRING)
-        .topic("my-topic")
-        .create();
-stringProducer.send("My message");
-
-```
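-
-Schemas go beyond strings. As a hedged sketch, a producer typed by a JSON schema might look like the following; the `User` POJO is illustrative, not part of the API.
-
-```java
-
-// Illustrative POJO; any Bean-style class works with Schema.JSON.
-public class User {
-    public String name;
-    public int age;
-}
-
-Producer<User> userProducer = client.newProducer(Schema.JSON(User.class))
-        .topic("my-topic")
-        .create();
-userProducer.send(new User());
-
-```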
-
-> Make sure that you close your producers, consumers, and clients when you no longer need them.
-
-> ```java
->
-> producer.close();
-> consumer.close();
-> client.close();
->
-> ```
-
->
-> Close operations can also be asynchronous:
-
-> ```java
->
-> producer.closeAsync()
->    .thenRun(() -> System.out.println("Producer closed"))
->    .exceptionally((ex) -> {
->        System.err.println("Failed to close producer: " + ex);
->        return null;
->    });
->
-> ```
-
-### Configure producer
-
-If you instantiate a `Producer` object by specifying only a topic name as in the example above, the default configuration of the producer is used.
-
-If you create a producer, you can use the `loadConf` configuration. The following parameters are available in `loadConf`.
-
-Name| Type | Description | Default
-|---|---|---|---
-`topicName`| string| Topic name| null
-`producerName`| string|Producer name| null
-`sendTimeoutMs`| long|Message send timeout in ms.<br/>If a message is not acknowledged by a server before the `sendTimeout` expires, an error occurs.|30000
-`blockIfQueueFull`|boolean|If it is set to `true`, when the outgoing message queue is full, the `Send` and `SendAsync` methods of the producer block, rather than failing and throwing errors.<br/>If it is set to `false`, when the outgoing message queue is full, the `Send` and `SendAsync` methods of the producer fail and `ProducerQueueIsFullError` exceptions occur.<br/><br/>The `MaxPendingMessages` parameter determines the size of the outgoing message queue.|false
-`maxPendingMessages`| int|The maximum size of a queue holding pending messages, for example, messages waiting to receive an acknowledgment from a [broker](reference-terminology.md#broker).<br/><br/>By default, when the queue is full, all calls to the `Send` and `SendAsync` methods fail **unless** you set `BlockIfQueueFull` to `true`.|1000
-`maxPendingMessagesAcrossPartitions`|int|The maximum number of pending messages across partitions.<br/><br/>Use the setting to lower the max pending messages for each partition ({@link #setMaxPendingMessages(int)}) if the total number exceeds the configured value.|50000
-`messageRoutingMode`| MessageRoutingMode|Message routing logic for producers on [partitioned topics](concepts-architecture-overview.md#partitioned-topics).<br/>This logic is applied only when no key is set on messages.<br/>Available options are as follows:<br/><li>`pulsar.RoundRobinDistribution`: round robin</li> <li>`pulsar.UseSinglePartition`: publish all messages to a single partition</li> <li>`pulsar.CustomPartition`: a custom partitioning scheme</li>|`pulsar.RoundRobinDistribution`
-`hashingScheme`| HashingScheme|Hashing function determining the partition where you publish a particular message (**partitioned topics only**).<br/>Available options are as follows:<br/><li>`pulsar.JavastringHash`: the equivalent of `string.hashCode()` in Java</li> <li>`pulsar.Murmur3_32Hash`: applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function</li> <li>`pulsar.BoostHash`: applies the hashing function from C++'s [Boost](https://www.boost.org/doc/libs/1_62_0/doc/html/hash.html) library</li>|`HashingScheme.JavastringHash`
-`cryptoFailureAction`| ProducerCryptoFailureAction|The action the producer takes when encryption fails.<br/><li>**FAIL**: if encryption fails, unencrypted messages fail to send.</li> <li>**SEND**: if encryption fails, unencrypted messages are sent.</li>|`ProducerCryptoFailureAction.FAIL`
-`batchingMaxPublishDelayMicros`| long|Batching time period of sending messages.|TimeUnit.MILLISECONDS.toMicros(1)
-`batchingMaxMessages` |int|The maximum number of messages permitted in a batch.|1000
-`batchingEnabled`| boolean|Enable batching of messages. |true
-`chunkingEnabled` | boolean | Enable chunking of messages. |false
-`compressionType`|CompressionType|Message data compression type used by a producer.<br/>Available options:<br/><li>[`LZ4`](https://github.com/lz4/lz4)</li> <li>[`ZLIB`](https://zlib.net/)</li> <li>[`ZSTD`](https://facebook.github.io/zstd/)</li> <li>[`SNAPPY`](https://google.github.io/snappy/)</li>| No compression
-`initialSubscriptionName`|string|Use this configuration to automatically create an initial subscription when creating a topic. If this field is not set, the initial subscription is not created.|null
-
-You can configure parameters if you do not want to use the default configuration.
-
-For a full list, see the Javadoc for the {@inject: javadoc:ProducerBuilder:/client/org/apache/pulsar/client/api/ProducerBuilder} class. The following is an example.
-
-```java
-
-Producer<byte[]> producer = client.newProducer()
-        .topic("my-topic")
-        .batchingMaxPublishDelay(10, TimeUnit.MILLISECONDS)
-        .sendTimeout(10, TimeUnit.SECONDS)
-        .blockIfQueueFull(true)
-        .create();
-
-```
-
-### Message routing
-
-When using partitioned topics, you can specify the routing mode whenever you publish messages using a producer. For more information on specifying a routing mode using the Java client, see the [Partitioned Topics cookbook](cookbooks-partitioned.md).
-
-### Async send
-
-You can publish messages [asynchronously](concepts-messaging.md#send-modes) using the Java client. With async send, the producer puts the message in a blocking queue and returns immediately. The client library then sends the message to the broker in the background. If the queue is full (its maximum size is configurable), the producer is blocked or fails immediately when calling the API, depending on the arguments passed to the producer.
-
-The following is an example.
-
-```java
-
-producer.sendAsync("my-async-message".getBytes()).thenAccept(msgId -> {
-    System.out.println("Message with ID " + msgId + " successfully sent");
-});
-
-```
-
-As you can see from the example above, async send operations return a {@inject: javadoc:MessageId:/client/org/apache/pulsar/client/api/MessageId} wrapped in a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture).
-
-### Configure messages
-
-In addition to a value, you can set additional items on a given message:
-
-```java
-
-producer.newMessage()
-        .key("my-message-key")
-        .value("my-async-message".getBytes())
-        .property("my-key", "my-value")
-        .property("my-other-key", "my-other-value")
-        .send();
-
-```
-
-You can terminate the builder chain with `sendAsync()` and get a future in return.
-
-### Enable chunking
-
-Message [chunking](concepts-messaging.md#chunking) enables Pulsar to process large payload messages by splitting the message into chunks at the producer side and aggregating chunked messages at the consumer side.
-
-The message chunking feature is OFF by default. The following is an example of how to enable message chunking when creating a producer.
-
-```java
-
-Producer<byte[]> producer = client.newProducer()
-        .topic(topic)
-        .enableChunking(true)
-        .enableBatching(false)
-        .create();
-
-```
-
-By default, the producer chunks a large message based on the max message size (`maxMessageSize`) configured at the broker (for example, 5 MB). However, the client can also configure the max chunk size using the producer configuration `chunkMaxMessageSize`, as sketched below.
-> **Note:** To enable chunking, you need to disable batching (`enableBatching`=`false`) at the same time.
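-
-As a hedged sketch of that knob (it assumes the `chunkMaxMessageSize` builder setting mentioned above is available in your client version, and that the broker-side `maxMessageSize` is larger than the chunk size you pick):
-
-```java
-
-Producer<byte[]> producer = client.newProducer()
-        .topic(topic)
-        .enableChunking(true)
-        .enableBatching(false)
-        .chunkMaxMessageSize(1024 * 1024) // split payloads into ~1 MB chunks
-        .create();
-
-```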
-
-## Consumer
-
-In Pulsar, consumers subscribe to topics and handle messages that producers publish to those topics. You can instantiate a new [consumer](reference-terminology.md#consumer) by first instantiating a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object and passing it a URL for a Pulsar broker (as [above](#client)).
-
-Once you've instantiated a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object, you can create a {@inject: javadoc:Consumer:/client/org/apache/pulsar/client/api/Consumer} by specifying a [topic](reference-terminology.md#topic) and a [subscription](concepts-messaging.md#subscription-types).
-
-```java
-
-Consumer consumer = client.newConsumer()
-        .topic("my-topic")
-        .subscriptionName("my-subscription")
-        .subscribe();
-
-```
-
-The `subscribe` method automatically subscribes the consumer to the specified topic and subscription. One way to make the consumer listen on the topic is to set up a `while` loop. In this example loop, the consumer listens for messages, prints the contents of any received message, and then [acknowledges](reference-terminology.md#acknowledgment-ack) that the message has been processed. If the processing logic fails, you can use [negative acknowledgement](reference-terminology.md#acknowledgment-ack) to redeliver the message later.
-
-```java
-
-while (true) {
-    // Wait for a message
-    Message msg = consumer.receive();
-
-    try {
-        // Do something with the message
-        System.out.println("Message received: " + new String(msg.getData()));
-
-        // Acknowledge the message so that it can be deleted by the message broker
-        consumer.acknowledge(msg);
-    } catch (Exception e) {
-        // Message failed to process, redeliver later
-        consumer.negativeAcknowledge(msg);
-    }
-}
-
-```
-
-If you don't want to block your main thread but rather listen constantly for new messages, consider using a `MessageListener`.
-
-```java
-
-MessageListener myMessageListener = (consumer, msg) -> {
-    try {
-        System.out.println("Message received: " + new String(msg.getData()));
-        consumer.acknowledge(msg);
-    } catch (Exception e) {
-        consumer.negativeAcknowledge(msg);
-    }
-};
-
-Consumer consumer = client.newConsumer()
-        .topic("my-topic")
-        .subscriptionName("my-subscription")
-        .messageListener(myMessageListener)
-        .subscribe();
-
-```
-
-### Configure consumer
-
-If you instantiate a `Consumer` object by specifying only a topic and subscription name as in the example above, the consumer uses the default configuration.
-
-When you create a consumer, you can use the `loadConf` configuration. The following parameters are available in `loadConf`.
-
-Name|Type | Description | Default
-|---|---|---|---
-`topicNames`| Set<String>| Topic name| Sets.newTreeSet()
-`topicsPattern`|Pattern| Topic pattern |None
-`subscriptionName`|String| Subscription name| None
-`subscriptionType`|SubscriptionType| Subscription type<br/>Four subscription types are available:<li>Exclusive</li> <li>Failover</li> <li>Shared</li> <li>Key_Shared</li>|SubscriptionType.Exclusive
-`receiverQueueSize` |int | Size of a consumer's receiver queue, that is, the number of messages accumulated by a consumer before an application calls `Receive`.<br/><br/>A value higher than the default value increases consumer throughput, though at the expense of more memory utilization.| 1000
-`acknowledgementsGroupTimeMicros`|long|Group a consumer acknowledgment for a specified time.<br/><br/>By default, a consumer uses a 100ms grouping time to send out acknowledgments to a broker.<br/><br/>Setting a group time of 0 sends out acknowledgments immediately.<br/><br/>A longer ack group time is more efficient at the expense of a slight increase in message re-deliveries after a failure.|TimeUnit.MILLISECONDS.toMicros(100)
-`negativeAckRedeliveryDelayMicros`|long|Delay to wait before redelivering messages that failed to be processed.<br/><br/>When an application uses {@link Consumer#negativeAcknowledge(Message)}, failed messages are redelivered after a fixed timeout. |TimeUnit.MINUTES.toMicros(1)
-`maxTotalReceiverQueueSizeAcrossPartitions`|int |The max total receiver queue size across partitions.<br/><br/>This setting reduces the receiver queue size for individual partitions if the total receiver queue size exceeds this value.|50000
-`consumerName`|String|Consumer name|null
-`ackTimeoutMillis`|long|Timeout of unacked messages|0
-`tickDurationMillis`|long|Granularity of the ack-timeout redelivery.<br/><br/>Using a higher `tickDurationMillis` reduces the memory overhead to track messages when setting the ack-timeout to a bigger value (for example, 1 hour).|1000
-`priorityLevel`|int|Priority level for a consumer to which a broker gives more priority while dispatching messages in the Shared subscription type.<br/><br/>The broker follows descending priorities. For example, 0=max-priority, 1, 2,...<br/><br/>In the Shared subscription type, the broker **first dispatches messages to the max-priority-level consumers if they have permits**. Otherwise, the broker considers consumers at the next priority level.<br/><br/>**Example 1**<br/>If a subscription has consumerA with `priorityLevel` 0 and consumerB with `priorityLevel` 1, then the broker **only dispatches messages to consumerA until it runs out of permits** and then starts dispatching messages to consumerB.<br/><br/>**Example 2**<br/>Consumer Priority, Level, Permits<br/>C1, 0, 2<br/>C2, 0, 1<br/>C3, 0, 1<br/>C4, 1, 2<br/>C5, 1, 1<br/><br/>The order in which the broker dispatches messages to consumers is: C1, C2, C3, C1, C4, C5, C4.|0
-`cryptoFailureAction`|ConsumerCryptoFailureAction|The action the consumer takes when it receives a message that cannot be decrypted.<li>**FAIL**: this is the default option to fail messages until crypto succeeds.</li> <li>**DISCARD**: silently acknowledge but do not deliver the message to the application.</li> <li>**CONSUME**: deliver encrypted messages to the application. It is the application's responsibility to decrypt the message.</li><br/>If decryption fails, the decompression of the message also fails.<br/><br/>If messages contain batch messages, the client is not able to retrieve individual messages in the batch.<br/><br/>A delivered encrypted message contains an {@link EncryptionContext} with the encryption and compression information, which the application can use to decrypt the consumed message payload.|ConsumerCryptoFailureAction.FAIL
-`properties`|SortedMap<String, String>|A name or value property of this consumer.<br/><br/>`properties` is application-defined metadata attached to a consumer.<br/><br/>When getting topic stats, this metadata is associated with the consumer stats for easier identification.|new TreeMap()
-`readCompacted`|boolean|If `readCompacted` is enabled, a consumer reads messages from a compacted topic rather than reading the full message backlog of a topic.<br/><br/>A consumer only sees the latest value for each key in the compacted topic, up until the point in the backlog that has been compacted. Beyond that point, messages are sent as normal.<br/><br/>Only enable `readCompacted` on subscriptions to persistent topics with a single active consumer (like failover or exclusive subscriptions).<br/><br/>Attempting to enable it on subscriptions to non-persistent topics or on shared subscriptions leads to the subscription call throwing a `PulsarClientException`.|false
-`subscriptionInitialPosition`|SubscriptionInitialPosition|Initial position at which to set the cursor when subscribing to a topic for the first time.|SubscriptionInitialPosition.Latest
-`patternAutoDiscoveryPeriod`|int|Topic auto discovery period when using a pattern for the topic's consumer.<br/><br/>The default and minimum value is 1 minute.|1
-`regexSubscriptionMode`|RegexSubscriptionMode|When subscribing to a topic using a regular expression, you can pick a certain type of topics.<br/><li>**PersistentOnly**: only subscribe to persistent topics.</li> <li>**NonPersistentOnly**: only subscribe to non-persistent topics.</li> <li>**AllTopics**: subscribe to both persistent and non-persistent topics.</li>|RegexSubscriptionMode.PersistentOnly
-`deadLetterPolicy`|DeadLetterPolicy|Dead letter policy for consumers.<br/><br/>By default, some messages are probably redelivered many times, possibly without end.<br/><br/>By using the dead letter mechanism, messages have a max redelivery count. **When the maximum number of redeliveries is exceeded, messages are sent to the dead letter topic and acknowledged automatically**.<br/><br/>You can enable the dead letter mechanism by setting `deadLetterPolicy`.<br/><br/>**Example**<br/><br/>client.newConsumer()<br/>.deadLetterPolicy(DeadLetterPolicy.builder().maxRedeliverCount(10).build())<br/>.subscribe();<br/><br/>The default dead letter topic name is `{TopicName}-{Subscription}-DLQ`.<br/><br/>To set a custom dead letter topic name:<br/>client.newConsumer()<br/>.deadLetterPolicy(DeadLetterPolicy.builder().maxRedeliverCount(10)<br/>.deadLetterTopic("your-topic-name").build())<br/>.subscribe();<br/><br/>When specifying the dead letter policy while not specifying `ackTimeoutMillis`, the ack timeout is set to 30000 milliseconds.|None
-`autoUpdatePartitions`|boolean|If `autoUpdatePartitions` is enabled, a consumer subscribes to partition increases automatically.<br/><br/>**Note**: this is only for partitioned consumers.|true
-`replicateSubscriptionState`|boolean|If `replicateSubscriptionState` is enabled, the subscription state is replicated to geo-replicated clusters.|false
-`negativeAckRedeliveryBackoff`|RedeliveryBackoff|Redelivery backoff policy for messages that are negatively acknowledged. You can specify a `RedeliveryBackoff` for a consumer.| `MultiplierRedeliveryBackoff`
-`ackTimeoutRedeliveryBackoff`|RedeliveryBackoff|Redelivery backoff policy for messages whose acknowledgment times out. You can specify a `RedeliveryBackoff` for a consumer.| `MultiplierRedeliveryBackoff`
-`autoAckOldestChunkedMessageOnQueueFull`|boolean|Whether to automatically acknowledge pending chunked messages when the threshold of `maxPendingChunkedMessage` is reached. If set to `false`, these messages are redelivered by their broker. |true
-`maxPendingChunkedMessage`|int| The maximum size of a queue holding pending chunked messages. When the threshold is reached, the consumer drops pending messages to optimize memory utilization.|10
-`expireTimeOfIncompleteChunkedMessageMillis`|long|The time interval to expire incomplete chunks if a consumer fails to receive all the chunks in the specified time period. The default value is 1 minute. | 60000
-`ackReceiptEnabled`|boolean| If `ackReceiptEnabled` is enabled, an ACK returns a receipt. A client that receives the ack receipt knows the broker has processed the ack request; however, without a transaction, the broker does not guarantee persistence of acknowledgments, which means the messages still have a chance to be redelivered after the broker crashes. With a transaction, the client can only get the receipt after the acknowledgments have been persisted. | false
-
-You can configure parameters if you do not want to use the default configuration. For a full list, see the Javadoc for the {@inject: javadoc:ConsumerBuilder:/client/org/apache/pulsar/client/api/ConsumerBuilder} class.
-
-The following is an example.
-
-```java
-
-Consumer consumer = client.newConsumer()
-        .topic("my-topic")
-        .subscriptionName("my-subscription")
-        .ackTimeout(10, TimeUnit.SECONDS)
-        .subscriptionType(SubscriptionType.Exclusive)
-        .subscribe();
-
-```
-
-### Async receive
-
-The `receive` method receives messages synchronously (the consumer process is blocked until a message is available). You can also use [async receive](concepts-messaging.md#receive-modes), which returns a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture) object immediately once a new message is available.
-
-The following is an example.
-
-```java
-
-CompletableFuture<Message> asyncMessage = consumer.receiveAsync();
-
-```
-
-Async receive operations return a {@inject: javadoc:Message:/client/org/apache/pulsar/client/api/Message} wrapped inside of a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture).
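-
-As a hedged sketch of consuming that future without blocking the calling thread (acknowledging asynchronously as well):
-
-```java
-
-consumer.receiveAsync().thenAccept(msg -> {
-    System.out.println("Message received: " + new String(msg.getData()));
-    consumer.acknowledgeAsync(msg);
-});
-
-```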
-
-### Batch receive
-
-Use `batchReceive` to receive multiple messages for each call.
-
-The following is an example.
-
-```java
-
-Messages messages = consumer.batchReceive();
-for (Object message : messages) {
-    // do something
-}
-consumer.acknowledge(messages);
-
-```
-
-:::note
-
-Batch receive policy limits the number and bytes of messages in a single batch. You can specify a timeout to wait for enough messages.
-The batch receive is completed when any of the following conditions is met: enough messages, enough bytes, or the wait timeout expires.
-
-```java
-
-Consumer consumer = client.newConsumer()
-        .topic("my-topic")
-        .subscriptionName("my-subscription")
-        .batchReceivePolicy(BatchReceivePolicy.builder()
-                .maxNumMessages(100)
-                .maxNumBytes(1024 * 1024)
-                .timeout(200, TimeUnit.MILLISECONDS)
-                .build())
-        .subscribe();
-
-```
-
-The default batch receive policy is:
-
-```java
-
-BatchReceivePolicy.builder()
-        .maxNumMessages(-1)
-        .maxNumBytes(10 * 1024 * 1024)
-        .timeout(100, TimeUnit.MILLISECONDS)
-        .build();
-
-```
-
-:::
-
-### Configure chunking
-
-You can limit the maximum number of chunked messages a consumer maintains concurrently by configuring the `maxPendingChunkedMessage` and `autoAckOldestChunkedMessageOnQueueFull` parameters. When the threshold is reached, the consumer drops pending messages by silently acknowledging them or asking the broker to redeliver them later. The `expireTimeOfIncompleteChunkedMessage` parameter decides the time interval to expire incomplete chunks if the consumer fails to receive all chunks of a message within the specified time period.
-
-The following is an example of how to configure message chunking.
-
-```java
-
-Consumer consumer = client.newConsumer()
-        .topic(topic)
-        .subscriptionName("test")
-        .autoAckOldestChunkedMessageOnQueueFull(true)
-        .maxPendingChunkedMessage(100)
-        .expireTimeOfIncompleteChunkedMessage(10, TimeUnit.MINUTES)
-        .subscribe();
-
-```
-
-### Negative acknowledgment redelivery backoff
-
-The `RedeliveryBackoff` interface introduces a redelivery backoff mechanism. You can redeliver messages with different delays depending on the `redeliveryCount` of the messages.
-
-```java
-
-Consumer consumer = client.newConsumer()
-        .topic("my-topic")
-        .subscriptionName("my-subscription")
-        .negativeAckRedeliveryBackoff(MultiplierRedeliveryBackoff.builder()
-                .minDelayMs(1000)
-                .maxDelayMs(60 * 1000)
-                .build())
-        .subscribe();
-
-```
-
-### Acknowledgement timeout redelivery backoff
-
-The `RedeliveryBackoff` interface introduces a redelivery backoff mechanism. You can redeliver messages with different delays depending on the number of times the message has been retried.
-
-```java
-
-Consumer consumer = client.newConsumer()
-        .topic("my-topic")
-        .subscriptionName("my-subscription")
-        .ackTimeout(10, TimeUnit.SECONDS)
-        .ackTimeoutRedeliveryBackoff(MultiplierRedeliveryBackoff.builder()
-                .minDelayMs(1000)
-                .maxDelayMs(60000)
-                .multiplier(2)
-                .build())
-        .subscribe();
-
-```
-
-The message redelivery behavior should be as follows: each delay is the 10-second ack timeout plus the backoff, which starts at `minDelayMs` (1 second), doubles on every retry, and is capped at `maxDelayMs` (60 seconds).
-
-Redelivery count | Redelivery delay
-:--------------------|:-----------
-1 | 10 + 1 seconds
-2 | 10 + 2 seconds
-3 | 10 + 4 seconds
-4 | 10 + 8 seconds
-5 | 10 + 16 seconds
-6 | 10 + 32 seconds
-7 | 10 + 60 seconds
-8 | 10 + 60 seconds
-
-:::note
-
-- The `negativeAckRedeliveryBackoff` does not work with `consumer.negativeAcknowledge(MessageId messageId)` because you are not able to get the redelivery count from the message ID.
-- If a consumer crashes, it triggers the redelivery of unacked messages. In this case, `RedeliveryBackoff` does not take effect and the messages might get redelivered earlier than the delay time from the backoff.
-
-:::
-
-### Multi-topic subscriptions
-
-In addition to subscribing a consumer to a single Pulsar topic, you can also subscribe to multiple topics simultaneously using [multi-topic subscriptions](concepts-messaging.md#multi-topic-subscriptions). To use multi-topic subscriptions, you can supply either a regular expression (regex) or a `List` of topics. If you select topics via regex, all topics must be within the same Pulsar namespace.
-
-The following are some examples.
-
-```java
-
-import org.apache.pulsar.client.api.Consumer;
-import org.apache.pulsar.client.api.PulsarClient;
-
-import java.util.Arrays;
-import java.util.List;
-import java.util.regex.Pattern;
-
-ConsumerBuilder consumerBuilder = pulsarClient.newConsumer()
-        .subscriptionName(subscription);
-
-// Subscribe to all topics in a namespace
-Pattern allTopicsInNamespace = Pattern.compile("public/default/.*");
-Consumer allTopicsConsumer = consumerBuilder
-        .topicsPattern(allTopicsInNamespace)
-        .subscribe();
-
-// Subscribe to a subset of topics in a namespace, based on regex
-Pattern someTopicsInNamespace = Pattern.compile("public/default/foo.*");
-Consumer someTopicsConsumer = consumerBuilder
-        .topicsPattern(someTopicsInNamespace)
-        .subscribe();
-
-```
-
-In the above example, the consumer subscribes to the `persistent` topics that match the topic name pattern. If you want the consumer to subscribe to all `persistent` and `non-persistent` topics that match the topic name pattern, set `subscriptionTopicsMode` to `RegexSubscriptionMode.AllTopics`.
-
-```java
-
-Pattern pattern = Pattern.compile("public/default/.*");
-pulsarClient.newConsumer()
-        .subscriptionName("my-sub")
-        .topicsPattern(pattern)
-        .subscriptionTopicsMode(RegexSubscriptionMode.AllTopics)
-        .subscribe();
-
-```
-
-:::note
-
-By default, the `subscriptionTopicsMode` of the consumer is `PersistentOnly`. Available options of `subscriptionTopicsMode` are `PersistentOnly`, `NonPersistentOnly`, and `AllTopics`.
-
-:::
-
-You can also subscribe to an explicit list of topics (across namespaces if you wish):
-
-```java
-
-List<String> topics = Arrays.asList(
-        "topic-1",
-        "topic-2",
-        "topic-3"
-);
-
-Consumer multiTopicConsumer = consumerBuilder
-        .topics(topics)
-        .subscribe();
-
-// Alternatively:
-Consumer multiTopicConsumer = consumerBuilder
-        .topic(
-                "topic-1",
-                "topic-2",
-                "topic-3"
-        )
-        .subscribe();
-
-```
-
-You can also subscribe to multiple topics asynchronously using the `subscribeAsync` method rather than the synchronous `subscribe` method. The following is an example.
-
-```java
-
-Pattern allTopicsInNamespace = Pattern.compile("persistent://public/default.*");
-consumerBuilder
-        .topicsPattern(allTopicsInNamespace)
-        .subscribeAsync()
-        .thenAccept(this::receiveMessageFromConsumer);
-
-private void receiveMessageFromConsumer(Object consumer) {
-    ((Consumer) consumer).receiveAsync().thenAccept(message -> {
-        // Do something with the received message
-        receiveMessageFromConsumer(consumer);
-    });
-}
-
-```
-
-### Subscription types
-
-Pulsar has various [subscription types](concepts-messaging.md#subscription-types) to match different scenarios. A topic can have multiple subscriptions with different subscription types. However, a subscription can only have one subscription type at a time.
-
-A subscription is identified by its subscription name; a subscription name can specify only one subscription type at a time. To change the subscription type, you should first stop all consumers of this subscription.
-
-Different subscription types have different message distribution modes. This section describes the differences between subscription types and how to use them.

### Subscription types

Pulsar has various [subscription types](concepts-messaging#subscription-types) to match different scenarios. A topic can have multiple subscriptions with different subscription types. However, a subscription can only have one subscription type at a time.

A subscription is identified by its subscription name, and a subscription name can specify only one subscription type at a time. To change the subscription type, you should first stop all consumers of this subscription.

Different subscription types have different message distribution modes. This section describes the differences between subscription types and how to use them.

To better describe their differences, assume you have a topic named "my-topic" and a producer that has published 10 messages.

```java

Producer<String> producer = client.newProducer(Schema.STRING)
    .topic("my-topic")
    .enableBatching(false)
    .create();
// 3 messages with "key-1", 3 messages with "key-2", 2 messages with "key-3" and 2 messages with "key-4"
producer.newMessage().key("key-1").value("message-1-1").send();
producer.newMessage().key("key-1").value("message-1-2").send();
producer.newMessage().key("key-1").value("message-1-3").send();
producer.newMessage().key("key-2").value("message-2-1").send();
producer.newMessage().key("key-2").value("message-2-2").send();
producer.newMessage().key("key-2").value("message-2-3").send();
producer.newMessage().key("key-3").value("message-3-1").send();
producer.newMessage().key("key-3").value("message-3-2").send();
producer.newMessage().key("key-4").value("message-4-1").send();
producer.newMessage().key("key-4").value("message-4-2").send();

```

#### Exclusive

Create a new consumer and subscribe with the `Exclusive` subscription type.

```java

Consumer consumer = client.newConsumer()
    .topic("my-topic")
    .subscriptionName("my-subscription")
    .subscriptionType(SubscriptionType.Exclusive)
    .subscribe();

```

Only the first consumer is allowed to attach to the subscription; other consumers receive an error. The first consumer receives all 10 messages, and the consuming order is the same as the producing order.

:::note

If the topic is a partitioned topic, the first consumer subscribes to all of its partitions; other consumers are not assigned partitions and receive an error.

:::

#### Failover

Create new consumers and subscribe with the `Failover` subscription type.

```java

Consumer consumer1 = client.newConsumer()
    .topic("my-topic")
    .subscriptionName("my-subscription")
    .subscriptionType(SubscriptionType.Failover)
    .subscribe();
Consumer consumer2 = client.newConsumer()
    .topic("my-topic")
    .subscriptionName("my-subscription")
    .subscriptionType(SubscriptionType.Failover)
    .subscribe();
// consumer1 is the active consumer, consumer2 is the standby consumer.
// consumer1 receives 5 messages and then crashes, consumer2 takes over as the active consumer.

```

Multiple consumers can attach to the same subscription, yet only the first consumer is active and the others are standby. When the active consumer is disconnected, messages are dispatched to one of the standby consumers, which then becomes the active consumer.

If the first active consumer is disconnected after receiving 5 messages, the standby consumer takes over. consumer1 receives:

```

("key-1", "message-1-1")
("key-1", "message-1-2")
("key-1", "message-1-3")
("key-2", "message-2-1")
("key-2", "message-2-2")

```

consumer2 receives:

```

("key-2", "message-2-3")
("key-3", "message-3-1")
("key-3", "message-3-2")
("key-4", "message-4-1")
("key-4", "message-4-2")

```

:::note

If a topic is a partitioned topic, each partition has only one active consumer: messages of one partition are distributed to only one consumer, and messages of multiple partitions are distributed to multiple consumers.

:::

#### Shared

Create new consumers and subscribe with the `Shared` subscription type.

```java

Consumer consumer1 = client.newConsumer()
    .topic("my-topic")
    .subscriptionName("my-subscription")
    .subscriptionType(SubscriptionType.Shared)
    .subscribe();

Consumer consumer2 = client.newConsumer()
    .topic("my-topic")
    .subscriptionName("my-subscription")
    .subscriptionType(SubscriptionType.Shared)
    .subscribe();
// Both consumer1 and consumer2 are active consumers.

```

In the `Shared` subscription type, multiple consumers can attach to the same subscription, and messages are delivered in a round-robin distribution across consumers.

If the broker dispatches only one message at a time, consumer1 receives the following messages.

```

("key-1", "message-1-1")
("key-1", "message-1-3")
("key-2", "message-2-2")
("key-3", "message-3-1")
("key-4", "message-4-1")

```

consumer2 receives the following messages.

```

("key-1", "message-1-2")
("key-2", "message-2-1")
("key-2", "message-2-3")
("key-3", "message-3-2")
("key-4", "message-4-2")

```

The `Shared` subscription type is different from the `Exclusive` and `Failover` subscription types: it provides better flexibility, but cannot provide an ordering guarantee.

#### Key_shared

The `Key_Shared` subscription type is available since the 2.4.0 release. Create new consumers and subscribe with the `Key_Shared` subscription type.

```java

Consumer consumer1 = client.newConsumer()
    .topic("my-topic")
    .subscriptionName("my-subscription")
    .subscriptionType(SubscriptionType.Key_Shared)
    .subscribe();

Consumer consumer2 = client.newConsumer()
    .topic("my-topic")
    .subscriptionName("my-subscription")
    .subscriptionType(SubscriptionType.Key_Shared)
    .subscribe();
// Both consumer1 and consumer2 are active consumers.

```

Just like in the `Shared` subscription type, all consumers in the `Key_Shared` subscription type can attach to the same subscription. But `Key_Shared` is different: messages with the same key are delivered to only one consumer, in order. By default, you cannot know in advance which keys are assigned to which consumer, but a given key is assigned to only one consumer at a time. A possible distribution of the messages between the consumers follows.

consumer1 receives the following messages.

```

("key-1", "message-1-1")
("key-1", "message-1-2")
("key-1", "message-1-3")
("key-3", "message-3-1")
("key-3", "message-3-2")

```

consumer2 receives the following messages.

```

("key-2", "message-2-1")
("key-2", "message-2-2")
("key-2", "message-2-3")
("key-4", "message-4-1")
("key-4", "message-4-2")

```

If batching is enabled at the producer side, messages with different keys are added to the same batch by default. The broker dispatches the batch as a whole to a consumer, so the default batching mechanism may break the message distribution semantics that the `Key_Shared` subscription type guarantees. The producer needs to use key-based batching.

```java

Producer producer = client.newProducer()
    .topic("my-topic")
    .batcherBuilder(BatcherBuilder.KEY_BASED)
    .create();

```

Alternatively, the producer can disable batching.

```java

Producer producer = client.newProducer()
    .topic("my-topic")
    .enableBatching(false)
    .create();

```

:::note

If the message key is not specified, messages without a key are dispatched to one consumer in order by default.

:::
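
By default, key hash ranges are assigned to `Key_Shared` consumers automatically (auto-split). If you need deterministic assignment, the client also exposes a sticky hash range policy through `KeySharedPolicy`. The following is a minimal sketch under that assumption; the range bounds are arbitrary, must not overlap between consumers of the same subscription, and must stay within the total hash space of 0 to 65535.

```java

import org.apache.pulsar.client.api.KeySharedPolicy;
import org.apache.pulsar.client.api.Range;

Consumer consumer = client.newConsumer()
    .topic("my-topic")
    .subscriptionName("my-subscription")
    .subscriptionType(SubscriptionType.Key_Shared)
    // This consumer only receives messages whose key hash falls in [0, 32767].
    .keySharedPolicy(KeySharedPolicy.stickyHashRange().ranges(Range.of(0, 32767)))
    .subscribe();

```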
- -::: - -## Reader - -With the [reader interface](concepts-clients.md#reader-interface), Pulsar clients can "manually position" themselves within a topic and reading all messages from a specified message onward. The Pulsar API for Java enables you to create {@inject: javadoc:Reader:/client/org/apache/pulsar/client/api/Reader} objects by specifying a topic and a {@inject: javadoc:MessageId:/client/org/apache/pulsar/client/api/MessageId}. - -The following is an example. - -```java - -byte[] msgIdBytes = // Some message ID byte array -MessageId id = MessageId.fromByteArray(msgIdBytes); -Reader reader = pulsarClient.newReader() - .topic(topic) - .startMessageId(id) - .create(); - -while (true) { - Message message = reader.readNext(); - // Process message -} - -``` - -In the example above, a `Reader` object is instantiated for a specific topic and message (by ID); the reader iterates over each message in the topic after the message is identified by `msgIdBytes` (how that value is obtained depends on the application). - -The code sample above shows pointing the `Reader` object to a specific message (by ID), but you can also use `MessageId.earliest` to point to the earliest available message on the topic of `MessageId.latest` to point to the most recent available message. - -### Configure reader -When you create a reader, you can use the `loadConf` configuration. The following parameters are available in `loadConf`. - -| Name | Type|
### Configure reader

When you create a reader, you can use the `loadConf` configuration. The following parameters are available in `loadConf`.

| Name | Type | Description | Default |
|---|---|---|---|
| `topicName` | String | Topic name. | None |
| `receiverQueueSize` | int | Size of a consumer's receiver queue, that is, the number of messages that can be accumulated by the consumer before the application calls `receive`.
A value higher than the default value increases consumer throughput, though at the expense of more memory utilization. | 1000 |
| `readerListener` | ReaderListener<T> | A listener that is called for each message received. | None |
| `readerName` | String | Reader name. | null |
| `subscriptionName` | String | Subscription name. | When there is a single topic, the default subscription name is `"reader-" + 10-digit UUID`.
When there are multiple topics, the default subscription name is `"multiTopicsReader-" + 10-digit UUID`. |
| `subscriptionRolePrefix` | String | Prefix of the subscription role. | null |
| `cryptoKeyReader` | CryptoKeyReader | Interface that abstracts the access to a key store. | null |
| `cryptoFailureAction` | ConsumerCryptoFailureAction | The action the consumer takes when it receives a message that cannot be decrypted.
**FAIL**: the default option; fail messages until crypto succeeds.
**DISCARD**: silently acknowledge the message and do not deliver it to the application.
**CONSUME**: deliver encrypted messages to the application. It is the application's responsibility to decrypt the message. Note that if decompression of the message fails, or the message contains batched messages, the client is not able to retrieve individual messages from the batch.
A delivered encrypted message contains an {@link EncryptionContext} with the encryption and compression information, using which the application can decrypt the consumed message payload. | ConsumerCryptoFailureAction.FAIL |
| `readCompacted` | boolean | If `readCompacted` is enabled, the consumer reads messages from a compacted topic rather than the full message backlog of the topic.
The consumer only sees the latest value for each key in the compacted topic, up to the point in the backlog that has been compacted; beyond that point, messages are delivered as normal.
`readCompacted` can only be enabled on subscriptions to persistent topics that have a single active consumer (for example, failover or exclusive subscriptions).
Attempting to enable it on subscriptions to non-persistent topics or on shared subscriptions leads to the subscription call throwing a `PulsarClientException`. | false |
| `resetIncludeHead` | boolean | If set to true, the first message to be returned is the one specified by `messageId`.
If set to false, the first message to be returned is the one next to the message specified by `messageId`. | false |
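
These parameters can be set either through the builder's dedicated methods or bulk-loaded as a map via the builder's `loadConf` method. A minimal sketch follows; the parameter values are illustrative only.

```java

import java.util.HashMap;
import java.util.Map;

Map<String, Object> readerConf = new HashMap<>();
readerConf.put("topicName", "my-topic");
readerConf.put("receiverQueueSize", 2000);
readerConf.put("readerName", "my-reader");

Reader<byte[]> reader = pulsarClient.newReader()
    .loadConf(readerConf) // bulk-load the settings above
    .startMessageId(MessageId.earliest)
    .create();

```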
### Sticky key range reader

In a sticky key range reader, the broker only dispatches messages whose key hash falls within the specified key hash ranges. Multiple key hash ranges can be specified on a reader.

The following is an example of how to create a sticky key range reader.

```java

pulsarClient.newReader()
    .topic(topic)
    .startMessageId(MessageId.earliest)
    .keyHashRange(Range.of(0, 10000), Range.of(20001, 30000))
    .create();

```

The total hash range size is 65536, so the maximum end of a range must be less than or equal to 65535.


## TableView

The TableView interface serves an encapsulated access pattern, providing a continuously updated key-value map view of the compacted topic data. Messages without keys are ignored.

With TableView, Pulsar clients can fetch all the message updates from a topic and construct a map with the latest values of each key. These values can then be used to build a local cache of data. In addition, you can register consumers with the TableView by specifying a listener to perform a scan of the map and then receive notifications when new messages are received. Consequently, event handling can be triggered to serve use cases, such as event-driven applications and message monitoring.

> **Note:** Each TableView uses one Reader instance per partition, and reads the topic starting from the compacted view by default. It is highly recommended to enable automatic compaction by [configuring the topic compaction policies](cookbooks-compaction.md#configuring-compaction-to-run-automatically) for the given topic or namespace. More frequent compaction results in shorter startup times because less data is replayed to reconstruct the TableView of the topic.

The following figure illustrates the dynamic construction of a TableView updated with newer values of each key.
![TableView](/assets/tableview.png)

### Configure TableView

The following is an example of how to configure a TableView.

```java

TableView tv = client.newTableViewBuilder(Schema.STRING)
    .topic("my-tableview")
    .create();

```

You can use the available parameters in the `loadConf` configuration or related [API](/api/client/2.10.0-SNAPSHOT/org/apache/pulsar/client/api/TableViewBuilder.html) to customize your TableView.

| Name | Type | Required? | Description | Default |
|---|---|---|---|---|
| `topic` | string | yes | The topic name of the TableView. | N/A |
| `autoUpdatePartitionInterval` | int | no | The interval to check for newly added partitions. | 60 (seconds) |

### Register listeners

You can register listeners for both existing messages on a topic and new messages coming into the topic by using `forEachAndListen`, and specify to perform operations for all existing messages by using `forEach`.

The following is an example of how to register listeners with TableView.

```java

// Register listeners for all existing and incoming messages
tv.forEachAndListen((key, value) -> /*operations on all existing and incoming messages*/)

// Register action for all existing messages
tv.forEach((key, value) -> /*operations on all existing messages*/)

```
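
Besides listeners, you can query the materialized view directly: `TableView` exposes map-style accessors such as `get`, `size`, and `keySet`. A minimal sketch, assuming a `TableView<String>` like the one created above:

```java

TableView<String> tv = client.newTableViewBuilder(Schema.STRING)
    .topic("my-tableview")
    .create();

// Look up the latest value for a single key (null if the key is absent).
String latest = tv.get("my-key");

// Inspect the current contents of the view.
System.out.println("Entries in view: " + tv.size());
for (String key : tv.keySet()) {
    System.out.println(key + " -> " + tv.get(key));
}

```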
## Schema

In Pulsar, all message data consists of byte arrays "under the hood." [Message schemas](schema-get-started.md) enable you to use other types of data when constructing and handling messages (from simple types like strings to more complex, application-specific types). If you construct, say, a [producer](#producer) without specifying a schema, then the producer can only produce messages of type `byte[]`. The following is an example.

```java

Producer producer = client.newProducer()
    .topic(topic)
    .create();

```

The producer above is equivalent to a `Producer<byte[]>` (in fact, you should *always* explicitly specify the type). If you'd like to use a producer for a different type of data, you'll need to specify a **schema** that informs Pulsar which data type will be transmitted over the [topic](reference-terminology.md#topic).

### AvroBaseStructSchema example

Let's say that you have a `SensorReading` class that you'd like to transmit over a Pulsar topic:

```java

public class SensorReading {
    public float temperature;

    public SensorReading(float temperature) {
        this.temperature = temperature;
    }

    // A no-arg constructor is required
    public SensorReading() {
    }

    public float getTemperature() {
        return temperature;
    }

    public void setTemperature(float temperature) {
        this.temperature = temperature;
    }
}

```

You could then create a `Producer<SensorReading>` (or `Consumer<SensorReading>`) like this:

```java

Producer<SensorReading> producer = client.newProducer(JSONSchema.of(SensorReading.class))
    .topic("sensor-readings")
    .create();

```

The following schema formats are currently available for Java:

* No schema or the byte array schema (which can be applied using `Schema.BYTES`):

  ```java

  Producer<byte[]> bytesProducer = client.newProducer(Schema.BYTES)
        .topic("some-raw-bytes-topic")
        .create();

  ```

  Or, equivalently:

  ```java

  Producer<byte[]> bytesProducer = client.newProducer()
        .topic("some-raw-bytes-topic")
        .create();

  ```

* `String` for normal UTF-8-encoded string data. Apply the schema using `Schema.STRING`:

  ```java

  Producer<String> stringProducer = client.newProducer(Schema.STRING)
        .topic("some-string-topic")
        .create();

  ```

* Create JSON schemas for POJOs using `Schema.JSON`. The following is an example.

  ```java

  Producer<MyPojo> pojoProducer = client.newProducer(Schema.JSON(MyPojo.class))
        .topic("some-pojo-topic")
        .create();

  ```

* Generate Protobuf schemas using `Schema.PROTOBUF`.
  The following example shows how to create the Protobuf schema and use it to instantiate a new producer:

  ```java

  Producer<MyProtobuf> protobufProducer = client.newProducer(Schema.PROTOBUF(MyProtobuf.class))
        .topic("some-protobuf-topic")
        .create();

  ```

* Define Avro schemas with `Schema.AVRO`. The following code snippet demonstrates how to create and use an Avro schema.

  ```java

  Producer<MyAvro> avroProducer = client.newProducer(Schema.AVRO(MyAvro.class))
        .topic("some-avro-topic")
        .create();

  ```

### ProtobufNativeSchema example

For an example of `ProtobufNativeSchema`, see [`SchemaDefinition` in `Complex type`](schema-understand.md#complex-type).

## Authentication

Pulsar currently supports three authentication schemes: [TLS](security-tls-authentication.md), [Athenz](security-athenz.md), and [Oauth2](security-oauth2.md). You can use the Pulsar Java client with all of them.

### TLS Authentication

To use [TLS](security-tls-authentication.md), point your Pulsar client at a `pulsar+ssl://` service URL, provide the path to a trusted CA certificate, and supply the paths to your client certificate and key files.

The following is an example.

```java

Map<String, String> authParams = new HashMap<>();
authParams.put("tlsCertFile", "/path/to/client-cert.pem");
authParams.put("tlsKeyFile", "/path/to/client-key.pem");

Authentication tlsAuth = AuthenticationFactory
    .create(AuthenticationTls.class.getName(), authParams);

PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar+ssl://my-broker.com:6651")
    .tlsTrustCertsFilePath("/path/to/cacert.pem")
    .authentication(tlsAuth)
    .build();

```

### Athenz

To use [Athenz](security-athenz.md) as an authentication provider, you need to [use TLS](#tls-authentication) and provide values for four parameters in a hash:

* `tenantDomain`
* `tenantService`
* `providerDomain`
* `privateKey`

You can also set an optional `keyId`. The following is an example.

```java

Map<String, String> authParams = new HashMap<>();
authParams.put("tenantDomain", "shopping"); // Tenant domain name
authParams.put("tenantService", "some_app"); // Tenant service name
authParams.put("providerDomain", "pulsar"); // Provider domain name
authParams.put("privateKey", "file:///path/to/private.pem"); // Tenant private key path
authParams.put("keyId", "v1"); // Key id for the tenant private key (optional, default: "0")

Authentication athenzAuth = AuthenticationFactory
    .create(AuthenticationAthenz.class.getName(), authParams);

PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar+ssl://my-broker.com:6651")
    .tlsTrustCertsFilePath("/path/to/cacert.pem")
    .authentication(athenzAuth)
    .build();

```

> #### Supported pattern formats
> The `privateKey` parameter supports the following three pattern formats:
> * `file:///path/to/file`
> * `file:/path/to/file`
> * `data:application/x-pem-file;base64,`

### Oauth2

The following example shows how to use [Oauth2](security-oauth2.md) as an authentication provider for the Pulsar Java client.

You can use the factory method to configure authentication for the Pulsar Java client.

```java

PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar://broker.example.com:6650/")
    .authentication(
        AuthenticationFactoryOAuth2.clientCredentials(this.issuerUrl, this.credentialsUrl, this.audience))
    .build();

```

In addition, you can also use the encoded parameters to configure authentication for the Pulsar Java client.

```java

Authentication auth = AuthenticationFactory
    .create(AuthenticationOAuth2.class.getName(),
        "{\"type\":\"client_credentials\",\"privateKey\":\"...\",\"issuerUrl\":\"...\",\"audience\":\"...\"}");
PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar://broker.example.com:6650/")
    .authentication(auth)
    .build();

```

diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/client-libraries-node.md b/site2/website/versioned_docs/version-2.10.1-deprecated/client-libraries-node.md
deleted file mode 100644
index a023b51d8ceb09..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/client-libraries-node.md
+++ /dev/null
@@ -1,652 +0,0 @@
---
id: client-libraries-node
title: The Pulsar Node.js client
sidebar_label: "Node.js"
original_id: client-libraries-node
---

The Pulsar Node.js client can be used to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Node.js.

All the methods in [producers](#producers), [consumers](#consumers), and [readers](#readers) of a Node.js client are thread-safe.

For 1.3.0 or later versions, [type definitions](https://github.com/apache/pulsar-client-node/blob/master/index.d.ts) used in TypeScript are available.

## Installation

You can install the [`pulsar-client`](https://www.npmjs.com/package/pulsar-client) library via [npm](https://www.npmjs.com/).

### Requirements
The Pulsar Node.js client library is based on the C++ client library.
Follow [these instructions](client-libraries-cpp.md#compilation) to install the Pulsar C++ client library.

### Compatibility

Compatibility between each version of the Node.js client and the C++ client is as follows:

| Node.js client | C++ client |
| :------------- | :------------- |
| 1.0.0 | 2.3.0 or later |
| 1.1.0 | 2.4.0 or later |
| 1.2.0 | 2.5.0 or later |

If an incompatible version of the C++ client is installed, you may fail to build or run this library.

### Installation using npm

Install the `pulsar-client` library via [npm](https://www.npmjs.com/):

```shell

$ npm install pulsar-client

```

:::note

This library works only in Node.js 10.x or later because it uses the [`node-addon-api`](https://github.com/nodejs/node-addon-api) module to wrap the C++ library.

:::

## Connection URLs
To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL.

Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme, and have a default port of 6650. Here is an example for `localhost`:

```http

pulsar://localhost:6650

```

A URL for a production Pulsar cluster may look something like this:

```http

pulsar://pulsar.us-west.example.com:6650

```

If you are using [TLS encryption](security-tls-transport.md) or [TLS Authentication](security-tls-authentication.md), the URL looks like this:

```http

pulsar+ssl://pulsar.us-west.example.com:6651

```

## Create a client

To interact with Pulsar, you first need a client object. You can create a client instance using the `new` operator and the `Client` method, passing in a client options object (more on configuration [below](#client-configuration)).

Here is an example:

```JavaScript

const Pulsar = require('pulsar-client');

(async () => {
  const client = new Pulsar.Client({
    serviceUrl: 'pulsar://localhost:6650',
  });

  await client.close();
})();

```

### Client configuration

The following configurable parameters are available for Pulsar clients:

| Parameter | Description | Default |
| :-------- | :---------- | :------ |
| `serviceUrl` | The connection URL for the Pulsar cluster. See [above](#connection-urls) for more info. | |
| `authentication` | Configure the authentication provider (default: no authentication). See [TLS Authentication](security-tls-authentication.md) for more info. | |
| `operationTimeoutSeconds` | The timeout for Node.js client operations (creating producers, subscribing to and unsubscribing from [topics](reference-terminology.md#topic)). Retries occur until this threshold is reached, at which point the operation fails. | 30 |
| `ioThreads` | The number of threads to use for handling connections to Pulsar [brokers](reference-terminology.md#broker). | 1 |
| `messageListenerThreads` | The number of threads used by message listeners ([consumers](#consumers) and [readers](#readers)). | 1 |
| `concurrentLookupRequest` | The number of concurrent lookup requests that can be sent on each broker connection. Setting a maximum helps to keep brokers from being overloaded. You should set values over the default of 50000 only if the client needs to produce and/or subscribe to thousands of Pulsar topics. | 50000 |
| `tlsTrustCertsFilePath` | The file path for the trusted TLS certificate. | |
| `tlsValidateHostname` | Whether to enable TLS hostname verification. | `false` |
| `tlsAllowInsecureConnection` | Whether the Pulsar client accepts an untrusted TLS certificate from the broker. | `false` |
| `statsIntervalInSeconds` | The interval between stats updates. Stats collection is activated with a positive value; the interval should be at least 1 second. | 600 |
| `log` | A function that is used for logging. | `console.log` |

## Producers

Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Node.js producers using a producer configuration object.

Here is an example:

```JavaScript

const producer = await client.createProducer({
  topic: 'my-topic', // or 'my-tenant/my-namespace/my-topic' to specify topic's tenant and namespace
});

await producer.send({
  data: Buffer.from("Hello, Pulsar"),
});

await producer.close();

```

> #### Promise operation
> When you create a new Pulsar producer, the operation returns a `Promise` object, and you get the producer instance or an error through its executor function.
> This example uses the `await` operator instead of an executor function.

### Producer operations

Pulsar Node.js producers have the following methods available:

| Method | Description | Return type |
| :----- | :---------- | :---------- |
| `send(Object)` | Publishes a [message](#messages) to the producer's topic. When the message is successfully acknowledged by the Pulsar broker, or an error is thrown, the Promise, whose result is the message ID, runs its executor function. | `Promise` |
| `flush()` | Sends messages from the send queue to the Pulsar broker. When the messages are successfully acknowledged by the Pulsar broker, or an error is thrown, the Promise runs its executor function. | `Promise` |
| `close()` | Closes the producer and releases all resources allocated to it. Once `close()` is called, no more messages are accepted from the publisher. This method returns a Promise that runs its executor function when all pending publish requests are persisted by Pulsar. If an error is thrown, no pending writes are retried. | `Promise` |
| `getProducerName()` | Getter method of the producer name. | `string` |
| `getTopic()` | Getter method of the name of the topic. | `string` |

### Producer configuration

| Parameter | Description | Default |
| :-------- | :---------- | :------ |
| `topic` | The Pulsar [topic](reference-terminology.md#topic) to which the producer publishes messages. The topic format is `<topic-name>` or `<tenant>/<namespace>/<topic-name>`. For example, `sample/ns1/my-topic`. | |
| `producerName` | A name for the producer. If you do not explicitly assign a name, Pulsar automatically generates a globally unique name. If you choose to explicitly assign a name, it needs to be unique across *all* Pulsar clusters, otherwise the creation operation throws an error. | |
| `sendTimeoutMs` | When publishing a message to a topic, the producer waits for an acknowledgment from the responsible Pulsar [broker](reference-terminology.md#broker). If a message is not acknowledged within the threshold set by this parameter, an error is thrown. If you set `sendTimeoutMs` to -1, the timeout is set to infinity (and thus removed). Removing the send timeout is recommended when using Pulsar's [message de-duplication](cookbooks-deduplication.md) feature. | 30000 |
| `initialSequenceId` | The initial sequence ID of the message. The producer attaches a sequence ID to each message it sends, and the ID increases with every send. | |
| `maxPendingMessages` | The maximum size of the queue holding pending messages (i.e. messages waiting to receive an acknowledgment from the [broker](reference-terminology.md#broker)). By default, when the queue is full, all calls to the `send` method fail *unless* `blockIfQueueFull` is set to `true`. | 1000 |
| `maxPendingMessagesAcrossPartitions` | The maximum size of the sum of the partitions' pending queues. | 50000 |
| `blockIfQueueFull` | If set to `true`, the producer's `send` method waits when the outgoing message queue is full rather than failing and throwing an error (the size of that queue is dictated by the `maxPendingMessages` parameter); if set to `false` (the default), `send` operations fail and throw an error when the queue is full. | `false` |
| `messageRoutingMode` | The message routing logic (for producers on [partitioned topics](concepts-messaging.md#partitioned-topics)). This logic is applied only when no key is set on messages. The available options are: round robin (`RoundRobinDistribution`), or publishing all messages to a single partition (`UseSinglePartition`, the default). | `UseSinglePartition` |
| `hashingScheme` | The hashing function that determines the partition on which a particular message is published (partitioned topics only). The available options are: `JavaStringHash` (the equivalent of `String.hashCode()` in Java), `Murmur3_32Hash` (applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function), or `BoostHash` (applies the hashing function from C++'s [Boost](https://www.boost.org/doc/libs/1_62_0/doc/html/hash.html) library). | `BoostHash` |
| `compressionType` | The message data compression type used by the producer. The available options are [`LZ4`](https://github.com/lz4/lz4), [`Zlib`](https://zlib.net/), [`ZSTD`](https://github.com/facebook/zstd/), and [`SNAPPY`](https://github.com/google/snappy/). | Compression None |
| `batchingEnabled` | If set to `true`, the producer sends messages in batches. | `true` |
| `batchingMaxPublishDelayMs` | The maximum delay, in milliseconds, before a batch is sent. | 10 |
| `batchingMaxMessages` | The maximum number of messages in a batch. | 1000 |
| `properties` | The metadata of the producer. | |

### Producer example

This example creates a Node.js producer for the `my-topic` topic and sends 10 messages to that topic:

```JavaScript

const Pulsar = require('pulsar-client');

(async () => {
  // Create a client
  const client = new Pulsar.Client({
    serviceUrl: 'pulsar://localhost:6650',
  });

  // Create a producer
  const producer = await client.createProducer({
    topic: 'my-topic',
  });

  // Send messages
  for (let i = 0; i < 10; i += 1) {
    const msg = `my-message-${i}`;
    producer.send({
      data: Buffer.from(msg),
    });
    console.log(`Sent message: ${msg}`);
  }
  await producer.flush();

  await producer.close();
  await client.close();
})();

```

## Consumers

Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on those topics. You can [configure](#consumer-configuration) Node.js consumers using a consumer configuration object.

Here is an example:

```JavaScript

const consumer = await client.subscribe({
  topic: 'my-topic',
  subscription: 'my-subscription',
});

const msg = await consumer.receive();
console.log(msg.getData().toString());
consumer.acknowledge(msg);

await consumer.close();

```

> #### Promise operation
> When you create a new Pulsar consumer, the operation returns a `Promise` object, and you get the consumer instance or an error through its executor function.
> This example uses the `await` operator instead of an executor function.

### Consumer operations

Pulsar Node.js consumers have the following methods available:

| Method | Description | Return type |
| :----- | :---------- | :---------- |
| `receive()` | Receives a single message from the topic. When the message is available, the Promise resolves with the message object. | `Promise` |
| `receive(Number)` | Receives a single message from the topic, with a specific timeout in milliseconds. | `Promise` |
| `acknowledge(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message object. | `void` |
| `acknowledgeId(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID object. | `void` |
| `acknowledgeCumulative(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message. The `acknowledgeCumulative` method returns void and sends the ack to the broker asynchronously. After that, the messages are *not* redelivered to the consumer. Cumulative acking cannot be used with a [shared](concepts-messaging.md#shared) subscription type. | `void` |
| `acknowledgeCumulativeId(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message ID. | `void` |
| `negativeAcknowledge(Message)` | [Negatively acknowledges](reference-terminology.md#negative-acknowledgment-nack) a message to the Pulsar broker by message object. | `void` |
| `negativeAcknowledgeId(MessageId)` | [Negatively acknowledges](reference-terminology.md#negative-acknowledgment-nack) a message to the Pulsar broker by message ID object. | `void` |
| `close()` | Closes the consumer, disabling its ability to receive messages from the broker. | `Promise` |
| `unsubscribe()` | Unsubscribes the subscription. | `Promise` |

### Consumer configuration

| Parameter | Description | Default |
| :-------- | :---------- | :------ |
| `topic` | The Pulsar topic on which the consumer establishes a subscription and listens for messages. | |
| `topics` | The array of topics. | |
| `topicsPattern` | The regular expression for topics. | |
| `subscription` | The subscription name for this consumer. | |
| `subscriptionType` | Available options are `Exclusive`, `Shared`, `Key_Shared`, and `Failover`. | `Exclusive` |
| `subscriptionInitialPosition` | The initial position of the cursor when subscribing to a topic for the first time. | `SubscriptionInitialPosition.Latest` |
| `ackTimeoutMs` | The acknowledgement timeout in milliseconds. | 0 |
| `nAckRedeliverTimeoutMs` | The delay to wait before redelivering messages that failed to be processed. | 60000 |
| `receiverQueueSize` | Sets the size of the consumer's receiver queue, i.e. the number of messages that can be accumulated by the consumer before the application calls `receive`. A value higher than the default of 1000 could increase consumer throughput, though at the expense of more memory utilization. | 1000 |
| `receiverQueueSizeAcrossPartitions` | Sets the maximum total receiver queue size across partitions. This setting is used to reduce the receiver queue size for individual partitions if the total exceeds this value. | 50000 |
| `consumerName` | The name of the consumer. Currently (v2.4.1), [failover](concepts-messaging.md#failover) mode uses the consumer name for ordering. | |
| `properties` | The metadata of the consumer. | |
| `listener` | A listener that is called for each message received. | |
| `readCompacted` | If `readCompacted` is enabled, the consumer reads messages from a compacted topic rather than reading the full message backlog of the topic.
The consumer only sees the latest value for each key in the compacted topic, up to the point in the backlog that has been compacted; beyond that point, messages are delivered as normal.
`readCompacted` can only be enabled on subscriptions to persistent topics that have a single active consumer (such as failover or exclusive subscriptions).
    Attempting to enable it on subscriptions to non-persistent topics or on shared subscriptions leads to a subscription call throwing a `PulsarClientException`. | false | - -### Consumer example - -This example creates a Node.js consumer with the `my-subscription` subscription on the `my-topic` topic, receives messages, prints the content that arrive, and acknowledges each message to the Pulsar broker for 10 times: - -```JavaScript - -const Pulsar = require('pulsar-client'); - -(async () => { - // Create a client - const client = new Pulsar.Client({ - serviceUrl: 'pulsar://localhost:6650', - }); - - // Create a consumer - const consumer = await client.subscribe({ - topic: 'my-topic', - subscription: 'my-subscription', - subscriptionType: 'Exclusive', - }); - - // Receive messages - for (let i = 0; i < 10; i += 1) { - const msg = await consumer.receive(); - console.log(msg.getData().toString()); - consumer.acknowledge(msg); - } - - await consumer.close(); - await client.close(); -})(); - -``` - -Instead a consumer can be created with `listener` to process messages. - -```JavaScript - -// Create a consumer -const consumer = await client.subscribe({ - topic: 'my-topic', - subscription: 'my-subscription', - subscriptionType: 'Exclusive', - listener: (msg, msgConsumer) => { - console.log(msg.getData().toString()); - msgConsumer.acknowledge(msg); - }, -}); - -``` - -:::note - -Pulsar Node.js client uses [AsyncWorker](https://github.com/nodejs/node-addon-api/blob/main/doc/async_worker). Asynchronous operations such as creating consumers/producers and receiving/sending messages are performed in worker threads. -Until completion of these operations, worker threads are blocked. -Since there are only 4 worker threads by default, a called method may never complete. -To avoid this situation, you can set `UV_THREADPOOL_SIZE` to increase the number of worker threads, or define `listener` instead of calling `receive()` many times. - -::: - -## Readers - -Pulsar readers process messages from Pulsar topics. Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recently unacked message). You can [configure](#reader-configuration) Node.js readers using a reader configuration object. - -Here is an example: - -```JavaScript - -const reader = await client.createReader({ - topic: 'my-topic', - startMessageId: Pulsar.MessageId.earliest(), -}); - -const msg = await reader.readNext(); -console.log(msg.getData().toString()); - -await reader.close(); - -``` - -### Reader operations - -Pulsar Node.js readers have the following methods available: - -| Method | Description | Return type | -| :----- | :---------- | :---------- | -| `readNext()` | Receives the next message on the topic (analogous to the `receive` method for [consumers](#consumer-operations)). When the message is available, the Promise object run executor function and get message object. | `Promise` | -| `readNext(Number)` | Receives a single message from the topic with specific timeout in milliseconds. | `Promise` | -| `hasNext()` | Return whether the broker has next message in target topic. | `Boolean` | -| `close()` | Closes the reader, disabling its ability to receive messages from the broker. 
| `Promise` | - -### Reader configuration - -| Parameter | Description | Default | -| :-------- | :---------- | :------ | -| `topic` | The Pulsar [topic](reference-terminology.md#topic) on which the reader establishes a subscription and listen for messages. | | -| `startMessageId` | The initial reader position, i.e. the message at which the reader begins processing messages. The options are `Pulsar.MessageId.earliest` (the earliest available message on the topic), `Pulsar.MessageId.latest` (the latest available message on the topic), or a message ID object for a position that is not earliest or latest. | | -| `receiverQueueSize` | Sets the size of the reader's receiver queue, i.e. the number of messages that can be accumulated by the reader before the application calls `readNext`. A value higher than the default of 1000 could increase reader throughput, though at the expense of more memory utilization. | 1000 | -| `readerName` | The name of the reader. | | -| `subscriptionRolePrefix` | The subscription role prefix. | | -| `readCompacted` | If enabling `readCompacted`, a consumer reads messages from a compacted topic rather than reading a full message backlog of a topic.

    The consumer only sees the latest value for each key in the compacted topic, up to the point in the backlog that has been compacted; beyond that point, messages are delivered as normal.

    `readCompacted` can only be enabled on subscriptions to persistent topics that have a single active consumer (such as failover or exclusive subscriptions).

    Attempting to enable it on subscriptions to non-persistent topics or on shared subscriptions leads to a subscription call throwing a `PulsarClientException`. | `false` | - - -### Reader example - -This example creates a Node.js reader with the `my-topic` topic, reads messages, and prints the content that arrive for 10 times: - -```JavaScript - -const Pulsar = require('pulsar-client'); - -(async () => { - // Create a client - const client = new Pulsar.Client({ - serviceUrl: 'pulsar://localhost:6650', - operationTimeoutSeconds: 30, - }); - - // Create a reader - const reader = await client.createReader({ - topic: 'my-topic', - startMessageId: Pulsar.MessageId.earliest(), - }); - - // read messages - for (let i = 0; i < 10; i += 1) { - const msg = await reader.readNext(); - console.log(msg.getData().toString()); - } - - await reader.close(); - await client.close(); -})(); - -``` - -## Messages - -In Pulsar Node.js client, you have to construct producer message object for producer. - -Here is an example message: - -```JavaScript - -const msg = { - data: Buffer.from('Hello, Pulsar'), - partitionKey: 'key1', - properties: { - 'foo': 'bar', - }, - eventTimestamp: Date.now(), - replicationClusters: [ - 'cluster1', - 'cluster2', - ], -} - -await producer.send(msg); - -``` - -The following keys are available for producer message objects: - -| Parameter | Description | -| :-------- | :---------- | -| `data` | The actual data payload of the message. | -| `properties` | A Object for any application-specific metadata attached to the message. | -| `eventTimestamp` | The timestamp associated with the message. | -| `sequenceId` | The sequence ID of the message. | -| `partitionKey` | The optional key associated with the message (particularly useful for things like topic compaction). | -| `replicationClusters` | The clusters to which this message is replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default. | -| `deliverAt` | The absolute timestamp at or after which the message is delivered. | | -| `deliverAfter` | The relative delay after which the message is delivered. | | - -### Message object operations - -In Pulsar Node.js client, you can receive (or read) message object as consumer (or reader). - -The message object have the following methods available: - -| Method | Description | Return type | -| :----- | :---------- | :---------- | -| `getTopicName()` | Getter method of topic name. | `String` | -| `getProperties()` | Getter method of properties. | `Array` | -| `getData()` | Getter method of message data. | `Buffer` | -| `getMessageId()` | Getter method of [message id object](#message-id-object-operations). | `Object` | -| `getPublishTimestamp()` | Getter method of publish timestamp. | `Number` | -| `getEventTimestamp()` | Getter method of event timestamp. | `Number` | -| `getRedeliveryCount()` | Getter method of redelivery count. | `Number` | -| `getPartitionKey()` | Getter method of partition key. | `String` | - -### Message ID object operations - -In Pulsar Node.js client, you can get message id object from message object. - -The message id object have the following methods available: - -| Method | Description | Return type | -| :----- | :---------- | :---------- | -| `serialize()` | Serialize the message id into a Buffer for storing. | `Buffer` | -| `toString()` | Get message id as String. | `String` | - -The client has static method of message id object. 
You can access it as `Pulsar.MessageId.someStaticMethod` too. - -The following static methods are available for the message id object: - -| Method | Description | Return type | -| :----- | :---------- | :---------- | -| `earliest()` | MessageId representing the earliest, or oldest available message stored in the topic. | `Object` | -| `latest()` | MessageId representing the latest, or last published message in the topic. | `Object` | -| `deserialize(Buffer)` | Deserialize a message id object from a Buffer. | `Object` | - -## End-to-end encryption - -[End-to-end encryption](cookbooks-encryption.md#docsNav) allows applications to encrypt messages at producers and decrypt at consumers. - -### Configuration - -If you want to use the end-to-end encryption feature in the Node.js client, you need to configure `publicKeyPath` and `privateKeyPath` for both producer and consumer. - -``` - -publicKeyPath: "./public.pem" -privateKeyPath: "./private.pem" - -``` - -### Tutorial - -This section provides step-by-step instructions on how to use the end-to-end encryption feature in the Node.js client. - -**Prerequisite** - -- Pulsar C++ client 2.7.1 or later - -**Step** - -1. Create both public and private key pairs. - - **Input** - - ```shell - - openssl genrsa -out private.pem 2048 - openssl rsa -in private.pem -pubout -out public.pem - - ``` - -2. Create a producer to send encrypted messages. - - **Input** - - ```nodejs - - const Pulsar = require('pulsar-client'); - - (async () => { - // Create a client - const client = new Pulsar.Client({ - serviceUrl: 'pulsar://localhost:6650', - operationTimeoutSeconds: 30, - }); - - // Create a producer - const producer = await client.createProducer({ - topic: 'persistent://public/default/my-topic', - sendTimeoutMs: 30000, - batchingEnabled: true, - publicKeyPath: "./public.pem", - privateKeyPath: "./private.pem", - encryptionKey: "encryption-key" - }); - - console.log(producer.ProducerConfig) - // Send messages - for (let i = 0; i < 10; i += 1) { - const msg = `my-message-${i}`; - producer.send({ - data: Buffer.from(msg), - }); - console.log(`Sent message: ${msg}`); - } - await producer.flush(); - - await producer.close(); - await client.close(); - })(); - - ``` - -3. Create a consumer to receive encrypted messages. - - **Input** - - ```nodejs - - const Pulsar = require('pulsar-client'); - - (async () => { - // Create a client - const client = new Pulsar.Client({ - serviceUrl: 'pulsar://172.25.0.3:6650', - operationTimeoutSeconds: 30 - }); - - // Create a consumer - const consumer = await client.subscribe({ - topic: 'persistent://public/default/my-topic', - subscription: 'sub1', - subscriptionType: 'Shared', - ackTimeoutMs: 10000, - publicKeyPath: "./public.pem", - privateKeyPath: "./private.pem" - }); - - console.log(consumer) - // Receive messages - for (let i = 0; i < 10; i += 1) { - const msg = await consumer.receive(); - console.log(msg.getData().toString()); - consumer.acknowledge(msg); - } - - await consumer.close(); - await client.close(); - })(); - - ``` - -4. Run the consumer to receive encrypted messages. - - **Input** - - ```shell - - node consumer.js - - ``` - -5. In a new terminal tab, run the producer to produce encrypted messages. - - **Input** - - ```shell - - node producer.js - - ``` - - Now you can see the producer sends messages and the consumer receives messages successfully. - - **Output** - - This is from the producer side. 
- - ``` - - Sent message: my-message-0 - Sent message: my-message-1 - Sent message: my-message-2 - Sent message: my-message-3 - Sent message: my-message-4 - Sent message: my-message-5 - Sent message: my-message-6 - Sent message: my-message-7 - Sent message: my-message-8 - Sent message: my-message-9 - - ``` - - This is from the consumer side. - - ``` - - my-message-0 - my-message-1 - my-message-2 - my-message-3 - my-message-4 - my-message-5 - my-message-6 - my-message-7 - my-message-8 - my-message-9 - - ``` - diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/client-libraries-python.md b/site2/website/versioned_docs/version-2.10.1-deprecated/client-libraries-python.md deleted file mode 100644 index 10000237990443..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/client-libraries-python.md +++ /dev/null @@ -1,641 +0,0 @@ ---- -id: client-libraries-python -title: Pulsar Python client -sidebar_label: "Python" -original_id: client-libraries-python ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -Pulsar Python client library is a wrapper over the existing [C++ client library](client-libraries-cpp.md) and exposes all of the [same features](/api/cpp). You can find the code in the [Python directory](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp/python) of the C++ client code. - -All the methods in producer, consumer, and reader of a Python client are thread-safe. - -[pdoc](https://github.com/BurntSushi/pdoc)-generated API docs for the Python client are available [here](/api/python). - -## Install - -You can install the [`pulsar-client`](https://pypi.python.org/pypi/pulsar-client) library either via [PyPi](https://pypi.python.org/pypi), using [pip](#installation-using-pip), or by building the library from [source](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp). - -### Install using pip - -To install the `pulsar-client` library as a pre-built package using the [pip](https://pip.pypa.io/en/stable/) package manager: - -```shell - -$ pip install pulsar-client==@pulsar:version_number@ - -``` - -### Optional dependencies -If you install the client libraries on Linux to support services like Pulsar functions or Avro serialization, you can install optional components alongside the `pulsar-client` library. - -```shell - -# avro serialization -$ pip install pulsar-client[avro]=='@pulsar:version_number@' - -# functions runtime -$ pip install pulsar-client[functions]=='@pulsar:version_number@' - -# all optional components -$ pip install pulsar-client[all]=='@pulsar:version_number@' - -``` - -Installation via PyPi is available for the following Python versions: - -Platform | Supported Python versions -:--------|:------------------------- -MacOS
    10.13 (High Sierra), 10.14 (Mojave)
    | 2.7, 3.7, 3.8, 3.9 -Linux | 2.7, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9 - -### Install from source - -To install the `pulsar-client` library by building from source, follow [instructions](client-libraries-cpp.md#compilation) and compile the Pulsar C++ client library. That builds the Python binding for the library. - -To install the built Python bindings: - -```shell - -$ git clone https://github.com/apache/pulsar -$ cd pulsar/pulsar-client-cpp/python -$ sudo python setup.py install - -``` - -## API Reference - -The complete Python API reference is available at [api/python](/api/python). - -## Examples - -You can find a variety of Python code examples for the [pulsar-client](/pulsar-client-cpp/python) library. - -### Producer example - -The following example creates a Python producer for the `my-topic` topic and sends 10 messages on that topic: - -```python - -import pulsar - -client = pulsar.Client('pulsar://localhost:6650') - -producer = client.create_producer('my-topic') - -for i in range(10): - producer.send(('Hello-%d' % i).encode('utf-8')) - -client.close() - -``` - -### Consumer example - -The following example creates a consumer with the `my-subscription` subscription name on the `my-topic` topic, receives incoming messages, prints the content and ID of messages that arrive, and acknowledges each message to the Pulsar broker. - -```python - -import pulsar - -client = pulsar.Client('pulsar://localhost:6650') - -consumer = client.subscribe('my-topic', 'my-subscription') - -while True: - msg = consumer.receive() - try: - print("Received message '{}' id='{}'".format(msg.data(), msg.message_id())) - # Acknowledge successful processing of the message - consumer.acknowledge(msg) - except Exception: - # Message failed to be processed - consumer.negative_acknowledge(msg) - -client.close() - -``` - -This example shows how to configure negative acknowledgement. - -```python - -from pulsar import Client, schema -client = Client('pulsar://localhost:6650') -consumer = client.subscribe('negative_acks','test',schema=schema.StringSchema()) -producer = client.create_producer('negative_acks',schema=schema.StringSchema()) -for i in range(10): - print('send msg "hello-%d"' % i) - producer.send_async('hello-%d' % i, callback=None) -producer.flush() -for i in range(10): - msg = consumer.receive() - consumer.negative_acknowledge(msg) - print('receive and nack msg "%s"' % msg.data()) -for i in range(10): - msg = consumer.receive() - consumer.acknowledge(msg) - print('receive and ack msg "%s"' % msg.data()) -try: - # No more messages expected - msg = consumer.receive(100) -except: - print("no more msg") - pass - -``` - -### Reader interface example - -You can use the Pulsar Python API to use the Pulsar [reader interface](concepts-clients.md#reader-interface). Here's an example: - -```python - -# MessageId taken from a previously fetched message -msg_id = msg.message_id() - -reader = client.create_reader('my-topic', msg_id) - -while True: - msg = reader.read_next() - print("Received message '{}' id='{}'".format(msg.data(), msg.message_id())) - # No acknowledgment - -``` - -### Multi-topic subscriptions - -In addition to subscribing a consumer to a single Pulsar topic, you can also subscribe to multiple topics simultaneously. To use multi-topic subscriptions, you can supply a regular expression (regex) or a `List` of topics. If you select topics via regex, all topics must be within the same Pulsar namespace. 
- -The following is an example: - -```python - -import re -consumer = client.subscribe(re.compile('persistent://public/default/topic-*'), 'my-subscription') -while True: - msg = consumer.receive() - try: - print("Received message '{}' id='{}'".format(msg.data(), msg.message_id())) - # Acknowledge successful processing of the message - consumer.acknowledge(msg) - except Exception: - # Message failed to be processed - consumer.negative_acknowledge(msg) -client.close() - -``` - -## Schema - -### Supported schema types - -You can use different builtin schema types in Pulsar. All the definitions are in the `pulsar.schema` package. - -| Schema | Notes | -| ------ | ----- | -| `BytesSchema` | Get the raw payload as a `bytes` object. No serialization/deserialization are performed. This is the default schema mode | -| `StringSchema` | Encode/decode payload as a UTF-8 string. Uses `str` objects | -| `JsonSchema` | Require record definition. Serializes the record into standard JSON payload | -| `AvroSchema` | Require record definition. Serializes in AVRO format | - -### Schema definition reference - -The schema definition is done through a class that inherits from `pulsar.schema.Record`. - -This class has a number of fields which can be of either -`pulsar.schema.Field` type or another nested `Record`. All the -fields are specified in the `pulsar.schema` package. The fields -are matching the AVRO fields types. - -| Field Type | Python Type | Notes | -| ---------- | ----------- | ----- | -| `Boolean` | `bool` | | -| `Integer` | `int` | | -| `Long` | `int` | | -| `Float` | `float` | | -| `Double` | `float` | | -| `Bytes` | `bytes` | | -| `String` | `str` | | -| `Array` | `list` | Need to specify record type for items. | -| `Map` | `dict` | Key is always `String`. Need to specify value type. | - -Additionally, any Python `Enum` type can be used as a valid field type. - -#### Fields parameters - -When adding a field, you can use these parameters in the constructor. - -| Argument | Default | Notes | -| ---------- | --------| ----- | -| `default` | `None` | Set a default value for the field. Eg: `a = Integer(default=5)` | -| `required` | `False` | Mark the field as "required". It is set in the schema accordingly. | - -#### Schema definition examples - -##### Simple definition - -```python - -class Example(Record): - a = String() - b = Integer() - c = Array(String()) - i = Map(String()) - -``` - -##### Using enums - -```python - -from enum import Enum - -class Color(Enum): - red = 1 - green = 2 - blue = 3 - -class Example(Record): - name = String() - color = Color - -``` - -##### Complex types - -```python - -class MySubRecord(Record): - x = Integer() - y = Long() - z = String() - -class Example(Record): - a = String() - sub = MySubRecord() - -``` - -##### Set namespace for Avro schema - -Set the namespace for Avro Record schema using the special field `_avro_namespace`. - -```python - -class NamespaceDemo(Record): - _avro_namespace = 'xxx.xxx.xxx' - x = String() - y = Integer() - -``` - -The schema definition is like this. - -``` - -{ - 'name': 'NamespaceDemo', 'namespace': 'xxx.xxx.xxx', 'type': 'record', 'fields': [ - {'name': 'x', 'type': ['null', 'string']}, - {'name': 'y', 'type': ['null', 'int']} - ] -} - -``` - -### Declare and validate schema - -You can send messages using `BytesSchema`, `StringSchema`, `AvroSchema`, and `JsonSchema`. 
- -Before the producer is created, the Pulsar broker validates that the existing topic schema is the correct type and that the format is compatible with the schema definition of a class. If the format of the topic schema is incompatible with the schema definition, an exception occurs in the producer creation. - -Once a producer is created with a certain schema definition, it only accepts objects that are instances of the declared schema class. - -Similarly, for a consumer or reader, the consumer returns an object (which is an instance of the schema record class) rather than raw bytes. - -**Example** - -```python - -consumer = client.subscribe( -                  topic='my-topic', -                  subscription_name='my-subscription', -                  schema=AvroSchema(Example)) - -while True: -    msg = consumer.receive() -    ex = msg.value() -    try: -        print("Received message a={} b={} c={}".format(ex.a, ex.b, ex.c)) -        # Acknowledge successful processing of the message -        consumer.acknowledge(msg) -    except Exception: -        # Message failed to be processed -        consumer.negative_acknowledge(msg) - -``` - -````mdx-code-block - - - - -You can send byte data using a `BytesSchema`. - -**Example** - -```python - -producer = client.create_producer( -                'bytes-schema-topic', -                schema=BytesSchema()) -producer.send(b"Hello") - -consumer = client.subscribe( -				'bytes-schema-topic', -				'sub', -				schema=BytesSchema()) -msg = consumer.receive() -data = msg.value() - -``` - - - - -You can send string data using a `StringSchema`. - -**Example** - -```python - -producer = client.create_producer( -                'string-schema-topic', -                schema=StringSchema()) -producer.send("Hello") - -consumer = client.subscribe( -				'string-schema-topic', -				'sub', -				schema=StringSchema()) -msg = consumer.receive() -s = msg.value() - -``` - - - - -You can declare an `AvroSchema` using one of the following methods. - -#### Method 1: Record - -You can declare an `AvroSchema` by passing a class that inherits -from `pulsar.schema.Record` and defines the fields as -class variables. - -**Example** - -```python - -class Example(Record): -    a = Integer() -    b = Integer() - -producer = client.create_producer( -                'avro-schema-topic', -                schema=AvroSchema(Example)) -r = Example(a=1, b=2) -producer.send(r) - -consumer = client.subscribe( -				'avro-schema-topic', -				'sub', -				schema=AvroSchema(Example)) -msg = consumer.receive() -e = msg.value() - -``` - -#### Method 2: JSON definition - -You can declare an `AvroSchema` using a JSON definition. - -**Example** - -Below is an `AvroSchema` defined using a JSON file (_company.avsc_). - -```json - -{ -    "doc": "this is doc", -    "namespace": "example.avro", -    "type": "record", -    "name": "Company", -    "fields": [ -        {"name": "name", "type": ["null", "string"]}, -        {"name": "address", "type": ["null", "string"]}, -        {"name": "employees", "type": ["null", {"type": "array", "items": { -            "type": "record", -            "name": "Employee", -            "fields": [ -                {"name": "name", "type": ["null", "string"]}, -                {"name": "age", "type": ["null", "int"]} -            ] -        }}]}, -        {"name": "labels", "type": ["null", {"type": "map", "values": "string"}]} -    ] -} - -``` - -You can load a schema definition from a file by using [`avro.schema`](http://avro.apache.org/docs/current/gettingstartedpython.html) or [`fastavro.schema`](https://fastavro.readthedocs.io/en/latest/schema.html#fastavro._schema_py.load_schema).
- -If you use the "JSON definition" method to declare an `AvroSchema`, pay attention to the following points: - -- You need to use [Python dict](https://developers.google.com/edu/python/dict-files) to produce and consume messages, which is different from using the "Record" method. - -- When generating an `AvroSchema` object, set `_record_cls` parameter to `None`. - -**Example** - -``` - -from fastavro.schema import load_schema -from pulsar.schema import * -schema_definition = load_schema("examples/company.avsc") -avro_schema = AvroSchema(None, schema_definition=schema_definition) -producer = client.create_producer( - topic=topic, - schema=avro_schema) -consumer = client.subscribe(topic, 'test', schema=avro_schema) -company = { - "name": "company-name" + str(i), - "address": 'xxx road xxx street ' + str(i), - "employees": [ - {"name": "user" + str(i), "age": 20 + i}, - {"name": "user" + str(i), "age": 30 + i}, - {"name": "user" + str(i), "age": 35 + i}, - ], - "labels": { - "industry": "software" + str(i), - "scale": ">100", - "funds": "1000000.0" - } -} -producer.send(company) -msg = consumer.receive() -# Users could get a dict object by `value()` method. -msg.value() - -``` - - - - -#### Record - -You can declare a `JsonSchema` by passing a class that inherits -from `pulsar.schema.Record` and defines the fields as class variables. This is similar to using `AvroSchema`. The only difference is to use `JsonSchema` instead of `AvroSchema` when defining schema type as shown below. For how to use `AvroSchema` via record, see [here](client-libraries-python.md#method-1-record). - -``` - -producer = client.create_producer( - 'avro-schema-topic', - schema=JsonSchema(Example)) - -consumer = client.subscribe( - 'avro-schema-topic', - 'sub', - schema=JsonSchema(Example)) - -``` - - - - -```` - -## End-to-end encryption - -[End-to-end encryption](cookbooks-encryption.md#docsNav) allows applications to encrypt messages at producers and decrypt messages at consumers. - -### Configuration - -To use the end-to-end encryption feature in the Python client, you need to configure `publicKeyPath` and `privateKeyPath` for both producer and consumer. - -``` - -publicKeyPath: "./public.pem" -privateKeyPath: "./private.pem" - -``` - -### Tutorial - -This section provides step-by-step instructions on how to use the end-to-end encryption feature in the Python client. - -**Prerequisite** - -- Pulsar Python client 2.7.1 or later - -**Step** - -1. Create both public and private key pairs. - - **Input** - - ```shell - - openssl genrsa -out private.pem 2048 - openssl rsa -in private.pem -pubout -out public.pem - - ``` - -2. Create a producer to send encrypted messages. - - **Input** - - ```python - - import pulsar - - publicKeyPath = "./public.pem" - privateKeyPath = "./private.pem" - crypto_key_reader = pulsar.CryptoKeyReader(publicKeyPath, privateKeyPath) - client = pulsar.Client('pulsar://localhost:6650') - producer = client.create_producer(topic='encryption', encryption_key='encryption', crypto_key_reader=crypto_key_reader) - producer.send('encryption message'.encode('utf8')) - print('sent message') - producer.close() - client.close() - - ``` - -3. Create a consumer to receive encrypted messages. 
- - **Input** - - ```python - - import pulsar - - publicKeyPath = "./public.pem" - privateKeyPath = "./private.pem" - crypto_key_reader = pulsar.CryptoKeyReader(publicKeyPath, privateKeyPath) - client = pulsar.Client('pulsar://localhost:6650') - consumer = client.subscribe(topic='encryption', subscription_name='encryption-sub', crypto_key_reader=crypto_key_reader) - msg = consumer.receive() - print("Received msg '{}' id = '{}'".format(msg.data(), msg.message_id())) - consumer.close() - client.close() - - ``` - -4. Run the consumer to receive encrypted messages. - - **Input** - - ```shell - - python consumer.py - - ``` - -5. In a new terminal tab, run the producer to produce encrypted messages. - - **Input** - - ```shell - - python producer.py - - ``` - - Now you can see the producer sends messages and the consumer receives messages successfully. - - **Output** - - This is from the producer side. - - ``` - - sent message - - ``` - - This is from the consumer side. - - ``` - - Received msg 'encryption message' id = '(0,0,-1,-1)' - - ``` - diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/client-libraries-rest.md b/site2/website/versioned_docs/version-2.10.1-deprecated/client-libraries-rest.md deleted file mode 100644 index 1b26eedc01836a..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/client-libraries-rest.md +++ /dev/null @@ -1,134 +0,0 @@ ---- -id: client-libraries-rest -title: Pulsar REST -sidebar_label: "REST" -original_id: client-libraries-rest ---- - -Pulsar not only provides REST endpoints to manage resources in Pulsar clusters, but also provides methods to query the state for those resources. In addition, Pulsar REST provides a simple way to interact with Pulsar **without using client libraries**, which is convenient for applications to use HTTP to interact with Pulsar. - -## Connection - -To connect to Pulsar, you need to specify a URL. - -- Produce messages to non-partitioned or partitioned topics - - ``` - - brokerUrl:{8080/8081}/topics/{persistent/non-persistent}/{my-tenant}/{my-namespace}/{my-topic} - - ``` - -- Produce messages to specific partitions of partitioned topics - - ``` - - brokerUrl:{8080/8081}/topics/{persistent/non-persistent}/{my-tenant}/{my-namespace}/{my-topic}/partitions/{partition-number} - - ``` - -## Producer - -Currently, you can produce messages to the following destinations with tools like cURL or Postman via REST. - -- Non-partitioned or partitioned topics - -- Specific partitions of partitioned topics - -:::note - -You can only produce messages to **topics that already exist** in Pulsar via REST. - -::: - -Consuming and reading messages via REST will be supported in the future. - -### Message - -- Below is the structure of a request payload. - - Parameter|Required?|Description - |---|---|--- - `schemaVersion`|No| Schema version of existing schema used for this message

    You need to provide one of the following:<br>

    - `schemaVersion`
    - `keySchema`/`valueSchema`

    If both of them are provided, then `schemaVersion` is used - `keySchema/valueSchema`|No|Key schema / Value schema used for this message - `producerName`|No|Producer name - `Messages[] SingleMessage`|Yes|Messages to be sent - -- Below is the structure of a message. - - Parameter|Required?|Type|Description - |---|---|---|--- - `payload`|Yes|`String`|Actual message payload

    Messages are sent in strings and encoded with given schemas on the server side - `properties`|No|`Map`|Custom properties - `key`|No|`String`|Partition key - `replicationClusters`|No|`List`|Clusters to which messages replicate - `eventTime`|No|`String`|Message event time - `sequenceId`|No|`long`|Message sequence ID - `disableReplication`|No|`boolean`|Whether to disable replication of messages - `deliverAt`|No|`long`|Deliver messages only at or after specified absolute timestamp - `deliverAfterMs`|No|`long`|Deliver messages only after specified relative delay (in milliseconds) - -### Schema - -- Currently, Primitive, Avro, JSON, and KeyValue schemas are supported. - -- For Primitive, Avro and JSON schemas, schemas should be provided as the full schema encoded as a string. - -- If the schema is not set, messages are encoded with string schema. - -### Example - -Below is an example of sending messages to topics using JSON schema via REST. - -Assume that you send messages representing the following class. - -```java - - class Seller { - public String state; - public String street; - public long zipCode; - } - - class PC { - public String brand; - public String model; - public int year; - public GPU gpu; - public Seller seller; - } - -``` - -Send messages to topics with JSON schema using the command below. - -```shell - -curl --location --request POST 'brokerUrl:{8080/8081}/topics/{persistent/non-persistent}/{my-tenant}/{my-namespace}/{my-topic}' \ ---header 'Content-Type: application/json' \ ---data-raw '{ - "valueSchema": "{\"name\":\"\",\"schema\":\"eyJ0eXBlIjoicmVjb3JkIiwibmFtZSI6IlBDIiwibmFtZXNwYWNlIjoib3JnLmFwYWNoZS5wdWxzYXIuYnJva2VyLmFkbWluLlRvcGljc1Rlc3QiLCJmaWVsZHMiOlt7Im5hbWUiOiJicmFuZCIsInR5cGUiOlsibnVsbCIsInN0cmluZyJdLCJkZWZhdWx0IjpudWxsfSx7Im5hbWUiOiJncHUiLCJ0eXBlIjpbIm51bGwiLHsidHlwZSI6ImVudW0iLCJuYW1lIjoiR1BVIiwic3ltYm9scyI6WyJBTUQiLCJOVklESUEiXX1dLCJkZWZhdWx0IjpudWxsfSx7Im5hbWUiOiJtb2RlbCIsInR5cGUiOlsibnVsbCIsInN0cmluZyJdLCJkZWZhdWx0IjpudWxsfSx7Im5hbWUiOiJzZWxsZXIiLCJ0eXBlIjpbIm51bGwiLHsidHlwZSI6InJlY29yZCIsIm5hbWUiOiJTZWxsZXIiLCJmaWVsZHMiOlt7Im5hbWUiOiJzdGF0ZSIsInR5cGUiOlsibnVsbCIsInN0cmluZyJdLCJkZWZhdWx0IjpudWxsfSx7Im5hbWUiOiJzdHJlZXQiLCJ0eXBlIjpbIm51bGwiLCJzdHJpbmciXSwiZGVmYXVsdCI6bnVsbH0seyJuYW1lIjoiemlwQ29kZSIsInR5cGUiOiJsb25nIn1dfV0sImRlZmF1bHQiOm51bGx9LHsibmFtZSI6InllYXIiLCJ0eXBlIjoiaW50In1dfQ==\",\"type\":\"JSON\",\"properties\":{\"__jsr310ConversionEnabled\":\"false\",\"__alwaysAllowNull\":\"true\"},\"schemaDefinition\":\"{\\\"type\\\":\\\"record\\\",\\\"name\\\":\\\"PC\\\",\\\"namespace\\\":\\\"org.apache.pulsar.broker.admin.TopicsTest\\\",\\\"fields\\\":[{\\\"name\\\":\\\"brand\\\",\\\"type\\\":[\\\"null\\\",\\\"string\\\"],\\\"default\\\":null},{\\\"name\\\":\\\"gpu\\\",\\\"type\\\":[\\\"null\\\",{\\\"type\\\":\\\"enum\\\",\\\"name\\\":\\\"GPU\\\",\\\"symbols\\\":[\\\"AMD\\\",\\\"NVIDIA\\\"]}],\\\"default\\\":null},{\\\"name\\\":\\\"model\\\",\\\"type\\\":[\\\"null\\\",\\\"string\\\"],\\\"default\\\":null},{\\\"name\\\":\\\"seller\\\",\\\"type\\\":[\\\"null\\\",{\\\"type\\\":\\\"record\\\",\\\"name\\\":\\\"Seller\\\",\\\"fields\\\":[{\\\"name\\\":\\\"state\\\",\\\"type\\\":[\\\"null\\\",\\\"string\\\"],\\\"default\\\":null},{\\\"name\\\":\\\"street\\\",\\\"type\\\":[\\\"null\\\",\\\"string\\\"],\\\"default\\\":null},{\\\"name\\\":\\\"zipCode\\\",\\\"type\\\":\\\"long\\\"}]}],\\\"default\\\":null},{\\\"name\\\":\\\"year\\\",\\\"type\\\":\\\"int\\\"}]}\"}", - -// Schema data is just the base 64 encoded schemaDefinition. 
- - "producerName": "rest-producer", - "messages": [ - { - "key":"my-key", - "payload":"{\"brand\":\"dell\",\"model\":\"alienware\",\"year\":2021,\"gpu\":\"AMD\",\"seller\":{\"state\":\"WA\",\"street\":\"main street\",\"zipCode\":98004}}", - "eventTime":1603045262772, - "sequenceId":1 - }, - { - "key":"my-key", - "payload":"{\"brand\":\"asus\",\"model\":\"rog\",\"year\":2020,\"gpu\":\"NVIDIA\",\"seller\":{\"state\":\"CA\",\"street\":\"back street\",\"zipCode\":90232}}", - "eventTime":1603045262772, - "sequenceId":2 - } - ] -} -` -// Sample message - -``` - diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/client-libraries-websocket.md b/site2/website/versioned_docs/version-2.10.1-deprecated/client-libraries-websocket.md deleted file mode 100644 index 145866e41644bd..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/client-libraries-websocket.md +++ /dev/null @@ -1,664 +0,0 @@ ---- -id: client-libraries-websocket -title: Pulsar WebSocket API -sidebar_label: "WebSocket" -original_id: client-libraries-websocket ---- - -Pulsar [WebSocket](https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API) API provides a simple way to interact with Pulsar using languages that do not have an official [client library](getting-started-clients.md). Through WebSocket, you can publish and consume messages and use features available on the [Client Features Matrix](https://github.com/apache/pulsar/wiki/Client-Features-Matrix) page. - - -> You can use Pulsar WebSocket API with any WebSocket client library. See examples for Python and Node.js [below](#client-examples). - -## Running the WebSocket service - -The standalone variant of Pulsar that we recommend using for [local development](getting-started-standalone.md) already has the WebSocket service enabled. - -In non-standalone mode, there are two ways to deploy the WebSocket service: - -* [embedded](#embedded-with-a-pulsar-broker) with a Pulsar broker -* as a [separate component](#as-a-separate-component) - -### Embedded with a Pulsar broker - -In this mode, the WebSocket service will run within the same HTTP service that's already running in the broker. To enable this mode, set the [`webSocketServiceEnabled`](reference-configuration.md#broker-webSocketServiceEnabled) parameter in the [`conf/broker.conf`](reference-configuration.md#broker) configuration file in your installation. - -```properties - -webSocketServiceEnabled=true - -``` - -### As a separate component - -In this mode, the WebSocket service will be run from a Pulsar [broker](reference-terminology.md#broker) as a separate service. Configuration for this mode is handled in the [`conf/websocket.conf`](reference-configuration.md#websocket) configuration file. 
You'll need to set *at least* the following parameters: - -* [`configurationMetadataStoreUrl`](reference-configuration.md#websocket) -* [`webServicePort`](reference-configuration.md#websocket-webServicePort) -* [`clusterName`](reference-configuration.md#websocket-clusterName) - -Here's an example: - -```properties - -configurationMetadataStoreUrl=zk1:2181,zk2:2181,zk3:2181 -webServicePort=8080 -clusterName=my-cluster - -``` - -### Security settings - -To enable TLS encryption on the WebSocket service: - -```properties - -tlsEnabled=true -tlsAllowInsecureConnection=false -tlsCertificateFilePath=/path/to/client-websocket.cert.pem -tlsKeyFilePath=/path/to/client-websocket.key-pk8.pem -tlsTrustCertsFilePath=/path/to/ca.cert.pem - -``` - -### Starting the WebSocket service - -When the configuration is set, you can start the service using the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) tool: - -```shell - -$ bin/pulsar-daemon start websocket - -``` - -## API Reference - -Pulsar's WebSocket API offers three endpoints for [producing](#producer-endpoint) messages, [consuming](#consumer-endpoint) messages and [reading](#reader-endpoint) messages. - -All exchanges via the WebSocket API use JSON. - -### Authentication - -#### Browser javascript WebSocket client - -Use the query param `token` to transport the authentication token. - -```http - -ws://broker-service-url:8080/path?token=token - -``` - -### Producer endpoint - -The producer endpoint requires you to specify a tenant, namespace, and topic in the URL: - -```http - -ws://broker-service-url:8080/ws/v2/producer/persistent/:tenant/:namespace/:topic - -``` - -##### Query param - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`sendTimeoutMillis` | long | no | Send timeout (default: 30 secs) -`batchingEnabled` | boolean | no | Enable batching of messages (default: false) -`batchingMaxMessages` | int | no | Maximum number of messages permitted in a batch (default: 1000) -`maxPendingMessages` | int | no | Set the max size of the internal-queue holding the messages (default: 1000) -`batchingMaxPublishDelay` | long | no | Time period within which the messages will be batched (default: 10ms) -`messageRoutingMode` | string | no | Message [routing mode](/api/client/index.html?org/apache/pulsar/client/api/ProducerConfiguration.MessageRoutingMode.html) for the partitioned producer: `SinglePartition`, `RoundRobinPartition` -`compressionType` | string | no | Compression [type](/api/client/index.html?org/apache/pulsar/client/api/CompressionType.html): `LZ4`, `ZLIB` -`producerName` | string | no | Specify the name for the producer. Pulsar enforces that only one producer with the same name can publish on a topic -`initialSequenceId` | long | no | Set the baseline for the sequence ids for messages published by the producer. -`hashingScheme` | string | no | [Hashing function](/api/client/org/apache/pulsar/client/api/ProducerConfiguration.HashingScheme.html) to use when publishing on a partitioned topic: `JavaStringHash`, `Murmur3_32Hash` -`token` | string | no | Authentication token, this is used for the browser javascript client - - -#### Publishing a message - -```json - -{ -  "payload": "SGVsbG8gV29ybGQ=", -  "properties": {"key1": "value1", "key2": "value2"}, -  "context": "1" -} - -``` - -Key | Type | Required? | Explanation
:---|:-----|:----------|:----------- -`payload` | string | yes | Base-64 encoded payload -`properties` | key-value pairs | no | Application-defined properties -`context` | string | no | Application-defined request identifier -`key` | string | no | For partitioned topics, decides which partition to use -`replicationClusters` | array | no | Restrict replication to this list of [clusters](reference-terminology.md#cluster), specified by name - - -##### Example success response - -```json - -{ -   "result": "ok", -   "messageId": "CAAQAw==", -   "context": "1" - } - -``` - -##### Example failure response - -```json - - { -   "result": "send-error:3", -   "errorMsg": "Failed to de-serialize from JSON", -   "context": "1" - } - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`result` | string | yes | `ok` if successful or an error message if unsuccessful -`messageId` | string | yes | Message ID assigned to the published message -`context` | string | no | Application-defined request identifier - - -### Consumer endpoint - -The consumer endpoint requires you to specify a tenant, namespace, and topic, as well as a subscription, in the URL: - -```http - -ws://broker-service-url:8080/ws/v2/consumer/persistent/:tenant/:namespace/:topic/:subscription - -``` - -##### Query param - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`ackTimeoutMillis` | long | no | Set the timeout for unacked messages (default: 0) -`subscriptionType` | string | no | [Subscription type](/api/client/index.html?org/apache/pulsar/client/api/SubscriptionType.html): `Exclusive`, `Failover`, `Shared`, `Key_Shared` -`receiverQueueSize` | int | no | Size of the consumer receive queue (default: 1000) -`consumerName` | string | no | Consumer name -`priorityLevel` | int | no | Define a [priority](/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setPriorityLevel-int-) for the consumer -`maxRedeliverCount` | int | no | Define a [maxRedeliverCount](/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#deadLetterPolicy-org.apache.pulsar.client.api.DeadLetterPolicy-) for the consumer (default: 0). Activates [Dead Letter Topic](https://github.com/apache/pulsar/wiki/PIP-22%3A-Pulsar-Dead-Letter-Topic) feature. -`deadLetterTopic` | string | no | Define a [deadLetterTopic](/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#deadLetterPolicy-org.apache.pulsar.client.api.DeadLetterPolicy-) for the consumer (default: {topic}-{subscription}-DLQ). Activates [Dead Letter Topic](https://github.com/apache/pulsar/wiki/PIP-22%3A-Pulsar-Dead-Letter-Topic) feature. -`pullMode` | boolean | no | Enable pull mode (default: false). See "Flow Control" below. -`negativeAckRedeliveryDelay` | int | no | When a message is negatively acknowledged, the delay time before the message is redelivered (in milliseconds). The default value is 60000. -`token` | string | no | Authentication token, this is used for the browser javascript client - -NB: these parameters (except `pullMode`) apply to the internal consumer of the WebSocket service. -So messages will be subject to the redelivery settings as soon as they get into the receive queue, -even if the client doesn't consume them over the WebSocket.
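For illustration, a consumer connection that sets a few of these query parameters might look like the following sketch, using the same [`websocket-client`](https://pypi.python.org/pypi/websocket-client) package as the Python examples later on this page (the parameter values are illustrative, not recommendations):

```python

import websocket

# Query parameters are appended to the consumer endpoint URL
url = ('ws://localhost:8080/ws/v2/consumer/persistent/public/default/my-topic/my-sub'
       '?subscriptionType=Shared&receiverQueueSize=500&negativeAckRedeliveryDelay=30000')
ws = websocket.create_connection(url)

```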
- -##### Receiving messages - -Server will push messages on the WebSocket session: - -```json - -{ - "messageId": "CAMQADAA", - "payload": "hvXcJvHW7kOSrUn17P2q71RA5SdiXwZBqw==", - "properties": {}, - "publishTime": "2021-10-29T16:01:38.967-07:00", - "redeliveryCount": 0, - "encryptionContext": { - "keys": { - "client-rsa.pem": { - "keyValue": "jEuwS+PeUzmCo7IfLNxqoj4h7txbLjCQjkwpaw5AWJfZ2xoIdMkOuWDkOsqgFmWwxiecakS6GOZHs94x3sxzKHQx9Oe1jpwBg2e7L4fd26pp+WmAiLm/ArZJo6JotTeFSvKO3u/yQtGTZojDDQxiqFOQ1ZbMdtMZA8DpSMuq+Zx7PqLo43UdW1+krjQfE5WD+y+qE3LJQfwyVDnXxoRtqWLpVsAROlN2LxaMbaftv5HckoejJoB4xpf/dPOUqhnRstwQHf6klKT5iNhjsY4usACt78uILT0pEPd14h8wEBidBz/vAlC/zVMEqiDVzgNS7dqEYS4iHbf7cnWVCn3Hxw==", - "metadata": {} - } - }, - "param": "Tfu1PxVm6S9D3+Hk", - "compressionType": "NONE", - "uncompressedMessageSize": 0, - "batchSize": { - "empty": false, - "present": true - } - } -} - -``` - -Below are the parameters in the WebSocket consumer response. - -- General parameters - - Key | Type | Required? | Explanation - :---|:-----|:----------|:----------- - `messageId` | string | yes | Message ID - `payload` | string | yes | Base-64 encoded payload - `publishTime` | string | yes | Publish timestamp - `redeliveryCount` | number | yes | Number of times this message was already delivered - `properties` | key-value pairs | no | Application-defined properties - `key` | string | no | Original routing key set by producer - `encryptionContext` | EncryptionContext | no | Encryption context that consumers can use to decrypt received messages - `param` | string | no | Initialization vector for cipher (Base64 encoding) - `batchSize` | string | no | Number of entries in a message (if it is a batch message) - `uncompressedMessageSize` | string | no | Message size before compression - `compressionType` | string | no | Algorithm used to compress the message payload - -- `encryptionContext` related parameter - - Key | Type | Required? | Explanation - :---|:-----|:----------|:----------- - `keys` |key-EncryptionKey pairs | yes | Key in `key-EncryptionKey` pairs is an encryption key name. Value in `key-EncryptionKey` pairs is an encryption key object. - -- `encryptionKey` related parameters - - Key | Type | Required? | Explanation - :---|:-----|:----------|:----------- - `keyValue` | string | yes | Encryption key (Base64 encoding) - `metadata` | key-value pairs | no | Application-defined metadata - -#### Acknowledging the message - -Consumer needs to acknowledge the successful processing of the message to -have the Pulsar broker delete it. - -```json - -{ - "messageId": "CAAQAw==" -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`messageId`| string | yes | Message ID of the processed message - -#### Negatively acknowledging messages - -```json - -{ - "type": "negativeAcknowledge", - "messageId": "CAAQAw==" -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`messageId`| string | yes | Message ID of the processed message - -#### Flow control - -##### Push Mode - -By default (`pullMode=false`), the consumer endpoint will use the `receiverQueueSize` parameter both to size its -internal receive queue and to limit the number of unacknowledged messages that are passed to the WebSocket client. -In this mode, if you don't send acknowledgements, the Pulsar WebSocket service will stop sending messages after reaching -`receiverQueueSize` unacked messages sent to the WebSocket client. 
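Put differently, in push mode the client has to keep acknowledging as it processes. A minimal sketch of such a loop, assuming `ws` is an open consumer connection as above:

```python

import json

while True:
    msg = json.loads(ws.recv())
    # ... process msg['payload'] here ...
    # Acknowledge each message so the service keeps pushing more,
    # staying below the receiverQueueSize limit of unacked messages.
    ws.send(json.dumps({'messageId': msg['messageId']}))

```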
- -##### Pull Mode - -If you set `pullMode` to `true`, the WebSocket client will need to send `permit` commands to permit the -Pulsar WebSocket service to send more messages. - -```json - -{ - "type": "permit", - "permitMessages": 100 -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`type`| string | yes | Type of command. Must be `permit` -`permitMessages`| int | yes | Number of messages to permit - -NB: in this mode it's possible to acknowledge messages in a different connection. - -#### Check if reach end of topic - -Consumer can check if it has reached end of topic by sending `isEndOfTopic` request. - -**Request** - -```json - -{ - "type": "isEndOfTopic" -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`type`| string | yes | Type of command. Must be `isEndOfTopic` - -**Response** - -```json - -{ - "endOfTopic": "true/false" - } - -``` - -### Reader endpoint - -The reader endpoint requires you to specify a tenant, namespace, and topic in the URL: - -```http - -ws://broker-service-url:8080/ws/v2/reader/persistent/:tenant/:namespace/:topic - -``` - -##### Query param - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`readerName` | string | no | Reader name -`receiverQueueSize` | int | no | Size of the consumer receive queue (default: 1000) -`messageId` | int or enum | no | Message ID to start from, `earliest` or `latest` (default: `latest`) -`token` | string | no | Authentication token, this is used for the browser javascript client - -##### Receiving messages - -Server will push messages on the WebSocket session: - -```json - -{ - "messageId": "CAAQAw==", - "payload": "SGVsbG8gV29ybGQ=", - "properties": {"key1": "value1", "key2": "value2"}, - "publishTime": "2016-08-30 16:45:57.785", - "redeliveryCount": 4 -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`messageId` | string | yes | Message ID -`payload` | string | yes | Base-64 encoded payload -`publishTime` | string | yes | Publish timestamp -`redeliveryCount` | number | yes | Number of times this message was already delivered -`properties` | key-value pairs | no | Application-defined properties -`key` | string | no | Original routing key set by producer - -#### Acknowledging the message - -**In WebSocket**, Reader needs to acknowledge the successful processing of the message to -have the Pulsar WebSocket service update the number of pending messages. -If you don't send acknowledgements, Pulsar WebSocket service will stop sending messages after reaching the pendingMessages limit. - -```json - -{ - "messageId": "CAAQAw==" -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`messageId`| string | yes | Message ID of the processed message - -#### Check if reach end of topic - -Consumer can check if it has reached end of topic by sending `isEndOfTopic` request. - -**Request** - -```json - -{ - "type": "isEndOfTopic" -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`type`| string | yes | Type of command. 
Must be `isEndOfTopic` - -**Response** - -```json - -{ -   "endOfTopic": "true/false" - } - -``` - -### Error codes - -In case of error, the server will close the WebSocket session using the -following error codes: - -Error Code | Error Message -:----------|:------------- -1 | Failed to create producer -2 | Failed to subscribe -3 | Failed to deserialize from JSON -4 | Failed to serialize to JSON -5 | Failed to authenticate client -6 | Client is not authorized -7 | Invalid payload encoding -8 | Unknown error - -> The application is responsible for re-establishing a new WebSocket session after a backoff period. - -## Client examples - -Below you'll find code examples for the Pulsar WebSocket API in [Python](#python) and [Node.js](#nodejs). - -### Python - -This example uses the [`websocket-client`](https://pypi.python.org/pypi/websocket-client) package. You can install it using [pip](https://pypi.python.org/pypi/pip): - -```shell - -$ pip install websocket-client - -``` - -You can also download it from [PyPI](https://pypi.python.org/pypi/websocket-client). - -#### Python producer - -Here's an example Python producer that sends a simple message to a Pulsar [topic](reference-terminology.md#topic): - -```python - -import websocket, base64, json - -# If you set enable_TLS to true, you have to set tlsEnabled to true in conf/websocket.conf. -enable_TLS = False -scheme = 'ws' -if enable_TLS: -    scheme = 'wss' - -TOPIC = scheme + '://localhost:8080/ws/v2/producer/persistent/public/default/my-topic' - -ws = websocket.create_connection(TOPIC) - -# encode message -s = "Hello World" -firstEncoded = s.encode("UTF-8") -binaryEncoded = base64.b64encode(firstEncoded) -payloadString = binaryEncoded.decode('UTF-8') - -# Send one message as JSON -ws.send(json.dumps({ -    'payload' : payloadString, -    'properties': { -        'key1' : 'value1', -        'key2' : 'value2' -    }, -    'context' : 5 -})) - -response = json.loads(ws.recv()) -if response['result'] == 'ok': -    print('Message published successfully') -else: -    print('Failed to publish message:', response) -ws.close() - -``` - -#### Python consumer - -Here's an example Python consumer that listens on a Pulsar topic and prints the message ID whenever a message arrives: - -```python - -import websocket, base64, json - -# If you set enable_TLS to true, you have to set tlsEnabled to true in conf/websocket.conf. -enable_TLS = False -scheme = 'ws' -if enable_TLS: -    scheme = 'wss' - -TOPIC = scheme + '://localhost:8080/ws/v2/consumer/persistent/public/default/my-topic/my-sub' - -ws = websocket.create_connection(TOPIC) - -while True: -    msg = json.loads(ws.recv()) -    if not msg: break - -    print("Received: {} - payload: {}".format(msg, base64.b64decode(msg['payload']))) - -    # Acknowledge successful processing -    ws.send(json.dumps({'messageId' : msg['messageId']})) - -ws.close() - -``` - -#### Python reader - -Here's an example Python reader that listens on a Pulsar topic and prints the message ID whenever a message arrives: - -```python - -import websocket, base64, json - -# If you set enable_TLS to true, you have to set tlsEnabled to true in conf/websocket.conf.
-enable_TLS = False -scheme = 'ws' -if enable_TLS: -    scheme = 'wss' - -TOPIC = scheme + '://localhost:8080/ws/v2/reader/persistent/public/default/my-topic' -ws = websocket.create_connection(TOPIC) - -while True: -    msg = json.loads(ws.recv()) -    if not msg: break - -    print("Received: {} - payload: {}".format(msg, base64.b64decode(msg['payload']))) - -    # Acknowledge successful processing -    ws.send(json.dumps({'messageId' : msg['messageId']})) - -ws.close() - -``` - -### Node.js - -This example uses the [`ws`](https://websockets.github.io/ws/) package. You can install it using [npm](https://www.npmjs.com/): - -```shell - -$ npm install ws - -``` - -#### Node.js producer - -Here's an example Node.js producer that sends a simple message to a Pulsar topic: - -```javascript - -const WebSocket = require('ws'); - -// If you set enableTLS to true, you have to set tlsEnabled to true in conf/websocket.conf. -const enableTLS = false; -const topic = `${enableTLS ? 'wss' : 'ws'}://localhost:8080/ws/v2/producer/persistent/public/default/my-topic`; -const ws = new WebSocket(topic); - -var message = { -  "payload" : Buffer.from("Hello World").toString('base64'), -  "properties": { -    "key1" : "value1", -    "key2" : "value2" -  }, -  "context" : "1" -}; - -ws.on('open', function() { -  // Send one message -  ws.send(JSON.stringify(message)); -}); - -ws.on('message', function(message) { -  console.log('received ack: %s', message); -}); - -``` - -#### Node.js consumer - -Here's an example Node.js consumer that listens on the same topic used by the producer above: - -```javascript - -const WebSocket = require('ws'); - -// If you set enableTLS to true, you have to set tlsEnabled to true in conf/websocket.conf. -const enableTLS = false; -const topic = `${enableTLS ? 'wss' : 'ws'}://localhost:8080/ws/v2/consumer/persistent/public/default/my-topic/my-sub`; -const ws = new WebSocket(topic); - -ws.on('message', function(message) { -    var receiveMsg = JSON.parse(message); -    console.log('Received: %s - payload: %s', message, Buffer.from(receiveMsg.payload, 'base64').toString()); -    var ackMsg = {"messageId" : receiveMsg.messageId}; -    ws.send(JSON.stringify(ackMsg)); -}); - -``` - -#### Node.js reader - -```javascript - -const WebSocket = require('ws'); - -// If you set enableTLS to true, you have to set tlsEnabled to true in conf/websocket.conf. -const enableTLS = false; -const topic = `${enableTLS ? 'wss' : 'ws'}://localhost:8080/ws/v2/reader/persistent/public/default/my-topic`; -const ws = new WebSocket(topic); - -ws.on('message', function(message) { -    var receiveMsg = JSON.parse(message); -    console.log('Received: %s - payload: %s', message, Buffer.from(receiveMsg.payload, 'base64').toString()); -    var ackMsg = {"messageId" : receiveMsg.messageId}; -    ws.send(JSON.stringify(ackMsg)); -}); - -``` - diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/client-libraries.md b/site2/website/versioned_docs/version-2.10.1-deprecated/client-libraries.md deleted file mode 100644 index 6cdc1e615c81fd..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/client-libraries.md +++ /dev/null @@ -1,45 +0,0 @@ ---- -id: client-libraries -title: Pulsar client libraries -sidebar_label: "Overview" -original_id: client-libraries ---- - -Pulsar supports the following client libraries: - -|Language|Documentation|Release note|Code repo -|---|---|---|--- -Java |- [User doc](client-libraries-java.md)<br>

    - [API doc](/api/client/)|[Here](/release-notes/)|[Here](https://github.com/apache/pulsar/tree/master/pulsar-client) -C++ | - [User doc](client-libraries-cpp.md)

    - [API doc](/api/cpp/)|[Here](/release-notes/)|[Here](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp) -Python | - [User doc](client-libraries-python.md)

    - [API doc](/api/python/)|[Here](/release-notes/)|[Here](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp/python) -WebSocket| [User doc](client-libraries-websocket.md) | [Here](/release-notes/)|[Here](https://github.com/apache/pulsar/tree/master/pulsar-websocket) -Go client|[User doc](client-libraries-go.md)|[Here](https://github.com/apache/pulsar-client-go/blob/master/CHANGELOG) |[Here](https://github.com/apache/pulsar-client-go) -Node.js|[User doc](client-libraries-node.md)|[Here](https://github.com/apache/pulsar-client-node/releases) |[Here](https://github.com/apache/pulsar-client-node) -C# |[User doc](client-libraries-dotnet.md)| [Here](https://github.com/apache/pulsar-dotpulsar/blob/master/CHANGELOG)|[Here](https://github.com/apache/pulsar-dotpulsar) - -:::note - -- The code repos of **Java, C++, Python,** and **WebSocket** clients are hosted in the [Pulsar main repo](https://github.com/apache/pulsar) and these clients are released with Pulsar, so their release notes are parts of [Pulsar release note](/release-notes/). -- The code repos of **Go, Node.js,** and **C#** clients are hosted outside of the [Pulsar main repo](https://github.com/apache/pulsar) and these clients are not released with Pulsar, so they have independent release notes. - -::: - -## Feature matrix -Pulsar client feature matrix for different languages is listed on [Pulsar Feature Matrix (Client and Function)](https://docs.google.com/spreadsheets/d/1YHYTkIXR8-Ql103u-IMI18TXLlGStK8uJjDsOOA0T20/edit#gid=1784579914) page. - -## Third-party clients - -Besides the official released clients, multiple projects on developing Pulsar clients are available in different languages. - -> If you have developed a new Pulsar client, feel free to submit a pull request and add your client to the list below. 
- -| Language | Project | Maintainer | License | Description | -|----------|---------|------------|---------|-------------| -| Go | [pulsar-client-go](https://github.com/Comcast/pulsar-client-go) | [Comcast](https://github.com/Comcast) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | A native golang client | -| Go | [go-pulsar](https://github.com/t2y/go-pulsar) | [t2y](https://github.com/t2y) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | -| Haskell | [supernova](https://github.com/cr-org/supernova) | [Chatroulette](https://github.com/cr-org) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Native Pulsar client for Haskell | -| Scala | [neutron](https://github.com/cr-org/neutron) | [Chatroulette](https://github.com/cr-org) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Purely functional Apache Pulsar client for Scala built on top of Fs2 | -| Scala | [pulsar4s](https://github.com/sksamuel/pulsar4s) | [sksamuel](https://github.com/sksamuel) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Idiomatic, typesafe, and reactive Scala client for Apache Pulsar | -| Rust | [pulsar-rs](https://github.com/wyyerd/pulsar-rs) | [Wyyerd Group](https://github.com/wyyerd) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Future-based Rust bindings for Apache Pulsar | -| .NET | [pulsar-client-dotnet](https://github.com/fsharplang-ru/pulsar-client-dotnet) | [Lanayx](https://github.com/Lanayx) | [![GitHub](https://img.shields.io/badge/license-MIT-green.svg)](https://opensource.org/licenses/MIT) | Native .NET client for C#/F#/VB | -| Node.js | [pulsar-flex](https://github.com/ayeo-flex-org/pulsar-flex) | [Daniel Sinai](https://github.com/danielsinai), [Ron Farkash](https://github.com/ronfarkash), [Gal Rosenberg](https://github.com/galrose)| [![GitHub](https://img.shields.io/badge/license-MIT-green.svg)](https://opensource.org/licenses/MIT) | Native Node.js client | diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/concepts-architecture-overview.md b/site2/website/versioned_docs/version-2.10.1-deprecated/concepts-architecture-overview.md deleted file mode 100644 index 5f2fb2ea991670..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/concepts-architecture-overview.md +++ /dev/null @@ -1,176 +0,0 @@ ---- -id: concepts-architecture-overview -title: Architecture Overview -sidebar_label: "Architecture" -original_id: concepts-architecture-overview ---- - -At the highest level, a Pulsar instance is composed of one or more Pulsar clusters. Clusters within an instance can [replicate](concepts-replication.md) data amongst themselves. - -In a Pulsar cluster: - -* One or more brokers handle and load balance incoming messages from producers, dispatch messages to consumers, communicate with the Pulsar configuration store to handle various coordination tasks, store messages in BookKeeper instances (aka bookies), rely on a cluster-specific ZooKeeper cluster for certain tasks, and more. -* A BookKeeper cluster consisting of one or more bookies handles [persistent storage](#persistent-storage) of messages.
-* A ZooKeeper cluster specific to that cluster handles coordination tasks between Pulsar clusters. - -The diagram below provides an illustration of a Pulsar cluster: - -![Pulsar architecture diagram](/assets/pulsar-system-architecture.png) - -At the broader instance level, an instance-wide ZooKeeper cluster called the configuration store handles coordination tasks involving multiple clusters, for example [geo-replication](concepts-replication.md). - -## Brokers - -The Pulsar message broker is a stateless component that's primarily responsible for running two other components: - -* An HTTP server that exposes a {@inject: rest:REST:/} API for both administrative tasks and [topic lookup](concepts-clients.md#client-setup-phase) for producers and consumers. The producers connect to the brokers to publish messages and the consumers connect to the brokers to consume the messages. -* A dispatcher, which is an asynchronous TCP server over a custom [binary protocol](developing-binary-protocol.md) used for all data transfers - -Messages are typically dispatched out of a [managed ledger](#managed-ledgers) cache for the sake of performance, *unless* the backlog exceeds the cache size. If the backlog grows too large for the cache, the broker will start reading entries from BookKeeper. - -Finally, to support geo-replication on global topics, the broker manages replicators that tail the entries published in the local region and republish them to the remote region using the Pulsar [Java client library](client-libraries-java.md). - -> For a guide to managing Pulsar brokers, see the [brokers](admin-api-brokers.md) guide. - -## Clusters - -A Pulsar instance consists of one or more Pulsar *clusters*. Clusters, in turn, consist of: - -* One or more Pulsar [brokers](#brokers) -* A ZooKeeper quorum used for cluster-level configuration and coordination -* An ensemble of bookies used for [persistent storage](#persistent-storage) of messages - -Clusters can replicate amongst themselves using [geo-replication](concepts-replication.md). - -> For a guide to managing Pulsar clusters, see the [clusters](admin-api-clusters.md) guide. - -## Metadata store - -The Pulsar metadata store maintains all the metadata of a Pulsar cluster, such as topic metadata, schema, broker load data, and so on. Pulsar uses [Apache ZooKeeper](https://zookeeper.apache.org/) for metadata storage, cluster configuration, and coordination. The Pulsar metadata store can be deployed on a separate ZooKeeper cluster or deployed on an existing ZooKeeper cluster. You can use one ZooKeeper cluster for both Pulsar metadata store and BookKeeper metadata store. If you want to deploy Pulsar brokers connected to an existing BookKeeper cluster, you need to deploy separate ZooKeeper clusters for Pulsar metadata store and BookKeeper metadata store respectively. - -> Pulsar also supports more metadata backend services, including [ETCD](https://etcd.io/) and [RocksDB](http://rocksdb.org/) (for standalone Pulsar only). - - -In a Pulsar instance: - -* A configuration store quorum stores configuration for tenants, namespaces, and other entities that need to be globally consistent. -* Each cluster has its own local ZooKeeper ensemble that stores cluster-specific configuration and coordination such as which brokers are responsible for which topics as well as ownership metadata, broker load reports, BookKeeper ledger metadata, and more. 
- -## Configuration store - -The configuration store maintains all the configurations of a Pulsar instance, such as clusters, tenants, namespaces, partitioned topic related configurations, and so on. A Pulsar instance can have a single local cluster, multiple local clusters, or multiple cross-region clusters. Consequently, the configuration store can share the configurations across multiple clusters under a Pulsar instance. The configuration store can be deployed on a separate ZooKeeper cluster or deployed on an existing ZooKeeper cluster. - -## Persistent storage - -Pulsar provides guaranteed message delivery for applications. If a message successfully reaches a Pulsar broker, it will be delivered to its intended target. - -This guarantee requires that non-acknowledged messages are stored in a durable manner until they can be delivered to and acknowledged by consumers. This mode of messaging is commonly called *persistent messaging*. In Pulsar, N copies of all messages are stored and synced on disk, for example 4 copies across two servers with mirrored [RAID](https://en.wikipedia.org/wiki/RAID) volumes on each server. - -### Apache BookKeeper - -Pulsar uses a system called [Apache BookKeeper](http://bookkeeper.apache.org/) for persistent message storage. BookKeeper is a distributed [write-ahead log](https://en.wikipedia.org/wiki/Write-ahead_logging) (WAL) system that provides a number of crucial advantages for Pulsar: - -* It enables Pulsar to utilize many independent logs, called [ledgers](#ledgers). Multiple ledgers can be created for topics over time. -* It offers very efficient storage for sequential data that handles entry replication. -* It guarantees read consistency of ledgers in the presence of various system failures. -* It offers even distribution of I/O across bookies. -* It's horizontally scalable in both capacity and throughput. Capacity can be immediately increased by adding more bookies to a cluster. -* Bookies are designed to handle thousands of ledgers with concurrent reads and writes. By using multiple disk devices---one for journal and another for general storage--bookies are able to isolate the effects of read operations from the latency of ongoing write operations. - -In addition to message data, *cursors* are also persistently stored in BookKeeper. Cursors are [subscription](reference-terminology.md#subscription) positions for [consumers](reference-terminology.md#consumer). BookKeeper enables Pulsar to store consumer position in a scalable fashion. - -At the moment, Pulsar supports persistent message storage. This accounts for the `persistent` in all topic names. Here's an example: - -```http - -persistent://my-tenant/my-namespace/my-topic - -``` - -> Pulsar also supports ephemeral ([non-persistent](concepts-messaging.md#non-persistent-topics)) message storage. - - -You can see an illustration of how brokers and bookies interact in the diagram below: - -![Brokers and bookies](/assets/broker-bookie.png) - - -### Ledgers - -A ledger is an append-only data structure with a single writer that is assigned to multiple BookKeeper storage nodes, or bookies. Ledger entries are replicated to multiple bookies. Ledgers themselves have very simple semantics: - -* A Pulsar broker can create a ledger, append entries to the ledger, and close the ledger. -* After the ledger has been closed---either explicitly or because the writer process crashed---it can then be opened only in read-only mode. 
-* Finally, when entries in the ledger are no longer needed, the whole ledger can be deleted from the system (across all bookies). - -#### Ledger read consistency - -The main strength of Bookkeeper is that it guarantees read consistency in ledgers in the presence of failures. Since the ledger can only be written to by a single process, that process is free to append entries very efficiently, without need to obtain consensus. After a failure, the ledger will go through a recovery process that will finalize the state of the ledger and establish which entry was last committed to the log. After that point, all readers of the ledger are guaranteed to see the exact same content. - -#### Managed ledgers - -Given that Bookkeeper ledgers provide a single log abstraction, a library was developed on top of the ledger called the *managed ledger* that represents the storage layer for a single topic. A managed ledger represents the abstraction of a stream of messages with a single writer that keeps appending at the end of the stream and multiple cursors that are consuming the stream, each with its own associated position. - -Internally, a single managed ledger uses multiple BookKeeper ledgers to store the data. There are two reasons to have multiple ledgers: - -1. After a failure, a ledger is no longer writable and a new one needs to be created. -2. A ledger can be deleted when all cursors have consumed the messages it contains. This allows for periodic rollover of ledgers. - -### Journal storage - -In BookKeeper, *journal* files contain BookKeeper transaction logs. Before making an update to a [ledger](#ledgers), a bookie needs to ensure that a transaction describing the update is written to persistent (non-volatile) storage. A new journal file is created once the bookie starts or the older journal file reaches the journal file size threshold (configured using the [`journalMaxSizeMB`](reference-configuration.md#bookkeeper-journalMaxSizeMB) parameter). - -## Pulsar proxy - -One way for Pulsar clients to interact with a Pulsar [cluster](#clusters) is by connecting to Pulsar message [brokers](#brokers) directly. In some cases, however, this kind of direct connection is either infeasible or undesirable because the client doesn't have direct access to broker addresses. If you're running Pulsar in a cloud environment or on [Kubernetes](https://kubernetes.io) or an analogous platform, for example, then direct client connections to brokers are likely not possible. - -The **Pulsar proxy** provides a solution to this problem by acting as a single gateway for all of the brokers in a cluster. If you run the Pulsar proxy (which, again, is optional), all client connections with the Pulsar cluster will flow through the proxy rather than communicating with brokers. - -> For the sake of performance and fault tolerance, you can run as many instances of the Pulsar proxy as you'd like. - -Architecturally, the Pulsar proxy gets all the information it requires from ZooKeeper. When starting the proxy on a machine, you only need to provide metadata store connection strings for the cluster-specific and instance-wide configuration store clusters. Here's an example: - -```bash - -$ cd /path/to/pulsar/directory -$ bin/pulsar proxy \ - --metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181 \ - --configuration-metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181 - -``` - -> #### Pulsar proxy docs -> For documentation on using the Pulsar proxy, see the [Pulsar proxy admin documentation](administration-proxy.md). 
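From a client's point of view, the proxy endpoint is used exactly like a broker service URL; a sketch (the hostname is illustrative):

```python

from pulsar import Client

# No proxy-specific configuration is needed; the client simply
# points its service URL at the proxy instead of a broker.
client = Client('pulsar://pulsar-proxy.example.com:6650')

```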
- - -Some important things to know about the Pulsar proxy: - -* Connecting clients don't need to provide *any* specific configuration to use the Pulsar proxy. You won't need to update the client configuration for existing applications beyond updating the IP used for the service URL (for example if you're running a load balancer over the Pulsar proxy). -* [TLS encryption](security-tls-transport.md) and [authentication](security-tls-authentication.md) is supported by the Pulsar proxy - -## Service discovery - -[Clients](getting-started-clients.md) connecting to Pulsar brokers need to be able to communicate with an entire Pulsar instance using a single URL. - -You can use your own service discovery system if you'd like. If you use your own system, there is just one requirement: when a client performs an HTTP request to an endpoint, such as `http://pulsar.us-west.example.com:8080`, the client needs to be redirected to *some* active broker in the desired cluster, whether via DNS, an HTTP or IP redirect, or some other means. - -The diagram below illustrates Pulsar service discovery: - -![alt-text](/assets/pulsar-service-discovery.png) - -In this diagram, the Pulsar cluster is addressable via a single DNS name: `pulsar-cluster.acme.com`. A [Python client](client-libraries-python.md), for example, could access this Pulsar cluster like this: - -```python - -from pulsar import Client - -client = Client('pulsar://pulsar-cluster.acme.com:6650') - -``` - -:::note - -In Pulsar, each topic is handled by only one broker. Initial requests from a client to read, update or delete a topic are sent to a broker that may not be the topic owner. If the broker cannot handle the request for this topic, it redirects the request to the appropriate broker. - -::: - diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/concepts-authentication.md b/site2/website/versioned_docs/version-2.10.1-deprecated/concepts-authentication.md deleted file mode 100644 index f6307890c904a7..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/concepts-authentication.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -id: concepts-authentication -title: Authentication and Authorization -sidebar_label: "Authentication and Authorization" -original_id: concepts-authentication ---- - -Pulsar supports a pluggable [authentication](security-overview.md) mechanism which can be configured at the proxy and/or the broker. Pulsar also supports a pluggable [authorization](security-authorization.md) mechanism. These mechanisms work together to identify the client and its access rights on topics, namespaces and tenants. - diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/concepts-clients.md b/site2/website/versioned_docs/version-2.10.1-deprecated/concepts-clients.md deleted file mode 100644 index 4040624f7d6366..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/concepts-clients.md +++ /dev/null @@ -1,92 +0,0 @@ ---- -id: concepts-clients -title: Pulsar Clients -sidebar_label: "Clients" -original_id: concepts-clients ---- - -Pulsar exposes a client API with language bindings for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python.md), [C++](client-libraries-cpp.md) and [C#](client-libraries-dotnet.md). The client API optimizes and encapsulates Pulsar's client-broker communication protocol and exposes a simple and intuitive API for use by applications. 
- -Under the hood, the current official Pulsar client libraries support transparent reconnection and/or connection failover to brokers, queuing of messages until acknowledged by the broker, and heuristics such as connection retries with backoff. - -> **Custom client libraries** -> If you'd like to create your own client library, we recommend consulting the documentation on Pulsar's custom [binary protocol](developing-binary-protocol.md). - - -## Client setup phase - -Before an application creates a producer/consumer, the Pulsar client library needs to initiate a setup phase including two steps: - -1. The client attempts to determine the owner of the topic by sending an HTTP lookup request to the broker. The request could reach one of the active brokers which, by looking at the (cached) zookeeper metadata knows who is serving the topic or, in case nobody is serving it, tries to assign it to the least loaded broker. -1. Once the client library has the broker address, it creates a TCP connection (or reuse an existing connection from the pool) and authenticates it. Within this connection, client and broker exchange binary commands from a custom protocol. At this point the client sends a command to create producer/consumer to the broker, which will comply after having validated the authorization policy. - -Whenever the TCP connection breaks, the client immediately re-initiates this setup phase and keeps trying with exponential backoff to re-establish the producer or consumer until the operation succeeds. - -## Reader interface - -In Pulsar, the "standard" [consumer interface](concepts-messaging.md#consumers) involves using consumers to listen on [topics](reference-terminology.md#topic), process incoming messages, and finally acknowledge those messages when they are processed. Whenever a new subscription is created, it is initially positioned at the end of the topic (by default), and consumers associated with that subscription begin reading with the first message created afterwards. Whenever a consumer connects to a topic using a pre-existing subscription, it begins reading from the earliest message un-acked within that subscription. In summary, with the consumer interface, subscription cursors are automatically managed by Pulsar in response to [message acknowledgements](concepts-messaging.md#acknowledgement). - -The **reader interface** for Pulsar enables applications to manually manage cursors. When you use a reader to connect to a topic---rather than a consumer---you need to specify *which* message the reader begins reading from when it connects to a topic. When connecting to a topic, the reader interface enables you to begin with: - -* The **earliest** available message in the topic -* The **latest** available message in the topic -* Some other message between the earliest and the latest. If you select this option, you'll need to explicitly provide a message ID. Your application will be responsible for "knowing" this message ID in advance, perhaps fetching it from a persistent data store or cache. - -The reader interface is helpful for use cases like using Pulsar to provide effectively-once processing semantics for a stream processing system. For this use case, it's essential that the stream processing system be able to "rewind" topics to a specific message and begin reading there. The reader interface provides Pulsar clients with the low-level abstraction necessary to "manually position" themselves within a topic. 
- -Internally, the reader interface is implemented as a consumer using an exclusive, non-durable subscription to the topic with a randomly allocated name. - -[ **IMPORTANT** ] - -Unlike subscriptions/consumers, readers are non-durable in nature and do not prevent data in a topic from being deleted, so it is ***strongly*** advised that [data retention](cookbooks-retention-expiry.md) be configured. If data retention for a topic is not configured for an adequate amount of time, messages that the reader has not yet read might be deleted, which causes the reader to essentially skip messages. Configuring data retention for a topic guarantees the reader a certain window in which to read messages. - -Please also note that a reader can have a "backlog", but the metric is only used for users to know how far behind the reader is. The metric is not considered for any backlog quota calculations. - -![The Pulsar consumer and reader interfaces](/assets/pulsar-reader-consumer-interfaces.png) - -Here's a Java example that begins reading from the earliest available message on a topic: - -```java - -import org.apache.pulsar.client.api.Message; -import org.apache.pulsar.client.api.MessageId; -import org.apache.pulsar.client.api.Reader; - -// Create a reader on a topic and for a specific message (and onward) -Reader reader = pulsarClient.newReader() -    .topic("reader-api-test") -    .startMessageId(MessageId.earliest) -    .create(); - -while (true) { -    Message message = reader.readNext(); - -    // Process the message -} - -``` - -To create a reader that reads from the latest available message: - -```java - -Reader reader = pulsarClient.newReader() -    .topic(topic) -    .startMessageId(MessageId.latest) -    .create(); - -``` - -To create a reader that reads from some message between the earliest and the latest: - -```java - -byte[] msgIdBytes = // Some byte array -MessageId id = MessageId.fromByteArray(msgIdBytes); -Reader reader = pulsarClient.newReader() -    .topic(topic) -    .startMessageId(id) -    .create(); - -``` - diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/concepts-messaging.md b/site2/website/versioned_docs/version-2.10.1-deprecated/concepts-messaging.md deleted file mode 100644 index 3470a8e9480e53..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/concepts-messaging.md +++ /dev/null @@ -1,989 +0,0 @@ ---- -id: concepts-messaging -title: Messaging -sidebar_label: "Messaging" -original_id: concepts-messaging ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -Pulsar is built on the [publish-subscribe](https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern) pattern (often abbreviated to pub-sub). In this pattern, [producers](#producers) publish messages to [topics](#topics); [consumers](#consumers) [subscribe](#subscription-types) to those topics, process incoming messages, and send [acknowledgements](#acknowledgement) to the broker when processing is finished. - -When a subscription is created, Pulsar [retains](concepts-architecture-overview.md#persistent-storage) all messages, even if the consumer is disconnected. The retained messages are discarded only when a consumer acknowledges that all these messages are processed successfully. - -If the consumption of a message fails and you want this message to be consumed again, you can enable the [message redelivery mechanism](#message-redelivery) to request the broker to resend this message. - -## Messages - -Messages are the basic "unit" of Pulsar.
The following table lists the components of messages. - -Component | Description -:---------|:------- -Value / data payload | The data carried by the message. All Pulsar messages contain raw bytes, although message data can also conform to data [schemas](schema-get-started.md). -Key | Messages are optionally tagged with keys, which is useful for things like [topic compaction](concepts-topic-compaction.md). -Properties | An optional key/value map of user-defined properties. -Producer name | The name of the producer who produces the message. If you do not specify a producer name, the default name is used. -Topic name | The name of the topic that the message is published to. -Schema version | The version number of the schema that the message is produced with. -Sequence ID | Each Pulsar message belongs to an ordered sequence on its topic. The sequence ID of a message is initially assigned by its producer, indicating its order in that sequence, and can also be customized.
    Sequence ID can be used for message deduplication. If `brokerDeduplicationEnabled` is set to `true`, the sequence ID of each message is unique within a producer of a topic (non-partitioned) or a partition. -Message ID | The message ID of a message is assigned by bookies as soon as the message is persistently stored. Message ID indicates a message’s specific position in a ledger and is unique within a Pulsar cluster. -Publish time | The timestamp of when the message is published. The timestamp is automatically applied by the producer. -Event time | An optional timestamp attached to a message by applications. For example, applications attach a timestamp indicating when the message was processed. If the event time is not set, the value is `0`. - -By default, the maximum size of a message is 5 MB. You can configure the maximum size of a message with the following configurations. - -- In the `broker.conf` file. - -  ```bash - -  # The max size of a message (in bytes). -  maxMessageSize=5242880 - -  ``` - -- In the `bookkeeper.conf` file. - -  ```bash - -  # The max size of the netty frame (in bytes). Any messages received larger than this value are rejected. The default value is 5 MB. -  nettyMaxFrameSizeBytes=5253120 - -  ``` - -> For more information on Pulsar messages, see Pulsar [binary protocol](developing-binary-protocol.md). - -## Producers - -A producer is a process that attaches to a topic and publishes messages to a Pulsar [broker](reference-terminology.md#broker). The Pulsar broker processes the messages. - -### Send modes - -Producers send messages to brokers synchronously (sync) or asynchronously (async). - -| Mode | Description | -|:-----------|-----------| -| Sync send | The producer waits for an acknowledgement from the broker after sending every message. If the acknowledgement is not received, the producer treats the sending operation as a failure. | -| Async send | The producer puts a message in a blocking queue and returns immediately. The client library sends the message to the broker in the background. If the queue is full (you can [configure](reference-configuration.md#broker) the maximum size), the producer is blocked or fails immediately when calling the API, depending on the arguments passed to the producer. | - -### Access mode - -You can have different types of access modes on topics for producers. - -|Access mode | Description -|---|--- -`Shared`|Multiple producers can publish on a topic.

    This is the **default** setting. -`Exclusive`|Only one producer can publish on a topic.

    If there is already a producer connected, other producers trying to publish on this topic get errors immediately.

    The "old" producer is evicted and a "new" producer is selected to be the next exclusive producer if the "old" producer experiences a network partition with the broker. -`WaitForExclusive`|If there is already a producer connected, the producer creation is pending (rather than timing out) until the producer gets the `Exclusive` access.

    The producer that succeeds in becoming the exclusive one is treated as the leader. Consequently, if you want to implement a leader election scheme for your application, you can use this access mode. - -:::note - -Once an application creates a producer with `Exclusive` or `WaitForExclusive` access mode successfully, the instance of this application is guaranteed to be the **only writer** to the topic. Any other producers trying to produce messages on this topic will either get errors immediately or have to wait until they acquire `Exclusive` access. -For more information, see [PIP 68: Exclusive Producer](https://github.com/apache/pulsar/wiki/PIP-68:-Exclusive-Producer). - -::: - -You can set the producer access mode through the Java client API. For more information, see `ProducerAccessMode` in the [ProducerBuilder.java](https://github.com/apache/pulsar/blob/fc5768ca3bbf92815d142fe30e6bfad70a1b4fc6/pulsar-client-api/src/main/java/org/apache/pulsar/client/api/ProducerBuilder.java) file. - - -### Compression - -You can compress messages published by producers during transportation. Pulsar currently supports the following types of compression: - -* [LZ4](https://github.com/lz4/lz4) -* [ZLIB](https://zlib.net/) -* [ZSTD](https://facebook.github.io/zstd/) -* [SNAPPY](https://google.github.io/snappy/) - -### Batching - -When batching is enabled, the producer accumulates and sends a batch of messages in a single request. The batch size is defined by the maximum number of messages and the maximum publish latency. Therefore, the backlog size represents the total number of batches instead of the total number of messages. - -In Pulsar, batches are tracked and stored as single units rather than as individual messages. The consumer unbundles a batch into individual messages. However, scheduled messages (configured through the `deliverAt` or the `deliverAfter` parameter) are always sent as individual messages even when batching is enabled. - -In general, a batch is acknowledged when all of its messages are acknowledged by a consumer. This means that if **not all** messages in a batch are acknowledged, unexpected failures, negative acknowledgements, or acknowledgement timeouts can result in redelivery of all messages in the batch. - -To avoid redelivering acknowledged messages in a batch to the consumer, Pulsar has supported batch index acknowledgement since 2.6.0. When batch index acknowledgement is enabled, the consumer filters out the batch indexes that have been acknowledged and sends the batch index acknowledgement request to the broker. The broker maintains the batch index acknowledgement status and tracks the acknowledgement status of each batch index to avoid dispatching acknowledged messages to the consumer. The batch is deleted when all indexes of the messages in it are acknowledged. - -By default, batch index acknowledgement is disabled (`acknowledgmentAtBatchIndexLevelEnabled=false`). You can enable batch index acknowledgement by setting the `acknowledgmentAtBatchIndexLevelEnabled` parameter to `true` at the broker side. Enabling batch index acknowledgement adds memory overhead. - -### Chunking -Message chunking enables Pulsar to process large payload messages by splitting the message into chunks at the producer side and aggregating chunked messages at the consumer side. - -With message chunking enabled, when the size of a message exceeds the allowed maximum payload size (the broker's `maxMessageSize` parameter), the workflow of messaging is as follows: -1.
The producer splits the original message into chunked messages and publishes them with chunked metadata to the broker separately and in order. -2. The broker stores the chunked messages in one managed-ledger in the same way as that of ordinary messages, and it uses the `chunkedMessageRate` parameter to record chunked message rate on the topic. -3. The consumer buffers the chunked messages and aggregates them into the receiver queue when it receives all the chunks of a message. -4. The client consumes the aggregated message from the receiver queue. - -**Limitations:** -- Chunking is only available for persisted topics. -- Chunking is only available for the exclusive and failover subscription types. -- Chunking cannot be enabled simultaneously with batching. - -#### Handle consecutive chunked messages with one ordered consumer - -The following figure shows a topic with one producer which publishes a large message payload in chunked messages along with regular non-chunked messages. The producer publishes message M1 in three chunks labeled M1-C1, M1-C2 and M1-C3. The broker stores all the three chunked messages in the managed-ledger and dispatches them to the ordered (exclusive/failover) consumer in the same order. The consumer buffers all the chunked messages in memory until it receives all the chunked messages, aggregates them into one message and then hands over the original message M1 to the client. - -![](/assets/chunking-01.png) - -#### Handle interwoven chunked messages with one ordered consumer - -When multiple producers publish chunked messages into a single topic, the broker stores all the chunked messages coming from different producers in the same managed-ledger. The chunked messages in the managed-ledger can be interwoven with each other. As shown below, Producer 1 publishes message M1 in three chunks M1-C1, M1-C2 and M1-C3. Producer 2 publishes message M2 in three chunks M2-C1, M2-C2 and M2-C3. All chunked messages of the specific message are still in order but might not be consecutive in the managed-ledger. - -![](/assets/chunking-02.png) - -:::note - -In this case, interwoven chunked messages may bring some memory pressure to the consumer because the consumer keeps a separate buffer for each large message to aggregate all its chunks in one message. You can limit the maximum number of chunked messages a consumer maintains concurrently by configuring the `maxPendingChunkedMessage` parameter. When the threshold is reached, the consumer drops pending messages by silently acknowledging them or asking the broker to redeliver them later, optimizing memory utilization. - -::: - -#### Enable Message Chunking - -**Prerequisite:** Disable batching by setting the `enableBatching` parameter to `false`. - -The message chunking feature is OFF by default. -To enable message chunking, set the `chunkingEnabled` parameter to `true` when creating a producer. - -:::note - -If the consumer fails to receive all chunks of a message within a specified time period, it expires incomplete chunks. The default value is 1 minute. For more information about the `expireTimeOfIncompleteChunkedMessage` parameter, refer to [org.apache.pulsar.client.api](/api/client/). - -::: - -## Consumers - -A consumer is a process that attaches to a topic via a subscription and then receives messages. - -A consumer sends a [flow permit request](developing-binary-protocol.md#flow-control) to a broker to get messages. There is a queue at the consumer side to receive messages pushed from the broker. 
You can configure the queue size with the [`receiverQueueSize`](client-libraries-java.md#configure-consumer) parameter. The default size is `1000`. Each time `consumer.receive()` is called, a message is dequeued from the buffer. - -### Receive modes - -Messages are received from [brokers](reference-terminology.md#broker) either synchronously (sync) or asynchronously (async). - -| Mode | Description | -|:--------------|:--------------| -| Sync receive | A sync receive is blocked until a message is available. | -| Async receive | An async receive returns immediately with a future value—for example, a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture) in Java—that completes once a new message is available. | - -### Listeners - -Client libraries provide listener implementations for consumers. For example, the [Java client](client-libraries-java.md) provides a {@inject: javadoc:MessageListener:/client/org/apache/pulsar/client/api/MessageListener} interface. In this interface, the `received` method is called whenever a new message is received. - -### Acknowledgement - -The consumer sends an acknowledgement request to the broker after it consumes a message successfully. The consumed message is then retained and deleted only after all the subscriptions have acknowledged it. If you want to store messages that have been acknowledged by a consumer, you need to configure the [message retention policy](concepts-messaging.md#message-retention-and-expiry). - -For batch messages, you can enable batch index acknowledgement to avoid dispatching acknowledged messages to the consumer. For details about batch index acknowledgement, see [batching](#batching). - -Messages can be acknowledged in one of the following two ways: - -- Being acknowledged individually. With individual acknowledgement, the consumer acknowledges each message and sends an acknowledgement request to the broker. -- Being acknowledged cumulatively. With cumulative acknowledgement, the consumer **only** acknowledges the last message it received. All messages in the stream up to (and including) the provided message are not redelivered to that consumer. - -If you want to acknowledge messages individually, you can use the following API. - -```java - -consumer.acknowledge(msg); - -``` - -If you want to acknowledge messages cumulatively, you can use the following API. - -```java - -consumer.acknowledgeCumulative(msg); - -``` - -:::note - -Cumulative acknowledgement cannot be used with the [Shared subscription type](#subscription-types), because the Shared subscription type involves multiple consumers that have access to the same subscription. With the Shared subscription type, messages are acknowledged individually. - -::: - -### Negative acknowledgement - -The [negative acknowledgement](#negative-acknowledgement) mechanism allows you to send a notification to the broker indicating that the consumer did not process a message. When a consumer fails to consume a message and needs to re-consume it, the consumer sends a negative acknowledgement (nack) to the broker, triggering the broker to redeliver this message to the consumer. - -Messages are negatively acknowledged individually or cumulatively, depending on the consumption subscription type.
- -In Exclusive and Failover subscription types, consumers only negatively acknowledge the last message they receive. - -In Shared and Key_Shared subscription types, consumers can negatively acknowledge messages individually. - -Be aware that negative acknowledgements on ordered subscription types, such as Exclusive, Failover and Key_Shared, might cause failed messages to be sent to consumers out of their original order. - -If you are going to use negative acknowledgement on a message, make sure it is negatively acknowledged before the acknowledgement timeout. - -Use the following API to negatively acknowledge message consumption. - -```java - -Consumer consumer = pulsarClient.newConsumer() -    .topic(topic) -    .subscriptionName("sub-negative-ack") -    .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest) -    .negativeAckRedeliveryDelay(2, TimeUnit.SECONDS) // the default value is 1 min -    .subscribe(); - -Message message = consumer.receive(); - -// call the API to send negative acknowledgement -consumer.negativeAcknowledge(message); - -message = consumer.receive(); -consumer.acknowledge(message); - -``` - -To redeliver messages with different delays, you can use the **redelivery backoff mechanism**, which computes the redelivery delay from the number of retries. -Use the following API to enable `Negative Redelivery Backoff`. - -```java - -Consumer consumer = pulsarClient.newConsumer() -    .topic(topic) -    .subscriptionName("sub-negative-ack") -    .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest) -    .negativeAckRedeliveryBackoff(MultiplierRedeliveryBackoff.builder() -        .minDelayMs(1000) -        .maxDelayMs(60 * 1000) -        .build()) -    .subscribe(); - -``` - -With a minimum delay of 1 second, a maximum delay of 60 seconds, and the default multiplier of 2, the message redelivery behavior should be as follows. - -Redelivery count | Redelivery delay -:--------------------|:----------- -1 | 1 second -2 | 2 seconds -3 | 4 seconds -4 | 8 seconds -5 | 16 seconds -6 | 32 seconds -7 | 60 seconds -8 | 60 seconds - -:::note - -If batching is enabled, all messages in one batch are redelivered to the consumer. - -::: - -### Acknowledgement timeout - -The acknowledgement timeout mechanism allows you to set a time range during which the client tracks the unacknowledged messages. After this acknowledgement timeout (`ackTimeout`) period, the client sends a `redeliver unacknowledged messages` request to the broker, and the broker then resends the unacknowledged messages to the consumer. - -You can configure the acknowledgement timeout mechanism to redeliver the message if it is not acknowledged after `ackTimeout`, or to execute a timer task that checks for acknowledgement-timeout messages every `ackTimeoutTickTime` period. - -You can also use the redelivery backoff mechanism to redeliver messages with different delays based on the number of times the message has been retried. - -If you want to use redelivery backoff, you can use the following API. - -```java - -consumer.ackTimeout(10, TimeUnit.SECONDS) -    .ackTimeoutRedeliveryBackoff(MultiplierRedeliveryBackoff.builder() -        .minDelayMs(1000) -        .maxDelayMs(60000) -        .multiplier(2).build()) - -``` - -With a 10-second acknowledgement timeout, the message redelivery behavior should be as follows. - -Redelivery count | Redelivery delay -:--------------------|:----------- -1 | 10 + 1 seconds -2 | 10 + 2 seconds -3 | 10 + 4 seconds -4 | 10 + 8 seconds -5 | 10 + 16 seconds -6 | 10 + 32 seconds -7 | 10 + 60 seconds -8 | 10 + 60 seconds - -:::note - -- If batching is enabled, all messages in one batch are redelivered to the consumer.
-- Compared with acknowledgement timeout, negative acknowledgement is preferred. First, it is difficult to set a timeout value. Second, a broker resends messages when the message processing time exceeds the acknowledgement timeout, but these messages might not need to be re-consumed. - -::: - -Use the following API to enable acknowledgement timeout. - -```java - -Consumer consumer = pulsarClient.newConsumer() -    .topic(topic) -    .ackTimeout(2, TimeUnit.SECONDS) // the default value is 0 -    .ackTimeoutTickTime(1, TimeUnit.SECONDS) -    .subscriptionName("sub") -    .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest) -    .subscribe(); - -Message message = consumer.receive(); - -// wait at least 2 seconds -message = consumer.receive(); -consumer.acknowledge(message); - -``` - -### Retry letter topic - -The retry letter topic allows you to store the messages that failed to be consumed and retry consuming them later. With this method, you can customize the interval at which the messages are redelivered. Consumers on the original topic are automatically subscribed to the retry letter topic as well. Once the maximum number of retries has been reached, the unconsumed messages are moved to a [dead letter topic](#dead-letter-topic) for manual processing. - -The diagram below illustrates the concept of the retry letter topic. -![](/assets/retry-letter-topic.svg) - -The intention of using the retry letter topic is different from using [delayed message delivery](#delayed-message-delivery), even though both aim to consume a message later. The retry letter topic serves failure handling through message redelivery to ensure critical data is not lost, while delayed message delivery is intended to deliver a message after a specified delay. - -By default, automatic retry is disabled. You can set `enableRetry` to `true` to enable automatic retry on the consumer. - -Use the following API to consume messages from a retry letter topic. When the value of `maxRedeliverCount` is reached, the unconsumed messages are moved to a dead letter topic. - -```java - -Consumer consumer = pulsarClient.newConsumer(Schema.BYTES) -    .topic("my-topic") -    .subscriptionName("my-subscription") -    .subscriptionType(SubscriptionType.Shared) -    .enableRetry(true) -    .deadLetterPolicy(DeadLetterPolicy.builder() -        .maxRedeliverCount(maxRedeliveryCount) -        .build()) -    .subscribe(); - -``` - -The default retry letter topic uses this format: - -``` - -<topicname>-<subscriptionname>-RETRY - -``` - -Use the Java client to specify the name of the retry letter topic. - -```java - -Consumer consumer = pulsarClient.newConsumer(Schema.BYTES) -    .topic("my-topic") -    .subscriptionName("my-subscription") -    .subscriptionType(SubscriptionType.Shared) -    .enableRetry(true) -    .deadLetterPolicy(DeadLetterPolicy.builder() -        .maxRedeliverCount(maxRedeliveryCount) -        .retryLetterTopic("my-retry-letter-topic-name") -        .build()) -    .subscribe(); - -``` - -The messages in the retry letter topic contain some special properties that are automatically created by the client. - -Special property | Description -:--------------------|:----------- -`REAL_TOPIC` | The real topic name. -`ORIGIN_MESSAGE_ID` | The origin message ID. It is crucial for message tracking. -`RECONSUMETIMES` | The number of retries to consume messages. -`DELAY_TIME` | Message retry interval in milliseconds. - -**Example** - -``` - -REAL_TOPIC = persistent://public/default/my-topic -ORIGIN_MESSAGE_ID = 1:0:-1:0 -RECONSUMETIMES = 6 -DELAY_TIME = 3000 - -``` - -Use the following API to store messages in the retry letter topic.
- -```java - -consumer.reconsumeLater(msg, 3, TimeUnit.SECONDS); - -``` - -Use the following API to add custom properties for the `reconsumeLater` function. In the next attempt to consume, custom properties can be retrieved via `message#getProperty`. - -```java - -Map<String, String> customProperties = new HashMap<>(); -customProperties.put("custom-key-1", "custom-value-1"); -customProperties.put("custom-key-2", "custom-value-2"); -consumer.reconsumeLater(msg, customProperties, 3, TimeUnit.SECONDS); - -``` - -:::note - -* Currently, the retry letter topic is enabled in the Shared subscription type. -* Compared with negative acknowledgement, the retry letter topic is more suitable for messages that require a large number of retries with a configurable retry interval, because messages in the retry letter topic are persisted to BookKeeper, while messages that need to be retried due to negative acknowledgement are cached on the client side. - -::: - -### Dead letter topic - -The dead letter topic allows you to continue message consumption even if some messages are not consumed successfully. Messages that fail to be consumed are stored in a specific topic, which is called the dead letter topic. You can decide how to handle the messages in the dead letter topic. - -Enable the dead letter topic in a Java client using the default dead letter topic. - -```java - -Consumer consumer = pulsarClient.newConsumer(Schema.BYTES) -    .topic("my-topic") -    .subscriptionName("my-subscription") -    .subscriptionType(SubscriptionType.Shared) -    .deadLetterPolicy(DeadLetterPolicy.builder() -        .maxRedeliverCount(maxRedeliveryCount) -        .build()) -    .subscribe(); - -``` - -The default dead letter topic uses this format: - -``` - -<topicname>-<subscriptionname>-DLQ - -``` - -Use the Java client to specify the name of the dead letter topic. - -```java - -Consumer consumer = pulsarClient.newConsumer(Schema.BYTES) -    .topic("my-topic") -    .subscriptionName("my-subscription") -    .subscriptionType(SubscriptionType.Shared) -    .deadLetterPolicy(DeadLetterPolicy.builder() -        .maxRedeliverCount(maxRedeliveryCount) -        .deadLetterTopic("my-dead-letter-topic-name") -        .build()) -    .subscribe(); - -``` - -By default, a DLQ topic is created without any subscription. Without a just-in-time subscription to the DLQ topic, you may lose messages. To automatically create an initial subscription for the DLQ, you can specify the `initialSubscriptionName` parameter. If this parameter is set but the broker's `allowAutoSubscriptionCreation` is disabled, the DLQ producer will fail to be created. - -```java - -Consumer consumer = pulsarClient.newConsumer(Schema.BYTES) -    .topic("my-topic") -    .subscriptionName("my-subscription") -    .subscriptionType(SubscriptionType.Shared) -    .deadLetterPolicy(DeadLetterPolicy.builder() -        .maxRedeliverCount(maxRedeliveryCount) -        .deadLetterTopic("my-dead-letter-topic-name") -        .initialSubscriptionName("init-sub") -        .build()) -    .subscribe(); - -``` - -The dead letter topic serves message redelivery, which is triggered by [acknowledgement timeout](#acknowledgement-timeout), [negative acknowledgement](#negative-acknowledgement), or the [retry letter topic](#retry-letter-topic). -:::note - -* Currently, the dead letter topic is enabled in the Shared and Key_Shared subscription types. - -::: - -## Topics - -As in other pub-sub systems, topics in Pulsar are named channels for transmitting messages from producers to consumers.
Topic names are URLs that have a well-defined structure: - -```http - -{persistent|non-persistent}://tenant/namespace/topic - -``` - -Topic name component | Description -:--------------------|:----------- -`persistent` / `non-persistent` | This identifies the type of topic. Pulsar supports two kinds of topics: [persistent](concepts-architecture-overview.md#persistent-storage) and [non-persistent](#non-persistent-topics). The default is persistent, so if you do not specify a type, the topic is persistent. With persistent topics, all messages are durably persisted on disks (if the broker is not standalone, messages are durably persisted on multiple disks), whereas data for non-persistent topics is not persisted to storage disks. -`tenant` | The topic tenant within the instance. Tenants are essential to multi-tenancy in Pulsar, and can be spread across clusters. -`namespace` | The administrative unit of the topic, which acts as a grouping mechanism for related topics. Most topic configuration is performed at the [namespace](#namespaces) level. Each tenant has one or multiple namespaces. -`topic` | The final part of the name. Topic names have no special meaning in a Pulsar instance. - -> **No need to explicitly create new topics** -> You do not need to explicitly create topics in Pulsar. If a client attempts to write or receive messages to/from a topic that does not yet exist, Pulsar creates that topic under the namespace provided in the [topic name](#topics) automatically. -> If no tenant or namespace is specified when a client creates a topic, the topic is created in the default tenant and namespace. You can also create a topic in a specified tenant and namespace, such as `persistent://my-tenant/my-namespace/my-topic`. `persistent://my-tenant/my-namespace/my-topic` means the `my-topic` topic is created in the `my-namespace` namespace of the `my-tenant` tenant. - -## Namespaces - -A namespace is a logical nomenclature within a tenant. A tenant creates multiple namespaces via the [admin API](admin-api-namespaces.md#create). For instance, a tenant with different applications can create a separate namespace for each application. A namespace allows the application to create and manage a hierarchy of topics. For example, `my-tenant/app1` is the namespace for the application `app1` of the tenant `my-tenant`. You can create any number of [topics](#topics) under the namespace. - -## Subscriptions - -A subscription is a named configuration rule that determines how messages are delivered to consumers. Four subscription types are available in Pulsar: [exclusive](#exclusive), [shared](#shared), [failover](#failover), and [key_shared](#key_shared). These types are illustrated in the figure below. - -![Subscription types](/assets/pulsar-subscription-types.png) - -> **Pub-Sub or Queuing** -> In Pulsar, you can use different subscriptions flexibly. -> * If you want to achieve traditional "fan-out pub-sub messaging" among consumers, specify a unique subscription name for each consumer. This is the exclusive subscription type. -> * If you want to achieve "message queuing" among consumers, share the same subscription name among multiple consumers (shared, failover, key_shared). -> * If you want to achieve both effects simultaneously, combine the exclusive subscription type with other subscription types for consumers. - -### Subscription types - -When a subscription has no consumers, its subscription type is undefined.
The type of a subscription is defined when a consumer connects to it, and the type can be changed by restarting all consumers with a different configuration. - -#### Exclusive - -In the *Exclusive* type, only a single consumer is allowed to attach to the subscription. If multiple consumers subscribe to a topic using the same subscription, an error occurs. - -In the diagram below, only **Consumer A-0** is allowed to consume messages. - -> Exclusive is the default subscription type. - -![Exclusive subscriptions](/assets/pulsar-exclusive-subscriptions.png) - -#### Failover - -In the *Failover* type, multiple consumers can attach to the same subscription. A master consumer is picked for a non-partitioned topic, or for each partition of a partitioned topic, and receives messages. When the master consumer disconnects, all (non-acknowledged and subsequent) messages are delivered to the next consumer in line. - -For partitioned topics, the broker sorts consumers by priority level and by the lexicographical order of consumer names. The broker then tries to evenly assign partitions to the consumers with the highest priority level. - -For a non-partitioned topic, the broker picks consumers in the order in which they subscribed to the topic. - -In the diagram below, **Consumer-B-0** is the master consumer while **Consumer-B-1** would be the next consumer in line to receive messages if **Consumer-B-0** is disconnected. - -![Failover subscriptions](/assets/pulsar-failover-subscriptions.png) - -#### Shared - -In the *shared* or *round robin* type, multiple consumers can attach to the same subscription. Messages are delivered in a round-robin distribution across consumers, and any given message is delivered to only one consumer. When a consumer disconnects, all the messages that were sent to it and not acknowledged will be rescheduled for sending to the remaining consumers. - -In the diagram below, **Consumer-C-1** and **Consumer-C-2** are able to subscribe to the topic, but **Consumer-C-3** and others could as well. - -> **Limitations of Shared type** -> When using Shared type, be aware that: -> * Message ordering is not guaranteed. -> * You cannot use cumulative acknowledgement with Shared type. - -![Shared subscriptions](/assets/pulsar-shared-subscriptions.png) - -#### Key_Shared - -In the *Key_Shared* type, multiple consumers can attach to the same subscription. Messages are delivered in a distribution across consumers, and messages with the same key or the same ordering key are delivered to only one consumer. No matter how many times a message is redelivered, it is delivered to the same consumer. When a consumer connects or disconnects, the consumer serving some message keys changes. - -![Key_Shared subscriptions](/assets/pulsar-key-shared-subscriptions.png) - -Note that when the consumers are using the Key_Shared subscription type, you need to **disable batching** or **use key-based batching** for the producers. There are two reasons why key-based batching is necessary for the Key_Shared subscription type: -1. The broker dispatches messages according to the keys of the messages, but the default batching approach might fail to pack the messages with the same key into the same batch. -2. Since it is the consumers instead of the broker who dispatch the messages from the batches, the key of the first message in one batch is considered as the key of all messages in this batch, thereby leading to context errors. - -The key-based batching aims at resolving the above-mentioned issues.
This batching method ensures that the producers pack the messages with the same key to the same batch. The messages without a key are packed into one batch and this batch has no key. When the broker dispatches messages from this batch, it uses `NON_KEY` as the key. In addition, each consumer is associated with **only one** key and should receive **only one message batch** for the connected key. By default, you can limit batching by configuring the number of messages that producers are allowed to send. - -Below are examples of enabling the key-based batching under the Key_Shared subscription type, with `client` being the Pulsar client that you created. - -````mdx-code-block - - - -``` - -Producer producer = client.newProducer() - .topic("my-topic") - .batcherBuilder(BatcherBuilder.KEY_BASED) - .create(); - -``` - - - - -``` - -ProducerConfiguration producerConfig; -producerConfig.setBatchingType(ProducerConfiguration::BatchingType::KeyBasedBatching); -Producer producer; -client.createProducer("my-topic", producerConfig, producer); - -``` - - - - -``` - -producer = client.create_producer(topic='my-topic', batching_type=pulsar.BatchingType.KeyBased) - -``` - - - - -```` - -> **Limitations of Key_Shared type** -> When you use Key_Shared type, be aware that: -> * You need to specify a key or orderingKey for messages. -> * You cannot use cumulative acknowledgment with Key_Shared type. - -### Subscription modes - -#### What is a subscription mode - -The subscription mode indicates the cursor type. - -- When a subscription is created, an associated cursor is created to record the last consumed position. - -- When a consumer of the subscription restarts, it can continue consuming from the last message it consumes. - -Subscription mode | Description | Note -|---|---|--- -`Durable`|The cursor is durable, which retains messages and persists the current position.
    If a broker restarts from a failure, it can recover the cursor from the persistent storage (BookKeeper), so that messages can continue to be consumed from the last consumed position.|`Durable` is the **default** subscription mode. -`NonDurable`|The cursor is non-durable.
    Once a broker stops, the cursor is lost and can never be recovered, so messages **cannot** continue to be consumed from the last consumed position.|A reader's subscription mode is `NonDurable` in nature and does not prevent data in a topic from being deleted. A reader's subscription mode **cannot** be changed. - -A [subscription](concepts-messaging.md#subscriptions) can have one or more consumers. When a consumer subscribes to a topic, it must specify the subscription name. A durable subscription and a non-durable subscription can have the same name; they are independent of each other. If a consumer specifies a subscription that does not yet exist, the subscription is automatically created. - -#### When to use - -By default, messages of a topic without any durable subscriptions are marked as deleted. If you want to prevent messages from being marked as deleted, you can create a durable subscription for this topic. In this case, only acknowledged messages are marked as deleted. For more information, see [message retention and expiry](cookbooks-retention-expiry.md). - -#### How to use - -After a consumer is created, the default subscription mode of the consumer is `Durable`. You can change the subscription mode to `NonDurable` by making changes to the consumer’s configuration. - -````mdx-code-block - - - - -```java - - Consumer consumer = pulsarClient.newConsumer() -                  .topic("my-topic") -                  .subscriptionName("my-sub") -                  .subscriptionMode(SubscriptionMode.Durable) -                  .subscribe(); - -``` - - - - -```java - - Consumer consumer = pulsarClient.newConsumer() -                  .topic("my-topic") -                  .subscriptionName("my-sub") -                  .subscriptionMode(SubscriptionMode.NonDurable) -                  .subscribe(); - -``` - - - - -```` - -For how to create, check, or delete a durable subscription, see [manage subscriptions](admin-api-topics.md#manage-subscriptions). - -## Multi-topic subscriptions - -When a consumer subscribes to a Pulsar topic, by default it subscribes to one specific topic, such as `persistent://public/default/my-topic`. As of Pulsar version 1.23.0-incubating, however, Pulsar consumers can simultaneously subscribe to multiple topics. You can define a list of topics in two ways: - -* On the basis of a [**reg**ular **ex**pression](https://en.wikipedia.org/wiki/Regular_expression) (regex), for example `persistent://public/default/finance-.*` -* By explicitly defining a list of topics - -> When subscribing to multiple topics by regex, all topics must be in the same [namespace](#namespaces). - -When subscribing to multiple topics, the Pulsar client automatically makes a call to the Pulsar API to discover the topics that match the regex pattern/list, and then subscribes to all of them. If any of the topics do not exist, the consumer auto-subscribes to them once the topics are created. - -> **No ordering guarantees across multiple topics** -> When a producer sends messages to a single topic, all messages are guaranteed to be read from that topic in the same order. However, these guarantees do not hold across multiple topics. So when a producer sends messages to multiple topics, the order in which messages are read from those topics is not guaranteed to be the same. - -The following are multi-topic subscription examples for Java.
- -```java - -import java.util.regex.Pattern; - -import org.apache.pulsar.client.api.Consumer; -import org.apache.pulsar.client.api.PulsarClient; - -PulsarClient pulsarClient = // Instantiate Pulsar client object - -// Subscribe to all topics in a namespace -Pattern allTopicsInNamespace = Pattern.compile("persistent://public/default/.*"); -Consumer allTopicsConsumer = pulsarClient.newConsumer() -    .topicsPattern(allTopicsInNamespace) -    .subscriptionName("subscription-1") -    .subscribe(); - -// Subscribe to a subset of topics in a namespace, based on regex -Pattern someTopicsInNamespace = Pattern.compile("persistent://public/default/foo.*"); -Consumer someTopicsConsumer = pulsarClient.newConsumer() -    .topicsPattern(someTopicsInNamespace) -    .subscriptionName("subscription-1") -    .subscribe(); - -``` - -For code examples, see [Java](client-libraries-java.md#multi-topic-subscriptions). - -## Partitioned topics - -Normal topics are served only by a single broker, which limits the maximum throughput of the topic. *Partitioned topics* are a special type of topic that are handled by multiple brokers, thus allowing for higher throughput. - -A partitioned topic is actually implemented as N internal topics, where N is the number of partitions. When publishing messages to a partitioned topic, each message is routed to one of several brokers. The distribution of partitions across brokers is handled automatically by Pulsar. - -The diagram below illustrates this: - -![](/assets/partitioning.png) - -The **Topic1** topic has five partitions (**P0** through **P4**) split across three brokers. Because there are more partitions than brokers, two brokers handle two partitions apiece, while the third handles only one (again, Pulsar handles this distribution of partitions automatically). - -Messages for this topic are broadcast to two consumers. The [routing mode](#routing-modes) determines which partition each message is published to, while the [subscription type](#subscription-types) determines which messages go to which consumers. - -Decisions about routing and subscription modes can be made separately in most cases. In general, throughput concerns should guide partitioning/routing decisions while subscription decisions should be guided by application semantics. - -There is no difference between partitioned topics and normal topics in terms of how subscription types work, as partitioning only determines what happens between when a message is published by a producer and processed and acknowledged by a consumer. - -Partitioned topics need to be explicitly created via the [admin API](admin-api-overview.md). The number of partitions can be specified when creating the topic. - -### Routing modes - -When publishing to partitioned topics, you must specify a *routing mode*. The routing mode determines which partition---that is, which internal topic---each message should be published to. - -There are three {@inject: javadoc:MessageRoutingMode:/client/org/apache/pulsar/client/api/MessageRoutingMode} available: - -Mode | Description -:--------|:------------ -`RoundRobinPartition` | If no key is provided, the producer publishes messages across all partitions in round-robin fashion to achieve maximum throughput. Please note that round-robin is not done per individual message; rather, it is aligned with the batching delay boundary to ensure batching is effective. If a key is specified on the message, the partitioned producer hashes the key and assigns the message to a particular partition.
This is the default mode. -`SinglePartition` | If no key is provided, the producer randomly picks one single partition and publishes all the messages into that partition. If a key is specified on the message, the partitioned producer hashes the key and assigns the message to a particular partition. -`CustomPartition` | Uses a custom message router implementation that is called to determine the partition for a particular message. Users can create a custom routing mode by using the [Java client](client-libraries-java.md) and implementing the {@inject: javadoc:MessageRouter:/client/org/apache/pulsar/client/api/MessageRouter} interface. - -### Ordering guarantee - -The ordering of messages is related to the MessageRoutingMode and the message key. Usually, users want a per-key-partition ordering guarantee. - -If a key is attached to a message, the message is routed to the corresponding partition based on the hashing scheme specified by {@inject: javadoc:HashingScheme:/client/org/apache/pulsar/client/api/HashingScheme} in {@inject: javadoc:ProducerBuilder:/client/org/apache/pulsar/client/api/ProducerBuilder}, when using either `SinglePartition` or `RoundRobinPartition` mode. - -Ordering guarantee | Description | Routing Mode and Key -:------------------|:------------|:------------ -Per-key-partition | All the messages with the same key will be in order and be placed in the same partition. | Use either `SinglePartition` or `RoundRobinPartition` mode, and a key is provided with each message. -Per-producer | All the messages from the same producer will be in order. | Use `SinglePartition` mode, and no key is provided for each message. - -### Hashing scheme - -{@inject: javadoc:HashingScheme:/client/org/apache/pulsar/client/api/HashingScheme} is an enum that represents the set of standard hashing functions available when choosing the partition to use for a particular message. - -Two standard hashing functions are available: `JavaStringHash` and `Murmur3_32Hash`. -The default hashing function for a producer is `JavaStringHash`. -Note that `JavaStringHash` is not suitable when producers are written in multiple client languages; in that case, `Murmur3_32Hash` is recommended. - - - -## Non-persistent topics - - -By default, Pulsar persistently stores *all* unacknowledged messages on multiple [BookKeeper](concepts-architecture-overview.md#persistent-storage) bookies (storage nodes). Data for messages on persistent topics can thus survive broker restarts and subscriber failover. - -Pulsar also, however, supports **non-persistent topics**, which are topics on which messages are *never* persisted to disk and live only in memory. When using non-persistent delivery, killing a Pulsar broker or disconnecting a subscriber from a topic means that all in-transit messages are lost on that (non-persistent) topic, meaning that clients may see message loss. - -Non-persistent topics have names of this form (note the `non-persistent` in the name): - -```http - -non-persistent://tenant/namespace/topic - -``` - -> For more info on using non-persistent topics, see the [Non-persistent messaging cookbook](cookbooks-non-persistent.md). - -In non-persistent topics, brokers immediately deliver messages to all connected subscribers *without persisting them* in [BookKeeper](concepts-architecture-overview.md#persistent-storage).
If a subscriber is disconnected, the broker will not be able to deliver those in-transit messages, and subscribers will never be able to receive those messages again. Eliminating the persistent storage step makes messaging on non-persistent topics slightly faster than on persistent topics in some cases, but with the caveat that some of the core benefits of Pulsar are lost. - -> With non-persistent topics, message data lives only in memory. If a message broker fails or message data can otherwise not be retrieved from memory, your message data may be lost. Use non-persistent topics only if you're *certain* that your use case requires it and can sustain it. - -By default, non-persistent topics are enabled on Pulsar brokers. You can disable them in the broker's [configuration](reference-configuration.md#broker-enableNonPersistentTopics). You can manage non-persistent topics using the `pulsar-admin topics` command. For more information, see [`pulsar-admin`](/tools/pulsar-admin/). - -### Performance - -Non-persistent messaging is usually faster than persistent messaging because brokers don't persist messages and immediately send acks back to the producer as soon as that message is delivered to connected brokers. Producers thus see comparatively low publish latency with non-persistent topics. - -### Client API - -Producers and consumers can connect to non-persistent topics in the same way as persistent topics, with the crucial difference that the topic name must start with `non-persistent`. All three subscription types---[exclusive](#exclusive), [shared](#shared), and [failover](#failover)---are supported for non-persistent topics. - -Here's an example [Java consumer](client-libraries-java.md#consumers) for a non-persistent topic: - -```java - -PulsarClient client = PulsarClient.builder() -        .serviceUrl("pulsar://localhost:6650") -        .build(); -String npTopic = "non-persistent://public/default/my-topic"; -String subscriptionName = "my-subscription-name"; - -Consumer consumer = client.newConsumer() -        .topic(npTopic) -        .subscriptionName(subscriptionName) -        .subscribe(); - -``` - -Here's an example [Java producer](client-libraries-java.md#producer) for the same non-persistent topic: - -```java - -Producer producer = client.newProducer() -        .topic(npTopic) -        .create(); - -``` - - -## System topic - -A system topic is a predefined topic for internal use within Pulsar. It can be either a persistent or a non-persistent topic. - -System topics serve to implement certain features and eliminate dependencies on third-party components, such as transactions, heartbeat detections, topic-level policies, and resource group services. System topics empower the implementation of these features to be simplified, independent, and flexible. Take heartbeat detection for example: you can leverage the system topic for health checks to internally enable a producer/reader to produce/consume messages under the heartbeat namespace, which can detect whether the current service is still alive. - -Different system topics exist depending on the namespace. The following table outlines the available system topics for each specific namespace.
- -| Namespace | TopicName | Domain | Count | Usage | -|-----------|-----------|--------|-------|-------| -| pulsar/system | `transaction_coordinator_assign_${id}` | Persistent | Default 16 | Transaction coordinator | -| pulsar/system | `_transaction_log${tc_id}` | Persistent | Default 16 | Transaction log | -| pulsar/system | `resource-usage` | Non-persistent | Default 4 | Resource group service | -| host/port | `heartbeat` | Persistent | 1 | Heartbeat detection | -| User-defined-ns | [`__change_events`](concepts-multi-tenancy.md#namespace-change-events-and-topic-level-policies) | Persistent | Default 4 | Topic events | -| User-defined-ns | `__transaction_buffer_snapshot` | Persistent | One per namespace | Transaction buffer snapshots | -| User-defined-ns | `${topicName}__transaction_pending_ack` | Persistent | One per every topic subscription acknowledged with transactions | Acknowledgements with transactions | - -:::note - -* You cannot create any system topics. -* By default, system topics are disabled. To enable system topics, you need to change the following configurations in the `conf/broker.conf` or `conf/standalone.conf` file. - -  ```conf -  systemTopicEnabled=true -  topicLevelPoliciesEnabled=true -  ``` - -::: - - -## Message redelivery - -Apache Pulsar supports graceful failure handling and ensures critical data is not lost. Software will always have unexpected conditions, and at times messages may not be delivered successfully. Therefore, it is important to have a built-in mechanism that handles failure, particularly in asynchronous messaging, as highlighted in the following examples. - -- Consumers get disconnected from downstream systems, for example when the database is temporarily offline while the consumer is writing data to it, or the external HTTP server that the consumer calls is momentarily unavailable. -- Consumers get disconnected from a broker due to consumer crashes, broken connections, etc. As a consequence, the unacknowledged messages are delivered to other available consumers. - -Apache Pulsar avoids these and other message delivery failures using at-least-once delivery semantics, which guarantee that every message is delivered at least once, even if that means a message may be processed more than once. - -To use message redelivery, you need to enable the mechanism in the Pulsar client so that the broker can resend unacknowledged messages. You can activate the message redelivery mechanism in Apache Pulsar using three methods. - -- [Negative Acknowledgment](#negative-acknowledgement) -- [Acknowledgement Timeout](#acknowledgement-timeout) -- [Retry letter topic](#retry-letter-topic) - - -## Message retention and expiry - -By default, Pulsar message brokers: - -* immediately delete *all* messages that have been acknowledged by a consumer, and -* [persistently store](concepts-architecture-overview.md#persistent-storage) all unacknowledged messages in a message backlog. - -Pulsar has two features, however, that enable you to override this default behavior: - -* Message **retention** enables you to store messages that have been acknowledged by a consumer -* Message **expiry** enables you to set a time to live (TTL) for messages that have not yet been acknowledged - -> All message retention and expiry is managed at the [namespace](#namespaces) level. For a how-to, see the [Message retention and expiry](cookbooks-retention-expiry.md) cookbook.
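As a hedged illustration of those namespace-level knobs, the following sketch uses the Java admin API; the admin URL, namespace name, and limits are hypothetical examples rather than values from this document:

```java
import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.common.policies.data.RetentionPolicies;

public class RetentionAndExpiryExample {
    public static void main(String[] args) throws Exception {
        PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080")
                .build();

        // Retention: keep acknowledged messages for up to 3 hours or 10 GB per topic
        admin.namespaces().setRetention("my-tenant/my-ns",
                new RetentionPolicies(180 /* minutes */, 10240 /* MB */));

        // Expiry: give unacknowledged messages a 5-minute TTL
        admin.namespaces().setNamespaceMessageTTL("my-tenant/my-ns", 300 /* seconds */);

        admin.close();
    }
}
```

The same policies can also be applied with the `pulsar-admin namespaces` CLI; either way, they take effect for all topics in the namespace.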
- -The diagram below illustrates both concepts: - -![Message retention and expiry](/assets/retention-expiry.png) - -With message retention, shown at the top, a retention policy applied to all topics in a namespace dictates that some messages are durably stored in Pulsar even though they've already been acknowledged. Acknowledged messages that are not covered by the retention policy are deleted. Without a retention policy, *all* of the acknowledged messages would be deleted. - -With message expiry, shown at the bottom, some messages are deleted, even though they haven't been acknowledged, because they've expired according to the TTL applied to the namespace (for example because a TTL of 5 minutes has been applied and the messages haven't been acknowledged but are 10 minutes old). - -## Message deduplication - -Message duplication occurs when a message is [persisted](concepts-architecture-overview.md#persistent-storage) by Pulsar more than once. Message deduplication is an optional Pulsar feature that prevents unnecessary message duplication by processing each message only once, even if the message is received more than once. - -The following diagram illustrates what happens when message deduplication is disabled vs. enabled: - -![Pulsar message deduplication](/assets/message-deduplication.png) - - -Message deduplication is disabled in the scenario shown at the top. Here, a producer publishes message 1 on a topic; the message reaches a Pulsar broker and is [persisted](concepts-architecture-overview.md#persistent-storage) to BookKeeper. The producer then sends message 1 again (in this case due to some retry logic), and the message is received by the broker and stored in BookKeeper again, which means that duplication has occurred. - -In the second scenario at the bottom, the producer publishes message 1, which is received by the broker and persisted, as in the first scenario. When the producer attempts to publish the message again, however, the broker knows that it has already seen message 1 and thus does not persist the message. - -> Message deduplication is handled at the namespace level or the topic level. For more instructions, see the [message deduplication cookbook](cookbooks-deduplication.md). - - -### Producer idempotency - -The other available approach to message deduplication is to ensure that each message is *only produced once*. This approach is typically called **producer idempotency**. The drawback of this approach is that it defers the work of message deduplication to the application. In Pulsar, this is handled at the [broker](reference-terminology.md#broker) level, so you do not need to modify your Pulsar client code. Instead, you only need to make administrative changes. For details, see [Managing message deduplication](cookbooks-deduplication.md). - -### Deduplication and effectively-once semantics - -Message deduplication makes Pulsar an ideal messaging system to be used in conjunction with stream processing engines (SPEs) and other systems seeking to provide effectively-once processing semantics. Messaging systems that do not offer automatic message deduplication require the SPE or other system to guarantee deduplication, which means that strict message ordering comes at the cost of burdening the application with the responsibility of deduplication. With Pulsar, strict ordering guarantees come at no application-level cost. - -> You can find more in-depth information in [this post](https://www.splunk.com/en_us/blog/it/exactly-once-is-not-exactly-the-same.html). 
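Before moving on, here is a hedged sketch of how deduplication might be switched on for a namespace with the Java admin API, paired with a producer that keeps a stable name so the broker can track its sequence IDs across sessions; the namespace, producer name, and URLs are hypothetical:

```java
import java.util.concurrent.TimeUnit;

import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;

public class DeduplicationExample {
    public static void main(String[] args) throws Exception {
        // Enable deduplication for every topic in the namespace
        PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080")
                .build();
        admin.namespaces().setDeduplicationStatus("my-tenant/my-ns", true);
        admin.close();

        // A stable producer name lets the broker recognize retries of the same message
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")
                .build();
        Producer<byte[]> producer = client.newProducer()
                .topic("persistent://my-tenant/my-ns/my-topic")
                .producerName("dedup-producer-1")
                .sendTimeout(0, TimeUnit.SECONDS) // retry indefinitely instead of timing out
                .create();
        producer.send("published once, persisted once".getBytes());

        producer.close();
        client.close();
    }
}
```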
## Delayed message delivery
Delayed message delivery enables you to consume a message later. In this mechanism, a message is stored in BookKeeper. The `DelayedDeliveryTracker` maintains the time index (time -> messageId) in memory after the message is published to a broker. This message will be delivered to a consumer once the specified delay is over.

Delayed message delivery only works in the Shared subscription type. In Exclusive and Failover subscription types, the delayed message is dispatched immediately.

The diagram below illustrates the concept of delayed message delivery:

![Delayed Message Delivery](/assets/message_delay.png)

A broker saves a message without any check. When a consumer consumes a message, if the message is set to delay, then the message is added to `DelayedDeliveryTracker`. A subscription checks and gets timeout messages from `DelayedDeliveryTracker`.

### Broker
Delayed message delivery is enabled by default. You can change it in the broker configuration file as below:

```

# Whether to enable the delayed delivery for messages.
# If disabled, messages are immediately delivered and there is no tracking overhead.
delayedDeliveryEnabled=true

# Control the ticking time for the retry of delayed message delivery,
# affecting the accuracy of the delivery time compared to the scheduled time.
# Default is 1 second.
delayedDeliveryTickTimeMillis=1000

```

### Producer
The following is an example of delayed message delivery for a producer in Java:

```java

// message to be delivered at the configured delay interval
producer.newMessage().deliverAfter(3L, TimeUnit.MINUTES).value("Hello Pulsar!").send();

```

diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/concepts-multi-tenancy.md b/site2/website/versioned_docs/version-2.10.1-deprecated/concepts-multi-tenancy.md
deleted file mode 100644
index 93a59557b2efca..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/concepts-multi-tenancy.md
+++ /dev/null
@@ -1,67 +0,0 @@
----
-id: concepts-multi-tenancy
-title: Multi Tenancy
-sidebar_label: "Multi Tenancy"
-original_id: concepts-multi-tenancy
----
-
-Pulsar was created from the ground up as a multi-tenant system. To support multi-tenancy, Pulsar has a concept of tenants. Tenants can be spread across clusters and can each have their own [authentication and authorization](security-overview.md) scheme applied to them. They are also the administrative unit at which storage quotas, [message TTL](cookbooks-retention-expiry.md#time-to-live-ttl), and isolation policies can be managed.
-
-The multi-tenant nature of Pulsar is reflected most visibly in topic URLs, which have this structure:
-
-```http
-
-persistent://tenant/namespace/topic
-
-```
-
-As you can see, the tenant is the most basic unit of categorization for topics (more fundamental than the namespace and topic name).
-
-## Tenants
-
-To each tenant in a Pulsar instance you can assign:
-
-* An [authorization](security-authorization.md) scheme
-* The set of [clusters](reference-terminology.md#cluster) to which the tenant's configuration applies
-
-## Namespaces
-
-Tenants and namespaces are two key concepts of Pulsar to support multi-tenancy.
-
-* Pulsar is provisioned for specified tenants with appropriate capacity allocated to the tenant.
-* A namespace is the administrative unit within a tenant. The configuration policies set on a namespace apply to all the topics created in that namespace.
A tenant may create multiple namespaces via self-administration using the REST API and the [`pulsar-admin`](reference-pulsar-admin.md) CLI tool. For instance, a tenant with different applications can create a separate namespace for each application.
-
-Names for topics in the same namespace will look like this:
-
-```http
-
-persistent://tenant/app1/topic-1
-
-persistent://tenant/app1/topic-2
-
-persistent://tenant/app1/topic-3
-
-```
-
-### Namespace change events and topic-level policies
-
-Pulsar is a multi-tenant event streaming system. Administrators can manage the tenants and namespaces by setting policies at different levels. However, some policies, such as the retention policy and the storage quota policy, are only available at the namespace level. In many use cases, users need to set a policy at the topic level. The namespace change events approach is proposed to support topic-level policies efficiently. In this approach, Pulsar is used as an event log to store namespace change events (such as topic policy changes). This approach has a few benefits:
-- Avoid using ZooKeeper and introducing more load to ZooKeeper.
-- Use Pulsar as an event log for propagating the policy cache. It can scale efficiently.
-- Use Pulsar SQL to query the namespace changes and audit the system.
-
-Each namespace has a [system topic](concepts-messaging.md#system-topic) named `__change_events`. This system topic stores change events for a given namespace. The following figure illustrates how to leverage it to update topic-level policies.
-
-![Leverage the system topic to update topic-level policies](/assets/system-topic-for-topic-level-policies.svg)
-
-1. Pulsar Admin clients communicate with the Admin RESTful API to update topic-level policies.
-2. Any broker that receives the Admin HTTP request publishes a topic policy change event to the corresponding system topic (`__change_events`) of the namespace.
-3. Each broker that owns one or more namespace bundles subscribes to the system topic (`__change_events`) to receive the change events of the namespace.
-4. Each broker applies the change events to its policy cache.
-5. Once the policy cache is updated, the broker sends the response back to the Pulsar Admin clients.
-
-:::note
-
-By default, the system topic is disabled. To enable topic-level policies (`topicLevelPoliciesEnabled`=`true`), you need to enable the system topic by setting `systemTopicEnabled` to `true` in the `conf/broker.conf` or `conf/standalone.conf` file.
-
-:::
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/concepts-multiple-advertised-listeners.md b/site2/website/versioned_docs/version-2.10.1-deprecated/concepts-multiple-advertised-listeners.md
deleted file mode 100644
index f2e1ae0aadc7ca..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/concepts-multiple-advertised-listeners.md
+++ /dev/null
@@ -1,44 +0,0 @@
----
-id: concepts-multiple-advertised-listeners
-title: Multiple advertised listeners
-sidebar_label: "Multiple advertised listeners"
-original_id: concepts-multiple-advertised-listeners
----
-
-When a Pulsar cluster is deployed in a production environment, it may need to expose multiple advertised addresses for the broker. For example, when you deploy a Pulsar cluster in Kubernetes and want clients that are not in the same Kubernetes cluster to connect to the Pulsar cluster, you need to assign a broker URL to those external clients.
But clients in the same Kubernetes cluster can still connect to the Pulsar cluster through the internal network of Kubernetes.
-
-## Advertised listeners
-
-To ensure clients in both internal and external networks can connect to a Pulsar cluster, Pulsar introduces the `advertisedListeners` and `internalListenerName` configuration options in the [broker configuration file](reference-configuration.md#broker), so that the broker supports exposing multiple advertised listeners and separating internal and external network traffic.
-
-- The `advertisedListeners` option is used to specify multiple advertised listeners. The broker uses the listener as the broker identifier in the load manager and the bundle owner data. The `advertisedListeners` option is formatted as `<listener_name>:pulsar://<host>:<port>, <listener_name>:pulsar+ssl://<host>:<port>`. For example, you can set up the `advertisedListeners` like
-`advertisedListeners=internal:pulsar://192.168.1.11:6660,internal:pulsar+ssl://192.168.1.11:6651`.
-
-- The `internalListenerName` option is used to specify the internal service URL that the broker uses. You can specify the `internalListenerName` by choosing one of the `advertisedListeners`. The broker uses the listener name of the first advertised listener as the `internalListenerName` if the `internalListenerName` is absent.
-
-After setting up the `advertisedListeners`, clients can choose one of the listeners as the service URL to create a connection to the broker, as long as the network is accessible. However, if a client creates producers or consumers on a topic, the client must send a lookup request to the broker to find the owner broker, then connect to the owner broker to publish or consume messages. Therefore, you must allow the client to get the corresponding service URL with the same advertised listener name as the one used by the client. This helps keep the client side simple and secure.
-
-## Use multiple advertised listeners
-
-This example shows how a Pulsar client uses multiple advertised listeners.
-
-1. Configure multiple advertised listeners in the broker configuration file.
-
-```shell
-
-advertisedListeners={listenerName}:pulsar://xxxx:6650,
-{listenerName}:pulsar+ssl://xxxx:6651
-
-```
-
-2. Specify the listener name for the client.
-
-```java
-
-PulsarClient client = PulsarClient.builder()
-    .serviceUrl("pulsar://xxxx:6650")
-    .listenerName("external")
-    .build();
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/concepts-overview.md b/site2/website/versioned_docs/version-2.10.1-deprecated/concepts-overview.md
deleted file mode 100644
index e8a2f4b9d321a7..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/concepts-overview.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-id: concepts-overview
-title: Pulsar Overview
-sidebar_label: "Overview"
-original_id: concepts-overview
----
-
-Pulsar is a multi-tenant, high-performance solution for server-to-server messaging. Originally developed by Yahoo, Pulsar is under the stewardship of the [Apache Software Foundation](https://www.apache.org/).
-
-Key features of Pulsar are listed below:
-
-* Native support for multiple clusters in a Pulsar instance, with seamless [geo-replication](administration-geo.md) of messages across clusters.
-* Very low publish and end-to-end latency.
-* Seamless scalability to over a million topics.
-* A simple [client API](concepts-clients.md) with bindings for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python.md) and [C++](client-libraries-cpp.md).
-* Multiple [subscription types](concepts-messaging.md#subscription-types) ([exclusive](concepts-messaging.md#exclusive), [shared](concepts-messaging.md#shared), and [failover](concepts-messaging.md#failover)) for topics.
-* Guaranteed message delivery with [persistent message storage](concepts-architecture-overview.md#persistent-storage) provided by [Apache BookKeeper](http://bookkeeper.apache.org/).
-* A serverless, lightweight computing framework, [Pulsar Functions](functions-overview.md), offers the capability for stream-native data processing.
-* A serverless connector framework, [Pulsar IO](io-overview.md), which is built on Pulsar Functions, makes it easier to move data in and out of Apache Pulsar.
-* [Tiered Storage](concepts-tiered-storage.md) offloads data from hot/warm storage to cold/long-term storage (such as S3 and GCS) as the data ages out.
-
-## Contents
-
-- [Messaging Concepts](concepts-messaging.md)
-- [Architecture Overview](concepts-architecture-overview.md)
-- [Pulsar Clients](concepts-clients.md)
-- [Geo Replication](concepts-replication.md)
-- [Multi Tenancy](concepts-multi-tenancy.md)
-- [Authentication and Authorization](concepts-authentication.md)
-- [Topic Compaction](concepts-topic-compaction.md)
-- [Tiered Storage](concepts-tiered-storage.md)
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/concepts-proxy-sni-routing.md b/site2/website/versioned_docs/version-2.10.1-deprecated/concepts-proxy-sni-routing.md
deleted file mode 100644
index 51419a66cefe6e..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/concepts-proxy-sni-routing.md
+++ /dev/null
@@ -1,180 +0,0 @@
----
-id: concepts-proxy-sni-routing
-title: Proxy support with SNI routing
-sidebar_label: "Proxy support with SNI routing"
-original_id: concepts-proxy-sni-routing
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-A proxy server is an intermediary server that forwards requests from multiple clients to different servers across the Internet. The proxy server acts as a "traffic cop" in both forward and reverse proxy scenarios, and provides benefits to your system such as load balancing, performance, security, auto-scaling, and so on.
-
-The proxy in Pulsar acts as a reverse proxy, and creates a gateway in front of brokers. Proxies such as Apache Traffic Server (ATS), HAProxy, Nginx, and Envoy are not shipped as part of Pulsar itself, but these proxy servers support **SNI routing**. SNI routing is used to route traffic to a destination without terminating the SSL connection. Layer-4 routing provides greater transparency because the outbound connection is determined by examining the destination address in the client TCP packets.
-
-Pulsar clients (Java, C++, Python) support the [SNI routing protocol](https://github.com/apache/pulsar/wiki/PIP-60:-Support-Proxy-server-with-SNI-routing), so you can connect to brokers through the proxy. This document walks you through how to set up the ATS proxy, enable SNI routing, and connect a Pulsar client to the broker through the ATS proxy.
-
-## ATS-SNI Routing in Pulsar
-To support [layer-4 SNI routing](https://docs.trafficserver.apache.org/en/latest/admin-guide/layer-4-routing.en.html) with ATS, the inbound connection must be a TLS connection. The Pulsar client supports the SNI routing protocol on TLS connections, so when Pulsar clients connect to a broker through the ATS proxy, Pulsar uses ATS as a reverse proxy.
-
-Pulsar supports SNI routing for geo-replication, so brokers can connect to brokers in other clusters through the ATS proxy.
-
-This section explains how to set up and use ATS as a reverse proxy, so Pulsar clients can connect to brokers through the ATS proxy using the SNI routing protocol on a TLS connection.
-
-### Set up ATS Proxy for layer-4 SNI routing
-To support layer-4 SNI routing, you need to configure the `records.config` and `ssl_server_name.config` files.
-
-![Pulsar client SNI](/assets/pulsar-sni-client.png)
-
-The [records.config](https://docs.trafficserver.apache.org/en/latest/admin-guide/files/records.config.en.html) file is located in the `/usr/local/etc/trafficserver/` directory by default. The file lists configurable variables used by ATS.
-
-To configure the `records.config` file, complete the following steps.
-1. Update the TLS port (`http.server_ports`) on which the proxy listens, and update the proxy certs (`ssl.client.cert.path` and `ssl.client.cert.filename`) to secure TLS tunneling.
-2. Configure the server ports (`http.connect_ports`) used for tunneling to the broker. If Pulsar brokers are listening on the `4443` and `6651` ports, add the brokers' service ports to the `http.connect_ports` configuration.
-
-The following is an example.
-
-```
-
-# PROXY TLS PORT
-CONFIG proxy.config.http.server_ports STRING 4443:ssl 4080
-# PROXY CERTS FILE PATH
-CONFIG proxy.config.ssl.client.cert.path STRING /proxy-cert.pem
-# PROXY KEY FILE PATH
-CONFIG proxy.config.ssl.client.cert.filename STRING /proxy-key.pem
-
-
-# The range of origin server ports that can be used for tunneling via CONNECT.
-# Traffic Server allows tunnels only to the specified ports. Supports both wildcards (*) and ranges (e.g. 0-1023).
-CONFIG proxy.config.http.connect_ports STRING 4443 6651
-
-```
-
-The `ssl_server_name.config` file is used to configure TLS connection handling for inbound and outbound connections. The configuration is determined by the SNI values provided by the inbound connection. The file consists of a set of configuration items, each identified by an SNI value (`fqdn`). When an inbound TLS connection is made, the SNI value from the TLS negotiation is matched against the items specified in this file. If the values match, the values specified in that item override the default values.
-
-The following example shows the mapping between the inbound SNI hostname coming from the client and the actual broker service URL to which the request should be redirected. For example, if the client sends the SNI header `pulsar-broker1`, the proxy creates a TLS tunnel by redirecting the request to the `pulsar-broker1:6651` service URL.
-
-```
-
-server_config = {
-  {
-     fqdn = 'pulsar-broker-vip',
-     # Forward to Pulsar broker which is listening on 6651
-     tunnel_route = 'pulsar-broker-vip:6651'
-  },
-  {
-     fqdn = 'pulsar-broker1',
-     # Forward to Pulsar broker-1 which is listening on 6651
-     tunnel_route = 'pulsar-broker1:6651'
-  },
-  {
-     fqdn = 'pulsar-broker2',
-     # Forward to Pulsar broker-2 which is listening on 6651
-     tunnel_route = 'pulsar-broker2:6651'
-  },
-}
-
-```
-
-After you configure the `ssl_server_name.config` and `records.config` files, the ATS proxy server handles SNI routing and creates a TCP tunnel between the client and the broker.
-
-### Configure Pulsar client with SNI routing
-ATS SNI routing works only with TLS. You need to enable TLS for the ATS proxy and brokers first, configure the SNI routing protocol, and then connect Pulsar clients to brokers through the ATS proxy.
Pulsar clients support SNI routing by connecting to the proxy and sending the target broker URL in the SNI header. This is handled internally by the client. You only need to configure the following proxy settings when you create a Pulsar client so that it uses the SNI routing protocol.
-
-````mdx-code-block
-
-
-```java
-
-String brokerServiceUrl = "pulsar+ssl://pulsar-broker-vip:6651/";
-String proxyUrl = "pulsar+ssl://ats-proxy:443";
-ClientBuilder clientBuilder = PulsarClient.builder()
-        .serviceUrl(brokerServiceUrl)
-        .tlsTrustCertsFilePath(TLS_TRUST_CERT_FILE_PATH)
-        .enableTls(true)
-        .allowTlsInsecureConnection(false)
-        .proxyServiceUrl(proxyUrl, ProxyProtocol.SNI)
-        .operationTimeout(1000, TimeUnit.MILLISECONDS);
-
-Map<String, String> authParams = new HashMap<>();
-authParams.put("tlsCertFile", TLS_CLIENT_CERT_FILE_PATH);
-authParams.put("tlsKeyFile", TLS_CLIENT_KEY_FILE_PATH);
-clientBuilder.authentication(AuthenticationTls.class.getName(), authParams);
-
-PulsarClient pulsarClient = clientBuilder.build();
-
-```
-
-
-
-```c++
-
-ClientConfiguration config = ClientConfiguration();
-config.setUseTls(true);
-config.setTlsTrustCertsFilePath("/path/to/cacert.pem");
-config.setTlsAllowInsecureConnection(false);
-config.setAuth(pulsar::AuthTls::create(
-            "/path/to/client-cert.pem", "/path/to/client-key.pem"));
-
-Client client("pulsar+ssl://ats-proxy:443", config);
-
-```
-
-
-
-```python
-
-from pulsar import Client, AuthenticationTLS
-
-auth = AuthenticationTLS("/path/to/my-role.cert.pem", "/path/to/my-role.key-pk8.pem")
-client = Client("pulsar+ssl://ats-proxy:443",
-                tls_trust_certs_file_path="/path/to/ca.cert.pem",
-                tls_allow_insecure_connection=False,
-                authentication=auth)
-
-```
-
-
-
-````
-
-### Pulsar geo-replication with SNI routing
-You can use the ATS proxy for geo-replication. Pulsar brokers can connect to brokers in other clusters by using SNI routing. To enable SNI routing for broker connections across clusters, you need to configure the SNI proxy URL in the cluster metadata. If you have configured the SNI proxy URL in the cluster metadata, you can connect to brokers in other clusters through the proxy over SNI routing.
-
-![Pulsar client SNI](/assets/pulsar-sni-geo.png)
-
-In this example, a Pulsar cluster is deployed into two separate regions, `us-west` and `us-east`. Both regions are configured with the ATS proxy, and brokers in each region run behind the ATS proxy. We configure the cluster metadata for both clusters, so brokers in one cluster can use SNI routing and connect to brokers in other clusters through the ATS proxy.
-
-(a) Configure the cluster metadata for `us-east` with the `us-east` broker service URL and the `us-east` ATS proxy URL with the SNI proxy-protocol.
-
-```
-
-./pulsar-admin clusters update \
---broker-url-secure pulsar+ssl://east-broker-vip:6651 \
---url http://east-broker-vip:8080 \
---proxy-protocol SNI \
---proxy-url pulsar+ssl://east-ats-proxy:443
-
-```
-
-(b) Configure the cluster metadata for `us-west` with the `us-west` broker service URL and the `us-west` ATS proxy URL with the SNI proxy-protocol.
-
-```
-
-./pulsar-admin clusters update \
---broker-url-secure pulsar+ssl://west-broker-vip:6651 \
---url http://west-broker-vip:8080 \
---proxy-protocol SNI \
---proxy-url pulsar+ssl://west-ats-proxy:443
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/concepts-replication.md b/site2/website/versioned_docs/version-2.10.1-deprecated/concepts-replication.md
deleted file mode 100644
index 1ac455c7028325..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/concepts-replication.md
+++ /dev/null
@@ -1,69 +0,0 @@
----
-id: concepts-replication
-title: Geo Replication
-sidebar_label: "Geo Replication"
-original_id: concepts-replication
----
-
-Regardless of industry, when an unforeseen event occurs and brings day-to-day operations to a halt, an organization needs a well-prepared disaster recovery plan to quickly restore service to clients. However, a disaster recovery plan usually requires a multi-datacenter deployment with geographically dispersed data centers. Such a multi-datacenter deployment requires a geo-replication mechanism to provide additional redundancy in case a data center fails.
-
-Pulsar's geo-replication mechanism is typically used for disaster recovery, enabling the replication of persistently stored message data across multiple data centers. For instance, your application may publish data in one region while you would like to process it for consumption in other regions. With Pulsar's geo-replication mechanism, messages can be produced and consumed in different geo-locations.
-
-The diagram below illustrates the process of [geo-replication](administration-geo.md). When three producers (P1, P2, and P3) publish messages to the T1 topic in their respective clusters, those messages are instantly replicated across clusters. Once the messages are replicated, two consumers (C1 and C2) can consume those messages from their clusters.
-
-![A typical geo-replication example with full-mesh pattern](/assets/full-mesh-replication.svg)
-
-## Replication mechanisms
-
-The geo-replication mechanism can be categorized into synchronous and asynchronous geo-replication strategies. Pulsar supports both replication mechanisms.
-
-### Asynchronous geo-replication in Pulsar
-
-An asynchronous geo-replicated cluster is composed of multiple physical clusters set up in different datacenters. Messages produced on a Pulsar topic are first persisted to the local cluster and then replicated asynchronously to the remote clusters by brokers.
-
-![An example of asynchronous geo-replication mechanism](/assets/geo-replication-async.svg)
-
-In normal cases, when there are no connectivity issues, messages are replicated immediately, at the same time as they are dispatched to local consumers. Typically, end-to-end delivery latency is defined by the network round-trip time (RTT) between the data centers. Applications can create producers and consumers in any of the clusters, even when the remote clusters are not reachable (for example, during a network partition).
-
-Asynchronous geo-replication provides lower latency but may result in weaker consistency guarantees, because replication lag means some data may not yet have been replicated.
-
-### Synchronous geo-replication via BookKeeper
-
-In synchronous geo-replication, data is synchronously replicated to multiple data centers and the client has to wait for an acknowledgment from the other data centers.
As illustrated below, when the client issues a write request to one cluster, the written data is replicated to the other two data centers. The write request is acknowledged to the client only when the majority of data centers (in this example, at least 2 data centers) have acknowledged that the write has been persisted.
-
-![An example of synchronous geo-replication mechanism](/assets/geo-replication-sync.svg)
-
-Synchronous geo-replication in Pulsar is achieved by BookKeeper. A synchronous geo-replicated cluster consists of a cluster of bookies and a cluster of brokers that run in multiple data centers, and a global ZooKeeper installation (a ZooKeeper ensemble running across multiple data centers). You need to configure a BookKeeper region-aware placement policy to store data across multiple data centers and guarantee availability constraints on writes.
-
-Synchronous geo-replication provides the highest availability and also guarantees stronger data consistency between different data centers. However, your applications have to pay an extra cross-data-center latency penalty.
-
-
-## Replication patterns
-
-Pulsar provides a great degree of flexibility for customizing your replication strategy. You can set up different replication patterns to serve your replication strategy for an application between multiple data centers.
-
-### Full-mesh replication
-
-Using full-mesh replication and applying the [selective message replication](administration-geo.md/#selective-replication), you can customize your replication strategies and topologies between any number of data centers.
-
-![An example of full-mesh replication pattern](/assets/full-mesh-replication.svg)
-
-### Active-active replication
-
-Active-active replication is a variation of full-mesh replication, with only two data centers. Producers can run in either data center to produce messages, and consumers can consume all messages from all data centers.
-
-![An example of active-active replication pattern](/assets/active-active-replication.svg)
-
-For how to use active-active replication to migrate data between clusters, refer to [here](administration-geo.md/#migrate-data-between-clusters-using-geo-replication).
-
-### Active-standby replication
-
-Active-standby replication is a variation of active-active replication. Producers send messages to the active data center, while messages are replicated to the standby data center for backup. If the active data center goes down, the standby data center takes over and becomes the active one.
-
-![An example of active-standby replication pattern](/assets/active-standby-replication.svg)
-
-### Aggregation replication
-
-The aggregation replication pattern is typically used when replicating messages from the edge to the cloud. For example, assume you have 3 clusters in 3 fronting data centers and one aggregated cluster in a central data center, and you want to replicate messages from the multiple fronting data centers to the central data center for aggregation purposes. You can then create an individual namespace for the topics used by each fronting data center and assign the aggregated data center to those namespaces.
-
-![An example of aggregation replication pattern](/assets/aggregation-replication.svg)
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/concepts-tiered-storage.md b/site2/website/versioned_docs/version-2.10.1-deprecated/concepts-tiered-storage.md
deleted file mode 100644
index b45ccea5888bf9..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/concepts-tiered-storage.md
+++ /dev/null
@@ -1,18 +0,0 @@
----
-id: concepts-tiered-storage
-title: Tiered Storage
-sidebar_label: "Tiered Storage"
-original_id: concepts-tiered-storage
----
-
-Pulsar's segment-oriented architecture allows for topic backlogs to grow very large, effectively without limit. However, this can become expensive over time.
-
-One way to alleviate this cost is to use tiered storage. With tiered storage, older messages in the backlog can be moved from BookKeeper to a cheaper storage mechanism, while still allowing clients to access the backlog as if nothing had changed.
-
-![Tiered Storage](/assets/pulsar-tiered-storage.png)
-
-> Data written to BookKeeper is replicated to 3 physical machines by default. However, once a segment is sealed in BookKeeper it becomes immutable and can be copied to long-term storage. Long-term storage can achieve cost savings by using mechanisms such as [Reed-Solomon error correction](https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction) to require fewer physical copies of data.
-
-Pulsar currently supports S3, Google Cloud Storage (GCS), and the filesystem for [long-term storage](cookbooks-tiered-storage.md). Offloading to long-term storage is triggered via a REST API or the command-line interface. The user passes in the amount of topic data they wish to retain on BookKeeper, and the broker copies the backlog data to long-term storage. The original data is then deleted from BookKeeper after a configured delay (4 hours by default).
-
-> For a guide for setting up tiered storage, see the [Tiered storage cookbook](cookbooks-tiered-storage.md).
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/concepts-topic-compaction.md b/site2/website/versioned_docs/version-2.10.1-deprecated/concepts-topic-compaction.md
deleted file mode 100644
index 34b7ed7fbbd31e..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/concepts-topic-compaction.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-id: concepts-topic-compaction
-title: Topic Compaction
-sidebar_label: "Topic Compaction"
-original_id: concepts-topic-compaction
----
-
-Pulsar was built with highly scalable [persistent storage](concepts-architecture-overview.md#persistent-storage) of message data as a primary objective. Pulsar topics enable you to persistently store as many unacknowledged messages as you need while preserving message ordering. By default, Pulsar stores *all* unacknowledged/unprocessed messages produced on a topic. Accumulating many unacknowledged messages on a topic is necessary for many Pulsar use cases, but it can also be very time-intensive for Pulsar consumers to "rewind" through the entire log of messages.
-
-> For a more practical guide to topic compaction, see the [Topic compaction cookbook](cookbooks-compaction.md).
-
-For some use cases, consumers don't need a complete "image" of the topic log. They may only need a few values to construct a more "shallow" image of the log, perhaps even just the most recent value. For these kinds of use cases, Pulsar offers **topic compaction**.
When you run compaction on a topic, Pulsar goes through a topic's backlog and removes messages that are *obscured* by later messages, i.e. it goes through the topic on a per-key basis and leaves only the most recent message associated with that key.
-
-Pulsar's topic compaction feature:
-
-* Allows for faster "rewind" through topic logs
-* Applies only to [persistent topics](concepts-architecture-overview.md#persistent-storage)
-* Can be triggered automatically when the backlog reaches a certain size, or manually via the command line. See the [Topic compaction cookbook](cookbooks-compaction.md)
-* Is conceptually and operationally distinct from [retention and expiry](concepts-messaging.md#message-retention-and-expiry). Topic compaction *does*, however, respect retention. If retention has removed a message from the message backlog of a topic, the message will also not be readable from the compacted topic ledger.
-
-> #### Topic compaction example: the stock ticker
-> An example use case for a compacted Pulsar topic would be a stock ticker topic. On a stock ticker topic, each message bears a timestamped dollar value for stocks for purchase (with the message key holding the stock symbol, e.g. `AAPL` or `GOOG`). With a stock ticker you may care only about the most recent value(s) of the stock and have no interest in historical data (i.e. you don't need to construct a complete image of the topic's sequence of messages per key). Compaction would be highly beneficial in this case because it would keep consumers from needing to rewind through obscured messages.
-
-
-## How topic compaction works
-
-When topic compaction is triggered [via the CLI](cookbooks-compaction.md), Pulsar iterates over the entire topic from beginning to end. For each key that it encounters, the compaction routine keeps a record of the latest occurrence of that key.
-
-After that, the broker creates a new [BookKeeper ledger](concepts-architecture-overview.md#ledgers) and makes a second iteration through each message on the topic. For each message, if the key matches the latest occurrence of that key, then the key's data payload, message ID, and metadata are written to the newly created ledger. If the key doesn't match the latest occurrence, the message is skipped and left alone. If any given message has an empty payload, it is skipped and considered deleted (akin to the concept of [tombstones](https://en.wikipedia.org/wiki/Tombstone_(data_store)) in key-value databases). At the end of this second iteration through the topic, the newly created BookKeeper ledger is closed and two things are written to the topic's metadata: the ID of the BookKeeper ledger and the message ID of the last compacted message (this is known as the **compaction horizon** of the topic). Once this metadata is written, compaction is complete.
-
-After the initial compaction operation, the Pulsar [broker](reference-terminology.md#broker) that owns the topic is notified whenever any future changes are made to the compaction horizon and compacted backlog.
When such changes occur:
-
-* Clients (consumers and readers) that have `readCompacted` enabled will attempt to read messages from the topic and either:
-  * Read from the topic like normal (if the message ID is greater than or equal to the compaction horizon) or
-  * Read beginning at the compaction horizon (if the message ID is lower than the compaction horizon)
-
-
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/concepts-transactions.md b/site2/website/versioned_docs/version-2.10.1-deprecated/concepts-transactions.md
deleted file mode 100644
index 08490ba06b5d7e..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/concepts-transactions.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-id: transactions
-title: Transactions
-sidebar_label: "Overview"
-original_id: transactions
----
-
-Transactional semantics enable event streaming applications to consume, process, and produce messages in one atomic operation. In Pulsar, a producer or consumer can work with messages across multiple topics and partitions and ensure those messages are processed as a single unit.
-
-The following concepts help you understand Pulsar transactions.
-
-## Transaction coordinator and transaction log
-The transaction coordinator maintains the topics and subscriptions that interact in a transaction. When a transaction is committed, the transaction coordinator interacts with the topic owner broker to complete the transaction.
-
-The transaction coordinator maintains the entire life cycle of transactions, and prevents a transaction from entering an incorrect status.
-
-The transaction coordinator handles transaction timeouts, and ensures that a transaction is aborted after it times out.
-
-All the transaction metadata is persisted in the transaction log. The transaction log is backed by a Pulsar topic. After the transaction coordinator crashes, it can restore the transaction metadata from the transaction log.
-
-## Transaction ID
-The transaction ID (TxnID) identifies a unique transaction in Pulsar. The transaction ID is 128 bits long. The highest 16 bits are reserved for the ID of the transaction coordinator, and the remaining bits are used for monotonically increasing numbers in each transaction coordinator. With the TxnID, it is easy to locate the coordinator responsible for a transaction when diagnosing a crash.
-
-## Transaction buffer
-Messages produced within a transaction are stored in the transaction buffer. The messages in the transaction buffer are not materialized (visible) to consumers until the transaction is committed. The messages in the transaction buffer are discarded when the transaction is aborted.
-
-## Pending acknowledge state
-Message acknowledgements within a transaction are maintained in the pending acknowledge state before the transaction completes. If a message is in the pending acknowledge state, the message cannot be acknowledged by other transactions until it is removed from the pending acknowledge state.
-
-The pending acknowledge state is persisted to the pending acknowledge log. The pending acknowledge log is backed by a Pulsar topic. A new broker can restore the state from the pending acknowledge log to ensure the acknowledgement is not lost.
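-
-The following is a hedged sketch of how these pieces fit together from an application's point of view, using the Java client transaction API; the topic names, subscription name, and timeout are illustrative assumptions, and it presumes a broker started with `transactionCoordinatorEnabled=true`.
-
-```java
-
-import java.util.concurrent.TimeUnit;
-import org.apache.pulsar.client.api.Consumer;
-import org.apache.pulsar.client.api.Message;
-import org.apache.pulsar.client.api.Producer;
-import org.apache.pulsar.client.api.PulsarClient;
-import org.apache.pulsar.client.api.Schema;
-import org.apache.pulsar.client.api.transaction.Transaction;
-
-public class TransactionSketch {
-    public static void main(String[] args) throws Exception {
-        // Transactions must be enabled on the client
-        PulsarClient client = PulsarClient.builder()
-                .serviceUrl("pulsar://localhost:6650")
-                .enableTransaction(true)
-                .build();
-
-        Consumer<String> consumer = client.newConsumer(Schema.STRING)
-                .topic("persistent://my-tenant/my-ns/input-topic")
-                .subscriptionName("txn-sub")
-                .subscribe();
-        // Transactional producers require the send timeout to be disabled
-        Producer<String> producer = client.newProducer(Schema.STRING)
-                .topic("persistent://my-tenant/my-ns/output-topic")
-                .sendTimeout(0, TimeUnit.SECONDS)
-                .create();
-
-        // Consume, process, and produce as one atomic unit
-        Transaction txn = client.newTransaction()
-                .withTransactionTimeout(5, TimeUnit.MINUTES)
-                .build()
-                .get();
-
-        Message<String> msg = consumer.receive();
-        producer.newMessage(txn).value("processed: " + msg.getValue()).send();
-        consumer.acknowledgeAsync(msg.getMessageId(), txn).get();
-
-        // The output message becomes visible, and the ack becomes permanent,
-        // only when the transaction commits
-        txn.commit().get();
-
-        client.close();
-    }
-}
-
-```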
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/cookbooks-bookkeepermetadata.md b/site2/website/versioned_docs/version-2.10.1-deprecated/cookbooks-bookkeepermetadata.md
deleted file mode 100644
index b0fa98dc3b65d5..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/cookbooks-bookkeepermetadata.md
+++ /dev/null
@@ -1,21 +0,0 @@
----
-id: cookbooks-bookkeepermetadata
-title: BookKeeper Ledger Metadata
-original_id: cookbooks-bookkeepermetadata
----
-
-Pulsar stores data on BookKeeper ledgers. You can understand the contents of a ledger by inspecting the metadata attached to the ledger.
-Such metadata is stored in ZooKeeper and is readable using BookKeeper APIs.
-
-Description of current metadata:
-
-| Scope | Metadata name | Metadata value |
-| ------------- | ------------- | ------------- |
-| All ledgers | application | 'pulsar' |
-| All ledgers | component | 'managed-ledger', 'schema', 'compacted-topic' |
-| Managed ledgers | pulsar/managed-ledger | name of the ledger |
-| Cursor | pulsar/cursor | name of the cursor |
-| Compacted topic | pulsar/compactedTopic | name of the original topic |
-| Compacted topic | pulsar/compactedTo | id of the last compacted message |
-
-
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/cookbooks-compaction.md b/site2/website/versioned_docs/version-2.10.1-deprecated/cookbooks-compaction.md
deleted file mode 100644
index dfa314727241a8..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/cookbooks-compaction.md
+++ /dev/null
@@ -1,142 +0,0 @@
----
-id: cookbooks-compaction
-title: Topic compaction
-sidebar_label: "Topic compaction"
-original_id: cookbooks-compaction
----
-
-Pulsar's [topic compaction](concepts-topic-compaction.md#compaction) feature enables you to create **compacted** topics in which older, "obscured" entries are pruned from the topic, allowing for faster reads through the topic's history (which messages are deemed obscured/outdated/irrelevant will depend on your use case).
-
-To use compaction:
-
-* You need to give messages keys, as topic compaction in Pulsar takes place on a *per-key basis* (i.e. messages are compacted based on their key). For a stock ticker use case, the stock symbol---e.g. `AAPL` or `GOOG`---could serve as the key (more on this [below](#when-should-i-use-compacted-topics)). Messages without keys will be left alone by the compaction process.
-* Compaction can be configured to run [automatically](#configuring-compaction-to-run-automatically), or you can manually [trigger](#triggering-compaction-manually) compaction using the Pulsar administrative API.
-* Your consumers must be [configured](#consumer-configuration) to read from compacted topics ([Java consumers](#java), for example, have a `readCompacted` setting that must be set to `true`). If this configuration is not set, consumers will still be able to read from the non-compacted topic.
-
-
-> Compaction only works on messages that have keys (as in the stock ticker example, where the stock symbol serves as the key for each message). Keys can thus be thought of as the axis along which compaction is applied. Messages that don't have keys are simply ignored by compaction.
-
-## When should I use compacted topics?
-
-The classic example of a topic that could benefit from compaction would be a stock ticker topic through which consumers can access up-to-date values for specific stocks.
Imagine a scenario in which messages carrying stock value data use the stock symbol as the key (`GOOG`, `AAPL`, `TWTR`, etc.). Compacting this topic would give consumers on the topic two options:
-
-* They can read from the "original," non-compacted topic in case they need access to "historical" values, i.e. the entirety of the topic's messages.
-* They can read from the compacted topic if they only want to see the most up-to-date messages.
-
-Thus, if you're using a Pulsar topic called `stock-values`, some consumers could have access to all messages in the topic (perhaps because they're performing some kind of number crunching of all values in the last hour) while the consumers used to power the real-time stock ticker only see the compacted topic (and thus aren't forced to process outdated messages). Which variant of the topic any given consumer pulls messages from is determined by the consumer's [configuration](#consumer-configuration).
-
-> One of the benefits of compaction in Pulsar is that you aren't forced to choose between compacted and non-compacted topics, as the compaction process leaves the original topic as-is and essentially adds an alternate topic. In other words, you can run compaction on a topic and consumers that need access to the non-compacted version of the topic will not be adversely affected.
-
-
-## Configuring compaction to run automatically
-
-Tenant administrators can configure a policy for compaction at the namespace level. The policy specifies how large the topic backlog can grow before compaction is triggered.
-
-For example, to trigger compaction when the backlog reaches 100MB:
-
-```bash
-
-$ bin/pulsar-admin namespaces set-compaction-threshold \
-  --threshold 100M my-tenant/my-namespace
-
-```
-
-Configuring the compaction threshold on a namespace will apply to all topics within that namespace.
-
-## Triggering compaction manually
-
-In order to run compaction on a topic, you need to use the [`topics compact`](reference-pulsar-admin.md#topics-compact) command of the [`pulsar-admin`](reference-pulsar-admin.md) CLI tool. Here's an example:
-
-```bash
-
-$ bin/pulsar-admin topics compact \
-  persistent://my-tenant/my-namespace/my-topic
-
-```
-
-The `pulsar-admin` tool runs compaction via the Pulsar {@inject: rest:REST:/} API. To run compaction in its own dedicated process, i.e. *not* through the REST API, you can use the [`pulsar compact-topic`](reference-cli-tools.md#pulsar-compact-topic) command. Here's an example:
-
-```bash
-
-$ bin/pulsar compact-topic \
-  --topic persistent://my-tenant/my-namespace/my-topic
-
-```
-
-> Running compaction in its own process is recommended when you want to avoid interfering with the broker's performance. Broker performance should only be affected, however, when running compaction on topics with a large keyspace (i.e. when there are many keys on the topic). The first phase of the compaction process keeps a copy of each key in the topic, which can create memory pressure as the number of keys grows. Using the `pulsar-admin topics compact` command to run compaction through the REST API should present no issues in the overwhelming majority of cases; using `pulsar compact-topic` should correspondingly be considered an edge case.
-
-The `pulsar compact-topic` command communicates with [ZooKeeper](https://zookeeper.apache.org) directly. In order to establish communication with ZooKeeper, though, the `pulsar` CLI tool will need to have a valid [broker configuration](reference-configuration.md#broker).
You can either supply a proper configuration in `conf/broker.conf` or specify a non-default location for the configuration:
-
-```bash
-
-$ bin/pulsar compact-topic \
-  --broker-conf /path/to/broker.conf \
-  --topic persistent://my-tenant/my-namespace/my-topic
-
-# If the configuration is in conf/broker.conf
-$ bin/pulsar compact-topic \
-  --topic persistent://my-tenant/my-namespace/my-topic
-
-```
-
-#### When should I trigger compaction?
-
-How often you [trigger compaction](#triggering-compaction-manually) will vary widely based on the use case. If you want a compacted topic to be extremely speedy on read, then you should run compaction fairly frequently.
-
-## Consumer configuration
-
-Pulsar consumers and readers need to be configured to read from compacted topics. The sections below show you how to enable compacted topic reads for Pulsar's language clients.
-
-### Java
-
-In order to read from a compacted topic using a Java consumer, the `readCompacted` parameter must be set to `true`. Here's an example consumer for a compacted topic:
-
-```java
-
-Consumer<byte[]> compactedTopicConsumer = client.newConsumer()
-        .topic("some-compacted-topic")
-        .readCompacted(true)
-        .subscribe();
-
-```
-
-As mentioned above, topic compaction in Pulsar works on a *per-key basis*. That means that messages that you produce on compacted topics need to have keys (the content of the key will depend on your use case). Messages that don't have keys will be ignored by the compaction process. Here's an example of a Pulsar message with a key:
-
-```java
-
-import org.apache.pulsar.client.api.Producer;
-
-Producer<byte[]> producer = ...;
-producer.newMessage()
-        .key("some-key")
-        .value(someByteArray)
-        .send();
-
-```
-
-The example below shows a message with a key being produced on a compacted Pulsar topic:
-
-```java
-
-import org.apache.pulsar.client.api.Producer;
-import org.apache.pulsar.client.api.PulsarClient;
-
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar://localhost:6650")
-        .build();
-
-Producer<byte[]> compactedTopicProducer = client.newProducer()
-        .topic("some-compacted-topic")
-        .create();
-
-compactedTopicProducer.newMessage()
-        .key("some-key")
-        .value(someByteArray)
-        .send();
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/cookbooks-deduplication.md b/site2/website/versioned_docs/version-2.10.1-deprecated/cookbooks-deduplication.md
deleted file mode 100644
index f7f9e3d7bb425b..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/cookbooks-deduplication.md
+++ /dev/null
@@ -1,151 +0,0 @@
----
-id: cookbooks-deduplication
-title: Message deduplication
-sidebar_label: "Message deduplication"
-original_id: cookbooks-deduplication
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-When **message deduplication** is enabled, it ensures that each message produced on Pulsar topics is persisted to disk *only once*, even if the message is produced more than once. Message deduplication is handled automatically on the server side.
-
-To use message deduplication in Pulsar, you need to configure your Pulsar brokers and clients.
-
-## How it works
-
-You can enable or disable message deduplication at the namespace level or the topic level. By default, it is disabled on all namespaces and topics.
You can enable it in the following ways:
-
-* Enable deduplication for all namespaces/topics at the broker level.
-* Enable deduplication for a specific namespace with the `pulsar-admin namespaces` interface.
-* Enable deduplication for a specific topic with the `pulsar-admin topics` interface.
-
-## Configure message deduplication
-
-You can configure message deduplication in Pulsar using the [`broker.conf`](reference-configuration.md#broker) configuration file. The following deduplication-related parameters are available.
-
-Parameter | Description | Default
-:---------|:------------|:-------
-`brokerDeduplicationEnabled` | Sets the default behavior for message deduplication in the Pulsar broker. If it is set to `true`, message deduplication is enabled on all namespaces/topics. If it is set to `false`, you have to enable or disable deduplication at the namespace level or the topic level. | `false`
-`brokerDeduplicationMaxNumberOfProducers` | The maximum number of producers for which information is stored for deduplication purposes. | `10000`
-`brokerDeduplicationEntriesInterval` | The number of entries after which a deduplication informational snapshot is taken. A larger interval leads to fewer snapshots being taken, though this lengthens the topic recovery time (the time required for entries published after the snapshot to be replayed). | `1000`
-`brokerDeduplicationSnapshotIntervalSeconds`| The time period after which a deduplication informational snapshot is taken. It runs in parallel with `brokerDeduplicationEntriesInterval`. |`120`
-`brokerDeduplicationProducerInactivityTimeoutMinutes` | The time of inactivity (in minutes) after which the broker discards deduplication information related to a disconnected producer. | `360` (6 hours)
-
-### Set the default value at the broker level
-
-By default, message deduplication is *disabled* on all Pulsar namespaces/topics. To enable it on all namespaces/topics, set the `brokerDeduplicationEnabled` parameter to `true` and restart the broker.
-
-Even if you set a value for `brokerDeduplicationEnabled`, enabling or disabling deduplication via the Pulsar admin CLI overrides the broker-level default.
-
-### Enable message deduplication
-
-Though message deduplication is disabled by default at the broker level, you can enable message deduplication for a specific namespace or topic using the [`pulsar-admin namespaces set-deduplication`](reference-pulsar-admin.md#namespace-set-deduplication) or the [`pulsar-admin topics set-deduplication`](reference-pulsar-admin.md#topic-set-deduplication) command. You can use the `--enable`/`-e` flag and specify the namespace/topic.
-
-The following example shows how to enable message deduplication at the namespace level.
-
-```bash
-
-$ bin/pulsar-admin namespaces set-deduplication \
-  public/default \
-  --enable # or just -e
-
-```
-
-### Disable message deduplication
-
-Even if you enable message deduplication at the broker level, you can disable message deduplication for a specific namespace or topic using the [`pulsar-admin namespace set-deduplication`](reference-pulsar-admin.md#namespace-set-deduplication) or the [`pulsar-admin topics set-deduplication`](reference-pulsar-admin.md#topic-set-deduplication) command. Use the `--disable`/`-d` flag and specify the namespace/topic.
-
-The following example shows how to disable message deduplication at the namespace level.
-
-```bash
-
-$ bin/pulsar-admin namespaces set-deduplication \
-  public/default \
-  --disable # or just -d
-
-```
-
-## Pulsar clients
-
-If you enable message deduplication in Pulsar brokers, you need to complete the following tasks for your client producers:
-
-1. Specify a name for the producer.
-1. Set the message timeout to `0` (namely, no timeout).
-
-The instructions for Java, Python, and C++ clients are different.
-
-````mdx-code-block
-
-
-To enable message deduplication on a [Java producer](client-libraries-java.md#producers), set the producer name using the `producerName` setter, and set the timeout to `0` using the `sendTimeout` setter.
-
-```java
-
-import org.apache.pulsar.client.api.Producer;
-import org.apache.pulsar.client.api.PulsarClient;
-import java.util.concurrent.TimeUnit;
-
-PulsarClient pulsarClient = PulsarClient.builder()
-        .serviceUrl("pulsar://localhost:6650")
-        .build();
-Producer<byte[]> producer = pulsarClient.newProducer()
-        .producerName("producer-1")
-        .topic("persistent://public/default/topic-1")
-        .sendTimeout(0, TimeUnit.SECONDS)
-        .create();
-
-```
-
-
-
-To enable message deduplication on a [Python producer](client-libraries-python.md#producers), set the producer name using `producer_name`, and set the timeout to `0` using `send_timeout_millis`.
-
-```python
-
-import pulsar
-
-client = pulsar.Client("pulsar://localhost:6650")
-producer = client.create_producer(
-    "persistent://public/default/topic-1",
-    producer_name="producer-1",
-    send_timeout_millis=0)
-
-```
-
-
-
-To enable message deduplication on a [C++ producer](client-libraries-cpp.md#producer), set the producer name using `setProducerName`, and set the timeout to `0` using `setSendTimeout`.
-
-```cpp
-
-#include <pulsar/Client.h>
-
-std::string serviceUrl = "pulsar://localhost:6650";
-std::string topic = "persistent://some-tenant/ns1/topic-1";
-std::string producerName = "producer-1";
-
-Client client(serviceUrl);
-
-ProducerConfiguration producerConfig;
-producerConfig.setSendTimeout(0);
-producerConfig.setProducerName(producerName);
-
-Producer producer;
-
-Result result = client.createProducer(topic, producerConfig, producer);
-
-```
-
-
-
-````
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/cookbooks-encryption.md b/site2/website/versioned_docs/version-2.10.1-deprecated/cookbooks-encryption.md
deleted file mode 100644
index f0d8fb8735eb63..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/cookbooks-encryption.md
+++ /dev/null
@@ -1,184 +0,0 @@
----
-id: cookbooks-encryption
-title: Pulsar Encryption
-sidebar_label: "Encryption"
-original_id: cookbooks-encryption
----
-
-Pulsar encryption allows applications to encrypt messages at the producer and decrypt them at the consumer. Encryption is performed using a public/private key pair configured by the application. Encrypted messages can only be decrypted by consumers with a valid key.
-
-## Asymmetric and symmetric encryption
-
-Pulsar uses a dynamically generated symmetric AES key to encrypt messages (data). The AES key (data key) is encrypted using the application-provided ECDSA/RSA key pair; as a result, there is no need to share the secret with everyone.
-
-The key is a public/private key pair used for encryption/decryption. The producer key is the public key, and the consumer key is the private key of the key pair.
-
-The application configures the producer with the public key. This key is used to encrypt the AES data key. The encrypted data key is sent as part of the message header.
Only entities with the private key (in this case, the consumer) will be able to decrypt the data key, which is used to decrypt the message.
-
-A message can be encrypted with more than one key. Any one of the keys used for encrypting the message is sufficient to decrypt the message.
-
-Pulsar does not store the encryption key anywhere in the Pulsar service. If you lose or delete the private key, your message is irretrievably lost and unrecoverable.
-
-## Producer
-![alt text](/assets/pulsar-encryption-producer.jpg "Pulsar Encryption Producer")
-
-## Consumer
-![alt text](/assets/pulsar-encryption-consumer.jpg "Pulsar Encryption Consumer")
-
-## Here are the steps to get started:
-
-1. Create your ECDSA or RSA public/private key pair.
-
-```shell
-
-openssl ecparam -name secp521r1 -genkey -param_enc explicit -out test_ecdsa_privkey.pem
-openssl ec -in test_ecdsa_privkey.pem -pubout -outform pkcs8 -out test_ecdsa_pubkey.pem
-
-```
-
-2. Add the public and private key to the key management system and configure your producers to retrieve public keys and your consumer clients to retrieve private keys.
-3. Implement the CryptoKeyReader::getPublicKey() interface for the producer and the CryptoKeyReader::getPrivateKey() interface for the consumer; these are invoked by the Pulsar client to load the keys.
-4. Add the encryption key name to the producer builder: producerBuilder.addEncryptionKey("myapp.key")
-5. Add the CryptoKeyReader implementation to the producer/consumer builder: producerBuilder.cryptoKeyReader(keyReader)
-6. Sample producer application:
-
-```java
-
-class RawFileKeyReader implements CryptoKeyReader {
-
-    String publicKeyFile = "";
-    String privateKeyFile = "";
-
-    RawFileKeyReader(String pubKeyFile, String privKeyFile) {
-        publicKeyFile = pubKeyFile;
-        privateKeyFile = privKeyFile;
-    }
-
-    @Override
-    public EncryptionKeyInfo getPublicKey(String keyName, Map<String, String> keyMeta) {
-        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
-        try {
-            keyInfo.setKey(Files.readAllBytes(Paths.get(publicKeyFile)));
-        } catch (IOException e) {
-            System.out.println("ERROR: Failed to read public key from file " + publicKeyFile);
-            e.printStackTrace();
-        }
-        return keyInfo;
-    }
-
-    @Override
-    public EncryptionKeyInfo getPrivateKey(String keyName, Map<String, String> keyMeta) {
-        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
-        try {
-            keyInfo.setKey(Files.readAllBytes(Paths.get(privateKeyFile)));
-        } catch (IOException e) {
-            System.out.println("ERROR: Failed to read private key from file " + privateKeyFile);
-            e.printStackTrace();
-        }
-        return keyInfo;
-    }
-}
-
-PulsarClient pulsarClient = PulsarClient.builder()
-        .serviceUrl("pulsar://localhost:6650")
-        .build();
-
-Producer<byte[]> producer = pulsarClient.newProducer()
-        .topic("persistent://my-tenant/my-ns/my-topic")
-        .cryptoKeyReader(new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem"))
-        .addEncryptionKey("myappkey")
-        .create();
-
-for (int i = 0; i < 10; i++) {
-    producer.send("my-message".getBytes());
-}
-
-pulsarClient.close();
-
-```
-
-7.
Sample consumer application:

```java

// Imports assumed by this sample (not shown in the original snippet); adjust to your client library version.
import org.apache.pulsar.client.api.*;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Map;

class RawFileKeyReader implements CryptoKeyReader {

    String publicKeyFile = "";
    String privateKeyFile = "";

    RawFileKeyReader(String pubKeyFile, String privKeyFile) {
        publicKeyFile = pubKeyFile;
        privateKeyFile = privKeyFile;
    }

    @Override
    public EncryptionKeyInfo getPublicKey(String keyName, Map<String, String> keyMeta) {
        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
        try {
            keyInfo.setKey(Files.readAllBytes(Paths.get(publicKeyFile)));
        } catch (IOException e) {
            System.out.println("ERROR: Failed to read public key from file " + publicKeyFile);
            e.printStackTrace();
        }
        return keyInfo;
    }

    @Override
    public EncryptionKeyInfo getPrivateKey(String keyName, Map<String, String> keyMeta) {
        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
        try {
            keyInfo.setKey(Files.readAllBytes(Paths.get(privateKeyFile)));
        } catch (IOException e) {
            System.out.println("ERROR: Failed to read private key from file " + privateKeyFile);
            e.printStackTrace();
        }
        return keyInfo;
    }
}

ConsumerConfiguration consConf = new ConsumerConfiguration();
consConf.setCryptoKeyReader(new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem"));
PulsarClient pulsarClient = PulsarClient.create("http://localhost:8080");
Consumer consumer = pulsarClient.subscribe("persistent://my-tenant/my-ns/my-topic", "my-subscriber-name", consConf);
Message msg = null;

for (int i = 0; i < 10; i++) {
    msg = consumer.receive();
    // do something
    System.out.println("Received: " + new String(msg.getData()));
}

// Acknowledge the consumption of all messages at once
consumer.acknowledgeCumulative(msg);
pulsarClient.close();

```

## Key rotation
Pulsar generates a new AES data key every 4 hours or after a certain number of messages are published. The asymmetric public key is automatically fetched by the producer every 4 hours by calling CryptoKeyReader::getPublicKey() to retrieve the latest version.

## Enabling encryption at the producer application:
If you produce messages that are consumed across application boundaries, you need to ensure that consumers in other applications have access to one of the private keys that can decrypt the messages. This can be done in two ways:
1. The consumer application provides you access to its public key, which you add to your producer keys
1. You grant access to one of the private keys from the pairs that the producer uses

In some cases, the producer may want to encrypt the messages with multiple keys. For this, add all such keys to the config. The consumer will be able to decrypt the message as long as it has access to at least one of the keys.

For example, if messages need to be encrypted using two keys, myapp.messagekey1 and myapp.messagekey2:

```java

conf.addEncryptionKey("myapp.messagekey1");
conf.addEncryptionKey("myapp.messagekey2");

```

## Decrypting encrypted messages at the consumer application:
Consumers require access to one of the private keys to decrypt messages produced by the producer. If you would like to receive encrypted messages, create a public/private key pair and give your public key to the producer application to encrypt messages with.

## Handling Failures:
* The producer or consumer loses access to the key
  * The produce action will fail, indicating the cause of the failure. The application has the option to proceed with sending unencrypted messages in such cases. Call conf.setCryptoFailureAction(ProducerCryptoFailureAction) to control the producer behavior.
The default behavior is to fail the request.
  * If consumption fails due to a decryption failure or missing keys at the consumer, the application has the option to consume the encrypted message or discard it. Call conf.setCryptoFailureAction(ConsumerCryptoFailureAction) to control the consumer behavior. The default behavior is to fail the request. The application will never be able to decrypt the messages if the private key is permanently lost.
* Batch messaging
  * If decryption fails and the message contains batch messages, the client will not be able to retrieve individual messages in the batch, so message consumption fails even if conf.setCryptoFailureAction() is set to CONSUME.
* If decryption fails, message consumption stops and the application will notice backlog growth in addition to decryption-failure messages in the client log. If the application does not have access to the private key to decrypt the message, the only option is to skip or discard the backlogged messages.

diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/cookbooks-message-queue.md b/site2/website/versioned_docs/version-2.10.1-deprecated/cookbooks-message-queue.md
deleted file mode 100644
index eb43cbde5fb818..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/cookbooks-message-queue.md
+++ /dev/null
@@ -1,127 +0,0 @@
---
id: cookbooks-message-queue
title: Using Pulsar as a message queue
sidebar_label: "Message queue"
original_id: cookbooks-message-queue
---

Message queues are essential components of many large-scale data architectures. If every single work object that passes through your system absolutely *must* be processed in spite of the slowness or downright failure of this or that system component, there's a good chance that you'll need a message queue to step in and ensure that unprocessed data is retained---with correct ordering---until the required actions are taken.

Pulsar is a great choice for a message queue because:

* it was built with [persistent message storage](concepts-architecture-overview.md#persistent-storage) in mind
* it offers automatic load balancing across [consumers](reference-terminology.md#consumer) for messages on a topic (or custom load balancing if you wish)

> You can use the same Pulsar installation to act as a real-time message bus and as a message queue if you wish (or just one or the other). You can set aside some topics for real-time purposes and other topics for message queue purposes (or use specific namespaces for either purpose if you wish).


## Client configuration changes

To use a Pulsar [topic](reference-terminology.md#topic) as a message queue, you should distribute the receiver load on that topic across several consumers (the optimal number of consumers will depend on the load). Each consumer must:

* Establish a [shared subscription](concepts-messaging.md#shared) and use the same subscription name as the other consumers (otherwise the subscription is not shared and the consumers can't act as a processing ensemble)
* If you'd like to have tight control over message dispatching across consumers, set the consumers' **receiver queue** size very low (potentially even to 0 if necessary). Each Pulsar [consumer](reference-terminology.md#consumer) has a receiver queue that determines how many messages the consumer will attempt to fetch at a time. A receiver queue of 1000 (the default), for example, means that the consumer will attempt to process 1000 messages from the topic's backlog upon connection.
Setting the receiver queue to zero essentially means ensuring that each consumer is only doing one thing at a time.

  The downside to restricting the receiver queue size of consumers is that it limits the potential throughput of those consumers and cannot be used with [partitioned topics](reference-terminology.md#partitioned-topic). Whether the performance/control trade-off is worthwhile will depend on your use case.

## Java clients

Here's an example Java consumer configuration that uses a shared subscription:

```java

import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.SubscriptionType;

String SERVICE_URL = "pulsar://localhost:6650";
String TOPIC = "persistent://public/default/mq-topic-1";
String subscription = "sub-1";

PulsarClient client = PulsarClient.builder()
    .serviceUrl(SERVICE_URL)
    .build();

Consumer consumer = client.newConsumer()
    .topic(TOPIC)
    .subscriptionName(subscription)
    .subscriptionType(SubscriptionType.Shared)
    // If you'd like to restrict the receiver queue size
    .receiverQueueSize(10)
    .subscribe();

```

## Python clients

Here's an example Python consumer configuration that uses a shared subscription:

```python

from pulsar import Client, ConsumerType

SERVICE_URL = "pulsar://localhost:6650"
TOPIC = "persistent://public/default/mq-topic-1"
SUBSCRIPTION = "sub-1"

client = Client(SERVICE_URL)
consumer = client.subscribe(
    TOPIC,
    SUBSCRIPTION,
    # If you'd like to restrict the receiver queue size
    receiver_queue_size=10,
    consumer_type=ConsumerType.Shared)

```

## C++ clients

Here's an example C++ consumer configuration that uses a shared subscription:

```cpp

#include <pulsar/Client.h>

std::string serviceUrl = "pulsar://localhost:6650";
std::string topic = "persistent://public/default/mq-topic-1";
std::string subscription = "sub-1";

Client client(serviceUrl);

ConsumerConfiguration consumerConfig;
consumerConfig.setConsumerType(ConsumerShared);
// If you'd like to restrict the receiver queue size
consumerConfig.setReceiverQueueSize(10);

Consumer consumer;

Result result = client.subscribe(topic, subscription, consumerConfig, consumer);

```

## Go clients

Here is an example of a Go consumer configuration that uses a shared subscription:

```go

import (
    "log"

    "github.com/apache/pulsar-client-go/pulsar"
)

client, err := pulsar.NewClient(pulsar.ClientOptions{
    URL: "pulsar://localhost:6650",
})
if err != nil {
    log.Fatal(err)
}
consumer, err := client.Subscribe(pulsar.ConsumerOptions{
    Topic:             "persistent://public/default/mq-topic-1",
    SubscriptionName:  "sub-1",
    Type:              pulsar.Shared,
    ReceiverQueueSize: 10, // If you'd like to restrict the receiver queue size
})
if err != nil {
    log.Fatal(err)
}

```

diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/cookbooks-non-persistent.md b/site2/website/versioned_docs/version-2.10.1-deprecated/cookbooks-non-persistent.md
deleted file mode 100644
index 178301e86eb8df..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/cookbooks-non-persistent.md
+++ /dev/null
@@ -1,63 +0,0 @@
---
id: cookbooks-non-persistent
title: Non-persistent messaging
sidebar_label: "Non-persistent messaging"
original_id: cookbooks-non-persistent
---

**Non-persistent topics** are Pulsar topics in which message data is *never* [persistently stored](concepts-architecture-overview.md#persistent-storage) and kept only in memory.
This cookbook provides: - -* A basic [conceptual overview](#overview) of non-persistent topics -* Information about [configurable parameters](#configuration) related to non-persistent topics -* A guide to the [CLI interface](#cli) for managing non-persistent topics - -## Overview - -By default, Pulsar persistently stores *all* unacknowledged messages on multiple [BookKeeper](#persistent-storage) bookies (storage nodes). Data for messages on persistent topics can thus survive broker restarts and subscriber failover. - -Pulsar also, however, supports **non-persistent topics**, which are topics on which messages are *never* persisted to disk and live only in memory. When using non-persistent delivery, killing a Pulsar [broker](reference-terminology.md#broker) or disconnecting a subscriber to a topic means that all in-transit messages are lost on that (non-persistent) topic, meaning that clients may see message loss. - -Non-persistent topics have names of this form (note the `non-persistent` in the name): - -```http - -non-persistent://tenant/namespace/topic - -``` - -> For more high-level information about non-persistent topics, see the [Concepts and Architecture](concepts-messaging.md#non-persistent-topics) documentation. - -## Using - -> In order to use non-persistent topics, they must be [enabled](#enabling) in your Pulsar broker configuration. - -In order to use non-persistent topics, you only need to differentiate them by name when interacting with them. This [`pulsar-client produce`](reference-cli-tools.md#pulsar-client-produce) command, for example, would produce one message on a non-persistent topic in a standalone cluster: - -```bash - -$ bin/pulsar-client produce non-persistent://public/default/example-np-topic \ - --num-produce 1 \ - --messages "This message will be stored only in memory" - -``` - -> For a more thorough guide to non-persistent topics from an administrative perspective, see the [Non-persistent topics](admin-api-topics.md) guide. - -## Enabling - -In order to enable non-persistent topics in a Pulsar broker, the [`enableNonPersistentTopics`](reference-configuration.md#broker-enableNonPersistentTopics) must be set to `true`. This is the default, and so you won't need to take any action to enable non-persistent messaging. - - -> #### Configuration for standalone mode -> If you're running Pulsar in standalone mode, the same configurable parameters are available but in the [`standalone.conf`](reference-configuration.md#standalone) configuration file. - -If you'd like to enable *only* non-persistent topics in a broker, you can set the [`enablePersistentTopics`](reference-configuration.md#broker-enablePersistentTopics) parameter to `false` and the `enableNonPersistentTopics` parameter to `true`. - -## Managing with cli - -Non-persistent topics can be managed using the [`pulsar-admin non-persistent`](reference-pulsar-admin.md#non-persistent) command-line interface. With that interface you can perform actions like [create a partitioned non-persistent topic](reference-pulsar-admin.md#non-persistent-create-partitioned-topic), get [stats](reference-pulsar-admin.md#non-persistent-stats) for a non-persistent topic, [list](reference-pulsar-admin.md) non-persistent topics under a namespace, and more. - -## Using with Pulsar clients - -You shouldn't need to make any changes to your Pulsar clients to use non-persistent messaging beyond making sure that you use proper [topic names](#using) with `non-persistent` as the topic type. 
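To illustrate, here is a minimal Java sketch (assuming a standalone cluster at `pulsar://localhost:6650` and the example topic used above); the only difference from producing to a persistent topic is the `non-persistent` prefix in the topic name:

```java

import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;

PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar://localhost:6650") // assumed standalone cluster
    .build();

// Only the topic name prefix distinguishes this from a persistent producer.
Producer<byte[]> producer = client.newProducer()
    .topic("non-persistent://public/default/example-np-topic")
    .create();

producer.send("This message will be stored only in memory".getBytes());

client.close();

```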
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/cookbooks-partitioned.md b/site2/website/versioned_docs/version-2.10.1-deprecated/cookbooks-partitioned.md
deleted file mode 100644
index fb9ac354cc6d60..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/cookbooks-partitioned.md
+++ /dev/null
@@ -1,7 +0,0 @@
---
id: cookbooks-partitioned
title: Partitioned topics
sidebar_label: "Partitioned Topics"
original_id: cookbooks-partitioned
---
For details of the content, refer to [manage topics](admin-api-topics.md).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/cookbooks-retention-expiry.md b/site2/website/versioned_docs/version-2.10.1-deprecated/cookbooks-retention-expiry.md
deleted file mode 100644
index bb268ecf671664..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/cookbooks-retention-expiry.md
+++ /dev/null
@@ -1,520 +0,0 @@
---
id: cookbooks-retention-expiry
title: Message retention and expiry
sidebar_label: "Message retention and expiry"
original_id: cookbooks-retention-expiry
---

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````


Pulsar brokers are responsible for handling messages that pass through Pulsar, including [persistent storage](concepts-architecture-overview.md#persistent-storage) of messages. By default, for each topic, brokers only retain messages that are in at least one backlog. A backlog is the set of unacknowledged messages for a particular subscription. As a topic can have multiple subscriptions, a topic can have multiple backlogs.

As a consequence, no messages are retained (by default) on a topic that has not had any subscriptions created for it.

(Note that messages that are no longer being stored are not necessarily immediately deleted, and may in fact still be accessible until the next ledger rollover. Because clients cannot predict when rollovers may happen, it is not wise to rely on a rollover not happening at an inconvenient point in time.)

In Pulsar, you can modify this behavior, with namespace granularity, in two ways:

* You can persistently store messages that are not within a backlog (because they've been acknowledged on every existing subscription, or because there are no subscriptions) by setting [retention policies](#retention-policies).
* Messages that are not acknowledged within a specified timeframe can be automatically acknowledged by specifying the [time to live](#time-to-live-ttl) (TTL).

Pulsar's [admin interface](admin-api-overview.md) enables you to manage both retention policies and TTL with namespace granularity (and thus within a specific tenant and either on a specific cluster or in the [`global`](concepts-architecture-overview.md#global-cluster) cluster).


> #### Retention and TTL solve two different problems
> * Message retention: Keep the data for at least X hours (even if acknowledged)
> * Time-to-live: Discard data after some time (by automatically acknowledging)
>
> Most applications will want to use at most one of these.


## Retention policies

By default, when a Pulsar message arrives at a broker, the message is stored until it has been acknowledged on all subscriptions, at which point it is marked for deletion. You can override this behavior and retain messages that have already been acknowledged on all subscriptions by setting a *retention policy* for all topics in a given namespace.
Retention is based on both a *size limit* and a *time limit*. - -The diagram below illustrates the concept of message retention. -![](/assets/retention.svg) - -Retention policies are useful when you use the Reader interface. The Reader interface does not use acknowledgements, and messages do not exist within backlogs. It is required to configure retention for Reader-only use cases. - -When you set a retention policy on topics in a namespace, you must set **both** a *size limit* (via `defaultRetentionSizeInMB`) and a *time limit* (via `defaultRetentionTimeInMinutes`) . You can refer to the following table to set retention policies in `pulsar-admin` and Java. - -|Time limit|Size limit| Message retention | -|----------|----------|------------------------| -| -1 | -1 | Infinite retention | -| -1 | >0 | Based on the size limit | -| >0 | -1 | Based on the time limit | -| 0 | 0 | Disable message retention (by default) | -| 0 | >0 | Invalid | -| >0 | 0 | Invalid | -| >0 | >0 | Acknowledged messages or messages with no active subscription will not be retained when either time or size reaches the limit. | - -The retention settings apply to all messages on topics that do not have any subscriptions, or to messages that have been acknowledged by all subscriptions. The retention policy settings do not affect unacknowledged messages on topics with subscriptions. The unacknowledged messages are controlled by the backlog quota. - -When a retention limit on a topic is exceeded, the oldest message is marked for deletion until the set of retained messages falls within the specified limits again. - -### Defaults - -You can set message retention at instance level with the following two parameters: `defaultRetentionTimeInMinutes` and `defaultRetentionSizeInMB`. Both parameters are set to `0` by default. - -For more information of the two parameters, refer to the [`broker.conf`](reference-configuration.md#broker) configuration file. - -### Set retention policy - -You can set a retention policy for a namespace by specifying the namespace, a size limit and a time limit in `pulsar-admin`, REST API and Java. - -````mdx-code-block - - - -You can use the [`set-retention`](reference-pulsar-admin.md#namespaces-set-retention) subcommand and specify a namespace, a size limit using the `-s`/`--size` flag, and a time limit using the `-t`/`--time` flag. - -In the following example, the size limit is set to 10 GB and the time limit is set to 3 hours for each topic within the `my-tenant/my-ns` namespace. -- When the size of messages reaches 10 GB on a topic within 3 hours, the acknowledged messages will not be retained. -- After 3 hours, even if the message size is less than 10 GB, the acknowledged messages will not be retained. - -```shell - -$ pulsar-admin namespaces set-retention my-tenant/my-ns \ - --size 10G \ - --time 3h - -``` - -In the following example, the time is not limited and the size limit is set to 1 TB. The size limit determines the retention. - -```shell - -$ pulsar-admin namespaces set-retention my-tenant/my-ns \ - --size 1T \ - --time -1 - -``` - -In the following example, the size is not limited and the time limit is set to 3 hours. The time limit determines the retention. - -```shell - -$ pulsar-admin namespaces set-retention my-tenant/my-ns \ - --size -1 \ - --time 3h - -``` - -To achieve infinite retention, set both values to `-1`. - -```shell - -$ pulsar-admin namespaces set-retention my-tenant/my-ns \ - --size -1 \ - --time -1 - -``` - -To disable the retention policy, set both values to `0`. 
- -```shell - -$ pulsar-admin namespaces set-retention my-tenant/my-ns \ - --size 0 \ - --time 0 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/retention|operation/setRetention?version=@pulsar:version_number@} - -:::note - -To disable the retention policy, you need to set both the size and time limit to `0`. Set either size or time limit to `0` is invalid. - -::: - - - - -```java - -int retentionTime = 10; // 10 minutes -int retentionSize = 500; // 500 megabytes -RetentionPolicies policies = new RetentionPolicies(retentionTime, retentionSize); -admin.namespaces().setRetention(namespace, policies); - -``` - - - - -```` - -### Get retention policy - -You can fetch the retention policy for a namespace by specifying the namespace. The output will be a JSON object with two keys: `retentionTimeInMinutes` and `retentionSizeInMB`. - -````mdx-code-block - - - -Use the [`get-retention`](reference-pulsar-admin.md#namespaces) subcommand and specify the namespace. - -##### Example - -```shell - -$ pulsar-admin namespaces get-retention my-tenant/my-ns -{ - "retentionTimeInMinutes": 10, - "retentionSizeInMB": 500 -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/retention|operation/getRetention?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getRetention(namespace); - -``` - - - - -```` - -## Backlog quotas - -*Backlogs* are sets of unacknowledged messages for a topic that have been stored by bookies. Pulsar stores all unacknowledged messages in backlogs until they are processed and acknowledged. - -You can control the allowable size and/or time of backlogs, at the namespace level, using *backlog quotas*. Pulsar uses a quota to enforce a hard limit on the logical size of the backlogs in a topic. Backlog quota triggers an alert policy (for example, producer exception) once the quota limit is reached. - -The diagram below illustrates the concept of backlog quota. -![](/assets/backlog-quota.svg) - -Setting a backlog quota involves setting: - -* an allowable *size and/or time threshold* for each topic in the namespace -* a *retention policy* that determines which action the [broker](reference-terminology.md#broker) takes if the threshold is exceeded. - -The following retention policies are available: - -Policy | Action -:------|:------ -`producer_request_hold` | The broker will hold and not persist produce request payload -`producer_exception` | The broker will disconnect from the client by throwing an exception -`consumer_backlog_eviction` | The broker will begin discarding backlog messages - - -> #### Beware the distinction between retention policy types -> As you may have noticed, there are two definitions of the term "retention policy" in Pulsar, one that applies to persistent storage of messages not in backlogs, and one that applies to messages within backlogs. - - -Backlog quotas are handled at the namespace level. They can be managed via: - -### Set size/time thresholds and backlog retention policies - -You can set a size and/or time threshold and backlog retention policy for all of the topics in a [namespace](reference-terminology.md#namespace) by specifying the namespace, a size limit and/or a time limit in second, and a policy by name. 
- -````mdx-code-block - - - -Use the [`set-backlog-quota`](reference-pulsar-admin.md#namespaces) subcommand and specify a namespace, a size limit using the `-l`/`--limit` , `-lt`/`--limitTime` flag to limit backlog, a retention policy using the `-p`/`--policy` flag and a policy type using `-t`/`--type` (default is destination_storage). - -##### Example - -```shell - -$ pulsar-admin namespaces set-backlog-quota my-tenant/my-ns \ - --limit 2G \ - --policy producer_request_hold - -``` - -```shell - -$ pulsar-admin namespaces set-backlog-quota my-tenant/my-ns/my-topic \ ---limitTime 3600 \ ---policy producer_request_hold \ ---type message_age - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/getBacklogQuotaMap?version=@pulsar:version_number@} - - - - -```java - -long sizeLimit = 2147483648L; -BacklogQuota.RetentionPolicy policy = BacklogQuota.RetentionPolicy.producer_request_hold; -BacklogQuota quota = new BacklogQuota(sizeLimit, policy); -admin.namespaces().setBacklogQuota(namespace, quota); - -``` - - - - -```` - -### Get backlog threshold and backlog retention policy - -You can see which size threshold and backlog retention policy has been applied to a namespace. - -````mdx-code-block - - - -Use the [`get-backlog-quotas`](reference-pulsar-admin.md#pulsar-admin-namespaces-get-backlog-quotas) subcommand and specify a namespace. Here's an example: - -```shell - -$ pulsar-admin namespaces get-backlog-quotas my-tenant/my-ns -{ - "destination_storage": { - "limit" : 2147483648, - "policy" : "producer_request_hold" - } -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/backlogQuotaMap|operation/getBacklogQuotaMap?version=@pulsar:version_number@} - - - - -```java - -Map quotas = - admin.namespaces().getBacklogQuotas(namespace); - -``` - - - - -```` - -### Remove backlog quotas - -````mdx-code-block - - - -Use the [`remove-backlog-quota`](reference-pulsar-admin.md#pulsar-admin-namespaces-remove-backlog-quota) subcommand and specify a namespace, use `t`/`--type` to specify backlog type to remove(default is destination_storage). Here's an example: - -```shell - -$ pulsar-admin namespaces remove-backlog-quota my-tenant/my-ns - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/removeBacklogQuota?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().removeBacklogQuota(namespace); - -``` - - - - -```` - -### Clear backlog - -#### pulsar-admin - -Use the [`clear-backlog`](reference-pulsar-admin.md#pulsar-admin-namespaces-clear-backlog) subcommand. - -##### Example - -```shell - -$ pulsar-admin namespaces clear-backlog my-tenant/my-ns - -``` - -By default, you will be prompted to ensure that you really want to clear the backlog for the namespace. You can override the prompt using the `-f`/`--force` flag. - -## Time to live (TTL) - -By default, Pulsar stores all unacknowledged messages forever. This can lead to heavy disk space usage in cases where a lot of messages are going unacknowledged. If disk space is a concern, you can set a time to live (TTL) that determines how long unacknowledged messages will be retained. - -The TTL parameter is like a stopwatch attached to each message that defines the amount of time a message is allowed to stay in the unacknowledged state. When the TTL expires, Pulsar automatically moves the message to the acknowledged state (and thus makes it ready for deletion). - -The diagram below illustrates the concept of TTL. 
![](/assets/ttl.svg)

### Set the TTL for a namespace

````mdx-code-block



Use the [`set-message-ttl`](reference-pulsar-admin.md#pulsar-admin-namespaces-set-message-ttl) subcommand and specify a namespace and a TTL (in seconds) using the `-ttl`/`--messageTTL` flag.

##### Example

```shell

$ pulsar-admin namespaces set-message-ttl my-tenant/my-ns \
  --messageTTL 120 # TTL of 2 minutes

```



{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/setNamespaceMessageTTL?version=@pulsar:version_number@}



```java

admin.namespaces().setNamespaceMessageTTL(namespace, ttlInSeconds);

```



````

### Get the TTL configuration for a namespace

````mdx-code-block



Use the [`get-message-ttl`](reference-pulsar-admin.md#pulsar-admin-namespaces-get-message-ttl) subcommand and specify a namespace.

##### Example

```shell

$ pulsar-admin namespaces get-message-ttl my-tenant/my-ns
60

```



{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/getNamespaceMessageTTL?version=@pulsar:version_number@}



```java

admin.namespaces().getNamespaceMessageTTL(namespace)

```



````

### Remove the TTL configuration for a namespace

````mdx-code-block



Use the [`remove-message-ttl`](reference-pulsar-admin.md#pulsar-admin-namespaces-remove-message-ttl) subcommand and specify a namespace.

##### Example

```shell

$ pulsar-admin namespaces remove-message-ttl my-tenant/my-ns

```



{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/removeNamespaceMessageTTL?version=@pulsar:version_number@}



```java

admin.namespaces().removeNamespaceMessageTTL(namespace)

```



````

## Delete messages from namespaces

When it comes to the physical storage size, message expiry and retention are just like two sides of the same coin.
* The backlog quota and TTL parameters prevent disk size from growing indefinitely, as Pulsar's default behavior is to persist unacknowledged messages.
* The retention policy allocates storage space to accommodate the messages that are supposed to be deleted by Pulsar by default.

In conclusion, the size of your physical storage should accommodate the sum of the backlog quota and the retention size.

The message deletion rate (the rate at which disk space is released) is determined by multiple factors.

- **Segment rollover period**: basically, the segment rollover period is how often a new segment is created. Once a new segment is created, the old segment will be deleted. By default, this happens either when you have written 50,000 entries (messages) or have waited 240 minutes. You can tune this in your broker.

- **Entry log rollover period**: multiple ledgers in BookKeeper are interleaved into an [entry log](https://bookkeeper.apache.org/docs/4.11.1/getting-started/concepts/#entry-logs). For the data of a deleted ledger to be removed, all the entry logs that contain its entries must first be rolled over.
The entry log rollover period is configurable, but is purely based on the entry log size. For details, see [here](https://bookkeeper.apache.org/docs/4.11.1/reference/config/#entry-log-settings). Once the entry log is rolled over, the entry log can be garbage collected.

- **Garbage collection interval**: because entry logs have interleaved ledgers, to free up space, the entry logs need to be rewritten. The garbage collection interval is how often BookKeeper performs garbage collection,
which relates to the minor and major compaction of entry logs. For details, see [here](https://bookkeeper.apache.org/docs/4.11.1/reference/config/#entry-log-compaction-settings).

The diagram below illustrates one of the cases in which the consumed storage size is larger than the given limits for backlog and retention. Messages over the retention limit are kept because other messages in the same segment are still within the retention period.
![](/assets/retention-storage-size.svg)

If you do not have any retention period and you never have much of a backlog, the upper limit for retained (acknowledged) messages equals the Pulsar segment rollover period + the entry log rollover period + (the garbage collection interval * the garbage collection ratios).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/cookbooks-tiered-storage.md b/site2/website/versioned_docs/version-2.10.1-deprecated/cookbooks-tiered-storage.md
deleted file mode 100644
index b1deb135209a98..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/cookbooks-tiered-storage.md
+++ /dev/null
@@ -1,344 +0,0 @@
---
id: cookbooks-tiered-storage
title: Tiered Storage
sidebar_label: "Tiered Storage"
original_id: cookbooks-tiered-storage
---

Pulsar's **Tiered Storage** feature allows older backlog data to be offloaded to long term storage, thereby freeing up space in BookKeeper and reducing storage costs. This cookbook walks you through using tiered storage in your Pulsar cluster.

* Tiered storage uses [Apache jclouds](https://jclouds.apache.org) to support [Amazon S3](https://aws.amazon.com/s3/) and [Google Cloud Storage](https://cloud.google.com/storage/) (GCS for short)
for long term storage. With jclouds, it is easy to add support for more [cloud storage providers](https://jclouds.apache.org/reference/providers/#blobstore-providers) in the future.

* Tiered storage uses [Apache Hadoop](http://hadoop.apache.org/) to support filesystem-based long term storage.
With Hadoop, it is easy to add support for more filesystems in the future.

## When should I use Tiered Storage?

Tiered storage should be used when you have a topic for which you want to keep a very long backlog for a long time. For example, if you have a topic containing user actions which you use to train your recommendation systems, you may want to keep that data for a long time, so that if you change your recommendation algorithm you can rerun it against your full user history.

## The offloading mechanism

A topic in Pulsar is backed by a log, known as a managed ledger. This log is composed of an ordered list of segments. Pulsar only ever writes to the final segment of the log. All previous segments are sealed. The data within a segment is immutable. This is known as a segment-oriented architecture.

![Tiered storage](/assets/pulsar-tiered-storage.png "Tiered Storage")

The Tiered Storage offloading mechanism takes advantage of this segment-oriented architecture. When offloading is requested, the segments of the log are copied, one-by-one, to tiered storage. All segments of the log, apart from the segment currently being written to, can be offloaded.

On the broker, the administrator must configure the bucket and credentials for the cloud storage service.
The configured bucket must exist before attempting to offload. If it does not exist, the offload operation will fail.

Pulsar uses multi-part objects to upload the segment data.
It is possible that a broker could crash while uploading the data.
We recommend you add a lifecycle rule to your bucket to expire incomplete multi-part uploads after a day or two to avoid
getting charged for incomplete uploads.

When ledgers are offloaded to long term storage, you can still query data in the offloaded ledgers with Pulsar SQL.

## Configuring the offload driver

Offloading is configured in ```broker.conf```.

At a minimum, the administrator must configure the driver, the bucket and the authenticating credentials.
There are also some other knobs to configure, like the bucket region, the max block size in backed storage, etc.

Currently, the following driver types are supported:

- `aws-s3`: [Simple Storage Service](https://aws.amazon.com/s3/)
- `google-cloud-storage`: [Google Cloud Storage](https://cloud.google.com/storage/)
- `filesystem`: [Filesystem Storage](http://hadoop.apache.org/)

> Driver names are case-insensitive. There is a third driver type, `s3`, which is identical to `aws-s3`,
> though it requires that you specify an endpoint url using `s3ManagedLedgerOffloadServiceEndpoint`. This is useful if
> using an S3-compatible data store other than AWS.

```conf

managedLedgerOffloadDriver=aws-s3

```

### "aws-s3" Driver configuration

#### Bucket and Region

Buckets are the basic containers that hold your data.
Everything that you store in Cloud Storage must be contained in a bucket.
You can use buckets to organize your data and control access to your data,
but unlike directories and folders, you cannot nest buckets.

```conf

s3ManagedLedgerOffloadBucket=pulsar-topic-offload

```

The bucket region is the region where the bucket is located. It is not a required
but a recommended configuration. If it is not configured, the default region is used.

With AWS S3, the default region is `US East (N. Virginia)`. The page [AWS Regions and Endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html) contains more information.

```conf

s3ManagedLedgerOffloadRegion=eu-west-3

```

#### Authentication with AWS

To be able to access AWS S3, you need to authenticate with AWS S3.
Pulsar does not provide any direct means of configuring authentication for AWS S3,
but relies on the mechanisms supported by the [DefaultAWSCredentialsProviderChain](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html).

Once you have created a set of credentials in the AWS IAM console, they can be configured in a number of ways.

1. Using EC2 instance metadata credentials

If you are on an AWS instance with an instance profile that provides credentials, Pulsar will use these credentials
if no other mechanism is provided.

2. Set the environment variables **AWS_ACCESS_KEY_ID** and **AWS_SECRET_ACCESS_KEY** in ```conf/pulsar_env.sh```.

```bash

export AWS_ACCESS_KEY_ID=ABC123456789
export AWS_SECRET_ACCESS_KEY=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c

```

> \"export\" is important so that the variables are made available in the environment of spawned processes.


3. Add the Java system properties *aws.accessKeyId* and *aws.secretKey* to **PULSAR_EXTRA_OPTS** in `conf/pulsar_env.sh`.
- -```bash - -PULSAR_EXTRA_OPTS="${PULSAR_EXTRA_OPTS} ${PULSAR_MEM} ${PULSAR_GC} -Daws.accessKeyId=ABC123456789 -Daws.secretKey=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c -Dio.netty.leakDetectionLevel=disabled -Dio.netty.recycler.maxCapacity.default=1000 -Dio.netty.recycler.linkCapacity=1024" - -``` - -4. Set the access credentials in ```~/.aws/credentials```. - -```conf - -[default] -aws_access_key_id=ABC123456789 -aws_secret_access_key=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c - -``` - -5. Assuming an IAM role - -If you want to assume an IAM role, this can be done via specifying the following: - -```conf - -s3ManagedLedgerOffloadRole= -s3ManagedLedgerOffloadRoleSessionName=pulsar-s3-offload - -``` - -This will use the `DefaultAWSCredentialsProviderChain` for assuming this role. - -> The broker must be rebooted for credentials specified in pulsar_env to take effect. - -#### Configuring the size of block read/write - -Pulsar also provides some knobs to configure the size of requests sent to AWS S3. - -- ```s3ManagedLedgerOffloadMaxBlockSizeInBytes``` configures the maximum size of - a "part" sent during a multipart upload. This cannot be smaller than 5MB. Default is 64MB. -- ```s3ManagedLedgerOffloadReadBufferSizeInBytes``` configures the block size for - each individual read when reading back data from AWS S3. Default is 1MB. - -In both cases, these should not be touched unless you know what you are doing. - -### "google-cloud-storage" Driver configuration - -Buckets are the basic containers that hold your data. Everything that you store in -Cloud Storage must be contained in a bucket. You can use buckets to organize your data and -control access to your data, but unlike directories and folders, you cannot nest buckets. - -```conf - -gcsManagedLedgerOffloadBucket=pulsar-topic-offload - -``` - -Bucket Region is the region where bucket located. Bucket Region is not a required but -a recommended configuration. If it is not configured, It will use the default region. - -Regarding GCS, buckets are default created in the `us multi-regional location`, -page [Bucket Locations](https://cloud.google.com/storage/docs/bucket-locations) contains more information. - -```conf - -gcsManagedLedgerOffloadRegion=europe-west3 - -``` - -#### Authentication with GCS - -The administrator needs to configure `gcsManagedLedgerOffloadServiceAccountKeyFile` in `broker.conf` -for the broker to be able to access the GCS service. `gcsManagedLedgerOffloadServiceAccountKeyFile` is -a Json file, containing the GCS credentials of a service account. -[Service Accounts section of this page](https://support.google.com/googleapi/answer/6158849) contains -more information of how to create this key file for authentication. More information about google cloud IAM -is available [here](https://cloud.google.com/storage/docs/access-control/iam). - -To generate service account credentials or view the public credentials that you've already generated, follow the following steps: - -1. Open the [Service accounts page](https://console.developers.google.com/iam-admin/serviceaccounts). -2. Select a project or create a new one. -3. Click **Create service account**. -4. In the **Create service account** window, type a name for the service account, and select **Furnish a new private key**. If you want to [grant G Suite domain-wide authority](https://developers.google.com/identity/protocols/OAuth2ServiceAccount#delegatingauthority) to the service account, also select **Enable G Suite Domain-wide Delegation**. -5. Click **Create**. 
> Note: Make sure that the service account you create has permission to operate on GCS; you need to assign the **Storage Admin** permission to your service account as described [here](https://cloud.google.com/storage/docs/access-control/iam).

```conf

gcsManagedLedgerOffloadServiceAccountKeyFile="/Users/hello/Downloads/project-804d5e6a6f33.json"

```

#### Configuring the size of block read/write

Pulsar also provides some knobs to configure the size of requests sent to GCS.

- ```gcsManagedLedgerOffloadMaxBlockSizeInBytes``` configures the maximum size of a "part" sent
  during a multipart upload. This cannot be smaller than 5MB. Default is 64MB.
- ```gcsManagedLedgerOffloadReadBufferSizeInBytes``` configures the block size for each individual
  read when reading back data from GCS. Default is 1MB.

In both cases, these should not be touched unless you know what you are doing.

### "filesystem" Driver configuration


#### Configure connection address

You can configure the connection address in the `broker.conf` file.

```conf

fileSystemURI="hdfs://127.0.0.1:9000"

```

#### Configure Hadoop profile path

The configuration file is stored in the Hadoop profile path. It contains various settings, such as base path, authentication, and so on.

```conf

fileSystemProfilePath="../conf/filesystem_offload_core_site.xml"

```

The model for storing topic data uses `org.apache.hadoop.io.MapFile`. You can use all of the configurations in `org.apache.hadoop.io.MapFile` for Hadoop.

**Example**

```conf

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value></value>
    </property>

    <property>
        <name>hadoop.tmp.dir</name>
        <value>pulsar</value>
    </property>

    <property>
        <name>io.file.buffer.size</name>
        <value>4096</value>
    </property>

    <property>
        <name>io.seqfile.compress.blocksize</name>
        <value>1000000</value>
    </property>

    <property>
        <name>io.seqfile.compression.type</name>
        <value>BLOCK</value>
    </property>

    <property>
        <name>io.map.index.interval</name>
        <value>128</value>
    </property>
</configuration>

```

For more information about the configurations in `org.apache.hadoop.io.MapFile`, see [Filesystem Storage](http://hadoop.apache.org/).

## Configuring offload to run automatically

Namespace policies can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that the topic has stored on the Pulsar cluster. Once the topic reaches the threshold, an offload operation will be triggered. Setting a negative value for the threshold will disable automatic offloading. Setting the threshold to 0 will cause the broker to offload data as soon as it possibly can.

```bash

$ bin/pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace

```

> Automatic offload runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, offload will not be triggered until the current segment is full.

## Configuring read priority for offloaded messages

By default, once messages are offloaded to long term storage, brokers read them from long term storage, even though the messages may still exist in BookKeeper for a period that depends on the administrator's configuration. For messages that exist in both BookKeeper and long term storage, if you prefer to read them from BookKeeper, you can use the following commands to change this configuration.

```bash

# default value for -orp is tiered-storage-first
$ bin/pulsar-admin namespaces set-offload-policies my-tenant/my-namespace -orp bookkeeper-first
$ bin/pulsar-admin topics set-offload-policies my-tenant/my-namespace/topic1 -orp bookkeeper-first

```

## Triggering offload manually

Offloading can be triggered manually through a REST endpoint on the Pulsar broker.
We provide a CLI that calls this REST endpoint for you.

When triggering offload, you must specify the maximum size, in bytes, of the backlog that will be retained locally in BookKeeper. The offload mechanism will offload segments from the start of the topic backlog until this condition is met.

```bash

$ bin/pulsar-admin topics offload --size-threshold 10M my-tenant/my-namespace/topic1
Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1

```

The command that triggers an offload does not wait until the offload operation has completed. To check the status of the offload, use `offload-status`.

```bash

$ bin/pulsar-admin topics offload-status my-tenant/my-namespace/topic1
Offload is currently running

```

To wait for offload to complete, add the `-w` flag.

```bash

$ bin/pulsar-admin topics offload-status -w my-tenant/my-namespace/topic1
Offload was a success

```

If there is an error offloading, the error will be propagated to the `offload-status` command.

```bash

$ bin/pulsar-admin topics offload-status persistent://public/default/topic1
Error in offload
null

Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=

```

diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/deploy-aws.md b/site2/website/versioned_docs/version-2.10.1-deprecated/deploy-aws.md
deleted file mode 100644
index 5497aadd7865f8..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/deploy-aws.md
+++ /dev/null
@@ -1,271 +0,0 @@
---
id: deploy-aws
title: Deploying a Pulsar cluster on AWS using Terraform and Ansible
sidebar_label: "Amazon Web Services"
original_id: deploy-aws
---

> For instructions on deploying a single Pulsar cluster manually rather than using Terraform and Ansible, see [Deploying a Pulsar cluster on bare metal](deploy-bare-metal.md). For instructions on manually deploying a multi-cluster Pulsar instance, see [Deploying a Pulsar instance on bare metal](deploy-bare-metal-multi-cluster.md).

One of the easiest ways to get a Pulsar [cluster](reference-terminology.md#cluster) running on [Amazon Web Services](https://aws.amazon.com/) (AWS) is to use the [Terraform](https://terraform.io) infrastructure provisioning tool and the [Ansible](https://www.ansible.com) server automation tool. Terraform can create the resources necessary for running the Pulsar cluster---[EC2](https://aws.amazon.com/ec2/) instances, networking and security infrastructure, etc.---while Ansible can install and run Pulsar on the provisioned resources.
- -## Requirements and setup - -In order to install a Pulsar cluster on AWS using Terraform and Ansible, you need to prepare the following things: - -* An [AWS account](https://aws.amazon.com/account/) and the [`aws`](https://aws.amazon.com/cli/) command-line tool -* Python and [pip](https://pip.pypa.io/en/stable/) -* The [`terraform-inventory`](https://github.com/adammck/terraform-inventory) tool, which enables Ansible to use Terraform artifacts - -You also need to make sure that you are currently logged into your AWS account via the `aws` tool: - -```bash - -$ aws configure - -``` - -## Installation - -You can install Ansible on Linux or macOS using pip. - -```bash - -$ pip install ansible - -``` - -You can install Terraform using the instructions [here](https://learn.hashicorp.com/tutorials/terraform/install-cli). - -You also need to have the Terraform and Ansible configuration for Pulsar locally on your machine. You can find them in the [GitHub repository](https://github.com/apache/pulsar) of Pulsar, which you can fetch using Git commands: - -```bash - -$ git clone https://github.com/apache/pulsar -$ cd pulsar/deployment/terraform-ansible/aws - -``` - -## SSH setup - -> If you already have an SSH key and want to use it, you can skip the step of generating an SSH key and update `private_key_file` setting -> in `ansible.cfg` file and `public_key_path` setting in `terraform.tfvars` file. -> -> For example, if you already have a private SSH key in `~/.ssh/pulsar_aws` and a public key in `~/.ssh/pulsar_aws.pub`, -> follow the steps below: -> -> 1. update `ansible.cfg` with following values: -> - -> ```shell -> -> private_key_file=~/.ssh/pulsar_aws -> -> -> ``` - -> -> 2. update `terraform.tfvars` with following values: -> - -> ```shell -> -> public_key_path=~/.ssh/pulsar_aws.pub -> -> -> ``` - - -In order to create the necessary AWS resources using Terraform, you need to create an SSH key. Enter the following commands to create a private SSH key in `~/.ssh/id_rsa` and a public key in `~/.ssh/id_rsa.pub`: - -```bash - -$ ssh-keygen -t rsa - -``` - -Do *not* enter a passphrase (hit **Enter** instead when the prompt comes out). Enter the following command to verify that a key has been created: - -```bash - -$ ls ~/.ssh -id_rsa id_rsa.pub - -``` - -## Create AWS resources using Terraform - -To start building AWS resources with Terraform, you need to install all Terraform dependencies. Enter the following command: - -```bash - -$ terraform init -# This will create a .terraform folder - -``` - -After that, you can apply the default Terraform configuration by entering this command: - -```bash - -$ terraform apply - -``` - -Then you see this prompt below: - -```bash - -Do you want to perform these actions? - Terraform will perform the actions described above. - Only 'yes' will be accepted to approve. - - Enter a value: - -``` - -Type `yes` and hit **Enter**. Applying the configuration could take several minutes. When the configuration applying finishes, you can see `Apply complete!` along with some other information, including the number of resources created. - -### Apply a non-default configuration - -You can apply a non-default Terraform configuration by changing the values in the `terraform.tfvars` file. The following variables are available: - -Variable name | Description | Default -:-------------|:------------|:------- -`public_key_path` | The path of the public key that you have generated. 
| `~/.ssh/id_rsa.pub`
`region` | The AWS region in which the Pulsar cluster runs | `us-west-2`
`availability_zone` | The AWS availability zone in which the Pulsar cluster runs | `us-west-2a`
`aws_ami` | The [Amazon Machine Image](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html) (AMI) that the cluster uses | `ami-9fa343e7`
`num_zookeeper_nodes` | The number of [ZooKeeper](https://zookeeper.apache.org) nodes in the ZooKeeper cluster | 3
`num_bookie_nodes` | The number of bookies that run in the cluster | 3
`num_broker_nodes` | The number of Pulsar brokers that run in the cluster | 2
`num_proxy_nodes` | The number of Pulsar proxies that run in the cluster | 1
`base_cidr_block` | The root [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) that network assets use for the cluster | `10.0.0.0/16`
`instance_types` | The EC2 instance types to be used. This variable is a map with four keys: `zookeeper` for the ZooKeeper instances, `bookie` for the BookKeeper bookies, and `broker` and `proxy` for Pulsar brokers and proxies | `t2.small` (ZooKeeper), `i3.xlarge` (BookKeeper) and `c5.2xlarge` (Brokers/Proxies)

### What is installed

When you run the Ansible playbook, the following AWS resources are used:

* 9 total [Elastic Compute Cloud](https://aws.amazon.com/ec2) (EC2) instances running the [ami-9fa343e7](https://access.redhat.com/articles/3135091) Amazon Machine Image (AMI), which runs [Red Hat Enterprise Linux (RHEL) 7.4](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/7.4_release_notes/index). By default, that includes:
  * 3 small VMs for ZooKeeper ([t3.small](https://www.ec2instances.info/?selected=t3.small) instances)
  * 3 larger VMs for BookKeeper [bookies](reference-terminology.md#bookie) ([i3.xlarge](https://www.ec2instances.info/?selected=i3.xlarge) instances)
  * 2 larger VMs for Pulsar [brokers](reference-terminology.md#broker) ([c5.2xlarge](https://www.ec2instances.info/?selected=c5.2xlarge) instances)
  * 1 larger VM for the Pulsar [proxy](reference-terminology.md#proxy) ([c5.2xlarge](https://www.ec2instances.info/?selected=c5.2xlarge) instances)
* An EC2 [security group](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html)
* A [virtual private cloud](https://aws.amazon.com/vpc/) (VPC) for security
* An [API Gateway](https://aws.amazon.com/api-gateway/) for connections from the outside world
* A [route table](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Route_Tables.html) for the Pulsar cluster's VPC
* A [subnet](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html) for the VPC

All EC2 instances for the cluster run in the [us-west-2](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html) region.

### Fetch your Pulsar connection URL

When you apply the Terraform configuration by entering the command `terraform apply`, Terraform outputs a value for the `pulsar_service_url`.
The value should look something like this:

```

pulsar://pulsar-elb-1800761694.us-west-2.elb.amazonaws.com:6650

```

You can fetch that value at any time by entering the command `terraform output pulsar_service_url` or parsing the `terraform.tfstate` file (which is JSON, even though the filename does not reflect that):

```bash

$ cat terraform.tfstate | jq .modules[0].outputs.pulsar_service_url.value

```

### Destroy your cluster

At any point, you can destroy all AWS resources associated with your cluster using Terraform's `destroy` command:

```bash

$ terraform destroy

```

## Set up disks

Before you run the Pulsar playbook, you need to mount the disks to the correct directories on those bookie nodes. Since different types of machines have different disk layouts, you need to update the task defined in the `setup-disk.yaml` file after changing the `instance_types` in your Terraform config.

To set up disks on bookie nodes, enter this command:

```bash

$ ansible-playbook \
  --user='ec2-user' \
  --inventory=`which terraform-inventory` \
  setup-disk.yaml

```

After that, the disks are mounted under `/mnt/journal` as the journal disk, and `/mnt/storage` as the ledger disk.
Remember to enter this command only once. If you attempt to enter this command again after you have run the Pulsar playbook, your disks might potentially be erased again, causing the bookies to fail to start up.

## Run the Pulsar playbook

Once you have created the necessary AWS resources using Terraform, you can install and run Pulsar on the Terraform-created EC2 instances using Ansible.

(Optional) If you want to use any [built-in IO connectors](io-connectors.md), edit the `Download Pulsar IO packages` task in the `deploy-pulsar.yaml` file and uncomment the connectors you want to use.

To run the playbook, enter this command:

```bash

$ ansible-playbook \
  --user='ec2-user' \
  --inventory=`which terraform-inventory` \
  ../deploy-pulsar.yaml

```

If you have created a private SSH key at a location different from `~/.ssh/id_rsa`, you can specify the different location using the `--private-key` flag in the following command:

```bash

$ ansible-playbook \
  --user='ec2-user' \
  --inventory=`which terraform-inventory` \
  --private-key="~/.ssh/some-non-default-key" \
  ../deploy-pulsar.yaml

```

## Access the cluster

You can now access your running Pulsar using the unique Pulsar connection URL for your cluster, which you can obtain following the instructions [above](#fetching-your-pulsar-connection-url).

For a quick demonstration of accessing the cluster, we can use the Python client for Pulsar and the Python shell. First, install the Pulsar Python module using pip:

```bash

$ pip install pulsar-client

```

Now, open up the Python shell using the `python` command:

```bash

$ python

```

Once you are in the shell, enter the following command:

```python

>>> import pulsar
>>> client = pulsar.Client('pulsar://pulsar-elb-1800761694.us-west-2.elb.amazonaws.com:6650')
# Make sure to use your connection URL
>>> producer = client.create_producer('persistent://public/default/test-topic')
>>> producer.send(b'Hello world')  # message payloads are bytes
>>> client.close()

```

If all of these commands are successful, Pulsar clients can now use your cluster!
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/deploy-bare-metal-multi-cluster.md b/site2/website/versioned_docs/version-2.10.1-deprecated/deploy-bare-metal-multi-cluster.md
deleted file mode 100644
index 9ac1a85580ffa8..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/deploy-bare-metal-multi-cluster.md
+++ /dev/null
@@ -1,452 +0,0 @@
---
id: deploy-bare-metal-multi-cluster
title: Deploying a multi-cluster on bare metal
sidebar_label: "Bare metal multi-cluster"
original_id: deploy-bare-metal-multi-cluster
---

:::tip

1. You can use a single-cluster Pulsar installation in most use cases, such as experimenting with Pulsar or using Pulsar in a startup or in a single team. If you need to run a multi-cluster Pulsar instance, see the [guide](deploy-bare-metal-multi-cluster.md).
2. If you want to use all built-in [Pulsar IO](io-overview.md) connectors, you need to download the `apache-pulsar-io-connectors` package and install `apache-pulsar-io-connectors` under the `connectors` directory in the pulsar directory on every broker node, or on every function-worker node if you run a separate cluster of function workers for [Pulsar Functions](functions-overview.md).
3. If you want to use the [Tiered Storage](concepts-tiered-storage.md) feature in your Pulsar deployment, you need to download the `apache-pulsar-offloaders` package and install `apache-pulsar-offloaders` under the `offloaders` directory in the Pulsar directory on every broker node. For more details on how to configure this feature, refer to the [Tiered storage cookbook](cookbooks-tiered-storage.md).

:::

A Pulsar instance consists of multiple Pulsar clusters working in unison. You can distribute clusters across data centers or geographical regions and replicate the clusters amongst themselves using [geo-replication](administration-geo.md). Deploying a multi-cluster Pulsar instance consists of the following steps:

1. Deploying two separate ZooKeeper quorums: a local quorum for each cluster in the instance and a configuration store quorum for instance-wide tasks
2. Initializing cluster metadata for each cluster
3. Deploying a BookKeeper cluster of bookies in each Pulsar cluster
4. Deploying brokers in each Pulsar cluster

> #### Run Pulsar locally or on Kubernetes?
> This guide shows you how to deploy Pulsar in production in a non-Kubernetes environment. If you want to run a standalone Pulsar cluster on a single machine for development purposes, see the [Setting up a local cluster](getting-started-standalone.md) guide. If you want to run Pulsar on [Kubernetes](https://kubernetes.io), see the [Pulsar on Kubernetes](deploy-kubernetes.md) guide, which includes sections on running Pulsar on Kubernetes, on Google Kubernetes Engine, and on Amazon Web Services.

## System requirement

Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. You need to install 64-bit JRE/JDK 8 or later; JRE/JDK 11 is recommended.

:::note

Broker is only supported on 64-bit JVM.
:::

## Install Pulsar

To get started running Pulsar, download a binary tarball release in one of the following ways:

* by clicking the link below and downloading the release from an Apache mirror:

  * Pulsar @pulsar:version@ binary release

* from the Pulsar [downloads page](pulsar:download_page_url)
* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
* using [wget](https://www.gnu.org/software/wget):

  ```shell
  $ wget 'https://www.apache.org/dyn/mirrors/mirrors.cgi?action=download&filename=pulsar/pulsar-@pulsar:version@/apache-pulsar-@pulsar:version@-bin.tar.gz' -O apache-pulsar-@pulsar:version@-bin.tar.gz
  ```

Once you download the tarball, untar it and `cd` into the resulting directory:

```bash
$ tar xvfz apache-pulsar-@pulsar:version@-bin.tar.gz
$ cd apache-pulsar-@pulsar:version@
```

The Pulsar binary package initially contains the following directories:

Directory | Contains
:---------|:--------
`bin` | [Command-line tools](reference-cli-tools.md) of Pulsar, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](/tools/pulsar-admin/)
`conf` | Configuration files for Pulsar, including for [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more
`examples` | A Java JAR file containing example [Pulsar Functions](functions-overview.md)
`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files that Pulsar uses
`licenses` | License files, in `.txt` form, for various components of the Pulsar codebase

The following directories are created once you begin running Pulsar:

Directory | Contains
:---------|:--------
`data` | The data storage directory that ZooKeeper and BookKeeper use
`instances` | Artifacts created for [Pulsar Functions](functions-overview.md)
`logs` | Logs that the installation creates

## Deploy ZooKeeper

Each Pulsar instance relies on two separate ZooKeeper quorums.

* Local ZooKeeper operates at the cluster level and provides cluster-specific configuration management and coordination. Each Pulsar cluster needs a dedicated ZooKeeper cluster.
* Configuration Store operates at the instance level and provides configuration management for the entire system (and thus across clusters). You can use an independent cluster of machines, or the same machines that local ZooKeeper uses, to provide the configuration store quorum.

### Deploy local ZooKeeper

ZooKeeper manages a variety of essential coordination-related and configuration-related tasks for Pulsar.

You need to stand up one local ZooKeeper cluster per Pulsar cluster for deploying a Pulsar instance.

To begin, add all ZooKeeper servers to the quorum configuration specified in the [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file. Add a `server.N` line for each node in the cluster to the configuration, where `N` is the number of the ZooKeeper node.
The following is an example for a three-node cluster:

```properties
server.1=zk1.us-west.example.com:2888:3888
server.2=zk2.us-west.example.com:2888:3888
server.3=zk3.us-west.example.com:2888:3888
```

On each host, you need to specify the ID of the node in the `myid` file, which is in the `data/zookeeper` folder of each server by default (you can change the file location via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter).

:::tip

See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed information on `myid` and more.

:::

On a ZooKeeper server at `zk1.us-west.example.com`, for example, you could set the `myid` value like this:

```shell
$ mkdir -p data/zookeeper
$ echo 1 > data/zookeeper/myid
```

On `zk2.us-west.example.com` the command looks like `echo 2 > data/zookeeper/myid` and so on.

Once you add each server to the `zookeeper.conf` configuration and each server has the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:

```shell
$ bin/pulsar-daemon start zookeeper
```

### Deploy the configuration store

The ZooKeeper cluster configured and started up in the section above is a local ZooKeeper cluster that you can use to manage a single Pulsar cluster. In addition to a local cluster, however, a full Pulsar instance also requires a configuration store for handling some instance-level configuration and coordination tasks.

If you deploy a single-cluster instance, you do not need a separate cluster for the configuration store. If, however, you deploy a multi-cluster instance, you should stand up a separate ZooKeeper cluster for configuration tasks.

#### Single-cluster Pulsar instance

If your Pulsar instance consists of just one cluster, then you can deploy a configuration store on the same machines as the local ZooKeeper quorum but run on different TCP ports.

To deploy a ZooKeeper configuration store in a single-cluster instance, add the same ZooKeeper servers as the local quorum uses. You need to use the configuration file in [`conf/global_zookeeper.conf`](reference-configuration.md#configuration-store) using the same method as for [local ZooKeeper](#deploy-local-zookeeper), but make sure to use a different port (2181 is the default for ZooKeeper). The following is an example that uses port 2184 for a three-node ZooKeeper cluster:

```properties
clientPort=2184
server.1=zk1.us-west.example.com:2185:2186
server.2=zk2.us-west.example.com:2185:2186
server.3=zk3.us-west.example.com:2185:2186
```

As before, create the `myid` files for each server in `data/global-zookeeper/myid`.

#### Multi-cluster Pulsar instance

When you deploy a global Pulsar instance, with clusters distributed across different geographical regions, the configuration store serves as a highly available and strongly consistent metadata store that can tolerate failures and partitions spanning whole regions.

The key here is to make sure the ZK quorum members are spread across at least 3 regions, and that other regions run as observers.

Again, given the very low expected load on the configuration store servers, you can share the same hosts used for the local ZooKeeper quorum.

For example, assume a Pulsar instance with the following clusters: `us-west`, `us-east`, `us-central`, `eu-central`, `ap-south`.
Also assume that each cluster has its own local ZK servers, named as follows:

```
zk[1-3].${CLUSTER}.example.com
```

In this scenario, you want to pick the quorum participants from a few clusters and let all the others be ZK observers. For example, to form a 7-server quorum, you can pick 3 servers from `us-west`, 2 from `us-central` and 2 from `us-east`.

This method guarantees that writes to the configuration store are possible even if one of these regions is unreachable.

The ZK configuration in all the servers looks like this:

```properties
clientPort=2184
server.1=zk1.us-west.example.com:2185:2186
server.2=zk2.us-west.example.com:2185:2186
server.3=zk3.us-west.example.com:2185:2186
server.4=zk1.us-central.example.com:2185:2186
server.5=zk2.us-central.example.com:2185:2186
server.6=zk3.us-central.example.com:2185:2186:observer
server.7=zk1.us-east.example.com:2185:2186
server.8=zk2.us-east.example.com:2185:2186
server.9=zk3.us-east.example.com:2185:2186:observer
server.10=zk1.eu-central.example.com:2185:2186:observer
server.11=zk2.eu-central.example.com:2185:2186:observer
server.12=zk3.eu-central.example.com:2185:2186:observer
server.13=zk1.ap-south.example.com:2185:2186:observer
server.14=zk2.ap-south.example.com:2185:2186:observer
server.15=zk3.ap-south.example.com:2185:2186:observer
```

Additionally, ZK observers need to have the following parameter:

```properties
peerType=observer
```

##### Start the service

Once your configuration store configuration is in place, you can start up the service using [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon):

```shell
$ bin/pulsar-daemon start configuration-store
```

## Cluster metadata initialization

Once you set up the cluster-specific ZooKeeper and configuration store quorums for your instance, you need to write some metadata to ZooKeeper for each cluster in your instance. **You only need to write this metadata once.**

You can initialize this metadata using the [`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata) command of the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool. The following is an example:

```shell
$ bin/pulsar initialize-cluster-metadata \
  --cluster us-west \
  --metadata-store zk:zk1.us-west.example.com:2181,zk2.us-west.example.com:2181/my-chroot-path \
  --configuration-metadata-store zk:zk1.us-west.example.com:2181,zk2.us-west.example.com:2181/my-chroot-path \
  --web-service-url http://pulsar.us-west.example.com:8080/ \
  --web-service-url-tls https://pulsar.us-west.example.com:8443/ \
  --broker-service-url pulsar://pulsar.us-west.example.com:6650/ \
  --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651/
```

As you can see from the example above, you need to specify the following:

* The name of the cluster
* The local metadata store connection string for the cluster
* The configuration store connection string for the entire instance
* The web service URL for the cluster
* A broker service URL enabling interaction with the [brokers](reference-terminology.md#broker) in the cluster

If you use [TLS](security-tls-transport.md), you also need to specify a TLS web service URL for the cluster as well as a TLS broker service URL for the brokers in the cluster.

Make sure to run `initialize-cluster-metadata` for each cluster in your instance.
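Repeat the command for every other cluster in the instance, pointing `--metadata-store` at that cluster's own local ZooKeeper quorum while keeping `--configuration-metadata-store` pointed at the shared configuration store. The following is a sketch for a hypothetical second cluster `us-east`; the hostnames are assumptions following the naming scheme above, and the configuration store here is the shared quorum listening on port 2184:

```bash
# Initialize metadata for a second cluster in the same instance.
# Local metadata store: the us-east ZooKeeper quorum.
# Configuration store: the shared, instance-wide quorum (port 2184).
$ bin/pulsar initialize-cluster-metadata \
  --cluster us-east \
  --metadata-store zk:zk1.us-east.example.com:2181,zk2.us-east.example.com:2181/my-chroot-path \
  --configuration-metadata-store zk:zk1.us-west.example.com:2184,zk2.us-west.example.com:2184/my-chroot-path \
  --web-service-url http://pulsar.us-east.example.com:8080/ \
  --web-service-url-tls https://pulsar.us-east.example.com:8443/ \
  --broker-service-url pulsar://pulsar.us-east.example.com:6650/ \
  --broker-service-url-tls pulsar+ssl://pulsar.us-east.example.com:6651/
```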
## Deploy BookKeeper

BookKeeper provides [persistent message storage](concepts-architecture-overview.md#persistent-storage) for Pulsar.

Each Pulsar cluster needs its own cluster of bookies. The BookKeeper cluster shares a local ZooKeeper quorum with the Pulsar cluster.

### Configure bookies

You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. The most important aspect of configuring each bookie is ensuring that the [`zkServers`](reference-configuration.md#bookkeeper-zkServers) parameter is set to the connection string for the local ZooKeeper quorum of the Pulsar cluster.

### Start bookies

You can start a bookie in two ways: in the foreground or as a background daemon.

To start a bookie in the background, use the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:

```bash
$ bin/pulsar-daemon start bookie
```

You can verify that the bookie works properly using the `bookiesanity` command for the [BookKeeper shell](reference-cli-tools.md#bookkeeper-shell):

```bash
$ bin/bookkeeper shell bookiesanity
```

This command creates a new ledger on the local bookie, writes a few entries, reads them back, and finally deletes the ledger.

After you have started all bookies, you can use the `simpletest` command for the [BookKeeper shell](reference-cli-tools.md#shell) on any bookie node to verify that all bookies in the cluster are running.

```bash
$ bin/bookkeeper shell simpletest --ensemble <num-bookies> --writeQuorum <num-bookies> --ackQuorum <num-bookies> --numEntries <num-entries>
```

Bookie hosts are responsible for storing message data on disk. For bookies to provide optimal performance, a suitable hardware configuration is essential. The following are key dimensions for bookie hardware capacity:

* Disk I/O capacity read/write
* Storage capacity

Message entries written to bookies are always synced to disk before returning an acknowledgement to the Pulsar broker. To ensure low write latency, BookKeeper is designed to use multiple devices:

* A **journal** to ensure durability. For sequential writes, having fast [fsync](https://linux.die.net/man/2/fsync) operations on bookie hosts is critical. Typically, small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) should suffice, or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache. Both solutions can reach fsync latency of ~0.4 ms.
* A **ledger storage device**, which is where data is stored until all consumers acknowledge the message. Writes happen in the background, so write I/O is not a big concern. Reads happen sequentially most of the time, and the backlog is read only when consumers drain it. To store large amounts of data, a typical configuration involves multiple HDDs with a RAID controller.

## Deploy brokers

Once you set up ZooKeeper, initialize cluster metadata, and spin up BookKeeper bookies, you can deploy brokers.

### Broker configuration

You can configure brokers using the [`conf/broker.conf`](reference-configuration.md#broker) configuration file.

The most important element of broker configuration is ensuring that each broker is aware of its local ZooKeeper quorum as well as the configuration store quorum.
Make sure that you set the [`metadataStoreUrl`](reference-configuration.md#broker) parameter to reflect the local quorum and the [`configurationMetadataStoreUrl`](reference-configuration.md#broker) parameter to reflect the configuration store quorum (although you need to specify only those ZooKeeper servers located in the same cluster).

You also need to specify the name of the [cluster](reference-terminology.md#cluster) to which the broker belongs using the [`clusterName`](reference-configuration.md#broker-clusterName) parameter. In addition, you need to match the broker and web service ports provided when you initialized the metadata of the cluster (especially when you use a different port from the default).

The following is an example configuration:

```properties
# Local ZooKeeper servers
metadataStoreUrl=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181

# Configuration store quorum connection string.
configurationMetadataStoreUrl=zk1.us-west.example.com:2184,zk2.us-west.example.com:2184,zk3.us-west.example.com:2184

clusterName=us-west

# Broker data port
brokerServicePort=6650

# Broker data port for TLS
brokerServicePortTls=6651

# Port to use to serve HTTP requests
webServicePort=8080

# Port to use to serve HTTPS requests
webServicePortTls=8443
```

### Broker hardware

Pulsar brokers do not require any special hardware since they do not use the local disk. Choose fast CPUs and a 10 Gbps [NIC](https://en.wikipedia.org/wiki/Network_interface_controller) so that the software can take full advantage of them.

### Start the broker service

You can start a broker in the background by using [nohup](https://en.wikipedia.org/wiki/Nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:

```shell
$ bin/pulsar-daemon start broker
```

You can also start brokers in the foreground by using [`pulsar broker`](reference-cli-tools.md#broker):

```shell
$ bin/pulsar broker
```

## Service discovery

[Clients](getting-started-clients.md) connecting to Pulsar brokers need to communicate with an entire Pulsar instance using a single URL.

You can use your own service discovery system. If you use your own system, you only need to satisfy one requirement: when a client performs an HTTP request to an [endpoint](reference-configuration.md) for a Pulsar cluster, such as `http://pulsar.us-west.example.com:8080`, the client needs to be redirected to some active broker in the desired cluster, whether via DNS, an HTTP or IP redirect, or some other means.

> **Service discovery already provided by many scheduling systems**
> Many large-scale deployment systems, such as [Kubernetes](deploy-kubernetes.md), have service discovery systems built in. If you run Pulsar on such a system, you may not need to provide your own service discovery mechanism.

## Admin client and verification

At this point your Pulsar instance should be ready to use. You can now configure client machines that can serve as [administrative clients](admin-api-overview.md) for each cluster. You can use the [`conf/client.conf`](reference-configuration.md#client) configuration file to configure admin clients.
The most important thing is that you point the [`serviceUrl`](reference-configuration.md#client-serviceUrl) parameter to the correct service URL for the cluster:

```properties
serviceUrl=http://pulsar.us-west.example.com:8080/
```

## Provision new tenants

Pulsar is built as a fundamentally multi-tenant system.

If a new tenant wants to use the system, you need to create that tenant first. You can create a new tenant by using the [`pulsar-admin`](reference-pulsar-admin.md#tenants) CLI tool:

```shell
$ bin/pulsar-admin tenants create test-tenant \
  --allowed-clusters us-west \
  --admin-roles test-admin-role
```

In this command, users identified with the `test-admin-role` role can administer the configuration for the `test-tenant` tenant. The `test-tenant` tenant can only use the `us-west` cluster. From now on, this tenant can manage its resources.

Once you create a tenant, you need to create [namespaces](reference-terminology.md#namespace) for topics within that tenant.

The first step is to create a namespace. A namespace is an administrative unit that can contain many topics. A common practice is to create a namespace for each different use case from a single tenant.

```shell
$ bin/pulsar-admin namespaces create test-tenant/ns1
```

##### Test producer and consumer

Everything is now ready to send and receive messages. The quickest way to test the system is through the [`pulsar-perf`](reference-cli-tools.md#pulsar-perf) client tool.

You can use a topic in the namespace that you have just created. Topics are automatically created the first time a producer or a consumer tries to use them.

The topic name in this case could be:

```http
persistent://test-tenant/ns1/my-topic
```

Start a consumer that creates a subscription on the topic and waits for messages:

```shell
$ bin/pulsar-perf consume persistent://test-tenant/ns1/my-topic
```

Start a producer that publishes messages at a fixed rate and reports stats every 10 seconds:

```shell
$ bin/pulsar-perf produce persistent://test-tenant/ns1/my-topic
```

To report the topic stats:

```shell
$ bin/pulsar-admin topics stats persistent://test-tenant/ns1/my-topic
```

diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/deploy-bare-metal.md b/site2/website/versioned_docs/version-2.10.1-deprecated/deploy-bare-metal.md
deleted file mode 100644
index 25dc04458613b5..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/deploy-bare-metal.md
+++ /dev/null
@@ -1,568 +0,0 @@
---
id: deploy-bare-metal
title: Deploy a cluster on bare metal
sidebar_label: "Bare metal"
original_id: deploy-bare-metal
---

:::tip

1. You can use a single-cluster Pulsar installation in most use cases, such as experimenting with Pulsar or using Pulsar in a startup or in a single team. If you need to run a multi-cluster Pulsar instance, see the [guide](deploy-bare-metal-multi-cluster.md).
2. If you want to use all built-in [Pulsar IO](io-overview.md) connectors, you need to download the `apache-pulsar-io-connectors` package and install `apache-pulsar-io-connectors` under the `connectors` directory in the pulsar directory on every broker node, or on every function-worker node if you run a separate cluster of function workers for [Pulsar Functions](functions-overview.md).
3. If you want to use the [Tiered Storage](concepts-tiered-storage.md) feature in your Pulsar deployment, you need to download the `apache-pulsar-offloaders` package and install `apache-pulsar-offloaders` under the `offloaders` directory in the Pulsar directory on every broker node. For more details on how to configure this feature, refer to the [Tiered storage cookbook](cookbooks-tiered-storage.md).

:::

Deploying a Pulsar cluster consists of the following steps:

1. Deploy a [ZooKeeper](#deploy-a-zookeeper-cluster) cluster (optional)
2. Initialize [cluster metadata](#initialize-cluster-metadata)
3. Deploy a [BookKeeper](#deploy-a-bookkeeper-cluster) cluster
4. Deploy one or more Pulsar [brokers](#deploy-pulsar-brokers)

## Preparation

### Requirements

Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install 64-bit JRE/JDK 8 or later; JRE/JDK 11 is recommended.

:::tip

You can reuse existing ZooKeeper clusters.

:::

To run Pulsar on bare metal, the following configuration is recommended:

* At least 6 Linux machines or VMs
  * 3 for running [ZooKeeper](https://zookeeper.apache.org)
  * 3 for running a Pulsar broker and a [BookKeeper](https://bookkeeper.apache.org) bookie
* A single [DNS](https://en.wikipedia.org/wiki/Domain_Name_System) name covering all of the Pulsar broker hosts

:::note

* Broker is only supported on 64-bit JVM.
* If you do not have enough machines, or you want to test Pulsar in cluster mode (and expand the cluster later), you can fully deploy Pulsar on a single node on which ZooKeeper, a bookie, and a broker all run.
* If you do not have a DNS server, you can use the multi-host format in the service URL instead.

:::

Each machine in your cluster needs to have [Java 8](https://adoptium.net/?variant=openjdk8) or [Java 11](https://adoptium.net/?variant=openjdk11) installed.

The following is a diagram showing the basic setup:

![alt-text](/assets/pulsar-basic-setup.png)

In this diagram, connecting clients need to communicate with the Pulsar cluster using a single URL. In this case, `pulsar-cluster.acme.com` abstracts over all of the message-handling brokers. Pulsar message brokers run on machines alongside BookKeeper bookies; brokers and bookies, in turn, rely on ZooKeeper.

### Hardware considerations

If you deploy a Pulsar cluster, keep the following basic recommendations in mind when you do capacity planning.

#### ZooKeeper

For machines running ZooKeeper, it is recommended to use less powerful machines or VMs. Pulsar uses ZooKeeper only for periodic coordination-related and configuration-related tasks, not for basic operations. If you run Pulsar on [Amazon Web Services](https://aws.amazon.com/) (AWS), for example, a [t2.small](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html) instance would likely suffice.

#### Bookies and Brokers

For machines running a bookie and a Pulsar broker, more powerful machines are required. For an AWS deployment, for example, [i3.4xlarge](https://aws.amazon.com/blogs/aws/now-available-i3-instances-for-demanding-io-intensive-applications/) instances may be appropriate.
On those machines you can use the following:

* Fast CPUs and 10Gbps [NIC](https://en.wikipedia.org/wiki/Network_interface_controller) (for Pulsar brokers)
* Small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache (for BookKeeper bookies)

#### Hardware recommendations

To start a Pulsar instance, below are the minimum and the recommended hardware settings.

A cluster consists of 3 broker nodes, 3 bookie nodes, and 3 ZooKeeper nodes. The following recommendations are suitable for one node.

- The minimum hardware settings (**250 Pulsar topics**)

  Component | CPU | Memory | Storage | Throughput | Rate
  ---|---|---|---|---|---
  Broker | 0.2 | 256 MB | / | Write throughput: 3 MB/s<br /><br />Read throughput: 6 MB/s | Write rate: 350 entries/s<br /><br />Read rate: 650 entries/s
  Bookie | 0.2 | 256 MB | Journal: 8 GB, PD-SSD<br /><br />Ledger: 16 GB, PD-STANDARD | Write throughput: 2 MB/s<br /><br />Read throughput: 2 MB/s | Write rate: 200 entries/s<br /><br />Read rate: 200 entries/s
  ZooKeeper | 0.05 | 256 MB | Log: 8 GB, PD-SSD<br /><br />Data: 2 GB, PD-STANDARD | / | /

- The recommended hardware settings (**1000 Pulsar topics**)

  Component | CPU | Memory | Storage | Throughput | Rate
  ---|---|---|---|---|---
  Broker | 8 | 8 GB | / | Write throughput: 100 MB/s<br /><br />Read throughput: 200 MB/s | Write rate: 10,000 entries/s<br /><br />Read rate: 20,000 entries/s
  Bookie | 4 | 8 GB | Journal: 256 GB, PD-SSD<br /><br />Ledger: 2 TB, PD-STANDARD | Write throughput: 75 MB/s<br /><br />Read throughput: 75 MB/s | Write rate: 7,500 entries/s<br /><br />Read rate: 7,500 entries/s
  ZooKeeper | 1 | 2 GB | Log: 64 GB, PD-SSD<br /><br />Data: 256 GB, PD-STANDARD | / | /

## Install the Pulsar binary package

> You need to install the Pulsar binary package on each machine in the cluster, including machines running ZooKeeper and BookKeeper.

To get started deploying a Pulsar cluster on bare metal, you need to download a binary tarball release in one of the following ways:

* By clicking on the link below directly, which automatically triggers a download:
  * Pulsar @pulsar:version@ binary release
* From the Pulsar [downloads page](pulsar:download_page_url)
* From the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) on GitHub
* Using [wget](https://www.gnu.org/software/wget):

```bash
$ wget pulsar:binary_release_url
```

Once you download the tarball, untar it and `cd` into the resulting directory:

```bash
$ tar xvzf apache-pulsar-@pulsar:version@-bin.tar.gz
$ cd apache-pulsar-@pulsar:version@
```

The extracted directory contains the following subdirectories:

Directory | Contains
:---------|:--------
`bin` | [Command-line tools](reference-cli-tools.md) of Pulsar, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](/tools/pulsar-admin/)
`conf` | Configuration files for Pulsar, including for [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more
`data` | The data storage directory that ZooKeeper and BookKeeper use
`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files that Pulsar uses
`logs` | Logs that the installation creates

## [Install Builtin Connectors (optional)](standalone.md#install-builtin-connectors-optional)

> Since Pulsar release `2.1.0-incubating`, Pulsar provides a separate binary distribution containing all the `builtin` connectors.
> To enable the `builtin` connectors (optional), you can follow the instructions below.

To use `builtin` connectors, you need to download the connectors tarball release on every broker node in one of the following ways:

* by clicking the link below and downloading the release from an Apache mirror:

  * Pulsar IO Connectors @pulsar:version@ release

* from the Pulsar [downloads page](pulsar:download_page_url)
* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
* using [wget](https://www.gnu.org/software/wget):

  ```shell
  $ wget pulsar:connector_release_url/{connector}-@pulsar:version@.nar
  ```

Once you download the `.nar` file, copy the file to the `connectors` directory in the pulsar directory. For example, if you download the connector file `pulsar-io-aerospike-@pulsar:version@.nar`:

```bash
$ mkdir connectors
$ mv pulsar-io-aerospike-@pulsar:version@.nar connectors

$ ls connectors
pulsar-io-aerospike-@pulsar:version@.nar
...
```

## [Install Tiered Storage Offloaders (optional)](standalone.md#install-tiered-storage-offloaders-optional)

> Since Pulsar release `2.2.0`, Pulsar releases a separate binary distribution containing the tiered storage offloaders.
> If you want to enable the tiered storage feature, follow the instructions below; otherwise you can skip this section for now.
To use tiered storage offloaders, you need to download the offloaders tarball release on every broker node in one of the following ways:

* by clicking the link below and downloading the release from an Apache mirror:

  * Pulsar Tiered Storage Offloaders @pulsar:version@ release

* from the Pulsar [downloads page](pulsar:download_page_url)
* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
* using [wget](https://www.gnu.org/software/wget):

  ```shell
  $ wget pulsar:offloader_release_url
  ```

Once you download the tarball, untar the offloaders package in the Pulsar directory and copy the extracted offloaders into the `offloaders` directory in the Pulsar directory:

```bash
$ tar xvfz apache-pulsar-offloaders-@pulsar:version@-bin.tar.gz

# you can find a directory named `apache-pulsar-offloaders-@pulsar:version@` in the pulsar directory
# then copy the offloaders

$ mv apache-pulsar-offloaders-@pulsar:version@/offloaders offloaders

$ ls offloaders
tiered-storage-jcloud-@pulsar:version@.nar
```

For more details on how to configure the tiered storage feature, refer to the [Tiered storage cookbook](cookbooks-tiered-storage.md).

## Deploy a ZooKeeper cluster

> If you already have an existing ZooKeeper cluster and want to use it, you can skip this section.

[ZooKeeper](https://zookeeper.apache.org) manages a variety of essential coordination-related and configuration-related tasks for Pulsar. To deploy a Pulsar cluster, you need to deploy ZooKeeper first. A 3-node ZooKeeper cluster is the recommended configuration. Pulsar does not make heavy use of ZooKeeper, so lightweight machines or VMs should suffice for running ZooKeeper.

To begin, add all ZooKeeper servers to the configuration specified in [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) (in the Pulsar directory that you created [above](#install-the-pulsar-binary-package)). The following is an example:

```properties
server.1=zk1.us-west.example.com:2888:3888
server.2=zk2.us-west.example.com:2888:3888
server.3=zk3.us-west.example.com:2888:3888
```

> If you only have one machine on which to deploy Pulsar, you only need to add one server entry in the configuration file.

> If your machines are behind NAT, use 0.0.0.0 as the server entry for the local address. If a node behind NAT uses its external IP in its own configuration, the ZooKeeper service won't start because it tries to bind a listener to an external IP that the Linux box does not own. Using 0.0.0.0 binds the listener on all IPs, so that NAT network traffic can reach it.

Example of the configuration on _server.3_:

```properties
server.1=zk1.us-west.example.com:2888:3888
server.2=zk2.us-west.example.com:2888:3888
server.3=0.0.0.0:2888:3888
```

On each host, you need to specify the ID of the node in the `myid` file, which is in the `data/zookeeper` folder of each server by default (you can change the file location via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter).

> See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed information on `myid` and more.

For example, on a ZooKeeper server like `zk1.us-west.example.com`, you can set the `myid` value as follows:

```bash
$ mkdir -p data/zookeeper
$ echo 1 > data/zookeeper/myid
```

On `zk2.us-west.example.com`, the command is `echo 2 > data/zookeeper/myid` and so on.
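If you manage the hosts over SSH, writing the `myid` files can be scripted. The loop below is a hypothetical helper, not a Pulsar tool, assuming passwordless SSH to the `zk1`–`zk3` hostnames above and the Pulsar package untarred at `~/apache-pulsar` on every host:

```bash
# Hypothetical helper: write each node's myid over SSH
# (assumes hostnames zk1..zk3 and Pulsar untarred at ~/apache-pulsar)
for i in 1 2 3; do
  ssh "zk${i}.us-west.example.com" \
    "mkdir -p ~/apache-pulsar/data/zookeeper && echo ${i} > ~/apache-pulsar/data/zookeeper/myid"
done
```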
Once you add each server to the `zookeeper.conf` configuration and have the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:

```bash
$ bin/pulsar-daemon start zookeeper
```

> If you plan to deploy ZooKeeper and a bookie on the same node, start ZooKeeper with a different stats port by configuring `metricsProvider.httpPort` in `zookeeper.conf`.

## Initialize cluster metadata

Once you deploy ZooKeeper for your cluster, you need to write some metadata to ZooKeeper. You only need to write this data **once**.

You can initialize this metadata using the [`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata) command of the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool. This command can be run on any machine in your Pulsar cluster, so the metadata can be initialized from a ZooKeeper, broker, or bookie machine. The following is an example:

```shell
$ bin/pulsar initialize-cluster-metadata \
  --cluster pulsar-cluster-1 \
  --metadata-store zk:zk1.us-west.example.com:2181,zk2.us-west.example.com:2181/my-chroot-path \
  --configuration-metadata-store zk:zk1.us-west.example.com:2181,zk2.us-west.example.com:2181/my-chroot-path \
  --web-service-url http://pulsar.us-west.example.com:8080 \
  --web-service-url-tls https://pulsar.us-west.example.com:8443 \
  --broker-service-url pulsar://pulsar.us-west.example.com:6650 \
  --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651
```

As you can see from the example above, you need to specify the following:

Flag | Description
:----|:-----------
`--cluster` | A name for the cluster
`--metadata-store` | A "local" metadata store connection string for the cluster. This connection string only needs to include *one* machine in the ZooKeeper cluster.
`--configuration-metadata-store` | The configuration metadata store connection string for the entire instance. As with the `--metadata-store` flag, this connection string only needs to include *one* machine in the ZooKeeper cluster.
`--web-service-url` | The web service URL for the cluster, plus a port. This URL should be a standard DNS name. The default port is 8080 (avoid using a different port).
`--web-service-url-tls` | If you use [TLS](security-tls-transport.md), you also need to specify a TLS web service URL for the cluster. The default port is 8443 (avoid using a different port).
`--broker-service-url` | A broker service URL enabling interaction with the brokers in the cluster. This URL should use the same DNS name as the web service URL, but with the `pulsar` scheme instead. The default port is 6650 (avoid using a different port).
`--broker-service-url-tls` | If you use [TLS](security-tls-transport.md), you also need to specify a TLS web service URL for the cluster as well as a TLS broker service URL for the brokers in the cluster. The default port is 6651 (avoid using a different port).
> If you do not have a DNS server, you can use the multi-host format in the service URL with the following settings:
>
> ```shell
> --web-service-url http://host1:8080,host2:8080,host3:8080 \
> --web-service-url-tls https://host1:8443,host2:8443,host3:8443 \
> --broker-service-url pulsar://host1:6650,host2:6650,host3:6650 \
> --broker-service-url-tls pulsar+ssl://host1:6651,host2:6651,host3:6651
> ```

> If you want to use an existing BookKeeper cluster, you can add the `--existing-bk-metadata-service-uri` flag as follows:
>
> ```shell
> --existing-bk-metadata-service-uri "zk+null://zk1:2181;zk2:2181/ledgers" \
> --web-service-url http://host1:8080,host2:8080,host3:8080 \
> --web-service-url-tls https://host1:8443,host2:8443,host3:8443 \
> --broker-service-url pulsar://host1:6650,host2:6650,host3:6650 \
> --broker-service-url-tls pulsar+ssl://host1:6651,host2:6651,host3:6651
> ```
>
> You can obtain the metadata service URI of the existing BookKeeper cluster by using the `bin/bookkeeper shell whatisinstanceid` command. You must enclose the value in double quotes since the multiple metadata service URIs are separated with semicolons.

## Deploy a BookKeeper cluster

[BookKeeper](https://bookkeeper.apache.org) handles all persistent data storage in Pulsar. You need to deploy a cluster of BookKeeper bookies to use Pulsar. You can choose to run a **3-bookie BookKeeper cluster**.

You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. The most important step in configuring bookies for our purposes here is ensuring that [`zkServers`](reference-configuration.md#bookkeeper-zkServers) is set to the connection string for the ZooKeeper cluster. The following is an example:

```properties
zkServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
```

Once you appropriately modify the `zkServers` parameter, you can make any other configuration changes that you require. You can find a full listing of the available BookKeeper configuration parameters [here](reference-configuration.md#bookkeeper). However, consulting the [BookKeeper documentation](http://bookkeeper.apache.org/docs/latest/reference/config/) for a more in-depth guide might be a better choice.

Once you apply the desired configuration in `conf/bookkeeper.conf`, you can start up a bookie on each of your BookKeeper hosts. You can start up each bookie either in the background, using [nohup](https://en.wikipedia.org/wiki/Nohup), or in the foreground.

To start the bookie in the background, use the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:

```bash
$ bin/pulsar-daemon start bookie
```

To start the bookie in the foreground:

```bash
$ bin/pulsar bookie
```

You can verify that a bookie works properly by running the `bookiesanity` command on the [BookKeeper shell](reference-cli-tools.md#shell):

```bash
$ bin/bookkeeper shell bookiesanity
```

This command creates an ephemeral BookKeeper ledger on the local bookie, writes a few entries, reads them back, and finally deletes the ledger.

After you start all the bookies, you can use the `simpletest` command for the [BookKeeper shell](reference-cli-tools.md#shell) on any bookie node to verify that all the bookies in the cluster are up and running.
```bash
$ bin/bookkeeper shell simpletest --ensemble <num-bookies> --writeQuorum <num-bookies> --ackQuorum <num-bookies> --numEntries <num-entries>
```

This command creates a `num-bookies` sized ledger on the cluster, writes a few entries, and finally deletes the ledger.

## Deploy Pulsar brokers

Pulsar brokers are the last thing you need to deploy in your Pulsar cluster. Brokers handle Pulsar messages and provide the administrative interface of Pulsar. A good choice is to run **3 brokers**, one on each machine that already runs a BookKeeper bookie.

### Configure Brokers

The most important element of broker configuration is ensuring that each broker is aware of the ZooKeeper cluster that you have deployed. Ensure that the [`metadataStoreUrl`](reference-configuration.md#broker) and [`configurationMetadataStoreUrl`](reference-configuration.md#broker) parameters are correct. In this case, since you only have one cluster and no separate configuration store setup, the `configurationMetadataStoreUrl` points to the same value as the `metadataStoreUrl`.

```properties
metadataStoreUrl=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
configurationMetadataStoreUrl=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
```

You also need to specify the cluster name (matching the name that you provided when you [initialized the metadata of the cluster](#initialize-cluster-metadata)):

```properties
clusterName=pulsar-cluster-1
```

In addition, you need to match the broker and web service ports provided when you initialized the metadata of the cluster (especially when you use a different port than the default):

```properties
brokerServicePort=6650
brokerServicePortTls=6651
webServicePort=8080
webServicePortTls=8443
```

> If you deploy Pulsar in a one-node cluster, you should update the replication settings in `conf/broker.conf` to `1`.
>
> ```properties
> # Number of bookies to use when creating a ledger
> managedLedgerDefaultEnsembleSize=1
>
> # Number of copies to store for each message
> managedLedgerDefaultWriteQuorum=1
>
> # Number of guaranteed copies (acks to wait before write is complete)
> managedLedgerDefaultAckQuorum=1
> ```

### Enable Pulsar Functions (optional)

If you want to enable [Pulsar Functions](functions-overview.md), you can follow the instructions below:

1. Edit `conf/broker.conf` to enable the functions worker by setting `functionsWorkerEnabled` to `true`.

   ```conf
   functionsWorkerEnabled=true
   ```

2. Edit `conf/functions_worker.yml` and set `pulsarFunctionsCluster` to the cluster name that you provided when you [initialized the metadata of the cluster](#initialize-cluster-metadata).

   ```conf
   pulsarFunctionsCluster: pulsar-cluster-1
   ```

If you want to learn more options about deploying the functions worker, check out [Deploy and manage functions worker](functions-worker.md).

### Start Brokers

You can then provide any other configuration changes that you want in the [`conf/broker.conf`](reference-configuration.md#broker) file. Once you decide on a configuration, you can start up the brokers for your Pulsar cluster. Like ZooKeeper and BookKeeper, you can start brokers either in the foreground or in the background, using nohup.
You can start a broker in the foreground using the [`pulsar broker`](reference-cli-tools.md#pulsar-broker) command:

```bash
$ bin/pulsar broker
```

You can start a broker in the background using the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:

```bash
$ bin/pulsar-daemon start broker
```

Once you successfully start up all the brokers that you intend to use, your Pulsar cluster should be ready to go!

## Connect to the running cluster

Once your Pulsar cluster is up and running, you should be able to connect with it using Pulsar clients. One such client is the [`pulsar-client`](reference-cli-tools.md#pulsar-client) tool, which is included with the Pulsar binary package. The `pulsar-client` tool can publish messages to and consume messages from Pulsar topics and thus provides a simple way to make sure that your cluster runs properly.

To use the `pulsar-client` tool, first modify the client configuration file in [`conf/client.conf`](reference-configuration.md#client) in your binary package. You need to change the values for `webServiceUrl` and `brokerServiceUrl`, substituting `localhost` (which is the default) with the DNS name that you assigned to your broker/bookie hosts. The following is an example:

```properties
webServiceUrl=http://us-west.example.com:8080
brokerServiceUrl=pulsar://us-west.example.com:6650
```

> If you do not have a DNS server, you can specify multiple hosts in the service URL as follows:
>
> ```properties
> webServiceUrl=http://host1:8080,host2:8080,host3:8080
> brokerServiceUrl=pulsar://host1:6650,host2:6650,host3:6650
> ```

Once that is complete, you can publish a message to the Pulsar topic:

```bash
$ bin/pulsar-client produce \
  persistent://public/default/test \
  -n 1 \
  -m "Hello Pulsar"
```

> You may need to use a different cluster name in the topic if you specified a cluster name other than `pulsar-cluster-1`.

This command publishes a single message to the Pulsar topic. In addition, you can subscribe to the Pulsar topic in a different terminal before publishing messages, as below:

```bash
$ bin/pulsar-client consume \
  persistent://public/default/test \
  -n 100 \
  -s "consumer-test" \
  -t "Exclusive"
```

Once you successfully publish the above message to the topic, you should see it in the standard output:

```bash
----- got message -----
Hello Pulsar
```

## Run Functions

> If you have [enabled](#enable-pulsar-functions-optional) Pulsar Functions, you can try out Pulsar Functions now.

Create an ExclamationFunction `exclamation`:

```bash
bin/pulsar-admin functions create \
  --jar examples/api-examples.jar \
  --classname org.apache.pulsar.functions.api.examples.ExclamationFunction \
  --inputs persistent://public/default/exclamation-input \
  --output persistent://public/default/exclamation-output \
  --tenant public \
  --namespace default \
  --name exclamation
```

Check whether the function runs as expected by [triggering](functions-deploying.md#triggering-pulsar-functions) the function:

```bash
bin/pulsar-admin functions trigger --name exclamation --trigger-value "hello world"
```

You should see the following output:

```shell
hello world!
```

diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/deploy-dcos.md b/site2/website/versioned_docs/version-2.10.1-deprecated/deploy-dcos.md
deleted file mode 100644
index 35a0a83d716ade..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/deploy-dcos.md
+++ /dev/null
@@ -1,200 +0,0 @@
---
id: deploy-dcos
title: Deploy Pulsar on DC/OS
sidebar_label: "DC/OS"
original_id: deploy-dcos
---

:::tip

To enable all built-in [Pulsar IO](io-overview.md) connectors in your Pulsar deployment, we recommend you use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image; the former has already bundled [all built-in connectors](io-overview.md#working-with-connectors).

:::

[DC/OS](https://dcos.io/) (the DataCenter Operating System) is a distributed operating system for deploying and managing applications and systems on [Apache Mesos](http://mesos.apache.org/). DC/OS is an open-source tool created and maintained by [Mesosphere](https://mesosphere.com/).

Apache Pulsar is available as a [Marathon Application Group](https://mesosphere.github.io/marathon/docs/application-groups.html), which runs multiple applications as manageable sets.

## Prerequisites

You need to prepare your environment before running Pulsar on DC/OS.

* DC/OS version [1.9](https://docs.mesosphere.com/1.9/) or higher
* A [DC/OS cluster](https://docs.mesosphere.com/1.9/installing/) with at least three agent nodes
* The [DC/OS CLI tool](https://docs.mesosphere.com/1.9/cli/install/) installed
* The [`PulsarGroups.json`](https://github.com/apache/pulsar/blob/master/deployment/dcos/PulsarGroups.json) configuration file from the Pulsar GitHub repo.

  ```bash
  $ curl -O https://raw.githubusercontent.com/apache/pulsar/master/deployment/dcos/PulsarGroups.json
  ```

Each node in the DC/OS-managed Mesos cluster must have at least:

* 4 CPU
* 4 GB of memory
* 60 GB of total persistent disk

Alternatively, you can change the configuration in `PulsarGroups.json` to match the resources of your DC/OS cluster.

## Deploy Pulsar using the DC/OS command interface

You can deploy Pulsar on DC/OS using this command:

```bash
$ dcos marathon group add PulsarGroups.json
```

This command deploys Docker container instances in three groups, which together comprise a Pulsar cluster:

* 3 bookies (1 [bookie](reference-terminology.md#bookie) on each agent node and 1 [bookie recovery](http://bookkeeper.apache.org/docs/latest/admin/autorecovery/) instance)
* 3 Pulsar [brokers](reference-terminology.md#broker) (1 broker on each node and 1 admin instance)
* 1 [Prometheus](http://prometheus.io/) instance and 1 [Grafana](https://grafana.com/) instance

> When you run DC/OS, a ZooKeeper cluster will be running at `master.mesos:2181`, thus you do not have to install or start up ZooKeeper separately.

After executing the `dcos` command above, click the **Services** tab in the DC/OS [GUI interface](https://docs.mesosphere.com/latest/gui/), which you can access at [http://m1.dcos](http://m1.dcos) in this example. You should see several applications during the deployment.

![DC/OS command executed](/assets/dcos_command_execute.png)

![DC/OS command executed2](/assets/dcos_command_execute2.png)

## The BookKeeper group

To monitor the status of the BookKeeper cluster deployment, click the **bookkeeper** group in the parent **pulsar** group.
![DC/OS bookkeeper status](/assets/dcos_bookkeeper_status.png)

At this point, the status of the 3 [bookies](reference-terminology.md#bookie) is green, which means that the bookies have been deployed successfully and are running.

![DC/OS bookkeeper running](/assets/dcos_bookkeeper_run.png)

You can also click each bookie instance to get more detailed information, such as the bookie running log.

![DC/OS bookie log](/assets/dcos_bookie_log.png)

To display information about the BookKeeper in ZooKeeper, you can visit [http://m1.dcos/exhibitor](http://m1.dcos/exhibitor). In this example, 3 bookies are under the `available` directory.

![DC/OS bookkeeper in zk](/assets/dcos_bookkeeper_in_zookeeper.png)

## The Pulsar broker group

Similar to the BookKeeper group above, click **brokers** to check the status of the Pulsar brokers.

![DC/OS broker status](/assets/dcos_broker_status.png)

![DC/OS broker running](/assets/dcos_broker_run.png)

You can also click each broker instance to get more detailed information, such as the broker running log.

![DC/OS broker log](/assets/dcos_broker_log.png)

Broker cluster information in ZooKeeper is also available through the web UI. In this example, you can see that the `loadbalance` and `managed-ledgers` directories have been created.

![DC/OS broker in zk](/assets/dcos_broker_in_zookeeper.png)

## Monitor group

The **monitor** group consists of Prometheus and Grafana.

![DC/OS monitor status](/assets/dcos_monitor_status.png)

### Prometheus

Click the instance of `prom` to get the endpoint of Prometheus, which is `192.168.65.121:9090` in this example.

![DC/OS prom endpoint](/assets/dcos_prom_endpoint.png)

If you click that endpoint, you can see the Prometheus dashboard. All the bookies and brokers are listed on [http://192.168.65.121:9090/targets](http://192.168.65.121:9090/targets).

![DC/OS prom targets](/assets/dcos_prom_targets.png)

### Grafana

Click `grafana` to get the endpoint for Grafana, which is `192.168.65.121:3000` in this example.

![DC/OS grafana endpoint](/assets/dcos_grafana_endpoint.png)

If you click that endpoint, you can access the Grafana dashboard.

![DC/OS grafana targets](/assets/dcos_grafana_dashboard.png)

## Run a simple Pulsar consumer and producer on DC/OS

Now that you have a fully deployed Pulsar cluster, you can run a simple consumer and producer to show Pulsar on DC/OS in action.

### Download and prepare the Pulsar Java tutorial

You can clone the [Pulsar Java tutorial](https://github.com/streamlio/pulsar-java-tutorial) repo. This repo contains a simple Pulsar consumer and producer (you can find more information in the `README` file in the repo).

```bash
$ git clone https://github.com/streamlio/pulsar-java-tutorial
```

Change the `SERVICE_URL` from `pulsar://localhost:6650` to `pulsar://a1.dcos:6650` in both the [`ConsumerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ConsumerTutorial.java) file and the [`ProducerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ProducerTutorial.java) file.

The `pulsar://a1.dcos:6650` endpoint is for the broker service. You can fetch the endpoint details for each broker instance from the DC/OS GUI. `a1.dcos` is a DC/OS client agent that runs a broker, and you can replace it with the client agent IP address.
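If you prefer not to edit the files by hand, the URL substitution can be scripted; the following is a hypothetical one-liner, assuming GNU `sed` and that you run it from the root of the cloned `pulsar-java-tutorial` repo:

```bash
# Swap the tutorial's service URL for the DC/OS broker endpoint (GNU sed, in-place)
$ sed -i 's|pulsar://localhost:6650|pulsar://a1.dcos:6650|g' \
    src/main/java/tutorial/ConsumerTutorial.java \
    src/main/java/tutorial/ProducerTutorial.java
```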
Now, you can change the message number from 10 to 10000000 in the main method in the [`ProducerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ProducerTutorial.java) file to produce more messages.

Then, you can compile the project code using the command below:

```bash
$ mvn clean package
```

### Run the consumer and producer

Execute this command to run the consumer:

```bash
$ mvn exec:java -Dexec.mainClass="tutorial.ConsumerTutorial"
```

Execute this command to run the producer:

```bash
$ mvn exec:java -Dexec.mainClass="tutorial.ProducerTutorial"
```

You see that the producer is producing messages and the consumer is consuming messages through the DC/OS GUI.

![DC/OS pulsar producer](/assets/dcos_producer.png)

![DC/OS pulsar consumer](/assets/dcos_consumer.png)

### View Grafana metric output

While the producer and consumer are running, you can access the running metrics from Grafana.

![DC/OS pulsar dashboard](/assets/dcos_metrics.png)

## Uninstall Pulsar

You can shut down and uninstall the `pulsar` application from DC/OS at any time in one of the following two ways:

1. Click the three dots at the right end of the Pulsar group and choose **Delete** on the DC/OS GUI.

   ![DC/OS pulsar uninstall](/assets/dcos_uninstall.png)

2. Use the command below.

   ```bash
   $ dcos marathon group remove /pulsar
   ```

diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/deploy-docker.md b/site2/website/versioned_docs/version-2.10.1-deprecated/deploy-docker.md
deleted file mode 100644
index 8348d78deb2378..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/deploy-docker.md
+++ /dev/null
@@ -1,60 +0,0 @@
---
id: deploy-docker
title: Deploy a cluster on Docker
sidebar_label: "Docker"
original_id: deploy-docker
---

To deploy a Pulsar cluster on Docker, complete the following steps:

1. Deploy a ZooKeeper cluster (optional)
2. Initialize cluster metadata
3. Deploy a BookKeeper cluster
4. Deploy one or more Pulsar brokers

## Prepare

To run Pulsar on Docker, you need to create a container for each Pulsar component: ZooKeeper, BookKeeper and broker. You can pull the images of ZooKeeper and BookKeeper separately on [Docker Hub](https://hub.docker.com/), and pull a [Pulsar image](https://hub.docker.com/r/apachepulsar/pulsar-all/tags) for the broker. You can also pull only one [Pulsar image](https://hub.docker.com/r/apachepulsar/pulsar-all/tags) and create three containers with this image. This tutorial takes the second option as an example.

### Pull a Pulsar image

You can pull a Pulsar image from [Docker Hub](https://hub.docker.com/r/apachepulsar/pulsar-all/tags) with the following command.

```
docker pull apachepulsar/pulsar-all:latest
```

### Create three containers

Create containers for ZooKeeper, BookKeeper and broker. In this example, they are named `zookeeper`, `bookkeeper` and `broker` respectively. You can name them as you want with the `--name` flag. By default, the container names are created randomly.

```
docker run -it --name bookkeeper apachepulsar/pulsar-all:latest /bin/bash
docker run -it --name zookeeper apachepulsar/pulsar-all:latest /bin/bash
docker run -it --name broker apachepulsar/pulsar-all:latest /bin/bash
```

### Create a network

To deploy a Pulsar cluster on Docker, you need to create a `network` and connect the containers of ZooKeeper, BookKeeper and broker to this network.
The following command creates the network `pulsar`:

```

docker network create pulsar

```

### Connect containers to network
Connect the containers of ZooKeeper, BookKeeper and broker to the `pulsar` network with the following commands.

```

docker network connect pulsar zookeeper
docker network connect pulsar bookkeeper
docker network connect pulsar broker

```

To check whether the containers are successfully connected to the network, enter the `docker network inspect pulsar` command.

For detailed information about how to deploy a ZooKeeper cluster, a BookKeeper cluster, and brokers, see [deploy a cluster on bare metal](deploy-bare-metal.md).
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/deploy-kubernetes.md b/site2/website/versioned_docs/version-2.10.1-deprecated/deploy-kubernetes.md
deleted file mode 100644
index 1aefc6ad79f716..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/deploy-kubernetes.md
+++ /dev/null
@@ -1,11 +0,0 @@
---
id: deploy-kubernetes
title: Deploy Pulsar on Kubernetes
sidebar_label: "Kubernetes"
original_id: deploy-kubernetes
---

To get up and running with these charts as fast as possible, in a **non-production** use case, we provide
a [quick start guide](getting-started-helm.md) for Proof of Concept (PoC) deployments.

To configure and install a Pulsar cluster on Kubernetes for production usage, follow the complete [Installation Guide](helm-install.md).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/deploy-monitoring.md b/site2/website/versioned_docs/version-2.10.1-deprecated/deploy-monitoring.md
deleted file mode 100644
index 69d994b7d586f3..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/deploy-monitoring.md
+++ /dev/null
@@ -1,138 +0,0 @@
---
id: deploy-monitoring
title: Monitor
sidebar_label: "Monitor"
original_id: deploy-monitoring
---

You can monitor a Pulsar cluster in different ways, using metrics that expose both the usage of topics and the overall health of the individual components of the cluster.

## Collect metrics

You can collect broker stats, ZooKeeper stats, and BookKeeper stats.

### Broker stats

You can collect Pulsar broker metrics from brokers and export the metrics in JSON format. Pulsar broker metrics are mainly of two types:

* *Destination dumps*, which contain stats for each individual topic. You can fetch the destination dumps using the command below:

  ```shell

  bin/pulsar-admin broker-stats destinations

  ```

* Broker metrics, which contain the broker information and topic stats aggregated at the namespace level. You can fetch the broker metrics by using the following command:

  ```shell

  bin/pulsar-admin broker-stats monitoring-metrics

  ```

All the message rates are updated every minute.

The aggregated broker metrics are also exposed in the [Prometheus](https://prometheus.io) format at:

```shell

http://$BROKER_ADDRESS:8080/metrics/

```

### ZooKeeper stats

The local ZooKeeper server, the configuration store server, and the clients shipped with Pulsar can expose detailed stats through Prometheus.

```shell

http://$LOCAL_ZK_SERVER:8000/metrics
http://$GLOBAL_ZK_SERVER:8001/metrics

```

The default port of local ZooKeeper is `8000` and the default port of the configuration store is `8001`. You can use a different stats port by configuring `metricsProvider.httpPort` in the `conf/zookeeper.conf` file.
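
A quick way to confirm that one of these endpoints is serving metrics is to fetch it over HTTP. The following minimal Java sketch (the endpoint URL is a placeholder for your own deployment) prints the first few lines of the Prometheus exposition output:

```java

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MetricsCheck {
    public static void main(String[] args) throws Exception {
        // Replace with your broker, ZooKeeper, or bookie metrics endpoint.
        String endpoint = "http://localhost:8080/metrics/";
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(endpoint)).GET().build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        // The Prometheus exposition format has one metric sample per line.
        response.body().lines().limit(10).forEach(System.out::println);
    }
}

```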

### BookKeeper stats

You can configure the stats frameworks for BookKeeper by modifying the `statsProviderClass` in the `conf/bookkeeper.conf` file.

The default BookKeeper configuration enables the Prometheus exporter. The configuration is included with the Pulsar distribution.

```shell

http://$BOOKIE_ADDRESS:8000/metrics

```

The default port for a bookie is `8000`. You can change the port by configuring `prometheusStatsHttpPort` in the `conf/bookkeeper.conf` file.

### Managed cursor acknowledgment state
The acknowledgment state is first persisted to the ledger. When the acknowledgment state fails to be persisted to the ledger, it is persisted to ZooKeeper. To track the stats of acknowledgment, you can configure the metrics for the managed cursor.

```

brk_ml_cursor_persistLedgerSucceed(namespace="", ledger_name="", cursor_name:"")
brk_ml_cursor_persistLedgerErrors(namespace="", ledger_name="", cursor_name:"")
brk_ml_cursor_persistZookeeperSucceed(namespace="", ledger_name="", cursor_name:"")
brk_ml_cursor_persistZookeeperErrors(namespace="", ledger_name="", cursor_name:"")
brk_ml_cursor_nonContiguousDeletedMessagesRange(namespace="", ledger_name="", cursor_name:"")

```

These metrics are exposed in the Prometheus interface, and you can monitor and check the metrics stats in Grafana.

### Function and connector stats

You can collect functions worker stats from `functions-worker` and export the metrics in JSON format, which contain functions worker JVM metrics.

```

pulsar-admin functions-worker monitoring-metrics

```

You can collect functions and connectors metrics from `functions-worker` and export the metrics in JSON format.

```

pulsar-admin functions-worker function-stats

```

The aggregated functions and connectors metrics can be exposed in Prometheus format as below. You can get [`FUNCTIONS_WORKER_ADDRESS`](functions-worker.md) and `WORKER_PORT` from the `functions_worker.yml` file.

```

http://$FUNCTIONS_WORKER_ADDRESS:$WORKER_PORT/metrics

```

## Configure Prometheus

You can use Prometheus to collect all the metrics exposed for Pulsar components and set up [Grafana](https://grafana.com/) dashboards to display the metrics and monitor your Pulsar cluster. For details, refer to the [Prometheus guide](https://prometheus.io/docs/introduction/getting_started/).

When you run Pulsar on bare metal, you can provide the list of nodes to be probed. When you deploy Pulsar in a Kubernetes cluster, the monitoring is set up automatically. For details, refer to the [Kubernetes instructions](helm-deploy.md).

## Dashboards

When you collect time series statistics, the major problem is to make sure the number of dimensions attached to the data does not explode. Thus, you only need to collect time series of metrics aggregated at the namespace level.

### Pulsar per-topic dashboard

The per-topic dashboard instructions are available at [Pulsar manager](administration-pulsar-manager.md).

### Grafana

You can use Grafana to create dashboards driven by the data stored in Prometheus.

When you deploy Pulsar on Kubernetes with the Pulsar Helm Chart, a `pulsar-grafana` Docker image is enabled by default. You can use the Docker image with the principal dashboards.

The following are some Grafana dashboard examples:

- [pulsar-grafana](deploy-monitoring.md#grafana): a Grafana dashboard that displays metrics collected in Prometheus for Pulsar clusters running on Kubernetes.
-
- [apache-pulsar-grafana-dashboard](https://github.com/streamnative/apache-pulsar-grafana-dashboard): a collection of Grafana dashboard templates for different Pulsar components running on both Kubernetes and on-premise machines.

## Alerting rules
You can set alerting rules according to your Pulsar environment. To configure alerting rules for Apache Pulsar, refer to [alerting rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/develop-load-manager.md b/site2/website/versioned_docs/version-2.10.1-deprecated/develop-load-manager.md
deleted file mode 100644
index 509209b6a852d8..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/develop-load-manager.md
+++ /dev/null
@@ -1,227 +0,0 @@
---
id: develop-load-manager
title: Modular load manager
sidebar_label: "Modular load manager"
original_id: develop-load-manager
---

The *modular load manager*, implemented in [`ModularLoadManagerImpl`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/impl/ModularLoadManagerImpl.java), is a flexible alternative to the previously implemented load manager, [`SimpleLoadManagerImpl`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/impl/SimpleLoadManagerImpl.java), which attempts to simplify how load is managed while also providing abstractions so that complex load management strategies may be implemented.

## Usage

There are two ways that you can enable the modular load manager:

1. Change the value of the `loadManagerClassName` parameter in `conf/broker.conf` from `org.apache.pulsar.broker.loadbalance.impl.SimpleLoadManagerImpl` to `org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl`.
2. Use the `pulsar-admin` tool. Here's an example:

   ```shell

   $ pulsar-admin brokers update-dynamic-config \
     --config loadManagerClassName \
     --value org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl

   ```

   You can use the same method to change back to the original value. In either case, any mistake in specifying the load manager will cause Pulsar to default to `SimpleLoadManagerImpl`.

## Verification

There are a few different ways to determine which load manager is being used:

1. Use `pulsar-admin` to examine the `loadManagerClassName` element:

   ```shell

   $ bin/pulsar-admin brokers get-all-dynamic-config
   {
     "loadManagerClassName" : "org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl"
   }

   ```

   If there is no `loadManagerClassName` element, then the default load manager is used.

2. Consult a ZooKeeper load report. With the modular load manager, the load report in `/loadbalance/brokers/...` will have many differences. For example, the `systemResourceUsage` sub-elements (`bandwidthIn`, `bandwidthOut`, etc.) are now all at the top level.
   Here is an example load report from the modular load manager:

   ```json

   {
     "bandwidthIn": {
       "limit": 10240000.0,
       "usage": 4.256510416666667
     },
     "bandwidthOut": {
       "limit": 10240000.0,
       "usage": 5.287239583333333
     },
     "bundles": [],
     "cpu": {
       "limit": 2400.0,
       "usage": 5.7353247655435915
     },
     "directMemory": {
       "limit": 16384.0,
       "usage": 1.0
     }
   }

   ```

   With the simple load manager, the load report in `/loadbalance/brokers/...` will look like this:

   ```json

   {
     "systemResourceUsage": {
       "bandwidthIn": {
         "limit": 10240000.0,
         "usage": 0.0
       },
       "bandwidthOut": {
         "limit": 10240000.0,
         "usage": 0.0
       },
       "cpu": {
         "limit": 2400.0,
         "usage": 0.0
       },
       "directMemory": {
         "limit": 16384.0,
         "usage": 1.0
       },
       "memory": {
         "limit": 8192.0,
         "usage": 3903.0
       }
     }
   }

   ```

3. The command-line [broker monitor](reference-cli-tools.md#monitor-brokers) will have a different output format depending on which load manager implementation is being used.

   Here is an example from the modular load manager:

   ```

   ===================================================================================================================
   ||SYSTEM |CPU % |MEMORY % |DIRECT % |BW IN % |BW OUT % |MAX % ||
   || |0.00 |48.33 |0.01 |0.00 |0.00 |48.33 ||
   ||COUNT |TOPIC |BUNDLE |PRODUCER |CONSUMER |BUNDLE + |BUNDLE - ||
   || |4 |4 |0 |2 |4 |0 ||
   ||LATEST |MSG/S IN |MSG/S OUT |TOTAL |KB/S IN |KB/S OUT |TOTAL ||
   || |0.00 |0.00 |0.00 |0.00 |0.00 |0.00 ||
   ||SHORT |MSG/S IN |MSG/S OUT |TOTAL |KB/S IN |KB/S OUT |TOTAL ||
   || |0.00 |0.00 |0.00 |0.00 |0.00 |0.00 ||
   ||LONG |MSG/S IN |MSG/S OUT |TOTAL |KB/S IN |KB/S OUT |TOTAL ||
   || |0.00 |0.00 |0.00 |0.00 |0.00 |0.00 ||
   ===================================================================================================================

   ```

   Here is an example from the simple load manager:

   ```

   ===================================================================================================================
   ||COUNT |TOPIC |BUNDLE |PRODUCER |CONSUMER |BUNDLE + |BUNDLE - ||
   || |4 |4 |0 |2 |0 |0 ||
   ||RAW SYSTEM |CPU % |MEMORY % |DIRECT % |BW IN % |BW OUT % |MAX % ||
   || |0.25 |47.94 |0.01 |0.00 |0.00 |47.94 ||
   ||ALLOC SYSTEM |CPU % |MEMORY % |DIRECT % |BW IN % |BW OUT % |MAX % ||
   || |0.20 |1.89 | |1.27 |3.21 |3.21 ||
   ||RAW MSG |MSG/S IN |MSG/S OUT |TOTAL |KB/S IN |KB/S OUT |TOTAL ||
   || |0.00 |0.00 |0.00 |0.01 |0.01 |0.01 ||
   ||ALLOC MSG |MSG/S IN |MSG/S OUT |TOTAL |KB/S IN |KB/S OUT |TOTAL ||
   || |54.84 |134.48 |189.31 |126.54 |320.96 |447.50 ||
   ===================================================================================================================

   ```

It is important to note that the modular load manager is _centralized_, meaning that all requests to assign a bundle (whether it has been seen before or whether this is the first time) are handled only by the _lead_ broker (which can change over time). To determine the current lead broker, examine the `/loadbalance/leader` node in ZooKeeper.

## Implementation

### Data

The data monitored by the modular load manager is contained in the [`LoadData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/LoadData.java) class.
Here, the available data is subdivided into the bundle data and the broker data.

#### Broker

The broker data is contained in the [`BrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/BrokerData.java) class. It is further subdivided into two parts,
one being the local data which every broker individually writes to ZooKeeper, and the other being the historical broker
data which is written to ZooKeeper by the leader broker.

##### Local Broker Data
The local broker data is contained in the class [`LocalBrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/java/org/apache/pulsar/policies/data/loadbalancer/LocalBrokerData.java) and provides information about the following resources:

* CPU usage
* JVM heap memory usage
* Direct memory usage
* Bandwidth in/out usage
* Most recent total message rate in/out across all bundles
* Total number of topics, bundles, producers, and consumers
* Names of all bundles assigned to this broker
* Most recent changes in bundle assignments for this broker

The local broker data is updated periodically according to the service configuration
`loadBalancerReportUpdateMaxIntervalMinutes`. After any broker updates its local broker data, the leader broker will
receive the update immediately via a ZooKeeper watch, where the local data is read from the ZooKeeper node
`/loadbalance/brokers/<broker host:port>`.

##### Historical Broker Data

The historical broker data is contained in the [`TimeAverageBrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/TimeAverageBrokerData.java) class.

In order to reconcile the need to make good decisions in a steady-state scenario and make reactive decisions in a critical scenario, the historical data is split into two parts: the short-term data for reactive decisions, and the long-term data for steady-state decisions. Both time frames maintain the following information:

* Message rate in/out for the entire broker
* Message throughput in/out for the entire broker

Unlike the bundle data, the broker data does not maintain samples for the global broker message rates and throughputs, which are not expected to remain steady as bundles are removed or added. Instead, this data is aggregated over the short-term and long-term data for the bundles. See the section on bundle data to understand how that data is collected and maintained.

The historical broker data is updated for each broker in memory by the leader broker whenever any broker writes its local data to ZooKeeper. Then, the historical data is written to ZooKeeper by the leader broker periodically according to the configuration `loadBalancerResourceQuotaUpdateIntervalMinutes`.

##### Bundle Data

The bundle data is contained in the [`BundleData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/BundleData.java) class. Like the historical broker data, the bundle data is split into a short-term and a long-term time frame. The information maintained in each time frame:

* Message rate in/out for this bundle
* Message throughput in/out for this bundle
* Current number of samples for this bundle

The time frames are implemented by maintaining the average of these values over a set, limited number of samples, where
the samples are obtained through the message rate and throughput values in the local data.
Thus, if the update interval
for the local data is 2 minutes, the number of short samples is 10, and the number of long samples is 1000, then the
short-term data is maintained over a period of `10 samples * 2 minutes / sample = 20 minutes`, while the long-term
data is similarly maintained over a period of 2000 minutes. Whenever there are not enough samples to satisfy a given time frame,
the average is taken only over the existing samples. When no samples are available, default values are assumed until
they are overwritten by the first sample. Currently, the default values are:

* Message rate in/out: 50 messages per second both ways
* Message throughput in/out: 50KB per second both ways

The bundle data is updated in memory on the leader broker whenever any broker writes its local data to ZooKeeper.
Then, the bundle data is written to ZooKeeper by the leader broker periodically at the same time as the historical
broker data, according to the configuration `loadBalancerResourceQuotaUpdateIntervalMinutes`.

### Traffic Distribution

The modular load manager uses the abstraction provided by [`ModularLoadManagerStrategy`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/ModularLoadManagerStrategy.java) to make decisions about bundle assignment. The strategy makes a decision by considering the service configuration, the entire load data, and the bundle data for the bundle to be assigned. Currently, the only supported strategy is [`LeastLongTermMessageRate`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/impl/LeastLongTermMessageRate.java), though soon users will have the ability to inject their own strategies if desired.

#### Least Long Term Message Rate Strategy

As its name suggests, the least long term message rate strategy attempts to distribute bundles across brokers so that
the message rate in the long-term time window for each broker is roughly the same. However, simply balancing load based
on message rate does not handle the issue of asymmetric resource burden per message on each broker. Thus, the system
resource usages, which are CPU, memory, direct memory, bandwidth in, and bandwidth out, are also considered in the
assignment process. This is done by weighting the final message rate according to
`1 / (overload_threshold - max_usage)`, where `overload_threshold` corresponds to the configuration
`loadBalancerBrokerOverloadedThresholdPercentage` and `max_usage` is the maximum proportion among the system resources
that is being utilized by the candidate broker. This multiplier ensures that machines that are being more heavily taxed
by the same message rates will receive less load. In particular, it tries to ensure that if one machine is overloaded,
then all machines are approximately overloaded. If a broker's max usage exceeds the overload
threshold, that broker is not considered for bundle assignment. If all brokers are overloaded, the bundle is randomly
assigned.
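
To illustrate that weighting, here is a sketch (ours, not the broker's actual code) of how a candidate broker's score could be derived from its long-term message rate and its maximum resource usage; the strategy then prefers the broker with the lowest score:

```java

// Illustrative only: names and structure are ours, not LeastLongTermMessageRate's.
// Usages and the threshold are fractions in [0, 1]; rates are messages per second.
final class WeightedScore {

    static double score(double longTermMsgRateIn, double longTermMsgRateOut,
                        double maxResourceUsage, double overloadThreshold) {
        if (maxResourceUsage > overloadThreshold) {
            // Overloaded brokers are excluded from bundle assignment entirely.
            return Double.POSITIVE_INFINITY;
        }
        // Weight the total message rate by 1 / (overload_threshold - max_usage):
        // the closer a broker is to its overload threshold, the worse its score.
        return (longTermMsgRateIn + longTermMsgRateOut)
                / (overloadThreshold - maxResourceUsage);
    }
}

```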
-

diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/develop-plugin.md b/site2/website/versioned_docs/version-2.10.1-deprecated/develop-plugin.md
deleted file mode 100644
index 28d8de8ae375dc..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/develop-plugin.md
+++ /dev/null
@@ -1,139 +0,0 @@
---
id: develop-plugin
title: Pulsar plugin development
sidebar_label: "Plugin"
original_id: develop-plugin
---

You can develop various plugins for Pulsar, such as entry filters, protocol handlers, interceptors, and so on.

## Entry filter

This chapter describes what the entry filter is and how to use it.

### What is an entry filter?

The entry filter is an extension point for implementing a custom message entry strategy. With an entry filter, you can decide **whether to send messages to consumers** (brokers can use the return values of entry filters to determine whether the messages need to be sent or discarded) or **send messages to specific consumers.**

To implement features such as tagged messages or custom delayed messages, use [`subscriptionProperties`](https://github.com/apache/pulsar/blob/ec0a44058d249a7510bb3d05685b2ee5e0874eb6/pulsar-client-api/src/main/java/org/apache/pulsar/client/api/ConsumerBuilder.java?_pjax=%23js-repo-pjax-container%2C%20div%5Bitemtype%3D%22http%3A%2F%2Fschema.org%2FSoftwareSourceCode%22%5D%20main%2C%20%5Bdata-pjax-container%5D#L174), [`properties`](https://github.com/apache/pulsar/blob/ec0a44058d249a7510bb3d05685b2ee5e0874eb6/pulsar-client-api/src/main/java/org/apache/pulsar/client/api/ConsumerBuilder.java?_pjax=%23js-repo-pjax-container%2C%20div%5Bitemtype%3D%22http%3A%2F%2Fschema.org%2FSoftwareSourceCode%22%5D%20main%2C%20%5Bdata-pjax-container%5D#L533), and entry filters.

### How to use an entry filter?

Follow the steps below:

1. Create a Maven project.

2. Implement the `EntryFilter` interface.

3. Package the implementation class into a NAR file.

4. Configure the `broker.conf` file (or the `standalone.conf` file) and restart your broker.

#### Step 1: Create a Maven project

For how to create a Maven project, see [here](https://maven.apache.org/guides/getting-started/maven-in-five-minutes.html).

#### Step 2: Implement the `EntryFilter` interface

1. Add a dependency on the Pulsar broker to the `pom.xml` file. Otherwise, you cannot find the [`EntryFilter` interface](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/plugin/EntryFilter.java).

   ```xml

   <dependency>
     <groupId>org.apache.pulsar</groupId>
     <artifactId>pulsar-broker</artifactId>
     <version>${pulsar.version}</version>
     <scope>provided</scope>
   </dependency>

   ```

2. Implement the [`FilterResult filterEntry(Entry entry, FilterContext context);` method](https://github.com/apache/pulsar/blob/2adb6661d5b82c5705ee00ce3ebc9941c99635d5/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/plugin/EntryFilter.java#L34).

   - If the method returns `ACCEPT` or `null`, this message is sent to consumers.

   - If the method returns `REJECT`, this message is filtered out and it does not consume message permits.

   - If there are multiple entry filters, this message passes through all filters in the pipeline in a round-robin manner. If any entry filter returns `REJECT`, this message is discarded.

   You can get entry metadata, subscriptions, and other information through `FilterContext`.
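
   For reference, here is a minimal sketch of an implementation class (the package name is illustrative and matches the descriptor in step 3; it assumes the `filterEntry` and `close` methods declared by the interface):

   ```java

   package com.xxxx.xxxx.xxxx;

   import org.apache.bookkeeper.mledger.Entry;
   import org.apache.pulsar.broker.service.plugin.EntryFilter;
   import org.apache.pulsar.broker.service.plugin.FilterContext;

   public class DefaultEntryFilterImpl implements EntryFilter {

       @Override
       public FilterResult filterEntry(Entry entry, FilterContext context) {
           // Accept every entry. A real implementation would inspect the
           // message metadata and subscription available through the
           // FilterContext and return REJECT to drop an entry.
           return FilterResult.ACCEPT;
       }

       @Override
       public void close() {
           // Release any resources held by the filter (nothing to do here).
       }
   }

   ```

3. Describe a NAR file.

   Create an `entry_filter.yml` file in the `resources/META-INF/services` directory to describe a NAR file.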

   ```conf

   # Entry filter name, which should be configured in the broker.conf file later
   name: entryFilter
   # Entry filter description
   description: entry filter
   # Implementation class name of entry filter
   entryFilterClass: com.xxxx.xxxx.xxxx.DefaultEntryFilterImpl

   ```

#### Step 3: Package the implementation class into a NAR file

1. Add the NAR plugin to your `pom.xml` file.

   ```xml

   <build>
     <finalName>${project.artifactId}</finalName>
     <plugins>
       <plugin>
         <groupId>org.apache.nifi</groupId>
         <artifactId>nifi-nar-maven-plugin</artifactId>
         <version>1.2.0</version>
         <extensions>true</extensions>
         <configuration>
           <finalName>${project.artifactId}-${project.version}</finalName>
         </configuration>
         <executions>
           <execution>
             <id>default-nar</id>
             <phase>package</phase>
             <goals>
               <goal>nar</goal>
             </goals>
           </execution>
         </executions>
       </plugin>
     </plugins>
   </build>

   ```

2. Generate the NAR file in the `target` directory.

   ```script

   mvn clean install

   ```

#### Step 4: Configure and restart the broker

1. Configure the following parameters in the `broker.conf` file (or the `standalone.conf` file).

   ```conf

   # Class names of pluggable entry filters
   # Multiple classes need to be separated by commas.
   entryFilterNames=entryFilter1,entryFilter2,entryFilter3
   # The directory for all entry filter implementations
   entryFiltersDirectory=tempDir

   ```

2. Restart your broker.

   You can see the following broker log if the plugin is successfully loaded.

   ```text

   Successfully loaded entry filter for name `{name of your entry filter}`

   ```

diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/develop-schema.md b/site2/website/versioned_docs/version-2.10.1-deprecated/develop-schema.md
deleted file mode 100644
index 2d4461a5ea2b55..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/develop-schema.md
+++ /dev/null
@@ -1,62 +0,0 @@
---
id: develop-schema
title: Custom schema storage
sidebar_label: "Custom schema storage"
original_id: develop-schema
---

By default, Pulsar stores data type [schemas](concepts-schema-registry.md) in [Apache BookKeeper](https://bookkeeper.apache.org) (which is deployed alongside Pulsar). You can, however, use another storage system if you wish. This doc walks you through creating your own schema storage implementation.

In order to use a non-default (i.e. non-BookKeeper) storage system for Pulsar schemas, you need to implement two Java interfaces: [`SchemaStorage`](#schemastorage-interface) and [`SchemaStorageFactory`](#schemastoragefactory-interface).

## SchemaStorage interface

The `SchemaStorage` interface has the following methods:

```java

public interface SchemaStorage {
    // How schemas are updated
    CompletableFuture<SchemaVersion> put(String key, byte[] value, byte[] hash);

    // How schemas are fetched from storage
    CompletableFuture<StoredSchema> get(String key, SchemaVersion version);

    // How schemas are deleted
    CompletableFuture<SchemaVersion> delete(String key);

    // Utility method for converting a schema version byte array to a SchemaVersion object
    SchemaVersion versionFromBytes(byte[] version);

    // Startup behavior for the schema storage client
    void start() throws Exception;

    // Shutdown behavior for the schema storage client
    void close() throws Exception;
}

```

> For a full-fledged example schema storage implementation, see the [`BookKeeperSchemaStorage`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/schema/BookkeeperSchemaStorage.java) class.
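
As a starting point, a skeleton implementation could look like the sketch below. The method set mirrors the interface as listed above, and the import paths follow the broker module at the time of writing; treat both as assumptions to verify against your Pulsar version, and the stubbed bodies as placeholders for your storage calls:

```java

package com.example.schema;

import java.util.concurrent.CompletableFuture;

import org.apache.pulsar.broker.service.schema.SchemaStorage;
import org.apache.pulsar.broker.service.schema.StoredSchema;
import org.apache.pulsar.common.protocol.schema.SchemaVersion;

public class MySchemaStorage implements SchemaStorage {

    @Override
    public CompletableFuture<SchemaVersion> put(String key, byte[] value, byte[] hash) {
        // Write the schema bytes to your backing store and complete with the new version.
        return CompletableFuture.failedFuture(new UnsupportedOperationException("TODO"));
    }

    @Override
    public CompletableFuture<StoredSchema> get(String key, SchemaVersion version) {
        // Read the schema bytes for the given key and version from your store.
        return CompletableFuture.failedFuture(new UnsupportedOperationException("TODO"));
    }

    @Override
    public CompletableFuture<SchemaVersion> delete(String key) {
        // Remove all versions of the schema stored under the given key.
        return CompletableFuture.failedFuture(new UnsupportedOperationException("TODO"));
    }

    @Override
    public SchemaVersion versionFromBytes(byte[] version) {
        // Decode your storage-specific version encoding.
        throw new UnsupportedOperationException("TODO");
    }

    @Override
    public void start() throws Exception {
        // Open connections to the backing store.
    }

    @Override
    public void close() throws Exception {
        // Release connections to the backing store.
    }
}

```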

## SchemaStorageFactory interface

```java

public interface SchemaStorageFactory {
    @NotNull
    SchemaStorage create(PulsarService pulsar) throws Exception;
}

```

> For a full-fledged example schema storage factory implementation, see the [`BookKeeperSchemaStorageFactory`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/schema/BookkeeperSchemaStorageFactory.java) class.

## Deployment

In order to use your custom schema storage implementation, you'll need to:

1. Package the implementation in a [JAR](https://docs.oracle.com/javase/tutorial/deployment/jar/basicsindex.html) file.
1. Add that jar to the `lib` folder in your Pulsar [binary or source distribution](getting-started-standalone.md#installing-pulsar).
1. Change the `schemaRegistryStorageClassName` configuration in [`broker.conf`](reference-configuration.md#broker) to your custom factory class (i.e. the `SchemaStorageFactory` implementation, not the `SchemaStorage` implementation).
1. Start up Pulsar.
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/develop-tools.md b/site2/website/versioned_docs/version-2.10.1-deprecated/develop-tools.md
deleted file mode 100644
index b5457790b80810..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/develop-tools.md
+++ /dev/null
@@ -1,111 +0,0 @@
---
id: develop-tools
title: Simulation tools
sidebar_label: "Simulation tools"
original_id: develop-tools
---

It is sometimes necessary to create a test environment and incur artificial load to observe how well load managers
handle the load. The load simulation controller, the load simulation client, and the broker monitor were created as an
effort to make creating this load and observing its effects on the load managers easier.

## Simulation Client
The simulation client is a machine that creates and subscribes to topics with configurable message rates and sizes.
Because simulating a large load sometimes requires multiple client machines, the user does not interact
with the simulation client directly, but instead delegates requests to the simulation controller, which then
sends signals to the clients to start incurring load. The client implementation is in the class
`org.apache.pulsar.testclient.LoadSimulationClient`.

### Usage
To start a simulation client, use the `pulsar-perf` script with the command `simulation-client` as follows:

```

pulsar-perf simulation-client --port <listen port> --service-url <pulsar service url>

```

The client will then be ready to receive controller commands.

## Simulation Controller
The simulation controller sends signals to the simulation clients, requesting them to create new topics, stop old
topics, change the load incurred by topics, as well as several other tasks. It is implemented in the class
`org.apache.pulsar.testclient.LoadSimulationController` and presents a shell to the user as an interface to send
commands with.

### Usage
To start a simulation controller, use the `pulsar-perf` script with the command `simulation-controller` as follows:

```

pulsar-perf simulation-controller --cluster <cluster to simulate on> --client-port <listen port for clients>
--clients <comma-separated list of client hostnames>

```

The clients should already be started before the controller is started. You will then be presented with a simple prompt,
where you can issue commands to simulation clients. Arguments often refer to tenant names, namespace names, and topic
names. In all cases, the BASE names of the tenants, namespaces, and topics are used.
For example, for the topic
`persistent://my_tenant/my_cluster/my_namespace/my_topic`, the tenant name is `my_tenant`, the namespace name is
`my_namespace`, and the topic name is `my_topic`. The controller can perform the following actions:

* Create a topic with a producer and a consumer
  * `trade <tenant> <namespace> <topic> [--rate <message rate per second>]
  [--rand-rate <min rate per second>,<max rate per second>]
  [--size <message size in bytes>]`
* Create a group of topics with a producer and a consumer
  * `trade_group <tenant> <group> <num namespaces> [--rate <message rate per second>]
  [--rand-rate <min rate per second>,<max rate per second>]
  [--separation <separation between topic creation in ms>] [--size <message size in bytes>]
  [--topics-per-namespace <number of topics to create per namespace>]`
* Change the configuration of an existing topic
  * `change <tenant> <namespace> <topic> [--rate <message rate per second>]
  [--rand-rate <min rate per second>,<max rate per second>]
  [--size <message size in bytes>]`
* Change the configuration of a group of topics
  * `change_group <tenant> <group> [--rate <message rate per second>] [--rand-rate <min rate per second>,<max rate per second>]
  [--size <message size in bytes>] [--topics-per-namespace <number of topics to create per namespace>]`
* Shutdown a previously created topic
  * `stop <tenant> <namespace> <topic>`
* Shutdown a previously created group of topics
  * `stop_group <tenant> <group>`
* Copy the historical data from one ZooKeeper to another and simulate based on the message rates and sizes in that history
  * `copy <source zookeeper> <target zookeeper> [--rate-multiplier value]`
* Simulate the load of the historical data on the current ZooKeeper (should be same ZooKeeper being simulated on)
  * `simulate <zookeeper> [--rate-multiplier value]`
* Stream the latest data from the given active ZooKeeper to simulate the real-time load of that ZooKeeper.
  * `stream <source zookeeper> [--rate-multiplier value]`

The "group" arguments in these commands allow the user to create or affect multiple topics at once. Groups are created
when calling the `trade_group` command, and all topics from these groups may be subsequently modified or stopped
with the `change_group` and `stop_group` commands respectively. All ZooKeeper arguments are of the form
`zookeeper_host:port`.

### Difference Between Copy, Simulate, and Stream
The commands `copy`, `simulate`, and `stream` are very similar but have significant differences. `copy` is used when
you want to simulate the load of a static, external ZooKeeper on the ZooKeeper you are simulating on. Thus,
`source zookeeper` should be the ZooKeeper you want to copy and `target zookeeper` should be the ZooKeeper you are
simulating on, and then it will get the full benefit of the historical data of the source in both load manager
implementations. `simulate` on the other hand takes in only one ZooKeeper, the one you are simulating on. It assumes
that you are simulating on a ZooKeeper that has historical data for `SimpleLoadManagerImpl` and creates equivalent
historical data for `ModularLoadManagerImpl`. Then, the load according to the historical data is simulated by the
clients. Finally, `stream` takes in an active ZooKeeper different from the ZooKeeper being simulated on and streams
load data from it and simulates the real-time load. In all cases, the optional `rate-multiplier` argument allows the
user to simulate some proportion of the load. For instance, using `--rate-multiplier 0.05` will cause messages to
be sent at only `5%` of the rate of the load that is being simulated.

## Broker Monitor
To observe the behavior of the load manager in these simulations, one may utilize the broker monitor, which is
implemented in `org.apache.pulsar.testclient.BrokerMonitor`. The broker monitor will print tabular load data to the
console as it is updated using watchers.

### Usage
To start a broker monitor, use the `monitor-brokers` command in the `pulsar-perf` script:

```

pulsar-perf monitor-brokers --connect-string <zookeeper host:port>

```

The console will then continuously print load data until it is interrupted.
- diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/developing-binary-protocol.md b/site2/website/versioned_docs/version-2.10.1-deprecated/developing-binary-protocol.md deleted file mode 100644 index 11394505ac8cca..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/developing-binary-protocol.md +++ /dev/null @@ -1,637 +0,0 @@ ---- -id: developing-binary-protocol -title: Pulsar binary protocol specification -sidebar_label: "Binary protocol" -original_id: developing-binary-protocol ---- - -Pulsar uses a custom binary protocol for communications between producers/consumers and brokers. This protocol is designed to support required features, such as acknowledgements and flow control, while ensuring maximum transport and implementation efficiency. - -Clients and brokers exchange *commands* with each other. Commands are formatted as binary [protocol buffer](https://developers.google.com/protocol-buffers/) (aka *protobuf*) messages. The format of protobuf commands is specified in the [`PulsarApi.proto`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/proto/PulsarApi.proto) file and also documented in the [Protobuf interface](#protobuf-interface) section below. - -> ### Connection sharing -> Commands for different producers and consumers can be interleaved and sent through the same connection without restriction. - -All commands associated with Pulsar's protocol are contained in a [`BaseCommand`](#pulsar.proto.BaseCommand) protobuf message that includes a [`Type`](#pulsar.proto.Type) [enum](https://developers.google.com/protocol-buffers/docs/proto#enum) with all possible subcommands as optional fields. `BaseCommand` messages can specify only one subcommand. - -## Framing - -Since protobuf doesn't provide any sort of message frame, all messages in the Pulsar protocol are prepended with a 4-byte field that specifies the size of the frame. The maximum allowable size of a single frame is 5 MB. - -The Pulsar protocol allows for two types of commands: - -1. **Simple commands** that do not carry a message payload. -2. **Payload commands** that bear a payload that is used when publishing or delivering messages. In payload commands, the protobuf command data is followed by protobuf [metadata](#message-metadata) and then the payload, which is passed in raw format outside of protobuf. All sizes are passed as 4-byte unsigned big endian integers. - -> Message payloads are passed in raw format rather than protobuf format for efficiency reasons. 
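
To make the 4-byte size fields concrete, here is a small sketch (our illustration, not part of the Pulsar codebase) that lays out a payload-free command frame using Java's `ByteBuffer`, which writes big endian by default:

```java

import java.nio.ByteBuffer;

public final class SimpleCommandFrame {

    // Builds [totalSize][commandSize][command] for a payload-free command.
    // serializedCommand is assumed to be an already protobuf-serialized BaseCommand.
    static ByteBuffer frame(byte[] serializedCommand) {
        int commandSize = serializedCommand.length;
        int totalSize = 4 + commandSize; // counts everything after the totalSize field itself
        ByteBuffer buf = ByteBuffer.allocate(4 + totalSize);
        buf.putInt(totalSize);   // 4-byte big endian
        buf.putInt(commandSize); // 4-byte big endian
        buf.put(serializedCommand);
        buf.flip();
        return buf;
    }
}

```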
- -### Simple commands - -Simple (payload-free) commands have this basic structure: - -| Component | Description | Size (in bytes) | -|:--------------|:----------------------------------------------------------------------------------------|:----------------| -| `totalSize` | The size of the frame, counting everything that comes after it (in bytes) | 4 | -| `commandSize` | The size of the protobuf-serialized command | 4 | -| `message` | The protobuf message serialized in a raw binary format (rather than in protobuf format) | | - -### Payload commands - -Payload commands have this basic structure: - -| Component | Required or optional| Description | Size (in bytes) | -|:-----------------------------------|:----------|:--------------------------------------------------------------------------------------------|:----------------| -| `totalSize` | Required | The size of the frame, counting everything that comes after it (in bytes) | 4 | -| `commandSize` | Required | The size of the protobuf-serialized command | 4 | -| `message` | Required | The protobuf message serialized in a raw binary format (rather than in protobuf format) | | -| `magicNumberOfBrokerEntryMetadata` | Optional | A 2-byte byte array (`0x0e02`) identifying the broker entry metadata
    **Note**: `magicNumberOfBrokerEntryMetadata`, `brokerEntryMetadataSize`, and `brokerEntryMetadata` should be used **together**. | 2 |
| `brokerEntryMetadataSize` | Optional | The size of the broker entry metadata | 4 |
| `brokerEntryMetadata` | Optional | The broker entry metadata stored as a binary protobuf message | |
| `magicNumber` | Required | A 2-byte byte array (`0x0e01`) identifying the current format | 2 |
| `checksum` | Required | A [CRC32-C checksum](http://www.evanjones.ca/crc32c.html) of everything that comes after it | 4 |
| `metadataSize` | Required | The size of the message [metadata](#message-metadata) | 4 |
| `metadata` | Required | The message [metadata](#message-metadata) stored as a binary protobuf message | |
| `payload` | Required | Anything left in the frame is considered the payload and can include any sequence of bytes | |

## Broker entry metadata

Broker entry metadata is stored alongside the message metadata as a serialized protobuf message.
It is created by the broker when the message arrives at the broker and, if configured, is passed to the consumer without changes.

| Field | Required or optional | Description |
|:-------------------|:----------------|:------------------------------------------------------------------------------------------------------------------------------|
| `broker_timestamp` | Optional | The timestamp when a message arrived at the broker (i.e. the number of milliseconds since January 1st, 1970 in UTC) |
| `index` | Optional | The index of the message. It is assigned by the broker. |

If you want to use broker entry metadata for **brokers**, configure the [`brokerEntryMetadataInterceptors`](reference-configuration.md#broker) parameter in the `broker.conf` file.

If you want to use broker entry metadata for **consumers**:

1. Use the client protocol version [18 or later](https://github.com/apache/pulsar/blob/ca37e67211feda4f7e0984e6414e707f1c1dfd07/pulsar-common/src/main/proto/PulsarApi.proto#L259).

2. Configure the [`brokerEntryMetadataInterceptors`](reference-configuration.md#broker) parameter and set the [`exposingBrokerEntryMetadataToClientEnabled`](reference-configuration-broker.md#exposingbrokerentrymetadatatoclientenabled) parameter to `true` in the `broker.conf` file.

## Message metadata

Message metadata is stored alongside the application-specified payload as a serialized protobuf message. Metadata is created by the producer and passed without changes to the consumer.

| Field | Required or optional | Description |
|:-------------------------|:----------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `producer_name` | Required | The name of the producer that published the message |
| `sequence_id` | Required | The sequence ID of the message, assigned by the producer |
| `publish_time` | Required | The publish timestamp in Unix time (i.e. as the number of milliseconds since January 1st, 1970 in UTC) |
| `properties` | Required | A sequence of key/value pairs (using the [`KeyValue`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/proto/PulsarApi.proto#L32) message). These are application-defined keys and values with no special meaning to Pulsar. |
| `replicated_from` | Optional | Indicates that the message has been replicated and specifies the name of the [cluster](reference-terminology.md#cluster) where the message was originally published |
| `partition_key` | Optional | While publishing on a partitioned topic, if the key is present, the hash of the key is used to determine which partition to choose. The partition key is used as the message key. |
| `compression` | Optional | Signals that the payload has been compressed, and with which compression library |
| `uncompressed_size` | Optional | If compression is used, the producer must fill the uncompressed size field with the original payload size |
| `num_messages_in_batch` | Optional | If this message is really a [batch](#batch-messages) of multiple entries, this field must be set to the number of messages in the batch |

### Batch messages

When using batch messages, the payload contains a list of entries, each with its individual metadata, defined by the `SingleMessageMetadata` object.

For a single batch, the payload format looks like this:

| Field | Required or optional | Description |
|:----------------|:---------------------|:-----------------------------------------------------------|
| `metadataSizeN` | Required | The size of the single message metadata serialized Protobuf |
| `metadataN` | Required | Single message metadata |
| `payloadN` | Required | Message payload passed by application |

Each metadata field looks like this:

| Field | Required or optional | Description |
|:----------------|:----------------------|:--------------------------------------------------------|
| `properties` | Required | Application-defined properties |
| `partition key` | Optional | Key to indicate the hashing to a particular partition |
| `payload_size` | Required | Size of the payload for the single message in the batch |

When compression is enabled, the whole batch is compressed at once.

## Interactions

### Connection establishment

After opening a TCP connection to a broker, typically on port 6650, the client
is responsible for initiating the session.

![Connect interaction](/assets/binary-protocol-connect.png)

After receiving a `Connected` response from the broker, the client can
consider the connection ready to use. Alternatively, if the broker fails to
validate the client's authentication, it will reply with an `Error` command and
close the TCP connection.

Example:

```protobuf

message CommandConnect {
  "client_version" : "Pulsar-Client-Java-v1.15.2",
  "auth_method_name" : "my-authentication-plugin",
  "auth_data" : "my-auth-data",
  "protocol_version" : 6
}

```

Fields:
 * `client_version` → String based identifier. Format is not enforced
 * `auth_method_name` → *(optional)* Name of the authentication plugin if auth
   is enabled
 * `auth_data` → *(optional)* Plugin specific authentication data
 * `protocol_version` → Indicates the protocol version supported by the
   client. Broker will not send commands introduced in newer revisions of the
   protocol. Broker might be enforcing a minimum version

```protobuf

message CommandConnected {
  "server_version" : "Pulsar-Broker-v1.15.2",
  "protocol_version" : 6
}

```

Fields:
 * `server_version` → String identifier of broker version
 * `protocol_version` → Protocol version supported by the broker.
Client
   must not attempt to send commands introduced in newer revisions of the
   protocol

### Keep Alive

To identify prolonged network partitions between clients and brokers or cases
in which a machine crashes without interrupting the TCP connection on the remote
end (e.g. power outage, kernel panic, hard reboot...), we have introduced a
mechanism to probe for the availability status of the remote peer.

Both clients and brokers send `Ping` commands periodically, and they will
close the socket if a `Pong` response is not received within a timeout (the
default used by the broker is 60s).

A valid implementation of a Pulsar client is not required to send the `Ping`
probe, though it is required to promptly reply after receiving one from the
broker in order to prevent the remote side from forcibly closing the TCP connection.


### Producer

In order to send messages, a client needs to establish a producer. When creating
a producer, the broker first verifies that this particular client is
authorized to publish on the topic.

Once the client gets confirmation of the producer creation, it can publish
messages to the broker, referring to the producer ID negotiated before.

![Producer interaction](/assets/binary-protocol-producer.png)

If the client does not receive a response indicating producer creation success or failure,
the client should first send a command to close the original producer before sending a
command to re-attempt producer creation.

##### Command Producer

```protobuf

message CommandProducer {
  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
  "producer_id" : 1,
  "request_id" : 1
}

```

Parameters:
 * `topic` → Complete name of the topic on which you want to create the producer
 * `producer_id` → Client generated producer identifier. Needs to be unique
   within the same connection
 * `request_id` → Identifier for this request. Used to match the response with
   the originating request. Needs to be unique within the same connection
 * `producer_name` → *(optional)* If a producer name is specified, the name will
   be used, otherwise the broker will generate a unique name. The generated
   producer name is guaranteed to be globally unique. Implementations are
   expected to let the broker generate a new producer name when the producer
   is initially created, then reuse it when recreating the producer after
   reconnections.

The broker will reply with either `ProducerSuccess` or `Error` commands.

##### Command ProducerSuccess

```protobuf

message CommandProducerSuccess {
  "request_id" : 1,
  "producer_name" : "generated-unique-producer-name"
}

```

Parameters:
 * `request_id` → The original ID of the `CreateProducer` request
 * `producer_name` → Generated globally unique producer name or the name
   specified by the client, if any.

##### Command Send

Command `Send` is used to publish a new message within the context of an
already existing producer. If a producer has not yet been created for the
connection, the broker will terminate the connection. This command is used
in a frame that includes the command as well as the message payload, for which the
complete format is specified in the [payload commands](#payload-commands) section.

```protobuf

message CommandSend {
  "producer_id" : 1,
  "sequence_id" : 0,
  "num_messages" : 1
}

```

Parameters:
 * `producer_id` → The ID of an existing producer
 * `sequence_id` → Each message has an associated sequence ID which is expected
   to be implemented with a counter starting at 0. The `SendReceipt` that
   acknowledges the effective publishing of a message will refer to it by
   its sequence ID.
 * `num_messages` → *(optional)* Used when publishing a batch of messages at
   once.

##### Command SendReceipt

After a message has been persisted on the configured number of replicas, the
broker will send the acknowledgment receipt to the producer.

```protobuf

message CommandSendReceipt {
  "producer_id" : 1,
  "sequence_id" : 0,
  "message_id" : {
    "ledgerId" : 123,
    "entryId" : 456
  }
}

```

Parameters:
 * `producer_id` → The ID of the producer originating the send request
 * `sequence_id` → The sequence ID of the published message
 * `message_id` → The message ID assigned by the system to the published message.
   It is unique within a single cluster. The message ID is composed of 2 longs, `ledgerId`
   and `entryId`, which reflect that this unique ID is assigned when appending
   to a BookKeeper ledger


##### Command CloseProducer

**Note**: *This command can be sent by either producer or broker*.

When receiving a `CloseProducer` command, the broker will stop accepting any
more messages for the producer, wait until all pending messages are persisted
and then reply `Success` to the client.

If the client does not receive a response to a `Producer` command within a timeout,
the client must first send a `CloseProducer` command before sending another
`Producer` command. The client does not need to await a response to the `CloseProducer`
command before sending the next `Producer` command.

The broker can send a `CloseProducer` command to the client when it's performing
a graceful failover (e.g. the broker is being restarted, or the topic is being unloaded
by the load balancer to be transferred to a different broker).

When receiving the `CloseProducer`, the client is expected to go through the
service discovery lookup again and recreate the producer. The TCP
connection is not affected.

### Consumer

A consumer is used to attach to a subscription and consume messages from it.
After every reconnection, a client needs to subscribe to the topic. If a
subscription is not already there, a new one will be created.

![Consumer](/assets/binary-protocol-consumer.png)

#### Flow control

After the consumer is ready, the client needs to *give permission* to the
broker to push messages. This is done with the `Flow` command.

A `Flow` command gives additional *permits* to send messages to the consumer.
A typical consumer implementation will use a queue to accumulate these messages
before the application is ready to consume them.

After the application has dequeued half of the messages in the queue, the consumer
sends permits to the broker to ask for more messages (equal to half of the messages in the queue).

For example, if the queue size is 1000 and the consumer consumes 500 messages in the queue,
then the consumer sends permits to the broker to ask for 500 messages.
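
As a sketch of the permit accounting this implies on the client side (illustrative only; a real client also handles threading, reconnects, and batch messages):

```java

// Illustrative consumer-side flow control: grant the broker a full queue of
// permits up front, then re-grant half the queue each time half is consumed.
final class FlowPermits {

    private final int queueSize;
    private int dequeuedSinceLastFlow = 0;

    FlowPermits(int queueSize) {
        this.queueSize = queueSize;
        sendFlow(queueSize); // initial permits, e.g. 1000
    }

    void onMessageDequeued() {
        if (++dequeuedSinceLastFlow >= queueSize / 2) {
            sendFlow(queueSize / 2); // e.g. ask for 500 more after consuming 500
            dequeuedSinceLastFlow = 0;
        }
    }

    private void sendFlow(int permits) {
        // A real client would serialize a CommandFlow{consumer_id, messagePermits}
        // frame here and write it on the connection.
        System.out.println("Flow: granting " + permits + " permits");
    }
}

```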

##### Command Subscribe

```protobuf

message CommandSubscribe {
  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
  "subscription" : "my-subscription-name",
  "subType" : "Exclusive",
  "consumer_id" : 1,
  "request_id" : 1
}

```

Parameters:
 * `topic` → Complete name of the topic on which you want to create the consumer
 * `subscription` → Subscription name
 * `subType` → Subscription type: Exclusive, Shared, Failover, Key_Shared
 * `consumer_id` → Client generated consumer identifier. Needs to be unique
   within the same connection
 * `request_id` → Identifier for this request. Used to match the response with
   the originating request. Needs to be unique within the same connection
 * `consumer_name` → *(optional)* Clients can specify a consumer name. This
   name can be used to track a particular consumer in the stats. Also, in
   the Failover subscription type, the name is used to decide which consumer is
   elected as *master* (the one receiving messages): consumers are sorted by
   their consumer name and the first one is elected master.

##### Command Flow

```protobuf

message CommandFlow {
  "consumer_id" : 1,
  "messagePermits" : 1000
}

```

Parameters:
* `consumer_id` → ID of an already established consumer
* `messagePermits` → Number of additional permits to grant to the broker for
  pushing more messages

##### Command Message

Command `Message` is used by the broker to push messages to an existing consumer,
within the limits of the given permits.

This command is used in a frame that includes the message payload as well, for
which the complete format is specified in the [payload commands](#payload-commands)
section.

```protobuf

message CommandMessage {
  "consumer_id" : 1,
  "message_id" : {
    "ledgerId" : 123,
    "entryId" : 456
  }
}

```

##### Command Ack

An `Ack` is used to signal to the broker that a given message has been
successfully processed by the application and can be discarded by the broker.

In addition, the broker will also maintain the consumer position based on the
acknowledged messages.

```protobuf

message CommandAck {
  "consumer_id" : 1,
  "ack_type" : "Individual",
  "message_id" : {
    "ledgerId" : 123,
    "entryId" : 456
  }
}

```

Parameters:
 * `consumer_id` → ID of an already established consumer
 * `ack_type` → Type of acknowledgment: `Individual` or `Cumulative`
 * `message_id` → ID of the message to acknowledge
 * `validation_error` → *(optional)* Indicates that the consumer has discarded
   the messages due to: `UncompressedSizeCorruption`,
   `DecompressionError`, `ChecksumMismatch`, `BatchDeSerializeError`
 * `properties` → *(optional)* Reserved configuration items
 * `txnid_most_bits` → *(optional)* Same as the Transaction Coordinator ID; `txnid_most_bits` and `txnid_least_bits`
   uniquely identify a transaction.
 * `txnid_least_bits` → *(optional)* The ID of the transaction opened in a transaction coordinator;
   `txnid_most_bits` and `txnid_least_bits` uniquely identify a transaction.
 * `request_id` → *(optional)* The ID for handling response and timeout.

##### Command AckResponse

An `AckResponse` is the broker's response to an acknowledgment request sent by the client. It contains the `consumer_id` sent in the request.
If a transaction is used, it contains both the Transaction ID and the Request ID that are sent in the request. The client finishes the specific request according to the Request ID.
If the `error` field is set, it indicates that the request has failed.

An example of `AckResponse` with redirection:

```protobuf

message CommandAckResponse {
  "consumer_id" : 1,
  "txnid_least_bits" = 0,
  "txnid_most_bits" = 1,
  "request_id" = 5
}

```

##### Command CloseConsumer

**Note**: *This command can be sent by either consumer or broker*.

This command behaves the same as [`CloseProducer`](#command-closeproducer).

##### Command RedeliverUnacknowledgedMessages

A consumer can ask the broker to redeliver some or all of the pending messages
that were pushed to that particular consumer and not yet acknowledged.

The protobuf object accepts a list of message IDs that the consumer wants to
be redelivered. If the list is empty, the broker will redeliver all the
pending messages.

On redelivery, messages can be sent to the same consumer or, in the case of a
shared subscription, spread across all available consumers.

##### Command ReachedEndOfTopic

This is sent by a broker to a particular consumer whenever the topic
has been "terminated" and all the messages on the subscription were
acknowledged.

The client should use this command to notify the application that no more
messages are coming from the consumer.

##### Command ConsumerStats

This command is sent by the client to retrieve Subscriber and Consumer level
stats from the broker.

Parameters:
 * `request_id` → ID of the request, used to correlate the request
   and the response.
 * `consumer_id` → ID of an already established consumer.

##### Command ConsumerStatsResponse

This is the broker's response to a ConsumerStats request by the client.
It contains the Subscriber and Consumer level stats of the `consumer_id` sent in the request.
If the `error_code` or the `error_message` field is set, it indicates that the request has failed.

##### Command Unsubscribe

This command is sent by the client to unsubscribe the `consumer_id` from the associated topic.

Parameters:
 * `request_id` → ID of the request.
 * `consumer_id` → ID of an already established consumer which needs to unsubscribe.


## Service discovery

### Topic lookup

Topic lookup needs to be performed each time a client needs to create or
reconnect a producer or a consumer. Lookup is used to discover which particular
broker is serving the topic we are about to use.

Lookup can be done with a REST call as described in the [admin API](admin-api-topics.md#look-up-topics-owner-broker) docs.

Since Pulsar-1.16 it is also possible to perform the lookup within the binary
protocol.

For the sake of example, let's assume we have a service discovery component
running at `pulsar://broker.example.com:6650`

Individual brokers will be running at `pulsar://broker-1.example.com:6650`,
`pulsar://broker-2.example.com:6650`, ...

A client can use a connection to the discovery service host to issue a
`LookupTopic` command. The response can either be a broker hostname to
connect to, or a broker hostname against which to retry the lookup.

The `LookupTopic` command has to be used in a connection that has already
gone through the `Connect` / `Connected` initial handshake.
-
-![Topic lookup](/assets/binary-protocol-topic-lookup.png)
-
-```protobuf
-
-message CommandLookupTopic {
-  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
-  "request_id" : 1,
-  "authoritative" : false
-}
-
-```
-
-Fields:
- * `topic` → Topic name to look up
- * `request_id` → Id of the request that will be passed with its response
- * `authoritative` → The initial lookup request should use false. When following a
-   redirect response, the client should pass the same value contained in the
-   response
-
-##### LookupTopicResponse
-
-Example of response with successful lookup:
-
-```protobuf
-
-message CommandLookupTopicResponse {
-  "request_id" : 1,
-  "response" : "Connect",
-  "brokerServiceUrl" : "pulsar://broker-1.example.com:6650",
-  "brokerServiceUrlTls" : "pulsar+ssl://broker-1.example.com:6651",
-  "authoritative" : true
-}
-
-```
-
-Example of lookup response with redirection:
-
-```protobuf
-
-message CommandLookupTopicResponse {
-  "request_id" : 1,
-  "response" : "Redirect",
-  "brokerServiceUrl" : "pulsar://broker-2.example.com:6650",
-  "brokerServiceUrlTls" : "pulsar+ssl://broker-2.example.com:6651",
-  "authoritative" : true
-}
-
-```
-
-In this second case, we need to reissue the `LookupTopic` command request
-to `broker-2.example.com`, and this broker will be able to give a definitive
-answer to the lookup request.
-
-### Partitioned topics discovery
-
-Partitioned topics metadata discovery is used to find out whether a topic is a
-"partitioned topic" and how many partitions were set up.
-
-If the topic is marked as "partitioned", the client is expected to create
-multiple producers or consumers, one for each partition, using the `partition-X`
-suffix.
-
-This information only needs to be retrieved the first time a producer or
-consumer is created. There is no need to do this after reconnections.
-
-The discovery of partitioned topics metadata works very similarly to topic
-lookup. The client sends a request to the service discovery address, and the
-response contains the actual metadata.
-
-##### Command PartitionedTopicMetadata
-
-```protobuf
-
-message CommandPartitionedTopicMetadata {
-  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
-  "request_id" : 1
-}
-
-```
-
-Fields:
- * `topic` → The topic for which to check the partitions metadata
- * `request_id` → Id of the request that will be passed with its response
-
-##### Command PartitionedTopicMetadataResponse
-
-Example of response with metadata:
-
-```protobuf
-
-message CommandPartitionedTopicMetadataResponse {
-  "request_id" : 1,
-  "response" : "Success",
-  "partitions" : 32
-}
-
-```
-
-## Protobuf interface
-
-All Pulsar's Protobuf definitions can be found {@inject: github:here:/pulsar-common/src/main/proto/PulsarApi.proto}.
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/functions-cli.md b/site2/website/versioned_docs/version-2.10.1-deprecated/functions-cli.md
deleted file mode 100644
index c9fcfa201525f0..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/functions-cli.md
+++ /dev/null
@@ -1,198 +0,0 @@
----
-id: functions-cli
-title: Pulsar Functions command line tool
-sidebar_label: "Reference: CLI"
-original_id: functions-cli
----
-
-The following tables list the Pulsar Functions command-line tools, including their modes, commands, and parameters.
-
-## localrun
-
-Run a Pulsar Function locally, rather than deploying it to the Pulsar cluster. 
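-
-As a minimal, illustrative sketch (the jar, class, and topic names below are placeholders, not
-part of the original reference), a `localrun` invocation for a Java function might look like this:
-
-```bash
-
-$ bin/pulsar-admin functions localrun \
-  --jar my-function.jar \
-  --classname org.example.MyFunction \
-  --inputs persistent://public/default/in \
-  --output persistent://public/default/out
-
-```
-
-The parameters accepted by `localrun` are listed below.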
-
-Name | Description | Default
----|---|---
-auto-ack | Whether or not the framework acknowledges messages automatically. | true |
-broker-service-url | The URL for the Pulsar broker. | |
-classname | The class name of a Pulsar Function. | |
-client-auth-params | Client authentication parameter. | |
-client-auth-plugin | Client authentication plugin with which the function process can connect to the broker. | |
-CPU | The CPU in cores that need to be allocated per function instance (applicable only to docker runtime). | |
-custom-schema-inputs | The map of input topics to Schema class names (as a JSON string). | |
-custom-serde-inputs | The map of input topics to SerDe class names (as a JSON string). | |
-dead-letter-topic | The topic where all messages that were not processed successfully are sent. This parameter is not supported in Python Functions. | |
-disk | The disk in bytes that need to be allocated per function instance (applicable only to docker runtime). | |
-fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
-function-config-file | The path to a YAML config file specifying the configuration of a Pulsar Function. | |
-go | Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | |
-hostname-verification-enabled | Enable hostname verification. | false
-inputs | The input topic or topics of a Pulsar Function (multiple topics can be specified as a comma-separated list). | |
-jar | Path to the jar file for the function (if the function is written in Java). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | |
-instance-id-offset | Start the instanceIds from this offset. | 0
-log-topic | The topic to which the logs of a Pulsar Function are produced. | |
-max-message-retries | How many times should we try to process a message before giving up. | |
-name | The name of a Pulsar Function. | |
-namespace | The namespace of a Pulsar Function. | |
-output | The output topic of a Pulsar Function (If none is specified, no output is written). | |
-output-serde-classname | The SerDe class to be used for messages output by the function. | |
-parallelism | The parallelism factor of a Pulsar Function (i.e. the number of function instances to run). | |
-processing-guarantees | The processing guarantees (delivery semantics) applied to the function. Available values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]. | ATLEAST_ONCE
-py | Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | |
-ram | The ram in bytes that need to be allocated per function instance (applicable only to process/docker runtime). | |
-retain-ordering | Function consumes and processes messages in order. | |
-schema-type | The builtin schema type or custom schema class name to be used for messages output by the function. | |
-sliding-interval-count | The number of messages after which the window slides. 
| | -sliding-interval-duration-ms | The time duration after which the window slides. | | -subs-name | Pulsar source subscription name if user wants a specific subscription-name for the input-topic consumer. | | -tenant | The tenant of a Pulsar Function. | | -timeout-ms | The message timeout in milliseconds. | | -tls-allow-insecure | Allow insecure tls connection. | false -tls-trust-cert-path | tls trust cert file path. | | -topics-pattern | The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (only supported in Java Function). | | -use-tls | Use tls connection. | false -user-config | User-defined config key/values. | | -window-length-count | The number of messages per window. | | -window-length-duration-ms | The time duration of the window in milliseconds. | | - - -## create - -Create and deploy a Pulsar Function in cluster mode. - -Name | Description | Default ----|---|--- -auto-ack | Whether or not the framework acknowledges messages automatically. | true | -classname | The class name of a Pulsar Function. | | -CPU | The CPU in cores that need to be allocated per function instance (applicable only to docker runtime).| | -custom-runtime-options | A string that encodes options to customize the runtime, see docs for configured runtime for details | | -custom-schema-inputs | The map of input topics to Schema class names (as a JSON string). | | -custom-serde-inputs | The map of input topics to SerDe class names (as a JSON string). | | -dead-letter-topic | The topic where all messages that were not processed successfully are sent. This parameter is not supported in Python Functions. | | -disk | The disk in bytes that need to be allocated per function instance (applicable only to docker runtime). | | -fqfn | The Fully Qualified Function Name (FQFN) for the function. | | -function-config-file | The path to a YAML config file specifying the configuration of a Pulsar Function. | | -go | Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -inputs | The input topic or topics of a Pulsar Function (multiple topics can be specified as a comma-separated list). | | -jar | Path to the jar file for the function (if the function is written in Java). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -log-topic | The topic to which the logs of a Pulsar Function are produced. | | -max-message-retries | How many times should we try to process a message before giving up. | | -name | The name of a Pulsar Function. | | -namespace | The namespace of a Pulsar Function. | | -output | The output topic of a Pulsar Function (If none is specified, no output is written). | | -output-serde-classname | The SerDe class to be used for messages output by the function. | | -parallelism | The parallelism factor of a Pulsar Function (i.e. the number of function instances to run). | | -processing-guarantees | The processing guarantees (delivery semantics) applied to the function. Available values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]. 
| ATLEAST_ONCE -py | Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -ram | The ram in bytes that need to be allocated per function instance (applicable only to process/docker runtime). | | -retain-ordering | Function consumes and processes messages in order. | | -schema-type | The builtin schema type or custom schema class name to be used for messages output by the function. | | -sliding-interval-count | The number of messages after which the window slides. | | -sliding-interval-duration-ms | The time duration after which the window slides. | | -subs-name | Pulsar source subscription name if user wants a specific subscription-name for the input-topic consumer. | | -tenant | The tenant of a Pulsar Function. | | -timeout-ms | The message timeout in milliseconds. | | -topics-pattern | The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (only supported in Java Function). | | -user-config | User-defined config key/values. | | -window-length-count | The number of messages per window. | | -window-length-duration-ms | The time duration of the window in milliseconds. | | - -## delete - -Delete a Pulsar Function that is running on a Pulsar cluster. - -Name | Description | Default ----|---|--- -fqfn | The Fully Qualified Function Name (FQFN) for the function. | | -name | The name of a Pulsar Function. | | -namespace | The namespace of a Pulsar Function. | | -tenant | The tenant of a Pulsar Function. | | - -## update - -Update a Pulsar Function that has been deployed to a Pulsar cluster. - -Name | Description | Default ----|---|--- -auto-ack | Whether or not the framework acknowledges messages automatically. | true | -classname | The class name of a Pulsar Function. | | -CPU | The CPU in cores that need to be allocated per function instance (applicable only to docker runtime). | | -custom-runtime-options | A string that encodes options to customize the runtime, see docs for configured runtime for details | | -custom-schema-inputs | The map of input topics to Schema class names (as a JSON string). | | -custom-serde-inputs | The map of input topics to SerDe class names (as a JSON string). | | -dead-letter-topic | The topic where all messages that were not processed successfully are sent. This parameter is not supported in Python Functions. | | -disk | The disk in bytes that need to be allocated per function instance (applicable only to docker runtime). | | -fqfn | The Fully Qualified Function Name (FQFN) for the function. | | -function-config-file | The path to a YAML config file specifying the configuration of a Pulsar Function. | | -go | Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -inputs | The input topic or topics of a Pulsar Function (multiple topics can be specified as a comma-separated list). | | -jar | Path to the jar file for the function (if the function is written in Java). 
It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | |
-log-topic | The topic to which the logs of a Pulsar Function are produced. | |
-max-message-retries | How many times should we try to process a message before giving up. | |
-name | The name of a Pulsar Function. | |
-namespace | The namespace of a Pulsar Function. | |
-output | The output topic of a Pulsar Function (If none is specified, no output is written). | |
-output-serde-classname | The SerDe class to be used for messages output by the function. | |
-parallelism | The parallelism factor of a Pulsar Function (i.e. the number of function instances to run). | |
-processing-guarantees | The processing guarantees (delivery semantics) applied to the function. Available values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]. | ATLEAST_ONCE
-py | Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | |
-ram | The ram in bytes that need to be allocated per function instance (applicable only to process/docker runtime). | |
-retain-ordering | Function consumes and processes messages in order. | |
-schema-type | The builtin schema type or custom schema class name to be used for messages output by the function. | |
-sliding-interval-count | The number of messages after which the window slides. | |
-sliding-interval-duration-ms | The time duration after which the window slides. | |
-subs-name | Pulsar source subscription name if user wants a specific subscription-name for the input-topic consumer. | |
-tenant | The tenant of a Pulsar Function. | |
-timeout-ms | The message timeout in milliseconds. | |
-topics-pattern | The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (only supported in Java Function). | |
-update-auth-data | Whether or not to update the auth data. | false
-user-config | User-defined config key/values. | |
-window-length-count | The number of messages per window. | |
-window-length-duration-ms | The time duration of the window in milliseconds. | |
-
-## get
-
-Fetch information about a Pulsar Function.
-
-Name | Description | Default
----|---|---
-fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
-name | The name of a Pulsar Function. | |
-namespace | The namespace of a Pulsar Function. | |
-tenant | The tenant of a Pulsar Function. | |
-
-## restart
-
-Restart a function instance.
-
-Name | Description | Default
----|---|---
-fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
-instance-id | The function instanceId (restarts all instances if instance-id is not provided). | |
-name | The name of a Pulsar Function. | |
-namespace | The namespace of a Pulsar Function. | |
-tenant | The tenant of a Pulsar Function. | |
-
-## stop
-
-Stop a function instance.
-
-Name | Description | Default
----|---|---
-fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
-instance-id | The function instanceId (stops all instances if instance-id is not provided). | |
-name | The name of a Pulsar Function. 
| |
-namespace | The namespace of a Pulsar Function. | |
-tenant | The tenant of a Pulsar Function. | |
-
-## start
-
-Start a stopped function instance.
-
-Name | Description | Default
----|---|---
-fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
-instance-id | The function instanceId (starts all instances if instance-id is not provided). | |
-name | The name of a Pulsar Function. | |
-namespace | The namespace of a Pulsar Function. | |
-tenant | The tenant of a Pulsar Function. | |
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/functions-debug.md b/site2/website/versioned_docs/version-2.10.1-deprecated/functions-debug.md
deleted file mode 100644
index c1f19abda64657..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/functions-debug.md
+++ /dev/null
@@ -1,538 +0,0 @@
----
-id: functions-debug
-title: Debug Pulsar Functions
-sidebar_label: "How-to: Debug"
-original_id: functions-debug
----
-
-You can use the following methods to debug Pulsar Functions:
-
-* [Captured stderr](functions-debug.md#captured-stderr)
-* [Use unit test](functions-debug.md#use-unit-test)
-* [Debug with localrun mode](functions-debug.md#debug-with-localrun-mode)
-* [Use log topic](functions-debug.md#use-log-topic)
-* [Use Functions CLI](functions-debug.md#use-functions-cli)
-
-## Captured stderr
-
-Function startup information and captured stderr output is written to `logs/functions/<tenant>/<namespace>/<function-name>/<function-name>-<instance-id>.log`.
-
-This is useful for debugging why a function fails to start.
-
-## Use unit test
-
-A Pulsar Function is a function with inputs and outputs, so you can test it in a similar way as you test any function.
-
-For example, if you have the following Pulsar Function:
-
-```java
-
-import java.util.function.Function;
-
-public class JavaNativeExclamationFunction implements Function<String, String> {
-    @Override
-    public String apply(String input) {
-        return String.format("%s!", input);
-    }
-}
-
-```
-
-You can write a simple unit test to test the Pulsar Function.
-
-:::tip
-
-Pulsar uses TestNG for testing.
-
-:::
-
-```java
-
-@Test
-public void testJavaNativeExclamationFunction() {
-    JavaNativeExclamationFunction exclamation = new JavaNativeExclamationFunction();
-    String output = exclamation.apply("foo");
-    Assert.assertEquals(output, "foo!");
-}
-
-```
-
-The following Pulsar Function implements the `org.apache.pulsar.functions.api.Function` interface.
-
-```java
-
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-
-public class ExclamationFunction implements Function<String, String> {
-    @Override
-    public String process(String input, Context context) {
-        return String.format("%s!", input);
-    }
-}
-
-```
-
-In this situation, you can write a unit test for this function as well. Remember to mock the `Context` parameter. The following is an example.
-
-:::tip
-
-Pulsar uses TestNG for testing.
-
-:::
-
-```java
-
-@Test
-public void testExclamationFunction() {
-    ExclamationFunction exclamation = new ExclamationFunction();
-    String output = exclamation.process("foo", mock(Context.class));
-    Assert.assertEquals(output, "foo!");
-}
-
-```
-
-## Debug with localrun mode
-
-When you run a Pulsar Function in localrun mode, it launches an instance of the function on your local machine as a thread.
-
-In this mode, a Pulsar Function consumes and produces actual data to a Pulsar cluster, and mirrors how the function actually runs in a Pulsar cluster. 
-
-:::note
-
-Currently, debugging with localrun mode is only supported by Pulsar Functions written in Java. You need Pulsar version 2.4.0 or later to do the following. Even though localrun is available in versions earlier than Pulsar 2.4.0, you cannot debug with localrun mode programmatically or run Functions as threads.
-
-:::
-
-You can launch your function in the following manner.
-
-```java
-
-FunctionConfig functionConfig = new FunctionConfig();
-functionConfig.setName(functionName);
-functionConfig.setInputs(Collections.singleton(sourceTopic));
-functionConfig.setClassName(ExclamationFunction.class.getName());
-functionConfig.setRuntime(FunctionConfig.Runtime.JAVA);
-functionConfig.setOutput(sinkTopic);
-
-LocalRunner localRunner = LocalRunner.builder().functionConfig(functionConfig).build();
-localRunner.start(true);
-
-```
-
-So you can debug functions using an IDE easily. Set breakpoints and manually step through a function to debug with real data.
-
-The following example illustrates how to programmatically launch a function in localrun mode.
-
-```java
-
-import java.util.Collections;
-
-import org.apache.pulsar.common.functions.FunctionConfig;
-import org.apache.pulsar.functions.LocalRunner;
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-
-public class ExclamationFunction implements Function<String, String> {
-
-    @Override
-    public String process(String s, Context context) throws Exception {
-        return s + "!";
-    }
-
-    public static void main(String[] args) throws Exception {
-        FunctionConfig functionConfig = new FunctionConfig();
-        functionConfig.setName("exclamation");
-        functionConfig.setInputs(Collections.singleton("input"));
-        functionConfig.setClassName(ExclamationFunction.class.getName());
-        functionConfig.setRuntime(FunctionConfig.Runtime.JAVA);
-        functionConfig.setOutput("output");
-
-        LocalRunner localRunner = LocalRunner.builder().functionConfig(functionConfig).build();
-        localRunner.start(false);
-    }
-}
-
-```
-
-To use localrun mode programmatically, add the following dependency.
-
-```xml
-
-<dependency>
-    <groupId>org.apache.pulsar</groupId>
-    <artifactId>pulsar-functions-local-runner</artifactId>
-    <version>${pulsar.version}</version>
-</dependency>
-
-```
-
-For complete code samples, see [here](https://github.com/jerrypeng/pulsar-functions-demos/tree/master/debugging).
-
-:::note
-
-Debugging with localrun mode for Pulsar Functions written in other languages will be supported soon.
-
-:::
-
-## Use log topic
-
-In Pulsar Functions, you can generate log information defined in functions to a specified log topic. You can configure consumers to consume messages from a specified log topic to check the log information.
-
-![Pulsar Functions core programming model](/assets/pulsar-functions-overview.png)
-
-**Example**
-
-```java
-
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-import org.slf4j.Logger;
-
-public class LoggingFunction implements Function<String, Void> {
-    @Override
-    public Void process(String input, Context context) {
-        Logger LOG = context.getLogger();
-        String messageId = new String(context.getMessageId());
-
-        if (input.contains("danger")) {
-            LOG.warn("A warning was received in message {}", messageId);
-        } else {
-            LOG.info("Message {} received\nContent: {}", messageId, input);
-        }
-
-        return null;
-    }
-}
-
-```
-
-As shown in the example above, you can get the logger via `context.getLogger()` and assign the logger to the `LOG` variable of `slf4j`, so you can define your desired log information in a function using the `LOG` variable. Meanwhile, you need to specify the topic to which the log information is produced. 
-
-**Example**
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --log-topic persistent://public/default/logging-function-logs \
-  # Other function configs
-
-```
-
-The message published to the log topic contains several properties for easier debugging:
-- `loglevel` -- the level of the log message.
-- `fqn` -- the fully qualified name of the function that pushes this log message.
-- `instance` -- the ID of the function instance that pushes this log message.
-
-## Use Functions CLI
-
-With [Pulsar Functions CLI](reference-pulsar-admin.md#functions), you can debug Pulsar Functions with the following subcommands:
-
-* `get`
-* `status`
-* `stats`
-* `list`
-* `trigger`
-
-:::tip
-
-For the complete commands of the **Pulsar Functions CLI**, see [here](reference-pulsar-admin.md#functions).
-
-:::
-
-### `get`
-
-Get information about a Pulsar Function.
-
-**Usage**
-
-```bash
-
-$ pulsar-admin functions get options
-
-```
-
-**Options**
-
-|Flag|Description
-|---|---
-|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function.
-|`--name`|The name of a Pulsar Function.
-|`--namespace`|The namespace of a Pulsar Function.
-|`--tenant`|The tenant of a Pulsar Function.
-
-:::tip
-
-`--fqfn` consists of `--name`, `--namespace` and `--tenant`, so you can specify either `--fqfn` or the combination of `--name`, `--namespace` and `--tenant`.
-
-:::
-
-**Example**
-
-You can specify the FQFN to get information about a Pulsar Function.
-
-```bash
-
-$ ./bin/pulsar-admin functions get public/default/ExclamationFunctio6
-
-```
-
-Optionally, you can specify `--name`, `--namespace` and `--tenant` to get information about a Pulsar Function.
-
-```bash
-
-$ ./bin/pulsar-admin functions get \
-  --tenant public \
-  --namespace default \
-  --name ExclamationFunctio6
-
-```
-
-As shown below, the `get` command shows input, output, runtime, and other information about the _ExclamationFunctio6_ function.
-
-```json
-
-{
-  "tenant": "public",
-  "namespace": "default",
-  "name": "ExclamationFunctio6",
-  "className": "org.example.test.ExclamationFunction",
-  "inputSpecs": {
-    "persistent://public/default/my-topic-1": {
-      "isRegexPattern": false
-    }
-  },
-  "output": "persistent://public/default/test-1",
-  "processingGuarantees": "ATLEAST_ONCE",
-  "retainOrdering": false,
-  "userConfig": {},
-  "runtime": "JAVA",
-  "autoAck": true,
-  "parallelism": 1
-}
-
-```
-
-### `status`
-
-Check the current status of a Pulsar Function.
-
-**Usage**
-
-```bash
-
-$ pulsar-admin functions status options
-
-```
-
-**Options**
-
-|Flag|Description
-|---|---
-|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function.
-|`--instance-id`|The instance ID of a Pulsar Function
    If the `--instance-id` is not specified, it gets the IDs of all instances.
    -|`--name`|The name of a Pulsar Function. -|`--namespace`|The namespace of a Pulsar Function. -|`--tenant`|The tenant of a Pulsar Function. - -**Example** - -```bash - -$ ./bin/pulsar-admin functions status \ - --tenant public \ - --namespace default \ - --name ExclamationFunctio6 \ - -``` - -As shown below, the `status` command shows the number of instances, running instances, the instance running under the _ExclamationFunctio6_ function, received messages, successfully processed messages, system exceptions, the average latency and so on. - -```json - -{ - "numInstances" : 1, - "numRunning" : 1, - "instances" : [ { - "instanceId" : 0, - "status" : { - "running" : true, - "error" : "", - "numRestarts" : 0, - "numReceived" : 1, - "numSuccessfullyProcessed" : 1, - "numUserExceptions" : 0, - "latestUserExceptions" : [ ], - "numSystemExceptions" : 0, - "latestSystemExceptions" : [ ], - "averageLatency" : 0.8385, - "lastInvocationTime" : 1557734137987, - "workerId" : "c-standalone-fw-23ccc88ef29b-8080" - } - } ] -} - -``` - -### `stats` - -Get the current stats of a Pulsar Function. - -**Usage** - -```bash - -$ pulsar-admin functions stats options - -``` - -**Options** - -|Flag|Description -|---|--- -|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function. -|`--instance-id`|The instance ID of a Pulsar Function.
    If the `--instance-id` is not specified, it gets the IDs of all instances.
    -|`--name`|The name of a Pulsar Function. -|`--namespace`|The namespace of a Pulsar Function. -|`--tenant`|The tenant of a Pulsar Function. - -**Example** - -```bash - -$ ./bin/pulsar-admin functions stats \ - --tenant public \ - --namespace default \ - --name ExclamationFunctio6 \ - -``` - -The output is shown as follows: - -```json - -{ - "receivedTotal" : 1, - "processedSuccessfullyTotal" : 1, - "systemExceptionsTotal" : 0, - "userExceptionsTotal" : 0, - "avgProcessLatency" : 0.8385, - "1min" : { - "receivedTotal" : 0, - "processedSuccessfullyTotal" : 0, - "systemExceptionsTotal" : 0, - "userExceptionsTotal" : 0, - "avgProcessLatency" : null - }, - "lastInvocation" : 1557734137987, - "instances" : [ { - "instanceId" : 0, - "metrics" : { - "receivedTotal" : 1, - "processedSuccessfullyTotal" : 1, - "systemExceptionsTotal" : 0, - "userExceptionsTotal" : 0, - "avgProcessLatency" : 0.8385, - "1min" : { - "receivedTotal" : 0, - "processedSuccessfullyTotal" : 0, - "systemExceptionsTotal" : 0, - "userExceptionsTotal" : 0, - "avgProcessLatency" : null - }, - "lastInvocation" : 1557734137987, - "userMetrics" : { } - } - } ] -} - -``` - -### `list` - -List all Pulsar Functions running under a specific tenant and namespace. - -**Usage** - -```bash - -$ pulsar-admin functions list options - -``` - -**Options** - -|Flag|Description -|---|--- -|`--namespace`|The namespace of a Pulsar Function. -|`--tenant`|The tenant of a Pulsar Function. - -**Example** - -```bash - -$ ./bin/pulsar-admin functions list \ - --tenant public \ - --namespace default - -``` - -As shown below, the `list` command returns three functions running under the _public_ tenant and the _default_ namespace. - -```text - -ExclamationFunctio1 -ExclamationFunctio2 -ExclamationFunctio3 - -``` - -### `trigger` - -Trigger a specified Pulsar Function with a supplied value. This command simulates the execution process of a Pulsar Function and verifies it. - -**Usage** - -```bash - -$ pulsar-admin functions trigger options - -``` - -**Options** - -|Flag|Description -|---|--- -|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function. -|`--name`|The name of a Pulsar Function. -|`--namespace`|The namespace of a Pulsar Function. -|`--tenant`|The tenant of a Pulsar Function. -|`--topic`|The topic name that a Pulsar Function consumes from. -|`--trigger-file`|The path to a file that contains the data to trigger a Pulsar Function. -|`--trigger-value`|The value to trigger a Pulsar Function. - -**Example** - -```bash - -$ ./bin/pulsar-admin functions trigger \ - --tenant public \ - --namespace default \ - --name ExclamationFunctio6 \ - --topic persistent://public/default/my-topic-1 \ - --trigger-value "hello pulsar functions" - -``` - -As shown below, the `trigger` command returns the following result: - -```text - -This is my function! - -``` - -:::note - -You must specify the [entire topic name](getting-started-pulsar.md#topic-names) when using the `--topic` option. Otherwise, the following error occurs. 
- -```text - -Function in trigger function has unidentified topic -Reason: Function in trigger function has unidentified topic - -``` - -::: - diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/functions-deploy.md b/site2/website/versioned_docs/version-2.10.1-deprecated/functions-deploy.md deleted file mode 100644 index 826804db6bbb73..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/functions-deploy.md +++ /dev/null @@ -1,262 +0,0 @@ ---- -id: functions-deploy -title: Deploy Pulsar Functions -sidebar_label: "How-to: Deploy" -original_id: functions-deploy ---- - -## Requirements - -To deploy and manage Pulsar Functions, you need to have a Pulsar cluster running. There are several options for this: - -* You can run a [standalone cluster](getting-started-standalone.md) locally on your own machine. -* You can deploy a Pulsar cluster on [Kubernetes](deploy-kubernetes.md), [Amazon Web Services](deploy-aws.md), [bare metal](deploy-bare-metal.md), DC/OS, and more. - -If you run a non-[standalone](reference-terminology.md#standalone) cluster, you need to obtain the service URL for the cluster. How you obtain the service URL depends on how you deploy your Pulsar cluster. - -If you want to deploy and trigger Python user-defined functions, you need to install [the pulsar python client](client-libraries-python.md) on all the machines running [functions workers](functions-worker.md). - -## Command-line interface - -Pulsar Functions are deployed and managed using the [`pulsar-admin functions`](reference-pulsar-admin.md#functions) interface, which contains commands such as [`create`](reference-pulsar-admin.md#functions-create) for deploying functions in [cluster mode](#cluster-mode), [`trigger`](reference-pulsar-admin.md#trigger) for [triggering](#triggering-pulsar-functions) functions, [`list`](reference-pulsar-admin.md#list-2) for listing deployed functions. - -To learn more commands, refer to [`pulsar-admin functions`](reference-pulsar-admin.md#functions). - -### Default arguments - -When managing Pulsar Functions, you need to specify a variety of information about functions, including tenant, namespace, input and output topics, and so on. However, some parameters have default values if you do not specify values for them. The following table lists the default values. - -Parameter | Default -:---------|:------- -Function name | You can specify any value for the class name (except org, library, or similar class names). For example, when you specify the flag `--classname org.example.MyFunction`, the function name is `MyFunction`. -Tenant | Derived from names of the input topics. If the input topics are under the `marketing` tenant, which means the topic names have the form `persistent://marketing/{namespace}/{topicName}`, the tenant is `marketing`. -Namespace | Derived from names of the input topics. If the input topics are under the `asia` namespace under the `marketing` tenant, which means the topic names have the form `persistent://marketing/asia/{topicName}`, then the namespace is `asia`. -Output topic | `{input topic}-{function name}-output`. For example, if an input topic name of a function is `incoming`, and the function name is `exclamation`, then the name of the output topic is `incoming-exclamation-output`. 
-Subscription type | For `at-least-once` and `at-most-once` [processing guarantees](functions-overview.md#processing-guarantees), the [`SHARED`](concepts-messaging.md#shared) mode is applied by default; for `effectively-once` guarantees, the [`FAILOVER`](concepts-messaging.md#failover) mode is applied. -Processing guarantees | [`ATLEAST_ONCE`](functions-overview.md#processing-guarantees) -Pulsar service URL | `pulsar://localhost:6650` - -### Example of default arguments - -Take the `create` command as an example. - -```bash - -$ bin/pulsar-admin functions create \ - --jar my-pulsar-functions.jar \ - --classname org.example.MyFunction \ - --inputs my-function-input-topic1,my-function-input-topic2 - -``` - -The function has default values for the function name (`MyFunction`), tenant (`public`), namespace (`default`), subscription type (`SHARED`), processing guarantees (`ATLEAST_ONCE`), and Pulsar service URL (`pulsar://localhost:6650`). - -## Local run mode - -If you run a Pulsar Function in **local run** mode, it runs on the machine from which you enter the commands (on your laptop, an [AWS EC2](https://aws.amazon.com/ec2/) instance, and so on). The following is a [`localrun`](reference-pulsar-admin.md#localrun) command example. - -```bash - -$ bin/pulsar-admin functions localrun \ - --py myfunc.py \ - --classname myfunc.SomeFunction \ - --inputs persistent://public/default/input-1 \ - --output persistent://public/default/output-1 - -``` - -By default, the function connects to a Pulsar cluster running on the same machine, via a local [broker](reference-terminology.md#broker) service URL of `pulsar://localhost:6650`. If you use local run mode to run a function but connect it to a non-local Pulsar cluster, you can specify a different broker URL using the `--brokerServiceUrl` flag. The following is an example. - -```bash - -$ bin/pulsar-admin functions localrun \ - --broker-service-url pulsar://my-cluster-host:6650 \ - # Other function parameters - -``` - -## Cluster mode - -When you run a Pulsar Function in **cluster** mode, the function code is uploaded to a Pulsar broker and runs *alongside the broker* rather than in your [local environment](#local-run-mode). You can run a function in cluster mode using the [`create`](reference-pulsar-admin.md#create-1) command. - -```bash - -$ bin/pulsar-admin functions create \ - --py myfunc.py \ - --classname myfunc.SomeFunction \ - --inputs persistent://public/default/input-1 \ - --output persistent://public/default/output-1 - -``` - -### Update functions in cluster mode - -You can use the [`update`](reference-pulsar-admin.md#update-1) command to update a Pulsar Function running in cluster mode. The following command updates the function created in the [cluster mode](#cluster-mode) section. - -```bash - -$ bin/pulsar-admin functions update \ - --py myfunc.py \ - --classname myfunc.SomeFunction \ - --inputs persistent://public/default/new-input-topic \ - --output persistent://public/default/new-output-topic - -``` - -### Parallelism - -Pulsar Functions run as processes or threads, which are called **instances**. When you run a Pulsar Function, it runs as a single instance by default. With one localrun command, you can only run a single instance of a function. If you want to run multiple instances, you can use localrun command multiple times. - -When you create a function, you can specify the *parallelism* of a function (the number of instances to run). 
You can set the parallelism factor using the `--parallelism` flag of the [`create`](reference-pulsar-admin.md#functions-create) command.
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --parallelism 3 \
-  # Other function info
-
-```
-
-You can adjust the parallelism of an already created function using the [`update`](reference-pulsar-admin.md#update-1) interface.
-
-```bash
-
-$ bin/pulsar-admin functions update \
-  --parallelism 5 \
-  # Other function
-
-```
-
-If you specify a function configuration via YAML, use the `parallelism` parameter. The following is a config file example.
-
-```yaml
-
-# function-config.yaml
-parallelism: 3
-inputs:
-- persistent://public/default/input-1
-output: persistent://public/default/output-1
-# other parameters
-
-```
-
-The following is the corresponding update command.
-
-```bash
-
-$ bin/pulsar-admin functions update \
-  --function-config-file function-config.yaml
-
-```
-
-### Function instance resources
-
-When you run Pulsar Functions in [cluster mode](#cluster-mode), you can specify the resources that are assigned to each function [instance](#parallelism).
-
-Resource | Specified as | Runtimes
-:--------|:----------------|:--------
-CPU | The number of cores | Kubernetes
-RAM | The number of bytes | Process, Docker
-Disk space | The number of bytes | Docker
-
-The following function creation command allocates 8 cores, 8 GB of RAM, and 10 GB of disk space to a function.
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --jar target/my-functions.jar \
-  --classname org.example.functions.MyFunction \
-  --cpu 8 \
-  --ram 8589934592 \
-  --disk 10737418240
-
-```
-
-> #### Resources are *per instance*
-> The resources that you apply to a given Pulsar Function are applied to each instance of the function. For example, if you apply 8 GB of RAM to a function with a parallelism of 5, you are applying 40 GB of RAM for the function in total. Make sure that you factor the parallelism (the number of instances) into your resource calculations.
-
-### Use Package management service
-
-Package management enables version management and simplifies the upgrade and rollback processes for Functions, Sinks, and Sources. When you use the same function, sink, and source in different namespaces, you can upload them to a common package management system.
-
-To use the [Package management service](admin-api-packages.md), ensure that the package management service has been enabled in your cluster by setting the following properties in `broker.conf`.
-
-> Note: Package management service is not enabled by default.
-
-```yaml
-
-enablePackagesManagement=true
-packagesManagementStorageProvider=org.apache.pulsar.packages.management.storage.bookkeeper.BookKeeperPackagesStorageProvider
-packagesReplicas=1
-packagesManagementLedgerRootPath=/ledgers
-
-```
-
-With the Package management service enabled, you can [upload a package](admin-api-packages.md#upload-a-package) to the service and get the [package URL](admin-api-packages.md#package-url).
-
-When you have a ready-to-use package URL, you can create the function with the package URL by setting `--jar`, `--py`, or `--go` to the package URL with `pulsar-admin functions create`.
-
-## Trigger Pulsar Functions
-
-If a Pulsar Function is running in [cluster mode](#cluster-mode), you can **trigger** it at any time using the command line. Triggering a function means that you send a message with a specific value to the function and get the function output (if any) via the command line. 
-
-> Triggering a function means invoking it by producing a message on one of its input topics. With the [`pulsar-admin functions trigger`](reference-pulsar-admin.md#trigger) command, you can send messages to functions without using the [`pulsar-client`](reference-cli-tools.md#pulsar-client) tool or a language-specific client library.
-
-To learn how to trigger a function, you can start with a Python function that returns a simple string based on the input.
-
-```python
-
-# myfunc.py
-def process(input):
-    return "This function has been triggered with a value of {0}".format(input)
-
-```
-
-You can deploy the function in [cluster mode](#cluster-mode).
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --tenant public \
-  --namespace default \
-  --name myfunc \
-  --py myfunc.py \
-  --classname myfunc \
-  --inputs persistent://public/default/in \
-  --output persistent://public/default/out
-
-```
-
-Then assign a consumer to listen on the output topic for messages from the `myfunc` function with the [`pulsar-client consume`](reference-cli-tools.md#consume) command.
-
-```bash
-
-$ bin/pulsar-client consume persistent://public/default/out \
-  --subscription-name my-subscription \
-  --num-messages 0 # Listen indefinitely
-
-```
-
-And then you can trigger the function.
-
-```bash
-
-$ bin/pulsar-admin functions trigger \
-  --tenant public \
-  --namespace default \
-  --name myfunc \
-  --trigger-value "hello world"
-
-```
-
-The consumer listening on the output topic prints something like the following in its log.
-
-```
-
------- got message -----
-This function has been triggered with a value of hello world
-
-```
-
-> #### Topic info is not required
-> In the `trigger` command, you only need to specify basic information about the function (tenant, namespace, and name). To trigger the function, you do not need to know the function input topics.
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/functions-develop.md b/site2/website/versioned_docs/version-2.10.1-deprecated/functions-develop.md
deleted file mode 100644
index c32199517cfcc2..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/functions-develop.md
+++ /dev/null
@@ -1,1678 +0,0 @@
----
-id: functions-develop
-title: Develop Pulsar Functions
-sidebar_label: "How-to: Develop"
-original_id: functions-develop
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-Learn how to develop Pulsar Functions with different APIs for Java, Python, and Go.
-
-## Available APIs
-
-In Java and Python, you have two options to write Pulsar Functions. In Go, you can use the Pulsar Functions SDK for Go.
-
-Interface | Description | Use cases
-:---------|:------------|:---------
-Language-native interface | No Pulsar-specific libraries or special dependencies required (only core libraries from Java/Python). | Functions that do not require access to the function [context](#context).
-Pulsar Function SDK for Java/Python/Go | Pulsar-specific libraries that provide a range of functionality not provided by "native" interfaces. | Functions that require access to the function [context](#context).
-Extended Pulsar Function SDK for Java | An extension to Pulsar-specific libraries, providing the initialization and close interfaces in Java. | Functions that require initializing and releasing external resources. 
-
-### Language-native interface
-
-The language-native function, which adds an exclamation point to all incoming strings and publishes the resulting string to a topic, has no external dependencies. The following example is a language-native function.
-
-````mdx-code-block
-
-
-```Java
-
-import java.util.function.Function;
-
-public class JavaNativeExclamationFunction implements Function<String, String> {
-    @Override
-    public String apply(String input) {
-        return String.format("%s!", input);
-    }
-}
-
-```
-
-For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/JavaNativeExclamationFunction.java).
-
-
-```python
-
-def process(input):
-    return "{}!".format(input)
-
-```
-
-For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/python-examples/native_exclamation_function.py).
-
-:::note
-
-You can write Pulsar Functions in python2 or python3. However, Pulsar only looks for `python` as the interpreter.
-If you're running Pulsar Functions on an Ubuntu system that only supports python3, you might fail to
-start the functions. In this case, you can create a symlink as follows. Note, however, that your system will fail if
-you subsequently install any other package that depends on Python 2.x. A solution is under development in [Issue 5518](https://github.com/apache/pulsar/issues/5518).
-
-```bash
-
-sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 10
-
-```
-
-:::
-
-
-````
-
-### Pulsar Function SDK for Java/Python/Go
-
-The following example uses the Pulsar Functions SDK.
-
-````mdx-code-block
-
-
-```Java
-
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-
-public class ExclamationFunction implements Function<String, String> {
-    @Override
-    public String process(String input, Context context) {
-        return String.format("%s!", input);
-    }
-}
-
-```
-
-For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/ExclamationFunction.java).
-
-
-```python
-
-from pulsar import Function
-
-class ExclamationFunction(Function):
-  def __init__(self):
-    pass
-
-  def process(self, input, context):
-    return input + '!'
-
-```
-
-For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/python-examples/exclamation_function.py).
-
-
-```Go
-
-package main
-
-import (
-	"context"
-	"fmt"
-
-	"github.com/apache/pulsar/pulsar-function-go/pf"
-)
-
-func HandleRequest(ctx context.Context, in []byte) error {
-	fmt.Println(string(in) + "!")
-	return nil
-}
-
-func main() {
-	pf.Start(HandleRequest)
-}
-
-```
-
-For complete code, see [here](https://github.com/apache/pulsar/blob/77cf09eafa4f1626a53a1fe2e65dd25f377c1127/pulsar-function-go/examples/inputFunc/inputFunc.go#L20-L36).
-
-
-````
-
-### Extended Pulsar Function SDK for Java
-
-This extended Pulsar Function SDK provides two additional interfaces to initialize and release external resources.
-- By using the `initialize` interface, you can initialize external resources that only need one-time initialization when the function instance starts.
-- By using the `close` interface, you can close the referenced external resources when the function instance closes.
-
-:::note
-
-The extended Pulsar Function SDK for Java is available in Pulsar 2.10.0 and later versions.
-Before using it, you need to set up Pulsar Function worker 2.10.0 or later versions. 
-
-:::
-
-The following example uses the extended interface of the Pulsar Function SDK for Java to initialize a RedisClient when the function instance starts and release it when the function instance closes.
-
-````mdx-code-block
-
-
-```Java
-
-import java.util.Map;
-
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-import io.lettuce.core.RedisClient;
-import io.lettuce.core.api.StatefulRedisConnection;
-
-public class InitializableFunction implements Function<String, String> {
-    private RedisClient redisClient;
-    private StatefulRedisConnection<String, String> connection;
-
-    private void initRedisClient(Map<String, Object> connectInfo) {
-        redisClient = RedisClient.create(String.valueOf(connectInfo.get("redisURI")));
-        connection = redisClient.connect();
-    }
-
-    @Override
-    public void initialize(Context context) {
-        Map<String, Object> connectInfo = context.getUserConfigMap();
-        initRedisClient(connectInfo);
-    }
-
-    @Override
-    public String process(String input, Context context) {
-        // Look up the input key in Redis and combine it with the input.
-        String value = connection.sync().get(input);
-        return String.format("%s-%s", input, value);
-    }
-
-    @Override
-    public void close() {
-        connection.close();
-        redisClient.shutdown();
-    }
-}
-
-```
-
-
-````
-
-## Schema registry
-
-Pulsar has a built-in schema registry and is bundled with popular schema types, such as Avro, JSON and Protobuf. Pulsar Functions can leverage the existing schema information from input topics and derive the input type. The schema registry applies to the output topic as well.
-
-## SerDe
-
-SerDe stands for **Ser**ialization and **De**serialization. Pulsar Functions uses SerDe when publishing data to and consuming data from Pulsar topics. How SerDe works by default depends on the language you use for a particular function.
-
-````mdx-code-block
-
-
-When you write Pulsar Functions in Java, the following basic Java types are built in and supported by default: `String`, `Double`, `Integer`, `Float`, `Long`, `Short`, and `Byte`.
-
-To customize Java types, you need to implement the following interface.
-
-```java
-
-public interface SerDe<T> {
-    T deserialize(byte[] input);
-    byte[] serialize(T input);
-}
-
-```
-
-SerDe works in the following ways in Java Functions.
-- If the input and output topics have a schema, Pulsar Functions use the schema for SerDe.
-- If the input or output topics do not exist, Pulsar Functions adopt the following rules to determine SerDe:
-  - If the schema type is specified, Pulsar Functions use the specified schema type.
-  - If SerDe is specified, Pulsar Functions use the specified SerDe, and the schema type for input and output topics is `Byte`.
-  - If neither the schema type nor SerDe is specified, Pulsar Functions use the built-in SerDe. For non-primitive schema types, the built-in SerDe serializes and deserializes objects in the `JSON` format.
-
-
-In Python, the default SerDe is identity, meaning that the type is serialized as whatever type the producer function returns.
-
-You can specify the SerDe when [creating](functions-deploy.md#cluster-mode) or [running](functions-deploy.md#local-run-mode) functions.
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --tenant public \
-  --namespace default \
-  --name my_function \
-  --py my_function.py \
-  --classname my_function.MyFunction \
-  --custom-serde-inputs '{"input-topic-1":"Serde1","input-topic-2":"Serde2"}' \
-  --output-serde-classname Serde3 \
-  --output output-topic-1
-
-```
-
-This case contains two input topics: `input-topic-1` and `input-topic-2`, each of which is mapped to a different SerDe class (the map must be specified as a JSON string). The output topic, `output-topic-1`, uses the `Serde3` class for SerDe. 
At the moment, all Pulsar Functions logic, including the processing function and SerDe classes, must be contained within a single Python file.
-
-When using Pulsar Functions for Python, you have three SerDe options:
-
-1. You can use the [`IdentitySerde`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L70), which leaves the data unchanged. The `IdentitySerDe` is the **default**. Creating or running a function without explicitly specifying SerDe means that this option is used.
-2. You can use the [`PickleSerDe`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L62), which uses Python [`pickle`](https://docs.python.org/3/library/pickle.html) for SerDe.
-3. You can create a custom SerDe class by implementing the baseline [`SerDe`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L50) class, which has just two methods: [`serialize`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L53) for converting the object into bytes, and [`deserialize`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L58) for converting bytes into an object of the required application-specific type.
-
-The table below shows when you should use each SerDe.
-
-SerDe option | When to use
-:------------|:-----------
-`IdentitySerde` | When you work with simple types like strings, Booleans, and integers.
-`PickleSerDe` | When you work with complex, application-specific types and are comfortable with the "best effort" approach of `pickle`.
-Custom SerDe | When you require explicit control over SerDe, potentially for performance or data compatibility purposes.
-
-
-Currently, the feature is not available in Go.
-
-
-````
-
-### Example
-
-Imagine that you're writing Pulsar Functions that process tweet objects. You can refer to the following example of a `Tweet` class.
-
-````mdx-code-block
-
-
-```java
-
-public class Tweet {
-    private String username;
-    private String tweetContent;
-
-    public Tweet(String username, String tweetContent) {
-        this.username = username;
-        this.tweetContent = tweetContent;
-    }
-
-    // Standard setters and getters
-}
-
-```
-
-To pass `Tweet` objects directly between Pulsar Functions, you need to provide a custom SerDe class. In the example below, `Tweet` objects are basically strings in which the username and tweet content are separated by a `|`.
-
-```java
-
-package com.example.serde;
-
-import org.apache.pulsar.functions.api.SerDe;
-
-import java.util.regex.Pattern;
-
-public class TweetSerde implements SerDe<Tweet> {
-    public Tweet deserialize(byte[] input) {
-        String s = new String(input);
-        String[] fields = s.split(Pattern.quote("|"));
-        return new Tweet(fields[0], fields[1]);
-    }
-
-    public byte[] serialize(Tweet input) {
-        return String.format("%s|%s", input.getUsername(), input.getTweetContent()).getBytes();
-    }
-}
-
-```
-
-To apply this customized SerDe to a particular Pulsar Function, you need to:
-
-* Package the `Tweet` and `TweetSerde` classes into a JAR.
-* Specify a path to the JAR and SerDe class name when deploying the function.
-
-The following is an example of the [`create`](reference-pulsar-admin.md#create-1) operation. 
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --jar /path/to/your.jar \
-  --output-serde-classname com.example.serde.TweetSerde \
-  # Other function attributes
-
-```
-
-> #### Custom SerDe classes must be packaged with your function JARs
-> Pulsar does not store your custom SerDe classes separately from your Pulsar Functions. So you need to include your SerDe classes in your function JARs. If not, Pulsar returns an error.
-
-
-```python
-
-class Tweet(object):
-    def __init__(self, username, tweet_content):
-        self.username = username
-        self.tweet_content = tweet_content
-
-```
-
-In order to use this class in Pulsar Functions, you have two options:
-
-1. You can specify `PickleSerDe`, which applies the [`pickle`](https://docs.python.org/3/library/pickle.html) library SerDe.
-2. You can create your own SerDe class. The following is an example.
-
-   ```python
-
-   from pulsar import SerDe
-
-   class TweetSerDe(SerDe):
-
-       def serialize(self, input):
-           return "{0}|{1}".format(input.username, input.tweet_content).encode('utf-8')
-
-       def deserialize(self, input_bytes):
-           tweet_components = input_bytes.decode('utf-8').split('|')
-           return Tweet(tweet_components[0], tweet_components[1])
-
-   ```
-
-For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/python-examples/custom_object_function.py).
-
-
-````
-
-In both languages, however, you can write custom SerDe logic for more complex, application-specific types.
-
-## Context
-
-The Java, Python and Go SDKs provide access to a **context object** that can be used by a function. This context object provides a wide variety of information and functionality to the function.
-
-* The name and ID of a Pulsar Function.
-* The message ID of each message. Each Pulsar message is automatically assigned an ID.
-* The key, event time, properties and partition key of each message.
-* The name of the topic to which the message is sent.
-* The names of all input topics as well as the output topic associated with the function.
-* The name of the class used for [SerDe](#serde).
-* The [tenant](reference-terminology.md#tenant) and namespace associated with the function.
-* The ID of the Pulsar Functions instance running the function.
-* The version of the function.
-* The [logger object](functions-develop.md#logger) used by the function, which can be used to create function log messages.
-* Access to arbitrary [user configuration](#user-config) values supplied via the CLI.
-* An interface for recording [metrics](#metrics).
-* An interface for storing and retrieving state in [state storage](#state-storage).
-* A function to publish new messages onto arbitrary topics.
-* A function to ack the message being processed (if auto-ack is disabled).
-* (Java) An interface for getting the Pulsar admin client.
-
-````mdx-code-block
-
-
-The [Context](https://github.com/apache/pulsar/blob/master/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/Context.java) interface provides a number of methods that you can use to access the function [context](#context). The various method signatures for the `Context` interface are listed as follows. 
```java

public interface Context {
    Record<?> getCurrentRecord();
    Collection<String> getInputTopics();
    String getOutputTopic();
    String getOutputSchemaType();
    String getTenant();
    String getNamespace();
    String getFunctionName();
    String getFunctionId();
    String getInstanceId();
    String getFunctionVersion();
    Logger getLogger();
    void incrCounter(String key, long amount);
    CompletableFuture<Void> incrCounterAsync(String key, long amount);
    long getCounter(String key);
    CompletableFuture<Long> getCounterAsync(String key);
    void putState(String key, ByteBuffer value);
    CompletableFuture<Void> putStateAsync(String key, ByteBuffer value);
    void deleteState(String key);
    ByteBuffer getState(String key);
    CompletableFuture<ByteBuffer> getStateAsync(String key);
    Map<String, Object> getUserConfigMap();
    Optional<Object> getUserConfigValue(String key);
    Object getUserConfigValueOrDefault(String key, Object defaultValue);
    void recordMetric(String metricName, double value);
    <O> CompletableFuture<Void> publish(String topicName, O object, String schemaOrSerdeClassName);
    <O> CompletableFuture<Void> publish(String topicName, O object);
    <O> TypedMessageBuilder<O> newOutputMessage(String topicName, Schema<O> schema) throws PulsarClientException;
    <X> ConsumerBuilder<X> newConsumerBuilder(Schema<X> schema) throws PulsarClientException;
    PulsarAdmin getPulsarAdmin();
    PulsarAdmin getPulsarAdmin(String clusterName);
}

```

The following example uses several methods available via the `Context` object.

```java

import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;
import org.slf4j.Logger;

import java.util.stream.Collectors;

public class ContextFunction implements Function<String, Void> {
    public Void process(String input, Context context) {
        Logger LOG = context.getLogger();
        String inputTopics = context.getInputTopics().stream().collect(Collectors.joining(", "));
        String functionName = context.getFunctionName();

        String logMessage = String.format("A message with a value of \"%s\" has arrived on one of the following topics: %s\n",
                input,
                inputTopics);

        LOG.info(logMessage);

        String metricName = String.format("function-%s-messages-received", functionName);
        context.recordMetric(metricName, 1);

        return null;
    }
}

```



```python

class ContextImpl(pulsar.Context):
  def get_message_id(self):
    ...
  def get_message_key(self):
    ...
  def get_message_eventtime(self):
    ...
  def get_message_properties(self):
    ...
  def get_current_message_topic_name(self):
    ...
  def get_partition_key(self):
    ...
  def get_function_name(self):
    ...
  def get_function_tenant(self):
    ...
  def get_function_namespace(self):
    ...
  def get_function_id(self):
    ...
  def get_instance_id(self):
    ...
  def get_function_version(self):
    ...
  def get_logger(self):
    ...
  def get_user_config_value(self, key):
    ...
  def get_user_config_map(self):
    ...
  def record_metric(self, metric_name, metric_value):
    ...
  def get_input_topics(self):
    ...
  def get_output_topic(self):
    ...
  def get_output_serde_class_name(self):
    ...
  def publish(self, topic_name, message, serde_class_name="serde.IdentitySerDe",
              properties=None, compression_type=None, callback=None, message_conf=None):
    ...
  def ack(self, msgid, topic):
    ...
  def get_and_reset_metrics(self):
    ...
  def reset_metrics(self):
    ...
  def get_metrics(self):
    ...
  def incr_counter(self, key, amount):
    ...
  def get_counter(self, key):
    ...
  def del_counter(self, key):
    ...
  def put_state(self, key, value):
    ...
  def get_state(self, key):
    ...

```
```go

func (c *FunctionContext) GetInstanceID() int {
    return c.instanceConf.instanceID
}

func (c *FunctionContext) GetInputTopics() []string {
    return c.inputTopics
}

func (c *FunctionContext) GetOutputTopic() string {
    return c.instanceConf.funcDetails.GetSink().Topic
}

func (c *FunctionContext) GetFuncTenant() string {
    return c.instanceConf.funcDetails.Tenant
}

func (c *FunctionContext) GetFuncName() string {
    return c.instanceConf.funcDetails.Name
}

func (c *FunctionContext) GetFuncNamespace() string {
    return c.instanceConf.funcDetails.Namespace
}

func (c *FunctionContext) GetFuncID() string {
    return c.instanceConf.funcID
}

func (c *FunctionContext) GetFuncVersion() string {
    return c.instanceConf.funcVersion
}

func (c *FunctionContext) GetUserConfValue(key string) interface{} {
    return c.userConfigs[key]
}

func (c *FunctionContext) GetUserConfMap() map[string]interface{} {
    return c.userConfigs
}

func (c *FunctionContext) SetCurrentRecord(record pulsar.Message) {
    c.record = record
}

func (c *FunctionContext) GetCurrentRecord() pulsar.Message {
    return c.record
}

func (c *FunctionContext) NewOutputMessage(topic string) pulsar.Producer {
    return c.outputMessage(topic)
}

```

The following example uses several methods available via the `Context` object.

```go

import (
    "context"
    "fmt"

    "github.com/apache/pulsar/pulsar-function-go/pf"
)

func contextFunc(ctx context.Context) {
    if fc, ok := pf.FromContext(ctx); ok {
        fmt.Printf("function ID is:%s, ", fc.GetFuncID())
        fmt.Printf("function version is:%s\n", fc.GetFuncVersion())
    }
}

```

For complete code, see [here](https://github.com/apache/pulsar/blob/77cf09eafa4f1626a53a1fe2e65dd25f377c1127/pulsar-function-go/examples/contextFunc/contextFunc.go#L29-L34).



````

### User config
When you run or update Pulsar Functions created using the SDK, you can pass arbitrary key/values to them via the command line with the `--user-config` flag. Key/values must be specified as JSON. The following function creation command passes a user-configured key/value to a function.

```bash

$ bin/pulsar-admin functions create \
  --name word-filter \
  --user-config '{"forbidden-word":"rosebud"}' \
  # Other function configs

```

````mdx-code-block



The Java SDK [`Context`](#context) object enables you to access key/value pairs provided to Pulsar Functions via the command line (as JSON). The following example passes a key/value pair.

```bash

$ bin/pulsar-admin functions create \
  --user-config '{"word-of-the-day":"verdure"}' \
  # Other function configs

```

To access that value in a Java function:

```java

import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;
import org.slf4j.Logger;

import java.util.Optional;

public class UserConfigFunction implements Function<String, Void> {
    @Override
    public Void process(String input, Context context) {
        Logger LOG = context.getLogger();
        Optional<Object> wotd = context.getUserConfigValue("word-of-the-day");
        if (wotd.isPresent()) {
            LOG.info("The word of the day is {}", wotd.get());
        } else {
            LOG.warn("No word of the day provided");
        }
        return null;
    }
}

```

The `UserConfigFunction` function will log the string `"The word of the day is verdure"` every time the function is invoked (which means every time a message arrives).
The `word-of-the-day` user config will be changed only when the function is updated with a new config value via the command line (see the `update` sketch at the end of this section).

You can also access the entire user config map or set a default value in case no value is present:

```java

// Get the whole config map
Map<String, Object> allConfigs = context.getUserConfigMap();

// Get value or resort to default
String wotd = (String) context.getUserConfigValueOrDefault("word-of-the-day", "perspicacious");

```

> For all key/value pairs passed to Java functions, both the key *and* the value are `String`. To use the value as a different type, you need to deserialize it from the `String` type.



In a Python function, you can access the configuration value like this.

```python

from pulsar import Function

class WordFilter(Function):
    def process(self, input, context):
        forbidden_word = context.get_user_config_value("forbidden-word")

        # Don't publish the message if it contains the user-supplied
        # forbidden word
        if forbidden_word in input:
            pass
        # Otherwise publish the message
        else:
            return input

```

The Python SDK [`Context`](#context) object enables you to access key/value pairs provided to Pulsar Functions via the command line (as JSON). The following example passes a key/value pair.

```bash

$ bin/pulsar-admin functions create \
  --user-config '{"word-of-the-day":"verdure"}' \
  # Other function configs

```

To access that value in a Python function:

```python

from pulsar import Function

class UserConfigFunction(Function):
    def process(self, input, context):
        logger = context.get_logger()
        wotd = context.get_user_config_value('word-of-the-day')
        if wotd is None:
            logger.warn('No word of the day provided')
        else:
            logger.info("The word of the day is {0}".format(wotd))

```



The Go SDK [`Context`](#context) object enables you to access key/value pairs provided to Pulsar Functions via the command line (as JSON). The following example passes a key/value pair.

```bash

$ bin/pulsar-admin functions create \
  --go path/to/go/binary \
  --user-config '{"word-of-the-day":"lackadaisical"}'

```

To access that value in a Go function:

```go

func contextFunc(ctx context.Context) {
    fc, ok := pf.FromContext(ctx)
    if !ok {
        logutil.Fatal("Function context is not defined")
    }

    wotd := fc.GetUserConfValue("word-of-the-day")

    if wotd == nil {
        logutil.Warn("The word of the day is empty")
    } else {
        logutil.Infof("The word of the day is %s", wotd.(string))
    }
}

```



````
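As noted above, user config values set at creation time can be replaced later with the `update` subcommand. The following is a minimal sketch reusing the `word-filter` function and `forbidden-word` key from the earlier example; the new value `"xanadu"` is an arbitrary illustration.

```bash

$ bin/pulsar-admin functions update \
  --name word-filter \
  --user-config '{"forbidden-word":"xanadu"}' \
  # Other function configs

```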
### Logger

````mdx-code-block



Pulsar Functions that use the Java SDK have access to an [SLF4j](https://www.slf4j.org/) [`Logger`](https://www.slf4j.org/api/org/slf4j/Logger.html) object that can be used to produce logs at the chosen log level. The following example logs either a `WARNING`- or `INFO`-level log based on whether the incoming string contains the word `danger`.

```java

import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;
import org.slf4j.Logger;

public class LoggingFunction implements Function<String, Void> {
    @Override
    public Void process(String input, Context context) {
        Logger LOG = context.getLogger();
        // Derive the message ID from the current record
        String messageId = context.getCurrentRecord().getMessage().get().getMessageId().toString();

        if (input.contains("danger")) {
            LOG.warn("A warning was received in message {}", messageId);
        } else {
            LOG.info("Message {} received\nContent: {}", messageId, input);
        }

        return null;
    }
}

```

If you want your function to produce logs, you need to specify a log topic when creating or running the function. The following is an example.

```bash

$ bin/pulsar-admin functions create \
  --jar my-functions.jar \
  --classname my.package.LoggingFunction \
  --log-topic persistent://public/default/logging-function-logs \
  # Other function configs

```

All logs produced by `LoggingFunction` above can be accessed via the `persistent://public/default/logging-function-logs` topic.

#### Customize Function log level
Additionally, you can use the XML file, `functions_log4j2.xml`, to customize the function log level.
To customize the function log level, create or update `functions_log4j2.xml` in your Pulsar conf directory (for example, `/etc/pulsar/` on bare-metal, or `/pulsar/conf` on Kubernetes) to contain contents such as:

```xml

<Configuration>
    <name>pulsar-functions-instance</name>
    <monitorInterval>30</monitorInterval>
    <Properties>
        <Property>
            <name>pulsar.log.appender</name>
            <value>RollingFile</value>
        </Property>
        <Property>
            <name>pulsar.log.level</name>
            <value>debug</value>
        </Property>
        <Property>
            <name>bk.log.level</name>
            <value>debug</value>
        </Property>
    </Properties>
    <Appenders>
        <Console>
            <name>Console</name>
            <target>SYSTEM_OUT</target>
            <PatternLayout>
                <Pattern>%d{ISO8601_OFFSET_DATE_TIME_HHMM} [%t] %-5level %logger{36} - %msg%n</Pattern>
            </PatternLayout>
        </Console>
        <RollingFile>
            <name>RollingFile</name>
            <fileName>${sys:pulsar.function.log.dir}/${sys:pulsar.function.log.file}.log</fileName>
            <filePattern>${sys:pulsar.function.log.dir}/${sys:pulsar.function.log.file}-%d{MM-dd-yyyy}-%i.log.gz</filePattern>
            <immediateFlush>true</immediateFlush>
            <PatternLayout>
                <Pattern>%d{ISO8601_OFFSET_DATE_TIME_HHMM} [%t] %-5level %logger{36} - %msg%n</Pattern>
            </PatternLayout>
            <Policies>
                <TimeBasedTriggeringPolicy>
                    <interval>1</interval>
                    <modulate>true</modulate>
                </TimeBasedTriggeringPolicy>
                <SizeBasedTriggeringPolicy>
                    <size>1 GB</size>
                </SizeBasedTriggeringPolicy>
                <CronTriggeringPolicy>
                    <schedule>0 0 0 * * ?</schedule>
                </CronTriggeringPolicy>
            </Policies>
            <DefaultRolloverStrategy>
                <Delete>
                    <basePath>${sys:pulsar.function.log.dir}</basePath>
                    <maxDepth>2</maxDepth>
                    <IfFileName>
                        <glob>*/${sys:pulsar.function.log.file}*log.gz</glob>
                    </IfFileName>
                    <IfLastModified>
                        <age>30d</age>
                    </IfLastModified>
                </Delete>
            </DefaultRolloverStrategy>
        </RollingFile>
        <RollingFile>
            <name>BkRollingFile</name>
            <fileName>${sys:pulsar.function.log.dir}/${sys:pulsar.function.log.file}.bk</fileName>
            <filePattern>${sys:pulsar.function.log.dir}/${sys:pulsar.function.log.file}.bk-%d{MM-dd-yyyy}-%i.log.gz</filePattern>
            <immediateFlush>true</immediateFlush>
            <PatternLayout>
                <Pattern>%d{ISO8601_OFFSET_DATE_TIME_HHMM} [%t] %-5level %logger{36} - %msg%n</Pattern>
            </PatternLayout>
            <Policies>
                <TimeBasedTriggeringPolicy>
                    <interval>1</interval>
                    <modulate>true</modulate>
                </TimeBasedTriggeringPolicy>
                <SizeBasedTriggeringPolicy>
                    <size>1 GB</size>
                </SizeBasedTriggeringPolicy>
                <CronTriggeringPolicy>
                    <schedule>0 0 0 * * ?</schedule>
                </CronTriggeringPolicy>
            </Policies>
            <DefaultRolloverStrategy>
                <Delete>
                    <basePath>${sys:pulsar.function.log.dir}</basePath>
                    <maxDepth>2</maxDepth>
                    <IfFileName>
                        <glob>*/${sys:pulsar.function.log.file}.bk*log.gz</glob>
                    </IfFileName>
                    <IfLastModified>
                        <age>30d</age>
                    </IfLastModified>
                </Delete>
            </DefaultRolloverStrategy>
        </RollingFile>
    </Appenders>
    <Loggers>
        <Logger>
            <name>org.apache.pulsar.functions.runtime.shaded.org.apache.bookkeeper</name>
            <level>${sys:bk.log.level}</level>
            <additivity>false</additivity>
            <AppenderRef>
                <ref>BkRollingFile</ref>
            </AppenderRef>
        </Logger>
        <Root>
            <level>${sys:pulsar.log.level}</level>
            <AppenderRef>
                <ref>${sys:pulsar.log.appender}</ref>
                <level>${sys:pulsar.log.level}</level>
            </AppenderRef>
        </Root>
    </Loggers>
</Configuration>

```

Properties set like:

```xml

<Property>
    <name>pulsar.log.level</name>
    <value>debug</value>
</Property>

```

propagate to places where they are referenced, such as:

```xml

<Root>
    <level>${sys:pulsar.log.level}</level>
    <AppenderRef>
        <ref>${sys:pulsar.log.appender}</ref>
        <level>${sys:pulsar.log.level}</level>
    </AppenderRef>
</Root>

```

In the above example, debug-level logging would be applied to ALL function logs.
This may be more verbose than you desire. To be more selective, you can apply different log levels to different classes or modules.
For example:

```xml

<Logger>
    <name>com.example.module</name>
    <level>info</level>
    <additivity>false</additivity>
    <AppenderRef>
        <ref>${sys:pulsar.log.appender}</ref>
    </AppenderRef>
</Logger>

```

You can be more specific as well, such as applying a more verbose log level to a single class in the module:

```xml

<Logger>
    <name>com.example.module.className</name>
    <level>debug</level>
    <additivity>false</additivity>
    <AppenderRef>
        <ref>Console</ref>
    </AppenderRef>
</Logger>

```

Each `<AppenderRef>` entry allows you to output the log to a target specified in the definition of the Appender.

Additivity pertains to whether log messages will be duplicated if multiple `<Logger>` entries overlap.
To disable additivity, specify

```xml

<additivity>false</additivity>

```

as shown in the examples above. Disabling additivity prevents duplication of log messages when one or more `<Logger>` entries contain classes or modules that overlap.

The `<AppenderRef>` target is defined in the `<Appenders>` section, such as:

```xml

<Console>
    <name>Console</name>
    <target>SYSTEM_OUT</target>
    <PatternLayout>
        <Pattern>%d{ISO8601_OFFSET_DATE_TIME_HHMM} [%t] %-5level %logger{36} - %msg%n</Pattern>
    </PatternLayout>
</Console>

```



Pulsar Functions that use the Python SDK have access to a logging object that can be used to produce logs at the chosen log level. The following example function logs either a `WARNING`- or `INFO`-level log based on whether the incoming string contains the word `danger`.

```python

from pulsar import Function

class LoggingFunction(Function):
    def process(self, input, context):
        logger = context.get_logger()
        msg_id = context.get_message_id()
        if 'danger' in input:
            logger.warn("A warning was received in message {0}".format(msg_id))
        else:
            logger.info("Message {0} received\nContent: {1}".format(msg_id, input))

```

If you want your function to produce logs on a Pulsar topic, you need to specify a **log topic** when creating or running the function. The following is an example.

```bash

$ bin/pulsar-admin functions create \
  --py logging_function.py \
  --classname logging_function.LoggingFunction \
  --log-topic logging-function-logs \
  # Other function configs

```

All logs produced by `LoggingFunction` above can be accessed via the `logging-function-logs` topic.
Additionally, you can specify the function log level through the `functions_log4j2.xml` file as described in [Customize Function log level](#customize-function-log-level).



The following Go Function example shows different log levels based on the function input.

```go

import (
    "context"

    "github.com/apache/pulsar/pulsar-function-go/pf"

    log "github.com/apache/pulsar/pulsar-function-go/logutil"
)

func loggerFunc(ctx context.Context, input []byte) {
    if len(input) <= 100 {
        log.Infof("This input has a length of: %d", len(input))
    } else {
        log.Warnf("This input is getting too long! It has {%d} characters", len(input))
    }
}

func main() {
    pf.Start(loggerFunc)
}

```

When you use `logTopic`-related functionalities in a Go function, import `github.com/apache/pulsar/pulsar-function-go/logutil`; you do not have to use the `getLogger()` context object.

Additionally, you can specify the function log level through the `functions_log4j2.xml` file, as described here: [Customize Function log level](#customize-function-log-level)



````

### Pulsar admin

Pulsar Functions that use the Java SDK have access to the Pulsar admin client, which can be used to issue admin API calls against the current Pulsar cluster or external clusters (if `external-pulsars` is provided).

````mdx-code-block



Below is an example of how to use the Pulsar admin client exposed from the Function `context`.
```java

import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;

/**
 * In this particular example, for every input message,
 * the function resets the cursor of the current function's subscription to a
 * specified timestamp.
 */
public class CursorManagementFunction implements Function<String, String> {

    @Override
    public String process(String input, Context context) throws Exception {
        PulsarAdmin adminClient = context.getPulsarAdmin();
        if (adminClient != null) {
            String topic = context.getCurrentRecord().getTopicName().isPresent() ?
                    context.getCurrentRecord().getTopicName().get() : null;
            String subName = context.getTenant() + "/" + context.getNamespace() + "/" + context.getFunctionName();
            if (topic != null) {
                // 1578188166 below is an arbitrarily chosen timestamp
                adminClient.topics().resetCursor(topic, subName, 1578188166);
                return "reset cursor successfully";
            }
        }
        return null;
    }
}

```

If you want your function to get access to the Pulsar admin client, you need to enable this feature by setting `exposeAdminClientEnabled=true` in the `functions_worker.yml` file. You can test whether this feature is enabled or not using the command `pulsar-admin functions localrun` with the flag `--web-service-url`.

```bash

$ bin/pulsar-admin functions localrun \
  --jar my-functions.jar \
  --classname my.package.CursorManagementFunction \
  --web-service-url http://pulsar-web-service:8080 \
  # Other function configs

```



````

## Metrics

Pulsar Functions allow you to easily deploy and manage processing functions that consume messages from and publish messages to Pulsar topics. It is important to ensure that the running functions are healthy at all times. Pulsar Functions can publish arbitrary metrics to the metrics interface, which can be queried.

:::note

If a Pulsar Function uses the language-native interface for Java or Python, that function is not able to publish metrics and stats to Pulsar.

:::

You can monitor Pulsar Functions that have been deployed with the following methods:

- Check the metrics provided by Pulsar.

  Pulsar Functions expose the metrics that can be collected and used for monitoring the health of **Java, Python, and Go** functions. You can check the metrics by following the [monitoring](deploy-monitoring.md) guide, or fetch them ad hoc from the CLI as shown in the sketch after this list.

  For the complete list of the function metrics, see [here](reference-metrics.md#pulsar-functions).

- Set and check your customized metrics.

  In addition to the metrics provided by Pulsar, Pulsar allows you to customize metrics for **Java and Python** functions. Function workers collect user-defined metrics to Prometheus automatically and you can check them in Grafana.
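For a quick ad-hoc health check, you can also fetch the current stats of a running function with the `pulsar-admin` CLI. The following is a minimal sketch; the `word-count` function name reuses the earlier example and is an assumption, so substitute your own tenant, namespace, and name.

```bash

$ bin/pulsar-admin functions stats \
  --tenant public \
  --namespace default \
  --name word-count

```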
Here are examples of how to customize metrics for Java and Python functions.

````mdx-code-block



You can record metrics using the [`Context`](#context) object on a per-key basis. For example, you can set a metric for the `hit-count` key and a different metric for the `elevens-count` key every time the function processes a message.

```java

import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;

public class MetricRecorderFunction implements Function<Integer, Void> {
    @Override
    public Void process(Integer input, Context context) {
        // Records the metric 1 every time a message arrives
        context.recordMetric("hit-count", 1);

        // Records the metric only if the arriving number equals 11
        if (input == 11) {
            context.recordMetric("elevens-count", 1);
        }

        return null;
    }
}

```



You can record metrics using the [`Context`](#context) object on a per-key basis. For example, you can set a metric for the `hit-count` key and a different metric for the `elevens-count` key every time the function processes a message. The following is an example.

```python

from pulsar import Function

class MetricRecorderFunction(Function):
    def process(self, input, context):
        context.record_metric('hit-count', 1)

        if input == 11:
            context.record_metric('elevens-count', 1)

```



The Go SDK [`Context`](#context) object enables you to record metrics on a per-key basis. For example, you can set a metric for the `hit-count` key and a different metric for the `elevens-count` key every time the function processes a message:

```go

func metricRecorderFunction(ctx context.Context, in []byte) error {
    inputstr := string(in)
    fctx, ok := pf.FromContext(ctx)
    if !ok {
        return errors.New("get Go Functions Context error")
    }
    fctx.RecordMetric("hit-count", 1)
    if inputstr == "eleven" {
        fctx.RecordMetric("elevens-count", 1)
    }
    return nil
}

```



````

## Security

If you want to enable security on Pulsar Functions, first you should enable security on [Functions Workers](functions-worker.md). For more details, refer to [Security settings](functions-worker.md#security-settings).

Pulsar Functions can support the following secrets providers:

- ClearTextSecretsProvider
- EnvironmentBasedSecretsProvider

> Pulsar Functions support ClearTextSecretsProvider by default.

At the same time, Pulsar Functions provides two interfaces, **SecretsProvider** and **SecretsProviderConfigurator**, allowing users to customize the secrets provider.

````mdx-code-block



You can retrieve a secret using the [`Context`](#context) object. The following is an example:

```java

import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;
import org.slf4j.Logger;

public class GetSecretProviderFunction implements Function<String, Void> {

    @Override
    public Void process(String input, Context context) throws Exception {
        Logger LOG = context.getLogger();
        String secret = context.getSecret(input);

        if (secret != null && !secret.isEmpty()) {
            LOG.info("The secret is {}", secret);
        } else {
            LOG.warn("No secret found for this key");
        }

        return null;
    }
}

```



You can retrieve a secret using the [`Context`](#context) object. The following is an example:

```python

from pulsar import Function

class GetSecretProviderFunction(Function):
    def process(self, input, context):
        logger = context.get_logger()
        secret = context.get_secret(input)
        if secret is None:
            logger.warn('No secret found for this key')
        else:
            logger.info("The secret is {0}".format(secret))

```



Currently, the feature is not available in Go.



````

## State storage
Pulsar Functions use [Apache BookKeeper](https://bookkeeper.apache.org) as a state storage interface.
Every Pulsar installation, including the local standalone installation, includes a deployment of BookKeeper bookies.

Since the Pulsar 2.1.0 release, Pulsar has integrated with the Apache BookKeeper [table service](https://docs.google.com/document/d/155xAwWv5IdOitHh1NVMEwCMGgB28M3FyMiQSxEpjE-Y/edit#heading=h.56rbh52koe3f) to store the `State` for functions. For example, a `WordCount` function can store its `counters` state into the BookKeeper table service via the Pulsar Functions State API.

States are key-value pairs, where the key is a string and the value is arbitrary binary data; counters are stored as 64-bit big-endian binary values. Keys are scoped to an individual Pulsar Function and shared between instances of that function.

You can access states within Pulsar Java Functions using the `putState`, `putStateAsync`, `getState`, `getStateAsync`, `incrCounter`, `incrCounterAsync`, `getCounter`, `getCounterAsync`, and `deleteState` calls on the context object. You can access states within Pulsar Python Functions using the `put_state`, `get_state`, `incr_counter`, `get_counter`, and `del_counter` calls on the context object. You can also manage states using the [querystate](#query-state) and [putstate](#putstate) options to `pulsar-admin functions`.

:::note

State storage is not available in Go.

:::

### API

````mdx-code-block



Currently, Pulsar Functions expose the following APIs for mutating and accessing State. These APIs are available in the [Context](functions-develop.md#context) object when you are using Java SDK functions.

#### incrCounter

```java

    /**
     * Increment the builtin distributed counter referred by key
     * @param key The name of the key
     * @param amount The amount to be incremented
     */
    void incrCounter(String key, long amount);

```

The application can use `incrCounter` to change the counter of a given `key` by the given `amount`.

#### incrCounterAsync

```java

    /**
     * Increment the builtin distributed counter referred by key
     * but don't wait for the completion of the increment operation
     *
     * @param key The name of the key
     * @param amount The amount to be incremented
     */
    CompletableFuture<Void> incrCounterAsync(String key, long amount);

```

The application can use `incrCounterAsync` to asynchronously change the counter of a given `key` by the given `amount`.

#### getCounter

```java

    /**
     * Retrieve the counter value for the key.
     *
     * @param key name of the key
     * @return the amount of the counter value for this key
     */
    long getCounter(String key);

```

The application can use `getCounter` to retrieve the counter of a given `key` mutated by `incrCounter`.

#### getCounterAsync

```java

    /**
     * Retrieve the counter value for the key, but don't wait
     * for the operation to be completed
     *
     * @param key name of the key
     * @return the amount of the counter value for this key
     */
    CompletableFuture<Long> getCounterAsync(String key);

```

The application can use `getCounterAsync` to asynchronously retrieve the counter of a given `key` mutated by `incrCounterAsync`.

In addition to the `counter` API, Pulsar also exposes a general key/value API for functions to store general key/value state.

#### putState

```java

    /**
     * Update the state value for the key.
     *
     * @param key name of the key
     * @param value state value of the key
     */
    void putState(String key, ByteBuffer value);

```

#### putStateAsync

```java

    /**
     * Update the state value for the key, but don't wait for the operation to be completed
     *
     * @param key name of the key
     * @param value state value of the key
     */
    CompletableFuture<Void> putStateAsync(String key, ByteBuffer value);

```

The application can use `putStateAsync` to asynchronously update the state of a given `key`.

#### getState

```java

    /**
     * Retrieve the state value for the key.
     *
     * @param key name of the key
     * @return the state value for the key.
     */
    ByteBuffer getState(String key);

```

#### getStateAsync

```java

    /**
     * Retrieve the state value for the key, but don't wait for the operation to be completed
     *
     * @param key name of the key
     * @return the state value for the key.
     */
    CompletableFuture<ByteBuffer> getStateAsync(String key);

```

The application can use `getStateAsync` to asynchronously retrieve the state of a given `key`.

#### deleteState

```java

    /**
     * Delete the state value for the key.
     *
     * @param key name of the key
     */
    void deleteState(String key);

```

Counters and binary values share the same keyspace, so this deletes either type.



Currently, Pulsar Functions expose the following APIs for mutating and accessing State. These APIs are available in the [Context](#context) object when you are using Python SDK functions.

#### incr_counter

```python

    def incr_counter(self, key, amount):
        """incr the counter of a given key in the managed state"""

```

The application can use `incr_counter` to change the counter of a given `key` by the given `amount`.
If the `key` does not exist, a new key is created.

#### get_counter

```python

    def get_counter(self, key):
        """get the counter of a given key in the managed state"""

```

The application can use `get_counter` to retrieve the counter of a given `key` mutated by `incr_counter`.

In addition to the `counter` API, Pulsar also exposes a general key/value API for functions to store general key/value state.

#### put_state

```python

    def put_state(self, key, value):
        """update the value of a given key in the managed state"""

```

The key is a string, and the value is arbitrary binary data.

#### get_state

```python

    def get_state(self, key):
        """get the value of a given key in the managed state"""

```

#### del_counter

```python

    def del_counter(self, key):
        """delete the counter of a given key in the managed state"""

```

Counters and binary values share the same keyspace, so this deletes either type.



````

### Query State

A Pulsar Function can use the [State API](#api) for storing state into Pulsar's state storage and retrieving state back from Pulsar's state storage. Additionally, Pulsar provides CLI commands for querying its state.

```shell

$ bin/pulsar-admin functions querystate \
  --tenant <tenant> \
  --namespace <namespace> \
  --name <function-name> \
  --state-storage-url <bookkeeper-service-url> \
  --key <state-key> \
  [--watch]

```

If `--watch` is specified, the CLI will watch the value of the provided `state-key`.
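For the `putstate` counterpart mentioned earlier, the following is a hedged sketch of writing a state value from the CLI; the exact JSON shape accepted by `--state` may differ between Pulsar versions.

```shell

$ bin/pulsar-admin functions putstate \
  --tenant <tenant> \
  --namespace <namespace> \
  --name <function-name> \
  --state '{"key": "<state-key>", "stringValue": "<value>"}'

```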
### Example

````mdx-code-block



{@inject: github:WordCountFunction:/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/WordCountFunction.java} is a good example that demonstrates how an application can easily store `state` in Pulsar Functions.

```java

import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;

import java.util.Arrays;

public class WordCountFunction implements Function<String, Void> {
    @Override
    public Void process(String input, Context context) throws Exception {
        Arrays.asList(input.split("\\.")).forEach(word -> context.incrCounter(word, 1));
        return null;
    }
}

```

The logic of this `WordCount` function is simple and straightforward:

1. The function first splits the received `String` into multiple words using the regex `\\.`.
2. For each `word`, the function increments the corresponding `counter` by 1 (via `incrCounter(key, amount)`).



```python

from pulsar import Function

class WordCount(Function):
    def process(self, item, context):
        for word in item.split():
            context.incr_counter(word, 1)

```

The logic of this `WordCount` function is simple and straightforward:

1. The function first splits the received string into multiple words on spaces.
2. For each `word`, the function increments the corresponding `counter` by 1 (via `incr_counter(key, amount)`).



````

diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/functions-metrics.md b/site2/website/versioned_docs/version-2.10.1-deprecated/functions-metrics.md
deleted file mode 100644
index 8add6693160929..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/functions-metrics.md
+++ /dev/null
@@ -1,7 +0,0 @@
---
id: functions-metrics
title: Metrics for Pulsar Functions
sidebar_label: "Metrics"
original_id: functions-metrics
---

diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/functions-overview.md b/site2/website/versioned_docs/version-2.10.1-deprecated/functions-overview.md
deleted file mode 100644
index 816d301e0fd0e7..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/functions-overview.md
+++ /dev/null
@@ -1,209 +0,0 @@
---
id: functions-overview
title: Pulsar Functions overview
sidebar_label: "Overview"
original_id: functions-overview
---

**Pulsar Functions** are lightweight compute processes that

* consume messages from one or more Pulsar topics,
* apply user-supplied processing logic to each message,
* publish the results of the computation to another topic.


## Goals
With Pulsar Functions, you can create complex processing logic without deploying a separate neighboring system (such as [Apache Storm](http://storm.apache.org/), [Apache Heron](https://heron.incubator.apache.org/), [Apache Flink](https://flink.apache.org/)). Pulsar Functions are the computing infrastructure of the Pulsar messaging system.
The core goal is tied to a series of other goals:

* Developer productivity (language-native vs Pulsar Functions SDK functions)
* Easy troubleshooting
* Operational simplicity (no need for an external processing system)

## Inspirations
Pulsar Functions are inspired by (and take cues from) several systems and paradigms:

* Stream processing engines such as [Apache Storm](http://storm.apache.org/), [Apache Heron](https://apache.github.io/incubator-heron), and [Apache Flink](https://flink.apache.org)
* "Serverless" and "Function as a Service" (FaaS) cloud platforms like [Amazon Web Services Lambda](https://aws.amazon.com/lambda/), [Google Cloud Functions](https://cloud.google.com/functions/), and [Azure Cloud Functions](https://azure.microsoft.com/en-us/services/functions/)

Pulsar Functions can be described as

* [Lambda](https://aws.amazon.com/lambda/)-style functions that are
* specifically designed to use Pulsar as a message bus.

## Programming model
Pulsar Functions provide a wide range of functionality, and the core programming model is simple. Functions receive messages from one or more **input [topics](reference-terminology.md#topic)**. Each time a message is received, the function completes the following tasks:

  * Apply some processing logic to the input and write output to:
    * An **output topic** in Pulsar
    * [Apache BookKeeper](functions-develop.md#state-storage)
  * Write logs to a **log topic** (potentially for debugging purposes)
  * Increment a [counter](#word-count-example)

![Pulsar Functions core programming model](/assets/pulsar-functions-overview.png)

You can use Pulsar Functions to set up the following processing chain:

* A Python function listens for the `raw-sentences` topic, "sanitizes" incoming strings (removing extraneous whitespace and converting all characters to lowercase), and then publishes the results to a `sanitized-sentences` topic.
* A Java function listens for the `sanitized-sentences` topic, counts the number of times each word appears within a specified time window, and publishes the results to a `results` topic.
* Finally, a Python function listens for the `results` topic and writes the results to a MySQL table.


### Word count example

If you implement the classic word count example using Pulsar Functions, it looks something like this:

![Pulsar Functions word count example](/assets/pulsar-functions-word-count.png)

To write the function in Java with the [Pulsar Functions SDK for Java](functions-develop.md#available-apis), you can write the function as follows.

```java

package org.example.functions;

import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;

import java.util.Arrays;

public class WordCountFunction implements Function<String, Void> {
    // This function is invoked every time a message is published to the input topic
    @Override
    public Void process(String input, Context context) throws Exception {
        Arrays.asList(input.split(" ")).forEach(word -> {
            String counterKey = word.toLowerCase();
            context.incrCounter(counterKey, 1);
        });
        return null;
    }
}

```

Bundle and build the JAR file to be deployed, and then deploy it in your Pulsar cluster using the [command line](functions-deploy.md#command-line-interface) as follows.
```bash

$ bin/pulsar-admin functions create \
  --jar target/my-jar-with-dependencies.jar \
  --classname org.example.functions.WordCountFunction \
  --tenant public \
  --namespace default \
  --name word-count \
  --inputs persistent://public/default/sentences \
  --output persistent://public/default/count

```

### Content-based routing example

Pulsar Functions are used in many contexts. The following is a more sophisticated example that involves content-based routing.

For example, a function takes items (strings) as input and publishes them to either a `fruits` or `vegetables` topic, depending on the item. Or, if an item is neither a fruit nor a vegetable, a warning is logged to a [log topic](functions-develop.md#logger). The following is a visual representation.

![Pulsar Functions routing example](/assets/pulsar-functions-routing-example.png)

If you implement this routing functionality in Python, it looks something like this:

```python

from pulsar import Function

class RoutingFunction(Function):
    def __init__(self):
        self.fruits_topic = "persistent://public/default/fruits"
        self.vegetables_topic = "persistent://public/default/vegetables"

    @staticmethod
    def is_fruit(item):
        return item in [b"apple", b"orange", b"pear", b"other fruits..."]

    @staticmethod
    def is_vegetable(item):
        return item in [b"carrot", b"lettuce", b"radish", b"other vegetables..."]

    def process(self, item, context):
        if self.is_fruit(item):
            context.publish(self.fruits_topic, item)
        elif self.is_vegetable(item):
            context.publish(self.vegetables_topic, item)
        else:
            warning = "The item {0} is neither a fruit nor a vegetable".format(item)
            context.get_logger().warn(warning)

```

If this code is stored in `~/router.py`, then you can deploy it in your Pulsar cluster using the [command line](functions-deploy.md#command-line-interface) as follows.

```bash

$ bin/pulsar-admin functions create \
  --py ~/router.py \
  --classname router.RoutingFunction \
  --tenant public \
  --namespace default \
  --name route-fruit-veg \
  --inputs persistent://public/default/basket-items

```

### Functions, messages and message types
Pulsar Functions take byte arrays as input and spit out byte arrays as output. However, in languages that support typed interfaces (such as Java), you can write typed functions and bind messages to types in the following ways:
* [Schema Registry](functions-develop.md#schema-registry)
* [SerDe](functions-develop.md#serde)


## Fully Qualified Function Name (FQFN)
Each Pulsar Function has a **Fully Qualified Function Name** (FQFN) that consists of three elements: the function tenant, namespace, and function name. An FQFN looks like this:

```http

tenant/namespace/name

```

FQFNs enable you to create multiple functions with the same name provided that they are in different namespaces.

## Supported languages
Currently, you can write Pulsar Functions in Java, Python, and Go. For details, refer to [Develop Pulsar Functions](functions-develop.md).

## Processing guarantees
Pulsar Functions provide three different messaging semantics that you can apply to any function.

Delivery semantics | Description
:------------------|:-------
**At-most-once** delivery | Each message sent to the function is processed at most once; it may not be processed at all (hence "at most").
**At-least-once** delivery | Each message sent to the function can be processed more than once (hence the "at least").
**Effectively-once** delivery | Each message sent to the function will have one output associated with it.


### Apply processing guarantees to a function
You can set the processing guarantees for a Pulsar Function when you create the function. The following [`pulsar-admin functions create`](reference-pulsar-admin.md#create-1) command creates a function with effectively-once guarantees applied.

```bash

$ bin/pulsar-admin functions create \
  --name my-effectively-once-function \
  --processing-guarantees EFFECTIVELY_ONCE \
  # Other function configs

```

The available options for `--processing-guarantees` are:

* `ATMOST_ONCE`
* `ATLEAST_ONCE`
* `EFFECTIVELY_ONCE`

> By default, Pulsar Functions provide at-least-once delivery guarantees. So if you create a function without supplying a value for the `--processing-guarantees` flag, the function provides at-least-once guarantees.

### Update the processing guarantees of a function
You can change the processing guarantees applied to a function using the [`update`](reference-pulsar-admin.md#update-1) command. The following is an example.

```bash

$ bin/pulsar-admin functions update \
  --processing-guarantees ATMOST_ONCE \
  # Other function configs

```

diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/functions-package.md b/site2/website/versioned_docs/version-2.10.1-deprecated/functions-package.md
deleted file mode 100644
index a995d5c1588771..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/functions-package.md
+++ /dev/null
@@ -1,493 +0,0 @@
---
id: functions-package
title: Package Pulsar Functions
sidebar_label: "How-to: Package"
original_id: functions-package
---

You can package Pulsar functions in Java, Python, and Go. Packaging the window function in Java is the same as [packaging a function in Java](#java).

:::note

Currently, the window function is not available in Python and Go.

:::

## Prerequisite

Before running a Pulsar function, you need to start Pulsar. You can [run a standalone Pulsar in Docker](getting-started-docker.md), or [run Pulsar in Kubernetes](getting-started-helm.md).

To check whether the Docker image starts, you can use the `docker ps` command.

## Java

To package a function in Java, complete the following steps.

1. Create a new Maven project with a pom file. In the following code sample, the value of `mainClass` is the fully qualified name of your function class.

   ```xml

   <?xml version="1.0" encoding="UTF-8"?>
   <project xmlns="http://maven.apache.org/POM/4.0.0"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
       <modelVersion>4.0.0</modelVersion>

       <groupId>java-function</groupId>
       <artifactId>java-function</artifactId>
       <version>1.0-SNAPSHOT</version>

       <dependencies>
           <dependency>
               <groupId>org.apache.pulsar</groupId>
               <artifactId>pulsar-functions-api</artifactId>
               <version>2.6.0</version>
           </dependency>
       </dependencies>

       <build>
           <plugins>
               <plugin>
                   <artifactId>maven-assembly-plugin</artifactId>
                   <configuration>
                       <appendAssemblyId>false</appendAssemblyId>
                       <descriptorRefs>
                           <descriptorRef>jar-with-dependencies</descriptorRef>
                       </descriptorRefs>
                       <archive>
                           <manifest>
                               <mainClass>org.example.test.ExclamationFunction</mainClass>
                           </manifest>
                       </archive>
                   </configuration>
                   <executions>
                       <execution>
                           <id>make-assembly</id>
                           <phase>package</phase>
                           <goals>
                               <goal>assembly</goal>
                           </goals>
                       </execution>
                   </executions>
               </plugin>
               <plugin>
                   <groupId>org.apache.maven.plugins</groupId>
                   <artifactId>maven-compiler-plugin</artifactId>
                   <configuration>
                       <source>8</source>
                       <target>8</target>
                   </configuration>
               </plugin>
           </plugins>
       </build>
   </project>

   ```

2. Write a Java function.

   ```java

   package org.example.test;

   import java.util.function.Function;

   public class ExclamationFunction implements Function<String, String> {
       @Override
       public String apply(String s) {
           return "This is my function!";
       }
   }

   ```

   For the imported package, you can use one of the following interfaces:
   - Function interface provided by Java 8: `java.util.function.Function`
   - Pulsar Function interface: `org.apache.pulsar.functions.api.Function`

   The main difference between the two interfaces is that the `org.apache.pulsar.functions.api.Function` interface provides the context interface.
When you write a function and want to interact with it, you can use the context to obtain a wide variety of information and functionality for Pulsar Functions.

   The following example uses the `org.apache.pulsar.functions.api.Function` interface with context.

   ```java

   package org.example.functions;
   import org.apache.pulsar.functions.api.Context;
   import org.apache.pulsar.functions.api.Function;

   import java.util.Arrays;
   public class WordCountFunction implements Function<String, Void> {
       // This function is invoked every time a message is published to the input topic
       @Override
       public Void process(String input, Context context) throws Exception {
           Arrays.asList(input.split(" ")).forEach(word -> {
               String counterKey = word.toLowerCase();
               context.incrCounter(counterKey, 1);
           });
           return null;
       }
   }

   ```

3. Package the Java function.

   ```bash

   mvn package

   ```

   After the Java function is packaged, a `target` directory is created automatically. Open the `target` directory to check if there is a JAR package similar to `java-function-1.0-SNAPSHOT.jar`.


4. Run the Java function.

   (1) Copy the packaged jar file to the Pulsar image.

   ```bash

   docker exec -it [CONTAINER ID] /bin/bash
   docker cp <path of java-function-1.0-SNAPSHOT.jar> [CONTAINER ID]:/pulsar

   ```

   (2) Run the Java function using the following command.

   ```bash

   ./bin/pulsar-admin functions localrun \
     --classname org.example.test.ExclamationFunction \
     --jar java-function-1.0-SNAPSHOT.jar \
     --inputs persistent://public/default/my-topic-1 \
     --output persistent://public/default/test-1 \
     --tenant public \
     --namespace default \
     --name JavaFunction

   ```

   The following log indicates that the Java function starts successfully.

   ```text

   ...
   07:55:03.724 [main] INFO org.apache.pulsar.functions.runtime.ProcessRuntime - Started process successfully
   ...

   ```

## Python

Python Function supports the following three formats:

- One python file
- ZIP file
- PIP

### One python file

To package a function with **one python file** in Python, complete the following steps.

1. Write a Python function.

   ```python

   from pulsar import Function  # import the Function module from Pulsar

   # The classic ExclamationFunction that appends an exclamation at the end
   # of the input
   class ExclamationFunction(Function):
       def __init__(self):
           pass

       def process(self, input, context):
           return input + '!'

   ```

   In this example, when you write a Python function, you need to inherit the Function class and implement the `process()` method.

   `process()` mainly has two parameters:

   - `input` represents your input.

   - `context` represents an interface exposed by the Pulsar Function. You can get the attributes in the Python function based on the provided context object.

2. Install a Python client.

   The implementation of a Python function depends on the Python client, so before deploying a Python function, you need to install the corresponding version of the Python client.

   ```bash

   pip install pulsar-client==2.6.0

   ```

3. Run the Python Function.

   (1) Copy the Python function file to the Pulsar image.

   ```bash

   docker exec -it [CONTAINER ID] /bin/bash
   docker cp <path of python function file> [CONTAINER ID]:/pulsar

   ```

   (2) Run the Python function using the following command.

   ```bash

   ./bin/pulsar-admin functions localrun \
     --classname <python file name>.<class name> \
     --py <python file location> \
     --inputs persistent://public/default/my-topic-1 \
     --output persistent://public/default/test-1 \
     --tenant public \
     --namespace default \
     --name PythonFunction

   ```

   The following log indicates that the Python function starts successfully.

   ```text

   ...
   07:55:03.724 [main] INFO org.apache.pulsar.functions.runtime.ProcessRuntime - Started process successfully
   ...

   ```

### ZIP file

To package a function with the **ZIP file** in Python, complete the following steps.

1. Prepare the ZIP file.

   The following is required when packaging the ZIP file of the Python Function.

   ```text

   Assuming the zip file is named as `func.zip`, unzip the `func.zip` folder:
   "func/src"
   "func/requirements.txt"
   "func/deps"

   ```

   Take [exclamation.zip](https://github.com/apache/pulsar/tree/master/tests/docker-images/latest-version-image/python-examples) as an example. The internal structure of the example is as follows.

   ```text

   .
   ├── deps
   │   └── sh-1.12.14-py2.py3-none-any.whl
   └── src
       └── exclamation.py

   ```

2. Run the Python Function.

   (1) Copy the ZIP file to the Pulsar image.

   ```bash

   docker exec -it [CONTAINER ID] /bin/bash
   docker cp <path of zip file> [CONTAINER ID]:/pulsar

   ```

   (2) Run the Python function using the following command.

   ```bash

   ./bin/pulsar-admin functions localrun \
     --classname exclamation \
     --py <path of zip file> \
     --inputs persistent://public/default/in-topic \
     --output persistent://public/default/out-topic \
     --tenant public \
     --namespace default \
     --name PythonFunction

   ```

   The following log indicates that the Python function starts successfully.

   ```text

   ...
   07:55:03.724 [main] INFO org.apache.pulsar.functions.runtime.ProcessRuntime - Started process successfully
   ...

   ```

### PIP

The PIP method is only supported in Kubernetes runtime. To package a function with **PIP** in Python, complete the following steps.

1. Configure the `functions_worker.yml` file.

   ```text

   #### Kubernetes Runtime ####
   installUserCodeDependencies: true

   ```

2. Write your Python Function.

   ```python

   from pulsar import Function
   import js2xml

   # The classic ExclamationFunction that appends an exclamation at the end
   # of the input
   class ExclamationFunction(Function):
       def __init__(self):
           pass

       def process(self, input, context):
           # add your logic
           return input + '!'

   ```

   You can introduce additional dependencies. When Python Function detects that the file currently used is a `whl` file and the `installUserCodeDependencies` parameter is specified, the system uses the `pip install` command to install the dependencies required in Python Function.

3. Generate the `whl` file.

   ```shell

   $ cd $PULSAR_HOME/pulsar-functions/scripts/python
   $ chmod +x generate.sh
   $ ./generate.sh <path of your python file> <path of the output dir> <the version of whl>
   # e.g: ./generate.sh /path/to/python /path/to/python/output 1.0.0

   ```

   The output is written in `/path/to/python/output`:

   ```text

   -rw-r--r--  1 root  staff   1.8K  8 27 14:29 pulsarfunction-1.0.0-py2-none-any.whl
   -rw-r--r--  1 root  staff   1.4K  8 27 14:29 pulsarfunction-1.0.0.tar.gz
   -rw-r--r--  1 root  staff     0B  8 27 14:29 pulsarfunction.whl

   ```

## Go

To package a function in Go, complete the following steps.

1. Write a Go function.

   Currently, a Go function can **only** be implemented using the SDK, and the interface of the function is exposed in the form of the SDK. Before using a Go function, you need to import "github.com/apache/pulsar/pulsar-function-go/pf".
   ```go

   import (
       "context"
       "fmt"

       "github.com/apache/pulsar/pulsar-function-go/pf"
   )

   func HandleRequest(ctx context.Context, input []byte) error {
       fmt.Println(string(input) + "!")
       return nil
   }

   func main() {
       pf.Start(HandleRequest)
   }

   ```

   You can use the context to access information about the Go function.

   ```go

   if fc, ok := pf.FromContext(ctx); ok {
       fmt.Printf("function ID is:%s, ", fc.GetFuncID())
       fmt.Printf("function version is:%s\n", fc.GetFuncVersion())
   }

   ```

   When writing a Go function, remember that
   - In `main()`, you **only** need to register the function name to `Start()`. **Only** one function name is received in `Start()`.
   - A Go function uses Go reflection, based on the received function name, to verify whether the parameter list and returned value list are correct. The parameter list and returned value list **must be** one of the following sample functions:

   ```

   func ()
   func () error
   func (input) error
   func () (output, error)
   func (input) (output, error)
   func (context.Context) error
   func (context.Context, input) error
   func (context.Context) (output, error)
   func (context.Context, input) (output, error)

   ```

2. Build the Go function.

   ```bash

   go build <your go function filename>.go

   ```

3. Run the Go Function.

   (1) Copy the Go function file to the Pulsar image.

   ```bash

   docker exec -it [CONTAINER ID] /bin/bash
   docker cp <your go function path> [CONTAINER ID]:/pulsar

   ```

   (2) Run the Go function with the following command.

   ```bash

   ./bin/pulsar-admin functions localrun \
     --go [your go function path] \
     --inputs [input topics] \
     --output [output topic] \
     --tenant [default:public] \
     --namespace [default:default] \
     --name [custom unique go function name]

   ```

   The following log indicates that the Go function starts successfully.

   ```text

   ...
   07:55:03.724 [main] INFO org.apache.pulsar.functions.runtime.ProcessRuntime - Started process successfully
   ...

   ```

## Start Functions in cluster mode
If you want to start a function in cluster mode, replace `localrun` with `create` in the commands above. The following log indicates that your function starts successfully.

  ```text

  "Created successfully"

  ```

For information about parameters on `--classname`, `--jar`, `--py`, `--go`, `--inputs`, run the command `./bin/pulsar-admin functions` or see [here](reference-pulsar-admin.md#functions).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/functions-runtime.md b/site2/website/versioned_docs/version-2.10.1-deprecated/functions-runtime.md
deleted file mode 100644
index 9a01dbf4da1d1d..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/functions-runtime.md
+++ /dev/null
@@ -1,406 +0,0 @@
---
id: functions-runtime
title: Configure Functions runtime
sidebar_label: "Setup: Configure Functions runtime"
original_id: functions-runtime
---

You can use the following methods to run functions.

- *Thread*: Invoke functions in threads in the functions worker.
- *Process*: Invoke functions in processes forked by the functions worker.
- *Kubernetes*: Submit functions as Kubernetes StatefulSets via the functions worker.

:::note

Pulsar supports adding labels to the Kubernetes StatefulSets and services while launching functions, which facilitates selecting the target Kubernetes objects.
- -::: - -The differences of the thread and process modes are: -- Thread mode: when a function runs in thread mode, it runs on the same Java virtual machine (JVM) with functions worker. -- Process mode: when a function runs in process mode, it runs on the same machine that functions worker runs. - -## Configure thread runtime -It is easy to configure *Thread* runtime. In most cases, you do not need to configure anything. You can customize the thread group name with the following settings: - -```yaml - -functionRuntimeFactoryClassName: org.apache.pulsar.functions.runtime.thread.ThreadRuntimeFactory -functionRuntimeFactoryConfigs: - threadGroupName: "Your Function Container Group" - -``` - -*Thread* runtime is only supported in Java function. - -## Configure process runtime -When you enable *Process* runtime, you do not need to configure anything. - -```yaml - -functionRuntimeFactoryClassName: org.apache.pulsar.functions.runtime.process.ProcessRuntimeFactory -functionRuntimeFactoryConfigs: - # the directory for storing the function logs - logDirectory: - # change the jar location only when you put the java instance jar in a different location - javaInstanceJarLocation: - # change the python instance location only when you put the python instance jar in a different location - pythonInstanceLocation: - # change the extra dependencies location: - extraFunctionDependenciesDir: - -``` - -*Process* runtime is supported in Java, Python, and Go functions. - -## Configure Kubernetes runtime - -When the functions worker generates Kubernetes manifests and apply the manifests, the Kubernetes runtime works. If you have run functions worker on Kubernetes, you can use the `serviceAccount` associated with the pod that the functions worker is running in. Otherwise, you can configure it to communicate with a Kubernetes cluster. - -The manifests, generated by the functions worker, include a `StatefulSet`, a `Service` (used to communicate with the pods), and a `Secret` for auth credentials (when applicable). The `StatefulSet` manifest (by default) has a single pod, with the number of replicas determined by the "parallelism" of the function. On pod boot, the pod downloads the function payload (via the functions worker REST API). The pod's container image is configurable, but must have the functions runtime. - -The Kubernetes runtime supports secrets, so you can create a Kubernetes secret and expose it as an environment variable in the pod. The Kubernetes runtime is extensible, you can implement classes and customize the way how to generate Kubernetes manifests, how to pass auth data to pods, and how to integrate secrets. - -:::tip - -For the rules of translating Pulsar object names into Kubernetes resource labels, see [here](admin-api-overview.md#how-to-define-pulsar-resource-names-when-running-pulsar-in-kubernetes). - -::: - -### Basic configuration - -It is easy to configure Kubernetes runtime. You can just uncomment the settings of `kubernetesContainerFactory` in the `functions_worker.yaml` file. The following is an example. - -```yaml - -functionRuntimeFactoryClassName: org.apache.pulsar.functions.runtime.kubernetes.KubernetesRuntimeFactory -functionRuntimeFactoryConfigs: - # uri to kubernetes cluster, leave it to empty and it will use the kubernetes settings in function worker - k8Uri: - # the kubernetes namespace to run the function instances. it is `default`, if this setting is left to be empty - jobNamespace: - # The Kubernetes pod name to run the function instances. 
It is set to - # `pf----` if this setting is left to be empty - jobName: - # the docker image to run function instance. by default it is `apachepulsar/pulsar` - pulsarDockerImageName: - # the docker image to run function instance according to different configurations provided by users. - # By default it is `apachepulsar/pulsar`. - # e.g: - # functionDockerImages: - # JAVA: JAVA_IMAGE_NAME - # PYTHON: PYTHON_IMAGE_NAME - # GO: GO_IMAGE_NAME - functionDockerImages: - # "The image pull policy for image used to run function instance. By default it is `IfNotPresent` - imagePullPolicy: IfNotPresent - # the root directory of pulsar home directory in `pulsarDockerImageName`. by default it is `/pulsar`. - # if you are using your own built image in `pulsarDockerImageName`, you need to set this setting accordingly - pulsarRootDir: - # The config admin CLI allows users to customize the configuration of the admin cli tool, such as: - # `/bin/pulsar-admin and /bin/pulsarctl`. By default it is `/bin/pulsar-admin`. If you want to use `pulsarctl` - # you need to set this setting accordingly - configAdminCLI: - # this setting only takes effects if `k8Uri` is set to null. if your function worker is running as a k8 pod, - # setting this to true is let function worker to submit functions to the same k8s cluster as function worker - # is running. setting this to false if your function worker is not running as a k8 pod. - submittingInsidePod: false - # setting the pulsar service url that pulsar function should use to connect to pulsar - # if it is not set, it will use the pulsar service url configured in worker service - pulsarServiceUrl: - # setting the pulsar admin url that pulsar function should use to connect to pulsar - # if it is not set, it will use the pulsar admin url configured in worker service - pulsarAdminUrl: - # The flag indicates to install user code dependencies. (applied to python package) - installUserCodeDependencies: - # The repository that pulsar functions use to download python dependencies - pythonDependencyRepository: - # The repository that pulsar functions use to download extra python dependencies - pythonExtraDependencyRepository: - # the custom labels that function worker uses to select the nodes for pods - customLabels: - # The expected metrics collection interval, in seconds - expectedMetricsCollectionInterval: 30 - # Kubernetes Runtime will periodically checkback on - # this configMap if defined and if there are any changes - # to the kubernetes specific stuff, we apply those changes - changeConfigMap: - # The namespace for storing change config map - changeConfigMapNamespace: - # The ratio cpu request and cpu limit to be set for a function/source/sink. - # The formula for cpu request is cpuRequest = userRequestCpu / cpuOverCommitRatio - cpuOverCommitRatio: 1.0 - # The ratio memory request and memory limit to be set for a function/source/sink. - # The formula for memory request is memoryRequest = userRequestMemory / memoryOverCommitRatio - memoryOverCommitRatio: 1.0 - # The port inside the function pod which is used by the worker to communicate with the pod - grpcPort: 9093 - # The port inside the function pod on which prometheus metrics are exposed - metricsPort: 9094 - # The directory inside the function pod where nar packages will be extracted - narExtractionDirectory: - # The classpath where function instance files stored - functionInstanceClassPath: - # Upload the builtin sources/sinks to BookKeeper. - # True by default. 
-
  uploadBuiltinSinksSources: true
  # the directory for dropping extra function dependencies
  # if it is not an absolute path, it is relative to `pulsarRootDir`
  extraFunctionDependenciesDir:
  # Additional memory padding added on top of the memory requested by the function, on a per-instance basis
  percentMemoryPadding: 10
  # The duration (in seconds) before the StatefulSet is deleted after a function stops or restarts.
  # Value must be a non-negative integer. 0 indicates the StatefulSet is deleted immediately.
  # Default is 5 seconds.
  gracePeriodSeconds: 5

```

If you run the functions worker embedded in a broker on Kubernetes, you can use the default settings.

### Run standalone functions worker on Kubernetes

If you run the functions worker standalone (that is, not embedded) on Kubernetes, you need to configure `pulsarServiceUrl` to be the URL of the broker and `pulsarAdminUrl` as the URL to the functions worker.

For example, suppose both the Pulsar brokers and the functions workers run in the `pulsar` K8S namespace. The brokers have a service called `broker` and the functions worker has a service called `func-worker`. The settings are as follows:

```yaml

pulsarServiceUrl: pulsar://broker.pulsar:6650 # or pulsar+ssl://broker.pulsar:6651 if using TLS
pulsarAdminUrl: http://func-worker.pulsar:8080 # or https://func-worker:8443 if using TLS

```

### Run RBAC in Kubernetes clusters

If you run RBAC in your Kubernetes cluster, make sure that the service account you use for running functions workers (or brokers, if functions workers run along with brokers) has permissions on the following Kubernetes APIs.

- services
- configmaps
- pods
- apps.statefulsets

The following is sufficient:

```yaml

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: functions-worker
rules:
- apiGroups: [""]
  resources:
  - services
  - configmaps
  - pods
  verbs:
  - '*'
- apiGroups:
  - apps
  resources:
  - statefulsets
  verbs:
  - '*'
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: functions-worker
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: functions-worker
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: functions-worker
subjects:
- kind: ServiceAccount
  name: functions-worker

```

If the service account is not properly configured, an error message similar to this is displayed:

```bash

22:04:27.696 [Timer-0] ERROR org.apache.pulsar.functions.runtime.KubernetesRuntimeFactory - Error while trying to fetch configmap example-pulsar-4qvmb5gur3c6fc9dih0x1xn8b-function-worker-config at namespace pulsar
io.kubernetes.client.ApiException: Forbidden
 at io.kubernetes.client.ApiClient.handleResponse(ApiClient.java:882) ~[io.kubernetes-client-java-2.0.0.jar:?]
 at io.kubernetes.client.ApiClient.execute(ApiClient.java:798) ~[io.kubernetes-client-java-2.0.0.jar:?]
 at io.kubernetes.client.apis.CoreV1Api.readNamespacedConfigMapWithHttpInfo(CoreV1Api.java:23673) ~[io.kubernetes-client-java-api-2.0.0.jar:?]
 at io.kubernetes.client.apis.CoreV1Api.readNamespacedConfigMap(CoreV1Api.java:23655) ~[io.kubernetes-client-java-api-2.0.0.jar:?]
-
 at org.apache.pulsar.functions.runtime.KubernetesRuntimeFactory.fetchConfigMap(KubernetesRuntimeFactory.java:284) [org.apache.pulsar-pulsar-functions-runtime-2.4.0-42c3bf949.jar:2.4.0-42c3bf949]
 at org.apache.pulsar.functions.runtime.KubernetesRuntimeFactory$1.run(KubernetesRuntimeFactory.java:275) [org.apache.pulsar-pulsar-functions-runtime-2.4.0-42c3bf949.jar:2.4.0-42c3bf949]
 at java.util.TimerThread.mainLoop(Timer.java:555) [?:1.8.0_212]
 at java.util.TimerThread.run(Timer.java:505) [?:1.8.0_212]

```

### Integrate Kubernetes secrets

In order to safely distribute secrets, Pulsar Functions can reference Kubernetes secrets. To enable this, set `secretsProviderConfiguratorClassName` to `org.apache.pulsar.functions.secretsproviderconfigurator.KubernetesSecretsProviderConfigurator`.

You can create a secret in the namespace where your functions are deployed. For example, suppose you deploy functions to the `pulsar-func` Kubernetes namespace, and you have a secret named `database-creds` with a field named `password`, which you want to mount in the pod as an environment variable called `DATABASE_PASSWORD`. The following functions configuration enables you to reference that secret and mount the value as an environment variable in the pod.

```yaml

tenant: "mytenant"
namespace: "mynamespace"
name: "myfunction"
topicName: "persistent://mytenant/mynamespace/myfuncinput"
className: "com.company.pulsar.myfunction"

secrets:
  # the secret will be mounted from the `password` field in the `database-creds` secret as an env var called `DATABASE_PASSWORD`
  DATABASE_PASSWORD:
    path: "database-creds"
    key: "password"

```

### Enable token authentication

When you enable authentication for your Pulsar cluster, you need a mechanism for the pod running your function to authenticate with the broker.

The `org.apache.pulsar.functions.auth.KubernetesFunctionAuthProvider` interface provides support for any authentication mechanism. The `functionAuthProviderClassName` setting in `functions_worker.yml` specifies the path to your implementation.

Pulsar includes an implementation of this interface for token authentication, and distributes the certificate authority via the same implementation. The configuration is as follows:

```yaml

functionAuthProviderClassName: org.apache.pulsar.functions.auth.KubernetesSecretsTokenAuthProvider

```

For token authentication, the functions worker captures the token that is used to deploy (or update) the function. The token is saved as a secret and mounted into the pod.

For custom authentication or TLS, you need to implement this interface or use an alternative mechanism to provide authentication. If you use token authentication and TLS encryption to secure the communication with the cluster, Pulsar passes your certificate authority (CA) to the client, so the client obtains what it needs to authenticate the cluster, and trusts the cluster with your signed certificate.

:::note

If you deploy functions using tokens that expire, the tokens saved for those functions will eventually expire, and the functions will no longer be able to authenticate with the broker.

:::

### Run clusters with authentication

When you run a functions worker in a standalone process (that is, not embedded in the broker) in a cluster with authentication, you must configure your functions worker to interact with the broker and to authenticate incoming requests. So you need to configure the properties that the broker requires for authentication or authorization.
-

For example, if you use token authentication, you need to configure the following properties in the `functions_worker.yml` file.

```yaml

clientAuthenticationPlugin: org.apache.pulsar.client.impl.auth.AuthenticationToken
clientAuthenticationParameters: file:///etc/pulsar/token/admin-token.txt
configurationMetadataStoreUrl: zk:zookeeper-cluster:2181 # auth requires a connection to zookeeper
authenticationProviders:
 - "org.apache.pulsar.broker.authentication.AuthenticationProviderToken"
authorizationEnabled: true
authenticationEnabled: true
superUserRoles:
  - superuser
  - proxy
properties:
  tokenSecretKey: file:///etc/pulsar/jwt/secret # if using a secret token, key file must be DER-encoded
  tokenPublicKey: file:///etc/pulsar/jwt/public.key # if using public/private key tokens, key file must be DER-encoded

```

:::note

You must configure both the functions worker's authentication and authorization settings, so that the server can authenticate incoming requests, and the client settings, so that the functions worker can authenticate with the broker.

:::

### Customize Kubernetes runtime

The Kubernetes integration enables you to implement a class and customize how manifests are generated. You can configure it by setting `runtimeCustomizerClassName` in the `functions_worker.yml` file to the fully qualified class name. You must implement the `org.apache.pulsar.functions.runtime.kubernetes.KubernetesManifestCustomizer` interface.

The functions (and sinks/sources) API provides a flag, `customRuntimeOptions`, which is passed to this interface.

To initialize the `KubernetesManifestCustomizer`, you can provide `runtimeCustomizerConfig` in the `functions_worker.yml` file. `runtimeCustomizerConfig` is passed to the `public void initialize(Map<String, Object> config)` function of the interface. `runtimeCustomizerConfig` is different from `customRuntimeOptions` in that `runtimeCustomizerConfig` is the same across all functions. If you provide both `runtimeCustomizerConfig` and `customRuntimeOptions`, you need to decide how to manage these two configurations in your implementation of `KubernetesManifestCustomizer`.

Pulsar includes a built-in implementation. To use the basic implementation, set `runtimeCustomizerClassName` to `org.apache.pulsar.functions.runtime.kubernetes.BasicKubernetesManifestCustomizer`. The built-in implementation, initialized with `runtimeCustomizerConfig`, enables you to pass a JSON document as `customRuntimeOptions` with certain properties that augment how the manifests are generated. If both `runtimeCustomizerConfig` and `customRuntimeOptions` are provided, `BasicKubernetesManifestCustomizer` uses `customRuntimeOptions` to override the configuration if there are conflicts between the two.

Below is an example of `customRuntimeOptions`.
-

```json

{
  "jobName": "jobname", // the k8s pod name to run this function instance
  "jobNamespace": "namespace", // the k8s namespace to run this function in
  "extraLabels": { // extra labels to attach to the statefulSet, service, and pods
    "extraLabel": "value"
  },
  "extraAnnotations": { // extra annotations to attach to the statefulSet, service, and pods
    "extraAnnotation": "value"
  },
  "nodeSelectorLabels": { // node selector labels to add on to the pod spec
    "customLabel": "value"
  },
  "tolerations": [ // tolerations to add to the pod spec
    {
      "key": "custom-key",
      "value": "value",
      "effect": "NoSchedule"
    }
  ],
  "resourceRequirements": { // values for cpu and memory should be defined as described here: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container
    "requests": {
      "cpu": 1,
      "memory": "4G"
    },
    "limits": {
      "cpu": 2,
      "memory": "8G"
    }
  }
}

```

## Run clusters with geo-replication

If you run multiple clusters tied together with geo-replication, it is important to use a different function namespace for each cluster. Otherwise, the functions share a namespace and might be scheduled across clusters.

For example, if you have two clusters, `east-1` and `west-1`, you can configure the functions workers for `east-1` and `west-1` respectively as follows.

```yaml

pulsarFunctionsCluster: east-1
pulsarFunctionsNamespace: public/functions-east-1

```

```yaml

pulsarFunctionsCluster: west-1
pulsarFunctionsNamespace: public/functions-west-1

```

This ensures the two different functions workers use distinct sets of topics for their internal coordination.

## Configure standalone functions worker

When configuring a standalone functions worker, you need to configure the properties that the broker requires, especially if you use TLS, so that the functions worker can communicate with the broker.

You need to configure the following required properties.

```yaml

workerPort: 8080
workerPortTls: 8443 # when using TLS
tlsCertificateFilePath: /etc/pulsar/tls/tls.crt # when using TLS
tlsKeyFilePath: /etc/pulsar/tls/tls.key # when using TLS
tlsTrustCertsFilePath: /etc/pulsar/tls/ca.crt # when using TLS
pulsarServiceUrl: pulsar://broker.pulsar:6650/ # or pulsar+ssl://pulsar-prod-broker.pulsar:6651/ when using TLS
pulsarWebServiceUrl: http://broker.pulsar:8080/ # or https://pulsar-prod-broker.pulsar:8443/ when using TLS
useTls: true # when using TLS, critical!

```

diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/functions-worker.md b/site2/website/versioned_docs/version-2.10.1-deprecated/functions-worker.md
deleted file mode 100644
index 60eb84657919be..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/functions-worker.md
+++ /dev/null
@@ -1,405 +0,0 @@
----
-id: functions-worker
-title: Deploy and manage functions worker
-sidebar_label: "Setup: Pulsar Functions Worker"
-original_id: functions-worker
----
-Before using Pulsar Functions, you need to learn how to set up the Pulsar Functions worker and how to [configure the Functions runtime](functions-runtime.md).
-
-Pulsar `functions-worker` is a logical component that runs Pulsar Functions in cluster mode. Two options are available, and you can select either based on your requirements.
-
- [run with brokers](#run-functions-worker-with-brokers)
- [run it separately](#run-functions-worker-separately) on separate machines

:::note

The `--- Service Urls---` lines in the following diagrams represent Pulsar service URLs that the Pulsar client and admin tools use to connect to a Pulsar cluster.

:::

## Run Functions-worker with brokers

The following diagram illustrates the deployment of functions-workers running along with brokers.

![assets/functions-worker-corun.png](/assets/functions-worker-corun.png)

To enable the functions-worker to run as part of a broker, you need to set `functionsWorkerEnabled` to `true` in the `broker.conf` file.

```conf

functionsWorkerEnabled=true

```

If `functionsWorkerEnabled` is set to `true`, the functions-worker is started as part of a broker. You need to configure the `conf/functions_worker.yml` file to customize your functions worker.

Before you run the Functions-worker with brokers, you have to configure it, and then start it with the brokers.

### Configure Functions-Worker to run with brokers
In this mode, most of the settings are already inherited from your broker configuration (for example, configurationStore settings, authentication settings, and so on) since `functions-worker` is running as part of the broker.

Pay attention to the following required settings when configuring the functions-worker in this mode.

- `numFunctionPackageReplicas`: The number of replicas to store function packages. The default value is `1`, which is good for standalone deployment. For production deployment, to ensure high availability, set it to a value larger than `2`.
- `initializedDlogMetadata`: Whether to initialize the distributed log metadata at runtime. If it is set to `true`, you must ensure that the metadata has been initialized by the `bin/pulsar initialize-cluster-metadata` command.

If authentication is enabled on the BookKeeper cluster, configure the following BookKeeper authentication settings.

- `bookkeeperClientAuthenticationPlugin`: the BookKeeper client authentication plugin name.
- `bookkeeperClientAuthenticationParametersName`: the BookKeeper client authentication plugin parameters name.
- `bookkeeperClientAuthenticationParameters`: the BookKeeper client authentication plugin parameters.

### Configure Stateful-Functions to run with broker

If you want to use stateful functions (for example, the `putState()` and `queryState()` interfaces), follow the steps below.

1. Enable the **streamStorage** service in BookKeeper.

   Currently, the service uses the NAR package, so you need to set the configuration in `bookkeeper.conf`.

   ```text
   
   extraServerComponents=org.apache.bookkeeper.stream.server.StreamStorageLifecycleComponent
   
   ```

   After starting the bookie, use the following method to check whether the streamStorage service has started correctly.

   Input:

   ```shell
   
   telnet localhost 4181
   
   ```

   Output:

   ```text
   
   Trying 127.0.0.1...
   Connected to localhost.
   Escape character is '^]'.
   
   ```

2. Turn on this feature in `functions_worker.yml`.

   ```text
   
   stateStorageServiceUrl: bk://:4181
   
   ```

   `bk-service-url` is the service URL pointing to the BookKeeper table service.

### Start Functions-worker with broker

Once you have configured the `functions_worker.yml` file, you can start or restart your broker.

Then you can use the following command to verify whether the `functions-worker` is running.
-

```bash

curl :8080/admin/v2/worker/cluster

```

After entering the command above, a list of active function workers in the cluster is returned. The output is similar to the following.

```json

[{"workerId":"","workerHostname":"","port":8080}]

```

## Run Functions-worker separately

This section illustrates how to run `functions-worker` as a separate process on separate machines.

![assets/functions-worker-separated.png](/assets/functions-worker-separated.png)

:::note

In this mode, make sure `functionsWorkerEnabled` is set to `false`, so you won't start `functions-worker` with brokers by mistake. Also, while accessing the `functions-worker` to manage any of the functions, the `pulsar-admin` CLI tool or any of the clients should use the `workerHostname` and `workerPort` that you set in [Worker parameters](#worker-parameters) to generate an `--admin-url`.

:::

### Configure Functions-worker to run separately

To run the functions-worker separately, you have to configure the following parameters.

#### Worker parameters

- `workerId`: A string that identifies a worker machine; it must be unique across the cluster.
- `workerHostname`: The hostname of the worker machine.
- `workerPort`: The port that the worker server listens on. Keep it as default if you don't customize it.
- `workerPortTls`: The TLS port that the worker server listens on. Keep it as default if you don't customize it.

#### Function package parameter

- `numFunctionPackageReplicas`: The number of replicas to store function packages. The default value is `1`.

#### Function metadata parameter

- `pulsarServiceUrl`: The Pulsar service URL for your broker cluster.
- `pulsarWebServiceUrl`: The Pulsar web service URL for your broker cluster.
- `pulsarFunctionsCluster`: Set the value to your Pulsar cluster name (same as the `clusterName` setting in the broker configuration).

If authentication is enabled for your broker cluster, you *should* configure the authentication plugin and parameters for the functions worker to communicate with the brokers.

- `clientAuthenticationPlugin`
- `clientAuthenticationParameters`

#### Customize Java runtime options

If you want to pass additional arguments to the JVM command line for every process started by a functions worker,
you can configure the `additionalJavaRuntimeArguments` parameter.

```

additionalJavaRuntimeArguments: ['-XX:+ExitOnOutOfMemoryError','-Dfoo=bar']

```

This is useful in case you want to:
- add JVM flags, like `-XX:+ExitOnOutOfMemoryError`
- pass custom system properties, like `-Dlog4j2.formatMsgNoLookups`

:::note

This feature applies only to the Process and Kubernetes runtimes.

:::

#### Security settings

If you want to enable security on functions workers, you *should*:
- [Enable TLS transport encryption](#enable-tls-transport-encryption)
- [Enable Authentication Provider](#enable-authentication-provider)
- [Enable Authorization Provider](#enable-authorization-provider)
- [Enable End-to-End Encryption](#enable-end-to-end-encryption)

##### Enable TLS transport encryption

To enable TLS transport encryption, configure the following settings.
-

```

useTLS: true
pulsarServiceUrl: pulsar+ssl://localhost:6651/
pulsarWebServiceUrl: https://localhost:8443

tlsEnabled: true
tlsCertificateFilePath: /path/to/functions-worker.cert.pem
tlsKeyFilePath: /path/to/functions-worker.key-pk8.pem
tlsTrustCertsFilePath: /path/to/ca.cert.pem

# The path to trusted certificates used by the Pulsar client to authenticate with Pulsar brokers
brokerClientTrustCertsFilePath: /path/to/ca.cert.pem

```

For details on TLS encryption, refer to [Transport Encryption using TLS](security-tls-transport.md).

##### Enable Authentication Provider

To enable authentication on the functions worker, you need to configure the following settings.

:::note

Substitute the *providers list* with the providers you want to enable.

:::

```

authenticationEnabled: true
authenticationProviders: [ provider1, provider2 ]

```

For the *TLS Authentication* provider, follow the example below to add the necessary settings.
See [TLS Authentication](security-tls-authentication.md) for more details.

```

brokerClientAuthenticationPlugin: org.apache.pulsar.client.impl.auth.AuthenticationTls
brokerClientAuthenticationParameters: tlsCertFile:/path/to/admin.cert.pem,tlsKeyFile:/path/to/admin.key-pk8.pem

authenticationEnabled: true
authenticationProviders: ['org.apache.pulsar.broker.authentication.AuthenticationProviderTls']

```

For the *SASL Authentication* provider, add `saslJaasClientAllowedIds` and `saslJaasBrokerSectionName`
under `properties` if needed.

```

properties:
  saslJaasClientAllowedIds: .*pulsar.*
  saslJaasBrokerSectionName: Broker

```

For the *Token Authentication* provider, add the necessary settings under `properties` if needed.
See [Token Authentication](security-jwt.md) for more details.
Note that key files must be DER-encoded.

```

properties:
  tokenSecretKey: file://my/secret.key
  # If using public/private
  # tokenPublicKey: file:///path/to/public.key

```

##### Enable Authorization Provider

To enable authorization on the functions worker, you need to configure `authorizationEnabled`, `authorizationProvider` and `configurationMetadataStoreUrl`. The authorization provider connects to `configurationMetadataStoreUrl` to receive namespace policies.

```yaml

authorizationEnabled: true
authorizationProvider: org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider
configurationMetadataStoreUrl: :

```

You should also configure a list of superuser roles. The superuser roles are able to access any admin API. The following is a configuration example.

```yaml

superUserRoles:
  - role1
  - role2
  - role3

```

##### Enable End-to-End Encryption

You can use the public and private key pair that the application configures to perform encryption. Only the consumers with a valid key can decrypt the encrypted messages.

To enable end-to-end encryption on the functions worker, specify `--producer-config` on the command line. For more information, refer to [Pulsar encryption](security-encryption.md).

The relevant `CryptoConfig` configuration is included in `ProducerConfig`.
The configurable fields of `CryptoConfig` are as follows:

```java

public class CryptoConfig {
    private String cryptoKeyReaderClassName;
    private Map<String, Object> cryptoKeyReaderConfig;

    private String[] encryptionKeys;
    private ProducerCryptoFailureAction producerCryptoFailureAction;

    private ConsumerCryptoFailureAction consumerCryptoFailureAction;
}

```

- `producerCryptoFailureAction`: defines the action to take if the producer fails to encrypt data; one of `FAIL` or `SEND`.
- `consumerCryptoFailureAction`: defines the action to take if the consumer fails to decrypt data; one of `FAIL`, `DISCARD`, or `CONSUME`.

#### BookKeeper Authentication

If authentication is enabled on the BookKeeper cluster, you need to configure the BookKeeper authentication settings as follows:

- `bookkeeperClientAuthenticationPlugin`: the plugin name of BookKeeper client authentication.
- `bookkeeperClientAuthenticationParametersName`: the plugin parameters name of BookKeeper client authentication.
- `bookkeeperClientAuthenticationParameters`: the plugin parameters of BookKeeper client authentication.

### Start Functions-worker

Once you have finished configuring the `functions_worker.yml` configuration file, you can start a `functions-worker` in the background by using [nohup](https://en.wikipedia.org/wiki/Nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:

```bash

bin/pulsar-daemon start functions-worker

```

You can also start `functions-worker` in the foreground by using the `pulsar` CLI tool:

```bash

bin/pulsar functions-worker

```

### Configure Proxies for Functions-workers

When you are running `functions-worker` in a separate cluster, the admin REST endpoints are split between two clusters. The `functions`, `function-worker`, `source` and `sink` endpoints are served
by the `functions-worker` cluster, while all other endpoints are served by the broker cluster.
Hence, you need to configure `pulsar-admin` to use the right service URL accordingly.

To address this inconvenience, you can start a proxy cluster that routes the admin REST requests accordingly, giving you one central entry point for your admin service.

If you already have a proxy cluster, continue reading. If you haven't set up a proxy cluster before, you can follow the [instructions](administration-proxy.md) to start proxies.

![assets/functions-worker-separated.png](/assets/functions-worker-separated-proxy.png)

To enable routing functions-related admin requests to the `functions-worker` in a proxy, you can edit the `proxy.conf` file to modify the following settings:

```conf

functionWorkerWebServiceURL=
functionWorkerWebServiceURLTLS=

```

## Compare the Run-with-Broker and Run-separately modes

As described above, you can run the Functions-worker with brokers, or run it separately. Running functions-workers along with brokers is more convenient, but running them in a separate cluster provides better resource isolation for functions running in `Process` or `Thread` mode.

To determine which mode fits your case, refer to the following guidelines.

Use the `Run-with-Broker` mode in the following cases:
- a) if resource isolation is not required when running functions in `Process` or `Thread` mode;
- b) if you configure the functions-worker to run functions on Kubernetes (where the resource isolation problem is addressed by Kubernetes).
-

Use the `Run-separately` mode in the following cases:
- a) you don't have a Kubernetes cluster;
- b) if you want to run functions and brokers separately.

## Troubleshooting

**Error message: Namespace missing local cluster name in clusters list**

```

Failed to get partitioned topic metadata: org.apache.pulsar.client.api.PulsarClientException$BrokerMetadataException: Namespace missing local cluster name in clusters list: local_cluster=xyz ns=public/functions clusters=[standalone]

```

This error message appears when either of the following cases occurs:
- a) a broker is started with `functionsWorkerEnabled=true`, but `pulsarFunctionsCluster` is not set to the correct cluster in the `conf/functions_worker.yml` file;
- b) a geo-replicated Pulsar cluster is set up with `functionsWorkerEnabled=true`, and while brokers in one cluster run well, brokers in the other cluster do not.

**Workaround**

If either of these cases occurs, follow the instructions below to fix the problem:

1. Disable the Functions Worker by setting `functionsWorkerEnabled=false`, and restart the brokers.

2. Get the current clusters list of the `public/functions` namespace.

```bash

bin/pulsar-admin namespaces get-clusters public/functions

```

3. Check if the cluster is in the clusters list. If the cluster is not in the list, add it and update the clusters list.

```bash

bin/pulsar-admin namespaces set-clusters --clusters , public/functions

```

4. After setting the cluster successfully, enable the functions worker by setting `functionsWorkerEnabled=true`.

5. Set the correct cluster name in `pulsarFunctionsCluster` in the `conf/functions_worker.yml` file, and restart the brokers.
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/getting-started-concepts-and-architecture.md b/site2/website/versioned_docs/version-2.10.1-deprecated/getting-started-concepts-and-architecture.md
deleted file mode 100644
index fe9c3fbc553b2c..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/getting-started-concepts-and-architecture.md
+++ /dev/null
@@ -1,16 +0,0 @@
----
-id: concepts-architecture
-title: Pulsar concepts and architecture
-sidebar_label: "Concepts and architecture"
-original_id: concepts-architecture
----
-
-
-
-
-
-
-
-
-
-
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/getting-started-docker.md b/site2/website/versioned_docs/version-2.10.1-deprecated/getting-started-docker.md
deleted file mode 100644
index 441a7b897278f3..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/getting-started-docker.md
+++ /dev/null
@@ -1,219 +0,0 @@
----
-id: getting-started-docker
-title: Set up a standalone Pulsar in Docker
-sidebar_label: "Run Pulsar in Docker"
-original_id: getting-started-docker
----

-For local development and testing, you can run Pulsar in standalone mode on your own machine within a Docker container.

-If you have not installed Docker, download the [Community edition](https://www.docker.com/community-edition) and follow the instructions for your OS.

-## Start Pulsar in Docker

-For macOS, Linux, and Windows, run the following command to start Pulsar within a Docker container.
-

```shell

$ docker run -it -p 6650:6650 -p 8080:8080 --mount source=pulsardata,target=/pulsar/data --mount source=pulsarconf,target=/pulsar/conf apachepulsar/pulsar:@pulsar:version@ bin/pulsar standalone

```

If you want to change Pulsar configurations and start Pulsar, run the following command by passing environment variables with the `PULSAR_PREFIX_` prefix. See the [default configuration file](https://github.com/apache/pulsar/blob/e6b12c64b043903eb5ff2dc5186fe8030f157cfc/conf/standalone.conf) for more details.

```shell

$ docker run -it -e PULSAR_PREFIX_xxx=yyy -p 6650:6650 -p 8080:8080 --mount source=pulsardata,target=/pulsar/data --mount source=pulsarconf,target=/pulsar/conf apachepulsar/pulsar:2.10.0 sh -c "bin/apply-config-from-env.py conf/standalone.conf && bin/pulsar standalone"

```

:::tip

* The docker container runs as UID 10000 and GID 0 by default. You need to ensure the mounted volumes give write permission to either UID 10000 or GID 0. Note that UID 10000 is arbitrary, so it is recommended to make these mounts writable for the root group (GID 0).
* The data, metadata, and configuration are persisted on Docker volumes in order to not start "fresh" every time the container is restarted. For details on the volumes, you can use `docker volume inspect `.
* For Docker on Windows, make sure to configure it to use Linux containers.

:::

After starting Pulsar successfully, you can see `INFO`-level log messages like this:

```

08:18:30.970 [main] INFO  org.apache.pulsar.broker.web.WebService - HTTP Service started at http://0.0.0.0:8080
...
07:53:37.322 [main] INFO  org.apache.pulsar.broker.PulsarService - messaging service is ready, bootstrap service port = 8080, broker url= pulsar://localhost:6650, cluster=standalone, configs=org.apache.pulsar.broker.ServiceConfiguration@98b63c1
...

```

:::tip

When you start a local standalone cluster, a `public/default` namespace is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics).

:::

## Use Pulsar in Docker

Pulsar offers a variety of [client libraries](client-libraries.md), such as [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python.md), and [C++](client-libraries-cpp.md).

If you're running a local standalone cluster, you can use one of these root URLs to interact with your cluster:
* `pulsar://localhost:6650`
* `http://localhost:8080`

The following example guides you to get started with Pulsar by using the [Python client API](client-libraries-python.md).
- -Install the Pulsar Python client library directly from [PyPI](https://pypi.org/project/pulsar-client/): - -```shell - -$ pip install pulsar-client - -``` - -### Consume a message - -Create a consumer and subscribe to the topic: - -```python - -import pulsar - -client = pulsar.Client('pulsar://localhost:6650') -consumer = client.subscribe('my-topic', - subscription_name='my-sub') - -while True: - msg = consumer.receive() - print("Received message: '%s'" % msg.data()) - consumer.acknowledge(msg) - -client.close() - -``` - -### Produce a message - -Start a producer to send some test messages: - -```python - -import pulsar - -client = pulsar.Client('pulsar://localhost:6650') -producer = client.create_producer('my-topic') - -for i in range(10): - producer.send(('hello-pulsar-%d' % i).encode('utf-8')) - -client.close() - -``` - -## Get the topic statistics - -In Pulsar, you can use REST API, Java, or command-line tools to control every aspect of the system. For details on APIs, refer to [Admin API Overview](admin-api-overview.md). - -In the simplest example, you can use curl to probe the stats for a particular topic: - -```shell - -$ curl http://localhost:8080/admin/v2/persistent/public/default/my-topic/stats | python -m json.tool - -``` - -The output is something like this: - -```json - -{ - "msgRateIn": 0.0, - "msgThroughputIn": 0.0, - "msgRateOut": 1.8332950480217471, - "msgThroughputOut": 91.33142602871978, - "bytesInCounter": 7097, - "msgInCounter": 143, - "bytesOutCounter": 6607, - "msgOutCounter": 133, - "averageMsgSize": 0.0, - "msgChunkPublished": false, - "storageSize": 7097, - "backlogSize": 0, - "offloadedStorageSize": 0, - "publishers": [ - { - "accessMode": "Shared", - "msgRateIn": 0.0, - "msgThroughputIn": 0.0, - "averageMsgSize": 0.0, - "chunkedMessageRate": 0.0, - "producerId": 0, - "metadata": {}, - "address": "/127.0.0.1:35604", - "connectedSince": "2021-07-04T09:05:43.04788Z", - "clientVersion": "2.8.0", - "producerName": "standalone-2-5" - } - ], - "waitingPublishers": 0, - "subscriptions": { - "my-sub": { - "msgRateOut": 1.8332950480217471, - "msgThroughputOut": 91.33142602871978, - "bytesOutCounter": 6607, - "msgOutCounter": 133, - "msgRateRedeliver": 0.0, - "chunkedMessageRate": 0, - "msgBacklog": 0, - "backlogSize": 0, - "msgBacklogNoDelayed": 0, - "blockedSubscriptionOnUnackedMsgs": false, - "msgDelayed": 0, - "unackedMessages": 0, - "type": "Exclusive", - "activeConsumerName": "3c544f1daa", - "msgRateExpired": 0.0, - "totalMsgExpired": 0, - "lastExpireTimestamp": 0, - "lastConsumedFlowTimestamp": 1625389101290, - "lastConsumedTimestamp": 1625389546070, - "lastAckedTimestamp": 1625389546162, - "lastMarkDeleteAdvancedTimestamp": 1625389546163, - "consumers": [ - { - "msgRateOut": 1.8332950480217471, - "msgThroughputOut": 91.33142602871978, - "bytesOutCounter": 6607, - "msgOutCounter": 133, - "msgRateRedeliver": 0.0, - "chunkedMessageRate": 0.0, - "consumerName": "3c544f1daa", - "availablePermits": 867, - "unackedMessages": 0, - "avgMessagesPerEntry": 6, - "blockedConsumerOnUnackedMsgs": false, - "lastAckedTimestamp": 1625389546162, - "lastConsumedTimestamp": 1625389546070, - "metadata": {}, - "address": "/127.0.0.1:35472", - "connectedSince": "2021-07-04T08:58:21.287682Z", - "clientVersion": "2.8.0" - } - ], - "isDurable": true, - "isReplicated": false, - "allowOutOfOrderDelivery": false, - "consumersAfterMarkDeletePosition": {}, - "nonContiguousDeletedMessagesRanges": 0, - "nonContiguousDeletedMessagesRangesSerializedSize": 0, - "durable": true, - "replicated": false 
- } - }, - "replication": {}, - "deduplicationStatus": "Disabled", - "nonContiguousDeletedMessagesRanges": 0, - "nonContiguousDeletedMessagesRangesSerializedSize": 0 -} - -``` - diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/getting-started-helm.md b/site2/website/versioned_docs/version-2.10.1-deprecated/getting-started-helm.md deleted file mode 100644 index 5d5401cc86a08b..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/getting-started-helm.md +++ /dev/null @@ -1,447 +0,0 @@ ---- -id: getting-started-helm -title: Get started in Kubernetes -sidebar_label: "Run Pulsar in Kubernetes" -original_id: getting-started-helm ---- - -This section guides you through every step of installing and running Apache Pulsar with Helm on Kubernetes quickly, including the following sections: - -- Install the Apache Pulsar on Kubernetes using Helm -- Start and stop Apache Pulsar -- Create topics using `pulsar-admin` -- Produce and consume messages using Pulsar clients -- Monitor Apache Pulsar status with Prometheus and Grafana - -For deploying a Pulsar cluster for production usage, read the documentation on [how to configure and install a Pulsar Helm chart](helm-deploy.md). - -## Prerequisite - -- Kubernetes server 1.14.0+ -- kubectl 1.14.0+ -- Helm 3.0+ - -:::tip - -For the following steps, step 2 and step 3 are for **developers** and step 4 and step 5 are for **administrators**. - -::: - -## Step 0: Prepare a Kubernetes cluster - -Before installing a Pulsar Helm chart, you have to create a Kubernetes cluster. You can follow [the instructions](helm-prepare.md) to prepare a Kubernetes cluster. - -We use [Minikube](https://minikube.sigs.k8s.io/docs/start/) in this quick start guide. To prepare a Kubernetes cluster, follow these steps: - -1. Create a Kubernetes cluster on Minikube. - - ```bash - - minikube start --memory=8192 --cpus=4 --kubernetes-version= - - ``` - - The `` can be any [Kubernetes version supported by your Minikube installation](https://minikube.sigs.k8s.io/docs/reference/configuration/kubernetes/), such as `v1.16.1`. - -2. Set `kubectl` to use Minikube. - - ```bash - - kubectl config use-context minikube - - ``` - -3. To use the [Kubernetes Dashboard](https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/) with the local Kubernetes cluster on Minikube, enter the command below: - - ```bash - - minikube dashboard - - ``` - - The command automatically triggers opening a webpage in your browser. - -## Step 1: Install Pulsar Helm chart - -1. Add Pulsar charts repo. - - ```bash - - helm repo add apache https://pulsar.apache.org/charts - - ``` - - ```bash - - helm repo update - - ``` - -2. Clone the Pulsar Helm chart repository. - - ```bash - - git clone https://github.com/apache/pulsar-helm-chart - cd pulsar-helm-chart - - ``` - -3. Run the script `prepare_helm_release.sh` to create secrets required for installing the Apache Pulsar Helm chart. The username `pulsar` and password `pulsar` are used for logging into the Grafana dashboard and Pulsar Manager. - - :::note - - When running the script, you can use `-n` to specify the Kubernetes namespace where the Pulsar Helm chart is installed, `-k` to define the Pulsar Helm release name, and `-c` to create the Kubernetes namespace. For more information about the script, run `./scripts/pulsar/prepare_helm_release.sh --help`. - - ::: - - ```bash - - ./scripts/pulsar/prepare_helm_release.sh \ - -n pulsar \ - -k pulsar-mini \ - -c - - ``` - -4. 
Use the Pulsar Helm chart to install a Pulsar cluster to Kubernetes. - - :::note - - You need to specify `--set initialize=true` when installing Pulsar the first time. This command installs and starts Apache Pulsar. - - ::: - - ```bash - - helm install \ - --values examples/values-minikube.yaml \ - --set initialize=true \ - --namespace pulsar \ - pulsar-mini apache/pulsar - - ``` - -5. Check the status of all pods. - - ```bash - - kubectl get pods -n pulsar - - ``` - - If all pods start up successfully, you can see that the `STATUS` is changed to `Running` or `Completed`. - - **Output** - - ```bash - - NAME READY STATUS RESTARTS AGE - pulsar-mini-bookie-0 1/1 Running 0 9m27s - pulsar-mini-bookie-init-5gphs 0/1 Completed 0 9m27s - pulsar-mini-broker-0 1/1 Running 0 9m27s - pulsar-mini-grafana-6b7bcc64c7-4tkxd 1/1 Running 0 9m27s - pulsar-mini-prometheus-5fcf5dd84c-w8mgz 1/1 Running 0 9m27s - pulsar-mini-proxy-0 1/1 Running 0 9m27s - pulsar-mini-pulsar-init-t7cqt 0/1 Completed 0 9m27s - pulsar-mini-pulsar-manager-9bcbb4d9f-htpcs 1/1 Running 0 9m27s - pulsar-mini-toolset-0 1/1 Running 0 9m27s - pulsar-mini-zookeeper-0 1/1 Running 0 9m27s - - ``` - -6. Check the status of all services in the namespace `pulsar`. - - ```bash - - kubectl get services -n pulsar - - ``` - - **Output** - - ```bash - - NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - pulsar-mini-bookie ClusterIP None 3181/TCP,8000/TCP 11m - pulsar-mini-broker ClusterIP None 8080/TCP,6650/TCP 11m - pulsar-mini-grafana LoadBalancer 10.106.141.246 3000:31905/TCP 11m - pulsar-mini-prometheus ClusterIP None 9090/TCP 11m - pulsar-mini-proxy LoadBalancer 10.97.240.109 80:32305/TCP,6650:31816/TCP 11m - pulsar-mini-pulsar-manager LoadBalancer 10.103.192.175 9527:30190/TCP 11m - pulsar-mini-toolset ClusterIP None 11m - pulsar-mini-zookeeper ClusterIP None 2888/TCP,3888/TCP,2181/TCP 11m - - ``` - -## Step 2: Use pulsar-admin to create Pulsar tenants/namespaces/topics - -`pulsar-admin` is the CLI (command-Line Interface) tool for Pulsar. In this step, you can use `pulsar-admin` to create resources, including tenants, namespaces, and topics. - -1. Enter the `toolset` container. - - ```bash - - kubectl exec -it -n pulsar pulsar-mini-toolset-0 -- /bin/bash - - ``` - -2. In the `toolset` container, create a tenant named `apache`. - - ```bash - - bin/pulsar-admin tenants create apache - - ``` - - Then you can list the tenants to see if the tenant is created successfully. - - ```bash - - bin/pulsar-admin tenants list - - ``` - - You should see a similar output as below. The tenant `apache` has been successfully created. - - ```bash - - "apache" - "public" - "pulsar" - - ``` - -3. In the `toolset` container, create a namespace named `pulsar` in the tenant `apache`. - - ```bash - - bin/pulsar-admin namespaces create apache/pulsar - - ``` - - Then you can list the namespaces of tenant `apache` to see if the namespace is created successfully. - - ```bash - - bin/pulsar-admin namespaces list apache - - ``` - - You should see a similar output as below. The namespace `apache/pulsar` has been successfully created. - - ```bash - - "apache/pulsar" - - ``` - -4. In the `toolset` container, create a topic `test-topic` with `4` partitions in the namespace `apache/pulsar`. - - ```bash - - bin/pulsar-admin topics create-partitioned-topic apache/pulsar/test-topic -p 4 - - ``` - -5. In the `toolset` container, list all the partitioned topics in the namespace `apache/pulsar`. 
- - ```bash - - bin/pulsar-admin topics list-partitioned-topics apache/pulsar - - ``` - - Then you can see all the partitioned topics in the namespace `apache/pulsar`. - - ```bash - - "persistent://apache/pulsar/test-topic" - - ``` - -## Step 3: Use Pulsar client to produce and consume messages - -You can use the Pulsar client to create producers and consumers to produce and consume messages. - -By default, the Pulsar Helm chart exposes the Pulsar cluster through a Kubernetes `LoadBalancer`. In Minikube, you can use the following command to check the proxy service. - -```bash - -kubectl get services -n pulsar | grep pulsar-mini-proxy - -``` - -You will see a similar output as below. - -```bash - -pulsar-mini-proxy LoadBalancer 10.97.240.109 80:32305/TCP,6650:31816/TCP 28m - -``` - -This output tells what are the node ports that Pulsar cluster's binary port and HTTP port are mapped to. The port after `80:` is the HTTP port while the port after `6650:` is the binary port. - -Then you can find the IP address and exposed ports of your Minikube server by running the following command. - -```bash - -minikube service pulsar-mini-proxy -n pulsar - -``` - -**Output** - -```bash - -|-----------|-------------------|-------------|-------------------------| -| NAMESPACE | NAME | TARGET PORT | URL | -|-----------|-------------------|-------------|-------------------------| -| pulsar | pulsar-mini-proxy | http/80 | http://172.17.0.4:32305 | -| | | pulsar/6650 | http://172.17.0.4:31816 | -|-----------|-------------------|-------------|-------------------------| -🏃 Starting tunnel for service pulsar-mini-proxy. -|-----------|-------------------|-------------|------------------------| -| NAMESPACE | NAME | TARGET PORT | URL | -|-----------|-------------------|-------------|------------------------| -| pulsar | pulsar-mini-proxy | | http://127.0.0.1:61853 | -| | | | http://127.0.0.1:61854 | -|-----------|-------------------|-------------|------------------------| - -``` - -At this point, you can get the service URLs to connect to your Pulsar client. Here are URL examples: - -``` - -webServiceUrl=http://127.0.0.1:61853/ -brokerServiceUrl=pulsar://127.0.0.1:61854/ - -``` - -Then you can proceed with the following steps: - -1. Download the Apache Pulsar tarball from the [downloads page](/download/). - -2. Decompress the tarball based on your download file. - - ```bash - - tar -xf .tar.gz - - ``` - -3. Expose `PULSAR_HOME`. - - (1) Enter the directory of the decompressed download file. - - (2) Expose `PULSAR_HOME` as the environment variable. - - ```bash - - export PULSAR_HOME=$(pwd) - - ``` - -4. Configure the Pulsar client. - - In the `${PULSAR_HOME}/conf/client.conf` file, replace `webServiceUrl` and `brokerServiceUrl` with the service URLs you get from the above steps. - -5. Create a subscription to consume messages from `apache/pulsar/test-topic`. - - ```bash - - bin/pulsar-client consume -s sub apache/pulsar/test-topic -n 0 - - ``` - -6. Open a new terminal. In the new terminal, create a producer and send 10 messages to the `test-topic` topic. - - ```bash - - bin/pulsar-client produce apache/pulsar/test-topic -m "---------hello apache pulsar-------" -n 10 - - ``` - -7. Verify the results. - - - From the producer side - - **Output** - - The messages have been produced successfully. 
- - ```bash - - 18:15:15.489 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 10 messages successfully produced - - ``` - - - From the consumer side - - **Output** - - At the same time, you can receive the messages as below. - - ```bash - - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - - ``` - -## Step 4: Use Pulsar Manager to manage the cluster - -[Pulsar Manager](administration-pulsar-manager.md) is a web-based GUI management tool for managing and monitoring Pulsar. - -1. By default, the `Pulsar Manager` is exposed as a separate `LoadBalancer`. You can open the Pulsar Manager UI using the following command: - - ```bash - - minikube service -n pulsar pulsar-mini-pulsar-manager - - ``` - -2. The Pulsar Manager UI will be open in your browser. You can use the username `pulsar` and password `pulsar` to log into Pulsar Manager. - -3. In Pulsar Manager UI, you can create an environment. - - - Click `New Environment` button in the top-left corner. - - Type `pulsar-mini` for the field `Environment Name` in the popup window. - - Type `http://pulsar-mini-broker:8080` for the field `Service URL` in the popup window. - - Click `Confirm` button in the popup window. - -4. After successfully creating an environment, you are redirected to the `tenants` page of that environment. Then you can create `tenants`, `namespaces` and `topics` using the Pulsar Manager. - -## Step 5: Use Prometheus and Grafana to monitor cluster - -Grafana is an open-source visualization tool, which can be used for visualizing time series data into dashboards. - -1. By default, the Grafana is exposed as a separate `LoadBalancer`. You can open the Grafana UI using the following command: - - ```bash - - minikube service pulsar-mini-grafana -n pulsar - - ``` - -2. The Grafana UI is open in your browser. You can use the username `pulsar` and password `pulsar` to log into the Grafana Dashboard. - -3. You can view dashboards for different components of a Pulsar cluster. diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/getting-started-pulsar.md b/site2/website/versioned_docs/version-2.10.1-deprecated/getting-started-pulsar.md deleted file mode 100644 index 752590f57b5585..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/getting-started-pulsar.md +++ /dev/null @@ -1,72 +0,0 @@ ---- -id: pulsar-2.0 -title: Pulsar 2.0 -sidebar_label: "Pulsar 2.0" -original_id: pulsar-2.0 ---- - -Pulsar 2.0 is a major new release for Pulsar that brings some bold changes to the platform, including [simplified topic names](#topic-names), the addition of the [Pulsar Functions](functions-overview.md) feature, some terminology changes, and more. 
-

## New features in Pulsar 2.0

Feature | Description
:-------|:-----------
[Pulsar Functions](functions-overview.md) | A lightweight compute option for Pulsar

## Major changes

There are a few major changes that you should be aware of, as they may significantly impact your day-to-day usage.

### Properties versus tenants

Previously, Pulsar had a concept of properties. A property is essentially the exact same thing as a tenant, so the "property" terminology has been removed in version 2.0. The [`pulsar-admin properties`](reference-pulsar-admin.md#pulsar-admin) command-line interface, for example, has been replaced with the [`pulsar-admin tenants`](reference-pulsar-admin.md#pulsar-admin-tenants) interface. In some cases the properties terminology is still used but is now considered deprecated and will be removed entirely in a future release.

### Topic names

Prior to version 2.0, *all* Pulsar topics had the following form:

```http

{persistent|non-persistent}://property/cluster/namespace/topic

```

The following important changes have been made in Pulsar 2.0:

* There is no longer a [cluster component](#no-cluster-component)
* Properties have been [renamed to tenants](#properties-versus-tenants)
* You can use a [flexible](#flexible-topic-naming) naming system to shorten many topic names
* `/` is not allowed in topic names

#### No cluster component

The cluster component has been removed from topic names. Thus, all topic names now have the following form:

```http

{persistent|non-persistent}://tenant/namespace/topic

```

> Existing topics that use the legacy name format will continue to work without any change, and there are no plans to change that.


#### Flexible topic naming

All topic names in Pulsar 2.0 internally have the form shown [above](#no-cluster-component), but you can now use shorthand names in many cases (for the sake of simplicity). The flexible naming system stems from the fact that there is now a default topic type, tenant, and namespace:

Topic aspect | Default
:------------|:-------
topic type | `persistent`
tenant | `public`
namespace | `default`

The table below shows some example topic name translations that use implicit defaults:

Input topic name | Translated topic name
:----------------|:---------------------
`my-topic` | `persistent://public/default/my-topic`
`my-tenant/my-namespace/my-topic` | `persistent://my-tenant/my-namespace/my-topic`

> For [non-persistent topics](concepts-messaging.md#non-persistent-topics) you'll need to continue to specify the entire topic name, as the default-based rules for persistent topic names don't apply. Thus you cannot use a shorthand name like `non-persistent://my-topic` and would need to use `non-persistent://public/default/my-topic` instead.

diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/getting-started-standalone.md b/site2/website/versioned_docs/version-2.10.1-deprecated/getting-started-standalone.md
deleted file mode 100644
index c60ba7e5cedba8..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/getting-started-standalone.md
+++ /dev/null
@@ -1,326 +0,0 @@
----
-id: getting-started-standalone
-title: Set up a standalone Pulsar locally
-sidebar_label: "Run Pulsar locally"
-original_id: getting-started-standalone
----

-For local development and testing, you can run Pulsar in standalone mode on your machine.
The standalone mode includes a Pulsar broker and the necessary [RocksDB](http://rocksdb.org/) and BookKeeper components running inside a single Java Virtual Machine (JVM) process.

> **Pulsar in production?**
> If you're looking to run a full production Pulsar installation, see the [Deploying a Pulsar instance](deploy-bare-metal.md) guide.

## Install Pulsar standalone

This tutorial guides you through every step of installing Pulsar locally.

### System requirements

Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install a 64-bit JRE/JDK 8 or later version.

:::tip

By default, Pulsar allocates 2G JVM heap memory to start. It can be changed in the `conf/pulsar_env.sh` file under `PULSAR_MEM`. These are extra options passed to the JVM.

:::

:::note

Broker is only supported on 64-bit JVM.

:::

#### Install JDK on M1
In the current version, Pulsar uses a BookKeeper version which in turn uses RocksDB. RocksDB is compiled to work on the x86 architecture and not ARM. Therefore, Pulsar can only work with an x86 JDK. This is planned to be fixed in future versions of Pulsar.

One of the ways to easily install an x86 JDK is to use [SDKMan](http://sdkman.io), as outlined in the following steps:

1. Install [SDKMan](http://sdkman.io).

   * Method 1: follow the instructions on the SDKMan website.

   * Method 2: if you have [Homebrew](https://brew.sh) installed, enter the following command.

```shell

brew install sdkman

```

2. Turn on Rosetta2 compatibility for SDKMan by editing `~/.sdkman/etc/config` and changing the following property from `false` to `true`.

```properties

sdkman_rosetta2_compatible=true

```

3. Close the current shell / terminal window and open a new one.
4. Make sure you don't have any previously installed JVM of the same version by listing the existing installed versions.

```shell

sdk list java|grep installed

```

Example output:

```text

 | >>> | 17.0.3.6.1 | amzn | installed | 17.0.3.6.1-amzn

```

If you have any Java 17 version installed, uninstall it.

```shell

sdk uninstall java 17.0.3.6.1-amzn

```

5. Install any Java version greater than Java 8.

```shell

 sdk install java 17.0.3.6.1-amzn

```

### Install Pulsar using binary release

To get started with Pulsar, download a binary tarball release in one of the following ways:

* download from the Apache mirror (Pulsar @pulsar:version@ binary release)

* download from the Pulsar [downloads page](pulsar:download_page_url)

* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)

* use [wget](https://www.gnu.org/software/wget):

  ```shell
  
  $ wget pulsar:binary_release_url
  
  ```

After you download the tarball, untar it and use the `cd` command to navigate to the resulting directory:

```bash

$ tar xvfz apache-pulsar-@pulsar:version@-bin.tar.gz
$ cd apache-pulsar-@pulsar:version@

```

#### What your package contains

The Pulsar binary package initially contains the following directories:

Directory | Contains
:---------|:--------
`bin` | Pulsar's command-line tools, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](/tools/pulsar-admin/).
`conf` | Configuration files for Pulsar, including [broker configuration](reference-configuration.md#broker) and more.<br />**Note:** Pulsar standalone uses RocksDB as the local metadata store and its configuration file path [`metadataStoreConfigPath`](reference-configuration.md) is configurable in the `standalone.conf` file. For more information about the configurations of RocksDB, see [here](https://github.com/facebook/rocksdb/blob/main/examples/rocksdb_option_file_example.ini) and the related [documentation](https://github.com/facebook/rocksdb/wiki/RocksDB-Tuning-Guide).
`examples` | A Java JAR file containing [Pulsar Functions](functions-overview.md) examples.
`instances` | Artifacts created for [Pulsar Functions](functions-overview.md).
`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files used by Pulsar.
`licenses` | License files, in the `.txt` form, for various components of the Pulsar [codebase](https://github.com/apache/pulsar).

These directories are created once you begin running Pulsar.

Directory | Contains
:---------|:--------
`data` | The data storage directory used by RocksDB and BookKeeper.
`logs` | Logs created by the installation.

:::tip

If you want to use builtin connectors and tiered storage offloaders, you can install them according to the following instructions:
* [Install builtin connectors (optional)](#install-builtin-connectors-optional)
* [Install tiered storage offloaders (optional)](#install-tiered-storage-offloaders-optional)
Otherwise, skip this step and perform the next step [Start Pulsar standalone](#start-pulsar-standalone). Pulsar can be successfully installed without installing builtin connectors and tiered storage offloaders.

:::

### Install builtin connectors (optional)

Since the `2.1.0-incubating` release, Pulsar releases a separate binary distribution, containing all the `builtin` connectors.
To enable those `builtin` connectors, you can download the connectors tarball release in one of the following ways:

* download from the Apache mirror Pulsar IO Connectors @pulsar:version@ release

* download from the Pulsar [downloads page](pulsar:download_page_url)

* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)

* use [wget](https://www.gnu.org/software/wget):

  ```shell
  
  $ wget pulsar:connector_release_url/{connector}-@pulsar:version@.nar
  
  ```

After you download the NAR file, copy the file to the `connectors` directory in the pulsar directory.
For example, if you download the `pulsar-io-aerospike-@pulsar:version@.nar` connector file, enter the following commands:

```bash

$ mkdir connectors
$ mv pulsar-io-aerospike-@pulsar:version@.nar connectors

$ ls connectors
pulsar-io-aerospike-@pulsar:version@.nar
...

```

:::note

* If you are running Pulsar in a bare metal cluster, make sure the `connectors` tarball is unzipped in every pulsar directory of the broker (or in every pulsar directory of the function-worker if you are running a separate worker cluster for Pulsar Functions).
* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DC/OS](https://dcos.io/)), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. The `apachepulsar/pulsar-all` image has already bundled [all builtin connectors](io-overview.md#working-with-connectors).

:::

### Install tiered storage offloaders (optional)

:::tip

- Since the `2.2.0` release, Pulsar releases a separate binary distribution, containing the tiered storage offloaders.
-- To enable tiered storage feature, follow the instructions below; otherwise skip this section. - -::: - -To get started with [tiered storage offloaders](concepts-tiered-storage.md), you need to download the offloaders tarball release on every broker node in one of the following ways: - -* download from the Apache mirror Pulsar Tiered Storage Offloaders @pulsar:version@ release - -* download from the Pulsar [downloads page](pulsar:download_page_url) - -* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) - -* use [wget](https://www.gnu.org/software/wget): - - ```shell - - $ wget pulsar:offloader_release_url - - ``` - -After you download the tarball, untar the offloaders package and copy the offloaders as `offloaders` -in the pulsar directory: - -```bash - -$ tar xvfz apache-pulsar-offloaders-@pulsar:version@-bin.tar.gz - -// you will find a directory named `apache-pulsar-offloaders-@pulsar:version@` in the pulsar directory -// then copy the offloaders - -$ mv apache-pulsar-offloaders-@pulsar:version@/offloaders offloaders - -$ ls offloaders -tiered-storage-jcloud-@pulsar:version@.nar - -``` - -For more information on how to configure tiered storage, see [Tiered storage cookbook](cookbooks-tiered-storage.md). - -:::note - -* If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's pulsar directory. -* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or DC/OS), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - -::: - -## Start Pulsar standalone - -Once you have an up-to-date local copy of the release, you can start a local cluster using the [`pulsar`](reference-cli-tools.md#pulsar) command, which is stored in the `bin` directory, and specifying that you want to start Pulsar in standalone mode. - -```bash - -$ bin/pulsar standalone - -``` - -If you have started Pulsar successfully, you will see `INFO`-level log messages like this: - -```bash - -21:59:29.327 [DLM-/stream/storage-OrderedScheduler-3-0] INFO org.apache.bookkeeper.stream.storage.impl.sc.StorageContainerImpl - Successfully started storage container (0). -21:59:34.576 [main] INFO org.apache.pulsar.broker.authentication.AuthenticationService - Authentication is disabled -21:59:34.576 [main] INFO org.apache.pulsar.websocket.WebSocketService - Pulsar WebSocket Service started - -``` - -:::tip - -* The service is running on your terminal, which is under your direct control. If you need to run other commands, open a new terminal window. - -::: - -You can also run the service as a background process using the `bin/pulsar-daemon start standalone` command. For more information, see [pulsar-daemon](reference-cli-tools.md#pulsar-daemon). -> -> * By default, there is no encryption, authentication, or authorization configured. Apache Pulsar can be accessed from remote server without any authorization. Please do check [Security Overview](security-overview.md) document to secure your deployment. -> -> * When you start a local standalone cluster, a `public/default` [namespace](concepts-messaging.md#namespaces) is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics). 
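If you want to confirm that the standalone cluster has come up correctly before using it, you can query it with [`pulsar-admin`](/tools/pulsar-admin/). The following is a minimal sanity check, assuming the default ports and that the commands are run from the Pulsar directory:

```bash

$ bin/pulsar-admin brokers healthcheck
$ bin/pulsar-admin clusters list
$ bin/pulsar-admin namespaces list public

```

The health check should report success, and the namespace listing should include the auto-created `public/default` namespace.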
- -## Use Pulsar standalone - -Pulsar provides a CLI tool called [`pulsar-client`](reference-cli-tools.md#pulsar-client). The pulsar-client tool enables you to consume and produce messages to a Pulsar topic in a running cluster. - -### Consume a message - -The following command consumes a message with the subscription name `first-subscription` to the `my-topic` topic: - -```bash - -$ bin/pulsar-client consume my-topic -s "first-subscription" - -``` - -If the message has been successfully consumed, you will see a confirmation like the following in the `pulsar-client` logs: - -``` - -22:17:16.781 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully consumed - -``` - -:::tip - -As you have noticed that we do not explicitly create the `my-topic` topic, from which we consume the message. When you consume a message from a topic that does not yet exist, Pulsar creates that topic for you automatically. Producing a message to a topic that does not exist will automatically create that topic for you as well. - -::: - -### Produce a message - -The following command produces a message saying `hello-pulsar` to the `my-topic` topic: - -```bash - -$ bin/pulsar-client produce my-topic --messages "hello-pulsar" - -``` - -If the message has been successfully published to the topic, you will see a confirmation like the following in the `pulsar-client` logs: - -``` - -22:21:08.693 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully produced - -``` - -## Stop Pulsar standalone - -Press `Ctrl+C` to stop a local standalone Pulsar. - -:::tip - -If the service runs as a background process using the `bin/pulsar-daemon start standalone` command, then use the `bin/pulsar-daemon stop standalone` command to stop the service. -For more information, see [pulsar-daemon](reference-cli-tools.md#pulsar-daemon). - -::: - diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/helm-deploy.md b/site2/website/versioned_docs/version-2.10.1-deprecated/helm-deploy.md deleted file mode 100644 index 0e7815e4f4d90b..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/helm-deploy.md +++ /dev/null @@ -1,434 +0,0 @@ ---- -id: helm-deploy -title: Deploy Pulsar cluster using Helm -sidebar_label: "Deployment" -original_id: helm-deploy ---- - -Before running `helm install`, you need to decide how to run Pulsar. -Options can be specified using Helm's `--set option.name=value` command line option. - -## Select configuration options - -In each section, collect the options that are combined to use with the `helm install` command. - -### Kubernetes namespace - -By default, the Pulsar Helm chart is installed to a namespace called `pulsar`. - -```yaml - -namespace: pulsar - -``` - -To install the Pulsar Helm chart into a different Kubernetes namespace, you can include this option in the `helm install` command. - -```bash - ---set namespace= - -``` - -By default, the Pulsar Helm chart doesn't create the namespace. - -```yaml - -namespaceCreate: false - -``` - -To use the Pulsar Helm chart to create the Kubernetes namespace automatically, you can include this option in the `helm install` command. - -```bash - ---set namespaceCreate=true - -``` - -### Persistence - -By default, the Pulsar Helm chart creates Volume Claims with the expectation that a dynamic provisioner creates the underlying Persistent Volumes. 
- -```yaml - -volumes: - persistence: true - # configure the components to use local persistent volume - # the local provisioner should be installed prior to enable local persistent volume - local_storage: false - -``` - -To use local persistent volumes as the persistent storage for Helm release, you can install the [local storage provisioner](#install-local-storage-provisioner) and include the following option in the `helm install` command. - -```bash - ---set volumes.local_storage=true - -``` - -:::note - -Before installing the production instance of Pulsar, ensure to plan the storage settings to avoid extra storage migration work. Because after initial installation, you must edit Kubernetes objects manually if you want to change storage settings. - -::: - -The Pulsar Helm chart is designed for production use. To use the Pulsar Helm chart in a development environment (such as Minikube), you can disable persistence by including this option in your `helm install` command. - -```bash - ---set volumes.persistence=false - -``` - -### Affinity - -By default, `anti-affinity` is enabled to ensure pods of the same component can run on different nodes. - -```yaml - -affinity: - anti_affinity: true - -``` - -To use the Pulsar Helm chart in a development environment (such as Minikube), you can disable `anti-affinity` by including this option in your `helm install` command. - -```bash - ---set affinity.anti_affinity=false - -``` - -### Components - -The Pulsar Helm chart is designed for production usage. It deploys a production-ready Pulsar cluster, including Pulsar core components and monitoring components. - -You can customize the components to be deployed by turning on/off individual components. - -```yaml - -## Components -## -## Control what components of Apache Pulsar to deploy for the cluster -components: - # zookeeper - zookeeper: true - # bookkeeper - bookkeeper: true - # bookkeeper - autorecovery - autorecovery: true - # broker - broker: true - # functions - functions: true - # proxy - proxy: true - # toolset - toolset: true - # pulsar manager - pulsar_manager: true - -## Monitoring Components -## -## Control what components of the monitoring stack to deploy for the cluster -monitoring: - # monitoring - prometheus - prometheus: true - # monitoring - grafana - grafana: true - -``` - -### Docker images - -The Pulsar Helm chart is designed to enable controlled upgrades. So it can configure independent image versions for components. You can customize the images by setting individual component. 
- -```yaml - -## Images -## -## Control what images to use for each component -images: - zookeeper: - repository: apachepulsar/pulsar-all - tag: 2.5.0 - pullPolicy: IfNotPresent - bookie: - repository: apachepulsar/pulsar-all - tag: 2.5.0 - pullPolicy: IfNotPresent - autorecovery: - repository: apachepulsar/pulsar-all - tag: 2.5.0 - pullPolicy: IfNotPresent - broker: - repository: apachepulsar/pulsar-all - tag: 2.5.0 - pullPolicy: IfNotPresent - proxy: - repository: apachepulsar/pulsar-all - tag: 2.5.0 - pullPolicy: IfNotPresent - functions: - repository: apachepulsar/pulsar-all - tag: 2.5.0 - prometheus: - repository: prom/prometheus - tag: v1.6.3 - pullPolicy: IfNotPresent - grafana: - repository: streamnative/apache-pulsar-grafana-dashboard-k8s - tag: 0.0.4 - pullPolicy: IfNotPresent - pulsar_manager: - repository: apachepulsar/pulsar-manager - tag: v0.1.0 - pullPolicy: IfNotPresent - hasCommand: false - -``` - -### TLS - -The Pulsar Helm chart can be configured to enable TLS (Transport Layer Security) to protect all the traffic between components. Before enabling TLS, you have to provision TLS certificates for the required components. - -#### Provision TLS certificates using cert-manager - -To use the `cert-manager` to provision the TLS certificates, you have to install the [cert-manager](#install-cert-manager) before installing the Pulsar Helm chart. After successfully installing the cert-manager, you can set `certs.internal_issuer.enabled` to `true`. Therefore, the Pulsar Helm chart can use the `cert-manager` to generate `selfsigning` TLS certificates for the configured components. - -```yaml - -certs: - internal_issuer: - enabled: false - component: internal-cert-issuer - type: selfsigning - -``` - -You can also customize the generated TLS certificates by configuring the fields as the following. - -```yaml - -tls: - # common settings for generating certs - common: - # 90d - duration: 2160h - # 15d - renewBefore: 360h - organization: - - pulsar - keySize: 4096 - keyAlgorithm: rsa - keyEncoding: pkcs8 - -``` - -#### Enable TLS - -After installing the `cert-manager`, you can set `tls.enabled` to `true` to enable TLS encryption for the entire cluster. - -```yaml - -tls: - enabled: false - -``` - -You can also configure whether to enable TLS encryption for individual component. - -```yaml - -tls: - # settings for generating certs for proxy - proxy: - enabled: false - cert_name: tls-proxy - # settings for generating certs for broker - broker: - enabled: false - cert_name: tls-broker - # settings for generating certs for bookies - bookie: - enabled: false - cert_name: tls-bookie - # settings for generating certs for zookeeper - zookeeper: - enabled: false - cert_name: tls-zookeeper - # settings for generating certs for recovery - autorecovery: - cert_name: tls-recovery - # settings for generating certs for toolset - toolset: - cert_name: tls-toolset - -``` - -### Authentication - -By default, authentication is disabled. You can set `auth.authentication.enabled` to `true` to enable authentication. -Currently, the Pulsar Helm chart only supports JWT authentication provider. You can set `auth.authentication.provider` to `jwt` to use the JWT authentication provider. - -```yaml - -# Enable or disable broker authentication and authorization. -auth: - authentication: - enabled: false - provider: "jwt" - jwt: - # Enable JWT authentication - # If the token is generated by a secret key, set the usingSecretKey as true. - # If the token is generated by a private key, set the usingSecretKey as false. 
- usingSecretKey: false - superUsers: - # broker to broker communication - broker: "broker-admin" - # proxy to broker communication - proxy: "proxy-admin" - # pulsar-admin client to broker/proxy communication - client: "admin" - -``` - -To enable authentication, you can run [prepare helm release](#prepare-the-helm-release) to generate token secret keys and tokens for three super users specified in the `auth.superUsers` field. The generated token keys and super user tokens are uploaded and stored as Kubernetes secrets prefixed with `-token-`. You can use the following command to find those secrets. - -```bash - -kubectl get secrets -n - -``` - -### Authorization - -By default, authorization is disabled. Authorization can be enabled only when authentication is enabled. - -```yaml - -auth: - authorization: - enabled: false - -``` - -To enable authorization, you can include this option in the `helm install` command. - -```bash - ---set auth.authorization.enabled=true - -``` - -### CPU and RAM resource requirements - -By default, the resource requests and the number of replicas for the Pulsar components in the Pulsar Helm chart are adequate for a small production deployment. If you deploy a non-production instance, you can reduce the defaults to fit into a smaller cluster. - -Once you have all of your configuration options collected, you can install dependent charts before installing the Pulsar Helm chart. - -## Install dependent charts - -### Install local storage provisioner - -To use local persistent volumes as the persistent storage, you need to install a storage provisioner for [local persistent volumes](https://kubernetes.io/blog/2019/04/04/kubernetes-1.14-local-persistent-volumes-ga/). - -One of the easiest way to get started is to use the local storage provisioner provided along with the Pulsar Helm chart. - -``` - -helm repo add streamnative https://charts.streamnative.io -helm repo update -helm install pulsar-storage-provisioner streamnative/local-storage-provisioner - -``` - -### Install cert-manager - -The Pulsar Helm chart uses the [cert-manager](https://github.com/jetstack/cert-manager) to provision and manage TLS certificates automatically. To enable TLS encryption for brokers or proxies, you need to install the cert-manager in advance. - -For details about how to install the cert-manager, follow the [official instructions](https://cert-manager.io/docs/installation/kubernetes/#installing-with-helm). - -Alternatively, we provide a bash script [install-cert-manager.sh](https://github.com/apache/pulsar-helm-chart/blob/master/scripts/cert-manager/install-cert-manager.sh) to install a cert-manager release to the namespace `cert-manager`. - -```bash - -git clone https://github.com/apache/pulsar-helm-chart -cd pulsar-helm-chart -./scripts/cert-manager/install-cert-manager.sh - -``` - -## Prepare Helm release - -Once you have install all the dependent charts and collected all of your configuration options, you can run [prepare_helm_release.sh](https://github.com/apache/pulsar-helm-chart/blob/master/scripts/pulsar/prepare_helm_release.sh) to prepare the Helm release. - -```bash - -git clone https://github.com/apache/pulsar-helm-chart -cd pulsar-helm-chart -./scripts/pulsar/prepare_helm_release.sh -n -k - -``` - -The `prepare_helm_release` creates the following resources: - -- A Kubernetes namespace for installing the Pulsar release -- JWT secret keys and tokens for three super users: `broker-admin`, `proxy-admin`, and `admin`. By default, it generates an asymmetric pubic/private key pair. 
You can choose to generate a symmetric secret key by specifying `--symmetric`. - - `proxy-admin` role is used for proxies to communicate to brokers. - - `broker-admin` role is used for inter-broker communications. - - `admin` role is used by the admin tools. - -## Deploy Pulsar cluster using Helm - -Once you have finished the following three things, you can install a Helm release. - -- Collect all of your configuration options. -- Install dependent charts. -- Prepare the Helm release. - -In this example, the Helm release is named `pulsar`. - -```bash - -helm repo add apache https://pulsar.apache.org/charts -helm repo update -helm install pulsar apache/pulsar \ - --timeout 10m \ - --set initialize=true \ - --set [your configuration options] - -``` - -:::note - -For the first deployment, add `--set initialize=true` option to initialize bookie and Pulsar cluster metadata. - -::: - -You can also use the `--version ` option if you want to install a specific version of Pulsar Helm chart. - -## Monitor deployment - -A list of installed resources are output once the Pulsar cluster is deployed. This may take 5-10 minutes. - -The status of the deployment can be checked by running the `helm status pulsar` command, which can also be done while the deployment is taking place if you run the command in another terminal. - -## Access Pulsar cluster - -The default values will create a `ClusterIP` for the following resources, which you can use to interact with the cluster. - -- Proxy: You can use the IP address to produce and consume messages to the installed Pulsar cluster. -- Pulsar Manager: You can access the Pulsar Manager UI at `http://:9527`. -- Grafana Dashboard: You can access the Grafana dashboard at `http://:3000`. - -To find the IP addresses of those components, run the following command: - -```bash - -kubectl get service -n - -``` - diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/helm-install.md b/site2/website/versioned_docs/version-2.10.1-deprecated/helm-install.md deleted file mode 100644 index 9f81f52e0dab18..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/helm-install.md +++ /dev/null @@ -1,38 +0,0 @@ ---- -id: helm-install -title: Install Apache Pulsar using Helm -sidebar_label: "Install" -original_id: helm-install ---- - -Install Apache Pulsar on Kubernetes with the official Pulsar Helm chart. - -## Requirements - -To deploy Apache Pulsar on Kubernetes, the followings are required. - -- kubectl 1.14 or higher, compatible with your cluster ([+/- 1 minor release from your cluster](https://kubernetes.io/docs/tasks/tools/install-kubectl/#before-you-begin)) -- Helm v3 (3.0.2 or higher) -- A Kubernetes cluster, version 1.14 or higher - -## Environment setup - -Before deploying Pulsar, you need to prepare your environment. - -### Tools - -Install [`helm`](helm-tools.md) and [`kubectl`](helm-tools.md) on your computer. - -## Cloud cluster preparation - -To create and connect to the Kubernetes cluster, follow the instructions: - -- [Google Kubernetes Engine](helm-prepare.md#google-kubernetes-engine) - -## Pulsar deployment - -Once the environment is set up and configuration is generated, you can now proceed to the [deployment of Pulsar](helm-deploy.md). - -## Pulsar upgrade - -To upgrade an existing Kubernetes installation, follow the [upgrade documentation](helm-upgrade.md). 
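Whether you are installing for the first time or upgrading, it is worth double-checking that your locally installed tools meet the version requirements above. A quick check, assuming `kubectl` and `helm` are already on your `PATH`:

```bash

kubectl version --client
helm version

```

Compare the reported versions against the requirements: kubectl 1.14 or higher and Helm 3.0.2 or higher.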
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/helm-overview.md b/site2/website/versioned_docs/version-2.10.1-deprecated/helm-overview.md deleted file mode 100644 index 125f595cbe68a3..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/helm-overview.md +++ /dev/null @@ -1,103 +0,0 @@ ---- -id: helm-overview -title: Apache Pulsar Helm Chart -sidebar_label: "Overview" -original_id: helm-overview ---- - -[Helm chart](https://github.com/apache/pulsar-helm-chart) supports you to install Apache Pulsar in a cloud-native environment. - -## Introduction - -The Apache Pulsar Helm chart provides one of the most convenient ways to operate Pulsar on Kubernetes. With all the required components, Helm chart is scalable and thus being suitable for large-scale deployments. - -The Apache Pulsar Helm chart contains all components to support the features and functions that Pulsar delivers. You can install and configure these components separately. - -- Pulsar core components: - - ZooKeeper - - Bookies - - Brokers - - Function workers - - Proxies -- Control center: - - Pulsar Manager - - Prometheus - - Grafana - -Moreover, Helm chart supports: - -- Security - - Automatically provisioned TLS certificates, using [Jetstack](https://www.jetstack.io/)'s [cert-manager](https://cert-manager.io/docs/) - - self-signed - - [Let's Encrypt](https://letsencrypt.org/) - - TLS Encryption - - Proxy - - Broker - - Toolset - - Bookie - - ZooKeeper - - Authentication - - JWT - - Authorization -- Storage - - Non-persistence storage - - Persistent volume - - Local persistent volumes -- Functions - - Kubernetes Runtime - - Process Runtime - - Thread Runtime -- Operations - - Independent image versions for all components, enabling controlled upgrades - -## Quick start - -To run with Apache Pulsar Helm chart as fast as possible in a **non-production** use case, we provide a [quick start guide](getting-started-helm.md) for Proof of Concept (PoC) deployments. - -This guide walks you through deploying Apache Pulsar Helm chart with default values and features, but it is *not* suitable for deployments in production-ready environments. To deploy the charts in production under sustained load, you can follow the complete [Installation Guide](helm-install.md). - -## Troubleshooting - -Although we have done our best to make these charts as seamless as possible, troubles do go out of our control occasionally. We have been collecting tips and tricks for troubleshooting common issues. Please check it first before raising an [issue](https://github.com/apache/pulsar/issues/new/choose), and feel free to add your solutions by creating a [Pull Request](https://github.com/apache/pulsar/compare). - -## Installation - -The Apache Pulsar Helm chart contains all required dependencies. - -If you deploy a PoC for testing, we strongly suggest you follow this [Quick Start Guide](getting-started-helm.md) for your first iteration. - -1. [Preparation](helm-prepare.md) -2. [Deployment](helm-deploy.md) - -## Upgrading - -Once the Apache Pulsar Helm chart is installed, you can use `helm upgrade` command to configure and update it. - -```bash - -helm repo add apache https://pulsar.apache.org/charts -helm repo update -helm get values > pulsar.yaml -helm upgrade apache/pulsar -f pulsar.yaml - -``` - -For more detailed information, see [Upgrading](helm-upgrade.md). 
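Note that `helm get values` and `helm upgrade` both take the release name. A fuller sketch of the sequence above, assuming the release is named `pulsar` and was installed into the `pulsar` namespace:

```bash

helm repo update
helm get values pulsar -n pulsar > pulsar.yaml
helm upgrade pulsar apache/pulsar -n pulsar -f pulsar.yaml

```

Passing the saved values file through `-f` avoids relying on `--reuse-values`, which is discouraged because previously set values may be deprecated in the new chart version.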
- -## Uninstallation - -To uninstall the Apache Pulsar Helm chart, run the following command: - -```bash - -helm delete - -``` - -For the purposes of continuity, some Kubernetes objects in these charts cannot be removed by `helm delete` command. It is recommended to *consciously* remove these items, as they affect re-deployment. - -* PVCs for stateful data: remove these items. - - ZooKeeper: This is your metadata. - - BookKeeper: This is your data. - - Prometheus: This is your metrics data, which can be safely removed. -* Secrets: if the secrets are generated by the [prepare release script](https://github.com/apache/pulsar-helm-chart/blob/master/scripts/pulsar/prepare_helm_release.sh), they contain secret keys and tokens. You can use the [cleanup release script](https://github.com/apache/pulsar-helm-chart/blob/master/scripts/pulsar/cleanup_helm_release.sh) to remove these secrets and tokens as needed. diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/helm-prepare.md b/site2/website/versioned_docs/version-2.10.1-deprecated/helm-prepare.md deleted file mode 100644 index e5d56c7e95e34b..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/helm-prepare.md +++ /dev/null @@ -1,80 +0,0 @@ ---- -id: helm-prepare -title: Prepare Kubernetes resources -sidebar_label: "Prepare" -original_id: helm-prepare ---- - -For a fully functional Pulsar cluster, you need a few resources before deploying the Apache Pulsar Helm chart. The following provides instructions to prepare the Kubernetes cluster before deploying the Pulsar Helm chart. - -- [Google Kubernetes Engine](#google-kubernetes-engine) - - [Manual cluster creation](#manual-cluster-creation) - - [Scripted cluster creation](#scripted-cluster-creation) - - [Create cluster with local SSDs](#create-cluster-with-local-ssds) - -## Google Kubernetes Engine - -To get started easier, a script is provided to create the cluster automatically. Alternatively, a cluster can be created manually as well. - -### Manual cluster creation - -To provision a Kubernetes cluster manually, follow the [GKE instructions](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-cluster). - -### Scripted cluster creation - -A [bootstrap script](https://github.com/streamnative/charts/tree/master/scripts/pulsar/gke_bootstrap_script.sh) has been created to automate much of the setup process for users on GCP/GKE. - -The script can: - -1. Create a new GKE cluster. -2. Allow the cluster to modify DNS (Domain Name Server) records. -3. Setup `kubectl`, and connect it to the cluster. - -Google Cloud SDK is a dependency of this script, so ensure it is [set up correctly](helm-tools.md#connect-to-a-gke-cluster) for the script to work. - -The script reads various parameters from environment variables and an argument `up` or `down` for bootstrap and clean-up respectively. - -The following table describes all variables. - -| **Variable** | **Description** | **Default value** | -| ------------ | --------------- | ----------------- | -| PROJECT | ID of your GCP project | No default value. It requires to be set. 
| -| CLUSTER_NAME | Name of the GKE cluster | `pulsar-dev` | -| CONFDIR | Configuration directory to store Kubernetes configuration | ${HOME}/.config/streamnative | -| INT_NETWORK | IP space to use within this cluster | `default` | -| LOCAL_SSD_COUNT | Number of local SSD counts | 4 | -| MACHINE_TYPE | Type of machine to use for nodes | `n1-standard-4` | -| NUM_NODES | Number of nodes to be created in each of the cluster's zones | 4 | -| PREEMPTIBLE | Create nodes using preemptible VM instances in the new cluster. | false | -| REGION | Compute region for the cluster | `us-east1` | -| USE_LOCAL_SSD | Flag to create a cluster with local SSDs | false | -| ZONE | Compute zone for the cluster | `us-east1-b` | -| ZONE_EXTENSION | The extension (`a`, `b`, `c`) of the zone name of the cluster | `b` | -| EXTRA_CREATE_ARGS | Extra arguments passed to create command | | - -Run the script, by passing in your desired parameters. It can work with the default parameters except for `PROJECT` which is required: - -```bash - -PROJECT= scripts/pulsar/gke_bootstrap_script.sh up - -``` - -The script can also be used to clean up the created GKE resources. - -```bash - -PROJECT= scripts/pulsar/gke_bootstrap_script.sh down - -``` - -#### Create cluster with local SSDs - -To install the Pulsar Helm chart using local persistent volumes, you need to create a GKE cluster with local SSDs. You can do so by specifying `USE_LOCAL_SSD` to be `true` in the following command to create a Pulsar cluster with local SSDs. - -``` - -PROJECT= USE_LOCAL_SSD=true LOCAL_SSD_COUNT= scripts/pulsar/gke_bootstrap_script.sh up - -``` - diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/helm-tools.md b/site2/website/versioned_docs/version-2.10.1-deprecated/helm-tools.md deleted file mode 100644 index 6ba89006913b64..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/helm-tools.md +++ /dev/null @@ -1,43 +0,0 @@ ---- -id: helm-tools -title: Required tools for deploying Pulsar Helm Chart -sidebar_label: "Required Tools" -original_id: helm-tools ---- - -Before deploying Pulsar to your Kubernetes cluster, there are some tools you must have installed locally. - -## kubectl - -kubectl is the tool that talks to the Kubernetes API. kubectl 1.14 or higher is required and it needs to be compatible with your cluster ([+/- 1 minor release from your cluster](https://kubernetes.io/docs/tasks/tools/install-kubectl/#before-you-begin)). - -To Install kubectl locally, follow the [Kubernetes documentation](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl). - -The server version of kubectl cannot be obtained until we connect to a cluster. - -## Helm - -Helm is the package manager for Kubernetes. The Apache Pulsar Helm Chart is tested and supported with Helm v3. - -### Get Helm - -You can get Helm from the project's [releases page](https://github.com/helm/helm/releases), or follow other options under the official documentation of [installing Helm](https://helm.sh/docs/intro/install/). - -### Next steps - -Once kubectl and Helm are configured, you can configure your [Kubernetes cluster](helm-prepare.md). - -## Additional information - -### Templates - -Templating in Helm is done through Golang's [text/template](https://golang.org/pkg/text/template/) and [sprig](https://godoc.org/github.com/Masterminds/sprig). 
- -For more information about how all the inner workings behave, check these documents: - -- [Functions and Pipelines](https://helm.sh/docs/chart_template_guide/functions_and_pipelines/) -- [Subcharts and Globals](https://helm.sh/docs/chart_template_guide/subcharts_and_globals/) - -### Tips and tricks - -For additional information on developing with Helm, check [tips and tricks section](https://helm.sh/docs/howto/charts_tips_and_tricks/) in the Helm repository. \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/helm-upgrade.md b/site2/website/versioned_docs/version-2.10.1-deprecated/helm-upgrade.md deleted file mode 100644 index 7d671e6bfb3c10..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/helm-upgrade.md +++ /dev/null @@ -1,43 +0,0 @@ ---- -id: helm-upgrade -title: Upgrade Pulsar Helm release -sidebar_label: "Upgrade" -original_id: helm-upgrade ---- - -Before upgrading your Pulsar installation, you need to check the change log corresponding to the specific release you want to upgrade to and look for any release notes that might pertain to the new Pulsar helm chart version. - -We also recommend that you need to provide all values using the `helm upgrade --set key=value` syntax or the `-f values.yml` instead of using `--reuse-values`, because some of the current values might be deprecated. - -:::note - -You can retrieve your previous `--set` arguments cleanly, with `helm get values `. If you direct this into a file (`helm get values > pulsar.yml`), you can safely pass this file through `-f`, namely `helm upgrade apache/pulsar -f pulsar.yaml`. This safely replaces the behavior of `--reuse-values`. - -::: - -## Steps - -To upgrade Apache Pulsar to a newer version, follow these steps: - -1. Check the change log for the specific version you would like to upgrade to. -2. Go through [deployment documentation](helm-deploy.md) step by step. -3. Extract your previous `--set` arguments with the following command. - - ```bash - - helm get values > pulsar.yaml - - ``` - -4. Decide all the values you need to set. -5. Perform the upgrade, with all `--set` arguments extracted in step 4. - - ```bash - - helm upgrade apache/pulsar \ - --version \ - -f pulsar.yaml \ - --set ... - - ``` - diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/io-aerospike-sink.md b/site2/website/versioned_docs/version-2.10.1-deprecated/io-aerospike-sink.md deleted file mode 100644 index 63d7338a3ba91c..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/io-aerospike-sink.md +++ /dev/null @@ -1,26 +0,0 @@ ---- -id: io-aerospike-sink -title: Aerospike sink connector -sidebar_label: "Aerospike sink connector" -original_id: io-aerospike-sink ---- - -The Aerospike sink connector pulls messages from Pulsar topics to Aerospike clusters. - -## Configuration - -The configuration of the Aerospike sink connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `seedHosts` |String| true | No default value| The comma-separated list of one or more Aerospike cluster hosts.

    Each host can be specified as a valid IP address or hostname followed by an optional port number. | -| `keyspace` | String| true |No default value |The Aerospike namespace. | -| `columnName` | String | true| No default value|The Aerospike column name. | -|`userName`|String|false|NULL|The Aerospike username.| -|`password`|String|false|NULL|The Aerospike password.| -| `keySet` | String|false |NULL | The Aerospike set name. | -| `maxConcurrentRequests` |int| false | 100 | The maximum number of concurrent Aerospike transactions that a sink can open. | -| `timeoutMs` | int|false | 100 | This property controls `socketTimeout` and `totalTimeout` for Aerospike transactions. | -| `retries` | int|false | 1 |The maximum number of retries before aborting a write transaction to Aerospike. | diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/io-canal-source.md b/site2/website/versioned_docs/version-2.10.1-deprecated/io-canal-source.md deleted file mode 100644 index d1fd43bb0f74e4..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/io-canal-source.md +++ /dev/null @@ -1,235 +0,0 @@ ---- -id: io-canal-source -title: Canal source connector -sidebar_label: "Canal source connector" -original_id: io-canal-source ---- - -The Canal source connector pulls messages from MySQL to Pulsar topics. - -## Configuration - -The configuration of Canal source connector has the following properties. - -### Property - -| Name | Required | Default | Description | -|------|----------|---------|-------------| -| `username` | true | None | Canal server account (not MySQL).| -| `password` | true | None | Canal server password (not MySQL). | -|`destination`|true|None|Source destination that Canal source connector connects to. -| `singleHostname` | false | None | Canal server address.| -| `singlePort` | false | None | Canal server port.| -| `cluster` | true | false | Whether to enable cluster mode based on Canal server configuration or not.

  325. true: **cluster** mode.
    If set to true, it talks to `zkServers` to figure out the actual database host.

  326. false: **standalone** mode.
    If set to false, it connects to the database specified by `singleHostname` and `singlePort`.
  327. | -| `zkServers` | true | None | Address and port of the Zookeeper that Canal source connector talks to figure out the actual database host.| -| `batchSize` | false | 1000 | Batch size to fetch from Canal. | - -### Example - -Before using the Canal connector, you can create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "zkServers": "127.0.0.1:2181", - "batchSize": "5120", - "destination": "example", - "username": "", - "password": "", - "cluster": false, - "singleHostname": "127.0.0.1", - "singlePort": "11111", - } - - ``` - -* YAML - - You can create a YAML file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/canal/src/main/resources/canal-mysql-source-config.yaml) below to your YAML file. - - ```yaml - - configs: - zkServers: "127.0.0.1:2181" - batchSize: 5120 - destination: "example" - username: "" - password: "" - cluster: false - singleHostname: "127.0.0.1" - singlePort: 11111 - - ``` - -## Usage - -Here is an example of storing MySQL data using the configuration file as above. - -1. Start a MySQL server. - - ```bash - - $ docker pull mysql:5.7 - $ docker run -d -it --rm --name pulsar-mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=canal -e MYSQL_USER=mysqluser -e MYSQL_PASSWORD=mysqlpw mysql:5.7 - - ``` - -2. Create a configuration file `mysqld.cnf`. - - ```bash - - [mysqld] - pid-file = /var/run/mysqld/mysqld.pid - socket = /var/run/mysqld/mysqld.sock - datadir = /var/lib/mysql - #log-error = /var/log/mysql/error.log - # By default we only accept connections from localhost - #bind-address = 127.0.0.1 - # Disabling symbolic-links is recommended to prevent assorted security risks - symbolic-links=0 - log-bin=mysql-bin - binlog-format=ROW - server_id=1 - - ``` - -3. Copy the configuration file `mysqld.cnf` to MySQL server. - - ```bash - - $ docker cp mysqld.cnf pulsar-mysql:/etc/mysql/mysql.conf.d/ - - ``` - -4. Restart the MySQL server. - - ```bash - - $ docker restart pulsar-mysql - - ``` - -5. Create a test database in MySQL server. - - ```bash - - $ docker exec -it pulsar-mysql /bin/bash - $ mysql -h 127.0.0.1 -uroot -pcanal -e 'create database test;' - - ``` - -6. Start a Canal server and connect to MySQL server. - - ``` - - $ docker pull canal/canal-server:v1.1.2 - $ docker run -d -it --link pulsar-mysql -e canal.auto.scan=false -e canal.destinations=test -e canal.instance.master.address=pulsar-mysql:3306 -e canal.instance.dbUsername=root -e canal.instance.dbPassword=canal -e canal.instance.connectionCharset=UTF-8 -e canal.instance.tsdb.enable=true -e canal.instance.gtidon=false --name=pulsar-canal-server -p 8000:8000 -p 2222:2222 -p 11111:11111 -p 11112:11112 -m 4096m canal/canal-server:v1.1.2 - - ``` - -7. Start Pulsar standalone. - - ```bash - - $ docker pull apachepulsar/pulsar:2.3.0 - $ docker run -d -it --link pulsar-canal-server -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-standalone apachepulsar/pulsar:2.3.0 bin/pulsar standalone - - ``` - -8. Modify the configuration file `canal-mysql-source-config.yaml`. - - ```yaml - - configs: - zkServers: "" - batchSize: "5120" - destination: "test" - username: "" - password: "" - cluster: false - singleHostname: "pulsar-canal-server" - singlePort: "11111" - - ``` - -9. Create a consumer file `pulsar-client.py`. 
- - ```python - - import pulsar - - client = pulsar.Client('pulsar://localhost:6650') - consumer = client.subscribe('my-topic', - subscription_name='my-sub') - - while True: - msg = consumer.receive() - print("Received message: '%s'" % msg.data()) - consumer.acknowledge(msg) - - client.close() - - ``` - -10. Copy the configuration file `canal-mysql-source-config.yaml` and the consumer file `pulsar-client.py` to Pulsar server. - - ```bash - - $ docker cp canal-mysql-source-config.yaml pulsar-standalone:/pulsar/conf/ - $ docker cp pulsar-client.py pulsar-standalone:/pulsar/ - - ``` - -11. Download a Canal connector and start it. - - ```bash - - $ docker exec -it pulsar-standalone /bin/bash - $ wget https://archive.apache.org/dist/pulsar/pulsar-2.3.0/connectors/pulsar-io-canal-2.3.0.nar -P connectors - $ ./bin/pulsar-admin source localrun \ - --archive ./connectors/pulsar-io-canal-2.3.0.nar \ - --classname org.apache.pulsar.io.canal.CanalStringSource \ - --tenant public \ - --namespace default \ - --name canal \ - --destination-topic-name my-topic \ - --source-config-file /pulsar/conf/canal-mysql-source-config.yaml \ - --parallelism 1 - - ``` - -12. Consume data from MySQL. - - ```bash - - $ docker exec -it pulsar-standalone /bin/bash - $ python pulsar-client.py - - ``` - -13. Open another window to log in MySQL server. - - ```bash - - $ docker exec -it pulsar-mysql /bin/bash - $ mysql -h 127.0.0.1 -uroot -pcanal - - ``` - -14. Create a table, and insert, delete, and update data in MySQL server. - - ```bash - - mysql> use test; - mysql> show tables; - mysql> CREATE TABLE IF NOT EXISTS `test_table`(`test_id` INT UNSIGNED AUTO_INCREMENT,`test_title` VARCHAR(100) NOT NULL, - `test_author` VARCHAR(40) NOT NULL, - `test_date` DATE,PRIMARY KEY ( `test_id` ))ENGINE=InnoDB DEFAULT CHARSET=utf8; - mysql> INSERT INTO test_table (test_title, test_author, test_date) VALUES("a", "b", NOW()); - mysql> UPDATE test_table SET test_title='c' WHERE test_title='a'; - mysql> DELETE FROM test_table WHERE test_title='c'; - - ``` - diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/io-cassandra-sink.md b/site2/website/versioned_docs/version-2.10.1-deprecated/io-cassandra-sink.md deleted file mode 100644 index d7f0e55abaa31e..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/io-cassandra-sink.md +++ /dev/null @@ -1,59 +0,0 @@ ---- -id: io-cassandra-sink -title: Cassandra sink connector -sidebar_label: "Cassandra sink connector" -original_id: io-cassandra-sink ---- - -The Cassandra sink connector pulls messages from Pulsar topics to Cassandra clusters. - -## Configuration - -The configuration of the Cassandra sink connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `roots` | String|true | " " (empty string) | A comma-separated list of Cassandra hosts to connect to.| -| `keyspace` | String|true| " " (empty string)| The key space used for writing pulsar messages.

    **Note: `keyspace` should be created prior to a Cassandra sink.**| -| `keyname` | String|true| " " (empty string)| The key name of the Cassandra column family.

    The column is used for storing Pulsar message keys.

    If a Pulsar message doesn't have any key associated, the message value is used as the key. | -| `columnFamily` | String|true| " " (empty string)| The Cassandra column family name.

    **Note: `columnFamily` should be created prior to a Cassandra sink.**| -| `columnName` | String|true| " " (empty string) | The column name of the Cassandra column family.

    The column is used for storing Pulsar message values. | - -### Example - -Before using the Cassandra sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "configs": { - "roots": "localhost:9042", - "keyspace": "pulsar_test_keyspace", - "columnFamily": "pulsar_test_table", - "keyname": "key", - "columnName": "col" - } - } - - ``` - -* YAML - - ``` - - configs: - roots: "localhost:9042" - keyspace: "pulsar_test_keyspace" - columnFamily: "pulsar_test_table" - keyname: "key" - columnName: "col" - - ``` - -## Usage - -For more information about **how to connect Pulsar with Cassandra**, see [here](io-quickstart.md#connect-pulsar-to-apache-cassandra). diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/io-cdc-debezium.md b/site2/website/versioned_docs/version-2.10.1-deprecated/io-cdc-debezium.md deleted file mode 100644 index 4558ae41d211b2..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/io-cdc-debezium.md +++ /dev/null @@ -1,549 +0,0 @@ ---- -id: io-cdc-debezium -title: Debezium source connector -sidebar_label: "Debezium source connector" -original_id: io-cdc-debezium ---- - -The Debezium source connector pulls messages from MySQL or PostgreSQL -and persists the messages to Pulsar topics. - -## Configuration - -The configuration of Debezium source connector has the following properties. - -| Name | Required | Default | Description | -|------|----------|---------|-------------| -| `task.class` | true | null | A source task class that implemented in Debezium. | -| `database.hostname` | true | null | The address of a database server. | -| `database.port` | true | null | The port number of a database server.| -| `database.user` | true | null | The name of a database user that has the required privileges. | -| `database.password` | true | null | The password for a database user that has the required privileges. | -| `database.server.id` | true | null | The connector’s identifier that must be unique within a database cluster and similar to the database’s server-id configuration property. | -| `database.server.name` | true | null | The logical name of a database server/cluster, which forms a namespace and it is used in all the names of Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the Avro Connector is used. | -| `database.whitelist` | false | null | A list of all databases hosted by this server which is monitored by the connector.

    This is optional, and there are other properties for listing databases and tables to include or exclude from monitoring. | -| `key.converter` | true | null | The converter provided by Kafka Connect to convert record key. | -| `value.converter` | true | null | The converter provided by Kafka Connect to convert record value. | -| `database.history` | true | null | The name of the database history class. | -| `database.history.pulsar.topic` | true | null | The name of the database history topic where the connector writes and recovers DDL statements.

    **Note: this topic is for internal use only and should not be used by consumers.** | -| `database.history.pulsar.service.url` | true | null | Pulsar cluster service URL for history topic. | -| `pulsar.service.url` | true | null | Pulsar cluster service URL for the offset topic used in Debezium. You can use the `bin/pulsar-admin --admin-url http://pulsar:8080 sources localrun --source-config-file configs/pg-pulsar-config.yaml` command to point to the target Pulsar cluster.| -| `offset.storage.topic` | true | null | Record the last committed offsets that the connector successfully completes. | -| `mongodb.hosts` | true | null | The comma-separated list of hostname and port pairs (in the form 'host' or 'host:port') of the MongoDB servers in the replica set. The list contains a single hostname and a port pair. If mongodb.members.auto.discover is set to false, the host and port pair are prefixed with the replica set name (e.g., rs0/localhost:27017). | -| `mongodb.name` | true | null | A unique name that identifies the connector and/or MongoDB replica set or shared cluster that this connector monitors. Each server should be monitored by at most one Debezium connector, since this server name prefixes all persisted Kafka topics emanating from the MongoDB replica set or cluster. | -| `mongodb.user` | true | null | Name of the database user to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. | -| `mongodb.password` | true | null | Password to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. | -| `mongodb.task.id` | true | null | The taskId of the MongoDB connector that attempts to use a separate task for each replica set. | - - - -## Example of MySQL - -You need to create a configuration file before using the Pulsar Debezium connector. - -### Configuration - -You can use one of the following methods to create a configuration file. - -* JSON - - ```json - - { - "configs": { - "database.hostname": "localhost", - "database.port": "3306", - "database.user": "debezium", - "database.password": "dbz", - "database.server.id": "184054", - "database.server.name": "dbserver1", - "database.whitelist": "inventory", - "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory", - "database.history.pulsar.topic": "history-topic", - "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650", - "key.converter": "org.apache.kafka.connect.json.JsonConverter", - "value.converter": "org.apache.kafka.connect.json.JsonConverter", - "pulsar.service.url": "pulsar://127.0.0.1:6650", - "offset.storage.topic": "offset-topic" - } - } - - ``` - -* YAML - - You can create a `debezium-mysql-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/resources/debezium-mysql-source-config.yaml) below to the `debezium-mysql-source-config.yaml` file. 
- - ```yaml - - tenant: "public" - namespace: "default" - name: "debezium-mysql-source" - topicName: "debezium-mysql-topic" - archive: "connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar" - parallelism: 1 - - configs: - - ## config for mysql, docker image: debezium/example-mysql:0.8 - database.hostname: "localhost" - database.port: "3306" - database.user: "debezium" - database.password: "dbz" - database.server.id: "184054" - database.server.name: "dbserver1" - database.whitelist: "inventory" - database.history: "org.apache.pulsar.io.debezium.PulsarDatabaseHistory" - database.history.pulsar.topic: "history-topic" - database.history.pulsar.service.url: "pulsar://127.0.0.1:6650" - - ## KEY_CONVERTER_CLASS_CONFIG, VALUE_CONVERTER_CLASS_CONFIG - key.converter: "org.apache.kafka.connect.json.JsonConverter" - value.converter: "org.apache.kafka.connect.json.JsonConverter" - - ## PULSAR_SERVICE_URL_CONFIG - pulsar.service.url: "pulsar://127.0.0.1:6650" - - ## OFFSET_STORAGE_TOPIC_CONFIG - offset.storage.topic: "offset-topic" - - ``` - -### Usage - -This example shows how to change the data of a MySQL table using the Pulsar Debezium connector. - -1. Start a MySQL server with a database from which Debezium can capture changes. - - ```bash - - $ docker run -it --rm \ - --name mysql \ - -p 3306:3306 \ - -e MYSQL_ROOT_PASSWORD=debezium \ - -e MYSQL_USER=mysqluser \ - -e MYSQL_PASSWORD=mysqlpw debezium/example-mysql:0.8 - - ``` - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - -3. Start the Pulsar Debezium connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - Make sure the nar file is available at `connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar`. - - ```bash - - $ bin/pulsar-admin source localrun \ - --archive connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar \ - --name debezium-mysql-source --destination-topic-name debezium-mysql-topic \ - --tenant public \ - --namespace default \ - --source-config '{"database.hostname": "localhost","database.port": "3306","database.user": "debezium","database.password": "dbz","database.server.id": "184054","database.server.name": "dbserver1","database.whitelist": "inventory","database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory","database.history.pulsar.topic": "history-topic","database.history.pulsar.service.url": "pulsar://127.0.0.1:6650","key.converter": "org.apache.kafka.connect.json.JsonConverter","value.converter": "org.apache.kafka.connect.json.JsonConverter","pulsar.service.url": "pulsar://127.0.0.1:6650","offset.storage.topic": "offset-topic"}' - - ``` - - * Use the **YAML** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin source localrun \ - --source-config-file debezium-mysql-source-config.yaml - - ``` - -4. Subscribe the topic _sub-products_ for the table _inventory.products_. - - ```bash - - $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0 - - ``` - -5. Start a MySQL client in docker. - - ```bash - - $ docker run -it --rm \ - --name mysqlterm \ - --link mysql \ - --rm mysql:5.7 sh \ - -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"' - - ``` - -6. A MySQL client pops out. - - Use the following commands to change the data of the table _products_. 
- - ``` - - mysql> use inventory; - mysql> show tables; - mysql> SELECT * FROM products; - mysql> UPDATE products SET name='1111111111' WHERE id=101; - mysql> UPDATE products SET name='1111111111' WHERE id=107; - - ``` - - In the terminal window of subscribing topic, you can find the data changes have been kept in the _sub-products_ topic. - -## Example of PostgreSQL - -You need to create a configuration file before using the Pulsar Debezium connector. - -### Configuration - -You can use one of the following methods to create a configuration file. - -* JSON - - ```json - - { - "configs": { - "database.hostname": "localhost", - "database.port": "5432", - "database.user": "postgres", - "database.password": "postgres", - "database.dbname": "postgres", - "database.server.name": "dbserver1", - "schema.whitelist": "inventory", - "pulsar.service.url": "pulsar://127.0.0.1:6650" - } - } - - ``` - -* YAML - - You can create a `debezium-postgres-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/resources/debezium-postgres-source-config.yaml) below to the `debezium-postgres-source-config.yaml` file. - - ```yaml - - tenant: "public" - namespace: "default" - name: "debezium-postgres-source" - topicName: "debezium-postgres-topic" - archive: "connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar" - parallelism: 1 - - configs: - - ## config for pg, docker image: debezium/example-postgress:0.8 - database.hostname: "localhost" - database.port: "5432" - database.user: "postgres" - database.password: "postgres" - database.dbname: "postgres" - database.server.name: "dbserver1" - schema.whitelist: "inventory" - - ## PULSAR_SERVICE_URL_CONFIG - pulsar.service.url: "pulsar://127.0.0.1:6650" - - ``` - -### Usage - -This example shows how to change the data of a PostgreSQL table using the Pulsar Debezium connector. - - -1. Start a PostgreSQL server with a database from which Debezium can capture changes. - - ```bash - - $ docker pull debezium/example-postgres:0.8 - $ docker run -d -it --rm --name pulsar-postgresql -p 5432:5432 debezium/example-postgres:0.8 - - ``` - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - -3. Start the Pulsar Debezium connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - Make sure the nar file is available at `connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar`. - - ```bash - - $ bin/pulsar-admin source localrun \ - --archive connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar \ - --name debezium-postgres-source \ - --destination-topic-name debezium-postgres-topic \ - --tenant public \ - --namespace default \ - --source-config '{"database.hostname": "localhost","database.port": "5432","database.user": "postgres","database.password": "postgres","database.dbname": "postgres","database.server.name": "dbserver1","schema.whitelist": "inventory","pulsar.service.url": "pulsar://127.0.0.1:6650"}' - - ``` - - * Use the **YAML** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin source localrun \ - --source-config-file debezium-postgres-source-config.yaml - - ``` - -4. Subscribe the topic _sub-products_ for the _inventory.products_ table. - - ``` - - $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0 - - ``` - -5. Start a PostgreSQL client in docker. 
- - ```bash - - $ docker exec -it pulsar-postgresql /bin/bash - - ``` - -6. A PostgreSQL client pops out. - - Use the following commands to change the data of the table _products_. - - ``` - - psql -U postgres postgres - postgres=# \c postgres; - You are now connected to database "postgres" as user "postgres". - postgres=# SET search_path TO inventory; - SET - postgres=# select * from products; - id | name | description | weight - -----+--------------------+---------------------------------------------------------+-------- - 102 | car battery | 12V car battery | 8.1 - 103 | 12-pack drill bits | 12-pack of drill bits with sizes ranging from #40 to #3 | 0.8 - 104 | hammer | 12oz carpenter's hammer | 0.75 - 105 | hammer | 14oz carpenter's hammer | 0.875 - 106 | hammer | 16oz carpenter's hammer | 1 - 107 | rocks | box of assorted rocks | 5.3 - 108 | jacket | water resistent black wind breaker | 0.1 - 109 | spare tire | 24 inch spare tire | 22.2 - 101 | 1111111111 | Small 2-wheel scooter | 3.14 - (9 rows) - - postgres=# UPDATE products SET name='1111111111' WHERE id=107; - UPDATE 1 - - ``` - - In the terminal window of subscribing topic, you can receive the following messages. - - ```bash - - ----- got message ----- - {"schema":{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":107}}�{"schema":{"type":"struct","fields":[{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":false,"field":"name"},{"type":"string","optional":true,"field":"description"},{"type":"double","optional":true,"field":"weight"}],"optional":true,"name":"dbserver1.inventory.products.Value","field":"before"},{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":false,"field":"name"},{"type":"string","optional":true,"field":"description"},{"type":"double","optional":true,"field":"weight"}],"optional":true,"name":"dbserver1.inventory.products.Value","field":"after"},{"type":"struct","fields":[{"type":"string","optional":true,"field":"version"},{"type":"string","optional":true,"field":"connector"},{"type":"string","optional":false,"field":"name"},{"type":"string","optional":false,"field":"db"},{"type":"int64","optional":true,"field":"ts_usec"},{"type":"int64","optional":true,"field":"txId"},{"type":"int64","optional":true,"field":"lsn"},{"type":"string","optional":true,"field":"schema"},{"type":"string","optional":true,"field":"table"},{"type":"boolean","optional":true,"default":false,"field":"snapshot"},{"type":"boolean","optional":true,"field":"last_snapshot_record"}],"optional":false,"name":"io.debezium.connector.postgresql.Source","field":"source"},{"type":"string","optional":false,"field":"op"},{"type":"int64","optional":true,"field":"ts_ms"}],"optional":false,"name":"dbserver1.inventory.products.Envelope"},"payload":{"before":{"id":107,"name":"rocks","description":"box of assorted rocks","weight":5.3},"after":{"id":107,"name":"1111111111","description":"box of assorted rocks","weight":5.3},"source":{"version":"0.9.2.Final","connector":"postgresql","name":"dbserver1","db":"postgres","ts_usec":1559208957661080,"txId":577,"lsn":23862872,"schema":"inventory","table":"products","snapshot":false,"last_snapshot_record":null},"op":"u","ts_ms":1559208957692}} - - ``` - -## Example of MongoDB - -You need to create a configuration file before using the Pulsar Debezium connector. 
-
-## Example of MongoDB
-
-You need to create a configuration file before using the Pulsar Debezium connector.
-
-### Configuration
-
-You can use one of the following methods to create a configuration file.
-
-* JSON
-
-  ```json
-  
-  {
-     "configs": {
-        "mongodb.hosts": "rs0/mongodb:27017",
-        "mongodb.name": "dbserver1",
-        "mongodb.user": "debezium",
-        "mongodb.password": "dbz",
-        "mongodb.task.id": "1",
-        "database.whitelist": "inventory",
-        "pulsar.service.url": "pulsar://127.0.0.1:6650"
-     }
-  }
-  
-  ```
-
-* YAML
-
-  You can create a `debezium-mongodb-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mongodb/src/main/resources/debezium-mongodb-source-config.yaml) below to the `debezium-mongodb-source-config.yaml` file.
-
-  ```yaml
-  
-  tenant: "public"
-  namespace: "default"
-  name: "debezium-mongodb-source"
-  topicName: "debezium-mongodb-topic"
-  archive: "connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar"
-  parallelism: 1
-  
-  configs:
-  
-    ## config for mongodb, docker image: debezium/example-mongodb:0.10
-    mongodb.hosts: "rs0/mongodb:27017"
-    mongodb.name: "dbserver1"
-    mongodb.user: "debezium"
-    mongodb.password: "dbz"
-    mongodb.task.id: "1"
-    database.whitelist: "inventory"
-  
-    ## PULSAR_SERVICE_URL_CONFIG
-    pulsar.service.url: "pulsar://127.0.0.1:6650"
-  
-  ```
-
-### Usage
-
-This example shows how to change the data of a MongoDB collection using the Pulsar Debezium connector.
-
-
-1. Start a MongoDB server with a database from which Debezium can capture changes.
-
-   ```bash
-   
-   $ docker pull debezium/example-mongodb:0.10
-   $ docker run -d -it --rm --name pulsar-mongodb -e MONGODB_USER=mongodb -e MONGODB_PASSWORD=mongodb -p 27017:27017 debezium/example-mongodb:0.10
-   
-   ```
-
-   Use the following commands to initialize the data.
-
-   ```bash
-   
-   ./usr/local/bin/init-inventory.sh
-   
-   ```
-
-   If the local host cannot access the container network, you can update the `/etc/hosts` file and add a rule such as `127.0.0.1 f114527a95f`, where `f114527a95f` is the container ID. You can get the container ID with `docker ps -a`.
-
-
-2. Start a Pulsar service locally in standalone mode.
-
-   ```bash
-   
-   $ bin/pulsar standalone
-   
-   ```
-
-3. Start the Pulsar Debezium connector in local run mode using one of the following methods.
-
-   * Use the **JSON** configuration file as shown previously.
-
-     Make sure the nar file is available at `connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar`.
-
-     ```bash
-     
-     $ bin/pulsar-admin source localrun \
-     --archive connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar \
-     --name debezium-mongodb-source \
-     --destination-topic-name debezium-mongodb-topic \
-     --tenant public \
-     --namespace default \
-     --source-config '{"mongodb.hosts": "rs0/mongodb:27017","mongodb.name": "dbserver1","mongodb.user": "debezium","mongodb.password": "dbz","mongodb.task.id": "1","database.whitelist": "inventory","pulsar.service.url": "pulsar://127.0.0.1:6650"}'
-     
-     ```
-
-   * Use the **YAML** configuration file as shown previously.
-
-     ```bash
-     
-     $ bin/pulsar-admin source localrun \
-     --source-config-file debezium-mongodb-source-config.yaml
-     
-     ```
-
-4. Subscribe to the topic _sub-products_ for the _inventory.products_ collection.
-
-   ```
-   
-   $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0
-   
-   ```
-
-5. Start a MongoDB client in docker.
-
-   ```bash
-   
-   $ docker exec -it pulsar-mongodb /bin/bash
-   
-   ```
-
-6. A MongoDB client session opens.
-
-   ```bash
-   
-   mongo -u debezium -p dbz --authenticationDatabase admin localhost:27017/inventory
-   db.products.update({"_id":NumberLong(104)},{$set:{weight:1.25}})
-   
-   ```
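-
-   Before checking the topic, you can optionally confirm that the update has landed in MongoDB itself. The following is a minimal check run from the host; it assumes the `pulsar-mongodb` container and the credentials from step 1, and the `--eval` form is just one way to run the query:
-
-   ```bash
-   
-   $ docker exec -it pulsar-mongodb \
-     mongo -u debezium -p dbz --authenticationDatabase admin localhost:27017/inventory \
-     --eval 'db.products.find({"_id": NumberLong(104)}).pretty()'
-   
-   ```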
-
-   In the terminal window where you subscribed to the topic, you can receive the following messages.
-
-   ```bash
-   
-   ----- got message -----
-   {"schema":{"type":"struct","fields":[{"type":"string","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":"104"}}, value = {"schema":{"type":"struct","fields":[{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"after"},{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"patch"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"version"},{"type":"string","optional":false,"field":"connector"},{"type":"string","optional":false,"field":"name"},{"type":"int64","optional":false,"field":"ts_ms"},{"type":"string","optional":true,"name":"io.debezium.data.Enum","version":1,"parameters":{"allowed":"true,last,false"},"default":"false","field":"snapshot"},{"type":"string","optional":false,"field":"db"},{"type":"string","optional":false,"field":"rs"},{"type":"string","optional":false,"field":"collection"},{"type":"int32","optional":false,"field":"ord"},{"type":"int64","optional":true,"field":"h"}],"optional":false,"name":"io.debezium.connector.mongo.Source","field":"source"},{"type":"string","optional":true,"field":"op"},{"type":"int64","optional":true,"field":"ts_ms"}],"optional":false,"name":"dbserver1.inventory.products.Envelope"},"payload":{"after":"{\"_id\": {\"$numberLong\": \"104\"},\"name\": \"hammer\",\"description\": \"12oz carpenter's hammer\",\"weight\": 1.25,\"quantity\": 4}","patch":null,"source":{"version":"0.10.0.Final","connector":"mongodb","name":"dbserver1","ts_ms":1573541905000,"snapshot":"true","db":"inventory","rs":"rs0","collection":"products","ord":1,"h":4983083486544392763},"op":"r","ts_ms":1573541909761}}.
-   
-   ```
-
-## FAQ
-
-### Debezium PostgreSQL connector hangs when creating a snapshot
-
-```
-
-#18 prio=5 os_prio=31 tid=0x00007fd83096f800 nid=0xa403 waiting on condition [0x000070000f534000]
-   java.lang.Thread.State: WAITING (parking)
-    at sun.misc.Unsafe.park(Native Method)
-    - parking to wait for  <0x00000007ab025a58> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
-    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
-    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
-    at java.util.concurrent.LinkedBlockingDeque.putLast(LinkedBlockingDeque.java:396)
-    at java.util.concurrent.LinkedBlockingDeque.put(LinkedBlockingDeque.java:649)
-    at io.debezium.connector.base.ChangeEventQueue.enqueue(ChangeEventQueue.java:132)
-    at io.debezium.connector.postgresql.PostgresConnectorTask$Lambda$203/385424085.accept(Unknown Source)
-    at io.debezium.connector.postgresql.RecordsSnapshotProducer.sendCurrentRecord(RecordsSnapshotProducer.java:402)
-    at io.debezium.connector.postgresql.RecordsSnapshotProducer.readTable(RecordsSnapshotProducer.java:321)
-    at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$takeSnapshot$6(RecordsSnapshotProducer.java:226)
-    at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$240/1347039967.accept(Unknown Source)
-    at io.debezium.jdbc.JdbcConnection.queryWithBlockingConsumer(JdbcConnection.java:535)
-    at io.debezium.connector.postgresql.RecordsSnapshotProducer.takeSnapshot(RecordsSnapshotProducer.java:224)
-    at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$start$0(RecordsSnapshotProducer.java:87)
-    at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$206/589332928.run(Unknown Source)
-    at java.util.concurrent.CompletableFuture.uniRun(CompletableFuture.java:705)
-    at java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:717)
-    at java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2010)
-    at io.debezium.connector.postgresql.RecordsSnapshotProducer.start(RecordsSnapshotProducer.java:87)
-    at io.debezium.connector.postgresql.PostgresConnectorTask.start(PostgresConnectorTask.java:126)
-    at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:47)
-    at org.apache.pulsar.io.kafka.connect.KafkaConnectSource.open(KafkaConnectSource.java:127)
-    at org.apache.pulsar.io.debezium.DebeziumSource.open(DebeziumSource.java:100)
-    at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupInput(JavaInstanceRunnable.java:690)
-    at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupJavaInstance(JavaInstanceRunnable.java:200)
-    at org.apache.pulsar.functions.instance.JavaInstanceRunnable.run(JavaInstanceRunnable.java:230)
-    at java.lang.Thread.run(Thread.java:748)
-
-```
-
-If you encounter this problem when synchronizing data, refer to [this issue](https://github.com/apache/pulsar/issues/4075) and add the following configuration to the configuration file:
-
-```
-
-max.queue.size=
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/io-cdc.md b/site2/website/versioned_docs/version-2.10.1-deprecated/io-cdc.md
deleted file mode 100644
index e6e662884826de..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/io-cdc.md
+++ /dev/null
@@ -1,26 +0,0 @@
----
-id: io-cdc
-title: CDC connector
-sidebar_label: "CDC connector"
-original_id: io-cdc
----
-
-CDC source connectors capture log changes of databases (such as MySQL, MongoDB, and PostgreSQL) into Pulsar.
-
-> CDC source connectors are built on top of [Canal](https://github.com/alibaba/canal) and [Debezium](https://debezium.io/) and store all data into the Pulsar cluster in a persistent, replicated, and partitioned way.
-
-Currently, Pulsar has the following CDC connectors.
-
-Name|Java Class
-|---|---
-[Canal source connector](io-canal-source.md)|[org.apache.pulsar.io.canal.CanalStringSource.java](https://github.com/apache/pulsar/blob/master/pulsar-io/canal/src/main/java/org/apache/pulsar/io/canal/CanalStringSource.java)
-[Debezium source connector](io-cdc-debezium.md)|[org.apache.pulsar.io.debezium.DebeziumSource.java](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/core/src/main/java/org/apache/pulsar/io/debezium/DebeziumSource.java)<br />[org.apache.pulsar.io.debezium.mysql.DebeziumMysqlSource.java](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/java/org/apache/pulsar/io/debezium/mysql/DebeziumMysqlSource.java)<br />[org.apache.pulsar.io.debezium.postgres.DebeziumPostgresSource.java](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/java/org/apache/pulsar/io/debezium/postgres/DebeziumPostgresSource.java)
-
-For more information about Canal and Debezium, see the information below.
-
-Subject | Reference
-|---|---
-How to use Canal source connector with MySQL|[Canal guide](https://github.com/alibaba/canal/wiki)
-How does Canal work | [Canal tutorial](https://github.com/alibaba/canal/wiki)
-How to use Debezium source connector with MySQL | [Debezium guide](https://debezium.io/docs/connectors/mysql/)
-How does Debezium work | [Debezium tutorial](https://debezium.io/docs/tutorial/)
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/io-cli.md b/site2/website/versioned_docs/version-2.10.1-deprecated/io-cli.md
deleted file mode 100644
index f79d301c30b3f9..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/io-cli.md
+++ /dev/null
@@ -1,666 +0,0 @@
----
-id: io-cli
-title: Connector Admin CLI
-sidebar_label: "CLI"
-original_id: io-cli
----
-
-:::note
-
-**Important**
-
-This page is deprecated and not updated anymore. For the latest and complete information about `Pulsar-admin`, including commands, flags, descriptions, and more, see [Pulsar admin docs](/tools/pulsar-admin/).
-
-:::
-
-The `pulsar-admin` tool helps you manage Pulsar connectors.
-
-## `sources`
-
-An interface for managing Pulsar IO sources (ingress data into Pulsar).
-
-```bash
-
-$ pulsar-admin sources subcommands
-
-```
-
-Subcommands are:
-
-* `create`
-
-* `update`
-
-* `delete`
-
-* `get`
-
-* `status`
-
-* `list`
-
-* `stop`
-
-* `start`
-
-* `restart`
-
-* `localrun`
-
-* `available-sources`
-
-* `reload`
-
-
-### `create`
-
-Submit a Pulsar IO source connector to run in a Pulsar cluster.
-
-#### Usage
-
-```bash
-
-$ pulsar-admin sources create options
-
-```
-
-#### Options
-
-|Flag|Description|
-|----|---|
-| `-a`, `--archive` | The path to the NAR archive for the source.
    It also supports url-path (http/https/file [file protocol assumes that file already exists on worker host]) from which worker can download the package. -| `--classname` | The source's class name if `archive` is file-url-path (file://). -| `--cpu` | The CPU (in cores) that needs to be allocated per source instance (applicable only to Docker runtime). -| `--deserialization-classname` | The SerDe classname for the source. -| `--destination-topic-name` | The Pulsar topic to which data is sent. -| `--disk` | The disk (in bytes) that needs to be allocated per source instance (applicable only to Docker runtime). -|`--name` | The source's name. -| `--namespace` | The source's namespace. -| ` --parallelism` | The source's parallelism factor, that is, the number of source instances to run. -| `--processing-guarantees` | The processing guarantees (also named as delivery semantics) applied to the source. A source connector receives messages from external system and writes messages to a Pulsar topic. The `--processing-guarantees` is used to ensure the processing guarantees for writing messages to the Pulsar topic.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE. -| `--ram` | The RAM (in bytes) that needs to be allocated per source instance (applicable only to the process and Docker runtimes). -| `-st`, `--schema-type` | The schema type.
    Either a built-in schema (for example, AVRO or JSON) or a custom schema class name used to encode messages emitted from the source.
-| `--source-config` | Source config key/values.
-| `--source-config-file` | The path to a YAML config file specifying the source's configuration.
-| `-t`, `--source-type` | The source's connector provider.
-| `--tenant` | The source's tenant.
-|`--producer-config`| The custom producer configuration (as a JSON string).
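-
-For instance, the following sketch submits a source from a NAR archive; the archive path, names, and the `--source-config` payload are placeholders rather than values taken from this guide:
-
-```bash
-
-$ pulsar-admin sources create \
-  --archive connectors/my-source-connector.nar \
-  --name my-source \
-  --tenant public \
-  --namespace default \
-  --destination-topic-name my-topic \
-  --parallelism 1 \
-  --source-config '{"someKey": "someValue"}'
-
-```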
-
-### `update`
-
-Update an already submitted Pulsar IO source connector.
-
-#### Usage
-
-```bash
-
-$ pulsar-admin sources update options
-
-```
-
-#### Options
-
-|Flag|Description|
-|----|---|
-| `-a`, `--archive` | The path to the NAR archive for the source.
    It also supports url-path (http/https/file [file protocol assumes that file already exists on worker host]) from which worker can download the package.
-| `--classname` | The source's class name if `archive` is file-url-path (file://).
-| `--cpu` | The CPU (in cores) that needs to be allocated per source instance (applicable only to Docker runtime).
-| `--deserialization-classname` | The SerDe classname for the source.
-| `--destination-topic-name` | The Pulsar topic to which data is sent.
-| `--disk` | The disk (in bytes) that needs to be allocated per source instance (applicable only to Docker runtime).
-|`--name` | The source's name.
-| `--namespace` | The source's namespace.
-| ` --parallelism` | The source's parallelism factor, that is, the number of source instances to run.
-| `--processing-guarantees` | The processing guarantees (also named as delivery semantics) applied to the source. A source connector receives messages from external system and writes messages to a Pulsar topic. The `--processing-guarantees` is used to ensure the processing guarantees for writing messages to the Pulsar topic.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE. -| `--ram` | The RAM (in bytes) that needs to be allocated per source instance (applicable only to the process and Docker runtimes). -| `-st`, `--schema-type` | The schema type.
    Either a builtin schema (for example, AVRO and JSON) or custom schema class name to be used to encode messages emitted from source. -| `--source-config` | Source config key/values. -| `--source-config-file` | The path to a YAML config file specifying the source's configuration. -| `-t`, `--source-type` | The source's connector provider. The `source-type` parameter of the currently built-in connectors is determined by the setting of the `name` parameter specified in the pulsar-io.yaml file. -| `--tenant` | The source's tenant. -| `--update-auth-data` | Whether or not to update the auth data.
    **Default value: false.** - - -### `delete` - -Delete a Pulsar IO source connector. - -#### Usage - -```bash - -$ pulsar-admin sources delete options - -``` - -#### Option - -|Flag|Description| -|---|---| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - -### `get` - -Get the information about a Pulsar IO source connector. - -#### Usage - -```bash - -$ pulsar-admin sources get options - -``` - -#### Options -|Flag|Description| -|---|---| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - - -### `status` - -Check the current status of a Pulsar Source. - -#### Usage - -```bash - -$ pulsar-admin sources status options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The source ID.
    If `instance-id` is not provided, Pulsar gets status of all instances.| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - -### `list` - -List all running Pulsar IO source connectors. - -#### Usage - -```bash - -$ pulsar-admin sources list options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - - -### `stop` - -Stop a source instance. - -#### Usage - -```bash - -$ pulsar-admin sources stop options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The source instanceID.
    If `instance-id` is not provided, Pulsar stops all instances.| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - -### `start` - -Start a source instance. - -#### Usage - -```bash - -$ pulsar-admin sources start options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The source instanceID.
    If `instance-id` is not provided, Pulsar starts all instances.| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - - -### `restart` - -Restart a source instance. - -#### Usage - -```bash - -$ pulsar-admin sources restart options - -``` - -#### Options -|Flag|Description| -|---|---| -|`--instance-id`|The source instanceID.
    If `instance-id` is not provided, Pulsar restarts all instances. -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - - -### `localrun` - -Run a Pulsar IO source connector locally rather than deploying it to the Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sources localrun options - -``` - -#### Options - -|Flag|Description| -|----|---| -| `-a`, `--archive` | The path to the NAR archive for the Source.
    It also supports url-path (http/https/file [file protocol assumes that file already exists on worker host]) from which worker can download the package.
-| `--broker-service-url` | The URL for the Pulsar broker.
-|`--classname`|The source's class name if `archive` is file-url-path (file://).
-| `--client-auth-params` | The client authentication parameters.
-| `--client-auth-plugin` | The client authentication plugin that the function process uses to connect to the broker.
-|`--cpu`|The CPU (in cores) that needs to be allocated per source instance (applicable only to the Docker runtime).|
-|`--deserialization-classname`|The SerDe classname for the source.
-|`--destination-topic-name`|The Pulsar topic to which data is sent.
-|`--disk`|The disk (in bytes) that needs to be allocated per source instance (applicable only to the Docker runtime).|
-|`--hostname-verification-enabled`|Enable hostname verification.
    **Default value: false**.
-|`--name`|The source’s name.|
-|`--namespace`|The source’s namespace.|
-|`--parallelism`|The source’s parallelism factor, that is, the number of source instances to run.|
-|`--processing-guarantees` | The processing guarantees (also named as delivery semantics) applied to the source. A source connector receives messages from external system and writes messages to a Pulsar topic. The `--processing-guarantees` is used to ensure the processing guarantees for writing messages to the Pulsar topic.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE. -|`--ram`|The RAM (in bytes) that needs to be allocated per source instance (applicable only to the Docker runtime).| -| `-st`, `--schema-type` | The schema type.
    Either a builtin schema (for example, AVRO and JSON) or custom schema class name to be used to encode messages emitted from source. -|`--source-config`|Source config key/values. -|`--source-config-file`|The path to a YAML config file specifying the source’s configuration. -|`--source-type`|The source's connector provider. -|`--tenant`|The source’s tenant. -|`--tls-allow-insecure`|Allow insecure tls connection.
    **Default value: false**. -|`--tls-trust-cert-path`|The tls trust cert file path. -|`--use-tls`|Use tls connection.
    **Default value: false**. -|`--producer-config`| The custom producer configuration (as a JSON string). - -### `available-sources` - -Get the list of Pulsar IO connector sources supported by Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sources available-sources - -``` - -### `reload` - -Reload the available built-in connectors. - -#### Usage - -```bash - -$ pulsar-admin sources reload - -``` - -## `sinks` - -An interface for managing Pulsar IO sinks (egress data from Pulsar). - -```bash - -$ pulsar-admin sinks subcommands - -``` - -Subcommands are: - -* `create` - -* `update` - -* `delete` - -* `get` - -* `status` - -* `list` - -* `stop` - -* `start` - -* `restart` - -* `localrun` - -* `available-sinks` - -* `reload` - - -### `create` - -Submit a Pulsar IO sink connector to run in a Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sinks create options - -``` - -#### Options - -|Flag|Description| -|----|---| -| `-a`, `--archive` | The path to the archive file for the sink.
    It also supports url-path (http/https/file [file protocol assumes that file already exists on worker host]) from which worker can download the package. -| `--auto-ack` | Whether or not the framework will automatically acknowledge messages. -| `--classname` | The sink's class name if `archive` is file-url-path (file://). -| `--cpu` | The CPU (in cores) that needs to be allocated per sink instance (applicable only to Docker runtime). -| `--custom-schema-inputs` | The map of input topics to schema types or class names (as a JSON string). -| `--custom-serde-inputs` | The map of input topics to SerDe class names (as a JSON string). -| `--disk` | The disk (in bytes) that needs to be allocated per sink instance (applicable only to Docker runtime). -|`-i, --inputs` | The sink's input topic or topics (multiple topics can be specified as a comma-separated list). -|`--name` | The sink's name. -| `--namespace` | The sink's namespace. -| ` --parallelism` | The sink's parallelism factor, that is, the number of sink instances to run. -| `--processing-guarantees` | The processing guarantees (also known as delivery semantics) applied to the sink. The `--processing-guarantees` implementation in Pulsar also relies on sink implementation.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE.
-| `--ram` | The RAM (in bytes) that needs to be allocated per sink instance (applicable only to the process and Docker runtimes).
-| `--retain-ordering` | Sink consumes and sinks messages in order.
-| `--sink-config` | Sink config key/values.
-| `--sink-config-file` | The path to a YAML config file specifying the sink's configuration.
-| `-t`, `--sink-type` | The sink's connector provider. The `sink-type` parameter of the currently built-in connectors is determined by the setting of the `name` parameter specified in the pulsar-io.yaml file.
-| `--subs-name` | The Pulsar subscription name to use if you want a specific subscription name for the input-topic consumer.
-| `--tenant` | The sink's tenant.
-| `--timeout-ms` | The message timeout in milliseconds.
-| `--topics-pattern` | The topics pattern for consuming from a list of topics under a namespace that match the pattern.
    `--inputs` and `--topics-pattern` are mutually exclusive.
    Add the SerDe class name for a pattern in `--customSerdeInputs` (supported for Java functions only).
-
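-The following is a minimal sketch of submitting a sink with these options; the archive path, names, input topic, and the `--sink-config` payload are placeholders rather than values taken from this guide:
-
-```bash
-
-$ pulsar-admin sinks create \
-  --archive connectors/my-sink-connector.nar \
-  --name my-sink \
-  --tenant public \
-  --namespace default \
-  --inputs persistent://public/default/my-input-topic \
-  --parallelism 1 \
-  --sink-config '{"someKey": "someValue"}'
-
-```
-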
-### `update`
-
-Update a Pulsar IO sink connector.
-
-#### Usage
-
-```bash
-
-$ pulsar-admin sinks update options
-
-```
-
-#### Options
-
-|Flag|Description|
-|----|---|
-| `-a`, `--archive` | The path to the archive file for the sink.
    It also supports url-path (http/https/file [file protocol assumes that file already exists on worker host]) from which worker can download the package.
-| `--auto-ack` | Whether or not the framework will automatically acknowledge messages.
-| `--classname` | The sink's class name if `archive` is file-url-path (file://).
-| `--cpu` | The CPU (in cores) that needs to be allocated per sink instance (applicable only to Docker runtime).
-| `--custom-schema-inputs` | The map of input topics to schema types or class names (as a JSON string).
-| `--custom-serde-inputs` | The map of input topics to SerDe class names (as a JSON string).
-| `--disk` | The disk (in bytes) that needs to be allocated per sink instance (applicable only to Docker runtime).
-|`-i, --inputs` | The sink's input topic or topics (multiple topics can be specified as a comma-separated list).
-|`--name` | The sink's name.
-| `--namespace` | The sink's namespace.
-| ` --parallelism` | The sink's parallelism factor, that is, the number of sink instances to run.
-| `--processing-guarantees` | The processing guarantees (also known as delivery semantics) applied to the sink. The `--processing-guarantees` implementation in Pulsar also relies on sink implementation.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE.
-| `--ram` | The RAM (in bytes) that needs to be allocated per sink instance (applicable only to the process and Docker runtimes).
-| `--retain-ordering` | Sink consumes and sinks messages in order.
-| `--sink-config` | Sink config key/values.
-| `--sink-config-file` | The path to a YAML config file specifying the sink's configuration.
-| `-t`, `--sink-type` | The sink's connector provider.
-| `--subs-name` | The Pulsar subscription name to use if you want a specific subscription name for the input-topic consumer.
-| `--tenant` | The sink's tenant.
-| `--timeout-ms` | The message timeout in milliseconds.
-| `--topics-pattern` | The topics pattern for consuming from a list of topics under a namespace that match the pattern.
    `--inputs` and `--topics-pattern` are mutually exclusive.
    Add the SerDe class name for a pattern in `--customSerdeInputs` (supported for Java functions only).
-| `--update-auth-data` | Whether or not to update the auth data.
    **Default value: false.** - -### `delete` - -Delete a Pulsar IO sink connector. - -#### Usage - -```bash - -$ pulsar-admin sinks delete options - -``` - -#### Option - -|Flag|Description| -|---|---| -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - -### `get` - -Get the information about a Pulsar IO sink connector. - -#### Usage - -```bash - -$ pulsar-admin sinks get options - -``` - -#### Options -|Flag|Description| -|---|---| -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - - -### `status` - -Check the current status of a Pulsar sink. - -#### Usage - -```bash - -$ pulsar-admin sinks status options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The sink ID.
    If `instance-id` is not provided, Pulsar gets status of all instances.| -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - - -### `list` - -List all running Pulsar IO sink connectors. - -#### Usage - -```bash - -$ pulsar-admin sinks list options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - - -### `stop` - -Stop a sink instance. - -#### Usage - -```bash - -$ pulsar-admin sinks stop options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The sink instanceID.
    If `instance-id` is not provided, Pulsar stops all instances.| -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - -### `start` - -Start a sink instance. - -#### Usage - -```bash - -$ pulsar-admin sinks start options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The sink instanceID.
    If `instance-id` is not provided, Pulsar starts all instances.| -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - - -### `restart` - -Restart a sink instance. - -#### Usage - -```bash - -$ pulsar-admin sinks restart options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The sink instanceID.
    If `instance-id` is not provided, Pulsar restarts all instances. -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - - -### `localrun` - -Run a Pulsar IO sink connector locally rather than deploying it to the Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sinks localrun options - -``` - -#### Options - -|Flag|Description| -|----|---| -| `-a`, `--archive` | The path to the archive file for the sink.
    It also supports url-path (http/https/file [file protocol assumes that file already exists on worker host]) from which worker can download the package.
-| `--auto-ack` | Whether or not the framework will automatically acknowledge messages.
-| `--broker-service-url` | The URL for the Pulsar broker.
-|`--classname`|The sink's class name if `archive` is file-url-path (file://).
-| `--client-auth-params` | The client authentication parameters.
-| `--client-auth-plugin` | The client authentication plugin that the function process uses to connect to the broker.
-|`--cpu`|The CPU (in cores) that needs to be allocated per sink instance (applicable only to the Docker runtime).
-| `--custom-schema-inputs` | The map of input topics to Schema types or class names (as a JSON string).
-| `--max-redeliver-count` | Maximum number of times that a message is redelivered before being sent to the dead letter queue.
-| `--dead-letter-topic` | Name of the dead letter topic where the failing messages are sent.
-| `--custom-serde-inputs` | The map of input topics to SerDe class names (as a JSON string).
-|`--disk`|The disk (in bytes) that needs to be allocated per sink instance (applicable only to the Docker runtime).|
-|`--hostname-verification-enabled`|Enable hostname verification.
    **Default value: false**.
-| `-i`, `--inputs` | The sink's input topic or topics (multiple topics can be specified as a comma-separated list).
-|`--name`|The sink’s name.|
-|`--namespace`|The sink’s namespace.|
-|`--parallelism`|The sink’s parallelism factor, that is, the number of sink instances to run.|
-|`--processing-guarantees`|The processing guarantees (also known as delivery semantics) applied to the sink. The `--processing-guarantees` implementation in Pulsar also relies on sink implementation.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE.
-|`--ram`|The RAM (in bytes) that needs to be allocated per sink instance (applicable only to the Docker runtime).|
-|`--retain-ordering` | Sink consumes and sinks messages in order.
-|`--sink-config`|Sink config key/values.
-|`--sink-config-file`|The path to a YAML config file specifying the sink’s configuration.
-|`--sink-type`|The sink's connector provider.
-|`--subs-name` | The Pulsar subscription name to use if you want a specific subscription name for the input-topic consumer.
-|`--tenant`|The sink’s tenant.
-| `--timeout-ms` | The message timeout in milliseconds.
-| `--negative-ack-redelivery-delay-ms` | The negatively-acknowledged message redelivery delay in milliseconds. |
-|`--tls-allow-insecure`|Allow insecure tls connection.
    **Default value: false**.
-|`--tls-trust-cert-path`|The tls trust cert file path.
-| `--topics-pattern` | The topics pattern for consuming from a list of topics under a namespace that match the pattern.
    `--inputs` and `--topics-pattern` are mutually exclusive.
    Add the SerDe class name for a pattern in `--customSerdeInputs` (supported for Java functions only).
-|`--use-tls`|Use tls connection.
    **Default value: false**. - -### `available-sinks` - -Get the list of Pulsar IO connector sinks supported by Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sinks available-sinks - -``` - -### `reload` - -Reload the available built-in connectors. - -#### Usage - -```bash - -$ pulsar-admin sinks reload - -``` - diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/io-connectors.md b/site2/website/versioned_docs/version-2.10.1-deprecated/io-connectors.md deleted file mode 100644 index 957a02a5a1964a..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/io-connectors.md +++ /dev/null @@ -1,249 +0,0 @@ ---- -id: io-connectors -title: Built-in connector -sidebar_label: "Built-in connector" -original_id: io-connectors ---- - -Pulsar distribution includes a set of common connectors that have been packaged and tested with the rest of Apache Pulsar. These connectors import and export data from some of the most commonly used data systems. - -Using any of these connectors is as easy as writing a simple connector and running the connector locally or submitting the connector to a Pulsar Functions cluster. - -## Source connector - -Pulsar has various source connectors, which are sorted alphabetically as below. - -### Canal - -* [Configuration](io-canal-source.md#configuration) - -* [Example](io-canal-source.md#usage) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/canal/src/main/java/org/apache/pulsar/io/canal/CanalStringSource.java) - - -### Debezium MySQL - -* [Configuration](io-debezium-source.md#configuration) - -* [Example](io-debezium-source.md#example-of-mysql) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/java/org/apache/pulsar/io/debezium/mysql/DebeziumMysqlSource.java) - -### Debezium PostgreSQL - -* [Configuration](io-debezium-source.md#configuration) - -* [Example](io-debezium-source.md#example-of-postgresql) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/java/org/apache/pulsar/io/debezium/postgres/DebeziumPostgresSource.java) - -### Debezium MongoDB - -* [Configuration](io-debezium-source.md#configuration) - -* [Example](io-debezium-source.md#example-of-mongodb) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mongodb/src/main/java/org/apache/pulsar/io/debezium/mongodb/DebeziumMongoDbSource.java) - -### Debezium Oracle - -* [Configuration](io-debezium-source.md#configuration) - -* [Example](io-debezium-source.md#example-of-oracle) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/oracle/src/main/java/org/apache/pulsar/io/debezium/oracle/DebeziumOracleSource.java) - -### Debezium Microsoft SQL Server - -* [Configuration](io-debezium-source.md#configuration) - -* [Example](io-debezium-source.md#example-of-microsoft-sql) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mssql/src/main/java/org/apache/pulsar/io/debezium/mssql/DebeziumMsSqlSource.java) - - -### DynamoDB - -* [Configuration](io-dynamodb-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/dynamodb/src/main/java/org/apache/pulsar/io/dynamodb/DynamoDBSource.java) - -### File - -* [Configuration](io-file-source.md#configuration) - -* [Example](io-file-source.md#usage) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/file/src/main/java/org/apache/pulsar/io/file/FileSource.java) - -### Flume - 
-* [Configuration](io-flume-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/java/org/apache/pulsar/io/flume/FlumeConnector.java) - -### Twitter firehose - -* [Configuration](io-twitter-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/twitter/src/main/java/org/apache/pulsar/io/twitter/TwitterFireHose.java) - -### Kafka - -* [Configuration](io-kafka-source.md#configuration) - -* [Example](io-kafka-source.md#usage) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSource.java) - -### Kinesis - -* [Configuration](io-kinesis-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/kinesis/src/main/java/org/apache/pulsar/io/kinesis/KinesisSource.java) - -### Netty - -* [Configuration](io-netty-source.md#configuration) - -* [Example of TCP](io-netty-source.md#tcp) - -* [Example of HTTP](io-netty-source.md#http) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/netty/src/main/java/org/apache/pulsar/io/netty/NettySource.java) - -### NSQ - -* [Configuration](io-nsq-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/nsq/src/main/java/org/apache/pulsar/io/nsq/NSQSource.java) - -### RabbitMQ - -* [Configuration](io-rabbitmq-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/rabbitmq/src/main/java/org/apache/pulsar/io/rabbitmq/RabbitMQSource.java) - -## Sink connector - -Pulsar has various sink connectors, which are sorted alphabetically as below. - -### Aerospike - -* [Configuration](io-aerospike-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/aerospike/src/main/java/org/apache/pulsar/io/aerospike/AerospikeStringSink.java) - -### Cassandra - -* [Configuration](io-cassandra-sink.md#configuration) - -* [Example](io-cassandra-sink.md#usage) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/cassandra/src/main/java/org/apache/pulsar/io/cassandra/CassandraStringSink.java) - -### ElasticSearch - -* [Configuration](io-elasticsearch-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/elastic-search/src/main/java/org/apache/pulsar/io/elasticsearch/ElasticSearchSink.java) - -### Flume - -* [Configuration](io-flume-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/java/org/apache/pulsar/io/flume/sink/StringSink.java) - -### HBase - -* [Configuration](io-hbase-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/hbase/src/main/java/org/apache/pulsar/io/hbase/HbaseAbstractConfig.java) - -### HDFS2 - -* [Configuration](io-hdfs2-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/hdfs2/src/main/java/org/apache/pulsar/io/hdfs2/AbstractHdfsConnector.java) - -### HDFS3 - -* [Configuration](io-hdfs3-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/hdfs3/src/main/java/org/apache/pulsar/io/hdfs3/AbstractHdfsConnector.java) - -### InfluxDB - -* [Configuration](io-influxdb-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/influxdb/src/main/java/org/apache/pulsar/io/influxdb/InfluxDBGenericRecordSink.java) - -### JDBC ClickHouse 
- -* [Configuration](io-jdbc-sink.md#configuration) - -* [Example](io-jdbc-sink.md#example-for-clickhouse) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/jdbc/clickhouse/src/main/java/org/apache/pulsar/io/jdbc/ClickHouseJdbcAutoSchemaSink.java) - -### JDBC MariaDB - -* [Configuration](io-jdbc-sink.md#configuration) - -* [Example](io-jdbc-sink.md#example-for-mariadb) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/jdbc/mariadb/src/main/java/org/apache/pulsar/io/jdbc/MariadbJdbcAutoSchemaSink.java) - -### JDBC PostgreSQL - -* [Configuration](io-jdbc-sink.md#configuration) - -* [Example](io-jdbc-sink.md#example-for-postgresql) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/jdbc/postgres/src/main/java/org/apache/pulsar/io/jdbc/PostgresJdbcAutoSchemaSink.java) - -### JDBC SQLite - -* [Configuration](io-jdbc-sink.md#configuration) - -* [Example](io-jdbc-sink.md#example-for-sqlite) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/jdbc/sqlite/src/main/java/org/apache/pulsar/io/jdbc/SqliteJdbcAutoSchemaSink.java) - -### Kafka - -* [Configuration](io-kafka-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSink.java) - -### Kinesis - -* [Configuration](io-kinesis-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/kinesis/src/main/java/org/apache/pulsar/io/kinesis/KinesisSink.java) - -### MongoDB - -* [Configuration](io-mongo-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/mongo/src/main/java/org/apache/pulsar/io/mongodb/MongoSink.java) - -### RabbitMQ - -* [Configuration](io-rabbitmq-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/rabbitmq/src/main/java/org/apache/pulsar/io/rabbitmq/RabbitMQSink.java) - -### Redis - -* [Configuration](io-redis-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/redis/src/main/java/org/apache/pulsar/io/redis/RedisAbstractConfig.java) - -### Solr - -* [Configuration](io-solr-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/solr/src/main/java/org/apache/pulsar/io/solr/SolrSinkConfig.java) - diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/io-debezium-source.md b/site2/website/versioned_docs/version-2.10.1-deprecated/io-debezium-source.md deleted file mode 100644 index 6015f3f17d74f2..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/io-debezium-source.md +++ /dev/null @@ -1,770 +0,0 @@ ---- -id: io-debezium-source -title: Debezium source connector -sidebar_label: "Debezium source connector" -original_id: io-debezium-source ---- - -The Debezium source connector pulls messages from MySQL or PostgreSQL -and persists the messages to Pulsar topics. - -## Configuration - -The configuration of Debezium source connector has the following properties. - -| Name | Required | Default | Description | -|------|----------|---------|-------------| -| `task.class` | true | null | A source task class that implemented in Debezium. | -| `database.hostname` | true | null | The address of a database server. | -| `database.port` | true | null | The port number of a database server.| -| `database.user` | true | null | The name of a database user that has the required privileges. 
| -| `database.password` | true | null | The password for a database user that has the required privileges. | -| `database.server.id` | true | null | The connector’s identifier that must be unique within a database cluster and similar to the database’s server-id configuration property. | -| `database.server.name` | true | null | The logical name of a database server/cluster, which forms a namespace and it is used in all the names of Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the Avro Connector is used. | -| `database.whitelist` | false | null | A list of all databases hosted by this server which is monitored by the connector.

    This is optional, and there are other properties for listing databases and tables to include or exclude from monitoring. | -| `key.converter` | true | null | The converter provided by Kafka Connect to convert record key. | -| `value.converter` | true | null | The converter provided by Kafka Connect to convert record value. | -| `database.history` | true | null | The name of the database history class. | -| `database.history.pulsar.topic` | true | null | The name of the database history topic where the connector writes and recovers DDL statements.

    **Note: this topic is for internal use only and should not be used by consumers.** |
-| `database.history.pulsar.service.url` | true | null | Pulsar cluster service URL for the history topic. |
-| `offset.storage.topic` | true | null | Records the last committed offsets that the connector successfully completes. |
-| `json-with-envelope` | false | false | Whether to present the message with an envelope (schema and payload) or with the payload only. |
-
-### Converter Options
-
-1. org.apache.kafka.connect.json.JsonConverter
-
-   The `json-with-envelope` config is valid only for the JsonConverter. Its default value is false, in which case the consumer uses the schema `Schema.KeyValue(Schema.AUTO_CONSUME(), Schema.AUTO_CONSUME(), KeyValueEncodingType.SEPARATED)` and the message consists of the payload only.
-
-   If the `json-with-envelope` value is true, the consumer uses the schema `Schema.KeyValue(Schema.BYTES, Schema.BYTES)`, and the message consists of both schema and payload.
-
-2. org.apache.pulsar.kafka.shade.io.confluent.connect.avro.AvroConverter
-
-   If you select the AvroConverter, the Pulsar consumer should use the schema `Schema.KeyValue(Schema.AUTO_CONSUME(), Schema.AUTO_CONSUME(), KeyValueEncodingType.SEPARATED)`, and the message consists of the payload only.
-
-### MongoDB Configuration
-| Name | Required | Default | Description |
-|------|----------|---------|-------------|
-| `mongodb.hosts` | true | null | The comma-separated list of hostname and port pairs (in the form 'host' or 'host:port') of the MongoDB servers in the replica set. The list contains a single hostname and a port pair. If mongodb.members.auto.discover is set to false, the host and port pair are prefixed with the replica set name (e.g., rs0/localhost:27017). |
-| `mongodb.name` | true | null | A unique name that identifies the connector and/or MongoDB replica set or shared cluster that this connector monitors. Each server should be monitored by at most one Debezium connector, since this server name prefixes all persisted Kafka topics emanating from the MongoDB replica set or cluster. |
-| `mongodb.user` | true | null | Name of the database user to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. |
-| `mongodb.password` | true | null | Password to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. |
-| `mongodb.task.id` | true | null | The taskId of the MongoDB connector that attempts to use a separate task for each replica set. |
-
-
-
-## Example of MySQL
-
-You need to create a configuration file before using the Pulsar Debezium connector.
-
-### Configuration
-
-You can use one of the following methods to create a configuration file.
- -* JSON - - ```json - - { - "configs": { - "database.hostname": "localhost", - "database.port": "3306", - "database.user": "debezium", - "database.password": "dbz", - "database.server.id": "184054", - "database.server.name": "dbserver1", - "database.whitelist": "inventory", - "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory", - "database.history.pulsar.topic": "history-topic", - "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650", - "key.converter": "org.apache.kafka.connect.json.JsonConverter", - "value.converter": "org.apache.kafka.connect.json.JsonConverter", - "offset.storage.topic": "offset-topic" - } - } - - ``` - -* YAML - - You can create a `debezium-mysql-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/resources/debezium-mysql-source-config.yaml) below to the `debezium-mysql-source-config.yaml` file. - - ```yaml - - tenant: "public" - namespace: "default" - name: "debezium-mysql-source" - topicName: "debezium-mysql-topic" - archive: "connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar" - parallelism: 1 - - configs: - - ## config for mysql, docker image: debezium/example-mysql:0.8 - database.hostname: "localhost" - database.port: "3306" - database.user: "debezium" - database.password: "dbz" - database.server.id: "184054" - database.server.name: "dbserver1" - database.whitelist: "inventory" - database.history: "org.apache.pulsar.io.debezium.PulsarDatabaseHistory" - database.history.pulsar.topic: "history-topic" - database.history.pulsar.service.url: "pulsar://127.0.0.1:6650" - - ## KEY_CONVERTER_CLASS_CONFIG, VALUE_CONVERTER_CLASS_CONFIG - key.converter: "org.apache.kafka.connect.json.JsonConverter" - value.converter: "org.apache.kafka.connect.json.JsonConverter" - - ## OFFSET_STORAGE_TOPIC_CONFIG - offset.storage.topic: "offset-topic" - - ``` - -### Usage - -This example shows how to change the data of a MySQL table using the Pulsar Debezium connector. - -1. Start a MySQL server with a database from which Debezium can capture changes. - - ```bash - - $ docker run -it --rm \ - --name mysql \ - -p 3306:3306 \ - -e MYSQL_ROOT_PASSWORD=debezium \ - -e MYSQL_USER=mysqluser \ - -e MYSQL_PASSWORD=mysqlpw debezium/example-mysql:0.8 - - ``` - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - -3. Start the Pulsar Debezium connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - Make sure the nar file is available at `connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar`. 
-
-     ```bash
-     
-     $ bin/pulsar-admin source localrun \
-     --archive connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar \
-     --name debezium-mysql-source --destination-topic-name debezium-mysql-topic \
-     --tenant public \
-     --namespace default \
-     --source-config '{"database.hostname": "localhost","database.port": "3306","database.user": "debezium","database.password": "dbz","database.server.id": "184054","database.server.name": "dbserver1","database.whitelist": "inventory","database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory","database.history.pulsar.topic": "history-topic","database.history.pulsar.service.url": "pulsar://127.0.0.1:6650","key.converter": "org.apache.kafka.connect.json.JsonConverter","value.converter": "org.apache.kafka.connect.json.JsonConverter","pulsar.service.url": "pulsar://127.0.0.1:6650","offset.storage.topic": "offset-topic"}'
-     
-     ```
-
-     :::note
-
-     Currently, the destination topic (specified by the `destination-topic-name` option) is a required configuration but it is not used for the Debezium connector to save data. The Debezium connector saves data in the following 4 types of topics:
-
-     - One topic named with the database server name ( `database.server.name`) for storing the database metadata messages, such as `public/default/database.server.name`.
-     - One topic (`database.history.pulsar.topic`) for storing the database history information. The connector writes and recovers DDL statements on this topic.
-     - One topic (`offset.storage.topic`) for storing the offset metadata messages. The connector saves the last successfully-committed offsets on this topic.
-     - One per-table topic. The connector writes change events for all operations that occur in a table to a single Pulsar topic that is specific to that table.
-
-     If the automatic topic creation is disabled on your broker, you need to manually create the above 4 types of topics and the destination topic.
-
-     :::
-
-   * Use the **YAML** configuration file as shown previously.
-
-     ```bash
-     
-     $ bin/pulsar-admin source localrun \
-     --source-config-file debezium-mysql-source-config.yaml
-     
-     ```
-
-4. Subscribe to the topic _sub-products_ for the table _inventory.products_.
-
-   ```bash
-   
-   $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0
-   
-   ```
-
-5. Start a MySQL client in docker.
-
-   ```bash
-   
-   $ docker run -it --rm \
-   --name mysqlterm \
-   --link mysql \
-   mysql:5.7 sh \
-   -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
-   
-   ```
-
-6. A MySQL client session opens.
-
-   Use the following commands to change the data of the table _products_.
-
-   ```
-   
-   mysql> use inventory;
-   mysql> show tables;
-   mysql> SELECT * FROM products;
-   mysql> UPDATE products SET name='1111111111' WHERE id=101;
-   mysql> UPDATE products SET name='1111111111' WHERE id=107;
-   
-   ```
-
-   In the terminal window where you subscribed to the topic, you can see that the data changes have been captured in the _sub-products_ topic.
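-
-To see the topics that the connector creates (described in the note in step 3), you can list the topics in the namespace. This is an optional check, and the exact topic names depend on your configuration:
-
-```bash
-
-$ bin/pulsar-admin topics list public/default
-
-# Expect, among others, entries similar to:
-# persistent://public/default/dbserver1.inventory.products
-# persistent://public/default/history-topic
-# persistent://public/default/offset-topic
-
-```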
-
-## Example of PostgreSQL
-
-You need to create a configuration file before using the Pulsar Debezium connector.
-
-### Configuration
-
-You can use one of the following methods to create a configuration file.
-
-* JSON
-
-  ```json
-  
-  {
-    "database.hostname": "localhost",
-    "database.port": "5432",
-    "database.user": "postgres",
-    "database.password": "changeme",
-    "database.dbname": "postgres",
-    "database.server.name": "dbserver1",
-    "plugin.name": "pgoutput",
-    "schema.whitelist": "public",
-    "table.whitelist": "public.users",
-    "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650"
-  }
-  
-  ```
-
-* YAML
-
-  You can create a `debezium-postgres-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/resources/debezium-postgres-source-config.yaml) below to the `debezium-postgres-source-config.yaml` file.
-
-  ```yaml
-  
-  tenant: "public"
-  namespace: "default"
-  name: "debezium-postgres-source"
-  topicName: "debezium-postgres-topic"
-  archive: "connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar"
-  parallelism: 1
-  
-  configs:
-  
-    ## config for postgres version 10+, official docker image: postgres:<10+>
-    database.hostname: "localhost"
-    database.port: "5432"
-    database.user: "postgres"
-    database.password: "changeme"
-    database.dbname: "postgres"
-    database.server.name: "dbserver1"
-    plugin.name: "pgoutput"
-    schema.whitelist: "public"
-    table.whitelist: "public.users"
-  
-    ## PULSAR_SERVICE_URL_CONFIG
-    database.history.pulsar.service.url: "pulsar://127.0.0.1:6650"
-  
-  ```
-
-Notice that `pgoutput` is a standard plugin of Postgres introduced in version 10 - see the [Postgres architecture documentation](https://www.postgresql.org/docs/10/logical-replication-architecture.html). You don't need to install anything; just make sure the WAL level is set to `logical` (see the docker command below and the [Postgres documentation](https://www.postgresql.org/docs/current/runtime-config-wal.html)).
-
-### Usage
-
-This example shows how to change the data of a PostgreSQL table using the Pulsar Debezium connector.
-
-
-1. Start a PostgreSQL server with a database from which Debezium can capture changes.
-
-   ```bash
-   
-   $ docker run -d -it --rm \
-   --name pulsar-postgres \
-   -p 5432:5432 \
-   -e POSTGRES_PASSWORD=changeme \
-   postgres:13.3 -c wal_level=logical
-   
-   ```
-
-2. Start a Pulsar service locally in standalone mode.
-
-   ```bash
-   
-   $ bin/pulsar standalone
-   
-   ```
-
-3. Start the Pulsar Debezium connector in local run mode using one of the following methods.
-
-   * Use the **JSON** configuration file as shown previously.
-
-     Make sure the nar file is available at `connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar`.
-
-     ```bash
-     
-     $ bin/pulsar-admin source localrun \
-     --archive connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar \
-     --name debezium-postgres-source \
-     --destination-topic-name debezium-postgres-topic \
-     --tenant public \
-     --namespace default \
-     --source-config '{"database.hostname": "localhost","database.port": "5432","database.user": "postgres","database.password": "changeme","database.dbname": "postgres","database.server.name": "dbserver1","schema.whitelist": "public","table.whitelist": "public.users","pulsar.service.url": "pulsar://127.0.0.1:6650"}'
-     
-     ```
-
-     :::note
-
-     Currently, the destination topic (specified by the `destination-topic-name` option) is a required configuration but it is not used for the Debezium connector to save data. The Debezium connector saves data in the following 4 types of topics:
-
-     - One topic named with the database server name ( `database.server.name`) for storing the database metadata messages, such as `public/default/database.server.name`.
-     - One topic (`database.history.pulsar.topic`) for storing the database history information. The connector writes and recovers DDL statements on this topic.
-     - One topic (`offset.storage.topic`) for storing the offset metadata messages. The connector saves the last successfully-committed offsets on this topic.
-     - One per-table topic. The connector writes change events for all operations that occur in a table to a single Pulsar topic that is specific to that table.
-
-     If the automatic topic creation is disabled on your broker, you need to manually create the above 4 types of topics and the destination topic.
-
-     :::
-
-   * Use the **YAML** configuration file as shown previously.
-
-     ```bash
-     
-     $ bin/pulsar-admin source localrun \
-     --source-config-file debezium-postgres-source-config.yaml
-     
-     ```
-
-4. Subscribe to the topic _sub-users_ for the _public.users_ table.
-
-   ```
-   
-   $ bin/pulsar-client consume -s "sub-users" public/default/dbserver1.public.users -n 0
-   
-   ```
-
-5. Start a PostgreSQL client in docker.
-
-   ```bash
-   
-   $ docker exec -it pulsar-postgres /bin/bash
-   
-   ```
-
-6. A PostgreSQL client session opens.
-
-   Use the following commands to create sample data in the table _users_.
-
-   ```
-   
-   psql -U postgres -h localhost -p 5432
-   Password for user postgres:
-   
-   CREATE TABLE users(
-     id BIGINT GENERATED ALWAYS AS IDENTITY, PRIMARY KEY(id),
-     hash_firstname TEXT NOT NULL,
-     hash_lastname TEXT NOT NULL,
-     gender VARCHAR(6) NOT NULL CHECK (gender IN ('male', 'female'))
-   );
-   
-   INSERT INTO users(hash_firstname, hash_lastname, gender)
-     SELECT md5(RANDOM()::TEXT), md5(RANDOM()::TEXT), CASE WHEN RANDOM() < 0.5 THEN 'male' ELSE 'female' END FROM generate_series(1, 100);
-   
-   postgres=# select * from users;
-   
-    id | hash_firstname                   | hash_lastname                    | gender
-   ----+----------------------------------+----------------------------------+--------
-    1  | 02bf7880eb489edc624ba637f5ab42bd | 3e742c2cc4217d8e3382cc251415b2fb | female
-    2  | dd07064326bb9119189032316158f064 | 9c0e938f9eddbd5200ba348965afbc61 | male
-    3  | 2c5316fdd9d6595c1cceb70eed12e80c | 8a93d7d8f9d76acfaaa625c82a03ea8b | female
-    4  | 3dfa3b4f70d8cd2155567210e5043d2b | 32c156bc28f7f03ab5d28e2588a3dc19 | female
-   
-   
-   postgres=# UPDATE users SET hash_firstname='maxim' WHERE id=1;
-   UPDATE 1
-   
-   ```
-
-   In the terminal window where you subscribed to the topic, you can receive the following messages.
-
-   ```bash
-   
-   ----- got message -----
-   {"before":null,"after":{"id":1,"hash_firstname":"maxim","hash_lastname":"292113d30a3ccee0e19733dd7f88b258","gender":"male"},"source":{"version":"1.0.0.Final","connector":"postgresql","name":"foobar","ts_ms":1624045862644,"snapshot":"false","db":"postgres","schema":"public","table":"users","txId":595,"lsn":24419784,"xmin":null},"op":"u","ts_ms":1624045862648}
-   ...many more
-   
-   ```
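-
-   Because the connector publishes records with a `KeyValue` schema (see [Converter Options](#converter-options) above), you can optionally inspect the schema that was attached to the per-table topic. The following is a quick, optional check:
-
-   ```bash
-   
-   $ bin/pulsar-admin schemas get public/default/dbserver1.public.users
-   
-   ```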
-
-## Example of MongoDB
-
-You need to create a configuration file before using the Pulsar Debezium connector.
-
-### Configuration
-
-You can use one of the following methods to create a configuration file.
-
-* JSON
-
-  ```json
-
-  {
-    "mongodb.hosts": "rs0/mongodb:27017",
-    "mongodb.name": "dbserver1",
-    "mongodb.user": "debezium",
-    "mongodb.password": "dbz",
-    "mongodb.task.id": "1",
-    "database.whitelist": "inventory",
-    "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650"
-  }
-
-  ```
-
-* YAML
-
-  You can create a `debezium-mongodb-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mongodb/src/main/resources/debezium-mongodb-source-config.yaml) below to the `debezium-mongodb-source-config.yaml` file.
-
-  ```yaml
-
-  tenant: "public"
-  namespace: "default"
-  name: "debezium-mongodb-source"
-  topicName: "debezium-mongodb-topic"
-  archive: "connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar"
-  parallelism: 1
-
-  configs:
-
-    ## config for mongodb, docker image: debezium/example-mongodb:0.10
-    mongodb.hosts: "rs0/mongodb:27017"
-    mongodb.name: "dbserver1"
-    mongodb.user: "debezium"
-    mongodb.password: "dbz"
-    mongodb.task.id: "1"
-    database.whitelist: "inventory"
-    database.history.pulsar.service.url: "pulsar://127.0.0.1:6650"
-
-  ```
-
-### Usage
-
-This example shows how to change the data of a MongoDB collection using the Pulsar Debezium connector.
-
-
-1. Start a MongoDB server with a database from which Debezium can capture changes.
-
-   ```bash
-
-   $ docker pull debezium/example-mongodb:0.10
-   $ docker run -d -it --rm --name pulsar-mongodb -e MONGODB_USER=mongodb -e MONGODB_PASSWORD=mongodb -p 27017:27017 debezium/example-mongodb:0.10
-
-   ```
-
-   Use the following command to initialize the data in the container.
-
-   ```bash
-
-   $ docker exec -it pulsar-mongodb /usr/local/bin/init-inventory.sh
-
-   ```
-
-   If the local host cannot access the container network, you can update the `/etc/hosts` file and add a rule such as `127.0.0.1 f114527a95f`, where `f114527a95f` is the container ID. You can get the container ID with `docker ps -a`.
-
-
-2. Start a Pulsar service locally in standalone mode.
-
-   ```bash
-
-   $ bin/pulsar standalone
-
-   ```
-
-3. Start the Pulsar Debezium connector in local run mode using one of the following methods.
-
-   * Use the **JSON** configuration file as shown previously.
-
-     Make sure the nar file is available at `connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar`.
-
-     ```bash
-
-     $ bin/pulsar-admin source localrun \
-     --archive connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar \
-     --name debezium-mongodb-source \
-     --destination-topic-name debezium-mongodb-topic \
-     --tenant public \
-     --namespace default \
-     --source-config '{"mongodb.hosts": "rs0/mongodb:27017","mongodb.name": "dbserver1","mongodb.user": "debezium","mongodb.password": "dbz","mongodb.task.id": "1","database.whitelist": "inventory","database.history.pulsar.service.url": "pulsar://127.0.0.1:6650"}'
-
-     ```
-
-     :::note
-
-     Currently, the destination topic (specified by the `destination-topic-name` option) is a required configuration, but the Debezium connector does not use it to save data. The Debezium connector saves data in the following 4 types of topics:
-
-     - One topic named with the database server name (`database.server.name`) for storing the database metadata messages, such as `public/default/database.server.name`.
-     - One topic (`database.history.pulsar.topic`) for storing the database history information. The connector writes and recovers DDL statements on this topic.
-     - One topic (`offset.storage.topic`) for storing the offset metadata messages. The connector saves the last successfully-committed offsets on this topic.
-     - One per-table topic. The connector writes change events for all operations that occur in a table to a single Pulsar topic that is specific to that table.
-
-     If the automatic topic creation is disabled on your broker, you need to manually create the above 4 types of topics and the destination topic.
-
-     :::
-
-   * Use the **YAML** configuration file as shown previously.
-
-     ```bash
-
-     $ bin/pulsar-admin source localrun \
-     --source-config-file debezium-mongodb-source-config.yaml
-
-     ```
-
-4. Subscribe to the topic _sub-products_ for the _inventory.products_ table.
-
-   ```bash
-
-   $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0
-
-   ```
-
-5. Start a MongoDB client in docker.
-
-   ```bash
-
-   $ docker exec -it pulsar-mongodb /bin/bash
-
-   ```
-
-6. The MongoDB client starts. Use the following commands to update a document.
-
-   ```bash
-
-   mongo -u debezium -p dbz --authenticationDatabase admin localhost:27017/inventory
-   db.products.update({"_id":NumberLong(104)},{$set:{weight:1.25}})
-
-   ```
-
-   In the terminal window where you subscribed to the topic, you receive messages like the following.
-
-   ```bash
-
-   ----- got message -----
-   {"schema":{"type":"struct","fields":[{"type":"string","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":"104"}}, value = {"schema":{"type":"struct","fields":[{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"after"},{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"patch"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"version"},{"type":"string","optional":false,"field":"connector"},{"type":"string","optional":false,"field":"name"},{"type":"int64","optional":false,"field":"ts_ms"},{"type":"string","optional":true,"name":"io.debezium.data.Enum","version":1,"parameters":{"allowed":"true,last,false"},"default":"false","field":"snapshot"},{"type":"string","optional":false,"field":"db"},{"type":"string","optional":false,"field":"rs"},{"type":"string","optional":false,"field":"collection"},{"type":"int32","optional":false,"field":"ord"},{"type":"int64","optional":true,"field":"h"}],"optional":false,"name":"io.debezium.connector.mongo.Source","field":"source"},{"type":"string","optional":true,"field":"op"},{"type":"int64","optional":true,"field":"ts_ms"}],"optional":false,"name":"dbserver1.inventory.products.Envelope"},"payload":{"after":"{\"_id\": {\"$numberLong\": \"104\"},\"name\": \"hammer\",\"description\": \"12oz carpenter's hammer\",\"weight\": 1.25,\"quantity\": 4}","patch":null,"source":{"version":"0.10.0.Final","connector":"mongodb","name":"dbserver1","ts_ms":1573541905000,"snapshot":"true","db":"inventory","rs":"rs0","collection":"products","ord":1,"h":4983083486544392763},"op":"r","ts_ms":1573541909761}}
-
-   ```
-
-## Example of Oracle
-
-### Packaging
-
-The Oracle connector does not include the Oracle JDBC driver, so you need to package the driver with the connector.
-The major reasons for not including the driver are the variety of driver versions and Oracle licensing. It is recommended to use the driver provided with your Oracle DB installation, or you can [download](https://www.oracle.com/database/technologies/appdev/jdbc.html) one.
-The integration tests have an [example](https://github.com/apache/pulsar/blob/e2bc52d40450fa00af258c4432a5b71d50a5c6e0/tests/docker-images/latest-version-image/Dockerfile#L110-L122) of packaging the driver into the connector nar file.
-
-### Configuration
-
-Debezium [requires](https://debezium.io/documentation/reference/1.5/connectors/oracle.html#oracle-overview) Oracle DB with LogMiner or XStream API enabled.
-The supported options and the steps for enabling them vary from version to version of Oracle DB.
-The steps outlined in the [documentation](https://debezium.io/documentation/reference/1.5/connectors/oracle.html#oracle-overview) and used in the [integration test](https://github.com/apache/pulsar/blob/master/tests/integration/src/test/java/org/apache/pulsar/tests/integration/io/sources/debezium/DebeziumOracleDbSourceTester.java) may or may not work for the version and edition of Oracle DB you are using.
-Please refer to the [documentation for Oracle DB](https://docs.oracle.com/en/database/oracle/oracle-database/) as needed.
-
-Similarly to other connectors, you can use JSON or YAML to configure the connector. For example, with YAML you can create a `debezium-oracle-source-config.yaml` file like the one shown below.
-
-* JSON
-
-```json
-
-{
-  "database.hostname": "localhost",
-  "database.port": "1521",
-  "database.user": "dbzuser",
-  "database.password": "dbz",
-  "database.dbname": "XE",
-  "database.server.name": "XE",
-  "schema.exclude.list": "system,dbzuser",
-  "snapshot.mode": "initial",
-  "topic.namespace": "public/default",
-  "task.class": "io.debezium.connector.oracle.OracleConnectorTask",
-  "value.converter": "org.apache.kafka.connect.json.JsonConverter",
-  "key.converter": "org.apache.kafka.connect.json.JsonConverter",
-  "typeClassName": "org.apache.pulsar.common.schema.KeyValue",
-  "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory",
-  "database.tcpKeepAlive": "true",
-  "decimal.handling.mode": "double",
-  "database.history.pulsar.topic": "debezium-oracle-source-history-topic",
-  "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650"
-}
-
-```
-
-* YAML
-
-```yaml
-
-tenant: "public"
-namespace: "default"
-name: "debezium-oracle-source"
-topicName: "debezium-oracle-topic"
-parallelism: 1
-
-className: "org.apache.pulsar.io.debezium.oracle.DebeziumOracleSource"
-database.dbname: "XE"
-
-configs:
-  database.hostname: "localhost"
-  database.port: "1521"
-  database.user: "dbzuser"
-  database.password: "dbz"
-  database.dbname: "XE"
-  database.server.name: "XE"
-  schema.exclude.list: "system,dbzuser"
-  snapshot.mode: "initial"
-  topic.namespace: "public/default"
-  task.class: "io.debezium.connector.oracle.OracleConnectorTask"
-  value.converter: "org.apache.kafka.connect.json.JsonConverter"
-  key.converter: "org.apache.kafka.connect.json.JsonConverter"
-  typeClassName: "org.apache.pulsar.common.schema.KeyValue"
-  database.history: "org.apache.pulsar.io.debezium.PulsarDatabaseHistory"
-  database.tcpKeepAlive: "true"
-  decimal.handling.mode: "double"
-  database.history.pulsar.topic: "debezium-oracle-source-history-topic"
-  database.history.pulsar.service.url: "pulsar://127.0.0.1:6650"
-
-```
-
-For the full list of configuration properties supported by Debezium, see [Debezium Connector for Oracle](https://debezium.io/documentation/reference/1.5/connectors/oracle.html#oracle-connector-properties).
-
-## Example of Microsoft SQL
-
-### Configuration
-
-Debezium [requires](https://debezium.io/documentation/reference/1.5/connectors/sqlserver.html#sqlserver-overview) SQL Server with CDC enabled.
-Follow the steps outlined in the [documentation](https://debezium.io/documentation/reference/1.5/connectors/sqlserver.html#setting-up-sqlserver) and used in the [integration test](https://github.com/apache/pulsar/blob/master/tests/integration/src/test/java/org/apache/pulsar/tests/integration/io/sources/debezium/DebeziumMsSqlSourceTester.java).
-For more information, see [Enable and disable change data capture in Microsoft SQL Server](https://docs.microsoft.com/en-us/sql/relational-databases/track-changes/enable-and-disable-change-data-capture-sql-server).
-
-Similarly to other connectors, you can use JSON or YAML to configure the connector.
-
-* JSON
-
-```json
-
-{
-  "database.hostname": "localhost",
-  "database.port": "1433",
-  "database.user": "sa",
-  "database.password": "MyP@ssw0rd!",
-  "database.dbname": "MyTestDB",
-  "database.server.name": "mssql",
-  "snapshot.mode": "schema_only",
-  "topic.namespace": "public/default",
-  "task.class": "io.debezium.connector.sqlserver.SqlServerConnectorTask",
-  "value.converter": "org.apache.kafka.connect.json.JsonConverter",
-  "key.converter": "org.apache.kafka.connect.json.JsonConverter",
-  "typeClassName": "org.apache.pulsar.common.schema.KeyValue",
-  "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory",
-  "database.tcpKeepAlive": "true",
-  "decimal.handling.mode": "double",
-  "database.history.pulsar.topic": "debezium-mssql-source-history-topic",
-  "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650"
-}
-
-```
-
-* YAML
-
-```yaml
-
-tenant: "public"
-namespace: "default"
-name: "debezium-mssql-source"
-topicName: "debezium-mssql-topic"
-parallelism: 1
-
-className: "org.apache.pulsar.io.debezium.mssql.DebeziumMsSqlSource"
-database.dbname: "mssql"
-
-configs:
-  database.hostname: "localhost"
-  database.port: "1433"
-  database.user: "sa"
-  database.password: "MyP@ssw0rd!"
-  database.dbname: "MyTestDB"
-  database.server.name: "mssql"
-  snapshot.mode: "schema_only"
-  topic.namespace: "public/default"
-  task.class: "io.debezium.connector.sqlserver.SqlServerConnectorTask"
-  value.converter: "org.apache.kafka.connect.json.JsonConverter"
-  key.converter: "org.apache.kafka.connect.json.JsonConverter"
-  typeClassName: "org.apache.pulsar.common.schema.KeyValue"
-  database.history: "org.apache.pulsar.io.debezium.PulsarDatabaseHistory"
-  database.tcpKeepAlive: "true"
-  decimal.handling.mode: "double"
-  database.history.pulsar.topic: "debezium-mssql-source-history-topic"
-  database.history.pulsar.service.url: "pulsar://127.0.0.1:6650"
-
-```
-
-For the full list of configuration properties supported by Debezium, see [Debezium Connector for MS SQL](https://debezium.io/documentation/reference/1.5/connectors/sqlserver.html#sqlserver-connector-properties).
-
-## FAQ
-
-### The Debezium PostgreSQL connector hangs when creating a snapshot
-
-```
-
-#18 prio=5 os_prio=31 tid=0x00007fd83096f800 nid=0xa403 waiting on condition [0x000070000f534000]
-    java.lang.Thread.State: WAITING (parking)
-     at sun.misc.Unsafe.park(Native Method)
-     - parking to wait for  <0x00000007ab025a58> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
-     at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
-     at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
-     at java.util.concurrent.LinkedBlockingDeque.putLast(LinkedBlockingDeque.java:396)
-     at java.util.concurrent.LinkedBlockingDeque.put(LinkedBlockingDeque.java:649)
-     at io.debezium.connector.base.ChangeEventQueue.enqueue(ChangeEventQueue.java:132)
-     at io.debezium.connector.postgresql.PostgresConnectorTask$Lambda$203/385424085.accept(Unknown Source)
-     at io.debezium.connector.postgresql.RecordsSnapshotProducer.sendCurrentRecord(RecordsSnapshotProducer.java:402)
-     at io.debezium.connector.postgresql.RecordsSnapshotProducer.readTable(RecordsSnapshotProducer.java:321)
-     at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$takeSnapshot$6(RecordsSnapshotProducer.java:226)
-     at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$240/1347039967.accept(Unknown Source)
-     at io.debezium.jdbc.JdbcConnection.queryWithBlockingConsumer(JdbcConnection.java:535)
-     at io.debezium.connector.postgresql.RecordsSnapshotProducer.takeSnapshot(RecordsSnapshotProducer.java:224)
-     at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$start$0(RecordsSnapshotProducer.java:87)
-     at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$206/589332928.run(Unknown Source)
-     at java.util.concurrent.CompletableFuture.uniRun(CompletableFuture.java:705)
-     at java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:717)
-     at java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2010)
-     at io.debezium.connector.postgresql.RecordsSnapshotProducer.start(RecordsSnapshotProducer.java:87)
-     at io.debezium.connector.postgresql.PostgresConnectorTask.start(PostgresConnectorTask.java:126)
-     at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:47)
-     at org.apache.pulsar.io.kafka.connect.KafkaConnectSource.open(KafkaConnectSource.java:127)
-     at org.apache.pulsar.io.debezium.DebeziumSource.open(DebeziumSource.java:100)
-     at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupInput(JavaInstanceRunnable.java:690)
-     at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupJavaInstance(JavaInstanceRunnable.java:200)
-     at org.apache.pulsar.functions.instance.JavaInstanceRunnable.run(JavaInstanceRunnable.java:230)
-     at java.lang.Thread.run(Thread.java:748)
-
-```
-
-If you encounter the above problem when synchronizing data, refer to [this issue](https://github.com/apache/pulsar/issues/4075) and add the `max.queue.size` setting to the configuration file:
-
-```
-
-max.queue.size=
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/io-debug.md b/site2/website/versioned_docs/version-2.10.1-deprecated/io-debug.md
deleted file mode 100644
index 890d5f692f7b16..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/io-debug.md
+++ /dev/null
@@ -1,407 +0,0 @@
----
-id: io-debug
-title: How to debug Pulsar connectors
-sidebar_label: "Debug"
-original_id: io-debug
----
-This guide explains how to debug connectors in localrun or cluster mode and gives a debugging checklist.
-To better demonstrate how to debug Pulsar connectors, this guide takes the Mongo sink connector as an example.
-
-**Deploy a Mongo sink environment**
-1. Start a Mongo service.
-
-   ```bash
-
-   docker pull mongo:4
-   docker run -d -p 27017:27017 --name pulsar-mongo -v $PWD/data:/data/db mongo:4
-
-   ```
-
-2. Create a DB and a collection.
-
-   ```bash
-
-   docker exec -it pulsar-mongo /bin/bash
-   mongo
-   > use pulsar
-   > db.createCollection('messages')
-   > exit
-
-   ```
-
-3. Start Pulsar standalone.
-
-   ```bash
-
-   docker pull apachepulsar/pulsar:2.4.0
-   docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --link pulsar-mongo --name pulsar-mongo-standalone apachepulsar/pulsar:2.4.0 bin/pulsar standalone
-
-   ```
-
-4. Configure the Mongo sink with the `mongo-sink-config.yaml` file.
-
-   ```yaml
-
-   configs:
-     mongoUri: "mongodb://pulsar-mongo:27017"
-     database: "pulsar"
-     collection: "messages"
-     batchSize: 2
-     batchTimeMs: 500
-
-   ```
-
-   ```bash
-
-   docker cp mongo-sink-config.yaml pulsar-mongo-standalone:/pulsar/
-
-   ```
-
-5. Download the Mongo sink nar package.
-
-   ```bash
-
-   docker exec -it pulsar-mongo-standalone /bin/bash
-   curl -O http://apache.01link.hk/pulsar/pulsar-2.4.0/connectors/pulsar-io-mongo-2.4.0.nar
-
-   ```
-
-## Debug in localrun mode
-Start the Mongo sink in localrun mode using the `localrun` command.
-:::tip
-
-For more information about the `localrun` command, see [`localrun`](reference-connector-admin.md/#localrun-1).
-
-:::
-
-```bash
-
-./bin/pulsar-admin sinks localrun \
---archive connectors/pulsar-io-mongo-@pulsar:version@.nar \
---tenant public --namespace default \
---inputs test-mongo \
---name pulsar-mongo-sink \
---sink-config-file mongo-sink-config.yaml \
---parallelism 1
-
-```
-
-### Use connector log
-Use one of the following methods to get a connector log in localrun mode:
-* After executing the `localrun` command, the **log is automatically printed on the console**.
-* The log is located at:
-
-  ```bash
-
-  logs/functions/tenant/namespace/function-name/function-name-instance-id.log
-
-  ```
-
-  **Example**
-
-  The path of the Mongo sink connector is:
-
-  ```bash
-
-  logs/functions/public/default/pulsar-mongo-sink/pulsar-mongo-sink-0.log
-
-  ```
-
-To explain the log information clearly, the following breaks the large block of log output into smaller blocks and adds a description for each block.
-* This piece of log information shows the storage path of the nar package after decompression.
-
-  ```
-
-  08:21:54.132 [main] INFO  org.apache.pulsar.common.nar.NarClassLoader - Created class loader with paths: [file:/tmp/pulsar-nar/pulsar-io-mongo-2.4.0.nar-unpacked/, file:/tmp/pulsar-nar/pulsar-io-mongo-2.4.0.nar-unpacked/META-INF/bundled-dependencies/,
-
-  ```
-
-  :::tip
-
-  If a `class cannot be found` exception is thrown, check whether the nar file is decompressed in the folder `file:/tmp/pulsar-nar/pulsar-io-mongo-2.4.0.nar-unpacked/META-INF/bundled-dependencies/` or not.
-
-  :::
-
-* This piece of log information illustrates the basic information about the Mongo sink connector, such as tenant, namespace, name, parallelism, and resources, which can be used to **check whether the Mongo sink connector is configured correctly**.
- - ```bash - - 08:21:55.390 [main] INFO org.apache.pulsar.functions.runtime.ThreadRuntime - ThreadContainer starting function with instance config InstanceConfig(instanceId=0, functionId=853d60a1-0c48-44d5-9a5c-6917386476b2, functionVersion=c2ce1458-b69e-4175-88c0-a0a856a2be8c, functionDetails=tenant: "public" - namespace: "default" - name: "pulsar-mongo-sink" - className: "org.apache.pulsar.functions.api.utils.IdentityFunction" - autoAck: true - parallelism: 1 - source { - typeClassName: "[B" - inputSpecs { - key: "test-mongo" - value { - } - } - cleanupSubscription: true - } - sink { - className: "org.apache.pulsar.io.mongodb.MongoSink" - configs: "{\"mongoUri\":\"mongodb://pulsar-mongo:27017\",\"database\":\"pulsar\",\"collection\":\"messages\",\"batchSize\":2,\"batchTimeMs\":500}" - typeClassName: "[B" - } - resources { - cpu: 1.0 - ram: 1073741824 - disk: 10737418240 - } - componentType: SINK - , maxBufferedTuples=1024, functionAuthenticationSpec=null, port=38459, clusterName=local) - - ``` - -* This piece of log information demonstrates the status of the connections to Mongo and configuration information. - - ```bash - - 08:21:56.231 [cluster-ClusterId{value='5d6396a3c9e77c0569ff00eb', description='null'}-pulsar-mongo:27017] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:1, serverValue:8}] to pulsar-mongo:27017 - 08:21:56.326 [cluster-ClusterId{value='5d6396a3c9e77c0569ff00eb', description='null'}-pulsar-mongo:27017] INFO org.mongodb.driver.cluster - Monitor thread successfully connected to server with description ServerDescription{address=pulsar-mongo:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[4, 2, 0]}, minWireVersion=0, maxWireVersion=8, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=89058800} - - ``` - -* This piece of log information explains the configuration of consumers and clients, including the topic name, subscription name, subscription type, and so on. 
- - ```bash - - 08:21:56.719 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl - Starting Pulsar consumer status recorder with config: { - "topicNames" : [ "test-mongo" ], - "topicsPattern" : null, - "subscriptionName" : "public/default/pulsar-mongo-sink", - "subscriptionType" : "Shared", - "receiverQueueSize" : 1000, - "acknowledgementsGroupTimeMicros" : 100000, - "negativeAckRedeliveryDelayMicros" : 60000000, - "maxTotalReceiverQueueSizeAcrossPartitions" : 50000, - "consumerName" : null, - "ackTimeoutMillis" : 0, - "tickDurationMillis" : 1000, - "priorityLevel" : 0, - "cryptoFailureAction" : "CONSUME", - "properties" : { - "application" : "pulsar-sink", - "id" : "public/default/pulsar-mongo-sink", - "instance_id" : "0" - }, - "readCompacted" : false, - "subscriptionInitialPosition" : "Latest", - "patternAutoDiscoveryPeriod" : 1, - "regexSubscriptionMode" : "PersistentOnly", - "deadLetterPolicy" : null, - "autoUpdatePartitions" : true, - "replicateSubscriptionState" : false, - "resetIncludeHead" : false - } - 08:21:56.726 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl - Pulsar client config: { - "serviceUrl" : "pulsar://localhost:6650", - "authPluginClassName" : null, - "authParams" : null, - "operationTimeoutMs" : 30000, - "statsIntervalSeconds" : 60, - "numIoThreads" : 1, - "numListenerThreads" : 1, - "connectionsPerBroker" : 1, - "useTcpNoDelay" : true, - "useTls" : false, - "tlsTrustCertsFilePath" : null, - "tlsAllowInsecureConnection" : false, - "tlsHostnameVerificationEnable" : false, - "concurrentLookupRequest" : 5000, - "maxLookupRequest" : 50000, - "maxNumberOfRejectedRequestPerConnection" : 50, - "keepAliveIntervalSeconds" : 30, - "connectionTimeoutMs" : 10000, - "requestTimeoutMs" : 60000, - "defaultBackoffIntervalNanos" : 100000000, - "maxBackoffIntervalNanos" : 30000000000 - } - - ``` - -## Debug in cluster mode -You can use the following methods to debug a connector in cluster mode: -* [Use connector log](#use-connector-log) -* [Use admin CLI](#use-admin-cli) -### Use connector log -In cluster mode, multiple connectors can run on a worker. To find the log path of a specified connector, use the `workerId` to locate the connector log. -### Use admin CLI -Pulsar admin CLI helps you debug Pulsar connectors with the following subcommands: -* [`get`](#get) - -* [`status`](#status) -* [`topics stats`](#topics-stats) - -**Create a Mongo sink** - -```bash - -./bin/pulsar-admin sinks create \ ---archive pulsar-io-mongo-2.4.0.nar \ ---tenant public \ ---namespace default \ ---inputs test-mongo \ ---name pulsar-mongo-sink \ ---sink-config-file mongo-sink-config.yaml \ ---parallelism 1 - -``` - -### `get` -Use the `get` command to get the basic information about the Mongo sink connector, such as tenant, namespace, name, parallelism, and so on. 
-
-```bash
-
-./bin/pulsar-admin sinks get --tenant public --namespace default --name pulsar-mongo-sink
-{
-  "tenant": "public",
-  "namespace": "default",
-  "name": "pulsar-mongo-sink",
-  "className": "org.apache.pulsar.io.mongodb.MongoSink",
-  "inputSpecs": {
-    "test-mongo": {
-      "isRegexPattern": false
-    }
-  },
-  "configs": {
-    "mongoUri": "mongodb://pulsar-mongo:27017",
-    "database": "pulsar",
-    "collection": "messages",
-    "batchSize": 2.0,
-    "batchTimeMs": 500.0
-  },
-  "parallelism": 1,
-  "processingGuarantees": "ATLEAST_ONCE",
-  "retainOrdering": false,
-  "autoAck": true
-}
-
-```
-
-:::tip
-
-For more information about the `get` command, see [`get`](reference-connector-admin.md/#get-1).
-
-:::
-
-### `status`
-Use the `status` command to get the current status of the Mongo sink connector, such as the number of instances, the number of running instances, the instance ID, the worker ID, and so on.
-
-```bash
-
-./bin/pulsar-admin sinks status \
---tenant public \
---namespace default \
---name pulsar-mongo-sink
-{
-"numInstances" : 1,
-"numRunning" : 1,
-"instances" : [ {
-  "instanceId" : 0,
-  "status" : {
-    "running" : true,
-    "error" : "",
-    "numRestarts" : 0,
-    "numReadFromPulsar" : 0,
-    "numSystemExceptions" : 0,
-    "latestSystemExceptions" : [ ],
-    "numSinkExceptions" : 0,
-    "latestSinkExceptions" : [ ],
-    "numWrittenToSink" : 0,
-    "lastReceivedTime" : 0,
-    "workerId" : "c-standalone-fw-5d202832fd18-8080"
-  }
-} ]
-}
-
-```
-
-:::tip
-
-For more information about the `status` command, see [`status`](reference-connector-admin.md/#status-1).
-If there are multiple connectors running on a worker, `workerId` can locate the worker on which the specified connector is running.
-
-:::
-
-### `topics stats`
-Use the `topics stats` command to get the stats for a topic and its connected producers and consumers, such as whether the topic has received messages or not, whether there is a backlog of messages or not, the available permits, and other key information. All rates are computed over a 1-minute window and are relative to the last completed 1-minute period.
-
-```bash
-
-./bin/pulsar-admin topics stats test-mongo
-{
-  "msgRateIn" : 0.0,
-  "msgThroughputIn" : 0.0,
-  "msgRateOut" : 0.0,
-  "msgThroughputOut" : 0.0,
-  "averageMsgSize" : 0.0,
-  "storageSize" : 1,
-  "publishers" : [ ],
-  "subscriptions" : {
-    "public/default/pulsar-mongo-sink" : {
-      "msgRateOut" : 0.0,
-      "msgThroughputOut" : 0.0,
-      "msgRateRedeliver" : 0.0,
-      "msgBacklog" : 0,
-      "blockedSubscriptionOnUnackedMsgs" : false,
-      "msgDelayed" : 0,
-      "unackedMessages" : 0,
-      "type" : "Shared",
-      "msgRateExpired" : 0.0,
-      "consumers" : [ {
-        "msgRateOut" : 0.0,
-        "msgThroughputOut" : 0.0,
-        "msgRateRedeliver" : 0.0,
-        "consumerName" : "dffdd",
-        "availablePermits" : 999,
-        "unackedMessages" : 0,
-        "blockedConsumerOnUnackedMsgs" : false,
-        "metadata" : {
-          "instance_id" : "0",
-          "application" : "pulsar-sink",
-          "id" : "public/default/pulsar-mongo-sink"
-        },
-        "connectedSince" : "2019-08-26T08:48:07.582Z",
-        "clientVersion" : "2.4.0",
-        "address" : "/172.17.0.3:57790"
-      } ],
-      "isReplicated" : false
-    }
-  },
-  "replication" : { },
-  "deduplicationStatus" : "Disabled"
-}
-
-```
-
-:::tip
-
-For more information about the `topics stats` command, see [`topics stats`](/tools/pulsar-admin/).
-
-:::
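-
-You can also retrieve the same status programmatically with the Java admin client. The following is a minimal sketch, not part of the original tooling; it assumes the standalone admin endpoint at `http://localhost:8080` and the `pulsar-mongo-sink` created above:
-
-```java
-
-import org.apache.pulsar.client.admin.PulsarAdmin;
-import org.apache.pulsar.common.policies.data.SinkStatus;
-
-public class SinkStatusCheck {
-    public static void main(String[] args) throws Exception {
-        // Connect to the admin endpoint of the standalone broker.
-        try (PulsarAdmin admin = PulsarAdmin.builder()
-                .serviceHttpUrl("http://localhost:8080")
-                .build()) {
-            // Fetch the same information that `pulsar-admin sinks status` prints.
-            SinkStatus status = admin.sinks().getSinkStatus("public", "default", "pulsar-mongo-sink");
-            System.out.println("running instances: " + status.getNumRunning() + "/" + status.getNumInstances());
-        }
-    }
-}
-
-```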
-
-## Checklist
-This checklist indicates the major areas to check when you debug connectors. It is a reminder of what to look for to ensure a thorough review and an evaluation tool to get the status of connectors.
-* Does Pulsar start successfully?
-* Does the external service run normally?
-* Is the nar package complete?
-* Is the connector configuration file correct?
-* In localrun mode, run a connector and check the printed information (connector log) on the console.
-* In cluster mode:
-  * Use the `get` command to get the basic information.
-  * Use the `status` command to get the current status.
-  * Use the `topics stats` command to get the stats for a specified topic and its connected producers and consumers.
-  * Check the connector log.
-* Log in to the external system and verify the result.
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/io-develop.md b/site2/website/versioned_docs/version-2.10.1-deprecated/io-develop.md
deleted file mode 100644
index d6f4f8261ac820..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/io-develop.md
+++ /dev/null
@@ -1,421 +0,0 @@
----
-id: io-develop
-title: How to develop Pulsar connectors
-sidebar_label: "Develop"
-original_id: io-develop
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-This guide describes how to develop Pulsar connectors to move data between Pulsar and other systems.
-
-Pulsar connectors are special [Pulsar Functions](functions-overview.md), so creating a Pulsar connector is similar to creating a Pulsar function.
-
-Pulsar connectors come in two types:
-
-| Type | Description | Example
-|---|---|---
-{@inject: github:Source:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java}|Import data from another system to Pulsar.|[RabbitMQ source connector](io-rabbitmq.md) imports the messages of a RabbitMQ queue to a Pulsar topic.
-{@inject: github:Sink:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java}|Export data from Pulsar to another system.|[Kinesis sink connector](io-kinesis.md) exports the messages of a Pulsar topic to a Kinesis stream.
-
-## Develop
-
-You can develop Pulsar source connectors and sink connectors.
-
-### Source
-
-To develop a source connector, implement the {@inject: github:Source:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} interface, which means you need to implement the {@inject: github:open:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} method and the {@inject: github:read:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} method.
-
-1. Implement the {@inject: github:open:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} method.
-
-   ```java
-
-   /**
-    * Open connector with configuration
-    *
-    * @param config initialization config
-    * @param sourceContext
-    * @throws Exception IO type exceptions when opening a connector
-    */
-   void open(final Map<String, Object> config, SourceContext sourceContext) throws Exception;
-
-   ```
-
-   This method is called when the source connector is initialized.
-
-   In this method, you can retrieve all connector-specific settings through the passed-in `config` parameter and initialize all necessary resources.
-
-   For example, a Kafka connector can create a Kafka client in this `open` method.
-
-   Besides, the Pulsar runtime also provides a `SourceContext` for the connector to access runtime resources for tasks like collecting metrics. The implementation can save the `SourceContext` for future use.
-
-2. Implement the {@inject: github:read:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} method.
-
-   ```java
-
-   /**
-    * Reads the next message from source.
-    * If source does not have any new messages, this call should block.
-    * @return next message from source. The return result should never be null
-    * @throws Exception
-    */
-   Record<T> read() throws Exception;
-
-   ```
-
-   If there is nothing to return, the implementation should block rather than return `null`.
-
-   The returned {@inject: github:Record:/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/Record.java} should encapsulate the following information, which is needed by the Pulsar IO runtime.
-
-   * {@inject: github:Record:/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/Record.java} should provide the following variables:
-
-     |Variable|Required|Description
-     |---|---|---
-     `TopicName`|No|The Pulsar topic name from which the record originated.
-     `Key`|No|Messages can optionally be tagged with keys.<br/><br/>For more information, see [Routing modes](concepts-messaging.md#routing-modes).
-     `Value`|Yes|Actual data of the record.
-     `EventTime`|No|Event time of the record from the source.
-     `PartitionId`|No|If the record originates from a partitioned source, it returns its `PartitionId`.<br/><br/>`PartitionId` is used as a part of the unique identifier by the Pulsar IO runtime to deduplicate messages and achieve the exactly-once processing guarantee.
-     `RecordSequence`|No|If the record originates from a sequential source, it returns its `RecordSequence`.<br/><br/>`RecordSequence` is used as a part of the unique identifier by the Pulsar IO runtime to deduplicate messages and achieve the exactly-once processing guarantee.
-     `Properties`|No|If the record carries user-defined properties, it returns those properties.
-     `DestinationTopic`|No|The topic to which the message should be written.
-     `Message`|No|A class which carries data sent by users.<br/><br/>For more information, see [Message.java](https://github.com/apache/pulsar/blob/master/pulsar-client-api/src/main/java/org/apache/pulsar/client/api/Message.java).
-
-   * {@inject: github:Record:/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/Record.java} should provide the following methods:
-
-     Method|Description
-     |---|---
-     `ack`|Acknowledge that the record is fully processed.
-     `fail`|Indicate that the record fails to be processed.
-
-## Handle schema information
-
-Pulsar IO automatically handles the schema and provides a strongly typed API based on Java generics.
-If you know the schema type that you are producing, you can declare the Java class relative to that type in your source declaration.
-
-```
-
-public class MySource implements Source<String> {
-    public Record<String> read() {}
-}
-
-```
-
-If you want to implement a source that works with any schema, you can go with `byte[]` (or `ByteBuffer`) and use `Schema.AUTO_PRODUCE_BYTES()`.
-
-```
-
-public class MySource implements Source<byte[]> {
-    public Record<byte[]> read() {
-
-        Schema wantedSchema = ....
-        Record<byte[]> myRecord = new MyRecordImplementation();
-        ....
-    }
-    class MyRecordImplementation implements Record<byte[]> {
-        public byte[] getValue() {
-            return ....encoded byte[]...that represents the value
-        }
-        public Schema<byte[]> getSchema() {
-            return Schema.AUTO_PRODUCE_BYTES(wantedSchema);
-        }
-    }
-}
-
-```
-
-To handle the `KeyValue` type properly, follow the guidelines for your record implementation:
-- It must implement the {@inject: github:KVRecord:/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/KVRecord.java} interface and implement `getKeySchema`, `getValueSchema`, and `getKeyValueEncodingType`
-- It must return a `KeyValue` object as `Record.getValue()`
-- It may return null in `Record.getSchema()`
-
-When the Pulsar IO runtime encounters a `KVRecord`, it automatically applies the following changes:
-- Set the `KeyValueSchema` properly
-- Encode the message key and the message value according to the `KeyValueEncoding` (SEPARATED or INLINE)
-
-:::tip
-
-For more information about **how to create a source connector**, see {@inject: github:KafkaSource:/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSource.java}.
-
-:::
-
-### Sink
-
-Developing a sink connector **is similar to** developing a source connector, that is, you need to implement the {@inject: github:Sink:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} interface, which means implementing the {@inject: github:open:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} method and the {@inject: github:write:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} method.
-
-1. Implement the {@inject: github:open:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} method.
-
-   ```java
-
-   /**
-    * Open connector with configuration
-    *
-    * @param config initialization config
-    * @param sinkContext
-    * @throws Exception IO type exceptions when opening a connector
-    */
-   void open(final Map<String, Object> config, SinkContext sinkContext) throws Exception;
-
-   ```
-
-2. Implement the {@inject: github:write:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} method.
-
-   ```java
-
-   /**
-    * Write a message to Sink
-    * @param record record to write to sink
-    * @throws Exception
-    */
-   void write(Record<T> record) throws Exception;
-
-   ```
-
-   During the implementation, you can decide how to write the `Value` and the `Key` to the actual sink, and leverage all the provided information such as `PartitionId` and `RecordSequence` to achieve different processing guarantees.
-
-   You also need to ack records (if messages are sent successfully) or fail records (if messages fail to send).
-
-## Handling schema information
-
-Pulsar IO automatically handles the schema and provides a strongly typed API based on Java generics.
-If you know the schema type that you are consuming, you can declare the Java class relative to that type in your sink declaration.
-
-```
-
-public class MySink implements Sink<String> {
-    public void write(Record<String> record) {}
-}
-
-```
-
-If you want to implement a sink that works with any schema, you can go with the special `GenericObject` interface.
-
-```
-
-public class MySink implements Sink<GenericObject> {
-    public void write(Record<GenericObject> record) {
-        Schema schema = record.getSchema();
-        GenericObject genericObject = record.getValue();
-        if (genericObject != null) {
-            SchemaType type = genericObject.getSchemaType();
-            Object nativeObject = genericObject.getNativeObject();
-            ...
-        }
-        ....
-    }
-}
-
-```
-
-In the case of AVRO, JSON, and Protobuf records (schemaType=AVRO,JSON,PROTOBUF_NATIVE), you can cast the `genericObject` variable to `GenericRecord` and use the `getFields()` and `getField()` APIs.
-You can access the native AVRO record using `genericObject.getNativeObject()`.
-
-In the case of the KeyValue type, you can access both the schema for the key and the schema for the value using this code.
-
-```
-
-public class MySink implements Sink<GenericObject> {
-    public void write(Record<GenericObject> record) {
-        Schema schema = record.getSchema();
-        GenericObject genericObject = record.getValue();
-        SchemaType type = genericObject.getSchemaType();
-        Object nativeObject = genericObject.getNativeObject();
-        if (type == SchemaType.KEY_VALUE) {
-            KeyValue keyValue = (KeyValue) nativeObject;
-            Object key = keyValue.getKey();
-            Object value = keyValue.getValue();
-
-            KeyValueSchema keyValueSchema = (KeyValueSchema) schema;
-            Schema keySchema = keyValueSchema.getKeySchema();
-            Schema valueSchema = keyValueSchema.getValueSchema();
-        }
-        ....
-    }
-}
-
-```
-
-## Test
-
-Testing connectors can be challenging because Pulsar IO connectors interact with two systems that may be difficult to mock: Pulsar and the system to which the connector is connecting.
-
-It is recommended to write tests that verify the connector's functionality as described below, while mocking the external service.
-
-### Unit test
-
-You can create unit tests for your connector.
- -:::note - -If you plan to package and distribute your connector for others to use, you are obligated to - -::: - -license and copyright your own code properly. Remember to add the license and copyright to -all libraries your code uses and to your distribution. -> -> If you use the [NAR](#nar) method, the NAR plugin -automatically creates a `DEPENDENCIES` file in the generated NAR package, including the proper -licensing and copyrights of all libraries of your connector. - -### NAR - -**NAR** stands for NiFi Archive, which is a custom packaging mechanism used by Apache NiFi, to provide -a bit of Java ClassLoader isolation. - -:::tip - -For more information about **how NAR works**, see [here](https://medium.com/hashmapinc/nifi-nar-files-explained-14113f7796fd). - -::: - -Pulsar uses the same mechanism for packaging **all** [built-in connectors](io-connectors.md). - -The easiest approach to package a Pulsar connector is to create a NAR package using [nifi-nar-maven-plugin](https://mvnrepository.com/artifact/org.apache.nifi/nifi-nar-maven-plugin). - -Include this [nifi-nar-maven-plugin](https://mvnrepository.com/artifact/org.apache.nifi/nifi-nar-maven-plugin) in your maven project for your connector as below. - -```xml - - - - org.apache.nifi - nifi-nar-maven-plugin - 1.2.0 - - - -``` - -You must also create a `resources/META-INF/services/pulsar-io.yaml` file with the following contents: - -```yaml - -name: connector name -description: connector description -sourceClass: fully qualified class name (only if source connector) -sinkClass: fully qualified class name (only if sink connector) - -``` - -For Gradle users, there is a [Gradle Nar plugin available on the Gradle Plugin Portal](https://plugins.gradle.org/plugin/io.github.lhotari.gradle-nar-plugin). - -:::tip - -For more information about an **how to use NAR for Pulsar connectors**, see {@inject: github:TwitterFirehose:/pulsar-io/twitter/pom.xml}. - -::: - -### Uber JAR - -An alternative approach is to create an **uber JAR** that contains all of the connector's JAR files -and other resource files. No directory internal structure is necessary. - -You can use [maven-shade-plugin](https://maven.apache.org/plugins/maven-shade-plugin/examples/includes-excludes.html) to create a uber JAR as below: - -```xml - - - org.apache.maven.plugins - maven-shade-plugin - 3.1.1 - - - package - - shade - - - - - *:* - - - - - - - -``` - -## Monitor - -Pulsar connectors enable you to move data in and out of Pulsar easily. It is important to ensure that the running connectors are healthy at any time. You can monitor Pulsar connectors that have been deployed with the following methods: - -- Check the metrics provided by Pulsar. - - Pulsar connectors expose the metrics that can be collected and used for monitoring the health of **Java** connectors. You can check the metrics by following the [monitoring](deploy-monitoring.md) guide. - -- Set and check your customized metrics. - - In addition to the metrics provided by Pulsar, Pulsar allows you to customize metrics for **Java** connectors. Function workers collect user-defined metrics to Prometheus automatically and you can check them in Grafana. - -Here is an example of how to customize metrics for a Java connector. 
-
-````mdx-code-block
-<Tabs defaultValue="Java" values={[{"label":"Java","value":"Java"}]}>
-<TabItem value="Java">
-
-```
-
-public class TestMetricSink implements Sink<String> {
-
-    @Override
-    public void open(Map<String, Object> config, SinkContext sinkContext) throws Exception {
-        sinkContext.recordMetric("foo", 1);
-    }
-
-    @Override
-    public void write(Record<String> record) throws Exception {
-
-    }
-
-    @Override
-    public void close() throws Exception {
-
-    }
-}
-
-```
-
-</TabItem>
-</Tabs>
-````
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/io-dynamodb-source.md b/site2/website/versioned_docs/version-2.10.1-deprecated/io-dynamodb-source.md
deleted file mode 100644
index 0314be2529b4ca..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/io-dynamodb-source.md
+++ /dev/null
@@ -1,82 +0,0 @@
----
-id: io-dynamodb-source
-title: AWS DynamoDB source connector
-sidebar_label: "AWS DynamoDB source connector"
-original_id: io-dynamodb-source
----
-
-The DynamoDB source connector pulls data from DynamoDB table streams and persists data into Pulsar.
-
-This connector uses the [DynamoDB Streams Kinesis Adapter](https://github.com/awslabs/dynamodb-streams-kinesis-adapter),
-which uses the [Kinesis Client Library](https://github.com/awslabs/amazon-kinesis-client) (KCL) to do the actual
-consuming of messages. The KCL uses DynamoDB to track state for consumers and requires CloudWatch access to log metrics.
-
-
-## Configuration
-
-The configuration of the DynamoDB source connector has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-`initialPositionInStream`|InitialPositionInStream|false|LATEST|The position where the connector starts from.<br/><br/>Below are the available options:<br/><br/>1. `AT_TIMESTAMP`: start from the record at or after the specified timestamp.<br/><br/>2. `LATEST`: start after the most recent data record.<br/><br/>3. `TRIM_HORIZON`: start from the oldest available data record.
-`startAtTime`|Date|false|" " (empty string)|If set to `AT_TIMESTAMP`, it specifies the point in time to start consumption.
-`applicationName`|String|false|Pulsar IO connector|The name of the KCL application. Must be unique, as it is used to define the table name for the dynamo table used for state tracking.<br/><br/>By default, the application name is included in the user agent string used to make AWS requests. This can assist with troubleshooting, for example, distinguishing requests made by separate connector instances.
-`checkpointInterval`|long|false|60000|The frequency of the KCL checkpoint in milliseconds.
-`backoffTime`|long|false|3000|The amount of time to delay between requests when the connector encounters a throttling exception from AWS Kinesis in milliseconds.
-`numRetries`|int|false|3|The number of re-attempts when the connector encounters an exception while trying to set a checkpoint.
-`receiveQueueSize`|int|false|1000|The maximum number of AWS records that can be buffered inside the connector.<br/><br/>Once the `receiveQueueSize` is reached, the connector does not consume any messages from Kinesis until some messages in the queue are successfully consumed.
-`dynamoEndpoint`|String|false|" " (empty string)|The Dynamo end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
-`cloudwatchEndpoint`|String|false|" " (empty string)|The CloudWatch end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
-`awsEndpoint`|String|false|" " (empty string)|The DynamoDB Streams end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
-`awsRegion`|String|false|" " (empty string)|The AWS region.<br/><br/>**Example**<br/>us-west-1, us-west-2
-`awsDynamodbStreamArn`|String|true|" " (empty string)|The DynamoDB stream arn.
-`awsCredentialPluginName`|String|false|" " (empty string)|The fully-qualified class name of an implementation of {@inject: github:AwsCredentialProviderPlugin:/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java}.<br/><br/>`awsCredentialProviderPlugin` has the following built-in plugins:<br/><br/>1. `org.apache.pulsar.io.kinesis.AwsDefaultProviderChainPlugin`: this plugin uses the default AWS provider chain. For more information, see [using the default credential provider chain](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html#credentials-default).<br/><br/>2. `org.apache.pulsar.io.kinesis.STSAssumeRoleProviderPlugin`: this plugin takes a configuration via the `awsCredentialPluginParam` that describes a role to assume when running the KCL.<br/>**JSON configuration example**<br/>`{"roleArn": "arn...", "roleSessionName": "name"}`<br/><br/>`awsCredentialPluginName` is a factory class which creates an AWSCredentialsProvider that is used by the Kinesis sink.<br/><br/>If `awsCredentialPluginName` is set to empty, the Kinesis sink creates a default AWSCredentialsProvider which accepts a JSON map of credentials in `awsCredentialPluginParam`.
-`awsCredentialPluginParam`|String |false|" " (empty string)|The JSON parameter to initialize `awsCredentialsProviderPlugin`.
-
-### Example
-
-Before using the DynamoDB source connector, you need to create a configuration file through one of the following methods.
-
-* JSON
-
-  ```json
-
-  {
-    "configs": {
-      "awsEndpoint": "https://some.endpoint.aws",
-      "awsRegion": "us-east-1",
-      "awsDynamodbStreamArn": "arn:aws:dynamodb:us-west-2:111122223333:table/TestTable/stream/2015-05-11T21:21:33.291",
-      "awsCredentialPluginParam": "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}",
-      "applicationName": "My test application",
-      "checkpointInterval": "30000",
-      "backoffTime": "4000",
-      "numRetries": "3",
-      "receiveQueueSize": 2000,
-      "initialPositionInStream": "TRIM_HORIZON",
-      "startAtTime": "2019-03-05T19:28:58.000Z"
-    }
-  }
-
-  ```
-
-* YAML
-
-  ```yaml
-
-  configs:
-    awsEndpoint: "https://some.endpoint.aws"
-    awsRegion: "us-east-1"
-    awsDynamodbStreamArn: "arn:aws:dynamodb:us-west-2:111122223333:table/TestTable/stream/2015-05-11T21:21:33.291"
-    awsCredentialPluginParam: "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}"
-    applicationName: "My test application"
-    checkpointInterval: 30000
-    backoffTime: 4000
-    numRetries: 3
-    receiveQueueSize: 2000
-    initialPositionInStream: "TRIM_HORIZON"
-    startAtTime: "2019-03-05T19:28:58.000Z"
-
-  ```
-
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/io-elasticsearch-sink.md b/site2/website/versioned_docs/version-2.10.1-deprecated/io-elasticsearch-sink.md
deleted file mode 100644
index 4a5e3494138147..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/io-elasticsearch-sink.md
+++ /dev/null
@@ -1,244 +0,0 @@
----
-id: io-elasticsearch-sink
-title: Elasticsearch sink connector
-sidebar_label: "Elasticsearch sink connector"
-original_id: io-elasticsearch-sink
----
-
-The Elasticsearch sink connector pulls messages from Pulsar topics and persists the messages to indexes.
-
-
-## Feature
-
-### Handle data
-
-Since Pulsar 2.9.0, the Elasticsearch sink connector supports the following ways of working. You can choose one of them.
-
-Name | Description
----|---
-Raw processing | The sink reads from topics and passes the raw content to Elasticsearch.<br/><br/>This is the **default** behavior.<br/><br/>Raw processing was already available **in Pulsar 2.8.x**.
-Schema aware | The sink uses the schema and handles AVRO, JSON, and KeyValue schema types while mapping the content to the Elasticsearch document.<br/><br/>If you set `schemaEnable` to `true`, the sink interprets the contents of the message and you can define a **primary key** that is in turn used as the special `_id` field on Elasticsearch.<br/><br/>This allows you to perform `UPDATE`, `INSERT`, and `DELETE` operations to Elasticsearch driven by the logical primary key of the message.<br/><br/>This is very useful in a typical Change Data Capture scenario in which you follow the changes on your database, write them to Pulsar (using the Debezium adapter for instance), and then write to Elasticsearch.<br/><br/>You configure the mapping of the primary key using the `primaryFields` configuration entry.<br/><br/>The `DELETE` operation can be performed when the primary key is not empty and the remaining value is empty. Use the `nullValueAction` to configure this behavior. The default configuration simply ignores such empty values.
-
-### Map multiple indexes
-
-Since Pulsar 2.9.0, the `indexName` property is no longer required. If you omit it, the sink writes to an index named after the Pulsar topic name.
-
-### Enable bulk writes
-
-Since Pulsar 2.9.0, you can use bulk writes by setting the `bulkEnabled` property to `true`.
-
-### Enable secure connections via TLS
-
-Since Pulsar 2.9.0, you can enable secure connections with TLS.
-
-## Configuration
-
-The configuration of the Elasticsearch sink connector has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `elasticSearchUrl` | String| true |" " (empty string)| The URL of the Elasticsearch cluster to which the connector connects. |
-| `indexName` | String| false |" " (empty string)| The index name to which the connector writes messages. The default value is the topic name. It accepts date formats in the name to support event time based index with the pattern `%{+<date-format>}`. For example, suppose the event time of the record is 1645182000000L, the indexName is `logs-%{+yyyy-MM-dd}`, then the formatted index name would be `logs-2022-02-18`. |
-| `schemaEnable` | Boolean | false | false | Turn on the Schema Aware mode. |
-| `createIndexIfNeeded` | Boolean | false | false | Manage index if missing. |
-| `maxRetries` | Integer | false | 1 | The maximum number of retries for Elasticsearch requests. Use -1 to disable it. |
-| `retryBackoffInMs` | Integer | false | 100 | The base time to wait when retrying an Elasticsearch request (in milliseconds). |
-| `maxRetryTimeInSec` | Integer| false | 86400 | The maximum retry time interval in seconds for retrying an Elasticsearch request. |
-| `bulkEnabled` | Boolean | false | false | Enable the Elasticsearch bulk processor to flush write requests based on the number or size of requests, or after a given period. |
-| `bulkActions` | Integer | false | 1000 | The maximum number of actions per Elasticsearch bulk request. Use -1 to disable it. |
-| `bulkSizeInMb` | Integer | false |5 | The maximum size in megabytes of Elasticsearch bulk requests. Use -1 to disable it. |
-| `bulkConcurrentRequests` | Integer | false | 0 | The maximum number of in-flight Elasticsearch bulk requests. The default 0 allows the execution of a single request. A value of 1 means 1 concurrent request is allowed to be executed while accumulating new bulk requests. |
-| `bulkFlushIntervalInMs` | Integer | false | -1 | The maximum period of time to wait for flushing pending writes when bulk writes are enabled. The default is -1, meaning not set. |
-| `compressionEnabled` | Boolean | false |false | Enable Elasticsearch request compression. |
-| `connectTimeoutInMs` | Integer | false |5000 | The Elasticsearch client connection timeout in milliseconds. |
-| `connectionRequestTimeoutInMs` | Integer | false |1000 | The time in milliseconds for getting a connection from the Elasticsearch connection pool. |
-| `connectionIdleTimeoutInMs` | Integer | false |5 | Idle connection timeout to prevent a read timeout. |
If primaryFields is defined, the connector extract the primary fields from the payload to build the document `_id` If no primaryFields are provided, elasticsearch auto generates a random document `_id`. | -| `primaryFields` | String | false | "id" | The comma separated ordered list of field names used to build the Elasticsearch document `_id` from the record value. If this list is a singleton, the field is converted as a string. If this list has 2 or more fields, the generated `_id` is a string representation of a JSON array of the field values. | -| `nullValueAction` | enum (IGNORE,DELETE,FAIL) | false | IGNORE | How to handle records with null values, possible options are IGNORE, DELETE or FAIL. Default is IGNORE the message. | -| `malformedDocAction` | enum (IGNORE,WARN,FAIL) | false | FAIL | How to handle elasticsearch rejected documents due to some malformation. Possible options are IGNORE, DELETE or FAIL. Default is FAIL the Elasticsearch document. | -| `stripNulls` | Boolean | false |true | If stripNulls is false, elasticsearch _source includes 'null' for empty fields (for example {"foo": null}), otherwise null fields are stripped. | -| `socketTimeoutInMs` | Integer | false |60000 | The socket timeout in milliseconds waiting to read the elasticsearch response. | -| `typeName` | String | false | "_doc" | The type name to which the connector writes messages to.

    The value should be set explicitly to a valid type name other than "_doc" for Elasticsearch version before 6.2, and left to default otherwise. | -| `indexNumberOfShards` | int| false |1| The number of shards of the index. | -| `indexNumberOfReplicas` | int| false |1 | The number of replicas of the index. | -| `username` | String| false |" " (empty string)| The username used by the connector to connect to the elastic search cluster.

If `username` is set, then `password` should also be provided. |
-| `password` | String| false | " " (empty string)|The password used by the connector to connect to the Elasticsearch cluster.

    If `username` is set, then `password` should also be provided. | -| `ssl` | ElasticSearchSslConfig | false | | Configuration for TLS encrypted communication | - -### Definition of ElasticSearchSslConfig structure: - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `enabled` | Boolean| false | false | Enable SSL/TLS. | -| `hostnameVerification` | Boolean| false | true | Whether or not to validate node hostnames when using SSL. | -| `truststorePath` | String| false |" " (empty string)| The path to the truststore file. | -| `truststorePassword` | String| false |" " (empty string)| Truststore password. | -| `keystorePath` | String| false |" " (empty string)| The path to the keystore file. | -| `keystorePassword` | String| false |" " (empty string)| Keystore password. | -| `cipherSuites` | String| false |" " (empty string)| SSL/TLS cipher suites. | -| `protocols` | String| false |"TLSv1.2" | Comma separated list of enabled SSL/TLS protocols. | - -## Example - -Before using the Elasticsearch sink connector, you need to create a configuration file through one of the following methods. - -### Configuration - -#### For Elasticsearch After 6.2 - -* JSON - - ```json - - { - "configs": { - "elasticSearchUrl": "http://localhost:9200", - "indexName": "my_index", - "username": "scooby", - "password": "doobie" - } - } - - ``` - -* YAML - - ```yaml - - configs: - elasticSearchUrl: "http://localhost:9200" - indexName: "my_index" - username: "scooby" - password: "doobie" - - ``` - -#### For Elasticsearch Before 6.2 - -* JSON - - ```json - - { - "elasticSearchUrl": "http://localhost:9200", - "indexName": "my_index", - "typeName": "doc", - "username": "scooby", - "password": "doobie" - } - - ``` - -* YAML - - ```yaml - - configs: - elasticSearchUrl: "http://localhost:9200" - indexName: "my_index" - typeName: "doc" - username: "scooby" - password: "doobie" - - ``` - -### Usage - -1. Start a single node Elasticsearch cluster. - - ```bash - - $ docker run -p 9200:9200 -p 9300:9300 \ - -e "discovery.type=single-node" \ - docker.elastic.co/elasticsearch/elasticsearch:7.13.3 - - ``` - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - - Make sure the NAR file is available at `connectors/pulsar-io-elastic-search-@pulsar:version@.nar`. - -3. Start the Pulsar Elasticsearch connector in local run mode using one of the following methods. - * Use the **JSON** configuration as shown previously. - - ```bash - - $ bin/pulsar-admin sinks localrun \ - --archive connectors/pulsar-io-elastic-search-@pulsar:version@.nar \ - --tenant public \ - --namespace default \ - --name elasticsearch-test-sink \ - --sink-config '{"elasticSearchUrl":"http://localhost:9200","indexName": "my_index","username": "scooby","password": "doobie"}' \ - --inputs elasticsearch_test - - ``` - - * Use the **YAML** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin sinks localrun \ - --archive connectors/pulsar-io-elastic-search-@pulsar:version@.nar \ - --tenant public \ - --namespace default \ - --name elasticsearch-test-sink \ - --sink-config-file elasticsearch-sink.yml \ - --inputs elasticsearch_test - - ``` - -4. Publish records to the topic. - - ```bash - - $ bin/pulsar-client produce elasticsearch_test --messages "{\"a\":1}" - - ``` - -5. Check documents in Elasticsearch. 
-
-   * refresh the index
-
-   ```bash
-
-   $ curl -s http://localhost:9200/my_index/_refresh
-
-   ```
-
-   * search documents
-
-   ```bash
-
-   $ curl -s http://localhost:9200/my_index/_search
-
-   ```
-
-   You can see that the record published earlier has been successfully written into Elasticsearch.
-
-   ```json
-
-   {"took":2,"timed_out":false,"_shards":{"total":1,"successful":1,"skipped":0,"failed":0},"hits":{"total":{"value":1,"relation":"eq"},"max_score":1.0,"hits":[{"_index":"my_index","_type":"_doc","_id":"FSxemm8BLjG_iC0EeTYJ","_score":1.0,"_source":{"a":1}}]}}
-
-   ```
-
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/io-file-source.md b/site2/website/versioned_docs/version-2.10.1-deprecated/io-file-source.md
deleted file mode 100644
index ba0f467a443146..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/io-file-source.md
+++ /dev/null
@@ -1,173 +0,0 @@
----
-id: io-file-source
-title: File source connector
-sidebar_label: "File source connector"
-original_id: io-file-source
----
-
-The File source connector pulls messages from files in directories and persists the messages to Pulsar topics.
-
-## Configuration
-
-The configuration of the File source connector has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `inputDirectory` | String|true | No default value|The input directory from which to pull files. |
-| `recurse` | Boolean|false | true | Whether to pull files from subdirectories or not.|
-| `keepFile` |Boolean|false | false | If set to true, the file is not deleted after it is processed, which means the file can be picked up continually. |
-| `fileFilter` | String|false| [^\\.].* | The file whose name matches the given regular expression is picked up. |
-| `pathFilter` | String |false | NULL | If `recurse` is set to true, the subdirectory whose path matches the given regular expression is scanned. |
-| `minimumFileAge` | Integer|false | 0 | The minimum age at which a file can be processed.

Any file younger than `minimumFileAge` (according to the last modification date) is ignored. |
-| `maximumFileAge` | Long|false |Long.MAX_VALUE | The maximum age at which a file can be processed.

Any file older than `maximumFileAge` (according to the last modification date) is ignored. |
-| `minimumSize` |Integer| false |1 | The minimum size (in bytes) a file must have to be processed. |
-| `maximumSize` | Double|false |Double.MAX_VALUE| The maximum size (in bytes) a file can have to be processed. |
-| `ignoreHiddenFiles` |Boolean| false | true| Whether hidden files should be ignored or not. |
-| `pollingInterval`|Long | false | 10000L | Indicates how long to wait before performing a directory listing. |
-| `numWorkers` | Integer | false | 1 | The number of worker threads that process files.

    This allows you to process a larger number of files concurrently.

However, setting this to a value greater than 1 means the data from multiple files can be mixed in the target topic. |
-| `processedFileSuffix` | String | false | NULL | If set, the file that has been processed is not deleted but only renamed.

    This config only work when 'keepFile' property is false. | - -### Example - -Before using the File source connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "configs": { - "inputDirectory": "/Users/david", - "recurse": true, - "keepFile": true, - "fileFilter": "[^\\.].*", - "pathFilter": "*", - "minimumFileAge": 0, - "maximumFileAge": 9999999999, - "minimumSize": 1, - "maximumSize": 5000000, - "ignoreHiddenFiles": true, - "pollingInterval": 5000, - "numWorkers": 1, - "processedFileSuffix": ".processed_done" - } - } - - ``` - -* YAML - - ```yaml - - configs: - inputDirectory: "/Users/david" - recurse: true - keepFile: true - fileFilter: "[^\\.].*" - pathFilter: "*" - minimumFileAge: 0 - maximumFileAge: 9999999999 - minimumSize: 1 - maximumSize: 5000000 - ignoreHiddenFiles: true - pollingInterval: 5000 - numWorkers: 1 - processedFileSuffix: ".processed_done" - - ``` - -## Usage - -Here is an example of using the File source connecter. - -1. Pull a Pulsar image. - - ```bash - - $ docker pull apachepulsar/pulsar:{version} - - ``` - -2. Start Pulsar standalone. - - ```bash - - $ docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-standalone apachepulsar/pulsar:{version} bin/pulsar standalone - - ``` - -3. Create a configuration file _file-connector.yaml_. - - ```yaml - - configs: - inputDirectory: "/opt" - - ``` - -4. Copy the configuration file _file-connector.yaml_ to the container. - - ```bash - - $ docker cp connectors/file-connector.yaml pulsar-standalone:/pulsar/ - - ``` - -5. Download the File source connector. - - ```bash - - $ curl -O https://mirrors.tuna.tsinghua.edu.cn/apache/pulsar/pulsar-{version}/connectors/pulsar-io-file-{version}.nar - - ``` - -6. Copy it to the `connectors` folder, then restart the container. - - ```bash - - $ docker cp pulsar-io-file-{version}.nar pulsar-standalone:/pulsar/connectors/ - $ docker restart pulsar-standalone - - ``` - -7. Start the File source connector. - - ```bash - - $ docker exec -it pulsar-standalone /bin/bash - - $ ./bin/pulsar-admin sources localrun \ - --archive /pulsar/connectors/pulsar-io-file-{version}.nar \ - --name file-test \ - --destination-topic-name pulsar-file-test \ - --source-config-file /pulsar/file-connector.yaml - - ``` - -8. Start a consumer. - - ```bash - - ./bin/pulsar-client consume -s file-test -n 0 pulsar-file-test - - ``` - -9. Write the message to the file _test.txt_. - - ```bash - - echo "hello world!" > /opt/test.txt - - ``` - - The following information appears on the consumer terminal window. - - ```bash - - ----- got message ----- - hello world! - - ``` - diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/io-flume-sink.md b/site2/website/versioned_docs/version-2.10.1-deprecated/io-flume-sink.md deleted file mode 100644 index 591681315bc264..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/io-flume-sink.md +++ /dev/null @@ -1,58 +0,0 @@ ---- -id: io-flume-sink -title: Flume sink connector -sidebar_label: "Flume sink connector" -original_id: io-flume-sink ---- - -The Flume sink connector pulls messages from Pulsar topics to logs. - -## Configuration - -The configuration of the Flume sink connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -`name`|String|true|"" (empty string)|The name of the agent. 
-`confFile`|String|true|"" (empty string)|The configuration file. -`noReloadConf`|Boolean|false|false|Whether to reload configuration file if changed. -`zkConnString`|String|true|"" (empty string)|The ZooKeeper connection. -`zkBasePath`|String|true|"" (empty string)|The base path in ZooKeeper for agent configuration. - -### Example - -Before using the Flume sink connector, you need to create a configuration file through one of the following methods. - -> For more information about the `sink.conf` in the example below, see [here](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/resources/flume/sink.conf). - -* JSON - - ```json - - { - "configs": { - "name": "a1", - "confFile": "sink.conf", - "noReloadConf": "false", - "zkConnString": "", - "zkBasePath": "" - } - } - - ``` - -* YAML - - ```yaml - - configs: - name: a1 - confFile: sink.conf - noReloadConf: false - zkConnString: "" - zkBasePath: "" - - ``` - diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/io-flume-source.md b/site2/website/versioned_docs/version-2.10.1-deprecated/io-flume-source.md deleted file mode 100644 index ba384560111fd0..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/io-flume-source.md +++ /dev/null @@ -1,58 +0,0 @@ ---- -id: io-flume-source -title: Flume source connector -sidebar_label: "Flume source connector" -original_id: io-flume-source ---- - -The Flume source connector pulls messages from logs to Pulsar topics. - -## Configuration - -The configuration of the Flume source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -`name`|String|true|"" (empty string)|The name of the agent. -`confFile`|String|true|"" (empty string)|The configuration file. -`noReloadConf`|Boolean|false|false|Whether to reload configuration file if changed. -`zkConnString`|String|true|"" (empty string)|The ZooKeeper connection. -`zkBasePath`|String|true|"" (empty string)|The base path in ZooKeeper for agent configuration. - -### Example - -Before using the Flume source connector, you need to create a configuration file through one of the following methods. - -> For more information about the `source.conf` in the example below, see [here](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/resources/flume/source.conf). - -* JSON - - ```json - - { - "configs": { - "name": "a1", - "confFile": "source.conf", - "noReloadConf": "false", - "zkConnString": "", - "zkBasePath": "" - } - } - - ``` - -* YAML - - ```yaml - - configs: - name: a1 - confFile: source.conf - noReloadConf: false - zkConnString: "" - zkBasePath: "" - - ``` - diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/io-hbase-sink.md b/site2/website/versioned_docs/version-2.10.1-deprecated/io-hbase-sink.md deleted file mode 100644 index 4fcd59a2c27504..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/io-hbase-sink.md +++ /dev/null @@ -1,69 +0,0 @@ ---- -id: io-hbase-sink -title: HBase sink connector -sidebar_label: "HBase sink connector" -original_id: io-hbase-sink ---- - -The HBase sink connector pulls the messages from Pulsar topics -and persists the messages to HBase tables - -## Configuration - -The configuration of the HBase sink connector has the following properties. 
-
-### Property
-
-| Name | Type|Default | Required | Description |
-|------|---------|----------|-------------|---
-| `hbaseConfigResources` | String|None | false | HBase system configuration `hbase-site.xml` file. |
-| `zookeeperQuorum` | String|None | true | HBase system configuration `hbase.zookeeper.quorum` value. |
-| `zookeeperClientPort` | String|2181 | false | HBase system configuration `hbase.zookeeper.property.clientPort` value. |
-| `zookeeperZnodeParent` | String|/hbase | false | HBase system configuration `zookeeper.znode.parent` value. |
-| `tableName` | String|None | true | HBase table, the value is `namespace:tableName`. |
-| `rowKeyName` | String|None | true | HBase table rowkey name. |
-| `familyName` | String|None | true | HBase table column family name. |
-| `qualifierNames` |String| None | true | HBase table column qualifier names. |
-| `batchTimeMs` | Long|1000l| false | HBase table operation timeout in milliseconds. |
-| `batchSize` | int|200| false | Batch size of updates made to the HBase table. |
-
-### Example
-
-Before using the HBase sink connector, you need to create a configuration file through one of the following methods.
-
-* JSON
-
-  ```json
-
-  {
-     "configs": {
-        "hbaseConfigResources": "hbase-site.xml",
-        "zookeeperQuorum": "localhost",
-        "zookeeperClientPort": "2181",
-        "zookeeperZnodeParent": "/hbase",
-        "tableName": "pulsar_hbase",
-        "rowKeyName": "rowKey",
-        "familyName": "info",
-        "qualifierNames": ["name", "address", "age"]
-     }
-  }
-
-  ```
-
-* YAML
-
-  ```yaml
-
-  configs:
-     hbaseConfigResources: "hbase-site.xml"
-     zookeeperQuorum: "localhost"
-     zookeeperClientPort: "2181"
-     zookeeperZnodeParent: "/hbase"
-     tableName: "pulsar_hbase"
-     rowKeyName: "rowKey"
-     familyName: "info"
-     qualifierNames: [ 'name', 'address', 'age']
-
-  ```
-
-
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/io-hdfs2-sink.md b/site2/website/versioned_docs/version-2.10.1-deprecated/io-hdfs2-sink.md
deleted file mode 100644
index 54ab3f918bb55d..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/io-hdfs2-sink.md
+++ /dev/null
@@ -1,66 +0,0 @@
----
-id: io-hdfs2-sink
-title: HDFS2 sink connector
-sidebar_label: "HDFS2 sink connector"
-original_id: io-hdfs2-sink
----
-
-The HDFS2 sink connector pulls the messages from Pulsar topics
-and persists the messages to HDFS files.
-
-## Configuration
-
-The configuration of the HDFS2 sink connector has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `hdfsConfigResources` | String|true| None | A file or a comma-separated list of files containing the Hadoop file system configuration.

    **Example**
    'core-site.xml'
'hdfs-site.xml' |
-| `directory` | String | true | None|The HDFS directory from which files are read or to which files are written. |
-| `encoding` | String |false |None |The character encoding for the files.

    **Example**
    UTF-8
    ASCII | -| `compression` | Compression |false |None |The compression code used to compress or de-compress the files on HDFS.

    Below are the available options:
* BZIP2
* DEFLATE
* GZIP
* LZ4
* SNAPPY |
-| `kerberosUserPrincipal` |String| false| None|The principal account of the Kerberos user used for authentication. |
-| `keytab` | String|false|None| The full pathname of the Kerberos keytab file used for authentication. |
-| `filenamePrefix` |String| true, if `compression` is set to `None`. | None |The prefix of the files created inside the HDFS directory.

    **Example**
The value of topicA results in files named topicA-. |
-| `fileExtension` | String| true | None | The extension added to the files written to HDFS.

    **Example**
    '.txt'
    '.seq' | -| `separator` | char|false |None |The character used to separate records in a text file.

If no value is provided, the contents of all records are concatenated together into one continuous byte array. |
-| `syncInterval` | long| false |0| The interval in milliseconds between calls to flush data to HDFS disk. |
-| `maxPendingRecords` |int| false|Integer.MAX_VALUE | The maximum number of records held in memory before acking.

Setting this property to 1 means every record is sent to disk before the record is acked.

Setting this property to a higher value allows buffering records before flushing them to disk.
-| `subdirectoryPattern` | String | false | None | A subdirectory associated with the creation time of the sink.
    The pattern is the formatted pattern of `directory`'s subdirectory.

    See [DateTimeFormatter](https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html) for pattern's syntax. | - -### Example - -Before using the HDFS2 sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "configs": { - "hdfsConfigResources": "core-site.xml", - "directory": "/foo/bar", - "filenamePrefix": "prefix", - "fileExtension": ".log", - "compression": "SNAPPY", - "subdirectoryPattern": "yyyy-MM-dd" - } - } - - ``` - -* YAML - - ```yaml - - configs: - hdfsConfigResources: "core-site.xml" - directory: "/foo/bar" - filenamePrefix: "prefix" - fileExtension: ".log" - compression: "SNAPPY" - subdirectoryPattern: "yyyy-MM-dd" - - ``` - diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/io-hdfs3-sink.md b/site2/website/versioned_docs/version-2.10.1-deprecated/io-hdfs3-sink.md deleted file mode 100644 index 91f06153d5d771..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/io-hdfs3-sink.md +++ /dev/null @@ -1,61 +0,0 @@ ---- -id: io-hdfs3-sink -title: HDFS3 sink connector -sidebar_label: "HDFS3 sink connector" -original_id: io-hdfs3-sink ---- - -The HDFS3 sink connector pulls the messages from Pulsar topics -and persists the messages to HDFS files. - -## Configuration - -The configuration of the HDFS3 sink connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `hdfsConfigResources` | String|true| None | A file or a comma-separated list containing the Hadoop file system configuration.

    **Example**
    'core-site.xml'
'hdfs-site.xml' |
-| `directory` | String | true | None|The HDFS directory from which files are read or to which files are written. |
-| `encoding` | String |false |None |The character encoding for the files.

    **Example**
    UTF-8
    ASCII | -| `compression` | Compression |false |None |The compression code used to compress or de-compress the files on HDFS.

    Below are the available options:
* BZIP2
* DEFLATE
* GZIP
* LZ4
* SNAPPY |
-| `kerberosUserPrincipal` |String| false| None|The principal account of the Kerberos user used for authentication. |
-| `keytab` | String|false|None| The full pathname of the Kerberos keytab file used for authentication. |
-| `filenamePrefix` |String| false |None |The prefix of the files created inside the HDFS directory.

    **Example**
The value of topicA results in files named topicA-. |
-| `fileExtension` | String| false | None| The extension added to the files written to HDFS.

    **Example**
    '.txt'
    '.seq' | -| `separator` | char|false |None |The character used to separate records in a text file.

If no value is provided, the contents of all records are concatenated together into one continuous byte array. |
-| `syncInterval` | long| false |0| The interval in milliseconds between calls to flush data to HDFS disk. |
-| `maxPendingRecords` |int| false|Integer.MAX_VALUE | The maximum number of records held in memory before acking.

Setting this property to 1 means every record is sent to disk before the record is acked.

    Setting this property to a higher value allows buffering records before flushing them to disk. - -### Example - -Before using the HDFS3 sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "configs": { - "hdfsConfigResources": "core-site.xml", - "directory": "/foo/bar", - "filenamePrefix": "prefix", - "compression": "SNAPPY" - } - } - - ``` - -* YAML - - ```yaml - - configs: - hdfsConfigResources: "core-site.xml" - directory: "/foo/bar" - filenamePrefix: "prefix" - compression: "SNAPPY" - - ``` - diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/io-influxdb-sink.md b/site2/website/versioned_docs/version-2.10.1-deprecated/io-influxdb-sink.md deleted file mode 100644 index 8492aa482b50ae..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/io-influxdb-sink.md +++ /dev/null @@ -1,122 +0,0 @@ ---- -id: io-influxdb-sink -title: InfluxDB sink connector -sidebar_label: "InfluxDB sink connector" -original_id: io-influxdb-sink ---- - -The InfluxDB sink connector pulls messages from Pulsar topics -and persists the messages to InfluxDB. - -The InfluxDB sink provides different configurations for InfluxDBv1 and v2 respectively. - -## Configuration - -The configuration of the InfluxDB sink connector has the following properties. - -### Property -#### InfluxDBv2 -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `influxdbUrl` |String| true|" " (empty string) | The URL of the InfluxDB instance. | -| `token` | String|true| " " (empty string) |The authentication token used to authenticate to InfluxDB. | -| `organization` | String| true|" " (empty string) | The InfluxDB organization to write to. | -| `bucket` |String| true | " " (empty string)| The InfluxDB bucket to write to. | -| `precision` | String|false| ns | The timestamp precision for writing data to InfluxDB.

    Below are the available options:
* ns
* us
* ms
* s |
-| `logLevel` | String|false| NONE|The log level for InfluxDB request and response.

    Below are the available options:
* NONE
* BASIC
* HEADERS
* FULL |
-| `gzipEnable` | boolean|false | false | Whether to enable gzip or not. |
-| `batchTimeMs` |long|false| 1000L | The InfluxDB operation time in milliseconds. |
-| `batchSize` | int|false|200| The batch size of writing to InfluxDB. |
-
-#### InfluxDBv1
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `influxdbUrl` |String| true|" " (empty string) | The URL of the InfluxDB instance. |
-| `username` | String|false| " " (empty string) |The username used to authenticate to InfluxDB. |
-| `password` | String| false|" " (empty string) | The password used to authenticate to InfluxDB. |
-| `database` |String| true | " " (empty string)| The InfluxDB database to which messages are written. |
-| `consistencyLevel` | String|false|ONE | The consistency level for writing data to InfluxDB.

    Below are the available options:
* ALL
* ANY
* ONE
* QUORUM |
-| `logLevel` | String|false| NONE|The log level for InfluxDB request and response.

    Below are the available options:
* NONE
* BASIC
* HEADERS
* FULL |
-| `retentionPolicy` | String|false| autogen| The retention policy for InfluxDB. |
-| `gzipEnable` | boolean|false | false | Whether to enable gzip or not. |
-| `batchTimeMs` |long|false| 1000L | The InfluxDB operation time in milliseconds. |
-| `batchSize` | int|false|200| The batch size of writing to InfluxDB. |
-
-### Example
-Before using the InfluxDB sink connector, you need to create a configuration file through one of the following methods.
-#### InfluxDBv2
-
-* JSON
-
-  ```json
-
-  {
-      "configs": {
-          "influxdbUrl": "http://localhost:9999",
-          "organization": "example-org",
-          "bucket": "example-bucket",
-          "token": "xxxx",
-          "precision": "ns",
-          "logLevel": "NONE",
-          "gzipEnable": false,
-          "batchTimeMs": 1000,
-          "batchSize": 100
-      }
-  }
-
-  ```
-
-* YAML
-
-  ```yaml
-
-  configs:
-      influxdbUrl: "http://localhost:9999"
-      organization: "example-org"
-      bucket: "example-bucket"
-      token: "xxxx"
-      precision: "ns"
-      logLevel: "NONE"
-      gzipEnable: false
-      batchTimeMs: 1000
-      batchSize: 100
-
-  ```
-
-#### InfluxDBv1
-
-* JSON
-
-  ```json
-
-  {
-      "configs": {
-          "influxdbUrl": "http://localhost:8086",
-          "database": "test_db",
-          "consistencyLevel": "ONE",
-          "logLevel": "NONE",
-          "retentionPolicy": "autogen",
-          "gzipEnable": false,
-          "batchTimeMs": 1000,
-          "batchSize": 100
-      }
-  }
-
-  ```
-
-* YAML
-
-  ```yaml
-
-  configs:
-      influxdbUrl: "http://localhost:8086"
-      database: "test_db"
-      consistencyLevel: "ONE"
-      logLevel: "NONE"
-      retentionPolicy: "autogen"
-      gzipEnable: false
-      batchTimeMs: 1000
-      batchSize: 100
-
-  ```
-
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/io-jdbc-sink.md b/site2/website/versioned_docs/version-2.10.1-deprecated/io-jdbc-sink.md
deleted file mode 100644
index fe03d4a1e441eb..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/io-jdbc-sink.md
+++ /dev/null
@@ -1,165 +0,0 @@
----
-id: io-jdbc-sink
-title: JDBC sink connector
-sidebar_label: "JDBC sink connector"
-original_id: io-jdbc-sink
----
-
-The JDBC sink connectors allow pulling messages from Pulsar topics
-and persisting the messages to ClickHouse, MariaDB, PostgreSQL, and SQLite.
-
-> Currently, INSERT, DELETE and UPDATE operations are supported.
-
-## Configuration
-
-The configuration of all JDBC sink connectors has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `userName` | String|false | " " (empty string) | The username used to connect to the database specified by `jdbcUrl`.

    **Note: `userName` is case-sensitive.**| -| `password` | String|false | " " (empty string)| The password used to connect to the database specified by `jdbcUrl`.

    **Note: `password` is case-sensitive.**| -| `jdbcUrl` | String|true | " " (empty string) | The JDBC URL of the database to which the connector connects. | -| `tableName` | String|true | " " (empty string) | The name of the table to which the connector writes. | -| `nonKey` | String|false | " " (empty string) | A comma-separated list contains the fields used in updating events. | -| `key` | String|false | " " (empty string) | A comma-separated list contains the fields used in `where` condition of updating and deleting events. | -| `timeoutMs` | int| false|500 | The JDBC operation timeout in milliseconds. | -| `batchSize` | int|false | 200 | The batch size of updates made to the database. | - -### Example for ClickHouse - -* JSON - - ```json - - { - "configs": { - "userName": "clickhouse", - "password": "password", - "jdbcUrl": "jdbc:clickhouse://localhost:8123/pulsar_clickhouse_jdbc_sink", - "tableName": "pulsar_clickhouse_jdbc_sink" - } - } - - ``` - -* YAML - - ```yaml - - tenant: "public" - namespace: "default" - name: "jdbc-clickhouse-sink" - topicName: "persistent://public/default/jdbc-clickhouse-topic" - sinkType: "jdbc-clickhouse" - configs: - userName: "clickhouse" - password: "password" - jdbcUrl: "jdbc:clickhouse://localhost:8123/pulsar_clickhouse_jdbc_sink" - tableName: "pulsar_clickhouse_jdbc_sink" - - ``` - -### Example for MariaDB - -* JSON - - ```json - - { - "configs": { - "userName": "mariadb", - "password": "password", - "jdbcUrl": "jdbc:mariadb://localhost:3306/pulsar_mariadb_jdbc_sink", - "tableName": "pulsar_mariadb_jdbc_sink" - } - } - - ``` - -* YAML - - ```yaml - - tenant: "public" - namespace: "default" - name: "jdbc-mariadb-sink" - topicName: "persistent://public/default/jdbc-mariadb-topic" - sinkType: "jdbc-mariadb" - configs: - userName: "mariadb" - password: "password" - jdbcUrl: "jdbc:mariadb://localhost:3306/pulsar_mariadb_jdbc_sink" - tableName: "pulsar_mariadb_jdbc_sink" - - ``` - -### Example for PostgreSQL - -Before using the JDBC PostgreSQL sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "configs": { - "userName": "postgres", - "password": "password", - "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink", - "tableName": "pulsar_postgres_jdbc_sink" - } - } - - ``` - -* YAML - - ```yaml - - tenant: "public" - namespace: "default" - name: "jdbc-postgres-sink" - topicName: "persistent://public/default/jdbc-postgres-topic" - sinkType: "jdbc-postgres" - configs: - userName: "postgres" - password: "password" - jdbcUrl: "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink" - tableName: "pulsar_postgres_jdbc_sink" - - ``` - -For more information on **how to use this JDBC sink connector**, see [connect Pulsar to PostgreSQL](io-quickstart.md#connect-pulsar-to-postgresql). 
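
Once the YAML file above is in place, a sink like the PostgreSQL example can be deployed with the `pulsar-admin sinks create` command. The sketch below is illustrative only: the NAR file name, the local configuration file name, and the table schema are assumptions, not part of the documented quickstart.

```bash
# Create the target table first; the columns must match the record fields
# (the id/name schema here is a placeholder -- adjust to your own records).
psql -h localhost -U postgres -d pulsar_postgres_jdbc_sink \
  -c "CREATE TABLE IF NOT EXISTS pulsar_postgres_jdbc_sink (id serial PRIMARY KEY, name text);"

# Deploy the sink using the YAML configuration shown above
# (NAR path and file name are placeholders).
bin/pulsar-admin sinks create \
  --archive connectors/pulsar-io-jdbc-postgres-@pulsar:version@.nar \
  --inputs persistent://public/default/jdbc-postgres-topic \
  --name jdbc-postgres-sink \
  --sink-config-file jdbc-postgres-sink.yaml \
  --parallelism 1
```
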
- -### Example for SQLite - -* JSON - - ```json - - { - "configs": { - "jdbcUrl": "jdbc:sqlite:db.sqlite", - "tableName": "pulsar_sqlite_jdbc_sink" - } - } - - ``` - -* YAML - - ```yaml - - tenant: "public" - namespace: "default" - name: "jdbc-sqlite-sink" - topicName: "persistent://public/default/jdbc-sqlite-topic" - sinkType: "jdbc-sqlite" - configs: - jdbcUrl: "jdbc:sqlite:db.sqlite" - tableName: "pulsar_sqlite_jdbc_sink" - - ``` - diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/io-kafka-sink.md b/site2/website/versioned_docs/version-2.10.1-deprecated/io-kafka-sink.md deleted file mode 100644 index ce8967e0461073..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/io-kafka-sink.md +++ /dev/null @@ -1,73 +0,0 @@ ---- -id: io-kafka-sink -title: Kafka sink connector -sidebar_label: "Kafka sink connector" -original_id: io-kafka-sink ---- - -The Kafka sink connector pulls messages from Pulsar topics and persists the messages -to Kafka topics. - -This guide explains how to configure and use the Kafka sink connector. - -## Configuration - -The configuration of the Kafka sink connector has the following parameters. - -### Property - -| Name | Type| Required | Default | Description -|------|----------|---------|-------------|-------------| -| `bootstrapServers` |String| true | " " (empty string) | A comma-separated list of host and port pairs for establishing the initial connection to the Kafka cluster. | -|`acks`|String|true|" " (empty string) |The number of acknowledgments that the producer requires the leader to receive before a request completes.
This controls the durability of the sent records.
-|`batchSize`|long|false|16384L|The batch size that a Kafka producer attempts to accumulate before sending records to the brokers.
-|`maxRequestSize`|long|false|1048576L|The maximum size of a Kafka request in bytes.
-|`topic`|String|true|" " (empty string) |The Kafka topic which receives messages from Pulsar.
-| `keyDeserializationClass` | String|false | org.apache.kafka.common.serialization.StringSerializer | The serializer class for Kafka producers to serialize keys.
-| `valueDeserializationClass` | String|false | org.apache.kafka.common.serialization.ByteArraySerializer | The serializer class for Kafka producers to serialize values.

    The serializer is set by a specific implementation of [`KafkaAbstractSink`](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSink.java). -|`producerConfigProperties`|Map|false|" " (empty string)|The producer configuration properties to be passed to producers.

**Note: other properties specified in the connector configuration file take precedence over this configuration**.
-
-
-### Example
-
-Before using the Kafka sink connector, you need to create a configuration file through one of the following methods.
-
-* JSON
-
-  ```json
-
-  {
-   "configs": {
-      "bootstrapServers": "localhost:6667",
-      "topic": "test",
-      "acks": "1",
-      "batchSize": "16384",
-      "maxRequestSize": "1048576",
-      "producerConfigProperties": {
-         "client.id": "test-pulsar-producer",
-         "security.protocol": "SASL_PLAINTEXT",
-         "sasl.mechanism": "GSSAPI",
-         "sasl.kerberos.service.name": "kafka",
-         "acks": "all"
-      }
-   }
-  }
-
-  ```
-
-* YAML
-
-  ```yaml
-
-  configs:
-    bootstrapServers: "localhost:6667"
-    topic: "test"
-    acks: "1"
-    batchSize: "16384"
-    maxRequestSize: "1048576"
-    producerConfigProperties:
-      client.id: "test-pulsar-producer"
-      security.protocol: "SASL_PLAINTEXT"
-      sasl.mechanism: "GSSAPI"
-      sasl.kerberos.service.name: "kafka"
-      acks: "all"
-
-  ```
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/io-kafka-source.md b/site2/website/versioned_docs/version-2.10.1-deprecated/io-kafka-source.md
deleted file mode 100644
index dd6000aa0bd35e..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/io-kafka-source.md
+++ /dev/null
@@ -1,240 +0,0 @@
----
-id: io-kafka-source
-title: Kafka source connector
-sidebar_label: "Kafka source connector"
-original_id: io-kafka-source
----
-
-The Kafka source connector pulls messages from Kafka topics and persists the messages
-to Pulsar topics.
-
-This guide explains how to configure and use the Kafka source connector.
-
-## Configuration
-
-The configuration of the Kafka source connector has the following properties.
-
-### Property
-
-| Name | Type| Required | Default | Description
-|------|----------|---------|-------------|-------------|
-| `bootstrapServers` |String| true | " " (empty string) | A comma-separated list of host and port pairs for establishing the initial connection to the Kafka cluster. |
-| `groupId` |String| true | " " (empty string) | A unique string that identifies the group of consumer processes to which this consumer belongs. |
-| `fetchMinBytes` | long|false | 1 | The minimum number of bytes expected for each fetch response. |
-| `autoCommitEnabled` | boolean |false | true | If set to true, the consumer's offset is periodically committed in the background.

If the process fails, this committed offset is used as the position from which a new consumer begins. |
-| `autoCommitIntervalMs` | long|false | 5000 | The frequency in milliseconds at which the consumer offsets are auto-committed to Kafka if `autoCommitEnabled` is set to true. |
-| `heartbeatIntervalMs` | long| false | 3000 | The interval between heartbeats to the consumer coordinator when using Kafka's group management facilities.

    **Note: `heartbeatIntervalMs` must be smaller than `sessionTimeoutMs`**.| -| `sessionTimeoutMs` | long|false | 30000 | The timeout used to detect consumer failures when using Kafka's group management facility. | -| `topic` | String|true | " " (empty string)| The Kafka topic that sends messages to Pulsar. | -| `consumerConfigProperties` | Map| false | " " (empty string) | The consumer configuration properties to be passed to consumers.

    **Note: other properties specified in the connector configuration file take precedence over this configuration**. | -| `keyDeserializationClass` | String|false | org.apache.kafka.common.serialization.StringDeserializer | The deserializer class for Kafka consumers to deserialize keys.
    The deserializer is set by a specific implementation of [`KafkaAbstractSource`](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSource.java). -| `valueDeserializationClass` | String|false | org.apache.kafka.common.serialization.ByteArrayDeserializer | The deserializer class for Kafka consumers to deserialize values. -| `autoOffsetReset` | String | false | earliest | The default offset reset policy. | - -### Schema Management - -This Kafka source connector applies the schema to the topic depending on the data type that is present on the Kafka topic. -You can detect the data type from the `keyDeserializationClass` and `valueDeserializationClass` configuration parameters. - -If the `valueDeserializationClass` is `org.apache.kafka.common.serialization.StringDeserializer`, you can set Schema.STRING() as schema type on the Pulsar topic. - -If `valueDeserializationClass` is `io.confluent.kafka.serializers.KafkaAvroDeserializer`, Pulsar downloads the AVRO schema from the Confluent Schema Registry® -and sets it properly on the Pulsar topic. - -In this case, you need to set `schema.registry.url` inside of the `consumerConfigProperties` configuration entry -of the source. - -If `keyDeserializationClass` is not `org.apache.kafka.common.serialization.StringDeserializer`, it means -that you do not have a String as key and the Kafka Source uses the KeyValue schema type with the SEPARATED encoding. - -Pulsar supports AVRO format for keys. - -In this case, you can have a Pulsar topic with the following properties: -- Schema: KeyValue schema with SEPARATED encoding -- Key: the content of key of the Kafka message (base64 encoded) -- Value: the content of value of the Kafka message -- KeySchema: the schema detected from `keyDeserializationClass` -- ValueSchema: the schema detected from `valueDeserializationClass` - -Topic compaction and partition routing use the Pulsar key, that contains the Kafka key, and so they are driven by the same value that you have on Kafka. - -When you consume data from Pulsar topics, you can use the `KeyValue` schema. In this way, you can decode the data properly. -If you want to access the raw key, you can use the `Message#getKeyBytes()` API. - -### Example - -Before using the Kafka source connector, you need to create a configuration file through one of the following methods. - -- JSON - - ```json - - { - "bootstrapServers": "pulsar-kafka:9092", - "groupId": "test-pulsar-io", - "topic": "my-topic", - "sessionTimeoutMs": "10000", - "autoCommitEnabled": false - } - - ``` - -- YAML - - ```yaml - - configs: - bootstrapServers: "pulsar-kafka:9092" - groupId: "test-pulsar-io" - topic: "my-topic" - sessionTimeoutMs: "10000" - autoCommitEnabled: false - - ``` - -## Usage - -You can make the Kafka source connector as a Pulsar built-in connector and use it on a standalone cluster or an on-premises cluster. - -### Standalone cluster - -This example describes how to use the Kafka source connector to feed data from Kafka and write data to Pulsar topics in the standalone mode. - -#### Prerequisites - -- Install [Docker](https://docs.docker.com/get-docker/)(Community Edition). - -#### Steps - -1. Download and start the Confluent Platform. - -For details, see the [documentation](https://docs.confluent.io/platform/current/quickstart/ce-docker-quickstart.html#step-1-download-and-start-cp) to install the Kafka service locally. - -2. Pull a Pulsar image and start Pulsar in standalone mode. 
- - ```bash - - docker pull apachepulsar/pulsar:latest - - docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-kafka-standalone apachepulsar/pulsar:latest bin/pulsar standalone - - ``` - -3. Create a producer file _kafka-producer.py_. - - ```python - - from kafka import KafkaProducer - producer = KafkaProducer(bootstrap_servers='localhost:9092') - future = producer.send('my-topic', b'hello world') - future.get() - - ``` - -4. Create a consumer file _pulsar-client.py_. - - ```python - - import pulsar - - client = pulsar.Client('pulsar://localhost:6650') - consumer = client.subscribe('my-topic', subscription_name='my-aa') - - while True: - msg = consumer.receive() - print msg - print dir(msg) - print("Received message: '%s'" % msg.data()) - consumer.acknowledge(msg) - - client.close() - - ``` - -5. Copy the following files to Pulsar. - - ```bash - - docker cp pulsar-io-kafka.nar pulsar-kafka-standalone:/pulsar - docker cp kafkaSourceConfig.yaml pulsar-kafka-standalone:/pulsar/conf - - ``` - -6. Open a new terminal window and start the Kafka source connector in local run mode. - - ```bash - - docker exec -it pulsar-kafka-standalone /bin/bash - - ./bin/pulsar-admin source localrun \ - --archive ./pulsar-io-kafka.nar \ - --tenant public \ - --namespace default \ - --name kafka \ - --destination-topic-name my-topic \ - --source-config-file ./conf/kafkaSourceConfig.yaml \ - --parallelism 1 - - ``` - -7. Open a new terminal window and run the Kafka producer locally. - - ```bash - - python3 kafka-producer.py - - ``` - -8. Open a new terminal window and run the Pulsar consumer locally. - - ```bash - - python3 pulsar-client.py - - ``` - -The following information appears on the consumer terminal window. - - ```bash - - Received message: 'hello world' - - ``` - -### On-premises cluster - -This example explains how to create a Kafka source connector in an on-premises cluster. - -1. Copy the NAR package of the Kafka connector to the Pulsar connectors directory. - - ``` - - cp pulsar-io-kafka-{{connector:version}}.nar $PULSAR_HOME/connectors/pulsar-io-kafka-{{connector:version}}.nar - - ``` - -2. Reload all [built-in connectors](io-connectors.md). - - ``` - - PULSAR_HOME/bin/pulsar-admin sources reload - - ``` - -3. Check whether the Kafka source connector is available on the list or not. - - ``` - - PULSAR_HOME/bin/pulsar-admin sources available-sources - - ``` - -4. Create a Kafka source connector on a Pulsar cluster using the [`pulsar-admin sources create`](/tools/pulsar-admin/) command. - - ``` - - PULSAR_HOME/bin/pulsar-admin sources create \ - --source-config-file - - ``` - diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/io-kinesis-sink.md b/site2/website/versioned_docs/version-2.10.1-deprecated/io-kinesis-sink.md deleted file mode 100644 index 810068958d13f3..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/io-kinesis-sink.md +++ /dev/null @@ -1,82 +0,0 @@ ---- -id: io-kinesis-sink -title: Kinesis sink connector -sidebar_label: "Kinesis sink connector" -original_id: io-kinesis-sink ---- - -The Kinesis sink connector pulls data from Pulsar and persists data into Amazon Kinesis. - -## Configuration - -The configuration of the Kinesis sink connector has the following property. 
- -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -`messageFormat`|MessageFormat|true|ONLY_RAW_PAYLOAD|Message format in which Kinesis sink converts Pulsar messages and publishes to Kinesis streams.

    Below are the available options:

* `ONLY_RAW_PAYLOAD`: Kinesis sink directly publishes the Pulsar message payload as a message into the configured Kinesis stream.

* `FULL_MESSAGE_IN_JSON`: Kinesis sink creates a JSON payload with the Pulsar message payload, properties and encryptionCtx, and publishes the JSON payload into the configured Kinesis stream.

* `FULL_MESSAGE_IN_FB`: Kinesis sink creates a flatbuffer serialized payload with the Pulsar message payload, properties and encryptionCtx, and publishes the flatbuffer payload into the configured Kinesis stream.

* `FULL_MESSAGE_IN_JSON_EXPAND_VALUE`: Kinesis sink sends a JSON structure containing the record topic name, key, payload, properties and event time. The record schema is used to convert the value to JSON.
-`retainOrdering`|boolean|false|false|Whether the Pulsar connector retains ordering when moving messages from Pulsar to Kinesis.
-`awsEndpoint`|String|false|" " (empty string)|The Kinesis end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
-`awsRegion`|String|false|" " (empty string)|The AWS region.

    **Example**
    us-west-1, us-west-2 -`awsKinesisStreamName`|String|true|" " (empty string)|The Kinesis stream name. -`awsCredentialPluginName`|String|false|" " (empty string)|The fully-qualified class name of implementation of {@inject: github:AwsCredentialProviderPlugin:/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java}.

    It is a factory class which creates an AWSCredentialsProvider that is used by Kinesis sink.

    If it is empty, the Kinesis sink creates a default AWSCredentialsProvider which accepts json-map of credentials in `awsCredentialPluginParam`. -`awsCredentialPluginParam`|String |false|" " (empty string)|The JSON parameter to initialize `awsCredentialsProviderPlugin`. - -### Built-in plugins - -The following are built-in `AwsCredentialProviderPlugin` plugins: - -* `org.apache.pulsar.io.aws.AwsDefaultProviderChainPlugin` - - This plugin takes no configuration, it uses the default AWS provider chain. - - For more information, see [AWS documentation](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html#credentials-default). - -* `org.apache.pulsar.io.aws.STSAssumeRoleProviderPlugin` - - This plugin takes a configuration (via the `awsCredentialPluginParam`) that describes a role to assume when running the KCL. - - This configuration takes the form of a small json document like: - - ```json - - {"roleArn": "arn...", "roleSessionName": "name"} - - ``` - -### Example - -Before using the Kinesis sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "configs": { - "awsEndpoint": "some.endpoint.aws", - "awsRegion": "us-east-1", - "awsKinesisStreamName": "my-stream", - "awsCredentialPluginParam": "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}", - "messageFormat": "ONLY_RAW_PAYLOAD", - "retainOrdering": "true" - } - } - - ``` - -* YAML - - ```yaml - - configs: - awsEndpoint: "some.endpoint.aws" - awsRegion: "us-east-1" - awsKinesisStreamName: "my-stream" - awsCredentialPluginParam: "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}" - messageFormat: "ONLY_RAW_PAYLOAD" - retainOrdering: "true" - - ``` - diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/io-kinesis-source.md b/site2/website/versioned_docs/version-2.10.1-deprecated/io-kinesis-source.md deleted file mode 100644 index 1b45e264680e61..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/io-kinesis-source.md +++ /dev/null @@ -1,83 +0,0 @@ ---- -id: io-kinesis-source -title: Kinesis source connector -sidebar_label: "Kinesis source connector" -original_id: io-kinesis-source ---- - -The Kinesis source connector pulls data from Amazon Kinesis and persists data into Pulsar. - -This connector uses the [Kinesis Consumer Library](https://github.com/awslabs/amazon-kinesis-client) (KCL) to do the actual consuming of messages. The KCL uses DynamoDB to track state for consumers. - -> Note: currently, the Kinesis source connector only supports raw messages. If you use KMS encrypted messages, the encrypted messages are sent to downstream. This connector will support decrypting messages in the future release. - - -## Configuration - -The configuration of the Kinesis source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -`initialPositionInStream`|InitialPositionInStream|false|LATEST|The position where the connector starts from.

    Below are the available options:

* `AT_TIMESTAMP`: start from the record at or after the specified timestamp.

* `LATEST`: start after the most recent data record.

* `TRIM_HORIZON`: start from the oldest available data record.
-`startAtTime`|Date|false|" " (empty string)|If `initialPositionInStream` is set to `AT_TIMESTAMP`, this specifies the point in time to start consumption.
-`applicationName`|String|false|Pulsar IO connector|The name of the Amazon Kinesis application.

By default, the application name is included in the user agent string used to make AWS requests. This can assist with troubleshooting, for example, by distinguishing requests made by separate connector instances.
-`checkpointInterval`|long|false|60000|The frequency of the Kinesis stream checkpoint in milliseconds.
-`backoffTime`|long|false|3000|The amount of time in milliseconds to delay between requests when the connector encounters a throttling exception from AWS Kinesis.
-`numRetries`|int|false|3|The number of re-attempts when the connector encounters an exception while trying to set a checkpoint.
-`receiveQueueSize`|int|false|1000|The maximum number of AWS records that can be buffered inside the connector.

Once the `receiveQueueSize` is reached, the connector does not consume any messages from Kinesis until some messages in the queue are successfully consumed.
-`dynamoEndpoint`|String|false|" " (empty string)|The Dynamo end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
-`cloudwatchEndpoint`|String|false|" " (empty string)|The Cloudwatch end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
-`useEnhancedFanOut`|boolean|false|true|If set to true, it uses Kinesis enhanced fan-out.

If set to false, it uses polling.
-`awsEndpoint`|String|false|" " (empty string)|The Kinesis end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
-`awsRegion`|String|false|" " (empty string)|The AWS region.

    **Example**
    us-west-1, us-west-2 -`awsKinesisStreamName`|String|true|" " (empty string)|The Kinesis stream name. -`awsCredentialPluginName`|String|false|" " (empty string)|The fully-qualified class name of implementation of {@inject: github:AwsCredentialProviderPlugin:/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java}.

`awsCredentialProviderPlugin` has the following built-in plugins:

* `org.apache.pulsar.io.kinesis.AwsDefaultProviderChainPlugin`:
    this plugin uses the default AWS provider chain.
    For more information, see [using the default credential provider chain](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html#credentials-default).

* `org.apache.pulsar.io.kinesis.STSAssumeRoleProviderPlugin`:
    this plugin takes a configuration via the `awsCredentialPluginParam` that describes a role to assume when running the KCL.
    **JSON configuration example**
    `{"roleArn": "arn...", "roleSessionName": "name"}`

`awsCredentialPluginName` is a factory class which creates an AWSCredentialsProvider that is used by the Kinesis source.

If `awsCredentialPluginName` is set to empty, the Kinesis source creates a default AWSCredentialsProvider which accepts a JSON map of credentials in `awsCredentialPluginParam`.
-`awsCredentialPluginParam`|String |false|" " (empty string)|The JSON parameter to initialize `awsCredentialsProviderPlugin`.
-
-### Example
-
-Before using the Kinesis source connector, you need to create a configuration file through one of the following methods.
-
-* JSON
-
-  ```json
-
-  {
-     "configs": {
-        "awsEndpoint": "https://some.endpoint.aws",
-        "awsRegion": "us-east-1",
-        "awsKinesisStreamName": "my-stream",
-        "awsCredentialPluginParam": "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}",
-        "applicationName": "My test application",
-        "checkpointInterval": "30000",
-        "backoffTime": "4000",
-        "numRetries": "3",
-        "receiveQueueSize": 2000,
-        "initialPositionInStream": "TRIM_HORIZON",
-        "startAtTime": "2019-03-05T19:28:58.000Z"
-     }
-  }
-
-  ```
-
-* YAML
-
-  ```yaml
-
-  configs:
-     awsEndpoint: "https://some.endpoint.aws"
-     awsRegion: "us-east-1"
-     awsKinesisStreamName: "my-stream"
-     awsCredentialPluginParam: "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}"
-     applicationName: "My test application"
-     checkpointInterval: 30000
-     backoffTime: 4000
-     numRetries: 3
-     receiveQueueSize: 2000
-     initialPositionInStream: "TRIM_HORIZON"
-     startAtTime: "2019-03-05T19:28:58.000Z"
-
-  ```
-
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/io-mongo-sink.md b/site2/website/versioned_docs/version-2.10.1-deprecated/io-mongo-sink.md
deleted file mode 100644
index 7fc77ec80cc680..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/io-mongo-sink.md
+++ /dev/null
@@ -1,58 +0,0 @@
----
-id: io-mongo-sink
-title: MongoDB sink connector
-sidebar_label: "MongoDB sink connector"
-original_id: io-mongo-sink
----
-
-The MongoDB sink connector pulls messages from Pulsar topics
-and persists the messages to collections.
-
-## Configuration
-
-The configuration of the MongoDB sink connector has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `mongoUri` | String| true| " " (empty string) | The MongoDB URI to which the connector connects.

For more information, see [connection string URI format](https://docs.mongodb.com/manual/reference/connection-string/). |
-| `database` | String| true| " " (empty string)| The database name to which the collection belongs. |
-| `collection` | String| true| " " (empty string)| The collection name to which the connector writes messages. |
-| `batchSize` | int|false|100 | The batch size of writing messages to collections. |
-| `batchTimeMs` |long|false|1000| The batch operation interval in milliseconds. |
-
-
-### Example
-
-Before using the Mongo sink connector, you need to create a configuration file through one of the following methods.
-
-* JSON
-
-  ```json
-
-  {
-     "configs": {
-        "mongoUri": "mongodb://localhost:27017",
-        "database": "pulsar",
-        "collection": "messages",
-        "batchSize": "2",
-        "batchTimeMs": "500"
-     }
-  }
-
-  ```
-
-* YAML
-
-  ```yaml
-
-  configs:
-    mongoUri: "mongodb://localhost:27017"
-    database: "pulsar"
-    collection: "messages"
-    batchSize: 2
-    batchTimeMs: 500
-
-  ```
-
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/io-netty-source.md b/site2/website/versioned_docs/version-2.10.1-deprecated/io-netty-source.md
deleted file mode 100644
index 2caedf2bce69bc..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/io-netty-source.md
+++ /dev/null
@@ -1,243 +0,0 @@
----
-id: io-netty-source
-title: Netty source connector
-sidebar_label: "Netty source connector"
-original_id: io-netty-source
----
-
-The Netty source connector opens a port that accepts incoming data via the configured network protocol
-and publishes it to user-defined Pulsar topics.
-
-This connector can be used in a containerized (for example, k8s) deployment. Otherwise, if the connector is running in process or thread mode, the instances may conflict over listening ports.
-
-## Configuration
-
-The configuration of the Netty source connector has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `type` |String| true |tcp | The network protocol over which data is transmitted to netty.

    Below are the available options:
  383. tcp
  384. http
  385. udp
  386. | -| `host` | String|true | 127.0.0.1 | The host name or address on which the source instance listen. | -| `port` | int|true | 10999 | The port on which the source instance listen. | -| `numberOfThreads` |int| true |1 | The number of threads of Netty TCP server to accept incoming connections and handle the traffic of accepted connections. | - - -### Example - -Before using the Netty source connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "configs": { - "type": "tcp", - "host": "127.0.0.1", - "port": "10911", - "numberOfThreads": "1" - } - } - - ``` - -* YAML - - ```yaml - - configs: - type: "tcp" - host: "127.0.0.1" - port: 10999 - numberOfThreads: 1 - - ``` - -## Usage - -The following examples show how to use the Netty source connector with TCP and HTTP. - -### TCP - -1. Start Pulsar standalone. - - ```bash - - $ docker pull apachepulsar/pulsar:{version} - - $ docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-netty-standalone apachepulsar/pulsar:{version} bin/pulsar standalone - - ``` - -2. Create a configuration file _netty-source-config.yaml_. - - ```yaml - - configs: - type: "tcp" - host: "127.0.0.1" - port: 10999 - numberOfThreads: 1 - - ``` - -3. Copy the configuration file _netty-source-config.yaml_ to Pulsar server. - - ```bash - - $ docker cp netty-source-config.yaml pulsar-netty-standalone:/pulsar/conf/ - - ``` - -4. Download the Netty source connector. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - curl -O http://mirror-hk.koddos.net/apache/pulsar/pulsar-{version}/connectors/pulsar-io-netty-{version}.nar - - ``` - -5. Start the Netty source connector. - - ```bash - - $ ./bin/pulsar-admin sources localrun \ - --archive pulsar-io-@pulsar:version@.nar \ - --tenant public \ - --namespace default \ - --name netty \ - --destination-topic-name netty-topic \ - --source-config-file netty-source-config.yaml \ - --parallelism 1 - - ``` - -6. Consume data. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - - $ ./bin/pulsar-client consume -t Exclusive -s netty-sub netty-topic -n 0 - - ``` - -7. Open another terminal window to send data to the Netty source. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - - $ apt-get update - - $ apt-get -y install telnet - - $ root@1d19327b2c67:/pulsar# telnet 127.0.0.1 10999 - Trying 127.0.0.1... - Connected to 127.0.0.1. - Escape character is '^]'. - hello - world - - ``` - -8. The following information appears on the consumer terminal window. - - ```bash - - ----- got message ----- - hello - - ----- got message ----- - world - - ``` - -### HTTP - -1. Start Pulsar standalone. - - ```bash - - $ docker pull apachepulsar/pulsar:{version} - - $ docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-netty-standalone apachepulsar/pulsar:{version} bin/pulsar standalone - - ``` - -2. Create a configuration file _netty-source-config.yaml_. - - ```yaml - - configs: - type: "http" - host: "127.0.0.1" - port: 10999 - numberOfThreads: 1 - - ``` - -3. Copy the configuration file _netty-source-config.yaml_ to Pulsar server. - - ```bash - - $ docker cp netty-source-config.yaml pulsar-netty-standalone:/pulsar/conf/ - - ``` - -4. Download the Netty source connector. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - curl -O http://mirror-hk.koddos.net/apache/pulsar/pulsar-{version}/connectors/pulsar-io-netty-{version}.nar - - ``` - -5. 
### HTTP

1. Start Pulsar standalone.

   ```bash
   $ docker pull apachepulsar/pulsar:@pulsar:version@
   $ docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-netty-standalone apachepulsar/pulsar:@pulsar:version@ bin/pulsar standalone
   ```

2. Create a configuration file _netty-source-config.yaml_.

   ```yaml
   configs:
     type: "http"
     host: "127.0.0.1"
     port: 10999
     numberOfThreads: 1
   ```

3. Copy the configuration file _netty-source-config.yaml_ to the Pulsar container.

   ```bash
   $ docker cp netty-source-config.yaml pulsar-netty-standalone:/pulsar/conf/
   ```

4. Download the Netty source connector.

   ```bash
   $ docker exec -it pulsar-netty-standalone /bin/bash
   curl -O http://mirror-hk.koddos.net/apache/pulsar/pulsar-@pulsar:version@/connectors/pulsar-io-netty-@pulsar:version@.nar
   ```

5. Start the Netty source connector.

   ```bash
   $ ./bin/pulsar-admin sources localrun \
   --archive pulsar-io-netty-@pulsar:version@.nar \
   --tenant public \
   --namespace default \
   --name netty \
   --destination-topic-name netty-topic \
   --source-config-file conf/netty-source-config.yaml \
   --parallelism 1
   ```

6. Consume data.

   ```bash
   $ docker exec -it pulsar-netty-standalone /bin/bash

   $ ./bin/pulsar-client consume -t Exclusive -s netty-sub netty-topic -n 0
   ```

7. Open another terminal window to send data to the Netty source.

   ```bash
   $ docker exec -it pulsar-netty-standalone /bin/bash

   $ curl -X POST --data 'hello, world!' http://127.0.0.1:10999/
   ```

8. The following information appears on the consumer terminal window.

   ```bash
   ----- got message -----
   hello, world!
   ```

diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/io-nsq-source.md b/site2/website/versioned_docs/version-2.10.1-deprecated/io-nsq-source.md
deleted file mode 100644
index b61e7e100c22e1..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/io-nsq-source.md
+++ /dev/null
@@ -1,21 +0,0 @@
---
id: io-nsq-source
title: NSQ source connector
sidebar_label: "NSQ source connector"
original_id: io-nsq-source
---

The NSQ source connector receives messages from NSQ topics and writes them to Pulsar topics.

## Configuration

The configuration of the NSQ source connector has the following properties.

### Property

| Name | Type|Required | Default | Description
|------|----------|----------|---------|-------------|
| `lookupds` |String| true | " " (empty string) | A comma-separated list of nsqlookupd instances to connect to. |
| `topic` | String|true | " " (empty string) | The NSQ topic to consume messages from. |
| `channel` | String |false | pulsar-transport-{$topic} | The channel to consume from on the provided NSQ topic. |
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/io-overview.md b/site2/website/versioned_docs/version-2.10.1-deprecated/io-overview.md
deleted file mode 100644
index 82d0cd04a31d7a..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/io-overview.md
+++ /dev/null
@@ -1,163 +0,0 @@
---
id: io-overview
title: Pulsar connector overview
sidebar_label: "Overview"
original_id: io-overview
---

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````

Messaging systems are most powerful when you can easily use them with external systems like databases and other messaging systems.

**Pulsar IO connectors** enable you to easily create, deploy, and manage connectors that interact with external systems, such as [Apache Cassandra](https://cassandra.apache.org), [Aerospike](https://www.aerospike.com), and many others.

## Concept

Pulsar IO connectors come in two types: **source** and **sink**.

This diagram illustrates the relationship between source, Pulsar, and sink:

![Pulsar IO diagram](/assets/pulsar-io.png "Pulsar IO connectors (sources and sinks)")

### Source

> Sources **feed data from external systems into Pulsar**.

Common sources include other messaging systems and firehose-style data pipeline APIs.

For the complete list of Pulsar built-in source connectors, see [source connector](io-connectors.md#source-connector).

### Sink

> Sinks **feed data from Pulsar into external systems**.

Common sinks include other messaging systems and SQL and NoSQL databases.
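To make the relationship concrete, the sketch below wires an external system into Pulsar and back out again with two CLI calls. This is only an illustration: the connector types are built-in, but the configuration files (`examples/rabbitmq-source.yml`, `examples/cassandra-sink.yml`) and the topic name are placeholders, not files shipped with Pulsar.

```bash
# Feed data from RabbitMQ into a Pulsar topic (source)...
bin/pulsar-admin sources create \
  --source-type rabbitmq \
  --source-config-file examples/rabbitmq-source.yml \
  --destination-topic-name pipeline-topic

# ...and drain the same topic into Cassandra (sink).
bin/pulsar-admin sinks create \
  --sink-type cassandra \
  --sink-config-file examples/cassandra-sink.yml \
  --inputs pipeline-topic
```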
- -For the complete list of Pulsar built-in sink connectors, see [sink connector](io-connectors.md#sink-connector). - -## Processing guarantee - -Processing guarantees are used to handle errors when writing messages to Pulsar topics. - -> Pulsar connectors and Functions use the **same** processing guarantees as below. - -Delivery semantic | Description -:------------------|:------- -`at-most-once` | Each message sent to a connector is to be **processed once** or **not to be processed**. -`at-least-once` | Each message sent to a connector is to be **processed once** or **more than once**. -`effectively-once` | Each message sent to a connector has **one output associated** with it. - -> Processing guarantees for connectors not just rely on Pulsar guarantee but also **relate to external systems**, that is, **the implementation of source and sink**. - -* Source: Pulsar ensures that writing messages to Pulsar topics respects to the processing guarantees. It is within Pulsar's control. - -* Sink: the processing guarantees rely on the sink implementation. If the sink implementation does not handle retries in an idempotent way, the sink does not respect to the processing guarantees. - -### Set - -When creating a connector, you can set the processing guarantee with the following semantics: - -* ATLEAST_ONCE - -* ATMOST_ONCE - -* EFFECTIVELY_ONCE - -> If `--processing-guarantees` is not specified when creating a connector, the default semantic is `ATLEAST_ONCE`. - -Here takes **Admin CLI** as an example. For more information about **REST API** or **JAVA Admin API**, see [here](io-use.md#create). - -````mdx-code-block - - - - -```bash - -$ bin/pulsar-admin sources create \ - --processing-guarantees ATMOST_ONCE \ - # Other source configs - -``` - -For more information about the options of `pulsar-admin sources create`, see [here](reference-connector-admin.md#create). - - - - -```bash - -$ bin/pulsar-admin sinks create \ - --processing-guarantees EFFECTIVELY_ONCE \ - # Other sink configs - -``` - -For more information about the options of `pulsar-admin sinks create`, see [here](reference-connector-admin.md#create-1). - - - - -```` - -### Update - -After creating a connector, you can update the processing guarantee with the following semantics: - -* ATLEAST_ONCE - -* ATMOST_ONCE - -* EFFECTIVELY_ONCE - -Here takes **Admin CLI** as an example. For more information about **REST API** or **JAVA Admin API**, see [here](io-use.md#create). - -````mdx-code-block - - - - -```bash - -$ bin/pulsar-admin sources update \ - --processing-guarantees EFFECTIVELY_ONCE \ - # Other source configs - -``` - -For more information about the options of `pulsar-admin sources update`, see [here](reference-connector-admin.md#update). - - - - -```bash - -$ bin/pulsar-admin sinks update \ - --processing-guarantees ATMOST_ONCE \ - # Other sink configs - -``` - -For more information about the options of `pulsar-admin sinks update`, see [here](reference-connector-admin.md#update-1). - - - - -```` - - -## Work with connector - -You can manage Pulsar connectors (for example, create, update, start, stop, restart, reload, delete and perform other operations on connectors) via the `Connector Admin CLI` with sources and sinks subcommands. For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/). - -Connectors (sources and sinks) and Functions are components of instances, and they all run on Functions workers. 
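As a quick orientation, a typical sink lifecycle under these subcommands looks like the following sketch; the `sources` subcommands mirror it, and the connector name, topic, and configuration file here are placeholders.

```bash
# Create a sink, check its status, restart it, and remove it (placeholder names).
bin/pulsar-admin sinks create --sink-type cassandra --sink-config-file sink-config.yml --inputs my-topic --name my-sink
bin/pulsar-admin sinks status --name my-sink
bin/pulsar-admin sinks restart --name my-sink
bin/pulsar-admin sinks delete --name my-sink
```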
When managing a source, sink or function via the `Connector Admin CLI` or [Functions Admin CLI](functions-cli.md), an instance is started on a worker. For more information, see [Functions worker](functions-worker.md#run-functions-worker-separately). diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/io-quickstart.md b/site2/website/versioned_docs/version-2.10.1-deprecated/io-quickstart.md deleted file mode 100644 index 1b6528d49541ba..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/io-quickstart.md +++ /dev/null @@ -1,963 +0,0 @@ ---- -id: io-quickstart -title: How to connect Pulsar to database -sidebar_label: "Get started" -original_id: io-quickstart ---- - -This tutorial provides a hands-on look at how you can move data out of Pulsar without writing a single line of code. - -It is helpful to review the [concepts](io-overview.md) for Pulsar I/O with running the steps in this guide to gain a deeper understanding. - -At the end of this tutorial, you are able to: - -- [Connect Pulsar to Cassandra](#connect-pulsar-to-cassandra) - -- [Connect Pulsar to PostgreSQL](#connect-pulsar-to-postgreSQL) - -:::tip - -* These instructions assume you are running Pulsar in [standalone mode](getting-started-standalone.md). However, all -the commands used in this tutorial can be used in a multi-node Pulsar cluster without any changes. -* All the instructions are assumed to run at the root directory of a Pulsar binary distribution. - -::: - -## Install Pulsar and built-in connector - -Before connecting Pulsar to a database, you need to install Pulsar and the desired built-in connector. - -For more information about **how to install a standalone Pulsar and built-in connectors**, see [here](getting-started-standalone.md/#installing-pulsar). - -## Start Pulsar standalone - -1. Start Pulsar locally. - - ```bash - - bin/pulsar standalone - - ``` - - All the components of a Pulsar service are started in order. - - You can curl those pulsar service endpoints to make sure Pulsar service is up and running correctly. - -2. Check Pulsar binary protocol port. - - ```bash - - telnet localhost 6650 - - ``` - -3. Check Pulsar Function cluster. - - ```bash - - curl -s http://localhost:8080/admin/v2/worker/cluster - - ``` - - **Example output** - - ```json - - [{"workerId":"c-standalone-fw-localhost-6750","workerHostname":"localhost","port":6750}] - - ``` - -4. Make sure a public tenant and a default namespace exist. - - ```bash - - curl -s http://localhost:8080/admin/v2/namespaces/public - - ``` - - **Example output** - - ```json - - ["public/default","public/functions"] - - ``` - -5. All built-in connectors should be listed as available. 
- - ```bash - - curl -s http://localhost:8080/admin/v2/functions/connectors - - ``` - - **Example output** - - ```json - - [{"name":"aerospike","description":"Aerospike database sink","sinkClass":"org.apache.pulsar.io.aerospike.AerospikeStringSink"},{"name":"cassandra","description":"Writes data into Cassandra","sinkClass":"org.apache.pulsar.io.cassandra.CassandraStringSink"},{"name":"kafka","description":"Kafka source and sink connector","sourceClass":"org.apache.pulsar.io.kafka.KafkaStringSource","sinkClass":"org.apache.pulsar.io.kafka.KafkaBytesSink"},{"name":"kinesis","description":"Kinesis sink connector","sinkClass":"org.apache.pulsar.io.kinesis.KinesisSink"},{"name":"rabbitmq","description":"RabbitMQ source connector","sourceClass":"org.apache.pulsar.io.rabbitmq.RabbitMQSource"},{"name":"twitter","description":"Ingest data from Twitter firehose","sourceClass":"org.apache.pulsar.io.twitter.TwitterFireHose"}] - - ``` - - If an error occurs when starting Pulsar service, you may see an exception at the terminal running `pulsar/standalone`, - or you can navigate to the `logs` directory under the Pulsar directory to view the logs. - -## Connect Pulsar to Cassandra - -This section demonstrates how to connect Pulsar to Cassandra. - -:::tip - -* Make sure you have Docker installed. If you do not have one, see [install Docker](https://docs.docker.com/docker-for-mac/install/). -* The Cassandra sink connector reads messages from Pulsar topics and writes the messages into Cassandra tables. For more information, see [Cassandra sink connector](io-cassandra-sink.md). - -::: - -### Setup a Cassandra cluster - -This example uses `cassandra` Docker image to start a single-node Cassandra cluster in Docker. - -1. Start a Cassandra cluster. - - ```bash - - docker run -d --rm --name=cassandra -p 9042:9042 cassandra - - ``` - - :::note - - Before moving to the next steps, make sure the Cassandra cluster is running. - - ::: - -2. Make sure the Docker process is running. - - ```bash - - docker ps - - ``` - -3. Check the Cassandra logs to make sure the Cassandra process is running as expected. - - ```bash - - docker logs cassandra - - ``` - -4. Check the status of the Cassandra cluster. - - ```bash - - docker exec cassandra nodetool status - - ``` - - **Example output** - - ``` - - Datacenter: datacenter1 - ======================= - Status=Up/Down - |/ State=Normal/Leaving/Joining/Moving - -- Address Load Tokens Owns (effective) Host ID Rack - UN 172.17.0.2 103.67 KiB 256 100.0% af0e4b2f-84e0-4f0b-bb14-bd5f9070ff26 rack1 - - ``` - -5. Use `cqlsh` to connect to the Cassandra cluster. - - ```bash - - $ docker exec -ti cassandra cqlsh localhost - Connected to Test Cluster at localhost:9042. - [cqlsh 5.0.1 | Cassandra 3.11.2 | CQL spec 3.4.4 | Native protocol v4] - Use HELP for help. - cqlsh> - - ``` - -6. Create a keyspace `pulsar_test_keyspace`. - - ```bash - - cqlsh> CREATE KEYSPACE pulsar_test_keyspace WITH replication = {'class':'SimpleStrategy', 'replication_factor':1}; - - ``` - -7. Create a table `pulsar_test_table`. - - ```bash - - cqlsh> USE pulsar_test_keyspace; - cqlsh:pulsar_test_keyspace> CREATE TABLE pulsar_test_table (key text PRIMARY KEY, col text); - - ``` - -### Configure a Cassandra sink - -Now that we have a Cassandra cluster running locally. - -In this section, you need to configure a Cassandra sink connector. - -To run a Cassandra sink connector, you need to prepare a configuration file including the information that Pulsar connector runtime needs to know. 
- -For example, how Pulsar connector can find the Cassandra cluster, what is the keyspace and the table that Pulsar connector uses for writing Pulsar messages to, and so on. - -You can create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "roots": "localhost:9042", - "keyspace": "pulsar_test_keyspace", - "columnFamily": "pulsar_test_table", - "keyname": "key", - "columnName": "col" - } - - ``` - -* YAML - - ```yaml - - configs: - roots: "localhost:9042" - keyspace: "pulsar_test_keyspace" - columnFamily: "pulsar_test_table" - keyname: "key" - columnName: "col" - - ``` - -For more information, see [Cassandra sink connector](io-cassandra-sink.md). - -### Create a Cassandra sink - -You can use the [Connector Admin CLI](/tools/pulsar-admin/) -to create a sink connector and perform other operations on them. - -Run the following command to create a Cassandra sink connector with sink type _cassandra_ and the config file _examples/cassandra-sink.yml_ created previously. - -#### Note -> The `sink-type` parameter of the currently built-in connectors is determined by the setting of the `name` parameter specified in the pulsar-io.yaml file. - -```bash - -bin/pulsar-admin sinks create \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink \ - --sink-type cassandra \ - --sink-config-file examples/cassandra-sink.yml \ - --inputs test_cassandra - -``` - -Once the command is executed, Pulsar creates the sink connector _cassandra-test-sink_. - -This sink connector runs -as a Pulsar Function and writes the messages produced in the topic _test_cassandra_ to the Cassandra table _pulsar_test_table_. - -### Inspect a Cassandra sink - -You can use the [Connector Admin CLI](/tools/pulsar-admin/) -to monitor a connector and perform other operations on it. - -* Get the information of a Cassandra sink. - - ```bash - - bin/pulsar-admin sinks get \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink - - ``` - - **Example output** - - ```json - - { - "tenant": "public", - "namespace": "default", - "name": "cassandra-test-sink", - "className": "org.apache.pulsar.io.cassandra.CassandraStringSink", - "inputSpecs": { - "test_cassandra": { - "isRegexPattern": false - } - }, - "configs": { - "roots": "localhost:9042", - "keyspace": "pulsar_test_keyspace", - "columnFamily": "pulsar_test_table", - "keyname": "key", - "columnName": "col" - }, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "autoAck": true, - "archive": "builtin://cassandra" - } - - ``` - -* Check the status of a Cassandra sink. - - ```bash - - bin/pulsar-admin sinks status \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink - - ``` - - **Example output** - - ```json - - { - "numInstances" : 1, - "numRunning" : 1, - "instances" : [ { - "instanceId" : 0, - "status" : { - "running" : true, - "error" : "", - "numRestarts" : 0, - "numReadFromPulsar" : 0, - "numSystemExceptions" : 0, - "latestSystemExceptions" : [ ], - "numSinkExceptions" : 0, - "latestSinkExceptions" : [ ], - "numWrittenToSink" : 0, - "lastReceivedTime" : 0, - "workerId" : "c-standalone-fw-localhost-8080" - } - } ] - } - - ``` - -### Verify a Cassandra sink - -1. Produce some messages to the input topic of the Cassandra sink _test_cassandra_. - - ```bash - - for i in {0..9}; do bin/pulsar-client produce -m "key-$i" -n 1 test_cassandra; done - - ``` - -2. Inspect the status of the Cassandra sink _test_cassandra_. 
- - ```bash - - bin/pulsar-admin sinks status \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink - - ``` - - You can see 10 messages are processed by the Cassandra sink _test_cassandra_. - - **Example output** - - ```json - - { - "numInstances" : 1, - "numRunning" : 1, - "instances" : [ { - "instanceId" : 0, - "status" : { - "running" : true, - "error" : "", - "numRestarts" : 0, - "numReadFromPulsar" : 10, - "numSystemExceptions" : 0, - "latestSystemExceptions" : [ ], - "numSinkExceptions" : 0, - "latestSinkExceptions" : [ ], - "numWrittenToSink" : 10, - "lastReceivedTime" : 1551685489136, - "workerId" : "c-standalone-fw-localhost-8080" - } - } ] - } - - ``` - -3. Use `cqlsh` to connect to the Cassandra cluster. - - ```bash - - docker exec -ti cassandra cqlsh localhost - - ``` - -4. Check the data of the Cassandra table _pulsar_test_table_. - - ```bash - - cqlsh> use pulsar_test_keyspace; - cqlsh:pulsar_test_keyspace> select * from pulsar_test_table; - - key | col - --------+-------- - key-5 | key-5 - key-0 | key-0 - key-9 | key-9 - key-2 | key-2 - key-1 | key-1 - key-3 | key-3 - key-6 | key-6 - key-7 | key-7 - key-4 | key-4 - key-8 | key-8 - - ``` - -### Delete a Cassandra Sink - -You can use the [Connector Admin CLI](/tools/pulsar-admin/) -to delete a connector and perform other operations on it. - -```bash - -bin/pulsar-admin sinks delete \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink - -``` - -## Connect Pulsar to PostgreSQL - -This section demonstrates how to connect Pulsar to PostgreSQL. - -:::tip - -* Make sure you have Docker installed. If you do not have one, see [install Docker](https://docs.docker.com/docker-for-mac/install/). -* The JDBC sink connector pulls messages from Pulsar topics and persists the messages to ClickHouse, MariaDB, PostgreSQL, or SQlite. - -::: - ->For more information, see [JDBC sink connector](io-jdbc-sink.md). - - -### Setup a PostgreSQL cluster - -This example uses the PostgreSQL 12 docker image to start a single-node PostgreSQL cluster in Docker. - -1. Pull the PostgreSQL 12 image from Docker. - - ```bash - - $ docker pull postgres:12 - - ``` - -2. Start PostgreSQL. - - ```bash - - $ docker run -d -it --rm \ - --name pulsar-postgres \ - -p 5432:5432 \ - -e POSTGRES_PASSWORD=password \ - -e POSTGRES_USER=postgres \ - postgres:12 - - ``` - - #### Tip - - Flag | Description | This example - ---|---|---| - `-d` | To start a container in detached mode. | / - `-it` | Keep STDIN open even if not attached and allocate a terminal. | / - `--rm` | Remove the container automatically when it exits. | / - `-name` | Assign a name to the container. | This example specifies _pulsar-postgres_ for the container. - `-p` | Publish the port of the container to the host. | This example publishes the port _5432_ of the container to the host. - `-e` | Set environment variables. | This example sets the following variables:
    - The password for the user is _password_.
    - The name for the user is _postgres_. - - :::tip - - For more information about Docker commands, see [Docker CLI](https://docs.docker.com/engine/reference/commandline/run/). - - ::: - -3. Check if PostgreSQL has been started successfully. - - ```bash - - $ docker logs -f pulsar-postgres - - ``` - - PostgreSQL has been started successfully if the following message appears. - - ```text - - 2020-05-11 20:09:24.492 UTC [1] LOG: starting PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit - 2020-05-11 20:09:24.492 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 - 2020-05-11 20:09:24.492 UTC [1] LOG: listening on IPv6 address "::", port 5432 - 2020-05-11 20:09:24.499 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" - 2020-05-11 20:09:24.523 UTC [55] LOG: database system was shut down at 2020-05-11 20:09:24 UTC - 2020-05-11 20:09:24.533 UTC [1] LOG: database system is ready to accept connections - - ``` - -4. Access to PostgreSQL. - - ```bash - - $ docker exec -it pulsar-postgres /bin/bash - - ``` - -5. Create a PostgreSQL table _pulsar_postgres_jdbc_sink_. - - ```bash - - $ psql -U postgres postgres - - postgres=# create table if not exists pulsar_postgres_jdbc_sink - ( - id serial PRIMARY KEY, - name VARCHAR(255) NOT NULL - ); - - ``` - -### Configure a JDBC sink - -Now we have a PostgreSQL running locally. - -In this section, you need to configure a JDBC sink connector. - -1. Add a configuration file. - - To run a JDBC sink connector, you need to prepare a YAML configuration file including the information that Pulsar connector runtime needs to know. - - For example, how Pulsar connector can find the PostgreSQL cluster, what is the JDBC URL and the table that Pulsar connector uses for writing messages. - - Create a _pulsar-postgres-jdbc-sink.yaml_ file, copy the following contents to this file, and place the file in the `pulsar/connectors` folder. - - ```yaml - - configs: - userName: "postgres" - password: "password" - jdbcUrl: "jdbc:postgresql://localhost:5432/postgres" - tableName: "pulsar_postgres_jdbc_sink" - - ``` - -2. Create a schema. - - Create a _avro-schema_ file, copy the following contents to this file, and place the file in the `pulsar/connectors` folder. - - ```json - - { - "type": "AVRO", - "schema": "{\"type\":\"record\",\"name\":\"Test\",\"fields\":[{\"name\":\"id\",\"type\":[\"null\",\"int\"]},{\"name\":\"name\",\"type\":[\"null\",\"string\"]}]}", - "properties": {} - } - - ``` - - :::tip - - For more information about AVRO, see [Apache Avro](https://avro.apache.org/docs/1.9.1/). - - ::: - -3. Upload a schema to a topic. - - This example uploads the _avro-schema_ schema to the _pulsar-postgres-jdbc-sink-topic_ topic. - - ```bash - - $ bin/pulsar-admin schemas upload pulsar-postgres-jdbc-sink-topic -f ./connectors/avro-schema - - ``` - -4. Check if the schema has been uploaded successfully. - - ```bash - - $ bin/pulsar-admin schemas get pulsar-postgres-jdbc-sink-topic - - ``` - - The schema has been uploaded successfully if the following message appears. - - ```json - - {"name":"pulsar-postgres-jdbc-sink-topic","schema":"{\"type\":\"record\",\"name\":\"Test\",\"fields\":[{\"name\":\"id\",\"type\":[\"null\",\"int\"]},{\"name\":\"name\",\"type\":[\"null\",\"string\"]}]}","type":"AVRO","properties":{}} - - ``` - -### Create a JDBC sink - -You can use the [Connector Admin CLI](/tools/pulsar-admin/) -to create a sink connector and perform other operations on it. 
- -This example creates a sink connector and specifies the desired information. - -```bash - -$ bin/pulsar-admin sinks create \ ---archive ./connectors/pulsar-io-jdbc-postgres-@pulsar:version@.nar \ ---inputs pulsar-postgres-jdbc-sink-topic \ ---name pulsar-postgres-jdbc-sink \ ---sink-config-file ./connectors/pulsar-postgres-jdbc-sink.yaml \ ---parallelism 1 - -``` - -Once the command is executed, Pulsar creates a sink connector _pulsar-postgres-jdbc-sink_. - -This sink connector runs as a Pulsar Function and writes the messages produced in the topic _pulsar-postgres-jdbc-sink-topic_ to the PostgreSQL table _pulsar_postgres_jdbc_sink_. - - #### Tip - - Flag | Description | This example - ---|---|---| - `--archive` | The path to the archive file for the sink. | _pulsar-io-jdbc-postgres-@pulsar:version@.nar_ | - `--inputs` | The input topic(s) of the sink.

    Multiple topics can be specified as a comma-separated list.|| - `--name` | The name of the sink. | _pulsar-postgres-jdbc-sink_ | - `--sink-config-file` | The path to a YAML config file specifying the configuration of the sink. | _pulsar-postgres-jdbc-sink.yaml_ | - `--parallelism` | The parallelism factor of the sink.

    For example, the number of sink instances to run. | _1_ | - -:::tip - -For more information about `pulsar-admin sinks create options`, see [Pulsar admin docs](/tools/pulsar-admin/). - -::: - -The sink has been created successfully if the following message appears. - -```bash - -Created successfully - -``` - -### Inspect a JDBC sink - -You can use the [Connector Admin CLI](/tools/pulsar-admin/) -to monitor a connector and perform other operations on it. - -* List all running JDBC sink(s). - - ```bash - - $ bin/pulsar-admin sinks list \ - --tenant public \ - --namespace default - - ``` - - :::tip - - For more information about `pulsar-admin sinks list options`, see [Pulsar admin docs](/tools/pulsar-admin/). - - ::: - - The result shows that only the _postgres-jdbc-sink_ sink is running. - - ```json - - [ - "pulsar-postgres-jdbc-sink" - ] - - ``` - -* Get the information of a JDBC sink. - - ```bash - - $ bin/pulsar-admin sinks get \ - --tenant public \ - --namespace default \ - --name pulsar-postgres-jdbc-sink - - ``` - - :::tip - - For more information about `pulsar-admin sinks get options`, see [Pulsar admin docs](/tools/pulsar-admin/). - - ::: - - The result shows the information of the sink connector, including tenant, namespace, topic and so on. - - ```json - - { - "tenant": "public", - "namespace": "default", - "name": "pulsar-postgres-jdbc-sink", - "className": "org.apache.pulsar.io.jdbc.PostgresJdbcAutoSchemaSink", - "inputSpecs": { - "pulsar-postgres-jdbc-sink-topic": { - "isRegexPattern": false - } - }, - "configs": { - "password": "password", - "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink", - "userName": "postgres", - "tableName": "pulsar_postgres_jdbc_sink" - }, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "autoAck": true - } - - ``` - -* Get the status of a JDBC sink - - ```bash - - $ bin/pulsar-admin sinks status \ - --tenant public \ - --namespace default \ - --name pulsar-postgres-jdbc-sink - - ``` - - :::tip - - For more information about `pulsar-admin sinks status options`, see [Pulsar admin docs](/tools/pulsar-admin/). - - ::: - - The result shows the current status of sink connector, including the number of instances, running status, worker ID and so on. - - ```json - - { - "numInstances" : 1, - "numRunning" : 1, - "instances" : [ { - "instanceId" : 0, - "status" : { - "running" : true, - "error" : "", - "numRestarts" : 0, - "numReadFromPulsar" : 0, - "numSystemExceptions" : 0, - "latestSystemExceptions" : [ ], - "numSinkExceptions" : 0, - "latestSinkExceptions" : [ ], - "numWrittenToSink" : 0, - "lastReceivedTime" : 0, - "workerId" : "c-standalone-fw-192.168.2.52-8080" - } - } ] - } - - ``` - -### Stop a JDBC sink - -You can use the [Connector Admin CLI](/tools/pulsar-admin/) -to stop a connector and perform other operations on it. - -```bash - -$ bin/pulsar-admin sinks stop \ ---tenant public \ ---namespace default \ ---name pulsar-postgres-jdbc-sink - -``` - -:::tip - -For more information about `pulsar-admin sinks stop options`, see [Pulsar admin docs](/tools/pulsar-admin/). - -::: - -The sink instance has been stopped successfully if the following message disappears. - -```bash - -Stopped successfully - -``` - -### Restart a JDBC sink - -You can use the [Connector Admin CLI](/tools/pulsar-admin/) -to restart a connector and perform other operations on it. 
- -```bash - -$ bin/pulsar-admin sinks restart \ ---tenant public \ ---namespace default \ ---name pulsar-postgres-jdbc-sink - -``` - -:::tip - -For more information about `pulsar-admin sinks restart options`, see [Pulsar admin docs](/tools/pulsar-admin/). - -::: - -The sink instance has been started successfully if the following message disappears. - -```bash - -Started successfully - -``` - -:::tip - -* Optionally, you can run a standalone sink connector using `pulsar-admin sinks localrun options`. -Note that `pulsar-admin sinks localrun options` **runs a sink connector locally**, while `pulsar-admin sinks start options` **starts a sink connector in a cluster**. -* For more information about `pulsar-admin sinks localrun options`, see [Pulsar admin docs](/tools/pulsar-admin/). - -::: - -### Update a JDBC sink - -You can use the [Connector Admin CLI](/tools/pulsar-admin/) -to update a connector and perform other operations on it. - -This example updates the parallelism of the _pulsar-postgres-jdbc-sink_ sink connector to 2. - -```bash - -$ bin/pulsar-admin sinks update \ ---name pulsar-postgres-jdbc-sink \ ---parallelism 2 - -``` - -:::tip - -For more information about `pulsar-admin sinks update options`, see [Pulsar admin docs](/tools/pulsar-admin/). - -::: - -The sink connector has been updated successfully if the following message disappears. - -```bash - -Updated successfully - -``` - -This example double-checks the information. - -```bash - -$ bin/pulsar-admin sinks get \ ---tenant public \ ---namespace default \ ---name pulsar-postgres-jdbc-sink - -``` - -The result shows that the parallelism is 2. - -```json - -{ - "tenant": "public", - "namespace": "default", - "name": "pulsar-postgres-jdbc-sink", - "className": "org.apache.pulsar.io.jdbc.PostgresJdbcAutoSchemaSink", - "inputSpecs": { - "pulsar-postgres-jdbc-sink-topic": { - "isRegexPattern": false - } - }, - "configs": { - "password": "password", - "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink", - "userName": "postgres", - "tableName": "pulsar_postgres_jdbc_sink" - }, - "parallelism": 2, - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "autoAck": true -} - -``` - -### Delete a JDBC sink - -You can use the [Connector Admin CLI](/tools/pulsar-admin/) -to delete a connector and perform other operations on it. - -This example deletes the _pulsar-postgres-jdbc-sink_ sink connector. - -```bash - -$ bin/pulsar-admin sinks delete \ ---tenant public \ ---namespace default \ ---name pulsar-postgres-jdbc-sink - -``` - -:::tip - -For more information about `pulsar-admin sinks delete options`, see [Pulsar admin docs](/tools/pulsar-admin/). - -::: - -The sink connector has been deleted successfully if the following message appears. - -```text - -Deleted successfully - -``` - -This example double-checks the status of the sink connector. - -```bash - -$ bin/pulsar-admin sinks get \ ---tenant public \ ---namespace default \ ---name pulsar-postgres-jdbc-sink - -``` - -The result shows that the sink connector does not exist. 
- -```text - -HTTP 404 Not Found - -Reason: Sink pulsar-postgres-jdbc-sink doesn't exist - -``` - diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/io-rabbitmq-sink.md b/site2/website/versioned_docs/version-2.10.1-deprecated/io-rabbitmq-sink.md deleted file mode 100644 index 1bf8b7bd5c83ae..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/io-rabbitmq-sink.md +++ /dev/null @@ -1,87 +0,0 @@ ---- -id: io-rabbitmq-sink -title: RabbitMQ sink connector -sidebar_label: "RabbitMQ sink connector" -original_id: io-rabbitmq-sink ---- - -The RabbitMQ sink connector pulls messages from Pulsar topics -and persist the messages to RabbitMQ queues. - - -## Configuration - -The configuration of the RabbitMQ sink connector has the following properties. - - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `connectionName` |String| true | " " (empty string) | The connection name. | -| `host` | String| true | " " (empty string) | The RabbitMQ host. | -| `port` | int |true | 5672 | The RabbitMQ port. | -| `virtualHost` |String|true | / | The virtual host used to connect to RabbitMQ. | -| `username` | String|false | guest | The username used to authenticate to RabbitMQ. | -| `password` | String|false | guest | The password used to authenticate to RabbitMQ. | -| `queueName` | String|true | " " (empty string) | The RabbitMQ queue name that messages should be read from or written to. | -| `requestedChannelMax` | int|false | 0 | The initially requested maximum channel number.

    0 means unlimited. | -| `requestedFrameMax` | int|false |0 | The initially requested maximum frame size in octets.

    0 means unlimited. | -| `connectionTimeout` | int|false | 60000 | The timeout of TCP connection establishment in milliseconds.

    0 means infinite. |
| `handshakeTimeout` | int|false | 10000 | The timeout of the AMQP 0-9-1 protocol handshake in milliseconds. |
| `requestedHeartbeat` | int|false | 60 | The requested heartbeat timeout in seconds. |
| `exchangeName` | String|true | " " (empty string) | The exchange to publish messages to. |
| `routingKey` | String|true | " " (empty string) | The routing key used to publish messages to the exchange. |

### Example

Before using the RabbitMQ sink connector, you need to create a configuration file through one of the following methods.

* JSON

  ```json
  {
    "configs": {
      "host": "localhost",
      "port": "5672",
      "virtualHost": "/",
      "username": "guest",
      "password": "guest",
      "queueName": "test-queue",
      "connectionName": "test-connection",
      "requestedChannelMax": "0",
      "requestedFrameMax": "0",
      "connectionTimeout": "60000",
      "handshakeTimeout": "10000",
      "requestedHeartbeat": "60",
      "exchangeName": "test-exchange",
      "routingKey": "test-key"
    }
  }
  ```

* YAML

  ```yaml
  configs:
    host: "localhost"
    port: 5672
    virtualHost: "/"
    username: "guest"
    password: "guest"
    queueName: "test-queue"
    connectionName: "test-connection"
    requestedChannelMax: 0
    requestedFrameMax: 0
    connectionTimeout: 60000
    handshakeTimeout: 10000
    requestedHeartbeat: 60
    exchangeName: "test-exchange"
    routingKey: "test-key"
  ```

diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/io-rabbitmq-source.md b/site2/website/versioned_docs/version-2.10.1-deprecated/io-rabbitmq-source.md
deleted file mode 100644
index 0dbf51e15856eb..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/io-rabbitmq-source.md
+++ /dev/null
@@ -1,87 +0,0 @@
---
id: io-rabbitmq-source
title: RabbitMQ source connector
sidebar_label: "RabbitMQ source connector"
original_id: io-rabbitmq-source
---

The RabbitMQ source connector receives messages from RabbitMQ clusters and writes them to Pulsar topics.

## Configuration

The configuration of the RabbitMQ source connector has the following properties.

### Property

| Name | Type|Required | Default | Description
|------|----------|----------|---------|-------------|
| `connectionName` |String| true | " " (empty string) | The connection name. |
| `host` | String| true | " " (empty string) | The RabbitMQ host. |
| `port` | int |true | 5672 | The RabbitMQ port. |
| `virtualHost` |String|true | / | The virtual host used to connect to RabbitMQ. |
| `username` | String|false | guest | The username used to authenticate to RabbitMQ. |
| `password` | String|false | guest | The password used to authenticate to RabbitMQ. |
| `queueName` | String|true | " " (empty string) | The RabbitMQ queue name that messages should be read from or written to. |
| `requestedChannelMax` | int|false | 0 | The initially requested maximum channel number.<br /><br />

    0 means unlimited. | -| `requestedFrameMax` | int|false |0 | The initially requested maximum frame size in octets.

    0 means unlimited. | -| `connectionTimeout` | int|false | 60000 | The timeout of TCP connection establishment in milliseconds.

    0 means infinite. | -| `handshakeTimeout` | int|false | 10000 | The timeout of AMQP0-9-1 protocol handshake in milliseconds. | -| `requestedHeartbeat` | int|false | 60 | The requested heartbeat timeout in seconds. | -| `prefetchCount` | int|false | 0 | The maximum number of messages that the server delivers.

    0 means unlimited. | -| `prefetchGlobal` | boolean|false | false |Whether the setting should be applied to the entire channel rather than each consumer. | -| `passive` | boolean|false | false | Whether the rabbitmq consumer should create its own queue or bind to an existing one. | - -### Example - -Before using the RabbitMQ source connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "configs": { - "host": "localhost", - "port": "5672", - "virtualHost": "/", - "username": "guest", - "password": "guest", - "queueName": "test-queue", - "connectionName": "test-connection", - "requestedChannelMax": "0", - "requestedFrameMax": "0", - "connectionTimeout": "60000", - "handshakeTimeout": "10000", - "requestedHeartbeat": "60", - "prefetchCount": "0", - "prefetchGlobal": "false", - "passive": "false" - } - } - - ``` - -* YAML - - ```yaml - - configs: - host: "localhost" - port: 5672 - virtualHost: "/" - username: "guest" - password: "guest" - queueName: "test-queue" - connectionName: "test-connection" - requestedChannelMax: 0 - requestedFrameMax: 0 - connectionTimeout: 60000 - handshakeTimeout: 10000 - requestedHeartbeat: 60 - prefetchCount: 0 - prefetchGlobal: "false" - passive: "false" - - ``` - diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/io-redis-sink.md b/site2/website/versioned_docs/version-2.10.1-deprecated/io-redis-sink.md deleted file mode 100644 index 9efd6ed8637694..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/io-redis-sink.md +++ /dev/null @@ -1,158 +0,0 @@ ---- -id: io-redis-sink -title: Redis sink connector -sidebar_label: "Redis sink connector" -original_id: io-redis-sink ---- - -The Redis sink connector pulls messages from Pulsar topics -and persists the messages to a Redis database. - - - -## Configuration - -The configuration of the Redis sink connector has the following properties. - - - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `redisHosts` |String|true|" " (empty string) | A comma-separated list of Redis hosts to connect to. | -| `redisPassword` |String|false|" " (empty string) | The password used to connect to Redis. | -| `redisDatabase` | int|true|0 | The Redis database to connect to. | -| `clientMode` |String| false|Standalone | The client mode when interacting with Redis cluster.

    Below are the available options:
- Standalone
- Cluster
  389. | -| `autoReconnect` | boolean|false|true | Whether the Redis client automatically reconnect or not. | -| `requestQueue` | int|false|2147483647 | The maximum number of queued requests to Redis. | -| `tcpNoDelay` |boolean| false| false | Whether to enable TCP with no delay or not. | -| `keepAlive` | boolean|false | false |Whether to enable a keepalive to Redis or not. | -| `connectTimeout` |long| false|10000 | The time to wait before timing out when connecting in milliseconds. | -| `operationTimeout` | long|false|10000 | The time before an operation is marked as timed out in milliseconds . | -| `batchTimeMs` | int|false|1000 | The Redis operation time in milliseconds. | -| `batchSize` | int|false|200 | The batch size of writing to Redis database. | - - -### Example - -Before using the Redis sink connector, you need to create a configuration file in the path you will start Pulsar service (i.e. `PULSAR_HOME`) through one of the following methods. - -* JSON - - ```json - - { - "configs": { - "redisHosts": "localhost:6379", - "redisPassword": "mypassword", - "redisDatabase": "0", - "clientMode": "Standalone", - "operationTimeout": "2000", - "batchSize": "1", - "batchTimeMs": "1000", - "connectTimeout": "3000" - } - } - - ``` - -* YAML - - ```yaml - - configs: - redisHosts: "localhost:6379" - redisPassword: "mypassword" - redisDatabase: 0 - clientMode: "Standalone" - operationTimeout: 2000 - batchSize: 1 - batchTimeMs: 1000 - connectTimeout: 3000 - - ``` - -### Usage - -This example shows how to write records to a Redis database using the Pulsar Redis connector. - -1. Start a Redis server. - - ```bash - - $ docker pull redis:5.0.5 - $ docker run -d -p 6379:6379 --name my-redis redis:5.0.5 --requirepass "mypassword" - - ``` - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - - Make sure the NAR file is available at `connectors/pulsar-io-redis-@pulsar:version@.nar`. - -3. Start the Pulsar Redis connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin sinks localrun \ - --archive connectors/pulsar-io-redis-@pulsar:version@.nar \ - --tenant public \ - --namespace default \ - --name my-redis-sink \ - --sink-config '{"redisHosts": "localhost:6379","redisPassword": "mypassword","redisDatabase": "0","clientMode": "Standalone","operationTimeout": "3000","batchSize": "1"}' \ - --inputs my-redis-topic - - ``` - - * Use the **YAML** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin sinks localrun \ - --archive connectors/pulsar-io-redis-@pulsar:version@.nar \ - --tenant public \ - --namespace default \ - --name my-redis-sink \ - --sink-config-file redis-sink-config.yaml \ - --inputs my-redis-topic - - ``` - -4. Publish records to the topic. - - ```bash - - $ bin/pulsar-client produce \ - persistent://public/default/my-redis-topic \ - -k "streaming" \ - -m "Pulsar" - - ``` - -5. Start a Redis client in Docker. - - ```bash - - $ docker exec -it my-redis redis-cli -a "mypassword" - - ``` - -6. Check the key/value in Redis. 
- - ``` - - 127.0.0.1:6379> keys * - 1) "streaming" - 127.0.0.1:6379> get "streaming" - "Pulsar" - - ``` - diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/io-solr-sink.md b/site2/website/versioned_docs/version-2.10.1-deprecated/io-solr-sink.md deleted file mode 100644 index d8b09db61faefe..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/io-solr-sink.md +++ /dev/null @@ -1,67 +0,0 @@ ---- -id: io-solr-sink -title: Solr sink connector -sidebar_label: "Solr sink connector" -original_id: io-solr-sink ---- - -The Solr sink connector pulls messages from Pulsar topics -and persists the messages to Solr collections. - - - -## Configuration - -The configuration of the Solr sink connector has the following properties. - - - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `solrUrl` | String|true|" " (empty string) |
- Comma-separated ZooKeeper hosts with chroot, used in SolrCloud mode.
  **Example**
  `localhost:2181,localhost:2182/chroot`
- The URL to connect to Solr, used in standalone mode.
  **Example**
  `localhost:8983/solr` |
| `solrMode` | String|true|SolrCloud| The client mode when interacting with the Solr cluster.<br /><br />

    Below are the available options:
- Standalone
- SolrCloud |
| `solrCollection` |String|true| " " (empty string) | The Solr collection name to which records are written. |
| `solrCommitWithinMs` |int| false|10 | The time in milliseconds within which Solr commits updates.|
| `username` |String|false| " " (empty string) | The username for basic authentication.<br /><br />

    **Note: `username` is case-sensitive.** |
| `password` | String|false| " " (empty string) | The password for basic authentication.<br /><br />

    **Note: `password` is case-sensitive.** |

### Example

Before using the Solr sink connector, you need to create a configuration file through one of the following methods.

* JSON

  ```json
  {
    "configs": {
      "solrUrl": "localhost:2181,localhost:2182/chroot",
      "solrMode": "SolrCloud",
      "solrCollection": "techproducts",
      "solrCommitWithinMs": 100,
      "username": "fakeuser",
      "password": "fake@123"
    }
  }
  ```

* YAML

  ```yaml
  configs:
    solrUrl: "localhost:2181,localhost:2182/chroot"
    solrMode: "SolrCloud"
    solrCollection: "techproducts"
    solrCommitWithinMs: 100
    username: "fakeuser"
    password: "fake@123"
  ```

diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/io-twitter-source.md b/site2/website/versioned_docs/version-2.10.1-deprecated/io-twitter-source.md
deleted file mode 100644
index 8de3504dd0fef2..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/io-twitter-source.md
+++ /dev/null
@@ -1,28 +0,0 @@
---
id: io-twitter-source
title: Twitter Firehose source connector
sidebar_label: "Twitter Firehose source connector"
original_id: io-twitter-source
---

The Twitter Firehose source connector receives tweets from Twitter Firehose and writes the tweets to Pulsar topics.

## Configuration

The configuration of the Twitter Firehose source connector has the following properties.

### Property

| Name | Type|Required | Default | Description
|------|----------|----------|---------|-------------|
| `consumerKey` | String|true | " " (empty string) | The Twitter OAuth consumer key.<br /><br />

    For more information, see [Access tokens](https://developer.twitter.com/en/docs/basics/authentication/guides/access-tokens). |
| `consumerSecret` | String |true | " " (empty string) | The Twitter OAuth consumer secret. |
| `token` | String|true | " " (empty string) | The Twitter OAuth token. |
| `tokenSecret` | String|true | " " (empty string) | The Twitter OAuth token secret. |
| `guestimateTweetTime`|Boolean|false|false|Most firehose events have a null createdAt time.<br /><br />

    If `guestimateTweetTime` set to true, the connector estimates the createdTime of each firehose event to be current time. -| `clientName` | String |false | openconnector-twitter-source| The twitter firehose client name. | -| `clientHosts` |String| false | Constants.STREAM_HOST | The twitter firehose hosts to which client connects. | -| `clientBufferSize` | int|false | 50000 | The buffer size for buffering tweets fetched from twitter firehose. | - -> For more information about OAuth credentials, see [Twitter developers portal](https://developer.twitter.com/en.html). diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/io-twitter.md b/site2/website/versioned_docs/version-2.10.1-deprecated/io-twitter.md deleted file mode 100644 index 3b2f6325453c3c..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/io-twitter.md +++ /dev/null @@ -1,7 +0,0 @@ ---- -id: io-twitter -title: Twitter Firehose Connector -sidebar_label: "Twitter Firehose Connector" -original_id: io-twitter ---- - diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/io-use.md b/site2/website/versioned_docs/version-2.10.1-deprecated/io-use.md deleted file mode 100644 index 5746faea4eaffa..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/io-use.md +++ /dev/null @@ -1,1787 +0,0 @@ ---- -id: io-use -title: How to use Pulsar connectors -sidebar_label: "Use" -original_id: io-use ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -This guide describes how to use Pulsar connectors. - -## Install a connector - -Pulsar bundles several [builtin connectors](io-connectors.md) used to move data in and out of commonly used systems (such as database and messaging system). Optionally, you can create and use your desired non-builtin connectors. - -:::note - -When using a non-builtin connector, you need to specify the path of an archive file for the connector. - -::: - -To set up a builtin connector, follow -the instructions [here](getting-started-standalone.md#installing-builtin-connectors). - -After the setup, the builtin connector is automatically discovered by Pulsar brokers (or function-workers), so no additional installation steps are required. - -## Configure a connector - -You can configure the following information: - -* [Configure a default storage location for a connector](#configure-a-default-storage-location-for-a-connector) - -* [Configure a connector with a YAML file](#configure-a-connector-with-yaml-file) - -### Configure a default storage location for a connector - -To configure a default folder for builtin connectors, set the `connectorsDirectory` parameter in the `./conf/functions_worker.yml` configuration file. - -**Example** - -Set the `./connectors` folder as the default storage location for builtin connectors. - -``` - -######################## -# Connectors -######################## - -connectorsDirectory: ./connectors - -``` - -### Configure a connector with a YAML file - -To configure a connector, you need to provide a YAML configuration file when creating a connector. - -The YAML configuration file tells Pulsar where to locate connectors and how to connect connectors with Pulsar topics. 
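Once written, such a file is handed to the Admin CLI when you create or run the connector. The following is a minimal sketch using the Cassandra sink configuration from Example 1 below; the file path is a placeholder.

```bash
bin/pulsar-admin sinks create \
  --sink-type cassandra \
  --sink-config-file examples/cassandra-sink.yml \
  --inputs test_cassandra \
  --name cassandra-test-sink
```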
- -**Example 1** - -Below is a YAML configuration file of a Cassandra sink, which tells Pulsar: - -* Which Cassandra cluster to connect - -* What is the `keyspace` and `columnFamily` to be used in Cassandra for collecting data - -* How to map Pulsar messages into Cassandra table key and columns - -```shell - -tenant: public -namespace: default -name: cassandra-test-sink -... -# cassandra specific config -configs: - roots: "localhost:9042" - keyspace: "pulsar_test_keyspace" - columnFamily: "pulsar_test_table" - keyname: "key" - columnName: "col" - -``` - -**Example 2** - -Below is a YAML configuration file of a Kafka source. - -```shell - -configs: - bootstrapServers: "pulsar-kafka:9092" - groupId: "test-pulsar-io" - topic: "my-topic" - sessionTimeoutMs: "10000" - autoCommitEnabled: "false" - -``` - -**Example 3** - -Below is a YAML configuration file of a PostgreSQL JDBC sink. - -```shell - -configs: - userName: "postgres" - password: "password" - jdbcUrl: "jdbc:postgresql://localhost:5432/test_jdbc" - tableName: "test_jdbc" - -``` - -## Get available connectors - -Before starting using connectors, you can perform the following operations: - -* [Reload connectors](#reload) - -* [Get a list of available connectors](#get-available-connectors) - -### `reload` - -If you add or delete a nar file in a connector folder, reload the available builtin connector before using it. - -#### Source - -Use the `reload` subcommand. - -```shell - -$ pulsar-admin sources reload - -``` - -For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/). - -#### Sink - -Use the `reload` subcommand. - -```shell - -$ pulsar-admin sinks reload - -``` - -For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/). - -### `available` - -After reloading connectors (optional), you can get a list of available connectors. - -#### Source - -Use the `available-sources` subcommand. - -```shell - -$ pulsar-admin sources available-sources - -``` - -#### Sink - -Use the `available-sinks` subcommand. - -```shell - -$ pulsar-admin sinks available-sinks - -``` - -## Run a connector - -To run a connector, you can perform the following operations: - -* [Create a connector](#create) - -* [Start a connector](#start) - -* [Run a connector locally](#localrun) - -### `create` - -You can create a connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Create a source connector. - -````mdx-code-block - - - - -Use the `create` subcommand. - -``` - -$ pulsar-admin sources create options - -``` - -For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/). - - - - -Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/registerSource?version=@pulsar:version_number@} - - - - -* Create a source connector with a **local file**. - - ```java - - void createSource(SourceConfig sourceConfig, - String fileName) - throws PulsarAdminException - - ``` - - **Parameter** - - |Name|Description - |---|--- - `sourceConfig` | The source configuration object - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`createSource`](/api/admin/org/apache/pulsar/client/admin/Source.html#createSource-SourceConfig-java.lang.String-). - -* Create a source connector using a **remote file** with a URL from which fun-pkg can be downloaded. 
- - ```java - - void createSourceWithUrl(SourceConfig sourceConfig, - String pkgUrl) - throws PulsarAdminException - - ``` - - Supported URLs are `http` and `file`. - - **Example** - - * HTTP: http://www.repo.com/fileName.jar - - * File: file:///dir/fileName.jar - - **Parameter** - - Parameter| Description - |---|--- - `sourceConfig` | The source configuration object - `pkgUrl` | URL from which pkg can be downloaded - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`createSourceWithUrl`](/api/admin/org/apache/pulsar/client/admin/Source.html#createSourceWithUrl-SourceConfig-java.lang.String-). - - - - -```` - -#### Sink - -Create a sink connector. - -````mdx-code-block - - - - -Use the `create` subcommand. - -``` - -$ pulsar-admin sinks create options - -``` - -For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/). - - - - -Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sinkName|operation/registerSink?version=@pulsar:version_number@} - - - - -* Create a sink connector with a **local file**. - - ```java - - void createSink(SinkConfig sinkConfig, - String fileName) - throws PulsarAdminException - - ``` - - **Parameter** - - |Name|Description - |---|--- - `sinkConfig` | The sink configuration object - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`createSink`](/api/admin/org/apache/pulsar/client/admin/Sink.html#createSink-SinkConfig-java.lang.String-). - -* Create a sink connector using a **remote file** with a URL from which fun-pkg can be downloaded. - - ```java - - void createSinkWithUrl(SinkConfig sinkConfig, - String pkgUrl) - throws PulsarAdminException - - ``` - - Supported URLs are `http` and `file`. - - **Example** - - * HTTP: http://www.repo.com/fileName.jar - - * File: file:///dir/fileName.jar - - **Parameter** - - Parameter| Description - |---|--- - `sinkConfig` | The sink configuration object - `pkgUrl` | URL from which pkg can be downloaded - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`createSinkWithUrl`](/api/admin/org/apache/pulsar/client/admin/Sink.html#createSinkWithUrl-SinkConfig-java.lang.String-). - - - - -```` - -### `start` - -You can start a connector using **Admin CLI** or **REST API**. - -#### Source - -Start a source connector. - -````mdx-code-block - - - - -Use the `start` subcommand. - -``` - -$ pulsar-admin sources start options - -``` - -For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/). - - - - -* Start **all** source connectors. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/start|operation/startSource?version=@pulsar:version_number@} - -* Start a **specified** source connector. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/:instanceId/start|operation/startSource?version=@pulsar:version_number@} - - - - -```` - -#### Sink - -Start a sink connector. - -````mdx-code-block - - - - -Use the `start` subcommand. - -``` - -$ pulsar-admin sinks start options - -``` - -For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/). - - - - -* Start **all** sink connectors. 
- - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sinkName/start|operation/startSink?version=@pulsar:version_number@} - -* Start a **specified** sink connector. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sourceName/:instanceId/start|operation/startSink?version=@pulsar:version_number@} - - - - -```` - -### `localrun` - -You can run a connector locally rather than deploying it on a Pulsar cluster using **Admin CLI**. - -#### Source - -Run a source connector locally. - -````mdx-code-block - - - - -Use the `localrun` subcommand. - -``` - -$ pulsar-admin sources localrun options - -``` - -For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/). - - - - -```` - -#### Sink - -Run a sink connector locally. - -````mdx-code-block - - - - -Use the `localrun` subcommand. - -``` - -$ pulsar-admin sinks localrun options - -``` - -For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/). - - - - -```` - -## Monitor a connector - -To monitor a connector, you can perform the following operations: - -* [Get the information of a connector](#get) - -* [Get the list of all running connectors](#list) - -* [Get the current status of a connector](#status) - -### `get` - -You can get the information of a connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Get the information of a source connector. - -````mdx-code-block - - - - -Use the `get` subcommand. - -``` - -$ pulsar-admin sources get options - -``` - -For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/). - - - - -Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/getSourceInfo?version=@pulsar:version_number@} - - - - -```java - -SourceConfig getSource(String tenant, - String namespace, - String source) - throws PulsarAdminException - -``` - -**Example** - -This is a sourceConfig. - -```java - -{ - "tenant": "tenantName", - "namespace": "namespaceName", - "name": "sourceName", - "className": "className", - "topicName": "topicName", - "configs": {}, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "resources": { - "cpu": 1.0, - "ram": 1073741824, - "disk": 10737418240 - } -} - -``` - -This is a sourceConfig example. 
-
-## Monitor a connector
-
-To monitor a connector, you can perform the following operations:
-
-* [Get the information of a connector](#get)
-
-* [Get the list of all running connectors](#list)
-
-* [Get the current status of a connector](#status)
-
-### `get`
-
-You can get the information of a connector using **Admin CLI**, **REST API** or **Java admin API**.
-
-#### Source
-
-Get the information of a source connector.
-
-````mdx-code-block
-
-
-
-Use the `get` subcommand.
-
-```
-
-$ pulsar-admin sources get options
-
-```
-
-For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/).
-
-
-
-Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/getSourceInfo?version=@pulsar:version_number@}
-
-
-
-```java
-
-SourceConfig getSource(String tenant,
- String namespace,
- String source)
- throws PulsarAdminException
-
-```
-
-**Example**
-
-The following is a `SourceConfig` template.
-
-```json
-
-{
-  "tenant": "tenantName",
-  "namespace": "namespaceName",
-  "name": "sourceName",
-  "className": "className",
-  "topicName": "topicName",
-  "configs": {},
-  "parallelism": 1,
-  "processingGuarantees": "ATLEAST_ONCE",
-  "resources": {
-    "cpu": 1.0,
-    "ram": 1073741824,
-    "disk": 10737418240
-  }
-}
-
-```
-
-The following is a `SourceConfig` example.
-
-```json
-
-{
-  "tenant": "public",
-  "namespace": "default",
-  "name": "debezium-mysql-source",
-  "className": "org.apache.pulsar.io.debezium.mysql.DebeziumMysqlSource",
-  "topicName": "debezium-mysql-topic",
-  "configs": {
-    "database.user": "debezium",
-    "database.server.id": "184054",
-    "database.server.name": "dbserver1",
-    "database.port": "3306",
-    "database.hostname": "localhost",
-    "database.password": "dbz",
-    "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650",
-    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
-    "database.whitelist": "inventory",
-    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
-    "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory",
-    "pulsar.service.url": "pulsar://127.0.0.1:6650",
-    "database.history.pulsar.topic": "history-topic2"
-  },
-  "parallelism": 1,
-  "processingGuarantees": "ATLEAST_ONCE",
-  "resources": {
-    "cpu": 1.0,
-    "ram": 1073741824,
-    "disk": 10737418240
-  }
-}
-
-```
-
-**Exception**
-
-Exception name | Description
-|---|---
-`PulsarAdminException.NotAuthorizedException` | You don't have the admin permission
-`PulsarAdminException.NotFoundException` | Source doesn't exist
-`PulsarAdminException` | Unexpected error
-
-For more information, see [`getSource`](/api/admin/org/apache/pulsar/client/admin/Source.html#getSource-java.lang.String-java.lang.String-java.lang.String-).
-
-
-
-````
-
-#### Sink
-
-Get the information of a sink connector.
-
-````mdx-code-block
-
-
-
-Use the `get` subcommand.
-
-```
-
-$ pulsar-admin sinks get options
-
-```
-
-For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/).
-
-
-
-Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sinks/:tenant/:namespace/:sinkName|operation/getSinkInfo?version=@pulsar:version_number@}
-
-
-
-```java
-
-SinkConfig getSink(String tenant,
- String namespace,
- String sink)
- throws PulsarAdminException
-
-```
-
-**Example**
-
-The following is a `SinkConfig` template.
-
-```json
-
-{
-  "tenant": "tenantName",
-  "namespace": "namespaceName",
-  "name": "sinkName",
-  "className": "className",
-  "inputSpecs": {
-    "topicName": {
-      "isRegexPattern": false
-    }
-  },
-  "configs": {},
-  "parallelism": 1,
-  "processingGuarantees": "ATLEAST_ONCE",
-  "retainOrdering": false,
-  "autoAck": true
-}
-
-```
-
-The following is a `SinkConfig` example.
-
-```json
-
-{
-  "tenant": "public",
-  "namespace": "default",
-  "name": "pulsar-postgres-jdbc-sink",
-  "className": "org.apache.pulsar.io.jdbc.PostgresJdbcAutoSchemaSink",
-  "inputSpecs": {
-    "pulsar-postgres-jdbc-sink-topic": {
-      "isRegexPattern": false
-    }
-  },
-  "configs": {
-    "password": "password",
-    "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink",
-    "userName": "postgres",
-    "tableName": "pulsar_postgres_jdbc_sink"
-  },
-  "parallelism": 1,
-  "processingGuarantees": "ATLEAST_ONCE",
-  "retainOrdering": false,
-  "autoAck": true
-}
-
-```
-
-**Parameter description**
-
-Name| Description
-|---|---
-`tenant` | Tenant name
-`namespace` | Namespace name
-`sink` | Sink name
-
-For more information, see [`getSink`](/api/admin/org/apache/pulsar/client/admin/Sink.html#getSink-java.lang.String-java.lang.String-java.lang.String-).
-
-
-
-````
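-
-Putting the `get` calls above together, the following is a hedged sketch that fetches both configurations through a `PulsarAdmin` client; the service URL is an assumption, and the connector names are reused from the examples above:
-
-```java
-import org.apache.pulsar.client.admin.PulsarAdmin;
-import org.apache.pulsar.common.io.SinkConfig;
-import org.apache.pulsar.common.io.SourceConfig;
-
-public class GetConnectorExample {
-    public static void main(String[] args) throws Exception {
-        try (PulsarAdmin admin = PulsarAdmin.builder()
-                .serviceHttpUrl("http://localhost:8080") // assumed URL
-                .build()) {
-            // Fetch a source and a sink configuration by name.
-            SourceConfig source = admin.sources().getSource("public", "default", "debezium-mysql-source");
-            SinkConfig sink = admin.sinks().getSink("public", "default", "pulsar-postgres-jdbc-sink");
-            System.out.println("source topic: " + source.getTopicName());
-            System.out.println("sink inputs: " + sink.getInputSpecs().keySet());
-        }
-    }
-}
-```
-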
-
-### `list`
-
-You can get the list of all running connectors using **Admin CLI**, **REST API** or **Java admin API**.
-
-#### Source
-
-Get the list of all running source connectors.
-
-````mdx-code-block
-
-
-
-Use the `list` subcommand.
-
-```
-
-$ pulsar-admin sources list options
-
-```
-
-For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/).
-
-
-
-Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sources/:tenant/:namespace|operation/listSources?version=@pulsar:version_number@}
-
-
-
-```java
-
-List<String> listSources(String tenant,
- String namespace)
- throws PulsarAdminException
-
-```
-
-**Response example**
-
-```java
-
-["f1", "f2", "f3"]
-
-```
-
-**Exception**
-
-Exception name | Description
-|---|---
-`PulsarAdminException.NotAuthorizedException` | You don't have the admin permission
-`PulsarAdminException` | Unexpected error
-
-For more information, see [`listSources`](/api/admin/org/apache/pulsar/client/admin/Source.html#listSources-java.lang.String-java.lang.String-).
-
-
-
-````
-
-#### Sink
-
-Get the list of all running sink connectors.
-
-````mdx-code-block
-
-
-
-Use the `list` subcommand.
-
-```
-
-$ pulsar-admin sinks list options
-
-```
-
-For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/).
-
-
-
-Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sinks/:tenant/:namespace|operation/listSinks?version=@pulsar:version_number@}
-
-
-
-```java
-
-List<String> listSinks(String tenant,
- String namespace)
- throws PulsarAdminException
-
-```
-
-**Response example**
-
-```java
-
-["f1", "f2", "f3"]
-
-```
-
-**Exception**
-
-Exception name | Description
-|---|---
-`PulsarAdminException.NotAuthorizedException` | You don't have the admin permission
-`PulsarAdminException` | Unexpected error
-
-For more information, see [`listSinks`](/api/admin/org/apache/pulsar/client/admin/Sink.html#listSinks-java.lang.String-java.lang.String-).
-
-
-
-````
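-
-As a small sketch of the `list` calls above (the service URL, tenant, and namespace are assumptions for illustration):
-
-```java
-import java.util.List;
-import org.apache.pulsar.client.admin.PulsarAdmin;
-
-public class ListConnectorsExample {
-    public static void main(String[] args) throws Exception {
-        try (PulsarAdmin admin = PulsarAdmin.builder()
-                .serviceHttpUrl("http://localhost:8080") // assumed URL
-                .build()) {
-            // Both calls return the connector names as a list of strings.
-            List<String> sources = admin.sources().listSources("public", "default");
-            List<String> sinks = admin.sinks().listSinks("public", "default");
-            sources.forEach(name -> System.out.println("source: " + name));
-            sinks.forEach(name -> System.out.println("sink: " + name));
-        }
-    }
-}
-```
-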
-
-### `status`
-
-You can get the current status of a connector using **Admin CLI**, **REST API** or **Java admin API**.
-
-#### Source
-
-Get the current status of a source connector.
-
-````mdx-code-block
-
-
-
-Use the `status` subcommand.
-
-```
-
-$ pulsar-admin sources status options
-
-```
-
-For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/).
-
-
-
-* Get the current status of **all** source connectors.
-
- Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sources/:tenant/:namespace/:sourceName/status|operation/getSourceStatus?version=@pulsar:version_number@}
-
-* Get the current status of a **specified** source connector.
-
- Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sources/:tenant/:namespace/:sourceName/:instanceId/status|operation/getSourceStatus?version=@pulsar:version_number@}
-
-
-
-* Get the current status of **all** source connectors.
-
- ```java
-
- SourceStatus getSourceStatus(String tenant,
- String namespace,
- String source)
- throws PulsarAdminException
-
- ```
-
- **Parameter**
-
- Parameter| Description
- |---|---
- `tenant` | Tenant name
- `namespace` | Namespace name
- `source` | Source name
-
- **Exception**
-
- Name | Description
- |---|---
- `PulsarAdminException` | Unexpected error
-
- For more information, see [`getSourceStatus`](/api/admin/org/apache/pulsar/client/admin/Source.html#getSourceStatus-java.lang.String-java.lang.String-java.lang.String-).
-
-* Get the current status of a **specified** source connector.
-
- ```java
-
- SourceStatus.SourceInstanceStatus.SourceInstanceStatusData getSourceStatus(String tenant,
- String namespace,
- String source,
- int id)
- throws PulsarAdminException
-
- ```
-
- **Parameter**
-
- Parameter| Description
- |---|---
- `tenant` | Tenant name
- `namespace` | Namespace name
- `source` | Source name
- `id` | Source instanceID
-
- **Exception**
-
- Exception name | Description
- |---|---
- `PulsarAdminException` | Unexpected error
-
- For more information, see [`getSourceStatus`](/api/admin/org/apache/pulsar/client/admin/Source.html#getSourceStatus-java.lang.String-java.lang.String-java.lang.String-int-).
-
-
-
-````
-
-#### Sink
-
-Get the current status of a sink connector.
-
-````mdx-code-block
-
-
-
-Use the `status` subcommand.
-
-```
-
-$ pulsar-admin sinks status options
-
-```
-
-For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/).
-
-
-
-* Get the current status of **all** sink connectors.
-
- Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sinks/:tenant/:namespace/:sinkName/status|operation/getSinkStatus?version=@pulsar:version_number@}
-
-* Get the current status of a **specified** sink connector.
-
- Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sinks/:tenant/:namespace/:sinkName/:instanceId/status|operation/getSinkInstanceStatus?version=@pulsar:version_number@}
-
-
-
-* Get the current status of **all** sink connectors.
-
- ```java
-
- SinkStatus getSinkStatus(String tenant,
- String namespace,
- String sink)
- throws PulsarAdminException
-
- ```
-
- **Parameter**
-
- Parameter| Description
- |---|---
- `tenant` | Tenant name
- `namespace` | Namespace name
- `sink` | Sink name
-
- **Exception**
-
- Exception name | Description
- |---|---
- `PulsarAdminException` | Unexpected error
-
- For more information, see [`getSinkStatus`](/api/admin/org/apache/pulsar/client/admin/Sink.html#getSinkStatus-java.lang.String-java.lang.String-java.lang.String-).
-
-* Get the current status of a **specified** sink connector.
-
- ```java
-
- SinkStatus.SinkInstanceStatus.SinkInstanceStatusData getSinkStatus(String tenant,
- String namespace,
- String sink,
- int id)
- throws PulsarAdminException
-
- ```
-
- **Parameter**
-
- Parameter| Description
- |---|---
- `tenant` | Tenant name
- `namespace` | Namespace name
- `sink` | Sink name
- `id` | Sink instanceID
-
- **Exception**
-
- Exception name | Description
- |---|---
- `PulsarAdminException` | Unexpected error
-
- For more information, see [`getSinkStatusWithInstanceID`](/api/admin/org/apache/pulsar/client/admin/Sink.html#getSinkStatus-java.lang.String-java.lang.String-java.lang.String-int-).
-
-
-
-````
-
-## Update a connector
-
-### `update`
-
-You can update a running connector using **Admin CLI**, **REST API** or **Java admin API**.
-
-#### Source
-
-Update a running Pulsar source connector.
-
-````mdx-code-block
-
-
-
-Use the `update` subcommand.
-
-```
-
-$ pulsar-admin sources update options
-
-```
-
-For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/).
-
-
-
-Send a `PUT` request to this endpoint: {@inject: endpoint|PUT|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/updateSource?version=@pulsar:version_number@}
-
-
-
-* Update a running source connector with a **local file**.
- - ```java - - void updateSource(SourceConfig sourceConfig, - String fileName) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - |`sourceConfig` | The source configuration object - - **Exception** - - |Name|Description| - |---|--- - |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission - | `PulsarAdminException.NotFoundException` | Cluster doesn't exist - | `PulsarAdminException` | Unexpected error - - For more information, see [`updateSource`](/api/admin/org/apache/pulsar/client/admin/Source.html#updateSource-SourceConfig-java.lang.String-). - -* Update a source connector using a **remote file** with a URL from which fun-pkg can be downloaded. - - ```java - - void updateSourceWithUrl(SourceConfig sourceConfig, - String pkgUrl) - throws PulsarAdminException - - ``` - - Supported URLs are `http` and `file`. - - **Example** - - * HTTP: http://www.repo.com/fileName.jar - - * File: file:///dir/fileName.jar - - **Parameter** - - | Name | Description - |---|--- - | `sourceConfig` | The source configuration object - | `pkgUrl` | URL from which pkg can be downloaded - - **Exception** - - |Name|Description| - |---|--- - |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission - | `PulsarAdminException.NotFoundException` | Cluster doesn't exist - | `PulsarAdminException` | Unexpected error - -For more information, see [`createSourceWithUrl`](/api/admin/org/apache/pulsar/client/admin/Source.html#updateSourceWithUrl-SourceConfig-java.lang.String-). - - - - -```` - -#### Sink - -Update a running Pulsar sink connector. - -````mdx-code-block - - - - -Use the `update` subcommand. - -``` - -$ pulsar-admin sinks update options - -``` - -For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/). - - - - -Send a `PUT` request to this endpoint: {@inject: endpoint|PUT|/admin/v3/sinks/:tenant/:namespace/:sinkName|operation/updateSink?version=@pulsar:version_number@} - - - - -* Update a running sink connector with a **local file**. - - ```java - - void updateSink(SinkConfig sinkConfig, - String fileName) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - |`sinkConfig` | The sink configuration object - - **Exception** - - |Name|Description| - |---|--- - |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission - | `PulsarAdminException.NotFoundException` | Cluster doesn't exist - | `PulsarAdminException` | Unexpected error - - For more information, see [`updateSink`](/api/admin/org/apache/pulsar/client/admin/Sink.html#updateSink-SinkConfig-java.lang.String-). - -* Update a sink connector using a **remote file** with a URL from which fun-pkg can be downloaded. - - ```java - - void updateSinkWithUrl(SinkConfig sinkConfig, - String pkgUrl) - throws PulsarAdminException - - ``` - - Supported URLs are `http` and `file`. 
-
- **Example**
-
- * HTTP: http://www.repo.com/fileName.jar
-
- * File: file:///dir/fileName.jar
-
- **Parameter**
-
- | Name | Description
- |---|---
- | `sinkConfig` | The sink configuration object
- | `pkgUrl` | URL from which the package can be downloaded
-
- **Exception**
-
- |Name|Description|
- |---|---
- |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission
- |`PulsarAdminException.NotFoundException` | Sink doesn't exist
- |`PulsarAdminException` | Unexpected error
-
-For more information, see [`updateSinkWithUrl`](/api/admin/org/apache/pulsar/client/admin/Sink.html#updateSinkWithUrl-SinkConfig-java.lang.String-).
-
-
-
-````
-
-## Stop a connector
-
-### `stop`
-
-You can stop a connector using **Admin CLI**, **REST API** or **Java admin API**.
-
-#### Source
-
-Stop a source connector.
-
-````mdx-code-block
-
-
-
-Use the `stop` subcommand.
-
-```
-
-$ pulsar-admin sources stop options
-
-```
-
-For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/).
-
-
-
-* Stop **all** source connectors.
-
- Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/stop|operation/stopSource?version=@pulsar:version_number@}
-
-* Stop a **specified** source connector.
-
- Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/:instanceId/stop|operation/stopSource?version=@pulsar:version_number@}
-
-
-
-* Stop **all** source connectors.
-
- ```java
-
- void stopSource(String tenant,
- String namespace,
- String source)
- throws PulsarAdminException
-
- ```
-
- **Parameter**
-
- | Name | Description
- |---|---
- `tenant` | Tenant name
- `namespace` | Namespace name
- `source` | Source name
-
- **Exception**
-
- |Name|Description|
- |---|---
- | `PulsarAdminException` | Unexpected error
-
- For more information, see [`stopSource`](/api/admin/org/apache/pulsar/client/admin/Source.html#stopSource-java.lang.String-java.lang.String-java.lang.String-).
-
-* Stop a **specified** source connector.
-
- ```java
-
- void stopSource(String tenant,
- String namespace,
- String source,
- int instanceId)
- throws PulsarAdminException
-
- ```
-
- **Parameter**
-
- | Name | Description
- |---|---
- `tenant` | Tenant name
- `namespace` | Namespace name
- `source` | Source name
- `instanceId` | Source instanceID
-
- **Exception**
-
- |Name|Description|
- |---|---
- | `PulsarAdminException` | Unexpected error
-
- For more information, see [`stopSource`](/api/admin/org/apache/pulsar/client/admin/Source.html#stopSource-java.lang.String-java.lang.String-java.lang.String-int-).
-
-
-
-````
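-
-For the Java admin API, the following hedged sketch stops every instance of a source and then a single instance by its instance ID; the source name is reused from the `get` example and the service URL is an assumption:
-
-```java
-import org.apache.pulsar.client.admin.PulsarAdmin;
-
-public class StopSourceExample {
-    public static void main(String[] args) throws Exception {
-        try (PulsarAdmin admin = PulsarAdmin.builder()
-                .serviceHttpUrl("http://localhost:8080") // assumed URL
-                .build()) {
-            // Stop all instances of the source ...
-            admin.sources().stopSource("public", "default", "debezium-mysql-source");
-            // ... or stop only instance 0.
-            admin.sources().stopSource("public", "default", "debezium-mysql-source", 0);
-        }
-    }
-}
-```
-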
-
-#### Sink
-
-Stop a sink connector.
-
-````mdx-code-block
-
-
-
-Use the `stop` subcommand.
-
-```
-
-$ pulsar-admin sinks stop options
-
-```
-
-For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/).
-
-
-
-* Stop **all** sink connectors.
-
- Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sinkName/stop|operation/stopSink?version=@pulsar:version_number@}
-
-* Stop a **specified** sink connector.
-
- Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sinkName/:instanceId/stop|operation/stopSink?version=@pulsar:version_number@}
-
-
-
-* Stop **all** sink connectors.
-
- ```java
-
- void stopSink(String tenant,
- String namespace,
- String sink)
- throws PulsarAdminException
-
- ```
-
- **Parameter**
-
- | Name | Description
- |---|---
- `tenant` | Tenant name
- `namespace` | Namespace name
- `sink` | Sink name
-
- **Exception**
-
- |Name|Description|
- |---|---
- | `PulsarAdminException` | Unexpected error
-
- For more information, see [`stopSink`](/api/admin/org/apache/pulsar/client/admin/Sink.html#stopSink-java.lang.String-java.lang.String-java.lang.String-).
-
-* Stop a **specified** sink connector.
-
- ```java
-
- void stopSink(String tenant,
- String namespace,
- String sink,
- int instanceId)
- throws PulsarAdminException
-
- ```
-
- **Parameter**
-
- | Name | Description
- |---|---
- `tenant` | Tenant name
- `namespace` | Namespace name
- `sink` | Sink name
- `instanceId` | Sink instanceID
-
- **Exception**
-
- |Name|Description|
- |---|---
- | `PulsarAdminException` | Unexpected error
-
- For more information, see [`stopSink`](/api/admin/org/apache/pulsar/client/admin/Sink.html#stopSink-java.lang.String-java.lang.String-java.lang.String-int-).
-
-
-
-````
-
-## Restart a connector
-
-### `restart`
-
-You can restart a connector using **Admin CLI**, **REST API** or **Java admin API**.
-
-#### Source
-
-Restart a source connector.
-
-````mdx-code-block
-
-
-
-Use the `restart` subcommand.
-
-```
-
-$ pulsar-admin sources restart options
-
-```
-
-For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/).
-
-
-
-* Restart **all** source connectors.
-
- Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/restart|operation/restartSource?version=@pulsar:version_number@}
-
-* Restart a **specified** source connector.
-
- Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/:instanceId/restart|operation/restartSource?version=@pulsar:version_number@}
-
-
-
-* Restart **all** source connectors.
-
- ```java
-
- void restartSource(String tenant,
- String namespace,
- String source)
- throws PulsarAdminException
-
- ```
-
- **Parameter**
-
- | Name | Description
- |---|---
- `tenant` | Tenant name
- `namespace` | Namespace name
- `source` | Source name
-
- **Exception**
-
- |Name|Description|
- |---|---
- | `PulsarAdminException` | Unexpected error
-
- For more information, see [`restartSource`](/api/admin/org/apache/pulsar/client/admin/Source.html#restartSource-java.lang.String-java.lang.String-java.lang.String-).
-
-* Restart a **specified** source connector.
-
- ```java
-
- void restartSource(String tenant,
- String namespace,
- String source,
- int instanceId)
- throws PulsarAdminException
-
- ```
-
- **Parameter**
-
- | Name | Description
- |---|---
- `tenant` | Tenant name
- `namespace` | Namespace name
- `source` | Source name
- `instanceId` | Source instanceID
-
- **Exception**
-
- |Name|Description|
- |---|---
- | `PulsarAdminException` | Unexpected error
-
- For more information, see [`restartSource`](/api/admin/org/apache/pulsar/client/admin/Source.html#restartSource-java.lang.String-java.lang.String-java.lang.String-int-).
-
-
-
-````
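-
-The following is a hedged Java sketch of the restart calls above, again assuming the illustrative service URL and the source name from the `get` example:
-
-```java
-import org.apache.pulsar.client.admin.PulsarAdmin;
-
-public class RestartSourceExample {
-    public static void main(String[] args) throws Exception {
-        try (PulsarAdmin admin = PulsarAdmin.builder()
-                .serviceHttpUrl("http://localhost:8080") // assumed URL
-                .build()) {
-            // Restart all instances of the source, then one specific instance.
-            admin.sources().restartSource("public", "default", "debezium-mysql-source");
-            admin.sources().restartSource("public", "default", "debezium-mysql-source", 0);
-        }
-    }
-}
-```
-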
-
-#### Sink
-
-Restart a sink connector.
-
-````mdx-code-block
-
-
-
-Use the `restart` subcommand.
-
-```
-
-$ pulsar-admin sinks restart options
-
-```
-
-For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/).
-
-
-
-* Restart **all** sink connectors.
-
- Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sinkName/restart|operation/restartSink?version=@pulsar:version_number@}
-
-* Restart a **specified** sink connector.
-
- Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sinkName/:instanceId/restart|operation/restartSink?version=@pulsar:version_number@}
-
-
-
-* Restart **all** sink connectors.
-
- ```java
-
- void restartSink(String tenant,
- String namespace,
- String sink)
- throws PulsarAdminException
-
- ```
-
- **Parameter**
-
- | Name | Description
- |---|---
- `tenant` | Tenant name
- `namespace` | Namespace name
- `sink` | Sink name
-
- **Exception**
-
- |Name|Description|
- |---|---
- | `PulsarAdminException` | Unexpected error
-
- For more information, see [`restartSink`](/api/admin/org/apache/pulsar/client/admin/Sink.html#restartSink-java.lang.String-java.lang.String-java.lang.String-).
-
-* Restart a **specified** sink connector.
-
- ```java
-
- void restartSink(String tenant,
- String namespace,
- String sink,
- int instanceId)
- throws PulsarAdminException
-
- ```
-
- **Parameter**
-
- | Name | Description
- |---|---
- `tenant` | Tenant name
- `namespace` | Namespace name
- `sink` | Sink name
- `instanceId` | Sink instanceID
-
- **Exception**
-
- |Name|Description|
- |---|---
- | `PulsarAdminException` | Unexpected error
-
- For more information, see [`restartSink`](/api/admin/org/apache/pulsar/client/admin/Sink.html#restartSink-java.lang.String-java.lang.String-java.lang.String-int-).
-
-
-
-````
-
-## Delete a connector
-
-### `delete`
-
-You can delete a connector using **Admin CLI**, **REST API** or **Java admin API**.
-
-#### Source
-
-Delete a source connector.
-
-````mdx-code-block
-
-
-
-Use the `delete` subcommand.
-
-```
-
-$ pulsar-admin sources delete options
-
-```
-
-For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/).
-
-
-
-Delete a source connector.
-
-Send a `DELETE` request to this endpoint: {@inject: endpoint|DELETE|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/deregisterSource?version=@pulsar:version_number@}
-
-
-
-Delete a source connector.
-
-```java
-
-void deleteSource(String tenant,
- String namespace,
- String source)
- throws PulsarAdminException
-
-```
-
-**Parameter**
-
-| Name | Description
-|---|---
-`tenant` | Tenant name
-`namespace` | Namespace name
-`source` | Source name
-
-**Exception**
-
-|Name|Description|
-|---|---
-|`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission
-| `PulsarAdminException.NotFoundException` | Source doesn't exist
-| `PulsarAdminException.PreconditionFailedException` | The request precondition is not met
-| `PulsarAdminException` | Unexpected error
-
-For more information, see [`deleteSource`](/api/admin/org/apache/pulsar/client/admin/Source.html#deleteSource-java.lang.String-java.lang.String-java.lang.String-).
-
-
-
-````
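-
-As a hedged sketch of deleting a source through the Java admin API (service URL and source name are the same illustrative assumptions as above), note that a missing source surfaces as `PulsarAdminException.NotFoundException`:
-
-```java
-import org.apache.pulsar.client.admin.PulsarAdmin;
-import org.apache.pulsar.client.admin.PulsarAdminException;
-
-public class DeleteSourceExample {
-    public static void main(String[] args) throws Exception {
-        try (PulsarAdmin admin = PulsarAdmin.builder()
-                .serviceHttpUrl("http://localhost:8080") // assumed URL
-                .build()) {
-            try {
-                admin.sources().deleteSource("public", "default", "debezium-mysql-source");
-            } catch (PulsarAdminException.NotFoundException e) {
-                // The source does not exist (or was already deleted).
-                System.err.println("source not found: " + e.getMessage());
-            }
-        }
-    }
-}
-```
-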
-
-#### Sink
-
-Delete a sink connector.
-
-````mdx-code-block
-
-
-
-Use the `delete` subcommand.
-
-```
-
-$ pulsar-admin sinks delete options
-
-```
-
-For the latest and complete information, see [Pulsar admin docs](/tools/pulsar-admin/).
-
-
-
-Delete a sink connector.
-
-Send a `DELETE` request to this endpoint: {@inject: endpoint|DELETE|/admin/v3/sinks/:tenant/:namespace/:sinkName|operation/deregisterSink?version=@pulsar:version_number@}
-
-
-
-Delete a sink connector.
-
-```java
-
-void deleteSink(String tenant,
- String namespace,
- String sink)
- throws PulsarAdminException
-
-```
-
-**Parameter**
-
-| Name | Description
-|---|---
-`tenant` | Tenant name
-`namespace` | Namespace name
-`sink` | Sink name
-
-**Exception**
-
-|Name|Description|
-|---|---
-|`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission
-| `PulsarAdminException.NotFoundException` | Sink doesn't exist
-| `PulsarAdminException.PreconditionFailedException` | The request precondition is not met
-| `PulsarAdminException` | Unexpected error
-
-For more information, see [`deleteSink`](/api/admin/org/apache/pulsar/client/admin/Sink.html#deleteSink-java.lang.String-java.lang.String-java.lang.String-).
-
-
-
-````
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/performance-pulsar-perf.md b/site2/website/versioned_docs/version-2.10.1-deprecated/performance-pulsar-perf.md
deleted file mode 100644
index 986fce84f0770f..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/performance-pulsar-perf.md
+++ /dev/null
@@ -1,282 +0,0 @@
----
-id: performance-pulsar-perf
-title: Pulsar Perf
-sidebar_label: "Pulsar Perf"
-original_id: performance-pulsar-perf
----
-
-The Pulsar Perf is a built-in performance test tool for Apache Pulsar. You can use it to test message writing or reading performance. For detailed information about performance tuning, see [here](https://streamnative.io/en/blog/tech/2021-01-14-pulsar-architecture-performance-tuning).
-
-## Produce messages
-
-:::tip
-
-For the latest and complete information about `pulsar-perf`, including commands, flags, descriptions, and more, see [`pulsar-perf`](/tools/pulsar-perf/) or [here](reference-cli-tools.md#pulsar-perf).
-
-:::
-
-- This example shows how the Pulsar Perf produces messages with **default** options.
-
- **Input**
-
- ```
-
- bin/pulsar-perf produce my-topic
-
- ```
-
- After the command is executed, the test data is continuously output on the Console.
-
- **Output**
-
- ```
-
- 19:53:31.459 [pulsar-perf-producer-exec-1-1] INFO org.apache.pulsar.testclient.PerformanceProducer - Created 1 producers
- 19:53:31.482 [pulsar-timer-5-1] WARN com.scurrilous.circe.checksum.Crc32cIntChecksum - Failed to load Circe JNI library.
Falling back to Java based CRC32c provider - 19:53:40.861 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 93.7 msg/s --- 0.7 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.575 ms - med: 3.460 - 95pct: 4.790 - 99pct: 5.308 - 99.9pct: 5.834 - 99.99pct: 6.609 - Max: 6.609 - 19:53:50.909 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.437 ms - med: 3.328 - 95pct: 4.656 - 99pct: 5.071 - 99.9pct: 5.519 - 99.99pct: 5.588 - Max: 5.588 - 19:54:00.926 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.376 ms - med: 3.276 - 95pct: 4.520 - 99pct: 4.939 - 99.9pct: 5.440 - 99.99pct: 5.490 - Max: 5.490 - 19:54:10.940 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.298 ms - med: 3.220 - 95pct: 4.474 - 99pct: 4.926 - 99.9pct: 5.645 - 99.99pct: 5.654 - Max: 5.654 - 19:54:20.956 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.1 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.308 ms - med: 3.199 - 95pct: 4.532 - 99pct: 4.871 - 99.9pct: 5.291 - 99.99pct: 5.323 - Max: 5.323 - 19:54:30.972 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.249 ms - med: 3.144 - 95pct: 4.437 - 99pct: 4.970 - 99.9pct: 5.329 - 99.99pct: 5.414 - Max: 5.414 - 19:54:40.987 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.435 ms - med: 3.361 - 95pct: 4.772 - 99pct: 5.150 - 99.9pct: 5.373 - 99.99pct: 5.837 - Max: 5.837 - ^C19:54:44.325 [Thread-1] INFO org.apache.pulsar.testclient.PerformanceProducer - Aggregated throughput stats --- 7286 records sent --- 99.140 msg/s --- 0.775 Mbit/s - 19:54:44.336 [Thread-1] INFO org.apache.pulsar.testclient.PerformanceProducer - Aggregated latency stats --- Latency: mean: 3.383 ms - med: 3.293 - 95pct: 4.610 - 99pct: 5.059 - 99.9pct: 5.588 - 99.99pct: 5.837 - 99.999pct: 6.609 - Max: 6.609 - - ``` - - From the above test data, you can get the throughput statistics and the write latency statistics. The aggregated statistics are printed when the Pulsar Perf is stopped. You can press **Ctrl**+**C** to stop the Pulsar Perf. If you specify a filename with the `--histogram-file` parameter, a file with the [HdrHistogram](http://hdrhistogram.github.io/HdrHistogram/) formatted test result appears under your directory after Pulsar Perf is stopped. You can also check the test result through [HdrHistogram Plotter](https://hdrhistogram.github.io/HdrHistogram/plotFiles.html). For details about how to check the test result through [HdrHistogram Plotter](https://hdrhistogram.github.io/HdrHistogram/plotFiles.html), see [HdrHistogram Plotter](#hdrhistogram-plotter). -- This example shows how the Pulsar Perf produces messages with `transaction` option. 
- - **Input** - - ```shell - - bin/pulsar-perf produce my-topic -r 10 -m 100 -txn - - ``` - - **Output** - - ```shell - - 2021-10-11T13:36:15,595+0800 INFO [Thread-3] o.a.p.t.PerformanceProducer@499 - --- Transaction : 2 transaction end successfully ---0 transaction end failed --- 0.200 Txn/s - - 2021-10-11T13:36:15,614+0800 INFO [Thread-3] o.a.p.t.PerformanceProducer@503 - Throughput produced: 100 msg --- 0.0 msg/s --- 0.1 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.067 ms - med: 3.104 - 95pct: 3.747 - 99pct: 4.619 - 99.9pct: 6.760 - 99.99pct: 6.760 - Max: 6.760 - - 2021-10-11T13:36:15,710+0800 INFO [pulsar-perf-producer-exec-46-1] o.a.p.t.PerformanceProducer@834 - Aggregated latency stats --- Latency: mean: 3.067 ms - med: 3.104 - 95pct: 3.747 - 99pct: 4.619 - 99.9pct: 6.760 - 99.99pct: 6.760 - 99.999pct: 6.760 - Max: 6.760 - - 2021-10-11T13:36:29,976+0800 INFO [Thread-4] o.a.p.t.PerformanceProducer@815 - --- Transaction : 2 transaction end successfully --- 0 transaction end failed --- 2 transaction open successfully --- 0 transaction open failed --- 12.237 Txn/s - - 2021-10-11T13:36:29,976+0800 INFO [Thread-4] o.a.p.t.PerformanceProducer@824 - Aggregated throughput stats --- 102 records sent --- 4.168 msg/s --- 0.033 Mbit/s - - ``` - -## Consume messages - -:::tip - -For the latest and complete information about `pulsar-perf`, including commands, flags, descriptions, and more, see [`pulsar-perf`](/tools/pulsar-perf/) or [here](reference-cli-tools.md#pulsar-perf). - -::: - -- This example shows how the Pulsar Perf consumes messages with **default** options. - - **Input** - - :::note - - If you have not created a topic (in this example, it is _my-topic_) before, the broker creates a new topic without partitions and messages, then the consumer can not receive any messages. Consequently, before using `pulsar-perf consume`, make sure your topic has enough messages to consume. - - ::: - - ``` - - bin/pulsar-perf consume my-topic - - ``` - - After the command is executed, the test data is continuously output on the Console. - - **Output** - - ``` - - 20:35:37.071 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Start receiving from 1 consumers on 1 topics - 20:35:41.150 [pulsar-client-io-1-9] WARN com.scurrilous.circe.checksum.Crc32cIntChecksum - Failed to load Circe JNI library. 
Falling back to Java based CRC32c provider - 20:35:47.092 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 59.572 msg/s -- 0.465 Mbit/s --- Latency: mean: 11.298 ms - med: 10 - 95pct: 15 - 99pct: 98 - 99.9pct: 137 - 99.99pct: 152 - Max: 152 - 20:35:57.104 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 99.958 msg/s -- 0.781 Mbit/s --- Latency: mean: 9.176 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 18 - Max: 18 - 20:36:07.115 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 100.006 msg/s -- 0.781 Mbit/s --- Latency: mean: 9.316 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 17 - Max: 17 - 20:36:17.125 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 100.085 msg/s -- 0.782 Mbit/s --- Latency: mean: 9.327 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 17 - Max: 17 - 20:36:27.136 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 99.900 msg/s -- 0.780 Mbit/s --- Latency: mean: 9.404 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 17 - Max: 17 - 20:36:37.147 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 99.985 msg/s -- 0.781 Mbit/s --- Latency: mean: 8.998 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 17 - Max: 17 - ^C20:36:42.755 [Thread-1] INFO org.apache.pulsar.testclient.PerformanceConsumer - Aggregated throughput stats --- 6051 records received --- 92.125 msg/s --- 0.720 Mbit/s - 20:36:42.759 [Thread-1] INFO org.apache.pulsar.testclient.PerformanceConsumer - Aggregated latency stats --- Latency: mean: 9.422 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 98 - 99.99pct: 137 - 99.999pct: 152 - Max: 152 - - ``` - - From the output test data, you can get the throughput statistics and the end-to-end latency statistics. The aggregated statistics is printed after the Pulsar Perf is stopped. You can press **Ctrl**+**C** to stop the Pulsar Perf. - -- This example shows how the Pulsar Perf consumes messages with `transaction` option. - - **Input** - - ```shell - - bin/pulsar-perf consume my-topic -r 10 -txn -ss mysubName -st Exclusive -sp Earliest -ntxn 10 - - ``` - - :::note - - If you have not created a topic (in this example, it is _my-topic_) before, the broker creates a new topic without partitions and messages, then the consumer can not receive any messages. Consequently, before using `pulsar-perf consume`, make sure your topic has enough messages to consume. 
- - ::: - - - **Output** - - ```shell - - 2021-10-11T13:43:36,052+0800 INFO [Thread-3] o.a.p.t.PerformanceConsumer@538 - --- Transaction: 6 transaction end successfully --- 0 transaction end failed --- 0.199 Txn/s --- AckRate: 9.952 msg/s - - 2021-10-11T13:43:36,065+0800 INFO [Thread-3] o.a.p.t.PerformanceConsumer@545 - Throughput received: 306 msg --- 9.952 msg/s -- 0.000 Mbit/s --- Latency: mean: 26177.380 ms - med: 26128 - 95pct: 30531 - 99pct: 30923 - 99.9pct: 31021 - 99.99pct: 31021 - Max: 31021 - - 2021-10-11T13:43:59,854+0800 INFO [Thread-5] o.a.p.t.PerformanceConsumer@579 - -- Transaction: 10 transaction end successfully --- 0 transaction end failed --- 10 transaction open successfully --- 0 transaction open failed --- 0.185 Txn/s - - 2021-10-11T13:43:59,854+0800 INFO [Thread-5] o.a.p.t.PerformanceConsumer@588 - Aggregated throughput stats --- 505 records received --- 9.345 msg/s --- 0.000 Mbit/s--- AckRate: 9.27065308842743 msg/s --- ack failed 4 msg - - 2021-10-11T13:43:59,882+0800 INFO [Thread-5] o.a.p.t.PerformanceConsumer@601 - Aggregated latency stats --- Latency: mean: 50593.000 ms - med: 50593 - 95pct: 50593 - 99pct: 50593 - 99.9pct: 50593 - 99.99pct: 50593 - 99.999pct: 50593 - Max: 50593 - - ``` - -## Transactions - -This section shows how Pulsar Perf runs transactions. For more information, see [Pulsar transactions](txn-why.md). - -### Use transaction - -This example executes 50 transactions. Each transaction sends and receives 1 message (default). - -**Input** - -```shell - -bin/pulsar-perf transaction --topics-c myConsumerTopic --topics-p MyproduceTopic -threads 1 -ntxn 50 -ss testSub -nmp 1 -nmc 1 - -``` - -:::note - -If you have not created a topic (in this example, it is _myConsumerTopic_) before, the broker creates a new topic without partitions and messages, then the consumer can not receive any messages. Consequently, before using `pulsar-perf transaction`, make sure your topic has enough messages to consume. - -::: - -**Output** - -```shell - -2021-10-11T14:37:27,863+0800 INFO [Thread-5] o.a.p.t.PerformanceProducer@613 - Messages ack aggregated latency stats --- Latency: mean: 29.239 ms - med: 26.799 - 95pct: 46.696 - 99pct: 55.660 - 99.9pct: 55.660 - 99.99pct: 55.660 - 99.999pct: 55.660 - Max: 55.660 {} - -2021-10-11T14:37:19,391+0800 INFO [Thread-4] o.a.p.t.PerformanceProducer@525 - Throughput transaction: 50 transaction executes --- 4.999 transaction/s ---send Latency: mean: 31.368 ms - med: 28.369 - 95pct: 55.631 - 99pct: 57.764 - 99.9pct: 57.764 - 99.99pct: 57.764 - Max: 57.764---ack Latency: mean: 29.239 ms - med: 26.799 - 95pct: 46.696 - 99pct: 55.660 - 99.9pct: 55.660 - 99.99pct: 55.660 - Max: 55.660 {} - -2021-10-11T14:37:26,625+0800 INFO [Thread-5] o.a.p.t.PerformanceProducer@571 - Aggregated throughput stats --- 50 transaction executed --- 2.718 transaction/s --- 50 transaction open successfully --- 0 transaction open failed --- 50 transaction end successfully --- 0 transaction end failed--- 0 message ack failed --- 0 message send failed--- 50 message ack success --- 50 message send success {} - -``` - -### Disable Transaction - -This example disables transactions. - -**Input** - -```shell - -bin/pulsar-perf transaction --topics-c myConsumerTopic --topics-p myproduceTopic -threads 1 -ntxn 50 -ss testSub --txn-disEnable - -``` - -:::note - -If you have not created a topic (in this example, it is _myConsumerTopic_) before, the broker creates a new topic without partitions and messages, then the consumer can not receive any messages. 
Consequently, before using `pulsar-perf transaction --txn-disEnable`, make sure your topic has enough messages to consume.
-
-:::
-
-**Output**
-
-```shell
-
-2021-10-11T16:48:26,876+0800 INFO [Thread-4] o.a.p.t.PerformanceProducer@529 - Throughput task: 50 task executes --- 4.999 task/s ---send Latency: mean: 10.002 ms - med: 9.875 - 95pct: 11.733 - 99pct: 15.995 - 99.9pct: 15.995 - 99.99pct: 15.995 - Max: 15.995---ack Latency: mean: 0.051 ms - med: 0.020 - 95pct: 0.059 - 99pct: 1.377 - 99.9pct: 1.377 - 99.99pct: 1.377 - Max: 1.377
-
-2021-10-11T16:48:29,222+0800 INFO [Thread-5] o.a.p.t.PerformanceProducer@617 - Messages ack aggregated latency stats --- Latency: mean: 0.051 ms - med: 0.020 - 95pct: 0.059 - 99pct: 1.377 - 99.9pct: 1.377 - 99.99pct: 1.377 - 99.999pct: 1.377 - Max: 1.377
-
-2021-10-11T16:48:29,246+0800 INFO [Thread-5] o.a.p.t.PerformanceProducer@629 - Messages send aggregated latency stats --- Latency: mean: 10.002 ms - med: 9.875 - 95pct: 11.733 - 99pct: 15.995 - 99.9pct: 15.995 - 99.99pct: 15.995 - 99.999pct: 15.995 - Max: 15.995
-
-2021-10-11T16:48:29,117+0800 INFO [Thread-5] o.a.p.t.PerformanceProducer@602 - Aggregated throughput stats --- 50 task executed --- 4.025 task/s --- 0 message ack failed --- 0 message send failed--- 50 message ack success --- 50 message send success
-
-```
-
-## Configurations
-
-By default, the Pulsar Perf uses `conf/client.conf` as the default configuration and uses `conf/log4j2.yaml` as the default Log4j configuration. If you want to connect to other Pulsar clusters, you can update the `brokerServiceUrl` in the client configuration.
-
-You can use the following commands to change the configuration file and the Log4j configuration file (replace the placeholders with the paths to your own files):
-
-```
-
-export PULSAR_CLIENT_CONF=<path-to-client-conf-file>
-export PULSAR_LOG_CONF=<path-to-log4j2-conf-file>
-
-```
-
-In addition, you can use the following command to configure the JVM configuration through environment variables:
-
-```
-
-export PULSAR_EXTRA_OPTS='-Xms4g -Xmx4g -XX:MaxDirectMemorySize=4g'
-
-```
-
-## HdrHistogram Plotter
-
-The [HdrHistogram Plotter](https://hdrhistogram.github.io/HdrHistogram/plotFiles.html) is a visualization tool for checking Pulsar Perf test results, which makes the results easier to interpret.
-
-To check test results through the HdrHistogram Plotter, follow these steps:
-
-1. Clone the HdrHistogram repository from GitHub to your local machine.
-
- ```
-
- git clone https://github.com/HdrHistogram/HdrHistogram.git
-
- ```
-
-2. Switch to the HdrHistogram folder.
-
- ```
-
- cd HdrHistogram
-
- ```
-
-3. Install the HdrHistogram Plotter.
-
- ```
-
- mvn clean install -DskipTests
-
- ```
-
-4. Transform the file generated by the Pulsar Perf (replace the placeholders with your input and output file names).
-
- ```
-
- ./HistogramLogProcessor -i <pulsar-perf-histogram-file> -o <output-file>
-
- ```
-
-5. You will get two output files. Upload the output file with the filename extension of .hgrm to the [HdrHistogram Plotter](https://hdrhistogram.github.io/HdrHistogram/plotFiles.html).
-
-6. Check the test result through the Graphical User Interface of the HdrHistogram Plotter, as shown below.
- - ![](/assets/perf-produce.png) diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/reference-cli-tools.md b/site2/website/versioned_docs/version-2.10.1-deprecated/reference-cli-tools.md deleted file mode 100644 index 1e426501e23a3d..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/reference-cli-tools.md +++ /dev/null @@ -1,1039 +0,0 @@ ---- -id: reference-cli-tools -title: Pulsar command-line tools -sidebar_label: "Pulsar CLI tools" -original_id: reference-cli-tools ---- - -Pulsar offers several command-line tools that you can use for managing Pulsar installations, performance testing, using command-line producers and consumers, and more. - -All Pulsar command-line tools can be run from the `bin` directory of your [installed Pulsar package](getting-started-standalone.md). The following tools are currently documented: - -* [`pulsar`](#pulsar) -* [`pulsar-client`](#pulsar-client) -* [`pulsar-daemon`](#pulsar-daemon) -* [`pulsar-perf`](#pulsar-perf) -* [`bookkeeper`](#bookkeeper) -* [`broker-tool`](#broker-tool) - -> **Important** -> -> - This page only shows **some frequently used commands**. For the latest information about `pulsar`, `pulsar-client`, and `pulsar-perf`, including commands, flags, descriptions, and more information, see [Pulsar tools](/tools/). -> -> - You can get help for any CLI tool, command, or subcommand using the `--help` flag, or `-h` for short. Here's an example: -> - -> ```shell -> -> $ bin/pulsar broker --help -> -> -> ``` - - -## `pulsar` - -The pulsar tool is used to start Pulsar components, such as bookies and ZooKeeper, in the foreground. - -These processes can also be started in the background, using nohup, using the pulsar-daemon tool, which has the same command interface as pulsar. - -Usage: - -```bash - -$ pulsar command - -``` - -Commands: -* `bookie` -* `broker` -* `compact-topic` -* `configuration-store` -* `initialize-cluster-metadata` -* `proxy` -* `standalone` -* `websocket` -* `zookeeper` -* `zookeeper-shell` -* `autorecovery` - -Example: - -```bash - -$ PULSAR_BROKER_CONF=/path/to/broker.conf pulsar broker - -``` - -The table below lists the environment variables that you can use to configure the `pulsar` tool. 
-
-|Variable|Description|Default|
-|---|---|---|
-|`PULSAR_LOG_CONF`|Log4j configuration file|`conf/log4j2.yaml`|
-|`PULSAR_BROKER_CONF`|Configuration file for broker|`conf/broker.conf`|
-|`PULSAR_BOOKKEEPER_CONF`|Configuration file for bookie|`conf/bookkeeper.conf`|
-|`PULSAR_ZK_CONF`|Configuration file for ZooKeeper|`conf/zookeeper.conf`|
-|`PULSAR_CONFIGURATION_STORE_CONF`|Configuration file for the configuration store|`conf/global_zookeeper.conf`|
-|`PULSAR_WEBSOCKET_CONF`|Configuration file for websocket proxy|`conf/websocket.conf`|
-|`PULSAR_STANDALONE_CONF`|Configuration file for standalone|`conf/standalone.conf`|
-|`PULSAR_EXTRA_OPTS`|Extra options to be passed to the JVM||
-|`PULSAR_EXTRA_CLASSPATH`|Extra paths for Pulsar's classpath||
-|`PULSAR_PID_DIR`|Folder where the pulsar server PID file should be stored||
-|`PULSAR_STOP_TIMEOUT`|Wait time before forcefully killing the server instance if attempts to stop it are not successful||
-|`PULSAR_GC_LOG`|GC options to be passed to the JVM||
-
-
-### `bookie`
-
-Starts up a bookie server
-
-Usage:
-
-```bash
-
-$ pulsar bookie options
-
-```
-
-Options
-
-|Option|Description|Default|
-|---|---|---|
-|`-readOnly`|Force start a read-only bookie server|false|
-|`-withAutoRecovery`|Start a bookie server with the auto-recovery service|false|
-
-
-Example
-
-```bash
-
-$ PULSAR_BOOKKEEPER_CONF=/path/to/bookkeeper.conf pulsar bookie \
- -readOnly \
- -withAutoRecovery
-
-```
-
-### `broker`
-
-Starts up a Pulsar broker
-
-Usage
-
-```bash
-
-$ pulsar broker options
-
-```
-
-Options
-
-|Option|Description|Default|
-|---|---|---|
-|`-bc` , `--bookie-conf`|Configuration file for BookKeeper||
-|`-rb` , `--run-bookie`|Run a BookKeeper bookie on the same host as the Pulsar broker|false|
-|`-ra` , `--run-bookie-autorecovery`|Run a BookKeeper autorecovery daemon on the same host as the Pulsar broker|false|
-
-Example
-
-```bash
-
-$ PULSAR_BROKER_CONF=/path/to/broker.conf pulsar broker
-
-```
-
-### `compact-topic`
-
-Run compaction against a Pulsar topic (in a new process)
-
-Usage
-
-```bash
-
-$ pulsar compact-topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-t` , `--topic`|The Pulsar topic that you would like to compact||
-
-Example
-
-```bash
-
-$ pulsar compact-topic --topic topic-to-compact
-
-```
-
-### `configuration-store`
-
-Starts up the Pulsar configuration store
-
-Usage
-
-```bash
-
-$ pulsar configuration-store
-
-```
-
-Example
-
-```bash
-
-$ PULSAR_CONFIGURATION_STORE_CONF=/path/to/configuration_store.conf pulsar configuration-store
-
-```
-
-### `initialize-cluster-metadata`
-
-One-time cluster metadata initialization
-
-Usage
-
-```bash
-
-$ pulsar initialize-cluster-metadata options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-ub` , `--broker-service-url`|The broker service URL for the new cluster||
-|`-tb` , `--broker-service-url-tls`|The broker service URL for the new cluster with TLS encryption||
-|`-c` , `--cluster`|Cluster name||
-|`-cms` , `--configuration-metadata-store`|The configuration metadata store quorum connection string||
-|`--existing-bk-metadata-service-uri`|The metadata service URI of the existing BookKeeper cluster that you want to use||
-|`-h` , `--help`|Help message|false|
-|`--initial-num-stream-storage-containers`|The number of storage containers of BookKeeper stream storage|16|
-|`--initial-num-transaction-coordinators`|The number of transaction coordinators assigned in a cluster|16|
-|`-uw` , `--web-service-url`|The web service URL for the
new cluster|| -|`-tw` , `--web-service-url-tls`|The web service URL for the new cluster with TLS encryption|| -|`-md` , `--metadata-store`|The metadata store service url|| -|`--zookeeper-session-timeout-ms`|The local ZooKeeper session timeout. The time unit is in millisecond(ms)|30000| - - -### `proxy` - -Manages the Pulsar proxy - -Usage - -```bash - -$ pulsar proxy options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-cms`, `--configuration-metadata-store`|Configuration metadata store connection string|| -|`-md` , `--metadata-store`|Metadata Store service url|| - -Example - -```bash - -$ PULSAR_PROXY_CONF=/path/to/proxy.conf pulsar proxy \ - --metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181 \ - --configuration-metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181 - -``` - -### `standalone` - -Run a broker service with local bookies and local ZooKeeper - -Usage - -```bash - -$ pulsar standalone options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-a` , `--advertised-address`|The standalone broker advertised address|| -|`--bookkeeper-dir`|Local bookies’ base data directory|data/standalone/bookkeeper| -|`--bookkeeper-port`|Local bookies’ base port|3181| -|`--no-broker`|Only start ZooKeeper and BookKeeper services, not the broker|false| -|`--num-bookies`|The number of local bookies|1| -|`--only-broker`|Only start the Pulsar broker service (not ZooKeeper or BookKeeper)|| -|`--wipe-data`|Clean up previous ZooKeeper/BookKeeper data|| -|`--zookeeper-dir`|Local ZooKeeper’s data directory|data/standalone/zookeeper| -|`--zookeeper-port` |Local ZooKeeper’s port|2181| - -Example - -```bash - -$ PULSAR_STANDALONE_CONF=/path/to/standalone.conf pulsar standalone - -``` - -### `websocket` - -Usage - -```bash - -$ pulsar websocket - -``` - -Example - -```bash - -$ PULSAR_WEBSOCKET_CONF=/path/to/websocket.conf pulsar websocket - -``` - -### `zookeeper` - -Starts up a ZooKeeper cluster - -Usage - -```bash - -$ pulsar zookeeper - -``` - -Example - -```bash - -$ PULSAR_ZK_CONF=/path/to/zookeeper.conf pulsar zookeeper - -``` - -### `zookeeper-shell` - -Connects to a running ZooKeeper cluster using the ZooKeeper shell - -Usage - -```bash - -$ pulsar zookeeper-shell options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-c`, `--conf`|Configuration file for ZooKeeper|| -|`-server`|Configuration zk address, eg: `127.0.0.1:2181`|| - -### `autorecovery` - -Runs an auto-recovery service. - -Usage - -```bash - -$ pulsar autorecovery options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-c`, `--conf`|Configuration for the autorecovery|N/A| - - -## `pulsar-client` - -The pulsar-client tool - -Usage - -```bash - -$ pulsar-client command - -``` - -Commands -* `produce` -* `consume` - - -Options - -|Flag|Description|Default| -|---|---|---| -|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class, for example "key1:val1,key2:val2" or "{\"key1\":\"val1\",\"key2\":\"val2\"}"|{"saslJaasClientSectionName":"PulsarClient", "serverType":"broker"}| -|`--auth-plugin`|Authentication plugin class name|org.apache.pulsar.client.impl.auth.AuthenticationSasl| -|`--listener-name`|Listener name for the broker|| -|`--proxy-protocol`|Proxy protocol to select type of routing at proxy|| -|`--proxy-url`|Proxy-server URL to which to connect|| -|`--url`|Broker URL to which to connect|pulsar://localhost:6650/
    ws://localhost:8080 | -| `-v`, `--version` | Get the version of the Pulsar client -|`-h`, `--help`|Show this help - - -### `produce` -Send a message or messages to a specific broker and topic - -Usage - -```bash - -$ pulsar-client produce topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-f`, `--files`|Comma-separated file paths to send; either -m or -f must be specified|[]| -|`-m`, `--messages`|Comma-separated string of messages to send; either -m or -f must be specified|[]| -|`-n`, `--num-produce`|The number of times to send the message(s); the count of messages/files * num-produce should be below 1000|1| -|`-r`, `--rate`|Rate (in messages per second) at which to produce; a value 0 means to produce messages as fast as possible|0.0| -|`-db`, `--disable-batching`|Disable batch sending of messages|false| -|`-c`, `--chunking`|Split the message and publish in chunks if the message size is larger than the allowed max size|false| -|`-s`, `--separator`|Character to split messages string with.|","| -|`-k`, `--key`|Message key to add|key=value string, like k1=v1,k2=v2.| -|`-p`, `--properties`|Properties to add. If you want to add multiple properties, use the comma as the separator, e.g. `k1=v1,k2=v2`.| | -|`-ekn`, `--encryption-key-name`|The public key name to encrypt payload.| | -|`-ekv`, `--encryption-key-value`|The URI of public key to encrypt payload. For example, `file:///path/to/public.key` or `data:application/x-pem-file;base64,*****`.| | - - -### `consume` -Consume messages from a specific broker and topic - -Usage - -```bash - -$ pulsar-client consume topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--hex`|Display binary messages in hexadecimal format.|false| -|`-n`, `--num-messages`|Number of messages to consume, 0 means to consume forever.|1| -|`-r`, `--rate`|Rate (in messages per second) at which to consume; a value 0 means to consume messages as fast as possible|0.0| -|`--regex`|Indicate the topic name is a regex pattern|false| -|`-s`, `--subscription-name`|Subscription name|| -|`-t`, `--subscription-type`|The type of the subscription. Possible values: Exclusive, Shared, Failover, Key_Shared.|Exclusive| -|`-p`, `--subscription-position`|The position of the subscription. Possible values: Latest, Earliest.|Latest| -|`-m`, `--subscription-mode`|Subscription mode. Possible values: Durable, NonDurable.|Durable| -|`-q`, `--queue-size`|The size of consumer's receiver queue.|0| -|`-mc`, `--max_chunked_msg`|Max pending chunk messages.|0| -|`-ac`, `--auto_ack_chunk_q_full`|Auto ack for the oldest message in consumer's receiver queue if the queue full.|false| -|`--hide-content`|Do not print the message to the console.|false| -|`-st`, `--schema-type`|Set the schema type. Use `auto_consume` to dump AVRO and other structured data types. Possible values: bytes, auto_consume.|bytes| -|`-ekv`, `--encryption-key-value`|The URI of public key to encrypt payload. For example, `file:///path/to/public.key` or `data:application/x-pem-file;base64,*****`.| | -|`-pm`, `--pool-messages`|Use the pooled message.|true| - -## `pulsar-daemon` -A wrapper around the pulsar tool that’s used to start and stop processes, such as ZooKeeper, bookies, and Pulsar brokers, in the background using nohup. - -pulsar-daemon has a similar interface to the pulsar command but adds start and stop commands for various services. For a listing of those services, run pulsar-daemon to see the help output or see the documentation for the pulsar command. 
- -Usage - -```bash - -$ pulsar-daemon command - -``` - -Commands -* `start` -* `stop` -* `restart` - - -### `start` -Start a service in the background using nohup. - -Usage - -```bash - -$ pulsar-daemon start service - -``` - -### `stop` -Stop a service that’s already been started using start. - -Usage - -```bash - -$ pulsar-daemon stop service options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|-force|Stop the service forcefully if not stopped by normal shutdown.|false| - -### `restart` -Restart a service that has already been started. - -```bash - -$ pulsar-daemon restart service - -``` - -## `pulsar-perf` -A tool for performance testing a Pulsar broker. - -Usage - -```bash - -$ pulsar-perf command - -``` - -Commands -* `consume` -* `produce` -* `read` -* `websocket-producer` -* `managed-ledger` -* `monitor-brokers` -* `simulation-client` -* `simulation-controller` -* `transaction` -* `help` - -Environment variables - -The table below lists the environment variables that you can use to configure the pulsar-perf tool. - -|Variable|Description|Default| -|---|---|---| -|`PULSAR_LOG_CONF`|Log4j configuration file|conf/log4j2.yaml| -|`PULSAR_CLIENT_CONF`|Configuration file for the client|conf/client.conf| -|`PULSAR_EXTRA_OPTS`|Extra options to be passed to the JVM|| -|`PULSAR_EXTRA_CLASSPATH`|Extra paths for Pulsar's classpath|| -|`PULSAR_GC_LOG`|Gc options to be passed to the jvm|| - - -### `consume` -Run a consumer - -Usage - -``` - -$ pulsar-perf consume options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.|| -|`--auth-plugin`|Authentication plugin class name|| -|`-ac`, `--auto_ack_chunk_q_full`|Auto ack for the oldest message in consumer's receiver queue if the queue full|false| -|`--listener-name`|Listener name for the broker|| -|`--acks-delay-millis`|Acknowledgements grouping delay in millis|100| -|`--batch-index-ack`|Enable or disable the batch index acknowledgment|false| -|`-bw`, `--busy-wait`|Enable or disable Busy-Wait on the Pulsar client|false| -|`-v`, `--encryption-key-value-file`|The file which contains the private key to decrypt payload|| -|`-h`, `--help`|Help message|false| -|`-cf`, `--conf-file`|Configuration file|| -|`-m`, `--num-messages`|Number of messages to consume in total. 
If the value is equal to or smaller than 0, it keeps consuming messages.|0| -|`-e`, `--expire_time_incomplete_chunked_messages`|The expiration time for incomplete chunk messages (in milliseconds)|0| -|`-c`, `--max-connections`|Max number of TCP connections to a single broker|100| -|`-mc`, `--max_chunked_msg`|Max pending chunk messages|0| -|`-n`, `--num-consumers`|Number of consumers (per topic)|1| -|`-ioThreads`, `--num-io-threads`|Set the number of threads to be used for handling connections to brokers|1| -|`-lt`, `--num-listener-threads`|Set the number of threads to be used for message listeners|1| -|`-ns`, `--num-subscriptions`|Number of subscriptions (per topic)|1| -|`-t`, `--num-topics`|The number of topics|1| -|`-pm`, `--pool-messages`|Use the pooled message|true| -|`-r`, `--rate`|Simulate a slow message consumer (rate in msg/s)|0| -|`-q`, `--receiver-queue-size`|Size of the receiver queue|1000| -|`-p`, `--receiver-queue-size-across-partitions`|Max total size of the receiver queue across partitions|50000| -|`--replicated`|Whether the subscription status should be replicated|false| -|`-u`, `--service-url`|Pulsar service URL|| -|`-i`, `--stats-interval-seconds`|Statistics interval seconds. If 0, statistics will be disabled|0| -|`-s`, `--subscriber-name`|Subscriber name prefix|| -|`-ss`, `--subscriptions`|A list of subscriptions to consume on (e.g. sub1,sub2)|sub| -|`-st`, `--subscription-type`|Subscriber type. Possible values are Exclusive, Shared, Failover, Key_Shared.|Exclusive| -|`-sp`, `--subscription-position`|Subscriber position. Possible values are Latest, Earliest.|Latest| -|`-time`, `--test-duration`|Test duration (in seconds). If this value is less than or equal to 0, it keeps consuming messages.|0| -|`--trust-cert-file`|Path for the trusted TLS certificate file|| -|`--tls-allow-insecure`|Allow insecure TLS connection|| - -Below are **transaction** related options. - -If you want `--txn-timeout`, `--numMessage-perTransaction`, `-nmt`, `-ntxn`, or `-abort` take effect, set `--txn-enable` to true. - -|Flag|Description|Default| -|---|---|---| -`-tto`, `--txn-timeout`|Set the time of transaction timeout (in second). |10 -`-nmt`, `--numMessage-perTransaction`|The number of messages acknowledged by a transaction. |50 -`-txn`, `--txn-enable`|Enable or disable a transaction.|false -`-ntxn`|The number of opened transactions. 0 means the number of transactions is unlimited. |0 -`-abort`|Abort a transaction. |true - -### `produce` -Run a producer - -Usage - -```bash - -$ pulsar-perf produce options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-am`, `--access-mode`|Producer access mode. Valid values are `Shared`, `Exclusive` and `WaitForExclusive`|Shared| -|`-au`, `--admin-url`|Pulsar admin URL|| -|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. 
For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.|| -|`--auth-plugin`|Authentication plugin class name|| -|`--listener-name`|Listener name for the broker|| -|`-b`, `--batch-time-window`|Batch messages in a window of the specified number of milliseconds|1| -|`-bb`, `--batch-max-bytes`|Maximum number of bytes per batch|4194304| -|`-bm`, `--batch-max-messages`|Maximum number of messages per batch|1000| -|`-bw`, `--busy-wait`|Enable or disable Busy-Wait on the Pulsar client|false| -|`-ch`, `--chunking`|Split the message and publish in chunks if the message size is larger than allowed max size|false| -|`-d`, `--delay`|Mark messages with a given delay in seconds|0s| -|`-z`, `--compression`|Compress messages’ payload. Possible values are NONE, LZ4, ZLIB, ZSTD or SNAPPY.|| -|`-cf`, `--conf-file`|Configuration file|| -|`-k`, `--encryption-key-name`|The public key name to encrypt payload|| -|`-v`, `--encryption-key-value-file`|The file which contains the public key to encrypt payload|| -|`-ef`, `--exit-on-failure`|Exit from the process on publish failure|false| -|`-fc`, `--format-class`|Custom Formatter class name|org.apache.pulsar.testclient.DefaultMessageFormatter| -|`-fp`, `--format-payload`|Format %i as a message index in the stream from producer and/or %t as the timestamp nanoseconds|false| -|`-h`, `--help`|Help message|false| -|`-c`, `--max-connections`|Max number of TCP connections to a single broker|100| -|`-o`, `--max-outstanding`|Max number of outstanding messages|1000| -|`-p`, `--max-outstanding-across-partitions`|Max number of outstanding messages across partitions|50000| -|`-m`, `--num-messages`|Number of messages to publish in total. If this value is less than or equal to 0, it keeps publishing messages.|0| -|`-mk`, `--message-key-generation-mode`|The generation mode of message key. Valid options are `autoIncrement`, `random`|| -|`-ioThreads`, `--num-io-threads`|Set the number of threads to be used for handling connections to brokers|1| -|`-n`, `--num-producers`|The number of producers (per topic)|1| -|`-threads`, `--num-test-threads`|Number of test threads|1| -|`-t`, `--num-topic`|The number of topics|1| -|`-np`, `--partitions`|Create partitioned topics with the given number of partitions. Setting this value to 0 means not trying to create a topic|| -|`-f`, `--payload-file`|Use payload from an UTF-8 encoded text file and a payload will be randomly selected when publishing messages|| -|`-e`, `--payload-delimiter`|The delimiter used to split lines when using payload from a file|\n| -|`-pn`, `--producer-name`|Producer Name|| -|`-r`, `--rate`|Publish rate msg/s across topics|100| -|`--send-timeout`|Set the sendTimeout|0| -|`--separator`|Separator between the topic and topic number|-| -|`-u`, `--service-url`|Pulsar service URL|| -|`-s`, `--size`|Message size (in bytes)|1024| -|`-i`, `--stats-interval-seconds`|Statistics interval seconds. If 0, statistics will be disabled.|0| -|`-time`, `--test-duration`|Test duration (in seconds). If this value is less than or equal to 0, it keeps publishing messages.|0| -|`--trust-cert-file`|Path for the trusted TLS certificate file|| -|`--warmup-time`|Warm-up time in seconds|1| -|`--tls-allow-insecure`|Allow insecure TLS connection|| - -Below are **transaction** related options. - -If you want `--txn-timeout`, `--numMessage-perTransaction`, or `-abort` take effect, set `--txn-enable` to true. - -|Flag|Description|Default| -|---|---|---| -`-tto`, `--txn-timeout`|Set the time of transaction timeout (in second). 
|5 -`-nmt`, `--numMessage-perTransaction`|The number of messages acknowledged by a transaction. |50 -`-txn`, `--txn-enable`|Enable or disable a transaction.|true -`-abort`|Abort a transaction. |true - -### `read` -Run a topic reader - -Usage - -```bash - -$ pulsar-perf read options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.|| -|`--auth-plugin`|Authentication plugin class name|| -|`--listener-name`|Listener name for the broker|| -|`-cf`, `--conf-file`|Configuration file|| -|`-h`, `--help`|Help message|false| -|`-n`, `--num-messages`|Number of messages to consume in total. If the value is equal to or smaller than 0, it keeps consuming messages.|0| -|`-c`, `--max-connections`|Max number of TCP connections to a single broker|100| -|`-ioThreads`, `--num-io-threads`|Set the number of threads to be used for handling connections to brokers|1| -|`-lt`, `--num-listener-threads`|Set the number of threads to be used for message listeners|1| -|`-t`, `--num-topics`|The number of topics|1| -|`-r`, `--rate`|Simulate a slow message reader (rate in msg/s)|0| -|`-q`, `--receiver-queue-size`|Size of the receiver queue|1000| -|`-u`, `--service-url`|Pulsar service URL|| -|`-m`, `--start-message-id`|Start message id. This can be either 'earliest', 'latest' or a specific message id by using 'lid:eid'|earliest| -|`-i`, `--stats-interval-seconds`|Statistics interval seconds. If 0, statistics will be disabled.|0| -|`-time`, `--test-duration`|Test duration (in seconds). If this value is less than or equal to 0, it keeps consuming messages.|0| -|`--trust-cert-file`|Path for the trusted TLS certificate file|| -|`--use-tls`|Use TLS encryption on the connection|false| -|`--tls-allow-insecure`|Allow insecure TLS connection|| - -### `websocket-producer` -Run a websocket producer - -Usage - -```bash - -$ pulsar-perf websocket-producer options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.|| -|`--auth-plugin`|Authentication plugin class name|| -|`-cf`, `--conf-file`|Configuration file|| -|`-h`, `--help`|Help message|false| -|`-m`, `--num-messages`|Number of messages to publish in total. If this value is less than or equal to 0, it keeps publishing messages.|0| -|`-t`, `--num-topic`|The number of topics|1| -|`-f`, `--payload-file`|Use payload from a file instead of empty buffer|| -|`-e`, `--payload-delimiter`|The delimiter used to split lines when using payload from a file|\n| -|`-fp`, `--format-payload`|Format %i as a message index in the stream from producer and/or %t as the timestamp nanoseconds|false| -|`-fc`, `--format-class`|Custom formatter class name|`org.apache.pulsar.testclient.DefaultMessageFormatter`| -|`-u`, `--proxy-url`|Pulsar Proxy URL, e.g., "ws://localhost:8080/"|| -|`-r`, `--rate`|Publish rate msg/s across topics|100| -|`-s`, `--size`|Message size in byte|1024| -|`-time`, `--test-duration`|Test duration (in seconds). 
If this value is less than or equal to 0, it keeps publishing messages.|0| - - -### `managed-ledger` -Write directly on managed-ledgers - -Usage - -```bash - -$ pulsar-perf managed-ledger options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-a`, `--ack-quorum`|Ledger ack quorum|1| -|`-dt`, `--digest-type`|BookKeeper digest type. Possible Values: [CRC32, MAC, CRC32C, DUMMY]|CRC32C| -|`-e`, `--ensemble-size`|Ledger ensemble size|1| -|`-h`, `--help`|Help message|false| -|`-c`, `--max-connections`|Max number of TCP connections to a single bookie|1| -|`-o`, `--max-outstanding`|Max number of outstanding requests|1000| -|`-m`, `--num-messages`|Number of messages to publish in total. If this value is less than or equal to 0, it keeps publishing messages.|0| -|`-t`, `--num-topic`|Number of managed ledgers|1| -|`-r`, `--rate`|Write rate msg/s across managed ledgers|100| -|`-s`, `--size`|Message size in byte|1024| -|`-time`, `--test-duration`|Test duration (in seconds). If this value is less than or equal to 0, it keeps publishing messages.|0| -|`--threads`|Number of threads writing|1| -|`-w`, `--write-quorum`|Ledger write quorum|1| -|`-md`, `--metadata-store`|Metadata store service URL. For example: zk:my-zk:2181|| - - -### `monitor-brokers` -Continuously receive broker data and/or load reports - -Usage - -```bash - -$ pulsar-perf monitor-brokers options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--connect-string`|A connection string for one or more ZooKeeper servers|| -|`-h`, `--help`|Help message|false| - - -### `simulation-client` -Run a simulation server acting as a Pulsar client. Uses the client configuration specified in `conf/client.conf`. - -Usage - -```bash - -$ pulsar-perf simulation-client options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--port`|Port to listen on for controller|0| -|`--service-url`|Pulsar Service URL|| -|`-h`, `--help`|Help message|false| - -### `simulation-controller` -Run a simulation controller to give commands to servers - -Usage - -```bash - -$ pulsar-perf simulation-controller options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--client-port`|The port that the clients are listening on|0| -|`--clients`|Comma-separated list of client hostnames|| -|`--cluster`|The cluster to test on|| -|`-h`, `--help`|Help message|false| - -### `transaction` - -Run a transaction. For more information, see [Pulsar transactions](txn-why.md). - -**Usage** - -```bash - -$ pulsar-perf transaction options - -``` - -**Options** - -|Flag|Description|Default| -|---|---|---| -`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.|N/A -`--auth-plugin`|Authentication plugin class name.|N/A -`-au`, `--admin-url`|Pulsar admin URL.|N/A -`-cf`, `--conf-file`|Configuration file.|N/A -`-h`, `--help`|Help messages.|N/A -`-c`, `--max-connections`|Maximum number of TCP connections to a single broker.|100 -`-ioThreads`, `--num-io-threads`|Set the number of threads to be used for handling connections to brokers. |1 -`-ns`, `--num-subscriptions`|Number of subscriptions per topic.|1 -`-threads`, `--num-test-threads`|Number of test threads.

    Each test thread opens a new transaction, acknowledges messages from the consumer topics, produces messages to the producer topics, and then commits or aborts that transaction.

    Increasing the number of threads increases the parallelism of the performance test and, consequently, the intensity of the stress test.|1 -`-nmc`, `--numMessage-perTransaction-consume`|Set the number of messages consumed in a transaction.

    If transactions are disabled, this is the number of messages consumed in a task rather than in a transaction.|1 -`-nmp`, `--numMessage-perTransaction-produce`|Set the number of messages produced in a transaction.

    If transactions are disabled, this is the number of messages produced in a task rather than in a transaction.|1 -`-ntxn`, `--number-txn`|Set the number of transactions.

    0 means the number of transactions is unlimited.

    If transactions are disabled, this is the number of tasks rather than transactions.|0 -`-np`, `--partitions`|Create partitioned topics with a given number of partitions.

    0 means the topic is not created. -`-q`, `--receiver-queue-size`|Size of the receiver queue.|1000 -`-u`, `--service-url`|Pulsar service URL.|N/A -`-sp`, `--subscription-position`|Subscription position.|Earliest -`-st`, `--subscription-type`|Subscription type.|Shared -`-ss`, `--subscriptions`|A list of subscriptions to consume.

    For example, sub1,sub2.|[sub] -`-time`, `--test-duration`|Test duration (in seconds).

    0 means messages are published continuously.|0 -`--topics-c`|All topics assigned to consumers.|[test-consume] -`--topics-p`|All topics assigned to producers.|[test-produce] -`--txn-disEnable`|Disable transactions.|true -`-tto`, `--txn-timeout`|Set the transaction timeout (in seconds).

    If you want `--txn-timeout` to take effect, set `--txn-disEnable` to false.|5 -`-abort`|Abort the transaction.

    If you want `-abort` to take effect, set `--txn-disEnable` to false.|true -`-txnRate`|Set the rate of opened transactions or tasks.

    0 means no limit.|0 -

### `help`
Print this help message.

Usage

```bash

$ pulsar-perf help

```

## `bookkeeper`
A tool for managing BookKeeper.

Usage

```bash

$ bookkeeper command

```

Commands
* `autorecovery`
* `bookie`
* `localbookie`
* `upgrade`
* `shell`


Environment variables

The table below lists the environment variables that you can use to configure the bookkeeper tool.

|Variable|Description|Default|
|---|---|---|
|BOOKIE_LOG_CONF|Log4j configuration file|conf/log4j2.yaml|
|BOOKIE_CONF|BookKeeper configuration file|conf/bk_server.conf|
|BOOKIE_EXTRA_OPTS|Extra options to be passed to the JVM||
|BOOKIE_EXTRA_CLASSPATH|Extra paths for BookKeeper's classpath||
|ENTRY_FORMATTER_CLASS|The Java class used to format entries||
|BOOKIE_PID_DIR|Folder where the BookKeeper server PID file should be stored||
|BOOKIE_STOP_TIMEOUT|Wait time before forcefully killing the Bookie server instance if attempts to stop it are not successful||
|BOOKIE_GC_LOG|GC options to be passed to the JVM||


### `autorecovery`
Runs an auto-recovery service

Usage

```bash

$ bookkeeper autorecovery options

```

Options

|Flag|Description|Default|
|---|---|---|
|`-c`, `--conf`|Configuration for the auto-recovery||


### `bookie`
Starts up a BookKeeper server (aka bookie)

Usage

```bash

$ bookkeeper bookie options

```

Options

|Flag|Description|Default|
|---|---|---|
|`-c`, `--conf`|Configuration for the bookie server||
|`-readOnly`|Force start a read-only bookie server|false|
|`-withAutoRecovery`|Start the bookie server with the auto-recovery service|false|


### `localbookie`
Runs a test ensemble of N bookies locally

Usage

```bash

$ bookkeeper localbookie N

```

### `upgrade`
Upgrade the bookie’s filesystem

Usage

```bash

$ bookkeeper upgrade options

```

Options

|Flag|Description|Default|
|---|---|---|
|`-c`, `--conf`|Configuration for the bookie server||
|`-u`, `--upgrade`|Upgrade the bookie’s directories||


### `shell`
Run a shell for admin commands. To see a full listing of those commands, run `bookkeeper shell` without an argument.

Usage

```bash

$ bookkeeper shell

```

Example

```bash

$ bookkeeper shell bookiesanity

```

## `broker-tool`

The `broker-tool` command is used for operations on a specific broker.

Usage

```bash

$ broker-tool command

```

Commands
* `load-report`
* `help`

Example
You can get more information about a command in either of the following ways:

```bash

$ broker-tool help command
$ broker-tool command --help

```

### `load-report`

Collect the load report of a specific broker.
Run this command on a broker to troubleshoot why the broker cannot collect the correct load report.
- -Options - -|Flag|Description|Default| -|---|---|---| -|`-i`, `--interval`| Interval to collect load report, in milliseconds || -|`-h`, `--help`| Display help information || - diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/reference-configuration.md b/site2/website/versioned_docs/version-2.10.1-deprecated/reference-configuration.md deleted file mode 100644 index 0f3ee813f83884..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/reference-configuration.md +++ /dev/null @@ -1,885 +0,0 @@ ---- -id: reference-configuration -title: Pulsar configuration -sidebar_label: "Pulsar configuration" -original_id: reference-configuration ---- - - - - -You can manage Pulsar configuration by configuration files in the [`conf`](https://github.com/apache/pulsar/tree/master/conf) directory of a Pulsar [installation](getting-started-standalone.md). - -- [BookKeeper](#bookkeeper) -- [Broker](#broker) -- [Client](#client) -- [Log4j](#log4j) -- [Log4j shell](#log4j-shell) -- [Standalone](#standalone) -- [WebSocket](#websocket) -- [Pulsar proxy](#pulsar-proxy) -- [ZooKeeper](#zookeeper) - -## BookKeeper - -BookKeeper is a replicated log storage system that Pulsar uses for durable storage of all messages. - - -|Name|Description|Default| -|---|---|---| -|bookiePort|The port on which the bookie server listens.|3181| -|allowLoopback|Whether the bookie is allowed to use a loopback interface as its primary interface (that is the interface used to establish its identity). By default, loopback interfaces are not allowed to work as the primary interface. Using a loopback interface as the primary interface usually indicates a configuration error. For example, it’s fairly common in some VPS setups to not configure a hostname or to have the hostname resolve to `127.0.0.1`. If this is the case, then all bookies in the cluster will establish their identities as `127.0.0.1:3181` and only one will be able to join the cluster. For VPSs configured like this, you should explicitly set the listening interface.|false| -|listeningInterface|The network interface on which the bookie listens. By default, the bookie listens on all interfaces.|eth0| -|advertisedAddress|Configure a specific hostname or IP address that the bookie should use to advertise itself to clients. By default, the bookie advertises either its own IP address or hostname according to the `listeningInterface` and `useHostNameAsBookieID` settings.|N/A| -|allowMultipleDirsUnderSameDiskPartition|Configure the bookie to enable/disable multiple ledger/index/journal directories in the same filesystem disk partition.|false| -|minUsableSizeForIndexFileCreation|The minimum safe usable size available in index directory for bookie to create index files while replaying journal at the time of bookie starts in Readonly Mode (in bytes).|1073741824| -|journalDirectory|The directory where BookKeeper outputs its write-ahead log (WAL).|data/bookkeeper/journal| -|journalDirectories|Directories that BookKeeper outputs its write ahead log. Multiple directories are available, being separated by `,`. For example: `journalDirectories=/tmp/bk-journal1,/tmp/bk-journal2`. If `journalDirectories` is set, the bookies skip `journalDirectory` and use this setting directory.|/tmp/bk-journal| -|ledgerDirectories|The directory where BookKeeper outputs ledger snapshots. This could define multiple directories to store snapshots separated by `,`, for example `ledgerDirectories=/tmp/bk1-data,/tmp/bk2-data`. 
Ideally, ledger dirs and the journal dir are each in a different device, which reduces the contention between random I/O and sequential write. It is possible to run with a single disk, but performance will be significantly lower.|data/bookkeeper/ledgers| -|ledgerManagerType|The type of ledger manager used to manage how ledgers are stored, managed, and garbage collected. See [BookKeeper Internals](http://bookkeeper.apache.org/docs/latest/getting-started/concepts) for more info.|hierarchical| -|zkLedgersRootPath|The root ZooKeeper path used to store ledger metadata. This parameter is used by the ZooKeeper-based ledger manager as a root znode to store all ledgers.|/ledgers| -|ledgerStorageClass|Ledger storage implementation class|org.apache.bookkeeper.bookie.storage.ldb.DbLedgerStorage| -|entryLogFilePreallocationEnabled|Enable or disable entry logger preallocation|true| -|logSizeLimit|Max file size of the entry logger, in bytes. A new entry log file will be created when the old one reaches the file size limitation.|1073741824| -|minorCompactionThreshold|Threshold of minor compaction. Entry log files whose remaining size percentage reaches below this threshold will be compacted in a minor compaction. If set to less than zero, the minor compaction is disabled.|0.2| -|minorCompactionInterval|Time interval to run minor compaction, in seconds. If set to less than zero, the minor compaction is disabled. Note: should be greater than gcWaitTime. |3600| -|majorCompactionThreshold|The threshold of major compaction. Entry log files whose remaining size percentage reaches below this threshold will be compacted in a major compaction. Those entry log files whose remaining size percentage is still higher than the threshold will never be compacted. If set to less than zero, the minor compaction is disabled.|0.5| -|majorCompactionInterval|The time interval to run major compaction, in seconds. If set to less than zero, the major compaction is disabled. Note: should be greater than gcWaitTime. |86400| -|readOnlyModeEnabled|If `readOnlyModeEnabled=true`, then on all full ledger disks, bookie will be converted to read-only mode and serve only read requests. Otherwise the bookie will be shutdown.|true| -|forceReadOnlyBookie|Whether the bookie is force started in read only mode.|false| -|persistBookieStatusEnabled|Persist the bookie status locally on the disks. So the bookies can keep their status upon restarts.|false| -|compactionMaxOutstandingRequests|Sets the maximum number of entries that can be compacted without flushing. When compacting, the entries are written to the entrylog and the new offsets are cached in memory. Once the entrylog is flushed the index is updated with the new offsets. This parameter controls the number of entries added to the entrylog before a flush is forced. A higher value for this parameter means more memory will be used for offsets. Each offset consists of 3 longs. This parameter should not be modified unless you’re fully aware of the consequences.|100000| -|compactionRate|The rate at which compaction will read entries, in adds per second.|1000| -|isThrottleByBytes|Throttle compaction by bytes or by entries.|false| -|compactionRateByEntries|The rate at which compaction will read entries, in adds per second.|1000| -|compactionRateByBytes|Set the rate at which compaction reads entries. The unit is bytes added per second.|1000000| -|journalMaxSizeMB|Max file size of journal file, in megabytes. 
A new journal file will be created when the old one reaches the file size limitation.|2048| -|journalMaxBackups|The max number of old journal files to keep. Keeping a number of old journal files would help data recovery in special cases.|5| -|journalPreAllocSizeMB|How space to pre-allocate at a time in the journal.|16| -|journalWriteBufferSizeKB|The of the write buffers used for the journal.|64| -|journalRemoveFromPageCache|Whether pages should be removed from the page cache after force write.|true| -|journalAdaptiveGroupWrites|Whether to group journal force writes, which optimizes group commit for higher throughput.|true| -|journalMaxGroupWaitMSec|The maximum latency to impose on a journal write to achieve grouping.|1| -|journalAlignmentSize|All the journal writes and commits should be aligned to given size|4096| -|journalBufferedWritesThreshold|Maximum writes to buffer to achieve grouping|524288| -|journalFlushWhenQueueEmpty|If we should flush the journal when journal queue is empty|false| -|numJournalCallbackThreads|The number of threads that should handle journal callbacks|8| -|openLedgerRereplicationGracePeriod | The grace period, in milliseconds, that the replication worker waits before fencing and replicating a ledger fragment that's still being written to upon bookie failure. | 30000 | -|rereplicationEntryBatchSize|The number of max entries to keep in fragment for re-replication|100| -|autoRecoveryDaemonEnabled|Whether the bookie itself can start auto-recovery service.|true| -|lostBookieRecoveryDelay|How long to wait, in seconds, before starting auto recovery of a lost bookie.|0| -|gcWaitTime|How long the interval to trigger next garbage collection, in milliseconds. Since garbage collection is running in background, too frequent gc will heart performance. It is better to give a higher number of gc interval if there is enough disk capacity.|900000| -|gcOverreplicatedLedgerWaitTime|How long the interval to trigger next garbage collection of overreplicated ledgers, in milliseconds. This should not be run very frequently since we read the metadata for all the ledgers on the bookie from zk.|86400000| -|flushInterval|How long the interval to flush ledger index pages to disk, in milliseconds. Flushing index files will introduce much random disk I/O. If separating journal dir and ledger dirs each on different devices, flushing would not affect performance. But if putting journal dir and ledger dirs on same device, performance degrade significantly on too frequent flushing. You can consider increment flush interval to get better performance, but you need to pay more time on bookie server restart after failure.|60000| -|bookieDeathWatchInterval|Interval to watch whether bookie is dead or not, in milliseconds|1000| -|allowStorageExpansion|Allow the bookie storage to expand. Newly added ledger and index dirs must be empty.|false| -|zkServers|A list of one of more servers on which zookeeper is running. The server list can be comma separated values, for example: zkServers=zk1:2181,zk2:2181,zk3:2181.|localhost:2181| -|zkTimeout|ZooKeeper client session timeout in milliseconds Bookie server will exit if it received SESSION_EXPIRED because it was partitioned off from ZooKeeper for more than the session timeout JVM garbage collection, disk I/O will cause SESSION_EXPIRED. 
Increment this value could help avoiding this issue|30000| -|zkRetryBackoffStartMs|The start time that the Zookeeper client backoff retries in milliseconds.|1000| -|zkRetryBackoffMaxMs|The maximum time that the Zookeeper client backoff retries in milliseconds.|10000| -|zkEnableSecurity|Set ACLs on every node written on ZooKeeper, allowing users to read and write BookKeeper metadata stored on ZooKeeper. In order to make ACLs work you need to setup ZooKeeper JAAS authentication. All the bookies and Client need to share the same user, and this is usually done using Kerberos authentication. See ZooKeeper documentation.|false| -|httpServerEnabled|The flag enables/disables starting the admin http server.|false| -|httpServerPort|The HTTP server port to listen on. By default, the value is `8080`. If you want to keep it consistent with the Prometheus stats provider, you can set it to `8000`.|8080 -|httpServerClass|The http server class.|org.apache.bookkeeper.http.vertx.VertxHttpServer| -|serverTcpNoDelay|This settings is used to enabled/disabled Nagle’s algorithm, which is a means of improving the efficiency of TCP/IP networks by reducing the number of packets that need to be sent over the network. If you are sending many small messages, such that more than one can fit in a single IP packet, setting server.tcpnodelay to false to enable Nagle algorithm can provide better performance.|true| -|serverSockKeepalive|This setting is used to send keep-alive messages on connection-oriented sockets.|true| -|serverTcpLinger|The socket linger timeout on close. When enabled, a close or shutdown will not return until all queued messages for the socket have been successfully sent or the linger timeout has been reached. Otherwise, the call returns immediately and the closing is done in the background.|0| -|byteBufAllocatorSizeMax|The maximum buf size of the received ByteBuf allocator.|1048576| -|nettyMaxFrameSizeBytes|The maximum netty frame size in bytes. Any message received larger than this will be rejected.|5253120| -|openFileLimit|Max number of ledger index files could be opened in bookie server If number of ledger index files reaches this limitation, bookie server started to swap some ledgers from memory to disk. Too frequent swap will affect performance. You can tune this number to gain performance according your requirements.|0| -|pageSize|Size of a index page in ledger cache, in bytes A larger index page can improve performance writing page to disk, which is efficient when you have small number of ledgers and these ledgers have similar number of entries. If you have large number of ledgers and each ledger has fewer entries, smaller index page would improve memory usage.|8192| -|pageLimit|How many index pages provided in ledger cache If number of index pages reaches this limitation, bookie server starts to swap some ledgers from memory to disk. You can increment this value when you found swap became more frequent. But make sure pageLimit*pageSize should not more than JVM max memory limitation, otherwise you would got OutOfMemoryException. In general, incrementing pageLimit, using smaller index page would gain better performance in lager number of ledgers with fewer entries case If pageLimit is -1, bookie server will use 1/3 of JVM memory to compute the limitation of number of index pages.|0| -|readOnlyModeEnabled|If all ledger directories configured are full, then support only read requests for clients. 
If "readOnlyModeEnabled=true" then on all ledger disks full, bookie will be converted to read-only mode and serve only read requests. Otherwise the bookie will be shutdown. By default this will be disabled.|true| -|diskUsageThreshold|For each ledger dir, maximum disk space which can be used. Default is 0.95f. i.e. 95% of disk can be used at most after which nothing will be written to that partition. If all ledger dir partitions are full, then bookie will turn to readonly mode if ‘readOnlyModeEnabled=true’ is set, else it will shutdown. Valid values should be in between 0 and 1 (exclusive).|0.95| -|diskCheckInterval|Disk check interval in milli seconds, interval to check the ledger dirs usage.|10000| -|auditorPeriodicCheckInterval|Interval at which the auditor will do a check of all ledgers in the cluster. By default this runs once a week. The interval is set in seconds. To disable the periodic check completely, set this to 0. Note that periodic checking will put extra load on the cluster, so it should not be run more frequently than once a day.|604800| -|sortedLedgerStorageEnabled|Whether sorted-ledger storage is enabled.|true| -|auditorPeriodicBookieCheckInterval|The interval between auditor bookie checks. The auditor bookie check, checks ledger metadata to see which bookies should contain entries for each ledger. If a bookie which should contain entries is unavailable, thea the ledger containing that entry is marked for recovery. Setting this to 0 disabled the periodic check. Bookie checks will still run when a bookie fails. The interval is specified in seconds.|86400| -|numAddWorkerThreads|The number of threads that should handle write requests. if zero, the writes would be handled by netty threads directly.|0| -|numReadWorkerThreads|The number of threads that should handle read requests. if zero, the reads would be handled by netty threads directly.|8| -|numHighPriorityWorkerThreads|The umber of threads that should be used for high priority requests (i.e. recovery reads and adds, and fencing).|8| -|maxPendingReadRequestsPerThread|If read workers threads are enabled, limit the number of pending requests, to avoid the executor queue to grow indefinitely.|2500| -|maxPendingAddRequestsPerThread|The limited number of pending requests, which is used to avoid the executor queue to grow indefinitely when add workers threads are enabled.|10000| -|isForceGCAllowWhenNoSpace|Whether force compaction is allowed when the disk is full or almost full. Forcing GC could get some space back, but could also fill up the disk space more quickly. This is because new log files are created before GC, while old garbage log files are deleted after GC.|false| -|verifyMetadataOnGC|True if the bookie should double check `readMetadata` prior to GC.|false| -|flushEntrylogBytes|Entry log flush interval in bytes. Flushing in smaller chunks but more frequently reduces spikes in disk I/O. Flushing too frequently may also affect performance negatively.|268435456| -|readBufferSizeBytes|The number of bytes we should use as capacity for BufferedReadChannel.|4096| -|writeBufferSizeBytes|The number of bytes used as capacity for the write buffer|65536| -|useHostNameAsBookieID|Whether the bookie should use its hostname to register with the coordination service (e.g.: zookeeper service). When false, bookie will use its ip address for the registration.|false| -|bookieId | If you want to custom a bookie ID or use a dynamic network address for the bookie, you can set the `bookieId`.

    The bookie advertises itself using the `bookieId` rather than the `BookieSocketAddress` (`hostname:port` or `IP:port`). If you set the `bookieId`, then the `useHostNameAsBookieID` setting does not take effect.

    The `bookieId` is a non-empty string that can contain ASCII digits and letters ([a-zA-Z0-9]), colons, dashes, and dots.

    For more information about `bookieId`, see [here](http://bookkeeper.apache.org/bps/BP-41-bookieid/).|N/A| -|allowEphemeralPorts|Whether the bookie is allowed to use an ephemeral port (port 0) as its server port. By default, an ephemeral port is not allowed. Using an ephemeral port as the service port usually indicates a configuration error. However, in unit tests, using an ephemeral port will address port conflict problems and allow running tests in parallel.|false| -|enableLocalTransport|Whether the bookie is allowed to listen for the BookKeeper clients executed on the local JVM.|false| -|disableServerSocketBind|Whether the bookie is allowed to disable bind on network interfaces. This bookie will be available only to BookKeeper clients executed on the local JVM.|false| -|skipListArenaChunkSize|The number of bytes that we should use as chunk allocation for `org.apache.bookkeeper.bookie.SkipListArena`.|4194304| -|skipListArenaMaxAllocSize|The maximum size that we should allocate from the skiplist arena. Allocations larger than this should be allocated directly by the VM to avoid fragmentation.|131072| -|bookieAuthProviderFactoryClass|The factory class name of the bookie authentication provider. If this is null, then there is no authentication.|null| -|statsProviderClass||org.apache.bookkeeper.stats.prometheus.PrometheusMetricsProvider| -|prometheusStatsHttpPort||8000| -|dbStorage_writeCacheMaxSizeMb|Size of Write Cache. Memory is allocated from JVM direct memory. Write cache is used to buffer entries before flushing into the entry log. For good performance, it should be big enough to hold a substantial amount of entries in the flush interval.|25% of direct memory| -|dbStorage_readAheadCacheMaxSizeMb|Size of Read cache. Memory is allocated from JVM direct memory. This read cache is pre-filled doing read-ahead whenever a cache miss happens. By default, it is allocated to 25% of the available direct memory.|N/A| -|dbStorage_readAheadCacheBatchSize|How many entries to pre-fill in cache after a read cache miss|1000| -|dbStorage_rocksDB_blockCacheSize|Size of RocksDB block-cache. For best performance, this cache should be big enough to hold a significant portion of the index database which can reach ~2GB in some cases. By default, it uses 10% of direct memory.|N/A| -|dbStorage_rocksDB_writeBufferSizeMB||64| -|dbStorage_rocksDB_sstSizeInMB||64| -|dbStorage_rocksDB_blockSize||65536| -|dbStorage_rocksDB_bloomFilterBitsPerKey||10| -|dbStorage_rocksDB_numLevels||-1| -|dbStorage_rocksDB_numFilesInLevel0||4| -|dbStorage_rocksDB_maxSizeInLevel1MB||256| - -## Broker - -Pulsar brokers are responsible for handling incoming messages from producers, dispatching messages to consumers, replicating data between clusters, and more. - -|Name|Description|Default| -|---|---|---| -|advertisedListeners|Specify multiple advertised listeners for the broker.

    The format is `<listener_name>:pulsar://<host>:<port>`.

    If there are multiple listeners, separate them with commas.

    **Note**: Do not use this configuration with `advertisedAddress` and `brokerServicePort`. If the value of this configuration is empty, the broker uses `advertisedAddress` and `brokerServicePort`.|/| -|internalListenerName|Specify the internal listener name for the broker.

    **Note**: the listener name must be contained in `advertisedListeners`.

    If the value of this configuration is empty, the broker uses the first listener as the internal listener.|/| -|authenticateOriginalAuthData| If this flag is set to `true`, the broker authenticates the original Auth data; else it just accepts the originalPrincipal and authorizes it (if required). |false| -|enablePersistentTopics| Whether persistent topics are enabled on the broker |true| -|enableNonPersistentTopics| Whether non-persistent topics are enabled on the broker |true| -|functionsWorkerEnabled| Whether the Pulsar Functions worker service is enabled in the broker |false| -|exposePublisherStats|Whether to enable topic level metrics.|true| -|statsUpdateFrequencyInSecs||60| -|statsUpdateInitialDelayInSecs||60| -|metadataStoreUrl| Metadata store quorum connection string || -| metadataStoreConfigPath | The configuration file path of the local metadata store. See [Configure metadata store](administration-metadata-store.md) for details. |N/A| -|metadataStoreCacheExpirySeconds|Metadata store cache expiry time in seconds|300| -|configurationMetadataStoreUrl| Configuration store connection string (as a comma-separated list) || -|brokerServicePort| Broker data port |6650| -|brokerServicePortTls| Broker data port for TLS |6651| -|webServicePort| Port to use to server HTTP request |8080| -|webServicePortTls| Port to use to server HTTPS request |8443| -|webSocketServiceEnabled| Enable the WebSocket API service in broker |false| -|webSocketNumIoThreads|The number of IO threads in Pulsar Client used in WebSocket proxy.|Runtime.getRuntime().availableProcessors()| -|webSocketConnectionsPerBroker|The number of connections per Broker in Pulsar Client used in WebSocket proxy.|Runtime.getRuntime().availableProcessors()| -|webSocketSessionIdleTimeoutMillis|Time in milliseconds that idle WebSocket session times out.|300000| -|webSocketMaxTextFrameSize|The maximum size of a text message during parsing in WebSocket proxy.|1048576| -|exposeTopicLevelMetricsInPrometheus|Whether to enable topic level metrics.|true| -|exposeConsumerLevelMetricsInPrometheus|Whether to enable consumer level metrics.|false| -|jvmGCMetricsLoggerClassName|Classname of Pluggable JVM GC metrics logger that can log GC specific metrics.|N/A| -|bindAddress| Hostname or IP address the service binds on, default is 0.0.0.0. |0.0.0.0| -|bindAddresses| Additional Hostname or IP addresses the service binds on: `listener_name:scheme://host:port,...`. || -|advertisedAddress| Hostname or IP address the service advertises to the outside world. If not set, the value of `InetAddress.getLocalHost().getHostName()` is used. || -|clusterName| Name of the cluster to which this broker belongs to || -|maxTenants|The maximum number of tenants that can be created in each Pulsar cluster. When the number of tenants reaches the threshold, the broker rejects the request of creating a new tenant. The default value 0 disables the check. |0| -|brokerDeduplicationEnabled| Sets the default behavior for message deduplication in the broker. If enabled, the broker will reject messages that were already stored in the topic. This setting can be overridden on a per-namespace basis. |false| -|brokerDeduplicationMaxNumberOfProducers| The maximum number of producers for which information will be stored for deduplication purposes. |10000| -|brokerDeduplicationEntriesInterval| The number of entries after which a deduplication informational snapshot is taken. 
A larger interval will lead to fewer snapshots being taken, though this would also lengthen the topic recovery time (the time required for entries published after the snapshot to be replayed). |1000| -|brokerDeduplicationSnapshotIntervalSeconds| The time period after which a deduplication informational snapshot is taken. It runs simultaneously with `brokerDeduplicationEntriesInterval`. |120| -|brokerDeduplicationProducerInactivityTimeoutMinutes| The time of inactivity (in minutes) after which the broker will discard deduplication information related to a disconnected producer. |360| -|brokerDeduplicationSnapshotFrequencyInSeconds| How often is the thread pool scheduled to check whether a snapshot needs to be taken. The value of `0` means it is disabled. |120| -|dispatchThrottlingRateInMsg| Dispatch throttling-limit of messages for a broker (per second). 0 means the dispatch throttling-limit is disabled. |0| -|dispatchThrottlingRateInByte| Dispatch throttling-limit of bytes for a broker (per second). 0 means the dispatch throttling-limit is disabled. |0| -|dispatchThrottlingRatePerTopicInMsg| Dispatch throttling-limit of messages for every topic (per second). 0 means the dispatch throttling-limit is disabled. |0| -|dispatchThrottlingRatePerTopicInByte| Dispatch throttling-limit of bytes for every topic (per second). 0 means the dispatch throttling-limit is disabled. |0| -|dispatchThrottlingOnBatchMessageEnabled|Apply dispatch rate limiting on batch message instead individual messages with in batch message. (Default is disabled). | false| -|dispatchThrottlingRateRelativeToPublishRate| Enable dispatch rate-limiting relative to publish rate. | false | -|dispatchThrottlingRatePerSubscriptionInMsg| Dispatch throttling-limit of messages for a subscription. 0 means the dispatch throttling-limit is disabled. |0| -|dispatchThrottlingRatePerSubscriptionInByte|Dispatch throttling-limit of bytes for a subscription. 0 means the dispatch throttling-limit is disabled.|0| -|dispatchThrottlingRatePerReplicatorInMsg| The default messages per second dispatch throttling-limit for every replicator in replication. The value of `0` means disabling replication message dispatch-throttling| 0 | -|dispatchThrottlingRatePerReplicatorInByte| The default bytes per second dispatch throttling-limit for every replicator in replication. The value of `0` means disabling replication message-byte dispatch-throttling| 0 | -|metadataStoreSessionTimeoutMillis| Metadata store session timeout in milliseconds |30000| -|brokerShutdownTimeoutMs| Time to wait for broker graceful shutdown. After this time elapses, the process will be killed |60000| -|skipBrokerShutdownOnOOM| Flag to skip broker shutdown when broker handles Out of memory error. |false| -|backlogQuotaCheckEnabled| Enable backlog quota check. Enforces action on topic when the quota is reached |true| -|backlogQuotaCheckIntervalInSeconds| How often to check for topics that have reached the quota |60| -|backlogQuotaDefaultLimitBytes| The default per-topic backlog quota limit. Being less than 0 means no limitation. By default, it is -1. | -1 | -|backlogQuotaDefaultRetentionPolicy|The defaulted backlog quota retention policy. By Default, it is `producer_request_hold`.
  • 'producer_request_hold': a policy that holds the producer's send request until the resource becomes available (or the hold times out)
  • 'producer_exception': a policy that throws `javax.jms.ResourceAllocationException` to the producer
  • 'consumer_backlog_eviction': a policy that evicts the oldest message from the slowest consumer's backlog
  |producer_request_hold| -|allowAutoTopicCreation| Enable topic auto creation if a new producer or consumer connects |true| -|allowAutoTopicCreationType| The type of topic that is allowed to be automatically created (partitioned/non-partitioned) |non-partitioned| -|allowAutoSubscriptionCreation| Enable subscription auto creation if a new consumer connects |true| -|defaultNumPartitions| The default number of partitions for a partitioned topic that is automatically created when `allowAutoTopicCreationType` is partitioned |1| -|brokerDeleteInactiveTopicsEnabled| Enable the deletion of inactive topics. If topics are not consumed for some time, these inactive topics might be cleaned up. Deleting inactive topics is enabled by default. The default period is 1 minute. |true| -|brokerDeleteInactiveTopicsFrequencySeconds| How often to check for inactive topics |60| -| brokerDeleteInactiveTopicsMode | Set the mode to delete inactive topics.
  • `delete_when_no_subscriptions`: delete the topic that has no subscriptions or active producers.
  • `delete_when_subscriptions_caught_up`: delete the topic whose subscriptions have no backlogs and which has no active producers or consumers.
  | `delete_when_no_subscriptions` | -| brokerDeleteInactiveTopicsMaxInactiveDurationSeconds | Set the maximum duration for inactive topics. If it is not specified, the `brokerDeleteInactiveTopicsFrequencySeconds` parameter is adopted. | N/A | -|forceDeleteTenantAllowed| Allow deleting a tenant forcefully. |false| -|forceDeleteNamespaceAllowed| Allow deleting a namespace forcefully. |false| -|messageExpiryCheckIntervalInMinutes| The frequency of proactively checking and purging expired messages. |5| -|brokerServiceCompactionMonitorIntervalInSeconds| Interval between checks to determine whether topics with compaction policies need compaction. |60| -brokerServiceCompactionThresholdInBytes|If the estimated backlog size is greater than this threshold, compaction is triggered.

    Setting this threshold to 0 disables the compaction check.|N/A -|delayedDeliveryEnabled| Whether to enable the delayed delivery for messages. If disabled, messages will be immediately delivered and there will be no tracking overhead.|true| -|delayedDeliveryTickTimeMillis|Control the tick time for retrying on delayed delivery, which affects the accuracy of the delivery time compared to the scheduled time. By default, it is 1 second.|1000| -|activeConsumerFailoverDelayTimeMillis| How long to delay rewinding the cursor and dispatching messages when the active consumer changes. |1000| -|clientLibraryVersionCheckEnabled| Enable check for minimum allowed client library version |false| -|clientLibraryVersionCheckAllowUnversioned| Allow client libraries with no version information |true| -|statusFilePath| Path for the file used to determine the rotation status for the broker when responding to service discovery health checks || -|preferLaterVersions| If true (and ModularLoadManagerImpl is being used), the load manager will attempt to use only brokers running the latest software version (to minimize impact to bundles) |false| -|maxNumPartitionsPerPartitionedTopic|Max number of partitions per partitioned topic. Use 0 or a negative number to disable the check|0| -| maxSubscriptionsPerTopic | Maximum number of subscriptions allowed to subscribe to a topic. Once this limit is reached, the broker rejects new subscriptions until the number of subscriptions decreases. When the value is set to 0, the limit check is disabled. | 0 | -| maxProducersPerTopic | Maximum number of producers allowed to connect to a topic. Once this limit is reached, the broker rejects new producers until the number of connected producers decreases. When the value is set to 0, the limit check is disabled. | 0 | -| maxConsumersPerTopic | Maximum number of consumers allowed to connect to a topic. Once this limit is reached, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 | -| maxConsumersPerSubscription | Maximum number of consumers allowed to connect to a subscription. Once this limit is reached, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 | -|tlsCertificateFilePath| Path for the TLS certificate file || -|tlsKeyFilePath| Path for the TLS private key file || -|tlsTrustCertsFilePath| Path for the trusted TLS certificate file. This cert is used to verify that any certs presented by connecting clients are signed by a certificate authority. If this verification fails, then the certs are untrusted and the connections are dropped. || -|tlsAllowInsecureConnection| Accept untrusted TLS certificates from clients. If it is set to `true`, a client with a cert that cannot be verified with the 'tlsTrustCertsFilePath' cert is allowed to connect to the server, though the cert is not used for client authentication. |false| -|tlsProtocols|Specify the TLS protocols the broker will use to negotiate during the TLS handshake. Multiple values can be specified, separated by commas. Example:- ```TLSv1.3```, ```TLSv1.2``` || -|tlsCiphers|Specify the TLS ciphers the broker will use to negotiate during the TLS handshake. Multiple values can be specified, separated by commas.
Example:- ```TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256```|| -|tlsEnabledWithKeyStore| Enable TLS with KeyStore type configuration in broker |false| -|tlsProvider| TLS Provider for KeyStore type || -|tlsKeyStoreType| LS KeyStore type configuration in broker: JKS, PKCS12 |JKS| -|tlsKeyStore| TLS KeyStore path in broker || -|tlsKeyStorePassword| TLS KeyStore password for broker || -|brokerClientTlsEnabledWithKeyStore| Whether internal client use KeyStore type to authenticate with Pulsar brokers |false| -|brokerClientSslProvider| The TLS Provider used by internal client to authenticate with other Pulsar brokers || -|brokerClientTlsTrustStoreType| TLS TrustStore type configuration for internal client: JKS, PKCS12, used by the internal client to authenticate with Pulsar brokers |JKS| -|brokerClientTlsTrustStore| TLS TrustStore path for internal client, used by the internal client to authenticate with Pulsar brokers || -|brokerClientTlsTrustStorePassword| TLS TrustStore password for internal client, used by the internal client to authenticate with Pulsar brokers || -|brokerClientTlsCiphers| Specify the tls cipher the internal client will use to negotiate during TLS Handshake. (a comma-separated list of ciphers) e.g. [TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256]|| -|brokerClientTlsProtocols|Specify the tls protocols the broker will use to negotiate during TLS handshake. (a comma-separated list of protocol names). e.g. `TLSv1.3`, `TLSv1.2` || -| metadataStoreBatchingEnabled | Enable metadata operations batching. | true | -| metadataStoreBatchingMaxDelayMillis | Maximum delay to impose on batching grouping. | 5 | -| metadataStoreBatchingMaxOperations | Maximum number of operations to include in a singular batch. | 1000 | -| metadataStoreBatchingMaxSizeKb | Maximum size of a batch. | 128 | -|ttlDurationDefaultInSeconds|The default Time to Live (TTL) for namespaces if the TTL is not configured at namespace policies. When the value is set to `0`, TTL is disabled. By default, TTL is disabled. |0| -|tokenSettingPrefix| Configure the prefix of the token-related settings, such as `tokenSecretKey`, `tokenPublicKey`, `tokenAuthClaim`, `tokenPublicAlg`, `tokenAudienceClaim`, and `tokenAudience`. || -|tokenSecretKey| Configure the secret key to be used to validate auth tokens. The key can be specified like: `tokenSecretKey=data:;base64,xxxxxxxxx` or `tokenSecretKey=file:///my/secret.key`. Note: key file must be DER-encoded.|| -|tokenPublicKey| Configure the public key to be used to validate auth tokens. The key can be specified like: `tokenPublicKey=data:;base64,xxxxxxxxx` or `tokenPublicKey=file:///my/secret.key`. Note: key file must be DER-encoded.|| -|tokenPublicAlg| Configure the algorithm to be used to validate auth tokens. This can be any of the asymettric algorithms supported by Java JWT (https://github.com/jwtk/jjwt#signature-algorithms-keys) |RS256| -|tokenAuthClaim| Specify which of the token's claims will be used as the authentication "principal" or "role". The default "sub" claim will be used if this is left blank || -|tokenAudienceClaim| The token audience "claim" name, e.g. "aud", that will be used to get the audience from token. If not set, audience will not be verified. || -|tokenAudience| The token audience stands for this broker. The field `tokenAudienceClaim` of a valid token, need contains this. || -|maxUnackedMessagesPerConsumer| Max number of unacknowledged messages allowed to receive messages by a consumer on a shared subscription. 
The broker stops sending messages to a consumer once this limit is reached, until the consumer starts acknowledging messages back. Using a value of 0 disables the unacked-message limit check, and the consumer can receive messages without any restriction |50000| -|maxUnackedMessagesPerSubscription| Max number of unacknowledged messages allowed per shared subscription. The broker stops dispatching messages to all consumers of the subscription once this limit is reached, until consumers start acknowledging messages back and the unacked count drops to limit/2. Using a value of 0 disables the unacked-message limit check, and the dispatcher can dispatch messages without any restriction |200000| -|subscriptionRedeliveryTrackerEnabled| Enable the subscription message redelivery tracker |true| -|subscriptionExpirationTimeMinutes | How long after the last consumption before an inactive subscription is deleted.

    Setting this configuration to a value **greater than 0** deletes inactive subscriptions automatically.
    Setting this configuration to **0** does not delete inactive subscriptions automatically.

    Since this configuration takes effect on all topics, if there is even one topic whose subscriptions should not be deleted automatically, you need to set it to 0.
    Instead, you can set a subscription expiration time for each **namespace** using the [`pulsar-admin namespaces set-subscription-expiration-time options` command](/tools/pulsar-admin/). | 0 | -|maxConcurrentLookupRequest| Max number of concurrent lookup request broker allows to throttle heavy incoming lookup traffic |50000| -|maxConcurrentTopicLoadRequest| Max number of concurrent topic loading request broker allows to control number of zk-operations |5000| -|authenticationEnabled| Enable authentication |false| -|authenticationProviders| Authentication provider name list, which is comma separated list of class names || -| authenticationRefreshCheckSeconds | Interval of time for checking for expired authentication credentials | 60 | -|authorizationEnabled| Enforce authorization |false| -|superUserRoles| Role names that are treated as "super-user", meaning they will be able to do all admin operations and publish/consume from all topics || -|brokerClientAuthenticationPlugin| Authentication settings of the broker itself. Used when the broker connects to other brokers, either in same or other clusters || -|brokerClientAuthenticationParameters||| -|athenzDomainNames| Supported Athenz provider domain names(comma separated) for authentication || -|exposePreciseBacklogInPrometheus| Enable expose the precise backlog stats, set false to use published counter and consumed counter to calculate, this would be more efficient but may be inaccurate. |false| -|schemaRegistryStorageClassName|The schema storage implementation used by this broker.|org.apache.pulsar.broker.service.schema.BookkeeperSchemaStorageFactory| -|isSchemaValidationEnforced| Whether to enable schema validation, when schema validation is enabled, if a producer without a schema attempts to produce the message to a topic with schema, the producer is rejected and disconnected.|false| -|isAllowAutoUpdateSchemaEnabled|Allow schema to be auto updated at broker level.|true| -|schemaCompatibilityStrategy| The schema compatibility strategy at broker level, see [here](schema-evolution-compatibility.md#schema-compatibility-check-strategy) for available values.|FULL| -|systemTopicSchemaCompatibilityStrategy| The schema compatibility strategy is used for system topics, see [here](schema-evolution-compatibility.md#schema-compatibility-check-strategy) for available values.|ALWAYS_COMPATIBLE| -| topicFencingTimeoutSeconds | If a topic remains fenced for a certain time period (in seconds), it is closed forcefully. If set to 0 or a negative number, the fenced topic is not closed. | 0 | -|offloadersDirectory|The directory for all the offloader implementations.|./offloaders| -|bookkeeperMetadataServiceUri| Metadata service uri that bookkeeper is used for loading corresponding metadata driver and resolving its metadata service location. This value can be fetched using `bookkeeper shell whatisinstanceid` command in BookKeeper cluster. For example: zk+hierarchical://localhost:2181/ledgers. The metadata service uri list can also be semicolon separated values like below: zk+hierarchical://zk1:2181;zk2:2181;zk3:2181/ledgers || -|bookkeeperClientAuthenticationPlugin| Authentication plugin to use when connecting to bookies || -|bookkeeperClientAuthenticationParametersName| BookKeeper auth plugin implementation specifics parameters name and values || -|bookkeeperClientAuthenticationParameters||| -|bookkeeperClientNumWorkerThreads| Number of BookKeeper client worker threads. 
Default is Runtime.getRuntime().availableProcessors() || -|bookkeeperClientTimeoutInSeconds| Timeout for BK add / read operations |30| -|bookkeeperClientSpeculativeReadTimeoutInMillis| Speculative reads are initiated if a read request doesn’t complete within a certain time Using a value of 0, is disabling the speculative reads |0| -|bookkeeperNumberOfChannelsPerBookie| Number of channels per bookie |16| -|bookkeeperClientHealthCheckEnabled| Enable bookies health check. Bookies that have more than the configured number of failure within the interval will be quarantined for some time. During this period, new ledgers won’t be created on these bookies |true| -|bookkeeperClientHealthCheckIntervalSeconds||60| -|bookkeeperClientHealthCheckErrorThresholdPerInterval||5| -|bookkeeperClientHealthCheckQuarantineTimeInSeconds ||1800| -|bookkeeperClientRackawarePolicyEnabled| Enable rack-aware bookie selection policy. BK will chose bookies from different racks when forming a new bookie ensemble |true| -|bookkeeperClientRegionawarePolicyEnabled| Enable region-aware bookie selection policy. BK will chose bookies from different regions and racks when forming a new bookie ensemble. If enabled, the value of bookkeeperClientRackawarePolicyEnabled is ignored |false| -|bookkeeperClientMinNumRacksPerWriteQuorum| Minimum number of racks per write quorum. BK rack-aware bookie selection policy will try to get bookies from at least 'bookkeeperClientMinNumRacksPerWriteQuorum' racks for a write quorum. |2| -|bookkeeperClientEnforceMinNumRacksPerWriteQuorum| Enforces rack-aware bookie selection policy to pick bookies from 'bookkeeperClientMinNumRacksPerWriteQuorum' racks for a writeQuorum. If BK can't find bookie then it would throw BKNotEnoughBookiesException instead of picking random one. |false| -|bookkeeperClientReorderReadSequenceEnabled| Enable/disable reordering read sequence on reading entries. |false| -|bookkeeperClientIsolationGroups| Enable bookie isolation by specifying a list of bookie groups to choose from. Any bookie outside the specified groups will not be used by the broker || -|bookkeeperClientSecondaryIsolationGroups| Enable bookie secondary-isolation group if bookkeeperClientIsolationGroups doesn't have enough bookie available. || -|bookkeeperClientMinAvailableBookiesInIsolationGroups| Minimum bookies that should be available as part of bookkeeperClientIsolationGroups else broker will include bookkeeperClientSecondaryIsolationGroups bookies in isolated list. || -|bookkeeperClientGetBookieInfoIntervalSeconds| Set the interval to periodically check bookie info |86400| -|bookkeeperClientGetBookieInfoRetryIntervalSeconds| Set the interval to retry a failed bookie info lookup |60| -|bookkeeperEnableStickyReads | Enable/disable having read operations for a ledger to be sticky to a single bookie. If this flag is enabled, the client will use one single bookie (by preference) to read all entries for a ledger. | true | -|managedLedgerDefaultEnsembleSize| Number of bookies to use when creating a ledger |2| -|managedLedgerDefaultWriteQuorum| Number of copies to store for each message |2| -|managedLedgerDefaultAckQuorum| Number of guaranteed copies (acks to wait before write is complete) |2| -|managedLedgerCacheSizeMB| Amount of memory to use for caching data payload in managed ledger. This memory is allocated from JVM direct memory and it’s shared across all the topics running in the same broker. 
    By default, uses 1/5th of available direct memory ||
|managedLedgerCacheCopyEntries| Whether to make a copy of the entry payloads when inserting them into the cache| false|
|managedLedgerCacheEvictionWatermark| Threshold to bring the cache level down to when eviction is triggered |0.9|
|managedLedgerCacheEvictionFrequency| Configure the cache eviction frequency for the managed ledger cache (evictions/sec) | 100.0 |
|managedLedgerCacheEvictionTimeThresholdMillis| All entries that have stayed in the cache for more than the configured time will be evicted | 1000 |
|managedLedgerCursorBackloggedThreshold| Configure the threshold (in number of entries) from which a cursor should be considered 'backlogged' and thus should be set as inactive. | 1000|
|managedLedgerDefaultMarkDeleteRateLimit| Rate limit the amount of writes per second generated by consumers acknowledging messages |1.0|
|managedLedgerMaxEntriesPerLedger| The max number of entries to append to a ledger before triggering a rollover. A ledger rollover is triggered after the min rollover time has passed and one of the following conditions is true:
    • The max rollover time has been reached
    • The max entries have been written to the ledger
    • The max ledger size has been written to the ledger
    |50000|
|managedLedgerMinLedgerRolloverTimeMinutes| Minimum time between ledger rollovers for a topic |10|
|managedLedgerMaxLedgerRolloverTimeMinutes| Maximum time before forcing a ledger rollover for a topic |240|
|managedLedgerInactiveLedgerRolloverTimeSeconds| Time to roll over a ledger for an inactive topic |0|
|managedLedgerCursorMaxEntriesPerLedger| Max number of entries to append to a cursor ledger |50000|
|managedLedgerCursorRolloverTimeInSeconds| Max time before triggering a rollover on a cursor ledger |14400|
|managedLedgerMaxUnackedRangesToPersist| Max number of "acknowledgment holes" that are going to be persistently stored. When acknowledging out of order, a consumer will leave holes that are supposed to be quickly filled by acking all the messages. The information of which messages are acknowledged is persisted by compressing it into "ranges" of messages that were acknowledged. After the max number of ranges is reached, the information will only be tracked in memory and messages will be redelivered in case of crashes. |1000|
|autoSkipNonRecoverableData| Skip reading non-recoverable/unreadable data-ledgers under the managed ledger's list. It helps when data-ledgers get corrupted in BookKeeper and the managed cursor is stuck at that ledger. |false|
|loadBalancerEnabled| Enable load balancer |true|
|loadBalancerPlacementStrategy| Strategy to assign a new bundle |weightedRandomSelection|
|loadBalancerReportUpdateThresholdPercentage| Percentage of change to trigger a load report update |10|
|loadBalancerReportUpdateMaxIntervalMinutes| Maximum interval to update the load report |15|
|loadBalancerHostUsageCheckIntervalMinutes| Frequency at which to check host usage |1|
|loadBalancerSheddingIntervalMinutes| Load shedding interval. The broker periodically checks whether some traffic should be offloaded from over-loaded brokers to under-loaded brokers |30|
|loadBalancerSheddingGracePeriodMinutes| Prevent the same topics from being shed and moved to another broker more than once within this timeframe |30|
|loadBalancerBrokerMaxTopics| Usage threshold to allocate the max number of topics to a broker |50000|
|loadBalancerBrokerUnderloadedThresholdPercentage| Usage threshold to determine a broker as under-loaded |1|
|loadBalancerBrokerOverloadedThresholdPercentage| Usage threshold to determine a broker as over-loaded |85|
|loadBalancerResourceQuotaUpdateIntervalMinutes| Interval to update the namespace bundle resource quota |15|
|loadBalancerBrokerComfortLoadLevelPercentage| Usage threshold to determine that a broker has just the right level of load |65|
|loadBalancerAutoBundleSplitEnabled| Enable/disable namespace bundle auto split |false|
|loadBalancerNamespaceBundleMaxTopics| Maximum topics in a bundle, otherwise bundle split will be triggered |1000|
|loadBalancerNamespaceBundleMaxSessions| Maximum sessions (producers + consumers) in a bundle, otherwise bundle split will be triggered |1000|
|loadBalancerNamespaceBundleMaxMsgRate| Maximum msgRate (in + out) in a bundle, otherwise bundle split will be triggered |1000|
|loadBalancerNamespaceBundleMaxBandwidthMbytes| Maximum bandwidth (in + out) in a bundle, otherwise bundle split will be triggered |100|
|loadBalancerNamespaceMaximumBundles| Maximum number of bundles in a namespace |128|
|loadBalancerLoadSheddingStrategy | The shedding strategy of the load balancer.

    Available values:
  * `org.apache.pulsar.broker.loadbalance.impl.ThresholdShedder`
  * `org.apache.pulsar.broker.loadbalance.impl.OverloadShedder`
  * `org.apache.pulsar.broker.loadbalance.impl.UniformLoadShedder`

    For the comparisons of the shedding strategies, see [here](administration-load-balance/#shed-load-automatically).|`org.apache.pulsar.broker.loadbalance.impl.ThresholdShedder`
|replicationMetricsEnabled| Enable replication metrics |true|
|replicationConnectionsPerBroker| Max number of connections to open for each broker in a remote cluster. More connections host-to-host lead to better throughput over high-latency links. |16|
|replicationProducerQueueSize| Replicator producer queue size |1000|
|replicatorPrefix| Replicator prefix used for the replicator producer name and cursor name |pulsar.repl|
|transactionCoordinatorEnabled|Whether to enable the transaction coordinator in the broker.|true|
|transactionMetadataStoreProviderClassName| |org.apache.pulsar.transaction.coordinator.impl.InMemTransactionMetadataStoreProvider|
|defaultRetentionTimeInMinutes| Default message retention time. 0 means retention is disabled. -1 means data is not removed by time quota |0|
|defaultRetentionSizeInMB| Default retention size. 0 means retention is disabled. -1 means data is not removed by size quota |0|
|keepAliveIntervalSeconds| How often to check whether the connections are still alive |30|
|bootstrapNamespaces| The bootstrap name. | N/A |
|loadManagerClassName| Name of the load manager to use |org.apache.pulsar.broker.loadbalance.impl.SimpleLoadManagerImpl|
|supportedNamespaceBundleSplitAlgorithms| Supported algorithm names for namespace bundle split |[range_equally_divide,topic_count_equally_divide]|
|defaultNamespaceBundleSplitAlgorithm| Default algorithm name for namespace bundle split |range_equally_divide|
|managedLedgerOffloadDriver| Driver to use to offload old data to long term storage (Possible values: S3, aws-s3, google-cloud-storage). The offloader implementations are loaded from the directory configured by `offloadersDirectory=./offloaders`. When using google-cloud-storage, make sure both Google Cloud Storage and the Google Cloud Storage JSON API are enabled for the project (check from Developers Console -> Api&auth -> APIs). ||
|managedLedgerOffloadMaxThreads| Maximum number of thread pool threads for ledger offloading |2|
|managedLedgerOffloadPrefetchRounds|The maximum prefetch rounds for ledger reading for offloading.|1|
|managedLedgerUnackedRangesOpenCacheSetEnabled| Use Open Range-Set to cache unacknowledged messages |true|
|managedLedgerOffloadDeletionLagMs|Delay between a ledger being successfully offloaded to long term storage and the ledger being deleted from BookKeeper | 14400000|
|managedLedgerOffloadAutoTriggerSizeThresholdBytes|The number of bytes before triggering automatic offload to long term storage |-1 (disabled)|
|s3ManagedLedgerOffloadRegion| For Amazon S3 ledger offload, AWS region ||
|s3ManagedLedgerOffloadBucket| For Amazon S3 ledger offload, bucket to place offloaded ledgers into ||
|s3ManagedLedgerOffloadServiceEndpoint| For Amazon S3 ledger offload, alternative endpoint to connect to (useful for testing) ||
|s3ManagedLedgerOffloadMaxBlockSizeInBytes| For Amazon S3 ledger offload, max block size in bytes. (64MB by default, 5MB minimum) |67108864|
|s3ManagedLedgerOffloadReadBufferSizeInBytes| For Amazon S3 ledger offload, read buffer size in bytes (1MB by default) |1048576|
|gcsManagedLedgerOffloadRegion|For Google Cloud Storage ledger offload, region where the offload bucket is located. 
    Go to this page for more details: https://cloud.google.com/storage/docs/bucket-locations .|N/A|
|gcsManagedLedgerOffloadBucket|For Google Cloud Storage ledger offload, bucket to place offloaded ledgers into.|N/A|
|gcsManagedLedgerOffloadMaxBlockSizeInBytes|For Google Cloud Storage ledger offload, the maximum block size in bytes. (64MB by default, 5MB minimum)|67108864|
|gcsManagedLedgerOffloadReadBufferSizeInBytes|For Google Cloud Storage ledger offload, read buffer size in bytes. (1MB by default)|1048576|
|gcsManagedLedgerOffloadServiceAccountKeyFile|For Google Cloud Storage, path to the JSON file containing service account credentials. For more details, see the "Service Accounts" section of https://support.google.com/googleapi/answer/6158849 .|N/A|
|fileSystemProfilePath|For File System Storage, file system profile path.|../conf/filesystem_offload_core_site.xml|
|fileSystemURI|For File System Storage, file system URI.|N/A|
|s3ManagedLedgerOffloadRole| For Amazon S3 ledger offload, provide a role to assume before writing to S3 ||
|s3ManagedLedgerOffloadRoleSessionName| For Amazon S3 ledger offload, provide a role session name when using a role |pulsar-s3-offload|
| acknowledgmentAtBatchIndexLevelEnabled | Enable or disable batch-index-level acknowledgement. | false |
|enableReplicatedSubscriptions|Whether to enable tracking of replicated subscriptions state across clusters.|true|
|replicatedSubscriptionsSnapshotFrequencyMillis|The frequency of snapshots for replicated subscriptions tracking.|1000|
|replicatedSubscriptionsSnapshotTimeoutSeconds|The timeout for building a consistent snapshot for tracking replicated subscriptions state.|30|
|replicatedSubscriptionsSnapshotMaxCachedPerSubscription|The maximum number of snapshots to be cached per subscription.|10|
|maxMessagePublishBufferSizeInMB|The maximum memory size for a broker to handle messages that are sent by producers. If the processing message size exceeds this value, the broker stops reading data from the connection. The processing messages refer to the messages that are sent to the broker but the broker has not yet sent a response to the client. Usually the messages are waiting to be written to bookies. It is shared across all the topics running in the same broker. The value `-1` disables the memory limitation. By default, it is 50% of direct memory.|N/A|
|messagePublishBufferCheckIntervalInMillis|Interval between checks to see if the message publish buffer size exceeds the maximum. Use `0` or a negative number to disable the max publish buffer limiting.|100|
|retentionCheckIntervalInSeconds|Interval between checks to see if consumed ledgers need to be trimmed. Use 0 or a negative number to disable the check.|120|
| maxMessageSize | Set the maximum size of a message. | 5242880 |
| preciseTopicPublishRateLimiterEnable | Enable precise topic publish rate limiting. | false |
| lazyCursorRecovery | Whether to recover cursors lazily when trying to recover a managed ledger backing a persistent topic. It can improve write availability of topics. The caveat is that when the recovered ledger is ready for writes, it is not guaranteed that all old consumers' last mark-delete positions (ack positions) can be recovered. Users can make that trade-off or add custom logic in the application to checkpoint consumer state.| false |
|haProxyProtocolEnabled | Enable or disable the [HAProxy](http://www.haproxy.org/) protocol. |false|
| maxNamespacesPerTenant | The maximum number of namespaces that can be created in each tenant. 
    When the number of namespaces reaches this threshold, the broker rejects the request to create a new namespace. The default value 0 disables the check. |0|
| maxTopicsPerNamespace | The maximum number of persistent topics that can be created in the namespace. When the number of topics reaches this threshold, the broker rejects requests to create new topics, including topics auto-created by producers or consumers, until the number of topics decreases. The default value 0 disables the check. | 0 |
|subscriptionTypesEnabled| The subscription types to enable. Available types are exclusive, shared, failover, and key_shared. | Exclusive, Shared, Failover, Key_Shared |
| managedLedgerInfoCompressionType | Compression type of managed ledger information.

    Available options are `NONE`, `LZ4`, `ZLIB`, `ZSTD`, and `SNAPPY`.

    If this value is `NONE` or invalid, the `managedLedgerInfo` is not compressed.

    **Note** that after enabling this configuration, if you want to downgrade a broker, you need to change the value to `NONE` and make sure all ledger metadata is saved without compression. | None |
| additionalServlets | Additional servlet name.

    If you have multiple additional servlets, separate them by commas.

    For example, `additionalServlet_1, additionalServlet_2` | N/A |
| additionalServletDirectory | Location of the broker additional servlet NAR directory | ./brokerAdditionalServlet |
| brokerEntryMetadataInterceptors | Set broker entry metadata interceptors.

    Multiple interceptors should be separated by commas.

    Available values:
  * org.apache.pulsar.common.intercept.AppendBrokerTimestampMetadataInterceptor
  * org.apache.pulsar.common.intercept.AppendIndexMetadataInterceptor


  Example:
    `brokerEntryMetadataInterceptors=org.apache.pulsar.common.intercept.AppendBrokerTimestampMetadataInterceptor, org.apache.pulsar.common.intercept.AppendIndexMetadataInterceptor`|N/A |
| enableExposingBrokerEntryMetadataToClient|Whether to expose broker entry metadata to the client or not.

    Available values:
  * true
  * false

  Example:
    `enableExposingBrokerEntryMetadataToClient=true` | false |
| strictBookieAffinityEnabled | Enable or disable the strict bookie isolation strategy. If enabled,
    - `bookie-ensemble` first tries to choose bookies that belong to a namespace's affinity group. If the number of bookies is not enough, the remaining bookies are chosen from outside the group.
    - If the namespace has no affinity group, `bookie-ensemble` only chooses bookies that do not belong to any region. If the number of bookies is not enough, `BKNotEnoughBookiesException` is thrown.| false |
|narExtractionDirectory | The extraction directory of the NAR package.
    Available for Protocol Handler, Additional Servlets, Entry Filter, Offloaders, Broker Interceptor. | System.getProperty("java.io.tmpdir") |

#### Configuration override for clients internal to broker

It's possible to configure some clients by using the appropriate prefix.

|Prefix|Description|
|---|---|
|brokerClient_| Configure **all** the broker's Pulsar Clients and Pulsar Admin Clients. These configurations are applied after the hard-coded configuration and before the broker client configurations named above.|
|bookkeeper_| Configure the broker's BookKeeper clients used by managed ledgers and the BookkeeperPackagesStorage BookKeeper client. Takes precedence over most other configuration values.|

:::note

When running the function worker within the broker, these prefixed configurations do not apply to any of those clients. You must configure those clients using the `functions_worker.yml` file.

:::
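
As a minimal sketch of how the prefixes can be used in `conf/broker.conf` — the option names after each prefix are illustrative assumptions (any valid client setting can follow the prefix), not recommendations:

```properties
# Applied to all of the broker's internal Pulsar clients and admin clients
# (assumes operationTimeoutMs is a valid Pulsar client option):
brokerClient_operationTimeoutMs=30000

# Applied to the broker's BookKeeper clients
# (assumes numChannelsPerBookie is a valid BookKeeper client option):
bookkeeper_numChannelsPerBookie=16
```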

#### Deprecated parameters of Broker
The following parameters have been deprecated in the `conf/broker.conf` file.

|Name|Description|Default|
|---|---|---|
|backlogQuotaDefaultLimitGB| Use `backlogQuotaDefaultLimitBytes` instead. |-1|
|brokerServicePurgeInactiveFrequencyInSeconds| Use `brokerDeleteInactiveTopicsFrequencySeconds` instead.|60|
|tlsEnabled| Use `webServicePortTls` and `brokerServicePortTls` instead. |false|
|replicationTlsEnabled| Enable TLS when talking with other clusters to replicate messages. Use `brokerClientTlsEnabled` instead. |false|
|subscriptionKeySharedEnable| Whether to enable the Key_Shared subscription. Use `subscriptionTypesEnabled` instead. |true|
|zookeeperServers| ZooKeeper quorum connection string. Use `metadataStoreUrl` instead. |N/A|
|configurationStoreServers| Configuration store connection string (as a comma-separated list). Use `configurationMetadataStoreUrl` instead. |N/A|
|zooKeeperSessionTimeoutMillis| ZooKeeper session timeout in milliseconds. Use `metadataStoreSessionTimeoutMillis` instead. |30000|
|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds. Use `metadataStoreCacheExpirySeconds` instead.|300|

## Client

You can use the [`pulsar-client`](reference-cli-tools.md#pulsar-client) CLI tool to publish messages to and consume messages from Pulsar topics. You can use this tool in place of a client library.

|Name|Description|Default|
|---|---|---|
|webServiceUrl| The web URL for the cluster. |http://localhost:8080/|
|brokerServiceUrl| The Pulsar protocol URL for the cluster. |pulsar://localhost:6650/|
|authPlugin| The authentication plugin. ||
|authParams| The authentication parameters for the cluster, as a comma-separated string. ||
|useTls| Whether to enforce TLS authentication in the cluster. |false|
| tlsAllowInsecureConnection | Allow TLS connections to servers whose certificate cannot be verified to have been signed by a trusted certificate authority. | false |
| tlsEnableHostnameVerification | Whether the server hostname must match the common name of the certificate that is used by the server. | false |
|tlsTrustCertsFilePath|||
| useKeyStoreTls | Enable TLS with KeyStore type configuration. | false |
| tlsTrustStoreType | TLS TrustStore type configuration.
  * JKS
  * PKCS12
  |JKS|
| tlsTrustStore | TLS TrustStore path. | |
| tlsTrustStorePassword | TLS TrustStore password. | |
| webserviceTlsProvider | The TLS provider for the web service.
    When TLS authentication with CACert is used, the valid value is either `OPENSSL` or `JDK`.
    When TLS authentication with KeyStore is used, available options can be `SunJSSE`, `Conscrypt` and so on. | N/A |
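
These options are read from the `conf/client.conf` file. As a minimal sketch, assuming a TLS-secured cluster (hostnames and file paths are placeholders):

```properties
webServiceUrl=https://broker.example.com:8443/
brokerServiceUrl=pulsar+ssl://broker.example.com:6651/
useTls=true
tlsAllowInsecureConnection=false
tlsEnableHostnameVerification=true
tlsTrustCertsFilePath=/path/to/ca.cert.pem
```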

## Log4j

You can set the log level and configuration in the [log4j2.yaml](https://github.com/apache/pulsar/blob/d557e0aa286866363bc6261dec87790c055db1b0/conf/log4j2.yaml#L155) file. The following logging configuration parameters are available.

|Name|Default|
|---|---|
|pulsar.root.logger| WARN,CONSOLE|
|pulsar.log.dir| logs|
|pulsar.log.file| pulsar.log|
|log4j.rootLogger| ${pulsar.root.logger}|
|log4j.appender.CONSOLE| org.apache.log4j.ConsoleAppender|
|log4j.appender.CONSOLE.Threshold| DEBUG|
|log4j.appender.CONSOLE.layout| org.apache.log4j.PatternLayout|
|log4j.appender.CONSOLE.layout.ConversionPattern| %d{ISO8601} - %-5p - [%t:%C{1}@%L] - %m%n|
|log4j.appender.ROLLINGFILE| org.apache.log4j.DailyRollingFileAppender|
|log4j.appender.ROLLINGFILE.Threshold| DEBUG|
|log4j.appender.ROLLINGFILE.File| ${pulsar.log.dir}/${pulsar.log.file}|
|log4j.appender.ROLLINGFILE.layout| org.apache.log4j.PatternLayout|
|log4j.appender.ROLLINGFILE.layout.ConversionPattern| %d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n|
|log4j.appender.TRACEFILE| org.apache.log4j.FileAppender|
|log4j.appender.TRACEFILE.Threshold| TRACE|
|log4j.appender.TRACEFILE.File| pulsar-trace.log|
|log4j.appender.TRACEFILE.layout| org.apache.log4j.PatternLayout|
|log4j.appender.TRACEFILE.layout.ConversionPattern| %d{ISO8601} - %-5p [%t:%C{1}@%L][%x] - %m%n|

> Note: 'topic' in log4j2.appender is configurable.
> - If you want to append all logs to a single topic, set the same topic name.
> - If you want to append logs to different topics, you can set different topic names.
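
As a hedged illustration of the parameters above (how these properties are passed to Log4j depends on your deployment, so treat this as a sketch rather than a verified recipe):

```properties
# Raise Pulsar's root log level to INFO and keep logs under ./logs/pulsar.log
pulsar.root.logger=INFO,CONSOLE
pulsar.log.dir=logs
pulsar.log.file=pulsar.log
```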

## Log4j shell

|Name|Default|
|---|---|
|bookkeeper.root.logger| ERROR,CONSOLE|
|log4j.rootLogger| ${bookkeeper.root.logger}|
|log4j.appender.CONSOLE| org.apache.log4j.ConsoleAppender|
|log4j.appender.CONSOLE.Threshold| DEBUG|
|log4j.appender.CONSOLE.layout| org.apache.log4j.PatternLayout|
|log4j.appender.CONSOLE.layout.ConversionPattern| %d{ABSOLUTE} %-5p %m%n|
|log4j.logger.org.apache.zookeeper| ERROR|
|log4j.logger.org.apache.bookkeeper| ERROR|
|log4j.logger.org.apache.bookkeeper.bookie.BookieShell| INFO|

## Standalone

|Name|Description|Default|
|---|---|---|
|authenticateOriginalAuthData| If this flag is set to `true`, the broker authenticates the original Auth data; else it just accepts the originalPrincipal and authorizes it (if required). |false|
|metadataStoreUrl| The quorum connection string for the local metadata store ||
|metadataStoreCacheExpirySeconds| Metadata store cache expiry time in seconds|300|
|configurationMetadataStoreUrl| Configuration store connection string (as a comma-separated list) ||
|brokerServicePort| The port on which the standalone broker listens for connections |6650|
|webServicePort| The port used by the standalone broker for HTTP requests |8080|
|bindAddress| The hostname or IP address on which the standalone service binds |0.0.0.0|
|bindAddresses| Additional hostnames or IP addresses the service binds on: `listener_name:scheme://host:port,...`. ||
|advertisedAddress| The hostname or IP address that the standalone service advertises to the outside world. If not set, the value of `InetAddress.getLocalHost().getHostName()` is used. ||
| numAcceptorThreads | Number of threads to use for the Netty acceptor | 1 |
| numIOThreads | Number of threads to use for Netty IO | 2 * Runtime.getRuntime().availableProcessors() |
| numHttpServerThreads | Number of threads to use for HTTP request processing | 2 * Runtime.getRuntime().availableProcessors()|
|isRunningStandalone|This flag controls features that are meant to be used when running in standalone mode.|N/A|
|clusterName| The name of the cluster that this broker belongs to. |standalone|
| failureDomainsEnabled | Enable the cluster's failure domains, which can distribute brokers into logical regions. | false |
|metadataStoreSessionTimeoutMillis| Metadata store session timeout, in milliseconds. |30000|
|metadataStoreOperationTimeoutSeconds|Metadata store operation timeout in seconds.|30|
|brokerShutdownTimeoutMs| The time to wait for graceful broker shutdown. After this time elapses, the process will be killed. |60000|
|skipBrokerShutdownOnOOM| Flag to skip broker shutdown when the broker handles an out-of-memory error. |false|
|backlogQuotaCheckEnabled| Enable the backlog quota check, which enforces a specified action when the quota is reached. |true|
|backlogQuotaCheckIntervalInSeconds| How often to check for topics that have reached the backlog quota. |60|
|backlogQuotaDefaultLimitBytes| The default per-topic backlog quota limit. Being less than 0 means no limitation. By default, it is -1. |-1|
|ttlDurationDefaultInSeconds|The default Time to Live (TTL) for namespaces if the TTL is not configured in namespace policies. When the value is set to `0`, TTL is disabled. By default, TTL is disabled. |0|
|brokerDeleteInactiveTopicsEnabled| Enable the deletion of inactive topics. If topics are not consumed for a while, these inactive topics might be cleaned up. Deleting inactive topics is enabled by default. The default period is 1 minute. |true|
|brokerDeleteInactiveTopicsFrequencySeconds| How often to check for inactive topics, in seconds. |60|
| maxPendingPublishRequestsPerConnection | Maximum pending publish requests per connection, to avoid keeping a large number of pending requests in memory | 1000|
|messageExpiryCheckIntervalInMinutes| How often to proactively check and purge expired messages. |5|
|activeConsumerFailoverDelayTimeMillis| How long to delay rewinding the cursor and dispatching messages when the active consumer changes. |1000|
| subscriptionExpirationTimeMinutes | How long after the last consumption before an inactive subscription is deleted. When it is set to 0, inactive subscriptions are not deleted automatically | 0 |
| subscriptionRedeliveryTrackerEnabled | Enable the subscription message redelivery tracker to send the redelivery count to consumers. | true |
| subscriptionKeySharedUseConsistentHashing | In the Key_Shared subscription type, with the default AUTO_SPLIT mode, use splitting ranges or consistent hashing to reassign keys to new consumers. | false |
| subscriptionKeySharedConsistentHashingReplicaPoints | In the Key_Shared subscription type, the number of points in the consistent-hashing ring. The greater the number, the more equal the assignment of keys to consumers. | 100 |
| subscriptionExpiryCheckIntervalInMinutes | How frequently to proactively check and purge expired subscriptions |5 |
| brokerDeduplicationEnabled | Set the default behavior for message deduplication in the broker. This can be overridden per-namespace. If it is enabled, the broker rejects messages that are already stored in the topic. | false |
| brokerDeduplicationMaxNumberOfProducers | Maximum number of producers for which deduplication information is persisted | 10000 |
| brokerDeduplicationEntriesInterval | Number of entries after which a deduplication information snapshot is taken. A greater interval leads to fewer snapshots being taken, though it would increase the topic recovery time when the entries published after the snapshot need to be replayed. | 1000 |
| brokerDeduplicationProducerInactivityTimeoutMinutes | The time of inactivity (in minutes) after which the broker discards deduplication information related to a disconnected producer. | 360 |
| defaultNumberOfNamespaceBundles | When a namespace is created without specifying the number of bundles, this value is used as the default setting.| 4 |
|clientLibraryVersionCheckEnabled| Enable checks for the minimum allowed client library version. |false|
|clientLibraryVersionCheckAllowUnversioned| Allow client libraries with no version information |true|
|statusFilePath| The path for the file used to determine the rotation status for the broker when responding to service discovery health checks |/usr/local/apache/htdocs|
|maxUnackedMessagesPerConsumer| The maximum number of unacknowledged messages allowed to be received by consumers on a shared subscription. The broker will stop sending messages to a consumer once this limit is reached, until the consumer begins acknowledging messages. A value of 0 disables the unacked message limit check and thus allows consumers to receive messages without any restrictions. |50000|
|maxUnackedMessagesPerSubscription| The same as above, except per subscription rather than per consumer. |200000|
| maxUnackedMessagesPerBroker | Maximum number of unacknowledged messages allowed per broker. Once this limit is reached, the broker stops dispatching messages to all shared subscriptions which have a higher number of unacknowledged messages, until subscriptions start acknowledging messages back and the unacknowledged message count drops to limit/2. When the value is set to 0, the unacknowledged message limit check is disabled and the broker does not block dispatchers. | 0 |
| maxUnackedMessagesPerSubscriptionOnBrokerBlocked | Once the broker reaches the maxUnackedMessagesPerBroker limit, it blocks subscriptions which have more unacknowledged messages than this percentage limit, and the subscription does not receive any new messages until it acknowledges messages back. | 0.16 |
| unblockStuckSubscriptionEnabled|The broker periodically checks whether a subscription is stuck and unblocks it if this flag is enabled.|false|
| topicPublisherThrottlingTickTimeMillis | Tick time to schedule the task that checks topic publish rate limiting across all topics. A lower value can improve accuracy while throttling publishing, but it uses more CPU to perform frequent checks. (Disable publish throttling with value 0) | 10|
| brokerPublisherThrottlingTickTimeMillis | Tick time to schedule the task that checks broker publish rate limiting across all topics. A lower value can improve accuracy while throttling publishing, but it uses more CPU to perform frequent checks. When the value is set to 0, publish throttling is disabled. |50 |
| brokerPublisherThrottlingMaxMessageRate | Maximum rate (per second) of messages allowed to be published to a broker if message rate limiting is enabled. When the value is set to 0, message rate limiting is disabled. | 0|
| brokerPublisherThrottlingMaxByteRate | Maximum rate (per second) of bytes allowed to be published to a broker if byte rate limiting is enabled. When the value is set to 0, byte rate limiting is disabled. | 0 |
|subscribeThrottlingRatePerConsumer|Too many subscribe requests from a consumer can cause the broker to rewind consumer cursors and load data from bookies, hence causing high network bandwidth usage. When a positive value is set, the broker throttles the subscribe requests for one consumer. Otherwise, the throttling is disabled. By default, throttling is disabled.|0|
|subscribeRatePeriodPerConsumerInSecond|Rate period for {subscribeThrottlingRatePerConsumer}. By default, it is 30s.|30|
|dispatchThrottlingRateInMsg| Dispatch throttling-limit of messages for a broker (per second). 0 means the dispatch throttling-limit is disabled. |0|
|dispatchThrottlingRateInByte| Dispatch throttling-limit of bytes for a broker (per second). 0 means the dispatch throttling-limit is disabled. |0|
| dispatchThrottlingRatePerTopicInMsg | Default message (per second) dispatch throttling-limit for every topic. When the value is set to 0, the default message dispatch throttling-limit is disabled. |0 |
| dispatchThrottlingRatePerTopicInByte | Default byte (per second) dispatch throttling-limit for every topic. When the value is set to 0, the default byte dispatch throttling-limit is disabled. | 0|
| dispatchThrottlingOnBatchMessageEnabled |Apply dispatch rate limiting to the batch message as a whole instead of the individual messages within the batch. (Disabled by default.) | false|
| dispatchThrottlingRateRelativeToPublishRate | Enable dispatch rate-limiting relative to publish rate. | false |
|dispatchThrottlingRatePerSubscriptionInMsg|The default message dispatch throttling-limit for a subscription. The value of 0 disables message dispatch-throttling.|0|
|dispatchThrottlingRatePerSubscriptionInByte|The default message-byte dispatch throttling-limit for a subscription. The value of 0 disables message-byte dispatch-throttling.|0|
|dispatchThrottlingRatePerReplicatorInMsg| Dispatch throttling-limit of messages for every replicator in replication (per second). 0 means the dispatch throttling-limit in replication is disabled. |0|
|dispatchThrottlingRatePerReplicatorInByte| Dispatch throttling-limit of bytes for every replicator in replication (per second). 0 means the dispatch throttling-limit is disabled. |0|
| dispatchThrottlingOnNonBacklogConsumerEnabled | Enable dispatch-throttling for both caught-up consumers as well as consumers who have backlogs. | true |
|dispatcherMaxReadBatchSize|The maximum number of entries to read from BookKeeper. By default, it is 100 entries.|100|
|dispatcherMaxReadSizeBytes|The maximum size in bytes of entries to read from BookKeeper. By default, it is 5MB.|5242880|
|dispatcherMinReadBatchSize|The minimum number of entries to read from BookKeeper. By default, it is 1 entry. When an error occurs while reading entries from BookKeeper, the broker backs the batch size off to this minimum number.|1|
|dispatcherMaxRoundRobinBatchSize|The maximum number of entries to dispatch for a shared subscription. By default, it is 20 entries.|20|
| preciseDispatcherFlowControl | Precise dispatcher flow control according to the historical number of messages in each entry. | false |
| streamingDispatch | Whether to use the streaming read dispatcher. It can be useful when there is a huge backlog to drain: instead of reading in micro batches, the read from BookKeeper is streamlined to make the most of consumer capacity until the BookKeeper read limit or the consumer process limit is hit; consumer flow control can then be used to tune the speed. This feature is currently in preview and may change in subsequent releases. | false |
| maxConcurrentLookupRequest | Maximum number of concurrent lookup requests that the broker allows, used to throttle heavy incoming lookup traffic. | 50000 |
| maxConcurrentTopicLoadRequest | Maximum number of concurrent topic loading requests that the broker allows, used to control the number of zk-operations. | 5000 |
| maxConcurrentNonPersistentMessagePerConnection | Maximum number of concurrent non-persistent messages that can be processed per connection. | 1000 |
| numWorkerThreadsForNonPersistentTopic | Number of worker threads to serve non-persistent topics. | 8 |
| enablePersistentTopics | Enable the broker to load persistent topics. | true |
| enableNonPersistentTopics | Enable the broker to load non-persistent topics. | true |
| maxSubscriptionsPerTopic | Maximum number of subscriptions allowed to subscribe to a topic. Once this limit is reached, the broker rejects new subscriptions until the number of subscriptions decreases. When the value is set to 0, the limit check is disabled. | 0 |
| maxProducersPerTopic | Maximum number of producers allowed to connect to a topic. Once this limit is reached, the broker rejects new producers until the number of connected producers decreases. When the value is set to 0, the limit check is disabled. | 0 |
| maxConsumersPerTopic | Maximum number of consumers allowed to connect to a topic. Once this limit is reached, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 |
| maxConsumersPerSubscription | Maximum number of consumers allowed to connect to a subscription. Once this limit is reached, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 |
| maxNumPartitionsPerPartitionedTopic | Maximum number of partitions per partitioned topic. When the value is set to a negative number or is set to 0, the check is disabled. | 0 |
| metadataStoreBatchingEnabled | Enable metadata operations batching. | true |
| metadataStoreBatchingMaxDelayMillis | Maximum delay to impose on batching grouping. | 5 |
| metadataStoreBatchingMaxOperations | Maximum number of operations to include in a singular batch. | 1000 |
| metadataStoreBatchingMaxSizeKb | Maximum size of a batch. | 128 |
| tlsCertRefreshCheckDurationSec | TLS certificate refresh duration in seconds. When the value is set to 0, the TLS certificate is checked on every new connection. | 300 |
| tlsCertificateFilePath | Path for the TLS certificate file. | |
| tlsKeyFilePath | Path for the TLS private key file. | |
| tlsTrustCertsFilePath | Path for the trusted TLS certificate file.| |
| tlsAllowInsecureConnection | Accept untrusted TLS certificates from clients. If it is set to true, a client with a certificate which cannot be verified with the 'tlsTrustCertsFilePath' certificate is allowed to connect to the server, though the certificate is not used for client authentication. | false |
| tlsProtocols | Specify the TLS protocols the broker uses to negotiate during the TLS handshake. | |
| tlsCiphers | Specify the TLS ciphers the broker uses to negotiate during the TLS handshake. | |
| tlsRequireTrustedClientCertOnConnect | Trusted client certificates are required to connect with TLS. The connection is rejected if the client certificate is not trusted. In effect, this requires that all connecting clients perform TLS client authentication. | false |
| tlsEnabledWithKeyStore | Enable TLS with KeyStore type configuration in the broker. | false |
| tlsProvider | TLS provider for KeyStore type. | |
| tlsKeyStoreType | TLS KeyStore type configuration in the broker.
  * JKS
  * PKCS12
  |JKS|
| tlsKeyStore | TLS KeyStore path in the broker. | |
| tlsKeyStorePassword | TLS KeyStore password for the broker. | |
| tlsTrustStoreType | TLS TrustStore type configuration in the broker
  * JKS
  * PKCS12
  |JKS|
| tlsTrustStore | TLS TrustStore path in the broker. | |
| tlsTrustStorePassword | TLS TrustStore password for the broker. | |
| brokerClientTlsEnabledWithKeyStore | Configure whether the internal client uses the KeyStore type to authenticate with Pulsar brokers. | false |
| brokerClientSslProvider | The TLS provider used by the internal client to authenticate with other Pulsar brokers. | |
| brokerClientTlsTrustStoreType | TLS TrustStore type configuration for the internal client to authenticate with Pulsar brokers.
  * JKS
  * PKCS12
  | JKS |
| brokerClientTlsTrustStore | TLS TrustStore path for the internal client to authenticate with Pulsar brokers. | |
| brokerClientTlsTrustStorePassword | TLS TrustStore password for the internal client to authenticate with Pulsar brokers. | |
| brokerClientTlsCiphers | Specify the TLS ciphers that the internal client uses to negotiate during the TLS handshake. | |
| brokerClientTlsProtocols | Specify the TLS protocols that the internal client uses to negotiate during the TLS handshake. | |
| systemTopicEnabled | Enable/Disable system topics. | false |
| topicLevelPoliciesEnabled | Enable or disable topic level policies. Topic level policies depend on the system topic. Please enable the system topic first. | false |
| topicFencingTimeoutSeconds | If a topic remains fenced for a certain time period (in seconds), it is closed forcefully. If set to 0 or a negative number, the fenced topic is not closed. | 0 |
| proxyRoles | Role names that are treated as "proxy roles". If the broker sees a request with a proxy role, it demands to see a valid original principal. | |
|authenticationEnabled| Enable authentication for the broker. |false|
|authenticationProviders| A comma-separated list of class names for authentication providers. |false|
|authorizationEnabled| Enforce authorization in brokers. |false|
| authorizationProvider | Authorization provider fully qualified class name. | org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider |
| authorizationAllowWildcardsMatching | Allow wildcard matching in authorization. Wildcard matching is applicable only when the wildcard character (*) appears at the **first** or **last** position. | false |
|superUserRoles| Role names that are treated as "superusers." Superusers are authorized to perform all admin tasks. | |
|brokerClientAuthenticationPlugin| The authentication settings of the broker itself. Used when the broker connects to other brokers either in the same cluster or from other clusters. | |
|brokerClientAuthenticationParameters| The parameters that go along with the plugin specified using brokerClientAuthenticationPlugin. | |
|athenzDomainNames| Supported Athenz authentication provider domain names as a comma-separated list. | |
| anonymousUserRole | When this parameter is not empty, unauthenticated users perform as anonymousUserRole. | |
|tokenSettingPrefix| Configure the prefix of token-related settings like `tokenSecretKey`, `tokenPublicKey`, `tokenAuthClaim`, `tokenPublicAlg`, `tokenAudienceClaim`, and `tokenAudience`. ||
|tokenSecretKey| Configure the secret key to be used to validate auth tokens. The key can be specified like: `tokenSecretKey=data:;base64,xxxxxxxxx` or `tokenSecretKey=file:///my/secret.key`. Note: the key file must be DER-encoded.||
|tokenPublicKey| Configure the public key to be used to validate auth tokens. The key can be specified like: `tokenPublicKey=data:;base64,xxxxxxxxx` or `tokenPublicKey=file:///my/secret.key`. Note: the key file must be DER-encoded.||
|tokenAuthClaim| Specify the token claim that will be used as the authentication "principal" or "role". The "subject" field will be used if this is left blank ||
|tokenAudienceClaim| The token audience "claim" name, e.g. "aud". It is used to get the audience from the token. If it is not set, the audience is not verified. ||
| tokenAudience | The token audience stands for this broker. The field `tokenAudienceClaim` of a valid token needs to contain this parameter.| |
|saslJaasClientAllowedIds|This is a regexp which limits the range of possible ids that can connect to the broker using SASL. By default, it is set to `SaslConstants.JAAS_CLIENT_ALLOWED_IDS_DEFAULT`, which is ".*pulsar.*", so only clients whose id contains 'pulsar' are allowed to connect.|N/A|
|saslJaasBrokerSectionName|Service principal, for the login context name. By default, it is set to `SaslConstants.JAAS_DEFAULT_BROKER_SECTION_NAME`, which is "Broker".|N/A|
|httpMaxRequestSize|If the value is larger than 0, the broker rejects all HTTP requests with bodies larger than the configured limit.|-1|
|exposePreciseBacklogInPrometheus| Enable exposing the precise backlog stats. Set to false to use the published counter and consumed counter for the calculation, which is more efficient but may be inaccurate. |false|
|bookkeeperMetadataServiceUri|Metadata service URI that BookKeeper uses for loading the corresponding metadata driver and resolving its metadata service location. This value can be fetched using the `bookkeeper shell whatisinstanceid` command in the BookKeeper cluster. For example: `zk+hierarchical://localhost:2181/ledgers`. The metadata service URI list can also be semicolon-separated values like: `zk+hierarchical://zk1:2181;zk2:2181;zk3:2181/ledgers`.|N/A|
|bookkeeperClientAuthenticationPlugin| Authentication plugin to be used when connecting to bookies (BookKeeper servers). ||
|bookkeeperClientAuthenticationParametersName| BookKeeper authentication plugin implementation parameters and values. ||
|bookkeeperClientAuthenticationParameters| Parameters associated with the bookkeeperClientAuthenticationParametersName ||
|bookkeeperClientNumWorkerThreads| Number of BookKeeper client worker threads. Default is Runtime.getRuntime().availableProcessors() ||
|bookkeeperClientTimeoutInSeconds| Timeout for BookKeeper add and read operations. |30|
|bookkeeperClientSpeculativeReadTimeoutInMillis| Speculative reads are initiated if a read request doesn’t complete within a certain time. A value of 0 disables speculative reads. |0|
|bookkeeperUseV2WireProtocol|Use the older BookKeeper wire protocol with bookies.|true|
|bookkeeperClientHealthCheckEnabled| Enable bookie health checks. |true|
|bookkeeperClientHealthCheckIntervalSeconds| The time interval, in seconds, at which health checks are performed. New ledgers are not created during health checks. |60|
|bookkeeperClientHealthCheckErrorThresholdPerInterval| Error threshold for health checks. |5|
|bookkeeperClientHealthCheckQuarantineTimeInSeconds| If bookies have more than the allowed number of failures within the time interval specified by bookkeeperClientHealthCheckIntervalSeconds, they are quarantined for this amount of time (in seconds). |1800|
|bookkeeperClientGetBookieInfoIntervalSeconds|Specify options for the GetBookieInfo check. This setting helps ensure that the list of bookies on the brokers is up to date.|86400|
|bookkeeperClientGetBookieInfoRetryIntervalSeconds|Specify options for the GetBookieInfo check. This setting helps ensure that the list of bookies on the brokers is up to date.|60|
|bookkeeperClientRackawarePolicyEnabled| |true|
|bookkeeperClientRegionawarePolicyEnabled| |false|
|bookkeeperClientMinNumRacksPerWriteQuorum| |2|
|bookkeeperClientEnforceMinNumRacksPerWriteQuorum| |false|
|bookkeeperClientReorderReadSequenceEnabled| |false|
|bookkeeperClientIsolationGroups|||
|bookkeeperClientSecondaryIsolationGroups| Enable the bookie secondary-isolation group if bookkeeperClientIsolationGroups doesn't have enough bookies available. ||
|bookkeeperClientMinAvailableBookiesInIsolationGroups| Minimum number of bookies that should be available as part of bookkeeperClientIsolationGroups, else the broker will include bookkeeperClientSecondaryIsolationGroups bookies in the isolated list. ||
| bookkeeperTLSProviderFactoryClass | Set the client security provider factory class name. | org.apache.bookkeeper.tls.TLSContextFactory |
| bookkeeperTLSClientAuthentication | Enable TLS authentication with bookies. | false |
| bookkeeperTLSKeyFileType | Supported types: PEM, JKS, PKCS12. | PEM |
| bookkeeperTLSTrustCertTypes | Supported types: PEM, JKS, PKCS12. | PEM |
| bookkeeperTLSKeyStorePasswordPath | Path to the file containing the keystore password, if the client keystore is password protected. | |
| bookkeeperTLSTrustStorePasswordPath | Path to the file containing the truststore password, if the client truststore is password protected. | |
| bookkeeperTLSKeyFilePath | Path for the TLS private key file. | |
| bookkeeperTLSCertificateFilePath | Path for the TLS certificate file. | |
| bookkeeperTLSTrustCertsFilePath | Path for the trusted TLS certificate file. | |
| bookkeeperTlsCertFilesRefreshDurationSeconds | TLS certificate refresh duration for the BookKeeper client, in seconds (0 to disable the check). | |
| bookkeeperDiskWeightBasedPlacementEnabled | Enable/Disable disk-weight-based placement. | false |
| bookkeeperExplicitLacIntervalInMills | Set the interval to check the need for sending an explicit LAC. When the value is set to 0, no explicit LAC is sent. | 0 |
| bookkeeperClientExposeStatsToPrometheus | Expose BookKeeper client managed ledger stats to Prometheus. | false |
|managedLedgerDefaultEnsembleSize| |1|
|managedLedgerDefaultWriteQuorum| |1|
|managedLedgerDefaultAckQuorum| |1|
| managedLedgerDigestType | Default type of checksum to use when writing to BookKeeper. | CRC32C |
| managedLedgerNumSchedulerThreads | Number of threads to be used for managed ledger scheduled tasks. | Runtime.getRuntime().availableProcessors() |
|managedLedgerCacheSizeMB| |N/A|
|managedLedgerCacheCopyEntries| Whether to copy the entry payloads when inserting them into the cache.| false|
|managedLedgerCacheEvictionWatermark| |0.9|
|managedLedgerCacheEvictionFrequency| Configure the cache eviction frequency for the managed ledger cache (evictions/sec) | 100.0 |
|managedLedgerCacheEvictionTimeThresholdMillis| All entries that have stayed in the cache for more than the configured time will be evicted | 1000 |
|managedLedgerCursorBackloggedThreshold| Configure the threshold (in number of entries) from which a cursor should be considered 'backlogged' and thus should be set as inactive. | 1000|
|managedLedgerUnackedRangesOpenCacheSetEnabled| Use Open Range-Set to cache unacknowledged messages |true|
|managedLedgerDefaultMarkDeleteRateLimit| |0.1|
|managedLedgerMaxEntriesPerLedger| |50000|
|managedLedgerMinLedgerRolloverTimeMinutes| |10|
|managedLedgerMaxLedgerRolloverTimeMinutes| |240|
|managedLedgerCursorMaxEntriesPerLedger| |50000|
|managedLedgerCursorRolloverTimeInSeconds| |14400|
| managedLedgerMaxSizePerLedgerMbytes | Maximum ledger size before triggering a rollover for a topic. | 2048 |
| managedLedgerMaxUnackedRangesToPersist | Maximum number of "acknowledgment holes" that are going to be persistently stored. When acknowledging out of order, a consumer leaves holes that are supposed to be quickly filled by acknowledging all the messages. The information of which messages are acknowledged is persisted by compressing it into "ranges" of messages that were acknowledged. After the max number of ranges is reached, the information is only tracked in memory and messages are redelivered in case of crashes. | 10000 |
| managedLedgerMaxUnackedRangesToPersistInZooKeeper | Maximum number of "acknowledgment holes" that can be stored in ZooKeeper. If the number of unacknowledged message ranges is higher than this limit, the broker persists unacknowledged ranges into BookKeeper to avoid additional data overhead in ZooKeeper. | 1000 |
|autoSkipNonRecoverableData| |false|
| managedLedgerMetadataOperationsTimeoutSeconds | Operation timeout while updating managed-ledger metadata. | 60 |
| managedLedgerReadEntryTimeoutSeconds | Read entry timeout when the broker tries to read messages from BookKeeper. | 0 |
| managedLedgerAddEntryTimeoutSeconds | Add entry timeout when the broker tries to publish a message to BookKeeper. | 0 |
| managedLedgerNewEntriesCheckDelayInMillis | New entries check delay for the cursor under the managed ledger. If there are no new messages in the topic, the cursor tries to check again after the delay time. For consumption latency sensitive scenarios, you can set the value to a smaller value or 0; of course, a smaller value may degrade consumption throughput. By default, it is 10 ms.|10|
| managedLedgerPrometheusStatsLatencyRolloverSeconds | Managed ledger Prometheus stats latency rollover seconds. | 60 |
| managedLedgerTraceTaskExecution | Whether to trace managed ledger task execution time. | true |
|loadBalancerEnabled| |false|
|loadBalancerPlacementStrategy| |weightedRandomSelection|
|loadBalancerReportUpdateThresholdPercentage| |10|
|loadBalancerReportUpdateMaxIntervalMinutes| |15|
|loadBalancerHostUsageCheckIntervalMinutes| |1|
|loadBalancerSheddingIntervalMinutes| |30|
|loadBalancerSheddingGracePeriodMinutes| |30|
|loadBalancerBrokerMaxTopics| |50000|
|loadBalancerBrokerUnderloadedThresholdPercentage| |1|
|loadBalancerBrokerOverloadedThresholdPercentage| |85|
|loadBalancerResourceQuotaUpdateIntervalMinutes| |15|
|loadBalancerBrokerComfortLoadLevelPercentage| |65|
|loadBalancerAutoBundleSplitEnabled| |false|
| loadBalancerAutoUnloadSplitBundlesEnabled | Enable/Disable automatic unloading of split bundles. | true |
|loadBalancerNamespaceBundleMaxTopics| |1000|
|loadBalancerNamespaceBundleMaxSessions| Maximum sessions (producers + consumers) in a bundle, otherwise bundle split will be triggered.
    To disable the threshold check, set the value to -1. |1000|
|loadBalancerNamespaceBundleMaxMsgRate| |1000|
|loadBalancerNamespaceBundleMaxBandwidthMbytes| |100|
|loadBalancerNamespaceMaximumBundles| |128|
| loadBalancerBrokerThresholdShedderPercentage | The broker resource usage threshold. When the broker resource usage is greater than the Pulsar cluster's average resource usage, the threshold shedder is triggered to offload bundles from the broker. It only takes effect in the ThresholdShedder strategy. | 10 |
| loadBalancerMsgRateDifferenceShedderThreshold | Message-rate percentage threshold between the highest and least loaded brokers for uniform load shedding. | 50 |
| loadBalancerMsgThroughputMultiplierDifferenceShedderThreshold | Message-throughput threshold between the highest and least loaded brokers for uniform load shedding. | 4 |
| loadBalancerHistoryResourcePercentage | The history usage when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 0.9 |
| loadBalancerBandwithInResourceWeight | The inbound bandwidth usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 |
| loadBalancerBandwithOutResourceWeight | The outbound bandwidth usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 |
| loadBalancerCPUResourceWeight | The CPU usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 |
| loadBalancerMemoryResourceWeight | The heap memory usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 |
| loadBalancerDirectMemoryResourceWeight | The direct memory usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 |
| loadBalancerBundleUnloadMinThroughputThreshold | Bundle unload minimum throughput threshold, to avoid unloading bundles too frequently. It only takes effect in the ThresholdShedder strategy. | 10 |
| namespaceBundleUnloadingTimeoutMs | Time to wait for the unloading of a namespace bundle, in milliseconds. | 60000 |
|replicationMetricsEnabled| |true|
|replicationConnectionsPerBroker| |16|
|replicationProducerQueueSize| |1000|
| replicationPolicyCheckDurationSeconds | Duration of the replication policy check, which avoids replicator inconsistency due to a missing ZooKeeper watch. When the value is set to 0, the replication policy check is disabled. | 600 |
|defaultRetentionTimeInMinutes| |0|
|defaultRetentionSizeInMB| |0|
|keepAliveIntervalSeconds| |30|
|haProxyProtocolEnabled | Enable or disable the [HAProxy](http://www.haproxy.org/) protocol. |false|
|bookieId | If you want to customize a bookie ID or use a dynamic network address for a bookie, you can set the `bookieId`.

    Bookie advertises itself using the `bookieId` rather than the `BookieSocketAddress` (`hostname:port` or `IP:port`).

    The `bookieId` is a non-empty string that can contain ASCII digits and letters ([a-zA-Z0-9]), colons, dashes, and dots.

    For more information about `bookieId`, see [here](http://bookkeeper.apache.org/bps/BP-41-bookieid/).|/|
| maxTopicsPerNamespace | The maximum number of persistent topics that can be created in the namespace. When the number of topics reaches this threshold, the broker rejects requests to create new topics, including topics auto-created by producers or consumers, until the number of topics decreases. The default value 0 disables the check. | 0 |
| metadataStoreConfigPath | The configuration file path of the local metadata store. See [Configure metadata store](administration-metadata-store.md) for details. |N/A|
|schemaRegistryStorageClassName|The schema storage implementation used by this broker.|org.apache.pulsar.broker.service.schema.BookkeeperSchemaStorageFactory|
|isSchemaValidationEnforced| Whether to enable schema validation. When schema validation is enabled, if a producer without a schema attempts to produce a message to a topic with schema, the producer is rejected and disconnected.|false|
|isAllowAutoUpdateSchemaEnabled|Allow schemas to be auto-updated at broker level.|true|
|schemaCompatibilityStrategy| The schema compatibility strategy at broker level, see [here](schema-evolution-compatibility.md#schema-compatibility-check-strategy) for available values.|FULL|
|systemTopicSchemaCompatibilityStrategy| The schema compatibility strategy used for system topics, see [here](schema-evolution-compatibility.md#schema-compatibility-check-strategy) for available values.|ALWAYS_COMPATIBLE|

#### Deprecated parameters of standalone Pulsar
The following parameters have been deprecated in the `conf/standalone.conf` file.

|Name|Description|Default|
|---|---|---|
|zookeeperServers| The quorum connection string for the local metadata store. Use `metadataStoreUrl` instead. |N/A|
|configurationStoreServers| Configuration store connection string (as a comma-separated list). Use `configurationMetadataStoreUrl` instead. |N/A|
|zooKeeperOperationTimeoutSeconds|ZooKeeper operation timeout in seconds. Use `metadataStoreOperationTimeoutSeconds` instead. |30|
|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds. Use `metadataStoreCacheExpirySeconds` instead. |300|
|zooKeeperSessionTimeoutMillis| The ZooKeeper session timeout, in milliseconds. Use `metadataStoreSessionTimeoutMillis` instead. |30000|
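
As a sketch of the migration, a `conf/standalone.conf` that used the deprecated ZooKeeper settings can switch to the metadata-store replacements like this (hostnames are placeholders, and the `zk:` URL scheme is an assumption based on the metadata-store connection string format):

```properties
# Deprecated:
#   zookeeperServers=zk1:2181,zk2:2181,zk3:2181
# Replacement:
metadataStoreUrl=zk:zk1:2181,zk2:2181,zk3:2181
metadataStoreSessionTimeoutMillis=30000
```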

## WebSocket

|Name|Description|Default|
|---|---|---|
|configurationMetadataStoreUrl |Configuration store connection string. |N/A|
|metadataStoreSessionTimeoutMillis|Metadata store session timeout in milliseconds. |30000|
|metadataStoreCacheExpirySeconds|Metadata store cache expiry time in seconds|300|
|serviceUrl|||
|serviceUrlTls|||
|brokerServiceUrl|||
|brokerServiceUrlTls|||
|webServicePort||8080|
|webServicePortTls||8443|
|bindAddress||0.0.0.0|
|clusterName |||
|authenticationEnabled||false|
|authenticationProviders|||
|authorizationEnabled||false|
|superUserRoles |||
|brokerClientAuthenticationPlugin|||
|brokerClientAuthenticationParameters|||
|tlsEnabled||false|
|tlsAllowInsecureConnection||false|
|tlsCertificateFilePath|||
|tlsKeyFilePath |||
|tlsTrustCertsFilePath|||

#### Configuration override for clients internal to WebSocket

It's possible to configure some clients by using the appropriate prefix.

|Prefix|Description|
|---|---|
|brokerClient_| Configure **all** the broker's Pulsar Clients. These configurations are applied after the hard-coded configuration and before the broker client configurations named above.|

#### Deprecated parameters of WebSocket
The following parameters have been deprecated in the `conf/websocket.conf` file.

|Name|Description|Default|
|---|---|---|
|zooKeeperSessionTimeoutMillis|The ZooKeeper session timeout in milliseconds. Use `metadataStoreSessionTimeoutMillis` instead. |30000|
|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds. Use `metadataStoreCacheExpirySeconds` instead.|300|
|configurationStoreServers| Configuration store connection string. Use `configurationMetadataStoreUrl` instead.|N/A|

## Pulsar proxy

The [Pulsar proxy](concepts-architecture-overview.md#pulsar-proxy) can be configured in the `conf/proxy.conf` file.
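
For example, a minimal sketch of a `conf/proxy.conf` pointing the proxy at a broker cluster might look like this (hostnames are placeholders; only parameters from the table below are used):

```properties
brokerServiceURL=pulsar://broker-1.example.com:6650
brokerWebServiceURL=http://broker-1.example.com:8080
servicePort=6650
```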
• 0: Do not log any TCP channel information.<br>
• 1: Parse and log any TCP channel information and command information without message body.<br>
• 2: Parse and log channel information, command information and message body.<br>
| 0 |
-|authenticationEnabled| Whether authentication is enabled for the Pulsar proxy |false|
-|authenticateMetricsEndpoint| Whether the '/metrics' endpoint requires authentication. 'authenticationEnabled' must also be set for this to take effect. |true|
-|authenticationProviders| Authentication provider name list (a comma-separated list of class names) ||
-|authorizationEnabled| Whether authorization is enforced by the Pulsar proxy |false|
-|authorizationProvider| Authorization provider as a fully qualified class name |org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider|
-| anonymousUserRole | When this parameter is not empty, unauthenticated users act as this role. | |
-|brokerClientAuthenticationPlugin| The authentication plugin used by the Pulsar proxy to authenticate with Pulsar brokers ||
-|brokerClientAuthenticationParameters| The authentication parameters used by the Pulsar proxy to authenticate with Pulsar brokers ||
-|brokerClientTrustCertsFilePath| The path to trusted certificates used by the Pulsar proxy to authenticate with Pulsar brokers ||
-|superUserRoles| Role names that are treated as "super-users", meaning that they are able to perform all admin operations ||
-|maxConcurrentInboundConnections| Max concurrent inbound connections. The proxy rejects connections beyond this limit. |10000|
-|maxConcurrentLookupRequests| Max concurrent lookup requests. The proxy errors out requests beyond this limit. |50000|
-|tlsEnabledWithBroker| Whether TLS is enabled when communicating with Pulsar brokers. |false|
-| tlsCertRefreshCheckDurationSec | TLS certificate refresh duration in seconds. If set to 0, the certificate is checked on every new connection. | 300 |
-|tlsCertificateFilePath| Path for the TLS certificate file ||
-|tlsKeyFilePath| Path for the TLS private key file ||
-|tlsTrustCertsFilePath| Path for the trusted TLS certificate PEM file ||
-|tlsHostnameVerificationEnabled| Whether the hostname is validated when the proxy creates a TLS connection with brokers |false|
-|tlsRequireTrustedClientCertOnConnect| Whether client certificates are required for TLS. Connections are rejected if the client certificate is not trusted. |false|
-|tlsProtocols|Specify the TLS protocols the proxy uses to negotiate during the TLS handshake. Multiple values can be specified, separated by commas. Example: `TLSv1.3`, `TLSv1.2` ||
-|tlsCiphers|Specify the TLS ciphers the proxy uses to negotiate during the TLS handshake. Multiple values can be specified, separated by commas. Example: `TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256`||
-| httpReverseProxyConfigs | Settings for redirecting HTTP requests to non-Pulsar services | |
-| httpOutputBufferSize | HTTP output buffer size. The amount of data that is buffered for HTTP requests before it is flushed to the channel. A larger buffer size may result in higher HTTP throughput, though it may take longer for the client to see data. If using HTTP streaming via the reverse proxy, this should be set to the minimum value (1) so that clients see the data as soon as possible. | 32768 |
-| httpNumThreads | Number of threads to use for HTTP request processing | 2 * Runtime.getRuntime().availableProcessors() |
-|tokenSettingPrefix| Configure the prefix of token-related settings like `tokenSecretKey`, `tokenPublicKey`, `tokenAuthClaim`, `tokenPublicAlg`, `tokenAudienceClaim`, and `tokenAudience`. ||
-|tokenSecretKey| Configure the secret key to be used to validate auth tokens. <br>
The key can be specified like: `tokenSecretKey=data:;base64,xxxxxxxxx` or `tokenSecretKey=file:///my/secret.key`. Note: the key file must be DER-encoded.||
-|tokenPublicKey| Configure the public key to be used to validate auth tokens. The key can be specified like: `tokenPublicKey=data:;base64,xxxxxxxxx` or `tokenPublicKey=file:///my/secret.key`. Note: the key file must be DER-encoded.||
-|tokenAuthClaim| Specify the token claim that is used as the authentication "principal" or "role". The "subject" field is used if this is left blank. ||
-|tokenAudienceClaim| The token audience "claim" name, e.g. "aud". It is used to get the audience from the token. If it is not set, the audience is not verified. ||
-| tokenAudience | The token audience that represents this proxy. The `tokenAudienceClaim` field of a valid token must contain this value.| |
-|haProxyProtocolEnabled | Enable or disable the [HAProxy](http://www.haproxy.org/) protocol. |false|
-| numIOThreads | Number of threads used for Netty IO. | 2 * Runtime.getRuntime().availableProcessors() |
-| numAcceptorThreads | Number of threads used for the Netty acceptor. | 1 |
-
-#### Configuration Override For Clients Internal to Proxy
-
-It's possible to configure some clients by using the appropriate prefix.
-
-|Prefix|Description|
-|---|---|
-|brokerClient_| Configure **all** the proxy's Pulsar Clients. These configurations are applied after the hard-coded configuration and before the specific `brokerClient*` settings listed above.|
-
-#### Deprecated parameters of Pulsar proxy
-The following parameters have been deprecated in the `conf/proxy.conf` file.
-
-|Name|Description|Default|
-|---|---|---|
-|tlsEnabledInProxy| Deprecated - use `servicePortTls` and `webServicePortTls` instead. |false|
-|zookeeperSessionTimeoutMs| ZooKeeper session timeout (in milliseconds). Use `metadataStoreSessionTimeoutMillis` instead. |30000|
-|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds. Use `metadataStoreCacheExpirySeconds` instead.|300|
-
-## ZooKeeper
-
-ZooKeeper handles a broad range of essential configuration- and coordination-related tasks for Pulsar. The default configuration file for ZooKeeper is `conf/zookeeper.conf` in your Pulsar installation. The following parameters are available:
-
-
-|Name|Description|Default|
-|---|---|---|
-|tickTime| The tick is the basic unit of time in ZooKeeper, measured in milliseconds and used to regulate things like heartbeats and timeouts. tickTime is the length of a single tick. |2000|
-|initLimit| The maximum time, in ticks, that the leader ZooKeeper server allows follower ZooKeeper servers to successfully connect and sync. The tick time is set in milliseconds using the tickTime parameter. |10|
-|syncLimit| The maximum time, in ticks, that a follower ZooKeeper server is allowed to sync with other ZooKeeper servers. The tick time is set in milliseconds using the tickTime parameter. |5|
-|dataDir| The location where ZooKeeper stores in-memory database snapshots as well as the transaction log of updates to the database. |data/zookeeper|
-|clientPort| The port on which the ZooKeeper server listens for connections. |2181|
-|admin.enableServer|Whether the ZooKeeper admin server is enabled.|true|
-|admin.serverPort|The port on which the ZooKeeper admin server listens.|9990|
-|autopurge.snapRetainCount| In ZooKeeper, auto purge determines how many recent snapshots of the database stored in dataDir to retain within the time interval specified by autopurge.purgeInterval (while deleting the rest). <br>
|3|
-|autopurge.purgeInterval| The time interval, in hours, by which the ZooKeeper database purge task is triggered. Setting this to a non-zero number enables auto purge; setting it to 0 disables it. See the [ZooKeeper Administrator's Guide](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html) before enabling auto purge. |1|
-|forceSync|Requires updates to be synced to the media of the transaction log before finishing processing the update. If this option is set to 'no', ZooKeeper does not require updates to be synced to the media. WARNING: running a production ZooKeeper cluster with `forceSync` disabled is not recommended.|yes|
-|maxClientCnxns| The maximum number of client connections. Increase this if you need to handle more ZooKeeper clients. |60|
-
-
-In addition to the parameters in the table above, configuring ZooKeeper for Pulsar involves adding
-a `server.N` line to the `conf/zookeeper.conf` file for each node in the ZooKeeper cluster, where `N` is the number of the ZooKeeper node. Here's an example for a three-node ZooKeeper cluster:
-
-```properties
-
-server.1=zk1.us-west.example.com:2888:3888
-server.2=zk2.us-west.example.com:2888:3888
-server.3=zk3.us-west.example.com:2888:3888
-
-```
-
-> We strongly recommend consulting the [ZooKeeper Administrator's Guide](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html) for a more thorough introduction to ZooKeeper configuration.
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/reference-connector-admin.md b/site2/website/versioned_docs/version-2.10.1-deprecated/reference-connector-admin.md
deleted file mode 100644
index 2a7c1d82adba24..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/reference-connector-admin.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-id: reference-connector-admin
-title: Connector Admin CLI
-sidebar_label: "Connector Admin CLI"
-original_id: reference-connector-admin
----
-
-> **Important**
->
-> For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](/tools/pulsar-admin/).
->
-
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/reference-metrics.md b/site2/website/versioned_docs/version-2.10.1-deprecated/reference-metrics.md
deleted file mode 100644
index c0c67c3bfd2e02..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/reference-metrics.md
+++ /dev/null
@@ -1,617 +0,0 @@
----
-id: reference-metrics
-title: Pulsar Metrics
-sidebar_label: "Pulsar Metrics"
-original_id: reference-metrics
----
-
-
-Pulsar exposes the following metrics in Prometheus format. You can monitor your clusters with these metrics.
-
-* [ZooKeeper](#zookeeper)
-* [BookKeeper](#bookkeeper)
-* [Broker](#broker)
-* [Pulsar Functions](#pulsar-functions)
-* [Proxy](#proxy)
-* [Pulsar SQL Worker](#pulsar-sql-worker)
-* [Pulsar transaction](#pulsar-transaction)
-
-The following types of metrics are available:
-
-- [Counter](https://prometheus.io/docs/concepts/metric_types/#counter): a cumulative metric that represents a single monotonically increasing counter. Its value can only increase; it is reset to zero when the process restarts.
-- [Gauge](https://prometheus.io/docs/concepts/metric_types/#gauge): a metric that represents a single numerical value that can arbitrarily go up and down.
-- [Histogram](https://prometheus.io/docs/concepts/metric_types/#histogram): a histogram samples observations (usually things like request durations or response sizes) and counts them in configurable buckets. <br>
The `_bucket` suffix counts the observations that fall within a given histogram bucket, configured with the parameter `{le=""}`. The `_count` suffix is the total number of observations, exposed as a time series that behaves like a counter. The `_sum` suffix is the sum of observed values, also exposed as a time series that behaves like a counter. These suffixes are collectively denoted by `_*` in this doc.
-- [Summary](https://prometheus.io/docs/concepts/metric_types/#summary): similar to a histogram, a summary samples observations (usually things like request durations and response sizes). While it also provides a total count of observations and a sum of all observed values, it calculates configurable quantiles over a sliding time window.
-
-## ZooKeeper
-
-The ZooKeeper metrics are exposed under "/metrics" at port `8000`. You can use a different port by configuring `metricsProvider.httpPort` in `conf/zookeeper.conf`.
-
-ZooKeeper has provided a new metrics system since version 3.6.0. For more detailed metrics, refer to the [ZooKeeper Monitor Guide](https://zookeeper.apache.org/doc/r3.7.0/zookeeperMonitor.html).
-
-## BookKeeper
-
-The BookKeeper metrics are exposed under "/metrics" at port `8000`. You can change the port by updating `prometheusStatsHttpPort`
-in the `bookkeeper.conf` configuration file.
-
-### Server metrics
-
-| Name | Type | Description |
-|---|---|---|
-| bookie_SERVER_STATUS | Gauge | The status of the bookie server.<br>
    • 1: the bookie is running in writable mode.
    • 0: the bookie is running in readonly mode.
    |
-| bookkeeper_server_ADD_ENTRY_count | Counter | The total number of ADD_ENTRY requests received at the bookie. The `success` label is used to distinguish successes and failures. |
-| bookkeeper_server_READ_ENTRY_count | Counter | The total number of READ_ENTRY requests received at the bookie. The `success` label is used to distinguish successes and failures. |
-| bookie_WRITE_BYTES | Counter | The total number of bytes written to the bookie. |
-| bookie_READ_BYTES | Counter | The total number of bytes read from the bookie. |
-| bookkeeper_server_ADD_ENTRY_REQUEST | Summary | The summary of request latency of ADD_ENTRY requests at the bookie. The `success` label is used to distinguish successes and failures. |
-| bookkeeper_server_READ_ENTRY_REQUEST | Summary | The summary of request latency of READ_ENTRY requests at the bookie. The `success` label is used to distinguish successes and failures. |
-| bookkeeper_server_BookieReadThreadPool_queue_{thread_id}|Gauge|The number of requests to be processed in a read thread queue.|
-| bookkeeper_server_BookieReadThreadPool_task_queued|Summary | The waiting time of a task to be processed in a read thread queue. |
-| bookkeeper_server_BookieReadThreadPool_task_execution|Summary | The execution time of a task in a read thread queue.|
-
-### Journal metrics
-
-| Name | Type | Description |
-|---|---|---|
-| bookie_journal_JOURNAL_SYNC_count | Counter | The total number of journal fsync operations happening at the bookie. The `success` label is used to distinguish successes and failures. |
-| bookie_journal_JOURNAL_QUEUE_SIZE | Gauge | The total number of requests pending in the journal queue. |
-| bookie_journal_JOURNAL_FORCE_WRITE_QUEUE_SIZE | Gauge | The total number of force write (fsync) requests pending in the force-write queue. |
-| bookie_journal_JOURNAL_CB_QUEUE_SIZE | Gauge | The total number of callbacks pending in the callback queue. |
-| bookie_journal_JOURNAL_ADD_ENTRY | Summary | The summary of request latency of adding entries to the journal. |
-| bookie_journal_JOURNAL_SYNC | Summary | The summary of fsync latency of syncing data to the journal disk. |
-| bookie_journal_JOURNAL_CREATION_LATENCY| Summary | The latency of creating a new journal log file. |
-
-### Storage metrics
-
-| Name | Type | Description |
-|---|---|---|
-| bookie_ledgers_count | Gauge | The total number of ledgers stored in the bookie. |
-| bookie_entries_count | Gauge | The total number of entries stored in the bookie. |
-| bookie_write_cache_size | Gauge | The bookie write cache size (in bytes). |
-| bookie_read_cache_size | Gauge | The bookie read cache size (in bytes). |
-| bookie_DELETED_LEDGER_COUNT | Counter | The total number of ledgers deleted since the bookie has started. |
-| bookie_ledger_writable_dirs | Gauge | The number of writable directories in the bookie. |
-| bookie_flush | Gauge| The flush latency of the bookie's in-memory table. |
-| bookie_throttled_write_requests | Counter | The number of write requests that have been throttled. |
-
-## Broker
-
-The broker metrics are exposed under "/metrics" at port `8080`. You can change the port by updating `webServicePort` to a different port
-in the `broker.conf` configuration file.
-
-All the metrics exposed by a broker are labelled with `cluster=${pulsar_cluster}`, where `${pulsar_cluster}` is the Pulsar cluster name that you configured in the `broker.conf` file.
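-
-As a quick sanity check, you can scrape this endpoint directly. The following is a minimal sketch, assuming a broker running on localhost with the default `webServicePort` of 8080; adjust the host, port, and `grep` filter for your deployment.
-
-```shell
-# Fetch the Prometheus exposition output from a local broker and keep
-# only Pulsar's own metric lines; each line carries the cluster label.
-curl -s http://localhost:8080/metrics | grep '^pulsar_'
-```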
- -The following metrics are available for broker: - -- [ZooKeeper](#zookeeper) - - [Server metrics](#server-metrics) - - [Request metrics](#request-metrics) -- [BookKeeper](#bookkeeper) - - [Server metrics](#server-metrics-1) - - [Journal metrics](#journal-metrics) - - [Storage metrics](#storage-metrics) -- [Broker](#broker) - - [Namespace metrics](#namespace-metrics) - - [Replication metrics](#replication-metrics) - - [Topic metrics](#topic-metrics) - - [Replication metrics](#replication-metrics-1) - - [ManagedLedgerCache metrics](#managedledgercache-metrics) - - [ManagedLedger metrics](#managedledger-metrics) - - [LoadBalancing metrics](#loadbalancing-metrics) - - [BundleUnloading metrics](#bundleunloading-metrics) - - [BundleSplit metrics](#bundlesplit-metrics) - - [Subscription metrics](#subscription-metrics) - - [Consumer metrics](#consumer-metrics) - - [Managed ledger bookie client metrics](#managed-ledger-bookie-client-metrics) - - [Token metrics](#token-metrics) - - [Authentication metrics](#authentication-metrics) - - [Connection metrics](#connection-metrics) - - [Jetty metrics](#jetty-metrics) -- [Pulsar Functions](#pulsar-functions) -- [Proxy](#proxy) -- [Pulsar SQL Worker](#pulsar-sql-worker) -- [Pulsar transaction](#pulsar-transaction) - -### BookKeeper client metrics - -All the BookKeeper client metric are labelled with the following label: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you configured in `broker.conf`. - -| Name | Type | Description | -|---|---|---| -| pulsar_managedLedger_client_bookkeeper_client_BOOKIE_QUARANTINE | Counter | The number of bookie clients to be quarantined.

If you want to expose this metric, set `bookkeeperClientExposeStatsToPrometheus` to `true` in the `broker.conf` file.|
-
-### Namespace metrics
-
-> Namespace metrics are only exposed when `exposeTopicLevelMetricsInPrometheus` is set to `false`.
-
-All the namespace metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you configured in `broker.conf`.
-- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_topics_count | Gauge | The number of Pulsar topics of the namespace owned by this broker. |
-| pulsar_subscriptions_count | Gauge | The number of Pulsar subscriptions of the namespace served by this broker. |
-| pulsar_producers_count | Gauge | The number of active producers of the namespace connected to this broker. |
-| pulsar_consumers_count | Gauge | The number of active consumers of the namespace connected to this broker. |
-| pulsar_rate_in | Gauge | The total message rate of the namespace coming into this broker (messages/second). |
-| pulsar_rate_out | Gauge | The total message rate of the namespace going out from this broker (messages/second). |
-| pulsar_throughput_in | Gauge | The total throughput of the namespace coming into this broker (bytes/second). |
-| pulsar_throughput_out | Gauge | The total throughput of the namespace going out from this broker (bytes/second). |
-| pulsar_storage_size | Gauge | The total storage size of the topics in this namespace owned by this broker (bytes). |
-| pulsar_storage_logical_size | Gauge | The storage size of topics in the namespace owned by the broker without replicas (in bytes). |
-| pulsar_storage_backlog_size | Gauge | The total backlog size of the topics of this namespace owned by this broker (messages). |
-| pulsar_storage_offloaded_size | Gauge | The total amount of the data in this namespace offloaded to the tiered storage (bytes). |
-| pulsar_storage_write_rate | Gauge | The total message batches (entries) written to the storage for this namespace (message batches / second). |
-| pulsar_storage_read_rate | Gauge | The total message batches (entries) read from the storage for this namespace (message batches / second). |
-| pulsar_subscription_delayed | Gauge | The total message batches (entries) that are delayed for dispatching. |
-| pulsar_storage_write_latency_le_* | Histogram | The rate of entries written for this namespace whose storage write latency is smaller than or equal to the given threshold.<br>
    Available thresholds:
    • pulsar_storage_write_latency_le_0_5: <= 0.5ms
    • pulsar_storage_write_latency_le_1: <= 1ms
    • pulsar_storage_write_latency_le_5: <= 5ms
    • pulsar_storage_write_latency_le_10: <= 10ms
    • pulsar_storage_write_latency_le_20: <= 20ms
    • pulsar_storage_write_latency_le_50: <= 50ms
    • pulsar_storage_write_latency_le_100: <= 100ms
    • pulsar_storage_write_latency_le_200: <= 200ms
    • pulsar_storage_write_latency_le_1000: <= 1s
    • pulsar_storage_write_latency_le_overflow: > 1s
    |
-| pulsar_entry_size_le_* | Histogram | The rate of entries written for this namespace whose entry size is smaller than or equal to the given threshold.<br>
    Available thresholds:
    • pulsar_entry_size_le_128: <= 128 bytes
    • pulsar_entry_size_le_512: <= 512 bytes
    • pulsar_entry_size_le_1_kb: <= 1 KB
    • pulsar_entry_size_le_2_kb: <= 2 KB
    • pulsar_entry_size_le_4_kb: <= 4 KB
    • pulsar_entry_size_le_16_kb: <= 16 KB
    • pulsar_entry_size_le_100_kb: <= 100 KB
    • pulsar_entry_size_le_1_mb: <= 1 MB
    • pulsar_entry_size_le_overflow: > 1 MB
    | - -#### Replication metrics - -If a namespace is configured to be replicated among multiple Pulsar clusters, the corresponding replication metrics is also exposed when `replicationMetricsEnabled` is enabled. - -All the replication metrics are also labelled with `remoteCluster=${pulsar_remote_cluster}`. - -| Name | Type | Description | -|---|---|---| -| pulsar_replication_rate_in | Gauge | The total message rate of the namespace replicating from remote cluster (messages/second). | -| pulsar_replication_rate_out | Gauge | The total message rate of the namespace replicating to remote cluster (messages/second). | -| pulsar_replication_throughput_in | Gauge | The total throughput of the namespace replicating from remote cluster (bytes/second). | -| pulsar_replication_throughput_out | Gauge | The total throughput of the namespace replicating to remote cluster (bytes/second). | -| pulsar_replication_backlog | Gauge | The total backlog of the namespace replicating to remote cluster (messages). | -| pulsar_replication_rate_expired | Gauge | Total rate of messages expired (messages/second). | -| pulsar_replication_connected_count | Gauge | The count of replication-subscriber up and running to replicate to remote cluster. | -| pulsar_replication_delay_in_seconds | Gauge | Time in seconds from the time a message was produced to the time when it is about to be replicated. | - - -### Topic metrics - -> Topic metrics are only exposed when `exposeTopicLevelMetricsInPrometheus` is set to `true`. - -All the topic metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you configured in `broker.conf`. -- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name. -- *topic*: `topic=${pulsar_topic}`. `${pulsar_topic}` is the topic name. - -| Name | Type | Description | -|---|---|---| -| pulsar_subscriptions_count | Gauge | The number of Pulsar subscriptions of the topic served by this broker. | -| pulsar_producers_count | Gauge | The number of active producers of the topic connected to this broker. | -| pulsar_consumers_count | Gauge | The number of active consumers of the topic connected to this broker. | -| pulsar_rate_in | Gauge | The total message rate of the topic coming into this broker (messages/second). | -| pulsar_rate_out | Gauge | The total message rate of the topic going out from this broker (messages/second). | -| pulsar_publish_rate_limit_times | Gauge | The number of times the publish rate limit is triggered. | -| pulsar_throughput_in | Gauge | The total throughput of the topic coming into this broker (bytes/second). | -| pulsar_throughput_out | Gauge | The total throughput of the topic going out from this broker (bytes/second). | -| pulsar_storage_size | Gauge | The total storage size of the topics in this topic owned by this broker (bytes). | -| pulsar_storage_logical_size | Gauge | The storage size of topics in the namespace owned by the broker without replicas (in bytes). | -| pulsar_storage_backlog_size | Gauge | The total backlog size of the topics of this topic owned by this broker (messages). | -| pulsar_storage_offloaded_size | Gauge | The total amount of the data in this topic offloaded to the tiered storage (bytes). | -| pulsar_storage_backlog_quota_limit | Gauge | The total amount of the data in this topic that limit the backlog quota (bytes). 
|
-| pulsar_storage_write_rate | Gauge | The total message batches (entries) written to the storage for this topic (message batches / second). |
-| pulsar_storage_read_rate | Gauge | The total message batches (entries) read from the storage for this topic (message batches / second). |
-| pulsar_subscription_delayed | Gauge | The total message batches (entries) that are delayed for dispatching. |
-| pulsar_storage_write_latency_le_* | Histogram | The rate of entries written for this topic whose storage write latency is smaller than or equal to the given threshold.<br>
    Available thresholds:
    • pulsar_storage_write_latency_le_0_5: <= 0.5ms
    • pulsar_storage_write_latency_le_1: <= 1ms
    • pulsar_storage_write_latency_le_5: <= 5ms
    • pulsar_storage_write_latency_le_10: <= 10ms
    • pulsar_storage_write_latency_le_20: <= 20ms
    • pulsar_storage_write_latency_le_50: <= 50ms
    • pulsar_storage_write_latency_le_100: <= 100ms
    • pulsar_storage_write_latency_le_200: <= 200ms
    • pulsar_storage_write_latency_le_1000: <= 1s
    • pulsar_storage_write_latency_le_overflow: > 1s
    |
-| pulsar_entry_size_le_* | Histogram | The rate of entries written for this topic whose entry size is smaller than or equal to the given threshold.<br>
    Available thresholds:
    • pulsar_entry_size_le_128: <= 128 bytes
    • pulsar_entry_size_le_512: <= 512 bytes
    • pulsar_entry_size_le_1_kb: <= 1 KB
    • pulsar_entry_size_le_2_kb: <= 2 KB
    • pulsar_entry_size_le_4_kb: <= 4 KB
    • pulsar_entry_size_le_16_kb: <= 16 KB
    • pulsar_entry_size_le_100_kb: <= 100 KB
    • pulsar_entry_size_le_1_mb: <= 1 MB
    • pulsar_entry_size_le_overflow: > 1 MB
    |
-| pulsar_in_bytes_total | Counter | The total size in bytes of messages received for this topic. |
-| pulsar_in_messages_total | Counter | The total number of messages received for this topic. |
-| pulsar_out_bytes_total | Counter | The total size in bytes of messages read from this topic. |
-| pulsar_out_messages_total | Counter | The total number of messages read from this topic. |
-| pulsar_compaction_removed_event_count | Gauge | The total number of events removed by compaction. |
-| pulsar_compaction_succeed_count | Gauge | The total number of successful compaction runs. |
-| pulsar_compaction_failed_count | Gauge | The total number of failed compaction runs. |
-| pulsar_compaction_duration_time_in_mills | Gauge | The duration of compaction in milliseconds. |
-| pulsar_compaction_read_throughput | Gauge | The read throughput of compaction. |
-| pulsar_compaction_write_throughput | Gauge | The write throughput of compaction. |
-| pulsar_compaction_latency_le_* | Histogram | The compaction latency, bucketed by the thresholds below.<br>
    Available thresholds:
    • pulsar_compaction_latency_le_0_5: <= 0.5ms
    • pulsar_compaction_latency_le_1: <= 1ms
    • pulsar_compaction_latency_le_5: <= 5ms
    • pulsar_compaction_latency_le_10: <= 10ms
    • pulsar_compaction_latency_le_20: <= 20ms
    • pulsar_compaction_latency_le_50: <= 50ms
    • pulsar_compaction_latency_le_100: <= 100ms
    • pulsar_compaction_latency_le_200: <= 200ms
    • pulsar_compaction_latency_le_1000: <= 1s
    • pulsar_compaction_latency_le_overflow: > 1s
    | -| pulsar_compaction_compacted_entries_count | Gauge | The total number of the compacted entries. | -| pulsar_compaction_compacted_entries_size |Gauge | The total size of the compacted entries. | - -#### Replication metrics - -If a namespace that a topic belongs to is configured to be replicated among multiple Pulsar clusters, the corresponding replication metrics is also exposed when `replicationMetricsEnabled` is enabled. - -All the replication metrics are labelled with `remoteCluster=${pulsar_remote_cluster}`. - -| Name | Type | Description | -|---|---|---| -| pulsar_replication_rate_in | Gauge | The total message rate of the topic replicating from remote cluster (messages/second). | -| pulsar_replication_rate_out | Gauge | The total message rate of the topic replicating to remote cluster (messages/second). | -| pulsar_replication_throughput_in | Gauge | The total throughput of the topic replicating from remote cluster (bytes/second). | -| pulsar_replication_throughput_out | Gauge | The total throughput of the topic replicating to remote cluster (bytes/second). | -| pulsar_replication_backlog | Gauge | The total backlog of the topic replicating to remote cluster (messages). | - -#### Topic lookup metrics - -| Name | Type | Description | -|---|---|---| -| pulsar_broker_load_manager_bundle_assignment | Gauge | The summary of latency of bundles ownership operations. | -| pulsar_broker_lookup | Gauge | The latency of all lookup operations. | -| pulsar_broker_lookup_redirects | Gauge | The number of lookup redirected requests. | -| pulsar_broker_lookup_answers | Gauge | The number of lookup responses (i.e. not redirected requests). | -| pulsar_broker_lookup_failures | Gauge | The number of lookup failures. | -| pulsar_broker_lookup_pending_requests | Gauge | The number of pending lookups in broker. When it is up to the threshold, new requests are rejected. | -| pulsar_broker_topic_load_pending_requests | Gauge | The load of pending topic operations. | - -### ManagedLedgerCache metrics -All the ManagedLedgerCache metrics are labelled with the following labels: -- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file. - -| Name | Type | Description | -| --- | --- | --- | -| pulsar_ml_cache_evictions | Gauge | The number of cache evictions during the last minute. | -| pulsar_ml_cache_hits_rate | Gauge | The number of cache hits per second on the broker side. | -| pulsar_ml_cache_hits_throughput | Gauge | The amount of data is retrieved from the cache on the broker side (in byte/s). | -| pulsar_ml_cache_misses_rate | Gauge | The number of cache misses per second on the broker side. | -| pulsar_ml_cache_misses_throughput | Gauge | The amount of data is not retrieved from the cache on the broker side (in byte/s). 
| -| pulsar_ml_cache_pool_active_allocations | Gauge | The number of currently active allocations in direct arena | -| pulsar_ml_cache_pool_active_allocations_huge | Gauge | The number of currently active huge allocation in direct arena | -| pulsar_ml_cache_pool_active_allocations_normal | Gauge | The number of currently active normal allocations in direct arena | -| pulsar_ml_cache_pool_active_allocations_small | Gauge | The number of currently active small allocations in direct arena | -| pulsar_ml_cache_pool_allocated | Gauge | The total allocated memory of chunk lists in direct arena | -| pulsar_ml_cache_pool_used | Gauge | The total used memory of chunk lists in direct arena | -| pulsar_ml_cache_used_size | Gauge | The size in byte used to store the entries payloads | -| pulsar_ml_count | Gauge | The number of currently opened managed ledgers | - -### ManagedLedger metrics -All the managedLedger metrics are labelled with the following labels: -- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file. -- namespace: namespace=${pulsar_namespace}. ${pulsar_namespace} is the namespace name. -- quantile: quantile=${quantile}. Quantile is only for `Histogram` type metric, and represents the threshold for given Buckets. - -| Name | Type | Description | -| --- | --- | --- | -| pulsar_ml_AddEntryBytesRate | Gauge | The bytes/s rate of messages added | -| pulsar_ml_AddEntryWithReplicasBytesRate | Gauge | The bytes/s rate of messages added with replicas | -| pulsar_ml_AddEntryErrors | Gauge | The number of addEntry requests that failed | -| pulsar_ml_AddEntryLatencyBuckets | Histogram | The latency of adding a ledger entry with a given quantile (threshold), including time spent on waiting in queue on the broker side.
Available quantiles:<br>
    • quantile="0.0_0.5" is AddEntryLatency between (0.0ms, 0.5ms]
    • quantile="0.5_1.0" is AddEntryLatency between (0.5ms, 1.0ms]
    • quantile="1.0_5.0" is AddEntryLatency between (1ms, 5ms]
    • quantile="5.0_10.0" is AddEntryLatency between (5ms, 10ms]
    • quantile="10.0_20.0" is AddEntryLatency between (10ms, 20ms]
    • quantile="20.0_50.0" is AddEntryLatency between (20ms, 50ms]
    • quantile="50.0_100.0" is AddEntryLatency between (50ms, 100ms]
    • quantile="100.0_200.0" is AddEntryLatency between (100ms, 200ms]
    • quantile="200.0_1000.0" is AddEntryLatency between (200ms, 1s]
    | -| pulsar_ml_AddEntryLatencyBuckets_OVERFLOW | Gauge | The number of times the AddEntryLatency is longer than 1 second | -| pulsar_ml_AddEntryMessagesRate | Gauge | The msg/s rate of messages added | -| pulsar_ml_AddEntrySucceed | Gauge | The number of addEntry requests that succeeded | -| pulsar_ml_EntrySizeBuckets | Histogram | The added entry size of a ledger with a given quantile.
Available quantiles:<br>
    • quantile="0.0_128.0" is EntrySize between (0byte, 128byte]
    • quantile="128.0_512.0" is EntrySize between (128byte, 512byte]
    • quantile="512.0_1024.0" is EntrySize between (512byte, 1KB]
    • quantile="1024.0_2048.0" is EntrySize between (1KB, 2KB]
    • quantile="2048.0_4096.0" is EntrySize between (2KB, 4KB]
    • quantile="4096.0_16384.0" is EntrySize between (4KB, 16KB]
    • quantile="16384.0_102400.0" is EntrySize between (16KB, 100KB]
    • quantile="102400.0_1232896.0" is EntrySize between (100KB, 1MB]
    | -| pulsar_ml_EntrySizeBuckets_OVERFLOW |Gauge | The number of times the EntrySize is larger than 1MB | -| pulsar_ml_LedgerSwitchLatencyBuckets | Histogram | The ledger switch latency with a given quantile.
Available quantiles:<br>
    • quantile="0.0_0.5" is EntrySize between (0ms, 0.5ms]
    • quantile="0.5_1.0" is EntrySize between (0.5ms, 1ms]
    • quantile="1.0_5.0" is EntrySize between (1ms, 5ms]
    • quantile="5.0_10.0" is EntrySize between (5ms, 10ms]
    • quantile="10.0_20.0" is EntrySize between (10ms, 20ms]
    • quantile="20.0_50.0" is EntrySize between (20ms, 50ms]
    • quantile="50.0_100.0" is EntrySize between (50ms, 100ms]
    • quantile="100.0_200.0" is EntrySize between (100ms, 200ms]
    • quantile="200.0_1000.0" is EntrySize between (200ms, 1000ms]
    | -| pulsar_ml_LedgerSwitchLatencyBuckets_OVERFLOW | Gauge | The number of times the ledger switch latency is longer than 1 second | -| pulsar_ml_LedgerAddEntryLatencyBuckets | Histogram | The latency for bookie client to persist a ledger entry from broker to BookKeeper service with a given quantile (threshold).
Available quantiles:<br>
    • quantile="0.0_0.5" is LedgerAddEntryLatency between (0.0ms, 0.5ms]
    • quantile="0.5_1.0" is LedgerAddEntryLatency between (0.5ms, 1.0ms]
    • quantile="1.0_5.0" is LedgerAddEntryLatency between (1ms, 5ms]
    • quantile="5.0_10.0" is LedgerAddEntryLatency between (5ms, 10ms]
    • quantile="10.0_20.0" is LedgerAddEntryLatency between (10ms, 20ms]
    • quantile="20.0_50.0" is LedgerAddEntryLatency between (20ms, 50ms]
    • quantile="50.0_100.0" is LedgerAddEntryLatency between (50ms, 100ms]
    • quantile="100.0_200.0" is LedgerAddEntryLatency between (100ms, 200ms]
    • quantile="200.0_1000.0" is LedgerAddEntryLatency between (200ms, 1s]
    |
-| pulsar_ml_LedgerAddEntryLatencyBuckets_OVERFLOW | Gauge | The number of times the LedgerAddEntryLatency is longer than 1 second |
-| pulsar_ml_MarkDeleteRate | Gauge | The rate of mark-delete operations (ops/s) |
-| pulsar_ml_NumberOfMessagesInBacklog | Gauge | The number of backlog messages for all the consumers |
-| pulsar_ml_ReadEntriesBytesRate | Gauge | The bytes/s rate of messages read |
-| pulsar_ml_ReadEntriesErrors | Gauge | The number of readEntries requests that failed |
-| pulsar_ml_ReadEntriesRate | Gauge | The msg/s rate of messages read |
-| pulsar_ml_ReadEntriesSucceeded | Gauge | The number of readEntries requests that succeeded |
-| pulsar_ml_StoredMessagesSize | Gauge | The total size of the messages in active ledgers (accounting for the multiple copies stored) |
-
-### Managed cursor acknowledgment state
-
-The acknowledgment state is persisted to the ledger first. When it fails to be persisted to the ledger, it is persisted to ZooKeeper. To track acknowledgment stats, you can configure the metrics for the managed cursor.
-
-All the cursor acknowledgment state metrics are labelled with the following labels:
-
-- namespace: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.
-
-- ledger_name: `ledger_name=${pulsar_ledger_name}`. `${pulsar_ledger_name}` is the ledger name.
-
-- cursor_name: `cursor_name=${pulsar_cursor_name}`. `${pulsar_cursor_name}` is the cursor name.
-
-Name |Type |Description
-|---|---|---
-brk_ml_cursor_persistLedgerSucceed|Gauge|The number of acknowledgment states that are persisted to a ledger.|
-brk_ml_cursor_persistLedgerErrors|Gauge|The number of ledger errors that occurred when acknowledgment states failed to be persisted to the ledger.|
-brk_ml_cursor_persistZookeeperSucceed|Gauge|The number of acknowledgment states that are persisted to ZooKeeper.
-brk_ml_cursor_persistZookeeperErrors|Gauge|The number of ledger errors that occurred when acknowledgment states failed to be persisted to ZooKeeper.
-brk_ml_cursor_nonContiguousDeletedMessagesRange|Gauge|The number of non-contiguous deleted-message ranges.
-brk_ml_cursor_writeLedgerSize|Gauge|The size of data written to the ledger.
-brk_ml_cursor_writeLedgerLogicalSize|Gauge|The size of data written to the ledger (without replicas).
-brk_ml_cursor_readLedgerSize|Gauge|The size of data read from the ledger.
-
-### LoadBalancing metrics
-All the loadbalancing metrics are labelled with the following labels:
-- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file.
-- broker: broker=${broker}. ${broker} is the IP address of the broker.
-- metric: metric="loadBalancing".
-
-| Name | Type | Description |
-| --- | --- | --- |
-| pulsar_lb_bandwidth_in_usage | Gauge | The broker inbound bandwidth usage (in percent). |
-| pulsar_lb_bandwidth_out_usage | Gauge | The broker outbound bandwidth usage (in percent). |
-| pulsar_lb_cpu_usage | Gauge | The broker CPU usage (in percent). |
-| pulsar_lb_directMemory_usage | Gauge | The broker process direct memory usage (in percent). |
-| pulsar_lb_memory_usage | Gauge | The broker process memory usage (in percent). |
-
-#### BundleUnloading metrics
-All the bundleUnloading metrics are labelled with the following labels:
-- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file.
-- metric: metric="bundleUnloading".
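-
-Before reading the table, it can help to see what these series look like when scraped. A minimal sketch, assuming a broker on localhost with the default `webServicePort` of 8080; the label values in the comment are placeholders rather than real output:
-
-```shell
-# Spot-check the bundle-unloading counters on one broker. Output lines
-# have the shape:
-#   pulsar_lb_unload_bundle_count{cluster="<cluster>",metric="bundleUnloading"} <count>
-curl -s http://localhost:8080/metrics | grep '^pulsar_lb_unload'
-```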
-
-| Name | Type | Description |
-| --- | --- | --- |
-| pulsar_lb_unload_broker_count | Counter | The number of brokers unloaded in this bundle-unloading cycle |
-| pulsar_lb_unload_bundle_count | Counter | The number of bundles unloaded in this bundle-unloading cycle |
-
-#### BundleSplit metrics
-All the bundleSplit metrics are labelled with the following labels:
-- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file.
-- metric: metric="bundlesSplit".
-
-| Name | Type | Description |
-| --- | --- | --- |
-| pulsar_lb_bundles_split_count | Counter | The number of bundle splits in this bundle-splitting check interval |
-
-#### Bundle metrics
-All the bundle metrics are labelled with the following labels:
-- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file.
-- broker: broker=${broker}. ${broker} is the IP address of the broker.
-- bundle: bundle=${bundle}. ${bundle} is the bundle range on this broker.
-- metric: metric="bundle".
-
-| Name | Type | Description |
-| --- | --- | --- |
-| pulsar_bundle_msg_rate_in | Gauge | The total message rate coming into the topics in this bundle (messages/second). |
-| pulsar_bundle_msg_rate_out | Gauge | The total message rate going out from the topics in this bundle (messages/second). |
-| pulsar_bundle_topics_count | Gauge | The topic count in this bundle. |
-| pulsar_bundle_consumer_count | Gauge | The consumer count of the topics in this bundle. |
-| pulsar_bundle_producer_count | Gauge | The producer count of the topics in this bundle. |
-| pulsar_bundle_msg_throughput_in | Gauge | The total throughput coming into the topics in this bundle (bytes/second). |
-| pulsar_bundle_msg_throughput_out | Gauge | The total throughput going out from the topics in this bundle (bytes/second). |
-
-### Subscription metrics
-
-> Subscription metrics are only exposed when `exposeTopicLevelMetricsInPrometheus` is set to `true`.
-
-All the subscription metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.
-- *topic*: `topic=${pulsar_topic}`. `${pulsar_topic}` is the topic name.
-- *subscription*: `subscription=${subscription}`. `${subscription}` is the topic subscription name.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_subscription_back_log | Gauge | The total backlog of a subscription (entries). |
-| pulsar_subscription_back_log_no_delayed | Gauge | The backlog of a subscription, excluding delayed messages (entries). |
-| pulsar_subscription_delayed | Gauge | The total number of messages that are delayed for dispatching for a subscription (messages). |
-| pulsar_subscription_msg_rate_redeliver | Gauge | The total message rate for messages being redelivered (messages/second). |
-| pulsar_subscription_unacked_messages | Gauge | The total number of unacknowledged messages of a subscription (messages). |
-| pulsar_subscription_blocked_on_unacked_messages | Gauge | Indicates whether a subscription is blocked on unacknowledged messages.<br>
• 1 means the subscription is blocked waiting for unacknowledged messages to be acknowledged.<br>
• 0 means the subscription is not blocked waiting for unacknowledged messages to be acknowledged.<br>
    | -| pulsar_subscription_msg_rate_out | Gauge | The total message dispatch rate for a subscription (messages/second). | -| pulsar_subscription_msg_throughput_out | Gauge | The total message dispatch throughput for a subscription (bytes/second). | - -### Consumer metrics - -> Consumer metrics are only exposed when both `exposeTopicLevelMetricsInPrometheus` and `exposeConsumerLevelMetricsInPrometheus` are set to `true`. - -All the consumer metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file. -- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name. -- *topic*: `topic=${pulsar_topic}`. `${pulsar_topic}` is the topic name. -- *subscription*: `subscription=${subscription}`. `${subscription}` is the topic subscription name. -- *consumer_name*: `consumer_name=${consumer_name}`. `${consumer_name}` is the topic consumer name. -- *consumer_id*: `consumer_id=${consumer_id}`. `${consumer_id}` is the topic consumer id. - -| Name | Type | Description | -|---|---|---| -| pulsar_consumer_msg_rate_redeliver | Gauge | The total message rate for message being redelivered (messages/second). | -| pulsar_consumer_unacked_messages | Gauge | The total number of unacknowledged messages of a consumer (messages). | -| pulsar_consumer_blocked_on_unacked_messages | Gauge | Indicate whether a consumer is blocked on unacknowledged messages or not.
• 1 means the consumer is blocked waiting for unacknowledged messages to be acknowledged.<br>
• 0 means the consumer is not blocked waiting for unacknowledged messages to be acknowledged.<br>
    |
-| pulsar_consumer_msg_rate_out | Gauge | The total message dispatch rate for a consumer (messages/second). |
-| pulsar_consumer_msg_throughput_out | Gauge | The total message dispatch throughput for a consumer (bytes/second). |
-| pulsar_consumer_available_permits | Gauge | The available permits for a consumer. |
-
-### Managed ledger bookie client metrics
-
-All the managed ledger bookie client metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-
-| Name | Type | Description |
-| --- | --- | --- |
-| pulsar_managedLedger_client_bookkeeper_ml_scheduler_completed_tasks_* | Gauge | The number of tasks completed by the scheduler executor.<br>
The number of metrics is determined by the number of scheduler executor threads, configured by `managedLedgerNumSchedulerThreads` in `broker.conf`.<br>
    | -| pulsar_managedLedger_client_bookkeeper_ml_scheduler_queue_* | Gauge | The number of tasks queued in the scheduler executor's queue.
The number of metrics is determined by the number of scheduler executor threads, configured by `managedLedgerNumSchedulerThreads` in `broker.conf`.<br>
    | -| pulsar_managedLedger_client_bookkeeper_ml_scheduler_total_tasks_* | Gauge | The total number of tasks the scheduler executor received.
The number of metrics is determined by the number of scheduler executor threads, configured by `managedLedgerNumSchedulerThreads` in `broker.conf`.<br>
    | -| pulsar_managedLedger_client_bookkeeper_ml_scheduler_task_execution | Summary | The scheduler task execution latency calculated in milliseconds. | -| pulsar_managedLedger_client_bookkeeper_ml_scheduler_task_queued | Summary | The scheduler task queued latency calculated in milliseconds. | - -### Token metrics - -All the token metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file. - -| Name | Type | Description | -|---|---|---| -| pulsar_expired_token_count | Counter | The number of expired tokens in Pulsar. | -| pulsar_expiring_token_minutes | Histogram | The remaining time of expiring tokens in minutes. | - -### Authentication metrics - -All the authentication metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file. -- *provider_name*: `provider_name=${provider_name}`. `${provider_name}` is the class name of the authentication provider. -- *auth_method*: `auth_method=${auth_method}`. `${auth_method}` is the authentication method of the authentication provider. -- *reason*: `reason=${reason}`. `${reason}` is the reason for failing authentication operation. (This label is only for `pulsar_authentication_failures_count`.) - -| Name | Type | Description | -|---|---|---| -| pulsar_authentication_success_count| Counter | The number of successful authentication operations. | -| pulsar_authentication_failures_count | Counter | The number of failing authentication operations. | - -### Connection metrics - -All the connection metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file. -- *broker*: `broker=${advertised_address}`. `${advertised_address}` is the advertised address of the broker. -- *metric*: `metric=${metric}`. `${metric}` is the connection metric collective name. - -| Name | Type | Description | -|---|---|---| -| pulsar_active_connections| Gauge | The number of active connections. | -| pulsar_connection_created_total_count | Gauge | The total number of connections. | -| pulsar_connection_create_success_count | Gauge | The number of successfully created connections. | -| pulsar_connection_create_fail_count | Gauge | The number of failed connections. | -| pulsar_connection_closed_total_count | Gauge | The total number of closed connections. | -| pulsar_broker_throttled_connections | Gauge | The number of throttled connections. | -| pulsar_broker_throttled_connections_global_limit | Gauge | The number of throttled connections because of per-connection limit. | - -### Jetty metrics - -> For a functions-worker running separately from brokers, its Jetty metrics are only exposed when `includeStandardPrometheusMetrics` is set to `true`. - -All the jetty metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file. - -| Name | Type | Description | -|---|---|---| -| jetty_requests_total | Counter | Number of requests. | -| jetty_requests_active | Gauge | Number of requests currently active. | -| jetty_requests_active_max | Gauge | Maximum number of requests that have been active at once. | -| jetty_request_time_max_seconds | Gauge | Maximum time spent handling requests. 
|
-| jetty_request_time_seconds_total | Counter | Total time spent in all request handling. |
-| jetty_dispatched_total | Counter | Number of dispatches. |
-| jetty_dispatched_active | Gauge | Number of dispatches currently active. |
-| jetty_dispatched_active_max | Gauge | Maximum number of active dispatches being handled. |
-| jetty_dispatched_time_max | Gauge | Maximum time spent in dispatch handling. |
-| jetty_dispatched_time_seconds_total | Counter | Total time spent in dispatch handling. |
-| jetty_async_requests_total | Counter | Total number of async requests. |
-| jetty_async_requests_waiting | Gauge | Currently waiting async requests. |
-| jetty_async_requests_waiting_max | Gauge | Maximum number of waiting async requests. |
-| jetty_async_dispatches_total | Counter | Number of requests that have been asynchronously dispatched. |
-| jetty_expires_total | Counter | Number of async requests that have expired. |
-| jetty_responses_total | Counter | Number of responses, labeled by status code. The `code` label can be "1xx", "2xx", "3xx", "4xx", or "5xx". |
-| jetty_stats_seconds | Gauge | Time in seconds that stats have been collected for. |
-| jetty_responses_bytes_total | Counter | Total number of bytes across all responses. |
-
-## Pulsar Functions
-
-All the Pulsar Functions metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_function_processed_successfully_total | Counter | The total number of messages processed successfully. |
-| pulsar_function_processed_successfully_total_1min | Counter | The total number of messages processed successfully in the last 1 minute. |
-| pulsar_function_system_exceptions_total | Counter | The total number of system exceptions. |
-| pulsar_function_system_exceptions_total_1min | Counter | The total number of system exceptions in the last 1 minute. |
-| pulsar_function_user_exceptions_total | Counter | The total number of user exceptions. |
-| pulsar_function_user_exceptions_total_1min | Counter | The total number of user exceptions in the last 1 minute. |
-| pulsar_function_process_latency_ms | Summary | The process latency in milliseconds. |
-| pulsar_function_process_latency_ms_1min | Summary | The process latency in milliseconds in the last 1 minute. |
-| pulsar_function_last_invocation | Gauge | The timestamp of the last invocation of the function. |
-| pulsar_function_received_total | Counter | The total number of messages received from source. |
-| pulsar_function_received_total_1min | Counter | The total number of messages received from source in the last 1 minute. |
-| pulsar_function_user_metric_ | Summary | The user-defined metrics. |
-
-## Connectors
-
-All the Pulsar connector metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.
-
-Connector metrics contain **source** metrics and **sink** metrics.
-
-- **Source** metrics
-
-  | Name | Type | Description |
-  |---|---|---|
-  pulsar_source_written_total|Counter|The total number of records written to a Pulsar topic.
- pulsar_source_written_total_1min|Counter|The total number of records written to a Pulsar topic in the last 1 minute. - pulsar_source_received_total|Counter|The total number of records received from source. - pulsar_source_received_total_1min|Counter|The total number of records received from source in the last 1 minute. - pulsar_source_last_invocation|Gauge|The timestamp of the last invocation of the source. - pulsar_source_source_exception|Gauge|The exception from a source. - pulsar_source_source_exceptions_total|Counter|The total number of source exceptions. - pulsar_source_source_exceptions_total_1min |Counter|The total number of source exceptions in the last 1 minute. - pulsar_source_system_exception|Gauge|The exception from system code. - pulsar_source_system_exceptions_total|Counter|The total number of system exceptions. - pulsar_source_system_exceptions_total_1min|Counter|The total number of system exceptions in the last 1 minute. - pulsar_source_user_metric_ | Summary|The user-defined metrics. - -- **Sink** metrics - - | Name | Type | Description | - |---|---|---| - pulsar_sink_written_total|Counter| The total number of records processed by a sink. - pulsar_sink_written_total_1min|Counter| The total number of records processed by a sink in the last 1 minute. - pulsar_sink_received_total_1min|Counter| The total number of messages that a sink has received from Pulsar topics in the last 1 minute. - pulsar_sink_received_total|Counter| The total number of records that a sink has received from Pulsar topics. - pulsar_sink_last_invocation|Gauge|The timestamp of the last invocation of the sink. - pulsar_sink_sink_exception|Gauge|The exception from a sink. - pulsar_sink_sink_exceptions_total|Counter|The total number of sink exceptions. - pulsar_sink_sink_exceptions_total_1min |Counter|The total number of sink exceptions in the last 1 minute. - pulsar_sink_system_exception|Gauge|The exception from system code. - pulsar_sink_system_exceptions_total|Counter|The total number of system exceptions. - pulsar_sink_system_exceptions_total_1min|Counter|The total number of system exceptions in the last 1 minute. - pulsar_sink_user_metric_ | Summary|The user-defined metrics. - -## Proxy - -All the proxy metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file. -- *kubernetes_pod_name*: `kubernetes_pod_name=${kubernetes_pod_name}`. `${kubernetes_pod_name}` is the Kubernetes pod name. - -| Name | Type | Description | -|---|---|---| -| pulsar_proxy_active_connections | Gauge | Number of connections currently active in the proxy. | -| pulsar_proxy_new_connections | Counter | Counter of connections being opened in the proxy. | -| pulsar_proxy_rejected_connections | Counter | Counter for connections rejected due to throttling. | -| pulsar_proxy_binary_ops | Counter | Counter of proxy operations. | -| pulsar_proxy_binary_bytes | Counter | Counter of proxy bytes. | - -## Pulsar SQL Worker - -| Name | Type | Description | -|---|---|---| -| split_bytes_read | Counter | Number of bytes read from BookKeeper. | -| split_num_messages_deserialized | Counter | Number of messages deserialized. | -| split_num_record_deserialized | Counter | Number of records deserialized. | -| split_bytes_read_per_query | Summary | Total number of bytes read per query. | -| split_entry_deserialize_time | Summary | Time spent on derserializing entries. 
|
-| split_entry_deserialize_time_per_query | Summary | Time spent on deserializing entries per query. |
-| split_entry_queue_dequeue_wait_time | Summary | Time spent on waiting to get an entry from the entry queue because it is empty. |
-| split_entry_queue_dequeue_wait_time_per_query | Summary | Total time spent on waiting to get an entry from the entry queue per query. |
-| split_message_queue_dequeue_wait_time_per_query | Summary | Time spent on waiting to dequeue from the message queue because it is empty, per query. |
-| split_message_queue_enqueue_wait_time | Summary | Time spent on waiting for message queue enqueue because the message queue is full. |
-| split_message_queue_enqueue_wait_time_per_query | Summary | Time spent on waiting for message queue enqueue because the message queue is full, per query. |
-| split_num_entries_per_batch | Summary | Number of entries per batch. |
-| split_num_entries_per_query | Summary | Number of entries per query. |
-| split_num_messages_deserialized_per_entry | Summary | Number of messages deserialized per entry. |
-| split_num_messages_deserialized_per_query | Summary | Number of messages deserialized per query. |
-| split_read_attempts | Summary | Number of read attempts (these fail if the queues are full). |
-| split_read_attempts_per_query | Summary | Number of read attempts per query. |
-| split_read_latency_per_batch | Summary | Latency of reads per batch. |
-| split_read_latency_per_query | Summary | Total read latency per query. |
-| split_record_deserialize_time | Summary | Time spent on deserializing messages to records (for example, Avro or JSON). |
-| split_record_deserialize_time_per_query | Summary | Time spent on deserializing messages to records per query. |
-| split_total_execution_time | Summary | The total execution time. |
-
-## Pulsar transaction
-
-All the transaction metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-- *coordinator_id*: `coordinator_id=${coordinator_id}`. `${coordinator_id}` is the coordinator ID.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_txn_active_count | Gauge | Number of active transactions. |
-| pulsar_txn_created_count | Counter | Number of created transactions. |
-| pulsar_txn_committed_count | Counter | Number of committed transactions. |
-| pulsar_txn_aborted_count | Counter | Number of aborted transactions of this coordinator. |
-| pulsar_txn_timeout_count | Counter | Number of timed-out transactions. |
-| pulsar_txn_append_log_count | Counter | Number of transaction log entries appended. |
-| pulsar_txn_execution_latency_le_* | Histogram | Transaction execution latency.<br>
    The available latencies are listed below:
    • latency="10" is TransactionExecutionLatency between (0ms, 10ms]
    • latency="20" is TransactionExecutionLatency between (10ms, 20ms]
    • latency="50" is TransactionExecutionLatency between (20ms, 50ms]
    • latency="100" is TransactionExecutionLatency between (50ms, 100ms]
    • latency="500" is TransactionExecutionLatency between (100ms, 500ms]
    • latency="1000" is TransactionExecutionLatency between (500ms, 1000ms]
    • latency="5000" is TransactionExecutionLatency between (1s, 5s]
    • latency="15000" is TransactionExecutionLatency between (5s, 15s]
    • latency="30000" is TransactionExecutionLatency between (15s, 30s]
    • latency="60000" is TransactionExecutionLatency between (30s, 60s]
    • latency="300000" is TransactionExecutionLatency between (1m,5m]
    • latency="1500000" is TransactionExecutionLatency between (5m,15m]
    • latency="3000000" is TransactionExecutionLatency between (15m,30m]
    • latency="overflow" is TransactionExecutionLatency between (30m,∞]
    | diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/reference-pulsar-admin.md b/site2/website/versioned_docs/version-2.10.1-deprecated/reference-pulsar-admin.md deleted file mode 100644 index 5ec74a86e432b7..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/reference-pulsar-admin.md +++ /dev/null @@ -1,3394 +0,0 @@ ---- -id: pulsar-admin -title: Pulsar admin CLI -sidebar_label: "Pulsar Admin CLI" -original_id: pulsar-admin ---- - -> **Important** -> -> This page is deprecated and not updated anymore. For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](/tools/pulsar-admin/) - -The `pulsar-admin` tool enables you to manage Pulsar installations, including clusters, brokers, namespaces, tenants, and more. - -Usage - -```bash - -$ pulsar-admin command - -``` - -Commands -* `broker-stats` -* `brokers` -* `clusters` -* `functions` -* `functions-worker` -* `namespaces` -* `ns-isolation-policy` -* `sources` - - For more information, see [here](io-cli.md#sources) -* `sinks` - - For more information, see [here](io-cli.md#sinks) -* `topics` -* `tenants` -* `resource-quotas` -* `schemas` - -## `broker-stats` - -Operations to collect broker statistics - -```bash - -$ pulsar-admin broker-stats subcommand - -``` - -Subcommands -* `allocator-stats` -* `topics(destinations)` -* `mbeans` -* `monitoring-metrics` -* `load-report` - - -### `allocator-stats` - -Dump allocator stats - -Usage - -```bash - -$ pulsar-admin broker-stats allocator-stats allocator-name - -``` - -### `topics(destinations)` - -Dump topic stats - -Usage - -```bash - -$ pulsar-admin broker-stats topics options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-i`, `--indent`|Indent JSON output|false| - -### `mbeans` - -Dump Mbean stats - -Usage - -```bash - -$ pulsar-admin broker-stats mbeans options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-i`, `--indent`|Indent JSON output|false| - - -### `monitoring-metrics` - -Dump metrics for monitoring - -Usage - -```bash - -$ pulsar-admin broker-stats monitoring-metrics options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-i`, `--indent`|Indent JSON output|false| - - -### `load-report` - -Dump broker load-report - -Usage - -```bash - -$ pulsar-admin broker-stats load-report - -``` - -## `brokers` - -Operations about brokers - -```bash - -$ pulsar-admin brokers subcommand - -``` - -Subcommands -* `list` -* `namespaces` -* `update-dynamic-config` -* `list-dynamic-config` -* `get-all-dynamic-config` -* `get-internal-config` -* `get-runtime-config` -* `healthcheck` - -### `list` -List active brokers of the cluster - -Usage - -```bash - -$ pulsar-admin brokers list cluster-name - -``` - -### `leader-broker` -Get the information of the leader broker - -Usage - -```bash - -$ pulsar-admin brokers leader-broker - -``` - -### `namespaces` -List namespaces owned by the broker - -Usage - -```bash - -$ pulsar-admin brokers namespaces cluster-name options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--url`|The URL for the broker|| - - -### `update-dynamic-config` -Update a broker's dynamic service configuration - -Usage - -```bash - -$ pulsar-admin brokers update-dynamic-config options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--config`|Service configuration parameter name|| -|`--value`|Value for the configuration parameter value specified using the `--config` flag|| - - -### 
`list-dynamic-config` -Get list of updatable configuration name - -Usage - -```bash - -$ pulsar-admin brokers list-dynamic-config - -``` - -### `delete-dynamic-config` -Delete dynamic-serviceConfiguration of broker - -Usage - -```bash - -$ pulsar-admin brokers delete-dynamic-config options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--config`|Service configuration parameter name|| - - -### `get-all-dynamic-config` -Get all overridden dynamic-configuration values - -Usage - -```bash - -$ pulsar-admin brokers get-all-dynamic-config - -``` - -### `get-internal-config` -Get internal configuration information - -Usage - -```bash - -$ pulsar-admin brokers get-internal-config - -``` - -### `get-runtime-config` -Get runtime configuration values - -Usage - -```bash - -$ pulsar-admin brokers get-runtime-config - -``` - -### `healthcheck` -Run a health check against the broker - -Usage - -```bash - -$ pulsar-admin brokers healthcheck - -``` - -## `clusters` -Operations about clusters - -Usage - -```bash - -$ pulsar-admin clusters subcommand - -``` - -Subcommands -* `get` -* `create` -* `update` -* `delete` -* `list` -* `update-peer-clusters` -* `get-peer-clusters` -* `get-failure-domain` -* `create-failure-domain` -* `update-failure-domain` -* `delete-failure-domain` -* `list-failure-domains` - - -### `get` -Get the configuration data for the specified cluster - -Usage - -```bash - -$ pulsar-admin clusters get cluster-name - -``` - -### `create` -Provisions a new cluster. This operation requires Pulsar super-user privileges. - -Usage - -```bash - -$ pulsar-admin clusters create cluster-name options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--broker-url`|The URL for the broker service.|| -|`--broker-url-secure`|The broker service URL for a secure connection|| -|`--url`|service-url|| -|`--url-secure`|service-url for secure connection|| - - -### `update` -Update the configuration for a cluster - -Usage - -```bash - -$ pulsar-admin clusters update cluster-name options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--broker-url`|The URL for the broker service.|| -|`--broker-url-secure`|The broker service URL for a secure connection|| -|`--url`|service-url|| -|`--url-secure`|service-url for secure connection|| - - -### `delete` -Deletes an existing cluster - -Usage - -```bash - -$ pulsar-admin clusters delete cluster-name - -``` - -### `list` -List the existing clusters - -Usage - -```bash - -$ pulsar-admin clusters list - -``` - -### `update-peer-clusters` -Update peer cluster names - -Usage - -```bash - -$ pulsar-admin clusters update-peer-clusters cluster-name options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--peer-clusters`|Comma separated peer cluster names (Pass empty string "" to delete list)|| - -### `get-peer-clusters` -Get list of peer clusters - -Usage - -```bash - -$ pulsar-admin clusters get-peer-clusters - -``` - -### `get-failure-domain` -Get the configuration brokers of a failure domain - -Usage - -```bash - -$ pulsar-admin clusters get-failure-domain cluster-name options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster|| - -### `create-failure-domain` -Create a new failure domain for a cluster (updates it if already created) - -Usage - -```bash - -$ pulsar-admin clusters create-failure-domain cluster-name options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| 
-|`--broker-list`|Comma separated broker list|| -|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster|| - -### `update-failure-domain` -Update failure domain for a cluster (creates a new one if it does not exist) - -Usage - -```bash - -$ pulsar-admin clusters update-failure-domain cluster-name options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--broker-list`|Comma separated broker list|| -|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster|| - -### `delete-failure-domain` -Delete an existing failure domain - -Usage - -```bash - -$ pulsar-admin clusters delete-failure-domain cluster-name options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster|| - -### `list-failure-domains` -List the existing failure domains for a cluster - -Usage - -```bash - -$ pulsar-admin clusters list-failure-domains cluster-name - -``` - -## `functions` - -A command-line interface for Pulsar Functions - -Usage - -```bash - -$ pulsar-admin functions subcommand - -``` - -Subcommands -* `localrun` -* `create` -* `delete` -* `update` -* `get` -* `restart` -* `stop` -* `start` -* `status` -* `stats` -* `list` -* `querystate` -* `putstate` -* `trigger` - - -### `localrun` -Run the Pulsar Function locally (rather than deploying it to the Pulsar cluster) - - -Usage - -```bash - -$ pulsar-admin functions localrun options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--cpu`|The CPU in cores that needs to be allocated per function instance (applicable only to docker runtime)|| -|`--ram`|The RAM in bytes that needs to be allocated per function instance (applicable only to process/docker runtime)|| -|`--disk`|The disk in bytes that needs to be allocated per function instance (applicable only to docker runtime)|| -|`--auto-ack`|Whether or not the framework will automatically acknowledge messages|| -|`--subs-name`|Pulsar source subscription name if user wants a specific subscription-name for input-topic consumer|| -|`--broker-service-url`|The URL of the Pulsar broker|| -|`--classname`|The function's class name|| -|`--custom-serde-inputs`|The map of input topics to SerDe class names (as a JSON string)|| -|`--custom-schema-inputs`|The map of input topics to Schema class names (as a JSON string)|| -|`--client-auth-params`|Client authentication param|| -|`--client-auth-plugin`|Client authentication plugin using which function-process can connect to broker|| -|`--function-config-file`|The path to a YAML config file specifying the function's configuration|| -|`--hostname-verification-enabled`|Enable hostname verification|false| -|`--instance-id-offset`|Start the instanceIds from this offset|0| -|`--inputs`|The function's input topic or topics (multiple topics can be specified as a comma-separated list)|| -|`--log-topic`|The topic to which the function's logs are produced|| -|`--jar`|Path to the jar file for the function (if the function is written in Java). 
It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--output`|The function's output topic (If none is specified, no output is written)|| -|`--output-serde-classname`|The SerDe class to be used for messages output by the function|| -|`--parallelism`|The function’s parallelism factor, i.e. the number of instances of the function to run|1| -|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the function. Possible Values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]|ATLEAST_ONCE| -|`--py`|Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--go`|Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--schema-type`|The builtin schema type or custom schema class name to be used for messages output by the function|| -|`--sliding-interval-count`|The number of messages after which the window slides|| -|`--sliding-interval-duration-ms`|The time duration after which the window slides|| -|`--state-storage-service-url`|The URL for the state storage service. By default, it is set to the service URL of the Apache BookKeeper. This service URL must be added manually when the Pulsar Function runs locally. || -|`--tenant`|The function’s tenant|| -|`--topics-pattern`|The topic pattern to consume from a list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (supported for Java functions only)|| -|`--user-config`|User-defined config key/values|| -|`--window-length-count`|The number of messages per window|| -|`--window-length-duration-ms`|The time duration of the window in milliseconds|| -|`--dead-letter-topic`|The topic where all messages which could not be processed successfully are sent|| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--max-message-retries`|How many times should we try to process a message before giving up|| -|`--retain-ordering`|Function consumes and processes messages in order|| -|`--retain-key-ordering`|Function consumes and processes messages in key order|| -|`--timeout-ms`|The message timeout in milliseconds|| -|`--tls-allow-insecure`|Allow insecure TLS connection|false| -|`--tls-trust-cert-path`|The TLS trust certificate file path|| -|`--use-tls`|Use a TLS connection|false| -|`--producer-config`| The custom producer configuration (as a JSON string) | |
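-
-Example
-
-An illustrative sketch of running a Java function locally. The jar path, class name, function name, and topic names below are placeholders, not values from this reference:
-
-```bash
-
-# Run a Java function locally against the default public/default namespace
-$ pulsar-admin functions localrun \
---jar my-function.jar \
---classname org.example.MyFunction \
---tenant public \
---namespace default \
---name my-function \
---inputs persistent://public/default/input-topic \
---output persistent://public/default/output-topic
-
-```
-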
-### `create` -Create a Pulsar Function in cluster mode (i.e. deploy it on a Pulsar cluster) - -Usage - -```bash - -$ pulsar-admin functions create options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--cpu`|The CPU in cores that needs to be allocated per function instance (applicable only to docker runtime)|| -|`--ram`|The RAM in bytes that needs to be allocated per function instance (applicable only to process/docker runtime)|| -|`--disk`|The disk in bytes that needs to be allocated per function instance (applicable only to docker runtime)|| -|`--auto-ack`|Whether or not the framework will automatically acknowledge messages|| -|`--subs-name`|Pulsar source subscription name if user wants a specific subscription-name for input-topic consumer|| -|`--classname`|The function's class name|| -|`--custom-serde-inputs`|The map of input topics to SerDe class names (as a JSON string)|| -|`--custom-schema-inputs`|The map of input topics to Schema class names (as a JSON string)|| -|`--function-config-file`|The path to a YAML config file specifying the function's configuration|| -|`--inputs`|The function's input topic or topics (multiple topics can be specified as a comma-separated list)|| -|`--log-topic`|The topic to which the function's logs are produced|| -|`--jar`|Path to the jar file for the function (if the function is written in Java). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--name`|The function's name|| -|`--namespace`|The function’s namespace|| -|`--output`|The function's output topic (If none is specified, no output is written)|| -|`--output-serde-classname`|The SerDe class to be used for messages output by the function|| -|`--parallelism`|The function’s parallelism factor, i.e. the number of instances of the function to run|1| -|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the function. Possible Values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]|ATLEAST_ONCE| -|`--py`|Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--go`|Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--schema-type`|The builtin schema type or custom schema class name to be used for messages output by the function|| -|`--sliding-interval-count`|The number of messages after which the window slides|| -|`--sliding-interval-duration-ms`|The time duration after which the window slides|| -|`--tenant`|The function’s tenant|| -|`--topics-pattern`|The topic pattern to consume from a list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive.
Add SerDe class name for a pattern in --custom-serde-inputs (supported for Java functions only)|| -|`--user-config`|User-defined config key/values|| -|`--window-length-count`|The number of messages per window|| -|`--window-length-duration-ms`|The time duration of the window in milliseconds|| -|`--dead-letter-topic`|The topic where all messages which could not be processed successfully are sent|| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--max-message-retries`|How many times should we try to process a message before giving up|| -|`--retain-ordering`|Function consumes and processes messages in order|| -|`--retain-key-ordering`|Function consumes and processes messages in key order|| -|`--timeout-ms`|The message timeout in milliseconds|| -|`--producer-config`| The custom producer configuration (as a JSON string) | | - - -### `delete` -Delete a Pulsar Function that's running on a Pulsar cluster - -Usage - -```bash - -$ pulsar-admin functions delete options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `update` -Update a Pulsar Function that's been deployed to a Pulsar cluster - -Usage - -```bash - -$ pulsar-admin functions update options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--cpu`|The CPU in cores that needs to be allocated per function instance (applicable only to docker runtime)|| -|`--ram`|The RAM in bytes that needs to be allocated per function instance (applicable only to process/docker runtime)|| -|`--disk`|The disk in bytes that needs to be allocated per function instance (applicable only to docker runtime)|| -|`--auto-ack`|Whether or not the framework will automatically acknowledge messages|| -|`--subs-name`|Pulsar source subscription name if user wants a specific subscription-name for input-topic consumer|| -|`--classname`|The function's class name|| -|`--custom-serde-inputs`|The map of input topics to SerDe class names (as a JSON string)|| -|`--custom-schema-inputs`|The map of input topics to Schema class names (as a JSON string)|| -|`--function-config-file`|The path to a YAML config file specifying the function's configuration|| -|`--inputs`|The function's input topic or topics (multiple topics can be specified as a comma-separated list)|| -|`--log-topic`|The topic to which the function's logs are produced|| -|`--jar`|Path to the jar file for the function (if the function is written in Java). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--name`|The function's name|| -|`--namespace`|The function’s namespace|| -|`--output`|The function's output topic (If none is specified, no output is written)|| -|`--output-serde-classname`|The SerDe class to be used for messages output by the function|| -|`--parallelism`|The function’s parallelism factor, i.e. the number of instances of the function to run|1| -|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the function. Possible Values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]|ATLEAST_ONCE| -|`--py`|Path to the main Python file/Python Wheel file for the function (if the function is written in Python). 
It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--go`|Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--schema-type`|The builtin schema type or custom schema class name to be used for messages output by the function|| -|`--sliding-interval-count`|The number of messages after which the window slides|| -|`--sliding-interval-duration-ms`|The time duration after which the window slides|| -|`--tenant`|The function’s tenant|| -|`--topics-pattern`|The topic pattern to consume from a list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (supported for Java functions only)|| -|`--user-config`|User-defined config key/values|| -|`--window-length-count`|The number of messages per window|| -|`--window-length-duration-ms`|The time duration of the window in milliseconds|| -|`--dead-letter-topic`|The topic where all messages which could not be processed successfully are sent|| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--max-message-retries`|How many times should we try to process a message before giving up|| -|`--retain-ordering`|Function consumes and processes messages in order|| -|`--retain-key-ordering`|Function consumes and processes messages in key order|| -|`--timeout-ms`|The message timeout in milliseconds|| -|`--producer-config`| The custom producer configuration (as a JSON string) | | - - -### `get` -Fetch information about a Pulsar Function - -Usage - -```bash - -$ pulsar-admin functions get options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `restart` -Restart a function instance - -Usage - -```bash - -$ pulsar-admin functions restart options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--instance-id`|The function instanceId (restart all instances if instance-id is not provided)|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `stop` -Stop a function instance - -Usage - -```bash - -$ pulsar-admin functions stop options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--instance-id`|The function instanceId (stop all instances if instance-id is not provided)|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `start` -Start a stopped function instance - -Usage - -```bash - -$ pulsar-admin functions start options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--instance-id`|The function instanceId (start all instances if instance-id is not provided)|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The 
function's tenant|| - - -### `status` -Check the current status of a Pulsar Function - -Usage - -```bash - -$ pulsar-admin functions status options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--instance-id`|The function instanceId (Get-status of all instances if instance-id is not provided)|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `stats` -Get the current stats of a Pulsar Function - -Usage - -```bash - -$ pulsar-admin functions stats options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--instance-id`|The function instanceId (Get-stats of all instances if instance-id is not provided)|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - -### `list` -List all of the Pulsar Functions running under a specific tenant and namespace - -Usage - -```bash - -$ pulsar-admin functions list options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `querystate` -Fetch the current state associated with a Pulsar Function running in cluster mode - -Usage - -```bash - -$ pulsar-admin functions querystate options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`-k`, `--key`|The key for the state you want to fetch|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| -|`-w`, `--watch`|Watch for changes in the value associated with a key for a Pulsar Function|false| - -### `putstate` -Put a key/value pair to the state associated with a Pulsar Function - -Usage - -```bash - -$ pulsar-admin functions putstate options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the Pulsar Function|| -|`--name`|The name of a Pulsar Function|| -|`--namespace`|The namespace of a Pulsar Function|| -|`--tenant`|The tenant of a Pulsar Function|| -|`-s`, `--state`|The FunctionState that needs to be put|| - -### `trigger` -Triggers the specified Pulsar Function with a supplied value - -Usage - -```bash - -$ pulsar-admin functions trigger options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| -|`--topic`|The specific topic name that the function consumes from that you want to inject the data to|| -|`--trigger-file`|The path to the file that contains the data with which you'd like to trigger the function|| -|`--trigger-value`|The value with which you want to trigger the function|| - - -## `functions-worker` -Operations to collect function-worker statistics - -```bash - -$ pulsar-admin functions-worker subcommand - -``` - -Subcommands - -* `function-stats` -* `get-cluster` -* `get-cluster-leader` -* `get-function-assignments` -* `monitoring-metrics` - -### `function-stats` - -Dump all functions stats running on this broker - -Usage - -```bash - -$ pulsar-admin functions-worker function-stats - -``` - -### `get-cluster` - -Get all workers belonging to this cluster - -Usage - -```bash - -$ 
pulsar-admin functions-worker get-cluster - -``` - -### `get-cluster-leader` - -Get the leader of the worker cluster - -Usage - -```bash - -$ pulsar-admin functions-worker get-cluster-leader - -``` - -### `get-function-assignments` - -Get the assignments of the functions across the worker cluster - -Usage - -```bash - -$ pulsar-admin functions-worker get-function-assignments - -``` - -### `monitoring-metrics` - -Dump metrics for Monitoring - -Usage - -```bash - -$ pulsar-admin functions-worker monitoring-metrics - -``` - -## `namespaces` - -Operations for managing namespaces - -```bash - -$ pulsar-admin namespaces subcommand - -``` - -Subcommands -* `list` -* `topics` -* `policies` -* `create` -* `delete` -* `set-deduplication` -* `set-auto-topic-creation` -* `remove-auto-topic-creation` -* `set-auto-subscription-creation` -* `remove-auto-subscription-creation` -* `permissions` -* `grant-permission` -* `revoke-permission` -* `grant-subscription-permission` -* `revoke-subscription-permission` -* `set-clusters` -* `get-clusters` -* `get-backlog-quotas` -* `set-backlog-quota` -* `remove-backlog-quota` -* `get-persistence` -* `set-persistence` -* `get-message-ttl` -* `set-message-ttl` -* `remove-message-ttl` -* `get-anti-affinity-group` -* `set-anti-affinity-group` -* `get-anti-affinity-namespaces` -* `delete-anti-affinity-group` -* `get-retention` -* `set-retention` -* `unload` -* `split-bundle` -* `set-dispatch-rate` -* `get-dispatch-rate` -* `set-replicator-dispatch-rate` -* `get-replicator-dispatch-rate` -* `set-subscribe-rate` -* `get-subscribe-rate` -* `set-subscription-dispatch-rate` -* `get-subscription-dispatch-rate` -* `clear-backlog` -* `unsubscribe` -* `set-encryption-required` -* `set-delayed-delivery` -* `get-delayed-delivery` -* `set-subscription-auth-mode` -* `get-max-producers-per-topic` -* `set-max-producers-per-topic` -* `get-max-consumers-per-topic` -* `set-max-consumers-per-topic` -* `get-max-consumers-per-subscription` -* `set-max-consumers-per-subscription` -* `get-max-unacked-messages-per-subscription` -* `set-max-unacked-messages-per-subscription` -* `get-max-unacked-messages-per-consumer` -* `set-max-unacked-messages-per-consumer` -* `get-compaction-threshold` -* `set-compaction-threshold` -* `get-offload-threshold` -* `set-offload-threshold` -* `get-offload-deletion-lag` -* `set-offload-deletion-lag` -* `clear-offload-deletion-lag` -* `get-schema-autoupdate-strategy` -* `set-schema-autoupdate-strategy` -* `set-offload-policies` -* `get-offload-policies` -* `set-max-subscriptions-per-topic` -* `get-max-subscriptions-per-topic` -* `remove-max-subscriptions-per-topic` - - -### `list` -Get the namespaces for a tenant - -Usage - -```bash - -$ pulsar-admin namespaces list tenant-name - -``` - -### `topics` -Get the list of topics for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces topics tenant/namespace - -``` - -### `policies` -Get the configuration policies of a namespace - -Usage - -```bash - -$ pulsar-admin namespaces policies tenant/namespace - -``` - -### `create` -Create a new namespace - -Usage - -```bash - -$ pulsar-admin namespaces create tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-b`, `--bundles`|The number of bundles to activate|0| -|`-c`, `--clusters`|List of clusters this namespace will be assigned|| - - -### `delete` -Deletes a namespace. 
The namespace needs to be empty - -Usage - -```bash - -$ pulsar-admin namespaces delete tenant/namespace - -``` - -### `set-deduplication` -Enable or disable message deduplication on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-deduplication tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--enable`, `-e`|Enable message deduplication on the specified namespace|false| -|`--disable`, `-d`|Disable message deduplication on the specified namespace|false| - -### `set-auto-topic-creation` -Enable or disable autoTopicCreation for a namespace, overriding broker settings - -Usage - -```bash - -$ pulsar-admin namespaces set-auto-topic-creation tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--enable`, `-e`|Enable allowAutoTopicCreation on namespace|false| -|`--disable`, `-d`|Disable allowAutoTopicCreation on namespace|false| -|`--type`, `-t`|Type of topic to be auto-created. Possible values: (partitioned, non-partitioned)|non-partitioned| -|`--num-partitions`, `-n`|Default number of partitions of topic to be auto-created, applicable to partitioned topics only|| - -### `remove-auto-topic-creation` -Remove override of autoTopicCreation for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces remove-auto-topic-creation tenant/namespace - -``` - -### `set-auto-subscription-creation` -Enable autoSubscriptionCreation for a namespace, overriding broker settings - -Usage - -```bash - -$ pulsar-admin namespaces set-auto-subscription-creation tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--enable`, `-e`|Enable allowAutoSubscriptionCreation on namespace|false| - -### `remove-auto-subscription-creation` -Remove override of autoSubscriptionCreation for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces remove-auto-subscription-creation tenant/namespace - -``` - -### `permissions` -Get the permissions on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces permissions tenant/namespace - -``` - -### `grant-permission` -Grant permissions on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces grant-permission tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--actions`|Actions to be granted (`produce` or `consume`)|| -|`--role`|The client role to which to grant the permissions|| - - -### `revoke-permission` -Revoke permissions on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces revoke-permission tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--role`|The client role to which to revoke the permissions|| - -### `grant-subscription-permission` -Grant permissions to access the subscription admin API - -Usage - -```bash - -$ pulsar-admin namespaces grant-subscription-permission tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--roles`|The client roles to which to grant the permissions (comma separated roles)|| -|`--subscription`|The subscription name for which permission will be granted to roles|| - -### `revoke-subscription-permission` -Revoke permissions to access the subscription admin API - -Usage - -```bash - -$ pulsar-admin namespaces revoke-subscription-permission tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--role`|The client role to which to revoke the permissions|| -|`--subscription`|The subscription name for which permission will be revoked from roles|| 
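-
-Example
-
-An illustrative sketch combining the two subscription-permission commands. The namespace `my-tenant/my-ns`, the subscription `my-sub`, and the role names are placeholders:
-
-```bash
-
-$ pulsar-admin namespaces grant-subscription-permission my-tenant/my-ns \
---subscription my-sub \
---roles app-role-1,app-role-2
-
-$ pulsar-admin namespaces revoke-subscription-permission my-tenant/my-ns \
---subscription my-sub \
---role app-role-2
-
-```
-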
-### `set-clusters` -Set replication clusters for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-clusters tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-c`, `--clusters`|Replication clusters ID list (comma-separated values)|| - - -### `get-clusters` -Get replication clusters for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-clusters tenant/namespace - -``` - -### `get-backlog-quotas` -Get the backlog quota policies for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-backlog-quotas tenant/namespace - -``` - -### `set-backlog-quota` -Set a backlog quota policy for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-backlog-quota tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-l`, `--limit`|The backlog size limit (for example `10M` or `16G`)|| -|`-lt`, `--limitTime`|Time limit in seconds; a non-positive number disables the time limit (for example, 3600 for 1 hour)|| -|`-p`, `--policy`|The retention policy to enforce when the limit is reached. The valid options are: `producer_request_hold`, `producer_exception` or `consumer_backlog_eviction`|| -|`-t`, `--type`|Backlog quota type to set. The valid options are: `destination_storage`, `message_age` |destination_storage| - -Example - -```bash - -$ pulsar-admin namespaces set-backlog-quota my-tenant/my-ns \ ---limit 2G \ ---policy producer_request_hold - -``` - -```bash - -$ pulsar-admin namespaces set-backlog-quota my-tenant/my-ns \ ---limitTime 3600 \ ---policy producer_request_hold \ ---type message_age - -``` - -### `remove-backlog-quota` -Remove a backlog quota policy from a namespace - -|Flag|Description|Default| -|---|---|---| -|`-t`, `--type`|Backlog quota type to remove. The valid options are: `destination_storage`, `message_age` |destination_storage| - -Usage - -```bash - -$ pulsar-admin namespaces remove-backlog-quota tenant/namespace - -``` - -### `get-persistence` -Get the persistence policies for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-persistence tenant/namespace - -``` - -### `set-persistence` -Set the persistence policies for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-persistence tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-a`, `--bookkeeper-ack-quorum`|The number of acks (guaranteed copies) to wait for each entry|0| -|`-e`, `--bookkeeper-ensemble`|The number of bookies to use for a topic|0| -|`-w`, `--bookkeeper-write-quorum`|The number of writes to make for each entry|0| -|`-r`, `--ml-mark-delete-max-rate`|Throttling rate of mark-delete operation (0 means no throttle)|| - - -### `get-message-ttl` -Get the message TTL for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-message-ttl tenant/namespace - -``` - -### `set-message-ttl` -Set the message TTL for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-message-ttl tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-ttl`, `--messageTTL`|Message TTL in seconds. When the value is set to `0`, TTL is disabled. TTL is disabled by default. |0|
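-
-Example
-
-A short sketch; `my-tenant/my-ns` is a placeholder, and `120` gives messages a two-minute TTL:
-
-```bash
-
-$ pulsar-admin namespaces set-message-ttl my-tenant/my-ns \
---messageTTL 120
-
-```
-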
-### `remove-message-ttl` -Remove the message TTL for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces remove-message-ttl tenant/namespace - -``` - -### `get-anti-affinity-group` -Get the anti-affinity group name for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-anti-affinity-group tenant/namespace - -``` - -### `set-anti-affinity-group` -Set the anti-affinity group name for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-anti-affinity-group tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-g`, `--group`|Anti-affinity group name|| - -### `get-anti-affinity-namespaces` -Get the anti-affinity namespaces grouped under the given anti-affinity group name - -Usage - -```bash - -$ pulsar-admin namespaces get-anti-affinity-namespaces options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-c`, `--cluster`|Cluster name|| -|`-g`, `--group`|Anti-affinity group name|| -|`-p`, `--tenant`|The tenant is only used for authorization. The client has to be an admin of one of the tenants to access this API|| - -### `delete-anti-affinity-group` -Remove the anti-affinity group name for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces delete-anti-affinity-group tenant/namespace - -``` - -### `get-retention` -Get the retention policy that is applied to each topic within the specified namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-retention tenant/namespace - -``` - -### `set-retention` -Set the retention policy for each topic within the specified namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-retention tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-s`, `--size`|The retention size limits (for example 10M, 16G or 3T) for each topic in the namespace. 0 means no retention and -1 means infinite size retention|| -|`-t`, `--time`|The retention time in minutes, hours, days, or weeks. Examples: 100m, 13h, 2d, 5w. 0 means no retention and -1 means infinite time retention||
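-
-Example
-
-An illustrative sketch that sets a 10G size limit and a 3-hour time limit for each topic in the placeholder namespace `my-tenant/my-ns`:
-
-```bash
-
-$ pulsar-admin namespaces set-retention my-tenant/my-ns \
---size 10G \
---time 3h
-
-```
-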
-### `unload` -Unload a namespace or namespace bundle from the current serving broker. - -Usage - -```bash - -$ pulsar-admin namespaces unload tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-b`, `--bundle`|{start-boundary}_{end-boundary} (e.g. 0x00000000_0xffffffff)|| - -### `split-bundle` -Split a namespace-bundle from the current serving broker - -Usage - -```bash - -$ pulsar-admin namespaces split-bundle tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-b`, `--bundle`|{start-boundary}_{end-boundary} (e.g. 0x00000000_0xffffffff)|| -|`-u`, `--unload`|Unload newly split bundles after splitting old bundle|false| - -### `set-dispatch-rate` -Set message-dispatch-rate for all topics of the namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-dispatch-rate tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-bd`, `--byte-dispatch-rate`|The byte dispatch rate (defaults to -1 if not passed)|-1| -|`-dt`, `--dispatch-rate-period`|The dispatch rate period, in seconds (defaults to 1 second if not passed)|1| -|`-md`, `--msg-dispatch-rate`|The message dispatch rate (defaults to -1 if not passed)|-1| - -### `get-dispatch-rate` -Get configured message-dispatch-rate for all topics of the namespace (Disabled if value < 0) - -Usage - -```bash - -$ pulsar-admin namespaces get-dispatch-rate tenant/namespace - -``` - -### `set-replicator-dispatch-rate` -Set replicator message-dispatch-rate for all topics of the namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-replicator-dispatch-rate tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-bd`, `--byte-dispatch-rate`|The byte dispatch rate (defaults to -1 if not passed)|-1| -|`-dt`, `--dispatch-rate-period`|The dispatch rate period, in seconds (defaults to 1 second if not passed)|1| -|`-md`, `--msg-dispatch-rate`|The message dispatch rate (defaults to -1 if not passed)|-1| - -### `get-replicator-dispatch-rate` -Get replicator configured message-dispatch-rate for all topics of the namespace (Disabled if value < 0) - -Usage - -```bash - -$ pulsar-admin namespaces get-replicator-dispatch-rate tenant/namespace - -``` - -### `set-subscribe-rate` -Set subscribe-rate per consumer for all topics of the namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-subscribe-rate tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-sr`, `--subscribe-rate`|The subscribe rate (defaults to -1 if not passed)|-1| -|`-st`, `--subscribe-rate-period`|The subscribe rate period, in seconds (defaults to 30 seconds if not passed)|30| - -### `get-subscribe-rate` -Get configured subscribe-rate per consumer for all topics of the namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-subscribe-rate tenant/namespace - -``` - -### `set-subscription-dispatch-rate` -Set subscription message-dispatch-rate for all subscriptions of the namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-subscription-dispatch-rate tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-bd`, `--byte-dispatch-rate`|The byte dispatch rate (defaults to -1 if not passed)|-1| -|`-dt`, `--dispatch-rate-period`|The dispatch rate period, in seconds (defaults to 1 second if not passed)|1| -|`-md`, `--sub-msg-dispatch-rate`|The message dispatch rate (defaults to -1 if not passed)|-1| - -### `get-subscription-dispatch-rate` -Get subscription configured message-dispatch-rate for all topics of the namespace (Disabled if value < 0) - -Usage - -```bash - -$ pulsar-admin namespaces get-subscription-dispatch-rate tenant/namespace - -``` - -### `clear-backlog` -Clear the backlog for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces clear-backlog tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-b`, 
`--bundle`|{start-boundary}_{end-boundary} (e.g. 0x00000000_0xffffffff)|| -|`-force`, `--force`|Whether to force a clear backlog without prompt|false| -|`-s`, `--sub`|The subscription name|| - - -### `unsubscribe` -Unsubscribe the given subscription on all destinations on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces unsubscribe tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-b`, `--bundle`|{start-boundary}_{end-boundary} (e.g. 0x00000000_0xffffffff)|| -|`-s`, `--sub`|The subscription name|| - -### `set-encryption-required` -Enable or disable message encryption required for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-encryption-required tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-d`, `--disable`|Disable message encryption required|false| -|`-e`, `--enable`|Enable message encryption required|false| - -### `set-delayed-delivery` -Set the delayed delivery policy on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-delayed-delivery tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-d`, `--disable`|Disable delayed delivery messages|false| -|`-e`, `--enable`|Enable delayed delivery messages|false| -|`-t`, `--time`|The tick time for when retrying on delayed delivery messages|1s| - - -### `get-delayed-delivery` -Get the delayed delivery policy on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-delayed-delivery-time tenant/namespace - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-t`, `--time`|The tick time for when retrying on delayed delivery messages|1s| - - -### `set-subscription-auth-mode` -Set subscription auth mode on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-subscription-auth-mode tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-m`, `--subscription-auth-mode`|Subscription authorization mode for Pulsar policies. 
Valid options are: [None, Prefix]|| - -### `get-max-producers-per-topic` -Get maxProducersPerTopic for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-max-producers-per-topic tenant/namespace - -``` - -### `set-max-producers-per-topic` -Set maxProducersPerTopic for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-max-producers-per-topic tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-p`, `--max-producers-per-topic`|maxProducersPerTopic for a namespace|0| - -### `get-max-consumers-per-topic` -Get maxConsumersPerTopic for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-max-consumers-per-topic tenant/namespace - -``` - -### `set-max-consumers-per-topic` -Set maxConsumersPerTopic for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-max-consumers-per-topic tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-c`, `--max-consumers-per-topic`|maxConsumersPerTopic for a namespace|0| - -### `get-max-consumers-per-subscription` -Get maxConsumersPerSubscription for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-max-consumers-per-subscription tenant/namespace - -``` - -### `set-max-consumers-per-subscription` -Set maxConsumersPerSubscription for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-max-consumers-per-subscription tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-c`, `--max-consumers-per-subscription`|maxConsumersPerSubscription for a namespace|0| - -### `get-max-unacked-messages-per-subscription` -Get maxUnackedMessagesPerSubscription for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-max-unacked-messages-per-subscription tenant/namespace - -``` - -### `set-max-unacked-messages-per-subscription` -Set maxUnackedMessagesPerSubscription for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-max-unacked-messages-per-subscription tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-c`, `--max-unacked-messages-per-subscription`|maxUnackedMessagesPerSubscription for a namespace|-1| - -### `get-max-unacked-messages-per-consumer` -Get maxUnackedMessagesPerConsumer for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-max-unacked-messages-per-consumer tenant/namespace - -``` - -### `set-max-unacked-messages-per-consumer` -Set maxUnackedMessagesPerConsumer for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-max-unacked-messages-per-consumer tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-c`, `--max-unacked-messages-per-consumer`|maxUnackedMessagesPerConsumer for a namespace|-1| - - -### `get-compaction-threshold` -Get compactionThreshold for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-compaction-threshold tenant/namespace - -``` - -### `set-compaction-threshold` -Set compactionThreshold for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-compaction-threshold tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-t`, `--threshold`|Maximum number of bytes in a topic backlog before compaction is triggered (eg: 10M, 16G, 3T). 
0 disables automatic compaction|0| - - -### `get-offload-threshold` -Get offloadThreshold for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-offload-threshold tenant/namespace - -``` - -### `set-offload-threshold` -Set offloadThreshold for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-offload-threshold tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-s`, `--size`|Maximum number of bytes stored in the pulsar cluster for a topic before data will start being automatically offloaded to longterm storage (eg: 10M, 16G, 3T, 100). Negative values disable automatic offload. 0 triggers offloading as soon as possible.|-1| - -### `get-offload-deletion-lag` -Get offloadDeletionLag, in minutes, for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-offload-deletion-lag tenant/namespace - -``` - -### `set-offload-deletion-lag` -Set offloadDeletionLag for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-offload-deletion-lag tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-l`, `--lag`|Duration to wait after offloading a ledger segment, before deleting the copy of that segment from cluster local storage. (eg: 10m, 5h, 3d, 2w).|-1| - -### `clear-offload-deletion-lag` -Clear offloadDeletionLag for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces clear-offload-deletion-lag tenant/namespace - -``` - -### `get-schema-autoupdate-strategy` -Get the schema auto-update strategy for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-schema-autoupdate-strategy tenant/namespace - -``` - -### `set-schema-autoupdate-strategy` -Set the schema auto-update strategy for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-schema-autoupdate-strategy tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-c`, `--compatibility`|Compatibility level required for new schemas created via a Producer. Possible values (Full, Backward, Forward, None).|Full| -|`-d`, `--disabled`|Disable automatic schema updates.|false| - -### `get-publish-rate` -Get the message publish rate for each topic in a namespace, in bytes as well as messages per second - -Usage - -```bash - -$ pulsar-admin namespaces get-publish-rate tenant/namespace - -``` - -### `set-publish-rate` -Set the message publish rate for each topic in a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-publish-rate tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-m`, `--msg-publish-rate`|Threshold for number of messages per second per topic in the namespace (-1 implies not set, 0 for no limit).|-1| -|`-b`, `--byte-publish-rate`|Threshold for number of bytes per second per topic in the namespace (-1 implies not set, 0 for no limit).|-1| - -### `set-offload-policies` -Set the offload policy for a namespace. 
- -Usage - -```bash - -$ pulsar-admin namespaces set-offload-policies tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-d`, `--driver`|Driver to use to offload old data to long term storage (possible values: S3, aws-s3, google-cloud-storage)|| -|`-r`, `--region`|The long term storage region|| -|`-b`, `--bucket`|Bucket to place offloaded ledger into|| -|`-e`, `--endpoint`|Alternative endpoint to connect to|| -|`-i`, `--aws-id`|AWS Credential Id to use when using driver S3 or aws-s3|| -|`-s`, `--aws-secret`|AWS Credential Secret to use when using driver S3 or aws-s3|| -|`-ro`, `--s3-role`|S3 Role used for STSAssumeRoleSessionCredentialsProvider using driver S3 or aws-s3|| -|`-rsn`, `--s3-role-session-name`|S3 role session name used for STSAssumeRoleSessionCredentialsProvider using driver S3 or aws-s3|| -|`-mbs`, `--maxBlockSize`|Max block size|64MB| -|`-rbs`, `--readBufferSize`|Read buffer size|1MB| -|`-oat`, `--offloadAfterThreshold`|Offload after threshold size (eg: 1M, 5M)|| -|`-oae`, `--offloadAfterElapsed`|Offload after elapsed in millis (or minutes, hours, days, weeks, eg: 100m, 3h, 2d, 5w).|| - -### `get-offload-policies` -Get the offload policy for a namespace. - -Usage - -```bash - -$ pulsar-admin namespaces get-offload-policies tenant/namespace - -``` - -### `set-max-subscriptions-per-topic` -Set the maximum number of subscriptions per topic for a namespace. - -Usage - -```bash - -$ pulsar-admin namespaces set-max-subscriptions-per-topic tenant/namespace - -``` - -### `get-max-subscriptions-per-topic` -Get the maximum number of subscriptions per topic for a namespace. - -Usage - -```bash - -$ pulsar-admin namespaces get-max-subscriptions-per-topic tenant/namespace - -``` - -### `remove-max-subscriptions-per-topic` -Remove the maximum number of subscriptions per topic for a namespace. - -Usage - -```bash - -$ pulsar-admin namespaces remove-max-subscriptions-per-topic tenant/namespace - -``` - -## `ns-isolation-policy` -Operations for managing namespace isolation policies. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy subcommand - -``` - -Subcommands -* `set` -* `get` -* `list` -* `delete` -* `brokers` -* `broker` - -### `set` -Create/update a namespace isolation policy for a cluster. This operation requires Pulsar superuser privileges. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy set cluster-name policy-name options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`--auto-failover-policy-params`|Comma-separated name=value auto failover policy parameters|[]| -|`--auto-failover-policy-type`|Auto failover policy type name. Currently available options: min_available.|[]| -|`--namespaces`|Comma-separated namespaces regex list|[]| -|`--primary`|Comma-separated primary broker regex list|[]| -|`--secondary`|Comma-separated secondary broker regex list|[]| - - -### `get` -Get the namespace isolation policy of a cluster. This operation requires Pulsar superuser privileges. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy get cluster-name policy-name - -``` - -### `list` -List all namespace isolation policies of a cluster. This operation requires Pulsar superuser privileges. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy list cluster-name - -``` - -### `delete` -Delete the namespace isolation policy of a cluster. This operation requires superuser privileges. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy delete - -``` - -### `brokers` -List all brokers with namespace-isolation policies attached to them. 
This operation requires Pulsar super-user privileges. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy brokers cluster-name - -``` - -### `broker` -Get broker with namespace-isolation policies attached to it. This operation requires Pulsar super-user privileges. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy broker cluster-name options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`--broker`|Broker name to get namespace-isolation policies attached to it|| - -## `topics` -Operations for managing Pulsar topics (both persistent and non-persistent). - -Usage - -```bash - -$ pulsar-admin topics subcommand - -``` - -From Pulsar 2.7.0, some namespace-level policies are available on topic level. To enable topic-level policy in Pulsar, you need to configure the following parameters in the `broker.conf` file. - -```shell - -systemTopicEnabled=true -topicLevelPoliciesEnabled=true - -``` - -Subcommands -* `compact` -* `compaction-status` -* `offload` -* `offload-status` -* `create-partitioned-topic` -* `create-missed-partitions` -* `delete-partitioned-topic` -* `create` -* `get-partitioned-topic-metadata` -* `update-partitioned-topic` -* `list-partitioned-topics` -* `list` -* `terminate` -* `permissions` -* `grant-permission` -* `revoke-permission` -* `lookup` -* `bundle-range` -* `delete` -* `unload` -* `create-subscription` -* `subscriptions` -* `unsubscribe` -* `stats` -* `stats-internal` -* `info-internal` -* `partitioned-stats` -* `partitioned-stats-internal` -* `skip` -* `clear-backlog` -* `expire-messages` -* `expire-messages-all-subscriptions` -* `peek-messages` -* `reset-cursor` -* `get-message-by-id` -* `last-message-id` -* `get-backlog-quotas` -* `set-backlog-quota` -* `remove-backlog-quota` -* `get-persistence` -* `set-persistence` -* `remove-persistence` -* `get-message-ttl` -* `set-message-ttl` -* `remove-message-ttl` -* `get-deduplication` -* `set-deduplication` -* `remove-deduplication` -* `get-retention` -* `set-retention` -* `remove-retention` -* `get-dispatch-rate` -* `set-dispatch-rate` -* `remove-dispatch-rate` -* `get-max-unacked-messages-per-subscription` -* `set-max-unacked-messages-per-subscription` -* `remove-max-unacked-messages-per-subscription` -* `get-max-unacked-messages-per-consumer` -* `set-max-unacked-messages-per-consumer` -* `remove-max-unacked-messages-per-consumer` -* `get-delayed-delivery` -* `set-delayed-delivery` -* `remove-delayed-delivery` -* `get-max-producers` -* `set-max-producers` -* `remove-max-producers` -* `get-max-consumers` -* `set-max-consumers` -* `remove-max-consumers` -* `get-compaction-threshold` -* `set-compaction-threshold` -* `remove-compaction-threshold` -* `get-offload-policies` -* `set-offload-policies` -* `remove-offload-policies` -* `get-inactive-topic-policies` -* `set-inactive-topic-policies` -* `remove-inactive-topic-policies` -* `set-max-subscriptions` -* `get-max-subscriptions` -* `remove-max-subscriptions` - -### `compact` -Run compaction on the specified topic (persistent topics only) - -Usage - -``` - -$ pulsar-admin topics compact persistent://tenant/namespace/topic - -``` - -### `compaction-status` -Check the status of a topic compaction (persistent topics only) - -Usage - -```bash - -$ pulsar-admin topics compaction-status persistent://tenant/namespace/topic - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-w`, `--wait-complete`|Wait for compaction to complete|false| - - -### `offload` -Trigger offload of data from a topic to long-term storage (e.g. 
    ### `offload`
    Trigger offload of data from a topic to long-term storage (e.g. Amazon S3).

    Usage

    ```bash
    $ pulsar-admin topics offload persistent://tenant/namespace/topic options
    ```

    Options

    |Flag|Description|Default|
    |---|---|---|
    |`-s`, `--size-threshold`|The maximum amount of data to keep in BookKeeper for the specific topic||

    ### `offload-status`
    Check the status of data offloading from a topic to long-term storage.

    Usage

    ```bash
    $ pulsar-admin topics offload-status persistent://tenant/namespace/topic options
    ```

    Options

    |Flag|Description|Default|
    |---|---|---|
    |`-w`, `--wait-complete`|Wait for offloading to complete|false|

    ### `create-partitioned-topic`
    Create a partitioned topic. A partitioned topic must be created before producers can publish to it.

    :::note

    By default, topics are considered inactive 60 seconds after creation and are deleted automatically to prevent generating unnecessary data.
    To disable this feature, set `brokerDeleteInactiveTopicsEnabled` to `false`.
    To change the frequency of checking for inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to your desired value.
    For more information about these two parameters, see [here](reference-configuration.md#broker).

    :::

    Usage

    ```bash
    $ pulsar-admin topics create-partitioned-topic {persistent|non-persistent}://tenant/namespace/topic options
    ```

    Options

    |Flag|Description|Default|
    |---|---|---|
    |`-p`, `--partitions`|The number of partitions for the topic|0|

    ### `create-missed-partitions`
    Try to create missing partitions for a partitioned topic. The partitions of a partitioned topic must exist before they can be used; this command can repair the partitions when topic auto-creation is disabled.

    Usage

    ```bash
    $ pulsar-admin topics create-missed-partitions persistent://tenant/namespace/topic
    ```

    ### `delete-partitioned-topic`
    Delete a partitioned topic. This also deletes all the partitions of the topic if they exist.

    Usage

    ```bash
    $ pulsar-admin topics delete-partitioned-topic {persistent|non-persistent}://tenant/namespace/topic
    ```

    ### `create`
    Creates a non-partitioned topic. A non-partitioned topic must be explicitly created by the user if `allowAutoTopicCreation` or `createIfMissing` is disabled.

    :::note

    By default, topics are considered inactive 60 seconds after creation and are deleted automatically to prevent generating unnecessary data.
    To disable this feature, set `brokerDeleteInactiveTopicsEnabled` to `false`.
    To change the frequency of checking for inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to your desired value.
    For more information about these two parameters, see [here](reference-configuration.md#broker).

    :::

    Usage

    ```bash
    $ pulsar-admin topics create {persistent|non-persistent}://tenant/namespace/topic
    ```

    ### `get-partitioned-topic-metadata`
    Get the partitioned topic metadata. If the topic is not created or is a non-partitioned topic, this command returns an empty topic with zero partitions.

    Usage

    ```bash
    $ pulsar-admin topics get-partitioned-topic-metadata {persistent|non-persistent}://tenant/namespace/topic
    ```
    
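    For example, here is a minimal sketch (tenant, namespace, and topic names are illustrative) that creates a four-partition topic and then reads back the metadata that was registered:

    ```bash
    # Create a partitioned topic with 4 partitions.
    $ pulsar-admin topics create-partitioned-topic persistent://my-tenant/my-ns/my-topic -p 4

    # Verify the partition count that was registered.
    $ pulsar-admin topics get-partitioned-topic-metadata persistent://my-tenant/my-ns/my-topic
    ```
    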
    ### `update-partitioned-topic`
    Update an existing non-global partitioned topic. The new number of partitions must be greater than the existing number of partitions.

    Usage

    ```bash
    $ pulsar-admin topics update-partitioned-topic {persistent|non-persistent}://tenant/namespace/topic options
    ```

    Options

    |Flag|Description|Default|
    |---|---|---|
    |`-p`, `--partitions`|The number of partitions for the topic|0|

    ### `list-partitioned-topics`
    Get the list of partitioned topics under a namespace.

    Usage

    ```bash
    $ pulsar-admin topics list-partitioned-topics tenant/namespace
    ```

    ### `list`
    Get the list of topics under a namespace.

    Usage

    ```bash
    $ pulsar-admin topics list tenant/namespace
    ```

    ### `terminate`
    Terminate a persistent topic (disallow further messages from being published on the topic).

    Usage

    ```bash
    $ pulsar-admin topics terminate persistent://tenant/namespace/topic
    ```

    ### `partitioned-terminate`
    Terminate a partitioned persistent topic (disallow further messages from being published on the topic).

    Usage

    ```bash
    $ pulsar-admin topics partitioned-terminate persistent://tenant/namespace/topic
    ```

    ### `permissions`
    Get the permissions on a topic. This retrieves the effective permissions for a destination: the permissions set at the namespace level combined (union) with any specific permissions set on the topic.

    Usage

    ```bash
    $ pulsar-admin topics permissions topic
    ```

    ### `grant-permission`
    Grant a new permission to a client role on a single topic.

    Usage

    ```bash
    $ pulsar-admin topics grant-permission {persistent|non-persistent}://tenant/namespace/topic options
    ```

    Options

    |Flag|Description|Default|
    |---|---|---|
    |`--actions`|Actions to be granted (`produce` or `consume`)||
    |`--role`|The client role to which to grant the permissions||

    ### `revoke-permission`
    Revoke permissions from a client role on a single topic. If the permission was not set at the topic level but rather at the namespace level, this operation returns an error (HTTP status code 412).

    Usage

    ```bash
    $ pulsar-admin topics revoke-permission topic
    ```

    ### `lookup`
    Look up the broker that currently serves a topic.

    Usage

    ```bash
    $ pulsar-admin topics lookup topic
    ```

    ### `bundle-range`
    Get the namespace bundle that contains the given topic.

    Usage

    ```bash
    $ pulsar-admin topics bundle-range topic
    ```

    ### `delete`
    Delete a topic. The topic cannot be deleted if there are any active subscriptions or producers connected to it.

    Usage

    ```bash
    $ pulsar-admin topics delete topic
    ```

    ### `unload`
    Unload a topic.

    Usage

    ```bash
    $ pulsar-admin topics unload topic
    ```

    ### `create-subscription`
    Create a new subscription on a topic.

    Usage

    ```bash
    $ pulsar-admin topics create-subscription [options] persistent://tenant/namespace/topic
    ```

    Options

    |Flag|Description|Default|
    |---|---|---|
    |`-m`, `--messageId`|The message ID at which to create the subscription. It can be either `latest`, `earliest`, or `(ledgerId:entryId)`|latest|
    |`-s`, `--subscription`|The name of the subscription to create||
    
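    For instance, a minimal sketch (names are illustrative) that creates a subscription positioned at the earliest available message:

    ```bash
    $ pulsar-admin topics create-subscription -s my-sub -m earliest \
      persistent://my-tenant/my-ns/my-topic
    ```
    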
    ### `subscriptions`
    Get the list of subscriptions on the topic.

    Usage

    ```bash
    $ pulsar-admin topics subscriptions topic
    ```

    ### `unsubscribe`
    Delete a durable subscriber from a topic.

    Usage

    ```bash
    $ pulsar-admin topics unsubscribe topic options
    ```

    Options

    |Flag|Description|Default|
    |---|---|---|
    |`-s`, `--subscription`|The subscription to delete||
    |`-f`, `--force`|Disconnect and close all consumers and delete the subscription forcefully|false|

    ### `stats`
    Get the stats for the topic and its connected producers and consumers. All rates are computed over a 1-minute window and are relative to the last completed 1-minute period.

    Usage

    ```bash
    $ pulsar-admin topics stats topic
    ```

    :::note

    The unit of `storageSize` and `averageMsgSize` is bytes.

    :::

    ### `stats-internal`
    Get the internal stats for the topic.

    Usage

    ```bash
    $ pulsar-admin topics stats-internal topic
    ```

    ### `info-internal`
    Get the internal metadata info for the topic.

    Usage

    ```bash
    $ pulsar-admin topics info-internal topic
    ```

    ### `partitioned-stats`
    Get the stats for the partitioned topic and its connected producers and consumers. All rates are computed over a 1-minute window and are relative to the last completed 1-minute period.

    Usage

    ```bash
    $ pulsar-admin topics partitioned-stats topic options
    ```

    Options

    |Flag|Description|Default|
    |---|---|---|
    |`--per-partition`|Get per-partition stats|false|

    ### `partitioned-stats-internal`
    Get the internal stats for the partitioned topic and its connected producers and consumers. All rates are computed over a 1-minute window and are relative to the last completed 1-minute period.

    Usage

    ```bash
    $ pulsar-admin topics partitioned-stats-internal topic
    ```

    ### `skip`
    Skip some messages for the subscription.

    Usage

    ```bash
    $ pulsar-admin topics skip topic options
    ```

    Options

    |Flag|Description|Default|
    |---|---|---|
    |`-n`, `--count`|The number of messages to skip|0|
    |`-s`, `--subscription`|The subscription on which to skip messages||

    ### `clear-backlog`
    Clear the backlog (skip all messages) for the subscription.

    Usage

    ```bash
    $ pulsar-admin topics clear-backlog topic options
    ```

    Options

    |Flag|Description|Default|
    |---|---|---|
    |`-s`, `--subscription`|The subscription to clear||

    ### `expire-messages`
    Expire messages that are older than the given expiry time (in seconds) for the subscription.

    Usage

    ```bash
    $ pulsar-admin topics expire-messages topic options
    ```

    Options

    |Flag|Description|Default|
    |---|---|---|
    |`-t`, `--expireTime`|Expire messages older than this time (in seconds)|0|
    |`-s`, `--subscription`|The subscription on which to expire messages||

    ### `expire-messages-all-subscriptions`
    Expire messages older than the given expiry time (in seconds) for all subscriptions.

    Usage

    ```bash
    $ pulsar-admin topics expire-messages-all-subscriptions topic options
    ```

    Options

    |Flag|Description|Default|
    |---|---|---|
    |`-t`, `--expireTime`|Expire messages older than this time (in seconds)|0|
    
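    For example, a minimal sketch (names are illustrative) that expires everything older than one hour on a single subscription:

    ```bash
    $ pulsar-admin topics expire-messages -s my-sub -t 3600 \
      persistent://my-tenant/my-ns/my-topic
    ```
    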
    ### `peek-messages`
    Peek some messages for the subscription.

    Usage

    ```bash
    $ pulsar-admin topics peek-messages topic options
    ```

    Options

    |Flag|Description|Default|
    |---|---|---|
    |`-n`, `--count`|The number of messages|0|
    |`-s`, `--subscription`|Subscription to get messages from||

    ### `reset-cursor`
    Reset the subscription position to the position closest to the given timestamp or message ID.

    Usage

    ```bash
    $ pulsar-admin topics reset-cursor topic options
    ```

    Options

    |Flag|Description|Default|
    |---|---|---|
    |`-s`, `--subscription`|Subscription to reset the position on||
    |`-t`, `--time`|The amount of time to reset back to, expressed as a duration in minutes, hours, days, or weeks. Examples: `100m`, `3h`, `2d`, `5w`.||
    |`-m`, `--messageId`|The message ID to reset back to (`ledgerId:entryId`, `earliest`, or `latest`).||

    ### `get-message-by-id`
    Get a message by ledger ID and entry ID.

    Usage

    ```bash
    $ pulsar-admin topics get-message-by-id topic options
    ```

    Options

    |Flag|Description|Default|
    |---|---|---|
    |`-l`, `--ledgerId`|The ledger ID|0|
    |`-e`, `--entryId`|The entry ID|0|

    ### `last-message-id`
    Get the last committed message ID of the topic.

    Usage

    ```bash
    $ pulsar-admin topics last-message-id persistent://tenant/namespace/topic
    ```

    ### `get-backlog-quotas`
    Get the backlog quota policies for a topic.

    Usage

    ```bash
    $ pulsar-admin topics get-backlog-quotas tenant/namespace/topic
    ```

    ### `set-backlog-quota`
    Set a backlog quota policy for a topic.

    |Flag|Description|Default|
    |----|---|---|
    |`-l`, `--limit`|The backlog size limit (for example `10M` or `16G`)||
    |`-lt`, `--limitTime`|Time limit in seconds; a non-positive number disables the time limit. (For example, 3600 for 1 hour.)||
    |`-p`, `--policy`|The retention policy to enforce when the limit is reached. The valid options are: `producer_request_hold`, `producer_exception` or `consumer_backlog_eviction`||
    |`-t`, `--type`|Backlog quota type to set. The valid options are: `destination_storage`, `message_age`|destination_storage|

    Usage

    ```bash
    $ pulsar-admin topics set-backlog-quota tenant/namespace/topic options
    ```

    Example

    ```bash
    $ pulsar-admin topics set-backlog-quota my-tenant/my-ns/my-topic \
    --limit 2G \
    --policy producer_request_hold
    ```

    ```bash
    $ pulsar-admin topics set-backlog-quota my-tenant/my-ns/my-topic \
    --limitTime 3600 \
    --policy producer_request_hold \
    --type message_age
    ```

    ### `remove-backlog-quota`
    Remove a backlog quota policy from a topic.

    |Flag|Description|Default|
    |---|---|---|
    |`-t`, `--type`|Backlog quota type to remove. The valid options are: `destination_storage`, `message_age`|destination_storage|

    Usage

    ```bash
    $ pulsar-admin topics remove-backlog-quota tenant/namespace/topic
    ```

    ### `get-persistence`
    Get the persistence policies for a topic.

    Usage

    ```bash
    $ pulsar-admin topics get-persistence tenant/namespace/topic
    ```
    
    ### `set-persistence`
    Set the persistence policies for a topic.

    Usage

    ```bash
    $ pulsar-admin topics set-persistence tenant/namespace/topic options
    ```

    Options

    |Flag|Description|Default|
    |----|---|---|
    |`-e`, `--bookkeeper-ensemble`|Number of bookies to use for a topic|0|
    |`-w`, `--bookkeeper-write-quorum`|How many writes to make of each entry|0|
    |`-a`, `--bookkeeper-ack-quorum`|Number of acks (guaranteed copies) to wait for each entry|0|
    |`-r`, `--ml-mark-delete-max-rate`|Throttling rate of the mark-delete operation (0 means no throttle)||

    ### `remove-persistence`
    Remove the persistence policy for a topic.

    Usage

    ```bash
    $ pulsar-admin topics remove-persistence tenant/namespace/topic
    ```

    ### `get-message-ttl`
    Get the message TTL for a topic.

    Usage

    ```bash
    $ pulsar-admin topics get-message-ttl tenant/namespace/topic
    ```

    ### `set-message-ttl`
    Set the message TTL for a topic.

    Usage

    ```bash
    $ pulsar-admin topics set-message-ttl tenant/namespace/topic options
    ```

    Options

    |Flag|Description|Default|
    |----|---|---|
    |`-ttl`, `--messageTTL`|Message TTL for a topic in seconds; the allowed range is from 1 to `Integer.MAX_VALUE`|0|

    ### `remove-message-ttl`
    Remove the message TTL for a topic.

    Usage

    ```bash
    $ pulsar-admin topics remove-message-ttl tenant/namespace/topic
    ```

    ### `get-deduplication`
    Get the deduplication policy for a topic.

    Usage

    ```bash
    $ pulsar-admin topics get-deduplication tenant/namespace/topic
    ```

    ### `set-deduplication`
    Set a deduplication policy for a topic.

    Usage

    ```bash
    $ pulsar-admin topics set-deduplication tenant/namespace/topic options
    ```

    Options

    |Flag|Description|Default|
    |---|---|---|
    |`--enable`, `-e`|Enable message deduplication on the specified topic.|false|
    |`--disable`, `-d`|Disable message deduplication on the specified topic.|false|
    
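    For example, a minimal sketch (topic name is illustrative) that turns deduplication on for a single topic:

    ```bash
    $ pulsar-admin topics set-deduplication my-tenant/my-ns/my-topic --enable
    ```
    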
    ### `remove-deduplication`
    Remove the deduplication policy for a topic.

    Usage

    ```bash
    $ pulsar-admin topics remove-deduplication tenant/namespace/topic
    ```

    ## `tenants`
    Operations for managing tenants.

    Usage

    ```bash
    $ pulsar-admin tenants subcommand
    ```

    Subcommands
    * `list`
    * `get`
    * `create`
    * `update`
    * `delete`

    ### `list`
    List the existing tenants.

    Usage

    ```bash
    $ pulsar-admin tenants list
    ```

    ### `get`
    Get the configuration of a tenant.

    Usage

    ```bash
    $ pulsar-admin tenants get tenant-name
    ```

    ### `create`
    Create a new tenant.

    Usage

    ```bash
    $ pulsar-admin tenants create tenant-name options
    ```

    Options

    |Flag|Description|Default|
    |----|---|---|
    |`-r`, `--admin-roles`|Comma-separated admin roles||
    |`-c`, `--allowed-clusters`|Comma-separated allowed clusters||

    ### `update`
    Update a tenant.

    Usage

    ```bash
    $ pulsar-admin tenants update tenant-name options
    ```

    Options

    |Flag|Description|Default|
    |----|---|---|
    |`-r`, `--admin-roles`|Comma-separated admin roles||
    |`-c`, `--allowed-clusters`|Comma-separated allowed clusters||

    ### `delete`
    Delete an existing tenant.

    Usage

    ```bash
    $ pulsar-admin tenants delete tenant-name
    ```

    Options

    |Flag|Description|Default|
    |----|---|---|
    |`-f`, `--force`|Delete a tenant forcefully by deleting all namespaces under it.|false|

    ## `resource-quotas`
    Operations for managing resource quotas.

    Usage

    ```bash
    $ pulsar-admin resource-quotas subcommand
    ```

    Subcommands
    * `get`
    * `set`
    * `reset-namespace-bundle-quota`

    ### `get`
    Get the resource quota for a specified namespace bundle, or the default quota if no namespace/bundle is specified.

    Usage

    ```bash
    $ pulsar-admin resource-quotas get options
    ```

    Options

    |Flag|Description|Default|
    |----|---|---|
    |`-b`, `--bundle`|A bundle of the form {start-boundary}_{end-boundary}. This must be specified together with `-n`/`--namespace`.||
    |`-n`, `--namespace`|The namespace||

    ### `set`
    Set the resource quota for the specified namespace bundle, or the default quota if no namespace/bundle is specified.

    Usage

    ```bash
    $ pulsar-admin resource-quotas set options
    ```

    Options

    |Flag|Description|Default|
    |----|---|---|
    |`-bi`, `--bandwidthIn`|Expected inbound bandwidth (in bytes/second)|0|
    |`-bo`, `--bandwidthOut`|Expected outbound bandwidth (in bytes/second)|0|
    |`-b`, `--bundle`|A bundle of the form {start-boundary}_{end-boundary}. This must be specified together with `-n`/`--namespace`.||
    |`-d`, `--dynamic`|Allow the quota to be dynamically re-calculated (or not)|false|
    |`-mem`, `--memory`|Expected memory usage (in megabytes)|0|
    |`-mi`, `--msgRateIn`|Expected incoming messages per second|0|
    |`-mo`, `--msgRateOut`|Expected outgoing messages per second|0|
    |`-n`, `--namespace`|The namespace as tenant/namespace, for example my-tenant/my-ns. Must be specified together with `-b`/`--bundle`.||

    ### `reset-namespace-bundle-quota`
    Reset the specified namespace bundle's resource quota to the default value.

    Usage

    ```bash
    $ pulsar-admin resource-quotas reset-namespace-bundle-quota options
    ```

    Options

    |Flag|Description|Default|
    |----|---|---|
    |`-b`, `--bundle`|A bundle of the form {start-boundary}_{end-boundary}. This must be specified together with `-n`/`--namespace`.||
    |`-n`, `--namespace`|The namespace||
    
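    For example, a minimal sketch (namespace, bundle range, and values are illustrative) that sets a dynamically re-calculated quota for one bundle of a namespace, using only the flags documented above:

    ```bash
    $ pulsar-admin resource-quotas set \
      --namespace my-tenant/my-ns \
      --bundle 0x00000000_0x80000000 \
      --msgRateIn 1000 --msgRateOut 1000 \
      --bandwidthIn 1048576 --bandwidthOut 1048576 \
      --memory 64 --dynamic
    ```
    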
    ## `schemas`
    Operations related to schemas associated with Pulsar topics.

    Usage

    ```bash
    $ pulsar-admin schemas subcommand
    ```

    Subcommands
    * `upload`
    * `delete`
    * `get`
    * `extract`

    ### `upload`
    Upload the schema definition for a topic.

    Usage

    ```bash
    $ pulsar-admin schemas upload persistent://tenant/namespace/topic options
    ```

    Options

    |Flag|Description|Default|
    |----|---|---|
    |`--filename`|The path to the schema definition file. An example schema file is available under the `conf` directory.||

    ### `delete`
    Delete the schema definition associated with a topic.

    Usage

    ```bash
    $ pulsar-admin schemas delete persistent://tenant/namespace/topic
    ```

    ### `get`
    Retrieve the schema definition associated with a topic (at a given version, if a version is supplied).

    Usage

    ```bash
    $ pulsar-admin schemas get persistent://tenant/namespace/topic options
    ```

    Options

    |Flag|Description|Default|
    |----|---|---|
    |`--version`|The version of the schema definition to retrieve for a topic.||

    ### `extract`
    Provide the schema definition for a topic via a Java class name contained in a JAR file.

    Usage

    ```bash
    $ pulsar-admin schemas extract persistent://tenant/namespace/topic options
    ```

    Options

    |Flag|Description|Default|
    |----|---|---|
    |`-c`, `--classname`|The Java class name||
    |`-j`, `--jar`|A path to the JAR file which contains the above Java class||
    |`-t`, `--type`|The type of the schema (avro or json)||
    diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/reference-rest-api-overview.md b/site2/website/versioned_docs/version-2.10.1-deprecated/reference-rest-api-overview.md
    deleted file mode 100644
    index 8e3d410112b878..00000000000000
    --- a/site2/website/versioned_docs/version-2.10.1-deprecated/reference-rest-api-overview.md
    +++ /dev/null
    @@ -1,18 +0,0 @@
    ---
    id: reference-rest-api-overview
    title: Pulsar REST APIs
    sidebar_label: "Pulsar REST APIs"
    ---

    A REST API (also known as a RESTful API, REpresentational State Transfer Application Programming Interface) is a set of definitions and protocols for building and integrating application software, using HTTP requests to GET, PUT, POST, and DELETE data following the REST standards. In essence, a REST API is a set of remote calls using standard methods to request and return data in a specific format between two systems.

    Pulsar provides a variety of REST APIs that enable you to interact with Pulsar to retrieve information or perform an action.

    | REST API category | Description |
    | --- | --- |
    | [Admin](/admin-rest-api/?version=master) | REST APIs for administrative operations.|
    | [Functions](/functions-rest-api/?version=master) | REST APIs for function-specific operations.|
    | [Sources](/source-rest-api/?version=master) | REST APIs for source-specific operations.|
    | [Sinks](/sink-rest-api/?version=master) | REST APIs for sink-specific operations.|
    
    | [Packages](/packages-rest-api/?version=master) | REST APIs for package-specific operations. A package can be a group of functions, sources, and sinks.|

    diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/reference-terminology.md b/site2/website/versioned_docs/version-2.10.1-deprecated/reference-terminology.md
    deleted file mode 100644
    index e5099141c3231e..00000000000000
    --- a/site2/website/versioned_docs/version-2.10.1-deprecated/reference-terminology.md
    +++ /dev/null
    @@ -1,176 +0,0 @@
    ---
    id: reference-terminology
    title: Pulsar Terminology
    sidebar_label: "Terminology"
    original_id: reference-terminology
    ---

    Here is a glossary of terms related to Apache Pulsar:

    ### Concepts

    #### Pulsar

    Pulsar is a distributed messaging system originally created by Yahoo and now under the stewardship of the Apache Software Foundation.

    #### Message

    Messages are the basic unit of Pulsar. They're what [producers](#producer) publish to [topics](#topic) and what [consumers](#consumer) then consume from topics.

    #### Topic

    A named channel used to pass messages published by [producers](#producer) to [consumers](#consumer) who process those [messages](#message).

    #### Partitioned Topic

    A topic that is served by multiple Pulsar [brokers](#broker), which enables higher throughput.

    #### Namespace

    A grouping mechanism for related [topics](#topic).

    #### Namespace Bundle

    A virtual group of [topics](#topic) that belong to the same [namespace](#namespace). A namespace bundle is defined as a range between two 32-bit hashes, such as 0x00000000 and 0xffffffff.

    #### Tenant

    An administrative unit for allocating capacity and enforcing an authentication/authorization scheme.

    #### Subscription

    A lease on a [topic](#topic) established by a group of [consumers](#consumer). Pulsar has four subscription modes (exclusive, shared, failover, and key_shared).

    #### Pub-Sub

    A messaging pattern in which [producer](#producer) processes publish messages on [topics](#topic) that are then consumed (processed) by [consumer](#consumer) processes.

    #### Producer

    A process that publishes [messages](#message) to a Pulsar [topic](#topic).

    #### Consumer

    A process that establishes a subscription to a Pulsar [topic](#topic) and processes messages published to that topic by [producers](#producer).

    #### Reader

    Pulsar readers are message processors much like Pulsar [consumers](#consumer) but with two crucial differences:

    - you can specify *where* on a topic readers begin processing messages (consumers always begin with the latest available unacked message);
    - readers don't retain data or acknowledge messages.

    #### Cursor

    The subscription position for a [consumer](#consumer).

    #### Acknowledgment (ack)

    A message sent to a Pulsar broker by a [consumer](#consumer) to indicate that a message has been successfully processed. An acknowledgment (ack) is Pulsar's way of knowing that the message can be deleted from the system; if no acknowledgment is received, the message is retained until it is processed.

    #### Negative Acknowledgment (nack)

    When an application fails to process a particular message, it can send a "negative ack" to Pulsar to signal that the message should be replayed at a later time. (By default, failed messages are replayed after a 1-minute delay.) Be aware that negative acknowledgment on ordered subscription types, such as Exclusive, Failover, and Key_Shared, can cause failed messages to arrive at consumers out of the original order.
    
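    As an illustrative sketch of this pattern (service URL, topic, and subscription names are placeholders, not taken from this glossary), the Java client exposes `negativeAcknowledge`, and the redelivery delay shown matches the 1-minute default described above:

    ```java
    import java.util.concurrent.TimeUnit;
    import org.apache.pulsar.client.api.*;

    public class NackExample {
        public static void main(String[] args) throws Exception {
            PulsarClient client = PulsarClient.builder()
                    .serviceUrl("pulsar://localhost:6650")   // illustrative service URL
                    .build();
            Consumer<String> consumer = client.newConsumer(Schema.STRING)
                    .topic("my-topic")                        // illustrative topic
                    .subscriptionName("my-sub")               // illustrative subscription
                    .negativeAckRedeliveryDelay(1, TimeUnit.MINUTES) // matches the default delay
                    .subscribe();
            Message<String> msg = consumer.receive();
            try {
                // ... process the message ...
                consumer.acknowledge(msg);         // success: the broker may delete the message
            } catch (Exception e) {
                consumer.negativeAcknowledge(msg); // failure: ask for redelivery later
            }
            client.close();
        }
    }
    ```
    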
- -#### Unacknowledged - -A message that has been delivered to a consumer for processing but not yet confirmed as processed by the consumer. - -#### Retention Policy - -Size and time limits that you can set on a [namespace](#namespace) to configure retention of [messages](#message) -that have already been [acknowledged](#acknowledgement-ack). - -#### Multi-Tenancy - -The ability to isolate [namespaces](#namespace), specify quotas, and configure authentication and authorization -on a per-[tenant](#tenant) basis. - -#### Failure Domain - -A logical domain under a Pulsar cluster. Each logical domain contains a pre-configured list of brokers. - -#### Anti-affinity Namespaces - -A group of namespaces that have anti-affinity to each other. - -### Architecture - -#### Standalone - -A lightweight Pulsar broker in which all components run in a single Java Virtual Machine (JVM) process. Standalone -clusters can be run on a single machine and are useful for development purposes. - -#### Cluster - -A set of Pulsar [brokers](#broker) and [BookKeeper](#bookkeeper) servers (aka [bookies](#bookie)). -Clusters can reside in different geographical regions and replicate messages to one another -in a process called [geo-replication](#geo-replication). - -#### Instance - -A group of Pulsar [clusters](#cluster) that act together as a single unit. - -#### Geo-Replication - -Replication of messages across Pulsar [clusters](#cluster), potentially in different datacenters -or geographical regions. - -#### Configuration Store - -Pulsar's configuration store (previously known as configuration store) is a ZooKeeper quorum that -is used for configuration-specific tasks. A multi-cluster Pulsar installation requires just one -configuration store across all [clusters](#cluster). - -#### Topic Lookup - -A service provided by Pulsar [brokers](#broker) that enables connecting clients to automatically determine -which Pulsar [cluster](#cluster) is responsible for a [topic](#topic) (and thus where message traffic for -the topic needs to be routed). - -#### Service Discovery - -A mechanism provided by Pulsar that enables connecting clients to use just a single URL to interact -with all the [brokers](#broker) in a [cluster](#cluster). - -#### Broker - -A stateless component of Pulsar [clusters](#cluster) that runs two other components: an HTTP server -exposing a REST interface for administration and topic lookup and a [dispatcher](#dispatcher) that -handles all message transfers. Pulsar clusters typically consist of multiple brokers. - -#### Dispatcher - -An asynchronous TCP server used for all data transfers in-and-out a Pulsar [broker](#broker). The Pulsar -dispatcher uses a custom binary protocol for all communications. - -### Storage - -#### BookKeeper - -[Apache BookKeeper](http://bookkeeper.apache.org/) is a scalable, low-latency persistent log storage -service that Pulsar uses to store data. - -#### Bookie - -Bookie is the name of an individual BookKeeper server. It is effectively the storage server of Pulsar. - -#### Ledger - -An append-only data structure in [BookKeeper](#bookkeeper) that is used to persistently store messages in Pulsar [topics](#topic). - -### Functions - -Pulsar Functions are lightweight functions that can consume messages from Pulsar topics, apply custom processing logic, and, if desired, publish results to topics. 
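    To make the last entry concrete, here is a minimal sketch of a Pulsar Function written against the Java SDK interface `org.apache.pulsar.functions.api.Function` (the class name and the appended suffix are illustrative):

    ```java
    import org.apache.pulsar.functions.api.Context;
    import org.apache.pulsar.functions.api.Function;

    // A minimal function: consumes a String from the input topic,
    // appends "!", and returns it to the function's output topic.
    public class ExclamationFunction implements Function<String, String> {
        @Override
        public String process(String input, Context context) {
            return input + "!";
        }
    }
    ```
    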
    diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/schema-evolution-compatibility.md b/site2/website/versioned_docs/version-2.10.1-deprecated/schema-evolution-compatibility.md
    deleted file mode 100644
    index 04bd0129a74b20..00000000000000
    --- a/site2/website/versioned_docs/version-2.10.1-deprecated/schema-evolution-compatibility.md
    +++ /dev/null
    @@ -1,207 +0,0 @@
    ---
    id: schema-evolution-compatibility
    title: Schema evolution and compatibility
    sidebar_label: "Schema evolution and compatibility"
    original_id: schema-evolution-compatibility
    ---

    Normally, schemas do not stay the same over a long period of time. Instead, they evolve to satisfy new needs.

    This chapter examines how Pulsar schemas evolve and what the Pulsar schema compatibility check strategies are.

    ## Schema evolution

    A Pulsar schema is defined in a data structure called `SchemaInfo`.

    Each `SchemaInfo` stored with a topic has a version. The version is used to manage the schema changes happening within a topic.

    A message produced with a given `SchemaInfo` is tagged with the schema version. When a message is consumed by a Pulsar client, the client can use the schema version to retrieve the corresponding `SchemaInfo` and use the correct schema information to deserialize data.

    ### What is schema evolution?

    Schemas store the details of attributes and types. To satisfy new business requirements, you inevitably need to update schemas over time, which is called **schema evolution**.

    Any schema change affects downstream consumers. Schema evolution ensures that downstream consumers can seamlessly handle data encoded with both old schemas and new schemas.

    ### How should Pulsar schemas evolve?

    The answer is the Pulsar schema compatibility check strategy, which determines how a new schema is compared against the old schemas in a topic.

    For more information, see [Schema compatibility check strategy](#schema-compatibility-check-strategy).

    ### How does Pulsar support schema evolution?

    1. When a producer/consumer/reader connects to a broker, the broker deploys the schema compatibility checker configured by `schemaRegistryCompatibilityCheckers` to enforce schema compatibility checks.

       There is one schema compatibility checker instance per schema type.

       Currently, Avro and JSON have their own compatibility checkers, while all other schema types share a default compatibility checker that disables schema evolution.

    2. The producer/consumer/reader sends its client `SchemaInfo` to the broker.

    3. The broker recognizes the schema type and locates the schema compatibility checker for that type.

    4. The broker uses the checker to check whether the `SchemaInfo` is compatible with the latest schema of the topic by applying its compatibility check strategy.

       Currently, the compatibility check strategy is configured at the namespace level and applied to all the topics within that namespace.

    ## Schema compatibility check strategy

    Pulsar has 8 schema compatibility check strategies, which are summarized in the following table.

    Suppose that you have a topic containing three schemas (V1, V2, and V3), where V1 is the oldest and V3 is the latest:

    | Compatibility check strategy | Definition | Changes allowed | Check against which schema | Upgrade first |
    | --- | --- | --- | --- | --- |
    | `ALWAYS_COMPATIBLE` | Disable schema compatibility check. | All changes are allowed | All previous versions | Any order |
    
    | `ALWAYS_INCOMPATIBLE` | Disable schema evolution. | All changes are disabled | None | None |
    | `BACKWARD` | Consumers using the schema V3 can process data written by producers using the schema V3 or V2. | <li>Add optional fields</li><li>Delete fields</li> | Latest version | Consumers |
    | `BACKWARD_TRANSITIVE` | Consumers using the schema V3 can process data written by producers using the schema V3, V2 or V1. | <li>Add optional fields</li><li>Delete fields</li> | All previous versions | Consumers |
    | `FORWARD` | Consumers using the schema V3 or V2 can process data written by producers using the schema V3. | <li>Add fields</li><li>Delete optional fields</li> | Latest version | Producers |
    | `FORWARD_TRANSITIVE` | Consumers using the schema V3, V2 or V1 can process data written by producers using the schema V3. | <li>Add fields</li><li>Delete optional fields</li> | All previous versions | Producers |
    | `FULL` | Backward and forward compatible between the schema V3 and V2. | <li>Modify optional fields</li> | Latest version | Any order |
    | `FULL_TRANSITIVE` | Backward and forward compatible among the schema V3, V2, and V1. | <li>Modify optional fields</li> | All previous versions | Any order |

    ### ALWAYS_COMPATIBLE and ALWAYS_INCOMPATIBLE

    | Compatibility check strategy | Definition | Note |
    | --- | --- | --- |
    | `ALWAYS_COMPATIBLE` | Disable schema compatibility check. | None |
    | `ALWAYS_INCOMPATIBLE` | Disable schema evolution, that is, any schema change is rejected. | <li>For all schema types except Avro and JSON, the default schema compatibility check strategy is `ALWAYS_INCOMPATIBLE`.</li><li>For Avro and JSON, the default schema compatibility check strategy is `FULL`.</li> |

    #### Example

    * Example 1

      In some situations, an application needs to store events of several different types in the same Pulsar topic.

      In particular, when developing a data model in an `Event Sourcing` style, you might have several kinds of events that affect the state of an entity.

      For example, for a user entity, there are `userCreated`, `userAddressChanged` and `userEnquiryReceived` events. The application requires that those events are always read in the same order.

      Consequently, those events need to go in the same Pulsar partition to maintain order. This application can use `ALWAYS_COMPATIBLE` to allow different kinds of events to coexist in the same topic.

    * Example 2

      Sometimes you also need to make incompatible changes.

      For example, you might modify a field type from `string` to `int`.

      In this case, you need to:

      * Upgrade all producers and consumers to the new schema versions at the same time.

      * Optionally, create a new topic and start migrating applications to use the new topic and the new schema, avoiding the need to handle two incompatible versions in the same topic.

    ### BACKWARD and BACKWARD_TRANSITIVE

    Suppose that you have a topic containing three schemas (V1, V2, and V3), where V1 is the oldest and V3 is the latest:

    | Compatibility check strategy | Definition | Description |
    |---|---|---|
    | `BACKWARD` | Consumers using the new schema can process data written by producers using the **last schema**. | The consumers using the schema V3 can process data written by producers using the schema V3 or V2. |
    | `BACKWARD_TRANSITIVE` | Consumers using the new schema can process data written by producers using **all previous schemas**. | The consumers using the schema V3 can process data written by producers using the schema V3, V2, or V1. |

    #### Example

    * Example 1

      Remove a field.

      A consumer constructed to process events without one field can process events written with the old schema containing the field; the consumer simply ignores that field.

    * Example 2

      You want to load all Pulsar data into a Hive data warehouse and run SQL queries against the data.

      The same SQL queries must continue to work even when the data changes. To support this, you can evolve the schemas using the `BACKWARD` strategy.

    ### FORWARD and FORWARD_TRANSITIVE

    Suppose that you have a topic containing three schemas (V1, V2, and V3), where V1 is the oldest and V3 is the latest:

    | Compatibility check strategy | Definition | Description |
    |---|---|---|
    | `FORWARD` | Consumers using the **last schema** can process data written by producers using a new schema, even though they may not be able to use the full capabilities of the new schema. | The consumers using the schema V3 or V2 can process data written by producers using the schema V3. |
    | `FORWARD_TRANSITIVE` | Consumers using **all previous schemas** can process data written by producers using a new schema. | The consumers using the schema V3, V2, or V1 can process data written by producers using the schema V3. |

    #### Example

    * Example 1

      Add a field.

      In most data formats, consumers written to process events without new fields can continue doing so even when they receive new events containing new fields.

    * Example 2

      If a consumer has application logic tied to a full version of a schema, that logic may not be updated instantly when the schema evolves.

      In this case, you need to project data with a new schema onto an old schema that the application understands, as the sketch below illustrates.
    
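      For instance, here is an illustrative pair of Avro record definitions (the record and field names are hypothetical, not taken from this document). V2 adds a field, which is the kind of change the `FORWARD` strategy allows; a consumer still bound to V1 projects V2 data onto the fields it knows and ignores the rest.

      Schema V1, which the not-yet-upgraded consumer understands:

      ```json
      {
        "type": "record",
        "name": "User",
        "fields": [
          {"name": "name", "type": "string"}
        ]
      }
      ```

      Schema V2, which producers upgrade to by adding a field:

      ```json
      {
        "type": "record",
        "name": "User",
        "fields": [
          {"name": "name", "type": "string"},
          {"name": "age", "type": "int"}
        ]
      }
      ```
    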
      Consequently, you can evolve the schemas using the `FORWARD` strategy to ensure that the old schema can process data encoded with the new schema.

    ### FULL and FULL_TRANSITIVE

    Suppose that you have a topic containing three schemas (V1, V2, and V3), where V1 is the oldest and V3 is the latest:

    | Compatibility check strategy | Definition | Description | Note |
    | --- | --- | --- | --- |
    | `FULL` | Schemas are both backward and forward compatible, which means: consumers using the last schema can process data written by producers using the new schema, AND consumers using the new schema can process data written by producers using the last schema. | Consumers using the schema V3 can process data written by producers using the schema V3 or V2, AND consumers using the schema V3 or V2 can process data written by producers using the schema V3. | <li>For Avro and JSON, the default schema compatibility check strategy is `FULL`.</li><li>For all schema types except Avro and JSON, the default schema compatibility check strategy is `ALWAYS_INCOMPATIBLE`.</li> |
    | `FULL_TRANSITIVE` | The new schema is backward and forward compatible with all previously registered schemas. | Consumers using the schema V3 can process data written by producers using the schema V3, V2 or V1, AND consumers using the schema V3, V2 or V1 can process data written by producers using the schema V3. | None |

    #### Example

    In some data formats, for example, Avro, you can define fields with default values. Consequently, adding or removing a field with a default value is a fully compatible change.

    :::tip

    You can set the schema compatibility check strategy at the topic, namespace or broker level. For how to set the strategy, see [here](schema-manage.md#set-schema-compatibility-check-strategy).

    :::

    ## Schema verification

    When a producer or a consumer tries to connect to a topic, a broker performs some checks to verify the schema.

    ### Producer

    When a producer tries to connect to a topic (assuming schema auto-creation is disabled), the broker does the following checks:

    * Check whether the schema carried by the producer exists in the schema registry.

      * If the schema is already registered, the producer is connected to the broker and produces messages with that schema.

      * If the schema is not registered, Pulsar verifies whether the schema is allowed to be registered based on the configured compatibility check strategy.

    ### Consumer

    When a consumer tries to connect to a topic, the broker checks whether the carried schema is compatible with a registered schema based on the configured schema compatibility check strategy.

    | Compatibility check strategy | Check logic |
    | --- | --- |
    | `ALWAYS_COMPATIBLE` | All pass |
    | `ALWAYS_INCOMPATIBLE` | No pass |
    | `BACKWARD` | Can read the last schema |
    | `BACKWARD_TRANSITIVE` | Can read all schemas |
    | `FORWARD` | Can read the last schema |
    | `FORWARD_TRANSITIVE` | Can read the last schema |
    | `FULL` | Can read the last schema |
    | `FULL_TRANSITIVE` | Can read all schemas |

    ## Order of upgrading clients

    The order in which you upgrade client applications is determined by the compatibility check strategy.

    For example, suppose producers use schemas to write data to Pulsar and consumers use schemas to read data from Pulsar.

    | Compatibility check strategy | Upgrade first | Description |
    | --- | --- | --- |
    | `ALWAYS_COMPATIBLE` | Any order | The compatibility check is disabled. Consequently, you can upgrade the producers and consumers in **any order**. |
    | `ALWAYS_INCOMPATIBLE` | None | Schema evolution is disabled. |
    | <li>`BACKWARD`</li><li>`BACKWARD_TRANSITIVE`</li> | Consumers | There is no guarantee that consumers using the old schema can read data produced using the new schema. Consequently, **upgrade all consumers first**, and then start producing new data. |
    | <li>`FORWARD`</li><li>`FORWARD_TRANSITIVE`</li> | Producers | There is no guarantee that consumers using the new schema can read data produced using the old schema. Consequently, **upgrade all producers first** to use the new schema, ensure that data already produced using the old schemas is not available to consumers, and then upgrade the consumers. |
    | <li>`FULL`</li><li>`FULL_TRANSITIVE`</li> | Any order | It is guaranteed that consumers using the old schema can read data produced using the new schema, and that consumers using the new schema can read data produced using the old schema. Consequently, you can upgrade the producers and consumers in **any order**. |

    diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/schema-get-started.md b/site2/website/versioned_docs/version-2.10.1-deprecated/schema-get-started.md
    deleted file mode 100644
    index 73a05d96d7f10d..00000000000000
    --- a/site2/website/versioned_docs/version-2.10.1-deprecated/schema-get-started.md
    +++ /dev/null
    @@ -1,102 +0,0 @@
    ---
    id: schema-get-started
    title: Get started
    sidebar_label: "Get started"
    original_id: schema-get-started
    ---

    This chapter introduces Pulsar schemas and explains why they are important.

    ## Schema Registry

    Type safety is extremely important in any application built around a message bus like Pulsar.

    Producers and consumers need some kind of mechanism for coordinating types at the topic level to avoid various potential problems; for example, serialization and deserialization issues.

    Applications typically adopt one of the following approaches to guarantee type safety in messaging. Both approaches are available in Pulsar, and you're free to adopt one or the other or to mix and match on a per-topic basis.

    #### Note
    >
    > Currently, the Pulsar schema registry is only available for the [Java client](client-libraries-java.md), [Go client](client-libraries-go.md), [Python client](client-libraries-python.md), and [C++ client](client-libraries-cpp.md).

    ### Client-side approach

    Producers and consumers are responsible not only for serializing and deserializing messages (which consist of raw bytes) but also for "knowing" which types are being transmitted via which topics.

    If a producer is sending temperature sensor data on the topic `topic-1`, consumers of that topic will run into trouble if they attempt to parse that data as moisture sensor readings.

    Producers and consumers can send and receive messages consisting of raw byte arrays and leave all type safety enforcement to the application on an "out-of-band" basis.

    ### Server-side approach

    Producers and consumers inform the system which data types can be transmitted via the topic.

    With this approach, the messaging system enforces type safety and ensures that producers and consumers remain synced.

    Pulsar has a built-in **schema registry** that enables clients to upload data schemas on a per-topic basis. Those schemas dictate which data types are recognized as valid for that topic.

    ## Why use schema

    When no schema is enabled, Pulsar does not parse data: it takes bytes as input and sends bytes as output. But data usually has meaning beyond bytes, so you need to parse it and might encounter parse exceptions, which mainly occur in the following situations:

    * The field does not exist

    * The field type has changed (for example, `string` is changed to `int`)
    
- -Pulsar schema enables you to use language-specific types of data when constructing and handling messages from simple types like `string` to more complex application-specific types. - -**Example** - -You can use the _User_ class to define the messages sent to Pulsar topics. - -``` - -public class User { - String name; - int age; -} - -``` - -When constructing a producer with the _User_ class, you can specify a schema or not as below. - -### Without schema - -If you construct a producer without specifying a schema, then the producer can only produce messages of type `byte[]`. If you have a POJO class, you need to serialize the POJO into bytes before sending messages. - -**Example** - -``` - -Producer producer = client.newProducer() - .topic(topic) - .create(); -User user = new User("Tom", 28); -byte[] message = … // serialize the `user` by yourself; -producer.send(message); - -``` - -### With schema - -If you construct a producer with specifying a schema, then you can send a class to a topic directly without worrying about how to serialize POJOs into bytes. - -**Example** - -This example constructs a producer with the _JSONSchema_, and you can send the _User_ class to topics directly without worrying about how to serialize it into bytes. - -``` - -Producer producer = client.newProducer(JSONSchema.of(User.class)) - .topic(topic) - .create(); -User user = new User("Tom", 28); -producer.send(user); - -``` - -### Summary - -When constructing a producer with a schema, you do not need to serialize messages into bytes, instead Pulsar schema does this job in the background. diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/schema-manage.md b/site2/website/versioned_docs/version-2.10.1-deprecated/schema-manage.md deleted file mode 100644 index e62818c7e823f8..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/schema-manage.md +++ /dev/null @@ -1,850 +0,0 @@ ---- -id: schema-manage -title: Manage schema -sidebar_label: "Manage schema" -original_id: schema-manage ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -This guide demonstrates the ways to manage schemas: - -* Automatically - - * [Schema AutoUpdate](#schema-autoupdate) - -* Manually - - * [Schema manual management](#schema-manual-management) - - * [Custom schema storage](#custom-schema-storage) - -## Schema AutoUpdate - -If a schema passes the schema compatibility check, Pulsar producer automatically updates this schema to the topic it produces by default. - -### AutoUpdate for producer - -For a producer, the `AutoUpdate` happens in the following cases: - -* If a **topic doesn’t have a schema**, Pulsar registers a schema automatically. - -* If a **topic has a schema**: - - * If a **producer doesn’t carry a schema**: - - * If `isSchemaValidationEnforced` or `schemaValidationEnforced` is **disabled** in the namespace to which the topic belongs, the producer is allowed to connect to the topic and produce data. - - * If `isSchemaValidationEnforced` or `schemaValidationEnforced` is **enabled** in the namespace to which the topic belongs, the producer is rejected and disconnected. - - * If a **producer carries a schema**: - - A broker performs the compatibility check based on the configured compatibility check strategy of the namespace to which the topic belongs. - - * If the schema is registered, a producer is connected to a broker. 
        * If the schema is not registered:

          * If `isAllowAutoUpdateSchema` is set to **false**, the producer is rejected and cannot connect to the broker.

          * If `isAllowAutoUpdateSchema` is set to **true**:

            * If the schema passes the compatibility check, the broker registers a new schema automatically for the topic, and the producer is connected.

            * If the schema does not pass the compatibility check, the broker does not register a schema, and the producer is rejected and cannot connect to the broker.

    ![AutoUpdate Producer](/assets/schema-producer.png)

    ### AutoUpdate for consumer

    For a consumer, `AutoUpdate` happens in the following cases:

    * If a **consumer connects to a topic without a schema** (which means the consumer receives raw bytes), the consumer can connect to the topic successfully without any compatibility check.

    * If a **consumer connects to a topic with a schema**:

      * If the topic has none of the following: a schema, data, a local consumer, or a local producer:

        * If `isAllowAutoUpdateSchema` is set to **true**, the consumer registers a schema and is connected to the broker.

        * If `isAllowAutoUpdateSchema` is set to **false**, the consumer is rejected and cannot connect to the broker.

      * If the topic has at least one of the following: a schema, data, a local consumer, or a local producer, the schema compatibility check is performed.

        * If the schema passes the compatibility check, the consumer is connected to the broker.

        * If the schema does not pass the compatibility check, the consumer is rejected and cannot connect to the broker.

    ![AutoUpdate Consumer](/assets/schema-consumer.png)

    ### Manage AutoUpdate strategy

    You can use the `pulsar-admin` command to manage the `AutoUpdate` strategy as follows:

    * [Enable AutoUpdate](#enable-autoupdate)

    * [Disable AutoUpdate](#disable-autoupdate)

    * [Adjust compatibility](#adjust-compatibility)

    #### Enable AutoUpdate

    To enable `AutoUpdate` on a namespace, you can use the `pulsar-admin` command.

    ```bash
    bin/pulsar-admin namespaces set-is-allow-auto-update-schema --enable tenant/namespace
    ```

    #### Disable AutoUpdate

    To disable `AutoUpdate` on a namespace, you can use the `pulsar-admin` command.

    ```bash
    bin/pulsar-admin namespaces set-is-allow-auto-update-schema --disable tenant/namespace
    ```

    Once `AutoUpdate` is disabled, you can only register a new schema using the `pulsar-admin` command.

    #### Adjust compatibility

    To adjust the schema compatibility level on a namespace, you can use the `pulsar-admin` command.

    ```bash
    bin/pulsar-admin namespaces set-schema-compatibility-strategy --compatibility <strategy> tenant/namespace
    ```

    ### Schema validation

    By default, `schemaValidationEnforced` is **disabled** for producers:

    * This means a producer without a schema can produce any kind of message to a topic with schemas, which may result in junk data being produced to the topic.

    * It also allows clients in languages that do not support schemas to produce messages to a topic with schemas.

    However, if you want a stronger guarantee on topics with schemas, you can enable `schemaValidationEnforced` across the whole cluster or on a per-namespace basis.

    #### Enable schema validation

    To enable `schemaValidationEnforced` on a namespace, you can use the `pulsar-admin` command.
    
    ```bash
    bin/pulsar-admin namespaces set-schema-validation-enforce --enable tenant/namespace
    ```

    #### Disable schema validation

    To disable `schemaValidationEnforced` on a namespace, you can use the `pulsar-admin` command.

    ```bash
    bin/pulsar-admin namespaces set-schema-validation-enforce --disable tenant/namespace
    ```

    ## Schema manual management

    To manage schemas, you can use one of the following methods.

    | Method | Description |
    | --- | --- |
    
    | **Admin CLI** | You can use the `pulsar-admin` tool to manage Pulsar schemas, brokers, clusters, sources, sinks, topics, tenants and so on. For more information about how to use the `pulsar-admin` tool, see [here](reference-pulsar-admin.md). |
    | **REST API** | Pulsar exposes schema-related management APIs in Pulsar's admin RESTful API. You can access the admin RESTful endpoint directly to manage schemas. For more information about how to use the Pulsar REST API, see [here](/admin-rest-api/). |
    | **Java Admin API** | Pulsar provides a Java admin library. |

    ### Upload a schema

    To upload (register) a new schema for a topic, you can use one of the following methods.

    ````mdx-code-block
    <Tabs groupId="api-choice" defaultValue="Admin CLI" values={[{"label":"Admin CLI","value":"Admin CLI"},{"label":"REST API","value":"REST API"},{"label":"Java Admin API","value":"Java Admin API"}]}>
    <TabItem value="Admin CLI">

    Use the `upload` subcommand.

    ```bash
    $ pulsar-admin schemas upload <topic-name> --filename <schema-definition-file>
    ```

    The `schema-definition-file` is in JSON format.

    ```json
    {
        "type": "<schema-type>",
        "schema": "<schema-definition-data>",
        "properties": {} // the properties associated with the schema
    }
    ```

    The `schema-definition-file` includes the following fields:

    | Field | Description |
    | --- | --- |
    | `type` | The schema type. |
    
    | `schema` | The schema definition data, which is encoded in the UTF-8 charset. <li>If the schema is a **primitive** schema, this field should be blank.</li><li>If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition.</li> |
    | `properties` | The additional properties associated with the schema. |

    Here are examples of the `schema-definition-file` for a JSON schema.

    **Example 1**

    ```json
    {
        "type": "JSON",
        "schema": "{\"type\":\"record\",\"name\":\"User\",\"namespace\":\"com.foo\",\"fields\":[{\"name\":\"file1\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"file2\",\"type\":\"string\",\"default\":null},{\"name\":\"file3\",\"type\":[\"null\",\"string\"],\"default\":\"dfdf\"}]}",
        "properties": {}
    }
    ```

    **Example 2**

    ```json
    {
        "type": "STRING",
        "schema": "",
        "properties": {
            "key1": "value1"
        }
    }
    ```

    </TabItem>
    <TabItem value="REST API">

    Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v2/schemas/:tenant/:namespace/:topic/schema|operation/uploadSchema?version=@pulsar:version_number@}

    The post payload is in JSON format.

    ```json
    {
        "type": "<schema-type>",
        "schema": "<schema-definition-data>",
        "properties": {} // the properties associated with the schema
    }
    ```

    The post payload includes the following fields:

    | Field | Description |
    | --- | --- |
    | `type` | The schema type. |
    
    | `schema` | The schema definition data, which is encoded in the UTF-8 charset. <li>If the schema is a **primitive** schema, this field should be blank.</li><li>If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition.</li> |
    | `properties` | The additional properties associated with the schema. |

    </TabItem>
    <TabItem value="Java Admin API">

    ```java
    void createSchema(String topic, PostSchemaPayload schemaPayload)
    ```

    The `PostSchemaPayload` includes the following fields:

    | Field | Description |
    | --- | --- |
    | `type` | The schema type. |
    
    | `schema` | The schema definition data, which is encoded in the UTF-8 charset. <li>If the schema is a **primitive** schema, this field should be blank.</li><li>If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition.</li> |
    | `properties` | The additional properties associated with the schema. |

    Here is an example of `PostSchemaPayload`:

    ```java
    PulsarAdmin admin = …;

    PostSchemaPayload payload = new PostSchemaPayload();
    payload.setType("INT8");
    payload.setSchema("");

    admin.schemas().createSchema("my-tenant/my-ns/my-topic", payload);
    ```

    </TabItem>
    </Tabs>
    ````

    ### Get a schema (latest)

    To get the latest schema for a topic, you can use one of the following methods.

    ````mdx-code-block
    <Tabs groupId="api-choice" defaultValue="Admin CLI" values={[{"label":"Admin CLI","value":"Admin CLI"},{"label":"REST API","value":"REST API"},{"label":"Java Admin API","value":"Java Admin API"}]}>
    <TabItem value="Admin CLI">

    Use the `get` subcommand.

    ```bash
    $ pulsar-admin schemas get <topic-name>

    {
        "version": 0,
        "type": "String",
        "timestamp": 0,
        "data": "string",
        "properties": {
            "property1": "string",
            "property2": "string"
        }
    }
    ```

    </TabItem>
    <TabItem value="REST API">

    Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v2/schemas/:tenant/:namespace/:topic/schema|operation/getSchema?version=@pulsar:version_number@}

    Here is an example of a response, which is returned in JSON format.

    ```json
    {
        "version": "",
        "type": "",
        "timestamp": "",
        "data": "",
        "properties": {} // the properties associated with the schema
    }
    ```

    The response includes the following fields:

    | Field | Description |
    | --- | --- |
    | `version` | The schema version, which is a long number. |
    | `type` | The schema type. |
    | `timestamp` | The timestamp of creating this version of schema. |
    
    | `data` | The schema definition data, which is encoded in the UTF-8 charset. <li>If the schema is a **primitive** schema, this field should be blank.</li><li>If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition.</li> |
    | `properties` | The additional properties associated with the schema. |

    </TabItem>
    <TabItem value="Java Admin API">

    ```java
    SchemaInfo getSchemaInfo(String topic)
    ```

    The `SchemaInfo` includes the following fields:

    | Field | Description |
    | --- | --- |
    | `name` | The schema name. |
    | `type` | The schema type. |
    
  493. If the schema is a
  494. **primitive**
  495. schema, this byte array should be empty.
  496. If the schema is a
  497. **struct**
  498. schema, this field should be a JSON string of the Avro schema definition converted to a byte array.
  499. | -| `properties` | The additional properties associated with the schema. | - -Here is an example of `SchemaInfo`: - -```java - -PulsarAdmin admin = …; - -SchemaInfo si = admin.getSchema("my-tenant/my-ns/my-topic"); - -``` - -
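Once fetched, the `SchemaInfo` can be inspected directly. A minimal sketch (the admin URL is an assumption; `getType` and `getSchemaDefinition` are the standard `SchemaInfo` accessors):

```java

PulsarAdmin admin = PulsarAdmin.builder()
        .serviceHttpUrl("http://localhost:8080")  // assumption: local admin endpoint
        .build();

SchemaInfo si = admin.schemas().getSchemaInfo("my-tenant/my-ns/my-topic");

// The type determines how the definition data is interpreted.
System.out.println("type: " + si.getType());
// For struct schemas this is the Avro definition as a JSON string;
// for primitive schemas it is empty.
System.out.println("definition: " + si.getSchemaDefinition());

admin.close();

```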
````

### Get a schema (specific)

To get a specific version of a schema, you can use one of the following methods.

````mdx-code-block




Use the `get` subcommand.

```bash

$ pulsar-admin schemas get <topic-name> --version=<version>

```




Send a `GET` request to a schema endpoint: {@inject: endpoint|GET|/admin/v2/schemas/:tenant/:namespace/:topic/schema/:version|operation/getSchema?version=@pulsar:version_number@}

Here is an example of a response, which is returned in JSON format.

```json

{
  "version": "",
  "type": "",
  "timestamp": "",
  "data": "",
  "properties": {} // the properties associated with the schema
}

```

The response includes the following fields:

| Field | Description |
| --- | --- |
| `version` | The schema version, which is a long number. |
| `type` | The schema type. |
| `timestamp` | The timestamp at which this version of the schema was created. |
| `data` | The schema definition data, which is encoded in the UTF-8 charset. If the schema is a **primitive** schema, this field should be blank. If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition. |
| `properties` | The additional properties associated with the schema. |


```java

SchemaInfo getSchemaInfo(String topic, long version)

```

The `SchemaInfo` includes the following fields:

| Field | Description |
| --- | --- |
| `name` | The schema name. |
| `type` | The schema type. |
| `schema` | A byte array of the schema definition data, which is encoded in UTF-8. If the schema is a **primitive** schema, this byte array should be empty. If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition converted to a byte array. |
| `properties` | The additional properties associated with the schema. |

Here is an example of `SchemaInfo`:

```java

PulsarAdmin admin = …;

SchemaInfo si = admin.schemas().getSchemaInfo("my-tenant/my-ns/my-topic", 1L);

```
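To walk every stored version at once, the admin client also exposes a bulk lookup. A hedged sketch, assuming `Schemas.getAllSchemas` (available in recent admin clients) returns one `SchemaInfo` per registered version:

```java

PulsarAdmin admin = PulsarAdmin.builder()
        .serviceHttpUrl("http://localhost:8080")  // assumption: local admin endpoint
        .build();

// Assumption: one SchemaInfo per registered version of the topic's schema.
java.util.List<SchemaInfo> versions = admin.schemas().getAllSchemas("my-tenant/my-ns/my-topic");
for (SchemaInfo si : versions) {
    System.out.println(si.getName() + " -> " + si.getType());
}

admin.close();

```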
````

### Extract a schema

To extract a schema from a message class and provide it for a topic, use the following method.

````mdx-code-block




Use the `extract` subcommand.

```bash

$ pulsar-admin schemas extract --classname <class-name> --jar <jar-path> --type <type-name> <topic-name>

```




````

### Delete a schema

To delete a schema for a topic, you can use one of the following methods.

:::note

In any case, the **delete** action deletes **all versions** of a schema registered for a topic.

:::

````mdx-code-block




Use the `delete` subcommand.

```bash

$ pulsar-admin schemas delete <topic-name>

```




Send a `DELETE` request to a schema endpoint: {@inject: endpoint|DELETE|/admin/v2/schemas/:tenant/:namespace/:topic/schema|operation/deleteSchema?version=@pulsar:version_number@}

Here is an example of a response, which is returned in JSON format.

```json

{
  "version": ""
}

```

The response includes the following field:

| Field | Description |
| --- | --- |
| `version` | The schema version, which is a long number. |




```java

void deleteSchema(String topic)

```

Here is an example of deleting a schema.

```java

PulsarAdmin admin = …;

admin.schemas().deleteSchema("my-tenant/my-ns/my-topic");

```




````

## Custom schema storage

By default, Pulsar stores various data types of schemas in [Apache BookKeeper](https://bookkeeper.apache.org) deployed alongside Pulsar.

However, you can use another storage system if needed.

### Implement

To use a non-default (non-BookKeeper) storage system for Pulsar schemas, you need to implement the following Java interfaces:

* [SchemaStorage interface](#schemastorage-interface)

* [SchemaStorageFactory interface](#schemastoragefactory-interface)

#### SchemaStorage interface

The `SchemaStorage` interface has the following methods:

```java

public interface SchemaStorage {
    // How schemas are updated
    CompletableFuture<SchemaVersion> put(String key, byte[] value, byte[] hash);

    // How schemas are fetched from storage
    CompletableFuture<StoredSchema> get(String key, SchemaVersion version);

    // How schemas are deleted
    CompletableFuture<SchemaVersion> delete(String key);

    // Utility method for converting a schema version byte array to a SchemaVersion object
    SchemaVersion versionFromBytes(byte[] version);

    // Startup behavior for the schema storage client
    void start() throws Exception;

    // Shutdown behavior for the schema storage client
    void close() throws Exception;
}

```

:::tip

For a complete example of **schema storage** implementation, see the [BookKeeperSchemaStorage](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/schema/BookkeeperSchemaStorage.java) class.

:::

#### SchemaStorageFactory interface

The `SchemaStorageFactory` interface has the following method:

```java

public interface SchemaStorageFactory {
    @NotNull
    SchemaStorage create(PulsarService pulsar) throws Exception;
}

```

:::tip

For a complete example of **schema storage factory** implementation, see the [BookKeeperSchemaStorageFactory](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/schema/BookkeeperSchemaStorageFactory.java) class.

:::

### Deploy

To use your custom schema storage implementation, perform the following steps.

1. Package the implementation in a [JAR](https://docs.oracle.com/javase/tutorial/deployment/jar/basicsindex.html) file.

2. Add the JAR file to the `lib` folder in your Pulsar binary or source distribution.
3. Change the `schemaRegistryStorageClassName` configuration in `broker.conf` to your custom factory class.

4. Start Pulsar.

## Set schema compatibility check strategy

You can set the [schema compatibility check strategy](schema-evolution-compatibility.md#schema-compatibility-check-strategy) at the topic, namespace, or broker level.

The schema compatibility check strategy set at different levels has priority: topic level > namespace level > broker level.

- If you set the strategy at both the topic and namespace level, the topic-level strategy is used.

- If you set the strategy at both the namespace and broker level, the namespace-level strategy is used.

- If you do not set the strategy at any level, the `FULL` strategy is used. For all available values, see [here](schema-evolution-compatibility.md#schema-compatibility-check-strategy).


### Topic level

To set a schema compatibility check strategy at the topic level, use one of the following methods.

````mdx-code-block




Use the [`pulsar-admin topicPolicies set-schema-compatibility-strategy`](/tools/pulsar-admin/) command.

```shell

pulsar-admin topicPolicies set-schema-compatibility-strategy <strategy> <topic-name>

```




Send a `PUT` request to this endpoint: {@inject: endpoint|PUT|/admin/v2/topics/:tenant/:namespace/:topic|operation/schemaCompatibilityStrategy?version=@pulsar:version_number@}




```java

void setSchemaCompatibilityStrategy(String topic, SchemaCompatibilityStrategy strategy)

```

Here is an example of setting a schema compatibility check strategy at the topic level.

```java

PulsarAdmin admin = …;

admin.topicPolicies().setSchemaCompatibilityStrategy("my-tenant/my-ns/my-topic", SchemaCompatibilityStrategy.ALWAYS_INCOMPATIBLE);

```




````

To get the topic-level schema compatibility check strategy, use one of the following methods.

````mdx-code-block




Use the [`pulsar-admin topicPolicies get-schema-compatibility-strategy`](/tools/pulsar-admin/) command.

```shell

pulsar-admin topicPolicies get-schema-compatibility-strategy <topic-name>

```




Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v2/topics/:tenant/:namespace/:topic|operation/schemaCompatibilityStrategy?version=@pulsar:version_number@}




```java

SchemaCompatibilityStrategy getSchemaCompatibilityStrategy(String topic, boolean applied)

```

Here is an example of getting the topic-level schema compatibility check strategy.

```java

PulsarAdmin admin = …;

// get the currently applied schema compatibility strategy
admin.topicPolicies().getSchemaCompatibilityStrategy("my-tenant/my-ns/my-topic", true);

// only get the schema compatibility strategy from topic policies
admin.topicPolicies().getSchemaCompatibilityStrategy("my-tenant/my-ns/my-topic", false);

```




````

To remove the topic-level schema compatibility check strategy, use one of the following methods.

````mdx-code-block




Use the [`pulsar-admin topicPolicies remove-schema-compatibility-strategy`](/tools/pulsar-admin/) command.

```shell

pulsar-admin topicPolicies remove-schema-compatibility-strategy <topic-name>

```




Send a `DELETE` request to this endpoint: {@inject: endpoint|DELETE|/admin/v2/topics/:tenant/:namespace/:topic|operation/schemaCompatibilityStrategy?version=@pulsar:version_number@}




```java

void removeSchemaCompatibilityStrategy(String topic)

```

Here is an example of removing the topic-level schema compatibility check strategy.

```java

PulsarAdmin admin = …;

admin.topicPolicies().removeSchemaCompatibilityStrategy("my-tenant/my-ns/my-topic");

```




````


### Namespace level

You can set the schema compatibility check strategy at the namespace level using one of the following methods.

````mdx-code-block




Use the [`pulsar-admin namespaces set-schema-compatibility-strategy`](/tools/pulsar-admin/) command.

```shell

pulsar-admin namespaces set-schema-compatibility-strategy options

```




Send a `PUT` request to this endpoint: {@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace|operation/schemaCompatibilityStrategy?version=@pulsar:version_number@}




Use the [`setSchemaCompatibilityStrategy`](/api/admin/) method.

```java

admin.namespaces().setSchemaCompatibilityStrategy("test", SchemaCompatibilityStrategy.FULL);

```




````

### Broker level

You can set the schema compatibility check strategy at the broker level by setting `schemaCompatibilityStrategy` in the [`broker.conf`](https://github.com/apache/pulsar/blob/f24b4890c278f72a67fe30e7bf22dc36d71aac6a/conf/broker.conf#L1240) or [`standalone.conf`](https://github.com/apache/pulsar/blob/master/conf/standalone.conf) file.

**Example**

```

schemaCompatibilityStrategy=ALWAYS_INCOMPATIBLE

```

diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/schema-understand.md b/site2/website/versioned_docs/version-2.10.1-deprecated/schema-understand.md
deleted file mode 100644
index 55bc662c666338..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/schema-understand.md
+++ /dev/null
@@ -1,576 +0,0 @@
----
-id: schema-understand
-title: Understand schema
-sidebar_label: "Understand schema"
-original_id: schema-understand
----

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````


This chapter explains the basic concepts of Pulsar schema, focuses on the topics of particular importance, and provides additional background.

## SchemaInfo

Pulsar schema is defined in a data structure called `SchemaInfo`.

The `SchemaInfo` is stored and enforced on a per-topic basis and cannot be stored at the namespace or tenant level.

A `SchemaInfo` consists of the following fields:

| Field | Description |
| --- | --- |
| `name` | Schema name (a string). |
| `type` | Schema type, which determines how to interpret the schema data. For a predefined schema, see [here](schema-understand.md#schema-type); a customized schema is left as an empty string. |
| `schema`(`payload`) | Schema data, which is a sequence of 8-bit unsigned bytes and schema-type specific. |
| `properties` | User-defined properties as a string/string map. Applications can use this bag to carry any application-specific logic. Possible properties might be the Git hash associated with the schema, or an environment string like `dev` or `prod`. |

**Example**

This is the `SchemaInfo` of a string.

```json

{
  "name": "test-string-schema",
  "type": "STRING",
  "schema": "",
  "properties": {}
}

```

## Schema type

Pulsar supports various schema types, which are mainly divided into two categories:

* Primitive type

* Complex type

### Primitive type

Currently, Pulsar supports the following primitive types:

| Primitive Type | Description |
|---|---|
| `BOOLEAN` | A binary value |
| `INT8` | An 8-bit signed integer |
| `INT16` | A 16-bit signed integer |
| `INT32` | A 32-bit signed integer |
| `INT64` | A 64-bit signed integer |
| `FLOAT` | A single-precision (32-bit) IEEE 754 floating-point number |
| `DOUBLE` | A double-precision (64-bit) IEEE 754 floating-point number |
| `BYTES` | A sequence of 8-bit unsigned bytes |
| `STRING` | A Unicode character sequence |
| `TIMESTAMP` (`DATE`, `TIME`) | A logical type that represents a specific instant in time with millisecond precision. It stores the number of milliseconds since `January 1, 1970, 00:00:00 GMT` as an `INT64` value |
| `INSTANT` | A single instantaneous point on the time-line with nanosecond precision |
| `LOCAL_DATE` | An immutable date-time object that represents a date, often viewed as year-month-day |
| `LOCAL_TIME` | An immutable date-time object that represents a time, often viewed as hour-minute-second. Time is represented to nanosecond precision. |
| `LOCAL_DATE_TIME` | An immutable date-time object that represents a date-time, often viewed as year-month-day-hour-minute-second |

For primitive types, Pulsar does not store any schema data in `SchemaInfo`. The `type` in `SchemaInfo` is used to determine how to serialize and deserialize the data.

Some of the primitive schema implementations can use `properties` to store implementation-specific tunable settings. For example, a `string` schema can use `properties` to store the encoding charset to serialize and deserialize strings.

The conversions between **Pulsar schema types** and **language-specific primitive types** are as below.

| Schema Type | Java Type | Python Type | Go Type |
|---|---|---|---|
| BOOLEAN | boolean | bool | bool |
| INT8 | byte | | int8 |
| INT16 | short | | int16 |
| INT32 | int | | int32 |
| INT64 | long | | int64 |
| FLOAT | float | float | float32 |
| DOUBLE | double | float | float64 |
| BYTES | byte[], ByteBuffer, ByteBuf | bytes | []byte |
| STRING | string | str | string |
| TIMESTAMP | java.sql.Timestamp | | |
| TIME | java.sql.Time | | |
| DATE | java.util.Date | | |
| INSTANT | java.time.Instant | | |
| LOCAL_DATE | java.time.LocalDate | | |
| LOCAL_TIME | java.time.LocalTime | | |
| LOCAL_DATE_TIME | java.time.LocalDateTime | | |

**Example**

This example demonstrates how to use a string schema.

1. Create a producer with a string schema and send messages.

   ```java

   Producer<String> producer = client.newProducer(Schema.STRING).create();
   producer.newMessage().value("Hello Pulsar!").send();

   ```

2. Create a consumer with a string schema and receive messages.

   ```java

   Consumer<String> consumer = client.newConsumer(Schema.STRING).subscribe();
   consumer.receive();

   ```

### Complex type

Currently, Pulsar supports the following complex types:

| Complex Type | Description |
|---|---|
| `keyvalue` | Represents a complex type of a key/value pair. |
| `struct` | Handles structured data. It supports `AvroBaseStructSchema` and `ProtobufNativeSchema`. |

#### keyvalue

`Keyvalue` schema helps applications define schemas for both key and value.

For `SchemaInfo` of `keyvalue` schema, Pulsar stores the `SchemaInfo` of the key schema and the `SchemaInfo` of the value schema together.

Pulsar provides the following methods to encode a key/value pair in messages:

* `INLINE`

* `SEPARATED`

You can choose the encoding type when constructing the key/value schema.

````mdx-code-block




Key/value pairs are encoded together in the message payload.




Key is encoded in the message key and the value is encoded in the message payload.

**Example**

This example shows how to construct a key/value schema and then use it to produce and consume messages.

1. Construct a key/value schema with `INLINE` encoding type.

   ```java

   Schema<KeyValue<Integer, String>> kvSchema = Schema.KeyValue(
       Schema.INT32,
       Schema.STRING,
       KeyValueEncodingType.INLINE
   );

   ```

2. Optionally, construct a key/value schema with `SEPARATED` encoding type.

   ```java

   Schema<KeyValue<Integer, String>> kvSchema = Schema.KeyValue(
       Schema.INT32,
       Schema.STRING,
       KeyValueEncodingType.SEPARATED
   );

   ```

3. Produce messages using a key/value schema.

   ```java

   Schema<KeyValue<Integer, String>> kvSchema = Schema.KeyValue(
       Schema.INT32,
       Schema.STRING,
       KeyValueEncodingType.SEPARATED
   );

   Producer<KeyValue<Integer, String>> producer = client.newProducer(kvSchema)
       .topic(TOPIC)
       .create();

   final int key = 100;
   final String value = "value-100";

   // send the key/value message
   producer.newMessage()
       .value(new KeyValue<>(key, value))
       .send();

   ```

4. Consume messages using a key/value schema.

   ```java

   Schema<KeyValue<Integer, String>> kvSchema = Schema.KeyValue(
       Schema.INT32,
       Schema.STRING,
       KeyValueEncodingType.SEPARATED
   );

   Consumer<KeyValue<Integer, String>> consumer = client.newConsumer(kvSchema)
       ...
       .topic(TOPIC)
       .subscriptionName(SubscriptionName).subscribe();

   // receive key/value pair
   Message<KeyValue<Integer, String>> msg = consumer.receive();
   KeyValue<Integer, String> kv = msg.getValue();

   ```




````

#### struct

This section describes the details of type and usage of the `struct` schema.

##### Type

`struct` schema supports `AvroBaseStructSchema` and `ProtobufNativeSchema`.

|Type|Description|
|---|---|
|`AvroBaseStructSchema`|Pulsar uses the [Avro Specification](http://avro.apache.org/docs/current/spec.html) to declare the schema definition for `AvroBaseStructSchema`, which supports `AvroSchema`, `JsonSchema`, and `ProtobufSchema`. This allows Pulsar to use the same tools to manage schema definitions, and to use different serialization or deserialization methods to handle data.|
|`ProtobufNativeSchema`|`ProtobufNativeSchema` is based on the protobuf native Descriptor. This allows Pulsar to use native protobuf-v3 to serialize or deserialize data, and to use `AutoConsume` to deserialize data.|

##### Usage

Pulsar provides the following methods to use the `struct` schema:

* `static`

* `generic`

* `SchemaDefinition`

````mdx-code-block




You can predefine the `struct` schema, which can be a POJO in Java, a `struct` in Go, or classes generated by Avro or Protobuf tools.

**Example**

Pulsar gets the schema definition from the predefined `struct` using an Avro library. The schema definition is the schema data stored as a part of the `SchemaInfo`.

1. Create the _User_ class to define the messages sent to Pulsar topics.

   ```java

   @Builder
   @AllArgsConstructor
   @NoArgsConstructor
   public static class User {
       String name;
       int age;
   }

   ```

2. Create a producer with a `struct` schema and send messages.

   ```java

   Producer<User> producer = client.newProducer(Schema.AVRO(User.class)).create();
   producer.newMessage().value(User.builder().name("pulsar-user").age(1).build()).send();

   ```

3. Create a consumer with a `struct` schema and receive messages.

   ```java

   Consumer<User> consumer = client.newConsumer(Schema.AVRO(User.class)).subscribe();
   User user = consumer.receive().getValue();

   ```




Sometimes applications do not have pre-defined structs, and you can use this method to define schema and access data.

You can define the `struct` schema using the `GenericSchemaBuilder`, generate a generic struct using `GenericRecordBuilder` and consume messages into `GenericRecord`.

**Example**

1. Use `RecordSchemaBuilder` to build a schema.

   ```java

   RecordSchemaBuilder recordSchemaBuilder = SchemaBuilder.record("schemaName");
   recordSchemaBuilder.field("intField").type(SchemaType.INT32);
   SchemaInfo schemaInfo = recordSchemaBuilder.build(SchemaType.AVRO);

   GenericSchema<GenericRecord> schema = Schema.generic(schemaInfo);
   Producer<GenericRecord> producer = client.newProducer(schema).create();

   ```

2. Use `RecordBuilder` to build the struct records.

   ```java

   producer.newMessage().value(schema.newRecordBuilder()
       .set("intField", 32)
       .build()).send();

   ```




You can define the `schemaDefinition` to generate a `struct` schema.

**Example**

1. Create the _User_ class to define the messages sent to Pulsar topics.

   ```java

   @Builder
   @AllArgsConstructor
   @NoArgsConstructor
   public static class User {
       String name;
       int age;
   }

   ```

2. Create a producer with a `SchemaDefinition` and send messages.

   ```java

   SchemaDefinition<User> schemaDefinition = SchemaDefinition.<User>builder().withPojo(User.class).build();
   Producer<User> producer = client.newProducer(Schema.AVRO(schemaDefinition)).create();
   producer.newMessage().value(User.builder().name("pulsar-user").age(1).build()).send();

   ```

3. Create a consumer with a `SchemaDefinition` schema and receive messages.

   ```java

   SchemaDefinition<User> schemaDefinition = SchemaDefinition.<User>builder().withPojo(User.class).build();
   Consumer<User> consumer = client.newConsumer(Schema.AVRO(schemaDefinition)).subscribe();
   User user = consumer.receive().getValue();

   ```




````

### Auto Schema

If you don't know the schema type of a Pulsar topic in advance, you can use AUTO schema to produce or consume generic records to or from brokers.

| Auto Schema Type | Description |
|---|---|
| `AUTO_PRODUCE` | This is useful for transferring data **from a producer to a Pulsar topic that has a schema**. |
| `AUTO_CONSUME` | This is useful for transferring data **from a Pulsar topic that has a schema to a consumer**. |
#### AUTO_PRODUCE

`AUTO_PRODUCE` schema helps a producer validate whether the bytes it sends are compatible with the schema of a topic.

**Example**

Suppose that:

* You have a producer processing messages from a Kafka topic _K_.

* You have a Pulsar topic _P_, and you do not know its schema type.

* Your application reads the messages from _K_ and writes the messages to _P_.

In this case, you can use `AUTO_PRODUCE` to verify whether the bytes produced by _K_ can be sent to _P_ or not.

```java

Producer<byte[]> pulsarProducer = client.newProducer(Schema.AUTO_PRODUCE_BYTES())
    …
    .create();

byte[] kafkaMessageBytes = … ;

pulsarProducer.send(kafkaMessageBytes);

```

#### AUTO_CONSUME

`AUTO_CONSUME` schema helps a consumer validate whether the bytes it receives from a Pulsar topic are compatible with the consumer; that is, messages are deserialized into language-specific objects using the `SchemaInfo` retrieved from the broker side.

Currently, `AUTO_CONSUME` supports AVRO, JSON and ProtobufNativeSchema schemas. It deserializes messages into `GenericRecord`.

**Example**

Suppose that:

* You have a Pulsar topic _P_.

* You have a consumer (for example, MySQL) receiving messages from the topic _P_.

* Your application reads the messages from _P_ and writes the messages to MySQL.

In this case, you can use `AUTO_CONSUME` to verify whether the bytes produced by _P_ can be sent to MySQL or not.

```java

Consumer<GenericRecord> pulsarConsumer = client.newConsumer(Schema.AUTO_CONSUME())
    …
    .subscribe();

Message<GenericRecord> msg = pulsarConsumer.receive();
GenericRecord record = msg.getValue();

```

### Native Avro Schema

When migrating or ingesting event or message data from external systems (such as Kafka and Cassandra), the events are often already serialized in Avro format. The applications producing the data typically have validated the data against their schemas (including compatibility checks) and stored them in a database or a dedicated service (such as a schema registry). The schema of each serialized data record is usually retrievable by some metadata attached to that record. In such cases, a Pulsar producer doesn't need to repeat the schema validation step when sending the ingested events to a topic. All it needs to do is to pass each message or event with its schema to Pulsar.

Hence, we provide `Schema.NATIVE_AVRO` to wrap a native Avro schema of type `org.apache.avro.Schema`. The result is a schema instance of Pulsar that accepts a serialized Avro payload without validating it against the wrapped Avro schema.

**Example**

```java

org.apache.avro.Schema nativeAvroSchema = … ;

Producer<byte[]> producer = pulsarClient.newProducer().topic("ingress").create();

byte[] content = … ;

producer.newMessage(Schema.NATIVE_AVRO(nativeAvroSchema)).value(content).send();

```

## Schema version

Each `SchemaInfo` stored with a topic has a version. The schema version manages schema changes happening within a topic.

Messages produced with a given `SchemaInfo` are tagged with a schema version, so when a message is consumed by a Pulsar client, the Pulsar client can use the schema version to retrieve the corresponding `SchemaInfo` and then use the `SchemaInfo` to deserialize data.

Schemas are versioned in succession. Schema storage happens in a broker that handles the associated topics so that version assignments can be made.

Once a version is assigned to or fetched for a schema, all subsequent messages produced by that producer are tagged with the appropriate version.

**Example**

The following example illustrates how the schema version works.

Suppose that a Pulsar [Java client](client-libraries-java.md) created using the code below attempts to connect to Pulsar and begins to send messages:

```java

PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar://localhost:6650")
    .build();

Producer<SensorReading> producer = client.newProducer(JSONSchema.of(SensorReading.class))
    .topic("sensor-data")
    .sendTimeout(3, TimeUnit.SECONDS)
    .create();

```

The table below lists the possible scenarios when this connection attempt occurs and what happens in each scenario:

| Scenario | What happens |
| --- | --- |
| No schema exists for the topic. | (1) The producer is created using the given schema. (2) Since no existing schema is compatible with the `SensorReading` schema, the schema is transmitted to the broker and stored. (3) Any consumer created using the same schema or topic can consume messages from the `sensor-data` topic. |
| A schema already exists. The producer connects using the same schema that is already stored. | (1) The schema is transmitted to the broker. (2) The broker determines that the schema is compatible. (3) The broker attempts to store the schema in [BookKeeper](concepts-architecture-overview.md#persistent-storage) but then determines that it's already stored, so it is used to tag produced messages. |
| A schema already exists. The producer connects using a new schema that is compatible. | (1) The schema is transmitted to the broker. (2) The broker determines that the schema is compatible and stores the new schema as the current version (with a new version number). |

## How does schema work

Pulsar schemas are applied and enforced at the **topic** level (schemas cannot be applied at the namespace or tenant level).

Producers and consumers upload schemas to brokers, so Pulsar schemas work on the producer side and the consumer side.

### Producer side

This diagram illustrates how schema works on the producer side.

![Schema works at the producer side](/assets/schema-producer.png)

1. The application uses a schema instance to construct a producer instance.

   The schema instance defines the schema for the data being produced using the producer instance.

   Taking AVRO as an example, Pulsar extracts the schema definition from the POJO class and constructs the `SchemaInfo` that the producer needs to pass to a broker when it connects.

2. The producer connects to the broker with the `SchemaInfo` extracted from the passed-in schema instance.

3. The broker looks up the schema in the schema storage to check if it is already a registered schema.

4. If yes, the broker skips the schema validation since it is a known schema, and returns the schema version to the producer.

5. If no, the broker verifies whether a schema can be automatically created in this namespace:

  * If `isAllowAutoUpdateSchema` is set to **true**, a schema can be created, and the broker validates the schema based on the schema compatibility check strategy defined for the topic.

  * If `isAllowAutoUpdateSchema` is set to **false**, a schema cannot be created, and the producer's connection to the broker is rejected.

**Tip**:

`isAllowAutoUpdateSchema` can be set via **Pulsar admin API** or **REST API.**

For how to set `isAllowAutoUpdateSchema` via Pulsar admin API, see [Manage AutoUpdate Strategy](schema-manage.md/#manage-autoupdate-strategy).

6. If the schema is allowed to be updated, the compatibility check is performed.

  * If the schema is compatible, the broker stores it and returns the schema version to the producer.

    All the messages produced by this producer are tagged with the schema version.

  * If the schema is incompatible, the broker rejects it.

### Consumer side

This diagram illustrates how schema works on the consumer side.

![Schema works at the consumer side](/assets/schema-consumer.png)

1. The application uses a schema instance to construct a consumer instance.

   The schema instance defines the schema that the consumer uses for decoding messages received from a broker.

2. The consumer connects to the broker with the `SchemaInfo` extracted from the passed-in schema instance.

3. The broker determines whether the topic has one of them (a schema/data/a local consumer and a local producer).

4. If a topic does not have all of them (a schema/data/a local consumer and a local producer):

  * If `isAllowAutoUpdateSchema` is set to **true**, the consumer registers a schema and is connected to the broker.

  * If `isAllowAutoUpdateSchema` is set to **false**, the consumer's connection to the broker is rejected.

5. If a topic has one of them (a schema/data/a local consumer and a local producer), the schema compatibility check is performed.

  * If the schema passes the compatibility check, the consumer is connected to the broker.

  * If the schema does not pass the compatibility check, the consumer's connection to the broker is rejected.

6. The consumer receives messages from the broker.

   If the schema used by the consumer supports schema versioning (for example, AVRO schema), the consumer fetches the `SchemaInfo` of the version tagged in messages and uses the passed-in schema and the schema tagged in messages to decode the messages.

diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/security-athenz.md b/site2/website/versioned_docs/version-2.10.1-deprecated/security-athenz.md
deleted file mode 100644
index 8a39fe25316d07..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/security-athenz.md
+++ /dev/null
@@ -1,98 +0,0 @@
----
-id: security-athenz
-title: Authentication using Athenz
-sidebar_label: "Authentication using Athenz"
-original_id: security-athenz
----

[Athenz](https://github.com/AthenZ/athenz) is a role-based authentication/authorization system. In Pulsar, you can use Athenz role tokens (also known as *z-tokens*) to establish the identity of the client.

## Athenz authentication settings

A [decentralized Athenz system](https://github.com/AthenZ/athenz/blob/master/docs/decent_authz_flow.md) contains an [authori**Z**ation **M**anagement **S**ystem](https://github.com/AthenZ/athenz/blob/master/docs/setup_zms.md) (ZMS) server and an [authori**Z**ation **T**oken **S**ystem](https://github.com/AthenZ/athenz/blob/master/docs/setup_zts) (ZTS) server.

To begin, you need to set up Athenz service access control. You need to create domains for the *provider* (which provides some resources to other services with some authentication/authorization policies) and the *tenant* (which is provisioned to access some resources in a provider). In this case, the provider corresponds to the Pulsar service itself and the tenant corresponds to each application using Pulsar (typically, a [tenant](reference-terminology.md#tenant) in Pulsar).

### Create the tenant domain and service

On the [tenant](reference-terminology.md#tenant) side, you need to do the following things:

1. Create a domain, such as `shopping`
2. Generate a private/public key pair
3. Create a service, such as `some_app`, on the domain with the public key

Note that you need to specify the private key generated in step 2 when the Pulsar client connects to the [broker](reference-terminology.md#broker) (see client configuration examples for [Java](client-libraries-java.md#tls-authentication) and [C++](client-libraries-cpp.md#tls-authentication)).

For more specific steps involving the Athenz UI, refer to [Example Service Access Control Setup](https://github.com/AthenZ/athenz/blob/master/docs/example_service_athenz_setup.md#client-tenant-domain).

### Create the provider domain and add the tenant service to some role members

On the provider side, you need to do the following things:

1. Create a domain, such as `pulsar`
2. Create a role
3. Add the tenant service to members of the role

Note that you can specify any action and resource in step 2 since they are not used on Pulsar. In other words, Pulsar uses the Athenz role token only for authentication, *not* for authorization.

For more specific steps involving the UI, refer to [Example Service Access Control Setup](https://github.com/AthenZ/athenz/blob/master/docs/example_service_athenz_setup.md#server-provider-domain).
- -## Configure the broker for Athenz - -> ### TLS encryption -> -> Note that when you are using Athenz as an authentication provider, you had better use TLS encryption -> as it can protect role tokens from being intercepted and reused. (for more details involving TLS encryption see [Architecture - Data Model](https://github.com/AthenZ/athenz/blob/master/docs/data_model)). - -In the `conf/broker.conf` configuration file in your Pulsar installation, you need to provide the class name of the Athenz authentication provider as well as a comma-separated list of provider domain names. - -```properties - -# Add the Athenz auth provider -authenticationEnabled=true -authorizationEnabled=true -authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderAthenz -athenzDomainNames=pulsar - -# Enable TLS -tlsEnabled=true -tlsCertificateFilePath=/path/to/broker-cert.pem -tlsKeyFilePath=/path/to/broker-key.pem - -# Authentication settings of the broker itself. Used when the broker connects to other brokers, either in same or other clusters -brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationAthenz -brokerClientAuthenticationParameters={"tenantDomain":"shopping","tenantService":"some_app","providerDomain":"pulsar","privateKey":"file:///path/to/private.pem","keyId":"v1"} - -``` - -> A full listing of parameters is available in the `conf/broker.conf` file, you can also find the default -> values for those parameters in [Broker Configuration](reference-configuration.md#broker). - -## Configure clients for Athenz - -For more information on Pulsar client authentication using Athenz, see the following language-specific docs: - -* [Java client](client-libraries-java.md#athenz) - -## Configure CLI tools for Athenz - -[Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-pulsar-admin.md), [`pulsar-perf`](reference-cli-tools.md#pulsar-perf), and [`pulsar-client`](reference-cli-tools.md#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation. - -You need to add the following authentication parameters to the `conf/client.conf` config file to use Athenz with CLI tools of Pulsar: - -```properties - -# URL for the broker -serviceUrl=https://broker.example.com:8443/ - -# Set Athenz auth plugin and its parameters -authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationAthenz -authParams={"tenantDomain":"shopping","tenantService":"some_app","providerDomain":"pulsar","privateKey":"file:///path/to/private.pem","keyId":"v1"} - -# Enable TLS -useTls=true -tlsAllowInsecureConnection=false -tlsTrustCertsFilePath=/path/to/cacert.pem - -``` - diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/security-authorization.md b/site2/website/versioned_docs/version-2.10.1-deprecated/security-authorization.md deleted file mode 100644 index 9cfd7c8c203f63..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/security-authorization.md +++ /dev/null @@ -1,130 +0,0 @@ ---- -id: security-authorization -title: Authentication and authorization in Pulsar -sidebar_label: "Authorization and ACLs" -original_id: security-authorization ---- - - -In Pulsar, the [authentication provider](security-overview.md#authentication-providers) is responsible for properly identifying clients and associating the clients with [role tokens](security-overview.md#role-tokens). If you only enable authentication, an authenticated role token has the ability to access all resources in the cluster. 
*Authorization* is the process that determines *what* clients are able to do. - -The role tokens with the most privileges are the *superusers*. The *superusers* can create and destroy tenants, along with having full access to all tenant resources. - -When a superuser creates a [tenant](reference-terminology.md#tenant), that tenant is assigned an admin role. A client with the admin role token can then create, modify and destroy namespaces, and grant and revoke permissions to *other role tokens* on those namespaces. - -## Broker and Proxy Setup - -### Enable authorization and assign superusers -You can enable the authorization and assign the superusers in the broker ([`conf/broker.conf`](reference-configuration.md#broker)) configuration files. - -```properties - -authorizationEnabled=true -superUserRoles=my-super-user-1,my-super-user-2 - -``` - -> A full list of parameters is available in the `conf/broker.conf` file. -> You can also find the default values for those parameters in [Broker Configuration](reference-configuration.md#broker). - -Typically, you use superuser roles for administrators, clients as well as broker-to-broker authorization. When you use [geo-replication](concepts-replication.md), every broker needs to be able to publish to all the other topics of clusters. - -You can also enable the authorization for the proxy in the proxy configuration file (`conf/proxy.conf`). Once you enable the authorization on the proxy, the proxy does an additional authorization check before forwarding the request to a broker. -If you enable authorization on the broker, the broker checks the authorization of the request when the broker receives the forwarded request. - -### Proxy Roles - -By default, the broker treats the connection between a proxy and the broker as a normal user connection. The broker authenticates the user as the role configured in `proxy.conf`(see ["Enable TLS Authentication on Proxies"](security-tls-authentication.md#enable-tls-authentication-on-proxies)). However, when the user connects to the cluster through a proxy, the user rarely requires the authentication. The user expects to be able to interact with the cluster as the role for which they have authenticated with the proxy. - -Pulsar uses *Proxy roles* to enable the authentication. Proxy roles are specified in the broker configuration file, [`conf/broker.conf`](reference-configuration.md#broker). If a client that is authenticated with a broker is one of its ```proxyRoles```, all requests from that client must also carry information about the role of the client that is authenticated with the proxy. This information is called the *original principal*. If the *original principal* is absent, the client is not able to access anything. - -You must authorize both the *proxy role* and the *original principal* to access a resource to ensure that the resource is accessible via the proxy. Administrators can take two approaches to authorize the *proxy role* and the *original principal*. - -The more secure approach is to grant access to the proxy roles each time you grant access to a resource. For example, if you have a proxy role named `proxy1`, when the superuser creates a tenant, you should specify `proxy1` as one of the admin roles. When a role is granted permissions to produce or consume from a namespace, if that client wants to produce or consume through a proxy, you should also grant `proxy1` the same permissions. - -Another approach is to make the proxy role a superuser. This allows the proxy to access all resources. 
The client still needs to authenticate with the proxy, and all requests made through the proxy have their role downgraded to the *original principal* of the authenticated client. However, if the proxy is compromised, a bad actor could get full access to your cluster. - -You can specify the roles as proxy roles in [`conf/broker.conf`](reference-configuration.md#broker). - -```properties - -proxyRoles=my-proxy-role - -# if you want to allow superusers to use the proxy (see above) -superUserRoles=my-super-user-1,my-super-user-2,my-proxy-role - -``` - -## Administer tenants - -Pulsar [instance](reference-terminology.md#instance) administrators or some kind of self-service portal typically provisions a Pulsar [tenant](reference-terminology.md#tenant). - -You can manage tenants using the [`pulsar-admin`](reference-pulsar-admin.md) tool. - -### Create a new tenant - -The following is an example tenant creation command: - -```shell - -$ bin/pulsar-admin tenants create my-tenant \ - --admin-roles my-admin-role \ - --allowed-clusters us-west,us-east - -``` - -This command creates a new tenant `my-tenant` that is allowed to use the clusters `us-west` and `us-east`. - -A client that successfully identifies itself as having the role `my-admin-role` is allowed to perform all administrative tasks on this tenant. - -The structure of topic names in Pulsar reflects the hierarchy between tenants, clusters, and namespaces: - -```shell - -persistent://tenant/namespace/topic - -``` - -### Manage permissions - -You can use [Pulsar Admin Tools](admin-api-permissions.md) for managing permission in Pulsar. - -### Pulsar admin authentication - -```java - -PulsarAdmin admin = PulsarAdmin.builder() - .serviceHttpUrl("http://broker:8080") - .authentication("com.org.MyAuthPluginClass", "param1:value1") - .build(); - -``` - -To use TLS: - -```java - -PulsarAdmin admin = PulsarAdmin.builder() - .serviceHttpUrl("https://broker:8080") - .authentication("com.org.MyAuthPluginClass", "param1:value1") - .tlsTrustCertsFilePath("/path/to/trust/cert") - .build(); - -``` - -## Authorize an authenticated client with multiple roles - -When a client is identified with multiple roles in a token (the type of role claim in the token is an array) during the authentication process, Pulsar supports to check the permissions of all the roles and further authorize the client as long as one of its roles has the required permissions. - -> **Note**
    -> This authorization method is only compatible with [JWT authentication](security-jwt.md). - -To enable this authorization method, configure the authorization provider as `MultiRolesTokenAuthorizationProvider` in the `conf/broker.conf` file. - - ```properties - - # Authorization provider fully qualified class-name - authorizationProvider=org.apache.pulsar.broker.authorization.MultiRolesTokenAuthorizationProvider - - ``` - diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/security-basic-auth.md b/site2/website/versioned_docs/version-2.10.1-deprecated/security-basic-auth.md deleted file mode 100644 index 2585526bb478af..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/security-basic-auth.md +++ /dev/null @@ -1,127 +0,0 @@ ---- -id: security-basic-auth -title: Authentication using HTTP basic -sidebar_label: "Authentication using HTTP basic" ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - -[Basic authentication](https://en.wikipedia.org/wiki/Basic_access_authentication) is a simple authentication scheme built into the HTTP protocol, which uses base64-encoded username and password pairs as credentials. - -## Prerequisites - -Install [`htpasswd`](https://httpd.apache.org/docs/2.4/programs/htpasswd.html) in your environment to create a password file for storing username-password pairs. - -* For Ubuntu/Debian, run the following command to install `htpasswd`. - - ``` - apt install apache2-utils - ``` - -* For CentOS/RHEL, run the following command to install `htpasswd`. - - ``` - yum install httpd-tools - ``` - -## Create your authentication file - -:::note -Currently, you can use MD5 (recommended) and CRYPT encryption to authenticate your password. -::: - -Create a password file named `.htpasswd` with a user account `superuser/admin`: -* Use MD5 encryption (recommended): - - ``` - htpasswd -cmb /path/to/.htpasswd superuser admin - ``` - -* Use CRYPT encryption: - - ``` - htpasswd -cdb /path/to/.htpasswd superuser admin - ``` - -You can preview the content of your password file by running the following command: - -``` -cat path/to/.htpasswd -superuser:$apr1$GBIYZYFZ$MzLcPrvoUky16mLcK6UtX/ -``` - -## Enable basic authentication on brokers - -To configure brokers to authenticate clients, complete the following steps. - -1. Add the following parameters to the `conf/broker.conf` file. If you use a standalone Pulsar, you need to add these parameters to the `conf/standalone.conf` file. - - ``` - # Configuration to enable Basic authentication - authenticationEnabled=true - authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderBasic - - # Authentication settings of the broker itself. Used when the broker connects to other brokers, either in same or other clusters - brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationBasic - brokerClientAuthenticationParameters={"userId":"superuser","password":"admin"} - - # If this flag is set then the broker authenticates the original Auth data - # else it just accepts the originalPrincipal and authorizes it (if required). - authenticateOriginalAuthData=true - ``` - -2. Set an environment variable named `PULSAR_EXTRA_OPTS` and the value is `-Dpulsar.auth.basic.conf=/path/to/.htpasswd`. Pulsar reads this environment variable to implement HTTP basic authentication. - -## Enable basic authentication on proxies - -To configure proxies to authenticate clients, complete the following steps. - -1. 
Add the following parameters to the `conf/proxy.conf` file: - - ``` - # For clients connecting to the proxy - authenticationEnabled=true - authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderBasic - - # For the proxy to connect to brokers - brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationBasic - brokerClientAuthenticationParameters={"userId":"superuser","password":"admin"} - - # Whether client authorization credentials are forwarded to the broker for re-authorization. - # Authentication must be enabled via authenticationEnabled=true for this to take effect. - forwardAuthorizationCredentials=true - ``` - -2. Set an environment variable named `PULSAR_EXTRA_OPTS` and the value is `-Dpulsar.auth.basic.conf=/path/to/.htpasswd`. Pulsar reads this environment variable to implement HTTP basic authentication. - -## Configure basic authentication in CLI tools - -[Command-line tools](/docs/next/reference-cli-tools), such as [Pulsar-admin](/tools/pulsar-admin/), [Pulsar-perf](/tools/pulsar-perf/) and [Pulsar-client](/tools/pulsar-client/), use the `conf/client.conf` file in your Pulsar installation. To configure basic authentication in Pulsar CLI tools, you need to add the following parameters to the `conf/client.conf` file. - -``` -authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationBasic -authParams={"userId":"superuser","password":"admin"} -``` - - -## Configure basic authentication in Pulsar clients - -The following example shows how to configure basic authentication when using Pulsar clients. - - - - - ```java - AuthenticationBasic auth = new AuthenticationBasic(); - auth.configure("{\"userId\":\"superuser\",\"password\":\"admin\"}"); - PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://broker.example.com:6650") - .authentication(auth) - .build(); - ``` - - - diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/security-bouncy-castle.md b/site2/website/versioned_docs/version-2.10.1-deprecated/security-bouncy-castle.md deleted file mode 100644 index be937055d8e311..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/security-bouncy-castle.md +++ /dev/null @@ -1,157 +0,0 @@ ---- -id: security-bouncy-castle -title: Bouncy Castle Providers -sidebar_label: "Bouncy Castle Providers" -original_id: security-bouncy-castle ---- - -## BouncyCastle Introduce - -`Bouncy Castle` is a Java library that complements the default Java Cryptographic Extension (JCE), -and it provides more cipher suites and algorithms than the default JCE provided by Sun. - -In addition to that, `Bouncy Castle` has lots of utilities for reading arcane formats like PEM and ASN.1 that no sane person would want to rewrite themselves. - -In Pulsar, security and crypto have dependencies on BouncyCastle Jars. For the detailed installing and configuring Bouncy Castle FIPS, see [BC FIPS Documentation](https://www.bouncycastle.org/documentation.html), especially the **User Guides** and **Security Policy** PDFs. - -`Bouncy Castle` provides both [FIPS](https://www.bouncycastle.org/fips_faq.html) and non-FIPS version. But in a JVM, you can not include both of the 2 versions, and you need to exclude the current version before include the other. - -In Pulsar, the security and crypto methods also depends on `Bouncy Castle`, especially in [TLS Authentication](security-tls-authentication.md) and [Transport Encryption](security-encryption.md). 
This document contains the configuration between BouncyCastle FIPS(BC-FIPS) and non-FIPS(BC-non-FIPS) version while using Pulsar. - -## How BouncyCastle modules packaged in Pulsar - -In Pulsar's `bouncy-castle` module, We provide 2 sub modules: `bouncy-castle-bc`(for non-FIPS version) and `bouncy-castle-bcfips`(for FIPS version), to package BC jars together to make the include and exclude of `Bouncy Castle` easier. - -To achieve this goal, we will need to package several `bouncy-castle` jars together into `bouncy-castle-bc` or `bouncy-castle-bcfips` jar. -Each of the original bouncy-castle jar is related with security, so BouncyCastle dutifully supplies signed of each JAR. -But when we do the re-package, Maven shade explodes the BouncyCastle jar file which puts the signatures into META-INF, -these signatures aren't valid for this new, uber-jar (signatures are only for the original BC jar). -Usually, You will meet error like `java.lang.SecurityException: Invalid signature file digest for Manifest main attributes`. - -You could exclude these signatures in mvn pom file to avoid above error, by - -```access transformers - -META-INF/*.SF -META-INF/*.DSA -META-INF/*.RSA - -``` - -But it can also lead to new, cryptic errors, e.g. `java.security.NoSuchAlgorithmException: PBEWithSHA256And256BitAES-CBC-BC SecretKeyFactory not available` -By explicitly specifying where to find the algorithm like this: `SecretKeyFactory.getInstance("PBEWithSHA256And256BitAES-CBC-BC","BC")` -It will get the real error: `java.security.NoSuchProviderException: JCE cannot authenticate the provider BC` - -So, we used a [executable packer plugin](https://github.com/nthuemmel/executable-packer-maven-plugin) that uses a jar-in-jar approach to preserve the BouncyCastle signature in a single, executable jar. - -### Include dependencies of BC-non-FIPS - -Pulsar module `bouncy-castle-bc`, which defined by `bouncy-castle/bc/pom.xml` contains the needed non-FIPS jars for Pulsar, and packaged as a jar-in-jar(need to provide `pkg`). - -```xml - - - org.bouncycastle - bcpkix-jdk15on - ${bouncycastle.version} - - - - org.bouncycastle - bcprov-ext-jdk15on - ${bouncycastle.version} - - -``` - -By using this `bouncy-castle-bc` module, you can easily include and exclude BouncyCastle non-FIPS jars. - -### Modules that include BC-non-FIPS module (`bouncy-castle-bc`) - -For Pulsar client, user need the bouncy-castle module, so `pulsar-client-original` will include the `bouncy-castle-bc` module, and have `pkg` set to reference the `jar-in-jar` package. -It is included as following example: - -```xml - - - org.apache.pulsar - bouncy-castle-bc - ${pulsar.version} - pkg - - -``` - -By default `bouncy-castle-bc` already included in `pulsar-client-original`, And `pulsar-client-original` has been included in a lot of other modules like `pulsar-client-admin`, `pulsar-broker`. -But for the above shaded jar and signatures reason, we should not package Pulsar's `bouncy-castle` module into `pulsar-client-all` other shaded modules directly, such as `pulsar-client-shaded`, `pulsar-client-admin-shaded` and `pulsar-broker-shaded`. -So in the shaded modules, we will exclude the `bouncy-castle` modules. - -```xml - - - - org.apache.pulsar:pulsar-client-original - - ** - - - org/bouncycastle/** - - - - -``` - -That means, `bouncy-castle` related jars are not shaded in these fat jars. - -### Module BC-FIPS (`bouncy-castle-bcfips`) - -Pulsar module `bouncy-castle-bcfips`, which defined by `bouncy-castle/bcfips/pom.xml` contains the needed FIPS jars for Pulsar. 
Similar to `bouncy-castle-bc`, `bouncy-castle-bcfips` is also packaged as a jar-in-jar package for easy inclusion and exclusion.

```xml

<dependency>
  <groupId>org.bouncycastle</groupId>
  <artifactId>bc-fips</artifactId>
  <version>${bouncycastlefips.version}</version>
</dependency>

<dependency>
  <groupId>org.bouncycastle</groupId>
  <artifactId>bcpkix-fips</artifactId>
  <version>${bouncycastlefips.version}</version>
</dependency>

```

### Exclude BC-non-FIPS and include BC-FIPS

If you want to switch from the BC-non-FIPS to the BC-FIPS version, here is an example for the `pulsar-broker` module:

```xml

<dependency>
  <groupId>org.apache.pulsar</groupId>
  <artifactId>pulsar-broker</artifactId>
  <version>${pulsar.version}</version>
  <exclusions>
    <exclusion>
      <groupId>org.apache.pulsar</groupId>
      <artifactId>bouncy-castle-bc</artifactId>
    </exclusion>
  </exclusions>
</dependency>

<dependency>
  <groupId>org.apache.pulsar</groupId>
  <artifactId>bouncy-castle-bcfips</artifactId>
  <version>${pulsar.version}</version>
  <classifier>pkg</classifier>
</dependency>

```

For more examples, see the `bcfips-include-test` module.

diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/security-encryption.md b/site2/website/versioned_docs/version-2.10.1-deprecated/security-encryption.md
deleted file mode 100644
index 10e6285b990a78..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/security-encryption.md
+++ /dev/null
@@ -1,335 +0,0 @@
---
id: security-encryption
title: Pulsar Encryption
sidebar_label: "End-to-End Encryption"
original_id: security-encryption
---

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````


Applications can use Pulsar encryption to encrypt messages on the producer side and decrypt messages on the consumer side. You can use the public and private key pair that the application configures to perform encryption. Only the consumers with a valid key can decrypt the encrypted messages.

## Asymmetric and symmetric encryption

Pulsar uses a dynamically generated symmetric AES key to encrypt messages (data). You can use the application-provided ECDSA (Elliptic Curve Digital Signature Algorithm) or RSA (Rivest–Shamir–Adleman) key pair to encrypt the AES key (the data key), so you do not have to share the secret with everyone.

A key is a public/private key pair used for encryption and decryption: the producer key is the public key of the pair, and the consumer key is the private key of the pair.

The application configures the producer with the public key, which is used to encrypt the AES data key. The encrypted data key is sent as part of the message header. Only entities with the private key (in this case the consumer) are able to decrypt the data key, which is then used to decrypt the message.

You can encrypt a message with more than one key. Any one of the keys used for encrypting the message is sufficient to decrypt it.

Pulsar does not store the encryption key anywhere in the Pulsar service. If you lose or delete the private key, your message is irretrievably lost.

## Producer
![alt text](/assets/pulsar-encryption-producer.jpg "Pulsar Encryption Producer")

## Consumer
![alt text](/assets/pulsar-encryption-consumer.jpg "Pulsar Encryption Consumer")

## Get started

1. Create your ECDSA or RSA public and private key pair by using the following commands.
   * ECDSA (for Java clients only)

   ```shell

   openssl ecparam -name secp521r1 -genkey -param_enc explicit -out test_ecdsa_privkey.pem
   openssl ec -in test_ecdsa_privkey.pem -pubout -outform pem -out test_ecdsa_pubkey.pem

   ```

   * RSA (for C++, Python and Node.js clients)

   ```shell

   openssl genrsa -out test_rsa_privkey.pem 2048
   openssl rsa -in test_rsa_privkey.pem -pubout -outform pkcs8 -out test_rsa_pubkey.pem

   ```

2. Add the public and private key to your key management system, and configure your producer clients to retrieve public keys and your consumer clients to retrieve private keys.

3. Implement the `CryptoKeyReader` interface, specifically `CryptoKeyReader.getPublicKey()` for the producer and `CryptoKeyReader.getPrivateKey()` for the consumer, which the Pulsar client invokes to load the key.

4. Add the encryption key name to the producer builder: `PulsarClient.newProducer().addEncryptionKey("myapp.key")`.

5. Configure a `CryptoKeyReader` on a producer, consumer or reader.

````mdx-code-block

```java

PulsarClient pulsarClient = PulsarClient.builder().serviceUrl("pulsar://localhost:6650").build();
String topic = "persistent://my-tenant/my-ns/my-topic";
// RawFileKeyReader is just an example implementation that's not provided by Pulsar
CryptoKeyReader keyReader = new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem");

Producer producer = pulsarClient.newProducer()
        .topic(topic)
        .cryptoKeyReader(keyReader)
        .addEncryptionKey("myappkey")
        .create();

Consumer consumer = pulsarClient.newConsumer()
        .topic(topic)
        .subscriptionName("my-subscriber-name")
        .cryptoKeyReader(keyReader)
        .subscribe();

Reader reader = pulsarClient.newReader()
        .topic(topic)
        .startMessageId(MessageId.earliest)
        .cryptoKeyReader(keyReader)
        .create();

```

```c++

Client client("pulsar://localhost:6650");
std::string topic = "persistent://my-tenant/my-ns/my-topic";
// DefaultCryptoKeyReader is a built-in implementation that reads public key and private key from files
auto keyReader = std::make_shared<DefaultCryptoKeyReader>("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem");

Producer producer;
ProducerConfiguration producerConf;
producerConf.setCryptoKeyReader(keyReader);
producerConf.addEncryptionKey("myappkey");
client.createProducer(topic, producerConf, producer);

Consumer consumer;
ConsumerConfiguration consumerConf;
consumerConf.setCryptoKeyReader(keyReader);
client.subscribe(topic, "my-subscriber-name", consumerConf, consumer);

Reader reader;
ReaderConfiguration readerConf;
readerConf.setCryptoKeyReader(keyReader);
client.createReader(topic, MessageId::earliest(), readerConf, reader);

```

```python

from pulsar import Client, CryptoKeyReader, MessageId

client = Client('pulsar://localhost:6650')
topic = 'persistent://my-tenant/my-ns/my-topic'
# CryptoKeyReader is a built-in implementation that reads public key and private key from files
key_reader = CryptoKeyReader('test_ecdsa_pubkey.pem', 'test_ecdsa_privkey.pem')

producer = client.create_producer(
    topic=topic,
    encryption_key='myappkey',
    crypto_key_reader=key_reader
)

consumer = client.subscribe(
    topic=topic,
    subscription_name='my-subscriber-name',
    crypto_key_reader=key_reader
)

reader = client.create_reader(
    topic=topic,
    start_message_id=MessageId.earliest,
    crypto_key_reader=key_reader
)

client.close()

```

```nodejs

const Pulsar = require('pulsar-client');

(async () => {
// Create a client
const client = new Pulsar.Client({
  serviceUrl: 'pulsar://localhost:6650',
  operationTimeoutSeconds: 30,
});

// Create a producer
const producer = await client.createProducer({
  topic: 'persistent://public/default/my-topic',
  sendTimeoutMs: 30000,
  batchingEnabled: true,
  publicKeyPath: "public-key.client-rsa.pem",
  encryptionKey: "encryption-key"
});

// Create a consumer
const consumer = await client.subscribe({
  topic: 'persistent://public/default/my-topic',
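  // The RSA private key given via privateKeyPath below must match the public key
  // the producer encrypts with; received messages are then decrypted transparently.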
  subscription: 'sub1',
  subscriptionType: 'Shared',
  ackTimeoutMs: 10000,
  privateKeyPath: "private-key.client-rsa.pem"
});

// Send messages
for (let i = 0; i < 10; i += 1) {
  const msg = `my-message-${i}`;
  producer.send({
    data: Buffer.from(msg),
  });
  console.log(`Sent message: ${msg}`);
}
await producer.flush();

// Receive messages
for (let i = 0; i < 10; i += 1) {
  const msg = await consumer.receive();
  console.log(msg.getData().toString());
  consumer.acknowledge(msg);
}

await consumer.close();
await producer.close();
await client.close();
})();

```

````

6. Below is an example of a **customized** `CryptoKeyReader` implementation.

````mdx-code-block

```java

class RawFileKeyReader implements CryptoKeyReader {

    String publicKeyFile = "";
    String privateKeyFile = "";

    RawFileKeyReader(String pubKeyFile, String privKeyFile) {
        publicKeyFile = pubKeyFile;
        privateKeyFile = privKeyFile;
    }

    @Override
    public EncryptionKeyInfo getPublicKey(String keyName, Map<String, String> keyMeta) {
        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
        try {
            keyInfo.setKey(Files.readAllBytes(Paths.get(publicKeyFile)));
        } catch (IOException e) {
            System.out.println("ERROR: Failed to read public key from file " + publicKeyFile);
            e.printStackTrace();
        }
        return keyInfo;
    }

    @Override
    public EncryptionKeyInfo getPrivateKey(String keyName, Map<String, String> keyMeta) {
        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
        try {
            keyInfo.setKey(Files.readAllBytes(Paths.get(privateKeyFile)));
        } catch (IOException e) {
            System.out.println("ERROR: Failed to read private key from file " + privateKeyFile);
            e.printStackTrace();
        }
        return keyInfo;
    }
}

```

```c++

class CustomCryptoKeyReader : public CryptoKeyReader {
   public:
    Result getPublicKey(const std::string& keyName, std::map<std::string, std::string>& metadata,
                        EncryptionKeyInfo& encKeyInfo) const override {
        // TODO:
        return ResultOk;
    }

    Result getPrivateKey(const std::string& keyName, std::map<std::string, std::string>& metadata,
                         EncryptionKeyInfo& encKeyInfo) const override {
        // TODO:
        return ResultOk;
    }
};

auto keyReader = std::make_shared<CustomCryptoKeyReader>(/* ... */);
// TODO: create producer, consumer or reader based on keyReader here

```

Besides, you can use the **default** implementation of `CryptoKeyReader` by specifying the paths of the private key and the public key.

Currently, a **customized** `CryptoKeyReader` implementation is not supported in Python. However, you can use the **default** implementation by specifying the paths of the private key and public key.

Currently, a **customized** `CryptoKeyReader` implementation is not supported in Node.js. However, you can use the **default** implementation by specifying the paths of the private key and public key.

````

## Key rotation
Pulsar generates a new AES data key every 4 hours or after publishing a certain number of messages. A producer fetches the asymmetric public key every 4 hours by calling `CryptoKeyReader.getPublicKey()` to retrieve the latest version.

## Enable encryption at the producer application
If you produce messages that are consumed across application boundaries, you need to ensure that consumers in other applications have access to one of the private keys that can decrypt the messages. You can do this in two ways:
1. The consumer application provides you access to its public key, which you add to your producer keys.
2. You grant access to one of the private keys from the pairs that the producer uses.

When producers want to encrypt messages with multiple keys, they add all such keys to the config. A consumer can decrypt the message as long as it has access to at least one of the keys.

If you need to encrypt the messages using 2 keys (`myapp.messagekey1` and `myapp.messagekey2`), refer to the following example.

```java

PulsarClient.newProducer().addEncryptionKey("myapp.messagekey1").addEncryptionKey("myapp.messagekey2");

```

## Decrypt encrypted messages at the consumer application
Consumers need access to one of the private keys to decrypt messages that the producer produces. If you want to receive encrypted messages, create a public and private key pair and give your public key to the producer application, which encrypts messages using your public key.

## Handle failures
* Producer/Consumer loses access to the key
  * The producer action fails and indicates the cause of the failure. The application has the option to proceed with sending unencrypted messages in such cases. Call `PulsarClient.newProducer().cryptoFailureAction(ProducerCryptoFailureAction)` to control the producer behavior. The default behavior is to fail the request.
  * If consumption fails due to a decryption failure or missing keys in the consumer, the application has the option to consume the encrypted message or discard it. Call `PulsarClient.newConsumer().cryptoFailureAction(ConsumerCryptoFailureAction)` to control the consumer behavior. The default behavior is to fail the request. The application is never able to decrypt the messages if the private key is permanently lost.
* Batch messaging
  * If decryption fails and the message contains batch messages, the client is not able to retrieve the individual messages in the batch, so message consumption fails even if `cryptoFailureAction()` is set to `ConsumerCryptoFailureAction.CONSUME`.
* If decryption fails, message consumption stops and the application notices backlog growth in addition to decryption failure messages in the client log. If the application does not have access to the private key to decrypt the message, the only option is to skip or discard the backlogged messages.

diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/security-extending.md b/site2/website/versioned_docs/version-2.10.1-deprecated/security-extending.md
deleted file mode 100644
index 9c641623f83480..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/security-extending.md
+++ /dev/null
@@ -1,83 +0,0 @@
---
id: security-extending
title: Extend Authentication and Authorization in Pulsar
sidebar_label: "Extend Authentication and Authorization"
original_id: security-extending
---

Pulsar provides a way to use custom authentication and authorization mechanisms.

## Authentication

You can use a custom authentication mechanism by providing the implementation in the form of two plugins.
* Client authentication plugin
* Proxy/Broker authentication plugin

### Client authentication plugin

For the client library, you need to implement `org.apache.pulsar.client.api.Authentication`. You can then pass an instance of this class when you create a Pulsar client:
- -```java - -PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://localhost:6650") - .authentication(new MyAuthentication()) - .build(); - -``` - -You can implement 2 interfaces on the client side: - * [`Authentication`](/api/client/org/apache/pulsar/client/api/Authentication.html) - * [`AuthenticationDataProvider`](/api/client/org/apache/pulsar/client/api/AuthenticationDataProvider.html) - -This in turn requires you to provide the client credentials in the form of `org.apache.pulsar.client.api.AuthenticationDataProvider` and also leaves the chance to return different kinds of authentication token for different types of connection or by passing a certificate chain to use for TLS. - -You can find the following examples for different client authentication plugins: - * [Mutual TLS](https://github.com/apache/pulsar/blob/master/pulsar-client/src/main/java/org/apache/pulsar/client/impl/auth/AuthenticationTls.java) - * [Athenz](https://github.com/apache/pulsar/blob/master/pulsar-client-auth-athenz/src/main/java/org/apache/pulsar/client/impl/auth/AuthenticationAthenz.java) - * [Kerberos](https://github.com/apache/pulsar/blob/master/pulsar-client-auth-sasl/src/main/java/org/apache/pulsar/client/impl/auth/AuthenticationSasl.java) - * [JSON Web Token (JWT)](https://github.com/apache/pulsar/blob/master/pulsar-client/src/main/java/org/apache/pulsar/client/impl/auth/AuthenticationToken.java) - * [OAuth 2.0](https://github.com/apache/pulsar/blob/master/pulsar-client/src/main/java/org/apache/pulsar/client/impl/auth/oauth2/AuthenticationOAuth2.java) - * [Basic auth](https://github.com/apache/pulsar/blob/master/pulsar-client/src/main/java/org/apache/pulsar/client/impl/auth/AuthenticationBasic.java) - -### Proxy/Broker authentication plugin - -On the proxy/broker side, you need to configure the corresponding plugin to validate the credentials that the client sends. The proxy and broker can support multiple authentication providers at the same time. - -In `conf/broker.conf`, you can choose to specify a list of valid providers: - -```properties - -# Authentication provider name list, which is comma separated list of class names -authenticationProviders= - -``` - -For the implementation of the `org.apache.pulsar.broker.authentication.AuthenticationProvider` interface, refer to [here](https://github.com/apache/pulsar/blob/master/pulsar-broker-common/src/main/java/org/apache/pulsar/broker/authentication/AuthenticationProvider.java). 
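
For orientation, the following is a minimal sketch of such a provider. It is a hypothetical shared-secret check, not one of Pulsar's built-in providers; the `sharedSecret` property name, the `shared-secret` method name, and the returned role are illustrative assumptions:

```java

import java.io.IOException;
import javax.naming.AuthenticationException;
import org.apache.pulsar.broker.ServiceConfiguration;
import org.apache.pulsar.broker.authentication.AuthenticationDataSource;
import org.apache.pulsar.broker.authentication.AuthenticationProvider;

public class SharedSecretAuthenticationProvider implements AuthenticationProvider {

    private String sharedSecret;

    @Override
    public void initialize(ServiceConfiguration config) throws IOException {
        // Read the expected secret from a custom broker property (hypothetical key).
        sharedSecret = config.getProperties().getProperty("sharedSecret");
    }

    @Override
    public String getAuthMethodName() {
        // Must match the auth method name reported by the client-side plugin.
        return "shared-secret";
    }

    @Override
    public String authenticate(AuthenticationDataSource authData) throws AuthenticationException {
        if (authData.hasDataFromCommand() && authData.getCommandData().equals(sharedSecret)) {
            // The returned string is the "role" used later for authorization.
            return "shared-secret-user";
        }
        throw new AuthenticationException("Invalid shared secret");
    }

    @Override
    public void close() throws IOException {
        // No resources to release in this sketch.
    }
}

```

The string returned by `authenticate` is the role that the authorization layer sees, so a real provider would typically derive it from the presented credentials rather than hard-coding it.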

You can find the following examples for different broker authentication plugins:

 * [Mutual TLS](https://github.com/apache/pulsar/blob/master/pulsar-broker-common/src/main/java/org/apache/pulsar/broker/authentication/AuthenticationProviderTls.java)
 * [Athenz](https://github.com/apache/pulsar/blob/master/pulsar-broker-auth-athenz/src/main/java/org/apache/pulsar/broker/authentication/AuthenticationProviderAthenz.java)
 * [Kerberos](https://github.com/apache/pulsar/blob/master/pulsar-broker-auth-sasl/src/main/java/org/apache/pulsar/broker/authentication/AuthenticationProviderSasl.java)
 * [JSON Web Token (JWT)](https://github.com/apache/pulsar/blob/master/pulsar-broker-common/src/main/java/org/apache/pulsar/broker/authentication/AuthenticationProviderToken.java)
 * [Basic auth](https://github.com/apache/pulsar/blob/master/pulsar-broker-common/src/main/java/org/apache/pulsar/broker/authentication/AuthenticationProviderBasic.java)

## Authorization

Authorization is the operation that checks whether a particular "role" or "principal" has permission to perform a certain operation.

By default, you can use the embedded authorization provider provided by Pulsar. You can also configure a different authorization provider through a plugin. Note that although the Authentication plugin is designed for use in both the proxy and the broker, the Authorization plugin is designed only for use on the broker.

### Broker authorization plugin

To provide a custom authorization provider, you need to implement the `org.apache.pulsar.broker.authorization.AuthorizationProvider` interface, put this class on the Pulsar broker classpath, and configure the class in `conf/broker.conf`:

 ```properties

 # Authorization provider fully qualified class-name
 authorizationProvider=org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider

 ```

For the implementation of the `org.apache.pulsar.broker.authorization.AuthorizationProvider` interface, refer to [here](https://github.com/apache/pulsar/blob/master/pulsar-broker-common/src/main/java/org/apache/pulsar/broker/authorization/AuthorizationProvider.java).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/security-jwt.md b/site2/website/versioned_docs/version-2.10.1-deprecated/security-jwt.md
deleted file mode 100644
index d5cbf1553b92e8..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/security-jwt.md
+++ /dev/null
@@ -1,331 +0,0 @@
---
id: security-jwt
title: Client authentication using tokens based on JSON Web Tokens
sidebar_label: "Authentication using JWT"
original_id: security-jwt
---

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````


## Token authentication overview

Pulsar supports authenticating clients using security tokens that are based on [JSON Web Tokens](https://jwt.io/introduction/) ([RFC-7519](https://tools.ietf.org/html/rfc7519)).

You can use tokens to identify a Pulsar client and associate it with some "principal" (or "role") that
is permitted to do some actions (e.g. publish to a topic or consume from a topic).

A user typically gets a token string from the administrator (or some automated service).

The compact representation of a signed JWT is a string that looks like the following:

```

eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY

```

The application specifies the token when you create the client instance.
An alternative is to pass a "token supplier" (a function that returns the token when the client library needs one).

> #### Always use TLS transport encryption
> Sending a token is equivalent to sending a password over the wire. Always use TLS encryption when you connect to the Pulsar service. See
> [Transport Encryption using TLS](security-tls-transport.md) for more details.

### CLI Tools

[Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-pulsar-admin.md), [`pulsar-perf`](reference-cli-tools.md#pulsar-perf), and [`pulsar-client`](reference-cli-tools.md#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation.

You need to add the following parameters to that file to use the token authentication with CLI tools of Pulsar:

```properties

webServiceUrl=http://broker.example.com:8080/
brokerServiceUrl=pulsar://broker.example.com:6650/
authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken
authParams=token:eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY

```

The token string can also be read from a file, for example:

```

authParams=file:///path/to/token/file

```

### Pulsar client

You can use tokens to authenticate the following Pulsar clients.

````mdx-code-block

```java

PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar://broker.example.com:6650/")
    .authentication(
        AuthenticationFactory.token("eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY"))
    .build();

```

Similarly, you can also pass a `Supplier`:

```java

PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar://broker.example.com:6650/")
    .authentication(
        AuthenticationFactory.token(() -> {
            // Read token from custom source
            return readToken();
        }))
    .build();

```

```python

from pulsar import Client, AuthenticationToken

client = Client('pulsar://broker.example.com:6650/',
                authentication=AuthenticationToken('eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY'))

```

Alternatively, you can also pass a `Supplier`:

```python

def read_token():
    with open('/path/to/token.txt') as tf:
        return tf.read().strip()

client = Client('pulsar://broker.example.com:6650/',
                authentication=AuthenticationToken(read_token))

```

```go

client, err := pulsar.NewClient(pulsar.ClientOptions{
    URL:            "pulsar://localhost:6650",
    Authentication: pulsar.NewAuthenticationToken("eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY"),
})

```

Similarly, you can also pass a `Supplier`:

```go

client, err := pulsar.NewClient(pulsar.ClientOptions{
    URL: "pulsar://localhost:6650",
    Authentication: pulsar.NewAuthenticationTokenSupplier(func() string {
        // Read token from custom source
        return readToken()
    }),
})

```

```c++

#include <pulsar/Client.h>

pulsar::ClientConfiguration config;
config.setAuth(pulsar::AuthToken::createWithToken("eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY"));

pulsar::Client client("pulsar://broker.example.com:6650/", config);

```

```c#

var client = PulsarClient.Builder()
    .AuthenticateUsingToken("eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY")
    .Build();

```

````

## Enable token authentication

The guide below describes how to enable token authentication on a Pulsar cluster.
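
As background for the key and token commands that follow, Pulsar's token support is based on the Java JWT library (jjwt). The following is a minimal, illustrative sketch (not part of Pulsar itself) of minting and then validating a symmetric-key token with jjwt; the subject `test-user` mirrors the CLI examples later in this guide:

```java

import javax.crypto.SecretKey;
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.SignatureAlgorithm;
import io.jsonwebtoken.security.Keys;

public class JwtSketch {
    public static void main(String[] args) {
        // Generate a random HS256 secret key, analogous to `pulsar tokens create-secret-key`.
        SecretKey key = Keys.secretKeyFor(SignatureAlgorithm.HS256);

        // Mint a token whose subject is the Pulsar "role",
        // analogous to `pulsar tokens create --subject test-user`.
        String token = Jwts.builder().setSubject("test-user").signWith(key).compact();

        // Validate the token and recover the subject, roughly what the broker does.
        String subject = Jwts.parserBuilder()
                .setSigningKey(key)
                .build()
                .parseClaimsJws(token)
                .getBody()
                .getSubject();

        System.out.println(subject); // prints: test-user
    }
}

```

The subject set here is the value that Pulsar later treats as the client's role for authorization.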
- -JWT supports two different kinds of keys in order to generate and validate the tokens: - - * Symmetric : - - You can use a single ***Secret*** key to generate and validate tokens. - * Asymmetric: A pair of keys consists of the Private key and the Public key. - - You can use ***Private*** key to generate tokens. - - You can use ***Public*** key to validate tokens. - -### Create a secret key - -When you use a secret key, the administrator creates the key and uses the key to generate the client tokens. You can also configure this key to brokers in order to validate the clients. - -The output file is generated in the root of your Pulsar installation directory. You can also provide an absolute path for the output file using the command below. - -```shell - -$ bin/pulsar tokens create-secret-key --output my-secret.key - -``` - -Enter this command to generate a base64 encoded private key. - -```shell - -$ bin/pulsar tokens create-secret-key --output /opt/my-secret.key --base64 - -``` - -### Create a key pair - -With Public and Private keys, you need to create a pair of keys. Pulsar supports all algorithms that the Java JWT library (shown [here](https://github.com/jwtk/jjwt#signature-algorithms-keys)) supports. - -The output file is generated in the root of your Pulsar installation directory. You can also provide an absolute path for the output file using the command below. - -```shell - -$ bin/pulsar tokens create-key-pair --output-private-key my-private.key --output-public-key my-public.key - -``` - - * Store `my-private.key` in a safe location and only administrator can use `my-private.key` to generate new tokens. - * `my-public.key` is distributed to all Pulsar brokers. You can publicly share this file without any security concern. - -### Generate tokens - -A token is a credential associated with a user. The association is done through the "principal" or "role". In the case of JWT tokens, this field is typically referred as **subject**, though they are exactly the same concept. - -Then, you need to use this command to require the generated token to have a **subject** field set. - -```shell - -$ bin/pulsar tokens create --secret-key file:///path/to/my-secret.key \ - --subject test-user - -``` - -This command prints the token string on stdout. - -Similarly, you can create a token by passing the "private" key using the command below: - -```shell - -$ bin/pulsar tokens create --private-key file:///path/to/my-private.key \ - --subject test-user - -``` - -Finally, you can enter the following command to create a token with a pre-defined TTL. And then the token is automatically invalidated. - -```shell - -$ bin/pulsar tokens create --secret-key file:///path/to/my-secret.key \ - --subject test-user \ - --expiry-time 1y - -``` - -### Authorization - -The token itself does not have any permission associated. The authorization engine determines whether the token should have permissions or not. Once you have created the token, you can grant permission for this token to do certain actions. The following is an example. 
- -```shell - -$ bin/pulsar-admin namespaces grant-permission my-tenant/my-namespace \ - --role test-user \ - --actions produce,consume - -``` - -### Enable token authentication on Brokers - -To configure brokers to authenticate clients, add the following parameters to `broker.conf`: - -```properties - -# Configuration to enable authentication and authorization -authenticationEnabled=true -authorizationEnabled=true -authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken - -# Authentication settings of the broker itself. Used when the broker connects to other brokers, either in same or other clusters -brokerClientTlsEnabled=true -brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken -brokerClientAuthenticationParameters={"token":"eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0LXVzZXIifQ.9OHgE9ZUDeBTZs7nSMEFIuGNEX18FLR3qvy8mqxSxXw"} -# Either configure the token string or specify to read it from a file. The following three available formats are all valid: -# brokerClientAuthenticationParameters={"token":"your-token-string"} -# brokerClientAuthenticationParameters=token:your-token-string -# brokerClientAuthenticationParameters=file:///path/to/token -brokerClientTrustCertsFilePath=/path/my-ca/certs/ca.cert.pem - -# If this flag is set then the broker authenticates the original Auth data -# else it just accepts the originalPrincipal and authorizes it (if required). -authenticateOriginalAuthData=true - -# If using secret key (Note: key files must be DER-encoded) -tokenSecretKey=file:///path/to/secret.key -# The key can also be passed inline: -# tokenSecretKey=data:;base64,FLFyW0oLJ2Fi22KKCm21J18mbAdztfSHN/lAT5ucEKU= - -# If using public/private (Note: key files must be DER-encoded) -# tokenPublicKey=file:///path/to/public.key - -``` - -### Enable token authentication on Proxies - -To configure proxies to authenticate clients, add the following parameters to `proxy.conf`: - -The proxy uses its own token when connecting to brokers. You need to configure the role token for this key pair in the `proxyRoles` of the brokers. For more details, see the [authorization guide](security-authorization.md). - -```properties - -# For clients connecting to the proxy -authenticationEnabled=true -authorizationEnabled=true -authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken -tokenSecretKey=file:///path/to/secret.key - -# For the proxy to connect to brokers -brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken -brokerClientAuthenticationParameters={"token":"eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0LXVzZXIifQ.9OHgE9ZUDeBTZs7nSMEFIuGNEX18FLR3qvy8mqxSxXw"} -# Either configure the token string or specify to read it from a file. The following three available formats are all valid: -# brokerClientAuthenticationParameters={"token":"your-token-string"} -# brokerClientAuthenticationParameters=token:your-token-string -# brokerClientAuthenticationParameters=file:///path/to/token - -# Whether client authorization credentials are forwarded to the broker for re-authorization. -# Authentication must be enabled via authenticationEnabled=true for this to take effect. 
-forwardAuthorizationCredentials=true - -``` - diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/security-kerberos.md b/site2/website/versioned_docs/version-2.10.1-deprecated/security-kerberos.md deleted file mode 100644 index c49fa3bea1fce0..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/security-kerberos.md +++ /dev/null @@ -1,443 +0,0 @@ ---- -id: security-kerberos -title: Authentication using Kerberos -sidebar_label: "Authentication using Kerberos" -original_id: security-kerberos ---- - -[Kerberos](https://web.mit.edu/kerberos/) is a network authentication protocol. By using secret-key cryptography, [Kerberos](https://web.mit.edu/kerberos/) is designed to provide strong authentication for client applications and server applications. - -In Pulsar, you can use Kerberos with [SASL](https://en.wikipedia.org/wiki/Simple_Authentication_and_Security_Layer) as a choice for authentication. And Pulsar uses the [Java Authentication and Authorization Service (JAAS)](https://en.wikipedia.org/wiki/Java_Authentication_and_Authorization_Service) for SASL configuration. You need to provide JAAS configurations for Kerberos authentication. - -This document introduces how to configure `Kerberos` with `SASL` between Pulsar clients and brokers and how to configure Kerberos for Pulsar proxy in detail. - -## Configuration for Kerberos between Client and Broker - -### Prerequisites - -To begin, you need to set up (or already have) a [Key Distribution Center(KDC)](https://en.wikipedia.org/wiki/Key_distribution_center). Also you need to configure and run the [Key Distribution Center(KDC)](https://en.wikipedia.org/wiki/Key_distribution_center)in advance. - -If your organization already uses a Kerberos server (for example, by using `Active Directory`), you do not have to install a new server for Pulsar. If your organization does not use a Kerberos server, you need to install one. Your Linux vendor might have packages for `Kerberos`. On how to install and configure Kerberos, refer to [Ubuntu](https://help.ubuntu.com/community/Kerberos), -[Redhat](https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Managing_Smart_Cards/installing-kerberos.html). - -Note that if you use Oracle Java, you need to download JCE policy files for your Java version and copy them to the `$JAVA_HOME/jre/lib/security` directory. - -#### Kerberos principals - -If you use the existing Kerberos system, ask your Kerberos administrator for a principal for each Brokers in your cluster and for every operating system user that accesses Pulsar with Kerberos authentication(via clients and tools). - -If you have installed your own Kerberos system, you can create these principals with the following commands: - -```shell - -### add Principals for broker -sudo /usr/sbin/kadmin.local -q 'addprinc -randkey broker/{hostname}@{REALM}' -sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{broker-keytabname}.keytab broker/{hostname}@{REALM}" -### add Principals for client -sudo /usr/sbin/kadmin.local -q 'addprinc -randkey client/{hostname}@{REALM}' -sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{client-keytabname}.keytab client/{hostname}@{REALM}" - -``` - -Note that *Kerberos* requires that all your hosts can be resolved with their FQDNs. - -The first part of Broker principal (for example, `broker` in `broker/{hostname}@{REALM}`) is the `serverType` of each host. 
The suggested values of `serverType` are `broker` (host machine runs service Pulsar Broker) and `proxy` (host machine runs service Pulsar Proxy). - -#### Configure how to connect to KDC - -You need to enter the command below to specify the path to the `krb5.conf` file for the client side and the broker side. The content of `krb5.conf` file indicates the default Realm and KDC information. See [JDK’s Kerberos Requirements](https://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/KerberosReq.html) for more details. - -```shell - --Djava.security.krb5.conf=/etc/pulsar/krb5.conf - -``` - -Here is an example of the krb5.conf file: - -In the configuration file, `EXAMPLE.COM` is the default realm; `kdc = localhost:62037` is the kdc server url for realm `EXAMPLE.COM `: - -``` - -[libdefaults] - default_realm = EXAMPLE.COM - -[realms] - EXAMPLE.COM = { - kdc = localhost:62037 - } - -``` - -Usually machines configured with kerberos already have a system wide configuration and this configuration is optional. - -#### JAAS configuration file - -You need JAAS configuration file for the client side and the broker side. JAAS configuration file provides the section of information that is used to connect KDC. Here is an example named `pulsar_jaas.conf`: - -``` - - PulsarBroker { - com.sun.security.auth.module.Krb5LoginModule required - useKeyTab=true - storeKey=true - useTicketCache=false - keyTab="/etc/security/keytabs/pulsarbroker.keytab" - principal="broker/localhost@EXAMPLE.COM"; -}; - - PulsarClient { - com.sun.security.auth.module.Krb5LoginModule required - useKeyTab=true - storeKey=true - useTicketCache=false - keyTab="/etc/security/keytabs/pulsarclient.keytab" - principal="client/localhost@EXAMPLE.COM"; -}; - -``` - -You need to set the `JAAS` configuration file path as JVM parameter for client and broker. For example: - -```shell - - -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf - -``` - -In the `pulsar_jaas.conf` file above - -1. `PulsarBroker` is a section name in the JAAS file that each broker uses. This section tells the broker to use which principal inside Kerberos and the location of the keytab where the principal is stored. `PulsarBroker` allows the broker to use the keytab specified in this section. -2. `PulsarClient` is a section name in the JASS file that each broker uses. This section tells the client to use which principal inside Kerberos and the location of the keytab where the principal is stored. `PulsarClient` allows the client to use the keytab specified in this section. - The following example also reuses this `PulsarClient` section in both the Pulsar internal admin configuration and in CLI command of `bin/pulsar-client`, `bin/pulsar-perf` and `bin/pulsar-admin`. You can also add different sections for different use cases. - -You can have 2 separate JAAS configuration files: -* the file for a broker that has sections of both `PulsarBroker` and `PulsarClient`; -* the file for a client that only has a `PulsarClient` section. - - -### Kerberos configuration for Brokers - -#### Configure the `broker.conf` file - - In the `broker.conf` file, set Kerberos related configurations. 

 - Set `authenticationEnabled` to `true`;
 - Set `authenticationProviders` to choose `AuthenticationProviderSasl`;
 - Set `saslJaasClientAllowedIds` to a regex of the principals that are allowed to connect to the broker;
 - Set `saslJaasBrokerSectionName` to the section in the JAAS configuration file that the broker uses;

 To make the Pulsar internal admin client work properly, you need to set the following configuration in the `broker.conf` file:
 - Set `brokerClientAuthenticationPlugin` to the client plugin `AuthenticationSasl`;
 - Set `brokerClientAuthenticationParameters` to the JSON string `{"saslJaasClientSectionName":"PulsarClient", "serverType":"broker"}`, in which `PulsarClient` is the section name in the `pulsar_jaas.conf` file, and `"serverType":"broker"` indicates that the internal admin client connects to a Pulsar Broker;

 Here is an example:

```

authenticationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderSasl
saslJaasClientAllowedIds=.*client.*
saslJaasBrokerSectionName=PulsarBroker

## Authentication settings of the broker itself. Used when the broker connects to other brokers
brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationSasl
brokerClientAuthenticationParameters={"saslJaasClientSectionName":"PulsarClient", "serverType":"broker"}

```

#### Set Broker JVM parameter

 Set JVM parameters for the JAAS configuration file and krb5 configuration file with additional options.

```shell

   -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf

```

You can add this at the end of `PULSAR_EXTRA_OPTS` in the file [`pulsar_env.sh`](https://github.com/apache/pulsar/blob/master/conf/pulsar_env.sh).

You must ensure that the operating system user who starts the broker can reach the keytabs configured in the `pulsar_jaas.conf` file and the KDC server configured in the `krb5.conf` file.

### Kerberos configuration for clients

#### Java Client and Java Admin Client

In your client application, include `pulsar-client-auth-sasl` in your project dependencies:

```xml

<dependency>
  <groupId>org.apache.pulsar</groupId>
  <artifactId>pulsar-client-auth-sasl</artifactId>
  <version>${pulsar.version}</version>
</dependency>

```

Configure the authentication type to use `AuthenticationSasl`, and also provide the authentication parameters to it.

You need 2 parameters:
- `saslJaasClientSectionName`. This parameter corresponds to the section in the JAAS configuration file that the client uses;
- `serverType`. This parameter indicates whether this client connects to a broker or a proxy, and tells the client which server-side principal should be used.

When you authenticate between client and broker with the settings in the above JAAS configuration file, set `saslJaasClientSectionName` to `PulsarClient` and `serverType` to `broker`.

The following is an example of creating a Java client:

 ```java

 System.setProperty("java.security.auth.login.config", "/etc/pulsar/pulsar_jaas.conf");
 System.setProperty("java.security.krb5.conf", "/etc/pulsar/krb5.conf");

 Map<String, String> authParams = Maps.newHashMap();
 authParams.put("saslJaasClientSectionName", "PulsarClient");
 authParams.put("serverType", "broker");

 Authentication saslAuth = AuthenticationFactory
         .create(org.apache.pulsar.client.impl.auth.AuthenticationSasl.class.getName(), authParams);

 PulsarClient client = PulsarClient.builder()
         .serviceUrl("pulsar://my-broker.com:6650")
         .authentication(saslAuth)
         .build();

 ```

> The first two lines in the example above are hard-coded. Alternatively, you can set additional JVM parameters for the JAAS and krb5 configuration files when you run the application, like below:

```

java -cp -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf $APP-jar-with-dependencies.jar $CLASSNAME

```

You must ensure that the operating system user who starts the Pulsar client can reach the keytabs configured in the `pulsar_jaas.conf` file and the KDC server configured in the `krb5.conf` file.

#### Configure CLI tools

If you use a command-line tool (such as `bin/pulsar-client`, `bin/pulsar-perf` and `bin/pulsar-admin`), you need to perform the following steps:

Step 1. Add the following parameters to your `client.conf`:

```shell

authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationSasl
authParams={"saslJaasClientSectionName":"PulsarClient", "serverType":"broker"}

```

Step 2. Set JVM parameters for the JAAS configuration file and krb5 configuration file with additional options:

```shell

   -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf

```

You can add this at the end of `PULSAR_EXTRA_OPTS` in the file [`pulsar_tools_env.sh`](https://github.com/apache/pulsar/blob/master/conf/pulsar_tools_env.sh),
or add the line `OPTS="$OPTS -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf "` directly to the CLI tool script.

These configurations have the same meaning as the corresponding configurations in the Java client section.

## Kerberos configuration for working with Pulsar Proxy

With the above configuration, the client and broker can authenticate each other using Kerberos.

A client that connects through Pulsar Proxy is a little different: Pulsar Proxy (as a SASL server in Kerberos) authenticates the client (as a SASL client in Kerberos) first, and then the Pulsar broker authenticates Pulsar Proxy.

In comparison with the above configuration between client and broker, the following shows how to configure Pulsar Proxy.

### Create principal for Pulsar Proxy in Kerberos

In comparison with the above configuration, you need to add a new principal for Pulsar Proxy. If you already have principals for client and broker, you only need to add the proxy principal here.
- -```shell - -### add Principals for Pulsar Proxy -sudo /usr/sbin/kadmin.local -q 'addprinc -randkey proxy/{hostname}@{REALM}' -sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{proxy-keytabname}.keytab proxy/{hostname}@{REALM}" -### add Principals for broker -sudo /usr/sbin/kadmin.local -q 'addprinc -randkey broker/{hostname}@{REALM}' -sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{broker-keytabname}.keytab broker/{hostname}@{REALM}" -### add Principals for client -sudo /usr/sbin/kadmin.local -q 'addprinc -randkey client/{hostname}@{REALM}' -sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{client-keytabname}.keytab client/{hostname}@{REALM}" - -``` - -### Add a section in JAAS configuration file for Pulsar Proxy - -In comparison with the above configuration, add a new section for Pulsar Proxy in JAAS configuration file. - -Here is an example named `pulsar_jaas.conf`: - -``` - - PulsarBroker { - com.sun.security.auth.module.Krb5LoginModule required - useKeyTab=true - storeKey=true - useTicketCache=false - keyTab="/etc/security/keytabs/pulsarbroker.keytab" - principal="broker/localhost@EXAMPLE.COM"; -}; - - PulsarProxy { - com.sun.security.auth.module.Krb5LoginModule required - useKeyTab=true - storeKey=true - useTicketCache=false - keyTab="/etc/security/keytabs/pulsarproxy.keytab" - principal="proxy/localhost@EXAMPLE.COM"; -}; - - PulsarClient { - com.sun.security.auth.module.Krb5LoginModule required - useKeyTab=true - storeKey=true - useTicketCache=false - keyTab="/etc/security/keytabs/pulsarclient.keytab" - principal="client/localhost@EXAMPLE.COM"; -}; - -``` - -### Proxy client configuration - -Pulsar client configuration is similar with client and broker configuration, except that you need to set `serverType` to `proxy` instead of `broker`, for the reason that you need to do the Kerberos authentication between client and proxy. - - ```java - - System.setProperty("java.security.auth.login.config", "/etc/pulsar/pulsar_jaas.conf"); - System.setProperty("java.security.krb5.conf", "/etc/pulsar/krb5.conf"); - - Map authParams = Maps.newHashMap(); - authParams.put("saslJaasClientSectionName", "PulsarClient"); - authParams.put("serverType", "proxy"); // ** here is the different ** - - Authentication saslAuth = AuthenticationFactory - .create(org.apache.pulsar.client.impl.auth.AuthenticationSasl.class.getName(), authParams); - - PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://my-broker.com:6650") - .authentication(saslAuth) - .build(); - - ``` - -> The first two lines in the example above are hard coded, alternatively, you can set additional JVM parameters for JAAS and krb5 configuration file when you run the application like below: - -``` - -java -cp -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf $APP-jar-with-dependencies.jar $CLASSNAME - -``` - -### Kerberos configuration for Pulsar proxy service - -In the `proxy.conf` file, set Kerberos related configuration. Here is an example: - -```shell - -## related to authenticate client. 
-authenticationEnabled=true -authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderSasl -saslJaasClientAllowedIds=.*client.* -saslJaasBrokerSectionName=PulsarProxy - -## related to be authenticated by broker -brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationSasl -brokerClientAuthenticationParameters={"saslJaasClientSectionName":"PulsarProxy", "serverType":"broker"} -forwardAuthorizationCredentials=true - -``` - -The first part relates to authenticating between client and Pulsar Proxy. In this phase, client works as SASL client, while Pulsar Proxy works as SASL server. - -The second part relates to authenticating between Pulsar Proxy and Pulsar Broker. In this phase, Pulsar Proxy works as SASL client, while Pulsar Broker works as SASL server. - -### Broker side configuration. - -The broker side configuration file is the same with the above `broker.conf`, you do not need special configuration for Pulsar Proxy. - -``` - -authenticationEnabled=true -authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderSasl -saslJaasClientAllowedIds=.*client.* -saslJaasBrokerSectionName=PulsarBroker - -``` - -## Regarding authorization and role token - -For Kerberos authentication, we usually use the authenticated principal as the role token for Pulsar authorization. For more information of authorization in Pulsar, see [security authorization](security-authorization.md). - -If you enable 'authorizationEnabled', you need to set `superUserRoles` in `broker.conf` that corresponds to the name registered in kdc. - -For example: - -```bash - -superUserRoles=client/{clientIp}@EXAMPLE.COM - -``` - -## Regarding authentication between ZooKeeper and Broker - -Pulsar Broker acts as a Kerberos client when you authenticate with Zookeeper. According to [ZooKeeper document](https://cwiki.apache.org/confluence/display/ZOOKEEPER/Client-Server+mutual+authentication), you need these settings in `conf/zookeeper.conf`: - -``` - -authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider -requireClientAuthScheme=sasl - -``` - -Enter the following commands to add a section of `Client` configurations in the file `pulsar_jaas.conf`, which Pulsar Broker uses: - -``` - - Client { - com.sun.security.auth.module.Krb5LoginModule required - useKeyTab=true - storeKey=true - useTicketCache=false - keyTab="/etc/security/keytabs/pulsarbroker.keytab" - principal="broker/localhost@EXAMPLE.COM"; -}; - -``` - -In this setting, the principal of Pulsar Broker and keyTab file indicates the role of Broker when you authenticate with ZooKeeper. - -## Regarding authentication between BookKeeper and Broker - -Pulsar Broker acts as a Kerberos client when you authenticate with Bookie. According to [BookKeeper document](http://bookkeeper.apache.org/docs/latest/security/sasl/), you need to add `bookkeeperClientAuthenticationPlugin` parameter in `broker.conf`: - -``` - -bookkeeperClientAuthenticationPlugin=org.apache.bookkeeper.sasl.SASLClientProviderFactory - -``` - -In this setting, `SASLClientProviderFactory` creates a BookKeeper SASL client in a Broker, and the Broker uses the created SASL client to authenticate with a Bookie node. 

Enter the following commands to add a section of `BookKeeper` configurations in the `pulsar_jaas.conf` that Pulsar Broker uses:

```

 BookKeeper {
 com.sun.security.auth.module.Krb5LoginModule required
 useKeyTab=true
 storeKey=true
 useTicketCache=false
 keyTab="/etc/security/keytabs/pulsarbroker.keytab"
 principal="broker/localhost@EXAMPLE.COM";
};

```

In this setting, the principal of Pulsar Broker and the keyTab file indicate the role of the Broker when you authenticate with Bookie.
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/security-oauth2.md b/site2/website/versioned_docs/version-2.10.1-deprecated/security-oauth2.md
deleted file mode 100644
index 4ff8bf10b38334..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/security-oauth2.md
+++ /dev/null
@@ -1,282 +0,0 @@
---
id: security-oauth2
title: Client authentication using OAuth 2.0 access tokens
sidebar_label: "Authentication using OAuth 2.0 access tokens"
original_id: security-oauth2
---

Pulsar supports authenticating clients using OAuth 2.0 access tokens. You can use OAuth 2.0 access tokens to identify a Pulsar client and associate the Pulsar client with some "principal" (or "role"), which is permitted to do some actions, such as publishing messages to a topic or consuming messages from a topic.

This module is used to support the [Pulsar client authentication plugin](security-extending.md/#client-authentication-plugin) for OAuth 2.0. After communicating with the OAuth 2.0 server, the Pulsar client gets an `access token` from the OAuth 2.0 server, and passes this `access token` to the Pulsar broker to do the authentication. The broker can use `org.apache.pulsar.broker.authentication.AuthenticationProviderToken`, or you can add your own `AuthenticationProvider` to work with this module.

## Authentication provider configuration

This library allows you to authenticate the Pulsar client by using an access token that is obtained from an OAuth 2.0 authorization service, which acts as a _token issuer_.

### Authentication types

The authentication type determines how to obtain an access token through an OAuth 2.0 authorization flow.

:::note

Currently, the Pulsar Java client only supports the `client_credentials` authentication type.

:::

#### Client credentials

The following table lists parameters supported for the `client credentials` authentication type.

| Parameter | Description | Example | Required or not |
| --- | --- | --- | --- |
| `type` | OAuth 2.0 authentication type. | `client_credentials` (default) | Optional |
| `issuerUrl` | URL of the authentication provider which allows the Pulsar client to obtain an access token | `https://accounts.google.com` | Required |
| `privateKey` | URL to a JSON credentials file | Supports the following pattern formats:<br />• `file:///path/to/file`<br />• `file:/path/to/file`<br />• `data:application/json;base64,<base64-encoded-value>` | Required |
| `audience` | An OAuth 2.0 "resource server" identifier for the Pulsar cluster | `https://broker.example.com` | Optional |
| `scope` | Scope of an access request.<br />
    For more information, see [access token scope](https://datatracker.ietf.org/doc/html/rfc6749#section-3.3). | api://pulsar-cluster-1/.default | Optional | - -The credentials file contains service account credentials used with the client authentication type. The following shows an example of a credentials file `credentials_file.json`. - -```json - -{ - "type": "client_credentials", - "client_id": "d9ZyX97q1ef8Cr81WHVC4hFQ64vSlDK3", - "client_secret": "on1uJ...k6F6R", - "client_email": "1234567890-abcdefghijklmnopqrstuvwxyz@developer.gserviceaccount.com", - "issuer_url": "https://accounts.google.com" -} - -``` - -In the above example, the authentication type is set to `client_credentials` by default. And the fields "client_id" and "client_secret" are required. - -### Typical original OAuth2 request mapping - -The following shows a typical original OAuth2 request, which is used to obtain the access token from the OAuth2 server. - -```bash - -curl --request POST \ - --url https://dev-kt-aa9ne.us.auth0.com/oauth/token \ - --header 'content-type: application/json' \ - --data '{ - "client_id":"Xd23RHsUnvUlP7wchjNYOaIfazgeHd9x", - "client_secret":"rT7ps7WY8uhdVuBTKWZkttwLdQotmdEliaM5rLfmgNibvqziZ-g07ZH52N_poGAb", - "audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/", - "grant_type":"client_credentials"}' - -``` - -In the above example, the mapping relationship is shown as below. - -- The `issuerUrl` parameter in this plugin is mapped to `--url https://dev-kt-aa9ne.us.auth0.com`. -- The `privateKey` file parameter in this plugin should at least contains the `client_id` and `client_secret` fields. -- The `audience` parameter in this plugin is mapped to `"audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/"`. This field is only used by some identity providers. - -## Client Configuration - -You can use the OAuth2 authentication provider with the following Pulsar clients. - -### Java client - -You can use the factory method to configure authentication for Pulsar Java client. - -```java - -import org.apache.pulsar.client.impl.auth.oauth2.AuthenticationFactoryOAuth2; - -URL issuerUrl = new URL("https://dev-kt-aa9ne.us.auth0.com"); -URL credentialsUrl = new URL("file:///path/to/KeyFile.json"); -String audience = "https://dev-kt-aa9ne.us.auth0.com/api/v2/"; - -PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://broker.example.com:6650/") - .authentication( - AuthenticationFactoryOAuth2.clientCredentials(issuerUrl, credentialsUrl, audience)) - .build(); - -``` - -In addition, you can also use the encoded parameters to configure authentication for Pulsar Java client. - -```java - -Authentication auth = AuthenticationFactory - .create(AuthenticationOAuth2.class.getName(), "{"type":"client_credentials","privateKey":"./key/path/..","issuerUrl":"...","audience":"..."}"); -PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://broker.example.com:6650/") - .authentication(auth) - .build(); - -``` - -### C++ client - -The C++ client is similar to the Java client. You need to provide the parameters of `issuerUrl`, `private_key` (the credentials file path), and `audience`. 
- -```c++ - -#include - -pulsar::ClientConfiguration config; -std::string params = R"({ - "issuer_url": "https://dev-kt-aa9ne.us.auth0.com", - "private_key": "../../pulsar-broker/src/test/resources/authentication/token/cpp_credentials_file.json", - "audience": "https://dev-kt-aa9ne.us.auth0.com/api/v2/"})"; - -config.setAuth(pulsar::AuthOauth2::create(params)); - -pulsar::Client client("pulsar://broker.example.com:6650/", config); - -``` - -### Go client - -To enable OAuth2 authentication in Go client, you need to configure OAuth2 authentication. -This example shows how to configure OAuth2 authentication in Go client. - -```go - -oauth := pulsar.NewAuthenticationOAuth2(map[string]string{ - "type": "client_credentials", - "issuerUrl": "https://dev-kt-aa9ne.us.auth0.com", - "audience": "https://dev-kt-aa9ne.us.auth0.com/api/v2/", - "privateKey": "/path/to/privateKey", - "clientId": "0Xx...Yyxeny", - }) -client, err := pulsar.NewClient(pulsar.ClientOptions{ - URL: "pulsar://my-cluster:6650", - Authentication: oauth, -}) - -``` - -### Python client - -To enable OAuth2 authentication in Python client, you need to configure OAuth2 authentication. -This example shows how to configure OAuth2 authentication in Python client. - -```python - -from pulsar import Client, AuthenticationOauth2 - -params = ''' -{ - "issuer_url": "https://dev-kt-aa9ne.us.auth0.com", - "private_key": "/path/to/privateKey", - "audience": "https://dev-kt-aa9ne.us.auth0.com/api/v2/" -} -''' - -client = Client("pulsar://my-cluster:6650", authentication=AuthenticationOauth2(params)) - -``` - -### Node.js client - -To enable OAuth2 authentication in Node.js client, you need to configure OAuth2 authentication. -This example shows how to configure OAuth2 authentication in Node.js client. - -```JavaScript - - const Pulsar = require('pulsar-client'); - const issuer_url = process.env.ISSUER_URL; - const private_key = process.env.PRIVATE_KEY; - const audience = process.env.AUDIENCE; - const scope = process.env.SCOPE; - const service_url = process.env.SERVICE_URL; - const client_id = process.env.CLIENT_ID; - const client_secret = process.env.CLIENT_SECRET; - (async () => { - const params = { - issuer_url: issuer_url - } - if (private_key.length > 0) { - params['private_key'] = private_key - } else { - params['client_id'] = client_id - params['client_secret'] = client_secret - } - if (audience.length > 0) { - params['audience'] = audience - } - if (scope.length > 0) { - params['scope'] = scope - } - const auth = new Pulsar.AuthenticationOauth2(params); - // Create a client - const client = new Pulsar.Client({ - serviceUrl: service_url, - tlsAllowInsecureConnection: true, - authentication: auth, - }); - await client.close(); - })(); - -``` - -:::note - -The support for OAuth2 authentication is only available in Node.js client 1.6.2 and later versions. - -::: - -## CLI configuration - -This section describes how to use Pulsar CLI tools to connect a cluster through OAuth2 authentication plugin. - -### pulsar-admin - -This example shows how to use pulsar-admin to connect to a cluster through OAuth2 authentication plugin. - -```shell script - -bin/pulsar-admin --admin-url https://streamnative.cloud:443 \ ---auth-plugin org.apache.pulsar.client.impl.auth.oauth2.AuthenticationOAuth2 \ ---auth-params '{"privateKey":"file:///path/to/key/file.json", - "issuerUrl":"https://dev-kt-aa9ne.us.auth0.com", - "audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/"}' \ -tenants list - -``` - -Set the `admin-url` parameter to the Web service URL. 
A Web service URL is a combination of the protocol, hostname and port ID, such as `pulsar://localhost:6650`. -Set the `privateKey`, `issuerUrl`, and `audience` parameters to the values based on the configuration in the key file. For details, see [authentication types](#authentication-types). - -### pulsar-client - -This example shows how to use pulsar-client to connect to a cluster through OAuth2 authentication plugin. - -```shell script - -bin/pulsar-client \ ---url SERVICE_URL \ ---auth-plugin org.apache.pulsar.client.impl.auth.oauth2.AuthenticationOAuth2 \ ---auth-params '{"privateKey":"file:///path/to/key/file.json", - "issuerUrl":"https://dev-kt-aa9ne.us.auth0.com", - "audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/"}' \ -produce test-topic -m "test-message" -n 10 - -``` - -Set the `admin-url` parameter to the Web service URL. A Web service URL is a combination of the protocol, hostname and port ID, such as `pulsar://localhost:6650`. -Set the `privateKey`, `issuerUrl`, and `audience` parameters to the values based on the configuration in the key file. For details, see [authentication types](#authentication-types). - -### pulsar-perf - -This example shows how to use pulsar-perf to connect to a cluster through OAuth2 authentication plugin. - -```shell script - -bin/pulsar-perf produce --service-url pulsar+ssl://streamnative.cloud:6651 \ ---auth-plugin org.apache.pulsar.client.impl.auth.oauth2.AuthenticationOAuth2 \ ---auth-params '{"privateKey":"file:///path/to/key/file.json", - "issuerUrl":"https://dev-kt-aa9ne.us.auth0.com", - "audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/"}' \ --r 1000 -s 1024 test-topic - -``` - -Set the `admin-url` parameter to the Web service URL. A Web service URL is a combination of the protocol, hostname and port ID, such as `pulsar://localhost:6650`. -Set the `privateKey`, `issuerUrl`, and `audience` parameters to the values based on the configuration in the key file. For details, see [authentication types](#authentication-types). diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/security-overview.md b/site2/website/versioned_docs/version-2.10.1-deprecated/security-overview.md deleted file mode 100644 index 93a766ee89fb8e..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/security-overview.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -id: security-overview -title: Pulsar security overview -sidebar_label: "Overview" -original_id: security-overview ---- - -As the central message bus for a business, Apache Pulsar is frequently used for storing mission-critical data. Therefore, enabling security features in Pulsar is crucial. - -By default, Pulsar configures no encryption, authentication, or authorization. Any client can communicate to Apache Pulsar via plain text service URLs. So we must ensure that Pulsar accessing via these plain text service URLs is restricted to trusted clients only. In such cases, you can use Network segmentation and/or authorization ACLs to restrict access to trusted IPs. If you use neither, the state of cluster is wide open and anyone can access the cluster. - -Pulsar supports a pluggable authentication mechanism. And Pulsar clients use this mechanism to authenticate with brokers and proxies. You can also configure Pulsar to support multiple authentication sources. - -The Pulsar broker validates the authentication credentials when a connection is established. 
The Pulsar broker validates the authentication credentials when a connection is established. After the initial connection is authenticated, the "principal" token is stored for authorization, and the connection is not re-authenticated unless its credential expires. The broker periodically checks the expiration status of every `ServerCnx` object. You can set `authenticationRefreshCheckSeconds` on the broker to control how frequently the broker checks the expiration status. By default, `authenticationRefreshCheckSeconds` is set to 60 seconds. When the authentication expires, the broker forces the connection to re-authenticate. If the re-authentication fails, the broker disconnects the client.

The broker supports learning whether a particular client supports authentication refreshing. If a client supports authentication refreshing and the credential is expired, the authentication provider calls the `refreshAuthentication` method to initiate the refreshing process. If a client does not support authentication refreshing and the credential is expired, the broker disconnects the client.

You should secure all service components in your Apache Pulsar deployment.

## Role tokens

In Pulsar, a *role* is a string, like `admin` or `app1`, which can represent a single client or multiple clients. You can use roles to control permissions for clients to produce or consume from certain topics, administer the configuration for tenants, and so on.

Apache Pulsar uses an [Authentication Provider](#authentication-providers) to establish the identity of a client and then assign a *role token* to that client. This role token is then used for [Authorization and ACLs](security-authorization.md) to determine what the client is authorized to do.

## Authentication providers

Currently, Pulsar supports the following authentication providers:

- [TLS authentication](security-tls-authentication.md)
- [Athenz authentication](security-athenz.md)
- [Kerberos authentication](security-kerberos.md)
- [JSON Web Token (JWT) authentication](security-jwt.md)
- [OAuth 2.0 authentication](security-oauth2.md)
- [HTTP basic authentication](security-basic-auth.md)


diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/security-policy-and-supported-versions.md b/site2/website/versioned_docs/version-2.10.1-deprecated/security-policy-and-supported-versions.md
deleted file mode 100644
index 637147a5dc27c7..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/security-policy-and-supported-versions.md
+++ /dev/null
@@ -1,65 +0,0 @@
---
id: security-policy-and-supported-versions
title: Security Policy and Supported Versions
sidebar_label: "Security Policy and Supported Versions"
---

## Using Pulsar's Security Features

You can find documentation on Pulsar's available security features and how to use them here:
https://pulsar.apache.org/docs/en/security-overview/.

## Security Vulnerability Process

The Pulsar community follows the ASF [security vulnerability handling process](https://apache.org/security/#vulnerability-handling).

To report a new vulnerability you have discovered, please follow the [ASF security vulnerability reporting process](https://apache.org/security/#reporting-a-vulnerability). To report a vulnerability for Pulsar, contact the [Apache Security Team](https://www.apache.org/security/). When reporting a vulnerability to [security@apache.org](mailto:security@apache.org), you can copy your email to [private@pulsar.apache.org](mailto:private@pulsar.apache.org) to send your report to the Apache Pulsar Project Management Committee. This is a private mailing list.
It is the responsibility of the security vulnerability handling project team (the Apache Pulsar PMC in most cases) to make public security vulnerability announcements. You can follow announcements on the [users@pulsar.apache.org](mailto:users@pulsar.apache.org) mailing list. For instructions on how to subscribe, please see https://pulsar.apache.org/contact/.

## Versioning Policy

The Pulsar project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.html). Existing releases can expect
patches for bugs and security vulnerabilities. New features will target minor releases.

When upgrading an existing cluster, it is important to upgrade components linearly through each minor version. For
example, when upgrading from 2.8.x to 2.10.x, it is important to upgrade to 2.9.x before going to 2.10.x.

## Supported Versions

Feature release branches will be maintained with security fix and bug fix releases for a period of at least 12 months
after initial release. For example, branch 2.5.x is no longer considered maintained as of January 2021, 12 months after
the release of 2.5.0 in January 2020. No more 2.5.x releases should be expected at this point, even to fix security
vulnerabilities.

Note that a minor version can be maintained past its 12-month initial support period. For example, version 2.7 is still
actively maintained.

Security fixes will be given priority when it comes to backporting fixes to older versions that are within the
supported time window. It is challenging to decide which bug fixes to backport to old versions. As such, the latest
versions will have the most bug fixes.

When 3.0.0 is released, the community will decide how to continue supporting 2.x. It is possible that the last minor
release within 2.x will be maintained for longer as an "LTS" release, but it has not been officially decided.

The following table shows version support timelines and will be updated with each release.

| Version | Supported          | Initial Release | At Least Until |
|:-------:|:------------------:|:---------------:|:--------------:|
| 2.10.x  | :white_check_mark: | April 2022      | April 2023     |
| 2.9.x   | :white_check_mark: | November 2021   | November 2022  |
| 2.8.x   | :white_check_mark: | June 2021       | June 2022      |
| 2.7.x   | :white_check_mark: | November 2020   | November 2021  |
| 2.6.x   | :x:                | June 2020       | June 2021      |
| 2.5.x   | :x:                | January 2020    | January 2021   |
| 2.4.x   | :x:                | July 2019       | July 2020      |
| < 2.3.x | :x:                | -               | -              |

If there is ambiguity about which versions of Pulsar are actively supported, please ask on the [users@pulsar.apache.org](mailto:users@pulsar.apache.org)
mailing list.

## Release Frequency

With the acceptance of [PIP-47 - A Time Based Release Plan](https://github.com/apache/pulsar/wiki/PIP-47%3A-Time-Based-Release-Plan),
the Pulsar community aims to complete 4 minor releases each year. Patch releases are completed based on demand as well
as need, for example in the event of security fixes.
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/security-tls-authentication.md b/site2/website/versioned_docs/version-2.10.1-deprecated/security-tls-authentication.md
deleted file mode 100644
index 17da5aab34d471..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/security-tls-authentication.md
+++ /dev/null
@@ -1,222 +0,0 @@
---
id: security-tls-authentication
title: Authentication using TLS
sidebar_label: "Authentication using TLS"
original_id: security-tls-authentication
---

## TLS authentication overview

TLS authentication is an extension of [TLS transport encryption](security-tls-transport.md). Not only do servers have keys and certs that the client uses to verify their identity, but clients also have keys and certs that the server uses to verify the identity of clients. You must have TLS transport encryption configured on your cluster before you can use TLS authentication. This guide assumes you already have TLS transport encryption configured.

`Bouncy Castle Provider` provides TLS-related cipher suites and algorithms in Pulsar. If you need the [FIPS](https://www.bouncycastle.org/fips_faq.html) version of `Bouncy Castle Provider`, refer to the [Bouncy Castle page](security-bouncy-castle.md).

### Create client certificates

Client certificates are generated using the certificate authority. Server certificates are also generated with the same certificate authority.

The biggest difference between client certs and server certs is that the **common name** for the client certificate is the **role token** that the client is authenticated as.

To use client certificates, you need to set `tlsRequireTrustedClientCertOnConnect=true` on the broker side. For details, refer to [TLS broker configuration](security-tls-transport.md#configure-broker).

First, you need to enter the following command to generate the key:

```bash

$ openssl genrsa -out admin.key.pem 2048

```

Similar to the broker, the client expects the key to be in [PKCS 8](https://en.wikipedia.org/wiki/PKCS_8) format, so you need to convert it by entering the following command:

```bash

$ openssl pkcs8 -topk8 -inform PEM -outform PEM \
      -in admin.key.pem -out admin.key-pk8.pem -nocrypt

```

Next, enter the command below to generate the certificate request. When you are asked for a **common name**, enter the **role token** that you want this key pair to authenticate a client as.

```bash

$ openssl req -config openssl.cnf \
      -key admin.key.pem -new -sha256 -out admin.csr.pem

```

:::note

If openssl.cnf is not specified, read [Certificate authority](security-tls-transport.md#certificate-authority) to get the openssl.cnf.

:::

Then, enter the command below to sign the request with the certificate authority. Note that the client cert uses the **usr_cert** extension, which allows the cert to be used for client authentication.

```bash

$ openssl ca -config openssl.cnf -extensions usr_cert \
      -days 1000 -notext -md sha256 \
      -in admin.csr.pem -out admin.cert.pem

```

You can get a cert, `admin.cert.pem`, and a key, `admin.key-pk8.pem`, from this command. Along with `ca.cert.pem`, clients can use this cert and this key to authenticate themselves to brokers and proxies as the role token ``admin``.

:::note

If the "unable to load CA private key" error occurs in this step and the reason is "No such file or directory: /etc/pki/CA/private/cakey.pem", try the command below to generate `cakey.pem`:

```bash

$ cd /etc/pki/tls/misc/CA
$ ./CA -newca

```

:::
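
If you want to double-check the PKCS 8 conversion before handing the key to a client, the PEM body can be parsed with the JDK alone. This is a local sanity check, not part of the official toolchain; the file path is the one produced above.

```java

import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.KeyFactory;
import java.security.PrivateKey;
import java.security.spec.PKCS8EncodedKeySpec;
import java.util.Base64;

public class CheckPkcs8Key {
    public static void main(String[] args) throws Exception {
        // Strip the PEM header/footer and decode the base64 body.
        String pem = new String(Files.readAllBytes(Paths.get("admin.key-pk8.pem")))
                .replace("-----BEGIN PRIVATE KEY-----", "")
                .replace("-----END PRIVATE KEY-----", "")
                .replaceAll("\\s", "");
        byte[] der = Base64.getDecoder().decode(pem);

        // This only succeeds if the key really is an unencrypted PKCS 8 RSA key.
        PrivateKey key = KeyFactory.getInstance("RSA")
                .generatePrivate(new PKCS8EncodedKeySpec(der));
        System.out.println("Parsed " + key.getAlgorithm() + " key in " + key.getFormat() + " format");
    }
}

```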
## Enable TLS authentication on brokers

To configure brokers to authenticate clients, add the following parameters to `broker.conf`, alongside [the configuration to enable TLS transport](security-tls-transport.md#broker-configuration):

```properties

# Configuration to enable authentication
authenticationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderTls

# Roles that are treated as super-users, allowed to perform all admin
# operations and publish/consume from all topics
superUserRoles=admin

# Authentication settings of the broker itself. Used when the broker connects to other brokers, either in the same cluster or in other clusters
brokerClientTlsEnabled=true
brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationTls
brokerClientAuthenticationParameters={"tlsCertFile":"/path/my-ca/admin.cert.pem","tlsKeyFile":"/path/my-ca/admin.key-pk8.pem"}
brokerClientTrustCertsFilePath=/path/my-ca/certs/ca.cert.pem

```

## Enable TLS authentication on proxies

The proxy should have its own client key pair for connecting to brokers. You need to configure the role token for this key pair in the ``proxyRoles`` of the brokers. See the [authorization guide](security-authorization.md) for more details.

To configure proxies to authenticate clients, add the following parameters to `proxy.conf`, alongside [the configuration to enable TLS transport](security-tls-transport.md#proxy-configuration):

```properties

# For clients connecting to the proxy
authenticationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderTls

# For the proxy to connect to brokers
brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationTls
brokerClientAuthenticationParameters=tlsCertFile:/path/to/proxy.cert.pem,tlsKeyFile:/path/to/proxy.key-pk8.pem

```

## Client configuration

When you use TLS authentication, the client connects via TLS transport. You need to configure the client to use ```https://``` and port 8443 for the web service URL, and ```pulsar+ssl://``` and port 6651 for the broker service URL.

### CLI tools

[Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-pulsar-admin.md), [`pulsar-perf`](reference-cli-tools.md#pulsar-perf), and [`pulsar-client`](reference-cli-tools.md#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation.
You need to add the following parameters to that file to use TLS authentication with the CLI tools of Pulsar:

```properties

webServiceUrl=https://broker.example.com:8443/
brokerServiceUrl=pulsar+ssl://broker.example.com:6651/
useTls=true
tlsAllowInsecureConnection=false
tlsTrustCertsFilePath=/path/to/ca.cert.pem
authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationTls
authParams=tlsCertFile:/path/to/my-role.cert.pem,tlsKeyFile:/path/to/my-role.key-pk8.pem

```

### Java client

```java

import org.apache.pulsar.client.api.PulsarClient;

PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar+ssl://broker.example.com:6651/")
    .enableTls(true)
    .tlsTrustCertsFilePath("/path/to/ca.cert.pem")
    .authentication("org.apache.pulsar.client.impl.auth.AuthenticationTls",
                    "tlsCertFile:/path/to/my-role.cert.pem,tlsKeyFile:/path/to/my-role.key-pk8.pem")
    .build();

```

### Python client

```python

from pulsar import Client, AuthenticationTLS

auth = AuthenticationTLS("/path/to/my-role.cert.pem", "/path/to/my-role.key-pk8.pem")
client = Client("pulsar+ssl://broker.example.com:6651/",
                tls_trust_certs_file_path="/path/to/ca.cert.pem",
                tls_allow_insecure_connection=False,
                authentication=auth)

```

### C++ client

```c++

#include <pulsar/Client.h>

pulsar::ClientConfiguration config;
config.setUseTls(true);
config.setTlsTrustCertsFilePath("/path/to/ca.cert.pem");
config.setTlsAllowInsecureConnection(false);

pulsar::AuthenticationPtr auth = pulsar::AuthTls::create("/path/to/my-role.cert.pem",
                                                         "/path/to/my-role.key-pk8.pem");
config.setAuth(auth);

pulsar::Client client("pulsar+ssl://broker.example.com:6651/", config);

```

### Node.js client

```JavaScript

const Pulsar = require('pulsar-client');

(async () => {
  const auth = new Pulsar.AuthenticationTls({
    certificatePath: '/path/to/my-role.cert.pem',
    privateKeyPath: '/path/to/my-role.key-pk8.pem',
  });

  const client = new Pulsar.Client({
    serviceUrl: 'pulsar+ssl://broker.example.com:6651/',
    authentication: auth,
    tlsTrustCertsFilePath: '/path/to/ca.cert.pem',
  });
})();

```

### C# client

```c#

var clientCertificate = new X509Certificate2("admin.pfx");
var client = PulsarClient.Builder()
                         .AuthenticateUsingClientCertificate(clientCertificate)
                         .Build();

```
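
Returning to the Java client for a moment: the same configuration can also be written with the typed factory method instead of the plugin-class string. This is an equivalent sketch, not an additional requirement.

```java

import org.apache.pulsar.client.api.AuthenticationFactory;
import org.apache.pulsar.client.api.PulsarClient;

public class TlsAuthExample {
    public static void main(String[] args) throws Exception {
        // AuthenticationFactory.TLS builds the same AuthenticationTls instance
        // as the string-based form shown above.
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar+ssl://broker.example.com:6651/")
                .tlsTrustCertsFilePath("/path/to/ca.cert.pem")
                .authentication(AuthenticationFactory.TLS(
                        "/path/to/my-role.cert.pem",
                        "/path/to/my-role.key-pk8.pem"))
                .build();
        client.close();
    }
}

```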
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/security-tls-keystore.md b/site2/website/versioned_docs/version-2.10.1-deprecated/security-tls-keystore.md
deleted file mode 100644
index 8a4654a0c33ae9..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/security-tls-keystore.md
+++ /dev/null
@@ -1,345 +0,0 @@
---
id: security-tls-keystore
title: Using TLS with KeyStore configure
sidebar_label: "Using TLS with KeyStore configure"
original_id: security-tls-keystore
---

## Overview

Apache Pulsar supports [TLS encryption](security-tls-transport.md) and [TLS authentication](security-tls-authentication.md) between clients and the Apache Pulsar service.
By default, it uses PEM-format file configuration. This page describes how to use [KeyStore](https://en.wikipedia.org/wiki/Java_KeyStore) type configuration for TLS.


## TLS encryption with KeyStore configure

### Generate TLS key and certificate

The first step of deploying TLS is to generate the key and the certificate for each machine in the cluster.
You can use Java’s `keytool` utility to accomplish this task. We will generate the key into a temporary keystore
for the broker initially, so that we can export it and sign it later with the CA.

```shell

keytool -keystore broker.keystore.jks -alias localhost -validity {validity} -genkeypair -keyalg RSA

```

You need to specify two parameters in the above command:

1. `keystore`: the keystore file that stores the certificate. The *keystore* file contains the private key of
   the certificate; hence, it needs to be kept safely.
2. `validity`: the valid time of the certificate in days.

> Ensure that the common name (CN) matches exactly the fully qualified domain name (FQDN) of the server.
> The client compares the CN with the DNS domain name to ensure that it is indeed connecting to the desired server, not a malicious one.

### Creating your own CA

After the first step, each broker in the cluster has a public-private key pair and a certificate to identify the machine.
The certificate, however, is unsigned, which means that an attacker can create such a certificate to pretend to be any machine.

Therefore, it is important to prevent forged certificates by signing them for each machine in the cluster.
A `certificate authority (CA)` is responsible for signing certificates. A CA works like a government that issues passports —
the government stamps (signs) each passport so that the passport becomes difficult to forge. Other governments verify the stamps
to ensure the passport is authentic. Similarly, the CA signs the certificates, and the cryptography guarantees that a signed
certificate is computationally difficult to forge. Thus, as long as the CA is a genuine and trusted authority, the clients have
high assurance that they are connecting to the authentic machines.

```shell

openssl req -new -x509 -keyout ca-key -out ca-cert -days 365

```

The generated CA is simply a *public-private* key pair and certificate, and it is intended to sign other certificates.

The next step is to add the generated CA to the clients' truststore so that the clients can trust this CA:

```shell

keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert

```

NOTE: If you configure the brokers to require client authentication by setting `tlsRequireTrustedClientCertOnConnect` to `true` in the
broker configuration, then you must also provide a truststore for the brokers, and it should have all the CA certificates that client keys were signed by.

```shell

keytool -keystore broker.truststore.jks -alias CARoot -import -file ca-cert

```

In contrast to the keystore, which stores each machine’s own identity, the truststore of a client stores all the certificates
that the client should trust. Importing a certificate into one’s truststore also means trusting all certificates that are signed
by that certificate. As in the analogy above, trusting the government (CA) also means trusting all passports (certificates) that
it has issued. This attribute is called the chain of trust, and it is particularly useful when deploying TLS on a large BookKeeper cluster.
You can sign all certificates in the cluster with a single CA, and have all machines share the same truststore that trusts the CA.
That way all machines can authenticate all other machines.
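
To see exactly which CAs a truststore will trust, you can list its entries with the JDK `KeyStore` API. The sketch below is only a diagnostic aid; the path matches the command above, and the password is whatever you chose when `keytool` prompted for one (`clientpw` is a placeholder).

```java

import java.io.FileInputStream;
import java.security.KeyStore;
import java.util.Collections;

public class ListTruststore {
    public static void main(String[] args) throws Exception {
        KeyStore trust = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream("client.truststore.jks")) {
            trust.load(in, "clientpw".toCharArray());
        }
        // Every alias listed here is a CA whose signed certificates the client accepts.
        for (String alias : Collections.list(trust.aliases())) {
            System.out.println(alias + " -> " + trust.getCertificate(alias).getType());
        }
    }
}

```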
### Signing the certificate

The next step is to sign all certificates in the keystore with the CA we generated. First, you need to export the certificate from the keystore:

```shell

keytool -keystore broker.keystore.jks -alias localhost -certreq -file cert-file

```

Then sign it with the CA:

```shell

openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days {validity} -CAcreateserial -passin pass:{ca-password}

```

Finally, you need to import both the certificate of the CA and the signed certificate into the keystore:

```shell

keytool -keystore broker.keystore.jks -alias CARoot -import -file ca-cert
keytool -keystore broker.keystore.jks -alias localhost -import -file cert-signed

```

The definitions of the parameters are the following:

1. `keystore`: the location of the keystore
2. `ca-cert`: the certificate of the CA
3. `ca-key`: the private key of the CA
4. `ca-password`: the passphrase of the CA
5. `cert-file`: the exported, unsigned certificate of the broker
6. `cert-signed`: the signed certificate of the broker

### Configuring brokers

Brokers enable TLS by providing valid `brokerServicePortTls` and `webServicePortTls` values, and they also need to set `tlsEnabledWithKeyStore` to `true` to use KeyStore type configuration.
Besides this, the KeyStore path, KeyStore password, TrustStore path, and TrustStore password need to be provided.
Since the broker creates an internal client/admin client to communicate with other brokers, you also need to provide configuration for it, similar to how you configure an external client or admin client.
If `tlsRequireTrustedClientCertOnConnect` is `true`, the broker rejects the connection if the client certificate is not trusted.

The following TLS configs are needed on the broker side:

```properties

tlsEnabledWithKeyStore=true
# key store
tlsKeyStoreType=JKS
tlsKeyStore=/var/private/tls/broker.keystore.jks
tlsKeyStorePassword=brokerpw

# trust store
tlsTrustStoreType=JKS
tlsTrustStore=/var/private/tls/broker.truststore.jks
tlsTrustStorePassword=brokerpw

# internal client/admin-client config
brokerClientTlsEnabled=true
brokerClientTlsEnabledWithKeyStore=true
brokerClientTlsTrustStoreType=JKS
brokerClientTlsTrustStore=/var/private/tls/client.truststore.jks
brokerClientTlsTrustStorePassword=clientpw

```

NOTE: It is important to restrict access to the store files via filesystem permissions.

If you have configured TLS on the broker, you can disable the non-TLS ports by setting the values of the following configurations to empty as below.

```

brokerServicePort=
webServicePort=

```

In this case, you need to set the following configurations.

```conf

# Set this to true
brokerClientTlsEnabled=true
# Set this to true
brokerClientTlsEnabledWithKeyStore=true
# Set this to your desired value
brokerClientTlsTrustStore=
# Set this to your desired value
brokerClientTlsTrustStorePassword=

```

Optional settings that may be worth considering:

1. tlsClientAuthentication=false: Enable/Disable using TLS for authentication. When enabled, this config authenticates the other end
   of the communication channel. It should be enabled on both brokers and clients for mutual TLS.
2. tlsCiphers=[TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256]: A cipher suite is a named combination of authentication, encryption, MAC and key exchange
   algorithms used to negotiate the security settings for a network connection using the TLS network protocol. By default,
   it is null.
[OpenSSL Ciphers](https://www.openssl.org/docs/man1.0.2/apps/ciphers.html)
   [JDK Ciphers](http://docs.oracle.com/javase/8/docs/technotes/guides/security/StandardNames.html#ciphersuites)
3. tlsProtocols=[TLSv1.3,TLSv1.2]: lists the TLS protocols that you are going to accept from clients.
   By default, it is not set.

### Configuring Clients

This is similar to [TLS encryption configuring for clients with PEM type](security-tls-transport.md#client-configuration).
For a minimal configuration, you need to provide the TrustStore information.

For example:
1. for [Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-cli-tools#pulsar-admin), [`pulsar-perf`](reference-cli-tools#pulsar-perf), and [`pulsar-client`](reference-cli-tools#pulsar-client), use the `conf/client.conf` config file in a Pulsar installation.

   ```properties
   
   webServiceUrl=https://broker.example.com:8443/
   brokerServiceUrl=pulsar+ssl://broker.example.com:6651/
   useKeyStoreTls=true
   tlsTrustStoreType=JKS
   tlsTrustStorePath=/var/private/tls/client.truststore.jks
   tlsTrustStorePassword=clientpw
   
   ```

1. for the Java client

   ```java
   
   import org.apache.pulsar.client.api.PulsarClient;
   
   PulsarClient client = PulsarClient.builder()
       .serviceUrl("pulsar+ssl://broker.example.com:6651/")
       .enableTls(true)
       .useKeyStoreTls(true)
       .tlsTrustStorePath("/var/private/tls/client.truststore.jks")
       .tlsTrustStorePassword("clientpw")
       .allowTlsInsecureConnection(false)
       .build();
   
   ```

1. for the Java admin client

   ```java
   
   PulsarAdmin admin = PulsarAdmin.builder().serviceHttpUrl("https://broker.example.com:8443")
       .useKeyStoreTls(true)
       .tlsTrustStorePath("/var/private/tls/client.truststore.jks")
       .tlsTrustStorePassword("clientpw")
       .allowTlsInsecureConnection(false)
       .build();
   
   ```

> **Note:** Please configure `tlsTrustStorePath` when you set `useKeyStoreTls` to `true`.

## TLS authentication with KeyStore configure

This is similar to [TLS authentication with PEM type](security-tls-authentication.md).

### broker authentication config

`broker.conf`

```properties

# Configuration to enable authentication
authenticationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderTls

# This should be the CN of one of the client keystores.
superUserRoles=admin

# Enable KeyStore type
tlsEnabledWithKeyStore=true
requireTrustedClientCertOnConnect=true

# key store
tlsKeyStoreType=JKS
tlsKeyStore=/var/private/tls/broker.keystore.jks
tlsKeyStorePassword=brokerpw

# trust store
tlsTrustStoreType=JKS
tlsTrustStore=/var/private/tls/broker.truststore.jks
tlsTrustStorePassword=brokerpw

# internal client/admin-client config
brokerClientTlsEnabled=true
brokerClientTlsEnabledWithKeyStore=true
brokerClientTlsTrustStoreType=JKS
brokerClientTlsTrustStore=/var/private/tls/client.truststore.jks
brokerClientTlsTrustStorePassword=clientpw
# internal auth config
brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationKeyStoreTls
brokerClientAuthenticationParameters={"keyStoreType":"JKS","keyStorePath":"/var/private/tls/client.keystore.jks","keyStorePassword":"clientpw"}
# currently the WebSocket service does not support KeyStore type configuration
webSocketServiceEnabled=false

```

### client authentication configuring

Besides the TLS encryption configuration, the main work is configuring the KeyStore for the client; it must contain a valid CN as the client role.

For example:
1. for [Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-cli-tools#pulsar-admin), [`pulsar-perf`](reference-cli-tools#pulsar-perf), and [`pulsar-client`](reference-cli-tools#pulsar-client), use the `conf/client.conf` config file in a Pulsar installation.

   ```properties
   
   webServiceUrl=https://broker.example.com:8443/
   brokerServiceUrl=pulsar+ssl://broker.example.com:6651/
   useKeyStoreTls=true
   tlsTrustStoreType=JKS
   tlsTrustStorePath=/var/private/tls/client.truststore.jks
   tlsTrustStorePassword=clientpw
   authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationKeyStoreTls
   authParams={"keyStoreType":"JKS","keyStorePath":"/path/to/keystorefile","keyStorePassword":"keystorepw"}
   
   ```

1. for the Java client

   ```java
   
   import org.apache.pulsar.client.api.PulsarClient;
   
   PulsarClient client = PulsarClient.builder()
       .serviceUrl("pulsar+ssl://broker.example.com:6651/")
       .enableTls(true)
       .useKeyStoreTls(true)
       .tlsTrustStorePath("/var/private/tls/client.truststore.jks")
       .tlsTrustStorePassword("clientpw")
       .allowTlsInsecureConnection(false)
       .authentication(
               "org.apache.pulsar.client.impl.auth.AuthenticationKeyStoreTls",
               "keyStoreType:JKS,keyStorePath:/var/private/tls/client.keystore.jks,keyStorePassword:clientpw")
       .build();
   
   ```

1. for the Java admin client

   ```java
   
   PulsarAdmin admin = PulsarAdmin.builder().serviceHttpUrl("https://broker.example.com:8443")
       .useKeyStoreTls(true)
       .tlsTrustStorePath("/var/private/tls/client.truststore.jks")
       .tlsTrustStorePassword("clientpw")
       .allowTlsInsecureConnection(false)
       .authentication(
               "org.apache.pulsar.client.impl.auth.AuthenticationKeyStoreTls",
               "keyStoreType:JKS,keyStorePath:/var/private/tls/client.keystore.jks,keyStorePassword:clientpw")
       .build();
   
   ```

> **Note:** Please configure `tlsTrustStorePath` when you set `useKeyStoreTls` to `true`.

## Enabling TLS Logging

You can enable TLS debug logging at the JVM level by starting the brokers and/or clients with the `javax.net.debug` system property. For example:

```shell

-Djavax.net.debug=all

```

You can find more details on this in the [Oracle documentation](http://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/ReadDebug.html) on [debugging SSL/TLS connections](http://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/ReadDebug.html).
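
If you cannot modify the launch command, the same property can also be set programmatically, provided it happens before the JVM opens its first TLS connection. A minimal sketch — this is plain JDK behavior, not a Pulsar API:

```java

public class TlsDebug {
    public static void main(String[] args) {
        // Must run before any TLS machinery is initialized.
        System.setProperty("javax.net.debug", "ssl,handshake");
        // ... then create the PulsarClient as usual.
    }
}

```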
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/security-tls-transport.md b/site2/website/versioned_docs/version-2.10.1-deprecated/security-tls-transport.md
deleted file mode 100644
index c3fc81e7393adf..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/security-tls-transport.md
+++ /dev/null
@@ -1,313 +0,0 @@
---
id: security-tls-transport
title: Transport Encryption using TLS
sidebar_label: "Transport Encryption using TLS"
original_id: security-tls-transport
---

## TLS overview

By default, Apache Pulsar clients communicate with the Apache Pulsar service in plain text. This means that all data is sent in the clear. You can use TLS to encrypt this traffic and protect it from snooping by a man-in-the-middle attacker.

You can also configure TLS for both encryption and authentication. Use this guide to configure just TLS transport encryption and refer to [here](security-tls-authentication.md) for TLS authentication configuration. Alternatively, you can use [another authentication mechanism](security-athenz.md) on top of TLS transport encryption.

> Note that enabling TLS may impact the performance due to encryption overhead.

## TLS concepts

TLS is a form of [public key cryptography](https://en.wikipedia.org/wiki/Public-key_cryptography). Encryption is performed using key pairs consisting of a public key and a private key. The public key encrypts the messages and the private key decrypts the messages.

To use TLS transport encryption, you need two kinds of key pairs, **server key pairs** and a **certificate authority**.

You can use a third kind of key pair, **client key pairs**, for [client authentication](security-tls-authentication.md).

You should store the **certificate authority** private key in a very secure location (a fully encrypted, disconnected, air gapped computer). As for the certificate authority public key, the **trust cert**, you can share it freely.

For both client and server key pairs, the administrator first generates a private key and a certificate request, then uses the certificate authority private key to sign the certificate request, and finally generates a certificate. This certificate is the public key for the server/client key pair.

For TLS transport encryption, when clients talk to the server, they can use the **trust cert** to verify that the server has a key pair that the certificate authority signed. A man-in-the-middle attacker does not have access to the certificate authority, so they cannot create a server with such a key pair.

For TLS authentication, the server uses the **trust cert** to verify that the client has a key pair that the certificate authority signed. The common name of the **client cert** is then used as the client's role token (see [Overview](security-overview.md)).

`Bouncy Castle Provider` provides cipher suites and algorithms in Pulsar. If you need the [FIPS](https://www.bouncycastle.org/fips_faq.html) version of `Bouncy Castle Provider`, refer to the [Bouncy Castle page](security-bouncy-castle.md).

## Create TLS certificates

Creating TLS certificates for Pulsar involves creating a [certificate authority](#certificate-authority) (CA), [server certificate](#server-certificate), and [client certificate](#client-certificate).

Follow the guide below to set up a certificate authority. You can also refer to plenty of resources on the internet for more details. We recommend [this guide](https://jamielinux.com/docs/openssl-certificate-authority/index.html) for your detailed reference.

### Certificate authority

1. Create the certificate for the CA. You can use the CA to sign both the broker and client certificates. This ensures that each party will trust the others. You should store the CA in a very secure location (ideally completely disconnected from networks, air gapped, and fully encrypted).

2. Enter the following command to create a directory for your CA, and place [this openssl configuration file](https://github.com/apache/pulsar/tree/master/site2/website/static/examples/openssl.cnf) in the directory. You may want to modify the default answers for company name and department in the configuration file. Export the location of the CA directory to the environment variable, CA_HOME. The configuration file uses this environment variable to find the rest of the files and directories that the CA needs.

```bash

mkdir my-ca
cd my-ca
wget https://raw.githubusercontent.com/apache/pulsar-site/main/site2/website/static/examples/openssl.cnf
export CA_HOME=$(pwd)

```

3. Enter the commands below to create the necessary directories, keys, and certs.
```bash

mkdir certs crl newcerts private
chmod 700 private/
touch index.txt
echo 1000 > serial
openssl genrsa -aes256 -out private/ca.key.pem 4096
# You need to enter a password in the command above
chmod 400 private/ca.key.pem
openssl req -config openssl.cnf -key private/ca.key.pem \
    -new -x509 -days 7300 -sha256 -extensions v3_ca \
    -out certs/ca.cert.pem
# You must enter the same password in the previous openssl command
chmod 444 certs/ca.cert.pem

```

:::tip

The default `openssl` on macOS doesn't work for the commands above. You must upgrade `openssl` via Homebrew:

```bash

brew install openssl
export PATH="/usr/local/Cellar/openssl@3/3.0.1/bin:$PATH"

```

The version `3.0.1` might change in the future. Use the actual path from the output of the `brew install` command.

:::

4. After you answer the question prompts, CA-related files are stored in the `./my-ca` directory. Within that directory:

* `certs/ca.cert.pem` is the public certificate. This public certificate is meant to be distributed to all parties involved.
* `private/ca.key.pem` is the private key. You only need it when you are signing a new certificate for either the broker or clients, and you must safely guard this private key.

### Server certificate

Once you have created a CA certificate, you can create certificate requests and sign them with the CA.

The following commands ask you a few questions and then create the certificates. When you are asked for the common name, you should match the hostname of the broker. You can also use a wildcard to match a group of broker hostnames, for example, `*.broker.usw.example.com`. This ensures that multiple machines can reuse the same certificate.

:::tip

Sometimes matching the hostname is not possible or makes no sense,
such as when you create the brokers with random hostnames, or you
plan to connect to the hosts via their IP. In these cases, you
should configure the client to disable TLS hostname verification. For more
details, you can see [the host verification section in client configuration](#hostname-verification).

:::

1. Enter the command below to generate the key.

```bash

openssl genrsa -out broker.key.pem 2048

```

The broker expects the key to be in [PKCS 8](https://en.wikipedia.org/wiki/PKCS_8) format, so enter the following command to convert it.

```bash

openssl pkcs8 -topk8 -inform PEM -outform PEM \
      -in broker.key.pem -out broker.key-pk8.pem -nocrypt

```

2. Enter the following command to generate the certificate request.

```bash

openssl req -config openssl.cnf \
    -key broker.key.pem -new -sha256 -out broker.csr.pem

```

3. Sign it with the certificate authority by entering the command below.

```bash

openssl ca -config openssl.cnf -extensions server_cert \
    -days 1000 -notext -md sha256 \
    -in broker.csr.pem -out broker.cert.pem

```

At this point, you have a cert, `broker.cert.pem`, and a key, `broker.key-pk8.pem`, which you can use along with `ca.cert.pem` to configure TLS transport encryption for your broker and proxy nodes.
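
Before wiring these files into the broker, you can confirm that `broker.cert.pem` really was signed by your CA. The following JDK-only sketch is an optional sanity check; the paths are the ones generated in the steps above.

```java

import java.io.FileInputStream;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

public class VerifyCertChain {
    public static void main(String[] args) throws Exception {
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        X509Certificate ca;
        X509Certificate broker;
        try (FileInputStream caIn = new FileInputStream("certs/ca.cert.pem");
             FileInputStream brokerIn = new FileInputStream("broker.cert.pem")) {
            ca = (X509Certificate) cf.generateCertificate(caIn);
            broker = (X509Certificate) cf.generateCertificate(brokerIn);
        }
        // Throws an exception if the broker cert was not signed by this CA.
        broker.verify(ca.getPublicKey());
        System.out.println("OK: " + broker.getSubjectX500Principal()
                + " is signed by " + ca.getSubjectX500Principal());
    }
}

```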
## Configure broker

To configure a Pulsar [broker](reference-terminology.md#broker) to use TLS transport encryption, you need to make some changes to `broker.conf`, which is located in the `conf` directory of your [Pulsar installation](getting-started-standalone.md).

Add these values to the configuration file (substituting the appropriate certificate paths where necessary):

```properties

brokerServicePortTls=6651
webServicePortTls=8081
tlsRequireTrustedClientCertOnConnect=true
tlsCertificateFilePath=/path/to/broker.cert.pem
tlsKeyFilePath=/path/to/broker.key-pk8.pem
tlsTrustCertsFilePath=/path/to/ca.cert.pem

```

> You can find a full list of parameters available in the `conf/broker.conf` file,
> as well as the default values for those parameters, in [Broker Configuration](reference-configuration.md#broker).

### TLS Protocol Version and Cipher

You can configure the broker (and proxy) to require specific TLS protocol versions and ciphers for TLS negotiation. You can use the TLS protocol versions and ciphers to stop clients from requesting downgraded TLS protocol versions or ciphers that may have weaknesses.

Both the TLS protocol versions and cipher properties can take multiple values, separated by commas. The possible values for protocol version and ciphers depend on the TLS provider that you are using. Pulsar uses OpenSSL if it is available, and falls back to the JDK implementation otherwise.

```properties

tlsProtocols=TLSv1.3,TLSv1.2
tlsCiphers=TLS_DH_RSA_WITH_AES_256_GCM_SHA384,TLS_DH_RSA_WITH_AES_256_CBC_SHA

```

OpenSSL currently supports ```TLSv1.1```, ```TLSv1.2``` and ```TLSv1.3``` for the protocol version. You can acquire a list of supported ciphers from the `openssl ciphers` command, for example ```openssl ciphers -tls1_3```.

For JDK 11, you can obtain a list of supported values from the documentation:
- [TLS protocol](https://docs.oracle.com/en/java/javase/11/security/oracle-providers.html#GUID-7093246A-31A3-4304-AC5F-5FB6400405E2__SUNJSSEPROVIDERPROTOCOLPARAMETERS-BBF75009)
- [Ciphers](https://docs.oracle.com/en/java/javase/11/security/oracle-providers.html#GUID-7093246A-31A3-4304-AC5F-5FB6400405E2__SUNJSSE_CIPHER_SUITES)

## Proxy Configuration

Proxies need to configure TLS in two directions, for clients connecting to the proxy, and for the proxy connecting to brokers.

```properties

# For clients connecting to the proxy
tlsEnabledInProxy=true
tlsCertificateFilePath=/path/to/broker.cert.pem
tlsKeyFilePath=/path/to/broker.key-pk8.pem
tlsTrustCertsFilePath=/path/to/ca.cert.pem

# For the proxy to connect to brokers
tlsEnabledWithBroker=true
brokerClientTrustCertsFilePath=/path/to/ca.cert.pem

```

## Client configuration

When you enable TLS transport encryption, you need to configure the client to use ```https://``` and port 8443 for the web service URL, and ```pulsar+ssl://``` and port 6651 for the broker service URL.

As the server certificate that you generated above does not belong to any of the default trust chains, you also need to either specify the path to the **trust cert** (recommended), or tell the client to allow untrusted server certs.

### Hostname verification

Hostname verification is a TLS security feature whereby a client can refuse to connect to a server if the "CommonName" does not match the hostname to which the client is connecting. By default, Pulsar clients disable hostname verification, as it requires that each broker has a DNS record and a unique cert.

Moreover, as the administrator has full control of the certificate authority, a bad actor is unlikely to be able to pull off a man-in-the-middle attack. "allowInsecureConnection" allows the client to connect to servers whose cert has not been signed by an approved CA. The client disables "allowInsecureConnection" by default, and you should always disable "allowInsecureConnection" in production environments. As long as you disable "allowInsecureConnection", a man-in-the-middle attack requires that the attacker has access to the CA.

One scenario where you may want to enable hostname verification is where you have multiple proxy nodes behind a VIP, and the VIP has a DNS record, for example, pulsar.mycompany.com. In this case, you can generate a TLS cert with pulsar.mycompany.com as the "CommonName," and then enable hostname verification on the client.
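
A sketch of that VIP scenario in the Java client, with hostname verification turned on — `pulsar.mycompany.com` stands in for your VIP's DNS name:

```java

import org.apache.pulsar.client.api.PulsarClient;

public class VipTlsClient {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                // The VIP's DNS name must match the cert's CommonName.
                .serviceUrl("pulsar+ssl://pulsar.mycompany.com:6651/")
                .tlsTrustCertsFilePath("/path/to/ca.cert.pem")
                .enableTlsHostnameVerification(true)
                .allowTlsInsecureConnection(false)
                .build();
        client.close();
    }
}

```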
"allowInsecureConnection" allows the client to connect to servers whose cert has not been signed by an approved CA. The client disables "allowInsecureConnection" by default, and you should always disable "allowInsecureConnection" in production environments. As long as you disable "allowInsecureConnection", a man-in-the-middle attack requires that the attacker has access to the CA. - -One scenario where you may want to enable hostname verification is where you have multiple proxy nodes behind a VIP, and the VIP has a DNS record, for example, pulsar.mycompany.com. In this case, you can generate a TLS cert with pulsar.mycompany.com as the "CommonName," and then enable hostname verification on the client. - -The examples below show that hostname verification is disabled for the CLI tools/Java/Python/C++/Node.js/C# clients by default. - -### CLI tools - -[Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-cli-tools.md#pulsar-admin), [`pulsar-perf`](reference-cli-tools.md#pulsar-perf), and [`pulsar-client`](reference-cli-tools.md#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation. - -You need to add the following parameters to that file to use TLS transport with the CLI tools of Pulsar: - -```properties - -webServiceUrl=https://broker.example.com:8443/ -brokerServiceUrl=pulsar+ssl://broker.example.com:6651/ -useTls=true -tlsAllowInsecureConnection=false -tlsTrustCertsFilePath=/path/to/ca.cert.pem -tlsEnableHostnameVerification=false - -``` - -#### Java client - -```java - -import org.apache.pulsar.client.api.PulsarClient; - -PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar+ssl://broker.example.com:6651/") - .enableTls(true) - .tlsTrustCertsFilePath("/path/to/ca.cert.pem") - .enableTlsHostnameVerification(false) // false by default, in any case - .allowTlsInsecureConnection(false) // false by default, in any case - .build(); - -``` - -#### Python client - -```python - -from pulsar import Client - -client = Client("pulsar+ssl://broker.example.com:6651/", - tls_hostname_verification=False, - tls_trust_certs_file_path="/path/to/ca.cert.pem", - tls_allow_insecure_connection=False) // defaults to false from v2.2.0 onwards - -``` - -#### C++ client - -```c++ - -#include - -ClientConfiguration config = ClientConfiguration(); -config.setUseTls(true); // shouldn't be needed soon -config.setTlsTrustCertsFilePath(caPath); -config.setTlsAllowInsecureConnection(false); -config.setAuth(pulsar::AuthTls::create(clientPublicKeyPath, clientPrivateKeyPath)); -config.setValidateHostName(false); - -``` - -#### Node.js client - -```JavaScript - -const Pulsar = require('pulsar-client'); - -(async () => { - const client = new Pulsar.Client({ - serviceUrl: 'pulsar+ssl://broker.example.com:6651/', - tlsTrustCertsFilePath: '/path/to/ca.cert.pem', - useTls: true, - tlsValidateHostname: false, - tlsAllowInsecureConnection: false, - }); -})(); - -``` - -#### C# client - -```c# - -var certificate = new X509Certificate2("ca.cert.pem"); -var client = PulsarClient.Builder() - .TrustedCertificateAuthority(certificate) //If the CA is not trusted on the host, you can add it explicitly. - .VerifyCertificateAuthority(true) //Default is 'true' - .VerifyCertificateName(false) //Default is 'false' - .Build(); - -``` - -> Note that `VerifyCertificateName` refers to the configuration of hostname verification in the C# client. 
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/security-token-admin.md b/site2/website/versioned_docs/version-2.10.1-deprecated/security-token-admin.md
deleted file mode 100644
index a265f6320d28fe..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/security-token-admin.md
+++ /dev/null
@@ -1,183 +0,0 @@
---
id: security-token-admin
title: Token authentication admin
sidebar_label: "Token authentication admin"
original_id: security-token-admin
---

## Token Authentication Overview

Pulsar supports authenticating clients using security tokens that are based on [JSON Web Tokens](https://jwt.io/introduction/) ([RFC-7519](https://tools.ietf.org/html/rfc7519)).

Tokens are used to identify a Pulsar client and associate it with some "principal" (or "role"), which
is then granted permissions to perform some actions (for example, publishing to or consuming from a topic).

A user will typically be given a token string by an administrator (or some automated service).

The compact representation of a signed JWT is a string that looks like:

```

 eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY

```

The application specifies the token when creating the client instance. An alternative is to pass
a "token supplier", that is, a function that returns the token when the client library
needs one.

> #### Always use TLS transport encryption
> Sending a token is equivalent to sending a password over the wire. It is strongly recommended to
> always use TLS encryption when talking to the Pulsar service. See
> [Transport Encryption using TLS](security-tls-transport.md)

## Secret vs Public/Private keys

JWT supports two different kinds of keys in order to generate and validate the tokens:

 * Symmetric:
    - there is a single ***Secret*** key that is used both to generate and validate tokens
 * Asymmetric: there is a pair of keys.
    - the ***Private*** key is used to generate tokens
    - the ***Public*** key is used to validate tokens

### Secret key

When using a secret key, the administrator creates the key and uses it to generate the client tokens. The key is also configured on the brokers to allow them to validate the clients.

#### Creating a secret key

> The output file will be generated in the root of your Pulsar installation directory. You can also provide an absolute path for the output file.

```shell

$ bin/pulsar tokens create-secret-key --output my-secret.key

```

To generate a base64-encoded secret key:

```shell

$ bin/pulsar tokens create-secret-key --output /opt/my-secret.key --base64

```

### Public/Private keys

With public/private keys, you need to create a key pair. Pulsar supports all algorithms supported by the Java JWT library shown [here](https://github.com/jwtk/jjwt#signature-algorithms-keys).

#### Creating a key pair

> The output file will be generated in the root of your Pulsar installation directory. You can also provide an absolute path for the output file.

```shell

$ bin/pulsar tokens create-key-pair --output-private-key my-private.key --output-public-key my-public.key

```

 * `my-private.key` will be stored in a safe location and only used by the administrator to generate
   new tokens.
 * `my-public.key` will be distributed to all Pulsar brokers. This file can be publicly shared without
   any security concern.

## Generating tokens

A token is the credential associated with a user. The association is done through the "principal",
or "role".
In the case of JWT tokens, this field is typically referred to as the **subject**, though
it is exactly the same concept.

The generated token is then required to have a **subject** field set.

```shell

$ bin/pulsar tokens create --secret-key file:///path/to/my-secret.key \
            --subject test-user

```

This command prints the token string to stdout.

Similarly, one can create a token by passing the "private" key:

```shell

$ bin/pulsar tokens create --private-key file:///path/to/my-private.key \
            --subject test-user

```

Finally, a token can also be created with a pre-defined TTL. After that time,
the token will be automatically invalidated.

```shell

$ bin/pulsar tokens create --secret-key file:///path/to/my-secret.key \
            --subject test-user \
            --expiry-time 1y

```

## Authorization

The token itself does not have any permissions associated with it; these are determined by the
authorization engine. Once the token is created, one can grant permission for this token to do certain
actions. For example:

```shell

$ bin/pulsar-admin namespaces grant-permission my-tenant/my-namespace \
            --role test-user \
            --actions produce,consume

```

## Enabling Token Authentication ...

### ... on Brokers

To configure brokers to authenticate clients, put the following in `broker.conf`:

```properties

# Configuration to enable authentication and authorization
authenticationEnabled=true
authorizationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken

# If using secret key (Note: key files must be DER-encoded)
tokenSecretKey=file:///path/to/secret.key
# The key can also be passed inline:
# tokenSecretKey=data:;base64,FLFyW0oLJ2Fi22KKCm21J18mbAdztfSHN/lAT5ucEKU=

# If using public/private (Note: key files must be DER-encoded)
# tokenPublicKey=file:///path/to/public.key

```

### ... on Proxies

The proxy will have its own token used when talking to brokers. The role carried by this
token should be configured in the ``proxyRoles`` of the brokers. See the [authorization guide](security-authorization.md) for more details.

To configure proxies to authenticate clients, put the following in `proxy.conf`:

```properties

# For clients connecting to the proxy
authenticationEnabled=true
authorizationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken
tokenSecretKey=file:///path/to/secret.key

# For the proxy to connect to brokers
brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken
brokerClientAuthenticationParameters={"token":"eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0LXVzZXIifQ.9OHgE9ZUDeBTZs7nSMEFIuGNEX18FLR3qvy8mqxSxXw"}
# Or, alternatively, read token from file
# brokerClientAuthenticationParameters=file:///path/to/proxy-token.txt

```
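
On the application side, the generated token — or a token supplier, as mentioned in the overview — is passed when building the client. A minimal Java sketch; the token string and file path are placeholders:

```java

import java.nio.file.Files;
import java.nio.file.Paths;
import org.apache.pulsar.client.api.AuthenticationFactory;
import org.apache.pulsar.client.api.PulsarClient;

public class TokenAuthExample {
    public static void main(String[] args) throws Exception {
        // Option 1: pass the token string directly.
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar+ssl://broker.example.com:6651/")
                .authentication(AuthenticationFactory.token("eyJhbGciOiJIUzI1NiJ9..."))
                .build();
        client.close();

        // Option 2: pass a token supplier; the file is re-read on demand,
        // so the token can be rotated without restarting the application.
        PulsarClient client2 = PulsarClient.builder()
                .serviceUrl("pulsar+ssl://broker.example.com:6651/")
                .authentication(AuthenticationFactory.token(() -> {
                    try {
                        return new String(Files.readAllBytes(Paths.get("/path/to/token.txt"))).trim();
                    } catch (Exception e) {
                        throw new RuntimeException(e);
                    }
                }))
                .build();
        client2.close();
    }
}

```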
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/sql-deployment-configurations.md b/site2/website/versioned_docs/version-2.10.1-deprecated/sql-deployment-configurations.md
deleted file mode 100644
index 0306af30ee159f..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/sql-deployment-configurations.md
+++ /dev/null
@@ -1,277 +0,0 @@
---
id: sql-deployment-configurations
title: Pulsar SQL configuration and deployment
sidebar_label: "Configuration and deployment"
original_id: sql-deployment-configurations
---

You can configure the Presto Pulsar connector and deploy a cluster with the following instructions.

## Configure Presto Pulsar Connector

You can configure the Presto Pulsar Connector in the `${project.root}/conf/presto/catalog/pulsar.properties` properties file. The configuration for the connector and the default values are as follows.

```properties

# name of the connector to be displayed in the catalog
connector.name=pulsar

# the url of Pulsar broker service
pulsar.web-service-url=http://localhost:8080

# URI of Zookeeper cluster
pulsar.zookeeper-uri=localhost:2181

# minimum number of entries to read at a single time
pulsar.entry-read-batch-size=100

# default number of splits to use per query
pulsar.target-num-splits=2

# max size of one batch message (default value is 5MB)
pulsar.max-message-size=5242880

# size of queue to buffer entries read from pulsar
pulsar.max-split-entry-queue-size=1000

# size of queue to buffer messages extracted from entries
pulsar.max-split-message-queue-size=10000

# stats provider to record connector metrics
pulsar.stats-provider=org.apache.bookkeeper.stats.NullStatsProvider

# config in map format for stats provider e.g. {"key1":"val1","key2":"val2"}
pulsar.stats-provider-configs={}

# whether to rewrite Pulsar's default topic delimiter '/'
pulsar.namespace-delimiter-rewrite-enable=false

# delimiter used to rewrite Pulsar's default delimiter '/', use if default is causing incompatibility with other systems like Superset
pulsar.rewrite-namespace-delimiter="/"

# maximum thread pool size for the ledger offloader
pulsar.managed-ledger-offload-max-threads=2

# driver used to offload or read cold data to or from long-term storage
pulsar.managed-ledger-offload-driver=null

# directory to load offloader NAR files from
pulsar.offloaders-directory="./offloaders"

# properties and configurations related to specific offloader implementation as map e.g. {"key1":"val1","key2":"val2"}
pulsar.offloader-properties={}

# authentication plugin used to authenticate to Pulsar cluster
pulsar.auth-plugin=null

# authentication parameter used to authenticate to the Pulsar cluster as a string e.g. "key1:val1,key2:val2".
pulsar.auth-params=null

# whether the Pulsar client accepts an untrusted TLS certificate from the broker
pulsar.tls-allow-insecure-connection=null

# whether to allow hostname verification when a client connects to the broker over TLS.
pulsar.tls-hostname-verification-enable=null

# path for the trusted TLS certificate file of Pulsar broker
pulsar.tls-trust-cert-file-path=null

# set the threshold for BookKeeper request throttle, default is disabled
pulsar.bookkeeper-throttle-value=0

# set the number of IO threads
pulsar.bookkeeper-num-io-threads=2 * Runtime.getRuntime().availableProcessors()

# set the number of worker threads
pulsar.bookkeeper-num-worker-threads=Runtime.getRuntime().availableProcessors()

# whether to use BookKeeper V2 wire protocol
pulsar.bookkeeper-use-v2-protocol=true

# interval to check the need for sending an explicit LAC, default is disabled
pulsar.bookkeeper-explicit-interval=0

# size for managed ledger entry cache (in MB).
pulsar.managed-ledger-cache-size-MB=0

# number of threads to be used for managed ledger tasks dispatching
pulsar.managed-ledger-num-worker-threads=Runtime.getRuntime().availableProcessors()

# number of threads to be used for managed ledger scheduled tasks
pulsar.managed-ledger-num-scheduler-threads=Runtime.getRuntime().availableProcessors()

# directory used to store extracted NAR files
pulsar.nar-extraction-directory=System.getProperty("java.io.tmpdir")

```

You can connect Presto to a Pulsar cluster with multiple hosts. To configure multiple hosts for brokers, add multiple URLs to `pulsar.web-service-url`. To configure multiple hosts for ZooKeeper, add multiple URIs to `pulsar.zookeeper-uri`. The following is an example.

```

pulsar.web-service-url=http://localhost:8080,localhost:8081,localhost:8082
pulsar.zookeeper-uri=localhost1,localhost2:2181

```

**Note: by default, Pulsar SQL does not get the last message in a topic**. This is by design and controlled by settings. By default, the BookKeeper LAC only advances when subsequent entries are added. If no subsequent entry is added, the last written entry is not visible to readers until the ledger is closed. This is not a problem for Pulsar, which uses the managed ledger, but Pulsar SQL reads directly from the BookKeeper ledger.

If you want to get the last message in a topic, set the following configurations:

1. For the broker configuration, set `bookkeeperExplicitLacIntervalInMills` > 0 in `broker.conf` or `standalone.conf`.

2. For the Presto configuration, set `pulsar.bookkeeper-explicit-interval` > 0 and `pulsar.bookkeeper-use-v2-protocol=false`.

However, using the BookKeeper V3 protocol introduces additional GC overhead to BookKeeper as it uses Protobuf.

## Query data from existing Presto clusters

If you already have a Presto cluster, you can copy the Presto Pulsar connector plugin to your existing cluster. Download the archived plugin package with the following command.

```bash

$ wget pulsar:binary_release_url

```
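
Once the connector plugin is installed and the cluster restarted, Pulsar topics can be queried from any Presto/Trino client. As one illustration, the JDBC sketch below assumes the Presto JDBC driver is on the classpath and that a coordinator is listening on `localhost:8081`; adjust the URL scheme (`presto:`/`trino:`) and port to match your distribution.

```java

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PulsarSqlJdbcExample {
    public static void main(String[] args) throws Exception {
        // Catalog "pulsar", schema "public/default" (the '/' is URL-encoded as %2F).
        String url = "jdbc:presto://localhost:8081/pulsar/public%2Fdefault";
        try (Connection conn = DriverManager.getConnection(url, "test-user", null);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SHOW TABLES")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}

```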

You can set the worker to read from a different configuration directory, or set a different directory to which it writes data.

```bash

$ ./bin/pulsar sql-worker run --etc-dir /tmp/incubator-pulsar/conf/presto --data-dir /tmp/presto-1

```

You can start the worker as a daemon process.

```bash

$ ./bin/pulsar sql-worker start

```

### Deploy a cluster on multiple nodes

You can deploy a Pulsar SQL cluster or Presto cluster on multiple nodes. The following example shows how to deploy a cluster on three nodes.

1. Copy the Pulsar binary distribution to three nodes.

The first node runs as the Presto coordinator. The minimal configuration in the `${project.root}/conf/presto/config.properties` file is as follows.

```properties

coordinator=true
node-scheduler.include-coordinator=true
http-server.http.port=8080
query.max-memory=50GB
query.max-memory-per-node=1GB
discovery-server.enabled=true
discovery.uri=

```

The other two nodes serve as workers; you can use the following configuration for them.

```properties

coordinator=false
http-server.http.port=8080
query.max-memory=50GB
query.max-memory-per-node=1GB
discovery.uri=

```

2. Modify the `pulsar.web-service-url` and `pulsar.zookeeper-uri` configuration in the `${project.root}/conf/presto/catalog/pulsar.properties` file accordingly for the three nodes.

3. Start the coordinator node.

```

$ ./bin/pulsar sql-worker run

```

4. Start the worker nodes.

```

$ ./bin/pulsar sql-worker run

```

5. Start the SQL CLI and check the status of your cluster.

```bash

$ ./bin/pulsar sql --server 

```

6. Check the status of your nodes.

```bash

presto> SELECT * FROM system.runtime.nodes;
 node_id |        http_uri         | node_version | coordinator | state
---------+-------------------------+--------------+-------------+--------
 1       | http://192.168.2.1:8081 | testversion  | true        | active
 3       | http://192.168.2.2:8081 | testversion  | false       | active
 2       | http://192.168.2.3:8081 | testversion  | false       | active

```

For more information about deployment in Presto, refer to [Presto deployment](https://trino.io/docs/current/installation/deployment.html).

:::note

By default, the broker does not advance the LAC, so when Pulsar SQL bypasses the broker to query data, it can only read entries up to the LAC that all the bookies have learned. You can make the broker write the LAC periodically by setting `bookkeeperExplicitLacIntervalInMills` in `broker.conf`.

:::

diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/sql-getting-started.md b/site2/website/versioned_docs/version-2.10.1-deprecated/sql-getting-started.md
deleted file mode 100644
index 8a5cd7199b365a..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/sql-getting-started.md
+++ /dev/null
@@ -1,187 +0,0 @@
---
-id: sql-getting-started
-title: Query data with Pulsar SQL
-sidebar_label: "Query data"
-original_id: sql-getting-started
---

Before querying data in Pulsar, you need to install Pulsar and the built-in connectors.

## Requirements
1. Install [Pulsar](getting-started-standalone.md#install-pulsar-standalone).
2. Install Pulsar [built-in connectors](getting-started-standalone.md#install-builtin-connectors-optional).

## Query data in Pulsar
To query data in Pulsar with Pulsar SQL, complete the following steps.

1. Start a Pulsar standalone cluster.

```bash

./bin/pulsar standalone

```

2. Start a Pulsar SQL worker.

```bash

./bin/pulsar sql-worker run

```

3. After the Pulsar standalone cluster and the SQL worker are initialized, run the SQL CLI.

```bash

./bin/pulsar sql

```

4. Test with SQL commands.

```bash

presto> show catalogs;
 Catalog
---------
 pulsar
 system
(2 rows)

Query 20180829_211752_00004_7qpwh, FINISHED, 1 node
Splits: 19 total, 19 done (100.00%)
0:00 [0 rows, 0B] [0 rows/s, 0B/s]


presto> show schemas in pulsar;
        Schema
-----------------------
 information_schema
 public/default
 public/functions
 sample/standalone/ns1
(4 rows)

Query 20180829_211818_00005_7qpwh, FINISHED, 1 node
Splits: 19 total, 19 done (100.00%)
0:00 [4 rows, 89B] [21 rows/s, 471B/s]


presto> show tables in pulsar."public/default";
 Table
-------
(0 rows)

Query 20180829_211839_00006_7qpwh, FINISHED, 1 node
Splits: 19 total, 19 done (100.00%)
0:00 [0 rows, 0B] [0 rows/s, 0B/s]

```

Since there is no data in Pulsar yet, no records are returned.

5. Start the built-in connector _DataGeneratorSource_ and ingest some mock data.

```bash

./bin/pulsar-admin sources create --name generator --destinationTopicName generator_test --source-type data-generator

```

Then you can query a topic in the namespace "public/default".

```bash

presto> show tables in pulsar."public/default";
     Table
----------------
 generator_test
(1 row)

Query 20180829_213202_00000_csyeu, FINISHED, 1 node
Splits: 19 total, 19 done (100.00%)
0:02 [1 rows, 38B] [0 rows/s, 17B/s]

```

You can now query the data within the topic "generator_test".

```bash

presto> select * from pulsar."public/default".generator_test;

 firstname | middlename | lastname | email | username | password | telephonenumber | age | companyemail | nationalidentitycardnumber |
-------------+-------------+-------------+----------------------------------+--------------+----------+-----------------+-----+-----------------------------------------------+----------------------------+
 Genesis | Katherine | Wiley | genesis.wiley@gmail.com | genesisw | y9D2dtU3 | 959-197-1860 | 71 | genesis.wiley@interdemconsulting.eu | 880-58-9247 |
 Brayden | | Stanton | brayden.stanton@yahoo.com | braydens | ZnjmhXik | 220-027-867 | 81 | brayden.stanton@supermemo.eu | 604-60-7069 |
 Benjamin | Julian | Velasquez | benjamin.velasquez@yahoo.com | benjaminv | 8Bc7m3eb | 298-377-0062 | 21 | benjamin.velasquez@hostesltd.biz | 213-32-5882 |
 Michael | Thomas | Donovan | donovan@mail.com | michaeld | OqBm9MLs | 078-134-4685 | 55 | michael.donovan@memortech.eu | 443-30-3442 |
 Brooklyn | Avery | Roach | brooklynroach@yahoo.com | broach | IxtBLafO | 387-786-2998 | 68 | brooklyn.roach@warst.biz | 085-88-3973 |
 Skylar | | Bradshaw | skylarbradshaw@yahoo.com | skylarb | p6eC6cKy | 210-872-608 | 96 | skylar.bradshaw@flyhigh.eu | 453-46-0334 |
.
.
.

```

You can query the mock data.

## Query your own data
If you want to query your own data, you need to ingest the data into Pulsar first. You can write a simple producer to publish custom-defined data to Pulsar. The following is an example.

```java

import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.impl.schema.AvroSchema;

public class TestProducer {

    public static class Foo {
        private int field1 = 1;
        private String field2;
        private long field3;

        public Foo() {
        }

        public int getField1() {
            return field1;
        }

        public void setField1(int field1) {
            this.field1 = field1;
        }

        public String getField2() {
            return field2;
        }

        public void setField2(String field2) {
            this.field2 = field2;
        }

        public long getField3() {
            return field3;
        }

        public void setField3(long field3) {
            this.field3 = field3;
        }
    }

    public static void main(String[] args) throws Exception {
        PulsarClient pulsarClient = PulsarClient.builder().serviceUrl("pulsar://localhost:6650").build();
        Producer<Foo> producer = pulsarClient.newProducer(AvroSchema.of(Foo.class)).topic("test_topic").create();

        // Publish 1000 Avro-encoded messages that Pulsar SQL can then query by field.
        for (int i = 0; i < 1000; i++) {
            Foo foo = new Foo();
            foo.setField1(i);
            foo.setField2("foo" + i);
            foo.setField3(System.currentTimeMillis());
            producer.newMessage().value(foo).send();
        }
        producer.close();
        pulsarClient.close();
    }
}

```

diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/sql-overview.md b/site2/website/versioned_docs/version-2.10.1-deprecated/sql-overview.md
deleted file mode 100644
index 8ba19d053003dd..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/sql-overview.md
+++ /dev/null
@@ -1,18 +0,0 @@
---
-id: sql-overview
-title: Pulsar SQL Overview
-sidebar_label: "Overview"
-original_id: sql-overview
---

Apache Pulsar is used to store streams of event data, and the event data is structured with predefined fields. With the implementation of the [Schema Registry](schema-get-started.md), you can store structured data in Pulsar and query the data by using [Trino (formerly Presto SQL)](https://trino.io/).

As the core of Pulsar SQL, the Presto Pulsar connector enables Presto workers within a Presto cluster to query data from Pulsar.

![The Pulsar consumer and reader interfaces](/assets/pulsar-sql-arch-2.png)

The query performance is efficient and highly scalable, because Pulsar adopts a [two-level, segment-based architecture](concepts-architecture-overview.md#apache-bookkeeper).

Topics in Pulsar are stored as segments in [Apache BookKeeper](https://bookkeeper.apache.org/). Each topic segment is replicated to some BookKeeper nodes, which enables concurrent reads and high read throughput. You can configure the number of BookKeeper nodes, and the default number is `3`. In the Presto Pulsar connector, data is read directly from BookKeeper, so Presto workers can read concurrently from a horizontally scalable number of BookKeeper nodes.

![The Pulsar consumer and reader interfaces](/assets/pulsar-sql-arch-1.png)
diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/sql-rest-api.md b/site2/website/versioned_docs/version-2.10.1-deprecated/sql-rest-api.md
deleted file mode 100644
index c92fd62f7d8703..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/sql-rest-api.md
+++ /dev/null
@@ -1,192 +0,0 @@
---
-id: sql-rest-api
-title: Pulsar SQL REST APIs
-sidebar_label: "REST APIs"
-original_id: sql-rest-api
---

This section lists resources that make up the Presto REST API v1.

## Request for Presto services

All requests for Presto services should use version v1 of the Presto REST API.

To request services, use the explicit URL `http://presto.service:8081/v1`. You need to replace `presto.service:8081` with your real Presto address before sending requests.
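For example, a statement is submitted by sending a `POST` request with the SQL text as the body to the `/v1/statement` path; a minimal sketch, assuming a coordinator at `localhost:8081` (the required `X-Presto-User` header is explained below):

```bash

# Hypothetical address; replace with your Presto coordinator.
PRESTO_BASE=http://localhost:8081/v1

curl --header "X-Presto-User: test-user" \
     --request POST \
     --data 'show catalogs' \
     "${PRESTO_BASE}/statement"

```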
- -`POST` requests require the `X-Presto-User` header. If you use authentication, you must use the same `username` that is specified in the authentication configuration. If you do not use authentication, you can specify anything for `username`. - -```properties - -X-Presto-User: username - -``` - -For more information about headers, refer to [PrestoHeaders](https://github.com/trinodb/trino). - -## Schema - -You can use statement in the HTTP body. All data is received as JSON document that might contain a `nextUri` link. If the received JSON document contains a `nextUri` link, the request continues with the `nextUri` link until the received data does not contain a `nextUri` link. If no error is returned, the query completes successfully. If an `error` field is displayed in `stats`, it means the query fails. - -The following is an example of `show catalogs`. The query continues until the received JSON document does not contain a `nextUri` link. Since no `error` is displayed in `stats`, it means that the query completes successfully. - -```powershell - -➜ ~ curl --header "X-Presto-User: test-user" --request POST --data 'show catalogs' http://localhost:8081/v1/statement -{ - "infoUri" : "http://localhost:8081/ui/query.html?20191113_033653_00006_dg6hb", - "stats" : { - "queued" : true, - "nodes" : 0, - "userTimeMillis" : 0, - "cpuTimeMillis" : 0, - "wallTimeMillis" : 0, - "processedBytes" : 0, - "processedRows" : 0, - "runningSplits" : 0, - "queuedTimeMillis" : 0, - "queuedSplits" : 0, - "completedSplits" : 0, - "totalSplits" : 0, - "scheduled" : false, - "peakMemoryBytes" : 0, - "state" : "QUEUED", - "elapsedTimeMillis" : 0 - }, - "id" : "20191113_033653_00006_dg6hb", - "nextUri" : "http://localhost:8081/v1/statement/20191113_033653_00006_dg6hb/1" -} - -➜ ~ curl http://localhost:8081/v1/statement/20191113_033653_00006_dg6hb/1 -{ - "infoUri" : "http://localhost:8081/ui/query.html?20191113_033653_00006_dg6hb", - "nextUri" : "http://localhost:8081/v1/statement/20191113_033653_00006_dg6hb/2", - "id" : "20191113_033653_00006_dg6hb", - "stats" : { - "state" : "PLANNING", - "totalSplits" : 0, - "queued" : false, - "userTimeMillis" : 0, - "completedSplits" : 0, - "scheduled" : false, - "wallTimeMillis" : 0, - "runningSplits" : 0, - "queuedSplits" : 0, - "cpuTimeMillis" : 0, - "processedRows" : 0, - "processedBytes" : 0, - "nodes" : 0, - "queuedTimeMillis" : 1, - "elapsedTimeMillis" : 2, - "peakMemoryBytes" : 0 - } -} - -➜ ~ curl http://localhost:8081/v1/statement/20191113_033653_00006_dg6hb/2 -{ - "id" : "20191113_033653_00006_dg6hb", - "data" : [ - [ - "pulsar" - ], - [ - "system" - ] - ], - "infoUri" : "http://localhost:8081/ui/query.html?20191113_033653_00006_dg6hb", - "columns" : [ - { - "typeSignature" : { - "rawType" : "varchar", - "arguments" : [ - { - "kind" : "LONG_LITERAL", - "value" : 6 - } - ], - "literalArguments" : [], - "typeArguments" : [] - }, - "name" : "Catalog", - "type" : "varchar(6)" - } - ], - "stats" : { - "wallTimeMillis" : 104, - "scheduled" : true, - "userTimeMillis" : 14, - "progressPercentage" : 100, - "totalSplits" : 19, - "nodes" : 1, - "cpuTimeMillis" : 16, - "queued" : false, - "queuedTimeMillis" : 1, - "state" : "FINISHED", - "peakMemoryBytes" : 0, - "elapsedTimeMillis" : 111, - "processedBytes" : 0, - "processedRows" : 0, - "queuedSplits" : 0, - "rootStage" : { - "cpuTimeMillis" : 1, - "runningSplits" : 0, - "state" : "FINISHED", - "completedSplits" : 1, - "subStages" : [ - { - "cpuTimeMillis" : 14, - "runningSplits" : 0, - "state" : "FINISHED", - "completedSplits" : 
17, - "subStages" : [ - { - "wallTimeMillis" : 7, - "subStages" : [], - "stageId" : "2", - "done" : true, - "nodes" : 1, - "totalSplits" : 1, - "processedBytes" : 22, - "processedRows" : 2, - "queuedSplits" : 0, - "userTimeMillis" : 1, - "cpuTimeMillis" : 1, - "runningSplits" : 0, - "state" : "FINISHED", - "completedSplits" : 1 - } - ], - "wallTimeMillis" : 92, - "nodes" : 1, - "done" : true, - "stageId" : "1", - "userTimeMillis" : 12, - "processedRows" : 2, - "processedBytes" : 51, - "queuedSplits" : 0, - "totalSplits" : 17 - } - ], - "wallTimeMillis" : 5, - "done" : true, - "nodes" : 1, - "stageId" : "0", - "userTimeMillis" : 1, - "processedRows" : 2, - "processedBytes" : 22, - "totalSplits" : 1, - "queuedSplits" : 0 - }, - "runningSplits" : 0, - "completedSplits" : 19 - } -} - -``` - -:::note - -Since the response data is not in sync with the query state from the perspective of clients, you cannot rely on the response data to determine whether the query completes. - -::: - -For more information about Presto REST API, refer to [Presto HTTP Protocol](https://github.com/prestosql/presto/wiki/HTTP-Protocol). diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/standalone.md b/site2/website/versioned_docs/version-2.10.1-deprecated/standalone.md deleted file mode 100644 index 3d463d635558bf..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/standalone.md +++ /dev/null @@ -1,268 +0,0 @@ ---- -id: standalone -title: Set up a standalone Pulsar locally -sidebar_label: "Run Pulsar locally" -original_id: standalone ---- - -For local development and testing, you can run Pulsar in standalone mode on your machine. The standalone mode includes a Pulsar broker, the necessary [RocksDB](http://rocksdb.org/) and BookKeeper components running inside of a single Java Virtual Machine (JVM) process. - -> **Pulsar in production?** -> If you're looking to run a full production Pulsar installation, see the [Deploying a Pulsar instance](deploy-bare-metal.md) guide. - -## Install Pulsar standalone - -This tutorial guides you through every step of installing Pulsar locally. - -### System requirements - -Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install 64-bit JRE/JDK 8 or later versions - -:::tip - -By default, Pulsar allocates 2G JVM heap memory to start. It can be changed in `conf/pulsar_env.sh` file under `PULSAR_MEM`. This is extra options passed into JVM. - -::: - -:::note - -Broker is only supported on 64-bit JVM. - -::: - -### Install Pulsar using binary release - -To get started with Pulsar, download a binary tarball release in one of the following ways: - -* download from the Apache mirror (Pulsar @pulsar:version@ binary release) - -* download from the Pulsar [downloads page](pulsar:download_page_url) - -* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) - -* use [wget](https://www.gnu.org/software/wget): - - ```shell - - $ wget pulsar:binary_release_url - - ``` - -After you download the tarball, untar it and use the `cd` command to navigate to the resulting directory: - -```bash - -$ tar xvfz apache-pulsar-@pulsar:version@-bin.tar.gz -$ cd apache-pulsar-@pulsar:version@ - -``` - -#### What your package contains - -The Pulsar binary package initially contains the following directories: - -Directory | Contains -:---------|:-------- -`bin` | Pulsar's command-line tools, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](/tools/pulsar-admin/). 
-`conf` | Configuration files for Pulsar, including [broker configuration](reference-configuration.md#broker) and more.
    **Note:** Pulsar standalone uses RocksDB as the local metadata store and its configuration file path [`metadataStoreConfigPath`](reference-configuration.md) is configurable in the `standalone.conf` file. For more information about the configurations of RocksDB, see [here](https://github.com/facebook/rocksdb/blob/main/examples/rocksdb_option_file_example.ini) and related [documentation](https://github.com/facebook/rocksdb/wiki/RocksDB-Tuning-Guide).
`examples` | A Java JAR file containing [Pulsar Functions](functions-overview.md) example.
`instances` | Artifacts created for [Pulsar Functions](functions-overview.md).
`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files used by Pulsar.
`licenses` | License files, in the`.txt` form, for various components of the Pulsar [codebase](https://github.com/apache/pulsar).

These directories are created once you begin running Pulsar.

Directory | Contains
:---------|:--------
`data` | The data storage directory used by RocksDB and BookKeeper.
`logs` | Logs created by the installation.

:::tip

If you want to use builtin connectors and tiered storage offloaders, you can install them according to the following instructions:
* [Install builtin connectors (optional)](#install-builtin-connectors-optional)
* [Install tiered storage offloaders (optional)](#install-tiered-storage-offloaders-optional)
Otherwise, skip this step and perform the next step [Start Pulsar standalone](#start-pulsar-standalone). Pulsar can be successfully installed without installing builtin connectors and tiered storage offloaders.

:::

### Install builtin connectors (optional)

Since the `2.1.0-incubating` release, Pulsar releases a separate binary distribution, containing all the `builtin` connectors.
To enable those `builtin` connectors, you can download the connectors tarball release in one of the following ways:

* download from the Apache mirror Pulsar IO Connectors @pulsar:version@ release

* download from the Pulsar [downloads page](pulsar:download_page_url)

* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)

* use [wget](https://www.gnu.org/software/wget):

  ```shell

  $ wget pulsar:connector_release_url/{connector}-@pulsar:version@.nar

  ```

After you download the NAR file, copy the file to the `connectors` directory in the pulsar directory.
For example, if you download the `pulsar-io-aerospike-@pulsar:version@.nar` connector file, enter the following commands:

```bash

$ mkdir connectors
$ mv pulsar-io-aerospike-@pulsar:version@.nar connectors

$ ls connectors
pulsar-io-aerospike-@pulsar:version@.nar
...

```

:::note

* If you are running Pulsar in a bare metal cluster, make sure the `connectors` tarball is unzipped in every pulsar directory of the broker (or in every pulsar directory of function-worker if you are running a separate worker cluster for Pulsar Functions).
* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DC/OS](https://dcos.io/)), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. The `apachepulsar/pulsar-all` image has already bundled [all builtin connectors](io-overview.md#working-with-connectors).

:::

### Install tiered storage offloaders (optional)

:::tip

- Since the `2.2.0` release, Pulsar releases a separate binary distribution, containing the tiered storage offloaders.

- To enable the tiered storage feature, follow the instructions below; otherwise skip this section.

:::

To get started with [tiered storage offloaders](concepts-tiered-storage.md), you need to download the offloaders tarball release on every broker node in one of the following ways:

* download from the Apache mirror Pulsar Tiered Storage Offloaders @pulsar:version@ release

* download from the Pulsar [downloads page](pulsar:download_page_url)

* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)

* use [wget](https://www.gnu.org/software/wget):

  ```shell

  $ wget pulsar:offloader_release_url

  ```

After you download the tarball, untar the offloaders package and copy the offloaders as `offloaders`
in the pulsar directory:

```bash

$ tar xvfz apache-pulsar-offloaders-@pulsar:version@-bin.tar.gz

# you will find a directory named `apache-pulsar-offloaders-@pulsar:version@` in the pulsar directory,
# then copy the offloaders

$ mv apache-pulsar-offloaders-@pulsar:version@/offloaders offloaders

$ ls offloaders
tiered-storage-jcloud-@pulsar:version@.nar

```

For more information on how to configure tiered storage, see [Tiered storage cookbook](cookbooks-tiered-storage.md).

:::note

* If you are running Pulsar in a bare metal cluster, make sure that the `offloaders` tarball is unzipped in every broker's pulsar directory.
* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or DC/OS), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. The `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders.

:::

## Start Pulsar standalone

Once you have an up-to-date local copy of the release, you can start a local cluster using the [`pulsar`](reference-cli-tools.md#pulsar) command, which is stored in the `bin` directory, specifying that you want to start Pulsar in standalone mode.

```bash

$ bin/pulsar standalone

```

If you have started Pulsar successfully, you will see `INFO`-level log messages like this:

```bash

21:59:29.327 [DLM-/stream/storage-OrderedScheduler-3-0] INFO org.apache.bookkeeper.stream.storage.impl.sc.StorageContainerImpl - Successfully started storage container (0).
21:59:34.576 [main] INFO org.apache.pulsar.broker.authentication.AuthenticationService - Authentication is disabled
21:59:34.576 [main] INFO org.apache.pulsar.websocket.WebSocketService - Pulsar WebSocket Service started

```

:::tip

* The service is running on your terminal, which is under your direct control. If you need to run other commands, open a new terminal window.

:::

You can also run the service as a background process using the `bin/pulsar-daemon start standalone` command. For more information, see [pulsar-daemon](reference-cli-tools.md#pulsar-daemon).

:::note

* By default, there is no encryption, authentication, or authorization configured. Apache Pulsar can be accessed from a remote server without any authorization. Check the [Security Overview](security-overview.md) document to secure your deployment.

* When you start a local standalone cluster, a `public/default` [namespace](concepts-messaging.md#namespaces) is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics).

:::
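Before using the cluster, you can optionally verify that the broker is responding. A minimal sketch, assuming default ports and that you run the commands from the Pulsar installation directory (the exact output wording may vary between releases):

```bash

# Ask the broker to run a health check against itself.
$ bin/pulsar-admin brokers healthcheck

# Alternatively, probe the admin REST endpoint directly.
$ curl http://localhost:8080/admin/v2/brokers/health

```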

## Use Pulsar standalone

Pulsar provides a CLI tool called [`pulsar-client`](reference-cli-tools.md#pulsar-client). The pulsar-client tool enables you to consume and produce messages to a Pulsar topic in a running cluster.

### Consume a message

The following command consumes a message from the `my-topic` topic with the subscription name `first-subscription`:

```bash

$ bin/pulsar-client consume my-topic -s "first-subscription"

```

If the message has been successfully consumed, you will see a confirmation like the following in the `pulsar-client` logs:

```

22:17:16.781 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully consumed

```

:::tip

As you may have noticed, we did not explicitly create the `my-topic` topic from which we consume the message. When you consume a message from a topic that does not yet exist, Pulsar creates that topic for you automatically. Producing a message to a topic that does not exist will automatically create that topic for you as well.

:::

### Produce a message

The following command produces a message saying `hello-pulsar` to the `my-topic` topic:

```bash

$ bin/pulsar-client produce my-topic --messages "hello-pulsar"

```

If the message has been successfully published to the topic, you will see a confirmation like the following in the `pulsar-client` logs:

```

22:21:08.693 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully produced

```

## Stop Pulsar standalone

Press `Ctrl+C` to stop a local standalone Pulsar.

:::tip

If the service runs as a background process using the `bin/pulsar-daemon start standalone` command, then use the `bin/pulsar-daemon stop standalone` command to stop the service.
For more information, see [pulsar-daemon](reference-cli-tools.md#pulsar-daemon).

:::

diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/tiered-storage-aliyun.md b/site2/website/versioned_docs/version-2.10.1-deprecated/tiered-storage-aliyun.md
deleted file mode 100644
index 89dc53cda76042..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/tiered-storage-aliyun.md
+++ /dev/null
@@ -1,257 +0,0 @@
---
-id: tiered-storage-aliyun
-title: Use Aliyun OSS offloader with Pulsar
-sidebar_label: "Aliyun OSS offloader"
-original_id: tiered-storage-aliyun
---

This chapter guides you through every step of installing and configuring the Aliyun Object Storage Service (OSS) offloader and using it with Pulsar.

## Installation

Follow the steps below to install the Aliyun OSS offloader.

### Prerequisite

- Pulsar: 2.8.0 or later versions

### Step

This example uses Pulsar 2.8.0.

1. Download the Pulsar tarball, as described [here](standalone.md#install-pulsar-using-binary-release).

2. Download and untar the Pulsar offloaders package, then copy the Pulsar offloaders as `offloaders` in the Pulsar directory, as described [here](standalone.md#install-tiered-storage-offloaders-optional).

   **Output**

   As shown from the output, Pulsar uses [Apache jclouds](https://jclouds.apache.org) to support [AWS S3](https://aws.amazon.com/s3/), [GCS](https://cloud.google.com/storage/), [Azure](https://portal.azure.com/#home), and [Aliyun OSS](https://www.aliyun.com/product/oss) for long-term storage.
- - ``` - - tiered-storage-file-system-2.8.0.nar - tiered-storage-jcloud-2.8.0.nar - - ``` - - :::note - - * If you are running Pulsar in a bare-metal cluster, make sure that `offloaders` tarball is unzipped in every broker's Pulsar directory. - * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8s and DCOS), you can use the `apachepulsar/pulsar-all` image. The `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - - ::: - -## Configuration - -:::note - -Before offloading data from BookKeeper to Aliyun OSS, you need to configure some properties of the Aliyun OSS offload driver. - -::: - -Besides, you can also configure the Aliyun OSS offloader to run it automatically or trigger it manually. - -### Configure Aliyun OSS offloader driver - -You can configure the Aliyun OSS offloader driver in the configuration file `broker.conf` or `standalone.conf`. - -- **Required** configurations are as below. - - | Required configuration | Description | Example value | - | --- | --- |--- | - | `managedLedgerOffloadDriver` | Offloader driver name, which is case-insensitive. | aliyun-oss | - | `offloadersDirectory` | Offloader directory | offloaders | - | `managedLedgerOffloadBucket` | Bucket | pulsar-topic-offload | - | `managedLedgerOffloadServiceEndpoint` | Endpoint | http://oss-cn-hongkong.aliyuncs.com | - -- **Optional** configurations are as below. - - | Optional | Description | Example value | - | --- | --- | --- | - | `managedLedgerOffloadReadBufferSizeInBytes` | Size of block read | 1 MB | - | `managedLedgerOffloadMaxBlockSizeInBytes` | Size of block write | 64 MB | - | `managedLedgerMinLedgerRolloverTimeMinutes` | Minimum time between ledger rollover for a topic

    **Note**: it is not recommended that you set this configuration in the production environment. | 2 | - | `managedLedgerMaxEntriesPerLedger` | Maximum number of entries to append to a ledger before triggering a rollover.

    **Note**: it is not recommended that you set this configuration in the production environment. | 5000 | - -#### Bucket (required) - -A bucket is a basic container that holds your data. Everything you store in Aliyun OSS must be contained in a bucket. You can use a bucket to organize your data and control access to your data, but unlike directory and folder, you cannot nest a bucket. - -##### Example - -This example names the bucket as _pulsar-topic-offload_. - -```conf - -managedLedgerOffloadBucket=pulsar-topic-offload - -``` - -#### Endpoint (required) - -The endpoint is the region where a bucket is located. - -:::tip - -For more information about Aliyun OSS regions and endpoints, see [International website](https://www.alibabacloud.com/help/doc-detail/31837.htm) or [Chinese website](https://help.aliyun.com/document_detail/31837.html). - -::: - - -##### Example - -This example sets the endpoint as _oss-us-west-1-internal_. - -``` - -managedLedgerOffloadServiceEndpoint=http://oss-us-west-1-internal.aliyuncs.com - -``` - -#### Authentication (required) - -To be able to access Aliyun OSS, you need to authenticate with Aliyun OSS. - -Set the environment variables `ALIYUN_OSS_ACCESS_KEY_ID` and `ALIYUN_OSS_ACCESS_KEY_SECRET` in `conf/pulsar_env.sh`. - -"export" is important so that the variables are made available in the environment of spawned processes. - -```bash - -export ALIYUN_OSS_ACCESS_KEY_ID=ABC123456789 -export ALIYUN_OSS_ACCESS_KEY_SECRET=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c - -``` - -#### Size of block read/write - -You can configure the size of a request sent to or read from Aliyun OSS in the configuration file `broker.conf` or `standalone.conf`. - -| Configuration | Description | Default value | -| --- | --- | --- | -| `managedLedgerOffloadReadBufferSizeInBytes` | Block size for each individual read when reading back data from Aliyun OSS. | 1 MB | -| `managedLedgerOffloadMaxBlockSizeInBytes` | Maximum size of a "part" sent during a multipart upload to Aliyun OSS. It **cannot** be smaller than 5 MB. | 64 MB | - -### Run Aliyun OSS offloader automatically - -Namespace policy can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic reaches the threshold, an offloading operation is triggered automatically. - -| Threshold value | Action | -| --- | --- | -| > 0 | It triggers the offloading operation if the topic storage reaches its threshold. | -| = 0 | It causes a broker to offload data as soon as possible. | -| < 0 | It disables automatic offloading operation. | - -Automatic offloading runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, the offloader does not work until the current segment is full. - -You can configure the threshold size using CLI tools, such as pulsar-admin. - -The offload configurations in `broker.conf` and `standalone.conf` are used for the namespaces that do not have namespace level offload policies. Each namespace can have its own offload policy. If you want to set offload policy for each namespace, use the command [`pulsar-admin namespaces set-offload-policies options`](/tools/pulsar-admin/) command. - -#### Example - -This example sets the Aliyun OSS offloader threshold size to 10 MB using pulsar-admin. 
- -```bash - -bin/pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace - -``` - -:::tip - -For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, and default values, see [here](/tools/pulsar-admin/). - -::: - -### Run Aliyun OSS offloader manually - -For individual topics, you can trigger the Aliyun OSS offloader manually using one of the following methods: - -- Use REST endpoint. - -- Use CLI tools (such as pulsar-admin). - - To trigger it via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are moved to Aliyun OSS until the threshold is no longer exceeded. Older segments are moved first. - -#### Example - -- This example triggers the Aliyun OSS offloader to run manually using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload --size-threshold 10M my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1 - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, and default values, see [here](/tools/pulsar-admin/). - - ::: - -- This example checks the Aliyun OSS offloader status using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload is currently running - - ``` - - To wait for the Aliyun OSS offloader to complete the job, add the `-w` flag. - - ```bash - - bin/pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Offload was a success - - ``` - - If there is an error in offloading, the error is propagated to the `pulsar-admin topics offload-status` command. - - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Error in offload - null - - Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g= - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, and default values, see [here](/tools/pulsar-admin/). - - ::: - diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/tiered-storage-aws.md b/site2/website/versioned_docs/version-2.10.1-deprecated/tiered-storage-aws.md deleted file mode 100644 index 11905bbb09ea40..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/tiered-storage-aws.md +++ /dev/null @@ -1,329 +0,0 @@ ---- -id: tiered-storage-aws -title: Use AWS S3 offloader with Pulsar -sidebar_label: "AWS S3 offloader" -original_id: tiered-storage-aws ---- - -This chapter guides you through every step of installing and configuring the AWS S3 offloader and using it with Pulsar. 
- -## Installation - -Follow the steps below to install the AWS S3 offloader. - -### Prerequisite - -- Pulsar: 2.4.2 or later versions - -### Step - -This example uses Pulsar 2.5.1. - -1. Download the Pulsar tarball using one of the following ways: - - * Download from the [Apache mirror](https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz) - - * Download from the Pulsar [downloads page](/download) - - * Use [wget](https://www.gnu.org/software/wget): - - ```shell - - wget https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz - - ``` - -2. Download and untar the Pulsar offloaders package. - - ```bash - - wget https://downloads.apache.org/pulsar/pulsar-2.5.1/apache-pulsar-offloaders-2.5.1-bin.tar.gz - tar xvfz apache-pulsar-offloaders-2.5.1-bin.tar.gz - - ``` - -3. Copy the Pulsar offloaders as `offloaders` in the Pulsar directory. - - ``` - - mv apache-pulsar-offloaders-2.5.1/offloaders apache-pulsar-2.5.1/offloaders - - ls offloaders - - ``` - - **Output** - - As shown from the output, Pulsar uses [Apache jclouds](https://jclouds.apache.org) to support [AWS S3](https://aws.amazon.com/s3/) and [GCS](https://cloud.google.com/storage/) for long term storage. - - ``` - - tiered-storage-file-system-2.5.1.nar - tiered-storage-jcloud-2.5.1.nar - - ``` - - :::note - - * If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's Pulsar directory. - * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8s and DCOS), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - - ::: - -## Configuration - -:::note - -Before offloading data from BookKeeper to AWS S3, you need to configure some properties of the AWS S3 offload driver. - -::: - -Besides, you can also configure the AWS S3 offloader to run it automatically or trigger it manually. - -### Configure AWS S3 offloader driver - -You can configure the AWS S3 offloader driver in the configuration file `broker.conf` or `standalone.conf`. - -- **Required** configurations are as below. - - Required configuration | Description | Example value - |---|---|--- - `managedLedgerOffloadDriver` | Offloader driver name, which is case-insensitive.

    **Note**: there is a third driver type, S3, which is identical to AWS S3, though S3 requires that you specify an endpoint URL using `s3ManagedLedgerOffloadServiceEndpoint`. This is useful if using an S3 compatible data store other than AWS S3. | aws-s3 - `offloadersDirectory` | Offloader directory | offloaders - `s3ManagedLedgerOffloadBucket` | Bucket | pulsar-topic-offload - -- **Optional** configurations are as below. - - Optional | Description | Example value - |---|---|--- - `s3ManagedLedgerOffloadRegion` | Bucket region

    **Note**: before specifying a value for this parameter, you need to set the following configurations. Otherwise, you might get an error.

    - Set [`s3ManagedLedgerOffloadServiceEndpoint`](https://docs.aws.amazon.com/general/latest/gr/s3.html).

    Example
    `s3ManagedLedgerOffloadServiceEndpoint=https://s3.YOUR_REGION.amazonaws.com`

    - Grant `GetBucketLocation` permission to a user.

    For how to grant `GetBucketLocation` permission to a user, see [here](https://docs.aws.amazon.com/AmazonS3/latest/dev/using-with-s3-actions.html#using-with-s3-actions-related-to-buckets).| eu-west-3 - `s3ManagedLedgerOffloadReadBufferSizeInBytes`|Size of block read|1 MB - `s3ManagedLedgerOffloadMaxBlockSizeInBytes`|Size of block write|64 MB - `managedLedgerMinLedgerRolloverTimeMinutes`|Minimum time between ledger rollover for a topic

    **Note**: it is not recommended that you set this configuration in the production environment.|2 - `managedLedgerMaxEntriesPerLedger`|Maximum number of entries to append to a ledger before triggering a rollover.

    **Note**: it is not recommended that you set this configuration in the production environment.|5000 - -#### Bucket (required) - -A bucket is a basic container that holds your data. Everything you store in AWS S3 must be contained in a bucket. You can use a bucket to organize your data and control access to your data, but unlike directory and folder, you cannot nest a bucket. - -##### Example - -This example names the bucket as _pulsar-topic-offload_. - -```conf - -s3ManagedLedgerOffloadBucket=pulsar-topic-offload - -``` - -#### Bucket region - -A bucket region is a region where a bucket is located. If a bucket region is not specified, the **default** region (`US East (N. Virginia)`) is used. - -:::tip - -For more information about AWS regions and endpoints, see [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). - -::: - - -##### Example - -This example sets the bucket region as _europe-west-3_. - -``` - -s3ManagedLedgerOffloadRegion=eu-west-3 - -``` - -#### Authentication (required) - -To be able to access AWS S3, you need to authenticate with AWS S3. - -Pulsar does not provide any direct methods of configuring authentication for AWS S3, -but relies on the mechanisms supported by the [DefaultAWSCredentialsProviderChain](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html). - -Once you have created a set of credentials in the AWS IAM console, you can configure credentials using one of the following methods. - -* Use EC2 instance metadata credentials. - - If you are on AWS instance with an instance profile that provides credentials, Pulsar uses these credentials if no other mechanism is provided. - -* Set the environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` in `conf/pulsar_env.sh`. - - "export" is important so that the variables are made available in the environment of spawned processes. - - ```bash - - export AWS_ACCESS_KEY_ID=ABC123456789 - export AWS_SECRET_ACCESS_KEY=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c - - ``` - -* Add the Java system properties `aws.accessKeyId` and `aws.secretKey` to `PULSAR_EXTRA_OPTS` in `conf/pulsar_env.sh`. - - ```bash - - PULSAR_EXTRA_OPTS="${PULSAR_EXTRA_OPTS} ${PULSAR_MEM} ${PULSAR_GC} -Daws.accessKeyId=ABC123456789 -Daws.secretKey=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c -Dio.netty.leakDetectionLevel=disabled -Dio.netty.recycler.maxCapacity.default=1000 -Dio.netty.recycler.linkCapacity=1024" - - ``` - -* Set the access credentials in `~/.aws/credentials`. - - ```conf - - [default] - aws_access_key_id=ABC123456789 - aws_secret_access_key=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c - - ``` - -* Assume an IAM role. - - This example uses the `DefaultAWSCredentialsProviderChain` for assuming this role. - - The broker must be rebooted for credentials specified in `pulsar_env` to take effect. - - ```conf - - s3ManagedLedgerOffloadRole= - s3ManagedLedgerOffloadRoleSessionName=pulsar-s3-offload - - ``` - -#### Size of block read/write - -You can configure the size of a request sent to or read from AWS S3 in the configuration file `broker.conf` or `standalone.conf`. - -Configuration|Description|Default value -|---|---|--- -`s3ManagedLedgerOffloadReadBufferSizeInBytes`|Block size for each individual read when reading back data from AWS S3.|1 MB -`s3ManagedLedgerOffloadMaxBlockSizeInBytes`|Maximum size of a "part" sent during a multipart upload to AWS S3. It **cannot** be smaller than 5 MB. 
|64 MB - -### Configure AWS S3 offloader to run automatically - -Namespace policy can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic reaches the threshold, an offloading operation is triggered automatically. - -Threshold value|Action -|---|--- -> 0 | It triggers the offloading operation if the topic storage reaches its threshold. -= 0|It causes a broker to offload data as soon as possible. -< 0 |It disables automatic offloading operation. - -Automatic offloading runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, offloader does not work until the current segment is full. - -You can configure the threshold size using CLI tools, such as pulsar-admin. - -The offload configurations in `broker.conf` and `standalone.conf` are used for the namespaces that do not have namespace level offload policies. Each namespace can have its own offload policy. If you want to set offload policy for each namespace, use the command [`pulsar-admin namespaces set-offload-policies options`](/tools/pulsar-admin/) command. - -#### Example - -This example sets the AWS S3 offloader threshold size to 10 MB using pulsar-admin. - -```bash - -bin/pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace - -``` - -:::tip - -For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, and default values, see [here](/tools/pulsar-admin/). - -::: - -### Configure AWS S3 offloader to run manually - -For individual topics, you can trigger AWS S3 offloader manually using one of the following methods: - -- Use REST endpoint. - -- Use CLI tools (such as pulsar-admin). - - To trigger it via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are moved to AWS S3 until the threshold is no longer exceeded. Older segments are moved first. - -#### Example - -- This example triggers the AWS S3 offloader to run manually using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload --size-threshold 10M my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1 - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, and default values, see [here](/tools/pulsar-admin/). - - ::: - -- This example checks the AWS S3 offloader status using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload is currently running - - ``` - - To wait for the AWS S3 offloader to complete the job, add the `-w` flag. - - ```bash - - bin/pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Offload was a success - - ``` - - If there is an error in offloading, the error is propagated to the `pulsar-admin topics offload-status` command. 
- - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Error in offload - null - - Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g= - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, and default values, see [here](/tools/pulsar-admin/). - - ::: - -## Tutorial - -For the complete and step-by-step instructions on how to use the AWS S3 offloader with Pulsar, see [here](https://hub.streamnative.io/offloaders/aws-s3/2.5.1#usage). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/tiered-storage-azure.md b/site2/website/versioned_docs/version-2.10.1-deprecated/tiered-storage-azure.md deleted file mode 100644 index e65356355ccc23..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/tiered-storage-azure.md +++ /dev/null @@ -1,264 +0,0 @@ ---- -id: tiered-storage-azure -title: Use Azure BlobStore offloader with Pulsar -sidebar_label: "Azure BlobStore offloader" -original_id: tiered-storage-azure ---- - -This chapter guides you through every step of installing and configuring the Azure BlobStore offloader and using it with Pulsar. - -## Installation - -Follow the steps below to install the Azure BlobStore offloader. - -### Prerequisite - -- Pulsar: 2.6.2 or later versions - -### Step - -This example uses Pulsar 2.6.2. - -1. Download the Pulsar tarball using one of the following ways: - - * Download from the [Apache mirror](https://archive.apache.org/dist/pulsar/pulsar-2.6.2/apache-pulsar-2.6.2-bin.tar.gz) - - * Download from the Pulsar [downloads page](/download) - - * Use [wget](https://www.gnu.org/software/wget): - - ```shell - - wget https://archive.apache.org/dist/pulsar/pulsar-2.6.2/apache-pulsar-2.6.2-bin.tar.gz - - ``` - -2. Download and untar the Pulsar offloaders package. - - ```bash - - wget https://downloads.apache.org/pulsar/pulsar-2.6.2/apache-pulsar-offloaders-2.6.2-bin.tar.gz - tar xvfz apache-pulsar-offloaders-2.6.2-bin.tar.gz - - ``` - -3. Copy the Pulsar offloaders as `offloaders` in the Pulsar directory. - - ``` - - mv apache-pulsar-offloaders-2.6.2/offloaders apache-pulsar-2.6.2/offloaders - - ls offloaders - - ``` - - **Output** - - As shown from the output, Pulsar uses [Apache jclouds](https://jclouds.apache.org) to support [AWS S3](https://aws.amazon.com/s3/), [GCS](https://cloud.google.com/storage/) and [Azure](https://portal.azure.com/#home) for long term storage. - - ``` - - tiered-storage-file-system-2.6.2.nar - tiered-storage-jcloud-2.6.2.nar - - ``` - - :::note - - * If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's Pulsar directory. - * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8s and DCOS), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. 
`apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - - ::: - -## Configuration - -:::note - -Before offloading data from BookKeeper to Azure BlobStore, you need to configure some properties of the Azure BlobStore offload driver. - -::: - -Besides, you can also configure the Azure BlobStore offloader to run it automatically or trigger it manually. - -### Configure Azure BlobStore offloader driver - -You can configure the Azure BlobStore offloader driver in the configuration file `broker.conf` or `standalone.conf`. - -- **Required** configurations are as below. - - Required configuration | Description | Example value - |---|---|--- - `managedLedgerOffloadDriver` | Offloader driver name | azureblob - `offloadersDirectory` | Offloader directory | offloaders - `managedLedgerOffloadBucket` | Bucket | pulsar-topic-offload - -- **Optional** configurations are as below. - - Optional | Description | Example value - |---|---|--- - `managedLedgerOffloadReadBufferSizeInBytes`|Size of block read|1 MB - `managedLedgerOffloadMaxBlockSizeInBytes`|Size of block write|64 MB - `managedLedgerMinLedgerRolloverTimeMinutes`|Minimum time between ledger rollover for a topic

    **Note**: it is not recommended that you set this configuration in the production environment.|2 - `managedLedgerMaxEntriesPerLedger`|Maximum number of entries to append to a ledger before triggering a rollover.

    **Note**: it is not recommended that you set this configuration in the production environment.|5000 - -#### Bucket (required) - -A bucket is a basic container that holds your data. Everything you store in Azure BlobStore must be contained in a bucket. You can use a bucket to organize your data and control access to your data, but unlike directory and folder, you cannot nest a bucket. - -##### Example - -This example names the bucket as _pulsar-topic-offload_. - -```conf - -managedLedgerOffloadBucket=pulsar-topic-offload - -``` - -#### Authentication (required) - -To be able to access Azure BlobStore, you need to authenticate with Azure BlobStore. - -* Set the environment variables `AZURE_STORAGE_ACCOUNT` and `AZURE_STORAGE_ACCESS_KEY` in `conf/pulsar_env.sh`. - - "export" is important so that the variables are made available in the environment of spawned processes. - - ```bash - - export AZURE_STORAGE_ACCOUNT=ABC123456789 - export AZURE_STORAGE_ACCESS_KEY=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c - - ``` - -#### Size of block read/write - -You can configure the size of a request sent to or read from Azure BlobStore in the configuration file `broker.conf` or `standalone.conf`. - -Configuration|Description|Default value -|---|---|--- -`managedLedgerOffloadReadBufferSizeInBytes`|Block size for each individual read when reading back data from Azure BlobStore store.|1 MB -`managedLedgerOffloadMaxBlockSizeInBytes`|Maximum size of a "part" sent during a multipart upload to Azure BlobStore store. It **cannot** be smaller than 5 MB. |64 MB - -### Configure Azure BlobStore offloader to run automatically - -Namespace policy can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic reaches the threshold, an offloading operation is triggered automatically. - -Threshold value|Action -|---|--- -> 0 | It triggers the offloading operation if the topic storage reaches its threshold. -= 0|It causes a broker to offload data as soon as possible. -< 0 |It disables automatic offloading operation. - -Automatic offloading runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, offloader does not work until the current segment is full. - -You can configure the threshold size using CLI tools, such as pulsar-admin. - -The offload configurations in `broker.conf` and `standalone.conf` are used for the namespaces that do not have namespace level offload policies. Each namespace can have its own offload policy. If you want to set offload policy for each namespace, use the command [`pulsar-admin namespaces set-offload-policies options`](/tools/pulsar-admin/) command. - -#### Example - -This example sets the Azure BlobStore offloader threshold size to 10 MB using pulsar-admin. - -```bash - -bin/pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace - -``` - -:::tip - -For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, and default values, see [here](/tools/pulsar-admin/). - -::: - -### Configure Azure BlobStore offloader to run manually - -For individual topics, you can trigger Azure BlobStore offloader manually using one of the following methods: - -- Use REST endpoint. - -- Use CLI tools (such as pulsar-admin). 

  To trigger it via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are moved to Azure BlobStore until the threshold is no longer exceeded. Older segments are moved first.

#### Example

- This example triggers the Azure BlobStore offloader to run manually using pulsar-admin.

  ```bash

  bin/pulsar-admin topics offload --size-threshold 10M my-tenant/my-namespace/topic1

  ```

  **Output**

  ```bash

  Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1

  ```

  :::tip

  For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, and default values, see [here](/tools/pulsar-admin/).

  :::

- This example checks the Azure BlobStore offloader status using pulsar-admin.

  ```bash

  bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1

  ```

  **Output**

  ```bash

  Offload is currently running

  ```

  To wait for the Azure BlobStore offloader to complete the job, add the `-w` flag.

  ```bash

  bin/pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1

  ```

  **Output**

  ```

  Offload was a success

  ```

  If there is an error in offloading, the error is propagated to the `pulsar-admin topics offload-status` command.

  ```bash

  bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1

  ```

  **Output**

  ```

  Error in offload
  null

  Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException:

  ```

  :::tip

  For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, and default values, see [here](/tools/pulsar-admin/).

  :::

diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/tiered-storage-filesystem.md b/site2/website/versioned_docs/version-2.10.1-deprecated/tiered-storage-filesystem.md
deleted file mode 100644
index bb399b500cb022..00000000000000
--- a/site2/website/versioned_docs/version-2.10.1-deprecated/tiered-storage-filesystem.md
+++ /dev/null
@@ -1,631 +0,0 @@
---
-id: tiered-storage-filesystem
-title: Use filesystem offloader with Pulsar
-sidebar_label: "Filesystem offloader"
-original_id: tiered-storage-filesystem
---

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````


This chapter guides you through every step of installing and configuring the filesystem offloader and using it with Pulsar.

## Installation

This section describes how to install the filesystem offloader.

### Prerequisite

- Pulsar: 2.4.2 or higher versions

### Step

This example uses Pulsar 2.5.1.

1. Download the Pulsar tarball using one of the following ways:

   * Download the Pulsar tarball from the [Apache mirror](https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz)

   * Download the Pulsar tarball from the Pulsar [download page](/download/)

   * Use the [wget](https://www.gnu.org/software/wget) command to download the Pulsar tarball.

     ```shell

     wget https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz

     ```

2. Download and untar the Pulsar offloaders package.
- - ```bash - - wget https://downloads.apache.org/pulsar/pulsar-2.5.1/apache-pulsar-offloaders-2.5.1-bin.tar.gz - - tar xvfz apache-pulsar-offloaders-2.5.1-bin.tar.gz - - ``` - - :::note - - * If you run Pulsar in a bare metal cluster, ensure that the `offloaders` tarball is unzipped in every broker's Pulsar directory. - * If you run Pulsar in Docker or deploying Pulsar using a Docker image (such as K8S and DCOS), you can use the `apachepulsar/pulsar-all` image. The `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - - ::: - -3. Copy the Pulsar offloaders as `offloaders` in the Pulsar directory. - - ``` - - mv apache-pulsar-offloaders-2.5.1/offloaders apache-pulsar-2.5.1/offloaders - - ls offloaders - - ``` - - **Output** - - ``` - - tiered-storage-file-system-2.5.1.nar - tiered-storage-jcloud-2.5.1.nar - - ``` - - :::note - - * If you run Pulsar in a bare metal cluster, ensure that `offloaders` tarball is unzipped in every broker's Pulsar directory. - * If you run Pulsar in Docker or deploying Pulsar using a Docker image (such as K8s and DCOS), you can use the `apachepulsar/pulsar-all` image. The `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - - ::: - -## Configuration - -:::note - -Before offloading data from BookKeeper to filesystem, you need to configure some properties of the filesystem offloader driver. - -::: - -Besides, you can also configure the filesystem offloader to run it automatically or trigger it manually. - -### Configure filesystem offloader driver - -You can configure the filesystem offloader driver in the `broker.conf` or `standalone.conf` configuration file. - -````mdx-code-block - - - -- **Required** configurations are as below. - - Parameter | Description | Example value - |---|---|--- - `managedLedgerOffloadDriver` | Offloader driver name, which is case-insensitive. | filesystem - `fileSystemURI` | Connection address, which is the URI to access the default Hadoop distributed file system. | hdfs://127.0.0.1:9000 - `offloadersDirectory` | Offloader directory | offloaders - `fileSystemProfilePath` | Hadoop profile path. The configuration file is stored in the Hadoop profile path. It contains various settings for Hadoop performance tuning. | conf/filesystem_offload_core_site.xml - - -- **Optional** configurations are as below. - - Parameter| Description | Example value - |---|---|--- - `managedLedgerMinLedgerRolloverTimeMinutes`|Minimum time between ledger rollover for a topic.

    **Note**: it is not recommended to set this parameter in the production environment.|2
    `managedLedgerMaxEntriesPerLedger`|Maximum number of entries to append to a ledger before triggering a rollover. **Note**: it is not recommended to set this parameter in the production environment.|5000
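
The file referenced by `fileSystemProfilePath` uses the Hadoop `core-site.xml` property format. As a rough sketch only (the keys below are standard Hadoop I/O settings, and the values are illustrative examples rather than tuned recommendations):

```

<property>
    <name>io.file.buffer.size</name>
    <value>4096</value>
</property>

<property>
    <name>io.seqfile.compression.type</name>
    <value>BLOCK</value>
</property>

```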
    - - -- **Required** configurations are as below. - - Parameter | Description | Example value - |---|---|--- - `managedLedgerOffloadDriver` | Offloader driver name, which is case-insensitive. | filesystem - `offloadersDirectory` | Offloader directory | offloaders - `fileSystemProfilePath` | NFS profile path. The configuration file is stored in the NFS profile path. It contains various settings for performance tuning. | conf/filesystem_offload_core_site.xml - -- **Optional** configurations are as below. - - Parameter| Description | Example value - |---|---|--- - `managedLedgerMinLedgerRolloverTimeMinutes`|Minimum time between ledger rollover for a topic.

    **Note**: it is not recommended to set this parameter in the production environment.|2
    `managedLedgerMaxEntriesPerLedger`|Maximum number of entries to append to a ledger before triggering a rollover. **Note**: it is not recommended to set this parameter in the production environment.|5000
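
Because the filesystem offloader writes through the local mount point, it can help to confirm that the NFS share is mounted and writable from the broker host before enabling offload. A minimal check, assuming the share is mounted at the example path */Users/pulsar_nfs* used in the tutorial below:

```bash

# create and remove a scratch file to verify the mount accepts writes
touch /Users/pulsar_nfs/.pulsar-offload-write-test
rm /Users/pulsar_nfs/.pulsar-offload-write-test

```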
</TabItem>
</Tabs>
    -```` - -### Run filesystem offloader automatically - -You can configure the namespace policy to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic storage reaches the threshold, an offload operation is triggered automatically. - -Threshold value|Action -|---|--- -| > 0 | It triggers the offloading operation if the topic storage reaches its threshold. -= 0|It causes a broker to offload data as soon as possible. -< 0 |It disables automatic offloading operation. - -Automatic offload runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, the filesystem offloader does not work until the current segment is full. - -You can configure the threshold using CLI tools, such as pulsar-admin. - -#### Example - -This example sets the filesystem offloader threshold to 10 MB using pulsar-admin. - -```bash - -pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace - -``` - -:::tip - -For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#set-offload-threshold). - -::: - -### Run filesystem offloader manually - -For individual topics, you can trigger the filesystem offloader manually using one of the following methods: - -- Use the REST endpoint. - -- Use CLI tools (such as pulsar-admin). - -To manually trigger the filesystem offloader via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are offloaded to the filesystem until the threshold is no longer exceeded. Older segments are offloaded first. - -#### Example - -- This example manually run the filesystem offloader using pulsar-admin. - - ```bash - - pulsar-admin topics offload --size-threshold 10M persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1 - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#offload). - - ::: - -- This example checks filesystem offloader status using pulsar-admin. - - ```bash - - pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload is currently running - - ``` - - To wait for the filesystem to complete the job, add the `-w` flag. - - ```bash - - pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Offload was a success - - ``` - - If there is an error in the offloading operation, the error is propagated to the `pulsar-admin topics offload-status` command. - - ```bash - - pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Error in offload - null - - Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. 
(Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g= - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#offload-status). - - ::: - -## Tutorial - -This section provides step-by-step instructions on how to use the filesystem offloader to move data from Pulsar to Hadoop Distributed File System (HDFS) or Network File system (NFS). - -````mdx-code-block - - - -To move data from Pulsar to HDFS, follow these steps. - -### Step 1: Prepare the HDFS environment - -This tutorial sets up a Hadoop single node cluster and uses Hadoop 3.2.1. - -:::tip - -For details about how to set up a Hadoop single node cluster, see [here](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html). - -::: - -1. Download and uncompress Hadoop 3.2.1. - - ``` - - wget https://mirrors.bfsu.edu.cn/apache/hadoop/common/hadoop-3.2.1/hadoop-3.2.1.tar.gz - - tar -zxvf hadoop-3.2.1.tar.gz -C $HADOOP_HOME - - ``` - -2. Configure Hadoop. - - ``` - - # $HADOOP_HOME/etc/hadoop/core-site.xml - - - fs.defaultFS - hdfs://localhost:9000 - - - - # $HADOOP_HOME/etc/hadoop/hdfs-site.xml - - - dfs.replication - 1 - - - - ``` - -3. Set passphraseless ssh. - - ``` - - # Now check that you can ssh to the localhost without a passphrase: - $ ssh localhost - # If you cannot ssh to localhost without a passphrase, execute the following commands - $ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa - $ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys - $ chmod 0600 ~/.ssh/authorized_keys - - ``` - -4. Start HDFS. - - ``` - - # don't execute this command repeatedly, repeat execute will cauld the clusterId of the datanode is not consistent with namenode - $HADOOP_HOME/bin/hadoop namenode -format - $HADOOP_HOME/sbin/start-dfs.sh - - ``` - -5. Navigate to the [HDFS website](http://localhost:9870/). - - You can see the **Overview** page. - - ![](/assets/FileSystem-1.png) - - 1. At the top navigation bar, click **Datanodes** to check DataNode information. - - ![](/assets/FileSystem-2.png) - - 2. Click **HTTP Address** to get more detailed information about localhost:9866. - - As can be seen below, the size of **Capacity Used** is 4 KB, which is the initial value. - - ![](/assets/FileSystem-3.png) - -### Step 2: Install the filesystem offloader - -For details, see [installation](#installation). - -### Step 3: Configure the filesystem offloader - -As indicated in the [configuration](#configuration) section, you need to configure some properties for the filesystem offloader driver before using it. This tutorial assumes that you have configured the filesystem offloader driver as below and run Pulsar in **standalone** mode. - -Set the following configurations in the `conf/standalone.conf` file. - -```conf - -managedLedgerOffloadDriver=filesystem -fileSystemURI=hdfs://127.0.0.1:9000 -fileSystemProfilePath=conf/filesystem_offload_core_site.xml - -``` - -:::note - -For testing purposes, you can set the following two configurations to speed up ledger rollover, but it is not recommended that you set them in the production environment. 

:::

```

managedLedgerMinLedgerRolloverTimeMinutes=1
managedLedgerMaxEntriesPerLedger=100

```

</TabItem>
<TabItem value="NFS">

:::note

In this section, it is assumed that you have enabled the NFS service and set the shared path of your NFS service. In this section, `/Users/test` is used as the shared path of the NFS service.

:::

To offload data to NFS, follow these steps.

### Step 1: Install the filesystem offloader

For details, see [installation](#installation).

### Step 2: Mount your NFS to your local filesystem

This example mounts the NFS shared path */Users/test* to the local path */Users/pulsar_nfs*.

```

mount 192.168.0.103:/Users/test /Users/pulsar_nfs

```

### Step 3: Configure the filesystem offloader driver

As indicated in the [configuration](#configuration) section, you need to configure some properties for the filesystem offloader driver before using it. This tutorial assumes that you have configured the filesystem offloader driver as below and run Pulsar in **standalone** mode.

1. Set the following configurations in the `conf/standalone.conf` file.

   ```conf

   managedLedgerOffloadDriver=filesystem
   fileSystemProfilePath=conf/filesystem_offload_core_site.xml

   ```

2. Modify the *filesystem_offload_core_site.xml* as follows.

   ```

   <property>
       <name>fs.defaultFS</name>
       <value>file:///</value>
   </property>

   <property>
       <name>hadoop.tmp.dir</name>
       <value>file:///Users/pulsar_nfs</value>
   </property>

   <property>
       <name>io.file.buffer.size</name>
       <value>4096</value>
   </property>

   <property>
       <name>io.seqfile.compress.blocksize</name>
       <value>1000000</value>
   </property>

   <property>
       <name>io.seqfile.compression.type</name>
       <value>BLOCK</value>
   </property>

   <property>
       <name>io.map.index.interval</name>
       <value>128</value>
   </property>

   ```

</TabItem>

</Tabs>
````

### Step 4: Offload data from BookKeeper to filesystem

Execute the following commands in the directory where you downloaded the Pulsar tarball. For example, `~/path/to/apache-pulsar-2.5.1`.

1. Start Pulsar standalone.

   ```

   bin/pulsar standalone -a 127.0.0.1

   ```

2. To ensure the generated data is not deleted immediately, it is recommended to set a [retention policy](cookbooks-retention-expiry.md#retention-policies), which can be either a **size** limit or a **time** limit. The larger the value you set for the retention policy, the longer the data can be retained.

   ```

   bin/pulsar-admin namespaces set-retention public/default --size 100M --time 2d

   ```

   :::tip

   For more information about the `pulsar-admin namespaces set-retention options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#set-retention).

   :::

3. Produce data using pulsar-client.

   ```

   bin/pulsar-client produce -m "Hello FileSystem Offloader" -n 1000 public/default/fs-test

   ```

4. The offloading operation starts after a ledger rollover is triggered. To ensure the data is offloaded successfully, it is recommended that you wait until several ledger rollovers are triggered. In this case, you might need to wait for a while. You can check the ledger status using pulsar-admin.

   ```

   bin/pulsar-admin topics stats-internal public/default/fs-test

   ```

   **Output**

   The data of the ledger 696 is not offloaded.

   ```

   {
     "version": 1,
     "creationDate": "2020-06-16T21:46:25.807+08:00",
     "modificationDate": "2020-06-16T21:46:25.821+08:00",
     "ledgers": [
       {
         "ledgerId": 696,
         "isOffloaded": false
       }
     ],
     "cursors": {}
   }

   ```

5. Wait a moment and send more messages to the topic.

   ```

   bin/pulsar-client produce -m "Hello FileSystem Offloader" -n 1000 public/default/fs-test

   ```

6. Check the ledger status using pulsar-admin.
- - ``` - - bin/pulsar-admin topics stats-internal public/default/fs-test - - ``` - - **Output** - - The ledger 696 is rolled over. - - ``` - - { - "version": 2, - "creationDate": "2020-06-16T21:46:25.807+08:00", - "modificationDate": "2020-06-16T21:48:52.288+08:00", - "ledgers": [ - { - "ledgerId": 696, - "entries": 1001, - "size": 81695, - "isOffloaded": false - }, - { - "ledgerId": 697, - "isOffloaded": false - } - ], - "cursors": {} - } - - ``` - -7. Trigger the offloading operation manually using pulsarctl. - - ``` - - bin/pulsar-admin topics offload -s 0 public/default/fs-test - - ``` - - **Output** - - Data in ledgers before the ledge 697 is offloaded. - - ``` - - # offload info, the ledgers before 697 will be offloaded - Offload triggered for persistent://public/default/fs-test3 for messages before 697:0:-1 - - ``` - -8. Check the ledger status using pulsarctl. - - ``` - - bin/pulsar-admin topics stats-internal public/default/fs-test - - ``` - - **Output** - - The data of the ledger 696 is offloaded. - - ``` - - { - "version": 4, - "creationDate": "2020-06-16T21:46:25.807+08:00", - "modificationDate": "2020-06-16T21:52:13.25+08:00", - "ledgers": [ - { - "ledgerId": 696, - "entries": 1001, - "size": 81695, - "isOffloaded": true - }, - { - "ledgerId": 697, - "isOffloaded": false - } - ], - "cursors": {} - } - - ``` - - And the **Capacity Used** is changed from 4 KB to 116.46 KB. - - ![](/assets/FileSystem-8.png) \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/tiered-storage-gcs.md b/site2/website/versioned_docs/version-2.10.1-deprecated/tiered-storage-gcs.md deleted file mode 100644 index df1b4f6fb7edbf..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/tiered-storage-gcs.md +++ /dev/null @@ -1,319 +0,0 @@ ---- -id: tiered-storage-gcs -title: Use GCS offloader with Pulsar -sidebar_label: "GCS offloader" -original_id: tiered-storage-gcs ---- - -This chapter guides you through every step of installing and configuring the GCS offloader and using it with Pulsar. - -## Installation - -Follow the steps below to install the GCS offloader. - -### Prerequisite - -- Pulsar: 2.4.2 or later versions - -### Step - -This example uses Pulsar 2.5.1. - -1. Download the Pulsar tarball using one of the following ways: - - * Download from the [Apache mirror](https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz) - - * Download from the Pulsar [download page](/download) - - * Use [wget](https://www.gnu.org/software/wget) - - ```shell - - wget https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz - - ``` - -2. Download and untar the Pulsar offloaders package. - - ```bash - - wget https://downloads.apache.org/pulsar/pulsar-2.5.1/apache-pulsar-offloaders-2.5.1-bin.tar.gz - - tar xvfz apache-pulsar-offloaders-2.5.1-bin.tar.gz - - ``` - - :::note - - * If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's Pulsar directory. - * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8S and DCOS), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - - ::: - -3. Copy the Pulsar offloaders as `offloaders` in the Pulsar directory. 
- - ``` - - mv apache-pulsar-offloaders-2.5.1/offloaders apache-pulsar-2.5.1/offloaders - - ls offloaders - - ``` - - **Output** - - As shown in the output, Pulsar uses [Apache jclouds](https://jclouds.apache.org) to support GCS and AWS S3 for long term storage. - - ``` - - tiered-storage-file-system-2.5.1.nar - tiered-storage-jcloud-2.5.1.nar - - ``` - -## Configuration - -:::note - -Before offloading data from BookKeeper to GCS, you need to configure some properties of the GCS offloader driver. - -::: - -Besides, you can also configure the GCS offloader to run it automatically or trigger it manually. - -### Configure GCS offloader driver - -You can configure GCS offloader driver in the configuration file `broker.conf` or `standalone.conf`. - -- **Required** configurations are as below. - - **Required** configuration | Description | Example value - |---|---|--- - `managedLedgerOffloadDriver`|Offloader driver name, which is case-insensitive.|google-cloud-storage - `offloadersDirectory`|Offloader directory|offloaders - `gcsManagedLedgerOffloadBucket`|Bucket|pulsar-topic-offload - `gcsManagedLedgerOffloadRegion`|Bucket region|europe-west3 - `gcsManagedLedgerOffloadServiceAccountKeyFile`|Authentication |/Users/user-name/Downloads/project-804d5e6a6f33.json - -- **Optional** configurations are as below. - - Optional configuration|Description|Example value - |---|---|--- - `gcsManagedLedgerOffloadReadBufferSizeInBytes`|Size of block read|1 MB - `gcsManagedLedgerOffloadMaxBlockSizeInBytes`|Size of block write|64 MB - `managedLedgerMinLedgerRolloverTimeMinutes`|Minimum time between ledger rollover for a topic.|2 - `managedLedgerMaxEntriesPerLedger`|The max number of entries to append to a ledger before triggering a rollover.|5000 - -#### Bucket (required) - -A bucket is a basic container that holds your data. Everything you store in GCS **must** be contained in a bucket. You can use a bucket to organize your data and control access to your data, but unlike directory and folder, you can not nest a bucket. - -##### Example - -This example names the bucket as _pulsar-topic-offload_. - -```conf - -gcsManagedLedgerOffloadBucket=pulsar-topic-offload - -``` - -#### Bucket region (required) - -Bucket region is the region where a bucket is located. If a bucket region is not specified, the **default** region (`us multi-regional location`) is used. - -:::tip - -For more information about bucket location, see [here](https://cloud.google.com/storage/docs/bucket-locations). - -::: - -##### Example - -This example sets the bucket region as _europe-west3_. - -``` - -gcsManagedLedgerOffloadRegion=europe-west3 - -``` - -#### Authentication (required) - -To enable a broker access GCS, you need to configure `gcsManagedLedgerOffloadServiceAccountKeyFile` in the configuration file `broker.conf`. - -`gcsManagedLedgerOffloadServiceAccountKeyFile` is -a JSON file, containing GCS credentials of a service account. - -##### Example - -To generate service account credentials or view the public credentials that you've already generated, follow the following steps. - -1. Navigate to the [Service accounts page](https://console.developers.google.com/iam-admin/serviceaccounts). - -2. Select a project or create a new one. - -3. Click **Create service account**. - -4. In the **Create service account** window, type a name for the service account and select **Furnish a new private key**. 
- - If you want to [grant G Suite domain-wide authority](https://developers.google.com/identity/protocols/OAuth2ServiceAccount#delegatingauthority) to the service account, select **Enable G Suite Domain-wide Delegation**. - -5. Click **Create**. - - :::note - - Make sure the service account you create has permission to operate GCS, you need to assign **Storage Admin** permission to your service account [here](https://cloud.google.com/storage/docs/access-control/iam). - - ::: - -6. You can get the following information and set this in `broker.conf`. - - ```conf - - gcsManagedLedgerOffloadServiceAccountKeyFile="/Users/user-name/Downloads/project-804d5e6a6f33.json" - - ``` - - :::tip - - - For more information about how to create `gcsManagedLedgerOffloadServiceAccountKeyFile`, see [here](https://support.google.com/googleapi/answer/6158849). - - For more information about Google Cloud IAM, see [here](https://cloud.google.com/storage/docs/access-control/iam). - - ::: - -#### Size of block read/write - -You can configure the size of a request sent to or read from GCS in the configuration file `broker.conf`. - -Configuration|Description -|---|--- -`gcsManagedLedgerOffloadReadBufferSizeInBytes`|Block size for each individual read when reading back data from GCS.

    The **default** value is 1 MB.
    `gcsManagedLedgerOffloadMaxBlockSizeInBytes`|Maximum size of a "part" sent during a multipart upload to GCS. It **cannot** be smaller than 5 MB.

    The **default** value is 64 MB. - -### Configure GCS offloader to run automatically - -Namespace policy can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic reaches the threshold, an offload operation is triggered automatically. - -Threshold value|Action -|---|--- -> 0 | It triggers the offloading operation if the topic storage reaches its threshold. -= 0|It causes a broker to offload data as soon as possible. -< 0 |It disables automatic offloading operation. - -Automatic offloading runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, offloader does not work until the current segment is full. - -You can configure the threshold size using CLI tools, such as pulsar-admin. - -The offload configurations in `broker.conf` and `standalone.conf` are used for the namespaces that do not have namespace level offload policies. Each namespace can have its own offload policy. If you want to set offload policy for each namespace, use the command [`pulsar-admin namespaces set-offload-policies options`](/tools/pulsar-admin/) command. - -#### Example - -This example sets the GCS offloader threshold size to 10 MB using pulsar-admin. - -```bash - -pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace - -``` - -:::tip - -For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#set-offload-threshold). - -::: - -### Configure GCS offloader to run manually - -For individual topics, you can trigger GCS offloader manually using one of the following methods: - -- Use REST endpoint. - -- Use CLI tools (such as pulsar-admin). - - To trigger the GCS via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are moved to GCS until the threshold is no longer exceeded. Older segments are moved first. - -#### Example - -- This example triggers the GCS offloader to run manually using pulsar-admin with the command `pulsar-admin topics offload (topic-name) (threshold)`. - - ```bash - - pulsar-admin topics offload persistent://my-tenant/my-namespace/topic1 10M - - ``` - - **Output** - - ```bash - - Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1 - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#offload). - - ::: - -- This example checks the GCS offloader status using pulsar-admin with the command `pulsar-admin topics offload-status options`. - - ```bash - - pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload is currently running - - ``` - - To wait for GCS to complete the job, add the `-w` flag. - - ```bash - - pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Offload was a success - - ``` - - If there is an error in offloading, the error is propagated to the `pulsar-admin topics offload-status` command. 
- - ```bash - - pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Error in offload - null - - Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g= - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#offload-status). - - ::: - -## Tutorial - -For the complete and step-by-step instructions on how to use the GCS offloader with Pulsar, see [here](https://hub.streamnative.io/offloaders/gcs/2.5.1#usage). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/tiered-storage-overview.md b/site2/website/versioned_docs/version-2.10.1-deprecated/tiered-storage-overview.md deleted file mode 100644 index c635034f463b46..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/tiered-storage-overview.md +++ /dev/null @@ -1,52 +0,0 @@ ---- -id: tiered-storage-overview -title: Overview of tiered storage -sidebar_label: "Overview" -original_id: tiered-storage-overview ---- - -Pulsar's **Tiered Storage** feature allows older backlog data to be moved from BookKeeper to long term and cheaper storage, while still allowing clients to access the backlog as if nothing has changed. - -* Tiered storage uses [Apache jclouds](https://jclouds.apache.org) to support [Amazon S3](https://aws.amazon.com/s3/) and [GCS (Google Cloud Storage)](https://cloud.google.com/storage/) for long term storage. - - With jclouds, it is easy to add support for more [cloud storage providers](https://jclouds.apache.org/reference/providers/#blobstore-providers) in the future. - - :::tip - - - For more information about how to use the AWS S3 offloader with Pulsar, see [here](tiered-storage-aws.md). - - - For more information about how to use the GCS offloader with Pulsar, see [here](tiered-storage-gcs.md). - - ::: - -* Tiered storage uses [Apache Hadoop](http://hadoop.apache.org/) to support filesystems for long term storage. - - With Hadoop, it is easy to add support for more filesystems in the future. - - :::tip - - For more information about how to use the filesystem offloader with Pulsar, see [here](tiered-storage-filesystem.md). - - ::: - -## When to use tiered storage? - -Tiered storage should be used when you have a topic for which you want to keep a very long backlog for a long time. - -For example, if you have a topic containing user actions which you use to train your recommendation systems, you may want to keep that data for a long time, so that if you change your recommendation algorithm, you can rerun it against your full user history. - -## How does tiered storage work? - -A topic in Pulsar is backed by a **log**, known as a **managed ledger**. This log is composed of an ordered list of segments. Pulsar only writes to the final segment of the log. All previous segments are sealed. The data within the segment is immutable. 
This is known as a **segment oriented architecture**. - -![Tiered storage](/assets/pulsar-tiered-storage.png "Tiered Storage") - -The tiered storage offloading mechanism takes advantage of segment oriented architecture. When offloading is requested, the segments of the log are copied one-by-one to tiered storage. All segments of the log (apart from the current segment) written to tiered storage can be offloaded. - -Data written to BookKeeper is replicated to 3 physical machines by default. However, once a segment is sealed in BookKeeper, it becomes immutable and can be copied to long term storage. Long term storage can achieve cost savings by using mechanisms such as [Reed-Solomon error correction](https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction) to require fewer physical copies of data. - -Before offloading ledgers to long term storage, you need to configure buckets, credentials, and other properties for the cloud storage service. Additionally, Pulsar uses multi-part objects to upload the segment data and brokers may crash while uploading the data. It is recommended that you add a life cycle rule for your bucket to expire incomplete multi-part upload after a day or two days to avoid getting charged for incomplete uploads. Moreover, you can trigger the offloading operation manually (via REST API or CLI) or automatically (via CLI). - -After offloading ledgers to long term storage, you can still query data in the offloaded ledgers with Pulsar SQL. - -For more information about tiered storage for Pulsar topics, see [here](https://github.com/apache/pulsar/wiki/PIP-17:-Tiered-storage-for-Pulsar-topics). diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/transaction-api.md b/site2/website/versioned_docs/version-2.10.1-deprecated/transaction-api.md deleted file mode 100644 index 25a99479639bdb..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/transaction-api.md +++ /dev/null @@ -1,172 +0,0 @@ ---- -id: transactions-api -title: Transactions API -sidebar_label: "Transactions API" -original_id: transactions-api ---- - -All messages in a transaction are available only to consumers after the transaction has been committed. If a transaction has been aborted, all the writes and acknowledgments in this transaction roll back. - -## Prerequisites -1. To enable transactions in Pulsar, you need to configure the parameter in `broker.conf` file or `standalone.conf` file. - -``` - -transactionCoordinatorEnabled=true - -``` - -2. Initialize transaction coordinator metadata, so the transaction coordinators can leverage advantages of the partitioned topic, such as load balance. - -``` - -bin/pulsar initialize-transaction-coordinator-metadata -cs 127.0.0.1:2181 -c standalone - -``` - -After initializing transaction coordinator metadata, you can use the transactions API. The following APIs are available. - -## Initialize Pulsar client - -You can enable transaction for transaction client and initialize transaction coordinator client. - -``` - -PulsarClient pulsarClient = PulsarClient.builder() - .serviceUrl("pulsar://localhost:6650") - .enableTransaction(true) - .build(); - -``` - -## Start transactions -You can start transaction in the following way. - -``` - -Transaction txn = pulsarClient - .newTransaction() - .withTransactionTimeout(5, TimeUnit.MINUTES) - .build() - .get(); - -``` - -## Produce transaction messages - -A transaction parameter is required when producing new transaction messages. 
The semantic of the transaction messages in Pulsar is `read-committed`, so the consumer cannot receive the ongoing transaction messages before the transaction is committed. - -``` - -producer.newMessage(txn).value("Hello Pulsar Transaction".getBytes()).sendAsync(); - -``` - -## Acknowledge the messages with the transaction - -The transaction acknowledgement requires a transaction parameter. The transaction acknowledgement marks the messages state to pending-ack state. When the transaction is committed, the pending-ack state becomes ack state. If the transaction is aborted, the pending-ack state becomes unack state. - -``` - -Message message = consumer.receive(); -consumer.acknowledgeAsync(message.getMessageId(), txn); - -``` - -## Commit transactions - -When the transaction is committed, consumers receive the transaction messages and the pending-ack state becomes ack state. - -``` - -txn.commit().get(); - -``` - -## Abort transaction - -When the transaction is aborted, the transaction acknowledgement is canceled and the pending-ack messages are redelivered. - -``` - -txn.abort().get(); - -``` - -### Example -The following example shows how messages are processed in transaction. - -``` - -PulsarClient pulsarClient = PulsarClient.builder() - .serviceUrl(getPulsarServiceList().get(0).getBrokerServiceUrl()) - .statsInterval(0, TimeUnit.SECONDS) - .enableTransaction(true) - .build(); - -String sourceTopic = "public/default/source-topic"; -String sinkTopic = "public/default/sink-topic"; - -Producer sourceProducer = pulsarClient - .newProducer(Schema.STRING) - .topic(sourceTopic) - .create(); -sourceProducer.newMessage().value("hello pulsar transaction").sendAsync(); - -Consumer sourceConsumer = pulsarClient - .newConsumer(Schema.STRING) - .topic(sourceTopic) - .subscriptionName("test") - .subscriptionType(SubscriptionType.Shared) - .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest) - .subscribe(); - -Producer sinkProducer = pulsarClient - .newProducer(Schema.STRING) - .topic(sinkTopic) - .sendTimeout(0, TimeUnit.MILLISECONDS) - .create(); - -Transaction txn = pulsarClient - .newTransaction() - .withTransactionTimeout(5, TimeUnit.MINUTES) - .build() - .get(); - -// source message acknowledgement and sink message produce belong to one transaction, -// they are combined into an atomic operation. -Message message = sourceConsumer.receive(); -sourceConsumer.acknowledgeAsync(message.getMessageId(), txn); -sinkProducer.newMessage(txn).value("sink data").sendAsync(); - -txn.commit().get(); - -``` - -## Enable batch messages in transactions - -To enable batch messages in transactions, you need to enable the batch index acknowledgement feature. The transaction acks check whether the batch index acknowledgement conflicts. - -To enable batch index acknowledgement, you need to set `acknowledgmentAtBatchIndexLevelEnabled` to `true` in the `broker.conf` or `standalone.conf` file. - -``` - -acknowledgmentAtBatchIndexLevelEnabled=true - -``` - -And then you need to call the `enableBatchIndexAcknowledgment(true)` method in the consumer builder. 
- -``` - -Consumer sinkConsumer = pulsarClient - .newConsumer() - .topic(transferTopic) - .subscriptionName("sink-topic") - .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest) - .subscriptionType(SubscriptionType.Shared) - .enableBatchIndexAcknowledgment(true) // enable batch index acknowledgement - .subscribe(); - -``` - diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/transaction-guarantee.md b/site2/website/versioned_docs/version-2.10.1-deprecated/transaction-guarantee.md deleted file mode 100644 index 9db2d254e159f6..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/transaction-guarantee.md +++ /dev/null @@ -1,17 +0,0 @@ ---- -id: transactions-guarantee -title: Transactions Guarantee -sidebar_label: "Transactions Guarantee" -original_id: transactions-guarantee ---- - -Pulsar transactions support the following guarantee. - -## Atomic multi-partition writes and multi-subscription acknowledges -Transactions enable atomic writes to multiple topics and partitions. A batch of messages in a transaction can be received from, produced to, and acknowledged by many partitions. All the operations involved in a transaction succeed or fail as a single unit. - -## Read transactional message -All the messages in a transaction are available only for consumers until the transaction is committed. - -## Acknowledge transactional message -A message is acknowledged successfully only once by a consumer under the subscription when acknowledging the message with the transaction ID. \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/txn-how.md b/site2/website/versioned_docs/version-2.10.1-deprecated/txn-how.md deleted file mode 100644 index add072448aeb34..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/txn-how.md +++ /dev/null @@ -1,151 +0,0 @@ ---- -id: txn-how -title: How transactions work? -sidebar_label: "How transactions work?" -original_id: txn-how ---- - -This section describes transaction components and how the components work together. For the complete design details, see [PIP-31: Transactional Streaming](https://docs.google.com/document/d/145VYp09JKTw9jAT-7yNyFU255FptB2_B2Fye100ZXDI/edit#heading=h.bm5ainqxosrx). - -## Key concept - -It is important to know the following key concepts, which is a prerequisite for understanding how transactions work. - -### Transaction coordinator - -The transaction coordinator (TC) is a module running inside a Pulsar broker. - -* It maintains the entire life cycle of transactions and prevents a transaction from getting into an incorrect status. - -* It handles transaction timeout, and ensures that the transaction is aborted after a transaction timeout. - -### Transaction log - -All the transaction metadata persists in the transaction log. The transaction log is backed by a Pulsar topic. If the transaction coordinator crashes, it can restore the transaction metadata from the transaction log. - -The transaction log stores the transaction status rather than actual messages in the transaction (the actual messages are stored in the actual topic partitions). - -### Transaction buffer - -Messages produced to a topic partition within a transaction are stored in the transaction buffer (TB) of that topic partition. The messages in the transaction buffer are not visible to consumers until the transactions are committed. The messages in the transaction buffer are discarded when the transactions are aborted. 
- -Transaction buffer stores all ongoing and aborted transactions in memory. All messages are sent to the actual partitioned Pulsar topics. After transactions are committed, the messages in the transaction buffer are materialized (visible) to consumers. When the transactions are aborted, the messages in the transaction buffer are discarded. - -### Transaction ID - -Transaction ID (TxnID) identifies a unique transaction in Pulsar. The transaction ID is 128-bit. The highest 16 bits are reserved for the ID of the transaction coordinator, and the remaining bits are used for monotonically increasing numbers in each transaction coordinator. It is easy to locate the transaction crash with the TxnID. - -### Pending acknowledge state - -Pending acknowledge state maintains message acknowledgments within a transaction before a transaction completes. If a message is in the pending acknowledge state, the message cannot be acknowledged by other transactions until the message is removed from the pending acknowledge state. - -The pending acknowledge state is persisted to the pending acknowledge log (cursor ledger). A new broker can restore the state from the pending acknowledge log to ensure the acknowledgement is not lost. - -## Data flow - -At a high level, the data flow can be split into several steps: - -1. Begin a transaction. - -2. Publish messages with a transaction. - -3. Acknowledge messages with a transaction. - -4. End a transaction. - -To help you debug or tune the transaction for better performance, review the following diagrams and descriptions. - -### 1. Begin a transaction - -Before introducing the transaction in Pulsar, a producer is created and then messages are sent to brokers and stored in data logs. - -![](/assets/txn-3.png) - -Let’s walk through the steps for _beginning a transaction_. - -| Step | Description | -| --- | --- | -| 1.1 | The first step is that the Pulsar client finds the transaction coordinator. | -| 1.2 | The transaction coordinator allocates a transaction ID for the transaction. In the transaction log, the transaction is logged with its transaction ID and status (OPEN), which ensures the transaction status is persisted regardless of transaction coordinator crashes. | -| 1.3 | The transaction log sends the result of persisting the transaction ID to the transaction coordinator. | -| 1.4 | After the transaction status entry is logged, the transaction coordinator brings the transaction ID back to the Pulsar client. | - -### 2. Publish messages with a transaction - -In this stage, the Pulsar client enters a transaction loop, repeating the `consume-process-produce` operation for all the messages that comprise the transaction. This is a long phase and is potentially composed of multiple produce and acknowledgement requests. - -![](/assets/txn-4.png) - -Let’s walk through the steps for _publishing messages with a transaction_. - -| Step | Description | -| --- | --- | -| 2.1.1 | Before the Pulsar client produces messages to a new topic partition, it sends a request to the transaction coordinator to add the partition to the transaction. | -| 2.1.2 | The transaction coordinator logs the partition changes of the transaction into the transaction log for durability, which ensures the transaction coordinator knows all the partitions that a transaction is handling. The transaction coordinator can commit or abort changes on each partition at the end-partition phase. 
| -| 2.1.3 | The transaction log sends the result of logging the new partition (used for producing messages) to the transaction coordinator. | -| 2.1.4 | The transaction coordinator sends the result of adding a new produced partition to the transaction. | -| 2.2.1 | The Pulsar client starts producing messages to partitions. The flow of this part is the same as the normal flow of producing messages except that the batch of messages produced by a transaction contains transaction IDs. | -| 2.2.2 | The broker writes messages to a partition. | - -### 3. Acknowledge messages with a transaction - -In this phase, the Pulsar client sends a request to the transaction coordinator and a new subscription is acknowledged as a part of a transaction. - -![](/assets/txn-5.png) - -Let’s walk through the steps for _acknowledging messages with a transaction_. - -| Step | Description | -| --- | --- | -| 3.1.1 | The Pulsar client sends a request to add an acknowledged subscription to the transaction coordinator. | -| 3.1.2 | The transaction coordinator logs the addition of subscription, which ensures that it knows all subscriptions handled by a transaction and can commit or abort changes on each subscription at the end phase. | -| 3.1.3 | The transaction log sends the result of logging the new partition (used for acknowledging messages) to the transaction coordinator. | -| 3.1.4 | The transaction coordinator sends the result of adding the new acknowledged partition to the transaction. | -| 3.2 | The Pulsar client acknowledges messages on the subscription. The flow of this part is the same as the normal flow of acknowledging messages except that the acknowledged request carries a transaction ID. | -| 3.3 | The broker receiving the acknowledgement request checks if the acknowledgment belongs to a transaction or not. | - -### 4. End a transaction - -At the end of a transaction, the Pulsar client decides to commit or abort the transaction. The transaction can be aborted when a conflict is detected on acknowledging messages. - -#### 4.1 End transaction request - -When the Pulsar client finishes a transaction, it issues an end transaction request. - -![](/assets/txn-6.png) - -Let’s walk through the steps for _ending the transaction_. - -| Step | Description | -| --- | --- | -| 4.1.1 | The Pulsar client issues an end transaction request (with a field indicating whether the transaction is to be committed or aborted) to the transaction coordinator. | -| 4.1.2 | The transaction coordinator writes a COMMITTING or ABORTING message to its transaction log. | -| 4.1.3 | The transaction log sends the result of logging the committing or aborting status. | - -#### 4.2 Finalize a transaction - -The transaction coordinator starts the process of committing or aborting messages to all the partitions involved in this transaction. - -![](/assets/txn-7.png) - -Let’s walk through the steps for _finalizing a transaction_. - -| Step | Description | -| --- | --- | -| 4.2.1 | The transaction coordinator commits transactions on subscriptions and commits transactions on partitions at the same time. | -| 4.2.2 | The broker (produce) writes produced committed markers to the actual partitions. At the same time, the broker (ack) writes acked committed marks to the subscription pending ack partitions. | -| 4.2.3 | The data log sends the result of writing produced committed marks to the broker. At the same time, pending ack data log sends the result of writing acked committed marks to the broker. The cursor moves to the next position. 
| - -#### 4.3 Mark a transaction as COMMITTED or ABORTED - -The transaction coordinator writes the final transaction status to the transaction log to complete the transaction. - -![](/assets/txn-8.png) - -Let’s walk through the steps for _marking a transaction as COMMITTED or ABORTED_. - -| Step | Description | -| --- | --- | -| 4.3.1 | After all produced messages and acknowledgements to all partitions involved in this transaction have been successfully committed or aborted, the transaction coordinator writes the final COMMITTED or ABORTED transaction status messages to its transaction log, indicating that the transaction is complete. All the messages associated with the transaction in its transaction log can be safely removed. | -| 4.3.2 | The transaction log sends the result of the committed transaction to the transaction coordinator. | -| 4.3.3 | The transaction coordinator sends the result of the committed transaction to the Pulsar client. | diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/txn-monitor.md b/site2/website/versioned_docs/version-2.10.1-deprecated/txn-monitor.md deleted file mode 100644 index 08e4a1be320367..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/txn-monitor.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -id: txn-monitor -title: How to monitor transactions? -sidebar_label: "How to monitor transactions?" -original_id: txn-monitor ---- - -You can monitor the status of the transactions in Prometheus and Grafana using the [transaction metrics](reference-metrics.md#pulsar-transaction). - -For how to configure Prometheus and Grafana, see [here](deploy-monitoring.md). diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/txn-use.md b/site2/website/versioned_docs/version-2.10.1-deprecated/txn-use.md deleted file mode 100644 index 8468917f3ba348..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/txn-use.md +++ /dev/null @@ -1,105 +0,0 @@ ---- -id: txn-use -title: How to use transactions? -sidebar_label: "How to use transactions?" -original_id: txn-use ---- - -## Transaction API - -The transaction feature is primarily a server-side and protocol-level feature. You can use the transaction feature via the [transaction API](/api/admin/), which is available in **Pulsar 2.8.0 or later**. - -To use the transaction API, you do not need any additional settings in the Pulsar client. **By default**, transactions is **disabled**. - -Currently, transaction API is only available for **Java** clients. Support for other language clients will be added in the future releases. - -## Quick start - -This section provides an example of how to use the transaction API to send and receive messages in a Java client. - -1. Start Pulsar 2.8.0 or later. - -2. Enable transaction. - - Change the configuration in the `broker.conf` file. - - ``` - - transactionCoordinatorEnabled=true - - ``` - - If you want to enable batch messages in transactions, follow the steps below. - - Set `acknowledgmentAtBatchIndexLevelEnabled` to `true` in the `broker.conf` or `standalone.conf` file. - - ``` - - acknowledgmentAtBatchIndexLevelEnabled=true - - ``` - -3. Initialize transaction coordinator metadata. - - The transaction coordinator can leverage the advantages of partitioned topics (such as load balance). - - **Input** - - ``` - - bin/pulsar initialize-transaction-coordinator-metadata -cs 127.0.0.1:2181 -c standalone - - ``` - - **Output** - - ``` - - Transaction coordinator metadata setup success - - ``` - -4. Initialize a Pulsar client. 
- - ``` - - PulsarClient client = PulsarClient.builder() - - .serviceUrl("pulsar://localhost:6650") - - .enableTransaction(true) - - .build(); - - ``` - -Now you can start using the transaction API to send and receive messages. Below is an example of a `consume-process-produce` application written in Java. - -![](/assets/txn-9.png) - -Let’s walk through this example step by step. - -| Step | Description | -| --- | --- | -| 1. Start a transaction. | The application opens a new transaction by calling PulsarClient.newTransaction. It specifics the transaction timeout as 1 minute. If the transaction is not committed within 1 minute, the transaction is automatically aborted. | -| 2. Receive messages from topics. | The application creates two normal consumers to receive messages from topic input-topic-1 and input-topic-2 respectively. | -| 3. Publish messages to topics with the transaction. | The application creates two producers to produce the resulting messages to the output topic _output-topic-1_ and output-topic-2 respectively. The application applies the processing logic and generates two output messages. The application sends those two output messages as part of the transaction opened in the first step via Producer.newMessage(Transaction). | -| 4. Acknowledge the messages with the transaction. | In the same transaction, the application acknowledges the two input messages. | -| 5. Commit the transaction. | The application commits the transaction by calling Transaction.commit() on the open transaction. The commit operation ensures the two input messages are marked as acknowledged and the two output messages are written successfully to the output topics. | - -[1] Example of enabling batch messages ack in transactions in the consumer builder. - -``` - -Consumer sinkConsumer = pulsarClient - .newConsumer() - .topic(transferTopic) - .subscriptionName("sink-topic") - -.subscriptionInitialPosition(SubscriptionInitialPosition.Earliest) - .subscriptionType(SubscriptionType.Shared) - .enableBatchIndexAcknowledgment(true) // enable batch index acknowledgement - .subscribe(); - -``` - diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/txn-what.md b/site2/website/versioned_docs/version-2.10.1-deprecated/txn-what.md deleted file mode 100644 index f8bf3eb7e56b80..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/txn-what.md +++ /dev/null @@ -1,60 +0,0 @@ ---- -id: txn-what -title: What are transactions? -sidebar_label: "What are transactions?" -original_id: txn-what ---- - -Transactions strengthen the message delivery semantics of Apache Pulsar and [processing guarantees of Pulsar Functions](functions-overview.md#processing-guarantees). The Pulsar Transaction API supports atomic writes and acknowledgments across multiple topics. - -Transactions allow: - -- A producer to send a batch of messages to multiple topics where all messages in the batch are eventually visible to any consumer, or none are ever visible to consumers. - -- End-to-end exactly-once semantics (execute a `consume-process-produce` operation exactly once). - -## Transaction semantics - -Pulsar transactions have the following semantics: - -* All operations within a transaction are committed as a single unit. - - * Either all messages are committed, or none of them are. - - * Each message is written or processed exactly once, without data loss or duplicates (even in the event of failures). - - * If a transaction is aborted, all the writes and acknowledgments in this transaction rollback. 
- -* A group of messages in a transaction can be received from, produced to, and acknowledged by multiple partitions. - - * Consumers are only allowed to read committed (acked) messages. In other words, the broker does not deliver transactional messages which are part of an open transaction or messages which are part of an aborted transaction. - - * Message writes across multiple partitions are atomic. - - * Message acks across multiple subscriptions are atomic. A message is acked successfully only once by a consumer under the subscription when acknowledging the message with the transaction ID. - -## Transactions and stream processing - -Stream processing on Pulsar is a `consume-process-produce` operation on Pulsar topics: - -* `Consume`: a source operator that runs a Pulsar consumer reads messages from one or multiple Pulsar topics. - -* `Process`: a processing operator transforms the messages. - -* `Produce`: a sink operator that runs a Pulsar producer writes the resulting messages to one or multiple Pulsar topics. - -![](/assets/txn-2.png) - -Pulsar transactions support end-to-end exactly-once stream processing, which means messages are not lost from a source operator and messages are not duplicated to a sink operator. - -## Use case - -Prior to Pulsar 2.8.0, there was no easy way to build stream processing applications with Pulsar to achieve exactly-once processing guarantees. With the transaction introduced in Pulsar 2.8.0, the following services support exactly-once semantics: - -* [Pulsar Flink connector](https://flink.apache.org/2021/01/07/pulsar-flink-connector-270.html) - - Prior to Pulsar 2.8.0, if you want to build stream applications using Pulsar and Flink, the Pulsar Flink connector only supported exactly-once source connector and at-least-once sink connector, which means the highest processing guarantee for end-to-end was at-least-once, there was possibility that the resulting messages from streaming applications produce duplicated messages to the resulting topics in Pulsar. - - With the transaction introduced in Pulsar 2.8.0, the Pulsar Flink sink connector can support exactly-once semantics by implementing the designated `TwoPhaseCommitSinkFunction` and hooking up the Flink sink message lifecycle with Pulsar transaction API. - -* Support for Pulsar Functions and other connectors will be added in the future releases. diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/txn-why.md b/site2/website/versioned_docs/version-2.10.1-deprecated/txn-why.md deleted file mode 100644 index e7273379f79493..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/txn-why.md +++ /dev/null @@ -1,45 +0,0 @@ ---- -id: txn-why -title: Why transactions? -sidebar_label: "Why transactions?" -original_id: txn-why ---- - -Pulsar transactions (txn) enable event streaming applications to consume, process, and produce messages in one atomic operation. The reason for developing this feature can be summarized as below. - -## Demand of stream processing - -The demand for stream processing applications with stronger processing guarantees has grown along with the rise of stream processing. For example, in the financial industry, financial institutions use stream processing engines to process debits and credits for users. This type of use case requires that every message is processed exactly once, without exception. 
-
-In other words, if a stream processing application consumes message A and
-produces the result as a message B (B = f(A)), then exactly-once processing
-means that A is marked as consumed if and only if B is
-successfully produced, and vice versa.
-
-![](/assets/txn-1.png)
-
-The Pulsar transactions API strengthens the message delivery semantics and the processing guarantees for stream processing. It enables stream processing applications to consume, process, and produce messages in one atomic operation. That means a batch of messages in a transaction can be received from, produced to, and acknowledged by many topic partitions. All the operations involved in a transaction succeed or fail as a single unit.
-
-## Limitation of idempotent producer
-
-Avoiding data loss or duplication can be achieved by using the Pulsar idempotent producer, but it does not provide guarantees for writes across multiple partitions.
-
-In Pulsar, the highest level of message delivery guarantee is using an [idempotent producer](concepts-messaging.md#producer-idempotency) with exactly-once semantics on a single partition, that is, each message is persisted exactly once without data loss or duplication. However, there are some limitations in this solution:
-
-- Due to the monotonically increasing sequence ID, this solution only works on a single partition and within a single producer session (that is, for producing one message), so there is no atomicity when producing multiple messages to one or multiple partitions.
-
-  In this case, if there are failures (for example, client / broker / bookie crashes, network failure, and more) in the process of producing and receiving messages, messages are re-processed and re-delivered, which may cause data loss or data duplication:
-
-  - For the producer: if the producer retries sending messages, some messages are persisted multiple times; if the producer does not retry sending messages, some messages are persisted once and other messages are lost.
-
-  - For the consumer: since the consumer does not know whether the broker has received its acknowledgments, the consumer may fail to retry sending acks, which causes it to receive duplicate messages.
-
-- Similarly, Pulsar Functions only guarantee exactly-once semantics for an idempotent function on a single event; they cannot guarantee that processing multiple events or producing multiple results happens exactly once.
-
-  For example, if a function accepts multiple events and produces one result (for example, a window function), the function may fail between producing the result and acknowledging the incoming messages, or even between acknowledging individual events, which causes all (or some) incoming messages to be re-delivered and reprocessed, and a new result to be generated.
-
-  However, many scenarios need atomic guarantees across multiple partitions and sessions.
-
-- Consumers need to rely on extra mechanisms to acknowledge (ack) messages exactly once.
-
-  For example, consumers are required to store the MessageID along with its acked state. After the topic is unloaded, the subscription can recover the acked state of this MessageID in memory when the topic is loaded again.
\ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.10.1-deprecated/window-functions-context.md b/site2/website/versioned_docs/version-2.10.1-deprecated/window-functions-context.md deleted file mode 100644 index f80fea57989ef0..00000000000000 --- a/site2/website/versioned_docs/version-2.10.1-deprecated/window-functions-context.md +++ /dev/null @@ -1,581 +0,0 @@ ---- -id: window-functions-context -title: Window Functions Context -sidebar_label: "Window Functions: Context" -original_id: window-functions-context ---- - -Java SDK provides access to a **window context object** that can be used by a window function. This context object provides a wide variety of information and functionality for Pulsar window functions as below. - -- [Spec](#spec) - - * Names of all input topics and the output topic associated with the function. - * Tenant and namespace associated with the function. - * Pulsar window function name, ID, and version. - * ID of the Pulsar function instance running the window function. - * Number of instances that invoke the window function. - * Built-in type or custom class name of the output schema. - -- [Logger](#logger) - - * Logger object used by the window function, which can be used to create window function log messages. - -- [User config](#user-config) - - * Access to arbitrary user configuration values. - -- [Routing](#routing) - - * Routing is supported in Pulsar window functions. Pulsar window functions send messages to arbitrary topics as per the `publish` interface. - -- [Metrics](#metrics) - - * Interface for recording metrics. - -- [State storage](#state-storage) - - * Interface for storing and retrieving state in [state storage](#state-storage). - -## Spec - -Spec contains the basic information of a function. - -### Get input topics - -The `getInputTopics` method gets the **name list** of all input topics. - -This example demonstrates how to get the name list of all input topics in a Java window function. - -```java - -public class GetInputTopicsWindowFunction implements WindowFunction { - @Override - public Void process(Collection> inputs, WindowContext context) throws Exception { - Collection inputTopics = context.getInputTopics(); - System.out.println(inputTopics); - - return null; - } - -} - -``` - -### Get output topic - -The `getOutputTopic` method gets the **name of a topic** to which the message is sent. - -This example demonstrates how to get the name of an output topic in a Java window function. - -```java - -public class GetOutputTopicWindowFunction implements WindowFunction { - @Override - public Void process(Collection> inputs, WindowContext context) throws Exception { - String outputTopic = context.getOutputTopic(); - System.out.println(outputTopic); - - return null; - } -} - -``` - -### Get tenant - -The `getTenant` method gets the tenant name associated with the window function. - -This example demonstrates how to get the tenant name in a Java window function. - -```java - -public class GetTenantWindowFunction implements WindowFunction { - @Override - public Void process(Collection> inputs, WindowContext context) throws Exception { - String tenant = context.getTenant(); - System.out.println(tenant); - - return null; - } - -} - -``` - -### Get namespace - -The `getNamespace` method gets the namespace associated with the window function. - -This example demonstrates how to get the namespace in a Java window function. 
- -```java - -public class GetNamespaceWindowFunction implements WindowFunction { - @Override - public Void process(Collection> inputs, WindowContext context) throws Exception { - String ns = context.getNamespace(); - System.out.println(ns); - - return null; - } - -} - -``` - -### Get function name - -The `getFunctionName` method gets the window function name. - -This example demonstrates how to get the function name in a Java window function. - -```java - -public class GetNameOfWindowFunction implements WindowFunction { - @Override - public Void process(Collection> inputs, WindowContext context) throws Exception { - String functionName = context.getFunctionName(); - System.out.println(functionName); - - return null; - } - -} - -``` - -### Get function ID - -The `getFunctionId` method gets the window function ID. - -This example demonstrates how to get the function ID in a Java window function. - -```java - -public class GetFunctionIDWindowFunction implements WindowFunction { - @Override - public Void process(Collection> inputs, WindowContext context) throws Exception { - String functionID = context.getFunctionId(); - System.out.println(functionID); - - return null; - } - -} - -``` - -### Get function version - -The `getFunctionVersion` method gets the window function version. - -This example demonstrates how to get the function version of a Java window function. - -```java - -public class GetVersionOfWindowFunction implements WindowFunction { - @Override - public Void process(Collection> inputs, WindowContext context) throws Exception { - String functionVersion = context.getFunctionVersion(); - System.out.println(functionVersion); - - return null; - } - -} - -``` - -### Get instance ID - -The `getInstanceId` method gets the instance ID of a window function. - -This example demonstrates how to get the instance ID in a Java window function. - -```java - -public class GetInstanceIDWindowFunction implements WindowFunction { - @Override - public Void process(Collection> inputs, WindowContext context) throws Exception { - int instanceId = context.getInstanceId(); - System.out.println(instanceId); - - return null; - } - -} - -``` - -### Get num instances - -The `getNumInstances` method gets the number of instances that invoke the window function. - -This example demonstrates how to get the number of instances in a Java window function. - -```java - -public class GetNumInstancesWindowFunction implements WindowFunction { - @Override - public Void process(Collection> inputs, WindowContext context) throws Exception { - int numInstances = context.getNumInstances(); - System.out.println(numInstances); - - return null; - } - -} - -``` - -### Get output schema type - -The `getOutputSchemaType` method gets the built-in type or custom class name of the output schema. - -This example demonstrates how to get the output schema type of a Java window function. - -```java - -public class GetOutputSchemaTypeWindowFunction implements WindowFunction { - - @Override - public Void process(Collection> inputs, WindowContext context) throws Exception { - String schemaType = context.getOutputSchemaType(); - System.out.println(schemaType); - - return null; - } -} - -``` - -## Logger - -Pulsar window functions using Java SDK has access to an [SLF4j](https://www.slf4j.org/) [`Logger`](https://www.slf4j.org/api/org/apache/log4j/Logger.html) object that can be used to produce logs at the chosen log level. 
-
-This example logs the incoming records at the `INFO` level in a Java window function.
-
-```java
-
-import java.util.Collection;
-import org.apache.pulsar.functions.api.Record;
-import org.apache.pulsar.functions.api.WindowContext;
-import org.apache.pulsar.functions.api.WindowFunction;
-import org.slf4j.Logger;
-
-public class LoggingWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        Logger log = context.getLogger();
-        for (Record<String> record : inputs) {
-            log.info(record + "-window-log");
-        }
-        return null;
-    }
-
-}
-
-```
-
-If you need your function to produce logs, specify a log topic when creating or running the function.
-
-```bash
-
-bin/pulsar-admin functions create \
-  --jar my-functions.jar \
-  --classname my.package.LoggingFunction \
-  --log-topic persistent://public/default/logging-function-logs \
-  # Other function configs
-
-```
-
-You can access all logs produced by `LoggingFunction` via the `persistent://public/default/logging-function-logs` topic.
-
-## Metrics
-
-Pulsar window functions can publish arbitrary metrics to the metrics interface, where they can be queried.
-
-:::note
-
-If a Pulsar window function uses the language-native interface for Java, that function is not able to publish metrics and stats to Pulsar.
-
-:::
-
-You can record metrics using the context object on a per-key basis.
-
-This example records the event time of each message under the `MessageEventTime` key every time the function processes a message in a Java window function.
-
-```java
-
-import java.util.Collection;
-import org.apache.pulsar.functions.api.Record;
-import org.apache.pulsar.functions.api.WindowContext;
-import org.apache.pulsar.functions.api.WindowFunction;
-
-
-/**
- * Example function that keeps track of
- * the event time of each message sent.
- */
-public class UserMetricWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-
-        for (Record<String> record : inputs) {
-            if (record.getEventTime().isPresent()) {
-                context.recordMetric("MessageEventTime", record.getEventTime().get().doubleValue());
-            }
-        }
-
-        return null;
-    }
-}
-
-```
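-
-A per-key counter style is also possible. The following is a minimal sketch, not taken from the original examples: the `process-count` and `elevens-count` metric names and the integer-typed input are illustrative assumptions.
-
-```java
-
-import java.util.Collection;
-import org.apache.pulsar.functions.api.Record;
-import org.apache.pulsar.functions.api.WindowContext;
-import org.apache.pulsar.functions.api.WindowFunction;
-
-public class CountingMetricWindowFunction implements WindowFunction<Integer, Void> {
-    @Override
-    public Void process(Collection<Record<Integer>> inputs, WindowContext context) throws Exception {
-        for (Record<Integer> record : inputs) {
-            // record one sample per processed message
-            context.recordMetric("process-count", 1);
-            if (record.getValue() != null && record.getValue() == 11) {
-                // record a separate metric for a particular value
-                context.recordMetric("elevens-count", 1);
-            }
-        }
-        return null;
-    }
-}
-
-```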
-
-## User config
-
-When you run or update Pulsar Functions that are created using the SDK, you can pass arbitrary key/value pairs to them with the `--user-config` flag. Key/value pairs **must** be specified as JSON.
-
-This example passes a user-configured key/value pair to a function.
-
-```bash
-
-bin/pulsar-admin functions create \
-  --name word-filter \
-  --user-config '{"forbidden-word":"rosebud"}' \
-  # Other function configs
-
-```
-
-### API
-You can use the following APIs to get user-defined information for window functions.
-#### getUserConfigMap
-
-The `getUserConfigMap` API gets a map of all user-defined key/value configurations for the window function.
-
-```java
-
-/**
- * Get a map of all user-defined key/value configs for the function.
- *
- * @return The full map of user-defined config values
- */
-Map<String, Object> getUserConfigMap();
-
-```
-
-#### getUserConfigValue
-
-The `getUserConfigValue` API gets a user-defined key/value.
-
-```java
-
-/**
- * Get any user-defined key/value.
- *
- * @param key The key
- * @return The Optional value specified by the user for that key.
- */
-Optional<Object> getUserConfigValue(String key);
-
-```
-
-#### getUserConfigValueOrDefault
-
-The `getUserConfigValueOrDefault` API gets a user-defined key/value or a default value if none is present.
-
-```java
-
-/**
- * Get any user-defined key/value or a default value if none is present.
- *
- * @param key
- * @param defaultValue
- * @return Either the user config value associated with a given key or a supplied default value
- */
-Object getUserConfigValueOrDefault(String key, Object defaultValue);
-
-```
-
-This example demonstrates how to access key/value pairs provided to Pulsar window functions.
-
-The Java SDK context object enables you to access key/value pairs provided to Pulsar window functions via the command line (as JSON).
-
-:::tip
-
-For all key/value pairs passed to Java window functions, both the `key` and the `value` are `String`. To set the value to a different type, you need to deserialize it from the `String` type.
-
-:::
-
-This example passes a key/value pair to a Java window function.
-
-```bash
-
-bin/pulsar-admin functions create \
-  --user-config '{"word-of-the-day":"verdure"}' \
-  # Other function configs
-
-```
-
-This example accesses that value in a Java window function.
-
-The `UserConfigWindowFunction` function returns the configured word of the day every time the function is invoked (which means every time a message arrives). The user config of `word-of-the-day` is changed **only** when the function is updated with a new config value, for example, via the command line tool or REST API.
-
-```java
-
-import java.util.Collection;
-import java.util.Optional;
-import org.apache.pulsar.functions.api.Record;
-import org.apache.pulsar.functions.api.WindowContext;
-import org.apache.pulsar.functions.api.WindowFunction;
-
-public class UserConfigWindowFunction implements WindowFunction<String, String> {
-    @Override
-    public String process(Collection<Record<String>> input, WindowContext context) throws Exception {
-        Optional<Object> whatToWrite = context.getUserConfigValue("word-of-the-day");
-        if (whatToWrite.isPresent()) {
-            return (String) whatToWrite.get();
-        } else {
-            return "Not a nice way";
-        }
-    }
-
-}
-
-```
-
-If no value is provided, you can access the entire user config map or set a default value.
-
-```java
-
-// Get the whole config map
-Map<String, Object> allConfigs = context.getUserConfigMap();
-
-// Get value or resort to default
-String wotd = (String) context.getUserConfigValueOrDefault("word-of-the-day", "perspicacious");
-
-```
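-
-Because every value arrives as a `String`, a non-string setting has to be parsed inside the function. A minimal, hypothetical sketch (the `max-retries` key is illustrative and not part of any Pulsar API):
-
-```java
-
-// Assumes the function was created with --user-config '{"max-retries":"3"}'
-int maxRetries = Integer.parseInt(
-        (String) context.getUserConfigValueOrDefault("max-retries", "1"));
-
-```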
-
-## Routing
-
-You can use the `context.publish()` interface to publish as many results as you want.
-
-This example shows how the `PublishWindowFunction` class uses the built-in `publish` function in the context to publish messages to the `publishTopic` in a Java function.
-
-```java
-
-public class PublishWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> input, WindowContext context) throws Exception {
-        String publishTopic = (String) context.getUserConfigValueOrDefault("publish-topic", "publishtopic");
-        String output = String.format("%s!", input);
-        context.publish(publishTopic, output);
-
-        return null;
-    }
-
-}
-
-```
-
-## State storage
-
-Pulsar window functions use [Apache BookKeeper](https://bookkeeper.apache.org) as a state storage interface. An Apache Pulsar installation (including the standalone installation) includes the deployment of BookKeeper bookies.
-
-Apache Pulsar integrates with the Apache BookKeeper table service to store the state for functions. For example, the `WordCount` function can store its `counters` state in the BookKeeper table service via the Pulsar Functions state APIs.
-
-States are key-value pairs, where the key is a string and the value is arbitrary binary data; counters are stored as 64-bit big-endian binary values. Keys are scoped to an individual Pulsar Function and shared between instances of that function.
-
-Currently, Pulsar window functions expose a Java API to access, update, and manage states. These APIs are available in the context object when you use Java SDK functions.
-
-| Java API| Description
-|---|---
-|`incrCounter`|Increases a built-in distributed counter referred to by key.
-|`getCounter`|Gets the counter value for the key.
-|`putState`|Updates the state value for the key.
-
-You can use the following APIs to access, update, and manage states in Java window functions.
-
-#### incrCounter
-
-The `incrCounter` API increases a built-in distributed counter referred to by key.
-
-Applications use the `incrCounter` API to change the counter of a given `key` by the given `amount`. If the `key` does not exist, a new key is created.
-
-```java
-
-    /**
-     * Increment the builtin distributed counter referred by key
-     * @param key The name of the key
-     * @param amount The amount to be incremented
-     */
-    void incrCounter(String key, long amount);
-
-```
-
-#### getCounter
-
-The `getCounter` API gets the counter value for the key.
-
-Applications use the `getCounter` API to retrieve the counter of a given `key` changed by the `incrCounter` API.
-
-```java
-
-    /**
-     * Retrieve the counter value for the key.
-     *
-     * @param key name of the key
-     * @return the amount of the counter value for this key
-     */
-    long getCounter(String key);
-
-```
-
-Besides the `getCounter` API, Pulsar also exposes a general key/value API (`putState`) for functions to store general key/value state.
-
-#### putState
-
-The `putState` API updates the state value for the key.
-
-```java
-
-    /**
-     * Update the state value for the key.
-     *
-     * @param key name of the key
-     * @param value state value of the key
-     */
-    void putState(String key, ByteBuffer value);
-
-```
-
-This example demonstrates how applications store states in Pulsar window functions.
-
-The logic of the `WordCountWindowFunction` is simple and straightforward.
-
-1. The function first splits the received string into multiple words using the regex `\\.`.
-
-2. For each `word`, the function increments the corresponding `counter` by 1 via `incrCounter(key, amount)`.
-
-```java
-
-import java.util.Arrays;
-import java.util.Collection;
-import org.apache.pulsar.functions.api.Record;
-import org.apache.pulsar.functions.api.WindowContext;
-import org.apache.pulsar.functions.api.WindowFunction;
-
-public class WordCountWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        for (Record<String> input : inputs) {
-            Arrays.asList(input.getValue().split("\\.")).forEach(word -> context.incrCounter(word, 1));
-        }
-        return null;
-    }
-}
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/about.md b/site2/website/versioned_docs/version-2.8.0-deprecated/about.md
deleted file mode 100644
index 1eafb414c2f365..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/about.md
+++ /dev/null
@@ -1,56 +0,0 @@
----
-slug: /
-id: about
-title: Welcome to the doc portal!
-sidebar_label: "About"
----
-
-import BlockLinks from "@site/src/components/BlockLinks";
-import BlockLink from "@site/src/components/BlockLink";
-import { docUrl } from "@site/src/utils/index";
-
-
-# Welcome to the doc portal!
-***
-
-This portal holds a variety of support documents to help you work with Pulsar.
If you’re a beginner, there are tutorials and explainers to help you understand Pulsar and how it works. - -If you’re an experienced coder, review this page to learn the easiest way to access the specific content you’re looking for. - -## Get Started Now - - - - - - - - - -## Navigation -*** - -There are several ways to get around in the doc portal. The index navigation pane is a table of contents for the entire archive. The archive is divided into sections, like chapters in a book. Click the title of the topic to view it. - -In-context links provide an easy way to immediately reference related topics. Click the underlined term to view the topic. - -Links to related topics can be found at the bottom of each topic page. Click the link to view the topic. - -![Page Linking](/assets/page-linking.png) - -## Continuous Improvement -*** -As you probably know, we are working on a new user experience for our documentation portal that will make learning about and building on top of Apache Pulsar a much better experience. Whether you need overview concepts, how-to procedures, curated guides or quick references, we’re building content to support it. This welcome page is just the first step. We will be providing updates every month. - -## Help Improve These Documents -*** - -You’ll notice an Edit button at the bottom and top of each page. Click it to open a landing page with instructions for requesting changes to posted documents. These are your resources. Participation is not only welcomed – it’s essential! - -## Join the Community! -*** - -The Pulsar community on github is active, passionate, and knowledgeable. Join discussions, voice opinions, suggest features, and dive into the code itself. Find your Pulsar family here at [apache/pulsar](https://github.com/apache/pulsar). - -An equally passionate community can be found in the [Pulsar Slack channel](https://apache-pulsar.slack.com/). You’ll need an invitation to join, but many Github Pulsar community members are Slack members too. Join, hang out, learn, and make some new friends. - diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/adaptors-kafka.md b/site2/website/versioned_docs/version-2.8.0-deprecated/adaptors-kafka.md deleted file mode 100644 index ad0d886a9e04b8..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/adaptors-kafka.md +++ /dev/null @@ -1,274 +0,0 @@ ---- -id: adaptors-kafka -title: Pulsar adaptor for Apache Kafka -sidebar_label: "Kafka client wrapper" -original_id: adaptors-kafka ---- - - -Pulsar provides an easy option for applications that are currently written using the [Apache Kafka](http://kafka.apache.org) Java client API. - -## Using the Pulsar Kafka compatibility wrapper - -In an existing application, change the regular Kafka client dependency and replace it with the Pulsar Kafka wrapper. Remove the following dependency in `pom.xml`: - -```xml - - - org.apache.kafka - kafka-clients - 0.10.2.1 - - -``` - -Then include this dependency for the Pulsar Kafka wrapper: - -```xml - - - org.apache.pulsar - pulsar-client-kafka - @pulsar:version@ - - -``` - -With the new dependency, the existing code works without any changes. You need to adjust the configuration, and make sure it points the -producers and consumers to Pulsar service rather than Kafka, and uses a particular -Pulsar topic. 
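-
-As a quick illustration of that configuration change, a sketch follows; the complete producer and consumer examples below are the reference, and the broker address is an assumption (a local standalone Pulsar).
-
-```java
-
-Properties props = new Properties();
-// A Pulsar service URL replaces the Kafka bootstrap servers.
-props.put("bootstrap.servers", "pulsar://localhost:6650");
-// Topics use Pulsar naming.
-String topic = "persistent://public/default/my-topic";
-
-```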
- -## Using the Pulsar Kafka compatibility wrapper together with existing kafka client - -When migrating from Kafka to Pulsar, the application might use the original kafka client -and the pulsar kafka wrapper together during migration. You should consider using the -unshaded pulsar kafka client wrapper. - -```xml - - - org.apache.pulsar - pulsar-client-kafka-original - @pulsar:version@ - - -``` - -When using this dependency, construct producers using `org.apache.kafka.clients.producer.PulsarKafkaProducer` -instead of `org.apache.kafka.clients.producer.KafkaProducer` and `org.apache.kafka.clients.producer.PulsarKafkaConsumer` for consumers. - -## Producer example - -```java - -// Topic needs to be a regular Pulsar topic -String topic = "persistent://public/default/my-topic"; - -Properties props = new Properties(); -// Point to a Pulsar service -props.put("bootstrap.servers", "pulsar://localhost:6650"); - -props.put("key.serializer", IntegerSerializer.class.getName()); -props.put("value.serializer", StringSerializer.class.getName()); - -Producer producer = new KafkaProducer(props); - -for (int i = 0; i < 10; i++) { - producer.send(new ProducerRecord(topic, i, "hello-" + i)); - log.info("Message {} sent successfully", i); -} - -producer.close(); - -``` - -## Consumer example - -```java - -String topic = "persistent://public/default/my-topic"; - -Properties props = new Properties(); -// Point to a Pulsar service -props.put("bootstrap.servers", "pulsar://localhost:6650"); -props.put("group.id", "my-subscription-name"); -props.put("enable.auto.commit", "false"); -props.put("key.deserializer", IntegerDeserializer.class.getName()); -props.put("value.deserializer", StringDeserializer.class.getName()); - -Consumer consumer = new KafkaConsumer(props); -consumer.subscribe(Arrays.asList(topic)); - -while (true) { - ConsumerRecords records = consumer.poll(100); - records.forEach(record -> { - log.info("Received record: {}", record); - }); - - // Commit last offset - consumer.commitSync(); -} - -``` - -## Complete Examples - -You can find the complete producer and consumer examples [here](https://github.com/apache/pulsar-adapters/tree/master/pulsar-client-kafka-compat/pulsar-client-kafka-tests/src/test/java/org/apache/pulsar/client/kafka/compat/examples). - -## Compatibility matrix - -Currently the Pulsar Kafka wrapper supports most of the operations offered by the Kafka API. - -### Producer - -APIs: - -| Producer Method | Supported | Notes | -|:------------------------------------------------------------------------------|:----------|:-------------------------------------------------------------------------| -| `Future send(ProducerRecord record)` | Yes | | -| `Future send(ProducerRecord record, Callback callback)` | Yes | | -| `void flush()` | Yes | | -| `List partitionsFor(String topic)` | No | | -| `Map metrics()` | No | | -| `void close()` | Yes | | -| `void close(long timeout, TimeUnit unit)` | Yes | | - -Properties: - -| Config property | Supported | Notes | -|:----------------------------------------|:----------|:------------------------------------------------------------------------------| -| `acks` | Ignored | Durability and quorum writes are configured at the namespace level | -| `auto.offset.reset` | Yes | It uses a default value of `earliest` if you do not give a specific setting. | -| `batch.size` | Ignored | | -| `bootstrap.servers` | Yes | | -| `buffer.memory` | Ignored | | -| `client.id` | Ignored | | -| `compression.type` | Yes | Allows `gzip` and `lz4`. No `snappy`. 
| -| `connections.max.idle.ms` | Yes | Only support up to 2,147,483,647,000(Integer.MAX_VALUE * 1000) ms of idle time| -| `interceptor.classes` | Yes | | -| `key.serializer` | Yes | | -| `linger.ms` | Yes | Controls the group commit time when batching messages | -| `max.block.ms` | Ignored | | -| `max.in.flight.requests.per.connection` | Ignored | In Pulsar ordering is maintained even with multiple requests in flight | -| `max.request.size` | Ignored | | -| `metric.reporters` | Ignored | | -| `metrics.num.samples` | Ignored | | -| `metrics.sample.window.ms` | Ignored | | -| `partitioner.class` | Yes | | -| `receive.buffer.bytes` | Ignored | | -| `reconnect.backoff.ms` | Ignored | | -| `request.timeout.ms` | Ignored | | -| `retries` | Ignored | Pulsar client retries with exponential backoff until the send timeout expires. | -| `send.buffer.bytes` | Ignored | | -| `timeout.ms` | Yes | | -| `value.serializer` | Yes | | - - -### Consumer - -The following table lists consumer APIs. - -| Consumer Method | Supported | Notes | -|:--------------------------------------------------------------------------------------------------------|:----------|:------| -| `Set assignment()` | No | | -| `Set subscription()` | Yes | | -| `void subscribe(Collection topics)` | Yes | | -| `void subscribe(Collection topics, ConsumerRebalanceListener callback)` | No | | -| `void assign(Collection partitions)` | No | | -| `void subscribe(Pattern pattern, ConsumerRebalanceListener callback)` | No | | -| `void unsubscribe()` | Yes | | -| `ConsumerRecords poll(long timeoutMillis)` | Yes | | -| `void commitSync()` | Yes | | -| `void commitSync(Map offsets)` | Yes | | -| `void commitAsync()` | Yes | | -| `void commitAsync(OffsetCommitCallback callback)` | Yes | | -| `void commitAsync(Map offsets, OffsetCommitCallback callback)` | Yes | | -| `void seek(TopicPartition partition, long offset)` | Yes | | -| `void seekToBeginning(Collection partitions)` | Yes | | -| `void seekToEnd(Collection partitions)` | Yes | | -| `long position(TopicPartition partition)` | Yes | | -| `OffsetAndMetadata committed(TopicPartition partition)` | Yes | | -| `Map metrics()` | No | | -| `List partitionsFor(String topic)` | No | | -| `Map> listTopics()` | No | | -| `Set paused()` | No | | -| `void pause(Collection partitions)` | No | | -| `void resume(Collection partitions)` | No | | -| `Map offsetsForTimes(Map timestampsToSearch)` | No | | -| `Map beginningOffsets(Collection partitions)` | No | | -| `Map endOffsets(Collection partitions)` | No | | -| `void close()` | Yes | | -| `void close(long timeout, TimeUnit unit)` | Yes | | -| `void wakeup()` | No | | - -Properties: - -| Config property | Supported | Notes | -|:--------------------------------|:----------|:------------------------------------------------------| -| `group.id` | Yes | Maps to a Pulsar subscription name | -| `max.poll.records` | Yes | | -| `max.poll.interval.ms` | Ignored | Messages are "pushed" from broker | -| `session.timeout.ms` | Ignored | | -| `heartbeat.interval.ms` | Ignored | | -| `bootstrap.servers` | Yes | Needs to point to a single Pulsar service URL | -| `enable.auto.commit` | Yes | | -| `auto.commit.interval.ms` | Ignored | With auto-commit, acks are sent immediately to broker | -| `partition.assignment.strategy` | Ignored | | -| `auto.offset.reset` | Yes | Only support earliest and latest. 
| -| `fetch.min.bytes` | Ignored | | -| `fetch.max.bytes` | Ignored | | -| `fetch.max.wait.ms` | Ignored | | -| `interceptor.classes` | Yes | | -| `metadata.max.age.ms` | Ignored | | -| `max.partition.fetch.bytes` | Ignored | | -| `send.buffer.bytes` | Ignored | | -| `receive.buffer.bytes` | Ignored | | -| `client.id` | Ignored | | - - -## Customize Pulsar configurations - -You can configure Pulsar authentication provider directly from the Kafka properties. - -### Pulsar client properties - -| Config property | Default | Notes | -|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------| -| [`pulsar.authentication.class`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setAuthentication-org.apache.pulsar.client.api.Authentication-) | | Configure to auth provider. For example, `org.apache.pulsar.client.impl.auth.AuthenticationTls`.| -| [`pulsar.authentication.params.map`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setAuthentication-java.lang.String-java.util.Map-) | | Map which represents parameters for the Authentication-Plugin. | -| [`pulsar.authentication.params.string`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setAuthentication-java.lang.String-java.lang.String-) | | String which represents parameters for the Authentication-Plugin, for example, `key1:val1,key2:val2`. | -| [`pulsar.use.tls`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setUseTls-boolean-) | `false` | Enable TLS transport encryption. | -| [`pulsar.tls.trust.certs.file.path`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setTlsTrustCertsFilePath-java.lang.String-) | | Path for the TLS trust certificate store. | -| [`pulsar.tls.allow.insecure.connection`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setTlsAllowInsecureConnection-boolean-) | `false` | Accept self-signed certificates from brokers. | -| [`pulsar.operation.timeout.ms`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setOperationTimeout-int-java.util.concurrent.TimeUnit-) | `30000` | General operations timeout. | -| [`pulsar.stats.interval.seconds`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setStatsInterval-long-java.util.concurrent.TimeUnit-) | `60` | Pulsar client lib stats printing interval. | -| [`pulsar.num.io.threads`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setIoThreads-int-) | `1` | The number of Netty IO threads to use. | -| [`pulsar.connections.per.broker`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setConnectionsPerBroker-int-) | `1` | The maximum number of connection to each broker. | -| [`pulsar.use.tcp.nodelay`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setUseTcpNoDelay-boolean-) | `true` | TCP no-delay. | -| [`pulsar.concurrent.lookup.requests`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setConcurrentLookupRequest-int-) | `50000` | The maximum number of concurrent topic lookups. 
| -| [`pulsar.max.number.rejected.request.per.connection`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setMaxNumberOfRejectedRequestPerConnection-int-) | `50` | The threshold of errors to forcefully close a connection. | -| [`pulsar.keepalive.interval.ms`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientBuilder.html#keepAliveInterval-int-java.util.concurrent.TimeUnit-)| `30000` | Keep alive interval for each client-broker-connection. | - - -### Pulsar producer properties - -| Config property | Default | Notes | -|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------| -| [`pulsar.producer.name`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setProducerName-java.lang.String-) | | Specify the producer name. | -| [`pulsar.producer.initial.sequence.id`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setInitialSequenceId-long-) | | Specify baseline for sequence ID of this producer. | -| [`pulsar.producer.max.pending.messages`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setMaxPendingMessages-int-) | `1000` | Set the maximum size of the message queue pending to receive an acknowledgment from the broker. | -| [`pulsar.producer.max.pending.messages.across.partitions`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setMaxPendingMessagesAcrossPartitions-int-) | `50000` | Set the maximum number of pending messages across all the partitions. | -| [`pulsar.producer.batching.enabled`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setBatchingEnabled-boolean-) | `true` | Control whether automatic batching of messages is enabled for the producer. | -| [`pulsar.producer.batching.max.messages`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setBatchingMaxMessages-int-) | `1000` | The maximum number of messages in a batch. | -| [`pulsar.block.if.producer.queue.full`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setBlockIfQueueFull-boolean-) | | Specify the block producer if queue is full. | - - -### Pulsar consumer Properties - -| Config property | Default | Notes | -|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------| -| [`pulsar.consumer.name`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setConsumerName-java.lang.String-) | | Specify the consumer name. | -| [`pulsar.consumer.receiver.queue.size`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setReceiverQueueSize-int-) | 1000 | Set the size of the consumer receiver queue. | -| [`pulsar.consumer.acknowledgments.group.time.millis`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#acknowledgmentGroupTime-long-java.util.concurrent.TimeUnit-) | 100 | Set the maximum amount of group time for consumers to send the acknowledgments to the broker. 
| -| [`pulsar.consumer.total.receiver.queue.size.across.partitions`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setMaxTotalReceiverQueueSizeAcrossPartitions-int-) | 50000 | Set the maximum size of the total receiver queue across partitions. | -| [`pulsar.consumer.subscription.topics.mode`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#subscriptionTopicsMode-Mode-) | PersistentOnly | Set the subscription topic mode for consumers. | diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/adaptors-spark.md b/site2/website/versioned_docs/version-2.8.0-deprecated/adaptors-spark.md deleted file mode 100644 index e14f13b5d4b079..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/adaptors-spark.md +++ /dev/null @@ -1,91 +0,0 @@ ---- -id: adaptors-spark -title: Pulsar adaptor for Apache Spark -sidebar_label: "Apache Spark" -original_id: adaptors-spark ---- - -## Spark Streaming receiver -The Spark Streaming receiver for Pulsar is a custom receiver that enables Apache [Spark Streaming](https://spark.apache.org/streaming/) to receive raw data from Pulsar. - -An application can receive data in [Resilient Distributed Dataset](https://spark.apache.org/docs/latest/programming-guide.html#resilient-distributed-datasets-rdds) (RDD) format via the Spark Streaming receiver and can process it in a variety of ways. - -### Prerequisites - -To use the receiver, include a dependency for the `pulsar-spark` library in your Java configuration. - -#### Maven - -If you're using Maven, add this to your `pom.xml`: - -```xml - - -@pulsar:version@ - - - - org.apache.pulsar - pulsar-spark - ${pulsar.version} - - -``` - -#### Gradle - -If you're using Gradle, add this to your `build.gradle` file: - -```groovy - -def pulsarVersion = "@pulsar:version@" - -dependencies { - compile group: 'org.apache.pulsar', name: 'pulsar-spark', version: pulsarVersion -} - -``` - -### Usage - -Pass an instance of `SparkStreamingPulsarReceiver` to the `receiverStream` method in `JavaStreamingContext`: - -```java - - String serviceUrl = "pulsar://localhost:6650/"; - String topic = "persistent://public/default/test_src"; - String subs = "test_sub"; - - SparkConf sparkConf = new SparkConf().setMaster("local[*]").setAppName("Pulsar Spark Example"); - - JavaStreamingContext jsc = new JavaStreamingContext(sparkConf, Durations.seconds(60)); - - ConsumerConfigurationData pulsarConf = new ConsumerConfigurationData(); - - Set set = new HashSet(); - set.add(topic); - pulsarConf.setTopicNames(set); - pulsarConf.setSubscriptionName(subs); - - SparkStreamingPulsarReceiver pulsarReceiver = new SparkStreamingPulsarReceiver( - serviceUrl, - pulsarConf, - new AuthenticationDisabled()); - - JavaReceiverInputDStream lineDStream = jsc.receiverStream(pulsarReceiver); - -``` - -For a complete example, click [here](https://github.com/apache/pulsar-adapters/blob/master/examples/spark/src/main/java/org/apache/spark/streaming/receiver/example/SparkStreamingPulsarReceiverExample.java). In this example, the number of messages that contain the string "Pulsar" in received messages is counted. - -Note that if needed, other Pulsar authentication classes can be used. 
For example, in order to use a token during authentication the following parameters for the `SparkStreamingPulsarReceiver` constructor can be set: - -```java - -SparkStreamingPulsarReceiver pulsarReceiver = new SparkStreamingPulsarReceiver( - serviceUrl, - pulsarConf, - new AuthenticationToken("token:")); - -``` - diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/adaptors-storm.md b/site2/website/versioned_docs/version-2.8.0-deprecated/adaptors-storm.md deleted file mode 100644 index 76d507164777db..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/adaptors-storm.md +++ /dev/null @@ -1,96 +0,0 @@ ---- -id: adaptors-storm -title: Pulsar adaptor for Apache Storm -sidebar_label: "Apache Storm" -original_id: adaptors-storm ---- - -Pulsar Storm is an adaptor for integrating with [Apache Storm](http://storm.apache.org/) topologies. It provides core Storm implementations for sending and receiving data. - -An application can inject data into a Storm topology via a generic Pulsar spout, as well as consume data from a Storm topology via a generic Pulsar bolt. - -## Using the Pulsar Storm Adaptor - -Include dependency for Pulsar Storm Adaptor: - -```xml - - - org.apache.pulsar - pulsar-storm - ${pulsar.version} - - -``` - -## Pulsar Spout - -The Pulsar Spout allows for the data published on a topic to be consumed by a Storm topology. It emits a Storm tuple based on the message received and the `MessageToValuesMapper` provided by the client. - -The tuples that fail to be processed by the downstream bolts will be re-injected by the spout with an exponential backoff, within a configurable timeout (the default is 60 seconds) or a configurable number of retries, whichever comes first, after which it is acknowledged by the consumer. Here's an example construction of a spout: - -```java - -MessageToValuesMapper messageToValuesMapper = new MessageToValuesMapper() { - - @Override - public Values toValues(Message msg) { - return new Values(new String(msg.getData())); - } - - @Override - public void declareOutputFields(OutputFieldsDeclarer declarer) { - // declare the output fields - declarer.declare(new Fields("string")); - } -}; - -// Configure a Pulsar Spout -PulsarSpoutConfiguration spoutConf = new PulsarSpoutConfiguration(); -spoutConf.setServiceUrl("pulsar://broker.messaging.usw.example.com:6650"); -spoutConf.setTopic("persistent://my-property/usw/my-ns/my-topic1"); -spoutConf.setSubscriptionName("my-subscriber-name1"); -spoutConf.setMessageToValuesMapper(messageToValuesMapper); - -// Create a Pulsar Spout -PulsarSpout spout = new PulsarSpout(spoutConf); - -``` - -For a complete example, click [here](https://github.com/apache/pulsar-adapters/blob/master/pulsar-storm/src/test/java/org/apache/pulsar/storm/PulsarSpoutTest.java). - -## Pulsar Bolt - -The Pulsar bolt allows data in a Storm topology to be published on a topic. It publishes messages based on the Storm tuple received and the `TupleToMessageMapper` provided by the client. - -A partitioned topic can also be used to publish messages on different topics. In the implementation of the `TupleToMessageMapper`, a "key" will need to be provided in the message which will send the messages with the same key to the same topic. 
Here's an example bolt: - -```java - -TupleToMessageMapper tupleToMessageMapper = new TupleToMessageMapper() { - - @Override - public TypedMessageBuilder toMessage(TypedMessageBuilder msgBuilder, Tuple tuple) { - String receivedMessage = tuple.getString(0); - // message processing - String processedMsg = receivedMessage + "-processed"; - return msgBuilder.value(processedMsg.getBytes()); - } - - @Override - public void declareOutputFields(OutputFieldsDeclarer declarer) { - // declare the output fields - } -}; - -// Configure a Pulsar Bolt -PulsarBoltConfiguration boltConf = new PulsarBoltConfiguration(); -boltConf.setServiceUrl("pulsar://broker.messaging.usw.example.com:6650"); -boltConf.setTopic("persistent://my-property/usw/my-ns/my-topic2"); -boltConf.setTupleToMessageMapper(tupleToMessageMapper); - -// Create a Pulsar Bolt -PulsarBolt bolt = new PulsarBolt(boltConf); - -``` - diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-brokers.md b/site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-brokers.md deleted file mode 100644 index 4af4363850efd6..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-brokers.md +++ /dev/null @@ -1,276 +0,0 @@ ---- -id: admin-api-brokers -title: Managing Brokers -sidebar_label: "Brokers" -original_id: admin-api-brokers ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -Pulsar brokers consist of two components: - -1. An HTTP server exposing a {@inject: rest:REST:/} interface administration and [topic](reference-terminology.md#topic) lookup. -2. A dispatcher that handles all Pulsar [message](reference-terminology.md#message) transfers. - -[Brokers](reference-terminology.md#broker) can be managed via: - -* The [`brokers`](reference-pulsar-admin.md#brokers) command of the [`pulsar-admin`](reference-pulsar-admin.md) tool -* The `/admin/v2/brokers` endpoint of the admin {@inject: rest:REST:/} API -* The `brokers` method of the {@inject: javadoc:PulsarAdmin:/admin/org/apache/pulsar/client/admin/PulsarAdmin.html} object in the [Java API](client-libraries-java.md) - -In addition to being configurable when you start them up, brokers can also be [dynamically configured](#dynamic-broker-configuration). - -> See the [Configuration](reference-configuration.md#broker) page for a full listing of broker-specific configuration parameters. - -## Brokers resources - -### List active brokers - -Fetch all available active brokers that are serving traffic with cluster name. - -````mdx-code-block - - - -```shell - -$ pulsar-admin brokers list use - -``` - -``` - -broker1.use.org.com:8080 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/brokers/:cluster|operation/getActiveBrokers?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().getActiveBrokers(clusterName) - -``` - - - - -```` - -### Get the information of the leader broker - -Fetch the information of the leader broker, for example, the service url. 
- -````mdx-code-block - - - -```shell - -$ pulsar-admin brokers leader-broker - -``` - -``` - -BrokerInfo(serviceUrl=broker1.use.org.com:8080) - -``` - - - - -{@inject: endpoint|GET|/admin/v2/brokers/leaderBroker|operation/getLeaderBroker?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().getLeaderBroker() - -``` - -For the detail of the code above, see [here](https://github.com/apache/pulsar/blob/master/pulsar-client-admin/src/main/java/org/apache/pulsar/client/admin/internal/BrokersImpl.java#L80) - - - - -```` - -#### list of namespaces owned by a given broker - -It finds all namespaces which are owned and served by a given broker. - -````mdx-code-block - - - -```shell - -$ pulsar-admin brokers namespaces use \ - --url broker1.use.org.com:8080 - -``` - -```json - -{ - "my-property/use/my-ns/0x00000000_0xffffffff": { - "broker_assignment": "shared", - "is_controlled": false, - "is_active": true - } -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/brokers/:cluster/:broker/ownedNamespaces|operation/getOwnedNamespaes?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().getOwnedNamespaces(cluster,brokerUrl); - -``` - - - - -```` - -### Dynamic broker configuration - -One way to configure a Pulsar [broker](reference-terminology.md#broker) is to supply a [configuration](reference-configuration.md#broker) when the broker is [started up](reference-cli-tools.md#pulsar-broker). - -But since all broker configuration in Pulsar is stored in ZooKeeper, configuration values can also be dynamically updated *while the broker is running*. When you update broker configuration dynamically, ZooKeeper will notify the broker of the change and the broker will then override any existing configuration values. - -* The [`brokers`](reference-pulsar-admin.md#brokers) command for the [`pulsar-admin`](reference-pulsar-admin.md) tool has a variety of subcommands that enable you to manipulate a broker's configuration dynamically, enabling you to [update config values](#update-dynamic-configuration) and more. -* In the Pulsar admin {@inject: rest:REST:/} API, dynamic configuration is managed through the `/admin/v2/brokers/configuration` endpoint. - -### Update dynamic configuration - -````mdx-code-block - - - -The [`update-dynamic-config`](reference-pulsar-admin.md#brokers-update-dynamic-config) subcommand will update existing configuration. It takes two arguments: the name of the parameter and the new value using the `config` and `value` flag respectively. Here's an example for the [`brokerShutdownTimeoutMs`](reference-configuration.md#broker-brokerShutdownTimeoutMs) parameter: - -```shell - -$ pulsar-admin brokers update-dynamic-config --config brokerShutdownTimeoutMs --value 100 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/brokers/configuration/:configName/:configValue|operation/updateDynamicConfiguration?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().updateDynamicConfiguration(configName, configValue); - -``` - - - - -```` - -### List updated values - -Fetch a list of all potentially updatable configuration parameters. -````mdx-code-block - - - -```shell - -$ pulsar-admin brokers list-dynamic-config -brokerShutdownTimeoutMs - -``` - - - - -{@inject: endpoint|GET|/admin/v2/brokers/configuration|operation/getDynamicConfigurationName?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().getDynamicConfigurationNames(); - -``` - - - - -```` - -### List all - -Fetch a list of all parameters that have been dynamically updated. 
- -````mdx-code-block - - - -```shell - -$ pulsar-admin brokers get-all-dynamic-config -brokerShutdownTimeoutMs:100 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/brokers/configuration/values|operation/getAllDynamicConfigurations?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().getAllDynamicConfigurations(); - -``` - - - - -```` diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-clusters.md b/site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-clusters.md deleted file mode 100644 index e0e9fb5f91f65b..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-clusters.md +++ /dev/null @@ -1,308 +0,0 @@ ---- -id: admin-api-clusters -title: Managing Clusters -sidebar_label: "Clusters" -original_id: admin-api-clusters ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -Pulsar clusters consist of one or more Pulsar [brokers](reference-terminology.md#broker), one or more [BookKeeper](reference-terminology.md#bookkeeper) -servers (aka [bookies](reference-terminology.md#bookie)), and a [ZooKeeper](https://zookeeper.apache.org) cluster that provides configuration and coordination management. - -Clusters can be managed via: - -* The [`clusters`](reference-pulsar-admin.md#clusters) command of the [`pulsar-admin`](reference-pulsar-admin.md) tool -* The `/admin/v2/clusters` endpoint of the admin {@inject: rest:REST:/} API -* The `clusters` method of the {@inject: javadoc:PulsarAdmin:/admin/org/apache/pulsar/client/admin/PulsarAdmin} object in the [Java API](client-libraries-java.md) - -## Clusters resources - -### Provision - -New clusters can be provisioned using the admin interface. - -> Please note that this operation requires superuser privileges. - -````mdx-code-block - - - -You can provision a new cluster using the [`create`](reference-pulsar-admin.md#clusters-create) subcommand. Here's an example: - -```shell - -$ pulsar-admin clusters create cluster-1 \ - --url http://my-cluster.org.com:8080 \ - --broker-url pulsar://my-cluster.org.com:6650 - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/clusters/:cluster|operation/createCluster?version=@pulsar:version_number@} - - - - -```java - -ClusterData clusterData = new ClusterData( - serviceUrl, - serviceUrlTls, - brokerServiceUrl, - brokerServiceUrlTls -); -admin.clusters().createCluster(clusterName, clusterData); - -``` - - - - -```` - -### Initialize cluster metadata - -When provision a new cluster, you need to initialize that cluster's [metadata](concepts-architecture-overview.md#metadata-store). When initializing cluster metadata, you need to specify all of the following: - -* The name of the cluster -* The local ZooKeeper connection string for the cluster -* The configuration store connection string for the entire instance -* The web service URL for the cluster -* A broker service URL enabling interaction with the [brokers](reference-terminology.md#broker) in the cluster - -You must initialize cluster metadata *before* starting up any [brokers](admin-api-brokers.md) that will belong to the cluster. - -> **No cluster metadata initialization through the REST API or the Java admin API** -> -> Unlike most other admin functions in Pulsar, cluster metadata initialization cannot be performed via the admin REST API -> or the admin Java client, as metadata initialization involves communicating with ZooKeeper directly. 
-> Instead, you can use the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool, in particular -> the [`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata) command. - -Here's an example cluster metadata initialization command: - -```shell - -bin/pulsar initialize-cluster-metadata \ - --cluster us-west \ - --zookeeper zk1.us-west.example.com:2181 \ - --configuration-store zk1.us-west.example.com:2184 \ - --web-service-url http://pulsar.us-west.example.com:8080/ \ - --web-service-url-tls https://pulsar.us-west.example.com:8443/ \ - --broker-service-url pulsar://pulsar.us-west.example.com:6650/ \ - --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651/ - -``` - -You'll need to use `--*-tls` flags only if you're using [TLS authentication](security-tls-authentication.md) in your instance. - -### Get configuration - -You can fetch the [configuration](reference-configuration.md) for an existing cluster at any time. - -````mdx-code-block - - - -Use the [`get`](reference-pulsar-admin.md#clusters-get) subcommand and specify the name of the cluster. Here's an example: - -```shell - -$ pulsar-admin clusters get cluster-1 -{ - "serviceUrl": "http://my-cluster.org.com:8080/", - "serviceUrlTls": null, - "brokerServiceUrl": "pulsar://my-cluster.org.com:6650/", - "brokerServiceUrlTls": null - "peerClusterNames": null -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/clusters/:cluster|operation/getCluster?version=@pulsar:version_number@} - - - - -```java - -admin.clusters().getCluster(clusterName); - -``` - - - - -```` - -### Update - -You can update the configuration for an existing cluster at any time. - -````mdx-code-block - - - -Use the [`update`](reference-pulsar-admin.md#clusters-update) subcommand and specify new configuration values using flags. - -```shell - -$ pulsar-admin clusters update cluster-1 \ - --url http://my-cluster.org.com:4081 \ - --broker-url pulsar://my-cluster.org.com:3350 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/clusters/:cluster|operation/updateCluster?version=@pulsar:version_number@} - - - - -```java - -ClusterData clusterData = new ClusterData( - serviceUrl, - serviceUrlTls, - brokerServiceUrl, - brokerServiceUrlTls -); -admin.clusters().updateCluster(clusterName, clusterData); - -``` - - - - -```` - -### Delete - -Clusters can be deleted from a Pulsar [instance](reference-terminology.md#instance). - -````mdx-code-block - - - -Use the [`delete`](reference-pulsar-admin.md#clusters-delete) subcommand and specify the name of the cluster. - -``` - -$ pulsar-admin clusters delete cluster-1 - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/clusters/:cluster|operation/deleteCluster?version=@pulsar:version_number@} - - - - -```java - -admin.clusters().deleteCluster(clusterName); - -``` - - - - -```` - -### List - -You can fetch a list of all clusters in a Pulsar [instance](reference-terminology.md#instance). - -````mdx-code-block - - - -Use the [`list`](reference-pulsar-admin.md#clusters-list) subcommand. - -```shell - -$ pulsar-admin clusters list -cluster-1 -cluster-2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/clusters|operation/getClusters?version=@pulsar:version_number@} - - - - -```java - -admin.clusters().getClusters(); - -``` - - - - -```` - -### Update peer-cluster data - -Peer clusters can be configured for a given cluster in a Pulsar [instance](reference-terminology.md#instance). 
- -````mdx-code-block - - - -Use the [`update-peer-clusters`](reference-pulsar-admin.md#clusters-update-peer-clusters) subcommand and specify the list of peer-cluster names. - -``` - -$ pulsar-admin update-peer-clusters cluster-1 --peer-clusters cluster-2 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/clusters/:cluster/peers|operation/setPeerClusterNames?version=@pulsar:version_number@} - - - - -```java - -admin.clusters().updatePeerClusterNames(clusterName, peerClusterList); - -``` - - - - -```` \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-functions.md b/site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-functions.md deleted file mode 100644 index 93d41ac257301f..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-functions.md +++ /dev/null @@ -1,820 +0,0 @@ ---- -id: admin-api-functions -title: Manage Functions -sidebar_label: "Functions" -original_id: admin-api-functions ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -**Pulsar Functions** are lightweight compute processes that - -* consume messages from one or more Pulsar topics -* apply a user-supplied processing logic to each message -* publish the results of the computation to another topic - -Functions can be managed via the following methods. - -Method | Description ----|--- -**Admin CLI** | The [`functions`](reference-pulsar-admin.md#functions) command of the [`pulsar-admin`](reference-pulsar-admin.md) tool. -**REST API** |The `/admin/v3/functions` endpoint of the admin {@inject: rest:REST:/} API. -**Java Admin API**| The `functions` method of the {@inject: javadoc:PulsarAdmin:/admin/org/apache/pulsar/client/admin/PulsarAdmin} object in the [Java API](client-libraries-java.md). - -## Function resources - -You can perform the following operations on functions. - -### Create a function - -You can create a Pulsar function in cluster mode (deploy it on a Pulsar cluster) using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`create`](reference-pulsar-admin.md#functions-create) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions create \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --inputs test-input-topic \ - --output persistent://public/default/test-output-topic \ - --classname org.apache.pulsar.functions.api.examples.ExclamationFunction \ - --jar /examples/api-examples.jar - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName|operation/registerFunction?version=@pulsar:version_number@} - - - - -```java - -FunctionConfig functionConfig = new FunctionConfig(); -functionConfig.setTenant(tenant); -functionConfig.setNamespace(namespace); -functionConfig.setName(functionName); -functionConfig.setRuntime(FunctionConfig.Runtime.JAVA); -functionConfig.setParallelism(1); -functionConfig.setClassName("org.apache.pulsar.functions.api.examples.ExclamationFunction"); -functionConfig.setProcessingGuarantees(FunctionConfig.ProcessingGuarantees.ATLEAST_ONCE); -functionConfig.setTopicsPattern(sourceTopicPattern); -functionConfig.setSubName(subscriptionName); -functionConfig.setAutoAck(true); -functionConfig.setOutput(sinkTopic); -admin.functions().createFunction(functionConfig, fileName); - -``` - - - - -```` - -### Update a function - -You can update a Pulsar function that has been deployed to a Pulsar cluster using Admin CLI, REST API or Java Admin API. 
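Before the update examples in the tabs below, here is a self-contained version of the create call from the previous section. It assumes a locally running broker, the bundled `api-examples.jar`, and illustrative tenant, namespace, topic, and function names; only the `FunctionConfig` setters already shown above are used.

```java
import java.util.Collections;
import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.common.functions.FunctionConfig;

public class CreateFunctionExample {
    public static void main(String[] args) throws Exception {
        PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080") // assumed broker URL
                .build();

        FunctionConfig functionConfig = new FunctionConfig();
        functionConfig.setTenant("public");
        functionConfig.setNamespace("default");
        functionConfig.setName("exclamation"); // hypothetical function name
        functionConfig.setRuntime(FunctionConfig.Runtime.JAVA);
        functionConfig.setParallelism(1);
        functionConfig.setClassName(
                "org.apache.pulsar.functions.api.examples.ExclamationFunction");
        functionConfig.setInputs(Collections.singletonList("test-input-topic"));
        functionConfig.setOutput("persistent://public/default/test-output-topic");

        // Path to the bundled examples jar; adjust to your installation
        admin.functions().createFunction(functionConfig, "/examples/api-examples.jar");
        admin.close();
    }
}
```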
- -````mdx-code-block - - - -Use the [`update`](reference-pulsar-admin.md#functions-update) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions update \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --output persistent://public/default/update-output-topic \ - # other options - -``` - - - - -{@inject: endpoint|PUT|/admin/v3/functions/:tenant/:namespace/:functionName|operation/updateFunction?version=@pulsar:version_number@} - - - - -```java - -FunctionConfig functionConfig = new FunctionConfig(); -functionConfig.setTenant(tenant); -functionConfig.setNamespace(namespace); -functionConfig.setName(functionName); -functionConfig.setRuntime(FunctionConfig.Runtime.JAVA); -functionConfig.setParallelism(1); -functionConfig.setClassName("org.apache.pulsar.functions.api.examples.ExclamationFunction"); -UpdateOptions updateOptions = new UpdateOptions(); -updateOptions.setUpdateAuthData(updateAuthData); -admin.functions().updateFunction(functionConfig, userCodeFile, updateOptions); - -``` - - - - -```` - -### Start an instance of a function - -You can start a stopped function instance with `instance-id` using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`start`](reference-pulsar-admin.md#functions-start) subcommand. - -```shell - -$ pulsar-admin functions start \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/start|operation/startFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().startFunction(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Start all instances of a function - -You can start all stopped function instances using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`start`](reference-pulsar-admin.md#functions-start) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions start \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/start|operation/startFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().startFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### Stop an instance of a function - -You can stop a function instance with `instance-id` using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`stop`](reference-pulsar-admin.md#functions-stop) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions stop \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/stop|operation/stopFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().stopFunction(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Stop all instances of a function - -You can stop all function instances using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`stop`](reference-pulsar-admin.md#functions-stop) subcommand. 
- -**Example** - -```shell - -$ pulsar-admin functions stop \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/stop|operation/stopFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().stopFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### Restart an instance of a function - -Restart a function instance with `instance-id` using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`restart`](reference-pulsar-admin.md#functions-restart) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions restart \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/restart|operation/restartFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().restartFunction(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Restart all instances of a function - -You can restart all function instances using Admin CLI, REST API or Java admin API. - -````mdx-code-block - - - -Use the [`restart`](reference-pulsar-admin.md#functions-restart) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions restart \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/restart|operation/restartFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().restartFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### List all functions - -You can list all Pulsar functions running under a specific tenant and namespace using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`list`](reference-pulsar-admin.md#functions-list) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions list \ - --tenant public \ - --namespace default - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace|operation/listFunctions?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctions(tenant, namespace); - -``` - - - - -```` - -### Delete a function - -You can delete a Pulsar function that is running on a Pulsar cluster using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`delete`](reference-pulsar-admin.md#functions-delete) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions delete \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) - -``` - - - - -{@inject: endpoint|DELETE|/admin/v3/functions/:tenant/:namespace/:functionName|operation/deregisterFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().deleteFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### Get info about a function - -You can get information about a Pulsar function currently running in cluster mode using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`get`](reference-pulsar-admin.md#functions-get) subcommand. 
- -**Example** - -```shell - -$ pulsar-admin functions get \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName|operation/getFunctionInfo?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### Get status of an instance of a function - -You can get the current status of a Pulsar function instance with `instance-id` using Admin CLI, REST API or Java Admin API. -````mdx-code-block - - - -Use the [`status`](reference-pulsar-admin.md#functions-status) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions status \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/status|operation/getFunctionInstanceStatus?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionStatus(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Get status of all instances of a function - -You can get the current status of a Pulsar function instance using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`status`](reference-pulsar-admin.md#functions-status) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions status \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/status|operation/getFunctionStatus?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionStatus(tenant, namespace, functionName); - -``` - - - - -```` - -### Get stats of an instance of a function - -You can get the current stats of a Pulsar Function instance with `instance-id` using Admin CLI, REST API or Java admin API. -````mdx-code-block - - - -Use the [`stats`](reference-pulsar-admin.md#functions-stats) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions stats \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/stats|operation/getFunctionInstanceStats?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionStats(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Get stats of all instances of a function - -You can get the current stats of a Pulsar function using Admin CLI, REST API or Java admin API. - -````mdx-code-block - - - -Use the [`stats`](reference-pulsar-admin.md#functions-stats) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions stats \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/stats|operation/getFunctionStats?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionStats(tenant, namespace, functionName); - -``` - - - - -```` - -### Trigger a function - -You can trigger a specified Pulsar function with a supplied value using Admin CLI, REST API or Java admin API. - -````mdx-code-block - - - -Use the [`trigger`](reference-pulsar-admin.md#functions-trigger) subcommand. 
- -**Example** - -```shell - -$ pulsar-admin functions trigger \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --topic (the name of input topic) \ - --trigger-value \"hello pulsar\" - # or --trigger-file (the path of trigger file) - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/trigger|operation/triggerFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().triggerFunction(tenant, namespace, functionName, topic, triggerValue, triggerFile); - -``` - - - - -```` - -### Put state associated with a function - -You can put the state associated with a Pulsar function using Admin CLI, REST API or Java admin API. - -````mdx-code-block - - - -Use the [`putstate`](reference-pulsar-admin.md#functions-putstate) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions putstate \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --state "{\"key\":\"pulsar\", \"stringValue\":\"hello pulsar\"}" - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/state/:key|operation/putFunctionState?version=@pulsar:version_number@} - - - - -```java - -TypeReference typeRef = new TypeReference() {}; -FunctionState stateRepr = ObjectMapperFactory.getThreadLocal().readValue(state, typeRef); -admin.functions().putFunctionState(tenant, namespace, functionName, stateRepr); - -``` - - - - -```` - -### Fetch state associated with a function - -You can fetch the current state associated with a Pulsar function using Admin CLI, REST API or Java admin API. - -````mdx-code-block - - - -Use the [`querystate`](reference-pulsar-admin.md#functions-querystate) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions querystate \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --key (the key of state) - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/state/:key|operation/getFunctionState?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionState(tenant, namespace, functionName, key); - -``` - - - - -```` \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-namespaces.md b/site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-namespaces.md deleted file mode 100644 index 9cb387041f11c2..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-namespaces.md +++ /dev/null @@ -1,1315 +0,0 @@ ---- -id: admin-api-namespaces -title: Managing Namespaces -sidebar_label: "Namespaces" -original_id: admin-api-namespaces ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -Pulsar [namespaces](reference-terminology.md#namespace) are logical groupings of [topics](reference-terminology.md#topic). - -Namespaces can be managed via: - -* The [`namespaces`](reference-pulsar-admin.md#clusters) command of the [`pulsar-admin`](reference-pulsar-admin.md) tool -* The `/admin/v2/namespaces` endpoint of the admin {@inject: rest:REST:/} API -* The `namespaces` method of the {@inject: javadoc:PulsarAdmin:/admin/org/apache/pulsar/client/admin/PulsarAdmin} object in the [Java API](client-libraries-java.md) - -## Namespaces resources - -### Create namespaces - -You can create new namespaces under a given [tenant](reference-terminology.md#tenant). 
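As a runnable companion to the tab examples below, this sketch creates a namespace and lists the namespaces under the tenant to verify the result. The broker URL and the `test-tenant` tenant are assumptions; the tenant must already exist.

```java
import org.apache.pulsar.client.admin.PulsarAdmin;

public class CreateNamespaceExample {
    public static void main(String[] args) throws Exception {
        PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080") // assumed broker URL
                .build();

        // Create the namespace under an existing tenant
        admin.namespaces().createNamespace("test-tenant/test-namespace");

        // Verify the namespace shows up under the tenant
        admin.namespaces().getNamespaces("test-tenant").forEach(System.out::println);
        admin.close();
    }
}
```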
- -````mdx-code-block - - - -Use the [`create`](reference-pulsar-admin.md#namespaces-create) subcommand and specify the namespace by name: - -```shell - -$ pulsar-admin namespaces create test-tenant/test-namespace - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace|operation/createNamespace?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().createNamespace(namespace); - -``` - - - - -```` - -### Get policies - -You can fetch the current policies associated with a namespace at any time. - -````mdx-code-block - - - -Use the [`policies`](reference-pulsar-admin.md#namespaces-policies) subcommand and specify the namespace: - -```shell - -$ pulsar-admin namespaces policies test-tenant/test-namespace -{ - "auth_policies": { - "namespace_auth": {}, - "destination_auth": {} - }, - "replication_clusters": [], - "bundles_activated": true, - "bundles": { - "boundaries": [ - "0x00000000", - "0xffffffff" - ], - "numBundles": 1 - }, - "backlog_quota_map": {}, - "persistence": null, - "latency_stats_sample_rate": {}, - "message_ttl_in_seconds": 0, - "retention_policies": null, - "deleted": false -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace|operation/getPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getPolicies(namespace); - -``` - - - - -```` - -### List namespaces - -You can list all namespaces within a given Pulsar [tenant](reference-terminology.md#tenant). - -````mdx-code-block - - - -Use the [`list`](reference-pulsar-admin.md#namespaces-list) subcommand and specify the tenant: - -```shell - -$ pulsar-admin namespaces list test-tenant -test-tenant/ns1 -test-tenant/ns2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant|operation/getTenantNamespaces?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getNamespaces(tenant); - -``` - - - - -```` - -### Delete namespaces - -You can delete existing namespaces from a tenant. - -````mdx-code-block - - - -Use the [`delete`](reference-pulsar-admin.md#namespaces-delete) subcommand and specify the namespace: - -```shell - -$ pulsar-admin namespaces delete test-tenant/ns1 - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace|operation/deleteNamespace?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().deleteNamespace(namespace); - -``` - - - - -```` - -### Configure replication clusters - -#### Set replication cluster - -It sets replication clusters for a namespace, so Pulsar can internally replicate publish message from one colo to another colo. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-clusters test-tenant/ns1 \ - --clusters cl1 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/replication|operation/setNamespaceReplicationClusters?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setNamespaceReplicationClusters(namespace, clusters); - -``` - - - - -```` - -#### Get replication cluster - -It gives a list of replication clusters for a given namespace. 
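A minimal round-trip sketch of the set/get replication-cluster calls, assuming a local broker, an existing `test-tenant/ns1` namespace, and two clusters (`us-west`, `us-east`) that are registered in the instance and allowed for the tenant. Note that the exact collection type accepted by `setNamespaceReplicationClusters` may vary by client version.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import org.apache.pulsar.client.admin.PulsarAdmin;

public class ReplicationClustersExample {
    public static void main(String[] args) throws Exception {
        PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080") // assumed broker URL
                .build();

        // Clusters must be registered and allowed for the tenant beforehand
        Set<String> clusters = new HashSet<>(Arrays.asList("us-west", "us-east"));
        admin.namespaces().setNamespaceReplicationClusters("test-tenant/ns1", clusters);

        // Read the setting back
        System.out.println(
                admin.namespaces().getNamespaceReplicationClusters("test-tenant/ns1"));
        admin.close();
    }
}
```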
- -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-clusters test-tenant/cl1/ns1 - -``` - -``` - -cl2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/replication|operation/getNamespaceReplicationClusters?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getNamespaceReplicationClusters(namespace) - -``` - - - - -```` - -### Configure backlog quota policies - -#### Set backlog quota policies - -Backlog quota helps the broker to restrict bandwidth/storage of a namespace once it reaches a certain threshold limit. Admin can set the limit and take corresponding action after the limit is reached. - - 1. producer_request_hold: broker will hold and not persist produce request payload - - 2. producer_exception: broker disconnects with the client by giving an exception. - - 3. consumer_backlog_eviction: broker will start discarding backlog messages - - Backlog quota restriction can be taken care by defining restriction of backlog-quota-type: destination_storage - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-backlog-quota --limit 10G --limitTime 36000 --policy producer_request_hold test-tenant/ns1 - -``` - -``` - -N/A - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/setBacklogQuota?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setBacklogQuota(namespace, new BacklogQuota(limit, limitTime, policy)) - -``` - - - - -```` - -#### Get backlog quota policies - -It shows a configured backlog quota for a given namespace. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-backlog-quotas test-tenant/ns1 - -``` - -```json - -{ - "destination_storage": { - "limit": 10, - "policy": "producer_request_hold" - } -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/backlogQuotaMap|operation/getBacklogQuotaMap?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getBacklogQuotaMap(namespace); - -``` - - - - -```` - -#### Remove backlog quota policies - -It removes backlog quota policies for a given namespace - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces remove-backlog-quota test-tenant/ns1 - -``` - -``` - -N/A - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/removeBacklogQuota?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().removeBacklogQuota(namespace, backlogQuotaType) - -``` - - - - -```` - -### Configure persistence policies - -#### Set persistence policies - -Persistence policies allow to configure persistency-level for all topic messages under a given namespace. 
- - - Bookkeeper-ack-quorum: Number of acks (guaranteed copies) to wait for each entry, default: 0 - - - Bookkeeper-ensemble: Number of bookies to use for a topic, default: 0 - - - Bookkeeper-write-quorum: How many writes to make of each entry, default: 0 - - - Ml-mark-delete-max-rate: Throttling rate of mark-delete operation (0 means no throttle), default: 0.0 - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-persistence --bookkeeper-ack-quorum 2 --bookkeeper-ensemble 3 --bookkeeper-write-quorum 2 --ml-mark-delete-max-rate 0 test-tenant/ns1 - -``` - -``` - -N/A - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/setPersistence?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setPersistence(namespace,new PersistencePolicies(bookkeeperEnsemble, bookkeeperWriteQuorum,bookkeeperAckQuorum,managedLedgerMaxMarkDeleteRate)) - -``` - - - - -```` - -#### Get persistence policies - -It shows the configured persistence policies of a given namespace. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-persistence test-tenant/ns1 - -``` - -```json - -{ - "bookkeeperEnsemble": 3, - "bookkeeperWriteQuorum": 2, - "bookkeeperAckQuorum": 2, - "managedLedgerMaxMarkDeleteRate": 0 -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/getPersistence?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getPersistence(namespace) - -``` - - - - -```` - -### Configure namespace bundles - -#### Unload namespace bundles - -The namespace bundle is a virtual group of topics which belong to the same namespace. If the broker gets overloaded with the number of bundles, this command can help unload a bundle from that broker, so it can be served by some other less-loaded brokers. The namespace bundle ID ranges from 0x00000000 to 0xffffffff. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces unload --bundle 0x00000000_0xffffffff test-tenant/ns1 - -``` - -``` - -N/A - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace/:bundle/unload|operation/unloadNamespaceBundle?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().unloadNamespaceBundle(namespace, bundle) - -``` - - - - -```` - -#### Split namespace bundles - -Each namespace bundle can contain multiple topics and each bundle can be served by only one broker. -If a single bundle is creating an excessive load on a broker, an admin splits the bundle using this command permitting one or more of the new bundles to be unloaded thus spreading the load across the brokers. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces split-bundle --bundle 0x00000000_0xffffffff test-tenant/ns1 - -``` - -``` - -N/A - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace/:bundle/split|operation/splitNamespaceBundle?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().splitNamespaceBundle(namespace, bundle) - -``` - - - - -```` - -### Configure message TTL - -#### Set message-ttl - -It configures message’s time to live (in seconds) duration. 
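To complement the tab examples below, this sketch sets a 100-second TTL on a namespace and reads it back. The broker URL and namespace name are assumptions for illustration.

```java
import org.apache.pulsar.client.admin.PulsarAdmin;

public class MessageTtlExample {
    public static void main(String[] args) throws Exception {
        PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080") // assumed broker URL
                .build();

        // Messages older than 100 seconds become eligible for expiry
        admin.namespaces().setNamespaceMessageTTL("test-tenant/ns1", 100);
        System.out.println(admin.namespaces().getNamespaceMessageTTL("test-tenant/ns1"));
        admin.close();
    }
}
```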
- -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-message-ttl --messageTTL 100 test-tenant/ns1 - -``` - -``` - -N/A - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/setNamespaceMessageTTL?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setNamespaceMessageTTL(namespace, messageTTL) - -``` - - - - -```` - -#### Get message-ttl - -It gives a message ttl of configured namespace. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-message-ttl test-tenant/ns1 - -``` - -``` - -100 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/getNamespaceMessageTTL?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getNamespaceMessageTTL(namespace) - -``` - - - - -```` - -#### Remove message-ttl - -Remove a message TTL of the configured namespace. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces remove-message-ttl test-tenant/ns1 - -``` - -``` - -100 - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/removeNamespaceMessageTTL?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().removeNamespaceMessageTTL(namespace) - -``` - - - - -```` - - -### Clear backlog - -#### Clear namespace backlog - -It clears all message backlog for all the topics that belong to a specific namespace. You can also clear backlog for a specific subscription as well. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces clear-backlog --sub my-subscription test-tenant/ns1 - -``` - -``` - -N/A - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/clearBacklog|operation/clearNamespaceBacklogForSubscription?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().clearNamespaceBacklogForSubscription(namespace, subscription) - -``` - - - - -```` - -#### Clear bundle backlog - -It clears all message backlog for all the topics that belong to a specific NamespaceBundle. You can also clear backlog for a specific subscription as well. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces clear-backlog --bundle 0x00000000_0xffffffff --sub my-subscription test-tenant/ns1 - -``` - -``` - -N/A - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/:bundle/clearBacklog|operation/clearNamespaceBundleBacklogForSubscription?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().clearNamespaceBundleBacklogForSubscription(namespace, bundle, subscription) - -``` - - - - -```` - -### Configure retention - -#### Set retention - -Each namespace contains multiple topics and the retention size (storage size) of each topic should not exceed a specific threshold or it should be stored for a certain period. This command helps configure the retention size and time of topics in a given namespace. - -````mdx-code-block - - - -``` - -$ pulsar-admin set-retention --size 100 --time 10 test-tenant/ns1 - -``` - -``` - -N/A - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/retention|operation/setRetention?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setRetention(namespace, new RetentionPolicies(retentionTimeInMin, retentionSizeInMB)) - -``` - - - - -```` - -#### Get retention - -It shows retention information of a given namespace. 
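Before the get-retention examples in the tabs below, here is a self-contained sketch that applies the same retention values as the CLI example above (10 minutes, 100 MB) and reads them back. The broker URL and namespace are assumptions; the `RetentionPolicies` constructor mirrors the Java tab above.

```java
import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.common.policies.data.RetentionPolicies;

public class RetentionExample {
    public static void main(String[] args) throws Exception {
        PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080") // assumed broker URL
                .build();

        // Keep acknowledged messages for up to 10 minutes or 100 MB,
        // matching the CLI example above
        admin.namespaces().setRetention("test-tenant/ns1",
                new RetentionPolicies(10, 100));

        System.out.println(admin.namespaces().getRetention("test-tenant/ns1"));
        admin.close();
    }
}
```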
- -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-retention test-tenant/ns1 - -``` - -```json - -{ - "retentionTimeInMinutes": 10, - "retentionSizeInMB": 100 -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/retention|operation/getRetention?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getRetention(namespace) - -``` - - - - -```` - -### Configure dispatch throttling for topics - -#### Set dispatch throttling for topics - -It sets message dispatch rate for all the topics under a given namespace. -The dispatch rate can be restricted by the number of messages per X seconds (`msg-dispatch-rate`) or by the number of message-bytes per X second (`byte-dispatch-rate`). -dispatch rate is in second and it can be configured with `dispatch-rate-period`. Default value of `msg-dispatch-rate` and `byte-dispatch-rate` is -1 which -disables the throttling. - -:::note - -- If neither `clusterDispatchRate` nor `topicDispatchRate` is configured, dispatch throttling is disabled. - -- If `topicDispatchRate` is not configured, `clusterDispatchRate` takes effect. - -- If `topicDispatchRate` is configured, `topicDispatchRate` takes effect. - -::: - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-dispatch-rate test-tenant/ns1 \ - --msg-dispatch-rate 1000 \ - --byte-dispatch-rate 1048576 \ - --dispatch-rate-period 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/dispatchRate|operation/setDispatchRate?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setDispatchRate(namespace, new DispatchRate(1000, 1048576, 1)) - -``` - - - - -```` - -#### Get configured message-rate for topics - -It shows configured message-rate for the namespace (topics under this namespace can dispatch this many messages per second) - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-dispatch-rate test-tenant/ns1 - -``` - -```json - -{ - "dispatchThrottlingRatePerTopicInMsg" : 1000, - "dispatchThrottlingRatePerTopicInByte" : 1048576, - "ratePeriodInSecond" : 1 -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/dispatchRate|operation/getDispatchRate?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getDispatchRate(namespace) - -``` - - - - -```` - -### Configure dispatch throttling for subscription - -#### Set dispatch throttling for subscription - -It sets message dispatch rate for all the subscription of topics under a given namespace. -The dispatch rate can be restricted by the number of messages per X seconds (`msg-dispatch-rate`) or by the number of message-bytes per X second (`byte-dispatch-rate`). -dispatch rate is in second and it can be configured with `dispatch-rate-period`. Default value of `msg-dispatch-rate` and `byte-dispatch-rate` is -1 which -disables the throttling. 
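As a sketch of the subscription-level throttle described above, the following applies 1000 msg/s and 1 MiB/s per subscription over a 1-second period and reads the setting back. The broker URL and namespace are assumptions, and the `DispatchRate` constructor mirrors the Java tabs in this document; depending on your client version, `DispatchRate` may instead be built via a `DispatchRate.builder()`.

```java
import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.common.policies.data.DispatchRate;

public class SubscriptionDispatchRateExample {
    public static void main(String[] args) throws Exception {
        PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080") // assumed broker URL
                .build();

        // 1000 msg/s and 1 MiB/s per subscription, over a 1-second period
        admin.namespaces().setSubscriptionDispatchRate("test-tenant/ns1",
                new DispatchRate(1000, 1048576, 1));

        System.out.println(
                admin.namespaces().getSubscriptionDispatchRate("test-tenant/ns1"));
        admin.close();
    }
}
```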
- -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-subscription-dispatch-rate test-tenant/ns1 \ - --msg-dispatch-rate 1000 \ - --byte-dispatch-rate 1048576 \ - --dispatch-rate-period 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/subscriptionDispatchRate|operation/setDispatchRate?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setSubscriptionDispatchRate(namespace, new DispatchRate(1000, 1048576, 1)) - -``` - - - - -```` - -#### Get configured message-rate for subscription - -It shows configured message-rate for the namespace (topics under this namespace can dispatch this many messages per second) - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-subscription-dispatch-rate test-tenant/ns1 - -``` - -```json - -{ - "dispatchThrottlingRatePerTopicInMsg" : 1000, - "dispatchThrottlingRatePerTopicInByte" : 1048576, - "ratePeriodInSecond" : 1 -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/subscriptionDispatchRate|operation/getDispatchRate?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getSubscriptionDispatchRate(namespace) - -``` - - - - -```` - -### Configure dispatch throttling for replicator - -#### Set dispatch throttling for replicator - -It sets message dispatch rate for all the replicator between replication clusters under a given namespace. -The dispatch rate can be restricted by the number of messages per X seconds (`msg-dispatch-rate`) or by the number of message-bytes per X second (`byte-dispatch-rate`). -dispatch rate is in second and it can be configured with `dispatch-rate-period`. Default value of `msg-dispatch-rate` and `byte-dispatch-rate` is -1 which -disables the throttling. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-replicator-dispatch-rate test-tenant/ns1 \ - --msg-dispatch-rate 1000 \ - --byte-dispatch-rate 1048576 \ - --dispatch-rate-period 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/replicatorDispatchRate|operation/setDispatchRate?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setReplicatorDispatchRate(namespace, new DispatchRate(1000, 1048576, 1)) - -``` - - - - -```` - -#### Get configured message-rate for replicator - -It shows configured message-rate for the namespace (topics under this namespace can dispatch this many messages per second) - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-replicator-dispatch-rate test-tenant/ns1 - -``` - -```json - -{ - "dispatchThrottlingRatePerTopicInMsg" : 1000, - "dispatchThrottlingRatePerTopicInByte" : 1048576, - "ratePeriodInSecond" : 1 -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/replicatorDispatchRate|operation/getDispatchRate?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getReplicatorDispatchRate(namespace) - -``` - - - - -```` - -### Configure deduplication snapshot interval - -#### Get deduplication snapshot interval - -It shows configured `deduplicationSnapshotInterval` for a namespace (Each topic under the namespace will take a deduplication snapshot according to this interval) - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-deduplication-snapshot-interval test-tenant/ns1 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/deduplicationSnapshotInterval|operation/getDeduplicationSnapshotInterval?version=@pulsar:version_number@} - - - - -```java - 
-admin.namespaces().getDeduplicationSnapshotInterval(namespace) - -``` - - - - -```` - -#### Set deduplication snapshot interval - -Set configured `deduplicationSnapshotInterval` for a namespace. Each topic under the namespace will take a deduplication snapshot according to this interval. -`brokerDeduplicationEnabled` must be set to `true` for this property to take effect. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-deduplication-snapshot-interval test-tenant/ns1 --interval 1000 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/deduplicationSnapshotInterval|operation/setDeduplicationSnapshotInterval?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setDeduplicationSnapshotInterval(namespace, 1000) - -``` - - - - -```` - -#### Remove deduplication snapshot interval - -Remove configured `deduplicationSnapshotInterval` of a namespace (Each topic under the namespace will take a deduplication snapshot according to this interval) - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces remove-deduplication-snapshot-interval test-tenant/ns1 - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/deduplicationSnapshotInterval|operation/deleteDeduplicationSnapshotInterval?version=@pulsar:version_number@} - - - - - -```java - -admin.namespaces().removeDeduplicationSnapshotInterval(namespace) - -``` - - - - -```` - -### Namespace isolation - -You can use the [Pulsar isolation policy](administration-isolation.md) to allocate resources (broker and bookie) for a namespace. - -### Unload namespaces from a broker - -You can unload a namespace, or a [namespace bundle](reference-terminology.md#namespace-bundle), from the Pulsar [broker](reference-terminology.md#broker) that is currently responsible for it. - -#### pulsar-admin - -Use the [`unload`](reference-pulsar-admin.md#unload) subcommand of the [`namespaces`](reference-pulsar-admin.md#namespaces) command. - -````mdx-code-block - - - -```shell - -$ pulsar-admin namespaces unload my-tenant/my-ns - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace/unload|operation/unloadNamespace?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().unload(namespace) - -``` - - - - -```` diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-non-partitioned-topics.md b/site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-non-partitioned-topics.md deleted file mode 100644 index e6347bb8c363a1..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-non-partitioned-topics.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -id: admin-api-non-partitioned-topics -title: Managing non-partitioned topics -sidebar_label: "Non-partitioned topics" -original_id: admin-api-non-partitioned-topics ---- - -For details of the content, refer to [manage topics](admin-api-topics.md). 
\ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-non-persistent-topics.md b/site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-non-persistent-topics.md deleted file mode 100644 index 3126a6494c7153..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-non-persistent-topics.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -id: admin-api-non-persistent-topics -title: Managing non-persistent topics -sidebar_label: "Non-Persistent topics" -original_id: admin-api-non-persistent-topics ---- - -For details of the content, refer to [manage topics](admin-api-topics.md). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-overview.md b/site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-overview.md deleted file mode 100644 index 81e6587fab3509..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-overview.md +++ /dev/null @@ -1,133 +0,0 @@ ---- -id: admin-api-overview -title: Pulsar admin interface -sidebar_label: "Overview" -original_id: admin-api-overview ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -The Pulsar admin interface enables you to manage all important entities in a Pulsar instance, such as tenants, topics, and namespaces. - -You can interact with the admin interface via: - -- HTTP calls, which are made against the admin {@inject: rest:REST:/} API provided by Pulsar brokers. For some RESTful APIs, they might be redirected to the owner brokers for serving with [`307 Temporary Redirect`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/307), hence the HTTP callers should handle `307 Temporary Redirect`. If you use `curl` commands, you should specify `-L` to handle redirections. -- A Java client interface. -- The `pulsar-admin` CLI tool, which is available in the `bin` folder of your Pulsar installation: - - ```shell - - $ bin/pulsar-admin - - ``` - - For complete commands of `pulsar-admin` tool, see [Pulsar admin snapshot](https://pulsar.apache.org/tools/pulsar-admin/). - - -> **The REST API is the admin interface**. Both the `pulsar-admin` CLI tool and the Java client use the REST API. If you implement your own admin interface client, you should use the REST API. - -## Admin setup - -Each of the three admin interfaces (the `pulsar-admin` CLI tool, the {@inject: rest:REST:/} API, and the [Java admin API](/api/admin)) requires some special setup if you have enabled authentication in your Pulsar instance. - -````mdx-code-block - - - -If you have enabled authentication, you need to provide an auth configuration to use the `pulsar-admin` tool. By default, the configuration for the `pulsar-admin` tool is in the [`conf/client.conf`](reference-configuration.md#client) file. 
The following are the available parameters: - -|Name|Description|Default| -|----|-----------|-------| -|webServiceUrl|The web URL for the cluster.|http://localhost:8080/| -|brokerServiceUrl|The Pulsar protocol URL for the cluster.|pulsar://localhost:6650/| -|authPlugin|The authentication plugin.| | -|authParams|The authentication parameters for the cluster, as a comma-separated string.| | -|useTls|Whether or not TLS authentication will be enforced in the cluster.|false| -|tlsAllowInsecureConnection|Accept untrusted TLS certificate from client.|false| -|tlsTrustCertsFilePath|Path for the trusted TLS certificate file.| | - - - - -You can find details for the REST API exposed by Pulsar brokers in this {@inject: rest:document:/}. - - - - -To use the Java admin API, instantiate a {@inject: javadoc:PulsarAdmin:/admin/org/apache/pulsar/client/admin/PulsarAdmin} object, and specify a URL for a Pulsar broker and a {@inject: javadoc:PulsarAdminBuilder:/admin/org/apache/pulsar/client/admin/PulsarAdminBuilder}. The following is a minimal example using `localhost`: - -```java - -String url = "http://localhost:8080"; -// Pass auth-plugin class fully-qualified name if Pulsar-security enabled -String authPluginClassName = "com.org.MyAuthPluginClass"; -// Pass auth-param if auth-plugin class requires it -String authParams = "param1=value1"; -boolean useTls = false; -boolean tlsAllowInsecureConnection = false; -String tlsTrustCertsFilePath = null; -PulsarAdmin admin = PulsarAdmin.builder() -.authentication(authPluginClassName,authParams) -.serviceHttpUrl(url) -.tlsTrustCertsFilePath(tlsTrustCertsFilePath) -.allowTlsInsecureConnection(tlsAllowInsecureConnection) -.build(); - -``` - -If you use multiple brokers, you can use multi-host like Pulsar service. For example, - -```java - -String url = "http://localhost:8080,localhost:8081,localhost:8082"; -// Pass auth-plugin class fully-qualified name if Pulsar-security enabled -String authPluginClassName = "com.org.MyAuthPluginClass"; -// Pass auth-param if auth-plugin class requires it -String authParams = "param1=value1"; -boolean useTls = false; -boolean tlsAllowInsecureConnection = false; -String tlsTrustCertsFilePath = null; -PulsarAdmin admin = PulsarAdmin.builder() -.authentication(authPluginClassName,authParams) -.serviceHttpUrl(url) -.tlsTrustCertsFilePath(tlsTrustCertsFilePath) -.allowTlsInsecureConnection(tlsAllowInsecureConnection) -.build(); - -``` - - - - -```` - -## How to define Pulsar resource names when running Pulsar in Kubernetes -If you run Pulsar Functions or connectors on Kubernetes, you need to follow Kubernetes naming convention to define the names of your Pulsar resources, whichever admin interface you use. - -Kubernetes requires a name that can be used as a DNS subdomain name as defined in [RFC 1123](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names). Pulsar supports more legal characters than Kubernetes naming convention. If you create a Pulsar resource name with special characters that are not supported by Kubernetes (for example, including colons in a Pulsar namespace name), Kubernetes runtime translates the Pulsar object names into Kubernetes resource labels which are in RFC 1123-compliant forms. Consequently, you can run functions or connectors using Kubernetes runtime. 
The rules for translating Pulsar object names into Kubernetes resource labels are as below: - -- Truncate to 63 characters - -- Replace the following characters with dashes (-): - - - Non-alphanumeric characters - - - Underscores (_) - - - Dots (.) - -- Replace beginning and ending non-alphanumeric characters with 0 - -:::tip - -- If you get an error in translating Pulsar object names into Kubernetes resource labels (for example, you may have a naming collision if your Pulsar object name is too long) or want to customize the translating rules, see [customize Kubernetes runtime](https://pulsar.apache.org/docs/en/next/functions-runtime/#customize-kubernetes-runtime). -- For how to configure Kubernetes runtime, see [here](https://pulsar.apache.org/docs/en/next/functions-runtime/#configure-kubernetes-runtime). - -::: - diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-packages.md b/site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-packages.md deleted file mode 100644 index 70bebfcf35dfee..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-packages.md +++ /dev/null @@ -1,381 +0,0 @@ ---- -id: admin-api-packages -title: Manage packages -sidebar_label: "Packages" -original_id: admin-api-packages ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -Package management enables version management and simplifies the upgrade and rollback processes for Functions, Sinks, and Sources. When you use the same function, sink and source in different namespaces, you can upload them to a common package management system. - -## Package name - -A `package` is identified by five parts: `type`, `tenant`, `namespace`, `package name`, and `version`. - -| Part | Description | -|-------|-------------| -|`type` |The type of the package. The following types are supported: `function`, `sink` and `source`. | -| `name`|The fully qualified name of the package: `//`.| -|`version`|The version of the package.| - -The following is a code sample. - -```java - -class PackageName { - private final PackageType type; - private final String namespace; - private final String tenant; - private final String name; - private final String version; -} - -enum PackageType { - FUNCTION("function"), SINK("sink"), SOURCE("source"); -} - -``` - -## Package URL -A package is located using a URL. The package URL is written in the following format: - -```shell - -:////@ - -``` - -The following are package URL examples: - -`sink://public/default/mysql-sink@1.0` -`function://my-tenant/my-ns/my-function@0.1` -`source://my-tenant/my-ns/mysql-cdc-source@2.3` - -The package management system stores the data, versions and metadata of each package. The metadata is shown in the following table. - -| metadata | Description | -|----------|-------------| -|description|The description of the package.| -|contact |The contact information of a package. For example, team email.| -|create_time| The time when the package is created.| -|modification_time| The time when the package is modified.| -|properties |A key/value map that stores your own information.| - -## Permissions - -The packages are organized by the tenant and namespace, so you can apply the tenant and namespace permissions to packages directly. - -## Package resources -You can use the package management with command line tools, REST API and Java client. - -### Upload a package -You can upload a package to the package management service in the following ways. 
- -````mdx-code-block - - - -```shell - -bin/pulsar-admin packages upload function://public/default/example@v0.1 --path package-file --description package-description - -``` - - - - -{@inject: endpoint|POST|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version|operation/upload?version=@pulsar:version_number@} - - - - -Upload a package to the package management service synchronously. - -```java - - void upload(PackageMetadata metadata, String packageName, String path) throws PulsarAdminException; - -``` - -Upload a package to the package management service asynchronously. - -```java - - CompletableFuture uploadAsync(PackageMetadata metadata, String packageName, String path); - -``` - - - - -```` - -### Download a package -You can download a package to the package management service in the following ways. - -````mdx-code-block - - - -```shell - -bin/pulsar-admin packages download function://public/default/example@v0.1 --path package-file - -``` - - - - -{@inject: endpoint|GET|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version|operation/download?version=@pulsar:version_number@} - - - - -Download a package to the package management service synchronously. - -```java - - void download(String packageName, String path) throws PulsarAdminException; - -``` - -Download a package to the package management service asynchronously. - -```java - - CompletableFuture downloadAsync(String packageName, String path); - -``` - - - - -```` - -### List all versions of a package -You can get a list of all versions of a package in the following ways. -````mdx-code-block - - - -```shell - -bin/pulsar-admin packages list --type function public/default - -``` - - - - -{@inject: endpoint|GET|/admin/v3/packages/:type/:tenant/:namespace/:packageName|operation/listPackageVersion?version=@pulsar:version_number@} - - - - -List all versions of a package synchronously. - -```java - - List listPackageVersions(String packageName) throws PulsarAdminException; - -``` - -List all versions of a package asynchronously. - -```java - - CompletableFuture> listPackageVersionsAsync(String packageName); - -``` - - - - -```` - -### List all the specified type packages under a namespace -You can get a list of all the packages with the given type in a namespace in the following ways. -````mdx-code-block - - - -```shell - -bin/pulsar-admin packages list --type function public/default - -``` - - - - -{@inject: endpoint|PUT|/admin/v3/packages/:type/:tenant/:namespace|operation/listPackages?version=@pulsar:version_number@} - - - - -List all the packages with the given type in a namespace synchronously. - -```java - - List listPackages(String type, String namespace) throws PulsarAdminException; - -``` - -List all the packages with the given type in a namespace asynchronously. - -```java - - CompletableFuture> listPackagesAsync(String type, String namespace); - -``` - - - - -```` - -### Get the metadata of a package -You can get the metadata of a package in the following ways. - -````mdx-code-block - - - -```shell - -bin/pulsar-admin packages get-metadata function://public/default/test@v1 - -``` - - - - -{@inject: endpoint|GET|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version/metadata|operation/getMeta?version=@pulsar:version_number@} - - - - -Get the metadata of a package synchronously. - -```java - - PackageMetadata getMetadata(String packageName) throws PulsarAdminException; - -``` - -Get the metadata of a package asynchronously. 
- -```java - - CompletableFuture getMetadataAsync(String packageName); - -``` - - - - -```` - -### Update the metadata of a package -You can update the metadata of a package in the following ways. -````mdx-code-block - - - -```shell - -bin/pulsar-admin packages update-metadata function://public/default/example@v0.1 --description update-description - -``` - - - - -{@inject: endpoint|PUT|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version/metadata|operation/updateMeta?version=@pulsar:version_number@} - - - - -Update a package metadata information synchronously. - -```java - - void updateMetadata(String packageName, PackageMetadata metadata) throws PulsarAdminException; - -``` - -Update a package metadata information asynchronously. - -```java - - CompletableFuture updateMetadataAsync(String packageName, PackageMetadata metadata); - -``` - - - - -```` - -### Delete a specified package -You can delete a specified package with its package name in the following ways. - -````mdx-code-block - - - -The following command example deletes a package of version 0.1. - -```shell - -bin/pulsar-admin packages delete function://public/default/example@v0.1 - -``` - - - - -{@inject: endpoint|DELETE|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version|operation/delete?version=@pulsar:version_number@} - - - - -Delete a specified package synchronously. - -```java - - void delete(String packageName) throws PulsarAdminException; - -``` - -Delete a specified package asynchronously. - -```java - - CompletableFuture deleteAsync(String packageName); - -``` - - - - -```` diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-partitioned-topics.md b/site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-partitioned-topics.md deleted file mode 100644 index 5ce182282e0324..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-partitioned-topics.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -id: admin-api-partitioned-topics -title: Managing partitioned topics -sidebar_label: "Partitioned topics" -original_id: admin-api-partitioned-topics ---- - -For details of the content, refer to [manage topics](admin-api-topics.md). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-permissions.md b/site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-permissions.md deleted file mode 100644 index 6897517553f2b2..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-permissions.md +++ /dev/null @@ -1,174 +0,0 @@ ---- -id: admin-api-permissions -title: Managing permissions -sidebar_label: "Permissions" -original_id: admin-api-permissions ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -Permissions in Pulsar are managed at the [namespace](reference-terminology.md#namespace) level -(that is, within [tenants](reference-terminology.md#tenant) and [clusters](reference-terminology.md#cluster)). - -## Grant permissions - -You can grant permissions to specific roles for lists of operations such as `produce` and `consume`. 
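The Java tab below calls a `getAuthActions(actions)` helper without defining it. Here is one plausible implementation: it maps a comma-separated action string such as `produce,consume` onto the `AuthAction` values that `grantPermissionOnNamespace` expects. The helper name and class are illustrative, not part of the client API.

```java
import java.util.EnumSet;
import java.util.Set;
import org.apache.pulsar.common.policies.data.AuthAction;

public class GrantPermissionHelper {
    // Hypothetical helper: turn "produce,consume" into the Set<AuthAction>
    // passed to grantPermissionOnNamespace(namespace, role, actions)
    static Set<AuthAction> getAuthActions(String csv) {
        Set<AuthAction> actions = EnumSet.noneOf(AuthAction.class);
        for (String action : csv.split(",")) {
            actions.add(AuthAction.valueOf(action.trim()));
        }
        return actions;
    }

    public static void main(String[] args) {
        System.out.println(getAuthActions("produce,consume")); // [produce, consume]
    }
}
```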
- -````mdx-code-block - - - -Use the [`grant-permission`](reference-pulsar-admin.md#grant-permission) subcommand and specify a namespace, actions using the `--actions` flag, and a role using the `--role` flag: - -```shell - -$ pulsar-admin namespaces grant-permission test-tenant/ns1 \ - --actions produce,consume \ - --role admin10 - -``` - -Wildcard authorization can be performed when `authorizationAllowWildcardsMatching` is set to `true` in `broker.conf`. - -e.g. - -```shell - -$ pulsar-admin namespaces grant-permission test-tenant/ns1 \ - --actions produce,consume \ - --role 'my.role.*' - -``` - -Then, roles `my.role.1`, `my.role.2`, `my.role.foo`, `my.role.bar`, etc. can produce and consume. - -```shell - -$ pulsar-admin namespaces grant-permission test-tenant/ns1 \ - --actions produce,consume \ - --role '*.role.my' - -``` - -Then, roles `1.role.my`, `2.role.my`, `foo.role.my`, `bar.role.my`, etc. can produce and consume. - -**Note**: A wildcard matching works at **the beginning or end of the role name only**. - -e.g. - -```shell - -$ pulsar-admin namespaces grant-permission test-tenant/ns1 \ - --actions produce,consume \ - --role 'my.*.role' - -``` - -In this case, only the role `my.*.role` has permissions. -Roles `my.1.role`, `my.2.role`, `my.foo.role`, `my.bar.role`, etc. **cannot** produce and consume. - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/permissions/:role|operation/grantPermissionOnNamespace?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().grantPermissionOnNamespace(namespace, role, getAuthActions(actions)); - -``` - - - - -```` - -## Get permissions - -You can see which permissions have been granted to which roles in a namespace. - -````mdx-code-block - - - -Use the [`permissions`](reference-pulsar-admin#permissions) subcommand and specify a namespace: - -```shell - -$ pulsar-admin namespaces permissions test-tenant/ns1 -{ - "admin10": [ - "produce", - "consume" - ] -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/permissions|operation/getPermissions?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getPermissions(namespace); - -``` - - - - -```` - -## Revoke permissions - -You can revoke permissions from specific roles, which means that those roles will no longer have access to the specified namespace. - -````mdx-code-block - - - -Use the [`revoke-permission`](reference-pulsar-admin.md#revoke-permission) subcommand and specify a namespace and a role using the `--role` flag: - -```shell - -$ pulsar-admin namespaces revoke-permission test-tenant/ns1 \ - --role admin10 - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/permissions/:role|operation/revokePermissionsOnNamespace?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().revokePermissionsOnNamespace(namespace, role); - -``` - - - - -```` \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-persistent-topics.md b/site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-persistent-topics.md deleted file mode 100644 index 50d135b72f5424..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-persistent-topics.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -id: admin-api-persistent-topics -title: Managing persistent topics -sidebar_label: "Persistent topics" -original_id: admin-api-persistent-topics ---- - -For details of the content, refer to [manage topics](admin-api-topics.md). 
\ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-schemas.md b/site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-schemas.md deleted file mode 100644 index 9ffe21f5b0f750..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-schemas.md +++ /dev/null @@ -1,7 +0,0 @@ ---- -id: admin-api-schemas -title: Managing Schemas -sidebar_label: "Schemas" -original_id: admin-api-schemas ---- - diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-tenants.md b/site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-tenants.md deleted file mode 100644 index d78aa2e55f4c33..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-tenants.md +++ /dev/null @@ -1,228 +0,0 @@ ---- -id: admin-api-tenants -title: Managing Tenants -sidebar_label: "Tenants" -original_id: admin-api-tenants ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -Tenants, like namespaces, can be managed using the [admin API](admin-api-overview.md). There are currently two configurable aspects of tenants: - -* Admin roles -* Allowed clusters - -## Tenant resources - -### List - -You can list all of the tenants associated with an [instance](reference-terminology.md#instance). - -````mdx-code-block - - - -Use the [`list`](reference-pulsar-admin.md#tenants-list) subcommand. - -```shell - -$ pulsar-admin tenants list -my-tenant-1 -my-tenant-2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/tenants|operation/getTenants?version=@pulsar:version_number@} - - - - -```java - -admin.tenants().getTenants(); - -``` - - - - -```` - -### Create - -You can create a new tenant. - -````mdx-code-block - - - -Use the [`create`](reference-pulsar-admin.md#tenants-create) subcommand: - -```shell - -$ pulsar-admin tenants create my-tenant - -``` - -When creating a tenant, you can assign admin roles using the `-r`/`--admin-roles` flag. You can specify multiple roles as a comma-separated list. Here are some examples: - -```shell - -$ pulsar-admin tenants create my-tenant \ - --admin-roles role1,role2,role3 - -$ pulsar-admin tenants create my-tenant \ - -r role1 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/tenants/:tenant|operation/createTenant?version=@pulsar:version_number@} - - - - -```java - -admin.tenants().createTenant(tenantName, tenantInfo); - -``` - - - - -```` - -### Get configuration - -You can fetch the [configuration](reference-configuration.md) for an existing tenant at any time. - -````mdx-code-block - - - -Use the [`get`](reference-pulsar-admin.md#tenants-get) subcommand and specify the name of the tenant. Here's an example: - -```shell - -$ pulsar-admin tenants get my-tenant -{ - "adminRoles": [ - "admin1", - "admin2" - ], - "allowedClusters": [ - "cl1", - "cl2" - ] -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/tenants/:cluster|operation/getTenant?version=@pulsar:version_number@} - - - - -```java - -admin.tenants().getTenantInfo(tenantName); - -``` - - - - -```` - -### Delete - -Tenants can be deleted from a Pulsar [instance](reference-terminology.md#instance). - -````mdx-code-block - - - -Use the [`delete`](reference-pulsar-admin.md#tenants-delete) subcommand and specify the name of the tenant. 

```shell

$ pulsar-admin tenants delete my-tenant

```



{@inject: endpoint|DELETE|/admin/v2/tenants/:tenant|operation/deleteTenant?version=@pulsar:version_number@}



```java

admin.tenants().deleteTenant(tenantName);

```



````

### Update

You can update a tenant's configuration.

````mdx-code-block



Use the [`update`](reference-pulsar-admin.md#tenants-update) subcommand.

```shell

$ pulsar-admin tenants update my-tenant

```



{@inject: endpoint|POST|/admin/v2/tenants/:tenant|operation/updateTenant?version=@pulsar:version_number@}



```java

admin.tenants().updateTenant(tenantName, tenantInfo);

```



````
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-topics.md b/site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-topics.md
deleted file mode 100644
index dcbb6e6682e180..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/admin-api-topics.md
+++ /dev/null
@@ -1,2132 +0,0 @@
---
id: admin-api-topics
title: Manage topics
sidebar_label: "Topics"
original_id: admin-api-topics
---

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````


Pulsar has persistent and non-persistent topics. A persistent topic is a logical endpoint for publishing and consuming messages. The topic name structure for persistent topics is:

```shell

persistent://tenant/namespace/topic

```

Non-persistent topics are used in applications that only consume real-time published messages and do not need a persistence guarantee. This reduces message-publish latency by removing the overhead of persisting messages. The topic name structure for non-persistent topics is:

```shell

non-persistent://tenant/namespace/topic

```

## Manage topic resources
Whether a topic is persistent or non-persistent, you can obtain its resources through the `pulsar-admin` tool, the REST API, and Java.

:::note

In the REST API, `:schema` stands for `persistent` or `non-persistent`. `:tenant`, `:namespace`, and `:x` are variables; replace them with the real tenant, namespace, and `x` names when using them.
Take {@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getList?version=@pulsar:version_number@} as an example. To get the list of persistent topics in the REST API, use `https://pulsar.apache.org/admin/v2/persistent/my-tenant/my-namespace`. To get the list of non-persistent topics in the REST API, use `https://pulsar.apache.org/admin/v2/non-persistent/my-tenant/my-namespace`.

:::

### List of topics

You can get the list of topics under a given namespace in the following ways.

````mdx-code-block



```shell

$ pulsar-admin topics list \
  my-tenant/my-namespace

```



{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getList?version=@pulsar:version_number@}



```java

String namespace = "my-tenant/my-namespace";
admin.topics().getList(namespace);

```



````

### Grant permission

You can grant permissions on a client role to perform specific actions on a given topic in the following ways.
- -````mdx-code-block - - - -```shell - -$ pulsar-admin topics grant-permission \ - --actions produce,consume --role application1 \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/permissions/:role|operation/grantPermissionsOnTopic?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String role = "test-role"; -Set actions = Sets.newHashSet(AuthAction.produce, AuthAction.consume); -admin.topics().grantPermission(topic, role, actions); - -``` - - - - -```` - -### Get permission - -You can fetch permission in the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics permissions \ - persistent://test-tenant/ns1/tp1 \ - -{ - "application1": [ - "consume", - "produce" - ] -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/permissions|operation/getPermissionsOnTopic?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().getPermissions(topic); - -``` - - - - -```` - -### Revoke permission - -You can revoke a permission granted on a client role in the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics revoke-permission \ - --role application1 \ - persistent://test-tenant/ns1/tp1 \ - -{ - "application1": [ - "consume", - "produce" - ] -} - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/:schema/:tenant/:namespace/:topic/permissions/:role|operation/revokePermissionsOnTopic?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String role = "test-role"; -admin.topics().revokePermissions(topic, role); - -``` - - - - -```` - -### Delete topic - -You can delete a topic in the following ways. You cannot delete a topic if any active subscription or producers is connected to the topic. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics delete \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/:schema/:tenant/:namespace/:topic|operation/deleteTopic?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().delete(topic); - -``` - - - - -```` - -### Unload topic - -You can unload a topic in the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics unload \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/:schema/:tenant/:namespace/:topic/unload|operation/unloadTopic?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().unload(topic); - -``` - - - - -```` - -### Get stats - -You can check the following statistics of a given non-partitioned topic. - - - **msgRateIn**: The sum of all local and replication publishers' publish rates (msg/s). - - - **msgThroughputIn**: The sum of all local and replication publishers' publish rates (bytes/s). - - - **msgRateOut**: The sum of all local and replication consumers' dispatch rates(msg/s). - - - **msgThroughputOut**: The sum of all local and replication consumers' dispatch rates (bytes/s). - - - **averageMsgSize**: The average size (in bytes) of messages published within the last interval. - - - **storageSize**: The sum of the ledgers' storage size for this topic. The space used to store the messages for the topic. 
- - - **publishers**: The list of all local publishers into the topic. The list ranges from zero to thousands. - - - **msgRateIn**: The total rate of messages (msg/s) published by this publisher. - - - **msgThroughputIn**: The total throughput (bytes/s) of the messages published by this publisher. - - - **averageMsgSize**: The average message size in bytes from this publisher within the last interval. - - - **producerId**: The internal identifier for this producer on this topic. - - - **producerName**: The internal identifier for this producer, generated by the client library. - - - **address**: The IP address and source port for the connection of this producer. - - - **connectedSince**: The timestamp when this producer is created or reconnected last time. - - - **subscriptions**: The list of all local subscriptions to the topic. - - - **my-subscription**: The name of this subscription. It is defined by the client. - - - **msgRateOut**: The total rate of messages (msg/s) delivered on this subscription. - - - **msgThroughputOut**: The total throughput (bytes/s) delivered on this subscription. - - - **msgBacklog**: The number of messages in the subscription backlog. - - - **type**: The subscription type. - - - **msgRateExpired**: The rate at which messages were discarded instead of dispatched from this subscription due to TTL. - - - **lastExpireTimestamp**: The timestamp of the last message expire execution. - - - **lastConsumedFlowTimestamp**: The timestamp of the last flow command received. - - - **lastConsumedTimestamp**: The latest timestamp of all the consumed timestamp of the consumers. - - - **lastAckedTimestamp**: The latest timestamp of all the acked timestamp of the consumers. - - - **consumers**: The list of connected consumers for this subscription. - - - **msgRateOut**: The total rate of messages (msg/s) delivered to the consumer. - - - **msgThroughputOut**: The total throughput (bytes/s) delivered to the consumer. - - - **consumerName**: The internal identifier for this consumer, generated by the client library. - - - **availablePermits**: The number of messages that the consumer has space for in the client library's listen queue. `0` means the client library's queue is full and `receive()` isn't being called. A non-zero value means this consumer is ready for dispatched messages. - - - **unackedMessages**: The number of unacknowledged messages for the consumer, where an unacknowledged message is one that has been sent to the consumer but not yet acknowledged. This field is only meaningful when using a subscription that tracks individual message acknowledgement. - - - **blockedConsumerOnUnackedMsgs**: The flag used to verify if the consumer is blocked due to reaching threshold of the unacknowledged messages. - - - **lastConsumedTimestamp**: The timestamp when the consumer reads a message the last time. - - - **lastAckedTimestamp**: The timestamp when the consumer acknowledges a message the last time. - - - **replication**: This section gives the stats for cross-colo replication of this topic - - - **msgRateIn**: The total rate (msg/s) of messages received from the remote cluster. - - - **msgThroughputIn**: The total throughput (bytes/s) received from the remote cluster. - - - **msgRateOut**: The total rate of messages (msg/s) delivered to the replication-subscriber. - - - **msgThroughputOut**: The total throughput (bytes/s) delivered to the replication-subscriber. - - - **msgRateExpired**: The total rate of messages (msg/s) expired. 
- - - **replicationBacklog**: The number of messages pending to be replicated to remote cluster. - - - **connected**: Whether the outbound replicator is connected. - - - **replicationDelayInSeconds**: How long the oldest message has been waiting to be sent through the connection, if connected is `true`. - - - **inboundConnection**: The IP and port of the broker in the remote cluster's publisher connection to this broker. - - - **inboundConnectedSince**: The TCP connection being used to publish messages to the remote cluster. If there are no local publishers connected, this connection is automatically closed after a minute. - - - **outboundConnection**: The address of the outbound replication connection. - - - **outboundConnectedSince**: The timestamp of establishing outbound connection. - -The following is an example of a topic status. - -```json - -{ - "msgRateIn": 4641.528542257553, - "msgThroughputIn": 44663039.74947473, - "msgRateOut": 0, - "msgThroughputOut": 0, - "averageMsgSize": 1232439.816728665, - "storageSize": 135532389160, - "publishers": [ - { - "msgRateIn": 57.855383881403576, - "msgThroughputIn": 558994.7078932219, - "averageMsgSize": 613135, - "producerId": 0, - "producerName": null, - "address": null, - "connectedSince": null - } - ], - "subscriptions": { - "my-topic_subscription": { - "msgRateOut": 0, - "msgThroughputOut": 0, - "msgBacklog": 116632, - "type": null, - "msgRateExpired": 36.98245516804671, - "consumers": [] - } - }, - "replication": {} -} - -``` - -To get the status of a topic, you can use the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics stats \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/stats|operation/getStats?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().getStats(topic); - -``` - - - - -```` - -### Get internal stats - -You can get the detailed statistics of a topic. - - - **entriesAddedCounter**: Messages published since this broker loaded this topic. - - - **numberOfEntries**: The total number of messages being tracked. - - - **totalSize**: The total storage size in bytes of all messages. - - - **currentLedgerEntries**: The count of messages written to the ledger that is currently open for writing. - - - **currentLedgerSize**: The size in bytes of messages written to the ledger that is currently open for writing. - - - **lastLedgerCreatedTimestamp**: The time when the last ledger is created. - - - **lastLedgerCreationFailureTimestamp:** The time when the last ledger failed. - - - **waitingCursorsCount**: The number of cursors that are "caught up" and waiting for a new message to be published. - - - **pendingAddEntriesCount**: The number of messages that complete (asynchronous) write requests. - - - **lastConfirmedEntry**: The ledgerid:entryid of the last message that is written successfully. If the entryid is `-1`, then the ledger is open, yet no entries are written. - - - **state**: The state of this ledger for writing. The state `LedgerOpened` means that a ledger is open for saving published messages. - - - **ledgers**: The ordered list of all ledgers for this topic holding messages. - - - **ledgerId**: The ID of this ledger. - - - **entries**: The total number of entries that belong to this ledger. - - - **size**: The size of messages written to this ledger (in bytes). - - - **offloaded**: Whether this ledger is offloaded. 
- - - **metadata**: The ledger metadata. - - - **schemaLedgers**: The ordered list of all ledgers for this topic schema. - - - **ledgerId**: The ID of this ledger. - - - **entries**: The total number of entries that belong to this ledger. - - - **size**: The size of messages written to this ledger (in bytes). - - - **offloaded**: Whether this ledger is offloaded. - - - **metadata**: The ledger metadata. - - - **compactedLedger**: The ledgers holding un-acked messages after topic compaction. - - - **ledgerId**: The ID of this ledger. - - - **entries**: The total number of entries that belong to this ledger. - - - **size**: The size of messages written to this ledger (in bytes). - - - **offloaded**: Whether this ledger is offloaded. The value is `false` for the compacted topic ledger. - - - **cursors**: The list of all cursors on this topic. Each subscription in the topic stats has a cursor. - - - **markDeletePosition**: All messages before the markDeletePosition are acknowledged by the subscriber. - - - **readPosition**: The latest position of subscriber for reading message. - - - **waitingReadOp**: This is true when the subscription has read the latest message published to the topic and is waiting for new messages to be published. - - - **pendingReadOps**: The counter for how many outstanding read requests to the BookKeepers in progress. - - - **messagesConsumedCounter**: The number of messages this cursor has acked since this broker loaded this topic. - - - **cursorLedger**: The ledger being used to persistently store the current markDeletePosition. - - - **cursorLedgerLastEntry**: The last entryid used to persistently store the current markDeletePosition. - - - **individuallyDeletedMessages**: If acknowledges are being done out of order, the ranges of messages acknowledged between the markDeletePosition and the read-position shows. - - - **lastLedgerSwitchTimestamp**: The last time the cursor ledger is rolled over. - - - **state**: The state of the cursor ledger: `Open` means you have a cursor ledger for saving updates of the markDeletePosition. - -The following is an example of the detailed statistics of a topic. - -```json - -{ - "entriesAddedCounter":0, - "numberOfEntries":0, - "totalSize":0, - "currentLedgerEntries":0, - "currentLedgerSize":0, - "lastLedgerCreatedTimestamp":"2021-01-22T21:12:14.868+08:00", - "lastLedgerCreationFailureTimestamp":null, - "waitingCursorsCount":0, - "pendingAddEntriesCount":0, - "lastConfirmedEntry":"3:-1", - "state":"LedgerOpened", - "ledgers":[ - { - "ledgerId":3, - "entries":0, - "size":0, - "offloaded":false, - "metadata":null - } - ], - "cursors":{ - "test":{ - "markDeletePosition":"3:-1", - "readPosition":"3:-1", - "waitingReadOp":false, - "pendingReadOps":0, - "messagesConsumedCounter":0, - "cursorLedger":4, - "cursorLedgerLastEntry":1, - "individuallyDeletedMessages":"[]", - "lastLedgerSwitchTimestamp":"2021-01-22T21:12:14.966+08:00", - "state":"Open", - "numberOfEntriesSinceFirstNotAckedMessage":0, - "totalNonContiguousDeletedMessagesRange":0, - "properties":{ - - } - } - }, - "schemaLedgers":[ - { - "ledgerId":1, - "entries":11, - "size":10, - "offloaded":false, - "metadata":null - } - ], - "compactedLedger":{ - "ledgerId":-1, - "entries":-1, - "size":-1, - "offloaded":false, - "metadata":null - } -} - -``` - -To get the internal status of a topic, you can use the following ways. 
-````mdx-code-block - - - -```shell - -$ pulsar-admin topics stats-internal \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/internalStats|operation/getInternalStats?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().getInternalStats(topic); - -``` - - - - -```` - -### Peek messages - -You can peek a number of messages for a specific subscription of a given topic in the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics peek-messages \ - --count 10 --subscription my-subscription \ - persistent://test-tenant/ns1/tp1 \ - -Message ID: 315674752:0 -Properties: { "X-Pulsar-publish-time" : "2015-07-13 17:40:28.451" } -msg-payload - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/position/:messagePosition|operation/peekNthMessage?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String subName = "my-subscription"; -int numMessages = 1; -admin.topics().peekMessages(topic, subName, numMessages); - -``` - - - - -```` - -### Get message by ID - -You can fetch the message with the given ledger ID and entry ID in the following ways. - -````mdx-code-block - - - -```shell - -$ ./bin/pulsar-admin topics get-message-by-id \ - persistent://public/default/my-topic \ - -l 10 -e 0 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/ledger/:ledgerId/entry/:entryId|operation/getMessageById?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -long ledgerId = 10; -long entryId = 10; -admin.topics().getMessageById(topic, ledgerId, entryId); - -``` - - - - -```` - -### Skip messages - -You can skip a number of messages for a specific subscription of a given topic in the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics skip \ - --count 10 --subscription my-subscription \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/skip/:numMessages|operation/skipMessages?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String subName = "my-subscription"; -int numMessages = 1; -admin.topics().skipMessages(topic, subName, numMessages); - -``` - - - - -```` - -### Skip all messages - -You can skip all the old messages for a specific subscription of a given topic. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics skip-all \ - --subscription my-subscription \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/skip_all|operation/skipAllMessages?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String subName = "my-subscription"; -admin.topics().skipAllMessages(topic, subName); - -``` - - - - -```` - -### Reset cursor - -You can reset a subscription cursor position back to the position which is recorded X minutes before. It essentially calculates time and position of cursor at X minutes before and resets it at that position. You can reset the cursor in the following ways. 

````mdx-code-block



```shell

$ pulsar-admin topics reset-cursor \
  --subscription my-subscription --time 10 \
  persistent://test-tenant/ns1/tp1

```



{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/resetcursor/:timestamp|operation/resetCursor?version=@pulsar:version_number@}



```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
String subName = "my-subscription";
long timestamp = 2342343L;
admin.topics().resetCursor(topic, subName, timestamp);

```



````

### Look up topic's owner broker

You can locate the owner broker of the given topic in the following ways.

````mdx-code-block



```shell

$ pulsar-admin topics lookup \
  persistent://test-tenant/ns1/tp1

 "pulsar://broker1.org.com:4480"

```



{@inject: endpoint|GET|/lookup/v2/topic/:schema/:tenant:namespace/:topic|/?version=@pulsar:version_number@}



```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
admin.lookups().lookupTopic(topic);

```



````

### Get bundle

You can get the range of the bundle that the given topic belongs to in the following ways.

````mdx-code-block



```shell

$ pulsar-admin topics bundle-range \
  persistent://test-tenant/ns1/tp1

 "0x00000000_0xffffffff"

```



{@inject: endpoint|GET|/lookup/v2/topic/:topic_domain/:tenant/:namespace/:topic/bundle|/?version=@pulsar:version_number@}



```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
admin.lookups().getBundleRange(topic);

```



````

### Get subscriptions

You can check all subscription names for a given topic in the following ways.

````mdx-code-block



```shell

$ pulsar-admin topics subscriptions \
  persistent://test-tenant/ns1/tp1

 my-subscription

```



{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/subscriptions|operation/getSubscriptions?version=@pulsar:version_number@}



```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
admin.topics().getSubscriptions(topic);

```



````

### Last Message Id

You can get the last committed message ID for a persistent topic. This feature has been available since the 2.3.0 release.

````mdx-code-block



```shell

pulsar-admin topics last-message-id topic-name

```



{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/lastMessageId|operation/getLastMessageId?version=@pulsar:version_number@}



```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
admin.topics().getLastMessageId(topic);

```



````


### Configure deduplication snapshot interval

#### Get deduplication snapshot interval

To get the topic-level deduplication snapshot interval, use one of the following methods.

````mdx-code-block



```

pulsar-admin topics get-deduplication-snapshot-interval options

```



{@inject: endpoint|GET|/admin/v2/topics/:tenant/:namespace/:topic/deduplicationSnapshotInterval|operation/getDeduplicationSnapshotInterval?version=@pulsar:version_number@}



```java

admin.topics().getDeduplicationSnapshotInterval(topic)

```



````

#### Set deduplication snapshot interval

To set the topic-level deduplication snapshot interval, use one of the following methods (see the configuration sketch after this note for the broker-side prerequisite).

> **Prerequisite** `brokerDeduplicationEnabled` must be set to `true`.
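For example, a minimal sketch of enabling the prerequisite in the broker configuration (this assumes the default `conf/broker.conf` layout and GNU sed; adjust for your deployment, and restart the broker afterwards):

```shell

# Flip brokerDeduplicationEnabled from false to true in conf/broker.conf.
$ sed -i 's/^brokerDeduplicationEnabled=false/brokerDeduplicationEnabled=true/' conf/broker.conf

# Verify the result.
$ grep brokerDeduplicationEnabled conf/broker.conf
brokerDeduplicationEnabled=true

```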
- -````mdx-code-block - - - -``` - -pulsar-admin topics set-deduplication-snapshot-interval options - -``` - - - - -{@inject: endpoint|POST|/admin/v2/topics/:tenant/:namespace/:topic/deduplicationSnapshotInterval|operation/setDeduplicationSnapshotInterval?version=@pulsar:version_number@} - - - - -```java - -admin.topics().setDeduplicationSnapshotInterval(topic, 1000) - -``` - - - - -```` - -#### Remove deduplication snapshot interval - -To remove the topic-level deduplication snapshot interval, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics remove-deduplication-snapshot-interval options - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/topics/:tenant/:namespace/:topic/deduplicationSnapshotInterval|operation/deleteDeduplicationSnapshotInterval?version=@pulsar:version_number@} - - - - -```java - -admin.topics().removeDeduplicationSnapshotInterval(topic) - -``` - - - - -```` - - -### Configure inactive topic policies - -#### Get inactive topic policies - -To get the topic-level inactive topic policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics get-inactive-topic-policies options - -``` - - - - -{@inject: endpoint|GET|/admin/v2/topics/:tenant/:namespace/:topic/inactiveTopicPolicies|operation/getInactiveTopicPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getInactiveTopicPolicies(topic) - -``` - - - - -```` - -#### Set inactive topic policies - -To set the topic-level inactive topic policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics set-inactive-topic-policies options - -``` - - - - -{@inject: endpoint|POST|/admin/v2/topics/:tenant/:namespace/:topic/inactiveTopicPolicies|operation/setInactiveTopicPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().setInactiveTopicPolicies(topic, inactiveTopicPolicies) - -``` - - - - -```` - -#### Remove inactive topic policies - -To remove the topic-level inactive topic policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics remove-inactive-topic-policies options - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/topics/:tenant/:namespace/:topic/inactiveTopicPolicies|operation/removeInactiveTopicPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().removeInactiveTopicPolicies(topic) - -``` - - - - -```` - - -### Configure offload policies - -#### Get offload policies - -To get the topic-level offload policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics get-offload-policies options - -``` - - - - -{@inject: endpoint|GET|/admin/v2/topics/:tenant/:namespace/:topic/offloadPolicies|operation/getOffloadPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getOffloadPolicies(topic) - -``` - - - - -```` - -#### Set offload policies - -To set the topic-level offload policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics set-offload-policies options - -``` - - - - -{@inject: endpoint|POST|/admin/v2/topics/:tenant/:namespace/:topic/offloadPolicies|operation/setOffloadPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().setOffloadPolicies(topic, offloadPolicies) - -``` - - - - -```` - -#### Remove offload policies - -To remove the topic-level offload policies, use one of the following methods. 
- -````mdx-code-block - - - -``` - -pulsar-admin topics remove-offload-policies options - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/topics/:tenant/:namespace/:topic/offloadPolicies|operation/removeOffloadPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().removeOffloadPolicies(topic) - -``` - - - - -```` - - -## Manage non-partitioned topics -You can use Pulsar [admin API](admin-api-overview.md) to create, delete and check status of non-partitioned topics. - -### Create -Non-partitioned topics must be explicitly created. When creating a new non-partitioned topic, you need to provide a name for the topic. - -By default, 60 seconds after creation, topics are considered inactive and deleted automatically to avoid generating trash data. To disable this feature, set `brokerDeleteInactiveTopicsEnabled` to `false`. To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to a specific value. - -For more information about the two parameters, see [here](reference-configuration.md#broker). - -You can create non-partitioned topics in the following ways. -````mdx-code-block - - - -When you create non-partitioned topics with the [`create`](reference-pulsar-admin.md#create-3) command, you need to specify the topic name as an argument. - -```shell - -$ bin/pulsar-admin topics create \ - persistent://my-tenant/my-namespace/my-topic - -``` - -:::note - -When you create a non-partitioned topic with the suffix '-partition-' followed by numeric value like 'xyz-topic-partition-x' for the topic name, if a partitioned topic with same suffix 'xyz-topic-partition-y' exists, then the numeric value(x) for the non-partitioned topic must be larger than the number of partitions(y) of the partitioned topic. Otherwise, you cannot create such a non-partitioned topic. - -::: - - - - -{@inject: endpoint|PUT|/admin/v2/:schema/:tenant/:namespace/:topic|operation/createNonPartitionedTopic?version=@pulsar:version_number@} - - - - -```java - -String topicName = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().createNonPartitionedTopic(topicName); - -``` - - - - -```` - -### Delete -You can delete non-partitioned topics in the following ways. -````mdx-code-block - - - -```shell - -$ bin/pulsar-admin topics delete \ - persistent://my-tenant/my-namespace/my-topic - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/:schema/:tenant/:namespace/:topic|operation/deleteTopic?version=@pulsar:version_number@} - - - - -```java - -admin.topics().delete(topic); - -``` - - - - -```` - -### List - -You can get the list of topics under a given namespace in the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics list tenant/namespace -persistent://tenant/namespace/topic1 -persistent://tenant/namespace/topic2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getList?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getList(namespace); - -``` - - - - -```` - -### Stats - -You can check the current statistics of a given topic. The following is an example. For description of each stats, refer to [get stats](#get-stats). 
- -```json - -{ - "msgRateIn": 4641.528542257553, - "msgThroughputIn": 44663039.74947473, - "msgRateOut": 0, - "msgThroughputOut": 0, - "averageMsgSize": 1232439.816728665, - "storageSize": 135532389160, - "publishers": [ - { - "msgRateIn": 57.855383881403576, - "msgThroughputIn": 558994.7078932219, - "averageMsgSize": 613135, - "producerId": 0, - "producerName": null, - "address": null, - "connectedSince": null - } - ], - "subscriptions": { - "my-topic_subscription": { - "msgRateOut": 0, - "msgThroughputOut": 0, - "msgBacklog": 116632, - "type": null, - "msgRateExpired": 36.98245516804671, - "consumers": [] - } - }, - "replication": {} -} - -``` - -You can check the current statistics of a given topic and its connected producers and consumers in the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics stats \ - persistent://test-tenant/namespace/topic \ - --get-precise-backlog - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/stats|operation/getStats?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getStats(topic, false /* is precise backlog */); - -``` - - - - -```` - -## Manage partitioned topics -You can use Pulsar [admin API](admin-api-overview.md) to create, update, delete and check status of partitioned topics. - -### Create - -Partitioned topics must be explicitly created. When creating a new partitioned topic, you need to provide a name and the number of partitions for the topic. - -By default, 60 seconds after creation, topics are considered inactive and deleted automatically to avoid generating trash data. To disable this feature, set `brokerDeleteInactiveTopicsEnabled` to `false`. To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to a specific value. - -For more information about the two parameters, see [here](reference-configuration.md#broker). - -You can create partitioned topics in the following ways. -````mdx-code-block - - - -When you create partitioned topics with the [`create-partitioned-topic`](reference-pulsar-admin.md#create-partitioned-topic) -command, you need to specify the topic name as an argument and the number of partitions using the `-p` or `--partitions` flag. - -```shell - -$ bin/pulsar-admin topics create-partitioned-topic \ - persistent://my-tenant/my-namespace/my-topic \ - --partitions 4 - -``` - -:::note - -If a non-partitioned topic with the suffix '-partition-' followed by a numeric value like 'xyz-topic-partition-10', you can not create a partitioned topic with name 'xyz-topic', because the partitions of the partitioned topic could override the existing non-partitioned topic. To create such partitioned topic, you have to delete that non-partitioned topic first. - -::: - - - - -{@inject: endpoint|PUT|/admin/v2/:schema/:tenant/:namespace/:topic/partitions|operation/createPartitionedTopic?version=@pulsar:version_number@} - - - - -```java - -String topicName = "persistent://my-tenant/my-namespace/my-topic"; -int numPartitions = 4; -admin.topics().createPartitionedTopic(topicName, numPartitions); - -``` - - - - -```` - -### Create missed partitions - -When topic auto-creation is disabled, and you have a partitioned topic without any partitions, you can use the [`create-missed-partitions`](reference-pulsar-admin.md#create-missed-partitions) command to create partitions for the topic. 
- -````mdx-code-block - - - -You can create missed partitions with the [`create-missed-partitions`](reference-pulsar-admin.md#create-missed-partitions) command and specify the topic name as an argument. - -```shell - -$ bin/pulsar-admin topics create-missed-partitions \ - persistent://my-tenant/my-namespace/my-topic \ - -``` - - - - -{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic|operation/createMissedPartitions?version=@pulsar:version_number@} - - - - -```java - -String topicName = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().createMissedPartitions(topicName); - -``` - - - - -```` - -### Get metadata - -Partitioned topics are associated with metadata, you can view it as a JSON object. The following metadata field is available. - -Field | Description -:-----|:------- -`partitions` | The number of partitions into which the topic is divided. - -````mdx-code-block - - - -You can check the number of partitions in a partitioned topic with the [`get-partitioned-topic-metadata`](reference-pulsar-admin.md#get-partitioned-topic-metadata) subcommand. - -```shell - -$ pulsar-admin topics get-partitioned-topic-metadata \ - persistent://my-tenant/my-namespace/my-topic -{ - "partitions": 4 -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/partitions|operation/getPartitionedMetadata?version=@pulsar:version_number@} - - - - -```java - -String topicName = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().getPartitionedTopicMetadata(topicName); - -``` - - - - -```` - -### Update - -You can update the number of partitions for an existing partitioned topic *if* the topic is non-global. However, you can only add the partition number. Decrementing the number of partitions would delete the topic, which is not supported in Pulsar. - -Producers and consumers can find the newly created partitions automatically. - -````mdx-code-block - - - -You can update partitioned topics with the [`update-partitioned-topic`](reference-pulsar-admin.md#update-partitioned-topic) command. - -```shell - -$ pulsar-admin topics update-partitioned-topic \ - persistent://my-tenant/my-namespace/my-topic \ - --partitions 8 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:cluster/:namespace/:destination/partitions|operation/updatePartitionedTopic?version=@pulsar:version_number@} - - - - -```java - -admin.topics().updatePartitionedTopic(topic, numPartitions); - -``` - - - - -```` - -### Delete -You can delete partitioned topics with the [`delete-partitioned-topic`](reference-pulsar-admin.md#delete-partitioned-topic) command, REST API and Java. - -````mdx-code-block - - - -```shell - -$ bin/pulsar-admin topics delete-partitioned-topic \ - persistent://my-tenant/my-namespace/my-topic - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/:schema/:topic/:namespace/:destination/partitions|operation/deletePartitionedTopic?version=@pulsar:version_number@} - - - - -```java - -admin.topics().delete(topic); - -``` - - - - -```` - -### List -You can get the list of partitioned topics under a given namespace in the following ways. 
-````mdx-code-block - - - -```shell - -$ pulsar-admin topics list-partitioned-topics tenant/namespace -persistent://tenant/namespace/topic1 -persistent://tenant/namespace/topic2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getPartitionedTopicList?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getPartitionedTopicList(namespace); - -``` - - - - -```` - -### Stats - -You can check the current statistics of a given partitioned topic. The following is an example. For description of each stats, refer to [get stats](#get-stats). - -Note that in the subscription JSON object, `chuckedMessageRate` is deprecated. Please use `chunkedMessageRate`. Both will be sent in the JSON for now. - -```json - -{ - "msgRateIn" : 999.992947159793, - "msgThroughputIn" : 1070918.4635439808, - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "bytesInCounter" : 270318763, - "msgInCounter" : 252489, - "bytesOutCounter" : 0, - "msgOutCounter" : 0, - "averageMsgSize" : 1070.926056966454, - "msgChunkPublished" : false, - "storageSize" : 270316646, - "backlogSize" : 200921133, - "publishers" : [ { - "msgRateIn" : 999.992947159793, - "msgThroughputIn" : 1070918.4635439808, - "averageMsgSize" : 1070.3333333333333, - "chunkedMessageRate" : 0.0, - "producerId" : 0 - } ], - "subscriptions" : { - "test" : { - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "bytesOutCounter" : 0, - "msgOutCounter" : 0, - "msgRateRedeliver" : 0.0, - "chuckedMessageRate" : 0, - "chunkedMessageRate" : 0, - "msgBacklog" : 144318, - "msgBacklogNoDelayed" : 144318, - "blockedSubscriptionOnUnackedMsgs" : false, - "msgDelayed" : 0, - "unackedMessages" : 0, - "msgRateExpired" : 0.0, - "lastExpireTimestamp" : 0, - "lastConsumedFlowTimestamp" : 0, - "lastConsumedTimestamp" : 0, - "lastAckedTimestamp" : 0, - "consumers" : [ ], - "isDurable" : true, - "isReplicated" : false - } - }, - "replication" : { }, - "metadata" : { - "partitions" : 3 - }, - "partitions" : { } -} - -``` - -You can check the current statistics of a given partitioned topic and its connected producers and consumers in the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics partitioned-stats \ - persistent://test-tenant/namespace/topic \ - --per-partition - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/partitioned-stats|operation/getPartitionedStats?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getPartitionedStats(topic, true /* per partition */, false /* is precise backlog */); - -``` - - - - -```` - -### Internal stats - -You can check the detailed statistics of a topic. The following is an example. For description of each stats, refer to [get internal stats](#get-internal-stats). 
- -```json - -{ - "entriesAddedCounter": 20449518, - "numberOfEntries": 3233, - "totalSize": 331482, - "currentLedgerEntries": 3233, - "currentLedgerSize": 331482, - "lastLedgerCreatedTimestamp": "2016-06-29 03:00:23.825", - "lastLedgerCreationFailureTimestamp": null, - "waitingCursorsCount": 1, - "pendingAddEntriesCount": 0, - "lastConfirmedEntry": "324711539:3232", - "state": "LedgerOpened", - "ledgers": [ - { - "ledgerId": 324711539, - "entries": 0, - "size": 0 - } - ], - "cursors": { - "my-subscription": { - "markDeletePosition": "324711539:3133", - "readPosition": "324711539:3233", - "waitingReadOp": true, - "pendingReadOps": 0, - "messagesConsumedCounter": 20449501, - "cursorLedger": 324702104, - "cursorLedgerLastEntry": 21, - "individuallyDeletedMessages": "[(324711539:3134‥324711539:3136], (324711539:3137‥324711539:3140], ]", - "lastLedgerSwitchTimestamp": "2016-06-29 01:30:19.313", - "state": "Open" - } - } -} - -``` - -You can get the internal stats for the partitioned topic in the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics stats-internal \ - persistent://test-tenant/namespace/topic - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/internalStats|operation/getInternalStats?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getInternalStats(topic); - -``` - - - - -```` - -## Publish to partitioned topics - -By default, Pulsar topics are served by a single broker, which limits the maximum throughput of a topic. *Partitioned topics* can span multiple brokers and thus allow for higher throughput. - -You can publish to partitioned topics using Pulsar client libraries. When publishing to partitioned topics, you must specify a routing mode. If you do not specify any routing mode when you create a new producer, the round robin routing mode is used. - -### Routing mode - -You can specify the routing mode in the ProducerConfiguration object that you use to configure your producer. The routing mode determines which partition(internal topic) that each message should be published to. - -The following {@inject: javadoc:MessageRoutingMode:/client/org/apache/pulsar/client/api/MessageRoutingMode} options are available. - -Mode | Description -:--------|:------------ -`RoundRobinPartition` | If no key is provided, the producer publishes messages across all partitions in round-robin policy to achieve the maximum throughput. Round-robin is not done per individual message, round-robin is set to the same boundary of batching delay to ensure that batching is effective. If a key is specified on the message, the partitioned producer hashes the key and assigns message to a particular partition. This is the default mode. -`SinglePartition` | If no key is provided, the producer picks a single partition randomly and publishes all messages into that partition. If a key is specified on the message, the partitioned producer hashes the key and assigns message to a particular partition. -`CustomPartition` | Use custom message router implementation that is called to determine the partition for a particular message. You can create a custom routing mode by using the Java client and implementing the {@inject: javadoc:MessageRouter:/client/org/apache/pulsar/client/api/MessageRouter} interface. 
- -The following is an example: - -```java - -String pulsarBrokerRootUrl = "pulsar://localhost:6650"; -String topic = "persistent://my-tenant/my-namespace/my-topic"; - -PulsarClient pulsarClient = PulsarClient.builder().serviceUrl(pulsarBrokerRootUrl).build(); -Producer producer = pulsarClient.newProducer() - .topic(topic) - .messageRoutingMode(MessageRoutingMode.SinglePartition) - .create(); -producer.send("Partitioned topic message".getBytes()); - -``` - -### Custom message router - -To use a custom message router, you need to provide an implementation of the {@inject: javadoc:MessageRouter:/client/org/apache/pulsar/client/api/MessageRouter} interface, which has just one `choosePartition` method: - -```java - -public interface MessageRouter extends Serializable { - int choosePartition(Message msg); -} - -``` - -The following router routes every message to partition 10: - -```java - -public class AlwaysTenRouter implements MessageRouter { - public int choosePartition(Message msg) { - return 10; - } -} - -``` - -With that implementation, you can send - -```java - -String pulsarBrokerRootUrl = "pulsar://localhost:6650"; -String topic = "persistent://my-tenant/my-cluster-my-namespace/my-topic"; - -PulsarClient pulsarClient = PulsarClient.builder().serviceUrl(pulsarBrokerRootUrl).build(); -Producer producer = pulsarClient.newProducer() - .topic(topic) - .messageRouter(new AlwaysTenRouter()) - .create(); -producer.send("Partitioned topic message".getBytes()); - -``` - -### How to choose partitions when using a key -If a message has a key, it supersedes the round robin routing policy. The following example illustrates how to choose the partition when using a key. - -```java - -// If the message has a key, it supersedes the round robin routing policy - if (msg.hasKey()) { - return signSafeMod(hash.makeHash(msg.getKey()), topicMetadata.numPartitions()); - } - - if (isBatchingEnabled) { // if batching is enabled, choose partition on `partitionSwitchMs` boundary. - long currentMs = clock.millis(); - return signSafeMod(currentMs / partitionSwitchMs + startPtnIdx, topicMetadata.numPartitions()); - } else { - return signSafeMod(PARTITION_INDEX_UPDATER.getAndIncrement(this), topicMetadata.numPartitions()); - } - -``` - -## Manage subscriptions -You can use [Pulsar admin API](admin-api-overview.md) to create, check, and delete subscriptions. -### Create subscription -You can create a subscription for a topic using one of the following methods. -````mdx-code-block - - - -```shell - -pulsar-admin topics create-subscription \ ---subscription my-subscription \ -persistent://test-tenant/ns1/tp1 - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/persistent/:tenant/:namespace/:topic/subscription/:subscription|operation/createSubscriptions?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String subscriptionName = "my-subscription"; -admin.topics().createSubscription(topic, subscriptionName, MessageId.latest); - -``` - - - - -```` -### Get subscription -You can check all subscription names for a given topic using one of the following methods. 
````mdx-code-block



```shell

pulsar-admin topics subscriptions \
persistent://test-tenant/ns1/tp1

my-subscription

```



{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/subscriptions|operation/getSubscriptions?version=@pulsar:version_number@}



```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
admin.topics().getSubscriptions(topic);

```



````
### Unsubscribe subscription
When a subscription no longer processes messages, you can unsubscribe it using one of the following methods.
````mdx-code-block



```shell

pulsar-admin topics unsubscribe \
--subscription my-subscription \
persistent://test-tenant/ns1/tp1

```



{@inject: endpoint|DELETE|/admin/v2/persistent/:tenant/:namespace/:topic/subscription/:subscription|operation/deleteSubscription?version=@pulsar:version_number@}



```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
String subscriptionName = "my-subscription";
admin.topics().deleteSubscription(topic, subscriptionName);

```



````
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/administration-dashboard.md b/site2/website/versioned_docs/version-2.8.0-deprecated/administration-dashboard.md
deleted file mode 100644
index 25f976609b40bf..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/administration-dashboard.md
+++ /dev/null
@@ -1,76 +0,0 @@
---
id: administration-dashboard
title: Pulsar dashboard
sidebar_label: "Dashboard"
original_id: administration-dashboard
---

:::note

Pulsar dashboard is deprecated. If you want to manage and monitor the stats of your topics, use [Pulsar Manager](administration-pulsar-manager.md).

:::

Pulsar dashboard is a web application that enables users to monitor current stats for all [topics](reference-terminology.md#topic) in tabular form.

The dashboard is a data collector that polls stats from all the brokers in a Pulsar instance (across multiple clusters) and stores all the information in a [PostgreSQL](https://www.postgresql.org/) database.

You can use the [Django](https://www.djangoproject.com) web app to render the collected data.

## Install

The easiest way to use the dashboard is to run it inside a [Docker](https://www.docker.com/products/docker) container.

```shell

$ SERVICE_URL=http://broker.example.com:8080/
$ docker run -p 80:80 \
  -e SERVICE_URL=$SERVICE_URL \
  apachepulsar/pulsar-dashboard:@pulsar:version@

```

You can find the {@inject: github:Dockerfile:/dashboard/Dockerfile} in the `dashboard` directory and build an image from scratch as well:

```shell

$ docker build -t apachepulsar/pulsar-dashboard dashboard

```

If token authentication is enabled:
> The provided token should have super-user access.

```shell

$ SERVICE_URL=http://broker.example.com:8080/
$ JWT_TOKEN=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c
$ docker run -p 80:80 \
  -e SERVICE_URL=$SERVICE_URL \
  -e JWT_TOKEN=$JWT_TOKEN \
  apachepulsar/pulsar-dashboard

```


You need to specify only one service URL for a Pulsar cluster. Internally, the collector figures out all the existing clusters and the brokers from which it needs to pull the metrics. If you connect the dashboard to Pulsar running in standalone mode, the URL is `http://<broker-ip>:8080` by default.
`<broker-ip>` is the IP address or hostname of the machine running Pulsar standalone. The IP address or hostname should be accessible from the Docker instance running the dashboard.

Once the Docker container runs, the web dashboard is accessible via `localhost` or whichever host Docker uses.

> The `SERVICE_URL` that the dashboard uses needs to be reachable from inside the Docker container.

If the Pulsar service runs in standalone mode on `localhost`, the `SERVICE_URL` has to
be the IP of the machine.

Similarly, given that Pulsar standalone advertises itself with localhost by default, you need to
explicitly set the advertise address to the host IP. For example:

```shell

$ bin/pulsar standalone --advertised-address 1.2.3.4

```

### Known issues

Currently, only Pulsar Token [authentication](security-overview.md#authentication-providers) is supported.
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/administration-geo.md b/site2/website/versioned_docs/version-2.8.0-deprecated/administration-geo.md
deleted file mode 100644
index 29edeac4811ac7..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/administration-geo.md
+++ /dev/null
@@ -1,215 +0,0 @@
---
id: administration-geo
title: Pulsar geo-replication
sidebar_label: "Geo-replication"
original_id: administration-geo
---

*Geo-replication* is the replication of persistently stored message data across multiple clusters of a Pulsar instance.

## How geo-replication works

The diagram below illustrates the process of geo-replication across Pulsar clusters:

![Replication Diagram](/assets/geo-replication.png)

In this diagram, whenever **P1**, **P2**, and **P3** producers publish messages to the **T1** topic on **Cluster-A**, **Cluster-B**, and **Cluster-C** clusters respectively, those messages are instantly replicated across clusters. Once the messages are replicated, **C1** and **C2** consumers can consume those messages from their respective clusters.

Without geo-replication, **C1** and **C2** consumers are not able to consume messages that the **P3** producer publishes.

## Geo-replication and Pulsar properties

You must enable geo-replication on a per-tenant basis in Pulsar. You can enable geo-replication between clusters only when a tenant has been created that allows access to both clusters.

Although geo-replication must be enabled between two clusters, it is actually managed at the namespace level. You must complete the following tasks to enable geo-replication for a namespace:

* [Enable geo-replication namespaces](#enable-geo-replication-namespaces)
* Configure that namespace to replicate across two or more provisioned clusters

Any message published on *any* topic in that namespace is replicated to all clusters in the specified set.

## Local persistence and forwarding

When messages are produced on a Pulsar topic, messages are first persisted in the local cluster, and then forwarded asynchronously to the remote clusters.

In normal cases, when there are no connectivity issues, messages are replicated immediately, at the same time as they are dispatched to local consumers. Typically, the network [round-trip time](https://en.wikipedia.org/wiki/Round-trip_delay_time) (RTT) between the remote regions defines the end-to-end delivery latency.

Applications can create producers and consumers in any of the clusters, even when the remote clusters are not reachable (like during a network partition).
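As a minimal sketch (the broker URL, tenant, and namespace below are placeholders), an application in each region only talks to its local cluster, and the brokers take care of forwarding to the remote clusters:

```shell

# Produce through the local us-west endpoint only; replication to the other
# clusters configured for the namespace happens asynchronously, broker-side.
$ bin/pulsar-client --url pulsar://us-west-broker.example.com:6650 \
  produce persistent://my-tenant/my-namespace/my-topic \
  --messages "hello from us-west"

```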
Producers and consumers can publish messages to and consume messages from any cluster in a Pulsar instance. However, subscriptions are not only local to the cluster where they are created; they can also be transferred between clusters once replicated subscriptions are enabled. With replicated subscriptions, subscription state is kept in sync, so a topic can be asynchronously replicated across multiple geographical regions and, in case of failover, a consumer can restart consuming messages from the failure point in a different cluster.

In the aforementioned example, the **T1** topic is replicated among three clusters, **Cluster-A**, **Cluster-B**, and **Cluster-C**.

All messages produced in any of the three clusters are delivered to all subscriptions in the other clusters. In this case, **C1** and **C2** consumers receive all messages that **P1**, **P2**, and **P3** producers publish. Ordering is still guaranteed on a per-producer basis.

## Configure replication

As stated in the [Geo-replication and Pulsar properties](#geo-replication-and-pulsar-properties) section, geo-replication in Pulsar is managed at the [tenant](reference-terminology.md#tenant) level.

The following example connects three clusters: **us-east**, **us-west**, and **us-cent**.

### Connect replication clusters

To replicate data among clusters, you need to configure each cluster to connect to the others. You can use the [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/) tool to create a connection.

**Example**

Suppose that you have 3 replication clusters: `us-west`, `us-cent`, and `us-east`.

1. Configure the connection from `us-west` to `us-east`.

   Run the following command on `us-west`.

   ```shell

   $ bin/pulsar-admin clusters create \
     --broker-url pulsar://<DNS-OF-US-EAST>:<PORT> \
     --url http://<DNS-OF-US-EAST>:<PORT> \
     us-east

   ```

   :::tip

   - If you want to use a secure connection for a cluster, you can use the flags `--broker-url-secure` and `--url-secure`. For more information, see [pulsar-admin clusters create](https://pulsar.apache.org/tools/pulsar-admin/).
   - Different clusters may use different authentication settings. You can use the `--auth-plugin` and `--auth-parameters` flags together to set cluster authentication, which overrides `brokerClientAuthenticationPlugin` and `brokerClientAuthenticationParameters` if `authenticationEnabled` is set to `true` in `broker.conf` and `standalone.conf`. For more information, see [authentication and authorization](concepts-authentication.md).

   :::

2. Configure the connection from `us-west` to `us-cent`.

   Run the following command on `us-west`.

   ```shell

   $ bin/pulsar-admin clusters create \
     --broker-url pulsar://<DNS-OF-US-CENT>:<PORT> \
     --url http://<DNS-OF-US-CENT>:<PORT> \
     us-cent

   ```

3. Run similar commands on `us-east` and `us-cent` to create connections among clusters.

### Grant permissions to properties

To replicate to a cluster, the tenant needs permission to use that cluster. You can grant permission to the tenant when you create the tenant, or grant it later.

Specify all the intended clusters when you create a tenant:

```shell

$ bin/pulsar-admin tenants create my-tenant \
  --admin-roles my-admin-role \
  --allowed-clusters us-west,us-east,us-cent

```

To update permissions of an existing tenant, use `update` instead of `create`.

### Enable geo-replication namespaces

You can create a namespace with the following sample command.
- -```shell - -$ bin/pulsar-admin namespaces create my-tenant/my-namespace - -``` - -Initially, the namespace is not assigned to any cluster. You can assign the namespace to clusters using the `set-clusters` subcommand: - -```shell - -$ bin/pulsar-admin namespaces set-clusters my-tenant/my-namespace \ - --clusters us-west,us-east,us-cent - -``` - -You can change the replication clusters for a namespace at any time, without disruption to ongoing traffic. Replication channels are immediately set up or stopped in all clusters as soon as the configuration changes. - -### Use topics with geo-replication - -Once you create a geo-replication namespace, any topics that producers or consumers create within that namespace is replicated across clusters. Typically, each application uses the `serviceUrl` for the local cluster. - -#### Selective replication - -By default, messages are replicated to all clusters configured for the namespace. You can restrict replication selectively by specifying a replication list for a message, and then that message is replicated only to the subset in the replication list. - -The following is an example for the [Java API](client-libraries-java.md). Note the use of the `setReplicationClusters` method when you construct the {@inject: javadoc:Message:/client/org/apache/pulsar/client/api/Message} object: - -```java - -List restrictReplicationTo = Arrays.asList( - "us-west", - "us-east" -); - -Producer producer = client.newProducer() - .topic("some-topic") - .create(); - -producer.newMessage() - .value("my-payload".getBytes()) - .setReplicationClusters(restrictReplicationTo) - .send(); - -``` - -#### Topic stats - -Topic-specific statistics for geo-replication topics are available via the [`pulsar-admin`](reference-pulsar-admin.md) tool and {@inject: rest:REST:/} API: - -```shell - -$ bin/pulsar-admin persistent stats persistent://my-tenant/my-namespace/my-topic - -``` - -Each cluster reports its own local stats, including the incoming and outgoing replication rates and backlogs. - -#### Delete a geo-replication topic - -Given that geo-replication topics exist in multiple regions, directly deleting a geo-replication topic is not possible. Instead, you should rely on automatic topic garbage collection. - -In Pulsar, a topic is automatically deleted when the topic meets the following three conditions: -- no producers or consumers are connected to it; -- no subscriptions to it; -- no more messages are kept for retention. -For geo-replication topics, each region uses a fault-tolerant mechanism to decide when deleting the topic locally is safe. - -You can explicitly disable topic garbage collection by setting `brokerDeleteInactiveTopicsEnabled` to `false` in your [broker configuration](reference-configuration.md#broker). - -To delete a geo-replication topic, close all producers and consumers on the topic, and delete all of its local subscriptions in every replication cluster. When Pulsar determines that no valid subscription for the topic remains across the system, it will garbage collect the topic. - -## Replicated subscriptions - -Pulsar supports replicated subscriptions, so you can keep subscription state in sync, within a sub-second timeframe, in the context of a topic that is being asynchronously replicated across multiple geographical regions. - -In case of failover, a consumer can restart consuming from the failure point in a different cluster. - -### Enable replicated subscription - -Replicated subscription is disabled by default. 
You can enable replicated subscription when creating a consumer. - -```java - -Consumer consumer = client.newConsumer(Schema.STRING) - .topic("my-topic") - .subscriptionName("my-subscription") - .replicateSubscriptionState(true) - .subscribe(); - -``` - -### Advantages - - * It is easy to implement the logic. - * You can choose to enable or disable replicated subscription. - * When you enable it, the overhead is low, and it is easy to configure. - * When you disable it, the overhead is zero. - -### Limitations - -* When you enable replicated subscription, you're creating a consistent distributed snapshot to establish an association between message ids from different clusters. The snapshots are taken periodically. The default value is `1 second`. It means that a consumer failing over to a different cluster can potentially receive 1 second of duplicates. You can also configure the frequency of the snapshot in the `broker.conf` file. -* Only the base line cursor position is synced in replicated subscriptions while the individual acknowledgments are not synced. This means the messages acknowledged out-of-order could end up getting delivered again, in the case of a cluster failover. \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/administration-isolation.md b/site2/website/versioned_docs/version-2.8.0-deprecated/administration-isolation.md deleted file mode 100644 index d2de042a2e7415..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/administration-isolation.md +++ /dev/null @@ -1,115 +0,0 @@ ---- -id: administration-isolation -title: Pulsar isolation -sidebar_label: "Pulsar isolation" -original_id: administration-isolation ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -In an organization, a Pulsar instance provides services to multiple teams. When organizing the resources across multiple teams, you want to make a suitable isolation plan to avoid the resource competition between different teams and applications and provide high-quality messaging service. In this case, you need to take resource isolation into consideration and weigh your intended actions against expected and unexpected consequences. - -To enforce resource isolation, you can use the Pulsar isolation policy, which allows you to allocate resources (**broker** and **bookie**) for the namespace. - -## Broker isolation - -In Pulsar, when namespaces (more specifically, namespace bundles) are assigned dynamically to brokers, the namespace isolation policy limits the set of brokers that can be used for assignment. Before topics are assigned to brokers, you can set the namespace isolation policy with a primary or a secondary regex to select desired brokers. - -You can set a namespace isolation policy for a cluster using one of the following methods. - -````mdx-code-block - - - - -``` - -pulsar-admin ns-isolation-policy set options - -``` - -For more information about the command `pulsar-admin ns-isolation-policy set options`, see [here](https://pulsar.apache.org/tools/pulsar-admin/). 
- -**Example** - -```shell - -bin/pulsar-admin ns-isolation-policy set \ ---auto-failover-policy-type min_available \ ---auto-failover-policy-params min_limit=1,usage_threshold=80 \ ---namespaces my-tenant/my-namespace \ ---primary 10.193.216.* my-cluster policy-name - -``` - - - - -[PUT /admin/v2/namespaces/{tenant}/{namespace}](https://pulsar.apache.org/admin-rest-api/?version=master&apiversion=v2#operation/createNamespace) - - - - -For how to set namespace isolation policy using Java admin API, see [here](https://github.com/apache/pulsar/blob/master/pulsar-client-admin/src/main/java/org/apache/pulsar/client/admin/internal/NamespacesImpl.java#L251). - - - - -```` - -## Bookie isolation - -A namespace can be isolated into user-defined groups of bookies, which guarantees all the data that belongs to the namespace is stored in desired bookies. The bookie affinity group uses the BookKeeper [rack-aware placement policy](https://bookkeeper.apache.org/docs/latest/api/javadoc/org/apache/bookkeeper/client/EnsemblePlacementPolicy.html) and it is a way to feed rack information which is stored as JSON format in znode. - -You can set a bookie affinity group using one of the following methods. - -````mdx-code-block - - - - -``` - -pulsar-admin namespaces set-bookie-affinity-group options - -``` - -For more information about the command `pulsar-admin namespaces set-bookie-affinity-group options`, see [here](https://pulsar.apache.org/tools/pulsar-admin/). - -**Example** - -```shell - -bin/pulsar-admin bookies set-bookie-rack \ ---bookie 127.0.0.1:3181 \ ---hostname 127.0.0.1:3181 \ ---group group-bookie1 \ ---rack rack1 - -bin/pulsar-admin namespaces set-bookie-affinity-group public/default \ ---primary-group group-bookie1 - -``` - - - - -[POST /admin/v2/namespaces/{tenant}/{namespace}/persistence/bookieAffinity](https://pulsar.apache.org/admin-rest-api/?version=master&apiversion=v2#operation/setBookieAffinityGroup) - - - - -For how to set bookie affinity group for a namespace using Java admin API, see [here](https://github.com/apache/pulsar/blob/master/pulsar-client-admin/src/main/java/org/apache/pulsar/client/admin/internal/NamespacesImpl.java#L1164). - - - - -```` diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/administration-load-balance.md b/site2/website/versioned_docs/version-2.8.0-deprecated/administration-load-balance.md deleted file mode 100644 index 890ebf661624ce..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/administration-load-balance.md +++ /dev/null @@ -1,256 +0,0 @@ ---- -id: administration-load-balance -title: Pulsar load balance -sidebar_label: "Load balance" -original_id: administration-load-balance ---- - -## Load balance across Pulsar brokers - -Pulsar is an horizontally scalable messaging system, so the traffic -in a logical cluster must be spread across all the available Pulsar brokers as evenly as possible, which is a core requirement. - -You can use multiple settings and tools to control the traffic distribution which require a bit of context to understand how the traffic is managed in Pulsar. Though, in most cases, the core requirement mentioned above is true out of the box and you should not worry about it. - -## Pulsar load manager architecture - -The following part introduces the basic architecture of the Pulsar load manager. - -### Assign topics to brokers dynamically - -Topics are dynamically assigned to brokers based on the load conditions of all brokers in the cluster. 
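To observe this assignment on a running cluster, you can ask which broker currently owns a topic with the `pulsar-admin topics lookup` command; the tenant, namespace, and topic names below are placeholders:

```shell
# Prints the service URL of the broker that currently owns the topic.
# Repeating the lookup after a broker failure or an unload may return
# a different broker.
$ bin/pulsar-admin topics lookup persistent://my-tenant/my-namespace/my-topic
```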
- -When a client starts using new topics that are not assigned to any broker, a process is triggered to choose the best-suited broker to acquire ownership of these topics according to the load conditions. - -In case of partitioned topics, different partitions are assigned to different brokers. Here "topic" means either a non-partitioned topic or one partition of a topic. - -The assignment is "dynamic" because it can change quickly. For example, if the broker owning a topic crashes, the topic is reassigned immediately to another broker. Another scenario is that the broker owning a topic becomes overloaded. In this case, the topic is reassigned to a less loaded broker. - -The stateless nature of brokers makes the dynamic assignment possible, so you can quickly expand or shrink the cluster based on usage. - -#### Assignment granularity - -The assignment of topics or partitions to brokers is not done at the topic or partition level, but at the bundle level (a higher level). The reason is to amortize the amount of information that needs to be tracked. Based on CPU, memory, traffic load, and other metrics, topics are assigned to a particular broker dynamically. - -Instead of individual topic or partition assignment, each broker takes ownership of a subset of the topics for a namespace. This subset is called a "*bundle*" and effectively this subset is a sharding mechanism. - -The namespace is the "administrative" unit: many config knobs or operations are done at the namespace level. - -For assignment, a namespace is sharded into a list of "bundles", with each bundle comprising -a portion of the overall hash range of the namespace. - -Topics are assigned to a particular bundle by taking the hash of the topic name and checking which -bundle the hash falls into. - -Each bundle is independent of the others and thus is independently assigned to different brokers. - -### Create namespaces and bundles - -When you create a new namespace, it is set to use the default number of bundles. You can set this in `conf/broker.conf`: - -```properties - -# When a namespace is created without specifying the number of bundles, this -# value will be used as the default
defaultNumberOfNamespaceBundles=4 - -``` - -You can either change the system default, or override it when you create a new namespace: - -```shell - -$ bin/pulsar-admin namespaces create my-tenant/my-namespace --clusters us-west --bundles 16 - -``` - -With this command, you create a namespace with 16 initial bundles. Therefore, the topics for this namespace can immediately be spread across up to 16 brokers. - -In general, if you know the expected traffic and number of topics in advance, it is better to start with a reasonable number of bundles instead of waiting for the system to auto-correct the distribution. - -On the same note, it is beneficial to start with more bundles than the number of brokers, because of the hashing nature of the distribution of topics into bundles. For example, for a namespace with 1000 topics, using something like 64 bundles achieves a good distribution of traffic across 16 brokers. - -### Unload topics and bundles - -You can "unload" a topic in Pulsar with an admin operation. Unloading means closing the topic, -releasing ownership, and reassigning the topic to a new broker, based on current load. - -When unloading happens, the client experiences a small latency blip, typically in the order of tens of milliseconds, while the topic is reassigned. 
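Before unloading anything, it can help to see how a namespace is currently sharded. A quick way — shown here with placeholder tenant and namespace names — is to list the bundle ranges of the namespace:

```shell
# Lists the hash-range boundaries of the bundles that make up the namespace.
$ bin/pulsar-admin namespaces bundles my-tenant/my-namespace
```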
- -Unloading is the mechanism that the load-manager uses to perform the load shedding, but you can also trigger the unloading manually, for example to correct the assignments and redistribute traffic even before having any broker overloaded. - -Unloading a topic has no effect on the assignment, but just closes and reopens the particular topic: - -```shell - -pulsar-admin topics unload persistent://tenant/namespace/topic - -``` - -To unload all topics for a namespace and trigger reassignments: - -```shell - -pulsar-admin namespaces unload tenant/namespace - -``` - -### Split namespace bundles - -Since the load for the topics in a bundle might change over time, or predicting upfront might just be hard, brokers can split bundles into two. The new smaller bundles can be reassigned to different brokers. - -The splitting happens based on some tunable thresholds. Any existing bundle that exceeds any of the threshold is a candidate to be split. By default the newly split bundles are also immediately offloaded to other brokers, to facilitate the traffic distribution. - -```properties - -# enable/disable namespace bundle auto split -loadBalancerAutoBundleSplitEnabled=true - -# enable/disable automatic unloading of split bundles -loadBalancerAutoUnloadSplitBundlesEnabled=true - -# maximum topics in a bundle, otherwise bundle split will be triggered -loadBalancerNamespaceBundleMaxTopics=1000 - -# maximum sessions (producers + consumers) in a bundle, otherwise bundle split will be triggered -loadBalancerNamespaceBundleMaxSessions=1000 - -# maximum msgRate (in + out) in a bundle, otherwise bundle split will be triggered -loadBalancerNamespaceBundleMaxMsgRate=30000 - -# maximum bandwidth (in + out) in a bundle, otherwise bundle split will be triggered -loadBalancerNamespaceBundleMaxBandwidthMbytes=100 - -# maximum number of bundles in a namespace (for auto-split) -loadBalancerNamespaceMaximumBundles=128 - -``` - -### Shed load automatically - -The support for automatic load shedding is available in the load manager of Pulsar. This means that whenever the system recognizes a particular broker is overloaded, the system forces some traffic to be reassigned to less loaded brokers. - -When a broker is identified as overloaded, the broker forces to "unload" a subset of the bundles, the -ones with higher traffic, that make up for the overload percentage. - -For example, the default threshold is 85% and if a broker is over quota at 95% CPU usage, then the broker unloads the percent difference plus a 5% margin: `(95% - 85%) + 5% = 15%`. - -Given the selection of bundles to offload is based on traffic (as a proxy measure for cpu, network -and memory), broker unloads bundles for at least 15% of traffic. - -The automatic load shedding is enabled by default and you can disable the automatic load shedding with this setting: - -```properties - -# Enable/disable automatic bundle unloading for load-shedding -loadBalancerSheddingEnabled=true - -``` - -Additional settings that apply to shedding: - -```properties - -# Load shedding interval. Broker periodically checks whether some traffic should be offload from -# some over-loaded broker to other under-loaded brokers -loadBalancerSheddingIntervalMinutes=1 - -# Prevent the same topics to be shed and moved to other brokers more that once within this timeframe -loadBalancerSheddingGracePeriodMinutes=30 - -``` - -#### Broker overload thresholds - -The determinations of when a broker is overloaded is based on threshold of CPU, network and memory usage. 
Whenever either of those metrics reaches the threshold, the system triggers the shedding (if enabled). - -By default, overload threshold is set at 85%: - -```properties - -# Usage threshold to determine a broker as over-loaded -loadBalancerBrokerOverloadedThresholdPercentage=85 - -``` - -Pulsar gathers the usage stats from the system metrics. - -In case of network utilization, in some cases the network interface speed that Linux reports is -not correct and needs to be manually overridden. This is the case in AWS EC2 instances with 1Gbps -NIC speed for which the OS reports 10Gbps speed. - -Because of the incorrect max speed, the Pulsar load manager might think the broker has not reached the NIC capacity, while in fact the broker already uses all the bandwidth and the traffic is slowed down. - -You can use the following setting to correct the max NIC speed: - -```properties - -# Override the auto-detection of the network interfaces max speed. -# This option is useful in some environments (eg: EC2 VMs) where the max speed -# reported by Linux is not reflecting the real bandwidth available to the broker. -# Since the network usage is employed by the load manager to decide when a broker -# is overloaded, it is important to make sure the info is correct or override it -# with the right value here. The configured value can be a double (eg: 0.8) and that -# can be used to trigger load-shedding even before hitting on NIC limits. -loadBalancerOverrideBrokerNicSpeedGbps= - -``` - -When the value is empty, Pulsar uses the value that the OS reports. - -### Distribute anti-affinity namespaces across failure domains - -When your application has multiple namespaces and you want one of them available all the time to avoid any downtime, you can group these namespaces and distribute them across different [failure domains](reference-terminology.md#failure-domain) and different brokers. Thus, if one of the failure domains is down (due to release rollout or brokers restart), it only disrupts namespaces owned by that specific failure domain and the rest of the namespaces owned by other domains remain available without any impact. - -Such a group of namespaces has anti-affinity to each other, that is, all the namespaces in this group are [anti-affinity namespaces](reference-terminology.md#anti-affinity-namespaces) and are distributed to different failure domains in a load-balanced manner. - -As illustrated in the following figure, Pulsar has 2 failure domains (Domain1 and Domain2) and each domain has 2 brokers in it. You can create an anti-affinity namespace group that has 4 namespaces in it, and all the 4 namespaces have anti-affinity to each other. The load manager tries to distribute namespaces evenly across all the brokers in the same domain. Since each domain has 2 brokers, every broker owns one namespace from this anti-affinity namespace group, and you can see each domain owns 2 namespaces, and each broker owns 1 namespace. - -![Distribute anti-affinity namespaces across failure domains](/assets/anti-affinity-namespaces-across-failure-domains.svg) - -The load manager follows an even distribution policy across failure domains to assign anti-affinity namespaces. The following table outlines the even-distributed assignment sequence illustrated in the above figure. 
- -| Assignment sequence | Namespace | Failure domain candidates | Broker candidates | Selected broker | -|:---|:------------|:------------------|:------------------------------------|:-----------------| -| 1 | Namespace1 | Domain1, Domain2 | Broker1, Broker2, Broker3, Broker4 | Domain1:Broker1 | -| 2 | Namespace2 | Domain2 | Broker3, Broker4 | Domain2:Broker3 | -| 3 | Namespace3 | Domain1, Domain2 | Broker2, Broker4 | Domain1:Broker2 | -| 4 | Namespace4 | Domain2 | Broker4 | Domain2:Broker4 | - -:::tip - -* Each namespace belongs to only one anti-affinity group. If a namespace with an existing anti-affinity assignment is assigned to another anti-affinity group, the original assignment is dropped. - -* If there are more anti-affinity namespaces than failure domains, the load manager distributes namespaces evenly across all the domains, and also every domain distributes namespaces evenly across all the brokers under that domain. - -::: - -#### Create a failure domain and register brokers - -:::note - -One broker can only be registered to a single failure domain. - -::: - -To create a domain under a specific cluster and register brokers, run the following command: - -```bash - -pulsar-admin clusters create-failure-domain <cluster-name> --domain-name <domain-name> --broker-list <broker-list> - -``` - -You can also view, update, and delete domains under a specific cluster. For more information, refer to [Pulsar admin doc](/tools/pulsar-admin/). - -#### Create an anti-affinity namespace group - -An anti-affinity group is created automatically when the first namespace is assigned to the group. To assign a namespace to an anti-affinity group, run the following command. It sets an anti-affinity group name for a namespace. - -```bash - -pulsar-admin namespaces set-anti-affinity-group <tenant/namespace> --group <group-name> - -``` - -For more information about `anti-affinity-group` related commands, refer to [Pulsar admin doc](/tools/pulsar-admin/). diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/administration-proxy.md b/site2/website/versioned_docs/version-2.8.0-deprecated/administration-proxy.md deleted file mode 100644 index c046ed34d46b22..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/administration-proxy.md +++ /dev/null @@ -1,86 +0,0 @@ ---- -id: administration-proxy -title: Pulsar proxy -sidebar_label: "Pulsar proxy" -original_id: administration-proxy ---- - -Pulsar proxy is an optional gateway. Pulsar proxy is used when direct connections between clients and Pulsar brokers are either infeasible or undesirable. For example, when you run Pulsar in a cloud environment or on [Kubernetes](https://kubernetes.io) or an analogous platform, you can run Pulsar proxy. - -## Configure the proxy - -Before using the proxy, you need to configure it with the broker addresses in the cluster. You can configure the proxy to connect directly to service discovery, or specify a broker URL in the configuration. - -### Use service discovery - -Pulsar uses [ZooKeeper](https://zookeeper.apache.org) for service discovery. To connect the proxy to ZooKeeper, specify the following in `conf/proxy.conf`. - -```properties - -zookeeperServers=zk-0,zk-1,zk-2 -configurationStoreServers=zk-0:2184,zk-remote:2184 - -``` - -> To use service discovery, you need to open the network ACLs, so the proxy can connect to the ZooKeeper nodes through the ZooKeeper client port (port `2181`) and the configuration store client port (port `2184`). - -> However, it is not secure to use service discovery. 
Because if the network ACL is open, when someone compromises a proxy, they have full access to ZooKeeper. - -### Use broker URLs - -It is more secure to specify a URL to connect to the brokers. - -Proxy authorization requires access to ZooKeeper, so if you use these broker URLs to connect to the brokers, you need to disable authorization at the Proxy level. Brokers still authorize requests after the proxy forwards them. - -You can configure the broker URLs in `conf/proxy.conf` as follows. - -```properties - -brokerServiceURL=pulsar://brokers.example.com:6650 -brokerWebServiceURL=http://brokers.example.com:8080 -functionWorkerWebServiceURL=http://function-workers.example.com:8080 - -``` - -If you use TLS, configure the broker URLs in the following way: - -```properties - -brokerServiceURLTLS=pulsar+ssl://brokers.example.com:6651 -brokerWebServiceURLTLS=https://brokers.example.com:8443 -functionWorkerWebServiceURL=https://function-workers.example.com:8443 - -``` - -The hostname in the URLs provided should be a DNS entry which points to multiple brokers or a virtual IP address, which is backed by multiple broker IP addresses, so that the proxy does not lose connectivity to Pulsar cluster if a single broker becomes unavailable. - -The ports to connect to the brokers (6650 and 8080, or in the case of TLS, 6651 and 8443) should be open in the network ACLs. - -Note that if you do not use functions, you do not need to configure `functionWorkerWebServiceURL`. - -## Start the proxy - -To start the proxy: - -```bash - -$ cd /path/to/pulsar/directory -$ bin/pulsar proxy - -``` - -> You can run multiple instances of the Pulsar proxy in a cluster. - -## Stop the proxy - -Pulsar proxy runs in the foreground by default. To stop the proxy, simply stop the process in which the proxy is running. - -## Proxy frontends - -You can run Pulsar proxy behind some kind of load-distributing frontend, such as an [HAProxy](https://www.digitalocean.com/community/tutorials/an-introduction-to-haproxy-and-load-balancing-concepts) load balancer. - -## Use Pulsar clients with the proxy - -Once your Pulsar proxy is up and running, preferably behind a load-distributing [frontend](#proxy-frontends), clients can connect to the proxy via whichever address that the frontend uses. If the address is the DNS address `pulsar.cluster.default`, for example, the connection URL for clients is `pulsar://pulsar.cluster.default:6650`. - -For more information on Proxy configuration, refer to [Pulsar proxy](reference-configuration.md#pulsar-proxy). diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/administration-pulsar-manager.md b/site2/website/versioned_docs/version-2.8.0-deprecated/administration-pulsar-manager.md deleted file mode 100644 index 0e3800d847f0c8..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/administration-pulsar-manager.md +++ /dev/null @@ -1,205 +0,0 @@ ---- -id: administration-pulsar-manager -title: Pulsar Manager -sidebar_label: "Pulsar Manager" -original_id: administration-pulsar-manager ---- - -Pulsar Manager is a web-based GUI management and monitoring tool that helps administrators and users manage and monitor tenants, namespaces, topics, subscriptions, brokers, clusters, and so on, and supports dynamic configuration of multiple environments. - -:::note - -If you monitor your current stats with Pulsar dashboard, you can try to use Pulsar Manager instead. Pulsar dashboard is deprecated. 
- -::: - -## Install - -The easiest way to use the Pulsar Manager is to run it inside a [Docker](https://www.docker.com/products/docker) container. - -```shell - -docker pull apachepulsar/pulsar-manager:v0.2.0 -docker run -it \ - -p 9527:9527 -p 7750:7750 \ - -e SPRING_CONFIGURATION_FILE=/pulsar-manager/pulsar-manager/application.properties \ - apachepulsar/pulsar-manager:v0.2.0 - -``` - -* `SPRING_CONFIGURATION_FILE`: Default configuration file for Spring. - -### Set administrator account and password - - ```shell - - CSRF_TOKEN=$(curl http://localhost:7750/pulsar-manager/csrf-token) - curl \ - -H "X-XSRF-TOKEN: $CSRF_TOKEN" \ - -H "Cookie: XSRF-TOKEN=$CSRF_TOKEN;" \ - -H "Content-Type: application/json" \ - -X PUT http://localhost:7750/pulsar-manager/users/superuser \ - -d '{"name": "admin", "password": "apachepulsar", "description": "test", "email": "username@test.org"}' - - ``` - -You can find the Docker image definition in the [docker](https://github.com/apache/pulsar-manager/tree/master/docker) directory and build an image from the source code as well: - -``` - -git clone https://github.com/apache/pulsar-manager -cd pulsar-manager/front-end -npm install --save -npm run build:prod -cd .. -./gradlew build -x test -cd .. -docker build -f docker/Dockerfile --build-arg BUILD_DATE=`date -u +"%Y-%m-%dT%H:%M:%SZ"` --build-arg VCS_REF=`latest` --build-arg VERSION=`latest` -t apachepulsar/pulsar-manager . - -``` - -### Use custom databases - -If you have a large amount of data, you can use a custom database. The following is an example of PostgreSQL. - -1. Initialize database and table structures using the [file](https://github.com/apache/pulsar-manager/tree/master/src/main/resources/META-INF/sql/postgresql-schema.sql). - -2. Modify the [configuration file](https://github.com/apache/pulsar-manager/blob/master/src/main/resources/application.properties) and add PostgreSQL configuration. - -``` - -spring.datasource.driver-class-name=org.postgresql.Driver -spring.datasource.url=jdbc:postgresql://127.0.0.1:5432/pulsar_manager -spring.datasource.username=postgres -spring.datasource.password=postgres - -``` - -3. Compile to generate a new executable jar package. - -``` - -./gradlew build -x test - -``` - -### Enable JWT authentication - -If you want to turn on JWT authentication, configure the following parameters: - -* `backend.jwt.token`: token for the superuser. You need to configure this parameter during cluster initialization. -* `jwt.broker.token.mode`: the mode for generating tokens; one of PUBLIC, PRIVATE, or SECRET. -* `jwt.broker.public.key`: configure this option if you use the PUBLIC mode. -* `jwt.broker.private.key`: configure this option if you use the PRIVATE mode. -* `jwt.broker.secret.key`: configure this option if you use the SECRET mode. - -For more information, see [Token Authentication Admin of Pulsar](http://pulsar.apache.org/docs/en/security-token-admin/). - - -If you want to enable JWT authentication, use one of the following methods. 
- - -* Method 1: use command-line tool - -``` - -wget https://dist.apache.org/repos/dist/release/pulsar/pulsar-manager/pulsar-manager-0.2.0/apache-pulsar-manager-0.2.0-bin.tar.gz -tar -zxvf apache-pulsar-manager-0.2.0-bin.tar.gz -cd pulsar-manager -tar -zxvf pulsar-manager.tar -cd pulsar-manager -cp -r ../dist ui -./bin/pulsar-manager --redirect.host=http://localhost --redirect.port=9527 insert.stats.interval=600000 --backend.jwt.token=token --jwt.broker.token.mode=PRIVATE --jwt.broker.private.key=file:///path/broker-private.key --jwt.broker.public.key=file:///path/broker-public.key - -``` - -Firstly, [set the administrator account and password](#set-administrator-account-and-password) - -Secondly, log in to Pulsar manager through http://localhost:7750/ui/index.html. - -* Method 2: configure the application.properties file - -``` - -backend.jwt.token=token - -jwt.broker.token.mode=PRIVATE -jwt.broker.public.key=file:///path/broker-public.key -jwt.broker.private.key=file:///path/broker-private.key - -or -jwt.broker.token.mode=SECRET -jwt.broker.secret.key=file:///path/broker-secret.key - -``` - -* Method 3: use Docker and enable token authentication. - -``` - -export JWT_TOKEN="your-token" -docker run -it -p 9527:9527 -p 7750:7750 -e REDIRECT_HOST=http://localhost -e REDIRECT_PORT=9527 -e DRIVER_CLASS_NAME=org.postgresql.Driver -e URL='jdbc:postgresql://127.0.0.1:5432/pulsar_manager' -e USERNAME=pulsar -e PASSWORD=pulsar -e LOG_LEVEL=DEBUG -e JWT_TOKEN=$JWT_TOKEN -v $PWD:/data apachepulsar/pulsar-manager:v0.2.0 /bin/sh - -``` - -* `JWT_TOKEN`: the token of superuser configured for the broker. It is generated by the `bin/pulsar tokens create --secret-key` or `bin/pulsar tokens create --private-key` command. -* `REDIRECT_HOST`: the IP address of the front-end server. -* `REDIRECT_PORT`: the port of the front-end server. -* `DRIVER_CLASS_NAME`: the driver class name of the PostgreSQL database. -* `URL`: the JDBC URL of your PostgreSQL database, such as jdbc:postgresql://127.0.0.1:5432/pulsar_manager. The docker image automatically start a local instance of the PostgreSQL database. -* `USERNAME`: the username of PostgreSQL. -* `PASSWORD`: the password of PostgreSQL. -* `LOG_LEVEL`: the level of log. - -* Method 4: use Docker and turn on **token authentication** and **token management** by private key and public key. - -``` - -export JWT_TOKEN="your-token" -export PRIVATE_KEY="file:///pulsar-manager/secret/my-private.key" -export PUBLIC_KEY="file:///pulsar-manager/secret/my-public.key" -docker run -it -p 9527:9527 -p 7750:7750 -e REDIRECT_HOST=http://localhost -e REDIRECT_PORT=9527 -e DRIVER_CLASS_NAME=org.postgresql.Driver -e URL='jdbc:postgresql://127.0.0.1:5432/pulsar_manager' -e USERNAME=pulsar -e PASSWORD=pulsar -e LOG_LEVEL=DEBUG -e JWT_TOKEN=$JWT_TOKEN -e PRIVATE_KEY=$PRIVATE_KEY -e PUBLIC_KEY=$PUBLIC_KEY -v $PWD:/data -v $PWD/secret:/pulsar-manager/secret apachepulsar/pulsar-manager:v0.2.0 /bin/sh - -``` - -* `JWT_TOKEN`: the token of superuser configured for the broker. It is generated by the `bin/pulsar tokens create --private-key` command. -* `PRIVATE_KEY`: private key path mounted in container, generated by `bin/pulsar tokens create-key-pair` command. -* `PUBLIC_KEY`: public key path mounted in container, generated by `bin/pulsar tokens create-key-pair` command. -* `$PWD/secret`: the folder where the private key and public key generated by the `bin/pulsar tokens create-key-pair` command are placed locally -* `REDIRECT_HOST`: the IP address of the front-end server. 
-* `REDIRECT_PORT`: the port of the front-end server. -* `DRIVER_CLASS_NAME`: the driver class name of the PostgreSQL database. -* `URL`: the JDBC URL of your PostgreSQL database, such as jdbc:postgresql://127.0.0.1:5432/pulsar_manager. The docker image automatically start a local instance of the PostgreSQL database. -* `USERNAME`: the username of PostgreSQL. -* `PASSWORD`: the password of PostgreSQL. -* `LOG_LEVEL`: the level of log. - -* Method 5: use Docker and turn on **token authentication** and **token management** by secret key. - -``` - -export JWT_TOKEN="your-token" -export SECRET_KEY="file:///pulsar-manager/secret/my-secret.key" -docker run -it -p 9527:9527 -p 7750:7750 -e REDIRECT_HOST=http://localhost -e REDIRECT_PORT=9527 -e DRIVER_CLASS_NAME=org.postgresql.Driver -e URL='jdbc:postgresql://127.0.0.1:5432/pulsar_manager' -e USERNAME=pulsar -e PASSWORD=pulsar -e LOG_LEVEL=DEBUG -e JWT_TOKEN=$JWT_TOKEN -e SECRET_KEY=$SECRET_KEY -v $PWD:/data -v $PWD/secret:/pulsar-manager/secret apachepulsar/pulsar-manager:v0.2.0 /bin/sh - -``` - -* `JWT_TOKEN`: the token of superuser configured for the broker. It is generated by the `bin/pulsar tokens create --secret-key` command. -* `SECRET_KEY`: secret key path mounted in container, generated by `bin/pulsar tokens create-secret-key` command. -* `$PWD/secret`: the folder where the secret key generated by the `bin/pulsar tokens create-secret-key` command are placed locally -* `REDIRECT_HOST`: the IP address of the front-end server. -* `REDIRECT_PORT`: the port of the front-end server. -* `DRIVER_CLASS_NAME`: the driver class name of the PostgreSQL database. -* `URL`: the JDBC URL of your PostgreSQL database, such as jdbc:postgresql://127.0.0.1:5432/pulsar_manager. The docker image automatically start a local instance of the PostgreSQL database. -* `USERNAME`: the username of PostgreSQL. -* `PASSWORD`: the password of PostgreSQL. -* `LOG_LEVEL`: the level of log. - -* For more information about backend configurations, see [here](https://github.com/apache/pulsar-manager/blob/master/src/README). -* For more information about frontend configurations, see [here](https://github.com/apache/pulsar-manager/tree/master/front-end). - -## Log in - -[Set the administrator account and password](#set-administrator-account-and-password). - -Visit http://localhost:9527 to log in. diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/administration-stats.md b/site2/website/versioned_docs/version-2.8.0-deprecated/administration-stats.md deleted file mode 100644 index ac0c03602f36d5..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/administration-stats.md +++ /dev/null @@ -1,64 +0,0 @@ ---- -id: administration-stats -title: Pulsar stats -sidebar_label: "Pulsar statistics" -original_id: administration-stats ---- - -## Partitioned topics - -|Stat|Description| -|---|---| -|msgRateIn| The sum of publish rates of all local and replication publishers in messages per second.| -|msgThroughputIn| Same as msgRateIn but in bytes per second instead of messages per second.| -|msgRateOut| The sum of dispatch rates of all local and replication consumers in messages per second.| -|msgThroughputOut| Same as msgRateOut but in bytes per second instead of messages per second.| -|averageMsgSize| Average message size, in bytes, from this publisher within the last interval.| -|storageSize| The sum of storage size of the ledgers for this topic.| -|publishers| The list of all local publishers into the topic. 
Publishers can be anywhere from zero to thousands.| -|producerId| Internal identifier for this producer on this topic.| -|producerName| Internal identifier for this producer, generated by the client library.| -|address| IP address and source port for the connection of this producer.| -|connectedSince| Timestamp this producer is created or last reconnected.| -|subscriptions| The list of all local subscriptions to the topic.| -|my-subscription| The name of this subscription (client defined).| -|msgBacklog| The count of messages in backlog for this subscription.| -|type| This subscription type.| -|msgRateExpired| The rate at which messages are discarded instead of dispatched from this subscription due to TTL.| -|consumers| The list of connected consumers for this subscription.| -|consumerName| Internal identifier for this consumer, generated by the client library.| -|availablePermits| The number of messages this consumer has space for in the listen queue of client library. A value of 0 means the queue of client library is full and receive() is not being called. A nonzero value means this consumer is ready to be dispatched messages.| -|replication| This section gives the stats for cross-colo replication of this topic.| -|replicationBacklog| The outbound replication backlog in messages.| -|connected| Whether the outbound replicator is connected.| -|replicationDelayInSeconds| How long the oldest message has been waiting to be sent through the connection, if connected is true.| -|inboundConnection| The IP and port of the broker in the publisher connection of remote cluster to this broker. | -|inboundConnectedSince| The TCP connection being used to publish messages to the remote cluster. If no local publishers are connected, this connection is automatically closed after a minute.| - - -## Topics - -|Stat|Description| -|---|---| -|entriesAddedCounter| Messages published since this broker loads this topic.| -|numberOfEntries| Total number of messages being tracked.| -|totalSize| Total storage size in bytes of all messages.| -|currentLedgerEntries| Count of messages written to the ledger currently open for writing.| -|currentLedgerSize| Size in bytes of messages written to ledger currently open for writing.| -|lastLedgerCreatedTimestamp| Time when last ledger is created.| -|lastLedgerCreationFailureTimestamp| Time when last ledger is failed.| -|waitingCursorsCount| How many cursors are caught up and waiting for a new message to be published.| -|pendingAddEntriesCount| How many messages have (asynchronous) write requests you are waiting on completion.| -|lastConfirmedEntry| The ledgerid:entryid of the last message successfully written. If the entryid is -1, then the ledger is opened or is being currently opened but has no entries written yet.| -|state| The state of the cursor ledger. Open means you have a cursor ledger for saving updates of the markDeletePosition.| -|ledgers| The ordered list of all ledgers for this topic holding its messages.| -|cursors| The list of all cursors on this topic. 
Every subscription in the topic stats has one.| -|markDeletePosition| The ack position: the last message the subscriber acknowledges receiving.| -|readPosition| The latest read position of the subscriber.| -|waitingReadOp| This is true when the subscription has read the latest message published to the topic and is waiting for new messages to be published.| -|pendingReadOps| The counter of outstanding read requests to the BookKeepers currently in progress.| -|messagesConsumedCounter| The number of messages this cursor has acknowledged since this broker loaded this topic.| -|cursorLedger| The ledger used to persistently store the current markDeletePosition.| -|cursorLedgerLastEntry| The last entryid used to persistently store the current markDeletePosition.| -|individuallyDeletedMessages| If acknowledgments are done out of order, shows the ranges of messages acknowledged between the markDeletePosition and the read position.| -|lastLedgerSwitchTimestamp| The last time the cursor ledger was rolled over.| diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/administration-upgrade.md b/site2/website/versioned_docs/version-2.8.0-deprecated/administration-upgrade.md deleted file mode 100644 index 72d136b6460f62..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/administration-upgrade.md +++ /dev/null @@ -1,168 +0,0 @@ ---- -id: administration-upgrade -title: Upgrade Guide -sidebar_label: "Upgrade" -original_id: administration-upgrade ---- - -## Upgrade guidelines - -Apache Pulsar comprises multiple components: ZooKeeper, bookies, and brokers. These components are either stateful or stateless. You do not have to upgrade ZooKeeper nodes unless you have special requirements. While you upgrade, you need to pay attention to bookies (stateful), and brokers and proxies (stateless). - -The following are some guidelines on upgrading a Pulsar cluster. Read the guidelines before upgrading. - -- Back up all your configuration files before upgrading. -- Read this guide entirely, make a plan, and then execute the plan. When you make your upgrade plan, you need to take your specific requirements and environment into consideration. -- Pay attention to the upgrading order of components. In general, you do not need to upgrade your ZooKeeper or configuration store cluster (the global ZooKeeper cluster). You need to upgrade bookies first, and then upgrade brokers, proxies, and your clients. -- If `autorecovery` is enabled, you need to disable `autorecovery` in the upgrade process, and re-enable it after completing the process. -- Read the release notes carefully for each release. Release notes describe feature and configuration changes that might impact your upgrade. -- Upgrade a small subset of nodes of each type to canary test the new version before upgrading all nodes of that type in the cluster. When you have upgraded the canary nodes, run them for a while to ensure that they work correctly. -- Upgrade one data center to verify the new version before upgrading all data centers if your cluster runs in multi-cluster replicated mode. - -> Note: Currently, Apache Pulsar releases are compatible between versions. - -## Upgrade sequence - -To upgrade an Apache Pulsar cluster, follow the upgrade sequence. - -1. Upgrade ZooKeeper (optional) -- Canary test: test an upgraded version in one or a small set of ZooKeeper nodes. -- Rolling upgrade: roll out the upgraded version to all ZooKeeper servers incrementally, one at a time. Monitor your dashboard during the whole rolling upgrade process. -2. 
Upgrade bookies -- Canary test: test an upgraded version in one or a small set of bookies. -- Rolling upgrade: - - a. Disable `autorecovery` with the following command. - - ```shell - - bin/bookkeeper shell autorecovery -disable - - ``` - - - - b. Rollout the upgraded version to all bookies in the cluster after you determine that a version is safe after canary. - - c. After you upgrade all bookies, re-enable `autorecovery` with the following command. - - ```shell - - bin/bookkeeper shell autorecovery -enable - - ``` - -3. Upgrade brokers -- Canary test: test an upgraded version in one or a small set of brokers. -- Rolling upgrade: rollout the upgraded version to all brokers in the cluster after you determine that a version is safe after canary. -4. Upgrade proxies -- Canary test: test an upgraded version in one or a small set of proxies. -- Rolling upgrade: rollout the upgraded version to all proxies in the cluster after you determine that a version is safe after canary. - -## Upgrade ZooKeeper (optional) -While you upgrade ZooKeeper servers, you can do canary test first, and then upgrade all ZooKeeper servers in the cluster. - -### Canary test - -You can test an upgraded version in one of ZooKeeper servers before upgrading all ZooKeeper servers in your cluster. - -To upgrade ZooKeeper server to a new version, complete the following steps: - -1. Stop a ZooKeeper server. -2. Upgrade the binary and configuration files. -3. Start the ZooKeeper server with the new binary files. -4. Use `pulsar zookeeper-shell` to connect to the newly upgraded ZooKeeper server and run a few commands to verify if it works as expected. -5. Run the ZooKeeper server for a few days, observe and make sure the ZooKeeper cluster runs well. - -#### Canary rollback - -If issues occur during canary test, you can shut down the problematic ZooKeeper node, revert the binary and configuration, and restart the ZooKeeper with the reverted binary. - -### Upgrade all ZooKeeper servers - -After canary test to upgrade one ZooKeeper in your cluster, you can upgrade all ZooKeeper servers in your cluster. - -You can upgrade all ZooKeeper servers one by one by following steps in canary test. - -## Upgrade bookies - -While you upgrade bookies, you can do canary test first, and then upgrade all bookies in the cluster. -For more details, you can read Apache BookKeeper [Upgrade guide](http://bookkeeper.apache.org/docs/latest/admin/upgrade). - -### Canary test - -You can test an upgraded version in one or a small set of bookies before upgrading all bookies in your cluster. - -To upgrade bookie to a new version, complete the following steps: - -1. Stop a bookie. -2. Upgrade the binary and configuration files. -3. Start the bookie in `ReadOnly` mode to verify if the bookie of this new version runs well for read workload. - - ```shell - - bin/pulsar bookie --readOnly - - ``` - -4. When the bookie runs successfully in `ReadOnly` mode, stop the bookie and restart it in `Write/Read` mode. - - ```shell - - bin/pulsar bookie - - ``` - -5. Observe and make sure the cluster serves both write and read traffic. - -#### Canary rollback - -If issues occur during the canary test, you can shut down the problematic bookie node. Other bookies in the cluster replaces this problematic bookie node with autorecovery. - -### Upgrade all bookies - -After canary test to upgrade some bookies in your cluster, you can upgrade all bookies in your cluster. 
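During the rolling upgrade described next, one way to confirm that the cluster has healed before moving on to the next bookie is to re-run the same checks used elsewhere in this guide (a sketch, not a required step):

```shell
# After restarting an upgraded bookie, wait until no ledgers are reported
# as under-replicated before stopping the next bookie.
$ bin/bookkeeper shell listunderreplicated

# Optionally run a write/read round trip against the local bookie.
$ bin/bookkeeper shell bookiesanity
```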
- -Before upgrading, you have to decide whether to upgrade the whole cluster at once (a downtime upgrade) or one node at a time (a rolling upgrade). - -In a rolling upgrade scenario, upgrade one bookie at a time. In a downtime upgrade scenario, shut down the entire cluster, upgrade each bookie, and then start the cluster. - -In both scenarios, the procedure is the same for each bookie. - -1. Stop the bookie. -2. Upgrade the software (either new binary or new configuration files). -3. Start the bookie. - -> **Advanced operations** -> When you upgrade a large BookKeeper cluster in a rolling upgrade scenario, upgrading one bookie at a time is slow. If you configure a rack-aware or region-aware placement policy, you can upgrade bookies rack by rack or region by region, which speeds up the whole upgrade process. - -## Upgrade brokers and proxies - -The upgrade procedure for brokers and proxies is the same. Brokers and proxies are stateless, so upgrading the two services is easy. - -### Canary test - -You can test an upgraded version in one or a small set of nodes before upgrading all nodes in your cluster. - -To upgrade to a new version, complete the following steps: - -1. Stop a broker (or proxy). -2. Upgrade the binary and configuration file. -3. Start a broker (or proxy). - -#### Canary rollback - -If issues occur during the canary test, you can shut down the problematic broker (or proxy) node. Revert to the old version and restart the broker (or proxy). - -### Upgrade all brokers or proxies - -After the canary test to upgrade some brokers or proxies in your cluster, you can upgrade all brokers or proxies in your cluster. - -Before upgrading, you have to decide whether to upgrade the whole cluster at once (a downtime upgrade) or a few nodes at a time (a rolling upgrade). - -In a rolling upgrade scenario, you can upgrade one broker or one proxy at a time if the size of the cluster is small. If your cluster is large, you can upgrade brokers or proxies in batches. When you upgrade a batch of brokers or proxies, make sure the remaining brokers and proxies in the cluster have enough capacity to handle the traffic during the upgrade. - -In a downtime upgrade scenario, shut down the entire cluster, upgrade each broker or proxy, and then start the cluster. - -In both scenarios, the procedure is the same for each broker or proxy. - -1. Stop the broker or proxy. -2. Upgrade the software (either new binary or new configuration files). -3. Start the broker or proxy. diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/administration-zk-bk.md b/site2/website/versioned_docs/version-2.8.0-deprecated/administration-zk-bk.md deleted file mode 100644 index 2c080123ca81df..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/administration-zk-bk.md +++ /dev/null @@ -1,386 +0,0 @@ ---- -id: administration-zk-bk -title: ZooKeeper and BookKeeper administration -sidebar_label: "ZooKeeper and BookKeeper" -original_id: administration-zk-bk ---- - -Pulsar relies on two external systems for essential tasks: - -* [ZooKeeper](https://zookeeper.apache.org/) is responsible for a wide variety of configuration-related and coordination-related tasks. -* [BookKeeper](http://bookkeeper.apache.org/) is responsible for [persistent storage](concepts-architecture-overview.md#persistent-storage) of message data. - -ZooKeeper and BookKeeper are both open-source [Apache](https://www.apache.org/) projects. 
- -> Skip to the [How Pulsar uses ZooKeeper and BookKeeper](#how-pulsar-uses-zookeeper-and-bookkeeper) section below for a more schematic explanation of the role of these two systems in Pulsar. - - -## ZooKeeper - -Each Pulsar instance relies on two separate ZooKeeper quorums. - -* [Local ZooKeeper](#deploy-local-zookeeper) operates at the cluster level and provides cluster-specific configuration management and coordination. Each Pulsar cluster needs to have a dedicated ZooKeeper cluster. -* [Configuration Store](#deploy-configuration-store) operates at the instance level and provides configuration management for the entire system (and thus across clusters). An independent cluster of machines or the same machines that local ZooKeeper uses can provide the configuration store quorum. - -### Deploy local ZooKeeper - -ZooKeeper manages a variety of essential coordination-related and configuration-related tasks for Pulsar. - -To deploy a Pulsar instance, you need to stand up one local ZooKeeper cluster *per Pulsar cluster*. - -To begin, add all ZooKeeper servers to the quorum configuration specified in the [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file. Add a `server.N` line for each node in the cluster to the configuration, where `N` is the number of the ZooKeeper node. The following is an example for a three-node cluster: - -```properties - -server.1=zk1.us-west.example.com:2888:3888 -server.2=zk2.us-west.example.com:2888:3888 -server.3=zk3.us-west.example.com:2888:3888 - -``` - -On each host, you need to specify the node ID in `myid` file of each node, which is in `data/zookeeper` folder of each server by default (you can change the file location via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter). - -> See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed information on `myid` and more. - - -On a ZooKeeper server at `zk1.us-west.example.com`, for example, you can set the `myid` value like this: - -```shell - -$ mkdir -p data/zookeeper -$ echo 1 > data/zookeeper/myid - -``` - -On `zk2.us-west.example.com` the command is `echo 2 > data/zookeeper/myid` and so on. - -Once you add each server to the `zookeeper.conf` configuration and each server has the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool: - -```shell - -$ bin/pulsar-daemon start zookeeper - -``` - -### Deploy configuration store - -The ZooKeeper cluster configured and started up in the section above is a *local* ZooKeeper cluster that you can use to manage a single Pulsar cluster. In addition to a local cluster, however, a full Pulsar instance also requires a configuration store for handling some instance-level configuration and coordination tasks. - -If you deploy a [single-cluster](#single-cluster-pulsar-instance) instance, you do not need a separate cluster for the configuration store. If, however, you deploy a [multi-cluster](#multi-cluster-pulsar-instance) instance, you need to stand up a separate ZooKeeper cluster for configuration tasks. - -#### Single-cluster Pulsar instance - -If your Pulsar instance consists of just one cluster, then you can deploy a configuration store on the same machines as the local ZooKeeper quorum but run on different TCP ports. 
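Before layering the configuration store onto the same hosts, you may want to confirm that the local quorum is healthy. One way — a sketch that assumes the hostnames above and passes standard ZooKeeper CLI arguments through `pulsar zookeeper-shell` — is:

```shell
# Connect to one member of the local quorum and list the root znodes.
$ bin/pulsar zookeeper-shell -server zk1.us-west.example.com:2181
ls /
```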
- -To deploy a ZooKeeper configuration store in a single-cluster instance, add the same ZooKeeper servers that the local quorum uses to the configuration file in [`conf/global_zookeeper.conf`](reference-configuration.md#configuration-store) using the same method for [local ZooKeeper](#local-zookeeper), but make sure to use a different port (2181 is the default for ZooKeeper). The following is an example that uses port 2184 for a three-node ZooKeeper cluster: - -```properties - -clientPort=2184 -server.1=zk1.us-west.example.com:2185:2186 -server.2=zk2.us-west.example.com:2185:2186 -server.3=zk3.us-west.example.com:2185:2186 - -``` - -As before, create the `myid` files for each server on `data/global-zookeeper/myid`. - -#### Multi-cluster Pulsar instance - -When you deploy a global Pulsar instance, with clusters distributed across different geographical regions, the configuration store serves as a highly available and strongly consistent metadata store that can tolerate failures and partitions spanning whole regions. - -The key here is to make sure the ZK quorum members are spread across at least 3 regions and that other regions run as observers. - -Again, given the very low expected load on the configuration store servers, you can share the same hosts used for the local ZooKeeper quorum. - -For example, you can assume a Pulsar instance with the following clusters `us-west`, `us-east`, `us-central`, `eu-central`, `ap-south`. Also you can assume, each cluster has its own local ZK servers named such as - -``` - -zk[1-3].${CLUSTER}.example.com - -``` - -In this scenario you want to pick the quorum participants from few clusters and let all the others be ZK observers. For example, to form a 7 servers quorum, you can pick 3 servers from `us-west`, 2 from `us-central` and 2 from `us-east`. - -This guarantees that writes to configuration store is possible even if one of these regions is unreachable. - -The ZK configuration in all the servers looks like: - -```properties - -clientPort=2184 -server.1=zk1.us-west.example.com:2185:2186 -server.2=zk2.us-west.example.com:2185:2186 -server.3=zk3.us-west.example.com:2185:2186 -server.4=zk1.us-central.example.com:2185:2186 -server.5=zk2.us-central.example.com:2185:2186 -server.6=zk3.us-central.example.com:2185:2186:observer -server.7=zk1.us-east.example.com:2185:2186 -server.8=zk2.us-east.example.com:2185:2186 -server.9=zk3.us-east.example.com:2185:2186:observer -server.10=zk1.eu-central.example.com:2185:2186:observer -server.11=zk2.eu-central.example.com:2185:2186:observer -server.12=zk3.eu-central.example.com:2185:2186:observer -server.13=zk1.ap-south.example.com:2185:2186:observer -server.14=zk2.ap-south.example.com:2185:2186:observer -server.15=zk3.ap-south.example.com:2185:2186:observer - -``` - -Additionally, ZK observers need to have: - -```properties - -peerType=observer - -``` - -##### Start the service - -Once your configuration store configuration is in place, you can start up the service using [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) - -```shell - -$ bin/pulsar-daemon start configuration-store - -``` - -### ZooKeeper configuration - -In Pulsar, ZooKeeper configuration is handled by two separate configuration files in the `conf` directory of your Pulsar installation: `conf/zookeeper.conf` for [local ZooKeeper](#local-zookeeper) and `conf/global-zookeeper.conf` for [configuration store](#configuration-store). 
#### Local ZooKeeper

The [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file handles the configuration for local ZooKeeper. The table below shows the available parameters:

|Name|Description|Default|
|---|---|---|
|tickTime| The tick is the basic unit of time in ZooKeeper, measured in milliseconds and used to regulate things like heartbeats and timeouts. tickTime is the length of a single tick. |2000|
|initLimit| The maximum time, in ticks, that the leader ZooKeeper server allows follower ZooKeeper servers to successfully connect and sync. The tick time is set in milliseconds using the tickTime parameter. |10|
|syncLimit| The maximum time, in ticks, that a follower ZooKeeper server is allowed to sync with other ZooKeeper servers. The tick time is set in milliseconds using the tickTime parameter. |5|
|dataDir| The location where ZooKeeper stores in-memory database snapshots as well as the transaction log of updates to the database. |data/zookeeper|
|clientPort| The port on which the ZooKeeper server listens for connections. |2181|
|autopurge.snapRetainCount| In ZooKeeper, auto purge determines how many recent snapshots of the database stored in dataDir to retain within the time interval specified by autopurge.purgeInterval (while deleting the rest). |3|
|autopurge.purgeInterval| The time interval, in hours, which triggers the ZooKeeper database purge task. Setting to a non-zero number enables auto purge; setting to 0 disables. Read this guide before enabling auto purge. |1|
|maxClientCnxns| The maximum number of client connections. Increase this if you need to handle more ZooKeeper clients. |60|


#### Configuration Store

The [`conf/global-zookeeper.conf`](reference-configuration.md#configuration-store) file handles the configuration for the configuration store. The table below shows the available parameters:


## BookKeeper

BookKeeper stores all durable messages in Pulsar. BookKeeper is a distributed [write-ahead log](https://en.wikipedia.org/wiki/Write-ahead_logging) (WAL) system that guarantees read consistency of independent message logs called ledgers. Individual BookKeeper servers are also called *bookies*.

> To manage message persistence, retention, and expiry in Pulsar, refer to this [cookbook](cookbooks-retention-expiry.md).

### Hardware requirements

Bookie hosts store message data on disk. To provide optimal performance, ensure that the bookies have a suitable hardware configuration. The following are two key dimensions of bookie hardware capacity:

- Read/write disk I/O capacity
- Storage capacity

By default, message entries written to bookies are always synced to disk before an acknowledgement is returned to the Pulsar broker. To ensure low write latency, BookKeeper is designed to use multiple devices:

- A **journal** to ensure durability. For sequential writes, it is critical to have fast [fsync](https://linux.die.net/man/2/fsync) operations on bookie hosts. Typically, small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) should suffice, or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache. Both solutions can reach fsync latency of ~0.4 ms.
- A **ledger storage device** to store data. Writes happen in the background, so write I/O is not a big concern. Reads happen sequentially most of the time, and the backlog is drained only when consumers fall behind. To store large amounts of data, a typical configuration involves multiple HDDs with a RAID controller.
### Configure BookKeeper

You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. When you configure each bookie, ensure that the [`zkServers`](reference-configuration.md#bookkeeper-zkServers) parameter is set to the connection string for the local ZooKeeper of the Pulsar cluster.

The minimum configuration changes required in `conf/bookkeeper.conf` are as follows:

:::note

Set `journalDirectory` and `ledgerDirectories` carefully. It is difficult to change them later.

:::

```properties

# Change to point to journal disk mount point
journalDirectory=data/bookkeeper/journal

# Point to ledger storage disk mount point
ledgerDirectories=data/bookkeeper/ledgers

# Point to local ZK quorum
zkServers=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181

# It is recommended to set this parameter. Otherwise, BookKeeper can't start normally in certain environments (for example, Huawei Cloud).
advertisedAddress=

```

To change the ZooKeeper root path that BookKeeper uses, use `zkLedgersRootPath=/MY-PREFIX/ledgers` instead of `zkServers=localhost:2181/MY-PREFIX`.

> For more information about BookKeeper, refer to the official [BookKeeper docs](http://bookkeeper.apache.org).

### Deploy BookKeeper

BookKeeper provides [persistent message storage](concepts-architecture-overview.md#persistent-storage) for Pulsar. Each Pulsar broker has its own cluster of bookies. The BookKeeper cluster shares a local ZooKeeper quorum with the Pulsar cluster.

### Start bookies manually

You can start a bookie in the foreground or as a background daemon.

To start a bookie in the foreground, use the [`bookkeeper`](reference-cli-tools.md#bookkeeper) CLI tool:

```bash

$ bin/bookkeeper bookie

```

To start a bookie in the background, use the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:

```bash

$ bin/pulsar-daemon start bookie

```

You can verify whether the bookie works properly with the `bookiesanity` command for the [BookKeeper shell](reference-cli-tools.md#bookkeeper-shell):

```shell

$ bin/bookkeeper shell bookiesanity

```

This command creates a new ledger on the local bookie, writes a few entries, reads them back, and finally deletes the ledger.

### Decommission bookies cleanly

Before you decommission a bookie, check your environment and make sure the following requirements are met.

1. Ensure the state of your cluster supports decommissioning the target bookie. Check that `EnsembleSize >= Write Quorum >= Ack Quorum` still holds with one less bookie.

2. Ensure the target bookie is listed in the output of the `listbookies` command.

3. Ensure that no other maintenance process (an upgrade, for example) is ongoing.

Then you can decommission bookies safely. To decommission bookies, complete the following steps.

1. Log in to the bookie node and check if there are underreplicated ledgers. The decommission command forces replication of any underreplicated ledgers.
`$ bin/bookkeeper shell listunderreplicated`

2. Stop the bookie by killing the bookie process. If you deploy the bookies in a Kubernetes environment, make sure no liveness/readiness probes are set up that would spin them back up.

3. Run the decommission command.
   - If you have logged in to the node to be decommissioned, you do not need to provide `-bookieid`.
   - If you are running the decommission command for the target bookie node from another bookie node, you should pass the target bookie ID in the arguments for `-bookieid`.
`$ bin/bookkeeper shell decommissionbookie`
or
`$ bin/bookkeeper shell decommissionbookie -bookieid `

4. Validate that no ledgers are on the decommissioned bookie.
`$ bin/bookkeeper shell listledgers -bookieid `

You can run the following command to check if the bookie you have decommissioned is listed in the bookies list:

```bash

./bookkeeper shell listbookies -rw -h
./bookkeeper shell listbookies -ro -h

```

## BookKeeper persistence policies

In Pulsar, you can set *persistence policies* at the namespace level, which determine how BookKeeper handles persistent storage of messages. Policies determine four things:

* The number of acks (guaranteed copies) to wait for on each ledger entry.
* The number of bookies to use for a topic.
* The number of writes to make for each ledger entry.
* The throttling rate for mark-delete operations.

### Set persistence policies

You can set persistence policies for BookKeeper at the [namespace](reference-terminology.md#namespace) level.

#### Pulsar-admin

Use the [`set-persistence`](reference-pulsar-admin.md#namespaces-set-persistence) subcommand and specify a namespace as well as any policies that you want to apply. The available flags are:

Flag | Description | Default
:----|:------------|:-------
`-a`, `--bookkeeper-ack-quorum` | The number of acks (guaranteed copies) to wait on for each entry | 0
`-e`, `--bookkeeper-ensemble` | The number of [bookies](reference-terminology.md#bookie) to use for topics in the namespace | 0
`-w`, `--bookkeeper-write-quorum` | The number of writes to make for each entry | 0
`-r`, `--ml-mark-delete-max-rate` | Throttling rate for mark-delete operations (0 means no throttle) | 0

The following is an example (note that the ensemble must be at least as large as the write quorum, which in turn must be at least as large as the ack quorum):

```shell

$ pulsar-admin namespaces set-persistence my-tenant/my-ns \
  --bookkeeper-ensemble 3 \
  --bookkeeper-write-quorum 2 \
  --bookkeeper-ack-quorum 2

```

#### REST API

{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/setPersistence?version=@pulsar:version_number@}

#### Java

```java

int bkEnsemble = 3;
int bkQuorum = 2;
int bkAckQuorum = 2;
double markDeleteRate = 0.7;
PersistencePolicies policies =
  new PersistencePolicies(bkEnsemble, bkQuorum, bkAckQuorum, markDeleteRate);
admin.namespaces().setPersistence(namespace, policies);

```

### List persistence policies

You can see which persistence policy currently applies to a namespace.

#### Pulsar-admin

Use the [`get-persistence`](reference-pulsar-admin.md#namespaces-get-persistence) subcommand and specify the namespace.

The following is an example:

```shell

$ pulsar-admin namespaces get-persistence my-tenant/my-ns
{
  "bookkeeperEnsemble": 1,
  "bookkeeperWriteQuorum": 1,
  "bookkeeperAckQuorum": 1,
  "managedLedgerMaxMarkDeleteRate": 0
}

```

#### REST API

{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/getPersistence?version=@pulsar:version_number@}

#### Java

```java

PersistencePolicies policies = admin.namespaces().getPersistence(namespace);

```

## How Pulsar uses ZooKeeper and BookKeeper

This diagram illustrates the role of ZooKeeper and BookKeeper in a Pulsar cluster:

![ZooKeeper and BookKeeper](/assets/pulsar-system-architecture.png)

Each Pulsar cluster consists of one or more message brokers.
Each broker relies on an ensemble of bookies. diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/client-libraries-cgo.md b/site2/website/versioned_docs/version-2.8.0-deprecated/client-libraries-cgo.md deleted file mode 100644 index f352f942b77144..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/client-libraries-cgo.md +++ /dev/null @@ -1,579 +0,0 @@ ---- -id: client-libraries-cgo -title: Pulsar CGo client -sidebar_label: "CGo(deprecated)" -original_id: client-libraries-cgo ---- - -You can use Pulsar Go client to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Go (aka Golang). - -All the methods in [producers](#producers), [consumers](#consumers), and [readers](#readers) of a Go client are thread-safe. - -Currently, the following Go clients are maintained in two repositories. - -| Language | Project | Maintainer | License | Description | -|----------|---------|------------|---------|-------------| -| CGo | [pulsar-client-go](https://github.com/apache/pulsar/tree/master/pulsar-client-go) | [Apache Pulsar](https://github.com/apache/pulsar) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | CGo client that depends on C++ client library | -| Go | [pulsar-client-go](https://github.com/apache/pulsar-client-go) | [Apache Pulsar](https://github.com/apache/pulsar) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | A native golang client | - -> **API docs available as well** -> For standard API docs, consult the [Godoc](https://godoc.org/github.com/apache/pulsar/pulsar-client-go/pulsar). - -## Installation - -### Requirements - -Pulsar Go client library is based on the C++ client library. Follow -the instructions for [C++ library](client-libraries-cpp.md) for installing the binaries through [RPM](client-libraries-cpp.md#rpm), [Deb](client-libraries-cpp.md#deb) or [Homebrew packages](client-libraries-cpp.md#macos). - -### Install go package - -> **Compatibility Warning** -> The version number of the Go client **must match** the version number of the Pulsar C++ client library. - -You can install the `pulsar` library locally using `go get`. Note that `go get` doesn't support fetching a specific tag - it will always pull in master's version of the Go client. You'll need a C++ client library that matches master. - -```bash - -$ go get -u github.com/apache/pulsar/pulsar-client-go/pulsar - -``` - -Or you can use [dep](https://github.com/golang/dep) for managing the dependencies. - -```bash - -$ dep ensure -add github.com/apache/pulsar/pulsar-client-go/pulsar@v@pulsar:version@ - -``` - -Once installed locally, you can import it into your project: - -```go - -import "github.com/apache/pulsar/pulsar-client-go/pulsar" - -``` - -## Connection URLs - -To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL. - -Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme and have a default port of 6650. 
Here's an example for `localhost`:

```http

pulsar://localhost:6650

```

A URL for a production Pulsar cluster may look something like this:

```http

pulsar://pulsar.us-west.example.com:6650

```

If you're using [TLS](security-tls-authentication.md) authentication, the URL will look something like this:

```http

pulsar+ssl://pulsar.us-west.example.com:6651

```

## Create a client

In order to interact with Pulsar, you'll first need a `Client` object. You can create a client object using the `NewClient` function, passing in a `ClientOptions` object (more on configuration [below](#client-configuration)). Here's an example:

```go

import (
    "log"
    "runtime"

    "github.com/apache/pulsar/pulsar-client-go/pulsar"
)

func main() {
    client, err := pulsar.NewClient(pulsar.ClientOptions{
        URL:                     "pulsar://localhost:6650",
        OperationTimeoutSeconds: 5,
        MessageListenerThreads:  runtime.NumCPU(),
    })

    if err != nil {
        log.Fatalf("Could not instantiate Pulsar client: %v", err)
    }
}

```

The following configurable parameters are available for Pulsar clients:

Parameter | Description | Default
:---------|:------------|:-------
`URL` | The connection URL for the Pulsar cluster. See [above](#connection-urls) for more info |
`IOThreads` | The number of threads to use for handling connections to Pulsar [brokers](reference-terminology.md#broker) | 1
`OperationTimeoutSeconds` | The timeout for some Go client operations (creating producers, subscribing to and unsubscribing from [topics](reference-terminology.md#topic)). Retries will occur until this threshold is reached, at which point the operation will fail. | 30
`MessageListenerThreads` | The number of threads used by message listeners ([consumers](#consumers) and [readers](#readers)) | 1
`ConcurrentLookupRequests` | The number of concurrent lookup requests that can be sent on each broker connection. Setting a maximum helps to keep from overloading brokers. You should set values over the default of 5000 only if the client needs to produce and/or subscribe to thousands of Pulsar topics. | 5000
`Logger` | A custom logger implementation for the client (as a function that takes a log level, file path, line number, and message). All info, warn, and error messages will be routed to this function. | `nil`
`TLSTrustCertsFilePath` | The file path for the trusted TLS certificate |
`TLSAllowInsecureConnection` | Whether the client accepts untrusted TLS certificates from the broker | `false`
`Authentication` | Configure the authentication provider. (default: no authentication). Example: `Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem")` | `nil`
`StatsIntervalInSeconds` | The interval (in seconds) at which client stats are published | 60

## Producers

Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Go producers using a `ProducerOptions` object.
Here's an example:

```go

producer, err := client.CreateProducer(pulsar.ProducerOptions{
    Topic: "my-topic",
})

if err != nil {
    log.Fatalf("Could not instantiate Pulsar producer: %v", err)
}

defer producer.Close()

msg := pulsar.ProducerMessage{
    Payload: []byte("Hello, Pulsar"),
}

if err := producer.Send(context.Background(), msg); err != nil {
    log.Fatalf("Producer could not send message: %v", err)
}

```

> **Blocking operation**
> When you create a new Pulsar producer, the operation will block (waiting on a go channel) until either a producer is successfully created or an error is thrown.


### Producer operations

Pulsar Go producers have the following methods available:

Method | Description | Return type
:------|:------------|:-----------
`Topic()` | Fetches the producer's [topic](reference-terminology.md#topic) | `string`
`Name()` | Fetches the producer's name | `string`
`Send(context.Context, ProducerMessage)` | Publishes a [message](#messages) to the producer's topic. This call will block until the message is successfully acknowledged by the Pulsar broker, or an error will be thrown if the timeout set using the `SendTimeout` in the producer's [configuration](#producer-configuration) is exceeded. | `error`
`SendAndGetMsgID(context.Context, ProducerMessage)`| Sends a message. This call will block until the message is successfully acknowledged by the Pulsar broker. | (MessageID, error)
`SendAsync(context.Context, ProducerMessage, func(ProducerMessage, error))` | Publishes a [message](#messages) to the producer's topic asynchronously. The third argument is a callback function that specifies what happens either when the message is acknowledged or an error is thrown. |
`SendAndGetMsgIDAsync(context.Context, ProducerMessage, func(MessageID, error))`| Sends a message in asynchronous mode. The callback reports back the message ID of the published message and the eventual error in publishing. |
`LastSequenceID()` | Gets the last sequence id published by this producer. This represents either the automatically assigned or custom sequence id (set on the ProducerMessage) that was published and acknowledged by the broker. | int64
`Flush()`| Flushes all the messages buffered in the client and waits until all messages have been successfully persisted. | error
`Close()` | Closes the producer and releases all resources allocated to it. Once `Close()` is called, no more messages will be accepted from the publisher. This method will block until all pending publish requests have been persisted by Pulsar. If an error is thrown, no pending writes will be retried. | `error`
`Schema()` | Returns the schema used by the producer. | Schema
Here's a more involved example usage of a producer:

```go

import (
    "context"
    "fmt"
    "log"

    "github.com/apache/pulsar/pulsar-client-go/pulsar"
)

func main() {
    // Instantiate a Pulsar client
    client, err := pulsar.NewClient(pulsar.ClientOptions{
        URL: "pulsar://localhost:6650",
    })

    if err != nil { log.Fatal(err) }

    // Use the client to instantiate a producer
    producer, err := client.CreateProducer(pulsar.ProducerOptions{
        Topic: "my-topic",
    })

    if err != nil { log.Fatal(err) }

    ctx := context.Background()

    // Send 10 messages synchronously and 10 messages asynchronously
    for i := 0; i < 10; i++ {
        // Create a message
        msg := pulsar.ProducerMessage{
            Payload: []byte(fmt.Sprintf("message-%d", i)),
        }

        // Attempt to send the message
        if err := producer.Send(ctx, msg); err != nil {
            log.Fatal(err)
        }

        // Create a different message to send asynchronously
        asyncMsg := pulsar.ProducerMessage{
            Payload: []byte(fmt.Sprintf("async-message-%d", i)),
        }

        // Attempt to send the message asynchronously and handle the response
        producer.SendAsync(ctx, asyncMsg, func(msg pulsar.ProducerMessage, err error) {
            if err != nil { log.Fatal(err) }

            fmt.Printf("the %s successfully published", string(msg.Payload))
        })
    }
}

```

### Producer configuration

Parameter | Description | Default
:---------|:------------|:-------
`Topic` | The Pulsar [topic](reference-terminology.md#topic) to which the producer will publish messages |
`Name` | A name for the producer. If you don't explicitly assign a name, Pulsar will automatically generate a globally unique name that you can access later using the `Name()` method. If you choose to explicitly assign a name, it will need to be unique across *all* Pulsar clusters, otherwise the creation operation will throw an error. |
`Properties`| Attach a set of application-defined properties to the producer. These properties will be visible in the topic stats |
`SendTimeout` | When publishing a message to a topic, the producer will wait for an acknowledgment from the responsible Pulsar [broker](reference-terminology.md#broker). If a message is not acknowledged within the threshold set by this parameter, an error will be thrown. If you set `SendTimeout` to -1, the timeout will be set to infinity (and thus removed). Removing the send timeout is recommended when using Pulsar's [message de-duplication](cookbooks-deduplication.md) feature. | 30 seconds
`MaxPendingMessages` | The maximum size of the queue holding pending messages (i.e. messages waiting to receive an acknowledgment from the [broker](reference-terminology.md#broker)). By default, when the queue is full all calls to the `Send` and `SendAsync` methods will fail *unless* `BlockIfQueueFull` is set to `true`. |
`MaxPendingMessagesAcrossPartitions` | Set the maximum number of pending messages across all the partitions. This setting will be used to lower the max pending messages for each partition (`MaxPendingMessages`) if the total exceeds the configured value. |
`BlockIfQueueFull` | If set to `true`, the producer's `Send` and `SendAsync` methods will block when the outgoing message queue is full rather than failing and throwing an error (the size of that queue is dictated by the `MaxPendingMessages` parameter); if set to `false` (the default), `Send` and `SendAsync` operations will fail and throw a `ProducerQueueIsFullError` when the queue is full. | `false`
`MessageRoutingMode` | The message routing logic (for producers on [partitioned topics](concepts-architecture-overview.md#partitioned-topics)). This logic is applied only when no key is set on messages. The available options are: round robin (`pulsar.RoundRobinDistribution`, the default), publishing all messages to a single partition (`pulsar.UseSinglePartition`), or a custom partitioning scheme (`pulsar.CustomPartition`). | `pulsar.RoundRobinDistribution`
`HashingScheme` | The hashing function that determines the partition on which a particular message is published (partitioned topics only). The available options are: `pulsar.JavaStringHash` (the equivalent of `String.hashCode()` in Java), `pulsar.Murmur3_32Hash` (applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function), or `pulsar.BoostHash` (applies the hashing function from C++'s [Boost](https://www.boost.org/doc/libs/1_62_0/doc/html/hash.html) library) | `pulsar.JavaStringHash`
`CompressionType` | The message data compression type used by the producer. The available options are [`LZ4`](https://github.com/lz4/lz4), [`ZLIB`](https://zlib.net/), [`ZSTD`](https://facebook.github.io/zstd/) and [`SNAPPY`](https://google.github.io/snappy/). | No compression
`MessageRouter` | By default, Pulsar uses a round-robin routing scheme for [partitioned topics](cookbooks-partitioned.md). The `MessageRouter` parameter enables you to specify custom routing logic via a function that takes the Pulsar message and topic metadata as arguments and returns the partition index as an integer, i.e. a function signature of `func(Message, TopicMetadata) int`. |
`Batching` | Control whether automatic batching of messages is enabled for the producer. | false
`BatchingMaxPublishDelay` | Set the time period within which the messages sent will be batched (default: 1ms) if batch messages are enabled. If set to a non-zero value, messages will be queued until this time interval elapses or until `BatchingMaxMessages` is reached, whichever comes first. | 1ms
`BatchingMaxMessages` | Set the maximum number of messages permitted in a batch. (default: 1000) If set to a value greater than 1, messages will be queued until this threshold is reached or the batch interval has elapsed. | 1000
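To see how a few of these options fit together, here is a minimal sketch that enables batching and blocks callers on a full queue. It assumes the `ProducerOptions` field names match the table above and that `BatchingMaxPublishDelay` takes a `time.Duration`; treat it as an illustration rather than a definitive configuration:

```go

import (
    "log"
    "time"

    "github.com/apache/pulsar/pulsar-client-go/pulsar"
)

func newBatchingProducer(client pulsar.Client) pulsar.Producer {
    // Batch up to 500 messages or 10ms of traffic per broker round trip,
    // and block callers instead of erroring when the pending queue fills up.
    producer, err := client.CreateProducer(pulsar.ProducerOptions{
        Topic:                   "my-topic",
        Batching:                true,
        BatchingMaxPublishDelay: 10 * time.Millisecond,
        BatchingMaxMessages:     500,
        MaxPendingMessages:      1000,
        BlockIfQueueFull:        true,
    })
    if err != nil {
        log.Fatalf("Could not instantiate Pulsar producer: %v", err)
    }
    return producer
}

```

Batching trades a small amount of publish latency (bounded by `BatchingMaxPublishDelay`) for fewer broker round trips and higher throughput.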
## Consumers

Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on that topic/those topics. You can [configure](#consumer-configuration) Go consumers using a `ConsumerOptions` object. Here's a basic example that uses channels:

```go

msgChannel := make(chan pulsar.ConsumerMessage)

consumerOpts := pulsar.ConsumerOptions{
    Topic:            "my-topic",
    SubscriptionName: "my-subscription-1",
    Type:             pulsar.Exclusive,
    MessageChannel:   msgChannel,
}

consumer, err := client.Subscribe(consumerOpts)

if err != nil {
    log.Fatalf("Could not establish subscription: %v", err)
}

defer consumer.Close()

for cm := range msgChannel {
    msg := cm.Message

    fmt.Printf("Message ID: %s", msg.ID())
    fmt.Printf("Message value: %s", string(msg.Payload()))

    consumer.Ack(msg)
}

```

> **Blocking operation**
> When you create a new Pulsar consumer, the operation will block (on a go channel) until either a consumer is successfully created or an error is thrown.
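When message processing can fail, you can pair `Ack` with `Nack` (both described in the operations table below) so that failed messages are redelivered after the configured `NackRedeliveryDelay`. Here is a minimal sketch of the receive loop from the channel-based example above, where `processMessage` stands in for hypothetical application logic:

```go

for cm := range msgChannel {
    msg := cm.Message

    if err := processMessage(msg); err != nil {
        // Negatively acknowledge so the broker redelivers the message
        // after the configured NackRedeliveryDelay.
        consumer.Nack(msg)
        continue
    }

    // Processing succeeded, so acknowledge the message.
    consumer.Ack(msg)
}

```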
### Consumer operations

Pulsar Go consumers have the following methods available:

Method | Description | Return type
:------|:------------|:-----------
`Topic()` | Returns the consumer's [topic](reference-terminology.md#topic) | `string`
`Subscription()` | Returns the consumer's subscription name | `string`
`Unsubscribe()` | Unsubscribes the consumer from the assigned topic. Throws an error if the unsubscribe operation is somehow unsuccessful. | `error`
`Receive(context.Context)` | Receives a single message from the topic. This method blocks until a message is available. | `(Message, error)`
`Ack(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) | `error`
`AckID(MessageID)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID | `error`
`AckCumulative(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message. The `AckCumulative` method will block until the ack has been sent to the broker. After that, the messages will *not* be redelivered to the consumer. Cumulative acking cannot be used with a [shared](concepts-messaging.md#shared) subscription type. | `error`
`AckCumulativeID(MessageID)` | Acknowledges the reception of all the messages in the stream up to (and including) the provided message. This method will block until the acknowledgment has been sent to the broker. After that, the messages will not be redelivered to this consumer. | error
`Nack(Message)` | Acknowledges the failure to process a single message. | `error`
`NackID(MessageID)` | Acknowledges the failure to process a single message by message ID. | `error`
`Close()` | Closes the consumer, disabling its ability to receive messages from the broker | `error`
`RedeliverUnackedMessages()` | Redelivers *all* unacknowledged messages on the topic. In [failover](concepts-messaging.md#failover) mode, this request is ignored if the consumer isn't active on the specified topic; in [shared](concepts-messaging.md#shared) mode, redelivered messages are distributed across all consumers connected to the topic. **Note**: this is a *non-blocking* operation that doesn't throw an error. |
`Seek(msgID MessageID)` | Resets the subscription associated with this consumer to a specific message ID. The message ID can either be a specific message or represent the first or last message in the topic. | error
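As an illustration of `Seek`, the following sketch rewinds a subscription to a message ID captured from a previously received message. It is a sketch against the method signature in the table above, with error handling abbreviated:

```go

// Receive a message and remember where we are in the stream.
msg, err := consumer.Receive(context.Background())
if err != nil {
    log.Fatal(err)
}
checkpoint := msg.ID()

// ... later, rewind the subscription so that delivery resumes
// from the checkpointed message.
if err := consumer.Seek(checkpoint); err != nil {
    log.Fatalf("Could not seek to message ID: %v", err)
}

```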
#### Receive example

Here's an example usage of a Go consumer that uses the `Receive()` method to process incoming messages:

```go

import (
    "context"
    "log"

    "github.com/apache/pulsar/pulsar-client-go/pulsar"
)

func main() {
    // Instantiate a Pulsar client
    client, err := pulsar.NewClient(pulsar.ClientOptions{
        URL: "pulsar://localhost:6650",
    })

    if err != nil { log.Fatal(err) }

    // Use the client object to instantiate a consumer
    consumer, err := client.Subscribe(pulsar.ConsumerOptions{
        Topic:            "my-golang-topic",
        SubscriptionName: "sub-1",
        Type:             pulsar.Exclusive,
    })

    if err != nil { log.Fatal(err) }

    defer consumer.Close()

    ctx := context.Background()

    // Listen indefinitely on the topic
    for {
        msg, err := consumer.Receive(ctx)
        if err != nil { log.Fatal(err) }

        // Do something with the message
        err = processMessage(msg)

        if err == nil {
            // Message processed successfully
            consumer.Ack(msg)
        } else {
            // Failed to process the message
            consumer.Nack(msg)
        }
    }
}

```

### Consumer configuration

Parameter | Description | Default
:---------|:------------|:-------
`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the consumer will establish a subscription and listen for messages |
`Topics` | Specify a list of topics this consumer will subscribe to. Either a topic, a list of topics, or a topics pattern is required when subscribing |
`TopicsPattern` | Specify a regular expression to subscribe to multiple topics under the same namespace. Either a topic, a list of topics, or a topics pattern is required when subscribing |
`SubscriptionName` | The subscription name for this consumer |
`Properties` | Attach a set of application-defined properties to the consumer. These properties will be visible in the topic stats |
`Name` | The name of the consumer |
`AckTimeout` | Set the timeout for unacked messages | 0
`NackRedeliveryDelay` | The delay after which to redeliver the messages that failed to be processed. (See `Consumer.Nack()`) | 1 minute
`Type` | Available options are `Exclusive`, `Shared`, and `Failover` | `Exclusive`
`SubscriptionInitPos` | The initial position at which the cursor will be set when subscribing | Latest
`MessageChannel` | The Go channel used by the consumer. Messages that arrive from the Pulsar topic(s) will be passed to this channel. |
`ReceiverQueueSize` | Sets the size of the consumer's receiver queue, i.e. the number of messages that can be accumulated by the consumer before the application calls `Receive`. A value higher than the default of 1000 could increase consumer throughput, though at the expense of more memory utilization. | 1000
`MaxTotalReceiverQueueSizeAcrossPartitions` | Set the max total receiver queue size across partitions. This setting will be used to reduce the receiver queue size for individual partitions if the total exceeds this value | 50000
`ReadCompacted` | If enabled, the consumer will read messages from the compacted topic rather than reading the full message backlog of the topic. This means that, if the topic has been compacted, the consumer will only see the latest value for each key in the topic, up until the point in the topic message backlog that has been compacted. Beyond that point, the messages will be sent as normal. |
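For instance, to consume from every topic in a namespace whose name matches a pattern, you can combine the `TopicsPattern` option from the table above with a `Shared` subscription. A minimal sketch, assuming the regular expression syntax accepted by the broker's topic lookup:

```go

msgChannel := make(chan pulsar.ConsumerMessage)

// Subscribe to every matching topic in the namespace.
consumer, err := client.Subscribe(pulsar.ConsumerOptions{
    TopicsPattern:    "persistent://public/default/events-.*",
    SubscriptionName: "events-subscription",
    Type:             pulsar.Shared,
    MessageChannel:   msgChannel,
})
if err != nil {
    log.Fatalf("Could not establish subscription: %v", err)
}
defer consumer.Close()

```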
## Readers

Pulsar readers process messages from Pulsar topics. Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recent unacked message). You can [configure](#reader-configuration) Go readers using a `ReaderOptions` object. Here's an example:

```go

reader, err := client.CreateReader(pulsar.ReaderOptions{
    Topic:          "my-golang-topic",
    StartMessageID: pulsar.LatestMessage,
})

```

> **Blocking operation**
> When you create a new Pulsar reader, the operation will block (on a go channel) until either a reader is successfully created or an error is thrown.


### Reader operations

Pulsar Go readers have the following methods available:

Method | Description | Return type
:------|:------------|:-----------
`Topic()` | Returns the reader's [topic](reference-terminology.md#topic) | `string`
`Next(context.Context)` | Receives the next message on the topic (analogous to the `Receive` method for [consumers](#consumer-operations)). This method blocks until a message is available. | `(Message, error)`
`HasNext()` | Checks if there is any message available to read from the current position | (bool, error)
`Close()` | Closes the reader, disabling its ability to receive messages from the broker | `error`

#### "Next" example

Here's an example usage of a Go reader that uses the `Next()` method to process incoming messages:

```go

import (
    "context"
    "log"

    "github.com/apache/pulsar/pulsar-client-go/pulsar"
)

func main() {
    // Instantiate a Pulsar client
    client, err := pulsar.NewClient(pulsar.ClientOptions{
        URL: "pulsar://localhost:6650",
    })

    if err != nil { log.Fatalf("Could not create client: %v", err) }

    // Use the client to instantiate a reader
    reader, err := client.CreateReader(pulsar.ReaderOptions{
        Topic:          "my-golang-topic",
        StartMessageID: pulsar.EarliestMessage,
    })

    if err != nil { log.Fatalf("Could not create reader: %v", err) }

    defer reader.Close()

    ctx := context.Background()

    // Listen on the topic for incoming messages
    for {
        msg, err := reader.Next(ctx)
        if err != nil { log.Fatalf("Error reading from topic: %v", err) }

        // Process the message
    }
}

```

In the example above, the reader begins reading from the earliest available message (specified by `pulsar.EarliestMessage`). The reader can also begin reading from the latest message (`pulsar.LatestMessage`) or some other message ID specified by bytes using the `DeserializeMessageID` function, which takes a byte array and returns a `MessageID` object. Here's an example:

```go

lastSavedId := // Read last saved message id from external store as byte[]

reader, err := client.CreateReader(pulsar.ReaderOptions{
    Topic:          "my-golang-topic",
    StartMessageID: DeserializeMessageID(lastSavedId),
})

```

### Reader configuration

Parameter | Description | Default
:---------|:------------|:-------
`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the reader will establish a subscription and listen for messages |
`Name` | The name of the reader |
`StartMessageID` | The initial reader position, i.e. the message at which the reader begins processing messages. The options are `pulsar.EarliestMessage` (the earliest available message on the topic), `pulsar.LatestMessage` (the latest available message on the topic), or a `MessageID` object for a position that isn't earliest or latest. |
`MessageChannel` | The Go channel used by the reader. Messages that arrive from the Pulsar topic(s) will be passed to this channel. |
`ReceiverQueueSize` | Sets the size of the reader's receiver queue, i.e. the number of messages that can be accumulated by the reader before the application calls `Next`. A value higher than the default of 1000 could increase reader throughput, though at the expense of more memory utilization. | 1000
`SubscriptionRolePrefix` | The subscription role prefix. | `reader`
`ReadCompacted` | If enabled, the reader will read messages from the compacted topic rather than reading the full message backlog of the topic. This means that, if the topic has been compacted, the reader will only see the latest value for each key in the topic, up until the point in the topic message backlog that has been compacted. Beyond that point, the messages will be sent as normal. |
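One common pattern is draining a topic up to its current end rather than listening forever; the `HasNext()` method from the operations table above supports this. A minimal sketch, reusing the `reader` created in the earlier examples:

```go

// Read everything currently in the topic, then stop.
for {
    hasNext, err := reader.HasNext()
    if err != nil {
        log.Fatalf("Could not check for next message: %v", err)
    }
    if !hasNext {
        break
    }

    msg, err := reader.Next(context.Background())
    if err != nil {
        log.Fatalf("Error reading from topic: %v", err)
    }

    // Process the message
    log.Printf("Read message with ID %s", msg.ID())
}

```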
## Messages

The Pulsar Go client provides a `ProducerMessage` interface that you can use to construct messages to produce on Pulsar topics. Here's an example message:

```go

msg := pulsar.ProducerMessage{
    Payload: []byte("Here is some message data"),
    Key: "message-key",
    Properties: map[string]string{
        "foo": "bar",
    },
    EventTime: time.Now(),
    ReplicationClusters: []string{"cluster1", "cluster3"},
}

if err := producer.Send(context.Background(), msg); err != nil {
    log.Fatalf("Could not publish message due to: %v", err)
}

```

The following parameters are available for `ProducerMessage` objects:

Parameter | Description
:---------|:-----------
`Payload` | The actual data payload of the message
`Value` | Value and payload are mutually exclusive; use `Value interface{}` for schema messages.
`Key` | The optional key associated with the message (particularly useful for things like topic compaction)
`Properties` | A key-value map (both keys and values must be strings) for any application-specific metadata attached to the message
`EventTime` | The timestamp associated with the message
`ReplicationClusters` | The clusters to which this message will be replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default.
`SequenceID` | Set the sequence id to assign to the current message

## TLS encryption and authentication

In order to use [TLS encryption](security-tls-transport.md), you'll need to configure your client to do so:

 * Use the `pulsar+ssl` URL type
 * Set `TLSTrustCertsFilePath` to the path to the TLS certs used by your client and the Pulsar broker
 * Configure the `Authentication` option

Here's an example:

```go

opts := pulsar.ClientOptions{
    URL: "pulsar+ssl://my-cluster.com:6651",
    TLSTrustCertsFilePath: "/path/to/certs/my-cert.csr",
    Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem"),
}

```

## Schema

This example shows how to create a producer and a consumer with schema.
```go

var exampleSchemaDef = "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," +
    "\"fields\":[{\"name\":\"ID\",\"type\":\"int\"},{\"name\":\"Name\",\"type\":\"string\"}]}"

// testJson mirrors the fields declared in exampleSchemaDef
type testJson struct {
    ID   int    `json:"ID"`
    Name string `json:"Name"`
}

jsonSchema := NewJsonSchema(exampleSchemaDef, nil)
// create producer
producer, err := client.CreateProducerWithSchema(ProducerOptions{
    Topic: "jsonTopic",
}, jsonSchema)
if err != nil {
    log.Fatal(err)
}
defer producer.Close()
err = producer.Send(context.Background(), ProducerMessage{
    Value: &testJson{
        ID:   100,
        Name: "pulsar",
    },
})
if err != nil {
    log.Fatal(err)
}
// create consumer
var s testJson
consumerJS := NewJsonSchema(exampleSchemaDef, nil)
consumer, err := client.SubscribeWithSchema(ConsumerOptions{
    Topic:            "jsonTopic",
    SubscriptionName: "sub-2",
}, consumerJS)
if err != nil {
    log.Fatal(err)
}
defer consumer.Close()
msg, err := consumer.Receive(context.Background())
if err != nil {
    log.Fatal(err)
}
err = msg.GetValue(&s)
if err != nil {
    log.Fatal(err)
}
fmt.Println(s.ID)   // output: 100
fmt.Println(s.Name) // output: pulsar

```

diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/client-libraries-cpp.md b/site2/website/versioned_docs/version-2.8.0-deprecated/client-libraries-cpp.md
deleted file mode 100644
index 08622b9e830714..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/client-libraries-cpp.md
+++ /dev/null
@@ -1,408 +0,0 @@
---
id: client-libraries-cpp
title: Pulsar C++ client
sidebar_label: "C++"
original_id: client-libraries-cpp
---

You can use the Pulsar C++ client to create Pulsar producers and consumers in C++.

All the methods in the producer, consumer, and reader of a C++ client are thread-safe.

## Supported platforms

The Pulsar C++ client is supported on **Linux** and **MacOS** platforms.

[Doxygen](http://www.doxygen.nl/)-generated API docs for the C++ client are available [here](/api/cpp).

## System requirements

You need to install the following components before using the C++ client:

* [CMake](https://cmake.org/)
* [Boost](http://www.boost.org/)
* [Protocol Buffers](https://developers.google.com/protocol-buffers/) 2.6
* [libcurl](https://curl.haxx.se/libcurl/)
* [Google Test](https://github.com/google/googletest)

## Linux

### Compilation

1. Clone the Pulsar repository.

```shell

$ git clone https://github.com/apache/pulsar

```

2. Install all necessary dependencies.

```shell

$ apt-get install cmake libssl-dev libcurl4-openssl-dev liblog4cxx-dev \
  libprotobuf-dev protobuf-compiler libboost-all-dev google-mock libgtest-dev libjsoncpp-dev

```

3. Compile and install [Google Test](https://github.com/google/googletest).

```shell

# libgtest-dev version is 1.18.0 or above
$ cd /usr/src/googletest
$ sudo cmake .
$ sudo make
$ sudo cp ./googlemock/libgmock.a ./googlemock/gtest/libgtest.a /usr/lib/

# less than 1.18.0
$ cd /usr/src/gtest
$ sudo cmake .
$ sudo make
$ sudo cp libgtest.a /usr/lib

$ cd /usr/src/gmock
$ sudo cmake .
$ sudo make
$ sudo cp libgmock.a /usr/lib

```

4. Compile the Pulsar client library for C++ inside the Pulsar repository.

```shell

$ cd pulsar-client-cpp
$ cmake .
$ make

```

After you install the components successfully, the files `libpulsar.so` and `libpulsar.a` are in the `lib` folder of the repository. The tools `perfProducer` and `perfConsumer` are in the `perf` directory.

### Install Dependencies

> Since the 2.1.0 release, Pulsar ships pre-built RPM and Debian packages. You can download and install those packages directly.
After you download and install RPM or DEB, the `libpulsar.so`, `libpulsarnossl.so`, `libpulsar.a`, and `libpulsarwithdeps.a` libraries are in your `/usr/lib` directory.

By default, they are built in the code path `${PULSAR_HOME}/pulsar-client-cpp`. You can build them with the command below:

 `cmake . -DBUILD_TESTS=OFF -DLINK_STATIC=ON && make pulsarShared pulsarSharedNossl pulsarStatic pulsarStaticWithDeps -j 3`

These libraries rely on some other libraries. If you want a detailed list of dependency versions, see the [RPM](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/pkg/rpm/Dockerfile) or [DEB](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/pkg/deb/Dockerfile) files.

1. `libpulsar.so` is a shared library, containing statically linked `boost` and `openssl`. It also dynamically links all other necessary libraries. You can use this Pulsar library with the command below.

```bash

 g++ --std=c++11 PulsarTest.cpp -o test /usr/lib/libpulsar.so -I/usr/local/ssl/include

```

2. `libpulsarnossl.so` is a shared library, similar to `libpulsar.so` except that the libraries `openssl` and `crypto` are dynamically linked. You can use this Pulsar library with the command below.

```bash

 g++ --std=c++11 PulsarTest.cpp -o test /usr/lib/libpulsarnossl.so -lssl -lcrypto -I/usr/local/ssl/include -L/usr/local/ssl/lib

```

3. `libpulsar.a` is a static library. You need to load its dependencies before using this library. You can use this Pulsar library with the command below.

```bash

 g++ --std=c++11 PulsarTest.cpp -o test /usr/lib/libpulsar.a -lssl -lcrypto -ldl -lpthread -I/usr/local/ssl/include -L/usr/local/ssl/lib -lboost_system -lboost_regex -lcurl -lprotobuf -lzstd -lz

```

4. `libpulsarwithdeps.a` is a static library, based on `libpulsar.a`. It additionally archives the dependencies `libboost_regex`, `libboost_system`, `libcurl`, `libprotobuf`, `libzstd` and `libz`. You can use this Pulsar library with the command below.

```bash

 g++ --std=c++11 PulsarTest.cpp -o test /usr/lib/libpulsarwithdeps.a -lssl -lcrypto -ldl -lpthread -I/usr/local/ssl/include -L/usr/local/ssl/lib

```

`libpulsarwithdeps.a` does not include the openssl-related libraries `libssl` and `libcrypto`, because these two libraries are security-sensitive. It is more reasonable and easier to use the versions provided by the local system to handle security issues and library upgrades.

### Install RPM

1. Download an RPM package from the links in the table.

| Link | Crypto files |
|------|--------------|
| [client](@pulsar:dist_rpm:client@) | [asc](@pulsar:dist_rpm:client@.asc), [sha512](@pulsar:dist_rpm:client@.sha512) |
| [client-debuginfo](@pulsar:dist_rpm:client-debuginfo@) | [asc](@pulsar:dist_rpm:client-debuginfo@.asc), [sha512](@pulsar:dist_rpm:client-debuginfo@.sha512) |
| [client-devel](@pulsar:dist_rpm:client-devel@) | [asc](@pulsar:dist_rpm:client-devel@.asc), [sha512](@pulsar:dist_rpm:client-devel@.sha512) |

2. Install the package using the following command.

```bash

$ rpm -ivh apache-pulsar-client*.rpm

```

After you install RPM successfully, the Pulsar libraries are in the `/usr/lib` directory.

### Install Debian

1. Download a Debian package from the links in the table.
| Link | Crypto files |
|------|--------------|
| [client](@pulsar:deb:client@) | [asc](@pulsar:dist_deb:client@.asc), [sha512](@pulsar:dist_deb:client@.sha512) |
| [client-devel](@pulsar:deb:client-devel@) | [asc](@pulsar:dist_deb:client-devel@.asc), [sha512](@pulsar:dist_deb:client-devel@.sha512) |

2. Install the package using the following command.

```bash

$ apt install ./apache-pulsar-client*.deb

```

After you install DEB successfully, the Pulsar libraries are in the `/usr/lib` directory.

### Build

> If you want to build RPM and Debian packages from the latest master, follow the instructions below. You should run all the instructions at the root directory of your cloned Pulsar repository.

There are recipes that build RPM and Debian packages containing a
statically linked `libpulsar.so` / `libpulsarnossl.so` / `libpulsar.a` / `libpulsarwithdeps.a` with all required dependencies.

To build the C++ library packages, you need to build the Java packages first.

```shell

mvn install -DskipTests

```

#### RPM

To build the RPM inside a Docker container, use the command below. The RPMs are in the `pulsar-client-cpp/pkg/rpm/RPMS/x86_64/` path.

```shell

pulsar-client-cpp/pkg/rpm/docker-build-rpm.sh

```

| Package name | Content |
|-----|-----|
| pulsar-client | Shared libraries `libpulsar.so` and `libpulsarnossl.so` |
| pulsar-client-devel | Static libraries `libpulsar.a` and `libpulsarwithdeps.a`, plus C++ and C headers |
| pulsar-client-debuginfo | Debug symbols for `libpulsar.so` |

#### Debian

To build Debian packages, enter the following command.

```shell

pulsar-client-cpp/pkg/deb/docker-build-deb.sh

```

Debian packages are created in the `pulsar-client-cpp/pkg/deb/BUILD/DEB/` path.

| Package name | Content |
|-----|-----|
| pulsar-client | Shared libraries `libpulsar.so` and `libpulsarnossl.so` |
| pulsar-client-dev | Static libraries `libpulsar.a` and `libpulsarwithdeps.a`, plus C++ and C headers |

## MacOS

### Compilation

1. Clone the Pulsar repository.

```shell

$ git clone https://github.com/apache/pulsar

```

2. Install all necessary dependencies.

```shell

# OpenSSL installation
$ brew install openssl
$ export OPENSSL_INCLUDE_DIR=/usr/local/opt/openssl/include/
$ export OPENSSL_ROOT_DIR=/usr/local/opt/openssl/

# Protocol Buffers installation
$ brew tap homebrew/versions
$ brew install protobuf260
$ brew install boost
$ brew install log4cxx

# Google Test installation
$ git clone https://github.com/google/googletest.git
$ cd googletest
$ git checkout release-1.12.1
$ cmake .
$ make install

```

3. Compile the Pulsar client library in the repository that you cloned.

```shell

$ cd pulsar-client-cpp
$ cmake .
$ make

```

### Install `libpulsar`

Pulsar releases are available in the [Homebrew](https://brew.sh/) core repository. You can install the C++ client library with the following command. The package is installed with the library and headers.

```shell

brew install libpulsar

```

## Connection URLs

To connect to Pulsar using client libraries, you need to specify a Pulsar protocol URL.

Pulsar protocol URLs are assigned to specific clusters and use the Pulsar URI scheme. The default port is `6650`. The following is an example for localhost.

```http

pulsar://localhost:6650

```

In a production Pulsar cluster, the URL looks as follows.
```http

pulsar://pulsar.us-west.example.com:6650

```

If you use TLS authentication, you need to add `ssl`, and the default port is `6651`. The following is an example.

```http

pulsar+ssl://pulsar.us-west.example.com:6651

```

## Create a consumer

To use Pulsar as a consumer, you need to create a consumer on the C++ client. The following is an example.

```c++

Client client("pulsar://localhost:6650");

Consumer consumer;
Result result = client.subscribe("my-topic", "my-subscription-name", consumer);
if (result != ResultOk) {
    LOG_ERROR("Failed to subscribe: " << result);
    return -1;
}

Message msg;

while (true) {
    consumer.receive(msg);
    LOG_INFO("Received: " << msg
             << "  with payload '" << msg.getDataAsString() << "'");

    consumer.acknowledge(msg);
}

client.close();

```

## Create a producer

To use Pulsar as a producer, you need to create a producer on the C++ client. The following is an example.

```c++

Client client("pulsar://localhost:6650");

Producer producer;
Result result = client.createProducer("my-topic", producer);
if (result != ResultOk) {
    LOG_ERROR("Error creating producer: " << result);
    return -1;
}

// Publish 10 messages to the topic
for (int i = 0; i < 10; i++){
    Message msg = MessageBuilder().setContent("my-message").build();
    Result res = producer.send(msg);
    LOG_INFO("Message sent: " << res);
}
client.close();

```

## Enable authentication in connection URLs

If you use TLS authentication when connecting to Pulsar, you need to add `ssl` in the connection URLs, and the default port is `6651`. The following is an example.

```cpp

ClientConfiguration config = ClientConfiguration();
config.setUseTls(true);
config.setTlsTrustCertsFilePath("/path/to/cacert.pem");
config.setTlsAllowInsecureConnection(false);
config.setAuth(pulsar::AuthTls::create(
    "/path/to/client-cert.pem", "/path/to/client-key.pem"));

Client client("pulsar+ssl://my-broker.com:6651", config);

```

For complete examples, refer to [C++ client examples](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp/examples).

## Schema

This section describes some examples about schema. For more information about schema, see [Pulsar schema](schema-get-started.md).

### Create producer with Avro schema

The following example shows how to create a producer with an Avro schema.

```cpp

static const std::string exampleSchema =
    "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\","
    "\"fields\":[{\"name\":\"a\",\"type\":\"int\"},{\"name\":\"b\",\"type\":\"int\"}]}";
Producer producer;
ProducerConfiguration producerConf;
producerConf.setSchema(SchemaInfo(AVRO, "Avro", exampleSchema));
client.createProducer("topic-avro", producerConf, producer);

```

### Create consumer with Avro schema

The following example shows how to create a consumer with an Avro schema.
```cpp

static const std::string exampleSchema =
    "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\","
    "\"fields\":[{\"name\":\"a\",\"type\":\"int\"},{\"name\":\"b\",\"type\":\"int\"}]}";
ConsumerConfiguration consumerConf;
Consumer consumer;
consumerConf.setSchema(SchemaInfo(AVRO, "Avro", exampleSchema));
client.subscribe("topic-avro", "sub-2", consumerConf, consumer);

```

diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/client-libraries-dotnet.md b/site2/website/versioned_docs/version-2.8.0-deprecated/client-libraries-dotnet.md
deleted file mode 100644
index b574fa0b2e5ed8..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/client-libraries-dotnet.md
+++ /dev/null
@@ -1,434 +0,0 @@
---
id: client-libraries-dotnet
title: Pulsar C# client
sidebar_label: "C#"
original_id: client-libraries-dotnet
---

You can use the Pulsar C# client (DotPulsar) to create Pulsar producers and consumers in C#. All the methods in the producer, consumer, and reader of a C# client are thread-safe. The official documentation for DotPulsar is available [here](https://github.com/apache/pulsar-dotpulsar/wiki).

## Installation

You can install the Pulsar C# client library either through the dotnet CLI or through Visual Studio. This section describes how to install the Pulsar C# client library through the dotnet CLI. For information about how to install the Pulsar C# client library through Visual Studio, see [here](https://docs.microsoft.com/en-us/visualstudio/mac/nuget-walkthrough?view=vsmac-2019).

### Prerequisites

Install the [.NET Core SDK](https://dotnet.microsoft.com/download/), which provides the dotnet command-line tool. Starting in Visual Studio 2017, the dotnet CLI is automatically installed with any .NET Core related workloads.

### Procedures

To install the Pulsar C# client library, follow these steps:

1. Create a project.

   1. Create a folder for the project.

   2. Open a terminal window and switch to the new folder.

   3. Create the project using the following command.

      ```

      dotnet new console

      ```

   4. Use `dotnet run` to test that the app has been created properly.

2. Add the DotPulsar NuGet package.

   1. Use the following command to install the `DotPulsar` package.

      ```

      dotnet add package DotPulsar

      ```

   2. After the command completes, open the `.csproj` file to see the added reference (the version number will reflect the release that was installed):

      ```xml

      <ItemGroup>
        <PackageReference Include="DotPulsar" Version="x.y.z" />
      </ItemGroup>

      ```

## Client

This section describes some configuration examples for the Pulsar C# client.

### Create client

This example shows how to create a Pulsar C# client connected to localhost.

```c#

var client = PulsarClient.Builder().Build();

```

To create a Pulsar C# client by using the builder, you can specify the following options.

| Option | Description | Default |
| ---- | ---- | ---- |
| ServiceUrl | Set the service URL for the Pulsar cluster. | pulsar://localhost:6650 |
| RetryInterval | Set the time to wait before retrying an operation or a reconnection. | 3s |

### Create producer

This section describes how to create a producer.

- Create a producer by using the builder.

  ```c#

  var producer = client.NewProducer()
      .Topic("persistent://public/default/mytopic")
      .Create();

  ```

- Create a producer without using the builder.
  ```c#

  var options = new ProducerOptions("persistent://public/default/mytopic");
  var producer = client.CreateProducer(options);

  ```

### Create consumer

This section describes how to create a consumer.

- Create a consumer by using the builder.

  ```c#

  var consumer = client.NewConsumer()
      .SubscriptionName("MySubscription")
      .Topic("persistent://public/default/mytopic")
      .Create();

  ```

- Create a consumer without using the builder.

  ```c#

  var options = new ConsumerOptions("MySubscription", "persistent://public/default/mytopic");
  var consumer = client.CreateConsumer(options);

  ```

### Create reader

This section describes how to create a reader.

- Create a reader by using the builder.

  ```c#

  var reader = client.NewReader()
      .StartMessageId(MessageId.Earliest)
      .Topic("persistent://public/default/mytopic")
      .Create();

  ```

- Create a reader without using the builder.

  ```c#

  var options = new ReaderOptions(MessageId.Earliest, "persistent://public/default/mytopic");
  var reader = client.CreateReader(options);

  ```

### Configure encryption policies

The Pulsar C# client supports four kinds of encryption policies:

- `EnforceUnencrypted`: always use unencrypted connections.
- `EnforceEncrypted`: always use encrypted connections.
- `PreferUnencrypted`: use unencrypted connections, if possible.
- `PreferEncrypted`: use encrypted connections, if possible.

This example shows how to set the `EnforceEncrypted` encryption policy.

```c#

var client = PulsarClient.Builder()
    .ConnectionSecurity(EncryptionPolicy.EnforceEncrypted)
    .Build();

```

### Configure authentication

Currently, the Pulsar C# client supports TLS (Transport Layer Security) and JWT (JSON Web Token) authentication.

If you have followed [Authentication using TLS](security-tls-authentication.md), you have a certificate and a key. To use them from the Pulsar C# client, follow these steps:

1. Create an unencrypted and password-less pfx file.

   ```shell

   openssl pkcs12 -export -keypbe NONE -certpbe NONE -out admin.pfx -inkey admin.key.pem -in admin.cert.pem -passout pass:

   ```

2. Use the admin.pfx file to create an X509Certificate2 and pass it to the Pulsar C# client.

   ```c#

   var clientCertificate = new X509Certificate2("admin.pfx");
   var client = PulsarClient.Builder()
       .AuthenticateUsingClientCertificate(clientCertificate)
       .Build();

   ```

## Producer

A producer is a process that attaches to a topic and publishes messages to a Pulsar broker for processing. This section describes some configuration examples for the producer.

### Send data

This example shows how to send data.

```c#

var data = Encoding.UTF8.GetBytes("Hello World");
await producer.Send(data);

```

### Send messages with customized metadata

- Send messages with customized metadata by using the builder.

  ```c#

  var data = Encoding.UTF8.GetBytes("Hello World");
  var messageId = await producer.NewMessage()
      .Property("SomeKey", "SomeValue")
      .Send(data);

  ```

- Send messages with customized metadata without using the builder.

  ```c#

  var data = Encoding.UTF8.GetBytes("Hello World");
  var metadata = new MessageMetadata();
  metadata["SomeKey"] = "SomeValue";
  var messageId = await producer.Send(metadata, data);

  ```

## Consumer

A consumer is a process that attaches to a topic through a subscription and then receives messages.
-
-### Receive messages
-
-This example shows how a consumer receives messages from a topic.
-
-```c#
-
-await foreach (var message in consumer.Messages())
-{
-    Console.WriteLine("Received: " + Encoding.UTF8.GetString(message.Data.ToArray()));
-}
-
-```
-
-### Acknowledge messages
-
-Messages can be acknowledged individually or cumulatively. For details about message acknowledgement, see [acknowledgement](concepts-messaging.md#acknowledgement).
-
-- Acknowledge messages individually.
-
-  ```c#
-  
-  await foreach (var message in consumer.Messages())
-  {
-      Console.WriteLine("Received: " + Encoding.UTF8.GetString(message.Data.ToArray()));
-      await consumer.Acknowledge(message);
-  }
-  
-  ```
-
-- Acknowledge messages cumulatively.
-
-  ```c#
-  
-  await consumer.AcknowledgeCumulative(message);
-  
-  ```
-
-### Unsubscribe from topics
-
-This example shows how a consumer unsubscribes from a topic.
-
-```c#
-
-await consumer.Unsubscribe();
-
-```
-
-#### Note
-
-> Once a consumer unsubscribes from a topic, it is disposed and can no longer be used.
-
-## Reader
-
-A reader is actually just a consumer without a cursor. This means that Pulsar does not keep track of your progress and there is no need to acknowledge messages.
-
-This example shows how a reader receives messages.
-
-```c#
-
-await foreach (var message in reader.Messages())
-{
-    Console.WriteLine("Received: " + Encoding.UTF8.GetString(message.Data.ToArray()));
-}
-
-```
-
-## Monitoring
-
-This section describes how to monitor the producer, consumer, and reader state.
-
-### Monitor producer state
-
-The following table lists states available for the producer.
-
-| State | Description |
-| ---- | ----|
-| Closed | The producer or the Pulsar client has been disposed. |
-| Connected | All is well. |
-| Disconnected | The connection is lost and attempts are being made to reconnect. |
-| Faulted | An unrecoverable error has occurred. |
-
-This example shows how to monitor the producer state.
-
-```c#
-
-private static async ValueTask Monitor(IProducer producer, CancellationToken cancellationToken)
-{
-    var state = ProducerState.Disconnected;
-
-    while (!cancellationToken.IsCancellationRequested)
-    {
-        state = await producer.StateChangedFrom(state, cancellationToken);
-
-        var stateMessage = state switch
-        {
-            ProducerState.Connected => "The producer is connected",
-            ProducerState.Disconnected => "The producer is disconnected",
-            ProducerState.Closed => "The producer has closed",
-            ProducerState.Faulted => "The producer has faulted",
-            _ => $"The producer has an unknown state '{state}'"
-        };
-
-        Console.WriteLine(stateMessage);
-
-        if (producer.IsFinalState(state))
-            return;
-    }
-}
-
-```
-
-### Monitor consumer state
-
-The following table lists states available for the consumer.
-
-| State | Description |
-| ---- | ----|
-| Active | All is well. |
-| Inactive | All is well. The subscription type is `Failover` and you are not the active consumer. |
-| Closed | The consumer or the Pulsar client has been disposed. |
-| Disconnected | The connection is lost and attempts are being made to reconnect. |
-| Faulted | An unrecoverable error has occurred. |
-| ReachedEndOfTopic | No more messages are delivered. |
-
-This example shows how to monitor the consumer state.
-
-```c#
-
-private static async ValueTask Monitor(IConsumer consumer, CancellationToken cancellationToken)
-{
-    var state = ConsumerState.Disconnected;
-
-    while (!cancellationToken.IsCancellationRequested)
-    {
-        state = await consumer.StateChangedFrom(state, cancellationToken);
-
-        var stateMessage = state switch
-        {
-            ConsumerState.Active => "The consumer is active",
-            ConsumerState.Inactive => "The consumer is inactive",
-            ConsumerState.Disconnected => "The consumer is disconnected",
-            ConsumerState.Closed => "The consumer has closed",
-            ConsumerState.ReachedEndOfTopic => "The consumer has reached end of topic",
-            ConsumerState.Faulted => "The consumer has faulted",
-            _ => $"The consumer has an unknown state '{state}'"
-        };
-
-        Console.WriteLine(stateMessage);
-
-        if (consumer.IsFinalState(state))
-            return;
-    }
-}
-
-```
-
-### Monitor reader state
-
-The following table lists states available for the reader.
-
-| State | Description |
-| ---- | ----|
-| Closed | The reader or the Pulsar client has been disposed. |
-| Connected | All is well. |
-| Disconnected | The connection is lost and attempts are being made to reconnect. |
-| Faulted | An unrecoverable error has occurred. |
-| ReachedEndOfTopic | No more messages are delivered. |
-
-This example shows how to monitor the reader state.
-
-```c#
-
-private static async ValueTask Monitor(IReader reader, CancellationToken cancellationToken)
-{
-    var state = ReaderState.Disconnected;
-
-    while (!cancellationToken.IsCancellationRequested)
-    {
-        state = await reader.StateChangedFrom(state, cancellationToken);
-
-        var stateMessage = state switch
-        {
-            ReaderState.Connected => "The reader is connected",
-            ReaderState.Disconnected => "The reader is disconnected",
-            ReaderState.Closed => "The reader has closed",
-            ReaderState.ReachedEndOfTopic => "The reader has reached end of topic",
-            ReaderState.Faulted => "The reader has faulted",
-            _ => $"The reader has an unknown state '{state}'"
-        };
-
-        Console.WriteLine(stateMessage);
-
-        if (reader.IsFinalState(state))
-            return;
-    }
-}
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/client-libraries-go.md b/site2/website/versioned_docs/version-2.8.0-deprecated/client-libraries-go.md
deleted file mode 100644
index 6281b03dd8c805..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/client-libraries-go.md
+++ /dev/null
@@ -1,885 +0,0 @@
----
-id: client-libraries-go
-title: Pulsar Go client
-sidebar_label: "Go"
-original_id: client-libraries-go
----
-
-> Tip: The CGo client is deprecated. For more information about the CGo client, see the [CGo client docs](client-libraries-cgo.md).
-
-You can use the Pulsar [Go client](https://github.com/apache/pulsar-client-go) to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Go (aka Golang).
-
-> **API docs available as well**
-> For standard API docs, consult the [Godoc](https://godoc.org/github.com/apache/pulsar-client-go/pulsar).
-
-
-## Installation
-
-### Install go package
-
-You can install the `pulsar` library locally using `go get`.
-
-```bash
-
-$ go get -u "github.com/apache/pulsar-client-go/pulsar"
-
-```
-
-Once installed locally, you can import it into your project:
-
-```go
-
-import "github.com/apache/pulsar-client-go/pulsar"
-
-```
-
-## Connection URLs
-
-To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL.
-
-Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme, and have a default port of 6650. Here's an example for `localhost`:
-
-```http
-
-pulsar://localhost:6650
-
-```
-
-If you have multiple brokers, you can set the URL as below.
-
-```
-
-pulsar://localhost:6650,localhost:6651,localhost:6652
-
-```
-
-A URL for a production Pulsar cluster may look something like this:
-
-```http
-
-pulsar://pulsar.us-west.example.com:6650
-
-```
-
-If you're using [TLS](security-tls-authentication.md) authentication, the URL will look something like this:
-
-```http
-
-pulsar+ssl://pulsar.us-west.example.com:6651
-
-```
-
-## Create a client
-
-In order to interact with Pulsar, you'll first need a `Client` object. You can create a client object using the `NewClient` function, passing in a `ClientOptions` object (more on configuration [below](#client-configuration)). Here's an example:
-
-```go
-
-import (
-    "log"
-    "time"
-
-    "github.com/apache/pulsar-client-go/pulsar"
-)
-
-func main() {
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-        URL:               "pulsar://localhost:6650",
-        OperationTimeout:  30 * time.Second,
-        ConnectionTimeout: 30 * time.Second,
-    })
-    if err != nil {
-        log.Fatalf("Could not instantiate Pulsar client: %v", err)
-    }
-
-    defer client.Close()
-}
-
-```
-
-If you have multiple brokers, you can initiate a client object as below.
-
-```go
-
-import (
-    "log"
-    "time"
-
-    "github.com/apache/pulsar-client-go/pulsar"
-)
-
-func main() {
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-        URL:               "pulsar://localhost:6650,localhost:6651,localhost:6652",
-        OperationTimeout:  30 * time.Second,
-        ConnectionTimeout: 30 * time.Second,
-    })
-    if err != nil {
-        log.Fatalf("Could not instantiate Pulsar client: %v", err)
-    }
-
-    defer client.Close()
-}
-
-```
-
-The following configurable parameters are available for Pulsar clients:
-
- Name | Description | Default
-| :-------- | :---------- |:---------- |
-| URL | Configure the service URL for the Pulsar service.<br /><br />If you have multiple brokers, you can set multiple Pulsar cluster addresses for a client.<br /><br />This parameter is **required**. | None |
-| ConnectionTimeout | Timeout for the establishment of a TCP connection | 30s |
-| OperationTimeout | Set the operation timeout. Producer-create, subscribe, and unsubscribe operations are retried until this interval elapses, after which the operation is marked as failed | 30s |
-| Authentication | Configure the authentication provider. Example: `Authentication: pulsar.NewAuthenticationTLS("my-cert.pem", "my-key.pem")` | no authentication |
-| TLSTrustCertsFilePath | Set the path to the trusted TLS certificate file | |
-| TLSAllowInsecureConnection | Configure whether the Pulsar client accepts untrusted TLS certificates from the broker | false |
-| TLSValidateHostname | Configure whether the Pulsar client verifies the validity of the host name from the broker | false |
-| ListenerName | Configure the net model for VPC users to connect to the Pulsar broker | |
-| MaxConnectionsPerBroker | Max number of connections to a single broker that is kept in the pool | 1 |
-| CustomMetricsLabels | Add custom labels to all the metrics reported by this client instance | |
-| Logger | Configure the logger used by the client | logrus.StandardLogger |
-
-## Producers
-
-Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Go producers using a `ProducerOptions` object. Here's an example:
-
-```go
-
-producer, err := client.CreateProducer(pulsar.ProducerOptions{
-    Topic: "my-topic",
-})
-if err != nil {
-    log.Fatal(err)
-}
-defer producer.Close()
-
-_, err = producer.Send(context.Background(), &pulsar.ProducerMessage{
-    Payload: []byte("hello"),
-})
-if err != nil {
-    fmt.Println("Failed to publish message", err)
-}
-fmt.Println("Published message")
-
-```
-
-### Producer operations
-
-Pulsar Go producers have the following methods available:
-
-Method | Description | Return type
-:------|:------------|:-----------
-`Topic()` | Fetches the producer's [topic](reference-terminology.md#topic) | `string`
-`Name()` | Fetches the producer's name | `string`
-`Send(context.Context, *ProducerMessage)` | Publishes a [message](#messages) to the producer's topic. This call blocks until the message is successfully acknowledged by the Pulsar broker, or an error is returned if the timeout set using `SendTimeout` in the producer's [configuration](#producer-configuration) is exceeded. | (MessageID, error)
-`SendAsync(context.Context, *ProducerMessage, func(MessageID, *ProducerMessage, error))` | Sends a message asynchronously. The call returns immediately, and the callback is invoked once the message is acknowledged by the Pulsar broker or an error occurs. |
-`LastSequenceID()` | Gets the last sequence id that was published by this producer. This represents either the automatically assigned or custom sequence id (set on the ProducerMessage) that was published and acknowledged by the broker. | int64
-`Flush()` | Flush all the messages buffered in the client and wait until all messages have been successfully persisted. | error
-`Close()` | Closes the producer and releases all resources allocated to it. If `Close()` is called then no more messages will be accepted from the publisher. This method blocks until all pending publish requests have been persisted by Pulsar. If an error is thrown, no pending writes will be retried. | |
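-
-For instance, `SendAsync` publishes without blocking the calling goroutine. A minimal sketch (it reuses the `producer` created above and waits for the callback with a `sync.WaitGroup`):
-
-```go
-
-var wg sync.WaitGroup
-wg.Add(1)
-
-producer.SendAsync(context.Background(), &pulsar.ProducerMessage{
-    Payload: []byte("hello-async"),
-}, func(id pulsar.MessageID, msg *pulsar.ProducerMessage, err error) {
-    // The callback runs once the broker acknowledges the message or an error occurs.
-    defer wg.Done()
-    if err != nil {
-        log.Println("Failed to publish message:", err)
-        return
-    }
-    fmt.Println("Published message with ID:", id)
-})
-
-wg.Wait()
-
-```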
-
-### Producer Example
-
-#### How to use message router in producer
-
-```go
-
-client, err := pulsar.NewClient(pulsar.ClientOptions{
-    URL: "pulsar://localhost:6650",
-})
-if err != nil {
-    log.Fatal(err)
-}
-defer client.Close()
-
-// Only subscribe on the specific partition
-consumer, err := client.Subscribe(pulsar.ConsumerOptions{
-    Topic:            "my-partitioned-topic-partition-2",
-    SubscriptionName: "my-sub",
-})
-if err != nil {
-    log.Fatal(err)
-}
-defer consumer.Close()
-
-producer, err := client.CreateProducer(pulsar.ProducerOptions{
-    Topic: "my-partitioned-topic",
-    MessageRouter: func(msg *pulsar.ProducerMessage, tm pulsar.TopicMetadata) int {
-        fmt.Println("Routing message ", msg, " -- Partitions: ", tm.NumPartitions())
-        return 2
-    },
-})
-if err != nil {
-    log.Fatal(err)
-}
-defer producer.Close()
-
-```
-
-#### How to use schema interface in producer
-
-```go
-
-type testJSON struct {
-    ID   int    `json:"id"`
-    Name string `json:"name"`
-}
-
-```
-
-```go
-
-var (
-    exampleSchemaDef = "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," +
-        "\"fields\":[{\"name\":\"ID\",\"type\":\"int\"},{\"name\":\"Name\",\"type\":\"string\"}]}"
-)
-
-```
-
-```go
-
-client, err := pulsar.NewClient(pulsar.ClientOptions{
-    URL: "pulsar://localhost:6650",
-})
-if err != nil {
-    log.Fatal(err)
-}
-defer client.Close()
-
-properties := make(map[string]string)
-properties["pulsar"] = "hello"
-jsonSchemaWithProperties := pulsar.NewJSONSchema(exampleSchemaDef, properties)
-producer, err := client.CreateProducer(pulsar.ProducerOptions{
-    Topic:  "jsonTopic",
-    Schema: jsonSchemaWithProperties,
-})
-if err != nil {
-    log.Fatal(err)
-}
-
-_, err = producer.Send(context.Background(), &pulsar.ProducerMessage{
-    Value: &testJSON{
-        ID:   100,
-        Name: "pulsar",
-    },
-})
-if err != nil {
-    log.Fatal(err)
-}
-producer.Close()
-
-```
-
-#### How to use delay relative in producer
-
-```go
-
-client, err := pulsar.NewClient(pulsar.ClientOptions{
-    URL: "pulsar://localhost:6650",
-})
-if err != nil {
-    log.Fatal(err)
-}
-defer client.Close()
-
-topicName := "topic-delay"
-producer, err := client.CreateProducer(pulsar.ProducerOptions{
-    Topic:           topicName,
-    DisableBatching: true,
-})
-if err != nil {
-    log.Fatal(err)
-}
-defer producer.Close()
-
-consumer, err := client.Subscribe(pulsar.ConsumerOptions{
-    Topic:            topicName,
-    SubscriptionName: "subName",
-    Type:             pulsar.Shared,
-})
-if err != nil {
-    log.Fatal(err)
-}
-defer consumer.Close()
-
-ID, err := producer.Send(context.Background(), &pulsar.ProducerMessage{
-    Payload:      []byte("test"),
-    DeliverAfter: 3 * time.Second,
-})
-if err != nil {
-    log.Fatal(err)
-}
-fmt.Println(ID)
-
-// Within the first second the message is not yet deliverable, so this times out.
-ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
-if _, err := consumer.Receive(ctx); err != nil {
-    fmt.Println("No message within 1s, as expected:", err)
-}
-cancel()
-
-// After the 3s delay elapses, the message is delivered.
-ctx, cancel = context.WithTimeout(context.Background(), 5*time.Second)
-msg, err := consumer.Receive(ctx)
-if err != nil {
-    log.Fatal(err)
-}
-fmt.Println(string(msg.Payload()))
-cancel()
-
-```
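-
-Several of the options in the table in the next section can be combined on `ProducerOptions`. For instance, a producer that tunes batching and compression might be configured like the following sketch (the values are illustrative only):
-
-```go
-
-producer, err := client.CreateProducer(pulsar.ProducerOptions{
-    Topic:                   "my-topic",
-    Name:                    "tuned-producer",
-    SendTimeout:             10 * time.Second,       // fail sends not acked within 10s
-    CompressionType:         pulsar.LZ4,             // compress message payloads
-    BatchingMaxPublishDelay: 10 * time.Millisecond,  // flush a batch at least every 10ms
-    BatchingMaxMessages:     500,                    // or once 500 messages accumulate
-})
-if err != nil {
-    log.Fatal(err)
-}
-defer producer.Close()
-
-```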
-
-### Producer configuration
-
- Name | Description | Default
-| :-------- | :---------- |:---------- |
-| Topic | Topic specifies the topic this producer will publish messages on. This argument is required when constructing the producer. | |
-| Name | Name specifies a name for the producer. If not assigned, the system generates a globally unique name which can be accessed with `Producer.ProducerName()`. | |
-| Properties | Properties attach a set of application defined properties to the producer. These properties will be visible in the topic stats | |
-| SendTimeout | SendTimeout sets the timeout for a message that is not acknowledged by the server | 30s |
-| DisableBlockIfQueueFull | DisableBlockIfQueueFull controls whether Send and SendAsync block if the producer's message queue is full | false |
-| MaxPendingMessages | MaxPendingMessages sets the max size of the queue holding the messages pending to receive an acknowledgment from the broker. | |
-| HashingScheme | HashingScheme changes the `HashingScheme` used to choose the partition on which to publish a particular message. | JavaStringHash |
-| CompressionType | CompressionType sets the compression type for the producer. | not compressed |
-| CompressionLevel | Define the desired compression level. Options: Default, Faster and Better | Default |
-| MessageRouter | MessageRouter sets a custom message routing policy by passing an implementation of MessageRouter | |
-| DisableBatching | DisableBatching controls whether automatic batching of messages is enabled for the producer. | false |
-| BatchingMaxPublishDelay | BatchingMaxPublishDelay sets the time period within which the messages sent will be batched | 1ms |
-| BatchingMaxMessages | BatchingMaxMessages sets the maximum number of messages permitted in a batch. | 1000 |
-| BatchingMaxSize | BatchingMaxSize sets the maximum number of bytes permitted in a batch. | 128KB |
-| Schema | Schema sets a custom schema type by passing an implementation of `Schema` | bytes[] |
-| Interceptors | A chain of interceptors. These interceptors are called at some points defined in the `ProducerInterceptor` interface. | None |
-| MaxReconnectToBroker | MaxReconnectToBroker sets the maximum retry number of reconnectToBroker | unlimited |
-| BatcherBuilderType | BatcherBuilderType sets the batch builder type. This is used to create a batch container when batching is enabled. Options: DefaultBatchBuilder and KeyBasedBatchBuilder | DefaultBatchBuilder |
-
-## Consumers
-
-Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on those topics. You can [configure](#consumer-configuration) Go consumers using a `ConsumerOptions` object. Here's a basic example:
-
-```go
-
-consumer, err := client.Subscribe(pulsar.ConsumerOptions{
-    Topic:            "topic-1",
-    SubscriptionName: "my-sub",
-    Type:             pulsar.Shared,
-})
-if err != nil {
-    log.Fatal(err)
-}
-defer consumer.Close()
-
-for i := 0; i < 10; i++ {
-    msg, err := consumer.Receive(context.Background())
-    if err != nil {
-        log.Fatal(err)
-    }
-
-    fmt.Printf("Received message msgId: %#v -- content: '%s'\n",
-        msg.ID(), string(msg.Payload()))
-
-    consumer.Ack(msg)
-}
-
-if err := consumer.Unsubscribe(); err != nil {
-    log.Fatal(err)
-}
-
-```
-
-### Consumer operations
-
-Pulsar Go consumers have the following methods available:
-
-Method | Description | Return type
-:------|:------------|:-----------
-`Subscription()` | Returns the consumer's subscription name | `string`
-`Unsubscribe()` | Unsubscribes the consumer from the assigned topic. Returns an error if the unsubscribe operation is somehow unsuccessful. | `error`
-`Receive(context.Context)` | Receives a single message from the topic. This method blocks until a message is available. | `(Message, error)`
-`Chan()` | Chan returns a channel from which to consume messages. | `<-chan ConsumerMessage`
-`Ack(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) |
-`AckID(MessageID)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID |
-`ReconsumeLater(msg Message, delay time.Duration)` | ReconsumeLater marks a message for redelivery after a custom delay |
-`Nack(Message)` | Acknowledges the failure to process a single message. |
-`NackID(MessageID)` | Acknowledges the failure to process a single message. |
-`Seek(msgID MessageID)` | Reset the subscription associated with this consumer to a specific message id. The message id can either be a specific message or represent the first or last messages in the topic. | `error`
-`SeekByTime(time time.Time)` | Reset the subscription associated with this consumer to a specific message publish time. | `error`
-`Close()` | Closes the consumer, disabling its ability to receive messages from the broker |
-`Name()` | Name returns the name of the consumer | `string`
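-
-For example, `Nack` pairs with `Ack` in a receive loop to signal processing failures so that the broker redelivers the message after the negative-ack delay. A sketch using the `consumer` from above (`process` is a placeholder for your own handler):
-
-```go
-
-for {
-    msg, err := consumer.Receive(context.Background())
-    if err != nil {
-        log.Fatal(err)
-    }
-
-    if err := process(msg.Payload()); err != nil {
-        // Processing failed: request redelivery after the NackRedeliveryDelay.
-        consumer.Nack(msg)
-        continue
-    }
-
-    consumer.Ack(msg)
-}
-
-```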
-
-### Receive example
-
-#### How to use regex consumer
-
-```go
-
-client, err := pulsar.NewClient(pulsar.ClientOptions{
-    URL: "pulsar://localhost:6650",
-})
-if err != nil {
-    log.Fatal(err)
-}
-defer client.Close()
-
-p, err := client.CreateProducer(pulsar.ProducerOptions{
-    Topic:           topicInRegex,
-    DisableBatching: true,
-})
-if err != nil {
-    log.Fatal(err)
-}
-defer p.Close()
-
-topicsPattern := fmt.Sprintf("persistent://%s/foo.*", namespace)
-opts := pulsar.ConsumerOptions{
-    TopicsPattern:    topicsPattern,
-    SubscriptionName: "regex-sub",
-}
-consumer, err := client.Subscribe(opts)
-if err != nil {
-    log.Fatal(err)
-}
-defer consumer.Close()
-
-```
-
-#### How to use multi topics Consumer
-
-```go
-
-func newTopicName() string {
-    return fmt.Sprintf("my-topic-%v", time.Now().Nanosecond())
-}
-
-
-topic1 := "topic-1"
-topic2 := "topic-2"
-
-client, err := pulsar.NewClient(pulsar.ClientOptions{
-    URL: "pulsar://localhost:6650",
-})
-if err != nil {
-    log.Fatal(err)
-}
-topics := []string{topic1, topic2}
-consumer, err := client.Subscribe(pulsar.ConsumerOptions{
-    Topics:           topics,
-    SubscriptionName: "multi-topic-sub",
-})
-if err != nil {
-    log.Fatal(err)
-}
-defer consumer.Close()
-
-```
-
-#### How to use consumer listener
-
-```go
-
-import (
-    "fmt"
-    "log"
-
-    "github.com/apache/pulsar-client-go/pulsar"
-)
-
-func main() {
-    client, err := pulsar.NewClient(pulsar.ClientOptions{URL: "pulsar://localhost:6650"})
-    if err != nil {
-        log.Fatal(err)
-    }
-
-    defer client.Close()
-
-    channel := make(chan pulsar.ConsumerMessage, 100)
-
-    options := pulsar.ConsumerOptions{
-        Topic:            "topic-1",
-        SubscriptionName: "my-subscription",
-        Type:             pulsar.Shared,
-    }
-
-    options.MessageChannel = channel
-
-    consumer, err := client.Subscribe(options)
-    if err != nil {
-        log.Fatal(err)
-    }
-
-    defer consumer.Close()
-
-    // Receive messages from the channel. The channel returns a struct which contains the message and the consumer
-    // from which the message was received. That is not necessary here, since we have a single consumer, but the
-    // channel could be shared across multiple consumers as well.
-    for cm := range channel {
-        msg := cm.Message
-        fmt.Printf("Received message msgId: %v -- content: '%s'\n",
-            msg.ID(), string(msg.Payload()))
-
-        consumer.Ack(msg)
-    }
-}
-
-```
-
-#### How to use consumer receive timeout
-
-```go
-
-client, err := pulsar.NewClient(pulsar.ClientOptions{
-    URL: "pulsar://localhost:6650",
-})
-if err != nil {
-    log.Fatal(err)
-}
-defer client.Close()
-
-topic := "test-topic-with-no-messages"
-ctx, cancel := context.WithTimeout(context.Background(), 500*time.Millisecond)
-defer cancel()
-
-// create consumer
-consumer, err := client.Subscribe(pulsar.ConsumerOptions{
-    Topic:            topic,
-    SubscriptionName: "my-sub1",
-    Type:             pulsar.Shared,
-})
-if err != nil {
-    log.Fatal(err)
-}
-defer consumer.Close()
-
-// Receive returns an error once the context times out.
-msg, err := consumer.Receive(ctx)
-if err != nil {
-    log.Fatal(err)
-}
-fmt.Println(string(msg.Payload()))
-
-```
-
-#### How to use schema in consumer
-
-```go
-
-type testJSON struct {
-    ID   int    `json:"id"`
-    Name string `json:"name"`
-}
-
-```
-
-```go
-
-var (
-    exampleSchemaDef = "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," +
-        "\"fields\":[{\"name\":\"ID\",\"type\":\"int\"},{\"name\":\"Name\",\"type\":\"string\"}]}"
-)
-
-```
-
-```go
-
-client, err := pulsar.NewClient(pulsar.ClientOptions{
-    URL: "pulsar://localhost:6650",
-})
-if err != nil {
-    log.Fatal(err)
-}
-defer client.Close()
-
-var s testJSON
-
-consumerJS := pulsar.NewJSONSchema(exampleSchemaDef, nil)
-consumer, err := client.Subscribe(pulsar.ConsumerOptions{
-    Topic:                       "jsonTopic",
-    SubscriptionName:            "sub-1",
-    Schema:                      consumerJS,
-    SubscriptionInitialPosition: pulsar.SubscriptionPositionEarliest,
-})
-if err != nil {
-    log.Fatal(err)
-}
-defer consumer.Close()
-
-msg, err := consumer.Receive(context.Background())
-if err != nil {
-    log.Fatal(err)
-}
-if err := msg.GetSchemaValue(&s); err != nil {
-    log.Fatal(err)
-}
-
-```
-
-### Consumer configuration
-
- Name | Description | Default
-| :-------- | :---------- |:---------- |
-| Topic | Topic specifies the topic this consumer will subscribe to. Either a topic, a list of topics, or a topics pattern is required when subscribing | |
-| Topics | Specify a list of topics this consumer will subscribe on. Either a topic, a list of topics, or a topics pattern is required when subscribing | |
-| TopicsPattern | Specify a regular expression to subscribe to multiple topics under the same namespace. Either a topic, a list of topics, or a topics pattern is required when subscribing | |
-| AutoDiscoveryPeriod | Specify the interval in which to poll for new partitions or new topics if using a TopicsPattern. | |
-| SubscriptionName | Specify the subscription name for this consumer. This argument is required when subscribing | |
-| Name | Set the consumer name | |
-| Properties | Properties attach a set of application defined properties to the consumer. These properties will be visible in the topic stats | |
-| Type | Select the subscription type to be used when subscribing to the topic. | Exclusive |
-| SubscriptionInitialPosition | Initial position at which the cursor will be set when subscribing | Latest |
-| DLQ | Configuration for Dead Letter Queue consumer policy. | no DLQ |
-| MessageChannel | Sets a `MessageChannel` for the consumer. When a message is received, it will be pushed to the channel for consumption | |
-| ReceiverQueueSize | Sets the size of the consumer receive queue. | 1000 |
-| NackRedeliveryDelay | The delay after which to redeliver the messages that failed to be processed | 1min |
-| ReadCompacted | If enabled, the consumer will read messages from the compacted topic rather than reading the full message backlog of the topic | false |
-| ReplicateSubscriptionState | Mark the subscription as replicated to keep it in sync across clusters | false |
-| KeySharedPolicy | Configuration for Key Shared consumer policy. | |
-| RetryEnable | Auto retry sending messages to the default filled DLQPolicy topics | false |
-| Interceptors | A chain of interceptors. These interceptors are called at some points defined in the `ConsumerInterceptor` interface. | |
-| MaxReconnectToBroker | MaxReconnectToBroker sets the maximum retry number of reconnectToBroker. | unlimited |
-| Schema | Schema sets a custom schema type by passing an implementation of `Schema` | bytes[] |
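-
-For example, the `DLQ` and `RetryEnable` options from the table can be combined so that messages that repeatedly fail processing are routed to a dead-letter topic. A sketch (the values are illustrative, and the `DLQPolicy` field names follow the client's API):
-
-```go
-
-consumer, err := client.Subscribe(pulsar.ConsumerOptions{
-    Topic:               "my-topic",
-    SubscriptionName:    "my-sub",
-    Type:                pulsar.Shared,
-    NackRedeliveryDelay: 10 * time.Second, // redeliver nacked messages after 10s
-    RetryEnable:         true,
-    DLQ: &pulsar.DLQPolicy{
-        MaxDeliveries:   3,              // give up after 3 delivery attempts
-        DeadLetterTopic: "my-topic-DLQ", // where exhausted messages land
-    },
-})
-if err != nil {
-    log.Fatal(err)
-}
-defer consumer.Close()
-
-```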
-
-## Readers
-
-Pulsar readers process messages from Pulsar topics. Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recent unacked message). You can [configure](#reader-configuration) Go readers using a `ReaderOptions` object. Here's an example:
-
-```go
-
-reader, err := client.CreateReader(pulsar.ReaderOptions{
-    Topic:          "topic-1",
-    StartMessageID: pulsar.EarliestMessageID(),
-})
-if err != nil {
-    log.Fatal(err)
-}
-defer reader.Close()
-
-```
-
-### Reader operations
-
-Pulsar Go readers have the following methods available:
-
-Method | Description | Return type
-:------|:------------|:-----------
-`Topic()` | Returns the reader's [topic](reference-terminology.md#topic) | `string`
-`Next(context.Context)` | Receives the next message on the topic (analogous to the `Receive` method for [consumers](#consumer-operations)). This method blocks until a message is available. | `(Message, error)`
-`HasNext()` | Check if there is any message available to read from the current position | (bool, error)
-`Close()` | Closes the reader, disabling its ability to receive messages from the broker | `error`
-`Seek(MessageID)` | Reset the subscription associated with this reader to a specific message ID | `error`
-`SeekByTime(time time.Time)` | Reset the subscription associated with this reader to a specific message publish time | `error`
-
-### Reader example
-
-#### How to use reader to read 'next' message
-
-Here's an example usage of a Go reader that uses the `Next()` method to process incoming messages:
-
-```go
-
-import (
-    "context"
-    "fmt"
-    "log"
-
-    "github.com/apache/pulsar-client-go/pulsar"
-)
-
-func main() {
-    client, err := pulsar.NewClient(pulsar.ClientOptions{URL: "pulsar://localhost:6650"})
-    if err != nil {
-        log.Fatal(err)
-    }
-
-    defer client.Close()
-
-    reader, err := client.CreateReader(pulsar.ReaderOptions{
-        Topic:          "topic-1",
-        StartMessageID: pulsar.EarliestMessageID(),
-    })
-    if err != nil {
-        log.Fatal(err)
-    }
-    defer reader.Close()
-
-    for reader.HasNext() {
-        msg, err := reader.Next(context.Background())
-        if err != nil {
-            log.Fatal(err)
-        }
-
-        fmt.Printf("Received message msgId: %#v -- content: '%s'\n",
-            msg.ID(), string(msg.Payload()))
-    }
-}
-
-```
-
-In the example above, the reader begins reading from the earliest available message (specified by `pulsar.EarliestMessageID()`). The reader can also begin reading from the latest message (`pulsar.LatestMessageID()`) or some other message ID specified by bytes using the `DeserializeMessageID` function, which takes a byte array and returns a `MessageID` object. Here's an example:
-
-```go
-
-var lastSavedId []byte // read the last saved message ID from an external store
-
-msgID, err := pulsar.DeserializeMessageID(lastSavedId)
-if err != nil {
-    log.Fatal(err)
-}
-
-reader, err := client.CreateReader(pulsar.ReaderOptions{
-    Topic:          "my-golang-topic",
-    StartMessageID: msgID,
-})
-
-```
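-
-Readers can also be repositioned after creation. For instance, `SeekByTime` (listed in the operations table above) rewinds the reader to the first message published at or after a given time. A small sketch:
-
-```go
-
-// Rewind the reader to messages published in the last hour.
-if err := reader.SeekByTime(time.Now().Add(-1 * time.Hour)); err != nil {
-    log.Fatal(err)
-}
-
-```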
-
-#### How to use reader to read specific message
-
-```go
-
-client, err := pulsar.NewClient(pulsar.ClientOptions{
-    URL: "pulsar://localhost:6650",
-})
-if err != nil {
-    log.Fatal(err)
-}
-defer client.Close()
-
-topic := "topic-1"
-ctx := context.Background()
-
-// create producer
-producer, err := client.CreateProducer(pulsar.ProducerOptions{
-    Topic:           topic,
-    DisableBatching: true,
-})
-if err != nil {
-    log.Fatal(err)
-}
-defer producer.Close()
-
-// send 10 messages
-msgIDs := [10]pulsar.MessageID{}
-for i := 0; i < 10; i++ {
-    msgID, err := producer.Send(ctx, &pulsar.ProducerMessage{
-        Payload: []byte(fmt.Sprintf("hello-%d", i)),
-    })
-    if err != nil {
-        log.Fatal(err)
-    }
-    msgIDs[i] = msgID
-}
-
-// create reader on 5th message (not included)
-reader, err := client.CreateReader(pulsar.ReaderOptions{
-    Topic:          topic,
-    StartMessageID: msgIDs[4],
-})
-if err != nil {
-    log.Fatal(err)
-}
-defer reader.Close()
-
-// receive the remaining 5 messages
-for i := 5; i < 10; i++ {
-    msg, err := reader.Next(context.Background())
-    if err != nil {
-        log.Fatal(err)
-    }
-    fmt.Println(string(msg.Payload()))
-}
-
-// create reader on 5th message (included)
-readerInclusive, err := client.CreateReader(pulsar.ReaderOptions{
-    Topic:                   topic,
-    StartMessageID:          msgIDs[4],
-    StartMessageIDInclusive: true,
-})
-if err != nil {
-    log.Fatal(err)
-}
-defer readerInclusive.Close()
-
-```
-
-### Reader configuration
-
- Name | Description | Default
-| :-------- | :---------- |:---------- |
-| Topic | Topic specifies the topic this reader will read from. This argument is required when constructing the reader. | |
-| Name | Name sets the reader name. | |
-| Properties | Attach a set of application defined properties to the reader. These properties will be visible in the topic stats | |
-| StartMessageID | StartMessageID initial reader positioning is done by specifying a message id. | |
-| StartMessageIDInclusive | If true, the reader will start at the `StartMessageID`, included. Default is `false` and the reader will start from the "next" message | false |
-| MessageChannel | MessageChannel sets a `MessageChannel` for the consumer. When a message is received, it will be pushed to the channel for consumption | |
-| ReceiverQueueSize | ReceiverQueueSize sets the size of the consumer receive queue. | 1000 |
-| SubscriptionRolePrefix | SubscriptionRolePrefix sets the subscription role prefix. | "reader" |
-| ReadCompacted | If enabled, the reader will read messages from the compacted topic rather than reading the full message backlog of the topic. ReadCompacted can only be enabled when reading from a persistent topic. | false |
-
-## Messages
-
-The Pulsar Go client provides a `ProducerMessage` interface that you can use to construct messages to produce on Pulsar topics. Here's an example message:
-
-```go
-
-msg := pulsar.ProducerMessage{
-    Payload: []byte("Here is some message data"),
-    Key: "message-key",
-    Properties: map[string]string{
-        "foo": "bar",
-    },
-    EventTime: time.Now(),
-    ReplicationClusters: []string{"cluster1", "cluster3"},
-}
-
-if _, err := producer.Send(context.Background(), &msg); err != nil {
-    log.Fatalf("Could not publish message due to: %v", err)
-}
-
-```
-
-The following parameters are available for `ProducerMessage` objects:
-
-Parameter | Description
-:---------|:-----------
-`Payload` | The actual data payload of the message
-`Value` | Value and payload are mutually exclusive; `Value interface{}` is for schema messages.
-`Key` | The optional key associated with the message (particularly useful for things like topic compaction)
-`OrderingKey` | OrderingKey sets the ordering key of the message.
-`Properties` | A key-value map (both keys and values must be strings) for any application-specific metadata attached to the message
-`EventTime` | The timestamp associated with the message
-`ReplicationClusters` | The clusters to which this message will be replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default.
-`SequenceID` | Set the sequence id to assign to the current message
-`DeliverAfter` | Request to deliver the message only after the specified relative delay
-`DeliverAt` | Deliver the message only at or after the specified absolute timestamp
-
-## TLS encryption and authentication
-
-In order to use [TLS encryption](security-tls-transport.md), you'll need to configure your client to do so:
-
- * Use `pulsar+ssl` URL type
- * Set `TLSTrustCertsFilePath` to the path to the TLS certs used by your client and the Pulsar broker
- * Configure `Authentication` option
-
-Here's an example:
-
-```go
-
-opts := pulsar.ClientOptions{
-    URL: "pulsar+ssl://my-cluster.com:6651",
-    TLSTrustCertsFilePath: "/path/to/certs/my-cert.csr",
-    Authentication: pulsar.NewAuthenticationTLS("my-cert.pem", "my-key.pem"),
-}
-
-```
-
-## OAuth2 authentication
-
-To use [OAuth2 authentication](security-oauth2.md), you'll need to configure your client with an OAuth2 authentication provider. This example shows how to configure OAuth2 authentication.
-
-```go
-
-oauth := pulsar.NewAuthenticationOAuth2(map[string]string{
-    "type":       "client_credentials",
-    "issuerUrl":  "https://dev-kt-aa9ne.us.auth0.com",
-    "audience":   "https://dev-kt-aa9ne.us.auth0.com/api/v2/",
-    "privateKey": "/path/to/privateKey",
-    "clientId":   "0Xx...Yyxeny",
-})
-client, err := pulsar.NewClient(pulsar.ClientOptions{
-    URL:            "pulsar://my-cluster:6650",
-    Authentication: oauth,
-})
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/client-libraries-java.md b/site2/website/versioned_docs/version-2.8.0-deprecated/client-libraries-java.md
deleted file mode 100644
index 5ee69dd5a8e7cd..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/client-libraries-java.md
+++ /dev/null
@@ -1,1035 +0,0 @@
----
-id: client-libraries-java
-title: Pulsar Java client
-sidebar_label: "Java"
-original_id: client-libraries-java
----
-
-You can use the Pulsar Java client to create Java [producers](#producer), [consumers](#consumer), and [readers](#reader-interface) of messages and to perform [administrative tasks](admin-api-overview.md). The current version of the Java client is **@pulsar:version@**.
-
-All the methods in [producer](#producer), [consumer](#consumer), and [reader](#reader) of a Java client are thread-safe.
-
-Javadoc for the Pulsar client is divided into two domains by package as follows.
-
-Package | Description | Maven Artifact
-:-------|:------------|:--------------
-[`org.apache.pulsar.client.api`](/api/client) | The producer and consumer API | [org.apache.pulsar:pulsar-client:@pulsar:version@](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client%7C@pulsar:version@%7Cjar)
-[`org.apache.pulsar.client.admin`](/api/admin) | The Java [admin API](admin-api-overview.md) | [org.apache.pulsar:pulsar-client-admin:@pulsar:version@](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client-admin%7C@pulsar:version@%7Cjar)
-`org.apache.pulsar.client.all` | Includes both `pulsar-client` and `pulsar-client-admin`.<br /><br />Both `pulsar-client` and `pulsar-client-admin` are shaded packages and they shade dependencies independently. Consequently, the applications using both `pulsar-client` and `pulsar-client-admin` have redundant shaded classes. It would be troublesome if you introduce new dependencies but forget to update shading rules.<br /><br />In this case, you can use `pulsar-client-all`, which shades dependencies only one time and reduces the size of dependencies. | [org.apache.pulsar:pulsar-client-all:@pulsar:version@](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client-all%7C@pulsar:version@%7Cjar)
-
-This document focuses only on the client API for producing and consuming messages on Pulsar topics. For how to use the Java admin client, see [Pulsar admin interface](admin-api-overview.md).
-
-## Installation
-
-The latest version of the Pulsar Java client library is available via [Maven Central](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client%7C@pulsar:version@%7Cjar). To use the latest version, add the `pulsar-client` library to your build configuration.
-
-### Maven
-
-If you use Maven, add the following information to the `pom.xml` file.
-
-```xml
-
-<!-- in your <properties> block -->
-<pulsar.version>@pulsar:version@</pulsar.version>
-
-<!-- in your <dependencies> block -->
-<dependency>
-  <groupId>org.apache.pulsar</groupId>
-  <artifactId>pulsar-client</artifactId>
-  <version>${pulsar.version}</version>
-</dependency>
-
-```
-
-### Gradle
-
-If you use Gradle, add the following information to the `build.gradle` file.
-
-```groovy
-
-def pulsarVersion = '@pulsar:version@'
-
-dependencies {
-    compile group: 'org.apache.pulsar', name: 'pulsar-client', version: pulsarVersion
-}
-
-```
-
-## Connection URLs
-
-To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL.
-
-You can assign Pulsar protocol URLs to specific clusters and use the `pulsar` scheme. The default port is `6650`. The following is an example of `localhost`.
-
-```http
-
-pulsar://localhost:6650
-
-```
-
-If you have multiple brokers, the URL is as follows.
-
-```http
-
-pulsar://localhost:6650,localhost:6651,localhost:6652
-
-```
-
-A URL for a production Pulsar cluster is as follows.
-
-```http
-
-pulsar://pulsar.us-west.example.com:6650
-
-```
-
-If you use [TLS](security-tls-authentication.md) authentication, the URL is as follows.
-
-```http
-
-pulsar+ssl://pulsar.us-west.example.com:6651
-
-```
-
-## Client
-
-You can instantiate a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object using just a URL for the target Pulsar [cluster](reference-terminology.md#cluster) like this:
-
-```java
-
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar://localhost:6650")
-        .build();
-
-```
-
-If you have multiple brokers, you can initiate a PulsarClient like this:
-
-```java
-
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar://localhost:6650,localhost:6651,localhost:6652")
-        .build();
-
-```
-
-> ### Default broker URLs for standalone clusters
-> If you run a cluster in [standalone mode](getting-started-standalone.md), the broker is available at the `pulsar://localhost:6650` URL by default.
-
-If you create a client, you can use the `loadConf` configuration. The following parameters are available in `loadConf`.
-
-| Type | Name | Description | Default |
-|---|---|---|---|
-String | `serviceUrl` |Service URL provider for Pulsar service | None
-String | `authPluginClassName` | Name of the authentication plugin | None
-String | `authParams` | String represents parameters for the authentication plugin<br /><br />**Example**<br />key1:val1,key2:val2 | None
-long|`operationTimeoutMs`|Operation timeout |30000
-long|`statsIntervalSeconds`|Interval between each stats info<br /><br />Stats is activated with positive `statsInterval`<br /><br />Set `statsIntervalSeconds` to 1 second at least |60
-int|`numIoThreads`| The number of threads used for handling connections to brokers | 1
-int|`numListenerThreads`|The number of threads used for handling message listeners. The listener thread pool is shared across all the consumers and readers using the "listener" model to get messages. For a given consumer, the listener is always invoked from the same thread to ensure ordering. If you want multiple threads to process a single topic, you need to create a [`shared`](https://pulsar.apache.org/docs/en/next/concepts-messaging/#shared) subscription and multiple consumers for this subscription. This does not ensure ordering.| 1
-boolean|`useTcpNoDelay`|Whether to use the TCP no-delay flag on the connection to disable the Nagle algorithm |true
-boolean |`useTls` |Whether to use TLS encryption on the connection| false
-string | `tlsTrustCertsFilePath` |Path to the trusted TLS certificate file|None
-boolean|`tlsAllowInsecureConnection`|Whether the Pulsar client accepts untrusted TLS certificates from the broker | false
-boolean | `tlsHostnameVerificationEnable` | Whether to enable TLS hostname verification|false
-int|`concurrentLookupRequest`|The number of concurrent lookup requests allowed to send on each broker connection to prevent overload on the broker|5000
-int|`maxLookupRequest`|The maximum number of lookup requests allowed on each broker connection to prevent overload on the broker | 50000
-int|`maxNumberOfRejectedRequestPerConnection`|The maximum number of rejected requests of a broker in a certain time frame (30 seconds) after the current connection is closed and the client creates a new connection to connect to a different broker|50
-int|`keepAliveIntervalSeconds`|Seconds of keep-alive interval for each client broker connection|30
-int|`connectionTimeoutMs`|Duration of waiting for a connection to a broker to be established<br /><br />If the duration passes without a response from a broker, the connection attempt is dropped|10000
-int|`requestTimeoutMs`|Maximum duration for completing a request |60000
-int|`defaultBackoffIntervalNanos`| Default duration for a backoff interval | TimeUnit.MILLISECONDS.toNanos(100)
-long|`maxBackoffIntervalNanos`|Maximum duration for a backoff interval|TimeUnit.SECONDS.toNanos(30)
-
-Check out the Javadoc for the {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} class for a full list of configurable parameters.
-
-> In addition to client-level configuration, you can also apply [producer](#configuring-producers) and [consumer](#configuring-consumers) specific configuration as described in sections below.
-
-## Producer
-
-In Pulsar, producers write messages to topics. Once you've instantiated a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object (as in the section [above](#client-configuration)), you can create a {@inject: javadoc:Producer:/client/org/apache/pulsar/client/api/Producer} for a specific Pulsar [topic](reference-terminology.md#topic).
-
-```java
-
-Producer<byte[]> producer = client.newProducer()
-        .topic("my-topic")
-        .create();
-
-// You can then send messages to the broker and topic you specified:
-producer.send("My message".getBytes());
-
-```
-
-By default, producers produce messages that consist of byte arrays. You can produce different types by specifying a message [schema](#schemas).
-
-```java
-
-Producer<String> stringProducer = client.newProducer(Schema.STRING)
-        .topic("my-topic")
-        .create();
-stringProducer.send("My message");
-
-```
-
-> Make sure that you close your producers, consumers, and clients when you do not need them.
-
-> ```java
-> 
-> producer.close();
-> consumer.close();
-> client.close();
-> 
-> ```
-
-> Close operations can also be asynchronous:
-
-> ```java
-> 
-> producer.closeAsync()
->    .thenRun(() -> System.out.println("Producer closed"))
->    .exceptionally((ex) -> {
->        System.err.println("Failed to close producer: " + ex);
->        return null;
->    });
-> 
-> ```
-
-### Configure producer
-
-If you instantiate a `Producer` object by specifying only a topic name as in the example above, the producer uses the default configuration.
-
-If you create a producer, you can use the `loadConf` configuration. The following parameters are available in `loadConf`.
-
-| Type | Name | Description | Default |
-|---|---|---|---|
-String| `topicName`| Topic name| null
-String|`producerName`|Producer name| null
-long|`sendTimeoutMs`|Message send timeout in ms.<br /><br />If a message is not acknowledged by a server before the `sendTimeout` expires, an error occurs.|30000
-boolean|`blockIfQueueFull`|If it is set to `true`, when the outgoing message queue is full, the `Send` and `SendAsync` methods of producer block, rather than failing and throwing errors.<br /><br />If it is set to `false`, when the outgoing message queue is full, the `Send` and `SendAsync` methods of producer fail and `ProducerQueueIsFullError` exceptions occur.<br /><br />The `MaxPendingMessages` parameter determines the size of the outgoing message queue.|false
-int|`maxPendingMessages`|The maximum size of a queue holding pending messages.<br /><br />For example, a message waiting to receive an acknowledgment from a [broker](reference-terminology.md#broker).<br /><br />By default, when the queue is full, all calls to the `Send` and `SendAsync` methods fail **unless** you set `BlockIfQueueFull` to `true`.|1000
-int|`maxPendingMessagesAcrossPartitions`|The maximum number of pending messages across partitions.<br /><br />Use the setting to lower the max pending messages for each partition ({@link #setMaxPendingMessages(int)}) if the total number exceeds the configured value.|50000
-MessageRoutingMode|`messageRoutingMode`|Message routing logic for producers on [partitioned topics](concepts-architecture-overview.md#partitioned-topics).<br /><br />Apply the logic only when setting no key on messages.<br /><br />Available options are as follows:<br /><li>`pulsar.RoundRobinDistribution`: round robin</li><li>`pulsar.UseSinglePartition`: publish all messages to a single partition</li><li>`pulsar.CustomPartition`: a custom partitioning scheme</li>|`pulsar.RoundRobinDistribution`
-HashingScheme|`hashingScheme`|Hashing function determining the partition where you publish a particular message (**partitioned topics only**).<br /><br />Available options are as follows:<br /><li>`pulsar.JavaStringHash`: the equivalent of `String.hashCode()` in Java</li><li>`pulsar.Murmur3_32Hash`: applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function</li><li>`pulsar.BoostHash`: applies the hashing function from C++'s [Boost](https://www.boost.org/doc/libs/1_62_0/doc/html/hash.html) library</li>|`HashingScheme.JavaStringHash`
-ProducerCryptoFailureAction|`cryptoFailureAction`|The action the producer takes when encryption fails.<br /><br /><li>**FAIL**: if encryption fails, unencrypted messages fail to send.</li><li>**SEND**: if encryption fails, unencrypted messages are sent.</li>|`ProducerCryptoFailureAction.FAIL`
-long|`batchingMaxPublishDelayMicros`|Batching time period of sending messages.|TimeUnit.MILLISECONDS.toMicros(1)
-int|`batchingMaxMessages`|The maximum number of messages permitted in a batch.|1000
-boolean|`batchingEnabled`|Enable batching of messages. |true
-CompressionType|`compressionType`|Message data compression type used by a producer.<br /><br />Available options:<li>[`LZ4`](https://github.com/lz4/lz4)</li><li>[`ZLIB`](https://zlib.net/)</li><li>[`ZSTD`](https://facebook.github.io/zstd/)</li><li>[`SNAPPY`](https://google.github.io/snappy/)</li>| No compression
-
-You can configure parameters if you do not want to use the default configuration.
-
-For a full list, see the Javadoc for the {@inject: javadoc:ProducerBuilder:/client/org/apache/pulsar/client/api/ProducerBuilder} class. The following is an example.
-
-```java
-
-Producer producer = client.newProducer()
-    .topic("my-topic")
-    .batchingMaxPublishDelay(10, TimeUnit.MILLISECONDS)
-    .sendTimeout(10, TimeUnit.SECONDS)
-    .blockIfQueueFull(true)
-    .create();
-
-```
-
-### Message routing
-
-When using partitioned topics, you can specify the routing mode whenever you publish messages using a producer. For more information on specifying a routing mode using the Java client, see the [Partitioned Topics](cookbooks-partitioned.md) cookbook.
-
-### Async send
-
-You can publish messages [asynchronously](concepts-messaging.md#send-modes) using the Java client. With async send, the producer puts the message in a blocking queue and returns immediately. Then the client library sends the message to the broker in the background. If the queue is full (max size configurable), the producer is blocked or fails immediately when calling the API, depending on arguments passed to the producer.
-
-The following is an example.
-
-```java
-
-producer.sendAsync("my-async-message".getBytes()).thenAccept(msgId -> {
-    System.out.println("Message with ID " + msgId + " successfully sent");
-});
-
-```
-
-As you can see from the example above, async send operations return a {@inject: javadoc:MessageId:/client/org/apache/pulsar/client/api/MessageId} wrapped in a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture).
-
-### Configure messages
-
-In addition to a value, you can set additional items on a given message:
-
-```java
-
-producer.newMessage()
-    .key("my-message-key")
-    .value("my-async-message".getBytes())
-    .property("my-key", "my-value")
-    .property("my-other-key", "my-other-value")
-    .send();
-
-```
-
-You can terminate the builder chain with `sendAsync()` and get a future return.
-
-## Consumer
-
-In Pulsar, consumers subscribe to topics and handle messages that producers publish to those topics. You can instantiate a new [consumer](reference-terminology.md#consumer) by first instantiating a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object and passing it a URL for a Pulsar broker (as [above](#client-configuration)).
-
-Once you've instantiated a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object, you can create a {@inject: javadoc:Consumer:/client/org/apache/pulsar/client/api/Consumer} by specifying a [topic](reference-terminology.md#topic) and a [subscription](concepts-messaging.md#subscription-modes).
-
-```java
-
-Consumer consumer = client.newConsumer()
-        .topic("my-topic")
-        .subscriptionName("my-subscription")
-        .subscribe();
-
-```
-
-The `subscribe` method will auto subscribe the consumer to the specified topic and subscription. One way to make the consumer listen on the topic is to set up a `while` loop. In this example loop, the consumer listens for messages, prints the contents of any received message, and then [acknowledges](reference-terminology.md#acknowledgment-ack) that the message has been processed. If the processing logic fails, you can use [negative acknowledgement](reference-terminology.md#acknowledgment-ack) to redeliver the message later.
-
-```java
-
-while (true) {
-    // Wait for a message
-    Message msg = consumer.receive();
-
-    try {
-        // Do something with the message
-        System.out.println("Message received: " + new String(msg.getData()));
-
-        // Acknowledge the message so that it can be deleted by the message broker
-        consumer.acknowledge(msg);
-    } catch (Exception e) {
-        // Message failed to process, redeliver later
-        consumer.negativeAcknowledge(msg);
-    }
-}
-
-```
-
-If you don't want to block your main thread and rather listen constantly for new messages, consider using a `MessageListener`.
-
-```java
-
-MessageListener myMessageListener = (consumer, msg) -> {
-    try {
-        System.out.println("Message received: " + new String(msg.getData()));
-        consumer.acknowledge(msg);
-    } catch (Exception e) {
-        consumer.negativeAcknowledge(msg);
-    }
-};
-
-Consumer consumer = client.newConsumer()
-        .topic("my-topic")
-        .subscriptionName("my-subscription")
-        .messageListener(myMessageListener)
-        .subscribe();
-
-```
-
-### Configure consumer
-
-If you instantiate a `Consumer` object by specifying only a topic and subscription name as in the example above, the consumer uses the default configuration.
-
-When you create a consumer, you can use the `loadConf` configuration. The following parameters are available in `loadConf`.
    Description
    | Default -|---|---|---|--- -Set<String>| `topicNames`| Topic name| Sets.newTreeSet() -Pattern| `topicsPattern`| Topic pattern |None -String| `subscriptionName`| Subscription name| None -SubscriptionType| `subscriptionType`| Subscription type

    Four subscription types are available:
  545. Exclusive
  546. Failover
  547. Shared
  548. Key_Shared
  549. |SubscriptionType.Exclusive -int | `receiverQueueSize` | Size of a consumer's receiver queue.

    For example, the number of messages accumulated by a consumer before an application calls `Receive`.

    A value higher than the default value increases consumer throughput, though at the expense of more memory utilization.| 1000 -long|`acknowledgementsGroupTimeMicros`|Group a consumer acknowledgment for a specified time.

    By default, a consumer uses 100ms grouping time to send out acknowledgments to a broker.

    Setting a group time of 0 sends out acknowledgments immediately.

    A longer ack group time is more efficient at the expense of a slight increase in message re-deliveries after a failure.|TimeUnit.MILLISECONDS.toMicros(100) -long|`negativeAckRedeliveryDelayMicros`|Delay to wait before redelivering messages that failed to be processed.

    When an application uses {@link Consumer#negativeAcknowledge(Message)}, failed messages are redelivered after a fixed timeout. |TimeUnit.MINUTES.toMicros(1) -int |`maxTotalReceiverQueueSizeAcrossPartitions`|The max total receiver queue size across partitions.

    This setting reduces the receiver queue size for individual partitions if the total receiver queue size exceeds this value.|50000 -String|`consumerName`|Consumer name|null -long|`ackTimeoutMillis`|Timeout of unacked messages|0 -long|`tickDurationMillis`|Granularity of the ack-timeout redelivery.

    Using an higher `tickDurationMillis` reduces the memory overhead to track messages when setting ack-timeout to a bigger value (for example, 1 hour).|1000 -int|`priorityLevel`|Priority level for a consumer to which a broker gives more priority while dispatching messages in Shared subscription type.

    The broker follows descending priorities. For example, 0=max-priority, 1, 2,...

    In shared subscription type, the broker **first dispatches messages to the max priority level consumers if they have permits**. Otherwise, the broker considers next priority level consumers.

    **Example 1**

    If a subscription has consumerA with `priorityLevel` 0 and consumerB with `priorityLevel` 1, then the broker **only dispatches messages to consumerA until it runs out permits** and then starts dispatching messages to consumerB.

    **Example 2**

    Consumer Priority, Level, Permits
    C1, 0, 2
    C2, 0, 1
    C3, 0, 1
    C4, 1, 2
    C5, 1, 1

    Order in which a broker dispatches messages to consumers is: C1, C2, C3, C1, C4, C5, C4.|0 -ConsumerCryptoFailureAction|`cryptoFailureAction`|Consumer should take action when it receives a message that can not be decrypted.

  550. **FAIL**: this is the default option to fail messages until crypto succeeds.

  551. **DISCARD**:silently acknowledge and not deliver message to an application.

  552. **CONSUME**: deliver encrypted messages to applications. It is the application's responsibility to decrypt the message.

    The decompression of message fails.

    If messages contain batch messages, a client is not be able to retrieve individual messages in batch.

    Delivered encrypted message contains {@link EncryptionContext} which contains encryption and compression information in it using which application can decrypt consumed message payload.
  553. |
  554. ConsumerCryptoFailureAction.FAIL
  555. -SortedMap|`properties`|A name or value property of this consumer.

    `properties` is application-defined metadata attached to a consumer.

    When you get topic stats, this metadata is associated with the consumer stats for easier identification.|new TreeMap()
-boolean|`readCompacted`|If `readCompacted` is enabled, a consumer reads messages from a compacted topic rather than reading a full message backlog of a topic.

    A consumer only sees the latest value for each key in the compacted topic, up until the point in the topic message backlog where compaction has taken place. Beyond that point, messages are sent as normal.

    `readCompacted` can only be enabled on subscriptions to persistent topics that have a single active consumer (such as failover or exclusive subscriptions).

    Attempting to enable it on subscriptions to non-persistent topics or on shared subscriptions leads to a subscription call throwing a `PulsarClientException`.|false
-SubscriptionInitialPosition|`subscriptionInitialPosition`|Initial position at which to set the cursor when subscribing to a topic for the first time.|SubscriptionInitialPosition.Latest
-int|`patternAutoDiscoveryPeriod`|Topic auto-discovery period when using a pattern for the topic's consumer.

    The default and minimum value is 1 minute.|1
-RegexSubscriptionMode|`regexSubscriptionMode`|When subscribing to a topic using a regular expression, you can pick a certain type of topics.

  * **PersistentOnly**: only subscribe to persistent topics.

  * **NonPersistentOnly**: only subscribe to non-persistent topics.

  * **AllTopics**: subscribe to both persistent and non-persistent topics.
    |RegexSubscriptionMode.PersistentOnly
-DeadLetterPolicy|`deadLetterPolicy`|Dead letter policy for consumers.

    By default, some messages may be redelivered many times, possibly without ever stopping.

    The dead letter mechanism gives messages a maximum redelivery count. **When the maximum number of redeliveries is exceeded, messages are sent to the Dead Letter Topic and acknowledged automatically**.

    You can enable the dead letter mechanism by setting `deadLetterPolicy`.

    **Example**

    client.newConsumer()
    .deadLetterPolicy(DeadLetterPolicy.builder().maxRedeliverCount(10).build())
    .subscribe();


    Default dead letter topic name is `{TopicName}-{Subscription}-DLQ`.

    To set a custom dead letter topic name:
    client.newConsumer()
    .deadLetterPolicy(DeadLetterPolicy.builder().maxRedeliverCount(10)
    .deadLetterTopic("your-topic-name").build())
    .subscribe();


    When you specify the dead letter policy but do not specify `ackTimeoutMillis`, the ack timeout is set to 30000 milliseconds.|None
-boolean|`autoUpdatePartitions`|If `autoUpdatePartitions` is enabled, a consumer automatically subscribes to newly added partitions.

    **Note**: this is only for partitioned consumers.|true
-boolean|`replicateSubscriptionState`|If `replicateSubscriptionState` is enabled, the subscription state is replicated to geo-replicated clusters.|false
-
-You can configure parameters if you do not want to use the default configuration. For a full list, see the Javadoc for the {@inject: javadoc:ConsumerBuilder:/client/org/apache/pulsar/client/api/ConsumerBuilder} class.
-
-The following is an example.
-
-```java
-
-Consumer consumer = client.newConsumer()
-        .topic("my-topic")
-        .subscriptionName("my-subscription")
-        .ackTimeout(10, TimeUnit.SECONDS)
-        .subscriptionType(SubscriptionType.Exclusive)
-        .subscribe();
-
-```
-
-### Async receive
-
-The `receive` method receives messages synchronously (the consumer process is blocked until a message is available). You can also use [async receive](concepts-messaging.md#receive-modes), which returns a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture) object immediately once a new message is available.
-
-The following is an example.
-
-```java
-
-CompletableFuture asyncMessage = consumer.receiveAsync();
-
-```
-
-Async receive operations return a {@inject: javadoc:Message:/client/org/apache/pulsar/client/api/Message} wrapped inside a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture).
-
-### Batch receive
-
-Use `batchReceive` to receive multiple messages for each call.
-
-The following is an example.
-
-```java
-
-Messages messages = consumer.batchReceive();
-for (Object message : messages) {
-    // do something
-}
-consumer.acknowledge(messages);
-
-```
-
-:::note
-
-The batch receive policy limits the number and bytes of messages in a single batch. You can specify a timeout to wait for enough messages.
-The batch receive is completed if any of the following conditions is met: enough messages, enough bytes, or the wait timeout expires.
-
-```java
-
-Consumer consumer = client.newConsumer()
-        .topic("my-topic")
-        .subscriptionName("my-subscription")
-        .batchReceivePolicy(BatchReceivePolicy.builder()
-                .maxNumMessages(100)
-                .maxNumBytes(1024 * 1024)
-                .timeout(200, TimeUnit.MILLISECONDS)
-                .build())
-        .subscribe();
-
-```
-
-The default batch receive policy is:
-
-```java
-
-BatchReceivePolicy.builder()
-        .maxNumMessages(-1)
-        .maxNumBytes(10 * 1024 * 1024)
-        .timeout(100, TimeUnit.MILLISECONDS)
-        .build();
-
-```
-
-:::
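-
-Batch receive also has an asynchronous variant. The following is a minimal sketch, assuming the `batchReceiveAsync` method on `Consumer`; it completes a `CompletableFuture` once the batch receive policy is satisfied.
-
-```java
-
-// A sketch: receive a batch asynchronously and acknowledge it after processing.
-Consumer<byte[]> asyncBatchConsumer = client.newConsumer()
-        .topic("my-topic")
-        .subscriptionName("my-subscription")
-        .subscribe();
-
-asyncBatchConsumer.batchReceiveAsync().thenAccept(batch -> {
-    for (Message<byte[]> message : batch) {
-        // process each message in the batch
-    }
-    try {
-        asyncBatchConsumer.acknowledge(batch);
-    } catch (PulsarClientException e) {
-        // handle the acknowledgment failure
-        e.printStackTrace();
-    }
-});
-
-```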
-
-### Multi-topic subscriptions
-
-In addition to subscribing a consumer to a single Pulsar topic, you can also subscribe to multiple topics simultaneously using [multi-topic subscriptions](concepts-messaging.md#multi-topic-subscriptions). To use multi-topic subscriptions, you can supply either a regular expression (regex) or a `List` of topics. If you select topics via regex, all topics must be within the same Pulsar namespace.
-
-The following are some examples.
-
-```java
-
-import org.apache.pulsar.client.api.Consumer;
-import org.apache.pulsar.client.api.ConsumerBuilder;
-import org.apache.pulsar.client.api.PulsarClient;
-
-import java.util.Arrays;
-import java.util.List;
-import java.util.regex.Pattern;
-
-ConsumerBuilder consumerBuilder = pulsarClient.newConsumer()
-        .subscriptionName(subscription);
-
-// Subscribe to all topics in a namespace
-Pattern allTopicsInNamespace = Pattern.compile("public/default/.*");
-Consumer allTopicsConsumer = consumerBuilder
-        .topicsPattern(allTopicsInNamespace)
-        .subscribe();
-
-// Subscribe to a subset of topics in a namespace, based on regex
-Pattern someTopicsInNamespace = Pattern.compile("public/default/foo.*");
-Consumer someTopicsConsumer = consumerBuilder
-        .topicsPattern(someTopicsInNamespace)
-        .subscribe();
-
-```
-
-In the above example, the consumer subscribes to the `persistent` topics that match the topic name pattern. If you want the consumer to subscribe to all `persistent` and `non-persistent` topics that match the topic name pattern, set `subscriptionTopicsMode` to `RegexSubscriptionMode.AllTopics`.
-
-```java
-
-Pattern pattern = Pattern.compile("public/default/.*");
-pulsarClient.newConsumer()
-        .subscriptionName("my-sub")
-        .topicsPattern(pattern)
-        .subscriptionTopicsMode(RegexSubscriptionMode.AllTopics)
-        .subscribe();
-
-```
-
-:::note
-
-By default, the `subscriptionTopicsMode` of the consumer is `PersistentOnly`. Available options of `subscriptionTopicsMode` are `PersistentOnly`, `NonPersistentOnly`, and `AllTopics`.
-
-:::
-
-You can also subscribe to an explicit list of topics (across namespaces if you wish):
-
-```java
-
-List topics = Arrays.asList(
-        "topic-1",
-        "topic-2",
-        "topic-3"
-);
-
-Consumer multiTopicConsumer = consumerBuilder
-        .topics(topics)
-        .subscribe();
-
-// Alternatively:
-Consumer multiTopicConsumer = consumerBuilder
-        .topic(
-                "topic-1",
-                "topic-2",
-                "topic-3"
-        )
-        .subscribe();
-
-```
-
-You can also subscribe to multiple topics asynchronously using the `subscribeAsync` method rather than the synchronous `subscribe` method. The following is an example.
-
-```java
-
-Pattern allTopicsInNamespace = Pattern.compile("persistent://public/default.*");
-consumerBuilder
-        .topicsPattern(allTopicsInNamespace)
-        .subscribeAsync()
-        .thenAccept(this::receiveMessageFromConsumer);
-
-private void receiveMessageFromConsumer(Object consumer) {
-    ((Consumer) consumer).receiveAsync().thenAccept(message -> {
-        // Do something with the received message
-        receiveMessageFromConsumer(consumer);
-    });
-}
-
-```
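-
-When you subscribe with a topics pattern, topics that are created later and match the pattern are discovered periodically. The following is a minimal sketch, assuming the `patternAutoDiscoveryPeriod` builder method described in the configuration table above; the period is given in minutes, and the topic pattern is illustrative.
-
-```java
-
-// A sketch: set how often the client checks for new topics matching the pattern.
-Consumer patternConsumer = pulsarClient.newConsumer()
-        .subscriptionName("my-sub")
-        .topicsPattern(Pattern.compile("public/default/.*"))
-        .patternAutoDiscoveryPeriod(2) // period in minutes
-        .subscribe();
-
-```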
-
-### Subscription types
-
-Pulsar has various [subscription types](concepts-messaging.md#subscription-types) to match different scenarios. A topic can have multiple subscriptions with different subscription types. However, a subscription can only have one subscription type at a time.
-
-A subscription is identified by its subscription name; a subscription name can specify only one subscription type at a time. To change the subscription type, you should first stop all consumers of this subscription.
-
-Different subscription types have different message distribution modes. This section describes the differences between subscription types and how to use them.
-
-To better describe their differences, assume that you have a topic named "my-topic" and that the producer has published 10 messages.
-
-```java
-
-Producer producer = client.newProducer(Schema.STRING)
-        .topic("my-topic")
-        .enableBatching(false)
-        .create();
-// 3 messages with "key-1", 3 messages with "key-2", 2 messages with "key-3" and 2 messages with "key-4"
-producer.newMessage().key("key-1").value("message-1-1").send();
-producer.newMessage().key("key-1").value("message-1-2").send();
-producer.newMessage().key("key-1").value("message-1-3").send();
-producer.newMessage().key("key-2").value("message-2-1").send();
-producer.newMessage().key("key-2").value("message-2-2").send();
-producer.newMessage().key("key-2").value("message-2-3").send();
-producer.newMessage().key("key-3").value("message-3-1").send();
-producer.newMessage().key("key-3").value("message-3-2").send();
-producer.newMessage().key("key-4").value("message-4-1").send();
-producer.newMessage().key("key-4").value("message-4-2").send();
-
-```
-
-#### Exclusive
-
-Create a new consumer and subscribe with the `Exclusive` subscription type.
-
-```java
-
-Consumer consumer = client.newConsumer()
-        .topic("my-topic")
-        .subscriptionName("my-subscription")
-        .subscriptionType(SubscriptionType.Exclusive)
-        .subscribe();
-
-```
-
-Only the first consumer is allowed to attach to the subscription; other consumers receive an error. The first consumer receives all 10 messages, and the consuming order is the same as the producing order.
-
-:::note
-
-If the topic is a partitioned topic, the first consumer subscribes to all of its partitions; other consumers are not assigned any partitions and receive an error.
-
-:::
-
-#### Failover
-
-Create new consumers and subscribe with the `Failover` subscription type.
-
-```java
-
-Consumer consumer1 = client.newConsumer()
-        .topic("my-topic")
-        .subscriptionName("my-subscription")
-        .subscriptionType(SubscriptionType.Failover)
-        .subscribe();
-Consumer consumer2 = client.newConsumer()
-        .topic("my-topic")
-        .subscriptionName("my-subscription")
-        .subscriptionType(SubscriptionType.Failover)
-        .subscribe();
-// consumer1 is the active consumer, consumer2 is the standby consumer.
-// If consumer1 receives 5 messages and then crashes, consumer2 takes over as the active consumer.
-
-```
-
-Multiple consumers can attach to the same subscription, yet only the first consumer is active; the others are standby. When the active consumer is disconnected, messages are dispatched to one of the standby consumers, which then becomes the active consumer.
-
-If the first active consumer is disconnected after receiving 5 messages, the standby consumer becomes the active consumer. consumer1 receives:
-
-```
-
-("key-1", "message-1-1")
-("key-1", "message-1-2")
-("key-1", "message-1-3")
-("key-2", "message-2-1")
-("key-2", "message-2-2")
-
-```
-
-consumer2 receives:
-
-```
-
-("key-2", "message-2-3")
-("key-3", "message-3-1")
-("key-3", "message-3-2")
-("key-4", "message-4-1")
-("key-4", "message-4-2")
-
-```
-
-:::note
-
-If a topic is a partitioned topic, each partition has only one active consumer: messages of one partition are distributed to only one consumer, and messages of multiple partitions are distributed across multiple consumers.
-
-:::
-
-#### Shared
-
-Create new consumers and subscribe with the `Shared` subscription type.
-
-```java
-
-Consumer consumer1 = client.newConsumer()
-        .topic("my-topic")
-        .subscriptionName("my-subscription")
-        .subscriptionType(SubscriptionType.Shared)
-        .subscribe();
-
-Consumer consumer2 = client.newConsumer()
-        .topic("my-topic")
-        .subscriptionName("my-subscription")
-        .subscriptionType(SubscriptionType.Shared)
-        .subscribe();
-// Both consumer1 and consumer2 are active consumers.
-
-```
-
-In the Shared subscription type, multiple consumers can attach to the same subscription and messages are delivered in a round-robin distribution across consumers.
-
-If the broker dispatches only one message at a time, consumer1 receives the following information.
-
-```
-
-("key-1", "message-1-1")
-("key-1", "message-1-3")
-("key-2", "message-2-2")
-("key-3", "message-3-1")
-("key-4", "message-4-1")
-
-```
-
-consumer2 receives the following information.
-
-```
-
-("key-1", "message-1-2")
-("key-2", "message-2-1")
-("key-2", "message-2-3")
-("key-3", "message-3-2")
-("key-4", "message-4-2")
-
-```
-
-The `Shared` subscription type differs from the `Exclusive` and `Failover` subscription types: it provides better flexibility, but cannot provide an ordering guarantee.
-
-#### Key_shared
-
-This subscription type was introduced in the 2.4.0 release. Create new consumers and subscribe with the `Key_Shared` subscription type.
-
-```java
-
-Consumer consumer1 = client.newConsumer()
-        .topic("my-topic")
-        .subscriptionName("my-subscription")
-        .subscriptionType(SubscriptionType.Key_Shared)
-        .subscribe();
-
-Consumer consumer2 = client.newConsumer()
-        .topic("my-topic")
-        .subscriptionName("my-subscription")
-        .subscriptionType(SubscriptionType.Key_Shared)
-        .subscribe();
-// Both consumer1 and consumer2 are active consumers.
-
-```
-
-`Key_Shared` subscription is like `Shared` subscription in that all consumers can attach to the same subscription. But it differs from `Shared` subscription in that messages with the same key are delivered to only one consumer, in order. By default, you do not know in advance which keys will be assigned to which consumer, but a given key is only assigned to one consumer at a time. One possible distribution of messages between the consumers follows.
-
-consumer1 receives the following information.
-
-```
-
-("key-1", "message-1-1")
-("key-1", "message-1-2")
-("key-1", "message-1-3")
-("key-3", "message-3-1")
-("key-3", "message-3-2")
-
-```
-
-consumer2 receives the following information.
-
-```
-
-("key-2", "message-2-1")
-("key-2", "message-2-2")
-("key-2", "message-2-3")
-("key-4", "message-4-1")
-("key-4", "message-4-2")
-
-```
-
-If batching is enabled at the producer side, messages with different keys are added to the same batch by default. The broker dispatches the batch to a consumer as a whole, so the default batch mechanism may break the guaranteed message distribution semantics of the Key_Shared subscription. The producer needs to use the key-based batcher (`BatcherBuilder.KEY_BASED`).
-
-```java
-
-Producer producer = client.newProducer()
-        .topic("my-topic")
-        .batcherBuilder(BatcherBuilder.KEY_BASED)
-        .create();
-
-```
-
-Or the producer can disable batching.
-
-```java
-
-Producer producer = client.newProducer()
-        .topic("my-topic")
-        .enableBatching(false)
-        .create();
-
-```
-
-:::note
-
-If the message key is not specified, messages without a key are dispatched to one consumer in order by default.
-
-:::
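-
-Consumers can also influence how keys are assigned to them. The following is a minimal sketch, assuming the `KeySharedPolicy` API available in recent Java clients; it pins a consumer to explicit sticky hash ranges instead of the default auto-split mode. The range bounds here are illustrative.
-
-```java
-
-// A sketch: request explicit sticky hash ranges for a Key_Shared consumer.
-Consumer stickyConsumer = client.newConsumer()
-        .topic("my-topic")
-        .subscriptionName("my-subscription")
-        .subscriptionType(SubscriptionType.Key_Shared)
-        .keySharedPolicy(KeySharedPolicy.stickyHashRange()
-                .ranges(Range.of(0, 32767)))
-        .subscribe();
-
-```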
-
-## Reader
-
-With the [reader interface](concepts-clients.md#reader-interface), Pulsar clients can "manually position" themselves within a topic and read all messages from a specified message onward.
-The Pulsar API for Java enables you to create {@inject: javadoc:Reader:/client/org/apache/pulsar/client/api/Reader} objects by specifying a topic and a {@inject: javadoc:MessageId:/client/org/apache/pulsar/client/api/MessageId}.
-
-The following is an example.
-
-```java
-
-byte[] msgIdBytes = // Some message ID byte array
-MessageId id = MessageId.fromByteArray(msgIdBytes);
-Reader reader = pulsarClient.newReader()
-        .topic(topic)
-        .startMessageId(id)
-        .create();
-
-while (true) {
-    Message message = reader.readNext();
-    // Process message
-}
-
-```
-
-In the example above, a `Reader` object is instantiated for a specific topic and message (by ID); the reader iterates over each message in the topic after the message identified by `msgIdBytes` (how that value is obtained depends on the application).
-
-The code sample above shows pointing the `Reader` object to a specific message (by ID), but you can also use `MessageId.earliest` to point to the earliest available message on the topic or `MessageId.latest` to point to the most recent available message.
-
-### Configure reader
-
-When you create a reader, you can use the `loadConf` configuration. The following parameters are available in `loadConf`.
-
-| Type | Name | Description | Default
-|---|---|---|---
-String|`topicName`|Topic name.|None
-int|`receiverQueueSize`|Size of a consumer's receiver queue.

    That is, the number of messages that can be accumulated by the reader before the application calls `readNext`.

    A value higher than the default value increases reader throughput, though at the expense of more memory utilization.|1000
-ReaderListener&lt;T&gt;|`readerListener`|A listener that is called for each message received.|None
-String|`readerName`|Reader name.|null
-String|`subscriptionName`|Subscription name|When there is a single topic, the default subscription name is `"reader-" + 10-digit UUID`.
    When there are multiple topics, the default subscription name is `"multiTopicsReader-" + 10-digit UUID`.
-String|`subscriptionRolePrefix`|Prefix of subscription role.|null
-CryptoKeyReader|`cryptoKeyReader`|Interface that abstracts the access to a key store.|null
-ConsumerCryptoFailureAction|`cryptoFailureAction`|The action a consumer takes when it receives a message that cannot be decrypted.

  * **FAIL**: this is the default option; messages fail to be delivered until decryption succeeds.

  * **DISCARD**: silently acknowledge the message and do not deliver it to the application.

  * **CONSUME**: deliver encrypted messages to the application. It is the application's responsibility to decrypt the message.

    Decompression of the message fails.

    If messages contain batch messages, a client is not able to retrieve individual messages from the batch.

    A delivered encrypted message contains an {@link EncryptionContext} with the encryption and compression information that the application can use to decrypt the consumed message payload.
    |ConsumerCryptoFailureAction.FAIL
-boolean|`readCompacted`|If `readCompacted` is enabled, a consumer reads messages from a compacted topic rather than a full message backlog of a topic.

    A consumer only sees the latest value for each key in the compacted topic, up until the point in the topic message backlog where compaction has taken place. Beyond that point, messages are sent as normal.

    `readCompacted` can only be enabled on subscriptions to persistent topics that have a single active consumer (for example, failover or exclusive subscriptions).

    Attempting to enable it on subscriptions to non-persistent topics or on shared subscriptions leads to a subscription call throwing a `PulsarClientException`.|false -boolean|`resetIncludeHead`|If set to true, the first message to be returned is the one specified by `messageId`.

    If set to false, the first message to be returned is the one immediately after the message specified by `messageId`.|false
-
-### Sticky key range reader
-
-In a sticky key range reader, the broker only dispatches messages whose message-key hash falls within the specified key hash range. Multiple key hash ranges can be specified on a reader.
-
-The following is an example of creating a sticky key range reader.
-
-```java
-
-pulsarClient.newReader()
-        .topic(topic)
-        .startMessageId(MessageId.earliest)
-        .keyHashRange(Range.of(0, 10000), Range.of(20001, 30000))
-        .create();
-
-```
-
-The total hash range size is 65536, so the max end of a range should be less than or equal to 65535.
-
-## Schema
-
-In Pulsar, all message data consists of byte arrays "under the hood." [Message schemas](schema-get-started.md) enable you to use other types of data when constructing and handling messages (from simple types like strings to more complex, application-specific types). If you construct, say, a [producer](#producers) without specifying a schema, then the producer can only produce messages of type `byte[]`. The following is an example.
-
-```java
-
-Producer<byte[]> producer = client.newProducer()
-        .topic(topic)
-        .create();
-
-```
-
-The producer above is equivalent to a `Producer<byte[]>` (in fact, you should *always* explicitly specify the type). If you'd like to use a producer for a different type of data, you'll need to specify a **schema** that informs Pulsar which data type will be transmitted over the [topic](reference-terminology.md#topic).
-
-### AvroBaseStructSchema example
-
-Let's say that you have a `SensorReading` class that you'd like to transmit over a Pulsar topic:
-
-```java
-
-public class SensorReading {
-    public float temperature;
-
-    public SensorReading(float temperature) {
-        this.temperature = temperature;
-    }
-
-    // A no-arg constructor is required
-    public SensorReading() {
-    }
-
-    public float getTemperature() {
-        return temperature;
-    }
-
-    public void setTemperature(float temperature) {
-        this.temperature = temperature;
-    }
-}
-
-```
-
-You could then create a `Producer<SensorReading>` (or `Consumer<SensorReading>`) like this:
-
-```java
-
-Producer<SensorReading> producer = client.newProducer(JSONSchema.of(SensorReading.class))
-        .topic("sensor-readings")
-        .create();
-
-```
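-
-On the consuming side, the same schema class can be used to get typed messages back. The following is a minimal sketch under the same assumptions as the producer example above; `Message.getValue()` returns a deserialized `SensorReading`.
-
-```java
-
-// A sketch: consume typed messages with the same JSON schema.
-Consumer<SensorReading> consumer = client.newConsumer(JSONSchema.of(SensorReading.class))
-        .topic("sensor-readings")
-        .subscriptionName("sensor-sub")
-        .subscribe();
-
-Message<SensorReading> msg = consumer.receive();
-SensorReading reading = msg.getValue();
-consumer.acknowledge(msg);
-
-```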
-
-The following schema formats are currently available for Java:
-
-* No schema or the byte array schema (which can be applied using `Schema.BYTES`):
-
-  ```java
-
-  Producer<byte[]> bytesProducer = client.newProducer(Schema.BYTES)
-          .topic("some-raw-bytes-topic")
-          .create();
-
-  ```
-
-  Or, equivalently:
-
-  ```java
-
-  Producer<byte[]> bytesProducer = client.newProducer()
-          .topic("some-raw-bytes-topic")
-          .create();
-
-  ```
-
-* `String` for normal UTF-8-encoded string data. Apply the schema using `Schema.STRING`:
-
-  ```java
-
-  Producer<String> stringProducer = client.newProducer(Schema.STRING)
-          .topic("some-string-topic")
-          .create();
-
-  ```
-
-* Create JSON schemas for POJOs using `Schema.JSON`. The following is an example.
-
-  ```java
-
-  Producer<MyPojo> pojoProducer = client.newProducer(Schema.JSON(MyPojo.class))
-          .topic("some-pojo-topic")
-          .create();
-
-  ```
-
-* Generate Protobuf schemas using `Schema.PROTOBUF`. The following example shows how to create the Protobuf schema and use it to instantiate a new producer:
-
-  ```java
-
-  Producer<MyProtobuf> protobufProducer = client.newProducer(Schema.PROTOBUF(MyProtobuf.class))
-          .topic("some-protobuf-topic")
-          .create();
-
-  ```
-
-* Define Avro schemas with `Schema.AVRO`. The following code snippet demonstrates how to create and use an Avro schema.
-
-  ```java
-
-  Producer<MyAvro> avroProducer = client.newProducer(Schema.AVRO(MyAvro.class))
-          .topic("some-avro-topic")
-          .create();
-
-  ```
-
-### ProtobufNativeSchema example
-
-For an example of ProtobufNativeSchema, see [`SchemaDefinition` in `Complex type`](schema-understand.md#complex-type).
-
-## Authentication
-
-Pulsar currently supports three authentication schemes: [TLS](security-tls-authentication.md), [Athenz](security-athenz.md), and [Oauth2](security-oauth2.md). You can use the Pulsar Java client with all of them.
-
-### TLS Authentication
-
-To use [TLS](security-tls-authentication.md), note that the `enableTls` method is deprecated; instead, use a "pulsar+ssl://" URL in `serviceUrl` to enable TLS, point your Pulsar client to a trusted TLS cert path, and provide paths to your cert and key files.
-
-The following is an example.
-
-```java
-
-Map<String, String> authParams = new HashMap<>();
-authParams.put("tlsCertFile", "/path/to/client-cert.pem");
-authParams.put("tlsKeyFile", "/path/to/client-key.pem");
-
-Authentication tlsAuth = AuthenticationFactory
-        .create(AuthenticationTls.class.getName(), authParams);
-
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar+ssl://my-broker.com:6651")
-        .tlsTrustCertsFilePath("/path/to/cacert.pem")
-        .authentication(tlsAuth)
-        .build();
-
-```
-
-### Athenz
-
-To use [Athenz](security-athenz.md) as an authentication provider, you need to [use TLS](#tls-authentication) and provide values for four parameters in a hash:
-
-* `tenantDomain`
-* `tenantService`
-* `providerDomain`
-* `privateKey`
-
-You can also set an optional `keyId`. The following is an example.
-
-```java
-
-Map<String, String> authParams = new HashMap<>();
-authParams.put("tenantDomain", "shopping"); // Tenant domain name
-authParams.put("tenantService", "some_app"); // Tenant service name
-authParams.put("providerDomain", "pulsar"); // Provider domain name
-authParams.put("privateKey", "file:///path/to/private.pem"); // Tenant private key path
-authParams.put("keyId", "v1"); // Key id for the tenant private key (optional, default: "0")
-
-Authentication athenzAuth = AuthenticationFactory
-        .create(AuthenticationAthenz.class.getName(), authParams);
-
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar+ssl://my-broker.com:6651")
-        .tlsTrustCertsFilePath("/path/to/cacert.pem")
-        .authentication(athenzAuth)
-        .build();
-
-```
-
-> #### Supported pattern formats
-> The `privateKey` parameter supports the following three pattern formats:
-> * `file:///path/to/file`
-> * `file:/path/to/file`
-> * `data:application/x-pem-file;base64,`
-
-### Oauth2
-
-The following example shows how to use [Oauth2](security-oauth2.md) as an authentication provider for the Pulsar Java client.
-
-You can use the factory method to configure authentication for the Pulsar Java client.
-
-```java
-
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar://broker.example.com:6650/")
-        .authentication(
-                AuthenticationFactoryOAuth2.clientCredentials(this.issuerUrl, this.credentialsUrl, this.audience))
-        .build();
-
-```
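-
-The `issuerUrl`, `credentialsUrl`, and `audience` values above come from your OAuth2 provider. The following is a minimal sketch with illustrative values (the URLs and audience here are assumptions, not defaults):
-
-```java
-
-import java.net.URL;
-
-// A sketch: illustrative OAuth2 client-credentials configuration.
-URL issuerUrl = new URL("https://accounts.example.com/oauth2");
-URL credentialsUrl = new URL("file:///path/to/credentials-file.json");
-String audience = "urn:example:pulsar-cluster";
-
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar://broker.example.com:6650/")
-        .authentication(
-                AuthenticationFactoryOAuth2.clientCredentials(issuerUrl, credentialsUrl, audience))
-        .build();
-
-```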
-
-In addition, you can also use the encoded parameters to configure authentication for the Pulsar Java client.
-
-```java
-
-Authentication auth = AuthenticationFactory
-        .create(AuthenticationOAuth2.class.getName(),
-                "{\"type\":\"client_credentials\",\"privateKey\":\"...\",\"issuerUrl\":\"...\",\"audience\":\"...\"}");
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar://broker.example.com:6650/")
-        .authentication(auth)
-        .build();
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/client-libraries-node.md b/site2/website/versioned_docs/version-2.8.0-deprecated/client-libraries-node.md
deleted file mode 100644
index e24032946bdcde..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/client-libraries-node.md
+++ /dev/null
@@ -1,643 +0,0 @@
----
-id: client-libraries-node
-title: The Pulsar Node.js client
-sidebar_label: "Node.js"
-original_id: client-libraries-node
----
-
-The Pulsar Node.js client can be used to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Node.js.
-
-All the methods in [producers](#producers), [consumers](#consumers), and [readers](#readers) of a Node.js client are thread-safe.
-
-For 1.3.0 or later versions, [type definitions](https://github.com/apache/pulsar-client-node/blob/master/index.d.ts) used in TypeScript are available.
-
-## Installation
-
-You can install the [`pulsar-client`](https://www.npmjs.com/package/pulsar-client) library via [npm](https://www.npmjs.com/).
-
-### Requirements
-The Pulsar Node.js client library is based on the C++ client library.
-Follow [these instructions](client-libraries-cpp.md#compilation) and install the Pulsar C++ client library.
-
-### Compatibility
-
-Compatibility between each version of the Node.js client and the C++ client is as follows:
-
-| Node.js client | C++ client |
-| :------------- | :------------- |
-| 1.0.0 | 2.3.0 or later |
-| 1.1.0 | 2.4.0 or later |
-| 1.2.0 | 2.5.0 or later |
-
-If an incompatible version of the C++ client is installed, you may fail to build or run this library.
-
-### Installation using npm
-
-Install the `pulsar-client` library via [npm](https://www.npmjs.com/):
-
-```shell
-
-$ npm install pulsar-client
-
-```
-
-:::note
-
-This library works only in Node.js 10.x or later because it uses the [`node-addon-api`](https://github.com/nodejs/node-addon-api) module to wrap the C++ library.
-
-:::
-
-## Connection URLs
-To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL.
-
-Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme, and have a default port of 6650. Here is an example for `localhost`:
-
-```http
-
-pulsar://localhost:6650
-
-```
-
-A URL for a production Pulsar cluster may look something like this:
-
-```http
-
-pulsar://pulsar.us-west.example.com:6650
-
-```
-
-If you are using [TLS encryption](security-tls-transport.md) or [TLS Authentication](security-tls-authentication.md), the URL looks like this:
-
-```http
-
-pulsar+ssl://pulsar.us-west.example.com:6651
-
-```
-
-## Create a client
-
-In order to interact with Pulsar, you first need a client object. You can create a client instance using a `new` operator and the `Client` method, passing in a client options object (more on configuration [below](#client-configuration)).
-
-Here is an example:
-
-```JavaScript
-
-const Pulsar = require('pulsar-client');
-
-(async () => {
-  const client = new Pulsar.Client({
-    serviceUrl: 'pulsar://localhost:6650',
-  });
-
-  await client.close();
-})();
-
-```
-
-### Client configuration
-
-The following configurable parameters are available for Pulsar clients:
-
-| Parameter | Description | Default |
-| :-------- | :---------- | :------ |
-| `serviceUrl` | The connection URL for the Pulsar cluster. See [above](#connection-urls) for more info. | |
-| `authentication` | Configure the authentication provider (default: no authentication). See [TLS Authentication](security-tls-authentication.md) for more info. | |
-| `operationTimeoutSeconds` | The timeout for Node.js client operations (creating producers, subscribing to and unsubscribing from [topics](reference-terminology.md#topic)). Retries occur until this threshold is reached, at which point the operation fails. | 30 |
-| `ioThreads` | The number of threads to use for handling connections to Pulsar [brokers](reference-terminology.md#broker). | 1 |
-| `messageListenerThreads` | The number of threads used by message listeners ([consumers](#consumers) and [readers](#readers)). | 1 |
-| `concurrentLookupRequest` | The number of concurrent lookup requests that can be sent on each broker connection. Setting a maximum helps to keep from overloading brokers. You should set values over the default of 50000 only if the client needs to produce and/or subscribe to thousands of Pulsar topics. | 50000 |
-| `tlsTrustCertsFilePath` | The file path for the trusted TLS certificate. | |
-| `tlsValidateHostname` | Whether to enable TLS hostname verification. | `false` |
-| `tlsAllowInsecureConnection` | Whether the Pulsar client accepts an untrusted TLS certificate from the broker. | `false` |
-| `statsIntervalInSeconds` | Interval between each stats report. Stats are activated when `statsIntervalInSeconds` is positive. The minimum value is 1 second. | 600 |
-| `log` | A function that is used for logging. | `console.log` |
-
-## Producers
-
-Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Node.js producers using a producer configuration object.
-
-Here is an example:
-
-```JavaScript
-
-const producer = await client.createProducer({
-  topic: 'my-topic',
-});
-
-await producer.send({
-  data: Buffer.from("Hello, Pulsar"),
-});
-
-await producer.close();
-
-```
-
-> #### Promise operation
-> When you create a new Pulsar producer, the operation returns a `Promise` object, and you get the producer instance or an error through its executor function.
-> This example uses the `await` operator instead of an executor function.
-
-### Producer operations
-
-Pulsar Node.js producers have the following methods available:
-
-| Method | Description | Return type |
-| :----- | :---------- | :---------- |
-| `send(Object)` | Publishes a [message](#messages) to the producer's topic. When the message is successfully acknowledged by the Pulsar broker, or an error is thrown, the Promise object, whose result is the message ID, runs its executor function. | `Promise` |
-| `flush()` | Sends messages from the send queue to the Pulsar broker. When the messages are successfully acknowledged by the Pulsar broker, or an error is thrown, the Promise object runs its executor function. | `Promise` |
-| `close()` | Closes the producer and releases all resources allocated to it. Once `close()` is called, no more messages are accepted from the publisher. This method returns a Promise object. It runs the executor function when all pending publish requests are persisted by Pulsar. If an error is thrown, no pending writes are retried. | `Promise` |
-| `getProducerName()` | Getter method of the producer name. | `string` |
-| `getTopic()` | Getter method of the name of the topic. | `string` |
-
-### Producer configuration
-
-| Parameter | Description | Default |
-| :-------- | :---------- | :------ |
-| `topic` | The Pulsar [topic](reference-terminology.md#topic) to which the producer publishes messages. The topic format is `<topic-name>` or `<tenant>/<namespace>/<topic-name>`. For example, `sample/ns1/my-topic`. | |
-| `producerName` | A name for the producer. If you do not explicitly assign a name, Pulsar automatically generates a globally unique name. If you choose to explicitly assign a name, it needs to be unique across *all* Pulsar clusters, otherwise the creation operation throws an error. | |
-| `sendTimeoutMs` | When publishing a message to a topic, the producer waits for an acknowledgment from the responsible Pulsar [broker](reference-terminology.md#broker). If a message is not acknowledged within the threshold set by this parameter, an error is thrown. If you set `sendTimeoutMs` to -1, the timeout is set to infinity (and thus removed). Removing the send timeout is recommended when using Pulsar's [message de-duplication](cookbooks-deduplication.md) feature. | 30000 |
-| `initialSequenceId` | The initial sequence ID of the message. When the producer sends a message, the sequence ID is attached to it and incremented for each subsequent send. | |
-| `maxPendingMessages` | The maximum size of the queue holding pending messages (i.e. messages waiting to receive an acknowledgment from the [broker](reference-terminology.md#broker)). By default, when the queue is full, all calls to the `send` method fail *unless* `blockIfQueueFull` is set to `true`. | 1000 |
-| `maxPendingMessagesAcrossPartitions` | The maximum size of the sum of the partitions' pending queues. | 50000 |
-| `blockIfQueueFull` | If set to `true`, the producer's `send` method waits when the outgoing message queue is full rather than failing and throwing an error (the size of that queue is dictated by the `maxPendingMessages` parameter); if set to `false` (the default), `send` operations fail and throw an error when the queue is full. | `false` |
-| `messageRoutingMode` | The message routing logic (for producers on [partitioned topics](concepts-messaging.md#partitioned-topics)). This logic is applied only when no key is set on messages. The available options are: round robin (`RoundRobinDistribution`), or publishing all messages to a single partition (`UseSinglePartition`, the default). | `UseSinglePartition` |
-| `hashingScheme` | The hashing function that determines the partition on which a particular message is published (partitioned topics only). The available options are: `JavaStringHash` (the equivalent of `String.hashCode()` in Java), `Murmur3_32Hash` (applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function), or `BoostHash` (applies the hashing function from C++'s [Boost](https://www.boost.org/doc/libs/1_62_0/doc/html/hash.html) library). | `BoostHash` |
-| `compressionType` | The message data compression type used by the producer. The available options are [`LZ4`](https://github.com/lz4/lz4), [`Zlib`](https://zlib.net/), [ZSTD](https://github.com/facebook/zstd/), and [SNAPPY](https://github.com/google/snappy/). | Compression None |
-| `batchingEnabled` | If set to `true`, the producer sends messages in batches. | `true` |
-| `batchingMaxPublishDelayMs` | The maximum delay, in milliseconds, before a batch of messages is sent. | 10 |
-| `batchingMaxMessages` | The maximum number of messages in a batch. | 1000 |
-| `properties` | The metadata of the producer. | |
-
-### Producer example
-
-This example creates a Node.js producer for the `my-topic` topic and sends 10 messages to that topic:
-
-```JavaScript
-
-const Pulsar = require('pulsar-client');
-
-(async () => {
-  // Create a client
-  const client = new Pulsar.Client({
-    serviceUrl: 'pulsar://localhost:6650',
-  });
-
-  // Create a producer
-  const producer = await client.createProducer({
-    topic: 'my-topic',
-  });
-
-  // Send messages
-  for (let i = 0; i < 10; i += 1) {
-    const msg = `my-message-${i}`;
-    producer.send({
-      data: Buffer.from(msg),
-    });
-    console.log(`Sent message: ${msg}`);
-  }
-  await producer.flush();
-
-  await producer.close();
-  await client.close();
-})();
-
-```
-
-## Consumers
-
-Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on those topics. You can [configure](#consumer-configuration) Node.js consumers using a consumer configuration object.
-
-Here is an example:
-
-```JavaScript
-
-const consumer = await client.subscribe({
-  topic: 'my-topic',
-  subscription: 'my-subscription',
-});
-
-const msg = await consumer.receive();
-console.log(msg.getData().toString());
-consumer.acknowledge(msg);
-
-await consumer.close();
-
-```
-
-> #### Promise operation
-> When you create a new Pulsar consumer, the operation returns a `Promise` object, and you get the consumer instance or an error through its executor function.
-> This example uses the `await` operator instead of an executor function.
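-
-`receive` blocks until a message arrives; it also accepts a timeout in milliseconds, as listed in the operations table below. The following is a minimal sketch, assumed to run inside an async function; it handles the rejection that occurs when no message arrives in time.
-
-```JavaScript
-
-// A sketch: receive with a timeout instead of blocking indefinitely.
-try {
-  const msg = await consumer.receive(3000);
-  console.log(msg.getData().toString());
-  consumer.acknowledge(msg);
-} catch (err) {
-  // No message arrived within the timeout, or another receive error occurred.
-  console.error('receive timed out or failed:', err);
-}
-
-```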
-
-### Consumer operations
-
-Pulsar Node.js consumers have the following methods available:
-
-| Method | Description | Return type |
-| :----- | :---------- | :---------- |
-| `receive()` | Receives a single message from the topic. When the message is available, the Promise object runs its executor function with the message object. | `Promise` |
-| `receive(Number)` | Receives a single message from the topic with a specific timeout in milliseconds. | `Promise` |
-| `acknowledge(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message object. | `void` |
-| `acknowledgeId(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID object. | `void` |
-| `acknowledgeCumulative(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message. The `acknowledgeCumulative` method returns void and sends the ack to the broker asynchronously. After that, the messages are *not* redelivered to the consumer. Cumulative acking cannot be used with a [shared](concepts-messaging.md#shared) subscription type. | `void` |
-| `acknowledgeCumulativeId(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message ID. | `void` |
-| `negativeAcknowledge(Message)`| [Negatively acknowledges](reference-terminology.md#negative-acknowledgment-nack) a message to the Pulsar broker by message object. | `void` |
-| `negativeAcknowledgeId(MessageId)` | [Negatively acknowledges](reference-terminology.md#negative-acknowledgment-nack) a message to the Pulsar broker by message ID object. | `void` |
-| `close()` | Closes the consumer, disabling its ability to receive messages from the broker. | `Promise` |
-| `unsubscribe()` | Unsubscribes the subscription. | `Promise` |
-
-### Consumer configuration
-
-| Parameter | Description | Default |
-| :-------- | :---------- | :------ |
-| `topic` | The Pulsar topic on which the consumer establishes a subscription and listens for messages. | |
-| `topics` | The array of topics. | |
-| `topicsPattern` | The regular expression for topics. | |
-| `subscription` | The subscription name for this consumer. | |
-| `subscriptionType` | Available options are `Exclusive`, `Shared`, `Key_Shared`, and `Failover`. | `Exclusive` |
-| `subscriptionInitialPosition` | Initial position at which to set the cursor when subscribing to a topic for the first time. | `SubscriptionInitialPosition.Latest` |
-| `ackTimeoutMs` | Acknowledge timeout in milliseconds. | 0 |
-| `nAckRedeliverTimeoutMs` | Delay to wait before redelivering messages that failed to be processed. | 60000 |
-| `receiverQueueSize` | Sets the size of the consumer's receiver queue, i.e. the number of messages that can be accumulated by the consumer before the application calls `receive`. A value higher than the default of 1000 could increase consumer throughput, though at the expense of more memory utilization. | 1000 |
-| `receiverQueueSizeAcrossPartitions` | Sets the max total receiver queue size across partitions. This setting is used to reduce the receiver queue size for individual partitions if the total exceeds this value. | 50000 |
-| `consumerName` | The name of the consumer. Currently (v2.4.1), [failover](concepts-messaging.md#failover) mode uses the consumer name for ordering. | |
-| `properties` | The metadata of the consumer. | |
-| `listener`| A listener that is called for each message received. | |
-| `readCompacted`| If `readCompacted` is enabled, a consumer reads messages from a compacted topic rather than reading a full message backlog of a topic.

    A consumer only sees the latest value for each key in the compacted topic, up until the point in the topic message backlog where compaction has taken place. Beyond that point, messages are sent as normal.

    `readCompacted` can only be enabled on subscriptions to persistent topics that have a single active consumer (such as failover or exclusive subscriptions).

    Attempting to enable it on subscriptions to non-persistent topics or on shared subscriptions leads to a subscription call throwing a `PulsarClientException`. | false |
-
-### Consumer example
-
-This example creates a Node.js consumer with the `my-subscription` subscription on the `my-topic` topic, receives 10 messages, prints the content of each message that arrives, and acknowledges each message to the Pulsar broker:
-
-```JavaScript
-
-const Pulsar = require('pulsar-client');
-
-(async () => {
-  // Create a client
-  const client = new Pulsar.Client({
-    serviceUrl: 'pulsar://localhost:6650',
-  });
-
-  // Create a consumer
-  const consumer = await client.subscribe({
-    topic: 'my-topic',
-    subscription: 'my-subscription',
-    subscriptionType: 'Exclusive',
-  });
-
-  // Receive messages
-  for (let i = 0; i < 10; i += 1) {
-    const msg = await consumer.receive();
-    console.log(msg.getData().toString());
-    consumer.acknowledge(msg);
-  }
-
-  await consumer.close();
-  await client.close();
-})();
-
-```
-
-Alternatively, a consumer can be created with a `listener` to process messages.
-
-```JavaScript
-
-// Create a consumer
-const consumer = await client.subscribe({
-  topic: 'my-topic',
-  subscription: 'my-subscription',
-  subscriptionType: 'Exclusive',
-  listener: (msg, msgConsumer) => {
-    console.log(msg.getData().toString());
-    msgConsumer.acknowledge(msg);
-  },
-});
-
-```
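-
-If message processing fails, the message can be negatively acknowledged so that it is redelivered after the configured `nAckRedeliverTimeoutMs`. The following is a minimal sketch using the `negativeAcknowledge` method from the operations table above; `processMessage` is an application-defined function, not part of the client API.
-
-```JavaScript
-
-// A sketch: negatively acknowledge a message that could not be processed.
-const msg = await consumer.receive();
-try {
-  processMessage(msg); // processMessage is an application-defined function
-  consumer.acknowledge(msg);
-} catch (err) {
-  // The message is redelivered after the configured nAckRedeliverTimeoutMs.
-  consumer.negativeAcknowledge(msg);
-}
-
-```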
-
-## Readers
-
-Pulsar readers process messages from Pulsar topics. Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recently unacked message). You can [configure](#reader-configuration) Node.js readers using a reader configuration object.
-
-Here is an example:
-
-```JavaScript
-
-const reader = await client.createReader({
-  topic: 'my-topic',
-  startMessageId: Pulsar.MessageId.earliest(),
-});
-
-const msg = await reader.readNext();
-console.log(msg.getData().toString());
-
-await reader.close();
-
-```
-
-### Reader operations
-
-Pulsar Node.js readers have the following methods available:
-
-| Method | Description | Return type |
-| :----- | :---------- | :---------- |
-| `readNext()` | Receives the next message on the topic (analogous to the `receive` method for [consumers](#consumer-operations)). When the message is available, the Promise object runs its executor function with the message object. | `Promise` |
-| `readNext(Number)` | Receives a single message from the topic with a specific timeout in milliseconds. | `Promise` |
-| `hasNext()` | Returns whether the broker has a next message in the target topic. | `Boolean` |
-| `close()` | Closes the reader, disabling its ability to receive messages from the broker. | `Promise` |
-
-### Reader configuration
-
-| Parameter | Description | Default |
-| :-------- | :---------- | :------ |
-| `topic` | The Pulsar [topic](reference-terminology.md#topic) on which the reader establishes a subscription and listens for messages. | |
-| `startMessageId` | The initial reader position, i.e. the message at which the reader begins processing messages. The options are `Pulsar.MessageId.earliest` (the earliest available message on the topic), `Pulsar.MessageId.latest` (the latest available message on the topic), or a message ID object for a position that is not earliest or latest. | |
-| `receiverQueueSize` | Sets the size of the reader's receiver queue, i.e. the number of messages that can be accumulated by the reader before the application calls `readNext`. A value higher than the default of 1000 could increase reader throughput, though at the expense of more memory utilization. | 1000 |
-| `readerName` | The name of the reader. | |
-| `subscriptionRolePrefix` | The subscription role prefix. | |
-| `readCompacted` | If `readCompacted` is enabled, a consumer reads messages from a compacted topic rather than reading a full message backlog of a topic.

    A consumer only sees the latest value for each key in the compacted topic, up until the point in the topic message backlog where compaction has taken place. Beyond that point, messages are sent as normal.

    `readCompacted` can only be enabled on subscriptions to persistent topics that have a single active consumer (such as failover or exclusive subscriptions).

    Attempting to enable it on subscriptions to non-persistent topics or on shared subscriptions leads to a subscription call throwing a `PulsarClientException`. | `false` | - - -### Reader example - -This example creates a Node.js reader with the `my-topic` topic, reads messages, and prints the content that arrive for 10 times: - -```JavaScript - -const Pulsar = require('pulsar-client'); - -(async () => { - // Create a client - const client = new Pulsar.Client({ - serviceUrl: 'pulsar://localhost:6650', - operationTimeoutSeconds: 30, - }); - - // Create a reader - const reader = await client.createReader({ - topic: 'my-topic', - startMessageId: Pulsar.MessageId.earliest(), - }); - - // read messages - for (let i = 0; i < 10; i += 1) { - const msg = await reader.readNext(); - console.log(msg.getData().toString()); - } - - await reader.close(); - await client.close(); -})(); - -``` - -## Messages - -In Pulsar Node.js client, you have to construct producer message object for producer. - -Here is an example message: - -```JavaScript - -const msg = { - data: Buffer.from('Hello, Pulsar'), - partitionKey: 'key1', - properties: { - 'foo': 'bar', - }, - eventTimestamp: Date.now(), - replicationClusters: [ - 'cluster1', - 'cluster2', - ], -} - -await producer.send(msg); - -``` - -The following keys are available for producer message objects: - -| Parameter | Description | -| :-------- | :---------- | -| `data` | The actual data payload of the message. | -| `properties` | A Object for any application-specific metadata attached to the message. | -| `eventTimestamp` | The timestamp associated with the message. | -| `sequenceId` | The sequence ID of the message. | -| `partitionKey` | The optional key associated with the message (particularly useful for things like topic compaction). | -| `replicationClusters` | The clusters to which this message is replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default. | -| `deliverAt` | The absolute timestamp at or after which the message is delivered. | | -| `deliverAfter` | The relative delay after which the message is delivered. | | - -### Message object operations - -In Pulsar Node.js client, you can receive (or read) message object as consumer (or reader). - -The message object have the following methods available: - -| Method | Description | Return type | -| :----- | :---------- | :---------- | -| `getTopicName()` | Getter method of topic name. | `String` | -| `getProperties()` | Getter method of properties. | `Array` | -| `getData()` | Getter method of message data. | `Buffer` | -| `getMessageId()` | Getter method of [message id object](#message-id-object-operations). | `Object` | -| `getPublishTimestamp()` | Getter method of publish timestamp. | `Number` | -| `getEventTimestamp()` | Getter method of event timestamp. | `Number` | -| `getRedeliveryCount()` | Getter method of redelivery count. | `Number` | -| `getPartitionKey()` | Getter method of partition key. | `String` | - -### Message ID object operations - -In Pulsar Node.js client, you can get message id object from message object. - -The message id object have the following methods available: - -| Method | Description | Return type | -| :----- | :---------- | :---------- | -| `serialize()` | Serialize the message id into a Buffer for storing. | `Buffer` | -| `toString()` | Get message id as String. | `String` | - -The client has static method of message id object. 
You can access it as `Pulsar.MessageId.someStaticMethod` too. - -The following static methods are available for the message id object: - -| Method | Description | Return type | -| :----- | :---------- | :---------- | -| `earliest()` | MessageId representing the earliest, or oldest available message stored in the topic. | `Object` | -| `latest()` | MessageId representing the latest, or last published message in the topic. | `Object` | -| `deserialize(Buffer)` | Deserialize a message id object from a Buffer. | `Object` | - -## End-to-end encryption - -[End-to-end encryption](https://pulsar.apache.org/docs/en/next/cookbooks-encryption/#docsNav) allows applications to encrypt messages at producers and decrypt at consumers. - -### Configuration - -If you want to use the end-to-end encryption feature in the Node.js client, you need to configure `publicKeyPath` and `privateKeyPath` for both producer and consumer. - -``` - -publicKeyPath: "./public.pem" -privateKeyPath: "./private.pem" - -``` - -### Tutorial - -This section provides step-by-step instructions on how to use the end-to-end encryption feature in the Node.js client. - -**Prerequisite** - -- Pulsar C++ client 2.7.1 or later - -**Step** - -1. Create both public and private key pairs. - - **Input** - - ```shell - - openssl genrsa -out private.pem 2048 - openssl rsa -in private.pem -pubout -out public.pem - - ``` - -2. Create a producer to send encrypted messages. - - **Input** - - ```nodejs - - const Pulsar = require('pulsar-client'); - - (async () => { - // Create a client - const client = new Pulsar.Client({ - serviceUrl: 'pulsar://localhost:6650', - operationTimeoutSeconds: 30, - }); - - // Create a producer - const producer = await client.createProducer({ - topic: 'persistent://public/default/my-topic', - sendTimeoutMs: 30000, - batchingEnabled: true, - publicKeyPath: "./public.pem", - privateKeyPath: "./private.pem", - encryptionKey: "encryption-key" - }); - - console.log(producer.ProducerConfig) - // Send messages - for (let i = 0; i < 10; i += 1) { - const msg = `my-message-${i}`; - producer.send({ - data: Buffer.from(msg), - }); - console.log(`Sent message: ${msg}`); - } - await producer.flush(); - - await producer.close(); - await client.close(); - })(); - - ``` - -3. Create a consumer to receive encrypted messages. - - **Input** - - ```nodejs - - const Pulsar = require('pulsar-client'); - - (async () => { - // Create a client - const client = new Pulsar.Client({ - serviceUrl: 'pulsar://172.25.0.3:6650', - operationTimeoutSeconds: 30 - }); - - // Create a consumer - const consumer = await client.subscribe({ - topic: 'persistent://public/default/my-topic', - subscription: 'sub1', - subscriptionType: 'Shared', - ackTimeoutMs: 10000, - publicKeyPath: "./public.pem", - privateKeyPath: "./private.pem" - }); - - console.log(consumer) - // Receive messages - for (let i = 0; i < 10; i += 1) { - const msg = await consumer.receive(); - console.log(msg.getData().toString()); - consumer.acknowledge(msg); - } - - await consumer.close(); - await client.close(); - })(); - - ``` - -4. Run the consumer to receive encrypted messages. - - **Input** - - ```shell - - node consumer.js - - ``` - -5. In a new terminal tab, run the producer to produce encrypted messages. - - **Input** - - ```shell - - node producer.js - - ``` - - Now you can see the producer sends messages and the consumer receives messages successfully. - - **Output** - - This is from the producer side. 
- - ``` - - Sent message: my-message-0 - Sent message: my-message-1 - Sent message: my-message-2 - Sent message: my-message-3 - Sent message: my-message-4 - Sent message: my-message-5 - Sent message: my-message-6 - Sent message: my-message-7 - Sent message: my-message-8 - Sent message: my-message-9 - - ``` - - This is from the consumer side. - - ``` - - my-message-0 - my-message-1 - my-message-2 - my-message-3 - my-message-4 - my-message-5 - my-message-6 - my-message-7 - my-message-8 - my-message-9 - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/client-libraries-python.md b/site2/website/versioned_docs/version-2.8.0-deprecated/client-libraries-python.md deleted file mode 100644 index f30cf55387d92e..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/client-libraries-python.md +++ /dev/null @@ -1,456 +0,0 @@ ---- -id: client-libraries-python -title: Pulsar Python client -sidebar_label: "Python" -original_id: client-libraries-python ---- - -Pulsar Python client library is a wrapper over the existing [C++ client library](client-libraries-cpp.md) and exposes all of the [same features](/api/cpp). You can find the code in the [`python` subdirectory](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp/python) of the C++ client code. - -All the methods in producer, consumer, and reader of a Python client are thread-safe. - -[pdoc](https://github.com/BurntSushi/pdoc)-generated API docs for the Python client are available [here](/api/python). - -## Install - -You can install the [`pulsar-client`](https://pypi.python.org/pypi/pulsar-client) library either via [PyPi](https://pypi.python.org/pypi), using [pip](#installation-using-pip), or by building the library from source. - -### Install using pip - -To install the `pulsar-client` library as a pre-built package using the [pip](https://pip.pypa.io/en/stable/) package manager: - -```shell - -$ pip install pulsar-client==@pulsar:version_number@ - -``` - -### Optional dependencies - -To support aspects like pulsar functions or Avro serialization, additional optional components can be installed alongside the `pulsar-client` library - -```shell - -# avro serialization -$ pip install pulsar-client[avro]=='@pulsar:version_number@' - -# functions runtime -$ pip install pulsar-client[functions]=='@pulsar:version_number@' - -# all optional components -$ pip install pulsar-client[all]=='@pulsar:version_number@' - -``` - -Installation via PyPi is available for the following Python versions: - -Platform | Supported Python versions -:--------|:------------------------- -MacOS
    10.13 (High Sierra), 10.14 (Mojave)
    | 2.7, 3.7 -Linux | 2.7, 3.4, 3.5, 3.6, 3.7, 3.8 - -### Install from source - -To install the `pulsar-client` library by building from source, follow [instructions](client-libraries-cpp.md#compilation) and compile the Pulsar C++ client library. That builds the Python binding for the library. - -To install the built Python bindings: - -```shell - -$ git clone https://github.com/apache/pulsar -$ cd pulsar/pulsar-client-cpp/python -$ sudo python setup.py install - -``` - -## API Reference - -The complete Python API reference is available at [api/python](/api/python). - -## Examples - -You can find a variety of Python code examples for the `pulsar-client` library. - -### Producer example - -The following example creates a Python producer for the `my-topic` topic and sends 10 messages on that topic: - -```python - -import pulsar - -client = pulsar.Client('pulsar://localhost:6650') - -producer = client.create_producer('my-topic') - -for i in range(10): - producer.send(('Hello-%d' % i).encode('utf-8')) - -client.close() - -``` - -### Consumer example - -The following example creates a consumer with the `my-subscription` subscription name on the `my-topic` topic, receives incoming messages, prints the content and ID of messages that arrive, and acknowledges each message to the Pulsar broker. - -```python - -import pulsar - -client = pulsar.Client('pulsar://localhost:6650') - -consumer = client.subscribe('my-topic', 'my-subscription') - -while True: - msg = consumer.receive() - try: - print("Received message '{}' id='{}'".format(msg.data(), msg.message_id())) - # Acknowledge successful processing of the message - consumer.acknowledge(msg) - except: - # Message failed to be processed - consumer.negative_acknowledge(msg) - -client.close() - -``` - -This example shows how to configure negative acknowledgement. - -```python - -from pulsar import Client, schema -client = Client('pulsar://localhost:6650') -consumer = client.subscribe('negative_acks','test',schema=schema.StringSchema()) -producer = client.create_producer('negative_acks',schema=schema.StringSchema()) -for i in range(10): - print('send msg "hello-%d"' % i) - producer.send_async('hello-%d' % i, callback=None) -producer.flush() -for i in range(10): - msg = consumer.receive() - consumer.negative_acknowledge(msg) - print('receive and nack msg "%s"' % msg.data()) -for i in range(10): - msg = consumer.receive() - consumer.acknowledge(msg) - print('receive and ack msg "%s"' % msg.data()) -try: - # No more messages expected - msg = consumer.receive(100) -except: - print("no more msg") - pass - -``` - -### Reader interface example - -You can use the Pulsar Python API to use the Pulsar [reader interface](concepts-clients.md#reader-interface). Here's an example: - -```python - -# MessageId taken from a previously fetched message -msg_id = msg.message_id() - -reader = client.create_reader('my-topic', msg_id) - -while True: - msg = reader.read_next() - print("Received message '{}' id='{}'".format(msg.data(), msg.message_id())) - # No acknowledgment - -``` - -### Multi-topic subscriptions - -In addition to subscribing a consumer to a single Pulsar topic, you can also subscribe to multiple topics simultaneously. To use multi-topic subscriptions, you can supply a regular expression (regex) or a `List` of topics. If you select topics via regex, all topics must be within the same Pulsar namespace. - -The following is an example. 
- -```python - -import re -consumer = client.subscribe(re.compile('persistent://public/default/topic-*'), 'my-subscription') -while True: - msg = consumer.receive() - try: - print("Received message '{}' id='{}'".format(msg.data(), msg.message_id())) - # Acknowledge successful processing of the message - consumer.acknowledge(msg) - except: - # Message failed to be processed - consumer.negative_acknowledge(msg) -client.close() - -``` - -## Schema - -### Declare and validate schema - -You can declare a schema by passing a class that inherits -from `pulsar.schema.Record` and defines the fields as -class variables. For example: - -```python - -from pulsar.schema import * - -class Example(Record): - a = String() - b = Integer() - c = Boolean() - -``` - -With this simple schema definition, you can create producers, consumers and readers instances that refer to that. - -```python - -producer = client.create_producer( - topic='my-topic', - schema=AvroSchema(Example) ) - -producer.send(Example(a='Hello', b=1)) - -``` - -After creating the producer, the Pulsar broker validates that the existing topic schema is indeed of "Avro" type and that the format is compatible with the schema definition of the `Example` class. - -If there is a mismatch, an exception occurs in the producer creation. - -Once a producer is created with a certain schema definition, -it will only accept objects that are instances of the declared -schema class. - -Similarly, for a consumer/reader, the consumer will return an -object, instance of the schema record class, rather than the raw -bytes: - -```python - -consumer = client.subscribe( - topic='my-topic', - subscription_name='my-subscription', - schema=AvroSchema(Example) ) - -while True: - msg = consumer.receive() - ex = msg.value() - try: - print("Received message a={} b={} c={}".format(ex.a, ex.b, ex.c)) - # Acknowledge successful processing of the message - consumer.acknowledge(msg) - except: - # Message failed to be processed - consumer.negative_acknowledge(msg) - -``` - -### Supported schema types - -You can use different builtin schema types in Pulsar. All the definitions are in the `pulsar.schema` package. - -| Schema | Notes | -| ------ | ----- | -| `BytesSchema` | Get the raw payload as a `bytes` object. No serialization/deserialization are performed. This is the default schema mode | -| `StringSchema` | Encode/decode payload as a UTF-8 string. Uses `str` objects | -| `JsonSchema` | Require record definition. Serializes the record into standard JSON payload | -| `AvroSchema` | Require record definition. Serializes in AVRO format | - -### Schema definition reference - -The schema definition is done through a class that inherits from `pulsar.schema.Record`. - -This class has a number of fields which can be of either -`pulsar.schema.Field` type or another nested `Record`. All the -fields are specified in the `pulsar.schema` package. The fields -are matching the AVRO fields types. - -| Field Type | Python Type | Notes | -| ---------- | ----------- | ----- | -| `Boolean` | `bool` | | -| `Integer` | `int` | | -| `Long` | `int` | | -| `Float` | `float` | | -| `Double` | `float` | | -| `Bytes` | `bytes` | | -| `String` | `str` | | -| `Array` | `list` | Need to specify record type for items. | -| `Map` | `dict` | Key is always `String`. Need to specify value type. | - -Additionally, any Python `Enum` type can be used as a valid field type. - -#### Fields parameters - -When adding a field, you can use these parameters in the constructor. 
- -| Argument | Default | Notes | -| ---------- | --------| ----- | -| `default` | `None` | Set a default value for the field. Eg: `a = Integer(default=5)` | -| `required` | `False` | Mark the field as "required". It is set in the schema accordingly. | - -#### Schema definition examples - -##### Simple definition - -```python - -class Example(Record): - a = String() - b = Integer() - c = Array(String()) - i = Map(String()) - -``` - -##### Using enums - -```python - -from enum import Enum - -class Color(Enum): - red = 1 - green = 2 - blue = 3 - -class Example(Record): - name = String() - color = Color - -``` - -##### Complex types - -```python - -class MySubRecord(Record): - x = Integer() - y = Long() - z = String() - -class Example(Record): - a = String() - sub = MySubRecord() - -``` - -## End-to-end encryption - -[End-to-end encryption](https://pulsar.apache.org/docs/en/next/cookbooks-encryption/#docsNav) allows applications to encrypt messages at producers and decrypt messages at consumers. - -### Configuration - -To use the end-to-end encryption feature in the Python client, you need to configure `publicKeyPath` and `privateKeyPath` for both producer and consumer. - -``` - -publicKeyPath: "./public.pem" -privateKeyPath: "./private.pem" - -``` - -### Tutorial - -This section provides step-by-step instructions on how to use the end-to-end encryption feature in the Python client. - -**Prerequisite** - -- Pulsar Python client 2.7.1 or later - -**Step** - -1. Create both public and private key pairs. - - **Input** - - ```shell - - openssl genrsa -out private.pem 2048 - openssl rsa -in private.pem -pubout -out public.pem - - ``` - -2. Create a producer to send encrypted messages. - - **Input** - - ```python - - import pulsar - - publicKeyPath = "./public.pem" - privateKeyPath = "./private.pem" - crypto_key_reader = pulsar.CryptoKeyReader(publicKeyPath, privateKeyPath) - client = pulsar.Client('pulsar://localhost:6650') - producer = client.create_producer(topic='encryption', encryption_key='encryption', crypto_key_reader=crypto_key_reader) - producer.send('encryption message'.encode('utf8')) - print('sent message') - producer.close() - client.close() - - ``` - -3. Create a consumer to receive encrypted messages. - - **Input** - - ```python - - import pulsar - - publicKeyPath = "./public.pem" - privateKeyPath = "./private.pem" - crypto_key_reader = pulsar.CryptoKeyReader(publicKeyPath, privateKeyPath) - client = pulsar.Client('pulsar://localhost:6650') - consumer = client.subscribe(topic='encryption', subscription_name='encryption-sub', crypto_key_reader=crypto_key_reader) - msg = consumer.receive() - print("Received msg '{}' id = '{}'".format(msg.data(), msg.message_id())) - consumer.close() - client.close() - - ``` - -4. Run the consumer to receive encrypted messages. - - **Input** - - ```shell - - python consumer.py - - ``` - -5. In a new terminal tab, run the producer to produce encrypted messages. - - **Input** - - ```shell - - python producer.py - - ``` - - Now you can see the producer sends messages and the consumer receives messages successfully. - - **Output** - - This is from the producer side. - - ``` - - sent message - - ``` - - This is from the consumer side. 
- - ``` - - Received msg 'b'encryption message'' id = '(0,0,-1,-1)' - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/client-libraries-websocket.md b/site2/website/versioned_docs/version-2.8.0-deprecated/client-libraries-websocket.md deleted file mode 100644 index ebdb9bc1cd18f6..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/client-libraries-websocket.md +++ /dev/null @@ -1,621 +0,0 @@ ---- -id: client-libraries-websocket -title: Pulsar WebSocket API -sidebar_label: "WebSocket" -original_id: client-libraries-websocket ---- - -Pulsar [WebSocket](https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API) API provides a simple way to interact with Pulsar using languages that do not have an official [client library](getting-started-clients.md). Through WebSocket, you can publish and consume messages and use features available on the [Client Features Matrix](https://github.com/apache/pulsar/wiki/Client-Features-Matrix) page. - - -> You can use Pulsar WebSocket API with any WebSocket client library. See examples for Python and Node.js [below](#client-examples). - -## Running the WebSocket service - -The standalone variant of Pulsar that we recommend using for [local development](getting-started-standalone.md) already has the WebSocket service enabled. - -In non-standalone mode, there are two ways to deploy the WebSocket service: - -* [embedded](#embedded-with-a-pulsar-broker) with a Pulsar broker -* as a [separate component](#as-a-separate-component) - -### Embedded with a Pulsar broker - -In this mode, the WebSocket service will run within the same HTTP service that's already running in the broker. To enable this mode, set the [`webSocketServiceEnabled`](reference-configuration.md#broker-webSocketServiceEnabled) parameter in the [`conf/broker.conf`](reference-configuration.md#broker) configuration file in your installation. - -```properties - -webSocketServiceEnabled=true - -``` - -### As a separate component - -In this mode, the WebSocket service will be run from a Pulsar [broker](reference-terminology.md#broker) as a separate service. Configuration for this mode is handled in the [`conf/websocket.conf`](reference-configuration.md#websocket) configuration file. You'll need to set *at least* the following parameters: - -* [`configurationStoreServers`](reference-configuration.md#websocket-configurationStoreServers) -* [`webServicePort`](reference-configuration.md#websocket-webServicePort) -* [`clusterName`](reference-configuration.md#websocket-clusterName) - -Here's an example: - -```properties - -configurationStoreServers=zk1:2181,zk2:2181,zk3:2181 -webServicePort=8080 -clusterName=my-cluster - -``` - -### Security settings - -To enable TLS encryption on WebSocket service: - -```properties - -tlsEnabled=true -tlsAllowInsecureConnection=false -tlsCertificateFilePath=/path/to/client-websocket.cert.pem -tlsKeyFilePath=/path/to/client-websocket.key-pk8.pem -tlsTrustCertsFilePath=/path/to/ca.cert.pem - -``` - -### Starting the broker - -When the configuration is set, you can start the service using the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) tool: - -```shell - -$ bin/pulsar-daemon start websocket - -``` - -## API Reference - -Pulsar's WebSocket API offers three endpoints for [producing](#producer-endpoint) messages, [consuming](#consumer-endpoint) messages and [reading](#reader-endpoint) messages. - -All exchanges via the WebSocket API use JSON. 
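-
-As a quick illustration of this JSON exchange, the following is a minimal sketch that connects to the producer endpoint using Java 11's built-in `java.net.http.WebSocket` client and publishes a single message. It assumes a broker running locally on port 8080 and a hypothetical `my-topic` topic; Python and Node.js versions appear in [Client examples](#client-examples) below.
-
-```java
-
-import java.net.URI;
-import java.net.http.HttpClient;
-import java.net.http.WebSocket;
-import java.nio.charset.StandardCharsets;
-import java.util.Base64;
-import java.util.concurrent.CompletionStage;
-
-String url = "ws://localhost:8080/ws/v2/producer/persistent/public/default/my-topic";
-
-// Print whatever the broker sends back, e.g. {"result":"ok","messageId":...}
-WebSocket.Listener listener = new WebSocket.Listener() {
-    @Override
-    public CompletionStage<?> onText(WebSocket ws, CharSequence data, boolean last) {
-        System.out.println("Broker response: " + data);
-        return WebSocket.Listener.super.onText(ws, data, last);
-    }
-};
-
-WebSocket ws = HttpClient.newHttpClient()
-        .newWebSocketBuilder()
-        .buildAsync(URI.create(url), listener)
-        .join();
-
-// The payload field must be Base64-encoded
-String payload = Base64.getEncoder()
-        .encodeToString("Hello World".getBytes(StandardCharsets.UTF_8));
-ws.sendText("{\"payload\": \"" + payload + "\", \"context\": \"1\"}", true).join();
-
-```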
-
-### Authentication
-
-#### Browser JavaScript WebSocket client
-
-Use the query param `token` to transport the authentication token.
-
-```http
-
-ws://broker-service-url:8080/path?token=token
-
-```
-
-### Producer endpoint
-
-The producer endpoint requires you to specify a tenant, namespace, and topic in the URL:
-
-```http
-
-ws://broker-service-url:8080/ws/v2/producer/persistent/:tenant/:namespace/:topic
-
-```
-
-##### Query param
-
-Key | Type | Required? | Explanation
-:---|:-----|:----------|:-----------
-`sendTimeoutMillis` | long | no | Send timeout (default: 30 seconds)
-`batchingEnabled` | boolean | no | Enable batching of messages (default: false)
-`batchingMaxMessages` | int | no | Maximum number of messages permitted in a batch (default: 1000)
-`maxPendingMessages` | int | no | Set the max size of the internal queue holding the messages (default: 1000)
-`batchingMaxPublishDelay` | long | no | Time period within which the messages will be batched (default: 10ms)
-`messageRoutingMode` | string | no | Message [routing mode](https://pulsar.apache.org/api/client/index.html?org/apache/pulsar/client/api/ProducerConfiguration.MessageRoutingMode.html) for the partitioned producer: `SinglePartition`, `RoundRobinPartition`
-`compressionType` | string | no | Compression [type](https://pulsar.apache.org/api/client/index.html?org/apache/pulsar/client/api/CompressionType.html): `LZ4`, `ZLIB`
-`producerName` | string | no | Specify the name for the producer. Pulsar enforces that only one producer with the same name can publish on a topic.
-`initialSequenceId` | long | no | Set the baseline for the sequence IDs of messages published by the producer.
-`hashingScheme` | string | no | [Hashing function](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.HashingScheme.html) to use when publishing on a partitioned topic: `JavaStringHash`, `Murmur3_32Hash`
-`token` | string | no | Authentication token; this is used for the browser JavaScript client
-
-
-#### Publishing a message
-
-```json
-
-{
-  "payload": "SGVsbG8gV29ybGQ=",
-  "properties": {"key1": "value1", "key2": "value2"},
-  "context": "1"
-}
-
-```
-
-Key | Type | Required? | Explanation
-:---|:-----|:----------|:-----------
-`payload` | string | yes | Base-64 encoded payload
-`properties` | key-value pairs | no | Application-defined properties
-`context` | string | no | Application-defined request identifier
-`key` | string | no | For partitioned topics, decides which partition to use
-`replicationClusters` | array | no | Restrict replication to this list of [clusters](reference-terminology.md#cluster), specified by name
-
-
-##### Example success response
-
-```json
-
-{
-  "result": "ok",
-  "messageId": "CAAQAw==",
-  "context": "1"
-}
-
-```
-
-##### Example failure response
-
-```json
-
-{
-  "result": "send-error:3",
-  "errorMsg": "Failed to de-serialize from JSON",
-  "context": "1"
-}
-
-```
-
-Key | Type | Required? | Explanation
-:---|:-----|:----------|:-----------
-`result` | string | yes | `ok` if successful or an error message if unsuccessful
-`messageId` | string | yes | Message ID assigned to the published message
-`context` | string | no | Application-defined request identifier
-
-
-### Consumer endpoint
-
-The consumer endpoint requires you to specify a tenant, namespace, and topic, as well as a subscription, in the URL:
-
-```http
-
-ws://broker-service-url:8080/ws/v2/consumer/persistent/:tenant/:namespace/:topic/:subscription
-
-```
-
-##### Query param
-
-Key | Type | Required? 
| Explanation
:---|:-----|:----------|:-----------
`ackTimeoutMillis` | long | no | Set the timeout for unacked messages (default: 0)
`subscriptionType` | string | no | [Subscription type](https://pulsar.apache.org/api/client/index.html?org/apache/pulsar/client/api/SubscriptionType.html): `Exclusive`, `Failover`, `Shared`, `Key_Shared`
`receiverQueueSize` | int | no | Size of the consumer receive queue (default: 1000)
`consumerName` | string | no | Consumer name
`priorityLevel` | int | no | Define a [priority](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setPriorityLevel-int-) for the consumer
`maxRedeliverCount` | int | no | Define a [maxRedeliverCount](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#deadLetterPolicy-org.apache.pulsar.client.api.DeadLetterPolicy-) for the consumer (default: 0). Activates the [Dead Letter Topic](https://github.com/apache/pulsar/wiki/PIP-22%3A-Pulsar-Dead-Letter-Topic) feature.
`deadLetterTopic` | string | no | Define a [deadLetterTopic](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#deadLetterPolicy-org.apache.pulsar.client.api.DeadLetterPolicy-) for the consumer (default: {topic}-{subscription}-DLQ). Activates the [Dead Letter Topic](https://github.com/apache/pulsar/wiki/PIP-22%3A-Pulsar-Dead-Letter-Topic) feature.
`pullMode` | boolean | no | Enable pull mode (default: false). See "Flow Control" below.
`negativeAckRedeliveryDelay` | int | no | The delay (in milliseconds) before a negatively acknowledged message is redelivered.
`token` | string | no | Authentication token; this is used for the browser JavaScript client
-
-NB: these parameters (except `pullMode`) apply to the internal consumer of the WebSocket service.
-So messages will be subject to the redelivery settings as soon as they get into the receive queue,
-even if the client doesn't consume on the WebSocket.
-
-##### Receiving messages
-
-The server pushes messages on the WebSocket session:
-
-```json
-
-{
-  "messageId": "CAAQAw==",
-  "payload": "SGVsbG8gV29ybGQ=",
-  "properties": {"key1": "value1", "key2": "value2"},
-  "publishTime": "2016-08-30 16:45:57.785",
-  "redeliveryCount": 4
-}
-
-```
-
-Key | Type | Required? | Explanation
-:---|:-----|:----------|:-----------
-`messageId` | string | yes | Message ID
-`payload` | string | yes | Base-64 encoded payload
-`publishTime` | string | yes | Publish timestamp
-`redeliveryCount` | number | yes | Number of times this message was already delivered
-`properties` | key-value pairs | no | Application-defined properties
-`key` | string | no | Original routing key set by producer
-
-#### Acknowledging the message
-
-The consumer needs to acknowledge the successful processing of a message to
-have the Pulsar broker delete it.
-
-```json
-
-{
-  "messageId": "CAAQAw=="
-}
-
-```
-
-Key | Type | Required? | Explanation
-:---|:-----|:----------|:-----------
-`messageId`| string | yes | Message ID of the processed message
-
-#### Negatively acknowledging messages
-
-```json
-
-{
-  "type": "negativeAcknowledge",
-  "messageId": "CAAQAw=="
-}
-
-```
-
-Key | Type | Required? 
| Explanation -:---|:-----|:----------|:----------- -`messageId`| string | yes | Message ID of the processed message - -#### Flow control - -##### Push Mode - -By default (`pullMode=false`), the consumer endpoint will use the `receiverQueueSize` parameter both to size its -internal receive queue and to limit the number of unacknowledged messages that are passed to the WebSocket client. -In this mode, if you don't send acknowledgements, the Pulsar WebSocket service will stop sending messages after reaching -`receiverQueueSize` unacked messages sent to the WebSocket client. - -##### Pull Mode - -If you set `pullMode` to `true`, the WebSocket client will need to send `permit` commands to permit the -Pulsar WebSocket service to send more messages. - -```json - -{ - "type": "permit", - "permitMessages": 100 -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`type`| string | yes | Type of command. Must be `permit` -`permitMessages`| int | yes | Number of messages to permit - -NB: in this mode it's possible to acknowledge messages in a different connection. - -#### Check if reach end of topic - -Consumer can check if it has reached end of topic by sending `isEndOfTopic` request. - -**Request** - -```json - -{ - "type": "isEndOfTopic" -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`type`| string | yes | Type of command. Must be `isEndOfTopic` - -**Response** - -```json - -{ - "endOfTopic": "true/false" - } - -``` - -### Reader endpoint - -The reader endpoint requires you to specify a tenant, namespace, and topic in the URL: - -```http - -ws://broker-service-url:8080/ws/v2/reader/persistent/:tenant/:namespace/:topic - -``` - -##### Query param - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`readerName` | string | no | Reader name -`receiverQueueSize` | int | no | Size of the consumer receive queue (default: 1000) -`messageId` | int or enum | no | Message ID to start from, `earliest` or `latest` (default: `latest`) -`token` | string | no | Authentication token, this is used for the browser javascript client - -##### Receiving messages - -Server will push messages on the WebSocket session: - -```json - -{ - "messageId": "CAAQAw==", - "payload": "SGVsbG8gV29ybGQ=", - "properties": {"key1": "value1", "key2": "value2"}, - "publishTime": "2016-08-30 16:45:57.785", - "redeliveryCount": 4 -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`messageId` | string | yes | Message ID -`payload` | string | yes | Base-64 encoded payload -`publishTime` | string | yes | Publish timestamp -`redeliveryCount` | number | yes | Number of times this message was already delivered -`properties` | key-value pairs | no | Application-defined properties -`key` | string | no | Original routing key set by producer - -#### Acknowledging the message - -**In WebSocket**, Reader needs to acknowledge the successful processing of the message to -have the Pulsar WebSocket service update the number of pending messages. -If you don't send acknowledgements, Pulsar WebSocket service will stop sending messages after reaching the pendingMessages limit. - -```json - -{ - "messageId": "CAAQAw==" -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`messageId`| string | yes | Message ID of the processed message - -#### Check if reach end of topic - -Consumer can check if it has reached end of topic by sending `isEndOfTopic` request. 
- -**Request** - -```json - -{ - "type": "isEndOfTopic" -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`type`| string | yes | Type of command. Must be `isEndOfTopic` - -**Response** - -```json - -{ - "endOfTopic": "true/false" - } - -``` - -### Error codes - -In case of error the server will close the WebSocket session using the -following error codes: - -Error Code | Error Message -:----------|:------------- -1 | Failed to create producer -2 | Failed to subscribe -3 | Failed to deserialize from JSON -4 | Failed to serialize to JSON -5 | Failed to authenticate client -6 | Client is not authorized -7 | Invalid payload encoding -8 | Unknown error - -> The application is responsible for re-establishing a new WebSocket session after a backoff period. - -## Client examples - -Below you'll find code examples for the Pulsar WebSocket API in [Python](#python) and [Node.js](#nodejs). - -### Python - -This example uses the [`websocket-client`](https://pypi.python.org/pypi/websocket-client) package. You can install it using [pip](https://pypi.python.org/pypi/pip): - -```shell - -$ pip install websocket-client - -``` - -You can also download it from [PyPI](https://pypi.python.org/pypi/websocket-client). - -#### Python producer - -Here's an example Python producer that sends a simple message to a Pulsar [topic](reference-terminology.md#topic): - -```python - -import websocket, base64, json - -# If set enableTLS to true, your have to set tlsEnabled to true in conf/websocket.conf. -enable_TLS = False -scheme = 'ws' -if enable_TLS: - scheme = 'wss' - -TOPIC = scheme + '://localhost:8080/ws/v2/producer/persistent/public/default/my-topic' - -ws = websocket.create_connection(TOPIC) - -# Send one message as JSON -ws.send(json.dumps({ - 'payload' : base64.b64encode('Hello World'), - 'properties': { - 'key1' : 'value1', - 'key2' : 'value2' - }, - 'context' : 5 -})) - -response = json.loads(ws.recv()) -if response['result'] == 'ok': - print 'Message published successfully' -else: - print 'Failed to publish message:', response -ws.close() - -``` - -#### Python consumer - -Here's an example Python consumer that listens on a Pulsar topic and prints the message ID whenever a message arrives: - -```python - -import websocket, base64, json - -# If set enableTLS to true, your have to set tlsEnabled to true in conf/websocket.conf. -enable_TLS = False -scheme = 'ws' -if enable_TLS: - scheme = 'wss' - -TOPIC = scheme + '://localhost:8080/ws/v2/consumer/persistent/public/default/my-topic/my-sub' - -ws = websocket.create_connection(TOPIC) - -while True: - msg = json.loads(ws.recv()) - if not msg: break - - print "Received: {} - payload: {}".format(msg, base64.b64decode(msg['payload'])) - - # Acknowledge successful processing - ws.send(json.dumps({'messageId' : msg['messageId']})) - -ws.close() - -``` - -#### Python reader - -Here's an example Python reader that listens on a Pulsar topic and prints the message ID whenever a message arrives: - -```python - -import websocket, base64, json - -# If set enableTLS to true, your have to set tlsEnabled to true in conf/websocket.conf. 
-enable_TLS = False -scheme = 'ws' -if enable_TLS: - scheme = 'wss' - -TOPIC = scheme + '://localhost:8080/ws/v2/reader/persistent/public/default/my-topic' -ws = websocket.create_connection(TOPIC) - -while True: - msg = json.loads(ws.recv()) - if not msg: break - - print "Received: {} - payload: {}".format(msg, base64.b64decode(msg['payload'])) - - # Acknowledge successful processing - ws.send(json.dumps({'messageId' : msg['messageId']})) - -ws.close() - -``` - -### Node.js - -This example uses the [`ws`](https://websockets.github.io/ws/) package. You can install it using [npm](https://www.npmjs.com/): - -```shell - -$ npm install ws - -``` - -#### Node.js producer - -Here's an example Node.js producer that sends a simple message to a Pulsar topic: - -```javascript - -const WebSocket = require('ws'); - -// If set enableTLS to true, your have to set tlsEnabled to true in conf/websocket.conf. -const enableTLS = false; -const topic = `${enableTLS ? 'wss' : 'ws'}://localhost:8080/ws/v2/producer/persistent/public/default/my-topic`; -const ws = new WebSocket(topic); - -var message = { - "payload" : new Buffer("Hello World").toString('base64'), - "properties": { - "key1" : "value1", - "key2" : "value2" - }, - "context" : "1" -}; - -ws.on('open', function() { - // Send one message - ws.send(JSON.stringify(message)); -}); - -ws.on('message', function(message) { - console.log('received ack: %s', message); -}); - -``` - -#### Node.js consumer - -Here's an example Node.js consumer that listens on the same topic used by the producer above: - -```javascript - -const WebSocket = require('ws'); - -// If set enableTLS to true, your have to set tlsEnabled to true in conf/websocket.conf. -const enableTLS = false; -const topic = `${enableTLS ? 'wss' : 'ws'}://localhost:8080/ws/v2/consumer/persistent/public/default/my-topic/my-sub`; -const ws = new WebSocket(topic); - -ws.on('message', function(message) { - var receiveMsg = JSON.parse(message); - console.log('Received: %s - payload: %s', message, new Buffer(receiveMsg.payload, 'base64').toString()); - var ackMsg = {"messageId" : receiveMsg.messageId}; - ws.send(JSON.stringify(ackMsg)); -}); - -``` - -#### NodeJS reader - -```javascript - -const WebSocket = require('ws'); - -// If set enableTLS to true, your have to set tlsEnabled to true in conf/websocket.conf. -const enableTLS = false; -const topic = `${enableTLS ? 
'wss' : 'ws'}://localhost:8080/ws/v2/reader/persistent/public/default/my-topic`;
-const ws = new WebSocket(topic);
-
-ws.on('message', function(message) {
-    var receiveMsg = JSON.parse(message);
-    console.log('Received: %s - payload: %s', message, new Buffer(receiveMsg.payload, 'base64').toString());
-    var ackMsg = {"messageId" : receiveMsg.messageId};
-    ws.send(JSON.stringify(ackMsg));
-});
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/client-libraries.md b/site2/website/versioned_docs/version-2.8.0-deprecated/client-libraries.md
deleted file mode 100644
index 00d128c514040f..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/client-libraries.md
+++ /dev/null
@@ -1,35 +0,0 @@
----
-id: client-libraries
-title: Pulsar client libraries
-sidebar_label: "Overview"
-original_id: client-libraries
----
-
-Pulsar supports the following client libraries:
-
-- [Java client](client-libraries-java.md)
-- [Go client](client-libraries-go.md)
-- [Python client](client-libraries-python.md)
-- [C++ client](client-libraries-cpp.md)
-- [Node.js client](client-libraries-node.md)
-- [WebSocket client](client-libraries-websocket.md)
-- [C# client](client-libraries-dotnet.md)
-
-## Feature matrix
-
-The Pulsar client feature matrix for different languages is listed on the [Client Features Matrix](https://github.com/apache/pulsar/wiki/Client-Features-Matrix) page.
-
-## Third-party clients
-
-Besides the officially released clients, multiple third-party Pulsar client projects are available in different languages.
-
-> If you have developed a new Pulsar client, feel free to submit a pull request and add your client to the list below.
-
-| Language | Project | Maintainer | License | Description |
-|----------|---------|------------|---------|-------------|
-| Go | [pulsar-client-go](https://github.com/Comcast/pulsar-client-go) | [Comcast](https://github.com/Comcast) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | A native golang client |
-| Go | [go-pulsar](https://github.com/t2y/go-pulsar) | [t2y](https://github.com/t2y) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | |
-| Haskell | [supernova](https://github.com/cr-org/supernova) | [Chatroulette](https://github.com/cr-org) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Native Pulsar client for Haskell |
-| Scala | [neutron](https://github.com/cr-org/neutron) | [Chatroulette](https://github.com/cr-org) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Purely functional Apache Pulsar client for Scala built on top of Fs2 |
-| Scala | [pulsar4s](https://github.com/sksamuel/pulsar4s) | [sksamuel](https://github.com/sksamuel) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Idiomatic, typesafe, and reactive Scala client for Apache Pulsar |
-| Rust | [pulsar-rs](https://github.com/wyyerd/pulsar-rs) | [Wyyerd Group](https://github.com/wyyerd) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Future-based Rust bindings for Apache Pulsar |
-| .NET | [pulsar-client-dotnet](https://github.com/fsharplang-ru/pulsar-client-dotnet) | [Lanayx](https://github.com/Lanayx) | 
[![GitHub](https://img.shields.io/badge/license-MIT-green.svg)](https://opensource.org/licenses/MIT) | Native .NET client for C#/F#/VB |
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/concepts-architecture-overview.md b/site2/website/versioned_docs/version-2.8.0-deprecated/concepts-architecture-overview.md
deleted file mode 100644
index f3e75c3e307e0c..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/concepts-architecture-overview.md
+++ /dev/null
@@ -1,172 +0,0 @@
----
-id: concepts-architecture-overview
-title: Architecture Overview
-sidebar_label: "Architecture"
-original_id: concepts-architecture-overview
----
-
-At the highest level, a Pulsar instance is composed of one or more Pulsar clusters. Clusters within an instance can [replicate](concepts-replication.md) data amongst themselves.
-
-In a Pulsar cluster:
-
-* One or more brokers handle and load balance incoming messages from producers, dispatch messages to consumers, communicate with the Pulsar configuration store to handle various coordination tasks, store messages in BookKeeper instances (aka bookies), rely on a cluster-specific ZooKeeper cluster for certain tasks, and more.
-* A BookKeeper cluster consisting of one or more bookies handles [persistent storage](#persistent-storage) of messages.
-* A ZooKeeper cluster specific to that Pulsar cluster handles its coordination tasks.
-
-The diagram below provides an illustration of a Pulsar cluster:
-
-![Pulsar architecture diagram](/assets/pulsar-system-architecture.png)
-
-At the broader instance level, an instance-wide ZooKeeper cluster called the configuration store handles coordination tasks involving multiple clusters, for example [geo-replication](concepts-replication.md).
-
-## Brokers
-
-The Pulsar message broker is a stateless component that's primarily responsible for running two other components:
-
-* An HTTP server that exposes a {@inject: rest:REST:/} API for both administrative tasks and [topic lookup](concepts-clients.md#client-setup-phase) for producers and consumers. Producers connect to the brokers to publish messages, and consumers connect to the brokers to consume messages.
-* A dispatcher, which is an asynchronous TCP server over a custom [binary protocol](developing-binary-protocol.md) used for all data transfers.
-
-Messages are typically dispatched out of a [managed ledger](#managed-ledgers) cache for the sake of performance, *unless* the backlog exceeds the cache size. If the backlog grows too large for the cache, the broker starts reading entries from BookKeeper.
-
-Finally, to support geo-replication on global topics, the broker manages replicators that tail the entries published in the local region and republish them to the remote region using the Pulsar [Java client library](client-libraries-java.md).
-
-> For a guide to managing Pulsar brokers, see the [brokers](admin-api-brokers.md) guide.
-
-## Clusters
-
-A Pulsar instance consists of one or more Pulsar *clusters*. Clusters, in turn, consist of:
-
-* One or more Pulsar [brokers](#brokers)
-* A ZooKeeper quorum used for cluster-level configuration and coordination
-* An ensemble of bookies used for [persistent storage](#persistent-storage) of messages
-
-Clusters can replicate amongst themselves using [geo-replication](concepts-replication.md).
-
-> For a guide to managing Pulsar clusters, see the [clusters](admin-api-clusters.md) guide.
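-
-To make the client's view of a cluster concrete, here is a minimal Java sketch (the service URL and topic are illustrative): the client points at any broker, the topic lookup described above determines which broker owns the topic, and the client is transparently directed there before publishing.
-
-```java
-
-import org.apache.pulsar.client.api.Producer;
-import org.apache.pulsar.client.api.PulsarClient;
-
-// Any broker in the cluster (or a load balancer in front of the brokers)
-// can serve as the service URL; topic lookup redirects the client to the owner.
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar://localhost:6650")
-        .build();
-
-Producer<byte[]> producer = client.newProducer()
-        .topic("persistent://public/default/my-topic")
-        .create();
-
-producer.send("hello".getBytes());
-
-producer.close();
-client.close();
-
-```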
- -## Metadata store - -The Pulsar metadata store maintains all the metadata of a Pulsar cluster, such as topic metadata, schema, broker load data, and so on. Pulsar uses [Apache ZooKeeper](https://zookeeper.apache.org/) for metadata storage, cluster configuration, and coordination. The Pulsar metadata store can be deployed on a separate ZooKeeper cluster or deployed on an existing ZooKeeper cluster. You can use one ZooKeeper cluster for both Pulsar metadata store and BookKeeper metadata store. If you want to deploy Pulsar brokers connected to an existing BookKeeper cluster, you need to deploy separate ZooKeeper clusters for Pulsar metadata store and BookKeeper metadata store respectively. - -In a Pulsar instance: - -* A configuration store quorum stores configuration for tenants, namespaces, and other entities that need to be globally consistent. -* Each cluster has its own local ZooKeeper ensemble that stores cluster-specific configuration and coordination such as which brokers are responsible for which topics as well as ownership metadata, broker load reports, BookKeeper ledger metadata, and more. - -## Configuration store - -The configuration store maintains all the configurations of a Pulsar instance, such as clusters, tenants, namespaces, partitioned topic related configurations, and so on. A Pulsar instance can have a single local cluster, multiple local clusters, or multiple cross-region clusters. Consequently, the configuration store can share the configurations across multiple clusters under a Pulsar instance. The configuration store can be deployed on a separate ZooKeeper cluster or deployed on an existing ZooKeeper cluster. - -## Persistent storage - -Pulsar provides guaranteed message delivery for applications. If a message successfully reaches a Pulsar broker, it will be delivered to its intended target. - -This guarantee requires that non-acknowledged messages are stored in a durable manner until they can be delivered to and acknowledged by consumers. This mode of messaging is commonly called *persistent messaging*. In Pulsar, N copies of all messages are stored and synced on disk, for example 4 copies across two servers with mirrored [RAID](https://en.wikipedia.org/wiki/RAID) volumes on each server. - -### Apache BookKeeper - -Pulsar uses a system called [Apache BookKeeper](http://bookkeeper.apache.org/) for persistent message storage. BookKeeper is a distributed [write-ahead log](https://en.wikipedia.org/wiki/Write-ahead_logging) (WAL) system that provides a number of crucial advantages for Pulsar: - -* It enables Pulsar to utilize many independent logs, called [ledgers](#ledgers). Multiple ledgers can be created for topics over time. -* It offers very efficient storage for sequential data that handles entry replication. -* It guarantees read consistency of ledgers in the presence of various system failures. -* It offers even distribution of I/O across bookies. -* It's horizontally scalable in both capacity and throughput. Capacity can be immediately increased by adding more bookies to a cluster. -* Bookies are designed to handle thousands of ledgers with concurrent reads and writes. By using multiple disk devices---one for journal and another for general storage--bookies are able to isolate the effects of read operations from the latency of ongoing write operations. - -In addition to message data, *cursors* are also persistently stored in BookKeeper. 
Cursors are [subscription](reference-terminology.md#subscription) positions for [consumers](reference-terminology.md#consumer). BookKeeper enables Pulsar to store consumer position in a scalable fashion. - -At the moment, Pulsar supports persistent message storage. This accounts for the `persistent` in all topic names. Here's an example: - -```http - -persistent://my-tenant/my-namespace/my-topic - -``` - -> Pulsar also supports ephemeral ([non-persistent](concepts-messaging.md#non-persistent-topics)) message storage. - - -You can see an illustration of how brokers and bookies interact in the diagram below: - -![Brokers and bookies](/assets/broker-bookie.png) - - -### Ledgers - -A ledger is an append-only data structure with a single writer that is assigned to multiple BookKeeper storage nodes, or bookies. Ledger entries are replicated to multiple bookies. Ledgers themselves have very simple semantics: - -* A Pulsar broker can create a ledger, append entries to the ledger, and close the ledger. -* After the ledger has been closed---either explicitly or because the writer process crashed---it can then be opened only in read-only mode. -* Finally, when entries in the ledger are no longer needed, the whole ledger can be deleted from the system (across all bookies). - -#### Ledger read consistency - -The main strength of Bookkeeper is that it guarantees read consistency in ledgers in the presence of failures. Since the ledger can only be written to by a single process, that process is free to append entries very efficiently, without need to obtain consensus. After a failure, the ledger will go through a recovery process that will finalize the state of the ledger and establish which entry was last committed to the log. After that point, all readers of the ledger are guaranteed to see the exact same content. - -#### Managed ledgers - -Given that Bookkeeper ledgers provide a single log abstraction, a library was developed on top of the ledger called the *managed ledger* that represents the storage layer for a single topic. A managed ledger represents the abstraction of a stream of messages with a single writer that keeps appending at the end of the stream and multiple cursors that are consuming the stream, each with its own associated position. - -Internally, a single managed ledger uses multiple BookKeeper ledgers to store the data. There are two reasons to have multiple ledgers: - -1. After a failure, a ledger is no longer writable and a new one needs to be created. -2. A ledger can be deleted when all cursors have consumed the messages it contains. This allows for periodic rollover of ledgers. - -### Journal storage - -In BookKeeper, *journal* files contain BookKeeper transaction logs. Before making an update to a [ledger](#ledgers), a bookie needs to ensure that a transaction describing the update is written to persistent (non-volatile) storage. A new journal file is created once the bookie starts or the older journal file reaches the journal file size threshold (configured using the [`journalMaxSizeMB`](reference-configuration.md#bookkeeper-journalMaxSizeMB) parameter). - -## Pulsar proxy - -One way for Pulsar clients to interact with a Pulsar [cluster](#clusters) is by connecting to Pulsar message [brokers](#brokers) directly. In some cases, however, this kind of direct connection is either infeasible or undesirable because the client doesn't have direct access to broker addresses. 
If you're running Pulsar in a cloud environment or on [Kubernetes](https://kubernetes.io) or an analogous platform, for example, then direct client connections to brokers are likely not possible. - -The **Pulsar proxy** provides a solution to this problem by acting as a single gateway for all of the brokers in a cluster. If you run the Pulsar proxy (which, again, is optional), all client connections with the Pulsar cluster will flow through the proxy rather than communicating with brokers. - -> For the sake of performance and fault tolerance, you can run as many instances of the Pulsar proxy as you'd like. - -Architecturally, the Pulsar proxy gets all the information it requires from ZooKeeper. When starting the proxy on a machine, you only need to provide ZooKeeper connection strings for the cluster-specific and instance-wide configuration store clusters. Here's an example: - -```bash - -$ bin/pulsar proxy \ - --zookeeper-servers zk-0,zk-1,zk-2 \ - --configuration-store-servers zk-0,zk-1,zk-2 - -``` - -> #### Pulsar proxy docs -> For documentation on using the Pulsar proxy, see the [Pulsar proxy admin documentation](administration-proxy.md). - - -Some important things to know about the Pulsar proxy: - -* Connecting clients don't need to provide *any* specific configuration to use the Pulsar proxy. You won't need to update the client configuration for existing applications beyond updating the IP used for the service URL (for example if you're running a load balancer over the Pulsar proxy). -* [TLS encryption](security-tls-transport.md) and [authentication](security-tls-authentication.md) is supported by the Pulsar proxy - -## Service discovery - -[Clients](getting-started-clients.md) connecting to Pulsar brokers need to be able to communicate with an entire Pulsar instance using a single URL. Pulsar provides a built-in service discovery mechanism that you can set up using the instructions in the [Deploying a Pulsar instance](deploy-bare-metal.md#service-discovery-setup) guide. - -You can use your own service discovery system if you'd like. If you use your own system, there is just one requirement: when a client performs an HTTP request to an endpoint, such as `http://pulsar.us-west.example.com:8080`, the client needs to be redirected to *some* active broker in the desired cluster, whether via DNS, an HTTP or IP redirect, or some other means. - -The diagram below illustrates Pulsar service discovery: - -![alt-text](/assets/pulsar-service-discovery.png) - -In this diagram, the Pulsar cluster is addressable via a single DNS name: `pulsar-cluster.acme.com`. A [Python client](client-libraries-python.md), for example, could access this Pulsar cluster like this: - -```python - -from pulsar import Client - -client = Client('pulsar://pulsar-cluster.acme.com:6650') - -``` - -:::note - -In Pulsar, each topic is handled by only one broker. Initial requests from a client to read, update or delete a topic are sent to a broker that may not be the topic owner. If the broker cannot handle the request for this topic, it redirects the request to the appropriate broker. 
- -::: - diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/concepts-authentication.md b/site2/website/versioned_docs/version-2.8.0-deprecated/concepts-authentication.md deleted file mode 100644 index f6307890c904a7..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/concepts-authentication.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -id: concepts-authentication -title: Authentication and Authorization -sidebar_label: "Authentication and Authorization" -original_id: concepts-authentication ---- - -Pulsar supports a pluggable [authentication](security-overview.md) mechanism which can be configured at the proxy and/or the broker. Pulsar also supports a pluggable [authorization](security-authorization.md) mechanism. These mechanisms work together to identify the client and its access rights on topics, namespaces and tenants. - diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/concepts-clients.md b/site2/website/versioned_docs/version-2.8.0-deprecated/concepts-clients.md deleted file mode 100644 index 4040624f7d6366..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/concepts-clients.md +++ /dev/null @@ -1,92 +0,0 @@ ---- -id: concepts-clients -title: Pulsar Clients -sidebar_label: "Clients" -original_id: concepts-clients ---- - -Pulsar exposes a client API with language bindings for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python.md), [C++](client-libraries-cpp.md) and [C#](client-libraries-dotnet.md). The client API optimizes and encapsulates Pulsar's client-broker communication protocol and exposes a simple and intuitive API for use by applications. - -Under the hood, the current official Pulsar client libraries support transparent reconnection and/or connection failover to brokers, queuing of messages until acknowledged by the broker, and heuristics such as connection retries with backoff. - -> **Custom client libraries** -> If you'd like to create your own client library, we recommend consulting the documentation on Pulsar's custom [binary protocol](developing-binary-protocol.md). - - -## Client setup phase - -Before an application creates a producer/consumer, the Pulsar client library needs to initiate a setup phase including two steps: - -1. The client attempts to determine the owner of the topic by sending an HTTP lookup request to the broker. The request could reach one of the active brokers which, by looking at the (cached) zookeeper metadata knows who is serving the topic or, in case nobody is serving it, tries to assign it to the least loaded broker. -1. Once the client library has the broker address, it creates a TCP connection (or reuse an existing connection from the pool) and authenticates it. Within this connection, client and broker exchange binary commands from a custom protocol. At this point the client sends a command to create producer/consumer to the broker, which will comply after having validated the authorization policy. - -Whenever the TCP connection breaks, the client immediately re-initiates this setup phase and keeps trying with exponential backoff to re-establish the producer or consumer until the operation succeeds. - -## Reader interface - -In Pulsar, the "standard" [consumer interface](concepts-messaging.md#consumers) involves using consumers to listen on [topics](reference-terminology.md#topic), process incoming messages, and finally acknowledge those messages when they are processed. 
Whenever a new subscription is created, it is initially positioned at the end of the topic (by default), and consumers associated with that subscription begin reading with the first message created afterwards. Whenever a consumer connects to a topic using a pre-existing subscription, it begins reading from the earliest message un-acked within that subscription. In summary, with the consumer interface, subscription cursors are automatically managed by Pulsar in response to [message acknowledgements](concepts-messaging.md#acknowledgement).
-
-The **reader interface** for Pulsar enables applications to manually manage cursors. When you use a reader to connect to a topic---rather than a consumer---you need to specify *which* message the reader begins reading from when it connects to a topic. When connecting to a topic, the reader interface enables you to begin with:
-
-* The **earliest** available message in the topic
-* The **latest** available message in the topic
-* Some other message between the earliest and the latest. If you select this option, you'll need to explicitly provide a message ID. Your application will be responsible for "knowing" this message ID in advance, perhaps fetching it from a persistent data store or cache.
-
-The reader interface is helpful for use cases like using Pulsar to provide effectively-once processing semantics for a stream processing system. For this use case, it's essential that the stream processing system be able to "rewind" topics to a specific message and begin reading there. The reader interface provides Pulsar clients with the low-level abstraction necessary to "manually position" themselves within a topic.
-
-Internally, the reader interface is implemented as a consumer using an exclusive, non-durable subscription to the topic with a randomly-allocated name.
-
-[ **IMPORTANT** ]
-
-Unlike subscriptions and consumers, readers are non-durable in nature and do not prevent data in a topic from being deleted, so it is ***strongly*** advised that [data retention](cookbooks-retention-expiry.md) be configured. If data retention for a topic is not configured for an adequate amount of time, messages that the reader has not yet read might be deleted, which causes the reader to essentially skip those messages. Configuring data retention for a topic guarantees the reader a window of time in which to read each message; see the admin sketch below.
-
-Please also note that a reader can have a "backlog", but the metric is only used for users to know how far behind the reader is. The metric is not considered for any backlog quota calculations.
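-
-Because readers depend on retained data, the following is a minimal Java admin sketch for configuring namespace-level retention; the admin URL, namespace, and limits are illustrative, not prescriptive.
-
-```java
-
-import org.apache.pulsar.client.admin.PulsarAdmin;
-import org.apache.pulsar.common.policies.data.RetentionPolicies;
-
-PulsarAdmin admin = PulsarAdmin.builder()
-        .serviceHttpUrl("http://localhost:8080")
-        .build();
-
-// Retain data for 3 days or up to 10 GB per topic,
-// whichever limit is reached first.
-admin.namespaces().setRetention("public/default",
-        new RetentionPolicies(3 * 24 * 60, 10 * 1024));
-
-admin.close();
-
-```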
- -![The Pulsar consumer and reader interfaces](/assets/pulsar-reader-consumer-interfaces.png) - -Here's a Java example that begins reading from the earliest available message on a topic: - -```java - -import org.apache.pulsar.client.api.Message; -import org.apache.pulsar.client.api.MessageId; -import org.apache.pulsar.client.api.Reader; - -// Create a reader on a topic and for a specific message (and onward) -Reader reader = pulsarClient.newReader() - .topic("reader-api-test") - .startMessageId(MessageId.earliest) - .create(); - -while (true) { - Message message = reader.readNext(); - - // Process the message -} - -``` - -To create a reader that reads from the latest available message: - -```java - -Reader reader = pulsarClient.newReader() - .topic(topic) - .startMessageId(MessageId.latest) - .create(); - -``` - -To create a reader that reads from some message between the earliest and the latest: - -```java - -byte[] msgIdBytes = // Some byte array -MessageId id = MessageId.fromByteArray(msgIdBytes); -Reader reader = pulsarClient.newReader() - .topic(topic) - .startMessageId(id) - .create(); - -``` - diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/concepts-messaging.md b/site2/website/versioned_docs/version-2.8.0-deprecated/concepts-messaging.md deleted file mode 100644 index c2a545d7f8d51f..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/concepts-messaging.md +++ /dev/null @@ -1,700 +0,0 @@ ---- -id: concepts-messaging -title: Messaging -sidebar_label: "Messaging" -original_id: concepts-messaging ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -Pulsar is built on the [publish-subscribe](https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern) pattern (often abbreviated to pub-sub). In this pattern, [producers](#producers) publish messages to [topics](#topics). [Consumers](#consumers) [subscribe](#subscription-types) to those topics, process incoming messages, and send an acknowledgement when processing is complete. - -When a subscription is created, Pulsar [retains](concepts-architecture-overview.md#persistent-storage) all messages, even if the consumer is disconnected. Retained messages are discarded only when a consumer acknowledges that those messages are processed successfully. - -## Messages - -Messages are the basic "unit" of Pulsar. The following table lists the components of messages. - -Component | Description -:---------|:------- -Value / data payload | The data carried by the message. All Pulsar messages contain raw bytes, although message data can also conform to data [schemas](schema-get-started.md). -Key | Messages are optionally tagged with keys, which is useful for things like [topic compaction](concepts-topic-compaction.md). -Properties | An optional key/value map of user-defined properties. -Producer name | The name of the producer who produces the message. If you do not specify a producer name, the default name is used. -Sequence ID | Each Pulsar message belongs to an ordered sequence on its topic. The sequence ID of the message is its order in that sequence. -Publish time | The timestamp of when the message is published. The timestamp is automatically applied by the producer. -Event time | An optional timestamp attached to a message by applications. For example, applications attach a timestamp on when the message is processed. If nothing is set to event time, the value is `0`. -TypedMessageBuilder | It is used to construct a message. 
You can set message properties such as the message key and message value with `TypedMessageBuilder`.<br />
    When you set `TypedMessageBuilder`, set the key as a string. If you set the key as another type, for example, an AVRO object, the key is sent as bytes, and it is difficult to get the AVRO object back on the consumer.
-
-The default size of a message is 5 MB. You can configure the max size of a message with the following configurations.
-
-- In the `broker.conf` file.
-
-  ```bash
-  
-  # The max size of a message (in bytes).
-  maxMessageSize=5242880
-  
-  ```
-
-- In the `bookkeeper.conf` file.
-
-  ```bash
-  
-  # The max size of the netty frame (in bytes). Any messages received larger than this value are rejected. The default value is 5 MB.
-  nettyMaxFrameSizeBytes=5253120
-  
-  ```
-
-> For more information on Pulsar message contents, see Pulsar [binary protocol](developing-binary-protocol.md).
-
-## Producers
-
-A producer is a process that attaches to a topic and publishes messages to a Pulsar [broker](reference-terminology.md#broker). The Pulsar broker processes the messages.
-
-### Send modes
-
-Producers send messages to brokers synchronously (sync) or asynchronously (async).
-
-| Mode | Description |
-|:-----------|-----------|
-| Sync send | The producer waits for an acknowledgement from the broker after sending every message. If the acknowledgement is not received, the producer treats the sending operation as a failure. |
-| Async send | The producer puts a message in a blocking queue and returns immediately. The client library sends the message to the broker in the background. If the queue is full (you can [configure](reference-configuration.md#broker) the maximum size), the producer is blocked or fails immediately when calling the API, depending on the arguments passed to the producer. |
-
-### Access mode
-
-Producers can have different types of access modes on topics.
-
-|Access mode | Description
-|---|---
-`Shared`|Multiple producers can publish on a topic.<br />

    This is the **default** setting. -`Exclusive`|Only one producer can publish on a topic.

    If there is already a producer connected, other producers trying to publish on this topic get errors immediately.

    The "old" producer is evicted and a "new" producer is selected to be the next exclusive producer if the "old" producer experiences a network partition with the broker. -`WaitForExclusive`|If there is already a producer connected, the producer creation is pending (rather than timing out) until the producer gets the `Exclusive` access.

    The producer that succeeds in becoming the exclusive one is treated as the leader. Consequently, if you want to implement a leader election scheme for your application, you can use this access mode.
-
-:::note
-
-Once an application successfully creates a producer with the `Exclusive` or `WaitForExclusive` access mode, the instance of the application is guaranteed to be the **only writer** on the topic. Other producers trying to produce on this topic either get errors immediately or have to wait until they get the `Exclusive` access.
-For more information, see [PIP 68: Exclusive Producer](https://github.com/apache/pulsar/wiki/PIP-68:-Exclusive-Producer).
-
-:::
-
-You can set the producer access mode through the Java client API. For more information, see `ProducerAccessMode` in [ProducerBuilder.java](https://github.com/apache/pulsar/blob/fc5768ca3bbf92815d142fe30e6bfad70a1b4fc6/pulsar-client-api/src/main/java/org/apache/pulsar/client/api/ProducerBuilder.java).
-
-
-### Compression
-
-You can compress messages published by producers during transport. Pulsar currently supports the following types of compression:
-
-* [LZ4](https://github.com/lz4/lz4)
-* [ZLIB](https://zlib.net/)
-* [ZSTD](https://facebook.github.io/zstd/)
-* [SNAPPY](https://google.github.io/snappy/)
-
-### Batching
-
-When batching is enabled, the producer accumulates and sends a batch of messages in a single request. The batch size is defined by the maximum number of messages and the maximum publish latency. Therefore, the backlog size represents the total number of batches instead of the total number of messages.
-
-In Pulsar, batches are tracked and stored as single units rather than as individual messages. The consumer unbundles a batch into individual messages. However, scheduled messages (configured through the `deliverAt` or the `deliverAfter` parameter) are always sent as individual messages, even when batching is enabled.
-
-In general, a batch is acknowledged when all of its messages are acknowledged by a consumer. This means that unexpected failures, negative acknowledgements, or acknowledgement timeouts can result in redelivery of all messages in a batch, even if some of the messages are already acknowledged.
-
-To avoid redelivering acknowledged messages in a batch to the consumer, Pulsar has supported batch index acknowledgement since version 2.6.0. When batch index acknowledgement is enabled, the consumer filters out the batch indexes that have been acknowledged and sends the batch index acknowledgement request to the broker. The broker maintains the batch index acknowledgement status and tracks the acknowledgement status of each batch index to avoid dispatching acknowledged messages to the consumer. When all indexes of the batch message are acknowledged, the batch message is deleted.
-
-By default, batch index acknowledgement is disabled (`acknowledgmentAtBatchIndexLevelEnabled=false`). You can enable batch index acknowledgement by setting the `acknowledgmentAtBatchIndexLevelEnabled` parameter to `true` at the broker side. Enabling batch index acknowledgement results in more memory overhead.
-
-### Chunking
-
-Before you enable chunking, read the following instructions; a producer sketch follows the list.
-- Batching and chunking cannot be enabled simultaneously. To enable chunking, you must disable batching in advance.
-- Chunking is only supported for persistent topics.
-- Chunking is only supported for the exclusive and failover subscription types.
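-
-Under those constraints, enabling chunking comes down to a single producer setting. The following is a minimal Java sketch; the topic name is illustrative, and batching is disabled explicitly because the two features are mutually exclusive.
-
-```java
-
-import org.apache.pulsar.client.api.Producer;
-import org.apache.pulsar.client.api.PulsarClient;
-
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar://localhost:6650")
-        .build();
-
-Producer<byte[]> producer = client.newProducer()
-        .topic("persistent://public/default/big-messages")
-        .enableChunking(true)   // split over-sized payloads into ordered chunks
-        .enableBatching(false)  // chunking requires batching to be disabled
-        .create();
-
-```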
-
-When chunking is enabled (`chunkingEnabled=true`), if the message size is greater than the allowed maximum publish-payload size, the producer splits the original message into chunked messages and publishes them with chunked metadata to the broker separately and in order. At the broker side, the chunked messages are stored in the managed-ledger in the same way as ordinary messages. The only difference is that the consumer needs to buffer the chunked messages and combine them into the real message when all chunked messages have been collected. The chunked messages in the managed-ledger can be interwoven with ordinary messages. If the producer fails to publish all the chunks of a message, the consumer can expire the incomplete chunks when it fails to receive all of them within the expiry time. By default, the expiry time is set to one minute.
-
-The consumer consumes the chunked messages and buffers them until it receives all the chunks of a message. The consumer then stitches the chunked messages together and places them into the receiver-queue. Clients consume messages from the receiver-queue. Once the consumer consumes the entire large message and acknowledges it, the consumer internally sends an acknowledgement for all the chunked messages associated with that large message. You can set the `maxPendingChunkedMessage` parameter on the consumer. When the threshold is reached, the consumer drops the pending chunked messages by silently acknowledging them or asking the broker to redeliver them later by marking them unacknowledged.
-
-The broker does not require any changes to support chunking for non-shared subscriptions. The broker only uses `chunkedMessageRate` to record the chunked message rate on the topic.
-
-#### Handle chunked messages with one producer and one ordered consumer
-
-As shown in the following figure, a topic has one producer that publishes large message payloads as chunked messages along with regular non-chunked messages. The producer publishes message M1 in three chunks M1-C1, M1-C2 and M1-C3. The broker stores all three chunked messages in the managed-ledger and dispatches them to the ordered (exclusive/failover) consumer in the same order. The consumer buffers all the chunked messages in memory until it receives all of them, combines them into one message, and then hands the original message M1 over to the client.
-
-![](/assets/chunking-01.png)
-
-#### Handle chunked messages with multiple producers and one ordered consumer
-
-When multiple publishers publish chunked messages into a single topic, the broker stores all the chunked messages coming from different publishers in the same managed-ledger. As shown below, Producer 1 publishes message M1 in three chunks M1-C1, M1-C2 and M1-C3. Producer 2 publishes message M2 in three chunks M2-C1, M2-C2 and M2-C3. All chunks of a specific message are still in order but might not be consecutive in the managed-ledger. This puts some memory pressure on the consumer, because the consumer keeps a separate buffer for each large message to aggregate all of its chunks and combine them into one message.
-
-![](/assets/chunking-02.png)
-
-## Consumers
-
-A consumer is a process that attaches to a topic via a subscription and then receives messages.
-
-A consumer sends a [flow permit request](developing-binary-protocol.md#flow-control) to a broker to get messages. There is a queue at the consumer side to receive messages pushed from the broker.
You can configure the queue size with the [`receiverQueueSize`](client-libraries-java.md#configure-consumer) parameter. The default size is `1000`. Each time `consumer.receive()` is called, a message is dequeued from the buffer.
-
-### Receive modes
-
-Messages are received from [brokers](reference-terminology.md#broker) either synchronously (sync) or asynchronously (async).
-
-| Mode          | Description |
-|:--------------|:------------|
-| Sync receive  | A sync receive is blocked until a message is available. |
-| Async receive | An async receive returns immediately with a future value—for example, a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture) in Java—that completes once a new message is available. |
-
-### Listeners
-
-Client libraries provide listener implementations for consumers. For example, the [Java client](client-libraries-java.md) provides a {@inject: javadoc:MessageListener:/client/org/apache/pulsar/client/api/MessageListener} interface. In this interface, the `received` method is called whenever a new message is received.
-
-### Acknowledgement
-
-When a consumer consumes a message successfully, the consumer sends an acknowledgement request to the broker. A message is stored permanently and deleted only after all the subscriptions have acknowledged it. If you want to store messages that have already been acknowledged by a consumer, you need to configure the [message retention policy](concepts-messaging.md#message-retention-and-expiry).
-
-For a batch message, if batch index acknowledgement is enabled, the broker maintains the batch index acknowledgement status and tracks the acknowledgement status of each batch index to avoid dispatching acknowledged messages to the consumer. When all indexes of the batch message are acknowledged, the batch message is deleted. For details about batch index acknowledgement, see [batching](#batching).
-
-Messages can be acknowledged in the following two ways:
-
-- Messages are acknowledged individually. With individual acknowledgement, the consumer needs to acknowledge each message and send an acknowledgement request to the broker.
-- Messages are acknowledged cumulatively. With cumulative acknowledgement, the consumer only needs to acknowledge the last message it received. All messages in the stream up to (and including) the provided message are not re-delivered to that consumer.
-
-:::note
-
-Cumulative acknowledgement cannot be used with the [Shared subscription type](#subscription-types), because this subscription type involves multiple consumers that have access to the same subscription. In the Shared subscription type, messages are acknowledged individually.
-
-:::
-
-### Negative acknowledgement
-
-When a consumer fails to consume a message and wants to consume it again, the consumer sends a negative acknowledgement to the broker, and the broker then redelivers the message.
-
-Messages are negatively acknowledged either individually or cumulatively, depending on the subscription type.
-
-In the exclusive and failover subscription types, consumers only negatively acknowledge the last message they receive.
-
-In the shared and Key_Shared subscription types, you can negatively acknowledge messages individually, as shown in the sketch below.
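-For illustration, a minimal Java sketch of individual negative acknowledgement; it assumes an existing `pulsarClient`, the standard `negativeAcknowledge` consumer method and `negativeAckRedeliveryDelay` builder option, and a hypothetical `process` callback:
-
-```java
-
-Consumer<byte[]> consumer = pulsarClient.newConsumer()
-        .topic("my-topic")
-        .subscriptionName("my-subscription")
-        .subscriptionType(SubscriptionType.Shared)
-        .negativeAckRedeliveryDelay(60, TimeUnit.SECONDS) // delay before redelivery
-        .subscribe();
-
-Message<byte[]> msg = consumer.receive();
-try {
-    process(msg);                       // hypothetical application processing
-    consumer.acknowledge(msg);          // success: message can be removed
-} catch (Exception e) {
-    consumer.negativeAcknowledge(msg);  // failure: ask the broker to redeliver
-}
-
-```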
-
-Be aware that a negative acknowledgment on ordered subscription types, such as Exclusive, Failover and Key_Shared, can cause failed messages to arrive at consumers out of the original order.
-
-:::note
-
-If batching is enabled, all the other messages in the same batch as a negatively acknowledged message are also redelivered to the consumer.
-
-:::
-
-### Acknowledgement timeout
-
-If a message is not consumed successfully, and you want the broker to redeliver the message automatically, you can adopt the unacknowledged message automatic re-delivery mechanism. When an acknowledgement timeout is specified, the client tracks the unacknowledged messages within the entire `acktimeout` time range and automatically sends a `redeliver unacknowledged messages` request to the broker when the timeout expires.
-
-:::note
-
-If batching is enabled, all the other messages in the same batch as an unacknowledged message are also redelivered to the consumer.
-
-:::
-
-:::note
-
-Prefer negative acknowledgements over the acknowledgement timeout. Negative acknowledgement controls the re-delivery of individual messages with more precision, and avoids invalid redeliveries when the message processing time exceeds the acknowledgement timeout.
-
-:::
-
-### Dead letter topic
-
-A dead letter topic enables you to continue consuming new messages when some messages cannot be consumed successfully by a consumer. In this mechanism, messages that fail to be consumed are stored in a separate topic, called the dead letter topic. You can decide how to handle the messages in the dead letter topic.
-
-The following example shows how to enable a dead letter topic in a Java client using the default dead letter topic:
-
-```java
-
-Consumer<byte[]> consumer = pulsarClient.newConsumer(Schema.BYTES)
-              .topic(topic)
-              .subscriptionName("my-subscription")
-              .subscriptionType(SubscriptionType.Shared)
-              .deadLetterPolicy(DeadLetterPolicy.builder()
-                    .maxRedeliverCount(maxRedeliveryCount)
-                    .build())
-              .subscribe();
-
-```
-
-The default dead letter topic uses this format:
-
-```
-<topicname>-<subscriptionname>-DLQ
-
-```
-
-
-If you want to specify the name of the dead letter topic, use this Java client example:
-
-```java
-
-Consumer<byte[]> consumer = pulsarClient.newConsumer(Schema.BYTES)
-              .topic(topic)
-              .subscriptionName("my-subscription")
-              .subscriptionType(SubscriptionType.Shared)
-              .deadLetterPolicy(DeadLetterPolicy.builder()
-                    .maxRedeliverCount(maxRedeliveryCount)
-                    .deadLetterTopic("your-topic-name")
-                    .build())
-              .subscribe();
-
-```
-
-A dead letter topic depends on message re-delivery. Messages are redelivered either due to [acknowledgement timeout](#acknowledgement-timeout) or [negative acknowledgement](#negative-acknowledgement). If you are going to use negative acknowledgement on a message, make sure it is negatively acknowledged before the acknowledgement timeout.
-
-:::note
-
-Currently, the dead letter topic is enabled in the Shared and Key_Shared subscription types.
-
-:::
-
-### Retry letter topic
-
-In many online business systems, a message needs to be re-consumed because an exception occurs during business logic processing. To configure the delay time for re-consuming failed messages, you can configure the producer to send messages to both the business topic and the retry letter topic, and enable automatic retry on the consumer. When automatic retry is enabled on the consumer, a message is stored in the retry letter topic if it is not consumed successfully, and the consumer automatically consumes the failed messages from the retry letter topic after a specified delay time.
-
-By default, automatic retry is disabled. You can set `enableRetry` to `true` to enable automatic retry on the consumer.
-
-This example shows how to consume messages from a retry letter topic.
-
-```java
-
-Consumer<byte[]> consumer = pulsarClient.newConsumer(Schema.BYTES)
-        .topic(topic)
-        .subscriptionName("my-subscription")
-        .subscriptionType(SubscriptionType.Shared)
-        .enableRetry(true)
-        .receiverQueueSize(100)
-        .deadLetterPolicy(DeadLetterPolicy.builder()
-            .maxRedeliverCount(maxRedeliveryCount)
-            .retryLetterTopic("persistent://my-property/my-ns/my-subscription-custom-Retry")
-            .build())
-        .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest)
-        .subscribe();
-
-```
-
-## Topics
-
-As in other pub-sub systems, topics in Pulsar are named channels for transmitting messages from producers to consumers. Topic names are URLs that have a well-defined structure:
-
-```http
-
-{persistent|non-persistent}://tenant/namespace/topic
-
-```
-
-Topic name component | Description
-:--------------------|:-----------
-`persistent` / `non-persistent` | This identifies the type of topic. Pulsar supports two kinds of topics: [persistent](concepts-architecture-overview.md#persistent-storage) and [non-persistent](#non-persistent-topics). The default is persistent, so if you do not specify a type, the topic is persistent. With persistent topics, all messages are durably persisted on disks (if the broker is not standalone, messages are durably persisted on multiple disks), whereas data for non-persistent topics is not persisted to storage disks.
-`tenant` | The topic tenant within the instance. Tenants are essential to multi-tenancy in Pulsar, and are spread across clusters.
-`namespace` | The administrative unit of the topic, which acts as a grouping mechanism for related topics. Most topic configuration is performed at the [namespace](#namespaces) level. Each tenant has one or multiple namespaces.
-`topic` | The final part of the name. Topic names have no special meaning in a Pulsar instance.
-
-> **No need to explicitly create new topics**
-> You do not need to explicitly create topics in Pulsar. If a client attempts to write or receive messages to/from a topic that does not yet exist, Pulsar creates that topic under the namespace provided in the [topic name](#topics) automatically.
-> If no tenant or namespace is specified when a client creates a topic, the topic is created in the default tenant and namespace. You can also create a topic in a specified tenant and namespace, such as `persistent://my-tenant/my-namespace/my-topic`. `persistent://my-tenant/my-namespace/my-topic` means the `my-topic` topic is created in the `my-namespace` namespace of the `my-tenant` tenant.
-
-## Namespaces
-
-A namespace is a logical nomenclature within a tenant. A tenant can create multiple namespaces via the [admin API](admin-api-namespaces.md#create). For instance, a tenant with different applications can create a separate namespace for each application. A namespace allows the application to create and manage a hierarchy of topics. For example, `my-tenant/app1` is the namespace for the application `app1` of the tenant `my-tenant`. You can create any number of [topics](#topics) under a namespace.
-
-## Subscriptions
-
-A subscription is a named configuration rule that determines how messages are delivered to consumers. Four subscription types are available in Pulsar: [exclusive](#exclusive), [shared](#shared), [failover](#failover), and [key_shared](#key_shared). These types are illustrated in the figure below.
-
-![Subscription types](/assets/pulsar-subscription-types.png)
-
-> **Pub-Sub or Queuing**
-> In Pulsar, you can use different subscriptions flexibly.
-> * If you want to achieve traditional "fan-out pub-sub messaging" among consumers, specify a unique subscription name for each consumer. This is the exclusive subscription type.
-> * If you want to achieve "message queuing" among consumers, share the same subscription name among multiple consumers (shared, failover, key_shared).
-> * If you want to achieve both effects simultaneously, combine the exclusive subscription type with other subscription types for consumers.
-
-### Subscription types
-When a subscription has no consumers, its subscription type is undefined. The type of a subscription is defined when a consumer connects to it, and the type can be changed by restarting all consumers with a different configuration.
-
-#### Exclusive
-
-In the *exclusive* type, only a single consumer is allowed to attach to the subscription. If multiple consumers subscribe to a topic using the same subscription, an error occurs.
-
-In the diagram below, only **Consumer A-0** is allowed to consume messages.
-
-> Exclusive is the default subscription type.
-
-![Exclusive subscriptions](/assets/pulsar-exclusive-subscriptions.png)
-
-#### Failover
-
-In the *Failover* type, multiple consumers can attach to the same subscription. A master consumer is picked for a non-partitioned topic or for each partition of a partitioned topic, and receives messages. When the master consumer disconnects, all (non-acknowledged and subsequent) messages are delivered to the next consumer in line.
-
-For partitioned topics, the broker sorts consumers by priority level and by the lexicographical order of consumer names. The broker then tries to evenly assign partitions to the consumers with the highest priority level.
-
-For a non-partitioned topic, the broker picks consumers in the order in which they subscribe to the topic.
-
-In the diagram below, **Consumer-B-0** is the master consumer, while **Consumer-B-1** would be the next consumer in line to receive messages if **Consumer-B-0** is disconnected.
-
-![Failover subscriptions](/assets/pulsar-failover-subscriptions.png)
-
-#### Shared
-
-In the *shared* or *round robin* type, multiple consumers can attach to the same subscription. Messages are delivered in a round-robin distribution across consumers, and any given message is delivered to only one consumer. When a consumer disconnects, all the messages that were sent to it and not acknowledged are rescheduled for sending to the remaining consumers.
-
-In the diagram below, **Consumer-C-1** and **Consumer-C-2** are able to subscribe to the topic, but **Consumer-C-3** and others could as well.
-
-> **Limitations of Shared type**
-> When using Shared type, be aware that:
-> * Message ordering is not guaranteed.
-> * You cannot use cumulative acknowledgment with Shared type.
-
-![Shared subscriptions](/assets/pulsar-shared-subscriptions.png)
-
-#### Key_Shared
-
-In the *Key_Shared* type, multiple consumers can attach to the same subscription. Messages are distributed across consumers, and messages with the same key or the same ordering key are delivered to only one consumer. No matter how many times a message is re-delivered, it is delivered to the same consumer. When a consumer connects or disconnects, the consumer that serves some keys of messages changes.
-
-> **Limitations of Key_Shared type**
-> When you use Key_Shared type, be aware that:
-> * You need to specify a key or orderingKey for messages.
-> * You cannot use cumulative acknowledgment with Key_Shared type.
-> * Your producers should disable batching or use a key-based batch builder.
-
-![Key_Shared subscriptions](/assets/pulsar-key-shared-subscriptions.png)
-
-**You can disable the Key_Shared subscription type in the `broker.conf` file.**
-
-### Subscription modes
-
-#### What is a subscription mode
-
-The subscription mode indicates the cursor type.
-
-- When a subscription is created, an associated cursor is created to record the last consumed position.
-- When a consumer of the subscription restarts, it can continue consuming from the last message it consumed.
-
-Subscription mode | Description | Note
-|---|---|---
-`Durable`|The cursor is durable, which retains messages and persists the current position.<br /><br />If a broker restarts from a failure, it can recover the cursor from the persistent storage (BookKeeper), so that messages can continue to be consumed from the last consumed position.|`Durable` is the **default** subscription mode.
-`NonDurable`|The cursor is non-durable.<br /><br />Once a broker stops, the cursor is lost and can never be recovered, so that messages **cannot** continue to be consumed from the last consumed position.|A reader’s subscription mode is `NonDurable` in nature and does not prevent data in a topic from being deleted. A reader’s subscription mode **cannot** be changed.
-
-A [subscription](#subscriptions) can have one or more consumers. When a consumer subscribes to a topic, it must specify the subscription name. A durable subscription and a non-durable subscription can have the same name; they are independent of each other. If a consumer specifies a subscription that does not yet exist, the subscription is automatically created.
-
-#### When to use
-
-By default, messages of a topic without any durable subscriptions are marked as deleted. If you want to prevent the messages from being marked as deleted, you can create a durable subscription for this topic. In this case, only acknowledged messages are marked as deleted. For more information, see [message retention and expiry](cookbooks-retention-expiry.md).
-
-#### How to use
-
-After a consumer is created, the default subscription mode of the consumer is `Durable`. You can change the subscription mode to `NonDurable` by making changes to the consumer’s configuration.
-
-````mdx-code-block
-
-
-
-```java
-
-  Consumer consumer = pulsarClient.newConsumer()
-                  .topic("my-topic")
-                  .subscriptionName("my-sub")
-                  .subscriptionMode(SubscriptionMode.Durable)
-                  .subscribe();
-
-```
-
-
-
-```java
-
-  Consumer consumer = pulsarClient.newConsumer()
-                  .topic("my-topic")
-                  .subscriptionName("my-sub")
-                  .subscriptionMode(SubscriptionMode.NonDurable)
-                  .subscribe();
-
-```
-
-
-
-````
-
-For how to create, check, or delete a durable subscription, see [manage subscriptions](admin-api-topics.md/#manage-subscriptions).
-
-## Multi-topic subscriptions
-
-When a consumer subscribes to a Pulsar topic, by default it subscribes to one specific topic, such as `persistent://public/default/my-topic`. As of Pulsar version 1.23.0-incubating, however, Pulsar consumers can simultaneously subscribe to multiple topics. You can define a list of topics in two ways:
-
-* On the basis of a [**reg**ular **ex**pression](https://en.wikipedia.org/wiki/Regular_expression) (regex), for example `persistent://public/default/finance-.*`
-* By explicitly defining a list of topics
-
-> When subscribing to multiple topics by regex, all topics must be in the same [namespace](#namespaces).
-
-When subscribing to multiple topics, the Pulsar client automatically makes a call to the Pulsar API to discover the topics that match the regex pattern/list, and then subscribes to all of them. If any of the topics do not exist, the consumer auto-subscribes to them once the topics are created.
-
-> **No ordering guarantees across multiple topics**
-> When a producer sends messages to a single topic, all messages are guaranteed to be read from that topic in the same order. However, these guarantees do not hold across multiple topics. So when a producer sends messages to multiple topics, the order in which messages are read from those topics is not guaranteed to be the same.
-
-The following are multi-topic subscription examples for Java.
-
-```java
-
-import java.util.regex.Pattern;
-
-import org.apache.pulsar.client.api.Consumer;
-import org.apache.pulsar.client.api.PulsarClient;
-
-// Instantiate a Pulsar client object
-PulsarClient pulsarClient = PulsarClient.builder()
-        .serviceUrl("pulsar://localhost:6650")
-        .build();
-
-// Subscribe to all topics in a namespace
-Pattern allTopicsInNamespace = Pattern.compile("persistent://public/default/.*");
-Consumer allTopicsConsumer = pulsarClient.newConsumer()
-                .topicsPattern(allTopicsInNamespace)
-                .subscriptionName("subscription-1")
-                .subscribe();
-
-// Subscribe to a subset of topics in a namespace, based on regex
-Pattern someTopicsInNamespace = Pattern.compile("persistent://public/default/foo.*");
-Consumer someTopicsConsumer = pulsarClient.newConsumer()
-                .topicsPattern(someTopicsInNamespace)
-                .subscriptionName("subscription-1")
-                .subscribe();
-
-```
-
-For code examples, see [Java](client-libraries-java.md#multi-topic-subscriptions).
-
-## Partitioned topics
-
-Normal topics are served only by a single broker, which limits the maximum throughput of the topic. *Partitioned topics* are a special type of topic that are handled by multiple brokers, thus allowing for higher throughput.
-
-A partitioned topic is actually implemented as N internal topics, where N is the number of partitions. When publishing messages to a partitioned topic, each message is routed to one of several brokers. The distribution of partitions across brokers is handled automatically by Pulsar.
-
-The diagram below illustrates this:
-
-![](/assets/partitioning.png)
-
-The **Topic1** topic has five partitions (**P0** through **P4**) split across three brokers. Because there are more partitions than brokers, two brokers handle two partitions apiece, while the third handles only one (again, Pulsar handles this distribution of partitions automatically).
-
-Messages for this topic are broadcast to two consumers. The [routing mode](#routing-modes) determines which partition each message should be published to, while the [subscription type](#subscription-types) determines which messages go to which consumers.
-
-Decisions about routing and subscription types can be made separately in most cases. In general, throughput concerns should guide partitioning/routing decisions, while subscription decisions should be guided by application semantics.
-
-There is no difference between partitioned topics and normal topics in terms of how subscription types work, as partitioning only determines what happens between when a message is published by a producer and when it is processed and acknowledged by a consumer.
-
-Partitioned topics need to be explicitly created via the [admin API](admin-api-overview.md). The number of partitions can be specified when creating the topic.
-
-### Routing modes
-
-When publishing to partitioned topics, you must specify a *routing mode*. The routing mode determines which partition---that is, which internal topic---each message should be published to.
-
-There are three {@inject: javadoc:MessageRoutingMode:/client/org/apache/pulsar/client/api/MessageRoutingMode} options available:
-
-Mode | Description
-:--------|:------------
-`RoundRobinPartition` | If no key is provided, the producer publishes messages across all partitions in round-robin fashion to achieve maximum throughput. Note that round-robin is not done per individual message; rather, it is applied at the boundary of the batching delay to ensure that batching is effective. If a key is specified on the message, the partitioned producer hashes the key and assigns the message to a particular partition.
This is the default mode.
-`SinglePartition` | If no key is provided, the producer randomly picks one single partition and publishes all messages into that partition. If a key is specified on the message, the partitioned producer hashes the key and assigns the message to a particular partition.
-`CustomPartition` | Uses a custom message router implementation that is called to determine the partition for a particular message. Users can create a custom routing mode by using the [Java client](client-libraries-java.md) and implementing the {@inject: javadoc:MessageRouter:/client/org/apache/pulsar/client/api/MessageRouter} interface.
-
-### Ordering guarantee
-
-The ordering of messages is related to the MessageRoutingMode and the message key. Usually, users want a per-key-partition ordering guarantee.
-
-If there is a key attached to a message, the messages are routed to the corresponding partitions based on the hashing scheme specified by {@inject: javadoc:HashingScheme:/client/org/apache/pulsar/client/api/HashingScheme} in {@inject: javadoc:ProducerBuilder:/client/org/apache/pulsar/client/api/ProducerBuilder}, when using either the `SinglePartition` or the `RoundRobinPartition` mode.
-
-Ordering guarantee | Description | Routing Mode and Key
-:------------------|:------------|:------------
-Per-key-partition | All the messages with the same key will be in order and be placed in the same partition. | Use either `SinglePartition` or `RoundRobinPartition` mode, and a key is provided for each message.
-Per-producer | All the messages from the same producer will be in order. | Use `SinglePartition` mode, and no key is provided for each message.
-
-### Hashing scheme
-
-{@inject: javadoc:HashingScheme:/client/org/apache/pulsar/client/api/HashingScheme} is an enum that represents the set of standard hashing functions available when choosing the partition to use for a particular message.
-
-There are two standard hashing functions available: `JavaStringHash` and `Murmur3_32Hash`.
-The default hashing function for a producer is `JavaStringHash`.
-Note that `JavaStringHash` is not suitable when producers can be from multiple language clients; in this case, it is recommended to use `Murmur3_32Hash`.
-
-
-
-## Non-persistent topics
-
-
-By default, Pulsar persistently stores *all* unacknowledged messages on multiple [BookKeeper](concepts-architecture-overview.md#persistent-storage) bookies (storage nodes). Data for messages on persistent topics can thus survive broker restarts and subscriber failover.
-
-Pulsar also, however, supports **non-persistent topics**, which are topics on which messages are *never* persisted to disk and live only in memory. When using non-persistent delivery, killing a Pulsar broker or disconnecting a subscriber to a topic means that all in-transit messages are lost on that (non-persistent) topic, meaning that clients may see message loss.
-
-Non-persistent topics have names of this form (note the `non-persistent` in the name):
-
-```http
-
-non-persistent://tenant/namespace/topic
-
-```
-
-> For more info on using non-persistent topics, see the [Non-persistent messaging cookbook](cookbooks-non-persistent.md).
-
-In non-persistent topics, brokers immediately deliver messages to all connected subscribers *without persisting them* in [BookKeeper](concepts-architecture-overview.md#persistent-storage).
If a subscriber is disconnected, the broker will not be able to deliver those in-transit messages, and subscribers will never be able to receive those messages again. Eliminating the persistent storage step makes messaging on non-persistent topics slightly faster than on persistent topics in some cases, but with the caveat that some of the core benefits of Pulsar are lost.
-
-> With non-persistent topics, message data lives only in memory. If a message broker fails or message data can otherwise not be retrieved from memory, your message data may be lost. Use non-persistent topics only if you're *certain* that your use case requires it and can sustain it.
-
-By default, non-persistent topics are enabled on Pulsar brokers. You can disable them in the broker's [configuration](reference-configuration.md#broker-enableNonPersistentTopics). You can manage non-persistent topics using the `pulsar-admin topics` command. For more information, see [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/).
-
-### Performance
-
-Non-persistent messaging is usually faster than persistent messaging because brokers don't persist messages and immediately send acks back to the producer as soon as that message is delivered to connected brokers. Producers thus see comparatively low publish latency with non-persistent topics.
-
-### Client API
-
-Producers and consumers can connect to non-persistent topics in the same way as persistent topics, with the crucial difference that the topic name must start with `non-persistent`. All three subscription types---[exclusive](#exclusive), [shared](#shared), and [failover](#failover)---are supported for non-persistent topics.
-
-Here's an example [Java consumer](client-libraries-java.md#consumers) for a non-persistent topic:
-
-```java
-
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar://localhost:6650")
-        .build();
-String npTopic = "non-persistent://public/default/my-topic";
-String subscriptionName = "my-subscription-name";
-
-Consumer consumer = client.newConsumer()
-        .topic(npTopic)
-        .subscriptionName(subscriptionName)
-        .subscribe();
-
-```
-
-Here's an example [Java producer](client-libraries-java.md#producer) for the same non-persistent topic:
-
-```java
-
-Producer producer = client.newProducer()
-                .topic(npTopic)
-                .create();
-
-```
-
-
-## System topic
-
-A system topic is a predefined topic for internal use within Pulsar. It can be either a persistent or a non-persistent topic.
-
-System topics serve to implement certain features and eliminate dependencies on third-party components, such as transactions, heartbeat detections, topic-level policies, and resource group services. System topics make the implementation of these features simpler, more independent, and more flexible. Take heartbeat detection as an example: you can leverage the system topic for health checks to internally enable a producer/reader to produce/consume messages under the heartbeat namespace, which detects whether the current service is still alive.
-
-Different namespaces have different system topics. The following table outlines the available system topics for each specific namespace.
- -| Namespace | TopicName | Domain | Count | Usage | -|-----------|-----------|--------|-------|-------| -| pulsar/system | `transaction_coordinator_assign_${id}` | Persistent | Default 16 | Transaction coordinator | -| pulsar/system | `_transaction_log${tc_id}` | Persistent | Default 16 | Transaction log | -| pulsar/system | `resource-usage` | Non-persistent | Default 4 | Resource group service | -| host/port | `heartbeat` | Persistent | 1 | Heartbeat detection | -| User-defined-ns | [`__change_events`](concepts-multi-tenancy.md#namespace-change-events-and-topic-level-policies) | Persistent | Default 4 | Topic events | -| User-defined-ns | `__transaction_buffer_snapshot` | Persistent | One per namespace | Transaction buffer snapshots | -| User-defined-ns | `${topicName}__transaction_pending_ack` | Persistent | One per every topic subscription acknowledged with transactions | Acknowledgements with transactions | - -:::note - -* You cannot create any system topics. -* By default, system topics are disabled. To enable system topics, you need to change the following configurations in the `conf/broker.conf` or `conf/standalone.conf` file. - - ```conf - systemTopicEnabled=true - topicLevelPoliciesEnabled=true - ``` - -::: - - -## Message retention and expiry - -By default, Pulsar message brokers: - -* immediately delete *all* messages that have been acknowledged by a consumer, and -* [persistently store](concepts-architecture-overview.md#persistent-storage) all unacknowledged messages in a message backlog. - -Pulsar has two features, however, that enable you to override this default behavior: - -* Message **retention** enables you to store messages that have been acknowledged by a consumer -* Message **expiry** enables you to set a time to live (TTL) for messages that have not yet been acknowledged - -> All message retention and expiry is managed at the [namespace](#namespaces) level. For a how-to, see the [Message retention and expiry](cookbooks-retention-expiry.md) cookbook. - -The diagram below illustrates both concepts: - -![Message retention and expiry](/assets/retention-expiry.png) - -With message retention, shown at the top, a retention policy applied to all topics in a namespace dictates that some messages are durably stored in Pulsar even though they've already been acknowledged. Acknowledged messages that are not covered by the retention policy are deleted. Without a retention policy, *all* of the acknowledged messages would be deleted. - -With message expiry, shown at the bottom, some messages are deleted, even though they haven't been acknowledged, because they've expired according to the TTL applied to the namespace (for example because a TTL of 5 minutes has been applied and the messages haven't been acknowledged but are 10 minutes old). - -## Message deduplication - -Message duplication occurs when a message is [persisted](concepts-architecture-overview.md#persistent-storage) by Pulsar more than once. Message deduplication is an optional Pulsar feature that prevents unnecessary message duplication by processing each message only once, even if the message is received more than once. - -The following diagram illustrates what happens when message deduplication is disabled vs. enabled: - -![Pulsar message deduplication](/assets/message-deduplication.png) - - -Message deduplication is disabled in the scenario shown at the top. 
Here, a producer publishes message 1 on a topic; the message reaches a Pulsar broker and is [persisted](concepts-architecture-overview.md#persistent-storage) to BookKeeper. The producer then sends message 1 again (in this case due to some retry logic), and the message is received by the broker and stored in BookKeeper again, which means that duplication has occurred.
-
-In the second scenario at the bottom, the producer publishes message 1, which is received by the broker and persisted, as in the first scenario. When the producer attempts to publish the message again, however, the broker knows that it has already seen message 1 and thus does not persist the message.
-
-> Message deduplication is handled at the namespace level or the topic level. For more instructions, see the [message deduplication cookbook](cookbooks-deduplication.md).
-
-
-### Producer idempotency
-
-The other available approach to message deduplication is to ensure that each message is *only produced once*. This approach is typically called **producer idempotency**. The drawback of this approach is that it defers the work of message deduplication to the application. In Pulsar, message deduplication is instead handled at the [broker](reference-terminology.md#broker) level, so you do not need to modify your Pulsar client code. Instead, you only need to make administrative changes. For details, see [Managing message deduplication](cookbooks-deduplication.md).
-
-### Deduplication and effectively-once semantics
-
-Message deduplication makes Pulsar an ideal messaging system to be used in conjunction with stream processing engines (SPEs) and other systems seeking to provide effectively-once processing semantics. Messaging systems that do not offer automatic message deduplication require the SPE or other system to guarantee deduplication, which means that strict message ordering comes at the cost of burdening the application with the responsibility of deduplication. With Pulsar, strict ordering guarantees come at no application-level cost.
-
-> You can find more in-depth information in [this post](https://www.splunk.com/en_us/blog/it/exactly-once-is-not-exactly-the-same.html).
-
-## Delayed message delivery
-Delayed message delivery enables you to consume a message later rather than immediately. In this mechanism, a message is stored in BookKeeper after it is published to a broker, the `DelayedDeliveryTracker` maintains the time index (time -> messageId) in memory, and the message is delivered to a consumer once the specified delay has elapsed.
-
-Delayed message delivery only works with the Shared subscription type. In the Exclusive and Failover subscription types, the delayed message is dispatched immediately.
-
-The diagram below illustrates the concept of delayed message delivery:
-
-![Delayed Message Delivery](/assets/message_delay.png)
-
-A broker saves a message without any check. When the message is to be consumed, if it is set to be delayed, it is added to the `DelayedDeliveryTracker`. The subscription checks the `DelayedDeliveryTracker` and retrieves the messages whose delay has expired.
-
-### Broker
-Delayed message delivery is enabled by default. You can change it in the broker configuration file as below:
-
-```
-
-# Whether to enable the delayed delivery for messages.
-# If disabled, messages are immediately delivered and there is no tracking overhead.
-delayedDeliveryEnabled=true
-
-# Control the ticking time for the retry of delayed message delivery,
-# affecting the accuracy of the delivery time compared to the scheduled time.
-# Default is 1 second.
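-# (Editor's assumption) Lower tick values make delivery timing track the
-# schedule more precisely, at the cost of more frequent tracker checks.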
-delayedDeliveryTickTimeMillis=1000
-
-```
-
-### Producer
-The following is an example of delayed message delivery for a producer in Java:
-
-```java
-
-// message to be delivered at the configured delay interval
-producer.newMessage().deliverAfter(3L, TimeUnit.MINUTES).value("Hello Pulsar!").send();
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/concepts-multi-tenancy.md b/site2/website/versioned_docs/version-2.8.0-deprecated/concepts-multi-tenancy.md
deleted file mode 100644
index 93a59557b2efca..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/concepts-multi-tenancy.md
+++ /dev/null
@@ -1,67 +0,0 @@
----
-id: concepts-multi-tenancy
-title: Multi Tenancy
-sidebar_label: "Multi Tenancy"
-original_id: concepts-multi-tenancy
----
-
-Pulsar was created from the ground up as a multi-tenant system. To support multi-tenancy, Pulsar has a concept of tenants. Tenants can be spread across clusters and can each have their own [authentication and authorization](security-overview.md) scheme applied to them. They are also the administrative unit at which storage quotas, [message TTL](cookbooks-retention-expiry.md#time-to-live-ttl), and isolation policies can be managed.
-
-The multi-tenant nature of Pulsar is reflected most visibly in topic URLs, which have this structure:
-
-```http
-
-persistent://tenant/namespace/topic
-
-```
-
-As you can see, the tenant is the most basic unit of categorization for topics (more fundamental than the namespace and topic name).
-
-## Tenants
-
-To each tenant in a Pulsar instance you can assign:
-
-* An [authorization](security-authorization.md) scheme
-* The set of [clusters](reference-terminology.md#cluster) to which the tenant's configuration applies
-
-## Namespaces
-
-Tenants and namespaces are two key concepts of Pulsar to support multi-tenancy.
-
-* Pulsar is provisioned for specified tenants with appropriate capacity allocated to the tenant.
-* A namespace is the administrative unit nomenclature within a tenant. The configuration policies set on a namespace apply to all the topics created in that namespace. A tenant may create multiple namespaces via self-administration using the REST API and the [`pulsar-admin`](reference-pulsar-admin.md) CLI tool. For instance, a tenant with different applications can create a separate namespace for each application.
-
-Names for topics in the same namespace will look like this:
-
-```http
-
-persistent://tenant/app1/topic-1
-
-persistent://tenant/app1/topic-2
-
-persistent://tenant/app1/topic-3
-
-```
-
-### Namespace change events and topic-level policies
-
-Pulsar is a multi-tenant event streaming system. Administrators can manage the tenants and namespaces by setting policies at different levels. However, the policies, such as the retention policy and the storage quota policy, are only available at the namespace level. In many use cases, users need to set a policy at the topic level. The namespace change events approach is proposed to support topic-level policies in an efficient way. In this approach, Pulsar is used as an event log to store namespace change events (such as topic policy changes). This approach has a few benefits:
-- It avoids using ZooKeeper and putting more load on ZooKeeper.
-- It uses Pulsar as an event log for propagating the policy cache, which can scale efficiently.
-- It allows Pulsar SQL to query the namespace changes and audit the system.
-
-Each namespace has a [system topic](concepts-messaging.md#system-topic) named `__change_events`.
This system topic stores change events for a given namespace. The following figure illustrates how to leverage it to update topic-level policies.
-
-![Leverage the system topic to update topic-level policies](/assets/system-topic-for-topic-level-policies.svg)
-
-1. Pulsar Admin clients communicate with the Admin RESTful API to update topic-level policies.
-2. Any broker that receives the Admin HTTP request publishes a topic policy change event to the corresponding system topic (`__change_events`) of the namespace.
-3. Each broker that owns a namespace bundle(s) subscribes to the system topic (`__change_events`) to receive the change events of the namespace.
-4. Each broker applies the change events to its policy cache.
-5. Once the policy cache is updated, the broker sends the response back to the Pulsar Admin clients.
-
-:::note
-
-By default, the system topic is disabled. To enable the topic-level policy (`topicLevelPoliciesEnabled=true`), you need to enable the system topic by setting `systemTopicEnabled` to `true` in the `conf/broker.conf` or `conf/standalone.conf` file.
-
-:::
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/concepts-multiple-advertised-listeners.md b/site2/website/versioned_docs/version-2.8.0-deprecated/concepts-multiple-advertised-listeners.md
deleted file mode 100644
index f2e1ae0aadc7ca..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/concepts-multiple-advertised-listeners.md
+++ /dev/null
@@ -1,44 +0,0 @@
----
-id: concepts-multiple-advertised-listeners
-title: Multiple advertised listeners
-sidebar_label: "Multiple advertised listeners"
-original_id: concepts-multiple-advertised-listeners
----
-
-When a Pulsar cluster is deployed in a production environment, you may need to expose multiple advertised addresses for the broker. For example, when you deploy a Pulsar cluster in Kubernetes and want other clients, which are not in the same Kubernetes cluster, to connect to the Pulsar cluster, you need to assign a broker URL to external clients. But clients in the same Kubernetes cluster can still connect to the Pulsar cluster through the internal network of Kubernetes.
-
-## Advertised listeners
-
-To ensure clients in both internal and external networks can connect to a Pulsar cluster, Pulsar introduces the `advertisedListeners` and `internalListenerName` configuration options in the [broker configuration file](reference-configuration.md#broker), so that the broker supports exposing multiple advertised listeners and separating internal and external network traffic.
-
-- The `advertisedListeners` option is used to specify multiple advertised listeners. The broker uses the listener as the broker identifier in the load manager and the bundle owner data. The `advertisedListeners` option is formatted as `<listener_name>:pulsar://<host>:<port>, <listener_name>:pulsar+ssl://<host>:<port>`. You can set up the `advertisedListeners` like
-`advertisedListeners=internal:pulsar://192.168.1.11:6660,internal:pulsar+ssl://192.168.1.11:6651`.
-
-- The `internalListenerName` option is used to specify the internal service URL that the broker uses. You can specify the `internalListenerName` by choosing one of the `advertisedListeners`. The broker uses the listener name of the first advertised listener as the `internalListenerName` if the `internalListenerName` is absent.
-
-After setting up the `advertisedListeners`, clients can choose one of the listeners as the service URL to create a connection to the broker, as long as the network is accessible.
However, if the client creates producers or consumers on a topic, the client must send a lookup request to the broker to find the owner broker, and then connect to the owner broker to publish or consume messages. Therefore, you must allow the client to get the corresponding service URL with the same advertised listener name as the one used by the client. This helps keep the client side simple and secure.
-
-## Use multiple advertised listeners
-
-This example shows how a Pulsar client uses multiple advertised listeners.
-
-1. Configure multiple advertised listeners in the broker configuration file.
-
-```shell
-
-advertisedListeners={listenerName}:pulsar://xxxx:6650,
-{listenerName}:pulsar+ssl://xxxx:6651
-
-```
-
-2. Specify the listener name for the client.
-
-```java
-
-PulsarClient client = PulsarClient.builder()
-    .serviceUrl("pulsar://xxxx:6650")
-    .listenerName("external")
-    .build();
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/concepts-overview.md b/site2/website/versioned_docs/version-2.8.0-deprecated/concepts-overview.md
deleted file mode 100644
index e8a2f4b9d321a7..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/concepts-overview.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-id: concepts-overview
-title: Pulsar Overview
-sidebar_label: "Overview"
-original_id: concepts-overview
----
-
-Pulsar is a multi-tenant, high-performance solution for server-to-server messaging. Originally developed by Yahoo, Pulsar is under the stewardship of the [Apache Software Foundation](https://www.apache.org/).
-
-Key features of Pulsar are listed below:
-
-* Native support for multiple clusters in a Pulsar instance, with seamless [geo-replication](administration-geo.md) of messages across clusters.
-* Very low publish and end-to-end latency.
-* Seamless scalability to over a million topics.
-* A simple [client API](concepts-clients.md) with bindings for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python.md) and [C++](client-libraries-cpp.md).
-* Multiple [subscription types](concepts-messaging.md#subscription-types) ([exclusive](concepts-messaging.md#exclusive), [shared](concepts-messaging.md#shared), and [failover](concepts-messaging.md#failover)) for topics.
-* Guaranteed message delivery with [persistent message storage](concepts-architecture-overview.md#persistent-storage) provided by [Apache BookKeeper](http://bookkeeper.apache.org/).
-* A serverless lightweight computing framework, [Pulsar Functions](functions-overview.md), offers the capability for stream-native data processing.
-* A serverless connector framework, [Pulsar IO](io-overview.md), which is built on Pulsar Functions, makes it easier to move data in and out of Apache Pulsar.
-* [Tiered Storage](concepts-tiered-storage.md) offloads data from hot/warm storage to cold/long-term storage (such as S3 and GCS) when the data is aging out.
-
-## Contents
-
-- [Messaging Concepts](concepts-messaging.md)
-- [Architecture Overview](concepts-architecture-overview.md)
-- [Pulsar Clients](concepts-clients.md)
-- [Geo Replication](concepts-replication.md)
-- [Multi Tenancy](concepts-multi-tenancy.md)
-- [Authentication and Authorization](concepts-authentication.md)
-- [Topic Compaction](concepts-topic-compaction.md)
-- [Tiered Storage](concepts-tiered-storage.md)
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/concepts-proxy-sni-routing.md b/site2/website/versioned_docs/version-2.8.0-deprecated/concepts-proxy-sni-routing.md
deleted file mode 100644
index 51419a66cefe6e..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/concepts-proxy-sni-routing.md
+++ /dev/null
@@ -1,180 +0,0 @@
----
-id: concepts-proxy-sni-routing
-title: Proxy support with SNI routing
-sidebar_label: "Proxy support with SNI routing"
-original_id: concepts-proxy-sni-routing
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-A proxy server is an intermediary server that forwards requests from multiple clients to different servers across the Internet. The proxy server acts as a "traffic cop" in both forward and reverse proxy scenarios, and provides benefits to your system such as load balancing, performance, security, and auto-scaling.
-
-The proxy in Pulsar acts as a reverse proxy, and creates a gateway in front of brokers. Proxies such as Apache Traffic Server (ATS), HAProxy, Nginx, and Envoy are not bundled with Pulsar, but these proxy servers support **SNI routing**. SNI routing is used to route traffic to a destination without terminating the SSL connection. Layer-4 routing provides greater transparency because the outbound connection is determined by examining the destination address in the client TCP packets.
-
-Pulsar clients (Java, C++, Python) support the [SNI routing protocol](https://github.com/apache/pulsar/wiki/PIP-60:-Support-Proxy-server-with-SNI-routing), so you can connect to brokers through the proxy. This document walks you through how to set up the ATS proxy, enable SNI routing, and connect a Pulsar client to the broker through the ATS proxy.
-
-## ATS-SNI Routing in Pulsar
-To support [layer-4 SNI routing](https://docs.trafficserver.apache.org/en/latest/admin-guide/layer-4-routing.en.html) with ATS, the inbound connection must be a TLS connection. Pulsar clients support the SNI routing protocol on TLS connections, so when Pulsar clients connect to a broker through the ATS proxy, Pulsar uses ATS as a reverse proxy.
-
-Pulsar supports SNI routing for geo-replication, so brokers can connect to brokers in other clusters through the ATS proxy.
-
-This section explains how to set up and use ATS as a reverse proxy, so Pulsar clients can connect to brokers through the ATS proxy using the SNI routing protocol on a TLS connection.
-
-### Set up ATS Proxy for layer-4 SNI routing
-To support layer-4 SNI routing, you need to configure the `records.config` and `ssl_server_name.config` files.
-
-![Pulsar client SNI](/assets/pulsar-sni-client.png)
-
-The [records.config](https://docs.trafficserver.apache.org/en/latest/admin-guide/files/records.config.en.html) file is located in the `/usr/local/etc/trafficserver/` directory by default. The file lists configurable variables used by the ATS.
-
-To configure the `records.config` file, complete the following steps.
-1.
Update the TLS port (`http.server_ports`) on which the proxy listens, and update the proxy certs (`ssl.client.cert.path` and `ssl.client.cert.filename`) to secure TLS tunneling.
-2. Configure the server ports (`http.connect_ports`) used for tunneling to the broker. If Pulsar brokers are listening on the `4443` and `6651` ports, add the broker service ports to the `http.connect_ports` configuration.
-
-The following is an example.
-
-```
-
-# PROXY TLS PORT
-CONFIG proxy.config.http.server_ports STRING 4443:ssl 4080
-# PROXY CERTS FILE PATH
-CONFIG proxy.config.ssl.client.cert.path STRING /proxy-cert.pem
-# PROXY KEY FILE PATH
-CONFIG proxy.config.ssl.client.cert.filename STRING /proxy-key.pem
-
-
-# The range of origin server ports that can be used for tunneling via CONNECT. # Traffic Server allows tunnels only to the specified ports. Supports both wildcards (*) and ranges (e.g. 0-1023).
-CONFIG proxy.config.http.connect_ports STRING 4443 6651
-
-```
-
-The `ssl_server_name` file is used to configure TLS connection handling for inbound and outbound connections. The configuration is determined by the SNI values provided by the inbound connection. The file consists of a set of configuration items, each identified by an SNI value (`fqdn`). When an inbound TLS connection is made, the SNI value from the TLS negotiation is matched against the items specified in this file. If the values match, the values specified in that item override the default values.
-
-The following example shows the mapping between the inbound SNI hostname coming from the client and the actual broker service URL to which the request should be redirected. For example, if the client sends the SNI header `pulsar-broker1`, the proxy creates a TLS tunnel by redirecting the request to the `pulsar-broker1:6651` service URL.
-
-```
-
-server_config = {
-  {
-     fqdn = 'pulsar-broker-vip',
-     # Forward to Pulsar broker which is listening on 6651
-     tunnel_route = 'pulsar-broker-vip:6651'
-  },
-  {
-     fqdn = 'pulsar-broker1',
-     # Forward to Pulsar broker-1 which is listening on 6651
-     tunnel_route = 'pulsar-broker1:6651'
-  },
-  {
-     fqdn = 'pulsar-broker2',
-     # Forward to Pulsar broker-2 which is listening on 6651
-     tunnel_route = 'pulsar-broker2:6651'
-  },
-}
-
-```
-
-After you configure the `ssl_server_name.config` and `records.config` files, the ATS-proxy server handles SNI routing and creates a TCP tunnel between the client and the broker.
-
-### Configure Pulsar-client with SNI routing
-ATS SNI-routing works only with TLS. You need to enable TLS for the ATS proxy and brokers first, configure the SNI routing protocol, and then connect Pulsar clients to brokers through the ATS proxy. Pulsar clients support SNI routing by connecting to the proxy and sending the target broker URL in the SNI header. This process happens internally. You only need to configure the following proxy settings when you create a Pulsar client to use the SNI routing protocol.
-
-````mdx-code-block
-
-
-
-
-```java
-
-String brokerServiceUrl = "pulsar+ssl://pulsar-broker-vip:6651/";
-String proxyUrl = "pulsar+ssl://ats-proxy:443";
-ClientBuilder clientBuilder = PulsarClient.builder()
-        .serviceUrl(brokerServiceUrl)
-        .tlsTrustCertsFilePath(TLS_TRUST_CERT_FILE_PATH)
-        .enableTls(true)
-        .allowTlsInsecureConnection(false)
-        .proxyServiceUrl(proxyUrl, ProxyProtocol.SNI)
-        .operationTimeout(1000, TimeUnit.MILLISECONDS);
-
-Map<String, String> authParams = new HashMap<>();
-authParams.put("tlsCertFile", TLS_CLIENT_CERT_FILE_PATH);
-authParams.put("tlsKeyFile", TLS_CLIENT_KEY_FILE_PATH);
-clientBuilder.authentication(AuthenticationTls.class.getName(), authParams);
-
-PulsarClient pulsarClient = clientBuilder.build();
-
-```
-
-
-
-```c++
-
-ClientConfiguration config = ClientConfiguration();
-config.setUseTls(true);
-config.setTlsTrustCertsFilePath("/path/to/cacert.pem");
-config.setTlsAllowInsecureConnection(false);
-config.setAuth(pulsar::AuthTls::create(
-            "/path/to/client-cert.pem", "/path/to/client-key.pem"));
-
-Client client("pulsar+ssl://ats-proxy:443", config);
-
-```
-
-
-
-```python
-
-from pulsar import Client, AuthenticationTLS
-
-auth = AuthenticationTLS("/path/to/my-role.cert.pem", "/path/to/my-role.key-pk8.pem")
-client = Client("pulsar+ssl://ats-proxy:443",
-                tls_trust_certs_file_path="/path/to/ca.cert.pem",
-                tls_allow_insecure_connection=False,
-                authentication=auth)
-
-```
-
-
-
-````
-
-### Pulsar geo-replication with SNI routing
-You can use the ATS proxy for geo-replication. Pulsar brokers can connect to brokers in other clusters in geo-replication by using SNI routing. To enable SNI routing for broker connections across clusters, you need to configure the SNI proxy URL in the cluster metadata. If you have configured the SNI proxy URL in the cluster metadata, brokers can connect to brokers in other clusters through the proxy over SNI routing.
-
-![Pulsar client SNI](/assets/pulsar-sni-geo.png)
-
-In this example, a Pulsar cluster is deployed into two separate regions, `us-west` and `us-east`. Both regions are configured with the ATS proxy, and brokers in each region run behind the ATS proxy. We configure the cluster metadata for both clusters, so brokers in one cluster can use SNI routing and connect to brokers in the other cluster through the ATS proxy.
-
-(a) Configure the cluster metadata for `us-east` with the `us-east` broker service URL and the `us-east` ATS proxy URL with the SNI proxy-protocol.
-
-```
-
-./pulsar-admin clusters update \
---broker-url-secure pulsar+ssl://east-broker-vip:6651 \
---url http://east-broker-vip:8080 \
---proxy-protocol SNI \
---proxy-url pulsar+ssl://east-ats-proxy:443
-
-```
-
-(b) Configure the cluster metadata for `us-west` with the `us-west` broker service URL and the `us-west` ATS proxy URL with the SNI proxy-protocol.
-
-```
-
-./pulsar-admin clusters update \
---broker-url-secure pulsar+ssl://west-broker-vip:6651 \
---url http://west-broker-vip:8080 \
---proxy-protocol SNI \
---proxy-url pulsar+ssl://west-ats-proxy:443
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/concepts-replication.md b/site2/website/versioned_docs/version-2.8.0-deprecated/concepts-replication.md
deleted file mode 100644
index 799f0eb4d92c6b..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/concepts-replication.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-id: concepts-replication
-title: Geo Replication
-sidebar_label: "Geo Replication"
-original_id: concepts-replication
----
-
-Pulsar enables messages to be produced and consumed in different geo-locations.
For instance, your application may be publishing data in one region or market and you would like to process it for consumption in other regions or markets. [Geo-replication](administration-geo.md) in Pulsar enables you to do that.

diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/concepts-tiered-storage.md b/site2/website/versioned_docs/version-2.8.0-deprecated/concepts-tiered-storage.md
deleted file mode 100644
index f6988e53a8cd4e..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/concepts-tiered-storage.md
+++ /dev/null
@@ -1,18 +0,0 @@
---
id: concepts-tiered-storage
title: Tiered Storage
sidebar_label: "Tiered Storage"
original_id: concepts-tiered-storage
---

Pulsar's segment-oriented architecture allows topic backlogs to grow very large, effectively without limit. However, this can become expensive over time.

One way to alleviate this cost is to use tiered storage. With tiered storage, older messages in the backlog can be moved from BookKeeper to a cheaper storage mechanism, while still allowing clients to access the backlog as if nothing had changed.

![Tiered Storage](/assets/pulsar-tiered-storage.png)

> Data written to BookKeeper is replicated to 3 physical machines by default. However, once a segment is sealed in BookKeeper it becomes immutable and can be copied to long-term storage. Long-term storage can achieve cost savings by using mechanisms such as [Reed-Solomon error correction](https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction) to require fewer physical copies of data.

Pulsar currently supports S3, Google Cloud Storage (GCS), and filesystem storage for [long-term storage](https://pulsar.apache.org/docs/en/cookbooks-tiered-storage/). Offloading to long-term storage is triggered via a REST API or the command-line interface. The user passes in the amount of topic data they wish to retain on BookKeeper, and the broker copies the backlog data to long-term storage. The original data is then deleted from BookKeeper after a configured delay (4 hours by default).

> For a guide to setting up tiered storage, see the [Tiered storage cookbook](cookbooks-tiered-storage.md).
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/concepts-topic-compaction.md b/site2/website/versioned_docs/version-2.8.0-deprecated/concepts-topic-compaction.md
deleted file mode 100644
index 34b7ed7fbbd31e..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/concepts-topic-compaction.md
+++ /dev/null
@@ -1,37 +0,0 @@
---
id: concepts-topic-compaction
title: Topic Compaction
sidebar_label: "Topic Compaction"
original_id: concepts-topic-compaction
---

Pulsar was built with highly scalable [persistent storage](concepts-architecture-overview.md#persistent-storage) of message data as a primary objective. Pulsar topics enable you to persistently store as many unacknowledged messages as you need while preserving message ordering. By default, Pulsar stores *all* unacknowledged/unprocessed messages produced on a topic. Accumulating many unacknowledged messages on a topic is necessary for many Pulsar use cases, but it can also be very time-intensive for Pulsar consumers to "rewind" through the entire log of messages.

> For a more practical guide to topic compaction, see the [Topic compaction cookbook](cookbooks-compaction.md).

For some use cases consumers don't need a complete "image" of the topic log.
They may only need a few values to construct a more "shallow" image of the log, perhaps even just the most recent value. For these kinds of use cases Pulsar offers **topic compaction**. When you run compaction on a topic, Pulsar goes through a topic's backlog and removes messages that are *obscured* by later messages, i.e. it goes through the topic on a per-key basis and leaves only the most recent message associated with that key.

Pulsar's topic compaction feature:

* Allows for faster "rewind" through topic logs
* Applies only to [persistent topics](concepts-architecture-overview.md#persistent-storage)
* Can be triggered automatically when the backlog reaches a certain size, or manually via the command line. See the [Topic compaction cookbook](cookbooks-compaction.md)
* Is conceptually and operationally distinct from [retention and expiry](concepts-messaging.md#message-retention-and-expiry). Topic compaction *does*, however, respect retention. If retention has removed a message from the message backlog of a topic, the message will also not be readable from the compacted topic ledger.

> #### Topic compaction example: the stock ticker
> An example use case for a compacted Pulsar topic would be a stock ticker topic. On a stock ticker topic, each message bears a timestamped dollar value for stocks for purchase (with the message key holding the stock symbol, e.g. `AAPL` or `GOOG`). With a stock ticker you may care only about the most recent value(s) of the stock and have no interest in historical data (i.e. you don't need to construct a complete image of the topic's sequence of messages per key). Compaction would be highly beneficial in this case because it would keep consumers from needing to rewind through obscured messages.


## How topic compaction works

When topic compaction is triggered [via the CLI](cookbooks-compaction.md), Pulsar iterates over the entire topic from beginning to end. For each key that it encounters, the compaction routine keeps a record of the latest occurrence of that key.

After that, the broker creates a new [BookKeeper ledger](concepts-architecture-overview.md#ledgers) and makes a second iteration through each message on the topic. For each message, if the key matches the latest occurrence of that key, then the key's data payload, message ID, and metadata are written to the newly created ledger. If the key doesn't match the latest occurrence, the message is skipped and left alone. If any given message has an empty payload, it is skipped and considered deleted (akin to the concept of [tombstones](https://en.wikipedia.org/wiki/Tombstone_(data_store)) in key-value databases). At the end of this second iteration through the topic, the newly created BookKeeper ledger is closed and two things are written to the topic's metadata: the ID of the BookKeeper ledger and the message ID of the last compacted message (this is known as the **compaction horizon** of the topic). Once this metadata is written, compaction is complete.

After the initial compaction operation, the Pulsar [broker](reference-terminology.md#broker) that owns the topic is notified whenever any future changes are made to the compaction horizon and compacted backlog.
When such changes occur:

* Clients (consumers and readers) that have `readCompacted` enabled attempt to read messages from the topic and either:
  * Read from the topic like normal (if the message ID is greater than or equal to the compaction horizon), or
  * Read beginning at the compaction horizon (if the message ID is lower than the compaction horizon)


diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/concepts-transactions.md b/site2/website/versioned_docs/version-2.8.0-deprecated/concepts-transactions.md
deleted file mode 100644
index 08490ba06b5d7e..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/concepts-transactions.md
+++ /dev/null
@@ -1,30 +0,0 @@
---
id: transactions
title: Transactions
sidebar_label: "Overview"
original_id: transactions
---

Transactional semantics enable event streaming applications to consume, process, and produce messages in one atomic operation. In Pulsar, a producer or consumer can work with messages across multiple topics and partitions and ensure those messages are processed as a single unit.

The following concepts help you understand Pulsar transactions.

## Transaction coordinator and transaction log
The transaction coordinator maintains the topics and subscriptions that interact in a transaction. When a transaction is committed, the transaction coordinator interacts with the topic owner broker to complete the transaction.

The transaction coordinator maintains the entire life cycle of transactions, and prevents a transaction from entering an incorrect status.

The transaction coordinator handles transaction timeouts, and ensures that a transaction is aborted after its timeout elapses.

All the transaction metadata is persisted in the transaction log. The transaction log is backed by a Pulsar topic. If the transaction coordinator crashes, it can restore the transaction metadata from the transaction log.

## Transaction ID
The transaction ID (TxnID) identifies a unique transaction in Pulsar. The transaction ID is 128 bits long. The highest 16 bits are reserved for the ID of the transaction coordinator, and the remaining bits are used for monotonically increasing numbers in each transaction coordinator. Given a TxnID, it is easy to locate the coordinator that owns a crashed transaction.

## Transaction buffer
Messages produced within a transaction are stored in the transaction buffer. The messages in the transaction buffer are not materialized (visible) to consumers until the transaction is committed. The messages in the transaction buffer are discarded when the transaction is aborted.

## Pending acknowledge state
Message acknowledgments within a transaction are maintained by the pending acknowledge state before the transaction completes. If a message is in the pending acknowledge state, the message cannot be acknowledged by other transactions until the message is removed from the pending acknowledge state.

The pending acknowledge state is persisted to the pending acknowledge log. The pending acknowledge log is backed by a Pulsar topic. A new broker can restore the state from the pending acknowledge log to ensure the acknowledgment is not lost.
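
The following sketch ties these concepts together: it consumes a message, produces a transformed copy, and acknowledges the input, all within a single transaction, using the Java client. This is an illustration under stated assumptions rather than a definitive recipe: it assumes a broker with transaction support enabled (for example, `transactionCoordinatorEnabled=true` in `broker.conf`), the topic and subscription names are placeholders, and the producer disables the send timeout as transactional sends expect.

```java

import java.util.concurrent.TimeUnit;
import org.apache.pulsar.client.api.*;
import org.apache.pulsar.client.api.transaction.Transaction;

PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650")
        .enableTransaction(true) // required to use the transaction API
        .build();

Consumer<byte[]> consumer = client.newConsumer()
        .topic("input-topic")
        .subscriptionName("txn-sub")
        .subscribe();

Producer<byte[]> producer = client.newProducer()
        .topic("output-topic")
        .sendTimeout(0, TimeUnit.SECONDS)
        .create();

// Begin a transaction; the coordinator assigns the TxnID
Transaction txn = client.newTransaction()
        .withTransactionTimeout(5, TimeUnit.MINUTES)
        .build()
        .get();

Message<byte[]> msg = consumer.receive();
// The produced message sits in the transaction buffer until commit
producer.newMessage(txn).value(msg.getData()).send();
// The acknowledgment is held in the pending acknowledge state until commit
consumer.acknowledgeAsync(msg.getMessageId(), txn);

// Committing makes the produced message visible and finalizes the ack
txn.commit().get();

```
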
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/cookbooks-bookkeepermetadata.md b/site2/website/versioned_docs/version-2.8.0-deprecated/cookbooks-bookkeepermetadata.md
deleted file mode 100644
index b0fa98dc3b65d5..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/cookbooks-bookkeepermetadata.md
+++ /dev/null
@@ -1,21 +0,0 @@
---
id: cookbooks-bookkeepermetadata
title: BookKeeper Ledger Metadata
original_id: cookbooks-bookkeepermetadata
---

Pulsar stores data in BookKeeper ledgers, and you can understand the contents of a ledger by inspecting the metadata attached to it. This metadata is stored in ZooKeeper and is readable using the BookKeeper APIs.

Description of current metadata:

| Scope | Metadata name | Metadata value |
| ------------- | ------------- | ------------- |
| All ledgers | application | 'pulsar' |
| All ledgers | component | 'managed-ledger', 'schema', 'compacted-topic' |
| Managed ledgers | pulsar/managed-ledger | name of the ledger |
| Cursor | pulsar/cursor | name of the cursor |
| Compacted topic | pulsar/compactedTopic | name of the original topic |
| Compacted topic | pulsar/compactedTo | id of the last compacted message |


diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/cookbooks-compaction.md b/site2/website/versioned_docs/version-2.8.0-deprecated/cookbooks-compaction.md
deleted file mode 100644
index dfa314727241a8..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/cookbooks-compaction.md
+++ /dev/null
@@ -1,142 +0,0 @@
---
id: cookbooks-compaction
title: Topic compaction
sidebar_label: "Topic compaction"
original_id: cookbooks-compaction
---

Pulsar's [topic compaction](concepts-topic-compaction.md#compaction) feature enables you to create **compacted** topics in which older, "obscured" entries are pruned from the topic, allowing for faster reads through the topic's history (which messages are deemed obscured/outdated/irrelevant will depend on your use case).

To use compaction:

* You need to give messages keys, as topic compaction in Pulsar takes place on a *per-key basis* (i.e. messages are compacted based on their key). For a stock ticker use case, the stock symbol---e.g. `AAPL` or `GOOG`---could serve as the key (more on this [below](#when-should-i-use-compacted-topics)). Messages without keys will be left alone by the compaction process.
* Compaction can be configured to run [automatically](#configuring-compaction-to-run-automatically), or you can manually [trigger](#triggering-compaction-manually) compaction using the Pulsar administrative API.
* Your consumers must be [configured](#consumer-configuration) to read from compacted topics ([Java consumers](#java), for example, have a `readCompacted` setting that must be set to `true`). If this configuration is not set, consumers will still be able to read from the non-compacted topic.


> Compaction only works on messages that have keys (as in the stock ticker example, where the stock symbol serves as the key for each message). Keys can thus be thought of as the axis along which compaction is applied. Messages that don't have keys are simply ignored by compaction.

## When should I use compacted topics?

The classic example of a topic that could benefit from compaction would be a stock ticker topic through which consumers can access up-to-date values for specific stocks.
Imagine a scenario in which messages carrying stock value data use the stock symbol as the key (`GOOG`, `AAPL`, `TWTR`, etc.). Compacting this topic would give consumers on the topic two options:

* They can read from the "original," non-compacted topic in case they need access to "historical" values, i.e. the entirety of the topic's messages.
* They can read from the compacted topic if they only want to see the most up-to-date messages.

Thus, if you're using a Pulsar topic called `stock-values`, some consumers could have access to all messages in the topic (perhaps because they're performing some kind of number crunching of all values in the last hour) while the consumers used to power the real-time stock ticker only see the compacted topic (and thus aren't forced to process outdated messages). Which variant of the topic any given consumer pulls messages from is determined by the consumer's [configuration](#consumer-configuration).

> One of the benefits of compaction in Pulsar is that you aren't forced to choose between compacted and non-compacted topics, as the compaction process leaves the original topic as-is and essentially adds an alternate topic. In other words, you can run compaction on a topic and consumers that need access to the non-compacted version of the topic will not be adversely affected.


## Configuring compaction to run automatically

Tenant administrators can configure a policy for compaction at the namespace level. The policy specifies how large the topic backlog can grow before compaction is triggered.

For example, to trigger compaction when the backlog reaches 100MB:

```bash

$ bin/pulsar-admin namespaces set-compaction-threshold \
  --threshold 100M my-tenant/my-namespace

```

Configuring the compaction threshold on a namespace will apply to all topics within that namespace.

## Triggering compaction manually

In order to run compaction on a topic, you need to use the [`topics compact`](reference-pulsar-admin.md#topics-compact) command for the [`pulsar-admin`](reference-pulsar-admin.md) CLI tool. Here's an example:

```bash

$ bin/pulsar-admin topics compact \
  persistent://my-tenant/my-namespace/my-topic

```

The `pulsar-admin` tool runs compaction via the Pulsar {@inject: rest:REST:/} API. To run compaction in its own dedicated process, i.e. *not* through the REST API, you can use the [`pulsar compact-topic`](reference-cli-tools.md#pulsar-compact-topic) command. Here's an example:

```bash

$ bin/pulsar compact-topic \
  --topic persistent://my-tenant/my-namespace/my-topic

```

> Running compaction in its own process is recommended when you want to avoid interfering with the broker's performance. Broker performance should only be affected, however, when running compaction on topics with a large keyspace (i.e. when there are many keys on the topic). The first phase of the compaction process keeps a copy of each key in the topic, which can create memory pressure as the number of keys grows. Using the `pulsar-admin topics compact` command to run compaction through the REST API should present no issues in the overwhelming majority of cases; using `pulsar compact-topic` should correspondingly be considered an edge case.

The `pulsar compact-topic` command communicates with [ZooKeeper](https://zookeeper.apache.org) directly. In order to establish communication with ZooKeeper, though, the `pulsar` CLI tool will need to have a valid [broker configuration](reference-configuration.md#broker).
You can either supply a proper configuration in `conf/broker.conf` or specify a non-default location for the configuration:

```bash

$ bin/pulsar compact-topic \
  --broker-conf /path/to/broker.conf \
  --topic persistent://my-tenant/my-namespace/my-topic

# If the configuration is in conf/broker.conf
$ bin/pulsar compact-topic \
  --topic persistent://my-tenant/my-namespace/my-topic

```

#### When should I trigger compaction?

How often you [trigger compaction](#triggering-compaction-manually) will vary widely based on the use case. If you want a compacted topic to be extremely speedy on read, then you should run compaction fairly frequently.

## Consumer configuration

Pulsar consumers and readers need to be configured to read from compacted topics. The sections below show you how to enable compacted topic reads for Pulsar's language clients.

### Java

In order to read from a compacted topic using a Java consumer, the `readCompacted` parameter must be set to `true`. Here's an example consumer for a compacted topic:

```java

Consumer<byte[]> compactedTopicConsumer = client.newConsumer()
        .topic("some-compacted-topic")
        .readCompacted(true)
        .subscribe();

```

As mentioned above, topic compaction in Pulsar works on a *per-key basis*. That means that messages that you produce on compacted topics need to have keys (the content of the key will depend on your use case). Messages that don't have keys will be ignored by the compaction process. Here's an example Pulsar message with a key:

```java

import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.MessageBuilder;

Message<byte[]> msg = MessageBuilder.create()
        .setContent(someByteArray)
        .setKey("some-key")
        .build();

```

The example below shows a message with a key being produced on a compacted Pulsar topic:

```java

import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.MessageBuilder;
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;

PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650")
        .build();

Producer<byte[]> compactedTopicProducer = client.newProducer()
        .topic("some-compacted-topic")
        .create();

Message<byte[]> msg = MessageBuilder.create()
        .setContent(someByteArray)
        .setKey("some-key")
        .build();

compactedTopicProducer.send(msg);

```

diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/cookbooks-deduplication.md b/site2/website/versioned_docs/version-2.8.0-deprecated/cookbooks-deduplication.md
deleted file mode 100644
index f7f9e3d7bb425b..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/cookbooks-deduplication.md
+++ /dev/null
@@ -1,151 +0,0 @@
---
id: cookbooks-deduplication
title: Message deduplication
sidebar_label: "Message deduplication"
original_id: cookbooks-deduplication
---

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````


When **message deduplication** is enabled, each message produced on a Pulsar topic is persisted to disk *only once*, even if the message is produced more than once. Message deduplication is handled automatically on the server side.

To use message deduplication in Pulsar, you need to configure your Pulsar brokers and clients.

## How it works

You can enable or disable message deduplication at the namespace level or the topic level. By default, it is disabled on all namespaces and topics.
You can enable it in the following ways:

* Enable deduplication for all namespaces/topics at the broker level.
* Enable deduplication for a specific namespace with the `pulsar-admin namespaces` interface.
* Enable deduplication for a specific topic with the `pulsar-admin topics` interface.

## Configure message deduplication

You can configure message deduplication in Pulsar using the [`broker.conf`](reference-configuration.md#broker) configuration file. The following deduplication-related parameters are available.

Parameter | Description | Default
:---------|:------------|:-------
`brokerDeduplicationEnabled` | Sets the default behavior for message deduplication in the Pulsar broker. If it is set to `true`, message deduplication is enabled on all namespaces/topics. If it is set to `false`, you have to enable or disable deduplication at the namespace level or the topic level. | `false`
`brokerDeduplicationMaxNumberOfProducers` | The maximum number of producers for which information is stored for deduplication purposes. | `10000`
`brokerDeduplicationEntriesInterval` | The number of entries after which a deduplication informational snapshot is taken. A larger interval leads to fewer snapshots being taken, though this lengthens the topic recovery time (the time required for entries published after the snapshot to be replayed). | `1000`
`brokerDeduplicationSnapshotIntervalSeconds`| The time period after which a deduplication informational snapshot is taken. It runs simultaneously with `brokerDeduplicationEntriesInterval`. |`120`
`brokerDeduplicationProducerInactivityTimeoutMinutes` | The time of inactivity (in minutes) after which the broker discards deduplication information related to a disconnected producer. | `360` (6 hours)

### Set the default value at the broker level

By default, message deduplication is *disabled* on all Pulsar namespaces/topics. To enable it on all namespaces/topics, set the `brokerDeduplicationEnabled` parameter to `true` and restart the broker.

Regardless of the value you set for `brokerDeduplicationEnabled`, enabling or disabling deduplication via the Pulsar admin CLI overrides the default broker-level setting.

### Enable message deduplication

Though message deduplication is disabled by default at the broker level, you can enable message deduplication for a specific namespace or topic using the [`pulsar-admin namespaces set-deduplication`](reference-pulsar-admin.md#namespace-set-deduplication) or the [`pulsar-admin topics set-deduplication`](reference-pulsar-admin.md#topic-set-deduplication) command. You can use the `--enable`/`-e` flag and specify the namespace/topic.

The following example shows how to enable message deduplication at the namespace level.

```bash

$ bin/pulsar-admin namespaces set-deduplication \
  public/default \
  --enable # or just -e

```

### Disable message deduplication

Even if you enable message deduplication at the broker level, you can disable message deduplication for a specific namespace or topic using the [`pulsar-admin namespace set-deduplication`](reference-pulsar-admin.md#namespace-set-deduplication) or the [`pulsar-admin topics set-deduplication`](reference-pulsar-admin.md#topic-set-deduplication) command. Use the `--disable`/`-d` flag and specify the namespace/topic.

The following example shows how to disable message deduplication at the namespace level.
```bash

$ bin/pulsar-admin namespaces set-deduplication \
  public/default \
  --disable # or just -d

```

## Pulsar clients

If you enable message deduplication in Pulsar brokers, you need to complete the following tasks for your client producers:

1. Specify a name for the producer.
1. Set the message timeout to `0` (namely, no timeout).

The instructions for Java, Python, and C++ clients are different.

````mdx-code-block
<Tabs>
<TabItem value="Java clients">

To enable message deduplication on a [Java producer](client-libraries-java.md#producers), set the producer name using the `producerName` setter, and set the timeout to `0` using the `sendTimeout` setter.

```java

import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;
import java.util.concurrent.TimeUnit;

PulsarClient pulsarClient = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650")
        .build();
Producer producer = pulsarClient.newProducer()
        .producerName("producer-1")
        .topic("persistent://public/default/topic-1")
        .sendTimeout(0, TimeUnit.SECONDS)
        .create();

```

</TabItem>
<TabItem value="Python clients">

To enable message deduplication on a [Python producer](client-libraries-python.md#producers), set the producer name using `producer_name`, and set the timeout to `0` using `send_timeout_millis`.

```python

import pulsar

client = pulsar.Client("pulsar://localhost:6650")
producer = client.create_producer(
    "persistent://public/default/topic-1",
    producer_name="producer-1",
    send_timeout_millis=0)

```

</TabItem>
<TabItem value="C++ clients">

To enable message deduplication on a [C++ producer](client-libraries-cpp.md#producer), set the producer name using `setProducerName`, and set the timeout to `0` using `setSendTimeout`.

```cpp

#include <pulsar/Client.h>

std::string serviceUrl = "pulsar://localhost:6650";
std::string topic = "persistent://some-tenant/ns1/topic-1";
std::string producerName = "producer-1";

Client client(serviceUrl);

ProducerConfiguration producerConfig;
producerConfig.setSendTimeout(0);
producerConfig.setProducerName(producerName);

Producer producer;

Result result = client.createProducer(topic, producerConfig, producer);

```

</TabItem>
</Tabs>
````
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/cookbooks-encryption.md b/site2/website/versioned_docs/version-2.8.0-deprecated/cookbooks-encryption.md
deleted file mode 100644
index f0d8fb8735eb63..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/cookbooks-encryption.md
+++ /dev/null
@@ -1,184 +0,0 @@
---
id: cookbooks-encryption
title: Pulsar Encryption
sidebar_label: "Encryption"
original_id: cookbooks-encryption
---

Pulsar encryption allows applications to encrypt messages at the producer and decrypt them at the consumer. Encryption is performed using a public/private key pair configured by the application. Encrypted messages can only be decrypted by consumers with a valid key.

## Asymmetric and symmetric encryption

Pulsar uses a dynamically generated symmetric AES key to encrypt messages (data). The AES key (data key) is encrypted using the application-provided ECDSA/RSA key pair; as a result, there is no need to share the secret with everyone.

The key is a public/private key pair used for encryption/decryption. The producer key is the public key, and the consumer key is the private key of the key pair.

The application configures the producer with the public key. This key is used to encrypt the AES data key. The encrypted data key is sent as part of the message header.
Only entities with the private key (in this case the consumer) are able to decrypt the data key, which is used to decrypt the message.

A message can be encrypted with more than one key. Any one of the keys used for encrypting the message is sufficient to decrypt the message.

Pulsar does not store the encryption key anywhere in the Pulsar service. If you lose or delete the private key, your message is irretrievably lost and cannot be recovered.

## Producer
![alt text](/assets/pulsar-encryption-producer.jpg "Pulsar Encryption Producer")

## Consumer
![alt text](/assets/pulsar-encryption-consumer.jpg "Pulsar Encryption Consumer")

## Here are the steps to get started:

1. Create your ECDSA or RSA public/private key pair.

```shell

openssl ecparam -name secp521r1 -genkey -param_enc explicit -out test_ecdsa_privkey.pem
openssl ec -in test_ecdsa_privkey.pem -pubout -outform pkcs8 -out test_ecdsa_pubkey.pem

```

2. Add the public and private key to your key management system, and configure your producers to retrieve public keys and your consumer clients to retrieve private keys.
3. Implement the `CryptoKeyReader::getPublicKey()` interface for the producer and the `CryptoKeyReader::getPrivateKey()` interface for the consumer, which the Pulsar client invokes to load the keys.
4. Add the encryption key to the producer configuration: `conf.addEncryptionKey("myapp.key")`
5. Add the `CryptoKeyReader` implementation to the producer/consumer configuration: `conf.setCryptoKeyReader(keyReader)`
6. Sample producer application:

```java

class RawFileKeyReader implements CryptoKeyReader {

    String publicKeyFile = "";
    String privateKeyFile = "";

    RawFileKeyReader(String pubKeyFile, String privKeyFile) {
        publicKeyFile = pubKeyFile;
        privateKeyFile = privKeyFile;
    }

    @Override
    public EncryptionKeyInfo getPublicKey(String keyName, Map<String, String> keyMeta) {
        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
        try {
            keyInfo.setKey(Files.readAllBytes(Paths.get(publicKeyFile)));
        } catch (IOException e) {
            System.out.println("ERROR: Failed to read public key from file " + publicKeyFile);
            e.printStackTrace();
        }
        return keyInfo;
    }

    @Override
    public EncryptionKeyInfo getPrivateKey(String keyName, Map<String, String> keyMeta) {
        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
        try {
            keyInfo.setKey(Files.readAllBytes(Paths.get(privateKeyFile)));
        } catch (IOException e) {
            System.out.println("ERROR: Failed to read private key from file " + privateKeyFile);
            e.printStackTrace();
        }
        return keyInfo;
    }
}

PulsarClient pulsarClient = PulsarClient.create("http://localhost:8080");

ProducerConfiguration prodConf = new ProducerConfiguration();
prodConf.setCryptoKeyReader(new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem"));
prodConf.addEncryptionKey("myappkey");

Producer producer = pulsarClient.createProducer("persistent://my-tenant/my-ns/my-topic", prodConf);

for (int i = 0; i < 10; i++) {
    producer.send("my-message".getBytes());
}

pulsarClient.close();

```

7. Sample consumer application:
```java

class RawFileKeyReader implements CryptoKeyReader {

    String publicKeyFile = "";
    String privateKeyFile = "";

    RawFileKeyReader(String pubKeyFile, String privKeyFile) {
        publicKeyFile = pubKeyFile;
        privateKeyFile = privKeyFile;
    }

    @Override
    public EncryptionKeyInfo getPublicKey(String keyName, Map<String, String> keyMeta) {
        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
        try {
            keyInfo.setKey(Files.readAllBytes(Paths.get(publicKeyFile)));
        } catch (IOException e) {
            System.out.println("ERROR: Failed to read public key from file " + publicKeyFile);
            e.printStackTrace();
        }
        return keyInfo;
    }

    @Override
    public EncryptionKeyInfo getPrivateKey(String keyName, Map<String, String> keyMeta) {
        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
        try {
            keyInfo.setKey(Files.readAllBytes(Paths.get(privateKeyFile)));
        } catch (IOException e) {
            System.out.println("ERROR: Failed to read private key from file " + privateKeyFile);
            e.printStackTrace();
        }
        return keyInfo;
    }
}

ConsumerConfiguration consConf = new ConsumerConfiguration();
consConf.setCryptoKeyReader(new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem"));
PulsarClient pulsarClient = PulsarClient.create("http://localhost:8080");
Consumer consumer = pulsarClient.subscribe("persistent://my-tenant/my-ns/my-topic", "my-subscriber-name", consConf);
Message msg = null;

for (int i = 0; i < 10; i++) {
    msg = consumer.receive();
    // do something
    System.out.println("Received: " + new String(msg.getData()));
}

// Acknowledge the consumption of all messages at once
consumer.acknowledgeCumulative(msg);
pulsarClient.close();

```

## Key rotation
Pulsar generates a new AES data key every 4 hours or after a certain number of messages are published. The producer fetches the asymmetric public key every 4 hours by calling `CryptoKeyReader::getPublicKey()` to retrieve the latest version.

## Enabling encryption at the producer application:
If you produce messages that are consumed across application boundaries, you need to ensure that consumers in other applications have access to one of the private keys that can decrypt the messages. This can be done in two ways:
1. The consumer application provides you access to its public key, which you add to your producer keys.
1. You grant access to one of the private keys from the pairs used by the producer.

In some cases, the producer may want to encrypt the messages with multiple keys. For this, add all such keys to the configuration. The consumer is able to decrypt the message as long as it has access to at least one of the keys.

E.g.: if messages need to be encrypted using two keys, `myapp.messagekey1` and `myapp.messagekey2`:

```java

conf.addEncryptionKey("myapp.messagekey1");
conf.addEncryptionKey("myapp.messagekey2");

```

## Decrypting encrypted messages at the consumer application:
Consumers require access to one of the private keys to decrypt messages produced by the producer. If you would like to receive encrypted messages, create a public/private key pair and give your public key to the producer application to encrypt messages using your public key.

## Handling Failures:
* Producer/Consumer loses access to the key
  * The producer action fails and indicates the cause of the failure. The application has the option to proceed with sending unencrypted messages in such cases. Call `conf.setCryptoFailureAction(ProducerCryptoFailureAction)` to control the producer behavior.
The default behavior is to fail the request.
  * If consumption fails due to a decryption failure or missing keys at the consumer, the application has the option to consume the encrypted message or discard it. Call `conf.setCryptoFailureAction(ConsumerCryptoFailureAction)` to control the consumer behavior. The default behavior is to fail the request. The application will never be able to decrypt the messages if the private key is permanently lost.
* Batch messaging
  * If decryption fails and the message contains batch messages, the client is not able to retrieve the individual messages in the batch, so message consumption fails even if `conf.setCryptoFailureAction()` is set to `CONSUME`.
* If decryption fails, message consumption stops and the application will notice backlog growth in addition to decryption failure messages in the client log. If the application does not have access to the private key to decrypt the message, the only option is to skip/discard the backlogged messages.

diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/cookbooks-message-queue.md b/site2/website/versioned_docs/version-2.8.0-deprecated/cookbooks-message-queue.md
deleted file mode 100644
index eb43cbde5fb818..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/cookbooks-message-queue.md
+++ /dev/null
@@ -1,127 +0,0 @@
---
id: cookbooks-message-queue
title: Using Pulsar as a message queue
sidebar_label: "Message queue"
original_id: cookbooks-message-queue
---

Message queues are essential components of many large-scale data architectures. If every single work object that passes through your system absolutely *must* be processed in spite of the slowness or downright failure of this or that system component, there's a good chance that you'll need a message queue to step in and ensure that unprocessed data is retained---with correct ordering---until the required actions are taken.

Pulsar is a great choice for a message queue because:

* it was built with [persistent message storage](concepts-architecture-overview.md#persistent-storage) in mind
* it offers automatic load balancing across [consumers](reference-terminology.md#consumer) for messages on a topic (or custom load balancing if you wish)

> You can use the same Pulsar installation to act as a real-time message bus and as a message queue if you wish (or just one or the other). You can set aside some topics for real-time purposes and other topics for message queue purposes (or use specific namespaces for either purpose if you wish).


# Client configuration changes

To use a Pulsar [topic](reference-terminology.md#topic) as a message queue, you should distribute the receiver load on that topic across several consumers (the optimal number of consumers will depend on the load). Each consumer must:

* Establish a [shared subscription](concepts-messaging.md#shared) and use the same subscription name as the other consumers (otherwise the subscription is not shared and the consumers can't act as a processing ensemble)
* If you'd like to have tight control over message dispatching across consumers, set the consumers' **receiver queue** size very low (potentially even to 0 if necessary). Each Pulsar [consumer](reference-terminology.md#consumer) has a receiver queue that determines how many messages the consumer attempts to fetch at a time. A receiver queue of 1000 (the default), for example, means that the consumer attempts to process 1000 messages from the topic's backlog upon connection.
Setting the receiver queue to zero essentially means ensuring that each consumer is only doing one thing at a time.

  The downside to restricting the receiver queue size of consumers is that it limits the potential throughput of those consumers, and it cannot be used with [partitioned topics](reference-terminology.md#partitioned-topic). Whether the performance/control trade-off is worthwhile will depend on your use case.

## Java clients

Here's an example Java consumer configuration that uses a shared subscription:

```java

import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.SubscriptionType;

String SERVICE_URL = "pulsar://localhost:6650";
String TOPIC = "persistent://public/default/mq-topic-1";
String subscription = "sub-1";

PulsarClient client = PulsarClient.builder()
        .serviceUrl(SERVICE_URL)
        .build();

Consumer consumer = client.newConsumer()
        .topic(TOPIC)
        .subscriptionName(subscription)
        .subscriptionType(SubscriptionType.Shared)
        // If you'd like to restrict the receiver queue size
        .receiverQueueSize(10)
        .subscribe();

```

## Python clients

Here's an example Python consumer configuration that uses a shared subscription:

```python

from pulsar import Client, ConsumerType

SERVICE_URL = "pulsar://localhost:6650"
TOPIC = "persistent://public/default/mq-topic-1"
SUBSCRIPTION = "sub-1"

client = Client(SERVICE_URL)
consumer = client.subscribe(
    TOPIC,
    SUBSCRIPTION,
    # If you'd like to restrict the receiver queue size
    receiver_queue_size=10,
    consumer_type=ConsumerType.Shared)

```

## C++ clients

Here's an example C++ consumer configuration that uses a shared subscription:

```cpp

#include <pulsar/Client.h>

std::string serviceUrl = "pulsar://localhost:6650";
std::string topic = "persistent://public/default/mq-topic-1";
std::string subscription = "sub-1";

Client client(serviceUrl);

ConsumerConfiguration consumerConfig;
consumerConfig.setConsumerType(ConsumerType::ConsumerShared);
// If you'd like to restrict the receiver queue size
consumerConfig.setReceiverQueueSize(10);

Consumer consumer;

Result result = client.subscribe(topic, subscription, consumerConfig, consumer);

```

## Go clients

Here is an example of a Go consumer configuration that uses a shared subscription:

```go

import "github.com/apache/pulsar-client-go/pulsar"

client, err := pulsar.NewClient(pulsar.ClientOptions{
    URL: "pulsar://localhost:6650",
})
if err != nil {
    log.Fatal(err)
}
consumer, err := client.Subscribe(pulsar.ConsumerOptions{
    Topic:             "persistent://public/default/mq-topic-1",
    SubscriptionName:  "sub-1",
    Type:              pulsar.Shared,
    ReceiverQueueSize: 10, // If you'd like to restrict the receiver queue size
})
if err != nil {
    log.Fatal(err)
}

```

diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/cookbooks-non-persistent.md b/site2/website/versioned_docs/version-2.8.0-deprecated/cookbooks-non-persistent.md
deleted file mode 100644
index 178301e86eb8df..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/cookbooks-non-persistent.md
+++ /dev/null
@@ -1,63 +0,0 @@
---
id: cookbooks-non-persistent
title: Non-persistent messaging
sidebar_label: "Non-persistent messaging"
original_id: cookbooks-non-persistent
---

**Non-persistent topics** are Pulsar topics in which message data is *never* [persistently stored](concepts-architecture-overview.md#persistent-storage) and is kept only in memory.
This cookbook provides:

* A basic [conceptual overview](#overview) of non-persistent topics
* Information about [configurable parameters](#configuration) related to non-persistent topics
* A guide to the [CLI interface](#cli) for managing non-persistent topics

## Overview

By default, Pulsar persistently stores *all* unacknowledged messages on multiple [BookKeeper](#persistent-storage) bookies (storage nodes). Data for messages on persistent topics can thus survive broker restarts and subscriber failover.

Pulsar also, however, supports **non-persistent topics**, which are topics on which messages are *never* persisted to disk and live only in memory. When using non-persistent delivery, killing a Pulsar [broker](reference-terminology.md#broker) or disconnecting a subscriber from a topic means that all in-transit messages are lost on that (non-persistent) topic, meaning that clients may see message loss.

Non-persistent topics have names of this form (note the `non-persistent` in the name):

```http

non-persistent://tenant/namespace/topic

```

> For more high-level information about non-persistent topics, see the [Concepts and Architecture](concepts-messaging.md#non-persistent-topics) documentation.

## Using

> In order to use non-persistent topics, they must be [enabled](#enabling) in your Pulsar broker configuration.

In order to use non-persistent topics, you only need to differentiate them by name when interacting with them. This [`pulsar-client produce`](reference-cli-tools.md#pulsar-client-produce) command, for example, would produce one message on a non-persistent topic in a standalone cluster:

```bash

$ bin/pulsar-client produce non-persistent://public/default/example-np-topic \
  --num-produce 1 \
  --messages "This message will be stored only in memory"

```

> For a more thorough guide to non-persistent topics from an administrative perspective, see the [Non-persistent topics](admin-api-topics.md) guide.

## Enabling

In order to enable non-persistent topics in a Pulsar broker, the [`enableNonPersistentTopics`](reference-configuration.md#broker-enableNonPersistentTopics) parameter must be set to `true`. This is the default, so you won't need to take any action to enable non-persistent messaging.


> #### Configuration for standalone mode
> If you're running Pulsar in standalone mode, the same configurable parameters are available, but in the [`standalone.conf`](reference-configuration.md#standalone) configuration file.

If you'd like to enable *only* non-persistent topics in a broker, you can set the [`enablePersistentTopics`](reference-configuration.md#broker-enablePersistentTopics) parameter to `false` and the `enableNonPersistentTopics` parameter to `true`.

## Managing with cli

Non-persistent topics can be managed using the [`pulsar-admin non-persistent`](reference-pulsar-admin.md#non-persistent) command-line interface. With that interface you can perform actions like [creating a partitioned non-persistent topic](reference-pulsar-admin.md#non-persistent-create-partitioned-topic), getting [stats](reference-pulsar-admin.md#non-persistent-stats) for a non-persistent topic, [listing](reference-pulsar-admin.md) non-persistent topics under a namespace, and more.

## Using with Pulsar clients

You shouldn't need to make any changes to your Pulsar clients to use non-persistent messaging beyond making sure that you use proper [topic names](#using) with `non-persistent` as the topic type.
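
For instance, a Java producer targets a non-persistent topic simply by naming it as such. This minimal sketch (assuming a standalone cluster at `localhost:6650`) mirrors the `pulsar-client` example above:

```java

import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;

PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650")
        .build();

// The non-persistent:// prefix is the only change relative to a persistent topic
Producer<byte[]> producer = client.newProducer()
        .topic("non-persistent://public/default/example-np-topic")
        .create();

producer.send("This message will be stored only in memory".getBytes());

client.close();

```
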
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/cookbooks-partitioned.md b/site2/website/versioned_docs/version-2.8.0-deprecated/cookbooks-partitioned.md
deleted file mode 100644
index fb9ac354cc6d60..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/cookbooks-partitioned.md
+++ /dev/null
@@ -1,7 +0,0 @@
---
id: cookbooks-partitioned
title: Partitioned topics
sidebar_label: "Partitioned Topics"
original_id: cookbooks-partitioned
---
For details of the content, refer to [manage topics](admin-api-topics.md).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/cookbooks-retention-expiry.md b/site2/website/versioned_docs/version-2.8.0-deprecated/cookbooks-retention-expiry.md
deleted file mode 100644
index 192fbe6c53277f..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/cookbooks-retention-expiry.md
+++ /dev/null
@@ -1,413 +0,0 @@
---
id: cookbooks-retention-expiry
title: Message retention and expiry
sidebar_label: "Message retention and expiry"
original_id: cookbooks-retention-expiry
---

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````


Pulsar brokers are responsible for handling messages that pass through Pulsar, including [persistent storage](concepts-architecture-overview.md#persistent-storage) of messages. By default, for each topic, brokers only retain messages that are in at least one backlog. A backlog is the set of unacknowledged messages for a particular subscription. As a topic can have multiple subscriptions, a topic can have multiple backlogs.

As a consequence, no messages are retained (by default) on a topic that has not had any subscriptions created for it.

(Note that messages that are no longer being stored are not necessarily immediately deleted, and may in fact still be accessible until the next ledger rollover. Because clients cannot predict when rollovers may happen, it is not wise to rely on a rollover not happening at an inconvenient point in time.)

In Pulsar, you can modify this behavior, with namespace granularity, in two ways:

* You can persistently store messages that are not within a backlog (because they've been acknowledged on every existing subscription, or because there are no subscriptions) by setting [retention policies](#retention-policies).
* Messages that are not acknowledged within a specified timeframe can be automatically acknowledged by specifying the [time to live](#time-to-live-ttl) (TTL).

Pulsar's [admin interface](admin-api-overview.md) enables you to manage both retention policies and TTL with namespace granularity (and thus within a specific tenant and either on a specific cluster or in the [`global`](concepts-architecture-overview.md#global-cluster) cluster).


> #### Retention and TTL solve two different problems
> * Message retention: Keep the data for at least X hours (even if acknowledged)
> * Time-to-live: Discard data after some time (by automatically acknowledging)
>
> Most applications will want to use at most one of these.


## Retention policies

By default, when a Pulsar message arrives at a broker, the message is stored until it has been acknowledged on all subscriptions, at which point it is marked for deletion. You can override this behavior and retain messages that have already been acknowledged on all subscriptions by setting a *retention policy* for all topics in a given namespace.
Retention is based on both a *size limit* and a *time limit*.

Retention policies are useful when you use the Reader interface. The Reader interface does not use acknowledgments, and messages do not exist within backlogs. It is required to configure retention for Reader-only use cases.

When you set a retention policy on topics in a namespace, you must set **both** a *size limit* and a *time limit*. You can refer to the following table to set retention policies in `pulsar-admin` and Java.

|Time limit|Size limit| Message retention |
|----------|----------|------------------------|
| -1 | -1 | Infinite retention |
| -1 | >0 | Based on the size limit |
| >0 | -1 | Based on the time limit |
| 0 | 0 | Disable message retention (by default) |
| 0 | >0 | Invalid |
| >0 | 0 | Invalid |
| >0 | >0 | Acknowledged messages or messages with no active subscription will not be retained when either time or size reaches the limit. |

The retention settings apply to all messages on topics that do not have any subscriptions, or to messages that have been acknowledged by all subscriptions. The retention policy settings do not affect unacknowledged messages on topics with subscriptions. The unacknowledged messages are controlled by the backlog quota.

When a retention limit on a topic is exceeded, the oldest messages are marked for deletion until the set of retained messages falls within the specified limits again.

### Defaults

You can set message retention at the instance level with the following two parameters: `defaultRetentionTimeInMinutes` and `defaultRetentionSizeInMB`. Both parameters are set to `0` by default.

For more information about the two parameters, refer to the [`broker.conf`](reference-configuration.md#broker) configuration file.

### Set retention policy

You can set a retention policy for a namespace by specifying the namespace, a size limit, and a time limit in `pulsar-admin`, the REST API, or Java.

````mdx-code-block
<Tabs>
<TabItem value="pulsar-admin">

You can use the [`set-retention`](reference-pulsar-admin.md#namespaces-set-retention) subcommand and specify a namespace, a size limit using the `-s`/`--size` flag, and a time limit using the `-t`/`--time` flag.

In the following example, the size limit is set to 10 GB and the time limit is set to 3 hours for each topic within the `my-tenant/my-ns` namespace.
- When the size of messages reaches 10 GB on a topic within 3 hours, the acknowledged messages will not be retained.
- After 3 hours, even if the message size is less than 10 GB, the acknowledged messages will not be retained.

```shell

$ pulsar-admin namespaces set-retention my-tenant/my-ns \
  --size 10G \
  --time 3h

```

In the following example, the time is not limited and the size limit is set to 1 TB. The size limit determines the retention.

```shell

$ pulsar-admin namespaces set-retention my-tenant/my-ns \
  --size 1T \
  --time -1

```

In the following example, the size is not limited and the time limit is set to 3 hours. The time limit determines the retention.

```shell

$ pulsar-admin namespaces set-retention my-tenant/my-ns \
  --size -1 \
  --time 3h

```

To achieve infinite retention, set both values to `-1`.

```shell

$ pulsar-admin namespaces set-retention my-tenant/my-ns \
  --size -1 \
  --time -1

```

To disable the retention policy, set both values to `0`.
```shell

$ pulsar-admin namespaces set-retention my-tenant/my-ns \
  --size 0 \
  --time 0

```

</TabItem>
<TabItem value="REST API">

{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/retention|operation/setRetention?version=@pulsar:version_number@}

:::note

To disable the retention policy, you need to set both the size and time limits to `0`. Setting either the size limit or the time limit to `0` on its own is invalid.

:::

</TabItem>
<TabItem value="Java">

```java

int retentionTime = 10; // 10 minutes
int retentionSize = 500; // 500 megabytes
RetentionPolicies policies = new RetentionPolicies(retentionTime, retentionSize);
admin.namespaces().setRetention(namespace, policies);

```

</TabItem>
</Tabs>
````

### Get retention policy

You can fetch the retention policy for a namespace by specifying the namespace. The output will be a JSON object with two keys: `retentionTimeInMinutes` and `retentionSizeInMB`.

#### pulsar-admin

Use the [`get-retention`](reference-pulsar-admin.md#namespaces) subcommand and specify the namespace.

##### Example

```shell

$ pulsar-admin namespaces get-retention my-tenant/my-ns
{
  "retentionTimeInMinutes": 10,
  "retentionSizeInMB": 500
}

```

#### REST API

{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/retention|operation/getRetention?version=@pulsar:version_number@}

#### Java

```java

admin.namespaces().getRetention(namespace);

```

## Backlog quotas

*Backlogs* are sets of unacknowledged messages for a topic that have been stored by bookies. Pulsar stores all unacknowledged messages in backlogs until they are processed and acknowledged.

You can control the allowable size of backlogs, at the namespace level, using *backlog quotas*. Setting a backlog quota involves setting:

* an allowable *size threshold* for each topic in the namespace
* a *retention policy* that determines which action the [broker](reference-terminology.md#broker) takes if the threshold is exceeded.

The following retention policies are available:

Policy | Action
:------|:------
`producer_request_hold` | The broker will hold and not persist produce request payloads
`producer_exception` | The broker will disconnect from the client by throwing an exception
`consumer_backlog_eviction` | The broker will begin discarding backlog messages


> #### Beware the distinction between retention policy types
> As you may have noticed, there are two definitions of the term "retention policy" in Pulsar: one that applies to persistent storage of messages not in backlogs, and one that applies to messages within backlogs.


Backlog quotas are handled at the namespace level. They can be managed via the `pulsar-admin` tool, the REST API, or the Java admin client, as shown in the following sections.

### Set size/time thresholds and backlog retention policies

You can set a size and/or time threshold and a backlog retention policy for all of the topics in a [namespace](reference-terminology.md#namespace) by specifying the namespace, a size limit and/or a time limit in seconds, and a policy by name.

#### pulsar-admin

Use the [`set-backlog-quota`](reference-pulsar-admin.md#namespaces) subcommand and specify a namespace, a size limit using the `-l`/`--limit` flag, and a retention policy using the `-p`/`--policy` flag.
##### Example

```shell

$ pulsar-admin namespaces set-backlog-quota my-tenant/my-ns \
  --limit 2G \
  --limitTime 36000 \
  --policy producer_request_hold

```

#### REST API

{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/getBacklogQuotaMap?version=@pulsar:version_number@}

#### Java

```java

long sizeLimit = 2147483648L;
BacklogQuota.RetentionPolicy policy = BacklogQuota.RetentionPolicy.producer_request_hold;
BacklogQuota quota = new BacklogQuota(sizeLimit, policy);
admin.namespaces().setBacklogQuota(namespace, quota);

```

### Get backlog threshold and backlog retention policy

You can see which size threshold and backlog retention policy have been applied to a namespace.

#### pulsar-admin

Use the [`get-backlog-quotas`](reference-pulsar-admin.md#pulsar-admin-namespaces-get-backlog-quotas) subcommand and specify a namespace. Here's an example:

```shell

$ pulsar-admin namespaces get-backlog-quotas my-tenant/my-ns
{
  "destination_storage": {
    "limit" : 2147483648,
    "policy" : "producer_request_hold"
  }
}

```

#### REST API

{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/backlogQuotaMap|operation/getBacklogQuotaMap?version=@pulsar:version_number@}

#### Java

```java

Map quotas =
    admin.namespaces().getBacklogQuotas(namespace);

```

### Remove backlog quotas

#### pulsar-admin

Use the [`remove-backlog-quota`](reference-pulsar-admin.md#pulsar-admin-namespaces-remove-backlog-quota) subcommand and specify a namespace. Here's an example:

```shell

$ pulsar-admin namespaces remove-backlog-quota my-tenant/my-ns

```

#### REST API

{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/removeBacklogQuota?version=@pulsar:version_number@}

#### Java

```java

admin.namespaces().removeBacklogQuota(namespace);

```

### Clear backlog

#### pulsar-admin

Use the [`clear-backlog`](reference-pulsar-admin.md#pulsar-admin-namespaces-clear-backlog) subcommand.

##### Example

```shell

$ pulsar-admin namespaces clear-backlog my-tenant/my-ns

```

By default, you will be prompted to confirm that you really want to clear the backlog for the namespace. You can override the prompt using the `-f`/`--force` flag.

## Time to live (TTL)

By default, Pulsar stores all unacknowledged messages forever. This can lead to heavy disk space usage in cases where a lot of messages are going unacknowledged. If disk space is a concern, you can set a time to live (TTL) that determines how long unacknowledged messages will be retained.

### Set the TTL for a namespace

#### pulsar-admin

Use the [`set-message-ttl`](reference-pulsar-admin.md#pulsar-admin-namespaces-set-message-ttl) subcommand and specify a namespace and a TTL (in seconds) using the `-ttl`/`--messageTTL` flag.

##### Example

```shell

$ pulsar-admin namespaces set-message-ttl my-tenant/my-ns \
  --messageTTL 120 # TTL of 2 minutes

```

#### REST API

{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/setNamespaceMessageTTL?version=@pulsar:version_number@}

#### Java

```java

admin.namespaces().setNamespaceMessageTTL(namespace, ttlInSeconds);

```

### Get the TTL configuration for a namespace

#### pulsar-admin

Use the [`get-message-ttl`](reference-pulsar-admin.md#pulsar-admin-namespaces-get-message-ttl) subcommand and specify a namespace.

### Get the TTL configuration for a namespace

#### pulsar-admin

Use the [`get-message-ttl`](reference-pulsar-admin.md#pulsar-admin-namespaces-get-message-ttl) subcommand and specify a namespace.

##### Example

```shell

$ pulsar-admin namespaces get-message-ttl my-tenant/my-ns
60

```

#### REST API

{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/getNamespaceMessageTTL?version=@pulsar:version_number@}

#### Java

```java

admin.namespaces().getNamespaceMessageTTL(namespace);

```

### Remove the TTL configuration for a namespace

#### pulsar-admin

Use the [`remove-message-ttl`](reference-pulsar-admin.md#pulsar-admin-namespaces-remove-message-ttl) subcommand and specify a namespace.

##### Example

```shell

$ pulsar-admin namespaces remove-message-ttl my-tenant/my-ns

```

#### REST API

{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/removeNamespaceMessageTTL?version=@pulsar:version_number@}

#### Java

```java

admin.namespaces().removeNamespaceMessageTTL(namespace);

```

## Delete messages from namespaces

If you have no retention period configured and never accumulate much of a backlog, the upper limit on how long acknowledged messages are retained equals the Pulsar segment rollover period + the entry log rollover period + (garbage collection interval * garbage collection ratios).

- **Segment rollover period**: basically, the segment rollover period is how often a new segment is created. Once a new segment is created, the old segment will be deleted. By default, this happens either when you have written 50,000 entries (messages) or have waited 240 minutes. You can tune this in your broker.

- **Entry log rollover period**: multiple ledgers in BookKeeper are interleaved into an [entry log](https://bookkeeper.apache.org/docs/4.11.1/getting-started/concepts/#entry-logs). Before the data of a deleted ledger can be reclaimed, the entry log containing it must first be rolled over. The entry log rollover period is configurable, but is purely based on the entry log size. For details, see [here](https://bookkeeper.apache.org/docs/4.11.1/reference/config/#entry-log-settings). Once the entry log is rolled over, the entry log can be garbage collected.

- **Garbage collection interval**: because entry logs have interleaved ledgers, to free up space, the entry logs need to be rewritten. The garbage collection interval is how often BookKeeper performs garbage collection, which involves minor and major compaction of entry logs. For details, see [here](https://bookkeeper.apache.org/docs/4.11.1/reference/config/#entry-log-compaction-settings).
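
To make that arithmetic concrete, here is an illustration under assumed values (only the 240-minute segment rollover is a documented default; the other two numbers are purely illustrative and vary per deployment): with a 240-minute segment rollover, an entry log that happens to roll over after another 60 minutes, and garbage collection that picks the entry log up 15 minutes later, an acknowledged message could remain on disk for roughly 240 + 60 + 15 = 315 minutes in the worst case.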

diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/cookbooks-tiered-storage.md b/site2/website/versioned_docs/version-2.8.0-deprecated/cookbooks-tiered-storage.md
deleted file mode 100644
index 4c86166c7b1ceb..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/cookbooks-tiered-storage.md
+++ /dev/null
@@ -1,342 +0,0 @@
---
id: cookbooks-tiered-storage
title: Tiered Storage
sidebar_label: "Tiered Storage"
original_id: cookbooks-tiered-storage
---

Pulsar's **Tiered Storage** feature allows older backlog data to be offloaded to long term storage, thereby freeing up space in BookKeeper and reducing storage costs. This cookbook walks you through using tiered storage in your Pulsar cluster.

* Tiered storage uses [Apache jclouds](https://jclouds.apache.org) to support [Amazon S3](https://aws.amazon.com/s3/) and [Google Cloud Storage](https://cloud.google.com/storage/) (GCS for short) for long term storage. With jclouds, it is easy to add support for more [cloud storage providers](https://jclouds.apache.org/reference/providers/#blobstore-providers) in the future.

* Tiered storage uses [Apache Hadoop](http://hadoop.apache.org/) to support filesystem storage for the long term. With Hadoop, it is easy to add support for more filesystems in the future.

## When should I use Tiered Storage?

Tiered storage should be used when you have a topic for which you want to keep a very long backlog for a long time. For example, if you have a topic containing user actions which you use to train your recommendation systems, you may want to keep that data for a long time, so that if you change your recommendation algorithm you can rerun it against your full user history.

## The offloading mechanism

A topic in Pulsar is backed by a log, known as a managed ledger. This log is composed of an ordered list of segments. Pulsar only ever writes to the final segment of the log. All previous segments are sealed. The data within the segment is immutable. This is known as a segment oriented architecture.

![Tiered storage](/assets/pulsar-tiered-storage.png "Tiered Storage")

The Tiered Storage offloading mechanism takes advantage of this segment oriented architecture. When offloading is requested, the segments of the log are copied, one-by-one, to tiered storage. All segments of the log, apart from the segment currently being written to, can be offloaded.

On the broker, the administrator must configure the bucket and credentials for the cloud storage service.
The configured bucket must exist before attempting to offload. If it does not exist, the offload operation will fail.

Pulsar uses multi-part objects to upload the segment data. It is possible that a broker could crash while uploading the data.
We recommend you add a lifecycle rule to your bucket to expire incomplete multipart uploads after a day or two to avoid
being charged for incomplete uploads.

When ledgers are offloaded to long term storage, you can still query data in the offloaded ledgers with Pulsar SQL.

## Configuring the offload driver

Offloading is configured in ```broker.conf```.

At a minimum, the administrator must configure the driver, the bucket, and the authenticating credentials.
There are also other knobs to configure, such as the bucket region and the maximum block size in backing storage.

Currently, the following driver types are supported:

- `aws-s3`: [Simple Cloud Storage Service](https://aws.amazon.com/s3/)
- `google-cloud-storage`: [Google Cloud Storage](https://cloud.google.com/storage/)
- `filesystem`: [Filesystem Storage](http://hadoop.apache.org/)

> Driver names are case-insensitive. There is a third driver type, `s3`, which is identical to `aws-s3`,
> except that it requires you to specify an endpoint URL using `s3ManagedLedgerOffloadServiceEndpoint`. This is useful when
> using an S3-compatible data store other than AWS.

```conf

managedLedgerOffloadDriver=aws-s3

```

### "aws-s3" Driver configuration

#### Bucket and Region

Buckets are the basic containers that hold your data.
Everything that you store in Cloud Storage must be contained in a bucket.
You can use buckets to organize your data and control access to your data,
but unlike directories and folders, you cannot nest buckets.

```conf

s3ManagedLedgerOffloadBucket=pulsar-topic-offload

```

Bucket Region is the region where the bucket is located. It is not required,
but it is recommended.
If it is not configured, the default region is used.

With AWS S3, the default region is `US East (N. Virginia)`. The page [AWS Regions and Endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html) contains more information.

```conf

s3ManagedLedgerOffloadRegion=eu-west-3

```

#### Authentication with AWS

To be able to access AWS S3, you need to authenticate with AWS S3.
Pulsar does not provide any direct means of configuring authentication for AWS S3,
but relies on the mechanisms supported by the [DefaultAWSCredentialsProviderChain](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html).

Once you have created a set of credentials in the AWS IAM console, they can be configured in a number of ways.

1. Using EC2 instance metadata credentials

If you are on an AWS instance with an instance profile that provides credentials, Pulsar uses these credentials
if no other mechanism is provided.

2. Set the environment variables **AWS_ACCESS_KEY_ID** and **AWS_SECRET_ACCESS_KEY** in ```conf/pulsar_env.sh```.

```bash

export AWS_ACCESS_KEY_ID=ABC123456789
export AWS_SECRET_ACCESS_KEY=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c

```

> The `export` keyword is important so that the variables are made available in the environment of spawned processes.


3. Add the Java system properties *aws.accessKeyId* and *aws.secretKey* to **PULSAR_EXTRA_OPTS** in `conf/pulsar_env.sh`.

```bash

PULSAR_EXTRA_OPTS="${PULSAR_EXTRA_OPTS} ${PULSAR_MEM} ${PULSAR_GC} -Daws.accessKeyId=ABC123456789 -Daws.secretKey=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c -Dio.netty.leakDetectionLevel=disabled -Dio.netty.recycler.maxCapacityPerThread=4096"

```

4. Set the access credentials in ```~/.aws/credentials```.

```conf

[default]
aws_access_key_id=ABC123456789
aws_secret_access_key=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c

```

5. Assuming an IAM role

If you want to assume an IAM role, you can do so by specifying the following:

```conf

s3ManagedLedgerOffloadRole=<aws role arn>
s3ManagedLedgerOffloadRoleSessionName=pulsar-s3-offload

```

This will use the `DefaultAWSCredentialsProviderChain` for assuming this role.

> The broker must be restarted for credentials specified in `pulsar_env.sh` to take effect.

#### Configuring the size of block read/write

Pulsar also provides some knobs to configure the size of requests sent to AWS S3.

- ```s3ManagedLedgerOffloadMaxBlockSizeInBytes``` configures the maximum size of
  a "part" sent during a multipart upload. This cannot be smaller than 5MB. Default is 64MB.
- ```s3ManagedLedgerOffloadReadBufferSizeInBytes``` configures the block size for
  each individual read when reading back data from AWS S3. Default is 1MB.

In both cases, these should not be touched unless you know what you are doing.
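
Putting the settings above together, a minimal `broker.conf` fragment for the `aws-s3` driver might look like the following. The bucket name and region reuse the examples from this section, and the two size knobs are shown at their documented defaults:

```conf

managedLedgerOffloadDriver=aws-s3
s3ManagedLedgerOffloadBucket=pulsar-topic-offload
s3ManagedLedgerOffloadRegion=eu-west-3
# Optional tuning; the documented defaults (64MB / 1MB) are shown here
s3ManagedLedgerOffloadMaxBlockSizeInBytes=67108864
s3ManagedLedgerOffloadReadBufferSizeInBytes=1048576

```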

### "google-cloud-storage" Driver configuration

Buckets are the basic containers that hold your data. Everything that you store in
Cloud Storage must be contained in a bucket. You can use buckets to organize your data and
control access to your data, but unlike directories and folders, you cannot nest buckets.

```conf

gcsManagedLedgerOffloadBucket=pulsar-topic-offload

```

Bucket Region is the region where the bucket is located. It is not required, but it is
recommended. If it is not configured, the default region is used.

For GCS, buckets are created in the `us` multi-regional location by default;
the [Bucket Locations](https://cloud.google.com/storage/docs/bucket-locations) page contains more information.

```conf

gcsManagedLedgerOffloadRegion=europe-west3

```

#### Authentication with GCS

The administrator needs to configure `gcsManagedLedgerOffloadServiceAccountKeyFile` in `broker.conf`
for the broker to be able to access the GCS service. `gcsManagedLedgerOffloadServiceAccountKeyFile` is
a JSON file containing the GCS credentials of a service account.
The [Service Accounts section of this page](https://support.google.com/googleapi/answer/6158849) contains
more information on how to create this key file for authentication. More information about Google Cloud IAM
is available [here](https://cloud.google.com/storage/docs/access-control/iam).

To generate service account credentials or view the public credentials that you've already generated, follow these steps:

1. Open the [Service accounts page](https://console.developers.google.com/iam-admin/serviceaccounts).
2. Select a project or create a new one.
3. Click **Create service account**.
4. In the **Create service account** window, type a name for the service account, and select **Furnish a new private key**. If you want to [grant G Suite domain-wide authority](https://developers.google.com/identity/protocols/OAuth2ServiceAccount#delegatingauthority) to the service account, also select **Enable G Suite Domain-wide Delegation**.
5. Click **Create**.

> Note: Make sure that the service account you create has permission to operate on GCS; you need to assign the **Storage Admin** permission to your service account, as described [here](https://cloud.google.com/storage/docs/access-control/iam).

```conf

gcsManagedLedgerOffloadServiceAccountKeyFile="/Users/hello/Downloads/project-804d5e6a6f33.json"

```

#### Configuring the size of block read/write

Pulsar also provides some knobs to configure the size of requests sent to GCS.

- ```gcsManagedLedgerOffloadMaxBlockSizeInBytes``` configures the maximum size of a "part" sent
  during a multipart upload. This cannot be smaller than 5MB. Default is 64MB.
- ```gcsManagedLedgerOffloadReadBufferSizeInBytes``` configures the block size for each individual
  read when reading back data from GCS. Default is 1MB.

In both cases, these should not be touched unless you know what you are doing.

### "filesystem" Driver configuration


#### Configure connection address

You can configure the connection address in the `broker.conf` file.

```conf

fileSystemURI="hdfs://127.0.0.1:9000"

```

#### Configure Hadoop profile path

The configuration file is stored in the Hadoop profile path. It contains various settings, such as base path, authentication, and so on.

```conf

fileSystemProfilePath="../conf/filesystem_offload_core_site.xml"

```

Topic data is stored using `org.apache.hadoop.io.MapFile`, so all of the `org.apache.hadoop.io.MapFile` configuration options for Hadoop are available.

**Example**

```conf

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value></value>
    </property>

    <property>
        <name>hadoop.tmp.dir</name>
        <value>pulsar</value>
    </property>

    <property>
        <name>io.file.buffer.size</name>
        <value>4096</value>
    </property>

    <property>
        <name>io.seqfile.compress.blocksize</name>
        <value>1000000</value>
    </property>

    <property>
        <name>io.seqfile.compression.type</name>
        <value>BLOCK</value>
    </property>

    <property>
        <name>io.map.index.interval</name>
        <value>128</value>
    </property>
</configuration>

```

For more information about the configurations in `org.apache.hadoop.io.MapFile`, see [Filesystem Storage](http://hadoop.apache.org/).
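
For reference, a minimal `broker.conf` fragment that ties the filesystem driver together, using the example values from this section:

```conf

managedLedgerOffloadDriver=filesystem
fileSystemURI="hdfs://127.0.0.1:9000"
fileSystemProfilePath="../conf/filesystem_offload_core_site.xml"

```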

## Configuring offload to run automatically

Namespace policies can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that the topic has stored on the Pulsar cluster. Once the topic reaches the threshold, an offload operation is triggered. Setting a negative threshold value disables automatic offloading. Setting the threshold to 0 causes the broker to offload data as soon as it possibly can.

```bash

$ bin/pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace

```

> Automatic offload runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, offload will not run until the current segment is full.

## Configuring read priority for offloaded messages

By default, once messages have been offloaded to long term storage, brokers read them from long term storage, even though the messages still exist in BookKeeper for a period that depends on the administrator's configuration. For messages that exist in both BookKeeper and long term storage, you can use the following commands if you prefer that they be read from BookKeeper.

```bash

# default value for -orp is tiered-storage-first
$ bin/pulsar-admin namespaces set-offload-policies my-tenant/my-namespace -orp bookkeeper-first
$ bin/pulsar-admin topics set-offload-policies my-tenant/my-namespace/topic1 -orp bookkeeper-first

```

## Triggering offload manually

Offloading can be manually triggered through a REST endpoint on the Pulsar broker. We provide a CLI that calls this REST endpoint for you.

When triggering offload, you must specify the maximum size, in bytes, of the backlog that will be retained locally in BookKeeper. The offload mechanism will offload segments from the start of the topic backlog until this condition is met.

```bash

$ bin/pulsar-admin topics offload --size-threshold 10M my-tenant/my-namespace/topic1
Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1

```

The command that triggers an offload does not wait until the offload operation has completed. To check the status of the offload, use offload-status.

```bash

$ bin/pulsar-admin topics offload-status my-tenant/my-namespace/topic1
Offload is currently running

```

To wait for offload to complete, add the -w flag.

```bash

$ bin/pulsar-admin topics offload-status -w my-tenant/my-namespace/topic1
Offload was a success

```

If there is an error offloading, the error will be propagated to the offload-status command.

```bash

$ bin/pulsar-admin topics offload-status persistent://public/default/topic1
Error in offload
null

Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads.  Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=

```
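
The same operations are also available programmatically. Here is a minimal Java sketch using the admin client; the service URL is a placeholder, and this mirrors, rather than replaces, the CLI calls above:

```java

import org.apache.pulsar.client.admin.LongRunningProcessStatus;
import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.client.api.MessageId;

public class TriggerOffload {
    public static void main(String[] args) throws Exception {
        String topic = "persistent://my-tenant/my-namespace/topic1";
        try (PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080") // placeholder URL
                .build()) {
            // Offload everything written so far; later segments stay local.
            MessageId until = admin.topics().getLastMessageId(topic);
            admin.topics().triggerOffload(topic, until);

            // The equivalent of `pulsar-admin topics offload-status` (poll as needed).
            LongRunningProcessStatus status = admin.topics().offloadStatus(topic);
            System.out.println(status.status);
        }
    }
}

```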

diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/deploy-aws.md b/site2/website/versioned_docs/version-2.8.0-deprecated/deploy-aws.md
deleted file mode 100644
index 93c389b56e2cf1..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/deploy-aws.md
+++ /dev/null
@@ -1,271 +0,0 @@
---
id: deploy-aws
title: Deploying a Pulsar cluster on AWS using Terraform and Ansible
sidebar_label: "Amazon Web Services"
original_id: deploy-aws
---

> For instructions on deploying a single Pulsar cluster manually rather than using Terraform and Ansible, see [Deploying a Pulsar cluster on bare metal](deploy-bare-metal.md). For instructions on manually deploying a multi-cluster Pulsar instance, see [Deploying a Pulsar instance on bare metal](deploy-bare-metal-multi-cluster.md).

One of the easiest ways to get a Pulsar [cluster](reference-terminology.md#cluster) running on [Amazon Web Services](https://aws.amazon.com/) (AWS) is to use the [Terraform](https://terraform.io) infrastructure provisioning tool and the [Ansible](https://www.ansible.com) server automation tool. Terraform can create the resources necessary for running the Pulsar cluster---[EC2](https://aws.amazon.com/ec2/) instances, networking and security infrastructure, etc.---while Ansible can install and run Pulsar on the provisioned resources.

## Requirements and setup

In order to install a Pulsar cluster on AWS using Terraform and Ansible, you need the following:

* An [AWS account](https://aws.amazon.com/account/) and the [`aws`](https://aws.amazon.com/cli/) command-line tool
* Python and [pip](https://pip.pypa.io/en/stable/)
* The [`terraform-inventory`](https://github.com/adammck/terraform-inventory) tool, which enables Ansible to use Terraform artifacts

You also need to make sure that you are currently logged into your AWS account via the `aws` tool:

```bash

$ aws configure

```

## Installation

You can install Ansible on Linux or macOS using pip.

```bash

$ pip install ansible

```

You can install Terraform using the instructions [here](https://learn.hashicorp.com/tutorials/terraform/install-cli).

You also need to have the Terraform and Ansible configuration for Pulsar locally on your machine. You can find them in the [GitHub repository](https://github.com/apache/pulsar) of Pulsar, which you can fetch using Git commands:

```bash

$ git clone https://github.com/apache/pulsar
$ cd pulsar/deployment/terraform-ansible/aws

```

## SSH setup

> If you already have an SSH key and want to use it, you can skip generating a new one; instead, update the `private_key_file` setting
> in the `ansible.cfg` file and the `public_key_path` setting in the `terraform.tfvars` file.
>
> For example, if you already have a private SSH key in `~/.ssh/pulsar_aws` and a public key in `~/.ssh/pulsar_aws.pub`,
> follow the steps below:
>
> 1. Update `ansible.cfg` with the following values:
>

> ```shell
>
> private_key_file=~/.ssh/pulsar_aws
>
>
> ```

>
> 2. Update `terraform.tfvars` with the following values:
>

> ```shell
>
> public_key_path=~/.ssh/pulsar_aws.pub
>
>
> ```


In order to create the necessary AWS resources using Terraform, you need to create an SSH key. Enter the following commands to create a private SSH key in `~/.ssh/id_rsa` and a public key in `~/.ssh/id_rsa.pub`:

```bash

$ ssh-keygen -t rsa

```

Do *not* enter a passphrase (press **Enter** when prompted instead). Enter the following command to verify that a key has been created:

```bash

$ ls ~/.ssh
id_rsa id_rsa.pub

```

## Create AWS resources using Terraform

To start building AWS resources with Terraform, you need to install all Terraform dependencies. Enter the following command:

```bash

$ terraform init
# This will create a .terraform folder

```

After that, you can apply the default Terraform configuration by entering this command:

```bash

$ terraform apply

```

You will then see the following prompt:

```bash

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value:

```

Type `yes` and hit **Enter**. Applying the configuration could take several minutes. When the apply finishes, you will see `Apply complete!` along with other information, including the number of resources created.

### Apply a non-default configuration

You can apply a non-default Terraform configuration by changing the values in the `terraform.tfvars` file. The following variables are available:

Variable name | Description | Default
:-------------|:------------|:-------
`public_key_path` | The path of the public key that you have generated. | `~/.ssh/id_rsa.pub`
`region` | The AWS region in which the Pulsar cluster runs | `us-west-2`
`availability_zone` | The AWS availability zone in which the Pulsar cluster runs | `us-west-2a`
`aws_ami` | The [Amazon Machine Image](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html) (AMI) that the cluster uses | `ami-9fa343e7`
`num_zookeeper_nodes` | The number of [ZooKeeper](https://zookeeper.apache.org) nodes in the ZooKeeper cluster | 3
`num_bookie_nodes` | The number of bookies that run in the cluster | 3
`num_broker_nodes` | The number of Pulsar brokers that run in the cluster | 2
`num_proxy_nodes` | The number of Pulsar proxies that run in the cluster | 1
`base_cidr_block` | The root [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) that network assets use for the cluster | `10.0.0.0/16`
`instance_types` | The EC2 instance types to be used. This variable is a map with four keys: `zookeeper` for the ZooKeeper instances, `bookie` for the BookKeeper bookies, and `broker` and `proxy` for the Pulsar brokers and proxies | `t2.small` (ZooKeeper), `i3.xlarge` (BookKeeper) and `c5.2xlarge` (Brokers/Proxies)
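
For example, a `terraform.tfvars` that overrides a few of these defaults might look like this (the values are illustrative, not recommendations):

```conf

public_key_path   = "~/.ssh/pulsar_aws.pub"
region            = "us-west-2"
availability_zone = "us-west-2a"
num_broker_nodes  = 3

```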

### What is installed

When you run the Ansible playbook, the following AWS resources are used:

* 9 total [Elastic Compute Cloud](https://aws.amazon.com/ec2) (EC2) instances running the [ami-9fa343e7](https://access.redhat.com/articles/3135091) Amazon Machine Image (AMI), which runs [Red Hat Enterprise Linux (RHEL) 7.4](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/7.4_release_notes/index). By default, that includes:
  * 3 small VMs for ZooKeeper ([t2.small](https://www.ec2instances.info/?selected=t2.small) instances)
  * 3 larger VMs for BookKeeper [bookies](reference-terminology.md#bookie) ([i3.xlarge](https://www.ec2instances.info/?selected=i3.xlarge) instances)
  * 2 larger VMs for Pulsar [brokers](reference-terminology.md#broker) ([c5.2xlarge](https://www.ec2instances.info/?selected=c5.2xlarge) instances)
  * 1 larger VM for the Pulsar [proxy](reference-terminology.md#proxy) ([c5.2xlarge](https://www.ec2instances.info/?selected=c5.2xlarge) instance)
* An EC2 [security group](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html)
* A [virtual private cloud](https://aws.amazon.com/vpc/) (VPC) for security
* An [API Gateway](https://aws.amazon.com/api-gateway/) for connections from the outside world
* A [route table](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Route_Tables.html) for the Pulsar cluster's VPC
* A [subnet](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html) for the VPC

All EC2 instances for the cluster run in the [us-west-2](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html) region.

### Fetch your Pulsar connection URL

When you apply the Terraform configuration by entering the command `terraform apply`, Terraform outputs a value for the `pulsar_service_url`. The value should look something like this:

```

pulsar://pulsar-elb-1800761694.us-west-2.elb.amazonaws.com:6650

```

You can fetch that value at any time by entering the command `terraform output pulsar_service_url` or parsing the `terraform.tfstate` file (which is JSON, even though the filename does not reflect that):

```bash

$ cat terraform.tfstate | jq .modules[0].outputs.pulsar_service_url.value

```

### Destroy your cluster

At any point, you can destroy all AWS resources associated with your cluster using Terraform's `destroy` command:

```bash

$ terraform destroy

```

## Setup Disks

Before you run the Pulsar playbook, you need to mount the disks to the correct directories on those bookie nodes. Since different types of machines have different disk layouts, you need to update the task defined in the `setup-disk.yaml` file after changing the `instance_types` in your Terraform config.

To set up disks on bookie nodes, enter this command:

```bash

$ ansible-playbook \
  --user='ec2-user' \
  --inventory=`which terraform-inventory` \
  setup-disk.yaml

```

After that, the disks are mounted under `/mnt/journal` as the journal disk and `/mnt/storage` as the ledger disk.
Remember to enter this command only once. If you run this command again after you have run the Pulsar playbook, your disks might be erased, causing the bookies to fail to start up.

## Run the Pulsar playbook

Once you have created the necessary AWS resources using Terraform, you can install and run Pulsar on the Terraform-created EC2 instances using Ansible.

(Optional) If you want to use any [built-in IO connectors](io-connectors.md), edit the `Download Pulsar IO packages` task in the `deploy-pulsar.yaml` file and uncomment the connectors you want to use.

To run the playbook, enter this command:

```bash

$ ansible-playbook \
  --user='ec2-user' \
  --inventory=`which terraform-inventory` \
  ../deploy-pulsar.yaml

```

If you have created a private SSH key at a location different from `~/.ssh/id_rsa`, you can specify the different location using the `--private-key` flag in the following command:

```bash

$ ansible-playbook \
  --user='ec2-user' \
  --inventory=`which terraform-inventory` \
  --private-key="~/.ssh/some-non-default-key" \
  ../deploy-pulsar.yaml

```

## Access the cluster

You can now access your running Pulsar using the unique Pulsar connection URL for your cluster, which you can obtain following the instructions [above](#fetch-your-pulsar-connection-url).

For a quick demonstration of accessing the cluster, we can use the Python client for Pulsar and the Python shell. First, install the Pulsar Python module using pip:

```bash

$ pip install pulsar-client

```

Now, open up the Python shell using the `python` command:

```bash

$ python

```

Once you are in the shell, enter the following command (the Python client's `send` expects bytes, so the payload is encoded):

```python

>>> import pulsar
>>> client = pulsar.Client('pulsar://pulsar-elb-1800761694.us-west-2.elb.amazonaws.com:6650')
# Make sure to use your connection URL
>>> producer = client.create_producer('persistent://public/default/test-topic')
>>> producer.send('Hello world'.encode('utf-8'))
>>> client.close()

```

If all of these commands are successful, Pulsar clients can now use your cluster!
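
As an extra check, you can read the message back with the bundled `pulsar-client` CLI. The URL below is the same example connection URL as above; start this consumer before producing so that the subscription exists when the message arrives:

```shell

$ bin/pulsar-client --url pulsar://pulsar-elb-1800761694.us-west-2.elb.amazonaws.com:6650 \
  consume -s test-subscription -n 1 persistent://public/default/test-topic

```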
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/deploy-bare-metal-multi-cluster.md b/site2/website/versioned_docs/version-2.8.0-deprecated/deploy-bare-metal-multi-cluster.md
deleted file mode 100644
index 1b23eea07a20bf..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/deploy-bare-metal-multi-cluster.md
+++ /dev/null
@@ -1,486 +0,0 @@
---
id: deploy-bare-metal-multi-cluster
title: Deploying a multi-cluster on bare metal
sidebar_label: "Bare metal multi-cluster"
original_id: deploy-bare-metal-multi-cluster
---

:::tip

1. Single-cluster Pulsar installations should be sufficient for all but the most ambitious use cases. If you are interested in experimenting with
Pulsar or using it in a startup or on a single team, it is simplest to opt for a single cluster. For instructions on deploying a single cluster,
see the guide [here](deploy-bare-metal.md).
2. If you want to use all builtin [Pulsar IO](io-overview.md) connectors in your Pulsar deployment, you need to download the `apache-pulsar-io-connectors`
package and install `apache-pulsar-io-connectors` under the `connectors` directory in the pulsar directory on every broker node, or on every function-worker node if you
run a separate cluster of function workers for [Pulsar Functions](functions-overview.md).
3. If you want to use the [Tiered Storage](concepts-tiered-storage.md) feature in your Pulsar deployment, you need to download the `apache-pulsar-offloaders`
package and install `apache-pulsar-offloaders` under the `offloaders` directory in the pulsar directory on every broker node. For more details on how to configure
this feature, you can refer to the [Tiered storage cookbook](cookbooks-tiered-storage.md).

:::

A Pulsar *instance* consists of multiple Pulsar clusters working in unison. You can distribute clusters across data centers or geographical regions and replicate the clusters amongst themselves using [geo-replication](administration-geo.md). Deploying a multi-cluster Pulsar instance involves the following basic steps:

* Deploying two separate [ZooKeeper](#deploy-zookeeper) quorums: a [local](#deploy-local-zookeeper) quorum for each cluster in the instance and a [configuration store](#configuration-store) quorum for instance-wide tasks
* Initializing [cluster metadata](#cluster-metadata-initialization) for each cluster
* Deploying a [BookKeeper cluster](#deploy-bookkeeper) of bookies in each Pulsar cluster
* Deploying [brokers](#deploy-brokers) in each Pulsar cluster

If you want to deploy a single Pulsar cluster, see [Clusters and Brokers](getting-started-standalone.md#start-the-cluster).

> #### Run Pulsar locally or on Kubernetes?
> This guide shows you how to deploy Pulsar in production in a non-Kubernetes environment. If you want to run a standalone Pulsar cluster on a single machine for development purposes, see the [Setting up a local cluster](getting-started-standalone.md) guide. If you want to run Pulsar on [Kubernetes](https://kubernetes.io), see the [Pulsar on Kubernetes](deploy-kubernetes.md) guide, which includes sections on running Pulsar on Kubernetes on [Google Kubernetes Engine](deploy-kubernetes#pulsar-on-google-kubernetes-engine) and on [Amazon Web Services](deploy-kubernetes#pulsar-on-amazon-web-services).

## System requirement

Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install 64-bit JRE/JDK 8 or later versions; JRE/JDK 11 is recommended.

:::note

Broker is only supported on 64-bit JVM.

:::

## Install Pulsar

To get started running Pulsar, download a binary tarball release in one of the following ways:

* by clicking the link below and downloading the release from an Apache mirror:

  * Pulsar @pulsar:version@ binary release

* from the Pulsar [downloads page](pulsar:download_page_url)
* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
* using [wget](https://www.gnu.org/software/wget):

  ```shell

  $ wget 'https://www.apache.org/dyn/mirrors/mirrors.cgi?action=download&filename=pulsar/pulsar-@pulsar:version@/apache-pulsar-@pulsar:version@-bin.tar.gz' -O apache-pulsar-@pulsar:version@-bin.tar.gz

  ```

Once you download the tarball, untar it and `cd` into the resulting directory:

```bash

$ tar xvfz apache-pulsar-@pulsar:version@-bin.tar.gz
$ cd apache-pulsar-@pulsar:version@

```

## What your package contains

The Pulsar binary package initially contains the following directories:

Directory | Contains
:---------|:--------
`bin` | [Command-line tools](reference-cli-tools.md) of Pulsar, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/)
`conf` | Configuration files for Pulsar, including for [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more
`examples` | A Java JAR file containing example [Pulsar Functions](functions-overview.md)
`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files that Pulsar uses
`licenses` | License files, in `.txt` form, for various components of the Pulsar codebase

The following directories are created once you begin running Pulsar:

Directory | Contains
:---------|:--------
`data` | The data storage directory that ZooKeeper and BookKeeper use
`instances` | Artifacts created for [Pulsar Functions](functions-overview.md)
`logs` | Logs that the installation creates


## Deploy ZooKeeper

Each Pulsar instance relies on two separate ZooKeeper quorums.

* [Local ZooKeeper](#deploy-local-zookeeper) operates at the cluster level and provides cluster-specific configuration management and coordination. Each Pulsar cluster needs to have a dedicated ZooKeeper cluster.
* [Configuration Store](#deploy-the-configuration-store) operates at the instance level and provides configuration management for the entire system (and thus across clusters). The configuration store quorum can be provided by an independent cluster of machines or by the same machines used by local ZooKeeper.


### Deploy local ZooKeeper

ZooKeeper manages a variety of essential coordination-related and configuration-related tasks for Pulsar.

You need to stand up one local ZooKeeper cluster *per Pulsar cluster* for deploying a Pulsar instance.

To begin, add all ZooKeeper servers to the quorum configuration specified in the [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file. Add a `server.N` line for each node in the cluster to the configuration, where `N` is the number of the ZooKeeper node. The following is an example for a three-node cluster:

```properties

server.1=zk1.us-west.example.com:2888:3888
server.2=zk2.us-west.example.com:2888:3888
server.3=zk3.us-west.example.com:2888:3888

```

On each host, you need to specify the ID of the node in the `myid` file of each node, which is in the `data/zookeeper` folder of each server by default (you can change the file location via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter).

> See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed information on `myid` and more.

On a ZooKeeper server at `zk1.us-west.example.com`, for example, you could set the `myid` value like this:

```shell

$ mkdir -p data/zookeeper
$ echo 1 > data/zookeeper/myid

```

On `zk2.us-west.example.com` the command looks like `echo 2 > data/zookeeper/myid` and so on.

Once you add each server to the `zookeeper.conf` configuration and each server has the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:

```shell

$ bin/pulsar-daemon start zookeeper

```
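
Optionally, you can sanity-check that each server is up and serving. A minimal check, assuming the default client port 2181 and that the `ruok` four-letter command is whitelisted on your ZooKeeper version:

```shell

$ echo ruok | nc zk1.us-west.example.com 2181
imok

```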

### Deploy the configuration store

The ZooKeeper cluster that is configured and started up in the section above is a *local* ZooKeeper cluster that you can use to manage a single Pulsar cluster. In addition to a local cluster, however, a full Pulsar instance also requires a configuration store for handling some instance-level configuration and coordination tasks.

If you deploy a [single-cluster](#single-cluster-pulsar-instance) instance, you do not need a separate cluster for the configuration store. If, however, you deploy a [multi-cluster](#multi-cluster-pulsar-instance) instance, you should stand up a separate ZooKeeper cluster for configuration tasks.

#### Single-cluster Pulsar instance

If your Pulsar instance consists of just one cluster, then you can deploy a configuration store on the same machines as the local ZooKeeper quorum but run on different TCP ports.

To deploy a ZooKeeper configuration store in a single-cluster instance, add the same ZooKeeper servers that the local quorum uses to the configuration file in [`conf/global_zookeeper.conf`](reference-configuration.md#configuration-store) using the same method as for [local ZooKeeper](#local-zookeeper), but make sure to use a different port (2181 is the default for ZooKeeper). The following is an example that uses port 2184 for a three-node ZooKeeper cluster:

```properties

clientPort=2184
server.1=zk1.us-west.example.com:2185:2186
server.2=zk2.us-west.example.com:2185:2186
server.3=zk3.us-west.example.com:2185:2186

```

As before, create the `myid` files for each server in `data/global-zookeeper/myid`.

#### Multi-cluster Pulsar instance

When you deploy a global Pulsar instance, with clusters distributed across different geographical regions, the configuration store serves as a highly available and strongly consistent metadata store that can tolerate failures and partitions spanning whole regions.

The key here is to make sure the ZK quorum members are spread across at least 3 regions and that other regions run as observers.

Again, given the very low expected load on the configuration store servers, you can
share the same hosts used for the local ZooKeeper quorum.

For example, assume a Pulsar instance with the following clusters: `us-west`,
`us-east`, `us-central`, `eu-central`, `ap-south`. Also assume that each cluster has its own local ZK servers named as follows:

```

zk[1-3].${CLUSTER}.example.com

```

In this scenario, you want to pick the quorum participants from a few clusters and
let all the others be ZK observers. For example, to form a 7-server quorum, you can pick 3 servers from `us-west`, 2 from `us-central`, and 2 from `us-east`.

This method guarantees that writes to the configuration store are possible even if one of these regions is unreachable.

The ZK configuration in all the servers looks like:

```properties

clientPort=2184
server.1=zk1.us-west.example.com:2185:2186
server.2=zk2.us-west.example.com:2185:2186
server.3=zk3.us-west.example.com:2185:2186
server.4=zk1.us-central.example.com:2185:2186
server.5=zk2.us-central.example.com:2185:2186
server.6=zk3.us-central.example.com:2185:2186:observer
server.7=zk1.us-east.example.com:2185:2186
server.8=zk2.us-east.example.com:2185:2186
server.9=zk3.us-east.example.com:2185:2186:observer
server.10=zk1.eu-central.example.com:2185:2186:observer
server.11=zk2.eu-central.example.com:2185:2186:observer
server.12=zk3.eu-central.example.com:2185:2186:observer
server.13=zk1.ap-south.example.com:2185:2186:observer
server.14=zk2.ap-south.example.com:2185:2186:observer
server.15=zk3.ap-south.example.com:2185:2186:observer

```

Additionally, ZK observers need to have the following parameter:

```properties

peerType=observer

```

##### Start the service

Once your configuration store configuration is in place, you can start up the service using [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon):

```shell

$ bin/pulsar-daemon start configuration-store

```

## Cluster metadata initialization

Once you set up the cluster-specific ZooKeeper and configuration store quorums for your instance, you need to write some metadata to ZooKeeper for each cluster in your instance. **You only need to write this metadata once.**

You can initialize this metadata using the [`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata) command of the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool. The following is an example:

```shell

$ bin/pulsar initialize-cluster-metadata \
  --cluster us-west \
  --zookeeper zk1.us-west.example.com:2181 \
  --configuration-store zk1.us-west.example.com:2184 \
  --web-service-url http://pulsar.us-west.example.com:8080/ \
  --web-service-url-tls https://pulsar.us-west.example.com:8443/ \
  --broker-service-url pulsar://pulsar.us-west.example.com:6650/ \
  --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651/

```

As you can see from the example above, you need to specify the following:

* The name of the cluster
* The local ZooKeeper connection string for the cluster
* The configuration store connection string for the entire instance
* The web service URL for the cluster
* A broker service URL enabling interaction with the [brokers](reference-terminology.md#broker) in the cluster

If you use [TLS](security-tls-transport.md), you also need to specify a TLS web service URL for the cluster as well as a TLS broker service URL for the brokers in the cluster.

Make sure to run `initialize-cluster-metadata` for each cluster in your instance.

## Deploy BookKeeper

BookKeeper provides [persistent message storage](concepts-architecture-overview.md#persistent-storage) for Pulsar.

Each Pulsar broker needs to have its own cluster of bookies. The BookKeeper cluster shares a local ZooKeeper quorum with the Pulsar cluster.

### Configure bookies

You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. The most important aspect of configuring each bookie is ensuring that the [`zkServers`](reference-configuration.md#bookkeeper-zkServers) parameter is set to the connection string for the Pulsar cluster's local ZooKeeper.
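
A minimal sketch of the relevant `conf/bookkeeper.conf` lines, reusing the example hostnames above; the directory paths are illustrative, and the journal/ledger directory settings are standard BookKeeper options rather than something this guide prescribes:

```properties

# Local ZooKeeper quorum of this Pulsar cluster
zkServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181

# Put the journal and ledger storage on separate devices where possible
# (see the hardware notes later in this section)
journalDirectory=data/bookkeeper/journal
ledgerDirectories=data/bookkeeper/ledgers

```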

### Start bookies

You can start a bookie in two ways: in the foreground or as a background daemon.

To start a bookie in the background, use the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:

```bash

$ bin/pulsar-daemon start bookie

```

You can verify that the bookie works properly using the `bookiesanity` command for the [BookKeeper shell](reference-cli-tools.md#bookkeeper-shell):

```shell

$ bin/bookkeeper shell bookiesanity

```

This command creates a new ledger on the local bookie, writes a few entries, reads them back, and finally deletes the ledger.

After you have started all bookies, you can use the `simpletest` command for the [BookKeeper shell](reference-cli-tools.md#shell) on any bookie node to verify that all bookies in the cluster are running:

```bash

$ bin/bookkeeper shell simpletest --ensemble <num-bookies> --writeQuorum <num-bookies> --ackQuorum <num-bookies> --numEntries <num-entries>

```

Bookie hosts are responsible for storing message data on disk. For bookies to provide optimal performance, a suitable hardware configuration is essential. The following are key dimensions for bookie hardware capacity.

* Disk I/O capacity read/write
* Storage capacity

Message entries written to bookies are always synced to disk before returning an acknowledgement to the Pulsar broker. To ensure low write latency, BookKeeper is
designed to use multiple devices:

* A **journal** to ensure durability. For sequential writes, having fast [fsync](https://linux.die.net/man/2/fsync) operations on bookie hosts is critical. Typically, small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) should suffice, or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache. Both solutions can reach fsync latency of ~0.4 ms.
* A **ledger storage device** is where data is stored until all consumers acknowledge the message. Writes happen in the background, so write I/O is not a big concern. Reads happen sequentially most of the time, and the backlog is drained only when consumers catch up. To store large amounts of data, a typical configuration involves multiple HDDs with a RAID controller.



## Deploy brokers

Once you set up ZooKeeper, initialize cluster metadata, and spin up BookKeeper bookies, you can deploy brokers.

### Broker configuration

You can configure brokers using the [`conf/broker.conf`](reference-configuration.md#broker) configuration file.

The most important element of broker configuration is ensuring that each broker is aware of its local ZooKeeper quorum as well as the configuration store quorum. Make sure that you set the [`zookeeperServers`](reference-configuration.md#broker-zookeeperServers) parameter to reflect the local quorum and the [`configurationStoreServers`](reference-configuration.md#broker-configurationStoreServers) parameter to reflect the configuration store quorum (although you need to specify only those ZooKeeper servers located in the same cluster).

You also need to specify the name of the [cluster](reference-terminology.md#cluster) to which the broker belongs using the [`clusterName`](reference-configuration.md#broker-clusterName) parameter. In addition, you need to match the broker and web service ports to those provided when you initialized the cluster's metadata (especially if they differ from the defaults).

The following is an example configuration:

```properties

# Local ZooKeeper servers
zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181

# Configuration store quorum connection string.
configurationStoreServers=zk1.us-west.example.com:2184,zk2.us-west.example.com:2184,zk3.us-west.example.com:2184

clusterName=us-west

# Broker data port
brokerServicePort=6650

# Broker data port for TLS
brokerServicePortTls=6651

# Port to use to serve HTTP requests
webServicePort=8080

# Port to use to serve HTTPS requests
webServicePortTls=8443

```

### Broker hardware

Pulsar brokers do not require any special hardware since they do not use the local disk. Fast CPUs and a 10Gbps [NIC](https://en.wikipedia.org/wiki/Network_interface_controller) are recommended so that the software can take full advantage of them.

### Start the broker service

You can start a broker in the background by using [nohup](https://en.wikipedia.org/wiki/Nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:

```shell

$ bin/pulsar-daemon start broker

```

You can also start brokers in the foreground by using [`pulsar broker`](reference-cli-tools.md#broker):

```shell

$ bin/pulsar broker

```
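
Once a broker is up, a quick way to confirm that it is serving requests is to hit its health-check endpoint on the web service port configured above; the hostname is the example used throughout this guide:

```shell

$ curl http://pulsar.us-west.example.com:8080/admin/v2/brokers/health
ok

```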

## Service discovery

[Clients](getting-started-clients.md) connecting to Pulsar brokers need to be able to communicate with an entire Pulsar instance using a single URL. Pulsar provides a built-in service discovery mechanism that you can set up using the instructions [immediately below](#service-discovery-setup).

You can also use your own service discovery system if you want. If you use your own system, you only need to satisfy one requirement: when a client performs an HTTP request to an [endpoint](reference-configuration.md) for a Pulsar cluster, such as `http://pulsar.us-west.example.com:8080`, the client needs to be redirected to *some* active broker in the desired cluster, whether via DNS, an HTTP or IP redirect, or some other means.

> #### Service discovery already provided by many scheduling systems
> Many large-scale deployment systems, such as [Kubernetes](deploy-kubernetes.md), have service discovery systems built in. If you run Pulsar on such a system, you may not need to provide your own service discovery mechanism.


### Service discovery setup

The service discovery mechanism included with Pulsar maintains a list of active brokers, stored in ZooKeeper, and supports lookup over HTTP as well as Pulsar's [binary protocol](developing-binary-protocol.md).

To set up Pulsar's built-in service discovery, you need to change a few parameters in the [`conf/discovery.conf`](reference-configuration.md#service-discovery) configuration file. Set the [`zookeeperServers`](reference-configuration.md#service-discovery-zookeeperServers) parameter to the ZooKeeper quorum connection string of the cluster and the [`configurationStoreServers`](reference-configuration.md#service-discovery-configurationStoreServers) setting to the [configuration
store](reference-terminology.md#configuration-store) quorum connection string.

```properties

# Zookeeper quorum connection string
zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181

# Global configuration store connection string
configurationStoreServers=zk1.us-west.example.com:2184,zk2.us-west.example.com:2184,zk3.us-west.example.com:2184

```

To start the discovery service:

```shell

$ bin/pulsar-daemon start discovery

```

## Admin client and verification

At this point your Pulsar instance should be ready to use. You can now configure client machines that can serve as [administrative clients](admin-api-overview.md) for each cluster. You can use the [`conf/client.conf`](reference-configuration.md#client) configuration file to configure admin clients.

The most important thing is that you point the [`serviceUrl`](reference-configuration.md#client-serviceUrl) parameter to the correct service URL for the cluster:

```properties

serviceUrl=http://pulsar.us-west.example.com:8080/

```

## Provision new tenants

Pulsar is built as a fundamentally multi-tenant system.


If a new tenant wants to use the system, you need to create a new one. You can create a new tenant by using the [`pulsar-admin`](reference-pulsar-admin.md#tenants) CLI tool:

```shell

$ bin/pulsar-admin tenants create test-tenant \
  --allowed-clusters us-west \
  --admin-roles test-admin-role

```

In this command, users who identify with the `test-admin-role` role can administer the configuration for the `test-tenant` tenant. The `test-tenant` tenant can only use the `us-west` cluster. From now on, this tenant can manage its resources.

Once you create a tenant, you need to create [namespaces](reference-terminology.md#namespace) for topics within that tenant.
- - -The first step is to create a namespace. A namespace is an administrative unit that can contain many topics. A common practice is to create a namespace for each different use case from a single tenant. - -```shell - -$ bin/pulsar-admin namespaces create test-tenant/ns1 - -``` - -##### Test producer and consumer - - -Everything is now ready to send and receive messages. The quickest way to test the system is through the [`pulsar-perf`](reference-cli-tools.md#pulsar-perf) client tool. - - -You can use a topic in the namespace that you have just created. Topics are automatically created the first time when a producer or a consumer tries to use them. - -The topic name in this case could be: - -```http - -persistent://test-tenant/ns1/my-topic - -``` - -Start a consumer that creates a subscription on the topic and waits for messages: - -```shell - -$ bin/pulsar-perf consume persistent://test-tenant/ns1/my-topic - -``` - -Start a producer that publishes messages at a fixed rate and reports stats every 10 seconds: - -```shell - -$ bin/pulsar-perf produce persistent://test-tenant/ns1/my-topic - -``` - -To report the topic stats: - -```shell - -$ bin/pulsar-admin topics stats persistent://test-tenant/ns1/my-topic - -``` - diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/deploy-bare-metal.md b/site2/website/versioned_docs/version-2.8.0-deprecated/deploy-bare-metal.md deleted file mode 100644 index bdd05f24f2566d..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/deploy-bare-metal.md +++ /dev/null @@ -1,541 +0,0 @@ ---- -id: deploy-bare-metal -title: Deploy a cluster on bare metal -sidebar_label: "Bare metal" -original_id: deploy-bare-metal ---- - -:::tip - -1. Single-cluster Pulsar installations should be sufficient for all but the most ambitious use cases. If you are interested in experimenting with -Pulsar or using Pulsar in a startup or on a single team, it is simplest to opt for a single cluster. If you do need to run a multi-cluster Pulsar instance, -see the guide [here](deploy-bare-metal-multi-cluster.md). -2. If you want to use all builtin [Pulsar IO](io-overview.md) connectors in your Pulsar deployment, you need to download `apache-pulsar-io-connectors` -package and install `apache-pulsar-io-connectors` under `connectors` directory in the pulsar directory on every broker node or on every function-worker node if you -have run a separate cluster of function workers for [Pulsar Functions](functions-overview.md). -3. If you want to use [Tiered Storage](concepts-tiered-storage.md) feature in your Pulsar deployment, you need to download `apache-pulsar-offloaders` -package and install `apache-pulsar-offloaders` under `offloaders` directory in the pulsar directory on every broker node. For more details of how to configure -this feature, you can refer to the [Tiered storage cookbook](cookbooks-tiered-storage.md). - -::: - -Deploying a Pulsar cluster involves doing the following (in order): - -* Deploy a [ZooKeeper](#deploy-a-zookeeper-cluster) cluster (optional) -* Initialize [cluster metadata](#initialize-cluster-metadata) -* Deploy a [BookKeeper](#deploy-a-bookkeeper-cluster) cluster -* Deploy one or more Pulsar [brokers](#deploy-pulsar-brokers) - -## Preparation - -### Requirements - -Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install 64-bit JRE/JDK 8 or later versions, JRE/JDK 11 is recommended. 

> If you already have an existing ZooKeeper cluster and want to reuse it, you do not need to prepare the machines
> for running ZooKeeper.

To run Pulsar on bare metal, the following configuration is recommended:

* At least 6 Linux machines or VMs
  * 3 for running [ZooKeeper](https://zookeeper.apache.org)
  * 3 for running a Pulsar broker, and a [BookKeeper](https://bookkeeper.apache.org) bookie
* A single [DNS](https://en.wikipedia.org/wiki/Domain_Name_System) name covering all of the Pulsar broker hosts

:::note

Broker is only supported on 64-bit JVM.

:::

> If you do not have enough machines, or want to try out Pulsar in cluster mode (and expand the cluster later),
> you can deploy a full Pulsar configuration on one node, with ZooKeeper, a bookie, and a broker all running on the same machine.

> If you do not have a DNS server, you can use the multi-host format in the service URL instead.

Each machine in your cluster needs to have [Java 8](https://adoptium.net/?variant=openjdk8) or [Java 11](https://adoptium.net/?variant=openjdk11) installed.

The following is a diagram showing the basic setup:

![alt-text](/assets/pulsar-basic-setup.png)

In this diagram, connecting clients need to be able to communicate with the Pulsar cluster using a single URL. In this case, `pulsar-cluster.acme.com` abstracts over all of the message-handling brokers. Pulsar message brokers run on machines alongside BookKeeper bookies; brokers and bookies, in turn, rely on ZooKeeper.

### Hardware considerations

When you deploy a Pulsar cluster, keep the following basic capacity-planning considerations in mind.

#### ZooKeeper

For machines running ZooKeeper, it is recommended to use less powerful machines or VMs. Pulsar uses ZooKeeper only for periodic coordination-related and configuration-related tasks, *not* for basic operations. If you run Pulsar on [Amazon Web Services](https://aws.amazon.com/) (AWS), for example, a [t2.small](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html) instance would likely suffice.

#### Bookies and Brokers

For machines running a bookie and a Pulsar broker, more powerful machines are required. For an AWS deployment, for example, [i3.4xlarge](https://aws.amazon.com/blogs/aws/now-available-i3-instances-for-demanding-io-intensive-applications/) instances may be appropriate. On those machines you can use the following:

* Fast CPUs and 10Gbps [NIC](https://en.wikipedia.org/wiki/Network_interface_controller) (for Pulsar brokers)
* Small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache (for BookKeeper bookies)

## Install the Pulsar binary package

> You need to install the Pulsar binary package on *each machine in the cluster*, including machines running [ZooKeeper](#deploy-a-zookeeper-cluster) and [BookKeeper](#deploy-a-bookkeeper-cluster).
-
-To get started deploying a Pulsar cluster on bare metal, you need to download a binary tarball release in one of the following ways:
-
-* By clicking on the link below directly, which automatically triggers a download:
-  * Pulsar @pulsar:version@ binary release
-* From the Pulsar [downloads page](pulsar:download_page_url)
-* From the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) on [GitHub](https://github.com)
-* Using [wget](https://www.gnu.org/software/wget):
-
-```bash
-
-$ wget pulsar:binary_release_url
-
-```
-
-Once you download the tarball, untar it and `cd` into the resulting directory:
-
-```bash
-
-$ tar xvzf apache-pulsar-@pulsar:version@-bin.tar.gz
-$ cd apache-pulsar-@pulsar:version@
-
-```
-
-The extracted directory contains the following subdirectories:
-
-Directory | Contains
-:---------|:--------
-`bin` | [command-line tools](reference-cli-tools.md) of Pulsar, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/)
-`conf` | Configuration files for Pulsar, including for [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more
-`data` | The data storage directory that ZooKeeper and BookKeeper use
-`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files that Pulsar uses
-`logs` | Logs that the installation creates
-
-## [Install Builtin Connectors (optional)](https://pulsar.apache.org/docs/en/next/standalone/#install-builtin-connectors-optional)
-
-> Since Pulsar release `2.1.0-incubating`, Pulsar provides a separate binary distribution containing all the `builtin` connectors.
-> If you want to enable those `builtin` connectors, you can follow the instructions below; otherwise you can
-> skip this section for now.
-
-To get started using builtin connectors, you need to download the connectors tarball release on every broker node in one of the following ways:
-
-* by clicking the link below and downloading the release from an Apache mirror:
-
-  * Pulsar IO Connectors @pulsar:version@ release
-
-* from the Pulsar [downloads page](pulsar:download_page_url)
-* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
-* using [wget](https://www.gnu.org/software/wget):
-
-  ```shell
-
-  $ wget pulsar:connector_release_url/{connector}-@pulsar:version@.nar
-
-  ```
-
-Once you download the .nar file, copy the file to the `connectors` directory in the pulsar directory.
-For example, if you download the connector file `pulsar-io-aerospike-@pulsar:version@.nar`:
-
-```bash
-
-$ mkdir connectors
-$ mv pulsar-io-aerospike-@pulsar:version@.nar connectors
-
-$ ls connectors
-pulsar-io-aerospike-@pulsar:version@.nar
-...
-
-```
-
-## [Install Tiered Storage Offloaders (optional)](https://pulsar.apache.org/docs/en/next/standalone/#install-tiered-storage-offloaders-optional)
-
-> Since release `2.2.0`, Pulsar provides a separate binary distribution containing the tiered storage offloaders.
-> If you want to enable the tiered storage feature, you can follow the instructions below; otherwise you can
-> skip this section for now.
-
-To get started using tiered storage offloaders, you need to download the offloaders tarball release on every broker node in one of the following ways:
-
-* by clicking the link below and downloading the release from an Apache mirror:
-
-  * Pulsar Tiered Storage Offloaders @pulsar:version@ release
-
-* from the Pulsar [downloads page](pulsar:download_page_url)
-* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
-* using [wget](https://www.gnu.org/software/wget):
-
-  ```shell
-
-  $ wget pulsar:offloader_release_url
-
-  ```
-
-Once you download the tarball, untar the offloaders package and move the `offloaders` directory into the pulsar directory:
-
-```bash
-
-$ tar xvfz apache-pulsar-offloaders-@pulsar:version@-bin.tar.gz
-
-// you can find a directory named `apache-pulsar-offloaders-@pulsar:version@` in the pulsar directory
-// then copy the offloaders
-
-$ mv apache-pulsar-offloaders-@pulsar:version@/offloaders offloaders
-
-$ ls offloaders
-tiered-storage-jcloud-@pulsar:version@.nar
-
-```
-
-For more details on how to configure the tiered storage feature, refer to the [Tiered storage cookbook](cookbooks-tiered-storage.md).
-
-
-## Deploy a ZooKeeper cluster
-
-> If you already have an existing ZooKeeper cluster and want to reuse it, you can skip this section.
-
-[ZooKeeper](https://zookeeper.apache.org) manages a variety of essential coordination- and configuration-related tasks for Pulsar. To deploy a Pulsar cluster, you need to deploy ZooKeeper first (before all other components). A 3-node ZooKeeper cluster is the recommended configuration. Pulsar does not make heavy use of ZooKeeper, so more lightweight machines or VMs should suffice for running ZooKeeper.
-
-To begin, add all ZooKeeper servers to the configuration specified in [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) (in the Pulsar directory that you create [above](#install-the-pulsar-binary-package)). The following is an example:
-
-```properties
-
-server.1=zk1.us-west.example.com:2888:3888
-server.2=zk2.us-west.example.com:2888:3888
-server.3=zk3.us-west.example.com:2888:3888
-
-```
-
-> If you only have one machine on which to deploy Pulsar, you only need to add one server entry in the configuration file.
-
-On each host, you need to specify the ID of the node in the `myid` file, which is in the `data/zookeeper` folder of each server by default (you can change the file location via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter).
-
-> See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed information on `myid` and more.
-
-For example, on a ZooKeeper server like `zk1.us-west.example.com`, you can set the `myid` value as follows:
-
-```bash
-
-$ mkdir -p data/zookeeper
-$ echo 1 > data/zookeeper/myid
-
-```
-
-On `zk2.us-west.example.com`, the command is `echo 2 > data/zookeeper/myid` and so on.
-
-Once you add each server to the `zookeeper.conf` configuration and have the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
-
-```bash
-
-$ bin/pulsar-daemon start zookeeper
-
-```
-
-> If you plan to deploy ZooKeeper and a bookie on the same node, you need to start ZooKeeper with a different stats
-> port by configuring `metricsProvider.httpPort` in `zookeeper.conf`.
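-
-For example, on a colocated ZooKeeper-plus-bookie node you might pin the ZooKeeper metrics provider to a port that does not collide with the bookie's default stats port of `8000` (the port value below is illustrative; if `metricsProvider.httpPort` is already set in the file, edit the existing entry instead of appending):
-
-```bash
-
-$ echo "metricsProvider.httpPort=7000" >> conf/zookeeper.conf
-
-```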
-
-## Initialize cluster metadata
-
-Once you deploy ZooKeeper for your cluster, you need to write some metadata to ZooKeeper. You only need to write this data **once**.
-
-You can initialize this metadata using the [`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata) command of the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool. This command can be run on any machine in your Pulsar cluster, so the metadata can be initialized from a ZooKeeper, broker, or bookie machine. The following is an example:
-
-```shell
-
-$ bin/pulsar initialize-cluster-metadata \
-  --cluster pulsar-cluster-1 \
-  --zookeeper zk1.us-west.example.com:2181 \
-  --configuration-store zk1.us-west.example.com:2181 \
-  --web-service-url http://pulsar.us-west.example.com:8080 \
-  --web-service-url-tls https://pulsar.us-west.example.com:8443 \
-  --broker-service-url pulsar://pulsar.us-west.example.com:6650 \
-  --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651
-
-```
-
-As you can see from the example above, you need to specify the following:
-
-Flag | Description
-:----|:-----------
-`--cluster` | A name for the cluster
-`--zookeeper` | A "local" ZooKeeper connection string for the cluster. This connection string only needs to include *one* machine in the ZooKeeper cluster.
-`--configuration-store` | The configuration store connection string for the entire instance. As with the `--zookeeper` flag, this connection string only needs to include *one* machine in the ZooKeeper cluster.
-`--web-service-url` | The web service URL for the cluster, plus a port. This URL should be a standard DNS name. The default port is 8080 (using a different port is not recommended).
-`--web-service-url-tls` | If you use [TLS](security-tls-transport.md), you also need to specify a TLS web service URL for the cluster. The default port is 8443 (using a different port is not recommended).
-`--broker-service-url` | A broker service URL enabling interaction with the brokers in the cluster. This URL should use the same DNS name as the web service URL, but with the `pulsar` scheme instead. The default port is 6650 (using a different port is not recommended).
-`--broker-service-url-tls` | If you use [TLS](security-tls-transport.md), you also need to specify a TLS broker service URL for the brokers in the cluster. The default port is 6651 (using a different port is not recommended).
-
-
-> If you do not have a DNS server, you can use the multi-host format in the service URL with the following settings:
->
-
-> ```properties
->
-> --web-service-url http://host1:8080,host2:8080,host3:8080 \
-> --web-service-url-tls https://host1:8443,host2:8443,host3:8443 \
-> --broker-service-url pulsar://host1:6650,host2:6650,host3:6650 \
-> --broker-service-url-tls pulsar+ssl://host1:6651,host2:6651,host3:6651
->
->
-> ```
-
->
-> If you want to use an existing BookKeeper cluster, you can add the `--existing-bk-metadata-service-uri` flag as follows:
->
-
-> ```properties
->
-> --existing-bk-metadata-service-uri "zk+null://zk1:2181;zk2:2181/ledgers" \
-> --web-service-url http://host1:8080,host2:8080,host3:8080 \
-> --web-service-url-tls https://host1:8443,host2:8443,host3:8443 \
-> --broker-service-url pulsar://host1:6650,host2:6650,host3:6650 \
-> --broker-service-url-tls pulsar+ssl://host1:6651,host2:6651,host3:6651
->
->
-> ```
-
-> You can obtain the metadata service URI of the existing BookKeeper cluster by using the `bin/bookkeeper shell whatisinstanceid` command. You must enclose the value in double quotes since the multiple metadata service URIs are separated with semicolons.
-
-## Deploy a BookKeeper cluster
-
-[BookKeeper](https://bookkeeper.apache.org) handles all persistent data storage in Pulsar. You need to deploy a cluster of BookKeeper bookies to use Pulsar. A good choice is to run a **3-bookie BookKeeper cluster**.
-
-You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. The most important step in configuring bookies for our purposes here is ensuring that [`zkServers`](reference-configuration.md#bookkeeper-zkServers) is set to the connection string for the ZooKeeper cluster. The following is an example:
-
-```properties
-
-zkServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
-
-```
-
-Once you appropriately modify the `zkServers` parameter, you can make any other configuration changes that you require. You can find a full listing of the available BookKeeper configuration parameters [here](reference-configuration.md#bookkeeper). However, consulting the [BookKeeper documentation](http://bookkeeper.apache.org/docs/latest/reference/config/) for a more in-depth guide might be a better choice.
-
-Once you apply the desired configuration in `conf/bookkeeper.conf`, you can start up a bookie on each of your BookKeeper hosts. You can start up each bookie either in the background, using [nohup](https://en.wikipedia.org/wiki/Nohup), or in the foreground.
-
-To start the bookie in the background, use the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
-
-```bash
-
-$ bin/pulsar-daemon start bookie
-
-```
-
-To start the bookie in the foreground:
-
-```bash
-
-$ bin/pulsar bookie
-
-```
-
-You can verify that a bookie works properly by running the `bookiesanity` command on the [BookKeeper shell](reference-cli-tools.md#shell):
-
-```bash
-
-$ bin/bookkeeper shell bookiesanity
-
-```
-
-This command creates an ephemeral BookKeeper ledger on the local bookie, writes a few entries, reads them back, and finally deletes the ledger.
-
-After you start all the bookies, you can use the `simpletest` command of the [BookKeeper shell](reference-cli-tools.md#shell) on any bookie node to verify that all the bookies in the cluster are up and running.
-
-```bash
-
-$ bin/bookkeeper shell simpletest --ensemble <num-bookies> --writeQuorum <num-bookies> --ackQuorum <num-bookies> --numEntries <num-entries>
-
-```
-
-This command creates a `num-bookies` sized ledger on the cluster, writes a few entries, and finally deletes the ledger.
-
-
-## Deploy Pulsar brokers
-
-Pulsar brokers are the last thing you need to deploy in your Pulsar cluster. Brokers handle Pulsar messages and provide the administrative interface of Pulsar. A good choice is to run **3 brokers**, one for each machine that already runs a BookKeeper bookie.
-
-### Configure Brokers
-
-The most important element of broker configuration is ensuring that each broker is aware of the ZooKeeper cluster that you have deployed. Ensure that the [`zookeeperServers`](reference-configuration.md#broker-zookeeperServers) and [`configurationStoreServers`](reference-configuration.md#broker-configurationStoreServers) parameters are correct. In this case, since you only have 1 cluster and no configuration store setup, the `configurationStoreServers` parameter points to the same `zookeeperServers`.
-
-```properties
-
-zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
-configurationStoreServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
-
-```
-
-You also need to specify the cluster name (matching the name that you provided when you [initialize the metadata of the cluster](#initialize-cluster-metadata)):
-
-```properties
-
-clusterName=pulsar-cluster-1
-
-```
-
-In addition, you need to match the broker and web service ports provided when you initialize the metadata of the cluster (especially when you use a different port than the default):
-
-```properties
-
-brokerServicePort=6650
-brokerServicePortTls=6651
-webServicePort=8080
-webServicePortTls=8443
-
-```
-
-> If you deploy Pulsar in a one-node cluster, you should update the replication settings in `conf/broker.conf` to `1`.
->
-
-> ```properties
->
-> # Number of bookies to use when creating a ledger
-> managedLedgerDefaultEnsembleSize=1
->
-> # Number of copies to store for each message
-> managedLedgerDefaultWriteQuorum=1
->
-> # Number of guaranteed copies (acks to wait before write is complete)
-> managedLedgerDefaultAckQuorum=1
->
->
-> ```
-
-
-### Enable Pulsar Functions (optional)
-
-If you want to enable [Pulsar Functions](functions-overview.md), you can follow the instructions below:
-
-1. Edit `conf/broker.conf` to enable the functions worker, by setting `functionsWorkerEnabled` to `true`.
-
-   ```conf
-
-   functionsWorkerEnabled=true
-
-   ```
-
-2. Edit `conf/functions_worker.yml` and set `pulsarFunctionsCluster` to the cluster name that you provide when you [initialize the metadata of the cluster](#initialize-cluster-metadata).
-
-   ```conf
-
-   pulsarFunctionsCluster: pulsar-cluster-1
-
-   ```
-
-To learn about more options for deploying the functions worker, check out [Deploy and manage functions worker](functions-worker.md).
-
-### Start Brokers
-
-You can then provide any other configuration changes that you want in the [`conf/broker.conf`](reference-configuration.md#broker) file. Once you decide on a configuration, you can start up the brokers for your Pulsar cluster. Like ZooKeeper and BookKeeper, you can start brokers either in the foreground or in the background, using nohup.
-
-You can start a broker in the foreground using the [`pulsar broker`](reference-cli-tools.md#pulsar-broker) command:
-
-```bash
-
-$ bin/pulsar broker
-
-```
-
-You can start a broker in the background using the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
-
-```bash
-
-$ bin/pulsar-daemon start broker
-
-```
-
-Once you successfully start up all the brokers that you intend to use, your Pulsar cluster should be ready to go!
-
-## Connect to the running cluster
-
-Once your Pulsar cluster is up and running, you should be able to connect to it using Pulsar clients. One such client is the [`pulsar-client`](reference-cli-tools.md#pulsar-client) tool, which is included with the Pulsar binary package. The `pulsar-client` tool can publish messages to and consume messages from Pulsar topics and thus provides a simple way to make sure that your cluster runs properly.
-
-To use the `pulsar-client` tool, first modify the client configuration file in [`conf/client.conf`](reference-configuration.md#client) in your binary package. You need to change the values for `webServiceUrl` and `brokerServiceUrl`, substituting `localhost` (which is the default), with the DNS name that you assign to your broker/bookie hosts. The following is an example:
-
-```properties
-
-webServiceUrl=http://us-west.example.com:8080
-brokerServiceUrl=pulsar://us-west.example.com:6650
-
-```
-
-> If you do not have a DNS server, you can specify multi-host in the service URL as follows:
->
-
-> ```properties
->
-> webServiceUrl=http://host1:8080,host2:8080,host3:8080
-> brokerServiceUrl=pulsar://host1:6650,host2:6650,host3:6650
->
->
-> ```
-
-
-Once that is complete, you can publish a message to the Pulsar topic:
-
-```bash
-
-$ bin/pulsar-client produce \
-  persistent://public/default/test \
-  -n 1 \
-  -m "Hello Pulsar"
-
-```
-
-> You may need to use a different cluster name in the topic if you specify a cluster name other than `pulsar-cluster-1`.
-
-This command publishes a single message to the Pulsar topic. In addition, you can subscribe to the Pulsar topic in a different terminal before publishing messages, as shown below:
-
-```bash
-
-$ bin/pulsar-client consume \
-  persistent://public/default/test \
-  -n 100 \
-  -s "consumer-test" \
-  -t "Exclusive"
-
-```
-
-Once you successfully publish the above message to the topic, you should see it in the standard output:
-
-```bash
-
------ got message -----
-Hello Pulsar
-
-```
-
-## Run Functions
-
-> If you have [enabled](#enable-pulsar-functions-optional) Pulsar Functions, you can try out the Pulsar Functions now.
-
-Create an ExclamationFunction `exclamation`.
-
-```bash
-
-bin/pulsar-admin functions create \
-  --jar examples/api-examples.jar \
-  --classname org.apache.pulsar.functions.api.examples.ExclamationFunction \
-  --inputs persistent://public/default/exclamation-input \
-  --output persistent://public/default/exclamation-output \
-  --tenant public \
-  --namespace default \
-  --name exclamation
-
-```
-
-Check whether the function runs as expected by [triggering](functions-deploying.md#triggering-pulsar-functions) the function.
-
-```bash
-
-bin/pulsar-admin functions trigger --name exclamation --trigger-value "hello world"
-
-```
-
-You should see the following output:
-
-```shell
-
-hello world!
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/deploy-dcos.md b/site2/website/versioned_docs/version-2.8.0-deprecated/deploy-dcos.md
deleted file mode 100644
index 952d5f47e30fa7..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/deploy-dcos.md
+++ /dev/null
@@ -1,200 +0,0 @@
----
-id: deploy-dcos
-title: Deploy Pulsar on DC/OS
-sidebar_label: "DC/OS"
-original_id: deploy-dcos
----
-
-:::tip
-
-If you want to enable all builtin [Pulsar IO](io-overview.md) connectors in your Pulsar deployment, you can choose to use the `apachepulsar/pulsar-all` image instead of
-the `apachepulsar/pulsar` image. The `apachepulsar/pulsar-all` image has already bundled [all builtin connectors](io-overview.md#working-with-connectors).
-
-:::
-
-[DC/OS](https://dcos.io/) (the DataCenter Operating System) is a distributed operating system used for deploying and managing applications and systems on [Apache Mesos](http://mesos.apache.org/). DC/OS is an open-source tool created and maintained by [Mesosphere](https://mesosphere.com/).
-
-Apache Pulsar is available as a [Marathon Application Group](https://mesosphere.github.io/marathon/docs/application-groups.html), which runs multiple applications as manageable sets.
-
-## Prerequisites
-
-In order to run Pulsar on DC/OS, you need the following:
-
-* DC/OS version [1.9](https://docs.mesosphere.com/1.9/) or higher
-* A [DC/OS cluster](https://docs.mesosphere.com/1.9/installing/) with at least three agent nodes
-* The [DC/OS CLI tool](https://docs.mesosphere.com/1.9/cli/install/) installed
-* The [`PulsarGroups.json`](https://github.com/apache/pulsar/blob/master/deployment/dcos/PulsarGroups.json) configuration file from the Pulsar GitHub repo.
-
-  ```bash
-
-  $ curl -O https://raw.githubusercontent.com/apache/pulsar/master/deployment/dcos/PulsarGroups.json
-
-  ```
-
-Each node in the DC/OS-managed Mesos cluster must have at least:
-
-* 4 CPUs
-* 4 GB of memory
-* 60 GB of total persistent disk
-
-Alternatively, you can change the configuration in `PulsarGroups.json` to match the resources of your DC/OS cluster.
-
-## Deploy Pulsar using the DC/OS command interface
-
-You can deploy Pulsar on DC/OS using this command:
-
-```bash
-
-$ dcos marathon group add PulsarGroups.json
-
-```
-
-This command deploys Docker container instances in three groups, which together comprise a Pulsar cluster:
-
-* 3 bookies (1 [bookie](reference-terminology.md#bookie) on each agent node and 1 [bookie recovery](http://bookkeeper.apache.org/docs/latest/admin/autorecovery/) instance)
-* 3 Pulsar [brokers](reference-terminology.md#broker) (1 broker on each node and 1 admin instance)
-* 1 [Prometheus](http://prometheus.io/) instance and 1 [Grafana](https://grafana.com/) instance
-
-
-> When you run DC/OS, a ZooKeeper cluster already runs at `master.mesos:2181`, so you do not have to install or start up ZooKeeper separately.
-
-After executing the `dcos` command above, click on the **Services** tab in the DC/OS [GUI interface](https://docs.mesosphere.com/latest/gui/), which you can access at [http://m1.dcos](http://m1.dcos) in this example. You should see several applications in the process of deploying.
-
-![DC/OS command executed](/assets/dcos_command_execute.png)
-
-![DC/OS command executed2](/assets/dcos_command_execute2.png)
-
-## The BookKeeper group
-
-To monitor the status of the BookKeeper cluster deployment, click on the **bookkeeper** group in the parent **pulsar** group.
-
-![DC/OS bookkeeper status](/assets/dcos_bookkeeper_status.png)
-
-At this point, 3 [bookies](reference-terminology.md#bookie) should be shown as green, which means that the bookies have been deployed successfully and are now running.
-
-![DC/OS bookkeeper running](/assets/dcos_bookkeeper_run.png)
-
-You can also click into each bookie instance to get more detailed information, such as the bookie running log.
-
-![DC/OS bookie log](/assets/dcos_bookie_log.png)
-
-To display information about BookKeeper in ZooKeeper, you can visit [http://m1.dcos/exhibitor](http://m1.dcos/exhibitor). In this example, 3 bookies are under the `available` directory.
-
-![DC/OS bookkeeper in zk](/assets/dcos_bookkeeper_in_zookeeper.png)
-
-## The Pulsar broker group
-
-Similar to the BookKeeper group above, click into the **brokers** group to check the status of the Pulsar brokers.
-
-![DC/OS broker status](/assets/dcos_broker_status.png)
-
-![DC/OS broker running](/assets/dcos_broker_run.png)
-
-You can also click into each broker instance to get more detailed information, such as the broker running log.
-
-![DC/OS broker log](/assets/dcos_broker_log.png)
-
-Broker cluster information in ZooKeeper is also available through the web UI. In this example, you can see that the `loadbalance` and `managed-ledgers` directories have been created.
-
-![DC/OS broker in zk](/assets/dcos_broker_in_zookeeper.png)
-
-## The monitor group
-
-The **monitor** group consists of Prometheus and Grafana.
-
-![DC/OS monitor status](/assets/dcos_monitor_status.png)
-
-### Prometheus
-
-Click into the instance of `prom` to get the endpoint of Prometheus, which is `192.168.65.121:9090` in this example.
-
-![DC/OS prom endpoint](/assets/dcos_prom_endpoint.png)
-
-If you click that endpoint, you can see the Prometheus dashboard. The [http://192.168.65.121:9090/targets](http://192.168.65.121:9090/targets) URL displays all the bookies and brokers.
-
-![DC/OS prom targets](/assets/dcos_prom_targets.png)
-
-### Grafana
-
-Click into `grafana` to get the endpoint for Grafana, which is `192.168.65.121:3000` in this example.
-
-![DC/OS grafana endpoint](/assets/dcos_grafana_endpoint.png)
-
-If you click that endpoint, you can access the Grafana dashboard.
-
-![DC/OS grafana targets](/assets/dcos_grafana_dashboard.png)
-
-## Run a simple Pulsar consumer and producer on DC/OS
-
-Now that you have a fully deployed Pulsar cluster, you can run a simple consumer and producer to show Pulsar on DC/OS in action.
-
-### Download and prepare the Pulsar Java tutorial
-
-You can clone a [Pulsar Java tutorial](https://github.com/streamlio/pulsar-java-tutorial) repo. This repo contains a simple Pulsar consumer and producer (you can find more information in the `README` file of the repo).
-
-```bash
-
-$ git clone https://github.com/streamlio/pulsar-java-tutorial
-
-```
-
-Change the `SERVICE_URL` from `pulsar://localhost:6650` to `pulsar://a1.dcos:6650` in both [`ConsumerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ConsumerTutorial.java) and [`ProducerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ProducerTutorial.java).
-The `pulsar://a1.dcos:6650` endpoint is for the broker service. You can fetch the endpoint details for each broker instance from the DC/OS GUI. `a1.dcos` is a DC/OS client agent that runs a broker. You can also use the client agent IP address instead.
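-
-For example, the edit can be scripted as follows (a sketch; the file paths follow the tutorial repo layout):
-
-```bash
-
-$ cd pulsar-java-tutorial
-$ sed -i 's|pulsar://localhost:6650|pulsar://a1.dcos:6650|g' \
-  src/main/java/tutorial/ConsumerTutorial.java \
-  src/main/java/tutorial/ProducerTutorial.java
-
-```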
-
-Now, change the message number from 10 to 10000000 in the main method of [`ProducerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ProducerTutorial.java) so that it can produce more messages.
-
-Now compile the project code using the command below:
-
-```bash
-
-$ mvn clean package
-
-```
-
-### Run the consumer and producer
-
-Execute this command to run the consumer:
-
-```bash
-
-$ mvn exec:java -Dexec.mainClass="tutorial.ConsumerTutorial"
-
-```
-
-Execute this command to run the producer:
-
-```bash
-
-$ mvn exec:java -Dexec.mainClass="tutorial.ProducerTutorial"
-
-```
-
-You can see the producer producing messages and the consumer consuming messages through the DC/OS GUI.
-
-![DC/OS pulsar producer](/assets/dcos_producer.png)
-
-![DC/OS pulsar consumer](/assets/dcos_consumer.png)
-
-### View Grafana metric output
-
-While the producer and consumer run, you can access running metrics from Grafana.
-
-![DC/OS pulsar dashboard](/assets/dcos_metrics.png)
-
-
-## Uninstall Pulsar
-
-You can shut down and uninstall the `pulsar` application from DC/OS at any time in the following two ways:
-
-1. Using the DC/OS GUI, you can choose **Delete** at the right end of the Pulsar group.
-
-   ![DC/OS pulsar uninstall](/assets/dcos_uninstall.png)
-
-2. You can use the following command:
-
-   ```bash
-
-   $ dcos marathon group remove /pulsar
-
-   ```
-
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/deploy-docker.md b/site2/website/versioned_docs/version-2.8.0-deprecated/deploy-docker.md
deleted file mode 100644
index 8348d78deb2378..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/deploy-docker.md
+++ /dev/null
@@ -1,60 +0,0 @@
----
-id: deploy-docker
-title: Deploy a cluster on Docker
-sidebar_label: "Docker"
-original_id: deploy-docker
----
-
-To deploy a Pulsar cluster on Docker, complete the following steps:
-1. Deploy a ZooKeeper cluster (optional)
-2. Initialize cluster metadata
-3. Deploy a BookKeeper cluster
-4. Deploy one or more Pulsar brokers
-
-## Prepare
-
-To run Pulsar on Docker, you need to create a container for each Pulsar component: ZooKeeper, BookKeeper, and the broker. You can pull the images of ZooKeeper and BookKeeper separately on [Docker Hub](https://hub.docker.com/), and pull a [Pulsar image](https://hub.docker.com/r/apachepulsar/pulsar-all/tags) for the broker. You can also pull only one [Pulsar image](https://hub.docker.com/r/apachepulsar/pulsar-all/tags) and create three containers with this image. This tutorial takes the second option as an example.
-
-### Pull a Pulsar image
-You can pull a Pulsar image from [Docker Hub](https://hub.docker.com/r/apachepulsar/pulsar-all/tags) with the following command.
-
-```
-
-docker pull apachepulsar/pulsar-all:latest
-
-```
-
-### Create three containers
-Create containers for ZooKeeper, BookKeeper, and the broker. In this example, they are named `zookeeper`, `bookkeeper` and `broker` respectively. You can name them as you want with the `--name` flag. By default, the container names are generated randomly.
-
-```
-
-docker run -it --name bookkeeper apachepulsar/pulsar-all:latest /bin/bash
-docker run -it --name zookeeper apachepulsar/pulsar-all:latest /bin/bash
-docker run -it --name broker apachepulsar/pulsar-all:latest /bin/bash
-
-```
-
-### Create a network
-To deploy a Pulsar cluster on Docker, you need to create a `network` and connect the containers of ZooKeeper, BookKeeper, and the broker to this network. The following command creates the network `pulsar`:
-
-```
-
-docker network create pulsar
-
-```
-
-### Connect containers to network
-Connect the containers of ZooKeeper, BookKeeper, and the broker to the `pulsar` network with the following commands.
-
-```
-
-docker network connect pulsar zookeeper
-docker network connect pulsar bookkeeper
-docker network connect pulsar broker
-
-```
-
-To check whether the containers are successfully connected to the network, enter the `docker network inspect pulsar` command.
-
-For detailed information about how to deploy a ZooKeeper cluster, a BookKeeper cluster, and brokers, see [deploy a cluster on bare metal](deploy-bare-metal.md).
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/deploy-kubernetes.md b/site2/website/versioned_docs/version-2.8.0-deprecated/deploy-kubernetes.md
deleted file mode 100644
index 1aefc6ad79f716..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/deploy-kubernetes.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-id: deploy-kubernetes
-title: Deploy Pulsar on Kubernetes
-sidebar_label: "Kubernetes"
-original_id: deploy-kubernetes
----
-
-To get up and running with these charts as fast as possible, in a **non-production** use case, we provide
-a [quick start guide](getting-started-helm.md) for Proof of Concept (PoC) deployments.
-
-To configure and install a Pulsar cluster on Kubernetes for production usage, follow the complete [Installation Guide](helm-install.md).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/deploy-monitoring.md b/site2/website/versioned_docs/version-2.8.0-deprecated/deploy-monitoring.md
deleted file mode 100644
index f9fe0e0bb97be7..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/deploy-monitoring.md
+++ /dev/null
@@ -1,148 +0,0 @@
----
-id: deploy-monitoring
-title: Monitor
-sidebar_label: "Monitor"
-original_id: deploy-monitoring
----
-
-You can monitor a Pulsar cluster in different ways, exposing both metrics related to the usage of topics and the overall health of the individual components of the cluster.
-
-## Collect metrics
-
-You can collect broker stats, ZooKeeper stats, and BookKeeper stats.
-
-### Broker stats
-
-You can collect Pulsar broker metrics from brokers and export the metrics in JSON format. The Pulsar broker metrics have two main types:
-
-* *Destination dumps*, which contain stats for each individual topic. You can fetch the destination dumps using the command below:
-
-  ```shell
-
-  bin/pulsar-admin broker-stats destinations
-
-  ```
-
-* Broker metrics, which contain the broker information and topic stats aggregated at the namespace level. You can fetch the broker metrics by using the following command:
-
-  ```shell
-
-  bin/pulsar-admin broker-stats monitoring-metrics
-
-  ```
-
-All the message rates are updated every minute.
-
-The aggregated broker metrics are also exposed in the [Prometheus](https://prometheus.io) format at:
-
-```shell
-
-http://$BROKER_ADDRESS:8080/metrics/
-
-```
-
-### ZooKeeper stats
-
-The local ZooKeeper, configuration store server and clients that are shipped with Pulsar can expose detailed stats through Prometheus.
-
-```shell
-
-http://$LOCAL_ZK_SERVER:8000/metrics
-http://$GLOBAL_ZK_SERVER:8001/metrics
-
-```
-
-The default port of local ZooKeeper is `8000` and the default port of the configuration store is `8001`. You can use a different stats port by configuring `metricsProvider.httpPort` in the `conf/zookeeper.conf` file.
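-
-For example, you can verify that the stats endpoint responds with a plain HTTP request (the hostname is a placeholder):
-
-```shell
-
-curl http://$LOCAL_ZK_SERVER:8000/metrics
-
-```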
-
-### BookKeeper stats
-
-You can configure the stats frameworks for BookKeeper by modifying the `statsProviderClass` in the `conf/bookkeeper.conf` file.
-
-The default BookKeeper configuration enables the Prometheus exporter. The configuration is included with the Pulsar distribution.
-
-```shell
-
-http://$BOOKIE_ADDRESS:8000/metrics
-
-```
-
-The default port for the bookie is `8000`. You can change the port by configuring `prometheusStatsHttpPort` in the `conf/bookkeeper.conf` file.
-
-### Managed cursor acknowledgment state
-The acknowledgment state is persisted to the ledger first. When the acknowledgment state fails to be persisted to the ledger, it is persisted to ZooKeeper. To track the stats of acknowledgment, you can configure the metrics for the managed cursor.
-
-```
-
-brk_ml_cursor_persistLedgerSucceed(namespace="", ledger_name="", cursor_name:"")
-brk_ml_cursor_persistLedgerErrors(namespace="", ledger_name="", cursor_name:"")
-brk_ml_cursor_persistZookeeperSucceed(namespace="", ledger_name="", cursor_name:"")
-brk_ml_cursor_persistZookeeperErrors(namespace="", ledger_name="", cursor_name:"")
-brk_ml_cursor_nonContiguousDeletedMessagesRange(namespace="", ledger_name="", cursor_name:"")
-
-```
-
-These metrics are exposed through the Prometheus interface, and you can monitor and check them in Grafana.
-
-### Function and connector stats
-
-You can collect functions worker stats from `functions-worker` and export the metrics in JSON format, which contain functions worker JVM metrics.
-
-```
-
-pulsar-admin functions-worker monitoring-metrics
-
-```
-
-You can collect functions and connectors metrics from `functions-worker` and export the metrics in JSON format.
-
-```
-
-pulsar-admin functions-worker function-stats
-
-```
-
-The aggregated functions and connectors metrics can be exposed in Prometheus format as below. You can get [`FUNCTIONS_WORKER_ADDRESS`](http://pulsar.apache.org/docs/en/next/functions-worker/) and `WORKER_PORT` from the `functions_worker.yml` file.
-
-```
-
-http://$FUNCTIONS_WORKER_ADDRESS:$WORKER_PORT/metrics
-
-```
-
-## Configure Prometheus
-
-You can use Prometheus to collect all the metrics exposed for Pulsar components and set up [Grafana](https://grafana.com/) dashboards to display the metrics and monitor your Pulsar cluster. For details, refer to [Prometheus guide](https://prometheus.io/docs/introduction/getting_started/).
-
-When you run Pulsar on bare metal, you can provide the list of nodes to be probed. When you deploy Pulsar in a Kubernetes cluster, the monitoring is set up automatically. For details, refer to [Kubernetes instructions](helm-deploy.md).
-
-## Dashboards
-
-When you collect time series statistics, the major problem is to make sure the number of dimensions attached to the data does not explode. Thus you only need to collect time series of metrics aggregated at the namespace level.
-
-### Pulsar per-topic dashboard
-
-The per-topic dashboard instructions are available at [Pulsar manager](administration-pulsar-manager.md).
-
-### Grafana
-
-You can use Grafana to create dashboards driven by the data stored in Prometheus.
-
-When you deploy Pulsar on Kubernetes, a `pulsar-grafana` Docker image is enabled by default. You can use the Docker image with the principal dashboards.
-
-Enter the command below to use the dashboard manually:
-
-```shell
-
-docker run -p3000:3000 \
-  -e PROMETHEUS_URL=http://$PROMETHEUS_HOST:9090/ \
-  apachepulsar/pulsar-grafana:latest
-
-```
-
-The following are some Grafana dashboard examples:
-
-- [pulsar-grafana](http://pulsar.apache.org/docs/en/deploy-monitoring/#grafana): a Grafana dashboard that displays metrics collected in Prometheus for Pulsar clusters running on Kubernetes.
-- [apache-pulsar-grafana-dashboard](https://github.com/streamnative/apache-pulsar-grafana-dashboard): a collection of Grafana dashboard templates for different Pulsar components running on both Kubernetes and on-premise machines.
-
-## Alerting rules
-You can set alerting rules according to your Pulsar environment. To configure alerting rules for Apache Pulsar, refer to [alerting rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/).
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/develop-load-manager.md b/site2/website/versioned_docs/version-2.8.0-deprecated/develop-load-manager.md
deleted file mode 100644
index 509209b6a852d8..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/develop-load-manager.md
+++ /dev/null
@@ -1,227 +0,0 @@
----
-id: develop-load-manager
-title: Modular load manager
-sidebar_label: "Modular load manager"
-original_id: develop-load-manager
----
-
-The *modular load manager*, implemented in [`ModularLoadManagerImpl`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/impl/ModularLoadManagerImpl.java), is a flexible alternative to the previously implemented load manager, [`SimpleLoadManagerImpl`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/impl/SimpleLoadManagerImpl.java), which attempts to simplify how load is managed while also providing abstractions so that complex load management strategies may be implemented.
-
-## Usage
-
-There are two ways that you can enable the modular load manager:
-
-1. Change the value of the `loadManagerClassName` parameter in `conf/broker.conf` from `org.apache.pulsar.broker.loadbalance.impl.SimpleLoadManagerImpl` to `org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl`.
-2. Using the `pulsar-admin` tool. Here's an example:
-
-   ```shell
-
-   $ pulsar-admin update-dynamic-config \
-     --config loadManagerClassName \
-     --value org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl
-
-   ```
-
-   You can use the same method to change back to the original value. In either case, any mistake in specifying the load manager will cause Pulsar to default to `SimpleLoadManagerImpl`.
-
-## Verification
-
-There are a few different ways to determine which load manager is being used:
-
-1. Use `pulsar-admin` to examine the `loadManagerClassName` element:
-
-   ```shell
-
-   $ bin/pulsar-admin brokers get-all-dynamic-config
-   {
-     "loadManagerClassName" : "org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl"
-   }
-
-   ```
-
-   If there is no `loadManagerClassName` element, then the default load manager is used.
-
-2. Consult a ZooKeeper load report. With the modular load manager, the load report in `/loadbalance/brokers/...` will have many differences. For example, the `systemResourceUsage` sub-elements (`bandwidthIn`, `bandwidthOut`, etc.) are now all at the top level. Here is an example load report from the modular load manager:
-
-   ```json
-
-   {
-     "bandwidthIn": {
-       "limit": 10240000.0,
-       "usage": 4.256510416666667
-     },
-     "bandwidthOut": {
-       "limit": 10240000.0,
-       "usage": 5.287239583333333
-     },
-     "bundles": [],
-     "cpu": {
-       "limit": 2400.0,
-       "usage": 5.7353247655435915
-     },
-     "directMemory": {
-       "limit": 16384.0,
-       "usage": 1.0
-     }
-   }
-
-   ```
-
-   With the simple load manager, the load report in `/loadbalance/brokers/...` will look like this:
-
-   ```json
-
-   {
-     "systemResourceUsage": {
-       "bandwidthIn": {
-         "limit": 10240000.0,
-         "usage": 0.0
-       },
-       "bandwidthOut": {
-         "limit": 10240000.0,
-         "usage": 0.0
-       },
-       "cpu": {
-         "limit": 2400.0,
-         "usage": 0.0
-       },
-       "directMemory": {
-         "limit": 16384.0,
-         "usage": 1.0
-       },
-       "memory": {
-         "limit": 8192.0,
-         "usage": 3903.0
-       }
-     }
-   }
-
-   ```
-
-3. The command-line [broker monitor](reference-cli-tools.md#monitor-brokers) will have a different output format depending on which load manager implementation is being used.
-
-   Here is an example from the modular load manager:
-
-   ```
-
-   ===================================================================================================================
-   ||SYSTEM |CPU % |MEMORY % |DIRECT % |BW IN % |BW OUT % |MAX % ||
-   || |0.00 |48.33 |0.01 |0.00 |0.00 |48.33 ||
-   ||COUNT |TOPIC |BUNDLE |PRODUCER |CONSUMER |BUNDLE + |BUNDLE - ||
-   || |4 |4 |0 |2 |4 |0 ||
-   ||LATEST |MSG/S IN |MSG/S OUT |TOTAL |KB/S IN |KB/S OUT |TOTAL ||
-   || |0.00 |0.00 |0.00 |0.00 |0.00 |0.00 ||
-   ||SHORT |MSG/S IN |MSG/S OUT |TOTAL |KB/S IN |KB/S OUT |TOTAL ||
-   || |0.00 |0.00 |0.00 |0.00 |0.00 |0.00 ||
-   ||LONG |MSG/S IN |MSG/S OUT |TOTAL |KB/S IN |KB/S OUT |TOTAL ||
-   || |0.00 |0.00 |0.00 |0.00 |0.00 |0.00 ||
-   ===================================================================================================================
-
-   ```
-
-   Here is an example from the simple load manager:
-
-   ```
-
-   ===================================================================================================================
-   ||COUNT |TOPIC |BUNDLE |PRODUCER |CONSUMER |BUNDLE + |BUNDLE - ||
-   || |4 |4 |0 |2 |0 |0 ||
-   ||RAW SYSTEM |CPU % |MEMORY % |DIRECT % |BW IN % |BW OUT % |MAX % ||
-   || |0.25 |47.94 |0.01 |0.00 |0.00 |47.94 ||
-   ||ALLOC SYSTEM |CPU % |MEMORY % |DIRECT % |BW IN % |BW OUT % |MAX % ||
-   || |0.20 |1.89 | |1.27 |3.21 |3.21 ||
-   ||RAW MSG |MSG/S IN |MSG/S OUT |TOTAL |KB/S IN |KB/S OUT |TOTAL ||
-   || |0.00 |0.00 |0.00 |0.01 |0.01 |0.01 ||
-   ||ALLOC MSG |MSG/S IN |MSG/S OUT |TOTAL |KB/S IN |KB/S OUT |TOTAL ||
-   || |54.84 |134.48 |189.31 |126.54 |320.96 |447.50 ||
-   ===================================================================================================================
-
-   ```
-
-It is important to note that the modular load manager is _centralized_, meaning that all requests to assign a bundle---whether it's been seen before or whether this is the first time---only get handled by the _lead_ broker (which can change over time). To determine the current lead broker, examine the `/loadbalance/leader` node in ZooKeeper.
-
-## Implementation
-
-### Data
-
-The data monitored by the modular load manager is contained in the [`LoadData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/LoadData.java) class.
-Here, the available data is subdivided into the bundle data and the broker data.
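-
-Since all of this data ultimately lives in ZooKeeper, you can also inspect it directly while debugging. A sketch (the ZooKeeper address and broker znode name are placeholders; the znode under `/loadbalance/brokers` matches the broker's advertised host and port):
-
-```shell
-
-$ bin/pulsar zookeeper-shell -server zk1.us-west.example.com:2181 get /loadbalance/leader
-$ bin/pulsar zookeeper-shell -server zk1.us-west.example.com:2181 get /loadbalance/brokers/broker1.us-west.example.com:8080
-
-```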
-
-#### Broker
-
-The broker data is contained in the [`BrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/BrokerData.java) class. It is further subdivided into two parts,
-one being the local data which every broker individually writes to ZooKeeper, and the other being the historical broker
-data which is written to ZooKeeper by the leader broker.
-
-##### Local Broker Data
-The local broker data is contained in the class [`LocalBrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/java/org/apache/pulsar/policies/data/loadbalancer/LocalBrokerData.java) and provides information about the following resources:
-
-* CPU usage
-* JVM heap memory usage
-* Direct memory usage
-* Bandwidth in/out usage
-* Most recent total message rate in/out across all bundles
-* Total number of topics, bundles, producers, and consumers
-* Names of all bundles assigned to this broker
-* Most recent changes in bundle assignments for this broker
-
-The local broker data is updated periodically according to the service configuration
-`loadBalancerReportUpdateMaxIntervalMinutes`. After any broker updates its local broker data, the leader broker will
-receive the update immediately via a ZooKeeper watch, where the local data is read from the ZooKeeper node
-`/loadbalance/brokers/<broker host/port>`.
-
-##### Historical Broker Data
-
-The historical broker data is contained in the [`TimeAverageBrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/TimeAverageBrokerData.java) class.
-
-In order to reconcile the need to make good decisions in a steady-state scenario and make reactive decisions in a critical scenario, the historical data is split into two parts: the short-term data for reactive decisions, and the long-term data for steady-state decisions. Both time frames maintain the following information:
-
-* Message rate in/out for the entire broker
-* Message throughput in/out for the entire broker
-
-Unlike the bundle data, the broker data does not maintain samples for the global broker message rates and throughputs, which are not expected to remain steady as new bundles are removed or added. Instead, this data is aggregated over the short-term and long-term data for the bundles. See the section on bundle data to understand how that data is collected and maintained.
-
-The historical broker data is updated for each broker in memory by the leader broker whenever any broker writes its local data to ZooKeeper. Then, the historical data is written to ZooKeeper by the leader broker periodically according to the configuration `loadBalancerResourceQuotaUpdateIntervalMinutes`.
-
-##### Bundle Data
-
-The bundle data is contained in the [`BundleData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/BundleData.java) class. Like the historical broker data, the bundle data is split into a short-term and a long-term time frame. Each time frame maintains the following information:
-
-* Message rate in/out for this bundle
-* Message throughput in/out for this bundle
-* Current number of samples for this bundle
-
-The time frames are implemented by maintaining the average of these values over a set, limited number of samples, where
-the samples are obtained through the message rate and throughput values in the local data. Thus, if the update interval
-for the local data is 2 minutes, the number of short samples is 10 and the number of long samples is 1000, the
-short-term data is maintained over a period of `10 samples * 2 minutes / sample = 20 minutes`, while the long-term
-data is similarly over a period of 2000 minutes. Whenever there are not enough samples to satisfy a given time frame,
-the average is taken only over the existing samples. When no samples are available, default values are assumed until
-they are overwritten by the first sample. Currently, the default values are:
-
-* Message rate in/out: 50 messages per second both ways
-* Message throughput in/out: 50KB per second both ways
-
-The bundle data is updated in memory on the leader broker whenever any broker writes its local data to ZooKeeper.
-Then, the bundle data is written to ZooKeeper by the leader broker periodically at the same time as the historical
-broker data, according to the configuration `loadBalancerResourceQuotaUpdateIntervalMinutes`.
-
-### Traffic Distribution
-
-The modular load manager uses the abstraction provided by [`ModularLoadManagerStrategy`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/ModularLoadManagerStrategy.java) to make decisions about bundle assignment. The strategy makes a decision by considering the service configuration, the entire load data, and the bundle data for the bundle to be assigned. Currently, the only supported strategy is [`LeastLongTermMessageRate`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/impl/LeastLongTermMessageRate.java), though soon users will have the ability to inject their own strategies if desired.
-
-#### Least Long Term Message Rate Strategy
-
-As its name suggests, the least long term message rate strategy attempts to distribute bundles across brokers so that
-the message rate in the long-term time window for each broker is roughly the same. However, simply balancing load based
-on message rate does not handle the issue of asymmetric resource burden per message on each broker. Thus, the system
-resource usages, which are CPU, memory, direct memory, bandwidth in, and bandwidth out, are also considered in the
-assignment process. This is done by weighting the final message rate according to
-`1 / (overload_threshold - max_usage)`, where `overload_threshold` corresponds to the configuration
-`loadBalancerBrokerOverloadedThresholdPercentage` and `max_usage` is the maximum proportion among the system resources
-that is being utilized by the candidate broker. This multiplier ensures that machines that are being more heavily taxed
-by the same message rates will receive less load. In particular, it tries to ensure that if one machine is overloaded,
-then all machines are approximately overloaded. In the case in which a broker's max usage exceeds the overload
-threshold, that broker is not considered for bundle assignment. If all brokers are overloaded, the bundle is randomly
-assigned.
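-
-As a rough worked example (the numbers are made up): with `loadBalancerBrokerOverloadedThresholdPercentage` set to 85, a broker whose maximum resource usage is 75% weights its long-term message rate by `1 / (0.85 - 0.75) = 10`, while a broker at 45% usage weights it by `1 / (0.85 - 0.45) = 2.5`. The busier machine therefore appears four times more loaded for the same raw message rate and attracts correspondingly fewer bundles.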
-
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/develop-schema.md b/site2/website/versioned_docs/version-2.8.0-deprecated/develop-schema.md
deleted file mode 100644
index 2d4461a5ea2b55..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/develop-schema.md
+++ /dev/null
@@ -1,62 +0,0 @@
----
-id: develop-schema
-title: Custom schema storage
-sidebar_label: "Custom schema storage"
-original_id: develop-schema
----
-
-By default, Pulsar stores data type [schemas](concepts-schema-registry.md) in [Apache BookKeeper](https://bookkeeper.apache.org) (which is deployed alongside Pulsar). You can, however, use another storage system if you wish. This doc walks you through creating your own schema storage implementation.
-
-In order to use a non-default (i.e. non-BookKeeper) storage system for Pulsar schemas, you need to implement two Java interfaces: [`SchemaStorage`](#schemastorage-interface) and [`SchemaStorageFactory`](#schemastoragefactory-interface).
-
-## SchemaStorage interface
-
-The `SchemaStorage` interface has the following methods:
-
-```java
-
-public interface SchemaStorage {
-    // How schemas are updated
-    CompletableFuture put(String key, byte[] value, byte[] hash);
-
-    // How schemas are fetched from storage
-    CompletableFuture get(String key, SchemaVersion version);
-
-    // How schemas are deleted
-    CompletableFuture delete(String key);
-
-    // Utility method for converting a schema version byte array to a SchemaVersion object
-    SchemaVersion versionFromBytes(byte[] version);
-
-    // Startup behavior for the schema storage client
-    void start() throws Exception;
-
-    // Shutdown behavior for the schema storage client
-    void close() throws Exception;
-}
-
-```
-
-> For a full-fledged example schema storage implementation, see the [`BookKeeperSchemaStorage`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/schema/BookkeeperSchemaStorage.java) class.
-
-## SchemaStorageFactory interface
-
-```java
-
-public interface SchemaStorageFactory {
-    @NotNull
-    SchemaStorage create(PulsarService pulsar) throws Exception;
-}
-
-```
-
-> For a full-fledged example schema storage factory implementation, see the [`BookKeeperSchemaStorageFactory`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/schema/BookkeeperSchemaStorageFactory.java) class.
-
-## Deployment
-
-In order to use your custom schema storage implementation, you'll need to:
-
-1. Package the implementation in a [JAR](https://docs.oracle.com/javase/tutorial/deployment/jar/basicsindex.html) file.
-1. Add that jar to the `lib` folder in your Pulsar [binary or source distribution](getting-started-standalone.md#installing-pulsar).
-1. Change the `schemaRegistryStorageClassName` configuration in [`broker.conf`](reference-configuration.md#broker) to your custom factory class (i.e. the `SchemaStorageFactory` implementation, not the `SchemaStorage` implementation).
-1. Start up Pulsar.
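-
-Concretely, the deployment steps might look like the following sketch (the JAR name and `com.example.MySchemaStorageFactory` are hypothetical placeholders):
-
-```bash
-
-# copy the packaged implementation into Pulsar's lib directory
-cp my-schema-storage.jar apache-pulsar-@pulsar:version@/lib/
-
-# point the broker at the custom factory class by editing the existing
-# schemaRegistryStorageClassName entry in conf/broker.conf
-sed -i 's|^schemaRegistryStorageClassName=.*|schemaRegistryStorageClassName=com.example.MySchemaStorageFactory|' conf/broker.conf
-
-# restart the broker so the new schema storage takes effect
-bin/pulsar-daemon stop broker
-bin/pulsar-daemon start broker
-
-```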
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/develop-tools.md b/site2/website/versioned_docs/version-2.8.0-deprecated/develop-tools.md
deleted file mode 100644
index b5457790b80810..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/develop-tools.md
+++ /dev/null
@@ -1,111 +0,0 @@
----
-id: develop-tools
-title: Simulation tools
-sidebar_label: "Simulation tools"
-original_id: develop-tools
----
-
-It is sometimes necessary to create a test environment and incur artificial load to observe how well load managers
-handle the load. The load simulation controller, the load simulation client, and the broker monitor were created as an
-effort to make creating this load and observing its effects on the managers easier.
-
-## Simulation Client
-The simulation client is a machine which will create and subscribe to topics with configurable message rates and sizes.
-Because it is sometimes necessary to use multiple client machines when simulating a large load, the user does not interact
-with the simulation client directly, but instead delegates their requests to the simulation controller, which will then
-send signals to clients to start incurring load. The client implementation is in the class
-`org.apache.pulsar.testclient.LoadSimulationClient`.
-
-### Usage
-To start a simulation client, use the `pulsar-perf` script with the command `simulation-client` as follows:
-
-```
-
-pulsar-perf simulation-client --port <listen port> --service-url <pulsar service url>
-
-```
-
-The client will then be ready to receive controller commands.
-## Simulation Controller
-The simulation controller sends signals to the simulation clients, requesting them to create new topics, stop old
-topics, change the load incurred by topics, and perform several other tasks. It is implemented in the class
-`org.apache.pulsar.testclient.LoadSimulationController` and presents a shell to the user as an interface to send
-commands with.
-
-### Usage
-To start a simulation controller, use the `pulsar-perf` script with the command `simulation-controller` as follows:
-
-```
-
-pulsar-perf simulation-controller --cluster <cluster to simulate on> --client-port <listen port for clients>
---clients <comma-separated list of client hostnames>
-
-```
-
-The clients should already be started before the controller is started. You will then be presented with a simple prompt,
-where you can issue commands to simulation clients. Arguments often refer to tenant names, namespace names, and topic
-names. In all cases, the BASE names of the tenants, namespaces, and topics are used. For example, for the topic
-`persistent://my_tenant/my_cluster/my_namespace/my_topic`, the tenant name is `my_tenant`, the namespace name is
-`my_namespace`, and the topic name is `my_topic`. The controller can perform the following actions:
-
-* Create a topic with a producer and a consumer
-  * `trade <tenant> <namespace> <topic> [--rate <message rate per second>]
-    [--rand-rate <lower bound>,<upper bound>]
-    [--size <message size in bytes>]`
-* Create a group of topics with a producer and a consumer
-  * `trade_group <tenant> <group> <num_namespaces> [--rate <message rate per second>]
-    [--rand-rate <lower bound>,<upper bound>]
-    [--separation <separation between topic creations in ms>] [--size <message size in bytes>]
-    [--topics-per-namespace <number of topics to create per namespace>]`
-* Change the configuration of an existing topic
-  * `change <tenant> <namespace> <topic> [--rate <message rate per second>]
-    [--rand-rate <lower bound>,<upper bound>]
-    [--size <message size in bytes>]`
-* Change the configuration of a group of topics
-  * `change_group <tenant> <group> [--rate <message rate per second>] [--rand-rate <lower bound>,<upper bound>]
-    [--size <message size in bytes>] [--topics-per-namespace <number of topics per namespace>]`
-* Shutdown a previously created topic
-  * `stop <tenant> <namespace> <topic>`
-* Shutdown a previously created group of topics
-  * `stop_group <tenant> <group>`
-* Copy the historical data from one ZooKeeper to another and simulate based on the message rates and sizes in that history
-  * `copy <tenant> <source zookeeper> <target zookeeper> [--rate-multiplier value]`
-* Simulate the load of the historical data on the current ZooKeeper (should be the same ZooKeeper being simulated on)
-  * `simulate <zookeeper> [--rate-multiplier value]`
-* Stream the latest data from the given active ZooKeeper to simulate the real-time load of that ZooKeeper.
-  * `stream <zookeeper> [--rate-multiplier value]`
-
-The "group" arguments in these commands allow the user to create or affect multiple topics at once. Groups are created
-when calling the `trade_group` command, and all topics from these groups may be subsequently modified or stopped
-with the `change_group` and `stop_group` commands respectively. All ZooKeeper arguments are of the form
-`zookeeper_host:port`.
-
-### Difference Between Copy, Simulate, and Stream
-The commands `copy`, `simulate`, and `stream` are very similar but have significant differences. `copy` is used when
-you want to simulate the load of a static, external ZooKeeper on the ZooKeeper you are simulating on. Thus,
-`source zookeeper` should be the ZooKeeper you want to copy and `target zookeeper` should be the ZooKeeper you are
-simulating on, and then it will get the full benefit of the historical data of the source in both load manager
-implementations. `simulate` on the other hand takes in only one ZooKeeper, the one you are simulating on. It assumes
-that you are simulating on a ZooKeeper that has historical data for `SimpleLoadManagerImpl` and creates equivalent
-historical data for `ModularLoadManagerImpl`. Then, the load according to the historical data is simulated by the
-clients. Finally, `stream` takes in an active ZooKeeper different from the ZooKeeper being simulated on and streams
-load data from it and simulates the real-time load. In all cases, the optional `rate-multiplier` argument allows the
-user to simulate some proportion of the load. For instance, using `--rate-multiplier 0.05` will cause messages to
-be sent at only `5%` of the rate of the load that is being simulated.
-
-## Broker Monitor
-To observe the behavior of the load manager in these simulations, one may utilize the broker monitor, which is
-implemented in `org.apache.pulsar.testclient.BrokerMonitor`. The broker monitor will print tabular load data to the
-console as it is updated using watchers.
-
-### Usage
-To start a broker monitor, use the `monitor-brokers` command in the `pulsar-perf` script:
-
-```
-
-pulsar-perf monitor-brokers --connect-string <zookeeper host:port>
-
-```
-
-The console will then continuously print load data until it is interrupted.
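-
-Putting the pieces together, a minimal end-to-end run might look like this (hostnames, ports, and the cluster name are placeholders):
-
-```
-
-# on each load-generating machine, start a simulation client
-pulsar-perf simulation-client --port 9990 --service-url pulsar://pulsar.example.com:6650
-
-# on a control machine, start the controller and point it at the clients
-pulsar-perf simulation-controller --cluster test-cluster --client-port 9990 --clients client1.example.com,client2.example.com
-
-# in another terminal, watch how the load manager reacts
-pulsar-perf monitor-brokers --connect-string zk1.example.com:2181
-
-```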
- diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/developing-binary-protocol.md b/site2/website/versioned_docs/version-2.8.0-deprecated/developing-binary-protocol.md deleted file mode 100644 index 177ffebd6e5ded..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/developing-binary-protocol.md +++ /dev/null @@ -1,606 +0,0 @@ ---- -id: developing-binary-protocol -title: Pulsar binary protocol specification -sidebar_label: "Binary protocol" -original_id: developing-binary-protocol ---- - -Pulsar uses a custom binary protocol for communications between producers/consumers and brokers. This protocol is designed to support required features, such as acknowledgements and flow control, while ensuring maximum transport and implementation efficiency. - -Clients and brokers exchange *commands* with each other. Commands are formatted as binary [protocol buffer](https://developers.google.com/protocol-buffers/) (aka *protobuf*) messages. The format of protobuf commands is specified in the [`PulsarApi.proto`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/proto/PulsarApi.proto) file and also documented in the [Protobuf interface](#protobuf-interface) section below. - -> ### Connection sharing -> Commands for different producers and consumers can be interleaved and sent through the same connection without restriction. - -All commands associated with Pulsar's protocol are contained in a [`BaseCommand`](#pulsar.proto.BaseCommand) protobuf message that includes a [`Type`](#pulsar.proto.Type) [enum](https://developers.google.com/protocol-buffers/docs/proto#enum) with all possible subcommands as optional fields. `BaseCommand` messages can specify only one subcommand. - -## Framing - -Since protobuf doesn't provide any sort of message frame, all messages in the Pulsar protocol are prepended with a 4-byte field that specifies the size of the frame. The maximum allowable size of a single frame is 5 MB. - -The Pulsar protocol allows for two types of commands: - -1. **Simple commands** that do not carry a message payload. -2. **Payload commands** that bear a payload that is used when publishing or delivering messages. In payload commands, the protobuf command data is followed by protobuf [metadata](#message-metadata) and then the payload, which is passed in raw format outside of protobuf. All sizes are passed as 4-byte unsigned big endian integers. - -> Message payloads are passed in raw format rather than protobuf format for efficiency reasons. 
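-
-For illustration, the sketch below reads one length-prefixed frame off a broker connection. It is a minimal, hypothetical helper (not part of any Pulsar client API); only the 4-byte big-endian size prefix and the 5 MB frame limit come from the specification above, and `in` is assumed to wrap the TCP socket's input stream.
-
-```java
-
-import java.io.DataInputStream;
-import java.io.IOException;
-
-public class FrameReader {
-    private static final int MAX_FRAME_SIZE = 5 * 1024 * 1024; // 5 MB limit from the spec
-
-    public static byte[] readFrame(DataInputStream in) throws IOException {
-        int totalSize = in.readInt();       // 4-byte big-endian size of everything that follows
-        if (totalSize < 4 || totalSize > MAX_FRAME_SIZE) {
-            throw new IOException("Invalid frame size: " + totalSize);
-        }
-        byte[] frame = new byte[totalSize]; // commandSize + command, plus the payload section if present
-        in.readFully(frame);
-        return frame;
-    }
-}
-
-```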
- -### Simple commands - -Simple (payload-free) commands have this basic structure: - -| Component | Description | Size (in bytes) | -|:------------|:----------------------------------------------------------------------------------------|:----------------| -| totalSize | The size of the frame, counting everything that comes after it (in bytes) | 4 | -| commandSize | The size of the protobuf-serialized command | 4 | -| message | The protobuf message serialized in a raw binary format (rather than in protobuf format) | | - -### Payload commands - -Payload commands have this basic structure: - -| Component | Description | Size (in bytes) | -|:-------------|:--------------------------------------------------------------------------------------------|:----------------| -| totalSize | The size of the frame, counting everything that comes after it (in bytes) | 4 | -| commandSize | The size of the protobuf-serialized command | 4 | -| message | The protobuf message serialized in a raw binary format (rather than in protobuf format) | | -| magicNumber | A 2-byte byte array (`0x0e01`) identifying the current format | 2 | -| checksum | A [CRC32-C checksum](http://www.evanjones.ca/crc32c.html) of everything that comes after it | 4 | -| metadataSize | The size of the message [metadata](#message-metadata) | 4 | -| metadata | The message [metadata](#message-metadata) stored as a binary protobuf message | | -| payload | Anything left in the frame is considered the payload and can include any sequence of bytes | | - -## Message metadata - -Message metadata is stored alongside the application-specified payload as a serialized protobuf message. Metadata is created by the producer and passed on unchanged to the consumer. - -| Field | Description | -|:-------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| `producer_name` | The name of the producer that published the message | -| `sequence_id` | The sequence ID of the message, assigned by producer | -| `publish_time` | The publish timestamp in Unix time (i.e. as the number of milliseconds since January 1st, 1970 in UTC) | -| `properties` | A sequence of key/value pairs (using the [`KeyValue`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/proto/PulsarApi.proto#L32) message). These are application-defined keys and values with no special meaning to Pulsar. 
| -| `replicated_from` *(optional)* | Indicates that the message has been replicated and specifies the name of the [cluster](reference-terminology.md#cluster) where the message was originally published | -| `partition_key` *(optional)* | While publishing on a partition topic, if the key is present, the hash of the key is used to determine which partition to choose | -| `compression` *(optional)* | Signals that payload has been compressed and with which compression library | -| `uncompressed_size` *(optional)* | If compression is used, the producer must fill the uncompressed size field with the original payload size | -| `num_messages_in_batch` *(optional)* | If this message is really a [batch](#batch-messages) of multiple entries, this field must be set to the number of messages in the batch | - -### Batch messages - -When using batch messages, the payload will be containing a list of entries, -each of them with its individual metadata, defined by the `SingleMessageMetadata` -object. - - -For a single batch, the payload format will look like this: - - -| Field | Description | -|:--------------|:------------------------------------------------------------| -| metadataSizeN | The size of the single message metadata serialized Protobuf | -| metadataN | Single message metadata | -| payloadN | Message payload passed by application | - -Each metadata field looks like this; - -| Field | Description | -|:---------------------------|:--------------------------------------------------------| -| properties | Application-defined properties | -| partition key *(optional)* | Key to indicate the hashing to a particular partition | -| payload_size | Size of the payload for the single message in the batch | - -When compression is enabled, the whole batch will be compressed at once. - -## Interactions - -### Connection establishment - -After opening a TCP connection to a broker, typically on port 6650, the client -is responsible to initiate the session. - -![Connect interaction](/assets/binary-protocol-connect.png) - -After receiving a `Connected` response from the broker, the client can -consider the connection ready to use. Alternatively, if the broker doesn't -validate the client authentication, it will reply with an `Error` command and -close the TCP connection. - -Example: - -```protobuf - -message CommandConnect { - "client_version" : "Pulsar-Client-Java-v1.15.2", - "auth_method_name" : "my-authentication-plugin", - "auth_data" : "my-auth-data", - "protocol_version" : 6 -} - -``` - -Fields: - * `client_version` → String based identifier. Format is not enforced - * `auth_method_name` → *(optional)* Name of the authentication plugin if auth - enabled - * `auth_data` → *(optional)* Plugin specific authentication data - * `protocol_version` → Indicates the protocol version supported by the - client. Broker will not send commands introduced in newer revisions of the - protocol. Broker might be enforcing a minimum version - -```protobuf - -message CommandConnected { - "server_version" : "Pulsar-Broker-v1.15.2", - "protocol_version" : 6 -} - -``` - -Fields: - * `server_version` → String identifier of broker version - * `protocol_version` → Protocol version supported by the broker. 
Client - must not attempt to send commands introduced in newer revisions of the - protocol - -### Keep Alive - -To identify prolonged network partitions between clients and brokers or cases -in which a machine crashes without interrupting the TCP connection on the remote -end (eg: power outage, kernel panic, hard reboot...), we have introduced a -mechanism to probe for the availability status of the remote peer. - -Both clients and brokers are sending `Ping` commands periodically and they will -close the socket if a `Pong` response is not received within a timeout (default -used by broker is 60s). - -A valid implementation of a Pulsar client is not required to send the `Ping` -probe, though it is required to promptly reply after receiving one from the -broker in order to prevent the remote side from forcibly closing the TCP connection. - - -### Producer - -In order to send messages, a client needs to establish a producer. When creating -a producer, the broker will first verify that this particular client is -authorized to publish on the topic. - -Once the client gets confirmation of the producer creation, it can publish -messages to the broker, referring to the producer id negotiated before. - -![Producer interaction](/assets/binary-protocol-producer.png) - -##### Command Producer - -```protobuf - -message CommandProducer { - "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic", - "producer_id" : 1, - "request_id" : 1 -} - -``` - -Parameters: - * `topic` → Complete topic name to where you want to create the producer on - * `producer_id` → Client generated producer identifier. Needs to be unique - within the same connection - * `request_id` → Identifier for this request. Used to match the response with - the originating request. Needs to be unique within the same connection - * `producer_name` → *(optional)* If a producer name is specified, the name will - be used, otherwise the broker will generate a unique name. Generated - producer name is guaranteed to be globally unique. Implementations are - expected to let the broker generate a new producer name when the producer - is initially created, then reuse it when recreating the producer after - reconnections. - -The broker will reply with either `ProducerSuccess` or `Error` commands. - -##### Command ProducerSuccess - -```protobuf - -message CommandProducerSuccess { - "request_id" : 1, - "producer_name" : "generated-unique-producer-name" -} - -``` - -Parameters: - * `request_id` → Original id of the `CreateProducer` request - * `producer_name` → Generated globally unique producer name or the name - specified by the client, if any. - -##### Command Send - -Command `Send` is used to publish a new message within the context of an -already existing producer. This command is used in a frame that includes command -as well as message payload, for which the complete format is specified in the [payload commands](#payload-commands) section. - -```protobuf - -message CommandSend { - "producer_id" : 1, - "sequence_id" : 0, - "num_messages" : 1 -} - -``` - -Parameters: - * `producer_id` → id of an existing producer - * `sequence_id` → each message has an associated sequence id which is expected - to be implemented with a counter starting at 0. The `SendReceipt` that - acknowledges the effective publishing of messages will refer to it by - its sequence id. - * `num_messages` → *(optional)* Used when publishing a batch of messages at - once. 
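-
-To make the payload framing concrete, the following is a rough sketch of assembling a `Send` frame according to the layout in the [payload commands](#payload-commands) section. It is illustrative only, not the Pulsar client implementation: `commandBytes` and `metadataBytes` stand in for the already-serialized command and metadata protobufs.
-
-```java
-
-import java.nio.ByteBuffer;
-import java.util.zip.CRC32C;
-
-public class PayloadFrames {
-    private static final short MAGIC = 0x0e01; // magic number identifying the current format
-
-    public static ByteBuffer buildFrame(byte[] commandBytes, byte[] metadataBytes, byte[] payload) {
-        // The checksum covers everything that comes after it: metadataSize + metadata + payload.
-        ByteBuffer checksummed = ByteBuffer.allocate(4 + metadataBytes.length + payload.length);
-        checksummed.putInt(metadataBytes.length).put(metadataBytes).put(payload);
-        checksummed.flip();
-
-        CRC32C crc = new CRC32C();              // CRC32-C, in java.util.zip since Java 9
-        crc.update(checksummed.duplicate());
-
-        // totalSize counts everything that comes after the totalSize field itself.
-        int totalSize = 4 + commandBytes.length + 2 + 4 + checksummed.remaining();
-        ByteBuffer frame = ByteBuffer.allocate(4 + totalSize);
-        frame.putInt(totalSize);
-        frame.putInt(commandBytes.length).put(commandBytes);
-        frame.putShort(MAGIC);
-        frame.putInt((int) crc.getValue());
-        frame.put(checksummed);
-        frame.flip();
-        return frame;
-    }
-}
-
-```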
- -##### Command SendReceipt - -After a message has been persisted on the configured number of replicas, the -broker will send the acknowledgment receipt to the producer. - -```protobuf - -message CommandSendReceipt { - "producer_id" : 1, - "sequence_id" : 0, - "message_id" : { - "ledgerId" : 123, - "entryId" : 456 - } -} - -``` - -Parameters: - * `producer_id` → id of producer originating the send request - * `sequence_id` → sequence id of the published message - * `message_id` → message id assigned by the system to the published message - Unique within a single cluster. Message id is composed of 2 longs, `ledgerId` - and `entryId`, that reflect that this unique id is assigned when appending - to a BookKeeper ledger - - -##### Command CloseProducer - -**Note**: *This command can be sent by either producer or broker*. - -When receiving a `CloseProducer` command, the broker will stop accepting any -more messages for the producer, wait until all pending messages are persisted -and then reply `Success` to the client. - -The broker can send a `CloseProducer` command to client when it's performing -a graceful failover (eg: broker is being restarted, or the topic is being unloaded -by load balancer to be transferred to a different broker). - -When receiving the `CloseProducer`, the client is expected to go through the -service discovery lookup again and recreate the producer again. The TCP -connection is not affected. - -### Consumer - -A consumer is used to attach to a subscription and consume messages from it. -After every reconnection, a client needs to subscribe to the topic. If a -subscription is not already there, a new one will be created. - -![Consumer](/assets/binary-protocol-consumer.png) - -#### Flow control - -After the consumer is ready, the client needs to *give permission* to the -broker to push messages. This is done with the `Flow` command. - -A `Flow` command gives additional *permits* to send messages to the consumer. -A typical consumer implementation will use a queue to accumulate these messages -before the application is ready to consume them. - -After the application has dequeued half of the messages in the queue, the consumer -sends permits to the broker to ask for more messages (equals to half of the messages in the queue). - -For example, if the queue size is 1000 and the consumer consumes 500 messages in the queue. -Then the consumer sends permits to the broker to ask for 500 messages. - -##### Command Subscribe - -```protobuf - -message CommandSubscribe { - "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic", - "subscription" : "my-subscription-name", - "subType" : "Exclusive", - "consumer_id" : 1, - "request_id" : 1 -} - -``` - -Parameters: - * `topic` → Complete topic name to where you want to create the consumer on - * `subscription` → Subscription name - * `subType` → Subscription type: Exclusive, Shared, Failover, Key_Shared - * `consumer_id` → Client generated consumer identifier. Needs to be unique - within the same connection - * `request_id` → Identifier for this request. Used to match the response with - the originating request. Needs to be unique within the same connection - * `consumer_name` → *(optional)* Clients can specify a consumer name. This - name can be used to track a particular consumer in the stats. Also, in - Failover subscription type, the name is used to decide which consumer is - elected as *master* (the one receiving messages): consumers are sorted by - their consumer name and the first one is elected master. 
-
-##### Command Flow
-
-```protobuf
-
-message CommandFlow {
-  "consumer_id" : 1,
-  "messagePermits" : 1000
-}
-
-```
-
-Parameters:
-* `consumer_id` → Id of an already established consumer
-* `messagePermits` → Number of additional permits to grant to the broker for
-  pushing more messages
-
-##### Command Message
-
-Command `Message` is used by the broker to push messages to an existing consumer,
-within the limits of the given permits.
-
-
-This command is used in a frame that includes the message payload as well, for
-which the complete format is specified in the [payload commands](#payload-commands)
-section.
-
-```protobuf
-
-message CommandMessage {
-  "consumer_id" : 1,
-  "message_id" : {
-    "ledgerId" : 123,
-    "entryId" : 456
-  }
-}
-
-```
-
-##### Command Ack
-
-An `Ack` is used to signal to the broker that a given message has been
-successfully processed by the application and can be discarded by the broker.
-
-In addition, the broker will also maintain the consumer position based on the
-acknowledged messages.
-
-```protobuf
-
-message CommandAck {
-  "consumer_id" : 1,
-  "ack_type" : "Individual",
-  "message_id" : {
-    "ledgerId" : 123,
-    "entryId" : 456
-  }
-}
-
-```
-
-Parameters:
- * `consumer_id` → Id of an already established consumer
- * `ack_type` → Type of acknowledgment: `Individual` or `Cumulative`
- * `message_id` → Id of the message to acknowledge
- * `validation_error` → *(optional)* Indicates that the consumer has discarded
-   the messages due to: `UncompressedSizeCorruption`,
-   `DecompressionError`, `ChecksumMismatch`, `BatchDeSerializeError`
- * `properties` → *(optional)* Reserved configuration items
- * `txnid_most_bits` → *(optional)* Same as Transaction Coordinator ID, `txnid_most_bits` and `txnid_least_bits`
-   uniquely identify a transaction.
- * `txnid_least_bits` → *(optional)* The ID of the transaction opened in a transaction coordinator,
-   `txnid_most_bits` and `txnid_least_bits` uniquely identify a transaction.
- * `request_id` → *(optional)* ID for handling response and timeout.
-
-##### Command AckResponse
-
-An `AckResponse` is the broker’s response to an acknowledgment request sent by the client. It contains the `consumer_id` sent in the request.
-If a transaction is used, it contains both the Transaction ID and the Request ID that are sent in the request. The client finishes the specific request according to the Request ID. If the `error` field is set, it indicates that the request has failed.
-
-An example of an `AckResponse` for a transactional acknowledgment:
-
-```protobuf
-
-message CommandAckResponse {
-    "consumer_id" : 1,
-    "txnid_least_bits" : 0,
-    "txnid_most_bits" : 1,
-    "request_id" : 5
-}
-
-```
-
-##### Command CloseConsumer
-
-**Note**: *This command can be sent by either consumer or broker*.
-
-This command behaves the same as [`CloseProducer`](#command-closeproducer).
-
-##### Command RedeliverUnacknowledgedMessages
-
-A consumer can ask the broker to redeliver some or all of the pending messages
-that were pushed to that particular consumer and not yet acknowledged.
-
-The protobuf object accepts a list of message ids that the consumer wants to
-be redelivered. If the list is empty, the broker will redeliver all the
-pending messages.
-
-On redelivery, messages can be sent to the same consumer or, in the case of a
-shared subscription, spread across all available consumers.
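-
-Putting the consumer pieces together, the following is a hedged sketch of the permit accounting described in the [flow control](#flow-control) section: grant an initial window of permits, then top up once the application has drained half of the window. The `sendFlow` hook stands in for writing a `Flow` command on the wire; it is an assumption, not a Pulsar client API.
-
-```java
-
-import java.util.concurrent.BlockingQueue;
-import java.util.concurrent.LinkedBlockingQueue;
-import java.util.function.IntConsumer;
-
-public class PermitAccounting {
-    private final int windowSize;
-    private final IntConsumer sendFlow;                   // writes CommandFlow(messagePermits = n)
-    private final BlockingQueue<byte[]> queue = new LinkedBlockingQueue<>();
-    private int consumedSinceLastFlow = 0;
-
-    public PermitAccounting(int windowSize, IntConsumer sendFlow) {
-        this.windowSize = windowSize;
-        this.sendFlow = sendFlow;
-        sendFlow.accept(windowSize);                      // initial permits after subscribing
-    }
-
-    void onMessagePushed(byte[] payload) {                // called for each CommandMessage received
-        queue.offer(payload);
-    }
-
-    public byte[] receive() throws InterruptedException {
-        byte[] msg = queue.take();                        // the application dequeues a message
-        if (++consumedSinceLastFlow >= windowSize / 2) {
-            sendFlow.accept(consumedSinceLastFlow);       // ask for as many messages as were consumed
-            consumedSinceLastFlow = 0;
-        }
-        return msg;
-    }
-}
-
-```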
- - -##### Command ReachedEndOfTopic - -This is sent by a broker to a particular consumer, whenever the topic -has been "terminated" and all the messages on the subscription were -acknowledged. - -The client should use this command to notify the application that no more -messages are coming from the consumer. - -##### Command ConsumerStats - -This command is sent by the client to retrieve Subscriber and Consumer level -stats from the broker. -Parameters: - * `request_id` → Id of the request, used to correlate the request - and the response. - * `consumer_id` → Id of an already established consumer. - -##### Command ConsumerStatsResponse - -This is the broker's response to ConsumerStats request by the client. -It contains the Subscriber and Consumer level stats of the `consumer_id` sent in the request. -If the `error_code` or the `error_message` field is set it indicates that the request has failed. - -##### Command Unsubscribe - -This command is sent by the client to unsubscribe the `consumer_id` from the associated topic. -Parameters: - * `request_id` → Id of the request. - * `consumer_id` → Id of an already established consumer which needs to unsubscribe. - - -## Service discovery - -### Topic lookup - -Topic lookup needs to be performed each time a client needs to create or -reconnect a producer or a consumer. Lookup is used to discover which particular -broker is serving the topic we are about to use. - -Lookup can be done with a REST call as described in the [admin API](admin-api-topics.md#look-up-topics-owner-broker) -docs. - -Since Pulsar-1.16 it is also possible to perform the lookup within the binary -protocol. - -For the sake of example, let's assume we have a service discovery component -running at `pulsar://broker.example.com:6650` - -Individual brokers will be running at `pulsar://broker-1.example.com:6650`, -`pulsar://broker-2.example.com:6650`, ... - -A client can use a connection to the discovery service host to issue a -`LookupTopic` command. The response can either be a broker hostname to -connect to, or a broker hostname to which retry the lookup. - -The `LookupTopic` command has to be used in a connection that has already -gone through the `Connect` / `Connected` initial handshake. - -![Topic lookup](/assets/binary-protocol-topic-lookup.png) - -```protobuf - -message CommandLookupTopic { - "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic", - "request_id" : 1, - "authoritative" : false -} - -``` - -Fields: - * `topic` → Topic name to lookup - * `request_id` → Id of the request that will be passed with its response - * `authoritative` → Initial lookup request should use false. 
When following a - redirect response, client should pass the same value contained in the - response - -##### LookupTopicResponse - -Example of response with successful lookup: - -```protobuf - -message CommandLookupTopicResponse { - "request_id" : 1, - "response" : "Connect", - "brokerServiceUrl" : "pulsar://broker-1.example.com:6650", - "brokerServiceUrlTls" : "pulsar+ssl://broker-1.example.com:6651", - "authoritative" : true -} - -``` - -Example of lookup response with redirection: - -```protobuf - -message CommandLookupTopicResponse { - "request_id" : 1, - "response" : "Redirect", - "brokerServiceUrl" : "pulsar://broker-2.example.com:6650", - "brokerServiceUrlTls" : "pulsar+ssl://broker-2.example.com:6651", - "authoritative" : true -} - -``` - -In this second case, we need to reissue the `LookupTopic` command request -to `broker-2.example.com` and this broker will be able to give a definitive -answer to the lookup request. - -### Partitioned topics discovery - -Partitioned topics metadata discovery is used to find out if a topic is a -"partitioned topic" and how many partitions were set up. - -If the topic is marked as "partitioned", the client is expected to create -multiple producers or consumers, one for each partition, using the `partition-X` -suffix. - -This information only needs to be retrieved the first time a producer or -consumer is created. There is no need to do this after reconnections. - -The discovery of partitioned topics metadata works very similar to the topic -lookup. The client send a request to the service discovery address and the -response will contain actual metadata. - -##### Command PartitionedTopicMetadata - -```protobuf - -message CommandPartitionedTopicMetadata { - "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic", - "request_id" : 1 -} - -``` - -Fields: - * `topic` → the topic for which to check the partitions metadata - * `request_id` → Id of the request that will be passed with its response - - -##### Command PartitionedTopicMetadataResponse - -Example of response with metadata: - -```protobuf - -message CommandPartitionedTopicMetadataResponse { - "request_id" : 1, - "response" : "Success", - "partitions" : 32 -} - -``` - -## Protobuf interface - -All Pulsar's Protobuf definitions can be found {@inject: github:here:/pulsar-common/src/main/proto/PulsarApi.proto}. diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/functions-cli.md b/site2/website/versioned_docs/version-2.8.0-deprecated/functions-cli.md deleted file mode 100644 index ba7b7fbb9561ed..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/functions-cli.md +++ /dev/null @@ -1,198 +0,0 @@ ---- -id: functions-cli -title: Pulsar Functions command line tool -sidebar_label: "Reference: CLI" -original_id: functions-cli ---- - -The following tables list Pulsar Functions command-line tools. You can learn Pulsar Functions modes, commands, and parameters. - -## localrun - -Run Pulsar Functions locally, rather than deploying it to the Pulsar cluster. - -Name | Description | Default ----|---|--- -auto-ack | Whether or not the framework acknowledges messages automatically. | true | -broker-service-url | The URL for the Pulsar broker. | | -classname | The class name of a Pulsar Function.| | -client-auth-params | Client authentication parameter. | | -client-auth-plugin | Client authentication plugin using which function-process can connect to broker. 
| | -CPU | The CPU in cores that need to be allocated per function instance (applicable only to docker runtime).| | -custom-schema-inputs | The map of input topics to Schema class names (as a JSON string). | | -custom-serde-inputs | The map of input topics to SerDe class names (as a JSON string). | | -dead-letter-topic | The topic where all messages that were not processed successfully are sent. This parameter is not supported in Python Functions. | | -disk | The disk in bytes that need to be allocated per function instance (applicable only to docker runtime). | | -fqfn | The Fully Qualified Function Name (FQFN) for the function. | | -function-config-file | The path to a YAML config file specifying the configuration of a Pulsar Function. | | -go | Path to the main Go executable binary for the function (if the function is written in Go). | | -hostname-verification-enabled | Enable hostname verification. | false -inputs | The input topic or topics of a Pulsar Function (multiple topics can be specified as a comma-separated list). | | -jar | Path to the jar file for the function (if the function is written in Java). It also supports URL-path [http/https/file (file protocol assumes that file already exists on worker host)] from which worker can download the package. | | -instance-id-offset | Start the instanceIds from this offset. | 0 -log-topic | The topic to which the logs a Pulsar Function are produced. | | -max-message-retries | How many times should we try to process a message before giving up. | | -name | The name of a Pulsar Function. | | -namespace | The namespace of a Pulsar Function. | | -output | The output topic of a Pulsar Function (If none is specified, no output is written). | | -output-serde-classname | The SerDe class to be used for messages output by the function. | | -parallelism | The parallelism factor of a Pulsar Function (i.e. the number of function instances to run). | | -processing-guarantees | The processing guarantees (delivery semantics) applied to the function. Available values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]. | ATLEAST_ONCE -py | Path to the main Python file/Python Wheel file for the function (if the function is written in Python). | | -ram | The ram in bytes that need to be allocated per function instance (applicable only to process/docker runtime). | | -retain-ordering | Function consumes and processes messages in order. | | -schema-type | The builtin schema type or custom schema class name to be used for messages output by the function. | | -sliding-interval-count | The number of messages after which the window slides. | | -sliding-interval-duration-ms | The time duration after which the window slides. | | -subs-name | Pulsar source subscription name if user wants a specific subscription-name for the input-topic consumer. | | -tenant | The tenant of a Pulsar Function. | | -timeout-ms | The message timeout in milliseconds. | | -tls-allow-insecure | Allow insecure tls connection. | false -tls-trust-cert-path | tls trust cert file path. | | -topics-pattern | The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (only supported in Java Function). | | -use-tls | Use tls connection. | false -user-config | User-defined config key/values. | | -window-length-count | The number of messages per window. | | -window-length-duration-ms | The time duration of the window in milliseconds. 
| | - - -## create - -Create and deploy a Pulsar Function in cluster mode. - -Name | Description | Default ----|---|--- -auto-ack | Whether or not the framework acknowledges messages automatically. | true | -classname | The class name of a Pulsar Function. | | -CPU | The CPU in cores that need to be allocated per function instance (applicable only to docker runtime).| | -custom-runtime-options | A string that encodes options to customize the runtime, see docs for configured runtime for details | | -custom-schema-inputs | The map of input topics to Schema class names (as a JSON string). | | -custom-serde-inputs | The map of input topics to SerDe class names (as a JSON string). | | -dead-letter-topic | The topic where all messages that were not processed successfully are sent. This parameter is not supported in Python Functions. | | -disk | The disk in bytes that need to be allocated per function instance (applicable only to docker runtime). | | -fqfn | The Fully Qualified Function Name (FQFN) for the function. | | -function-config-file | The path to a YAML config file specifying the configuration of a Pulsar Function. | | -go | Path to the main Go executable binary for the function (if the function is written in Go). | | -inputs | The input topic or topics of a Pulsar Function (multiple topics can be specified as a comma-separated list). | | -jar | Path to the jar file for the function (if the function is written in Java). It also supports URL-path [http/https/file (file protocol assumes that file already exists on worker host)] from which worker can download the package. | | -log-topic | The topic to which the logs of a Pulsar Function are produced. | | -max-message-retries | How many times should we try to process a message before giving up. | | -name | The name of a Pulsar Function. | | -namespace | The namespace of a Pulsar Function. | | -output | The output topic of a Pulsar Function (If none is specified, no output is written). | | -output-serde-classname | The SerDe class to be used for messages output by the function. | | -parallelism | The parallelism factor of a Pulsar Function (i.e. the number of function instances to run). | | -processing-guarantees | The processing guarantees (delivery semantics) applied to the function. Available values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]. | ATLEAST_ONCE -py | Path to the main Python file/Python Wheel file for the function (if the function is written in Python). | | -ram | The ram in bytes that need to be allocated per function instance (applicable only to process/docker runtime). | | -retain-ordering | Function consumes and processes messages in order. | | -schema-type | The builtin schema type or custom schema class name to be used for messages output by the function. | | -sliding-interval-count | The number of messages after which the window slides. | | -sliding-interval-duration-ms | The time duration after which the window slides. | | -subs-name | Pulsar source subscription name if user wants a specific subscription-name for the input-topic consumer. | | -tenant | The tenant of a Pulsar Function. | | -timeout-ms | The message timeout in milliseconds. | | -topics-pattern | The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (only supported in Java Function). | | -user-config | User-defined config key/values. | | -window-length-count | The number of messages per window. 
| | -window-length-duration-ms | The time duration of the window in milliseconds. | | - -## delete - -Delete a Pulsar Function that is running on a Pulsar cluster. - -Name | Description | Default ----|---|--- -fqfn | The Fully Qualified Function Name (FQFN) for the function. | | -name | The name of a Pulsar Function. | | -namespace | The namespace of a Pulsar Function. | | -tenant | The tenant of a Pulsar Function. | | - -## update - -Update a Pulsar Function that has been deployed to a Pulsar cluster. - -Name | Description | Default ----|---|--- -auto-ack | Whether or not the framework acknowledges messages automatically. | true | -classname | The class name of a Pulsar Function. | | -CPU | The CPU in cores that need to be allocated per function instance (applicable only to docker runtime). | | -custom-runtime-options | A string that encodes options to customize the runtime, see docs for configured runtime for details | | -custom-schema-inputs | The map of input topics to Schema class names (as a JSON string). | | -custom-serde-inputs | The map of input topics to SerDe class names (as a JSON string). | | -dead-letter-topic | The topic where all messages that were not processed successfully are sent. This parameter is not supported in Python Functions. | | -disk | The disk in bytes that need to be allocated per function instance (applicable only to docker runtime). | | -fqfn | The Fully Qualified Function Name (FQFN) for the function. | | -function-config-file | The path to a YAML config file specifying the configuration of a Pulsar Function. | | -go | Path to the main Go executable binary for the function (if the function is written in Go). | | -inputs | The input topic or topics of a Pulsar Function (multiple topics can be specified as a comma-separated list). | | -jar | Path to the jar file for the function (if the function is written in Java). It also supports URL-path [http/https/file (file protocol assumes that file already exists on worker host)] from which worker can download the package. | | -log-topic | The topic to which the logs of a Pulsar Function are produced. | | -max-message-retries | How many times should we try to process a message before giving up. | | -name | The name of a Pulsar Function. | | -namespace | The namespace of a Pulsar Function. | | -output | The output topic of a Pulsar Function (If none is specified, no output is written). | | -output-serde-classname | The SerDe class to be used for messages output by the function. | | -parallelism | The parallelism factor of a Pulsar Function (i.e. the number of function instances to run). | | -processing-guarantees | The processing guarantees (delivery semantics) applied to the function. Available values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]. | ATLEAST_ONCE -py | Path to the main Python file/Python Wheel file for the function (if the function is written in Python). | | -ram | The ram in bytes that need to be allocated per function instance (applicable only to process/docker runtime). | | -retain-ordering | Function consumes and processes messages in order. | | -schema-type | The builtin schema type or custom schema class name to be used for messages output by the function. | | -sliding-interval-count | The number of messages after which the window slides. | | -sliding-interval-duration-ms | The time duration after which the window slides. | | -subs-name | Pulsar source subscription name if user wants a specific subscription-name for the input-topic consumer. | | -tenant | The tenant of a Pulsar Function. 
| |
-timeout-ms | The message timeout in milliseconds. | |
-topics-pattern | The topic pattern to consume from a list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add a SerDe class name for a pattern in --custom-serde-inputs (only supported in Java functions). | |
-update-auth-data | Whether or not to update the auth data. | false
-user-config | User-defined config key/values. | |
-window-length-count | The number of messages per window. | |
-window-length-duration-ms | The time duration of the window in milliseconds. | |
-
-## get
-
-Fetch information about a Pulsar Function.
-
-Name | Description | Default
----|---|---
-fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
-name | The name of a Pulsar Function. | |
-namespace | The namespace of a Pulsar Function. | |
-tenant | The tenant of a Pulsar Function. | |
-
-## restart
-
-Restarts a function instance.
-
-Name | Description | Default
----|---|---
-fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
-instance-id | The function instanceId (restarts all instances if instance-id is not provided). | |
-name | The name of a Pulsar Function. | |
-namespace | The namespace of a Pulsar Function. | |
-tenant | The tenant of a Pulsar Function. | |
-
-## stop
-
-Stops a function instance.
-
-Name | Description | Default
----|---|---
-fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
-instance-id | The function instanceId (stops all instances if instance-id is not provided). | |
-name | The name of a Pulsar Function. | |
-namespace | The namespace of a Pulsar Function. | |
-tenant | The tenant of a Pulsar Function. | |
-
-## start
-
-Starts a stopped function instance.
-
-Name | Description | Default
----|---|---
-fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
-instance-id | The function instanceId (starts all instances if instance-id is not provided). | |
-name | The name of a Pulsar Function. | |
-namespace | The namespace of a Pulsar Function. | |
-tenant | The tenant of a Pulsar Function. | |
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/functions-debug.md b/site2/website/versioned_docs/version-2.8.0-deprecated/functions-debug.md
deleted file mode 100644
index e1d55ae0897aa5..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/functions-debug.md
+++ /dev/null
@@ -1,533 +0,0 @@
----
-id: functions-debug
-title: Debug Pulsar Functions
-sidebar_label: "How-to: Debug"
-original_id: functions-debug
----
-
-You can use the following methods to debug Pulsar Functions:
-
-* [Captured stderr](functions-debug.md#captured-stderr)
-* [Use unit test](functions-debug.md#use-unit-test)
-* [Debug with localrun mode](functions-debug.md#debug-with-localrun-mode)
-* [Use log topic](functions-debug.md#use-log-topic)
-* [Use Functions CLI](functions-debug.md#use-functions-cli)
-
-## Captured stderr
-
-Function startup information and captured stderr output are written to `logs/functions/<tenant>/<namespace>/<function>/<function>-<instance>.log`
-
-This is useful for debugging why a function fails to start.
-
-## Use unit test
-
-A Pulsar Function is a function with inputs and outputs, so you can test it in much the same way as you test any other function.
-
-For example, if you have the following Pulsar Function:
-
-```java

-import java.util.function.Function;
-
-public class JavaNativeExclamationFunction implements Function<String, String> {
-   @Override
-   public String apply(String input) {
-       return String.format("%s!", input);
-   }
-}
-
-```
-
-You can write a simple unit test to test this Pulsar Function.
-
-:::tip
-
-Pulsar uses TestNG for testing.
-
-:::
-
-```java
-
-@Test
-public void testJavaNativeExclamationFunction() {
-   JavaNativeExclamationFunction exclamation = new JavaNativeExclamationFunction();
-   String output = exclamation.apply("foo");
-   Assert.assertEquals(output, "foo!");
-}
-
-```
-
-The following Pulsar Function implements the `org.apache.pulsar.functions.api.Function` interface.
-
-```java
-
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-
-public class ExclamationFunction implements Function<String, String> {
-   @Override
-   public String process(String input, Context context) {
-       return String.format("%s!", input);
-   }
-}
-
-```
-
-In this situation, you can write a unit test for this function as well. Remember to mock the `Context` parameter. The following is an example.
-
-:::tip
-
-Pulsar uses TestNG for testing.
-
-:::
-
-```java
-
-@Test
-public void testExclamationFunction() {
-   ExclamationFunction exclamation = new ExclamationFunction();
-   String output = exclamation.process("foo", mock(Context.class));
-   Assert.assertEquals(output, "foo!");
-}
-
-```
-
-## Debug with localrun mode
-When you run a Pulsar Function in localrun mode, it launches an instance of the Function on your local machine as a thread.
-
-In this mode, a Pulsar Function consumes and produces actual data to a Pulsar cluster, and mirrors how the function actually runs in a Pulsar cluster.
-
-:::note
-
-Currently, debugging with localrun mode is only supported by Pulsar Functions written in Java. You need Pulsar version 2.4.0 or later to do the following. Even though localrun is available in versions earlier than Pulsar 2.4.0, you cannot debug with localrun mode programmatically or run Functions as threads.
-
-:::
-
-You can launch your function in the following manner.
-
-```java
-
-FunctionConfig functionConfig = new FunctionConfig();
-functionConfig.setName(functionName);
-functionConfig.setInputs(Collections.singleton(sourceTopic));
-functionConfig.setClassName(ExclamationFunction.class.getName());
-functionConfig.setRuntime(FunctionConfig.Runtime.JAVA);
-functionConfig.setOutput(sinkTopic);
-
-LocalRunner localRunner = LocalRunner.builder().functionConfig(functionConfig).build();
-localRunner.start(true);
-
-```
-
-This way, you can easily debug functions using an IDE. Set breakpoints and manually step through a function to debug with real data.
-
-The following example illustrates how to programmatically launch a function in localrun mode.
-
-```java
-
-public class ExclamationFunction implements Function<String, String> {
-
-   @Override
-   public String process(String s, Context context) throws Exception {
-       return s + "!";
-   }
-
-   public static void main(String[] args) throws Exception {
-       FunctionConfig functionConfig = new FunctionConfig();
-       functionConfig.setName("exclamation");
-       functionConfig.setInputs(Collections.singleton("input"));
-       functionConfig.setClassName(ExclamationFunction.class.getName());
-       functionConfig.setRuntime(FunctionConfig.Runtime.JAVA);
-       functionConfig.setOutput("output");
-
-       LocalRunner localRunner = LocalRunner.builder().functionConfig(functionConfig).build();
-       localRunner.start(false);
-   }
-}
-
-```
-
-To use localrun mode programmatically, add the following dependency.
-
-```xml
-
-<dependency>
-   <groupId>org.apache.pulsar</groupId>
-   <artifactId>pulsar-functions-local-runner</artifactId>
-   <version>${pulsar.version}</version>
-</dependency>
-
-```
-
-For complete code samples, see [here](https://github.com/jerrypeng/pulsar-functions-demos/tree/master/debugging).
-
-:::note
-
-Debugging with localrun mode for Pulsar Functions written in other languages will be supported soon.
-
-:::
-
-## Use log topic
-
-In Pulsar Functions, you can send log information generated in functions to a specified log topic. You can then configure consumers to consume messages from the log topic to check the log information.
-
-![Pulsar Functions core programming model](/assets/pulsar-functions-overview.png)
-
-**Example**
-
-```java
-
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-import org.slf4j.Logger;
-
-public class LoggingFunction implements Function<String, Void> {
-    @Override
-    public Void process(String input, Context context) {
-        Logger LOG = context.getLogger();
-        String messageId = new String(context.getMessageId());
-
-        if (input.contains("danger")) {
-            LOG.warn("A warning was received in message {}", messageId);
-        } else {
-            LOG.info("Message {} received\nContent: {}", messageId, input);
-        }
-
-        return null;
-    }
-}
-
-```
-
-As shown in the example above, you can get an `slf4j` logger via `context.getLogger()` and assign it to the `LOG` variable, so you can write your desired log information in a function using `LOG`. Meanwhile, you need to specify the topic to which the log information is produced.
-
-**Example**
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --log-topic persistent://public/default/logging-function-logs \
-  # Other function configs
-
-```
-
-## Use Functions CLI
-
-With [Pulsar Functions CLI](reference-pulsar-admin.md#functions), you can debug Pulsar Functions with the following subcommands:
-
-* `get`
-* `status`
-* `stats`
-* `list`
-* `trigger`
-
-:::tip
-
-For complete commands of **Pulsar Functions CLI**, see [here](reference-pulsar-admin.md#functions).
-
-:::
-
-### `get`
-
-Get information about a Pulsar Function.
-
-**Usage**
-
-```bash
-
-$ pulsar-admin functions get options
-
-```
-
-**Options**
-
-|Flag|Description
-|---|---
-|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function.
-|`--name`|The name of a Pulsar Function.
-|`--namespace`|The namespace of a Pulsar Function.
-|`--tenant`|The tenant of a Pulsar Function.
-
-:::tip
-
-`--fqfn` consists of `--name`, `--namespace` and `--tenant`, so you can specify either `--fqfn` or `--name`, `--namespace` and `--tenant`.
-
-:::
-
-**Example**
-
-You can specify `--fqfn` to get information about a Pulsar Function.
- -```bash - -$ ./bin/pulsar-admin functions get public/default/ExclamationFunctio6 - -``` - -Optionally, you can specify `--name`, `--namespace` and `--tenant` to get information about a Pulsar Function. - -```bash - -$ ./bin/pulsar-admin functions get \ - --tenant public \ - --namespace default \ - --name ExclamationFunctio6 - -``` - -As shown below, the `get` command shows input, output, runtime, and other information about the _ExclamationFunctio6_ function. - -```json - -{ - "tenant": "public", - "namespace": "default", - "name": "ExclamationFunctio6", - "className": "org.example.test.ExclamationFunction", - "inputSpecs": { - "persistent://public/default/my-topic-1": { - "isRegexPattern": false - } - }, - "output": "persistent://public/default/test-1", - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "userConfig": {}, - "runtime": "JAVA", - "autoAck": true, - "parallelism": 1 -} - -``` - -### `status` - -Check the current status of a Pulsar Function. - -**Usage** - -```bash - -$ pulsar-admin functions status options - -``` - -**Options** - -|Flag|Description -|---|--- -|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function. -|`--instance-id`|The instance ID of a Pulsar Function
If the `--instance-id` is not specified, it gets the IDs of all instances.
    -|`--name`|The name of a Pulsar Function. -|`--namespace`|The namespace of a Pulsar Function. -|`--tenant`|The tenant of a Pulsar Function. - -**Example** - -```bash - -$ ./bin/pulsar-admin functions status \ - --tenant public \ - --namespace default \ - --name ExclamationFunctio6 \ - -``` - -As shown below, the `status` command shows the number of instances, running instances, the instance running under the _ExclamationFunctio6_ function, received messages, successfully processed messages, system exceptions, the average latency and so on. - -```json - -{ - "numInstances" : 1, - "numRunning" : 1, - "instances" : [ { - "instanceId" : 0, - "status" : { - "running" : true, - "error" : "", - "numRestarts" : 0, - "numReceived" : 1, - "numSuccessfullyProcessed" : 1, - "numUserExceptions" : 0, - "latestUserExceptions" : [ ], - "numSystemExceptions" : 0, - "latestSystemExceptions" : [ ], - "averageLatency" : 0.8385, - "lastInvocationTime" : 1557734137987, - "workerId" : "c-standalone-fw-23ccc88ef29b-8080" - } - } ] -} - -``` - -### `stats` - -Get the current stats of a Pulsar Function. - -**Usage** - -```bash - -$ pulsar-admin functions stats options - -``` - -**Options** - -|Flag|Description -|---|--- -|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function. -|`--instance-id`|The instance ID of a Pulsar Function.
If the `--instance-id` is not specified, it gets the IDs of all instances.
    -|`--name`|The name of a Pulsar Function. -|`--namespace`|The namespace of a Pulsar Function. -|`--tenant`|The tenant of a Pulsar Function. - -**Example** - -```bash - -$ ./bin/pulsar-admin functions stats \ - --tenant public \ - --namespace default \ - --name ExclamationFunctio6 \ - -``` - -The output is shown as follows: - -```json - -{ - "receivedTotal" : 1, - "processedSuccessfullyTotal" : 1, - "systemExceptionsTotal" : 0, - "userExceptionsTotal" : 0, - "avgProcessLatency" : 0.8385, - "1min" : { - "receivedTotal" : 0, - "processedSuccessfullyTotal" : 0, - "systemExceptionsTotal" : 0, - "userExceptionsTotal" : 0, - "avgProcessLatency" : null - }, - "lastInvocation" : 1557734137987, - "instances" : [ { - "instanceId" : 0, - "metrics" : { - "receivedTotal" : 1, - "processedSuccessfullyTotal" : 1, - "systemExceptionsTotal" : 0, - "userExceptionsTotal" : 0, - "avgProcessLatency" : 0.8385, - "1min" : { - "receivedTotal" : 0, - "processedSuccessfullyTotal" : 0, - "systemExceptionsTotal" : 0, - "userExceptionsTotal" : 0, - "avgProcessLatency" : null - }, - "lastInvocation" : 1557734137987, - "userMetrics" : { } - } - } ] -} - -``` - -### `list` - -List all Pulsar Functions running under a specific tenant and namespace. - -**Usage** - -```bash - -$ pulsar-admin functions list options - -``` - -**Options** - -|Flag|Description -|---|--- -|`--namespace`|The namespace of a Pulsar Function. -|`--tenant`|The tenant of a Pulsar Function. - -**Example** - -```bash - -$ ./bin/pulsar-admin functions list \ - --tenant public \ - --namespace default - -``` - -As shown below, the `list` command returns three functions running under the _public_ tenant and the _default_ namespace. - -```text - -ExclamationFunctio1 -ExclamationFunctio2 -ExclamationFunctio3 - -``` - -### `trigger` - -Trigger a specified Pulsar Function with a supplied value. This command simulates the execution process of a Pulsar Function and verifies it. - -**Usage** - -```bash - -$ pulsar-admin functions trigger options - -``` - -**Options** - -|Flag|Description -|---|--- -|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function. -|`--name`|The name of a Pulsar Function. -|`--namespace`|The namespace of a Pulsar Function. -|`--tenant`|The tenant of a Pulsar Function. -|`--topic`|The topic name that a Pulsar Function consumes from. -|`--trigger-file`|The path to a file that contains the data to trigger a Pulsar Function. -|`--trigger-value`|The value to trigger a Pulsar Function. - -**Example** - -```bash - -$ ./bin/pulsar-admin functions trigger \ - --tenant public \ - --namespace default \ - --name ExclamationFunctio6 \ - --topic persistent://public/default/my-topic-1 \ - --trigger-value "hello pulsar functions" - -``` - -As shown below, the `trigger` command returns the following result: - -```text - -This is my function! - -``` - -:::note - -You must specify the [entire topic name](getting-started-pulsar.md#topic-names) when using the `--topic` option. Otherwise, the following error occurs. 
- -```text - -Function in trigger function has unidentified topic -Reason: Function in trigger function has unidentified topic - -``` - -::: - diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/functions-deploy.md b/site2/website/versioned_docs/version-2.8.0-deprecated/functions-deploy.md deleted file mode 100644 index d9496b3ed5b485..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/functions-deploy.md +++ /dev/null @@ -1,241 +0,0 @@ ---- -id: functions-deploy -title: Deploy Pulsar Functions -sidebar_label: "How-to: Deploy" -original_id: functions-deploy ---- - -## Requirements - -To deploy and manage Pulsar Functions, you need to have a Pulsar cluster running. There are several options for this: - -* You can run a [standalone cluster](getting-started-standalone.md) locally on your own machine. -* You can deploy a Pulsar cluster on [Kubernetes](deploy-kubernetes.md), [Amazon Web Services](deploy-aws.md), [bare metal](deploy-bare-metal.md), [DC/OS](https://dcos.io/), and more. - -If you run a non-[standalone](reference-terminology.md#standalone) cluster, you need to obtain the service URL for the cluster. How you obtain the service URL depends on how you deploy your Pulsar cluster. - -If you want to deploy and trigger Python user-defined functions, you need to install [the pulsar python client](http://pulsar.apache.org/docs/en/client-libraries-python/) on all the machines running [functions workers](functions-worker.md). - -## Command-line interface - -Pulsar Functions are deployed and managed using the [`pulsar-admin functions`](reference-pulsar-admin.md#functions) interface, which contains commands such as [`create`](reference-pulsar-admin.md#functions-create) for deploying functions in [cluster mode](#cluster-mode), [`trigger`](reference-pulsar-admin.md#trigger) for [triggering](#triggering-pulsar-functions) functions, [`list`](reference-pulsar-admin.md#list-2) for listing deployed functions. - -To learn more commands, refer to [`pulsar-admin functions`](reference-pulsar-admin.md#functions). - -### Default arguments - -When managing Pulsar Functions, you need to specify a variety of information about functions, including tenant, namespace, input and output topics, and so on. However, some parameters have default values if you do not specify values for them. The following table lists the default values. - -Parameter | Default -:---------|:------- -Function name | You can specify any value for the class name (except org, library, or similar class names). For example, when you specify the flag `--classname org.example.MyFunction`, the function name is `MyFunction`. -Tenant | Derived from names of the input topics. If the input topics are under the `marketing` tenant, which means the topic names have the form `persistent://marketing/{namespace}/{topicName}`, the tenant is `marketing`. -Namespace | Derived from names of the input topics. If the input topics are under the `asia` namespace under the `marketing` tenant, which means the topic names have the form `persistent://marketing/asia/{topicName}`, then the namespace is `asia`. -Output topic | `{input topic}-{function name}-output`. For example, if an input topic name of a function is `incoming`, and the function name is `exclamation`, then the name of the output topic is `incoming-exclamation-output`. 
-Subscription type | For `at-least-once` and `at-most-once` [processing guarantees](functions-overview.md#processing-guarantees), the [`SHARED`](concepts-messaging.md#shared) mode is applied by default; for `effectively-once` guarantees, the [`FAILOVER`](concepts-messaging.md#failover) mode is applied. -Processing guarantees | [`ATLEAST_ONCE`](functions-overview.md#processing-guarantees) -Pulsar service URL | `pulsar://localhost:6650` - -### Example of default arguments - -Take the `create` command as an example. - -```bash - -$ bin/pulsar-admin functions create \ - --jar my-pulsar-functions.jar \ - --classname org.example.MyFunction \ - --inputs my-function-input-topic1,my-function-input-topic2 - -``` - -The function has default values for the function name (`MyFunction`), tenant (`public`), namespace (`default`), subscription type (`SHARED`), processing guarantees (`ATLEAST_ONCE`), and Pulsar service URL (`pulsar://localhost:6650`). - -## Local run mode - -If you run a Pulsar Function in **local run** mode, it runs on the machine from which you enter the commands (on your laptop, an [AWS EC2](https://aws.amazon.com/ec2/) instance, and so on). The following is a [`localrun`](reference-pulsar-admin.md#localrun) command example. - -```bash - -$ bin/pulsar-admin functions localrun \ - --py myfunc.py \ - --classname myfunc.SomeFunction \ - --inputs persistent://public/default/input-1 \ - --output persistent://public/default/output-1 - -``` - -By default, the function connects to a Pulsar cluster running on the same machine, via a local [broker](reference-terminology.md#broker) service URL of `pulsar://localhost:6650`. If you use local run mode to run a function but connect it to a non-local Pulsar cluster, you can specify a different broker URL using the `--brokerServiceUrl` flag. The following is an example. - -```bash - -$ bin/pulsar-admin functions localrun \ - --broker-service-url pulsar://my-cluster-host:6650 \ - # Other function parameters - -``` - -## Cluster mode - -When you run a Pulsar Function in **cluster** mode, the function code is uploaded to a Pulsar broker and runs *alongside the broker* rather than in your [local environment](#local-run-mode). You can run a function in cluster mode using the [`create`](reference-pulsar-admin.md#create-1) command. - -```bash - -$ bin/pulsar-admin functions create \ - --py myfunc.py \ - --classname myfunc.SomeFunction \ - --inputs persistent://public/default/input-1 \ - --output persistent://public/default/output-1 - -``` - -### Update functions in cluster mode - -You can use the [`update`](reference-pulsar-admin.md#update-1) command to update a Pulsar Function running in cluster mode. The following command updates the function created in the [cluster mode](#cluster-mode) section. - -```bash - -$ bin/pulsar-admin functions update \ - --py myfunc.py \ - --classname myfunc.SomeFunction \ - --inputs persistent://public/default/new-input-topic \ - --output persistent://public/default/new-output-topic - -``` - -### Parallelism - -Pulsar Functions run as processes or threads, which are called **instances**. When you run a Pulsar Function, it runs as a single instance by default. With one localrun command, you can only run a single instance of a function. If you want to run multiple instances, you can use localrun command multiple times. - -When you create a function, you can specify the *parallelism* of a function (the number of instances to run). 
You can set the parallelism factor using the `--parallelism` flag of the [`create`](reference-pulsar-admin.md#functions-create) command.
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --parallelism 3 \
-  # Other function info
-
-```
-
-You can adjust the parallelism of an already created function using the [`update`](reference-pulsar-admin.md#update-1) interface.
-
-```bash
-
-$ bin/pulsar-admin functions update \
-  --parallelism 5 \
-  # Other function info
-
-```
-
-If you specify a function configuration via YAML, use the `parallelism` parameter. The following is a config file example.
-
-```yaml
-
-# function-config.yaml
-parallelism: 3
-inputs:
-- persistent://public/default/input-1
-output: persistent://public/default/output-1
-# other parameters
-
-```
-
-The following is the corresponding update command.
-
-```bash
-
-$ bin/pulsar-admin functions update \
-  --function-config-file function-config.yaml
-
-```
-
-### Function instance resources
-
-When you run Pulsar Functions in [cluster mode](#cluster-mode), you can specify the resources that are assigned to each function [instance](#parallelism).
-
-Resource | Specified as | Runtimes
-:--------|:----------------|:--------
-CPU | The number of cores | Kubernetes
-RAM | The number of bytes | Process, Docker
-Disk space | The number of bytes | Docker
-
-The following function creation command allocates 8 cores, 8 GB of RAM, and 10 GB of disk space to a function.
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --jar target/my-functions.jar \
-  --classname org.example.functions.MyFunction \
-  --cpu 8 \
-  --ram 8589934592 \
-  --disk 10737418240
-
-```
-
-> #### Resources are *per instance*
-> The resources that you apply to a given Pulsar Function are applied to each instance of the function. For example, if you apply 8 GB of RAM to a function with a parallelism of 5, you are applying 40 GB of RAM for the function in total. Make sure that you take the parallelism (the number of instances) factor into your resource calculations.
-
-## Trigger Pulsar Functions
-
-If a Pulsar Function is running in [cluster mode](#cluster-mode), you can **trigger** it at any time using the command line. Triggering a function means that you send a message with a specific value to the function and get the function output (if any) via the command line.
-
-> Triggering a function means invoking it by producing a message on one of its input topics. With the [`pulsar-admin functions trigger`](reference-pulsar-admin.md#trigger) command, you can send messages to functions without using the [`pulsar-client`](reference-cli-tools.md#pulsar-client) tool or a language-specific client library.
-
-To learn how to trigger a function, you can start with a Python function that returns a simple string based on the input.
-
-```python
-
-# myfunc.py
-def process(input):
-    return "This function has been triggered with a value of {0}".format(input)
-
-```
-
-Deploy the function in [cluster mode](#cluster-mode) so that it can be triggered.
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --tenant public \
-  --namespace default \
-  --name myfunc \
-  --py myfunc.py \
-  --classname myfunc \
-  --inputs persistent://public/default/in \
-  --output persistent://public/default/out
-
-```
-
-Then assign a consumer to listen on the output topic for messages from the `myfunc` function with the [`pulsar-client consume`](reference-cli-tools.md#consume) command.
-
-```bash
-
-$ bin/pulsar-client consume persistent://public/default/out \
-  --subscription-name my-subscription \
-  --num-messages 0 # Listen indefinitely
-
-```
-
-And then you can trigger the function.
-
-```bash
-
-$ bin/pulsar-admin functions trigger \
-  --tenant public \
-  --namespace default \
-  --name myfunc \
-  --trigger-value "hello world"
-
-```
-
-The consumer listening on the output topic prints something like the following in its log.
-
-```
-
------ got message -----
-This function has been triggered with a value of hello world
-
-```
-
-> #### Topic info is not required
-> In the `trigger` command, you only need to specify basic information about the function (tenant, namespace, and name). To trigger the function, you do not need to know the function input topics.
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/functions-develop.md b/site2/website/versioned_docs/version-2.8.0-deprecated/functions-develop.md
deleted file mode 100644
index 2e29aa1c474005..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/functions-develop.md
+++ /dev/null
@@ -1,1600 +0,0 @@
----
-id: functions-develop
-title: Develop Pulsar Functions
-sidebar_label: "How-to: Develop"
-original_id: functions-develop
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-In this guide, you learn how to develop Pulsar Functions with different APIs for Java, Python, and Go.
-
-## Available APIs
-In Java and Python, you have two options to write Pulsar Functions. In Go, you can use the Pulsar Functions SDK for Go.
-
-Interface | Description | Use cases
-:---------|:------------|:---------
-Language-native interface | No Pulsar-specific libraries or special dependencies required (only core libraries from Java/Python). | Functions that do not require access to the function [context](#context).
-Pulsar Function SDK for Java/Python/Go | Pulsar-specific libraries that provide a range of functionality not provided by "native" interfaces. | Functions that require access to the function [context](#context).
-
-The language-native function, which adds an exclamation point to all incoming strings and publishes the resulting string to a topic, has no external dependencies. The following example is a language-native function.
-
-````mdx-code-block
-
-
-```Java
-
-import java.util.function.Function;
-
-public class JavaNativeExclamationFunction implements Function<String, String> {
-    @Override
-    public String apply(String input) {
-        return String.format("%s!", input);
-    }
-}
-
-```
-
-For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/JavaNativeExclamationFunction.java).
-
-
-
-
-```python
-
-def process(input):
-    return "{}!".format(input)
-
-```
-
-For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/python-examples/native_exclamation_function.py).
-
-:::note
-
-You can write Pulsar Functions in Python 2 or Python 3. However, Pulsar only looks for `python` as the interpreter.
-If you're running Pulsar Functions on an Ubuntu system that only provides `python3`, the functions might fail to
-start. In this case, you can create a symlink, but be aware that your system may break if
-you subsequently install any other package that depends on Python 2.x. A solution is under development in [Issue 5518](https://github.com/apache/pulsar/issues/5518).
-
-```bash
-
-sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 10
-
-```
-
-:::
-
-
-
-
-````
-
-The following example uses the Pulsar Functions SDK.
-````mdx-code-block
-
-
-```Java
-
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-
-public class ExclamationFunction implements Function<String, String> {
-    @Override
-    public String process(String input, Context context) {
-        return String.format("%s!", input);
-    }
-}
-
-```
-
-For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/ExclamationFunction.java).
-
-
-
-
-```python
-
-from pulsar import Function
-
-class ExclamationFunction(Function):
-  def __init__(self):
-    pass
-
-  def process(self, input, context):
-    return input + '!'
-
-```
-
-For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/python-examples/exclamation_function.py).
-
-
-
-
-```Go
-
-package main
-
-import (
-    "context"
-    "fmt"
-
-    "github.com/apache/pulsar/pulsar-function-go/pf"
-)
-
-func HandleRequest(ctx context.Context, in []byte) error {
-    fmt.Println(string(in) + "!")
-    return nil
-}
-
-func main() {
-    pf.Start(HandleRequest)
-}
-
-```
-
-For complete code, see [here](https://github.com/apache/pulsar/blob/77cf09eafa4f1626a53a1fe2e65dd25f377c1127/pulsar-function-go/examples/inputFunc/inputFunc.go#L20-L36).
-
-
-
-
-````
-
-## Schema registry
-Pulsar has a built-in schema registry and is bundled with popular schema types, such as Avro, JSON, and Protobuf. Pulsar Functions can leverage the existing schema information from input topics to derive the input type. The schema registry applies to output topics as well.
-
-## SerDe
-SerDe stands for **Ser**ialization and **De**serialization. Pulsar Functions use SerDe when publishing data to and consuming data from Pulsar topics. How SerDe works by default depends on the language you use for a particular function.
-
-````mdx-code-block
-
-
-When you write Pulsar Functions in Java, the following basic Java types are built in and supported by default: `String`, `Double`, `Integer`, `Float`, `Long`, `Short`, and `Byte`.
-
-To customize Java types, you need to implement the following interface.
-
-```java
-
-public interface SerDe<T> {
-    T deserialize(byte[] input);
-    byte[] serialize(T input);
-}
-
-```
-
-SerDe works in the following ways in Java Functions.
-- If the input and output topics have a schema, Pulsar Functions use the schema for SerDe.
-- If the input or output topics do not have a schema, Pulsar Functions adopt the following rules to determine SerDe:
-  - If the schema type is specified, Pulsar Functions use the specified schema type.
-  - If SerDe is specified, Pulsar Functions use the specified SerDe, and the schema type for input and output topics is `Byte`.
-  - If neither the schema type nor SerDe is specified, Pulsar Functions use the built-in SerDe. For a non-primitive schema type, the built-in SerDe serializes and deserializes objects in the `JSON` format.
-
-
-
-
-In Python, the default SerDe is identity, meaning that the type is serialized as whatever type the producer function returns.
-
-You can specify the SerDe when [creating](functions-deploy.md#cluster-mode) or [running](functions-deploy.md#local-run-mode) functions.
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --tenant public \
-  --namespace default \
-  --name my_function \
-  --py my_function.py \
-  --classname my_function.MyFunction \
-  --custom-serde-inputs '{"input-topic-1":"Serde1","input-topic-2":"Serde2"}' \
-  --output-serde-classname Serde3 \
-  --output output-topic-1
-
-```
-
-This case contains two input topics: `input-topic-1` and `input-topic-2`, each of which is mapped to a different SerDe class (the map must be specified as a JSON string). The output topic, `output-topic-1`, uses the `Serde3` class for SerDe. At the moment, all Pulsar Functions logic, including the processing function and SerDe classes, must be contained within a single Python file.
-
-When using Pulsar Functions for Python, you have three SerDe options:
-
-1. You can use the [`IdentitySerde`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L70), which leaves the data unchanged. The `IdentitySerde` is the **default**. Creating or running a function without explicitly specifying SerDe means that this option is used.
-2. You can use the [`PickleSerDe`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L62), which uses Python [`pickle`](https://docs.python.org/3/library/pickle.html) for SerDe.
-3. You can create a custom SerDe class by implementing the baseline [`SerDe`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L50) class, which has just two methods: [`serialize`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L53) for converting the object into bytes, and [`deserialize`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L58) for converting bytes into an object of the required application-specific type.
-
-The table below shows when you should use each SerDe.
-
-SerDe option | When to use
-:------------|:-----------
-`IdentitySerde` | When you work with simple types like strings, Booleans, and integers.
-`PickleSerDe` | When you work with complex, application-specific types and are comfortable with the "best effort" approach of `pickle`.
-Custom SerDe | When you require explicit control over SerDe, potentially for performance or data compatibility purposes.
-
-
-
-
-Currently, the feature is not available in Go.
-
-
-
-
-````
-
-### Example
-Imagine that you're writing Pulsar Functions that process tweet objects. You can refer to the following example of a `Tweet` class.
-
-````mdx-code-block
-
-
-```java
-
-public class Tweet {
-    private String username;
-    private String tweetContent;
-
-    public Tweet(String username, String tweetContent) {
-        this.username = username;
-        this.tweetContent = tweetContent;
-    }
-
-    // Standard setters and getters
-}
-
-```
-
-To pass `Tweet` objects directly between Pulsar Functions, you need to provide a custom SerDe class. In the example below, `Tweet` objects are basically strings in which the username and tweet content are separated by a `|`.
-
-```java
-
-package com.example.serde;
-
-import org.apache.pulsar.functions.api.SerDe;
-
-import java.util.regex.Pattern;
-
-public class TweetSerde implements SerDe<Tweet> {
-    public Tweet deserialize(byte[] input) {
-        String s = new String(input);
-        String[] fields = s.split(Pattern.quote("|"));
-        return new Tweet(fields[0], fields[1]);
-    }
-
-    public byte[] serialize(Tweet input) {
-        return String.format("%s|%s", input.getUsername(), input.getTweetContent()).getBytes();
-    }
-}
-
-```
-
-To apply this customized SerDe to a particular Pulsar Function, you need to:
-
-* Package the `Tweet` and `TweetSerde` classes into a JAR.
-* Specify a path to the JAR and SerDe class name when deploying the function.
-
-The following is an example of the [`create`](reference-pulsar-admin.md#create-1) operation.
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --jar /path/to/your.jar \
-  --output-serde-classname com.example.serde.TweetSerde \
-  # Other function attributes
-
-```
-
-> #### Custom SerDe classes must be packaged with your function JARs
-> Pulsar does not store your custom SerDe classes separately from your Pulsar Functions. So you need to include your SerDe classes in your function JARs. If not, Pulsar returns an error.
-
-
-
-
-```python
-
-class Tweet(object):
-    def __init__(self, username, tweet_content):
-        self.username = username
-        self.tweet_content = tweet_content
-
-```
-
-In order to use this class in Pulsar Functions, you have two options:
-
-1. You can specify `PickleSerDe`, which applies the [`pickle`](https://docs.python.org/3/library/pickle.html) library SerDe.
-2. You can create your own SerDe class. The following is an example.
-
-   ```python
-
-   from pulsar import SerDe
-
-   class TweetSerDe(SerDe):
-
-      def serialize(self, input):
-         return "{0}|{1}".format(input.username, input.tweet_content).encode('utf-8')
-
-      def deserialize(self, input_bytes):
-         tweet_components = input_bytes.decode('utf-8').split('|')
-         return Tweet(tweet_components[0], tweet_components[1])
-
-   ```
-
-For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/python-examples/custom_object_function.py).
-
-
-
-
-````
-
-In both languages, however, you can write custom SerDe logic for more complex, application-specific types.
-
-## Context
-Java, Python and Go SDKs provide access to a **context object** that can be used by a function. This context object provides a wide variety of information and functionality to the function.
-
-* The name and ID of a Pulsar Function.
-* The message ID of each message. Each Pulsar message is automatically assigned with an ID.
-* The key, event time, properties and partition key of each message.
-* The name of the topic to which the message is sent.
-* The names of all input topics as well as the output topic associated with the function.
-* The name of the class used for [SerDe](#serde).
-* The [tenant](reference-terminology.md#tenant) and namespace associated with the function.
-* The ID of the Pulsar Functions instance running the function.
-* The version of the function.
-* The [logger object](functions-develop.md#logger) used by the function, which can be used to create function log messages.
-* Access to arbitrary [user configuration](#user-config) values supplied via the CLI.
-* An interface for recording [metrics](#metrics).
-* An interface for storing and retrieving state in [state storage](#state-storage).
-* A function to publish new messages onto arbitrary topics.
-* A function to ack the message being processed (if auto-ack is disabled).
-* (Java) A Pulsar admin client that can be used to make admin calls to Pulsar clusters.
-
-````mdx-code-block
-
-
-The [Context](https://github.com/apache/pulsar/blob/master/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/Context.java) interface provides a number of methods that you can use to access the function [context](#context). The various method signatures for the `Context` interface are listed as follows.
-
-```java
-
-public interface Context {
-    Record<?> getCurrentRecord();
-    Collection<String> getInputTopics();
-    String getOutputTopic();
-    String getOutputSchemaType();
-    String getTenant();
-    String getNamespace();
-    String getFunctionName();
-    String getFunctionId();
-    String getInstanceId();
-    String getFunctionVersion();
-    Logger getLogger();
-    void incrCounter(String key, long amount);
-    CompletableFuture<Void> incrCounterAsync(String key, long amount);
-    long getCounter(String key);
-    CompletableFuture<Long> getCounterAsync(String key);
-    void putState(String key, ByteBuffer value);
-    CompletableFuture<Void> putStateAsync(String key, ByteBuffer value);
-    void deleteState(String key);
-    ByteBuffer getState(String key);
-    CompletableFuture<ByteBuffer> getStateAsync(String key);
-    Map<String, Object> getUserConfigMap();
-    Optional<Object> getUserConfigValue(String key);
-    Object getUserConfigValueOrDefault(String key, Object defaultValue);
-    void recordMetric(String metricName, double value);
-    <O> CompletableFuture<Void> publish(String topicName, O object, String schemaOrSerdeClassName);
-    <O> CompletableFuture<Void> publish(String topicName, O object);
-    <O> TypedMessageBuilder<O> newOutputMessage(String topicName, Schema<O> schema) throws PulsarClientException;
-    <O> ConsumerBuilder<O> newConsumerBuilder(Schema<O> schema) throws PulsarClientException;
-    PulsarAdmin getPulsarAdmin();
-    PulsarAdmin getPulsarAdmin(String clusterName);
-}
-
-```
-
-The following example uses several methods available via the `Context` object.
-
-```java
-
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-import org.slf4j.Logger;
-
-import java.util.stream.Collectors;
-
-public class ContextFunction implements Function<String, Void> {
-    public Void process(String input, Context context) {
-        Logger LOG = context.getLogger();
-        String inputTopics = context.getInputTopics().stream().collect(Collectors.joining(", "));
-        String functionName = context.getFunctionName();
-
-        String logMessage = String.format("A message with a value of \"%s\" has arrived on one of the following topics: %s\n",
-                input,
-                inputTopics);
-
-        LOG.info(logMessage);
-
-        String metricName = String.format("function-%s-messages-received", functionName);
-        context.recordMetric(metricName, 1);
-
-        return null;
-    }
-}
-
-```
-
-
-
-
-```python
-
-class ContextImpl(pulsar.Context):
-  def get_message_id(self):
-    ...
-  def get_message_key(self):
-    ...
-  def get_message_eventtime(self):
-    ...
-  def get_message_properties(self):
-    ...
-  def get_current_message_topic_name(self):
-    ...
-  def get_partition_key(self):
-    ...
-  def get_function_name(self):
-    ...
-  def get_function_tenant(self):
-    ...
-  def get_function_namespace(self):
-    ...
-  def get_function_id(self):
-    ...
-  def get_instance_id(self):
-    ...
-  def get_function_version(self):
-    ...
-  def get_logger(self):
-    ...
-  def get_user_config_value(self, key):
-    ...
-  def get_user_config_map(self):
-    ...
-  def record_metric(self, metric_name, metric_value):
-    ...
-  def get_input_topics(self):
-    ...
-  def get_output_topic(self):
-    ...
-  def get_output_serde_class_name(self):
-    ...
- def publish(self, topic_name, message, serde_class_name="serde.IdentitySerDe", - properties=None, compression_type=None, callback=None, message_conf=None): - ... - def ack(self, msgid, topic): - ... - def get_and_reset_metrics(self): - ... - def reset_metrics(self): - ... - def get_metrics(self): - ... - def incr_counter(self, key, amount): - ... - def get_counter(self, key): - ... - def del_counter(self, key): - ... - def put_state(self, key, value): - ... - def get_state(self, key): - ... - -``` - - - - -``` - -func (c *FunctionContext) GetInstanceID() int { - return c.instanceConf.instanceID -} - -func (c *FunctionContext) GetInputTopics() []string { - return c.inputTopics -} - -func (c *FunctionContext) GetOutputTopic() string { - return c.instanceConf.funcDetails.GetSink().Topic -} - -func (c *FunctionContext) GetFuncTenant() string { - return c.instanceConf.funcDetails.Tenant -} - -func (c *FunctionContext) GetFuncName() string { - return c.instanceConf.funcDetails.Name -} - -func (c *FunctionContext) GetFuncNamespace() string { - return c.instanceConf.funcDetails.Namespace -} - -func (c *FunctionContext) GetFuncID() string { - return c.instanceConf.funcID -} - -func (c *FunctionContext) GetFuncVersion() string { - return c.instanceConf.funcVersion -} - -func (c *FunctionContext) GetUserConfValue(key string) interface{} { - return c.userConfigs[key] -} - -func (c *FunctionContext) GetUserConfMap() map[string]interface{} { - return c.userConfigs -} - -func (c *FunctionContext) SetCurrentRecord(record pulsar.Message) { - c.record = record -} - -func (c *FunctionContext) GetCurrentRecord() pulsar.Message { - return c.record -} - -func (c *FunctionContext) NewOutputMessage(topic string) pulsar.Producer { - return c.outputMessage(topic) -} - -``` - -The following example uses several methods available via the `Context` object. - -``` - -import ( - "context" - "fmt" - - "github.com/apache/pulsar/pulsar-function-go/pf" -) - -func contextFunc(ctx context.Context) { - if fc, ok := pf.FromContext(ctx); ok { - fmt.Printf("function ID is:%s, ", fc.GetFuncID()) - fmt.Printf("function version is:%s\n", fc.GetFuncVersion()) - } -} - -``` - -For complete code, see [here](https://github.com/apache/pulsar/blob/77cf09eafa4f1626a53a1fe2e65dd25f377c1127/pulsar-function-go/examples/contextFunc/contextFunc.go#L29-L34). - - - - -```` - -### User config -When you run or update Pulsar Functions created using SDK, you can pass arbitrary key/values to them with the command line with the `--user-config` flag. Key/values must be specified as JSON. The following function creation command passes a user configured key/value to a function. - -```bash - -$ bin/pulsar-admin functions create \ - --name word-filter \ - # Other function configs - --user-config '{"forbidden-word":"rosebud"}' - -``` - -````mdx-code-block - - - -The Java SDK [`Context`](#context) object enables you to access key/value pairs provided to Pulsar Functions via the command line (as JSON). The following example passes a key/value pair. 
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  # Other function configs
-  --user-config '{"word-of-the-day":"verdure"}'
-
-```
-
-To access that value in a Java function:
-
-```java
-
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-import org.slf4j.Logger;
-
-import java.util.Optional;
-
-public class UserConfigFunction implements Function<String, Void> {
-    @Override
-    public Void process(String input, Context context) {
-        Logger LOG = context.getLogger();
-        Optional<Object> wotd = context.getUserConfigValue("word-of-the-day");
-        if (wotd.isPresent()) {
-            LOG.info("The word of the day is {}", wotd.get());
-        } else {
-            LOG.warn("No word of the day provided");
-        }
-        return null;
-    }
-}
-
-```
-
-The `UserConfigFunction` function will log the string `"The word of the day is verdure"` every time the function is invoked (which means every time a message arrives). The `word-of-the-day` user config will be changed only when the function is updated with a new config value via the command line.
-
-You can also access the entire user config map or set a default value in case no value is present:
-
-```java
-
-// Get the whole config map
-Map<String, Object> allConfigs = context.getUserConfigMap();
-
-// Get value or resort to default
-String wotd = (String) context.getUserConfigValueOrDefault("word-of-the-day", "perspicacious");
-
-```
-
-> For all key/value pairs passed to Java functions, both the key *and* the value are `String`. To set the value to be a different type, you need to deserialize from the `String` type.
-
-
-
-
-In a Python function, you can access the configuration value like this.
-
-```python
-
-from pulsar import Function
-
-class WordFilter(Function):
-    def process(self, input, context):
-        forbidden_word = context.get_user_config_value("forbidden-word")
-
-        # Don't publish the message if it contains the user-supplied
-        # forbidden word
-        if forbidden_word in input:
-            pass
-        # Otherwise publish the message
-        else:
-            return input
-
-```
-
-The Python SDK [`Context`](#context) object enables you to access key/value pairs provided to Pulsar Functions via the command line (as JSON). The following example passes a key/value pair.
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  # Other function configs \
-  --user-config '{"word-of-the-day":"verdure"}'
-
-```
-
-To access that value in a Python function:
-
-```python
-
-from pulsar import Function
-
-class UserConfigFunction(Function):
-    def process(self, input, context):
-        logger = context.get_logger()
-        wotd = context.get_user_config_value('word-of-the-day')
-        if wotd is None:
-            logger.warn('No word of the day provided')
-        else:
-            logger.info("The word of the day is {0}".format(wotd))
-
-```
-
-
-
-
-The Go SDK [`Context`](#context) object enables you to access key/value pairs provided to Pulsar Functions via the command line (as JSON). The following example passes a key/value pair.
- -```bash - -$ bin/pulsar-admin functions create \ - --go path/to/go/binary - --user-config '{"word-of-the-day":"lackadaisical"}' - -``` - -To access that value in a Go function: - -```go - -func contextFunc(ctx context.Context) { - fc, ok := pf.FromContext(ctx) - if !ok { - logutil.Fatal("Function context is not defined") - } - - wotd := fc.GetUserConfValue("word-of-the-day") - - if wotd == nil { - logutil.Warn("The word of the day is empty") - } else { - logutil.Infof("The word of the day is %s", wotd.(string)) - } -} - -``` - - - - -```` - -### Logger - -````mdx-code-block - - - -Pulsar Functions that use the Java SDK have access to an [SLF4j](https://www.slf4j.org/) [`Logger`](https://www.slf4j.org/api/org/apache/log4j/Logger.html) object that can be used to produce logs at the chosen log level. The following example logs either a `WARNING`- or `INFO`-level log based on whether the incoming string contains the word `danger`. - -```java - -import org.apache.pulsar.functions.api.Context; -import org.apache.pulsar.functions.api.Function; -import org.slf4j.Logger; - -public class LoggingFunction implements Function { - @Override - public void apply(String input, Context context) { - Logger LOG = context.getLogger(); - String messageId = new String(context.getMessageId()); - - if (input.contains("danger")) { - LOG.warn("A warning was received in message {}", messageId); - } else { - LOG.info("Message {} received\nContent: {}", messageId, input); - } - - return null; - } -} - -``` - -If you want your function to produce logs, you need to specify a log topic when creating or running the function. The following is an example. - -```bash - -$ bin/pulsar-admin functions create \ - --jar my-functions.jar \ - --classname my.package.LoggingFunction \ - --log-topic persistent://public/default/logging-function-logs \ - # Other function configs - -``` - -All logs produced by `LoggingFunction` above can be accessed via the `persistent://public/default/logging-function-logs` topic. - -#### Customize Function log level -Additionally, you can use the XML file, `functions_log4j2.xml`, to customize the function log level. -To customize the function log level, create or update `functions_log4j2.xml` in your Pulsar conf directory (for example, `/etc/pulsar/` on bare-metal, or `/pulsar/conf` on Kubernetes) to contain contents such as: - -```xml - - - pulsar-functions-instance - 30 - - - pulsar.log.appender - RollingFile - - - pulsar.log.level - debug - - - bk.log.level - debug - - - - - Console - SYSTEM_OUT - - %d{ISO8601_OFFSET_DATE_TIME_HHMM} [%t] %-5level %logger{36} - %msg%n - - - - RollingFile - ${sys:pulsar.function.log.dir}/${sys:pulsar.function.log.file}.log - ${sys:pulsar.function.log.dir}/${sys:pulsar.function.log.file}-%d{MM-dd-yyyy}-%i.log.gz - true - - %d{ISO8601_OFFSET_DATE_TIME_HHMM} [%t] %-5level %logger{36} - %msg%n - - - - 1 - true - - - 1 GB - - - 0 0 0 * * ? - - - - - ${sys:pulsar.function.log.dir} - 2 - - */${sys:pulsar.function.log.file}*log.gz - - - 30d - - - - - - BkRollingFile - ${sys:pulsar.function.log.dir}/${sys:pulsar.function.log.file}.bk - ${sys:pulsar.function.log.dir}/${sys:pulsar.function.log.file}.bk-%d{MM-dd-yyyy}-%i.log.gz - true - - %d{ISO8601_OFFSET_DATE_TIME_HHMM} [%t] %-5level %logger{36} - %msg%n - - - - 1 - true - - - 1 GB - - - 0 0 0 * * ? 
- - - - - ${sys:pulsar.function.log.dir} - 2 - - */${sys:pulsar.function.log.file}.bk*log.gz - - - 30d - - - - - - - - org.apache.pulsar.functions.runtime.shaded.org.apache.bookkeeper - ${sys:bk.log.level} - false - - BkRollingFile - - - - ${sys:pulsar.log.level} - - ${sys:pulsar.log.appender} - ${sys:pulsar.log.level} - - - - - -``` - -The properties set like: - -```xml - - - pulsar.log.level - debug - - -``` - -propagate to places where they are referenced, such as: - -```xml - - - ${sys:pulsar.log.level} - - ${sys:pulsar.log.appender} - ${sys:pulsar.log.level} - - - -``` - -In the above example, debug level logging would be applied to ALL function logs. -This may be more verbose than you desire. To be more selective, you can apply different log levels to different classes or modules. For example: - -```xml - - - com.example.module - info - false - - ${sys:pulsar.log.appender} - - - -``` - -You can be more specific as well, such as applying a more verbose log level to a class in the module, such as: - -```xml - - - com.example.module.className - debug - false - - Console - - - -``` - -Each `` entry allows you to output the log to a target specified in the definition of the Appender. - -Additivity pertains to whether log messages will be duplicated if multiple Logger entries overlap. -To disable additivity, specify - -```xml - -false - -``` - -as shown in examples above. Disabling additivity prevents duplication of log messages when one or more `` entries contain classes or modules that overlap. - -The `` is defined in the `` section, such as: - -```xml - - - Console - SYSTEM_OUT - - %d{ISO8601_OFFSET_DATE_TIME_HHMM} [%t] %-5level %logger{36} - %msg%n - - - -``` - - - - -Pulsar Functions that use the Python SDK have access to a logging object that can be used to produce logs at the chosen log level. The following example function that logs either a `WARNING`- or `INFO`-level log based on whether the incoming string contains the word `danger`. - -```python - -from pulsar import Function - -class LoggingFunction(Function): - def process(self, input, context): - logger = context.get_logger() - msg_id = context.get_message_id() - if 'danger' in input: - logger.warn("A warning was received in message {0}".format(context.get_message_id())) - else: - logger.info("Message {0} received\nContent: {1}".format(msg_id, input)) - -``` - -If you want your function to produce logs on a Pulsar topic, you need to specify a **log topic** when creating or running the function. The following is an example. - -```bash - -$ bin/pulsar-admin functions create \ - --py logging_function.py \ - --classname logging_function.LoggingFunction \ - --log-topic logging-function-logs \ - # Other function configs - -``` - -All logs produced by `LoggingFunction` above can be accessed via the `logging-function-logs` topic. -Additionally, you can specify the function log level through the broker XML file as described in [Customize Function log level](#customize-function-log-level). - - - - -The following Go Function example shows different log levels based on the function input. - -``` - -import ( - "context" - - "github.com/apache/pulsar/pulsar-function-go/pf" - - log "github.com/apache/pulsar/pulsar-function-go/logutil" -) - -func loggerFunc(ctx context.Context, input []byte) { - if len(input) <= 100 { - log.Infof("This input has a length of: %d", len(input)) - } else { - log.Warnf("This input is getting too long! 
It has {%d} characters", len(input)) - } -} - -func main() { - pf.Start(loggerFunc) -} - -``` - -When you use `logTopic` related functionalities in Go Function, import `github.com/apache/pulsar/pulsar-function-go/logutil`, and you do not have to use the `getLogger()` context object. - -Additionally, you can specify the function log level through the broker XML file, as described here: [Customize Function log level](#customize-function-log-level) - - - - -```` - -### Pulsar admin - -Pulsar Functions using the Java SDK has access to the Pulsar admin client, which allows the Pulsar admin client to manage API calls to current Pulsar clusters or external clusters (if `external-pulsars` is provided). - -````mdx-code-block - - - -Below is an example of how to use the Pulsar admin client exposed from the Function `context`. - -``` - -import org.apache.pulsar.client.admin.PulsarAdmin; -import org.apache.pulsar.functions.api.Context; -import org.apache.pulsar.functions.api.Function; - -/** - * In this particular example, for every input message, - * the function resets the cursor of the current function's subscription to a - * specified timestamp. - */ -public class CursorManagementFunction implements Function { - - @Override - public String process(String input, Context context) throws Exception { - PulsarAdmin adminClient = context.getPulsarAdmin(); - if (adminClient != null) { - String topic = context.getCurrentRecord().getTopicName().isPresent() ? - context.getCurrentRecord().getTopicName().get() : null; - String subName = context.getTenant() + "/" + context.getNamespace() + "/" + context.getFunctionName(); - if (topic != null) { - // 1578188166 below is a random-pick timestamp - adminClient.topics().resetCursor(topic, subName, 1578188166); - return "reset cursor successfully"; - } - } - return null; - } -} - -``` - -If you want your function to get access to the Pulsar admin client, you need to enable this feature by setting `exposeAdminClientEnabled=true` in the `functions_worker.yml` file. You can test whether this feature is enabled or not using the command `pulsar-admin functions localrun` with the flag `--web-service-url`. - -``` - -$ bin/pulsar-admin functions localrun \ - --jar my-functions.jar \ - --classname my.package.CursorManagementFunction \ - --web-service-url http://pulsar-web-service:8080 \ - # Other function configs - -``` - - - - -```` - -## Metrics - -Pulsar Functions allows you to deploy and manage processing functions that consume messages from and publish messages to Pulsar topics easily. It is important to ensure that the running functions are healthy at any time. Pulsar Functions can publish arbitrary metrics to the metrics interface which can be queried. - -:::note - -If a Pulsar Function uses the language-native interface for Java or Python, that function is not able to publish metrics and stats to Pulsar. - -::: - -You can monitor Pulsar Functions that have been deployed with the following methods: - -- Check the metrics provided by Pulsar. - - Pulsar Functions expose the metrics that can be collected and used for monitoring the health of **Java, Python, and Go** functions. You can check the metrics by following the [monitoring](deploy-monitoring.md) guide. - - For the complete list of the function metrics, see [here](reference-metrics.md#pulsar-functions). - -- Set and check your customized metrics. - - In addition to the metrics provided by Pulsar, Pulsar allows you to customize metrics for **Java and Python** functions. 
Function workers collect user-defined metrics and publish them to Prometheus automatically, and you can check them in Grafana.
-
-Here are examples of how to customize metrics for Java and Python functions.
-
-````mdx-code-block
-
-
-You can record metrics using the [`Context`](#context) object on a per-key basis. For example, you can set a metric for the `hit-count` key and a different metric for the `elevens-count` key every time the function processes a message.
-
-```java
-
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-
-public class MetricRecorderFunction implements Function<Integer, Void> {
-    @Override
-    public Void process(Integer input, Context context) {
-        // Records the metric 1 every time a message arrives
-        context.recordMetric("hit-count", 1);
-
-        // Records the metric only if the arriving number equals 11
-        if (input == 11) {
-            context.recordMetric("elevens-count", 1);
-        }
-
-        return null;
-    }
-}
-
-```
-
-
-
-
-You can record metrics using the [`Context`](#context) object on a per-key basis. For example, you can set a metric for the `hit-count` key and a different metric for the `elevens-count` key every time the function processes a message. The following is an example.
-
-```python
-
-from pulsar import Function
-
-class MetricRecorderFunction(Function):
-    def process(self, input, context):
-        context.record_metric('hit-count', 1)
-
-        if input == 11:
-            context.record_metric('elevens-count', 1)
-
-```
-
-
-
-
-Currently, the feature is not available in Go.
-
-
-
-
-````
-
-## Security
-
-If you want to enable security on Pulsar Functions, you should first enable security on [Functions Workers](functions-worker.md). For more details, refer to [Security settings](functions-worker.md#security-settings).
-
-Pulsar Functions can support the following providers:
-
-- ClearTextSecretsProvider
-- EnvironmentBasedSecretsProvider
-
-> Pulsar Functions support ClearTextSecretsProvider by default.
-
-At the same time, Pulsar Functions provide two interfaces, **SecretsProvider** and **SecretsProviderConfigurator**, that allow users to customize secret providers.
-
-````mdx-code-block
-
-
-You can get a secret using the [`Context`](#context) object. The following is an example:
-
-```java
-
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-import org.slf4j.Logger;
-
-public class GetSecretProviderFunction implements Function<String, Void> {
-
-    @Override
-    public Void process(String input, Context context) throws Exception {
-        Logger LOG = context.getLogger();
-        String secretProvider = context.getSecret(input);
-
-        if (!secretProvider.isEmpty()) {
-            LOG.info("The secret provider is {}", secretProvider);
-        } else {
-            LOG.warn("No secret provider");
-        }
-
-        return null;
-    }
-}
-
-```
-
-
-
-
-You can get a secret using the [`Context`](#context) object. The following is an example:
-
-```python
-
-from pulsar import Function
-
-class GetSecretProviderFunction(Function):
-    def process(self, input, context):
-        logger = context.get_logger()
-        secret_provider = context.get_secret(input)
-        if secret_provider is None:
-            logger.warn('No secret provider')
-        else:
-            logger.info("The secret provider is {0}".format(secret_provider))
-
-```
-
-
-
-
-Currently, the feature is not available in Go.
-
-
-
-
-````
-
-## State storage
-Pulsar Functions use [Apache BookKeeper](https://bookkeeper.apache.org) as a state storage interface.
Pulsar installation, including the local standalone installation, includes deployment of BookKeeper bookies. - -Since Pulsar 2.1.0 release, Pulsar integrates with Apache BookKeeper [table service](https://docs.google.com/document/d/155xAwWv5IdOitHh1NVMEwCMGgB28M3FyMiQSxEpjE-Y/edit#heading=h.56rbh52koe3f) to store the `State` for functions. For example, a `WordCount` function can store its `counters` state into BookKeeper table service via Pulsar Functions State API. - -States are key-value pairs, where the key is a string and the value is arbitrary binary data - counters are stored as 64-bit big-endian binary values. Keys are scoped to an individual Pulsar Function, and shared between instances of that function. - -You can access states within Pulsar Java Functions using the `putState`, `putStateAsync`, `getState`, `getStateAsync`, `incrCounter`, `incrCounterAsync`, `getCounter`, `getCounterAsync` and `deleteState` calls on the context object. You can access states within Pulsar Python Functions using the `putState`, `getState`, `incrCounter`, `getCounter` and `deleteState` calls on the context object. You can also manage states using the [querystate](#query-state) and [putstate](#putstate) options to `pulsar-admin functions`. - -:::note - -State storage is not available in Go. - -::: - -### API - -````mdx-code-block - - - -Currently Pulsar Functions expose the following APIs for mutating and accessing State. These APIs are available in the [Context](functions-develop.md#context) object when you are using Java SDK functions. - -#### incrCounter - -```java - - /** - * Increment the builtin distributed counter referred by key - * @param key The name of the key - * @param amount The amount to be incremented - */ - void incrCounter(String key, long amount); - -``` - -The application can use `incrCounter` to change the counter of a given `key` by the given `amount`. - -#### incrCounterAsync - -```java - - /** - * Increment the builtin distributed counter referred by key - * but dont wait for the completion of the increment operation - * - * @param key The name of the key - * @param amount The amount to be incremented - */ - CompletableFuture incrCounterAsync(String key, long amount); - -``` - -The application can use `incrCounterAsync` to asynchronously change the counter of a given `key` by the given `amount`. - -#### getCounter - -```java - - /** - * Retrieve the counter value for the key. - * - * @param key name of the key - * @return the amount of the counter value for this key - */ - long getCounter(String key); - -``` - -The application can use `getCounter` to retrieve the counter of a given `key` mutated by `incrCounter`. - -Except the `counter` API, Pulsar also exposes a general key/value API for functions to store -general key/value state. - -#### getCounterAsync - -```java - - /** - * Retrieve the counter value for the key, but don't wait - * for the operation to be completed - * - * @param key name of the key - * @return the amount of the counter value for this key - */ - CompletableFuture getCounterAsync(String key); - -``` - -The application can use `getCounterAsync` to asynchronously retrieve the counter of a given `key` mutated by `incrCounterAsync`. - -#### putState - -```java - - /** - * Update the state value for the key. 
-     *
-     * @param key name of the key
-     * @param value state value of the key
-     */
-    void putState(String key, ByteBuffer value);
-
-```
-
-#### putStateAsync
-
-```java
-
-    /**
-     * Update the state value for the key, but don't wait for the operation to be completed
-     *
-     * @param key name of the key
-     * @param value state value of the key
-     */
-    CompletableFuture<Void> putStateAsync(String key, ByteBuffer value);
-
-```
-
-The application can use `putStateAsync` to asynchronously update the state of a given `key`.
-
-#### getState
-
-```java
-
-    /**
-     * Retrieve the state value for the key.
-     *
-     * @param key name of the key
-     * @return the state value for the key.
-     */
-    ByteBuffer getState(String key);
-
-```
-
-#### getStateAsync
-
-```java
-
-    /**
-     * Retrieve the state value for the key, but don't wait for the operation to be completed
-     *
-     * @param key name of the key
-     * @return the state value for the key.
-     */
-    CompletableFuture<ByteBuffer> getStateAsync(String key);
-
-```
-
-The application can use `getStateAsync` to asynchronously retrieve the state of a given `key`.
-
-#### deleteState
-
-```java
-
-    /**
-     * Delete the state value for the key.
-     *
-     * @param key name of the key
-     */
-    void deleteState(String key);
-
-```
-
-Counters and binary values share the same keyspace, so this deletes either type.
-
-
-
-
-Currently Pulsar Functions expose the following APIs for mutating and accessing State. These APIs are available in the [Context](#context) object when you are using Python SDK functions.
-
-#### incr_counter
-
-```python
-
-  def incr_counter(self, key, amount):
-    """incr the counter of a given key in the managed state"""
-
-```
-
-The application can use `incr_counter` to change the counter of a given `key` by the given `amount`.
-If the `key` does not exist, a new key is created.
-
-#### get_counter
-
-```python
-
-  def get_counter(self, key):
-    """get the counter of a given key in the managed state"""
-
-```
-
-The application can use `get_counter` to retrieve the counter of a given `key` mutated by `incr_counter`.
-
-Besides the `counter` API, Pulsar also exposes a general key/value API for functions to store
-general key/value state.
-
-#### put_state
-
-```python
-
-  def put_state(self, key, value):
-    """update the value of a given key in the managed state"""
-
-```
-
-The key is a string, and the value is arbitrary binary data.
-
-#### get_state
-
-```python
-
-  def get_state(self, key):
-    """get the value of a given key in the managed state"""
-
-```
-
-#### del_counter
-
-```python
-
-  def del_counter(self, key):
-    """delete the counter of a given key in the managed state"""
-
-```
-
-Counters and binary values share the same keyspace, so this deletes either type.
-
-
-
-
-````
-
-### Query State
-
-A Pulsar Function can use the [State API](#api) for storing state into Pulsar's state storage
-and retrieving state back from Pulsar's state storage. Additionally, Pulsar provides
-CLI commands for querying its state.
-
-```shell
-
-$ bin/pulsar-admin functions querystate \
-    --tenant <tenant> \
-    --namespace <namespace> \
-    --name <function-name> \
-    --state-storage-url <state-storage-service-url> \
-    --key <state-key> \
-    [--watch]
-
-```
-
-If `--watch` is specified, the CLI will watch the value of the provided `state-key`.
-
-### Example
-
-````mdx-code-block
-
-
-{@inject: github:WordCountFunction:/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/WordCountFunction.java} is a good example
-demonstrating how an application can easily store `state` in Pulsar Functions.
- -```java - -import org.apache.pulsar.functions.api.Context; -import org.apache.pulsar.functions.api.Function; - -import java.util.Arrays; - -public class WordCountFunction implements Function { - @Override - public Void process(String input, Context context) throws Exception { - Arrays.asList(input.split("\\.")).forEach(word -> context.incrCounter(word, 1)); - return null; - } -} - -``` - -The logic of this `WordCount` function is pretty simple and straightforward: - -1. The function first splits the received `String` into multiple words using regex `\\.`. -2. For each `word`, the function increments the corresponding `counter` by 1 (via `incrCounter(key, amount)`). - - - - -```python - -from pulsar import Function - -class WordCount(Function): - def process(self, item, context): - for word in item.split(): - context.incr_counter(word, 1) - -``` - -The logic of this `WordCount` function is pretty simple and straightforward: - -1. The function first splits the received string into multiple words on space. -2. For each `word`, the function increments the corresponding `counter` by 1 (via `incr_counter(key, amount)`). - - - - -```` diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/functions-metrics.md b/site2/website/versioned_docs/version-2.8.0-deprecated/functions-metrics.md deleted file mode 100644 index 8add6693160929..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/functions-metrics.md +++ /dev/null @@ -1,7 +0,0 @@ ---- -id: functions-metrics -title: Metrics for Pulsar Functions -sidebar_label: "Metrics" -original_id: functions-metrics ---- - diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/functions-overview.md b/site2/website/versioned_docs/version-2.8.0-deprecated/functions-overview.md deleted file mode 100644 index 816d301e0fd0e7..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/functions-overview.md +++ /dev/null @@ -1,209 +0,0 @@ ---- -id: functions-overview -title: Pulsar Functions overview -sidebar_label: "Overview" -original_id: functions-overview ---- - -**Pulsar Functions** are lightweight compute processes that - -* consume messages from one or more Pulsar topics, -* apply a user-supplied processing logic to each message, -* publish the results of the computation to another topic. - - -## Goals -With Pulsar Functions, you can create complex processing logic without deploying a separate neighboring system (such as [Apache Storm](http://storm.apache.org/), [Apache Heron](https://heron.incubator.apache.org/), [Apache Flink](https://flink.apache.org/)). Pulsar Functions are computing infrastructure of Pulsar messaging system. 
The core goal is tied to a series of other goals: - -* Developer productivity (language-native vs Pulsar Functions SDK functions) -* Easy troubleshooting -* Operational simplicity (no need for an external processing system) - -## Inspirations -Pulsar Functions are inspired by (and take cues from) several systems and paradigms: - -* Stream processing engines such as [Apache Storm](http://storm.apache.org/), [Apache Heron](https://apache.github.io/incubator-heron), and [Apache Flink](https://flink.apache.org) -* "Serverless" and "Function as a Service" (FaaS) cloud platforms like [Amazon Web Services Lambda](https://aws.amazon.com/lambda/), [Google Cloud Functions](https://cloud.google.com/functions/), and [Azure Cloud Functions](https://azure.microsoft.com/en-us/services/functions/) - -Pulsar Functions can be described as - -* [Lambda](https://aws.amazon.com/lambda/)-style functions that are -* specifically designed to use Pulsar as a message bus. - -## Programming model -Pulsar Functions provide a wide range of functionality, and the core programming model is simple. Functions receive messages from one or more **input [topics](reference-terminology.md#topic)**. Each time a message is received, the function will complete the following tasks. - - * Apply some processing logic to the input and write output to: - * An **output topic** in Pulsar - * [Apache BookKeeper](functions-develop.md#state-storage) - * Write logs to a **log topic** (potentially for debugging purposes) - * Increment a [counter](#word-count-example) - -![Pulsar Functions core programming model](/assets/pulsar-functions-overview.png) - -You can use Pulsar Functions to set up the following processing chain: - -* A Python function listens for the `raw-sentences` topic and "sanitizes" incoming strings (removing extraneous whitespace and converting all characters to lowercase) and then publishes the results to a `sanitized-sentences` topic. -* A Java function listens for the `sanitized-sentences` topic, counts the number of times each word appears within a specified time window, and publishes the results to a `results` topic -* Finally, a Python function listens for the `results` topic and writes the results to a MySQL table. - - -### Word count example - -If you implement the classic word count example using Pulsar Functions, it looks something like this: - -![Pulsar Functions word count example](/assets/pulsar-functions-word-count.png) - -To write the function in Java with [Pulsar Functions SDK for Java](functions-develop.md#available-apis), you can write the function as follows. - -```java - -package org.example.functions; - -import org.apache.pulsar.functions.api.Context; -import org.apache.pulsar.functions.api.Function; - -import java.util.Arrays; - -public class WordCountFunction implements Function { - // This function is invoked every time a message is published to the input topic - @Override - public Void process(String input, Context context) throws Exception { - Arrays.asList(input.split(" ")).forEach(word -> { - String counterKey = word.toLowerCase(); - context.incrCounter(counterKey, 1); - }); - return null; - } -} - -``` - -Bundle and build the JAR file to be deployed, and then deploy it in your Pulsar cluster using the [command line](functions-deploy.md#command-line-interface) as follows. 
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --jar target/my-jar-with-dependencies.jar \
-  --classname org.example.functions.WordCountFunction \
-  --tenant public \
-  --namespace default \
-  --name word-count \
-  --inputs persistent://public/default/sentences \
-  --output persistent://public/default/count
-
-```
-
-### Content-based routing example
-
-Pulsar Functions are used in many use cases. The following is a more sophisticated example that involves content-based routing.
-
-For example, a function takes items (strings) as input and publishes them to either a `fruits` or `vegetables` topic, depending on the item. Or, if an item is neither a fruit nor a vegetable, a warning is logged to a [log topic](functions-develop.md#logger). The following is a visual representation.
-
-![Pulsar Functions routing example](/assets/pulsar-functions-routing-example.png)
-
-If you implement this routing functionality in Python, it looks something like this:
-
-```python
-
-from pulsar import Function
-
-class RoutingFunction(Function):
-    def __init__(self):
-        self.fruits_topic = "persistent://public/default/fruits"
-        self.vegetables_topic = "persistent://public/default/vegetables"
-
-    @staticmethod
-    def is_fruit(item):
-        return item in [b"apple", b"orange", b"pear", b"other fruits..."]
-
-    @staticmethod
-    def is_vegetable(item):
-        return item in [b"carrot", b"lettuce", b"radish", b"other vegetables..."]
-
-    def process(self, item, context):
-        if self.is_fruit(item):
-            context.publish(self.fruits_topic, item)
-        elif self.is_vegetable(item):
-            context.publish(self.vegetables_topic, item)
-        else:
-            warning = "The item {0} is neither a fruit nor a vegetable".format(item)
-            context.get_logger().warn(warning)
-
-```
-
-If this code is stored in `~/router.py`, then you can deploy it in your Pulsar cluster using the [command line](functions-deploy.md#command-line-interface) as follows.
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --py ~/router.py \
-  --classname router.RoutingFunction \
-  --tenant public \
-  --namespace default \
-  --name route-fruit-veg \
-  --inputs persistent://public/default/basket-items
-
-```
-
-### Functions, messages and message types
-Pulsar Functions consume byte arrays as input and produce byte arrays as output. However, in languages that support typed interfaces (such as Java), you can write typed functions and bind messages to types in the following ways.
-* [Schema Registry](functions-develop.md#schema-registry)
-* [SerDe](functions-develop.md#serde)
-
-
-## Fully Qualified Function Name (FQFN)
-Each Pulsar Function has a **Fully Qualified Function Name** (FQFN) that consists of three elements: the function tenant, namespace, and function name. An FQFN looks like this:
-
-```text
-
-tenant/namespace/name
-
-```
-
-FQFNs enable you to create multiple functions with the same name provided that they are in different namespaces.
-
-## Supported languages
-Currently, you can write Pulsar Functions in Java, Python, and Go. For details, refer to [Develop Pulsar Functions](functions-develop.md).
-
-## Processing guarantees
-Pulsar Functions provide three different messaging semantics that you can apply to any function.
-
-Delivery semantics | Description
-:------------------|:-------
-**At-most-once** delivery | Each message sent to the function is processed at most once: it is either processed once or not processed at all (hence "at most").
-**At-least-once** delivery | Each message sent to the function can be processed more than once (hence the "at least").
-**Effectively-once** delivery | Each message sent to the function will have one output associated with it.
-
-
-### Apply processing guarantees to a function
-You can set the processing guarantees for a Pulsar Function when you create the function. The following [`pulsar-admin functions create`](reference-pulsar-admin.md#create-1) command creates a function with effectively-once guarantees applied.
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --name my-effectively-once-function \
-  --processing-guarantees EFFECTIVELY_ONCE \
-  # Other function configs
-
-```
-
-The available options for `--processing-guarantees` are:
-
-* `ATMOST_ONCE`
-* `ATLEAST_ONCE`
-* `EFFECTIVELY_ONCE`
-
-> By default, Pulsar Functions provide at-least-once delivery guarantees. So if you create a function without supplying a value for the `--processing-guarantees` flag, the function provides at-least-once guarantees.
-
-### Update the processing guarantees of a function
-You can change the processing guarantees applied to a function using the [`update`](reference-pulsar-admin.md#update-1) command. The following is an example.
-
-```bash
-
-$ bin/pulsar-admin functions update \
-  --processing-guarantees ATMOST_ONCE \
-  # Other function configs
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/functions-package.md b/site2/website/versioned_docs/version-2.8.0-deprecated/functions-package.md
deleted file mode 100644
index a995d5c1588771..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/functions-package.md
+++ /dev/null
@@ -1,493 +0,0 @@
----
-id: functions-package
-title: Package Pulsar Functions
-sidebar_label: "How-to: Package"
-original_id: functions-package
----
-
-You can package Pulsar functions in Java, Python, and Go. Packaging the window function in Java is the same as [packaging a function in Java](#java).
-
-:::note
-
-Currently, the window function is not available in Python and Go.
-
-:::
-
-## Prerequisite
-
-Before running a Pulsar function, you need to start Pulsar. You can [run a standalone Pulsar in Docker](getting-started-docker.md), or [run Pulsar in Kubernetes](getting-started-helm.md).
-
-To check whether the Docker container is running, you can use the `docker ps` command.
-
-## Java
-
-To package a function in Java, complete the following steps.
-
-1. Create a new Maven project with a POM file. In the following code sample, the value of `mainClass` is the fully qualified name of your function class.
-
-   ```xml
-   
-   <?xml version="1.0" encoding="UTF-8"?>
-   <project xmlns="http://maven.apache.org/POM/4.0.0"
-            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-            xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-       <modelVersion>4.0.0</modelVersion>
-
-       <groupId>java-function</groupId>
-       <artifactId>java-function</artifactId>
-       <version>1.0-SNAPSHOT</version>
-
-       <dependencies>
-           <dependency>
-               <groupId>org.apache.pulsar</groupId>
-               <artifactId>pulsar-functions-api</artifactId>
-               <version>2.6.0</version>
-           </dependency>
-       </dependencies>
-
-       <build>
-           <plugins>
-               <plugin>
-                   <artifactId>maven-assembly-plugin</artifactId>
-                   <configuration>
-                       <appendAssemblyId>false</appendAssemblyId>
-                       <descriptorRefs>
-                           <descriptorRef>jar-with-dependencies</descriptorRef>
-                       </descriptorRefs>
-                       <archive>
-                           <manifest>
-                               <mainClass>org.example.test.ExclamationFunction</mainClass>
-                           </manifest>
-                       </archive>
-                   </configuration>
-                   <executions>
-                       <execution>
-                           <id>make-assembly</id>
-                           <phase>package</phase>
-                           <goals>
-                               <goal>assembly</goal>
-                           </goals>
-                       </execution>
-                   </executions>
-               </plugin>
-               <plugin>
-                   <groupId>org.apache.maven.plugins</groupId>
-                   <artifactId>maven-compiler-plugin</artifactId>
-                   <configuration>
-                       <source>8</source>
-                       <target>8</target>
-                   </configuration>
-               </plugin>
-           </plugins>
-       </build>
-   </project>
-
-   ```
-
-2. Write a Java function.
-
-   ```java
-
-   package org.example.test;
-
-   import java.util.function.Function;
-
-   public class ExclamationFunction implements Function<String, String> {
-       @Override
-       public String apply(String s) {
-           return "This is my function!";
-       }
-   }
-
-   ```
-
-   For the imported package, you can use one of the following interfaces:
-   - Function interface provided by Java 8: `java.util.function.Function`
-   - Pulsar Function interface: `org.apache.pulsar.functions.api.Function`
-
-   The main difference between the two interfaces is that the `org.apache.pulsar.functions.api.Function` interface provides the context interface.
## Supported languages

Currently, you can write Pulsar Functions in Java, Python, and Go. For details, refer to [Develop Pulsar Functions](functions-develop.md).

## Processing guarantees

Pulsar Functions provide three different messaging semantics that you can apply to any function.

Delivery semantics | Description
:------------------|:-------
**At-most-once** delivery | Each message sent to the function is processed at most once: it is either processed once or not at all (hence "at most").
**At-least-once** delivery | Each message sent to the function can be processed more than once (hence "at least").
**Effectively-once** delivery | Each message sent to the function will have one output associated with it.

### Apply processing guarantees to a function

You can set the processing guarantees for a Pulsar Function when you create the function. The following [`pulsar-admin functions create`](reference-pulsar-admin.md#create-1) command creates a function with effectively-once guarantees applied.

```bash

$ bin/pulsar-admin functions create \
  --name my-effectively-once-function \
  --processing-guarantees EFFECTIVELY_ONCE \
  # Other function configs

```

The available options for `--processing-guarantees` are:

* `ATMOST_ONCE`
* `ATLEAST_ONCE`
* `EFFECTIVELY_ONCE`

> By default, Pulsar Functions provide at-least-once delivery guarantees. So if you create a function without supplying a value for the `--processing-guarantees` flag, the function provides at-least-once guarantees.

### Update the processing guarantees of a function

You can change the processing guarantees applied to a function using the [`update`](reference-pulsar-admin.md#update-1) command. The following is an example.

```bash

$ bin/pulsar-admin functions update \
  --processing-guarantees ATMOST_ONCE \
  # Other function configs

```

diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/functions-package.md b/site2/website/versioned_docs/version-2.8.0-deprecated/functions-package.md
deleted file mode 100644
index a995d5c1588771..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/functions-package.md
+++ /dev/null
@@ -1,493 +0,0 @@
---
id: functions-package
title: Package Pulsar Functions
sidebar_label: "How-to: Package"
original_id: functions-package
---

You can package Pulsar functions in Java, Python, and Go. Packaging the window function in Java is the same as [packaging a function in Java](#java).

:::note

Currently, the window function is not available in Python and Go.

:::

## Prerequisite

Before running a Pulsar function, you need to start Pulsar. You can [run a standalone Pulsar in Docker](getting-started-docker.md), or [run Pulsar in Kubernetes](getting-started-helm.md).

To check whether the Docker image starts, you can use the `docker ps` command.

## Java

To package a function in Java, complete the following steps.

1. Create a new Maven project with a pom file. In the following code sample, the value of `mainClass` is your package name.

   ```xml

   <?xml version="1.0" encoding="UTF-8"?>
   <project xmlns="http://maven.apache.org/POM/4.0.0"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
       <modelVersion>4.0.0</modelVersion>

       <groupId>java-function</groupId>
       <artifactId>java-function</artifactId>
       <version>1.0-SNAPSHOT</version>

       <dependencies>
           <dependency>
               <groupId>org.apache.pulsar</groupId>
               <artifactId>pulsar-functions-api</artifactId>
               <version>2.6.0</version>
           </dependency>
       </dependencies>

       <build>
           <plugins>
               <plugin>
                   <artifactId>maven-assembly-plugin</artifactId>
                   <configuration>
                       <appendAssemblyId>false</appendAssemblyId>
                       <descriptorRefs>
                           <descriptorRef>jar-with-dependencies</descriptorRef>
                       </descriptorRefs>
                       <archive>
                           <manifest>
                               <mainClass>org.example.test.ExclamationFunction</mainClass>
                           </manifest>
                       </archive>
                   </configuration>
                   <executions>
                       <execution>
                           <id>make-assembly</id>
                           <phase>package</phase>
                           <goals>
                               <goal>assembly</goal>
                           </goals>
                       </execution>
                   </executions>
               </plugin>
               <plugin>
                   <groupId>org.apache.maven.plugins</groupId>
                   <artifactId>maven-compiler-plugin</artifactId>
                   <configuration>
                       <source>8</source>
                       <target>8</target>
                   </configuration>
               </plugin>
           </plugins>
       </build>

   </project>

   ```

2. Write a Java function.

   ```java

   package org.example.test;

   import java.util.function.Function;

   public class ExclamationFunction implements Function<String, String> {
       @Override
       public String apply(String s) {
           return "This is my function!";
       }
   }

   ```

   For the imported package, you can use one of the following interfaces:
   - Function interface provided by Java 8: `java.util.function.Function`
   - Pulsar Function interface: `org.apache.pulsar.functions.api.Function`

   The main difference between the two interfaces is that the `org.apache.pulsar.functions.api.Function` interface provides the context interface. When you write a function and want to interact with it, you can use the context to obtain a wide variety of information and functionality for Pulsar Functions.

   The following example uses the `org.apache.pulsar.functions.api.Function` interface with context.

   ```java

   package org.example.functions;

   import org.apache.pulsar.functions.api.Context;
   import org.apache.pulsar.functions.api.Function;

   import java.util.Arrays;

   public class WordCountFunction implements Function<String, Void> {
       // This function is invoked every time a message is published to the input topic
       @Override
       public Void process(String input, Context context) throws Exception {
           Arrays.asList(input.split(" ")).forEach(word -> {
               String counterKey = word.toLowerCase();
               context.incrCounter(counterKey, 1);
           });
           return null;
       }
   }

   ```

3. Package the Java function.

   ```bash

   mvn package

   ```

   After the Java function is packaged, a `target` directory is created automatically. Open the `target` directory to check if there is a JAR package similar to `java-function-1.0-SNAPSHOT.jar`.

4. Run the Java function.

   (1) Copy the packaged jar file to the Pulsar image.

   ```bash

   docker exec -it [CONTAINER ID] /bin/bash
   docker cp <path of java-function-1.0-SNAPSHOT.jar> [CONTAINER ID]:/pulsar

   ```

   (2) Run the Java function using the following command.

   ```bash

   ./bin/pulsar-admin functions localrun \
     --classname org.example.test.ExclamationFunction \
     --jar java-function-1.0-SNAPSHOT.jar \
     --inputs persistent://public/default/my-topic-1 \
     --output persistent://public/default/test-1 \
     --tenant public \
     --namespace default \
     --name JavaFunction

   ```

   The following log indicates that the Java function starts successfully.

   ```text

   ...
   07:55:03.724 [main] INFO org.apache.pulsar.functions.runtime.ProcessRuntime - Started process successfully
   ...

   ```

## Python

Python functions support the following three formats:

- One python file
- ZIP file
- PIP

### One python file

To package a function with **one python file** in Python, complete the following steps.

1. Write a Python function.

   ```python

   from pulsar import Function  # import the Function module from Pulsar

   # The classic ExclamationFunction that appends an exclamation mark
   # at the end of the input
   class ExclamationFunction(Function):
       def __init__(self):
           pass

       def process(self, input, context):
           return input + '!'

   ```

   In this example, when you write a Python function, you need to inherit the Function class and implement the `process()` method.

   `process()` mainly has two parameters:

   - `input` represents your input.

   - `context` represents an interface exposed by the Pulsar Function. You can get the attributes in the Python function based on the provided context object.

2. Install a Python client.

   The implementation of a Python function depends on the Python client, so before deploying a Python function, you need to install the corresponding version of the Python client.

   ```bash

   pip install pulsar-client==2.6.0

   ```

3. Run the Python function.

   (1) Copy the Python function file to the Pulsar image.

   ```bash

   docker exec -it [CONTAINER ID] /bin/bash
   docker cp <path of Python function file> [CONTAINER ID]:/pulsar

   ```

   (2) Run the Python function using the following command.

   ```bash

   ./bin/pulsar-admin functions localrun \
     --classname <python file name>.<python class name> \
     --py <python file location> \
     --inputs persistent://public/default/my-topic-1 \
     --output persistent://public/default/test-1 \
     --tenant public \
     --namespace default \
     --name PythonFunction

   ```

   The following log indicates that the Python function starts successfully.

   ```text

   ...
   07:55:03.724 [main] INFO org.apache.pulsar.functions.runtime.ProcessRuntime - Started process successfully
   ...

   ```
### ZIP file

To package a function with a **ZIP file** in Python, complete the following steps.

1. Prepare the ZIP file.

   The following layout is required when packaging the ZIP file of a Python function.

   ```text

   Assuming the zip file is named `func.zip`, unzipping it yields:
   "func/src"
   "func/requirements.txt"
   "func/deps"

   ```

   Take [exclamation.zip](https://github.com/apache/pulsar/tree/master/tests/docker-images/latest-version-image/python-examples) as an example. The internal structure of the example is as follows.

   ```text

   .
   ├── deps
   │   └── sh-1.12.14-py2.py3-none-any.whl
   └── src
       └── exclamation.py

   ```

2. Run the Python function.

   (1) Copy the ZIP file to the Pulsar image.

   ```bash

   docker exec -it [CONTAINER ID] /bin/bash
   docker cp <path of ZIP file> [CONTAINER ID]:/pulsar

   ```

   (2) Run the Python function using the following command.

   ```bash

   ./bin/pulsar-admin functions localrun \
     --classname exclamation \
     --py <ZIP file location> \
     --inputs persistent://public/default/in-topic \
     --output persistent://public/default/out-topic \
     --tenant public \
     --namespace default \
     --name PythonFunction

   ```

   The following log indicates that the Python function starts successfully.

   ```text

   ...
   07:55:03.724 [main] INFO org.apache.pulsar.functions.runtime.ProcessRuntime - Started process successfully
   ...

   ```

### PIP

The PIP method is only supported in the Kubernetes runtime. To package a function with **PIP** in Python, complete the following steps.

1. Configure the `functions_worker.yml` file.

   ```text

   #### Kubernetes Runtime ####
   installUserCodeDependencies: true

   ```

2. Write your Python function.

   ```python

   from pulsar import Function
   import js2xml

   # The classic ExclamationFunction that appends an exclamation mark
   # at the end of the input
   class ExclamationFunction(Function):
       def __init__(self):
           pass

       def process(self, input, context):
           # add your logic
           return input + '!'

   ```

   You can introduce additional dependencies. When the Python function detects that the file currently used is a `whl` file and the `installUserCodeDependencies` parameter is specified, the system uses the `pip install` command to install the dependencies required by the Python function.

3. Generate the `whl` file.

   ```shell

   $ cd $PULSAR_HOME/pulsar-functions/scripts/python
   $ chmod +x generate.sh
   $ ./generate.sh
   # e.g: ./generate.sh /path/to/python /path/to/python/output 1.0.0

   ```

   The output is written to `/path/to/python/output`:

   ```text

   -rw-r--r--  1 root  staff   1.8K  8 27 14:29 pulsarfunction-1.0.0-py2-none-any.whl
   -rw-r--r--  1 root  staff   1.4K  8 27 14:29 pulsarfunction-1.0.0.tar.gz
   -rw-r--r--  1 root  staff     0B  8 27 14:29 pulsarfunction.whl

   ```
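
Once the wheel is generated, it can be deployed like any other Python function package. A minimal sketch, assuming the Kubernetes runtime with `installUserCodeDependencies` enabled; the function name and topic are hypothetical:

```bash

$ bin/pulsar-admin functions create \
  --py /path/to/python/output/pulsarfunction-1.0.0-py2-none-any.whl \
  --classname <python file name>.<python class name> \
  --tenant public \
  --namespace default \
  --name PipFunction \
  --inputs persistent://public/default/in-topic

```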
## Go

To package a function in Go, complete the following steps.

1. Write a Go function.

   Currently, a Go function can **only** be implemented using the SDK, and the interface of the function is exposed in the form of the SDK. Before using a Go function, you need to import "github.com/apache/pulsar/pulsar-function-go/pf".

   ```go

   import (
       "context"
       "fmt"

       "github.com/apache/pulsar/pulsar-function-go/pf"
   )

   func HandleRequest(ctx context.Context, input []byte) error {
       fmt.Println(string(input) + "!")
       return nil
   }

   func main() {
       pf.Start(HandleRequest)
   }

   ```

   You can use the context to connect to the Go function.

   ```go

   if fc, ok := pf.FromContext(ctx); ok {
       fmt.Printf("function ID is:%s, ", fc.GetFuncID())
       fmt.Printf("function version is:%s\n", fc.GetFuncVersion())
   }

   ```

   When writing a Go function, remember that
   - In `main()`, you **only** need to register the function name with `Start()`. **Only** one function name is received in `Start()`.
   - The Go function uses Go reflection, based on the received function name, to verify whether the parameter list and returned value list are correct. The parameter list and returned value list **must be** one of the following sample signatures:

   ```go

   func ()
   func () error
   func (input) error
   func () (output, error)
   func (input) (output, error)
   func (context.Context) error
   func (context.Context, input) error
   func (context.Context) (output, error)
   func (context.Context, input) (output, error)

   ```

2. Build the Go function.

   ```bash

   go build <your Go function filename>.go

   ```

3. Run the Go function.

   (1) Copy the Go function file to the Pulsar image.

   ```bash

   docker exec -it [CONTAINER ID] /bin/bash
   docker cp <path of Go function file> [CONTAINER ID]:/pulsar

   ```

   (2) Run the Go function with the following command.

   ```bash

   ./bin/pulsar-admin functions localrun \
     --go [your go function path] \
     --inputs [input topics] \
     --output [output topic] \
     --tenant [default:public] \
     --namespace [default:default] \
     --name [custom unique go function name]

   ```

   The following log indicates that the Go function starts successfully.

   ```text

   ...
   07:55:03.724 [main] INFO org.apache.pulsar.functions.runtime.ProcessRuntime - Started process successfully
   ...

   ```

## Start Functions in cluster mode

If you want to start a function in cluster mode, replace `localrun` with `create` in the commands above. The following log indicates that your function starts successfully.

```text

"Created successfully"

```
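
For example, the Java function from the steps above, deployed in cluster mode rather than locally, would look like this (a sketch reusing the same names):

```bash

./bin/pulsar-admin functions create \
  --classname org.example.test.ExclamationFunction \
  --jar java-function-1.0-SNAPSHOT.jar \
  --inputs persistent://public/default/my-topic-1 \
  --output persistent://public/default/test-1 \
  --tenant public \
  --namespace default \
  --name JavaFunction

```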
For information about the parameters `--classname`, `--jar`, `--py`, `--go`, and `--inputs`, run the command `./bin/pulsar-admin functions` or see [here](reference-pulsar-admin.md#functions).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/functions-runtime.md b/site2/website/versioned_docs/version-2.8.0-deprecated/functions-runtime.md
deleted file mode 100644
index ab7d1c05db421e..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/functions-runtime.md
+++ /dev/null
@@ -1,399 +0,0 @@
---
id: functions-runtime
title: Configure Functions runtime
sidebar_label: "Setup: Configure Functions runtime"
original_id: functions-runtime
---

You can use the following methods to run functions.

- *Thread*: Invoke functions in threads in the functions worker.
- *Process*: Invoke functions in processes forked by the functions worker.
- *Kubernetes*: Submit functions as Kubernetes StatefulSets via the functions worker.

:::note

Pulsar supports adding labels to the Kubernetes StatefulSets and services while launching functions, which facilitates selecting the target Kubernetes objects.

:::

The differences between the thread and process modes are:
- Thread mode: the function runs in the same Java virtual machine (JVM) as the functions worker.
- Process mode: the function runs in a separate process on the same machine as the functions worker.

## Configure thread runtime

It is easy to configure the *Thread* runtime. In most cases, you do not need to configure anything. You can customize the thread group name with the following settings:

```yaml

functionRuntimeFactoryClassName: org.apache.pulsar.functions.runtime.thread.ThreadRuntimeFactory
functionRuntimeFactoryConfigs:
  threadGroupName: "Your Function Container Group"

```

The *Thread* runtime is only supported for Java functions.

## Configure process runtime

When you enable the *Process* runtime, you do not need to configure anything.

```yaml

functionRuntimeFactoryClassName: org.apache.pulsar.functions.runtime.process.ProcessRuntimeFactory
functionRuntimeFactoryConfigs:
  # the directory for storing the function logs
  logDirectory:
  # change the jar location only when you put the java instance jar in a different location
  javaInstanceJarLocation:
  # change the python instance location only when you put the python instance jar in a different location
  pythonInstanceLocation:
  # change the extra dependencies location:
  extraFunctionDependenciesDir:

```

The *Process* runtime is supported for Java, Python, and Go functions.

## Configure Kubernetes runtime

The Kubernetes runtime works by having the functions worker generate and apply Kubernetes manifests. If you run the functions worker on Kubernetes, it can use the `serviceAccount` associated with the pod that it is running in. Otherwise, you can configure it to communicate with a Kubernetes cluster.

The manifests generated by the functions worker include a `StatefulSet`, a `Service` (used to communicate with the pods), and a `Secret` for auth credentials (when applicable). The `StatefulSet` manifest (by default) has a single pod, with the number of replicas determined by the "parallelism" of the function. On pod boot, the pod downloads the function payload (via the functions worker REST API). The pod's container image is configurable, but it must have the functions runtime.

The Kubernetes runtime supports secrets, so you can create a Kubernetes secret and expose it as an environment variable in the pod. The Kubernetes runtime is extensible: you can implement classes to customize how Kubernetes manifests are generated, how auth data is passed to pods, and how secrets are integrated.

:::tip

For the rules of translating Pulsar object names into Kubernetes resource labels, see [here](admin-api-overview.md#how-to-define-pulsar-resource-names-when-running-pulsar-in-kubernetes).

:::

### Basic configuration

It is easy to configure the Kubernetes runtime. You can just uncomment the settings of `kubernetesContainerFactory` in the `functions_worker.yml` file. The following is an example.

```yaml

functionRuntimeFactoryClassName: org.apache.pulsar.functions.runtime.kubernetes.KubernetesRuntimeFactory
functionRuntimeFactoryConfigs:
  # uri to kubernetes cluster, leave it empty and it will use the kubernetes settings in the function worker
  k8Uri:
  # the kubernetes namespace to run the function instances. it is `default` if this setting is left empty
  jobNamespace:
  # The Kubernetes pod name to run the function instances. It is set to
  # `pf-<tenant>-<namespace>-<name>` if this setting is left empty
  jobName:
  # the docker image used to run function instances. by default it is `apachepulsar/pulsar`
  pulsarDockerImageName:
  # the docker image used to run function instances according to different configurations provided by users.
  # By default it is `apachepulsar/pulsar`.
  # e.g:
  # functionDockerImages:
  #   JAVA: JAVA_IMAGE_NAME
  #   PYTHON: PYTHON_IMAGE_NAME
  #   GO: GO_IMAGE_NAME
  functionDockerImages:
  # The image pull policy for the image used to run function instances. By default it is `IfNotPresent`
  imagePullPolicy: IfNotPresent
  # the root directory of the pulsar home directory in `pulsarDockerImageName`. by default it is `/pulsar`.
  # if you are using your own built image in `pulsarDockerImageName`, you need to set this setting accordingly
  pulsarRootDir:
  # The config admin CLI allows users to customize the configuration of the admin cli tool, such as:
  # `/bin/pulsar-admin` and `/bin/pulsarctl`. By default it is `/bin/pulsar-admin`. If you want to use `pulsarctl`,
  # you need to set this setting accordingly
  configAdminCLI:
  # this setting only takes effect if `k8Uri` is set to null. if your function worker is running as a k8s pod,
  # setting this to true lets the function worker submit functions to the same k8s cluster that the function
  # worker is running in. set this to false if your function worker is not running as a k8s pod.
  submittingInsidePod: false
  # the pulsar service url that pulsar functions should use to connect to pulsar
  # if it is not set, it will use the pulsar service url configured in the worker service
  pulsarServiceUrl:
  # the pulsar admin url that pulsar functions should use to connect to pulsar
  # if it is not set, it will use the pulsar admin url configured in the worker service
  pulsarAdminUrl:
  # The flag indicates whether to install user code dependencies. (applied to python packages)
  installUserCodeDependencies:
  # The repository that pulsar functions use to download python dependencies
  pythonDependencyRepository:
  # The repository that pulsar functions use to download extra python dependencies
  pythonExtraDependencyRepository:
  # the custom labels that the function worker uses to select the nodes for pods
  customLabels:
  # The expected metrics collection interval, in seconds
  expectedMetricsCollectionInterval: 30
  # The Kubernetes runtime will periodically check back on
  # this configMap if defined, and if there are any changes
  # to the kubernetes specific stuff, we apply those changes
  changeConfigMap:
  # The namespace for storing the change config map
  changeConfigMapNamespace:
  # The ratio of cpu request to cpu limit to be set for a function/source/sink.
  # The formula for the cpu request is cpuRequest = userRequestCpu / cpuOverCommitRatio
  cpuOverCommitRatio: 1.0
  # The ratio of memory request to memory limit to be set for a function/source/sink.
  # The formula for the memory request is memoryRequest = userRequestMemory / memoryOverCommitRatio
  memoryOverCommitRatio: 1.0
  # The port inside the function pod which is used by the worker to communicate with the pod
  grpcPort: 9093
  # The port inside the function pod on which prometheus metrics are exposed
  metricsPort: 9094
  # The directory inside the function pod where nar packages will be extracted
  narExtractionDirectory:
  # The classpath where function instance files are stored
  functionInstanceClassPath:
  # the directory for dropping extra function dependencies
  # if it is not an absolute path, it is relative to `pulsarRootDir`
  extraFunctionDependenciesDir:
  # Additional memory padding added on top of the memory requested by the function, on a per-instance basis
  percentMemoryPadding: 10

```

If you run the functions worker embedded in a broker on Kubernetes, you can use the default settings.

### Run standalone functions worker on Kubernetes

If you run the functions worker standalone (that is, not embedded) on Kubernetes, you need to configure `pulsarServiceUrl` to be the URL of the broker and `pulsarAdminUrl` to be the URL of the functions worker.

For example, suppose both Pulsar brokers and functions workers run in the `pulsar` K8S namespace. The brokers have a service called `brokers` and the functions worker has a service called `func-worker`. The settings are as follows:

```yaml

pulsarServiceUrl: pulsar://broker.pulsar:6650 # or pulsar+ssl://broker.pulsar:6651 if using TLS
pulsarAdminUrl: http://func-worker.pulsar:8080 # or https://func-worker:8443 if using TLS

```

### Run RBAC in Kubernetes clusters

If you run RBAC in your Kubernetes cluster, make sure that the service account you use for running functions workers (or brokers, if functions workers run along with brokers) has permissions on the following Kubernetes APIs.

- services
- configmaps
- pods
- apps.statefulsets

The following is sufficient:

```yaml

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: functions-worker
rules:
- apiGroups: [""]
  resources:
  - services
  - configmaps
  - pods
  verbs:
  - '*'
- apiGroups:
  - apps
  resources:
  - statefulsets
  verbs:
  - '*'
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: functions-worker
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: functions-worker
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: functions-worker
subjects:
- kind: ServiceAccount
  name: functions-worker

```
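
To put this in place, you would, for example, save the manifest above to a file and apply it with `kubectl` (a sketch; the file name is hypothetical):

```bash

# Create the ClusterRole, ServiceAccount, and ClusterRoleBinding in one step
kubectl apply -f functions-worker-rbac.yaml

```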
If the service account is not properly configured, an error message similar to this is displayed:

```bash

22:04:27.696 [Timer-0] ERROR org.apache.pulsar.functions.runtime.KubernetesRuntimeFactory - Error while trying to fetch configmap example-pulsar-4qvmb5gur3c6fc9dih0x1xn8b-function-worker-config at namespace pulsar
io.kubernetes.client.ApiException: Forbidden
	at io.kubernetes.client.ApiClient.handleResponse(ApiClient.java:882) ~[io.kubernetes-client-java-2.0.0.jar:?]
	at io.kubernetes.client.ApiClient.execute(ApiClient.java:798) ~[io.kubernetes-client-java-2.0.0.jar:?]
	at io.kubernetes.client.apis.CoreV1Api.readNamespacedConfigMapWithHttpInfo(CoreV1Api.java:23673) ~[io.kubernetes-client-java-api-2.0.0.jar:?]
	at io.kubernetes.client.apis.CoreV1Api.readNamespacedConfigMap(CoreV1Api.java:23655) ~[io.kubernetes-client-java-api-2.0.0.jar:?]
	at org.apache.pulsar.functions.runtime.KubernetesRuntimeFactory.fetchConfigMap(KubernetesRuntimeFactory.java:284) [org.apache.pulsar-pulsar-functions-runtime-2.4.0-42c3bf949.jar:2.4.0-42c3bf949]
	at org.apache.pulsar.functions.runtime.KubernetesRuntimeFactory$1.run(KubernetesRuntimeFactory.java:275) [org.apache.pulsar-pulsar-functions-runtime-2.4.0-42c3bf949.jar:2.4.0-42c3bf949]
	at java.util.TimerThread.mainLoop(Timer.java:555) [?:1.8.0_212]
	at java.util.TimerThread.run(Timer.java:505) [?:1.8.0_212]

```

### Integrate Kubernetes secrets

In order to safely distribute secrets, Pulsar Functions can reference Kubernetes secrets. To enable this, set `secretsProviderConfiguratorClassName` to `org.apache.pulsar.functions.secretsproviderconfigurator.KubernetesSecretsProviderConfigurator`.

You can create a secret in the namespace where your functions are deployed. For example, suppose you deploy functions to the `pulsar-func` Kubernetes namespace, and you have a secret named `database-creds` with a field named `password`, which you want to mount in the pod as an environment variable called `DATABASE_PASSWORD`. The following function configuration enables you to reference that secret and mount the value as an environment variable in the pod.

```yaml

tenant: "mytenant"
namespace: "mynamespace"
name: "myfunction"
topicName: "persistent://mytenant/mynamespace/myfuncinput"
className: "com.company.pulsar.myfunction"

secrets:
  # the secret will be mounted from the `password` field in the `database-creds` secret as an env var called `DATABASE_PASSWORD`
  DATABASE_PASSWORD:
    path: "database-creds"
    key: "password"

```

### Enable token authentication

When you enable authentication for your Pulsar cluster, you need a mechanism for the pod running your function to authenticate with the broker.

The `org.apache.pulsar.functions.auth.KubernetesFunctionAuthProvider` interface provides support for any authentication mechanism. The `functionAuthProviderClassName` setting in `functions_worker.yml` is used to specify your path to this implementation.

Pulsar includes an implementation of this interface for token authentication, which also distributes the certificate authority via the same implementation. The configuration is as follows:

```yaml

functionAuthProviderClassName: org.apache.pulsar.functions.auth.KubernetesSecretsTokenAuthProvider

```

For token authentication, the functions worker captures the token that is used to deploy (or update) the function. The token is saved as a secret and mounted into the pod.

For custom authentication or TLS, you need to implement this interface or use an alternative mechanism to provide authentication. If you use token authentication and TLS encryption to secure the communication with the cluster, Pulsar passes your certificate authority (CA) to the client, so the client obtains what it needs to authenticate with the cluster and trusts the cluster's signed certificate.

:::note

If you deploy functions using a token that expires, the token saved for the function will expire as well.

:::

### Run clusters with authentication

When you run a functions worker in a standalone process (that is, not embedded in the broker) in a cluster with authentication, you must configure your functions worker to interact with the broker and authenticate incoming requests. So you need to configure the properties that the broker requires for authentication or authorization.
For example, if you use token authentication, you need to configure the following properties in the `functions_worker.yml` file.

```yaml

clientAuthenticationPlugin: org.apache.pulsar.client.impl.auth.AuthenticationToken
clientAuthenticationParameters: file:///etc/pulsar/token/admin-token.txt
configurationStoreServers: zookeeper-cluster:2181 # auth requires a connection to zookeeper
authenticationProviders:
  - "org.apache.pulsar.broker.authentication.AuthenticationProviderToken"
authorizationEnabled: true
authenticationEnabled: true
superUserRoles:
  - superuser
  - proxy
properties:
  tokenSecretKey: file:///etc/pulsar/jwt/secret # if using a secret token, the key file must be DER-encoded
  tokenPublicKey: file:///etc/pulsar/jwt/public.key # if using public/private key tokens, the key file must be DER-encoded

```

:::note

You must configure both the functions worker authentication/authorization (so the server can authenticate incoming requests) and the client settings (so the functions worker can authenticate to the broker).

:::

### Customize Kubernetes runtime

The Kubernetes integration enables you to implement a class and customize how manifests are generated. You can configure it by setting `runtimeCustomizerClassName` in the `functions_worker.yml` file to the fully qualified class name. The class must implement the `org.apache.pulsar.functions.runtime.kubernetes.KubernetesManifestCustomizer` interface.

The functions (and sinks/sources) API provides a flag, `customRuntimeOptions`, which is passed to this interface.

To initialize the `KubernetesManifestCustomizer`, you can provide `runtimeCustomizerConfig` in the `functions_worker.yml` file. `runtimeCustomizerConfig` is passed to the `public void initialize(Map<String, Object> config)` method of the interface. `runtimeCustomizerConfig` is different from `customRuntimeOptions` in that `runtimeCustomizerConfig` is the same across all functions. If you provide both `runtimeCustomizerConfig` and `customRuntimeOptions`, you need to decide how to reconcile the two configurations in your implementation of `KubernetesManifestCustomizer`.
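
For example, enabling a customizer in `functions_worker.yml` might look like the following sketch; the class name and config values are placeholders, not defaults from the original document:

```yaml

runtimeCustomizerClassName: com.example.MyManifestCustomizer  # hypothetical implementation
runtimeCustomizerConfig:
  # hypothetical worker-wide default passed to initialize()
  jobNamespace: pulsar-functions

```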
Pulsar includes a built-in implementation. To use it, set `runtimeCustomizerClassName` to `org.apache.pulsar.functions.runtime.kubernetes.BasicKubernetesManifestCustomizer`. The built-in implementation, initialized with `runtimeCustomizerConfig`, enables you to pass a JSON document as `customRuntimeOptions` with certain properties to augment, which decides how the manifests are generated. If both `runtimeCustomizerConfig` and `customRuntimeOptions` are provided, `BasicKubernetesManifestCustomizer` uses `customRuntimeOptions` to override `runtimeCustomizerConfig` where the two conflict.

Below is an example of `customRuntimeOptions`.

```json

{
  "jobName": "jobname", // the k8s pod name to run this function instance
  "jobNamespace": "namespace", // the k8s namespace to run this function in
  "extraLabels": { // extra labels to attach to the statefulSet, service, and pods
    "extraLabel": "value"
  },
  "extraAnnotations": { // extra annotations to attach to the statefulSet, service, and pods
    "extraAnnotation": "value"
  },
  "nodeSelectorLabels": { // node selector labels to add on to the pod spec
    "customLabel": "value"
  },
  "tolerations": [ // tolerations to add to the pod spec
    {
      "key": "custom-key",
      "value": "value",
      "effect": "NoSchedule"
    }
  ],
  "resourceRequirements": { // values for cpu and memory should be defined as described here: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container
    "requests": {
      "cpu": 1,
      "memory": "4G"
    },
    "limits": {
      "cpu": 2,
      "memory": "8G"
    }
  }
}

```

## Run clusters with geo-replication

If you run multiple clusters tied together with geo-replication, it is important to use a different functions namespace for each cluster. Otherwise, the functions share a namespace and may be scheduled across clusters.

For example, if you have two clusters, `east-1` and `west-1`, you can configure the functions workers for `east-1` and `west-1` respectively as follows.

```yaml

pulsarFunctionsCluster: east-1
pulsarFunctionsNamespace: public/functions-east-1

```

```yaml

pulsarFunctionsCluster: west-1
pulsarFunctionsNamespace: public/functions-west-1

```

This ensures the two different functions workers use distinct sets of topics for their internal coordination.

## Configure standalone functions worker

When configuring a standalone functions worker, you need to configure the properties that the broker requires, especially if you use TLS, so that the functions worker can communicate with the broker.

You need to configure the following required properties.

```yaml

workerPort: 8080
workerPortTls: 8443 # when using TLS
tlsCertificateFilePath: /etc/pulsar/tls/tls.crt # when using TLS
tlsKeyFilePath: /etc/pulsar/tls/tls.key # when using TLS
tlsTrustCertsFilePath: /etc/pulsar/tls/ca.crt # when using TLS
pulsarServiceUrl: pulsar://broker.pulsar:6650/ # or pulsar+ssl://pulsar-prod-broker.pulsar:6651/ when using TLS
pulsarWebServiceUrl: http://broker.pulsar:8080/ # or https://pulsar-prod-broker.pulsar:8443/ when using TLS
useTls: true # when using TLS, critical!

```

diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/functions-worker.md b/site2/website/versioned_docs/version-2.8.0-deprecated/functions-worker.md
deleted file mode 100644
index 1ad643cee8431a..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/functions-worker.md
+++ /dev/null
@@ -1,386 +0,0 @@
---
id: functions-worker
title: Deploy and manage functions worker
sidebar_label: "Setup: Pulsar Functions Worker"
original_id: functions-worker
---

Before using Pulsar Functions, you need to learn how to set up the Pulsar Functions worker and how to [configure the Functions runtime](functions-runtime.md).

The Pulsar `functions-worker` is the logical component that runs Pulsar Functions in cluster mode. Two options are available, and you can select either based on your requirements.
- [run with brokers](#run-functions-worker-with-brokers)
- [run it separately](#run-functions-worker-separately) on different machines

:::note

The `--- Service Urls---` lines in the following diagrams represent the Pulsar service URLs that Pulsar clients and admin tools use to connect to a Pulsar cluster.

:::

## Run Functions-worker with brokers

The following diagram illustrates the deployment of functions-workers running along with brokers.

![assets/functions-worker-corun.png](/assets/functions-worker-corun.png)

To enable the functions-worker to run as part of a broker, you need to set `functionsWorkerEnabled` to `true` in the `broker.conf` file.

```conf

functionsWorkerEnabled=true

```

If `functionsWorkerEnabled` is set to `true`, the functions-worker is started as part of the broker. You need to configure the `conf/functions_worker.yml` file to customize your functions-worker.

Before you run the functions-worker with a broker, you have to configure the functions-worker, and then start it with the broker.

### Configure Functions-Worker to run with brokers

In this mode, most of the settings are already inherited from your broker configuration (for example, configurationStore settings, authentication settings, and so on) since the `functions-worker` is running as part of the broker.

Pay attention to the following required settings when configuring the functions-worker in this mode.

- `numFunctionPackageReplicas`: The number of replicas used to store function packages. The default value is `1`, which is suitable for standalone deployment. For production deployment, to ensure high availability, set it to be larger than `2`.
- `initializedDlogMetadata`: Whether the distributed log metadata is initialized at runtime. If it is set to `true`, you must ensure that it has been initialized by the `bin/pulsar initialize-cluster-metadata` command.

If authentication is enabled on the BookKeeper cluster, configure the following BookKeeper authentication settings.

- `bookkeeperClientAuthenticationPlugin`: the BookKeeper client authentication plugin name.
- `bookkeeperClientAuthenticationParametersName`: the BookKeeper client authentication plugin parameters name.
- `bookkeeperClientAuthenticationParameters`: the BookKeeper client authentication plugin parameters.

### Configure Stateful-Functions to run with broker

If you want to use stateful functions (for example, the `putState()` and `queryState()` related interfaces), follow the steps below.

1. Enable the **streamStorage** service in BookKeeper.

   Currently, the service uses the NAR package, so you need to set the following configuration in `bookkeeper.conf`.

   ```text

   extraServerComponents=org.apache.bookkeeper.stream.server.StreamStorageLifecycleComponent

   ```

   After starting the bookie, use the following method to check whether the streamStorage service has started correctly.

   Input:

   ```shell

   telnet localhost 4181

   ```

   Output:

   ```text

   Trying 127.0.0.1...
   Connected to localhost.
   Escape character is '^]'.

   ```

2. Turn on this function in `functions_worker.yml`.

   ```text

   stateStorageServiceUrl: bk://<bk-service-url>:4181

   ```

   `<bk-service-url>` is the service URL pointing to the BookKeeper table service.
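
Once state storage is enabled, you can inspect a function's state from the command line. A sketch, assuming a `word-count` function and the counter keys produced by the `WordCountFunction` example shown earlier:

```bash

bin/pulsar-admin functions querystate \
  --tenant public \
  --namespace default \
  --name word-count \
  --key hello

```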
### Start Functions-worker with broker

Once you have configured the `functions_worker.yml` file, you can start or restart your broker.

Then you can use the following command to verify whether the `functions-worker` is running well.

```bash

curl <broker-ip>:8080/admin/v2/worker/cluster

```

After entering the command above, a list of active functions workers in the cluster is returned. The output is similar to the following.

```json

[{"workerId":"<worker-id>","workerHostname":"<worker-hostname>","port":8080}]

```

## Run Functions-worker separately

This section illustrates how to run the `functions-worker` as a separate process on separate machines.

![assets/functions-worker-separated.png](/assets/functions-worker-separated.png)

:::note

In this mode, make sure `functionsWorkerEnabled` is set to `false`, so you won't start the `functions-worker` with brokers by mistake. Also, while accessing the `functions-worker` to manage any of the functions, the `pulsar-admin` CLI tool or any of the clients should use the `workerHostname` and `workerPort` that you set in [Worker parameters](#worker-parameters) to generate an `--admin-url`.

:::

### Configure Functions-worker to run separately

To run the functions-worker separately, you have to configure the following parameters.

#### Worker parameters

- `workerId`: The type is string. It is unique across clusters and is used to identify a worker machine.
- `workerHostname`: The hostname of the worker machine.
- `workerPort`: The port that the worker server listens on. Keep it as the default if you don't want to customize it.
- `workerPortTls`: The TLS port that the worker server listens on. Keep it as the default if you don't want to customize it.

#### Function package parameter

- `numFunctionPackageReplicas`: The number of replicas used to store function packages. The default value is `1`.

#### Function metadata parameters

- `pulsarServiceUrl`: The Pulsar service URL for your broker cluster.
- `pulsarWebServiceUrl`: The Pulsar web service URL for your broker cluster.
- `pulsarFunctionsCluster`: Set the value to your Pulsar cluster name (same as the `clusterName` setting in the broker configuration).

If authentication is enabled for your broker cluster, you *should* configure the authentication plugin and parameters for the functions worker to communicate with the brokers (see the sketch after this list).

- `brokerClientAuthenticationEnabled`: Whether to enable the broker client authentication used by function workers to talk to brokers.
- `clientAuthenticationPlugin`: The authentication plugin to be used by the Pulsar client in the worker service.
- `clientAuthenticationParameters`: The authentication parameters to be used by the Pulsar client in the worker service.
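
Putting the parameters above together, a minimal `functions_worker.yml` for a separately deployed worker might look like the following sketch (all values are hypothetical):

```yaml

workerId: worker-1
workerHostname: func-worker-1.example.com
workerPort: 8080
numFunctionPackageReplicas: 2
pulsarServiceUrl: pulsar://broker.example.com:6650
pulsarWebServiceUrl: http://broker.example.com:8080
pulsarFunctionsCluster: my-cluster

```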
#### Security settings

If you want to enable security on functions workers, you *should*:
- [Enable TLS transport encryption](#enable-tls-transport-encryption)
- [Enable Authentication Provider](#enable-authentication-provider)
- [Enable Authorization Provider](#enable-authorization-provider)
- [Enable End-to-End Encryption](#enable-end-to-end-encryption)

##### Enable TLS transport encryption

To enable TLS transport encryption, configure the following settings.

```

useTLS: true
pulsarServiceUrl: pulsar+ssl://localhost:6651/
pulsarWebServiceUrl: https://localhost:8443

tlsEnabled: true
tlsCertificateFilePath: /path/to/functions-worker.cert.pem
tlsKeyFilePath: /path/to/functions-worker.key-pk8.pem
tlsTrustCertsFilePath: /path/to/ca.cert.pem

# The path to trusted certificates used by the Pulsar client to authenticate with Pulsar brokers
brokerClientTrustCertsFilePath: /path/to/ca.cert.pem

```

For details on TLS encryption, refer to [Transport Encryption using TLS](security-tls-transport.md).

##### Enable Authentication Provider

To enable authentication on the functions worker, you need to configure the following settings.

:::note

Substitute the *providers list* with the providers you want to enable.

:::

```

authenticationEnabled: true
authenticationProviders: [ provider1, provider2 ]

```

For the *TLS Authentication* provider, follow the example below to add the necessary settings.
See [TLS Authentication](security-tls-authentication.md) for more details.

```

brokerClientAuthenticationPlugin: org.apache.pulsar.client.impl.auth.AuthenticationTls
brokerClientAuthenticationParameters: tlsCertFile:/path/to/admin.cert.pem,tlsKeyFile:/path/to/admin.key-pk8.pem

authenticationEnabled: true
authenticationProviders: ['org.apache.pulsar.broker.authentication.AuthenticationProviderTls']

```

For the *SASL Authentication* provider, add `saslJaasClientAllowedIds` and `saslJaasBrokerSectionName`
under `properties` if needed.

```

properties:
  saslJaasClientAllowedIds: .*pulsar.*
  saslJaasBrokerSectionName: Broker

```

For the *Token Authentication* provider, add the necessary settings under `properties` if needed.
See [Token Authentication](security-jwt.md) for more details.
Note: key files must be DER-encoded.

```

properties:
  tokenSecretKey: file:///my/secret.key
  # If using public/private keys
  # tokenPublicKey: file:///path/to/public.key

```

##### Enable Authorization Provider

To enable authorization on the functions worker, you need to configure `authorizationEnabled`, `authorizationProvider` and `configurationStoreServers`. The authentication provider connects to `configurationStoreServers` to receive namespace policies.

```yaml

authorizationEnabled: true
authorizationProvider: org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider
configurationStoreServers: <configuration-store-servers>

```

You should also configure a list of superuser roles. The superuser roles are able to access any admin API. The following is a configuration example.

```yaml

superUserRoles:
  - role1
  - role2
  - role3

```

##### Enable End-to-End Encryption

You can use the public and private key pair that the application configures to perform encryption. Only the consumers with a valid key can decrypt the encrypted messages.

To enable end-to-end encryption on the functions worker, you can set it by specifying `--producer-config` in the command line terminal; for more information, refer to [here](security-encryption.md).

The relevant configuration of `CryptoConfig` is included in `ProducerConfig`. The configurable fields of `CryptoConfig` are as follows:

```java

public class CryptoConfig {
    private String cryptoKeyReaderClassName;
    private Map<String, Object> cryptoKeyReaderConfig;

    private String[] encryptionKeys;
    private ProducerCryptoFailureAction producerCryptoFailureAction;

    private ConsumerCryptoFailureAction consumerCryptoFailureAction;
}

```

- `producerCryptoFailureAction`: the action the producer takes if it fails to encrypt the data; one of `FAIL` or `SEND`.
- `consumerCryptoFailureAction`: the action the consumer takes if it fails to decrypt the data; one of `FAIL`, `DISCARD`, or `CONSUME`.
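
For example, a function could be created with encryption settings passed through `--producer-config`. The following is a sketch only, assuming the JSON shape mirrors the `CryptoConfig` fields above; the key names and reader class are hypothetical:

```bash

bin/pulsar-admin functions create \
  --name my-encrypting-function \
  --producer-config '{"cryptoConfig": {"cryptoKeyReaderClassName": "com.company.pulsar.MyCryptoKeyReader", "encryptionKeys": ["myAppKey"], "producerCryptoFailureAction": "FAIL"}}' \
  # Other function configs

```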
#### BookKeeper Authentication

If authentication is enabled on the BookKeeper cluster, you need to configure the following BookKeeper authentication settings:

- `bookkeeperClientAuthenticationPlugin`: the plugin name for BookKeeper client authentication.
- `bookkeeperClientAuthenticationParametersName`: the plugin parameters name for BookKeeper client authentication.
- `bookkeeperClientAuthenticationParameters`: the plugin parameters for BookKeeper client authentication.

### Start Functions-worker

Once you have finished configuring the `functions_worker.yml` configuration file, you can start a `functions-worker` in the background by using [nohup](https://en.wikipedia.org/wiki/Nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:

```bash

bin/pulsar-daemon start functions-worker

```

You can also start the `functions-worker` in the foreground by using the `pulsar` CLI tool:

```bash

bin/pulsar functions-worker

```

### Configure Proxies for Functions-workers

When you are running the `functions-worker` in a separate cluster, the admin REST endpoints are split into two clusters. The `functions`, `function-worker`, `source` and `sink` endpoints are now served by the `functions-worker` cluster, while all the other remaining endpoints are served by the broker cluster. Hence you need to configure your `pulsar-admin` to use the right service URL accordingly.

In order to address this inconvenience, you can start a proxy cluster for routing the admin REST requests accordingly. This gives you one central entry point for your admin service.

If you already have a proxy cluster, continue reading. If you have not set up a proxy cluster before, you can follow the [instructions](http://pulsar.apache.org/docs/en/administration-proxy/) to start proxies.

![assets/functions-worker-separated.png](/assets/functions-worker-separated-proxy.png)

To enable routing of functions-related admin requests to the `functions-worker` in a proxy, you can edit the `proxy.conf` file to modify the following settings:

```conf

functionWorkerWebServiceURL=<pulsar-functions-worker-web-service-url>
functionWorkerWebServiceURLTLS=<pulsar-functions-worker-web-service-url-tls>

```

## Compare the Run-with-Broker and Run-separately modes

As described above, you can run the functions-worker with brokers, or run it separately, and it is more convenient to run functions-workers along with brokers. However, running functions-workers in a separate cluster provides better resource isolation for running functions in `Process` or `Thread` mode.

To determine which mode to use for your cases, refer to the following guidelines.

Use the `Run-with-Broker` mode in the following cases:
- a) if resource isolation is not required when running functions in `Process` or `Thread` mode;
- b) if you configure the functions-worker to run functions on Kubernetes (where the resource isolation problem is addressed by Kubernetes).

Use the `Run-separately` mode in the following cases:
- a) you don't have a Kubernetes cluster;
- b) if you want to run functions and brokers separately.

## Troubleshooting

**Error message: Namespace missing local cluster name in clusters list**

```

Failed to get partitioned topic metadata: org.apache.pulsar.client.api.PulsarClientException$BrokerMetadataException: Namespace missing local cluster name in clusters list: local_cluster=xyz ns=public/functions clusters=[standalone]

```

This error message appears when either of the following cases occurs:
- a) a broker is started with `functionsWorkerEnabled=true`, but `pulsarFunctionsCluster` is not set to the correct cluster in the `conf/functions_worker.yml` file;
- b) a geo-replicated Pulsar cluster is set up with `functionsWorkerEnabled=true`, and while brokers in one cluster work well, brokers in the other cluster do not.
**Workaround**

If any of these cases happens, follow the instructions below to fix the problem:

1. Disable the Functions Worker by setting `functionsWorkerEnabled=false`, and restart the brokers.

2. Get the current clusters list of the `public/functions` namespace.

```bash

bin/pulsar-admin namespaces get-clusters public/functions

```

3. Check if the cluster is in the clusters list. If the cluster is not in the list, add it and update the clusters list.

```bash

bin/pulsar-admin namespaces set-clusters --clusters <existing-clusters>,<new-cluster> public/functions

```

4. After setting the cluster successfully, enable the functions worker by setting `functionsWorkerEnabled=true`.

5. Set the correct cluster name in `pulsarFunctionsCluster` in the `conf/functions_worker.yml` file, and restart the brokers.

diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/getting-started-concepts-and-architecture.md b/site2/website/versioned_docs/version-2.8.0-deprecated/getting-started-concepts-and-architecture.md
deleted file mode 100644
index fe9c3fbc553b2c..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/getting-started-concepts-and-architecture.md
+++ /dev/null
@@ -1,16 +0,0 @@
---
id: concepts-architecture
title: Pulsar concepts and architecture
sidebar_label: "Concepts and architecture"
original_id: concepts-architecture
---

diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/getting-started-docker.md b/site2/website/versioned_docs/version-2.8.0-deprecated/getting-started-docker.md
deleted file mode 100644
index 4f20971d75330c..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/getting-started-docker.md
+++ /dev/null
@@ -1,176 +0,0 @@
---
id: getting-started-docker
title: Set up a standalone Pulsar in Docker
sidebar_label: "Run Pulsar in Docker"
original_id: getting-started-docker
---

For local development and testing, you can run Pulsar in standalone mode on your own machine within a Docker container.

If you have not installed Docker, download the [Community edition](https://www.docker.com/community-edition) and follow the instructions for your OS.

## Start Pulsar in Docker

* For MacOS, Linux, and Windows:

  ```shell

  $ docker run -it \
    -p 6650:6650 \
    -p 8080:8080 \
    --mount source=pulsardata,target=/pulsar/data \
    --mount source=pulsarconf,target=/pulsar/conf \
    apachepulsar/pulsar:@pulsar:version@ \
    bin/pulsar standalone

  ```

A few things to note about this command:
* The data, metadata, and configuration are persisted on Docker volumes so that the container does not start "fresh" every time it is restarted. For details on the volumes, you can use `docker volume inspect <sourcename>`.
* For Docker on Windows, make sure to configure it to use Linux containers.

If you start Pulsar successfully, you will see `INFO`-level log messages like this:

```

2017-08-09 22:34:04,030 - INFO  - [main:WebService@213] - Web Service started at http://127.0.0.1:8080
2017-08-09 22:34:04,038 - INFO  - [main:PulsarService@335] - messaging service is ready, bootstrap service on port=8080, broker url=pulsar://127.0.0.1:6650, cluster=standalone, configs=org.apache.pulsar.broker.ServiceConfiguration@4db60246
...

```

:::tip

When you start a local standalone cluster, a `public/default` namespace is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics).

:::
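
Before moving on, you can confirm the container and its volumes are in place; a quick sketch using the volume name from the `docker run` command above:

```shell

$ docker ps
$ docker volume inspect pulsardata

```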
## Use Pulsar in Docker

Pulsar offers client libraries for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python.md) and [C++](client-libraries-cpp.md). If you're running a local standalone cluster, you can use one of these root URLs to interact with your cluster:

* `pulsar://localhost:6650`
* `http://localhost:8080`

The following example guides you through getting started with Pulsar quickly by using the [Python](client-libraries-python.md) client API.

Install the Pulsar Python client library directly from [PyPI](https://pypi.org/project/pulsar-client/):

```shell

$ pip install pulsar-client

```

### Consume a message

Create a consumer and subscribe to the topic:

```python

import pulsar

client = pulsar.Client('pulsar://localhost:6650')
consumer = client.subscribe('my-topic',
                            subscription_name='my-sub')

while True:
    msg = consumer.receive()
    print("Received message: '%s'" % msg.data())
    consumer.acknowledge(msg)

client.close()

```

### Produce a message

Now start a producer to send some test messages:

```python

import pulsar

client = pulsar.Client('pulsar://localhost:6650')
producer = client.create_producer('my-topic')

for i in range(10):
    producer.send(('hello-pulsar-%d' % i).encode('utf-8'))

client.close()

```

## Get the topic statistics

In Pulsar, you can use REST, Java, or command-line tools to control every aspect of the system. For details on the APIs, refer to [Admin API Overview](admin-api-overview.md).

In the simplest example, you can use curl to probe the stats for a particular topic:

```shell

$ curl http://localhost:8080/admin/v2/persistent/public/default/my-topic/stats | python -m json.tool

```

The output is something like this:

```json

{
  "averageMsgSize": 0.0,
  "msgRateIn": 0.0,
  "msgRateOut": 0.0,
  "msgThroughputIn": 0.0,
  "msgThroughputOut": 0.0,
  "publishers": [
    {
      "address": "/172.17.0.1:35048",
      "averageMsgSize": 0.0,
      "clientVersion": "1.19.0-incubating",
      "connectedSince": "2017-08-09 20:59:34.621+0000",
      "msgRateIn": 0.0,
      "msgThroughputIn": 0.0,
      "producerId": 0,
      "producerName": "standalone-0-1"
    }
  ],
  "replication": {},
  "storageSize": 16,
  "subscriptions": {
    "my-sub": {
      "blockedSubscriptionOnUnackedMsgs": false,
      "consumers": [
        {
          "address": "/172.17.0.1:35064",
          "availablePermits": 996,
          "blockedConsumerOnUnackedMsgs": false,
          "clientVersion": "1.19.0-incubating",
          "connectedSince": "2017-08-09 21:05:39.222+0000",
          "consumerName": "166111",
          "msgRateOut": 0.0,
          "msgRateRedeliver": 0.0,
          "msgThroughputOut": 0.0,
          "unackedMessages": 0
        }
      ],
      "msgBacklog": 0,
      "msgRateExpired": 0.0,
      "msgRateOut": 0.0,
      "msgRateRedeliver": 0.0,
      "msgThroughputOut": 0.0,
      "type": "Exclusive",
      "unackedMessages": 0
    }
  }
}

```

diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/getting-started-helm.md b/site2/website/versioned_docs/version-2.8.0-deprecated/getting-started-helm.md
deleted file mode 100644
index 440087c275c053..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/getting-started-helm.md
+++ /dev/null
@@ -1,441 +0,0 @@
---
id: getting-started-helm
title: Get started in Kubernetes
sidebar_label: "Run Pulsar in Kubernetes"
original_id: getting-started-helm
---

This section guides you through every step of installing and running Apache Pulsar with Helm on Kubernetes quickly, including the following sections:

- Install the
Apache Pulsar on Kubernetes using Helm
- Start and stop Apache Pulsar
- Create topics using `pulsar-admin`
- Produce and consume messages using Pulsar clients
- Monitor Apache Pulsar status with Prometheus and Grafana

For deploying a Pulsar cluster for production usage, read the documentation on [how to configure and install a Pulsar Helm chart](helm-deploy.md).

## Prerequisite

- Kubernetes server 1.14.0+
- kubectl 1.14.0+
- Helm 3.0+

:::tip

For the following steps, step 2 and step 3 are for **developers** and step 4 and step 5 are for **administrators**.

:::

## Step 0: Prepare a Kubernetes cluster

Before installing a Pulsar Helm chart, you have to create a Kubernetes cluster. You can follow [the instructions](helm-prepare.md) to prepare a Kubernetes cluster.

We use [Minikube](https://minikube.sigs.k8s.io/docs/start/) in this quick start guide. To prepare a Kubernetes cluster, follow these steps:

1. Create a Kubernetes cluster on Minikube.

   ```bash

   minikube start --memory=8192 --cpus=4 --kubernetes-version=<k8s-version>

   ```

   The `<k8s-version>` can be any [Kubernetes version supported by your Minikube installation](https://minikube.sigs.k8s.io/docs/reference/configuration/kubernetes/), such as `v1.16.1`.

2. Set `kubectl` to use Minikube.

   ```bash

   kubectl config use-context minikube

   ```

3. To use the [Kubernetes Dashboard](https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/) with the local Kubernetes cluster on Minikube, enter the command below:

   ```bash

   minikube dashboard

   ```

   The command automatically triggers opening a webpage in your browser.

## Step 1: Install Pulsar Helm chart

0. Add the Pulsar charts repo.

   ```bash

   helm repo add apache https://pulsar.apache.org/charts

   ```

   ```bash

   helm repo update

   ```

1. Clone the Pulsar Helm chart repository.

   ```bash

   git clone https://github.com/apache/pulsar-helm-chart
   cd pulsar-helm-chart

   ```

2. Run the script `prepare_helm_release.sh` to create the secrets required for installing the Apache Pulsar Helm chart. The username `pulsar` and password `pulsar` are used for logging into the Grafana dashboard and Pulsar Manager.

   > **NOTE**
   > When running the script, you can use `-n` to specify the Kubernetes namespace where the Pulsar Helm chart is installed, `-k` to define the Pulsar Helm release name, and `-c` to create the Kubernetes namespace. For more information about the script, run `./scripts/pulsar/prepare_helm_release.sh --help`.

   ```bash

   ./scripts/pulsar/prepare_helm_release.sh \
     -n pulsar \
     -k pulsar-mini \
     -c

   ```

3. Use the Pulsar Helm chart to install a Pulsar cluster to Kubernetes.

   > **NOTE**
   > You need to specify `--set initialize=true` when installing Pulsar for the first time. This command installs and starts Apache Pulsar.

   ```bash

   helm install \
     --values examples/values-minikube.yaml \
     --set initialize=true \
     --namespace pulsar \
     pulsar-mini apache/pulsar

   ```

4. Check the status of all pods.

   ```bash

   kubectl get pods -n pulsar

   ```

   If all pods start up successfully, you can see that the `STATUS` changes to `Running` or `Completed`.
   **Output**

   ```bash

   NAME                                         READY   STATUS      RESTARTS   AGE
   pulsar-mini-bookie-0                         1/1     Running     0          9m27s
   pulsar-mini-bookie-init-5gphs                0/1     Completed   0          9m27s
   pulsar-mini-broker-0                         1/1     Running     0          9m27s
   pulsar-mini-grafana-6b7bcc64c7-4tkxd         1/1     Running     0          9m27s
   pulsar-mini-prometheus-5fcf5dd84c-w8mgz      1/1     Running     0          9m27s
   pulsar-mini-proxy-0                          1/1     Running     0          9m27s
   pulsar-mini-pulsar-init-t7cqt                0/1     Completed   0          9m27s
   pulsar-mini-pulsar-manager-9bcbb4d9f-htpcs   1/1     Running     0          9m27s
   pulsar-mini-toolset-0                        1/1     Running     0          9m27s
   pulsar-mini-zookeeper-0                      1/1     Running     0          9m27s

   ```

5. Check the status of all services in the namespace `pulsar`.

   ```bash

   kubectl get services -n pulsar

   ```

   **Output**

   ```bash

   NAME                         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                       AGE
   pulsar-mini-bookie           ClusterIP      None             <none>        3181/TCP,8000/TCP             11m
   pulsar-mini-broker           ClusterIP      None             <none>        8080/TCP,6650/TCP             11m
   pulsar-mini-grafana          LoadBalancer   10.106.141.246   <pending>     3000:31905/TCP                11m
   pulsar-mini-prometheus       ClusterIP      None             <none>        9090/TCP                      11m
   pulsar-mini-proxy            LoadBalancer   10.97.240.109    <pending>     80:32305/TCP,6650:31816/TCP   11m
   pulsar-mini-pulsar-manager   LoadBalancer   10.103.192.175   <pending>     9527:30190/TCP                11m
   pulsar-mini-toolset          ClusterIP      None             <none>        <none>                        11m
   pulsar-mini-zookeeper        ClusterIP      None             <none>        2888/TCP,3888/TCP,2181/TCP    11m

   ```

## Step 2: Use pulsar-admin to create Pulsar tenants/namespaces/topics

`pulsar-admin` is the CLI (Command-Line Interface) tool for Pulsar. In this step, you use `pulsar-admin` to create resources, including tenants, namespaces, and topics.

1. Enter the `toolset` container.

   ```bash

   kubectl exec -it -n pulsar pulsar-mini-toolset-0 -- /bin/bash

   ```

2. In the `toolset` container, create a tenant named `apache`.

   ```bash

   bin/pulsar-admin tenants create apache

   ```

   Then you can list the tenants to see if the tenant is created successfully.

   ```bash

   bin/pulsar-admin tenants list

   ```

   You should see a similar output as below. The tenant `apache` has been successfully created.

   ```bash

   "apache"
   "public"
   "pulsar"

   ```

3. In the `toolset` container, create a namespace named `pulsar` in the tenant `apache`.

   ```bash

   bin/pulsar-admin namespaces create apache/pulsar

   ```

   Then you can list the namespaces of tenant `apache` to see if the namespace is created successfully.

   ```bash

   bin/pulsar-admin namespaces list apache

   ```

   You should see a similar output as below. The namespace `apache/pulsar` has been successfully created.

   ```bash

   "apache/pulsar"

   ```

4. In the `toolset` container, create a topic `test-topic` with `4` partitions in the namespace `apache/pulsar`.

   ```bash

   bin/pulsar-admin topics create-partitioned-topic apache/pulsar/test-topic -p 4

   ```

5. In the `toolset` container, list all the partitioned topics in the namespace `apache/pulsar`.

   ```bash

   bin/pulsar-admin topics list-partitioned-topics apache/pulsar

   ```

   Then you can see all the partitioned topics in the namespace `apache/pulsar`.

   ```bash

   "persistent://apache/pulsar/test-topic"

   ```

## Step 3: Use Pulsar client to produce and consume messages

You can use the Pulsar client to create producers and consumers that produce and consume messages.

By default, the Pulsar Helm chart exposes the Pulsar cluster through a Kubernetes `LoadBalancer`. In Minikube, you can use the following command to check the proxy service.
-
-```bash
-
-kubectl get services -n pulsar | grep pulsar-mini-proxy
-
-```
-
-You will see output similar to the following.
-
-```bash
-
-pulsar-mini-proxy   LoadBalancer   10.97.240.109   <pending>   80:32305/TCP,6650:31816/TCP   28m
-
-```
-
-This output shows the node ports to which the Pulsar cluster's binary port and HTTP port are mapped. The port after `80:` is the HTTP port, while the port after `6650:` is the binary port.
-
-Then you can find the IP address and exposed ports of your Minikube server by running the following command.
-
-```bash
-
-minikube service pulsar-mini-proxy -n pulsar
-
-```
-
-**Output**
-
-```bash
-
-|-----------|-------------------|-------------|-------------------------|
-| NAMESPACE |       NAME        | TARGET PORT |           URL           |
-|-----------|-------------------|-------------|-------------------------|
-| pulsar    | pulsar-mini-proxy | http/80     | http://172.17.0.4:32305 |
-|           |                   | pulsar/6650 | http://172.17.0.4:31816 |
-|-----------|-------------------|-------------|-------------------------|
-🏃 Starting tunnel for service pulsar-mini-proxy.
-|-----------|-------------------|-------------|------------------------|
-| NAMESPACE |       NAME        | TARGET PORT |          URL           |
-|-----------|-------------------|-------------|------------------------|
-| pulsar    | pulsar-mini-proxy |             | http://127.0.0.1:61853 |
-|           |                   |             | http://127.0.0.1:61854 |
-|-----------|-------------------|-------------|------------------------|
-
-```
-
-At this point, you can get the service URLs for your Pulsar client to connect to. Here are URL examples:
-
-```
-
-webServiceUrl=http://127.0.0.1:61853/
-brokerServiceUrl=pulsar://127.0.0.1:61854/
-
-```
-
-Then you can proceed with the following steps:
-
-1. Download the Apache Pulsar tarball from the [downloads page](https://pulsar.apache.org/download/).
-
-2. Decompress the downloaded tarball.
-
-   ```bash
-
-   tar -xf <file-name>.tar.gz
-
-   ```
-
-3. Expose `PULSAR_HOME`.
-
-   (1) Enter the directory of the decompressed download file.
-
-   (2) Expose `PULSAR_HOME` as the environment variable.
-
-   ```bash
-
-   export PULSAR_HOME=$(pwd)
-
-   ```
-
-4. Configure the Pulsar client.
-
-   In the `${PULSAR_HOME}/conf/client.conf` file, replace `webServiceUrl` and `brokerServiceUrl` with the service URLs you get from the above steps.
-
-5. Create a subscription to consume messages from `apache/pulsar/test-topic`.
-
-   ```bash
-
-   bin/pulsar-client consume -s sub apache/pulsar/test-topic -n 0
-
-   ```
-
-6. Open a new terminal. In the new terminal, create a producer and send 10 messages to the `test-topic` topic.
-
-   ```bash
-
-   bin/pulsar-client produce apache/pulsar/test-topic -m "---------hello apache pulsar-------" -n 10
-
-   ```
-
-7. Verify the results.
-
-   - From the producer side
-
-     **Output**
-
-     The messages have been produced successfully.
-
-     ```bash
-
-     18:15:15.489 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 10 messages successfully produced
-
-     ```
-
-   - From the consumer side
-
-     **Output**
-
-     At the same time, you can receive the messages as below.
-
-     ```bash
-
-     ----- got message -----
-     ---------hello apache pulsar-------
-     ----- got message -----
-     ---------hello apache pulsar-------
-     ----- got message -----
-     ---------hello apache pulsar-------
-     ----- got message -----
-     ---------hello apache pulsar-------
-     ----- got message -----
-     ---------hello apache pulsar-------
-     ----- got message -----
-     ---------hello apache pulsar-------
-     ----- got message -----
-     ---------hello apache pulsar-------
-     ----- got message -----
-     ---------hello apache pulsar-------
-     ----- got message -----
-     ---------hello apache pulsar-------
-     ----- got message -----
-     ---------hello apache pulsar-------
-
-     ```
-
-## Step 4: Use Pulsar Manager to manage the cluster
-
-[Pulsar Manager](administration-pulsar-manager.md) is a web-based GUI management tool for managing and monitoring Pulsar.
-
-1. By default, the `Pulsar Manager` is exposed as a separate `LoadBalancer`. You can open the Pulsar Manager UI using the following command:
-
-   ```bash
-
-   minikube service -n pulsar pulsar-mini-pulsar-manager
-
-   ```
-
-2. The Pulsar Manager UI opens in your browser. You can use the username `pulsar` and password `pulsar` to log into Pulsar Manager.
-
-3. In the Pulsar Manager UI, you can create an environment.
-
-   - Click the `New Environment` button in the top-left corner.
-   - Type `pulsar-mini` for the field `Environment Name` in the popup window.
-   - Type `http://pulsar-mini-broker:8080` for the field `Service URL` in the popup window.
-   - Click the `Confirm` button in the popup window.
-
-4. After successfully creating an environment, you are redirected to the `tenants` page of that environment. Then you can create `tenants`, `namespaces` and `topics` using the Pulsar Manager.
-
-## Step 5: Use Prometheus and Grafana to monitor cluster
-
-Grafana is an open-source visualization tool that can be used for visualizing time-series data in dashboards.
-
-1. By default, Grafana is exposed as a separate `LoadBalancer`. You can open the Grafana UI using the following command:
-
-   ```bash
-
-   minikube service pulsar-mini-grafana -n pulsar
-
-   ```
-
-2. The Grafana UI opens in your browser. You can use the username `pulsar` and password `pulsar` to log into the Grafana Dashboard.
-
-3. You can view dashboards for different components of a Pulsar cluster.
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/getting-started-pulsar.md b/site2/website/versioned_docs/version-2.8.0-deprecated/getting-started-pulsar.md
deleted file mode 100644
index 752590f57b5585..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/getting-started-pulsar.md
+++ /dev/null
@@ -1,72 +0,0 @@
----
-id: pulsar-2.0
-title: Pulsar 2.0
-sidebar_label: "Pulsar 2.0"
-original_id: pulsar-2.0
----
-
-Pulsar 2.0 is a major new release for Pulsar that brings some bold changes to the platform, including [simplified topic names](#topic-names), the addition of the [Pulsar Functions](functions-overview.md) feature, some terminology changes, and more.
-
-## New features in Pulsar 2.0
-
-Feature | Description
-:-------|:-----------
-[Pulsar Functions](functions-overview.md) | A lightweight compute option for Pulsar
-
-## Major changes
-
-There are a few major changes that you should be aware of, as they may significantly impact your day-to-day usage.
-
-### Properties versus tenants
-
-Previously, Pulsar had a concept of properties. A property is essentially the exact same thing as a tenant, so the "property" terminology has been removed in version 2.0.
The [`pulsar-admin properties`](reference-pulsar-admin.md#pulsar-admin) command-line interface, for example, has been replaced with the [`pulsar-admin tenants`](reference-pulsar-admin.md#pulsar-admin-tenants) interface. In some cases, the properties terminology is still used, but it is now considered deprecated and will be removed entirely in a future release.
-
-### Topic names
-
-Prior to version 2.0, *all* Pulsar topics had the following form:
-
-```http
-
-{persistent|non-persistent}://property/cluster/namespace/topic
-
-```
-
-Several important changes have been made in Pulsar 2.0:
-
-* There is no longer a [cluster component](#no-cluster-component)
-* Properties have been [renamed to tenants](#tenants)
-* You can use a [flexible](#flexible-topic-naming) naming system to shorten many topic names
-* `/` is no longer allowed in topic names
-
-#### No cluster component
-
-The cluster component has been removed from topic names. Thus, all topic names now have the following form:
-
-```http
-
-{persistent|non-persistent}://tenant/namespace/topic
-
-```
-
-> Existing topics that use the legacy name format will continue to work without any change, and there are no plans to change that.
-
-
-#### Flexible topic naming
-
-All topic names in Pulsar 2.0 internally have the form shown [above](#no-cluster-component) but you can now use shorthand names in many cases (for the sake of simplicity). The flexible naming system stems from the fact that there is now a default topic type, tenant, and namespace:
-
-Topic aspect | Default
-:------------|:-------
-topic type | `persistent`
-tenant | `public`
-namespace | `default`
-
-The table below shows some example topic name translations that use implicit defaults:
-
-Input topic name | Translated topic name
-:----------------|:---------------------
-`my-topic` | `persistent://public/default/my-topic`
-`my-tenant/my-namespace/my-topic` | `persistent://my-tenant/my-namespace/my-topic`
-
-> For [non-persistent topics](concepts-messaging.md#non-persistent-topics) you'll need to continue to specify the entire topic name, as the default-based rules for persistent topic names don't apply. Thus you cannot use a shorthand name like `non-persistent://my-topic` and would need to use `non-persistent://public/default/my-topic` instead.
-
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/getting-started-standalone.md b/site2/website/versioned_docs/version-2.8.0-deprecated/getting-started-standalone.md
deleted file mode 100644
index cea47efd08d4b3..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/getting-started-standalone.md
+++ /dev/null
@@ -1,271 +0,0 @@
----
-id: getting-started-standalone
-title: Set up a standalone Pulsar locally
-sidebar_label: "Run Pulsar locally"
-original_id: getting-started-standalone
----
-
-For local development and testing, you can run Pulsar in standalone mode on your machine. The standalone mode includes a Pulsar broker, the necessary ZooKeeper and BookKeeper components running inside of a single Java Virtual Machine (JVM) process.
-
-> #### Pulsar in production?
-> If you're looking to run a full production Pulsar installation, see the [Deploying a Pulsar instance](deploy-bare-metal.md) guide.
-
-## Install Pulsar standalone
-
-This tutorial guides you through every step of the installation process.
-
-### System requirements
-
-Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install a 64-bit JRE/JDK 8 or later version; JRE/JDK 11 is recommended.
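-
-As a quick sanity check before you continue, you can print the installed Java version (a minimal example; it assumes `java` is already on your `PATH`):
-
-```bash
-
-# Prints the JRE/JDK version; the major version should be 8 or higher.
-java -version
-
-```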
-
-:::tip
-
-By default, Pulsar allocates 2G JVM heap memory to start. It can be changed in `conf/pulsar_env.sh` file under `PULSAR_MEM`. These are extra options passed into the JVM.
-
-:::
-
-:::note
-
-Broker is only supported on 64-bit JVM.
-
-:::
-
-### Install Pulsar using binary release
-
-To get started with Pulsar, download a binary tarball release in one of the following ways:
-
-* download from the Apache mirror (Pulsar @pulsar:version@ binary release)
-
-* download from the Pulsar [downloads page](pulsar:download_page_url)
-
-* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
-
-* use [wget](https://www.gnu.org/software/wget):
-
-  ```shell
-
-  $ wget pulsar:binary_release_url
-
-  ```
-
-After you download the tarball, untar it and use the `cd` command to navigate to the resulting directory:
-
-```bash
-
-$ tar xvfz apache-pulsar-@pulsar:version@-bin.tar.gz
-$ cd apache-pulsar-@pulsar:version@
-
-```
-
-#### What your package contains
-
-The Pulsar binary package initially contains the following directories:
-
-Directory | Contains
-:---------|:--------
-`bin` | Pulsar's command-line tools, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/).
-`conf` | Configuration files for Pulsar, including [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more.
-`examples` | A Java JAR file containing [Pulsar Functions](functions-overview.md) example.
-`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files used by Pulsar.
-`licenses` | License files, in the `.txt` form, for various components of the Pulsar [codebase](https://github.com/apache/pulsar).
-
-These directories are created once you begin running Pulsar.
-
-Directory | Contains
-:---------|:--------
-`data` | The data storage directory used by ZooKeeper and BookKeeper.
-`instances` | Artifacts created for [Pulsar Functions](functions-overview.md).
-`logs` | Logs created by the installation.
-
-:::tip
-
-If you want to use builtin connectors and tiered storage offloaders, you can install them according to the following instructions:
-* [Install builtin connectors (optional)](#install-builtin-connectors-optional)
-* [Install tiered storage offloaders (optional)](#install-tiered-storage-offloaders-optional)
-Otherwise, skip this step and perform the next step [Start Pulsar standalone](#start-pulsar-standalone). Pulsar can be successfully installed without installing builtin connectors and tiered storage offloaders.
-
-:::
-
-### Install builtin connectors (optional)
-
-Since `2.1.0-incubating` release, Pulsar releases a separate binary distribution, containing all the `builtin` connectors.
-To enable those `builtin` connectors, you can download the connectors tarball release in one of the following ways:
-
-* download from the Apache mirror Pulsar IO Connectors @pulsar:version@ release
-
-* download from the Pulsar [downloads page](pulsar:download_page_url)
-
-* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
-
-* use [wget](https://www.gnu.org/software/wget):
-
-  ```shell
-
-  $ wget pulsar:connector_release_url/{connector}-@pulsar:version@.nar
-
-  ```
-
-After you download the NAR file, copy the file to the `connectors` directory in the pulsar directory.
-For example, if you download the `pulsar-io-aerospike-@pulsar:version@.nar` connector file, enter the following commands: - -```bash - -$ mkdir connectors -$ mv pulsar-io-aerospike-@pulsar:version@.nar connectors - -$ ls connectors -pulsar-io-aerospike-@pulsar:version@.nar -... - -``` - -:::note - -* If you are running Pulsar in a bare metal cluster, make sure `connectors` tarball is unzipped in every pulsar directory of the broker -(or in every pulsar directory of function-worker if you are running a separate worker cluster for Pulsar Functions). -* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DC/OS](https://dcos.io/)), -you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled [all builtin connectors](io-overview.md#working-with-connectors). - -::: - -### Install tiered storage offloaders (optional) - -:::tip - -Since `2.2.0` release, Pulsar releases a separate binary distribution, containing the tiered storage offloaders. -To enable tiered storage feature, follow the instructions below; otherwise skip this section. - -::: - -To get started with [tiered storage offloaders](concepts-tiered-storage.md), you need to download the offloaders tarball release on every broker node in one of the following ways: - -* download from the Apache mirror Pulsar Tiered Storage Offloaders @pulsar:version@ release - -* download from the Pulsar [downloads page](pulsar:download_page_url) - -* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) - -* use [wget](https://www.gnu.org/software/wget): - - ```shell - - $ wget pulsar:offloader_release_url - - ``` - -After you download the tarball, untar the offloaders package and copy the offloaders as `offloaders` -in the pulsar directory: - -```bash - -$ tar xvfz apache-pulsar-offloaders-@pulsar:version@-bin.tar.gz - -// you will find a directory named `apache-pulsar-offloaders-@pulsar:version@` in the pulsar directory -// then copy the offloaders - -$ mv apache-pulsar-offloaders-@pulsar:version@/offloaders offloaders - -$ ls offloaders -tiered-storage-jcloud-@pulsar:version@.nar - -``` - -For more information on how to configure tiered storage, see [Tiered storage cookbook](cookbooks-tiered-storage.md). - -:::note - -* If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's pulsar directory. -* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DC/OS](https://dcos.io/)), -you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - -::: - -## Start Pulsar standalone - -Once you have an up-to-date local copy of the release, you can start a local cluster using the [`pulsar`](reference-cli-tools.md#pulsar) command, which is stored in the `bin` directory, and specifying that you want to start Pulsar in standalone mode. 
-
-```bash
-
-$ bin/pulsar standalone
-
-```
-
-If you have started Pulsar successfully, you will see `INFO`-level log messages like this:
-
-```bash
-
-2017-06-01 14:46:29,192 - INFO - [main:WebSocketService@95] - Configuration Store cache started
-2017-06-01 14:46:29,192 - INFO - [main:AuthenticationService@61] - Authentication is disabled
-2017-06-01 14:46:29,192 - INFO - [main:WebSocketService@108] - Pulsar WebSocket Service started
-
-```
-
-:::tip
-
-* The service is running on your terminal, which is under your direct control. If you need to run other commands, open a new terminal window.
-
-:::
-
-You can also run the service as a background process using the `pulsar-daemon start standalone` command. For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon).
-
-> * By default, there is no encryption, authentication, or authorization configured. Apache Pulsar can be accessed from a remote server without any authorization. Please check the [Security Overview](security-overview.md) document to secure your deployment.
->
-> * When you start a local standalone cluster, a `public/default` [namespace](concepts-messaging.md#namespaces) is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics).
-
-## Use Pulsar standalone
-
-Pulsar provides a CLI tool called [`pulsar-client`](reference-cli-tools.md#pulsar-client). The pulsar-client tool enables you to consume and produce messages to a Pulsar topic in a running cluster.
-
-### Consume a message
-
-The following command consumes a message with the subscription name `first-subscription` from the `my-topic` topic:
-
-```bash
-
-$ bin/pulsar-client consume my-topic -s "first-subscription"
-
-```
-
-If the message has been successfully consumed, you will see a confirmation like the following in the `pulsar-client` logs:
-
-```
-
-09:56:55.566 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.MultiTopicsConsumerImpl - [TopicsConsumerFakeTopicNamee2df9] [first-subscription] Success subscribe new topic my-topic in topics consumer, partitions: 4, allTopicPartitionsNumber: 4
-
-```
-
-:::tip
-
-As you may have noticed, we did not explicitly create the `my-topic` topic from which we consume the message. When you consume a message from a topic that does not yet exist, Pulsar creates that topic for you automatically. Producing a message to a topic that does not exist will automatically create that topic for you as well.
-
-:::
-
-### Produce a message
-
-The following command produces a message saying `hello-pulsar` to the `my-topic` topic:
-
-```bash
-
-$ bin/pulsar-client produce my-topic --messages "hello-pulsar"
-
-```
-
-If the message has been successfully published to the topic, you will see a confirmation like the following in the `pulsar-client` logs:
-
-```
-
-13:09:39.356 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully produced
-
-```
-
-## Stop Pulsar standalone
-
-Press `Ctrl+C` to stop a local standalone Pulsar.
-
-:::tip
-
-If the service runs as a background process using the `pulsar-daemon start standalone` command, then use the `pulsar-daemon stop standalone` command to stop the service.
-For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon).
-
-:::
-
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/helm-deploy.md b/site2/website/versioned_docs/version-2.8.0-deprecated/helm-deploy.md
deleted file mode 100644
index 93709f7091c1ea..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/helm-deploy.md
+++ /dev/null
@@ -1,434 +0,0 @@
----
-id: helm-deploy
-title: Deploy Pulsar cluster using Helm
-sidebar_label: "Deployment"
-original_id: helm-deploy
----
-
-Before running `helm install`, you need to decide how to run Pulsar.
-Options can be specified using Helm's `--set option.name=value` command line option.
-
-## Select configuration options
-
-In each section, collect the options you want to combine and use with the `helm install` command.
-
-### Kubernetes namespace
-
-By default, the Pulsar Helm chart is installed to a namespace called `pulsar`.
-
-```yaml
-
-namespace: pulsar
-
-```
-
-To install the Pulsar Helm chart into a different Kubernetes namespace, you can include this option in the `helm install` command.
-
-```bash
-
---set namespace=<different-k8s-namespace>
-
-```
-
-By default, the Pulsar Helm chart doesn't create the namespace.
-
-```yaml
-
-namespaceCreate: false
-
-```
-
-To use the Pulsar Helm chart to create the Kubernetes namespace automatically, you can include this option in the `helm install` command.
-
-```bash
-
---set namespaceCreate=true
-
-```
-
-### Persistence
-
-By default, the Pulsar Helm chart creates Volume Claims with the expectation that a dynamic provisioner creates the underlying Persistent Volumes.
-
-```yaml
-
-volumes:
-  persistence: true
-  # configure the components to use local persistent volume
-  # the local provisioner should be installed prior to enable local persistent volume
-  local_storage: false
-
-```
-
-To use local persistent volumes as the persistent storage for Helm release, you can install the [local storage provisioner](#install-local-storage-provisioner) and include the following option in the `helm install` command.
-
-```bash
-
---set volumes.local_storage=true
-
-```
-
-:::note
-
-Before installing the production instance of Pulsar, ensure you plan the storage settings to avoid extra storage migration work, because after the initial installation you must edit Kubernetes objects manually if you want to change storage settings.
-
-:::
-
-The Pulsar Helm chart is designed for production use. To use the Pulsar Helm chart in a development environment (such as Minikube), you can disable persistence by including this option in your `helm install` command.
-
-```bash
-
---set volumes.persistence=false
-
-```
-
-### Affinity
-
-By default, `anti-affinity` is enabled to ensure pods of the same component can run on different nodes.
-
-```yaml
-
-affinity:
-  anti_affinity: true
-
-```
-
-To use the Pulsar Helm chart in a development environment (such as Minikube), you can disable `anti-affinity` by including this option in your `helm install` command.
-
-```bash
-
---set affinity.anti_affinity=false
-
-```
-
-### Components
-
-The Pulsar Helm chart is designed for production usage. It deploys a production-ready Pulsar cluster, including Pulsar core components and monitoring components.
-
-You can customize the components to be deployed by turning on/off individual components.
-
-```yaml
-
-## Components
-##
-## Control what components of Apache Pulsar to deploy for the cluster
-components:
-  # zookeeper
-  zookeeper: true
-  # bookkeeper
-  bookkeeper: true
-  # bookkeeper - autorecovery
-  autorecovery: true
-  # broker
-  broker: true
-  # functions
-  functions: true
-  # proxy
-  proxy: true
-  # toolset
-  toolset: true
-  # pulsar manager
-  pulsar_manager: true
-
-## Monitoring Components
-##
-## Control what components of the monitoring stack to deploy for the cluster
-monitoring:
-  # monitoring - prometheus
-  prometheus: true
-  # monitoring - grafana
-  grafana: true
-
-```
-
-### Docker images
-
-The Pulsar Helm chart is designed to enable controlled upgrades, so it can configure independent image versions for components. You can customize the image for each component individually.
-
-```yaml
-
-## Images
-##
-## Control what images to use for each component
-images:
-  zookeeper:
-    repository: apachepulsar/pulsar-all
-    tag: 2.5.0
-    pullPolicy: IfNotPresent
-  bookie:
-    repository: apachepulsar/pulsar-all
-    tag: 2.5.0
-    pullPolicy: IfNotPresent
-  autorecovery:
-    repository: apachepulsar/pulsar-all
-    tag: 2.5.0
-    pullPolicy: IfNotPresent
-  broker:
-    repository: apachepulsar/pulsar-all
-    tag: 2.5.0
-    pullPolicy: IfNotPresent
-  proxy:
-    repository: apachepulsar/pulsar-all
-    tag: 2.5.0
-    pullPolicy: IfNotPresent
-  functions:
-    repository: apachepulsar/pulsar-all
-    tag: 2.5.0
-  prometheus:
-    repository: prom/prometheus
-    tag: v1.6.3
-    pullPolicy: IfNotPresent
-  grafana:
-    repository: streamnative/apache-pulsar-grafana-dashboard-k8s
-    tag: 0.0.4
-    pullPolicy: IfNotPresent
-  pulsar_manager:
-    repository: apachepulsar/pulsar-manager
-    tag: v0.1.0
-    pullPolicy: IfNotPresent
-    hasCommand: false
-
-```
-
-### TLS
-
-The Pulsar Helm chart can be configured to enable TLS (Transport Layer Security) to protect all the traffic between components. Before enabling TLS, you have to provision TLS certificates for the required components.
-
-#### Provision TLS certificates using cert-manager
-
-To use the `cert-manager` to provision the TLS certificates, you have to install the [cert-manager](#install-cert-manager) before installing the Pulsar Helm chart. After successfully installing the cert-manager, you can set `certs.internal_issuer.enabled` to `true`. Therefore, the Pulsar Helm chart can use the `cert-manager` to generate `selfsigning` TLS certificates for the configured components.
-
-```yaml
-
-certs:
-  internal_issuer:
-    enabled: false
-    component: internal-cert-issuer
-    type: selfsigning
-
-```
-
-You can also customize the generated TLS certificates by configuring the fields as follows.
-
-```yaml
-
-tls:
-  # common settings for generating certs
-  common:
-    # 90d
-    duration: 2160h
-    # 15d
-    renewBefore: 360h
-    organization:
-      - pulsar
-    keySize: 4096
-    keyAlgorithm: rsa
-    keyEncoding: pkcs8
-
-```
-
-#### Enable TLS
-
-After installing the `cert-manager`, you can set `tls.enabled` to `true` to enable TLS encryption for the entire cluster.
-
-```yaml
-
-tls:
-  enabled: false
-
-```
-
-You can also configure whether to enable TLS encryption for each component individually.
-
-```yaml
-
-tls:
-  # settings for generating certs for proxy
-  proxy:
-    enabled: false
-    cert_name: tls-proxy
-  # settings for generating certs for broker
-  broker:
-    enabled: false
-    cert_name: tls-broker
-  # settings for generating certs for bookies
-  bookie:
-    enabled: false
-    cert_name: tls-bookie
-  # settings for generating certs for zookeeper
-  zookeeper:
-    enabled: false
-    cert_name: tls-zookeeper
-  # settings for generating certs for recovery
-  autorecovery:
-    cert_name: tls-recovery
-  # settings for generating certs for toolset
-  toolset:
-    cert_name: tls-toolset
-
-```
-
-### Authentication
-
-By default, authentication is disabled. You can set `auth.authentication.enabled` to `true` to enable authentication.
-Currently, the Pulsar Helm chart only supports the JWT authentication provider. You can set `auth.authentication.provider` to `jwt` to use the JWT authentication provider.
-
-```yaml
-
-# Enable or disable broker authentication and authorization.
-auth:
-  authentication:
-    enabled: false
-    provider: "jwt"
-    jwt:
-      # Enable JWT authentication
-      # If the token is generated by a secret key, set the usingSecretKey as true.
-      # If the token is generated by a private key, set the usingSecretKey as false.
-      usingSecretKey: false
-  superUsers:
-    # broker to broker communication
-    broker: "broker-admin"
-    # proxy to broker communication
-    proxy: "proxy-admin"
-    # pulsar-admin client to broker/proxy communication
-    client: "admin"
-
-```
-
-To enable authentication, you can run [prepare helm release](#prepare-the-helm-release) to generate token secret keys and tokens for three super users specified in the `auth.superUsers` field. The generated token keys and super user tokens are uploaded and stored as Kubernetes secrets prefixed with `<pulsar-release-name>-token-`. You can use the following command to find those secrets.
-
-```bash
-
-kubectl get secrets -n <k8s-namespace>
-
-```
-
-### Authorization
-
-By default, authorization is disabled. Authorization can be enabled only when authentication is enabled.
-
-```yaml
-
-auth:
-  authorization:
-    enabled: false
-
-```
-
-To enable authorization, you can include this option in the `helm install` command.
-
-```bash
-
---set auth.authorization.enabled=true
-
-```
-
-### CPU and RAM resource requirements
-
-By default, the resource requests and the number of replicas for the Pulsar components in the Pulsar Helm chart are adequate for a small production deployment. If you deploy a non-production instance, you can reduce the defaults to fit into a smaller cluster.
-
-Once you have all of your configuration options collected, you can install dependent charts before installing the Pulsar Helm chart.
-
-## Install dependent charts
-
-### Install local storage provisioner
-
-To use local persistent volumes as the persistent storage, you need to install a storage provisioner for [local persistent volumes](https://kubernetes.io/blog/2019/04/04/kubernetes-1.14-local-persistent-volumes-ga/).
-
-One of the easiest ways to get started is to use the local storage provisioner provided along with the Pulsar Helm chart.
-
-```
-
-helm repo add streamnative https://charts.streamnative.io
-helm repo update
-helm install pulsar-storage-provisioner streamnative/local-storage-provisioner
-
-```
-
-### Install cert-manager
-
-The Pulsar Helm chart uses the [cert-manager](https://github.com/jetstack/cert-manager) to provision and manage TLS certificates automatically. To enable TLS encryption for brokers or proxies, you need to install the cert-manager in advance.
-
-For details about how to install the cert-manager, follow the [official instructions](https://cert-manager.io/docs/installation/kubernetes/#installing-with-helm).
-
-Alternatively, we provide a bash script [install-cert-manager.sh](https://github.com/apache/pulsar-helm-chart/blob/master/scripts/cert-manager/install-cert-manager.sh) to install a cert-manager release to the namespace `cert-manager`.
-
-```bash
-
-git clone https://github.com/apache/pulsar-helm-chart
-cd pulsar-helm-chart
-./scripts/cert-manager/install-cert-manager.sh
-
-```
-
-## Prepare Helm release
-
-Once you have installed all the dependent charts and collected all of your configuration options, you can run [prepare_helm_release.sh](https://github.com/apache/pulsar-helm-chart/blob/master/scripts/pulsar/prepare_helm_release.sh) to prepare the Helm release.
-
-```bash
-
-git clone https://github.com/apache/pulsar-helm-chart
-cd pulsar-helm-chart
-./scripts/pulsar/prepare_helm_release.sh -n <k8s-namespace> -k <pulsar-release-name>
-
-```
-
-The `prepare_helm_release` creates the following resources:
-
-- A Kubernetes namespace for installing the Pulsar release
-- JWT secret keys and tokens for three super users: `broker-admin`, `proxy-admin`, and `admin`. By default, it generates an asymmetric public/private key pair. You can choose to generate a symmetric secret key by specifying `--symmetric`.
-  - `proxy-admin` role is used for proxies to communicate to brokers.
-  - `broker-admin` role is used for inter-broker communications.
-  - `admin` role is used by the admin tools.
-
-## Deploy Pulsar cluster using Helm
-
-Once you have finished the following three things, you can install a Helm release.
-
-- Collect all of your configuration options.
-- Install dependent charts.
-- Prepare the Helm release.
-
-In this example, we name our Helm release `pulsar`.
-
-```bash
-
-helm repo add apache https://pulsar.apache.org/charts
-helm repo update
-helm install pulsar apache/pulsar \
-    --timeout 10m \
-    --set initialize=true \
-    --set [your configuration options]
-
-```
-
-:::note
-
-For the first deployment, add `--set initialize=true` option to initialize bookie and Pulsar cluster metadata.
-
-:::
-
-You can also use the `--version <installation version>` option if you want to install a specific version of the Pulsar Helm chart.
-
-## Monitor deployment
-
-A list of installed resources is output once the Pulsar cluster is deployed. This may take 5-10 minutes.
-
-The status of the deployment can be checked by running the `helm status pulsar` command, which can also be done while the deployment is taking place if you run the command in another terminal.
-
-## Access Pulsar cluster
-
-The default values will create a `ClusterIP` for the following resources, which you can use to interact with the cluster.
-
-- Proxy: You can use the IP address to produce and consume messages to the installed Pulsar cluster.
-- Pulsar Manager: You can access the Pulsar Manager UI at `http://<cluster-ip>:9527`.
-- Grafana Dashboard: You can access the Grafana dashboard at `http://<cluster-ip>:3000`.
-
-To find the IP addresses of those components, run the following command:
-
-```bash
-
-kubectl get service -n <k8s-namespace>
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/helm-install.md b/site2/website/versioned_docs/version-2.8.0-deprecated/helm-install.md
deleted file mode 100644
index 1f4d5eb69d5ddd..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/helm-install.md
+++ /dev/null
@@ -1,44 +0,0 @@
----
-id: helm-install
-title: Install Apache Pulsar using Helm
-sidebar_label: "Install"
-original_id: helm-install
----
-
-Install Apache Pulsar on Kubernetes with the official Pulsar Helm chart.
-
-## Requirements
-
-To deploy Apache Pulsar on Kubernetes, the following are required.
-
-- kubectl 1.14 or higher, compatible with your cluster ([+/- 1 minor release from your cluster](https://kubernetes.io/docs/tasks/tools/install-kubectl/#before-you-begin))
-- Helm v3 (3.0.2 or higher)
-- A Kubernetes cluster, version 1.14 or higher
-
-## Environment setup
-
-Before deploying Pulsar, you need to prepare your environment.
-
-### Tools
-
-Install [`helm`](helm-tools.md) and [`kubectl`](helm-tools.md) on your computer.
-
-## Cloud cluster preparation
-
-:::note
-
-Kubernetes 1.14 or higher is required.
-
-:::
-
-To create and connect to the Kubernetes cluster, follow the instructions:
-
-- [Google Kubernetes Engine](helm-prepare.md#google-kubernetes-engine)
-
-## Pulsar deployment
-
-Once the environment is set up and configuration is generated, you can now proceed to the [deployment of Pulsar](helm-deploy.md).
-
-## Pulsar upgrade
-
-To upgrade an existing Kubernetes installation, follow the [upgrade documentation](helm-upgrade.md).
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/helm-overview.md b/site2/website/versioned_docs/version-2.8.0-deprecated/helm-overview.md
deleted file mode 100644
index 385d535e319b65..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/helm-overview.md
+++ /dev/null
@@ -1,104 +0,0 @@
----
-id: helm-overview
-title: Apache Pulsar Helm Chart
-sidebar_label: "Overview"
-original_id: helm-overview
----
-
-This is the officially supported Helm chart to install Apache Pulsar in a cloud-native environment. It was enhanced based on StreamNative's [Helm Chart](https://github.com/streamnative/charts).
-
-## Introduction
-
-The Apache Pulsar Helm chart is one of the most convenient ways to operate Pulsar on Kubernetes. This Pulsar Helm chart contains all the required components to get started and can scale to large deployments.
-
-This chart includes all the components for a complete experience, but each part can be configured to be installed separately.
-
-- Pulsar core components:
-  - ZooKeeper
-  - Bookies
-  - Brokers
-  - Function workers
-  - Proxies
-- Control Center:
-  - Pulsar Manager
-  - Prometheus
-  - Grafana
-
-It includes support for:
-
-- Security
-  - Automatically provisioned TLS certificates, using [Jetstack](https://www.jetstack.io/)'s [cert-manager](https://cert-manager.io/docs/)
-    - self-signed
-    - [Let's Encrypt](https://letsencrypt.org/)
-  - TLS Encryption
-    - Proxy
-    - Broker
-    - Toolset
-    - Bookie
-    - ZooKeeper
-  - Authentication
-    - JWT
-  - Authorization
-- Storage
-  - Non-persistence storage
-  - Persistence volume
-  - Local persistent volumes
-- Functions
-  - Kubernetes Runtime
-  - Process Runtime
-  - Thread Runtime
-- Operations
-  - Independent image versions for all components, enabling controlled upgrades
-
-## Pulsar Helm chart quick start
-
-To get up and running with these charts as fast as possible, in a **non-production** use case, we provide a [quick start guide](getting-started-helm.md) for Proof of Concept (PoC) deployments.
-
-This guide walks the user through deploying these charts with default values and features, but *does not* meet production-ready requirements. To deploy these charts into production under sustained load, follow the complete [Installation Guide](helm-install.md).
-
-## Troubleshooting
-
-We have done our best to make these charts as seamless as possible. Occasionally, issues arise that are outside of our control. We have collected tips and tricks for troubleshooting common issues. Please check them first before raising an [issue](https://github.com/apache/pulsar/issues/new/choose), and feel free to add to them by raising a [Pull Request](https://github.com/apache/pulsar/compare).
-
-## Installation
-
-The Apache Pulsar Helm chart contains all required dependencies.
-
-If you deploy a PoC for testing, we strongly suggest you follow our [Quick Start Guide](getting-started-helm.md) for your first iteration.
-
-1. [Preparation](helm-prepare.md)
-2. [Deployment](helm-deploy.md)
-
-## Upgrading
-
-Once the Pulsar Helm chart is installed, use the `helm upgrade` command to apply configuration changes and chart updates.
-
-```bash
-
-helm repo add apache https://pulsar.apache.org/charts
-helm repo update
-helm get values <pulsar-release-name> > pulsar.yaml
-helm upgrade <pulsar-release-name> apache/pulsar -f pulsar.yaml
-
-```
-
-For more detailed information, see [Upgrading](helm-upgrade.md).
-
-## Uninstallation
-
-To uninstall the Pulsar Helm chart, run the following command:
-
-```bash
-
-helm delete <pulsar-release-name>
-
-```
-
-For the purposes of continuity, these charts have some Kubernetes objects that cannot be removed when performing `helm delete`.
-It is recommended to *consciously* remove these items, as they affect re-deployment.
-
-* PVCs for stateful data: *consciously* remove these items.
-  - ZooKeeper: This is your metadata.
-  - BookKeeper: This is your data.
-  - Prometheus: This is your metrics data, which can be safely removed.
-* Secrets: if the secrets are generated by the [prepare release script](https://github.com/apache/pulsar-helm-chart/blob/master/scripts/pulsar/prepare_helm_release.sh), they contain secret keys and tokens. You can use the [cleanup release script](https://github.com/apache/pulsar-helm-chart/blob/master/scripts/pulsar/cleanup_helm_release.sh) to remove these secrets and tokens as needed.
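-
-For example, the following sketch shows one way to inspect and then remove leftover PVCs by hand. The namespace `pulsar` and the PVC name are assumptions here; substitute the values from your own deployment, and only delete a PVC once you are sure its data is no longer needed.
-
-```bash
-
-# List the PersistentVolumeClaims left behind after `helm delete`
-kubectl get pvc -n pulsar
-
-# Delete a specific PVC; this permanently removes the underlying data
-kubectl delete pvc <pvc-name> -n pulsar
-
-```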
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/helm-prepare.md b/site2/website/versioned_docs/version-2.8.0-deprecated/helm-prepare.md
deleted file mode 100644
index 5e9f2f9ef4f680..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/helm-prepare.md
+++ /dev/null
@@ -1,92 +0,0 @@
----
-id: helm-prepare
-title: Prepare Kubernetes resources
-sidebar_label: "Prepare"
-original_id: helm-prepare
----
-
-For a fully functional Pulsar cluster, you need a few resources before deploying the Apache Pulsar Helm chart. The following provides instructions to prepare the Kubernetes cluster before deploying the Pulsar Helm chart.
-
-- [Google Kubernetes Engine](#google-kubernetes-engine)
-  - [Manual cluster creation](#manual-cluster-creation)
-  - [Scripted cluster creation](#scripted-cluster-creation)
-  - [Create cluster with local SSDs](#create-cluster-with-local-ssds)
-- [Next Steps](#next-steps)
-
-## Google Kubernetes Engine
-
-To make getting started easier, a script is provided to create the cluster automatically. Alternatively, a cluster can be created manually as well.
-
-### Manual cluster creation
-
-To provision a Kubernetes cluster manually, follow the [GKE instructions](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-cluster).
-
-Alternatively, you can use the [instructions](#scripted-cluster-creation) below to provision a GKE cluster as needed.
-
-### Scripted cluster creation
-
-A [bootstrap script](https://github.com/streamnative/charts/tree/master/scripts/pulsar/gke_bootstrap_script.sh) has been created to automate much of the setup process for users on GCP/GKE.
-
-The script can:
-
-1. Create a new GKE cluster.
-2. Allow the cluster to modify DNS (Domain Name System) records.
-3. Set up `kubectl`, and connect it to the cluster.
-
-Google Cloud SDK is a dependency of this script, so ensure it is [set up correctly](helm-tools.md#connect-to-a-gke-cluster) for the script to work.
-
-The script reads various parameters from environment variables and an argument `up` or `down` for bootstrap and clean-up respectively.
-
-The following table describes all variables.
-
-| **Variable** | **Description** | **Default value** |
-| ------------ | --------------- | ----------------- |
-| PROJECT | ID of your GCP project | No default value. It must be set. |
-| CLUSTER_NAME | Name of the GKE cluster | `pulsar-dev` |
-| CONFDIR | Configuration directory to store Kubernetes configuration | ${HOME}/.config/streamnative |
-| INT_NETWORK | IP space to use within this cluster | `default` |
-| LOCAL_SSD_COUNT | Number of local SSD counts | 4 |
-| MACHINE_TYPE | Type of machine to use for nodes | `n1-standard-4` |
-| NUM_NODES | Number of nodes to be created in each of the cluster's zones | 4 |
-| PREEMPTIBLE | Create nodes using preemptible VM instances in the new cluster. | false |
-| REGION | Compute region for the cluster | `us-east1` |
-| USE_LOCAL_SSD | Flag to create a cluster with local SSDs | false |
-| ZONE | Compute zone for the cluster | `us-east1-b` |
-| ZONE_EXTENSION | The extension (`a`, `b`, `c`) of the zone name of the cluster | `b` |
-| EXTRA_CREATE_ARGS | Extra arguments passed to create command | |
-
-Run the script by passing in your desired parameters.
It can work with the default parameters except for `PROJECT`, which is required:
-
-```bash
-
-PROJECT=<gcloud-project-id> scripts/pulsar/gke_bootstrap_script.sh up
-
-```
-
-The script can also be used to clean up the created GKE resources.
-
-```bash
-
-PROJECT=<gcloud-project-id> scripts/pulsar/gke_bootstrap_script.sh down
-
-```
-
-#### Create cluster with local SSDs
-
-To install the Pulsar Helm chart using local persistent volumes, you need to create a GKE cluster with local SSDs. You can do so by specifying `USE_LOCAL_SSD` to be `true` in the following command to create a Pulsar cluster with local SSDs.
-
-```
-
-PROJECT=<gcloud-project-id> USE_LOCAL_SSD=true LOCAL_SSD_COUNT=<local-ssd-count> scripts/pulsar/gke_bootstrap_script.sh up
-
-```
-
-## Next Steps
-
-Continue with the [installation of the chart](helm-deploy.md) once you have the cluster up and running.
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/helm-tools.md b/site2/website/versioned_docs/version-2.8.0-deprecated/helm-tools.md
deleted file mode 100644
index 6ba89006913b64..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/helm-tools.md
+++ /dev/null
@@ -1,43 +0,0 @@
----
-id: helm-tools
-title: Required tools for deploying Pulsar Helm Chart
-sidebar_label: "Required Tools"
-original_id: helm-tools
----
-
-Before deploying Pulsar to your Kubernetes cluster, there are some tools you must have installed locally.
-
-## kubectl
-
-kubectl is the tool that talks to the Kubernetes API. kubectl 1.14 or higher is required and it needs to be compatible with your cluster ([+/- 1 minor release from your cluster](https://kubernetes.io/docs/tasks/tools/install-kubectl/#before-you-begin)).
-
-To install kubectl locally, follow the [Kubernetes documentation](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl).
-
-The server version of kubectl cannot be obtained until we connect to a cluster.
-
-## Helm
-
-Helm is the package manager for Kubernetes. The Apache Pulsar Helm Chart is tested and supported with Helm v3.
-
-### Get Helm
-
-You can get Helm from the project's [releases page](https://github.com/helm/helm/releases), or follow other options under the official documentation of [installing Helm](https://helm.sh/docs/intro/install/).
-
-### Next steps
-
-Once kubectl and Helm are configured, you can configure your [Kubernetes cluster](helm-prepare.md).
-
-## Additional information
-
-### Templates
-
-Templating in Helm is done through Golang's [text/template](https://golang.org/pkg/text/template/) and [sprig](https://godoc.org/github.com/Masterminds/sprig).
-
-For more information about how all the inner workings behave, check these documents:
-
-- [Functions and Pipelines](https://helm.sh/docs/chart_template_guide/functions_and_pipelines/)
-- [Subcharts and Globals](https://helm.sh/docs/chart_template_guide/subcharts_and_globals/)
-
-### Tips and tricks
-
-For additional information on developing with Helm, check [tips and tricks section](https://helm.sh/docs/howto/charts_tips_and_tricks/) in the Helm repository.
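-
-As an illustration of how these templates come together, you can render the chart locally without installing anything into your cluster. This is a sketch that assumes the `apache` chart repository has already been added; the release name `pulsar-test` is arbitrary.
-
-```bash
-
-# Render the chart templates to plain Kubernetes manifests for inspection
-helm template pulsar-test apache/pulsar --set components.pulsar_manager=false > rendered-manifests.yaml
-
-```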
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/helm-upgrade.md b/site2/website/versioned_docs/version-2.8.0-deprecated/helm-upgrade.md
deleted file mode 100644
index 7d671e6bfb3c10..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/helm-upgrade.md
+++ /dev/null
@@ -1,43 +0,0 @@
----
-id: helm-upgrade
-title: Upgrade Pulsar Helm release
-sidebar_label: "Upgrade"
-original_id: helm-upgrade
----
-
-Before upgrading your Pulsar installation, you need to check the change log corresponding to the specific release you want to upgrade to and look for any release notes that might pertain to the new Pulsar Helm chart version.
-
-We also recommend that you provide all values using the `helm upgrade --set key=value` syntax or the `-f values.yml` option instead of using `--reuse-values`, because some of the current values might be deprecated.
-
-:::note
-
-You can retrieve your previous `--set` arguments cleanly, with `helm get values <pulsar-release-name>`. If you direct this into a file (`helm get values <pulsar-release-name> > pulsar.yaml`), you can safely pass this file through `-f`, namely `helm upgrade <pulsar-release-name> apache/pulsar -f pulsar.yaml`. This safely replaces the behavior of `--reuse-values`.
-
-:::
-
-## Steps
-
-To upgrade Apache Pulsar to a newer version, follow these steps:
-
-1. Check the change log for the specific version you would like to upgrade to.
-2. Go through [deployment documentation](helm-deploy.md) step by step.
-3. Extract your previous `--set` arguments with the following command.
-
-   ```bash
-
-   helm get values <pulsar-release-name> > pulsar.yaml
-
-   ```
-
-4. Decide all the values you need to set.
-5. Perform the upgrade, with all `--set` arguments extracted in step 3.
-
-   ```bash
-
-   helm upgrade <pulsar-release-name> apache/pulsar \
-       --version <new version> \
-       -f pulsar.yaml \
-       --set ...
-
-   ```
-
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/io-aerospike-sink.md b/site2/website/versioned_docs/version-2.8.0-deprecated/io-aerospike-sink.md
deleted file mode 100644
index 63d7338a3ba91c..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/io-aerospike-sink.md
+++ /dev/null
@@ -1,26 +0,0 @@
----
-id: io-aerospike-sink
-title: Aerospike sink connector
-sidebar_label: "Aerospike sink connector"
-original_id: io-aerospike-sink
----
-
-The Aerospike sink connector pulls messages from Pulsar topics to Aerospike clusters.
-
-## Configuration
-
-The configuration of the Aerospike sink connector has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description |
-|------|----------|----------|---------|-------------|
-| `seedHosts` |String| true | No default value| The comma-separated list of one or more Aerospike cluster hosts.<br/><br/>Each host can be specified as a valid IP address or hostname followed by an optional port number. |
-| `keyspace` | String| true |No default value |The Aerospike namespace. |
-| `columnName` | String | true| No default value|The Aerospike column name. |
-|`userName`|String|false|NULL|The Aerospike username.|
-|`password`|String|false|NULL|The Aerospike password.|
-| `keySet` | String|false |NULL | The Aerospike set name. |
-| `maxConcurrentRequests` |int| false | 100 | The maximum number of concurrent Aerospike transactions that a sink can open. |
-| `timeoutMs` | int|false | 100 | This property controls `socketTimeout` and `totalTimeout` for Aerospike transactions. |
-| `retries` | int|false | 1 |The maximum number of retries before aborting a write transaction to Aerospike. |
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/io-canal-source.md b/site2/website/versioned_docs/version-2.8.0-deprecated/io-canal-source.md
deleted file mode 100644
index d1fd43bb0f74e4..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/io-canal-source.md
+++ /dev/null
@@ -1,235 +0,0 @@
----
-id: io-canal-source
-title: Canal source connector
-sidebar_label: "Canal source connector"
-original_id: io-canal-source
----
-
-The Canal source connector pulls messages from MySQL to Pulsar topics.
-
-## Configuration
-
-The configuration of Canal source connector has the following properties.
-
-### Property
-
-| Name | Required | Default | Description |
-|------|----------|---------|-------------|
-| `username` | true | None | Canal server account (not MySQL).|
-| `password` | true | None | Canal server password (not MySQL). |
-|`destination`|true|None|Source destination that Canal source connector connects to. |
-| `singleHostname` | false | None | Canal server address.|
-| `singlePort` | false | None | Canal server port.|
-| `cluster` | true | false | Whether to enable cluster mode based on Canal server configuration or not.<br/><br/>true: **cluster** mode. If set to true, it talks to `zkServers` to figure out the actual database host.<br/><br/>false: **standalone** mode. If set to false, it connects to the database specified by `singleHostname` and `singlePort`. |
-| `zkServers` | true | None | Address and port of the Zookeeper that Canal source connector talks to in order to figure out the actual database host.|
-| `batchSize` | false | 1000 | Batch size to fetch from Canal. |
-
-### Example
-
-Before using the Canal connector, you can create a configuration file through one of the following methods.
-
-* JSON
-
-  ```json
-
-  {
-    "zkServers": "127.0.0.1:2181",
-    "batchSize": "5120",
-    "destination": "example",
-    "username": "",
-    "password": "",
-    "cluster": false,
-    "singleHostname": "127.0.0.1",
-    "singlePort": "11111"
-  }
-
-  ```
-
-* YAML
-
-  You can create a YAML file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/canal/src/main/resources/canal-mysql-source-config.yaml) below to your YAML file.
-
-  ```yaml
-
-  configs:
-    zkServers: "127.0.0.1:2181"
-    batchSize: 5120
-    destination: "example"
-    username: ""
-    password: ""
-    cluster: false
-    singleHostname: "127.0.0.1"
-    singlePort: 11111
-
-  ```
-
-## Usage
-
-Here is an example of storing MySQL data using the configuration file as above.
-
-1. Start a MySQL server.
-
-   ```bash
-
-   $ docker pull mysql:5.7
-   $ docker run -d -it --rm --name pulsar-mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=canal -e MYSQL_USER=mysqluser -e MYSQL_PASSWORD=mysqlpw mysql:5.7
-
-   ```
-
-2. Create a configuration file `mysqld.cnf`.
-
-   ```bash
-
-   [mysqld]
-   pid-file = /var/run/mysqld/mysqld.pid
-   socket = /var/run/mysqld/mysqld.sock
-   datadir = /var/lib/mysql
-   #log-error = /var/log/mysql/error.log
-   # By default we only accept connections from localhost
-   #bind-address = 127.0.0.1
-   # Disabling symbolic-links is recommended to prevent assorted security risks
-   symbolic-links=0
-   log-bin=mysql-bin
-   binlog-format=ROW
-   server_id=1
-
-   ```
-
-3. Copy the configuration file `mysqld.cnf` to the MySQL server.
-
-   ```bash
-
-   $ docker cp mysqld.cnf pulsar-mysql:/etc/mysql/mysql.conf.d/
-
-   ```
-
-4. Restart the MySQL server.
-
-   ```bash
-
-   $ docker restart pulsar-mysql
-
-   ```
-
-5. Create a test database in the MySQL server.
-
-   ```bash
-
-   $ docker exec -it pulsar-mysql /bin/bash
-   $ mysql -h 127.0.0.1 -uroot -pcanal -e 'create database test;'
-
-   ```
-
-6. Start a Canal server and connect it to the MySQL server.
-
-   ```
-
-   $ docker pull canal/canal-server:v1.1.2
-   $ docker run -d -it --link pulsar-mysql -e canal.auto.scan=false -e canal.destinations=test -e canal.instance.master.address=pulsar-mysql:3306 -e canal.instance.dbUsername=root -e canal.instance.dbPassword=canal -e canal.instance.connectionCharset=UTF-8 -e canal.instance.tsdb.enable=true -e canal.instance.gtidon=false --name=pulsar-canal-server -p 8000:8000 -p 2222:2222 -p 11111:11111 -p 11112:11112 -m 4096m canal/canal-server:v1.1.2
-
-   ```
-
-7. Start Pulsar standalone.
-
-   ```bash
-
-   $ docker pull apachepulsar/pulsar:2.3.0
-   $ docker run -d -it --link pulsar-canal-server -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-standalone apachepulsar/pulsar:2.3.0 bin/pulsar standalone
-
-   ```
-
-8. Modify the configuration file `canal-mysql-source-config.yaml`.
-
-   ```yaml
-
-   configs:
-     zkServers: ""
-     batchSize: "5120"
-     destination: "test"
-     username: ""
-     password: ""
-     cluster: false
-     singleHostname: "pulsar-canal-server"
-     singlePort: "11111"
-
-   ```
-
-9. Create a consumer file `pulsar-client.py`.
-
-   ```python
-
-   import pulsar
-
-   client = pulsar.Client('pulsar://localhost:6650')
-   consumer = client.subscribe('my-topic',
-                               subscription_name='my-sub')
-
-   # Consume messages until the process is interrupted, then release the
-   # client resources. (The close call sits in a finally block so that it
-   # actually runs; placed after an endless loop it would be unreachable.)
-   try:
-       while True:
-           msg = consumer.receive()
-           print("Received message: '%s'" % msg.data())
-           consumer.acknowledge(msg)
-   finally:
-       client.close()
-
-   ```
-
-10. Copy the configuration file `canal-mysql-source-config.yaml` and the consumer file `pulsar-client.py` to the Pulsar server.
-
-    ```bash
-
-    $ docker cp canal-mysql-source-config.yaml pulsar-standalone:/pulsar/conf/
-    $ docker cp pulsar-client.py pulsar-standalone:/pulsar/
-
-    ```
-
-11. Download a Canal connector and start it.
-
-    ```bash
-
-    $ docker exec -it pulsar-standalone /bin/bash
-    $ wget https://archive.apache.org/dist/pulsar/pulsar-2.3.0/connectors/pulsar-io-canal-2.3.0.nar -P connectors
-    $ ./bin/pulsar-admin source localrun \
-    --archive ./connectors/pulsar-io-canal-2.3.0.nar \
-    --classname org.apache.pulsar.io.canal.CanalStringSource \
-    --tenant public \
-    --namespace default \
-    --name canal \
-    --destination-topic-name my-topic \
-    --source-config-file /pulsar/conf/canal-mysql-source-config.yaml \
-    --parallelism 1
-
-    ```
-
-12. Consume data from MySQL.
-
-    ```bash
-
-    $ docker exec -it pulsar-standalone /bin/bash
-    $ python pulsar-client.py
-
-    ```
-
-13. Open another window to log in to the MySQL server.
-
-    ```bash
-
-    $ docker exec -it pulsar-mysql /bin/bash
-    $ mysql -h 127.0.0.1 -uroot -pcanal
-
-    ```
-
-14. Create a table, and insert, delete, and update data in the MySQL server.
-
-    ```bash
-
-    mysql> use test;
-    mysql> show tables;
-    mysql> CREATE TABLE IF NOT EXISTS `test_table`(`test_id` INT UNSIGNED AUTO_INCREMENT,`test_title` VARCHAR(100) NOT NULL,
-    `test_author` VARCHAR(40) NOT NULL,
-    `test_date` DATE,PRIMARY KEY ( `test_id` ))ENGINE=InnoDB DEFAULT CHARSET=utf8;
-    mysql> INSERT INTO test_table (test_title, test_author, test_date) VALUES("a", "b", NOW());
-    mysql> UPDATE test_table SET test_title='c' WHERE test_title='a';
-    mysql> DELETE FROM test_table WHERE test_title='c';
-
-    ```
-
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/io-cassandra-sink.md b/site2/website/versioned_docs/version-2.8.0-deprecated/io-cassandra-sink.md
deleted file mode 100644
index b27a754f49e182..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/io-cassandra-sink.md
+++ /dev/null
@@ -1,57 +0,0 @@
----
-id: io-cassandra-sink
-title: Cassandra sink connector
-sidebar_label: "Cassandra sink connector"
-original_id: io-cassandra-sink
----
-
-The Cassandra sink connector pulls messages from Pulsar topics to Cassandra clusters.
-
-## Configuration
-
-The configuration of the Cassandra sink connector has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description |
-|------|----------|----------|---------|-------------|
-| `roots` | String|true | " " (empty string) | A comma-separated list of Cassandra hosts to connect to.|
-| `keyspace` | String|true| " " (empty string)| The key space used for writing pulsar messages.<br/><br/>**Note: `keyspace` should be created prior to a Cassandra sink.**|
    **Note: `keyspace` should be created prior to a Cassandra sink.**| -| `keyname` | String|true| " " (empty string)| The key name of the Cassandra column family.

    The column is used for storing Pulsar message keys.

    If a Pulsar message doesn't have a key associated with it, the message value is used as the key. |
-| `columnFamily` | String|true| " " (empty string)| The Cassandra column family name.

    **Note: `columnFamily` should be created prior to a Cassandra sink.**| -| `columnName` | String|true| " " (empty string) | The column name of the Cassandra column family.

    The column is used for storing Pulsar message values. |
-

### Example

Before using the Cassandra sink connector, you need to create a configuration file through one of the following methods.

* JSON

  ```json

  {
    "roots": "localhost:9042",
    "keyspace": "pulsar_test_keyspace",
    "columnFamily": "pulsar_test_table",
    "keyname": "key",
    "columnName": "col"
  }

  ```

* YAML

  ```

  configs:
    roots: "localhost:9042"
    keyspace: "pulsar_test_keyspace"
    columnFamily: "pulsar_test_table"
    keyname: "key"
    columnName: "col"

  ```

## Usage

For more information about **how to connect Pulsar with Cassandra**, see [here](io-quickstart.md#connect-pulsar-to-apache-cassandra).
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/io-cdc-debezium.md b/site2/website/versioned_docs/version-2.8.0-deprecated/io-cdc-debezium.md
deleted file mode 100644
index 293ccf2b35e8aa..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/io-cdc-debezium.md
+++ /dev/null
@@ -1,543 +0,0 @@
----
-id: io-cdc-debezium
-title: Debezium source connector
-sidebar_label: "Debezium source connector"
-original_id: io-cdc-debezium
----
-
-The Debezium source connector pulls messages from MySQL or PostgreSQL
-and persists the messages to Pulsar topics.
-
-## Configuration
-
-The configuration of the Debezium source connector has the following properties.
-
-| Name | Required | Default | Description |
-|------|----------|---------|-------------|
-| `task.class` | true | null | A source task class that is implemented in Debezium. |
-| `database.hostname` | true | null | The address of a database server. |
-| `database.port` | true | null | The port number of a database server.|
-| `database.user` | true | null | The name of a database user that has the required privileges. |
-| `database.password` | true | null | The password for a database user that has the required privileges. |
-| `database.server.id` | true | null | The connector’s identifier that must be unique within a database cluster and is similar to the database’s server-id configuration property. |
-| `database.server.name` | true | null | The logical name of a database server/cluster, which forms a namespace and is used in all the names of Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the Avro Connector is used. |
-| `database.whitelist` | false | null | A list of all the databases hosted by this server that the connector monitors.

    This is optional, and there are other properties for listing databases and tables to include or exclude from monitoring. | -| `key.converter` | true | null | The converter provided by Kafka Connect to convert record key. | -| `value.converter` | true | null | The converter provided by Kafka Connect to convert record value. | -| `database.history` | true | null | The name of the database history class. | -| `database.history.pulsar.topic` | true | null | The name of the database history topic where the connector writes and recovers DDL statements.

    **Note: this topic is for internal use only and should not be used by consumers.** | -| `database.history.pulsar.service.url` | true | null | Pulsar cluster service URL for history topic. | -| `pulsar.service.url` | true | null | Pulsar cluster service URL for the offset topic used in Debezium. You can use the `bin/pulsar-admin --admin-url http://pulsar:8080 sources localrun --source-config-file configs/pg-pulsar-config.yaml` command to point to the target Pulsar cluster.| -| `offset.storage.topic` | true | null | Record the last committed offsets that the connector successfully completes. | -| `mongodb.hosts` | true | null | The comma-separated list of hostname and port pairs (in the form 'host' or 'host:port') of the MongoDB servers in the replica set. The list contains a single hostname and a port pair. If mongodb.members.auto.discover is set to false, the host and port pair are prefixed with the replica set name (e.g., rs0/localhost:27017). | -| `mongodb.name` | true | null | A unique name that identifies the connector and/or MongoDB replica set or shared cluster that this connector monitors. Each server should be monitored by at most one Debezium connector, since this server name prefixes all persisted Kafka topics emanating from the MongoDB replica set or cluster. | -| `mongodb.user` | true | null | Name of the database user to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. | -| `mongodb.password` | true | null | Password to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. | -| `mongodb.task.id` | true | null | The taskId of the MongoDB connector that attempts to use a separate task for each replica set. | - - - -## Example of MySQL - -You need to create a configuration file before using the Pulsar Debezium connector. - -### Configuration - -You can use one of the following methods to create a configuration file. - -* JSON - - ```json - - { - "database.hostname": "localhost", - "database.port": "3306", - "database.user": "debezium", - "database.password": "dbz", - "database.server.id": "184054", - "database.server.name": "dbserver1", - "database.whitelist": "inventory", - "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory", - "database.history.pulsar.topic": "history-topic", - "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650", - "key.converter": "org.apache.kafka.connect.json.JsonConverter", - "value.converter": "org.apache.kafka.connect.json.JsonConverter", - "pulsar.service.url": "pulsar://127.0.0.1:6650", - "offset.storage.topic": "offset-topic" - } - - ``` - -* YAML - - You can create a `debezium-mysql-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/resources/debezium-mysql-source-config.yaml) below to the `debezium-mysql-source-config.yaml` file. 
-

  ```yaml

  tenant: "public"
  namespace: "default"
  name: "debezium-mysql-source"
  topicName: "debezium-mysql-topic"
  archive: "connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar"
  parallelism: 1

  configs:

    ## config for mysql, docker image: debezium/example-mysql:0.8
    database.hostname: "localhost"
    database.port: "3306"
    database.user: "debezium"
    database.password: "dbz"
    database.server.id: "184054"
    database.server.name: "dbserver1"
    database.whitelist: "inventory"
    database.history: "org.apache.pulsar.io.debezium.PulsarDatabaseHistory"
    database.history.pulsar.topic: "history-topic"
    database.history.pulsar.service.url: "pulsar://127.0.0.1:6650"

    ## KEY_CONVERTER_CLASS_CONFIG, VALUE_CONVERTER_CLASS_CONFIG
    key.converter: "org.apache.kafka.connect.json.JsonConverter"
    value.converter: "org.apache.kafka.connect.json.JsonConverter"

    ## PULSAR_SERVICE_URL_CONFIG
    pulsar.service.url: "pulsar://127.0.0.1:6650"

    ## OFFSET_STORAGE_TOPIC_CONFIG
    offset.storage.topic: "offset-topic"

  ```

### Usage

This example shows how to change the data of a MySQL table using the Pulsar Debezium connector.

1. Start a MySQL server with a database from which Debezium can capture changes.

   ```bash

   $ docker run -it --rm \
   --name mysql \
   -p 3306:3306 \
   -e MYSQL_ROOT_PASSWORD=debezium \
   -e MYSQL_USER=mysqluser \
   -e MYSQL_PASSWORD=mysqlpw debezium/example-mysql:0.8

   ```

2. Start a Pulsar service locally in standalone mode.

   ```bash

   $ bin/pulsar standalone

   ```

3. Start the Pulsar Debezium connector in local run mode using one of the following methods.

   * Use the **JSON** configuration file as shown previously.

     Make sure the NAR file is available at `connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar`.

     ```bash

     $ bin/pulsar-admin source localrun \
     --archive connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar \
     --name debezium-mysql-source --destination-topic-name debezium-mysql-topic \
     --tenant public \
     --namespace default \
     --source-config '{"database.hostname": "localhost","database.port": "3306","database.user": "debezium","database.password": "dbz","database.server.id": "184054","database.server.name": "dbserver1","database.whitelist": "inventory","database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory","database.history.pulsar.topic": "history-topic","database.history.pulsar.service.url": "pulsar://127.0.0.1:6650","key.converter": "org.apache.kafka.connect.json.JsonConverter","value.converter": "org.apache.kafka.connect.json.JsonConverter","pulsar.service.url": "pulsar://127.0.0.1:6650","offset.storage.topic": "offset-topic"}'

     ```

   * Use the **YAML** configuration file as shown previously.

     ```bash

     $ bin/pulsar-admin source localrun \
     --source-config-file debezium-mysql-source-config.yaml

     ```

4. Subscribe to the topic _sub-products_ for the table _inventory.products_.

   ```bash

   $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0

   ```

5. Start a MySQL client in docker.

   ```bash

   $ docker run -it --rm \
   --name mysqlterm \
   --link mysql \
   mysql:5.7 sh \
   -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'

   ```

6. The MySQL client prompt appears.

   Use the following commands to change the data of the table _products_. 
- - ``` - - mysql> use inventory; - mysql> show tables; - mysql> SELECT * FROM products; - mysql> UPDATE products SET name='1111111111' WHERE id=101; - mysql> UPDATE products SET name='1111111111' WHERE id=107; - - ``` - - In the terminal window of subscribing topic, you can find the data changes have been kept in the _sub-products_ topic. - -## Example of PostgreSQL - -You need to create a configuration file before using the Pulsar Debezium connector. - -### Configuration - -You can use one of the following methods to create a configuration file. - -* JSON - - ```json - - { - "database.hostname": "localhost", - "database.port": "5432", - "database.user": "postgres", - "database.password": "postgres", - "database.dbname": "postgres", - "database.server.name": "dbserver1", - "schema.whitelist": "inventory", - "pulsar.service.url": "pulsar://127.0.0.1:6650" - } - - ``` - -* YAML - - You can create a `debezium-postgres-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/resources/debezium-postgres-source-config.yaml) below to the `debezium-postgres-source-config.yaml` file. - - ```yaml - - tenant: "public" - namespace: "default" - name: "debezium-postgres-source" - topicName: "debezium-postgres-topic" - archive: "connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar" - parallelism: 1 - - configs: - - ## config for pg, docker image: debezium/example-postgress:0.8 - database.hostname: "localhost" - database.port: "5432" - database.user: "postgres" - database.password: "postgres" - database.dbname: "postgres" - database.server.name: "dbserver1" - schema.whitelist: "inventory" - - ## PULSAR_SERVICE_URL_CONFIG - pulsar.service.url: "pulsar://127.0.0.1:6650" - - ``` - -### Usage - -This example shows how to change the data of a PostgreSQL table using the Pulsar Debezium connector. - - -1. Start a PostgreSQL server with a database from which Debezium can capture changes. - - ```bash - - $ docker pull debezium/example-postgres:0.8 - $ docker run -d -it --rm --name pulsar-postgresql -p 5432:5432 debezium/example-postgres:0.8 - - ``` - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - -3. Start the Pulsar Debezium connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - Make sure the nar file is available at `connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar`. - - ```bash - - $ bin/pulsar-admin source localrun \ - --archive connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar \ - --name debezium-postgres-source \ - --destination-topic-name debezium-postgres-topic \ - --tenant public \ - --namespace default \ - --source-config '{"database.hostname": "localhost","database.port": "5432","database.user": "postgres","database.password": "postgres","database.dbname": "postgres","database.server.name": "dbserver1","schema.whitelist": "inventory","pulsar.service.url": "pulsar://127.0.0.1:6650"}' - - ``` - - * Use the **YAML** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin source localrun \ - --source-config-file debezium-postgres-source-config.yaml - - ``` - -4. Subscribe the topic _sub-products_ for the _inventory.products_ table. - - ``` - - $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0 - - ``` - -5. Start a PostgreSQL client in docker. 
- - ```bash - - $ docker exec -it pulsar-postgresql /bin/bash - - ``` - -6. A PostgreSQL client pops out. - - Use the following commands to change the data of the table _products_. - - ``` - - psql -U postgres postgres - postgres=# \c postgres; - You are now connected to database "postgres" as user "postgres". - postgres=# SET search_path TO inventory; - SET - postgres=# select * from products; - id | name | description | weight - -----+--------------------+---------------------------------------------------------+-------- - 102 | car battery | 12V car battery | 8.1 - 103 | 12-pack drill bits | 12-pack of drill bits with sizes ranging from #40 to #3 | 0.8 - 104 | hammer | 12oz carpenter's hammer | 0.75 - 105 | hammer | 14oz carpenter's hammer | 0.875 - 106 | hammer | 16oz carpenter's hammer | 1 - 107 | rocks | box of assorted rocks | 5.3 - 108 | jacket | water resistent black wind breaker | 0.1 - 109 | spare tire | 24 inch spare tire | 22.2 - 101 | 1111111111 | Small 2-wheel scooter | 3.14 - (9 rows) - - postgres=# UPDATE products SET name='1111111111' WHERE id=107; - UPDATE 1 - - ``` - - In the terminal window of subscribing topic, you can receive the following messages. - - ```bash - - ----- got message ----- - {"schema":{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":107}}�{"schema":{"type":"struct","fields":[{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":false,"field":"name"},{"type":"string","optional":true,"field":"description"},{"type":"double","optional":true,"field":"weight"}],"optional":true,"name":"dbserver1.inventory.products.Value","field":"before"},{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":false,"field":"name"},{"type":"string","optional":true,"field":"description"},{"type":"double","optional":true,"field":"weight"}],"optional":true,"name":"dbserver1.inventory.products.Value","field":"after"},{"type":"struct","fields":[{"type":"string","optional":true,"field":"version"},{"type":"string","optional":true,"field":"connector"},{"type":"string","optional":false,"field":"name"},{"type":"string","optional":false,"field":"db"},{"type":"int64","optional":true,"field":"ts_usec"},{"type":"int64","optional":true,"field":"txId"},{"type":"int64","optional":true,"field":"lsn"},{"type":"string","optional":true,"field":"schema"},{"type":"string","optional":true,"field":"table"},{"type":"boolean","optional":true,"default":false,"field":"snapshot"},{"type":"boolean","optional":true,"field":"last_snapshot_record"}],"optional":false,"name":"io.debezium.connector.postgresql.Source","field":"source"},{"type":"string","optional":false,"field":"op"},{"type":"int64","optional":true,"field":"ts_ms"}],"optional":false,"name":"dbserver1.inventory.products.Envelope"},"payload":{"before":{"id":107,"name":"rocks","description":"box of assorted rocks","weight":5.3},"after":{"id":107,"name":"1111111111","description":"box of assorted rocks","weight":5.3},"source":{"version":"0.9.2.Final","connector":"postgresql","name":"dbserver1","db":"postgres","ts_usec":1559208957661080,"txId":577,"lsn":23862872,"schema":"inventory","table":"products","snapshot":false,"last_snapshot_record":null},"op":"u","ts_ms":1559208957692}} - - ``` - -## Example of MongoDB - -You need to create a configuration file before using the Pulsar Debezium connector. 
-

* JSON

  ```json

  {
    "mongodb.hosts": "rs0/mongodb:27017",
    "mongodb.name": "dbserver1",
    "mongodb.user": "debezium",
    "mongodb.password": "dbz",
    "mongodb.task.id": "1",
    "database.whitelist": "inventory",
    "pulsar.service.url": "pulsar://127.0.0.1:6650"
  }

  ```

* YAML

  You can create a `debezium-mongodb-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mongodb/src/main/resources/debezium-mongodb-source-config.yaml) below to the `debezium-mongodb-source-config.yaml` file.

  ```yaml

  tenant: "public"
  namespace: "default"
  name: "debezium-mongodb-source"
  topicName: "debezium-mongodb-topic"
  archive: "connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar"
  parallelism: 1

  configs:

    ## config for mongodb, docker image: debezium/example-mongodb:0.10
    mongodb.hosts: "rs0/mongodb:27017"
    mongodb.name: "dbserver1"
    mongodb.user: "debezium"
    mongodb.password: "dbz"
    mongodb.task.id: "1"
    database.whitelist: "inventory"

    ## PULSAR_SERVICE_URL_CONFIG
    pulsar.service.url: "pulsar://127.0.0.1:6650"

  ```

### Usage

This example shows how to change the data of a MongoDB table using the Pulsar Debezium connector.


1. Start a MongoDB server with a database from which Debezium can capture changes.

   ```bash

   $ docker pull debezium/example-mongodb:0.10
   $ docker run -d -it --rm --name pulsar-mongodb -e MONGODB_USER=mongodb -e MONGODB_PASSWORD=mongodb -p 27017:27017 debezium/example-mongodb:0.10

   ```

   Use the following commands to initialize the data.

   ``` bash

   ./usr/local/bin/init-inventory.sh

   ```

   If the local host cannot access the container network, you can update the `/etc/hosts` file and add a rule such as `127.0.0.1 f114527a95f`, where `f114527a95f` is the container ID that you can get with `docker ps -a`.


2. Start a Pulsar service locally in standalone mode.

   ```bash

   $ bin/pulsar standalone

   ```

3. Start the Pulsar Debezium connector in local run mode using one of the following methods.

   * Use the **JSON** configuration file as shown previously.

     Make sure the NAR file is available at `connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar`.

     ```bash

     $ bin/pulsar-admin source localrun \
     --archive connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar \
     --name debezium-mongodb-source \
     --destination-topic-name debezium-mongodb-topic \
     --tenant public \
     --namespace default \
     --source-config '{"mongodb.hosts": "rs0/mongodb:27017","mongodb.name": "dbserver1","mongodb.user": "debezium","mongodb.password": "dbz","mongodb.task.id": "1","database.whitelist": "inventory","pulsar.service.url": "pulsar://127.0.0.1:6650"}'

     ```

   * Use the **YAML** configuration file as shown previously.

     ```bash

     $ bin/pulsar-admin source localrun \
     --source-config-file debezium-mongodb-source-config.yaml

     ```

4. Subscribe to the topic _sub-products_ for the _inventory.products_ table.

   ```

   $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0

   ```

5. Start a MongoDB client in docker.

   ```bash

   $ docker exec -it pulsar-mongodb /bin/bash

   ```

6. The MongoDB client prompt appears.

   ```bash

   mongo -u debezium -p dbz --authenticationDatabase admin localhost:27017/inventory
   db.products.update({"_id":NumberLong(104)},{$set:{weight:1.25}})

   ```

   In the terminal window subscribed to the topic, you can receive the following messages. 
- - ```bash - - ----- got message ----- - {"schema":{"type":"struct","fields":[{"type":"string","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":"104"}}, value = {"schema":{"type":"struct","fields":[{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"after"},{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"patch"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"version"},{"type":"string","optional":false,"field":"connector"},{"type":"string","optional":false,"field":"name"},{"type":"int64","optional":false,"field":"ts_ms"},{"type":"string","optional":true,"name":"io.debezium.data.Enum","version":1,"parameters":{"allowed":"true,last,false"},"default":"false","field":"snapshot"},{"type":"string","optional":false,"field":"db"},{"type":"string","optional":false,"field":"rs"},{"type":"string","optional":false,"field":"collection"},{"type":"int32","optional":false,"field":"ord"},{"type":"int64","optional":true,"field":"h"}],"optional":false,"name":"io.debezium.connector.mongo.Source","field":"source"},{"type":"string","optional":true,"field":"op"},{"type":"int64","optional":true,"field":"ts_ms"}],"optional":false,"name":"dbserver1.inventory.products.Envelope"},"payload":{"after":"{\"_id\": {\"$numberLong\": \"104\"},\"name\": \"hammer\",\"description\": \"12oz carpenter's hammer\",\"weight\": 1.25,\"quantity\": 4}","patch":null,"source":{"version":"0.10.0.Final","connector":"mongodb","name":"dbserver1","ts_ms":1573541905000,"snapshot":"true","db":"inventory","rs":"rs0","collection":"products","ord":1,"h":4983083486544392763},"op":"r","ts_ms":1573541909761}}. - - ``` - -## FAQ - -### Debezium postgres connector will hang when create snap - -```$xslt - -#18 prio=5 os_prio=31 tid=0x00007fd83096f800 nid=0xa403 waiting on condition [0x000070000f534000] - java.lang.Thread.State: WAITING (parking) - at sun.misc.Unsafe.park(Native Method) - - parking to wait for <0x00000007ab025a58> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject) - at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) - at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) - at java.util.concurrent.LinkedBlockingDeque.putLast(LinkedBlockingDeque.java:396) - at java.util.concurrent.LinkedBlockingDeque.put(LinkedBlockingDeque.java:649) - at io.debezium.connector.base.ChangeEventQueue.enqueue(ChangeEventQueue.java:132) - at io.debezium.connector.postgresql.PostgresConnectorTask$Lambda$203/385424085.accept(Unknown Source) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.sendCurrentRecord(RecordsSnapshotProducer.java:402) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.readTable(RecordsSnapshotProducer.java:321) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$takeSnapshot$6(RecordsSnapshotProducer.java:226) - at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$240/1347039967.accept(Unknown Source) - at io.debezium.jdbc.JdbcConnection.queryWithBlockingConsumer(JdbcConnection.java:535) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.takeSnapshot(RecordsSnapshotProducer.java:224) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$start$0(RecordsSnapshotProducer.java:87) - at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$206/589332928.run(Unknown Source) - at 
java.util.concurrent.CompletableFuture.uniRun(CompletableFuture.java:705)
- at java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:717)
- at java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2010)
- at io.debezium.connector.postgresql.RecordsSnapshotProducer.start(RecordsSnapshotProducer.java:87)
- at io.debezium.connector.postgresql.PostgresConnectorTask.start(PostgresConnectorTask.java:126)
- at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:47)
- at org.apache.pulsar.io.kafka.connect.KafkaConnectSource.open(KafkaConnectSource.java:127)
- at org.apache.pulsar.io.debezium.DebeziumSource.open(DebeziumSource.java:100)
- at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupInput(JavaInstanceRunnable.java:690)
- at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupJavaInstance(JavaInstanceRunnable.java:200)
- at org.apache.pulsar.functions.instance.JavaInstanceRunnable.run(JavaInstanceRunnable.java:230)
- at java.lang.Thread.run(Thread.java:748)
-
-```
-
-If you encounter the above problem when synchronizing data, refer to [this](https://github.com/apache/pulsar/issues/4075) and add the following configuration to the configuration file:
-
-```
-
-max.queue.size=
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/io-cdc.md b/site2/website/versioned_docs/version-2.8.0-deprecated/io-cdc.md
deleted file mode 100644
index e6e662884826de..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/io-cdc.md
+++ /dev/null
@@ -1,26 +0,0 @@
----
-id: io-cdc
-title: CDC connector
-sidebar_label: "CDC connector"
-original_id: io-cdc
----
-
-CDC source connectors capture log changes of databases (such as MySQL, MongoDB, and PostgreSQL) into Pulsar.
-
-> CDC source connectors are built on top of [Canal](https://github.com/alibaba/canal) and [Debezium](https://debezium.io/) and store all data into a Pulsar cluster in a persistent, replicated, and partitioned way.
-
-Currently, Pulsar has the following CDC connectors.
-
-Name|Java Class
-|---|---
-[Canal source connector](io-canal-source.md)|[org.apache.pulsar.io.canal.CanalStringSource.java](https://github.com/apache/pulsar/blob/master/pulsar-io/canal/src/main/java/org/apache/pulsar/io/canal/CanalStringSource.java)
-[Debezium source connector](io-cdc-debezium.md)|
[org.apache.pulsar.io.debezium.DebeziumSource.java](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/core/src/main/java/org/apache/pulsar/io/debezium/DebeziumSource.java)
[org.apache.pulsar.io.debezium.mysql.DebeziumMysqlSource.java](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/java/org/apache/pulsar/io/debezium/mysql/DebeziumMysqlSource.java)
[org.apache.pulsar.io.debezium.postgres.DebeziumPostgresSource.java](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/java/org/apache/pulsar/io/debezium/postgres/DebeziumPostgresSource.java)
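-
-For example, either connector's output can be watched with a short consumer. The sketch below is a minimal Python example, assuming a hypothetical destination topic `my-cdc-topic` and subscription name `cdc-sub`; substitute the topic you configure with `--destination-topic-name`.
-
-```python
-
-import pulsar
-
-# Connect to a local broker; adjust the service URL for your cluster.
-client = pulsar.Client('pulsar://localhost:6650')
-
-# 'my-cdc-topic' is a placeholder for the source's destination topic.
-consumer = client.subscribe('my-cdc-topic', subscription_name='cdc-sub')
-
-while True:
-    msg = consumer.receive()
-    # Each message carries one serialized database change event.
-    print("Received change event: '%s'" % msg.data())
-    consumer.acknowledge(msg)
-
-```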
-
-For more information about Canal and Debezium, see the information below.
-
-Subject | Reference
-|---|---
-How to use Canal source connector with MySQL|[Canal guide](https://github.com/alibaba/canal/wiki)
-How does Canal work | [Canal tutorial](https://github.com/alibaba/canal/wiki)
-How to use Debezium source connector with MySQL | [Debezium guide](https://debezium.io/docs/connectors/mysql/)
-How does Debezium work | [Debezium tutorial](https://debezium.io/docs/tutorial/)
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/io-cli.md b/site2/website/versioned_docs/version-2.8.0-deprecated/io-cli.md
deleted file mode 100644
index 3d54bb61875e25..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/io-cli.md
+++ /dev/null
@@ -1,658 +0,0 @@
----
-id: io-cli
-title: Connector Admin CLI
-sidebar_label: "CLI"
-original_id: io-cli
----
-
-The `pulsar-admin` tool helps you manage Pulsar connectors.
-
-## `sources`
-
-An interface for managing Pulsar IO sources (ingress data into Pulsar).
-
-```bash
-
-$ pulsar-admin sources subcommands
-
-```
-
-Subcommands are:
-
-* `create`
-
-* `update`
-
-* `delete`
-
-* `get`
-
-* `status`
-
-* `list`
-
-* `stop`
-
-* `start`
-
-* `restart`
-
-* `localrun`
-
-* `available-sources`
-
-* `reload`
-
-
-### `create`
-
-Submit a Pulsar IO source connector to run in a Pulsar cluster.
-
-#### Usage
-
-```bash
-
-$ pulsar-admin sources create options
-
-```
-
-#### Options
-
-|Flag|Description|
-|----|---|
-| `-a`, `--archive` | The path to the NAR archive for the source.
    It also supports url-path (http/https/file [the file protocol assumes that the file already exists on the worker host]) from which the worker can download the package.
-| `--classname` | The source's class name if `archive` is file-url-path (file://).
-| `--cpu` | The CPU (in cores) that needs to be allocated per source instance (applicable only to Docker runtime).
-| `--deserialization-classname` | The SerDe classname for the source.
-| `--destination-topic-name` | The Pulsar topic to which data is sent.
-| `--disk` | The disk (in bytes) that needs to be allocated per source instance (applicable only to Docker runtime).
-|`--name` | The source's name.
-| `--namespace` | The source's namespace.
-| ` --parallelism` | The source's parallelism factor, that is, the number of source instances to run.
-| `--processing-guarantees` | The processing guarantees (also known as delivery semantics) applied to the source. A source connector receives messages from an external system and writes messages to a Pulsar topic. The `--processing-guarantees` option is used to ensure the processing guarantees for writing messages to the Pulsar topic.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE. -| `--ram` | The RAM (in bytes) that needs to be allocated per source instance (applicable only to the process and Docker runtimes). -| `-st`, `--schema-type` | The schema type.
    Either a builtin schema (for example, AVRO and JSON) or a custom schema class name used to encode messages emitted from the source.
-| `--source-config` | Source config key/values.
-| `--source-config-file` | The path to a YAML config file specifying the source's configuration.
-| `-t`, `--source-type` | The source's connector provider.
-| `--tenant` | The source's tenant.
-|`--producer-config`| The custom producer configuration (as a JSON string).
-
-### `update`
-
-Update an already submitted Pulsar IO source connector.
-
-#### Usage
-
-```bash
-
-$ pulsar-admin sources update options
-
-```
-
-#### Options
-
-|Flag|Description|
-|----|---|
-| `-a`, `--archive` | The path to the NAR archive for the source.
    It also supports url-path (http/https/file [the file protocol assumes that the file already exists on the worker host]) from which the worker can download the package.
-| `--classname` | The source's class name if `archive` is file-url-path (file://).
-| `--cpu` | The CPU (in cores) that needs to be allocated per source instance (applicable only to Docker runtime).
-| `--deserialization-classname` | The SerDe classname for the source.
-| `--destination-topic-name` | The Pulsar topic to which data is sent.
-| `--disk` | The disk (in bytes) that needs to be allocated per source instance (applicable only to Docker runtime).
-|`--name` | The source's name.
-| `--namespace` | The source's namespace.
-| ` --parallelism` | The source's parallelism factor, that is, the number of source instances to run.
-| `--processing-guarantees` | The processing guarantees (also known as delivery semantics) applied to the source. A source connector receives messages from an external system and writes messages to a Pulsar topic. The `--processing-guarantees` option is used to ensure the processing guarantees for writing messages to the Pulsar topic.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE. -| `--ram` | The RAM (in bytes) that needs to be allocated per source instance (applicable only to the process and Docker runtimes). -| `-st`, `--schema-type` | The schema type.
    Either a builtin schema (for example, AVRO and JSON) or a custom schema class name used to encode messages emitted from the source.
-| `--source-config` | Source config key/values.
-| `--source-config-file` | The path to a YAML config file specifying the source's configuration.
-| `-t`, `--source-type` | The source's connector provider. The `source-type` parameter of the currently built-in connectors is determined by the setting of the `name` parameter specified in the pulsar-io.yaml file.
-| `--tenant` | The source's tenant.
-| `--update-auth-data` | Whether or not to update the auth data.
    **Default value: false.** - - -### `delete` - -Delete a Pulsar IO source connector. - -#### Usage - -```bash - -$ pulsar-admin sources delete options - -``` - -#### Option - -|Flag|Description| -|---|---| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - -### `get` - -Get the information about a Pulsar IO source connector. - -#### Usage - -```bash - -$ pulsar-admin sources get options - -``` - -#### Options -|Flag|Description| -|---|---| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - - -### `status` - -Check the current status of a Pulsar Source. - -#### Usage - -```bash - -$ pulsar-admin sources status options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The source ID.
    If `instance-id` is not provided, Pulsar gets status of all instances.| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - -### `list` - -List all running Pulsar IO source connectors. - -#### Usage - -```bash - -$ pulsar-admin sources list options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - - -### `stop` - -Stop a source instance. - -#### Usage - -```bash - -$ pulsar-admin sources stop options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The source instanceID.
    If `instance-id` is not provided, Pulsar stops all instances.| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - -### `start` - -Start a source instance. - -#### Usage - -```bash - -$ pulsar-admin sources start options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The source instanceID.
    If `instance-id` is not provided, Pulsar starts all instances.| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - - -### `restart` - -Restart a source instance. - -#### Usage - -```bash - -$ pulsar-admin sources restart options - -``` - -#### Options -|Flag|Description| -|---|---| -|`--instance-id`|The source instanceID.
    If `instance-id` is not provided, Pulsar restarts all instances. -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - - -### `localrun` - -Run a Pulsar IO source connector locally rather than deploying it to the Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sources localrun options - -``` - -#### Options - -|Flag|Description| -|----|---| -| `-a`, `--archive` | The path to the NAR archive for the Source.
    It also supports url-path (http/https/file [the file protocol assumes that the file already exists on the worker host]) from which the worker can download the package.
-| `--broker-service-url` | The URL for the Pulsar broker.
-|`--classname`|The source's class name if `archive` is file-url-path (file://).
-| `--client-auth-params` | Client authentication parameter.
-| `--client-auth-plugin` | The client authentication plugin that the function process uses to connect to the broker.
-|`--cpu`|The CPU (in cores) that needs to be allocated per source instance (applicable only to the Docker runtime).|
-|`--deserialization-classname`|The SerDe classname for the source.
-|`--destination-topic-name`|The Pulsar topic to which data is sent.
-|`--disk`|The disk (in bytes) that needs to be allocated per source instance (applicable only to the Docker runtime).|
-|`--hostname-verification-enabled`|Enable hostname verification.
    **Default value: false**.
-|`--name`|The source’s name.|
-|`--namespace`|The source’s namespace.|
-|`--parallelism`|The source’s parallelism factor, that is, the number of source instances to run.|
-|`--processing-guarantees` | The processing guarantees (also known as delivery semantics) applied to the source. A source connector receives messages from an external system and writes messages to a Pulsar topic. The `--processing-guarantees` option is used to ensure the processing guarantees for writing messages to the Pulsar topic.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE. -|`--ram`|The RAM (in bytes) that needs to be allocated per source instance (applicable only to the Docker runtime).| -| `-st`, `--schema-type` | The schema type.
    Either a builtin schema (for example, AVRO and JSON) or a custom schema class name used to encode messages emitted from the source.
-|`--source-config`|Source config key/values.
-|`--source-config-file`|The path to a YAML config file specifying the source’s configuration.
-|`--source-type`|The source's connector provider.
-|`--tenant`|The source’s tenant.
-|`--tls-allow-insecure`|Allow insecure tls connection.
    **Default value: false**. -|`--tls-trust-cert-path`|The tls trust cert file path. -|`--use-tls`|Use tls connection.
    **Default value: false**. -|`--producer-config`| The custom producer configuration (as a JSON string). - -### `available-sources` - -Get the list of Pulsar IO connector sources supported by Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sources available-sources - -``` - -### `reload` - -Reload the available built-in connectors. - -#### Usage - -```bash - -$ pulsar-admin sources reload - -``` - -## `sinks` - -An interface for managing Pulsar IO sinks (egress data from Pulsar). - -```bash - -$ pulsar-admin sinks subcommands - -``` - -Subcommands are: - -* `create` - -* `update` - -* `delete` - -* `get` - -* `status` - -* `list` - -* `stop` - -* `start` - -* `restart` - -* `localrun` - -* `available-sinks` - -* `reload` - - -### `create` - -Submit a Pulsar IO sink connector to run in a Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sinks create options - -``` - -#### Options - -|Flag|Description| -|----|---| -| `-a`, `--archive` | The path to the archive file for the sink.
    It also supports url-path (http/https/file [the file protocol assumes that the file already exists on the worker host]) from which the worker can download the package.
-| `--auto-ack` | Whether or not the framework will automatically acknowledge messages.
-| `--classname` | The sink's class name if `archive` is file-url-path (file://).
-| `--cpu` | The CPU (in cores) that needs to be allocated per sink instance (applicable only to Docker runtime).
-| `--custom-schema-inputs` | The map of input topics to schema types or class names (as a JSON string).
-| `--custom-serde-inputs` | The map of input topics to SerDe class names (as a JSON string).
-| `--disk` | The disk (in bytes) that needs to be allocated per sink instance (applicable only to Docker runtime).
-|`-i, --inputs` | The sink's input topic or topics (multiple topics can be specified as a comma-separated list).
-|`--name` | The sink's name.
-| `--namespace` | The sink's namespace.
-| ` --parallelism` | The sink's parallelism factor, that is, the number of sink instances to run.
-| `--processing-guarantees` | The processing guarantees (also known as delivery semantics) applied to the sink. The `--processing-guarantees` implementation in Pulsar also relies on sink implementation.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE.
-| `--ram` | The RAM (in bytes) that needs to be allocated per sink instance (applicable only to the process and Docker runtimes).
-| `--retain-ordering` | Sink consumes and sinks messages in order.
-| `--sink-config` | Sink config key/values.
-| `--sink-config-file` | The path to a YAML config file specifying the sink's configuration.
-| `-t`, `--sink-type` | The sink's connector provider. The `sink-type` parameter of the currently built-in connectors is determined by the setting of the `name` parameter specified in the pulsar-io.yaml file.
-| `--subs-name` | Pulsar source subscription name if the user wants a specific subscription name for the input-topic consumer.
-| `--tenant` | The sink's tenant.
-| `--timeout-ms` | The message timeout in milliseconds.
-| `--topics-pattern` | The topics pattern to consume from a list of topics under a namespace that matches the pattern.
    `--inputs` and `--topics-pattern` are mutually exclusive.
    Add SerDe class name for a pattern in `--customSerdeInputs` (supported for Java functions only).
-
-### `update`
-
-Update a Pulsar IO sink connector.
-
-#### Usage
-
-```bash
-
-$ pulsar-admin sinks update options
-
-```
-
-#### Options
-
-|Flag|Description|
-|----|---|
-| `-a`, `--archive` | The path to the archive file for the sink.
    It also supports url-path (http/https/file [the file protocol assumes that the file already exists on the worker host]) from which the worker can download the package.
-| `--auto-ack` | Whether or not the framework will automatically acknowledge messages.
-| `--classname` | The sink's class name if `archive` is file-url-path (file://).
-| `--cpu` | The CPU (in cores) that needs to be allocated per sink instance (applicable only to Docker runtime).
-| `--custom-schema-inputs` | The map of input topics to schema types or class names (as a JSON string).
-| `--custom-serde-inputs` | The map of input topics to SerDe class names (as a JSON string).
-| `--disk` | The disk (in bytes) that needs to be allocated per sink instance (applicable only to Docker runtime).
-|`-i, --inputs` | The sink's input topic or topics (multiple topics can be specified as a comma-separated list).
-|`--name` | The sink's name.
-| `--namespace` | The sink's namespace.
-| ` --parallelism` | The sink's parallelism factor, that is, the number of sink instances to run.
-| `--processing-guarantees` | The processing guarantees (also known as delivery semantics) applied to the sink. The `--processing-guarantees` implementation in Pulsar also relies on sink implementation.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE.
-| `--ram` | The RAM (in bytes) that needs to be allocated per sink instance (applicable only to the process and Docker runtimes).
-| `--retain-ordering` | Sink consumes and sinks messages in order.
-| `--sink-config` | Sink config key/values.
-| `--sink-config-file` | The path to a YAML config file specifying the sink's configuration.
-| `-t`, `--sink-type` | The sink's connector provider.
-| `--subs-name` | Pulsar source subscription name if the user wants a specific subscription name for the input-topic consumer.
-| `--tenant` | The sink's tenant.
-| `--timeout-ms` | The message timeout in milliseconds.
-| `--topics-pattern` | The topics pattern to consume from a list of topics under a namespace that matches the pattern.
    `--inputs` and `--topics-pattern` are mutually exclusive.
    Add SerDe class name for a pattern in `--customSerdeInputs` (supported for Java functions only).
-| `--update-auth-data` | Whether or not to update the auth data.
    **Default value: false.** - -### `delete` - -Delete a Pulsar IO sink connector. - -#### Usage - -```bash - -$ pulsar-admin sinks delete options - -``` - -#### Option - -|Flag|Description| -|---|---| -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - -### `get` - -Get the information about a Pulsar IO sink connector. - -#### Usage - -```bash - -$ pulsar-admin sinks get options - -``` - -#### Options -|Flag|Description| -|---|---| -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - - -### `status` - -Check the current status of a Pulsar sink. - -#### Usage - -```bash - -$ pulsar-admin sinks status options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The sink ID.
    If `instance-id` is not provided, Pulsar gets status of all instances.| -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - - -### `list` - -List all running Pulsar IO sink connectors. - -#### Usage - -```bash - -$ pulsar-admin sinks list options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - - -### `stop` - -Stop a sink instance. - -#### Usage - -```bash - -$ pulsar-admin sinks stop options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The sink instanceID.
    If `instance-id` is not provided, Pulsar stops all instances.| -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - -### `start` - -Start a sink instance. - -#### Usage - -```bash - -$ pulsar-admin sinks start options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The sink instanceID.
    If `instance-id` is not provided, Pulsar starts all instances.| -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - - -### `restart` - -Restart a sink instance. - -#### Usage - -```bash - -$ pulsar-admin sinks restart options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The sink instanceID.
    If `instance-id` is not provided, Pulsar restarts all instances. -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - - -### `localrun` - -Run a Pulsar IO sink connector locally rather than deploying it to the Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sinks localrun options - -``` - -#### Options - -|Flag|Description| -|----|---| -| `-a`, `--archive` | The path to the archive file for the sink.
    It also supports url-path (http/https/file [the file protocol assumes that the file already exists on the worker host]) from which the worker can download the package.
-| `--auto-ack` | Whether or not the framework will automatically acknowledge messages.
-| `--broker-service-url` | The URL for the Pulsar broker.
-|`--classname`|The sink's class name if `archive` is file-url-path (file://).
-| `--client-auth-params` | Client authentication parameter.
-| `--client-auth-plugin` | The client authentication plugin that the function process uses to connect to the broker.
-|`--cpu`|The CPU (in cores) that needs to be allocated per sink instance (applicable only to the Docker runtime).
-| `--custom-schema-inputs` | The map of input topics to Schema types or class names (as a JSON string).
-| `--max-redeliver-count` | Maximum number of times that a message is redelivered before being sent to the dead letter queue.
-| `--dead-letter-topic` | Name of the dead letter topic where the failing messages are sent.
-| `--custom-serde-inputs` | The map of input topics to SerDe class names (as a JSON string).
-|`--disk`|The disk (in bytes) that needs to be allocated per sink instance (applicable only to the Docker runtime).|
-|`--hostname-verification-enabled`|Enable hostname verification.
    **Default value: false**.
-| `-i`, `--inputs` | The sink's input topic or topics (multiple topics can be specified as a comma-separated list).
-|`--name`|The sink’s name.|
-|`--namespace`|The sink’s namespace.|
-|`--parallelism`|The sink’s parallelism factor, that is, the number of sink instances to run.|
-|`--processing-guarantees`|The processing guarantees (also known as delivery semantics) applied to the sink. The `--processing-guarantees` implementation in Pulsar also relies on sink implementation.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE.
-|`--ram`|The RAM (in bytes) that needs to be allocated per sink instance (applicable only to the Docker runtime).|
-|`--retain-ordering` | Sink consumes and sinks messages in order.
-|`--sink-config`|Sink config key/values.
-|`--sink-config-file`|The path to a YAML config file specifying the sink’s configuration.
-|`--sink-type`|The sink's connector provider.
-|`--subs-name` | Pulsar source subscription name if the user wants a specific subscription name for the input-topic consumer.
-|`--tenant`|The sink’s tenant.
-| `--timeout-ms` | The message timeout in milliseconds.
-| `--negative-ack-redelivery-delay-ms` | The negatively-acknowledged message redelivery delay in milliseconds. |
-|`--tls-allow-insecure`|Allow insecure tls connection.
    **Default value: false**. -|`--tls-trust-cert-path`|The tls trust cert file path. -| `--topics-pattern` | TopicsPattern to consume from list of topics under a namespace that match the pattern.
    `--inputs` and `--topics-pattern` are mutually exclusive.
    Add SerDe class name for a pattern in `--customSerdeInputs` (supported for Java functions only).
-|`--use-tls`|Use tls connection.
    **Default value: false**. - -### `available-sinks` - -Get the list of Pulsar IO connector sinks supported by Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sinks available-sinks - -``` - -### `reload` - -Reload the available built-in connectors. - -#### Usage - -```bash - -$ pulsar-admin sinks reload - -``` - diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/io-connectors.md b/site2/website/versioned_docs/version-2.8.0-deprecated/io-connectors.md deleted file mode 100644 index 8db368e0e70637..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/io-connectors.md +++ /dev/null @@ -1,232 +0,0 @@ ---- -id: io-connectors -title: Built-in connector -sidebar_label: "Built-in connector" -original_id: io-connectors ---- - -Pulsar distribution includes a set of common connectors that have been packaged and tested with the rest of Apache Pulsar. These connectors import and export data from some of the most commonly used data systems. - -Using any of these connectors is as easy as writing a simple connector and running the connector locally or submitting the connector to a Pulsar Functions cluster. - -## Source connector - -Pulsar has various source connectors, which are sorted alphabetically as below. - -### Canal - -* [Configuration](io-canal-source.md#configuration) - -* [Example](io-canal-source.md#usage) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/canal/src/main/java/org/apache/pulsar/io/canal/CanalStringSource.java) - - -### Debezium MySQL - -* [Configuration](io-debezium-source.md#configuration) - -* [Example](io-debezium-source.md#example-of-mysql) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/java/org/apache/pulsar/io/debezium/mysql/DebeziumMysqlSource.java) - -### Debezium PostgreSQL - -* [Configuration](io-debezium-source.md#configuration) - -* [Example](io-debezium-source.md#example-of-postgresql) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/java/org/apache/pulsar/io/debezium/postgres/DebeziumPostgresSource.java) - -### Debezium MongoDB - -* [Configuration](io-debezium-source.md#configuration) - -* [Example](io-debezium-source.md#example-of-mongodb) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mongodb/src/main/java/org/apache/pulsar/io/debezium/mongodb/DebeziumMongoDbSource.java) - -### DynamoDB - -* [Configuration](io-dynamodb-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/dynamodb/src/main/java/org/apache/pulsar/io/dynamodb/DynamoDBSource.java) - -### File - -* [Configuration](io-file-source.md#configuration) - -* [Example](io-file-source.md#usage) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/file/src/main/java/org/apache/pulsar/io/file/FileSource.java) - -### Flume - -* [Configuration](io-flume-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/java/org/apache/pulsar/io/flume/FlumeConnector.java) - -### Twitter firehose - -* [Configuration](io-twitter-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/twitter/src/main/java/org/apache/pulsar/io/twitter/TwitterFireHose.java) - -### Kafka - -* [Configuration](io-kafka-source.md#configuration) - -* [Example](io-kafka-source.md#usage) - -* [Java 
class](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSource.java) - -### Kinesis - -* [Configuration](io-kinesis-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/kinesis/src/main/java/org/apache/pulsar/io/kinesis/KinesisSource.java) - -### Netty - -* [Configuration](io-netty-source.md#configuration) - -* [Example of TCP](io-netty-source.md#tcp) - -* [Example of HTTP](io-netty-source.md#http) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/netty/src/main/java/org/apache/pulsar/io/netty/NettySource.java) - -### NSQ - -* [Configuration](io-nsq-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/nsq/src/main/java/org/apache/pulsar/io/nsq/NSQSource.java) - -### RabbitMQ - -* [Configuration](io-rabbitmq-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/rabbitmq/src/main/java/org/apache/pulsar/io/rabbitmq/RabbitMQSource.java) - -## Sink connector - -Pulsar has various sink connectors, which are sorted alphabetically as below. - -### Aerospike - -* [Configuration](io-aerospike-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/aerospike/src/main/java/org/apache/pulsar/io/aerospike/AerospikeStringSink.java) - -### Cassandra - -* [Configuration](io-cassandra-sink.md#configuration) - -* [Example](io-cassandra-sink.md#usage) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/cassandra/src/main/java/org/apache/pulsar/io/cassandra/CassandraStringSink.java) - -### ElasticSearch - -* [Configuration](io-elasticsearch-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/elastic-search/src/main/java/org/apache/pulsar/io/elasticsearch/ElasticSearchSink.java) - -### Flume - -* [Configuration](io-flume-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/java/org/apache/pulsar/io/flume/sink/StringSink.java) - -### HBase - -* [Configuration](io-hbase-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/hbase/src/main/java/org/apache/pulsar/io/hbase/HbaseAbstractConfig.java) - -### HDFS2 - -* [Configuration](io-hdfs2-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/hdfs2/src/main/java/org/apache/pulsar/io/hdfs2/AbstractHdfsConnector.java) - -### HDFS3 - -* [Configuration](io-hdfs3-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/hdfs3/src/main/java/org/apache/pulsar/io/hdfs3/AbstractHdfsConnector.java) - -### InfluxDB - -* [Configuration](io-influxdb-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/influxdb/src/main/java/org/apache/pulsar/io/influxdb/InfluxDBGenericRecordSink.java) - -### JDBC ClickHouse - -* [Configuration](io-jdbc-sink.md#configuration) - -* [Example](io-jdbc-sink.md#example-for-clickhouse) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/jdbc/clickhouse/src/main/java/org/apache/pulsar/io/jdbc/ClickHouseJdbcAutoSchemaSink.java) - -### JDBC MariaDB - -* [Configuration](io-jdbc-sink.md#configuration) - -* [Example](io-jdbc-sink.md#example-for-mariadb) - -* [Java 
class](https://github.com/apache/pulsar/blob/master/pulsar-io/jdbc/mariadb/src/main/java/org/apache/pulsar/io/jdbc/MariadbJdbcAutoSchemaSink.java)
-
-### JDBC PostgreSQL
-
-* [Configuration](io-jdbc-sink.md#configuration)
-
-* [Example](io-jdbc-sink.md#example-for-postgresql)
-
-* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/jdbc/postgres/src/main/java/org/apache/pulsar/io/jdbc/PostgresJdbcAutoSchemaSink.java)
-
-### JDBC SQLite
-
-* [Configuration](io-jdbc-sink.md#configuration)
-
-* [Example](io-jdbc-sink.md#example-for-sqlite)
-
-* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/jdbc/sqlite/src/main/java/org/apache/pulsar/io/jdbc/SqliteJdbcAutoSchemaSink.java)
-
-### Kafka
-
-* [Configuration](io-kafka-sink.md#configuration)
-
-* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSink.java)
-
-### Kinesis
-
-* [Configuration](io-kinesis-sink.md#configuration)
-
-* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/kinesis/src/main/java/org/apache/pulsar/io/kinesis/KinesisSink.java)
-
-### MongoDB
-
-* [Configuration](io-mongo-sink.md#configuration)
-
-* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/mongo/src/main/java/org/apache/pulsar/io/mongodb/MongoSink.java)
-
-### RabbitMQ
-
-* [Configuration](io-rabbitmq-sink.md#configuration)
-
-* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/rabbitmq/src/main/java/org/apache/pulsar/io/rabbitmq/RabbitMQSink.java)
-
-### Redis
-
-* [Configuration](io-redis-sink.md#configuration)
-
-* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/redis/src/main/java/org/apache/pulsar/io/redis/RedisAbstractConfig.java)
-
-### Solr
-
-* [Configuration](io-solr-sink.md#configuration)
-
-* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/solr/src/main/java/org/apache/pulsar/io/solr/SolrSinkConfig.java)
-
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/io-debezium-source.md b/site2/website/versioned_docs/version-2.8.0-deprecated/io-debezium-source.md
deleted file mode 100644
index 8c3ba0cb20f252..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/io-debezium-source.md
+++ /dev/null
@@ -1,621 +0,0 @@
----
-id: io-debezium-source
-title: Debezium source connector
-sidebar_label: "Debezium source connector"
-original_id: io-debezium-source
----
-
-The Debezium source connector pulls messages from MySQL or PostgreSQL
-and persists the messages to Pulsar topics.
-
-## Configuration
-
-The configuration of the Debezium source connector has the following properties.
-
-| Name | Required | Default | Description |
-|------|----------|---------|-------------|
-| `task.class` | true | null | A source task class that is implemented in Debezium. |
-| `database.hostname` | true | null | The address of a database server. |
-| `database.port` | true | null | The port number of a database server.|
-| `database.user` | true | null | The name of a database user that has the required privileges. |
-| `database.password` | true | null | The password for a database user that has the required privileges. |
-| `database.server.id` | true | null | The connector’s identifier that must be unique within a database cluster and is similar to the database’s server-id configuration property. 
-| `database.server.name` | true | null | The logical name of a database server/cluster, which forms a namespace and is used in all the names of the Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the Avro Connector is used. |
-| `database.whitelist` | false | null | A list of all databases hosted by this server that are monitored by the connector.&#xd;

    This is optional, and there are other properties for listing databases and tables to include or exclude from monitoring. | -| `key.converter` | true | null | The converter provided by Kafka Connect to convert record key. | -| `value.converter` | true | null | The converter provided by Kafka Connect to convert record value. | -| `database.history` | true | null | The name of the database history class. | -| `database.history.pulsar.topic` | true | null | The name of the database history topic where the connector writes and recovers DDL statements.

-    **Note: this topic is for internal use only and should not be used by consumers.** |
-| `database.history.pulsar.service.url` | true | null | Pulsar cluster service URL for history topic. |
-| `pulsar.service.url` | true | null | Pulsar cluster service URL. |
-| `offset.storage.topic` | true | null | Record the last committed offsets that the connector successfully completes. |
-| `json-with-envelope` | false | false | Whether the message consists of the payload only. |
-
-### Converter Options
-
-1. org.apache.kafka.connect.json.JsonConverter
-
-The `json-with-envelope` config is valid only for the JsonConverter. Its default value is false, in which case the consumer uses the schema
-`Schema.KeyValue(Schema.AUTO_CONSUME(), Schema.AUTO_CONSUME(), KeyValueEncodingType.SEPARATED)`,
-and the message consists of the payload only.
-
-If `json-with-envelope` is set to true, the consumer uses the schema
-`Schema.KeyValue(Schema.BYTES, Schema.BYTES)`, and the message consists of both the schema and the payload.
-
-2. org.apache.pulsar.kafka.shade.io.confluent.connect.avro.AvroConverter
-
-If you select the AvroConverter, the Pulsar consumer should use the schema `Schema.KeyValue(Schema.AUTO_CONSUME(),
-Schema.AUTO_CONSUME(), KeyValueEncodingType.SEPARATED)`, and the message consists of the payload.
-
-### MongoDB Configuration
-| Name | Required | Default | Description |
-|------|----------|---------|-------------|
-| `mongodb.hosts` | true | null | The comma-separated list of hostname and port pairs (in the form 'host' or 'host:port') of the MongoDB servers in the replica set. The list contains a single hostname and a port pair. If mongodb.members.auto.discover is set to false, the host and port pair are prefixed with the replica set name (e.g., rs0/localhost:27017). |
-| `mongodb.name` | true | null | A unique name that identifies the connector and/or MongoDB replica set or sharded cluster that this connector monitors. Each server should be monitored by at most one Debezium connector, since this server name prefixes all persisted Kafka topics emanating from the MongoDB replica set or cluster. |
-| `mongodb.user` | true | null | Name of the database user to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. |
-| `mongodb.password` | true | null | Password to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. |
-| `mongodb.task.id` | true | null | The taskId of the MongoDB connector that attempts to use a separate task for each replica set. |
-
-
-
-## Example of MySQL
-
-You need to create a configuration file before using the Pulsar Debezium connector.
-
-### Configuration
-
-You can use one of the following methods to create a configuration file.
- -* JSON - - ```json - - { - "database.hostname": "localhost", - "database.port": "3306", - "database.user": "debezium", - "database.password": "dbz", - "database.server.id": "184054", - "database.server.name": "dbserver1", - "database.whitelist": "inventory", - "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory", - "database.history.pulsar.topic": "history-topic", - "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650", - "key.converter": "org.apache.kafka.connect.json.JsonConverter", - "value.converter": "org.apache.kafka.connect.json.JsonConverter", - "pulsar.service.url": "pulsar://127.0.0.1:6650", - "offset.storage.topic": "offset-topic" - } - - ``` - -* YAML - - You can create a `debezium-mysql-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/resources/debezium-mysql-source-config.yaml) below to the `debezium-mysql-source-config.yaml` file. - - ```yaml - - tenant: "public" - namespace: "default" - name: "debezium-mysql-source" - topicName: "debezium-mysql-topic" - archive: "connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar" - parallelism: 1 - - configs: - - ## config for mysql, docker image: debezium/example-mysql:0.8 - database.hostname: "localhost" - database.port: "3306" - database.user: "debezium" - database.password: "dbz" - database.server.id: "184054" - database.server.name: "dbserver1" - database.whitelist: "inventory" - database.history: "org.apache.pulsar.io.debezium.PulsarDatabaseHistory" - database.history.pulsar.topic: "history-topic" - database.history.pulsar.service.url: "pulsar://127.0.0.1:6650" - - ## KEY_CONVERTER_CLASS_CONFIG, VALUE_CONVERTER_CLASS_CONFIG - key.converter: "org.apache.kafka.connect.json.JsonConverter" - value.converter: "org.apache.kafka.connect.json.JsonConverter" - - ## PULSAR_SERVICE_URL_CONFIG - pulsar.service.url: "pulsar://127.0.0.1:6650" - - ## OFFSET_STORAGE_TOPIC_CONFIG - offset.storage.topic: "offset-topic" - - ``` - -### Usage - -This example shows how to change the data of a MySQL table using the Pulsar Debezium connector. - -1. Start a MySQL server with a database from which Debezium can capture changes. - - ```bash - - $ docker run -it --rm \ - --name mysql \ - -p 3306:3306 \ - -e MYSQL_ROOT_PASSWORD=debezium \ - -e MYSQL_USER=mysqluser \ - -e MYSQL_PASSWORD=mysqlpw debezium/example-mysql:0.8 - - ``` - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - -3. Start the Pulsar Debezium connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - Make sure the nar file is available at `connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar`. 
-
-   ```bash
-
-   $ bin/pulsar-admin source localrun \
-   --archive connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar \
-   --name debezium-mysql-source --destination-topic-name debezium-mysql-topic \
-   --tenant public \
-   --namespace default \
-   --source-config '{"database.hostname": "localhost","database.port": "3306","database.user": "debezium","database.password": "dbz","database.server.id": "184054","database.server.name": "dbserver1","database.whitelist": "inventory","database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory","database.history.pulsar.topic": "history-topic","database.history.pulsar.service.url": "pulsar://127.0.0.1:6650","key.converter": "org.apache.kafka.connect.json.JsonConverter","value.converter": "org.apache.kafka.connect.json.JsonConverter","pulsar.service.url": "pulsar://127.0.0.1:6650","offset.storage.topic": "offset-topic"}'
-
-   ```
-
-   :::note
-
-   Currently, the destination topic (specified by the `destination-topic-name` option) is a required configuration, but the Debezium connector does not use it to save data. The Debezium connector saves data in the following 4 types of topics:
-
-   - One topic named with the database server name (`database.server.name`) for storing the database metadata messages, such as `public/default/database.server.name`.
-   - One topic (`database.history.pulsar.topic`) for storing the database history information. The connector writes and recovers DDL statements on this topic.
-   - One topic (`offset.storage.topic`) for storing the offset metadata messages. The connector saves the last successfully-committed offsets on this topic.
-   - One per-table topic. The connector writes change events for all operations that occur in a table to a single Pulsar topic that is specific to that table.
-
-   If automatic topic creation is disabled on your broker, you need to manually create these 4 types of topics and the destination topic.
-
-   :::
-
-   * Use the **YAML** configuration file as shown previously.
-
-   ```bash
-
-   $ bin/pulsar-admin source localrun \
-   --source-config-file debezium-mysql-source-config.yaml
-
-   ```
-
-4. Subscribe to the topic _sub-products_ for the table _inventory.products_.
-
-   ```bash
-
-   $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0
-
-   ```
-
-5. Start a MySQL client in docker.
-
-   ```bash
-
-   $ docker run -it --rm \
-   --name mysqlterm \
-   --link mysql \
-   mysql:5.7 sh \
-   -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
-
-   ```
-
-6. A MySQL client session opens.
-
-   Use the following commands to change the data of the table _products_.
-
-   ```
-
-   mysql> use inventory;
-   mysql> show tables;
-   mysql> SELECT * FROM products;
-   mysql> UPDATE products SET name='1111111111' WHERE id=101;
-   mysql> UPDATE products SET name='1111111111' WHERE id=107;
-
-   ```
-
-   In the terminal window where you subscribed to the topic, you can see that the data changes have been captured in the _sub-products_ topic.
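-
-Besides the `pulsar-client` CLI used in step 4, you can consume the change events programmatically with the Pulsar Java client. The following is a minimal sketch that is not part of the original walkthrough: it uses the `Schema.KeyValue(Schema.AUTO_CONSUME(), Schema.AUTO_CONSUME(), KeyValueEncodingType.SEPARATED)` schema described in the Converter Options section, and the subscription name is illustrative.
-
-```java
-
-import org.apache.pulsar.client.api.Consumer;
-import org.apache.pulsar.client.api.Message;
-import org.apache.pulsar.client.api.PulsarClient;
-import org.apache.pulsar.client.api.Schema;
-import org.apache.pulsar.client.api.schema.GenericRecord;
-import org.apache.pulsar.common.schema.KeyValue;
-import org.apache.pulsar.common.schema.KeyValueEncodingType;
-
-public class DebeziumEventConsumer {
-    public static void main(String[] args) throws Exception {
-        PulsarClient client = PulsarClient.builder()
-                .serviceUrl("pulsar://127.0.0.1:6650")
-                .build();
-
-        // With the default JsonConverter and json-with-envelope=false, each change
-        // event is a KeyValue record with SEPARATED encoding.
-        Consumer<KeyValue<GenericRecord, GenericRecord>> consumer = client
-                .newConsumer(Schema.KeyValue(Schema.AUTO_CONSUME(), Schema.AUTO_CONSUME(),
-                        KeyValueEncodingType.SEPARATED))
-                .topic("public/default/dbserver1.inventory.products")
-                .subscriptionName("sub-products-java")
-                .subscribe();
-
-        while (true) {
-            Message<KeyValue<GenericRecord, GenericRecord>> msg = consumer.receive();
-            KeyValue<GenericRecord, GenericRecord> event = msg.getValue();
-            GenericRecord key = event.getKey();
-            GenericRecord value = event.getValue();
-            System.out.println("key = " + (key == null ? "null" : key.getNativeObject())
-                    + ", value = " + (value == null ? "null" : value.getNativeObject()));
-            consumer.acknowledge(msg);
-        }
-    }
-}
-
-```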
-
-## Example of PostgreSQL
-
-You need to create a configuration file before using the Pulsar Debezium connector.
-
-### Configuration
-
-You can use one of the following methods to create a configuration file.
-
-* JSON
-
-  ```json
-
-  {
-    "database.hostname": "localhost",
-    "database.port": "5432",
-    "database.user": "postgres",
-    "database.password": "changeme",
-    "database.dbname": "postgres",
-    "database.server.name": "dbserver1",
-    "plugin.name": "pgoutput",
-    "schema.whitelist": "public",
-    "table.whitelist": "public.users",
-    "pulsar.service.url": "pulsar://127.0.0.1:6650"
-  }
-
-  ```
-
-* YAML
-
-  You can create a `debezium-postgres-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/resources/debezium-postgres-source-config.yaml) below to the `debezium-postgres-source-config.yaml` file.
-
-  ```yaml
-
-  tenant: "public"
-  namespace: "default"
-  name: "debezium-postgres-source"
-  topicName: "debezium-postgres-topic"
-  archive: "connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar"
-  parallelism: 1
-
-  configs:
-
-    ## config for postgres version 10+, official docker image: postgres:<10+>
-    database.hostname: "localhost"
-    database.port: "5432"
-    database.user: "postgres"
-    database.password: "changeme"
-    database.dbname: "postgres"
-    database.server.name: "dbserver1"
-    plugin.name: "pgoutput"
-    schema.whitelist: "public"
-    table.whitelist: "public.users"
-
-    ## PULSAR_SERVICE_URL_CONFIG
-    pulsar.service.url: "pulsar://127.0.0.1:6650"
-
-  ```
-
-Notice that `pgoutput` is a standard plugin of PostgreSQL introduced in version 10; see the [PostgreSQL architecture documentation](https://www.postgresql.org/docs/10/logical-replication-architecture.html). You don't need to install anything; just make sure the WAL level is set to `logical` (see the docker command below and the [PostgreSQL documentation](https://www.postgresql.org/docs/current/runtime-config-wal.html)).
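-
-If you want to confirm the WAL level before starting the connector, you can query it from `psql`. This quick check is not part of the original configuration steps, and it assumes the dockerized server started in the Usage section below:
-
-```bash
-
-$ docker exec -it pulsar-postgres psql -U postgres -c 'SHOW wal_level;'
- wal_level
------------
- logical
-(1 row)
-
-```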
-
-### Usage
-
-This example shows how to change the data of a PostgreSQL table using the Pulsar Debezium connector.
-
-
-1. Start a PostgreSQL server with a database from which Debezium can capture changes.
-
-   ```bash
-
-   $ docker run -d -it --rm \
-   --name pulsar-postgres \
-   -p 5432:5432 \
-   -e POSTGRES_PASSWORD=changeme \
-   postgres:13.3 -c wal_level=logical
-
-   ```
-
-2. Start a Pulsar service locally in standalone mode.
-
-   ```bash
-
-   $ bin/pulsar standalone
-
-   ```
-
-3. Start the Pulsar Debezium connector in local run mode using one of the following methods.
-
-   * Use the **JSON** configuration file as shown previously.
-
-     Make sure the nar file is available at `connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar`.
-
-     ```bash
-
-     $ bin/pulsar-admin source localrun \
-     --archive connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar \
-     --name debezium-postgres-source \
-     --destination-topic-name debezium-postgres-topic \
-     --tenant public \
-     --namespace default \
-     --source-config '{"database.hostname": "localhost","database.port": "5432","database.user": "postgres","database.password": "changeme","database.dbname": "postgres","database.server.name": "dbserver1","schema.whitelist": "public","table.whitelist": "public.users","pulsar.service.url": "pulsar://127.0.0.1:6650"}'
-
-     ```
-
-     :::note
-
-     Currently, the destination topic (specified by the `destination-topic-name` option) is a required configuration, but the Debezium connector does not use it to save data. The Debezium connector saves data in the following 4 types of topics:
-
-     - One topic named with the database server name (`database.server.name`) for storing the database metadata messages, such as `public/default/database.server.name`.
-     - One topic (`database.history.pulsar.topic`) for storing the database history information. The connector writes and recovers DDL statements on this topic.
-     - One topic (`offset.storage.topic`) for storing the offset metadata messages. The connector saves the last successfully-committed offsets on this topic.
-     - One per-table topic. The connector writes change events for all operations that occur in a table to a single Pulsar topic that is specific to that table.
-
-     If automatic topic creation is disabled on your broker, you need to manually create these 4 types of topics and the destination topic.
-
-     :::
-
-   * Use the **YAML** configuration file as shown previously.
-
-     ```bash
-
-     $ bin/pulsar-admin source localrun \
-     --source-config-file debezium-postgres-source-config.yaml
-
-     ```
-
-4. Subscribe to the topic _sub-users_ for the _public.users_ table.
-
-   ```
-
-   $ bin/pulsar-client consume -s "sub-users" public/default/dbserver1.public.users -n 0
-
-   ```
-
-5. Start a PostgreSQL client in docker.
-
-   ```bash
-
-   $ docker exec -it pulsar-postgres /bin/bash
-
-   ```
-
-6. A PostgreSQL client session opens.
-
-   Use the following commands to create sample data in the table _users_.
-
-   ```
-
-   psql -U postgres -h localhost -p 5432
-   Password for user postgres:
-
-   CREATE TABLE users(
-     id BIGINT GENERATED ALWAYS AS IDENTITY, PRIMARY KEY(id),
-     hash_firstname TEXT NOT NULL,
-     hash_lastname TEXT NOT NULL,
-     gender VARCHAR(6) NOT NULL CHECK (gender IN ('male', 'female'))
-   );
-
-   INSERT INTO users(hash_firstname, hash_lastname, gender)
-     SELECT md5(RANDOM()::TEXT), md5(RANDOM()::TEXT), CASE WHEN RANDOM() < 0.5 THEN 'male' ELSE 'female' END FROM generate_series(1, 100);
-
-   postgres=# select * from users;
-
-    id |          hash_firstname          |          hash_lastname           | gender
-   ----+----------------------------------+----------------------------------+--------
-     1 | 02bf7880eb489edc624ba637f5ab42bd | 3e742c2cc4217d8e3382cc251415b2fb | female
-     2 | dd07064326bb9119189032316158f064 | 9c0e938f9eddbd5200ba348965afbc61 | male
-     3 | 2c5316fdd9d6595c1cceb70eed12e80c | 8a93d7d8f9d76acfaaa625c82a03ea8b | female
-     4 | 3dfa3b4f70d8cd2155567210e5043d2b | 32c156bc28f7f03ab5d28e2588a3dc19 | female
-
-
-   postgres=# UPDATE users SET hash_firstname='maxim' WHERE id=1;
-   UPDATE 1
-
-   ```
-
-   In the terminal window where you subscribed to the topic, you can receive the following messages.
-
-   ```bash
-
-   ----- got message -----
-   {"before":null,"after":{"id":1,"hash_firstname":"maxim","hash_lastname":"292113d30a3ccee0e19733dd7f88b258","gender":"male"},"source":{"version":"1.0.0.Final","connector":"postgresql","name":"foobar","ts_ms":1624045862644,"snapshot":"false","db":"postgres","schema":"public","table":"users","txId":595,"lsn":24419784,"xmin":null},"op":"u","ts_ms":1624045862648}
-   ...many more
-
-   ```
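-
-If no messages arrive on the topic, a useful check (again, not part of the original walkthrough) is whether Debezium has created an active logical replication slot on the PostgreSQL server:
-
-```bash
-
-$ docker exec -it pulsar-postgres psql -U postgres -c 'SELECT slot_name, plugin, active FROM pg_replication_slots;'
-
-```
-
-The slot should use the `pgoutput` plugin and show `active = t` while the connector is running.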
-
-## Example of MongoDB
-
-You need to create a configuration file before using the Pulsar Debezium connector.
-
-### Configuration
-
-You can use one of the following methods to create a configuration file.
-
-* JSON
-
-  ```json
-
-  {
-    "mongodb.hosts": "rs0/mongodb:27017",
-    "mongodb.name": "dbserver1",
-    "mongodb.user": "debezium",
-    "mongodb.password": "dbz",
-    "mongodb.task.id": "1",
-    "database.whitelist": "inventory",
-    "pulsar.service.url": "pulsar://127.0.0.1:6650"
-  }
-
-  ```
-
-* YAML
-
-  You can create a `debezium-mongodb-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mongodb/src/main/resources/debezium-mongodb-source-config.yaml) below to the `debezium-mongodb-source-config.yaml` file.
-
-  ```yaml
-
-  tenant: "public"
-  namespace: "default"
-  name: "debezium-mongodb-source"
-  topicName: "debezium-mongodb-topic"
-  archive: "connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar"
-  parallelism: 1
-
-  configs:
-
-    ## config for mongodb, docker image: debezium/example-mongodb:0.10
-    mongodb.hosts: "rs0/mongodb:27017"
-    mongodb.name: "dbserver1"
-    mongodb.user: "debezium"
-    mongodb.password: "dbz"
-    mongodb.task.id: "1"
-    database.whitelist: "inventory"
-
-    ## PULSAR_SERVICE_URL_CONFIG
-    pulsar.service.url: "pulsar://127.0.0.1:6650"
-
-  ```
-
-### Usage
-
-This example shows how to change the data of a MongoDB table using the Pulsar Debezium connector.
-
-
-1. Start a MongoDB server with a database from which Debezium can capture changes.
-
-   ```bash
-
-   $ docker pull debezium/example-mongodb:0.10
-   $ docker run -d -it --rm --name pulsar-mongodb -e MONGODB_USER=mongodb -e MONGODB_PASSWORD=mongodb -p 27017:27017 debezium/example-mongodb:0.10
-
-   ```
-
-   Use the following commands to initialize the data.
-
-   ```bash
-
-   ./usr/local/bin/init-inventory.sh
-
-   ```
-
-   If the local host cannot access the container network, you can update the `/etc/hosts` file and add a rule such as `127.0.0.1 f114527a95f`, where `f114527a95f` is the container ID. You can get the container ID with `docker ps -a`.
-
-
-2. Start a Pulsar service locally in standalone mode.
-
-   ```bash
-
-   $ bin/pulsar standalone
-
-   ```
-
-3. Start the Pulsar Debezium connector in local run mode using one of the following methods.
-
-   * Use the **JSON** configuration file as shown previously.
-
-     Make sure the nar file is available at `connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar`.
-
-     ```bash
-
-     $ bin/pulsar-admin source localrun \
-     --archive connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar \
-     --name debezium-mongodb-source \
-     --destination-topic-name debezium-mongodb-topic \
-     --tenant public \
-     --namespace default \
-     --source-config '{"mongodb.hosts": "rs0/mongodb:27017","mongodb.name": "dbserver1","mongodb.user": "debezium","mongodb.password": "dbz","mongodb.task.id": "1","database.whitelist": "inventory","pulsar.service.url": "pulsar://127.0.0.1:6650"}'
-
-     ```
-
-     :::note
-
-     Currently, the destination topic (specified by the `destination-topic-name` option) is a required configuration, but the Debezium connector does not use it to save data. The Debezium connector saves data in the following 4 types of topics:
-
-     - One topic named with the database server name (`database.server.name`) for storing the database metadata messages, such as `public/default/database.server.name`.
-     - One topic (`database.history.pulsar.topic`) for storing the database history information. The connector writes and recovers DDL statements on this topic.
-     - One topic (`offset.storage.topic`) for storing the offset metadata messages. The connector saves the last successfully-committed offsets on this topic.
-     - One per-table topic. The connector writes change events for all operations that occur in a table to a single Pulsar topic that is specific to that table.
-
-     If automatic topic creation is disabled on your broker, you need to manually create these 4 types of topics and the destination topic.
-
-     :::
-
-   * Use the **YAML** configuration file as shown previously.
-
-     ```bash
-
-     $ bin/pulsar-admin source localrun \
-     --source-config-file debezium-mongodb-source-config.yaml
-
-     ```
-
-4. Subscribe to the topic _sub-products_ for the _inventory.products_ table.
-
-   ```
-
-   $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0
-
-   ```
-
-5. Start a MongoDB client in docker.
-
-   ```bash
-
-   $ docker exec -it pulsar-mongodb /bin/bash
-
-   ```
-
-6. A MongoDB client session opens.
-
-   ```bash
-
-   mongo -u debezium -p dbz --authenticationDatabase admin localhost:27017/inventory
-   db.products.update({"_id":NumberLong(104)},{$set:{weight:1.25}})
-
-   ```
-
-   In the terminal window where you subscribed to the topic, you can receive the following messages.
-
-   ```bash
-
-   ----- got message -----
-   {"schema":{"type":"struct","fields":[{"type":"string","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":"104"}}, value = {"schema":{"type":"struct","fields":[{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"after"},{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"patch"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"version"},{"type":"string","optional":false,"field":"connector"},{"type":"string","optional":false,"field":"name"},{"type":"int64","optional":false,"field":"ts_ms"},{"type":"string","optional":true,"name":"io.debezium.data.Enum","version":1,"parameters":{"allowed":"true,last,false"},"default":"false","field":"snapshot"},{"type":"string","optional":false,"field":"db"},{"type":"string","optional":false,"field":"rs"},{"type":"string","optional":false,"field":"collection"},{"type":"int32","optional":false,"field":"ord"},{"type":"int64","optional":true,"field":"h"}],"optional":false,"name":"io.debezium.connector.mongo.Source","field":"source"},{"type":"string","optional":true,"field":"op"},{"type":"int64","optional":true,"field":"ts_ms"}],"optional":false,"name":"dbserver1.inventory.products.Envelope"},"payload":{"after":"{\"_id\": {\"$numberLong\": \"104\"},\"name\": \"hammer\",\"description\": \"12oz carpenter's hammer\",\"weight\": 1.25,\"quantity\": 4}","patch":null,"source":{"version":"0.10.0.Final","connector":"mongodb","name":"dbserver1","ts_ms":1573541905000,"snapshot":"true","db":"inventory","rs":"rs0","collection":"products","ord":1,"h":4983083486544392763},"op":"r","ts_ms":1573541909761}}
-
-   ```
-
-## FAQ
-
-### The Debezium PostgreSQL connector hangs when creating a snapshot
-
-```
-
-#18 prio=5 os_prio=31 tid=0x00007fd83096f800 nid=0xa403 waiting on condition [0x000070000f534000]
-    java.lang.Thread.State: WAITING (parking)
-     at sun.misc.Unsafe.park(Native Method)
-     - parking to wait for  <0x00000007ab025a58> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
-     at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
-     at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
-     at java.util.concurrent.LinkedBlockingDeque.putLast(LinkedBlockingDeque.java:396)
-     at java.util.concurrent.LinkedBlockingDeque.put(LinkedBlockingDeque.java:649)
-     at io.debezium.connector.base.ChangeEventQueue.enqueue(ChangeEventQueue.java:132)
-     at io.debezium.connector.postgresql.PostgresConnectorTask$Lambda$203/385424085.accept(Unknown Source)
-     at io.debezium.connector.postgresql.RecordsSnapshotProducer.sendCurrentRecord(RecordsSnapshotProducer.java:402)
-     at io.debezium.connector.postgresql.RecordsSnapshotProducer.readTable(RecordsSnapshotProducer.java:321)
-     at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$takeSnapshot$6(RecordsSnapshotProducer.java:226)
-     at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$240/1347039967.accept(Unknown Source)
-     at io.debezium.jdbc.JdbcConnection.queryWithBlockingConsumer(JdbcConnection.java:535)
-     at io.debezium.connector.postgresql.RecordsSnapshotProducer.takeSnapshot(RecordsSnapshotProducer.java:224)
-     at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$start$0(RecordsSnapshotProducer.java:87)
-     at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$206/589332928.run(Unknown Source)
-     at java.util.concurrent.CompletableFuture.uniRun(CompletableFuture.java:705)
-     at java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:717)
-     at java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2010)
-     at io.debezium.connector.postgresql.RecordsSnapshotProducer.start(RecordsSnapshotProducer.java:87)
-     at io.debezium.connector.postgresql.PostgresConnectorTask.start(PostgresConnectorTask.java:126)
-     at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:47)
-     at org.apache.pulsar.io.kafka.connect.KafkaConnectSource.open(KafkaConnectSource.java:127)
-     at org.apache.pulsar.io.debezium.DebeziumSource.open(DebeziumSource.java:100)
-     at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupInput(JavaInstanceRunnable.java:690)
-     at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupJavaInstance(JavaInstanceRunnable.java:200)
-     at org.apache.pulsar.functions.instance.JavaInstanceRunnable.run(JavaInstanceRunnable.java:230)
-     at java.lang.Thread.run(Thread.java:748)
-
-```
-
-If you encounter this problem when synchronizing data, refer to [this issue](https://github.com/apache/pulsar/issues/4075) and add the following configuration to the configuration file:
-
-```
-
-max.queue.size=
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/io-debug.md b/site2/website/versioned_docs/version-2.8.0-deprecated/io-debug.md
deleted file mode 100644
index 844e101d00d2a7..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/io-debug.md
+++ /dev/null
@@ -1,407 +0,0 @@
----
-id: io-debug
-title: How to debug Pulsar connectors
-sidebar_label: "Debug"
-original_id: io-debug
----
-This guide explains how to debug connectors in localrun mode or cluster mode and provides a debugging checklist.
-To better demonstrate how to debug Pulsar connectors, this guide takes a Mongo sink connector as an example.
-
-**Deploy a Mongo sink environment**
-1. Start a Mongo service.
-
-   ```bash
-
-   docker pull mongo:4
-   docker run -d -p 27017:27017 --name pulsar-mongo -v $PWD/data:/data/db mongo:4
-
-   ```
-
-2. Create a DB and a collection.
-
-   ```bash
-
-   docker exec -it pulsar-mongo /bin/bash
-   mongo
-   > use pulsar
-   > db.createCollection('messages')
-   > exit
-
-   ```
-
-3. Start Pulsar standalone.
-
-   ```bash
-
-   docker pull apachepulsar/pulsar:2.4.0
-   docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --link pulsar-mongo --name pulsar-mongo-standalone apachepulsar/pulsar:2.4.0 bin/pulsar standalone
-
-   ```
-
-4. Configure the Mongo sink with the `mongo-sink-config.yaml` file.
-
-   ```yaml
-
-   configs:
-     mongoUri: "mongodb://pulsar-mongo:27017"
-     database: "pulsar"
-     collection: "messages"
-     batchSize: 2
-     batchTimeMs: 500
-
-   ```
-
-   ```bash
-
-   docker cp mongo-sink-config.yaml pulsar-mongo-standalone:/pulsar/
-
-   ```
-
-5. Download the Mongo sink nar package.
-
-   ```bash
-
-   docker exec -it pulsar-mongo-standalone /bin/bash
-   curl -O http://apache.01link.hk/pulsar/pulsar-2.4.0/connectors/pulsar-io-mongo-2.4.0.nar
-
-   ```
-
-## Debug in localrun mode
-Start the Mongo sink in localrun mode using the `localrun` command.
-:::tip
-
-For more information about the `localrun` command, see [`localrun`](reference-connector-admin.md/#localrun-1).
-
-:::
-
-```bash
-
-./bin/pulsar-admin sinks localrun \
---archive pulsar-io-mongo-2.4.0.nar \
---tenant public --namespace default \
---inputs test-mongo \
---name pulsar-mongo-sink \
---sink-config-file mongo-sink-config.yaml \
---parallelism 1
-
-```
-
-### Use connector log
-Use one of the following methods to get a connector log in localrun mode:
-* After executing the `localrun` command, the **log is automatically printed on the console**.
-* The log is located at:
-
-  ```bash
-
-  logs/functions/tenant/namespace/function-name/function-name-instance-id.log
-
-  ```
-
-  **Example**
-
-  The path of the Mongo sink connector is:
-
-  ```bash
-
-  logs/functions/public/default/pulsar-mongo-sink/pulsar-mongo-sink-0.log
-
-  ```
-
-To explain the log information clearly, the following breaks the large block of log output into small blocks and adds a description for each block.
-* This piece of log information shows the storage path of the nar package after decompression.
-
-  ```
-
-  08:21:54.132 [main] INFO org.apache.pulsar.common.nar.NarClassLoader - Created class loader with paths: [file:/tmp/pulsar-nar/pulsar-io-mongo-2.4.0.nar-unpacked/, file:/tmp/pulsar-nar/pulsar-io-mongo-2.4.0.nar-unpacked/META-INF/bundled-dependencies/,
-
-  ```
-
-  :::tip
-
-  If a `class cannot be found` exception is thrown, check whether the nar file is decompressed in the folder `file:/tmp/pulsar-nar/pulsar-io-mongo-2.4.0.nar-unpacked/META-INF/bundled-dependencies/` or not.
-
-  :::
-
-* This piece of log information illustrates the basic information about the Mongo sink connector, such as tenant, namespace, name, parallelism, resources, and so on, which can be used to **check whether the Mongo sink connector is configured correctly or not**.
- - ```bash - - 08:21:55.390 [main] INFO org.apache.pulsar.functions.runtime.ThreadRuntime - ThreadContainer starting function with instance config InstanceConfig(instanceId=0, functionId=853d60a1-0c48-44d5-9a5c-6917386476b2, functionVersion=c2ce1458-b69e-4175-88c0-a0a856a2be8c, functionDetails=tenant: "public" - namespace: "default" - name: "pulsar-mongo-sink" - className: "org.apache.pulsar.functions.api.utils.IdentityFunction" - autoAck: true - parallelism: 1 - source { - typeClassName: "[B" - inputSpecs { - key: "test-mongo" - value { - } - } - cleanupSubscription: true - } - sink { - className: "org.apache.pulsar.io.mongodb.MongoSink" - configs: "{\"mongoUri\":\"mongodb://pulsar-mongo:27017\",\"database\":\"pulsar\",\"collection\":\"messages\",\"batchSize\":2,\"batchTimeMs\":500}" - typeClassName: "[B" - } - resources { - cpu: 1.0 - ram: 1073741824 - disk: 10737418240 - } - componentType: SINK - , maxBufferedTuples=1024, functionAuthenticationSpec=null, port=38459, clusterName=local) - - ``` - -* This piece of log information demonstrates the status of the connections to Mongo and configuration information. - - ```bash - - 08:21:56.231 [cluster-ClusterId{value='5d6396a3c9e77c0569ff00eb', description='null'}-pulsar-mongo:27017] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:1, serverValue:8}] to pulsar-mongo:27017 - 08:21:56.326 [cluster-ClusterId{value='5d6396a3c9e77c0569ff00eb', description='null'}-pulsar-mongo:27017] INFO org.mongodb.driver.cluster - Monitor thread successfully connected to server with description ServerDescription{address=pulsar-mongo:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[4, 2, 0]}, minWireVersion=0, maxWireVersion=8, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=89058800} - - ``` - -* This piece of log information explains the configuration of consumers and clients, including the topic name, subscription name, subscription type, and so on. 
- - ```bash - - 08:21:56.719 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl - Starting Pulsar consumer status recorder with config: { - "topicNames" : [ "test-mongo" ], - "topicsPattern" : null, - "subscriptionName" : "public/default/pulsar-mongo-sink", - "subscriptionType" : "Shared", - "receiverQueueSize" : 1000, - "acknowledgementsGroupTimeMicros" : 100000, - "negativeAckRedeliveryDelayMicros" : 60000000, - "maxTotalReceiverQueueSizeAcrossPartitions" : 50000, - "consumerName" : null, - "ackTimeoutMillis" : 0, - "tickDurationMillis" : 1000, - "priorityLevel" : 0, - "cryptoFailureAction" : "CONSUME", - "properties" : { - "application" : "pulsar-sink", - "id" : "public/default/pulsar-mongo-sink", - "instance_id" : "0" - }, - "readCompacted" : false, - "subscriptionInitialPosition" : "Latest", - "patternAutoDiscoveryPeriod" : 1, - "regexSubscriptionMode" : "PersistentOnly", - "deadLetterPolicy" : null, - "autoUpdatePartitions" : true, - "replicateSubscriptionState" : false, - "resetIncludeHead" : false - } - 08:21:56.726 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl - Pulsar client config: { - "serviceUrl" : "pulsar://localhost:6650", - "authPluginClassName" : null, - "authParams" : null, - "operationTimeoutMs" : 30000, - "statsIntervalSeconds" : 60, - "numIoThreads" : 1, - "numListenerThreads" : 1, - "connectionsPerBroker" : 1, - "useTcpNoDelay" : true, - "useTls" : false, - "tlsTrustCertsFilePath" : null, - "tlsAllowInsecureConnection" : false, - "tlsHostnameVerificationEnable" : false, - "concurrentLookupRequest" : 5000, - "maxLookupRequest" : 50000, - "maxNumberOfRejectedRequestPerConnection" : 50, - "keepAliveIntervalSeconds" : 30, - "connectionTimeoutMs" : 10000, - "requestTimeoutMs" : 60000, - "defaultBackoffIntervalNanos" : 100000000, - "maxBackoffIntervalNanos" : 30000000000 - } - - ``` - -## Debug in cluster mode -You can use the following methods to debug a connector in cluster mode: -* [Use connector log](#use-connector-log) -* [Use admin CLI](#use-admin-cli) -### Use connector log -In cluster mode, multiple connectors can run on a worker. To find the log path of a specified connector, use the `workerId` to locate the connector log. -### Use admin CLI -Pulsar admin CLI helps you debug Pulsar connectors with the following subcommands: -* [`get`](#get) - -* [`status`](#status) -* [`topics stats`](#topics-stats) - -**Create a Mongo sink** - -```bash - -./bin/pulsar-admin sinks create \ ---archive pulsar-io-mongo-2.4.0.nar \ ---tenant public \ ---namespace default \ ---inputs test-mongo \ ---name pulsar-mongo-sink \ ---sink-config-file mongo-sink-config.yaml \ ---parallelism 1 - -``` - -### `get` -Use the `get` command to get the basic information about the Mongo sink connector, such as tenant, namespace, name, parallelism, and so on. 
-
-```bash
-
-./bin/pulsar-admin sinks get --tenant public --namespace default --name pulsar-mongo-sink
-{
-  "tenant": "public",
-  "namespace": "default",
-  "name": "pulsar-mongo-sink",
-  "className": "org.apache.pulsar.io.mongodb.MongoSink",
-  "inputSpecs": {
-    "test-mongo": {
-      "isRegexPattern": false
-    }
-  },
-  "configs": {
-    "mongoUri": "mongodb://pulsar-mongo:27017",
-    "database": "pulsar",
-    "collection": "messages",
-    "batchSize": 2.0,
-    "batchTimeMs": 500.0
-  },
-  "parallelism": 1,
-  "processingGuarantees": "ATLEAST_ONCE",
-  "retainOrdering": false,
-  "autoAck": true
-}
-
-```
-
-:::tip
-
-For more information about the `get` command, see [`get`](reference-connector-admin.md/#get-1).
-
-:::
-
-### `status`
-Use the `status` command to get the current status of the Mongo sink connector, such as the number of instances, the number of running instances, the `instanceId`, the `workerId`, and so on.
-
-```bash
-
-./bin/pulsar-admin sinks status \
---tenant public \
---namespace default \
---name pulsar-mongo-sink
-{
-"numInstances" : 1,
-"numRunning" : 1,
-"instances" : [ {
-  "instanceId" : 0,
-  "status" : {
-    "running" : true,
-    "error" : "",
-    "numRestarts" : 0,
-    "numReadFromPulsar" : 0,
-    "numSystemExceptions" : 0,
-    "latestSystemExceptions" : [ ],
-    "numSinkExceptions" : 0,
-    "latestSinkExceptions" : [ ],
-    "numWrittenToSink" : 0,
-    "lastReceivedTime" : 0,
-    "workerId" : "c-standalone-fw-5d202832fd18-8080"
-  }
-} ]
-}
-
-```
-
-:::tip
-
-For more information about the `status` command, see [`status`](reference-connector-admin.md/#status-1).
-If there are multiple connectors running on a worker, `workerId` can locate the worker on which the specified connector is running.
-
-:::
-
-### `topics stats`
-Use the `topics stats` command to get the stats for a topic and its connected producers and consumers, such as whether the topic has received messages or not, whether there is a backlog of messages or not, the available permits, and other key information. All rates are computed over a 1-minute window and are relative to the last completed 1-minute period.
-
-```bash
-
-./bin/pulsar-admin topics stats test-mongo
-{
-  "msgRateIn" : 0.0,
-  "msgThroughputIn" : 0.0,
-  "msgRateOut" : 0.0,
-  "msgThroughputOut" : 0.0,
-  "averageMsgSize" : 0.0,
-  "storageSize" : 1,
-  "publishers" : [ ],
-  "subscriptions" : {
-    "public/default/pulsar-mongo-sink" : {
-      "msgRateOut" : 0.0,
-      "msgThroughputOut" : 0.0,
-      "msgRateRedeliver" : 0.0,
-      "msgBacklog" : 0,
-      "blockedSubscriptionOnUnackedMsgs" : false,
-      "msgDelayed" : 0,
-      "unackedMessages" : 0,
-      "type" : "Shared",
-      "msgRateExpired" : 0.0,
-      "consumers" : [ {
-        "msgRateOut" : 0.0,
-        "msgThroughputOut" : 0.0,
-        "msgRateRedeliver" : 0.0,
-        "consumerName" : "dffdd",
-        "availablePermits" : 999,
-        "unackedMessages" : 0,
-        "blockedConsumerOnUnackedMsgs" : false,
-        "metadata" : {
-          "instance_id" : "0",
-          "application" : "pulsar-sink",
-          "id" : "public/default/pulsar-mongo-sink"
-        },
-        "connectedSince" : "2019-08-26T08:48:07.582Z",
-        "clientVersion" : "2.4.0",
-        "address" : "/172.17.0.3:57790"
-      } ],
-      "isReplicated" : false
-    }
-  },
-  "replication" : { },
-  "deduplicationStatus" : "Disabled"
-}
-
-```
-
-:::tip
-
-For more information about the `topic stats` command, see [`topic stats`](http://pulsar.apache.org/docs/en/pulsar-admin/#stats-1).
-
-:::
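-
-When you wait for a sink to drain its backlog, it can be convenient to poll only the subscription part of the stats output. The following one-liner is a convenience sketch, assuming `watch` and `jq` are installed; it is not part of the Pulsar tooling itself:
-
-```bash
-
-# Re-run `topics stats` every 5 seconds and print only the subscription stats;
-# msgBacklog and unackedMessages indicate whether the sink keeps up.
-watch -n 5 "./bin/pulsar-admin topics stats test-mongo | jq '.subscriptions'"
-
-```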
-## Checklist
-This checklist indicates the major areas to check when you debug connectors. It is a reminder of what to look for to ensure a thorough review and an evaluation tool to get the status of connectors.
-* Does Pulsar start successfully?
-
-* Does the external service run normally?
-
-* Is the nar package complete?
-
-* Is the connector configuration file correct?
-
-* In localrun mode, run a connector and check the printed information (connector log) on the console.
-
-* In cluster mode:
-
-  * Use the `get` command to get the basic information.
-
-  * Use the `status` command to get the current status.
-  * Use the `topics stats` command to get the stats for a specified topic and its connected producers and consumers.
-
-  * Check the connector log.
-* Enter the external system and verify the result.
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/io-develop.md b/site2/website/versioned_docs/version-2.8.0-deprecated/io-develop.md
deleted file mode 100644
index d6f4f8261ac820..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/io-develop.md
+++ /dev/null
@@ -1,421 +0,0 @@
----
-id: io-develop
-title: How to develop Pulsar connectors
-sidebar_label: "Develop"
-original_id: io-develop
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-This guide describes how to develop Pulsar connectors to move data
-between Pulsar and other systems.
-
-Pulsar connectors are special [Pulsar Functions](functions-overview.md), so creating
-a Pulsar connector is similar to creating a Pulsar function.
-
-Pulsar connectors come in two types:
-
-| Type | Description | Example
-|---|---|---
-{@inject: github:Source:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java}|Import data from another system to Pulsar.|[RabbitMQ source connector](io-rabbitmq.md) imports the messages of a RabbitMQ queue to a Pulsar topic.
-{@inject: github:Sink:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java}|Export data from Pulsar to another system.|[Kinesis sink connector](io-kinesis.md) exports the messages of a Pulsar topic to a Kinesis stream.
-
-## Develop
-
-You can develop Pulsar source connectors and sink connectors.
-
-### Source
-
-Developing a source connector means implementing the {@inject: github:Source:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java}
-interface, which means you need to implement the {@inject: github:open:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} method and the {@inject: github:read:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} method.
-
-1. Implement the {@inject: github:open:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} method.
-
-   ```java
-
-   /**
-    * Open connector with configuration
-    *
-    * @param config initialization config
-    * @param sourceContext
-    * @throws Exception IO type exceptions when opening a connector
-    */
-   void open(final Map<String, Object> config, SourceContext sourceContext) throws Exception;
-
-   ```
-
-   This method is called when the source connector is initialized.
-
-   In this method, you can retrieve all connector-specific settings through the passed-in `config` parameter and initialize all necessary resources.
-
-   For example, a Kafka connector can create a Kafka client in this `open` method.
-
-   Besides, Pulsar runtime also provides a `SourceContext` for the
-   connector to access runtime resources for tasks like collecting metrics. The implementation can save the `SourceContext` for future use.
-
-2. Implement the {@inject: github:read:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} method.
-
-   ```java
-
-   /**
-    * Reads the next message from source.
-    * If the source does not have any new messages, this call should block.
-    * @return next message from source. The returned result should never be null
-    * @throws Exception
-    */
-   Record<T> read() throws Exception;
-
-   ```
-
-   If there is nothing to return, the implementation should block rather than return `null`.
-
-   The returned {@inject: github:Record:/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/Record.java} should encapsulate the following information, which is needed by the Pulsar IO runtime.
-
-   * {@inject: github:Record:/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/Record.java} should provide the following variables:
-
-     |Variable|Required|Description
-     |---|---|---
-     `TopicName`|No|The Pulsar topic name from which the record originated.
-     `Key`|No| Messages can optionally be tagged with keys.&#xd;

-    For more information, see [Routing modes](concepts-messaging.md#routing-modes).|
-     `Value`|Yes|Actual data of the record.
-     `EventTime`|No|Event time of the record from the source.
-     `PartitionId`|No| If the record originates from a partitioned source, it returns its `PartitionId`.&#xd;

-    `PartitionId` is used as a part of the unique identifier by the Pulsar IO runtime to deduplicate messages and achieve the exactly-once processing guarantee.
-     `RecordSequence`|No|If the record originates from a sequential source, it returns its `RecordSequence`.&#xd;

-    `RecordSequence` is used as a part of the unique identifier by the Pulsar IO runtime to deduplicate messages and achieve the exactly-once processing guarantee.
-     `Properties` |No| If the record carries user-defined properties, it returns those properties.
-     `DestinationTopic`|No|The topic to which the message should be written.
-     `Message`|No|A class that carries the data sent by users.&#xd;

-    For more information, see [Message.java](https://github.com/apache/pulsar/blob/master/pulsar-client-api/src/main/java/org/apache/pulsar/client/api/Message.java).|
-
-   * {@inject: github:Record:/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/Record.java} should provide the following methods:
-
-     Method|Description
-     |---|---
-     `ack` |Acknowledge that the record is fully processed.
-     `fail`|Indicate that the record fails to be processed.
-
-## Handle schema information
-
-Pulsar IO automatically handles the schema and provides a strongly typed API based on Java generics.
-If you know the schema type that you are producing, you can declare the Java class relative to that type in your source declaration.
-
-```java
-
-public class MySource implements Source<String> {
-    public Record<String> read() {}
-}
-
-```
-
-If you want to implement a source that works with any schema, you can go with `byte[]` (or `ByteBuffer`) and use `Schema.AUTO_PRODUCE_BYTES()`.
-
-```java
-
-public class MySource implements Source<byte[]> {
-    public Record<byte[]> read() {
-
-        Schema wantedSchema = ....
-        Record<byte[]> myRecord = new MyRecordImplementation();
-        ....
-    }
-    class MyRecordImplementation implements Record<byte[]> {
-        public byte[] getValue() {
-            return ....encoded byte[]...that represents the value
-        }
-        public Schema<byte[]> getSchema() {
-            return Schema.AUTO_PRODUCE_BYTES(wantedSchema);
-        }
-    }
-}
-
-```
-
-To handle the `KeyValue` type properly, follow these guidelines for your record implementation (see the sketch after the tip below):
-- It must implement the {@inject: github:KVRecord:/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/KVRecord.java} interface and implement `getKeySchema`, `getValueSchema`, and `getKeyValueEncodingType`
-- It must return a `KeyValue` object as `Record.getValue()`
-- It may return null in `Record.getSchema()`
-
-When the Pulsar IO runtime encounters a `KVRecord`, it applies the following changes automatically:
-- Sets the `KeyValueSchema` properly
-- Encodes the message key and the message value according to the `KeyValueEncoding` (SEPARATED or INLINE)
-
-:::tip
-
-For more information about **how to create a source connector**, see {@inject: github:KafkaSource:/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSource.java}.
-
-:::
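-
-As a concrete illustration of the `KVRecord` guidelines above, here is a minimal sketch. The class name and schema choices are assumptions made for this example, not a prescribed implementation:
-
-```java
-
-import org.apache.pulsar.client.api.Schema;
-import org.apache.pulsar.common.schema.KeyValue;
-import org.apache.pulsar.common.schema.KeyValueEncodingType;
-import org.apache.pulsar.functions.api.KVRecord;
-
-public class MyKeyValueRecord implements KVRecord<String, byte[]> {
-
-    private final String key;
-    private final byte[] value;
-
-    public MyKeyValueRecord(String key, byte[] value) {
-        this.key = key;
-        this.value = value;
-    }
-
-    @Override
-    public Schema<String> getKeySchema() {
-        return Schema.STRING;
-    }
-
-    @Override
-    public Schema<byte[]> getValueSchema() {
-        return Schema.BYTES;
-    }
-
-    @Override
-    public KeyValueEncodingType getKeyValueEncodingType() {
-        // SEPARATED stores the key in the message key and the value in the payload.
-        return KeyValueEncodingType.SEPARATED;
-    }
-
-    @Override
-    public KeyValue<String, byte[]> getValue() {
-        // The KeyValue object is returned from Record.getValue(), per the guidelines above.
-        return new KeyValue<>(key, value);
-    }
-}
-
-```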
-
-### Sink
-
-Developing a sink connector **is similar to** developing a source connector, that is, you need to implement the {@inject: github:Sink:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} interface, which means implementing the {@inject: github:open:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} method and the {@inject: github:write:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} method.
-
-1. Implement the {@inject: github:open:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} method.
-
-   ```java
-
-   /**
-    * Open connector with configuration
-    *
-    * @param config initialization config
-    * @param sinkContext
-    * @throws Exception IO type exceptions when opening a connector
-    */
-   void open(final Map<String, Object> config, SinkContext sinkContext) throws Exception;
-
-   ```
-
-2. Implement the {@inject: github:write:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} method.
-
-   ```java
-
-   /**
-    * Write a message to Sink
-    * @param record record to write to sink
-    * @throws Exception
-    */
-   void write(Record<T> record) throws Exception;
-
-   ```
-
-   During the implementation, you can decide how to write the `Value` and
-   the `Key` to the actual sink, and leverage all the provided information such as
-   `PartitionId` and `RecordSequence` to achieve different processing guarantees.
-
-   You also need to ack records (if messages are sent successfully) or fail records (if messages fail to be sent).
-
-## Handling Schema information
-
-Pulsar IO automatically handles the schema and provides a strongly typed API based on Java generics.
-If you know the schema type that you are consuming from, you can declare the Java class relative to that type in your sink declaration.
-
-```java
-
-public class MySink implements Sink<String> {
-    public void write(Record<String> record) {}
-}
-
-```
-
-If you want to implement a sink that works with any schema, you can go with the special `GenericObject` interface.
-
-```java
-
-public class MySink implements Sink<GenericObject> {
-    public void write(Record<GenericObject> record) {
-        Schema schema = record.getSchema();
-        GenericObject genericObject = record.getValue();
-        if (genericObject != null) {
-            SchemaType type = genericObject.getSchemaType();
-            Object nativeObject = genericObject.getNativeObject();
-            ...
-        }
-        ....
-    }
-}
-
-```
-
-In the case of AVRO, JSON, and Protobuf records (schemaType=AVRO,JSON,PROTOBUF_NATIVE), you can cast the
-`genericObject` variable to `GenericRecord` and use the `getFields()` and `getField()` APIs.
-You are able to access the native AVRO record using `genericObject.getNativeObject()`.
-
-In the case of the KeyValue type, you can access both the schema for the key and the schema for the value using this code.
-
-```java
-
-public class MySink implements Sink<GenericObject> {
-    public void write(Record<GenericObject> record) {
-        Schema schema = record.getSchema();
-        GenericObject genericObject = record.getValue();
-        SchemaType type = genericObject.getSchemaType();
-        Object nativeObject = genericObject.getNativeObject();
-        if (type == SchemaType.KEY_VALUE) {
-            KeyValue keyValue = (KeyValue) nativeObject;
-            Object key = keyValue.getKey();
-            Object value = keyValue.getValue();
-
-            KeyValueSchema keyValueSchema = (KeyValueSchema) schema;
-            Schema keySchema = keyValueSchema.getKeySchema();
-            Schema valueSchema = keyValueSchema.getValueSchema();
-        }
-        ....
-    }
-}
-
-```
-
-## Test
-
-Testing connectors can be challenging because Pulsar IO connectors interact with two systems
-that may be difficult to mock: Pulsar and the system to which the connector is connecting.
-
-It is recommended to write targeted tests that verify the connector functionality as below
-while mocking the external service.
-
-### Unit test
-
-You can create unit tests for your connector.
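-
-For example, the `write` logic of a sink can be unit-tested without a running Pulsar cluster by feeding it a hand-rolled `Record`. In the following minimal sketch, `MyInMemorySink` (a sink that simply appends record values to a list) and its `getBufferedValues` method are hypothetical; only the `Record` and `Sink` interfaces come from Pulsar:
-
-```java
-
-import java.util.Collections;
-import java.util.Optional;
-import org.apache.pulsar.functions.api.Record;
-import org.testng.Assert;
-import org.testng.annotations.Test;
-
-public class MySinkTest {
-
-    // A minimal in-memory Record; unimplemented methods fall back to the
-    // interface's default implementations.
-    private static Record<String> recordOf(String value) {
-        return new Record<String>() {
-            @Override
-            public String getValue() {
-                return value;
-            }
-
-            @Override
-            public Optional<String> getKey() {
-                return Optional.of("test-key");
-            }
-        };
-    }
-
-    @Test
-    public void testWriteBuffersTheRecord() throws Exception {
-        MyInMemorySink sink = new MyInMemorySink();
-        sink.open(Collections.emptyMap(), null);
-
-        sink.write(recordOf("hello"));
-
-        Assert.assertEquals(sink.getBufferedValues().size(), 1);
-        Assert.assertEquals(sink.getBufferedValues().get(0), "hello");
-    }
-}
-
-```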
-
-### Integration test
-
-Once you have written sufficient unit tests, you can add
-separate integration tests to verify end-to-end functionality.
-
-Pulsar uses [testcontainers](https://www.testcontainers.org/) **for all integration tests**.
-
-:::tip
-
-For more information about **how to create integration tests for Pulsar connectors**, see {@inject: github:IntegrationTests:/tests/integration/src/test/java/org/apache/pulsar/tests/integration/io}.
-
-:::
-
-## Package
-
-Once you've developed and tested your connector, you need to package it so that it can be submitted
-to a [Pulsar Functions](functions-overview.md) cluster.
-
-There are two methods to
-work with Pulsar Functions' runtime, that is, [NAR](#nar) and [uber JAR](#uber-jar).
-
-:::note
-
-If you plan to package and distribute your connector for others to use, you are obligated to
-license and copyright your own code properly. Remember to add the license and copyright to
-all libraries your code uses and to your distribution.
-
-If you use the [NAR](#nar) method, the NAR plugin
-automatically creates a `DEPENDENCIES` file in the generated NAR package, including the proper
-licensing and copyrights of all libraries of your connector.
-
-:::
-
-### NAR
-
-**NAR** stands for NiFi Archive, which is a custom packaging mechanism used by Apache NiFi to provide
-a bit of Java ClassLoader isolation.
-
-:::tip
-
-For more information about **how NAR works**, see [here](https://medium.com/hashmapinc/nifi-nar-files-explained-14113f7796fd).
-
-:::
-
-Pulsar uses the same mechanism for packaging **all** [built-in connectors](io-connectors.md).
-
-The easiest approach to packaging a Pulsar connector is to create a NAR package using [nifi-nar-maven-plugin](https://mvnrepository.com/artifact/org.apache.nifi/nifi-nar-maven-plugin).
-
-Include this [nifi-nar-maven-plugin](https://mvnrepository.com/artifact/org.apache.nifi/nifi-nar-maven-plugin) in your maven project for your connector as below.
-
-```xml
-
-<plugins>
-  <plugin>
-    <groupId>org.apache.nifi</groupId>
-    <artifactId>nifi-nar-maven-plugin</artifactId>
-    <version>1.2.0</version>
-  </plugin>
-</plugins>
-
-```
-
-You must also create a `resources/META-INF/services/pulsar-io.yaml` file with the following contents:
-
-```yaml
-
-name: connector name
-description: connector description
-sourceClass: fully qualified class name (only if source connector)
-sinkClass: fully qualified class name (only if sink connector)
-
-```
-
-For Gradle users, there is a [Gradle Nar plugin available on the Gradle Plugin Portal](https://plugins.gradle.org/plugin/io.github.lhotari.gradle-nar-plugin).
-
-:::tip
-
-For more information about **how to use NAR for Pulsar connectors**, see {@inject: github:TwitterFirehose:/pulsar-io/twitter/pom.xml}.
-
-:::
-
-### Uber JAR
-
-An alternative approach is to create an **uber JAR** that contains all of the connector's JAR files
-and other resource files. No internal directory structure is necessary.
-
-You can use [maven-shade-plugin](https://maven.apache.org/plugins/maven-shade-plugin/examples/includes-excludes.html) to create an uber JAR as below:
-
-```xml
-
-<plugin>
-  <groupId>org.apache.maven.plugins</groupId>
-  <artifactId>maven-shade-plugin</artifactId>
-  <version>3.1.1</version>
-  <executions>
-    <execution>
-      <phase>package</phase>
-      <goals>
-        <goal>shade</goal>
-      </goals>
-      <configuration>
-        <filters>
-          <filter>
-            <artifact>*:*</artifact>
-          </filter>
-        </filters>
-      </configuration>
-    </execution>
-  </executions>
-</plugin>
-
-```
-
-## Monitor
-
-Pulsar connectors enable you to move data in and out of Pulsar easily. It is important to ensure that the running connectors are healthy at any time. You can monitor Pulsar connectors that have been deployed with the following methods:
-
-- Check the metrics provided by Pulsar.
-
-  Pulsar connectors expose the metrics that can be collected and used for monitoring the health of **Java** connectors. You can check the metrics by following the [monitoring](deploy-monitoring.md) guide.
-
-- Set and check your customized metrics.
-
-  In addition to the metrics provided by Pulsar, Pulsar allows you to customize metrics for **Java** connectors. Function workers collect user-defined metrics to Prometheus automatically and you can check them in Grafana.
-
-Here is an example of how to customize metrics for a Java connector.
-
-````mdx-code-block
-<Tabs
-  defaultValue="Java"
-  values={[{"label":"Java","value":"Java"}]}>
-<TabItem value="Java">
-
-```java
-
-public class TestMetricSink implements Sink<String> {
-
-    @Override
-    public void open(Map<String, Object> config, SinkContext sinkContext) throws Exception {
-        sinkContext.recordMetric("foo", 1);
-    }
-
-    @Override
-    public void write(Record<String> record) throws Exception {
-
-    }
-
-    @Override
-    public void close() throws Exception {
-
-    }
-}
-
-```
-
-</TabItem>
-
-</Tabs>
-````
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/io-dynamodb-source.md b/site2/website/versioned_docs/version-2.8.0-deprecated/io-dynamodb-source.md
deleted file mode 100644
index ce585786eb0428..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/io-dynamodb-source.md
+++ /dev/null
@@ -1,80 +0,0 @@
----
-id: io-dynamodb-source
-title: AWS DynamoDB source connector
-sidebar_label: "AWS DynamoDB source connector"
-original_id: io-dynamodb-source
----
-
-The DynamoDB source connector pulls data from DynamoDB table streams and persists data into Pulsar.
-
-This connector uses the [DynamoDB Streams Kinesis Adapter](https://github.com/awslabs/dynamodb-streams-kinesis-adapter),
-which uses the [Kinesis Client Library](https://github.com/awslabs/amazon-kinesis-client) (KCL) to do the actual
-consuming of messages. The KCL uses DynamoDB to track state for consumers and requires CloudWatch access to log metrics.
-
-
-## Configuration
-
-The configuration of the DynamoDB source connector has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-`initialPositionInStream`|InitialPositionInStream|false|LATEST|The position where the connector starts from.&#xd;

    Below are the available options:

-  1. `AT_TIMESTAMP`: start from the record at or after the specified timestamp.&#xd;

-  2. `LATEST`: start after the most recent data record.&#xd;

-  3. `TRIM_HORIZON`: start from the oldest available data record.
-`startAtTime`|Date|false|" " (empty string)|If `initialPositionInStream` is set to `AT_TIMESTAMP`, it specifies the point in time to start consumption.
-`applicationName`|String|false|Pulsar IO connector|The name of the KCL application. Must be unique, as it is used to define the table name for the DynamoDB table used for state tracking.&#xd;

-    By default, the application name is included in the user agent string used to make AWS requests. This can assist with troubleshooting, for example, distinguishing requests made by separate connector instances.
-`checkpointInterval`|long|false|60000|The frequency of the KCL checkpoint, in milliseconds.
-`backoffTime`|long|false|3000|The amount of time, in milliseconds, to delay between requests when the connector encounters a throttling exception from AWS Kinesis.
-`numRetries`|int|false|3|The number of re-attempts when the connector encounters an exception while trying to set a checkpoint.
-`receiveQueueSize`|int|false|1000|The maximum number of AWS records that can be buffered inside the connector.&#xd;

-    Once the `receiveQueueSize` is reached, the connector does not consume any messages from Kinesis until some messages in the queue are successfully consumed.
-`dynamoEndpoint`|String|false|" " (empty string)|The DynamoDB endpoint URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
-`cloudwatchEndpoint`|String|false|" " (empty string)|The CloudWatch endpoint URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
-`awsEndpoint`|String|false|" " (empty string)|The DynamoDB Streams endpoint URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
-`awsRegion`|String|false|" " (empty string)|The AWS region.&#xd;

    **Example**
    us-west-1, us-west-2 -`awsDynamodbStreamArn`|String|true|" " (empty string)|The DynamoDB stream ARN. -`awsCredentialPluginName`|String|false|" " (empty string)|The fully-qualified class name of the implementation of {@inject: github:AwsCredentialProviderPlugin:/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java}.

    `awsCredentialProviderPlugin` has the following built-in plugins:

  * `org.apache.pulsar.io.kinesis.AwsDefaultProviderChainPlugin`:
    this plugin uses the default AWS provider chain.
    For more information, see [using the default credential provider chain](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html#credentials-default).

  * `org.apache.pulsar.io.kinesis.STSAssumeRoleProviderPlugin`:
    this plugin takes a configuration via the `awsCredentialPluginParam` that describes a role to assume when running the KCL.
    **JSON configuration example**
    `{"roleArn": "arn...", "roleSessionName": "name"}`

    `awsCredentialPluginName` is a factory class that creates the AWSCredentialsProvider used by the connector.

    If `awsCredentialPluginName` is set to empty, the connector creates a default AWSCredentialsProvider that accepts a JSON map of credentials in `awsCredentialPluginParam`.
  579. -`awsCredentialPluginParam`|String |false|" " (empty string)|The JSON parameter to initialize `awsCredentialsProviderPlugin`. - -### Example - -Before using the DynamoDB source connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "awsEndpoint": "https://some.endpoint.aws", - "awsRegion": "us-east-1", - "awsDynamodbStreamArn": "arn:aws:dynamodb:us-west-2:111122223333:table/TestTable/stream/2015-05-11T21:21:33.291", - "awsCredentialPluginParam": "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}", - "applicationName": "My test application", - "checkpointInterval": "30000", - "backoffTime": "4000", - "numRetries": "3", - "receiveQueueSize": 2000, - "initialPositionInStream": "TRIM_HORIZON", - "startAtTime": "2019-03-05T19:28:58.000Z" - } - - ``` - -* YAML - - ```yaml - - configs: - awsEndpoint: "https://some.endpoint.aws" - awsRegion: "us-east-1" - awsDynamodbStreamArn: "arn:aws:dynamodb:us-west-2:111122223333:table/TestTable/stream/2015-05-11T21:21:33.291" - awsCredentialPluginParam: "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}" - applicationName: "My test application" - checkpointInterval: 30000 - backoffTime: 4000 - numRetries: 3 - receiveQueueSize: 2000 - initialPositionInStream: "TRIM_HORIZON" - startAtTime: "2019-03-05T19:28:58.000Z" - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/io-elasticsearch-sink.md b/site2/website/versioned_docs/version-2.8.0-deprecated/io-elasticsearch-sink.md deleted file mode 100644 index 4acedd3dd0788d..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/io-elasticsearch-sink.md +++ /dev/null @@ -1,173 +0,0 @@ ---- -id: io-elasticsearch-sink -title: ElasticSearch sink connector -sidebar_label: "ElasticSearch sink connector" -original_id: io-elasticsearch-sink ---- - -The ElasticSearch sink connector pulls messages from Pulsar topics and persists the messages to indexes. - -## Configuration - -The configuration of the ElasticSearch sink connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `elasticSearchUrl` | String| true |" " (empty string)| The URL of elastic search cluster to which the connector connects. | -| `indexName` | String| true |" " (empty string)| The index name to which the connector writes messages. | -| `typeName` | String | false | "_doc" | The type name to which the connector writes messages to.

    The value should be set explicitly to a valid type name other than "_doc" for Elasticsearch versions before 6.2, and left at the default otherwise. | -| `indexNumberOfShards` | int| false |1| The number of shards of the index. | -| `indexNumberOfReplicas` | int| false |1 | The number of replicas of the index. | -| `username` | String| false |" " (empty string)| The username used by the connector to connect to the Elasticsearch cluster.

    If `username` is set, then `password` should also be provided. | -| `password` | String| false | " " (empty string)|The password used by the connector to connect to the Elasticsearch cluster.

    If `username` is set, then `password` should also be provided. | - -## Example - -Before using the ElasticSearch sink connector, you need to create a configuration file through one of the following methods. - -### Configuration - -#### For Elasticsearch After 6.2 - -* JSON - - ```json - - { - "elasticSearchUrl": "http://localhost:9200", - "indexName": "my_index", - "username": "scooby", - "password": "doobie" - } - - ``` - -* YAML - - ```yaml - - configs: - elasticSearchUrl: "http://localhost:9200" - indexName: "my_index" - username: "scooby" - password: "doobie" - - ``` - -#### For Elasticsearch Before 6.2 - -* JSON - - ```json - - { - "elasticSearchUrl": "http://localhost:9200", - "indexName": "my_index", - "typeName": "doc", - "username": "scooby", - "password": "doobie" - } - - ``` - -* YAML - - ```yaml - - configs: - elasticSearchUrl: "http://localhost:9200" - indexName: "my_index" - typeName: "doc" - username: "scooby" - password: "doobie" - - ``` - -### Usage - -1. Start a single node Elasticsearch cluster. - - ```bash - - $ docker run -p 9200:9200 -p 9300:9300 \ - -e "discovery.type=single-node" \ - docker.elastic.co/elasticsearch/elasticsearch:7.5.1 - - ``` - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - - Make sure the NAR file is available at `connectors/pulsar-io-elastic-search-@pulsar:version@.nar`. - -3. Start the Pulsar Elasticsearch connector in local run mode using one of the following methods. - * Use the **JSON** configuration as shown previously. - - ```bash - - $ bin/pulsar-admin sinks localrun \ - --archive connectors/pulsar-io-elastic-search-@pulsar:version@.nar \ - --tenant public \ - --namespace default \ - --name elasticsearch-test-sink \ - --sink-config '{"elasticSearchUrl":"http://localhost:9200","indexName": "my_index","username": "scooby","password": "doobie"}' \ - --inputs elasticsearch_test - - ``` - - * Use the **YAML** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin sinks localrun \ - --archive connectors/pulsar-io-elastic-search-@pulsar:version@.nar \ - --tenant public \ - --namespace default \ - --name elasticsearch-test-sink \ - --sink-config-file elasticsearch-sink.yml \ - --inputs elasticsearch_test - - ``` - -4. Publish records to the topic. - - ```bash - - $ bin/pulsar-client produce elasticsearch_test --messages "{\"a\":1}" - - ``` - -5. Check documents in Elasticsearch. - - * refresh the index - - ```bash - - $ curl -s http://localhost:9200/my_index/_refresh - - ``` - - - * search documents - - ```bash - - $ curl -s http://localhost:9200/my_index/_search - - ``` - - You can see the record that published earlier has been successfully written into Elasticsearch. 
- - ```json - - {"took":2,"timed_out":false,"_shards":{"total":1,"successful":1,"skipped":0,"failed":0},"hits":{"total":{"value":1,"relation":"eq"},"max_score":1.0,"hits":[{"_index":"my_index","_type":"_doc","_id":"FSxemm8BLjG_iC0EeTYJ","_score":1.0,"_source":{"a":1}}]}} - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/io-file-source.md b/site2/website/versioned_docs/version-2.8.0-deprecated/io-file-source.md deleted file mode 100644 index e9d710cce65e83..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/io-file-source.md +++ /dev/null @@ -1,160 +0,0 @@ ---- -id: io-file-source -title: File source connector -sidebar_label: "File source connector" -original_id: io-file-source ---- - -The File source connector pulls messages from files in directories and persists the messages to Pulsar topics. - -## Configuration - -The configuration of the File source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `inputDirectory` | String|true | No default value|The input directory to pull files. | -| `recurse` | Boolean|false | true | Whether to pull files from subdirectory or not.| -| `keepFile` |Boolean|false | false | If set to true, the file is not deleted after it is processed, which means the file can be picked up continually. | -| `fileFilter` | String|false| [^\\.].* | The file whose name matches the given regular expression is picked up. | -| `pathFilter` | String |false | NULL | If `recurse` is set to true, the subdirectory whose path matches the given regular expression is scanned. | -| `minimumFileAge` | Integer|false | 0 | The minimum age that a file can be processed.

    Any file younger than `minimumFileAge` (according to the last modification date) is ignored. | -| `maximumFileAge` | Long|false |Long.MAX_VALUE | The maximum age of a file that can be processed.

    Any file older than `maximumFileAge` (according to the last modification date) is ignored. | -| `minimumSize` |Integer| false |1 | The minimum size (in bytes) of a file that can be processed. | -| `maximumSize` | Double|false |Double.MAX_VALUE| The maximum size (in bytes) of a file that can be processed. | -| `ignoreHiddenFiles` |Boolean| false | true| Whether hidden files should be ignored. | -| `pollingInterval`|Long | false | 10000L | Indicates how long to wait before performing a directory listing. | -| `numWorkers` | Integer | false | 1 | The number of worker threads that process files.

    This allows you to process a larger number of files concurrently.

    However, setting this to a value greater than 1 makes the data from multiple files mixed in the target topic. | - -### Example - -Before using the File source connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "inputDirectory": "/Users/david", - "recurse": true, - "keepFile": true, - "fileFilter": "[^\\.].*", - "pathFilter": "*", - "minimumFileAge": 0, - "maximumFileAge": 9999999999, - "minimumSize": 1, - "maximumSize": 5000000, - "ignoreHiddenFiles": true, - "pollingInterval": 5000, - "numWorkers": 1 - } - - ``` - -* YAML - - ```yaml - - configs: - inputDirectory: "/Users/david" - recurse: true - keepFile: true - fileFilter: "[^\\.].*" - pathFilter: "*" - minimumFileAge: 0 - maximumFileAge: 9999999999 - minimumSize: 1 - maximumSize: 5000000 - ignoreHiddenFiles: true - pollingInterval: 5000 - numWorkers: 1 - - ``` - -## Usage - -Here is an example of using the File source connecter. - -1. Pull a Pulsar image. - - ```bash - - $ docker pull apachepulsar/pulsar:{version} - - ``` - -2. Start Pulsar standalone. - - ```bash - - $ docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-standalone apachepulsar/pulsar:{version} bin/pulsar standalone - - ``` - -3. Create a configuration file _file-connector.yaml_. - - ```yaml - - configs: - inputDirectory: "/opt" - - ``` - -4. Copy the configuration file _file-connector.yaml_ to the container. - - ```bash - - $ docker cp connectors/file-connector.yaml pulsar-standalone:/pulsar/ - - ``` - -5. Download the File source connector. - - ```bash - - $ curl -O https://mirrors.tuna.tsinghua.edu.cn/apache/pulsar/pulsar-{version}/connectors/pulsar-io-file-{version}.nar - - ``` - -6. Start the File source connector. - - ```bash - - $ docker exec -it pulsar-standalone /bin/bash - - $ ./bin/pulsar-admin sources localrun \ - --archive /pulsar/pulsar-io-file-{version}.nar \ - --name file-test \ - --destination-topic-name pulsar-file-test \ - --source-config-file /pulsar/file-connector.yaml - - ``` - -7. Start a consumer. - - ```bash - - ./bin/pulsar-client consume -s file-test -n 0 pulsar-file-test - - ``` - -8. Write the message to the file _test.txt_. - - ```bash - - echo "hello world!" > /opt/test.txt - - ``` - - The following information appears on the consumer terminal window. - - ```bash - - ----- got message ----- - hello world! - - ``` - - \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/io-flume-sink.md b/site2/website/versioned_docs/version-2.8.0-deprecated/io-flume-sink.md deleted file mode 100644 index b2ace53702f8ca..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/io-flume-sink.md +++ /dev/null @@ -1,56 +0,0 @@ ---- -id: io-flume-sink -title: Flume sink connector -sidebar_label: "Flume sink connector" -original_id: io-flume-sink ---- - -The Flume sink connector pulls messages from Pulsar topics to logs. - -## Configuration - -The configuration of the Flume sink connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -`name`|String|true|"" (empty string)|The name of the agent. -`confFile`|String|true|"" (empty string)|The configuration file. -`noReloadConf`|Boolean|false|false|Whether to reload configuration file if changed. -`zkConnString`|String|true|"" (empty string)|The ZooKeeper connection. 
-`zkBasePath`|String|true|"" (empty string)|The base path in ZooKeeper for agent configuration. - -### Example - -Before using the Flume sink connector, you need to create a configuration file through one of the following methods. - -> For more information about the `sink.conf` in the example below, see [here](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/resources/flume/sink.conf). - -* JSON - - ```json - - { - "name": "a1", - "confFile": "sink.conf", - "noReloadConf": "false", - "zkConnString": "", - "zkBasePath": "" - } - - ``` - -* YAML - - ```yaml - - configs: - name: a1 - confFile: sink.conf - noReloadConf: false - zkConnString: "" - zkBasePath: "" - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/io-flume-source.md b/site2/website/versioned_docs/version-2.8.0-deprecated/io-flume-source.md deleted file mode 100644 index b7fd7edad88111..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/io-flume-source.md +++ /dev/null @@ -1,56 +0,0 @@ ---- -id: io-flume-source -title: Flume source connector -sidebar_label: "Flume source connector" -original_id: io-flume-source ---- - -The Flume source connector pulls messages from logs to Pulsar topics. - -## Configuration - -The configuration of the Flume source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -`name`|String|true|"" (empty string)|The name of the agent. -`confFile`|String|true|"" (empty string)|The configuration file. -`noReloadConf`|Boolean|false|false|Whether to reload configuration file if changed. -`zkConnString`|String|true|"" (empty string)|The ZooKeeper connection. -`zkBasePath`|String|true|"" (empty string)|The base path in ZooKeeper for agent configuration. - -### Example - -Before using the Flume source connector, you need to create a configuration file through one of the following methods. - -> For more information about the `source.conf` in the example below, see [here](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/resources/flume/source.conf). - -* JSON - - ```json - - { - "name": "a1", - "confFile": "source.conf", - "noReloadConf": "false", - "zkConnString": "", - "zkBasePath": "" - } - - ``` - -* YAML - - ```yaml - - configs: - name: a1 - confFile: source.conf - noReloadConf: false - zkConnString: "" - zkBasePath: "" - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/io-hbase-sink.md b/site2/website/versioned_docs/version-2.8.0-deprecated/io-hbase-sink.md deleted file mode 100644 index 1737b00fa26805..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/io-hbase-sink.md +++ /dev/null @@ -1,67 +0,0 @@ ---- -id: io-hbase-sink -title: HBase sink connector -sidebar_label: "HBase sink connector" -original_id: io-hbase-sink ---- - -The HBase sink connector pulls the messages from Pulsar topics -and persists the messages to HBase tables - -## Configuration - -The configuration of the HBase sink connector has the following properties. - -### Property - -| Name | Type|Default | Required | Description | -|------|---------|----------|-------------|--- -| `hbaseConfigResources` | String|None | false | HBase system configuration `hbase-site.xml` file. | -| `zookeeperQuorum` | String|None | true | HBase system configuration about `hbase.zookeeper.quorum` value. 
| -| `zookeeperClientPort` | String|2181 | false | HBase system configuration about `hbase.zookeeper.property.clientPort` value. | -| `zookeeperZnodeParent` | String|/hbase | false | HBase system configuration about `zookeeper.znode.parent` value. | -| `tableName` | None |String | true | HBase table, the value is `namespace:tableName`. | -| `rowKeyName` | String|None | true | HBase table rowkey name. | -| `familyName` | String|None | true | HBase table column family name. | -| `qualifierNames` |String| None | true | HBase table column qualifier names. | -| `batchTimeMs` | Long|1000l| false | HBase table operation timeout in milliseconds. | -| `batchSize` | int|200| false | Batch size of updates made to the HBase table. | - -### Example - -Before using the HBase sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "hbaseConfigResources": "hbase-site.xml", - "zookeeperQuorum": "localhost", - "zookeeperClientPort": "2181", - "zookeeperZnodeParent": "/hbase", - "tableName": "pulsar_hbase", - "rowKeyName": "rowKey", - "familyName": "info", - "qualifierNames": [ 'name', 'address', 'age'] - } - - ``` - -* YAML - - ```yaml - - configs: - hbaseConfigResources: "hbase-site.xml" - zookeeperQuorum: "localhost" - zookeeperClientPort: "2181" - zookeeperZnodeParent: "/hbase" - tableName: "pulsar_hbase" - rowKeyName: "rowKey" - familyName: "info" - qualifierNames: [ 'name', 'address', 'age'] - - ``` - - \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/io-hdfs2-sink.md b/site2/website/versioned_docs/version-2.8.0-deprecated/io-hdfs2-sink.md deleted file mode 100644 index 4a8527154430d0..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/io-hdfs2-sink.md +++ /dev/null @@ -1,64 +0,0 @@ ---- -id: io-hdfs2-sink -title: HDFS2 sink connector -sidebar_label: "HDFS2 sink connector" -original_id: io-hdfs2-sink ---- - -The HDFS2 sink connector pulls the messages from Pulsar topics -and persists the messages to HDFS files. - -## Configuration - -The configuration of the HDFS2 sink connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `hdfsConfigResources` | String|true| None | A file or a comma-separated list containing the Hadoop file system configuration.

    **Example**
    'core-site.xml'
    'hdfs-site.xml' | -| `directory` | String | true | None|The HDFS directory from which files are read or to which files are written. | -| `encoding` | String |false |None |The character encoding for the files.

    **Example**
    UTF-8
    ASCII | -| `compression` | Compression |false |None |The compression codec used to compress or decompress the files on HDFS.

    Below are the available options:
  * BZIP2
  * DEFLATE
  * GZIP
  * LZ4
  * SNAPPY
| -| `kerberosUserPrincipal` |String| false| None|The principal account of the Kerberos user used for authentication. | -| `keytab` | String|false|None| The full pathname of the Kerberos keytab file used for authentication. | -| `filenamePrefix` |String| true, if `compression` is set to `None`. | None |The prefix of the files created inside the HDFS directory.

    **Example**
    The value of topicA results in files named topicA-. | -| `fileExtension` | String| true | None | The extension added to the files written to HDFS.

    **Example**
    '.txt'
    '.seq' | -| `separator` | char|false |None |The character used to separate records in a text file.

    If no value is provided, the contents from all records are concatenated together in one continuous byte array. | -| `syncInterval` | long| false |0| The interval, in milliseconds, between calls to flush data to HDFS disk. | -| `maxPendingRecords` |int| false|Integer.MAX_VALUE | The maximum number of records held in memory before acking.

    Setting this property to 1 ensures that every record is sent to disk before the record is acked.

    Setting this property to a higher value allows buffering records before flushing them to disk. -| `subdirectoryPattern` | String | false | None | A subdirectory associated with the creation time of the sink.
    The pattern is the date-time format applied to the name of the subdirectory created under `directory`.

    See [DateTimeFormatter](https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html) for pattern's syntax. | - -### Example - -Before using the HDFS2 sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "hdfsConfigResources": "core-site.xml", - "directory": "/foo/bar", - "filenamePrefix": "prefix", - "fileExtension": ".log", - "compression": "SNAPPY", - "subdirectoryPattern": "yyyy-MM-dd" - } - - ``` - -* YAML - - ```yaml - - configs: - hdfsConfigResources: "core-site.xml" - directory: "/foo/bar" - filenamePrefix: "prefix" - fileExtension: ".log" - compression: "SNAPPY" - subdirectoryPattern: "yyyy-MM-dd" - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/io-hdfs3-sink.md b/site2/website/versioned_docs/version-2.8.0-deprecated/io-hdfs3-sink.md deleted file mode 100644 index aec065a25db7f4..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/io-hdfs3-sink.md +++ /dev/null @@ -1,59 +0,0 @@ ---- -id: io-hdfs3-sink -title: HDFS3 sink connector -sidebar_label: "HDFS3 sink connector" -original_id: io-hdfs3-sink ---- - -The HDFS3 sink connector pulls the messages from Pulsar topics -and persists the messages to HDFS files. - -## Configuration - -The configuration of the HDFS3 sink connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `hdfsConfigResources` | String|true| None | A file or a comma-separated list containing the Hadoop file system configuration.

    **Example**
    'core-site.xml'
    'hdfs-site.xml' | -| `directory` | String | true | None|The HDFS directory from which files are read or to which files are written. | -| `encoding` | String |false |None |The character encoding for the files.

    **Example**
    UTF-8
    ASCII | -| `compression` | Compression |false |None |The compression codec used to compress or decompress the files on HDFS.

    Below are the available options:
  * BZIP2
  * DEFLATE
  * GZIP
  * LZ4
  * SNAPPY
| -| `kerberosUserPrincipal` |String| false| None|The principal account of the Kerberos user used for authentication. | -| `keytab` | String|false|None| The full pathname of the Kerberos keytab file used for authentication. | -| `filenamePrefix` |String| false |None |The prefix of the files created inside the HDFS directory.

    **Example**
    The value of topicA results in files named topicA-. | -| `fileExtension` | String| false | None| The extension added to the files written to HDFS.

    **Example**
    '.txt'
    '.seq' | -| `separator` | char|false |None |The character used to separate records in a text file.

    If no value is provided, the contents from all records are concatenated together in one continuous byte array. | -| `syncInterval` | long| false |0| The interval, in milliseconds, between calls to flush data to HDFS disk. | -| `maxPendingRecords` |int| false|Integer.MAX_VALUE | The maximum number of records held in memory before acking.

    Setting this property to 1 ensures that every record is sent to disk before the record is acked.

    Setting this property to a higher value allows buffering records before flushing them to disk. - -### Example - -Before using the HDFS3 sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "hdfsConfigResources": "core-site.xml", - "directory": "/foo/bar", - "filenamePrefix": "prefix", - "compression": "SNAPPY" - } - - ``` - -* YAML - - ```yaml - - configs: - hdfsConfigResources: "core-site.xml" - directory: "/foo/bar" - filenamePrefix: "prefix" - compression: "SNAPPY" - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/io-influxdb-sink.md b/site2/website/versioned_docs/version-2.8.0-deprecated/io-influxdb-sink.md deleted file mode 100644 index 9382f8c03121cc..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/io-influxdb-sink.md +++ /dev/null @@ -1,119 +0,0 @@ ---- -id: io-influxdb-sink -title: InfluxDB sink connector -sidebar_label: "InfluxDB sink connector" -original_id: io-influxdb-sink ---- - -The InfluxDB sink connector pulls messages from Pulsar topics -and persists the messages to InfluxDB. - -The InfluxDB sink provides different configurations for InfluxDBv1 and v2 respectively. - -## Configuration - -The configuration of the InfluxDB sink connector has the following properties. - -### Property -#### InfluxDBv2 -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `influxdbUrl` |String| true|" " (empty string) | The URL of the InfluxDB instance. | -| `token` | String|true| " " (empty string) |The authentication token used to authenticate to InfluxDB. | -| `organization` | String| true|" " (empty string) | The InfluxDB organization to write to. | -| `bucket` |String| true | " " (empty string)| The InfluxDB bucket to write to. | -| `precision` | String|false| ns | The timestamp precision for writing data to InfluxDB.

    Below are the available options:
  * ns
  * us
  * ms
  * s
| -| `logLevel` | String|false| NONE|The log level for InfluxDB requests and responses.

    Below are the available options:
  * NONE
  * BASIC
  * HEADERS
  * FULL
| -| `gzipEnable` | boolean|false | false | Whether to enable gzip or not. | -|`batchTimeMs`|long|false| 1000L | The InfluxDB operation time in milliseconds. | -|`batchSize` | int|false|200| The batch size of writing to InfluxDB. | - -#### InfluxDBv1 -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `influxdbUrl` |String| true|" " (empty string) | The URL of the InfluxDB instance. | -| `username` | String|false| " " (empty string) |The username used to authenticate to InfluxDB. | -| `password` | String| false|" " (empty string) | The password used to authenticate to InfluxDB. | -| `database` |String| true | " " (empty string)| The InfluxDB database to which messages are written. | -| `consistencyLevel` | String|false|ONE | The consistency level for writing data to InfluxDB.

    Below are the available options:
  * ALL
  * ANY
  * ONE
  * QUORUM
| -| `logLevel` | String|false| NONE|The log level for InfluxDB requests and responses.

    Below are the available options:
  * NONE
  * BASIC
  * HEADERS
  * FULL
  611. | -| `retentionPolicy` | String|false| autogen| The retention policy for InfluxDB. | -| `gzipEnable` | boolean|false | false | Whether to enable gzip or not. | -| `batchTimeMs` |long|false| 1000L | The InfluxDB operation time in milliseconds. | -| `batchSize` | int|false|200| The batch size of writing to InfluxDB. | - -### Example -Before using the InfluxDB sink connector, you need to create a configuration file through one of the following methods. -#### InfluxDBv2 -* JSON - - ```json - - { - "influxdbUrl": "http://localhost:9999", - "organization": "example-org", - "bucket": "example-bucket", - "token": "xxxx", - "precision": "ns", - "logLevel": "NONE", - "gzipEnable": false, - "batchTimeMs": 1000, - "batchSize": 100 - } - - ``` - - -* YAML - - ```yaml - - configs: - influxdbUrl: "http://localhost:9999" - organization: "example-org" - bucket: "example-bucket" - token: "xxxx" - precision: "ns" - logLevel: "NONE" - gzipEnable: false - batchTimeMs: 1000 - batchSize: 100 - - ``` - - -#### InfluxDBv1 - -* JSON - - ```json - - { - "influxdbUrl": "http://localhost:8086", - "database": "test_db", - "consistencyLevel": "ONE", - "logLevel": "NONE", - "retentionPolicy": "autogen", - "gzipEnable": false, - "batchTimeMs": 1000, - "batchSize": 100 - } - - ``` - -* YAML - - ```yaml - - configs: - influxdbUrl: "http://localhost:8086" - database: "test_db" - consistencyLevel: "ONE" - logLevel: "NONE" - retentionPolicy: "autogen" - gzipEnable: false - batchTimeMs: 1000 - batchSize: 100 - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/io-jdbc-sink.md b/site2/website/versioned_docs/version-2.8.0-deprecated/io-jdbc-sink.md deleted file mode 100644 index 77dbb61fccd7ed..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/io-jdbc-sink.md +++ /dev/null @@ -1,157 +0,0 @@ ---- -id: io-jdbc-sink -title: JDBC sink connector -sidebar_label: "JDBC sink connector" -original_id: io-jdbc-sink ---- - -The JDBC sink connectors allow pulling messages from Pulsar topics -and persists the messages to ClickHouse, MariaDB, PostgreSQL, and SQLite. - -> Currently, INSERT, DELETE and UPDATE operations are supported. - -## Configuration - -The configuration of all JDBC sink connectors has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `userName` | String|false | " " (empty string) | The username used to connect to the database specified by `jdbcUrl`.

    **Note: `userName` is case-sensitive.**| -| `password` | String|false | " " (empty string)| The password used to connect to the database specified by `jdbcUrl`.

    **Note: `password` is case-sensitive.**| -| `jdbcUrl` | String|true | " " (empty string) | The JDBC URL of the database to which the connector connects. | -| `tableName` | String|true | " " (empty string) | The name of the table to which the connector writes. | -| `nonKey` | String|false | " " (empty string) | A comma-separated list contains the fields used in updating events. | -| `key` | String|false | " " (empty string) | A comma-separated list contains the fields used in `where` condition of updating and deleting events. | -| `timeoutMs` | int| false|500 | The JDBC operation timeout in milliseconds. | -| `batchSize` | int|false | 200 | The batch size of updates made to the database. | - -### Example for ClickHouse - -* JSON - - ```json - - { - "userName": "clickhouse", - "password": "password", - "jdbcUrl": "jdbc:clickhouse://localhost:8123/pulsar_clickhouse_jdbc_sink", - "tableName": "pulsar_clickhouse_jdbc_sink" - } - - ``` - -* YAML - - ```yaml - - tenant: "public" - namespace: "default" - name: "jdbc-clickhouse-sink" - topicName: "persistent://public/default/jdbc-clickhouse-topic" - sinkType: "jdbc-clickhouse" - configs: - userName: "clickhouse" - password: "password" - jdbcUrl: "jdbc:clickhouse://localhost:8123/pulsar_clickhouse_jdbc_sink" - tableName: "pulsar_clickhouse_jdbc_sink" - - ``` - -### Example for MariaDB - -* JSON - - ```json - - { - "userName": "mariadb", - "password": "password", - "jdbcUrl": "jdbc:mariadb://localhost:3306/pulsar_mariadb_jdbc_sink", - "tableName": "pulsar_mariadb_jdbc_sink" - } - - ``` - -* YAML - - ```yaml - - tenant: "public" - namespace: "default" - name: "jdbc-mariadb-sink" - topicName: "persistent://public/default/jdbc-mariadb-topic" - sinkType: "jdbc-mariadb" - configs: - userName: "mariadb" - password: "password" - jdbcUrl: "jdbc:mariadb://localhost:3306/pulsar_mariadb_jdbc_sink" - tableName: "pulsar_mariadb_jdbc_sink" - - ``` - -### Example for PostgreSQL - -Before using the JDBC PostgreSQL sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "userName": "postgres", - "password": "password", - "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink", - "tableName": "pulsar_postgres_jdbc_sink" - } - - ``` - -* YAML - - ```yaml - - tenant: "public" - namespace: "default" - name: "jdbc-postgres-sink" - topicName: "persistent://public/default/jdbc-postgres-topic" - sinkType: "jdbc-postgres" - configs: - userName: "postgres" - password: "password" - jdbcUrl: "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink" - tableName: "pulsar_postgres_jdbc_sink" - - ``` - -For more information on **how to use this JDBC sink connector**, see [connect Pulsar to PostgreSQL](io-quickstart.md#connect-pulsar-to-postgresql). 
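The `key` and `nonKey` properties in the property table above drive the SQL that the sink generates: fields listed in `nonKey` are the columns written on INSERT and UPDATE, while fields listed in `key` form the `WHERE` clause of UPDATE and DELETE statements. Below is a minimal sketch of wiring these properties together when deploying the PostgreSQL sink with `pulsar-admin`; the archive path, input topic, and column names (`id`, `name`, `address`) are illustrative assumptions, not values taken from the examples above.

```bash

# A minimal sketch, assuming a PostgreSQL JDBC sink NAR under connectors/
# and a hypothetical target table with columns id, name, and address:
# rows are matched on "id" (the WHERE clause of UPDATE/DELETE), while
# "name" and "address" are the columns written on INSERT/UPDATE.
$ bin/pulsar-admin sinks create \
  --archive connectors/pulsar-io-jdbc-postgres-@pulsar:version@.nar \
  --tenant public \
  --namespace default \
  --name jdbc-postgres-sink \
  --inputs persistent://public/default/jdbc-postgres-topic \
  --sink-config '{
    "userName": "postgres",
    "password": "password",
    "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink",
    "tableName": "pulsar_postgres_jdbc_sink",
    "key": "id",
    "nonKey": "name,address"
  }'

```

As with the YAML examples above, the same settings can instead be supplied through `--sink-config-file`.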
- -### Example for SQLite - -* JSON - - ```json - - { - "jdbcUrl": "jdbc:sqlite:db.sqlite", - "tableName": "pulsar_sqlite_jdbc_sink" - } - - ``` - -* YAML - - ```yaml - - tenant: "public" - namespace: "default" - name: "jdbc-sqlite-sink" - topicName: "persistent://public/default/jdbc-sqlite-topic" - sinkType: "jdbc-sqlite" - configs: - jdbcUrl: "jdbc:sqlite:db.sqlite" - tableName: "pulsar_sqlite_jdbc_sink" - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/io-kafka-sink.md b/site2/website/versioned_docs/version-2.8.0-deprecated/io-kafka-sink.md deleted file mode 100644 index 09dad4ce70bac9..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/io-kafka-sink.md +++ /dev/null @@ -1,72 +0,0 @@ ---- -id: io-kafka-sink -title: Kafka sink connector -sidebar_label: "Kafka sink connector" -original_id: io-kafka-sink ---- - -The Kafka sink connector pulls messages from Pulsar topics and persists the messages -to Kafka topics. - -This guide explains how to configure and use the Kafka sink connector. - -## Configuration - -The configuration of the Kafka sink connector has the following parameters. - -### Property - -| Name | Type| Required | Default | Description -|------|----------|---------|-------------|-------------| -| `bootstrapServers` |String| true | " " (empty string) | A comma-separated list of host and port pairs for establishing the initial connection to the Kafka cluster. | -|`acks`|String|true|" " (empty string) |The number of acknowledgments that the producer requires the leader to receive before a request completes.
    This controls the durability of the sent records. -|`batchSize`|long|false|16384L|The batch size, in bytes, that the Kafka producer attempts to collect before sending records to the brokers. -|`maxRequestSize`|long|false|1048576L|The maximum size of a Kafka request in bytes. -|`topic`|String|true|" " (empty string) |The Kafka topic that receives messages from Pulsar. -| `keyDeserializationClass` | String|false | org.apache.kafka.common.serialization.StringSerializer | The serializer class for Kafka producers to serialize keys. -| `valueDeserializationClass` | String|false | org.apache.kafka.common.serialization.ByteArraySerializer | The serializer class for Kafka producers to serialize values.

    The serializer is set by a specific implementation of [`KafkaAbstractSink`](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSink.java). -|`producerConfigProperties`|Map|false|" " (empty string)|The producer configuration properties to be passed to producers.

    **Note: other properties specified in the connector configuration file take precedence over this configuration**. - - -### Example - -Before using the Kafka sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "bootstrapServers": "localhost:6667", - "topic": "test", - "acks": "1", - "batchSize": "16384", - "maxRequestSize": "1048576", - "producerConfigProperties": - { - "client.id": "test-pulsar-producer", - "security.protocol": "SASL_PLAINTEXT", - "sasl.mechanism": "GSSAPI", - "sasl.kerberos.service.name": "kafka", - "acks": "all" - } - } - -* YAML - - ``` - -yaml - configs: - bootstrapServers: "localhost:6667" - topic: "test" - acks: "1" - batchSize: "16384" - maxRequestSize: "1048576" - producerConfigProperties: - client.id: "test-pulsar-producer" - security.protocol: "SASL_PLAINTEXT" - sasl.mechanism: "GSSAPI" - sasl.kerberos.service.name: "kafka" - acks: "all" - ``` diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/io-kafka-source.md b/site2/website/versioned_docs/version-2.8.0-deprecated/io-kafka-source.md deleted file mode 100644 index 53448699e21b4a..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/io-kafka-source.md +++ /dev/null @@ -1,226 +0,0 @@ ---- -id: io-kafka-source -title: Kafka source connector -sidebar_label: "Kafka source connector" -original_id: io-kafka-source ---- - -The Kafka source connector pulls messages from Kafka topics and persists the messages -to Pulsar topics. - -This guide explains how to configure and use the Kafka source connector. - -## Configuration - -The configuration of the Kafka source connector has the following properties. - -### Property - -| Name | Type| Required | Default | Description -|------|----------|---------|-------------|-------------| -| `bootstrapServers` |String| true | " " (empty string) | A comma-separated list of host and port pairs for establishing the initial connection to the Kafka cluster. | -| `groupId` |String| true | " " (empty string) | A unique string that identifies the group of consumer processes to which this consumer belongs. | -| `fetchMinBytes` | long|false | 1 | The minimum byte expected for each fetch response. | -| `autoCommitEnabled` | boolean |false | true | If set to true, the consumer's offset is periodically committed in the background.

    If the process fails, this committed offset is used as the position from which a new consumer begins. | -| `autoCommitIntervalMs` | long|false | 5000 | The frequency, in milliseconds, at which consumer offsets are auto-committed to Kafka if `autoCommitEnabled` is set to true. | -| `heartbeatIntervalMs` | long| false | 3000 | The interval between heartbeats to the consumer coordinator when using Kafka's group management facilities.

    **Note: `heartbeatIntervalMs` must be smaller than `sessionTimeoutMs`**.| -| `sessionTimeoutMs` | long|false | 30000 | The timeout used to detect consumer failures when using Kafka's group management facility. | -| `topic` | String|true | " " (empty string)| The Kafka topic from which the connector pulls messages into Pulsar. | -| `consumerConfigProperties` | Map| false | " " (empty string) | The consumer configuration properties to be passed to consumers.

    **Note: other properties specified in the connector configuration file take precedence over this configuration**. | -| `keyDeserializationClass` | String|false | org.apache.kafka.common.serialization.StringDeserializer | The deserializer class for Kafka consumers to deserialize keys.
    The deserializer is set by a specific implementation of [`KafkaAbstractSource`](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSource.java). -| `valueDeserializationClass` | String|false | org.apache.kafka.common.serialization.ByteArrayDeserializer | The deserializer class for Kafka consumers to deserialize values. -| `autoOffsetReset` | String | false | "earliest" | The default offset reset policy. | - -### Schema Management - -This Kafka source connector applies the schema to the topic depending on the data type that is present on the Kafka topic. -You can detect the data type from the `keyDeserializationClass` and `valueDeserializationClass` configuration parameters. - -If the `valueDeserializationClass` is `org.apache.kafka.common.serialization.StringDeserializer`, you can set Schema.STRING() as schema type on the Pulsar topic. - -If `valueDeserializationClass` is `io.confluent.kafka.serializers.KafkaAvroDeserializer`, Pulsar downloads the AVRO schema from the Confluent Schema Registry® -and sets it properly on the Pulsar topic. - -In this case, you need to set `schema.registry.url` inside of the `consumerConfigProperties` configuration entry -of the source. - -If `keyDeserializationClass` is not `org.apache.kafka.common.serialization.StringDeserializer`, it means -that you do not have a String as key and the Kafka Source uses the KeyValue schema type with the SEPARATED encoding. - -Pulsar supports AVRO format for keys. - -In this case, you can have a Pulsar topic with the following properties: -- Schema: KeyValue schema with SEPARATED encoding -- Key: the content of key of the Kafka message (base64 encoded) -- Value: the content of value of the Kafka message -- KeySchema: the schema detected from `keyDeserializationClass` -- ValueSchema: the schema detected from `valueDeserializationClass` - -Topic compaction and partition routing use the Pulsar key, that contains the Kafka key, and so they are driven by the same value that you have on Kafka. - -When you consume data from Pulsar topics, you can use the `KeyValue` schema. In this way, you can decode the data properly. -If you want to access the raw key, you can use the `Message#getKeyBytes()` API. - -### Example - -Before using the Kafka source connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "bootstrapServers": "pulsar-kafka:9092", - "groupId": "test-pulsar-io", - "topic": "my-topic", - "sessionTimeoutMs": "10000", - "autoCommitEnabled": false - } - - ``` - -* YAML - - ```yaml - - configs: - bootstrapServers: "pulsar-kafka:9092" - groupId: "test-pulsar-io" - topic: "my-topic" - sessionTimeoutMs: "10000" - autoCommitEnabled: false - - ``` - -## Usage - -Here is an example of using the Kafka source connector with the configuration file as shown previously. - -1. Download a Kafka client and a Kafka connector. - - ```bash - - $ wget https://repo1.maven.org/maven2/org/apache/kafka/kafka-clients/0.10.2.1/kafka-clients-0.10.2.1.jar - - $ wget https://archive.apache.org/dist/pulsar/pulsar-2.4.0/connectors/pulsar-io-kafka-2.4.0.nar - - ``` - -2. Create a network. - - ```bash - - $ docker network create kafka-pulsar - - ``` - -3. Pull a ZooKeeper image and start ZooKeeper. - - ```bash - - $ docker pull wurstmeister/zookeeper - - $ docker run -d -it -p 2181:2181 --name pulsar-kafka-zookeeper --network kafka-pulsar wurstmeister/zookeeper - - ``` - -4. Pull a Kafka image and start Kafka. 
- - ```bash - - $ docker pull wurstmeister/kafka:2.11-1.0.2 - - $ docker run -d -it --network kafka-pulsar -p 6667:6667 -p 9092:9092 -e KAFKA_ADVERTISED_HOST_NAME=pulsar-kafka -e KAFKA_ZOOKEEPER_CONNECT=pulsar-kafka-zookeeper:2181 --name pulsar-kafka wurstmeister/kafka:2.11-1.0.2 - - ``` - -5. Pull a Pulsar image and start Pulsar standalone. - - ```bash - - $ docker pull apachepulsar/pulsar:@pulsar:version@ - - $ docker run -d -it --network kafka-pulsar -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-kafka-standalone apachepulsar/pulsar:2.4.0 bin/pulsar standalone - - ``` - -6. Create a producer file _kafka-producer.py_. - - ```python - - from kafka import KafkaProducer - producer = KafkaProducer(bootstrap_servers='pulsar-kafka:9092') - future = producer.send('my-topic', b'hello world') - future.get() - - ``` - -7. Create a consumer file _pulsar-client.py_. - - ```python - - import pulsar - - client = pulsar.Client('pulsar://localhost:6650') - consumer = client.subscribe('my-topic', subscription_name='my-aa') - - while True: - msg = consumer.receive() - print msg - print dir(msg) - print("Received message: '%s'" % msg.data()) - consumer.acknowledge(msg) - - client.close() - - ``` - -8. Copy the following files to Pulsar. - - ```bash - - $ docker cp pulsar-io-kafka-@pulsar:version@.nar pulsar-kafka-standalone:/pulsar - $ docker cp kafkaSourceConfig.yaml pulsar-kafka-standalone:/pulsar/conf - $ docker cp pulsar-client.py pulsar-kafka-standalone:/pulsar/ - $ docker cp kafka-producer.py pulsar-kafka-standalone:/pulsar/ - - ``` - -9. Open a new terminal window and start the Kafka source connector in local run mode. - - ```bash - - $ docker exec -it pulsar-kafka-standalone /bin/bash - - $ ./bin/pulsar-admin source localrun \ - --archive ./pulsar-io-kafka-@pulsar:version@.nar \ - --classname org.apache.pulsar.io.kafka.KafkaBytesSource \ - --tenant public \ - --namespace default \ - --name kafka \ - --destination-topic-name my-topic \ - --source-config-file ./conf/kafkaSourceConfig.yaml \ - --parallelism 1 - - ``` - -10. Open a new terminal window and run the consumer. - - ```bash - - $ docker exec -it pulsar-kafka-standalone /bin/bash - - $ pip install kafka-python - - $ python3 kafka-producer.py - - ``` - - The following information appears on the consumer terminal window. - - ```bash - - Received message: 'hello world' - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/io-kinesis-sink.md b/site2/website/versioned_docs/version-2.8.0-deprecated/io-kinesis-sink.md deleted file mode 100644 index 153587dcfc783e..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/io-kinesis-sink.md +++ /dev/null @@ -1,80 +0,0 @@ ---- -id: io-kinesis-sink -title: Kinesis sink connector -sidebar_label: "Kinesis sink connector" -original_id: io-kinesis-sink ---- - -The Kinesis sink connector pulls data from Pulsar and persists data into Amazon Kinesis. - -## Configuration - -The configuration of the Kinesis sink connector has the following property. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -`messageFormat`|MessageFormat|true|ONLY_RAW_PAYLOAD|Message format in which Kinesis sink converts Pulsar messages and publishes to Kinesis streams.

    Below are the available options:

  * `ONLY_RAW_PAYLOAD`: Kinesis sink directly publishes Pulsar message payload as a message into the configured Kinesis stream.

  * `FULL_MESSAGE_IN_JSON`: Kinesis sink creates a JSON payload with Pulsar message payload, properties and encryptionCtx, and publishes JSON payload into the configured Kinesis stream.

  * `FULL_MESSAGE_IN_FB`: Kinesis sink creates a flatbuffer serialized payload with Pulsar message payload, properties and encryptionCtx, and publishes flatbuffer payload into the configured Kinesis stream.
-`retainOrdering`|boolean|false|false|Whether the connector retains ordering when moving messages from Pulsar to Kinesis. -`awsEndpoint`|String|false|" " (empty string)|The Kinesis end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). -`awsRegion`|String|false|" " (empty string)|The AWS region.

    **Example**
    us-west-1, us-west-2 -`awsKinesisStreamName`|String|true|" " (empty string)|The Kinesis stream name. -`awsCredentialPluginName`|String|false|" " (empty string)|The fully-qualified class name of the implementation of {@inject: github:AwsCredentialProviderPlugin:/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java}.

    It is a factory class that creates the AWSCredentialsProvider used by the Kinesis sink.

    If it is empty, the Kinesis sink creates a default AWSCredentialsProvider which accepts json-map of credentials in `awsCredentialPluginParam`. -`awsCredentialPluginParam`|String |false|" " (empty string)|The JSON parameter to initialize `awsCredentialsProviderPlugin`. - -### Built-in plugins - -The following are built-in `AwsCredentialProviderPlugin` plugins: - -* `org.apache.pulsar.io.aws.AwsDefaultProviderChainPlugin` - - This plugin takes no configuration, it uses the default AWS provider chain. - - For more information, see [AWS documentation](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html#credentials-default). - -* `org.apache.pulsar.io.aws.STSAssumeRoleProviderPlugin` - - This plugin takes a configuration (via the `awsCredentialPluginParam`) that describes a role to assume when running the KCL. - - This configuration takes the form of a small json document like: - - ```json - - {"roleArn": "arn...", "roleSessionName": "name"} - - ``` - -### Example - -Before using the Kinesis sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "awsEndpoint": "some.endpoint.aws", - "awsRegion": "us-east-1", - "awsKinesisStreamName": "my-stream", - "awsCredentialPluginParam": "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}", - "messageFormat": "ONLY_RAW_PAYLOAD", - "retainOrdering": "true" - } - - ``` - -* YAML - - ```yaml - - configs: - awsEndpoint: "some.endpoint.aws" - awsRegion: "us-east-1" - awsKinesisStreamName: "my-stream" - awsCredentialPluginParam: "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}" - messageFormat: "ONLY_RAW_PAYLOAD" - retainOrdering: "true" - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/io-kinesis-source.md b/site2/website/versioned_docs/version-2.8.0-deprecated/io-kinesis-source.md deleted file mode 100644 index 0d07eefc3703b3..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/io-kinesis-source.md +++ /dev/null @@ -1,81 +0,0 @@ ---- -id: io-kinesis-source -title: Kinesis source connector -sidebar_label: "Kinesis source connector" -original_id: io-kinesis-source ---- - -The Kinesis source connector pulls data from Amazon Kinesis and persists data into Pulsar. - -This connector uses the [Kinesis Consumer Library](https://github.com/awslabs/amazon-kinesis-client) (KCL) to do the actual consuming of messages. The KCL uses DynamoDB to track state for consumers. - -> Note: currently, the Kinesis source connector only supports raw messages. If you use KMS encrypted messages, the encrypted messages are sent to downstream. This connector will support decrypting messages in the future release. - - -## Configuration - -The configuration of the Kinesis source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -`initialPositionInStream`|InitialPositionInStream|false|LATEST|The position where the connector starts from.

    Below are the available options:

  * `AT_TIMESTAMP`: start from the record at or after the specified timestamp.

  * `LATEST`: start after the most recent data record.

  * `TRIM_HORIZON`: start from the oldest available data record.
-`startAtTime`|Date|false|" " (empty string)|If set to `AT_TIMESTAMP`, it specifies the point in time to start consumption. -`applicationName`|String|false|Pulsar IO connector|The name of the Amazon Kinesis application.

    By default, the application name is included in the user agent string used to make AWS requests. This can assist with troubleshooting, for example, distinguishing requests made by separate connector instances. -`checkpointInterval`|long|false|60000|The frequency of the Kinesis stream checkpoint in milliseconds. -`backoffTime`|long|false|3000|The amount of time, in milliseconds, to delay between requests when the connector encounters a throttling exception from AWS Kinesis. -`numRetries`|int|false|3|The number of re-attempts when the connector encounters an exception while trying to set a checkpoint. -`receiveQueueSize`|int|false|1000|The maximum number of AWS records that can be buffered inside the connector.

    Once the `receiveQueueSize` is reached, the connector does not consume any messages from Kinesis until some messages in the queue are successfully consumed. -`dynamoEndpoint`|String|false|" " (empty string)|The Dynamo end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). -`cloudwatchEndpoint`|String|false|" " (empty string)|The Cloudwatch end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). -`useEnhancedFanOut`|boolean|false|true|If set to true, it uses Kinesis enhanced fan-out.

    If set to false, it uses polling. -`awsEndpoint`|String|false|" " (empty string)|The Kinesis end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). -`awsRegion`|String|false|" " (empty string)|The AWS region.

    **Example**
    us-west-1, us-west-2 -`awsKinesisStreamName`|String|true|" " (empty string)|The Kinesis stream name. -`awsCredentialPluginName`|String|false|" " (empty string)|The fully-qualified class name of the implementation of {@inject: github:AwsCredentialProviderPlugin:/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java}.

    `awsCredentialProviderPlugin` has the following built-in plugins:

  * `org.apache.pulsar.io.kinesis.AwsDefaultProviderChainPlugin`:
    this plugin uses the default AWS provider chain.
    For more information, see [using the default credential provider chain](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html#credentials-default).

  * `org.apache.pulsar.io.kinesis.STSAssumeRoleProviderPlugin`:
    this plugin takes a configuration via the `awsCredentialPluginParam` that describes a role to assume when running the KCL.
    **JSON configuration example**
    `{"roleArn": "arn...", "roleSessionName": "name"}`

    `awsCredentialPluginName` is a factory class that creates the AWSCredentialsProvider used by the connector.

    If `awsCredentialPluginName` is set to empty, the connector creates a default AWSCredentialsProvider that accepts a JSON map of credentials in `awsCredentialPluginParam`.
  622. -`awsCredentialPluginParam`|String |false|" " (empty string)|The JSON parameter to initialize `awsCredentialsProviderPlugin`. - -### Example - -Before using the Kinesis source connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "awsEndpoint": "https://some.endpoint.aws", - "awsRegion": "us-east-1", - "awsKinesisStreamName": "my-stream", - "awsCredentialPluginParam": "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}", - "applicationName": "My test application", - "checkpointInterval": "30000", - "backoffTime": "4000", - "numRetries": "3", - "receiveQueueSize": 2000, - "initialPositionInStream": "TRIM_HORIZON", - "startAtTime": "2019-03-05T19:28:58.000Z" - } - - ``` - -* YAML - - ```yaml - - configs: - awsEndpoint: "https://some.endpoint.aws" - awsRegion: "us-east-1" - awsKinesisStreamName: "my-stream" - awsCredentialPluginParam: "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}" - applicationName: "My test application" - checkpointInterval: 30000 - backoffTime: 4000 - numRetries: 3 - receiveQueueSize: 2000 - initialPositionInStream: "TRIM_HORIZON" - startAtTime: "2019-03-05T19:28:58.000Z" - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/io-mongo-sink.md b/site2/website/versioned_docs/version-2.8.0-deprecated/io-mongo-sink.md deleted file mode 100644 index 30c15a6c280938..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/io-mongo-sink.md +++ /dev/null @@ -1,56 +0,0 @@ ---- -id: io-mongo-sink -title: MongoDB sink connector -sidebar_label: "MongoDB sink connector" -original_id: io-mongo-sink ---- - -The MongoDB sink connector pulls messages from Pulsar topics -and persists the messages to collections. - -## Configuration - -The configuration of the MongoDB sink connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `mongoUri` | String| true| " " (empty string) | The MongoDB URI to which the connector connects.

    For more information, see [connection string URI format](https://docs.mongodb.com/manual/reference/connection-string/). | -| `database` | String| true| " " (empty string)| The database name to which the collection belongs. | -| `collection` | String| true| " " (empty string)| The collection name to which the connector writes messages. | -| `batchSize` | int|false|100 | The batch size of writing messages to collections. | -| `batchTimeMs` |long|false|1000| The batch operation interval in milliseconds. | - - -### Example - -Before using the Mongo sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "mongoUri": "mongodb://localhost:27017", - "database": "pulsar", - "collection": "messages", - "batchSize": "2", - "batchTimeMs": "500" - } - - ``` - -* YAML - - ```yaml - - configs: - mongoUri: "mongodb://localhost:27017" - database: "pulsar" - collection: "messages" - batchSize: 2 - batchTimeMs: 500 - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/io-netty-source.md b/site2/website/versioned_docs/version-2.8.0-deprecated/io-netty-source.md deleted file mode 100644 index e1ec8d863115b3..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/io-netty-source.md +++ /dev/null @@ -1,241 +0,0 @@ ---- -id: io-netty-source -title: Netty source connector -sidebar_label: "Netty source connector" -original_id: io-netty-source ---- - -The Netty source connector opens a port that accepts incoming data via the configured network protocol -and publish it to user-defined Pulsar topics. - -This connector can be used in a containerized (for example, k8s) deployment. Otherwise, if the connector is running in process or thread mode, the instance may be conflicting on listening to ports. - -## Configuration - -The configuration of the Netty source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `type` |String| true |tcp | The network protocol over which data is transmitted to netty.

    Below are the available options:
  623. tcp
  624. http
  625. udp
  626. | -| `host` | String|true | 127.0.0.1 | The host name or address on which the source instance listen. | -| `port` | int|true | 10999 | The port on which the source instance listen. | -| `numberOfThreads` |int| true |1 | The number of threads of Netty TCP server to accept incoming connections and handle the traffic of accepted connections. | - - -### Example - -Before using the Netty source connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "type": "tcp", - "host": "127.0.0.1", - "port": "10911", - "numberOfThreads": "1" - } - - ``` - -* YAML - - ```yaml - - configs: - type: "tcp" - host: "127.0.0.1" - port: 10999 - numberOfThreads: 1 - - ``` - -## Usage - -The following examples show how to use the Netty source connector with TCP and HTTP. - -### TCP - -1. Start Pulsar standalone. - - ```bash - - $ docker pull apachepulsar/pulsar:{version} - - $ docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-netty-standalone apachepulsar/pulsar:{version} bin/pulsar standalone - - ``` - -2. Create a configuration file _netty-source-config.yaml_. - - ```yaml - - configs: - type: "tcp" - host: "127.0.0.1" - port: 10999 - numberOfThreads: 1 - - ``` - -3. Copy the configuration file _netty-source-config.yaml_ to Pulsar server. - - ```bash - - $ docker cp netty-source-config.yaml pulsar-netty-standalone:/pulsar/conf/ - - ``` - -4. Download the Netty source connector. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - curl -O http://mirror-hk.koddos.net/apache/pulsar/pulsar-{version}/connectors/pulsar-io-netty-{version}.nar - - ``` - -5. Start the Netty source connector. - - ```bash - - $ ./bin/pulsar-admin sources localrun \ - --archive pulsar-io-@pulsar:version@.nar \ - --tenant public \ - --namespace default \ - --name netty \ - --destination-topic-name netty-topic \ - --source-config-file netty-source-config.yaml \ - --parallelism 1 - - ``` - -6. Consume data. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - - $ ./bin/pulsar-client consume -t Exclusive -s netty-sub netty-topic -n 0 - - ``` - -7. Open another terminal window to send data to the Netty source. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - - $ apt-get update - - $ apt-get -y install telnet - - $ root@1d19327b2c67:/pulsar# telnet 127.0.0.1 10999 - Trying 127.0.0.1... - Connected to 127.0.0.1. - Escape character is '^]'. - hello - world - - ``` - -8. The following information appears on the consumer terminal window. - - ```bash - - ----- got message ----- - hello - - ----- got message ----- - world - - ``` - -### HTTP - -1. Start Pulsar standalone. - - ```bash - - $ docker pull apachepulsar/pulsar:{version} - - $ docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-netty-standalone apachepulsar/pulsar:{version} bin/pulsar standalone - - ``` - -2. Create a configuration file _netty-source-config.yaml_. - - ```yaml - - configs: - type: "http" - host: "127.0.0.1" - port: 10999 - numberOfThreads: 1 - - ``` - -3. Copy the configuration file _netty-source-config.yaml_ to Pulsar server. - - ```bash - - $ docker cp netty-source-config.yaml pulsar-netty-standalone:/pulsar/conf/ - - ``` - -4. Download the Netty source connector. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - curl -O http://mirror-hk.koddos.net/apache/pulsar/pulsar-{version}/connectors/pulsar-io-netty-{version}.nar - - ``` - -5. Start the Netty source connector. 

   ```bash

   $ ./bin/pulsar-admin sources localrun \
   --archive pulsar-io-netty-@pulsar:version@.nar \
   --tenant public \
   --namespace default \
   --name netty \
   --destination-topic-name netty-topic \
   --source-config-file netty-source-config.yaml \
   --parallelism 1

   ```

6. Consume data.

   ```bash

   $ docker exec -it pulsar-netty-standalone /bin/bash

   $ ./bin/pulsar-client consume -t Exclusive -s netty-sub netty-topic -n 0

   ```

7. Open another terminal window to send data to the Netty source.

   ```bash

   $ docker exec -it pulsar-netty-standalone /bin/bash

   $ curl -X POST --data 'hello, world!' http://127.0.0.1:10999/

   ```

8. The following information appears on the consumer terminal window.

   ```bash

   ----- got message -----
   hello, world!

   ```

diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/io-nsq-source.md b/site2/website/versioned_docs/version-2.8.0-deprecated/io-nsq-source.md
deleted file mode 100644
index b61e7e100c22e1..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/io-nsq-source.md
+++ /dev/null
@@ -1,21 +0,0 @@
---
id: io-nsq-source
title: NSQ source connector
sidebar_label: "NSQ source connector"
original_id: io-nsq-source
---

The NSQ source connector receives messages from NSQ topics
and writes messages to Pulsar topics.

## Configuration

The configuration of the NSQ source connector has the following properties.

### Property

| Name | Type|Required | Default | Description
|------|----------|----------|---------|-------------|
| `lookupds` |String| true | " " (empty string) | A comma-separated list of nsqlookupds to connect to. |
| `topic` | String|true | " " (empty string) | The NSQ topic to transport. |
| `channel` | String |false | pulsar-transport-{$topic} | The channel to consume from on the provided NSQ topic. |
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/io-overview.md b/site2/website/versioned_docs/version-2.8.0-deprecated/io-overview.md
deleted file mode 100644
index 3db5ee34042d3f..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/io-overview.md
+++ /dev/null
@@ -1,164 +0,0 @@
---
id: io-overview
title: Pulsar connector overview
sidebar_label: "Overview"
original_id: io-overview
---

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````


Messaging systems are most powerful when you can easily use them with external systems like databases and other messaging systems.

**Pulsar IO connectors** enable you to easily create, deploy, and manage connectors that interact with external systems, such as [Apache Cassandra](https://cassandra.apache.org), [Aerospike](https://www.aerospike.com), and many others.


## Concept

Pulsar IO connectors come in two types: **source** and **sink**.

This diagram illustrates the relationship between source, Pulsar, and sink:

![Pulsar IO diagram](/assets/pulsar-io.png "Pulsar IO connectors (sources and sinks)")


### Source

> Sources **feed data from external systems into Pulsar**.

Common sources include other messaging systems and firehose-style data pipeline APIs.

For the complete list of Pulsar built-in source connectors, see [source connector](io-connectors.md#source-connector).

### Sink

> Sinks **feed data from Pulsar into external systems**.

Common sinks include other messaging systems and SQL and NoSQL databases.
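
To make the sink side concrete, a custom sink is simply a class implementing the `org.apache.pulsar.io.core.Sink` interface. Below is a minimal sketch that only logs each record; the class name and log output are illustrative, not part of any built-in connector:

```java

import java.util.Map;

import org.apache.pulsar.functions.api.Record;
import org.apache.pulsar.io.core.Sink;
import org.apache.pulsar.io.core.SinkContext;

public class LoggingSink implements Sink<String> {

    @Override
    public void open(Map<String, Object> config, SinkContext context) {
        // Connector-specific settings arrive in the config map.
    }

    @Override
    public void write(Record<String> record) {
        // Hand the value to the external system, then acknowledge the record.
        System.out.println("sink received: " + record.getValue());
        record.ack();
    }

    @Override
    public void close() {
        // Release any connections to the external system here.
    }
}

```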
- -For the complete list of Pulsar built-in sink connectors, see [sink connector](io-connectors.md#sink-connector). - -## Processing guarantee - -Processing guarantees are used to handle errors when writing messages to Pulsar topics. - -> Pulsar connectors and Functions use the **same** processing guarantees as below. - -Delivery semantic | Description -:------------------|:------- -`at-most-once` | Each message sent to a connector is to be **processed once** or **not to be processed**. -`at-least-once` | Each message sent to a connector is to be **processed once** or **more than once**. -`effectively-once` | Each message sent to a connector has **one output associated** with it. - -> Processing guarantees for connectors not just rely on Pulsar guarantee but also **relate to external systems**, that is, **the implementation of source and sink**. - -* Source: Pulsar ensures that writing messages to Pulsar topics respects to the processing guarantees. It is within Pulsar's control. - -* Sink: the processing guarantees rely on the sink implementation. If the sink implementation does not handle retries in an idempotent way, the sink does not respect to the processing guarantees. - -### Set - -When creating a connector, you can set the processing guarantee with the following semantics: - -* ATLEAST_ONCE - -* ATMOST_ONCE - -* EFFECTIVELY_ONCE - -> If `--processing-guarantees` is not specified when creating a connector, the default semantic is `ATLEAST_ONCE`. - -Here takes **Admin CLI** as an example. For more information about **REST API** or **JAVA Admin API**, see [here](io-use.md#create). - -````mdx-code-block - - - - -```bash - -$ bin/pulsar-admin sources create \ - --processing-guarantees ATMOST_ONCE \ - # Other source configs - -``` - -For more information about the options of `pulsar-admin sources create`, see [here](reference-connector-admin.md#create). - - - - -```bash - -$ bin/pulsar-admin sinks create \ - --processing-guarantees EFFECTIVELY_ONCE \ - # Other sink configs - -``` - -For more information about the options of `pulsar-admin sinks create`, see [here](reference-connector-admin.md#create-1). - - - - -```` - -### Update - -After creating a connector, you can update the processing guarantee with the following semantics: - -* ATLEAST_ONCE - -* ATMOST_ONCE - -* EFFECTIVELY_ONCE - -Here takes **Admin CLI** as an example. For more information about **REST API** or **JAVA Admin API**, see [here](io-use.md#create). - -````mdx-code-block - - - - -```bash - -$ bin/pulsar-admin sources update \ - --processing-guarantees EFFECTIVELY_ONCE \ - # Other source configs - -``` - -For more information about the options of `pulsar-admin sources update`, see [here](reference-connector-admin.md#update). - - - - -```bash - -$ bin/pulsar-admin sinks update \ - --processing-guarantees ATMOST_ONCE \ - # Other sink configs - -``` - -For more information about the options of `pulsar-admin sinks update`, see [here](reference-connector-admin.md#update-1). - - - - -```` - - -## Work with connector - -You can manage Pulsar connectors (for example, create, update, start, stop, restart, reload, delete and perform other operations on connectors) via the [Connector Admin CLI](reference-connector-admin.md) with [sources](io-cli.md#sources) and [sinks](io-cli.md#sinks) subcommands. - -Connectors (sources and sinks) and Functions are components of instances, and they all run on Functions workers. 
When managing a source, sink or function via [Connector Admin CLI](reference-connector-admin.md) or [Functions Admin CLI](functions-cli.md), an instance is started on a worker. For more information, see [Functions worker](functions-worker.md#run-functions-worker-separately).

diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/io-quickstart.md b/site2/website/versioned_docs/version-2.8.0-deprecated/io-quickstart.md
deleted file mode 100644
index 8474c93f51336d..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/io-quickstart.md
+++ /dev/null
@@ -1,963 +0,0 @@
---
id: io-quickstart
title: How to connect Pulsar to database
sidebar_label: "Get started"
original_id: io-quickstart
---

This tutorial provides a hands-on look at how you can move data out of Pulsar without writing a single line of code.

It is helpful to review the [concepts](io-overview.md) for Pulsar I/O while running the steps in this guide to gain a deeper understanding.

At the end of this tutorial, you are able to:

- [Connect Pulsar to Cassandra](#Connect-Pulsar-to-Cassandra)

- [Connect Pulsar to PostgreSQL](#Connect-Pulsar-to-PostgreSQL)

:::tip

* These instructions assume you are running Pulsar in [standalone mode](getting-started-standalone.md). However, all
the commands used in this tutorial can be used in a multi-node Pulsar cluster without any changes.
* All the instructions are assumed to run at the root directory of a Pulsar binary distribution.

:::

## Install Pulsar and built-in connector

Before connecting Pulsar to a database, you need to install Pulsar and the desired built-in connector.

For more information about **how to install a standalone Pulsar and built-in connectors**, see [here](getting-started-standalone.md/#installing-pulsar).

## Start Pulsar standalone

1. Start Pulsar locally.

   ```bash

   bin/pulsar standalone

   ```

   All the components of a Pulsar service are started in order.

   You can curl the Pulsar service endpoints to make sure the Pulsar service is up and running correctly.

2. Check Pulsar binary protocol port.

   ```bash

   telnet localhost 6650

   ```

3. Check Pulsar Function cluster.

   ```bash

   curl -s http://localhost:8080/admin/v2/worker/cluster

   ```

   **Example output**

   ```json

   [{"workerId":"c-standalone-fw-localhost-6750","workerHostname":"localhost","port":6750}]

   ```

4. Make sure a public tenant and a default namespace exist.

   ```bash

   curl -s http://localhost:8080/admin/v2/namespaces/public

   ```

   **Example output**

   ```json

   ["public/default","public/functions"]

   ```

5. All built-in connectors should be listed as available.
- - ```bash - - curl -s http://localhost:8080/admin/v2/functions/connectors - - ``` - - **Example output** - - ```json - - [{"name":"aerospike","description":"Aerospike database sink","sinkClass":"org.apache.pulsar.io.aerospike.AerospikeStringSink"},{"name":"cassandra","description":"Writes data into Cassandra","sinkClass":"org.apache.pulsar.io.cassandra.CassandraStringSink"},{"name":"kafka","description":"Kafka source and sink connector","sourceClass":"org.apache.pulsar.io.kafka.KafkaStringSource","sinkClass":"org.apache.pulsar.io.kafka.KafkaBytesSink"},{"name":"kinesis","description":"Kinesis sink connector","sinkClass":"org.apache.pulsar.io.kinesis.KinesisSink"},{"name":"rabbitmq","description":"RabbitMQ source connector","sourceClass":"org.apache.pulsar.io.rabbitmq.RabbitMQSource"},{"name":"twitter","description":"Ingest data from Twitter firehose","sourceClass":"org.apache.pulsar.io.twitter.TwitterFireHose"}] - - ``` - - If an error occurs when starting Pulsar service, you may see an exception at the terminal running `pulsar/standalone`, - or you can navigate to the `logs` directory under the Pulsar directory to view the logs. - -## Connect Pulsar to Cassandra - -This section demonstrates how to connect Pulsar to Cassandra. - -:::tip - -* Make sure you have Docker installed. If you do not have one, see [install Docker](https://docs.docker.com/docker-for-mac/install/). -* The Cassandra sink connector reads messages from Pulsar topics and writes the messages into Cassandra tables. For more information, see [Cassandra sink connector](io-cassandra-sink.md). - -::: - -### Setup a Cassandra cluster - -This example uses `cassandra` Docker image to start a single-node Cassandra cluster in Docker. - -1. Start a Cassandra cluster. - - ```bash - - docker run -d --rm --name=cassandra -p 9042:9042 cassandra - - ``` - - :::note - - Before moving to the next steps, make sure the Cassandra cluster is running. - - ::: - -2. Make sure the Docker process is running. - - ```bash - - docker ps - - ``` - -3. Check the Cassandra logs to make sure the Cassandra process is running as expected. - - ```bash - - docker logs cassandra - - ``` - -4. Check the status of the Cassandra cluster. - - ```bash - - docker exec cassandra nodetool status - - ``` - - **Example output** - - ``` - - Datacenter: datacenter1 - ======================= - Status=Up/Down - |/ State=Normal/Leaving/Joining/Moving - -- Address Load Tokens Owns (effective) Host ID Rack - UN 172.17.0.2 103.67 KiB 256 100.0% af0e4b2f-84e0-4f0b-bb14-bd5f9070ff26 rack1 - - ``` - -5. Use `cqlsh` to connect to the Cassandra cluster. - - ```bash - - $ docker exec -ti cassandra cqlsh localhost - Connected to Test Cluster at localhost:9042. - [cqlsh 5.0.1 | Cassandra 3.11.2 | CQL spec 3.4.4 | Native protocol v4] - Use HELP for help. - cqlsh> - - ``` - -6. Create a keyspace `pulsar_test_keyspace`. - - ```bash - - cqlsh> CREATE KEYSPACE pulsar_test_keyspace WITH replication = {'class':'SimpleStrategy', 'replication_factor':1}; - - ``` - -7. Create a table `pulsar_test_table`. - - ```bash - - cqlsh> USE pulsar_test_keyspace; - cqlsh:pulsar_test_keyspace> CREATE TABLE pulsar_test_table (key text PRIMARY KEY, col text); - - ``` - -### Configure a Cassandra sink - -Now that we have a Cassandra cluster running locally. - -In this section, you need to configure a Cassandra sink connector. - -To run a Cassandra sink connector, you need to prepare a configuration file including the information that Pulsar connector runtime needs to know. 
- -For example, how Pulsar connector can find the Cassandra cluster, what is the keyspace and the table that Pulsar connector uses for writing Pulsar messages to, and so on. - -You can create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "roots": "localhost:9042", - "keyspace": "pulsar_test_keyspace", - "columnFamily": "pulsar_test_table", - "keyname": "key", - "columnName": "col" - } - - ``` - -* YAML - - ```yaml - - configs: - roots: "localhost:9042" - keyspace: "pulsar_test_keyspace" - columnFamily: "pulsar_test_table" - keyname: "key" - columnName: "col" - - ``` - -For more information, see [Cassandra sink connector](io-cassandra-sink.md). - -### Create a Cassandra sink - -You can use the [Connector Admin CLI](io-cli.md) -to create a sink connector and perform other operations on them. - -Run the following command to create a Cassandra sink connector with sink type _cassandra_ and the config file _examples/cassandra-sink.yml_ created previously. - -#### Note -> The `sink-type` parameter of the currently built-in connectors is determined by the setting of the `name` parameter specified in the pulsar-io.yaml file. - -```bash - -bin/pulsar-admin sinks create \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink \ - --sink-type cassandra \ - --sink-config-file examples/cassandra-sink.yml \ - --inputs test_cassandra - -``` - -Once the command is executed, Pulsar creates the sink connector _cassandra-test-sink_. - -This sink connector runs -as a Pulsar Function and writes the messages produced in the topic _test_cassandra_ to the Cassandra table _pulsar_test_table_. - -### Inspect a Cassandra sink - -You can use the [Connector Admin CLI](io-cli.md) -to monitor a connector and perform other operations on it. - -* Get the information of a Cassandra sink. - - ```bash - - bin/pulsar-admin sinks get \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink - - ``` - - **Example output** - - ```json - - { - "tenant": "public", - "namespace": "default", - "name": "cassandra-test-sink", - "className": "org.apache.pulsar.io.cassandra.CassandraStringSink", - "inputSpecs": { - "test_cassandra": { - "isRegexPattern": false - } - }, - "configs": { - "roots": "localhost:9042", - "keyspace": "pulsar_test_keyspace", - "columnFamily": "pulsar_test_table", - "keyname": "key", - "columnName": "col" - }, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "autoAck": true, - "archive": "builtin://cassandra" - } - - ``` - -* Check the status of a Cassandra sink. - - ```bash - - bin/pulsar-admin sinks status \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink - - ``` - - **Example output** - - ```json - - { - "numInstances" : 1, - "numRunning" : 1, - "instances" : [ { - "instanceId" : 0, - "status" : { - "running" : true, - "error" : "", - "numRestarts" : 0, - "numReadFromPulsar" : 0, - "numSystemExceptions" : 0, - "latestSystemExceptions" : [ ], - "numSinkExceptions" : 0, - "latestSinkExceptions" : [ ], - "numWrittenToSink" : 0, - "lastReceivedTime" : 0, - "workerId" : "c-standalone-fw-localhost-8080" - } - } ] - } - - ``` - -### Verify a Cassandra sink - -1. Produce some messages to the input topic of the Cassandra sink _test_cassandra_. - - ```bash - - for i in {0..9}; do bin/pulsar-client produce -m "key-$i" -n 1 test_cassandra; done - - ``` - -2. Inspect the status of the Cassandra sink _test_cassandra_. 
- - ```bash - - bin/pulsar-admin sinks status \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink - - ``` - - You can see 10 messages are processed by the Cassandra sink _test_cassandra_. - - **Example output** - - ```json - - { - "numInstances" : 1, - "numRunning" : 1, - "instances" : [ { - "instanceId" : 0, - "status" : { - "running" : true, - "error" : "", - "numRestarts" : 0, - "numReadFromPulsar" : 10, - "numSystemExceptions" : 0, - "latestSystemExceptions" : [ ], - "numSinkExceptions" : 0, - "latestSinkExceptions" : [ ], - "numWrittenToSink" : 10, - "lastReceivedTime" : 1551685489136, - "workerId" : "c-standalone-fw-localhost-8080" - } - } ] - } - - ``` - -3. Use `cqlsh` to connect to the Cassandra cluster. - - ```bash - - docker exec -ti cassandra cqlsh localhost - - ``` - -4. Check the data of the Cassandra table _pulsar_test_table_. - - ```bash - - cqlsh> use pulsar_test_keyspace; - cqlsh:pulsar_test_keyspace> select * from pulsar_test_table; - - key | col - --------+-------- - key-5 | key-5 - key-0 | key-0 - key-9 | key-9 - key-2 | key-2 - key-1 | key-1 - key-3 | key-3 - key-6 | key-6 - key-7 | key-7 - key-4 | key-4 - key-8 | key-8 - - ``` - -### Delete a Cassandra Sink - -You can use the [Connector Admin CLI](io-cli.md) -to delete a connector and perform other operations on it. - -```bash - -bin/pulsar-admin sinks delete \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink - -``` - -## Connect Pulsar to PostgreSQL - -This section demonstrates how to connect Pulsar to PostgreSQL. - -:::tip - -* Make sure you have Docker installed. If you do not have one, see [install Docker](https://docs.docker.com/docker-for-mac/install/). -* The JDBC sink connector pulls messages from Pulsar topics and persists the messages to ClickHouse, MariaDB, PostgreSQL, or SQlite. - -::: - ->For more information, see [JDBC sink connector](io-jdbc-sink.md). - - -### Setup a PostgreSQL cluster - -This example uses the PostgreSQL 12 docker image to start a single-node PostgreSQL cluster in Docker. - -1. Pull the PostgreSQL 12 image from Docker. - - ```bash - - $ docker pull postgres:12 - - ``` - -2. Start PostgreSQL. - - ```bash - - $ docker run -d -it --rm \ - --name pulsar-postgres \ - -p 5432:5432 \ - -e POSTGRES_PASSWORD=password \ - -e POSTGRES_USER=postgres \ - postgres:12 - - ``` - - #### Tip - - Flag | Description | This example - ---|---|---| - `-d` | To start a container in detached mode. | / - `-it` | Keep STDIN open even if not attached and allocate a terminal. | / - `--rm` | Remove the container automatically when it exits. | / - `-name` | Assign a name to the container. | This example specifies _pulsar-postgres_ for the container. - `-p` | Publish the port of the container to the host. | This example publishes the port _5432_ of the container to the host. - `-e` | Set environment variables. | This example sets the following variables:
    - The password for the user is _password_.
    - The name for the user is _postgres_. - - :::tip - - For more information about Docker commands, see [Docker CLI](https://docs.docker.com/engine/reference/commandline/run/). - - ::: - -3. Check if PostgreSQL has been started successfully. - - ```bash - - $ docker logs -f pulsar-postgres - - ``` - - PostgreSQL has been started successfully if the following message appears. - - ```text - - 2020-05-11 20:09:24.492 UTC [1] LOG: starting PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit - 2020-05-11 20:09:24.492 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 - 2020-05-11 20:09:24.492 UTC [1] LOG: listening on IPv6 address "::", port 5432 - 2020-05-11 20:09:24.499 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" - 2020-05-11 20:09:24.523 UTC [55] LOG: database system was shut down at 2020-05-11 20:09:24 UTC - 2020-05-11 20:09:24.533 UTC [1] LOG: database system is ready to accept connections - - ``` - -4. Access to PostgreSQL. - - ```bash - - $ docker exec -it pulsar-postgres /bin/bash - - ``` - -5. Create a PostgreSQL table _pulsar_postgres_jdbc_sink_. - - ```bash - - $ psql -U postgres postgres - - postgres=# create table if not exists pulsar_postgres_jdbc_sink - ( - id serial PRIMARY KEY, - name VARCHAR(255) NOT NULL - ); - - ``` - -### Configure a JDBC sink - -Now we have a PostgreSQL running locally. - -In this section, you need to configure a JDBC sink connector. - -1. Add a configuration file. - - To run a JDBC sink connector, you need to prepare a YAML configuration file including the information that Pulsar connector runtime needs to know. - - For example, how Pulsar connector can find the PostgreSQL cluster, what is the JDBC URL and the table that Pulsar connector uses for writing messages to. - - Create a _pulsar-postgres-jdbc-sink.yaml_ file, copy the following contents to this file, and place the file in the `pulsar/connectors` folder. - - ```yaml - - configs: - userName: "postgres" - password: "password" - jdbcUrl: "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink" - tableName: "pulsar_postgres_jdbc_sink" - - ``` - -2. Create a schema. - - Create a _avro-schema_ file, copy the following contents to this file, and place the file in the `pulsar/connectors` folder. - - ```json - - { - "type": "AVRO", - "schema": "{\"type\":\"record\",\"name\":\"Test\",\"fields\":[{\"name\":\"id\",\"type\":[\"null\",\"int\"]},{\"name\":\"name\",\"type\":[\"null\",\"string\"]}]}", - "properties": {} - } - - ``` - - :::tip - - For more information about AVRO, see [Apache Avro](https://avro.apache.org/docs/1.9.1/). - - ::: - -3. Upload a schema to a topic. - - This example uploads the _avro-schema_ schema to the _pulsar-postgres-jdbc-sink-topic_ topic. - - ```bash - - $ bin/pulsar-admin schemas upload pulsar-postgres-jdbc-sink-topic -f ./connectors/avro-schema - - ``` - -4. Check if the schema has been uploaded successfully. - - ```bash - - $ bin/pulsar-admin schemas get pulsar-postgres-jdbc-sink-topic - - ``` - - The schema has been uploaded successfully if the following message appears. - - ```json - - {"name":"pulsar-postgres-jdbc-sink-topic","schema":"{\"type\":\"record\",\"name\":\"Test\",\"fields\":[{\"name\":\"id\",\"type\":[\"null\",\"int\"]},{\"name\":\"name\",\"type\":[\"null\",\"string\"]}]}","type":"AVRO","properties":{}} - - ``` - -### Create a JDBC sink - -You can use the [Connector Admin CLI](io-cli.md) -to create a sink connector and perform other operations on it. 
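
The same operation is also exposed through the Java admin API. Below is a minimal sketch, assuming a standalone cluster reachable at `http://localhost:8080` and an illustrative local path for the JDBC connector archive:

```java

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.common.io.SinkConfig;

public class CreateJdbcSink {
    public static void main(String[] args) throws Exception {
        try (PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080")
                .build()) {
            // Connector-specific settings, mirroring pulsar-postgres-jdbc-sink.yaml.
            Map<String, Object> configs = new HashMap<>();
            configs.put("userName", "postgres");
            configs.put("password", "password");
            configs.put("jdbcUrl", "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink");
            configs.put("tableName", "pulsar_postgres_jdbc_sink");

            SinkConfig sink = SinkConfig.builder()
                    .tenant("public")
                    .namespace("default")
                    .name("pulsar-postgres-jdbc-sink")
                    .inputs(Collections.singletonList("pulsar-postgres-jdbc-sink-topic"))
                    .configs(configs)
                    .parallelism(1)
                    .build();

            // The second argument is the local path to the connector archive (NAR file).
            admin.sinks().createSink(sink, "./connectors/pulsar-io-jdbc-postgres.nar");
        }
    }
}

```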
- -This example creates a sink connector and specifies the desired information. - -```bash - -$ bin/pulsar-admin sinks create \ ---archive ./connectors/pulsar-io-jdbc-postgres-@pulsar:version@.nar \ ---inputs pulsar-postgres-jdbc-sink-topic \ ---name pulsar-postgres-jdbc-sink \ ---sink-config-file ./connectors/pulsar-postgres-jdbc-sink.yaml \ ---parallelism 1 - -``` - -Once the command is executed, Pulsar creates a sink connector _pulsar-postgres-jdbc-sink_. - -This sink connector runs as a Pulsar Function and writes the messages produced in the topic _pulsar-postgres-jdbc-sink-topic_ to the PostgreSQL table _pulsar_postgres_jdbc_sink_. - - #### Tip - - Flag | Description | This example - ---|---|---| - `--archive` | The path to the archive file for the sink. | _pulsar-io-jdbc-postgres-@pulsar:version@.nar_ | - `--inputs` | The input topic(s) of the sink.

    Multiple topics can be specified as a comma-separated list.|| - `--name` | The name of the sink. | _pulsar-postgres-jdbc-sink_ | - `--sink-config-file` | The path to a YAML config file specifying the configuration of the sink. | _pulsar-postgres-jdbc-sink.yaml_ | - `--parallelism` | The parallelism factor of the sink.

    For example, the number of sink instances to run. | _1_ | - -:::tip - -For more information about `pulsar-admin sinks create options`, see [here](io-cli.md#sinks). - -::: - -The sink has been created successfully if the following message appears. - -```bash - -"Created successfully" - -``` - -### Inspect a JDBC sink - -You can use the [Connector Admin CLI](io-cli.md) -to monitor a connector and perform other operations on it. - -* List all running JDBC sink(s). - - ```bash - - $ bin/pulsar-admin sinks list \ - --tenant public \ - --namespace default - - ``` - - :::tip - - For more information about `pulsar-admin sinks list options`, see [here](io-cli.md/#list-1). - - ::: - - The result shows that only the _postgres-jdbc-sink_ sink is running. - - ```json - - [ - "pulsar-postgres-jdbc-sink" - ] - - ``` - -* Get the information of a JDBC sink. - - ```bash - - $ bin/pulsar-admin sinks get \ - --tenant public \ - --namespace default \ - --name pulsar-postgres-jdbc-sink - - ``` - - :::tip - - For more information about `pulsar-admin sinks get options`, see [here](io-cli.md/#get-1). - - ::: - - The result shows the information of the sink connector, including tenant, namespace, topic and so on. - - ```json - - { - "tenant": "public", - "namespace": "default", - "name": "pulsar-postgres-jdbc-sink", - "className": "org.apache.pulsar.io.jdbc.PostgresJdbcAutoSchemaSink", - "inputSpecs": { - "pulsar-postgres-jdbc-sink-topic": { - "isRegexPattern": false - } - }, - "configs": { - "password": "password", - "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink", - "userName": "postgres", - "tableName": "pulsar_postgres_jdbc_sink" - }, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "autoAck": true - } - - ``` - -* Get the status of a JDBC sink - - ```bash - - $ bin/pulsar-admin sinks status \ - --tenant public \ - --namespace default \ - --name pulsar-postgres-jdbc-sink - - ``` - - :::tip - - For more information about `pulsar-admin sinks status options`, see [here](io-cli.md/#status-1). - - ::: - - The result shows the current status of sink connector, including the number of instance, running status, worker ID and so on. - - ```json - - { - "numInstances" : 1, - "numRunning" : 1, - "instances" : [ { - "instanceId" : 0, - "status" : { - "running" : true, - "error" : "", - "numRestarts" : 0, - "numReadFromPulsar" : 0, - "numSystemExceptions" : 0, - "latestSystemExceptions" : [ ], - "numSinkExceptions" : 0, - "latestSinkExceptions" : [ ], - "numWrittenToSink" : 0, - "lastReceivedTime" : 0, - "workerId" : "c-standalone-fw-192.168.2.52-8080" - } - } ] - } - - ``` - -### Stop a JDBC sink - -You can use the [Connector Admin CLI](io-cli.md) -to stop a connector and perform other operations on it. - -```bash - -$ bin/pulsar-admin sinks stop \ ---tenant public \ ---namespace default \ ---name pulsar-postgres-jdbc-sink - -``` - -:::tip - -For more information about `pulsar-admin sinks stop options`, see [here](io-cli.md/#stop-1). - -::: - -The sink instance has been stopped successfully if the following message disappears. - -```bash - -"Stopped successfully" - -``` - -### Restart a JDBC sink - -You can use the [Connector Admin CLI](io-cli.md) -to restart a connector and perform other operations on it. - -```bash - -$ bin/pulsar-admin sinks restart \ ---tenant public \ ---namespace default \ ---name pulsar-postgres-jdbc-sink - -``` - -:::tip - -For more information about `pulsar-admin sinks restart options`, see [here](io-cli.md/#restart-1). 

:::

The sink instance has been started successfully if the following message appears.

```bash

"Started successfully"

```

:::tip

* Optionally, you can run a standalone sink connector using `pulsar-admin sinks localrun options`.
Note that `pulsar-admin sinks localrun options` **runs a sink connector locally**, while `pulsar-admin sinks start options` **starts a sink connector in a cluster**.
* For more information about `pulsar-admin sinks localrun options`, see [here](io-cli.md#localrun-1).

:::

### Update a JDBC sink

You can use the [Connector Admin CLI](io-cli.md)
to update a connector and perform other operations on it.

This example updates the parallelism of the _pulsar-postgres-jdbc-sink_ sink connector to 2.

```bash

$ bin/pulsar-admin sinks update \
--name pulsar-postgres-jdbc-sink \
--parallelism 2

```

:::tip

For more information about `pulsar-admin sinks update options`, see [here](io-cli.md/#update-1).

:::

The sink connector has been updated successfully if the following message appears.

```bash

"Updated successfully"

```

This example double-checks the information.

```bash

$ bin/pulsar-admin sinks get \
--tenant public \
--namespace default \
--name pulsar-postgres-jdbc-sink

```

The result shows that the parallelism is 2.

```json

{
  "tenant": "public",
  "namespace": "default",
  "name": "pulsar-postgres-jdbc-sink",
  "className": "org.apache.pulsar.io.jdbc.PostgresJdbcAutoSchemaSink",
  "inputSpecs": {
    "pulsar-postgres-jdbc-sink-topic": {
      "isRegexPattern": false
    }
  },
  "configs": {
    "password": "password",
    "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink",
    "userName": "postgres",
    "tableName": "pulsar_postgres_jdbc_sink"
  },
  "parallelism": 2,
  "processingGuarantees": "ATLEAST_ONCE",
  "retainOrdering": false,
  "autoAck": true
}

```

### Delete a JDBC sink

You can use the [Connector Admin CLI](io-cli.md)
to delete a connector and perform other operations on it.

This example deletes the _pulsar-postgres-jdbc-sink_ sink connector.

```bash

$ bin/pulsar-admin sinks delete \
--tenant public \
--namespace default \
--name pulsar-postgres-jdbc-sink

```

:::tip

For more information about `pulsar-admin sinks delete options`, see [here](io-cli.md/#delete-1).

:::

The sink connector has been deleted successfully if the following message appears.

```text

"Deleted successfully"

```

This example double-checks the status of the sink connector.

```bash

$ bin/pulsar-admin sinks get \
--tenant public \
--namespace default \
--name pulsar-postgres-jdbc-sink

```

The result shows that the sink connector does not exist.

```text

HTTP 404 Not Found

Reason: Sink pulsar-postgres-jdbc-sink doesn't exist

```

diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/io-rabbitmq-sink.md b/site2/website/versioned_docs/version-2.8.0-deprecated/io-rabbitmq-sink.md
deleted file mode 100644
index d7fda99460dc97..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/io-rabbitmq-sink.md
+++ /dev/null
@@ -1,85 +0,0 @@
---
id: io-rabbitmq-sink
title: RabbitMQ sink connector
sidebar_label: "RabbitMQ sink connector"
original_id: io-rabbitmq-sink
---

The RabbitMQ sink connector pulls messages from Pulsar topics
and persists the messages to RabbitMQ queues.


## Configuration

The configuration of the RabbitMQ sink connector has the following properties.
- - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `connectionName` |String| true | " " (empty string) | The connection name. | -| `host` | String| true | " " (empty string) | The RabbitMQ host. | -| `port` | int |true | 5672 | The RabbitMQ port. | -| `virtualHost` |String|true | / | The virtual host used to connect to RabbitMQ. | -| `username` | String|false | guest | The username used to authenticate to RabbitMQ. | -| `password` | String|false | guest | The password used to authenticate to RabbitMQ. | -| `queueName` | String|true | " " (empty string) | The RabbitMQ queue name that messages should be read from or written to. | -| `requestedChannelMax` | int|false | 0 | The initially requested maximum channel number.

    0 means unlimited. | -| `requestedFrameMax` | int|false |0 | The initially requested maximum frame size in octets.

    0 means unlimited. | -| `connectionTimeout` | int|false | 60000 | The timeout of TCP connection establishment in milliseconds.

    0 means infinite. |
| `handshakeTimeout` | int|false | 10000 | The timeout of AMQP0-9-1 protocol handshake in milliseconds. |
| `requestedHeartbeat` | int|false | 60 | The requested heartbeat timeout in seconds. |
| `exchangeName` | String|true | " " (empty string) | The exchange to publish messages. |
| `routingKey` | String|true | " " (empty string) | The routing key used to publish messages. |


### Example

Before using the RabbitMQ sink connector, you need to create a configuration file through one of the following methods.

* JSON

  ```json

  {
    "host": "localhost",
    "port": "5672",
    "virtualHost": "/",
    "username": "guest",
    "password": "guest",
    "queueName": "test-queue",
    "connectionName": "test-connection",
    "requestedChannelMax": "0",
    "requestedFrameMax": "0",
    "connectionTimeout": "60000",
    "handshakeTimeout": "10000",
    "requestedHeartbeat": "60",
    "exchangeName": "test-exchange",
    "routingKey": "test-key"
  }

  ```

* YAML

  ```yaml

  configs:
    host: "localhost"
    port: 5672
    virtualHost: "/"
    username: "guest"
    password: "guest"
    queueName: "test-queue"
    connectionName: "test-connection"
    requestedChannelMax: 0
    requestedFrameMax: 0
    connectionTimeout: 60000
    handshakeTimeout: 10000
    requestedHeartbeat: 60
    exchangeName: "test-exchange"
    routingKey: "test-key"

  ```

diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/io-rabbitmq-source.md b/site2/website/versioned_docs/version-2.8.0-deprecated/io-rabbitmq-source.md
deleted file mode 100644
index c2c31cc97d10d9..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/io-rabbitmq-source.md
+++ /dev/null
@@ -1,85 +0,0 @@
---
id: io-rabbitmq-source
title: RabbitMQ source connector
sidebar_label: "RabbitMQ source connector"
original_id: io-rabbitmq-source
---

The RabbitMQ source connector receives messages from RabbitMQ clusters
and writes messages to Pulsar topics.

## Configuration

The configuration of the RabbitMQ source connector has the following properties.

### Property

| Name | Type|Required | Default | Description
|------|----------|----------|---------|-------------|
| `connectionName` |String| true | " " (empty string) | The connection name. |
| `host` | String| true | " " (empty string) | The RabbitMQ host. |
| `port` | int |true | 5672 | The RabbitMQ port. |
| `virtualHost` |String|true | / | The virtual host used to connect to RabbitMQ. |
| `username` | String|false | guest | The username used to authenticate to RabbitMQ. |
| `password` | String|false | guest | The password used to authenticate to RabbitMQ. |
| `queueName` | String|true | " " (empty string) | The RabbitMQ queue name that messages should be read from or written to. |
| `requestedChannelMax` | int|false | 0 | The initially requested maximum channel number.<br />

    0 means unlimited. | -| `requestedFrameMax` | int|false |0 | The initially requested maximum frame size in octets.

    0 means unlimited. | -| `connectionTimeout` | int|false | 60000 | The timeout of TCP connection establishment in milliseconds.

    0 means infinite. | -| `handshakeTimeout` | int|false | 10000 | The timeout of AMQP0-9-1 protocol handshake in milliseconds. | -| `requestedHeartbeat` | int|false | 60 | The requested heartbeat timeout in seconds. | -| `prefetchCount` | int|false | 0 | The maximum number of messages that the server delivers.

    0 means unlimited. | -| `prefetchGlobal` | boolean|false | false |Whether the setting should be applied to the entire channel rather than each consumer. | -| `passive` | boolean|false | false | Whether the rabbitmq consumer should create its own queue or bind to an existing one. | - -### Example - -Before using the RabbitMQ source connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "host": "localhost", - "port": "5672", - "virtualHost": "/", - "username": "guest", - "password": "guest", - "queueName": "test-queue", - "connectionName": "test-connection", - "requestedChannelMax": "0", - "requestedFrameMax": "0", - "connectionTimeout": "60000", - "handshakeTimeout": "10000", - "requestedHeartbeat": "60", - "prefetchCount": "0", - "prefetchGlobal": "false", - "passive": "false" - } - - ``` - -* YAML - - ```yaml - - configs: - host: "localhost" - port: 5672 - virtualHost: "/" - username: "guest" - password: "guest" - queueName: "test-queue" - connectionName: "test-connection" - requestedChannelMax: 0 - requestedFrameMax: 0 - connectionTimeout: 60000 - handshakeTimeout: 10000 - requestedHeartbeat: 60 - prefetchCount: 0 - prefetchGlobal: "false" - passive: "false" - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/io-redis-sink.md b/site2/website/versioned_docs/version-2.8.0-deprecated/io-redis-sink.md deleted file mode 100644 index 793d74a5f2cb38..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/io-redis-sink.md +++ /dev/null @@ -1,74 +0,0 @@ ---- -id: io-redis-sink -title: Redis sink connector -sidebar_label: "Redis sink connector" -original_id: io-redis-sink ---- - -The Redis sink connector pulls messages from Pulsar topics -and persists the messages to a Redis database. - - - -## Configuration - -The configuration of the Redis sink connector has the following properties. - - - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `redisHosts` |String|true|" " (empty string) | A comma-separated list of Redis hosts to connect to. | -| `redisPassword` |String|false|" " (empty string) | The password used to connect to Redis. | -| `redisDatabase` | int|true|0 | The Redis database to connect to. | -| `clientMode` |String| false|Standalone | The client mode when interacting with Redis cluster.

    Below are the available options:<br />
  * Standalone<br />
  * Cluster |
| `autoReconnect` | boolean|false|true | Whether the Redis client automatically reconnects or not. |
| `requestQueue` | int|false|2147483647 | The maximum number of queued requests to Redis. |
| `tcpNoDelay` |boolean| false| false | Whether to enable TCP with no delay or not. |
| `keepAlive` | boolean|false | false |Whether to enable a keepalive to Redis or not. |
| `connectTimeout` |long| false|10000 | The time to wait before timing out when connecting in milliseconds. |
| `operationTimeout` | long|false|10000 | The time before an operation is marked as timed out in milliseconds. |
| `batchTimeMs` | int|false|1000 | The Redis operation time in milliseconds. |
| `batchSize` | int|false|200 | The batch size of writing to Redis database. |


### Example

Before using the Redis sink connector, you need to create a configuration file through one of the following methods.

* JSON

  ```json

  {
    "redisHosts": "localhost:6379",
    "redisPassword": "fake@123",
    "redisDatabase": "1",
    "clientMode": "Standalone",
    "operationTimeout": "2000",
    "batchSize": "100",
    "batchTimeMs": "1000",
    "connectTimeout": "3000"
  }

  ```

* YAML

  ```yaml

  configs:
    redisHosts: "localhost:6379"
    redisPassword: "fake@123"
    redisDatabase: 1
    clientMode: "Standalone"
    operationTimeout: 2000
    batchSize: 100
    batchTimeMs: 1000
    connectTimeout: 3000

  ```

diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/io-solr-sink.md b/site2/website/versioned_docs/version-2.8.0-deprecated/io-solr-sink.md
deleted file mode 100644
index df2c3612c38eb6..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/io-solr-sink.md
+++ /dev/null
@@ -1,65 +0,0 @@
---
id: io-solr-sink
title: Solr sink connector
sidebar_label: "Solr sink connector"
original_id: io-solr-sink
---

The Solr sink connector pulls messages from Pulsar topics
and persists the messages to Solr collections.



## Configuration

The configuration of the Solr sink connector has the following properties.



### Property

| Name | Type|Required | Default | Description
|------|----------|----------|---------|-------------|
| `solrUrl` | String|true|" " (empty string) |<br />
  * Comma-separated zookeeper hosts with chroot used in the SolrCloud mode.<br />
    **Example**<br />
    `localhost:2181,localhost:2182/chroot`<br />
<br />
  * URL to connect to Solr used in standalone mode.<br />
    **Example**<br />
    `localhost:8983/solr` |
| `solrMode` | String|true|SolrCloud| The client mode when interacting with the Solr cluster.<br />
<br />
    Below are the available options:<br />
  * Standalone<br />
  * SolrCloud |
| `solrCollection` |String|true| " " (empty string) | Solr collection name to which records need to be written. |
| `solrCommitWithinMs` |int| false|10 | The time in milliseconds within which Solr commits the update.|
| `username` |String|false| " " (empty string) | The username for basic authentication.<br />
<br />
    **Note: `username` is case-sensitive.** |
| `password` | String|false| " " (empty string) | The password for basic authentication.<br />

    **Note: `password` is case-sensitive.** | - - - -### Example - -Before using the Solr sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "solrUrl": "localhost:2181,localhost:2182/chroot", - "solrMode": "SolrCloud", - "solrCollection": "techproducts", - "solrCommitWithinMs": 100, - "username": "fakeuser", - "password": "fake@123" - } - - ``` - -* YAML - - ```yaml - - { - solrUrl: "localhost:2181,localhost:2182/chroot" - solrMode: "SolrCloud" - solrCollection: "techproducts" - solrCommitWithinMs: 100 - username: "fakeuser" - password: "fake@123" - } - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/io-twitter-source.md b/site2/website/versioned_docs/version-2.8.0-deprecated/io-twitter-source.md deleted file mode 100644 index 8de3504dd0fef2..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/io-twitter-source.md +++ /dev/null @@ -1,28 +0,0 @@ ---- -id: io-twitter-source -title: Twitter Firehose source connector -sidebar_label: "Twitter Firehose source connector" -original_id: io-twitter-source ---- - -The Twitter Firehose source connector receives tweets from Twitter Firehose and -writes the tweets to Pulsar topics. - -## Configuration - -The configuration of the Twitter Firehose source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `consumerKey` | String|true | " " (empty string) | The twitter OAuth consumer key.

    For more information, see [Access tokens](https://developer.twitter.com/en/docs/basics/authentication/guides/access-tokens). |
| `consumerSecret` | String |true | " " (empty string) | The twitter OAuth consumer secret. |
| `token` | String|true | " " (empty string) | The twitter OAuth token. |
| `tokenSecret` | String|true | " " (empty string) | The twitter OAuth token secret. |
| `guestimateTweetTime`|Boolean|false|false|Most firehose events have null createdAt time.<br />

    If `guestimateTweetTime` is set to true, the connector estimates the createdTime of each firehose event to be the current time.
| `clientName` | String |false | openconnector-twitter-source| The twitter firehose client name. |
| `clientHosts` |String| false | Constants.STREAM_HOST | The twitter firehose hosts to which client connects. |
| `clientBufferSize` | int|false | 50000 | The buffer size for buffering tweets fetched from twitter firehose. |

> For more information about OAuth credentials, see [Twitter developers portal](https://developer.twitter.com/en.html).
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/io-twitter.md b/site2/website/versioned_docs/version-2.8.0-deprecated/io-twitter.md
deleted file mode 100644
index 3b2f6325453c3c..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/io-twitter.md
+++ /dev/null
@@ -1,7 +0,0 @@
---
id: io-twitter
title: Twitter Firehose Connector
sidebar_label: "Twitter Firehose Connector"
original_id: io-twitter
---

diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/io-use.md b/site2/website/versioned_docs/version-2.8.0-deprecated/io-use.md
deleted file mode 100644
index da9ed746c4d372..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/io-use.md
+++ /dev/null
@@ -1,1787 +0,0 @@
---
id: io-use
title: How to use Pulsar connectors
sidebar_label: "Use"
original_id: io-use
---

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````


This guide describes how to use Pulsar connectors.

## Install a connector

Pulsar bundles several [builtin connectors](io-connectors.md) used to move data in and out of commonly used systems (such as databases and messaging systems). Optionally, you can create and use your desired non-builtin connectors.

:::note

When using a non-builtin connector, you need to specify the path of an archive file for the connector.

:::

To set up a builtin connector, follow
the instructions [here](getting-started-standalone.md#installing-builtin-connectors).

After the setup, the builtin connector is automatically discovered by Pulsar brokers (or function-workers), so no additional installation steps are required.

## Configure a connector

You can configure the following information:

* [Configure a default storage location for a connector](#configure-a-default-storage-location-for-a-connector)

* [Configure a connector with a YAML file](#configure-a-connector-with-yaml-file)

### Configure a default storage location for a connector

To configure a default folder for builtin connectors, set the `connectorsDirectory` parameter in the `./conf/functions_worker.yml` configuration file.

**Example**

Set the `./connectors` folder as the default storage location for builtin connectors.

```

########################
# Connectors
########################

connectorsDirectory: ./connectors

```

### Configure a connector with a YAML file

To configure a connector, you need to provide a YAML configuration file when creating a connector.

The YAML configuration file tells Pulsar where to locate connectors and how to connect connectors with Pulsar topics.
- -**Example 1** - -Below is a YAML configuration file of a Cassandra sink, which tells Pulsar: - -* Which Cassandra cluster to connect - -* What is the `keyspace` and `columnFamily` to be used in Cassandra for collecting data - -* How to map Pulsar messages into Cassandra table key and columns - -```shell - -tenant: public -namespace: default -name: cassandra-test-sink -... -# cassandra specific config -configs: - roots: "localhost:9042" - keyspace: "pulsar_test_keyspace" - columnFamily: "pulsar_test_table" - keyname: "key" - columnName: "col" - -``` - -**Example 2** - -Below is a YAML configuration file of a Kafka source. - -```shell - -configs: - bootstrapServers: "pulsar-kafka:9092" - groupId: "test-pulsar-io" - topic: "my-topic" - sessionTimeoutMs: "10000" - autoCommitEnabled: "false" - -``` - -**Example 3** - -Below is a YAML configuration file of a PostgreSQL JDBC sink. - -```shell - -configs: - userName: "postgres" - password: "password" - jdbcUrl: "jdbc:postgresql://localhost:5432/test_jdbc" - tableName: "test_jdbc" - -``` - -## Get available connectors - -Before starting using connectors, you can perform the following operations: - -* [Reload connectors](#reload) - -* [Get a list of available connectors](#get-available-connectors) - -### `reload` - -If you add or delete a nar file in a connector folder, reload the available builtin connector before using it. - -#### Source - -Use the `reload` subcommand. - -```shell - -$ pulsar-admin sources reload - -``` - -For more information, see [`here`](io-cli.md#reload). - -#### Sink - -Use the `reload` subcommand. - -```shell - -$ pulsar-admin sinks reload - -``` - -For more information, see [`here`](io-cli.md#reload-1). - -### `available` - -After reloading connectors (optional), you can get a list of available connectors. - -#### Source - -Use the `available-sources` subcommand. - -```shell - -$ pulsar-admin sources available-sources - -``` - -#### Sink - -Use the `available-sinks` subcommand. - -```shell - -$ pulsar-admin sinks available-sinks - -``` - -## Run a connector - -To run a connector, you can perform the following operations: - -* [Create a connector](#create) - -* [Start a connector](#start) - -* [Run a connector locally](#localrun) - -### `create` - -You can create a connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Create a source connector. - -````mdx-code-block - - - - -Use the `create` subcommand. - -``` - -$ pulsar-admin sources create options - -``` - -For more information, see [here](io-cli.md#create). - - - - -Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/registerSource?version=@pulsar:version_number@} - - - - -* Create a source connector with a **local file**. - - ```java - - void createSource(SourceConfig sourceConfig, - String fileName) - throws PulsarAdminException - - ``` - - **Parameter** - - |Name|Description - |---|--- - `sourceConfig` | The source configuration object - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`createSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#createSource-SourceConfig-java.lang.String-). - -* Create a source connector using a **remote file** with a URL from which fun-pkg can be downloaded. - - ```java - - void createSourceWithUrl(SourceConfig sourceConfig, - String pkgUrl) - throws PulsarAdminException - - ``` - - Supported URLs are `http` and `file`. 
- - **Example** - - * HTTP: http://www.repo.com/fileName.jar - - * File: file:///dir/fileName.jar - - **Parameter** - - Parameter| Description - |---|--- - `sourceConfig` | The source configuration object - `pkgUrl` | URL from which pkg can be downloaded - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`createSourceWithUrl`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#createSourceWithUrl-SourceConfig-java.lang.String-). - - - - -```` - -#### Sink - -Create a sink connector. - -````mdx-code-block - - - - -Use the `create` subcommand. - -``` - -$ pulsar-admin sinks create options - -``` - -For more information, see [here](io-cli.md#create-1). - - - - -Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sinkName|operation/registerSink?version=@pulsar:version_number@} - - - - -* Create a sink connector with a **local file**. - - ```java - - void createSink(SinkConfig sinkConfig, - String fileName) - throws PulsarAdminException - - ``` - - **Parameter** - - |Name|Description - |---|--- - `sinkConfig` | The sink configuration object - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`createSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#createSink-SinkConfig-java.lang.String-). - -* Create a sink connector using a **remote file** with a URL from which fun-pkg can be downloaded. - - ```java - - void createSinkWithUrl(SinkConfig sinkConfig, - String pkgUrl) - throws PulsarAdminException - - ``` - - Supported URLs are `http` and `file`. - - **Example** - - * HTTP: http://www.repo.com/fileName.jar - - * File: file:///dir/fileName.jar - - **Parameter** - - Parameter| Description - |---|--- - `sinkConfig` | The sink configuration object - `pkgUrl` | URL from which pkg can be downloaded - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`createSinkWithUrl`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#createSinkWithUrl-SinkConfig-java.lang.String-). - - - - -```` - -### `start` - -You can start a connector using **Admin CLI** or **REST API**. - -#### Source - -Start a source connector. - -````mdx-code-block - - - - -Use the `start` subcommand. - -``` - -$ pulsar-admin sources start options - -``` - -For more information, see [here](io-cli.md#start). - - - - -* Start **all** source connectors. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/start|operation/startSource?version=@pulsar:version_number@} - -* Start a **specified** source connector. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/:instanceId/start|operation/startSource?version=@pulsar:version_number@} - - - - -```` - -#### Sink - -Start a sink connector. - -````mdx-code-block - - - - -Use the `start` subcommand. - -``` - -$ pulsar-admin sinks start options - -``` - -For more information, see [here](io-cli.md#start-1). - - - - -* Start **all** sink connectors. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sinkName/start|operation/startSink?version=@pulsar:version_number@} - -* Start a **specified** sink connector. 
- - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sourceName/:instanceId/start|operation/startSink?version=@pulsar:version_number@} - - - - -```` - -### `localrun` - -You can run a connector locally rather than deploying it on a Pulsar cluster using **Admin CLI**. - -#### Source - -Run a source connector locally. - -````mdx-code-block - - - - -Use the `localrun` subcommand. - -``` - -$ pulsar-admin sources localrun options - -``` - -For more information, see [here](io-cli.md#localrun). - - - - -```` - -#### Sink - -Run a sink connector locally. - -````mdx-code-block - - - - -Use the `localrun` subcommand. - -``` - -$ pulsar-admin sinks localrun options - -``` - -For more information, see [here](io-cli.md#localrun-1). - - - - -```` - -## Monitor a connector - -To monitor a connector, you can perform the following operations: - -* [Get the information of a connector](#get) - -* [Get the list of all running connectors](#list) - -* [Get the current status of a connector](#status) - -### `get` - -You can get the information of a connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Get the information of a source connector. - -````mdx-code-block - - - - -Use the `get` subcommand. - -``` - -$ pulsar-admin sources get options - -``` - -For more information, see [here](io-cli.md#get). - - - - -Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/getSourceInfo?version=@pulsar:version_number@} - - - - -```java - -SourceConfig getSource(String tenant, - String namespace, - String source) - throws PulsarAdminException - -``` - -**Example** - -This is a sourceConfig. - -```java - -{ - "tenant": "tenantName", - "namespace": "namespaceName", - "name": "sourceName", - "className": "className", - "topicName": "topicName", - "configs": {}, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "resources": { - "cpu": 1.0, - "ram": 1073741824, - "disk": 10737418240 - } -} - -``` - -This is a sourceConfig example. - -``` - -{ - "tenant": "public", - "namespace": "default", - "name": "debezium-mysql-source", - "className": "org.apache.pulsar.io.debezium.mysql.DebeziumMysqlSource", - "topicName": "debezium-mysql-topic", - "configs": { - "database.user": "debezium", - "database.server.id": "184054", - "database.server.name": "dbserver1", - "database.port": "3306", - "database.hostname": "localhost", - "database.password": "dbz", - "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650", - "value.converter": "org.apache.kafka.connect.json.JsonConverter", - "database.whitelist": "inventory", - "key.converter": "org.apache.kafka.connect.json.JsonConverter", - "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory", - "pulsar.service.url": "pulsar://127.0.0.1:6650", - "database.history.pulsar.topic": "history-topic2" - }, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "resources": { - "cpu": 1.0, - "ram": 1073741824, - "disk": 10737418240 - } -} - -``` - -**Exception** - -Exception name | Description -|---|--- -`PulsarAdminException.NotAuthorizedException` | You don't have the admin permission -`PulsarAdminException.NotFoundException` | Cluster doesn't exist -`PulsarAdminException` | Unexpected error - -For more information, see [`getSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#getSource-java.lang.String-java.lang.String-java.lang.String-). 
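
    For instance, a minimal sketch that fetches and prints a source's configuration (assuming an admin endpoint at `http://localhost:8080` and an existing source named `debezium-mysql-source`) might look like this:

    ```java

    import org.apache.pulsar.client.admin.PulsarAdmin;
    import org.apache.pulsar.common.io.SourceConfig;

    public class GetSourceInfo {
        public static void main(String[] args) throws Exception {
            try (PulsarAdmin admin = PulsarAdmin.builder()
                    .serviceHttpUrl("http://localhost:8080") // assumed admin endpoint
                    .build()) {
                SourceConfig config = admin.sources()
                        .getSource("public", "default", "debezium-mysql-source");
                // Print a few fields from the returned configuration.
                System.out.println("class:       " + config.getClassName());
                System.out.println("topic:       " + config.getTopicName());
                System.out.println("parallelism: " + config.getParallelism());
            }
        }
    }

    ```

    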
- - - - -```` - -#### Sink - -Get the information of a sink connector. - -````mdx-code-block - - - - -Use the `get` subcommand. - -``` - -$ pulsar-admin sinks get options - -``` - -For more information, see [here](io-cli.md#get-1). - - - - -Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sinks/:tenant/:namespace/:sinkName|operation/getSinkInfo?version=@pulsar:version_number@} - - - - -```java - -SinkConfig getSink(String tenant, - String namespace, - String sink) - throws PulsarAdminException - -``` - -**Example** - -This is a sinkConfig. - -```json - -{ -"tenant": "tenantName", -"namespace": "namespaceName", -"name": "sinkName", -"className": "className", -"inputSpecs": { -"topicName": { - "isRegexPattern": false -} -}, -"configs": {}, -"parallelism": 1, -"processingGuarantees": "ATLEAST_ONCE", -"retainOrdering": false, -"autoAck": true -} - -``` - -This is a sinkConfig example. - -```json - -{ - "tenant": "public", - "namespace": "default", - "name": "pulsar-postgres-jdbc-sink", - "className": "org.apache.pulsar.io.jdbc.PostgresJdbcAutoSchemaSink", - "inputSpecs": { - "pulsar-postgres-jdbc-sink-topic": { - "isRegexPattern": false - } - }, - "configs": { - "password": "password", - "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink", - "userName": "postgres", - "tableName": "pulsar_postgres_jdbc_sink" - }, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "autoAck": true -} - -``` - -**Parameter description** - -Name| Description -|---|--- -`tenant` | Tenant name -`namespace` | Namespace name -`sink` | Sink name - -For more information, see [`getSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#getSink-java.lang.String-java.lang.String-java.lang.String-). - - - - -```` - -### `list` - -You can get the list of all running connectors using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Get the list of all running source connectors. - -````mdx-code-block - - - - -Use the `list` subcommand. - -``` - -$ pulsar-admin sources list options - -``` - -For more information, see [here](io-cli.md#list). - - - - -Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sources/:tenant/:namespace|operation/listSources?version=@pulsar:version_number@} - - - - -```java - -List listSources(String tenant, - String namespace) - throws PulsarAdminException - -``` - -**Response example** - -```java - -["f1", "f2", "f3"] - -``` - -**Exception** - -Exception name | Description -|---|--- -`PulsarAdminException.NotAuthorizedException` | You don't have the admin permission -`PulsarAdminException` | Unexpected error - -For more information, see [`listSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#listSources-java.lang.String-java.lang.String-). - - - - -```` - -#### Sink - -Get the list of all running sink connectors. - -````mdx-code-block - - - - -Use the `list` subcommand. - -``` - -$ pulsar-admin sinks list options - -``` - -For more information, see [here](io-cli.md#list-1). 
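
    For example, a hypothetical invocation that lists the sinks running in the `public/default` namespace might look like this:

    ```shell

    $ pulsar-admin sinks list \
      --tenant public \
      --namespace default

    ```

    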
- - - - -Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sinks/:tenant/:namespace|operation/listSinks?version=@pulsar:version_number@} - - - - -```java - -List listSinks(String tenant, - String namespace) - throws PulsarAdminException - -``` - -**Response example** - -```java - -["f1", "f2", "f3"] - -``` - -**Exception** - -Exception name | Description -|---|--- -`PulsarAdminException.NotAuthorizedException` | You don't have the admin permission -`PulsarAdminException` | Unexpected error - -For more information, see [`listSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#listSinks-java.lang.String-java.lang.String-). - - - - -```` - -### `status` - -You can get the current status of a connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Get the current status of a source connector. - -````mdx-code-block - - - - -Use the `status` subcommand. - -``` - -$ pulsar-admin sources status options - -``` - -For more information, see [here](io-cli.md#status). - - - - -* Get the current status of **all** source connectors. - - Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sources/:tenant/:namespace/:sourceName/status|operation/getSourceStatus?version=@pulsar:version_number@} - -* Gets the current status of a **specified** source connector. - - Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sources/:tenant/:namespace/:sourceName/:instanceId/status|operation/getSourceStatus?version=@pulsar:version_number@} - - - - -* Get the current status of **all** source connectors. - - ```java - - SourceStatus getSourceStatus(String tenant, - String namespace, - String source) - throws PulsarAdminException - - ``` - - **Parameter** - - Parameter| Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `sink` | Source name - - **Exception** - - Name | Description - |---|--- - `PulsarAdminException` | Unexpected error - - For more information, see [`getSourceStatus`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#getSource-java.lang.String-java.lang.String-java.lang.String-). - -* Gets the current status of a **specified** source connector. - - ```java - - SourceStatus.SourceInstanceStatus.SourceInstanceStatusData getSourceStatus(String tenant, - String namespace, - String source, - int id) - throws PulsarAdminException - - ``` - - **Parameter** - - Parameter| Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `sink` | Source name - `id` | Source instanceID - - **Exception** - - Exception name | Description - |---|--- - `PulsarAdminException` | Unexpected error - - For more information, see [`getSourceStatus`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#getSourceStatus-java.lang.String-java.lang.String-java.lang.String-int-). - - - - -```` - -#### Sink - -Get the current status of a Pulsar sink connector. - -````mdx-code-block - - - - -Use the `status` subcommand. - -``` - -$ pulsar-admin sinks status options - -``` - -For more information, see [here](io-cli.md#status-1). - - - - -* Get the current status of **all** sink connectors. - - Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sinks/:tenant/:namespace/:sinkName/status|operation/getSinkStatus?version=@pulsar:version_number@} - -* Gets the current status of a **specified** sink connector. 
- - Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sinks/:tenant/:namespace/:sourceName/:instanceId/status|operation/getSinkInstanceStatus?version=@pulsar:version_number@} - - - - -* Get the current status of **all** sink connectors. - - ```java - - SinkStatus getSinkStatus(String tenant, - String namespace, - String sink) - throws PulsarAdminException - - ``` - - **Parameter** - - Parameter| Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `sink` | Source name - - **Exception** - - Exception name | Description - |---|--- - `PulsarAdminException` | Unexpected error - - For more information, see [`getSinkStatus`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#getSinkStatus-java.lang.String-java.lang.String-java.lang.String-). - -* Gets the current status of a **specified** source connector. - - ```java - - SinkStatus.SinkInstanceStatus.SinkInstanceStatusData getSinkStatus(String tenant, - String namespace, - String sink, - int id) - throws PulsarAdminException - - ``` - - **Parameter** - - Parameter| Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `sink` | Source name - `id` | Sink instanceID - - **Exception** - - Exception name | Description - |---|--- - `PulsarAdminException` | Unexpected error - - For more information, see [`getSinkStatusWithInstanceID`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#getSinkStatus-java.lang.String-java.lang.String-java.lang.String-int-). - - - - -```` - -## Update a connector - -### `update` - -You can update a running connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Update a running Pulsar source connector. - -````mdx-code-block - - - - -Use the `update` subcommand. - -``` - -$ pulsar-admin sources update options - -``` - -For more information, see [here](io-cli.md#update). - - - - -Send a `PUT` request to this endpoint: {@inject: endpoint|PUT|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/updateSource?version=@pulsar:version_number@} - - - - -* Update a running source connector with a **local file**. - - ```java - - void updateSource(SourceConfig sourceConfig, - String fileName) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - |`sourceConfig` | The source configuration object - - **Exception** - - |Name|Description| - |---|--- - |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission - | `PulsarAdminException.NotFoundException` | Cluster doesn't exist - | `PulsarAdminException` | Unexpected error - - For more information, see [`updateSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#updateSource-SourceConfig-java.lang.String-). - -* Update a source connector using a **remote file** with a URL from which fun-pkg can be downloaded. - - ```java - - void updateSourceWithUrl(SourceConfig sourceConfig, - String pkgUrl) - throws PulsarAdminException - - ``` - - Supported URLs are `http` and `file`. 
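
    As an illustrative sketch (the admin URL, source name, and NAR path are assumptions), a common update flow fetches the current configuration, modifies it, and pushes it back together with the package URL:

    ```java

    import org.apache.pulsar.client.admin.PulsarAdmin;
    import org.apache.pulsar.common.io.SourceConfig;

    public class UpdateSourceParallelism {
        public static void main(String[] args) throws Exception {
            try (PulsarAdmin admin = PulsarAdmin.builder()
                    .serviceHttpUrl("http://localhost:8080") // assumed admin endpoint
                    .build()) {
                // Start from the source's current configuration.
                SourceConfig config = admin.sources()
                        .getSource("public", "default", "my-source");
                config.setParallelism(2); // scale the source out to two instances
                // Re-submit the config along with the connector package URL.
                admin.sources().updateSourceWithUrl(config,
                        "file:///pulsar/connectors/my-connector.nar");
            }
        }
    }

    ```

    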
- - **Example** - - * HTTP: http://www.repo.com/fileName.jar - - * File: file:///dir/fileName.jar - - **Parameter** - - | Name | Description - |---|--- - | `sourceConfig` | The source configuration object - | `pkgUrl` | URL from which pkg can be downloaded - - **Exception** - - |Name|Description| - |---|--- - |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission - | `PulsarAdminException.NotFoundException` | Cluster doesn't exist - | `PulsarAdminException` | Unexpected error - -For more information, see [`createSourceWithUrl`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#updateSourceWithUrl-SourceConfig-java.lang.String-). - - - - -```` - -#### Sink - -Update a running Pulsar sink connector. - -````mdx-code-block - - - - -Use the `update` subcommand. - -``` - -$ pulsar-admin sinks update options - -``` - -For more information, see [here](io-cli.md#update-1). - - - - -Send a `PUT` request to this endpoint: {@inject: endpoint|PUT|/admin/v3/sinks/:tenant/:namespace/:sinkName|operation/updateSink?version=@pulsar:version_number@} - - - - -* Update a running sink connector with a **local file**. - - ```java - - void updateSink(SinkConfig sinkConfig, - String fileName) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - |`sinkConfig` | The sink configuration object - - **Exception** - - |Name|Description| - |---|--- - |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission - | `PulsarAdminException.NotFoundException` | Cluster doesn't exist - | `PulsarAdminException` | Unexpected error - - For more information, see [`updateSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#updateSink-SinkConfig-java.lang.String-). - -* Update a sink connector using a **remote file** with a URL from which fun-pkg can be downloaded. - - ```java - - void updateSinkWithUrl(SinkConfig sinkConfig, - String pkgUrl) - throws PulsarAdminException - - ``` - - Supported URLs are `http` and `file`. - - **Example** - - * HTTP: http://www.repo.com/fileName.jar - - * File: file:///dir/fileName.jar - - **Parameter** - - | Name | Description - |---|--- - | `sinkConfig` | The sink configuration object - | `pkgUrl` | URL from which pkg can be downloaded - - **Exception** - - |Name|Description| - |---|--- - |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission - |`PulsarAdminException.NotFoundException` | Cluster doesn't exist - |`PulsarAdminException` | Unexpected error - -For more information, see [`updateSinkWithUrl`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#updateSinkWithUrl-SinkConfig-java.lang.String-). - - - - -```` - -## Stop a connector - -### `stop` - -You can stop a connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Stop a source connector. - -````mdx-code-block - - - - -Use the `stop` subcommand. - -``` - -$ pulsar-admin sources stop options - -``` - -For more information, see [here](io-cli.md#stop). - - - - -* Stop **all** source connectors. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/stopSource?version=@pulsar:version_number@} - -* Stop a **specified** source connector. 
    Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/:instanceId/stop|operation/stopSource?version=@pulsar:version_number@}

    * Stop **all** source connectors.

      ```java

      void stopSource(String tenant,
                      String namespace,
                      String source)
               throws PulsarAdminException

      ```

      **Parameter**

      | Name | Description
      |---|---
      `tenant` | Tenant name
      `namespace` | Namespace name
      `source` | Source name

      **Exception**

      |Name|Description|
      |---|---
      | `PulsarAdminException` | Unexpected error

      For more information, see [`stopSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#stopSource-java.lang.String-java.lang.String-java.lang.String-).

    * Stop a **specified** source connector.

      ```java

      void stopSource(String tenant,
                      String namespace,
                      String source,
                      int instanceId)
               throws PulsarAdminException

      ```

      **Parameter**

      | Name | Description
      |---|---
      `tenant` | Tenant name
      `namespace` | Namespace name
      `source` | Source name
      `instanceId` | Source instanceID

      **Exception**

      |Name|Description|
      |---|---
      | `PulsarAdminException` | Unexpected error

      For more information, see [`stopSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#stopSource-java.lang.String-java.lang.String-java.lang.String-int-).

    ````

    #### Sink

    Stop a sink connector.

    ````mdx-code-block

    Use the `stop` subcommand.

    ```

    $ pulsar-admin sinks stop options

    ```

    For more information, see [here](io-cli.md#stop-1).

    * Stop **all** sink connectors.

      Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sinkName/stop|operation/stopSink?version=@pulsar:version_number@}

    * Stop a **specified** sink connector.

      Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sinkName/:instanceId/stop|operation/stopSink?version=@pulsar:version_number@}

    * Stop **all** sink connectors.

      ```java

      void stopSink(String tenant,
                    String namespace,
                    String sink)
             throws PulsarAdminException

      ```

      **Parameter**

      | Name | Description
      |---|---
      `tenant` | Tenant name
      `namespace` | Namespace name
      `sink` | Sink name

      **Exception**

      |Name|Description|
      |---|---
      | `PulsarAdminException` | Unexpected error

      For more information, see [`stopSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#stopSink-java.lang.String-java.lang.String-java.lang.String-).

    * Stop a **specified** sink connector.

      ```java

      void stopSink(String tenant,
                    String namespace,
                    String sink,
                    int instanceId)
             throws PulsarAdminException

      ```

      **Parameter**

      | Name | Description
      |---|---
      `tenant` | Tenant name
      `namespace` | Namespace name
      `sink` | Sink name
      `instanceId` | Sink instanceID

      **Exception**

      |Name|Description|
      |---|---
      | `PulsarAdminException` | Unexpected error

      For more information, see [`stopSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#stopSink-java.lang.String-java.lang.String-java.lang.String-int-).

    ````

    ## Restart a connector

    ### `restart`

    You can restart a connector using **Admin CLI**, **REST API** or **JAVA admin API**.

    #### Source

    Restart a source connector.

    ````mdx-code-block

    Use the `restart` subcommand.
    
- -``` - -$ pulsar-admin sources restart options - -``` - -For more information, see [here](io-cli.md#restart). - - - - -* Restart **all** source connectors. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/restart|operation/restartSource?version=@pulsar:version_number@} - -* Restart a **specified** source connector. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/:instanceId/restart|operation/restartSource?version=@pulsar:version_number@} - - - - -* Restart **all** source connectors. - - ```java - - void restartSource(String tenant, - String namespace, - String source) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `source` | Source name - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`restartSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#restartSource-java.lang.String-java.lang.String-java.lang.String-). - -* Restart a **specified** source connector. - - ```java - - void restartSource(String tenant, - String namespace, - String source, - int instanceId) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `source` | Source name - `instanceId` | Source instanceID - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`restartSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#restartSource-java.lang.String-java.lang.String-java.lang.String-int-). - - - - -```` - -#### Sink - -Restart a sink connector. - -````mdx-code-block - - - - -Use the `restart` subcommand. - -``` - -$ pulsar-admin sinks restart options - -``` - -For more information, see [here](io-cli.md#restart-1). - - - - -* Restart **all** sink connectors. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sinkName/restart|operation/restartSource?version=@pulsar:version_number@} - -* Restart a **specified** sink connector. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sinkName/:instanceId/restart|operation/restartSource?version=@pulsar:version_number@} - - - - -* Restart all Pulsar sink connectors. - - ```java - - void restartSink(String tenant, - String namespace, - String sink) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `sink` | Sink name - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`restartSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#restartSink-java.lang.String-java.lang.String-java.lang.String-). - -* Restart a **specified** sink connector. 
      ```java

      void restartSink(String tenant,
                       String namespace,
                       String sink,
                       int instanceId)
                throws PulsarAdminException

      ```

      **Parameter**

      | Name | Description
      |---|---
      `tenant` | Tenant name
      `namespace` | Namespace name
      `sink` | Sink name
      `instanceId` | Sink instanceID

      **Exception**

      |Name|Description|
      |---|---
      | `PulsarAdminException` | Unexpected error

      For more information, see [`restartSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#restartSink-java.lang.String-java.lang.String-java.lang.String-int-).

    ````

    ## Delete a connector

    ### `delete`

    You can delete a connector using **Admin CLI**, **REST API** or **JAVA admin API**.

    #### Source

    Delete a source connector.

    ````mdx-code-block

    Use the `delete` subcommand.

    ```

    $ pulsar-admin sources delete options

    ```

    For more information, see [here](io-cli.md#delete).

    Delete a Pulsar source connector.

    Send a `DELETE` request to this endpoint: {@inject: endpoint|DELETE|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/deregisterSource?version=@pulsar:version_number@}

    Delete a source connector.

    ```java

    void deleteSource(String tenant,
                      String namespace,
                      String source)
               throws PulsarAdminException

    ```

    **Parameter**

    | Name | Description
    |---|---
    `tenant` | Tenant name
    `namespace` | Namespace name
    `source` | Source name

    **Exception**

    |Name|Description|
    |---|---
    |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission
    | `PulsarAdminException.NotFoundException` | Source doesn't exist
    | `PulsarAdminException.PreconditionFailedException` | Request precondition isn't met
    | `PulsarAdminException` | Unexpected error

    For more information, see [`deleteSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#deleteSource-java.lang.String-java.lang.String-java.lang.String-).

    ````

    #### Sink

    Delete a sink connector.

    ````mdx-code-block

    Use the `delete` subcommand.

    ```

    $ pulsar-admin sinks delete options

    ```

    For more information, see [here](io-cli.md#delete-1).

    Delete a sink connector.

    Send a `DELETE` request to this endpoint: {@inject: endpoint|DELETE|/admin/v3/sinks/:tenant/:namespace/:sinkName|operation/deregisterSink?version=@pulsar:version_number@}

    Delete a Pulsar sink connector.

    ```java

    void deleteSink(String tenant,
                    String namespace,
                    String sink)
             throws PulsarAdminException

    ```

    **Parameter**

    | Name | Description
    |---|---
    `tenant` | Tenant name
    `namespace` | Namespace name
    `sink` | Sink name

    **Exception**

    |Name|Description|
    |---|---
    |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission
    | `PulsarAdminException.NotFoundException` | Sink doesn't exist
    | `PulsarAdminException.PreconditionFailedException` | Request precondition isn't met
    | `PulsarAdminException` | Unexpected error

    For more information, see [`deleteSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#deleteSink-java.lang.String-java.lang.String-java.lang.String-).
    
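
    As a minimal sketch (the admin URL and sink name below are assumptions), deleting a sink with the Java admin client might look like this:

    ```java

    import org.apache.pulsar.client.admin.PulsarAdmin;

    public class DeleteSinkExample {
        public static void main(String[] args) throws Exception {
            try (PulsarAdmin admin = PulsarAdmin.builder()
                    .serviceHttpUrl("http://localhost:8080") // assumed admin endpoint
                    .build()) {
                // Remove the sink registered as public/default/pulsar-postgres-jdbc-sink.
                admin.sinks().deleteSink("public", "default", "pulsar-postgres-jdbc-sink");
            }
        }
    }

    ```

    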
- - - - -```` diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/kubernetes-helm.md b/site2/website/versioned_docs/version-2.8.0-deprecated/kubernetes-helm.md deleted file mode 100644 index ea92a0968cd7d0..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/kubernetes-helm.md +++ /dev/null @@ -1,441 +0,0 @@ ---- -id: kubernetes-helm -title: Get started in Kubernetes -sidebar_label: "Run Pulsar in Kubernetes" -original_id: kubernetes-helm ---- - -This section guides you through every step of installing and running Apache Pulsar with Helm on Kubernetes quickly, including the following sections: - -- Install the Apache Pulsar on Kubernetes using Helm -- Start and stop Apache Pulsar -- Create topics using `pulsar-admin` -- Produce and consume messages using Pulsar clients -- Monitor Apache Pulsar status with Prometheus and Grafana - -For deploying a Pulsar cluster for production usage, read the documentation on [how to configure and install a Pulsar Helm chart](helm-deploy.md). - -## Prerequisite - -- Kubernetes server 1.14.0+ -- kubectl 1.14.0+ -- Helm 3.0+ - -:::tip - -For the following steps, step 2 and step 3 are for **developers** and step 4 and step 5 are for **administrators**. - -::: - -## Step 0: Prepare a Kubernetes cluster - -Before installing a Pulsar Helm chart, you have to create a Kubernetes cluster. You can follow [the instructions](helm-prepare.md) to prepare a Kubernetes cluster. - -We use [Minikube](https://minikube.sigs.k8s.io/docs/start/) in this quick start guide. To prepare a Kubernetes cluster, follow these steps: - -1. Create a Kubernetes cluster on Minikube. - - ```bash - - minikube start --memory=8192 --cpus=4 --kubernetes-version= - - ``` - - The `` can be any [Kubernetes version supported by your Minikube installation](https://minikube.sigs.k8s.io/docs/reference/configuration/kubernetes/), such as `v1.16.1`. - -2. Set `kubectl` to use Minikube. - - ```bash - - kubectl config use-context minikube - - ``` - -3. To use the [Kubernetes Dashboard](https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/) with the local Kubernetes cluster on Minikube, enter the command below: - - ```bash - - minikube dashboard - - ``` - - The command automatically triggers opening a webpage in your browser. - -## Step 1: Install Pulsar Helm chart - -1. Add Pulsar charts repo. - - ```bash - - helm repo add apache https://pulsar.apache.org/charts - - ``` - - ```bash - - helm repo update - - ``` - -2. Clone the Pulsar Helm chart repository. - - ```bash - - git clone https://github.com/apache/pulsar-helm-chart - cd pulsar-helm-chart - - ``` - -3. Run the script `prepare_helm_release.sh` to create secrets required for installing the Apache Pulsar Helm chart. The username `pulsar` and password `pulsar` are used for logging into the Grafana dashboard and Pulsar Manager. - - ```bash - - ./scripts/pulsar/prepare_helm_release.sh \ - -n pulsar \ - -k pulsar-mini \ - -c - - ``` - -4. Use the Pulsar Helm chart to install a Pulsar cluster to Kubernetes. - - :::note - - You need to specify `--set initialize=true` when installing Pulsar the first time. This command installs and starts Apache Pulsar. - - ::: - - ```bash - - helm install \ - --values examples/values-minikube.yaml \ - --set initialize=true \ - --namespace pulsar \ - pulsar-mini apache/pulsar - - ``` - -5. Check the status of all pods. 
- - ```bash - - kubectl get pods -n pulsar - - ``` - - If all pods start up successfully, you can see that the `STATUS` is changed to `Running` or `Completed`. - - **Output** - - ```bash - - NAME READY STATUS RESTARTS AGE - pulsar-mini-bookie-0 1/1 Running 0 9m27s - pulsar-mini-bookie-init-5gphs 0/1 Completed 0 9m27s - pulsar-mini-broker-0 1/1 Running 0 9m27s - pulsar-mini-grafana-6b7bcc64c7-4tkxd 1/1 Running 0 9m27s - pulsar-mini-prometheus-5fcf5dd84c-w8mgz 1/1 Running 0 9m27s - pulsar-mini-proxy-0 1/1 Running 0 9m27s - pulsar-mini-pulsar-init-t7cqt 0/1 Completed 0 9m27s - pulsar-mini-pulsar-manager-9bcbb4d9f-htpcs 1/1 Running 0 9m27s - pulsar-mini-toolset-0 1/1 Running 0 9m27s - pulsar-mini-zookeeper-0 1/1 Running 0 9m27s - - ``` - -6. Check the status of all services in the namespace `pulsar`. - - ```bash - - kubectl get services -n pulsar - - ``` - - **Output** - - ```bash - - NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - pulsar-mini-bookie ClusterIP None 3181/TCP,8000/TCP 11m - pulsar-mini-broker ClusterIP None 8080/TCP,6650/TCP 11m - pulsar-mini-grafana LoadBalancer 10.106.141.246 3000:31905/TCP 11m - pulsar-mini-prometheus ClusterIP None 9090/TCP 11m - pulsar-mini-proxy LoadBalancer 10.97.240.109 80:32305/TCP,6650:31816/TCP 11m - pulsar-mini-pulsar-manager LoadBalancer 10.103.192.175 9527:30190/TCP 11m - pulsar-mini-toolset ClusterIP None 11m - pulsar-mini-zookeeper ClusterIP None 2888/TCP,3888/TCP,2181/TCP 11m - - ``` - -## Step 2: Use pulsar-admin to create Pulsar tenants/namespaces/topics - -`pulsar-admin` is the CLI (command-Line Interface) tool for Pulsar. In this step, you can use `pulsar-admin` to create resources, including tenants, namespaces, and topics. - -1. Enter the `toolset` container. - - ```bash - - kubectl exec -it -n pulsar pulsar-mini-toolset-0 -- /bin/bash - - ``` - -2. In the `toolset` container, create a tenant named `apache`. - - ```bash - - bin/pulsar-admin tenants create apache - - ``` - - Then you can list the tenants to see if the tenant is created successfully. - - ```bash - - bin/pulsar-admin tenants list - - ``` - - You should see a similar output as below. The tenant `apache` has been successfully created. - - ```bash - - "apache" - "public" - "pulsar" - - ``` - -3. In the `toolset` container, create a namespace named `pulsar` in the tenant `apache`. - - ```bash - - bin/pulsar-admin namespaces create apache/pulsar - - ``` - - Then you can list the namespaces of tenant `apache` to see if the namespace is created successfully. - - ```bash - - bin/pulsar-admin namespaces list apache - - ``` - - You should see a similar output as below. The namespace `apache/pulsar` has been successfully created. - - ```bash - - "apache/pulsar" - - ``` - -4. In the `toolset` container, create a topic `test-topic` with `4` partitions in the namespace `apache/pulsar`. - - ```bash - - bin/pulsar-admin topics create-partitioned-topic apache/pulsar/test-topic -p 4 - - ``` - -5. In the `toolset` container, list all the partitioned topics in the namespace `apache/pulsar`. - - ```bash - - bin/pulsar-admin topics list-partitioned-topics apache/pulsar - - ``` - - Then you can see all the partitioned topics in the namespace `apache/pulsar`. - - ```bash - - "persistent://apache/pulsar/test-topic" - - ``` - -## Step 3: Use Pulsar client to produce and consume messages - -You can use the Pulsar client to create producers and consumers to produce and consume messages. - -By default, the Pulsar Helm chart exposes the Pulsar cluster through a Kubernetes `LoadBalancer`. 
In Minikube, you can use the following command to check the proxy service. - -```bash - -kubectl get services -n pulsar | grep pulsar-mini-proxy - -``` - -You will see a similar output as below. - -```bash - -pulsar-mini-proxy LoadBalancer 10.97.240.109 80:32305/TCP,6650:31816/TCP 28m - -``` - -This output tells what are the node ports that Pulsar cluster's binary port and HTTP port are mapped to. The port after `80:` is the HTTP port while the port after `6650:` is the binary port. - -Then you can find the IP address and exposed ports of your Minikube server by running the following command. - -```bash - -minikube service pulsar-mini-proxy -n pulsar - -``` - -**Output** - -```bash - -|-----------|-------------------|-------------|-------------------------| -| NAMESPACE | NAME | TARGET PORT | URL | -|-----------|-------------------|-------------|-------------------------| -| pulsar | pulsar-mini-proxy | http/80 | http://172.17.0.4:32305 | -| | | pulsar/6650 | http://172.17.0.4:31816 | -|-----------|-------------------|-------------|-------------------------| -🏃 Starting tunnel for service pulsar-mini-proxy. -|-----------|-------------------|-------------|------------------------| -| NAMESPACE | NAME | TARGET PORT | URL | -|-----------|-------------------|-------------|------------------------| -| pulsar | pulsar-mini-proxy | | http://127.0.0.1:61853 | -| | | | http://127.0.0.1:61854 | -|-----------|-------------------|-------------|------------------------| - -``` - -At this point, you can get the service URLs to connect to your Pulsar client. Here are URL examples: - -``` - -webServiceUrl=http://127.0.0.1:61853/ -brokerServiceUrl=pulsar://127.0.0.1:61854/ - -``` - -Then you can proceed with the following steps: - -1. Download the Apache Pulsar tarball from the [downloads page](https://pulsar.apache.org/download/). - -2. Decompress the tarball based on your download file. - - ```bash - - tar -xf .tar.gz - - ``` - -3. Expose `PULSAR_HOME`. - - (1) Enter the directory of the decompressed download file. - - (2) Expose `PULSAR_HOME` as the environment variable. - - ```bash - - export PULSAR_HOME=$(pwd) - - ``` - -4. Configure the Pulsar client. - - In the `${PULSAR_HOME}/conf/client.conf` file, replace `webServiceUrl` and `brokerServiceUrl` with the service URLs you get from the above steps. - -5. Create a subscription to consume messages from `apache/pulsar/test-topic`. - - ```bash - - bin/pulsar-client consume -s sub apache/pulsar/test-topic -n 0 - - ``` - -6. Open a new terminal. In the new terminal, create a producer and send 10 messages to the `test-topic` topic. - - ```bash - - bin/pulsar-client produce apache/pulsar/test-topic -m "---------hello apache pulsar-------" -n 10 - - ``` - -7. Verify the results. - - - From the producer side - - **Output** - - The messages have been produced successfully. - - ```bash - - 18:15:15.489 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 10 messages successfully produced - - ``` - - - From the consumer side - - **Output** - - At the same time, you can receive the messages as below. 
- - ```bash - - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - - ``` - -## Step 4: Use Pulsar Manager to manage the cluster - -[Pulsar Manager](administration-pulsar-manager.md) is a web-based GUI management tool for managing and monitoring Pulsar. - -1. By default, the `Pulsar Manager` is exposed as a separate `LoadBalancer`. You can open the Pulsar Manager UI using the following command: - - ```bash - - minikube service -n pulsar pulsar-mini-pulsar-manager - - ``` - -2. The Pulsar Manager UI will be open in your browser. You can use the username `pulsar` and password `pulsar` to log into Pulsar Manager. - -3. In Pulsar Manager UI, you can create an environment. - - - Click `New Environment` button in the top-left corner. - - Type `pulsar-mini` for the field `Environment Name` in the popup window. - - Type `http://pulsar-mini-broker:8080` for the field `Service URL` in the popup window. - - Click `Confirm` button in the popup window. - -4. After successfully creating an environment, you are redirected to the `tenants` page of that environment. Then you can create `tenants`, `namespaces` and `topics` using the Pulsar Manager. - -## Step 5: Use Prometheus and Grafana to monitor cluster - -Grafana is an open-source visualization tool, which can be used for visualizing time series data into dashboards. - -1. By default, the Grafana is exposed as a separate `LoadBalancer`. You can open the Grafana UI using the following command: - - ```bash - - minikube service pulsar-mini-grafana -n pulsar - - ``` - -2. The Grafana UI is open in your browser. You can use the username `pulsar` and password `pulsar` to log into the Grafana Dashboard. - -3. You can view dashboards for different components of a Pulsar cluster. diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/performance-pulsar-perf.md b/site2/website/versioned_docs/version-2.8.0-deprecated/performance-pulsar-perf.md deleted file mode 100644 index 7b7f312bbb3ca2..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/performance-pulsar-perf.md +++ /dev/null @@ -1,227 +0,0 @@ ---- -id: performance-pulsar-perf -title: Pulsar Perf -sidebar_label: "Pulsar Perf" -original_id: performance-pulsar-perf ---- - -The Pulsar Perf is a built-in performance test tool for Apache Pulsar. You can use the Pulsar Perf to test message writing or reading performance. For detailed information about performance tuning, see [here](https://streamnative.io/en/blog/tech/2021-01-14-pulsar-architecture-performance-tuning). - -## Produce messages - -This example shows how the Pulsar Perf produces messages with default options. For all configuration options available for the `pulsar-perf produce` command, see [configuration options](#configuration-options-for-pulsar-perf-produce). - -``` - -bin/pulsar-perf produce my-topic - -``` - -After the command is executed, the test data is continuously output on the Console. 
- -**Output** - -``` - -19:53:31.459 [pulsar-perf-producer-exec-1-1] INFO org.apache.pulsar.testclient.PerformanceProducer - Created 1 producers -19:53:31.482 [pulsar-timer-5-1] WARN com.scurrilous.circe.checksum.Crc32cIntChecksum - Failed to load Circe JNI library. Falling back to Java based CRC32c provider -19:53:40.861 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 93.7 msg/s --- 0.7 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.575 ms - med: 3.460 - 95pct: 4.790 - 99pct: 5.308 - 99.9pct: 5.834 - 99.99pct: 6.609 - Max: 6.609 -19:53:50.909 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.437 ms - med: 3.328 - 95pct: 4.656 - 99pct: 5.071 - 99.9pct: 5.519 - 99.99pct: 5.588 - Max: 5.588 -19:54:00.926 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.376 ms - med: 3.276 - 95pct: 4.520 - 99pct: 4.939 - 99.9pct: 5.440 - 99.99pct: 5.490 - Max: 5.490 -19:54:10.940 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.298 ms - med: 3.220 - 95pct: 4.474 - 99pct: 4.926 - 99.9pct: 5.645 - 99.99pct: 5.654 - Max: 5.654 -19:54:20.956 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.1 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.308 ms - med: 3.199 - 95pct: 4.532 - 99pct: 4.871 - 99.9pct: 5.291 - 99.99pct: 5.323 - Max: 5.323 -19:54:30.972 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.249 ms - med: 3.144 - 95pct: 4.437 - 99pct: 4.970 - 99.9pct: 5.329 - 99.99pct: 5.414 - Max: 5.414 -19:54:40.987 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.435 ms - med: 3.361 - 95pct: 4.772 - 99pct: 5.150 - 99.9pct: 5.373 - 99.99pct: 5.837 - Max: 5.837 -^C19:54:44.325 [Thread-1] INFO org.apache.pulsar.testclient.PerformanceProducer - Aggregated throughput stats --- 7286 records sent --- 99.140 msg/s --- 0.775 Mbit/s -19:54:44.336 [Thread-1] INFO org.apache.pulsar.testclient.PerformanceProducer - Aggregated latency stats --- Latency: mean: 3.383 ms - med: 3.293 - 95pct: 4.610 - 99pct: 5.059 - 99.9pct: 5.588 - 99.99pct: 5.837 - 99.999pct: 6.609 - Max: 6.609 - -``` - -From the above test data, you can get the throughput statistics and the write latency statistics. The aggregated statistics is printed when the Pulsar Perf is stopped. You can press **Ctrl**+**C** to stop the Pulsar Perf. After the Pulsar Perf is stopped, the [HdrHistogram](http://hdrhistogram.github.io/HdrHistogram/) formatted test result appears under your directory. The document looks like `perf-producer-1589370810837.hgrm`. You can also check the test result through [HdrHistogram Plotter](https://hdrhistogram.github.io/HdrHistogram/plotFiles.html). For details about how to check the test result through [HdrHistogram Plotter](https://hdrhistogram.github.io/HdrHistogram/plotFiles.html), see [HdrHistogram Plotter](#hdrhistogram-plotter). - -### Configuration options for `pulsar-perf produce` - -You can get all options by executing the `bin/pulsar-perf produce -h` command. Therefore, you can modify these options as required. 
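
    For example, a hypothetical benchmark run that publishes 512-byte messages at 5000 msg/s for 120 seconds might look like this (the topic name and option values are placeholders):

    ```

    bin/pulsar-perf produce my-topic \
      --rate 5000 \
      --size 512 \
      --test-duration 120

    ```

    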
- -The following table lists configuration options available for the `pulsar-perf produce` command. - -| Option | Description | Default value| -|----|----|----| -| access-mode | Set the producer access mode. Valid values are `Shared`, `Exclusive` and `WaitForExclusive`. | Shared | -| admin-url | Set the Pulsar admin URL. | N/A | -| auth-params | Set the authentication parameters, whose format is determined by the implementation of the `configure` method in the authentication plugin class, such as "key1:val1,key2:val2" or "{"key1":"val1","key2":"val2"}". | N/A | -| auth_plugin | Set the authentication plugin class name. | N/A | -| listener-name | Set the listener name for the broker. | N/A | -| batch-max-bytes | Set the maximum number of bytes for each batch. | 4194304 | -| batch-max-messages | Set the maximum number of messages for each batch. | 1000 | -| batch-time-window | Set a window for a batch of messages. | 1 ms | -| busy-wait | Enable or disable Busy-Wait on the Pulsar client. | false | -| chunking | Configure whether to split the message and publish in chunks if message size is larger than allowed max size. | false | -| compression | Compress the message payload. | N/A | -| conf-file | Set the configuration file. | N/A | -| delay | Mark messages with a given delay. | 0s | -| encryption-key-name | Set the name of the public key used to encrypt the payload. | N/A | -| encryption-key-value-file | Set the file which contains the public key used to encrypt the payload. | N/A | -| exit-on-failure | Configure whether to exit from the process on publish failure. | false | -| format-class | Set the custom formatter class name. | org.apache.pulsar.testclient.DefaultMessageFormatter | -| format-payload | Configure whether to format %i as a message index in the stream from producer and/or %t as the timestamp nanoseconds. | false | -| help | Configure the help message. | false | -| max-connections | Set the maximum number of TCP connections to a single broker. | 100 | -| max-outstanding | Set the maximum number of outstanding messages. | 1000 | -| max-outstanding-across-partitions | Set the maximum number of outstanding messages across partitions. | 50000 | -| message-key-generation-mode | Set the generation mode of message key. Valid options are `autoIncrement`, `random`. | N/A | -| num-io-threads | Set the number of threads to be used for handling connections to brokers. | 1 | -| num-messages | Set the number of messages to be published in total. If it is set to 0, it keeps publishing messages. | 0 | -| num-producers | Set the number of producers for each topic. | 1 | -| num-test-threads | Set the number of test threads. | 1 | -| num-topic | Set the number of topics. | 1 | -| partitions | Configure whether to create partitioned topics with the given number of partitions. | N/A | -| payload-delimiter | Set the delimiter used to split lines when using payload from a file. | \n | -| payload-file | Use the payload from an UTF-8 encoded text file and a payload is randomly selected when messages are published. | N/A | -| producer-name | Set the producer name. | N/A | -| rate | Set the publish rate of messages across topics. | 100 | -| send-timeout | Set the sendTimeout. | 0 | -| separator | Set the separator between the topic and topic number. | - | -| service-url | Set the Pulsar service URL. | | -| size | Set the message size. | 1024 bytes | -| stats-interval-seconds | Set the statistics interval. If it is set to 0, statistics is disabled. | 0 | -| test-duration | Set the test duration. 
If it is set to 0, it keeps publishing tests. | 0s | -| trust-cert-file | Set the path for the trusted TLS certificate file. | | | -| warmup-time | Set the warm-up time. | 1s | -| tls-allow-insecure | Set the allowed insecure TLS connection. | N/A | - -## Consume messages - -This example shows how the Pulsar Perf consumes messages with default options. - -``` - -bin/pulsar-perf consume my-topic - -``` - -After the command is executed, the test data is continuously output on the Console. - -**Output** - -``` - -20:35:37.071 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Start receiving from 1 consumers on 1 topics -20:35:41.150 [pulsar-client-io-1-9] WARN com.scurrilous.circe.checksum.Crc32cIntChecksum - Failed to load Circe JNI library. Falling back to Java based CRC32c provider -20:35:47.092 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 59.572 msg/s -- 0.465 Mbit/s --- Latency: mean: 11.298 ms - med: 10 - 95pct: 15 - 99pct: 98 - 99.9pct: 137 - 99.99pct: 152 - Max: 152 -20:35:57.104 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 99.958 msg/s -- 0.781 Mbit/s --- Latency: mean: 9.176 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 18 - Max: 18 -20:36:07.115 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 100.006 msg/s -- 0.781 Mbit/s --- Latency: mean: 9.316 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 17 - Max: 17 -20:36:17.125 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 100.085 msg/s -- 0.782 Mbit/s --- Latency: mean: 9.327 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 17 - Max: 17 -20:36:27.136 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 99.900 msg/s -- 0.780 Mbit/s --- Latency: mean: 9.404 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 17 - Max: 17 -20:36:37.147 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 99.985 msg/s -- 0.781 Mbit/s --- Latency: mean: 8.998 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 17 - Max: 17 -^C20:36:42.755 [Thread-1] INFO org.apache.pulsar.testclient.PerformanceConsumer - Aggregated throughput stats --- 6051 records received --- 92.125 msg/s --- 0.720 Mbit/s -20:36:42.759 [Thread-1] INFO org.apache.pulsar.testclient.PerformanceConsumer - Aggregated latency stats --- Latency: mean: 9.422 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 98 - 99.99pct: 137 - 99.999pct: 152 - Max: 152 - -``` - -From the output test data, you can get the throughput statistics and the end-to-end latency statistics. The aggregated statistics is printed after the Pulsar Perf is stopped. You can press **Ctrl**+**C** to stop the Pulsar Perf. - -### Configuration options for `pulsar-perf consume` - -You can get all options by executing the `bin/pulsar-perf consume -h` command. Therefore, you can modify these options as required. - -The following table lists configuration options available for the `pulsar-perf consume` command. - -| Option | Description | Default value | -|----|----|----| -| acks-delay-millis | Set the acknowledgment grouping delay in milliseconds. | 100 ms | -| auth-params | Set the authentication parameters, whose format is determined by the implementation of the `configure` method in the authentication plugin class, such as "key1:val1,key2:val2" or "{"key1":"val1","key2":"val2"}". | N/A | -| auth_plugin | Set the authentication plugin class name. 
| N/A | -| auto_ack_chunk_q_full | Configure whether to automatically ack for the oldest message in receiver queue if the queue is full. | false | -| listener-name | Set the listener name for the broker. | N/A | -| batch-index-ack | Enable or disable the batch index acknowledgment. | false | -| busy-wait | Enable or disable Busy-Wait on the Pulsar client. | false | -| conf-file | Set the configuration file. | N/A | -| encryption-key-name | Set the name of the public key used to encrypt the payload. | N/A | -| encryption-key-value-file | Set the file which contains the public key used to encrypt the payload. | N/A | -| help | Configure the help message. | false | -| expire_time_incomplete_chunked_messages | Set the expiration time for incomplete chunk messages (in milliseconds). | 0 | -| max-connections | Set the maximum number of TCP connections to a single broker. | 100 | -| max_chunked_msg | Set the max pending chunk messages. | 0 | -| num-consumers | Set the number of consumers for each topic. | 1 | -| num-io-threads |Set the number of threads to be used for handling connections to brokers. | 1 | -| num-subscriptions | Set the number of subscriptions (per topic). | 1 | -| num-topic | Set the number of topics. | 1 | -| pool-messages | Configure whether to use the pooled message. | true | -| rate | Simulate a slow message consumer (rate in msg/s). | 0.0 | -| receiver-queue-size | Set the size of the receiver queue. | 1000 | -| receiver-queue-size-across-partitions | Set the max total size of the receiver queue across partitions. | 50000 | -| replicated | Configure whether the subscription status should be replicated. | false | -| service-url | Set the Pulsar service URL. | | -| stats-interval-seconds | Set the statistics interval. If it is set to 0, statistics is disabled. | 0 | -| subscriber-name | Set the subscriber name prefix. | sub | -| subscription-position | Set the subscription position. Valid values are `Latest`, `Earliest`.| Latest | -| subscription-type | Set the subscription type.
    Available values: `Exclusive`, `Shared`, `Failover`, `Key_Shared`. | Exclusive |
    | test-duration | Set the test duration (in seconds). If the value is 0 or smaller than 0, it keeps consuming messages. | 0 |
    | tls-allow-insecure | Set the allowed insecure TLS connection. | N/A |
    | trust-cert-file | Set the path for the trusted TLS certificate file. | |

    ## Configurations

    By default, the Pulsar Perf uses `conf/client.conf` as the default configuration and uses `conf/log4j2.yaml` as the default Log4j configuration. If you want to connect to other Pulsar clusters, you can update the `brokerServiceUrl` in the client configuration.

    You can use the following commands to change the configuration file and the Log4j configuration file.

    ```

    export PULSAR_CLIENT_CONF=<path/to/client.conf>
    export PULSAR_LOG_CONF=<path/to/log4j2.yaml>

    ```

    In addition, you can use the following command to configure the JVM through environment variables:

    ```

    export PULSAR_EXTRA_OPTS='-Xms4g -Xmx4g -XX:MaxDirectMemorySize=4g'

    ```

    ## HdrHistogram Plotter

    The [HdrHistogram Plotter](https://hdrhistogram.github.io/HdrHistogram/plotFiles.html) is a visualization tool for checking Pulsar Perf test results, which makes the test results easier to observe.

    To check test results through the HdrHistogram Plotter, follow these steps:

    1. Clone the HdrHistogram repository from GitHub to your local machine.

       ```

       git clone https://github.com/HdrHistogram/HdrHistogram.git

       ```

    2. Switch to the HdrHistogram folder.

       ```

       cd HdrHistogram

       ```

    3. Install the HdrHistogram Plotter.

       ```

       mvn clean install -DskipTests

       ```

    4. Transform the file generated by the Pulsar Perf.

       ```

       ./HistogramLogProcessor -i <input histogram log> -o <output file>

       ```

    5. You will get two output files. Upload the output file with the filename extension of .hgrm to the [HdrHistogram Plotter](https://hdrhistogram.github.io/HdrHistogram/plotFiles.html).

    6. Check the test result through the Graphical User Interface of the HdrHistogram Plotter, as shown below.

       ![](/assets/perf-produce.png)

    diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/reference-cli-tools.md b/site2/website/versioned_docs/version-2.8.0-deprecated/reference-cli-tools.md
    deleted file mode 100644
    index 309009f635e295..00000000000000
    --- a/site2/website/versioned_docs/version-2.8.0-deprecated/reference-cli-tools.md
    +++ /dev/null
    @@ -1,958 +0,0 @@
    ---
    id: reference-cli-tools
    title: Pulsar command-line tools
    sidebar_label: "Pulsar CLI tools"
    original_id: reference-cli-tools
    ---

    Pulsar offers several command-line tools that you can use for managing Pulsar installations, performance testing, running command-line producers and consumers, and more.

    All Pulsar command-line tools can be run from the `bin` directory of your [installed Pulsar package](getting-started-standalone.md). The following tools are currently documented:

    * [`pulsar`](#pulsar)
    * [`pulsar-client`](#pulsar-client)
    * [`pulsar-daemon`](#pulsar-daemon)
    * [`pulsar-perf`](#pulsar-perf)
    * [`bookkeeper`](#bookkeeper)
    * [`broker-tool`](#broker-tool)

    > ### Getting help
    > You can get help for any CLI tool, command, or subcommand using the `--help` flag, or `-h` for short. Here's an example:

    > ```shell
    >
    > $ bin/pulsar broker --help
    >
    > ```


    ## `pulsar`

    The pulsar tool is used to start Pulsar components, such as bookies and ZooKeeper, in the foreground.

    These processes can also be started in the background, using nohup, via the pulsar-daemon tool, which has the same command interface as pulsar.
    
- -Usage: - -```bash - -$ pulsar command - -``` - -Commands: -* `bookie` -* `broker` -* `compact-topic` -* `discovery` -* `configuration-store` -* `initialize-cluster-metadata` -* `proxy` -* `standalone` -* `websocket` -* `zookeeper` -* `zookeeper-shell` - -Example: - -```bash - -$ PULSAR_BROKER_CONF=/path/to/broker.conf pulsar broker - -``` - -The table below lists the environment variables that you can use to configure the `pulsar` tool. - -|Variable|Description|Default| -|---|---|---| -|`PULSAR_LOG_CONF`|Log4j configuration file|`conf/log4j2.yaml`| -|`PULSAR_BROKER_CONF`|Configuration file for broker|`conf/broker.conf`| -|`PULSAR_BOOKKEEPER_CONF`|description: Configuration file for bookie|`conf/bookkeeper.conf`| -|`PULSAR_ZK_CONF`|Configuration file for zookeeper|`conf/zookeeper.conf`| -|`PULSAR_CONFIGURATION_STORE_CONF`|Configuration file for the configuration store|`conf/global_zookeeper.conf`| -|`PULSAR_DISCOVERY_CONF`|Configuration file for discovery service|`conf/discovery.conf`| -|`PULSAR_WEBSOCKET_CONF`|Configuration file for websocket proxy|`conf/websocket.conf`| -|`PULSAR_STANDALONE_CONF`|Configuration file for standalone|`conf/standalone.conf`| -|`PULSAR_EXTRA_OPTS`|Extra options to be passed to the jvm|| -|`PULSAR_EXTRA_CLASSPATH`|Extra paths for Pulsar's classpath|| -|`PULSAR_PID_DIR`|Folder where the pulsar server PID file should be stored|| -|`PULSAR_STOP_TIMEOUT`|Wait time before forcefully killing the Bookie server instance if attempts to stop it are not successful|| - - - -### `bookie` - -Starts up a bookie server - -Usage: - -```bash - -$ pulsar bookie options - -``` - -Options - -|Option|Description|Default| -|---|---|---| -|`-readOnly`|Force start a read-only bookie server|false| -|`-withAutoRecovery`|Start auto-recover service bookie server|false| - - -Example - -```bash - -$ PULSAR_BOOKKEEPER_CONF=/path/to/bookkeeper.conf pulsar bookie \ - -readOnly \ - -withAutoRecovery - -``` - -### `broker` - -Starts up a Pulsar broker - -Usage - -```bash - -$ pulsar broker options - -``` - -Options - -|Option|Description|Default| -|---|---|---| -|`-bc` , `--bookie-conf`|Configuration file for BookKeeper|| -|`-rb` , `--run-bookie`|Run a BookKeeper bookie on the same host as the Pulsar broker|false| -|`-ra` , `--run-bookie-autorecovery`|Run a BookKeeper autorecovery daemon on the same host as the Pulsar broker|false| - -Example - -```bash - -$ PULSAR_BROKER_CONF=/path/to/broker.conf pulsar broker - -``` - -### `compact-topic` - -Run compaction against a Pulsar topic (in a new process) - -Usage - -```bash - -$ pulsar compact-topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-t` , `--topic`|The Pulsar topic that you would like to compact|| - -Example - -```bash - -$ pulsar compact-topic --topic topic-to-compact - -``` - -### `discovery` - -Run a discovery server - -Usage - -```bash - -$ pulsar discovery - -``` - -Example - -```bash - -$ PULSAR_DISCOVERY_CONF=/path/to/discovery.conf pulsar discovery - -``` - -### `configuration-store` - -Starts up the Pulsar configuration store - -Usage - -```bash - -$ pulsar configuration-store - -``` - -Example - -```bash - -$ PULSAR_CONFIGURATION_STORE_CONF=/path/to/configuration_store.conf pulsar configuration-store - -``` - -### `initialize-cluster-metadata` - -One-time cluster metadata initialization - -Usage - -```bash - -$ pulsar initialize-cluster-metadata options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-ub` , `--broker-service-url`|The broker service URL for the new cluster|| -|`-tb` 
, `--broker-service-url-tls`|The broker service URL for the new cluster with TLS encryption|| -|`-c` , `--cluster`|Cluster name|| -|`-cs` , `--configuration-store`|The configuration store quorum connection string|| -|`--existing-bk-metadata-service-uri`|The metadata service URI of the existing BookKeeper cluster that you want to use|| -|`-h` , `--help`|Cluster name|false| -|`--initial-num-stream-storage-containers`|The number of storage containers of BookKeeper stream storage|16| -|`--initial-num-transaction-coordinators`|The number of transaction coordinators assigned in a cluster|16| -|`-uw` , `--web-service-url`|The web service URL for the new cluster|| -|`-tw` , `--web-service-url-tls`|The web service URL for the new cluster with TLS encryption|| -|`-zk` , `--zookeeper`|The local ZooKeeper quorum connection string|| -|`--zookeeper-session-timeout-ms`|The local ZooKeeper session timeout. The time unit is in millisecond(ms)|30000| - - -### `proxy` - -Manages the Pulsar proxy - -Usage - -```bash - -$ pulsar proxy options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--configuration-store`|Configuration store connection string|| -|`-zk` , `--zookeeper-servers`|Local ZooKeeper connection string|| - -Example - -```bash - -$ PULSAR_PROXY_CONF=/path/to/proxy.conf pulsar proxy \ - --zookeeper-servers zk-0,zk-1,zk2 \ - --configuration-store zk-0,zk-1,zk-2 - -``` - -### `standalone` - -Run a broker service with local bookies and local ZooKeeper - -Usage - -```bash - -$ pulsar standalone options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-a` , `--advertised-address`|The standalone broker advertised address|| -|`--bookkeeper-dir`|Local bookies’ base data directory|data/standalone/bookkeeper| -|`--bookkeeper-port`|Local bookies’ base port|3181| -|`--no-broker`|Only start ZooKeeper and BookKeeper services, not the broker|false| -|`--num-bookies`|The number of local bookies|1| -|`--only-broker`|Only start the Pulsar broker service (not ZooKeeper or BookKeeper)|| -|`--wipe-data`|Clean up previous ZooKeeper/BookKeeper data|| -|`--zookeeper-dir`|Local ZooKeeper’s data directory|data/standalone/zookeeper| -|`--zookeeper-port` |Local ZooKeeper’s port|2181| - -Example - -```bash - -$ PULSAR_STANDALONE_CONF=/path/to/standalone.conf pulsar standalone - -``` - -### `websocket` - -Usage - -```bash - -$ pulsar websocket - -``` - -Example - -```bash - -$ PULSAR_WEBSOCKET_CONF=/path/to/websocket.conf pulsar websocket - -``` - -### `zookeeper` - -Starts up a ZooKeeper cluster - -Usage - -```bash - -$ pulsar zookeeper - -``` - -Example - -```bash - -$ PULSAR_ZK_CONF=/path/to/zookeeper.conf pulsar zookeeper - -``` - -### `zookeeper-shell` - -Connects to a running ZooKeeper cluster using the ZooKeeper shell - -Usage - -```bash - -$ pulsar zookeeper-shell options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-c`, `--conf`|Configuration file for ZooKeeper|| -|`-server`|Configuration zk address, eg: `127.0.0.1:2181`|| - - - -## `pulsar-client` - -The pulsar-client tool - -Usage - -```bash - -$ pulsar-client command - -``` - -Commands -* `produce` -* `consume` - - -Options - -|Flag|Description|Default| -|---|---|---| -|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class, for example "key1:val1,key2:val2" or "{\"key1\":\"val1\",\"key2\":\"val2\"}"|{"saslJaasClientSectionName":"PulsarClient", "serverType":"broker"}| -|`--auth-plugin`|Authentication plugin class 
name|org.apache.pulsar.client.impl.auth.AuthenticationSasl| -|`--listener-name`|Listener name for the broker|| -|`--proxy-protocol`|Proxy protocol to select type of routing at proxy|| -|`--proxy-url`|Proxy-server URL to which to connect|| -|`--url`|Broker URL to which to connect|pulsar://localhost:6650/
    ws://localhost:8080 | -| `-v`, `--version` | Get the version of the Pulsar client -|`-h`, `--help`|Show this help - - -### `produce` -Send a message or messages to a specific broker and topic - -Usage - -```bash - -$ pulsar-client produce topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-f`, `--files`|Comma-separated file paths to send; either -m or -f must be specified|[]| -|`-m`, `--messages`|Comma-separated string of messages to send; either -m or -f must be specified|[]| -|`-n`, `--num-produce`|The number of times to send the message(s); the count of messages/files * num-produce should be below 1000|1| -|`-r`, `--rate`|Rate (in messages per second) at which to produce; a value 0 means to produce messages as fast as possible|0.0| -|`-c`, `--chunking`|Split the message and publish in chunks if the message size is larger than the allowed max size|false| -|`-s`, `--separator`|Character to split messages string with.|","| -|`-k`, `--key`|Message key to add|key=value string, like k1=v1,k2=v2.| -|`-p`, `--properties`|Properties to add. If you want to add multiple properties, use the comma as the separator, e.g. `k1=v1,k2=v2`.| | -|`-ekn`, `--encryption-key-name`|The public key name to encrypt payload.| | -|`-ekv`, `--encryption-key-value`|The URI of public key to encrypt payload. For example, `file:///path/to/public.key` or `data:application/x-pem-file;base64,*****`.| | - - -### `consume` -Consume messages from a specific broker and topic - -Usage - -```bash - -$ pulsar-client consume topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--hex`|Display binary messages in hexadecimal format.|false| -|`-n`, `--num-messages`|Number of messages to consume, 0 means to consume forever.|1| -|`-r`, `--rate`|Rate (in messages per second) at which to consume; a value 0 means to consume messages as fast as possible|0.0| -|`--regex`|Indicate the topic name is a regex pattern|false| -|`-s`, `--subscription-name`|Subscription name|| -|`-t`, `--subscription-type`|The type of the subscription. Possible values: Exclusive, Shared, Failover, Key_Shared.|Exclusive| -|`-p`, `--subscription-position`|The position of the subscription. Possible values: Latest, Earliest.|Latest| -|`-m`, `--subscription-mode`|Subscription mode. Possible values: Durable, NonDurable.|Durable| -|`-q`, `--queue-size`|The size of consumer's receiver queue.|0| -|`-mc`, `--max_chunked_msg`|Max pending chunk messages.|0| -|`-ac`, `--auto_ack_chunk_q_full`|Auto ack for the oldest message in consumer's receiver queue if the queue full.|false| -|`--hide-content`|Do not print the message to the console.|false| -|`-st`, `--schema-type`|Set the schema type. Use `auto_consume` to dump AVRO and other structured data types. Possible values: bytes, auto_consume.|bytes| -|`-ekv`, `--encryption-key-value`|The URI of public key to encrypt payload. For example, `file:///path/to/public.key` or `data:application/x-pem-file;base64,*****`.| | -|`-pm`, `--pool-messages`|Use the pooled message.|true| - -## `pulsar-daemon` -A wrapper around the pulsar tool that’s used to start and stop processes, such as ZooKeeper, bookies, and Pulsar brokers, in the background using nohup. - -pulsar-daemon has a similar interface to the pulsar command but adds start and stop commands for various services. For a listing of those services, run pulsar-daemon to see the help output or see the documentation for the pulsar command. 
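-
-For example, a typical local workflow starts a standalone service in the background and later stops it (a minimal sketch; it assumes the distribution's `bin` directory is on your `PATH`):
-
-```bash
-
-$ pulsar-daemon start standalone
-$ pulsar-daemon stop standalone
-
-```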
- -Usage - -```bash - -$ pulsar-daemon command - -``` - -Commands -* `start` -* `stop` - - -### `start` -Start a service in the background using nohup. - -Usage - -```bash - -$ pulsar-daemon start service - -``` - -### `stop` -Stop a service that’s already been started using start. - -Usage - -```bash - -$ pulsar-daemon stop service options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|-force|Stop the service forcefully if not stopped by normal shutdown.|false| - - - -## `pulsar-perf` -A tool for performance testing a Pulsar broker. - -Usage - -```bash - -$ pulsar-perf command - -``` - -Commands -* `consume` -* `produce` -* `read` -* `websocket-producer` -* `managed-ledger` -* `monitor-brokers` -* `simulation-client` -* `simulation-controller` -* `help` - -Environment variables - -The table below lists the environment variables that you can use to configure the pulsar-perf tool. - -|Variable|Description|Default| -|---|---|---| -|`PULSAR_LOG_CONF`|Log4j configuration file|conf/log4j2.yaml| -|`PULSAR_CLIENT_CONF`|Configuration file for the client|conf/client.conf| -|`PULSAR_EXTRA_OPTS`|Extra options to be passed to the JVM|| -|`PULSAR_EXTRA_CLASSPATH`|Extra paths for Pulsar's classpath|| - - -### `consume` -Run a consumer - -Usage - -``` - -$ pulsar-perf consume options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.|| -|`--auth_plugin`|Authentication plugin class name|| -|`-ac`, `--auto_ack_chunk_q_full`|Auto ack for the oldest message in consumer's receiver queue if the queue full|false| -|`--listener-name`|Listener name for the broker|| -|`--acks-delay-millis`|Acknowledgements grouping delay in millis|100| -|`--batch-index-ack`|Enable or disable the batch index acknowledgment|false| -|`-bw`, `--busy-wait`|Enable or disable Busy-Wait on the Pulsar client|false| -|`-v`, `--encryption-key-value-file`|The file which contains the private key to decrypt payload|| -|`-h`, `--help`|Help message|false| -|`--conf-file`|Configuration file|| -|`-e`, `--expire_time_incomplete_chunked_messages`|The expiration time for incomplete chunk messages (in milliseconds)|0| -|`-c`, `--max-connections`|Max number of TCP connections to a single broker|100| -|`-mc`, `--max_chunked_msg`|Max pending chunk messages|0| -|`-n`, `--num-consumers`|Number of consumers (per topic)|1| -|`-ioThreads`, `--num-io-threads`|Set the number of threads to be used for handling connections to brokers|1| -|`-ns`, `--num-subscriptions`|Number of subscriptions (per topic)|1| -|`-t`, `--num-topics`|The number of topics|1| -|`-pm`, `--pool-messages`|Use the pooled message|true| -|`-r`, `--rate`|Simulate a slow message consumer (rate in msg/s)|0| -|`-q`, `--receiver-queue-size`|Size of the receiver queue|1000| -|`-p`, `--receiver-queue-size-across-partitions`|Max total size of the receiver queue across partitions|50000| -|`--replicated`|Whether the subscription status should be replicated|false| -|`-u`, `--service-url`|Pulsar service URL|| -|`-i`, `--stats-interval-seconds`|Statistics interval seconds. If 0, statistics will be disabled|0| -|`-s`, `--subscriber-name`|Subscriber name prefix|sub| -|`-ss`, `--subscriptions`|A list of subscriptions to consume on (e.g. sub1,sub2)|sub| -|`-st`, `--subscription-type`|Subscriber type. 
Possible values are Exclusive, Shared, Failover, Key_Shared.|Exclusive| -|`-sp`, `--subscription-position`|Subscriber position. Possible values are Latest, Earliest.|Latest| -|`-time`, `--test-duration`|Test duration (in seconds). If the value is 0 or smaller than 0, it keeps consuming messages|0| -|`--trust-cert-file`|Path for the trusted TLS certificate file|| -|`--tls-allow-insecure`|Allow insecure TLS connection|| - - -### `produce` -Run a producer - -Usage - -```bash - -$ pulsar-perf produce options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-am`, `--access-mode`|Producer access mode. Valid values are `Shared`, `Exclusive` and `WaitForExclusive`|Shared| -|`-au`, `--admin-url`|Pulsar admin URL|| -|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.|| -|`--auth_plugin`|Authentication plugin class name|| -|`--listener-name`|Listener name for the broker|| -|`-b`, `--batch-time-window`|Batch messages in a window of the specified number of milliseconds|1| -|`-bb`, `--batch-max-bytes`|Maximum number of bytes per batch|4194304| -|`-bm`, `--batch-max-messages`|Maximum number of messages per batch|1000| -|`-bw`, `--busy-wait`|Enable or disable Busy-Wait on the Pulsar client|false| -|`-ch`, `--chunking`|Split the message and publish in chunks if the message size is larger than allowed max size|false| -|`-d`, `--delay`|Mark messages with a given delay in seconds|0s| -|`-z`, `--compression`|Compress messages’ payload. Possible values are NONE, LZ4, ZLIB, ZSTD or SNAPPY.|| -|`--conf-file`|Configuration file|| -|`-k`, `--encryption-key-name`|The public key name to encrypt payload|| -|`-v`, `--encryption-key-value-file`|The file which contains the public key to encrypt payload|| -|`-ef`, `--exit-on-failure`|Exit from the process on publish failure|false| -|`-fc`, `--format-class`|Custom Formatter class name|org.apache.pulsar.testclient.DefaultMessageFormatter| -|`-fp`, `--format-payload`|Format %i as a message index in the stream from producer and/or %t as the timestamp nanoseconds|false| -|`-h`, `--help`|Help message|false| -|`-c`, `--max-connections`|Max number of TCP connections to a single broker|100| -|`-o`, `--max-outstanding`|Max number of outstanding messages|1000| -|`-p`, `--max-outstanding-across-partitions`|Max number of outstanding messages across partitions|50000| -|`-mk`, `--message-key-generation-mode`|The generation mode of message key. Valid options are `autoIncrement`, `random`|| -|`-ioThreads`, `--num-io-threads`|Set the number of threads to be used for handling connections to brokers|1| -|`-m`, `--num-messages`|Number of messages to publish in total. If the value is 0 or smaller than 0, it keeps publishing messages.|0| -|`-n`, `--num-producers`|The number of producers (per topic)|1| -|`-threads`, `--num-test-threads`|Number of test threads|1| -|`-t`, `--num-topic`|The number of topics|1| -|`-np`, `--partitions`|Create partitioned topics with the given number of partitions. 
Setting this value to 0 means not trying to create a topic|| -|`-f`, `--payload-file`|Use payload from an UTF-8 encoded text file and a payload will be randomly selected when publishing messages|| -|`-e`, `--payload-delimiter`|The delimiter used to split lines when using payload from a file|\n| -|`-pn`, `--producer-name`|Producer Name|| -|`-r`, `--rate`|Publish rate msg/s across topics|100| -|`--send-timeout`|Set the sendTimeout|0| -|`--separator`|Separator between the topic and topic number|-| -|`-u`, `--service-url`|Pulsar service URL|| -|`-s`, `--size`|Message size (in bytes)|1024| -|`-i`, `--stats-interval-seconds`|Statistics interval seconds. If 0, statistics will be disabled.|0| -|`-time`, `--test-duration`|Test duration (in seconds). If the value is 0 or smaller than 0, it keeps publishing messages.|0| -|`--trust-cert-file`|Path for the trusted TLS certificate file|| -|`--warmup-time`|Warm-up time in seconds|1| -|`--tls-allow-insecure`|Allow insecure TLS connection|| - - -### `read` -Run a topic reader - -Usage - -```bash - -$ pulsar-perf read options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.|| -|`--auth_plugin`|Authentication plugin class name|| -|`--listener-name`|Listener name for the broker|| -|`--conf-file`|Configuration file|| -|`-h`, `--help`|Help message|false| -|`-c`, `--max-connections`|Max number of TCP connections to a single broker|100| -|`-ioThreads`, `--num-io-threads`|Set the number of threads to be used for handling connections to brokers|1| -|`-t`, `--num-topics`|The number of topics|1| -|`-r`, `--rate`|Simulate a slow message reader (rate in msg/s)|0| -|`-q`, `--receiver-queue-size`|Size of the receiver queue|1000| -|`-u`, `--service-url`|Pulsar service URL|| -|`-m`, `--start-message-id`|Start message id. This can be either 'earliest', 'latest' or a specific message id by using 'lid:eid'|earliest| -|`-i`, `--stats-interval-seconds`|Statistics interval seconds. If 0, statistics will be disabled.|0| -|`-time`, `--test-duration`|Test duration (in seconds). If the value is 0 or smaller than 0, it keeps consuming messages.|0| -|`--trust-cert-file`|Path for the trusted TLS certificate file|| -|`--use-tls`|Use TLS encryption on the connection|false| -|`--tls-allow-insecure`|Allow insecure TLS connection|| - -### `websocket-producer` -Run a websocket producer - -Usage - -```bash - -$ pulsar-perf websocket-producer options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.|| -|`--auth_plugin`|Authentication plugin class name|| -|`--conf-file`|Configuration file|| -|`-h`, `--help`|Help message|false| -|`-m`, `--num-messages`|Number of messages to publish in total. If the value is 0 or smaller than 0, it keeps publishing messages|0| -|`-t`, `--num-topic`|The number of topics|1| -|`-f`, `--payload-file`|Use payload from a file instead of empty buffer|| -|`-u`, `--proxy-url`|Pulsar Proxy URL, e.g., "ws://localhost:8080/"|| -|`-r`, `--rate`|Publish rate msg/s across topics|100| -|`-s`, `--size`|Message size in byte|1024| -|`-time`, `--test-duration`|Test duration (in seconds). 
If the value is 0 or smaller than 0, it keeps publishing messages|0| - - -### `managed-ledger` -Write directly on managed-ledgers - -Usage - -```bash - -$ pulsar-perf managed-ledger options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-a`, `--ack-quorum`|Ledger ack quorum|1| -|`-dt`, `--digest-type`|BookKeeper digest type. Possible Values: [CRC32, MAC, CRC32C, DUMMY]|CRC32C| -|`-e`, `--ensemble-size`|Ledger ensemble size|1| -|`-h`, `--help`|Help message|false| -|`-c`, `--max-connections`|Max number of TCP connections to a single bookie|1| -|`-o`, `--max-outstanding`|Max number of outstanding requests|1000| -|`-m`, `--num-messages`|Number of messages to publish in total. If the value is 0 or smaller than 0, it keeps publishing messages|0| -|`-t`, `--num-topic`|Number of managed ledgers|1| -|`-r`, `--rate`|Write rate msg/s across managed ledgers|100| -|`-s`, `--size`|Message size in byte|1024| -|`-time`, `--test-duration`|Test duration (in seconds). If the value is 0 or smaller than 0, it keeps publishing messages|0| -|`--threads`|Number of threads writing|1| -|`-w`, `--write-quorum`|Ledger write quorum|1| -|`-zk`, `--zookeeperServers`|ZooKeeper connection string|| - - -### `monitor-brokers` -Continuously receive broker data and/or load reports - -Usage - -```bash - -$ pulsar-perf monitor-brokers options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--connect-string`|A connection string for one or more ZooKeeper servers|| -|`-h`, `--help`|Help message|false| - - -### `simulation-client` -Run a simulation server acting as a Pulsar client. Uses the client configuration specified in `conf/client.conf`. - -Usage - -```bash - -$ pulsar-perf simulation-client options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--port`|Port to listen on for controller|0| -|`--service-url`|Pulsar Service URL|| -|`-h`, `--help`|Help message|false| - -### `simulation-controller` -Run a simulation controller to give commands to servers - -Usage - -```bash - -$ pulsar-perf simulation-controller options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--client-port`|The port that the clients are listening on|0| -|`--clients`|Comma-separated list of client hostnames|| -|`--cluster`|The cluster to test on|| -|`-h`, `--help`|Help message|false| - - -### `help` -This help message - -Usage - -```bash - -$ pulsar-perf help - -``` - -## `bookkeeper` -A tool for managing BookKeeper. - -Usage - -```bash - -$ bookkeeper command - -``` - -Commands -* `autorecovery` -* `bookie` -* `localbookie` -* `upgrade` -* `shell` - - -Environment variables - -The table below lists the environment variables that you can use to configure the bookkeeper tool. 
- -|Variable|Description|Default| -|---|---|---| -|BOOKIE_LOG_CONF|Log4j configuration file|conf/log4j2.yaml| -|BOOKIE_CONF|BookKeeper configuration file|conf/bk_server.conf| -|BOOKIE_EXTRA_OPTS|Extra options to be passed to the JVM|| -|BOOKIE_EXTRA_CLASSPATH|Extra paths for BookKeeper's classpath|| -|ENTRY_FORMATTER_CLASS|The Java class used to format entries|| -|BOOKIE_PID_DIR|Folder where the BookKeeper server PID file should be stored|| -|BOOKIE_STOP_TIMEOUT|Wait time before forcefully killing the Bookie server instance if attempts to stop it are not successful|| - - -### `auto-recovery` -Runs an auto-recovery service - -Usage - -```bash - -$ bookkeeper autorecovery options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-c`, `--conf`|Configuration for the auto-recovery|| - - -### `bookie` -Starts up a BookKeeper server (aka bookie) - -Usage - -```bash - -$ bookkeeper bookie options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-c`, `--conf`|Configuration for the auto-recovery|| -|-readOnly|Force start a read-only bookie server|false| -|-withAutoRecovery|Start auto-recovery service bookie server|false| - - -### `localbookie` -Runs a test ensemble of N bookies locally - -Usage - -```bash - -$ bookkeeper localbookie N - -``` - -### `upgrade` -Upgrade the bookie’s filesystem - -Usage - -```bash - -$ bookkeeper upgrade options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-c`, `--conf`|Configuration for the auto-recovery|| -|`-u`, `--upgrade`|Upgrade the bookie’s directories|| - - -### `shell` -Run shell for admin commands. To see a full listing of those commands, run bookkeeper shell without an argument. - -Usage - -```bash - -$ bookkeeper shell - -``` - -Example - -```bash - -$ bookkeeper shell bookiesanity - -``` - -## `broker-tool` - -The `broker- tool` is used for operations on a specific broker. - -Usage - -```bash - -$ broker-tool command - -``` - -Commands -* `load-report` -* `help` - -Example -Two ways to get more information about a command as below: - -```bash - -$ broker-tool help command -$ broker-tool command --help - -``` - -### `load-report` - -Collect the load report of a specific broker. -The command is run on a broker, and used for troubleshooting why broker can’t collect right load report. - -Options - -|Flag|Description|Default| -|---|---|---| -|`-i`, `--interval`| Interval to collect load report, in milliseconds || -|`-h`, `--help`| Display help information || - diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/reference-configuration.md b/site2/website/versioned_docs/version-2.8.0-deprecated/reference-configuration.md deleted file mode 100644 index eb3bc1c574feb9..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/reference-configuration.md +++ /dev/null @@ -1,788 +0,0 @@ ---- -id: reference-configuration -title: Pulsar configuration -sidebar_label: "Pulsar configuration" -original_id: reference-configuration ---- - - - - -You can manage Pulsar configuration by configuration files in the [`conf`](https://github.com/apache/pulsar/tree/master/conf) directory of a Pulsar [installation](getting-started-standalone.md). 
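-
-For example, to check or change a single setting, you can edit the relevant file under `conf` directly (a minimal sketch; `brokerServicePort` is one of the broker settings documented below, and `sed -i` assumes GNU sed):
-
-```bash
-
-$ grep '^brokerServicePort=' conf/broker.conf
-brokerServicePort=6650
-$ sed -i 's/^brokerServicePort=.*/brokerServicePort=6651/' conf/broker.conf
-
-```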
- -- [BookKeeper](#bookkeeper) -- [Broker](#broker) -- [Client](#client) -- [Service discovery](#service-discovery) -- [Log4j](#log4j) -- [Log4j shell](#log4j-shell) -- [Standalone](#standalone) -- [WebSocket](#websocket) -- [Pulsar proxy](#pulsar-proxy) -- [ZooKeeper](#zookeeper) - -## BookKeeper - -BookKeeper is a replicated log storage system that Pulsar uses for durable storage of all messages. - - -|Name|Description|Default| -|---|---|---| -|bookiePort|The port on which the bookie server listens.|3181| -|allowLoopback|Whether the bookie is allowed to use a loopback interface as its primary interface (that is the interface used to establish its identity). By default, loopback interfaces are not allowed to work as the primary interface. Using a loopback interface as the primary interface usually indicates a configuration error. For example, it’s fairly common in some VPS setups to not configure a hostname or to have the hostname resolve to `127.0.0.1`. If this is the case, then all bookies in the cluster will establish their identities as `127.0.0.1:3181` and only one will be able to join the cluster. For VPSs configured like this, you should explicitly set the listening interface.|false| -|listeningInterface|The network interface on which the bookie listens. By default, the bookie listens on all interfaces.|eth0| -|advertisedAddress|Configure a specific hostname or IP address that the bookie should use to advertise itself to clients. By default, the bookie advertises either its own IP address or hostname according to the `listeningInterface` and `useHostNameAsBookieID` settings.|N/A| -|allowMultipleDirsUnderSameDiskPartition|Configure the bookie to enable/disable multiple ledger/index/journal directories in the same filesystem disk partition.|false| -|minUsableSizeForIndexFileCreation|The minimum safe usable size available in index directory for bookie to create index files while replaying journal at the time of bookie starts in Readonly Mode (in bytes).|1073741824| -|journalDirectory|The directory where BookKeeper outputs its write-ahead log (WAL).|data/bookkeeper/journal| -|journalDirectories|Directories that BookKeeper outputs its write ahead log. Multiple directories are available, being separated by `,`. For example: `journalDirectories=/tmp/bk-journal1,/tmp/bk-journal2`. If `journalDirectories` is set, the bookies skip `journalDirectory` and use this setting directory.|/tmp/bk-journal| -|ledgerDirectories|The directory where BookKeeper outputs ledger snapshots. This could define multiple directories to store snapshots separated by `,`, for example `ledgerDirectories=/tmp/bk1-data,/tmp/bk2-data`. Ideally, ledger dirs and the journal dir are each in a different device, which reduces the contention between random I/O and sequential write. It is possible to run with a single disk, but performance will be significantly lower.|data/bookkeeper/ledgers| -|ledgerManagerType|The type of ledger manager used to manage how ledgers are stored, managed, and garbage collected. See [BookKeeper Internals](http://bookkeeper.apache.org/docs/latest/getting-started/concepts) for more info.|hierarchical| -|zkLedgersRootPath|The root ZooKeeper path used to store ledger metadata. 
This parameter is used by the ZooKeeper-based ledger manager as a root znode to store all ledgers.|/ledgers|
-|ledgerStorageClass|Ledger storage implementation class|org.apache.bookkeeper.bookie.storage.ldb.DbLedgerStorage|
-|entryLogFilePreallocationEnabled|Enable or disable entry logger preallocation|true|
-|logSizeLimit|Max file size of the entry logger, in bytes. A new entry log file will be created when the old one reaches the file size limit.|1073741824|
-|minorCompactionThreshold|Threshold of minor compaction. Entry log files whose remaining size percentage falls below this threshold will be compacted in a minor compaction. If set to less than zero, minor compaction is disabled.|0.2|
-|minorCompactionInterval|Time interval to run minor compaction, in seconds. If set to less than zero, minor compaction is disabled. Note: should be greater than gcWaitTime. |3600|
-|majorCompactionThreshold|The threshold of major compaction. Entry log files whose remaining size percentage falls below this threshold will be compacted in a major compaction. Entry log files whose remaining size percentage is still higher than the threshold will never be compacted. If set to less than zero, major compaction is disabled.|0.5|
-|majorCompactionInterval|The time interval to run major compaction, in seconds. If set to less than zero, major compaction is disabled. Note: should be greater than gcWaitTime. |86400|
-|readOnlyModeEnabled|If `readOnlyModeEnabled=true`, then when all ledger disks are full, the bookie will be converted to read-only mode and serve only read requests. Otherwise the bookie will be shut down.|true|
-|forceReadOnlyBookie|Whether the bookie is force-started in read-only mode.|false|
-|persistBookieStatusEnabled|Persist the bookie status locally on the disks, so that bookies can keep their status upon restarts.|false|
-|compactionMaxOutstandingRequests|Sets the maximum number of entries that can be compacted without flushing. When compacting, the entries are written to the entrylog and the new offsets are cached in memory. Once the entrylog is flushed, the index is updated with the new offsets. This parameter controls the number of entries added to the entrylog before a flush is forced. A higher value for this parameter means more memory will be used for offsets. Each offset consists of 3 longs. This parameter should not be modified unless you're fully aware of the consequences.|100000|
-|compactionRate|The rate at which compaction will read entries, in adds per second.|1000|
-|isThrottleByBytes|Throttle compaction by bytes or by entries.|false|
-|compactionRateByEntries|The rate at which compaction will read entries, in adds per second.|1000|
-|compactionRateByBytes|Set the rate at which compaction reads entries. The unit is bytes added per second.|1000000|
-|journalMaxSizeMB|Max file size of a journal file, in megabytes. A new journal file will be created when the old one reaches the file size limit.|2048|
-|journalMaxBackups|The max number of old journal files to keep. Keeping a number of old journal files would help data recovery in special cases.|5|
-|journalPreAllocSizeMB|How much space to pre-allocate at a time in the journal.|16|
-|journalWriteBufferSizeKB|The size, in KB, of the write buffers used for the journal.|64|
-|journalRemoveFromPageCache|Whether pages should be removed from the page cache after force write.|true|
-|journalAdaptiveGroupWrites|Whether to group journal force writes, which optimizes group commit for higher throughput.|true|
-|journalMaxGroupWaitMSec|The maximum latency to impose on a journal write to achieve grouping.|1|
-|journalAlignmentSize|All journal writes and commits should be aligned to the given size.|4096|
-|journalBufferedWritesThreshold|Maximum writes to buffer to achieve grouping.|524288|
-|journalFlushWhenQueueEmpty|Whether to flush the journal when the journal queue is empty.|false|
-|numJournalCallbackThreads|The number of threads that should handle journal callbacks.|8|
-|openLedgerRereplicationGracePeriod | The grace period, in milliseconds, that the replication worker waits before fencing and replicating a ledger fragment that's still being written to upon bookie failure. | 30000 |
-|rereplicationEntryBatchSize|The maximum number of entries to keep in a fragment for re-replication.|100|
-|autoRecoveryDaemonEnabled|Whether the bookie itself can start the auto-recovery service.|true|
-|lostBookieRecoveryDelay|How long to wait, in seconds, before starting auto recovery of a lost bookie.|0|
-|gcWaitTime|The interval, in milliseconds, between garbage collection runs. Since garbage collection runs in the background, too-frequent GC will hurt performance. It is better to use a longer GC interval if there is enough disk capacity.|900000|
-|gcOverreplicatedLedgerWaitTime|The interval, in milliseconds, between garbage collection runs for overreplicated ledgers. This should not run very frequently, since the metadata for all the ledgers on the bookie is read from ZooKeeper.|86400000|
-|flushInterval|The interval, in milliseconds, at which ledger index pages are flushed to disk. Flushing index files introduces a lot of random disk I/O. If the journal dir and ledger dirs are on different devices, flushing does not affect performance. But if they are on the same device, too-frequent flushing degrades performance significantly. You can consider increasing the flush interval for better performance, at the cost of a longer bookie server restart after a failure.|60000|
-|bookieDeathWatchInterval|The interval, in milliseconds, at which to watch whether the bookie is dead.|1000|
-|allowStorageExpansion|Allow the bookie storage to expand. Newly added ledger and index dirs must be empty.|false|
-|zkServers|A list of one or more servers on which ZooKeeper is running. The server list can be comma-separated values, for example: zkServers=zk1:2181,zk2:2181,zk3:2181.|localhost:2181|
-|zkTimeout|The ZooKeeper client session timeout, in milliseconds. The bookie server will exit if it receives SESSION_EXPIRED because it was partitioned off from ZooKeeper for longer than the session timeout; JVM garbage collection or disk I/O can cause SESSION_EXPIRED. Increasing this value can help avoid this issue.|30000|
-|zkRetryBackoffStartMs|The initial backoff time, in milliseconds, for ZooKeeper client retries.|1000|
-|zkRetryBackoffMaxMs|The maximum backoff time, in milliseconds, for ZooKeeper client retries.|10000|
-|zkEnableSecurity|Set ACLs on every node written on ZooKeeper, allowing users to read and write BookKeeper metadata stored on ZooKeeper. In order to make ACLs work you need to set up ZooKeeper JAAS authentication. All the bookies and clients need to share the same user, and this is usually done using Kerberos authentication. See the ZooKeeper documentation.|false|
-|httpServerEnabled|The flag that enables/disables starting the admin HTTP server.|false|
-|httpServerPort|The HTTP server port to listen on. By default, the value is `8080`. If you want to keep it consistent with the Prometheus stats provider, you can set it to `8000`.|8080
-|httpServerClass|The HTTP server class.|org.apache.bookkeeper.http.vertx.VertxHttpServer|
-|serverTcpNoDelay|This setting is used to enable/disable Nagle's algorithm, which is a means of improving the efficiency of TCP/IP networks by reducing the number of packets that need to be sent over the network. If you are sending many small messages, such that more than one can fit in a single IP packet, setting server.tcpnodelay to false to enable the Nagle algorithm can provide better performance.|true|
-|serverSockKeepalive|This setting is used to send keep-alive messages on connection-oriented sockets.|true|
-|serverTcpLinger|The socket linger timeout on close. When enabled, a close or shutdown will not return until all queued messages for the socket have been successfully sent or the linger timeout has been reached. Otherwise, the call returns immediately and the closing is done in the background.|0|
-|byteBufAllocatorSizeMax|The maximum buffer size of the received ByteBuf allocator.|1048576|
-|nettyMaxFrameSizeBytes|The maximum Netty frame size in bytes. Any message received larger than this will be rejected.|5253120|
-|openFileLimit|The maximum number of ledger index files that can be opened in the bookie server. If the number of ledger index files reaches this limit, the bookie server starts to swap some ledgers from memory to disk. Too-frequent swapping affects performance. You can tune this number to gain performance according to your requirements.|0|
-|pageSize|Size of an index page in the ledger cache, in bytes. A larger index page can improve the performance of writing pages to disk, which is efficient when you have a small number of ledgers and these ledgers have a similar number of entries. If you have a large number of ledgers and each ledger has fewer entries, a smaller index page improves memory usage.|8192|
-|pageLimit|How many index pages are provided in the ledger cache. If the number of index pages reaches this limit, the bookie server starts to swap some ledgers from memory to disk. You can increase this value when you find that swapping becomes more frequent. But make sure pageLimit*pageSize is not more than the JVM max memory limit, otherwise you would get an OutOfMemoryException. In general, increasing pageLimit and using a smaller index page gains better performance with a large number of ledgers that each have fewer entries. If pageLimit is -1, the bookie server will use 1/3 of the JVM memory to compute the limit on the number of index pages.|0|
-|readOnlyModeEnabled|If all configured ledger directories are full, then support only read requests for clients. If "readOnlyModeEnabled=true", then when all ledger disks are full, the bookie will be converted to read-only mode and serve only read requests. Otherwise the bookie will be shut down. By default this is enabled.|true|
-|diskUsageThreshold|For each ledger dir, the maximum disk space which can be used. Default is 0.95f, i.e. at most 95% of the disk can be used, after which nothing will be written to that partition. If all ledger dir partitions are full, then the bookie will turn to read-only mode if 'readOnlyModeEnabled=true' is set, else it will shut down. Valid values are between 0 and 1 (exclusive).|0.95|
-|diskCheckInterval|Disk check interval, in milliseconds; the interval at which to check the ledger dirs usage.|10000|
-|auditorPeriodicCheckInterval|Interval at which the auditor will do a check of all ledgers in the cluster. By default this runs once a week. The interval is set in seconds. To disable the periodic check completely, set this to 0. Note that periodic checking will put extra load on the cluster, so it should not be run more frequently than once a day.|604800|
-|sortedLedgerStorageEnabled|Whether sorted-ledger storage is enabled.|true|
-|auditorPeriodicBookieCheckInterval|The interval between auditor bookie checks. The auditor bookie check checks ledger metadata to see which bookies should contain entries for each ledger. If a bookie that should contain entries is unavailable, then the ledger containing that entry is marked for recovery. Setting this to 0 disables the periodic check. Bookie checks will still run when a bookie fails. The interval is specified in seconds.|86400|
-|numAddWorkerThreads|The number of threads that should handle write requests. If zero, writes are handled by Netty threads directly.|0|
-|numReadWorkerThreads|The number of threads that should handle read requests. If zero, reads are handled by Netty threads directly.|8|
-|numHighPriorityWorkerThreads|The number of threads that should be used for high-priority requests (i.e. recovery reads and adds, and fencing).|8|
-|maxPendingReadRequestsPerThread|If read worker threads are enabled, limit the number of pending requests to keep the executor queue from growing indefinitely.|2500|
-|maxPendingAddRequestsPerThread|The limit on the number of pending requests, used to keep the executor queue from growing indefinitely when add worker threads are enabled.|10000|
-|isForceGCAllowWhenNoSpace|Whether force compaction is allowed when the disk is full or almost full. Forcing GC could get some space back, but could also fill up the disk space more quickly. This is because new log files are created before GC, while old garbage log files are deleted after GC.|false|
-|verifyMetadataOnGC|True if the bookie should double-check `readMetadata` prior to GC.|false|
-|flushEntrylogBytes|Entry log flush interval in bytes. Flushing in smaller chunks but more frequently reduces spikes in disk I/O. Flushing too frequently may also affect performance negatively.|268435456|
-|readBufferSizeBytes|The number of bytes to use as capacity for BufferedReadChannel.|4096|
-|writeBufferSizeBytes|The number of bytes used as capacity for the write buffer.|65536|
-|useHostNameAsBookieID|Whether the bookie should use its hostname to register with the coordination service (e.g. the ZooKeeper service). When false, the bookie uses its IP address for the registration.|false|
-|bookieId | If you want to customize a bookie ID or use a dynamic network address for the bookie, you can set the `bookieId`.

    Bookie advertises itself using the `bookieId` rather than the `BookieSocketAddress` (`hostname:port` or `IP:port`). If you set the `bookieId`, then the `useHostNameAsBookieID` does not take effect.

    The `bookieId` is a non-empty string that can contain ASCII digits and letters ([a-zA-Z0-9]), colons, dashes, and dots.

    For more information about `bookieId`, see [here](http://bookkeeper.apache.org/bps/BP-41-bookieid/).|N/A| -|allowEphemeralPorts|Whether the bookie is allowed to use an ephemeral port (port 0) as its server port. By default, an ephemeral port is not allowed. Using an ephemeral port as the service port usually indicates a configuration error. However, in unit tests, using an ephemeral port will address port conflict problems and allow running tests in parallel.|false| -|enableLocalTransport|Whether the bookie is allowed to listen for the BookKeeper clients executed on the local JVM.|false| -|disableServerSocketBind|Whether the bookie is allowed to disable bind on network interfaces. This bookie will be available only to BookKeeper clients executed on the local JVM.|false| -|skipListArenaChunkSize|The number of bytes that we should use as chunk allocation for `org.apache.bookkeeper.bookie.SkipListArena`.|4194304| -|skipListArenaMaxAllocSize|The maximum size that we should allocate from the skiplist arena. Allocations larger than this should be allocated directly by the VM to avoid fragmentation.|131072| -|bookieAuthProviderFactoryClass|The factory class name of the bookie authentication provider. If this is null, then there is no authentication.|null| -|statsProviderClass||org.apache.bookkeeper.stats.prometheus.PrometheusMetricsProvider| -|prometheusStatsHttpPort||8000| -|dbStorage_writeCacheMaxSizeMb|Size of Write Cache. Memory is allocated from JVM direct memory. Write cache is used to buffer entries before flushing into the entry log. For good performance, it should be big enough to hold a substantial amount of entries in the flush interval.|25% of direct memory| -|dbStorage_readAheadCacheMaxSizeMb|Size of Read cache. Memory is allocated from JVM direct memory. This read cache is pre-filled doing read-ahead whenever a cache miss happens. By default, it is allocated to 25% of the available direct memory.|N/A| -|dbStorage_readAheadCacheBatchSize|How many entries to pre-fill in cache after a read cache miss|1000| -|dbStorage_rocksDB_blockCacheSize|Size of RocksDB block-cache. For best performance, this cache should be big enough to hold a significant portion of the index database which can reach ~2GB in some cases. By default, it uses 10% of direct memory.|N/A| -|dbStorage_rocksDB_writeBufferSizeMB||64| -|dbStorage_rocksDB_sstSizeInMB||64| -|dbStorage_rocksDB_blockSize||65536| -|dbStorage_rocksDB_bloomFilterBitsPerKey||10| -|dbStorage_rocksDB_numLevels||-1| -|dbStorage_rocksDB_numFilesInLevel0||4| -|dbStorage_rocksDB_maxSizeInLevel1MB||256| - -## Broker - -Pulsar brokers are responsible for handling incoming messages from producers, dispatching messages to consumers, replicating data between clusters, and more. - -|Name|Description|Default| -|---|---|---| -|advertisedListeners|Specify multiple advertised listeners for the broker.

    The format is `<listener_name>:pulsar://<host>:<port>`.

    If there are multiple listeners, separate them with commas.

    **Note**: do not use this configuration with `advertisedAddress` and `brokerServicePort`. If the value of this configuration is empty, the broker uses `advertisedAddress` and `brokerServicePort`|/| -|internalListenerName|Specify the internal listener name for the broker.

    **Note**: the listener name must be contained in `advertisedListeners`.

    If the value of this configuration is empty, the broker uses the first listener as the internal listener.|/| -|authenticateOriginalAuthData| If this flag is set to `true`, the broker authenticates the original Auth data; else it just accepts the originalPrincipal and authorizes it (if required). |false| -|enablePersistentTopics| Whether persistent topics are enabled on the broker |true| -|enableNonPersistentTopics| Whether non-persistent topics are enabled on the broker |true| -|functionsWorkerEnabled| Whether the Pulsar Functions worker service is enabled in the broker |false| -|exposePublisherStats|Whether to enable topic level metrics.|true| -|statsUpdateFrequencyInSecs||60| -|statsUpdateInitialDelayInSecs||60| -|zookeeperServers| Zookeeper quorum connection string || -|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300| -|configurationStoreServers| Configuration store connection string (as a comma-separated list) || -|brokerServicePort| Broker data port |6650| -|brokerServicePortTls| Broker data port for TLS |6651| -|webServicePort| Port to use to server HTTP request |8080| -|webServicePortTls| Port to use to server HTTPS request |8443| -|webSocketServiceEnabled| Enable the WebSocket API service in broker |false| -|webSocketNumIoThreads|The number of IO threads in Pulsar Client used in WebSocket proxy.|8| -|webSocketConnectionsPerBroker|The number of connections per Broker in Pulsar Client used in WebSocket proxy.|8| -|webSocketSessionIdleTimeoutMillis|Time in milliseconds that idle WebSocket session times out.|300000| -|webSocketMaxTextFrameSize|The maximum size of a text message during parsing in WebSocket proxy.|1048576| -|exposeTopicLevelMetricsInPrometheus|Whether to enable topic level metrics.|true| -|exposeConsumerLevelMetricsInPrometheus|Whether to enable consumer level metrics.|false| -|jvmGCMetricsLoggerClassName|Classname of Pluggable JVM GC metrics logger that can log GC specific metrics.|N/A| -|bindAddress| Hostname or IP address the service binds on, default is 0.0.0.0. |0.0.0.0| -|advertisedAddress| Hostname or IP address the service advertises to the outside world. If not set, the value of `InetAddress.getLocalHost().getHostName()` is used. || -|clusterName| Name of the cluster to which this broker belongs to || -|maxTenants|The maximum number of tenants that can be created in each Pulsar cluster. When the number of tenants reaches the threshold, the broker rejects the request of creating a new tenant. The default value 0 disables the check. |0| -| maxNamespacesPerTenant | The maximum number of namespaces that can be created in each tenant. When the number of namespaces reaches this threshold, the broker rejects the request of creating a new tenant. The default value 0 disables the check. |0| -|brokerDeduplicationEnabled| Sets the default behavior for message deduplication in the broker. If enabled, the broker will reject messages that were already stored in the topic. This setting can be overridden on a per-namespace basis. |false| -|brokerDeduplicationMaxNumberOfProducers| The maximum number of producers for which information will be stored for deduplication purposes. |10000| -|brokerDeduplicationEntriesInterval| The number of entries after which a deduplication informational snapshot is taken. A larger interval will lead to fewer snapshots being taken, though this would also lengthen the topic recovery time (the time required for entries published after the snapshot to be replayed). 
|1000|
-|brokerDeduplicationSnapshotIntervalSeconds| The time period after which a deduplication informational snapshot is taken. It runs simultaneously with `brokerDeduplicationEntriesInterval`. |120|
-|brokerDeduplicationProducerInactivityTimeoutMinutes| The time of inactivity (in minutes) after which the broker will discard deduplication information related to a disconnected producer. |360|
-|dispatchThrottlingRatePerReplicatorInMsg| The default messages-per-second dispatch throttling limit for every replicator in replication. A value of `0` disables replication message dispatch throttling.| 0 |
-|dispatchThrottlingRatePerReplicatorInByte| The default bytes-per-second dispatch throttling limit for every replicator in replication. A value of `0` disables replication message-byte dispatch throttling.| 0 |
-|zooKeeperSessionTimeoutMillis| ZooKeeper session timeout, in milliseconds |30000|
-|brokerShutdownTimeoutMs| Time to wait for broker graceful shutdown. After this time elapses, the process will be killed |60000|
-|skipBrokerShutdownOnOOM| Flag to skip broker shutdown when the broker encounters an out-of-memory error. |false|
-|backlogQuotaCheckEnabled| Enable backlog quota check. Enforces action on a topic when the quota is reached |true|
-|backlogQuotaCheckIntervalInSeconds| How often to check for topics that have reached the quota |60|
-|backlogQuotaDefaultLimitGB| The default per-topic backlog quota limit. A value less than 0 means no limit. By default, it is -1. | -1 |
-|backlogQuotaDefaultRetentionPolicy|The default backlog quota retention policy. By default, it is `producer_request_hold`.
  * `producer_request_hold`: policy which holds the producer's send request until the resource becomes available (or holding times out)
  * `producer_exception`: policy which throws `javax.jms.ResourceAllocationException` to the producer
  * `consumer_backlog_eviction`: policy which evicts the oldest message from the slowest consumer's backlog
  |producer_request_hold|
-|allowAutoTopicCreation| Enable topic auto-creation when a new producer or consumer connects |true|
-|allowAutoTopicCreationType| The type of topic that is allowed to be automatically created (partitioned/non-partitioned) |non-partitioned|
-|allowAutoSubscriptionCreation| Enable subscription auto-creation when a new consumer connects |true|
-|defaultNumPartitions| The default number of partitions of a topic that is automatically created if `allowAutoTopicCreationType` is partitioned |1|
-|brokerDeleteInactiveTopicsEnabled| Enable the deletion of inactive topics. If topics are not consumed for a while, these inactive topics might be cleaned up. Deleting inactive topics is enabled by default. The default period is 1 minute. |true|
-|brokerDeleteInactiveTopicsFrequencySeconds| How often to check for inactive topics |60|
-| brokerDeleteInactiveTopicsMode | Set the mode to delete inactive topics.
  * `delete_when_no_subscriptions`: delete the topic which has no subscriptions or active producers.
  * `delete_when_subscriptions_caught_up`: delete the topic whose subscriptions have no backlogs and which has no active producers or consumers.
  | `delete_when_no_subscriptions` |
-| brokerDeleteInactiveTopicsMaxInactiveDurationSeconds | Set the maximum duration for inactive topics. If it is not specified, the `brokerDeleteInactiveTopicsFrequencySeconds` parameter is adopted. | N/A |
-|forceDeleteTenantAllowed| Allow a tenant to be deleted forcefully. |false|
-|forceDeleteNamespaceAllowed| Allow a namespace to be deleted forcefully. |false|
-|messageExpiryCheckIntervalInMinutes| The frequency of proactively checking and purging expired messages. |5|
-|brokerServiceCompactionMonitorIntervalInSeconds| Interval between checks to determine whether topics with compaction policies need compaction. |60|
-|brokerServiceCompactionThresholdInBytes|If the estimated backlog size is greater than this threshold, compaction is triggered.

    Set this threshold to 0 to disable the compaction check.|N/A
-|delayedDeliveryEnabled| Whether to enable delayed delivery for messages. If disabled, messages will be immediately delivered and there will be no tracking overhead.|true|
-|delayedDeliveryTickTimeMillis|Control the tick time for retrying on delayed delivery, which affects the accuracy of the delivery time compared to the scheduled time. By default, it is 1 second.|1000|
-|activeConsumerFailoverDelayTimeMillis| How long to delay rewinding the cursor and dispatching messages when the active consumer changes. |1000|
-|clientLibraryVersionCheckEnabled| Enable the check for the minimum allowed client library version |false|
-|clientLibraryVersionCheckAllowUnversioned| Allow client libraries with no version information |true|
-|statusFilePath| Path for the file used to determine the rotation status for the broker when responding to service discovery health checks ||
-|preferLaterVersions| If true (and `ModularLoadManagerImpl` is being used), the load manager will attempt to use only brokers running the latest software version (to minimize impact to bundles) |false|
-|maxNumPartitionsPerPartitionedTopic|Max number of partitions per partitioned topic. Use 0 or a negative number to disable the check|0|
-| maxSubscriptionsPerTopic | Maximum number of subscriptions allowed to subscribe to a topic. Once this limit is reached, the broker rejects new subscriptions until the number of subscriptions decreases. When the value is set to 0, the limit check is disabled. | 0 |
-| maxProducersPerTopic | Maximum number of producers allowed to connect to a topic. Once this limit is reached, the broker rejects new producers until the number of connected producers decreases. When the value is set to 0, the limit check is disabled. | 0 |
-| maxConsumersPerTopic | Maximum number of consumers allowed to connect to a topic. Once this limit is reached, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 |
-| maxConsumersPerSubscription | Maximum number of consumers allowed to connect to a subscription. Once this limit is reached, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 |
-|tlsEnabled|Deprecated - Use `webServicePortTls` and `brokerServicePortTls` instead. |false|
-|tlsCertificateFilePath| Path for the TLS certificate file ||
-|tlsKeyFilePath| Path for the TLS private key file ||
-|tlsTrustCertsFilePath| Path for the trusted TLS certificate file. This cert is used to verify that any certs presented by connecting clients are signed by a certificate authority. If this verification fails, then the certs are untrusted and the connections are dropped. ||
-|tlsAllowInsecureConnection| Accept untrusted TLS certificates from clients. If it is set to `true`, a client with a cert which cannot be verified with the `tlsTrustCertsFilePath` cert will be allowed to connect to the server, though the cert will not be used for client authentication. |false|
-|tlsProtocols|Specify the TLS protocols the broker will use to negotiate during the TLS handshake. Multiple values can be specified, separated by commas. Example:- ```TLSv1.3```, ```TLSv1.2``` ||
-|tlsCiphers|Specify the TLS ciphers the broker will use to negotiate during the TLS handshake. Multiple values can be specified, separated by commas. 
Example:- ```TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256```||
-|tlsEnabledWithKeyStore| Enable TLS with KeyStore type configuration in the broker |false|
-|tlsProvider| TLS provider for KeyStore type ||
-|tlsKeyStoreType| TLS KeyStore type configuration in the broker: JKS, PKCS12 |JKS|
-|tlsKeyStore| TLS KeyStore path in the broker ||
-|tlsKeyStorePassword| TLS KeyStore password for the broker ||
-|brokerClientTlsEnabledWithKeyStore| Whether the internal client uses the KeyStore type to authenticate with Pulsar brokers |false|
-|brokerClientSslProvider| The TLS provider used by the internal client to authenticate with other Pulsar brokers ||
-|brokerClientTlsTrustStoreType| TLS TrustStore type configuration for the internal client: JKS, PKCS12, used by the internal client to authenticate with Pulsar brokers |JKS|
-|brokerClientTlsTrustStore| TLS TrustStore path for the internal client, used by the internal client to authenticate with Pulsar brokers ||
-|brokerClientTlsTrustStorePassword| TLS TrustStore password for the internal client, used by the internal client to authenticate with Pulsar brokers ||
-|brokerClientTlsCiphers| Specify the TLS ciphers the internal client will use to negotiate during the TLS handshake (a comma-separated list of ciphers), e.g. [TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256]||
-|brokerClientTlsProtocols|Specify the TLS protocols the broker will use to negotiate during the TLS handshake (a comma-separated list of protocol names), e.g. `TLSv1.3`, `TLSv1.2` ||
-|ttlDurationDefaultInSeconds|The default Time to Live (TTL) for namespaces if the TTL is not configured at namespace policies. When the value is set to `0`, TTL is disabled. By default, TTL is disabled. |0|
-|tokenSecretKey| Configure the secret key to be used to validate auth tokens. The key can be specified like: `tokenSecretKey=data:;base64,xxxxxxxxx` or `tokenSecretKey=file:///my/secret.key`. Note: the key file must be DER-encoded.||
-|tokenPublicKey| Configure the public key to be used to validate auth tokens. The key can be specified like: `tokenPublicKey=data:;base64,xxxxxxxxx` or `tokenPublicKey=file:///my/secret.key`. Note: the key file must be DER-encoded.||
-|tokenPublicAlg| Configure the algorithm to be used to validate auth tokens. This can be any of the asymmetric algorithms supported by Java JWT (https://github.com/jwtk/jjwt#signature-algorithms-keys) |RS256|
-|tokenAuthClaim| Specify which of the token's claims will be used as the authentication "principal" or "role". The default "sub" claim will be used if this is left blank ||
-|tokenAudienceClaim| The token audience "claim" name, e.g. "aud", that will be used to get the audience from the token. If not set, the audience will not be verified. ||
-|tokenAudience| The token audience stands for this broker. The field `tokenAudienceClaim` of a valid token needs to contain this. ||
-|maxUnackedMessagesPerConsumer| Max number of unacknowledged messages that a consumer on a shared subscription is allowed to receive. The broker stops sending messages to a consumer once this limit is reached, until the consumer starts acknowledging messages back. Setting this to 0 disables the unacked-message limit check, and consumers can receive messages without any restriction |50000|
-|maxUnackedMessagesPerSubscription| Max number of unacknowledged messages allowed per shared subscription. The broker stops dispatching messages to all consumers of the subscription once this limit is reached, until consumers start acknowledging messages back and the unack count drops to limit/2. 
A value of 0 disables the unacked-message limit check, and the dispatcher can dispatch messages without any restriction |200000| -|subscriptionRedeliveryTrackerEnabled| Enable subscription message redelivery tracker |true| -|subscriptionExpirationTimeMinutes | How long to wait before deleting inactive subscriptions, measured from the last consumption.

    Setting this configuration to a value **greater than 0** deletes inactive subscriptions automatically.
    Setting this configuration to **0** does not delete inactive subscriptions automatically.

    Since this configuration takes effect on all topics, if there is even one topic whose subscriptions should not be deleted automatically, you need to set it to 0.
    Instead, you can set a subscription expiration time for each **namespace** using the [`pulsar-admin namespaces set-subscription-expiration-time options` command](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-subscription-expiration-time-em-). | 0 | -|maxConcurrentLookupRequest| Max number of concurrent lookup requests the broker allows, used to throttle heavy incoming lookup traffic |50000| -|maxConcurrentTopicLoadRequest| Max number of concurrent topic loading requests the broker allows, used to control the number of zk-operations |5000| -|authenticationEnabled| Enable authentication |false| -|authenticationProviders| Authentication provider name list (a comma-separated list of class names) || -| authenticationRefreshCheckSeconds | Interval of time for checking for expired authentication credentials | 60 | -|authorizationEnabled| Enforce authorization |false| -|superUserRoles| Role names that are treated as "super-user", meaning they will be able to do all admin operations and publish/consume from all topics || -|brokerClientAuthenticationPlugin| Authentication settings of the broker itself. Used when the broker connects to other brokers, either in the same or other clusters || -|brokerClientAuthenticationParameters||| -|athenzDomainNames| Supported Athenz provider domain names (comma-separated) for authentication || -|exposePreciseBacklogInPrometheus| Enable exposing the precise backlog stats. Set to false to use the published counter and consumed counter for the calculation; this is more efficient but may be inaccurate. |false| -|schemaRegistryStorageClassName|The schema storage implementation used by this broker.|org.apache.pulsar.broker.service.schema.BookkeeperSchemaStorageFactory| -|isSchemaValidationEnforced|Enforce schema validation in the following case: if a producer without a schema attempts to produce to a topic with a schema, the producer fails to connect. PLEASE use this carefully, since non-Java clients don't support schema; if this setting is enabled, non-Java clients fail to produce.|false| -| topicFencingTimeoutSeconds | If a topic remains fenced for a certain time period (in seconds), it is closed forcefully. If set to 0 or a negative number, the fenced topic is not closed. | 0 | -|offloadersDirectory|The directory for all the offloader implementations.|./offloaders| -|bookkeeperMetadataServiceUri| Metadata service URI that BookKeeper uses for loading the corresponding metadata driver and resolving its metadata service location. This value can be fetched using the `bookkeeper shell whatisinstanceid` command in the BookKeeper cluster. For example: zk+hierarchical://localhost:2181/ledgers. The metadata service URI list can also be semicolon-separated values like below: zk+hierarchical://zk1:2181;zk2:2181;zk3:2181/ledgers || -|bookkeeperClientAuthenticationPlugin| Authentication plugin to use when connecting to bookies || -|bookkeeperClientAuthenticationParametersName| BookKeeper auth plugin implementation-specific parameter names and values || -|bookkeeperClientAuthenticationParameters||| -|bookkeeperClientNumWorkerThreads| Number of BookKeeper client worker threads. 
Default is Runtime.getRuntime().availableProcessors() || -|bookkeeperClientTimeoutInSeconds| Timeout for BK add / read operations |30| -|bookkeeperClientSpeculativeReadTimeoutInMillis| Speculative reads are initiated if a read request doesn’t complete within a certain time. A value of 0 disables speculative reads |0| -|bookkeeperNumberOfChannelsPerBookie| Number of channels per bookie |16| -|bookkeeperClientHealthCheckEnabled| Enable bookies health check. Bookies that have more than the configured number of failures within the interval will be quarantined for some time. During this period, new ledgers won’t be created on these bookies |true| -|bookkeeperClientHealthCheckIntervalSeconds||60| -|bookkeeperClientHealthCheckErrorThresholdPerInterval||5| -|bookkeeperClientHealthCheckQuarantineTimeInSeconds ||1800| -|bookkeeperClientRackawarePolicyEnabled| Enable rack-aware bookie selection policy. BK will choose bookies from different racks when forming a new bookie ensemble |true| -|bookkeeperClientRegionawarePolicyEnabled| Enable region-aware bookie selection policy. BK will choose bookies from different regions and racks when forming a new bookie ensemble. If enabled, the value of bookkeeperClientRackawarePolicyEnabled is ignored |false| -|bookkeeperClientMinNumRacksPerWriteQuorum| Minimum number of racks per write quorum. The BK rack-aware bookie selection policy will try to get bookies from at least 'bookkeeperClientMinNumRacksPerWriteQuorum' racks for a write quorum. |2| -|bookkeeperClientEnforceMinNumRacksPerWriteQuorum| Enforces the rack-aware bookie selection policy to pick bookies from 'bookkeeperClientMinNumRacksPerWriteQuorum' racks for a write quorum. If BK can't find a bookie, it throws BKNotEnoughBookiesException instead of picking a random one. |false| -|bookkeeperClientReorderReadSequenceEnabled| Enable/disable reordering read sequence on reading entries. |false| -|bookkeeperClientIsolationGroups| Enable bookie isolation by specifying a list of bookie groups to choose from. Any bookie outside the specified groups will not be used by the broker || -|bookkeeperClientSecondaryIsolationGroups| Enable the bookie secondary-isolation group if bookkeeperClientIsolationGroups doesn't have enough bookies available. || -|bookkeeperClientMinAvailableBookiesInIsolationGroups| Minimum number of bookies that should be available as part of bookkeeperClientIsolationGroups; otherwise, the broker includes bookkeeperClientSecondaryIsolationGroups bookies in the isolated list. || -|bookkeeperClientGetBookieInfoIntervalSeconds| Set the interval to periodically check bookie info |86400| -|bookkeeperClientGetBookieInfoRetryIntervalSeconds| Set the interval to retry a failed bookie info lookup |60| -|bookkeeperEnableStickyReads | Enable/disable having read operations for a ledger to be sticky to a single bookie. If this flag is enabled, the client will use one single bookie (by preference) to read all entries for a ledger. | true | -|managedLedgerDefaultEnsembleSize| Number of bookies to use when creating a ledger |2| -|managedLedgerDefaultWriteQuorum| Number of copies to store for each message |2| -|managedLedgerDefaultAckQuorum| Number of guaranteed copies (acks to wait for before a write is complete) |2| -|managedLedgerCacheSizeMB| Amount of memory to use for caching data payloads in managed ledgers. This memory is allocated from JVM direct memory and it’s shared across all the topics running in the same broker. 
By default, uses 1/5th of available direct memory || -|managedLedgerCacheCopyEntries| Whether we should make a copy of the entry payloads when inserting in cache| false| -|managedLedgerCacheEvictionWatermark| Threshold to which bring down the cache level when eviction is triggered |0.9| -|managedLedgerCacheEvictionFrequency| Configure the cache eviction frequency for the managed ledger cache (evictions/sec) | 100.0 | -|managedLedgerCacheEvictionTimeThresholdMillis| All entries that have stayed in cache for more than the configured time, will be evicted | 1000 | -|managedLedgerCursorBackloggedThreshold| Configure the threshold (in number of entries) from where a cursor should be considered 'backlogged' and thus should be set as inactive. | 1000| -|managedLedgerDefaultMarkDeleteRateLimit| Rate limit the amount of writes per second generated by consumer acking the messages |1.0| -|managedLedgerMaxEntriesPerLedger| The max number of entries to append to a ledger before triggering a rollover. A ledger rollover is triggered after the min rollover time has passed and one of the following conditions is true:
    • The max rollover time has been reached
    • The max entries have been written to the ledger
    • The max ledger size has been written to the ledger
    |50000| -|managedLedgerMinLedgerRolloverTimeMinutes| Minimum time between ledger rollovers for a topic |10| -|managedLedgerMaxLedgerRolloverTimeMinutes| Maximum time before forcing a ledger rollover for a topic |240| -|managedLedgerCursorMaxEntriesPerLedger| Max number of entries to append to a cursor ledger |50000| -|managedLedgerCursorRolloverTimeInSeconds| Max time before triggering a rollover on a cursor ledger |14400| -|managedLedgerMaxUnackedRangesToPersist| Max number of "acknowledgment holes" that are going to be persistently stored. When acknowledging out of order, a consumer will leave holes that are supposed to be quickly filled by acking all the messages. The information of which messages are acknowledged is persisted by compressing it into "ranges" of messages that were acknowledged. After the max number of ranges is reached, the information will only be tracked in memory and messages will be redelivered in case of crashes. |1000| -|autoSkipNonRecoverableData| Skip reading non-recoverable/unreadable data ledgers in the managed ledger's list. It helps when data ledgers get corrupted at BookKeeper and the managed cursor is stuck at that ledger. |false| -|loadBalancerEnabled| Enable load balancer |true| -|loadBalancerPlacementStrategy| Strategy to assign a new bundle |weightedRandomSelection| -|loadBalancerReportUpdateThresholdPercentage| Percentage of change to trigger a load report update |10| -|loadBalancerReportUpdateMaxIntervalMinutes| Maximum interval to update the load report |15| -|loadBalancerHostUsageCheckIntervalMinutes| Frequency of report to collect |1| -|loadBalancerSheddingIntervalMinutes| Load shedding interval. The broker periodically checks whether some traffic should be offloaded from over-loaded brokers to under-loaded brokers |30| -|loadBalancerSheddingGracePeriodMinutes| Prevent the same topics from being shed and moved to another broker more than once within this timeframe |30| -|loadBalancerBrokerMaxTopics| Usage threshold to allocate the max number of topics to a broker |50000| -|loadBalancerBrokerUnderloadedThresholdPercentage| Usage threshold to determine a broker as under-loaded |1| -|loadBalancerBrokerOverloadedThresholdPercentage| Usage threshold to determine a broker as over-loaded |85| -|loadBalancerResourceQuotaUpdateIntervalMinutes| Interval to update namespace bundle resource quota |15| -|loadBalancerBrokerComfortLoadLevelPercentage| Usage threshold to determine a broker is having just the right level of load |65| -|loadBalancerAutoBundleSplitEnabled| Enable/disable namespace bundle auto split |false| -|loadBalancerNamespaceBundleMaxTopics| Maximum topics in a bundle; otherwise bundle split will be triggered |1000| -|loadBalancerNamespaceBundleMaxSessions| Maximum sessions (producers + consumers) in a bundle; otherwise bundle split will be triggered |1000| -|loadBalancerNamespaceBundleMaxMsgRate| Maximum msgRate (in + out) in a bundle; otherwise bundle split will be triggered |1000| -|loadBalancerNamespaceBundleMaxBandwidthMbytes| Maximum bandwidth (in + out) in a bundle; otherwise bundle split will be triggered |100| -|loadBalancerNamespaceMaximumBundles| Maximum number of bundles in a namespace |128| -|replicationMetricsEnabled| Enable replication metrics |true| -|replicationConnectionsPerBroker| Max number of connections to open for each broker in a remote cluster. More connections host-to-host lead to better throughput over high-latency links. 
|16| -|replicationProducerQueueSize| Replicator producer queue size |1000| -|replicatorPrefix| Replicator prefix used for the replicator producer name and cursor name |pulsar.repl| -|replicationTlsEnabled| Enable TLS when talking with other clusters to replicate messages |false| -|brokerServicePurgeInactiveFrequencyInSeconds|Deprecated. Use `brokerDeleteInactiveTopicsFrequencySeconds`.|60| -|transactionCoordinatorEnabled|Whether to enable the transaction coordinator in the broker.|true| -|transactionMetadataStoreProviderClassName| |org.apache.pulsar.transaction.coordinator.impl.InMemTransactionMetadataStoreProvider| -|defaultRetentionTimeInMinutes| Default message retention time. 0 means retention is disabled. -1 means data is not removed by time quota |0| -|defaultRetentionSizeInMB| Default retention size. 0 means retention is disabled. -1 means data is not removed by size quota |0| -|keepAliveIntervalSeconds| How often to check whether the connections are still alive |30| -|bootstrapNamespaces| The bootstrap namespaces. | N/A | -|loadManagerClassName| Name of the load manager to use |org.apache.pulsar.broker.loadbalance.impl.SimpleLoadManagerImpl| -|supportedNamespaceBundleSplitAlgorithms| Supported algorithm names for namespace bundle split |[range_equally_divide,topic_count_equally_divide]| -|defaultNamespaceBundleSplitAlgorithm| Default algorithm name for namespace bundle split |range_equally_divide| -|managedLedgerOffloadDriver| Driver to use to offload old data to long term storage (Possible values: S3, aws-s3, google-cloud-storage). The offloader implementations are loaded from the directory configured by `offloadersDirectory=./offloaders`. When using google-cloud-storage, make sure both Google Cloud Storage and the Google Cloud Storage JSON API are enabled for the project (check from Developers Console -> Api&auth -> APIs). || -|managedLedgerOffloadMaxThreads| Maximum number of thread pool threads for ledger offloading |2| -|managedLedgerOffloadPrefetchRounds|The maximum prefetch rounds for ledger reading for offloading.|1| -|managedLedgerUnackedRangesOpenCacheSetEnabled| Use Open Range-Set to cache unacknowledged messages |true| -|managedLedgerOffloadDeletionLagMs|Delay between a ledger being successfully offloaded to long term storage and the ledger being deleted from bookkeeper | 14400000| -|managedLedgerOffloadAutoTriggerSizeThresholdBytes|The number of bytes before triggering automatic offload to long term storage |-1 (disabled)| -|s3ManagedLedgerOffloadRegion| For Amazon S3 ledger offload, AWS region || -|s3ManagedLedgerOffloadBucket| For Amazon S3 ledger offload, bucket to place offloaded ledgers into || -|s3ManagedLedgerOffloadServiceEndpoint| For Amazon S3 ledger offload, alternative endpoint to connect to (useful for testing) || -|s3ManagedLedgerOffloadMaxBlockSizeInBytes| For Amazon S3 ledger offload, max block size in bytes. (64MB by default, 5MB minimum) |67108864| -|s3ManagedLedgerOffloadReadBufferSizeInBytes| For Amazon S3 ledger offload, read buffer size in bytes (1MB by default) |1048576| -|gcsManagedLedgerOffloadRegion|For Google Cloud Storage ledger offload, region where the offload bucket is located. Go to this page for more details: https://cloud.google.com/storage/docs/bucket-locations .|N/A| -|gcsManagedLedgerOffloadBucket|For Google Cloud Storage ledger offload, bucket to place offloaded ledgers into.|N/A| -|gcsManagedLedgerOffloadMaxBlockSizeInBytes|For Google Cloud Storage ledger offload, the maximum block size in bytes. 
(64MB by default, 5MB minimum)|67108864| -|gcsManagedLedgerOffloadReadBufferSizeInBytes|For Google Cloud Storage ledger offload, read buffer size in bytes. (1MB by default)|1048576| -|gcsManagedLedgerOffloadServiceAccountKeyFile|For Google Cloud Storage, path to the JSON file containing service account credentials. For more details, see the "Service Accounts" section of https://support.google.com/googleapi/answer/6158849 .|N/A| -|fileSystemProfilePath|For File System Storage, file system profile path.|../conf/filesystem_offload_core_site.xml| -|fileSystemURI|For File System Storage, file system URI.|N/A| -|s3ManagedLedgerOffloadRole| For Amazon S3 ledger offload, provide a role to assume before writing to S3 || -|s3ManagedLedgerOffloadRoleSessionName| For Amazon S3 ledger offload, provide a role session name when using a role |pulsar-s3-offload| -| acknowledgmentAtBatchIndexLevelEnabled | Enable or disable the batch index acknowledgement. | false | -|enableReplicatedSubscriptions|Whether to enable tracking of replicated subscriptions state across clusters.|true| -|replicatedSubscriptionsSnapshotFrequencyMillis|The frequency of snapshots for replicated subscriptions tracking.|1000| -|replicatedSubscriptionsSnapshotTimeoutSeconds|The timeout for building a consistent snapshot for tracking replicated subscriptions state.|30| -|replicatedSubscriptionsSnapshotMaxCachedPerSubscription|The maximum number of snapshots to be cached per subscription.|10| -|maxMessagePublishBufferSizeInMB|The maximum memory size for a broker to handle messages that are sent by producers. If the processing message size exceeds this value, the broker stops reading data from the connection. The processing messages refer to the messages that have been sent to the broker but for which the broker has not yet sent a response to the client. Usually the messages are waiting to be written to bookies. It is shared across all the topics running in the same broker. The value `-1` disables the memory limitation. By default, it is 50% of direct memory.|N/A| -|messagePublishBufferCheckIntervalInMillis|Interval between checks to see if the message publish buffer size exceeds the maximum. Use `0` or a negative number to disable the max publish buffer limiting.|100| -|retentionCheckIntervalInSeconds|Interval between checks to see if consumed ledgers need to be trimmed. Use 0 or a negative number to disable the check.|120| -| maxMessageSize | Set the maximum size of a message. | 5242880 | -| preciseTopicPublishRateLimiterEnable | Enable precise topic publish rate limiting. | false | -| lazyCursorRecovery | Whether to recover cursors lazily when trying to recover a managed ledger backing a persistent topic. It can improve write availability of topics. The caveat is that when the recovered ledger is ready for writes, there is no guarantee that all old consumers' last mark-delete positions (ack positions) can be recovered. Users can make that trade-off, or add custom logic in the application to checkpoint consumer state.| false | -|haProxyProtocolEnabled | Enable or disable the [HAProxy](http://www.haproxy.org/) protocol. |false| -| maxTopicsPerNamespace | The maximum number of persistent topics that can be created in the namespace. When the number of topics reaches this threshold, the broker rejects requests to create new topics, including topics auto-created by producers or consumers, until the number of topics in the namespace decreases. The default value 0 disables the check. 
    | 0 | -|subscriptionTypesEnabled| The subscription types to enable. By default, all types are enabled: exclusive, shared, failover, and key_shared. | Exclusive, Shared, Failover, Key_Shared | -|narExtractionDirectory | The extraction directory of the NAR package. Available for Protocol Handler, Additional Servlets, Offloaders, Broker Interceptor. | System.getProperty("java.io.tmpdir") |
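As an illustration, a minimal `conf/broker.conf` excerpt that turns on token authentication and a couple of the limits documented above could look like the following sketch. The key path and the limit values are placeholders, not recommendations:

```shell
# Hedged broker.conf sketch; all values are illustrative.
authenticationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken
# A symmetric secret key (file must be DER-encoded); use tokenPublicKey for asymmetric keys.
tokenSecretKey=file:///path/to/secret.key
# Reject new consumers once a topic has 10000 of them; 0 disables the check.
maxConsumersPerTopic=10000
maxProducersPerTopic=0
```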
- -## Client - -You can use the [`pulsar-client`](reference-cli-tools.md#pulsar-client) CLI tool to publish messages to and consume messages from Pulsar topics. You can use this tool in place of a client library, as in the example below.
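For instance, assuming a standalone broker on the default ports, a quick produce/consume round trip looks like this (the topic and subscription names are arbitrary):

```shell
# Start a consumer that waits for one message on an illustrative topic
bin/pulsar-client consume my-topic -s "my-subscription" -n 1 &

# Publish one message to the same topic
bin/pulsar-client produce my-topic -m "hello-pulsar" -n 1
```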
-|Name|Description|Default| -|---|---|---| -|webServiceUrl| The web URL for the cluster. |http://localhost:8080/| -|brokerServiceUrl| The Pulsar protocol URL for the cluster. |pulsar://localhost:6650/| -|authPlugin| The authentication plugin. || -|authParams| The authentication parameters for the cluster, as a comma-separated string. || -|useTls| Whether to enforce the TLS authentication in the cluster. |false| -| tlsAllowInsecureConnection | Allow TLS connections to servers whose certificate cannot be verified to have been signed by a trusted certificate authority. | false | -| tlsEnableHostnameVerification | Whether the server hostname must match the common name of the certificate that is used by the server. | false | -|tlsTrustCertsFilePath| Path for the trusted TLS certificate file. || -| useKeyStoreTls | Enable TLS with KeyStore type configuration in the client. | false | -| tlsTrustStoreType | TLS TrustStore type configuration. Available types: JKS, PKCS12. |JKS| -| tlsTrustStore | TLS TrustStore path. | | -| tlsTrustStorePassword | TLS TrustStore password. | |
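These options typically live in `conf/client.conf`, which the CLI tools read on startup. A hedged sketch pointing the tools at a TLS-enabled cluster (hostnames and paths are placeholders):

```shell
# Hedged client.conf sketch; hostnames and paths are placeholders.
webServiceUrl=https://pulsar.example.com:8443/
brokerServiceUrl=pulsar+ssl://pulsar.example.com:6651/
useTls=true
tlsAllowInsecureConnection=false
tlsTrustCertsFilePath=/path/to/ca.cert.pem
```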
- - -## Service discovery - -|Name|Description|Default| -|---|---|---| -|zookeeperServers| Zookeeper quorum connection string (comma-separated) || -|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300| -|configurationStoreServers| Configuration store connection string (as a comma-separated list) || -|zookeeperSessionTimeoutMs| ZooKeeper session timeout |30000| -|servicePort| Port to use to serve binary-proto requests |6650| -|servicePortTls| Port to use to serve binary-proto-tls requests |6651| -|webServicePort| Port that the discovery service listens on |8080| -|webServicePortTls| Port to use to serve HTTPS requests |8443| -|bindOnLocalhost| Control whether to bind directly on localhost rather than on the normal hostname |false| -|authenticationEnabled| Enable authentication |false| -|authenticationProviders| Authentication provider name list (a comma-separated list of class names) || -|authorizationEnabled| Enforce authorization |false| -|superUserRoles| Role names that are treated as "super-user", meaning they will be able to do all admin operations and publish/consume from all topics (comma-separated) || -|tlsEnabled| Enable TLS |false| -|tlsCertificateFilePath| Path for the TLS certificate file || -|tlsKeyFilePath| Path for the TLS private key file || - - -## Log4j - -You can set the log level and configuration in the [log4j2.yaml](https://github.com/apache/pulsar/blob/d557e0aa286866363bc6261dec87790c055db1b0/conf/log4j2.yaml#L155) file. The following logging configuration parameters are available. - -|Name|Default| -|---|---| -|pulsar.root.logger| WARN,CONSOLE| -|pulsar.log.dir| logs| -|pulsar.log.file| pulsar.log| -|log4j.rootLogger| ${pulsar.root.logger}| -|log4j.appender.CONSOLE| org.apache.log4j.ConsoleAppender| -|log4j.appender.CONSOLE.Threshold| DEBUG| -|log4j.appender.CONSOLE.layout| org.apache.log4j.PatternLayout| -|log4j.appender.CONSOLE.layout.ConversionPattern| %d{ISO8601} - %-5p - [%t:%C{1}@%L] - %m%n| -|log4j.appender.ROLLINGFILE| org.apache.log4j.DailyRollingFileAppender| -|log4j.appender.ROLLINGFILE.Threshold| DEBUG| -|log4j.appender.ROLLINGFILE.File| ${pulsar.log.dir}/${pulsar.log.file}| -|log4j.appender.ROLLINGFILE.layout| org.apache.log4j.PatternLayout| -|log4j.appender.ROLLINGFILE.layout.ConversionPattern| %d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n| -|log4j.appender.TRACEFILE| org.apache.log4j.FileAppender| -|log4j.appender.TRACEFILE.Threshold| TRACE| -|log4j.appender.TRACEFILE.File| pulsar-trace.log| -|log4j.appender.TRACEFILE.layout| org.apache.log4j.PatternLayout| -|log4j.appender.TRACEFILE.layout.ConversionPattern| %d{ISO8601} - %-5p [%t:%C{1}@%L][%x] - %m%n| - -> Note: 'topic' in log4j2.appender is configurable. -> - If you want to append all logs to a single topic, set the same topic name. -> - If you want to append logs to different topics, you can set different topic names. 
- -## Log4j shell - -|Name|Default| -|---|---| -|bookkeeper.root.logger| ERROR,CONSOLE| -|log4j.rootLogger| ${bookkeeper.root.logger}| -|log4j.appender.CONSOLE| org.apache.log4j.ConsoleAppender| -|log4j.appender.CONSOLE.Threshold| DEBUG| -|log4j.appender.CONSOLE.layout| org.apache.log4j.PatternLayout| -|log4j.appender.CONSOLE.layout.ConversionPattern| %d{ABSOLUTE} %-5p %m%n| -|log4j.logger.org.apache.zookeeper| ERROR| -|log4j.logger.org.apache.bookkeeper| ERROR| -|log4j.logger.org.apache.bookkeeper.bookie.BookieShell| INFO| - - -## Standalone -
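The standalone broker reads the settings listed below from `conf/standalone.conf`. As a quick sketch (the `--config` flag is an assumption here; check `bin/pulsar standalone --help` in your distribution):

```shell
# Hedged sketch: run standalone against a customized copy of the config.
cp conf/standalone.conf conf/standalone-dev.conf
# e.g. edit conf/standalone-dev.conf to set webServicePort=8081
bin/pulsar standalone --config conf/standalone-dev.conf
```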
-|Name|Description|Default| -|---|---|---| -|authenticateOriginalAuthData| If this flag is set to `true`, the broker authenticates the original Auth data; else it just accepts the originalPrincipal and authorizes it (if required). |false| -|zookeeperServers| The quorum connection string for local ZooKeeper || -|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300| -|configurationStoreServers| Configuration store connection string (as a comma-separated list) || -|brokerServicePort| The port on which the standalone broker listens for connections |6650| -|webServicePort| The port used by the standalone broker for HTTP requests |8080| -|bindAddress| The hostname or IP address on which the standalone service binds |0.0.0.0| -|advertisedAddress| The hostname or IP address that the standalone service advertises to the outside world. If not set, the value of `InetAddress.getLocalHost().getHostName()` is used. || -| numAcceptorThreads | Number of threads to use for the Netty Acceptor | 1 | -| numIOThreads | Number of threads to use for Netty IO | 2 * Runtime.getRuntime().availableProcessors() | -| numHttpServerThreads | Number of threads to use for HTTP request processing | 2 * Runtime.getRuntime().availableProcessors()| -|isRunningStandalone|This flag controls features that are meant to be used when running in standalone mode.|N/A| -|clusterName| The name of the cluster that this broker belongs to. |standalone| -| failureDomainsEnabled | Enable the cluster's failure domain, which can distribute brokers into logical regions. | false | -|zooKeeperSessionTimeoutMillis| The ZooKeeper session timeout, in milliseconds. |30000| -|zooKeeperOperationTimeoutSeconds|ZooKeeper operation timeout in seconds.|30| -|brokerShutdownTimeoutMs| The time to wait for graceful broker shutdown. After this time elapses, the process will be killed. |60000| -|skipBrokerShutdownOnOOM| Flag to skip broker shutdown when the broker handles an out-of-memory error. |false| -|backlogQuotaCheckEnabled| Enable the backlog quota check, which enforces a specified action when the quota is reached. |true| -|backlogQuotaCheckIntervalInSeconds| How often to check for topics that have reached the backlog quota. |60| -|backlogQuotaDefaultLimitGB| The default per-topic backlog quota limit. A value less than 0 means no limitation. By default, it is -1. |-1| -|ttlDurationDefaultInSeconds|The default Time to Live (TTL) for namespaces if the TTL is not configured at namespace policies. When the value is set to `0`, TTL is disabled. By default, TTL is disabled. |0| -|brokerDeleteInactiveTopicsEnabled| Enable the deletion of inactive topics. If topics are not consumed for some time, these inactive topics might be cleaned up. Deleting inactive topics is enabled by default. The default period is 1 minute. |true| -|brokerDeleteInactiveTopicsFrequencySeconds| How often to check for inactive topics, in seconds. |60| -| maxPendingPublishRequestsPerConnection | Maximum pending publish requests per connection, to avoid keeping a large number of pending requests in memory | 1000| -|messageExpiryCheckIntervalInMinutes| How often to proactively check and purge expired messages. |5| -|activeConsumerFailoverDelayTimeMillis| How long to delay rewinding cursor and dispatching messages when active consumer is changed. |1000| -| subscriptionExpirationTimeMinutes | How long to wait before deleting inactive subscriptions, measured from the last consumption. When it is set to 0, inactive subscriptions are not deleted automatically | 0 | -| subscriptionRedeliveryTrackerEnabled | Enable the subscription message redelivery tracker to send the redelivery count to the consumer. | true | -|subscriptionKeySharedEnable|Whether to enable the Key_Shared subscription.|true| -| subscriptionKeySharedUseConsistentHashing | In the Key_Shared subscription type, with the default AUTO_SPLIT mode, use splitting ranges or consistent hashing to reassign keys to new consumers. | false | -| subscriptionKeySharedConsistentHashingReplicaPoints | In the Key_Shared subscription type, the number of points in the consistent-hashing ring. The greater the number, the more equal the assignment of keys to consumers. | 100 | -| subscriptionExpiryCheckIntervalInMinutes | How frequently to proactively check and purge expired subscriptions |5 | -| brokerDeduplicationEnabled | Set the default behavior for message deduplication in the broker. This can be overridden per-namespace. If it is enabled, the broker rejects messages that are already stored in the topic. | false | -| brokerDeduplicationMaxNumberOfProducers | Maximum number of producers' information to persist for deduplication purposes | 10000 | -| brokerDeduplicationEntriesInterval | Number of entries after which a deduplication information snapshot is taken. A greater interval leads to fewer snapshots being taken, though it would increase the topic recovery time when the entries published after the snapshot need to be replayed. | 1000 | -| brokerDeduplicationProducerInactivityTimeoutMinutes | The time of inactivity (in minutes) after which the broker discards deduplication information related to a disconnected producer. | 360 | -| defaultNumberOfNamespaceBundles | When a namespace is created without specifying the number of bundles, this value is used as the default setting.| 4 | -|clientLibraryVersionCheckEnabled| Enable checks for minimum allowed client library version. |false| -|clientLibraryVersionCheckAllowUnversioned| Allow client libraries with no version information |true| -|statusFilePath| The path for the file used to determine the rotation status for the broker when responding to service discovery health checks |/usr/local/apache/htdocs| -|maxUnackedMessagesPerConsumer| The maximum number of unacknowledged messages allowed to be received by consumers on a shared subscription. The broker will stop sending messages to a consumer once this limit is reached, until the consumer begins acknowledging messages. A value of 0 disables the unacked message limit check and thus allows consumers to receive messages without any restrictions. |50000| -|maxUnackedMessagesPerSubscription| The same as above, except per subscription rather than per consumer. |200000| -| maxUnackedMessagesPerBroker | Maximum number of unacknowledged messages allowed per broker. 
Once this limit is reached, the broker stops dispatching messages to all shared subscriptions that have a higher number of unacknowledged messages, until subscriptions start acknowledging messages back and the unacknowledged message count drops to limit/2. When the value is set to 0, the unacknowledged message limit check is disabled and the broker does not block dispatchers. | 0 | -| maxUnackedMessagesPerSubscriptionOnBrokerBlocked | Once the broker reaches the maxUnackedMessagesPerBroker limit, it blocks subscriptions whose unacknowledged messages exceed this percentage limit, and a blocked subscription does not receive any new messages until it acknowledges messages back. | 0.16 | -| unblockStuckSubscriptionEnabled|The broker periodically checks if a subscription is stuck and unblocks it if this flag is enabled.|false| -|zookeeperSessionExpiredPolicy|There are two policies for handling ZooKeeper session expiration: "shutdown" and "reconnect". With the "shutdown" policy, the broker is shut down when the ZooKeeper session expires. With the "reconnect" policy, the broker tries to reconnect to the ZooKeeper server and re-register metadata to ZooKeeper. Note: the "reconnect" policy is an experimental feature.|shutdown| -| topicPublisherThrottlingTickTimeMillis | Tick time to schedule the task that checks topic publish rate limiting across all topics. A lower value can improve accuracy while throttling publishes, but it uses more CPU to perform frequent checks. (Disable publish throttling with value 0) | 10| -| brokerPublisherThrottlingTickTimeMillis | Tick time to schedule the task that checks broker publish rate limiting across all topics. A lower value can improve accuracy while throttling publishes, but it uses more CPU to perform frequent checks. When the value is set to 0, publish throttling is disabled. |50 | -| brokerPublisherThrottlingMaxMessageRate | Maximum rate (in 1 second) of messages allowed to be published for a broker if the message rate limiting is enabled. When the value is set to 0, message rate limiting is disabled. | 0| -| brokerPublisherThrottlingMaxByteRate | Maximum rate (in 1 second) of bytes allowed to be published for a broker if the byte rate limiting is enabled. When the value is set to 0, the byte rate limiting is disabled. | 0 | -|subscribeThrottlingRatePerConsumer|Too many subscribe requests from a consumer can cause the broker to rewind consumer cursors and load data from bookies, causing high network bandwidth usage. When a positive value is set, the broker throttles the subscribe requests for one consumer. Otherwise, the throttling is disabled. By default, throttling is disabled.|0| -|subscribeRatePeriodPerConsumerInSecond|Rate period for {subscribeThrottlingRatePerConsumer}. By default, it is 30s.|30| -| dispatchThrottlingRatePerTopicInMsg | Default message (per second) dispatch throttling-limit for every topic. When the value is set to 0, the default message dispatch throttling-limit is disabled. |0 | -| dispatchThrottlingRatePerTopicInByte | Default byte (per second) dispatch throttling-limit for every topic. When the value is set to 0, the default byte dispatch throttling-limit is disabled. | 0| -| dispatchThrottlingRateRelativeToPublishRate | Enable dispatch rate-limiting relative to publish rate. | false | -|dispatchThrottlingRatePerSubscriptionInMsg|The default message dispatch throttling-limit for a subscription. 
The value of 0 disables message dispatch-throttling.|0| -|dispatchThrottlingRatePerSubscriptionInByte|The default message-byte dispatch throttling-limit for a subscription. The value of 0 disables message-byte dispatch-throttling.|0| -| dispatchThrottlingOnNonBacklogConsumerEnabled | Enable dispatch-throttling for both caught-up consumers as well as consumers who have backlogs. | true | -|dispatcherMaxReadBatchSize|The maximum number of entries to read from BookKeeper. By default, it is 100 entries.|100| -|dispatcherMaxReadSizeBytes|The maximum size in bytes of entries to read from BookKeeper. By default, it is 5MB.|5242880| -|dispatcherMinReadBatchSize|The minimum number of entries to read from BookKeeper. By default, it is 1 entry. When an error occurs while reading entries from BookKeeper, the broker backs off the batch size to this minimum number.|1| -|dispatcherMaxRoundRobinBatchSize|The maximum number of entries to dispatch for a shared subscription. By default, it is 20 entries.|20| -| preciseDispatcherFlowControl | Precise dispatcher flow control according to the history message number of each entry. | false | -| streamingDispatch | Whether to use the streaming read dispatcher. It can be useful when there is a huge backlog to drain: instead of reading with micro batches, we can streamline the read from BookKeeper to make the most of consumer capacity until we hit the BookKeeper read limit or the consumer process limit, and then use consumer flow control to tune the speed. This feature is currently in preview and can change in a subsequent release. | false | -| maxConcurrentLookupRequest | Maximum number of concurrent lookup requests that the broker allows, used to throttle heavy incoming lookup traffic. | 50000 | -| maxConcurrentTopicLoadRequest | Maximum number of concurrent topic loading requests that the broker allows, used to control the number of zk-operations. | 5000 | -| maxConcurrentNonPersistentMessagePerConnection | Maximum number of concurrent non-persistent messages that can be processed per connection. | 1000 | -| numWorkerThreadsForNonPersistentTopic | Number of worker threads to serve non-persistent topics. | 8 | -| enablePersistentTopics | Enable the broker to load persistent topics. | true | -| enableNonPersistentTopics | Enable the broker to load non-persistent topics. | true | -| maxSubscriptionsPerTopic | Maximum number of subscriptions allowed to subscribe to a topic. Once this limit is reached, the broker rejects new subscriptions until the number of subscriptions decreases. When the value is set to 0, the limit check is disabled. | 0 | -| maxProducersPerTopic | Maximum number of producers allowed to connect to a topic. Once this limit is reached, the broker rejects new producers until the number of connected producers decreases. When the value is set to 0, the limit check is disabled. | 0 | -| maxConsumersPerTopic | Maximum number of consumers allowed to connect to a topic. Once this limit is reached, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 | -| maxConsumersPerSubscription | Maximum number of consumers allowed to connect to a subscription. Once this limit is reached, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 | -| maxNumPartitionsPerPartitionedTopic | Maximum number of partitions per partitioned topic. When the value is set to a negative number or is set to 0, the check is disabled. 
    | 0 | -| tlsCertRefreshCheckDurationSec | TLS certificate refresh duration in seconds. When the value is set to 0, the TLS certificate is checked on every new connection. | 300 | -| tlsCertificateFilePath | Path for the TLS certificate file. | | -| tlsKeyFilePath | Path for the TLS private key file. | | -| tlsTrustCertsFilePath | Path for the trusted TLS certificate file.| | -| tlsAllowInsecureConnection | Accept untrusted TLS certificates from clients. If it is set to true, a client with a certificate which cannot be verified with the 'tlsTrustCertsFilePath' certificate is allowed to connect to the server, though the certificate is not used for client authentication. | false | -| tlsProtocols | Specify the TLS protocols the broker uses to negotiate during the TLS handshake. | | -| tlsCiphers | Specify the TLS ciphers the broker uses to negotiate during the TLS handshake. | | -| tlsRequireTrustedClientCertOnConnect | Trusted client certificates are required to connect with TLS. The connection is rejected if the client certificate is not trusted. In effect, this requires that all connecting clients perform TLS client authentication. | false | -| tlsEnabledWithKeyStore | Enable TLS with KeyStore type configuration in broker. | false | -| tlsProvider | TLS Provider for KeyStore type. | | -| tlsKeyStoreType | TLS KeyStore type configuration in the broker.
  Available types: JKS, PKCS12. |JKS| -| tlsKeyStore | TLS KeyStore path in the broker. | | -| tlsKeyStorePassword | TLS KeyStore password for the broker. | | -| tlsTrustStoreType | TLS TrustStore type configuration in the broker. Available types: JKS, PKCS12. |JKS| -| tlsTrustStore | TLS TrustStore path in the broker. | | -| tlsTrustStorePassword | TLS TrustStore password for the broker. | | -| brokerClientTlsEnabledWithKeyStore | Configure whether the internal client uses the KeyStore type to authenticate with Pulsar brokers. | false | -| brokerClientSslProvider | The TLS Provider used by the internal client to authenticate with other Pulsar brokers. | | -| brokerClientTlsTrustStoreType | TLS TrustStore type configuration for the internal client to authenticate with Pulsar brokers.
  Available types: JKS, PKCS12. | JKS | -| brokerClientTlsTrustStore | TLS TrustStore path for the internal client to authenticate with Pulsar brokers. | | -| brokerClientTlsTrustStorePassword | TLS TrustStore password for the internal client to authenticate with Pulsar brokers. | | -| brokerClientTlsCiphers | Specify the TLS ciphers that the internal client uses to negotiate during the TLS handshake. | | -| brokerClientTlsProtocols | Specify the TLS protocols that the broker uses to negotiate during the TLS handshake. | | -| systemTopicEnabled | Enable/Disable system topics. | false | -| topicLevelPoliciesEnabled | Enable or disable topic level policies. Topic level policies depend on the system topic; please enable the system topic first. | false | -| topicFencingTimeoutSeconds | If a topic remains fenced for a certain time period (in seconds), it is closed forcefully. If set to 0 or a negative number, the fenced topic is not closed. | 0 | -| proxyRoles | Role names that are treated as "proxy roles". If the broker sees a request with a proxy role, it demands to see a valid original principal. | | -|authenticationEnabled| Enable authentication for the broker. |false| -|authenticationProviders| A comma-separated list of class names for authentication providers. || -|authorizationEnabled| Enforce authorization in brokers. |false| -| authorizationProvider | Authorization provider fully qualified class-name. | org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider | -| authorizationAllowWildcardsMatching | Allow wildcard matching in authorization. Wildcard matching is applicable only when the wildcard character (*) appears at the **first** or **last** position. | false | -|superUserRoles| Role names that are treated as "superusers." Superusers are authorized to perform all admin tasks. | | -|brokerClientAuthenticationPlugin| The authentication settings of the broker itself. Used when the broker connects to other brokers either in the same cluster or from other clusters. | | -|brokerClientAuthenticationParameters| The parameters that go along with the plugin specified using brokerClientAuthenticationPlugin. | | -|athenzDomainNames| Supported Athenz authentication provider domain names as a comma-separated list. | | -| anonymousUserRole | When this parameter is not empty, unauthenticated users perform as anonymousUserRole. | | -|tokenSecretKey| Configure the secret key to be used to validate auth tokens. The key can be specified like: `tokenSecretKey=data:;base64,xxxxxxxxx` or `tokenSecretKey=file:///my/secret.key`. Note: key file must be DER-encoded.|| -|tokenPublicKey| Configure the public key to be used to validate auth tokens. The key can be specified like: `tokenPublicKey=data:;base64,xxxxxxxxx` or `tokenPublicKey=file:///my/secret.key`. Note: key file must be DER-encoded.|| -|tokenAuthClaim| Specify the token claim that will be used as the authentication "principal" or "role". The "subject" field will be used if this is left blank || -|tokenAudienceClaim| The token audience "claim" name, e.g. "aud". It is used to get the audience from the token. If it is not set, the audience is not verified. || -| tokenAudience | The token audience stands for this broker. The field `tokenAudienceClaim` of a valid token needs to contain this parameter.| | -|saslJaasClientAllowedIds|This is a regexp which limits the range of possible ids that can connect to the broker using SASL. 
By default, it is set to `SaslConstants.JAAS_CLIENT_ALLOWED_IDS_DEFAULT`, which is ".*pulsar.*", so only clients whose id contains 'pulsar' are allowed to connect.|N/A| -|saslJaasBrokerSectionName|Service Principal, for login context name. By default, it is set to `SaslConstants.JAAS_DEFAULT_BROKER_SECTION_NAME`, which is "Broker".|N/A| -|httpMaxRequestSize|If the value is larger than 0, it rejects all HTTP requests with bodies larger than the configured limit.|-1| -|exposePreciseBacklogInPrometheus| Enable exposing the precise backlog stats. Set to false to use the published counter and consumed counter for the calculation; this is more efficient but may be inaccurate. |false| -|bookkeeperMetadataServiceUri|The metadata service URI that BookKeeper uses for loading the corresponding metadata driver and resolving its metadata service location. This value can be fetched using the `bookkeeper shell whatisinstanceid` command in the BookKeeper cluster. For example: `zk+hierarchical://localhost:2181/ledgers`. The metadata service URI list can also be semicolon-separated values like: `zk+hierarchical://zk1:2181;zk2:2181;zk3:2181/ledgers`.|N/A| -|bookkeeperClientAuthenticationPlugin| Authentication plugin to be used when connecting to bookies (BookKeeper servers). || -|bookkeeperClientAuthenticationParametersName| BookKeeper authentication plugin implementation parameters and values. || -|bookkeeperClientAuthenticationParameters| Parameters associated with the bookkeeperClientAuthenticationParametersName || -|bookkeeperClientNumWorkerThreads| Number of BookKeeper client worker threads. Default is Runtime.getRuntime().availableProcessors() || -|bookkeeperClientTimeoutInSeconds| Timeout for BookKeeper add and read operations. |30| -|bookkeeperClientSpeculativeReadTimeoutInMillis| Speculative reads are initiated if a read request doesn’t complete within a certain time. A value of 0 disables speculative reads. |0| -|bookkeeperUseV2WireProtocol|Use the older BookKeeper wire protocol with bookies.|true| -|bookkeeperClientHealthCheckEnabled| Enable bookie health checks. |true| -|bookkeeperClientHealthCheckIntervalSeconds| The time interval, in seconds, at which health checks are performed. New ledgers are not created during health checks. |60| -|bookkeeperClientHealthCheckErrorThresholdPerInterval| Error threshold for health checks. |5| -|bookkeeperClientHealthCheckQuarantineTimeInSeconds| How long bookies are quarantined if they have more than the allowed number of failures within the time interval specified by bookkeeperClientHealthCheckIntervalSeconds. |1800| -|bookkeeperClientGetBookieInfoIntervalSeconds|Specify options for the GetBookieInfo check. This setting helps ensure that the list of bookies on the brokers is up to date.|86400| -|bookkeeperClientGetBookieInfoRetryIntervalSeconds|Specify options for the GetBookieInfo check. This setting helps ensure that the list of bookies on the brokers is up to date.|60| -|bookkeeperClientRackawarePolicyEnabled| |true| -|bookkeeperClientRegionawarePolicyEnabled| |false| -|bookkeeperClientMinNumRacksPerWriteQuorum| |2| -|bookkeeperClientEnforceMinNumRacksPerWriteQuorum| |false| -|bookkeeperClientReorderReadSequenceEnabled| |false| -|bookkeeperClientIsolationGroups||| -|bookkeeperClientSecondaryIsolationGroups| Enable the bookie secondary-isolation group if bookkeeperClientIsolationGroups doesn't have enough bookies available. 
|| -|bookkeeperClientMinAvailableBookiesInIsolationGroups| Minimum bookies that should be available as part of bookkeeperClientIsolationGroups else broker will include bookkeeperClientSecondaryIsolationGroups bookies in isolated list. || -| bookkeeperTLSProviderFactoryClass | Set the client security provider factory class name. | org.apache.bookkeeper.tls.TLSContextFactory | -| bookkeeperTLSClientAuthentication | Enable TLS authentication with bookie. | false | -| bookkeeperTLSKeyFileType | Supported type: PEM, JKS, PKCS12. | PEM | -| bookkeeperTLSTrustCertTypes | Supported type: PEM, JKS, PKCS12. | PEM | -| bookkeeperTLSKeyStorePasswordPath | Path to file containing keystore password, if the client keystore is password protected. | | -| bookkeeperTLSTrustStorePasswordPath | Path to file containing truststore password, if the client truststore is password protected. | | -| bookkeeperTLSKeyFilePath | Path for the TLS private key file. | | -| bookkeeperTLSCertificateFilePath | Path for the TLS certificate file. | | -| bookkeeperTLSTrustCertsFilePath | Path for the trusted TLS certificate file. | | -| bookkeeperDiskWeightBasedPlacementEnabled | Enable/Disable disk weight based placement. | false | -| bookkeeperExplicitLacIntervalInMills | Set the interval to check the need for sending an explicit LAC. When the value is set to 0, no explicit LAC is sent. | 0 | -| bookkeeperClientExposeStatsToPrometheus | Expose BookKeeper client managed ledger stats to Prometheus. | false | -|managedLedgerDefaultEnsembleSize| |1| -|managedLedgerDefaultWriteQuorum| |1| -|managedLedgerDefaultAckQuorum| |1| -| managedLedgerDigestType | Default type of checksum to use when writing to BookKeeper. | CRC32C | -| managedLedgerNumSchedulerThreads | Number of threads to be used for managed ledger scheduled tasks. | 8 | -|managedLedgerCacheSizeMB| |N/A| -|managedLedgerCacheCopyEntries| Whether to copy the entry payloads when inserting in cache.| false| -|managedLedgerCacheEvictionWatermark| |0.9| -|managedLedgerCacheEvictionFrequency| Configure the cache eviction frequency for the managed ledger cache (evictions/sec) | 100.0 | -|managedLedgerCacheEvictionTimeThresholdMillis| All entries that have stayed in cache for more than the configured time, will be evicted | 1000 | -|managedLedgerCursorBackloggedThreshold| Configure the threshold (in number of entries) from where a cursor should be considered 'backlogged' and thus should be set as inactive. | 1000| -|managedLedgerUnackedRangesOpenCacheSetEnabled| Use Open Range-Set to cache unacknowledged messages |true| -|managedLedgerDefaultMarkDeleteRateLimit| |0.1| -|managedLedgerMaxEntriesPerLedger| |50000| -|managedLedgerMinLedgerRolloverTimeMinutes| |10| -|managedLedgerMaxLedgerRolloverTimeMinutes| |240| -|managedLedgerCursorMaxEntriesPerLedger| |50000| -|managedLedgerCursorRolloverTimeInSeconds| |14400| -| managedLedgerMaxSizePerLedgerMbytes | Maximum ledger size before triggering a rollover for a topic. | 2048 | -| managedLedgerMaxUnackedRangesToPersist | Maximum number of "acknowledgment holes" that are going to be persistently stored. When acknowledging out of order, a consumer leaves holes that are supposed to be quickly filled by acknowledging all the messages. The information of which messages are acknowledged is persisted by compressing in "ranges" of messages that were acknowledged. After the max number of ranges is reached, the information is only tracked in memory and messages are redelivered in case of crashes. 
    | 10000 | -| managedLedgerMaxUnackedRangesToPersistInZooKeeper | Maximum number of "acknowledgment holes" that can be stored in ZooKeeper. If the number of unacknowledged message ranges is higher than this limit, the broker persists unacknowledged ranges into BookKeeper to avoid additional data overhead in ZooKeeper. | 1000 | -|autoSkipNonRecoverableData| |false| -| managedLedgerMetadataOperationsTimeoutSeconds | Operation timeout while updating managed-ledger metadata. | 60 | -| managedLedgerReadEntryTimeoutSeconds | Read entry timeout when the broker tries to read messages from BookKeeper. | 0 | -| managedLedgerAddEntryTimeoutSeconds | Add entry timeout when the broker tries to publish a message to BookKeeper. | 0 | -| managedLedgerNewEntriesCheckDelayInMillis | New entries check delay for the cursor under the managed ledger. If there are no new messages in the topic, the cursor tries to check again after the delay time. For consumption-latency-sensitive scenarios, you can set the value to a smaller value or 0; a smaller value may degrade consumption throughput. By default, it is 10 ms. | 10 | -| managedLedgerPrometheusStatsLatencyRolloverSeconds | Managed ledger Prometheus stats latency rollover seconds. | 60 | -| managedLedgerTraceTaskExecution | Whether to trace managed ledger task execution time. | true | -|loadBalancerEnabled| |false| -|loadBalancerPlacementStrategy| |weightedRandomSelection| -|loadBalancerReportUpdateThresholdPercentage| |10| -|loadBalancerReportUpdateMaxIntervalMinutes| |15| -|loadBalancerHostUsageCheckIntervalMinutes| |1| -|loadBalancerSheddingIntervalMinutes| |30| -|loadBalancerSheddingGracePeriodMinutes| |30| -|loadBalancerBrokerMaxTopics| |50000| -|loadBalancerBrokerUnderloadedThresholdPercentage| |1| -|loadBalancerBrokerOverloadedThresholdPercentage| |85| -|loadBalancerResourceQuotaUpdateIntervalMinutes| |15| -|loadBalancerBrokerComfortLoadLevelPercentage| |65| -|loadBalancerAutoBundleSplitEnabled| |false| -| loadBalancerAutoUnloadSplitBundlesEnabled | Enable/Disable automatic unloading of split bundles. | true | -|loadBalancerNamespaceBundleMaxTopics| |1000| -|loadBalancerNamespaceBundleMaxSessions| |1000| -|loadBalancerNamespaceBundleMaxMsgRate| |1000| -|loadBalancerNamespaceBundleMaxBandwidthMbytes| |100| -|loadBalancerNamespaceMaximumBundles| |128| -| loadBalancerBrokerThresholdShedderPercentage | The broker resource usage threshold. When the broker resource usage is greater than the Pulsar cluster average resource usage, the threshold shedder is triggered to offload bundles from the broker. It only takes effect in the ThresholdShedder strategy. | 10 | -| loadBalancerHistoryResourcePercentage | The history usage when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 0.9 | -| loadBalancerBandwithInResourceWeight | The inbound bandwidth usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 | -| loadBalancerBandwithOutResourceWeight | The outbound bandwidth usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. 
    | 1.0 | -| loadBalancerCPUResourceWeight | The CPU usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 | -| loadBalancerMemoryResourceWeight | The heap memory usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 | -| loadBalancerDirectMemoryResourceWeight | The direct memory usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 | -| loadBalancerBundleUnloadMinThroughputThreshold | Bundle unload minimum throughput threshold, which avoids unloading bundles frequently. It only takes effect in the ThresholdShedder strategy. | 10 | -|replicationMetricsEnabled| |true| -|replicationConnectionsPerBroker| |16| -|replicationProducerQueueSize| |1000| -| replicationPolicyCheckDurationSeconds | Duration to check the replication policy, to avoid replicator inconsistency due to a missing ZooKeeper watch. When the value is set to 0, the replication policy check is disabled. | 600 | -|defaultRetentionTimeInMinutes| |0| -|defaultRetentionSizeInMB| |0| -|keepAliveIntervalSeconds| |30| -|haProxyProtocolEnabled | Enable or disable the [HAProxy](http://www.haproxy.org/) protocol. |false| -|bookieId | If you want to customize a bookie ID or use a dynamic network address for a bookie, you can set the `bookieId`.

    Bookie advertises itself using the `bookieId` rather than the `BookieSocketAddress` (`hostname:port` or `IP:port`).

    The `bookieId` is a non-empty string that can contain ASCII digits and letters ([a-zA-Z0-9]), colons, dashes, and dots.

    For more information about `bookieId`, see [here](http://bookkeeper.apache.org/bps/BP-41-bookieid/).|/| -| maxTopicsPerNamespace | The maximum number of persistent topics that can be created in the namespace. When the number of topics reaches this threshold, the broker rejects requests to create new topics, including topics auto-created by producers or consumers, until the number of topics in the namespace decreases. The default value 0 disables the check. | 0 | - -## WebSocket - -|Name|Description|Default| -|---|---|---| -|configurationStoreServers ||| -|zooKeeperSessionTimeoutMillis| |30000| -|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300| -|serviceUrl||| -|serviceUrlTls||| -|brokerServiceUrl||| -|brokerServiceUrlTls||| -|webServicePort||8080| -|webServicePortTls||8443| -|bindAddress||0.0.0.0| -|clusterName ||| -|authenticationEnabled||false| -|authenticationProviders||| -|authorizationEnabled||false| -|superUserRoles ||| -|brokerClientAuthenticationPlugin||| -|brokerClientAuthenticationParameters||| -|tlsEnabled||false| -|tlsAllowInsecureConnection||false| -|tlsCertificateFilePath||| -|tlsKeyFilePath ||| -|tlsTrustCertsFilePath||| - -## Pulsar proxy - -The [Pulsar proxy](concepts-architecture-overview.md#pulsar-proxy) can be configured in the `conf/proxy.conf` file, as in the sketch below.
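For example, a hedged `conf/proxy.conf` sketch that discovers brokers through the metadata store (hostnames and ports are placeholders):

```shell
# Hedged proxy.conf sketch; hostnames and ports are placeholders.
zookeeperServers=zk1:2181,zk2:2181,zk3:2181
configurationStoreServers=zk1:2181,zk2:2181,zk3:2181
servicePort=6650
webServicePort=8080
# Alternatively, point the proxy directly at the brokers:
# brokerServiceURL=pulsar://broker1:6650
# brokerWebServiceURL=http://broker1:8080
```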
  • 0: Do not log any TCP channel information.<br>
  • 1: Parse and log any TCP channel information and command information without message body.<br>
  • 2: Parse and log channel information, command information and message body.<br>
  | 0 | -|authenticationEnabled| Whether authentication is enabled for the Pulsar proxy |false| -|authenticateMetricsEndpoint| Whether the '/metrics' endpoint requires authentication. Defaults to true. 'authenticationEnabled' must also be set for this to take effect. |true| -|authenticationProviders| Authentication provider name list (a comma-separated list of class names) || -|authorizationEnabled| Whether authorization is enforced by the Pulsar proxy |false| -|authorizationProvider| Authorization provider as a fully qualified class name |org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider| -| anonymousUserRole | When this parameter is not empty, unauthenticated users act as the anonymousUserRole. | | -|brokerClientAuthenticationPlugin| The authentication plugin used by the Pulsar proxy to authenticate with Pulsar brokers || -|brokerClientAuthenticationParameters| The authentication parameters used by the Pulsar proxy to authenticate with Pulsar brokers || -|brokerClientTrustCertsFilePath| The path to trusted certificates used by the Pulsar proxy to authenticate with Pulsar brokers || -|superUserRoles| Role names that are treated as "super-users," meaning that they will be able to perform all admin operations || -|maxConcurrentInboundConnections| Max concurrent inbound connections. The proxy will reject requests beyond that. |10000| -|maxConcurrentLookupRequests| Max concurrent lookup requests. The proxy will error out requests beyond that. |50000| -|tlsEnabledInProxy| Deprecated - use `servicePortTls` and `webServicePortTls` instead. |false| -|tlsEnabledWithBroker| Whether TLS is enabled when communicating with Pulsar brokers. |false| -| tlsCertRefreshCheckDurationSec | TLS certificate refresh duration in seconds. If the value is set to 0, the TLS certificate is checked on every new connection. | 300 | -|tlsCertificateFilePath| Path for the TLS certificate file || -|tlsKeyFilePath| Path for the TLS private key file || -|tlsTrustCertsFilePath| Path for the trusted TLS certificate pem file || -|tlsHostnameVerificationEnabled| Whether the hostname is validated when the proxy creates a TLS connection with brokers |false| -|tlsRequireTrustedClientCertOnConnect| Whether client certificates are required for TLS. Connections are rejected if the client certificate isn’t trusted. |false| -|tlsProtocols|Specify the TLS protocols the proxy will use to negotiate during the TLS handshake. Multiple values can be specified, separated by commas. Example: `TLSv1.3`, `TLSv1.2` || -|tlsCiphers|Specify the TLS ciphers the proxy will use to negotiate during the TLS handshake. Multiple values can be specified, separated by commas. Example: `TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256`|| -| httpReverseProxyConfigs | HTTP redirects to non-Pulsar services | | -| httpOutputBufferSize | HTTP output buffer size. The amount of data that will be buffered for HTTP requests before it is flushed to the channel. A larger buffer size may result in higher HTTP throughput, though it may take longer for the client to see data. If using HTTP streaming via the reverse proxy, this should be set to the minimum value (1) so that clients see the data as soon as possible. | 32768 | -| httpNumThreads | Number of threads to use for HTTP request processing| 2 * Runtime.getRuntime().availableProcessors() | -|tokenSecretKey| Configure the secret key to be used to validate auth tokens. The key can be specified like: `tokenSecretKey=data:;base64,xxxxxxxxx` or `tokenSecretKey=file:///my/secret.key`. <br>
Note: key file must be DER-encoded.|| -|tokenPublicKey| Configure the public key to be used to validate auth tokens. The key can be specified like: `tokenPublicKey=data:;base64,xxxxxxxxx` or `tokenPublicKey=file:///my/public.key`. Note: key file must be DER-encoded.|| -|tokenAuthClaim| Specify the token claim that will be used as the authentication "principal" or "role". The "subject" field will be used if this is left blank || -|tokenAudienceClaim| The token audience "claim" name, e.g. "aud". It is used to get the audience from the token. If it is not set, the audience is not verified. || -| tokenAudience | The token audience that stands for this broker. The field `tokenAudienceClaim` of a valid token must contain this parameter.| | -|haProxyProtocolEnabled | Enable or disable the [HAProxy](http://www.haproxy.org/) protocol. |false| - -## ZooKeeper - -ZooKeeper handles a broad range of essential configuration- and coordination-related tasks for Pulsar. The default configuration file for ZooKeeper is in the `conf/zookeeper.conf` file in your Pulsar installation. The following parameters are available: - - -|Name|Description|Default| -|---|---|---| -|tickTime| The tick is the basic unit of time in ZooKeeper, measured in milliseconds and used to regulate things like heartbeats and timeouts. tickTime is the length of a single tick. |2000| -|initLimit| The maximum time, in ticks, that the leader ZooKeeper server allows follower ZooKeeper servers to successfully connect and sync. The tick time is set in milliseconds using the tickTime parameter. |10| -|syncLimit| The maximum time, in ticks, that a follower ZooKeeper server is allowed to sync with other ZooKeeper servers. The tick time is set in milliseconds using the tickTime parameter. |5| -|dataDir| The location where ZooKeeper will store in-memory database snapshots as well as the transaction log of updates to the database. |data/zookeeper| -|clientPort| The port on which the ZooKeeper server will listen for connections. |2181| -|admin.enableServer|Whether the ZooKeeper admin server is enabled.|true| -|admin.serverPort|The port at which the admin server listens.|9990| -|autopurge.snapRetainCount| In ZooKeeper, auto purge determines how many recent snapshots of the database stored in dataDir to retain within the time interval specified by autopurge.purgeInterval (while deleting the rest). |3| -|autopurge.purgeInterval| The time interval, in hours, by which the ZooKeeper database purge task is triggered. Setting to a non-zero number will enable auto purge; setting to 0 will disable it. Read the ZooKeeper documentation before enabling auto purge. |1| -|forceSync|Requires updates to be synced to the media of the transaction log before finishing processing the update. If this option is set to 'no', ZooKeeper will not require updates to be synced to the media. WARNING: it's not recommended to run a production ZK cluster with `forceSync` disabled.|yes| -|maxClientCnxns| The maximum number of client connections. Increase this if you need to handle more ZooKeeper clients. |60| - - - - -In addition to the parameters in the table above, configuring ZooKeeper for Pulsar involves adding -a `server.N` line to the `conf/zookeeper.conf` file for each node in the ZooKeeper cluster, where `N` is the number of the ZooKeeper node. <br>
Here's an example for a three-node ZooKeeper cluster: - -```properties - -server.1=zk1.us-west.example.com:2888:3888 -server.2=zk2.us-west.example.com:2888:3888 -server.3=zk3.us-west.example.com:2888:3888 - -``` - -> We strongly recommend consulting the [ZooKeeper Administrator's Guide](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html) for a more thorough and comprehensive introduction to ZooKeeper configuration diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/reference-connector-admin.md b/site2/website/versioned_docs/version-2.8.0-deprecated/reference-connector-admin.md deleted file mode 100644 index 7b73ae80750cd4..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/reference-connector-admin.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -id: reference-connector-admin -title: Connector Admin CLI -sidebar_label: "Connector Admin CLI" -original_id: reference-connector-admin ---- - -> **Important** -> -> For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more information, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/). -> \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/reference-metrics.md b/site2/website/versioned_docs/version-2.8.0-deprecated/reference-metrics.md deleted file mode 100644 index df5f7594335ae0..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/reference-metrics.md +++ /dev/null @@ -1,552 +0,0 @@ ---- -id: reference-metrics -title: Pulsar Metrics -sidebar_label: "Pulsar Metrics" -original_id: reference-metrics ---- - - - -Pulsar exposes the following metrics in Prometheus format. You can monitor your clusters with these metrics. - -* [ZooKeeper](#zookeeper) -* [BookKeeper](#bookkeeper) -* [Broker](#broker) -* [Pulsar Functions](#pulsar-functions) -* [Proxy](#proxy) -* [Pulsar SQL Worker](#pulsar-sql-worker) -* [Pulsar transaction](#pulsar-transaction) - -The following types of metrics are available: - -- [Counter](https://prometheus.io/docs/concepts/metric_types/#counter): a cumulative metric that represents a single monotonically increasing counter. The value can only increase; it is reset to zero when you restart your cluster. -- [Gauge](https://prometheus.io/docs/concepts/metric_types/#gauge): a metric that represents a single numerical value that can arbitrarily go up and down. -- [Histogram](https://prometheus.io/docs/concepts/metric_types/#histogram): a histogram samples observations (usually things like request durations or response sizes) and counts them in configurable buckets. The `_bucket` suffix is the number of observations within a histogram bucket, configured with parameter `{le=""}`. The `_count` suffix is the number of observations, shown as a time series and behaves like a counter. The `_sum` suffix is the sum of observed values, also shown as a time series and behaves like a counter. These suffixes are together denoted by `_*` in this doc. -- [Summary](https://prometheus.io/docs/concepts/metric_types/#summary): similar to a histogram, a summary samples observations (usually things like request durations and response sizes). While it also provides a total count of observations and a sum of all observed values, it calculates configurable quantiles over a sliding time window. - -## ZooKeeper - -The ZooKeeper metrics are exposed under "/metrics" at port `8000`. You can use a different port by configuring the `metricsProvider.httpPort` in conf/zookeeper.conf. <br>
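For example, a minimal `conf/zookeeper.conf` sketch that enables the Prometheus provider on a non-default port (the port value below is only an illustration):

```properties
# Enable the Prometheus metrics provider (available since ZooKeeper 3.6.0)
metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
# Serve "/metrics" on a custom port instead of the default 8000
metricsProvider.httpPort=8001
```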
- -ZooKeeper has provided a new metrics system since 3.6.0. For more detailed metrics, refer to the [ZooKeeper Monitor Guide](https://zookeeper.apache.org/doc/r3.7.0/zookeeperMonitor.html). - -## BookKeeper - -The BookKeeper metrics are exposed under "/metrics" at port `8000`. You can change the port by updating `prometheusStatsHttpPort` -in the `bookkeeper.conf` configuration file. - -### Server metrics - -| Name | Type | Description | -|---|---|---| -| bookie_SERVER_STATUS | Gauge | The server status for the bookie server.<br>
    • 1: the bookie is running in writable mode.
    • 0: the bookie is running in readonly mode.
    | -| bookkeeper_server_ADD_ENTRY_count | Counter | The total number of ADD_ENTRY requests received at the bookie. The `success` label is used to distinguish successes and failures. | -| bookkeeper_server_READ_ENTRY_count | Counter | The total number of READ_ENTRY requests received at the bookie. The `success` label is used to distinguish successes and failures. | -| bookie_WRITE_BYTES | Counter | The total number of bytes written to the bookie. | -| bookie_READ_BYTES | Counter | The total number of bytes read from the bookie. | -| bookkeeper_server_ADD_ENTRY_REQUEST | Summary | The summary of request latency of ADD_ENTRY requests at the bookie. The `success` label is used to distinguish successes and failures. | -| bookkeeper_server_READ_ENTRY_REQUEST | Summary | The summary of request latency of READ_ENTRY requests at the bookie. The `success` label is used to distinguish successes and failures. | - -### Journal metrics - -| Name | Type | Description | -|---|---|---| -| bookie_journal_JOURNAL_SYNC_count | Counter | The total number of journal fsync operations happening at the bookie. The `success` label is used to distinguish successes and failures. | -| bookie_journal_JOURNAL_QUEUE_SIZE | Gauge | The total number of requests pending in the journal queue. | -| bookie_journal_JOURNAL_FORCE_WRITE_QUEUE_SIZE | Gauge | The total number of force write (fsync) requests pending in the force-write queue. | -| bookie_journal_JOURNAL_CB_QUEUE_SIZE | Gauge | The total number of callbacks pending in the callback queue. | -| bookie_journal_JOURNAL_ADD_ENTRY | Summary | The summary of request latency of adding entries to the journal. | -| bookie_journal_JOURNAL_SYNC | Summary | The summary of fsync latency of syncing data to the journal disk. | - -### Storage metrics - -| Name | Type | Description | -|---|---|---| -| bookie_ledgers_count | Gauge | The total number of ledgers stored in the bookie. | -| bookie_entries_count | Gauge | The total number of entries stored in the bookie. | -| bookie_write_cache_size | Gauge | The bookie write cache size (in bytes). | -| bookie_read_cache_size | Gauge | The bookie read cache size (in bytes). | -| bookie_DELETED_LEDGER_COUNT | Counter | The total number of ledgers deleted since the bookie has started. | -| bookie_ledger_writable_dirs | Gauge | The number of writable directories in the bookie. | - -## Broker - -The broker metrics are exposed under "/metrics" at port `8080`. You can change the port by updating `webServicePort` to a different port -in the `broker.conf` configuration file. - -All the metrics exposed by a broker are labelled with `cluster=${pulsar_cluster}`. The name of Pulsar cluster is the value of `${pulsar_cluster}`, which you have configured in the `broker.conf` file. 
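As an illustration, the two `broker.conf` settings this section refers to might look like the following (the cluster name is a placeholder):

```properties
# conf/broker.conf (illustrative values)
# Cluster name; it becomes the cluster=${pulsar_cluster} label on every broker metric
clusterName=pulsar-cluster-1
# Port of the web service that serves "/metrics"
webServicePort=8080
```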
- -The following metrics are available for broker: - -- [ZooKeeper](#zookeeper) - - [Server metrics](#server-metrics) - - [Request metrics](#request-metrics) -- [BookKeeper](#bookkeeper) - - [Server metrics](#server-metrics-1) - - [Journal metrics](#journal-metrics) - - [Storage metrics](#storage-metrics) -- [Broker](#broker) - - [Namespace metrics](#namespace-metrics) - - [Replication metrics](#replication-metrics) - - [Topic metrics](#topic-metrics) - - [Replication metrics](#replication-metrics-1) - - [ManagedLedgerCache metrics](#managedledgercache-metrics) - - [ManagedLedger metrics](#managedledger-metrics) - - [LoadBalancing metrics](#loadbalancing-metrics) - - [BundleUnloading metrics](#bundleunloading-metrics) - - [BundleSplit metrics](#bundlesplit-metrics) - - [Subscription metrics](#subscription-metrics) - - [Consumer metrics](#consumer-metrics) - - [Managed ledger bookie client metrics](#managed-ledger-bookie-client-metrics) - - [Token metrics](#token-metrics) - - [Authentication metrics](#authentication-metrics) - - [Connection metrics](#connection-metrics) - - [Jetty metrics](#jetty-metrics) -- [Pulsar Functions](#pulsar-functions) -- [Proxy](#proxy) -- [Pulsar SQL Worker](#pulsar-sql-worker) -- [Pulsar transaction](#pulsar-transaction) - -### Namespace metrics - -> Namespace metrics are only exposed when `exposeTopicLevelMetricsInPrometheus` is set to `false`. - -All the namespace metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you configured in `broker.conf`. -- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name. - -| Name | Type | Description | -|---|---|---| -| pulsar_topics_count | Gauge | The number of Pulsar topics of the namespace owned by this broker. | -| pulsar_subscriptions_count | Gauge | The number of Pulsar subscriptions of the namespace served by this broker. | -| pulsar_producers_count | Gauge | The number of active producers of the namespace connected to this broker. | -| pulsar_consumers_count | Gauge | The number of active consumers of the namespace connected to this broker. | -| pulsar_rate_in | Gauge | The total message rate of the namespace coming into this broker (messages/second). | -| pulsar_rate_out | Gauge | The total message rate of the namespace going out from this broker (messages/second). | -| pulsar_throughput_in | Gauge | The total throughput of the namespace coming into this broker (bytes/second). | -| pulsar_throughput_out | Gauge | The total throughput of the namespace going out from this broker (bytes/second). | -| pulsar_storage_size | Gauge | The total storage size of the topics in this namespace owned by this broker (bytes). | -| pulsar_storage_backlog_size | Gauge | The total backlog size of the topics of this namespace owned by this broker (messages). | -| pulsar_storage_offloaded_size | Gauge | The total amount of the data in this namespace offloaded to the tiered storage (bytes). | -| pulsar_storage_write_rate | Gauge | The total message batches (entries) written to the storage for this namespace (message batches / second). | -| pulsar_storage_read_rate | Gauge | The total message batches (entries) read from the storage for this namespace (message batches / second). | -| pulsar_subscription_delayed | Gauge | The total number of message batches (entries) that are delayed for dispatching. <br>
| -| pulsar_storage_write_latency_le_* | Histogram | The entry rate of a namespace where the storage write latency is smaller than a given threshold.<br>
    Available thresholds:
    • pulsar_storage_write_latency_le_0_5: <= 0.5ms
    • pulsar_storage_write_latency_le_1: <= 1ms
    • pulsar_storage_write_latency_le_5: <= 5ms
    • pulsar_storage_write_latency_le_10: <= 10ms
    • pulsar_storage_write_latency_le_20: <= 20ms
    • pulsar_storage_write_latency_le_50: <= 50ms
    • pulsar_storage_write_latency_le_100: <= 100ms
    • pulsar_storage_write_latency_le_200: <= 200ms
    • pulsar_storage_write_latency_le_1000: <= 1s
    • pulsar_storage_write_latency_le_overflow: > 1s
    | -| pulsar_entry_size_le_* | Histogram | The entry rate of a namespace where the entry size is smaller than a given threshold.<br>
    Available thresholds:
    • pulsar_entry_size_le_128: <= 128 bytes
    • pulsar_entry_size_le_512: <= 512 bytes
    • pulsar_entry_size_le_1_kb: <= 1 KB
    • pulsar_entry_size_le_2_kb: <= 2 KB
    • pulsar_entry_size_le_4_kb: <= 4 KB
    • pulsar_entry_size_le_16_kb: <= 16 KB
    • pulsar_entry_size_le_100_kb: <= 100 KB
    • pulsar_entry_size_le_1_mb: <= 1 MB
    • pulsar_entry_size_le_overflow: > 1 MB
    | - -#### Replication metrics - -If a namespace is configured to be replicated among multiple Pulsar clusters, the corresponding replication metrics are also exposed when `replicationMetricsEnabled` is enabled. - -All the replication metrics are also labelled with `remoteCluster=${pulsar_remote_cluster}`. - -| Name | Type | Description | -|---|---|---| -| pulsar_replication_rate_in | Gauge | The total message rate of the namespace replicating from the remote cluster (messages/second). | -| pulsar_replication_rate_out | Gauge | The total message rate of the namespace replicating to the remote cluster (messages/second). | -| pulsar_replication_throughput_in | Gauge | The total throughput of the namespace replicating from the remote cluster (bytes/second). | -| pulsar_replication_throughput_out | Gauge | The total throughput of the namespace replicating to the remote cluster (bytes/second). | -| pulsar_replication_backlog | Gauge | The total backlog of the namespace replicating to the remote cluster (messages). | - -### Topic metrics - -> Topic metrics are only exposed when `exposeTopicLevelMetricsInPrometheus` is set to `true`. - -All the topic metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you configured in `broker.conf`. -- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name. -- *topic*: `topic=${pulsar_topic}`. `${pulsar_topic}` is the topic name. - -| Name | Type | Description | -|---|---|---| -| pulsar_subscriptions_count | Gauge | The number of Pulsar subscriptions of the topic served by this broker. | -| pulsar_producers_count | Gauge | The number of active producers of the topic connected to this broker. | -| pulsar_consumers_count | Gauge | The number of active consumers of the topic connected to this broker. | -| pulsar_rate_in | Gauge | The total message rate of the topic coming into this broker (messages/second). | -| pulsar_rate_out | Gauge | The total message rate of the topic going out from this broker (messages/second). | -| pulsar_throughput_in | Gauge | The total throughput of the topic coming into this broker (bytes/second). | -| pulsar_throughput_out | Gauge | The total throughput of the topic going out from this broker (bytes/second). | -| pulsar_storage_size | Gauge | The total storage size of this topic owned by this broker (bytes). | -| pulsar_storage_backlog_size | Gauge | The total backlog size of this topic owned by this broker (messages). | -| pulsar_storage_offloaded_size | Gauge | The total amount of the data in this topic offloaded to the tiered storage (bytes). | -| pulsar_storage_backlog_quota_limit | Gauge | The backlog quota limit of this topic (bytes). | -| pulsar_storage_write_rate | Gauge | The total message batches (entries) written to the storage for this topic (message batches / second). | -| pulsar_storage_read_rate | Gauge | The total message batches (entries) read from the storage for this topic (message batches / second). | -| pulsar_subscription_delayed | Gauge | The total number of message batches (entries) that are delayed for dispatching. | -| pulsar_storage_write_latency_le_* | Histogram | The entry rate of a topic where the storage write latency is smaller than a given threshold.<br>
    Available thresholds:
    • pulsar_storage_write_latency_le_0_5: <= 0.5ms
    • pulsar_storage_write_latency_le_1: <= 1ms
    • pulsar_storage_write_latency_le_5: <= 5ms
    • pulsar_storage_write_latency_le_10: <= 10ms
    • pulsar_storage_write_latency_le_20: <= 20ms
    • pulsar_storage_write_latency_le_50: <= 50ms
    • pulsar_storage_write_latency_le_100: <= 100ms
    • pulsar_storage_write_latency_le_200: <= 200ms
    • pulsar_storage_write_latency_le_1000: <= 1s
    • pulsar_storage_write_latency_le_overflow: > 1s
    | -| pulsar_entry_size_le_* | Histogram | The entry rate of a topic where the entry size is smaller than a given threshold.<br>
    Available thresholds:
    • pulsar_entry_size_le_128: <= 128 bytes
    • pulsar_entry_size_le_512: <= 512 bytes
    • pulsar_entry_size_le_1_kb: <= 1 KB
    • pulsar_entry_size_le_2_kb: <= 2 KB
    • pulsar_entry_size_le_4_kb: <= 4 KB
    • pulsar_entry_size_le_16_kb: <= 16 KB
    • pulsar_entry_size_le_100_kb: <= 100 KB
    • pulsar_entry_size_le_1_mb: <= 1 MB
    • pulsar_entry_size_le_overflow: > 1 MB
    | -| pulsar_in_bytes_total | Counter | The total number of bytes received for this topic | -| pulsar_in_messages_total | Counter | The total number of messages received for this topic | -| pulsar_out_bytes_total | Counter | The total number of bytes read from this topic | -| pulsar_out_messages_total | Counter | The total number of messages read from this topic | - -#### Replication metrics - -If a namespace that a topic belongs to is configured to be replicated among multiple Pulsar clusters, the corresponding replication metrics are also exposed when `replicationMetricsEnabled` is enabled. - -All the replication metrics are labelled with `remoteCluster=${pulsar_remote_cluster}`. - -| Name | Type | Description | -|---|---|---| -| pulsar_replication_rate_in | Gauge | The total message rate of the topic replicating from the remote cluster (messages/second). | -| pulsar_replication_rate_out | Gauge | The total message rate of the topic replicating to the remote cluster (messages/second). | -| pulsar_replication_throughput_in | Gauge | The total throughput of the topic replicating from the remote cluster (bytes/second). | -| pulsar_replication_throughput_out | Gauge | The total throughput of the topic replicating to the remote cluster (bytes/second). | -| pulsar_replication_backlog | Gauge | The total backlog of the topic replicating to the remote cluster (messages). | - -### ManagedLedgerCache metrics -All the ManagedLedgerCache metrics are labelled with the following labels: -- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file. - -| Name | Type | Description | -| --- | --- | --- | -| pulsar_ml_cache_evictions | Gauge | The number of cache evictions during the last minute. | -| pulsar_ml_cache_hits_rate | Gauge | The number of cache hits per second. | -| pulsar_ml_cache_hits_throughput | Gauge | The amount of data retrieved from the cache in bytes/s | -| pulsar_ml_cache_misses_rate | Gauge | The number of cache misses per second | -| pulsar_ml_cache_misses_throughput | Gauge | The amount of data requested that could not be served from the cache in bytes/s | -| pulsar_ml_cache_pool_active_allocations | Gauge | The number of currently active allocations in the direct arena | -| pulsar_ml_cache_pool_active_allocations_huge | Gauge | The number of currently active huge allocations in the direct arena | -| pulsar_ml_cache_pool_active_allocations_normal | Gauge | The number of currently active normal allocations in the direct arena | -| pulsar_ml_cache_pool_active_allocations_small | Gauge | The number of currently active small allocations in the direct arena | -| pulsar_ml_cache_pool_active_allocations_tiny | Gauge | The number of currently active tiny allocations in the direct arena | -| pulsar_ml_cache_pool_allocated | Gauge | The total allocated memory of chunk lists in the direct arena | -| pulsar_ml_cache_pool_used | Gauge | The total used memory of chunk lists in the direct arena | -| pulsar_ml_cache_used_size | Gauge | The size in bytes used to store the entry payloads | -| pulsar_ml_count | Gauge | The number of currently opened managed ledgers | - -### ManagedLedger metrics -All the managedLedger metrics are labelled with the following labels: -- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file. -- namespace: namespace=${pulsar_namespace}. ${pulsar_namespace} is the namespace name. -- quantile: quantile=${quantile}. <br>
The quantile label applies only to `Histogram` metrics, and represents the bucket threshold. - -| Name | Type | Description | -| --- | --- | --- | -| pulsar_ml_AddEntryBytesRate | Gauge | The bytes/s rate of messages added | -| pulsar_ml_AddEntryErrors | Gauge | The number of addEntry requests that failed | -| pulsar_ml_AddEntryLatencyBuckets | Histogram | The latency of adding a ledger entry with a given quantile (threshold), including time spent waiting in the queue on the broker side<br>
    Available quantiles:<br>
    • quantile="0.0_0.5" is AddEntryLatency between (0.0ms, 0.5ms]
    • quantile="0.5_1.0" is AddEntryLatency between (0.5ms, 1.0ms]
    • quantile="1.0_5.0" is AddEntryLatency between (1ms, 5ms]
    • quantile="5.0_10.0" is AddEntryLatency between (5ms, 10ms]
    • quantile="10.0_20.0" is AddEntryLatency between (10ms, 20ms]
    • quantile="20.0_50.0" is AddEntryLatency between (20ms, 50ms]
    • quantile="50.0_100.0" is AddEntryLatency between (50ms, 100ms]
    • quantile="100.0_200.0" is AddEntryLatency between (100ms, 200ms]
    • quantile="200.0_1000.0" is AddEntryLatency between (200ms, 1s]
    | -| pulsar_ml_AddEntryLatencyBuckets_OVERFLOW | Gauge | The number of times the AddEntryLatency is longer than 1 second | -| pulsar_ml_AddEntryMessagesRate | Gauge | The msg/s rate of messages added | -| pulsar_ml_AddEntrySucceed | Gauge | The number of addEntry requests that succeeded | -| pulsar_ml_EntrySizeBuckets | Histogram | The added entry size of a ledger with a given quantile.
    Available quantiles:<br>
    • quantile="0.0_128.0" is EntrySize between (0byte, 128byte]
    • quantile="128.0_512.0" is EntrySize between (128byte, 512byte]
    • quantile="512.0_1024.0" is EntrySize between (512byte, 1KB]
    • quantile="1024.0_2048.0" is EntrySize between (1KB, 2KB]
    • quantile="2048.0_4096.0" is EntrySize between (2KB, 4KB]
    • quantile="4096.0_16384.0" is EntrySize between (4KB, 16KB]
    • quantile="16384.0_102400.0" is EntrySize between (16KB, 100KB]
    • quantile="102400.0_1232896.0" is EntrySize between (100KB, 1MB]
    | -| pulsar_ml_EntrySizeBuckets_OVERFLOW | Gauge | The number of times the EntrySize is larger than 1 MB | -| pulsar_ml_LedgerSwitchLatencyBuckets | Histogram | The ledger switch latency with a given quantile.<br>
    Available quantiles:<br>
    • quantile="0.0_0.5" is EntrySize between (0ms, 0.5ms]
    • quantile="0.5_1.0" is EntrySize between (0.5ms, 1ms]
    • quantile="1.0_5.0" is EntrySize between (1ms, 5ms]
    • quantile="5.0_10.0" is EntrySize between (5ms, 10ms]
    • quantile="10.0_20.0" is EntrySize between (10ms, 20ms]
    • quantile="20.0_50.0" is EntrySize between (20ms, 50ms]
    • quantile="50.0_100.0" is EntrySize between (50ms, 100ms]
    • quantile="100.0_200.0" is EntrySize between (100ms, 200ms]
    • quantile="200.0_1000.0" is EntrySize between (200ms, 1000ms]
    | -| pulsar_ml_LedgerSwitchLatencyBuckets_OVERFLOW | Gauge | The number of times the ledger switch latency is longer than 1 second | -| pulsar_ml_LedgerAddEntryLatencyBuckets | Histogram | The latency for the bookie client to persist a ledger entry from the broker to the BookKeeper service, with a given quantile (threshold).<br>
    Available quantiles:<br>
    • quantile="0.0_0.5" is LedgerAddEntryLatency between (0.0ms, 0.5ms]
    • quantile="0.5_1.0" is LedgerAddEntryLatency between (0.5ms, 1.0ms]
    • quantile="1.0_5.0" is LedgerAddEntryLatency between (1ms, 5ms]
    • quantile="5.0_10.0" is LedgerAddEntryLatency between (5ms, 10ms]
    • quantile="10.0_20.0" is LedgerAddEntryLatency between (10ms, 20ms]
    • quantile="20.0_50.0" is LedgerAddEntryLatency between (20ms, 50ms]
    • quantile="50.0_100.0" is LedgerAddEntryLatency between (50ms, 100ms]
    • quantile="100.0_200.0" is LedgerAddEntryLatency between (100ms, 200ms]
    • quantile="200.0_1000.0" is LedgerAddEntryLatency between (200ms, 1s]
    | -| pulsar_ml_LedgerAddEntryLatencyBuckets_OVERFLOW | Gauge | The number of times the LedgerAddEntryLatency is longer than 1 second | -| pulsar_ml_MarkDeleteRate | Gauge | The rate of mark-delete ops/s | -| pulsar_ml_NumberOfMessagesInBacklog | Gauge | The number of backlog messages for all the consumers | -| pulsar_ml_ReadEntriesBytesRate | Gauge | The bytes/s rate of messages read | -| pulsar_ml_ReadEntriesErrors | Gauge | The number of readEntries requests that failed | -| pulsar_ml_ReadEntriesRate | Gauge | The msg/s rate of messages read | -| pulsar_ml_ReadEntriesSucceeded | Gauge | The number of readEntries requests that succeeded | -| pulsar_ml_StoredMessagesSize | Gauge | The total size of the messages in active ledgers (accounting for the multiple copies stored) | - -### Managed cursor acknowledgment state - -The acknowledgment state is persisted to the ledger first. When the acknowledgment state fails to be persisted to the ledger, it is persisted to ZooKeeper. To track the stats of acknowledgment, you can configure the metrics for the managed cursor. - -All the cursor acknowledgment state metrics are labelled with the following labels: - -- namespace: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name. - -- ledger_name: `ledger_name=${pulsar_ledger_name}`. `${pulsar_ledger_name}` is the ledger name. - -- cursor_name: `cursor_name=${pulsar_cursor_name}`. `${pulsar_cursor_name}` is the cursor name. - -Name |Type |Description -|---|---|--- -brk_ml_cursor_persistLedgerSucceed(namespace="", ledger_name="", cursor_name="")|Gauge|The number of acknowledgment states that are persisted to a ledger.| -brk_ml_cursor_persistLedgerErrors(namespace="", ledger_name="", cursor_name="")|Gauge|The number of ledger errors that occurred when acknowledgment states fail to be persisted to the ledger.| -brk_ml_cursor_persistZookeeperSucceed(namespace="", ledger_name="", cursor_name="")|Gauge|The number of acknowledgment states that are persisted to ZooKeeper. -brk_ml_cursor_persistZookeeperErrors(namespace="", ledger_name="", cursor_name="")|Gauge|The number of ledger errors that occurred when acknowledgment states fail to be persisted to ZooKeeper. -brk_ml_cursor_nonContiguousDeletedMessagesRange(namespace="", ledger_name="", cursor_name="")|Gauge|The number of non-contiguous deleted messages ranges. - -### LoadBalancing metrics -All the loadbalancing metrics are labelled with the following labels: -- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file. -- broker: broker=${broker}. ${broker} is the IP address of the broker. -- metric: metric="loadBalancing". - -| Name | Type | Description | -| --- | --- | --- | -| pulsar_lb_bandwidth_in_usage | Gauge | The broker inbound bandwidth usage (in percent). | -| pulsar_lb_bandwidth_out_usage | Gauge | The broker outbound bandwidth usage (in percent). | -| pulsar_lb_cpu_usage | Gauge | The broker CPU usage (in percent). | -| pulsar_lb_directMemory_usage | Gauge | The broker process direct memory usage (in percent). | -| pulsar_lb_memory_usage | Gauge | The broker process memory usage (in percent). | - -#### BundleUnloading metrics -All the bundleUnloading metrics are labelled with the following labels: -- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file. -- metric: metric="bundleUnloading". <br>
- -| Name | Type | Description | -| --- | --- | --- | -| pulsar_lb_unload_broker_count | Counter | Unload broker count in this bundle unloading | -| pulsar_lb_unload_bundle_count | Counter | Bundle unload count in this bundle unloading | - -#### BundleSplit metrics -All the bundleSplit metrics are labelled with the following labels: -- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file. -- metric: metric="bundlesSplit". - -| Name | Type | Description | -| --- | --- | --- | -| pulsar_lb_bundles_split_count | Counter | The total count of bundle splits in this leader broker | - -### Subscription metrics - -> Subscription metrics are only exposed when `exposeTopicLevelMetricsInPrometheus` is set to `true`. - -All the subscription metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file. -- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name. -- *topic*: `topic=${pulsar_topic}`. `${pulsar_topic}` is the topic name. -- *subscription*: `subscription=${subscription}`. `${subscription}` is the topic subscription name. - -| Name | Type | Description | -|---|---|---| -| pulsar_subscription_back_log | Gauge | The total backlog of a subscription (messages). | -| pulsar_subscription_delayed | Gauge | The total number of messages that are delayed to be dispatched for a subscription (messages). | -| pulsar_subscription_msg_rate_redeliver | Gauge | The total message rate for messages being redelivered (messages/second). | -| pulsar_subscription_unacked_messages | Gauge | The total number of unacknowledged messages of a subscription (messages). | -| pulsar_subscription_blocked_on_unacked_messages | Gauge | Indicates whether a subscription is blocked on unacknowledged messages or not.<br>
    • 1 means the subscription is blocked waiting for unacknowledged messages to be acked.<br>
    • 0 means the subscription is not blocked waiting for unacknowledged messages to be acked.<br>
    | -| pulsar_subscription_msg_rate_out | Gauge | The total message dispatch rate for a subscription (messages/second). | -| pulsar_subscription_msg_throughput_out | Gauge | The total message dispatch throughput for a subscription (bytes/second). | - -### Consumer metrics - -> Consumer metrics are only exposed when both `exposeTopicLevelMetricsInPrometheus` and `exposeConsumerLevelMetricsInPrometheus` are set to `true`. - -All the consumer metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file. - -- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name. -- *topic*: `topic=${pulsar_topic}`. `${pulsar_topic}` is the topic name. -- *subscription*: `subscription=${subscription}`. `${subscription}` is the topic subscription name. -- *consumer_name*: `consumer_name=${consumer_name}`. `${consumer_name}` is the topic consumer name. -- *consumer_id*: `consumer_id=${consumer_id}`. `${consumer_id}` is the topic consumer id. - -| Name | Type | Description | -|---|---|---| -| pulsar_consumer_msg_rate_redeliver | Gauge | The total message rate for messages being redelivered (messages/second). | -| pulsar_consumer_unacked_messages | Gauge | The total number of unacknowledged messages of a consumer (messages). | -| pulsar_consumer_blocked_on_unacked_messages | Gauge | Indicates whether a consumer is blocked on unacknowledged messages or not.<br>
    • 1 means the consumer is blocked waiting for unacknowledged messages to be acked.<br>
    • 0 means the consumer is not blocked waiting for unacknowledged messages to be acked.<br>
    | -| pulsar_consumer_msg_rate_out | Gauge | The total message dispatch rate for a consumer (messages/second). | -| pulsar_consumer_msg_throughput_out | Gauge | The total message dispatch throughput for a consumer (bytes/second). | -| pulsar_consumer_available_permits | Gauge | The available permits for a consumer. | - -### Managed ledger bookie client metrics - -All the managed ledger bookie client metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file. - -| Name | Type | Description | -| --- | --- | --- | -| pulsar_managedLedger_client_bookkeeper_ml_scheduler_completed_tasks_* | Gauge | The number of tasks the scheduler executor has completed.<br>
    The number of these metrics is determined by the scheduler executor thread number, which is configured by `managedLedgerNumSchedulerThreads` in `broker.conf`.<br>
    | -| pulsar_managedLedger_client_bookkeeper_ml_scheduler_queue_* | Gauge | The number of tasks queued in the scheduler executor's queue.
    The number of these metrics is determined by the scheduler executor thread number, which is configured by `managedLedgerNumSchedulerThreads` in `broker.conf`.<br>
    | -| pulsar_managedLedger_client_bookkeeper_ml_scheduler_total_tasks_* | Gauge | The total number of tasks the scheduler executor received.
    The number of these metrics is determined by the scheduler executor thread number, which is configured by `managedLedgerNumSchedulerThreads` in `broker.conf`.<br>
    | -| pulsar_managedLedger_client_bookkeeper_ml_scheduler_task_execution | Summary | The scheduler task execution latency calculated in milliseconds. | -| pulsar_managedLedger_client_bookkeeper_ml_scheduler_task_queued | Summary | The scheduler task queued latency calculated in milliseconds. | - -### Token metrics - -All the token metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file. - -| Name | Type | Description | -|---|---|---| -| pulsar_expired_token_count | Counter | The number of expired tokens in Pulsar. | -| pulsar_expiring_token_minutes | Histogram | The remaining time of expiring tokens in minutes. | - -### Authentication metrics - -All the authentication metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file. -- *provider_name*: `provider_name=${provider_name}`. `${provider_name}` is the class name of the authentication provider. -- *auth_method*: `auth_method=${auth_method}`. `${auth_method}` is the authentication method of the authentication provider. -- *reason*: `reason=${reason}`. `${reason}` is the reason for the failed authentication operation. (This label is only for `pulsar_authentication_failures_count`.) - -| Name | Type | Description | -|---|---|---| -| pulsar_authentication_success_count| Counter | The number of successful authentication operations. | -| pulsar_authentication_failures_count | Counter | The number of failed authentication operations. | - -### Connection metrics - -All the connection metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file. -- *broker*: `broker=${advertised_address}`. `${advertised_address}` is the advertised address of the broker. -- *metric*: `metric=${metric}`. `${metric}` is the connection metric collective name. - -| Name | Type | Description | -|---|---|---| -| pulsar_active_connections| Gauge | The number of active connections. | -| pulsar_connection_created_total_count | Gauge | The total number of connections. | -| pulsar_connection_create_success_count | Gauge | The number of successfully created connections. | -| pulsar_connection_create_fail_count | Gauge | The number of failed connections. | -| pulsar_connection_closed_total_count | Gauge | The total number of closed connections. | -| pulsar_broker_throttled_connections | Gauge | The number of throttled connections. | -| pulsar_broker_throttled_connections_global_limit | Gauge | The number of connections throttled because the global connection limit is reached. | - -### Jetty metrics - -> For a functions-worker running separately from brokers, its Jetty metrics are only exposed when `includeStandardPrometheusMetrics` is set to `true`. - -All the jetty metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file. - -| Name | Type | Description | -|---|---|---| -| jetty_requests_total | Counter | Number of requests. | -| jetty_requests_active | Gauge | Number of requests currently active. | -| jetty_requests_active_max | Gauge | Maximum number of requests that have been active at once. | -| jetty_request_time_max_seconds | Gauge | Maximum time spent handling requests. <br>
| -| jetty_request_time_seconds_total | Counter | Total time spent in all request handling. | -| jetty_dispatched_total | Counter | Number of dispatches. | -| jetty_dispatched_active | Gauge | Number of dispatches currently active. | -| jetty_dispatched_active_max | Gauge | Maximum number of active dispatches being handled. | -| jetty_dispatched_time_max | Gauge | Maximum time spent in dispatch handling. | -| jetty_dispatched_time_seconds_total | Counter | Total time spent in dispatch handling. | -| jetty_async_requests_total | Counter | Total number of async requests. | -| jetty_async_requests_waiting | Gauge | Currently waiting async requests. | -| jetty_async_requests_waiting_max | Gauge | Maximum number of waiting async requests. | -| jetty_async_dispatches_total | Counter | Number of requests that have been asynchronously dispatched. | -| jetty_expires_total | Counter | Number of async requests that have expired. | -| jetty_responses_total | Counter | Number of responses, labeled by status code. The `code` label can be "1xx", "2xx", "3xx", "4xx", or "5xx". | -| jetty_stats_seconds | Gauge | Time in seconds stats have been collected for. | -| jetty_responses_bytes_total | Counter | Total number of bytes across all responses. | - -## Pulsar Functions - -All the Pulsar Functions metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file. -- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name. - -| Name | Type | Description | -|---|---|---| -| pulsar_function_processed_successfully_total | Counter | The total number of messages processed successfully. | -| pulsar_function_processed_successfully_total_1min | Counter | The total number of messages processed successfully in the last 1 minute. | -| pulsar_function_system_exceptions_total | Counter | The total number of system exceptions. | -| pulsar_function_system_exceptions_total_1min | Counter | The total number of system exceptions in the last 1 minute. | -| pulsar_function_user_exceptions_total | Counter | The total number of user exceptions. | -| pulsar_function_user_exceptions_total_1min | Counter | The total number of user exceptions in the last 1 minute. | -| pulsar_function_process_latency_ms | Summary | The process latency in milliseconds. | -| pulsar_function_process_latency_ms_1min | Summary | The process latency in milliseconds in the last 1 minute. | -| pulsar_function_last_invocation | Gauge | The timestamp of the last invocation of the function. | -| pulsar_function_received_total | Counter | The total number of messages received from source. | -| pulsar_function_received_total_1min | Counter | The total number of messages received from source in the last 1 minute. | -pulsar_function_user_metric_ | Summary|The user-defined metrics. - -## Connectors - -All the Pulsar connector metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file. -- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name. - -Connector metrics contain **source** metrics and **sink** metrics. - -- **Source** metrics - - | Name | Type | Description | - |---|---|---| - pulsar_source_written_total|Counter|The total number of records written to a Pulsar topic. <br>
pulsar_source_written_total_1min|Counter|The total number of records written to a Pulsar topic in the last 1 minute. - pulsar_source_received_total|Counter|The total number of records received from source. - pulsar_source_received_total_1min|Counter|The total number of records received from source in the last 1 minute. - pulsar_source_last_invocation|Gauge|The timestamp of the last invocation of the source. - pulsar_source_source_exception|Gauge|The exception from a source. - pulsar_source_source_exceptions_total|Counter|The total number of source exceptions. - pulsar_source_source_exceptions_total_1min |Counter|The total number of source exceptions in the last 1 minute. - pulsar_source_system_exception|Gauge|The exception from system code. - pulsar_source_system_exceptions_total|Counter|The total number of system exceptions. - pulsar_source_system_exceptions_total_1min|Counter|The total number of system exceptions in the last 1 minute. - pulsar_source_user_metric_ | Summary|The user-defined metrics. - -- **Sink** metrics - - | Name | Type | Description | - |---|---|---| - pulsar_sink_written_total|Counter| The total number of records processed by a sink. - pulsar_sink_written_total_1min|Counter| The total number of records processed by a sink in the last 1 minute. - pulsar_sink_received_total_1min|Counter| The total number of messages that a sink has received from Pulsar topics in the last 1 minute. - pulsar_sink_received_total|Counter| The total number of records that a sink has received from Pulsar topics. - pulsar_sink_last_invocation|Gauge|The timestamp of the last invocation of the sink. - pulsar_sink_sink_exception|Gauge|The exception from a sink. - pulsar_sink_sink_exceptions_total|Counter|The total number of sink exceptions. - pulsar_sink_sink_exceptions_total_1min |Counter|The total number of sink exceptions in the last 1 minute. - pulsar_sink_system_exception|Gauge|The exception from system code. - pulsar_sink_system_exceptions_total|Counter|The total number of system exceptions. - pulsar_sink_system_exceptions_total_1min|Counter|The total number of system exceptions in the last 1 minute. - pulsar_sink_user_metric_ | Summary|The user-defined metrics. - -## Proxy - -All the proxy metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file. -- *kubernetes_pod_name*: `kubernetes_pod_name=${kubernetes_pod_name}`. `${kubernetes_pod_name}` is the Kubernetes pod name. - -| Name | Type | Description | -|---|---|---| -| pulsar_proxy_active_connections | Gauge | Number of connections currently active in the proxy. | -| pulsar_proxy_new_connections | Counter | Counter of connections being opened in the proxy. | -| pulsar_proxy_rejected_connections | Counter | Counter for connections rejected due to throttling. | -| pulsar_proxy_binary_ops | Counter | Counter of proxy operations. | -| pulsar_proxy_binary_bytes | Counter | Counter of proxy bytes. | - -## Pulsar SQL Worker - -| Name | Type | Description | -|---|---|---| -| split_bytes_read | Counter | Number of bytes read from BookKeeper. | -| split_num_messages_deserialized | Counter | Number of messages deserialized. | -| split_num_record_deserialized | Counter | Number of records deserialized. | -| split_bytes_read_per_query | Summary | Total number of bytes read per query. | -| split_entry_deserialize_time | Summary | Time spent on deserializing entries. <br>
| -| split_entry_deserialize_time_per_query | Summary | Time spent on deserializing entries per query. | -| split_entry_queue_dequeue_wait_time | Summary | Time spent waiting to get an entry from the entry queue because it is empty. | -| split_entry_queue_dequeue_wait_time_per_query | Summary | Total time spent waiting to get an entry from the entry queue per query. | -| split_message_queue_dequeue_wait_time_per_query | Summary | Time spent waiting to dequeue from the message queue because it is empty, per query. | -| split_message_queue_enqueue_wait_time | Summary | Time spent waiting for message queue enqueue because the message queue is full. | -| split_message_queue_enqueue_wait_time_per_query | Summary | Time spent waiting for message queue enqueue because the message queue is full, per query. | -| split_num_entries_per_batch | Summary | Number of entries per batch. | -| split_num_entries_per_query | Summary | Number of entries per query. | -| split_num_messages_deserialized_per_entry | Summary | Number of messages deserialized per entry. | -| split_num_messages_deserialized_per_query | Summary | Number of messages deserialized per query. | -| split_read_attempts | Summary | Number of read attempts (fail if queues are full). | -| split_read_attempts_per_query | Summary | Number of read attempts per query. | -| split_read_latency_per_batch | Summary | Latency of reads per batch. | -| split_read_latency_per_query | Summary | Total read latency per query. | -| split_record_deserialize_time | Summary | Time spent on deserializing message to record. For example, Avro, JSON, and so on. | -| split_record_deserialize_time_per_query | Summary | Time spent on deserializing message to record per query. | -| split_total_execution_time | Summary | The total execution time. | - -## Pulsar transaction - -All the transaction metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file. -- *coordinator_id*: `coordinator_id=${coordinator_id}`. `${coordinator_id}` is the coordinator id. - -| Name | Type | Description | -|---|---|---| -| pulsar_txn_active_count | Gauge | Number of active transactions. | -| pulsar_txn_created_count | Counter | Number of created transactions. | -| pulsar_txn_committed_count | Counter | Number of committed transactions. | -| pulsar_txn_aborted_count | Counter | Number of aborted transactions of this coordinator. | -| pulsar_txn_timeout_count | Counter | Number of timed-out transactions. | -| pulsar_txn_append_log_count | Counter | Number of appended transaction logs. | -| pulsar_txn_execution_latency_le_* | Histogram | Transaction execution latency.<br>
    Available latencies are as follows:<br>
    • latency="10" is TransactionExecutionLatency between (0ms, 10ms]
    • latency="20" is TransactionExecutionLatency between (10ms, 20ms]
    • latency="50" is TransactionExecutionLatency between (20ms, 50ms]
    • latency="100" is TransactionExecutionLatency between (50ms, 100ms]
    • latency="500" is TransactionExecutionLatency between (100ms, 500ms]
    • latency="1000" is TransactionExecutionLatency between (500ms, 1000ms]
    • latency="5000" is TransactionExecutionLatency between (1s, 5s]
    • latency="15000" is TransactionExecutionLatency between (5s, 15s]
    • latency="30000" is TransactionExecutionLatency between (15s, 30s]
    • latency="60000" is TransactionExecutionLatency between (30s, 60s]
    • latency="300000" is TransactionExecutionLatency between (1m,5m]
    • latency="1500000" is TransactionExecutionLatency between (5m,15m]
    • latency="3000000" is TransactionExecutionLatency between (15m,30m]
    • latency="overflow" is TransactionExecutionLatency between (30m,∞]
    | diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/reference-pulsar-admin.md b/site2/website/versioned_docs/version-2.8.0-deprecated/reference-pulsar-admin.md deleted file mode 100644 index e41afd6f11bbf0..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/reference-pulsar-admin.md +++ /dev/null @@ -1,3335 +0,0 @@ ---- -id: reference-pulsar-admin -title: Pulsar admin CLI -sidebar_label: "Pulsar Admin CLI" -original_id: reference-pulsar-admin ---- - -> **Important** -> -> This page is deprecated and not updated anymore. For the latest and complete information about `pulsar-admin`, including commands, flags, descriptions, and more, see [pulsar-admin doc](https://pulsar.apache.org/tools/pulsar-admin/). - -The `pulsar-admin` tool enables you to manage Pulsar installations, including clusters, brokers, namespaces, tenants, and more. - -Usage - -```bash - -$ pulsar-admin command - -``` - -Commands -* `broker-stats` -* `brokers` -* `clusters` -* `functions` -* `functions-worker` -* `namespaces` -* `ns-isolation-policy` -* `sources` - - For more information, see [here](io-cli.md#sources) -* `sinks` - - For more information, see [here](io-cli.md#sinks) -* `topics` -* `tenants` -* `resource-quotas` -* `schemas` - -## `broker-stats` - -Operations to collect broker statistics - -```bash - -$ pulsar-admin broker-stats subcommand - -``` - -Subcommands -* `allocator-stats` -* `topics(destinations)` -* `mbeans` -* `monitoring-metrics` -* `load-report` - - -### `allocator-stats` - -Dump allocator stats - -Usage - -```bash - -$ pulsar-admin broker-stats allocator-stats allocator-name - -``` - -### `topics(destinations)` - -Dump topic stats - -Usage - -```bash - -$ pulsar-admin broker-stats topics options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-i`, `--indent`|Indent JSON output|false| - -### `mbeans` - -Dump Mbean stats - -Usage - -```bash - -$ pulsar-admin broker-stats mbeans options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-i`, `--indent`|Indent JSON output|false| - - -### `monitoring-metrics` - -Dump metrics for monitoring - -Usage - -```bash - -$ pulsar-admin broker-stats monitoring-metrics options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-i`, `--indent`|Indent JSON output|false| - - -### `load-report` - -Dump broker load-report - -Usage - -```bash - -$ pulsar-admin broker-stats load-report - -``` - -## `brokers` - -Operations about brokers - -```bash - -$ pulsar-admin brokers subcommand - -``` - -Subcommands -* `list` -* `namespaces` -* `update-dynamic-config` -* `list-dynamic-config` -* `get-all-dynamic-config` -* `get-internal-config` -* `get-runtime-config` -* `healthcheck` - -### `list` -List active brokers of the cluster - -Usage - -```bash - -$ pulsar-admin brokers list cluster-name - -``` - -### `leader-broker` -Get the information of the leader broker - -Usage - -```bash - -$ pulsar-admin brokers leader-broker - -``` - -### `namespaces` -List namespaces owned by the broker - -Usage - -```bash - -$ pulsar-admin brokers namespaces cluster-name options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--url`|The URL for the broker|| - - -### `update-dynamic-config` -Update a broker's dynamic service configuration - -Usage - -```bash - -$ pulsar-admin brokers update-dynamic-config options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--config`|Service configuration parameter name|| -|`--value`|Value for the configuration parameter value 
specified using the `--config` flag|| - - -### `list-dynamic-config` -Get list of updatable configuration name - -Usage - -```bash - -$ pulsar-admin brokers list-dynamic-config - -``` - -### `delete-dynamic-config` -Delete dynamic-serviceConfiguration of broker - -Usage - -```bash - -$ pulsar-admin brokers delete-dynamic-config options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--config`|Service configuration parameter name|| - - -### `get-all-dynamic-config` -Get all overridden dynamic-configuration values - -Usage - -```bash - -$ pulsar-admin brokers get-all-dynamic-config - -``` - -### `get-internal-config` -Get internal configuration information - -Usage - -```bash - -$ pulsar-admin brokers get-internal-config - -``` - -### `get-runtime-config` -Get runtime configuration values - -Usage - -```bash - -$ pulsar-admin brokers get-runtime-config - -``` - -### `healthcheck` -Run a health check against the broker - -Usage - -```bash - -$ pulsar-admin brokers healthcheck - -``` - -## `clusters` -Operations about clusters - -Usage - -```bash - -$ pulsar-admin clusters subcommand - -``` - -Subcommands -* `get` -* `create` -* `update` -* `delete` -* `list` -* `update-peer-clusters` -* `get-peer-clusters` -* `get-failure-domain` -* `create-failure-domain` -* `update-failure-domain` -* `delete-failure-domain` -* `list-failure-domains` - - -### `get` -Get the configuration data for the specified cluster - -Usage - -```bash - -$ pulsar-admin clusters get cluster-name - -``` - -### `create` -Provisions a new cluster. This operation requires Pulsar super-user privileges. - -Usage - -```bash - -$ pulsar-admin clusters create cluster-name options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--broker-url`|The URL for the broker service.|| -|`--broker-url-secure`|The broker service URL for a secure connection|| -|`--url`|service-url|| -|`--url-secure`|service-url for secure connection|| - - -### `update` -Update the configuration for a cluster - -Usage - -```bash - -$ pulsar-admin clusters update cluster-name options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--broker-url`|The URL for the broker service.|| -|`--broker-url-secure`|The broker service URL for a secure connection|| -|`--url`|service-url|| -|`--url-secure`|service-url for secure connection|| - - -### `delete` -Deletes an existing cluster - -Usage - -```bash - -$ pulsar-admin clusters delete cluster-name - -``` - -### `list` -List the existing clusters - -Usage - -```bash - -$ pulsar-admin clusters list - -``` - -### `update-peer-clusters` -Update peer cluster names - -Usage - -```bash - -$ pulsar-admin clusters update-peer-clusters cluster-name options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--peer-clusters`|Comma separated peer cluster names (Pass empty string "" to delete list)|| - -### `get-peer-clusters` -Get list of peer clusters - -Usage - -```bash - -$ pulsar-admin clusters get-peer-clusters - -``` - -### `get-failure-domain` -Get the configuration brokers of a failure domain - -Usage - -```bash - -$ pulsar-admin clusters get-failure-domain cluster-name options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster|| - -### `create-failure-domain` -Create a new failure domain for a cluster (updates it if already created) - -Usage - -```bash - -$ pulsar-admin clusters create-failure-domain cluster-name options - -``` - -Options - 
-|Flag|Description|Default| -|---|---|---| -|`--broker-list`|Comma separated broker list|| -|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster|| - -### `update-failure-domain` -Update failure domain for a cluster (creates a new one if not exist) - -Usage - -```bash - -$ pulsar-admin clusters update-failure-domain cluster-name options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--broker-list`|Comma separated broker list|| -|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster|| - -### `delete-failure-domain` -Delete an existing failure domain - -Usage - -```bash - -$ pulsar-admin clusters delete-failure-domain cluster-name options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster|| - -### `list-failure-domains` -List the existing failure domains for a cluster - -Usage - -```bash - -$ pulsar-admin clusters list-failure-domains cluster-name - -``` - -## `functions` - -A command-line interface for Pulsar Functions - -Usage - -```bash - -$ pulsar-admin functions subcommand - -``` - -Subcommands -* `localrun` -* `create` -* `delete` -* `update` -* `get` -* `restart` -* `stop` -* `start` -* `status` -* `stats` -* `list` -* `querystate` -* `putstate` -* `trigger` - - -### `localrun` -Run the Pulsar Function locally (rather than deploying it to the Pulsar cluster) - - -Usage - -```bash - -$ pulsar-admin functions localrun options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--cpu`|The cpu in cores that need to be allocated per function instance(applicable only to docker runtime)|| -|`--ram`|The ram in bytes that need to be allocated per function instance(applicable only to process/docker runtime)|| -|`--disk`|The disk in bytes that need to be allocated per function instance(applicable only to docker runtime)|| -|`--auto-ack`|Whether or not the framework will automatically acknowledge messages|| -|`--subs-name`|Pulsar source subscription name if user wants a specific subscription-name for input-topic consumer|| -|`--broker-service-url `|The URL of the Pulsar broker|| -|`--classname`|The function's class name|| -|`--custom-serde-inputs`|The map of input topics to SerDe class names (as a JSON string)|| -|`--custom-schema-inputs`|The map of input topics to Schema class names (as a JSON string)|| -|`--client-auth-params`|Client authentication param|| -|`--client-auth-plugin`|Client authentication plugin using which function-process can connect to broker|| -|`--function-config-file`|The path to a YAML config file specifying the function's configuration|| -|`--hostname-verification-enabled`|Enable hostname verification|false| -|`--instance-id-offset`|Start the instanceIds from this offset|0| -|`--inputs`|The function's input topic or topics (multiple topics can be specified as a comma-separated list)|| -|`--log-topic`|The topic to which the function's logs are produced|| -|`--jar`|Path to the jar file for the function (if the function is written in Java). 
It also supports url-path [http/https/file (file protocol assumes that file already exists on worker host)] from which worker can download the package.|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--output`|The function's output topic (If none is specified, no output is written)|| -|`--output-serde-classname`|The SerDe class to be used for messages output by the function|| -|`--parallelism`|The function’s parallelism factor, i.e. the number of instances of the function to run|1| -|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the function. Possible Values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]|ATLEAST_ONCE| -|`--py`|Path to the main Python file/Python Wheel file for the function (if the function is written in Python)|| -|`--schema-type`|The builtin schema type or custom schema class name to be used for messages output by the function|| -|`--sliding-interval-count`|The number of messages after which the window slides|| -|`--sliding-interval-duration-ms`|The time duration after which the window slides|| -|`--state-storage-service-url`|The URL for the state storage service. By default, it is set to the service URL of Apache BookKeeper. This service URL must be set manually when the Pulsar Function runs locally.|| -|`--tenant`|The function’s tenant|| -|`--topics-pattern`|The topic pattern to consume from a list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add the SerDe class name for a pattern in --custom-serde-inputs (supported for Java functions only)|| -|`--user-config`|User-defined config key/values|| -|`--window-length-count`|The number of messages per window|| -|`--window-length-duration-ms`|The time duration of the window in milliseconds|| -|`--dead-letter-topic`|The topic where all messages which could not be processed successfully are sent|| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--max-message-retries`|The number of times to try processing a message before giving up|| -|`--retain-ordering`|Function consumes and processes messages in order|| -|`--retain-key-ordering`|Function consumes and processes messages in key order|| -|`--timeout-ms`|The message timeout in milliseconds|| -|`--tls-allow-insecure`|Allow insecure TLS connection|false| -|`--tls-trust-cert-path`|The TLS trust cert file path|| -|`--use-tls`|Use TLS connection|false| -|`--producer-config`|The custom producer configuration (as a JSON string)|| - - -### `create` -Create a Pulsar Function in cluster mode (i.e. 
deploy it on a Pulsar cluster) - -Usage - -``` - -$ pulsar-admin functions create options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--cpu`|The cpu in cores that need to be allocated per function instance(applicable only to docker runtime)|| -|`--ram`|The ram in bytes that need to be allocated per function instance(applicable only to process/docker runtime)|| -|`--disk`|The disk in bytes that need to be allocated per function instance(applicable only to docker runtime)|| -|`--auto-ack`|Whether or not the framework will automatically acknowledge messages|| -|`--subs-name`|Pulsar source subscription name if user wants a specific subscription-name for input-topic consumer|| -|`--classname`|The function's class name|| -|`--custom-serde-inputs`|The map of input topics to SerDe class names (as a JSON string)|| -|`--custom-schema-inputs`|The map of input topics to Schema class names (as a JSON string)|| -|`--function-config-file`|The path to a YAML config file specifying the function's configuration|| -|`--inputs`|The function's input topic or topics (multiple topics can be specified as a comma-separated list)|| -|`--log-topic`|The topic to which the function's logs are produced|| -|`--jar`|Path to the jar file for the function (if the function is written in Java). It also supports url-path [http/https/file (file protocol assumes that file already exists on worker host)] from which worker can download the package.|| -|`--name`|The function's name|| -|`--namespace`|The function’s namespace|| -|`--output`|The function's output topic (If none is specified, no output is written)|| -|`--output-serde-classname`|The SerDe class to be used for messages output by the function|| -|`--parallelism`|The function’s parallelism factor, i.e. the number of instances of the function to run|1| -|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the function. Possible Values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]|ATLEAST_ONCE| -|`--py`|Path to the main Python file/Python Wheel file for the function (if the function is written in Python)|| -|`--schema-type`|The builtin schema type or custom schema class name to be used for messages output by the function|| -|`--sliding-interval-count`|The number of messages after which the window slides|| -|`--sliding-interval-duration-ms`|The time duration after which the window slides|| -|`--tenant`|The function’s tenant|| -|`--topics-pattern`|The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. 
Add SerDe class name for a pattern in --custom-serde-inputs (supported for java fun only)|| -|`--user-config`|User-defined config key/values|| -|`--window-length-count`|The number of messages per window|| -|`--window-length-duration-ms`|The time duration of the window in milliseconds|| -|`--dead-letter-topic`|The topic where all messages which could not be processed|| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--max-message-retries`|How many times should we try to process a message before giving up|| -|`--retain-ordering`|Function consumes and processes messages in order|| -|`--retain-key-ordering`|Function consumes and processes messages in key order|| -|`--timeout-ms`|The message timeout in milliseconds|| -|`--producer-config`| The custom producer configuration (as a JSON string) | | - - -### `delete` -Delete a Pulsar Function that's running on a Pulsar cluster - -Usage - -```bash - -$ pulsar-admin functions delete options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `update` -Update a Pulsar Function that's been deployed to a Pulsar cluster - -Usage - -```bash - -$ pulsar-admin functions update options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--cpu`|The cpu in cores that need to be allocated per function instance(applicable only to docker runtime)|| -|`--ram`|The ram in bytes that need to be allocated per function instance(applicable only to process/docker runtime)|| -|`--disk`|The disk in bytes that need to be allocated per function instance(applicable only to docker runtime)|| -|`--auto-ack`|Whether or not the framework will automatically acknowledge messages|| -|`--subs-name`|Pulsar source subscription name if user wants a specific subscription-name for input-topic consumer|| -|`--classname`|The function's class name|| -|`--custom-serde-inputs`|The map of input topics to SerDe class names (as a JSON string)|| -|`--custom-schema-inputs`|The map of input topics to Schema class names (as a JSON string)|| -|`--function-config-file`|The path to a YAML config file specifying the function's configuration|| -|`--inputs`|The function's input topic or topics (multiple topics can be specified as a comma-separated list)|| -|`--log-topic`|The topic to which the function's logs are produced|| -|`--jar`|Path to the jar file for the function (if the function is written in Java). It also supports url-path [http/https/file (file protocol assumes that file already exists on worker host)] from which worker can download the package.|| -|`--name`|The function's name|| -|`--namespace`|The function’s namespace|| -|`--output`|The function's output topic (If none is specified, no output is written)|| -|`--output-serde-classname`|The SerDe class to be used for messages output by the function|| -|`--parallelism`|The function’s parallelism factor, i.e. the number of instances of the function to run|1| -|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the function. 
Possible Values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]|ATLEAST_ONCE| -|`--py`|Path to the main Python file/Python Wheel file for the function (if the function is written in Python)|| -|`--schema-type`|The builtin schema type or custom schema class name to be used for messages output by the function|| -|`--sliding-interval-count`|The number of messages after which the window slides|| -|`--sliding-interval-duration-ms`|The time duration after which the window slides|| -|`--tenant`|The function’s tenant|| -|`--topics-pattern`|The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (supported for java fun only)|| -|`--user-config`|User-defined config key/values|| -|`--window-length-count`|The number of messages per window|| -|`--window-length-duration-ms`|The time duration of the window in milliseconds|| -|`--dead-letter-topic`|The topic where all messages which could not be processed|| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--max-message-retries`|How many times should we try to process a message before giving up|| -|`--retain-ordering`|Function consumes and processes messages in order|| -|`--retain-key-ordering`|Function consumes and processes messages in key order|| -|`--timeout-ms`|The message timeout in milliseconds|| -|`--producer-config`| The custom producer configuration (as a JSON string) | | - - -### `get` -Fetch information about a Pulsar Function - -Usage - -```bash - -$ pulsar-admin functions get options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `restart` -Restart function instance - -Usage - -```bash - -$ pulsar-admin functions restart options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--instance-id`|The function instanceId (restart all instances if instance-id is not provided)|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `stop` -Stops function instance - -Usage - -```bash - -$ pulsar-admin functions stop options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--instance-id`|The function instanceId (stop all instances if instance-id is not provided)|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `start` -Starts a stopped function instance - -Usage - -```bash - -$ pulsar-admin functions start options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--instance-id`|The function instanceId (start all instances if instance-id is not provided)|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `status` -Check the current status of a Pulsar Function - -Usage - -```bash - -$ pulsar-admin functions status options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--instance-id`|The function instanceId (Get-status 
of all instances if instance-id is not provided)|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `stats` -Get the current stats of a Pulsar Function - -Usage - -```bash - -$ pulsar-admin functions stats options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--instance-id`|The function instanceId (Get-stats of all instances if instance-id is not provided)|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - -### `list` -List all of the Pulsar Functions running under a specific tenant and namespace - -Usage - -```bash - -$ pulsar-admin functions list options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `querystate` -Fetch the current state associated with a Pulsar Function running in cluster mode - -Usage - -```bash - -$ pulsar-admin functions querystate options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`-k`, `--key`|The key for the state you want to fetch|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| -|`-w`, `--watch`|Watch for changes in the value associated with a key for a Pulsar Function|false| - -### `putstate` -Put a key/value pair to the state associated with a Pulsar Function - -Usage - -```bash - -$ pulsar-admin functions putstate options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the Pulsar Function|| -|`--name`|The name of a Pulsar Function|| -|`--namespace`|The namespace of a Pulsar Function|| -|`--tenant`|The tenant of a Pulsar Function|| -|`-s`, `--state`|The FunctionState that needs to be put|| - -### `trigger` -Triggers the specified Pulsar Function with a supplied value - -Usage - -```bash - -$ pulsar-admin functions trigger options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| -|`--topic`|The specific topic name that the function consumes from that you want to inject the data to|| -|`--trigger-file`|The path to the file that contains the data with which you'd like to trigger the function|| -|`--trigger-value`|The value with which you want to trigger the function|| - - -## `functions-worker` -Operations to collect function-worker statistics - -```bash - -$ pulsar-admin functions-worker subcommand - -``` - -Subcommands - -* `function-stats` -* `get-cluster` -* `get-cluster-leader` -* `get-function-assignments` -* `monitoring-metrics` - -### `function-stats` - -Dump all functions stats running on this broker - -Usage - -```bash - -$ pulsar-admin functions-worker function-stats - -``` - -### `get-cluster` - -Get all workers belonging to this cluster - -Usage - -```bash - -$ pulsar-admin functions-worker get-cluster - -``` - -### `get-cluster-leader` - -Get the leader of the worker cluster - -Usage - -```bash - -$ pulsar-admin functions-worker get-cluster-leader - -``` - -### `get-function-assignments` - -Get the assignments of the functions across the worker cluster - -Usage - -```bash - -$ pulsar-admin 
functions-worker get-function-assignments - -``` - -### `monitoring-metrics` - -Dump metrics for Monitoring - -Usage - -```bash - -$ pulsar-admin functions-worker monitoring-metrics - -``` - -## `namespaces` - -Operations for managing namespaces - -```bash - -$ pulsar-admin namespaces subcommand - -``` - -Subcommands -* `list` -* `topics` -* `policies` -* `create` -* `delete` -* `set-deduplication` -* `set-auto-topic-creation` -* `remove-auto-topic-creation` -* `set-auto-subscription-creation` -* `remove-auto-subscription-creation` -* `permissions` -* `grant-permission` -* `revoke-permission` -* `grant-subscription-permission` -* `revoke-subscription-permission` -* `set-clusters` -* `get-clusters` -* `get-backlog-quotas` -* `set-backlog-quota` -* `remove-backlog-quota` -* `get-persistence` -* `set-persistence` -* `get-message-ttl` -* `set-message-ttl` -* `remove-message-ttl` -* `get-anti-affinity-group` -* `set-anti-affinity-group` -* `get-anti-affinity-namespaces` -* `delete-anti-affinity-group` -* `get-retention` -* `set-retention` -* `unload` -* `split-bundle` -* `set-dispatch-rate` -* `get-dispatch-rate` -* `set-replicator-dispatch-rate` -* `get-replicator-dispatch-rate` -* `set-subscribe-rate` -* `get-subscribe-rate` -* `set-subscription-dispatch-rate` -* `get-subscription-dispatch-rate` -* `clear-backlog` -* `unsubscribe` -* `set-encryption-required` -* `set-delayed-delivery` -* `get-delayed-delivery` -* `set-subscription-auth-mode` -* `get-max-producers-per-topic` -* `set-max-producers-per-topic` -* `get-max-consumers-per-topic` -* `set-max-consumers-per-topic` -* `get-max-consumers-per-subscription` -* `set-max-consumers-per-subscription` -* `get-max-unacked-messages-per-subscription` -* `set-max-unacked-messages-per-subscription` -* `get-max-unacked-messages-per-consumer` -* `set-max-unacked-messages-per-consumer` -* `get-compaction-threshold` -* `set-compaction-threshold` -* `get-offload-threshold` -* `set-offload-threshold` -* `get-offload-deletion-lag` -* `set-offload-deletion-lag` -* `clear-offload-deletion-lag` -* `get-schema-autoupdate-strategy` -* `set-schema-autoupdate-strategy` -* `set-offload-policies` -* `get-offload-policies` -* `set-max-subscriptions-per-topic` -* `get-max-subscriptions-per-topic` -* `remove-max-subscriptions-per-topic` - - -### `list` -Get the namespaces for a tenant - -Usage - -```bash - -$ pulsar-admin namespaces list tenant-name - -``` - -### `topics` -Get the list of topics for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces topics tenant/namespace - -``` - -### `policies` -Get the configuration policies of a namespace - -Usage - -```bash - -$ pulsar-admin namespaces policies tenant/namespace - -``` - -### `create` -Create a new namespace - -Usage - -```bash - -$ pulsar-admin namespaces create tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-b`, `--bundles`|The number of bundles to activate|0| -|`-c`, `--clusters`|List of clusters this namespace will be assigned|| - - -### `delete` -Deletes a namespace. 
The namespace needs to be empty - -Usage - -```bash - -$ pulsar-admin namespaces delete tenant/namespace - -``` - -### `set-deduplication` -Enable or disable message deduplication on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-deduplication tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--enable`, `-e`|Enable message deduplication on the specified namespace|false| -|`--disable`, `-d`|Disable message deduplication on the specified namespace|false| - -### `set-auto-topic-creation` -Enable or disable autoTopicCreation for a namespace, overriding broker settings - -Usage - -```bash - -$ pulsar-admin namespaces set-auto-topic-creation tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--enable`, `-e`|Enable allowAutoTopicCreation on namespace|false| -|`--disable`, `-d`|Disable allowAutoTopicCreation on namespace|false| -|`--type`, `-t`|Type of topic to be auto-created. Possible values: (partitioned, non-partitioned)|non-partitioned| -|`--num-partitions`, `-n`|Default number of partitions of topic to be auto-created, applicable to partitioned topics only|| - -### `remove-auto-topic-creation` -Remove override of autoTopicCreation for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces remove-auto-topic-creation tenant/namespace - -``` - -### `set-auto-subscription-creation` -Enable autoSubscriptionCreation for a namespace, overriding broker settings - -Usage - -```bash - -$ pulsar-admin namespaces set-auto-subscription-creation tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--enable`, `-e`|Enable allowAutoSubscriptionCreation on namespace|false| - -### `remove-auto-subscription-creation` -Remove override of autoSubscriptionCreation for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces remove-auto-subscription-creation tenant/namespace - -``` - -### `permissions` -Get the permissions on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces permissions tenant/namespace - -``` - -### `grant-permission` -Grant permissions on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces grant-permission tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--actions`|Actions to be granted (`produce` or `consume`)|| -|`--role`|The client role to which to grant the permissions|| - - -### `revoke-permission` -Revoke permissions on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces revoke-permission tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--role`|The client role to which to revoke the permissions|| - -### `grant-subscription-permission` -Grant permissions to access subscription admin-api - -Usage - -```bash - -$ pulsar-admin namespaces grant-subscription-permission tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--roles`|The client roles to which to grant the permissions (comma separated roles)|| -|`--subscription`|The subscription name for which permission will be granted to roles|| - -### `revoke-subscription-permission` -Revoke permissions to access subscription admin-api - -Usage - -```bash - -$ pulsar-admin namespaces revoke-subscription-permission tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--role`|The client role to which to revoke the permissions|| -|`--subscription`|The subscription name for which permission will be revoked to roles|| - 
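As an illustration, a hypothetical permission round trip on a namespace (tenant, namespace, and role names below are placeholders):

```bash

# grant produce and consume to a client role, inspect the result, then revoke
$ pulsar-admin namespaces grant-permission my-tenant/my-ns \
  --role my-app-role \
  --actions produce,consume

$ pulsar-admin namespaces permissions my-tenant/my-ns

$ pulsar-admin namespaces revoke-permission my-tenant/my-ns --role my-app-role

```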
-### `set-clusters` -Set replication clusters for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-clusters tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-c`, `--clusters`|Replication clusters ID list (comma-separated values)|| - - -### `get-clusters` -Get replication clusters for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-clusters tenant/namespace - -``` - -### `get-backlog-quotas` -Get the backlog quota policies for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-backlog-quotas tenant/namespace - -``` - -### `set-backlog-quota` -Set a backlog quota policy for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-backlog-quota tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-l`, `--limit`|The backlog size limit (for example `10M` or `16G`)|| -|`-p`, `--policy`|The retention policy to enforce when the limit is reached. The valid options are: `producer_request_hold`, `producer_exception` or `consumer_backlog_eviction`| - -Example - -```bash - -$ pulsar-admin namespaces set-backlog-quota my-tenant/my-ns \ ---limit 2G \ ---policy producer_request_hold - -``` - -### `remove-backlog-quota` -Remove a backlog quota policy from a namespace - -Usage - -```bash - -$ pulsar-admin namespaces remove-backlog-quota tenant/namespace - -``` - -### `get-persistence` -Get the persistence policies for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-persistence tenant/namespace - -``` - -### `set-persistence` -Set the persistence policies for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-persistence tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-a`, `--bookkeeper-ack-quorum`|The number of acks (guaranteed copies) to wait for each entry|0| -|`-e`, `--bookkeeper-ensemble`|The number of bookies to use for a topic|0| -|`-w`, `--bookkeeper-write-quorum`|How many writes to make of each entry|0| -|`-r`, `--ml-mark-delete-max-rate`|Throttling rate of mark-delete operation (0 means no throttle)|| - - -### `get-message-ttl` -Get the message TTL for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-message-ttl tenant/namespace - -``` - -### `set-message-ttl` -Set the message TTL for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-message-ttl tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-ttl`, `--messageTTL`|Message TTL in seconds. When the value is set to `0`, TTL is disabled. TTL is disabled by default. |0| - -### `remove-message-ttl` -Remove the message TTL for a namespace. 
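For instance, a hypothetical message-TTL round trip on a namespace (tenant and namespace names are placeholders):

```bash

# set a 2-hour TTL (7200 seconds), verify it, then drop the override again
$ pulsar-admin namespaces set-message-ttl my-tenant/my-ns --messageTTL 7200

$ pulsar-admin namespaces get-message-ttl my-tenant/my-ns

$ pulsar-admin namespaces remove-message-ttl my-tenant/my-ns

```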
- -Usage - -```bash - -$ pulsar-admin namespaces remove-message-ttl tenant/namespace - -``` - -### `get-anti-affinity-group` -Get the anti-affinity group name for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-anti-affinity-group tenant/namespace - -``` - -### `set-anti-affinity-group` -Set the anti-affinity group name for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-anti-affinity-group tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-g`, `--group`|Anti-affinity group name|| - -### `get-anti-affinity-namespaces` -Get the anti-affinity namespaces grouped under the given anti-affinity group name - -Usage - -```bash - -$ pulsar-admin namespaces get-anti-affinity-namespaces options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-c`, `--cluster`|Cluster name|| -|`-g`, `--group`|Anti-affinity group name|| -|`-p`, `--tenant`|The tenant is only used for authorization. The client has to be an admin of one of the tenants to access this API|| - -### `delete-anti-affinity-group` -Remove the anti-affinity group name for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces delete-anti-affinity-group tenant/namespace - -``` - -### `get-retention` -Get the retention policy that is applied to each topic within the specified namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-retention tenant/namespace - -``` - -### `set-retention` -Set the retention policy for each topic within the specified namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-retention tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-s`, `--size`|The retention size limit (for example 10M, 16G or 3T) for each topic in the namespace. 0 means no retention and -1 means infinite size retention|| -|`-t`, `--time`|The retention time in minutes, hours, days, or weeks. Examples: 100m, 13h, 2d, 5w. 0 means no retention and -1 means infinite time retention|| - - -### `unload` -Unload a namespace or namespace bundle from the current serving broker. - -Usage - -```bash - -$ pulsar-admin namespaces unload tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-b`, `--bundle`|{start-boundary}_{end-boundary} (e.g. 0x00000000_0xffffffff)|| - -### `split-bundle` -Split a namespace-bundle from the current serving broker - -Usage - -```bash - -$ pulsar-admin namespaces split-bundle tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-b`, `--bundle`|{start-boundary}_{end-boundary} (e.g. 
0x00000000_0xffffffff)|| -|`-u`, `--unload`|Unload newly split bundles after splitting the old bundle|false| - -### `set-dispatch-rate` -Set message-dispatch-rate for all topics of the namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-dispatch-rate tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-bd`, `--byte-dispatch-rate`|The byte dispatch rate (if not set, the existing value is overwritten with the default of -1)|-1| -|`-dt`, `--dispatch-rate-period`|The dispatch rate period, in seconds (if not set, the existing value is overwritten with the default of 1 second)|1| -|`-md`, `--msg-dispatch-rate`|The message dispatch rate (if not set, the existing value is overwritten with the default of -1)|-1| - -### `get-dispatch-rate` -Get the configured message-dispatch-rate for all topics of the namespace (disabled if the value is < 0) - -Usage - -```bash - -$ pulsar-admin namespaces get-dispatch-rate tenant/namespace - -``` - -### `set-replicator-dispatch-rate` -Set replicator message-dispatch-rate for all topics of the namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-replicator-dispatch-rate tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-bd`, `--byte-dispatch-rate`|The byte dispatch rate (if not set, the existing value is overwritten with the default of -1)|-1| -|`-dt`, `--dispatch-rate-period`|The dispatch rate period, in seconds (if not set, the existing value is overwritten with the default of 1 second)|1| -|`-md`, `--msg-dispatch-rate`|The message dispatch rate (if not set, the existing value is overwritten with the default of -1)|-1| - -### `get-replicator-dispatch-rate` -Get the configured replicator message-dispatch-rate for all topics of the namespace (disabled if the value is < 0) - -Usage - -```bash - -$ pulsar-admin namespaces get-replicator-dispatch-rate tenant/namespace - -``` - -### `set-subscribe-rate` -Set subscribe-rate per consumer for all topics of the namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-subscribe-rate tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-sr`, `--subscribe-rate`|The subscribe rate (if not set, the existing value is overwritten with the default of -1)|-1| -|`-st`, `--subscribe-rate-period`|The subscribe rate period, in seconds (if not set, the existing value is overwritten with the default of 30 seconds)|30| - -### `get-subscribe-rate` -Get the configured subscribe-rate per consumer for all topics of the namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-subscribe-rate tenant/namespace - -``` - -### `set-subscription-dispatch-rate` -Set subscription message-dispatch-rate for all subscriptions of the namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-subscription-dispatch-rate tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-bd`, `--byte-dispatch-rate`|The byte dispatch rate (if not set, the existing value is overwritten with the default of -1)|-1| -|`-dt`, `--dispatch-rate-period`|The dispatch rate period, in seconds (if not set, the existing value is overwritten with the default of 1 second)|1| -|`-md`, `--sub-msg-dispatch-rate`|The message dispatch rate (if not set, the existing value is overwritten with the default of -1)|-1| - -### `get-subscription-dispatch-rate` -Get the configured subscription message-dispatch-rate for all topics of the namespace (disabled if the value is < 0) - -Usage - -```bash - -$ pulsar-admin namespaces get-subscription-dispatch-rate tenant/namespace - -``` - -### `clear-backlog` -Clear the backlog for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces clear-backlog tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-b`, 
`--bundle`|{start-boundary}_{end-boundary} (e.g. 0x00000000_0xffffffff)|| -|`-force`, `--force`|Whether to force a clear backlog without prompt|false| -|`-s`, `--sub`|The subscription name|| - - -### `unsubscribe` -Unsubscribe the given subscription on all destinations on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces unsubscribe tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-b`, `--bundle`|{start-boundary}_{end-boundary} (e.g. 0x00000000_0xffffffff)|| -|`-s`, `--sub`|The subscription name|| - -### `set-encryption-required` -Enable or disable message encryption required for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-encryption-required tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-d`, `--disable`|Disable message encryption required|false| -|`-e`, `--enable`|Enable message encryption required|false| - -### `set-delayed-delivery` -Set the delayed delivery policy on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-delayed-delivery tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-d`, `--disable`|Disable delayed delivery messages|false| -|`-e`, `--enable`|Enable delayed delivery messages|false| -|`-t`, `--time`|The tick time for when retrying on delayed delivery messages|1s| - - -### `get-delayed-delivery` -Get the delayed delivery policy on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-delayed-delivery-time tenant/namespace - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-t`, `--time`|The tick time for when retrying on delayed delivery messages|1s| - - -### `set-subscription-auth-mode` -Set subscription auth mode on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-subscription-auth-mode tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-m`, `--subscription-auth-mode`|Subscription authorization mode for Pulsar policies. 
Valid options are: [None, Prefix]|| - -### `get-max-producers-per-topic` -Get maxProducersPerTopic for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-max-producers-per-topic tenant/namespace - -``` - -### `set-max-producers-per-topic` -Set maxProducersPerTopic for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-max-producers-per-topic tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-p`, `--max-producers-per-topic`|maxProducersPerTopic for a namespace|0| - -### `get-max-consumers-per-topic` -Get maxConsumersPerTopic for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-max-consumers-per-topic tenant/namespace - -``` - -### `set-max-consumers-per-topic` -Set maxConsumersPerTopic for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-max-consumers-per-topic tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-c`, `--max-consumers-per-topic`|maxConsumersPerTopic for a namespace|0| - -### `get-max-consumers-per-subscription` -Get maxConsumersPerSubscription for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-max-consumers-per-subscription tenant/namespace - -``` - -### `set-max-consumers-per-subscription` -Set maxConsumersPerSubscription for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-max-consumers-per-subscription tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-c`, `--max-consumers-per-subscription`|maxConsumersPerSubscription for a namespace|0| - -### `get-max-unacked-messages-per-subscription` -Get maxUnackedMessagesPerSubscription for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-max-unacked-messages-per-subscription tenant/namespace - -``` - -### `set-max-unacked-messages-per-subscription` -Set maxUnackedMessagesPerSubscription for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-max-unacked-messages-per-subscription tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-c`, `--max-unacked-messages-per-subscription`|maxUnackedMessagesPerSubscription for a namespace|-1| - -### `get-max-unacked-messages-per-consumer` -Get maxUnackedMessagesPerConsumer for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-max-unacked-messages-per-consumer tenant/namespace - -``` - -### `set-max-unacked-messages-per-consumer` -Set maxUnackedMessagesPerConsumer for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-max-unacked-messages-per-consumer tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-c`, `--max-unacked-messages-per-consumer`|maxUnackedMessagesPerConsumer for a namespace|-1| - - -### `get-compaction-threshold` -Get compactionThreshold for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-compaction-threshold tenant/namespace - -``` - -### `set-compaction-threshold` -Set compactionThreshold for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-compaction-threshold tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-t`, `--threshold`|Maximum number of bytes in a topic backlog before compaction is triggered (eg: 10M, 16G, 3T). 
0 disables automatic compaction|0| - - -### `get-offload-threshold` -Get offloadThreshold for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-offload-threshold tenant/namespace - -``` - -### `set-offload-threshold` -Set offloadThreshold for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-offload-threshold tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-s`, `--size`|Maximum number of bytes stored in the pulsar cluster for a topic before data will start being automatically offloaded to longterm storage (eg: 10M, 16G, 3T, 100). Negative values disable automatic offload. 0 triggers offloading as soon as possible.|-1| - -### `get-offload-deletion-lag` -Get offloadDeletionLag, in minutes, for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-offload-deletion-lag tenant/namespace - -``` - -### `set-offload-deletion-lag` -Set offloadDeletionLag for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-offload-deletion-lag tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-l`, `--lag`|Duration to wait after offloading a ledger segment, before deleting the copy of that segment from cluster local storage. (eg: 10m, 5h, 3d, 2w).|-1| - -### `clear-offload-deletion-lag` -Clear offloadDeletionLag for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces clear-offload-deletion-lag tenant/namespace - -``` - -### `get-schema-autoupdate-strategy` -Get the schema auto-update strategy for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-schema-autoupdate-strategy tenant/namespace - -``` - -### `set-schema-autoupdate-strategy` -Set the schema auto-update strategy for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-schema-autoupdate-strategy tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-c`, `--compatibility`|Compatibility level required for new schemas created via a Producer. Possible values (Full, Backward, Forward, None).|Full| -|`-d`, `--disabled`|Disable automatic schema updates.|false| - -### `get-publish-rate` -Get the message publish rate for each topic in a namespace, in bytes as well as messages per second - -Usage - -```bash - -$ pulsar-admin namespaces get-publish-rate tenant/namespace - -``` - -### `set-publish-rate` -Set the message publish rate for each topic in a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-publish-rate tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-m`, `--msg-publish-rate`|Threshold for number of messages per second per topic in the namespace (-1 implies not set, 0 for no limit).|-1| -|`-b`, `--byte-publish-rate`|Threshold for number of bytes per second per topic in the namespace (-1 implies not set, 0 for no limit).|-1| - -### `set-offload-policies` -Set the offload policy for a namespace. 
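For example, a hypothetical invocation that offloads to AWS S3 (the bucket, region, and names are placeholders; the flags are described in the options table below):

```bash

$ pulsar-admin namespaces set-offload-policies my-tenant/my-ns \
  --driver aws-s3 \
  --region us-west-2 \
  --bucket my-offload-bucket

```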
- -Usage - -```bash - -$ pulsar-admin namespaces set-offload-policies tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-d`, `--driver`|Driver to use to offload old data to long term storage,(Possible values: S3, aws-s3, google-cloud-storage)|| -|`-r`, `--region`|The long term storage region|| -|`-b`, `--bucket`|Bucket to place offloaded ledger into|| -|`-e`, `--endpoint`|Alternative endpoint to connect to|| -|`-i`, `--aws-id`|AWS Credential Id to use when using driver S3 or aws-s3|| -|`-s`, `--aws-secret`|AWS Credential Secret to use when using driver S3 or aws-s3|| -|`-ro`, `--s3-role`|S3 Role used for STSAssumeRoleSessionCredentialsProvider using driver S3 or aws-s3|| -|`-rsn`, `--s3-role-session-name`|S3 role session name used for STSAssumeRoleSessionCredentialsProvider using driver S3 or aws-s3|| -|`-mbs`, `--maxBlockSize`|Max block size|64MB| -|`-rbs`, `--readBufferSize`|Read buffer size|1MB| -|`-oat`, `--offloadAfterThreshold`|Offload after threshold size (eg: 1M, 5M)|| -|`-oae`, `--offloadAfterElapsed`|Offload after elapsed in millis (or minutes, hours,days,weeks eg: 100m, 3h, 2d, 5w).|| - -### `get-offload-policies` -Get the offload policy for a namespace. - -Usage - -```bash - -$ pulsar-admin namespaces get-offload-policies tenant/namespace - -``` - -### `set-max-subscriptions-per-topic` -Set the maximum subscription per topic for a namespace. - -Usage - -```bash - -$ pulsar-admin namespaces set-max-subscriptions-per-topic tenant/namespace - -``` - -### `get-max-subscriptions-per-topic` -Get the maximum subscription per topic for a namespace. - -Usage - -```bash - -$ pulsar-admin namespaces get-max-subscriptions-per-topic tenant/namespace - -``` - -### `remove-max-subscriptions-per-topic` -Remove the maximum subscription per topic for a namespace. - -Usage - -```bash - -$ pulsar-admin namespaces remove-max-subscriptions-per-topic tenant/namespace - -``` - -## `ns-isolation-policy` -Operations for managing namespace isolation policies. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy subcommand - -``` - -Subcommands -* `set` -* `get` -* `list` -* `delete` -* `brokers` -* `broker` - -### `set` -Create/update a namespace isolation policy for a cluster. This operation requires Pulsar superuser privileges. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy set cluster-name policy-name options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`--auto-failover-policy-params`|Comma-separated name=value auto failover policy parameters|[]| -|`--auto-failover-policy-type`|Auto failover policy type name. Currently available options: min_available.|[]| -|`--namespaces`|Comma-separated namespaces regex list|[]| -|`--primary`|Comma-separated primary broker regex list|[]| -|`--secondary`|Comma-separated secondary broker regex list|[]| - - -### `get` -Get the namespace isolation policy of a cluster. This operation requires Pulsar superuser privileges. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy get cluster-name policy-name - -``` - -### `list` -List all namespace isolation policies of a cluster. This operation requires Pulsar superuser privileges. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy list cluster-name - -``` - -### `delete` -Delete namespace isolation policy of a cluster. This operation requires superuser privileges. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy delete - -``` - -### `brokers` -List all brokers with namespace-isolation policies attached to it. 
This operation requires Pulsar super-user privileges. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy brokers cluster-name - -``` - -### `broker` -Get broker with namespace-isolation policies attached to it. This operation requires Pulsar super-user privileges. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy broker cluster-name options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`--broker`|Broker name to get namespace-isolation policies attached to it|| - -## `topics` -Operations for managing Pulsar topics (both persistent and non-persistent). - -Usage - -```bash - -$ pulsar-admin topics subcommand - -``` - -From Pulsar 2.7.0, some namespace-level policies are available on topic level. To enable topic-level policy in Pulsar, you need to configure the following parameters in the `broker.conf` file. - -```shell - -systemTopicEnabled=true -topicLevelPoliciesEnabled=true - -``` - -Subcommands -* `compact` -* `compaction-status` -* `offload` -* `offload-status` -* `create-partitioned-topic` -* `create-missed-partitions` -* `delete-partitioned-topic` -* `create` -* `get-partitioned-topic-metadata` -* `update-partitioned-topic` -* `list-partitioned-topics` -* `list` -* `terminate` -* `permissions` -* `grant-permission` -* `revoke-permission` -* `lookup` -* `bundle-range` -* `delete` -* `unload` -* `create-subscription` -* `subscriptions` -* `unsubscribe` -* `stats` -* `stats-internal` -* `info-internal` -* `partitioned-stats` -* `partitioned-stats-internal` -* `skip` -* `clear-backlog` -* `expire-messages` -* `expire-messages-all-subscriptions` -* `peek-messages` -* `reset-cursor` -* `get-message-by-id` -* `last-message-id` -* `get-backlog-quotas` -* `set-backlog-quota` -* `remove-backlog-quota` -* `get-persistence` -* `set-persistence` -* `remove-persistence` -* `get-message-ttl` -* `set-message-ttl` -* `remove-message-ttl` -* `get-deduplication` -* `set-deduplication` -* `remove-deduplication` -* `get-retention` -* `set-retention` -* `remove-retention` -* `get-dispatch-rate` -* `set-dispatch-rate` -* `remove-dispatch-rate` -* `get-max-unacked-messages-per-subscription` -* `set-max-unacked-messages-per-subscription` -* `remove-max-unacked-messages-per-subscription` -* `get-max-unacked-messages-per-consumer` -* `set-max-unacked-messages-per-consumer` -* `remove-max-unacked-messages-per-consumer` -* `get-delayed-delivery` -* `set-delayed-delivery` -* `remove-delayed-delivery` -* `get-max-producers` -* `set-max-producers` -* `remove-max-producers` -* `get-max-consumers` -* `set-max-consumers` -* `remove-max-consumers` -* `get-compaction-threshold` -* `set-compaction-threshold` -* `remove-compaction-threshold` -* `get-offload-policies` -* `set-offload-policies` -* `remove-offload-policies` -* `get-inactive-topic-policies` -* `set-inactive-topic-policies` -* `remove-inactive-topic-policies` -* `set-max-subscriptions` -* `get-max-subscriptions` -* `remove-max-subscriptions` - -### `compact` -Run compaction on the specified topic (persistent topics only) - -Usage - -``` - -$ pulsar-admin topics compact persistent://tenant/namespace/topic - -``` - -### `compaction-status` -Check the status of a topic compaction (persistent topics only) - -Usage - -```bash - -$ pulsar-admin topics compaction-status persistent://tenant/namespace/topic - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-w`, `--wait-complete`|Wait for compaction to complete|false| - - -### `offload` -Trigger offload of data from a topic to long-term storage (e.g. 
Amazon S3) - -Usage - -```bash - -$ pulsar-admin topics offload persistent://tenant/namespace/topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-s`, `--size-threshold`|The maximum amount of data to keep in BookKeeper for the specific topic|| - - -### `offload-status` -Check the status of data offloading from a topic to long-term storage - -Usage - -```bash - -$ pulsar-admin topics offload-status persistent://tenant/namespace/topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-w`, `--wait-complete`|Wait for offloading to complete|false| - - -### `create-partitioned-topic` -Create a partitioned topic. A partitioned topic must be created before producers can publish to it. - -:::note - -By default, after 60 seconds of creation, topics are considered inactive and deleted automatically to prevent generating trash data. -To disable this feature, set `brokerDeleteInactiveTopicsEnabled` to `false`. -To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to your desired value. -For more information about these two parameters, see [here](reference-configuration.md#broker). - -::: - -Usage - -```bash - -$ pulsar-admin topics create-partitioned-topic {persistent|non-persistent}://tenant/namespace/topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-p`, `--partitions`|The number of partitions for the topic|0| - -### `create-missed-partitions` -Try to create missed partitions for a partitioned topic. This can be used to repair the partitions of a partitioned topic when topic auto-creation is disabled. - -Usage - -```bash - -$ pulsar-admin topics create-missed-partitions persistent://tenant/namespace/topic - -``` - -### `delete-partitioned-topic` -Delete a partitioned topic. This will also delete all the partitions of the topic if they exist. - -Usage - -```bash - -$ pulsar-admin topics delete-partitioned-topic {persistent|non-persistent}://tenant/namespace/topic - -``` - -### `create` -Creates a non-partitioned topic. A non-partitioned topic must be explicitly created by the user if allowAutoTopicCreation or createIfMissing is disabled. - -:::note - -By default, after 60 seconds of creation, topics are considered inactive and deleted automatically to prevent generating trash data. -To disable this feature, set `brokerDeleteInactiveTopicsEnabled` to `false`. -To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to your desired value. -For more information about these two parameters, see [here](reference-configuration.md#broker). - -::: - -Usage - -```bash - -$ pulsar-admin topics create {persistent|non-persistent}://tenant/namespace/topic - -``` - -### `get-partitioned-topic-metadata` -Get the partitioned topic metadata. If the topic is not created or is a non-partitioned topic, this will return an empty topic with zero partitions. - -Usage - -```bash - -$ pulsar-admin topics get-partitioned-topic-metadata {persistent|non-persistent}://tenant/namespace/topic - -``` - -### `update-partitioned-topic` -Update an existing non-global partitioned topic. The new number of partitions must be greater than the existing number of partitions. 
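For example, a hypothetical flow that creates a topic with 4 partitions and later grows it to 8 (names are placeholders):

```bash

$ pulsar-admin topics create-partitioned-topic persistent://my-tenant/my-ns/my-topic --partitions 4

$ pulsar-admin topics update-partitioned-topic persistent://my-tenant/my-ns/my-topic --partitions 8

```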
- -Usage - -```bash - -$ pulsar-admin topics update-partitioned-topic {persistent|non-persistent}://tenant/namespace/topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-p`, `--partitions`|The number of partitions for the topic|0| - -### `list-partitioned-topics` -Get the list of partitioned topics under a namespace. - -Usage - -```bash - -$ pulsar-admin topics list-partitioned-topics tenant/namespace - -``` - -### `list` -Get the list of topics under a namespace - -Usage - -``` - -$ pulsar-admin topics list tenant/cluster/namespace - -``` - -### `terminate` -Terminate a persistent topic (disallow further messages from being published on the topic) - -Usage - -```bash - -$ pulsar-admin topics terminate persistent://tenant/namespace/topic - -``` - -### `permissions` -Get the permissions on a topic. Retrieve the effective permissions for a destination. These permissions are defined by the permissions set at the namespace level combined (union) with any eventual specific permissions set on the topic. - -Usage - -```bash - -$ pulsar-admin topics permissions topic - -``` - -### `grant-permission` -Grant a new permission to a client role on a single topic - -Usage - -```bash - -$ pulsar-admin topics grant-permission {persistent|non-persistent}://tenant/namespace/topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--actions`|Actions to be granted (`produce` or `consume`)|| -|`--role`|The client role to which to grant the permissions|| - - -### `revoke-permission` -Revoke permissions to a client role on a single topic. If the permission was not set at the topic level, but rather at the namespace level, this operation will return an error (HTTP status code 412). - -Usage - -```bash - -$ pulsar-admin topics revoke-permission topic - -``` - -### `lookup` -Look up a topic from the current serving broker - -Usage - -```bash - -$ pulsar-admin topics lookup topic - -``` - -### `bundle-range` -Get the namespace bundle which contains the given topic - -Usage - -```bash - -$ pulsar-admin topics bundle-range topic - -``` - -### `delete` -Delete a topic. The topic cannot be deleted if there are any active subscriptions or producers connected to the topic. - -Usage - -```bash - -$ pulsar-admin topics delete topic - -``` - -### `unload` -Unload a topic - -Usage - -```bash - -$ pulsar-admin topics unload topic - -``` - -### `create-subscription` -Create a new subscription on a topic. - -Usage - -```bash - -$ pulsar-admin topics create-subscription [options] persistent://tenant/namespace/topic - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-m`, `--messageId`|messageId where to create the subscription. It can be either 'latest', 'earliest' or (ledgerId:entryId)|latest| -|`-s`, `--subscription`|Subscription to reset position on|| - -### `subscriptions` -Get the list of subscriptions on the topic - -Usage - -```bash - -$ pulsar-admin topics subscriptions topic - -``` - -### `unsubscribe` -Delete a durable subscriber from a topic - -Usage - -```bash - -$ pulsar-admin topics unsubscribe topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-s`, `--subscription`|The subscription to delete|| -|`-f`, `--force`|Disconnect and close all consumers and delete subscription forcefully|false| - - -### `stats` -Get the stats for the topic and its connected producers and consumers. All rates are computed over a 1-minute window and are relative to the last completed 1-minute period. 
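For example, a hypothetical call against a concrete topic (the name is a placeholder); per-second rates such as `msgRateIn` and `msgRateOut` reflect the last completed minute:

```bash

$ pulsar-admin topics stats persistent://my-tenant/my-ns/my-topic

```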

-
-Usage
-
-```bash
-
-$ pulsar-admin topics stats topic
-
-```
-
-:::note
-
-The unit of `storageSize` and `averageMsgSize` is bytes.
-
-:::
-
-### `stats-internal`
-Get the internal stats for the topic
-
-Usage
-
-```bash
-
-$ pulsar-admin topics stats-internal topic
-
-```
-
-### `info-internal`
-Get the internal metadata info for the topic
-
-Usage
-
-```bash
-
-$ pulsar-admin topics info-internal topic
-
-```
-
-### `partitioned-stats`
-Get the stats for the partitioned topic and its connected producers and consumers. All rates are computed over a 1-minute window and are relative to the last completed 1-minute period.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics partitioned-stats topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--per-partition`|Get per-partition stats|false|
-
-### `partitioned-stats-internal`
-Get the internal stats for the partitioned topic and its connected producers and consumers. All rates are computed over a 1-minute window and are relative to the last completed 1-minute period.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics partitioned-stats-internal topic
-
-```
-
-### `skip`
-Skip some messages for the subscription
-
-Usage
-
-```bash
-
-$ pulsar-admin topics skip topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-n`, `--count`|The number of messages to skip|0|
-|`-s`, `--subscription`|The subscription on which to skip messages||
-
-
-### `clear-backlog`
-Clear backlog (skip all the messages) for the subscription
-
-Usage
-
-```bash
-
-$ pulsar-admin topics clear-backlog topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-s`, `--subscription`|The subscription to clear||
-
-
-### `expire-messages`
-Expire messages that are older than the given expiry time (in seconds) for the subscription.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics expire-messages topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-t`, `--expireTime`|Expire messages older than the time (in seconds)|0|
-|`-s`, `--subscription`|The subscription to expire messages on||
-
-
-### `expire-messages-all-subscriptions`
-Expire messages older than the given expiry time (in seconds) for all subscriptions
-
-Usage
-
-```bash
-
-$ pulsar-admin topics expire-messages-all-subscriptions topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-t`, `--expireTime`|Expire messages older than the time (in seconds)|0|
-
-
-### `peek-messages`
-Peek some messages for the subscription.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics peek-messages topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-n`, `--count`|The number of messages to peek|0|
-|`-s`, `--subscription`|Subscription to get messages from||
-
-
-### `reset-cursor`
-Reset position for subscription to a position that is closest to timestamp or messageId.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics reset-cursor topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-s`, `--subscription`|Subscription to reset position on||
-|`-t`, `--time`|The time to reset back to, expressed as a duration in minutes, hours, days, or weeks. Examples: `100m`, `3h`, `2d`, `5w`.||
-|`-m`, `--messageId`|The messageId to reset back to (ledgerId:entryId)||
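-
-The same reset is available through the Java admin API. A minimal sketch, assuming a broker admin endpoint at `http://localhost:8080` and an existing subscription `my-sub` (both placeholders):
-
-```java
-
-import java.util.concurrent.TimeUnit;
-import org.apache.pulsar.client.admin.PulsarAdmin;
-
-PulsarAdmin admin = PulsarAdmin.builder()
-        .serviceHttpUrl("http://localhost:8080") // placeholder admin URL
-        .build();
-
-// Move the cursor of `my-sub` back to the position closest to 3 hours ago,
-// mirroring `pulsar-admin topics reset-cursor topic -s my-sub -t 3h`.
-long resetTo = System.currentTimeMillis() - TimeUnit.HOURS.toMillis(3);
-admin.topics().resetCursor("persistent://tenant/namespace/topic", "my-sub", resetTo);
-
-admin.close();
-
-```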
-
-### `get-message-by-id`
-Get message by ledger id and entry id
-
-Usage
-
-```bash
-
-$ pulsar-admin topics get-message-by-id topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-l`, `--ledgerId`|The ledger id|0|
-|`-e`, `--entryId`|The entry id|0|
-
-### `last-message-id`
-Get the last committed message ID of the topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics last-message-id persistent://tenant/namespace/topic
-
-```
-
-### `get-backlog-quotas`
-Get the backlog quota policies for a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics get-backlog-quotas tenant/namespace/topic
-
-```
-
-### `set-backlog-quota`
-Set a backlog quota policy for a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics set-backlog-quota tenant/namespace/topic options
-
-```
-
-### `remove-backlog-quota`
-Remove a backlog quota policy from a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics remove-backlog-quota tenant/namespace/topic
-
-```
-
-### `get-persistence`
-Get the persistence policies for a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics get-persistence tenant/namespace/topic
-
-```
-
-### `set-persistence`
-Set the persistence policies for a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics set-persistence tenant/namespace/topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-e`, `--bookkeeper-ensemble`|Number of bookies to use for a topic|0|
-|`-w`, `--bookkeeper-write-quorum`|How many writes to make of each entry|0|
-|`-a`, `--bookkeeper-ack-quorum`|Number of acks (guaranteed copies) to wait for each entry|0|
-|`-r`, `--ml-mark-delete-max-rate`|Throttling rate of mark-delete operation (0 means no throttle)||
-
-### `remove-persistence`
-Remove the persistence policy for a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics remove-persistence tenant/namespace/topic
-
-```
-
-### `get-message-ttl`
-Get the message TTL for a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics get-message-ttl tenant/namespace/topic
-
-```
-
-### `set-message-ttl`
-Set the message TTL for a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics set-message-ttl tenant/namespace/topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-ttl`, `--messageTTL`|Message TTL for a topic in seconds; the allowed range is from 1 to `Integer.MAX_VALUE`|0|
-
-### `remove-message-ttl`
-Remove the message TTL for a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics remove-message-ttl tenant/namespace/topic
-
-```
-
-### `get-deduplication`
-Get a deduplication policy for a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics get-deduplication tenant/namespace/topic
-
-```
-
-### `set-deduplication`
-Set a deduplication policy for a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics set-deduplication tenant/namespace/topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--enable`, `-e`|Enable message deduplication on the specified topic.|false|
-|`--disable`, `-d`|Disable message deduplication on the specified topic.|false|
-
-### `remove-deduplication`
-Remove a deduplication policy for a topic.

-
-Usage
-
-```bash
-
-$ pulsar-admin topics remove-deduplication tenant/namespace/topic
-
-```
-
-## `tenants`
-Operations for managing tenants
-
-Usage
-
-```bash
-
-$ pulsar-admin tenants subcommand
-
-```
-
-Subcommands
-* `list`
-* `get`
-* `create`
-* `update`
-* `delete`
-
-### `list`
-List the existing tenants
-
-Usage
-
-```bash
-
-$ pulsar-admin tenants list
-
-```
-
-### `get`
-Gets the configuration of a tenant
-
-Usage
-
-```bash
-
-$ pulsar-admin tenants get tenant-name
-
-```
-
-### `create`
-Creates a new tenant
-
-Usage
-
-```bash
-
-$ pulsar-admin tenants create tenant-name options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-r`, `--admin-roles`|Comma-separated admin roles||
-|`-c`, `--allowed-clusters`|Comma-separated allowed clusters||
-
-### `update`
-Updates a tenant
-
-Usage
-
-```bash
-
-$ pulsar-admin tenants update tenant-name options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-r`, `--admin-roles`|Comma-separated admin roles||
-|`-c`, `--allowed-clusters`|Comma-separated allowed clusters||
-
-
-### `delete`
-Deletes an existing tenant
-
-Usage
-
-```bash
-
-$ pulsar-admin tenants delete tenant-name
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-f`, `--force`|Delete a tenant forcefully by deleting all namespaces under it.|false|
-
-
-## `resource-quotas`
-Operations for managing resource quotas
-
-Usage
-
-```bash
-
-$ pulsar-admin resource-quotas subcommand
-
-```
-
-Subcommands
-* `get`
-* `set`
-* `reset-namespace-bundle-quota`
-
-
-### `get`
-Get the resource quota for a specified namespace bundle, or the default quota if no namespace/bundle is specified.
-
-Usage
-
-```bash
-
-$ pulsar-admin resource-quotas get options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-b`, `--bundle`|A bundle of the form {start-boundary}_{end_boundary}. This must be specified together with -n/--namespace.||
-|`-n`, `--namespace`|The namespace||
-
-
-### `set`
-Set the resource quota for the specified namespace bundle, or the default quota if no namespace/bundle is specified.
-
-Usage
-
-```bash
-
-$ pulsar-admin resource-quotas set options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-bi`, `--bandwidthIn`|The expected inbound bandwidth (in bytes/second)|0|
-|`-bo`, `--bandwidthOut`|Expected outbound bandwidth (in bytes/second)|0|
-|`-b`, `--bundle`|A bundle of the form {start-boundary}_{end_boundary}. This must be specified together with -n/--namespace.||
-|`-d`, `--dynamic`|Allow the quota to be dynamically re-calculated (or not)|false|
-|`-mem`, `--memory`|Expected memory usage (in megabytes)|0|
-|`-mi`, `--msgRateIn`|Expected incoming messages per second|0|
-|`-mo`, `--msgRateOut`|Expected outgoing messages per second|0|
-|`-n`, `--namespace`|The namespace as tenant/namespace, for example my-tenant/my-ns. Must be specified together with -b/--bundle.||
-
-
-### `reset-namespace-bundle-quota`
-Reset the specified namespace bundle's resource quota to a default value.
-
-Usage
-
-```bash
-
-$ pulsar-admin resource-quotas reset-namespace-bundle-quota options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-b`, `--bundle`|A bundle of the form {start-boundary}_{end_boundary}. This must be specified together with -n/--namespace.||
-|`-n`, `--namespace`|The namespace||
-
-
-
-## `schemas`
-Operations related to Schemas associated with Pulsar topics.
- -Usage - -``` - -$ pulsar-admin schemas subcommand - -``` - -Subcommands -* `upload` -* `delete` -* `get` -* `extract` - - -### `upload` -Upload the schema definition for a topic - -Usage - -```bash - -$ pulsar-admin schemas upload persistent://tenant/namespace/topic options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`--filename`|The path to the schema definition file. An example schema file is available under conf directory.|| - - -### `delete` -Delete the schema definition associated with a topic - -Usage - -```bash - -$ pulsar-admin schemas delete persistent://tenant/namespace/topic - -``` - -### `get` -Retrieve the schema definition associated with a topic (at a given version if version is supplied). - -Usage - -```bash - -$ pulsar-admin schemas get persistent://tenant/namespace/topic options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`--version`|The version of the schema definition to retrieve for a topic.|| - -### `extract` -Provide the schema definition for a topic via Java class name contained in a JAR file - -Usage - -```bash - -$ pulsar-admin schemas extract persistent://tenant/namespace/topic options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-c`, `--classname`|The Java class name|| -|`-j`, `--jar`|A path to the JAR file which contains the above Java class|| -|`-t`, `--type`|The type of the schema (avro or json)|| diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/reference-rest-api-overview.md b/site2/website/versioned_docs/version-2.8.0-deprecated/reference-rest-api-overview.md deleted file mode 100644 index 4bdcf23483a2b5..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/reference-rest-api-overview.md +++ /dev/null @@ -1,18 +0,0 @@ ---- -id: reference-rest-api-overview -title: Pulsar REST APIs -sidebar_label: "Pulsar REST APIs" ---- - -A REST API (also known as RESTful API, REpresentational State Transfer Application Programming Interface) is a set of definitions and protocols for building and integrating application software, using HTTP requests to GET, PUT, POST, and DELETE data following the REST standards. In essence, REST API is a set of remote calls using standard methods to request and return data in a specific format between two systems. - -Pulsar provides a variety of REST APIs that enable you to interact with Pulsar to retrieve information or perform an action. - -| REST API category | Description | -| --- | --- | -| [Admin](https://pulsar.apache.org/admin-rest-api/?version=master) | REST APIs for administrative operations.| -| [Functions](https://pulsar.apache.org/functions-rest-api/?version=master) | REST APIs for function-specific operations.| -| [Sources](https://pulsar.apache.org/source-rest-api/?version=master) | REST APIs for source-specific operations.| -| [Sinks](https://pulsar.apache.org/sink-rest-api/?version=master) | REST APIs for sink-specific operations.| -| [Packages](https://pulsar.apache.org/packages-rest-api/?version=master) | REST APIs for package-specific operations. 
A package can be a group of functions, sources, and sinks.|
-
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/reference-terminology.md b/site2/website/versioned_docs/version-2.8.0-deprecated/reference-terminology.md
deleted file mode 100644
index e5099141c3231e..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/reference-terminology.md
+++ /dev/null
@@ -1,176 +0,0 @@
----
-id: reference-terminology
-title: Pulsar Terminology
-sidebar_label: "Terminology"
-original_id: reference-terminology
----
-
-Here is a glossary of terms related to Apache Pulsar:
-
-### Concepts
-
-#### Pulsar
-
-Pulsar is a distributed messaging system originally created by Yahoo but now under the stewardship of the Apache Software Foundation.
-
-#### Message
-
-Messages are the basic unit of Pulsar. They're what [producers](#producer) publish to [topics](#topic)
-and what [consumers](#consumer) then consume from topics.
-
-#### Topic
-
-A named channel used to pass messages published by [producers](#producer) to [consumers](#consumer) who
-process those [messages](#message).
-
-#### Partitioned Topic
-
-A topic that is served by multiple Pulsar [brokers](#broker), which enables higher throughput.
-
-#### Namespace
-
-A grouping mechanism for related [topics](#topic).
-
-#### Namespace Bundle
-
-A virtual group of [topics](#topic) that belong to the same [namespace](#namespace). A namespace bundle
-is defined as a range between two 32-bit hashes, such as 0x00000000 and 0xffffffff.
-
-#### Tenant
-
-An administrative unit for allocating capacity and enforcing an authentication/authorization scheme.
-
-#### Subscription
-
-A lease on a [topic](#topic) established by a group of [consumers](#consumer). Pulsar has four subscription
-modes (exclusive, shared, failover and key_shared).
-
-#### Pub-Sub
-
-A messaging pattern in which [producer](#producer) processes publish messages on [topics](#topic) that
-are then consumed (processed) by [consumer](#consumer) processes.
-
-#### Producer
-
-A process that publishes [messages](#message) to a Pulsar [topic](#topic).
-
-#### Consumer
-
-A process that establishes a subscription to a Pulsar [topic](#topic) and processes messages published
-to that topic by [producers](#producer).
-
-#### Reader
-
-Pulsar readers are message processors much like Pulsar [consumers](#consumer) but with two crucial differences:
-
-- you can specify *where* on a topic readers begin processing messages (consumers always begin with the latest
-  available unacked message);
-- readers don't retain data or acknowledge messages.
-
-#### Cursor
-
-The subscription position for a [consumer](#consumer).
-
-#### Acknowledgment (ack)
-
-A message sent to a Pulsar broker by a [consumer](#consumer) indicating that a message has been successfully processed.
-An acknowledgment (ack) is Pulsar's way of knowing that the message can be deleted from the system;
-if a message is not acknowledged, it is retained until it is processed.
-
-#### Negative Acknowledgment (nack)
-
-When an application fails to process a particular message, it can send a "negative ack" to Pulsar
-to signal that the message should be replayed at a later time. (By default, failed messages are
-replayed after a 1-minute delay). Be aware that negative acknowledgment on ordered subscription types,
-such as Exclusive, Failover and Key_Shared, can cause failed messages to arrive at consumers out of the original order.
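-
-For illustration, negative acknowledgment in the Java client looks roughly like the following sketch (the topic, subscription, and processing logic are placeholders):
-
-```java
-
-Consumer<String> consumer = client.newConsumer(Schema.STRING)
-        .topic("my-topic")
-        .subscriptionName("my-sub")
-        .negativeAckRedeliveryDelay(60, TimeUnit.SECONDS) // matches the 1-minute default
-        .subscribe();
-
-Message<String> msg = consumer.receive();
-try {
-    process(msg);                      // application-specific processing (placeholder)
-    consumer.acknowledge(msg);         // ack: the message can be deleted from the system
-} catch (Exception e) {
-    consumer.negativeAcknowledge(msg); // nack: the message is replayed after the delay
-}
-
-```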

-
-#### Unacknowledged
-
-A message that has been delivered to a consumer for processing but not yet confirmed as processed by the consumer.
-
-#### Retention Policy
-
-Size and time limits that you can set on a [namespace](#namespace) to configure retention of [messages](#message)
-that have already been [acknowledged](#acknowledgment-ack).
-
-#### Multi-Tenancy
-
-The ability to isolate [namespaces](#namespace), specify quotas, and configure authentication and authorization
-on a per-[tenant](#tenant) basis.
-
-#### Failure Domain
-
-A logical domain under a Pulsar cluster. Each logical domain contains a pre-configured list of brokers.
-
-#### Anti-affinity Namespaces
-
-A group of namespaces that have anti-affinity to each other.
-
-### Architecture
-
-#### Standalone
-
-A lightweight Pulsar broker in which all components run in a single Java Virtual Machine (JVM) process. Standalone
-clusters can be run on a single machine and are useful for development purposes.
-
-#### Cluster
-
-A set of Pulsar [brokers](#broker) and [BookKeeper](#bookkeeper) servers (aka [bookies](#bookie)).
-Clusters can reside in different geographical regions and replicate messages to one another
-in a process called [geo-replication](#geo-replication).
-
-#### Instance
-
-A group of Pulsar [clusters](#cluster) that act together as a single unit.
-
-#### Geo-Replication
-
-Replication of messages across Pulsar [clusters](#cluster), potentially in different datacenters
-or geographical regions.
-
-#### Configuration Store
-
-Pulsar's configuration store (previously known as *global ZooKeeper*) is a ZooKeeper quorum that
-is used for configuration-specific tasks. A multi-cluster Pulsar installation requires just one
-configuration store across all [clusters](#cluster).
-
-#### Topic Lookup
-
-A service provided by Pulsar [brokers](#broker) that enables connecting clients to automatically determine
-which Pulsar [cluster](#cluster) is responsible for a [topic](#topic) (and thus where message traffic for
-the topic needs to be routed).
-
-#### Service Discovery
-
-A mechanism provided by Pulsar that enables connecting clients to use just a single URL to interact
-with all the [brokers](#broker) in a [cluster](#cluster).
-
-#### Broker
-
-A stateless component of Pulsar [clusters](#cluster) that runs two other components: an HTTP server
-exposing a REST interface for administration and topic lookup, and a [dispatcher](#dispatcher) that
-handles all message transfers. Pulsar clusters typically consist of multiple brokers.
-
-#### Dispatcher
-
-An asynchronous TCP server used for all data transfers in and out of a Pulsar [broker](#broker). The Pulsar
-dispatcher uses a custom binary protocol for all communications.
-
-### Storage
-
-#### BookKeeper
-
-[Apache BookKeeper](http://bookkeeper.apache.org/) is a scalable, low-latency persistent log storage
-service that Pulsar uses to store data.
-
-#### Bookie
-
-Bookie is the name of an individual BookKeeper server. It is effectively the storage server of Pulsar.
-
-#### Ledger
-
-An append-only data structure in [BookKeeper](#bookkeeper) that is used to persistently store messages in Pulsar [topics](#topic).
-
-### Functions
-
-Pulsar Functions are lightweight functions that can consume messages from Pulsar topics, apply custom processing logic, and, if desired, publish results to topics.
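-
-As a minimal sketch, a Pulsar Function can be a plain Java function; the input and output topics are wired up at deployment time rather than in the code:
-
-```java
-
-import java.util.function.Function;
-
-// Consumes a String from the input topic and publishes the
-// transformed String to the output topic.
-public class ExclamationFunction implements Function<String, String> {
-    @Override
-    public String apply(String input) {
-        return String.format("%s!", input);
-    }
-}
-
-```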
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/schema-evolution-compatibility.md b/site2/website/versioned_docs/version-2.8.0-deprecated/schema-evolution-compatibility.md
deleted file mode 100644
index 3e78429df69da2..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/schema-evolution-compatibility.md
+++ /dev/null
@@ -1,201 +0,0 @@
----
-id: schema-evolution-compatibility
-title: Schema evolution and compatibility
-sidebar_label: "Schema evolution and compatibility"
-original_id: schema-evolution-compatibility
----
-
-Normally, schemas do not stay the same over a long period of time. Instead, they undergo evolutions to satisfy new needs.
-
-This chapter examines how Pulsar schema evolves and what the Pulsar schema compatibility check strategies are.
-
-## Schema evolution
-
-Pulsar schema is defined in a data structure called `SchemaInfo`.
-
-Each `SchemaInfo` stored with a topic has a version. The version is used to manage the schema changes happening within a topic.
-
-The message produced with `SchemaInfo` is tagged with a schema version. When a message is consumed by a Pulsar client, the Pulsar client can use the schema version to retrieve the corresponding `SchemaInfo` and use the correct schema information to deserialize data.
-
-### What is schema evolution?
-
-Schemas store the details of attributes and types. To satisfy new business requirements, you inevitably need to update schemas over time, which is called **schema evolution**.
-
-Any schema changes affect downstream consumers. Schema evolution ensures that the downstream consumers can seamlessly handle data encoded with both old schemas and new schemas.
-
-### How should Pulsar schema evolve?
-
-The answer is the Pulsar schema compatibility check strategy, which determines how a broker compares new schemas with the old schemas of a topic.
-
-For more information, see [Schema compatibility check strategy](#schema-compatibility-check-strategy).
-
-### How does Pulsar support schema evolution?
-
-1. When a producer/consumer/reader connects to a broker, the broker deploys the schema compatibility checker configured by `schemaRegistryCompatibilityCheckers` to enforce the schema compatibility check.
-
-   The schema compatibility checker is one instance per schema type.
-
-   Currently, Avro and JSON have their own compatibility checkers, while all the other schema types share the default compatibility checker, which disables schema evolution.
-
-2. The producer/consumer/reader sends its client `SchemaInfo` to the broker.
-
-3. The broker knows the schema type and locates the schema compatibility checker for that type.
-
-4. The broker uses the checker to check if the `SchemaInfo` is compatible with the latest schema of the topic by applying its compatibility check strategy.
-
-   Currently, the compatibility check strategy is configured at the namespace level and applied to all the topics within that namespace.
-
-## Schema compatibility check strategy
-
-Pulsar has 8 schema compatibility check strategies, which are summarized in the following table.
-
-Suppose that you have a topic containing three schemas (V1, V2, and V3), V1 is the oldest and V3 is the latest:
-
-| Compatibility check strategy | Definition | Changes allowed | Check against which schema | Upgrade first |
-| --- | --- | --- | --- | --- |
-| `ALWAYS_COMPATIBLE` | Disable schema compatibility check. | All changes are allowed | All previous versions | Any order |
-| `ALWAYS_INCOMPATIBLE` | Disable schema evolution. | All changes are disabled | None | None |
-| `BACKWARD` | Consumers using the schema V3 can process data written by producers using the schema V3 or V2. | <li>Add optional fields</li><li>Delete fields</li> | Latest version | Consumers |
-| `BACKWARD_TRANSITIVE` | Consumers using the schema V3 can process data written by producers using the schema V3, V2 or V1. | <li>Add optional fields</li><li>Delete fields</li> | All previous versions | Consumers |
-| `FORWARD` | Consumers using the schema V3 or V2 can process data written by producers using the schema V3. | <li>Add fields</li><li>Delete optional fields</li> | Latest version | Producers |
-| `FORWARD_TRANSITIVE` | Consumers using the schema V3, V2 or V1 can process data written by producers using the schema V3. | <li>Add fields</li><li>Delete optional fields</li> | All previous versions | Producers |
-| `FULL` | Backward and forward compatible between the schema V3 and V2. | <li>Modify optional fields</li> | Latest version | Any order |
-| `FULL_TRANSITIVE` | Backward and forward compatible among the schema V3, V2, and V1. | <li>Modify optional fields</li> | All previous versions | Any order |
-
-### ALWAYS_COMPATIBLE and ALWAYS_INCOMPATIBLE
-
-| Compatibility check strategy | Definition | Note |
-| --- | --- | --- |
-| `ALWAYS_COMPATIBLE` | Disable schema compatibility check. | None |
-| `ALWAYS_INCOMPATIBLE` | Disable schema evolution, that is, any schema change is rejected. | <li>For all schema types except Avro and JSON, the default schema compatibility check strategy is `ALWAYS_INCOMPATIBLE`.</li><li>For Avro and JSON, the default schema compatibility check strategy is `FULL`.</li> |
-
-#### Example
-
-* Example 1
-
-  In some situations, an application needs to store events of several different types in the same Pulsar topic.
-
-  In particular, when developing a data model in an `Event Sourcing` style, you might have several kinds of events that affect the state of an entity.
-
-  For example, for a user entity, there are `userCreated`, `userAddressChanged` and `userEnquiryReceived` events. The application requires that those events are always read in the same order.
-
-  Consequently, those events need to go in the same Pulsar partition to maintain order. This application can use `ALWAYS_COMPATIBLE` to allow different kinds of events to co-exist in the same topic.
-
-* Example 2
-
-  Sometimes we also make incompatible changes.
-
-  For example, you are modifying a field type from `string` to `int`.
-
-  In this case, you need to:
-
-  * Upgrade all producers and consumers to the new schema versions at the same time.
-
-  * Optionally, create a new topic and start migrating applications to use the new topic and the new schema, avoiding the need to handle two incompatible versions in the same topic.
-
-### BACKWARD and BACKWARD_TRANSITIVE
-
-Suppose that you have a topic containing three schemas (V1, V2, and V3), V1 is the oldest and V3 is the latest:
-
-| Compatibility check strategy | Definition | Description |
-|---|---|---|
-`BACKWARD` | Consumers using the new schema can process data written by producers using the **last schema**. | The consumers using the schema V3 can process data written by producers using the schema V3 or V2. |
-`BACKWARD_TRANSITIVE` | Consumers using the new schema can process data written by producers using **all previous schemas**. | The consumers using the schema V3 can process data written by producers using the schema V3, V2, or V1. |
-
-#### Example
-
-* Example 1
-
-  Remove a field.
-
-  A consumer constructed to process events without one field can process events written with the old schema containing the field, and the consumer will ignore that field.
-
-* Example 2
-
-  You want to load all Pulsar data into a Hive data warehouse and run SQL queries against the data.
-
-  The same SQL queries must continue to work even when the data is changed. To support it, you can evolve the schemas using the `BACKWARD` strategy.
-
-### FORWARD and FORWARD_TRANSITIVE
-
-Suppose that you have a topic containing three schemas (V1, V2, and V3), V1 is the oldest and V3 is the latest:
-
-| Compatibility check strategy | Definition | Description |
-|---|---|---|
-`FORWARD` | Consumers using the **last schema** can process data written by producers using a new schema, even though they may not be able to use the full capabilities of the new schema. | The consumers using the schema V3 or V2 can process data written by producers using the schema V3. |
-`FORWARD_TRANSITIVE` | Consumers using **all previous schemas** can process data written by producers using a new schema. | The consumers using the schema V3, V2, or V1 can process data written by producers using the schema V3. |
-
-#### Example
-
-* Example 1
-
-  Add a field.
-
-  In most data formats, consumers written to process events without new fields can continue doing so even when they receive new events containing new fields.
-
-* Example 2
-
-  If a consumer has an application logic tied to a full version of a schema, the application logic may not be updated instantly when the schema evolves.
-
-  In this case, you need to project data with a new schema onto an old schema that the application understands.

-
-  Consequently, you can evolve the schemas using the `FORWARD` strategy to ensure that the old schema can process data encoded with the new schema.
-
-### FULL and FULL_TRANSITIVE
-
-Suppose that you have a topic containing three schemas (V1, V2, and V3), V1 is the oldest and V3 is the latest:
-
-| Compatibility check strategy | Definition | Description | Note |
-| --- | --- | --- | --- |
-| `FULL` | Schemas are both backward and forward compatible, which means: Consumers using the last schema can process data written by producers using the new schema. AND Consumers using the new schema can process data written by producers using the last schema. | Consumers using the schema V3 can process data written by producers using the schema V3 or V2. AND Consumers using the schema V3 or V2 can process data written by producers using the schema V3. | <li>For Avro and JSON, the default schema compatibility check strategy is `FULL`.</li><li>For all schema types except Avro and JSON, the default schema compatibility check strategy is `ALWAYS_INCOMPATIBLE`.</li> |
-| `FULL_TRANSITIVE` | The new schema is backward and forward compatible with all previously registered schemas. | Consumers using the schema V3 can process data written by producers using the schema V3, V2 or V1. AND Consumers using the schema V3, V2 or V1 can process data written by producers using the schema V3. | None |
-
-#### Example
-
-In some data formats, for example, Avro, you can define fields with default values. Consequently, adding or removing a field with a default value is a fully compatible change.
-
-## Schema verification
-
-When a producer or a consumer tries to connect to a topic, a broker performs some checks to verify a schema.
-
-### Producer
-
-When a producer tries to connect to a topic (assuming schema auto-creation is disabled), a broker does the following checks:
-
-* Check if the schema carried by the producer exists in the schema registry or not.
-
-  * If the schema is already registered, then the producer is connected to a broker and produces messages with that schema.
-
-  * If the schema is not registered, then Pulsar verifies if the schema is allowed to be registered based on the configured compatibility check strategy.
-
-### Consumer
-When a consumer tries to connect to a topic, a broker checks if a carried schema is compatible with a registered schema based on the configured schema compatibility check strategy.
-
-| Compatibility check strategy | Check logic |
-| --- | --- |
-| `ALWAYS_COMPATIBLE` | All pass |
-| `ALWAYS_INCOMPATIBLE` | No pass |
-| `BACKWARD` | Can read the last schema |
-| `BACKWARD_TRANSITIVE` | Can read all schemas |
-| `FORWARD` | Can read the last schema |
-| `FORWARD_TRANSITIVE` | Can read the last schema |
-| `FULL` | Can read the last schema |
-| `FULL_TRANSITIVE` | Can read all schemas |
-
-## Order of upgrading clients
-
-The order of upgrading client applications is determined by the compatibility check strategy.
-
-For example, suppose the producers use schemas to write data to Pulsar and the consumers use schemas to read data from Pulsar.
-
-| Compatibility check strategy | Upgrade first | Description |
-| --- | --- | --- |
-| `ALWAYS_COMPATIBLE` | Any order | The compatibility check is disabled. Consequently, you can upgrade the producers and consumers in **any order**. |
-| `ALWAYS_INCOMPATIBLE` | None | The schema evolution is disabled. |
-| <li>`BACKWARD`</li><li>`BACKWARD_TRANSITIVE`</li> | Consumers | There is no guarantee that consumers using the old schema can read data produced using the new schema. Consequently, **upgrade all consumers first**, and then start producing new data. |
-| <li>`FORWARD`</li><li>`FORWARD_TRANSITIVE`</li> | Producers | There is no guarantee that consumers using the new schema can read data produced using the old schema. Consequently, **upgrade all producers first** to use the new schema and ensure that the data already produced using the old schemas are not available to consumers, and then upgrade the consumers. |
-| <li>`FULL`</li><li>`FULL_TRANSITIVE`</li> | Any order | There is no guarantee that consumers using the old schema can read data produced using the new schema and consumers using the new schema can read data produced using the old schema. Consequently, you can upgrade the producers and consumers in **any order**. |
-
-
-
-
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/schema-get-started.md b/site2/website/versioned_docs/version-2.8.0-deprecated/schema-get-started.md
deleted file mode 100644
index afacb0fa51f2ef..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/schema-get-started.md
+++ /dev/null
@@ -1,102 +0,0 @@
----
-id: schema-get-started
-title: Get started
-sidebar_label: "Get started"
-original_id: schema-get-started
----
-
-This chapter introduces Pulsar schemas and explains why they are important.
-
-## Schema Registry
-
-Type safety is extremely important in any application built around a message bus like Pulsar.
-
-Producers and consumers need some kind of mechanism for coordinating types at the topic level to avoid various potential problems, for example, serialization and deserialization issues.
-
-Applications typically adopt one of the following approaches to guarantee type safety in messaging. Both approaches are available in Pulsar, and you're free to adopt one or the other or to mix and match on a per-topic basis.
-
-#### Note
->
-> Currently, the Pulsar schema registry is only available for the [Java client](client-libraries-java.md), [CGo client](client-libraries-cgo.md), [Python client](client-libraries-python.md), and [C++ client](client-libraries-cpp.md).
-
-### Client-side approach
-
-Producers and consumers are responsible for not only serializing and deserializing messages (which consist of raw bytes) but also "knowing" which types are being transmitted via which topics.
-
-If a producer is sending temperature sensor data on the topic `topic-1`, consumers of that topic will run into trouble if they attempt to parse that data as moisture sensor readings.
-
-Producers and consumers can send and receive messages consisting of raw byte arrays and leave all type safety enforcement to the application on an "out-of-band" basis.
-
-### Server-side approach
-
-Producers and consumers inform the system which data types can be transmitted via the topic.
-
-With this approach, the messaging system enforces type safety and ensures that producers and consumers remain synced.
-
-Pulsar has a built-in **schema registry** that enables clients to upload data schemas on a per-topic basis. Those schemas dictate which data types are recognized as valid for that topic.
-
-## Why use schema
-
-When a schema is not enabled, Pulsar does not parse data: it takes bytes as inputs and sends bytes as outputs. Since data has meaning beyond bytes, you need to parse the data yourself and might encounter parse exceptions, which mainly occur in the following situations:
-
-* The field does not exist
-
-* The field type has changed (for example, `string` is changed to `int`)
-
-There are a few ways to prevent and overcome these exceptions. For example, you can catch exceptions when parse errors occur, which makes code hard to maintain; or you can adopt a schema management system that performs schema evolution without breaking downstream applications and that enforces type safety, to the maximum extent possible, in the language you are using. That solution is Pulsar Schema.
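-
-As a sketch of the first, fragile approach (the payload layout and field positions are hypothetical):
-
-```java
-
-// Manual, schema-less parsing: every change to the payload layout
-// must be caught and handled by hand.
-byte[] payload = msg.getData();
-try {
-    String[] fields = new String(payload, java.nio.charset.StandardCharsets.UTF_8).split(",");
-    int age = Integer.parseInt(fields[1]); // throws if the field is missing or its type changed
-} catch (ArrayIndexOutOfBoundsException | NumberFormatException e) {
-    // parse exception: the producer's data layout no longer matches expectations
-}
-
-```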

-
-Pulsar schema enables you to use language-specific types of data when constructing and handling messages from simple types like `string` to more complex application-specific types.
-
-**Example**
-
-You can use the _User_ class to define the messages sent to Pulsar topics.
-
-```
-
-public class User {
-    String name;
-    int age;
-}
-
-```
-
-When constructing a producer with the _User_ class, you can specify a schema or not as below.
-
-### Without schema
-
-If you construct a producer without specifying a schema, then the producer can only produce messages of type `byte[]`. If you have a POJO class, you need to serialize the POJO into bytes before sending messages.
-
-**Example**
-
-```
-
-Producer<byte[]> producer = client.newProducer()
-        .topic(topic)
-        .create();
-User user = new User("Tom", 28);
-byte[] message = … // serialize the `user` by yourself;
-producer.send(message);
-
-```
-
-### With schema
-
-If you construct a producer by specifying a schema, then you can send a class to a topic directly without worrying about how to serialize POJOs into bytes.
-
-**Example**
-
-This example constructs a producer with the _JSONSchema_, and you can send the _User_ class to topics directly without worrying about how to serialize it into bytes.
-
-```
-
-Producer<User> producer = client.newProducer(JSONSchema.of(User.class))
-        .topic(topic)
-        .create();
-User user = new User("Tom", 28);
-producer.send(user);
-
-```
-
-### Summary
-
-When constructing a producer with a schema, you do not need to serialize messages into bytes; instead, Pulsar schema does this job in the background.
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/schema-manage.md b/site2/website/versioned_docs/version-2.8.0-deprecated/schema-manage.md
deleted file mode 100644
index c588aae619eee9..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/schema-manage.md
+++ /dev/null
@@ -1,639 +0,0 @@
----
-id: schema-manage
-title: Manage schema
-sidebar_label: "Manage schema"
-original_id: schema-manage
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-This guide demonstrates the ways to manage schemas:
-
-* Automatically
-
-  * [Schema AutoUpdate](#schema-autoupdate)
-
-* Manually
-
-  * [Schema manual management](#schema-manual-management)
-
-  * [Custom schema storage](#custom-schema-storage)
-
-## Schema AutoUpdate
-
-If a schema passes the schema compatibility check, a Pulsar producer automatically updates this schema to the topic it produces by default.
-
-### AutoUpdate for producer
-
-For a producer, the `AutoUpdate` happens in the following cases:
-
-* If a **topic doesn’t have a schema**, Pulsar registers a schema automatically.
-
-* If a **topic has a schema**:
-
-  * If a **producer doesn’t carry a schema**:
-
-    * If `isSchemaValidationEnforced` or `schemaValidationEnforced` is **disabled** in the namespace to which the topic belongs, the producer is allowed to connect to the topic and produce data.
-
-    * If `isSchemaValidationEnforced` or `schemaValidationEnforced` is **enabled** in the namespace to which the topic belongs, the producer is rejected and disconnected.
-
-  * If a **producer carries a schema**:
-
-    A broker performs the compatibility check based on the configured compatibility check strategy of the namespace to which the topic belongs.
-
-    * If the schema is registered, the producer is connected to a broker.
- - * If the schema is not registered: - - * If `isAllowAutoUpdateSchema` sets to **false**, the producer is rejected to connect to a broker. - - * If `isAllowAutoUpdateSchema` sets to **true**: - - * If the schema passes the compatibility check, then the broker registers a new schema automatically for the topic and the producer is connected. - - * If the schema does not pass the compatibility check, then the broker does not register a schema and the producer is rejected to connect to a broker. - -![AutoUpdate Producer](/assets/schema-producer.png) - -### AutoUpdate for consumer - -For a consumer, the `AutoUpdate` happens in the following cases: - -* If a **consumer connects to a topic without a schema** (which means the consumer receiving raw bytes), the consumer can connect to the topic successfully without doing any compatibility check. - -* If a **consumer connects to a topic with a schema**. - - * If a topic does not have all of them (a schema/data/a local consumer and a local producer): - - * If `isAllowAutoUpdateSchema` sets to **true**, then the consumer registers a schema and it is connected to a broker. - - * If `isAllowAutoUpdateSchema` sets to **false**, then the consumer is rejected to connect to a broker. - - * If a topic has one of them (a schema/data/a local consumer and a local producer), then the schema compatibility check is performed. - - * If the schema passes the compatibility check, then the consumer is connected to the broker. - - * If the schema does not pass the compatibility check, then the consumer is rejected to connect to the broker. - -![AutoUpdate Consumer](/assets/schema-consumer.png) - - -### Manage AutoUpdate strategy - -You can use the `pulsar-admin` command to manage the `AutoUpdate` strategy as below: - -* [Enable AutoUpdate](#enable-autoupdate) - -* [Disable AutoUpdate](#disable-autoupdate) - -* [Adjust compatibility](#adjust-compatibility) - -#### Enable AutoUpdate - -To enable `AutoUpdate` on a namespace, you can use the `pulsar-admin` command. - -```bash - -bin/pulsar-admin namespaces set-is-allow-auto-update-schema --enable tenant/namespace - -``` - -#### Disable AutoUpdate - -To disable `AutoUpdate` on a namespace, you can use the `pulsar-admin` command. - -```bash - -bin/pulsar-admin namespaces set-is-allow-auto-update-schema --disable tenant/namespace - -``` - -Once the `AutoUpdate` is disabled, you can only register a new schema using the `pulsar-admin` command. - -#### Adjust compatibility - -To adjust the schema compatibility level on a namespace, you can use the `pulsar-admin` command. - -```bash - -bin/pulsar-admin namespaces set-schema-compatibility-strategy --compatibility tenant/namespace - -``` - -### Schema validation - -By default, `schemaValidationEnforced` is **disabled** for producers: - -* This means a producer without a schema can produce any kind of messages to a topic with schemas, which may result in producing trash data to the topic. - -* This allows non-java language clients that don’t support schema can produce messages to a topic with schemas. - -However, if you want a stronger guarantee on the topics with schemas, you can enable `schemaValidationEnforced` across the whole cluster or on a per-namespace basis. - -#### Enable schema validation - -To enable `schemaValidationEnforced` on a namespace, you can use the `pulsar-admin` command. 

-
-```bash
-
-bin/pulsar-admin namespaces set-schema-validation-enforce --enable tenant/namespace
-
-```
-
-#### Disable schema validation
-
-To disable `schemaValidationEnforced` on a namespace, you can use the `pulsar-admin` command.
-
-```bash
-
-bin/pulsar-admin namespaces set-schema-validation-enforce --disable tenant/namespace
-
-```
-
-## Schema manual management
-
-To manage schemas, you can use one of the following methods.
-
-| Method | Description |
-| --- | --- |
-| **Admin CLI** | You can use the `pulsar-admin` tool to manage Pulsar schemas, brokers, clusters, sources, sinks, topics, tenants and so on. For more information about how to use the `pulsar-admin` tool, see [here](reference-pulsar-admin.md). |
-| **REST API** | Pulsar exposes schema-related management APIs in Pulsar's admin RESTful API. You can access the admin RESTful endpoint directly to manage schemas. For more information about how to use the Pulsar REST API, see [here](http://pulsar.apache.org/admin-rest-api/). |
-| **Java Admin API** | Pulsar provides a Java admin library. |
-
-### Upload a schema
-
-To upload (register) a new schema for a topic, you can use one of the following methods.
-
-````mdx-code-block
-
-
-Use the `upload` subcommand.
-
-```bash
-
-$ pulsar-admin schemas upload --filename <schema-definition-file> <topic-name>
-
-```
-
-The `schema-definition-file` is in JSON format.
-
-```json
-
-{
-    "type": "<schema-type>",
-    "schema": "<an-utf8-encoded-string-of-schema-definition-data>",
-    "properties": {} // the properties associated with the schema
-}
-
-```
-
-The `schema-definition-file` includes the following fields:
-
-| Field | Description |
-| --- | --- |
-| `type` | The schema type. |
-| `schema` | The schema definition data, which is encoded in UTF 8 charset. <li>If the schema is a **primitive** schema, this field should be blank.</li><li>If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition.</li> |
-| `properties` | The additional properties associated with the schema. |
-
-Here are examples of the `schema-definition-file` for a JSON schema.
-
-**Example 1**
-
-```json
-
-{
-    "type": "JSON",
-    "schema": "{\"type\":\"record\",\"name\":\"User\",\"namespace\":\"com.foo\",\"fields\":[{\"name\":\"file1\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"file2\",\"type\":\"string\",\"default\":null},{\"name\":\"file3\",\"type\":[\"null\",\"string\"],\"default\":\"dfdf\"}]}",
-    "properties": {}
-}
-
-```
-
-**Example 2**
-
-```json
-
-{
-    "type": "STRING",
-    "schema": "",
-    "properties": {
-        "key1": "value1"
-    }
-}
-
-```

-
-Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v2/schemas/:tenant/:namespace/:topic/schema|operation/uploadSchema?version=@pulsar:version_number@}
-
-The post payload is in JSON format.
-
-```json
-
-{
-    "type": "<schema-type>",
-    "schema": "<an-utf8-encoded-string-of-schema-definition-data>",
-    "properties": {} // the properties associated with the schema
-}
-
-```
-
-The post payload includes the following fields:
-
-| Field | Description |
-| --- | --- |
-| `type` | The schema type. |
-| `schema` | The schema definition data, which is encoded in UTF 8 charset. <li>If the schema is a **primitive** schema, this field should be blank.</li><li>If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition.</li> |
-| `properties` | The additional properties associated with the schema. |

-
-```java
-
-void createSchema(String topic, PostSchemaPayload schemaPayload)
-
-```
-
-The `PostSchemaPayload` includes the following fields:
-
-| Field | Description |
-| --- | --- |
-| `type` | The schema type. |
-| `schema` | The schema definition data, which is encoded in UTF 8 charset. <li>If the schema is a **primitive** schema, this field should be blank.</li><li>If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition.</li> |
-| `properties` | The additional properties associated with the schema. |
-
-Here is an example of `PostSchemaPayload`:
-
-```java
-
-PulsarAdmin admin = …;
-
-PostSchemaPayload payload = new PostSchemaPayload();
-payload.setType("INT8");
-payload.setSchema("");
-
-admin.createSchema("my-tenant/my-ns/my-topic", payload);
-
-```
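-
-A struct schema can be uploaded the same way. A sketch, assuming the topic carries JSON-encoded `User` records (the Avro-style definition string below is a placeholder):
-
-```java
-
-PostSchemaPayload payload = new PostSchemaPayload();
-payload.setType("JSON");
-// For struct schemas, the schema field carries the Avro schema definition as a JSON string.
-payload.setSchema("{\"type\":\"record\",\"name\":\"User\",\"fields\":[{\"name\":\"name\",\"type\":\"string\"}]}");
-
-admin.createSchema("my-tenant/my-ns/my-topic", payload);
-
-```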
    - -

-````
-
-### Get a schema (latest)
-
-To get the latest schema for a topic, you can use one of the following methods.
-
-````mdx-code-block
-
-
-Use the `get` subcommand.
-
-```bash
-
-$ pulsar-admin schemas get <topic-name>
-
-{
-    "version": 0,
-    "type": "String",
-    "timestamp": 0,
-    "data": "string",
-    "properties": {
-        "property1": "string",
-        "property2": "string"
-    }
-}
-
-```
-
-
-
-Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v2/schemas/:tenant/:namespace/:topic/schema|operation/getSchema?version=@pulsar:version_number@}
-
-Here is an example of a response, which is returned in JSON format.
-
-```json
-
-{
-    "version": "<the-version-number-of-the-schema>",
-    "type": "<the-schema-type>",
-    "timestamp": "<the-creation-timestamp-of-the-version-of-the-schema>",
-    "data": "<an-utf8-encoded-string-of-schema-definition-data>",
-    "properties": {} // the properties associated with the schema
-}
-
-```
-
-The response includes the following fields:
-
-| Field | Description |
-| --- | --- |
-| `version` | The schema version, which is a long number. |
-| `type` | The schema type. |
-| `timestamp` | The timestamp of creating this version of schema. |
-| `data` | The schema definition data, which is encoded in UTF 8 charset. <li>If the schema is a **primitive** schema, this field should be blank.</li><li>If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition.</li> |
-| `properties` | The additional properties associated with the schema. |

-
-```java
-
-SchemaInfo getSchema(String topic)
-
-```
-
-The `SchemaInfo` includes the following fields:
-
-| Field | Description |
-| --- | --- |
-| `name` | The schema name. |
-| `type` | The schema type. |
-| `schema` | A byte array of the schema definition data, which is encoded in UTF 8 charset. <li>If the schema is a **primitive** schema, this byte array should be empty.</li><li>If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition converted to a byte array.</li> |
-| `properties` | The additional properties associated with the schema. |
-
-Here is an example of `SchemaInfo`:
-
-```java
-
-PulsarAdmin admin = …;
-
-SchemaInfo si = admin.getSchema("my-tenant/my-ns/my-topic");
-
-```
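-
-The returned `SchemaInfo` can then be inspected; a small follow-up sketch:
-
-```java
-
-// The schema definition is carried as UTF-8 bytes; for struct schemas it is the
-// Avro schema definition as a JSON string, and for primitive schemas it is empty.
-System.out.println(si.getType());
-System.out.println(new String(si.getSchema(), java.nio.charset.StandardCharsets.UTF_8));
-
-```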
    - -

-````
-
-### Get a schema (specific)
-
-To get a specific version of a schema, you can use one of the following methods.
-
-````mdx-code-block
-
-
-Use the `get` subcommand.
-
-```bash
-
-$ pulsar-admin schemas get <topic-name> --version=<version>
-
-```
-
-
-
-Send a `GET` request to a schema endpoint: {@inject: endpoint|GET|/admin/v2/schemas/:tenant/:namespace/:topic/schema/:version|operation/getSchema?version=@pulsar:version_number@}
-
-Here is an example of a response, which is returned in JSON format.
-
-```json
-
-{
-    "version": "<the-version-number-of-the-schema>",
-    "type": "<the-schema-type>",
-    "timestamp": "<the-creation-timestamp-of-the-version-of-the-schema>",
-    "data": "<an-utf8-encoded-string-of-schema-definition-data>",
-    "properties": {} // the properties associated with the schema
-}
-
-```
-
-The response includes the following fields:
-
-| Field | Description |
-| --- | --- |
-| `version` | The schema version, which is a long number. |
-| `type` | The schema type. |
-| `timestamp` | The timestamp of creating this version of schema. |
-| `data` | The schema definition data, which is encoded in UTF 8 charset. <li>If the schema is a **primitive** schema, this field should be blank.</li><li>If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition.</li> |
-| `properties` | The additional properties associated with the schema. |

-
-```java
-
-SchemaInfo getSchema(String topic, long version)
-
-```
-
-The `SchemaInfo` includes the following fields:
-
-| Field | Description |
-| --- | --- |
-| `name` | The schema name. |
-| `type` | The schema type. |
-| `schema` | A byte array of the schema definition data, which is encoded in UTF 8. <li>If the schema is a **primitive** schema, this byte array should be empty.</li><li>If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition converted to a byte array.</li> |
-| `properties` | The additional properties associated with the schema. |
-
-Here is an example of `SchemaInfo`:
-
-```java
-
-PulsarAdmin admin = …;
-
-SchemaInfo si = admin.getSchema("my-tenant/my-ns/my-topic", 1L);
-
-```
    - -

-````
-
-### Extract a schema
-
-To provide a schema via a topic, you can use the following method.
-
-````mdx-code-block
-
-
-Use the `extract` subcommand.
-
-```bash
-
-$ pulsar-admin schemas extract --classname <class-name> --jar <jar-path> --type <type-name> <topic-name>
-
-```
-
-
-
-````
-
-### Delete a schema
-
-To delete a schema for a topic, you can use one of the following methods.
-
-:::note
-
-In any case, the **delete** action deletes **all versions** of a schema registered for a topic.
-
-:::
-
-````mdx-code-block
-
-
-Use the `delete` subcommand.
-
-```bash
-
-$ pulsar-admin schemas delete <topic-name>
-
-```
-
-
-
-Send a `DELETE` request to a schema endpoint: {@inject: endpoint|DELETE|/admin/v2/schemas/:tenant/:namespace/:topic/schema|operation/deleteSchema?version=@pulsar:version_number@}
-
-Here is an example of a response, which is returned in JSON format.
-
-```json
-
-{
-    "version": "<the-version-number-of-the-schema>",
-}
-
-```
-
-The response includes the following field:
-
-Field | Description |
----|---|
-`version` | The schema version, which is a long number. |
-
-
-
-```java
-
-void deleteSchema(String topic)
-
-```
-
-Here is an example of deleting a schema.
-
-```java
-
-PulsarAdmin admin = …;
-
-admin.deleteSchema("my-tenant/my-ns/my-topic");
-
-```
-
-
-
-````
-
-## Custom schema storage
-
-By default, Pulsar stores various data types of schemas in [Apache BookKeeper](https://bookkeeper.apache.org) deployed alongside Pulsar.
-
-However, you can use another storage system if needed.
-
-### Implement
-
-To use a non-default (non-BookKeeper) storage system for Pulsar schemas, you need to implement the following Java interfaces:
-
-* [SchemaStorage interface](#schemastorage-interface)
-
-* [SchemaStorageFactory interface](#schemastoragefactory-interface)
-
-#### SchemaStorage interface
-
-The `SchemaStorage` interface has the following methods:
-
-```java
-
-public interface SchemaStorage {
-    // How schemas are updated
-    CompletableFuture<SchemaVersion> put(String key, byte[] value, byte[] hash);
-
-    // How schemas are fetched from storage
-    CompletableFuture<StoredSchema> get(String key, SchemaVersion version);
-
-    // How schemas are deleted
-    CompletableFuture<SchemaVersion> delete(String key);
-
-    // Utility method for converting a schema version byte array to a SchemaVersion object
-    SchemaVersion versionFromBytes(byte[] version);
-
-    // Startup behavior for the schema storage client
-    void start() throws Exception;
-
-    // Shutdown behavior for the schema storage client
-    void close() throws Exception;
-}
-
-```
-
-:::tip
-
-For a complete example of a **schema storage** implementation, see the [BookkeeperSchemaStorage](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/schema/BookkeeperSchemaStorage.java) class.
-
-:::
-
-#### SchemaStorageFactory interface
-
-The `SchemaStorageFactory` interface has the following method:
-
-```java
-
-public interface SchemaStorageFactory {
-    @NotNull
-    SchemaStorage create(PulsarService pulsar) throws Exception;
-}
-
-```
-
-:::tip
-
-For a complete example of a **schema storage factory** implementation, see the [BookkeeperSchemaStorageFactory](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/schema/BookkeeperSchemaStorageFactory.java) class.
-
-:::
-
-### Deploy
-
-To use your custom schema storage implementation, perform the following steps.
-
-1. Package the implementation in a [JAR](https://docs.oracle.com/javase/tutorial/deployment/jar/basicsindex.html) file.
-
-2. Add the JAR file to the `lib` folder in your Pulsar binary or source distribution.
-
-3. Change the `schemaRegistryStorageClassName` configuration in `broker.conf` to your custom factory class.
-
-4. Start Pulsar.
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/schema-understand.md b/site2/website/versioned_docs/version-2.8.0-deprecated/schema-understand.md
deleted file mode 100644
index a86b02add435e1..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/schema-understand.md
+++ /dev/null
@@ -1,556 +0,0 @@
----
-id: schema-understand
-title: Understand schema
-sidebar_label: "Understand schema"
-original_id: schema-understand
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-This chapter explains the basic concepts of Pulsar schema, focuses on the topics of particular importance, and provides additional background.
-
-## SchemaInfo
-
-Pulsar schema is defined in a data structure called `SchemaInfo`.
-
-The `SchemaInfo` is stored and enforced on a per-topic basis and cannot be stored at the namespace or tenant level.
-
-A `SchemaInfo` consists of the following fields:
-
-| Field | Description |
-| --- | --- |
-| `name` | Schema name (a string). |
-| `type` | Schema type, which determines how to interpret the schema data. <li>Predefined schema: see [here](schema-understand.md#schema-type).</li><li>Customized schema: it is left as an empty string.</li> |
-| `schema`(`payload`) | Schema data, which is a sequence of 8-bit unsigned bytes and schema-type specific. |
-| `properties` | A user-defined properties bag as a string/string map. Applications can use this bag for carrying any application-specific logic. Possible properties might be the Git hash associated with the schema, or an environment string like `dev` or `prod`. |
-
-**Example**
-
-This is the `SchemaInfo` of a string.
-
-```json
-
-{
-    "name": "test-string-schema",
-    "type": "STRING",
-    "schema": "",
-    "properties": {}
-}
-
-```
-
-## Schema type
-
-Pulsar supports various schema types, which are mainly divided into two categories:
-
-* Primitive type
-
-* Complex type
-
-### Primitive type
-
-Currently, Pulsar supports the following primitive types:
-
-| Primitive Type | Description |
-|---|---|
-| `BOOLEAN` | A binary value |
-| `INT8` | A 8-bit signed integer |
-| `INT16` | A 16-bit signed integer |
-| `INT32` | A 32-bit signed integer |
-| `INT64` | A 64-bit signed integer |
-| `FLOAT` | A single precision (32-bit) IEEE 754 floating-point number |
-| `DOUBLE` | A double-precision (64-bit) IEEE 754 floating-point number |
-| `BYTES` | A sequence of 8-bit unsigned bytes |
-| `STRING` | A Unicode character sequence |
-| `TIMESTAMP` (`DATE`, `TIME`) | A logic type that represents a specific instant in time with millisecond precision. It stores the number of milliseconds since `January 1, 1970, 00:00:00 GMT` as an `INT64` value |
-| `INSTANT` | A single instantaneous point on the time-line with nanosecond precision |
-| `LOCAL_DATE` | An immutable date-time object that represents a date, often viewed as year-month-day |
-| `LOCAL_TIME` | An immutable date-time object that represents a time, often viewed as hour-minute-second. Time is represented to nanosecond precision. |
-| `LOCAL_DATE_TIME` | An immutable date-time object that represents a date-time, often viewed as year-month-day-hour-minute-second |
-
-For primitive types, Pulsar does not store any schema data in `SchemaInfo`. The `type` in `SchemaInfo` is used to determine how to serialize and deserialize the data.
-
-Some of the primitive schema implementations can use `properties` to store implementation-specific tunable settings. For example, a `string` schema can use `properties` to store the encoding charset to serialize and deserialize strings.
-
-The conversions between **Pulsar schema types** and **language-specific primitive types** are as below.
-
-| Schema Type | Java Type | Python Type | Go Type |
-|---|---|---|---|
-| BOOLEAN | boolean | bool | bool |
-| INT8 | byte | | int8 |
-| INT16 | short | | int16 |
-| INT32 | int | | int32 |
-| INT64 | long | | int64 |
-| FLOAT | float | float | float32 |
-| DOUBLE | double | float | float64 |
-| BYTES | byte[], ByteBuffer, ByteBuf | bytes | []byte |
-| STRING | string | str | string |
-| TIMESTAMP | java.sql.Timestamp | | |
-| TIME | java.sql.Time | | |
-| DATE | java.util.Date | | |
-| INSTANT | java.time.Instant | | |
-| LOCAL_DATE | java.time.LocalDate | | |
-| LOCAL_TIME | java.time.LocalTime | | |
-| LOCAL_DATE_TIME | java.time.LocalDateTime | | |
-
-**Example**
-
-This example demonstrates how to use a string schema.
-
-1. Create a producer with a string schema and send messages.
-
-   ```java
-   
-   Producer<String> producer = client.newProducer(Schema.STRING).create();
-   producer.newMessage().value("Hello Pulsar!").send();
-   
-   ```
-
-2. Create a consumer with a string schema and receive messages.
-
-   ```java
-   
-   Consumer<String> consumer = client.newConsumer(Schema.STRING).subscribe();
-   consumer.receive();
-   
-   ```
-
-### Complex type
-
-Currently, Pulsar supports the following complex types:
-
-| Complex Type | Description |
-|---|---|
-| `keyvalue` | Represents a complex type of a key/value pair. |
-| `struct` | Handles structured data. It supports `AvroBaseStructSchema` and `ProtobufNativeSchema`. |
-
-#### keyvalue
-
-`Keyvalue` schema helps applications define schemas for both key and value.
-
-For `SchemaInfo` of `keyvalue` schema, Pulsar stores the `SchemaInfo` of key schema and the `SchemaInfo` of value schema together.
-
-Pulsar provides the following methods to encode a key/value pair in messages:
-
-* `INLINE`
-
-* `SEPARATED`
-
-You can choose the encoding type when constructing the key/value schema.
-
-````mdx-code-block
-
-
-Key/value pairs are encoded together in the message payload.
-
-
-
-Key is encoded in the message key and the value is encoded in the message payload.
-
-**Example**
-
-This example shows how to construct a key/value schema and then use it to produce and consume messages.
-
-1. Construct a key/value schema with `INLINE` encoding type.
-
-   ```java
-   
-   Schema<KeyValue<Integer, String>> kvSchema = Schema.KeyValue(
-       Schema.INT32,
-       Schema.STRING,
-       KeyValueEncodingType.INLINE
-   );
-   
-   ```
-
-2. Optionally, construct a key/value schema with `SEPARATED` encoding type.
- - ```java - - Schema> kvSchema = Schema.KeyValue( - Schema.INT32, - Schema.STRING, - KeyValueEncodingType.SEPARATED - ); - - ``` - -3. Produce messages using a key/value schema. - - ```java - - Schema> kvSchema = Schema.KeyValue( - Schema.INT32, - Schema.STRING, - KeyValueEncodingType.SEPARATED - ); - - Producer> producer = client.newProducer(kvSchema) - .topic(TOPIC) - .create(); - - final int key = 100; - final String value = "value-100"; - - // send the key/value message - producer.newMessage() - .value(new KeyValue(key, value)) - .send(); - - ``` - -4. Consume messages using a key/value schema. - - ```java - - Schema> kvSchema = Schema.KeyValue( - Schema.INT32, - Schema.STRING, - KeyValueEncodingType.SEPARATED - ); - - Consumer> consumer = client.newConsumer(kvSchema) - ... - .topic(TOPIC) - .subscriptionName(SubscriptionName).subscribe(); - - // receive key/value pair - Message> msg = consumer.receive(); - KeyValue kv = msg.getValue(); - - ``` - - - - -```` - -#### struct - -This section describes the details of type and usage of the `struct` schema. - -##### Type - -`struct` schema supports `AvroBaseStructSchema` and `ProtobufNativeSchema`. - -|Type|Description| ----|---| -`AvroBaseStructSchema`|Pulsar uses [Avro Specification](http://avro.apache.org/docs/current/spec.html) to declare the schema definition for `AvroBaseStructSchema`, which supports `AvroSchema`, `JsonSchema`, and `ProtobufSchema`.

This allows Pulsar:<br />- to use the same tools to manage schema definitions<br />- to use different serialization or deserialization methods to handle data|
-`ProtobufNativeSchema`|`ProtobufNativeSchema` is based on protobuf native Descriptor.<br /><br />This allows Pulsar:<br />- to use native protobuf-v3 to serialize or deserialize data<br />
    - to use `AutoConsume` to deserialize data. - -##### Usage - -Pulsar provides the following methods to use the `struct` schema: - -* `static` - -* `generic` - -* `SchemaDefinition` - -````mdx-code-block - - - - -You can predefine the `struct` schema, which can be a POJO in Java, a `struct` in Go, or classes generated by Avro or Protobuf tools. - -**Example** - -Pulsar gets the schema definition from the predefined `struct` using an Avro library. The schema definition is the schema data stored as a part of the `SchemaInfo`. - -1. Create the _User_ class to define the messages sent to Pulsar topics. - - ```java - - @Builder - @AllArgsConstructor - @NoArgsConstructor - public static class User { - String name; - int age; - } - - ``` - -2. Create a producer with a `struct` schema and send messages. - - ```java - - Producer producer = client.newProducer(Schema.AVRO(User.class)).create(); - producer.newMessage().value(User.builder().name("pulsar-user").age(1).build()).send(); - - ``` - -3. Create a consumer with a `struct` schema and receive messages - - ```java - - Consumer consumer = client.newConsumer(Schema.AVRO(User.class)).subscribe(); - User user = consumer.receive(); - - ``` - - - - -Sometimes applications do not have pre-defined structs, and you can use this method to define schema and access data. - -You can define the `struct` schema using the `GenericSchemaBuilder`, generate a generic struct using `GenericRecordBuilder` and consume messages into `GenericRecord`. - -**Example** - -1. Use `RecordSchemaBuilder` to build a schema. - - ```java - - RecordSchemaBuilder recordSchemaBuilder = SchemaBuilder.record("schemaName"); - recordSchemaBuilder.field("intField").type(SchemaType.INT32); - SchemaInfo schemaInfo = recordSchemaBuilder.build(SchemaType.AVRO); - - Producer producer = client.newProducer(Schema.generic(schemaInfo)).create(); - - ``` - -2. Use `RecordBuilder` to build the struct records. - - ```java - - producer.newMessage().value(schema.newRecordBuilder() - .set("intField", 32) - .build()).send(); - - ``` - - - - -You can define the `schemaDefinition` to generate a `struct` schema. - -**Example** - -1. Create the _User_ class to define the messages sent to Pulsar topics. - - ```java - - @Builder - @AllArgsConstructor - @NoArgsConstructor - public static class User { - String name; - int age; - } - - ``` - -2. Create a producer with a `SchemaDefinition` and send messages. - - ```java - - SchemaDefinition schemaDefinition = SchemaDefinition.builder().withPojo(User.class).build(); - Producer producer = client.newProducer(Schema.AVRO(schemaDefinition)).create(); - producer.newMessage().value(User.builder().name("pulsar-user").age(1).build()).send(); - - ``` - -3. Create a consumer with a `SchemaDefinition` schema and receive messages - - ```java - - SchemaDefinition schemaDefinition = SchemaDefinition.builder().withPojo(User.class).build(); - Consumer consumer = client.newConsumer(Schema.AVRO(schemaDefinition)).subscribe(); - User user = consumer.receive().getValue(); - - ``` - - - - -```` - -### Auto Schema - -If you don't know the schema type of a Pulsar topic in advance, you can use AUTO schema to produce or consume generic records to or from brokers. - -| Auto Schema Type | Description | -|---|---| -| `AUTO_PRODUCE` | This is useful for transferring data **from a producer to a Pulsar topic that has a schema**. | -| `AUTO_CONSUME` | This is useful for transferring data **from a Pulsar topic that has a schema to a consumer**. 
|
-
-#### AUTO_PRODUCE
-
-`AUTO_PRODUCE` schema helps a producer validate whether the bytes it sends are compatible with the schema of a topic.
-
-**Example**
-
-Suppose that:
-
-* You have a producer processing messages from a Kafka topic _K_.
-
-* You have a Pulsar topic _P_, and you do not know its schema type.
-
-* Your application reads the messages from _K_ and writes the messages to _P_.
-
-In this case, you can use `AUTO_PRODUCE` to verify whether the bytes produced by _K_ can be sent to _P_ or not.
-
-```java
-
-Producer<byte[]> pulsarProducer = client.newProducer(Schema.AUTO_PRODUCE_BYTES())
-    …
-    .create();
-
-byte[] kafkaMessageBytes = … ;
-
-pulsarProducer.send(kafkaMessageBytes);
-
-```
-
-#### AUTO_CONSUME
-
-`AUTO_CONSUME` schema helps a consumer validate whether the bytes it receives are compatible with the schema of a topic; that is, the client deserializes messages into language-specific objects using the `SchemaInfo` retrieved from the broker side.
-
-Currently, `AUTO_CONSUME` supports AVRO, JSON and ProtobufNativeSchema schemas. It deserializes messages into `GenericRecord`.
-
-**Example**
-
-Suppose that:
-
-* You have a Pulsar topic _P_.
-
-* You have a consumer (for example, MySQL) receiving messages from the topic _P_.
-
-* Your application reads the messages from _P_ and writes the messages to MySQL.
-
-In this case, you can use `AUTO_CONSUME` to verify whether the bytes produced by _P_ can be sent to MySQL or not.
-
-```java
-
-Consumer<GenericRecord> pulsarConsumer = client.newConsumer(Schema.AUTO_CONSUME())
-    …
-    .subscribe();
-
-Message<GenericRecord> msg = pulsarConsumer.receive();
-GenericRecord record = msg.getValue();
-
-```
-
-## Schema version
-
-Each `SchemaInfo` stored with a topic has a version. The schema version manages schema changes happening within a topic.
-
-Messages produced with a given `SchemaInfo` are tagged with a schema version, so when a message is consumed by a Pulsar client, the client can use the schema version to retrieve the corresponding `SchemaInfo` and then use that `SchemaInfo` to deserialize the data.
-
-Schemas are versioned in succession. Schema storage happens in the broker that handles the associated topics so that version assignments can be made.
-
-Once a version is assigned to or fetched for a schema, all subsequent messages produced by that producer are tagged with the appropriate version.
-
-**Example**
-
-The following example illustrates how the schema version works.
-
-Suppose that a Pulsar [Java client](client-libraries-java.md) created using the code below attempts to connect to Pulsar and begins to send messages:
-
-```java
-
-PulsarClient client = PulsarClient.builder()
-    .serviceUrl("pulsar://localhost:6650")
-    .build();
-
-Producer<SensorReading> producer = client.newProducer(JSONSchema.of(SensorReading.class))
-    .topic("sensor-data")
-    .sendTimeout(3, TimeUnit.SECONDS)
-    .create();
-
-```
-
-The table below lists the possible scenarios when this connection attempt occurs and what happens in each scenario:
-
-| Scenario | What happens |
-| --- | --- |
-| <li>No schema exists for the topic.</li> | (1) The producer is created using the given schema. (2) Since no existing schema is compatible with the `SensorReading` schema, the schema is transmitted to the broker and stored. (3) Any consumer created using the same schema or topic can consume messages from the `sensor-data` topic. |
-| <li>A schema already exists.</li><li>The producer connects using the same schema that is already stored.</li> | (1) The schema is transmitted to the broker. (2) The broker determines that the schema is compatible. (3) The broker attempts to store the schema in [BookKeeper](concepts-architecture-overview.md#persistent-storage) but then determines that it's already stored, so it is used to tag produced messages. |
-| <li>A schema already exists.</li><li>The producer connects using a new schema that is compatible.</li> | (1) The schema is transmitted to the broker. (2) The broker determines that the schema is compatible and stores the new schema as the current version (with a new version number). |
-
-## How does schema work
-
-Pulsar schemas are applied and enforced at the **topic** level (schemas cannot be applied at the namespace or tenant level).
-
-Producers and consumers upload schemas to brokers, so Pulsar schemas work on both the producer side and the consumer side.
-
-### Producer side
-
-This diagram illustrates how schema works on the producer side.
-
-![Schema works at the producer side](/assets/schema-producer.png)
-
-1. The application uses a schema instance to construct a producer instance.
-
-   The schema instance defines the schema for the data being produced using the producer instance.
-
-   Take AVRO as an example. Pulsar extracts the schema definition from the POJO class and constructs the `SchemaInfo` that the producer needs to pass to a broker when it connects.
-
-2. The producer connects to the broker with the `SchemaInfo` extracted from the passed-in schema instance.
-
-3. The broker looks up the schema in the schema storage to check if it is already a registered schema.
-
-4. If yes, the broker skips the schema validation since it is a known schema, and returns the schema version to the producer.
-
-5. If no, the broker verifies whether a schema can be automatically created in this namespace:
-
-  * If `isAllowAutoUpdateSchema` is set to **true**, then a schema can be created, and the broker validates the schema based on the schema compatibility check strategy defined for the topic.
-
-  * If `isAllowAutoUpdateSchema` is set to **false**, then a schema cannot be created, and the producer is rejected when it tries to connect to the broker.
-
-**Tip**:
-
-`isAllowAutoUpdateSchema` can be set via **Pulsar admin API** or **REST API.**
-
-For how to set `isAllowAutoUpdateSchema` via Pulsar admin API, see [Manage AutoUpdate Strategy](schema-manage.md/#manage-autoupdate-strategy).
-
-6. If the schema is allowed to be updated, then the compatibility check is performed.
-
-  * If the schema is compatible, the broker stores it and returns the schema version to the producer.
-
-    All the messages produced by this producer are tagged with the schema version.
-
-  * If the schema is incompatible, the broker rejects it.
-
-### Consumer side
-
-This diagram illustrates how schema works on the consumer side.
-
-![Schema works at the consumer side](/assets/schema-consumer.png)
-
-1. The application uses a schema instance to construct a consumer instance.
-
-   The schema instance defines the schema that the consumer uses for decoding messages received from a broker.
-
-2. The consumer connects to the broker with the `SchemaInfo` extracted from the passed-in schema instance.
-
-3. The broker determines whether the topic has any of the following: a schema, data, a local consumer, or a local producer.
-
-4. If the topic has none of them (no schema, no data, and no local consumer or producer):
-
-  * If `isAllowAutoUpdateSchema` is set to **true**, then the consumer registers a schema and is connected to the broker.
-
-  * If `isAllowAutoUpdateSchema` is set to **false**, then the consumer is rejected when it tries to connect to the broker.
-
-5. If the topic has at least one of them, then the schema compatibility check is performed.
-
-  * If the schema passes the compatibility check, then the consumer is connected to the broker.
-
-  * If the schema does not pass the compatibility check, then the consumer is rejected and cannot connect to the broker.
-
-6. The consumer receives messages from the broker.
-
-   If the schema used by the consumer supports schema versioning (for example, AVRO schema), the consumer fetches the `SchemaInfo` of the version tagged in messages and uses the passed-in schema and the schema tagged in messages to decode the messages.
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/security-athenz.md b/site2/website/versioned_docs/version-2.8.0-deprecated/security-athenz.md
deleted file mode 100644
index 8a39fe25316d07..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/security-athenz.md
+++ /dev/null
@@ -1,98 +0,0 @@
----
-id: security-athenz
-title: Authentication using Athenz
-sidebar_label: "Authentication using Athenz"
-original_id: security-athenz
----
-
-[Athenz](https://github.com/AthenZ/athenz) is a role-based authentication/authorization system. In Pulsar, you can use Athenz role tokens (also known as *z-tokens*) to establish the identity of the client.
-
-## Athenz authentication settings
-
-A [decentralized Athenz system](https://github.com/AthenZ/athenz/blob/master/docs/decent_authz_flow.md) contains an [authori**Z**ation **M**anagement **S**ystem](https://github.com/AthenZ/athenz/blob/master/docs/setup_zms.md) (ZMS) server and an [authori**Z**ation **T**oken **S**ystem](https://github.com/AthenZ/athenz/blob/master/docs/setup_zts) (ZTS) server.
-
-To begin, you need to set up Athenz service access control. You need to create domains for the *provider* (which provides some resources to other services with some authentication/authorization policies) and the *tenant* (which is provisioned to access some resources in a provider). In this case, the provider corresponds to the Pulsar service itself and the tenant corresponds to each application using Pulsar (typically, a [tenant](reference-terminology.md#tenant) in Pulsar).
-
-### Create the tenant domain and service
-
-On the [tenant](reference-terminology.md#tenant) side, you need to do the following things:
-
-1. Create a domain, such as `shopping`
-2. Generate a private/public key pair
-3. Create a service, such as `some_app`, on the domain with the public key
-
-Note that you need to specify the private key generated in step 2 when the Pulsar client connects to the [broker](reference-terminology.md#broker) (see client configuration examples for [Java](client-libraries-java.md#tls-authentication) and [C++](client-libraries-cpp.md#tls-authentication)).
-
-For more specific steps involving the Athenz UI, refer to [Example Service Access Control Setup](https://github.com/AthenZ/athenz/blob/master/docs/example_service_athenz_setup.md#client-tenant-domain).
-
-### Create the provider domain and add the tenant service to some role members
-
-On the provider side, you need to do the following things:
-
-1. Create a domain, such as `pulsar`
-2. Create a role
-3. Add the tenant service to members of the role
-
-Note that you can specify any action and resource in step 2 since they are not used on Pulsar. In other words, Pulsar uses the Athenz role token only for authentication, *not* for authorization.
-
-For more specific steps involving the UI, refer to [Example Service Access Control Setup](https://github.com/AthenZ/athenz/blob/master/docs/example_service_athenz_setup.md#server-provider-domain).
- -## Configure the broker for Athenz - -> ### TLS encryption -> -> Note that when you are using Athenz as an authentication provider, you had better use TLS encryption -> as it can protect role tokens from being intercepted and reused. (for more details involving TLS encryption see [Architecture - Data Model](https://github.com/AthenZ/athenz/blob/master/docs/data_model)). - -In the `conf/broker.conf` configuration file in your Pulsar installation, you need to provide the class name of the Athenz authentication provider as well as a comma-separated list of provider domain names. - -```properties - -# Add the Athenz auth provider -authenticationEnabled=true -authorizationEnabled=true -authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderAthenz -athenzDomainNames=pulsar - -# Enable TLS -tlsEnabled=true -tlsCertificateFilePath=/path/to/broker-cert.pem -tlsKeyFilePath=/path/to/broker-key.pem - -# Authentication settings of the broker itself. Used when the broker connects to other brokers, either in same or other clusters -brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationAthenz -brokerClientAuthenticationParameters={"tenantDomain":"shopping","tenantService":"some_app","providerDomain":"pulsar","privateKey":"file:///path/to/private.pem","keyId":"v1"} - -``` - -> A full listing of parameters is available in the `conf/broker.conf` file, you can also find the default -> values for those parameters in [Broker Configuration](reference-configuration.md#broker). - -## Configure clients for Athenz - -For more information on Pulsar client authentication using Athenz, see the following language-specific docs: - -* [Java client](client-libraries-java.md#athenz) - -## Configure CLI tools for Athenz - -[Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-pulsar-admin.md), [`pulsar-perf`](reference-cli-tools.md#pulsar-perf), and [`pulsar-client`](reference-cli-tools.md#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation. - -You need to add the following authentication parameters to the `conf/client.conf` config file to use Athenz with CLI tools of Pulsar: - -```properties - -# URL for the broker -serviceUrl=https://broker.example.com:8443/ - -# Set Athenz auth plugin and its parameters -authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationAthenz -authParams={"tenantDomain":"shopping","tenantService":"some_app","providerDomain":"pulsar","privateKey":"file:///path/to/private.pem","keyId":"v1"} - -# Enable TLS -useTls=true -tlsAllowInsecureConnection=false -tlsTrustCertsFilePath=/path/to/cacert.pem - -``` - diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/security-authorization.md b/site2/website/versioned_docs/version-2.8.0-deprecated/security-authorization.md deleted file mode 100644 index 7ac09b6f439eae..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/security-authorization.md +++ /dev/null @@ -1,114 +0,0 @@ ---- -id: security-authorization -title: Authentication and authorization in Pulsar -sidebar_label: "Authorization and ACLs" -original_id: security-authorization ---- - - -In Pulsar, the [authentication provider](security-overview.md#authentication-providers) is responsible for properly identifying clients and associating the clients with [role tokens](security-overview.md#role-tokens). If you only enable authentication, an authenticated role token has the ability to access all resources in the cluster. 
*Authorization* is the process that determines *what* clients are able to do.
-
-The role tokens with the most privileges are the *superusers*. The *superusers* can create and destroy tenants, along with having full access to all tenant resources.
-
-When a superuser creates a [tenant](reference-terminology.md#tenant), that tenant is assigned an admin role. A client with the admin role token can then create, modify and destroy namespaces, and grant and revoke permissions to *other role tokens* on those namespaces.
-
-## Broker and Proxy Setup
-
-### Enable authorization and assign superusers
-You can enable authorization and assign the superusers in the broker configuration file ([`conf/broker.conf`](reference-configuration.md#broker)).
-
-```properties
-
-authorizationEnabled=true
-superUserRoles=my-super-user-1,my-super-user-2
-
-```
-
-> A full list of parameters is available in the `conf/broker.conf` file.
-> You can also find the default values for those parameters in [Broker Configuration](reference-configuration.md#broker).
-
-Typically, you use superuser roles for administrators and clients, as well as for broker-to-broker authorization. When you use [geo-replication](concepts-replication.md), every broker needs to be able to publish to topics in all the other clusters.
-
-You can also enable authorization for the proxy in the proxy configuration file (`conf/proxy.conf`). Once you enable authorization on the proxy, the proxy does an additional authorization check before forwarding the request to a broker.
-If you enable authorization on the broker, the broker checks the authorization of the request when the broker receives the forwarded request.
-
-### Proxy Roles
-
-By default, the broker treats the connection between a proxy and the broker as a normal user connection. The broker authenticates the user as the role configured in `proxy.conf` (see ["Enable TLS Authentication on Proxies"](security-tls-authentication.md#enable-tls-authentication-on-proxies)). However, this is rarely the behavior that the user desires when connecting to the cluster through a proxy. The user expects to be able to interact with the cluster as the role for which they have authenticated with the proxy.
-
-Pulsar uses *proxy roles* to enable this. Proxy roles are specified in the broker configuration file, [`conf/broker.conf`](reference-configuration.md#broker). If a client that is authenticated with a broker is one of its `proxyRoles`, all requests from that client must also carry information about the role of the client that is authenticated with the proxy. This information is called the *original principal*. If the *original principal* is absent, the client is not able to access anything.
-
-You must authorize both the *proxy role* and the *original principal* to access a resource to ensure that the resource is accessible via the proxy. Administrators can take two approaches to authorize the *proxy role* and the *original principal*.
-
-The more secure approach is to grant access to the proxy roles each time you grant access to a resource. For example, if you have a proxy role named `proxy1`, when the superuser creates a tenant, you should specify `proxy1` as one of the admin roles. When a role is granted permissions to produce or consume from a namespace, if that client wants to produce or consume through a proxy, you should also grant `proxy1` the same permissions.
-
-Another approach is to make the proxy role a superuser. This allows the proxy to access all resources.
The client still needs to authenticate with the proxy, and all requests made through the proxy have their role downgraded to the *original principal* of the authenticated client. However, if the proxy is compromised, a bad actor could get full access to your cluster. - -You can specify the roles as proxy roles in [`conf/broker.conf`](reference-configuration.md#broker). - -```properties - -proxyRoles=my-proxy-role - -# if you want to allow superusers to use the proxy (see above) -superUserRoles=my-super-user-1,my-super-user-2,my-proxy-role - -``` - -## Administer tenants - -Pulsar [instance](reference-terminology.md#instance) administrators or some kind of self-service portal typically provisions a Pulsar [tenant](reference-terminology.md#tenant). - -You can manage tenants using the [`pulsar-admin`](reference-pulsar-admin.md) tool. - -### Create a new tenant - -The following is an example tenant creation command: - -```shell - -$ bin/pulsar-admin tenants create my-tenant \ - --admin-roles my-admin-role \ - --allowed-clusters us-west,us-east - -``` - -This command creates a new tenant `my-tenant` that is allowed to use the clusters `us-west` and `us-east`. - -A client that successfully identifies itself as having the role `my-admin-role` is allowed to perform all administrative tasks on this tenant. - -The structure of topic names in Pulsar reflects the hierarchy between tenants, clusters, and namespaces: - -```shell - -persistent://tenant/namespace/topic - -``` - -### Manage permissions - -You can use [Pulsar Admin Tools](admin-api-permissions.md) for managing permission in Pulsar. - -### Pulsar admin authentication - -```java - -PulsarAdmin admin = PulsarAdmin.builder() - .serviceHttpUrl("http://broker:8080") - .authentication("com.org.MyAuthPluginClass", "param1:value1") - .build(); - -``` - -To use TLS: - -```java - -PulsarAdmin admin = PulsarAdmin.builder() - .serviceHttpUrl("https://broker:8080") - .authentication("com.org.MyAuthPluginClass", "param1:value1") - .tlsTrustCertsFilePath("/path/to/trust/cert") - .build(); - -``` - diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/security-basic-auth.md b/site2/website/versioned_docs/version-2.8.0-deprecated/security-basic-auth.md deleted file mode 100644 index 2585526bb478af..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/security-basic-auth.md +++ /dev/null @@ -1,127 +0,0 @@ ---- -id: security-basic-auth -title: Authentication using HTTP basic -sidebar_label: "Authentication using HTTP basic" ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - -[Basic authentication](https://en.wikipedia.org/wiki/Basic_access_authentication) is a simple authentication scheme built into the HTTP protocol, which uses base64-encoded username and password pairs as credentials. - -## Prerequisites - -Install [`htpasswd`](https://httpd.apache.org/docs/2.4/programs/htpasswd.html) in your environment to create a password file for storing username-password pairs. - -* For Ubuntu/Debian, run the following command to install `htpasswd`. - - ``` - apt install apache2-utils - ``` - -* For CentOS/RHEL, run the following command to install `htpasswd`. - - ``` - yum install httpd-tools - ``` - -## Create your authentication file - -:::note -Currently, you can use MD5 (recommended) and CRYPT encryption to authenticate your password. 
-::: - -Create a password file named `.htpasswd` with a user account `superuser/admin`: -* Use MD5 encryption (recommended): - - ``` - htpasswd -cmb /path/to/.htpasswd superuser admin - ``` - -* Use CRYPT encryption: - - ``` - htpasswd -cdb /path/to/.htpasswd superuser admin - ``` - -You can preview the content of your password file by running the following command: - -``` -cat path/to/.htpasswd -superuser:$apr1$GBIYZYFZ$MzLcPrvoUky16mLcK6UtX/ -``` - -## Enable basic authentication on brokers - -To configure brokers to authenticate clients, complete the following steps. - -1. Add the following parameters to the `conf/broker.conf` file. If you use a standalone Pulsar, you need to add these parameters to the `conf/standalone.conf` file. - - ``` - # Configuration to enable Basic authentication - authenticationEnabled=true - authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderBasic - - # Authentication settings of the broker itself. Used when the broker connects to other brokers, either in same or other clusters - brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationBasic - brokerClientAuthenticationParameters={"userId":"superuser","password":"admin"} - - # If this flag is set then the broker authenticates the original Auth data - # else it just accepts the originalPrincipal and authorizes it (if required). - authenticateOriginalAuthData=true - ``` - -2. Set an environment variable named `PULSAR_EXTRA_OPTS` and the value is `-Dpulsar.auth.basic.conf=/path/to/.htpasswd`. Pulsar reads this environment variable to implement HTTP basic authentication. - -## Enable basic authentication on proxies - -To configure proxies to authenticate clients, complete the following steps. - -1. Add the following parameters to the `conf/proxy.conf` file: - - ``` - # For clients connecting to the proxy - authenticationEnabled=true - authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderBasic - - # For the proxy to connect to brokers - brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationBasic - brokerClientAuthenticationParameters={"userId":"superuser","password":"admin"} - - # Whether client authorization credentials are forwarded to the broker for re-authorization. - # Authentication must be enabled via authenticationEnabled=true for this to take effect. - forwardAuthorizationCredentials=true - ``` - -2. Set an environment variable named `PULSAR_EXTRA_OPTS` and the value is `-Dpulsar.auth.basic.conf=/path/to/.htpasswd`. Pulsar reads this environment variable to implement HTTP basic authentication. - -## Configure basic authentication in CLI tools - -[Command-line tools](/docs/next/reference-cli-tools), such as [Pulsar-admin](/tools/pulsar-admin/), [Pulsar-perf](/tools/pulsar-perf/) and [Pulsar-client](/tools/pulsar-client/), use the `conf/client.conf` file in your Pulsar installation. To configure basic authentication in Pulsar CLI tools, you need to add the following parameters to the `conf/client.conf` file. - -``` -authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationBasic -authParams={"userId":"superuser","password":"admin"} -``` - - -## Configure basic authentication in Pulsar clients - -The following example shows how to configure basic authentication when using Pulsar clients. 
- - - - - ```java - AuthenticationBasic auth = new AuthenticationBasic(); - auth.configure("{\"userId\":\"superuser\",\"password\":\"admin\"}"); - PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://broker.example.com:6650") - .authentication(auth) - .build(); - ``` - - - diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/security-bouncy-castle.md b/site2/website/versioned_docs/version-2.8.0-deprecated/security-bouncy-castle.md deleted file mode 100644 index be937055d8e311..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/security-bouncy-castle.md +++ /dev/null @@ -1,157 +0,0 @@ ---- -id: security-bouncy-castle -title: Bouncy Castle Providers -sidebar_label: "Bouncy Castle Providers" -original_id: security-bouncy-castle ---- - -## BouncyCastle Introduce - -`Bouncy Castle` is a Java library that complements the default Java Cryptographic Extension (JCE), -and it provides more cipher suites and algorithms than the default JCE provided by Sun. - -In addition to that, `Bouncy Castle` has lots of utilities for reading arcane formats like PEM and ASN.1 that no sane person would want to rewrite themselves. - -In Pulsar, security and crypto have dependencies on BouncyCastle Jars. For the detailed installing and configuring Bouncy Castle FIPS, see [BC FIPS Documentation](https://www.bouncycastle.org/documentation.html), especially the **User Guides** and **Security Policy** PDFs. - -`Bouncy Castle` provides both [FIPS](https://www.bouncycastle.org/fips_faq.html) and non-FIPS version. But in a JVM, you can not include both of the 2 versions, and you need to exclude the current version before include the other. - -In Pulsar, the security and crypto methods also depends on `Bouncy Castle`, especially in [TLS Authentication](security-tls-authentication.md) and [Transport Encryption](security-encryption.md). This document contains the configuration between BouncyCastle FIPS(BC-FIPS) and non-FIPS(BC-non-FIPS) version while using Pulsar. - -## How BouncyCastle modules packaged in Pulsar - -In Pulsar's `bouncy-castle` module, We provide 2 sub modules: `bouncy-castle-bc`(for non-FIPS version) and `bouncy-castle-bcfips`(for FIPS version), to package BC jars together to make the include and exclude of `Bouncy Castle` easier. - -To achieve this goal, we will need to package several `bouncy-castle` jars together into `bouncy-castle-bc` or `bouncy-castle-bcfips` jar. -Each of the original bouncy-castle jar is related with security, so BouncyCastle dutifully supplies signed of each JAR. -But when we do the re-package, Maven shade explodes the BouncyCastle jar file which puts the signatures into META-INF, -these signatures aren't valid for this new, uber-jar (signatures are only for the original BC jar). -Usually, You will meet error like `java.lang.SecurityException: Invalid signature file digest for Manifest main attributes`. - -You could exclude these signatures in mvn pom file to avoid above error, by - -```access transformers - -META-INF/*.SF -META-INF/*.DSA -META-INF/*.RSA - -``` - -But it can also lead to new, cryptic errors, e.g. 
`java.security.NoSuchAlgorithmException: PBEWithSHA256And256BitAES-CBC-BC SecretKeyFactory not available` -By explicitly specifying where to find the algorithm like this: `SecretKeyFactory.getInstance("PBEWithSHA256And256BitAES-CBC-BC","BC")` -It will get the real error: `java.security.NoSuchProviderException: JCE cannot authenticate the provider BC` - -So, we used a [executable packer plugin](https://github.com/nthuemmel/executable-packer-maven-plugin) that uses a jar-in-jar approach to preserve the BouncyCastle signature in a single, executable jar. - -### Include dependencies of BC-non-FIPS - -Pulsar module `bouncy-castle-bc`, which defined by `bouncy-castle/bc/pom.xml` contains the needed non-FIPS jars for Pulsar, and packaged as a jar-in-jar(need to provide `pkg`). - -```xml - - - org.bouncycastle - bcpkix-jdk15on - ${bouncycastle.version} - - - - org.bouncycastle - bcprov-ext-jdk15on - ${bouncycastle.version} - - -``` - -By using this `bouncy-castle-bc` module, you can easily include and exclude BouncyCastle non-FIPS jars. - -### Modules that include BC-non-FIPS module (`bouncy-castle-bc`) - -For Pulsar client, user need the bouncy-castle module, so `pulsar-client-original` will include the `bouncy-castle-bc` module, and have `pkg` set to reference the `jar-in-jar` package. -It is included as following example: - -```xml - - - org.apache.pulsar - bouncy-castle-bc - ${pulsar.version} - pkg - - -``` - -By default `bouncy-castle-bc` already included in `pulsar-client-original`, And `pulsar-client-original` has been included in a lot of other modules like `pulsar-client-admin`, `pulsar-broker`. -But for the above shaded jar and signatures reason, we should not package Pulsar's `bouncy-castle` module into `pulsar-client-all` other shaded modules directly, such as `pulsar-client-shaded`, `pulsar-client-admin-shaded` and `pulsar-broker-shaded`. -So in the shaded modules, we will exclude the `bouncy-castle` modules. - -```xml - - - - org.apache.pulsar:pulsar-client-original - - ** - - - org/bouncycastle/** - - - - -``` - -That means, `bouncy-castle` related jars are not shaded in these fat jars. - -### Module BC-FIPS (`bouncy-castle-bcfips`) - -Pulsar module `bouncy-castle-bcfips`, which defined by `bouncy-castle/bcfips/pom.xml` contains the needed FIPS jars for Pulsar. -Similar to `bouncy-castle-bc`, `bouncy-castle-bcfips` also packaged as a `jar-in-jar` package for easy include/exclude. - -```xml - - - org.bouncycastle - bc-fips - ${bouncycastlefips.version} - - - - org.bouncycastle - bcpkix-fips - ${bouncycastlefips.version} - - -``` - -### Exclude BC-non-FIPS and include BC-FIPS - -If you want to switch from BC-non-FIPS to BC-FIPS version, Here is an example for `pulsar-broker` module: - -```xml - - - org.apache.pulsar - pulsar-broker - ${pulsar.version} - - - org.apache.pulsar - bouncy-castle-bc - - - - - - org.apache.pulsar - bouncy-castle-bcfips - ${pulsar.version} - pkg - - -``` - - -For more example, you can reference module `bcfips-include-test`. 
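-
-### Verify which provider is loaded
-
-As a quick runtime sanity check (an illustrative sketch, not part of the Pulsar modules above), you can ask the JCA registry which Bouncy Castle provider is registered. This assumes the standard provider names `BC` (non-FIPS) and `BCFIPS` (FIPS):
-
-```java
-
-import java.security.Provider;
-import java.security.Security;
-
-public class BcProviderCheck {
-    public static void main(String[] args) {
-        // "BC" is registered by the non-FIPS BouncyCastleProvider and
-        // "BCFIPS" by the BouncyCastleFipsProvider; only one of the two
-        // should be present in a single JVM.
-        for (String name : new String[]{"BC", "BCFIPS"}) {
-            Provider p = Security.getProvider(name);
-            System.out.println(name + " -> " + (p == null ? "not registered" : p.getInfo()));
-        }
-    }
-}
-
-```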
- diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/security-encryption.md b/site2/website/versioned_docs/version-2.8.0-deprecated/security-encryption.md deleted file mode 100644 index c2f3530d94d9e4..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/security-encryption.md +++ /dev/null @@ -1,200 +0,0 @@ ---- -id: security-encryption -title: Pulsar Encryption -sidebar_label: "End-to-End Encryption" -original_id: security-encryption ---- - -Applications can use Pulsar encryption to encrypt messages on the producer side and decrypt messages on the consumer side. You can use the public and private key pair that the application configures to perform encryption. Only the consumers with a valid key can decrypt the encrypted messages. - -## Asymmetric and symmetric encryption - -Pulsar uses a dynamically generated symmetric AES key to encrypt messages(data). You can use the application-provided ECDSA (Elliptic Curve Digital Signature Algorithm) or RSA (Rivest–Shamir–Adleman) key pair to encrypt the AES key(data key), so you do not have to share the secret with everyone. - -Key is a public and private key pair used for encryption or decryption. The producer key is the public key of the key pair, and the consumer key is the private key of the key pair. - -The application configures the producer with the public key. You can use this key to encrypt the AES data key. The encrypted data key is sent as part of message header. Only entities with the private key (in this case the consumer) are able to decrypt the data key which is used to decrypt the message. - -You can encrypt a message with more than one key. Any one of the keys used for encrypting the message is sufficient to decrypt the message. - -Pulsar does not store the encryption key anywhere in the Pulsar service. If you lose or delete the private key, your message is irretrievably lost, and is unrecoverable. - -## Producer -![alt text](/assets/pulsar-encryption-producer.jpg "Pulsar Encryption Producer") - -## Consumer -![alt text](/assets/pulsar-encryption-consumer.jpg "Pulsar Encryption Consumer") - -## Get started - -1. Create your ECDSA or RSA public and private key pair by using the following commands. - * ECDSA(for Java clients only) - - ```shell - - openssl ecparam -name secp521r1 -genkey -param_enc explicit -out test_ecdsa_privkey.pem - openssl ec -in test_ecdsa_privkey.pem -pubout -outform pem -out test_ecdsa_pubkey.pem - - ``` - - * RSA (for C++, Python and Node.js clients) - - ```shell - - openssl genrsa -out test_rsa_privkey.pem 2048 - openssl rsa -in test_rsa_privkey.pem -pubout -outform pkcs8 -out test_rsa_pubkey.pem - - ``` - -2. Add the public and private key to the key management and configure your producers to retrieve public keys and consumers clients to retrieve private keys. - -3. Implement the `CryptoKeyReader` interface, specifically `CryptoKeyReader.getPublicKey()` for producer and `CryptoKeyReader.getPrivateKey()` for consumer, which Pulsar client invokes to load the key. - -4. Add the encryption key name to the producer builder: PulsarClient.newProducer().addEncryptionKey("myapp.key"). - -5. Add CryptoKeyReader implementation to producer or consumer builder: PulsarClient.newProducer().cryptoKeyReader(keyReader) / PulsarClient.newConsumer().cryptoKeyReader(keyReader). - -6. 
Sample producer application: - -```java - -class RawFileKeyReader implements CryptoKeyReader { - - String publicKeyFile = ""; - String privateKeyFile = ""; - - RawFileKeyReader(String pubKeyFile, String privKeyFile) { - publicKeyFile = pubKeyFile; - privateKeyFile = privKeyFile; - } - - @Override - public EncryptionKeyInfo getPublicKey(String keyName, Map keyMeta) { - EncryptionKeyInfo keyInfo = new EncryptionKeyInfo(); - try { - keyInfo.setKey(Files.readAllBytes(Paths.get(publicKeyFile))); - } catch (IOException e) { - System.out.println("ERROR: Failed to read public key from file " + publicKeyFile); - e.printStackTrace(); - } - return keyInfo; - } - - @Override - public EncryptionKeyInfo getPrivateKey(String keyName, Map keyMeta) { - EncryptionKeyInfo keyInfo = new EncryptionKeyInfo(); - try { - keyInfo.setKey(Files.readAllBytes(Paths.get(privateKeyFile))); - } catch (IOException e) { - System.out.println("ERROR: Failed to read private key from file " + privateKeyFile); - e.printStackTrace(); - } - return keyInfo; - } -} - -PulsarClient pulsarClient = PulsarClient.builder().serviceUrl("pulsar://localhost:6650").build(); - -Producer producer = pulsarClient.newProducer() - .topic("persistent://my-tenant/my-ns/my-topic") - .addEncryptionKey("myappkey") - .cryptoKeyReader(new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem")) - .create(); - -for (int i = 0; i < 10; i++) { - producer.send("my-message".getBytes()); -} - -producer.close(); -pulsarClient.close(); - -``` - -7. Sample Consumer Application: - -```java - -class RawFileKeyReader implements CryptoKeyReader { - - String publicKeyFile = ""; - String privateKeyFile = ""; - - RawFileKeyReader(String pubKeyFile, String privKeyFile) { - publicKeyFile = pubKeyFile; - privateKeyFile = privKeyFile; - } - - @Override - public EncryptionKeyInfo getPublicKey(String keyName, Map keyMeta) { - EncryptionKeyInfo keyInfo = new EncryptionKeyInfo(); - try { - keyInfo.setKey(Files.readAllBytes(Paths.get(publicKeyFile))); - } catch (IOException e) { - System.out.println("ERROR: Failed to read public key from file " + publicKeyFile); - e.printStackTrace(); - } - return keyInfo; - } - - @Override - public EncryptionKeyInfo getPrivateKey(String keyName, Map keyMeta) { - EncryptionKeyInfo keyInfo = new EncryptionKeyInfo(); - try { - keyInfo.setKey(Files.readAllBytes(Paths.get(privateKeyFile))); - } catch (IOException e) { - System.out.println("ERROR: Failed to read private key from file " + privateKeyFile); - e.printStackTrace(); - } - return keyInfo; - } -} - -PulsarClient pulsarClient = PulsarClient.builder().serviceUrl("pulsar://localhost:6650").build(); -Consumer consumer = pulsarClient.newConsumer() - .topic("persistent://my-tenant/my-ns/my-topic") - .subscriptionName("my-subscriber-name") - .cryptoKeyReader(new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem")) - .subscribe(); -Message msg = null; - -for (int i = 0; i < 10; i++) { - msg = consumer.receive(); - // do something - System.out.println("Received: " + new String(msg.getData())); -} - -// Acknowledge the consumption of all messages at once -consumer.acknowledgeCumulative(msg); -consumer.close(); -pulsarClient.close(); - -``` - -## Key rotation -Pulsar generates a new AES data key every 4 hours or after publishing a certain number of messages. A producer fetches the asymmetric public key every 4 hours by calling CryptoKeyReader.getPublicKey() to retrieve the latest version. 
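-
-Because the data key is rotated and the public key is re-fetched in the background, a producer can still hit a key-load or encryption failure at send time. How that failure is handled is configurable, as described in [Handle failures](#handle-failures) below. The following is a minimal sketch, reusing the `RawFileKeyReader` from the samples above and making the default fail-fast behavior explicit:
-
-```java
-
-Producer<byte[]> producer = pulsarClient.newProducer()
-        .topic("persistent://my-tenant/my-ns/my-topic")
-        .addEncryptionKey("myappkey")
-        .cryptoKeyReader(new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem"))
-        // FAIL (the default) fails the send instead of falling back to
-        // sending the message unencrypted; SEND would do the opposite.
-        .cryptoFailureAction(ProducerCryptoFailureAction.FAIL)
-        .create();
-
-```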
- -## Enable encryption at the producer application -If you produce messages that are consumed across application boundaries, you need to ensure that consumers in other applications have access to one of the private keys that can decrypt the messages. You can do this in two ways: -1. The consumer application provides you access to their public key, which you add to your producer keys. -2. You grant access to one of the private keys from the pairs that producer uses. - -When producers want to encrypt the messages with multiple keys, producers add all such keys to the config. Consumer can decrypt the message as long as the consumer has access to at least one of the keys. - -If you need to encrypt the messages using 2 keys (`myapp.messagekey1` and `myapp.messagekey2`), refer to the following example. - -```java - -PulsarClient.newProducer().addEncryptionKey("myapp.messagekey1").addEncryptionKey("myapp.messagekey2"); - -``` - -## Decrypt encrypted messages at the consumer application -Consumers require to access one of the private keys to decrypt messages that the producer produces. If you want to receive encrypted messages, create a public or private key and give your public key to the producer application to encrypt messages using your public key. - -## Handle failures -* Producer/Consumer loses access to the key - * Producer action fails to indicate the cause of the failure. Application has the option to proceed with sending unencrypted messages in such cases. Call `PulsarClient.newProducer().cryptoFailureAction(ProducerCryptoFailureAction)` to control the producer behavior. The default behavior is to fail the request. - * If consumption fails due to decryption failure or missing keys in consumer, the application has the option to consume the encrypted message or discard it. Call `PulsarClient.newConsumer().cryptoFailureAction(ConsumerCryptoFailureAction)` to control the consumer behavior. The default behavior is to fail the request. Application is never able to decrypt the messages if the private key is permanently lost. -* Batch messaging - * If decryption fails and the message contains batch messages, client is not able to retrieve individual messages in the batch, hence message consumption fails even if cryptoFailureAction() is set to `ConsumerCryptoFailureAction.CONSUME`. -* If decryption fails, the message consumption stops and the application notices backlog growth in addition to decryption failure messages in the client log. If the application does not have access to the private key to decrypt the message, the only option is to skip or discard backlogged messages. diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/security-extending.md b/site2/website/versioned_docs/version-2.8.0-deprecated/security-extending.md deleted file mode 100644 index e7484453b8beb8..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/security-extending.md +++ /dev/null @@ -1,207 +0,0 @@ ---- -id: security-extending -title: Extending Authentication and Authorization in Pulsar -sidebar_label: "Extending" -original_id: security-extending ---- - -Pulsar provides a way to use custom authentication and authorization mechanisms. - -## Authentication - -Pulsar supports mutual TLS and Athenz authentication plugins. For how to use these authentication plugins, you can refer to the description in [Security](security-overview.md). - -You can use a custom authentication mechanism by providing the implementation in the form of two plugins. 
One plugin is for the Client library and the other plugin is for the Pulsar Proxy and/or Pulsar Broker to validate the credentials. - -### Client authentication plugin - -For the client library, you need to implement `org.apache.pulsar.client.api.Authentication`. By entering the command below you can pass this class when you create a Pulsar client: - -```java - -PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://localhost:6650") - .authentication(new MyAuthentication()) - .build(); - -``` - -You can use 2 interfaces to implement on the client side: - * `Authentication` -> http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/Authentication.html - * `AuthenticationDataProvider` -> http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/AuthenticationDataProvider.html - - -This in turn needs to provide the client credentials in the form of `org.apache.pulsar.client.api.AuthenticationDataProvider`. This leaves the chance to return different kinds of authentication token for different types of connection or by passing a certificate chain to use for TLS. - - -You can find examples for client authentication providers at: - - * Mutual TLS Auth -- https://github.com/apache/pulsar/tree/master/pulsar-client/src/main/java/org/apache/pulsar/client/impl/auth - * Athenz -- https://github.com/apache/pulsar/tree/master/pulsar-client-auth-athenz/src/main/java/org/apache/pulsar/client/impl/auth - -### Proxy/Broker authentication plugin - -On the proxy/broker side, you need to configure the corresponding plugin to validate the credentials that the client sends. The Proxy and Broker can support multiple authentication providers at the same time. - -In `conf/broker.conf` you can choose to specify a list of valid providers: - -```properties - -# Authentication provider name list, which is comma separated list of class names -authenticationProviders= - -``` - -To implement `org.apache.pulsar.broker.authentication.AuthenticationProvider` on one single interface: - -```java - -/** - * Provider of authentication mechanism - */ -public interface AuthenticationProvider extends Closeable { - - /** - * Perform initialization for the authentication provider - * - * @param config - * broker config object - * @throws IOException - * if the initialization fails - */ - void initialize(ServiceConfiguration config) throws IOException; - - /** - * @return the authentication method name supported by this provider - */ - String getAuthMethodName(); - - /** - * Validate the authentication for the given credentials with the specified authentication data - * - * @param authData - * provider specific authentication data - * @return the "role" string for the authenticated connection, if the authentication was successful - * @throws AuthenticationException - * if the credentials are not valid - */ - String authenticate(AuthenticationDataSource authData) throws AuthenticationException; - -} - -``` - -The following is the example for Broker authentication plugins: - - * Mutual TLS -- https://github.com/apache/pulsar/blob/master/pulsar-broker-common/src/main/java/org/apache/pulsar/broker/authentication/AuthenticationProviderTls.java - * Athenz -- https://github.com/apache/pulsar/blob/master/pulsar-broker-auth-athenz/src/main/java/org/apache/pulsar/broker/authentication/AuthenticationProviderAthenz.java - -## Authorization - -Authorization is the operation that checks whether a particular "role" or "principal" has permission to perform a certain operation. 
- -By default, you can use the embedded authorization provider provided by Pulsar. You can also configure a different authorization provider through a plugin. -Note that although the Authentication plugin is designed for use in both the Proxy and Broker, -the Authorization plugin is designed only for use on the Broker however the Proxy does perform some simple Authorization checks of Roles if authorization is enabled. - -To provide a custom provider, you need to implement the `org.apache.pulsar.broker.authorization.AuthorizationProvider` interface, put this class in the Pulsar broker classpath and configure the class in `conf/broker.conf`: - - ```properties - - # Authorization provider fully qualified class-name - authorizationProvider=org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider - - ``` - -```java - -/** - * Provider of authorization mechanism - */ -public interface AuthorizationProvider extends Closeable { - - /** - * Perform initialization for the authorization provider - * - * @param conf - * broker config object - * @param configCache - * pulsar zk configuration cache service - * @throws IOException - * if the initialization fails - */ - void initialize(ServiceConfiguration conf, ConfigurationCacheService configCache) throws IOException; - - /** - * Check if the specified role has permission to send messages to the specified fully qualified topic name. - * - * @param topicName - * the fully qualified topic name associated with the topic. - * @param role - * the app id used to send messages to the topic. - */ - CompletableFuture canProduceAsync(TopicName topicName, String role, - AuthenticationDataSource authenticationData); - - /** - * Check if the specified role has permission to receive messages from the specified fully qualified topic name. - * - * @param topicName - * the fully qualified topic name associated with the topic. - * @param role - * the app id used to receive messages from the topic. - * @param subscription - * the subscription name defined by the client - */ - CompletableFuture canConsumeAsync(TopicName topicName, String role, - AuthenticationDataSource authenticationData, String subscription); - - /** - * Check whether the specified role can perform a lookup for the specified topic. - * - * For that the caller needs to have producer or consumer permission. - * - * @param topicName - * @param role - * @return - * @throws Exception - */ - CompletableFuture canLookupAsync(TopicName topicName, String role, - AuthenticationDataSource authenticationData); - - /** - * - * Grant authorization-action permission on a namespace to the given client - * - * @param namespace - * @param actions - * @param role - * @param authDataJson - * additional authdata in json format - * @return CompletableFuture - * @completesWith
    - * IllegalArgumentException when namespace not found
-     *                IllegalStateException when failed to grant permission
-     */
-    CompletableFuture<Void> grantPermissionAsync(NamespaceName namespace, Set<AuthAction> actions, String role,
-            String authDataJson);
-
-    /**
-     * Grant authorization-action permission on a topic to the given client
-     *
-     * @param topicName
-     * @param role
-     * @param authDataJson
-     *            additional authdata in json format
-     * @return CompletableFuture<Void>
-     * @completesWith <br/>
    - * IllegalArgumentException when namespace not found
    - * IllegalStateException when failed to grant permission - */ - CompletableFuture grantPermissionAsync(TopicName topicName, Set actions, String role, - String authDataJson); - -} - -``` - diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/security-jwt.md b/site2/website/versioned_docs/version-2.8.0-deprecated/security-jwt.md deleted file mode 100644 index 1fa65b7c27f60c..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/security-jwt.md +++ /dev/null @@ -1,331 +0,0 @@ ---- -id: security-jwt -title: Client authentication using tokens based on JSON Web Tokens -sidebar_label: "Authentication using JWT" -original_id: security-jwt ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -## Token authentication overview - -Pulsar supports authenticating clients using security tokens that are based on [JSON Web Tokens](https://jwt.io/introduction/) ([RFC-7519](https://tools.ietf.org/html/rfc7519)). - -You can use tokens to identify a Pulsar client and associate with some "principal" (or "role") that -is permitted to do some actions (eg: publish to a topic or consume from a topic). - -A user typically gets a token string from the administrator (or some automated service). - -The compact representation of a signed JWT is a string that looks like as the following: - -``` - -eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY - -``` - -Application specifies the token when you create the client instance. An alternative is to pass a "token supplier" (a function that returns the token when the client library needs one). - -> #### Always use TLS transport encryption -> Sending a token is equivalent to sending a password over the wire. You had better use TLS encryption all the time when you connect to the Pulsar service. See -> [Transport Encryption using TLS](security-tls-transport.md) for more details. - -### CLI Tools - -[Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-pulsar-admin.md), [`pulsar-perf`](reference-cli-tools.md#pulsar-perf), and [`pulsar-client`](reference-cli-tools.md#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation. - -You need to add the following parameters to that file to use the token authentication with CLI tools of Pulsar: - -```properties - -webServiceUrl=http://broker.example.com:8080/ -brokerServiceUrl=pulsar://broker.example.com:6650/ -authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken -authParams=token:eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY - -``` - -The token string can also be read from a file, for example: - -``` - -authParams=file:///path/to/token/file - -``` - -### Pulsar client - -You can use tokens to authenticate the following Pulsar clients. 
- -````mdx-code-block - - - -```java - -PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://broker.example.com:6650/") - .authentication( - AuthenticationFactory.token("eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY")) - .build(); - -``` - -Similarly, you can also pass a `Supplier`: - -```java - -PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://broker.example.com:6650/") - .authentication( - AuthenticationFactory.token(() -> { - // Read token from custom source - return readToken(); - })) - .build(); - -``` - - - - -```python - -from pulsar import Client, AuthenticationToken - -client = Client('pulsar://broker.example.com:6650/' - authentication=AuthenticationToken('eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY')) - -``` - -Alternatively, you can also pass a `Supplier`: - -```python - -def read_token(): - with open('/path/to/token.txt') as tf: - return tf.read().strip() - -client = Client('pulsar://broker.example.com:6650/' - authentication=AuthenticationToken(read_token)) - -``` - - - - -```go - -client, err := NewClient(ClientOptions{ - URL: "pulsar://localhost:6650", - Authentication: NewAuthenticationToken("eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY"), -}) - -``` - -Similarly, you can also pass a `Supplier`: - -```go - -client, err := NewClient(ClientOptions{ - URL: "pulsar://localhost:6650", - Authentication: NewAuthenticationTokenSupplier(func () string { - // Read token from custom source - return readToken() - }), -}) - -``` - - - - -```c++ - -#include - -pulsar::ClientConfiguration config; -config.setAuth(pulsar::AuthToken::createWithToken("eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY")); - -pulsar::Client client("pulsar://broker.example.com:6650/", config); - -``` - - - - -```c# - -var client = PulsarClient.Builder() - .AuthenticateUsingToken("eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY") - .Build(); - -``` - - - - -```` - -## Enable token authentication - -On how to enable token authentication on a Pulsar cluster, you can refer to the guide below. - -JWT supports two different kinds of keys in order to generate and validate the tokens: - - * Symmetric : - - You can use a single ***Secret*** key to generate and validate tokens. - * Asymmetric: A pair of keys consists of the Private key and the Public key. - - You can use ***Private*** key to generate tokens. - - You can use ***Public*** key to validate tokens. - -### Create a secret key - -When you use a secret key, the administrator creates the key and uses the key to generate the client tokens. You can also configure this key to brokers in order to validate the clients. - -Output file is generated in the root of your Pulsar installation directory. You can also provide absolute path for the output file using the command below. - -```shell - -$ bin/pulsar tokens create-secret-key --output my-secret.key - -``` - -Enter this command to generate base64 encoded private key. - -```shell - -$ bin/pulsar tokens create-secret-key --output /opt/my-secret.key --base64 - -``` - -### Create a key pair - -With Public and Private keys, you need to create a pair of keys. Pulsar supports all algorithms that the Java JWT library (shown [here](https://github.com/jwtk/jjwt#signature-algorithms-keys)) supports. - -Output file is generated in the root of your Pulsar installation directory. 
You can also provide absolute path for the output file using the command below. - -```shell - -$ bin/pulsar tokens create-key-pair --output-private-key my-private.key --output-public-key my-public.key - -``` - - * Store `my-private.key` in a safe location and only administrator can use `my-private.key` to generate new tokens. - * `my-public.key` is distributed to all Pulsar brokers. You can publicly share this file without any security concern. - -### Generate tokens - -A token is the credential associated with a user. The association is done through the "principal" or "role". In the case of JWT tokens, this field is typically referred as **subject**, though they are exactly the same concept. - -Then, you need to use this command to require the generated token to have a **subject** field set. - -```shell - -$ bin/pulsar tokens create --secret-key file:///path/to/my-secret.key \ - --subject test-user - -``` - -This command prints the token string on stdout. - -Similarly, you can create a token by passing the "private" key using the command below: - -```shell - -$ bin/pulsar tokens create --private-key file:///path/to/my-private.key \ - --subject test-user - -``` - -Finally, you can enter the following command to create a token with a pre-defined TTL. And then the token is automatically invalidated. - -```shell - -$ bin/pulsar tokens create --secret-key file:///path/to/my-secret.key \ - --subject test-user \ - --expiry-time 1y - -``` - -### Authorization - -The token itself does not have any permission associated. The authorization engine determines whether the token should have permissions or not. Once you have created the token, you can grant permission for this token to do certain actions. The following is an example. - -```shell - -$ bin/pulsar-admin namespaces grant-permission my-tenant/my-namespace \ - --role test-user \ - --actions produce,consume - -``` - -### Enable token authentication on Brokers - -To configure brokers to authenticate clients, add the following parameters to `broker.conf`: - -```properties - -# Configuration to enable authentication and authorization -authenticationEnabled=true -authorizationEnabled=true -authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken - -# Authentication settings of the broker itself. Used when the broker connects to other brokers, either in same or other clusters -brokerClientTlsEnabled=true -brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken -brokerClientAuthenticationParameters={"token":"eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0LXVzZXIifQ.9OHgE9ZUDeBTZs7nSMEFIuGNEX18FLR3qvy8mqxSxXw"} -# Either configure the token string or specify to read it from a file. The following three available formats are all valid: -# brokerClientAuthenticationParameters={"token":"your-token-string"} -# brokerClientAuthenticationParameters=token:your-token-string -# brokerClientAuthenticationParameters=file:///path/to/token -brokerClientTrustCertsFilePath=/path/my-ca/certs/ca.cert.pem - -# If this flag is set then the broker authenticates the original Auth data -# else it just accepts the originalPrincipal and authorizes it (if required). 
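-# Typically you enable this when brokers sit behind a proxy that forwards the
-# client credentials (see forwardAuthorizationCredentials in the proxy section below).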
-authenticateOriginalAuthData=true - -# If using secret key (Note: key files must be DER-encoded) -tokenSecretKey=file:///path/to/secret.key -# The key can also be passed inline: -# tokenSecretKey=data:;base64,FLFyW0oLJ2Fi22KKCm21J18mbAdztfSHN/lAT5ucEKU= - -# If using public/private (Note: key files must be DER-encoded) -# tokenPublicKey=file:///path/to/public.key - -``` - -### Enable token authentication on Proxies - -To configure proxies to authenticate clients, add the following parameters to `proxy.conf`: - -The proxy uses its own token when connecting to brokers. You need to configure the role token for this key pair in the `proxyRoles` of the brokers. For more details, see the [authorization guide](security-authorization.md). - -```properties - -# For clients connecting to the proxy -authenticationEnabled=true -authorizationEnabled=true -authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken -tokenSecretKey=file:///path/to/secret.key - -# For the proxy to connect to brokers -brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken -brokerClientAuthenticationParameters={"token":"eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0LXVzZXIifQ.9OHgE9ZUDeBTZs7nSMEFIuGNEX18FLR3qvy8mqxSxXw"} -# Either configure the token string or specify to read it from a file. The following three available formats are all valid: -# brokerClientAuthenticationParameters={"token":"your-token-string"} -# brokerClientAuthenticationParameters=token:your-token-string -# brokerClientAuthenticationParameters=file:///path/to/token - -# Whether client authorization credentials are forwarded to the broker for re-authorization. -# Authentication must be enabled via authenticationEnabled=true for this to take effect. -forwardAuthorizationCredentials=true - -``` - diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/security-kerberos.md b/site2/website/versioned_docs/version-2.8.0-deprecated/security-kerberos.md deleted file mode 100644 index c49fa3bea1fce0..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/security-kerberos.md +++ /dev/null @@ -1,443 +0,0 @@ ---- -id: security-kerberos -title: Authentication using Kerberos -sidebar_label: "Authentication using Kerberos" -original_id: security-kerberos ---- - -[Kerberos](https://web.mit.edu/kerberos/) is a network authentication protocol. By using secret-key cryptography, [Kerberos](https://web.mit.edu/kerberos/) is designed to provide strong authentication for client applications and server applications. - -In Pulsar, you can use Kerberos with [SASL](https://en.wikipedia.org/wiki/Simple_Authentication_and_Security_Layer) as a choice for authentication. And Pulsar uses the [Java Authentication and Authorization Service (JAAS)](https://en.wikipedia.org/wiki/Java_Authentication_and_Authorization_Service) for SASL configuration. You need to provide JAAS configurations for Kerberos authentication. - -This document introduces how to configure `Kerberos` with `SASL` between Pulsar clients and brokers and how to configure Kerberos for Pulsar proxy in detail. - -## Configuration for Kerberos between Client and Broker - -### Prerequisites - -To begin, you need to set up (or already have) a [Key Distribution Center(KDC)](https://en.wikipedia.org/wiki/Key_distribution_center). Also you need to configure and run the [Key Distribution Center(KDC)](https://en.wikipedia.org/wiki/Key_distribution_center)in advance. 
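-
-Before wiring Kerberos into Pulsar, it is worth sanity-checking that the KDC is reachable and that a keytab works, using the standard MIT Kerberos tools. The principal and keytab names below are placeholders for whatever you create in the following steps:
-
-```shell
-
-### obtain a ticket for a test principal from its keytab, then list the ticket cache
-kinit -kt /etc/security/keytabs/test.keytab test/localhost@EXAMPLE.COM
-klist
-
-```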
- -If your organization already uses a Kerberos server (for example, by using `Active Directory`), you do not have to install a new server for Pulsar. If your organization does not use a Kerberos server, you need to install one. Your Linux vendor might have packages for `Kerberos`. On how to install and configure Kerberos, refer to [Ubuntu](https://help.ubuntu.com/community/Kerberos), -[Redhat](https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Managing_Smart_Cards/installing-kerberos.html). - -Note that if you use Oracle Java, you need to download JCE policy files for your Java version and copy them to the `$JAVA_HOME/jre/lib/security` directory. - -#### Kerberos principals - -If you use the existing Kerberos system, ask your Kerberos administrator for a principal for each Brokers in your cluster and for every operating system user that accesses Pulsar with Kerberos authentication(via clients and tools). - -If you have installed your own Kerberos system, you can create these principals with the following commands: - -```shell - -### add Principals for broker -sudo /usr/sbin/kadmin.local -q 'addprinc -randkey broker/{hostname}@{REALM}' -sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{broker-keytabname}.keytab broker/{hostname}@{REALM}" -### add Principals for client -sudo /usr/sbin/kadmin.local -q 'addprinc -randkey client/{hostname}@{REALM}' -sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{client-keytabname}.keytab client/{hostname}@{REALM}" - -``` - -Note that *Kerberos* requires that all your hosts can be resolved with their FQDNs. - -The first part of Broker principal (for example, `broker` in `broker/{hostname}@{REALM}`) is the `serverType` of each host. The suggested values of `serverType` are `broker` (host machine runs service Pulsar Broker) and `proxy` (host machine runs service Pulsar Proxy). - -#### Configure how to connect to KDC - -You need to enter the command below to specify the path to the `krb5.conf` file for the client side and the broker side. The content of `krb5.conf` file indicates the default Realm and KDC information. See [JDK’s Kerberos Requirements](https://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/KerberosReq.html) for more details. - -```shell - --Djava.security.krb5.conf=/etc/pulsar/krb5.conf - -``` - -Here is an example of the krb5.conf file: - -In the configuration file, `EXAMPLE.COM` is the default realm; `kdc = localhost:62037` is the kdc server url for realm `EXAMPLE.COM `: - -``` - -[libdefaults] - default_realm = EXAMPLE.COM - -[realms] - EXAMPLE.COM = { - kdc = localhost:62037 - } - -``` - -Usually machines configured with kerberos already have a system wide configuration and this configuration is optional. - -#### JAAS configuration file - -You need JAAS configuration file for the client side and the broker side. JAAS configuration file provides the section of information that is used to connect KDC. 
Here is an example named `pulsar_jaas.conf`: - -``` - - PulsarBroker { - com.sun.security.auth.module.Krb5LoginModule required - useKeyTab=true - storeKey=true - useTicketCache=false - keyTab="/etc/security/keytabs/pulsarbroker.keytab" - principal="broker/localhost@EXAMPLE.COM"; -}; - - PulsarClient { - com.sun.security.auth.module.Krb5LoginModule required - useKeyTab=true - storeKey=true - useTicketCache=false - keyTab="/etc/security/keytabs/pulsarclient.keytab" - principal="client/localhost@EXAMPLE.COM"; -}; - -``` - -You need to set the `JAAS` configuration file path as JVM parameter for client and broker. For example: - -```shell - - -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf - -``` - -In the `pulsar_jaas.conf` file above - -1. `PulsarBroker` is a section name in the JAAS file that each broker uses. This section tells the broker to use which principal inside Kerberos and the location of the keytab where the principal is stored. `PulsarBroker` allows the broker to use the keytab specified in this section. -2. `PulsarClient` is a section name in the JASS file that each broker uses. This section tells the client to use which principal inside Kerberos and the location of the keytab where the principal is stored. `PulsarClient` allows the client to use the keytab specified in this section. - The following example also reuses this `PulsarClient` section in both the Pulsar internal admin configuration and in CLI command of `bin/pulsar-client`, `bin/pulsar-perf` and `bin/pulsar-admin`. You can also add different sections for different use cases. - -You can have 2 separate JAAS configuration files: -* the file for a broker that has sections of both `PulsarBroker` and `PulsarClient`; -* the file for a client that only has a `PulsarClient` section. - - -### Kerberos configuration for Brokers - -#### Configure the `broker.conf` file - - In the `broker.conf` file, set Kerberos related configurations. - - - Set `authenticationEnabled` to `true`; - - Set `authenticationProviders` to choose `AuthenticationProviderSasl`; - - Set `saslJaasClientAllowedIds` regex for principal that is allowed to connect to broker; - - Set `saslJaasBrokerSectionName` that corresponds to the section in JAAS configuration file for broker; - - To make Pulsar internal admin client work properly, you need to set the configuration in the `broker.conf` file as below: - - Set `brokerClientAuthenticationPlugin` to client plugin `AuthenticationSasl`; - - Set `brokerClientAuthenticationParameters` to value in JSON string `{"saslJaasClientSectionName":"PulsarClient", "serverType":"broker"}`, in which `PulsarClient` is the section name in the `pulsar_jaas.conf` file, and `"serverType":"broker"` indicates that the internal admin client connects to a Pulsar Broker; - - Here is an example: - -``` - -authenticationEnabled=true -authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderSasl -saslJaasClientAllowedIds=.*client.* -saslJaasBrokerSectionName=PulsarBroker - -## Authentication settings of the broker itself. Used when the broker connects to other brokers -brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationSasl -brokerClientAuthenticationParameters={"saslJaasClientSectionName":"PulsarClient", "serverType":"broker"} - -``` - -#### Set Broker JVM parameter - - Set JVM parameters for JAAS configuration file and krb5 configuration file with additional options. 
- -```shell - - -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf - -``` - -You can add this at the end of `PULSAR_EXTRA_OPTS` in the file [`pulsar_env.sh`](https://github.com/apache/pulsar/blob/master/conf/pulsar_env.sh) - -You must ensure that the operating system user who starts broker can reach the keytabs configured in the `pulsar_jaas.conf` file and kdc server in the `krb5.conf` file. - -### Kerberos configuration for clients - -#### Java Client and Java Admin Client - -In client application, include `pulsar-client-auth-sasl` in your project dependency. - -``` - - - org.apache.pulsar - pulsar-client-auth-sasl - ${pulsar.version} - - -``` - -Configure the authentication type to use `AuthenticationSasl`, and also provide the authentication parameters to it. - -You need 2 parameters: -- `saslJaasClientSectionName`. This parameter corresponds to the section in JAAS configuration file for client; -- `serverType`. This parameter stands for whether this client connects to broker or proxy. And client uses this parameter to know which server side principal should be used. - -When you authenticate between client and broker with the setting in above JAAS configuration file, we need to set `saslJaasClientSectionName` to `PulsarClient` and set `serverType` to `broker`. - -The following is an example of creating a Java client: - - ```java - - System.setProperty("java.security.auth.login.config", "/etc/pulsar/pulsar_jaas.conf"); - System.setProperty("java.security.krb5.conf", "/etc/pulsar/krb5.conf"); - - Map authParams = Maps.newHashMap(); - authParams.put("saslJaasClientSectionName", "PulsarClient"); - authParams.put("serverType", "broker"); - - Authentication saslAuth = AuthenticationFactory - .create(org.apache.pulsar.client.impl.auth.AuthenticationSasl.class.getName(), authParams); - - PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://my-broker.com:6650") - .authentication(saslAuth) - .build(); - - ``` - -> The first two lines in the example above are hard coded, alternatively, you can set additional JVM parameters for JAAS and krb5 configuration file when you run the application like below: - -``` - -java -cp -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf $APP-jar-with-dependencies.jar $CLASSNAME - -``` - -You must ensure that the operating system user who starts pulsar client can reach the keytabs configured in the `pulsar_jaas.conf` file and kdc server in the `krb5.conf` file. - -#### Configure CLI tools - -If you use a command-line tool (such as `bin/pulsar-client`, `bin/pulsar-perf` and `bin/pulsar-admin`), you need to perform the following steps: - -Step 1. Enter the command below to configure your `client.conf`. - -```shell - -authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationSasl -authParams={"saslJaasClientSectionName":"PulsarClient", "serverType":"broker"} - -``` - -Step 2. Enter the command below to set JVM parameters for JAAS configuration file and krb5 configuration file with additional options. 
- -```shell - - -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf - -``` - -You can add this at the end of `PULSAR_EXTRA_OPTS` in the file [`pulsar_tools_env.sh`](https://github.com/apache/pulsar/blob/master/conf/pulsar_tools_env.sh), -or add this line `OPTS="$OPTS -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf "` directly to the CLI tool script. - -The meaning of configurations is the same as the meaning of configurations in Java client section. - -## Kerberos configuration for working with Pulsar Proxy - -With the above configuration, client and broker can do authentication using Kerberos. - -A client that connects to Pulsar Proxy is a little different. Pulsar Proxy (as a SASL Server in Kerberos) authenticates Client (as a SASL client in Kerberos) first; and then Pulsar broker authenticates Pulsar Proxy. - -Now in comparison with the above configuration between client and broker, we show you how to configure Pulsar Proxy as follows. - -### Create principal for Pulsar Proxy in Kerberos - -You need to add new principals for Pulsar Proxy comparing with the above configuration. If you already have principals for client and broker, you only need to add the proxy principal here. - -```shell - -### add Principals for Pulsar Proxy -sudo /usr/sbin/kadmin.local -q 'addprinc -randkey proxy/{hostname}@{REALM}' -sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{proxy-keytabname}.keytab proxy/{hostname}@{REALM}" -### add Principals for broker -sudo /usr/sbin/kadmin.local -q 'addprinc -randkey broker/{hostname}@{REALM}' -sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{broker-keytabname}.keytab broker/{hostname}@{REALM}" -### add Principals for client -sudo /usr/sbin/kadmin.local -q 'addprinc -randkey client/{hostname}@{REALM}' -sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{client-keytabname}.keytab client/{hostname}@{REALM}" - -``` - -### Add a section in JAAS configuration file for Pulsar Proxy - -In comparison with the above configuration, add a new section for Pulsar Proxy in JAAS configuration file. - -Here is an example named `pulsar_jaas.conf`: - -``` - - PulsarBroker { - com.sun.security.auth.module.Krb5LoginModule required - useKeyTab=true - storeKey=true - useTicketCache=false - keyTab="/etc/security/keytabs/pulsarbroker.keytab" - principal="broker/localhost@EXAMPLE.COM"; -}; - - PulsarProxy { - com.sun.security.auth.module.Krb5LoginModule required - useKeyTab=true - storeKey=true - useTicketCache=false - keyTab="/etc/security/keytabs/pulsarproxy.keytab" - principal="proxy/localhost@EXAMPLE.COM"; -}; - - PulsarClient { - com.sun.security.auth.module.Krb5LoginModule required - useKeyTab=true - storeKey=true - useTicketCache=false - keyTab="/etc/security/keytabs/pulsarclient.keytab" - principal="client/localhost@EXAMPLE.COM"; -}; - -``` - -### Proxy client configuration - -Pulsar client configuration is similar with client and broker configuration, except that you need to set `serverType` to `proxy` instead of `broker`, for the reason that you need to do the Kerberos authentication between client and proxy. 
- - ```java - - System.setProperty("java.security.auth.login.config", "/etc/pulsar/pulsar_jaas.conf"); - System.setProperty("java.security.krb5.conf", "/etc/pulsar/krb5.conf"); - - Map authParams = Maps.newHashMap(); - authParams.put("saslJaasClientSectionName", "PulsarClient"); - authParams.put("serverType", "proxy"); // ** here is the different ** - - Authentication saslAuth = AuthenticationFactory - .create(org.apache.pulsar.client.impl.auth.AuthenticationSasl.class.getName(), authParams); - - PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://my-broker.com:6650") - .authentication(saslAuth) - .build(); - - ``` - -> The first two lines in the example above are hard coded, alternatively, you can set additional JVM parameters for JAAS and krb5 configuration file when you run the application like below: - -``` - -java -cp -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf $APP-jar-with-dependencies.jar $CLASSNAME - -``` - -### Kerberos configuration for Pulsar proxy service - -In the `proxy.conf` file, set Kerberos related configuration. Here is an example: - -```shell - -## related to authenticate client. -authenticationEnabled=true -authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderSasl -saslJaasClientAllowedIds=.*client.* -saslJaasBrokerSectionName=PulsarProxy - -## related to be authenticated by broker -brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationSasl -brokerClientAuthenticationParameters={"saslJaasClientSectionName":"PulsarProxy", "serverType":"broker"} -forwardAuthorizationCredentials=true - -``` - -The first part relates to authenticating between client and Pulsar Proxy. In this phase, client works as SASL client, while Pulsar Proxy works as SASL server. - -The second part relates to authenticating between Pulsar Proxy and Pulsar Broker. In this phase, Pulsar Proxy works as SASL client, while Pulsar Broker works as SASL server. - -### Broker side configuration. - -The broker side configuration file is the same with the above `broker.conf`, you do not need special configuration for Pulsar Proxy. - -``` - -authenticationEnabled=true -authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderSasl -saslJaasClientAllowedIds=.*client.* -saslJaasBrokerSectionName=PulsarBroker - -``` - -## Regarding authorization and role token - -For Kerberos authentication, we usually use the authenticated principal as the role token for Pulsar authorization. For more information of authorization in Pulsar, see [security authorization](security-authorization.md). - -If you enable 'authorizationEnabled', you need to set `superUserRoles` in `broker.conf` that corresponds to the name registered in kdc. - -For example: - -```bash - -superUserRoles=client/{clientIp}@EXAMPLE.COM - -``` - -## Regarding authentication between ZooKeeper and Broker - -Pulsar Broker acts as a Kerberos client when you authenticate with Zookeeper. 
According to [ZooKeeper document](https://cwiki.apache.org/confluence/display/ZOOKEEPER/Client-Server+mutual+authentication), you need these settings in `conf/zookeeper.conf`: - -``` - -authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider -requireClientAuthScheme=sasl - -``` - -Enter the following commands to add a section of `Client` configurations in the file `pulsar_jaas.conf`, which Pulsar Broker uses: - -``` - - Client { - com.sun.security.auth.module.Krb5LoginModule required - useKeyTab=true - storeKey=true - useTicketCache=false - keyTab="/etc/security/keytabs/pulsarbroker.keytab" - principal="broker/localhost@EXAMPLE.COM"; -}; - -``` - -In this setting, the principal of Pulsar Broker and keyTab file indicates the role of Broker when you authenticate with ZooKeeper. - -## Regarding authentication between BookKeeper and Broker - -Pulsar Broker acts as a Kerberos client when you authenticate with Bookie. According to [BookKeeper document](http://bookkeeper.apache.org/docs/latest/security/sasl/), you need to add `bookkeeperClientAuthenticationPlugin` parameter in `broker.conf`: - -``` - -bookkeeperClientAuthenticationPlugin=org.apache.bookkeeper.sasl.SASLClientProviderFactory - -``` - -In this setting, `SASLClientProviderFactory` creates a BookKeeper SASL client in a Broker, and the Broker uses the created SASL client to authenticate with a Bookie node. - -Enter the following commands to add a section of `BookKeeper` configurations in the `pulsar_jaas.conf` that Pulsar Broker uses: - -``` - - BookKeeper { - com.sun.security.auth.module.Krb5LoginModule required - useKeyTab=true - storeKey=true - useTicketCache=false - keyTab="/etc/security/keytabs/pulsarbroker.keytab" - principal="broker/localhost@EXAMPLE.COM"; -}; - -``` - -In this setting, the principal of Pulsar Broker and keyTab file indicates the role of Broker when you authenticate with Bookie. diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/security-oauth2.md b/site2/website/versioned_docs/version-2.8.0-deprecated/security-oauth2.md deleted file mode 100644 index 24b1530cc848ae..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/security-oauth2.md +++ /dev/null @@ -1,232 +0,0 @@ ---- -id: security-oauth2 -title: Client authentication using OAuth 2.0 access tokens -sidebar_label: "Authentication using OAuth 2.0 access tokens" -original_id: security-oauth2 ---- - -Pulsar supports authenticating clients using OAuth 2.0 access tokens. You can use OAuth 2.0 access tokens to identify a Pulsar client and associate the Pulsar client with some "principal" (or "role"), which is permitted to do some actions, such as publishing messages to a topic or consume messages from a topic. - -This module is used to support the Pulsar client authentication plugin for OAuth 2.0. After communicating with the Oauth 2.0 server, the Pulsar client gets an `access token` from the Oauth 2.0 server, and passes this `access token` to the Pulsar broker to do the authentication. The broker can use the `org.apache.pulsar.broker.authentication.AuthenticationProviderToken`. Or, you can add your own `AuthenticationProvider` to make it with this module. - -## Authentication provider configuration - -This library allows you to authenticate the Pulsar client by using an access token that is obtained from an OAuth 2.0 authorization service, which acts as a _token issuer_. - -### Authentication types - -The authentication type determines how to obtain an access token through an OAuth 2.0 authorization flow. 
- -:::note - -Currently, the Pulsar Java client only supports the `client_credentials` authentication type. - -::: - -#### Client credentials - -The following table lists parameters supported for the `client credentials` authentication type. - -| Parameter | Description | Example | Required or not | -| --- | --- | --- | --- | -| `type` | Oauth 2.0 authentication type. | `client_credentials` (default) | Optional | -| `issuerUrl` | URL of the authentication provider which allows the Pulsar client to obtain an access token | `https://accounts.google.com` | Required | -| `privateKey` | URL to a JSON credentials file | Support the following pattern formats:
  1. `file:///path/to/file`
  2. `file:/path/to/file`
  3. `data:application/json;base64,`
  763. | Required | -| `audience` | An OAuth 2.0 "resource server" identifier for the Pulsar cluster | `https://broker.example.com` | Required | - -The credentials file contains service account credentials used with the client authentication type. The following shows an example of a credentials file `credentials_file.json`. - -```json - -{ - "type": "client_credentials", - "client_id": "d9ZyX97q1ef8Cr81WHVC4hFQ64vSlDK3", - "client_secret": "on1uJ...k6F6R", - "client_email": "1234567890-abcdefghijklmnopqrstuvwxyz@developer.gserviceaccount.com", - "issuer_url": "https://accounts.google.com" -} - -``` - -In the above example, the authentication type is set to `client_credentials` by default. And the fields "client_id" and "client_secret" are required. - -### Typical original OAuth2 request mapping - -The following shows a typical original OAuth2 request, which is used to obtain the access token from the OAuth2 server. - -```bash - -curl --request POST \ - --url https://dev-kt-aa9ne.us.auth0.com/oauth/token \ - --header 'content-type: application/json' \ - --data '{ - "client_id":"Xd23RHsUnvUlP7wchjNYOaIfazgeHd9x", - "client_secret":"rT7ps7WY8uhdVuBTKWZkttwLdQotmdEliaM5rLfmgNibvqziZ-g07ZH52N_poGAb", - "audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/", - "grant_type":"client_credentials"}' - -``` - -In the above example, the mapping relationship is shown as below. - -- The `issuerUrl` parameter in this plugin is mapped to `--url https://dev-kt-aa9ne.us.auth0.com`. -- The `privateKey` file parameter in this plugin should at least contains the `client_id` and `client_secret` fields. -- The `audience` parameter in this plugin is mapped to `"audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/"`. - -## Client Configuration - -You can use the OAuth2 authentication provider with the following Pulsar clients. - -### Java - -You can use the factory method to configure authentication for Pulsar Java client. - -```java - -import org.apache.pulsar.client.impl.auth.oauth2.AuthenticationFactoryOAuth2; - -String issuerUrl = "https://dev-kt-aa9ne.us.auth0.com"; -String credentialsUrl = "file:///path/to/KeyFile.json"; -String audience = "https://dev-kt-aa9ne.us.auth0.com/api/v2/"; - -PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://broker.example.com:6650/") - .authentication( - AuthenticationFactoryOAuth2.clientCredentials(issuerUrl, credentialsUrl, audience)) - .build(); - -``` - -In addition, you can also use the encoded parameters to configure authentication for Pulsar Java client. - -```java - -Authentication auth = AuthenticationFactory - .create(AuthenticationOAuth2.class.getName(), "{"type":"client_credentials","privateKey":"./key/path/..","issuerUrl":"...","audience":"..."}"); -PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://broker.example.com:6650/") - .authentication(auth) - .build(); - -``` - -### C++ client - -The C++ client is similar to the Java client. You need to provide parameters of `issuerUrl`, `private_key` (the credentials file path), and the audience. 
- -```c++ - -#include - -pulsar::ClientConfiguration config; -std::string params = R"({ - "issuer_url": "https://dev-kt-aa9ne.us.auth0.com", - "private_key": "../../pulsar-broker/src/test/resources/authentication/token/cpp_credentials_file.json", - "audience": "https://dev-kt-aa9ne.us.auth0.com/api/v2/"})"; - -config.setAuth(pulsar::AuthOauth2::create(params)); - -pulsar::Client client("pulsar://broker.example.com:6650/", config); - -``` - -### Go client - -To enable OAuth2 authentication in Go client, you need to configure OAuth2 authentication. -This example shows how to configure OAuth2 authentication in Go client. - -```go - -oauth := pulsar.NewAuthenticationOAuth2(map[string]string{ - "type": "client_credentials", - "issuerUrl": "https://dev-kt-aa9ne.us.auth0.com", - "audience": "https://dev-kt-aa9ne.us.auth0.com/api/v2/", - "privateKey": "/path/to/privateKey", - "clientId": "0Xx...Yyxeny", - }) -client, err := pulsar.NewClient(pulsar.ClientOptions{ - URL: "pulsar://my-cluster:6650", - Authentication: oauth, -}) - -``` - -### Python client - -To enable OAuth2 authentication in Python client, you need to configure OAuth2 authentication. -This example shows how to configure OAuth2 authentication in Python client. - -```python - -from pulsar import Client, AuthenticationOauth2 - -params = ''' -{ - "issuer_url": "https://dev-kt-aa9ne.us.auth0.com", - "private_key": "/path/to/privateKey", - "audience": "https://dev-kt-aa9ne.us.auth0.com/api/v2/" -} -''' - -client = Client("pulsar://my-cluster:6650", authentication=AuthenticationOauth2(params)) - -``` - -## CLI configuration - -This section describes how to use Pulsar CLI tools to connect a cluster through OAuth2 authentication plugin. - -### pulsar-admin - -This example shows how to use pulsar-admin to connect to a cluster through OAuth2 authentication plugin. - -```shell script - -bin/pulsar-admin --admin-url https://streamnative.cloud:443 \ ---auth-plugin org.apache.pulsar.client.impl.auth.oauth2.AuthenticationOAuth2 \ ---auth-params '{"privateKey":"file:///path/to/key/file.json", - "issuerUrl":"https://dev-kt-aa9ne.us.auth0.com", - "audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/"}' \ -tenants list - -``` - -Set the `admin-url` parameter to the Web service URL. A Web service URL is a combination of the protocol, hostname and port ID, such as `pulsar://localhost:6650`. -Set the `privateKey`, `issuerUrl`, and `audience` parameters to the values based on the configuration in the key file. For details, see [authentication types](#authentication-types). - -### pulsar-client - -This example shows how to use pulsar-client to connect to a cluster through OAuth2 authentication plugin. - -```shell script - -bin/pulsar-client \ ---url SERVICE_URL \ ---auth-plugin org.apache.pulsar.client.impl.auth.oauth2.AuthenticationOAuth2 \ ---auth-params '{"privateKey":"file:///path/to/key/file.json", - "issuerUrl":"https://dev-kt-aa9ne.us.auth0.com", - "audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/"}' \ -produce test-topic -m "test-message" -n 10 - -``` - -Set the `admin-url` parameter to the Web service URL. A Web service URL is a combination of the protocol, hostname and port ID, such as `pulsar://localhost:6650`. -Set the `privateKey`, `issuerUrl`, and `audience` parameters to the values based on the configuration in the key file. For details, see [authentication types](#authentication-types). - -### pulsar-perf - -This example shows how to use pulsar-perf to connect to a cluster through OAuth2 authentication plugin. 
- -```shell script - -bin/pulsar-perf produce --service-url pulsar+ssl://streamnative.cloud:6651 \ ---auth_plugin org.apache.pulsar.client.impl.auth.oauth2.AuthenticationOAuth2 \ ---auth-params '{"privateKey":"file:///path/to/key/file.json", - "issuerUrl":"https://dev-kt-aa9ne.us.auth0.com", - "audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/"}' \ --r 1000 -s 1024 test-topic - -``` - -Set the `admin-url` parameter to the Web service URL. A Web service URL is a combination of the protocol, hostname and port ID, such as `pulsar://localhost:6650`. -Set the `privateKey`, `issuerUrl`, and `audience` parameters to the values based on the configuration in the key file. For details, see [authentication types](#authentication-types). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/security-overview.md b/site2/website/versioned_docs/version-2.8.0-deprecated/security-overview.md deleted file mode 100644 index c6bd9b64e4f766..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/security-overview.md +++ /dev/null @@ -1,36 +0,0 @@ ---- -id: security-overview -title: Pulsar security overview -sidebar_label: "Overview" -original_id: security-overview ---- - -As the central message bus for a business, Apache Pulsar is frequently used for storing mission-critical data. Therefore, enabling security features in Pulsar is crucial. - -By default, Pulsar configures no encryption, authentication, or authorization. Any client can communicate to Apache Pulsar via plain text service URLs. So we must ensure that Pulsar accessing via these plain text service URLs is restricted to trusted clients only. In such cases, you can use Network segmentation and/or authorization ACLs to restrict access to trusted IPs. If you use neither, the state of cluster is wide open and anyone can access the cluster. - -Pulsar supports a pluggable authentication mechanism. And Pulsar clients use this mechanism to authenticate with brokers and proxies. You can also configure Pulsar to support multiple authentication sources. - -The Pulsar broker validates the authentication credentials when a connection is established. After the initial connection is authenticated, the "principal" token is stored for authorization though the connection is not re-authenticated. The broker periodically checks the expiration status of every `ServerCnx` object. You can set the `authenticationRefreshCheckSeconds` on the broker to control the frequency to check the expiration status. By default, the `authenticationRefreshCheckSeconds` is set to 60s. When the authentication is expired, the broker forces to re-authenticate the connection. If the re-authentication fails, the broker disconnects the client. - -The broker supports learning whether a particular client supports authentication refreshing. If a client supports authentication refreshing and the credential is expired, the authentication provider calls the `refreshAuthentication` method to initiate the refreshing process. If a client does not support authentication refreshing and the credential is expired, the broker disconnects the client. - -You had better secure the service components in your Apache Pulsar deployment. - -## Role tokens - -In Pulsar, a *role* is a string, like `admin` or `app1`, which can represent a single client or multiple clients. You can use roles to control permission for clients to produce or consume from certain topics, administer the configuration for tenants, and so on. 
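-
-For example, once authorization is enabled, an administrator can grant a role permission to produce and consume within a namespace (the tenant, namespace, and role names here are illustrative):
-
-```shell
-
-$ bin/pulsar-admin namespaces grant-permission my-tenant/my-namespace \
-  --role app1 \
-  --actions produce,consume
-
-```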
- -Apache Pulsar uses an [Authentication Provider](#authentication-providers) to establish the identity of a client and then assign a *role token* to that client. This role token is then used for [Authorization and ACLs](security-authorization.md) to determine what the client is authorized to do. - -## Authentication providers - -Currently Pulsar supports the following authentication providers: - -- [TLS Authentication](security-tls-authentication.md) -- [Athenz](security-athenz.md) -- [Kerberos](security-kerberos.md) -- [JSON Web Token Authentication](security-jwt.md) -- [OAuth 2.0 authentication](security-oauth2.md) -- [HTTP basic authentication](security-basic-auth.md) - diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/security-tls-authentication.md b/site2/website/versioned_docs/version-2.8.0-deprecated/security-tls-authentication.md deleted file mode 100644 index 85d2240f413060..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/security-tls-authentication.md +++ /dev/null @@ -1,222 +0,0 @@ ---- -id: security-tls-authentication -title: Authentication using TLS -sidebar_label: "Authentication using TLS" -original_id: security-tls-authentication ---- - -## TLS authentication overview - -TLS authentication is an extension of [TLS transport encryption](security-tls-transport.md). Not only servers have keys and certs that the client uses to verify the identity of servers, clients also have keys and certs that the server uses to verify the identity of clients. You must have TLS transport encryption configured on your cluster before you can use TLS authentication. This guide assumes you already have TLS transport encryption configured. - -`Bouncy Castle Provider` provides TLS related cipher suites and algorithms in Pulsar. If you need [FIPS](https://www.bouncycastle.org/fips_faq.html) version of `Bouncy Castle Provider`, please reference [Bouncy Castle page](security-bouncy-castle.md). - -### Create client certificates - -Client certificates are generated using the certificate authority. Server certificates are also generated with the same certificate authority. - -The biggest difference between client certs and server certs is that the **common name** for the client certificate is the **role token** which that client is authenticated as. - -To use client certificates, you need to set `tlsRequireTrustedClientCertOnConnect=true` at the broker side. For details, refer to [TLS broker configuration](security-tls-transport.md#configure-broker). - -First, you need to enter the following command to generate the key : - -```bash - -$ openssl genrsa -out admin.key.pem 2048 - -``` - -Similar to the broker, the client expects the key to be in [PKCS 8](https://en.wikipedia.org/wiki/PKCS_8) format, so you need to convert it by entering the following command: - -```bash - -$ openssl pkcs8 -topk8 -inform PEM -outform PEM \ - -in admin.key.pem -out admin.key-pk8.pem -nocrypt - -``` - -Next, enter the command below to generate the certificate request. When you are asked for a **common name**, enter the **role token** that you want this key pair to authenticate a client as. - -```bash - -$ openssl req -config openssl.cnf \ - -key admin.key.pem -new -sha256 -out admin.csr.pem - -``` - -:::note - -If openssl.cnf is not specified, read [Certificate authority](http://pulsar.apache.org/docs/en/security-tls-transport/#certificate-authority) to get the openssl.cnf. - -::: - -Then, enter the command below to sign with request with the certificate authority. 
Note that the client certs uses the **usr_cert** extension, which allows the cert to be used for client authentication. - -```bash - -$ openssl ca -config openssl.cnf -extensions usr_cert \ - -days 1000 -notext -md sha256 \ - -in admin.csr.pem -out admin.cert.pem - -``` - -You can get a cert, `admin.cert.pem`, and a key, `admin.key-pk8.pem` from this command. With `ca.cert.pem`, clients can use this cert and this key to authenticate themselves to brokers and proxies as the role token ``admin``. - -:::note - -If the "unable to load CA private key" error occurs and the reason of this error is "No such file or directory: /etc/pki/CA/private/cakey.pem" in this step. Try the command below: - -```bash - -$ cd /etc/pki/tls/misc/CA -$ ./CA -newca - -``` - -to generate `cakey.pem` . - -::: - -## Enable TLS authentication on brokers - -To configure brokers to authenticate clients, add the following parameters to `broker.conf`, alongside [the configuration to enable tls transport](security-tls-transport.md#broker-configuration): - -```properties - -# Configuration to enable authentication -authenticationEnabled=true -authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderTls - -# operations and publish/consume from all topics -superUserRoles=admin - -# Authentication settings of the broker itself. Used when the broker connects to other brokers, either in same or other clusters -brokerClientTlsEnabled=true -brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationTls -brokerClientAuthenticationParameters={"tlsCertFile":"/path/my-ca/admin.cert.pem","tlsKeyFile":"/path/my-ca/admin.key-pk8.pem"} -brokerClientTrustCertsFilePath=/path/my-ca/certs/ca.cert.pem - -``` - -## Enable TLS authentication on proxies - -To configure proxies to authenticate clients, add the following parameters to `proxy.conf`, alongside [the configuration to enable tls transport](security-tls-transport.md#proxy-configuration): - -The proxy should have its own client key pair for connecting to brokers. You need to configure the role token for this key pair in the ``proxyRoles`` of the brokers. See the [authorization guide](security-authorization.md) for more details. - -```properties - -# For clients connecting to the proxy -authenticationEnabled=true -authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderTls - -# For the proxy to connect to brokers -brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationTls -brokerClientAuthenticationParameters=tlsCertFile:/path/to/proxy.cert.pem,tlsKeyFile:/path/to/proxy.key-pk8.pem - -``` - -## Client configuration - -When you use TLS authentication, client connects via TLS transport. You need to configure the client to use ```https://``` and 8443 port for the web service URL, ```pulsar+ssl://``` and 6651 port for the broker service URL. - -### CLI tools - -[Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-pulsar-admin.md), [`pulsar-perf`](reference-cli-tools.md#pulsar-perf), and [`pulsar-client`](reference-cli-tools.md#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation. 
- -You need to add the following parameters to that file to use TLS authentication with the CLI tools of Pulsar: - -```properties - -webServiceUrl=https://broker.example.com:8443/ -brokerServiceUrl=pulsar+ssl://broker.example.com:6651/ -useTls=true -tlsAllowInsecureConnection=false -tlsTrustCertsFilePath=/path/to/ca.cert.pem -authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationTls -authParams=tlsCertFile:/path/to/my-role.cert.pem,tlsKeyFile:/path/to/my-role.key-pk8.pem - -``` - -### Java client - -```java - -import org.apache.pulsar.client.api.PulsarClient; - -PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar+ssl://broker.example.com:6651/") - .enableTls(true) - .tlsTrustCertsFilePath("/path/to/ca.cert.pem") - .authentication("org.apache.pulsar.client.impl.auth.AuthenticationTls", - "tlsCertFile:/path/to/my-role.cert.pem,tlsKeyFile:/path/to/my-role.key-pk8.pem") - .build(); - -``` - -### Python client - -```python - -from pulsar import Client, AuthenticationTLS - -auth = AuthenticationTLS("/path/to/my-role.cert.pem", "/path/to/my-role.key-pk8.pem") -client = Client("pulsar+ssl://broker.example.com:6651/", - tls_trust_certs_file_path="/path/to/ca.cert.pem", - tls_allow_insecure_connection=False, - authentication=auth) - -``` - -### C++ client - -```c++ - -#include - -pulsar::ClientConfiguration config; -config.setUseTls(true); -config.setTlsTrustCertsFilePath("/path/to/ca.cert.pem"); -config.setTlsAllowInsecureConnection(false); - -pulsar::AuthenticationPtr auth = pulsar::AuthTls::create("/path/to/my-role.cert.pem", - "/path/to/my-role.key-pk8.pem") -config.setAuth(auth); - -pulsar::Client client("pulsar+ssl://broker.example.com:6651/", config); - -``` - -### Node.js client - -```JavaScript - -const Pulsar = require('pulsar-client'); - -(async () => { - const auth = new Pulsar.AuthenticationTls({ - certificatePath: '/path/to/my-role.cert.pem', - privateKeyPath: '/path/to/my-role.key-pk8.pem', - }); - - const client = new Pulsar.Client({ - serviceUrl: 'pulsar+ssl://broker.example.com:6651/', - authentication: auth, - tlsTrustCertsFilePath: '/path/to/ca.cert.pem', - }); -})(); - -``` - -### C# client - -```c# - -var clientCertificate = new X509Certificate2("admin.pfx"); -var client = PulsarClient.Builder() - .AuthenticateUsingClientCertificate(clientCertificate) - .Build(); - -``` - diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/security-tls-keystore.md b/site2/website/versioned_docs/version-2.8.0-deprecated/security-tls-keystore.md deleted file mode 100644 index b09500e3daf04c..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/security-tls-keystore.md +++ /dev/null @@ -1,322 +0,0 @@ ---- -id: security-tls-keystore -title: Using TLS with KeyStore configure -sidebar_label: "Using TLS with KeyStore configure" -original_id: security-tls-keystore ---- - -## Overview - -Apache Pulsar supports [TLS encryption](security-tls-transport.md) and [TLS authentication](security-tls-authentication.md) between clients and Apache Pulsar service. -By default it uses PEM format file configuration. This page tries to describe use [KeyStore](https://en.wikipedia.org/wiki/Java_KeyStore) type configure for TLS. - - -## TLS encryption with KeyStore configure - -### Generate TLS key and certificate - -The first step of deploying TLS is to generate the key and the certificate for each machine in the cluster. -You can use Java’s `keytool` utility to accomplish this task. 
We will generate the key into a temporary keystore -initially for broker, so that we can export and sign it later with CA. - -```shell - -keytool -keystore broker.keystore.jks -alias localhost -validity {validity} -genkeypair -keyalg RSA - -``` - -You need to specify two parameters in the above command: - -1. `keystore`: the keystore file that stores the certificate. The *keystore* file contains the private key of - the certificate; hence, it needs to be kept safely. -2. `validity`: the valid time of the certificate in days. - -> Ensure that common name (CN) matches exactly with the fully qualified domain name (FQDN) of the server. -The client compares the CN with the DNS domain name to ensure that it is indeed connecting to the desired server, not a malicious one. - -### Creating your own CA - -After the first step, each broker in the cluster has a public-private key pair, and a certificate to identify the machine. -The certificate, however, is unsigned, which means that an attacker can create such a certificate to pretend to be any machine. - -Therefore, it is important to prevent forged certificates by signing them for each machine in the cluster. -A `certificate authority (CA)` is responsible for signing certificates. CA works likes a government that issues passports — -the government stamps (signs) each passport so that the passport becomes difficult to forge. Other governments verify the stamps -to ensure the passport is authentic. Similarly, the CA signs the certificates, and the cryptography guarantees that a signed -certificate is computationally difficult to forge. Thus, as long as the CA is a genuine and trusted authority, the clients have -high assurance that they are connecting to the authentic machines. - -```shell - -openssl req -new -x509 -keyout ca-key -out ca-cert -days 365 - -``` - -The generated CA is simply a *public-private* key pair and certificate, and it is intended to sign other certificates. - -The next step is to add the generated CA to the clients' truststore so that the clients can trust this CA: - -```shell - -keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert - -``` - -NOTE: If you configure the brokers to require client authentication by setting `tlsRequireTrustedClientCertOnConnect` to `true` on the -broker configuration, then you must also provide a truststore for the brokers and it should have all the CA certificates that clients keys were signed by. - -```shell - -keytool -keystore broker.truststore.jks -alias CARoot -import -file ca-cert - -``` - -In contrast to the keystore, which stores each machine’s own identity, the truststore of a client stores all the certificates -that the client should trust. Importing a certificate into one’s truststore also means trusting all certificates that are signed -by that certificate. As the analogy above, trusting the government (CA) also means trusting all passports (certificates) that -it has issued. This attribute is called the chain of trust, and it is particularly useful when deploying TLS on a large BookKeeper cluster. -You can sign all certificates in the cluster with a single CA, and have all machines share the same truststore that trusts the CA. -That way all machines can authenticate all other machines. - - -### Signing the certificate - -The next step is to sign all certificates in the keystore with the CA we generated. 
First, you need to export the certificate from the keystore: - -```shell - -keytool -keystore broker.keystore.jks -alias localhost -certreq -file cert-file - -``` - -Then sign it with the CA: - -```shell - -openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days {validity} -CAcreateserial -passin pass:{ca-password} - -``` - -Finally, you need to import both the certificate of the CA and the signed certificate into the keystore: - -```shell - -keytool -keystore broker.keystore.jks -alias CARoot -import -file ca-cert -keytool -keystore broker.keystore.jks -alias localhost -import -file cert-signed - -``` - -The definitions of the parameters are the following: - -1. `keystore`: the location of the keystore -2. `ca-cert`: the certificate of the CA -3. `ca-key`: the private key of the CA -4. `ca-password`: the passphrase of the CA -5. `cert-file`: the exported, unsigned certificate of the broker -6. `cert-signed`: the signed certificate of the broker - -### Configuring brokers - -Brokers enable TLS by provide valid `brokerServicePortTls` and `webServicePortTls`, and also need set `tlsEnabledWithKeyStore` to `true` for using KeyStore type configuration. -Besides this, KeyStore path, KeyStore password, TrustStore path, and TrustStore password need to provided. -And since broker will create internal client/admin client to communicate with other brokers, user also need to provide config for them, this is similar to how user config the outside client/admin-client. -If `tlsRequireTrustedClientCertOnConnect` is `true`, broker will reject the Connection if the Client Certificate is not trusted. - -The following TLS configs are needed on the broker side: - -```properties - -tlsEnabledWithKeyStore=true -# key store -tlsKeyStoreType=JKS -tlsKeyStore=/var/private/tls/broker.keystore.jks -tlsKeyStorePassword=brokerpw - -# trust store -tlsTrustStoreType=JKS -tlsTrustStore=/var/private/tls/broker.truststore.jks -tlsTrustStorePassword=brokerpw - -# internal client/admin-client config -brokerClientTlsEnabled=true -brokerClientTlsEnabledWithKeyStore=true -brokerClientTlsTrustStoreType=JKS -brokerClientTlsTrustStore=/var/private/tls/client.truststore.jks -brokerClientTlsTrustStorePassword=clientpw - -``` - -NOTE: it is important to restrict access to the store files via filesystem permissions. - -Optional settings that may worth consider: - -1. tlsClientAuthentication=false: Enable/Disable using TLS for authentication. This config when enabled will authenticate the other end - of the communication channel. It should be enabled on both brokers and clients for mutual TLS. -2. tlsCiphers=[TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256], A cipher suite is a named combination of authentication, encryption, MAC and key exchange - algorithm used to negotiate the security settings for a network connection using TLS network protocol. By default, - it is null. [OpenSSL Ciphers](https://www.openssl.org/docs/man1.0.2/apps/ciphers.html) - [JDK Ciphers](http://docs.oracle.com/javase/8/docs/technotes/guides/security/StandardNames.html#ciphersuites) -3. tlsProtocols=[TLSv1.3,TLSv1.2] (list out the TLS protocols that you are going to accept from clients). - By default, it is not set. - -### Configuring Clients - -This is similar to [TLS encryption configuing for client with PEM type](security-tls-transport.md#Client configuration). -For a a minimal configuration, user need to provide the TrustStore information. - -e.g. -1. 
### Configuring clients

This is similar to [TLS encryption configuration for clients with the PEM type](security-tls-transport.md#client-configuration). For a minimal configuration, you only need to provide the TrustStore information. For example:

1. for [Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-cli-tools.md#pulsar-admin), [`pulsar-perf`](reference-cli-tools.md#pulsar-perf), and [`pulsar-client`](reference-cli-tools.md#pulsar-client), use the `conf/client.conf` config file in a Pulsar installation.

   ```properties

   webServiceUrl=https://broker.example.com:8443/
   brokerServiceUrl=pulsar+ssl://broker.example.com:6651/
   useKeyStoreTls=true
   tlsTrustStoreType=JKS
   tlsTrustStorePath=/var/private/tls/client.truststore.jks
   tlsTrustStorePassword=clientpw

   ```

1. for the Java client

   ```java

   import org.apache.pulsar.client.api.PulsarClient;

   PulsarClient client = PulsarClient.builder()
       .serviceUrl("pulsar+ssl://broker.example.com:6651/")
       .enableTls(true)
       .useKeyStoreTls(true)
       .tlsTrustStorePath("/var/private/tls/client.truststore.jks")
       .tlsTrustStorePassword("clientpw")
       .allowTlsInsecureConnection(false)
       .build();

   ```

1. for the Java admin client

   ```java

   import org.apache.pulsar.client.admin.PulsarAdmin;

   PulsarAdmin admin = PulsarAdmin.builder().serviceHttpUrl("https://broker.example.com:8443")
       .useKeyStoreTls(true)
       .tlsTrustStorePath("/var/private/tls/client.truststore.jks")
       .tlsTrustStorePassword("clientpw")
       .allowTlsInsecureConnection(false)
       .build();

   ```

## TLS authentication with KeyStore configuration

This is similar to [TLS authentication with the PEM type](security-tls-authentication.md).

### Broker authentication config

`broker.conf`

```properties

# Configuration to enable authentication
authenticationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderTls

# this should be the CN of one of the client keystores.
superUserRoles=admin

# Enable KeyStore type
tlsEnabledWithKeyStore=true
tlsRequireTrustedClientCertOnConnect=true

# key store
tlsKeyStoreType=JKS
tlsKeyStore=/var/private/tls/broker.keystore.jks
tlsKeyStorePassword=brokerpw

# trust store
tlsTrustStoreType=JKS
tlsTrustStore=/var/private/tls/broker.truststore.jks
tlsTrustStorePassword=brokerpw

# internal client/admin-client config
brokerClientTlsEnabled=true
brokerClientTlsEnabledWithKeyStore=true
brokerClientTlsTrustStoreType=JKS
brokerClientTlsTrustStore=/var/private/tls/client.truststore.jks
brokerClientTlsTrustStorePassword=clientpw
# internal auth config
brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationKeyStoreTls
brokerClientAuthenticationParameters={"keyStoreType":"JKS","keyStorePath":"/var/private/tls/client.keystore.jks","keyStorePassword":"clientpw"}
# currently the WebSocket service does not support the KeyStore type
webSocketServiceEnabled=false

```
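The client keystore referenced by `brokerClientAuthenticationParameters` must carry a certificate whose CN matches an authorized role — here `admin`, to match `superUserRoles`. A minimal sketch of generating it (the alias and `-dname` value are illustrative; sign and import it with the CA exactly as for the broker keystore above):

```shell

keytool -keystore client.keystore.jks -alias client -validity {validity} -genkeypair -keyalg RSA -dname "CN=admin"

```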
### Client authentication config

Besides the TLS encryption configuration above, the main work is providing the client with a KeyStore whose certificate CN is a valid client role. For example:

1. for [Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-cli-tools.md#pulsar-admin), [`pulsar-perf`](reference-cli-tools.md#pulsar-perf), and [`pulsar-client`](reference-cli-tools.md#pulsar-client), use the `conf/client.conf` config file in a Pulsar installation.

   ```properties

   webServiceUrl=https://broker.example.com:8443/
   brokerServiceUrl=pulsar+ssl://broker.example.com:6651/
   useKeyStoreTls=true
   tlsTrustStoreType=JKS
   tlsTrustStorePath=/var/private/tls/client.truststore.jks
   tlsTrustStorePassword=clientpw
   authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationKeyStoreTls
   authParams={"keyStoreType":"JKS","keyStorePath":"/path/to/keystorefile","keyStorePassword":"keystorepw"}

   ```

1. for the Java client

   ```java

   import org.apache.pulsar.client.api.PulsarClient;

   PulsarClient client = PulsarClient.builder()
       .serviceUrl("pulsar+ssl://broker.example.com:6651/")
       .enableTls(true)
       .useKeyStoreTls(true)
       .tlsTrustStorePath("/var/private/tls/client.truststore.jks")
       .tlsTrustStorePassword("clientpw")
       .allowTlsInsecureConnection(false)
       .authentication(
               "org.apache.pulsar.client.impl.auth.AuthenticationKeyStoreTls",
               "keyStoreType:JKS,keyStorePath:/var/private/tls/client.keystore.jks,keyStorePassword:clientpw")
       .build();

   ```

1. for the Java admin client

   ```java

   import org.apache.pulsar.client.admin.PulsarAdmin;

   PulsarAdmin admin = PulsarAdmin.builder().serviceHttpUrl("https://broker.example.com:8443")
       .useKeyStoreTls(true)
       .tlsTrustStorePath("/var/private/tls/client.truststore.jks")
       .tlsTrustStorePassword("clientpw")
       .allowTlsInsecureConnection(false)
       .authentication(
               "org.apache.pulsar.client.impl.auth.AuthenticationKeyStoreTls",
               "keyStoreType:JKS,keyStorePath:/var/private/tls/client.keystore.jks,keyStorePassword:clientpw")
       .build();

   ```

## Enabling TLS Logging

You can enable TLS debug logging at the JVM level by starting the brokers and/or clients with the `javax.net.debug` system property. For example:

```shell

-Djavax.net.debug=all

```

You can find more details in the Oracle documentation on [debugging SSL/TLS connections](http://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/ReadDebug.html).
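One way to pass the property when starting a broker from the distribution — a sketch that assumes the `PULSAR_EXTRA_OPTS` hook honored by `conf/pulsar_env.sh`, which may differ across versions:

```shell

PULSAR_EXTRA_OPTS="-Djavax.net.debug=all" bin/pulsar broker

```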
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/security-tls-transport.md b/site2/website/versioned_docs/version-2.8.0-deprecated/security-tls-transport.md
deleted file mode 100644
index 2cad17a78c3507..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/security-tls-transport.md
+++ /dev/null
@@ -1,295 +0,0 @@
---
id: security-tls-transport
title: Transport Encryption using TLS
sidebar_label: "Transport Encryption using TLS"
original_id: security-tls-transport
---

## TLS overview

By default, Apache Pulsar clients communicate with the Apache Pulsar service in plain text. This means that all data is sent in the clear. You can use TLS to encrypt this traffic to protect it from the snooping of a man-in-the-middle attacker.

You can also configure TLS for both encryption and authentication. Use this guide to configure just TLS transport encryption, and refer to [here](security-tls-authentication.md) for TLS authentication configuration. Alternatively, you can use [another authentication mechanism](security-athenz.md) on top of TLS transport encryption.

> Note that enabling TLS may impact performance due to encryption overhead.

## TLS concepts

TLS is a form of [public key cryptography](https://en.wikipedia.org/wiki/Public-key_cryptography). Encryption is performed with key pairs consisting of a public key and a private key: the public key encrypts the messages and the private key decrypts the messages.

To use TLS transport encryption, you need two kinds of key pairs: **server key pairs** and a **certificate authority**.

You can use a third kind of key pair, **client key pairs**, for [client authentication](security-tls-authentication.md).

You should store the **certificate authority** private key in a very secure location (a fully encrypted, disconnected, air-gapped computer). As for the certificate authority public key, the **trust cert**, you can freely share it.

For both client and server key pairs, the administrator first generates a private key and a certificate request, then uses the certificate authority private key to sign the certificate request, and finally generates a certificate. This certificate is the public key for the server/client key pair.

For TLS transport encryption, the clients can use the **trust cert** to verify that the server has a key pair that the certificate authority signed when the clients are talking to the server. A man-in-the-middle attacker does not have access to the certificate authority, so they cannot create a server with such a key pair.

For TLS authentication, the server uses the **trust cert** to verify that the client has a key pair that the certificate authority signed. The common name of the **client cert** is then used as the client's role token (see [Overview](security-overview.md)).

`Bouncy Castle Provider` provides cipher suites and algorithms in Pulsar. If you need the [FIPS](https://www.bouncycastle.org/fips_faq.html) version of `Bouncy Castle Provider`, refer to the [Bouncy Castle page](security-bouncy-castle.md).

## Create TLS certificates

Creating TLS certificates for Pulsar involves creating a [certificate authority](#certificate-authority) (CA), [server certificate](#server-certificate), and [client certificate](#client-certificate).

Follow the guide below to set up a certificate authority. You can also refer to plenty of resources on the internet for more details. We recommend [this guide](https://jamielinux.com/docs/openssl-certificate-authority/index.html) for detailed reference.

### Certificate authority

1. Create the certificate for the CA. You can use the CA to sign both the broker and client certificates. This ensures that each party will trust the others. You should store the CA in a very secure location (ideally completely disconnected from networks, air-gapped, and fully encrypted).

2. Enter the following commands to create a directory for your CA and place [this openssl configuration file](https://github.com/apache/pulsar/tree/master/site2/website/static/examples/openssl.cnf) in the directory. You may want to modify the default answers for company name and department in the configuration file. Export the location of the CA directory to the environment variable CA_HOME. The configuration file uses this environment variable to find the rest of the files and directories that the CA needs.

```bash

mkdir my-ca
cd my-ca
wget https://raw.githubusercontent.com/apache/pulsar-site/main/site2/website/static/examples/openssl.cnf
export CA_HOME=$(pwd)

```

3. Enter the commands below to create the necessary directories, keys, and certs.
```bash

mkdir certs crl newcerts private
chmod 700 private/
touch index.txt
echo 1000 > serial
openssl genrsa -aes256 -out private/ca.key.pem 4096
chmod 400 private/ca.key.pem
openssl req -config openssl.cnf -key private/ca.key.pem \
    -new -x509 -days 7300 -sha256 -extensions v3_ca \
    -out certs/ca.cert.pem
chmod 444 certs/ca.cert.pem

```

4. After you answer the question prompts, CA-related files are stored in the `./my-ca` directory. Within that directory:

* `certs/ca.cert.pem` is the public certificate. This public certificate is meant to be distributed to all parties involved.
* `private/ca.key.pem` is the private key. You only need it when you are signing a new certificate for either brokers or clients, and you must guard this private key safely.

### Server certificate

Once you have created a CA certificate, you can create certificate requests and sign them with the CA.

The following commands ask you a few questions and then create the certificates. When you are asked for the common name, you should match the hostname of the broker. You can also use a wildcard to match a group of broker hostnames, for example, `*.broker.usw.example.com`. This ensures that multiple machines can reuse the same certificate.

:::tip

Sometimes matching the hostname is not possible or makes no sense, such as when you create the brokers with random hostnames, or you plan to connect to the hosts via their IP. In these cases, you should configure the client to disable TLS hostname verification. For more details, see [the hostname verification section in client configuration](#hostname-verification).

:::

1. Enter the command below to generate the key.

```bash

openssl genrsa -out broker.key.pem 2048

```

The broker expects the key to be in [PKCS 8](https://en.wikipedia.org/wiki/PKCS_8) format, so enter the following command to convert it.

```bash

openssl pkcs8 -topk8 -inform PEM -outform PEM \
    -in broker.key.pem -out broker.key-pk8.pem -nocrypt

```

2. Enter the following command to generate the certificate request.

```bash

openssl req -config openssl.cnf \
    -key broker.key.pem -new -sha256 -out broker.csr.pem

```

3. Sign it with the certificate authority by entering the command below.

```bash

openssl ca -config openssl.cnf -extensions server_cert \
    -days 1000 -notext -md sha256 \
    -in broker.csr.pem -out broker.cert.pem

```

At this point, you have a cert, `broker.cert.pem`, and a key, `broker.key-pk8.pem`, which you can use along with `ca.cert.pem` to configure TLS transport encryption for your broker and proxy nodes.
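Before wiring these files into the broker, you can check that the signed certificate chains back to your CA — an optional sanity check, run from the CA directory where the steps above placed the files:

```bash

openssl verify -CAfile certs/ca.cert.pem broker.cert.pem

```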
## Configure broker

To configure a Pulsar [broker](reference-terminology.md#broker) to use TLS transport encryption, you need to make some changes to `broker.conf`, which is located in the `conf` directory of your [Pulsar installation](getting-started-standalone.md).

Add these values to the configuration file (substituting the appropriate certificate paths where necessary):

```properties

tlsEnabled=true
tlsRequireTrustedClientCertOnConnect=true
tlsCertificateFilePath=/path/to/broker.cert.pem
tlsKeyFilePath=/path/to/broker.key-pk8.pem
tlsTrustCertsFilePath=/path/to/ca.cert.pem

```

> You can find a full list of parameters available in the `conf/broker.conf` file, as well as the default values for those parameters, in [Broker Configuration](reference-configuration.md#broker).

### TLS Protocol Version and Cipher

You can configure the broker (and proxy) to require specific TLS protocol versions and ciphers for TLS negotiation. Use these settings to stop clients from requesting downgraded TLS protocol versions or ciphers that may have weaknesses.

Both the TLS protocol version and cipher properties can take multiple values, separated by commas. The possible values for protocol version and ciphers depend on the TLS provider that you are using. Pulsar uses OpenSSL if it is available; otherwise, it falls back to the JDK implementation.

```properties

tlsProtocols=TLSv1.3,TLSv1.2
tlsCiphers=TLS_DH_RSA_WITH_AES_256_GCM_SHA384,TLS_DH_RSA_WITH_AES_256_CBC_SHA

```

OpenSSL currently supports `TLSv1.1`, `TLSv1.2`, and `TLSv1.3` for the protocol version. You can acquire a list of supported ciphers from the `openssl ciphers` command, for example, `openssl ciphers -tls1_3`.

For JDK 11, you can obtain a list of supported values from the documentation:
- [TLS protocol](https://docs.oracle.com/en/java/javase/11/security/oracle-providers.html#GUID-7093246A-31A3-4304-AC5F-5FB6400405E2__SUNJSSEPROVIDERPROTOCOLPARAMETERS-BBF75009)
- [Ciphers](https://docs.oracle.com/en/java/javase/11/security/oracle-providers.html#GUID-7093246A-31A3-4304-AC5F-5FB6400405E2__SUNJSSE_CIPHER_SUITES)

## Proxy Configuration

Proxies need TLS configured in two directions: for clients connecting to the proxy, and for the proxy connecting to brokers.

```properties

# For clients connecting to the proxy
tlsEnabledInProxy=true
tlsCertificateFilePath=/path/to/broker.cert.pem
tlsKeyFilePath=/path/to/broker.key-pk8.pem
tlsTrustCertsFilePath=/path/to/ca.cert.pem

# For the proxy to connect to brokers
tlsEnabledWithBroker=true
brokerClientTrustCertsFilePath=/path/to/ca.cert.pem

```

## Client configuration

When you enable TLS transport encryption, you need to configure the client to use `https://` and port 8443 for the web service URL, and `pulsar+ssl://` and port 6651 for the broker service URL.

As the server certificate that you generated above does not belong to any of the default trust chains, you also need to either specify the path to the **trust cert** (recommended), or tell the client to allow untrusted server certs.

### Hostname verification

Hostname verification is a TLS security feature whereby a client can refuse to connect to a server if the "CommonName" does not match the hostname to which the client is connecting. By default, Pulsar clients disable hostname verification, as it requires that each broker has a DNS record and a unique cert.

Moreover, as the administrator has full control of the certificate authority, a bad actor is unlikely to be able to pull off a man-in-the-middle attack. "allowInsecureConnection" allows the client to connect to servers whose cert has not been signed by an approved CA.
The client disables "allowInsecureConnection" by default, and you should always keep it disabled in production environments. As long as "allowInsecureConnection" is disabled, a man-in-the-middle attack requires that the attacker has access to the CA.

One scenario where you may want to enable hostname verification is where you have multiple proxy nodes behind a VIP, and the VIP has a DNS record, for example, pulsar.mycompany.com. In this case, you can generate a TLS cert with pulsar.mycompany.com as the "CommonName," and then enable hostname verification on the client.

The examples below show how hostname verification (disabled by default) is configured for the CLI tools and the Java, Python, C++, Node.js, and C# clients.

### CLI tools

[Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-cli-tools.md#pulsar-admin), [`pulsar-perf`](reference-cli-tools.md#pulsar-perf), and [`pulsar-client`](reference-cli-tools.md#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation.

You need to add the following parameters to that file to use TLS transport with the CLI tools of Pulsar:

```properties

webServiceUrl=https://broker.example.com:8443/
brokerServiceUrl=pulsar+ssl://broker.example.com:6651/
useTls=true
tlsAllowInsecureConnection=false
tlsTrustCertsFilePath=/path/to/ca.cert.pem
tlsEnableHostnameVerification=false

```

#### Java client

```java

import org.apache.pulsar.client.api.PulsarClient;

PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar+ssl://broker.example.com:6651/")
    .enableTls(true)
    .tlsTrustCertsFilePath("/path/to/ca.cert.pem")
    .enableTlsHostnameVerification(false) // false by default, in any case
    .allowTlsInsecureConnection(false) // false by default, in any case
    .build();

```

#### Python client

```python

from pulsar import Client

client = Client("pulsar+ssl://broker.example.com:6651/",
                tls_hostname_verification=False,
                tls_trust_certs_file_path="/path/to/ca.cert.pem",
                tls_allow_insecure_connection=False)  # defaults to false from v2.2.0 onwards

```

#### C++ client

```c++

#include <pulsar/Client.h>

ClientConfiguration config = ClientConfiguration();
config.setUseTls(true);  // shouldn't be needed soon
config.setTlsTrustCertsFilePath(caPath);
config.setTlsAllowInsecureConnection(false);
config.setAuth(pulsar::AuthTls::create(clientPublicKeyPath, clientPrivateKeyPath));
config.setValidateHostName(false);

```

#### Node.js client

```JavaScript

const Pulsar = require('pulsar-client');

(async () => {
  const client = new Pulsar.Client({
    serviceUrl: 'pulsar+ssl://broker.example.com:6651/',
    tlsTrustCertsFilePath: '/path/to/ca.cert.pem',
    useTls: true,
    tlsValidateHostname: false,
    tlsAllowInsecureConnection: false,
  });
})();

```

#### C# client

```c#

var certificate = new X509Certificate2("ca.cert.pem");
var client = PulsarClient.Builder()
    .TrustedCertificateAuthority(certificate) //If the CA is not trusted on the host, you can add it explicitly.
    .VerifyCertificateAuthority(true) //Default is 'true'
    .VerifyCertificateName(false) //Default is 'false'
    .Build();

```

> Note that `VerifyCertificateName` refers to the configuration of hostname verification in the C# client.
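If hostname verification fails unexpectedly, it usually helps to inspect the name the certificate actually carries — a quick check against the server certificate generated above:

```bash

openssl x509 -in broker.cert.pem -noout -subject

```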
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/security-token-admin.md b/site2/website/versioned_docs/version-2.8.0-deprecated/security-token-admin.md
deleted file mode 100644
index a265f6320d28fe..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/security-token-admin.md
+++ /dev/null
@@ -1,183 +0,0 @@
---
id: security-token-admin
title: Token authentication admin
sidebar_label: "Token authentication admin"
original_id: security-token-admin
---

## Token Authentication Overview

Pulsar supports authenticating clients using security tokens that are based on [JSON Web Tokens](https://jwt.io/introduction/) ([RFC-7519](https://tools.ietf.org/html/rfc7519)).

Tokens are used to identify a Pulsar client and to associate it with some "principal" (or "role") that is then granted permissions to perform certain actions (for example, publish to or consume from a topic).

A user is typically given a token string by an administrator (or some automated service).

The compact representation of a signed JWT is a string that looks like:

```

eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY

```

The application specifies the token when creating the client instance. Alternatively, you can pass a "token supplier": a function that returns the token when the client library needs one.

> #### Always use TLS transport encryption
> Sending a token is equivalent to sending a password over the wire. It is strongly recommended to always use TLS encryption when talking to the Pulsar service. See [Transport Encryption using TLS](security-tls-transport.md).

## Secret vs Public/Private keys

JWT supports two different kinds of keys for generating and validating tokens:

* Symmetric: there is a single ***secret*** key that is used both to generate and validate tokens.
* Asymmetric: there is a pair of keys.
  - The ***private*** key is used to generate tokens.
  - The ***public*** key is used to validate tokens.

### Secret key

When using a secret key, the administrator creates the key and uses it to generate the client tokens. The same key is also configured on the brokers so that they can validate the clients.

#### Creating a secret key

> The output file is generated in the root of your Pulsar installation directory. You can also provide an absolute path for the output file.

```shell

$ bin/pulsar tokens create-secret-key --output my-secret.key

```

To generate a base64-encoded secret key:

```shell

$ bin/pulsar tokens create-secret-key --output /opt/my-secret.key --base64

```

### Public/Private keys

With public/private keys, you need to create a key pair. Pulsar supports all algorithms supported by the Java JWT library shown [here](https://github.com/jwtk/jjwt#signature-algorithms-keys).

#### Creating a key pair

> The output files are generated in the root of your Pulsar installation directory. You can also provide absolute paths for the output files.

```shell

$ bin/pulsar tokens create-key-pair --output-private-key my-private.key --output-public-key my-public.key

```

* Store `my-private.key` in a safe location; it is only used by the administrator to generate new tokens.
* Distribute `my-public.key` to all Pulsar brokers. This file can be shared publicly without any security concern.
## Generating tokens

A token is the credential associated with a user. The association is done through the "principal", or "role". In JWT terms, this field is typically referred to as the **subject**, though it is exactly the same concept.

The generated token is therefore required to have the **subject** field set.

```shell

$ bin/pulsar tokens create --secret-key file:///path/to/my-secret.key \
    --subject test-user

```

This command prints the token string on stdout.

Similarly, you can create a token by passing the "private" key:

```shell

$ bin/pulsar tokens create --private-key file:///path/to/my-private.key \
    --subject test-user

```

Finally, a token can also be created with a pre-defined TTL. After that time, the token is automatically invalidated.

```shell

$ bin/pulsar tokens create --secret-key file:///path/to/my-secret.key \
    --subject test-user \
    --expiry-time 1y

```

## Authorization

The token itself does not carry any permissions; these are determined by the authorization engine. Once the token is created, you can grant permissions for the associated role to perform certain actions. For example:

```shell

$ bin/pulsar-admin namespaces grant-permission my-tenant/my-namespace \
    --role test-user \
    --actions produce,consume

```

## Enabling Token Authentication ...

### ... on Brokers

To configure brokers to authenticate clients, put the following in `broker.conf`:

```properties

# Configuration to enable authentication and authorization
authenticationEnabled=true
authorizationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken

# If using secret key (Note: key files must be DER-encoded)
tokenSecretKey=file:///path/to/secret.key
# The key can also be passed inline:
# tokenSecretKey=data:;base64,FLFyW0oLJ2Fi22KKCm21J18mbAdztfSHN/lAT5ucEKU=

# If using public/private (Note: key files must be DER-encoded)
# tokenPublicKey=file:///path/to/public.key

```

### ... on Proxies

The proxy uses its own token when talking to brokers. The role associated with this token should be configured in the ``proxyRoles`` setting of the brokers; see the [authorization guide](security-authorization.md) for more details.

To configure proxies to authenticate clients, put the following in `proxy.conf`:

```properties

# For clients connecting to the proxy
authenticationEnabled=true
authorizationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken
tokenSecretKey=file:///path/to/secret.key

# For the proxy to connect to brokers
brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken
brokerClientAuthenticationParameters={"token":"eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0LXVzZXIifQ.9OHgE9ZUDeBTZs7nSMEFIuGNEX18FLR3qvy8mqxSxXw"}
# Or, alternatively, read token from file
# brokerClientAuthenticationParameters=file:///path/to/proxy-token.txt

```
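To sanity-check a generated token before distributing it, newer Pulsar releases ship a validation subcommand. The flag names below are an assumption and may differ between versions, so confirm them with `bin/pulsar tokens validate --help`:

```shell

$ bin/pulsar tokens validate --secret-key file:///path/to/my-secret.key -i "eyJhbGciOiJIUzI1NiJ9..."

```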
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/sql-deployment-configurations.md b/site2/website/versioned_docs/version-2.8.0-deprecated/sql-deployment-configurations.md
deleted file mode 100644
index 02d8bc78f6cb9d..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/sql-deployment-configurations.md
+++ /dev/null
@@ -1,199 +0,0 @@
---
id: sql-deployment-configurations
title: Pulsar SQL configuration and deployment
sidebar_label: "Configuration and deployment"
original_id: sql-deployment-configurations
---

You can configure the Presto Pulsar connector and deploy a cluster with the following instructions.

## Configure Presto Pulsar Connector

You can configure the Presto Pulsar connector in the `${project.root}/conf/presto/catalog/pulsar.properties` properties file. The configuration for the connector and the default values are as follows.

```properties

# name of the connector to be displayed in the catalog
connector.name=pulsar

# the URL of the Pulsar broker service
pulsar.web-service-url=http://localhost:8080

# URI of the ZooKeeper cluster
pulsar.zookeeper-uri=localhost:2181

# minimum number of entries to read at a single time
pulsar.entry-read-batch-size=100

# default number of splits to use per query
pulsar.target-num-splits=4

```

You can connect Presto to a Pulsar cluster with multiple hosts. To configure multiple hosts for brokers, add multiple URLs to `pulsar.web-service-url`. To configure multiple hosts for ZooKeeper, add multiple URIs to `pulsar.zookeeper-uri`. The following is an example.

```

pulsar.web-service-url=http://localhost:8080,localhost:8081,localhost:8082
pulsar.zookeeper-uri=localhost1,localhost2:2181

```

**Note: by default, Pulsar SQL does not get the last message in a topic**. This is by design and controlled by settings: by default, the BookKeeper LAC (LastAddConfirmed) only advances when subsequent entries are added, so if no subsequent entry is added, the last written entry is not visible to readers until the ledger is closed. This is not a problem for the Pulsar broker, which uses the managed ledger abstraction, but Pulsar SQL reads directly from the BookKeeper ledgers.

If you want to get the last message in a topic, set the following configurations:

1. For the broker configuration, set `bookkeeperExplicitLacIntervalInMills` > 0 in `broker.conf` or `standalone.conf`.

2. For the Presto configuration, set `pulsar.bookkeeper-explicit-interval` > 0 and `pulsar.bookkeeper-use-v2-protocol=false`.

However, using the BookKeeper V3 protocol introduces additional GC overhead to BookKeeper, as the V3 protocol uses Protobuf.

## Query data from existing Presto clusters

If you already have a Presto cluster, you can copy the Presto Pulsar connector plugin to your existing cluster. Download the archived plugin package with the following command.

```bash

$ wget pulsar:binary_release_url

```

## Deploy a new cluster

Since Pulsar SQL is powered by [Trino (formerly Presto SQL)](https://trino.io), the configuration for deployment is the same for the Pulsar SQL worker.

:::note

For how to set up a standalone single-node environment, refer to [Query data](sql-getting-started.md).

:::

You can use the same CLI args as the Presto launcher.
```bash

$ ./bin/pulsar sql-worker --help
Usage: launcher [options] command

Commands: run, start, stop, restart, kill, status

Options:
    -h, --help            show this help message and exit
    -v, --verbose         Run verbosely
    --etc-dir=DIR         Defaults to INSTALL_PATH/etc
    --launcher-config=FILE
                          Defaults to INSTALL_PATH/bin/launcher.properties
    --node-config=FILE    Defaults to ETC_DIR/node.properties
    --jvm-config=FILE     Defaults to ETC_DIR/jvm.config
    --config=FILE         Defaults to ETC_DIR/config.properties
    --log-levels-file=FILE
                          Defaults to ETC_DIR/log.properties
    --data-dir=DIR        Defaults to INSTALL_PATH
    --pid-file=FILE       Defaults to DATA_DIR/var/run/launcher.pid
    --launcher-log-file=FILE
                          Defaults to DATA_DIR/var/log/launcher.log (only in
                          daemon mode)
    --server-log-file=FILE
                          Defaults to DATA_DIR/var/log/server.log (only in
                          daemon mode)
    -D NAME=VALUE         Set a Java system property

```

The default configuration for the cluster is located in `${project.root}/conf/presto`. You can customize your deployment by modifying the default configuration.

You can set the worker to read from a different configuration directory, or set a different directory to write data.

```bash

$ ./bin/pulsar sql-worker run --etc-dir /tmp/incubator-pulsar/conf/presto --data-dir /tmp/presto-1

```

You can start the worker as a daemon process.

```bash

$ ./bin/pulsar sql-worker start

```

### Deploy a cluster on multiple nodes

You can deploy a Pulsar SQL cluster or Presto cluster on multiple nodes. The following example shows how to deploy a three-node cluster.

1. Copy the Pulsar binary distribution to three nodes.

The first node runs as the Presto coordinator. The minimal configuration in the `${project.root}/conf/presto/config.properties` file is as follows.

```properties

coordinator=true
node-scheduler.include-coordinator=true
http-server.http.port=8080
query.max-memory=50GB
query.max-memory-per-node=1GB
discovery-server.enabled=true
discovery.uri=

```

The other two nodes serve as worker nodes; you can use the following configuration for them.

```properties

coordinator=false
http-server.http.port=8080
query.max-memory=50GB
query.max-memory-per-node=1GB
discovery.uri=

```

2. Modify the `pulsar.web-service-url` and `pulsar.zookeeper-uri` configurations in the `${project.root}/conf/presto/catalog/pulsar.properties` file accordingly on all three nodes.

3. Start the coordinator node.

```

$ ./bin/pulsar sql-worker run

```

4. Start the worker nodes.

```

$ ./bin/pulsar sql-worker run

```

5. Start the SQL CLI and check the status of your cluster.

```bash

$ ./bin/pulsar sql --server <coordinator-url>

```

6. Check the status of your nodes.

```bash

presto> SELECT * FROM system.runtime.nodes;
 node_id |        http_uri         | node_version | coordinator | state
---------+-------------------------+--------------+-------------+--------
 1       | http://192.168.2.1:8081 | testversion  | true        | active
 3       | http://192.168.2.2:8081 | testversion  | false       | active
 2       | http://192.168.2.3:8081 | testversion  | false       | active

```

For more information about deployment in Presto, refer to [Presto deployment](https://trino.io/docs/current/installation/deployment.html).
:::note

The broker does not advance the LAC, so when Pulsar SQL bypasses the broker to query data, it can only read entries up to the LAC that all the bookies have learned. You can enable periodic LAC writes on the broker by setting `bookkeeperExplicitLacIntervalInMills` in `broker.conf`.

:::

diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/sql-getting-started.md b/site2/website/versioned_docs/version-2.8.0-deprecated/sql-getting-started.md
deleted file mode 100644
index 8a5cd7199b365a..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/sql-getting-started.md
+++ /dev/null
@@ -1,187 +0,0 @@
---
id: sql-getting-started
title: Query data with Pulsar SQL
sidebar_label: "Query data"
original_id: sql-getting-started
---

Before querying data in Pulsar, you need to install Pulsar and the built-in connectors.

## Requirements

1. Install [Pulsar](getting-started-standalone.md#install-pulsar-standalone).
2. Install Pulsar [built-in connectors](getting-started-standalone.md#install-builtin-connectors-optional).

## Query data in Pulsar

To query data in Pulsar with Pulsar SQL, complete the following steps.

1. Start a Pulsar standalone cluster.

```bash

./bin/pulsar standalone

```

2. Start a Pulsar SQL worker.

```bash

./bin/pulsar sql-worker run

```

3. After the Pulsar standalone cluster and the SQL worker are initialized, run the SQL CLI.

```bash

./bin/pulsar sql

```

4. Test with SQL commands.

```bash

presto> show catalogs;
 Catalog
---------
 pulsar
 system
(2 rows)

Query 20180829_211752_00004_7qpwh, FINISHED, 1 node
Splits: 19 total, 19 done (100.00%)
0:00 [0 rows, 0B] [0 rows/s, 0B/s]


presto> show schemas in pulsar;
        Schema
-----------------------
 information_schema
 public/default
 public/functions
 sample/standalone/ns1
(4 rows)

Query 20180829_211818_00005_7qpwh, FINISHED, 1 node
Splits: 19 total, 19 done (100.00%)
0:00 [4 rows, 89B] [21 rows/s, 471B/s]


presto> show tables in pulsar."public/default";
 Table
-------
(0 rows)

Query 20180829_211839_00006_7qpwh, FINISHED, 1 node
Splits: 19 total, 19 done (100.00%)
0:00 [0 rows, 0B] [0 rows/s, 0B/s]

```

Since there is no data in Pulsar yet, no records are returned.

5. Start the built-in connector _DataGeneratorSource_ and ingest some mock data.

```bash

./bin/pulsar-admin sources create --name generator --destinationTopicName generator_test --source-type data-generator

```

You can then query a topic in the namespace "public/default".

```bash

presto> show tables in pulsar."public/default";
     Table
----------------
 generator_test
(1 row)

Query 20180829_213202_00000_csyeu, FINISHED, 1 node
Splits: 19 total, 19 done (100.00%)
0:02 [1 rows, 38B] [0 rows/s, 17B/s]

```

You can now query the mock data within the topic "generator_test".
```bash

presto> select * from pulsar."public/default".generator_test;

 firstname | middlename | lastname  |            email             | username | password | telephonenumber | age |             companyemail            | nationalidentitycardnumber |
-----------+------------+-----------+------------------------------+----------+----------+-----------------+-----+-------------------------------------+----------------------------+
 Genesis   | Katherine  | Wiley     | genesis.wiley@gmail.com      | genesisw | y9D2dtU3 | 959-197-1860    |  71 | genesis.wiley@interdemconsulting.eu | 880-58-9247                |
 Brayden   |            | Stanton   | brayden.stanton@yahoo.com    | braydens | ZnjmhXik | 220-027-867     |  81 | brayden.stanton@supermemo.eu        | 604-60-7069                |
 Benjamin  | Julian     | Velasquez | benjamin.velasquez@yahoo.com | benjaminv| 8Bc7m3eb | 298-377-0062    |  21 | benjamin.velasquez@hostesltd.biz    | 213-32-5882                |
 Michael   | Thomas     | Donovan   | donovan@mail.com             | michaeld | OqBm9MLs | 078-134-4685    |  55 | michael.donovan@memortech.eu        | 443-30-3442                |
 Brooklyn  | Avery      | Roach     | brooklynroach@yahoo.com      | broach   | IxtBLafO | 387-786-2998    |  68 | brooklyn.roach@warst.biz            | 085-88-3973                |
 Skylar    |            | Bradshaw  | skylarbradshaw@yahoo.com     | skylarb  | p6eC6cKy | 210-872-608     |  96 | skylar.bradshaw@flyhigh.eu          | 453-46-0334                |
.
.
.

```

## Query your own data

If you want to query your own data, you need to ingest it first. You can write a simple producer that writes custom-defined data to Pulsar. The following is an example.

```java

import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.impl.schema.AvroSchema;

public class TestProducer {

    public static class Foo {
        private int field1 = 1;
        private String field2;
        private long field3;

        public Foo() {
        }

        public int getField1() {
            return field1;
        }

        public void setField1(int field1) {
            this.field1 = field1;
        }

        public String getField2() {
            return field2;
        }

        public void setField2(String field2) {
            this.field2 = field2;
        }

        public long getField3() {
            return field3;
        }

        public void setField3(long field3) {
            this.field3 = field3;
        }
    }

    public static void main(String[] args) throws Exception {
        PulsarClient pulsarClient = PulsarClient.builder().serviceUrl("pulsar://localhost:6650").build();
        Producer<Foo> producer = pulsarClient.newProducer(AvroSchema.of(Foo.class)).topic("test_topic").create();

        for (int i = 0; i < 1000; i++) {
            Foo foo = new Foo();
            foo.setField1(i);
            foo.setField2("foo" + i);
            foo.setField3(System.currentTimeMillis());
            producer.newMessage().value(foo).send();
        }
        producer.close();
        pulsarClient.close();
    }
}

```
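After running the producer, the new topic becomes queryable through Pulsar SQL. A quick way to check from the shell — a sketch that assumes the Presto CLI's `--execute` flag is passed through by the `pulsar sql` wrapper:

```bash

$ ./bin/pulsar sql --execute 'select * from pulsar."public/default".test_topic limit 10;'

```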
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/sql-overview.md b/site2/website/versioned_docs/version-2.8.0-deprecated/sql-overview.md
deleted file mode 100644
index 0235d0256c5f19..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/sql-overview.md
+++ /dev/null
@@ -1,18 +0,0 @@
---
id: sql-overview
title: Pulsar SQL Overview
sidebar_label: "Overview"
original_id: sql-overview
---

Apache Pulsar is used to store streams of event data, and the event data is structured with predefined fields. With the implementation of the [Schema Registry](schema-get-started.md), you can store structured data in Pulsar and query the data by using [Trino (formerly Presto SQL)](https://trino.io/).

As the core of Pulsar SQL, the Presto Pulsar connector enables Presto workers within a Presto cluster to query data from Pulsar.

![The Pulsar SQL architecture](/assets/pulsar-sql-arch-2.png)

The query performance is efficient and highly scalable, because Pulsar adopts a [two-level segment-based architecture](concepts-architecture-overview.md#apache-bookkeeper).

Topics in Pulsar are stored as segments in [Apache BookKeeper](https://bookkeeper.apache.org/). Each topic segment is replicated to some BookKeeper nodes, which enables concurrent reads and high read throughput. You can configure the number of BookKeeper nodes; the default number is `3`. In the Presto Pulsar connector, data is read directly from BookKeeper, so Presto workers can read concurrently from a horizontally scalable number of BookKeeper nodes.

![The Pulsar SQL architecture](/assets/pulsar-sql-arch-1.png)

diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/sql-rest-api.md b/site2/website/versioned_docs/version-2.8.0-deprecated/sql-rest-api.md
deleted file mode 100644
index c92fd62f7d8703..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/sql-rest-api.md
+++ /dev/null
@@ -1,192 +0,0 @@
---
id: sql-rest-api
title: Pulsar SQL REST APIs
sidebar_label: "REST APIs"
original_id: sql-rest-api
---

This section lists resources that make up the Presto REST API v1.

## Request for Presto services

All requests for Presto services should use version v1 of the Presto REST API.

To request services, use the explicit URL `http://presto.service:8081/v1`. You need to replace `presto.service:8081` with your real Presto address before sending requests.

`POST` requests require the `X-Presto-User` header. If you use authentication, you must use the same `username` that is specified in the authentication configuration. If you do not use authentication, you can specify anything for `username`.

```properties

X-Presto-User: username

```

For more information about headers, refer to [PrestoHeaders](https://github.com/trinodb/trino).

## Schema

You can pass the SQL statement in the HTTP body. All data is received as a JSON document that might contain a `nextUri` link. If the received JSON document contains a `nextUri` link, the request continues with the `nextUri` link until the received data does not contain a `nextUri` link. If no error is returned, the query completed successfully. If an `error` field appears in `stats`, the query failed.

The following is an example of `show catalogs`. The query continues until the received JSON document does not contain a `nextUri` link. Since no `error` is displayed in `stats`, it means that the query completed successfully.
- -```powershell - -➜ ~ curl --header "X-Presto-User: test-user" --request POST --data 'show catalogs' http://localhost:8081/v1/statement -{ - "infoUri" : "http://localhost:8081/ui/query.html?20191113_033653_00006_dg6hb", - "stats" : { - "queued" : true, - "nodes" : 0, - "userTimeMillis" : 0, - "cpuTimeMillis" : 0, - "wallTimeMillis" : 0, - "processedBytes" : 0, - "processedRows" : 0, - "runningSplits" : 0, - "queuedTimeMillis" : 0, - "queuedSplits" : 0, - "completedSplits" : 0, - "totalSplits" : 0, - "scheduled" : false, - "peakMemoryBytes" : 0, - "state" : "QUEUED", - "elapsedTimeMillis" : 0 - }, - "id" : "20191113_033653_00006_dg6hb", - "nextUri" : "http://localhost:8081/v1/statement/20191113_033653_00006_dg6hb/1" -} - -➜ ~ curl http://localhost:8081/v1/statement/20191113_033653_00006_dg6hb/1 -{ - "infoUri" : "http://localhost:8081/ui/query.html?20191113_033653_00006_dg6hb", - "nextUri" : "http://localhost:8081/v1/statement/20191113_033653_00006_dg6hb/2", - "id" : "20191113_033653_00006_dg6hb", - "stats" : { - "state" : "PLANNING", - "totalSplits" : 0, - "queued" : false, - "userTimeMillis" : 0, - "completedSplits" : 0, - "scheduled" : false, - "wallTimeMillis" : 0, - "runningSplits" : 0, - "queuedSplits" : 0, - "cpuTimeMillis" : 0, - "processedRows" : 0, - "processedBytes" : 0, - "nodes" : 0, - "queuedTimeMillis" : 1, - "elapsedTimeMillis" : 2, - "peakMemoryBytes" : 0 - } -} - -➜ ~ curl http://localhost:8081/v1/statement/20191113_033653_00006_dg6hb/2 -{ - "id" : "20191113_033653_00006_dg6hb", - "data" : [ - [ - "pulsar" - ], - [ - "system" - ] - ], - "infoUri" : "http://localhost:8081/ui/query.html?20191113_033653_00006_dg6hb", - "columns" : [ - { - "typeSignature" : { - "rawType" : "varchar", - "arguments" : [ - { - "kind" : "LONG_LITERAL", - "value" : 6 - } - ], - "literalArguments" : [], - "typeArguments" : [] - }, - "name" : "Catalog", - "type" : "varchar(6)" - } - ], - "stats" : { - "wallTimeMillis" : 104, - "scheduled" : true, - "userTimeMillis" : 14, - "progressPercentage" : 100, - "totalSplits" : 19, - "nodes" : 1, - "cpuTimeMillis" : 16, - "queued" : false, - "queuedTimeMillis" : 1, - "state" : "FINISHED", - "peakMemoryBytes" : 0, - "elapsedTimeMillis" : 111, - "processedBytes" : 0, - "processedRows" : 0, - "queuedSplits" : 0, - "rootStage" : { - "cpuTimeMillis" : 1, - "runningSplits" : 0, - "state" : "FINISHED", - "completedSplits" : 1, - "subStages" : [ - { - "cpuTimeMillis" : 14, - "runningSplits" : 0, - "state" : "FINISHED", - "completedSplits" : 17, - "subStages" : [ - { - "wallTimeMillis" : 7, - "subStages" : [], - "stageId" : "2", - "done" : true, - "nodes" : 1, - "totalSplits" : 1, - "processedBytes" : 22, - "processedRows" : 2, - "queuedSplits" : 0, - "userTimeMillis" : 1, - "cpuTimeMillis" : 1, - "runningSplits" : 0, - "state" : "FINISHED", - "completedSplits" : 1 - } - ], - "wallTimeMillis" : 92, - "nodes" : 1, - "done" : true, - "stageId" : "1", - "userTimeMillis" : 12, - "processedRows" : 2, - "processedBytes" : 51, - "queuedSplits" : 0, - "totalSplits" : 17 - } - ], - "wallTimeMillis" : 5, - "done" : true, - "nodes" : 1, - "stageId" : "0", - "userTimeMillis" : 1, - "processedRows" : 2, - "processedBytes" : 22, - "totalSplits" : 1, - "queuedSplits" : 0 - }, - "runningSplits" : 0, - "completedSplits" : 19 - } -} - -``` - -:::note - -Since the response data is not in sync with the query state from the perspective of clients, you cannot rely on the response data to determine whether the query completes. 
:::

For more information about the Presto REST API, refer to [Presto HTTP Protocol](https://github.com/prestosql/presto/wiki/HTTP-Protocol).

diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/standalone-docker.md b/site2/website/versioned_docs/version-2.8.0-deprecated/standalone-docker.md
deleted file mode 100644
index 1710ec819d7a4e..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/standalone-docker.md
+++ /dev/null
@@ -1,214 +0,0 @@
---
id: standalone-docker
title: Set up a standalone Pulsar in Docker
sidebar_label: "Run Pulsar in Docker"
original_id: standalone-docker
---

For local development and testing, you can run Pulsar in standalone mode on your own machine within a Docker container.

If you have not installed Docker, download the [Community edition](https://www.docker.com/community-edition) and follow the instructions for your OS.

## Start Pulsar in Docker

* For MacOS, Linux, and Windows:

  ```shell

  $ docker run -it -p 6650:6650 -p 8080:8080 --mount source=pulsardata,target=/pulsar/data --mount source=pulsarconf,target=/pulsar/conf apachepulsar/pulsar:@pulsar:version@ bin/pulsar standalone

  ```

A few things to note about this command:

* The data, metadata, and configuration are persisted on Docker volumes so that the container does not start "fresh" every time it is restarted. For details on the volumes, you can use `docker volume inspect <volume-name>`.
* For Docker on Windows, make sure to configure it to use Linux containers.

If you start Pulsar successfully, you will see `INFO`-level log messages like this:

```

08:18:30.970 [main] INFO org.apache.pulsar.broker.web.WebService - HTTP Service started at http://0.0.0.0:8080
...
07:53:37.322 [main] INFO org.apache.pulsar.broker.PulsarService - messaging service is ready, bootstrap service port = 8080, broker url= pulsar://localhost:6650, cluster=standalone, configs=org.apache.pulsar.broker.ServiceConfiguration@98b63c1
...

```

:::tip

When you start a local standalone cluster, a `public/default` namespace is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics).

:::

## Use Pulsar in Docker

Pulsar offers client libraries for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python.md) and [C++](client-libraries-cpp.md). If you're running a local standalone cluster, you can use one of these root URLs to interact with your cluster:

* `pulsar://localhost:6650`
* `http://localhost:8080`

The following example guides you through getting started with Pulsar quickly by using the [Python client API](client-libraries-python.md).
- -Install the Pulsar Python client library directly from [PyPI](https://pypi.org/project/pulsar-client/): - -```shell - -$ pip install pulsar-client - -``` - -### Consume a message - -Create a consumer and subscribe to the topic: - -```python - -import pulsar - -client = pulsar.Client('pulsar://localhost:6650') -consumer = client.subscribe('my-topic', - subscription_name='my-sub') - -while True: - msg = consumer.receive() - print("Received message: '%s'" % msg.data()) - consumer.acknowledge(msg) - -client.close() - -``` - -### Produce a message - -Now start a producer to send some test messages: - -```python - -import pulsar - -client = pulsar.Client('pulsar://localhost:6650') -producer = client.create_producer('my-topic') - -for i in range(10): - producer.send(('hello-pulsar-%d' % i).encode('utf-8')) - -client.close() - -``` - -## Get the topic statistics - -In Pulsar, you can use REST, Java, or command-line tools to control every aspect of the system. -For details on APIs, refer to [Admin API Overview](admin-api-overview.md). - -In the simplest example, you can use curl to probe the stats for a particular topic: - -```shell - -$ curl http://localhost:8080/admin/v2/persistent/public/default/my-topic/stats | python -m json.tool - -``` - -The output is something like this: - -```json - -{ - "msgRateIn": 0.0, - "msgThroughputIn": 0.0, - "msgRateOut": 1.8332950480217471, - "msgThroughputOut": 91.33142602871978, - "bytesInCounter": 7097, - "msgInCounter": 143, - "bytesOutCounter": 6607, - "msgOutCounter": 133, - "averageMsgSize": 0.0, - "msgChunkPublished": false, - "storageSize": 7097, - "backlogSize": 0, - "offloadedStorageSize": 0, - "publishers": [ - { - "accessMode": "Shared", - "msgRateIn": 0.0, - "msgThroughputIn": 0.0, - "averageMsgSize": 0.0, - "chunkedMessageRate": 0.0, - "producerId": 0, - "metadata": {}, - "address": "/127.0.0.1:35604", - "connectedSince": "2021-07-04T09:05:43.04788Z", - "clientVersion": "2.8.0", - "producerName": "standalone-2-5" - } - ], - "waitingPublishers": 0, - "subscriptions": { - "my-sub": { - "msgRateOut": 1.8332950480217471, - "msgThroughputOut": 91.33142602871978, - "bytesOutCounter": 6607, - "msgOutCounter": 133, - "msgRateRedeliver": 0.0, - "chunkedMessageRate": 0, - "msgBacklog": 0, - "backlogSize": 0, - "msgBacklogNoDelayed": 0, - "blockedSubscriptionOnUnackedMsgs": false, - "msgDelayed": 0, - "unackedMessages": 0, - "type": "Exclusive", - "activeConsumerName": "3c544f1daa", - "msgRateExpired": 0.0, - "totalMsgExpired": 0, - "lastExpireTimestamp": 0, - "lastConsumedFlowTimestamp": 1625389101290, - "lastConsumedTimestamp": 1625389546070, - "lastAckedTimestamp": 1625389546162, - "lastMarkDeleteAdvancedTimestamp": 1625389546163, - "consumers": [ - { - "msgRateOut": 1.8332950480217471, - "msgThroughputOut": 91.33142602871978, - "bytesOutCounter": 6607, - "msgOutCounter": 133, - "msgRateRedeliver": 0.0, - "chunkedMessageRate": 0.0, - "consumerName": "3c544f1daa", - "availablePermits": 867, - "unackedMessages": 0, - "avgMessagesPerEntry": 6, - "blockedConsumerOnUnackedMsgs": false, - "lastAckedTimestamp": 1625389546162, - "lastConsumedTimestamp": 1625389546070, - "metadata": {}, - "address": "/127.0.0.1:35472", - "connectedSince": "2021-07-04T08:58:21.287682Z", - "clientVersion": "2.8.0" - } - ], - "isDurable": true, - "isReplicated": false, - "allowOutOfOrderDelivery": false, - "consumersAfterMarkDeletePosition": {}, - "nonContiguousDeletedMessagesRanges": 0, - "nonContiguousDeletedMessagesRangesSerializedSize": 0, - "durable": true, - "replicated": 
false
        }
    },
    "replication": {},
    "deduplicationStatus": "Disabled",
    "nonContiguousDeletedMessagesRanges": 0,
    "nonContiguousDeletedMessagesRangesSerializedSize": 0
}

```

diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/standalone.md b/site2/website/versioned_docs/version-2.8.0-deprecated/standalone.md
deleted file mode 100644
index 25afa11a91b117..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/standalone.md
+++ /dev/null
@@ -1,271 +0,0 @@
---
id: standalone
title: Set up a standalone Pulsar locally
sidebar_label: "Run Pulsar locally"
original_id: standalone
---

For local development and testing, you can run Pulsar in standalone mode on your machine. The standalone mode includes a Pulsar broker and the necessary ZooKeeper and BookKeeper components running inside of a single Java Virtual Machine (JVM) process.

> #### Pulsar in production?
> If you're looking to run a full production Pulsar installation, see the [Deploying a Pulsar instance](deploy-bare-metal.md) guide.

## Install Pulsar standalone

This tutorial guides you through every step of the installation process.

### System requirements

Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install a 64-bit JRE/JDK 8 or later version; JRE/JDK 11 is recommended.

:::tip

By default, Pulsar allocates 2G JVM heap memory to start. It can be changed in the `conf/pulsar_env.sh` file under `PULSAR_MEM`. These are extra options passed to the JVM.

:::

:::note

Broker is only supported on 64-bit JVM.

:::

### Install Pulsar using binary release

To get started with Pulsar, download a binary tarball release in one of the following ways:

* download from the Apache mirror (Pulsar @pulsar:version@ binary release)

* download from the Pulsar [downloads page](pulsar:download_page_url)

* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)

* use [wget](https://www.gnu.org/software/wget):

  ```shell

  $ wget pulsar:binary_release_url

  ```

After you download the tarball, untar it and use the `cd` command to navigate to the resulting directory:

```bash

$ tar xvfz apache-pulsar-@pulsar:version@-bin.tar.gz
$ cd apache-pulsar-@pulsar:version@

```

#### What your package contains

The Pulsar binary package initially contains the following directories:

Directory | Contains
:---------|:--------
`bin` | Pulsar's command-line tools, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/).
`conf` | Configuration files for Pulsar, including [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more.
`examples` | A Java JAR file containing a [Pulsar Functions](functions-overview.md) example.
`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files used by Pulsar.
`licenses` | License files, in the `.txt` form, for various components of the Pulsar [codebase](https://github.com/apache/pulsar).

These directories are created once you begin running Pulsar.

Directory | Contains
:---------|:--------
`data` | The data storage directory used by ZooKeeper and BookKeeper.
`instances` | Artifacts created for [Pulsar Functions](functions-overview.md).
`logs` | Logs created by the installation.
:::tip

If you want to use builtin connectors and tiered storage offloaders, you can install them according to the following instructions:

* [Install builtin connectors (optional)](#install-builtin-connectors-optional)
* [Install tiered storage offloaders (optional)](#install-tiered-storage-offloaders-optional)

Otherwise, skip this step and perform the next step, [Start Pulsar standalone](#start-pulsar-standalone). Pulsar can be successfully installed without installing builtin connectors and tiered storage offloaders.

:::

### Install builtin connectors (optional)

Since the `2.1.0-incubating` release, Pulsar has shipped a separate binary distribution containing all the `builtin` connectors. To enable those `builtin` connectors, you can download the connectors tarball release in one of the following ways:

* download from the Apache mirror Pulsar IO Connectors @pulsar:version@ release

* download from the Pulsar [downloads page](pulsar:download_page_url)

* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)

* use [wget](https://www.gnu.org/software/wget):

  ```shell

  $ wget pulsar:connector_release_url/{connector}-@pulsar:version@.nar

  ```

After you download the NAR file, copy the file to the `connectors` directory in the Pulsar directory. For example, if you download the `pulsar-io-aerospike-@pulsar:version@.nar` connector file, enter the following commands:

```bash

$ mkdir connectors
$ mv pulsar-io-aerospike-@pulsar:version@.nar connectors

$ ls connectors
pulsar-io-aerospike-@pulsar:version@.nar
...

```

:::note

* If you are running Pulsar in a bare metal cluster, make sure the `connectors` tarball is unzipped into every broker's Pulsar directory (or into every function-worker's Pulsar directory if you run a separate worker cluster for Pulsar Functions).
* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DC/OS](https://dcos.io/)), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. The `apachepulsar/pulsar-all` image already bundles [all builtin connectors](io-overview.md#working-with-connectors).

:::

### Install tiered storage offloaders (optional)

:::tip

Since the `2.2.0` release, Pulsar has shipped a separate binary distribution containing the tiered storage offloaders. To enable the tiered storage feature, follow the instructions below; otherwise skip this section.
-
-:::
-
-To get started with [tiered storage offloaders](concepts-tiered-storage.md), you need to download the offloaders tarball release on every broker node in one of the following ways:
-
-* download from the Apache mirror Pulsar Tiered Storage Offloaders @pulsar:version@ release
-
-* download from the Pulsar [downloads page](pulsar:download_page_url)
-
-* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
-
-* use [wget](https://www.gnu.org/software/wget):
-
-  ```shell
-
-  $ wget pulsar:offloader_release_url
-
-  ```
-
-After you download the tarball, untar the offloaders package and copy the offloaders directory into the Pulsar directory as `offloaders`:
-
-```bash
-
-$ tar xvfz apache-pulsar-offloaders-@pulsar:version@-bin.tar.gz
-
-// you will find a directory named `apache-pulsar-offloaders-@pulsar:version@` in the pulsar directory
-// then copy the offloaders
-
-$ mv apache-pulsar-offloaders-@pulsar:version@/offloaders offloaders
-
-$ ls offloaders
-tiered-storage-jcloud-@pulsar:version@.nar
-
-```
-
-For more information on how to configure tiered storage, see [Tiered storage cookbook](cookbooks-tiered-storage.md).
-
-:::note
-
-* If you are running Pulsar in a bare metal cluster, make sure that the `offloaders` tarball is unzipped in every broker's Pulsar directory.
-* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a Docker image (e.g. [K8S](deploy-kubernetes.md) or [DC/OS](https://dcos.io/)),
-you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. The `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders.
-
-:::
-
-## Start Pulsar standalone
-
-Once you have an up-to-date local copy of the release, you can start a local cluster using the [`pulsar`](reference-cli-tools.md#pulsar) command, which is stored in the `bin` directory, and specify that you want to start Pulsar in standalone mode.
-
-```bash
-
-$ bin/pulsar standalone
-
-```
-
-If you have started Pulsar successfully, you will see `INFO`-level log messages like this:
-
-```bash
-
-2017-06-01 14:46:29,192 - INFO - [main:WebSocketService@95] - Configuration Store cache started
-2017-06-01 14:46:29,192 - INFO - [main:AuthenticationService@61] - Authentication is disabled
-2017-06-01 14:46:29,192 - INFO - [main:WebSocketService@108] - Pulsar WebSocket Service started
-
-```
-
-:::tip
-
-* The service is running on your terminal, which is under your direct control. If you need to run other commands, open a new terminal window.
-
-:::
-
-You can also run the service as a background process using the `pulsar-daemon start standalone` command. For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon).
-
-> * By default, there is no encryption, authentication, or authorization configured. Apache Pulsar can be accessed from a remote server without any authorization. See the [Security Overview](security-overview.md) document to secure your deployment.
->
-> * When you start a local standalone cluster, a `public/default` [namespace](concepts-messaging.md#namespaces) is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics).
-
-## Use Pulsar standalone
-
-Pulsar provides a CLI tool called [`pulsar-client`](reference-cli-tools.md#pulsar-client). The pulsar-client tool enables you to consume and produce messages to a Pulsar topic in a running cluster.
-
-### Consume a message
-
-The following command consumes a message from the `my-topic` topic with the subscription name `first-subscription`:
-
-```bash
-
-$ bin/pulsar-client consume my-topic -s "first-subscription"
-
-```
-
-If the message has been successfully consumed, you will see a confirmation like the following in the `pulsar-client` logs:
-
-```
-
-09:56:55.566 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.MultiTopicsConsumerImpl - [TopicsConsumerFakeTopicNamee2df9] [first-subscription] Success subscribe new topic my-topic in topics consumer, partitions: 4, allTopicPartitionsNumber: 4
-
-```
-
-:::tip
-
-Notice that we did not explicitly create the `my-topic` topic before consuming from it. When you consume a message from a topic that does not yet exist, Pulsar creates that topic for you automatically. Producing a message to a topic that does not exist will automatically create that topic for you as well.
-
-:::
-
-### Produce a message
-
-The following command produces a message saying `hello-pulsar` to the `my-topic` topic:
-
-```bash
-
-$ bin/pulsar-client produce my-topic --messages "hello-pulsar"
-
-```
-
-If the message has been successfully published to the topic, you will see a confirmation like the following in the `pulsar-client` logs:
-
-```
-
-13:09:39.356 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully produced
-
-```
-
-## Stop Pulsar standalone
-
-Press `Ctrl+C` to stop a local standalone Pulsar.
-
-:::tip
-
-If the service runs as a background process using the `pulsar-daemon start standalone` command, then use the `pulsar-daemon stop standalone` command to stop the service.
-For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon).
-
-:::
-
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/tiered-storage-aliyun.md b/site2/website/versioned_docs/version-2.8.0-deprecated/tiered-storage-aliyun.md
deleted file mode 100644
index 5772f162b5e26d..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/tiered-storage-aliyun.md
+++ /dev/null
@@ -1,257 +0,0 @@
----
-id: tiered-storage-aliyun
-title: Use Aliyun OSS offloader with Pulsar
-sidebar_label: "Aliyun OSS offloader"
-original_id: tiered-storage-aliyun
----
-
-This chapter guides you through every step of installing and configuring the Aliyun Object Storage Service (OSS) offloader and using it with Pulsar.
-
-## Installation
-
-Follow the steps below to install the Aliyun OSS offloader.
-
-### Prerequisite
-
-- Pulsar: 2.8.0 or later versions
-
-### Step
-
-This example uses Pulsar 2.8.0.
-
-1. Download the Pulsar tarball, see [here](https://pulsar.apache.org/docs/en/standalone/#install-pulsar-using-binary-release).
-
-2. Download and untar the Pulsar offloaders package, then copy the Pulsar offloaders as `offloaders` into the Pulsar directory, see [here](https://pulsar.apache.org/docs/en/standalone/#install-tiered-storage-offloaders-optional).
-
-   **Output**
-
-   As shown from the output, Pulsar uses [Apache jclouds](https://jclouds.apache.org) to support [AWS S3](https://aws.amazon.com/s3/), [GCS](https://cloud.google.com/storage/), [Azure](https://portal.azure.com/#home), and [Aliyun OSS](https://www.aliyun.com/product/oss) for long-term storage.
-
-   ```
-
-   tiered-storage-file-system-2.8.0.nar
-   tiered-storage-jcloud-2.8.0.nar
-
-   ```
-
-   :::note
-
-   * If you are running Pulsar in a bare-metal cluster, make sure that the `offloaders` tarball is unzipped in every broker's Pulsar directory.
-   * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8s and DCOS), you can use the `apachepulsar/pulsar-all` image. The `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders.
-
-   :::
-
-## Configuration
-
-:::note
-
-Before offloading data from BookKeeper to Aliyun OSS, you need to configure some properties of the Aliyun OSS offload driver.
-
-:::
-
-You can also configure the Aliyun OSS offloader to run automatically, or trigger it manually.
-
-### Configure Aliyun OSS offloader driver
-
-You can configure the Aliyun OSS offloader driver in the configuration file `broker.conf` or `standalone.conf`.
-
-- **Required** configurations are as below.
-
-  | Required configuration | Description | Example value |
-  | --- | --- | --- |
-  | `managedLedgerOffloadDriver` | Offloader driver name, which is case-insensitive. | aliyun-oss |
-  | `offloadersDirectory` | Offloader directory | offloaders |
-  | `managedLedgerOffloadBucket` | Bucket | pulsar-topic-offload |
-  | `managedLedgerOffloadServiceEndpoint` | Endpoint | http://oss-cn-hongkong.aliyuncs.com |
-
-- **Optional** configurations are as below.
-
-  | Optional | Description | Example value |
-  | --- | --- | --- |
-  | `managedLedgerOffloadReadBufferSizeInBytes` | Size of block read | 1 MB |
-  | `managedLedgerOffloadMaxBlockSizeInBytes` | Size of block write | 64 MB |
-  | `managedLedgerMinLedgerRolloverTimeMinutes` | Minimum time between ledger rollover for a topic. <br /><br />**Note**: it is not recommended that you set this configuration in the production environment. | 2 |
-  | `managedLedgerMaxEntriesPerLedger` | Maximum number of entries to append to a ledger before triggering a rollover. <br /><br />**Note**: it is not recommended that you set this configuration in the production environment. | 5000 |
-
-#### Bucket (required)
-
-A bucket is a basic container that holds your data. Everything you store in Aliyun OSS must be contained in a bucket. You can use a bucket to organize your data and control access to your data, but unlike a directory or folder, you cannot nest a bucket.
-
-##### Example
-
-This example names the bucket _pulsar-topic-offload_.
-
-```conf
-
-managedLedgerOffloadBucket=pulsar-topic-offload
-
-```
-
-#### Endpoint (required)
-
-The endpoint is the region where a bucket is located.
-
-:::tip
-
-For more information about Aliyun OSS regions and endpoints, see the [international website](https://www.alibabacloud.com/help/doc-detail/31837.htm) or the [Chinese website](https://help.aliyun.com/document_detail/31837.html).
-
-:::
-
-##### Example
-
-This example sets the endpoint to _oss-us-west-1-internal_.
-
-```
-
-managedLedgerOffloadServiceEndpoint=http://oss-us-west-1-internal.aliyuncs.com
-
-```
-
-#### Authentication (required)
-
-To be able to access Aliyun OSS, you need to authenticate with Aliyun OSS.
-
-Set the environment variables `ALIYUN_OSS_ACCESS_KEY_ID` and `ALIYUN_OSS_ACCESS_KEY_SECRET` in `conf/pulsar_env.sh`.
-
-The `export` keyword is important: it makes the variables available in the environment of spawned processes.
-
-```bash
-
-export ALIYUN_OSS_ACCESS_KEY_ID=ABC123456789
-export ALIYUN_OSS_ACCESS_KEY_SECRET=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c
-
-```
-
-#### Size of block read/write
-
-You can configure the size of a request sent to or read from Aliyun OSS in the configuration file `broker.conf` or `standalone.conf`.
-
-| Configuration | Description | Default value |
-| --- | --- | --- |
-| `managedLedgerOffloadReadBufferSizeInBytes` | Block size for each individual read when reading back data from Aliyun OSS. | 1 MB |
-| `managedLedgerOffloadMaxBlockSizeInBytes` | Maximum size of a "part" sent during a multipart upload to Aliyun OSS. It **cannot** be smaller than 5 MB. | 64 MB |
-
-### Run Aliyun OSS offloader automatically
-
-A namespace policy can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic reaches the threshold, an offloading operation is triggered automatically.
-
-| Threshold value | Action |
-| --- | --- |
-| > 0 | It triggers the offloading operation if the topic storage reaches its threshold. |
-| = 0 | It causes a broker to offload data as soon as possible. |
-| < 0 | It disables automatic offloading operation. |
-
-Automatic offloading runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, the offloader does not work until the current segment is full.
-
-You can configure the threshold size using CLI tools, such as pulsar-admin.
-
-The offload configurations in `broker.conf` and `standalone.conf` are used for the namespaces that do not have namespace-level offload policies. Each namespace can have its own offload policy. If you want to set an offload policy for a namespace, use the [`pulsar-admin namespaces set-offload-policies options`](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-policies-em-) command.
-
-#### Example
-
-This example sets the Aliyun OSS offloader threshold size to 10 MB using pulsar-admin.
- -```bash - -bin/pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace - -``` - -:::tip - -For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-threshold-em-). - -::: - -### Run Aliyun OSS offloader manually - -For individual topics, you can trigger the Aliyun OSS offloader manually using one of the following methods: - -- Use REST endpoint. - -- Use CLI tools (such as pulsar-admin). - - To trigger it via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are moved to Aliyun OSS until the threshold is no longer exceeded. Older segments are moved first. - -#### Example - -- This example triggers the Aliyun OSS offloader to run manually using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload --size-threshold 10M my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1 - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-em-). - - ::: - -- This example checks the Aliyun OSS offloader status using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload is currently running - - ``` - - To wait for the Aliyun OSS offloader to complete the job, add the `-w` flag. - - ```bash - - bin/pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Offload was a success - - ``` - - If there is an error in offloading, the error is propagated to the `pulsar-admin topics offload-status` command. - - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Error in offload - null - - Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g= - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-status-em-). 
-
-  :::
-
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/tiered-storage-aws.md b/site2/website/versioned_docs/version-2.8.0-deprecated/tiered-storage-aws.md
deleted file mode 100644
index a83de62643638e..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/tiered-storage-aws.md
+++ /dev/null
@@ -1,329 +0,0 @@
----
-id: tiered-storage-aws
-title: Use AWS S3 offloader with Pulsar
-sidebar_label: "AWS S3 offloader"
-original_id: tiered-storage-aws
----
-
-This chapter guides you through every step of installing and configuring the AWS S3 offloader and using it with Pulsar.
-
-## Installation
-
-Follow the steps below to install the AWS S3 offloader.
-
-### Prerequisite
-
-- Pulsar: 2.4.2 or later versions
-
-### Step
-
-This example uses Pulsar 2.5.1.
-
-1. Download the Pulsar tarball using one of the following ways:
-
-   * Download from the [Apache mirror](https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz)
-
-   * Download from the Pulsar [downloads page](https://pulsar.apache.org/download)
-
-   * Use [wget](https://www.gnu.org/software/wget):
-
-     ```shell
-
-     wget https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz
-
-     ```
-
-2. Download and untar the Pulsar offloaders package.
-
-   ```bash
-
-   wget https://downloads.apache.org/pulsar/pulsar-2.5.1/apache-pulsar-offloaders-2.5.1-bin.tar.gz
-   tar xvfz apache-pulsar-offloaders-2.5.1-bin.tar.gz
-
-   ```
-
-3. Copy the Pulsar offloaders as `offloaders` into the Pulsar directory.
-
-   ```
-
-   mv apache-pulsar-offloaders-2.5.1/offloaders apache-pulsar-2.5.1/offloaders
-
-   ls offloaders
-
-   ```
-
-   **Output**
-
-   As shown from the output, Pulsar uses [Apache jclouds](https://jclouds.apache.org) to support [AWS S3](https://aws.amazon.com/s3/) and [GCS](https://cloud.google.com/storage/) for long-term storage.
-
-   ```
-
-   tiered-storage-file-system-2.5.1.nar
-   tiered-storage-jcloud-2.5.1.nar
-
-   ```
-
-   :::note
-
-   * If you are running Pulsar in a bare metal cluster, make sure that the `offloaders` tarball is unzipped in every broker's Pulsar directory.
-   * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8s and DCOS), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. The `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders.
-
-   :::
-
-## Configuration
-
-:::note
-
-Before offloading data from BookKeeper to AWS S3, you need to configure some properties of the AWS S3 offload driver.
-
-:::
-
-You can also configure the AWS S3 offloader to run automatically, or trigger it manually.
-
-### Configure AWS S3 offloader driver
-
-You can configure the AWS S3 offloader driver in the configuration file `broker.conf` or `standalone.conf`.
-
-- **Required** configurations are as below.
-
-  Required configuration | Description | Example value
-  |---|---|---
-  `managedLedgerOffloadDriver` | Offloader driver name, which is case-insensitive. <br /><br />**Note**: there is a third driver type, S3, which is identical to AWS S3, though S3 requires that you specify an endpoint URL using `s3ManagedLedgerOffloadServiceEndpoint`. This is useful if you are using an S3-compatible data store other than AWS S3. | aws-s3
-  `offloadersDirectory` | Offloader directory | offloaders
-  `s3ManagedLedgerOffloadBucket` | Bucket | pulsar-topic-offload
-
-- **Optional** configurations are as below.
-
-  Optional | Description | Example value
-  |---|---|---
-  `s3ManagedLedgerOffloadRegion` | Bucket region. <br /><br />**Note**: before specifying a value for this parameter, you need to set the following configurations. Otherwise, you might get an error. <br /><br />- Set [`s3ManagedLedgerOffloadServiceEndpoint`](https://docs.aws.amazon.com/general/latest/gr/s3.html). <br /><br />Example: `s3ManagedLedgerOffloadServiceEndpoint=https://s3.YOUR_REGION.amazonaws.com` <br /><br />- Grant `GetBucketLocation` permission to a user. <br /><br />For how to grant `GetBucketLocation` permission to a user, see [here](https://docs.aws.amazon.com/AmazonS3/latest/dev/using-with-s3-actions.html#using-with-s3-actions-related-to-buckets). | eu-west-3
-  `s3ManagedLedgerOffloadReadBufferSizeInBytes` | Size of block read | 1 MB
-  `s3ManagedLedgerOffloadMaxBlockSizeInBytes` | Size of block write | 64 MB
-  `managedLedgerMinLedgerRolloverTimeMinutes` | Minimum time between ledger rollover for a topic. <br /><br />**Note**: it is not recommended that you set this configuration in the production environment. | 2
-  `managedLedgerMaxEntriesPerLedger` | Maximum number of entries to append to a ledger before triggering a rollover. <br /><br />**Note**: it is not recommended that you set this configuration in the production environment. | 5000
-
-#### Bucket (required)
-
-A bucket is a basic container that holds your data. Everything you store in AWS S3 must be contained in a bucket. You can use a bucket to organize your data and control access to your data, but unlike a directory or folder, you cannot nest a bucket.
-
-##### Example
-
-This example names the bucket _pulsar-topic-offload_.
-
-```conf
-
-s3ManagedLedgerOffloadBucket=pulsar-topic-offload
-
-```
-
-#### Bucket region
-
-A bucket region is the region where a bucket is located. If a bucket region is not specified, the **default** region (`US East (N. Virginia)`) is used.
-
-:::tip
-
-For more information about AWS regions and endpoints, see [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
-
-:::
-
-##### Example
-
-This example sets the bucket region as _eu-west-3_.
-
-```
-
-s3ManagedLedgerOffloadRegion=eu-west-3
-
-```
-
-#### Authentication (required)
-
-To be able to access AWS S3, you need to authenticate with AWS S3.
-
-Pulsar does not provide any direct methods of configuring authentication for AWS S3,
-but relies on the mechanisms supported by the [DefaultAWSCredentialsProviderChain](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html).
-
-Once you have created a set of credentials in the AWS IAM console, you can configure credentials using one of the following methods.
-
-* Use EC2 instance metadata credentials.
-
-  If you are on an AWS instance with an instance profile that provides credentials, Pulsar uses these credentials if no other mechanism is provided.
-
-* Set the environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` in `conf/pulsar_env.sh`.
-
-  The `export` keyword is important: it makes the variables available in the environment of spawned processes.
-
-  ```bash
-
-  export AWS_ACCESS_KEY_ID=ABC123456789
-  export AWS_SECRET_ACCESS_KEY=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c
-
-  ```
-
-* Add the Java system properties `aws.accessKeyId` and `aws.secretKey` to `PULSAR_EXTRA_OPTS` in `conf/pulsar_env.sh`.
-
-  ```bash
-
-  PULSAR_EXTRA_OPTS="${PULSAR_EXTRA_OPTS} ${PULSAR_MEM} ${PULSAR_GC} -Daws.accessKeyId=ABC123456789 -Daws.secretKey=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c -Dio.netty.leakDetectionLevel=disabled -Dio.netty.recycler.maxCapacityPerThread=4096"
-
-  ```
-
-* Set the access credentials in `~/.aws/credentials`.
-
-  ```conf
-
-  [default]
-  aws_access_key_id=ABC123456789
-  aws_secret_access_key=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c
-
-  ```
-
-* Assume an IAM role.
-
-  This example uses the `DefaultAWSCredentialsProviderChain` for assuming this role.
-
-  The broker must be rebooted for credentials specified in `pulsar_env` to take effect.
-
-  ```conf
-
-  s3ManagedLedgerOffloadRole=<aws role arn>
-  s3ManagedLedgerOffloadRoleSessionName=pulsar-s3-offload
-
-  ```
-
-#### Size of block read/write
-
-You can configure the size of a request sent to or read from AWS S3 in the configuration file `broker.conf` or `standalone.conf`.
-
-Configuration | Description | Default value
-|---|---|---
-`s3ManagedLedgerOffloadReadBufferSizeInBytes` | Block size for each individual read when reading back data from AWS S3. | 1 MB
-`s3ManagedLedgerOffloadMaxBlockSizeInBytes` | Maximum size of a "part" sent during a multipart upload to AWS S3. It **cannot** be smaller than 5 MB. | 64 MB
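-
-If you need different request sizes, a sketch of overriding both values in `broker.conf` might look like the following, shown at their documented defaults expressed in bytes (1 MB and 64 MB):
-
-```conf
-
-s3ManagedLedgerOffloadReadBufferSizeInBytes=1048576
-s3ManagedLedgerOffloadMaxBlockSizeInBytes=67108864
-
-```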
-
-### Configure AWS S3 offloader to run automatically
-
-A namespace policy can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic reaches the threshold, an offloading operation is triggered automatically.
-
-| Threshold value | Action |
-|---|---|
-| > 0 | It triggers the offloading operation if the topic storage reaches its threshold. |
-| = 0 | It causes a broker to offload data as soon as possible. |
-| < 0 | It disables automatic offloading operation. |
-
-Automatic offloading runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, the offloader does not work until the current segment is full.
-
-You can configure the threshold size using CLI tools, such as pulsar-admin.
-
-The offload configurations in `broker.conf` and `standalone.conf` are used for the namespaces that do not have namespace-level offload policies. Each namespace can have its own offload policy. If you want to set an offload policy for a namespace, use the [`pulsar-admin namespaces set-offload-policies options`](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-policies-em-) command.
-
-#### Example
-
-This example sets the AWS S3 offloader threshold size to 10 MB using pulsar-admin.
-
-```bash
-
-bin/pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace
-
-```
-
-:::tip
-
-For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-threshold-em-).
-
-:::
-
-### Configure AWS S3 offloader to run manually
-
-For individual topics, you can trigger the AWS S3 offloader manually using one of the following methods:
-
-- Use the REST endpoint.
-
-- Use CLI tools (such as pulsar-admin).
-
-  To trigger it via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are moved to AWS S3 until the threshold is no longer exceeded. Older segments are moved first.
-
-#### Example
-
-- This example triggers the AWS S3 offloader to run manually using pulsar-admin.
-
-  ```bash
-
-  bin/pulsar-admin topics offload --size-threshold 10M my-tenant/my-namespace/topic1
-
-  ```
-
-  **Output**
-
-  ```bash
-
-  Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1
-
-  ```
-
-  :::tip
-
-  For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-em-).
-
-  :::
-
-- This example checks the AWS S3 offloader status using pulsar-admin.
-
-  ```bash
-
-  bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1
-
-  ```
-
-  **Output**
-
-  ```bash
-
-  Offload is currently running
-
-  ```
-
-  To wait for the AWS S3 offloader to complete the job, add the `-w` flag.
-
-  ```bash
-
-  bin/pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1
-
-  ```
-
-  **Output**
-
-  ```
-
-  Offload was a success
-
-  ```
-
-  If there is an error in offloading, the error is propagated to the `pulsar-admin topics offload-status` command.
- - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Error in offload - null - - Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g= - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-status-em-). - - ::: - -## Tutorial - -For the complete and step-by-step instructions on how to use the AWS S3 offloader with Pulsar, see [here](https://hub.streamnative.io/offloaders/aws-s3/2.5.1#usage). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/tiered-storage-azure.md b/site2/website/versioned_docs/version-2.8.0-deprecated/tiered-storage-azure.md deleted file mode 100644 index e1485af3984e31..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/tiered-storage-azure.md +++ /dev/null @@ -1,264 +0,0 @@ ---- -id: tiered-storage-azure -title: Use Azure BlobStore offloader with Pulsar -sidebar_label: "Azure BlobStore offloader" -original_id: tiered-storage-azure ---- - -This chapter guides you through every step of installing and configuring the Azure BlobStore offloader and using it with Pulsar. - -## Installation - -Follow the steps below to install the Azure BlobStore offloader. - -### Prerequisite - -- Pulsar: 2.6.2 or later versions - -### Step - -This example uses Pulsar 2.6.2. - -1. Download the Pulsar tarball using one of the following ways: - - * Download from the [Apache mirror](https://archive.apache.org/dist/pulsar/pulsar-2.6.2/apache-pulsar-2.6.2-bin.tar.gz) - - * Download from the Pulsar [downloads page](https://pulsar.apache.org/download) - - * Use [wget](https://www.gnu.org/software/wget): - - ```shell - - wget https://archive.apache.org/dist/pulsar/pulsar-2.6.2/apache-pulsar-2.6.2-bin.tar.gz - - ``` - -2. Download and untar the Pulsar offloaders package. - - ```bash - - wget https://downloads.apache.org/pulsar/pulsar-2.6.2/apache-pulsar-offloaders-2.6.2-bin.tar.gz - tar xvfz apache-pulsar-offloaders-2.6.2-bin.tar.gz - - ``` - -3. Copy the Pulsar offloaders as `offloaders` in the Pulsar directory. - - ``` - - mv apache-pulsar-offloaders-2.6.2/offloaders apache-pulsar-2.6.2/offloaders - - ls offloaders - - ``` - - **Output** - - As shown from the output, Pulsar uses [Apache jclouds](https://jclouds.apache.org) to support [AWS S3](https://aws.amazon.com/s3/), [GCS](https://cloud.google.com/storage/) and [Azure](https://portal.azure.com/#home) for long term storage. - - ``` - - tiered-storage-file-system-2.6.2.nar - tiered-storage-jcloud-2.6.2.nar - - ``` - - :::note - - * If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's Pulsar directory. 
-   * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8s and DCOS), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. The `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders.
-
-   :::
-
-## Configuration
-
-:::note
-
-Before offloading data from BookKeeper to Azure BlobStore, you need to configure some properties of the Azure BlobStore offload driver.
-
-:::
-
-You can also configure the Azure BlobStore offloader to run automatically, or trigger it manually.
-
-### Configure Azure BlobStore offloader driver
-
-You can configure the Azure BlobStore offloader driver in the configuration file `broker.conf` or `standalone.conf`.
-
-- **Required** configurations are as below.
-
-  Required configuration | Description | Example value
-  |---|---|---
-  `managedLedgerOffloadDriver` | Offloader driver name | azureblob
-  `offloadersDirectory` | Offloader directory | offloaders
-  `managedLedgerOffloadBucket` | Bucket | pulsar-topic-offload
-
-- **Optional** configurations are as below.
-
-  Optional | Description | Example value
-  |---|---|---
-  `managedLedgerOffloadReadBufferSizeInBytes` | Size of block read | 1 MB
-  `managedLedgerOffloadMaxBlockSizeInBytes` | Size of block write | 64 MB
-  `managedLedgerMinLedgerRolloverTimeMinutes` | Minimum time between ledger rollover for a topic. <br /><br />**Note**: it is not recommended that you set this configuration in the production environment. | 2
-  `managedLedgerMaxEntriesPerLedger` | Maximum number of entries to append to a ledger before triggering a rollover. <br /><br />**Note**: it is not recommended that you set this configuration in the production environment. | 5000
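-
-For reference, a minimal `broker.conf` sketch of the required settings above might look like this; the bucket name is the illustrative example value from the table:
-
-```conf
-
-managedLedgerOffloadDriver=azureblob
-offloadersDirectory=offloaders
-managedLedgerOffloadBucket=pulsar-topic-offload
-
-```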
-
-#### Bucket (required)
-
-A bucket is a basic container that holds your data. Everything you store in Azure BlobStore must be contained in a bucket. You can use a bucket to organize your data and control access to your data, but unlike a directory or folder, you cannot nest a bucket.
-
-##### Example
-
-This example names the bucket _pulsar-topic-offload_.
-
-```conf
-
-managedLedgerOffloadBucket=pulsar-topic-offload
-
-```
-
-#### Authentication (required)
-
-To be able to access Azure BlobStore, you need to authenticate with Azure BlobStore.
-
-* Set the environment variables `AZURE_STORAGE_ACCOUNT` and `AZURE_STORAGE_ACCESS_KEY` in `conf/pulsar_env.sh`.
-
-  The `export` keyword is important: it makes the variables available in the environment of spawned processes.
-
-  ```bash
-
-  export AZURE_STORAGE_ACCOUNT=ABC123456789
-  export AZURE_STORAGE_ACCESS_KEY=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c
-
-  ```
-
-#### Size of block read/write
-
-You can configure the size of a request sent to or read from Azure BlobStore in the configuration file `broker.conf` or `standalone.conf`.
-
-Configuration | Description | Default value
-|---|---|---
-`managedLedgerOffloadReadBufferSizeInBytes` | Block size for each individual read when reading back data from Azure BlobStore. | 1 MB
-`managedLedgerOffloadMaxBlockSizeInBytes` | Maximum size of a "part" sent during a multipart upload to Azure BlobStore. It **cannot** be smaller than 5 MB. | 64 MB
-
-### Configure Azure BlobStore offloader to run automatically
-
-A namespace policy can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic reaches the threshold, an offloading operation is triggered automatically.
-
-| Threshold value | Action |
-|---|---|
-| > 0 | It triggers the offloading operation if the topic storage reaches its threshold. |
-| = 0 | It causes a broker to offload data as soon as possible. |
-| < 0 | It disables automatic offloading operation. |
-
-Automatic offloading runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, the offloader does not work until the current segment is full.
-
-You can configure the threshold size using CLI tools, such as pulsar-admin.
-
-The offload configurations in `broker.conf` and `standalone.conf` are used for the namespaces that do not have namespace-level offload policies. Each namespace can have its own offload policy. If you want to set an offload policy for a namespace, use the [`pulsar-admin namespaces set-offload-policies options`](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-policies-em-) command.
-
-#### Example
-
-This example sets the Azure BlobStore offloader threshold size to 10 MB using pulsar-admin.
-
-```bash
-
-bin/pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace
-
-```
-
-:::tip
-
-For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-threshold-em-).
-
-:::
-
-### Configure Azure BlobStore offloader to run manually
-
-For individual topics, you can trigger the Azure BlobStore offloader manually using one of the following methods:
-
-- Use the REST endpoint.
-
-- Use CLI tools (such as pulsar-admin).
-
-  To trigger it via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are moved to Azure BlobStore until the threshold is no longer exceeded. Older segments are moved first.
-
-#### Example
-
-- This example triggers the Azure BlobStore offloader to run manually using pulsar-admin.
-
-  ```bash
-
-  bin/pulsar-admin topics offload --size-threshold 10M my-tenant/my-namespace/topic1
-
-  ```
-
-  **Output**
-
-  ```bash
-
-  Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1
-
-  ```
-
-  :::tip
-
-  For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-em-).
-
-  :::
-
-- This example checks the Azure BlobStore offloader status using pulsar-admin.
-
-  ```bash
-
-  bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1
-
-  ```
-
-  **Output**
-
-  ```bash
-
-  Offload is currently running
-
-  ```
-
-  To wait for the Azure BlobStore offloader to complete the job, add the `-w` flag.
-
-  ```bash
-
-  bin/pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1
-
-  ```
-
-  **Output**
-
-  ```
-
-  Offload was a success
-
-  ```
-
-  If there is an error in offloading, the error is propagated to the `pulsar-admin topics offload-status` command.
-
-  ```bash
-
-  bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1
-
-  ```
-
-  **Output**
-
-  ```
-
-  Error in offload
-  null
-
-  Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException:
-
-  ```
-
-  :::tip
-
-  For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-status-em-).
-
-  :::
-
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/tiered-storage-filesystem.md b/site2/website/versioned_docs/version-2.8.0-deprecated/tiered-storage-filesystem.md
deleted file mode 100644
index 85a1644120fc63..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/tiered-storage-filesystem.md
+++ /dev/null
@@ -1,630 +0,0 @@
----
-id: tiered-storage-filesystem
-title: Use filesystem offloader with Pulsar
-sidebar_label: "Filesystem offloader"
-original_id: tiered-storage-filesystem
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-This chapter guides you through every step of installing and configuring the filesystem offloader and using it with Pulsar.
-
-## Installation
-
-This section describes how to install the filesystem offloader.
-
-### Prerequisite
-
-- Pulsar: 2.4.2 or higher versions
-
-### Step
-
-This example uses Pulsar 2.5.1.
-
-1. Download the Pulsar tarball using one of the following ways:
-
-   * Download the Pulsar tarball from the [Apache mirror](https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz)
-
-   * Download the Pulsar tarball from the Pulsar [download page](https://pulsar.apache.org/download)
-
-   * Use the [wget](https://www.gnu.org/software/wget) command to download the Pulsar tarball.
-
-     ```shell
-
-     wget https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz
-
-     ```
-
-2. Download and untar the Pulsar offloaders package.
-
-   ```bash
-
-   wget https://downloads.apache.org/pulsar/pulsar-2.5.1/apache-pulsar-offloaders-2.5.1-bin.tar.gz
-
-   tar xvfz apache-pulsar-offloaders-2.5.1-bin.tar.gz
-
-   ```
-
-   :::note
-
-   * If you run Pulsar in a bare metal cluster, ensure that the `offloaders` tarball is unzipped in every broker's Pulsar directory.
-   * If you run Pulsar in Docker or deploy Pulsar using a Docker image (such as K8S and DCOS), you can use the `apachepulsar/pulsar-all` image. The `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders.
-
-   :::
-
-3. Copy the Pulsar offloaders as `offloaders` into the Pulsar directory.
-
-   ```
-
-   mv apache-pulsar-offloaders-2.5.1/offloaders apache-pulsar-2.5.1/offloaders
-
-   ls offloaders
-
-   ```
-
-   **Output**
-
-   ```
-
-   tiered-storage-file-system-2.5.1.nar
-   tiered-storage-jcloud-2.5.1.nar
-
-   ```
-
-   :::note
-
-   * If you run Pulsar in a bare metal cluster, ensure that the `offloaders` tarball is unzipped in every broker's Pulsar directory.
-   * If you run Pulsar in Docker or deploy Pulsar using a Docker image (such as K8s and DCOS), you can use the `apachepulsar/pulsar-all` image. The `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders.
-
-   :::
-
-## Configuration
-
-:::note
-
-Before offloading data from BookKeeper to the filesystem, you need to configure some properties of the filesystem offloader driver.
-
-:::
-
-You can also configure the filesystem offloader to run automatically, or trigger it manually.
-
-### Configure filesystem offloader driver
-
-You can configure the filesystem offloader driver in the `broker.conf` or `standalone.conf` configuration file.
-
-````mdx-code-block
-<Tabs defaultValue="HDFS" values={[{"label":"HDFS","value":"HDFS"},{"label":"NFS","value":"NFS"}]}>
-<TabItem value="HDFS">
-
-- **Required** configurations are as below.
-
-  Parameter | Description | Example value
-  |---|---|---
-  `managedLedgerOffloadDriver` | Offloader driver name, which is case-insensitive. | filesystem
-  `fileSystemURI` | Connection address, which is the URI to access the default Hadoop distributed file system. | hdfs://127.0.0.1:9000
-  `offloadersDirectory` | Offloader directory | offloaders
-  `fileSystemProfilePath` | Hadoop profile path. The configuration file is stored in the Hadoop profile path. It contains various settings for Hadoop performance tuning. | conf/filesystem_offload_core_site.xml
-
-- **Optional** configurations are as below.
-
-  Parameter | Description | Example value
-  |---|---|---
-  `managedLedgerMinLedgerRolloverTimeMinutes` | Minimum time between ledger rollover for a topic. <br /><br />**Note**: it is not recommended to set this parameter in the production environment. | 2
-  `managedLedgerMaxEntriesPerLedger` | Maximum number of entries to append to a ledger before triggering a rollover. <br /><br />**Note**: it is not recommended to set this parameter in the production environment. | 5000
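-
-For the HDFS case, the required settings above might be combined in `broker.conf` like this minimal sketch; the URI and profile path are the example values from the table, and the same settings appear again in the tutorial below:
-
-```conf
-
-managedLedgerOffloadDriver=filesystem
-fileSystemURI=hdfs://127.0.0.1:9000
-offloadersDirectory=offloaders
-fileSystemProfilePath=conf/filesystem_offload_core_site.xml
-
-```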
-
-</TabItem>
-<TabItem value="NFS">
-
-- **Required** configurations are as below.
-
-  Parameter | Description | Example value
-  |---|---|---
-  `managedLedgerOffloadDriver` | Offloader driver name, which is case-insensitive. | filesystem
-  `offloadersDirectory` | Offloader directory | offloaders
-  `fileSystemProfilePath` | NFS profile path. The configuration file is stored in the NFS profile path. It contains various settings for performance tuning. | conf/filesystem_offload_core_site.xml
-
-- **Optional** configurations are as below.
-
-  Parameter | Description | Example value
-  |---|---|---
-  `managedLedgerMinLedgerRolloverTimeMinutes` | Minimum time between ledger rollover for a topic. <br /><br />**Note**: it is not recommended to set this parameter in the production environment. | 2
-  `managedLedgerMaxEntriesPerLedger` | Maximum number of entries to append to a ledger before triggering a rollover. <br /><br />**Note**: it is not recommended to set this parameter in the production environment. | 5000
-
-</TabItem>
-
-</Tabs>
-````
-
-### Run filesystem offloader automatically
-
-You can configure the namespace policy to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic storage reaches the threshold, an offload operation is triggered automatically.
-
-| Threshold value | Action |
-|---|---|
-| > 0 | It triggers the offloading operation if the topic storage reaches its threshold. |
-| = 0 | It causes a broker to offload data as soon as possible. |
-| < 0 | It disables automatic offloading operation. |
-
-Automatic offload runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, the filesystem offloader does not work until the current segment is full.
-
-You can configure the threshold using CLI tools, such as pulsar-admin.
-
-#### Example
-
-This example sets the filesystem offloader threshold to 10 MB using pulsar-admin.
-
-```bash
-
-pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace
-
-```
-
-:::tip
-
-For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#set-offload-threshold).
-
-:::
-
-### Run filesystem offloader manually
-
-For individual topics, you can trigger the filesystem offloader manually using one of the following methods:
-
-- Use the REST endpoint.
-
-- Use CLI tools (such as pulsar-admin).
-
-To manually trigger the filesystem offloader via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are offloaded to the filesystem until the threshold is no longer exceeded. Older segments are offloaded first.
-
-#### Example
-
-- This example manually runs the filesystem offloader using pulsar-admin.
-
-  ```bash
-
-  pulsar-admin topics offload --size-threshold 10M persistent://my-tenant/my-namespace/topic1
-
-  ```
-
-  **Output**
-
-  ```bash
-
-  Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1
-
-  ```
-
-  :::tip
-
-  For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#offload).
-
-  :::
-
-- This example checks the filesystem offloader status using pulsar-admin.
-
-  ```bash
-
-  pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1
-
-  ```
-
-  **Output**
-
-  ```bash
-
-  Offload is currently running
-
-  ```
-
-  To wait for the filesystem offloader to complete the job, add the `-w` flag.
-
-  ```bash
-
-  pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1
-
-  ```
-
-  **Output**
-
-  ```
-
-  Offload was a success
-
-  ```
-
-  If there is an error in the offloading operation, the error is propagated to the `pulsar-admin topics offload-status` command.
-
-  ```bash
-
-  pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1
-
-  ```
-
-  **Output**
-
-  ```
-
-  Error in offload
-  null
-
-  Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=
-
-  ```
-
-  :::tip
-
-  For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#offload-status).
-
-  :::
-
-## Tutorial
-
-This section provides step-by-step instructions on how to use the filesystem offloader to move data from Pulsar to the Hadoop Distributed File System (HDFS) or a Network File System (NFS).
-
-````mdx-code-block
-<Tabs defaultValue="HDFS" values={[{"label":"HDFS","value":"HDFS"},{"label":"NFS","value":"NFS"}]}>
-<TabItem value="HDFS">
-
-To move data from Pulsar to HDFS, follow these steps.
-
-### Step 1: Prepare the HDFS environment
-
-This tutorial sets up a Hadoop single-node cluster and uses Hadoop 3.2.1.
-
-:::tip
-
-For details about how to set up a Hadoop single-node cluster, see [here](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html).
-
-:::
-
-1. Download and uncompress Hadoop 3.2.1.
-
-   ```
-
-   wget https://mirrors.bfsu.edu.cn/apache/hadoop/common/hadoop-3.2.1/hadoop-3.2.1.tar.gz
-
-   tar -zxvf hadoop-3.2.1.tar.gz -C $HADOOP_HOME
-
-   ```
-
-2. Configure Hadoop.
-
-   ```
-
-   # $HADOOP_HOME/etc/hadoop/core-site.xml
-   <configuration>
-       <property>
-           <name>fs.defaultFS</name>
-           <value>hdfs://localhost:9000</value>
-       </property>
-   </configuration>
-
-   # $HADOOP_HOME/etc/hadoop/hdfs-site.xml
-   <configuration>
-       <property>
-           <name>dfs.replication</name>
-           <value>1</value>
-       </property>
-   </configuration>
-
-   ```
-
-3. Set up passphraseless ssh.
-
-   ```
-
-   # Now check that you can ssh to the localhost without a passphrase:
-   $ ssh localhost
-   # If you cannot ssh to localhost without a passphrase, execute the following commands
-   $ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
-   $ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
-   $ chmod 0600 ~/.ssh/authorized_keys
-
-   ```
-
-4. Start HDFS.
-
-   ```
-
-   # Do not execute this command repeatedly; repeated formatting makes the datanode clusterId inconsistent with the namenode
-   $HADOOP_HOME/bin/hadoop namenode -format
-   $HADOOP_HOME/sbin/start-dfs.sh
-
-   ```
-
-5. Navigate to the [HDFS website](http://localhost:9870/).
-
-   You can see the **Overview** page.
-
-   ![](/assets/FileSystem-1.png)
-
-   1. At the top navigation bar, click **Datanodes** to check DataNode information.
-
-      ![](/assets/FileSystem-2.png)
-
-   2. Click **HTTP Address** to get more detailed information about localhost:9866.
-
-      As can be seen below, the size of **Capacity Used** is 4 KB, which is the initial value.
-
-      ![](/assets/FileSystem-3.png)
-
-### Step 2: Install the filesystem offloader
-
-For details, see [installation](#installation).
-
-### Step 3: Configure the filesystem offloader
-
-As indicated in the [configuration](#configuration) section, you need to configure some properties for the filesystem offloader driver before using it. This tutorial assumes that you have configured the filesystem offloader driver as below and run Pulsar in **standalone** mode.
-
-Set the following configurations in the `conf/standalone.conf` file.
-
-```conf
-
-managedLedgerOffloadDriver=filesystem
-fileSystemURI=hdfs://127.0.0.1:9000
-fileSystemProfilePath=conf/filesystem_offload_core_site.xml
-
-```
-
-:::note
-
-For testing purposes, you can set the following two configurations to speed up ledger rollover, but it is not recommended that you set them in the production environment.
-
-:::
-
-```
-
-managedLedgerMinLedgerRolloverTimeMinutes=1
-managedLedgerMaxEntriesPerLedger=100
-
-```
-
-</TabItem>
-<TabItem value="NFS">
-
-:::note
-
-In this section, it is assumed that you have enabled the NFS service and set its shared path; `/Users/test` is used as the shared path here.
-
-:::
-
-To offload data to NFS, follow these steps.
-
-### Step 1: Install the filesystem offloader
-
-For details, see [installation](#installation).
-
-### Step 2: Mount your NFS to your local filesystem
-
-This example mounts the NFS export *192.168.0.103:/Users/test* at the local path */Users/pulsar_nfs*.
-
-```
-
-mount -e 192.168.0.103:/Users/test /Users/pulsar_nfs
-
-```
-
-### Step 3: Configure the filesystem offloader driver
-
-As indicated in the [configuration](#configuration) section, you need to configure some properties for the filesystem offloader driver before using it. This tutorial assumes that you have configured the filesystem offloader driver as below and run Pulsar in **standalone** mode.
-
-1. Set the following configurations in the `conf/standalone.conf` file.
-
-   ```conf
-
-   managedLedgerOffloadDriver=filesystem
-   fileSystemProfilePath=conf/filesystem_offload_core_site.xml
-
-   ```
-
-2. Modify the *filesystem_offload_core_site.xml* as follows.
-
-   ```
-
-   <configuration>
-       <property>
-           <name>fs.defaultFS</name>
-           <value>file:///</value>
-       </property>
-       <property>
-           <name>hadoop.tmp.dir</name>
-           <value>file:///Users/pulsar_nfs</value>
-       </property>
-       <property>
-           <name>io.file.buffer.size</name>
-           <value>4096</value>
-       </property>
-       <property>
-           <name>io.seqfile.compress.blocksize</name>
-           <value>1000000</value>
-       </property>
-       <property>
-           <name>io.seqfile.compression.type</name>
-           <value>BLOCK</value>
-       </property>
-       <property>
-           <name>io.map.index.interval</name>
-           <value>128</value>
-       </property>
-   </configuration>
-
-   ```
-
-</TabItem>
-
-</Tabs>
-````
-
-### Step 4: Offload data from BookKeeper to filesystem
-
-Execute the following commands in the repository where you downloaded the Pulsar tarball. For example, `~/path/to/apache-pulsar-2.5.1`.
-
-1. Start Pulsar standalone.
-
-   ```
-
-   bin/pulsar standalone -a 127.0.0.1
-
-   ```
-
-2. To ensure the data generated is not deleted immediately, it is recommended to set the [retention policy](https://pulsar.apache.org/docs/en/next/cookbooks-retention-expiry/#retention-policies), which can be either a **size** limit or a **time** limit. The larger the value you set for the retention policy, the longer the data can be retained.
-
-   ```
-
-   bin/pulsar-admin namespaces set-retention public/default --size 100M --time 2d
-
-   ```
-
-   :::tip
-
-   For more information about the `pulsarctl namespaces set-retention options` command, including flags, descriptions, default values, and shorthands, see [here](https://docs.streamnative.io/pulsarctl/v2.7.0.6/#-em-set-retention-em-).
-
-   :::
-
-3. Produce data using pulsar-client.
-
-   ```
-
-   bin/pulsar-client produce -m "Hello FileSystem Offloader" -n 1000 public/default/fs-test
-
-   ```
-
-4. The offloading operation starts after a ledger rollover is triggered. To ensure that data is offloaded successfully, it is recommended that you wait until several ledger rollovers are triggered. In this case, you might need to wait for a second. You can check the ledger status using pulsar-admin.
-
-   ```
-
-   bin/pulsar-admin topics stats-internal public/default/fs-test
-
-   ```
-
-   **Output**
-
-   The data of the ledger 696 is not offloaded.
-
-   ```
-
-   {
-     "version": 1,
-     "creationDate": "2020-06-16T21:46:25.807+08:00",
-     "modificationDate": "2020-06-16T21:46:25.821+08:00",
-     "ledgers": [
-       {
-         "ledgerId": 696,
-         "isOffloaded": false
-       }
-     ],
-     "cursors": {}
-   }
-
-   ```
-
-5. Wait a second and send more messages to the topic.
-
-   ```
-
-   bin/pulsar-client produce -m "Hello FileSystem Offloader" -n 1000 public/default/fs-test
-
-   ```
- - ``` - - bin/pulsar-admin topics stats-internal public/default/fs-test - - ``` - - **Output** - - The ledger 696 is rolled over. - - ``` - - { - "version": 2, - "creationDate": "2020-06-16T21:46:25.807+08:00", - "modificationDate": "2020-06-16T21:48:52.288+08:00", - "ledgers": [ - { - "ledgerId": 696, - "entries": 1001, - "size": 81695, - "isOffloaded": false - }, - { - "ledgerId": 697, - "isOffloaded": false - } - ], - "cursors": {} - } - - ``` - -7. Trigger the offloading operation manually using pulsarctl. - - ``` - - bin/pulsar-admin topics offload -s 0 public/default/fs-test - - ``` - - **Output** - - Data in ledgers before the ledge 697 is offloaded. - - ``` - - # offload info, the ledgers before 697 will be offloaded - Offload triggered for persistent://public/default/fs-test3 for messages before 697:0:-1 - - ``` - -8. Check the ledger status using pulsarctl. - - ``` - - bin/pulsar-admin topics stats-internal public/default/fs-test - - ``` - - **Output** - - The data of the ledger 696 is offloaded. - - ``` - - { - "version": 4, - "creationDate": "2020-06-16T21:46:25.807+08:00", - "modificationDate": "2020-06-16T21:52:13.25+08:00", - "ledgers": [ - { - "ledgerId": 696, - "entries": 1001, - "size": 81695, - "isOffloaded": true - }, - { - "ledgerId": 697, - "isOffloaded": false - } - ], - "cursors": {} - } - - ``` - - And the **Capacity Used** is changed from 4 KB to 116.46 KB. - - ![](/assets/FileSystem-8.png) \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/tiered-storage-gcs.md b/site2/website/versioned_docs/version-2.8.0-deprecated/tiered-storage-gcs.md deleted file mode 100644 index 81e7c5c6e6a44b..00000000000000 --- a/site2/website/versioned_docs/version-2.8.0-deprecated/tiered-storage-gcs.md +++ /dev/null @@ -1,319 +0,0 @@ ---- -id: tiered-storage-gcs -title: Use GCS offloader with Pulsar -sidebar_label: "GCS offloader" -original_id: tiered-storage-gcs ---- - -This chapter guides you through every step of installing and configuring the GCS offloader and using it with Pulsar. - -## Installation - -Follow the steps below to install the GCS offloader. - -### Prerequisite - -- Pulsar: 2.4.2 or later versions - -### Step - -This example uses Pulsar 2.5.1. - -1. Download the Pulsar tarball using one of the following ways: - - * Download from the [Apache mirror](https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz) - - * Download from the Pulsar [download page](https://pulsar.apache.org/download) - - * Use [wget](https://www.gnu.org/software/wget) - - ```shell - - wget https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz - - ``` - -2. Download and untar the Pulsar offloaders package. - - ```bash - - wget https://downloads.apache.org/pulsar/pulsar-2.5.1/apache-pulsar-offloaders-2.5.1-bin.tar.gz - - tar xvfz apache-pulsar-offloaders-2.5.1-bin.tar.gz - - ``` - - :::note - - * If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's Pulsar directory. - * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8S and DCOS), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - - ::: - -3. Copy the Pulsar offloaders as `offloaders` in the Pulsar directory. 
-
-   ```
-
-   mv apache-pulsar-offloaders-2.5.1/offloaders apache-pulsar-2.5.1/offloaders
-
-   ls offloaders
-
-   ```
-
-   **Output**
-
-   As shown in the output, Pulsar uses [Apache jclouds](https://jclouds.apache.org) to support GCS and AWS S3 for long-term storage.
-
-   ```
-
-   tiered-storage-file-system-2.5.1.nar
-   tiered-storage-jcloud-2.5.1.nar
-
-   ```
-
-## Configuration
-
-:::note
-
-Before offloading data from BookKeeper to GCS, you need to configure some properties of the GCS offloader driver.
-
-:::
-
-In addition, you can configure the GCS offloader to run automatically or trigger it manually.
-
-### Configure GCS offloader driver
-
-You can configure the GCS offloader driver in the configuration file `broker.conf` or `standalone.conf`.
-
-- **Required** configurations are as below.
-
-  **Required** configuration | Description | Example value
-  |---|---|---
-  `managedLedgerOffloadDriver`|Offloader driver name, which is case-insensitive.|google-cloud-storage
-  `offloadersDirectory`|Offloader directory|offloaders
-  `gcsManagedLedgerOffloadBucket`|Bucket|pulsar-topic-offload
-  `gcsManagedLedgerOffloadRegion`|Bucket region|europe-west3
-  `gcsManagedLedgerOffloadServiceAccountKeyFile`|Authentication|/Users/user-name/Downloads/project-804d5e6a6f33.json
-
-- **Optional** configurations are as below.
-
-  Optional configuration|Description|Example value
-  |---|---|---
-  `gcsManagedLedgerOffloadReadBufferSizeInBytes`|Size of block read|1 MB
-  `gcsManagedLedgerOffloadMaxBlockSizeInBytes`|Size of block write|64 MB
-  `managedLedgerMinLedgerRolloverTimeMinutes`|Minimum time between ledger rollovers for a topic.|2
-  `managedLedgerMaxEntriesPerLedger`|The max number of entries to append to a ledger before triggering a rollover.|5000
-
-#### Bucket (required)
-
-A bucket is a basic container that holds your data. Everything you store in GCS **must** be contained in a bucket. You can use a bucket to organize your data and control access to your data, but unlike a directory or folder, you cannot nest buckets.
-
-##### Example
-
-This example names the bucket _pulsar-topic-offload_.
-
-```conf
-
-gcsManagedLedgerOffloadBucket=pulsar-topic-offload
-
-```
-
-#### Bucket region (required)
-
-Bucket region is the region where a bucket is located. If a bucket region is not specified, the **default** region (`us multi-regional location`) is used.
-
-:::tip
-
-For more information about bucket location, see [here](https://cloud.google.com/storage/docs/bucket-locations).
-
-:::
-
-##### Example
-
-This example sets the bucket region to _europe-west3_.
-
-```
-
-gcsManagedLedgerOffloadRegion=europe-west3
-
-```
-
-#### Authentication (required)
-
-To enable a broker to access GCS, you need to configure `gcsManagedLedgerOffloadServiceAccountKeyFile` in the configuration file `broker.conf`.
-
-`gcsManagedLedgerOffloadServiceAccountKeyFile` is a JSON file containing the GCS credentials of a service account.
-
-##### Example
-
-To generate service account credentials or view the public credentials that you've already generated, perform the following steps.
-
-1. Navigate to the [Service accounts page](https://console.developers.google.com/iam-admin/serviceaccounts).
-
-2. Select a project or create a new one.
-
-3. Click **Create service account**.
-
-4. In the **Create service account** window, type a name for the service account and select **Furnish a new private key**.
-
-   If you want to [grant G Suite domain-wide authority](https://developers.google.com/identity/protocols/OAuth2ServiceAccount#delegatingauthority) to the service account, select **Enable G Suite Domain-wide Delegation**.
-
-5. Click **Create**.
-
-   :::note
-
-   Make sure the service account you create has permission to operate GCS. You need to assign the **Storage Admin** permission to your service account, as described [here](https://cloud.google.com/storage/docs/access-control/iam).
-
-   :::
-
-6. Download the key file and set its path in `broker.conf`.
-
-   ```conf
-
-   gcsManagedLedgerOffloadServiceAccountKeyFile="/Users/user-name/Downloads/project-804d5e6a6f33.json"
-
-   ```
-
-   :::tip
-
-   - For more information about how to create `gcsManagedLedgerOffloadServiceAccountKeyFile`, see [here](https://support.google.com/googleapi/answer/6158849).
-   - For more information about Google Cloud IAM, see [here](https://cloud.google.com/storage/docs/access-control/iam).
-
-   :::
-
-#### Size of block read/write
-
-You can configure the size of a request sent to or read from GCS in the configuration file `broker.conf`.
-
-Configuration|Description
-|---|---
-`gcsManagedLedgerOffloadReadBufferSizeInBytes`|Block size for each individual read when reading back data from GCS.<br /><br />The **default** value is 1 MB.
-`gcsManagedLedgerOffloadMaxBlockSizeInBytes`|Maximum size of a "part" sent during a multipart upload to GCS.<br /><br />It **cannot** be smaller than 5 MB.<br /><br />The **default** value is 64 MB.
-
-### Configure GCS offloader to run automatically
-
-Namespace policy can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic reaches the threshold, an offload operation is triggered automatically.
-
-Threshold value|Action
-|---|---
-> 0|It triggers the offloading operation if the topic storage reaches its threshold.
-= 0|It causes a broker to offload data as soon as possible.
-< 0|It disables automatic offloading operation.
-
-Automatic offloading runs when a new segment is added to a topic log. If you set the threshold on a namespace but few messages are being produced to the topic, the offloader does not run until the current segment is full.
-
-You can configure the threshold size using CLI tools, such as pulsar-admin.
-
-The offload configurations in `broker.conf` and `standalone.conf` are used for the namespaces that do not have namespace-level offload policies. Each namespace can have its own offload policy. If you want to set an offload policy for a namespace, use the [`pulsar-admin namespaces set-offload-policies`](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-policies-em-) command.
-
-#### Example
-
-This example sets the GCS offloader threshold size to 10 MB using pulsar-admin.
-
-```bash
-
-pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace
-
-```
-
-:::tip
-
-For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#set-offload-threshold).
-
-:::
-
-### Configure GCS offloader to run manually
-
-For individual topics, you can trigger the GCS offloader manually using one of the following methods:
-
-- Use the REST endpoint.
-
-- Use CLI tools (such as pulsar-admin).
-
-  To trigger the GCS offloader via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are moved to GCS until the threshold is no longer exceeded. Older segments are moved first.
-
-#### Example
-
-- This example triggers the GCS offloader to run manually using pulsar-admin with the command `pulsar-admin topics offload (topic-name) (threshold)`.
-
-  ```bash
-
-  pulsar-admin topics offload persistent://my-tenant/my-namespace/topic1 10M
-
-  ```
-
-  **Output**
-
-  ```bash
-
-  Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1
-
-  ```
-
-  :::tip
-
-  For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#offload).
-
-  :::
-
-- This example checks the GCS offloader status using pulsar-admin with the command `pulsar-admin topics offload-status options`.
-
-  ```bash
-
-  pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1
-
-  ```
-
-  **Output**
-
-  ```bash
-
-  Offload is currently running
-
-  ```
-
-  To wait for GCS to complete the job, add the `-w` flag.
-
-  ```bash
-
-  pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1
-
-  ```
-
-  **Output**
-
-  ```
-
-  Offload was a success
-
-  ```
-
-  If there is an error in offloading, the error is propagated to the `pulsar-admin topics offload-status` command.
-
-  ```bash
-
-  pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1
-
-  ```
-
-  **Output**
-
-  ```
-
-  Error in offload
-  null
-
-  Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=
-
-  ```
-
-  :::tip
-
-  For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#offload-status).
-
-  :::
-
-## Tutorial
-
-For the complete and step-by-step instructions on how to use the GCS offloader with Pulsar, see [here](https://hub.streamnative.io/offloaders/gcs/2.5.1#usage).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/tiered-storage-overview.md b/site2/website/versioned_docs/version-2.8.0-deprecated/tiered-storage-overview.md
deleted file mode 100644
index c635034f463b46..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/tiered-storage-overview.md
+++ /dev/null
@@ -1,52 +0,0 @@
----
-id: tiered-storage-overview
-title: Overview of tiered storage
-sidebar_label: "Overview"
-original_id: tiered-storage-overview
----
-
-Pulsar's **Tiered Storage** feature allows older backlog data to be moved from BookKeeper to cheaper, long-term storage, while still allowing clients to access the backlog as if nothing has changed.
-
-* Tiered storage uses [Apache jclouds](https://jclouds.apache.org) to support [Amazon S3](https://aws.amazon.com/s3/) and [GCS (Google Cloud Storage)](https://cloud.google.com/storage/) for long-term storage.
-
-  With jclouds, it is easy to add support for more [cloud storage providers](https://jclouds.apache.org/reference/providers/#blobstore-providers) in the future.
-
-  :::tip
-
-  - For more information about how to use the AWS S3 offloader with Pulsar, see [here](tiered-storage-aws.md).
-
-  - For more information about how to use the GCS offloader with Pulsar, see [here](tiered-storage-gcs.md).
-
-  :::
-
-* Tiered storage uses [Apache Hadoop](http://hadoop.apache.org/) to support filesystems for long-term storage.
-
-  With Hadoop, it is easy to add support for more filesystems in the future.
-
-  :::tip
-
-  For more information about how to use the filesystem offloader with Pulsar, see [here](tiered-storage-filesystem.md).
-
-  :::
-
-## When to use tiered storage?
-
-Use tiered storage when you have a topic whose backlog you want to keep for a very long time.
-
-For example, if you have a topic containing user actions which you use to train your recommendation systems, you may want to keep that data for a long time, so that if you change your recommendation algorithm, you can rerun it against your full user history.
-
-## How does tiered storage work?
-
-A topic in Pulsar is backed by a **log**, known as a **managed ledger**. This log is composed of an ordered list of segments. Pulsar only writes to the final segment of the log. All previous segments are sealed. The data within the segments is immutable. This is known as a **segment-oriented architecture**.
-
-![Tiered storage](/assets/pulsar-tiered-storage.png "Tiered Storage")
-
-The tiered storage offloading mechanism takes advantage of this segment-oriented architecture. When offloading is requested, the segments of the log are copied one-by-one to tiered storage. All segments of the log (apart from the current segment) written to tiered storage can be offloaded.
-
-Data written to BookKeeper is replicated to 3 physical machines by default. However, once a segment is sealed in BookKeeper, it becomes immutable and can be copied to long-term storage. Long-term storage can achieve cost savings by using mechanisms such as [Reed-Solomon error correction](https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction) to require fewer physical copies of data.
-
-Before offloading ledgers to long-term storage, you need to configure buckets, credentials, and other properties for the cloud storage service. Additionally, Pulsar uses multi-part objects to upload the segment data, and brokers may crash while uploading the data. It is recommended that you add a life cycle rule for your bucket to expire incomplete multi-part uploads after a day or two to avoid getting charged for incomplete uploads. Moreover, you can trigger the offloading operation manually (via REST API or CLI) or automatically (via namespace-level policies).
-
-After offloading ledgers to long-term storage, you can still query data in the offloaded ledgers with Pulsar SQL.
-
-For more information about tiered storage for Pulsar topics, see [here](https://github.com/apache/pulsar/wiki/PIP-17:-Tiered-storage-for-Pulsar-topics).
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/transaction-api.md b/site2/website/versioned_docs/version-2.8.0-deprecated/transaction-api.md
deleted file mode 100644
index fedc314646c938..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/transaction-api.md
+++ /dev/null
@@ -1,172 +0,0 @@
----
-id: transactions-api
-title: Transactions API
-sidebar_label: "Transactions API"
-original_id: transactions-api
----
-
-All messages in a transaction are available only to consumers after the transaction has been committed. If a transaction has been aborted, all the writes and acknowledgments in this transaction roll back.
-
-## Prerequisites
-
-1. To enable transactions in Pulsar, you need to set the following parameter in the `broker.conf` or `standalone.conf` file.
-
-   ```
-
-   transactionCoordinatorEnabled=true
-
-   ```
-
-2. Initialize transaction coordinator metadata, so the transaction coordinators can leverage the advantages of the partitioned topic, such as load balancing.
-
-   ```
-
-   bin/pulsar initialize-transaction-coordinator-metadata -cs 127.0.0.1:2181 -c standalone
-
-   ```
-
-After initializing transaction coordinator metadata, you can use the transactions API. The following APIs are available.
-
-## Initialize Pulsar client
-
-Enable transactions when building the Pulsar client; this also initializes the transaction coordinator client.
-
-```
-
-PulsarClient pulsarClient = PulsarClient.builder()
-        .serviceUrl("pulsar://localhost:6650")
-        .enableTransaction(true)
-        .build();
-
-```
-
-## Start transactions
-
-You can start a transaction in the following way.
-
-```
-
-Transaction txn = pulsarClient
-        .newTransaction()
-        .withTransactionTimeout(5, TimeUnit.MINUTES)
-        .build()
-        .get();
-
-```
-
-## Produce transaction messages
-
-A transaction parameter is required when producing new transaction messages. The semantics of transaction messages in Pulsar are `read-committed`, so the consumer cannot receive the ongoing transaction messages before the transaction is committed.
-
-```
-
-producer.newMessage(txn).value("Hello Pulsar Transaction".getBytes()).sendAsync();
-
-```
-
-## Acknowledge the messages with the transaction
-
-Acknowledging with a transaction requires a transaction parameter. The transactional acknowledgment puts the message into the pending-ack state. When the transaction is committed, the pending-ack state becomes the acknowledged state. If the transaction is aborted, the message returns to the unacknowledged state.
-
-```
-
-Message<byte[]> message = consumer.receive();
-consumer.acknowledgeAsync(message.getMessageId(), txn);
-
-```
-
-## Commit transactions
-
-When the transaction is committed, the consumers receive the transaction messages, and the pending-ack messages become acknowledged.
-
-```
-
-txn.commit().get();
-
-```
-
-## Abort transaction
-
-When the transaction is aborted, the transactional acknowledgments are canceled and the pending-ack messages are redelivered.
-
-```
-
-txn.abort().get();
-
-```
-
-### Example
-
-The following example shows how messages are processed in a transaction.
-
-```
-
-PulsarClient pulsarClient = PulsarClient.builder()
-        .serviceUrl(getPulsarServiceList().get(0).getBrokerServiceUrl())
-        .statsInterval(0, TimeUnit.SECONDS)
-        .enableTransaction(true)
-        .build();
-
-String sourceTopic = "public/default/source-topic";
-String sinkTopic = "public/default/sink-topic";
-
-Producer<String> sourceProducer = pulsarClient
-        .newProducer(Schema.STRING)
-        .topic(sourceTopic)
-        .create();
-sourceProducer.newMessage().value("hello pulsar transaction").sendAsync();
-
-Consumer<String> sourceConsumer = pulsarClient
-        .newConsumer(Schema.STRING)
-        .topic(sourceTopic)
-        .subscriptionName("test")
-        .subscriptionType(SubscriptionType.Shared)
-        .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest)
-        .subscribe();
-
-Producer<String> sinkProducer = pulsarClient
-        .newProducer(Schema.STRING)
-        .topic(sinkTopic)
-        .create();
-
-Transaction txn = pulsarClient
-        .newTransaction()
-        .withTransactionTimeout(5, TimeUnit.MINUTES)
-        .build()
-        .get();
-
-// source message acknowledgement and sink message produce belong to one transaction,
-// they are combined into an atomic operation.
-Message<String> message = sourceConsumer.receive();
-sourceConsumer.acknowledgeAsync(message.getMessageId(), txn);
-sinkProducer.newMessage(txn).value("sink data").sendAsync();
-
-txn.commit().get();
-
-```
-
-## Enable batch messages in transactions
-
-To enable batch messages in transactions, you need to enable the batch index acknowledgement feature. Transactional acknowledgments check for batch index acknowledgment conflicts.
-
-To enable batch index acknowledgement, you need to set `acknowledgmentAtBatchIndexLevelEnabled` to `true` in the `broker.conf` or `standalone.conf` file.
-
-```
-
-acknowledgmentAtBatchIndexLevelEnabled=true
-
-```
-
-And then you need to call the `enableBatchIndexAcknowledgment(true)` method in the consumer builder.
-
-```
-
-Consumer<byte[]> sinkConsumer = pulsarClient
-        .newConsumer()
-        .topic(transferTopic)
-        .subscriptionName("sink-topic")
-        .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest)
-        .subscriptionType(SubscriptionType.Shared)
-        .enableBatchIndexAcknowledgment(true) // enable batch index acknowledgement
-        .subscribe();
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/transaction-guarantee.md b/site2/website/versioned_docs/version-2.8.0-deprecated/transaction-guarantee.md
deleted file mode 100644
index 9db2d254e159f6..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/transaction-guarantee.md
+++ /dev/null
@@ -1,17 +0,0 @@
----
-id: transactions-guarantee
-title: Transactions Guarantee
-sidebar_label: "Transactions Guarantee"
-original_id: transactions-guarantee
----
-
-Pulsar transactions support the following guarantees.
-
-## Atomic multi-partition writes and multi-subscription acknowledges
-Transactions enable atomic writes to multiple topics and partitions. A batch of messages in a transaction can be received from, produced to, and acknowledged by many partitions. All the operations involved in a transaction succeed or fail as a single unit.
-
-## Read transactional message
-All the messages in a transaction are available to consumers only after the transaction is committed.
-
-## Acknowledge transactional message
-A message is acknowledged successfully only once by a consumer under the subscription when acknowledging the message with the transaction ID.
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/txn-how.md b/site2/website/versioned_docs/version-2.8.0-deprecated/txn-how.md
deleted file mode 100644
index add072448aeb34..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/txn-how.md
+++ /dev/null
@@ -1,151 +0,0 @@
----
-id: txn-how
-title: How transactions work?
-sidebar_label: "How transactions work?"
-original_id: txn-how
----
-
-This section describes transaction components and how the components work together. For the complete design details, see [PIP-31: Transactional Streaming](https://docs.google.com/document/d/145VYp09JKTw9jAT-7yNyFU255FptB2_B2Fye100ZXDI/edit#heading=h.bm5ainqxosrx).
-
-## Key concepts
-
-It is important to know the following key concepts, as they are prerequisites for understanding how transactions work.
-
-### Transaction coordinator
-
-The transaction coordinator (TC) is a module running inside a Pulsar broker.
-
-* It maintains the entire life cycle of transactions and prevents a transaction from getting into an incorrect status.
-
-* It handles transaction timeouts and ensures that the transaction is aborted after a transaction timeout.
-
-### Transaction log
-
-All the transaction metadata persists in the transaction log. The transaction log is backed by a Pulsar topic. If the transaction coordinator crashes, it can restore the transaction metadata from the transaction log.
-
-The transaction log stores the transaction status rather than the actual messages in the transaction (the actual messages are stored in the actual topic partitions).
-
-### Transaction buffer
-
-Messages produced to a topic partition within a transaction are stored in the transaction buffer (TB) of that topic partition. The messages in the transaction buffer are not visible to consumers until the transactions are committed. The messages in the transaction buffer are discarded when the transactions are aborted.
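-
-From the client's point of view, this read-committed behavior can be observed directly. The following is a minimal sketch, assuming transactions are enabled and a transactional `pulsarClient`, a `producer`, and a `consumer` already exist, as in the transactions API examples above; the value and the timeouts are illustrative.
-
-```java
-
-// Sketch: a message produced within a transaction stays in the transaction
-// buffer and is invisible to consumers until the transaction commits.
-Transaction txn = pulsarClient.newTransaction()
-        .withTransactionTimeout(5, TimeUnit.MINUTES)
-        .build()
-        .get();
-
-producer.newMessage(txn).value("buffered".getBytes()).send();
-
-// Before the commit, this receive times out and returns null.
-Message<byte[]> before = consumer.receive(2, TimeUnit.SECONDS);
-
-txn.commit().get();
-
-// After the commit, the buffered message is materialized to the consumer.
-Message<byte[]> after = consumer.receive(2, TimeUnit.SECONDS);
-
-```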
-
-The transaction buffer stores all ongoing and aborted transactions in memory. All messages are sent to the actual partitioned Pulsar topics. After transactions are committed, the messages in the transaction buffer are materialized (visible) to consumers. When the transactions are aborted, the messages in the transaction buffer are discarded.
-
-### Transaction ID
-
-A transaction ID (TxnID) identifies a unique transaction in Pulsar. The transaction ID is 128-bit. The highest 16 bits are reserved for the ID of the transaction coordinator, and the remaining bits are used for monotonically increasing numbers in each transaction coordinator. The TxnID makes it easy to trace a transaction when a failure occurs.
-
-### Pending acknowledge state
-
-The pending acknowledge state maintains message acknowledgments within a transaction before the transaction completes. If a message is in the pending acknowledge state, the message cannot be acknowledged by other transactions until the message is removed from the pending acknowledge state.
-
-The pending acknowledge state is persisted to the pending acknowledge log (cursor ledger). A new broker can restore the state from the pending acknowledge log to ensure the acknowledgement is not lost.
-
-## Data flow
-
-At a high level, the data flow can be split into several steps:
-
-1. Begin a transaction.
-
-2. Publish messages with a transaction.
-
-3. Acknowledge messages with a transaction.
-
-4. End a transaction.
-
-To help you debug or tune the transaction for better performance, review the following diagrams and descriptions.
-
-### 1. Begin a transaction
-
-Before transactions were introduced in Pulsar, a producer simply created messages that were sent to brokers and stored in data logs.
-
-![](/assets/txn-3.png)
-
-Let’s walk through the steps for _beginning a transaction_.
-
-| Step | Description |
-| --- | --- |
-| 1.1 | The Pulsar client finds the transaction coordinator. |
-| 1.2 | The transaction coordinator allocates a transaction ID for the transaction. In the transaction log, the transaction is logged with its transaction ID and status (OPEN), which ensures the transaction status is persisted regardless of transaction coordinator crashes. |
-| 1.3 | The transaction log sends the result of persisting the transaction ID to the transaction coordinator. |
-| 1.4 | After the transaction status entry is logged, the transaction coordinator brings the transaction ID back to the Pulsar client. |
-
-### 2. Publish messages with a transaction
-
-In this stage, the Pulsar client enters a transaction loop, repeating the `consume-process-produce` operation for all the messages that comprise the transaction. This is a long phase and is potentially composed of multiple produce and acknowledgement requests.
-
-![](/assets/txn-4.png)
-
-Let’s walk through the steps for _publishing messages with a transaction_.
-
-| Step | Description |
-| --- | --- |
-| 2.1.1 | Before the Pulsar client produces messages to a new topic partition, it sends a request to the transaction coordinator to add the partition to the transaction. |
-| 2.1.2 | The transaction coordinator logs the partition changes of the transaction into the transaction log for durability, which ensures the transaction coordinator knows all the partitions that a transaction is handling. The transaction coordinator can commit or abort changes on each partition at the end-partition phase. |
-| 2.1.3 | The transaction log sends the result of logging the new partition (used for producing messages) to the transaction coordinator. |
-| 2.1.4 | The transaction coordinator sends the result of adding a new produced partition to the transaction. |
-| 2.2.1 | The Pulsar client starts producing messages to partitions. The flow of this part is the same as the normal flow of producing messages except that the batch of messages produced by a transaction contains transaction IDs. |
-| 2.2.2 | The broker writes messages to a partition. |
-
-### 3. Acknowledge messages with a transaction
-
-In this phase, the Pulsar client sends a request to the transaction coordinator so that a new subscription is acknowledged as a part of a transaction.
-
-![](/assets/txn-5.png)
-
-Let’s walk through the steps for _acknowledging messages with a transaction_.
-
-| Step | Description |
-| --- | --- |
-| 3.1.1 | The Pulsar client sends a request to add an acknowledged subscription to the transaction coordinator. |
-| 3.1.2 | The transaction coordinator logs the addition of the subscription, which ensures that it knows all subscriptions handled by a transaction and can commit or abort changes on each subscription at the end phase. |
-| 3.1.3 | The transaction log sends the result of logging the new partition (used for acknowledging messages) to the transaction coordinator. |
-| 3.1.4 | The transaction coordinator sends the result of adding the new acknowledged partition to the transaction. |
-| 3.2 | The Pulsar client acknowledges messages on the subscription. The flow of this part is the same as the normal flow of acknowledging messages except that the acknowledgement request carries a transaction ID. |
-| 3.3 | The broker receiving the acknowledgement request checks whether the acknowledgment belongs to a transaction. |
-
-### 4. End a transaction
-
-At the end of a transaction, the Pulsar client decides to commit or abort the transaction. The transaction can be aborted when a conflict is detected on acknowledging messages.
-
-#### 4.1 End transaction request
-
-When the Pulsar client finishes a transaction, it issues an end transaction request.
-
-![](/assets/txn-6.png)
-
-Let’s walk through the steps for _ending the transaction_.
-
-| Step | Description |
-| --- | --- |
-| 4.1.1 | The Pulsar client issues an end transaction request (with a field indicating whether the transaction is to be committed or aborted) to the transaction coordinator. |
-| 4.1.2 | The transaction coordinator writes a COMMITTING or ABORTING message to its transaction log. |
-| 4.1.3 | The transaction log sends the result of logging the committing or aborting status. |
-
-#### 4.2 Finalize a transaction
-
-The transaction coordinator starts the process of committing or aborting messages to all the partitions involved in this transaction.
-
-![](/assets/txn-7.png)
-
-Let’s walk through the steps for _finalizing a transaction_.
-
-| Step | Description |
-| --- | --- |
-| 4.2.1 | The transaction coordinator commits transactions on subscriptions and commits transactions on partitions at the same time. |
-| 4.2.2 | The broker (produce) writes produced committed markers to the actual partitions. At the same time, the broker (ack) writes acked committed markers to the subscription pending-ack partitions. |
-| 4.2.3 | The data log sends the result of writing produced committed markers to the broker. At the same time, the pending-ack data log sends the result of writing acked committed markers to the broker. The cursor moves to the next position. |
-
-#### 4.3 Mark a transaction as COMMITTED or ABORTED
-
-The transaction coordinator writes the final transaction status to the transaction log to complete the transaction.
-
-![](/assets/txn-8.png)
-
-Let’s walk through the steps for _marking a transaction as COMMITTED or ABORTED_.
-
-| Step | Description |
-| --- | --- |
-| 4.3.1 | After all produced messages and acknowledgements to all partitions involved in this transaction have been successfully committed or aborted, the transaction coordinator writes the final COMMITTED or ABORTED transaction status messages to its transaction log, indicating that the transaction is complete. All the messages associated with the transaction in its transaction log can be safely removed. |
-| 4.3.2 | The transaction log sends the result of the committed transaction to the transaction coordinator. |
-| 4.3.3 | The transaction coordinator sends the result of the committed transaction to the Pulsar client. |
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/txn-monitor.md b/site2/website/versioned_docs/version-2.8.0-deprecated/txn-monitor.md
deleted file mode 100644
index 5b50953772d092..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/txn-monitor.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-id: txn-monitor
-title: How to monitor transactions?
-sidebar_label: "How to monitor transactions?"
-original_id: txn-monitor
----
-
-You can monitor the status of the transactions in Prometheus and Grafana using the [transaction metrics](https://pulsar.apache.org/docs/en/next/reference-metrics/#pulsar-transaction).
-
-For how to configure Prometheus and Grafana, see [here](https://pulsar.apache.org/docs/en/next/deploy-monitoring).
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/txn-use.md b/site2/website/versioned_docs/version-2.8.0-deprecated/txn-use.md
deleted file mode 100644
index de0e4a92f1b27e..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/txn-use.md
+++ /dev/null
@@ -1,105 +0,0 @@
----
-id: txn-use
-title: How to use transactions?
-sidebar_label: "How to use transactions?"
-original_id: txn-use
----
-
-## Transaction API
-
-The transaction feature is primarily a server-side and protocol-level feature. You can use the transaction feature via the [transaction API](https://pulsar.apache.org/api/admin/), which is available in **Pulsar 2.8.0 or later**.
-
-To use the transaction API, you do not need any additional settings in the Pulsar client. **By default**, transactions are **disabled**.
-
-Currently, the transaction API is only available for **Java** clients. Support for other language clients will be added in future releases.
-
-## Quick start
-
-This section provides an example of how to use the transaction API to send and receive messages in a Java client.
-
-1. Start Pulsar 2.8.0 or later.
-
-2. Enable transactions.
-
-   Change the configuration in the `broker.conf` file.
-
-   ```
-
-   transactionCoordinatorEnabled=true
-
-   ```
-
-   If you want to enable batch messages in transactions, follow the steps below.
-
-   Set `acknowledgmentAtBatchIndexLevelEnabled` to `true` in the `broker.conf` or `standalone.conf` file.
-
-   ```
-
-   acknowledgmentAtBatchIndexLevelEnabled=true
-
-   ```
-
-3. Initialize transaction coordinator metadata.
-
-   The transaction coordinator can leverage the advantages of partitioned topics (such as load balancing).
-
-   **Input**
-
-   ```
-
-   bin/pulsar initialize-transaction-coordinator-metadata -cs 127.0.0.1:2181 -c standalone
-
-   ```
-
-   **Output**
-
-   ```
-
-   Transaction coordinator metadata setup success
-
-   ```
-
-4. Initialize a Pulsar client.
-
-   ```
-
-   PulsarClient client = PulsarClient.builder()
-       .serviceUrl("pulsar://localhost:6650")
-       .enableTransaction(true)
-       .build();
-
-   ```
-
-Now you can start using the transaction API to send and receive messages. Below is an example of a `consume-process-produce` application written in Java.
-
-![](/assets/txn-9.png)
-
-Let’s walk through this example step by step.
-
-| Step | Description |
-| --- | --- |
-| 1. Start a transaction. | The application opens a new transaction by calling `PulsarClient.newTransaction`. It specifies the transaction timeout as 1 minute. If the transaction is not committed within 1 minute, the transaction is automatically aborted. |
-| 2. Receive messages from topics. | The application creates two normal consumers to receive messages from the topics _input-topic-1_ and _input-topic-2_ respectively. |
-| 3. Publish messages to topics with the transaction. | The application creates two producers to produce the resulting messages to the output topics _output-topic-1_ and _output-topic-2_ respectively. The application applies the processing logic and generates two output messages. The application sends those two output messages as part of the transaction opened in the first step via `Producer.newMessage(Transaction)`. |
-| 4. Acknowledge the messages with the transaction. | In the same transaction, the application acknowledges the two input messages. |
-| 5. Commit the transaction. | The application commits the transaction by calling `Transaction.commit()` on the open transaction. The commit operation ensures the two input messages are marked as acknowledged and the two output messages are written successfully to the output topics. |
-
-[1] Example of enabling batch message acknowledgments in transactions in the consumer builder.
-
-```
-
-Consumer<byte[]> sinkConsumer = pulsarClient
-    .newConsumer()
-    .topic(transferTopic)
-    .subscriptionName("sink-topic")
-    .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest)
-    .subscriptionType(SubscriptionType.Shared)
-    .enableBatchIndexAcknowledgment(true) // enable batch index acknowledgement
-    .subscribe();
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/txn-what.md b/site2/website/versioned_docs/version-2.8.0-deprecated/txn-what.md
deleted file mode 100644
index 844f19a700f8f0..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/txn-what.md
+++ /dev/null
@@ -1,60 +0,0 @@
----
-id: txn-what
-title: What are transactions?
-sidebar_label: "What are transactions?"
-original_id: txn-what
----
-
-Transactions strengthen the message delivery semantics of Apache Pulsar and [processing guarantees of Pulsar Functions](https://pulsar.apache.org/docs/en/next/functions-overview/#processing-guarantees). The Pulsar Transaction API supports atomic writes and acknowledgments across multiple topics.
-
-Transactions allow:
-
-- A producer to send a batch of messages to multiple topics where all messages in the batch are eventually visible to any consumer, or none are ever visible to consumers.
-
-- End-to-end exactly-once semantics (execute a `consume-process-produce` operation exactly once).
-
-## Transaction semantics
-
-Pulsar transactions have the following semantics:
-
-* All operations within a transaction are committed as a single unit.
-
-  * Either all messages are committed, or none of them are.
-
-  * Each message is written or processed exactly once, without data loss or duplicates (even in the event of failures).
-
-  * If a transaction is aborted, all the writes and acknowledgments in this transaction roll back.
-
-* A group of messages in a transaction can be received from, produced to, and acknowledged by multiple partitions.
-
-  * Consumers are only allowed to read committed (acked) messages. In other words, the broker does not deliver transactional messages which are part of an open transaction or messages which are part of an aborted transaction.
-
-  * Message writes across multiple partitions are atomic.
-
-  * Message acks across multiple subscriptions are atomic. A message is acked successfully only once by a consumer under the subscription when acknowledging the message with the transaction ID.
-
-## Transactions and stream processing
-
-Stream processing on Pulsar is a `consume-process-produce` operation on Pulsar topics:
-
-* `Consume`: a source operator that runs a Pulsar consumer reads messages from one or multiple Pulsar topics.
-
-* `Process`: a processing operator transforms the messages.
-
-* `Produce`: a sink operator that runs a Pulsar producer writes the resulting messages to one or multiple Pulsar topics.
-
-![](/assets/txn-2.png)
-
-Pulsar transactions support end-to-end exactly-once stream processing, which means messages are not lost from a source operator and messages are not duplicated to a sink operator.
-
-## Use case
-
-Prior to Pulsar 2.8.0, there was no easy way to build stream processing applications with Pulsar to achieve exactly-once processing guarantees. With transactions introduced in Pulsar 2.8.0, the following services support exactly-once semantics:
-
-* [Pulsar Flink connector](https://flink.apache.org/2021/01/07/pulsar-flink-connector-270.html)
-
-  Prior to Pulsar 2.8.0, if you wanted to build stream applications using Pulsar and Flink, the Pulsar Flink connector only supported an exactly-once source and an at-least-once sink, which means the highest end-to-end processing guarantee was at-least-once: a streaming application could produce duplicate messages to the resulting topics in Pulsar.
-
-  With transactions introduced in Pulsar 2.8.0, the Pulsar Flink sink connector can support exactly-once semantics by implementing the designated `TwoPhaseCommitSinkFunction` and hooking up the Flink sink message lifecycle with the Pulsar transaction API.
-
-* Support for Pulsar Functions and other connectors will be added in future releases.
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/txn-why.md b/site2/website/versioned_docs/version-2.8.0-deprecated/txn-why.md
deleted file mode 100644
index 1ed8769977654e..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/txn-why.md
+++ /dev/null
@@ -1,45 +0,0 @@
----
-id: txn-why
-title: Why transactions?
-sidebar_label: "Why transactions?"
-original_id: txn-why
----
-
-Pulsar transactions (txn) enable event streaming applications to consume, process, and produce messages in one atomic operation. The reasons for developing this feature are summarized below.
-
-## Demand for stream processing
-
-The demand for stream processing applications with stronger processing guarantees has grown along with the rise of stream processing.
For example, in the financial industry, financial institutions use stream processing engines to process debits and credits for users. This type of use case requires that every message is processed exactly once, without exception.
-
-In other words, if a stream processing application consumes message A and
-produces the result as a message B (B = f(A)), then the exactly-once processing
-guarantee means that A is marked as consumed if and only if B is
-successfully produced, and vice versa.
-
-![](/assets/txn-1.png)
-
-The Pulsar transactions API strengthens the message delivery semantics and the processing guarantees for stream processing. It enables stream processing applications to consume, process, and produce messages in one atomic operation. That means a batch of messages in a transaction can be received from, produced to, and acknowledged by many topic partitions. All the operations involved in a transaction succeed or fail as one single unit.
-
-## Limitation of idempotent producer
-
-Avoiding data loss or duplication can be achieved by using the Pulsar idempotent producer, but it does not provide guarantees for writes across multiple partitions.
-
-In Pulsar, the highest level of message delivery guarantee is using an [idempotent producer](https://pulsar.apache.org/docs/en/next/concepts-messaging/#producer-idempotency) with exactly-once semantics on a single partition, that is, each message is persisted exactly once without data loss and duplication. However, there are some limitations in this solution:
-
-- Due to the monotonically increasing sequence ID, this solution only works on a single partition and within a single producer session (that is, for producing one message), so there is no atomicity when producing multiple messages to one or multiple partitions.
-
-  In this case, if there are some failures (for example, client / broker / bookie crashes, network failure, and more) in the process of producing and receiving messages, messages are re-processed and re-delivered, which may cause data loss or data duplication:
-
-  - For the producer: if the producer retries sending messages, some messages are persisted multiple times; if the producer does not retry sending messages, some messages are persisted once and other messages are lost.
-
-  - For the consumer: since the consumer does not know whether the broker has received messages or not, the consumer may not retry sending acks, which causes it to receive duplicate messages.
-
-- Similarly, for Pulsar Functions, it only guarantees exactly-once semantics for an idempotent function on a single event rather than processing multiple events or producing multiple results that can happen exactly once.
-
-  For example, if a function accepts multiple events and produces one result (for example, a window function), the function may fail between producing the result and acknowledging the incoming messages, or even between acknowledging individual events, which causes all (or some) incoming messages to be re-delivered and reprocessed, and a new result to be generated.
-
-  However, many scenarios need atomic guarantees across multiple partitions and sessions.
-
-- Consumers need to rely on more mechanisms to acknowledge (ack) messages once.
-
-  For example, consumers are required to store the MessageID along with its acked state. After the topic is unloaded, the subscription can recover the acked state of this MessageID in memory when the topic is loaded again.
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.8.0-deprecated/window-functions-context.md b/site2/website/versioned_docs/version-2.8.0-deprecated/window-functions-context.md
deleted file mode 100644
index f80fea57989ef0..00000000000000
--- a/site2/website/versioned_docs/version-2.8.0-deprecated/window-functions-context.md
+++ /dev/null
@@ -1,581 +0,0 @@
----
-id: window-functions-context
-title: Window Functions Context
-sidebar_label: "Window Functions: Context"
-original_id: window-functions-context
----
-
-The Java SDK provides access to a **window context object** that can be used by a window function. This context object provides a wide variety of information and functionality for Pulsar window functions as below.
-
-- [Spec](#spec)
-
-  * Names of all input topics and the output topic associated with the function.
-  * Tenant and namespace associated with the function.
-  * Pulsar window function name, ID, and version.
-  * ID of the Pulsar function instance running the window function.
-  * Number of instances that invoke the window function.
-  * Built-in type or custom class name of the output schema.
-
-- [Logger](#logger)
-
-  * Logger object used by the window function, which can be used to create window function log messages.
-
-- [User config](#user-config)
-
-  * Access to arbitrary user configuration values.
-
-- [Routing](#routing)
-
-  * Routing is supported in Pulsar window functions. Pulsar window functions send messages to arbitrary topics as per the `publish` interface.
-
-- [Metrics](#metrics)
-
-  * Interface for recording metrics.
-
-- [State storage](#state-storage)
-
-  * Interface for storing and retrieving state in [state storage](#state-storage).
-
-## Spec
-
-Spec contains the basic information of a function.
-
-### Get input topics
-
-The `getInputTopics` method gets the **name list** of all input topics.
-
-This example demonstrates how to get the name list of all input topics in a Java window function.
-
-```java
-
-public class GetInputTopicsWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        Collection<String> inputTopics = context.getInputTopics();
-        System.out.println(inputTopics);
-
-        return null;
-    }
-
-}
-
-```
-
-### Get output topic
-
-The `getOutputTopic` method gets the **name of a topic** to which the message is sent.
-
-This example demonstrates how to get the name of an output topic in a Java window function.
-
-```java
-
-public class GetOutputTopicWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        String outputTopic = context.getOutputTopic();
-        System.out.println(outputTopic);
-
-        return null;
-    }
-}
-
-```
-
-### Get tenant
-
-The `getTenant` method gets the tenant name associated with the window function.
-
-This example demonstrates how to get the tenant name in a Java window function.
-
-```java
-
-public class GetTenantWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        String tenant = context.getTenant();
-        System.out.println(tenant);
-
-        return null;
-    }
-
-}
-
-```
-
-### Get namespace
-
-The `getNamespace` method gets the namespace associated with the window function.
-
-This example demonstrates how to get the namespace in a Java window function.
-
-```java
-
-public class GetNamespaceWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        String ns = context.getNamespace();
-        System.out.println(ns);
-
-        return null;
-    }
-
-}
-
-```
-
-### Get function name
-
-The `getFunctionName` method gets the window function name.
-
-This example demonstrates how to get the function name in a Java window function.
-
-```java
-
-public class GetNameOfWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        String functionName = context.getFunctionName();
-        System.out.println(functionName);
-
-        return null;
-    }
-
-}
-
-```
-
-### Get function ID
-
-The `getFunctionId` method gets the window function ID.
-
-This example demonstrates how to get the function ID in a Java window function.
-
-```java
-
-public class GetFunctionIDWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        String functionID = context.getFunctionId();
-        System.out.println(functionID);
-
-        return null;
-    }
-
-}
-
-```
-
-### Get function version
-
-The `getFunctionVersion` method gets the window function version.
-
-This example demonstrates how to get the function version of a Java window function.
-
-```java
-
-public class GetVersionOfWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        String functionVersion = context.getFunctionVersion();
-        System.out.println(functionVersion);
-
-        return null;
-    }
-
-}
-
-```
-
-### Get instance ID
-
-The `getInstanceId` method gets the instance ID of a window function.
-
-This example demonstrates how to get the instance ID in a Java window function.
-
-```java
-
-public class GetInstanceIDWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        int instanceId = context.getInstanceId();
-        System.out.println(instanceId);
-
-        return null;
-    }
-
-}
-
-```
-
-### Get num instances
-
-The `getNumInstances` method gets the number of instances that invoke the window function.
-
-This example demonstrates how to get the number of instances in a Java window function.
-
-```java
-
-public class GetNumInstancesWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        int numInstances = context.getNumInstances();
-        System.out.println(numInstances);
-
-        return null;
-    }
-
-}
-
-```
-
-### Get output schema type
-
-The `getOutputSchemaType` method gets the built-in type or custom class name of the output schema.
-
-This example demonstrates how to get the output schema type of a Java window function.
-
-```java
-
-public class GetOutputSchemaTypeWindowFunction implements WindowFunction<String, Void> {
-
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        String schemaType = context.getOutputSchemaType();
-        System.out.println(schemaType);
-
-        return null;
-    }
-}
-
-```
-
-## Logger
-
-Pulsar window functions using the Java SDK have access to an [SLF4j](https://www.slf4j.org/) [`Logger`](https://www.slf4j.org/api/org/apache/log4j/Logger.html) object that can be used to produce logs at the chosen log level.
-
-This example uses the context logger to log each record that arrives in the window at `INFO` level in a Java window function.
-
-```java
-
-import java.util.Collection;
-import org.apache.pulsar.functions.api.Record;
-import org.apache.pulsar.functions.api.WindowContext;
-import org.apache.pulsar.functions.api.WindowFunction;
-import org.slf4j.Logger;
-
-public class LoggingWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        Logger log = context.getLogger();
-        for (Record<String> record : inputs) {
-            log.info(record + "-window-log");
-        }
-        return null;
-    }
-
-}
-
-```
-
-If you need your function to produce logs, specify a log topic when creating or running the function.
-
-```bash
-
-bin/pulsar-admin functions create \
-  --jar my-functions.jar \
-  --classname my.package.LoggingFunction \
-  --log-topic persistent://public/default/logging-function-logs \
-  # Other function configs
-
-```
-
-You can access all logs produced by `LoggingFunction` via the `persistent://public/default/logging-function-logs` topic.
-
-## Metrics
-
-Pulsar window functions can publish arbitrary metrics to the metrics interface, which can be queried.
-
-:::note
-
-If a Pulsar window function uses the language-native interface for Java, that function is not able to publish metrics and stats to Pulsar.
-
-:::
-
-You can record metrics using the context object on a per-key basis.
-
-This example records the event time of each message under the `MessageEventTime` key every time the function processes a message in a Java window function.
-
-```java
-
-import java.util.Collection;
-import org.apache.pulsar.functions.api.Record;
-import org.apache.pulsar.functions.api.WindowContext;
-import org.apache.pulsar.functions.api.WindowFunction;
-
-
-/**
- * Example function that wants to keep track of
- * the event time of each message sent.
- */
-public class UserMetricWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-
-        for (Record<String> record : inputs) {
-            if (record.getEventTime().isPresent()) {
-                context.recordMetric("MessageEventTime", record.getEventTime().get().doubleValue());
-            }
-        }
-
-        return null;
-    }
-}
-
-```
-
-## User config
-
-When you run or update Pulsar Functions that are created using the SDK, you can pass arbitrary key/value pairs to them with the `--user-config` flag. Key/value pairs **must** be specified as JSON.
-
-This example passes a user-configured key/value to a function.
-
-```bash
-
-bin/pulsar-admin functions create \
-  --name word-filter \
-  --user-config '{"forbidden-word":"rosebud"}' \
-  # Other function configs
-
-```
-
-### API
-
-You can use the following APIs to get user-defined information for window functions.
-
-#### getUserConfigMap
-
-The `getUserConfigMap` API gets a map of all user-defined key/value configurations for the window function.
-
-```java
-
-/**
- * Get a map of all user-defined key/value configs for the function.
- *
- * @return The full map of user-defined config values
- */
-Map<String, Object> getUserConfigMap();
-
-```
-
-#### getUserConfigValue
-
-The `getUserConfigValue` API gets a user-defined key/value.
-
-```java
-
-/**
- * Get any user-defined key/value.
- *
- * @param key The key
- * @return The Optional value specified by the user for that key.
- */
-Optional<Object> getUserConfigValue(String key);
-
-```
-
-#### getUserConfigValueOrDefault
-
-The `getUserConfigValueOrDefault` API gets a user-defined key/value or a default value if none is present.
-
-```java
-
-/**
- * Get any user-defined key/value or a default value if none is present.
- *
- * @param key
- * @param defaultValue
- * @return Either the user config value associated with a given key or a supplied default value
- */
-Object getUserConfigValueOrDefault(String key, Object defaultValue);
-
-```
-
-The Java SDK context object enables you to access key/value pairs provided to Pulsar window functions via the command line (as JSON).
-
-:::tip
-
-For all key/value pairs passed to Java window functions, both the `key` and the `value` are `String`. To set the value to be a different type, you need to deserialize it from the `String` type.
-
-:::
-
-This example passes a key/value pair to a Java window function.
-
-```bash
-
-bin/pulsar-admin functions create \
-  --user-config '{"word-of-the-day":"verdure"}' \
-  # Other function configs
-
-```
-
-This example accesses values in a Java window function.
-
-The `UserConfigWindowFunction` below returns the configured value of the `WhatToWrite` key for every window it processes. The value changes **only** when the function is updated with a new config value, for example via the command-line tool or the REST API.
-
-```java
-
-import java.util.Collection;
-import java.util.Optional;
-
-import org.apache.pulsar.functions.api.Record;
-import org.apache.pulsar.functions.api.WindowContext;
-import org.apache.pulsar.functions.api.WindowFunction;
-
-public class UserConfigWindowFunction implements WindowFunction<String, String> {
-    @Override
-    public String process(Collection<Record<String>> input, WindowContext context) throws Exception {
-        Optional<Object> whatToWrite = context.getUserConfigValue("WhatToWrite");
-        if (whatToWrite.get() != null) {
-            return (String) whatToWrite.get();
-        } else {
-            return "Not a nice way";
-        }
-    }
-
-}
-
-```
-
-You can also access the entire user config map, or supply a default value to use when a key is missing.
-
-```java
-
-// Get the whole config map
-Map<String, Object> allConfigs = context.getUserConfigMap();
-
-// Get value or resort to default
-String wotd = (String) context.getUserConfigValueOrDefault("word-of-the-day", "perspicacious");
-
-```
-
-## Routing
-
-You can use the `context.publish()` interface to publish as many results as you want.
-
-This example shows how the `PublishWindowFunction` class uses the built-in `publish` capability of the context to publish messages to the topic configured by the `publish-topic` user config in a Java window function.
-
-```java
-
-public class PublishWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> input, WindowContext context) throws Exception {
-        String publishTopic = (String) context.getUserConfigValueOrDefault("publish-topic", "publishtopic");
-        String output = String.format("%s!", input);
-        context.publish(publishTopic, output);
-
-        return null;
-    }
-
-}
-
-```
-
-## State storage
-
-Pulsar window functions use [Apache BookKeeper](https://bookkeeper.apache.org) as a state storage interface. An Apache Pulsar installation (including the standalone installation) includes the deployment of BookKeeper bookies.
-
-Apache Pulsar integrates with the Apache BookKeeper `table service` to store the `state` for functions. For example, the `WordCount` function can store its `counters` state into the BookKeeper table service via the Pulsar Functions state APIs.
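-
-State stored this way can also be inspected from outside the function through the Pulsar admin API. The following is a minimal sketch, assuming a function named `word-count` running in the `public/default` namespace and an admin service at `localhost:8080`; all of these names, and the `pulsar` key, are illustrative.
-
-```java
-
-import org.apache.pulsar.client.admin.PulsarAdmin;
-import org.apache.pulsar.common.functions.FunctionState;
-
-public class QueryWordCountState {
-    public static void main(String[] args) throws Exception {
-        PulsarAdmin admin = PulsarAdmin.builder()
-                .serviceHttpUrl("http://localhost:8080")
-                .build();
-
-        // Read the state entry that the function stored under the key "pulsar".
-        FunctionState state = admin.functions()
-                .getFunctionState("public", "default", "word-count", "pulsar");
-        System.out.println(state.getNumberValue());
-
-        admin.close();
-    }
-}
-
-```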
-
-States are key-value pairs, where the key is a string and the value is arbitrary binary data; counters are stored as 64-bit big-endian binary values. Keys are scoped to an individual Pulsar Function and shared between instances of that function.
-
-Currently, Pulsar window functions expose a Java API to access, update, and manage states. These APIs are available in the context object when you use Java SDK functions.
-
-| Java API| Description
-|---|---
-|`incrCounter`|Increases a built-in distributed counter referred to by the key.
-|`getCounter`|Gets the counter value for the key.
-|`putState`|Updates the state value for the key.
-
-You can use the following APIs to access, update, and manage states in Java window functions.
-
-#### incrCounter
-
-The `incrCounter` API increases a built-in distributed counter referred to by the key.
-
-Applications use the `incrCounter` API to change the counter of a given `key` by the given `amount`. If the `key` does not exist, a new key is created.
-
-```java
-
-    /**
-     * Increment the builtin distributed counter referred to by key
-     * @param key The name of the key
-     * @param amount The amount to be incremented
-     */
-    void incrCounter(String key, long amount);
-
-```
-
-#### getCounter
-
-The `getCounter` API gets the counter value for the key.
-
-Applications use the `getCounter` API to retrieve the counter of a given `key` changed by the `incrCounter` API.
-
-```java
-
-    /**
-     * Retrieve the counter value for the key.
-     *
-     * @param key name of the key
-     * @return the amount of the counter value for this key
-     */
-    long getCounter(String key);
-
-```
-
-In addition to the `getCounter` API, Pulsar also exposes a general key/value API (`putState`) for functions to store arbitrary key/value state.
-
-#### putState
-
-The `putState` API updates the state value for the key.
-
-```java
-
-    /**
-     * Update the state value for the key.
-     *
-     * @param key name of the key
-     * @param value state value of the key
-     */
-    void putState(String key, ByteBuffer value);
-
-```
-
-This example demonstrates how applications store states in Pulsar window functions.
-
-The logic of the `WordCountWindowFunction` is straightforward.
-
-1. The function first splits the received string into tokens using the regex `\\.`.
-
-2. For each `word`, the function increments the corresponding `counter` by 1 via `incrCounter(key, amount)`.
-
-```java
-
-import java.util.Arrays;
-import java.util.Collection;
-
-import org.apache.pulsar.functions.api.Record;
-import org.apache.pulsar.functions.api.WindowContext;
-import org.apache.pulsar.functions.api.WindowFunction;
-
-public class WordCountWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        for (Record<String> input : inputs) {
-            Arrays.asList(input.getValue().split("\\.")).forEach(word -> context.incrCounter(word, 1));
-        }
-        return null;
-
-    }
-}
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/about.md b/site2/website/versioned_docs/version-2.8.1-deprecated/about.md
deleted file mode 100644
index 7190af6346de7e..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/about.md
+++ /dev/null
@@ -1,56 +0,0 @@
----
-slug: /
-id: about
-title: Welcome to the doc portal!
-sidebar_label: "About"
----
-
-import BlockLinks from "@site/src/components/BlockLinks";
-import BlockLink from "@site/src/components/BlockLink";
-import { docUrl } from "@site/src/utils/index";
-
-
-# Welcome to the doc portal!
-***
-
-This portal holds a variety of support documents to help you work with Pulsar.
If you’re a beginner, there are tutorials and explainers to help you understand Pulsar and how it works.
-
-If you’re an experienced coder, review this page to learn the easiest way to access the specific content you’re looking for.
-
-## Get Started Now
-
-
-
-
-
-## Navigation
-***
-
-There are several ways to get around in the doc portal. The index navigation pane is a table of contents for the entire archive. The archive is divided into sections, like chapters in a book. Click the title of a topic to view it.
-
-In-context links provide an easy way to immediately reference related topics. Click the underlined term to view the topic.
-
-Links to related topics can be found at the bottom of each topic page. Click the link to view the topic.
-
-![Page Linking](/assets/page-linking.png)
-
-## Continuous Improvement
-***
-As you probably know, we are working on a new user experience for our documentation portal that will make learning about and building on top of Apache Pulsar a much better experience. Whether you need overview concepts, how-to procedures, curated guides or quick references, we’re building content to support it. This welcome page is just the first step. We will be providing updates every month.
-
-## Help Improve These Documents
-***
-
-You’ll notice an Edit button at the bottom and top of each page. Click it to open a landing page with instructions for requesting changes to posted documents. These are your resources. Participation is not only welcomed – it’s essential!
-
-## Join the Community!
-***
-
-The Pulsar community on GitHub is active, passionate, and knowledgeable. Join discussions, voice opinions, suggest features, and dive into the code itself. Find your Pulsar family here at [apache/pulsar](https://github.com/apache/pulsar).
-
-An equally passionate community can be found in the [Pulsar Slack channel](https://apache-pulsar.slack.com/). You’ll need an invitation to join, but many GitHub Pulsar community members are Slack members too. Join, hang out, learn, and make some new friends.
-
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/adaptors-kafka.md b/site2/website/versioned_docs/version-2.8.1-deprecated/adaptors-kafka.md
deleted file mode 100644
index ad0d886a9e04b8..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/adaptors-kafka.md
+++ /dev/null
@@ -1,274 +0,0 @@
----
-id: adaptors-kafka
-title: Pulsar adaptor for Apache Kafka
-sidebar_label: "Kafka client wrapper"
-original_id: adaptors-kafka
----
-
-
-Pulsar provides an easy option for applications that are currently written using the [Apache Kafka](http://kafka.apache.org) Java client API.
-
-## Using the Pulsar Kafka compatibility wrapper
-
-In an existing application, replace the regular Kafka client dependency with the Pulsar Kafka wrapper. Remove the following dependency from `pom.xml`:
-
-```xml
-
-<dependency>
-  <groupId>org.apache.kafka</groupId>
-  <artifactId>kafka-clients</artifactId>
-  <version>0.10.2.1</version>
-</dependency>
-
-```
-
-Then include this dependency for the Pulsar Kafka wrapper:
-
-```xml
-
-<dependency>
-  <groupId>org.apache.pulsar</groupId>
-  <artifactId>pulsar-client-kafka</artifactId>
-  <version>@pulsar:version@</version>
-</dependency>
-
-```
-
-With the new dependency, the existing code works without any changes. You only need to adjust the configuration so that the producers and consumers point to a Pulsar service rather than to Kafka, and use a particular Pulsar topic.
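-
-For instance, the only changes for a typical application are configuration values, as in this minimal sketch (the service URL and topic below are placeholders for your own values):
-
-```java
-
-import java.util.Properties;
-
-Properties props = new Properties();
-// Before: props.put("bootstrap.servers", "my-kafka-broker:9092");
-// After: the same Kafka-API code points at a Pulsar service instead.
-props.put("bootstrap.servers", "pulsar://localhost:6650");
-
-// Topics are referenced by their full Pulsar topic names.
-String topic = "persistent://public/default/my-topic";
-
-```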
-
-## Using the Pulsar Kafka compatibility wrapper together with the existing Kafka client
-
-When migrating from Kafka to Pulsar, the application might use the original Kafka client
-and the Pulsar Kafka wrapper together during migration. In this case, you should consider using the
-unshaded Pulsar Kafka client wrapper.
-
-```xml
-
-<dependency>
-  <groupId>org.apache.pulsar</groupId>
-  <artifactId>pulsar-client-kafka-original</artifactId>
-  <version>@pulsar:version@</version>
-</dependency>
-
-```
-
-When using this dependency, construct producers using `org.apache.kafka.clients.producer.PulsarKafkaProducer`
-instead of `org.apache.kafka.clients.producer.KafkaProducer`, and consumers using `org.apache.kafka.clients.consumer.PulsarKafkaConsumer`.
-
-## Producer example
-
-```java
-
-// Topic needs to be a regular Pulsar topic
-String topic = "persistent://public/default/my-topic";
-
-Properties props = new Properties();
-// Point to a Pulsar service
-props.put("bootstrap.servers", "pulsar://localhost:6650");
-
-props.put("key.serializer", IntegerSerializer.class.getName());
-props.put("value.serializer", StringSerializer.class.getName());
-
-Producer<Integer, String> producer = new KafkaProducer<>(props);
-
-for (int i = 0; i < 10; i++) {
-    producer.send(new ProducerRecord<>(topic, i, "hello-" + i));
-    log.info("Message {} sent successfully", i);
-}
-
-producer.close();
-
-```
-
-## Consumer example
-
-```java
-
-String topic = "persistent://public/default/my-topic";
-
-Properties props = new Properties();
-// Point to a Pulsar service
-props.put("bootstrap.servers", "pulsar://localhost:6650");
-props.put("group.id", "my-subscription-name");
-props.put("enable.auto.commit", "false");
-props.put("key.deserializer", IntegerDeserializer.class.getName());
-props.put("value.deserializer", StringDeserializer.class.getName());
-
-Consumer<Integer, String> consumer = new KafkaConsumer<>(props);
-consumer.subscribe(Arrays.asList(topic));
-
-while (true) {
-    ConsumerRecords<Integer, String> records = consumer.poll(100);
-    records.forEach(record -> {
-        log.info("Received record: {}", record);
-    });
-
-    // Commit last offset
-    consumer.commitSync();
-}
-
-```
-
-## Complete Examples
-
-You can find the complete producer and consumer examples [here](https://github.com/apache/pulsar-adapters/tree/master/pulsar-client-kafka-compat/pulsar-client-kafka-tests/src/test/java/org/apache/pulsar/client/kafka/compat/examples).
-
-## Compatibility matrix
-
-Currently, the Pulsar Kafka wrapper supports most of the operations offered by the Kafka API.
-
-### Producer
-
-APIs:
-
-| Producer Method | Supported | Notes |
-|:------------------------------------------------------------------------------|:----------|:-------------------------------------------------------------------------|
-| `Future<RecordMetadata> send(ProducerRecord<K, V> record)` | Yes | |
-| `Future<RecordMetadata> send(ProducerRecord<K, V> record, Callback callback)` | Yes | |
-| `void flush()` | Yes | |
-| `List<PartitionInfo> partitionsFor(String topic)` | No | |
-| `Map<MetricName, ? extends Metric> metrics()` | No | |
-| `void close()` | Yes | |
-| `void close(long timeout, TimeUnit unit)` | Yes | |
-
-Properties:
-
-| Config property | Supported | Notes |
-|:----------------------------------------|:----------|:------------------------------------------------------------------------------|
-| `acks` | Ignored | Durability and quorum writes are configured at the namespace level |
-| `auto.offset.reset` | Yes | It uses a default value of `earliest` if you do not give a specific setting. |
-| `batch.size` | Ignored | |
-| `bootstrap.servers` | Yes | |
-| `buffer.memory` | Ignored | |
-| `client.id` | Ignored | |
-| `compression.type` | Yes | Allows `gzip` and `lz4`. No `snappy`.
| `connections.max.idle.ms` | Yes | Supports up to 2,147,483,647,000 ms (Integer.MAX_VALUE * 1000) of idle time |
-| `interceptor.classes` | Yes | |
-| `key.serializer` | Yes | |
-| `linger.ms` | Yes | Controls the group commit time when batching messages |
-| `max.block.ms` | Ignored | |
-| `max.in.flight.requests.per.connection` | Ignored | In Pulsar, ordering is maintained even with multiple requests in flight |
-| `max.request.size` | Ignored | |
-| `metric.reporters` | Ignored | |
-| `metrics.num.samples` | Ignored | |
-| `metrics.sample.window.ms` | Ignored | |
-| `partitioner.class` | Yes | |
-| `receive.buffer.bytes` | Ignored | |
-| `reconnect.backoff.ms` | Ignored | |
-| `request.timeout.ms` | Ignored | |
-| `retries` | Ignored | Pulsar client retries with exponential backoff until the send timeout expires. |
-| `send.buffer.bytes` | Ignored | |
-| `timeout.ms` | Yes | |
-| `value.serializer` | Yes | |
-
-
-### Consumer
-
-The following table lists consumer APIs.
-
-| Consumer Method | Supported | Notes |
-|:--------------------------------------------------------------------------------------------------------|:----------|:------|
-| `Set<TopicPartition> assignment()` | No | |
-| `Set<String> subscription()` | Yes | |
-| `void subscribe(Collection<String> topics)` | Yes | |
-| `void subscribe(Collection<String> topics, ConsumerRebalanceListener callback)` | No | |
-| `void assign(Collection<TopicPartition> partitions)` | No | |
-| `void subscribe(Pattern pattern, ConsumerRebalanceListener callback)` | No | |
-| `void unsubscribe()` | Yes | |
-| `ConsumerRecords<K, V> poll(long timeoutMillis)` | Yes | |
-| `void commitSync()` | Yes | |
-| `void commitSync(Map<TopicPartition, OffsetAndMetadata> offsets)` | Yes | |
-| `void commitAsync()` | Yes | |
-| `void commitAsync(OffsetCommitCallback callback)` | Yes | |
-| `void commitAsync(Map<TopicPartition, OffsetAndMetadata> offsets, OffsetCommitCallback callback)` | Yes | |
-| `void seek(TopicPartition partition, long offset)` | Yes | |
-| `void seekToBeginning(Collection<TopicPartition> partitions)` | Yes | |
-| `void seekToEnd(Collection<TopicPartition> partitions)` | Yes | |
-| `long position(TopicPartition partition)` | Yes | |
-| `OffsetAndMetadata committed(TopicPartition partition)` | Yes | |
-| `Map<MetricName, ? extends Metric> metrics()` | No | |
-| `List<PartitionInfo> partitionsFor(String topic)` | No | |
-| `Map<String, List<PartitionInfo>> listTopics()` | No | |
-| `Set<TopicPartition> paused()` | No | |
-| `void pause(Collection<TopicPartition> partitions)` | No | |
-| `void resume(Collection<TopicPartition> partitions)` | No | |
-| `Map<TopicPartition, OffsetAndTimestamp> offsetsForTimes(Map<TopicPartition, Long> timestampsToSearch)` | No | |
-| `Map<TopicPartition, Long> beginningOffsets(Collection<TopicPartition> partitions)` | No | |
-| `Map<TopicPartition, Long> endOffsets(Collection<TopicPartition> partitions)` | No | |
-| `void close()` | Yes | |
-| `void close(long timeout, TimeUnit unit)` | Yes | |
-| `void wakeup()` | No | |
-
-Properties:
-
-| Config property | Supported | Notes |
-|:--------------------------------|:----------|:------------------------------------------------------|
-| `group.id` | Yes | Maps to a Pulsar subscription name |
-| `max.poll.records` | Yes | |
-| `max.poll.interval.ms` | Ignored | Messages are "pushed" from broker |
-| `session.timeout.ms` | Ignored | |
-| `heartbeat.interval.ms` | Ignored | |
-| `bootstrap.servers` | Yes | Needs to point to a single Pulsar service URL |
-| `enable.auto.commit` | Yes | |
-| `auto.commit.interval.ms` | Ignored | With auto-commit, acks are sent immediately to broker |
-| `partition.assignment.strategy` | Ignored | |
-| `auto.offset.reset` | Yes | Only `earliest` and `latest` are supported.
| -| `fetch.min.bytes` | Ignored | | -| `fetch.max.bytes` | Ignored | | -| `fetch.max.wait.ms` | Ignored | | -| `interceptor.classes` | Yes | | -| `metadata.max.age.ms` | Ignored | | -| `max.partition.fetch.bytes` | Ignored | | -| `send.buffer.bytes` | Ignored | | -| `receive.buffer.bytes` | Ignored | | -| `client.id` | Ignored | | - - -## Customize Pulsar configurations - -You can configure Pulsar authentication provider directly from the Kafka properties. - -### Pulsar client properties - -| Config property | Default | Notes | -|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------| -| [`pulsar.authentication.class`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setAuthentication-org.apache.pulsar.client.api.Authentication-) | | Configure to auth provider. For example, `org.apache.pulsar.client.impl.auth.AuthenticationTls`.| -| [`pulsar.authentication.params.map`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setAuthentication-java.lang.String-java.util.Map-) | | Map which represents parameters for the Authentication-Plugin. | -| [`pulsar.authentication.params.string`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setAuthentication-java.lang.String-java.lang.String-) | | String which represents parameters for the Authentication-Plugin, for example, `key1:val1,key2:val2`. | -| [`pulsar.use.tls`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setUseTls-boolean-) | `false` | Enable TLS transport encryption. | -| [`pulsar.tls.trust.certs.file.path`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setTlsTrustCertsFilePath-java.lang.String-) | | Path for the TLS trust certificate store. | -| [`pulsar.tls.allow.insecure.connection`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setTlsAllowInsecureConnection-boolean-) | `false` | Accept self-signed certificates from brokers. | -| [`pulsar.operation.timeout.ms`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setOperationTimeout-int-java.util.concurrent.TimeUnit-) | `30000` | General operations timeout. | -| [`pulsar.stats.interval.seconds`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setStatsInterval-long-java.util.concurrent.TimeUnit-) | `60` | Pulsar client lib stats printing interval. | -| [`pulsar.num.io.threads`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setIoThreads-int-) | `1` | The number of Netty IO threads to use. | -| [`pulsar.connections.per.broker`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setConnectionsPerBroker-int-) | `1` | The maximum number of connection to each broker. | -| [`pulsar.use.tcp.nodelay`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setUseTcpNoDelay-boolean-) | `true` | TCP no-delay. | -| [`pulsar.concurrent.lookup.requests`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setConcurrentLookupRequest-int-) | `50000` | The maximum number of concurrent topic lookups. 
| -| [`pulsar.max.number.rejected.request.per.connection`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setMaxNumberOfRejectedRequestPerConnection-int-) | `50` | The threshold of errors to forcefully close a connection. | -| [`pulsar.keepalive.interval.ms`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientBuilder.html#keepAliveInterval-int-java.util.concurrent.TimeUnit-)| `30000` | Keep alive interval for each client-broker-connection. | - - -### Pulsar producer properties - -| Config property | Default | Notes | -|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------| -| [`pulsar.producer.name`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setProducerName-java.lang.String-) | | Specify the producer name. | -| [`pulsar.producer.initial.sequence.id`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setInitialSequenceId-long-) | | Specify baseline for sequence ID of this producer. | -| [`pulsar.producer.max.pending.messages`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setMaxPendingMessages-int-) | `1000` | Set the maximum size of the message queue pending to receive an acknowledgment from the broker. | -| [`pulsar.producer.max.pending.messages.across.partitions`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setMaxPendingMessagesAcrossPartitions-int-) | `50000` | Set the maximum number of pending messages across all the partitions. | -| [`pulsar.producer.batching.enabled`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setBatchingEnabled-boolean-) | `true` | Control whether automatic batching of messages is enabled for the producer. | -| [`pulsar.producer.batching.max.messages`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setBatchingMaxMessages-int-) | `1000` | The maximum number of messages in a batch. | -| [`pulsar.block.if.producer.queue.full`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setBlockIfQueueFull-boolean-) | | Specify the block producer if queue is full. | - - -### Pulsar consumer Properties - -| Config property | Default | Notes | -|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------| -| [`pulsar.consumer.name`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setConsumerName-java.lang.String-) | | Specify the consumer name. | -| [`pulsar.consumer.receiver.queue.size`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setReceiverQueueSize-int-) | 1000 | Set the size of the consumer receiver queue. | -| [`pulsar.consumer.acknowledgments.group.time.millis`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#acknowledgmentGroupTime-long-java.util.concurrent.TimeUnit-) | 100 | Set the maximum amount of group time for consumers to send the acknowledgments to the broker. 
| [`pulsar.consumer.total.receiver.queue.size.across.partitions`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setMaxTotalReceiverQueueSizeAcrossPartitions-int-) | 50000 | Set the maximum size of the total receiver queue across partitions. |
-| [`pulsar.consumer.subscription.topics.mode`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#subscriptionTopicsMode-Mode-) | PersistentOnly | Set the subscription topic mode for consumers. |
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/adaptors-spark.md b/site2/website/versioned_docs/version-2.8.1-deprecated/adaptors-spark.md
deleted file mode 100644
index e14f13b5d4b079..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/adaptors-spark.md
+++ /dev/null
@@ -1,91 +0,0 @@
----
-id: adaptors-spark
-title: Pulsar adaptor for Apache Spark
-sidebar_label: "Apache Spark"
-original_id: adaptors-spark
----
-
-## Spark Streaming receiver
-The Spark Streaming receiver for Pulsar is a custom receiver that enables Apache [Spark Streaming](https://spark.apache.org/streaming/) to receive raw data from Pulsar.
-
-An application can receive data in [Resilient Distributed Dataset](https://spark.apache.org/docs/latest/programming-guide.html#resilient-distributed-datasets-rdds) (RDD) format via the Spark Streaming receiver and can process it in a variety of ways.
-
-### Prerequisites
-
-To use the receiver, include a dependency for the `pulsar-spark` library in your Java configuration.
-
-#### Maven
-
-If you're using Maven, add this to your `pom.xml`:
-
-```xml
-
-<properties>
-  <pulsar.version>@pulsar:version@</pulsar.version>
-</properties>
-
-<dependency>
-  <groupId>org.apache.pulsar</groupId>
-  <artifactId>pulsar-spark</artifactId>
-  <version>${pulsar.version}</version>
-</dependency>
-
-```
-
-#### Gradle
-
-If you're using Gradle, add this to your `build.gradle` file:
-
-```groovy
-
-def pulsarVersion = "@pulsar:version@"
-
-dependencies {
-    compile group: 'org.apache.pulsar', name: 'pulsar-spark', version: pulsarVersion
-}
-
-```
-
-### Usage
-
-Pass an instance of `SparkStreamingPulsarReceiver` to the `receiverStream` method in `JavaStreamingContext`:
-
-```java
-
-String serviceUrl = "pulsar://localhost:6650/";
-String topic = "persistent://public/default/test_src";
-String subs = "test_sub";
-
-SparkConf sparkConf = new SparkConf().setMaster("local[*]").setAppName("Pulsar Spark Example");
-
-JavaStreamingContext jsc = new JavaStreamingContext(sparkConf, Durations.seconds(60));
-
-ConsumerConfigurationData<byte[]> pulsarConf = new ConsumerConfigurationData<>();
-
-Set<String> set = new HashSet<>();
-set.add(topic);
-pulsarConf.setTopicNames(set);
-pulsarConf.setSubscriptionName(subs);
-
-SparkStreamingPulsarReceiver pulsarReceiver = new SparkStreamingPulsarReceiver(
-    serviceUrl,
-    pulsarConf,
-    new AuthenticationDisabled());
-
-JavaReceiverInputDStream<byte[]> lineDStream = jsc.receiverStream(pulsarReceiver);
-
-```
-
-For a complete example, click [here](https://github.com/apache/pulsar-adapters/blob/master/examples/spark/src/main/java/org/apache/spark/streaming/receiver/example/SparkStreamingPulsarReceiverExample.java). In that example, the receiver counts the number of received messages that contain the string "Pulsar".
-
-Note that if needed, other Pulsar authentication classes can be used.
For example, in order to use a token during authentication the following parameters for the `SparkStreamingPulsarReceiver` constructor can be set: - -```java - -SparkStreamingPulsarReceiver pulsarReceiver = new SparkStreamingPulsarReceiver( - serviceUrl, - pulsarConf, - new AuthenticationToken("token:")); - -``` - diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/adaptors-storm.md b/site2/website/versioned_docs/version-2.8.1-deprecated/adaptors-storm.md deleted file mode 100644 index 76d507164777db..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/adaptors-storm.md +++ /dev/null @@ -1,96 +0,0 @@ ---- -id: adaptors-storm -title: Pulsar adaptor for Apache Storm -sidebar_label: "Apache Storm" -original_id: adaptors-storm ---- - -Pulsar Storm is an adaptor for integrating with [Apache Storm](http://storm.apache.org/) topologies. It provides core Storm implementations for sending and receiving data. - -An application can inject data into a Storm topology via a generic Pulsar spout, as well as consume data from a Storm topology via a generic Pulsar bolt. - -## Using the Pulsar Storm Adaptor - -Include dependency for Pulsar Storm Adaptor: - -```xml - - - org.apache.pulsar - pulsar-storm - ${pulsar.version} - - -``` - -## Pulsar Spout - -The Pulsar Spout allows for the data published on a topic to be consumed by a Storm topology. It emits a Storm tuple based on the message received and the `MessageToValuesMapper` provided by the client. - -The tuples that fail to be processed by the downstream bolts will be re-injected by the spout with an exponential backoff, within a configurable timeout (the default is 60 seconds) or a configurable number of retries, whichever comes first, after which it is acknowledged by the consumer. Here's an example construction of a spout: - -```java - -MessageToValuesMapper messageToValuesMapper = new MessageToValuesMapper() { - - @Override - public Values toValues(Message msg) { - return new Values(new String(msg.getData())); - } - - @Override - public void declareOutputFields(OutputFieldsDeclarer declarer) { - // declare the output fields - declarer.declare(new Fields("string")); - } -}; - -// Configure a Pulsar Spout -PulsarSpoutConfiguration spoutConf = new PulsarSpoutConfiguration(); -spoutConf.setServiceUrl("pulsar://broker.messaging.usw.example.com:6650"); -spoutConf.setTopic("persistent://my-property/usw/my-ns/my-topic1"); -spoutConf.setSubscriptionName("my-subscriber-name1"); -spoutConf.setMessageToValuesMapper(messageToValuesMapper); - -// Create a Pulsar Spout -PulsarSpout spout = new PulsarSpout(spoutConf); - -``` - -For a complete example, click [here](https://github.com/apache/pulsar-adapters/blob/master/pulsar-storm/src/test/java/org/apache/pulsar/storm/PulsarSpoutTest.java). - -## Pulsar Bolt - -The Pulsar bolt allows data in a Storm topology to be published on a topic. It publishes messages based on the Storm tuple received and the `TupleToMessageMapper` provided by the client. - -A partitioned topic can also be used to publish messages on different topics. In the implementation of the `TupleToMessageMapper`, a "key" will need to be provided in the message which will send the messages with the same key to the same topic. 
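-
-A minimal sketch of such a key assignment is shown here, assuming the `byte[]`-typed message builder used by the bolt. The `userId` and `payload` tuple field names are hypothetical, and the complete mapper interface appears in the full bolt example that follows:
-
-```java
-
-// Hypothetical TupleToMessageMapper fragment: tuples that share a user ID are
-// routed to the same partition of a partitioned topic.
-@Override
-public TypedMessageBuilder<byte[]> toMessage(TypedMessageBuilder<byte[]> msgBuilder, Tuple tuple) {
-    String userId = tuple.getStringByField("userId");    // hypothetical key field
-    byte[] payload = tuple.getBinaryByField("payload");  // hypothetical payload field
-    return msgBuilder.key(userId)                        // same key -> same partition
-                     .value(payload);
-}
-
-```
-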
Here's an example bolt: - -```java - -TupleToMessageMapper tupleToMessageMapper = new TupleToMessageMapper() { - - @Override - public TypedMessageBuilder toMessage(TypedMessageBuilder msgBuilder, Tuple tuple) { - String receivedMessage = tuple.getString(0); - // message processing - String processedMsg = receivedMessage + "-processed"; - return msgBuilder.value(processedMsg.getBytes()); - } - - @Override - public void declareOutputFields(OutputFieldsDeclarer declarer) { - // declare the output fields - } -}; - -// Configure a Pulsar Bolt -PulsarBoltConfiguration boltConf = new PulsarBoltConfiguration(); -boltConf.setServiceUrl("pulsar://broker.messaging.usw.example.com:6650"); -boltConf.setTopic("persistent://my-property/usw/my-ns/my-topic2"); -boltConf.setTupleToMessageMapper(tupleToMessageMapper); - -// Create a Pulsar Bolt -PulsarBolt bolt = new PulsarBolt(boltConf); - -``` - diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-brokers.md b/site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-brokers.md deleted file mode 100644 index 4af4363850efd6..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-brokers.md +++ /dev/null @@ -1,276 +0,0 @@ ---- -id: admin-api-brokers -title: Managing Brokers -sidebar_label: "Brokers" -original_id: admin-api-brokers ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -Pulsar brokers consist of two components: - -1. An HTTP server exposing a {@inject: rest:REST:/} interface administration and [topic](reference-terminology.md#topic) lookup. -2. A dispatcher that handles all Pulsar [message](reference-terminology.md#message) transfers. - -[Brokers](reference-terminology.md#broker) can be managed via: - -* The [`brokers`](reference-pulsar-admin.md#brokers) command of the [`pulsar-admin`](reference-pulsar-admin.md) tool -* The `/admin/v2/brokers` endpoint of the admin {@inject: rest:REST:/} API -* The `brokers` method of the {@inject: javadoc:PulsarAdmin:/admin/org/apache/pulsar/client/admin/PulsarAdmin.html} object in the [Java API](client-libraries-java.md) - -In addition to being configurable when you start them up, brokers can also be [dynamically configured](#dynamic-broker-configuration). - -> See the [Configuration](reference-configuration.md#broker) page for a full listing of broker-specific configuration parameters. - -## Brokers resources - -### List active brokers - -Fetch all available active brokers that are serving traffic with cluster name. - -````mdx-code-block - - - -```shell - -$ pulsar-admin brokers list use - -``` - -``` - -broker1.use.org.com:8080 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/brokers/:cluster|operation/getActiveBrokers?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().getActiveBrokers(clusterName) - -``` - - - - -```` - -### Get the information of the leader broker - -Fetch the information of the leader broker, for example, the service url. 
- -````mdx-code-block - - - -```shell - -$ pulsar-admin brokers leader-broker - -``` - -``` - -BrokerInfo(serviceUrl=broker1.use.org.com:8080) - -``` - - - - -{@inject: endpoint|GET|/admin/v2/brokers/leaderBroker|operation/getLeaderBroker?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().getLeaderBroker() - -``` - -For the detail of the code above, see [here](https://github.com/apache/pulsar/blob/master/pulsar-client-admin/src/main/java/org/apache/pulsar/client/admin/internal/BrokersImpl.java#L80) - - - - -```` - -#### list of namespaces owned by a given broker - -It finds all namespaces which are owned and served by a given broker. - -````mdx-code-block - - - -```shell - -$ pulsar-admin brokers namespaces use \ - --url broker1.use.org.com:8080 - -``` - -```json - -{ - "my-property/use/my-ns/0x00000000_0xffffffff": { - "broker_assignment": "shared", - "is_controlled": false, - "is_active": true - } -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/brokers/:cluster/:broker/ownedNamespaces|operation/getOwnedNamespaes?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().getOwnedNamespaces(cluster,brokerUrl); - -``` - - - - -```` - -### Dynamic broker configuration - -One way to configure a Pulsar [broker](reference-terminology.md#broker) is to supply a [configuration](reference-configuration.md#broker) when the broker is [started up](reference-cli-tools.md#pulsar-broker). - -But since all broker configuration in Pulsar is stored in ZooKeeper, configuration values can also be dynamically updated *while the broker is running*. When you update broker configuration dynamically, ZooKeeper will notify the broker of the change and the broker will then override any existing configuration values. - -* The [`brokers`](reference-pulsar-admin.md#brokers) command for the [`pulsar-admin`](reference-pulsar-admin.md) tool has a variety of subcommands that enable you to manipulate a broker's configuration dynamically, enabling you to [update config values](#update-dynamic-configuration) and more. -* In the Pulsar admin {@inject: rest:REST:/} API, dynamic configuration is managed through the `/admin/v2/brokers/configuration` endpoint. - -### Update dynamic configuration - -````mdx-code-block - - - -The [`update-dynamic-config`](reference-pulsar-admin.md#brokers-update-dynamic-config) subcommand will update existing configuration. It takes two arguments: the name of the parameter and the new value using the `config` and `value` flag respectively. Here's an example for the [`brokerShutdownTimeoutMs`](reference-configuration.md#broker-brokerShutdownTimeoutMs) parameter: - -```shell - -$ pulsar-admin brokers update-dynamic-config --config brokerShutdownTimeoutMs --value 100 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/brokers/configuration/:configName/:configValue|operation/updateDynamicConfiguration?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().updateDynamicConfiguration(configName, configValue); - -``` - - - - -```` - -### List updated values - -Fetch a list of all potentially updatable configuration parameters. -````mdx-code-block - - - -```shell - -$ pulsar-admin brokers list-dynamic-config -brokerShutdownTimeoutMs - -``` - - - - -{@inject: endpoint|GET|/admin/v2/brokers/configuration|operation/getDynamicConfigurationName?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().getDynamicConfigurationNames(); - -``` - - - - -```` - -### List all - -Fetch a list of all parameters that have been dynamically updated. 
- -````mdx-code-block - - - -```shell - -$ pulsar-admin brokers get-all-dynamic-config -brokerShutdownTimeoutMs:100 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/brokers/configuration/values|operation/getAllDynamicConfigurations?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().getAllDynamicConfigurations(); - -``` - - - - -```` diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-clusters.md b/site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-clusters.md deleted file mode 100644 index e0e9fb5f91f65b..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-clusters.md +++ /dev/null @@ -1,308 +0,0 @@ ---- -id: admin-api-clusters -title: Managing Clusters -sidebar_label: "Clusters" -original_id: admin-api-clusters ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -Pulsar clusters consist of one or more Pulsar [brokers](reference-terminology.md#broker), one or more [BookKeeper](reference-terminology.md#bookkeeper) -servers (aka [bookies](reference-terminology.md#bookie)), and a [ZooKeeper](https://zookeeper.apache.org) cluster that provides configuration and coordination management. - -Clusters can be managed via: - -* The [`clusters`](reference-pulsar-admin.md#clusters) command of the [`pulsar-admin`](reference-pulsar-admin.md) tool -* The `/admin/v2/clusters` endpoint of the admin {@inject: rest:REST:/} API -* The `clusters` method of the {@inject: javadoc:PulsarAdmin:/admin/org/apache/pulsar/client/admin/PulsarAdmin} object in the [Java API](client-libraries-java.md) - -## Clusters resources - -### Provision - -New clusters can be provisioned using the admin interface. - -> Please note that this operation requires superuser privileges. - -````mdx-code-block - - - -You can provision a new cluster using the [`create`](reference-pulsar-admin.md#clusters-create) subcommand. Here's an example: - -```shell - -$ pulsar-admin clusters create cluster-1 \ - --url http://my-cluster.org.com:8080 \ - --broker-url pulsar://my-cluster.org.com:6650 - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/clusters/:cluster|operation/createCluster?version=@pulsar:version_number@} - - - - -```java - -ClusterData clusterData = new ClusterData( - serviceUrl, - serviceUrlTls, - brokerServiceUrl, - brokerServiceUrlTls -); -admin.clusters().createCluster(clusterName, clusterData); - -``` - - - - -```` - -### Initialize cluster metadata - -When provision a new cluster, you need to initialize that cluster's [metadata](concepts-architecture-overview.md#metadata-store). When initializing cluster metadata, you need to specify all of the following: - -* The name of the cluster -* The local ZooKeeper connection string for the cluster -* The configuration store connection string for the entire instance -* The web service URL for the cluster -* A broker service URL enabling interaction with the [brokers](reference-terminology.md#broker) in the cluster - -You must initialize cluster metadata *before* starting up any [brokers](admin-api-brokers.md) that will belong to the cluster. - -> **No cluster metadata initialization through the REST API or the Java admin API** -> -> Unlike most other admin functions in Pulsar, cluster metadata initialization cannot be performed via the admin REST API -> or the admin Java client, as metadata initialization involves communicating with ZooKeeper directly. 
-> Instead, you can use the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool, in particular -> the [`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata) command. - -Here's an example cluster metadata initialization command: - -```shell - -bin/pulsar initialize-cluster-metadata \ - --cluster us-west \ - --zookeeper zk1.us-west.example.com:2181 \ - --configuration-store zk1.us-west.example.com:2184 \ - --web-service-url http://pulsar.us-west.example.com:8080/ \ - --web-service-url-tls https://pulsar.us-west.example.com:8443/ \ - --broker-service-url pulsar://pulsar.us-west.example.com:6650/ \ - --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651/ - -``` - -You'll need to use `--*-tls` flags only if you're using [TLS authentication](security-tls-authentication.md) in your instance. - -### Get configuration - -You can fetch the [configuration](reference-configuration.md) for an existing cluster at any time. - -````mdx-code-block - - - -Use the [`get`](reference-pulsar-admin.md#clusters-get) subcommand and specify the name of the cluster. Here's an example: - -```shell - -$ pulsar-admin clusters get cluster-1 -{ - "serviceUrl": "http://my-cluster.org.com:8080/", - "serviceUrlTls": null, - "brokerServiceUrl": "pulsar://my-cluster.org.com:6650/", - "brokerServiceUrlTls": null - "peerClusterNames": null -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/clusters/:cluster|operation/getCluster?version=@pulsar:version_number@} - - - - -```java - -admin.clusters().getCluster(clusterName); - -``` - - - - -```` - -### Update - -You can update the configuration for an existing cluster at any time. - -````mdx-code-block - - - -Use the [`update`](reference-pulsar-admin.md#clusters-update) subcommand and specify new configuration values using flags. - -```shell - -$ pulsar-admin clusters update cluster-1 \ - --url http://my-cluster.org.com:4081 \ - --broker-url pulsar://my-cluster.org.com:3350 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/clusters/:cluster|operation/updateCluster?version=@pulsar:version_number@} - - - - -```java - -ClusterData clusterData = new ClusterData( - serviceUrl, - serviceUrlTls, - brokerServiceUrl, - brokerServiceUrlTls -); -admin.clusters().updateCluster(clusterName, clusterData); - -``` - - - - -```` - -### Delete - -Clusters can be deleted from a Pulsar [instance](reference-terminology.md#instance). - -````mdx-code-block - - - -Use the [`delete`](reference-pulsar-admin.md#clusters-delete) subcommand and specify the name of the cluster. - -``` - -$ pulsar-admin clusters delete cluster-1 - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/clusters/:cluster|operation/deleteCluster?version=@pulsar:version_number@} - - - - -```java - -admin.clusters().deleteCluster(clusterName); - -``` - - - - -```` - -### List - -You can fetch a list of all clusters in a Pulsar [instance](reference-terminology.md#instance). - -````mdx-code-block - - - -Use the [`list`](reference-pulsar-admin.md#clusters-list) subcommand. - -```shell - -$ pulsar-admin clusters list -cluster-1 -cluster-2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/clusters|operation/getClusters?version=@pulsar:version_number@} - - - - -```java - -admin.clusters().getClusters(); - -``` - - - - -```` - -### Update peer-cluster data - -Peer clusters can be configured for a given cluster in a Pulsar [instance](reference-terminology.md#instance). 
- -````mdx-code-block - - - -Use the [`update-peer-clusters`](reference-pulsar-admin.md#clusters-update-peer-clusters) subcommand and specify the list of peer-cluster names. - -``` - -$ pulsar-admin update-peer-clusters cluster-1 --peer-clusters cluster-2 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/clusters/:cluster/peers|operation/setPeerClusterNames?version=@pulsar:version_number@} - - - - -```java - -admin.clusters().updatePeerClusterNames(clusterName, peerClusterList); - -``` - - - - -```` \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-functions.md b/site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-functions.md deleted file mode 100644 index 93d41ac257301f..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-functions.md +++ /dev/null @@ -1,820 +0,0 @@ ---- -id: admin-api-functions -title: Manage Functions -sidebar_label: "Functions" -original_id: admin-api-functions ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -**Pulsar Functions** are lightweight compute processes that - -* consume messages from one or more Pulsar topics -* apply a user-supplied processing logic to each message -* publish the results of the computation to another topic - -Functions can be managed via the following methods. - -Method | Description ----|--- -**Admin CLI** | The [`functions`](reference-pulsar-admin.md#functions) command of the [`pulsar-admin`](reference-pulsar-admin.md) tool. -**REST API** |The `/admin/v3/functions` endpoint of the admin {@inject: rest:REST:/} API. -**Java Admin API**| The `functions` method of the {@inject: javadoc:PulsarAdmin:/admin/org/apache/pulsar/client/admin/PulsarAdmin} object in the [Java API](client-libraries-java.md). - -## Function resources - -You can perform the following operations on functions. - -### Create a function - -You can create a Pulsar function in cluster mode (deploy it on a Pulsar cluster) using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`create`](reference-pulsar-admin.md#functions-create) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions create \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --inputs test-input-topic \ - --output persistent://public/default/test-output-topic \ - --classname org.apache.pulsar.functions.api.examples.ExclamationFunction \ - --jar /examples/api-examples.jar - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName|operation/registerFunction?version=@pulsar:version_number@} - - - - -```java - -FunctionConfig functionConfig = new FunctionConfig(); -functionConfig.setTenant(tenant); -functionConfig.setNamespace(namespace); -functionConfig.setName(functionName); -functionConfig.setRuntime(FunctionConfig.Runtime.JAVA); -functionConfig.setParallelism(1); -functionConfig.setClassName("org.apache.pulsar.functions.api.examples.ExclamationFunction"); -functionConfig.setProcessingGuarantees(FunctionConfig.ProcessingGuarantees.ATLEAST_ONCE); -functionConfig.setTopicsPattern(sourceTopicPattern); -functionConfig.setSubName(subscriptionName); -functionConfig.setAutoAck(true); -functionConfig.setOutput(sinkTopic); -admin.functions().createFunction(functionConfig, fileName); - -``` - - - - -```` - -### Update a function - -You can update a Pulsar function that has been deployed to a Pulsar cluster using Admin CLI, REST API or Java Admin API. 
- -````mdx-code-block - - - -Use the [`update`](reference-pulsar-admin.md#functions-update) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions update \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --output persistent://public/default/update-output-topic \ - # other options - -``` - - - - -{@inject: endpoint|PUT|/admin/v3/functions/:tenant/:namespace/:functionName|operation/updateFunction?version=@pulsar:version_number@} - - - - -```java - -FunctionConfig functionConfig = new FunctionConfig(); -functionConfig.setTenant(tenant); -functionConfig.setNamespace(namespace); -functionConfig.setName(functionName); -functionConfig.setRuntime(FunctionConfig.Runtime.JAVA); -functionConfig.setParallelism(1); -functionConfig.setClassName("org.apache.pulsar.functions.api.examples.ExclamationFunction"); -UpdateOptions updateOptions = new UpdateOptions(); -updateOptions.setUpdateAuthData(updateAuthData); -admin.functions().updateFunction(functionConfig, userCodeFile, updateOptions); - -``` - - - - -```` - -### Start an instance of a function - -You can start a stopped function instance with `instance-id` using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`start`](reference-pulsar-admin.md#functions-start) subcommand. - -```shell - -$ pulsar-admin functions start \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/start|operation/startFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().startFunction(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Start all instances of a function - -You can start all stopped function instances using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`start`](reference-pulsar-admin.md#functions-start) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions start \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/start|operation/startFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().startFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### Stop an instance of a function - -You can stop a function instance with `instance-id` using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`stop`](reference-pulsar-admin.md#functions-stop) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions stop \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/stop|operation/stopFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().stopFunction(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Stop all instances of a function - -You can stop all function instances using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`stop`](reference-pulsar-admin.md#functions-stop) subcommand. 
- -**Example** - -```shell - -$ pulsar-admin functions stop \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/stop|operation/stopFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().stopFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### Restart an instance of a function - -Restart a function instance with `instance-id` using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`restart`](reference-pulsar-admin.md#functions-restart) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions restart \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/restart|operation/restartFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().restartFunction(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Restart all instances of a function - -You can restart all function instances using Admin CLI, REST API or Java admin API. - -````mdx-code-block - - - -Use the [`restart`](reference-pulsar-admin.md#functions-restart) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions restart \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/restart|operation/restartFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().restartFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### List all functions - -You can list all Pulsar functions running under a specific tenant and namespace using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`list`](reference-pulsar-admin.md#functions-list) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions list \ - --tenant public \ - --namespace default - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace|operation/listFunctions?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctions(tenant, namespace); - -``` - - - - -```` - -### Delete a function - -You can delete a Pulsar function that is running on a Pulsar cluster using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`delete`](reference-pulsar-admin.md#functions-delete) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions delete \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) - -``` - - - - -{@inject: endpoint|DELETE|/admin/v3/functions/:tenant/:namespace/:functionName|operation/deregisterFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().deleteFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### Get info about a function - -You can get information about a Pulsar function currently running in cluster mode using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`get`](reference-pulsar-admin.md#functions-get) subcommand. 
- -**Example** - -```shell - -$ pulsar-admin functions get \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName|operation/getFunctionInfo?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### Get status of an instance of a function - -You can get the current status of a Pulsar function instance with `instance-id` using Admin CLI, REST API or Java Admin API. -````mdx-code-block - - - -Use the [`status`](reference-pulsar-admin.md#functions-status) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions status \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/status|operation/getFunctionInstanceStatus?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionStatus(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Get status of all instances of a function - -You can get the current status of a Pulsar function instance using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`status`](reference-pulsar-admin.md#functions-status) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions status \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/status|operation/getFunctionStatus?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionStatus(tenant, namespace, functionName); - -``` - - - - -```` - -### Get stats of an instance of a function - -You can get the current stats of a Pulsar Function instance with `instance-id` using Admin CLI, REST API or Java admin API. -````mdx-code-block - - - -Use the [`stats`](reference-pulsar-admin.md#functions-stats) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions stats \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/stats|operation/getFunctionInstanceStats?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionStats(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Get stats of all instances of a function - -You can get the current stats of a Pulsar function using Admin CLI, REST API or Java admin API. - -````mdx-code-block - - - -Use the [`stats`](reference-pulsar-admin.md#functions-stats) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions stats \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/stats|operation/getFunctionStats?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionStats(tenant, namespace, functionName); - -``` - - - - -```` - -### Trigger a function - -You can trigger a specified Pulsar function with a supplied value using Admin CLI, REST API or Java admin API. - -````mdx-code-block - - - -Use the [`trigger`](reference-pulsar-admin.md#functions-trigger) subcommand. 
- -**Example** - -```shell - -$ pulsar-admin functions trigger \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --topic (the name of input topic) \ - --trigger-value \"hello pulsar\" - # or --trigger-file (the path of trigger file) - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/trigger|operation/triggerFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().triggerFunction(tenant, namespace, functionName, topic, triggerValue, triggerFile); - -``` - - - - -```` - -### Put state associated with a function - -You can put the state associated with a Pulsar function using Admin CLI, REST API or Java admin API. - -````mdx-code-block - - - -Use the [`putstate`](reference-pulsar-admin.md#functions-putstate) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions putstate \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --state "{\"key\":\"pulsar\", \"stringValue\":\"hello pulsar\"}" - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/state/:key|operation/putFunctionState?version=@pulsar:version_number@} - - - - -```java - -TypeReference typeRef = new TypeReference() {}; -FunctionState stateRepr = ObjectMapperFactory.getThreadLocal().readValue(state, typeRef); -admin.functions().putFunctionState(tenant, namespace, functionName, stateRepr); - -``` - - - - -```` - -### Fetch state associated with a function - -You can fetch the current state associated with a Pulsar function using Admin CLI, REST API or Java admin API. - -````mdx-code-block - - - -Use the [`querystate`](reference-pulsar-admin.md#functions-querystate) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions querystate \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --key (the key of state) - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/state/:key|operation/getFunctionState?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionState(tenant, namespace, functionName, key); - -``` - - - - -```` \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-namespaces.md b/site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-namespaces.md deleted file mode 100644 index 9ed2bc8e3aae6b..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-namespaces.md +++ /dev/null @@ -1,1314 +0,0 @@ ---- -id: admin-api-namespaces -title: Managing Namespaces -sidebar_label: "Namespaces" -original_id: admin-api-namespaces ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -Pulsar [namespaces](reference-terminology.md#namespace) are logical groupings of [topics](reference-terminology.md#topic). - -Namespaces can be managed via: - -* The [`namespaces`](reference-pulsar-admin.md#clusters) command of the [`pulsar-admin`](reference-pulsar-admin.md) tool -* The `/admin/v2/namespaces` endpoint of the admin {@inject: rest:REST:/} API -* The `namespaces` method of the {@inject: javadoc:PulsarAdmin:/admin/org/apache/pulsar/client/admin/PulsarAdmin} object in the [Java API](client-libraries-java.md) - -## Namespaces resources - -### Create namespaces - -You can create new namespaces under a given [tenant](reference-terminology.md#tenant). 
- -````mdx-code-block - - - -Use the [`create`](reference-pulsar-admin.md#namespaces-create) subcommand and specify the namespace by name: - -```shell - -$ pulsar-admin namespaces create test-tenant/test-namespace - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace|operation/createNamespace?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().createNamespace(namespace); - -``` - - - - -```` - -### Get policies - -You can fetch the current policies associated with a namespace at any time. - -````mdx-code-block - - - -Use the [`policies`](reference-pulsar-admin.md#namespaces-policies) subcommand and specify the namespace: - -```shell - -$ pulsar-admin namespaces policies test-tenant/test-namespace -{ - "auth_policies": { - "namespace_auth": {}, - "destination_auth": {} - }, - "replication_clusters": [], - "bundles_activated": true, - "bundles": { - "boundaries": [ - "0x00000000", - "0xffffffff" - ], - "numBundles": 1 - }, - "backlog_quota_map": {}, - "persistence": null, - "latency_stats_sample_rate": {}, - "message_ttl_in_seconds": 0, - "retention_policies": null, - "deleted": false -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace|operation/getPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getPolicies(namespace); - -``` - - - - -```` - -### List namespaces - -You can list all namespaces within a given Pulsar [tenant](reference-terminology.md#tenant). - -````mdx-code-block - - - -Use the [`list`](reference-pulsar-admin.md#namespaces-list) subcommand and specify the tenant: - -```shell - -$ pulsar-admin namespaces list test-tenant -test-tenant/ns1 -test-tenant/ns2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant|operation/getTenantNamespaces?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getNamespaces(tenant); - -``` - - - - -```` - -### Delete namespaces - -You can delete existing namespaces from a tenant. - -````mdx-code-block - - - -Use the [`delete`](reference-pulsar-admin.md#namespaces-delete) subcommand and specify the namespace: - -```shell - -$ pulsar-admin namespaces delete test-tenant/ns1 - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace|operation/deleteNamespace?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().deleteNamespace(namespace); - -``` - - - - -```` - -### Configure replication clusters - -#### Set replication cluster - -It sets replication clusters for a namespace, so Pulsar can internally replicate publish message from one colo to another colo. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-clusters test-tenant/ns1 \ - --clusters cl1 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/replication|operation/setNamespaceReplicationClusters?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setNamespaceReplicationClusters(namespace, clusters); - -``` - - - - -```` - -#### Get replication cluster - -It gives a list of replication clusters for a given namespace. 

````mdx-code-block


```

$ pulsar-admin namespaces get-clusters test-tenant/ns1

```

```

cl1

```


{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/replication|operation/getNamespaceReplicationClusters?version=@pulsar:version_number@}


```java

admin.namespaces().getNamespaceReplicationClusters(namespace)

```


````

### Configure backlog quota policies

#### Set backlog quota policies

A backlog quota allows the broker to restrict the bandwidth/storage of a namespace once it reaches a configured threshold. The admin sets the limit and the action the broker takes when the limit is reached:

 1. producer_request_hold: the broker holds and does not persist produce request payloads.

 2. producer_exception: the broker disconnects the client with an exception.

 3. consumer_backlog_eviction: the broker starts discarding backlog messages.

 The backlog quota restriction is defined for the backlog-quota-type `destination_storage`.

````mdx-code-block


```

$ pulsar-admin namespaces set-backlog-quota --limit 10G --limitTime 36000 --policy producer_request_hold test-tenant/ns1

```

```

N/A

```


{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/setBacklogQuota?version=@pulsar:version_number@}


```java

admin.namespaces().setBacklogQuota(namespace, new BacklogQuota(limit, limitTime, policy))

```


````

#### Get backlog quota policies

It shows the configured backlog quota of a given namespace.

````mdx-code-block


```

$ pulsar-admin namespaces get-backlog-quotas test-tenant/ns1

```

```json

{
  "destination_storage": {
    "limit": 10,
    "policy": "producer_request_hold"
  }
}

```


{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/backlogQuotaMap|operation/getBacklogQuotaMap?version=@pulsar:version_number@}


```java

admin.namespaces().getBacklogQuotaMap(namespace);

```


````

#### Remove backlog quota policies

It removes the backlog quota policies of a given namespace.

````mdx-code-block


```

$ pulsar-admin namespaces remove-backlog-quota test-tenant/ns1

```

```

N/A

```


{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/removeBacklogQuota?version=@pulsar:version_number@}


```java

admin.namespaces().removeBacklogQuota(namespace, backlogQuotaType)

```


````

### Configure persistence policies

#### Set persistence policies

Persistence policies allow you to configure the persistence level of all topic messages under a given namespace.
- - - Bookkeeper-ack-quorum: Number of acks (guaranteed copies) to wait for each entry, default: 0 - - - Bookkeeper-ensemble: Number of bookies to use for a topic, default: 0 - - - Bookkeeper-write-quorum: How many writes to make of each entry, default: 0 - - - Ml-mark-delete-max-rate: Throttling rate of mark-delete operation (0 means no throttle), default: 0.0 - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-persistence --bookkeeper-ack-quorum 2 --bookkeeper-ensemble 3 --bookkeeper-write-quorum 2 --ml-mark-delete-max-rate 0 test-tenant/ns1 - -``` - -``` - -N/A - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/setPersistence?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setPersistence(namespace,new PersistencePolicies(bookkeeperEnsemble, bookkeeperWriteQuorum,bookkeeperAckQuorum,managedLedgerMaxMarkDeleteRate)) - -``` - - - - -```` - -#### Get persistence policies - -It shows the configured persistence policies of a given namespace. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-persistence test-tenant/ns1 - -``` - -```json - -{ - "bookkeeperEnsemble": 3, - "bookkeeperWriteQuorum": 2, - "bookkeeperAckQuorum": 2, - "managedLedgerMaxMarkDeleteRate": 0 -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/getPersistence?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getPersistence(namespace) - -``` - - - - -```` - -### Configure namespace bundles - -#### Unload namespace bundles - -The namespace bundle is a virtual group of topics which belong to the same namespace. If the broker gets overloaded with the number of bundles, this command can help unload a bundle from that broker, so it can be served by some other less-loaded brokers. The namespace bundle ID ranges from 0x00000000 to 0xffffffff. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces unload --bundle 0x00000000_0xffffffff test-tenant/ns1 - -``` - -``` - -N/A - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace/:bundle/unload|operation/unloadNamespaceBundle?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().unloadNamespaceBundle(namespace, bundle) - -``` - - - - -```` - -#### Split namespace bundles - -Each namespace bundle can contain multiple topics and each bundle can be served by only one broker. -If a single bundle is creating an excessive load on a broker, an admin splits the bundle using this command permitting one or more of the new bundles to be unloaded thus spreading the load across the brokers. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces split-bundle --bundle 0x00000000_0xffffffff test-tenant/ns1 - -``` - -``` - -N/A - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace/:bundle/split|operation/splitNamespaceBundle?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().splitNamespaceBundle(namespace, bundle) - -``` - - - - -```` - -### Configure message TTL - -#### Set message-ttl - -It configures message’s time to live (in seconds) duration. 
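As a runnable complement to the per-interface snippets below, the following Java sketch sets a 100-second TTL on a namespace and reads it back. It is a minimal example assuming a local broker at `http://localhost:8080` without authentication; the namespace name is a placeholder.

```java

import org.apache.pulsar.client.admin.PulsarAdmin;

public class MessageTtlExample {
    public static void main(String[] args) throws Exception {
        try (PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080") // assumed local broker, no auth
                .build()) {
            String namespace = "test-tenant/ns1";
            // Expire unacknowledged messages after 100 seconds.
            admin.namespaces().setNamespaceMessageTTL(namespace, 100);
            // Read the TTL back; prints 100.
            System.out.println(admin.namespaces().getNamespaceMessageTTL(namespace));
        }
    }
}

```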

````mdx-code-block


```

$ pulsar-admin namespaces set-message-ttl --messageTTL 100 test-tenant/ns1

```

```

N/A

```


{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/setNamespaceMessageTTL?version=@pulsar:version_number@}


```java

admin.namespaces().setNamespaceMessageTTL(namespace, messageTTL)

```


````

#### Get message-ttl

It shows the configured message TTL (in seconds) of a given namespace.

````mdx-code-block


```

$ pulsar-admin namespaces get-message-ttl test-tenant/ns1

```

```

100

```


{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/getNamespaceMessageTTL?version=@pulsar:version_number@}


```java

admin.namespaces().getNamespaceMessageTTL(namespace)

```


````

#### Remove message-ttl

It removes the configured message TTL of a given namespace.

````mdx-code-block


```

$ pulsar-admin namespaces remove-message-ttl test-tenant/ns1

```

```

N/A

```


{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/removeNamespaceMessageTTL?version=@pulsar:version_number@}


```java

admin.namespaces().removeNamespaceMessageTTL(namespace)

```


````


### Clear backlog

#### Clear namespace backlog

It clears the message backlog of all the topics that belong to a specific namespace. You can also clear the backlog of a specific subscription only.

````mdx-code-block


```

$ pulsar-admin namespaces clear-backlog --sub my-subscription test-tenant/ns1

```

```

N/A

```


{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/clearBacklog|operation/clearNamespaceBacklogForSubscription?version=@pulsar:version_number@}


```java

admin.namespaces().clearNamespaceBacklogForSubscription(namespace, subscription)

```


````

#### Clear bundle backlog

It clears the message backlog of all the topics that belong to a specific namespace bundle. You can also clear the backlog of a specific subscription only.

````mdx-code-block


```

$ pulsar-admin namespaces clear-backlog --bundle 0x00000000_0xffffffff --sub my-subscription test-tenant/ns1

```

```

N/A

```


{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/:bundle/clearBacklog|operation/clearNamespaceBundleBacklogForSubscription?version=@pulsar:version_number@}


```java

admin.namespaces().clearNamespaceBundleBacklogForSubscription(namespace, bundle, subscription)

```


````

### Configure retention

#### Set retention

Each namespace contains multiple topics, and the retention policy of each topic determines how much acknowledged data is kept: messages are retained up to a storage-size threshold or for a certain period. This command configures the retention size and time for topics in a given namespace.

````mdx-code-block


```

$ pulsar-admin namespaces set-retention --size 100 --time 10 test-tenant/ns1

```

```

N/A

```


{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/retention|operation/setRetention?version=@pulsar:version_number@}


```java

admin.namespaces().setRetention(namespace, new RetentionPolicies(retentionTimeInMin, retentionSizeInMB))

```


````

#### Get retention

It shows the configured retention policy of a given namespace.
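As a runnable complement to the snippets below, the following Java sketch sets a retention policy and reads it back. It is a minimal example assuming a local broker at `http://localhost:8080` without authentication; the `RetentionPolicies` constructor is the same one used in the set-retention snippet above.

```java

import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.common.policies.data.RetentionPolicies;

public class RetentionExample {
    public static void main(String[] args) throws Exception {
        try (PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080") // assumed local broker, no auth
                .build()) {
            String namespace = "test-tenant/ns1";
            // Retain acknowledged messages for 10 minutes, up to 100 MB per topic.
            admin.namespaces().setRetention(namespace, new RetentionPolicies(10, 100));
            RetentionPolicies retention = admin.namespaces().getRetention(namespace);
            System.out.println(retention.getRetentionTimeInMinutes() + " min / "
                    + retention.getRetentionSizeInMB() + " MB");
        }
    }
}

```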
- -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-retention test-tenant/ns1 - -``` - -```json - -{ - "retentionTimeInMinutes": 10, - "retentionSizeInMB": 100 -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/retention|operation/getRetention?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getRetention(namespace) - -``` - - - - -```` - -### Configure dispatch throttling for topics - -#### Set dispatch throttling for topics - -It sets message dispatch rate for all the topics under a given namespace. -The dispatch rate can be restricted by the number of messages per X seconds (`msg-dispatch-rate`) or by the number of message-bytes per X second (`byte-dispatch-rate`). -dispatch rate is in second and it can be configured with `dispatch-rate-period`. Default value of `msg-dispatch-rate` and `byte-dispatch-rate` is -1 which -disables the throttling. - -:::note - -- If neither `clusterDispatchRate` nor `topicDispatchRate` is configured, dispatch throttling is disabled. - -- If `topicDispatchRate` is not configured, `clusterDispatchRate` takes effect. - -- If `topicDispatchRate` is configured, `topicDispatchRate` takes effect. - -::: - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-dispatch-rate test-tenant/ns1 \ - --msg-dispatch-rate 1000 \ - --byte-dispatch-rate 1048576 \ - --dispatch-rate-period 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/dispatchRate|operation/setDispatchRate?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setDispatchRate(namespace, new DispatchRate(1000, 1048576, 1)) - -``` - - - - -```` - -#### Get configured message-rate for topics - -It shows configured message-rate for the namespace (topics under this namespace can dispatch this many messages per second) - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-dispatch-rate test-tenant/ns1 - -``` - -```json - -{ - "dispatchThrottlingRatePerTopicInMsg" : 1000, - "dispatchThrottlingRatePerTopicInByte" : 1048576, - "ratePeriodInSecond" : 1 -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/dispatchRate|operation/getDispatchRate?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getDispatchRate(namespace) - -``` - - - - -```` - -### Configure dispatch throttling for subscription - -#### Set dispatch throttling for subscription - -It sets message dispatch rate for all the subscription of topics under a given namespace. -The dispatch rate can be restricted by the number of messages per X seconds (`msg-dispatch-rate`) or by the number of message-bytes per X second (`byte-dispatch-rate`). -dispatch rate is in second and it can be configured with `dispatch-rate-period`. Default value of `msg-dispatch-rate` and `byte-dispatch-rate` is -1 which -disables the throttling. 
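Before the per-interface examples that follow, here is a hedged Java sketch of setting and reading the subscription dispatch rate. It assumes a local broker at `http://localhost:8080` without authentication and uses the `new DispatchRate(...)` constructor shown in the Java snippets of this document; newer client versions may expose a `DispatchRate.builder()` instead.

```java

import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.common.policies.data.DispatchRate;

public class SubscriptionDispatchRateExample {
    public static void main(String[] args) throws Exception {
        try (PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080") // assumed local broker, no auth
                .build()) {
            String namespace = "test-tenant/ns1";
            // Throttle each subscription to 1000 messages and 1 MiB per 1-second period.
            admin.namespaces().setSubscriptionDispatchRate(namespace,
                    new DispatchRate(1000, 1048576, 1));
            System.out.println(admin.namespaces().getSubscriptionDispatchRate(namespace));
        }
    }
}

```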
- -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-subscription-dispatch-rate test-tenant/ns1 \ - --msg-dispatch-rate 1000 \ - --byte-dispatch-rate 1048576 \ - --dispatch-rate-period 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/subscriptionDispatchRate|operation/setDispatchRate?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setSubscriptionDispatchRate(namespace, new DispatchRate(1000, 1048576, 1)) - -``` - - - - -```` - -#### Get configured message-rate for subscription - -It shows configured message-rate for the namespace (topics under this namespace can dispatch this many messages per second) - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-subscription-dispatch-rate test-tenant/ns1 - -``` - -```json - -{ - "dispatchThrottlingRatePerTopicInMsg" : 1000, - "dispatchThrottlingRatePerTopicInByte" : 1048576, - "ratePeriodInSecond" : 1 -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/subscriptionDispatchRate|operation/getDispatchRate?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getSubscriptionDispatchRate(namespace) - -``` - - - - -```` - -### Configure dispatch throttling for replicator - -#### Set dispatch throttling for replicator - -It sets message dispatch rate for all the replicator between replication clusters under a given namespace. -The dispatch rate can be restricted by the number of messages per X seconds (`msg-dispatch-rate`) or by the number of message-bytes per X second (`byte-dispatch-rate`). -dispatch rate is in second and it can be configured with `dispatch-rate-period`. Default value of `msg-dispatch-rate` and `byte-dispatch-rate` is -1 which -disables the throttling. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-replicator-dispatch-rate test-tenant/ns1 \ - --msg-dispatch-rate 1000 \ - --byte-dispatch-rate 1048576 \ - --dispatch-rate-period 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/replicatorDispatchRate|operation/setDispatchRate?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setReplicatorDispatchRate(namespace, new DispatchRate(1000, 1048576, 1)) - -``` - - - - -```` - -#### Get configured message-rate for replicator - -It shows configured message-rate for the namespace (topics under this namespace can dispatch this many messages per second) - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-replicator-dispatch-rate test-tenant/ns1 - -``` - -```json - -{ - "dispatchThrottlingRatePerTopicInMsg" : 1000, - "dispatchThrottlingRatePerTopicInByte" : 1048576, - "ratePeriodInSecond" : 1 -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/replicatorDispatchRate|operation/getDispatchRate?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getReplicatorDispatchRate(namespace) - -``` - - - - -```` - -### Configure deduplication snapshot interval - -#### Get deduplication snapshot interval - -It shows configured `deduplicationSnapshotInterval` for a namespace (Each topic under the namespace will take a deduplication snapshot according to this interval) - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-deduplication-snapshot-interval test-tenant/ns1 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/deduplicationSnapshotInterval|operation/getDeduplicationSnapshotInterval?version=@pulsar:version_number@} - - - - -```java - 
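// Read the namespace-level deduplicationSnapshotInterval (in seconds).
// Each topic under the namespace takes a deduplication snapshot at this interval.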
-admin.namespaces().getDeduplicationSnapshotInterval(namespace) - -``` - - - - -```` - -#### Set deduplication snapshot interval - -Set configured `deduplicationSnapshotInterval` for a namespace. Each topic under the namespace will take a deduplication snapshot according to this interval. -`brokerDeduplicationEnabled` must be set to `true` for this property to take effect. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-deduplication-snapshot-interval test-tenant/ns1 --interval 1000 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/deduplicationSnapshotInterval|operation/setDeduplicationSnapshotInterval?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setDeduplicationSnapshotInterval(namespace, 1000) - -``` - - - - -```` - -#### Remove deduplication snapshot interval - -Remove configured `deduplicationSnapshotInterval` of a namespace (Each topic under the namespace will take a deduplication snapshot according to this interval) - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces remove-deduplication-snapshot-interval test-tenant/ns1 - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/deduplicationSnapshotInterval|operation/deleteDeduplicationSnapshotInterval?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().removeDeduplicationSnapshotInterval(namespace) - -``` - - - - -```` - -### Namespace isolation - -You can use the [Pulsar isolation policy](administration-isolation.md) to allocate resources (broker and bookie) for a namespace. - -### Unload namespaces from a broker - -You can unload a namespace, or a [namespace bundle](reference-terminology.md#namespace-bundle), from the Pulsar [broker](reference-terminology.md#broker) that is currently responsible for it. - -#### pulsar-admin - -Use the [`unload`](reference-pulsar-admin.md#unload) subcommand of the [`namespaces`](reference-pulsar-admin.md#namespaces) command. - -````mdx-code-block - - - -```shell - -$ pulsar-admin namespaces unload my-tenant/my-ns - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace/unload|operation/unloadNamespace?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().unload(namespace) - -``` - - - - -```` diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-non-partitioned-topics.md b/site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-non-partitioned-topics.md deleted file mode 100644 index e6347bb8c363a1..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-non-partitioned-topics.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -id: admin-api-non-partitioned-topics -title: Managing non-partitioned topics -sidebar_label: "Non-partitioned topics" -original_id: admin-api-non-partitioned-topics ---- - -For details of the content, refer to [manage topics](admin-api-topics.md). 
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-non-persistent-topics.md b/site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-non-persistent-topics.md
deleted file mode 100644
index 3126a6494c7153..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-non-persistent-topics.md
+++ /dev/null
@@ -1,8 +0,0 @@
---
id: admin-api-non-persistent-topics
title: Managing non-persistent topics
sidebar_label: "Non-Persistent topics"
original_id: admin-api-non-persistent-topics
---

For details of the content, refer to [manage topics](admin-api-topics.md).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-overview.md b/site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-overview.md
deleted file mode 100644
index 81e6587fab3509..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-overview.md
+++ /dev/null
@@ -1,133 +0,0 @@
---
id: admin-api-overview
title: Pulsar admin interface
sidebar_label: "Overview"
original_id: admin-api-overview
---

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````


The Pulsar admin interface enables you to manage all important entities in a Pulsar instance, such as tenants, topics, and namespaces.

You can interact with the admin interface via:

- HTTP calls, which are made against the admin {@inject: rest:REST:/} API provided by Pulsar brokers. Some REST calls are answered with a [`307 Temporary Redirect`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/307) to the owner broker, so HTTP callers must handle `307 Temporary Redirect`. If you use `curl` commands, specify `-L` to follow redirections.
- A Java client interface.
- The `pulsar-admin` CLI tool, which is available in the `bin` folder of your Pulsar installation:

  ```shell

  $ bin/pulsar-admin

  ```

  For the complete list of `pulsar-admin` commands, see the [Pulsar admin snapshot](https://pulsar.apache.org/tools/pulsar-admin/).


> **The REST API is the admin interface**. Both the `pulsar-admin` CLI tool and the Java client use the REST API. If you implement your own admin interface client, you should use the REST API.

## Admin setup

Each of the three admin interfaces (the `pulsar-admin` CLI tool, the {@inject: rest:REST:/} API, and the [Java admin API](/api/admin)) requires some special setup if you have enabled authentication in your Pulsar instance.

````mdx-code-block


If you have enabled authentication, you need to provide an auth configuration to use the `pulsar-admin` tool. By default, the configuration for the `pulsar-admin` tool is in the [`conf/client.conf`](reference-configuration.md#client) file.
The following are the available parameters:

|Name|Description|Default|
|----|-----------|-------|
|webServiceUrl|The web URL for the cluster.|http://localhost:8080/|
|brokerServiceUrl|The Pulsar protocol URL for the cluster.|pulsar://localhost:6650/|
|authPlugin|The authentication plugin.| |
|authParams|The authentication parameters for the cluster, as a comma-separated string.| |
|useTls|Whether or not TLS authentication is enforced in the cluster.|false|
|tlsAllowInsecureConnection|Whether the client accepts untrusted TLS certificates from the broker.|false|
|tlsTrustCertsFilePath|Path for the trusted TLS certificate file.| |


You can find details for the REST API exposed by Pulsar brokers in this {@inject: rest:document:/}.


To use the Java admin API, instantiate a {@inject: javadoc:PulsarAdmin:/admin/org/apache/pulsar/client/admin/PulsarAdmin} object, and specify a URL for a Pulsar broker and a {@inject: javadoc:PulsarAdminBuilder:/admin/org/apache/pulsar/client/admin/PulsarAdminBuilder}. The following is a minimal example using `localhost`:

```java

String url = "http://localhost:8080";
// Pass the fully-qualified auth-plugin class name if Pulsar security is enabled
String authPluginClassName = "com.org.MyAuthPluginClass";
// Pass auth-params if the auth-plugin class requires them
String authParams = "param1=value1";
boolean useTls = false;
boolean tlsAllowInsecureConnection = false;
String tlsTrustCertsFilePath = null;
PulsarAdmin admin = PulsarAdmin.builder()
    .authentication(authPluginClassName, authParams)
    .serviceHttpUrl(url)
    .tlsTrustCertsFilePath(tlsTrustCertsFilePath)
    .allowTlsInsecureConnection(tlsAllowInsecureConnection)
    .build();

```

If you use multiple brokers, you can specify a multi-host service URL, just as you can for the Pulsar broker service URL. For example,

```java

String url = "http://localhost:8080,localhost:8081,localhost:8082";
// Pass the fully-qualified auth-plugin class name if Pulsar security is enabled
String authPluginClassName = "com.org.MyAuthPluginClass";
// Pass auth-params if the auth-plugin class requires them
String authParams = "param1=value1";
boolean useTls = false;
boolean tlsAllowInsecureConnection = false;
String tlsTrustCertsFilePath = null;
PulsarAdmin admin = PulsarAdmin.builder()
    .authentication(authPluginClassName, authParams)
    .serviceHttpUrl(url)
    .tlsTrustCertsFilePath(tlsTrustCertsFilePath)
    .allowTlsInsecureConnection(tlsAllowInsecureConnection)
    .build();

```


````

## How to define Pulsar resource names when running Pulsar in Kubernetes
If you run Pulsar Functions or connectors on Kubernetes, you need to follow the Kubernetes naming convention when defining the names of your Pulsar resources, whichever admin interface you use.

Kubernetes requires a name that can be used as a DNS subdomain name as defined in [RFC 1123](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names). Pulsar allows more legal characters in names than the Kubernetes naming convention does. If you create a Pulsar resource name with special characters that are not supported by Kubernetes (for example, including colons in a Pulsar namespace name), the Kubernetes runtime translates the Pulsar object names into Kubernetes resource labels which are in RFC 1123-compliant forms. Consequently, you can run functions or connectors using the Kubernetes runtime.
The rules for translating Pulsar object names into Kubernetes resource labels are as follows:

- Truncate to 63 characters

- Replace the following characters with dashes (-):

  - Non-alphanumeric characters

  - Underscores (_)

  - Dots (.)

- Replace beginning and ending non-alphanumeric characters with 0

:::tip

- If you get an error in translating Pulsar object names into Kubernetes resource labels (for example, you may have a naming collision if your Pulsar object name is too long) or want to customize the translating rules, see [customize Kubernetes runtime](https://pulsar.apache.org/docs/en/next/functions-runtime/#customize-kubernetes-runtime).
- For how to configure Kubernetes runtime, see [here](https://pulsar.apache.org/docs/en/next/functions-runtime/#configure-kubernetes-runtime).

:::

diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-packages.md b/site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-packages.md
deleted file mode 100644
index 70bebfcf35dfee..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-packages.md
+++ /dev/null
@@ -1,381 +0,0 @@
---
id: admin-api-packages
title: Manage packages
sidebar_label: "Packages"
original_id: admin-api-packages
---

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````


Package management enables version management and simplifies the upgrade and rollback processes for Functions, Sinks, and Sources. When you use the same function, sink, or source in different namespaces, you can upload it to a common package management system.

## Package name

A `package` is identified by five parts: `type`, `tenant`, `namespace`, `package name`, and `version`.

| Part | Description |
|-------|-------------|
|`type` |The type of the package. The following types are supported: `function`, `sink` and `source`. |
| `name`|The fully qualified name of the package: `<tenant>/<namespace>/<package name>`.|
|`version`|The version of the package.|

The following is a code sample.

```java

class PackageName {
   private final PackageType type;
   private final String namespace;
   private final String tenant;
   private final String name;
   private final String version;
}

enum PackageType {
   FUNCTION("function"), SINK("sink"), SOURCE("source");
}

```

## Package URL
A package is located using a URL. The package URL is written in the following format:

```shell

<type>://<tenant>/<namespace>/<package name>@<version>

```

The following are package URL examples:

`sink://public/default/mysql-sink@1.0`
`function://my-tenant/my-ns/my-function@0.1`
`source://my-tenant/my-ns/mysql-cdc-source@2.3`

The package management system stores the data, versions, and metadata of each package. The metadata is shown in the following table.

| Metadata | Description |
|----------|-------------|
|description|The description of the package.|
|contact |The contact information of a package. For example, team email.|
|create_time| The time when the package is created.|
|modification_time| The time when the package is modified.|
|properties |A key/value map that stores your own information.|

## Permissions

The packages are organized by tenant and namespace, so you can apply the tenant and namespace permissions to packages directly.

## Package resources
You can manage packages with the command-line tool, the REST API, or the Java client.

### Upload a package
You can upload a package to the package management service in the following ways.
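For orientation, here is a self-contained Java sketch of an upload, ahead of the per-interface examples below. It is hedged: it assumes a local broker at `http://localhost:8080` without authentication, a local file `./my-function.jar`, and a `PackageMetadata` class with a builder; the exact import path of `PackageMetadata` may vary by client version.

```java

import org.apache.pulsar.client.admin.PulsarAdmin;
// Assumption: PackageMetadata lives here and exposes a builder; verify for your version.
import org.apache.pulsar.packages.management.core.common.PackageMetadata;

public class UploadPackageExample {
    public static void main(String[] args) throws Exception {
        try (PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080") // assumed local broker, no auth
                .build()) {
            PackageMetadata metadata = PackageMetadata.builder()
                    .description("A function package for testing")
                    .contact("dev-team@example.com")
                    .build();
            // Upload the local JAR as version v0.1 of the package.
            admin.packages().upload(metadata, "function://public/default/example@v0.1",
                    "./my-function.jar");
        }
    }
}

```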

````mdx-code-block


```shell

bin/pulsar-admin packages upload function://public/default/example@v0.1 --path package-file --description package-description

```


{@inject: endpoint|POST|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version|operation/upload?version=@pulsar:version_number@}


Upload a package to the package management service synchronously.

```java

   void upload(PackageMetadata metadata, String packageName, String path) throws PulsarAdminException;

```

Upload a package to the package management service asynchronously.

```java

   CompletableFuture<Void> uploadAsync(PackageMetadata metadata, String packageName, String path);

```


````

### Download a package
You can download a package from the package management service in the following ways.

````mdx-code-block


```shell

bin/pulsar-admin packages download function://public/default/example@v0.1 --path package-file

```


{@inject: endpoint|GET|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version|operation/download?version=@pulsar:version_number@}


Download a package from the package management service synchronously.

```java

   void download(String packageName, String path) throws PulsarAdminException;

```

Download a package from the package management service asynchronously.

```java

   CompletableFuture<Void> downloadAsync(String packageName, String path);

```


````

### List all versions of a package
You can get a list of all versions of a package in the following ways.
````mdx-code-block


```shell

bin/pulsar-admin packages list-versions function://public/default/example

```


{@inject: endpoint|GET|/admin/v3/packages/:type/:tenant/:namespace/:packageName|operation/listPackageVersion?version=@pulsar:version_number@}


List all versions of a package synchronously.

```java

   List<String> listPackageVersions(String packageName) throws PulsarAdminException;

```

List all versions of a package asynchronously.

```java

   CompletableFuture<List<String>> listPackageVersionsAsync(String packageName);

```


````

### List all packages of a specified type under a namespace
You can get a list of all the packages with the given type in a namespace in the following ways.
````mdx-code-block


```shell

bin/pulsar-admin packages list --type function public/default

```


{@inject: endpoint|GET|/admin/v3/packages/:type/:tenant/:namespace|operation/listPackages?version=@pulsar:version_number@}


List all the packages with the given type in a namespace synchronously.

```java

   List<String> listPackages(String type, String namespace) throws PulsarAdminException;

```

List all the packages with the given type in a namespace asynchronously.

```java

   CompletableFuture<List<String>> listPackagesAsync(String type, String namespace);

```


````

### Get the metadata of a package
You can get the metadata of a package in the following ways.

````mdx-code-block


```shell

bin/pulsar-admin packages get-metadata function://public/default/test@v1

```


{@inject: endpoint|GET|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version/metadata|operation/getMeta?version=@pulsar:version_number@}


Get the metadata of a package synchronously.

```java

   PackageMetadata getMetadata(String packageName) throws PulsarAdminException;

```

Get the metadata of a package asynchronously.

```java

   CompletableFuture<PackageMetadata> getMetadataAsync(String packageName);

```


````

### Update the metadata of a package
You can update the metadata of a package in the following ways.
````mdx-code-block


```shell

bin/pulsar-admin packages update-metadata function://public/default/example@v0.1 --description update-description

```


{@inject: endpoint|PUT|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version/metadata|operation/updateMeta?version=@pulsar:version_number@}


Update the metadata of a package synchronously.

```java

   void updateMetadata(String packageName, PackageMetadata metadata) throws PulsarAdminException;

```

Update the metadata of a package asynchronously.

```java

   CompletableFuture<Void> updateMetadataAsync(String packageName, PackageMetadata metadata);

```


````

### Delete a specified package
You can delete a specified package with its package name in the following ways.

````mdx-code-block


The following command example deletes a package of version 0.1.

```shell

bin/pulsar-admin packages delete function://public/default/example@v0.1

```


{@inject: endpoint|DELETE|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version|operation/delete?version=@pulsar:version_number@}


Delete a specified package synchronously.

```java

   void delete(String packageName) throws PulsarAdminException;

```

Delete a specified package asynchronously.

```java

   CompletableFuture<Void> deleteAsync(String packageName);

```


````
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-partitioned-topics.md b/site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-partitioned-topics.md
deleted file mode 100644
index 5ce182282e0324..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-partitioned-topics.md
+++ /dev/null
@@ -1,8 +0,0 @@
---
id: admin-api-partitioned-topics
title: Managing partitioned topics
sidebar_label: "Partitioned topics"
original_id: admin-api-partitioned-topics
---

For details of the content, refer to [manage topics](admin-api-topics.md).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-permissions.md b/site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-permissions.md
deleted file mode 100644
index 6897517553f2b2..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-permissions.md
+++ /dev/null
@@ -1,174 +0,0 @@
---
id: admin-api-permissions
title: Managing permissions
sidebar_label: "Permissions"
original_id: admin-api-permissions
---

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````


Permissions in Pulsar are managed at the [namespace](reference-terminology.md#namespace) level
(that is, within [tenants](reference-terminology.md#tenant) and [clusters](reference-terminology.md#cluster)).

## Grant permissions

You can grant permissions to specific roles for a list of operations such as `produce` and `consume`.
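Ahead of the per-interface examples that follow, here is a self-contained Java sketch of granting namespace permissions. It is a minimal example assuming a local broker at `http://localhost:8080`; in a secured cluster the admin client itself would also need credentials.

```java

import java.util.EnumSet;
import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.common.policies.data.AuthAction;

public class GrantNamespacePermissionExample {
    public static void main(String[] args) throws Exception {
        try (PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080") // assumed local broker, admin credentials omitted
                .build()) {
            // Allow the role "admin10" to produce to and consume from the namespace.
            admin.namespaces().grantPermissionOnNamespace("test-tenant/ns1", "admin10",
                    EnumSet.of(AuthAction.produce, AuthAction.consume));
        }
    }
}

```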
- -````mdx-code-block - - - -Use the [`grant-permission`](reference-pulsar-admin.md#grant-permission) subcommand and specify a namespace, actions using the `--actions` flag, and a role using the `--role` flag: - -```shell - -$ pulsar-admin namespaces grant-permission test-tenant/ns1 \ - --actions produce,consume \ - --role admin10 - -``` - -Wildcard authorization can be performed when `authorizationAllowWildcardsMatching` is set to `true` in `broker.conf`. - -e.g. - -```shell - -$ pulsar-admin namespaces grant-permission test-tenant/ns1 \ - --actions produce,consume \ - --role 'my.role.*' - -``` - -Then, roles `my.role.1`, `my.role.2`, `my.role.foo`, `my.role.bar`, etc. can produce and consume. - -```shell - -$ pulsar-admin namespaces grant-permission test-tenant/ns1 \ - --actions produce,consume \ - --role '*.role.my' - -``` - -Then, roles `1.role.my`, `2.role.my`, `foo.role.my`, `bar.role.my`, etc. can produce and consume. - -**Note**: A wildcard matching works at **the beginning or end of the role name only**. - -e.g. - -```shell - -$ pulsar-admin namespaces grant-permission test-tenant/ns1 \ - --actions produce,consume \ - --role 'my.*.role' - -``` - -In this case, only the role `my.*.role` has permissions. -Roles `my.1.role`, `my.2.role`, `my.foo.role`, `my.bar.role`, etc. **cannot** produce and consume. - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/permissions/:role|operation/grantPermissionOnNamespace?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().grantPermissionOnNamespace(namespace, role, getAuthActions(actions)); - -``` - - - - -```` - -## Get permissions - -You can see which permissions have been granted to which roles in a namespace. - -````mdx-code-block - - - -Use the [`permissions`](reference-pulsar-admin#permissions) subcommand and specify a namespace: - -```shell - -$ pulsar-admin namespaces permissions test-tenant/ns1 -{ - "admin10": [ - "produce", - "consume" - ] -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/permissions|operation/getPermissions?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getPermissions(namespace); - -``` - - - - -```` - -## Revoke permissions - -You can revoke permissions from specific roles, which means that those roles will no longer have access to the specified namespace. - -````mdx-code-block - - - -Use the [`revoke-permission`](reference-pulsar-admin.md#revoke-permission) subcommand and specify a namespace and a role using the `--role` flag: - -```shell - -$ pulsar-admin namespaces revoke-permission test-tenant/ns1 \ - --role admin10 - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/permissions/:role|operation/revokePermissionsOnNamespace?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().revokePermissionsOnNamespace(namespace, role); - -``` - - - - -```` \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-persistent-topics.md b/site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-persistent-topics.md deleted file mode 100644 index 50d135b72f5424..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-persistent-topics.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -id: admin-api-persistent-topics -title: Managing persistent topics -sidebar_label: "Persistent topics" -original_id: admin-api-persistent-topics ---- - -For details of the content, refer to [manage topics](admin-api-topics.md). 
\ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-schemas.md b/site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-schemas.md deleted file mode 100644 index 9ffe21f5b0f750..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-schemas.md +++ /dev/null @@ -1,7 +0,0 @@ ---- -id: admin-api-schemas -title: Managing Schemas -sidebar_label: "Schemas" -original_id: admin-api-schemas ---- - diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-tenants.md b/site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-tenants.md deleted file mode 100644 index d78aa2e55f4c33..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-tenants.md +++ /dev/null @@ -1,228 +0,0 @@ ---- -id: admin-api-tenants -title: Managing Tenants -sidebar_label: "Tenants" -original_id: admin-api-tenants ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -Tenants, like namespaces, can be managed using the [admin API](admin-api-overview.md). There are currently two configurable aspects of tenants: - -* Admin roles -* Allowed clusters - -## Tenant resources - -### List - -You can list all of the tenants associated with an [instance](reference-terminology.md#instance). - -````mdx-code-block - - - -Use the [`list`](reference-pulsar-admin.md#tenants-list) subcommand. - -```shell - -$ pulsar-admin tenants list -my-tenant-1 -my-tenant-2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/tenants|operation/getTenants?version=@pulsar:version_number@} - - - - -```java - -admin.tenants().getTenants(); - -``` - - - - -```` - -### Create - -You can create a new tenant. - -````mdx-code-block - - - -Use the [`create`](reference-pulsar-admin.md#tenants-create) subcommand: - -```shell - -$ pulsar-admin tenants create my-tenant - -``` - -When creating a tenant, you can assign admin roles using the `-r`/`--admin-roles` flag. You can specify multiple roles as a comma-separated list. Here are some examples: - -```shell - -$ pulsar-admin tenants create my-tenant \ - --admin-roles role1,role2,role3 - -$ pulsar-admin tenants create my-tenant \ - -r role1 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/tenants/:tenant|operation/createTenant?version=@pulsar:version_number@} - - - - -```java - -admin.tenants().createTenant(tenantName, tenantInfo); - -``` - - - - -```` - -### Get configuration - -You can fetch the [configuration](reference-configuration.md) for an existing tenant at any time. - -````mdx-code-block - - - -Use the [`get`](reference-pulsar-admin.md#tenants-get) subcommand and specify the name of the tenant. Here's an example: - -```shell - -$ pulsar-admin tenants get my-tenant -{ - "adminRoles": [ - "admin1", - "admin2" - ], - "allowedClusters": [ - "cl1", - "cl2" - ] -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/tenants/:cluster|operation/getTenant?version=@pulsar:version_number@} - - - - -```java - -admin.tenants().getTenantInfo(tenantName); - -``` - - - - -```` - -### Delete - -Tenants can be deleted from a Pulsar [instance](reference-terminology.md#instance). - -````mdx-code-block - - - -Use the [`delete`](reference-pulsar-admin.md#tenants-delete) subcommand and specify the name of the tenant. 

```shell

$ pulsar-admin tenants delete my-tenant

```


{@inject: endpoint|DELETE|/admin/v2/tenants/:tenant|operation/deleteTenant?version=@pulsar:version_number@}


```java

admin.tenants().deleteTenant(tenantName);

```


````

### Update

You can update a tenant's configuration.

````mdx-code-block


Use the [`update`](reference-pulsar-admin.md#tenants-update) subcommand.

```shell

$ pulsar-admin tenants update my-tenant

```


{@inject: endpoint|POST|/admin/v2/tenants/:tenant|operation/updateTenant?version=@pulsar:version_number@}


```java

admin.tenants().updateTenant(tenantName, tenantInfo);

```


````
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-topics.md b/site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-topics.md
deleted file mode 100644
index c09c0016854356..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/admin-api-topics.md
+++ /dev/null
@@ -1,2133 +0,0 @@
---
id: admin-api-topics
title: Manage topics
sidebar_label: "Topics"
original_id: admin-api-topics
---

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````


Pulsar has persistent and non-persistent topics. A persistent topic is a logical endpoint for publishing and consuming messages. The topic name structure for persistent topics is:

```shell

persistent://tenant/namespace/topic

```

Non-persistent topics are used in applications that only consume real-time published messages and do not need durability guarantees. Skipping persistence reduces message-publish latency by removing the overhead of persisting messages. The topic name structure for non-persistent topics is:

```shell

non-persistent://tenant/namespace/topic

```

## Manage topic resources
Whether a topic is persistent or non-persistent, you can manage its resources with the `pulsar-admin` tool, the REST API, or the Java admin API.

:::note

In the REST API, `:schema` stands for persistent or non-persistent. `:tenant`, `:namespace`, and `:x` are variables; replace them with the real tenant, namespace, and `x` names when using them.
Take {@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getList?version=@pulsar:version_number@} as an example. To get the list of persistent topics in the REST API, use `https://pulsar.apache.org/admin/v2/persistent/my-tenant/my-namespace`. To get the list of non-persistent topics, use `https://pulsar.apache.org/admin/v2/non-persistent/my-tenant/my-namespace`.

:::

### List of topics

You can get the list of topics under a given namespace in the following ways.

````mdx-code-block


```shell

$ pulsar-admin topics list \
  my-tenant/my-namespace

```


{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getList?version=@pulsar:version_number@}


```java

String namespace = "my-tenant/my-namespace";
admin.topics().getList(namespace);

```


````

### Grant permission

You can grant permissions on a client role to perform specific actions on a given topic in the following ways.
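Ahead of the per-interface examples that follow, here is a self-contained Java sketch of granting topic-level permissions. It is a minimal example assuming a local broker at `http://localhost:8080`; in a secured cluster the admin client itself would also need credentials.

```java

import java.util.EnumSet;
import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.common.policies.data.AuthAction;

public class GrantTopicPermissionExample {
    public static void main(String[] args) throws Exception {
        try (PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080") // assumed local broker, admin credentials omitted
                .build()) {
            // Allow the role "application1" to produce to and consume from one topic.
            admin.topics().grantPermission("persistent://test-tenant/ns1/tp1", "application1",
                    EnumSet.of(AuthAction.produce, AuthAction.consume));
        }
    }
}

```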
- -````mdx-code-block - - - -```shell - -$ pulsar-admin topics grant-permission \ - --actions produce,consume --role application1 \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/permissions/:role|operation/grantPermissionsOnTopic?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String role = "test-role"; -Set actions = Sets.newHashSet(AuthAction.produce, AuthAction.consume); -admin.topics().grantPermission(topic, role, actions); - -``` - - - - -```` - -### Get permission - -You can fetch permission in the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics permissions \ - persistent://test-tenant/ns1/tp1 \ - -{ - "application1": [ - "consume", - "produce" - ] -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/permissions|operation/getPermissionsOnTopic?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().getPermissions(topic); - -``` - - - - -```` - -### Revoke permission - -You can revoke a permission granted on a client role in the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics revoke-permission \ - --role application1 \ - persistent://test-tenant/ns1/tp1 \ - -{ - "application1": [ - "consume", - "produce" - ] -} - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/:schema/:tenant/:namespace/:topic/permissions/:role|operation/revokePermissionsOnTopic?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String role = "test-role"; -admin.topics().revokePermissions(topic, role); - -``` - - - - -```` - -### Delete topic - -You can delete a topic in the following ways. You cannot delete a topic if any active subscription or producers is connected to the topic. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics delete \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/:schema/:tenant/:namespace/:topic|operation/deleteTopic?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().delete(topic); - -``` - - - - -```` - -### Unload topic - -You can unload a topic in the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics unload \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/:schema/:tenant/:namespace/:topic/unload|operation/unloadTopic?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().unload(topic); - -``` - - - - -```` - -### Get stats - -You can check the following statistics of a given non-partitioned topic. - - - **msgRateIn**: The sum of all local and replication publishers' publish rates (msg/s). - - - **msgThroughputIn**: The sum of all local and replication publishers' publish rates (bytes/s). - - - **msgRateOut**: The sum of all local and replication consumers' dispatch rates(msg/s). - - - **msgThroughputOut**: The sum of all local and replication consumers' dispatch rates (bytes/s). - - - **averageMsgSize**: The average size (in bytes) of messages published within the last interval. - - - **storageSize**: The sum of the ledgers' storage size for this topic. The space used to store the messages for the topic. 
- - - **publishers**: The list of all local publishers into the topic. The list ranges from zero to thousands. - - - **msgRateIn**: The total rate of messages (msg/s) published by this publisher. - - - **msgThroughputIn**: The total throughput (bytes/s) of the messages published by this publisher. - - - **averageMsgSize**: The average message size in bytes from this publisher within the last interval. - - - **producerId**: The internal identifier for this producer on this topic. - - - **producerName**: The internal identifier for this producer, generated by the client library. - - - **address**: The IP address and source port for the connection of this producer. - - - **connectedSince**: The timestamp when this producer is created or reconnected last time. - - - **subscriptions**: The list of all local subscriptions to the topic. - - - **my-subscription**: The name of this subscription. It is defined by the client. - - - **msgRateOut**: The total rate of messages (msg/s) delivered on this subscription. - - - **msgThroughputOut**: The total throughput (bytes/s) delivered on this subscription. - - - **msgBacklog**: The number of messages in the subscription backlog. - - - **type**: The subscription type. - - - **msgRateExpired**: The rate at which messages were discarded instead of dispatched from this subscription due to TTL. - - - **lastExpireTimestamp**: The timestamp of the last message expire execution. - - - **lastConsumedFlowTimestamp**: The timestamp of the last flow command received. - - - **lastConsumedTimestamp**: The latest timestamp of all the consumed timestamp of the consumers. - - - **lastAckedTimestamp**: The latest timestamp of all the acked timestamp of the consumers. - - - **consumers**: The list of connected consumers for this subscription. - - - **msgRateOut**: The total rate of messages (msg/s) delivered to the consumer. - - - **msgThroughputOut**: The total throughput (bytes/s) delivered to the consumer. - - - **consumerName**: The internal identifier for this consumer, generated by the client library. - - - **availablePermits**: The number of messages that the consumer has space for in the client library's listen queue. `0` means the client library's queue is full and `receive()` isn't being called. A non-zero value means this consumer is ready for dispatched messages. - - - **unackedMessages**: The number of unacknowledged messages for the consumer, where an unacknowledged message is one that has been sent to the consumer but not yet acknowledged. This field is only meaningful when using a subscription that tracks individual message acknowledgement. - - - **blockedConsumerOnUnackedMsgs**: The flag used to verify if the consumer is blocked due to reaching threshold of the unacknowledged messages. - - - **lastConsumedTimestamp**: The timestamp when the consumer reads a message the last time. - - - **lastAckedTimestamp**: The timestamp when the consumer acknowledges a message the last time. - - - **replication**: This section gives the stats for cross-colo replication of this topic - - - **msgRateIn**: The total rate (msg/s) of messages received from the remote cluster. - - - **msgThroughputIn**: The total throughput (bytes/s) received from the remote cluster. - - - **msgRateOut**: The total rate of messages (msg/s) delivered to the replication-subscriber. - - - **msgThroughputOut**: The total throughput (bytes/s) delivered to the replication-subscriber. - - - **msgRateExpired**: The total rate of messages (msg/s) expired. 
- - - **replicationBacklog**: The number of messages pending to be replicated to remote cluster. - - - **connected**: Whether the outbound replicator is connected. - - - **replicationDelayInSeconds**: How long the oldest message has been waiting to be sent through the connection, if connected is `true`. - - - **inboundConnection**: The IP and port of the broker in the remote cluster's publisher connection to this broker. - - - **inboundConnectedSince**: The TCP connection being used to publish messages to the remote cluster. If there are no local publishers connected, this connection is automatically closed after a minute. - - - **outboundConnection**: The address of the outbound replication connection. - - - **outboundConnectedSince**: The timestamp of establishing outbound connection. - -The following is an example of a topic status. - -```json - -{ - "msgRateIn": 4641.528542257553, - "msgThroughputIn": 44663039.74947473, - "msgRateOut": 0, - "msgThroughputOut": 0, - "averageMsgSize": 1232439.816728665, - "storageSize": 135532389160, - "publishers": [ - { - "msgRateIn": 57.855383881403576, - "msgThroughputIn": 558994.7078932219, - "averageMsgSize": 613135, - "producerId": 0, - "producerName": null, - "address": null, - "connectedSince": null - } - ], - "subscriptions": { - "my-topic_subscription": { - "msgRateOut": 0, - "msgThroughputOut": 0, - "msgBacklog": 116632, - "type": null, - "msgRateExpired": 36.98245516804671, - "consumers": [] - } - }, - "replication": {} -} - -``` - -To get the status of a topic, you can use the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics stats \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/stats|operation/getStats?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().getStats(topic); - -``` - - - - -```` - -### Get internal stats - -You can get the detailed statistics of a topic. - - - **entriesAddedCounter**: Messages published since this broker loaded this topic. - - - **numberOfEntries**: The total number of messages being tracked. - - - **totalSize**: The total storage size in bytes of all messages. - - - **currentLedgerEntries**: The count of messages written to the ledger that is currently open for writing. - - - **currentLedgerSize**: The size in bytes of messages written to the ledger that is currently open for writing. - - - **lastLedgerCreatedTimestamp**: The time when the last ledger is created. - - - **lastLedgerCreationFailureTimestamp:** The time when the last ledger failed. - - - **waitingCursorsCount**: The number of cursors that are "caught up" and waiting for a new message to be published. - - - **pendingAddEntriesCount**: The number of messages that complete (asynchronous) write requests. - - - **lastConfirmedEntry**: The ledgerid:entryid of the last message that is written successfully. If the entryid is `-1`, then the ledger is open, yet no entries are written. - - - **state**: The state of this ledger for writing. The state `LedgerOpened` means that a ledger is open for saving published messages. - - - **ledgers**: The ordered list of all ledgers for this topic holding messages. - - - **ledgerId**: The ID of this ledger. - - - **entries**: The total number of entries that belong to this ledger. - - - **size**: The size of messages written to this ledger (in bytes). - - - **offloaded**: Whether this ledger is offloaded. 
- - - **metadata**: The ledger metadata. - - - **schemaLedgers**: The ordered list of all ledgers for this topic schema. - - - **ledgerId**: The ID of this ledger. - - - **entries**: The total number of entries that belong to this ledger. - - - **size**: The size of messages written to this ledger (in bytes). - - - **offloaded**: Whether this ledger is offloaded. - - - **metadata**: The ledger metadata. - - - **compactedLedger**: The ledgers holding un-acked messages after topic compaction. - - - **ledgerId**: The ID of this ledger. - - - **entries**: The total number of entries that belong to this ledger. - - - **size**: The size of messages written to this ledger (in bytes). - - - **offloaded**: Whether this ledger is offloaded. The value is `false` for the compacted topic ledger. - - - **cursors**: The list of all cursors on this topic. Each subscription in the topic stats has a cursor. - - - **markDeletePosition**: All messages before the markDeletePosition are acknowledged by the subscriber. - - - **readPosition**: The latest position of subscriber for reading message. - - - **waitingReadOp**: This is true when the subscription has read the latest message published to the topic and is waiting for new messages to be published. - - - **pendingReadOps**: The counter for how many outstanding read requests to the BookKeepers in progress. - - - **messagesConsumedCounter**: The number of messages this cursor has acked since this broker loaded this topic. - - - **cursorLedger**: The ledger being used to persistently store the current markDeletePosition. - - - **cursorLedgerLastEntry**: The last entryid used to persistently store the current markDeletePosition. - - - **individuallyDeletedMessages**: If acknowledges are being done out of order, the ranges of messages acknowledged between the markDeletePosition and the read-position shows. - - - **lastLedgerSwitchTimestamp**: The last time the cursor ledger is rolled over. - - - **state**: The state of the cursor ledger: `Open` means you have a cursor ledger for saving updates of the markDeletePosition. - -The following is an example of the detailed statistics of a topic. - -```json - -{ - "entriesAddedCounter":0, - "numberOfEntries":0, - "totalSize":0, - "currentLedgerEntries":0, - "currentLedgerSize":0, - "lastLedgerCreatedTimestamp":"2021-01-22T21:12:14.868+08:00", - "lastLedgerCreationFailureTimestamp":null, - "waitingCursorsCount":0, - "pendingAddEntriesCount":0, - "lastConfirmedEntry":"3:-1", - "state":"LedgerOpened", - "ledgers":[ - { - "ledgerId":3, - "entries":0, - "size":0, - "offloaded":false, - "metadata":null - } - ], - "cursors":{ - "test":{ - "markDeletePosition":"3:-1", - "readPosition":"3:-1", - "waitingReadOp":false, - "pendingReadOps":0, - "messagesConsumedCounter":0, - "cursorLedger":4, - "cursorLedgerLastEntry":1, - "individuallyDeletedMessages":"[]", - "lastLedgerSwitchTimestamp":"2021-01-22T21:12:14.966+08:00", - "state":"Open", - "numberOfEntriesSinceFirstNotAckedMessage":0, - "totalNonContiguousDeletedMessagesRange":0, - "properties":{ - - } - } - }, - "schemaLedgers":[ - { - "ledgerId":1, - "entries":11, - "size":10, - "offloaded":false, - "metadata":null - } - ], - "compactedLedger":{ - "ledgerId":-1, - "entries":-1, - "size":-1, - "offloaded":false, - "metadata":null - } -} - -``` - -To get the internal status of a topic, you can use the following ways. 
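Before the per-interface examples that follow, here is a short Java sketch that fetches the internal stats and prints a few of the fields described above. It is a minimal example assuming a local broker at `http://localhost:8080` without authentication; the topic name is a placeholder.

```java

import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.common.policies.data.PersistentTopicInternalStats;

public class InternalStatsExample {
    public static void main(String[] args) throws Exception {
        try (PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080") // assumed local broker, no auth
                .build()) {
            PersistentTopicInternalStats stats =
                    admin.topics().getInternalStats("persistent://test-tenant/ns1/tp1");
            // A few of the fields documented above; the full object serializes to JSON.
            System.out.println("entries=" + stats.numberOfEntries
                    + ", totalSize=" + stats.totalSize
                    + ", state=" + stats.state);
        }
    }
}

```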
-````mdx-code-block - - - -```shell - -$ pulsar-admin topics stats-internal \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/internalStats|operation/getInternalStats?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().getInternalStats(topic); - -``` - - - - -```` - -### Peek messages - -You can peek a number of messages for a specific subscription of a given topic in the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics peek-messages \ - --count 10 --subscription my-subscription \ - persistent://test-tenant/ns1/tp1 \ - -Message ID: 315674752:0 -Properties: { "X-Pulsar-publish-time" : "2015-07-13 17:40:28.451" } -msg-payload - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/position/:messagePosition|operation/peekNthMessage?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String subName = "my-subscription"; -int numMessages = 1; -admin.topics().peekMessages(topic, subName, numMessages); - -``` - - - - -```` - -### Get message by ID - -You can fetch the message with the given ledger ID and entry ID in the following ways. - -````mdx-code-block - - - -```shell - -$ ./bin/pulsar-admin topics get-message-by-id \ - persistent://public/default/my-topic \ - -l 10 -e 0 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/ledger/:ledgerId/entry/:entryId|operation/getMessageById?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -long ledgerId = 10; -long entryId = 10; -admin.topics().getMessageById(topic, ledgerId, entryId); - -``` - - - - -```` - -### Skip messages - -You can skip a number of messages for a specific subscription of a given topic in the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics skip \ - --count 10 --subscription my-subscription \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/skip/:numMessages|operation/skipMessages?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String subName = "my-subscription"; -int numMessages = 1; -admin.topics().skipMessages(topic, subName, numMessages); - -``` - - - - -```` - -### Skip all messages - -You can skip all the old messages for a specific subscription of a given topic. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics skip-all \ - --subscription my-subscription \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/skip_all|operation/skipAllMessages?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String subName = "my-subscription"; -admin.topics().skipAllMessages(topic, subName); - -``` - - - - -```` - -### Reset cursor - -You can reset a subscription cursor position back to the position which is recorded X minutes before. It essentially calculates time and position of cursor at X minutes before and resets it at that position. You can reset the cursor in the following ways. 
````mdx-code-block


```shell

$ pulsar-admin topics reset-cursor \
  --subscription my-subscription --time 10 \
  persistent://test-tenant/ns1/tp1

```


{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/resetcursor/:timestamp|operation/resetCursor?version=@pulsar:version_number@}


```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
String subName = "my-subscription";
long timestamp = 2342343L;
admin.topics().resetCursor(topic, subName, timestamp);

```


````

### Look up topic's owner broker

You can locate the owner broker of the given topic in the following ways.

````mdx-code-block


```shell

$ pulsar-admin topics lookup \
  persistent://test-tenant/ns1/tp1

"pulsar://broker1.org.com:4480"

```


{@inject: endpoint|GET|/lookup/v2/topic/:schema/:tenant/:namespace/:topic|/?version=@pulsar:version_number@}


```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
admin.lookups().lookupTopic(topic);

```


````

### Get bundle

You can get the range of the bundle that the given topic belongs to in the following ways.

````mdx-code-block


```shell

$ pulsar-admin topics bundle-range \
  persistent://test-tenant/ns1/tp1

"0x00000000_0xffffffff"

```


{@inject: endpoint|GET|/lookup/v2/topic/:topic_domain/:tenant/:namespace/:topic/bundle|/?version=@pulsar:version_number@}


```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
admin.lookups().getBundleRange(topic);

```


````

### Get subscriptions

You can check all subscription names for a given topic in the following ways.

````mdx-code-block


```shell

$ pulsar-admin topics subscriptions \
  persistent://test-tenant/ns1/tp1

my-subscription

```


{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/subscriptions|operation/getSubscriptions?version=@pulsar:version_number@}


```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
admin.topics().getSubscriptions(topic);

```


````

### Get last message ID

You can get the last committed message ID for a persistent topic. This feature is available since the 2.3.0 release.

````mdx-code-block


```shell

pulsar-admin topics last-message-id topic-name

```


{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/lastMessageId|operation/getLastMessageId?version=@pulsar:version_number@}


```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
admin.topics().getLastMessageId(topic);

```


````



### Configure deduplication snapshot interval

#### Get deduplication snapshot interval

To get the topic-level deduplication snapshot interval, use one of the following methods.

````mdx-code-block


```

pulsar-admin topics get-deduplication-snapshot-interval options

```


{@inject: endpoint|GET|/admin/v2/topics/:tenant/:namespace/:topic/deduplicationSnapshotInterval|operation/getDeduplicationSnapshotInterval?version=@pulsar:version_number@}


```java

admin.topics().getDeduplicationSnapshotInterval(topic)

```


````

#### Set deduplication snapshot interval

To set the topic-level deduplication snapshot interval, use one of the following methods.

> **Prerequisite** `brokerDeduplicationEnabled` must be set to `true`.
- -````mdx-code-block - - - -``` - -pulsar-admin topics set-deduplication-snapshot-interval options - -``` - - - - -{@inject: endpoint|POST|/admin/v2/topics/:tenant/:namespace/:topic/deduplicationSnapshotInterval|operation/setDeduplicationSnapshotInterval?version=@pulsar:version_number@} - - - - -```java - -admin.topics().setDeduplicationSnapshotInterval(topic, 1000) - -``` - - - - -```` - -#### Remove deduplication snapshot interval - -To remove the topic-level deduplication snapshot interval, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics remove-deduplication-snapshot-interval options - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/topics/:tenant/:namespace/:topic/deduplicationSnapshotInterval|operation/deleteDeduplicationSnapshotInterval?version=@pulsar:version_number@} - - - - -```java - -admin.topics().removeDeduplicationSnapshotInterval(topic) - -``` - - - - -```` - - -### Configure inactive topic policies - -#### Get inactive topic policies - -To get the topic-level inactive topic policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics get-inactive-topic-policies options - -``` - - - - -{@inject: endpoint|GET|/admin/v2/topics/:tenant/:namespace/:topic/inactiveTopicPolicies|operation/getInactiveTopicPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getInactiveTopicPolicies(topic) - -``` - - - - -```` - -#### Set inactive topic policies - -To set the topic-level inactive topic policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics set-inactive-topic-policies options - -``` - - - - -{@inject: endpoint|POST|/admin/v2/topics/:tenant/:namespace/:topic/inactiveTopicPolicies|operation/setInactiveTopicPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().setInactiveTopicPolicies(topic, inactiveTopicPolicies) - -``` - - - - -```` - -#### Remove inactive topic policies - -To remove the topic-level inactive topic policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics remove-inactive-topic-policies options - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/topics/:tenant/:namespace/:topic/inactiveTopicPolicies|operation/removeInactiveTopicPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().removeInactiveTopicPolicies(topic) - -``` - - - - -```` - - -### Configure offload policies - -#### Get offload policies - -To get the topic-level offload policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics get-offload-policies options - -``` - - - - -{@inject: endpoint|GET|/admin/v2/topics/:tenant/:namespace/:topic/offloadPolicies|operation/getOffloadPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getOffloadPolicies(topic) - -``` - - - - -```` - -#### Set offload policies - -To set the topic-level offload policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics set-offload-policies options - -``` - - - - -{@inject: endpoint|POST|/admin/v2/topics/:tenant/:namespace/:topic/offloadPolicies|operation/setOffloadPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().setOffloadPolicies(topic, offloadPolicies) - -``` - - - - -```` - -#### Remove offload policies - -To remove the topic-level offload policies, use one of the following methods. 
- -````mdx-code-block - - - -``` - -pulsar-admin topics remove-offload-policies options - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/topics/:tenant/:namespace/:topic/offloadPolicies|operation/removeOffloadPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().removeOffloadPolicies(topic) - -``` - - - - -```` - - -## Manage non-partitioned topics -You can use Pulsar [admin API](admin-api-overview.md) to create, delete and check status of non-partitioned topics. - -### Create -Non-partitioned topics must be explicitly created. When creating a new non-partitioned topic, you need to provide a name for the topic. - -By default, 60 seconds after creation, topics are considered inactive and deleted automatically to avoid generating trash data. To disable this feature, set `brokerDeleteInactiveTopicsEnabled` to `false`. To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to a specific value. - -For more information about the two parameters, see [here](reference-configuration.md#broker). - -You can create non-partitioned topics in the following ways. -````mdx-code-block - - - -When you create non-partitioned topics with the [`create`](reference-pulsar-admin.md#create-3) command, you need to specify the topic name as an argument. - -```shell - -$ bin/pulsar-admin topics create \ - persistent://my-tenant/my-namespace/my-topic - -``` - -:::note - -When you create a non-partitioned topic with the suffix '-partition-' followed by numeric value like 'xyz-topic-partition-x' for the topic name, if a partitioned topic with same suffix 'xyz-topic-partition-y' exists, then the numeric value(x) for the non-partitioned topic must be larger than the number of partitions(y) of the partitioned topic. Otherwise, you cannot create such a non-partitioned topic. - -::: - - - - -{@inject: endpoint|PUT|/admin/v2/:schema/:tenant/:namespace/:topic|operation/createNonPartitionedTopic?version=@pulsar:version_number@} - - - - -```java - -String topicName = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().createNonPartitionedTopic(topicName); - -``` - - - - -```` - -### Delete -You can delete non-partitioned topics in the following ways. -````mdx-code-block - - - -```shell - -$ bin/pulsar-admin topics delete \ - persistent://my-tenant/my-namespace/my-topic - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/:schema/:tenant/:namespace/:topic|operation/deleteTopic?version=@pulsar:version_number@} - - - - -```java - -admin.topics().delete(topic); - -``` - - - - -```` - -### List - -You can get the list of topics under a given namespace in the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics list tenant/namespace -persistent://tenant/namespace/topic1 -persistent://tenant/namespace/topic2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getList?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getList(namespace); - -``` - - - - -```` - -### Stats - -You can check the current statistics of a given topic. The following is an example. For description of each stats, refer to [get stats](#get-stats). 
- -```json - -{ - "msgRateIn": 4641.528542257553, - "msgThroughputIn": 44663039.74947473, - "msgRateOut": 0, - "msgThroughputOut": 0, - "averageMsgSize": 1232439.816728665, - "storageSize": 135532389160, - "publishers": [ - { - "msgRateIn": 57.855383881403576, - "msgThroughputIn": 558994.7078932219, - "averageMsgSize": 613135, - "producerId": 0, - "producerName": null, - "address": null, - "connectedSince": null - } - ], - "subscriptions": { - "my-topic_subscription": { - "msgRateOut": 0, - "msgThroughputOut": 0, - "msgBacklog": 116632, - "type": null, - "msgRateExpired": 36.98245516804671, - "consumers": [] - } - }, - "replication": {} -} - -``` - -You can check the current statistics of a given topic and its connected producers and consumers in the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics stats \ - persistent://test-tenant/namespace/topic \ - --get-precise-backlog - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/stats|operation/getStats?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getStats(topic, false /* is precise backlog */); - -``` - - - - -```` - -## Manage partitioned topics -You can use Pulsar [admin API](admin-api-overview.md) to create, update, delete and check status of partitioned topics. - -### Create - -Partitioned topics must be explicitly created. When creating a new partitioned topic, you need to provide a name and the number of partitions for the topic. - -By default, 60 seconds after creation, topics are considered inactive and deleted automatically to avoid generating trash data. To disable this feature, set `brokerDeleteInactiveTopicsEnabled` to `false`. To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to a specific value. - -For more information about the two parameters, see [here](reference-configuration.md#broker). - -You can create partitioned topics in the following ways. -````mdx-code-block - - - -When you create partitioned topics with the [`create-partitioned-topic`](reference-pulsar-admin.md#create-partitioned-topic) -command, you need to specify the topic name as an argument and the number of partitions using the `-p` or `--partitions` flag. - -```shell - -$ bin/pulsar-admin topics create-partitioned-topic \ - persistent://my-tenant/my-namespace/my-topic \ - --partitions 4 - -``` - -:::note - -If a non-partitioned topic with the suffix '-partition-' followed by a numeric value like 'xyz-topic-partition-10', you can not create a partitioned topic with name 'xyz-topic', because the partitions of the partitioned topic could override the existing non-partitioned topic. To create such partitioned topic, you have to delete that non-partitioned topic first. - -::: - - - - -{@inject: endpoint|PUT|/admin/v2/:schema/:tenant/:namespace/:topic/partitions|operation/createPartitionedTopic?version=@pulsar:version_number@} - - - - -```java - -String topicName = "persistent://my-tenant/my-namespace/my-topic"; -int numPartitions = 4; -admin.topics().createPartitionedTopic(topicName, numPartitions); - -``` - - - - -```` - -### Create missed partitions - -When topic auto-creation is disabled, and you have a partitioned topic without any partitions, you can use the [`create-missed-partitions`](reference-pulsar-admin.md#create-missed-partitions) command to create partitions for the topic. 
- -````mdx-code-block - - - -You can create missed partitions with the [`create-missed-partitions`](reference-pulsar-admin.md#create-missed-partitions) command and specify the topic name as an argument. - -```shell - -$ bin/pulsar-admin topics create-missed-partitions \ - persistent://my-tenant/my-namespace/my-topic \ - -``` - - - - -{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic|operation/createMissedPartitions?version=@pulsar:version_number@} - - - - -```java - -String topicName = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().createMissedPartitions(topicName); - -``` - - - - -```` - -### Get metadata - -Partitioned topics are associated with metadata, you can view it as a JSON object. The following metadata field is available. - -Field | Description -:-----|:------- -`partitions` | The number of partitions into which the topic is divided. - -````mdx-code-block - - - -You can check the number of partitions in a partitioned topic with the [`get-partitioned-topic-metadata`](reference-pulsar-admin.md#get-partitioned-topic-metadata) subcommand. - -```shell - -$ pulsar-admin topics get-partitioned-topic-metadata \ - persistent://my-tenant/my-namespace/my-topic -{ - "partitions": 4 -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/partitions|operation/getPartitionedMetadata?version=@pulsar:version_number@} - - - - -```java - -String topicName = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().getPartitionedTopicMetadata(topicName); - -``` - - - - -```` - -### Update - -You can update the number of partitions for an existing partitioned topic *if* the topic is non-global. However, you can only add the partition number. Decrementing the number of partitions would delete the topic, which is not supported in Pulsar. - -Producers and consumers can find the newly created partitions automatically. - -````mdx-code-block - - - -You can update partitioned topics with the [`update-partitioned-topic`](reference-pulsar-admin.md#update-partitioned-topic) command. - -```shell - -$ pulsar-admin topics update-partitioned-topic \ - persistent://my-tenant/my-namespace/my-topic \ - --partitions 8 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:cluster/:namespace/:destination/partitions|operation/updatePartitionedTopic?version=@pulsar:version_number@} - - - - -```java - -admin.topics().updatePartitionedTopic(topic, numPartitions); - -``` - - - - -```` - -### Delete -You can delete partitioned topics with the [`delete-partitioned-topic`](reference-pulsar-admin.md#delete-partitioned-topic) command, REST API and Java. - -````mdx-code-block - - - -```shell - -$ bin/pulsar-admin topics delete-partitioned-topic \ - persistent://my-tenant/my-namespace/my-topic - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/:schema/:topic/:namespace/:destination/partitions|operation/deletePartitionedTopic?version=@pulsar:version_number@} - - - - -```java - -admin.topics().delete(topic); - -``` - - - - -```` - -### List -You can get the list of partitioned topics under a given namespace in the following ways. 
-````mdx-code-block - - - -```shell - -$ pulsar-admin topics list-partitioned-topics tenant/namespace -persistent://tenant/namespace/topic1 -persistent://tenant/namespace/topic2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getPartitionedTopicList?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getPartitionedTopicList(namespace); - -``` - - - - -```` - -### Stats - -You can check the current statistics of a given partitioned topic. The following is an example. For description of each stats, refer to [get stats](#get-stats). - -Note that in the subscription JSON object, `chuckedMessageRate` is deprecated. Please use `chunkedMessageRate`. Both will be sent in the JSON for now. - -```json - -{ - "msgRateIn" : 999.992947159793, - "msgThroughputIn" : 1070918.4635439808, - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "bytesInCounter" : 270318763, - "msgInCounter" : 252489, - "bytesOutCounter" : 0, - "msgOutCounter" : 0, - "averageMsgSize" : 1070.926056966454, - "msgChunkPublished" : false, - "storageSize" : 270316646, - "backlogSize" : 200921133, - "publishers" : [ { - "msgRateIn" : 999.992947159793, - "msgThroughputIn" : 1070918.4635439808, - "averageMsgSize" : 1070.3333333333333, - "chunkedMessageRate" : 0.0, - "producerId" : 0 - } ], - "subscriptions" : { - "test" : { - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "bytesOutCounter" : 0, - "msgOutCounter" : 0, - "msgRateRedeliver" : 0.0, - "chuckedMessageRate" : 0, - "chunkedMessageRate" : 0, - "msgBacklog" : 144318, - "msgBacklogNoDelayed" : 144318, - "blockedSubscriptionOnUnackedMsgs" : false, - "msgDelayed" : 0, - "unackedMessages" : 0, - "msgRateExpired" : 0.0, - "lastExpireTimestamp" : 0, - "lastConsumedFlowTimestamp" : 0, - "lastConsumedTimestamp" : 0, - "lastAckedTimestamp" : 0, - "consumers" : [ ], - "isDurable" : true, - "isReplicated" : false - } - }, - "replication" : { }, - "metadata" : { - "partitions" : 3 - }, - "partitions" : { } -} - -``` - -You can check the current statistics of a given partitioned topic and its connected producers and consumers in the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics partitioned-stats \ - persistent://test-tenant/namespace/topic \ - --per-partition - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/partitioned-stats|operation/getPartitionedStats?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getPartitionedStats(topic, true /* per partition */, false /* is precise backlog */); - -``` - - - - -```` - -### Internal stats - -You can check the detailed statistics of a topic. The following is an example. For description of each stats, refer to [get internal stats](#get-internal-stats). 
- -```json - -{ - "entriesAddedCounter": 20449518, - "numberOfEntries": 3233, - "totalSize": 331482, - "currentLedgerEntries": 3233, - "currentLedgerSize": 331482, - "lastLedgerCreatedTimestamp": "2016-06-29 03:00:23.825", - "lastLedgerCreationFailureTimestamp": null, - "waitingCursorsCount": 1, - "pendingAddEntriesCount": 0, - "lastConfirmedEntry": "324711539:3232", - "state": "LedgerOpened", - "ledgers": [ - { - "ledgerId": 324711539, - "entries": 0, - "size": 0 - } - ], - "cursors": { - "my-subscription": { - "markDeletePosition": "324711539:3133", - "readPosition": "324711539:3233", - "waitingReadOp": true, - "pendingReadOps": 0, - "messagesConsumedCounter": 20449501, - "cursorLedger": 324702104, - "cursorLedgerLastEntry": 21, - "individuallyDeletedMessages": "[(324711539:3134‥324711539:3136], (324711539:3137‥324711539:3140], ]", - "lastLedgerSwitchTimestamp": "2016-06-29 01:30:19.313", - "state": "Open" - } - } -} - -``` - -You can get the internal stats for the partitioned topic in the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics stats-internal \ - persistent://test-tenant/namespace/topic - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/internalStats|operation/getInternalStats?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getInternalStats(topic); - -``` - - - - -```` - -## Publish to partitioned topics - -By default, Pulsar topics are served by a single broker, which limits the maximum throughput of a topic. *Partitioned topics* can span multiple brokers and thus allow for higher throughput. - -You can publish to partitioned topics using Pulsar client libraries. When publishing to partitioned topics, you must specify a routing mode. If you do not specify any routing mode when you create a new producer, the round robin routing mode is used. - -### Routing mode - -You can specify the routing mode in the ProducerConfiguration object that you use to configure your producer. The routing mode determines which partition(internal topic) that each message should be published to. - -The following {@inject: javadoc:MessageRoutingMode:/client/org/apache/pulsar/client/api/MessageRoutingMode} options are available. - -Mode | Description -:--------|:------------ -`RoundRobinPartition` | If no key is provided, the producer publishes messages across all partitions in round-robin policy to achieve the maximum throughput. Round-robin is not done per individual message, round-robin is set to the same boundary of batching delay to ensure that batching is effective. If a key is specified on the message, the partitioned producer hashes the key and assigns message to a particular partition. This is the default mode. -`SinglePartition` | If no key is provided, the producer picks a single partition randomly and publishes all messages into that partition. If a key is specified on the message, the partitioned producer hashes the key and assigns message to a particular partition. -`CustomPartition` | Use custom message router implementation that is called to determine the partition for a particular message. You can create a custom routing mode by using the Java client and implementing the {@inject: javadoc:MessageRouter:/client/org/apache/pulsar/client/api/MessageRouter} interface. 
- -The following is an example: - -```java - -String pulsarBrokerRootUrl = "pulsar://localhost:6650"; -String topic = "persistent://my-tenant/my-namespace/my-topic"; - -PulsarClient pulsarClient = PulsarClient.builder().serviceUrl(pulsarBrokerRootUrl).build(); -Producer producer = pulsarClient.newProducer() - .topic(topic) - .messageRoutingMode(MessageRoutingMode.SinglePartition) - .create(); -producer.send("Partitioned topic message".getBytes()); - -``` - -### Custom message router - -To use a custom message router, you need to provide an implementation of the {@inject: javadoc:MessageRouter:/client/org/apache/pulsar/client/api/MessageRouter} interface, which has just one `choosePartition` method: - -```java - -public interface MessageRouter extends Serializable { - int choosePartition(Message msg); -} - -``` - -The following router routes every message to partition 10: - -```java - -public class AlwaysTenRouter implements MessageRouter { - public int choosePartition(Message msg) { - return 10; - } -} - -``` - -With that implementation, you can send - -```java - -String pulsarBrokerRootUrl = "pulsar://localhost:6650"; -String topic = "persistent://my-tenant/my-cluster-my-namespace/my-topic"; - -PulsarClient pulsarClient = PulsarClient.builder().serviceUrl(pulsarBrokerRootUrl).build(); -Producer producer = pulsarClient.newProducer() - .topic(topic) - .messageRouter(new AlwaysTenRouter()) - .create(); -producer.send("Partitioned topic message".getBytes()); - -``` - -### How to choose partitions when using a key -If a message has a key, it supersedes the round robin routing policy. The following example illustrates how to choose the partition when using a key. - -```java - -// If the message has a key, it supersedes the round robin routing policy - if (msg.hasKey()) { - return signSafeMod(hash.makeHash(msg.getKey()), topicMetadata.numPartitions()); - } - - if (isBatchingEnabled) { // if batching is enabled, choose partition on `partitionSwitchMs` boundary. - long currentMs = clock.millis(); - return signSafeMod(currentMs / partitionSwitchMs + startPtnIdx, topicMetadata.numPartitions()); - } else { - return signSafeMod(PARTITION_INDEX_UPDATER.getAndIncrement(this), topicMetadata.numPartitions()); - } - -``` - -## Manage subscriptions -You can use [Pulsar admin API](admin-api-overview.md) to create, check, and delete subscriptions. -### Create subscription -You can create a subscription for a topic using one of the following methods. -````mdx-code-block - - - -```shell - -pulsar-admin topics create-subscription \ ---subscription my-subscription \ -persistent://test-tenant/ns1/tp1 - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/persistent/:tenant/:namespace/:topic/subscription/:subscription|operation/createSubscriptions?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String subscriptionName = "my-subscription"; -admin.topics().createSubscription(topic, subscriptionName, MessageId.latest); - -``` - - - - -```` -### Get subscription -You can check all subscription names for a given topic using one of the following methods. 
-````mdx-code-block - - - -```shell - -pulsar-admin topics subscriptions \ -persistent://test-tenant/ns1/tp1 \ -my-subscription - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/subscriptions|operation/getSubscriptions?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().getSubscriptions(topic); - -``` - - - - -```` -### Unsubscribe subscription -When a subscription does not process messages any more, you can unsubscribe it using one of the following methods. -````mdx-code-block - - - -```shell - -pulsar-admin topics unsubscribe \ ---subscription my-subscription \ -persistent://test-tenant/ns1/tp1 - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/:topic/subscription/:subscription|operation/deleteSubscription?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String subscriptionName = "my-subscription"; -admin.topics().deleteSubscription(topic, subscriptionName); - -``` - - - - -```` \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/administration-dashboard.md b/site2/website/versioned_docs/version-2.8.1-deprecated/administration-dashboard.md deleted file mode 100644 index 25f976609b40bf..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/administration-dashboard.md +++ /dev/null @@ -1,76 +0,0 @@ ---- -id: administration-dashboard -title: Pulsar dashboard -sidebar_label: "Dashboard" -original_id: administration-dashboard ---- - -:::note - -Pulsar dashboard is deprecated. If you want to manage and monitor the stats of your topics, use [Pulsar Manager](administration-pulsar-manager.md). - -::: - -Pulsar dashboard is a web application that enables users to monitor current stats for all [topics](reference-terminology.md#topic) in tabular form. - -The dashboard is a data collector that polls stats from all the brokers in a Pulsar instance (across multiple clusters) and stores all the information in a [PostgreSQL](https://www.postgresql.org/) database. - -You can use the [Django](https://www.djangoproject.com) web app to render the collected data. - -## Install - -The easiest way to use the dashboard is to run it inside a [Docker](https://www.docker.com/products/docker) container. - -```shell - -$ SERVICE_URL=http://broker.example.com:8080/ -$ docker run -p 80:80 \ - -e SERVICE_URL=$SERVICE_URL \ - apachepulsar/pulsar-dashboard:@pulsar:version@ - -``` - -You can find the {@inject: github:Dockerfile:/dashboard/Dockerfile} in the `dashboard` directory and build an image from scratch as well: - -```shell - -$ docker build -t apachepulsar/pulsar-dashboard dashboard - -``` - -If token authentication is enabled: -> Provided token should have super-user access. - -```shell - -$ SERVICE_URL=http://broker.example.com:8080/ -$ JWT_TOKEN=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c -$ docker run -p 80:80 \ - -e SERVICE_URL=$SERVICE_URL \ - -e JWT_TOKEN=$JWT_TOKEN \ - apachepulsar/pulsar-dashboard - -``` - - -You need to specify only one service URL for a Pulsar cluster. Internally, the collector figures out all the existing clusters and the brokers from where it needs to pull the metrics. If you connect the dashboard to Pulsar running in standalone mode, the URL is `http://:8080` by default. 
The host in this URL is the IP address or hostname of the machine that runs Pulsar standalone, and it must be accessible from the Docker instance that runs the dashboard.

Once the Docker container runs, the web dashboard is accessible via `localhost` or whichever host Docker uses.

> The `SERVICE_URL` that the dashboard uses needs to be reachable from inside the Docker container.

If the Pulsar service runs in standalone mode on `localhost`, the `SERVICE_URL` has to be the IP of the machine.

Similarly, given that Pulsar standalone advertises itself as localhost by default, you need to explicitly set the advertised address to the host IP. For example:

```shell

$ bin/pulsar standalone --advertised-address 1.2.3.4

```

### Known issues

Currently, only Pulsar Token [authentication](security-overview.md#authentication-providers) is supported.
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/administration-geo.md b/site2/website/versioned_docs/version-2.8.1-deprecated/administration-geo.md
deleted file mode 100644
index f5dc1b2f79a3fa..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/administration-geo.md
+++ /dev/null
@@ -1,215 +0,0 @@
---
id: administration-geo
title: Pulsar geo-replication
sidebar_label: "Geo-replication"
original_id: administration-geo
---

*Geo-replication* is the replication of persistently stored message data across multiple clusters of a Pulsar instance.

## How geo-replication works

The diagram below illustrates the process of geo-replication across Pulsar clusters:

![Replication Diagram](/assets/geo-replication.png)

In this diagram, whenever **P1**, **P2**, and **P3** producers publish messages to the **T1** topic on **Cluster-A**, **Cluster-B**, and **Cluster-C** clusters respectively, those messages are instantly replicated across clusters. Once the messages are replicated, **C1** and **C2** consumers can consume those messages from their respective clusters.

Without geo-replication, **C1** and **C2** consumers are not able to consume messages that the **P3** producer publishes.

## Geo-replication and Pulsar properties

You must enable geo-replication on a per-tenant basis in Pulsar. You can enable geo-replication between clusters only when a tenant has been created that allows access to both clusters.

Although geo-replication must be enabled between two clusters, it is actually managed at the namespace level. You must complete the following tasks to enable geo-replication for a namespace:

* [Enable geo-replication namespaces](#enable-geo-replication-namespaces)
* Configure that namespace to replicate across two or more provisioned clusters

Any message published on *any* topic in that namespace is replicated to all clusters in the specified set.

## Local persistence and forwarding

When messages are produced on a Pulsar topic, messages are first persisted in the local cluster, and then forwarded asynchronously to the remote clusters.

In normal cases, when there are no connectivity issues, messages are replicated immediately, at the same time as they are dispatched to local consumers. Typically, the network [round-trip time](https://en.wikipedia.org/wiki/Round-trip_delay_time) (RTT) between the remote regions defines end-to-end delivery latency.

Applications can create producers and consumers in any of the clusters, even when the remote clusters are not reachable (like during a network partition).
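For example, an application in each region can connect to its own local cluster and still exchange messages over the replicated topic. The following is a minimal Java sketch; the two service URLs are hypothetical placeholders for the local brokers of each cluster.

```java

// Producer connects to its local us-west cluster (hypothetical URL).
PulsarClient westClient = PulsarClient.builder()
    .serviceUrl("pulsar://us-west-broker.example.com:6650")
    .build();
Producer<byte[]> producer = westClient.newProducer()
    .topic("persistent://my-tenant/my-namespace/my-topic")
    .create();
producer.send("replicated-payload".getBytes());

// Consumer connects to its local us-east cluster and receives the
// replicated message (hypothetical URL).
PulsarClient eastClient = PulsarClient.builder()
    .serviceUrl("pulsar://us-east-broker.example.com:6650")
    .build();
Consumer<byte[]> consumer = eastClient.newConsumer()
    .topic("persistent://my-tenant/my-namespace/my-topic")
    .subscriptionName("my-subscription")
    .subscribe();

```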
Producers and consumers can publish messages to and consume messages from any cluster in a Pulsar instance. However, subscriptions are not only local to the cluster where they are created; they can also be transferred between clusters after replicated subscriptions are enabled. Once replicated subscriptions are enabled, you can keep subscription state in synchronization. Therefore, a topic can be asynchronously replicated across multiple geographical regions. In case of failover, a consumer can restart consuming messages from the failure point in a different cluster.

In the aforementioned example, the **T1** topic is replicated among three clusters, **Cluster-A**, **Cluster-B**, and **Cluster-C**.

All messages produced in any of the three clusters are delivered to all subscriptions in other clusters. In this case, **C1** and **C2** consumers receive all messages that **P1**, **P2**, and **P3** producers publish. Ordering is still guaranteed on a per-producer basis.

## Configure replication

As stated in the [Geo-replication and Pulsar properties](#geo-replication-and-pulsar-properties) section, geo-replication in Pulsar is managed at the [tenant](reference-terminology.md#tenant) level.

The following example connects three clusters: **us-east**, **us-west**, and **us-cent**.

### Connect replication clusters

To replicate data among clusters, you need to configure each cluster to connect to the others. You can use the [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/) tool to create a connection.

**Example**

Suppose that you have 3 replication clusters: `us-west`, `us-cent`, and `us-east`.

1. Configure the connection from `us-west` to `us-east`.

   Run the following command on `us-west`.

```shell

$ bin/pulsar-admin clusters create \
  --broker-url pulsar://<DNS-OF-US-EAST>:<PORT> \
  --url http://<DNS-OF-US-EAST>:<PORT> \
  us-east

```

   :::tip

   - If you want to use a secure connection for a cluster, you can use the flags `--broker-url-secure` and `--url-secure`. For more information, see [pulsar-admin clusters create](https://pulsar.apache.org/tools/pulsar-admin/).
   - Different clusters may have different authentications. You can use the authentication flags `--auth-plugin` and `--auth-parameters` together to set cluster authentication, which overrides `brokerClientAuthenticationPlugin` and `brokerClientAuthenticationParameters` if `authenticationEnabled` is set to `true` in `broker.conf` and `standalone.conf`. For more information, see [authentication and authorization](concepts-authentication.md).

   :::

2. Configure the connection from `us-west` to `us-cent`.

   Run the following command on `us-west`.

```shell

$ bin/pulsar-admin clusters create \
  --broker-url pulsar://<DNS-OF-US-CENT>:<PORT> \
  --url http://<DNS-OF-US-CENT>:<PORT> \
  us-cent

```

3. Run similar commands on `us-east` and `us-cent` to create connections among clusters.

### Grant permissions to properties

To replicate to a cluster, the tenant needs permission to use that cluster. You can grant permission to the tenant when you create the tenant or grant it later.

Specify all the intended clusters when you create a tenant:

```shell

$ bin/pulsar-admin tenants create my-tenant \
  --admin-roles my-admin-role \
  --allowed-clusters us-west,us-east,us-cent

```

To update permissions of an existing tenant, use `update` instead of `create`.
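The same grant can be issued through the Java admin client. The following is a sketch, assuming an existing `PulsarAdmin` instance named `admin`; in the 2.8 API, `TenantInfo` is constructed from the admin roles and the allowed clusters, and the role and cluster names below are placeholders.

```java

// Allow the tenant to use all three replication clusters.
Set<String> adminRoles = Collections.singleton("my-admin-role");
Set<String> allowedClusters = new HashSet<>(Arrays.asList("us-west", "us-east", "us-cent"));
admin.tenants().createTenant("my-tenant", new TenantInfo(adminRoles, allowedClusters));

```

### Enable geo-replication namespaces

You can create a namespace with the following command sample.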
- -```shell - -$ bin/pulsar-admin namespaces create my-tenant/my-namespace - -``` - -Initially, the namespace is not assigned to any cluster. You can assign the namespace to clusters using the `set-clusters` subcommand: - -```shell - -$ bin/pulsar-admin namespaces set-clusters my-tenant/my-namespace \ - --clusters us-west,us-east,us-cent - -``` - -You can change the replication clusters for a namespace at any time, without disruption to ongoing traffic. Replication channels are immediately set up or stopped in all clusters as soon as the configuration changes. - -### Use topics with geo-replication - -Once you create a geo-replication namespace, any topics that producers or consumers create within that namespace is replicated across clusters. Typically, each application uses the `serviceUrl` for the local cluster. - -#### Selective replication - -By default, messages are replicated to all clusters configured for the namespace. You can restrict replication selectively by specifying a replication list for a message, and then that message is replicated only to the subset in the replication list. - -The following is an example for the [Java API](client-libraries-java.md). Note the use of the `setReplicationClusters` method when you construct the {@inject: javadoc:Message:/client/org/apache/pulsar/client/api/Message} object: - -```java - -List restrictReplicationTo = Arrays.asList( - "us-west", - "us-east" -); - -Producer producer = client.newProducer() - .topic("some-topic") - .create(); - -producer.newMessage() - .value("my-payload".getBytes()) - .setReplicationClusters(restrictReplicationTo) - .send(); - -``` - -#### Topic stats - -Topic-specific statistics for geo-replication topics are available via the [`pulsar-admin`](reference-pulsar-admin.md) tool and {@inject: rest:REST:/} API: - -```shell - -$ bin/pulsar-admin persistent stats persistent://my-tenant/my-namespace/my-topic - -``` - -Each cluster reports its own local stats, including the incoming and outgoing replication rates and backlogs. - -#### Delete a geo-replication topic - -Given that geo-replication topics exist in multiple regions, directly deleting a geo-replication topic is not possible. Instead, you should rely on automatic topic garbage collection. - -In Pulsar, a topic is automatically deleted when the topic meets the following three conditions: -- no producers or consumers are connected to it; -- no subscriptions to it; -- no more messages are kept for retention. -For geo-replication topics, each region uses a fault-tolerant mechanism to decide when deleting the topic locally is safe. - -You can explicitly disable topic garbage collection by setting `brokerDeleteInactiveTopicsEnabled` to `false` in your [broker configuration](reference-configuration.md#broker). - -To delete a geo-replication topic, close all producers and consumers on the topic, and delete all of its local subscriptions in every replication cluster. When Pulsar determines that no valid subscription for the topic remains across the system, it will garbage collect the topic. - -## Replicated subscriptions - -Pulsar supports replicated subscriptions, so you can keep subscription state in sync, within a sub-second timeframe, in the context of a topic that is being asynchronously replicated across multiple geographical regions. - -In case of failover, a consumer can restart consuming from the failure point in a different cluster. - -### Enable replicated subscription - -Replicated subscription is disabled by default. 
You can enable replicated subscription when creating a consumer. - -```java - -Consumer consumer = client.newConsumer(Schema.STRING) - .topic("my-topic") - .subscriptionName("my-subscription") - .replicateSubscriptionState(true) - .subscribe(); - -``` - -### Advantages - - * It is easy to implement the logic. - * You can choose to enable or disable replicated subscription. - * When you enable it, the overhead is low, and it is easy to configure. - * When you disable it, the overhead is zero. - -### Limitations - -* When you enable replicated subscription, you're creating a consistent distributed snapshot to establish an association between message ids from different clusters. The snapshots are taken periodically. The default value is `1 second`. It means that a consumer failing over to a different cluster can potentially receive 1 second of duplicates. You can also configure the frequency of the snapshot in the `broker.conf` file. -* Only the base line cursor position is synced in replicated subscriptions while the individual acknowledgments are not synced. This means the messages acknowledged out-of-order could end up getting delivered again, in the case of a cluster failover. diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/administration-isolation.md b/site2/website/versioned_docs/version-2.8.1-deprecated/administration-isolation.md deleted file mode 100644 index d2de042a2e7415..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/administration-isolation.md +++ /dev/null @@ -1,115 +0,0 @@ ---- -id: administration-isolation -title: Pulsar isolation -sidebar_label: "Pulsar isolation" -original_id: administration-isolation ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -In an organization, a Pulsar instance provides services to multiple teams. When organizing the resources across multiple teams, you want to make a suitable isolation plan to avoid the resource competition between different teams and applications and provide high-quality messaging service. In this case, you need to take resource isolation into consideration and weigh your intended actions against expected and unexpected consequences. - -To enforce resource isolation, you can use the Pulsar isolation policy, which allows you to allocate resources (**broker** and **bookie**) for the namespace. - -## Broker isolation - -In Pulsar, when namespaces (more specifically, namespace bundles) are assigned dynamically to brokers, the namespace isolation policy limits the set of brokers that can be used for assignment. Before topics are assigned to brokers, you can set the namespace isolation policy with a primary or a secondary regex to select desired brokers. - -You can set a namespace isolation policy for a cluster using one of the following methods. - -````mdx-code-block - - - - -``` - -pulsar-admin ns-isolation-policy set options - -``` - -For more information about the command `pulsar-admin ns-isolation-policy set options`, see [here](https://pulsar.apache.org/tools/pulsar-admin/). 
- -**Example** - -```shell - -bin/pulsar-admin ns-isolation-policy set \ ---auto-failover-policy-type min_available \ ---auto-failover-policy-params min_limit=1,usage_threshold=80 \ ---namespaces my-tenant/my-namespace \ ---primary 10.193.216.* my-cluster policy-name - -``` - - - - -[PUT /admin/v2/namespaces/{tenant}/{namespace}](https://pulsar.apache.org/admin-rest-api/?version=master&apiversion=v2#operation/createNamespace) - - - - -For how to set namespace isolation policy using Java admin API, see [here](https://github.com/apache/pulsar/blob/master/pulsar-client-admin/src/main/java/org/apache/pulsar/client/admin/internal/NamespacesImpl.java#L251). - - - - -```` - -## Bookie isolation - -A namespace can be isolated into user-defined groups of bookies, which guarantees all the data that belongs to the namespace is stored in desired bookies. The bookie affinity group uses the BookKeeper [rack-aware placement policy](https://bookkeeper.apache.org/docs/latest/api/javadoc/org/apache/bookkeeper/client/EnsemblePlacementPolicy.html) and it is a way to feed rack information which is stored as JSON format in znode. - -You can set a bookie affinity group using one of the following methods. - -````mdx-code-block - - - - -``` - -pulsar-admin namespaces set-bookie-affinity-group options - -``` - -For more information about the command `pulsar-admin namespaces set-bookie-affinity-group options`, see [here](https://pulsar.apache.org/tools/pulsar-admin/). - -**Example** - -```shell - -bin/pulsar-admin bookies set-bookie-rack \ ---bookie 127.0.0.1:3181 \ ---hostname 127.0.0.1:3181 \ ---group group-bookie1 \ ---rack rack1 - -bin/pulsar-admin namespaces set-bookie-affinity-group public/default \ ---primary-group group-bookie1 - -``` - - - - -[POST /admin/v2/namespaces/{tenant}/{namespace}/persistence/bookieAffinity](https://pulsar.apache.org/admin-rest-api/?version=master&apiversion=v2#operation/setBookieAffinityGroup) - - - - -For how to set bookie affinity group for a namespace using Java admin API, see [here](https://github.com/apache/pulsar/blob/master/pulsar-client-admin/src/main/java/org/apache/pulsar/client/admin/internal/NamespacesImpl.java#L1164). - - - - -```` diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/administration-load-balance.md b/site2/website/versioned_docs/version-2.8.1-deprecated/administration-load-balance.md deleted file mode 100644 index 134c39e0956b23..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/administration-load-balance.md +++ /dev/null @@ -1,256 +0,0 @@ ---- -id: administration-load-balance -title: Pulsar load balance -sidebar_label: "Load balance" -original_id: administration-load-balance ---- - -## Load balance across Pulsar brokers - -Pulsar is an horizontally scalable messaging system, so the traffic -in a logical cluster must be spread across all the available Pulsar brokers as evenly as possible, which is a core requirement. - -You can use multiple settings and tools to control the traffic distribution which require a bit of context to understand how the traffic is managed in Pulsar. Though, in most cases, the core requirement mentioned above is true out of the box and you should not worry about it. - -## Pulsar load manager architecture - -The following part introduces the basic architecture of the Pulsar load manager. - -### Assign topics to brokers dynamically - -Topics are dynamically assigned to brokers based on the load conditions of all brokers in the cluster. 
When a client starts using new topics that are not assigned to any broker, a process is triggered to choose the best-suited broker to acquire ownership of these topics according to the load conditions.

In case of partitioned topics, different partitions are assigned to different brokers. Here "topic" means either a non-partitioned topic or one partition of a topic.

The assignment is "dynamic" because the assignment changes quickly. For example, if the broker owning the topic crashes, the topic is reassigned immediately to another broker. Another scenario is that the broker owning the topic becomes overloaded. In this case, the topic is reassigned to a less loaded broker.

The stateless nature of brokers makes the dynamic assignment possible, so you can quickly expand or shrink the cluster based on usage.

#### Assignment granularity

The assignment of topics or partitions to brokers is not done at the topic or partition level, but at the bundle level (a higher level). The reason is to amortize the amount of information that you need to keep track of. Based on CPU, memory, traffic load, and other indicators, topics are assigned to a particular broker dynamically.

Instead of individual topic or partition assignment, each broker takes ownership of a subset of the topics for a namespace. This subset is called a "*bundle*" and effectively this subset is a sharding mechanism.

The namespace is the "administrative" unit: many config knobs or operations are done at the namespace level.

For assignment, a namespace is sharded into a list of "bundles", with each bundle comprising a portion of the overall hash range of the namespace.

Topics are assigned to a particular bundle by taking the hash of the topic name and checking which bundle the hash falls into.

Each bundle is independent of the others and thus is independently assigned to different brokers.

### Create namespaces and bundles

When you create a new namespace, the new namespace is set to use the default number of bundles. You can set this in `conf/broker.conf`:

```properties

# When a namespace is created without specifying the number of bundles, this
# value will be used as the default
defaultNumberOfNamespaceBundles=4

```

You can either change the system default, or override it when you create a new namespace:

```shell

$ bin/pulsar-admin namespaces create my-tenant/my-namespace --clusters us-west --bundles 16

```

With this command, you create a namespace with 16 initial bundles. Therefore the topics for this namespace can immediately be spread across up to 16 brokers.

In general, if you know the expected traffic and number of topics in advance, it is better to start with a reasonable number of bundles instead of waiting for the system to auto-correct the distribution.

On the same note, it is beneficial to start with more bundles than the number of brokers, because of the hashing nature of the distribution of topics into bundles. For example, for a namespace with 1000 topics, using something like 64 bundles achieves a good distribution of traffic across 16 brokers.

### Unload topics and bundles

You can "unload" a topic in Pulsar with an admin operation. Unloading means closing the topic, releasing ownership, and reassigning the topic to a new broker, based on current load.

When unloading happens, the client experiences a small latency blip, typically in the order of tens of milliseconds, while the topic is reassigned.
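If you need to trigger a reassignment from an application rather than from the command line shown below, the same unload operations are exposed through the Java admin client. The following is a minimal sketch, assuming an existing `PulsarAdmin` instance named `admin`; the topic and namespace names are placeholders.

```java

// Unload a single topic; it is reassigned based on the current load.
admin.topics().unload("persistent://my-tenant/my-namespace/my-topic");

// Unload every topic in a namespace and trigger reassignments.
admin.namespaces().unload("my-tenant/my-namespace");

```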
Unloading is the mechanism that the load manager uses to perform load shedding, but you can also trigger unloading manually, for example to correct the assignments and redistribute traffic even before any broker is overloaded.

Unloading a topic has no effect on the assignment, but just closes and reopens the particular topic:

```shell

pulsar-admin topics unload persistent://tenant/namespace/topic

```

To unload all topics for a namespace and trigger reassignments:

```shell

pulsar-admin namespaces unload tenant/namespace

```

### Split namespace bundles

Since the load for the topics in a bundle might change over time, or predicting it upfront might just be hard, brokers can split bundles into two. The new smaller bundles can be reassigned to different brokers.

The splitting happens based on some tunable thresholds. Any existing bundle that exceeds any of the thresholds is a candidate to be split. By default, the newly split bundles are also immediately offloaded to other brokers, to facilitate the traffic distribution.

```properties

# enable/disable namespace bundle auto split
loadBalancerAutoBundleSplitEnabled=true

# enable/disable automatic unloading of split bundles
loadBalancerAutoUnloadSplitBundlesEnabled=true

# maximum topics in a bundle, otherwise bundle split will be triggered
loadBalancerNamespaceBundleMaxTopics=1000

# maximum sessions (producers + consumers) in a bundle, otherwise bundle split will be triggered
loadBalancerNamespaceBundleMaxSessions=1000

# maximum msgRate (in + out) in a bundle, otherwise bundle split will be triggered
loadBalancerNamespaceBundleMaxMsgRate=30000

# maximum bandwidth (in + out) in a bundle, otherwise bundle split will be triggered
loadBalancerNamespaceBundleMaxBandwidthMbytes=100

# maximum number of bundles in a namespace (for auto-split)
loadBalancerNamespaceMaximumBundles=128

```

### Shed load automatically

The support for automatic load shedding is available in the load manager of Pulsar. This means that whenever the system recognizes a particular broker is overloaded, the system forces some traffic to be reassigned to less loaded brokers.

When a broker is identified as overloaded, the broker forces the "unloading" of a subset of the bundles, the ones with higher traffic, that make up for the overload percentage.

For example, the default threshold is 85% and if a broker is over quota at 95% CPU usage, then the broker unloads the percent difference plus a 5% margin: `(95% - 85%) + 5% = 15%`.

Given that the selection of bundles to offload is based on traffic (as a proxy measure for CPU, network, and memory), the broker unloads bundles for at least 15% of traffic.

The automatic load shedding is enabled by default and you can disable it with this setting:

```properties

# Enable/disable automatic bundle unloading for load-shedding
loadBalancerSheddingEnabled=true

```

Additional settings that apply to shedding:

```properties

# Load shedding interval. Broker periodically checks whether some traffic should be offloaded from
# some over-loaded broker to other under-loaded brokers
loadBalancerSheddingIntervalMinutes=1

# Prevent the same topics to be shed and moved to other brokers more than once within this timeframe
loadBalancerSheddingGracePeriodMinutes=30

```

#### Broker overload thresholds

The determination of when a broker is overloaded is based on thresholds of CPU, network, and memory usage.
Whenever any of those metrics reaches its threshold, the system triggers the shedding (if enabled).

By default, the overload threshold is set at 85%:

```properties

# Usage threshold to determine a broker as over-loaded
loadBalancerBrokerOverloadedThresholdPercentage=85

```

Pulsar gathers the usage stats from the system metrics.

In the case of network utilization, the network interface speed that Linux reports is sometimes not correct and needs to be manually overridden. This is the case on AWS EC2 instances with 1Gbps NIC speed, for which the OS reports 10Gbps.

Because of the incorrect max speed, the Pulsar load manager might think the broker has not reached the NIC capacity, while in fact the broker is already using all the bandwidth and the traffic is being slowed down.

You can use the following setting to correct the max NIC speed:

```properties

# Override the auto-detection of the network interfaces max speed.
# This option is useful in some environments (eg: EC2 VMs) where the max speed
# reported by Linux is not reflecting the real bandwidth available to the broker.
# Since the network usage is employed by the load manager to decide when a broker
# is overloaded, it is important to make sure the info is correct or override it
# with the right value here. The configured value can be a double (eg: 0.8) and that
# can be used to trigger load-shedding even before hitting the NIC limits.
loadBalancerOverrideBrokerNicSpeedGbps=

```

When the value is empty, Pulsar uses the value that the OS reports.

### Distribute anti-affinity namespaces across failure domains

When your application has multiple namespaces and you want at least one of them available at all times to avoid any downtime, you can group these namespaces and distribute them across different [failure domains](reference-terminology.md#failure-domain) and different brokers. Thus, if one of the failure domains is down (due to a release rollout or a broker restart), only the namespaces owned by that specific failure domain are disrupted; the rest of the namespaces, owned by other domains, remain available without any impact.

Such a group of namespaces has anti-affinity to each other; that is, all the namespaces in this group are [anti-affinity namespaces](reference-terminology.md#anti-affinity-namespaces) and are distributed to different failure domains in a load-balanced manner.

As illustrated in the following figure, Pulsar has 2 failure domains (Domain1 and Domain2), and each domain has 2 brokers in it. You can create an anti-affinity namespace group that has 4 namespaces in it, all of which have anti-affinity to each other. The load manager tries to distribute namespaces evenly across all the brokers in the same domain. Since each domain has 2 brokers, every broker owns one namespace from this anti-affinity namespace group: each domain owns 2 namespaces, and each broker owns 1 namespace.

![Distribute anti-affinity namespaces across failure domains](/assets/anti-affinity-namespaces-across-failure-domains.svg)

The load manager follows an even-distribution policy across failure domains to assign anti-affinity namespaces. The following table outlines the even-distributed assignment sequence illustrated in the above figure.
| Assignment sequence | Namespace | Failure domain candidates | Broker candidates | Selected broker |
|:---|:------------|:------------------|:------------------------------------|:-----------------|
| 1 | Namespace1 | Domain1, Domain2 | Broker1, Broker2, Broker3, Broker4 | Domain1:Broker1 |
| 2 | Namespace2 | Domain2 | Broker3, Broker4 | Domain2:Broker3 |
| 3 | Namespace3 | Domain1, Domain2 | Broker2, Broker4 | Domain1:Broker2 |
| 4 | Namespace4 | Domain2 | Broker4 | Domain2:Broker4 |

:::tip

* Each namespace belongs to only one anti-affinity group. If a namespace with an existing anti-affinity assignment is assigned to another anti-affinity group, the original assignment is dropped.

* If there are more anti-affinity namespaces than failure domains, the load manager distributes namespaces evenly across all the domains, and every domain in turn distributes its namespaces evenly across all the brokers under that domain.

:::

#### Create a failure domain and register brokers

:::note

One broker can only be registered to a single failure domain.

:::

To create a domain under a specific cluster and register brokers, run the following command (the angle-bracket values are placeholders for your own cluster, domain name, and broker list):

```bash

pulsar-admin clusters create-failure-domain <cluster-name> --domain-name <domain-name> --broker-list <broker-list>

```

You can also view, update, and delete domains under a specific cluster. For more information, refer to the [Pulsar admin doc](/tools/pulsar-admin/).

#### Create an anti-affinity namespace group

An anti-affinity group is created automatically when the first namespace is assigned to it. To assign a namespace to an anti-affinity group, run the following command, which sets an anti-affinity group name for a namespace:

```bash

pulsar-admin namespaces set-anti-affinity-group <namespace> --group <group-name>

```

For more information about `anti-affinity-group` related commands, refer to the [Pulsar admin doc](/tools/pulsar-admin/).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/administration-proxy.md b/site2/website/versioned_docs/version-2.8.1-deprecated/administration-proxy.md
deleted file mode 100644
index c046ed34d46b22..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/administration-proxy.md
+++ /dev/null
@@ -1,86 +0,0 @@
---
id: administration-proxy
title: Pulsar proxy
sidebar_label: "Pulsar proxy"
original_id: administration-proxy
---

The Pulsar proxy is an optional gateway. It is used when direct connections between clients and Pulsar brokers are either infeasible or undesirable. For example, when you run Pulsar in a cloud environment or on [Kubernetes](https://kubernetes.io) or an analogous platform, you can run the Pulsar proxy.

## Configure the proxy

Before using the proxy, you need to configure it with the broker addresses in the cluster. You can configure the proxy to connect directly to service discovery, or specify a broker URL in the configuration.

### Use service discovery

Pulsar uses [ZooKeeper](https://zookeeper.apache.org) for service discovery. To connect the proxy to ZooKeeper, specify the following in `conf/proxy.conf`.

```properties

zookeeperServers=zk-0,zk-1,zk-2
configurationStoreServers=zk-0:2184,zk-remote:2184

```

> To use service discovery, you need to open the network ACLs, so that the proxy can connect to the ZooKeeper nodes through the ZooKeeper client port (port `2181`) and the configuration store client port (port `2184`).

> However, using service discovery is not secure.
If the network ACL is open and someone compromises a proxy, they gain full access to ZooKeeper.

### Use broker URLs

It is more secure to specify a URL to connect to the brokers.

Proxy authorization requires access to ZooKeeper, so if you use these broker URLs to connect to the brokers, you need to disable authorization at the proxy level. Brokers still authorize requests after the proxy forwards them.

You can configure the broker URLs in `conf/proxy.conf` as follows.

```properties

brokerServiceURL=pulsar://brokers.example.com:6650
brokerWebServiceURL=http://brokers.example.com:8080
functionWorkerWebServiceURL=http://function-workers.example.com:8080

```

If you use TLS, configure the broker URLs in the following way:

```properties

brokerServiceURLTLS=pulsar+ssl://brokers.example.com:6651
brokerWebServiceURLTLS=https://brokers.example.com:8443
functionWorkerWebServiceURL=https://function-workers.example.com:8443

```

The hostname in the URLs provided should be a DNS entry that points to multiple brokers, or a virtual IP address backed by multiple broker IP addresses, so that the proxy does not lose connectivity to the Pulsar cluster if a single broker becomes unavailable.

The ports used to connect to the brokers (6650 and 8080, or in the case of TLS, 6651 and 8443) should be open in the network ACLs.

Note that if you do not use functions, you do not need to configure `functionWorkerWebServiceURL`.

## Start the proxy

To start the proxy:

```bash

$ cd /path/to/pulsar/directory
$ bin/pulsar proxy

```

> You can run multiple instances of the Pulsar proxy in a cluster.

## Stop the proxy

The Pulsar proxy runs in the foreground by default. To stop the proxy, simply stop the process in which the proxy is running.

## Proxy frontends

You can run the Pulsar proxy behind some kind of load-distributing frontend, such as an [HAProxy](https://www.digitalocean.com/community/tutorials/an-introduction-to-haproxy-and-load-balancing-concepts) load balancer.

## Use Pulsar clients with the proxy

Once your Pulsar proxy is up and running, preferably behind a load-distributing [frontend](#proxy-frontends), clients can connect to the proxy via whichever address the frontend uses. If the address is the DNS address `pulsar.cluster.default`, for example, the connection URL for clients is `pulsar://pulsar.cluster.default:6650`.

For more information on proxy configuration, refer to [Pulsar proxy](reference-configuration.md#pulsar-proxy).
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/administration-pulsar-manager.md b/site2/website/versioned_docs/version-2.8.1-deprecated/administration-pulsar-manager.md
deleted file mode 100644
index 0e3800d847f0c8..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/administration-pulsar-manager.md
+++ /dev/null
@@ -1,205 +0,0 @@
---
id: administration-pulsar-manager
title: Pulsar Manager
sidebar_label: "Pulsar Manager"
original_id: administration-pulsar-manager
---

Pulsar Manager is a web-based GUI management and monitoring tool that helps administrators and users manage and monitor tenants, namespaces, topics, subscriptions, brokers, clusters, and so on, and supports dynamic configuration of multiple environments.

:::note

If you currently monitor your stats with the Pulsar dashboard, consider using Pulsar Manager instead; the Pulsar dashboard is deprecated.
:::

## Install

The easiest way to use Pulsar Manager is to run it inside a [Docker](https://www.docker.com/products/docker) container.

```shell

docker pull apachepulsar/pulsar-manager:v0.2.0
docker run -it \
  -p 9527:9527 -p 7750:7750 \
  -e SPRING_CONFIGURATION_FILE=/pulsar-manager/pulsar-manager/application.properties \
  apachepulsar/pulsar-manager:v0.2.0

```

* `SPRING_CONFIGURATION_FILE`: Default configuration file for Spring.

### Set administrator account and password

 ```shell

 CSRF_TOKEN=$(curl http://localhost:7750/pulsar-manager/csrf-token)
 curl \
   -H "X-XSRF-TOKEN: $CSRF_TOKEN" \
   -H "Cookie: XSRF-TOKEN=$CSRF_TOKEN;" \
   -H "Content-Type: application/json" \
   -X PUT http://localhost:7750/pulsar-manager/users/superuser \
   -d '{"name": "admin", "password": "apachepulsar", "description": "test", "email": "username@test.org"}'

 ```

You can find the Docker image in the [docker](https://github.com/apache/pulsar-manager/tree/master/docker) directory of the Pulsar Manager repository and build an image from the source code as well:

```

git clone https://github.com/apache/pulsar-manager
cd pulsar-manager/front-end
npm install --save
npm run build:prod
cd ..
./gradlew build -x test
cd ..
docker build -f docker/Dockerfile --build-arg BUILD_DATE=`date -u +"%Y-%m-%dT%H:%M:%SZ"` --build-arg VCS_REF=latest --build-arg VERSION=latest -t apachepulsar/pulsar-manager .

```

### Use custom databases

If you have a large amount of data, you can use a custom database. The following is an example for PostgreSQL.

1. Initialize the database and table structures using this [file](https://github.com/apache/pulsar-manager/tree/master/src/main/resources/META-INF/sql/postgresql-schema.sql).

2. Modify the [configuration file](https://github.com/apache/pulsar-manager/blob/master/src/main/resources/application.properties) and add the PostgreSQL configuration.

```

spring.datasource.driver-class-name=org.postgresql.Driver
spring.datasource.url=jdbc:postgresql://127.0.0.1:5432/pulsar_manager
spring.datasource.username=postgres
spring.datasource.password=postgres

```

3. Compile to generate a new executable JAR package.

```

./gradlew build -x test

```

### Enable JWT authentication

If you want to turn on JWT authentication, configure the following parameters:

* `backend.jwt.token`: token for the superuser. You need to configure this parameter during cluster initialization.
* `jwt.broker.token.mode`: mode for generating tokens; one of PUBLIC, PRIVATE, or SECRET.
* `jwt.broker.public.key`: configure this option if you use the PUBLIC mode.
* `jwt.broker.private.key`: configure this option if you use the PRIVATE mode.
* `jwt.broker.secret.key`: configure this option if you use the SECRET mode.

For more information, see [Token Authentication Admin of Pulsar](http://pulsar.apache.org/docs/en/security-token-admin/).


If you want to enable JWT authentication, use one of the following methods.
- - -* Method 1: use command-line tool - -``` - -wget https://dist.apache.org/repos/dist/release/pulsar/pulsar-manager/pulsar-manager-0.2.0/apache-pulsar-manager-0.2.0-bin.tar.gz -tar -zxvf apache-pulsar-manager-0.2.0-bin.tar.gz -cd pulsar-manager -tar -zxvf pulsar-manager.tar -cd pulsar-manager -cp -r ../dist ui -./bin/pulsar-manager --redirect.host=http://localhost --redirect.port=9527 insert.stats.interval=600000 --backend.jwt.token=token --jwt.broker.token.mode=PRIVATE --jwt.broker.private.key=file:///path/broker-private.key --jwt.broker.public.key=file:///path/broker-public.key - -``` - -Firstly, [set the administrator account and password](#set-administrator-account-and-password) - -Secondly, log in to Pulsar manager through http://localhost:7750/ui/index.html. - -* Method 2: configure the application.properties file - -``` - -backend.jwt.token=token - -jwt.broker.token.mode=PRIVATE -jwt.broker.public.key=file:///path/broker-public.key -jwt.broker.private.key=file:///path/broker-private.key - -or -jwt.broker.token.mode=SECRET -jwt.broker.secret.key=file:///path/broker-secret.key - -``` - -* Method 3: use Docker and enable token authentication. - -``` - -export JWT_TOKEN="your-token" -docker run -it -p 9527:9527 -p 7750:7750 -e REDIRECT_HOST=http://localhost -e REDIRECT_PORT=9527 -e DRIVER_CLASS_NAME=org.postgresql.Driver -e URL='jdbc:postgresql://127.0.0.1:5432/pulsar_manager' -e USERNAME=pulsar -e PASSWORD=pulsar -e LOG_LEVEL=DEBUG -e JWT_TOKEN=$JWT_TOKEN -v $PWD:/data apachepulsar/pulsar-manager:v0.2.0 /bin/sh - -``` - -* `JWT_TOKEN`: the token of superuser configured for the broker. It is generated by the `bin/pulsar tokens create --secret-key` or `bin/pulsar tokens create --private-key` command. -* `REDIRECT_HOST`: the IP address of the front-end server. -* `REDIRECT_PORT`: the port of the front-end server. -* `DRIVER_CLASS_NAME`: the driver class name of the PostgreSQL database. -* `URL`: the JDBC URL of your PostgreSQL database, such as jdbc:postgresql://127.0.0.1:5432/pulsar_manager. The docker image automatically start a local instance of the PostgreSQL database. -* `USERNAME`: the username of PostgreSQL. -* `PASSWORD`: the password of PostgreSQL. -* `LOG_LEVEL`: the level of log. - -* Method 4: use Docker and turn on **token authentication** and **token management** by private key and public key. - -``` - -export JWT_TOKEN="your-token" -export PRIVATE_KEY="file:///pulsar-manager/secret/my-private.key" -export PUBLIC_KEY="file:///pulsar-manager/secret/my-public.key" -docker run -it -p 9527:9527 -p 7750:7750 -e REDIRECT_HOST=http://localhost -e REDIRECT_PORT=9527 -e DRIVER_CLASS_NAME=org.postgresql.Driver -e URL='jdbc:postgresql://127.0.0.1:5432/pulsar_manager' -e USERNAME=pulsar -e PASSWORD=pulsar -e LOG_LEVEL=DEBUG -e JWT_TOKEN=$JWT_TOKEN -e PRIVATE_KEY=$PRIVATE_KEY -e PUBLIC_KEY=$PUBLIC_KEY -v $PWD:/data -v $PWD/secret:/pulsar-manager/secret apachepulsar/pulsar-manager:v0.2.0 /bin/sh - -``` - -* `JWT_TOKEN`: the token of superuser configured for the broker. It is generated by the `bin/pulsar tokens create --private-key` command. -* `PRIVATE_KEY`: private key path mounted in container, generated by `bin/pulsar tokens create-key-pair` command. -* `PUBLIC_KEY`: public key path mounted in container, generated by `bin/pulsar tokens create-key-pair` command. -* `$PWD/secret`: the folder where the private key and public key generated by the `bin/pulsar tokens create-key-pair` command are placed locally -* `REDIRECT_HOST`: the IP address of the front-end server. 
-* `REDIRECT_PORT`: the port of the front-end server. -* `DRIVER_CLASS_NAME`: the driver class name of the PostgreSQL database. -* `URL`: the JDBC URL of your PostgreSQL database, such as jdbc:postgresql://127.0.0.1:5432/pulsar_manager. The docker image automatically start a local instance of the PostgreSQL database. -* `USERNAME`: the username of PostgreSQL. -* `PASSWORD`: the password of PostgreSQL. -* `LOG_LEVEL`: the level of log. - -* Method 5: use Docker and turn on **token authentication** and **token management** by secret key. - -``` - -export JWT_TOKEN="your-token" -export SECRET_KEY="file:///pulsar-manager/secret/my-secret.key" -docker run -it -p 9527:9527 -p 7750:7750 -e REDIRECT_HOST=http://localhost -e REDIRECT_PORT=9527 -e DRIVER_CLASS_NAME=org.postgresql.Driver -e URL='jdbc:postgresql://127.0.0.1:5432/pulsar_manager' -e USERNAME=pulsar -e PASSWORD=pulsar -e LOG_LEVEL=DEBUG -e JWT_TOKEN=$JWT_TOKEN -e SECRET_KEY=$SECRET_KEY -v $PWD:/data -v $PWD/secret:/pulsar-manager/secret apachepulsar/pulsar-manager:v0.2.0 /bin/sh - -``` - -* `JWT_TOKEN`: the token of superuser configured for the broker. It is generated by the `bin/pulsar tokens create --secret-key` command. -* `SECRET_KEY`: secret key path mounted in container, generated by `bin/pulsar tokens create-secret-key` command. -* `$PWD/secret`: the folder where the secret key generated by the `bin/pulsar tokens create-secret-key` command are placed locally -* `REDIRECT_HOST`: the IP address of the front-end server. -* `REDIRECT_PORT`: the port of the front-end server. -* `DRIVER_CLASS_NAME`: the driver class name of the PostgreSQL database. -* `URL`: the JDBC URL of your PostgreSQL database, such as jdbc:postgresql://127.0.0.1:5432/pulsar_manager. The docker image automatically start a local instance of the PostgreSQL database. -* `USERNAME`: the username of PostgreSQL. -* `PASSWORD`: the password of PostgreSQL. -* `LOG_LEVEL`: the level of log. - -* For more information about backend configurations, see [here](https://github.com/apache/pulsar-manager/blob/master/src/README). -* For more information about frontend configurations, see [here](https://github.com/apache/pulsar-manager/tree/master/front-end). - -## Log in - -[Set the administrator account and password](#set-administrator-account-and-password). - -Visit http://localhost:9527 to log in. diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/administration-stats.md b/site2/website/versioned_docs/version-2.8.1-deprecated/administration-stats.md deleted file mode 100644 index ac0c03602f36d5..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/administration-stats.md +++ /dev/null @@ -1,64 +0,0 @@ ---- -id: administration-stats -title: Pulsar stats -sidebar_label: "Pulsar statistics" -original_id: administration-stats ---- - -## Partitioned topics - -|Stat|Description| -|---|---| -|msgRateIn| The sum of publish rates of all local and replication publishers in messages per second.| -|msgThroughputIn| Same as msgRateIn but in bytes per second instead of messages per second.| -|msgRateOut| The sum of dispatch rates of all local and replication consumers in messages per second.| -|msgThroughputOut| Same as msgRateOut but in bytes per second instead of messages per second.| -|averageMsgSize| Average message size, in bytes, from this publisher within the last interval.| -|storageSize| The sum of storage size of the ledgers for this topic.| -|publishers| The list of all local publishers into the topic. 
Publishers can be anywhere from zero to thousands.|
|producerId| Internal identifier for this producer on this topic.|
|producerName| Internal identifier for this producer, generated by the client library.|
|address| IP address and source port for the connection of this producer.|
|connectedSince| Timestamp when this producer was created or last reconnected.|
|subscriptions| The list of all local subscriptions to the topic.|
|my-subscription| The name of this subscription (client defined).|
|msgBacklog| The count of messages in backlog for this subscription.|
|type| The type of this subscription.|
|msgRateExpired| The rate at which messages are discarded instead of dispatched from this subscription due to TTL.|
|consumers| The list of connected consumers for this subscription.|
|consumerName| Internal identifier for this consumer, generated by the client library.|
|availablePermits| The number of messages this consumer has space for in the client library's listen queue. A value of 0 means the client library's queue is full and receive() is not being called. A nonzero value means this consumer is ready for messages to be dispatched to it.|
|replication| This section gives the stats for cross-colo replication of this topic.|
|replicationBacklog| The outbound replication backlog in messages.|
|connected| Whether the outbound replicator is connected.|
|replicationDelayInSeconds| How long the oldest message has been waiting to be sent through the connection, if connected is true.|
|inboundConnection| The IP and port of the broker in the remote cluster's publisher connection to this broker.|
|inboundConnectedSince| The TCP connection being used to publish messages to the remote cluster. If no local publishers are connected, this connection is automatically closed after a minute.|


## Topics

|Stat|Description|
|---|---|
|entriesAddedCounter| Messages published since this broker loaded this topic.|
|numberOfEntries| Total number of messages being tracked.|
|totalSize| Total storage size in bytes of all messages.|
|currentLedgerEntries| Count of messages written to the ledger currently open for writing.|
|currentLedgerSize| Size in bytes of messages written to the ledger currently open for writing.|
|lastLedgerCreatedTimestamp| Time when the last ledger was created.|
|lastLedgerCreationFailureTimestamp| Time when the last ledger creation failed.|
|waitingCursorsCount| How many cursors are caught up and waiting for a new message to be published.|
|pendingAddEntriesCount| How many (asynchronous) write requests are pending completion.|
|lastConfirmedEntry| The ledgerid:entryid of the last message successfully written. If the entryid is -1, then the ledger is open or is currently being opened, but has no entries written yet.|
|state| The state of the cursor ledger. Open means you have a cursor ledger for saving updates of the markDeletePosition.|
|ledgers| The ordered list of all ledgers for this topic holding its messages.|
|cursors| The list of all cursors on this topic. 
Every subscription shown in the topic stats has one.|
|markDeletePosition| The ack position: the last message the subscriber has acknowledged receiving.|
|readPosition| The latest position of the subscriber for reading messages.|
|waitingReadOp| True when the subscription has read the latest message published to the topic and is waiting for new messages to be published.|
|pendingReadOps| The counter of outstanding read requests to BookKeeper currently in progress.|
|messagesConsumedCounter| Number of messages this cursor has acknowledged since this broker loaded this topic.|
|cursorLedger| The ledger used to persistently store the current markDeletePosition.|
|cursorLedgerLastEntry| The last entryid used to persistently store the current markDeletePosition.|
|individuallyDeletedMessages| If acks are done out of order, shows the ranges of messages acked between the markDeletePosition and the readPosition.|
|lastLedgerSwitchTimestamp| The last time the cursor ledger was rolled over.|
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/administration-upgrade.md b/site2/website/versioned_docs/version-2.8.1-deprecated/administration-upgrade.md
deleted file mode 100644
index 72d136b6460f62..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/administration-upgrade.md
+++ /dev/null
@@ -1,168 +0,0 @@
---
id: administration-upgrade
title: Upgrade Guide
sidebar_label: "Upgrade"
original_id: administration-upgrade
---

## Upgrade guidelines

Apache Pulsar is composed of multiple components: ZooKeeper, bookies, and brokers. These components are either stateful or stateless. You do not have to upgrade ZooKeeper nodes unless you have special requirements. While you upgrade, you need to pay attention to bookies (stateful) and to brokers and proxies (stateless).

The following are some guidelines on upgrading a Pulsar cluster. Read the guidelines before upgrading.

- Back up all your configuration files before upgrading.
- Read this guide entirely, make a plan, and then execute the plan. When you make your upgrade plan, take your specific requirements and environment into consideration.
- Pay attention to the upgrade order of components. In general, you do not need to upgrade your ZooKeeper or configuration store cluster (the global ZooKeeper cluster). You need to upgrade bookies first, and then upgrade brokers, proxies, and your clients.
- If `autorecovery` is enabled, disable `autorecovery` during the upgrade process, and re-enable it after completing the process.
- Read the release notes carefully for each release. Release notes contain features and configuration changes that might impact your upgrade.
- Upgrade a small subset of nodes of each type to canary test the new version before upgrading all nodes of that type in the cluster. When you have upgraded the canary nodes, run them for a while to ensure that they work correctly.
- Upgrade one data center to verify the new version before upgrading all data centers if your cluster runs in multi-cluster replicated mode.

> Note: Currently, Apache Pulsar is compatible between versions.

## Upgrade sequence

To upgrade an Apache Pulsar cluster, follow this upgrade sequence.

1. Upgrade ZooKeeper (optional)
- Canary test: test an upgraded version in one or a small set of ZooKeeper nodes.
- Rolling upgrade: roll out the upgraded version to all ZooKeeper servers incrementally, one at a time. Monitor your dashboard during the whole rolling upgrade process.
2. 
Upgrade bookies
- Canary test: test an upgraded version in one or a small set of bookies.
- Rolling upgrade:

  a. Disable `autorecovery` with the following command.

   ```shell

   bin/bookkeeper shell autorecovery -disable

   ```

  b. Roll out the upgraded version to all bookies in the cluster after you determine that a version is safe after canary.

  c. After you upgrade all bookies, re-enable `autorecovery` with the following command.

   ```shell

   bin/bookkeeper shell autorecovery -enable

   ```

3. Upgrade brokers
- Canary test: test an upgraded version in one or a small set of brokers.
- Rolling upgrade: roll out the upgraded version to all brokers in the cluster after you determine that a version is safe after canary.
4. Upgrade proxies
- Canary test: test an upgraded version in one or a small set of proxies.
- Rolling upgrade: roll out the upgraded version to all proxies in the cluster after you determine that a version is safe after canary.

## Upgrade ZooKeeper (optional)

While you upgrade ZooKeeper servers, you can do a canary test first, and then upgrade all ZooKeeper servers in the cluster.

### Canary test

You can test an upgraded version on one of the ZooKeeper servers before upgrading all ZooKeeper servers in your cluster.

To upgrade a ZooKeeper server to a new version, complete the following steps:

1. Stop the ZooKeeper server.
2. Upgrade the binary and configuration files.
3. Start the ZooKeeper server with the new binary files.
4. Use `pulsar zookeeper-shell` to connect to the newly upgraded ZooKeeper server and run a few commands to verify that it works as expected.
5. Run the ZooKeeper server for a few days, observe it, and make sure the ZooKeeper cluster runs well.

#### Canary rollback

If issues occur during the canary test, you can shut down the problematic ZooKeeper node, revert the binary and configuration, and restart ZooKeeper with the reverted binary.

### Upgrade all ZooKeeper servers

After a canary test to upgrade one ZooKeeper server in your cluster, you can upgrade all ZooKeeper servers in your cluster.

You can upgrade all ZooKeeper servers one by one, following the steps in the canary test.

## Upgrade bookies

While you upgrade bookies, you can do a canary test first, and then upgrade all bookies in the cluster.
For more details, you can read the Apache BookKeeper [Upgrade guide](http://bookkeeper.apache.org/docs/latest/admin/upgrade).

### Canary test

You can test an upgraded version on one or a small set of bookies before upgrading all bookies in your cluster.

To upgrade a bookie to a new version, complete the following steps:

1. Stop the bookie.
2. Upgrade the binary and configuration files.
3. Start the bookie in `ReadOnly` mode to verify that the bookie of this new version runs well under read workload.

   ```shell

   bin/pulsar bookie --readOnly

   ```

4. When the bookie runs successfully in `ReadOnly` mode, stop the bookie and restart it in `Write/Read` mode.

   ```shell

   bin/pulsar bookie

   ```

5. Observe and make sure the cluster serves both write and read traffic.

#### Canary rollback

If issues occur during the canary test, you can shut down the problematic bookie node. Other bookies in the cluster replace the problematic bookie node via autorecovery.

### Upgrade all bookies

After a canary test to upgrade some bookies in your cluster, you can upgrade all bookies in your cluster. A quick way to sanity-check bookie registration during the rollout is sketched below.
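The following is a minimal sketch using the BookKeeper admin client to list the bookies that are currently registered as writable, which can help you confirm that each bookie re-registers after its upgrade. The ZooKeeper connect string is a placeholder, and the exact constructor may differ across BookKeeper versions:

```java

import org.apache.bookkeeper.client.BookKeeperAdmin;

public class BookieCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder connect string for the cluster's local ZooKeeper quorum.
        try (BookKeeperAdmin admin = new BookKeeperAdmin("zk1.example.com:2181")) {
            // Print the writable bookies currently registered.
            admin.getAvailableBookies()
                 .forEach(bookie -> System.out.println("available: " + bookie));
        }
    }
}

```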
Before upgrading, you have to decide whether to upgrade the whole cluster at once; both downtime and rolling upgrade scenarios are covered below.

In a rolling upgrade scenario, upgrade one bookie at a time. In a downtime upgrade scenario, shut down the entire cluster, upgrade each bookie, and then start the cluster.

In both scenarios, the procedure is the same for each bookie.

1. Stop the bookie.
2. Upgrade the software (either new binary or new configuration files).
3. Start the bookie.

> **Advanced operations**
> When you upgrade a large BookKeeper cluster in a rolling upgrade scenario, upgrading one bookie at a time is slow. If you have configured a rack-aware or region-aware placement policy, you can upgrade bookies rack by rack or region by region, which speeds up the whole upgrade process.

## Upgrade brokers and proxies

The upgrade procedure for brokers and proxies is the same. Brokers and proxies are `stateless`, so upgrading these two services is easy.

### Canary test

You can test an upgraded version on one or a small set of nodes before upgrading all nodes in your cluster.

To upgrade to a new version, complete the following steps:

1. Stop a broker (or proxy).
2. Upgrade the binary and configuration file.
3. Start the broker (or proxy).

#### Canary rollback

If issues occur during the canary test, you can shut down the problematic broker (or proxy) node, revert to the old version, and restart the broker (or proxy).

### Upgrade all brokers or proxies

After a canary test to upgrade some brokers or proxies in your cluster, you can upgrade all brokers or proxies in your cluster.

Before upgrading, you have to decide whether to upgrade the whole cluster at once; again, both downtime and rolling upgrade scenarios apply.

In a rolling upgrade scenario, you can upgrade one broker or one proxy at a time if the size of the cluster is small. If your cluster is large, you can upgrade brokers or proxies in batches. When you upgrade a batch of brokers or proxies, make sure the remaining brokers and proxies in the cluster have enough capacity to handle the traffic during the upgrade.

In a downtime upgrade scenario, shut down the entire cluster, upgrade each broker or proxy, and then start the cluster.

In both scenarios, the procedure is the same for each broker or proxy.

1. Stop the broker or proxy.
2. Upgrade the software (either new binary or new configuration files).
3. Start the broker or proxy.
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/administration-zk-bk.md b/site2/website/versioned_docs/version-2.8.1-deprecated/administration-zk-bk.md
deleted file mode 100644
index 2c080123ca81df..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/administration-zk-bk.md
+++ /dev/null
@@ -1,386 +0,0 @@
---
id: administration-zk-bk
title: ZooKeeper and BookKeeper administration
sidebar_label: "ZooKeeper and BookKeeper"
original_id: administration-zk-bk
---

Pulsar relies on two external systems for essential tasks:

* [ZooKeeper](https://zookeeper.apache.org/) is responsible for a wide variety of configuration-related and coordination-related tasks.
* [BookKeeper](http://bookkeeper.apache.org/) is responsible for [persistent storage](concepts-architecture-overview.md#persistent-storage) of message data.

ZooKeeper and BookKeeper are both open-source [Apache](https://www.apache.org/) projects.
- -> Skip to the [How Pulsar uses ZooKeeper and BookKeeper](#how-pulsar-uses-zookeeper-and-bookkeeper) section below for a more schematic explanation of the role of these two systems in Pulsar. - - -## ZooKeeper - -Each Pulsar instance relies on two separate ZooKeeper quorums. - -* [Local ZooKeeper](#deploy-local-zookeeper) operates at the cluster level and provides cluster-specific configuration management and coordination. Each Pulsar cluster needs to have a dedicated ZooKeeper cluster. -* [Configuration Store](#deploy-configuration-store) operates at the instance level and provides configuration management for the entire system (and thus across clusters). An independent cluster of machines or the same machines that local ZooKeeper uses can provide the configuration store quorum. - -### Deploy local ZooKeeper - -ZooKeeper manages a variety of essential coordination-related and configuration-related tasks for Pulsar. - -To deploy a Pulsar instance, you need to stand up one local ZooKeeper cluster *per Pulsar cluster*. - -To begin, add all ZooKeeper servers to the quorum configuration specified in the [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file. Add a `server.N` line for each node in the cluster to the configuration, where `N` is the number of the ZooKeeper node. The following is an example for a three-node cluster: - -```properties - -server.1=zk1.us-west.example.com:2888:3888 -server.2=zk2.us-west.example.com:2888:3888 -server.3=zk3.us-west.example.com:2888:3888 - -``` - -On each host, you need to specify the node ID in `myid` file of each node, which is in `data/zookeeper` folder of each server by default (you can change the file location via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter). - -> See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed information on `myid` and more. - - -On a ZooKeeper server at `zk1.us-west.example.com`, for example, you can set the `myid` value like this: - -```shell - -$ mkdir -p data/zookeeper -$ echo 1 > data/zookeeper/myid - -``` - -On `zk2.us-west.example.com` the command is `echo 2 > data/zookeeper/myid` and so on. - -Once you add each server to the `zookeeper.conf` configuration and each server has the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool: - -```shell - -$ bin/pulsar-daemon start zookeeper - -``` - -### Deploy configuration store - -The ZooKeeper cluster configured and started up in the section above is a *local* ZooKeeper cluster that you can use to manage a single Pulsar cluster. In addition to a local cluster, however, a full Pulsar instance also requires a configuration store for handling some instance-level configuration and coordination tasks. - -If you deploy a [single-cluster](#single-cluster-pulsar-instance) instance, you do not need a separate cluster for the configuration store. If, however, you deploy a [multi-cluster](#multi-cluster-pulsar-instance) instance, you need to stand up a separate ZooKeeper cluster for configuration tasks. - -#### Single-cluster Pulsar instance - -If your Pulsar instance consists of just one cluster, then you can deploy a configuration store on the same machines as the local ZooKeeper quorum but run on different TCP ports. 
To deploy a ZooKeeper configuration store in a single-cluster instance, add the same ZooKeeper servers that the local quorum uses to the configuration file in [`conf/global_zookeeper.conf`](reference-configuration.md#configuration-store) using the same method as for [local ZooKeeper](#local-zookeeper), but make sure to use a different port (2181 is the default for ZooKeeper). The following is an example that uses port 2184 for a three-node ZooKeeper cluster:

```properties

clientPort=2184
server.1=zk1.us-west.example.com:2185:2186
server.2=zk2.us-west.example.com:2185:2186
server.3=zk3.us-west.example.com:2185:2186

```

As before, create the `myid` files for each server in `data/global-zookeeper/myid`.

#### Multi-cluster Pulsar instance

When you deploy a global Pulsar instance, with clusters distributed across different geographical regions, the configuration store serves as a highly available and strongly consistent metadata store that can tolerate failures and partitions spanning whole regions.

The key here is to make sure the ZK quorum members are spread across at least 3 regions, and that the other regions run as observers.

Again, given the very low expected load on the configuration store servers, you can share the same hosts used for the local ZooKeeper quorum.

For example, assume a Pulsar instance with the clusters `us-west`, `us-east`, `us-central`, `eu-central`, and `ap-south`. Also assume that each cluster has its own local ZK servers, named as follows:

```

zk[1-3].${CLUSTER}.example.com

```

In this scenario, you want to pick the quorum participants from a few clusters and let all the others be ZK observers. For example, to form a 7-server quorum, you can pick 3 servers from `us-west`, 2 from `us-central`, and 2 from `us-east`.

This guarantees that writes to the configuration store are possible even if one of these regions is unreachable.

The ZK configuration on all the servers looks like:

```properties

clientPort=2184
server.1=zk1.us-west.example.com:2185:2186
server.2=zk2.us-west.example.com:2185:2186
server.3=zk3.us-west.example.com:2185:2186
server.4=zk1.us-central.example.com:2185:2186
server.5=zk2.us-central.example.com:2185:2186
server.6=zk3.us-central.example.com:2185:2186:observer
server.7=zk1.us-east.example.com:2185:2186
server.8=zk2.us-east.example.com:2185:2186
server.9=zk3.us-east.example.com:2185:2186:observer
server.10=zk1.eu-central.example.com:2185:2186:observer
server.11=zk2.eu-central.example.com:2185:2186:observer
server.12=zk3.eu-central.example.com:2185:2186:observer
server.13=zk1.ap-south.example.com:2185:2186:observer
server.14=zk2.ap-south.example.com:2185:2186:observer
server.15=zk3.ap-south.example.com:2185:2186:observer

```

Additionally, ZK observers need to have:

```properties

peerType=observer

```

##### Start the service

Once your configuration store configuration is in place, you can start up the service using [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon):

```shell

$ bin/pulsar-daemon start configuration-store

```

### ZooKeeper configuration

In Pulsar, ZooKeeper configuration is handled by two separate configuration files in the `conf` directory of your Pulsar installation: `conf/zookeeper.conf` for [local ZooKeeper](#local-zookeeper) and `conf/global-zookeeper.conf` for the [configuration store](#configuration-store).
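Before digging into the two files, it can be useful to sanity-check that an ensemble is serving requests at all. The following is a minimal sketch using the plain ZooKeeper Java client; the connect string and port are placeholders matching the examples above:

```java

import org.apache.zookeeper.ZooKeeper;

public class ConfigStoreCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder connect string; use your configuration store's
        // client port (2184 in the examples above).
        ZooKeeper zk = new ZooKeeper("zk1.us-west.example.com:2184", 15000, event -> {});
        try {
            // Listing the root znodes confirms the ensemble answers requests.
            zk.getChildren("/", false).forEach(System.out::println);
        } finally {
            zk.close();
        }
    }
}

```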
#### Local ZooKeeper

The [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file handles the configuration for local ZooKeeper. The table below shows the available parameters:

|Name|Description|Default|
|---|---|---|
|tickTime| The tick is the basic unit of time in ZooKeeper, measured in milliseconds and used to regulate things like heartbeats and timeouts. tickTime is the length of a single tick. |2000|
|initLimit| The maximum time, in ticks, that the leader ZooKeeper server allows follower ZooKeeper servers to successfully connect and sync. The tick time is set in milliseconds using the tickTime parameter. |10|
|syncLimit| The maximum time, in ticks, that a follower ZooKeeper server is allowed to sync with other ZooKeeper servers. The tick time is set in milliseconds using the tickTime parameter. |5|
|dataDir| The location where ZooKeeper stores in-memory database snapshots as well as the transaction log of updates to the database. |data/zookeeper|
|clientPort| The port on which the ZooKeeper server listens for connections. |2181|
|autopurge.snapRetainCount| In ZooKeeper, auto purge determines how many recent snapshots of the database stored in dataDir to retain within the time interval specified by autopurge.purgeInterval (while deleting the rest). |3|
|autopurge.purgeInterval| The time interval, in hours, that triggers the ZooKeeper database purge task. Setting this to a non-zero number enables auto purge; setting it to 0 disables it. Read this guide before enabling auto purge. |1|
|maxClientCnxns| The maximum number of client connections. Increase this if you need to handle more ZooKeeper clients. |60|


#### Configuration Store

The [`conf/global-zookeeper.conf`](reference-configuration.md#configuration-store) file handles the configuration for the configuration store. The table below shows the available parameters:


## BookKeeper

BookKeeper stores all durable messages in Pulsar. BookKeeper is a distributed [write-ahead log](https://en.wikipedia.org/wiki/Write-ahead_logging) (WAL) system that guarantees read consistency of independent message logs, called ledgers. Individual BookKeeper servers are also called *bookies*.

> To manage message persistence, retention, and expiry in Pulsar, refer to the [cookbook](cookbooks-retention-expiry.md).

### Hardware requirements

Bookie hosts store message data on disk. To provide optimal performance, ensure that the bookies have a suitable hardware configuration. The following are two key dimensions of bookie hardware capacity:

- Disk I/O capacity (read/write)
- Storage capacity

Message entries written to bookies are always synced to disk before returning an acknowledgement to the Pulsar broker by default. To ensure low write latency, BookKeeper is designed to use multiple devices:

- A **journal** to ensure durability. For sequential writes, it is critical to have fast [fsync](https://linux.die.net/man/2/fsync) operations on bookie hosts. Typically, small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) should suffice, or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache. Both solutions can reach fsync latency of ~0.4 ms.
- A **ledger storage device** to store data. Writes happen in the background, so write I/O is not a big concern. Reads happen sequentially most of the time, and the backlog is read from disk only when consumers fall behind.
To store large amounts of data, a typical configuration involves multiple HDDs with a RAID controller.

### Configure BookKeeper

You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. When you configure each bookie, ensure that the [`zkServers`](reference-configuration.md#bookkeeper-zkServers) parameter is set to the connection string for the local ZooKeeper of the Pulsar cluster.

The minimum configuration changes required in `conf/bookkeeper.conf` are as follows:

:::note

Set `journalDirectory` and `ledgerDirectories` carefully. It is difficult to change them later.

:::

```properties

# Change to point to journal disk mount point
journalDirectory=data/bookkeeper/journal

# Point to ledger storage disk mount point
ledgerDirectories=data/bookkeeper/ledgers

# Point to local ZK quorum
zkServers=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181

# It is recommended to set this parameter. Otherwise, BookKeeper can't start normally in certain environments (for example, Huawei Cloud).
advertisedAddress=

```

To change the ZooKeeper root path that BookKeeper uses, use `zkLedgersRootPath=/MY-PREFIX/ledgers` instead of `zkServers=localhost:2181/MY-PREFIX`.

> For more information about BookKeeper, refer to the official [BookKeeper docs](http://bookkeeper.apache.org).

### Deploy BookKeeper

BookKeeper provides [persistent message storage](concepts-architecture-overview.md#persistent-storage) for Pulsar. Each Pulsar broker has its own cluster of bookies. The BookKeeper cluster shares a local ZooKeeper quorum with the Pulsar cluster.

### Start bookies manually

You can start a bookie in the foreground or as a background daemon.

To start a bookie in the foreground, use the [`bookkeeper`](reference-cli-tools.md#bookkeeper) CLI tool:

```bash

$ bin/bookkeeper bookie

```

To start a bookie in the background, use the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:

```bash

$ bin/pulsar-daemon start bookie

```

You can verify whether a bookie works properly with the `bookiesanity` command of the [BookKeeper shell](reference-cli-tools.md#bookkeeper-shell):

```shell

$ bin/bookkeeper shell bookiesanity

```

This command creates a new ledger on the local bookie, writes a few entries, reads them back, and finally deletes the ledger.

### Decommission bookies cleanly

Before you decommission a bookie, check your environment and meet the following requirements.

1. Ensure the state of your cluster supports decommissioning the target bookie. Check that `EnsembleSize >= Write Quorum >= Ack Quorum` still holds with one fewer bookie.

2. Ensure the target bookie shows up in the output of the `listbookies` command.

3. Ensure that no other process is ongoing (upgrade, etc.).

Then you can decommission bookies safely. To decommission bookies, complete the following steps.

1. Log in to the bookie node and check whether there are underreplicated ledgers. The decommission command forces the underreplicated ledgers to be replicated.
`$ bin/bookkeeper shell listunderreplicated`

2. Stop the bookie by killing the bookie process. If you deploy bookies in a Kubernetes environment, make sure no liveness/readiness probes are set up that would spin them back up.

3. Run the decommission command.
   - If you have logged in to the node to be decommissioned, you do not need to provide `-bookieid`.
   - If you are running the decommission command for the target bookie node from another bookie node, mention the target bookie ID in the `-bookieid` argument.
   `$ bin/bookkeeper shell decommissionbookie`
   or
   `$ bin/bookkeeper shell decommissionbookie -bookieid <bookieid>`

4. Validate that no ledgers remain on the decommissioned bookie.
`$ bin/bookkeeper shell listledgers -bookieid <bookieid>`

You can run the following command to check whether the bookie you have decommissioned is still listed in the bookies list:

```bash

./bookkeeper shell listbookies -rw -h
./bookkeeper shell listbookies -ro -h

```

## BookKeeper persistence policies

In Pulsar, you can set *persistence policies* at the namespace level, which determine how BookKeeper handles persistent storage of messages. Policies determine four things:

* The number of acks (guaranteed copies) to wait for on each ledger entry.
* The number of bookies to use for a topic.
* The number of writes to make for each ledger entry.
* The throttling rate for mark-delete operations.

### Set persistence policies

You can set persistence policies for BookKeeper at the [namespace](reference-terminology.md#namespace) level.

#### Pulsar-admin

Use the [`set-persistence`](reference-pulsar-admin.md#namespaces-set-persistence) subcommand and specify a namespace as well as any policies that you want to apply. The available flags are:

Flag | Description | Default
:----|:------------|:-------
`-a`, `--bookkeeper-ack-quorum` | The number of acks (guaranteed copies) to wait on for each entry | 0
`-e`, `--bookkeeper-ensemble` | The number of [bookies](reference-terminology.md#bookie) to use for topics in the namespace | 0
`-w`, `--bookkeeper-write-quorum` | The number of writes to make for each entry | 0
`-r`, `--ml-mark-delete-max-rate` | Throttling rate for mark-delete operations (0 means no throttle) | 0

The following is an example:

```shell

$ pulsar-admin namespaces set-persistence my-tenant/my-ns \
  --bookkeeper-ack-quorum 3 \
  --bookkeeper-ensemble 2

```

#### REST API

{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/setPersistence?version=@pulsar:version_number@}

#### Java

```java

int bkEnsemble = 2;
int bkQuorum = 3;
int bkAckQuorum = 2;
double markDeleteRate = 0.7;
PersistencePolicies policies =
    new PersistencePolicies(bkEnsemble, bkQuorum, bkAckQuorum, markDeleteRate);
admin.namespaces().setPersistence(namespace, policies);

```

### List persistence policies

You can see which persistence policy currently applies to a namespace.

#### Pulsar-admin

Use the [`get-persistence`](reference-pulsar-admin.md#namespaces-get-persistence) subcommand and specify the namespace.

The following is an example:

```shell

$ pulsar-admin namespaces get-persistence my-tenant/my-ns
{
  "bookkeeperEnsemble": 1,
  "bookkeeperWriteQuorum": 1,
  "bookkeeperAckQuorum": 1,
  "managedLedgerMaxMarkDeleteRate": 0
}

```

#### REST API

{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/getPersistence?version=@pulsar:version_number@}

#### Java

```java

PersistencePolicies policies = admin.namespaces().getPersistence(namespace);

```

## How Pulsar uses ZooKeeper and BookKeeper

This diagram illustrates the role of ZooKeeper and BookKeeper in a Pulsar cluster:

![ZooKeeper and BookKeeper](/assets/pulsar-system-architecture.png)

Each Pulsar cluster consists of one or more message brokers.
Each broker relies on an ensemble of bookies. diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/client-libraries-cgo.md b/site2/website/versioned_docs/version-2.8.1-deprecated/client-libraries-cgo.md deleted file mode 100644 index f352f942b77144..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/client-libraries-cgo.md +++ /dev/null @@ -1,579 +0,0 @@ ---- -id: client-libraries-cgo -title: Pulsar CGo client -sidebar_label: "CGo(deprecated)" -original_id: client-libraries-cgo ---- - -You can use Pulsar Go client to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Go (aka Golang). - -All the methods in [producers](#producers), [consumers](#consumers), and [readers](#readers) of a Go client are thread-safe. - -Currently, the following Go clients are maintained in two repositories. - -| Language | Project | Maintainer | License | Description | -|----------|---------|------------|---------|-------------| -| CGo | [pulsar-client-go](https://github.com/apache/pulsar/tree/master/pulsar-client-go) | [Apache Pulsar](https://github.com/apache/pulsar) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | CGo client that depends on C++ client library | -| Go | [pulsar-client-go](https://github.com/apache/pulsar-client-go) | [Apache Pulsar](https://github.com/apache/pulsar) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | A native golang client | - -> **API docs available as well** -> For standard API docs, consult the [Godoc](https://godoc.org/github.com/apache/pulsar/pulsar-client-go/pulsar). - -## Installation - -### Requirements - -Pulsar Go client library is based on the C++ client library. Follow -the instructions for [C++ library](client-libraries-cpp.md) for installing the binaries through [RPM](client-libraries-cpp.md#rpm), [Deb](client-libraries-cpp.md#deb) or [Homebrew packages](client-libraries-cpp.md#macos). - -### Install go package - -> **Compatibility Warning** -> The version number of the Go client **must match** the version number of the Pulsar C++ client library. - -You can install the `pulsar` library locally using `go get`. Note that `go get` doesn't support fetching a specific tag - it will always pull in master's version of the Go client. You'll need a C++ client library that matches master. - -```bash - -$ go get -u github.com/apache/pulsar/pulsar-client-go/pulsar - -``` - -Or you can use [dep](https://github.com/golang/dep) for managing the dependencies. - -```bash - -$ dep ensure -add github.com/apache/pulsar/pulsar-client-go/pulsar@v@pulsar:version@ - -``` - -Once installed locally, you can import it into your project: - -```go - -import "github.com/apache/pulsar/pulsar-client-go/pulsar" - -``` - -## Connection URLs - -To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL. - -Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme and have a default port of 6650. 
Here's an example for `localhost`: - -```http - -pulsar://localhost:6650 - -``` - -A URL for a production Pulsar cluster may look something like this: - -```http - -pulsar://pulsar.us-west.example.com:6650 - -``` - -If you're using [TLS](security-tls-authentication.md) authentication, the URL will look like something like this: - -```http - -pulsar+ssl://pulsar.us-west.example.com:6651 - -``` - -## Create a client - -In order to interact with Pulsar, you'll first need a `Client` object. You can create a client object using the `NewClient` function, passing in a `ClientOptions` object (more on configuration [below](#client-configuration)). Here's an example: - -```go - -import ( - "log" - "runtime" - - "github.com/apache/pulsar/pulsar-client-go/pulsar" -) - -func main() { - client, err := pulsar.NewClient(pulsar.ClientOptions{ - URL: "pulsar://localhost:6650", - OperationTimeoutSeconds: 5, - MessageListenerThreads: runtime.NumCPU(), - }) - - if err != nil { - log.Fatalf("Could not instantiate Pulsar client: %v", err) - } -} - -``` - -The following configurable parameters are available for Pulsar clients: - -Parameter | Description | Default -:---------|:------------|:------- -`URL` | The connection URL for the Pulsar cluster. See [above](#urls) for more info | -`IOThreads` | The number of threads to use for handling connections to Pulsar [brokers](reference-terminology.md#broker) | 1 -`OperationTimeoutSeconds` | The timeout for some Go client operations (creating producers, subscribing to and unsubscribing from [topics](reference-terminology.md#topic)). Retries will occur until this threshold is reached, at which point the operation will fail. | 30 -`MessageListenerThreads` | The number of threads used by message listeners ([consumers](#consumers) and [readers](#readers)) | 1 -`ConcurrentLookupRequests` | The number of concurrent lookup requests that can be sent on each broker connection. Setting a maximum helps to keep from overloading brokers. You should set values over the default of 5000 only if the client needs to produce and/or subscribe to thousands of Pulsar topics. | 5000 -`Logger` | A custom logger implementation for the client (as a function that takes a log level, file path, line number, and message). All info, warn, and error messages will be routed to this function. | `nil` -`TLSTrustCertsFilePath` | The file path for the trusted TLS certificate | -`TLSAllowInsecureConnection` | Whether the client accepts untrusted TLS certificates from the broker | `false` -`Authentication` | Configure the authentication provider. (default: no authentication). Example: `Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem")` | `nil` -`StatsIntervalInSeconds` | The interval (in seconds) at which client stats are published | 60 - -## Producers - -Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Go producers using a `ProducerOptions` object. 
Here's an example:
-
-```go
-
-producer, err := client.CreateProducer(pulsar.ProducerOptions{
-    Topic: "my-topic",
-})
-
-if err != nil {
-    log.Fatalf("Could not instantiate Pulsar producer: %v", err)
-}
-
-defer producer.Close()
-
-msg := pulsar.ProducerMessage{
-    Payload: []byte("Hello, Pulsar"),
-}
-
-if err := producer.Send(context.Background(), msg); err != nil {
-    log.Fatalf("Producer could not send message: %v", err)
-}
-
-```
-
-> **Blocking operation**
-> When you create a new Pulsar producer, the operation will block (waiting on a go channel) until either a producer is successfully created or an error is thrown.
-
-
-### Producer operations
-
-Pulsar Go producers have the following methods available:
-
-Method | Description | Return type
-:------|:------------|:-----------
-`Topic()` | Fetches the producer's [topic](reference-terminology.md#topic) | `string`
-`Name()` | Fetches the producer's name | `string`
-`Send(context.Context, ProducerMessage)` | Publishes a [message](#messages) to the producer's topic. This call blocks until the message is successfully acknowledged by the Pulsar broker, or an error is thrown if the timeout set using the `SendTimeout` in the producer's [configuration](#producer-configuration) is exceeded. | `error`
-`SendAndGetMsgID(context.Context, ProducerMessage)` | Sends a message and blocks until it is successfully acknowledged by the Pulsar broker. | `(MessageID, error)`
-`SendAsync(context.Context, ProducerMessage, func(ProducerMessage, error))` | Publishes a [message](#messages) to the producer's topic asynchronously. The third argument is a callback function that is invoked either when the message is acknowledged or when an error occurs. |
-`SendAndGetMsgIDAsync(context.Context, ProducerMessage, func(MessageID, error))` | Sends a message in asynchronous mode. The callback reports the ID of the published message and any error that occurred while publishing. |
-`LastSequenceID()` | Gets the last sequence ID published by this producer. This represents either the automatically assigned or custom sequence ID (set on the `ProducerMessage`) that was published and acknowledged by the broker. | `int64`
-`Flush()` | Flushes all the messages buffered in the client and waits until all messages have been successfully persisted. | `error`
-`Close()` | Closes the producer and releases all resources allocated to it. Once `Close()` is called, no more messages are accepted from the publisher. This method blocks until all pending publish requests have been persisted by Pulsar. If an error is thrown, no pending writes will be retried.
| `error`
-`Schema()` | Returns the schema used by the producer, if one is set | `Schema`
-
-Here's a more involved example usage of a producer:
-
-```go
-
-import (
-    "context"
-    "fmt"
-    "log"
-
-    "github.com/apache/pulsar/pulsar-client-go/pulsar"
-)
-
-func main() {
-    // Instantiate a Pulsar client
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-        URL: "pulsar://localhost:6650",
-    })
-
-    if err != nil { log.Fatal(err) }
-
-    // Use the client to instantiate a producer
-    producer, err := client.CreateProducer(pulsar.ProducerOptions{
-        Topic: "my-topic",
-    })
-
-    if err != nil { log.Fatal(err) }
-
-    ctx := context.Background()
-
-    // Send 10 messages synchronously and 10 messages asynchronously
-    for i := 0; i < 10; i++ {
-        // Create a message
-        msg := pulsar.ProducerMessage{
-            Payload: []byte(fmt.Sprintf("message-%d", i)),
-        }
-
-        // Attempt to send the message
-        if err := producer.Send(ctx, msg); err != nil {
-            log.Fatal(err)
-        }
-
-        // Create a different message to send asynchronously
-        asyncMsg := pulsar.ProducerMessage{
-            Payload: []byte(fmt.Sprintf("async-message-%d", i)),
-        }
-
-        // Attempt to send the message asynchronously and handle the response
-        producer.SendAsync(ctx, asyncMsg, func(msg pulsar.ProducerMessage, err error) {
-            if err != nil { log.Fatal(err) }
-
-            fmt.Printf("message %s successfully published\n", string(msg.Payload))
-        })
-    }
-}
-
-```
-
-### Producer configuration
-
-Parameter | Description | Default
-:---------|:------------|:-------
-`Topic` | The Pulsar [topic](reference-terminology.md#topic) to which the producer will publish messages |
-`Name` | A name for the producer. If you don't explicitly assign a name, Pulsar will automatically generate a globally unique name that you can access later using the `Name()` method. If you choose to explicitly assign a name, it will need to be unique across *all* Pulsar clusters, otherwise the creation operation will throw an error. |
-`Properties` | Attach a set of application defined properties to the producer. These properties will be visible in the topic stats |
-`SendTimeout` | When publishing a message to a topic, the producer will wait for an acknowledgment from the responsible Pulsar [broker](reference-terminology.md#broker). If a message is not acknowledged within the threshold set by this parameter, an error will be thrown. If you set `SendTimeout` to -1, the timeout will be set to infinity (and thus removed). Removing the send timeout is recommended when using Pulsar's [message de-duplication](cookbooks-deduplication.md) feature. | 30 seconds
-`MaxPendingMessages` | The maximum size of the queue holding pending messages (i.e. messages waiting to receive an acknowledgment from the [broker](reference-terminology.md#broker)). By default, when the queue is full all calls to the `Send` and `SendAsync` methods will fail *unless* `BlockIfQueueFull` is set to `true`. |
-`MaxPendingMessagesAcrossPartitions` | Set the number of max pending messages across all the partitions. This setting will be used to lower the max pending messages for each partition `MaxPendingMessages(int)`, if the total exceeds the configured value. |
-`BlockIfQueueFull` | If set to `true`, the producer's `Send` and `SendAsync` methods will block when the outgoing message queue is full rather than failing and throwing an error (the size of that queue is dictated by the `MaxPendingMessages` parameter); if set to `false` (the default), `Send` and `SendAsync` operations will fail and throw a `ProducerQueueIsFullError` when the queue is full. | `false`
-`MessageRoutingMode` | The message routing logic (for producers on [partitioned topics](concepts-architecture-overview.md#partitioned-topics)). This logic is applied only when no key is set on messages. The available options are: round robin (`pulsar.RoundRobinDistribution`, the default), publishing all messages to a single partition (`pulsar.UseSinglePartition`), or a custom partitioning scheme (`pulsar.CustomPartition`). | `pulsar.RoundRobinDistribution`
-`HashingScheme` | The hashing function that determines the partition on which a particular message is published (partitioned topics only). The available options are: `pulsar.JavaStringHash` (the equivalent of `String.hashCode()` in Java), `pulsar.Murmur3_32Hash` (applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function), or `pulsar.BoostHash` (applies the hashing function from C++'s [Boost](https://www.boost.org/doc/libs/1_62_0/doc/html/hash.html) library) | `pulsar.JavaStringHash`
-`CompressionType` | The message data compression type used by the producer. The available options are [`LZ4`](https://github.com/lz4/lz4), [`ZLIB`](https://zlib.net/), [`ZSTD`](https://facebook.github.io/zstd/) and [`SNAPPY`](https://google.github.io/snappy/). | No compression
-`MessageRouter` | By default, Pulsar uses a round-robin routing scheme for [partitioned topics](cookbooks-partitioned.md). The `MessageRouter` parameter enables you to specify custom routing logic via a function that takes the Pulsar message and topic metadata as an argument and returns an integer (the index of the partition the message should be routed to), i.e. a function signature of `func(Message, TopicMetadata) int`. |
-`Batching` | Control whether automatic batching of messages is enabled for the producer. | false
-`BatchingMaxPublishDelay` | Set the time period within which the messages sent will be batched (default: 1ms) if batch messages are enabled. If set to a non-zero value, messages will be queued until this time interval elapses or the batch is full (see `BatchingMaxMessages`), whichever happens first. | 1ms
-`BatchingMaxMessages` | Set the maximum number of messages permitted in a batch. (default: 1000) If set to a value greater than 1, messages will be queued until this threshold is reached or the batch interval has elapsed. | 1000
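-
-Several of these options can be combined when creating a producer. The following is a minimal sketch; the field names mirror the parameters above, but treat the exact field types and constant names (for example, whether `BatchingMaxPublishDelay` takes a `time.Duration`, or the `pulsar.LZ4` constant) as assumptions rather than a definitive reference:
-
-```go
-
-producer, err := client.CreateProducer(pulsar.ProducerOptions{
-    Topic:                   "my-topic",
-    BlockIfQueueFull:        true,                  // block instead of failing when the queue is full
-    Batching:                true,                  // enable automatic batching
-    BatchingMaxPublishDelay: 10 * time.Millisecond, // assumed time.Duration: max wait before a batch is sent
-    BatchingMaxMessages:     500,                   // max number of messages per batch
-    CompressionType:         pulsar.LZ4,            // assumed constant name for LZ4 compression
-})
-
-if err != nil {
-    log.Fatalf("Could not instantiate Pulsar producer: %v", err)
-}
-
-defer producer.Close()
-
-```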
-
-## Consumers
-
-Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on those topics. You can [configure](#consumer-configuration) Go consumers using a `ConsumerOptions` object. Here's a basic example that uses channels:
-
-```go
-
-msgChannel := make(chan pulsar.ConsumerMessage)
-
-consumerOpts := pulsar.ConsumerOptions{
-    Topic:            "my-topic",
-    SubscriptionName: "my-subscription-1",
-    Type:             pulsar.Exclusive,
-    MessageChannel:   msgChannel,
-}
-
-consumer, err := client.Subscribe(consumerOpts)
-
-if err != nil {
-    log.Fatalf("Could not establish subscription: %v", err)
-}
-
-defer consumer.Close()
-
-for cm := range msgChannel {
-    msg := cm.Message
-
-    fmt.Printf("Message ID: %s\n", msg.ID())
-    fmt.Printf("Message value: %s\n", string(msg.Payload()))
-
-    consumer.Ack(msg)
-}
-
-```
-
-> **Blocking operation**
-> When you create a new Pulsar consumer, the operation will block (on a go channel) until either a consumer is successfully created or an error is thrown.
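-
-A single consumer is not limited to one topic. The `Topics` and `TopicsPattern` parameters listed in the [configuration](#consumer-configuration) table below let the same consumer subscribe to several topics at once; here is a minimal sketch, assuming `Topics` accepts a plain string slice:
-
-```go
-
-consumer, err := client.Subscribe(pulsar.ConsumerOptions{
-    Topics:           []string{"topic-1", "topic-2"}, // assumed []string field
-    SubscriptionName: "my-subscription-1",
-    Type:             pulsar.Shared,
-})
-
-if err != nil {
-    log.Fatalf("Could not establish subscription: %v", err)
-}
-
-defer consumer.Close()
-
-```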
-
-
-### Consumer operations
-
-Pulsar Go consumers have the following methods available:
-
-Method | Description | Return type
-:------|:------------|:-----------
-`Topic()` | Returns the consumer's [topic](reference-terminology.md#topic) | `string`
-`Subscription()` | Returns the consumer's subscription name | `string`
-`Unsubscribe()` | Unsubscribes the consumer from the assigned topic. Throws an error if the unsubscribe operation is unsuccessful. | `error`
-`Receive(context.Context)` | Receives a single message from the topic. This method blocks until a message is available. | `(Message, error)`
-`Ack(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) | `error`
-`AckID(MessageID)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID | `error`
-`AckCumulative(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message. The `AckCumulative` method will block until the ack has been sent to the broker. After that, the messages will *not* be redelivered to the consumer. Cumulative acking cannot be used with a [shared](concepts-messaging.md#shared) subscription type. | `error`
-`AckCumulativeID(MessageID)` | Acknowledges the reception of all the messages in the stream up to (and including) the provided message. This method will block until the acknowledgment has been sent to the broker. After that, the messages will not be re-delivered to this consumer. | `error`
-`Nack(Message)` | Acknowledges the failure to process a single message. | `error`
-`NackID(MessageID)` | Acknowledges the failure to process a single message by message ID. | `error`
-`Close()` | Closes the consumer, disabling its ability to receive messages from the broker | `error`
-`RedeliverUnackedMessages()` | Redelivers *all* unacknowledged messages on the topic. In [failover](concepts-messaging.md#failover) mode, this request is ignored if the consumer isn't active on the specified topic; in [shared](concepts-messaging.md#shared) mode, redelivered messages are distributed across all consumers connected to the topic. **Note**: this is a *non-blocking* operation that doesn't throw an error. |
-`Seek(msgID MessageID)` | Resets the subscription associated with this consumer to a specific message ID. The message ID can either be a specific message or represent the first or last messages in the topic. | `error`
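-
-For example, a subscription can be rewound so that the consumer re-reads the topic from the beginning. This sketch assumes that the `pulsar.EarliestMessage` message ID constant (used for readers [below](#readers)) is also accepted by `Seek`:
-
-```go
-
-// Rewind the subscription to the first message in the topic.
-if err := consumer.Seek(pulsar.EarliestMessage); err != nil {
-    log.Fatalf("Could not seek to the beginning of the topic: %v", err)
-}
-
-```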
-
-#### Receive example
-
-Here's an example usage of a Go consumer that uses the `Receive()` method to process incoming messages:
-
-```go
-
-import (
-    "context"
-    "log"
-
-    "github.com/apache/pulsar/pulsar-client-go/pulsar"
-)
-
-func main() {
-    // Instantiate a Pulsar client
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-        URL: "pulsar://localhost:6650",
-    })
-
-    if err != nil { log.Fatal(err) }
-
-    // Use the client object to instantiate a consumer
-    consumer, err := client.Subscribe(pulsar.ConsumerOptions{
-        Topic:            "my-golang-topic",
-        SubscriptionName: "sub-1",
-        Type:             pulsar.Exclusive,
-    })
-
-    if err != nil { log.Fatal(err) }
-
-    defer consumer.Close()
-
-    ctx := context.Background()
-
-    // Listen indefinitely on the topic
-    for {
-        msg, err := consumer.Receive(ctx)
-        if err != nil { log.Fatal(err) }
-
-        // Do something with the message
-        err = processMessage(msg)
-
-        if err == nil {
-            // Message processed successfully
-            consumer.Ack(msg)
-        } else {
-            // Failed to process the message
-            consumer.Nack(msg)
-        }
-    }
-}
-
-// processMessage is an application-specific handler (stub shown for completeness)
-func processMessage(msg pulsar.Message) error {
-    return nil
-}
-
-```
-
-### Consumer configuration
-
-Parameter | Description | Default
-:---------|:------------|:-------
-`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the consumer will establish a subscription and listen for messages |
-`Topics` | Specify a list of topics this consumer will subscribe on. Either a topic, a list of topics, or a topics pattern is required when subscribing |
-`TopicsPattern` | Specify a regular expression to subscribe to multiple topics under the same namespace. Either a topic, a list of topics, or a topics pattern is required when subscribing |
-`SubscriptionName` | The subscription name for this consumer |
-`Properties` | Attach a set of application defined properties to the consumer. These properties will be visible in the topic stats |
-`Name` | The name of the consumer |
-`AckTimeout` | Set the timeout for unacked messages | 0
-`NackRedeliveryDelay` | The delay after which to redeliver the messages that failed to be processed. Default is 1min. (See `Consumer.Nack()`) | 1 minute
-`Type` | Available options are `Exclusive`, `Shared`, and `Failover` | `Exclusive`
-`SubscriptionInitPos` | The initial position of the cursor when the subscription is first created | `Latest`
-`MessageChannel` | The Go channel used by the consumer. Messages that arrive from the Pulsar topic(s) will be passed to this channel. |
-`ReceiverQueueSize` | Sets the size of the consumer's receiver queue, i.e. the number of messages that can be accumulated by the consumer before the application calls `Receive`. A value higher than the default of 1000 could increase consumer throughput, though at the expense of more memory utilization. | 1000
-`MaxTotalReceiverQueueSizeAcrossPartitions` | Set the max total receiver queue size across partitions. This setting will be used to reduce the receiver queue size for individual partitions if the total exceeds this value | 50000
-`ReadCompacted` | If enabled, the consumer will read messages from the compacted topic rather than reading the full message backlog of the topic. This means that, if the topic has been compacted, the consumer will only see the latest value for each key in the topic, up until the point in the topic message backlog that has been compacted. Beyond that point, the messages will be sent as normal. |
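-
-As with producers, these options are plain fields on `ConsumerOptions`. A minimal tuning sketch follows; treat the exact field types (for example, whether `AckTimeout` is a `time.Duration`) as assumptions:
-
-```go
-
-consumer, err := client.Subscribe(pulsar.ConsumerOptions{
-    Topic:             "my-golang-topic",
-    SubscriptionName:  "sub-1",
-    Type:              pulsar.Shared,
-    AckTimeout:        time.Minute, // assumed time.Duration; unacked messages are redelivered after this
-    ReceiverQueueSize: 2000,        // larger queue for higher throughput, at the cost of memory
-})
-
-```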
-
-## Readers
-
-Pulsar readers process messages from Pulsar topics. Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recent unacked message). You can [configure](#reader-configuration) Go readers using a `ReaderOptions` object. Here's an example:
-
-```go
-
-reader, err := client.CreateReader(pulsar.ReaderOptions{
-    Topic:          "my-golang-topic",
-    StartMessageID: pulsar.LatestMessage,
-})
-
-```
-
-> **Blocking operation**
-> When you create a new Pulsar reader, the operation will block (on a go channel) until either a reader is successfully created or an error is thrown.
-
-
-### Reader operations
-
-Pulsar Go readers have the following methods available:
-
-Method | Description | Return type
-:------|:------------|:-----------
-`Topic()` | Returns the reader's [topic](reference-terminology.md#topic) | `string`
-`Next(context.Context)` | Receives the next message on the topic (analogous to the `Receive` method for [consumers](#consumer-operations)). This method blocks until a message is available. | `(Message, error)`
-`HasNext()` | Checks if there is any message available to read from the current position | `(bool, error)`
-`Close()` | Closes the reader, disabling its ability to receive messages from the broker | `error`
-
-#### "Next" example
-
-Here's an example usage of a Go reader that uses the `Next()` method to process incoming messages:
-
-```go
-
-import (
-    "context"
-    "log"
-
-    "github.com/apache/pulsar/pulsar-client-go/pulsar"
-)
-
-func main() {
-    // Instantiate a Pulsar client
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-        URL: "pulsar://localhost:6650",
-    })
-
-    if err != nil { log.Fatalf("Could not create client: %v", err) }
-
-    // Use the client to instantiate a reader
-    reader, err := client.CreateReader(pulsar.ReaderOptions{
-        Topic:          "my-golang-topic",
-        StartMessageID: pulsar.EarliestMessage,
-    })
-
-    if err != nil { log.Fatalf("Could not create reader: %v", err) }
-
-    defer reader.Close()
-
-    ctx := context.Background()
-
-    // Listen on the topic for incoming messages
-    for {
-        msg, err := reader.Next(ctx)
-        if err != nil { log.Fatalf("Error reading from topic: %v", err) }
-
-        // Process the message
-    }
-}
-
-```
-
-In the example above, the reader begins reading from the earliest available message (specified by `pulsar.EarliestMessage`). The reader can also begin reading from the latest message (`pulsar.LatestMessage`) or some other message ID specified by bytes using the `DeserializeMessageID` function, which takes a byte array and returns a `MessageID` object. Here's an example:
-
-```go
-
-var lastSavedId []byte // Read the last saved message ID from an external store
-
-reader, err := client.CreateReader(pulsar.ReaderOptions{
-    Topic:          "my-golang-topic",
-    StartMessageID: pulsar.DeserializeMessageID(lastSavedId),
-})
-
-```
-
-### Reader configuration
-
-Parameter | Description | Default
-:---------|:------------|:-------
-`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the reader will establish a subscription and listen for messages |
-`Name` | The name of the reader |
-`StartMessageID` | The initial reader position, i.e. the message at which the reader begins processing messages. The options are `pulsar.EarliestMessage` (the earliest available message on the topic), `pulsar.LatestMessage` (the latest available message on the topic), or a `MessageID` object for a position that isn't earliest or latest. |
-`MessageChannel` | The Go channel used by the reader. Messages that arrive from the Pulsar topic(s) will be passed to this channel. |
-`ReceiverQueueSize` | Sets the size of the reader's receiver queue, i.e. the number of messages that can be accumulated by the reader before the application calls `Next`. A value higher than the default of 1000 could increase reader throughput, though at the expense of more memory utilization. | 1000
-`SubscriptionRolePrefix` | The subscription role prefix. | `reader`
-`ReadCompacted` | If enabled, the reader will read messages from the compacted topic rather than reading the full message backlog of the topic. This means that, if the topic has been compacted, the reader will only see the latest value for each key in the topic, up until the point in the topic message backlog that has been compacted. Beyond that point, the messages will be sent as normal. |
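-
-For instance, a reader over a compacted topic only needs one extra flag; a minimal sketch, assuming `ReadCompacted` is a plain `bool` field:
-
-```go
-
-reader, err := client.CreateReader(pulsar.ReaderOptions{
-    Topic:          "my-golang-topic",
-    StartMessageID: pulsar.EarliestMessage,
-    ReadCompacted:  true, // only the latest value per key for the compacted portion of the topic
-})
-
-```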
-
-## Messages
-
-The Pulsar Go client provides a `ProducerMessage` interface that you can use to construct messages to produce on Pulsar topics. Here's an example message:
-
-```go
-
-msg := pulsar.ProducerMessage{
-    Payload: []byte("Here is some message data"),
-    Key: "message-key",
-    Properties: map[string]string{
-        "foo": "bar",
-    },
-    EventTime: time.Now(),
-    ReplicationClusters: []string{"cluster1", "cluster3"},
-}
-
-if err := producer.Send(context.Background(), msg); err != nil {
-    log.Fatalf("Could not publish message due to: %v", err)
-}
-
-```
-
-The following parameters are available for `ProducerMessage` objects:
-
-Parameter | Description
-:---------|:-----------
-`Payload` | The actual data payload of the message
-`Value` | The value of the message. `Value` and `Payload` are mutually exclusive; use `Value interface{}` for schema-based messages.
-`Key` | The optional key associated with the message (particularly useful for things like topic compaction)
-`Properties` | A key-value map (both keys and values must be strings) for any application-specific metadata attached to the message
-`EventTime` | The timestamp associated with the message
-`ReplicationClusters` | The clusters to which this message will be replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default.
-`SequenceID` | Set the sequence ID to assign to the current message
-
-## TLS encryption and authentication
-
-In order to use [TLS encryption](security-tls-transport.md), you'll need to configure your client to do so:
-
- * Use `pulsar+ssl` URL type
- * Set `TLSTrustCertsFilePath` to the path to the TLS certs used by your client and the Pulsar broker
- * Configure `Authentication` option
-
-Here's an example:
-
-```go
-
-opts := pulsar.ClientOptions{
-    URL: "pulsar+ssl://my-cluster.com:6651",
-    TLSTrustCertsFilePath: "/path/to/certs/my-cert.csr",
-    Authentication: pulsar.NewAuthenticationTLS("my-cert.pem", "my-key.pem"),
-}
-
-```
-
-## Schema
-
-This example shows how to create a producer and consumer with a schema.
- -```go - -var exampleSchemaDef = "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," + - "\"fields\":[{\"name\":\"ID\",\"type\":\"int\"},{\"name\":\"Name\",\"type\":\"string\"}]}" -jsonSchema := NewJsonSchema(exampleSchemaDef, nil) -// create producer -producer, err := client.CreateProducerWithSchema(ProducerOptions{ - Topic: "jsonTopic", -}, jsonSchema) -err = producer.Send(context.Background(), ProducerMessage{ - Value: &testJson{ - ID: 100, - Name: "pulsar", - }, -}) -if err != nil { - log.Fatal(err) -} -defer producer.Close() -//create consumer -var s testJson -consumerJS := NewJsonSchema(exampleSchemaDef, nil) -consumer, err := client.SubscribeWithSchema(ConsumerOptions{ - Topic: "jsonTopic", - SubscriptionName: "sub-2", -}, consumerJS) -if err != nil { - log.Fatal(err) -} -msg, err := consumer.Receive(context.Background()) -if err != nil { - log.Fatal(err) -} -err = msg.GetValue(&s) -if err != nil { - log.Fatal(err) -} -fmt.Println(s.ID) // output: 100 -fmt.Println(s.Name) // output: pulsar -defer consumer.Close() - -``` - diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/client-libraries-cpp.md b/site2/website/versioned_docs/version-2.8.1-deprecated/client-libraries-cpp.md deleted file mode 100644 index 455cf02116d502..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/client-libraries-cpp.md +++ /dev/null @@ -1,708 +0,0 @@ ---- -id: client-libraries-cpp -title: Pulsar C++ client -sidebar_label: "C++" -original_id: client-libraries-cpp ---- - -You can use Pulsar C++ client to create Pulsar producers and consumers in C++. - -All the methods in producer, consumer, and reader of a C++ client are thread-safe. - -## Supported platforms - -Pulsar C++ client is supported on **Linux** ,**MacOS** and **Windows** platforms. - -[Doxygen](http://www.doxygen.nl/)-generated API docs for the C++ client are available [here](/api/cpp). - -## System requirements - -You need to install the following components before using the C++ client: - -* [CMake](https://cmake.org/) -* [Boost](http://www.boost.org/) -* [Protocol Buffers](https://developers.google.com/protocol-buffers/) >= 3 -* [libcurl](https://curl.se/libcurl/) -* [Google Test](https://github.com/google/googletest) - -## Linux - -### Compilation - -1. Clone the Pulsar repository. - -```shell - -$ git clone https://github.com/apache/pulsar - -``` - -2. Install all necessary dependencies. - -```shell - -$ apt-get install cmake libssl-dev libcurl4-openssl-dev liblog4cxx-dev \ - libprotobuf-dev protobuf-compiler libboost-all-dev google-mock libgtest-dev libjsoncpp-dev - -``` - -3. Compile and install [Google Test](https://github.com/google/googletest). - -```shell - -# libgtest-dev version is 1.18.0 or above -$ cd /usr/src/googletest -$ sudo cmake . -$ sudo make -$ sudo cp ./googlemock/libgmock.a ./googlemock/gtest/libgtest.a /usr/lib/ - -# less than 1.18.0 -$ cd /usr/src/gtest -$ sudo cmake . -$ sudo make -$ sudo cp libgtest.a /usr/lib - -$ cd /usr/src/gmock -$ sudo cmake . -$ sudo make -$ sudo cp libgmock.a /usr/lib - -``` - -4. Compile the Pulsar client library for C++ inside the Pulsar repository. - -```shell - -$ cd pulsar-client-cpp -$ cmake . -$ make - -``` - -After you install the components successfully, the files `libpulsar.so` and `libpulsar.a` are in the `lib` folder of the repository. The tools `perfProducer` and `perfConsumer` are in the `perf` directory. - -### Install Dependencies - -> Since 2.1.0 release, Pulsar ships pre-built RPM and Debian packages. 
You can download and install those packages directly. - -After you download and install RPM or DEB, the `libpulsar.so`, `libpulsarnossl.so`, `libpulsar.a`, and `libpulsarwithdeps.a` libraries are in your `/usr/lib` directory. - -By default, they are built in code path `${PULSAR_HOME}/pulsar-client-cpp`. You can build with the command below. - - `cmake . -DBUILD_TESTS=OFF -DLINK_STATIC=ON && make pulsarShared pulsarSharedNossl pulsarStatic pulsarStaticWithDeps -j 3`. - -These libraries rely on some other libraries. If you want to get detailed version of dependencies, see [RPM](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/pkg/rpm/Dockerfile) or [DEB](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/pkg/deb/Dockerfile) files. - -1. `libpulsar.so` is a shared library, containing statically linked `boost` and `openssl`. It also dynamically links all other necessary libraries. You can use this Pulsar library with the command below. - -```bash - - g++ --std=c++11 PulsarTest.cpp -o test /usr/lib/libpulsar.so -I/usr/local/ssl/include - -``` - -2. `libpulsarnossl.so` is a shared library, similar to `libpulsar.so` except that the libraries `openssl` and `crypto` are dynamically linked. You can use this Pulsar library with the command below. - -```bash - - g++ --std=c++11 PulsarTest.cpp -o test /usr/lib/libpulsarnossl.so -lssl -lcrypto -I/usr/local/ssl/include -L/usr/local/ssl/lib - -``` - -3. `libpulsar.a` is a static library. You need to load dependencies before using this library. You can use this Pulsar library with the command below. - -```bash - - g++ --std=c++11 PulsarTest.cpp -o test /usr/lib/libpulsar.a -lssl -lcrypto -ldl -lpthread -I/usr/local/ssl/include -L/usr/local/ssl/lib -lboost_system -lboost_regex -lcurl -lprotobuf -lzstd -lz - -``` - -4. `libpulsarwithdeps.a` is a static library, based on `libpulsar.a`. It is archived in the dependencies of `libboost_regex`, `libboost_system`, `libcurl`, `libprotobuf`, `libzstd` and `libz`. You can use this Pulsar library with the command below. - -```bash - - g++ --std=c++11 PulsarTest.cpp -o test /usr/lib/libpulsarwithdeps.a -lssl -lcrypto -ldl -lpthread -I/usr/local/ssl/include -L/usr/local/ssl/lib - -``` - -The `libpulsarwithdeps.a` does not include library openssl related libraries `libssl` and `libcrypto`, because these two libraries are related to security. It is more reasonable and easier to use the versions provided by the local system to handle security issues and upgrade libraries. - -### Install RPM - -1. Download a RPM package from the links in the table. - -| Link | Crypto files | -|------|--------------| -| [client](@pulsar:dist_rpm:client@) | [asc](@pulsar:dist_rpm:client@.asc), [sha512](@pulsar:dist_rpm:client@.sha512) | -| [client-debuginfo](@pulsar:dist_rpm:client-debuginfo@) | [asc](@pulsar:dist_rpm:client-debuginfo@.asc), [sha512](@pulsar:dist_rpm:client-debuginfo@.sha512) | -| [client-devel](@pulsar:dist_rpm:client-devel@) | [asc](@pulsar:dist_rpm:client-devel@.asc), [sha512](@pulsar:dist_rpm:client-devel@.sha512) | - -2. Install the package using the following command. - -```bash - -$ rpm -ivh apache-pulsar-client*.rpm - -``` - -After you install RPM successfully, Pulsar libraries are in the `/usr/lib` directory. - -:::note - -If you get the error that `libpulsar.so: cannot open shared object file: No such file or directory` when starting Pulsar client, you may need to run `ldconfig` first. - -::: - -### Install Debian - -1. Download a Debian package from the links in the table. 
- -| Link | Crypto files | -|------|--------------| -| [client](@pulsar:deb:client@) | [asc](@pulsar:dist_deb:client@.asc), [sha512](@pulsar:dist_deb:client@.sha512) | -| [client-devel](@pulsar:deb:client-devel@) | [asc](@pulsar:dist_deb:client-devel@.asc), [sha512](@pulsar:dist_deb:client-devel@.sha512) | - -2. Install the package using the following command. - -```bash - -$ apt install ./apache-pulsar-client*.deb - -``` - -After you install DEB successfully, Pulsar libraries are in the `/usr/lib` directory. - -### Build - -> If you want to build RPM and Debian packages from the latest master, follow the instructions below. You should run all the instructions at the root directory of your cloned Pulsar repository. - -There are recipes that build RPM and Debian packages containing a -statically linked `libpulsar.so` / `libpulsarnossl.so` / `libpulsar.a` / `libpulsarwithdeps.a` with all required dependencies. - -To build the C++ library packages, you need to build the Java packages first. - -```shell - -mvn install -DskipTests - -``` - -#### RPM - -To build the RPM inside a Docker container, use the command below. The RPMs are in the `pulsar-client-cpp/pkg/rpm/RPMS/x86_64/` path. - -```shell - -pulsar-client-cpp/pkg/rpm/docker-build-rpm.sh - -``` - -| Package name | Content | -|-----|-----| -| pulsar-client | Shared library `libpulsar.so` and `libpulsarnossl.so` | -| pulsar-client-devel | Static library `libpulsar.a`, `libpulsarwithdeps.a`and C++ and C headers | -| pulsar-client-debuginfo | Debug symbols for `libpulsar.so` | - -#### Debian - -To build Debian packages, enter the following command. - -```shell - -pulsar-client-cpp/pkg/deb/docker-build-deb.sh - -``` - -Debian packages are created in the `pulsar-client-cpp/pkg/deb/BUILD/DEB/` path. - -| Package name | Content | -|-----|-----| -| pulsar-client | Shared library `libpulsar.so` and `libpulsarnossl.so` | -| pulsar-client-dev | Static library `libpulsar.a`, `libpulsarwithdeps.a` and C++ and C headers | - -## MacOS - -### Compilation - -1. Clone the Pulsar repository. - -```shell - -$ git clone https://github.com/apache/pulsar - -``` - -2. Install all necessary dependencies. - -```shell - -# OpenSSL installation -$ brew install openssl -$ export OPENSSL_INCLUDE_DIR=/usr/local/opt/openssl/include/ -$ export OPENSSL_ROOT_DIR=/usr/local/opt/openssl/ - -# Protocol Buffers installation -$ brew install protobuf boost boost-python log4cxx -# If you are using python3, you need to install boost-python3 - -# Google Test installation -$ git clone https://github.com/google/googletest.git -$ cd googletest -$ git checkout release-1.12.1 -$ cmake . -$ make install - -``` - -3. Compile the Pulsar client library in the repository that you cloned. - -```shell - -$ cd pulsar-client-cpp -$ cmake . -$ make - -``` - -### Install `libpulsar` - -Pulsar releases are available in the [Homebrew](https://brew.sh/) core repository. You can install the C++ client library with the following command. The package is installed with the library and headers. - -```shell - -brew install libpulsar - -``` - -## Windows (64-bit) - -### Compilation - -1. Clone the Pulsar repository. - -```shell - -$ git clone https://github.com/apache/pulsar - -``` - -2. Install all necessary dependencies. - -```shell - -cd ${PULSAR_HOME}/pulsar-client-cpp -vcpkg install --feature-flags=manifests --triplet x64-windows - -``` - -3. Build C++ libraries. 
-
-```shell
-
-cmake -B ./build -A x64 -DBUILD_PYTHON_WRAPPER=OFF -DBUILD_TESTS=OFF -DVCPKG_TRIPLET=x64-windows -DCMAKE_BUILD_TYPE=Release -S .
-cmake --build ./build --config Release
-
-```
-
-> **NOTE**
->
-> 1. For Windows 32-bit, you need to use `-A Win32` and `-DVCPKG_TRIPLET=x86-windows`.
-> 2. For MSVC Debug mode, you need to replace `Release` with `Debug` for both `CMAKE_BUILD_TYPE` variable and `--config` option.
-
-4. Client libraries are available in the following places.
-
-```
-
-${PULSAR_HOME}/pulsar-client-cpp/build/lib/Release/pulsar.lib
-${PULSAR_HOME}/pulsar-client-cpp/build/lib/Release/pulsar.dll
-
-```
-
-## Connection URLs
-
-To connect to Pulsar using client libraries, you need to specify a Pulsar protocol URL.
-
-Pulsar protocol URLs are assigned to specific clusters and use the `pulsar` URI scheme. The default port is `6650`. The following is an example for localhost.
-
-```http
-
-pulsar://localhost:6650
-
-```
-
-In a Pulsar cluster in production, the URL looks as follows.
-
-```http
-
-pulsar://pulsar.us-west.example.com:6650
-
-```
-
-If you use TLS authentication, you need to add `ssl`, and the default port is `6651`. The following is an example.
-
-```http
-
-pulsar+ssl://pulsar.us-west.example.com:6651
-
-```
-
-## Create a consumer
-
-To use Pulsar as a consumer, you need to create a consumer on the C++ client. There are two main ways of using the consumer:
-- [Blocking style](#blocking-example): synchronously calling `receive(msg)`.
-- [Non-blocking](#consumer-with-a-message-listener) (event based) style: using a message listener.
-
-### Blocking example
-
-The benefit of this approach is that it is the simplest code. It simply keeps calling `receive(msg)`, which blocks until a message is received.
-
-This example starts a subscription at the earliest offset and consumes 100 messages.
-
-```c++
-
-#include <pulsar/Client.h>
-
-using namespace pulsar;
-
-int main() {
-    Client client("pulsar://localhost:6650");
-
-    Consumer consumer;
-    ConsumerConfiguration config;
-    config.setSubscriptionInitialPosition(InitialPositionEarliest);
-    Result result = client.subscribe("persistent://public/default/my-topic", "consumer-1", config, consumer);
-    if (result != ResultOk) {
-        std::cout << "Failed to subscribe: " << result << std::endl;
-        return -1;
-    }
-
-    Message msg;
-    int ctr = 0;
-    // consume 100 messages
-    while (ctr < 100) {
-        consumer.receive(msg);
-        std::cout << "Received: " << msg
-                  << " with payload '" << msg.getDataAsString() << "'" << std::endl;
-
-        consumer.acknowledge(msg);
-        ctr++;
-    }
-
-    std::cout << "Finished consuming synchronously!" << std::endl;
-
-    client.close();
-    return 0;
-}
-
-```
-
-### Consumer with a message listener
-
-You can avoid running a loop with blocking calls with an event based style by using a message listener which is invoked for each message that is received.
-
-This example starts a subscription at the earliest offset and consumes 100 messages.
-
-```c++
-
-#include <pulsar/Client.h>
-#include <atomic>
-#include <chrono>
-#include <thread>
-
-using namespace pulsar;
-
-std::atomic<int> messagesReceived{0};
-
-void handleAckComplete(Result res) {
-    std::cout << "Ack res: " << res << std::endl;
-}
-
-void listener(Consumer consumer, const Message& msg) {
-    std::cout << "Got message " << msg << " with content '" << msg.getDataAsString() << "'" << std::endl;
-    messagesReceived++;
-    consumer.acknowledgeAsync(msg.getMessageId(), handleAckComplete);
-}
-
-int main() {
-    Client client("pulsar://localhost:6650");
-
-    Consumer consumer;
-    ConsumerConfiguration config;
-    config.setMessageListener(listener);
-    config.setSubscriptionInitialPosition(InitialPositionEarliest);
-    Result result = client.subscribe("persistent://public/default/my-topic", "consumer-1", config, consumer);
-    if (result != ResultOk) {
-        std::cout << "Failed to subscribe: " << result << std::endl;
-        return -1;
-    }
-
-    // wait for 100 messages to be consumed
-    while (messagesReceived < 100) {
-        std::this_thread::sleep_for(std::chrono::milliseconds(100));
-    }
-
-    std::cout << "Finished consuming asynchronously!" << std::endl;
-
-    client.close();
-    return 0;
-}
-
-```
-
-## Create a producer
-
-To use Pulsar as a producer, you need to create a producer on the C++ client. There are two main ways of using a producer:
-- [Blocking style](#simple-blocking-example): each call to `send` waits for an ack from the broker.
-- [Non-blocking asynchronous style](#non-blocking-example): `sendAsync` is called instead of `send` and a callback is supplied for when the ack is received from the broker.
-
-### Simple blocking example
-
-This example sends 100 messages using the blocking style. While simple, it does not produce high throughput as it waits for each ack to come back before sending the next message.
-
-```c++
-
-#include <pulsar/Client.h>
-#include <chrono>
-#include <thread>
-
-using namespace pulsar;
-
-int main() {
-    Client client("pulsar://localhost:6650");
-
-    Producer producer;
-    Result result = client.createProducer("persistent://public/default/my-topic", producer);
-    if (result != ResultOk) {
-        std::cout << "Error creating producer: " << result << std::endl;
-        return -1;
-    }
-
-    // Send 100 messages synchronously
-    int ctr = 0;
-    while (ctr < 100) {
-        std::string content = "msg" + std::to_string(ctr);
-        Message msg = MessageBuilder().setContent(content).setProperty("x", "1").build();
-        Result sendResult = producer.send(msg);
-        if (sendResult != ResultOk) {
-            std::cout << "The message " << content << " could not be sent, received code: " << sendResult << std::endl;
-        } else {
-            std::cout << "The message " << content << " sent successfully" << std::endl;
-        }
-
-        std::this_thread::sleep_for(std::chrono::milliseconds(100));
-        ctr++;
-    }
-
-    std::cout << "Finished producing synchronously!" << std::endl;
-
-    client.close();
-    return 0;
-}
-
-```
-
-```c++
-
-#include <pulsar/Client.h>
-#include <chrono>
-#include <functional>
-#include <thread>
-
-using namespace pulsar;
-
-std::atomic<int> acksReceived{0};
-
-void callback(Result code, const MessageId& msgId, std::string msgContent) {
-    // message processing logic here
-    std::cout << "Received ack for msg: " << msgContent << " with code: "
-              << code << " -- MsgID: " << msgId << std::endl;
-    acksReceived++;
-}
-
-int main() {
-    Client client("pulsar://localhost:6650");
-
-    ProducerConfiguration producerConf;
-    producerConf.setBlockIfQueueFull(true);
-    Producer producer;
-    Result result = client.createProducer("persistent://public/default/my-topic",
-                                          producerConf, producer);
-    if (result != ResultOk) {
-        std::cout << "Error creating producer: " << result << std::endl;
-        return -1;
-    }
-
-    // Send 100 messages asynchronously
-    int ctr = 0;
-    while (ctr < 100) {
-        std::string content = "msg" + std::to_string(ctr);
-        Message msg = MessageBuilder().setContent(content).setProperty("x", "1").build();
-        producer.sendAsync(msg, std::bind(callback,
-                           std::placeholders::_1, std::placeholders::_2, content));
-
-        std::this_thread::sleep_for(std::chrono::milliseconds(100));
-        ctr++;
-    }
-
-    // wait for 100 messages to be acked
-    while (acksReceived < 100) {
-        std::this_thread::sleep_for(std::chrono::milliseconds(100));
-    }
-
-    std::cout << "Finished producing asynchronously!" << std::endl;
-
-    client.close();
-    return 0;
-}
-
-```
-
-### Partitioned topics and lazy producers
-
-When scaling out a Pulsar topic, you may configure a topic to have hundreds of partitions. Likewise, you may have also scaled out your producers so there are hundreds or even thousands of producers. This can put some strain on the Pulsar brokers, because when you create a producer on a partitioned topic, internally it creates one internal producer per partition, which involves communications to the brokers for each one. So for a topic with 1000 partitions and 1000 producers, it ends up creating 1,000,000 internal producers across the producer applications, each of which has to communicate with a broker to find out which broker it should connect to and then perform the connection handshake.
-
-You can reduce the load caused by this combination of a large number of partitions and many producers by doing the following:
-- use SinglePartition partition routing mode (this ensures that all messages are only sent to a single, randomly selected partition)
-- use non-keyed messages (when messages are keyed, routing is based on the hash of the key and so messages will end up being sent to multiple partitions)
-- use lazy producers (this ensures that an internal producer is only created on demand when a message needs to be routed to a partition)
-
-With our example above, that reduces the number of internal producers spread out over the 1000 producer apps from 1,000,000 to just 1000.
-
-Note that there can be extra latency for the first message sent. If you set a low send timeout, this timeout could be reached if the initial connection handshake is slow to complete.
-
-```c++
-
-ProducerConfiguration producerConf;
-producerConf.setPartitionsRoutingMode(ProducerConfiguration::UseSinglePartition);
-producerConf.setLazyStartPartitionedProducers(true);
-
-```
-
-## Enable authentication in connection URLs
-
-If you use TLS authentication when connecting to Pulsar, you need to add `ssl` in the connection URLs, and the default port is `6651`. The following is an example.
-
-```cpp
-
-ClientConfiguration config = ClientConfiguration();
-config.setUseTls(true);
-config.setTlsTrustCertsFilePath("/path/to/cacert.pem");
-config.setTlsAllowInsecureConnection(false);
-config.setAuth(pulsar::AuthTls::create(
-    "/path/to/client-cert.pem", "/path/to/client-key.pem"));
-
-Client client("pulsar+ssl://my-broker.com:6651", config);
-
-```
-
-For complete examples, refer to [C++ client examples](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp/examples).
-
-## Schema
-
-This section describes some examples of using schemas. For more information about schemas, see [Pulsar schema](schema-get-started.md).
-
-### Avro schema
-
-- The following example shows how to create a producer with an Avro schema.
-
-  ```cpp
-
-  static const std::string exampleSchema =
-      "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\","
-      "\"fields\":[{\"name\":\"a\",\"type\":\"int\"},{\"name\":\"b\",\"type\":\"int\"}]}";
-  Producer producer;
-  ProducerConfiguration producerConf;
-  producerConf.setSchema(SchemaInfo(AVRO, "Avro", exampleSchema));
-  client.createProducer("topic-avro", producerConf, producer);
-
-  ```
-
-- The following example shows how to create a consumer with an Avro schema.
-
-  ```cpp
-
-  static const std::string exampleSchema =
-      "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\","
-      "\"fields\":[{\"name\":\"a\",\"type\":\"int\"},{\"name\":\"b\",\"type\":\"int\"}]}";
-  ConsumerConfiguration consumerConf;
-  Consumer consumer;
-  consumerConf.setSchema(SchemaInfo(AVRO, "Avro", exampleSchema));
-  client.subscribe("topic-avro", "sub-2", consumerConf, consumer);
-
-  ```
-
-### ProtobufNative schema
-
-The following example shows how to create a producer and a consumer with a ProtobufNative schema.
-
-1. Generate the `User` class using Protobuf3.
-
-   :::note
-
-   You need to use Protobuf3 or later versions.
-
-   :::
-
-   ```protobuf
-
-   syntax = "proto3";
-
-   message User {
-       string name = 1;
-       int32 age = 2;
-   }
-
-   ```
-
-2. Include `ProtobufNativeSchema.h` in your source code. Ensure the Protobuf dependency has been added to your project.
-
-   ```c++
-
-   #include <pulsar/ProtobufNativeSchema.h>
-
-   ```
-
-3. Create a producer to send a `User` instance.
-
-   ```c++
-
-   ProducerConfiguration producerConf;
-   producerConf.setSchema(createProtobufNativeSchema(User::GetDescriptor()));
-   Producer producer;
-   client.createProducer("topic-protobuf", producerConf, producer);
-   User user;
-   user.set_name("my-name");
-   user.set_age(10);
-   std::string content;
-   user.SerializeToString(&content);
-   producer.send(MessageBuilder().setContent(content).build());
-
-   ```
-
-4. Create a consumer to receive a `User` instance.
-
-   ```c++
-
-   ConsumerConfiguration consumerConf;
-   consumerConf.setSchema(createProtobufNativeSchema(User::GetDescriptor()));
-   consumerConf.setSubscriptionInitialPosition(InitialPositionEarliest);
-   Consumer consumer;
-   client.subscribe("topic-protobuf", "my-sub", consumerConf, consumer);
-   Message msg;
-   consumer.receive(msg);
-   User user2;
-   user2.ParseFromArray(msg.getData(), msg.getLength());
-
-   ```
-
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/client-libraries-dotnet.md b/site2/website/versioned_docs/version-2.8.1-deprecated/client-libraries-dotnet.md
deleted file mode 100644
index b574fa0b2e5ed8..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/client-libraries-dotnet.md
+++ /dev/null
@@ -1,434 +0,0 @@
----
-id: client-libraries-dotnet
-title: Pulsar C# client
-sidebar_label: "C#"
-original_id: client-libraries-dotnet
----
-
-You can use the Pulsar C# client (DotPulsar) to create Pulsar producers and consumers in C#. All the methods in the producer, consumer, and reader of a C# client are thread-safe. The official documentation for DotPulsar is available [here](https://github.com/apache/pulsar-dotpulsar/wiki).
-
-## Installation
-
-You can install the Pulsar C# client library either through the dotnet CLI or through Visual Studio. This section describes how to install the Pulsar C# client library through the dotnet CLI. For information about how to install the Pulsar C# client library through Visual Studio, see [here](https://docs.microsoft.com/en-us/visualstudio/mac/nuget-walkthrough?view=vsmac-2019).
-
-### Prerequisites
-
-Install the [.NET Core SDK](https://dotnet.microsoft.com/download/), which provides the dotnet command-line tool. Starting in Visual Studio 2017, the dotnet CLI is automatically installed with any .NET Core related workloads.
-
-### Procedures
-
-To install the Pulsar C# client library, follow these steps:
-
-1. Create a project.
-
-   1. Create a folder for the project.
-
-   2. Open a terminal window and switch to the new folder.
-
-   3. Create the project using the following command.
-
-      ```
-
-      dotnet new console
-
-      ```
-
-   4. Use `dotnet run` to test that the app has been created properly.
-
-2. Add the DotPulsar NuGet package.
-
-   1. Use the following command to install the `DotPulsar` package.
-
-      ```
-
-      dotnet add package DotPulsar
-
-      ```
-
-   2. After the command completes, open the `.csproj` file to see the added reference.
-
-      ```xml
-
-      <ItemGroup>
-        <PackageReference Include="DotPulsar" Version="..." />
-      </ItemGroup>
-
-      ```
-
-## Client
-
-This section describes some configuration examples for the Pulsar C# client.
-
-### Create client
-
-This example shows how to create a Pulsar C# client connected to localhost.
-
-```c#
-
-var client = PulsarClient.Builder().Build();
-
-```
-
-To create a Pulsar C# client by using the builder, you can specify the following options.
-
-| Option | Description | Default |
-| ---- | ---- | ---- |
-| ServiceUrl | Set the service URL for the Pulsar cluster. | pulsar://localhost:6650 |
-| RetryInterval | Set the time to wait before retrying an operation or a reconnection. | 3s |
-
-### Create producer
-
-This section describes how to create a producer.
-
-- Create a producer by using the builder.
-
-  ```c#
-
-  var producer = client.NewProducer()
-      .Topic("persistent://public/default/mytopic")
-      .Create();
-
-  ```
-
-- Create a producer without using the builder.
-
-  ```c#
-
-  var options = new ProducerOptions("persistent://public/default/mytopic");
-  var producer = client.CreateProducer(options);
-
-  ```
-
-### Create consumer
-
-This section describes how to create a consumer.
-
-- Create a consumer by using the builder.
-
-  ```c#
-
-  var consumer = client.NewConsumer()
-      .SubscriptionName("MySubscription")
-      .Topic("persistent://public/default/mytopic")
-      .Create();
-
-  ```
-
-- Create a consumer without using the builder.
-
-  ```c#
-
-  var options = new ConsumerOptions("MySubscription", "persistent://public/default/mytopic");
-  var consumer = client.CreateConsumer(options);
-
-  ```
-
-### Create reader
-
-This section describes how to create a reader.
-
-- Create a reader by using the builder.
-
-  ```c#
-
-  var reader = client.NewReader()
-      .StartMessageId(MessageId.Earliest)
-      .Topic("persistent://public/default/mytopic")
-      .Create();
-
-  ```
-
-- Create a reader without using the builder.
-
-  ```c#
-
-  var options = new ReaderOptions(MessageId.Earliest, "persistent://public/default/mytopic");
-  var reader = client.CreateReader(options);
-
-  ```
-
-### Configure encryption policies
-
-The Pulsar C# client supports four kinds of encryption policies:
-
-- `EnforceUnencrypted`: always use unencrypted connections.
-- `EnforceEncrypted`: always use encrypted connections.
-- `PreferUnencrypted`: use unencrypted connections, if possible.
-- `PreferEncrypted`: use encrypted connections, if possible.
-
-This example shows how to set the `EnforceEncrypted` encryption policy.
-
-```c#
-
-var client = PulsarClient.Builder()
-    .ConnectionSecurity(EncryptionPolicy.EnforceEncrypted)
-    .Build();
-
-```
-
-### Configure authentication
-
-Currently, the Pulsar C# client supports TLS (Transport Layer Security) and JWT (JSON Web Token) authentication.
-
-If you have followed [Authentication using TLS](security-tls-authentication.md), you have a certificate and a key. To use them from the Pulsar C# client, follow these steps:
-
-1. Create an unencrypted and password-less pfx file.
-
-   ```shell
-
-   openssl pkcs12 -export -keypbe NONE -certpbe NONE -out admin.pfx -inkey admin.key.pem -in admin.cert.pem -passout pass:
-
-   ```
-
-2. Use the admin.pfx file to create an X509Certificate2 and pass it to the Pulsar C# client.
-
-   ```c#
-
-   var clientCertificate = new X509Certificate2("admin.pfx");
-   var client = PulsarClient.Builder()
-       .AuthenticateUsingClientCertificate(clientCertificate)
-       .Build();
-
-   ```
-
-## Producer
-
-A producer is a process that attaches to a topic and publishes messages to a Pulsar broker for processing. This section describes some configuration examples for the producer.
-
-### Send data
-
-This example shows how to send data.
-
-```c#
-
-var data = Encoding.UTF8.GetBytes("Hello World");
-await producer.Send(data);
-
-```
-
-### Send messages with customized metadata
-
-- Send messages with customized metadata by using the builder.
-
-  ```c#
-
-  var data = Encoding.UTF8.GetBytes("Hello World");
-  var messageId = await producer.NewMessage()
-      .Property("SomeKey", "SomeValue")
-      .Send(data);
-
-  ```
-
-- Send messages with customized metadata without using the builder.
-
-  ```c#
-
-  var data = Encoding.UTF8.GetBytes("Hello World");
-  var metadata = new MessageMetadata();
-  metadata["SomeKey"] = "SomeValue";
-  var messageId = await producer.Send(metadata, data);
-
-  ```
-
-## Consumer
-
-A consumer is a process that attaches to a topic through a subscription and then receives messages.
This section describes some configuration examples for the consumer.
-
-### Receive messages
-
-This example shows how a consumer receives messages from a topic.
-
-```c#
-
-await foreach (var message in consumer.Messages())
-{
-    Console.WriteLine("Received: " + Encoding.UTF8.GetString(message.Data.ToArray()));
-}
-
-```
-
-### Acknowledge messages
-
-Messages can be acknowledged individually or cumulatively. For details about message acknowledgement, see [acknowledgement](concepts-messaging.md#acknowledgement).
-
-- Acknowledge messages individually.
-
-  ```c#
-
-  await foreach (var message in consumer.Messages())
-  {
-      Console.WriteLine("Received: " + Encoding.UTF8.GetString(message.Data.ToArray()));
-      await consumer.Acknowledge(message);
-  }
-
-  ```
-
-- Acknowledge messages cumulatively.
-
-  ```c#
-
-  await consumer.AcknowledgeCumulative(message);
-
-  ```
-
-### Unsubscribe from topics
-
-This example shows how a consumer unsubscribes from a topic.
-
-```c#
-
-await consumer.Unsubscribe();
-
-```
-
-#### Note
-
-> A consumer cannot be used and is disposed once the consumer unsubscribes from a topic.
-
-## Reader
-
-A reader is actually just a consumer without a cursor. This means that Pulsar does not keep track of your progress and there is no need to acknowledge messages.
-
-This example shows how a reader receives messages.
-
-```c#
-
-await foreach (var message in reader.Messages())
-{
-    Console.WriteLine("Received: " + Encoding.UTF8.GetString(message.Data.ToArray()));
-}
-
-```
-
-## Monitoring
-
-This section describes how to monitor the producer, consumer, and reader state.
-
-### Monitor producer
-
-The following table lists states available for the producer.
-
-| State | Description |
-| ---- | ----|
-| Closed | The producer or the Pulsar client has been disposed. |
-| Connected | All is well. |
-| Disconnected | The connection is lost and attempts are being made to reconnect. |
-| Faulted | An unrecoverable error has occurred. |
-
-This example shows how to monitor the producer state.
-
-```c#
-
-private static async ValueTask Monitor(IProducer producer, CancellationToken cancellationToken)
-{
-    var state = ProducerState.Disconnected;
-
-    while (!cancellationToken.IsCancellationRequested)
-    {
-        state = await producer.StateChangedFrom(state, cancellationToken);
-
-        var stateMessage = state switch
-        {
-            ProducerState.Connected => "The producer is connected",
-            ProducerState.Disconnected => "The producer is disconnected",
-            ProducerState.Closed => "The producer has closed",
-            ProducerState.Faulted => "The producer has faulted",
-            _ => $"The producer has an unknown state '{state}'"
-        };
-
-        Console.WriteLine(stateMessage);
-
-        if (producer.IsFinalState(state))
-            return;
-    }
-}
-
-```
-
-### Monitor consumer state
-
-The following table lists states available for the consumer.
-
-| State | Description |
-| ---- | ----|
-| Active | All is well. |
-| Inactive | All is well. The subscription type is `Failover` and you are not the active consumer. |
-| Closed | The consumer or the Pulsar client has been disposed. |
-| Disconnected | The connection is lost and attempts are being made to reconnect. |
-| Faulted | An unrecoverable error has occurred. |
-| ReachedEndOfTopic | No more messages are delivered. |
-
-This example shows how to monitor the consumer state.
- -```c# - -private static async ValueTask Monitor(IConsumer consumer, CancellationToken cancellationToken) -{ - var state = ConsumerState.Disconnected; - - while (!cancellationToken.IsCancellationRequested) - { - state = await consumer.StateChangedFrom(state, cancellationToken); - - var stateMessage = state switch - { - ConsumerState.Active => "The consumer is active", - ConsumerState.Inactive => "The consumer is inactive", - ConsumerState.Disconnected => "The consumer is disconnected", - ConsumerState.Closed => "The consumer has closed", - ConsumerState.ReachedEndOfTopic => "The consumer has reached end of topic", - ConsumerState.Faulted => "The consumer has faulted", - _ => $"The consumer has an unknown state '{state}'" - }; - - Console.WriteLine(stateMessage); - - if (consumer.IsFinalState(state)) - return; - } -} - -``` - -### Monitor reader state - -The following table lists states available for the reader. - -| State | Description | -| ---- | ----| -| Closed | The reader or the Pulsar client has been disposed. | -| Connected | All is well. | -| Disconnected | The connection is lost and attempts are being made to reconnect. -| Faulted | An unrecoverable error has occurred. | -| ReachedEndOfTopic | No more messages are delivered. | - -This example shows how to monitor the reader state. - -```c# - -private static async ValueTask Monitor(IReader reader, CancellationToken cancellationToken) -{ - var state = ReaderState.Disconnected; - - while (!cancellationToken.IsCancellationRequested) - { - state = await reader.StateChangedFrom(state, cancellationToken); - - var stateMessage = state switch - { - ReaderState.Connected => "The reader is connected", - ReaderState.Disconnected => "The reader is disconnected", - ReaderState.Closed => "The reader has closed", - ReaderState.ReachedEndOfTopic => "The reader has reached end of topic", - ReaderState.Faulted => "The reader has faulted", - _ => $"The reader has an unknown state '{state}'" - }; - - Console.WriteLine(stateMessage); - - if (reader.IsFinalState(state)) - return; - } -} - -``` - diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/client-libraries-go.md b/site2/website/versioned_docs/version-2.8.1-deprecated/client-libraries-go.md deleted file mode 100644 index 6281b03dd8c805..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/client-libraries-go.md +++ /dev/null @@ -1,885 +0,0 @@ ---- -id: client-libraries-go -title: Pulsar Go client -sidebar_label: "Go" -original_id: client-libraries-go ---- - -> Tips: Currently, the CGo client will be deprecated, if you want to know more about the CGo client, please refer to [CGo client docs](client-libraries-cgo.md) - -You can use Pulsar [Go client](https://github.com/apache/pulsar-client-go) to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Go (aka Golang). - -> **API docs available as well** -> For standard API docs, consult the [Godoc](https://godoc.org/github.com/apache/pulsar-client-go/pulsar). - - -## Installation - -### Install go package - -You can install the `pulsar` library locally using `go get`. - -```bash - -$ go get -u "github.com/apache/pulsar-client-go/pulsar" - -``` - -Once installed locally, you can import it into your project: - -```go - -import "github.com/apache/pulsar-client-go/pulsar" - -``` - -## Connection URLs - -To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL. 
## Connection URLs

To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL.

Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme and have a default port of 6650. Here's an example for `localhost`:

```http

pulsar://localhost:6650

```

If you have multiple brokers, you can set the URL as below.

```

pulsar://localhost:6650,localhost:6651,localhost:6652

```

A URL for a production Pulsar cluster may look something like this:

```http

pulsar://pulsar.us-west.example.com:6650

```

If you're using [TLS](security-tls-authentication.md) authentication, the URL will look something like this:

```http

pulsar+ssl://pulsar.us-west.example.com:6651

```

## Create a client

In order to interact with Pulsar, you'll first need a `Client` object. You can create a client object using the `NewClient` function, passing in a `ClientOptions` object (more on configuration [below](#client-configuration)). Here's an example:

```go

import (
	"log"
	"time"

	"github.com/apache/pulsar-client-go/pulsar"
)

func main() {
	client, err := pulsar.NewClient(pulsar.ClientOptions{
		URL:               "pulsar://localhost:6650",
		OperationTimeout:  30 * time.Second,
		ConnectionTimeout: 30 * time.Second,
	})
	if err != nil {
		log.Fatalf("Could not instantiate Pulsar client: %v", err)
	}

	defer client.Close()
}

```

If you have multiple brokers, you can initiate a client object as below.

```go

import (
	"log"
	"time"

	"github.com/apache/pulsar-client-go/pulsar"
)

func main() {
	client, err := pulsar.NewClient(pulsar.ClientOptions{
		URL:               "pulsar://localhost:6650,localhost:6651,localhost:6652",
		OperationTimeout:  30 * time.Second,
		ConnectionTimeout: 30 * time.Second,
	})
	if err != nil {
		log.Fatalf("Could not instantiate Pulsar client: %v", err)
	}

	defer client.Close()
}

```

The following configurable parameters are available for Pulsar clients:

 Name | Description | Default
| :-------- | :---------- |:---------- |
| URL | Configure the service URL for the Pulsar service.<br><br>If you have multiple brokers, you can set multiple Pulsar cluster addresses for a client.<br><br>This parameter is **required**. |None |
| ConnectionTimeout | Timeout for the establishment of a TCP connection | 30s |
| OperationTimeout| Set the operation timeout. Producer-create, subscribe, and unsubscribe operations are retried until this interval elapses, after which the operation is marked as failed | 30s|
| Authentication | Configure the authentication provider. Example: `Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem")` | no authentication |
| TLSTrustCertsFilePath | Set the path to the trusted TLS certificate file | |
| TLSAllowInsecureConnection | Configure whether the Pulsar client accepts an untrusted TLS certificate from the broker | false |
| TLSValidateHostname | Configure whether the Pulsar client verifies the validity of the host name from the broker | false |
| ListenerName | Configure the net model for VPC users to connect to the Pulsar broker | |
| MaxConnectionsPerBroker | Max number of connections to a single broker that is kept in the pool | 1 |
| CustomMetricsLabels | Add custom labels to all the metrics reported by this client instance | |
| Logger | Configure the logger used by the client | logrus.StandardLogger |

## Producers

Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Go producers using a `ProducerOptions` object. Here's an example:

```go

producer, err := client.CreateProducer(pulsar.ProducerOptions{
	Topic: "my-topic",
})

if err != nil {
	log.Fatal(err)
}

_, err = producer.Send(context.Background(), &pulsar.ProducerMessage{
	Payload: []byte("hello"),
})

defer producer.Close()

if err != nil {
	fmt.Println("Failed to publish message", err)
}
fmt.Println("Published message")

```

### Producer operations

Pulsar Go producers have the following methods available:

Method | Description | Return type
:------|:------------|:-----------
`Topic()` | Fetches the producer's [topic](reference-terminology.md#topic)| `string`
`Name()` | Fetches the producer's name | `string`
`Send(context.Context, *ProducerMessage)` | Publishes a [message](#messages) to the producer's topic. This call blocks until the message is successfully acknowledged by the Pulsar broker, or an error is returned if the timeout set using `SendTimeout` in the producer's [configuration](#producer-configuration) is exceeded. | (MessageID, error)
`SendAsync(context.Context, *ProducerMessage, func(MessageID, *ProducerMessage, error))`| Publishes a message asynchronously. The callback is invoked once the message has been acknowledged by the Pulsar broker (or publishing has failed), reporting the message ID and any error. |
`LastSequenceID()` | Get the last sequence id that was published by this producer. This represents either the automatically assigned or custom sequence id (set on the ProducerMessage) that was published and acknowledged by the broker. | int64
`Flush()`| Flush all the messages buffered in the client and wait until all messages have been successfully persisted. | error
`Close()` | Closes the producer and releases all resources allocated to it. Once `Close()` is called, no more messages are accepted from the publisher. This method blocks until all pending publish requests have been persisted by Pulsar. If an error is thrown, no pending writes are retried. |
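To illustrate the difference between `Send` and `SendAsync`, here is a small sketch (client and producer setup as in the examples above); the callback receives the message ID, the original message, and any publish error:

```go

producer.SendAsync(context.Background(), &pulsar.ProducerMessage{
	Payload: []byte("hello-async"),
}, func(id pulsar.MessageID, msg *pulsar.ProducerMessage, err error) {
	if err != nil {
		log.Printf("Failed to publish message: %v", err)
		return
	}
	log.Printf("Published message with ID: %v", id)
})

// Flush blocks until all buffered messages have been persisted by the broker.
if err := producer.Flush(); err != nil {
	log.Fatal(err)
}

```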
### Producer Example

#### How to use message router in producer

```go

client, err := pulsar.NewClient(pulsar.ClientOptions{
	URL: serviceURL,
})

if err != nil {
	log.Fatal(err)
}
defer client.Close()

// Only subscribe on the specific partition
consumer, err := client.Subscribe(pulsar.ConsumerOptions{
	Topic:            "my-partitioned-topic-partition-2",
	SubscriptionName: "my-sub",
})

if err != nil {
	log.Fatal(err)
}
defer consumer.Close()

producer, err := client.CreateProducer(pulsar.ProducerOptions{
	Topic: "my-partitioned-topic",
	MessageRouter: func(msg *pulsar.ProducerMessage, tm pulsar.TopicMetadata) int {
		fmt.Println("Routing message ", msg, " -- Partitions: ", tm.NumPartitions())
		return 2
	},
})

if err != nil {
	log.Fatal(err)
}
defer producer.Close()

```

#### How to use schema interface in producer

```go

type testJSON struct {
	ID   int    `json:"id"`
	Name string `json:"name"`
}

```

```go

var (
	exampleSchemaDef = "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," +
		"\"fields\":[{\"name\":\"ID\",\"type\":\"int\"},{\"name\":\"Name\",\"type\":\"string\"}]}"
)

```

```go

client, err := pulsar.NewClient(pulsar.ClientOptions{
	URL: "pulsar://localhost:6650",
})
if err != nil {
	log.Fatal(err)
}
defer client.Close()

properties := make(map[string]string)
properties["pulsar"] = "hello"
jsonSchemaWithProperties := pulsar.NewJSONSchema(exampleSchemaDef, properties)
producer, err := client.CreateProducer(pulsar.ProducerOptions{
	Topic:  "jsonTopic",
	Schema: jsonSchemaWithProperties,
})
if err != nil {
	log.Fatal(err)
}

_, err = producer.Send(context.Background(), &pulsar.ProducerMessage{
	Value: &testJSON{
		ID:   100,
		Name: "pulsar",
	},
})
if err != nil {
	log.Fatal(err)
}
producer.Close()

```

#### How to use relative delayed delivery in producer

```go

client, err := pulsar.NewClient(pulsar.ClientOptions{
	URL: "pulsar://localhost:6650",
})
if err != nil {
	log.Fatal(err)
}
defer client.Close()

topicName := newTopicName()
producer, err := client.CreateProducer(pulsar.ProducerOptions{
	Topic:           topicName,
	DisableBatching: true,
})
if err != nil {
	log.Fatal(err)
}
defer producer.Close()

consumer, err := client.Subscribe(pulsar.ConsumerOptions{
	Topic:            topicName,
	SubscriptionName: "subName",
	Type:             pulsar.Shared,
})
if err != nil {
	log.Fatal(err)
}
defer consumer.Close()

ID, err := producer.Send(context.Background(), &pulsar.ProducerMessage{
	Payload:      []byte("test"),
	DeliverAfter: 3 * time.Second,
})
if err != nil {
	log.Fatal(err)
}
fmt.Println(ID)

ctx, canc := context.WithTimeout(context.Background(), 1*time.Second)
msg, err := consumer.Receive(ctx)
if err != nil {
	// The message is delayed by 3 seconds, so this receive is expected to time out.
	fmt.Println("No message within 1 second:", err)
} else {
	fmt.Println(msg.Payload())
}
canc()

ctx, canc = context.WithTimeout(context.Background(), 5*time.Second)
msg, err = consumer.Receive(ctx)
if err != nil {
	log.Fatal(err)
}
fmt.Println(msg.Payload())
canc()

```

### Producer configuration

 Name | Description | Default
| :-------- | :---------- |:---------- |
| Topic | Topic specifies the topic this producer will publish to. This argument is required when constructing the producer. | |
| Name | Name specifies a name for the producer. If not assigned, the system generates a globally unique name which can be accessed with `Producer.Name()`. | |
| Properties | Properties attach a set of application-defined properties to the producer. These properties will be visible in the topic stats | |
| SendTimeout | SendTimeout sets the timeout for a message that is not acknowledged by the server | 30s |
| DisableBlockIfQueueFull | DisableBlockIfQueueFull controls whether Send and SendAsync block if the producer's message queue is full | false |
| MaxPendingMessages| MaxPendingMessages sets the max size of the queue holding the messages pending to receive an acknowledgment from the broker. | |
| HashingScheme | HashingScheme changes the `HashingScheme` used to choose the partition on which to publish a particular message. | JavaStringHash |
| CompressionType | CompressionType sets the compression type for the producer. | not compressed |
| CompressionLevel | Define the desired compression level. Options: Default, Faster and Better | Default |
| MessageRouter | MessageRouter sets a custom message routing policy by passing an implementation of MessageRouter | |
| DisableBatching | DisableBatching controls whether automatic batching of messages is enabled for the producer. | false |
| BatchingMaxPublishDelay | BatchingMaxPublishDelay sets the time period within which the messages sent will be batched | 1ms |
| BatchingMaxMessages | BatchingMaxMessages sets the maximum number of messages permitted in a batch. | 1000 |
| BatchingMaxSize | BatchingMaxSize sets the maximum number of bytes permitted in a batch. | 128KB |
| Schema | Schema sets a custom schema type by passing an implementation of `Schema` | bytes[] |
| Interceptors | A chain of interceptors. These interceptors are called at some points defined in the `ProducerInterceptor` interface. | None |
| MaxReconnectToBroker | MaxReconnectToBroker sets the maximum retry number of reconnectToBroker | unlimited |
| BatcherBuilderType | BatcherBuilderType sets the batch builder type. This is used to create a batch container when batching is enabled. Options: DefaultBatchBuilder and KeyBasedBatchBuilder | DefaultBatchBuilder |

## Consumers

Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on that topic/those topics. You can [configure](#consumer-configuration) Go consumers using a `ConsumerOptions` object. Here's a basic example:

```go

consumer, err := client.Subscribe(pulsar.ConsumerOptions{
	Topic:            "topic-1",
	SubscriptionName: "my-sub",
	Type:             pulsar.Shared,
})
if err != nil {
	log.Fatal(err)
}
defer consumer.Close()

for i := 0; i < 10; i++ {
	msg, err := consumer.Receive(context.Background())
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("Received message msgId: %#v -- content: '%s'\n",
		msg.ID(), string(msg.Payload()))

	consumer.Ack(msg)
}

if err := consumer.Unsubscribe(); err != nil {
	log.Fatal(err)
}

```

### Consumer operations

Pulsar Go consumers have the following methods available:

Method | Description | Return type
:------|:------------|:-----------
`Subscription()` | Returns the consumer's subscription name | `string`
`Unsubscribe()` | Unsubscribes the consumer from the assigned topic. Returns an error if the unsubscribe operation fails. | `error`
`Receive(context.Context)` | Receives a single message from the topic. This method blocks until a message is available. | `(Message, error)`
`Chan()` | Chan returns a channel from which to consume messages. | `<-chan ConsumerMessage`
`Ack(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) |
`AckID(MessageID)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID |
`ReconsumeLater(msg Message, delay time.Duration)` | ReconsumeLater marks a message for redelivery after a custom delay |
`Nack(Message)` | Acknowledges the failure to process a single message. |
`NackID(MessageID)` | Acknowledges the failure to process a single message by message ID. |
`Seek(msgID MessageID)` | Resets the subscription associated with this consumer to a specific message id. The message id can either be a specific message or represent the first or last messages in the topic. | `error`
`SeekByTime(time time.Time)` | Resets the subscription associated with this consumer to a specific message publish time. | `error`
`Close()` | Closes the consumer, disabling its ability to receive messages from the broker |
`Name()` | Name returns the name of the consumer | `string`
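As a sketch of the negative acknowledgement path, assuming a hypothetical `process` function supplied by the application:

```go

msg, err := consumer.Receive(context.Background())
if err != nil {
	log.Fatal(err)
}

// process is a hypothetical application function; replace it with your own logic.
if err := process(msg); err != nil {
	// The message is redelivered after NackRedeliveryDelay (1 minute by default).
	consumer.Nack(msg)
} else {
	consumer.Ack(msg)
}

```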
### Receive example

#### How to use regex consumer

```go

client, err := pulsar.NewClient(pulsar.ClientOptions{
	URL: "pulsar://localhost:6650",
})
if err != nil {
	log.Fatal(err)
}
defer client.Close()

p, err := client.CreateProducer(pulsar.ProducerOptions{
	Topic:           topicInRegex,
	DisableBatching: true,
})
if err != nil {
	log.Fatal(err)
}
defer p.Close()

topicsPattern := fmt.Sprintf("persistent://%s/foo.*", namespace)
opts := pulsar.ConsumerOptions{
	TopicsPattern:    topicsPattern,
	SubscriptionName: "regex-sub",
}
consumer, err := client.Subscribe(opts)
if err != nil {
	log.Fatal(err)
}
defer consumer.Close()

```

#### How to use multi-topic consumer

```go

func newTopicName() string {
	return fmt.Sprintf("my-topic-%v", time.Now().Nanosecond())
}


topic1 := "topic-1"
topic2 := "topic-2"

client, err := pulsar.NewClient(pulsar.ClientOptions{
	URL: "pulsar://localhost:6650",
})
if err != nil {
	log.Fatal(err)
}
topics := []string{topic1, topic2}
consumer, err := client.Subscribe(pulsar.ConsumerOptions{
	Topics:           topics,
	SubscriptionName: "multi-topic-sub",
})
if err != nil {
	log.Fatal(err)
}
defer consumer.Close()

```

#### How to use consumer listener

```go

import (
	"fmt"
	"log"

	"github.com/apache/pulsar-client-go/pulsar"
)

func main() {
	client, err := pulsar.NewClient(pulsar.ClientOptions{URL: "pulsar://localhost:6650"})
	if err != nil {
		log.Fatal(err)
	}

	defer client.Close()

	channel := make(chan pulsar.ConsumerMessage, 100)

	options := pulsar.ConsumerOptions{
		Topic:            "topic-1",
		SubscriptionName: "my-subscription",
		Type:             pulsar.Shared,
	}

	options.MessageChannel = channel

	consumer, err := client.Subscribe(options)
	if err != nil {
		log.Fatal(err)
	}

	defer consumer.Close()

	// Receive messages from the channel. The channel returns a struct that contains the message
	// and the consumer from which the message was received. It's not necessary here since we have
	// a single consumer, but the channel could be shared across multiple consumers as well.
	for cm := range channel {
		msg := cm.Message
		fmt.Printf("Received message msgId: %v -- content: '%s'\n",
			msg.ID(), string(msg.Payload()))

		consumer.Ack(msg)
	}
}

```

#### How to use consumer receive timeout

```go

client, err := pulsar.NewClient(pulsar.ClientOptions{
	URL: "pulsar://localhost:6650",
})
if err != nil {
	log.Fatal(err)
}
defer client.Close()

topic := "test-topic-with-no-messages"
ctx, cancel := context.WithTimeout(context.Background(), 500*time.Millisecond)
defer cancel()

// create consumer
consumer, err := client.Subscribe(pulsar.ConsumerOptions{
	Topic:            topic,
	SubscriptionName: "my-sub1",
	Type:             pulsar.Shared,
})
if err != nil {
	log.Fatal(err)
}
defer consumer.Close()

msg, err := consumer.Receive(ctx)
if err != nil {
	log.Fatal(err)
}
fmt.Println(msg.Payload())

```

#### How to use schema in consumer

```go

type testJSON struct {
	ID   int    `json:"id"`
	Name string `json:"name"`
}

```

```go

var (
	exampleSchemaDef = "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," +
		"\"fields\":[{\"name\":\"ID\",\"type\":\"int\"},{\"name\":\"Name\",\"type\":\"string\"}]}"
)

```

```go

client, err := pulsar.NewClient(pulsar.ClientOptions{
	URL: "pulsar://localhost:6650",
})
if err != nil {
	log.Fatal(err)
}
defer client.Close()

var s testJSON

consumerJS := pulsar.NewJSONSchema(exampleSchemaDef, nil)
consumer, err := client.Subscribe(pulsar.ConsumerOptions{
	Topic:                       "jsonTopic",
	SubscriptionName:            "sub-1",
	Schema:                      consumerJS,
	SubscriptionInitialPosition: pulsar.SubscriptionPositionEarliest,
})
if err != nil {
	log.Fatal(err)
}
defer consumer.Close()

msg, err := consumer.Receive(context.Background())
if err != nil {
	log.Fatal(err)
}
err = msg.GetSchemaValue(&s)
if err != nil {
	log.Fatal(err)
}

```

### Consumer configuration

 Name | Description | Default
| :-------- | :---------- |:---------- |
| Topic | Topic specifies the topic this consumer will subscribe to. This argument is required when constructing the consumer. | |
| Topics | Specify a list of topics this consumer will subscribe on. Either a topic, a list of topics, or a topics pattern is required when subscribing| |
| TopicsPattern | Specify a regular expression to subscribe to multiple topics under the same namespace. Either a topic, a list of topics, or a topics pattern is required when subscribing | |
| AutoDiscoveryPeriod | Specify the interval in which to poll for new partitions or new topics if using a TopicsPattern. | |
| SubscriptionName | Specify the subscription name for this consumer. This argument is required when subscribing | |
| Name | Set the consumer name | |
| Properties | Properties attach a set of application-defined properties to the consumer. These properties will be visible in the topic stats | |
| Type | Select the subscription type to be used when subscribing to the topic. | Exclusive |
| SubscriptionInitialPosition | Initial position at which the cursor will be set when subscribing | Latest |
| DLQ | Configuration for the Dead Letter Queue consumer policy. | no DLQ |
| MessageChannel | Sets a `MessageChannel` for the consumer. When a message is received, it will be pushed to the channel for consumption | |
| ReceiverQueueSize | Sets the size of the consumer receive queue. | 1000|
| NackRedeliveryDelay | The delay after which to redeliver the messages that failed to be processed | 1min |
| ReadCompacted | If enabled, the consumer will read messages from the compacted topic rather than reading the full message backlog of the topic | false |
| ReplicateSubscriptionState | Mark the subscription as replicated to keep it in sync across clusters | false |
| KeySharedPolicy | Configuration for the Key Shared consumer policy. | |
| RetryEnable | Auto retry send messages to default filled DLQPolicy topics | false |
| Interceptors | A chain of interceptors. These interceptors are called at some points defined in the `ConsumerInterceptor` interface. | |
| MaxReconnectToBroker | MaxReconnectToBroker sets the maximum retry number of reconnectToBroker. | unlimited |
| Schema | Schema sets a custom schema type by passing an implementation of `Schema` | bytes[] |
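For example, a consumer combining several of these options might look like the following sketch; the topic and subscription names are illustrative.

```go

consumer, err := client.Subscribe(pulsar.ConsumerOptions{
	Topic:               "my-topic",
	SubscriptionName:    "my-sub",
	Type:                pulsar.Shared,
	NackRedeliveryDelay: 10 * time.Second,
	// After 3 failed deliveries, the message is routed to the dead letter topic.
	DLQ: &pulsar.DLQPolicy{
		MaxDeliveries:   3,
		DeadLetterTopic: "my-topic-dlq",
	},
})
if err != nil {
	log.Fatal(err)
}
defer consumer.Close()

```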
## Readers

Pulsar readers process messages from Pulsar topics. Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recent unacked message). You can [configure](#reader-configuration) Go readers using a `ReaderOptions` object. Here's an example:

```go

reader, err := client.CreateReader(pulsar.ReaderOptions{
	Topic:          "topic-1",
	StartMessageID: pulsar.EarliestMessageID(),
})
if err != nil {
	log.Fatal(err)
}
defer reader.Close()

```

### Reader operations

Pulsar Go readers have the following methods available:

Method | Description | Return type
:------|:------------|:-----------
`Topic()` | Returns the reader's [topic](reference-terminology.md#topic) | `string`
`Next(context.Context)` | Receives the next message on the topic (analogous to the `Receive` method for [consumers](#consumer-operations)). This method blocks until a message is available. | `(Message, error)`
`HasNext()` | Checks if there is any message available to read from the current position| (bool, error)
`Close()` | Closes the reader, disabling its ability to receive messages from the broker | `error`
`Seek(MessageID)` | Resets the subscription associated with this reader to a specific message ID | `error`
`SeekByTime(time time.Time)` | Resets the subscription associated with this reader to a specific message publish time | `error`

### Reader example

#### How to use reader to read 'next' message

Here's an example usage of a Go reader that uses the `Next()` method to process incoming messages:

```go

import (
	"context"
	"fmt"
	"log"

	"github.com/apache/pulsar-client-go/pulsar"
)

func main() {
	client, err := pulsar.NewClient(pulsar.ClientOptions{URL: "pulsar://localhost:6650"})
	if err != nil {
		log.Fatal(err)
	}

	defer client.Close()

	reader, err := client.CreateReader(pulsar.ReaderOptions{
		Topic:          "topic-1",
		StartMessageID: pulsar.EarliestMessageID(),
	})
	if err != nil {
		log.Fatal(err)
	}
	defer reader.Close()

	for reader.HasNext() {
		msg, err := reader.Next(context.Background())
		if err != nil {
			log.Fatal(err)
		}

		fmt.Printf("Received message msgId: %#v -- content: '%s'\n",
			msg.ID(), string(msg.Payload()))
	}
}

```

In the example above, the reader begins reading from the earliest available message (specified by `pulsar.EarliestMessageID()`). The reader can also begin reading from the latest message (`pulsar.LatestMessageID()`) or some other message ID specified by bytes using the `DeserializeMessageID` function, which takes a byte array and returns a `MessageID` object. Here's an example:

```go

lastSavedId := // Read last saved message id from external store as byte[]

reader, err := client.CreateReader(pulsar.ReaderOptions{
	Topic:          "my-golang-topic",
	StartMessageID: pulsar.DeserializeMessageID(lastSavedId),
})

```

#### How to use reader to read specific message

```go

client, err := pulsar.NewClient(pulsar.ClientOptions{
	URL: lookupURL,
})

if err != nil {
	log.Fatal(err)
}
defer client.Close()

topic := "topic-1"
ctx := context.Background()

// create producer
producer, err := client.CreateProducer(pulsar.ProducerOptions{
	Topic:           topic,
	DisableBatching: true,
})
if err != nil {
	log.Fatal(err)
}
defer producer.Close()

// send 10 messages
msgIDs := [10]pulsar.MessageID{}
for i := 0; i < 10; i++ {
	msgID, err := producer.Send(ctx, &pulsar.ProducerMessage{
		Payload: []byte(fmt.Sprintf("hello-%d", i)),
	})
	if err != nil {
		log.Fatal(err)
	}
	msgIDs[i] = msgID
}

// create reader on 5th message (not included)
reader, err := client.CreateReader(pulsar.ReaderOptions{
	Topic:          topic,
	StartMessageID: msgIDs[4],
})

if err != nil {
	log.Fatal(err)
}
defer reader.Close()

// receive the remaining 5 messages
for i := 5; i < 10; i++ {
	msg, err := reader.Next(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("Read message: '%s'\n", string(msg.Payload()))
}

// create reader on 5th message (included)
readerInclusive, err := client.CreateReader(pulsar.ReaderOptions{
	Topic:                   topic,
	StartMessageID:          msgIDs[4],
	StartMessageIDInclusive: true,
})

if err != nil {
	log.Fatal(err)
}
defer readerInclusive.Close()

```

### Reader configuration

 Name | Description | Default
| :-------- | :---------- |:---------- |
| Topic | Topic specifies the topic this reader will read from. This argument is required when constructing the reader. | |
| Name | Name sets the reader name. | |
| Properties | Attach a set of application-defined properties to the reader. These properties will be visible in the topic stats | |
| StartMessageID | StartMessageID initial reader positioning is done by specifying a message id. | |
| StartMessageIDInclusive | If true, the reader will start at the `StartMessageID`, included. Default is `false` and the reader will start from the "next" message | false |
| MessageChannel | MessageChannel sets a `MessageChannel` for the reader. When a message is received, it will be pushed to the channel for consumption| |
| ReceiverQueueSize | ReceiverQueueSize sets the size of the reader receive queue. | 1000 |
| SubscriptionRolePrefix| SubscriptionRolePrefix sets the subscription role prefix. | "reader" |
| ReadCompacted | If enabled, the reader will read messages from the compacted topic rather than reading the full message backlog of the topic. ReadCompacted can only be enabled when reading from a persistent topic. | false|

## Messages

The Pulsar Go client provides a `ProducerMessage` interface that you can use to construct messages to produce on Pulsar topics. Here's an example message:

```go

msg := pulsar.ProducerMessage{
	Payload: []byte("Here is some message data"),
	Key:     "message-key",
	Properties: map[string]string{
		"foo": "bar",
	},
	EventTime:           time.Now(),
	ReplicationClusters: []string{"cluster1", "cluster3"},
}

if _, err := producer.Send(context.Background(), &msg); err != nil {
	log.Fatalf("Could not publish message due to: %v", err)
}

```

The following parameters are available for `ProducerMessage` objects:

Parameter | Description
:---------|:-----------
`Payload` | The actual data payload of the message
`Value` | Value and payload are mutually exclusive; use `Value interface{}` for schema-based messages.
`Key` | The optional key associated with the message (particularly useful for things like topic compaction)
`OrderingKey` | OrderingKey sets the ordering key of the message.
`Properties` | A key-value map (both keys and values must be strings) for any application-specific metadata attached to the message
`EventTime` | The timestamp associated with the message
`ReplicationClusters` | The clusters to which this message will be replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default.
`SequenceID` | Set the sequence id to assign to the current message
`DeliverAfter` | Request to deliver the message only after the specified relative delay
`DeliverAt` | Deliver the message only at or after the specified absolute timestamp

## TLS encryption and authentication

In order to use [TLS encryption](security-tls-transport.md), you'll need to configure your client to do so:

 * Use `pulsar+ssl` URL type
 * Set `TLSTrustCertsFilePath` to the path to the TLS certs used by your client and the Pulsar broker
 * Configure `Authentication` option

Here's an example:

```go

opts := pulsar.ClientOptions{
	URL:                   "pulsar+ssl://my-cluster.com:6651",
	TLSTrustCertsFilePath: "/path/to/certs/my-cert.csr",
	Authentication:        pulsar.NewAuthenticationTLS("my-cert.pem", "my-key.pem"),
}

```

## OAuth2 authentication

To use [OAuth2 authentication](security-oauth2.md), you'll need to configure your client to perform the following operations.
This example shows how to configure OAuth2 authentication.

```go

oauth := pulsar.NewAuthenticationOAuth2(map[string]string{
	"type":       "client_credentials",
	"issuerUrl":  "https://dev-kt-aa9ne.us.auth0.com",
	"audience":   "https://dev-kt-aa9ne.us.auth0.com/api/v2/",
	"privateKey": "/path/to/privateKey",
	"clientId":   "0Xx...Yyxeny",
})
client, err := pulsar.NewClient(pulsar.ClientOptions{
	URL:            "pulsar://my-cluster:6650",
	Authentication: oauth,
})

```
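The snippet above elides error handling; one way to complete it and verify that the authenticated client works is shown in this sketch (the topic name is illustrative).

```go

if err != nil {
	log.Fatal(err)
}
defer client.Close()

producer, err := client.CreateProducer(pulsar.ProducerOptions{
	Topic: "my-topic",
})
if err != nil {
	log.Fatal(err)
}
defer producer.Close()

```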
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/client-libraries-java.md b/site2/website/versioned_docs/version-2.8.1-deprecated/client-libraries-java.md
deleted file mode 100644
index 0ff9a22936fafc..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/client-libraries-java.md
+++ /dev/null
@@ -1,1038 +0,0 @@
---
id: client-libraries-java
title: Pulsar Java client
sidebar_label: "Java"
original_id: client-libraries-java
---

You can use the Pulsar Java client to create Java [producers](#producer), [consumers](#consumer), and [readers](#reader-interface) of messages and to perform [administrative tasks](admin-api-overview.md). The current version of the Java client is **@pulsar:version@**.

All the methods in [producer](#producer), [consumer](#consumer), and [reader](#reader) of a Java client are thread-safe.

Javadoc for the Pulsar client is divided into two domains by package as follows.

Package | Description | Maven Artifact
:-------|:------------|:--------------
[`org.apache.pulsar.client.api`](/api/client) | The producer and consumer API | [org.apache.pulsar:pulsar-client:@pulsar:version@](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client%7C@pulsar:version@%7Cjar)
[`org.apache.pulsar.client.admin`](/api/admin) | The Java [admin API](admin-api-overview.md) | [org.apache.pulsar:pulsar-client-admin:@pulsar:version@](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client-admin%7C@pulsar:version@%7Cjar)
`org.apache.pulsar.client.all` |Includes both `pulsar-client` and `pulsar-client-admin`.<br><br>Both `pulsar-client` and `pulsar-client-admin` are shaded packages and they shade dependencies independently. Consequently, applications using both `pulsar-client` and `pulsar-client-admin` have redundant shaded classes. It would be troublesome if you introduce new dependencies but forget to update shading rules.<br><br>In this case, you can use `pulsar-client-all`, which shades dependencies only one time and reduces the size of dependencies. |[org.apache.pulsar:pulsar-client-all:@pulsar:version@](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client-all%7C@pulsar:version@%7Cjar)

This document focuses only on the client API for producing and consuming messages on Pulsar topics. For how to use the Java admin client, see [Pulsar admin interface](admin-api-overview.md).

## Installation

The latest version of the Pulsar Java client library is available via [Maven Central](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client%7C@pulsar:version@%7Cjar). To use the latest version, add the `pulsar-client` library to your build configuration.

### Maven

If you use Maven, add the following information to the `pom.xml` file.

```xml

<!-- in your <properties> block -->
<pulsar.version>@pulsar:version@</pulsar.version>

<!-- in your <dependencies> block -->
<dependency>
  <groupId>org.apache.pulsar</groupId>
  <artifactId>pulsar-client</artifactId>
  <version>${pulsar.version}</version>
</dependency>

```

### Gradle

If you use Gradle, add the following information to the `build.gradle` file.

```groovy

def pulsarVersion = '@pulsar:version@'

dependencies {
    compile group: 'org.apache.pulsar', name: 'pulsar-client', version: pulsarVersion
}

```

## Connection URLs

To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL.

You can assign Pulsar protocol URLs to specific clusters and use the `pulsar` scheme. The default port is `6650`. The following is an example of `localhost`.

```http

pulsar://localhost:6650

```

If you have multiple brokers, the URL is as follows.

```http

pulsar://localhost:6650,localhost:6651,localhost:6652

```

A URL for a production Pulsar cluster is as follows.

```http

pulsar://pulsar.us-west.example.com:6650

```

If you use [TLS](security-tls-authentication.md) authentication, the URL is as follows.

```http

pulsar+ssl://pulsar.us-west.example.com:6651

```

## Client

You can instantiate a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object using just a URL for the target Pulsar [cluster](reference-terminology.md#cluster) like this:

```java

PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650")
        .build();

```

If you have multiple brokers, you can initiate a PulsarClient like this:

```java

PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650,localhost:6651,localhost:6652")
        .build();

```

> ### Default broker URLs for standalone clusters
> If you run a cluster in [standalone mode](getting-started-standalone.md), the broker is available at the `pulsar://localhost:6650` URL by default.

If you create a client, you can use the `loadConf` configuration. The following parameters are available in `loadConf`.

| Type | Name | Description | Default
|---|---|---|---
String | `serviceUrl` |Service URL provider for Pulsar service | None
String | `authPluginClassName` | Name of the authentication plugin | None
String | `authParams` | String represents parameters for the authentication plugin.<br><br>**Example**<br>key1:val1,key2:val2|None
long|`operationTimeoutMs`|Operation timeout |30000
long|`statsIntervalSeconds`|Interval between each stats info.<br><br>Stats is activated with positive `statsInterval`.<br><br>Set `statsIntervalSeconds` to 1 second at least |60
int|`numIoThreads`| The number of threads used for handling connections to brokers | 1
int|`numListenerThreads`|The number of threads used for handling message listeners. The listener thread pool is shared across all the consumers and readers using the "listener" model to get messages. For a given consumer, the listener is always invoked from the same thread to ensure ordering. If you want multiple threads to process a single topic, you need to create a [`shared`](https://pulsar.apache.org/docs/en/next/concepts-messaging/#shared) subscription and multiple consumers for this subscription. This does not ensure ordering.| 1
boolean|`useTcpNoDelay`|Whether to use the TCP no-delay flag on the connection to disable the Nagle algorithm |true
boolean |`useTls` |Whether to use TLS encryption on the connection| false
string | `tlsTrustCertsFilePath` |Path to the trusted TLS certificate file|None
boolean|`tlsAllowInsecureConnection`|Whether the Pulsar client accepts untrusted TLS certificates from the broker | false
boolean | `tlsHostnameVerificationEnable` | Whether to enable TLS hostname verification|false
int|`concurrentLookupRequest`|The number of concurrent lookup requests allowed to send on each broker connection to prevent overload on the broker|5000
int|`maxLookupRequest`|The maximum number of lookup requests allowed on each broker connection to prevent overload on the broker | 50000
int|`maxNumberOfRejectedRequestPerConnection`|The maximum number of rejected requests of a broker in a certain time frame (30 seconds) after the current connection is closed and the client creates a new connection to connect to a different broker|50
int|`keepAliveIntervalSeconds`|Seconds of keeping alive interval for each client broker connection|30
int|`connectionTimeoutMs`|Duration of waiting for a connection to a broker to be established.<br><br>If the duration passes without a response from a broker, the connection attempt is dropped|10000
int|`requestTimeoutMs`|Maximum duration for completing a request |60000
int|`defaultBackoffIntervalNanos`| Default duration for a backoff interval | TimeUnit.MILLISECONDS.toNanos(100);
long|`maxBackoffIntervalNanos`|Maximum duration for a backoff interval|TimeUnit.SECONDS.toNanos(30)
SocketAddress|`socks5ProxyAddress`|SOCKS5 proxy address | None
String|`socks5ProxyUsername`|SOCKS5 proxy username | None
String|`socks5ProxyPassword`|SOCKS5 proxy password | None

Check out the Javadoc for the {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} class for a full list of configurable parameters.

> In addition to client-level configuration, you can also apply [producer](#configuring-producers) and [consumer](#configuring-consumers) specific configuration as described in sections below.

## Producer

In Pulsar, producers write messages to topics. Once you've instantiated a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object (as in the section [above](#client-configuration)), you can create a {@inject: javadoc:Producer:/client/org/apache/pulsar/client/api/Producer} for a specific Pulsar [topic](reference-terminology.md#topic).

```java

Producer<byte[]> producer = client.newProducer()
        .topic("my-topic")
        .create();

// You can then send messages to the broker and topic you specified:
producer.send("My message".getBytes());

```

By default, producers produce messages that consist of byte arrays. You can produce different types by specifying a message [schema](#schemas).

```java

Producer<String> stringProducer = client.newProducer(Schema.STRING)
        .topic("my-topic")
        .create();
stringProducer.send("My message");

```

> Make sure that you close your producers, consumers, and clients when you do not need them.

> ```java
>
> producer.close();
> consumer.close();
> client.close();
>
> ```

> Close operations can also be asynchronous:

> ```java
>
> producer.closeAsync()
>    .thenRun(() -> System.out.println("Producer closed"))
>    .exceptionally((ex) -> {
>        System.err.println("Failed to close producer: " + ex);
>        return null;
>    });
>
> ```
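Because `PulsarClient` and `Producer` implement `Closeable`, try-with-resources is another way to ensure cleanup; the following is a sketch assuming a local standalone broker.

```java

try (PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650")
        .build();
     Producer<byte[]> producer = client.newProducer()
             .topic("my-topic")
             .create()) {
    producer.send("My message".getBytes());
} catch (PulsarClientException e) {
    // Creation, send, and close can all fail with PulsarClientException.
    e.printStackTrace();
}

```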
### Configure producer

If you instantiate a `Producer` object by specifying only a topic name as in the example above, the producer uses the default configuration.

If you create a producer, you can use the `loadConf` configuration. The following parameters are available in `loadConf`.

Type | Name| Description | Default
|---|---|---|---
String| `topicName`| Topic name| null|
String|`producerName`|Producer name| null
long|`sendTimeoutMs`|Message send timeout in ms.<br><br>If a message is not acknowledged by a server before the `sendTimeout` expires, an error occurs.|30000
boolean|`blockIfQueueFull`|If it is set to `true`, when the outgoing message queue is full, the `Send` and `SendAsync` methods of producer block, rather than failing and throwing errors.<br><br>If it is set to `false`, when the outgoing message queue is full, the `Send` and `SendAsync` methods of producer fail and `ProducerQueueIsFullError` exceptions occur.<br><br>The `MaxPendingMessages` parameter determines the size of the outgoing message queue.|false
int|`maxPendingMessages`|The maximum size of a queue holding pending messages.<br><br>For example, a message waiting to receive an acknowledgment from a [broker](reference-terminology.md#broker).<br><br>By default, when the queue is full, all calls to the `Send` and `SendAsync` methods fail **unless** you set `BlockIfQueueFull` to `true`.|1000
int|`maxPendingMessagesAcrossPartitions`|The maximum number of pending messages across partitions.<br><br>Use the setting to lower the max pending messages for each partition ({@link #setMaxPendingMessages(int)}) if the total number exceeds the configured value.|50000
MessageRoutingMode|`messageRoutingMode`|Message routing logic for producers on [partitioned topics](concepts-architecture-overview.md#partitioned-topics).<br><br>Apply the logic only when setting no key on messages.<br><br>Available options are as follows:<br>`pulsar.RoundRobinDistribution`: round robin<br>`pulsar.UseSinglePartition`: publish all messages to a single partition<br>`pulsar.CustomPartition`: a custom partitioning scheme|`pulsar.RoundRobinDistribution`
HashingScheme|`hashingScheme`|Hashing function determining the partition where you publish a particular message (**partitioned topics only**).<br><br>Available options are as follows:<br>`pulsar.JavaStringHash`: the equivalent of `String.hashCode()` in Java<br>`pulsar.Murmur3_32Hash`: applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function<br>`pulsar.BoostHash`: applies the hashing function from C++'s [Boost](https://www.boost.org/doc/libs/1_62_0/doc/html/hash.html) library|`HashingScheme.JavaStringHash`
ProducerCryptoFailureAction|`cryptoFailureAction`|The action the producer takes when encryption fails.<br><br>**FAIL**: if encryption fails, unencrypted messages fail to send.<br><br>**SEND**: if encryption fails, unencrypted messages are sent.|`ProducerCryptoFailureAction.FAIL`
long|`batchingMaxPublishDelayMicros`|Batching time period of sending messages.|TimeUnit.MILLISECONDS.toMicros(1)
int|`batchingMaxMessages`|The maximum number of messages permitted in a batch.|1000
boolean|`batchingEnabled`|Enable batching of messages. |true
CompressionType|`compressionType`|Message data compression type used by a producer.<br><br>Available options:<br>[`LZ4`](https://github.com/lz4/lz4)<br>[`ZLIB`](https://zlib.net/)<br>[`ZSTD`](https://facebook.github.io/zstd/)<br>[`SNAPPY`](https://google.github.io/snappy/)| No compression

You can configure parameters if you do not want to use the default configuration.

For a full list, see the Javadoc for the {@inject: javadoc:ProducerBuilder:/client/org/apache/pulsar/client/api/ProducerBuilder} class. The following is an example.

```java

Producer<byte[]> producer = client.newProducer()
        .topic("my-topic")
        .batchingMaxPublishDelay(10, TimeUnit.MILLISECONDS)
        .sendTimeout(10, TimeUnit.SECONDS)
        .blockIfQueueFull(true)
        .create();

```

### Message routing

When using partitioned topics, you can specify the routing mode whenever you publish messages using a producer. For more information on specifying a routing mode using the Java client, see the [Partitioned Topics](cookbooks-partitioned.md) cookbook.

### Async send

You can publish messages [asynchronously](concepts-messaging.md#send-modes) using the Java client. With async send, the producer puts the message in a blocking queue and returns immediately. Then the client library sends the message to the broker in the background. If the queue is full (max size configurable), the producer is blocked or fails immediately when calling the API, depending on arguments passed to the producer.

The following is an example.

```java

producer.sendAsync("my-async-message".getBytes()).thenAccept(msgId -> {
    System.out.println("Message with ID " + msgId + " successfully sent");
});

```

As you can see from the example above, async send operations return a {@inject: javadoc:MessageId:/client/org/apache/pulsar/client/api/MessageId} wrapped in a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture).

### Configure messages

In addition to a value, you can set additional items on a given message:

```java

producer.newMessage()
    .key("my-message-key")
    .value("my-async-message".getBytes())
    .property("my-key", "my-value")
    .property("my-other-key", "my-other-value")
    .send();

```

You can terminate the builder chain with `sendAsync()` and get a future return.
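For example, the same message can be published asynchronously by ending the chain with `sendAsync()`; this sketch simply logs the resulting message ID.

```java

producer.newMessage()
    .key("my-message-key")
    .value("my-async-message".getBytes())
    .sendAsync()
    .thenAccept(msgId -> System.out.println("Published message with ID " + msgId));

```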
## Consumer

In Pulsar, consumers subscribe to topics and handle messages that producers publish to those topics. You can instantiate a new [consumer](reference-terminology.md#consumer) by first instantiating a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object and passing it a URL for a Pulsar broker (as [above](#client-configuration)).

Once you've instantiated a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object, you can create a {@inject: javadoc:Consumer:/client/org/apache/pulsar/client/api/Consumer} by specifying a [topic](reference-terminology.md#topic) and a [subscription](concepts-messaging.md#subscription-modes).

```java

Consumer consumer = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .subscribe();

```

The `subscribe` method will auto subscribe the consumer to the specified topic and subscription. One way to make the consumer listen on the topic is to set up a `while` loop. In this example loop, the consumer listens for messages, prints the contents of any received message, and then [acknowledges](reference-terminology.md#acknowledgment-ack) that the message has been processed. If the processing logic fails, you can use [negative acknowledgement](reference-terminology.md#acknowledgment-ack) to redeliver the message later.

```java

while (true) {
  // Wait for a message
  Message msg = consumer.receive();

  try {
      // Do something with the message
      System.out.println("Message received: " + new String(msg.getData()));

      // Acknowledge the message so that it can be deleted by the message broker
      consumer.acknowledge(msg);
  } catch (Exception e) {
      // Message failed to process, redeliver later
      consumer.negativeAcknowledge(msg);
  }
}

```

If you don't want to block your main thread but rather listen constantly for new messages, consider using a `MessageListener`.

```java

MessageListener myMessageListener = (consumer, msg) -> {
  try {
      System.out.println("Message received: " + new String(msg.getData()));
      consumer.acknowledge(msg);
  } catch (Exception e) {
      consumer.negativeAcknowledge(msg);
  }
};

Consumer consumer = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .messageListener(myMessageListener)
        .subscribe();

```

### Configure consumer

If you instantiate a `Consumer` object by specifying only a topic and subscription name as in the example above, the consumer uses the default configuration.

When you create a consumer, you can use the `loadConf` configuration. The following parameters are available in `loadConf`.

Type | Name| Description | Default
|---|---|---|---
Set<String>| `topicNames`| Topic name| Sets.newTreeSet()
Pattern| `topicsPattern`| Topic pattern |None
String| `subscriptionName`| Subscription name| None
SubscriptionType| `subscriptionType`| Subscription type.<br><br>Four subscription types are available:<br>Exclusive<br>Failover<br>Shared<br>Key_Shared|SubscriptionType.Exclusive
int | `receiverQueueSize` | Size of a consumer's receiver queue.<br><br>For example, the number of messages accumulated by a consumer before an application calls `Receive`.<br><br>A value higher than the default value increases consumer throughput, though at the expense of more memory utilization.| 1000
long|`acknowledgementsGroupTimeMicros`|Group a consumer acknowledgment for a specified time.<br><br>By default, a consumer uses 100ms grouping time to send out acknowledgments to a broker.<br><br>Setting a group time of 0 sends out acknowledgments immediately.<br><br>A longer ack group time is more efficient at the expense of a slight increase in message re-deliveries after a failure.|TimeUnit.MILLISECONDS.toMicros(100)
long|`negativeAckRedeliveryDelayMicros`|Delay to wait before redelivering messages that failed to be processed.<br><br>When an application uses {@link Consumer#negativeAcknowledge(Message)}, failed messages are redelivered after a fixed timeout. |TimeUnit.MINUTES.toMicros(1)
int |`maxTotalReceiverQueueSizeAcrossPartitions`|The max total receiver queue size across partitions.<br><br>This setting reduces the receiver queue size for individual partitions if the total receiver queue size exceeds this value.|50000
String|`consumerName`|Consumer name|null
long|`ackTimeoutMillis`|Timeout of unacked messages|0
long|`tickDurationMillis`|Granularity of the ack-timeout redelivery.<br><br>Using a higher `tickDurationMillis` reduces the memory overhead to track messages when setting ack-timeout to a bigger value (for example, 1 hour).|1000
int|`priorityLevel`|Priority level for a consumer to which a broker gives more priority while dispatching messages in the Shared subscription type.<br><br>The broker follows descending priorities. For example, 0=max-priority, 1, 2,...<br><br>In the Shared subscription type, the broker **first dispatches messages to the max priority level consumers if they have permits**. Otherwise, the broker considers consumers of the next priority level.<br><br>**Example 1**<br>If a subscription has consumerA with `priorityLevel` 0 and consumerB with `priorityLevel` 1, then the broker **only dispatches messages to consumerA until it runs out of permits** and then starts dispatching messages to consumerB.<br><br>**Example 2**<br>Consumer Priority, Level, Permits<br>C1, 0, 2<br>C2, 0, 1<br>C3, 0, 1<br>C4, 1, 2<br>C5, 1, 1<br><br>The order in which a broker dispatches messages to consumers is: C1, C2, C3, C1, C4, C5, C4.|0
ConsumerCryptoFailureAction|`cryptoFailureAction`|The action the consumer takes when it receives a message that cannot be decrypted.<br><br>**FAIL**: this is the default option to fail messages until crypto succeeds.<br><br>**DISCARD**: silently acknowledge but do not deliver the message to the application.<br><br>**CONSUME**: deliver encrypted messages to the application. It is the application's responsibility to decrypt the message.<br><br>Decompression of the message fails.<br><br>If messages contain batch messages, the client is not able to retrieve individual messages in the batch.<br><br>The delivered encrypted message contains an {@link EncryptionContext} which contains encryption and compression information, using which the application can decrypt the consumed message payload.|ConsumerCryptoFailureAction.FAIL
SortedMap<String, String>|`properties`|A name or value property of this consumer.<br><br>`properties` is application-defined metadata attached to a consumer.<br><br>When getting topic stats, this metadata is associated with the consumer stats for easier identification.|new TreeMap()
boolean|`readCompacted`|If enabling `readCompacted`, a consumer reads messages from a compacted topic rather than reading a full message backlog of a topic.<br><br>A consumer only sees the latest value for each key in the compacted topic, up until reaching the point in the topic message when compacting backlog. Beyond that point, messages are sent as normal.<br><br>`readCompacted` can only be enabled on subscriptions to persistent topics, which have a single active consumer (like failover or exclusive subscriptions).<br><br>Attempting to enable it on subscriptions to non-persistent topics or on shared subscriptions leads to the subscription call throwing a `PulsarClientException`.|false
SubscriptionInitialPosition|`subscriptionInitialPosition`|Initial position at which to set cursor when subscribing to a topic for the first time.|SubscriptionInitialPosition.Latest
int|`patternAutoDiscoveryPeriod`|Topic auto discovery period when using a pattern for topic's consumer.<br><br>The default and minimum value is 1 minute.|1
RegexSubscriptionMode|`regexSubscriptionMode`|When subscribing to a topic using a regular expression, you can pick a certain type of topics.<br><br>**PersistentOnly**: only subscribe to persistent topics.<br><br>**NonPersistentOnly**: only subscribe to non-persistent topics.<br><br>**AllTopics**: subscribe to both persistent and non-persistent topics.|RegexSubscriptionMode.PersistentOnly
DeadLetterPolicy|`deadLetterPolicy`|Dead letter policy for consumers.<br><br>By default, some messages are probably redelivered many times, possibly indefinitely.<br><br>By using the dead letter mechanism, messages have a max redelivery count. **When exceeding the maximum number of redeliveries, messages are sent to the Dead Letter Topic and acknowledged automatically**.<br><br>You can enable the dead letter mechanism by setting `deadLetterPolicy`.<br><br>**Example**<br>`client.newConsumer().deadLetterPolicy(DeadLetterPolicy.builder().maxRedeliverCount(10).build()).subscribe();`<br><br>The default dead letter topic name is `{TopicName}-{Subscription}-DLQ`.<br><br>To set a custom dead letter topic name:<br>`client.newConsumer().deadLetterPolicy(DeadLetterPolicy.builder().maxRedeliverCount(10).deadLetterTopic("your-topic-name").build()).subscribe();`<br><br>When specifying the dead letter policy while not specifying `ackTimeoutMillis`, the ack timeout is set to 30000 milliseconds.|None
boolean|`autoUpdatePartitions`|If `autoUpdatePartitions` is enabled, a consumer subscribes to partition increases automatically.<br><br>**Note**: this is only for partitioned consumers.|true
boolean|`replicateSubscriptionState`|If `replicateSubscriptionState` is enabled, a subscription state is replicated to geo-replicated clusters.|false

You can configure parameters if you do not want to use the default configuration. For a full list, see the Javadoc for the {@inject: javadoc:ConsumerBuilder:/client/org/apache/pulsar/client/api/ConsumerBuilder} class.

The following is an example.

```java

Consumer consumer = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .ackTimeout(10, TimeUnit.SECONDS)
        .subscriptionType(SubscriptionType.Exclusive)
        .subscribe();

```

### Async receive

The `receive` method receives messages synchronously (the consumer process is blocked until a message is available). You can also use [async receive](concepts-messaging.md#receive-modes), which returns a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture) object immediately once a new message is available.

The following is an example.

```java

CompletableFuture<Message> asyncMessage = consumer.receiveAsync();

```

Async receive operations return a {@inject: javadoc:Message:/client/org/apache/pulsar/client/api/Message} wrapped inside of a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture).

### Batch receive

Use `batchReceive` to receive multiple messages for each call.

The following is an example.

```java

Messages messages = consumer.batchReceive();
for (Object message : messages) {
  // do something
}
consumer.acknowledge(messages);

```

:::note

Batch receive policy limits the number and bytes of messages in a single batch. You can specify a timeout to wait for enough messages.
The batch receive is completed if any of the following conditions is met: enough number of messages, bytes of messages, wait timeout.

```java

Consumer consumer = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .batchReceivePolicy(BatchReceivePolicy.builder()
                .maxNumMessages(100)
                .maxNumBytes(1024 * 1024)
                .timeout(200, TimeUnit.MILLISECONDS)
                .build())
        .subscribe();

```

The default batch receive policy is:

```java

BatchReceivePolicy.builder()
        .maxNumMessages(-1)
        .maxNumBytes(10 * 1024 * 1024)
        .timeout(100, TimeUnit.MILLISECONDS)
        .build();

```

:::
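There is also an asynchronous variant, `batchReceiveAsync`; a sketch of how it might be used (acknowledgement handling is simplified):

```java

consumer.batchReceiveAsync().thenAccept(messages -> {
    messages.forEach(msg -> System.out.println(new String(msg.getData())));
    try {
        // Acknowledge the whole batch at once.
        consumer.acknowledge(messages);
    } catch (PulsarClientException e) {
        // Redeliver the whole batch later.
        consumer.negativeAcknowledge(messages);
    }
});

```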
### Multi-topic subscriptions

In addition to subscribing a consumer to a single Pulsar topic, you can also subscribe to multiple topics simultaneously using [multi-topic subscriptions](concepts-messaging.md#multi-topic-subscriptions). To use multi-topic subscriptions, you can supply either a regular expression (regex) or a `List` of topics. If you select topics via regex, all topics must be within the same Pulsar namespace.

The following are some examples.

```java

import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.PulsarClient;

import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

ConsumerBuilder consumerBuilder = pulsarClient.newConsumer()
        .subscriptionName(subscription);

// Subscribe to all topics in a namespace
Pattern allTopicsInNamespace = Pattern.compile("public/default/.*");
Consumer allTopicsConsumer = consumerBuilder
        .topicsPattern(allTopicsInNamespace)
        .subscribe();

// Subscribe to a subset of topics in a namespace, based on regex
Pattern someTopicsInNamespace = Pattern.compile("public/default/foo.*");
Consumer someTopicsConsumer = consumerBuilder
        .topicsPattern(someTopicsInNamespace)
        .subscribe();

```

In the above example, the consumer subscribes to the `persistent` topics that can match the topic name pattern. If you want the consumer to subscribe to all `persistent` and `non-persistent` topics that can match the topic name pattern, set `subscriptionTopicsMode` to `RegexSubscriptionMode.AllTopics`.

```java

Pattern pattern = Pattern.compile("public/default/.*");
pulsarClient.newConsumer()
        .subscriptionName("my-sub")
        .topicsPattern(pattern)
        .subscriptionTopicsMode(RegexSubscriptionMode.AllTopics)
        .subscribe();

```

:::note

By default, the `subscriptionTopicsMode` of the consumer is `PersistentOnly`. Available options of `subscriptionTopicsMode` are `PersistentOnly`, `NonPersistentOnly`, and `AllTopics`.

:::

You can also subscribe to an explicit list of topics (across namespaces if you wish):

```java

List<String> topics = Arrays.asList(
        "topic-1",
        "topic-2",
        "topic-3"
);

Consumer multiTopicConsumer = consumerBuilder
        .topics(topics)
        .subscribe();

// Alternatively:
Consumer multiTopicConsumer = consumerBuilder
        .topic(
            "topic-1",
            "topic-2",
            "topic-3"
        )
        .subscribe();

```

You can also subscribe to multiple topics asynchronously using the `subscribeAsync` method rather than the synchronous `subscribe` method. The following is an example.

```java

Pattern allTopicsInNamespace = Pattern.compile("persistent://public/default.*");
consumerBuilder
        .topics(topics)
        .subscribeAsync()
        .thenAccept(this::receiveMessageFromConsumer);

private void receiveMessageFromConsumer(Object consumer) {
    ((Consumer)consumer).receiveAsync().thenAccept(message -> {
        // Do something with the received message
        receiveMessageFromConsumer(consumer);
    });
}

```
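A typed variant of the callback above avoids the `Object` cast; the following sketch assumes a byte-array schema.

```java

private void receiveMessageFromConsumer(Consumer<byte[]> consumer) {
    consumer.receiveAsync().thenAccept(message -> {
        // Do something with the received message, then wait for the next one
        receiveMessageFromConsumer(consumer);
    });
}

```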
```java

Producer<String> producer = client.newProducer(Schema.STRING)
        .topic("my-topic")
        .enableBatching(false)
        .create();
// 3 messages with "key-1", 3 messages with "key-2", 2 messages with "key-3" and 2 messages with "key-4"
producer.newMessage().key("key-1").value("message-1-1").send();
producer.newMessage().key("key-1").value("message-1-2").send();
producer.newMessage().key("key-1").value("message-1-3").send();
producer.newMessage().key("key-2").value("message-2-1").send();
producer.newMessage().key("key-2").value("message-2-2").send();
producer.newMessage().key("key-2").value("message-2-3").send();
producer.newMessage().key("key-3").value("message-3-1").send();
producer.newMessage().key("key-3").value("message-3-2").send();
producer.newMessage().key("key-4").value("message-4-1").send();
producer.newMessage().key("key-4").value("message-4-2").send();

```

#### Exclusive

Create a new consumer and subscribe with the `Exclusive` subscription type.

```java

Consumer consumer = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .subscriptionType(SubscriptionType.Exclusive)
        .subscribe();

```

Only the first consumer is allowed to attach to the subscription; other consumers receive an error. The first consumer receives all 10 messages, and the consuming order is the same as the producing order.

:::note

If the topic is a partitioned topic, the first consumer subscribes to all of its partitions; other consumers are not assigned any partitions and receive an error.

:::

#### Failover

Create new consumers and subscribe with the `Failover` subscription type.

```java

Consumer consumer1 = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .subscriptionType(SubscriptionType.Failover)
        .subscribe();
Consumer consumer2 = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .subscriptionType(SubscriptionType.Failover)
        .subscribe();
// consumer1 is the active consumer, consumer2 is the standby consumer.
// consumer1 receives 5 messages and then crashes, consumer2 takes over as the active consumer.

```

Multiple consumers can attach to the same subscription, yet only the first consumer is active; the others are standby. When the active consumer is disconnected, messages are dispatched to one of the standby consumers, and that standby consumer becomes the active consumer.

If the first active consumer is disconnected after receiving 5 messages, the standby consumer takes over. consumer1 will have received:

```

("key-1", "message-1-1")
("key-1", "message-1-2")
("key-1", "message-1-3")
("key-2", "message-2-1")
("key-2", "message-2-2")

```

consumer2 will receive:

```

("key-2", "message-2-3")
("key-3", "message-3-1")
("key-3", "message-3-2")
("key-4", "message-4-1")
("key-4", "message-4-2")

```

:::note

If a topic is a partitioned topic, each partition has only one active consumer: messages of one partition are distributed to only one consumer, and messages of multiple partitions are distributed to multiple consumers.

:::

#### Shared

Create new consumers and subscribe with the `Shared` subscription type.
```java

Consumer consumer1 = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .subscriptionType(SubscriptionType.Shared)
        .subscribe();

Consumer consumer2 = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .subscriptionType(SubscriptionType.Shared)
        .subscribe();
// Both consumer1 and consumer2 are active consumers.

```

In the shared subscription type, multiple consumers can attach to the same subscription and messages are delivered in a round-robin distribution across consumers.

If the broker dispatches only one message at a time, consumer1 receives the following messages.

```

("key-1", "message-1-1")
("key-1", "message-1-3")
("key-2", "message-2-2")
("key-3", "message-3-1")
("key-4", "message-4-1")

```

consumer2 receives the following messages.

```

("key-1", "message-1-2")
("key-2", "message-2-1")
("key-2", "message-2-3")
("key-3", "message-3-2")
("key-4", "message-4-2")

```

The `Shared` subscription type is different from the `Exclusive` and `Failover` subscription types: it offers better flexibility, but cannot provide an ordering guarantee.

#### Key_shared

The `Key_Shared` subscription type was introduced in the 2.4.0 release. Create new consumers and subscribe with the `Key_Shared` subscription type.

```java

Consumer consumer1 = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .subscriptionType(SubscriptionType.Key_Shared)
        .subscribe();

Consumer consumer2 = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .subscriptionType(SubscriptionType.Key_Shared)
        .subscribe();
// Both consumer1 and consumer2 are active consumers.

```

`Key_Shared` is like the `Shared` subscription type in that all consumers can attach to the same subscription. The difference is that with `Key_Shared`, messages with the same key are delivered to only one consumer, in order. By default, you do not know in advance which keys will be assigned to which consumer, but a given key is assigned to only one consumer at a time. One possible distribution of messages between the consumers is the following.

consumer1 receives the following messages.

```

("key-1", "message-1-1")
("key-1", "message-1-2")
("key-1", "message-1-3")
("key-3", "message-3-1")
("key-3", "message-3-2")

```

consumer2 receives the following messages.

```

("key-2", "message-2-1")
("key-2", "message-2-2")
("key-2", "message-2-3")
("key-4", "message-4-1")
("key-4", "message-4-2")

```

If batching is enabled at the producer side, messages with different keys are added to the same batch by default. The broker dispatches the batch as a whole to a consumer, so the default batch mechanism may break the guaranteed message distribution semantics of the `Key_Shared` subscription. The producer needs to use the `KeyBasedBatcher`.

```java

Producer producer = client.newProducer()
        .topic("my-topic")
        .batcherBuilder(BatcherBuilder.KEY_BASED)
        .create();

```

Or the producer can disable batching.

```java

Producer producer = client.newProducer()
        .topic("my-topic")
        .enableBatching(false)
        .create();

```

:::note

If the message key is not specified, messages without a key are dispatched to one consumer in order by default.

:::
## Reader

With the [reader interface](concepts-clients.md#reader-interface), Pulsar clients can "manually position" themselves within a topic and read all messages from a specified message onward. The Pulsar API for Java enables you to create {@inject: javadoc:Reader:/client/org/apache/pulsar/client/api/Reader} objects by specifying a topic and a {@inject: javadoc:MessageId:/client/org/apache/pulsar/client/api/MessageId}.

The following is an example.

```java

byte[] msgIdBytes = // Some message ID byte array
MessageId id = MessageId.fromByteArray(msgIdBytes);
Reader reader = pulsarClient.newReader()
        .topic(topic)
        .startMessageId(id)
        .create();

while (true) {
    Message message = reader.readNext();
    // Process message
}

```

In the example above, a `Reader` object is instantiated for a specific topic and message (by ID); the reader iterates over each message in the topic after the message identified by `msgIdBytes` (how that value is obtained depends on the application).

The code sample above shows pointing the `Reader` object to a specific message (by ID), but you can also use `MessageId.earliest` to point to the earliest available message on the topic or `MessageId.latest` to point to the most recent available message.

### Configure reader

When you create a reader, you can use the `loadConf` configuration. The following parameters are available in `loadConf`.
| Type | Name | Description | Default |
|---|---|---|---|
String|`topicName`|Topic name.|None
int|`receiverQueueSize`|Size of a consumer's receiver queue. For example, the number of messages that can be accumulated by a consumer before an application calls `Receive`. A value higher than the default value increases consumer throughput, though at the expense of more memory utilization.|1000
ReaderListener<T>|`readerListener`|A listener that is called for a message received.|None
String|`readerName`|Reader name.|null
String|`subscriptionName`|Subscription name.|When there is a single topic, the default subscription name is `"reader-" + 10-digit UUID`. When there are multiple topics, the default subscription name is `"multiTopicsReader-" + 10-digit UUID`.
String|`subscriptionRolePrefix`|Prefix of subscription role.|null
CryptoKeyReader|`cryptoKeyReader`|Interface that abstracts the access to a key store.|null
ConsumerCryptoFailureAction|`cryptoFailureAction`|The action a consumer should take when it receives a message that cannot be decrypted. **FAIL**: this is the default option to fail messages until crypto succeeds. **DISCARD**: silently acknowledge and do not deliver the message to an application. **CONSUME**: deliver encrypted messages to applications; it is the application's responsibility to decrypt the message. In this case message decompression fails, and if the message contains batch messages, the client is not able to retrieve individual messages in the batch. A delivered encrypted message contains {@link EncryptionContext} with the encryption and compression information that the application can use to decrypt the consumed message payload.|ConsumerCryptoFailureAction.FAIL
boolean|`readCompacted`|If enabling `readCompacted`, a consumer reads messages from a compacted topic rather than the full message backlog of a topic. The consumer only sees the latest value for each key in the compacted topic, up until reaching the point in the topic message backlog when compacting; beyond that point, messages are sent as normal. `readCompacted` can only be enabled on subscriptions to persistent topics that have a single active consumer (for example, failover or exclusive subscriptions). Attempting to enable it on subscriptions to non-persistent topics or on shared subscriptions leads to a subscription call throwing a `PulsarClientException`.|false
boolean|`resetIncludeHead`|If set to true, the first message to be returned is the one specified by `messageId`. If set to false, the first message to be returned is the one next to the message specified by `messageId`.|false

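For example, here is a minimal sketch of setting some of these parameters through `loadConf` when building a reader (the parameter values are illustrative, and `client` is assumed to be an existing `PulsarClient`):

```java

Map<String, Object> readerConf = new HashMap<>();
readerConf.put("receiverQueueSize", 2000);
readerConf.put("readerName", "my-reader");

Reader<byte[]> reader = client.newReader()
        .topic("my-topic")
        .startMessageId(MessageId.earliest)
        .loadConf(readerConf)
        .create();

```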
### Sticky key range reader

With a sticky key range reader, the broker only dispatches messages whose message-key hash falls within the specified key hash ranges. Multiple key hash ranges can be specified on a reader.

The following is an example to create a sticky key range reader.

```java

pulsarClient.newReader()
        .topic(topic)
        .startMessageId(MessageId.earliest)
        .keyHashRange(Range.of(0, 10000), Range.of(20001, 30000))
        .create();

```

The total hash range size is 65536, so the max end of a range should be less than or equal to 65535.

## Schema

In Pulsar, all message data consists of byte arrays "under the hood." [Message schemas](schema-get-started.md) enable you to use other types of data when constructing and handling messages (from simple types like strings to more complex, application-specific types). If you construct, say, a [producer](#producers) without specifying a schema, then the producer can only produce messages of type `byte[]`. The following is an example.

```java

Producer<byte[]> producer = client.newProducer()
        .topic(topic)
        .create();

```

The producer above is equivalent to a `Producer<byte[]>` (in fact, you should *always* explicitly specify the type). If you'd like to use a producer for a different type of data, you'll need to specify a **schema** that informs Pulsar which data type will be transmitted over the [topic](reference-terminology.md#topic).

### AvroBaseStructSchema example

Let's say that you have a `SensorReading` class that you'd like to transmit over a Pulsar topic:

```java

public class SensorReading {
    public float temperature;

    public SensorReading(float temperature) {
        this.temperature = temperature;
    }

    // A no-arg constructor is required
    public SensorReading() {
    }

    public float getTemperature() {
        return temperature;
    }

    public void setTemperature(float temperature) {
        this.temperature = temperature;
    }
}

```

You could then create a `Producer<SensorReading>` (or `Consumer<SensorReading>`) like this:

```java

Producer<SensorReading> producer = client.newProducer(JSONSchema.of(SensorReading.class))
        .topic("sensor-readings")
        .create();

```

The following schema formats are currently available for Java:

* No schema or the byte array schema (which can be applied using `Schema.BYTES`):

  ```java

  Producer<byte[]> bytesProducer = client.newProducer(Schema.BYTES)
        .topic("some-raw-bytes-topic")
        .create();

  ```

  Or, equivalently:

  ```java

  Producer<byte[]> bytesProducer = client.newProducer()
        .topic("some-raw-bytes-topic")
        .create();

  ```

* `String` for normal UTF-8-encoded string data. Apply the schema using `Schema.STRING`:

  ```java

  Producer<String> stringProducer = client.newProducer(Schema.STRING)
        .topic("some-string-topic")
        .create();

  ```

* Create JSON schemas for POJOs using `Schema.JSON`. The following is an example.

  ```java

  Producer<MyPojo> pojoProducer = client.newProducer(Schema.JSON(MyPojo.class))
        .topic("some-pojo-topic")
        .create();

  ```

* Generate Protobuf schemas using `Schema.PROTOBUF`. The following example shows how to create the Protobuf schema and use it to instantiate a new producer:

  ```java

  Producer<MyProtobuf> protobufProducer = client.newProducer(Schema.PROTOBUF(MyProtobuf.class))
        .topic("some-protobuf-topic")
        .create();

  ```

* Define Avro schemas with `Schema.AVRO`.
  The following code snippet demonstrates how to create and use an Avro schema.

  ```java

  Producer<MyAvro> avroProducer = client.newProducer(Schema.AVRO(MyAvro.class))
        .topic("some-avro-topic")
        .create();

  ```

### ProtobufNativeSchema example

For an example of `ProtobufNativeSchema`, see [`SchemaDefinition` in `Complex type`](schema-understand.md#complex-type).

## Authentication

Pulsar currently supports three authentication schemes: [TLS](security-tls-authentication.md), [Athenz](security-athenz.md), and [OAuth2](security-oauth2.md). You can use the Pulsar Java client with all of them.

### TLS Authentication

To use [TLS](security-tls-authentication.md), note that the `enableTls` method is deprecated; instead, enable TLS by using a `pulsar+ssl://` URL in `serviceUrl`. You then point your Pulsar client to a trusted TLS certificate and provide paths to the client certificate and key files.

The following is an example.

```java

Map<String, String> authParams = new HashMap<>();
authParams.put("tlsCertFile", "/path/to/client-cert.pem");
authParams.put("tlsKeyFile", "/path/to/client-key.pem");

Authentication tlsAuth = AuthenticationFactory
        .create(AuthenticationTls.class.getName(), authParams);

PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar+ssl://my-broker.com:6651")
        .tlsTrustCertsFilePath("/path/to/cacert.pem")
        .authentication(tlsAuth)
        .build();

```

### Athenz

To use [Athenz](security-athenz.md) as an authentication provider, you need to [use TLS](#tls-authentication) and provide values for four parameters in a hash:

* `tenantDomain`
* `tenantService`
* `providerDomain`
* `privateKey`

You can also set an optional `keyId`. The following is an example.

```java

Map<String, String> authParams = new HashMap<>();
authParams.put("tenantDomain", "shopping"); // Tenant domain name
authParams.put("tenantService", "some_app"); // Tenant service name
authParams.put("providerDomain", "pulsar"); // Provider domain name
authParams.put("privateKey", "file:///path/to/private.pem"); // Tenant private key path
authParams.put("keyId", "v1"); // Key id for the tenant private key (optional, default: "0")

Authentication athenzAuth = AuthenticationFactory
        .create(AuthenticationAthenz.class.getName(), authParams);

PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar+ssl://my-broker.com:6651")
        .tlsTrustCertsFilePath("/path/to/cacert.pem")
        .authentication(athenzAuth)
        .build();

```

> #### Supported pattern formats
> The `privateKey` parameter supports the following three pattern formats:
> * `file:///path/to/file`
> * `file:/path/to/file`
> * `data:application/x-pem-file;base64,`

### Oauth2

The following example shows how to use [OAuth2](security-oauth2.md) as an authentication provider for the Pulsar Java client.

You can use the factory method to configure authentication for the Pulsar Java client.

```java

PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar://broker.example.com:6650/")
        .authentication(
            AuthenticationFactoryOAuth2.clientCredentials(this.issuerUrl, this.credentialsUrl, this.audience))
        .build();

```

In addition, you can also use encoded parameters to configure authentication for the Pulsar Java client.
```java

Authentication auth = AuthenticationFactory
    .create(AuthenticationOAuth2.class.getName(),
        "{\"type\":\"client_credentials\",\"privateKey\":\"...\",\"issuerUrl\":\"...\",\"audience\":\"...\"}");
PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar://broker.example.com:6650/")
    .authentication(auth)
    .build();

```

diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/client-libraries-node.md b/site2/website/versioned_docs/version-2.8.1-deprecated/client-libraries-node.md
deleted file mode 100644
index e24032946bdcde..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/client-libraries-node.md
+++ /dev/null
@@ -1,643 +0,0 @@
---
id: client-libraries-node
title: The Pulsar Node.js client
sidebar_label: "Node.js"
original_id: client-libraries-node
---

The Pulsar Node.js client can be used to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Node.js.

All the methods in [producers](#producers), [consumers](#consumers), and [readers](#readers) of a Node.js client are thread-safe.

For version 1.3.0 or later, [type definitions](https://github.com/apache/pulsar-client-node/blob/master/index.d.ts) used in TypeScript are available.

## Installation

You can install the [`pulsar-client`](https://www.npmjs.com/package/pulsar-client) library via [npm](https://www.npmjs.com/).

### Requirements

The Pulsar Node.js client library is based on the C++ client library. Follow [these instructions](client-libraries-cpp.md#compilation) to install the Pulsar C++ client library first.

### Compatibility

Compatibility between each version of the Node.js client and the C++ client is as follows:

| Node.js client | C++ client |
| :------------- | :------------- |
| 1.0.0 | 2.3.0 or later |
| 1.1.0 | 2.4.0 or later |
| 1.2.0 | 2.5.0 or later |

If an incompatible version of the C++ client is installed, you may fail to build or run this library.

### Installation using npm

Install the `pulsar-client` library via [npm](https://www.npmjs.com/):

```shell

$ npm install pulsar-client

```

:::note

This library works only in Node.js 10.x or later because it uses the [`node-addon-api`](https://github.com/nodejs/node-addon-api) module to wrap the C++ library.

:::

## Connection URLs

To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL.

Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme, and have a default port of 6650. Here is an example for `localhost`:

```http

pulsar://localhost:6650

```

A URL for a production Pulsar cluster may look something like this:

```http

pulsar://pulsar.us-west.example.com:6650

```

If you are using [TLS encryption](security-tls-transport.md) or [TLS Authentication](security-tls-authentication.md), the URL looks like this:

```http

pulsar+ssl://pulsar.us-west.example.com:6651

```

## Create a client

In order to interact with Pulsar, you first need a client object. You can create a client instance using the `new` operator and the `Client` constructor, passing in a client options object (more on configuration [below](#client-configuration)).
Here is an example:

```JavaScript

const Pulsar = require('pulsar-client');

(async () => {
  const client = new Pulsar.Client({
    serviceUrl: 'pulsar://localhost:6650',
  });

  await client.close();
})();

```

### Client configuration

The following configurable parameters are available for Pulsar clients:

| Parameter | Description | Default |
| :-------- | :---------- | :------ |
| `serviceUrl` | The connection URL for the Pulsar cluster. See [above](#connection-urls) for more info. | |
| `authentication` | Configure the authentication provider (default: no authentication). See [TLS Authentication](security-tls-authentication.md) for more info. | |
| `operationTimeoutSeconds` | The timeout for Node.js client operations (creating producers, subscribing to and unsubscribing from [topics](reference-terminology.md#topic)). Retries occur until this threshold is reached, at which point the operation fails. | 30 |
| `ioThreads` | The number of threads to use for handling connections to Pulsar [brokers](reference-terminology.md#broker). | 1 |
| `messageListenerThreads` | The number of threads used by message listeners ([consumers](#consumers) and [readers](#readers)). | 1 |
| `concurrentLookupRequest` | The number of concurrent lookup requests that can be sent on each broker connection. Setting a maximum helps to keep from overloading brokers. You should set values over the default of 50000 only if the client needs to produce and/or subscribe to thousands of Pulsar topics. | 50000 |
| `tlsTrustCertsFilePath` | The file path for the trusted TLS certificate. | |
| `tlsValidateHostname` | Whether to enable TLS hostname verification. | `false` |
| `tlsAllowInsecureConnection` | Whether the Pulsar client accepts an untrusted TLS certificate from the broker. | `false` |
| `statsIntervalInSeconds` | Interval between stats updates. Stats are activated with a positive `statsInterval` value, which should be at least 1 second. | 600 |
| `log` | A function that is used for logging. | `console.log` |

## Producers

Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Node.js producers using a producer configuration object.

Here is an example:

```JavaScript

const producer = await client.createProducer({
  topic: 'my-topic',
});

await producer.send({
  data: Buffer.from("Hello, Pulsar"),
});

await producer.close();

```

> #### Promise operation
> When you create a new Pulsar producer, the operation returns a `Promise` object that yields the producer instance or an error through the executor function.
> This example uses the `await` operator instead of an executor function.

### Producer operations

Pulsar Node.js producers have the following methods available:

| Method | Description | Return type |
| :----- | :---------- | :---------- |
| `send(Object)` | Publishes a [message](#messages) to the producer's topic. When the message is successfully acknowledged by the Pulsar broker, or an error is thrown, the Promise object whose result is the message ID runs the executor function. | `Promise` |
| `flush()` | Sends messages from the send queue to the Pulsar broker. When the messages are successfully acknowledged by the Pulsar broker, or an error is thrown, the Promise object runs the executor function. | `Promise` |
| `close()` | Closes the producer and releases all resources allocated to it. Once `close()` is called, no more messages are accepted from the publisher. This method returns a Promise object. It runs the executor function when all pending publish requests are persisted by Pulsar. If an error is thrown, no pending writes are retried. | `Promise` |
| `getProducerName()` | Getter method of the producer name. | `string` |
| `getTopic()` | Getter method of the name of the topic. | `string` |

### Producer configuration

| Parameter | Description | Default |
| :-------- | :---------- | :------ |
| `topic` | The Pulsar [topic](reference-terminology.md#topic) to which the producer publishes messages. The topic format is `<topic-name>` or `<tenant>/<namespace>/<topic-name>`. For example, `sample/ns1/my-topic`. | |
| `producerName` | A name for the producer. If you do not explicitly assign a name, Pulsar automatically generates a globally unique name. If you choose to explicitly assign a name, it needs to be unique across *all* Pulsar clusters, otherwise the creation operation throws an error. | |
| `sendTimeoutMs` | When publishing a message to a topic, the producer waits for an acknowledgment from the responsible Pulsar [broker](reference-terminology.md#broker). If a message is not acknowledged within the threshold set by this parameter, an error is thrown. If you set `sendTimeoutMs` to -1, the timeout is set to infinity (and thus removed). Removing the send timeout is recommended when using Pulsar's [message de-duplication](cookbooks-deduplication.md) feature. | 30000 |
| `initialSequenceId` | The initial sequence ID of the message. When the producer sends a message, the sequence ID is attached to it and incremented for each message sent. | |
| `maxPendingMessages` | The maximum size of the queue holding pending messages (i.e. messages waiting to receive an acknowledgment from the [broker](reference-terminology.md#broker)). By default, when the queue is full, all calls to the `send` method fail *unless* `blockIfQueueFull` is set to `true`. | 1000 |
| `maxPendingMessagesAcrossPartitions` | The maximum size of the sum of the partitions' pending queues. | 50000 |
| `blockIfQueueFull` | If set to `true`, the producer's `send` method waits when the outgoing message queue is full rather than failing and throwing an error (the size of that queue is dictated by the `maxPendingMessages` parameter); if set to `false` (the default), `send` operations fail and throw an error when the queue is full. | `false` |
| `messageRoutingMode` | The message routing logic (for producers on [partitioned topics](concepts-messaging.md#partitioned-topics)). This logic is applied only when no key is set on messages. The available options are: round robin (`RoundRobinDistribution`), or publishing all messages to a single partition (`UseSinglePartition`, the default). | `UseSinglePartition` |
| `hashingScheme` | The hashing function that determines the partition on which a particular message is published (partitioned topics only). The available options are: `JavaStringHash` (the equivalent of `String.hashCode()` in Java), `Murmur3_32Hash` (applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function), or `BoostHash` (applies the hashing function from C++'s [Boost](https://www.boost.org/doc/libs/1_62_0/doc/html/hash.html) library). | `BoostHash` |
| `compressionType` | The message data compression type used by the producer. The available options are [`LZ4`](https://github.com/lz4/lz4), [`Zlib`](https://zlib.net/), [`ZSTD`](https://github.com/facebook/zstd/), and [`SNAPPY`](https://github.com/google/snappy/). | Compression None |
| `batchingEnabled` | If set to `true`, the producer sends messages in batches. | `true` |
| `batchingMaxPublishDelayMs` | The maximum delay before sending a batch of messages. | 10 |
| `batchingMaxMessages` | The maximum number of messages in a batch. | 1000 |
| `properties` | The metadata of the producer. | |
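As a rough sketch (the values below are illustrative, not recommendations), several of these parameters can be combined in the configuration object passed to `createProducer`:

```JavaScript

const producer = await client.createProducer({
  topic: 'my-topic',
  sendTimeoutMs: 30000,
  blockIfQueueFull: true,
  batchingEnabled: true,
  batchingMaxPublishDelayMs: 10,
  compressionType: 'Zlib',
});

```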
### Producer example

This example creates a Node.js producer for the `my-topic` topic and sends 10 messages to that topic:

```JavaScript

const Pulsar = require('pulsar-client');

(async () => {
  // Create a client
  const client = new Pulsar.Client({
    serviceUrl: 'pulsar://localhost:6650',
  });

  // Create a producer
  const producer = await client.createProducer({
    topic: 'my-topic',
  });

  // Send messages
  for (let i = 0; i < 10; i += 1) {
    const msg = `my-message-${i}`;
    producer.send({
      data: Buffer.from(msg),
    });
    console.log(`Sent message: ${msg}`);
  }
  await producer.flush();

  await producer.close();
  await client.close();
})();

```

## Consumers

Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on that topic/those topics. You can [configure](#consumer-configuration) Node.js consumers using a consumer configuration object.

Here is an example:

```JavaScript

const consumer = await client.subscribe({
  topic: 'my-topic',
  subscription: 'my-subscription',
});

const msg = await consumer.receive();
console.log(msg.getData().toString());
consumer.acknowledge(msg);

await consumer.close();

```

> #### Promise operation
> When you create a new Pulsar consumer, the operation returns a `Promise` object that yields the consumer instance or an error through the executor function.
> This example uses the `await` operator instead of an executor function.

### Consumer operations

Pulsar Node.js consumers have the following methods available:

| Method | Description | Return type |
| :----- | :---------- | :---------- |
| `receive()` | Receives a single message from the topic. When the message is available, the Promise object runs the executor function and yields the message object. | `Promise` |
| `receive(Number)` | Receives a single message from the topic with a specific timeout in milliseconds. | `Promise` |
| `acknowledge(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message object. | `void` |
| `acknowledgeId(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID object. | `void` |
| `acknowledgeCumulative(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message. The `acknowledgeCumulative` method returns void and sends the ack to the broker asynchronously. After that, the messages are *not* redelivered to the consumer. Cumulative acking cannot be used with a [shared](concepts-messaging.md#shared) subscription type. | `void` |
| `acknowledgeCumulativeId(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message ID. | `void` |
| `negativeAcknowledge(Message)` | [Negatively acknowledges](reference-terminology.md#negative-acknowledgment-nack) a message to the Pulsar broker by message object. | `void` |
| `negativeAcknowledgeId(MessageId)` | [Negatively acknowledges](reference-terminology.md#negative-acknowledgment-nack) a message to the Pulsar broker by message ID object. | `void` |
| `close()` | Closes the consumer, disabling its ability to receive messages from the broker. | `Promise` |
| `unsubscribe()` | Unsubscribes the subscription. | `Promise` |

### Consumer configuration

| Parameter | Description | Default |
| :-------- | :---------- | :------ |
| `topic` | The Pulsar topic on which the consumer establishes a subscription and listens for messages. | |
| `topics` | The array of topics. | |
| `topicsPattern` | The regular expression for topics. | |
| `subscription` | The subscription name for this consumer. | |
| `subscriptionType` | Available options are `Exclusive`, `Shared`, `Key_Shared`, and `Failover`. | `Exclusive` |
| `subscriptionInitialPosition` | Initial position at which to set the cursor when subscribing to a topic for the first time. | `SubscriptionInitialPosition.Latest` |
| `ackTimeoutMs` | Acknowledge timeout in milliseconds. | 0 |
| `nAckRedeliverTimeoutMs` | Delay to wait before redelivering messages that failed to be processed. | 60000 |
| `receiverQueueSize` | Sets the size of the consumer's receiver queue, i.e. the number of messages that can be accumulated by the consumer before the application calls `receive`. A value higher than the default of 1000 could increase consumer throughput, though at the expense of more memory utilization. | 1000 |
| `receiverQueueSizeAcrossPartitions` | Set the max total receiver queue size across partitions. This setting is used to reduce the receiver queue size for individual partitions if the total exceeds this value. | 50000 |
| `consumerName` | The name of the consumer. Currently (v2.4.1), the [failover](concepts-messaging.md#failover) mode uses the consumer name for ordering. | |
| `properties` | The metadata of the consumer. | |
| `listener` | A listener that is called for a message received. | |
| `readCompacted` | If enabling `readCompacted`, a consumer reads messages from a compacted topic rather than reading a full message backlog of a topic. The consumer only sees the latest value for each key in the compacted topic, up until reaching the point in the topic message backlog when compacting; beyond that point, messages are sent as normal. `readCompacted` can only be enabled on subscriptions to persistent topics that have a single active consumer (such as failover or exclusive subscriptions). Attempting to enable it on subscriptions to non-persistent topics or on shared subscriptions leads to a subscription call throwing a `PulsarClientException`. | false |
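A brief sketch (the values are illustrative) combining several of these parameters in the configuration object passed to `subscribe`:

```JavaScript

const consumer = await client.subscribe({
  topic: 'my-topic',
  subscription: 'my-subscription',
  subscriptionType: 'Shared',
  ackTimeoutMs: 10000,
  nAckRedeliverTimeoutMs: 60000,
  receiverQueueSize: 1000,
});

```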
### Consumer example

This example creates a Node.js consumer with the `my-subscription` subscription on the `my-topic` topic, receives 10 messages, prints the content of each message as it arrives, and acknowledges each message to the Pulsar broker:

```JavaScript

const Pulsar = require('pulsar-client');

(async () => {
  // Create a client
  const client = new Pulsar.Client({
    serviceUrl: 'pulsar://localhost:6650',
  });

  // Create a consumer
  const consumer = await client.subscribe({
    topic: 'my-topic',
    subscription: 'my-subscription',
    subscriptionType: 'Exclusive',
  });

  // Receive messages
  for (let i = 0; i < 10; i += 1) {
    const msg = await consumer.receive();
    console.log(msg.getData().toString());
    consumer.acknowledge(msg);
  }

  await consumer.close();
  await client.close();
})();

```

Alternatively, a consumer can be created with a `listener` to process messages.

```JavaScript

// Create a consumer
const consumer = await client.subscribe({
  topic: 'my-topic',
  subscription: 'my-subscription',
  subscriptionType: 'Exclusive',
  listener: (msg, msgConsumer) => {
    console.log(msg.getData().toString());
    msgConsumer.acknowledge(msg);
  },
});

```

## Readers

Pulsar readers process messages from Pulsar topics. Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recently unacked message). You can [configure](#reader-configuration) Node.js readers using a reader configuration object.

Here is an example:

```JavaScript

const reader = await client.createReader({
  topic: 'my-topic',
  startMessageId: Pulsar.MessageId.earliest(),
});

const msg = await reader.readNext();
console.log(msg.getData().toString());

await reader.close();

```

### Reader operations

Pulsar Node.js readers have the following methods available:

| Method | Description | Return type |
| :----- | :---------- | :---------- |
| `readNext()` | Receives the next message on the topic (analogous to the `receive` method for [consumers](#consumer-operations)). When the message is available, the Promise object runs the executor function and yields the message object. | `Promise` |
| `readNext(Number)` | Receives a single message from the topic with a specific timeout in milliseconds. | `Promise` |
| `hasNext()` | Returns whether the broker has a next message in the target topic. | `Boolean` |
| `close()` | Closes the reader, disabling its ability to receive messages from the broker. | `Promise` |

### Reader configuration

| Parameter | Description | Default |
| :-------- | :---------- | :------ |
| `topic` | The Pulsar [topic](reference-terminology.md#topic) on which the reader establishes a subscription and listens for messages. | |
| `startMessageId` | The initial reader position, i.e. the message at which the reader begins processing messages. The options are `Pulsar.MessageId.earliest` (the earliest available message on the topic), `Pulsar.MessageId.latest` (the latest available message on the topic), or a message ID object for a position that is not earliest or latest. | |
| `receiverQueueSize` | Sets the size of the reader's receiver queue, i.e. the number of messages that can be accumulated by the reader before the application calls `readNext`. A value higher than the default of 1000 could increase reader throughput, though at the expense of more memory utilization. | 1000 |
| `readerName` | The name of the reader. | |
| `subscriptionRolePrefix` | The subscription role prefix. | |
| `readCompacted` | If enabling `readCompacted`, a reader reads messages from a compacted topic rather than reading a full message backlog of a topic. The reader only sees the latest value for each key in the compacted topic, up until reaching the point in the topic message backlog when compacting; beyond that point, messages are sent as normal. `readCompacted` can only be enabled on subscriptions to persistent topics that have a single active consumer (such as failover or exclusive subscriptions). Attempting to enable it on subscriptions to non-persistent topics or on shared subscriptions leads to a subscription call throwing a `PulsarClientException`. | `false` |
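A brief sketch (the values are illustrative) combining several of these parameters in the configuration object passed to `createReader`:

```JavaScript

const reader = await client.createReader({
  topic: 'my-topic',
  startMessageId: Pulsar.MessageId.earliest(),
  receiverQueueSize: 2000,
  readerName: 'my-reader',
});

```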
### Reader example

This example creates a Node.js reader for the `my-topic` topic, reads 10 messages, and prints the content of each message as it arrives:

```JavaScript

const Pulsar = require('pulsar-client');

(async () => {
  // Create a client
  const client = new Pulsar.Client({
    serviceUrl: 'pulsar://localhost:6650',
    operationTimeoutSeconds: 30,
  });

  // Create a reader
  const reader = await client.createReader({
    topic: 'my-topic',
    startMessageId: Pulsar.MessageId.earliest(),
  });

  // read messages
  for (let i = 0; i < 10; i += 1) {
    const msg = await reader.readNext();
    console.log(msg.getData().toString());
  }

  await reader.close();
  await client.close();
})();

```

## Messages

In the Pulsar Node.js client, you construct a producer message object in order to send messages.

Here is an example message:

```JavaScript

const msg = {
  data: Buffer.from('Hello, Pulsar'),
  partitionKey: 'key1',
  properties: {
    'foo': 'bar',
  },
  eventTimestamp: Date.now(),
  replicationClusters: [
    'cluster1',
    'cluster2',
  ],
}

await producer.send(msg);

```

The following keys are available for producer message objects:

| Parameter | Description |
| :-------- | :---------- |
| `data` | The actual data payload of the message. |
| `properties` | An object for any application-specific metadata attached to the message. |
| `eventTimestamp` | The timestamp associated with the message. |
| `sequenceId` | The sequence ID of the message. |
| `partitionKey` | The optional key associated with the message (particularly useful for things like topic compaction). |
| `replicationClusters` | The clusters to which this message is replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default. |
| `deliverAt` | The absolute timestamp at or after which the message is delivered. |
| `deliverAfter` | The relative delay after which the message is delivered. |

### Message object operations

In the Pulsar Node.js client, consumers and readers receive (or read) message objects.

The message object has the following methods available:

| Method | Description | Return type |
| :----- | :---------- | :---------- |
| `getTopicName()` | Getter method of the topic name. | `String` |
| `getProperties()` | Getter method of the properties. | `Array` |
| `getData()` | Getter method of the message data. | `Buffer` |
| `getMessageId()` | Getter method of the [message ID object](#message-id-object-operations). | `Object` |
| `getPublishTimestamp()` | Getter method of the publish timestamp. | `Number` |
| `getEventTimestamp()` | Getter method of the event timestamp. | `Number` |
| `getRedeliveryCount()` | Getter method of the redelivery count. | `Number` |
| `getPartitionKey()` | Getter method of the partition key. | `String` |

### Message ID object operations

In the Pulsar Node.js client, you can get a message ID object from a message object.

The message ID object has the following methods available:

| Method | Description | Return type |
| :----- | :---------- | :---------- |
| `serialize()` | Serialize the message ID into a Buffer for storing. | `Buffer` |
| `toString()` | Get the message ID as a String. | `String` |

The client also has static methods of the message ID object.
You can access it as `Pulsar.MessageId.someStaticMethod` too. - -The following static methods are available for the message id object: - -| Method | Description | Return type | -| :----- | :---------- | :---------- | -| `earliest()` | MessageId representing the earliest, or oldest available message stored in the topic. | `Object` | -| `latest()` | MessageId representing the latest, or last published message in the topic. | `Object` | -| `deserialize(Buffer)` | Deserialize a message id object from a Buffer. | `Object` | - -## End-to-end encryption - -[End-to-end encryption](https://pulsar.apache.org/docs/en/next/cookbooks-encryption/#docsNav) allows applications to encrypt messages at producers and decrypt at consumers. - -### Configuration - -If you want to use the end-to-end encryption feature in the Node.js client, you need to configure `publicKeyPath` and `privateKeyPath` for both producer and consumer. - -``` - -publicKeyPath: "./public.pem" -privateKeyPath: "./private.pem" - -``` - -### Tutorial - -This section provides step-by-step instructions on how to use the end-to-end encryption feature in the Node.js client. - -**Prerequisite** - -- Pulsar C++ client 2.7.1 or later - -**Step** - -1. Create both public and private key pairs. - - **Input** - - ```shell - - openssl genrsa -out private.pem 2048 - openssl rsa -in private.pem -pubout -out public.pem - - ``` - -2. Create a producer to send encrypted messages. - - **Input** - - ```nodejs - - const Pulsar = require('pulsar-client'); - - (async () => { - // Create a client - const client = new Pulsar.Client({ - serviceUrl: 'pulsar://localhost:6650', - operationTimeoutSeconds: 30, - }); - - // Create a producer - const producer = await client.createProducer({ - topic: 'persistent://public/default/my-topic', - sendTimeoutMs: 30000, - batchingEnabled: true, - publicKeyPath: "./public.pem", - privateKeyPath: "./private.pem", - encryptionKey: "encryption-key" - }); - - console.log(producer.ProducerConfig) - // Send messages - for (let i = 0; i < 10; i += 1) { - const msg = `my-message-${i}`; - producer.send({ - data: Buffer.from(msg), - }); - console.log(`Sent message: ${msg}`); - } - await producer.flush(); - - await producer.close(); - await client.close(); - })(); - - ``` - -3. Create a consumer to receive encrypted messages. - - **Input** - - ```nodejs - - const Pulsar = require('pulsar-client'); - - (async () => { - // Create a client - const client = new Pulsar.Client({ - serviceUrl: 'pulsar://172.25.0.3:6650', - operationTimeoutSeconds: 30 - }); - - // Create a consumer - const consumer = await client.subscribe({ - topic: 'persistent://public/default/my-topic', - subscription: 'sub1', - subscriptionType: 'Shared', - ackTimeoutMs: 10000, - publicKeyPath: "./public.pem", - privateKeyPath: "./private.pem" - }); - - console.log(consumer) - // Receive messages - for (let i = 0; i < 10; i += 1) { - const msg = await consumer.receive(); - console.log(msg.getData().toString()); - consumer.acknowledge(msg); - } - - await consumer.close(); - await client.close(); - })(); - - ``` - -4. Run the consumer to receive encrypted messages. - - **Input** - - ```shell - - node consumer.js - - ``` - -5. In a new terminal tab, run the producer to produce encrypted messages. - - **Input** - - ```shell - - node producer.js - - ``` - - Now you can see the producer sends messages and the consumer receives messages successfully. - - **Output** - - This is from the producer side. 
   ```
   
   Sent message: my-message-0
   Sent message: my-message-1
   Sent message: my-message-2
   Sent message: my-message-3
   Sent message: my-message-4
   Sent message: my-message-5
   Sent message: my-message-6
   Sent message: my-message-7
   Sent message: my-message-8
   Sent message: my-message-9
   
   ```

   This is from the consumer side.

   ```
   
   my-message-0
   my-message-1
   my-message-2
   my-message-3
   my-message-4
   my-message-5
   my-message-6
   my-message-7
   my-message-8
   my-message-9
   
   ```

diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/client-libraries-python.md b/site2/website/versioned_docs/version-2.8.1-deprecated/client-libraries-python.md
deleted file mode 100644
index f30cf55387d92e..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/client-libraries-python.md
+++ /dev/null
@@ -1,456 +0,0 @@
---
id: client-libraries-python
title: Pulsar Python client
sidebar_label: "Python"
original_id: client-libraries-python
---

The Pulsar Python client library is a wrapper over the existing [C++ client library](client-libraries-cpp.md) and exposes all of the [same features](/api/cpp). You can find the code in the [`python` subdirectory](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp/python) of the C++ client code.

All the methods in producer, consumer, and reader of a Python client are thread-safe.

[pdoc](https://github.com/BurntSushi/pdoc)-generated API docs for the Python client are available [here](/api/python).

## Install

You can install the [`pulsar-client`](https://pypi.python.org/pypi/pulsar-client) library either via [PyPi](https://pypi.python.org/pypi), using [pip](#install-using-pip), or by building the library from source.

### Install using pip

To install the `pulsar-client` library as a pre-built package using the [pip](https://pip.pypa.io/en/stable/) package manager:

```shell

$ pip install pulsar-client==@pulsar:version_number@

```

### Optional dependencies

To support aspects like Pulsar Functions or Avro serialization, additional optional components can be installed alongside the `pulsar-client` library:

```shell

# avro serialization
$ pip install pulsar-client[avro]=='@pulsar:version_number@'

# functions runtime
$ pip install pulsar-client[functions]=='@pulsar:version_number@'

# all optional components
$ pip install pulsar-client[all]=='@pulsar:version_number@'

```

Installation via PyPi is available for the following Python versions:

Platform | Supported Python versions
:--------|:-------------------------
MacOS 10.13 (High Sierra), 10.14 (Mojave) | 2.7, 3.7
Linux | 2.7, 3.4, 3.5, 3.6, 3.7, 3.8

### Install from source

To install the `pulsar-client` library by building from source, follow the [instructions](client-libraries-cpp.md#compilation) to compile the Pulsar C++ client library. That build also produces the Python binding for the library.

To install the built Python bindings:

```shell

$ git clone https://github.com/apache/pulsar
$ cd pulsar/pulsar-client-cpp/python
$ sudo python setup.py install

```

## API Reference

The complete Python API reference is available at [api/python](/api/python).

## Examples

You can find a variety of Python code examples for the `pulsar-client` library.

### Producer example

The following example creates a Python producer for the `my-topic` topic and sends 10 messages on that topic:

```python

import pulsar

client = pulsar.Client('pulsar://localhost:6650')

producer = client.create_producer('my-topic')

for i in range(10):
    producer.send(('Hello-%d' % i).encode('utf-8'))

client.close()

```

### Consumer example

The following example creates a consumer with the `my-subscription` subscription name on the `my-topic` topic, receives incoming messages, prints the content and ID of messages that arrive, and acknowledges each message to the Pulsar broker.

```python

import pulsar

client = pulsar.Client('pulsar://localhost:6650')

consumer = client.subscribe('my-topic', 'my-subscription')

while True:
    msg = consumer.receive()
    try:
        print("Received message '{}' id='{}'".format(msg.data(), msg.message_id()))
        # Acknowledge successful processing of the message
        consumer.acknowledge(msg)
    except:
        # Message failed to be processed
        consumer.negative_acknowledge(msg)

client.close()

```

This example shows how to configure negative acknowledgement.

```python

from pulsar import Client, schema
client = Client('pulsar://localhost:6650')
consumer = client.subscribe('negative_acks','test',schema=schema.StringSchema())
producer = client.create_producer('negative_acks',schema=schema.StringSchema())
for i in range(10):
    print('send msg "hello-%d"' % i)
    producer.send_async('hello-%d' % i, callback=None)
producer.flush()
for i in range(10):
    msg = consumer.receive()
    consumer.negative_acknowledge(msg)
    print('receive and nack msg "%s"' % msg.data())
for i in range(10):
    msg = consumer.receive()
    consumer.acknowledge(msg)
    print('receive and ack msg "%s"' % msg.data())
try:
    # No more messages expected
    msg = consumer.receive(100)
except:
    print("no more msg")
    pass

```

### Reader interface example

You can use the Pulsar Python API to use the Pulsar [reader interface](concepts-clients.md#reader-interface). Here's an example:

```python

# MessageId taken from a previously fetched message
msg_id = msg.message_id()

reader = client.create_reader('my-topic', msg_id)

while True:
    msg = reader.read_next()
    print("Received message '{}' id='{}'".format(msg.data(), msg.message_id()))
    # No acknowledgment

```

### Multi-topic subscriptions

In addition to subscribing a consumer to a single Pulsar topic, you can also subscribe to multiple topics simultaneously. To use multi-topic subscriptions, you can supply a regular expression (regex) or a `List` of topics. If you select topics via regex, all topics must be within the same Pulsar namespace.
- -```python - -import re -consumer = client.subscribe(re.compile('persistent://public/default/topic-*'), 'my-subscription') -while True: - msg = consumer.receive() - try: - print("Received message '{}' id='{}'".format(msg.data(), msg.message_id())) - # Acknowledge successful processing of the message - consumer.acknowledge(msg) - except: - # Message failed to be processed - consumer.negative_acknowledge(msg) -client.close() - -``` - -## Schema - -### Declare and validate schema - -You can declare a schema by passing a class that inherits -from `pulsar.schema.Record` and defines the fields as -class variables. For example: - -```python - -from pulsar.schema import * - -class Example(Record): - a = String() - b = Integer() - c = Boolean() - -``` - -With this simple schema definition, you can create producers, consumers and readers instances that refer to that. - -```python - -producer = client.create_producer( - topic='my-topic', - schema=AvroSchema(Example) ) - -producer.send(Example(a='Hello', b=1)) - -``` - -After creating the producer, the Pulsar broker validates that the existing topic schema is indeed of "Avro" type and that the format is compatible with the schema definition of the `Example` class. - -If there is a mismatch, an exception occurs in the producer creation. - -Once a producer is created with a certain schema definition, -it will only accept objects that are instances of the declared -schema class. - -Similarly, for a consumer/reader, the consumer will return an -object, instance of the schema record class, rather than the raw -bytes: - -```python - -consumer = client.subscribe( - topic='my-topic', - subscription_name='my-subscription', - schema=AvroSchema(Example) ) - -while True: - msg = consumer.receive() - ex = msg.value() - try: - print("Received message a={} b={} c={}".format(ex.a, ex.b, ex.c)) - # Acknowledge successful processing of the message - consumer.acknowledge(msg) - except: - # Message failed to be processed - consumer.negative_acknowledge(msg) - -``` - -### Supported schema types - -You can use different builtin schema types in Pulsar. All the definitions are in the `pulsar.schema` package. - -| Schema | Notes | -| ------ | ----- | -| `BytesSchema` | Get the raw payload as a `bytes` object. No serialization/deserialization are performed. This is the default schema mode | -| `StringSchema` | Encode/decode payload as a UTF-8 string. Uses `str` objects | -| `JsonSchema` | Require record definition. Serializes the record into standard JSON payload | -| `AvroSchema` | Require record definition. Serializes in AVRO format | - -### Schema definition reference - -The schema definition is done through a class that inherits from `pulsar.schema.Record`. - -This class has a number of fields which can be of either -`pulsar.schema.Field` type or another nested `Record`. All the -fields are specified in the `pulsar.schema` package. The fields -are matching the AVRO fields types. - -| Field Type | Python Type | Notes | -| ---------- | ----------- | ----- | -| `Boolean` | `bool` | | -| `Integer` | `int` | | -| `Long` | `int` | | -| `Float` | `float` | | -| `Double` | `float` | | -| `Bytes` | `bytes` | | -| `String` | `str` | | -| `Array` | `list` | Need to specify record type for items. | -| `Map` | `dict` | Key is always `String`. Need to specify value type. | - -Additionally, any Python `Enum` type can be used as a valid field type. - -#### Fields parameters - -When adding a field, you can use these parameters in the constructor. 
- -| Argument | Default | Notes | -| ---------- | --------| ----- | -| `default` | `None` | Set a default value for the field. Eg: `a = Integer(default=5)` | -| `required` | `False` | Mark the field as "required". It is set in the schema accordingly. | - -#### Schema definition examples - -##### Simple definition - -```python - -class Example(Record): - a = String() - b = Integer() - c = Array(String()) - i = Map(String()) - -``` - -##### Using enums - -```python - -from enum import Enum - -class Color(Enum): - red = 1 - green = 2 - blue = 3 - -class Example(Record): - name = String() - color = Color - -``` - -##### Complex types - -```python - -class MySubRecord(Record): - x = Integer() - y = Long() - z = String() - -class Example(Record): - a = String() - sub = MySubRecord() - -``` - -## End-to-end encryption - -[End-to-end encryption](https://pulsar.apache.org/docs/en/next/cookbooks-encryption/#docsNav) allows applications to encrypt messages at producers and decrypt messages at consumers. - -### Configuration - -To use the end-to-end encryption feature in the Python client, you need to configure `publicKeyPath` and `privateKeyPath` for both producer and consumer. - -``` - -publicKeyPath: "./public.pem" -privateKeyPath: "./private.pem" - -``` - -### Tutorial - -This section provides step-by-step instructions on how to use the end-to-end encryption feature in the Python client. - -**Prerequisite** - -- Pulsar Python client 2.7.1 or later - -**Step** - -1. Create both public and private key pairs. - - **Input** - - ```shell - - openssl genrsa -out private.pem 2048 - openssl rsa -in private.pem -pubout -out public.pem - - ``` - -2. Create a producer to send encrypted messages. - - **Input** - - ```python - - import pulsar - - publicKeyPath = "./public.pem" - privateKeyPath = "./private.pem" - crypto_key_reader = pulsar.CryptoKeyReader(publicKeyPath, privateKeyPath) - client = pulsar.Client('pulsar://localhost:6650') - producer = client.create_producer(topic='encryption', encryption_key='encryption', crypto_key_reader=crypto_key_reader) - producer.send('encryption message'.encode('utf8')) - print('sent message') - producer.close() - client.close() - - ``` - -3. Create a consumer to receive encrypted messages. - - **Input** - - ```python - - import pulsar - - publicKeyPath = "./public.pem" - privateKeyPath = "./private.pem" - crypto_key_reader = pulsar.CryptoKeyReader(publicKeyPath, privateKeyPath) - client = pulsar.Client('pulsar://localhost:6650') - consumer = client.subscribe(topic='encryption', subscription_name='encryption-sub', crypto_key_reader=crypto_key_reader) - msg = consumer.receive() - print("Received msg '{}' id = '{}'".format(msg.data(), msg.message_id())) - consumer.close() - client.close() - - ``` - -4. Run the consumer to receive encrypted messages. - - **Input** - - ```shell - - python consumer.py - - ``` - -5. In a new terminal tab, run the producer to produce encrypted messages. - - **Input** - - ```shell - - python producer.py - - ``` - - Now you can see the producer sends messages and the consumer receives messages successfully. - - **Output** - - This is from the producer side. - - ``` - - sent message - - ``` - - This is from the consumer side. 
-
-   ```
-
-   Received msg 'b'encryption message'' id = '(0,0,-1,-1)'
-
-   ```
-
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/client-libraries-websocket.md b/site2/website/versioned_docs/version-2.8.1-deprecated/client-libraries-websocket.md
deleted file mode 100644
index ebdb9bc1cd18f6..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/client-libraries-websocket.md
+++ /dev/null
@@ -1,621 +0,0 @@
----
-id: client-libraries-websocket
-title: Pulsar WebSocket API
-sidebar_label: "WebSocket"
-original_id: client-libraries-websocket
----
-
-The Pulsar [WebSocket](https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API) API provides a simple way to interact with Pulsar using languages that do not have an official [client library](getting-started-clients.md). Through WebSocket, you can publish and consume messages and use features available on the [Client Features Matrix](https://github.com/apache/pulsar/wiki/Client-Features-Matrix) page.
-
-
-> You can use the Pulsar WebSocket API with any WebSocket client library. See examples for Python and Node.js [below](#client-examples).
-
-## Running the WebSocket service
-
-The standalone variant of Pulsar that we recommend using for [local development](getting-started-standalone.md) already has the WebSocket service enabled.
-
-In non-standalone mode, there are two ways to deploy the WebSocket service:
-
-* [embedded](#embedded-with-a-pulsar-broker) with a Pulsar broker
-* as a [separate component](#as-a-separate-component)
-
-### Embedded with a Pulsar broker
-
-In this mode, the WebSocket service runs within the same HTTP service that's already running in the broker. To enable this mode, set the [`webSocketServiceEnabled`](reference-configuration.md#broker-webSocketServiceEnabled) parameter in the [`conf/broker.conf`](reference-configuration.md#broker) configuration file in your installation.
-
-```properties
-
-webSocketServiceEnabled=true
-
-```
-
-### As a separate component
-
-In this mode, the WebSocket service runs as a separate service rather than within a Pulsar [broker](reference-terminology.md#broker). Configuration for this mode is handled in the [`conf/websocket.conf`](reference-configuration.md#websocket) configuration file. You'll need to set *at least* the following parameters:
-
-* [`configurationStoreServers`](reference-configuration.md#websocket-configurationStoreServers)
-* [`webServicePort`](reference-configuration.md#websocket-webServicePort)
-* [`clusterName`](reference-configuration.md#websocket-clusterName)
-
-Here's an example:
-
-```properties
-
-configurationStoreServers=zk1:2181,zk2:2181,zk3:2181
-webServicePort=8080
-clusterName=my-cluster
-
-```
-
-### Security settings
-
-To enable TLS encryption on the WebSocket service:
-
-```properties
-
-tlsEnabled=true
-tlsAllowInsecureConnection=false
-tlsCertificateFilePath=/path/to/client-websocket.cert.pem
-tlsKeyFilePath=/path/to/client-websocket.key-pk8.pem
-tlsTrustCertsFilePath=/path/to/ca.cert.pem
-
-```
-
-### Starting the WebSocket service
-
-When the configuration is set, you can start the service using the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) tool:
-
-```shell
-
-$ bin/pulsar-daemon start websocket
-
-```
-
-## API Reference
-
-Pulsar's WebSocket API offers three endpoints for [producing](#producer-endpoint) messages, [consuming](#consumer-endpoint) messages and [reading](#reader-endpoint) messages.
-
-All exchanges via the WebSocket API use JSON.
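-
-Since every exchange is plain JSON, any language with a WebSocket client can talk to these endpoints. As a minimal sketch (not an official example; the service URL, topic, and class name are assumptions), here is a publish round trip using the JDK 11+ `java.net.http.WebSocket` client:
-
-```java
-
-import java.net.URI;
-import java.net.http.HttpClient;
-import java.net.http.WebSocket;
-import java.nio.charset.StandardCharsets;
-import java.util.Base64;
-import java.util.concurrent.CompletionStage;
-
-public class WsProducerSketch {
-    public static void main(String[] args) throws Exception {
-        String url = "ws://localhost:8080/ws/v2/producer/persistent/public/default/my-topic";
-        // Payloads must be Base64-encoded, per the producer endpoint contract
-        String payload = Base64.getEncoder().encodeToString("Hello World".getBytes(StandardCharsets.UTF_8));
-        String message = "{\"payload\": \"" + payload + "\", \"context\": \"1\"}";
-
-        WebSocket ws = HttpClient.newHttpClient()
-                .newWebSocketBuilder()
-                .buildAsync(URI.create(url), new WebSocket.Listener() {
-                    @Override
-                    public CompletionStage<?> onText(WebSocket webSocket, CharSequence data, boolean last) {
-                        // The service replies with a JSON ack such as {"result":"ok","messageId":"...","context":"1"}
-                        System.out.println("ack: " + data);
-                        return WebSocket.Listener.super.onText(webSocket, data, last);
-                    }
-                })
-                .join();
-
-        ws.sendText(message, true);
-        Thread.sleep(2000); // crude wait for the ack before closing
-        ws.sendClose(WebSocket.NORMAL_CLOSURE, "done").join();
-    }
-}
-
-```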
-
-### Authentication
-
-#### Browser JavaScript WebSocket client
-
-Use the query param `token` to transport the authentication token.
-
-```http
-
-ws://broker-service-url:8080/path?token=token
-
-```
-
-### Producer endpoint
-
-The producer endpoint requires you to specify a tenant, namespace, and topic in the URL:
-
-```http
-
-ws://broker-service-url:8080/ws/v2/producer/persistent/:tenant/:namespace/:topic
-
-```
-
-##### Query param
-
-Key | Type | Required? | Explanation
-:---|:-----|:----------|:-----------
-`sendTimeoutMillis` | long | no | Send timeout (default: 30 secs)
-`batchingEnabled` | boolean | no | Enable batching of messages (default: false)
-`batchingMaxMessages` | int | no | Maximum number of messages permitted in a batch (default: 1000)
-`maxPendingMessages` | int | no | Set the max size of the internal queue holding the messages (default: 1000)
-`batchingMaxPublishDelay` | long | no | Time period within which the messages will be batched (default: 10ms)
-`messageRoutingMode` | string | no | Message [routing mode](https://pulsar.apache.org/api/client/index.html?org/apache/pulsar/client/api/ProducerConfiguration.MessageRoutingMode.html) for the partitioned producer: `SinglePartition`, `RoundRobinPartition`
-`compressionType` | string | no | Compression [type](https://pulsar.apache.org/api/client/index.html?org/apache/pulsar/client/api/CompressionType.html): `LZ4`, `ZLIB`
-`producerName` | string | no | Specify the name for the producer. Pulsar enforces that only one producer with the same name can publish on a topic
-`initialSequenceId` | long | no | Set the baseline for the sequence ids for messages published by the producer.
-`hashingScheme` | string | no | [Hashing function](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.HashingScheme.html) to use when publishing on a partitioned topic: `JavaStringHash`, `Murmur3_32Hash`
-`token` | string | no | Authentication token; this is used for the browser JavaScript client
-
-
-#### Publishing a message
-
-```json
-
-{
-  "payload": "SGVsbG8gV29ybGQ=",
-  "properties": {"key1": "value1", "key2": "value2"},
-  "context": "1"
-}
-
-```
-
-Key | Type | Required? | Explanation
-:---|:-----|:----------|:-----------
-`payload` | string | yes | Base-64 encoded payload
-`properties` | key-value pairs | no | Application-defined properties
-`context` | string | no | Application-defined request identifier
-`key` | string | no | For partitioned topics, decides which partition to use
-`replicationClusters` | array | no | Restrict replication to this list of [clusters](reference-terminology.md#cluster), specified by name
-
-
-##### Example success response
-
-```json
-
-{
-  "result": "ok",
-  "messageId": "CAAQAw==",
-  "context": "1"
-}
-
-```
-
-##### Example failure response
-
-```json
-
-{
-  "result": "send-error:3",
-  "errorMsg": "Failed to de-serialize from JSON",
-  "context": "1"
-}
-
-```
-
-Key | Type | Required? | Explanation
-:---|:-----|:----------|:-----------
-`result` | string | yes | `ok` if successful or an error message if unsuccessful
-`messageId` | string | yes | Message ID assigned to the published message
-`context` | string | no | Application-defined request identifier
-
-
-### Consumer endpoint
-
-The consumer endpoint requires you to specify a tenant, namespace, and topic, as well as a subscription, in the URL:
-
-```http
-
-ws://broker-service-url:8080/ws/v2/consumer/persistent/:tenant/:namespace/:topic/:subscription
-
-```
-
-##### Query param
-
-Key | Type | Required?
| Explanation
-:---|:-----|:----------|:-----------
-`ackTimeoutMillis` | long | no | Set the timeout for unacked messages (default: 0)
-`subscriptionType` | string | no | [Subscription type](https://pulsar.apache.org/api/client/index.html?org/apache/pulsar/client/api/SubscriptionType.html): `Exclusive`, `Failover`, `Shared`, `Key_Shared`
-`receiverQueueSize` | int | no | Size of the consumer receive queue (default: 1000)
-`consumerName` | string | no | Consumer name
-`priorityLevel` | int | no | Define a [priority](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setPriorityLevel-int-) for the consumer
-`maxRedeliverCount` | int | no | Define a [maxRedeliverCount](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#deadLetterPolicy-org.apache.pulsar.client.api.DeadLetterPolicy-) for the consumer (default: 0). Activates the [Dead Letter Topic](https://github.com/apache/pulsar/wiki/PIP-22%3A-Pulsar-Dead-Letter-Topic) feature.
-`deadLetterTopic` | string | no | Define a [deadLetterTopic](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#deadLetterPolicy-org.apache.pulsar.client.api.DeadLetterPolicy-) for the consumer (default: {topic}-{subscription}-DLQ). Activates the [Dead Letter Topic](https://github.com/apache/pulsar/wiki/PIP-22%3A-Pulsar-Dead-Letter-Topic) feature.
-`pullMode` | boolean | no | Enable pull mode (default: false). See "Flow Control" below.
-`negativeAckRedeliveryDelay` | int | no | The delay (in milliseconds) before a negatively acknowledged message is redelivered.
-`token` | string | no | Authentication token; this is used for the browser JavaScript client
-
-NB: these parameters (except `pullMode`) apply to the internal consumer of the WebSocket service,
-so messages are subject to the redelivery settings as soon as they get into the receive queue,
-even if the client doesn't consume them over the WebSocket.
-
-##### Receiving messages
-
-The server pushes messages on the WebSocket session:
-
-```json
-
-{
-  "messageId": "CAAQAw==",
-  "payload": "SGVsbG8gV29ybGQ=",
-  "properties": {"key1": "value1", "key2": "value2"},
-  "publishTime": "2016-08-30 16:45:57.785",
-  "redeliveryCount": 4
-}
-
-```
-
-Key | Type | Required? | Explanation
-:---|:-----|:----------|:-----------
-`messageId` | string | yes | Message ID
-`payload` | string | yes | Base-64 encoded payload
-`publishTime` | string | yes | Publish timestamp
-`redeliveryCount` | number | yes | Number of times this message was already delivered
-`properties` | key-value pairs | no | Application-defined properties
-`key` | string | no | Original routing key set by producer
-
-#### Acknowledging the message
-
-The consumer needs to acknowledge the successful processing of a message to
-have the Pulsar broker delete it.
-
-```json
-
-{
-  "messageId": "CAAQAw=="
-}
-
-```
-
-Key | Type | Required? | Explanation
-:---|:-----|:----------|:-----------
-`messageId`| string | yes | Message ID of the processed message
-
-#### Negatively acknowledging messages
-
-```json
-
-{
-  "type": "negativeAcknowledge",
-  "messageId": "CAAQAw=="
-}
-
-```
-
-Key | Type | Required?
| Explanation
-:---|:-----|:----------|:-----------
-`messageId`| string | yes | Message ID of the processed message
-
-#### Flow control
-
-##### Push Mode
-
-By default (`pullMode=false`), the consumer endpoint uses the `receiverQueueSize` parameter both to size its
-internal receive queue and to limit the number of unacknowledged messages that are passed to the WebSocket client.
-In this mode, if you don't send acknowledgements, the Pulsar WebSocket service stops sending messages after
-`receiverQueueSize` unacked messages have been sent to the WebSocket client.
-
-##### Pull Mode
-
-If you set `pullMode` to `true`, the WebSocket client needs to send `permit` commands to permit the
-Pulsar WebSocket service to send more messages.
-
-```json
-
-{
-  "type": "permit",
-  "permitMessages": 100
-}
-
-```
-
-Key | Type | Required? | Explanation
-:---|:-----|:----------|:-----------
-`type`| string | yes | Type of command. Must be `permit`
-`permitMessages`| int | yes | Number of messages to permit
-
-NB: in this mode it's possible to acknowledge messages on a different connection.
-
-#### Check if the end of topic is reached
-
-A consumer can check whether it has reached the end of topic by sending an `isEndOfTopic` request.
-
-**Request**
-
-```json
-
-{
-  "type": "isEndOfTopic"
-}
-
-```
-
-Key | Type | Required? | Explanation
-:---|:-----|:----------|:-----------
-`type`| string | yes | Type of command. Must be `isEndOfTopic`
-
-**Response**
-
-```json
-
-{
-  "endOfTopic": "true/false"
-}
-
-```
-
-### Reader endpoint
-
-The reader endpoint requires you to specify a tenant, namespace, and topic in the URL:
-
-```http
-
-ws://broker-service-url:8080/ws/v2/reader/persistent/:tenant/:namespace/:topic
-
-```
-
-##### Query param
-
-Key | Type | Required? | Explanation
-:---|:-----|:----------|:-----------
-`readerName` | string | no | Reader name
-`receiverQueueSize` | int | no | Size of the consumer receive queue (default: 1000)
-`messageId` | int or enum | no | Message ID to start from, `earliest` or `latest` (default: `latest`)
-`token` | string | no | Authentication token; this is used for the browser JavaScript client
-
-##### Receiving messages
-
-The server pushes messages on the WebSocket session:
-
-```json
-
-{
-  "messageId": "CAAQAw==",
-  "payload": "SGVsbG8gV29ybGQ=",
-  "properties": {"key1": "value1", "key2": "value2"},
-  "publishTime": "2016-08-30 16:45:57.785",
-  "redeliveryCount": 4
-}
-
-```
-
-Key | Type | Required? | Explanation
-:---|:-----|:----------|:-----------
-`messageId` | string | yes | Message ID
-`payload` | string | yes | Base-64 encoded payload
-`publishTime` | string | yes | Publish timestamp
-`redeliveryCount` | number | yes | Number of times this message was already delivered
-`properties` | key-value pairs | no | Application-defined properties
-`key` | string | no | Original routing key set by producer
-
-#### Acknowledging the message
-
-**In WebSocket**, the reader needs to acknowledge the successful processing of a message to
-have the Pulsar WebSocket service update the number of pending messages.
-If you don't send acknowledgements, the Pulsar WebSocket service stops sending messages after reaching the pendingMessages limit.
-
-```json
-
-{
-  "messageId": "CAAQAw=="
-}
-
-```
-
-Key | Type | Required? | Explanation
-:---|:-----|:----------|:-----------
-`messageId`| string | yes | Message ID of the processed message
-
-#### Check if the end of topic is reached
-
-A reader can check whether it has reached the end of topic by sending an `isEndOfTopic` request.
-
-**Request**
-
-```json
-
-{
-  "type": "isEndOfTopic"
-}
-
-```
-
-Key | Type | Required? | Explanation
-:---|:-----|:----------|:-----------
-`type`| string | yes | Type of command. Must be `isEndOfTopic`
-
-**Response**
-
-```json
-
-{
-  "endOfTopic": "true/false"
-}
-
-```
-
-### Error codes
-
-In case of an error, the server closes the WebSocket session using one of the
-following error codes:
-
-Error Code | Error Message
-:----------|:-------------
-1 | Failed to create producer
-2 | Failed to subscribe
-3 | Failed to deserialize from JSON
-4 | Failed to serialize to JSON
-5 | Failed to authenticate client
-6 | Client is not authorized
-7 | Invalid payload encoding
-8 | Unknown error
-
-> The application is responsible for re-establishing a new WebSocket session after a backoff period.
-
-## Client examples
-
-Below you'll find code examples for the Pulsar WebSocket API in [Python](#python) and [Node.js](#nodejs).
-
-### Python
-
-This example uses the [`websocket-client`](https://pypi.python.org/pypi/websocket-client) package. You can install it using [pip](https://pypi.python.org/pypi/pip):
-
-```shell
-
-$ pip install websocket-client
-
-```
-
-You can also download it from [PyPI](https://pypi.python.org/pypi/websocket-client).
-
-#### Python producer
-
-Here's an example Python producer that sends a simple message to a Pulsar [topic](reference-terminology.md#topic):
-
-```python
-
-import websocket, base64, json
-
-# If you set enable_TLS to True, you have to set tlsEnabled to true in conf/websocket.conf.
-enable_TLS = False
-scheme = 'ws'
-if enable_TLS:
-    scheme = 'wss'
-
-TOPIC = scheme + '://localhost:8080/ws/v2/producer/persistent/public/default/my-topic'
-
-ws = websocket.create_connection(TOPIC)
-
-# Send one message as JSON
-ws.send(json.dumps({
-    'payload' : base64.b64encode(b'Hello World').decode('utf-8'),
-    'properties': {
-        'key1' : 'value1',
-        'key2' : 'value2'
-    },
-    'context' : '5'
-}))

-response = json.loads(ws.recv())
-if response['result'] == 'ok':
-    print('Message published successfully')
-else:
-    print('Failed to publish message:', response)
-ws.close()
-
-```
-
-#### Python consumer
-
-Here's an example Python consumer that listens on a Pulsar topic and prints the message ID whenever a message arrives:
-
-```python
-
-import websocket, base64, json
-
-# If you set enable_TLS to True, you have to set tlsEnabled to true in conf/websocket.conf.
-enable_TLS = False
-scheme = 'ws'
-if enable_TLS:
-    scheme = 'wss'
-
-TOPIC = scheme + '://localhost:8080/ws/v2/consumer/persistent/public/default/my-topic/my-sub'
-
-ws = websocket.create_connection(TOPIC)
-
-while True:
-    msg = json.loads(ws.recv())
-    if not msg: break
-
-    print("Received: {} - payload: {}".format(msg, base64.b64decode(msg['payload'])))
-
-    # Acknowledge successful processing
-    ws.send(json.dumps({'messageId' : msg['messageId']}))
-
-ws.close()
-
-```
-
-#### Python reader
-
-Here's an example Python reader that listens on a Pulsar topic and prints the message ID whenever a message arrives:
-
-```python
-
-import websocket, base64, json
-
-# If you set enable_TLS to True, you have to set tlsEnabled to true in conf/websocket.conf.
-enable_TLS = False
-scheme = 'ws'
-if enable_TLS:
-    scheme = 'wss'
-
-TOPIC = scheme + '://localhost:8080/ws/v2/reader/persistent/public/default/my-topic'
-ws = websocket.create_connection(TOPIC)
-
-while True:
-    msg = json.loads(ws.recv())
-    if not msg: break
-
-    print("Received: {} - payload: {}".format(msg, base64.b64decode(msg['payload'])))
-
-    # Acknowledge successful processing
-    ws.send(json.dumps({'messageId' : msg['messageId']}))
-
-ws.close()
-
-```
-
-### Node.js
-
-This example uses the [`ws`](https://websockets.github.io/ws/) package. You can install it using [npm](https://www.npmjs.com/):
-
-```shell
-
-$ npm install ws
-
-```
-
-#### Node.js producer
-
-Here's an example Node.js producer that sends a simple message to a Pulsar topic:
-
-```javascript
-
-const WebSocket = require('ws');
-
-// If you set enableTLS to true, you have to set tlsEnabled to true in conf/websocket.conf.
-const enableTLS = false;
-const topic = `${enableTLS ? 'wss' : 'ws'}://localhost:8080/ws/v2/producer/persistent/public/default/my-topic`;
-const ws = new WebSocket(topic);
-
-var message = {
-  "payload" : Buffer.from("Hello World").toString('base64'),
-  "properties": {
-    "key1" : "value1",
-    "key2" : "value2"
-  },
-  "context" : "1"
-};
-
-ws.on('open', function() {
-  // Send one message
-  ws.send(JSON.stringify(message));
-});
-
-ws.on('message', function(message) {
-  console.log('received ack: %s', message);
-});
-
-```
-
-#### Node.js consumer
-
-Here's an example Node.js consumer that listens on the same topic used by the producer above:
-
-```javascript
-
-const WebSocket = require('ws');
-
-// If you set enableTLS to true, you have to set tlsEnabled to true in conf/websocket.conf.
-const enableTLS = false;
-const topic = `${enableTLS ? 'wss' : 'ws'}://localhost:8080/ws/v2/consumer/persistent/public/default/my-topic/my-sub`;
-const ws = new WebSocket(topic);
-
-ws.on('message', function(message) {
-  var receiveMsg = JSON.parse(message);
-  console.log('Received: %s - payload: %s', message, Buffer.from(receiveMsg.payload, 'base64').toString());
-  var ackMsg = {"messageId" : receiveMsg.messageId};
-  ws.send(JSON.stringify(ackMsg));
-});
-
-```
-
-#### Node.js reader
-
-```javascript
-
-const WebSocket = require('ws');
-
-// If you set enableTLS to true, you have to set tlsEnabled to true in conf/websocket.conf.
-const enableTLS = false;
-const topic = `${enableTLS ?
'wss' : 'ws'}://localhost:8080/ws/v2/reader/persistent/public/default/my-topic`;
-const ws = new WebSocket(topic);
-
-ws.on('message', function(message) {
-  var receiveMsg = JSON.parse(message);
-  console.log('Received: %s - payload: %s', message, Buffer.from(receiveMsg.payload, 'base64').toString());
-  var ackMsg = {"messageId" : receiveMsg.messageId};
-  ws.send(JSON.stringify(ackMsg));
-});
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/client-libraries.md b/site2/website/versioned_docs/version-2.8.1-deprecated/client-libraries.md
deleted file mode 100644
index 00d128c514040f..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/client-libraries.md
+++ /dev/null
@@ -1,35 +0,0 @@
----
-id: client-libraries
-title: Pulsar client libraries
-sidebar_label: "Overview"
-original_id: client-libraries
----
-
-Pulsar supports the following client libraries:
-
-- [Java client](client-libraries-java.md)
-- [Go client](client-libraries-go.md)
-- [Python client](client-libraries-python.md)
-- [C++ client](client-libraries-cpp.md)
-- [Node.js client](client-libraries-node.md)
-- [WebSocket client](client-libraries-websocket.md)
-- [C# client](client-libraries-dotnet.md)
-
-## Feature matrix
-The Pulsar client feature matrix for different languages is listed on the [Client Features Matrix](https://github.com/apache/pulsar/wiki/Client-Features-Matrix) page.
-
-## Third-party clients
-
-Besides the officially released clients, multiple projects for developing Pulsar clients are available in different languages.
-
-> If you have developed a new Pulsar client, feel free to submit a pull request and add your client to the list below.
-
-| Language | Project | Maintainer | License | Description |
-|----------|---------|------------|---------|-------------|
-| Go | [pulsar-client-go](https://github.com/Comcast/pulsar-client-go) | [Comcast](https://github.com/Comcast) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | A native golang client |
-| Go | [go-pulsar](https://github.com/t2y/go-pulsar) | [t2y](https://github.com/t2y) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) |
-| Haskell | [supernova](https://github.com/cr-org/supernova) | [Chatroulette](https://github.com/cr-org) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Native Pulsar client for Haskell |
-| Scala | [neutron](https://github.com/cr-org/neutron) | [Chatroulette](https://github.com/cr-org) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Purely functional Apache Pulsar client for Scala built on top of Fs2 |
-| Scala | [pulsar4s](https://github.com/sksamuel/pulsar4s) | [sksamuel](https://github.com/sksamuel) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Idiomatic, typesafe, and reactive Scala client for Apache Pulsar |
-| Rust | [pulsar-rs](https://github.com/wyyerd/pulsar-rs) | [Wyyerd Group](https://github.com/wyyerd) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Future-based Rust bindings for Apache Pulsar |
-| .NET | [pulsar-client-dotnet](https://github.com/fsharplang-ru/pulsar-client-dotnet) | [Lanayx](https://github.com/Lanayx) |
[![GitHub](https://img.shields.io/badge/license-MIT-green.svg)](https://opensource.org/licenses/MIT) | Native .NET client for C#/F#/VB |
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/concepts-architecture-overview.md b/site2/website/versioned_docs/version-2.8.1-deprecated/concepts-architecture-overview.md
deleted file mode 100644
index f3e75c3e307e0c..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/concepts-architecture-overview.md
+++ /dev/null
@@ -1,172 +0,0 @@
----
-id: concepts-architecture-overview
-title: Architecture Overview
-sidebar_label: "Architecture"
-original_id: concepts-architecture-overview
----
-
-At the highest level, a Pulsar instance is composed of one or more Pulsar clusters. Clusters within an instance can [replicate](concepts-replication.md) data amongst themselves.
-
-In a Pulsar cluster:
-
-* One or more brokers handle and load balance incoming messages from producers, dispatch messages to consumers, communicate with the Pulsar configuration store to handle various coordination tasks, store messages in BookKeeper instances (aka bookies), rely on a cluster-specific ZooKeeper cluster for certain tasks, and more.
-* A BookKeeper cluster consisting of one or more bookies handles [persistent storage](#persistent-storage) of messages.
-* A ZooKeeper cluster specific to that cluster handles coordination tasks for that cluster.
-
-The diagram below provides an illustration of a Pulsar cluster:
-
-![Pulsar architecture diagram](/assets/pulsar-system-architecture.png)
-
-At the broader instance level, an instance-wide ZooKeeper cluster called the configuration store handles coordination tasks involving multiple clusters, for example [geo-replication](concepts-replication.md).
-
-## Brokers
-
-The Pulsar message broker is a stateless component that's primarily responsible for running two other components:
-
-* An HTTP server that exposes a {@inject: rest:REST:/} API for both administrative tasks and [topic lookup](concepts-clients.md#client-setup-phase) for producers and consumers. Producers connect to the brokers to publish messages and consumers connect to the brokers to consume messages.
-* A dispatcher, which is an asynchronous TCP server over a custom [binary protocol](developing-binary-protocol.md) used for all data transfers.
-
-Messages are typically dispatched out of a [managed ledger](#managed-ledgers) cache for the sake of performance, *unless* the backlog exceeds the cache size. If the backlog grows too large for the cache, the broker starts reading entries from BookKeeper.
-
-Finally, to support geo-replication on global topics, the broker manages replicators that tail the entries published in the local region and republish them to the remote region using the Pulsar [Java client library](client-libraries-java.md).
-
-> For a guide to managing Pulsar brokers, see the [brokers](admin-api-brokers.md) guide.
-
-## Clusters
-
-A Pulsar instance consists of one or more Pulsar *clusters*. Clusters, in turn, consist of:
-
-* One or more Pulsar [brokers](#brokers)
-* A ZooKeeper quorum used for cluster-level configuration and coordination
-* An ensemble of bookies used for [persistent storage](#persistent-storage) of messages
-
-Clusters can replicate amongst themselves using [geo-replication](concepts-replication.md).
-
-> For a guide to managing Pulsar clusters, see the [clusters](admin-api-clusters.md) guide.
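-
-To make the topic-lookup role described in the Brokers section concrete, here is a minimal, hypothetical sketch using the Java admin client (the admin URL and topic name are assumptions):
-
-```java
-
-import org.apache.pulsar.client.admin.PulsarAdmin;
-
-public class LookupSketch {
-    public static void main(String[] args) throws Exception {
-        PulsarAdmin admin = PulsarAdmin.builder()
-                .serviceHttpUrl("http://localhost:8080") // the broker's HTTP service
-                .build();
-
-        // Ask which broker currently owns the topic; clients perform the
-        // same lookup before opening a TCP connection to the owner.
-        String brokerUrl = admin.lookups().lookupTopic("persistent://public/default/my-topic");
-        System.out.println("Topic is served by: " + brokerUrl);
-
-        admin.close();
-    }
-}
-
-```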
-
-## Metadata store
-
-The Pulsar metadata store maintains all the metadata of a Pulsar cluster, such as topic metadata, schema, broker load data, and so on. Pulsar uses [Apache ZooKeeper](https://zookeeper.apache.org/) for metadata storage, cluster configuration, and coordination. The Pulsar metadata store can be deployed on a separate ZooKeeper cluster or deployed on an existing ZooKeeper cluster. You can use one ZooKeeper cluster for both Pulsar metadata store and BookKeeper metadata store. If you want to deploy Pulsar brokers connected to an existing BookKeeper cluster, you need to deploy separate ZooKeeper clusters for Pulsar metadata store and BookKeeper metadata store respectively.
-
-In a Pulsar instance:
-
-* A configuration store quorum stores configuration for tenants, namespaces, and other entities that need to be globally consistent.
-* Each cluster has its own local ZooKeeper ensemble that stores cluster-specific configuration and coordination such as which brokers are responsible for which topics as well as ownership metadata, broker load reports, BookKeeper ledger metadata, and more.
-
-## Configuration store
-
-The configuration store maintains all the configurations of a Pulsar instance, such as clusters, tenants, namespaces, partitioned topic related configurations, and so on. A Pulsar instance can have a single local cluster, multiple local clusters, or multiple cross-region clusters. Consequently, the configuration store can share the configurations across multiple clusters under a Pulsar instance. The configuration store can be deployed on a separate ZooKeeper cluster or deployed on an existing ZooKeeper cluster.
-
-## Persistent storage
-
-Pulsar provides guaranteed message delivery for applications. If a message successfully reaches a Pulsar broker, it will be delivered to its intended target.
-
-This guarantee requires that non-acknowledged messages are stored in a durable manner until they can be delivered to and acknowledged by consumers. This mode of messaging is commonly called *persistent messaging*. In Pulsar, N copies of all messages are stored and synced on disk, for example 4 copies across two servers with mirrored [RAID](https://en.wikipedia.org/wiki/RAID) volumes on each server.
-
-### Apache BookKeeper
-
-Pulsar uses a system called [Apache BookKeeper](http://bookkeeper.apache.org/) for persistent message storage. BookKeeper is a distributed [write-ahead log](https://en.wikipedia.org/wiki/Write-ahead_logging) (WAL) system that provides a number of crucial advantages for Pulsar:
-
-* It enables Pulsar to utilize many independent logs, called [ledgers](#ledgers). Multiple ledgers can be created for topics over time.
-* It offers very efficient storage for sequential data that handles entry replication.
-* It guarantees read consistency of ledgers in the presence of various system failures.
-* It offers even distribution of I/O across bookies.
-* It's horizontally scalable in both capacity and throughput. Capacity can be immediately increased by adding more bookies to a cluster.
-* Bookies are designed to handle thousands of ledgers with concurrent reads and writes. By using multiple disk devices---one for journal and another for general storage---bookies are able to isolate the effects of read operations from the latency of ongoing write operations.
-
-In addition to message data, *cursors* are also persistently stored in BookKeeper.
Cursors are [subscription](reference-terminology.md#subscription) positions for [consumers](reference-terminology.md#consumer). BookKeeper enables Pulsar to store consumer positions in a scalable fashion.
-
-At the moment, Pulsar supports persistent message storage. This accounts for the `persistent` in all topic names. Here's an example:
-
-```http
-
-persistent://my-tenant/my-namespace/my-topic
-
-```
-
-> Pulsar also supports ephemeral ([non-persistent](concepts-messaging.md#non-persistent-topics)) message storage.
-
-
-You can see an illustration of how brokers and bookies interact in the diagram below:
-
-![Brokers and bookies](/assets/broker-bookie.png)
-
-
-### Ledgers
-
-A ledger is an append-only data structure with a single writer that is assigned to multiple BookKeeper storage nodes, or bookies. Ledger entries are replicated to multiple bookies. Ledgers themselves have very simple semantics:
-
-* A Pulsar broker can create a ledger, append entries to the ledger, and close the ledger.
-* After the ledger has been closed---either explicitly or because the writer process crashed---it can then be opened only in read-only mode.
-* Finally, when entries in the ledger are no longer needed, the whole ledger can be deleted from the system (across all bookies).
-
-#### Ledger read consistency
-
-The main strength of BookKeeper is that it guarantees read consistency in ledgers in the presence of failures. Since the ledger can only be written to by a single process, that process is free to append entries very efficiently, without needing to obtain consensus. After a failure, the ledger goes through a recovery process that finalizes the state of the ledger and establishes which entry was last committed to the log. After that point, all readers of the ledger are guaranteed to see the exact same content.
-
-#### Managed ledgers
-
-Given that BookKeeper ledgers provide a single log abstraction, a library was developed on top of the ledger called the *managed ledger* that represents the storage layer for a single topic. A managed ledger represents the abstraction of a stream of messages with a single writer that keeps appending at the end of the stream and multiple cursors that are consuming the stream, each with its own associated position.
-
-Internally, a single managed ledger uses multiple BookKeeper ledgers to store the data. There are two reasons to have multiple ledgers:
-
-1. After a failure, a ledger is no longer writable and a new one needs to be created.
-2. A ledger can be deleted when all cursors have consumed the messages it contains. This allows for periodic rollover of ledgers.
-
-### Journal storage
-
-In BookKeeper, *journal* files contain BookKeeper transaction logs. Before making an update to a [ledger](#ledgers), a bookie needs to ensure that a transaction describing the update is written to persistent (non-volatile) storage. A new journal file is created once the bookie starts or the older journal file reaches the journal file size threshold (configured using the [`journalMaxSizeMB`](reference-configuration.md#bookkeeper-journalMaxSizeMB) parameter).
-
-## Pulsar proxy
-
-One way for Pulsar clients to interact with a Pulsar [cluster](#clusters) is by connecting to Pulsar message [brokers](#brokers) directly. In some cases, however, this kind of direct connection is either infeasible or undesirable because the client doesn't have direct access to broker addresses.
If you're running Pulsar in a cloud environment or on [Kubernetes](https://kubernetes.io) or an analogous platform, for example, then direct client connections to brokers are likely not possible.
-
-The **Pulsar proxy** provides a solution to this problem by acting as a single gateway for all of the brokers in a cluster. If you run the Pulsar proxy (which, again, is optional), all client connections with the Pulsar cluster flow through the proxy rather than communicating with brokers directly.
-
-> For the sake of performance and fault tolerance, you can run as many instances of the Pulsar proxy as you'd like.
-
-Architecturally, the Pulsar proxy gets all the information it requires from ZooKeeper. When starting the proxy on a machine, you only need to provide ZooKeeper connection strings for the cluster-specific and instance-wide configuration store clusters. Here's an example:
-
-```bash
-
-$ bin/pulsar proxy \
-  --zookeeper-servers zk-0,zk-1,zk-2 \
-  --configuration-store-servers zk-0,zk-1,zk-2
-
-```
-
-> #### Pulsar proxy docs
-> For documentation on using the Pulsar proxy, see the [Pulsar proxy admin documentation](administration-proxy.md).
-
-
-Some important things to know about the Pulsar proxy:
-
-* Connecting clients don't need to provide *any* specific configuration to use the Pulsar proxy. You won't need to update the client configuration for existing applications beyond updating the IP used for the service URL (for example, if you're running a load balancer over the Pulsar proxy).
-* [TLS encryption](security-tls-transport.md) and [authentication](security-tls-authentication.md) are supported by the Pulsar proxy.
-
-## Service discovery
-
-[Clients](getting-started-clients.md) connecting to Pulsar brokers need to be able to communicate with an entire Pulsar instance using a single URL. Pulsar provides a built-in service discovery mechanism that you can set up using the instructions in the [Deploying a Pulsar instance](deploy-bare-metal.md#service-discovery-setup) guide.
-
-You can use your own service discovery system if you'd like. If you use your own system, there is just one requirement: when a client performs an HTTP request to an endpoint, such as `http://pulsar.us-west.example.com:8080`, the client needs to be redirected to *some* active broker in the desired cluster, whether via DNS, an HTTP or IP redirect, or some other means.
-
-The diagram below illustrates Pulsar service discovery:
-
-![alt-text](/assets/pulsar-service-discovery.png)
-
-In this diagram, the Pulsar cluster is addressable via a single DNS name: `pulsar-cluster.acme.com`. A [Python client](client-libraries-python.md), for example, could access this Pulsar cluster like this:
-
-```python
-
-from pulsar import Client
-
-client = Client('pulsar://pulsar-cluster.acme.com:6650')
-
-```
-
-:::note
-
-In Pulsar, each topic is handled by only one broker. Initial requests from a client to read, update or delete a topic are sent to a broker that may not be the topic owner. If the broker cannot handle the request for this topic, it redirects the request to the appropriate broker.
-
-:::
-
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/concepts-authentication.md b/site2/website/versioned_docs/version-2.8.1-deprecated/concepts-authentication.md
deleted file mode 100644
index f6307890c904a7..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/concepts-authentication.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-id: concepts-authentication
-title: Authentication and Authorization
-sidebar_label: "Authentication and Authorization"
-original_id: concepts-authentication
----
-
-Pulsar supports a pluggable [authentication](security-overview.md) mechanism which can be configured at the proxy and/or the broker. Pulsar also supports a pluggable [authorization](security-authorization.md) mechanism. These mechanisms work together to identify the client and its access rights on topics, namespaces and tenants.
-
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/concepts-clients.md b/site2/website/versioned_docs/version-2.8.1-deprecated/concepts-clients.md
deleted file mode 100644
index 4040624f7d6366..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/concepts-clients.md
+++ /dev/null
@@ -1,92 +0,0 @@
----
-id: concepts-clients
-title: Pulsar Clients
-sidebar_label: "Clients"
-original_id: concepts-clients
----
-
-Pulsar exposes a client API with language bindings for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python.md), [C++](client-libraries-cpp.md) and [C#](client-libraries-dotnet.md). The client API optimizes and encapsulates Pulsar's client-broker communication protocol and exposes a simple and intuitive API for use by applications.
-
-Under the hood, the current official Pulsar client libraries support transparent reconnection and/or connection failover to brokers, queuing of messages until acknowledged by the broker, and heuristics such as connection retries with backoff.
-
-> **Custom client libraries**
-> If you'd like to create your own client library, we recommend consulting the documentation on Pulsar's custom [binary protocol](developing-binary-protocol.md).
-
-
-## Client setup phase
-
-Before an application creates a producer/consumer, the Pulsar client library needs to initiate a setup phase including two steps:
-
-1. The client attempts to determine the owner of the topic by sending an HTTP lookup request to the broker. The request could reach one of the active brokers which, by looking at the (cached) zookeeper metadata, knows who is serving the topic or, in case nobody is serving it, tries to assign it to the least loaded broker.
-1. Once the client library has the broker address, it creates a TCP connection (or reuses an existing connection from the pool) and authenticates it. Within this connection, client and broker exchange binary commands from a custom protocol. At this point, the client sends a command to create a producer/consumer to the broker, which complies after validating the authorization policy.
-
-Whenever the TCP connection breaks, the client immediately re-initiates this setup phase and keeps trying with exponential backoff to re-establish the producer or consumer until the operation succeeds.
-
-## Reader interface
-
-In Pulsar, the "standard" [consumer interface](concepts-messaging.md#consumers) involves using consumers to listen on [topics](reference-terminology.md#topic), process incoming messages, and finally acknowledge those messages when they are processed.
Whenever a new subscription is created, it is initially positioned at the end of the topic (by default), and consumers associated with that subscription begin reading with the first message created afterwards. Whenever a consumer connects to a topic using a pre-existing subscription, it begins reading from the earliest unacknowledged message within that subscription. In summary, with the consumer interface, subscription cursors are automatically managed by Pulsar in response to [message acknowledgements](concepts-messaging.md#acknowledgement).
-
-The **reader interface** for Pulsar enables applications to manually manage cursors. When you use a reader to connect to a topic---rather than a consumer---you need to specify *which* message the reader begins reading from when it connects to a topic. When connecting to a topic, the reader interface enables you to begin with:
-
-* The **earliest** available message in the topic
-* The **latest** available message in the topic
-* Some other message between the earliest and the latest. If you select this option, you'll need to explicitly provide a message ID. Your application will be responsible for "knowing" this message ID in advance, perhaps fetching it from a persistent data store or cache.
-
-The reader interface is helpful for use cases like using Pulsar to provide effectively-once processing semantics for a stream processing system. For this use case, it's essential that the stream processing system be able to "rewind" topics to a specific message and begin reading there. The reader interface provides Pulsar clients with the low-level abstraction necessary to "manually position" themselves within a topic.
-
-Internally, the reader interface is implemented as a consumer using an exclusive, non-durable subscription to the topic with a randomly-allocated name.
-
-[ **IMPORTANT** ]
-
-Unlike subscriptions/consumers, readers are non-durable in nature and do not prevent data in a topic from being deleted, thus it is ***strongly*** advised that [data retention](cookbooks-retention-expiry.md) be configured. If data retention for a topic is not configured for an adequate amount of time, messages that the reader has not yet read might be deleted. This causes the readers to essentially skip messages. Configuring data retention for a topic guarantees the reader a certain duration in which to read a message.
-
-Please also note that a reader can have a "backlog", but the metric is only used for users to know how far behind the reader is. The metric is not considered for any backlog quota calculations.
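-
-As an illustration of the retention advice above, here is a minimal, hypothetical sketch of configuring namespace-level retention with the Java admin client (the admin URL, namespace, and retention values are assumptions, not recommendations):
-
-```java
-
-import org.apache.pulsar.client.admin.PulsarAdmin;
-import org.apache.pulsar.common.policies.data.RetentionPolicies;
-
-public class RetentionSketch {
-    public static void main(String[] args) throws Exception {
-        PulsarAdmin admin = PulsarAdmin.builder()
-                .serviceHttpUrl("http://localhost:8080")
-                .build();
-
-        // Retain acknowledged data for up to 7 days or 10 GB per topic,
-        // whichever limit is hit first, so readers have time to catch up.
-        RetentionPolicies retention = new RetentionPolicies(7 * 24 * 60 /* minutes */, 10 * 1024 /* MB */);
-        admin.namespaces().setRetention("my-tenant/my-namespace", retention);
-
-        admin.close();
-    }
-}
-
-```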
-
-![The Pulsar consumer and reader interfaces](/assets/pulsar-reader-consumer-interfaces.png)
-
-Here's a Java example that begins reading from the earliest available message on a topic:
-
-```java
-
-import org.apache.pulsar.client.api.Message;
-import org.apache.pulsar.client.api.MessageId;
-import org.apache.pulsar.client.api.Reader;
-
-// Create a reader on a topic and for a specific message (and onward)
-Reader<byte[]> reader = pulsarClient.newReader()
-    .topic("reader-api-test")
-    .startMessageId(MessageId.earliest)
-    .create();
-
-while (true) {
-    Message<byte[]> message = reader.readNext();
-
-    // Process the message
-}
-
-```
-
-To create a reader that reads from the latest available message:
-
-```java
-
-Reader<byte[]> reader = pulsarClient.newReader()
-    .topic(topic)
-    .startMessageId(MessageId.latest)
-    .create();
-
-```
-
-To create a reader that reads from some message between the earliest and the latest:
-
-```java
-
-byte[] msgIdBytes = // Some byte array
-MessageId id = MessageId.fromByteArray(msgIdBytes);
-Reader<byte[]> reader = pulsarClient.newReader()
-    .topic(topic)
-    .startMessageId(id)
-    .create();
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/concepts-messaging.md b/site2/website/versioned_docs/version-2.8.1-deprecated/concepts-messaging.md
deleted file mode 100644
index 045d294e96e6e0..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/concepts-messaging.md
+++ /dev/null
@@ -1,700 +0,0 @@
----
-id: concepts-messaging
-title: Messaging
-sidebar_label: "Messaging"
-original_id: concepts-messaging
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-Pulsar is built on the [publish-subscribe](https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern) pattern (often abbreviated to pub-sub). In this pattern, [producers](#producers) publish messages to [topics](#topics). [Consumers](#consumers) [subscribe](#subscription-types) to those topics, process incoming messages, and send an acknowledgement when processing is complete.
-
-When a subscription is created, Pulsar [retains](concepts-architecture-overview.md#persistent-storage) all messages, even if the consumer is disconnected. Retained messages are discarded only when a consumer acknowledges that those messages are processed successfully.
-
-## Messages
-
-Messages are the basic "unit" of Pulsar. The following table lists the components of messages.
-
-Component | Description
-:---------|:-------
-Value / data payload | The data carried by the message. All Pulsar messages contain raw bytes, although message data can also conform to data [schemas](schema-get-started.md).
-Key | Messages are optionally tagged with keys, which is useful for things like [topic compaction](concepts-topic-compaction.md).
-Properties | An optional key/value map of user-defined properties.
-Producer name | The name of the producer who produces the message. If you do not specify a producer name, the default name is used.
-Sequence ID | Each Pulsar message belongs to an ordered sequence on its topic. The sequence ID of the message is its order in that sequence.
-Publish time | The timestamp of when the message is published. The timestamp is automatically applied by the producer.
-Event time | An optional timestamp attached to a message by applications. For example, applications attach a timestamp on when the message is processed. If no event time is set, the value is `0`.
-TypedMessageBuilder | It is used to construct a message.
You can set message properties such as the message key and the message value with `TypedMessageBuilder`.
    When you set `TypedMessageBuilder`, set the key as a string. If you set the key as another type, for example, an AVRO object, the key is sent as bytes, and it is difficult to get the AVRO object back on the consumer.
-
-The default maximum size of a message is 5 MB. You can configure the max size of a message with the following configurations.
-
-- In the `broker.conf` file.
-
-  ```bash
-
-  # The max size of a message (in bytes).
-  maxMessageSize=5242880
-
-  ```
-
-- In the `bookkeeper.conf` file.
-
-  ```bash
-
-  # The max size of the netty frame (in bytes). Any messages received larger than this value are rejected. The default value is 5 MB.
-  nettyMaxFrameSizeBytes=5253120
-
-  ```
-
-> For more information on Pulsar message contents, see Pulsar [binary protocol](developing-binary-protocol.md).
-
-## Producers
-
-A producer is a process that attaches to a topic and publishes messages to a Pulsar [broker](reference-terminology.md#broker). The Pulsar broker processes the messages.
-
-### Send modes
-
-Producers send messages to brokers synchronously (sync) or asynchronously (async).
-
-| Mode       | Description |
-|:-----------|-----------|
-| Sync send  | The producer waits for an acknowledgement from the broker after sending every message. If the acknowledgment is not received, the producer treats the sending operation as a failure. |
-| Async send | The producer puts a message in a blocking queue and returns immediately. The client library sends the message to the broker in the background. If the queue is full (you can [configure](reference-configuration.md#broker) the maximum size), the producer is blocked or fails immediately when calling the API, depending on arguments passed to the producer. |
-
-### Access mode
-
-You can have different types of access modes on topics for producers.
-
-|Access mode | Description
-|---|---
-`Shared`|Multiple producers can publish on a topic.

    This is the **default** setting. -`Exclusive`|Only one producer can publish on a topic.

    If there is already a producer connected, other producers trying to publish on this topic get errors immediately.

    The "old" producer is evicted and a "new" producer is selected to be the next exclusive producer if the "old" producer experiences a network partition with the broker. -`WaitForExclusive`|If there is already a producer connected, the producer creation is pending (rather than timing out) until the producer gets the `Exclusive` access.

    The producer that succeeds in becoming the exclusive one is treated as the leader. Consequently, if you want to implement a leader election scheme for your application, you can use this access mode.
-
-:::note
-
-Once an application creates a producer with the `Exclusive` or `WaitForExclusive` access mode successfully, the instance of the application is guaranteed to be the **only one writer** on the topic. Other producers trying to produce on this topic get errors immediately or have to wait until they get the `Exclusive` access.
-For more information, see [PIP 68: Exclusive Producer](https://github.com/apache/pulsar/wiki/PIP-68:-Exclusive-Producer).
-
-:::
-
-You can set the producer access mode through the Java client API. For more information, see `ProducerAccessMode` in [ProducerBuilder.java](https://github.com/apache/pulsar/blob/fc5768ca3bbf92815d142fe30e6bfad70a1b4fc6/pulsar-client-api/src/main/java/org/apache/pulsar/client/api/ProducerBuilder.java).
-
-
-### Compression
-
-You can compress messages published by producers during transportation. Pulsar currently supports the following types of compression:
-
-* [LZ4](https://github.com/lz4/lz4)
-* [ZLIB](https://zlib.net/)
-* [ZSTD](https://facebook.github.io/zstd/)
-* [SNAPPY](https://google.github.io/snappy/)
-
-### Batching
-
-When batching is enabled, the producer accumulates and sends a batch of messages in a single request. The batch size is defined by the maximum number of messages and the maximum publish latency. Therefore, the backlog size represents the total number of batches instead of the total number of messages.
-
-In Pulsar, batches are tracked and stored as single units rather than as individual messages. The consumer unbundles a batch into individual messages. However, scheduled messages (configured through the `deliverAt` or the `deliverAfter` parameter) are always sent as individual messages, even when batching is enabled.
-
-In general, a batch is acknowledged when all of its messages are acknowledged by a consumer. It means unexpected failures, negative acknowledgements, or acknowledgement timeouts can result in redelivery of all messages in a batch, even if some of the messages are acknowledged.
-
-To avoid redelivering acknowledged messages in a batch to the consumer, Pulsar has supported batch index acknowledgement since Pulsar 2.6.0. When batch index acknowledgement is enabled, the consumer filters out the batch index that has been acknowledged and sends the batch index acknowledgement request to the broker. The broker maintains the batch index acknowledgement status and tracks the acknowledgement status of each batch index to avoid dispatching acknowledged messages to the consumer. When all indexes of the batch message are acknowledged, the batch message is deleted.
-
-By default, batch index acknowledgement is disabled (`acknowledgmentAtBatchIndexLevelEnabled=false`). You can enable batch index acknowledgement by setting the `acknowledgmentAtBatchIndexLevelEnabled` parameter to `true` at the broker side. Enabling batch index acknowledgement results in more memory overheads.
-
-### Chunking
-When you enable chunking, read the following instructions; a configuration sketch follows this list.
-- Batching and chunking cannot be enabled simultaneously. To enable chunking, you must disable batching in advance.
-- Chunking is only supported for persistent topics.
-- Chunking is only supported for the exclusive and failover subscription types.
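-
-Here is the configuration sketch referenced in the list above (a minimal, hypothetical Java example; the service URL and topic name are assumptions):
-
-```java
-
-import org.apache.pulsar.client.api.Producer;
-import org.apache.pulsar.client.api.PulsarClient;
-
-public class ChunkingSketch {
-    public static void main(String[] args) throws Exception {
-        PulsarClient client = PulsarClient.builder()
-                .serviceUrl("pulsar://localhost:6650")
-                .build();
-
-        // Chunking requires batching to be disabled, and the topic must be persistent.
-        Producer<byte[]> producer = client.newProducer()
-                .topic("persistent://public/default/large-messages")
-                .enableChunking(true)
-                .enableBatching(false)
-                .create();
-
-        // A payload larger than the broker's max message size is split into chunks.
-        producer.send(new byte[10 * 1024 * 1024]);
-
-        producer.close();
-        client.close();
-    }
-}
-
-```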
-
-When chunking is enabled (`chunkingEnabled=true`), if the message size is greater than the allowed maximum publish-payload size, the producer splits the original message into chunked messages and publishes them with chunked metadata to the broker separately and in order. At the broker side, the chunked messages are stored in the managed-ledger in the same way as ordinary messages. The only difference is that the consumer needs to buffer the chunked messages and combine them into the real message when all chunked messages have been collected. The chunked messages in the managed-ledger can be interwoven with ordinary messages. If the producer fails to publish all the chunks of a message, the consumer can expire incomplete chunks when the consumer fails to receive all chunks within the expiry time. By default, the expiry time is set to one minute.
-
-The consumer consumes the chunked messages and buffers them until the consumer receives all the chunks of a message. The consumer then stitches the chunked messages together and places them into the receiver-queue. Clients consume messages from the receiver-queue. Once the consumer consumes the entire large message and acknowledges it, the consumer internally sends acknowledgement of all the chunk messages associated to that large message. You can set the `maxPendingChunkedMessage` parameter on the consumer. When the threshold is reached, the consumer drops the pending chunked messages by silently acknowledging them or asks the broker to redeliver them later by marking them unacknowledged.
-
-The broker does not require any changes to support chunking for non-shared subscriptions. The broker only uses `chunkedMessageRate` to record the chunked message rate on the topic.
-
-#### Handle chunked messages with one producer and one ordered consumer
-
-As shown in the following figure, a topic can have one producer which publishes a large message payload in chunked messages along with regular non-chunked messages. The producer publishes message M1 in three chunks M1-C1, M1-C2 and M1-C3. The broker stores all the three chunked messages in the managed-ledger and dispatches them to the ordered (exclusive/failover) consumer in the same order. The consumer buffers all the chunked messages in memory until it receives all the chunked messages, combines them into one message and then hands over the original message M1 to the client.
-
-![](/assets/chunking-01.png)
-
-#### Handle chunked messages with multiple producers and one ordered consumer
-
-When multiple publishers publish chunked messages into a single topic, the broker stores all the chunked messages coming from different publishers in the same managed-ledger. As shown below, Producer 1 publishes message M1 in three chunks M1-C1, M1-C2 and M1-C3. Producer 2 publishes message M2 in three chunks M2-C1, M2-C2 and M2-C3. All chunked messages of a specific message are still in order but might not be consecutive in the managed-ledger. This brings some memory pressure to the consumer because the consumer keeps a separate buffer for each large message to aggregate all its chunks and combine them into one message.
-
-![](/assets/chunking-02.png)
-
-## Consumers
-
-A consumer is a process that attaches to a topic via a subscription and then receives messages.
-
-A consumer sends a [flow permit request](developing-binary-protocol.md#flow-control) to a broker to get messages. There is a queue at the consumer side to receive messages pushed from the broker.
You can configure the queue size with the [`receiverQueueSize`](client-libraries-java.md#configure-consumer) parameter. The default size is `1000`. Each time `consumer.receive()` is called, a message is dequeued from the buffer.
-
-### Receive modes
-
-Messages are received from [brokers](reference-terminology.md#broker) either synchronously (sync) or asynchronously (async).
-
-| Mode          | Description |
-|:--------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| Sync receive  | A sync receive is blocked until a message is available. |
-| Async receive | An async receive returns immediately with a future value—for example, a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture) in Java—that completes once a new message is available. |
-
-### Listeners
-
-Client libraries provide listener implementations for consumers. For example, the [Java client](client-libraries-java.md) provides a {@inject: javadoc:MessageListener:/client/org/apache/pulsar/client/api/MessageListener} interface. In this interface, the `received` method is called whenever a new message is received.
-
-### Acknowledgement
-
-When a consumer consumes a message successfully, the consumer sends an acknowledgement request to the broker. The message is permanently stored, and then deleted only after all the subscriptions have acknowledged it. If you want to store a message that has been acknowledged by a consumer, you need to configure the [message retention policy](concepts-messaging.md#message-retention-and-expiry).
-
-For a batch message, if batch index acknowledgement is enabled, the broker maintains the batch index acknowledgement status and tracks the acknowledgement status of each batch index to avoid dispatching acknowledged messages to the consumer. When all indexes of the batch message are acknowledged, the batch message is deleted. For details about batch index acknowledgement, see [batching](#batching).
-
-Messages can be acknowledged in the following two ways:
-
-- Messages are acknowledged individually. With individual acknowledgement, the consumer needs to acknowledge each message and sends an acknowledgement request to the broker.
-- Messages are acknowledged cumulatively. With cumulative acknowledgement, the consumer only needs to acknowledge the last message it received. All messages in the stream up to (and including) the provided message are not re-delivered to that consumer.
-
-:::note
-
-Cumulative acknowledgement cannot be used in the [Shared subscription type](#subscription-types), because this subscription type involves multiple consumers which have access to the same subscription. In the Shared subscription type, messages are acknowledged individually.
-
-:::
-
-### Negative acknowledgement
-
-When a consumer fails to consume a message and wants to consume it again, the consumer sends a negative acknowledgement to the broker, and the broker then redelivers the message.
-
-Messages are negatively acknowledged either individually or cumulatively, depending on the consumption subscription type.
-
-In the exclusive and failover subscription types, consumers only negatively acknowledge the last message they receive.
-
-In the shared and Key_Shared subscription types, you can negatively acknowledge messages individually.
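-
-For illustration, here is a minimal sketch of negative acknowledgement in the Java client (the topic name and redelivery delay are assumptions; `negativeAckRedeliveryDelay` controls how long the broker waits before redelivering a negatively acknowledged message):
-
-```java
-
-import java.util.concurrent.TimeUnit;
-
-import org.apache.pulsar.client.api.Consumer;
-import org.apache.pulsar.client.api.Message;
-import org.apache.pulsar.client.api.PulsarClient;
-import org.apache.pulsar.client.api.SubscriptionType;
-
-public class NegativeAckSketch {
-    public static void main(String[] args) throws Exception {
-        PulsarClient client = PulsarClient.builder()
-                .serviceUrl("pulsar://localhost:6650")
-                .build();
-
-        Consumer<byte[]> consumer = client.newConsumer()
-                .topic("persistent://public/default/my-topic")
-                .subscriptionName("my-subscription")
-                .subscriptionType(SubscriptionType.Shared)
-                .negativeAckRedeliveryDelay(1, TimeUnit.MINUTES) // redeliver nacked messages after 1 minute
-                .subscribe();
-
-        Message<byte[]> msg = consumer.receive();
-        try {
-            // process the message ...
-            consumer.acknowledge(msg);
-        } catch (Exception e) {
-            // Processing failed: ask the broker to redeliver the message later.
-            consumer.negativeAcknowledge(msg);
-        }
-
-        consumer.close();
-        client.close();
-    }
-}
-
-```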
### Acknowledgement timeout

If a message is not consumed successfully and you want the broker to redeliver the message automatically, you can use the automatic re-delivery mechanism for unacknowledged messages. The client tracks unacknowledged messages within the configured `ackTimeout` window, and automatically sends a *redeliver unacknowledged messages* request to the broker when the acknowledgement timeout elapses.

:::note

If batching is enabled, all messages in the same batch as an unacknowledged message are redelivered to the consumer.

:::

:::note

Prefer negative acknowledgements over acknowledgement timeout. Negative acknowledgement controls the re-delivery of individual messages with more precision, and avoids invalid redeliveries when the message processing time exceeds the acknowledgement timeout.

:::

### Dead letter topic

A dead letter topic enables you to continue consuming new messages even when some messages cannot be consumed successfully by a consumer. In this mechanism, messages that fail to be consumed are stored in a separate topic, called the dead letter topic. You can decide how to handle the messages in the dead letter topic.

The following example shows how to enable a dead letter topic in a Java client using the default dead letter topic:

```java

Consumer<byte[]> consumer = pulsarClient.newConsumer(Schema.BYTES)
              .topic(topic)
              .subscriptionName("my-subscription")
              .subscriptionType(SubscriptionType.Shared)
              .deadLetterPolicy(DeadLetterPolicy.builder()
                    .maxRedeliverCount(maxRedeliveryCount)
                    .build())
              .subscribe();

```

The default dead letter topic uses this format:

```

<topicname>-<subscriptionname>-DLQ

```

If you want to specify the name of the dead letter topic, use this Java client example:

```java

Consumer<byte[]> consumer = pulsarClient.newConsumer(Schema.BYTES)
              .topic(topic)
              .subscriptionName("my-subscription")
              .subscriptionType(SubscriptionType.Shared)
              .deadLetterPolicy(DeadLetterPolicy.builder()
                    .maxRedeliverCount(maxRedeliveryCount)
                    .deadLetterTopic("your-topic-name")
                    .build())
              .subscribe();

```

Dead letter topic depends on message re-delivery. Messages are redelivered either due to [acknowledgement timeout](#acknowledgement-timeout) or [negative acknowledgement](#negative-acknowledgement). If you are going to use negative acknowledgement on a message, make sure it is negatively acknowledged before the acknowledgement timeout.

:::note

Currently, dead letter topic is enabled in the Shared and Key_Shared subscription types.

:::

### Retry letter topic

In many online business systems, a message needs to be re-consumed because an exception occurs during business logic processing. To configure the delay time for re-consuming failed messages, you can configure the producer to send messages to both the business topic and the retry letter topic, and enable automatic retry on the consumer. When automatic retry is enabled, a message that is not consumed successfully is stored in the retry letter topic, and the consumer automatically consumes the failed messages from the retry letter topic after a specified delay time.

By default, automatic retry is disabled. You can set `enableRetry` to `true` to enable automatic retry on the consumer.

This example shows how to consume messages from a retry letter topic.

```java

Consumer<byte[]> consumer = pulsarClient.newConsumer(Schema.BYTES)
                .topic(topic)
                .subscriptionName("my-subscription")
                .subscriptionType(SubscriptionType.Shared)
                .enableRetry(true)
                .receiverQueueSize(100)
                .deadLetterPolicy(DeadLetterPolicy.builder()
                        .maxRedeliverCount(maxRedeliveryCount)
                        .retryLetterTopic("persistent://my-property/my-ns/my-subscription-custom-Retry")
                        .build())
                .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest)
                .subscribe();

```
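As a sketch of how a message typically ends up in the retry letter topic: with `enableRetry(true)` set as above, the consumer can call `reconsumeLater` on a message that fails processing (the handling logic here is illustrative):

```java

Message<byte[]> msg = consumer.receive();
try {
    // ... process the message ...
    consumer.acknowledge(msg);
} catch (Exception e) {
    // Route the message to the retry letter topic and redeliver it after 10 seconds
    consumer.reconsumeLater(msg, 10, TimeUnit.SECONDS);
}

```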
## Topics

As in other pub-sub systems, topics in Pulsar are named channels for transmitting messages from producers to consumers. Topic names are URLs that have a well-defined structure:

```http

{persistent|non-persistent}://tenant/namespace/topic

```

Topic name component | Description
:--------------------|:-----------
`persistent` / `non-persistent` | This identifies the type of topic. Pulsar supports two kinds of topics: [persistent](concepts-architecture-overview.md#persistent-storage) and [non-persistent](#non-persistent-topics). The default is persistent, so if you do not specify a type, the topic is persistent. With persistent topics, all messages are durably persisted on disks (if the broker is not standalone, messages are durably persisted on multiple disks), whereas data for non-persistent topics is not persisted to storage disks.
`tenant` | The topic tenant within the instance. Tenants are essential to multi-tenancy in Pulsar, and spread across clusters.
`namespace` | The administrative unit of the topic, which acts as a grouping mechanism for related topics. Most topic configuration is performed at the [namespace](#namespaces) level. Each tenant has one or multiple namespaces.
`topic` | The final part of the name. Topic names have no special meaning in a Pulsar instance.

> **No need to explicitly create new topics**
> You do not need to explicitly create topics in Pulsar. If a client attempts to write or receive messages to/from a topic that does not yet exist, Pulsar automatically creates that topic under the namespace provided in the [topic name](#topics).
> If no tenant or namespace is specified when a client creates a topic, the topic is created in the default tenant and namespace. You can also create a topic in a specified tenant and namespace, such as `persistent://my-tenant/my-namespace/my-topic`. `persistent://my-tenant/my-namespace/my-topic` means the `my-topic` topic is created in the `my-namespace` namespace of the `my-tenant` tenant.

## Namespaces

A namespace is a logical nomenclature within a tenant. A tenant can create multiple namespaces via the [admin API](admin-api-namespaces.md#create). For instance, a tenant with different applications can create a separate namespace for each application. A namespace allows the application to create and manage a hierarchy of topics. The topic `my-tenant/app1` is a namespace for the application `app1` of `my-tenant`. You can create any number of [topics](#topics) under the namespace.
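For illustration, a namespace can also be created with the Java admin client; the service URL and names below are assumptions:

```java

PulsarAdmin admin = PulsarAdmin.builder()
        .serviceHttpUrl("http://localhost:8080")
        .build();

// Create a namespace for the application "app1" under the tenant "my-tenant"
admin.namespaces().createNamespace("my-tenant/app1");
admin.close();

```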
## Subscriptions

A subscription is a named configuration rule that determines how messages are delivered to consumers. Four subscription types are available in Pulsar: [exclusive](#exclusive), [shared](#shared), [failover](#failover), and [key_shared](#key_shared). These types are illustrated in the figure below.

![Subscription types](/assets/pulsar-subscription-types.png)

> **Pub-Sub or Queuing**
> In Pulsar, you can use different subscriptions flexibly.
> * If you want to achieve traditional "fan-out pub-sub messaging" among consumers, specify a unique subscription name for each consumer. This is the exclusive subscription type.
> * If you want to achieve "message queuing" among consumers, share the same subscription name among multiple consumers (shared, failover, key_shared).
> * If you want to achieve both effects simultaneously, combine the exclusive subscription type with other subscription types for consumers.

### Subscription types

When a subscription has no consumers, its subscription type is undefined. The type of a subscription is defined when a consumer connects to it, and the type can be changed by restarting all consumers with a different configuration.

#### Exclusive

In the *Exclusive* type, only a single consumer is allowed to attach to the subscription. If multiple consumers subscribe to a topic using the same subscription, an error occurs.

In the diagram below, only **Consumer A-0** is allowed to consume messages.

> Exclusive is the default subscription type.

![Exclusive subscriptions](/assets/pulsar-exclusive-subscriptions.png)

#### Failover

In the *Failover* type, multiple consumers can attach to the same subscription. A master consumer is picked for a non-partitioned topic, or for each partition of a partitioned topic, to receive messages. When the master consumer disconnects, all (non-acknowledged and subsequent) messages are delivered to the next consumer in line.

For partitioned topics, the broker sorts consumers by priority level and by the lexicographical order of consumer names. The broker then tries to evenly assign partitions to the consumers with the highest priority level.

For non-partitioned topics, the broker picks consumers in the order in which they subscribe to the topic.

In the diagram below, **Consumer-B-0** is the master consumer, while **Consumer-B-1** would be the next consumer in line to receive messages if **Consumer-B-0** were disconnected.

![Failover subscriptions](/assets/pulsar-failover-subscriptions.png)

#### Shared

In the *Shared* or *round robin* type, multiple consumers can attach to the same subscription. Messages are delivered in a round-robin distribution across consumers, and any given message is delivered to only one consumer. When a consumer disconnects, all the messages that were sent to it and not acknowledged are rescheduled for sending to the remaining consumers.

In the diagram below, **Consumer-C-1** and **Consumer-C-2** are able to subscribe to the topic, but **Consumer-C-3** and others could as well.

> **Limitations of Shared type**
> When using the Shared type, be aware that:
> * Message ordering is not guaranteed.
> * You cannot use cumulative acknowledgment with the Shared type.

![Shared subscriptions](/assets/pulsar-shared-subscriptions.png)
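As a sketch, the subscription type is specified when building a consumer; for example, several consumers can share one queue-style subscription (topic and names are illustrative):

```java

Consumer<byte[]> consumer = pulsarClient.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-shared-subscription")
        // All consumers that use the same subscription name share the work
        .subscriptionType(SubscriptionType.Shared)
        .subscribe();

```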
#### Key_Shared

In the *Key_Shared* type, multiple consumers can attach to the same subscription. Messages are delivered in a distribution across consumers, and messages with the same key or the same ordering key are delivered to only one consumer. No matter how many times a message is re-delivered, it is delivered to the same consumer. When a consumer connects or disconnects, the consumer serving some keys changes.

> **Limitations of Key_Shared type**
> When you use the Key_Shared type, be aware that:
> * You need to specify a key or orderingKey for messages.
> * You cannot use cumulative acknowledgment with the Key_Shared type.
> * Your producers should disable batching or use a key-based batch builder.

![Key_Shared subscriptions](/assets/pulsar-key-shared-subscriptions.png)

**You can disable the Key_Shared subscription type in the `broker.conf` file.**

### Subscription modes

#### What is a subscription mode

The subscription mode indicates the cursor type.

- When a subscription is created, an associated cursor is created to record the last consumed position.
- When a consumer of the subscription restarts, it can continue consuming from the last message it consumed.

Subscription mode | Description | Note
|---|---|---
`Durable`|The cursor is durable, which retains messages and persists the current position.<br /><br />If a broker restarts from a failure, it can recover the cursor from the persistent storage (BookKeeper), so that messages can continue to be consumed from the last consumed position.|`Durable` is the **default** subscription mode.
`NonDurable`|The cursor is non-durable.<br /><br />Once a broker stops, the cursor is lost and can never be recovered, so that messages **cannot** continue to be consumed from the last consumed position.|The reader's subscription mode is `NonDurable` in nature and it does not prevent data in a topic from being deleted. The reader's subscription mode **cannot** be changed.

A [subscription](#subscriptions) can have one or more consumers. When a consumer subscribes to a topic, it must specify the subscription name. A durable subscription and a non-durable subscription can have the same name; they are independent of each other. If a consumer specifies a subscription that does not already exist, the subscription is automatically created.

#### When to use

By default, messages of a topic without any durable subscriptions are marked as deleted. If you want to prevent the messages from being marked as deleted, you can create a durable subscription for this topic. In this case, only acknowledged messages are marked as deleted. For more information, see [message retention and expiry](cookbooks-retention-expiry.md).

#### How to use

After a consumer is created, the default subscription mode of the consumer is `Durable`. You can change the subscription mode to `NonDurable` by making changes to the consumer's configuration.

````mdx-code-block
<Tabs>
<TabItem value="Durable">

```java

Consumer<byte[]> consumer = pulsarClient.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-sub")
        .subscriptionMode(SubscriptionMode.Durable)
        .subscribe();

```

</TabItem>
<TabItem value="Non-durable">

```java

Consumer<byte[]> consumer = pulsarClient.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-sub")
        .subscriptionMode(SubscriptionMode.NonDurable)
        .subscribe();

```

</TabItem>
</Tabs>
````

For how to create, check, or delete a durable subscription, see [manage subscriptions](admin-api-topics.md#manage-subscriptions).

## Multi-topic subscriptions

When a consumer subscribes to a Pulsar topic, by default it subscribes to one specific topic, such as `persistent://public/default/my-topic`. As of Pulsar version 1.23.0-incubating, however, Pulsar consumers can simultaneously subscribe to multiple topics. You can define a list of topics in two ways:

* On the basis of a [**reg**ular **ex**pression](https://en.wikipedia.org/wiki/Regular_expression) (regex), for example `persistent://public/default/finance-.*`
* By explicitly defining a list of topics

> When subscribing to multiple topics by regex, all topics must be in the same [namespace](#namespaces).

When subscribing to multiple topics, the Pulsar client automatically makes a call to the Pulsar API to discover the topics that match the regex pattern/list, and then subscribes to all of them. If any of the topics do not exist, the consumer auto-subscribes to them once the topics are created.

> **No ordering guarantees across multiple topics**
> When a producer sends messages to a single topic, all messages are guaranteed to be read from that topic in the same order. However, these guarantees do not hold across multiple topics. So when a producer sends messages to multiple topics, the order in which messages are read from those topics is not guaranteed to be the same.

The following are multi-topic subscription examples for Java.
```java

import java.util.regex.Pattern;

import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.PulsarClient;

PulsarClient pulsarClient = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650")
        .build();

// Subscribe to all topics in a namespace
Pattern allTopicsInNamespace = Pattern.compile("persistent://public/default/.*");
Consumer<byte[]> allTopicsConsumer = pulsarClient.newConsumer()
        .topicsPattern(allTopicsInNamespace)
        .subscriptionName("subscription-1")
        .subscribe();

// Subscribe to a subset of topics in a namespace, based on regex
Pattern someTopicsInNamespace = Pattern.compile("persistent://public/default/foo.*");
Consumer<byte[]> someTopicsConsumer = pulsarClient.newConsumer()
        .topicsPattern(someTopicsInNamespace)
        .subscriptionName("subscription-1")
        .subscribe();

```

For code examples, see [Java](client-libraries-java.md#multi-topic-subscriptions).

## Partitioned topics

Normal topics are served only by a single broker, which limits the maximum throughput of the topic. *Partitioned topics* are a special type of topic that are handled by multiple brokers, thus allowing for higher throughput.

A partitioned topic is actually implemented as N internal topics, where N is the number of partitions. When publishing messages to a partitioned topic, each message is routed to one of several brokers. The distribution of partitions across brokers is handled automatically by Pulsar.

The diagram below illustrates this:

![](/assets/partitioning.png)

The **Topic1** topic has five partitions (**P0** through **P4**) split across three brokers. Because there are more partitions than brokers, two brokers handle two partitions apiece, while the third handles only one (again, Pulsar handles this distribution of partitions automatically).

Messages for this topic are broadcast to two consumers. The [routing mode](#routing-modes) determines which partition each message is published to, while the [subscription type](#subscription-types) determines which messages go to which consumers.

Decisions about routing and subscription types can be made separately in most cases. In general, throughput concerns should guide partitioning/routing decisions, while subscription decisions should be guided by application semantics.

There is no difference between partitioned topics and normal topics in terms of how subscription types work, as partitioning only determines what happens between the time a message is published by a producer and the time it is processed and acknowledged by a consumer.

Partitioned topics need to be explicitly created via the [admin API](admin-api-overview.md). The number of partitions can be specified when creating the topic.

### Routing modes

When publishing to partitioned topics, you must specify a *routing mode*. The routing mode determines which partition---that is, which internal topic---each message should be published to.

There are three {@inject: javadoc:MessageRoutingMode:/client/org/apache/pulsar/client/api/MessageRoutingMode} options available:

Mode | Description
:--------|:------------
`RoundRobinPartition` | If no key is provided, the producer publishes messages across all partitions in round-robin fashion to achieve maximum throughput. Note that round-robin is not done per individual message; it is applied at the same boundary as the batching delay, to ensure that batching is effective. If a key is specified on the message, the partitioned producer hashes the key and assigns the message to a particular partition. This is the default mode.
`SinglePartition` | If no key is provided, the producer randomly picks one single partition and publishes all the messages into that partition. If a key is specified on the message, the partitioned producer hashes the key and assigns the message to a particular partition.
`CustomPartition` | Use a custom message router implementation that is called to determine the partition for a particular message. You can create a custom routing mode by using the [Java client](client-libraries-java.md) and implementing the {@inject: javadoc:MessageRouter:/client/org/apache/pulsar/client/api/MessageRouter} interface.
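For illustration, a custom router might route on a message property; the `device-id` property and the class below are assumptions, not part of the Pulsar API:

```java

class DeviceIdRouter implements MessageRouter {
    @Override
    public int choosePartition(Message<?> msg, TopicMetadata metadata) {
        // Keep all messages from the same device on the same partition
        String deviceId = msg.getProperty("device-id");
        return (deviceId.hashCode() & Integer.MAX_VALUE) % metadata.numPartitions();
    }
}

Producer<byte[]> producer = pulsarClient.newProducer()
        .topic("persistent://public/default/my-partitioned-topic")
        .messageRouter(new DeviceIdRouter())
        .create();

```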
### Ordering guarantee

The ordering of messages is related to the MessageRoutingMode and the message key. Usually, users want a per-key-partition ordering guarantee.

If there is a key attached to a message, the messages are routed to corresponding partitions based on the hashing scheme specified by {@inject: javadoc:HashingScheme:/client/org/apache/pulsar/client/api/HashingScheme} in {@inject: javadoc:ProducerBuilder:/client/org/apache/pulsar/client/api/ProducerBuilder}, when using either the `SinglePartition` or the `RoundRobinPartition` mode.

Ordering guarantee | Description | Routing Mode and Key
:------------------|:------------|:------------
Per-key-partition | All messages with the same key are in order and placed in the same partition. | Use either the `SinglePartition` or the `RoundRobinPartition` mode, and provide a key with each message.
Per-producer | All messages from the same producer are in order. | Use the `SinglePartition` mode, and provide no key with the messages.

### Hashing scheme

{@inject: javadoc:HashingScheme:/client/org/apache/pulsar/client/api/HashingScheme} is an enum that represents the set of standard hashing functions available when choosing the partition to use for a particular message.

There are two types of standard hashing functions available: `JavaStringHash` and `Murmur3_32Hash`. The default hashing function for the producer is `JavaStringHash`. Note that `JavaStringHash` is not useful when producers come from multiple language clients; in this case, it is recommended to use `Murmur3_32Hash`.
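As a sketch, the hashing scheme is set on the producer builder; here keyed messages are routed with `Murmur3_32Hash` (the topic name and key are illustrative):

```java

Producer<byte[]> producer = pulsarClient.newProducer()
        .topic("persistent://public/default/my-partitioned-topic")
        .hashingScheme(HashingScheme.Murmur3_32Hash)
        .create();

// Messages with the same key are hashed to the same partition
producer.newMessage()
        .key("AAPL")
        .value("173.50".getBytes())
        .send();

```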
## Non-persistent topics

By default, Pulsar persistently stores *all* unacknowledged messages on multiple [BookKeeper](concepts-architecture-overview.md#persistent-storage) bookies (storage nodes). Data for messages on persistent topics can thus survive broker restarts and subscriber failover.

Pulsar also, however, supports **non-persistent topics**, which are topics on which messages are *never* persisted to disk and live only in memory. When using non-persistent delivery, killing a Pulsar broker or disconnecting a subscriber from a topic means that all in-transit messages are lost on that (non-persistent) topic, meaning that clients may see message loss.

Non-persistent topics have names of this form (note the `non-persistent` in the name):

```http

non-persistent://tenant/namespace/topic

```

> For more info on using non-persistent topics, see the [Non-persistent messaging cookbook](cookbooks-non-persistent.md).

In non-persistent topics, brokers immediately deliver messages to all connected subscribers *without persisting them* in [BookKeeper](concepts-architecture-overview.md#persistent-storage). If a subscriber is disconnected, the broker will not be able to deliver those in-transit messages, and subscribers will never be able to receive those messages again. Eliminating the persistent storage step makes messaging on non-persistent topics slightly faster than on persistent topics in some cases, but with the caveat that some of the core benefits of Pulsar are lost.

> With non-persistent topics, message data lives only in memory. If a message broker fails or message data can otherwise not be retrieved from memory, your message data may be lost. Use non-persistent topics only if you're *certain* that your use case requires it and can sustain it.

By default, non-persistent topics are enabled on Pulsar brokers. You can disable them in the broker's [configuration](reference-configuration.md#broker-enableNonPersistentTopics). You can manage non-persistent topics using the `pulsar-admin topics` command. For more information, see [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/).

### Performance

Non-persistent messaging is usually faster than persistent messaging because brokers don't persist messages and immediately send acks back to the producer as soon as the message is delivered to connected consumers. Producers thus see comparatively low publish latency with non-persistent topics.

### Client API

Producers and consumers can connect to non-persistent topics in the same way as persistent topics, with the crucial difference that the topic name must start with `non-persistent`. All three subscription types---[exclusive](#exclusive), [shared](#shared), and [failover](#failover)---are supported for non-persistent topics.

Here's an example [Java consumer](client-libraries-java.md#consumers) for a non-persistent topic:

```java

PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650")
        .build();
String npTopic = "non-persistent://public/default/my-topic";
String subscriptionName = "my-subscription-name";

Consumer<byte[]> consumer = client.newConsumer()
        .topic(npTopic)
        .subscriptionName(subscriptionName)
        .subscribe();

```

Here's an example [Java producer](client-libraries-java.md#producer) for the same non-persistent topic:

```java

Producer<byte[]> producer = client.newProducer()
        .topic(npTopic)
        .create();

```

## System topic

A system topic is a predefined topic for internal use within Pulsar. It can be either a persistent or a non-persistent topic.

System topics are used to implement certain features (such as transactions, heartbeat detection, topic-level policies, and resource group services) and to eliminate dependencies on third-party components. System topics make the implementation of these features simpler, more independent, and more flexible. Take heartbeat detection as an example: you can leverage the system topic for health checks to internally enable the producer/reader to produce/consume messages under the heartbeat namespace, which can detect whether the current service is still alive.

There are diverse system topics depending on namespaces. The following table outlines the available system topics for each specific namespace.
| Namespace | TopicName | Domain | Count | Usage |
|-----------|-----------|--------|-------|-------|
| pulsar/system | `transaction_coordinator_assign_${id}` | Persistent | Default 16 | Transaction coordinator |
| pulsar/system | `__transaction_log_${tc_id}` | Persistent | Default 16 | Transaction log |
| pulsar/system | `resource-usage` | Non-persistent | Default 4 | Resource group service |
| host/port | `heartbeat` | Persistent | 1 | Heartbeat detection |
| User-defined-ns | [`__change_events`](concepts-multi-tenancy.md#namespace-change-events-and-topic-level-policies) | Persistent | Default 4 | Topic events |
| User-defined-ns | `__transaction_buffer_snapshot` | Persistent | One per namespace | Transaction buffer snapshots |
| User-defined-ns | `${topicName}__transaction_pending_ack` | Persistent | One per topic subscription that is acknowledged with transactions | Acknowledgements with transactions |

:::note

* You cannot create any system topics.
* By default, system topics are disabled. To enable system topics, you need to change the following configurations in the `conf/broker.conf` or `conf/standalone.conf` file.

  ```conf
  systemTopicEnabled=true
  topicLevelPoliciesEnabled=true
  ```

:::

## Message retention and expiry

By default, Pulsar message brokers:

* immediately delete *all* messages that have been acknowledged by a consumer, and
* [persistently store](concepts-architecture-overview.md#persistent-storage) all unacknowledged messages in a message backlog.

Pulsar has two features, however, that enable you to override this default behavior:

* Message **retention** enables you to store messages that have been acknowledged by a consumer.
* Message **expiry** enables you to set a time to live (TTL) for messages that have not yet been acknowledged.

> All message retention and expiry are managed at the [namespace](#namespaces) level. For a how-to, see the [Message retention and expiry](cookbooks-retention-expiry.md) cookbook.

The diagram below illustrates both concepts:

![Message retention and expiry](/assets/retention-expiry.png)

With message retention, shown at the top, a retention policy applied to all topics in a namespace dictates that some messages are durably stored in Pulsar even though they have already been acknowledged. Acknowledged messages that are not covered by the retention policy are deleted. Without a retention policy, *all* of the acknowledged messages would be deleted.

With message expiry, shown at the bottom, some messages are deleted even though they have not been acknowledged, because they have expired according to the TTL applied to the namespace (for example, because a TTL of 5 minutes has been applied and the messages have not been acknowledged but are 10 minutes old).

## Message deduplication

Message duplication occurs when a message is [persisted](concepts-architecture-overview.md#persistent-storage) by Pulsar more than once. Message deduplication is an optional Pulsar feature that prevents unnecessary message duplication by processing each message only once, even if the message is received more than once.

The following diagram illustrates what happens when message deduplication is disabled vs. enabled:

![Pulsar message deduplication](/assets/message-deduplication.png)

Message deduplication is disabled in the scenario shown at the top.
Here, a producer publishes message 1 on a topic; the message reaches a Pulsar broker and is [persisted](concepts-architecture-overview.md#persistent-storage) to BookKeeper. The producer then sends message 1 again (in this case due to some retry logic), and the message is received by the broker and stored in BookKeeper again, which means that duplication has occurred.

In the second scenario at the bottom, the producer publishes message 1, which is received by the broker and persisted, as in the first scenario. When the producer attempts to publish the message again, however, the broker knows that it has already seen message 1 and thus does not persist the message.

> Message deduplication is handled at the namespace level or the topic level. For more instructions, see the [message deduplication cookbook](cookbooks-deduplication.md).

### Producer idempotency

The other available approach to message deduplication is to ensure that each message is *only produced once*. This approach is typically called **producer idempotency**. The drawback of this approach is that it defers the work of message deduplication to the application. In Pulsar, deduplication is handled at the [broker](reference-terminology.md#broker) level, so you do not need to modify your Pulsar client code. Instead, you only need to make administrative changes. For details, see [Managing message deduplication](cookbooks-deduplication.md).

### Deduplication and effectively-once semantics

Message deduplication makes Pulsar an ideal messaging system to be used in conjunction with stream processing engines (SPEs) and other systems seeking to provide effectively-once processing semantics. Messaging systems that do not offer automatic message deduplication require the SPE or other system to guarantee deduplication, which means that strict message ordering comes at the cost of burdening the application with the responsibility of deduplication. With Pulsar, strict ordering guarantees come at no application-level cost.

> You can find more in-depth information in [this post](https://www.splunk.com/en_us/blog/it/exactly-once-is-not-exactly-the-same.html).

## Delayed message delivery

Delayed message delivery enables you to consume a message later rather than immediately. In this mechanism, a message is stored in BookKeeper. After the message is published to a broker, the `DelayedDeliveryTracker` maintains a time index (time -> messageId) in memory, and the message is delivered to a consumer once the specified delay has passed.

Delayed message delivery only works in the Shared subscription type. In the Exclusive and Failover subscription types, the delayed message is dispatched immediately.

The diagram below illustrates the concept of delayed message delivery:

![Delayed Message Delivery](/assets/message_delay.png)

The broker saves a message without any check. When a consumer consumes a message, if the message is set to be delayed, the message is added to the `DelayedDeliveryTracker`. The subscription checks the `DelayedDeliveryTracker` and retrieves the messages whose delay has expired.

### Broker

Delayed message delivery is enabled by default. You can change it in the broker configuration file as below:

```

# Whether to enable the delayed delivery for messages.
# If disabled, messages are immediately delivered and there is no tracking overhead.
delayedDeliveryEnabled=true

# Control the ticking time for the retry of delayed message delivery,
# affecting the accuracy of the delivery time compared to the scheduled time.
# Default is 1 second.
delayedDeliveryTickTimeMillis=1000

```

### Producer

The following is an example of delayed message delivery for a producer in Java:

```java

// message to be delivered at the configured delay interval
producer.newMessage().deliverAfter(3L, TimeUnit.MINUTES).value("Hello Pulsar!").send();

```

diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/concepts-multi-tenancy.md b/site2/website/versioned_docs/version-2.8.1-deprecated/concepts-multi-tenancy.md
deleted file mode 100644
index 93a59557b2efca..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/concepts-multi-tenancy.md
+++ /dev/null
@@ -1,67 +0,0 @@
---
id: concepts-multi-tenancy
title: Multi Tenancy
sidebar_label: "Multi Tenancy"
original_id: concepts-multi-tenancy
---

Pulsar was created from the ground up as a multi-tenant system. To support multi-tenancy, Pulsar has a concept of tenants. Tenants can be spread across clusters and can each have their own [authentication and authorization](security-overview.md) scheme applied to them. They are also the administrative unit at which storage quotas, [message TTL](cookbooks-retention-expiry.md#time-to-live-ttl), and isolation policies can be managed.

The multi-tenant nature of Pulsar is reflected most visibly in topic URLs, which have this structure:

```http

persistent://tenant/namespace/topic

```

As you can see, the tenant is the most basic unit of categorization for topics (more fundamental than the namespace and topic name).

## Tenants

To each tenant in a Pulsar instance you can assign:

* An [authorization](security-authorization.md) scheme
* The set of [clusters](reference-terminology.md#cluster) to which the tenant's configuration applies

## Namespaces

Tenants and namespaces are two key concepts of Pulsar to support multi-tenancy.

* Pulsar is provisioned for specified tenants with appropriate capacity allocated to the tenant.
* A namespace is the administrative unit nomenclature within a tenant. The configuration policies set on a namespace apply to all the topics created in that namespace. A tenant may create multiple namespaces via self-administration using the REST API and the [`pulsar-admin`](reference-pulsar-admin.md) CLI tool. For instance, a tenant with different applications can create a separate namespace for each application.

Names for topics in the same namespace will look like this:

```http

persistent://tenant/app1/topic-1

persistent://tenant/app1/topic-2

persistent://tenant/app1/topic-3

```

### Namespace change events and topic-level policies

Pulsar is a multi-tenant event streaming system. Administrators can manage the tenants and namespaces by setting policies at different levels. However, the policies, such as the retention policy and the storage quota policy, are only available at the namespace level. In many use cases, users need to set a policy at the topic level. The namespace change events approach is proposed to support topic-level policies in an efficient way. In this approach, Pulsar is used as an event log to store namespace change events (such as topic policy changes). This approach has a few benefits:

- Avoid using ZooKeeper and introducing more load to ZooKeeper.
- Use Pulsar as an event log for propagating the policy cache. It can scale efficiently.
- Use Pulsar SQL to query the namespace changes and audit the system.

Each namespace has a [system topic](concepts-messaging.md#system-topic) named `__change_events`.
This system topic stores change events for a given namespace. The following figure illustrates how to leverage it to update topic-level policies.

![Leverage the system topic to update topic-level policies](/assets/system-topic-for-topic-level-policies.svg)

1. Pulsar Admin clients communicate with the Admin Restful API to update topic-level policies.
2. Any broker that receives the Admin HTTP request publishes a topic policy change event to the corresponding system topic (`__change_events`) of the namespace.
3. Each broker that owns one or more bundles of the namespace subscribes to the system topic (`__change_events`) to receive the change events of the namespace.
4. Each broker applies the change events to its policy cache.
5. Once the policy cache is updated, the broker sends the response back to the Pulsar Admin clients.

:::note

By default, the system topic is disabled. To enable the topic-level policy (`topicLevelPoliciesEnabled=true`), you need to enable the system topic by setting `systemTopicEnabled` to `true` in the `conf/broker.conf` or `conf/standalone.conf` file.

:::
\ No newline at end of file

diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/concepts-multiple-advertised-listeners.md b/site2/website/versioned_docs/version-2.8.1-deprecated/concepts-multiple-advertised-listeners.md
deleted file mode 100644
index f2e1ae0aadc7ca..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/concepts-multiple-advertised-listeners.md
+++ /dev/null
@@ -1,44 +0,0 @@
---
id: concepts-multiple-advertised-listeners
title: Multiple advertised listeners
sidebar_label: "Multiple advertised listeners"
original_id: concepts-multiple-advertised-listeners
---

When a Pulsar cluster is deployed in a production environment, it may be necessary to expose multiple advertised addresses for the broker. For example, when you deploy a Pulsar cluster in Kubernetes and want other clients, which are not in the same Kubernetes cluster, to connect to the Pulsar cluster, you need to assign a broker URL to external clients. But clients in the same Kubernetes cluster can still connect to the Pulsar cluster through the internal network of Kubernetes.

## Advertised listeners

To ensure clients in both internal and external networks can connect to a Pulsar cluster, Pulsar introduces the `advertisedListeners` and `internalListenerName` configuration options into the [broker configuration file](reference-configuration.md#broker), so that the broker supports exposing multiple advertised listeners and separating internal and external network traffic.

- The `advertisedListeners` option is used to specify multiple advertised listeners. The broker uses the listener as the broker identifier in the load manager and the bundle owner data. The `advertisedListeners` option is formatted as `<listener_name>:pulsar://<host>:<port>, <listener_name>:pulsar+ssl://<host>:<port>`. You can set up the `advertisedListeners` like
`advertisedListeners=internal:pulsar://192.168.1.11:6660,internal:pulsar+ssl://192.168.1.11:6651`.

- The `internalListenerName` option is used to specify the internal service URL that the broker uses. You can specify the `internalListenerName` by choosing one of the `advertisedListeners`. The broker uses the listener name of the first advertised listener as the `internalListenerName` if the `internalListenerName` is absent.

After setting up the `advertisedListeners`, clients can choose one of the listeners as the service URL to create a connection to the broker, as long as the network is accessible.
However, if the client creates producers or consumers on a topic, the client must send a lookup request to the broker to get the owner broker, and then connect to the owner broker to publish or consume messages. Therefore, you must allow the client to get the corresponding service URL with the same advertised listener name as the one used by the client. This helps keep the client side simple and secure.

## Use multiple advertised listeners

This example shows how a Pulsar client uses multiple advertised listeners.

1. Configure multiple advertised listeners in the broker configuration file.

```shell

advertisedListeners={listenerName}:pulsar://xxxx:6650,
{listenerName}:pulsar+ssl://xxxx:6651

```

2. Specify the listener name for the client.

```java

PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar://xxxx:6650")
    .listenerName("external")
    .build();

```

diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/concepts-overview.md b/site2/website/versioned_docs/version-2.8.1-deprecated/concepts-overview.md
deleted file mode 100644
index e8a2f4b9d321a7..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/concepts-overview.md
+++ /dev/null
@@ -1,31 +0,0 @@
---
id: concepts-overview
title: Pulsar Overview
sidebar_label: "Overview"
original_id: concepts-overview
---

Pulsar is a multi-tenant, high-performance solution for server-to-server messaging. Originally developed by Yahoo, Pulsar is under the stewardship of the [Apache Software Foundation](https://www.apache.org/).

Key features of Pulsar are listed below:

* Native support for multiple clusters in a Pulsar instance, with seamless [geo-replication](administration-geo.md) of messages across clusters.
* Very low publish and end-to-end latency.
* Seamless scalability to over a million topics.
* A simple [client API](concepts-clients.md) with bindings for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python.md) and [C++](client-libraries-cpp.md).
* Multiple [subscription types](concepts-messaging.md#subscription-types) ([exclusive](concepts-messaging.md#exclusive), [shared](concepts-messaging.md#shared), and [failover](concepts-messaging.md#failover)) for topics.
* Guaranteed message delivery with [persistent message storage](concepts-architecture-overview.md#persistent-storage) provided by [Apache BookKeeper](http://bookkeeper.apache.org/).
* A serverless, lightweight computing framework, [Pulsar Functions](functions-overview.md), which offers the capability for stream-native data processing.
* A serverless connector framework, [Pulsar IO](io-overview.md), which is built on Pulsar Functions and makes it easier to move data in and out of Apache Pulsar.
* [Tiered Storage](concepts-tiered-storage.md), which offloads data from hot/warm storage to cold/long-term storage (such as S3 and GCS) when the data is aging out.
## Contents

- [Messaging Concepts](concepts-messaging.md)
- [Architecture Overview](concepts-architecture-overview.md)
- [Pulsar Clients](concepts-clients.md)
- [Geo Replication](concepts-replication.md)
- [Multi Tenancy](concepts-multi-tenancy.md)
- [Authentication and Authorization](concepts-authentication.md)
- [Topic Compaction](concepts-topic-compaction.md)
- [Tiered Storage](concepts-tiered-storage.md)

diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/concepts-proxy-sni-routing.md b/site2/website/versioned_docs/version-2.8.1-deprecated/concepts-proxy-sni-routing.md
deleted file mode 100644
index 51419a66cefe6e..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/concepts-proxy-sni-routing.md
+++ /dev/null
@@ -1,180 +0,0 @@
---
id: concepts-proxy-sni-routing
title: Proxy support with SNI routing
sidebar_label: "Proxy support with SNI routing"
original_id: concepts-proxy-sni-routing
---

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````

A proxy server is an intermediary server that forwards requests from multiple clients to different servers across the Internet. The proxy server acts as a "traffic cop" in both forward and reverse proxy scenarios, and brings benefits to your system such as load balancing, performance, security, auto-scaling, and so on.

The proxy in Pulsar acts as a reverse proxy and creates a gateway in front of brokers. Third-party proxies, such as Apache Traffic Server (ATS), HAProxy, Nginx, and Envoy, are not supported in Pulsar directly; however, these proxy servers support **SNI routing**. SNI routing is used to route traffic to a destination without terminating the SSL connection. Layer-4 routing provides greater transparency because the outbound connection is determined by examining the destination address in the client TCP packets.

Pulsar clients (Java, C++, Python) support the [SNI routing protocol](https://github.com/apache/pulsar/wiki/PIP-60:-Support-Proxy-server-with-SNI-routing), so you can connect to brokers through the proxy. This document walks you through how to set up the ATS proxy, enable SNI routing, and connect a Pulsar client to the broker through the ATS proxy.

## ATS-SNI Routing in Pulsar

To support [layer-4 SNI routing](https://docs.trafficserver.apache.org/en/latest/admin-guide/layer-4-routing.en.html) with ATS, the inbound connection must be a TLS connection. The Pulsar client supports the SNI routing protocol on TLS connections, so when Pulsar clients connect to a broker through the ATS proxy, Pulsar uses ATS as a reverse proxy.

Pulsar supports SNI routing for geo-replication, so brokers can connect to brokers in other clusters through the ATS proxy.

This section explains how to set up and use ATS as a reverse proxy, so Pulsar clients can connect to brokers through the ATS proxy using the SNI routing protocol on TLS connections.

### Set up ATS Proxy for layer-4 SNI routing

To support layer-4 SNI routing, you need to configure the `records.config` and `ssl_server_name.config` files.

![Pulsar client SNI](/assets/pulsar-sni-client.png)

The [records.config](https://docs.trafficserver.apache.org/en/latest/admin-guide/files/records.config.en.html) file is located in the `/usr/local/etc/trafficserver/` directory by default. The file lists configurable variables used by the ATS.

To configure the `records.config` file, complete the following steps.
1. Update the TLS port (`http.server_ports`) on which the proxy listens, and update the proxy certificates (`ssl.client.cert.path` and `ssl.client.cert.filename`) to secure TLS tunneling.
2. Configure the server ports (`http.connect_ports`) used for tunneling to the broker. If Pulsar brokers are listening on the `4443` and `6651` ports, add the broker service ports to the `http.connect_ports` configuration.

The following is an example.

```

# PROXY TLS PORT
CONFIG proxy.config.http.server_ports STRING 4443:ssl 4080
# PROXY CERTS FILE PATH
CONFIG proxy.config.ssl.client.cert.path STRING /proxy-cert.pem
# PROXY KEY FILE PATH
CONFIG proxy.config.ssl.client.cert.filename STRING /proxy-key.pem


# The range of origin server ports that can be used for tunneling via CONNECT.
# Traffic Server allows tunnels only to the specified ports.
# Supports both wildcards (*) and ranges (e.g. 0-1023).
CONFIG proxy.config.http.connect_ports STRING 4443 6651

```

The `ssl_server_name.config` file is used to configure TLS connection handling for inbound and outbound connections. The configuration is determined by the SNI values provided by the inbound connection. The file consists of a set of configuration items, and each is identified by an SNI value (`fqdn`). When an inbound TLS connection is made, the SNI value from the TLS negotiation is matched against the items specified in this file. If the values match, the values specified in that item override the default values.

The following example shows the mapping between the inbound SNI hostname coming from the client and the actual broker service URL to which the request should be redirected. For example, if the client sends the SNI header `pulsar-broker1`, the proxy creates a TLS tunnel by redirecting the request to the `pulsar-broker1:6651` service URL.

```

server_config = {
  {
     fqdn = 'pulsar-broker-vip',
     # Forward to Pulsar broker which is listening on 6651
     tunnel_route = 'pulsar-broker-vip:6651'
  },
  {
     fqdn = 'pulsar-broker1',
     # Forward to Pulsar broker-1 which is listening on 6651
     tunnel_route = 'pulsar-broker1:6651'
  },
  {
     fqdn = 'pulsar-broker2',
     # Forward to Pulsar broker-2 which is listening on 6651
     tunnel_route = 'pulsar-broker2:6651'
  },
}

```

After you configure the `ssl_server_name.config` and `records.config` files, the ATS proxy server handles SNI routing and creates a TCP tunnel between the client and the broker.

### Configure Pulsar-client with SNI routing

ATS SNI routing works only with TLS. You need to enable TLS for the ATS proxy and brokers first, configure the SNI routing protocol, and then connect Pulsar clients to brokers through the ATS proxy. Pulsar clients support SNI routing by connecting to the proxy and sending the target broker URL in the SNI header. This is handled internally by the client. You only need to configure the following proxy configuration initially when you create a Pulsar client to use the SNI routing protocol.
````mdx-code-block
<Tabs>
<TabItem value="Java">

```java

String brokerServiceUrl = "pulsar+ssl://pulsar-broker-vip:6651/";
String proxyUrl = "pulsar+ssl://ats-proxy:443";
ClientBuilder clientBuilder = PulsarClient.builder()
        .serviceUrl(brokerServiceUrl)
        .tlsTrustCertsFilePath(TLS_TRUST_CERT_FILE_PATH)
        .enableTls(true)
        .allowTlsInsecureConnection(false)
        .proxyServiceUrl(proxyUrl, ProxyProtocol.SNI)
        .operationTimeout(1000, TimeUnit.MILLISECONDS);

Map<String, String> authParams = new HashMap<>();
authParams.put("tlsCertFile", TLS_CLIENT_CERT_FILE_PATH);
authParams.put("tlsKeyFile", TLS_CLIENT_KEY_FILE_PATH);
clientBuilder.authentication(AuthenticationTls.class.getName(), authParams);

PulsarClient pulsarClient = clientBuilder.build();

```

</TabItem>
<TabItem value="C++">

```c++

ClientConfiguration config = ClientConfiguration();
config.setUseTls(true);
config.setTlsTrustCertsFilePath("/path/to/cacert.pem");
config.setTlsAllowInsecureConnection(false);
config.setAuth(pulsar::AuthTls::create(
            "/path/to/client-cert.pem", "/path/to/client-key.pem"));

Client client("pulsar+ssl://ats-proxy:443", config);

```

</TabItem>
<TabItem value="Python">

```python

from pulsar import Client, AuthenticationTLS

auth = AuthenticationTLS("/path/to/my-role.cert.pem", "/path/to/my-role.key-pk8.pem")
client = Client("pulsar+ssl://ats-proxy:443",
                tls_trust_certs_file_path="/path/to/ca.cert.pem",
                tls_allow_insecure_connection=False,
                authentication=auth)

```

</TabItem>
</Tabs>
````

### Pulsar geo-replication with SNI routing

You can use the ATS proxy for geo-replication. Pulsar brokers can connect to brokers in other clusters by using SNI routing. To enable SNI routing for broker connections across clusters, you need to configure the SNI proxy URL in the cluster metadata. If you have configured the SNI proxy URL in the cluster metadata, you can connect to brokers in other clusters through the proxy over SNI routing.

![Pulsar client SNI](/assets/pulsar-sni-geo.png)

In this example, a Pulsar cluster is deployed into two separate regions, `us-west` and `us-east`. Both regions are configured with the ATS proxy, and brokers in each region run behind the ATS proxy. We configure the cluster metadata for both clusters, so brokers in one cluster can use SNI routing and connect to brokers in the other cluster through the ATS proxy.

(a) Configure the cluster metadata for `us-east` with the `us-east` broker service URL and the `us-east` ATS proxy URL with the SNI proxy protocol.

```

./pulsar-admin clusters update \
--broker-url-secure pulsar+ssl://east-broker-vip:6651 \
--url http://east-broker-vip:8080 \
--proxy-protocol SNI \
--proxy-url pulsar+ssl://east-ats-proxy:443

```

(b) Configure the cluster metadata for `us-west` with the `us-west` broker service URL and the `us-west` ATS proxy URL with the SNI proxy protocol.

```

./pulsar-admin clusters update \
--broker-url-secure pulsar+ssl://west-broker-vip:6651 \
--url http://west-broker-vip:8080 \
--proxy-protocol SNI \
--proxy-url pulsar+ssl://west-ats-proxy:443

```

diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/concepts-replication.md b/site2/website/versioned_docs/version-2.8.1-deprecated/concepts-replication.md
deleted file mode 100644
index 799f0eb4d92c6b..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/concepts-replication.md
+++ /dev/null
@@ -1,9 +0,0 @@
---
id: concepts-replication
title: Geo Replication
sidebar_label: "Geo Replication"
original_id: concepts-replication
---

Pulsar enables messages to be produced and consumed in different geo-locations.
For instance, your application may be publishing data in one region or market and you would like to process it for consumption in other regions or markets. [Geo-replication](administration-geo.md) in Pulsar enables you to do that.

diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/concepts-tiered-storage.md b/site2/website/versioned_docs/version-2.8.1-deprecated/concepts-tiered-storage.md
deleted file mode 100644
index f6988e53a8cd4e..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/concepts-tiered-storage.md
+++ /dev/null
@@ -1,18 +0,0 @@
---
id: concepts-tiered-storage
title: Tiered Storage
sidebar_label: "Tiered Storage"
original_id: concepts-tiered-storage
---

Pulsar's segment-oriented architecture allows for topic backlogs to grow very large, effectively without limit. However, this can become expensive over time.

One way to alleviate this cost is to use tiered storage. With tiered storage, older messages in the backlog can be moved from BookKeeper to a cheaper storage mechanism, while still allowing clients to access the backlog as if nothing had changed.

![Tiered Storage](/assets/pulsar-tiered-storage.png)

> Data written to BookKeeper is replicated to 3 physical machines by default. However, once a segment is sealed in BookKeeper it becomes immutable and can be copied to long-term storage. Long-term storage can achieve cost savings by using mechanisms such as [Reed-Solomon error correction](https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction) to require fewer physical copies of data.

Pulsar currently supports S3, Google Cloud Storage (GCS), and filesystem for [long-term storage](https://pulsar.apache.org/docs/en/cookbooks-tiered-storage/). Offloading to long-term storage is triggered via a REST API or a command-line interface. The user passes in the amount of topic data they wish to retain on BookKeeper, and the broker copies the backlog data to long-term storage. The original data is then deleted from BookKeeper after a configured delay (4 hours by default).

> For a guide for setting up tiered storage, see the [Tiered storage cookbook](cookbooks-tiered-storage.md).

diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/concepts-topic-compaction.md b/site2/website/versioned_docs/version-2.8.1-deprecated/concepts-topic-compaction.md
deleted file mode 100644
index 34b7ed7fbbd31e..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/concepts-topic-compaction.md
+++ /dev/null
@@ -1,37 +0,0 @@
---
id: concepts-topic-compaction
title: Topic Compaction
sidebar_label: "Topic Compaction"
original_id: concepts-topic-compaction
---

Pulsar was built with highly scalable [persistent storage](concepts-architecture-overview.md#persistent-storage) of message data as a primary objective. Pulsar topics enable you to persistently store as many unacknowledged messages as you need while preserving message ordering. By default, Pulsar stores *all* unacknowledged/unprocessed messages produced on a topic. Accumulating many unacknowledged messages on a topic is necessary for many Pulsar use cases, but it can also be very time intensive for Pulsar consumers to "rewind" through the entire log of messages.

> For a more practical guide to topic compaction, see the [Topic compaction cookbook](cookbooks-compaction.md).

For some use cases consumers don't need a complete "image" of the topic log.
They may only need a few values to construct a more "shallow" image of the log, perhaps even just the most recent value. For these kinds of use cases Pulsar offers **topic compaction**. When you run compaction on a topic, Pulsar goes through a topic's backlog and removes messages that are *obscured* by later messages, i.e. it goes through the topic on a per-key basis and leaves only the most recent message associated with that key.

Pulsar's topic compaction feature:

* Allows for faster "rewind" through topic logs
* Applies only to [persistent topics](concepts-architecture-overview.md#persistent-storage)
* Can be triggered automatically when the backlog reaches a certain size, or manually via the command line. See the [Topic compaction cookbook](cookbooks-compaction.md)
* Is conceptually and operationally distinct from [retention and expiry](concepts-messaging.md#message-retention-and-expiry). Topic compaction *does*, however, respect retention. If retention has removed a message from the message backlog of a topic, the message will also not be readable from the compacted topic ledger.

> #### Topic compaction example: the stock ticker
> An example use case for a compacted Pulsar topic would be a stock ticker topic. On a stock ticker topic, each message bears a timestamped dollar value for stocks for purchase (with the message key holding the stock symbol, e.g. `AAPL` or `GOOG`). With a stock ticker you may care only about the most recent value(s) of the stock and have no interest in historical data (i.e. you don't need to construct a complete image of the topic's sequence of messages per key). Compaction would be highly beneficial in this case because it would keep consumers from needing to rewind through obscured messages.

## How topic compaction works

When topic compaction is triggered [via the CLI](cookbooks-compaction.md), Pulsar iterates over the entire topic from beginning to end. For each key that it encounters, the compaction routine keeps a record of the latest occurrence of that key.

After that, the broker creates a new [BookKeeper ledger](concepts-architecture-overview.md#ledgers) and makes a second iteration through each message on the topic. For each message, if the key matches the latest occurrence of that key, then the key's data payload, message ID, and metadata are written to the newly created ledger. If the key doesn't match the latest occurrence, the message is skipped and left alone. If any given message has an empty payload, it is skipped and considered deleted (akin to the concept of [tombstones](https://en.wikipedia.org/wiki/Tombstone_(data_store)) in key-value databases). At the end of this second iteration through the topic, the newly created BookKeeper ledger is closed and two things are written to the topic's metadata: the ID of the BookKeeper ledger and the message ID of the last compacted message (this is known as the **compaction horizon** of the topic). Once this metadata is written, compaction is complete.

After the initial compaction operation, the Pulsar [broker](reference-terminology.md#broker) that owns the topic is notified whenever any future changes are made to the compaction horizon and compacted backlog.
When such changes occur: - -* Clients (consumers and readers) that have `readCompacted` enabled will attempt to read messages from a topic and either: - * Read from the topic like normal (if the message ID is greater than or equal to the compaction horizon) or - * Read beginning at the compaction horizon (if the message ID is lower than the compaction horizon) - - diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/concepts-transactions.md b/site2/website/versioned_docs/version-2.8.1-deprecated/concepts-transactions.md deleted file mode 100644 index 08490ba06b5d7e..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/concepts-transactions.md +++ /dev/null @@ -1,30 +0,0 @@ ---- -id: transactions -title: Transactions -sidebar_label: "Overview" -original_id: transactions ---- - -Transactional semantics enable event streaming applications to consume, process, and produce messages in one atomic operation. In Pulsar, a producer or consumer can work with messages across multiple topics and partitions and ensure those messages are processed as a single unit. - -The following concepts help you understand Pulsar transactions. - -## Transaction coordinator and transaction log -The transaction coordinator maintains the topics and subscriptions that interact in a transaction. When a transaction is committed, the transaction coordinator interacts with the topic owner broker to complete the transaction. - -The transaction coordinator maintains the entire life cycle of transactions, and prevents a transaction from entering an incorrect status. - -The transaction coordinator handles transaction timeouts, and ensures that a transaction is aborted after its timeout elapses. - -All the transaction metadata is persisted in the transaction log. The transaction log is backed by a Pulsar topic. After the transaction coordinator crashes, it can restore the transaction metadata from the transaction log. - -## Transaction ID -The transaction ID (TxnID) identifies a unique transaction in Pulsar. The transaction ID is 128 bits long. The highest 16 bits are reserved for the ID of the transaction coordinator, and the remaining bits are used for monotonically increasing numbers in each transaction coordinator. With the TxnID, it is easy to locate a crashed transaction. - -## Transaction buffer -Messages produced within a transaction are stored in the transaction buffer. The messages in the transaction buffer are not materialized (visible) to consumers until the transactions are committed. The messages in the transaction buffer are discarded when the transactions are aborted. - -## Pending acknowledge state -Message acknowledgments within a transaction are maintained by the pending acknowledge state before the transaction completes. If a message is in the pending acknowledge state, the message cannot be acknowledged by other transactions until the message is removed from the pending acknowledge state. - -The pending acknowledge state is persisted to the pending acknowledge log. The pending acknowledge log is backed by a Pulsar topic. A new broker can restore the state from the pending acknowledge log to ensure the acknowledgment is not lost.
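To make these concepts concrete, here is a minimal, illustrative Java sketch of a consume-process-produce flow in which the produced message and the acknowledgment share one transaction. This is a sketch under stated assumptions, not part of the original document: it assumes a broker with transaction support enabled (`transactionCoordinatorEnabled=true`) and uses hypothetical topic and subscription names.

```java

import java.util.concurrent.TimeUnit;

import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.transaction.Transaction;

public class TransactionSketch {
    public static void main(String[] args) throws Exception {
        // Transactions must be enabled on the client so it can reach the transaction coordinator.
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")
                .enableTransaction(true)
                .build();

        Consumer<byte[]> consumer = client.newConsumer()
                .topic("input-topic")
                .subscriptionName("txn-sub")
                .subscribe();

        // Producers used inside transactions must disable the send timeout.
        Producer<byte[]> producer = client.newProducer()
                .topic("output-topic")
                .sendTimeout(0, TimeUnit.SECONDS)
                .create();

        // The coordinator aborts the transaction if it does not complete within the timeout.
        Transaction txn = client.newTransaction()
                .withTransactionTimeout(5, TimeUnit.MINUTES)
                .build()
                .get();

        Message<byte[]> msg = consumer.receive();
        try {
            // The produced message stays invisible (transaction buffer) and the ack stays
            // pending (pending acknowledge state) until the transaction commits.
            producer.newMessage(txn).value(msg.getData()).send();
            consumer.acknowledgeAsync(msg.getMessageId(), txn).get();
            txn.commit().get();
        } catch (Exception e) {
            txn.abort().get();
        }

        client.close();
    }
}
```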
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/cookbooks-bookkeepermetadata.md b/site2/website/versioned_docs/version-2.8.1-deprecated/cookbooks-bookkeepermetadata.md deleted file mode 100644 index b0fa98dc3b65d5..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/cookbooks-bookkeepermetadata.md +++ /dev/null @@ -1,21 +0,0 @@ ---- -id: cookbooks-bookkeepermetadata -title: BookKeeper Ledger Metadata -original_id: cookbooks-bookkeepermetadata ---- - -Pulsar stores data on BookKeeper ledgers. You can understand the contents of a ledger by inspecting the metadata attached to the ledger. -Such metadata is stored in ZooKeeper and is readable using the BookKeeper APIs. - -Description of the current metadata: - -| Scope | Metadata name | Metadata value | -| ------------- | ------------- | ------------- | -| All ledgers | application | 'pulsar' | -| All ledgers | component | 'managed-ledger', 'schema', 'compacted-topic' | -| Managed ledgers | pulsar/managed-ledger | name of the ledger | -| Cursor | pulsar/cursor | name of the cursor | -| Compacted topic | pulsar/compactedTopic | name of the original topic | -| Compacted topic | pulsar/compactedTo | id of the last compacted message | - - diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/cookbooks-compaction.md b/site2/website/versioned_docs/version-2.8.1-deprecated/cookbooks-compaction.md deleted file mode 100644 index dfa314727241a8..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/cookbooks-compaction.md +++ /dev/null @@ -1,142 +0,0 @@ ---- -id: cookbooks-compaction -title: Topic compaction -sidebar_label: "Topic compaction" -original_id: cookbooks-compaction ---- - -Pulsar's [topic compaction](concepts-topic-compaction.md#compaction) feature enables you to create **compacted** topics in which older, "obscured" entries are pruned from the topic, allowing for faster reads through the topic's history (which messages are deemed obscured/outdated/irrelevant will depend on your use case). - -To use compaction: - -* You need to give messages keys, as topic compaction in Pulsar takes place on a *per-key basis* (i.e. messages are compacted based on their key). For a stock ticker use case, the stock symbol---e.g. `AAPL` or `GOOG`---could serve as the key (more on this [below](#when-should-i-use-compacted-topics)). Messages without keys will be left alone by the compaction process. -* Compaction can be configured to run [automatically](#configuring-compaction-to-run-automatically), or you can manually [trigger](#triggering-compaction-manually) compaction using the Pulsar administrative API. -* Your consumers must be [configured](#consumer-configuration) to read from compacted topics ([Java consumers](#java), for example, have a `readCompacted` setting that must be set to `true`). If this configuration is not set, consumers will still be able to read from the non-compacted topic. - - -> Compaction only works on messages that have keys (as in the stock ticker example the stock symbol serves as the key for each message). Keys can thus be thought of as the axis along which compaction is applied. Messages that don't have keys are simply ignored by compaction. - -## When should I use compacted topics? - -The classic example of a topic that could benefit from compaction would be a stock ticker topic through which consumers can access up-to-date values for specific stocks.
Imagine a scenario in which messages carrying stock value data use the stock symbol as the key (`GOOG`, `AAPL`, `TWTR`, etc.). Compacting this topic would give consumers on the topic two options: - -* They can read from the "original," non-compacted topic in case they need access to "historical" values, i.e. the entirety of the topic's messages. -* They can read from the compacted topic if they only want to see the most up-to-date messages. - -Thus, if you're using a Pulsar topic called `stock-values`, some consumers could have access to all messages in the topic (perhaps because they're performing some kind of number crunching of all values in the last hour) while the consumers used to power the real-time stock ticker only see the compacted topic (and thus aren't forced to process outdated messages). Which variant of the topic any given consumer pulls messages from is determined by the consumer's [configuration](#consumer-configuration). - -> One of the benefits of compaction in Pulsar is that you aren't forced to choose between compacted and non-compacted topics, as the compaction process leaves the original topic as-is and essentially adds an alternate topic. In other words, you can run compaction on a topic and consumers that need access to the non-compacted version of the topic will not be adversely affected. - - -## Configuring compaction to run automatically - -Tenant administrators can configure a policy for compaction at the namespace level. The policy specifies how large the topic backlog can grow before compaction is triggered. - -For example, to trigger compaction when the backlog reaches 100MB: - -```bash - -$ bin/pulsar-admin namespaces set-compaction-threshold \ - --threshold 100M my-tenant/my-namespace - -``` - -Configuring the compaction threshold on a namespace will apply to all topics within that namespace. - -## Triggering compaction manually - -In order to run compaction on a topic, you need to use the [`topics compact`](reference-pulsar-admin.md#topics-compact) command of the [`pulsar-admin`](reference-pulsar-admin.md) CLI tool. Here's an example: - -```bash - -$ bin/pulsar-admin topics compact \ - persistent://my-tenant/my-namespace/my-topic - -``` - -The `pulsar-admin` tool runs compaction via the Pulsar {@inject: rest:REST:/} API. To run compaction in its own dedicated process, i.e. *not* through the REST API, you can use the [`pulsar compact-topic`](reference-cli-tools.md#pulsar-compact-topic) command. Here's an example: - -```bash - -$ bin/pulsar compact-topic \ - --topic persistent://my-tenant/my-namespace/my-topic - -``` - -> Running compaction in its own process is recommended when you want to avoid interfering with the broker's performance. Broker performance should only be affected, however, when running compaction on topics with a large keyspace (i.e. when there are many keys on the topic). The first phase of the compaction process keeps a copy of each key in the topic, which can create memory pressure as the number of keys grows. Using the `pulsar-admin topics compact` command to run compaction through the REST API should present no issues in the overwhelming majority of cases; using `pulsar compact-topic` should correspondingly be considered an edge case. - -The `pulsar compact-topic` command communicates with [ZooKeeper](https://zookeeper.apache.org) directly. In order to establish communication with ZooKeeper, though, the `pulsar` CLI tool will need to have a valid [broker configuration](reference-configuration.md#broker).
You can either supply a proper configuration in `conf/broker.conf` or specify a non-default location for the configuration: - -```bash - -$ bin/pulsar compact-topic \ - --broker-conf /path/to/broker.conf \ - --topic persistent://my-tenant/my-namespace/my-topic - -# If the configuration is in conf/broker.conf -$ bin/pulsar compact-topic \ - --topic persistent://my-tenant/my-namespace/my-topic - -``` - -#### When should I trigger compaction? - -How often you [trigger compaction](#triggering-compaction-manually) will vary widely based on the use case. If you want reads from a compacted topic to be as fast as possible, you should run compaction fairly frequently. - -## Consumer configuration - -Pulsar consumers and readers need to be configured to read from compacted topics. The sections below show you how to enable compacted topic reads for Pulsar's language clients. - -### Java - -In order to read from a compacted topic using a Java consumer, the `readCompacted` parameter must be set to `true`. Here's an example consumer for a compacted topic: - -```java - -Consumer<byte[]> compactedTopicConsumer = client.newConsumer() - .topic("some-compacted-topic") - .readCompacted(true) - .subscribe(); - -``` - -As mentioned above, topic compaction in Pulsar works on a *per-key basis*. That means that messages that you produce on compacted topics need to have keys (the content of the key will depend on your use case). Messages that don't have keys will be ignored by the compaction process. Here's an example of a Pulsar message with a key, created with the producer's typed message builder: - -```java - -import org.apache.pulsar.client.api.TypedMessageBuilder; - -TypedMessageBuilder<byte[]> msg = producer.newMessage() - .key("some-key") - .value(someByteArray); - -``` - -The example below shows a message with a key being produced on a compacted Pulsar topic: - -```java - -import org.apache.pulsar.client.api.Producer; -import org.apache.pulsar.client.api.PulsarClient; - -PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://localhost:6650") - .build(); - -Producer<byte[]> compactedTopicProducer = client.newProducer() - .topic("some-compacted-topic") - .create(); - -compactedTopicProducer.newMessage() - .key("some-key") - .value(someByteArray) - .send(); - -``` - diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/cookbooks-deduplication.md b/site2/website/versioned_docs/version-2.8.1-deprecated/cookbooks-deduplication.md deleted file mode 100644 index f7f9e3d7bb425b..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/cookbooks-deduplication.md +++ /dev/null @@ -1,151 +0,0 @@ ---- -id: cookbooks-deduplication -title: Message deduplication -sidebar_label: "Message deduplication" -original_id: cookbooks-deduplication ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -When **Message deduplication** is enabled, it ensures that each message produced on Pulsar topics is persisted to disk *only once*, even if the message is produced more than once. Message deduplication is handled automatically on the server side. - -To use message deduplication in Pulsar, you need to configure your Pulsar brokers and clients. - -## How it works - -You can enable or disable message deduplication at the namespace level or the topic level. By default, it is disabled on all namespaces or topics.
You can enable it in the following ways: - -* Enable deduplication for all namespaces/topics at the broker level. -* Enable deduplication for a specific namespace with the `pulsar-admin namespaces` interface. -* Enable deduplication for a specific topic with the `pulsar-admin topics` interface. - -## Configure message deduplication - -You can configure message deduplication in Pulsar using the [`broker.conf`](reference-configuration.md#broker) configuration file. The following deduplication-related parameters are available. - -Parameter | Description | Default -:---------|:------------|:------- -`brokerDeduplicationEnabled` | Sets the default behavior for message deduplication in the Pulsar broker. If it is set to `true`, message deduplication is enabled on all namespaces/topics. If it is set to `false`, you have to enable or disable deduplication at the namespace level or the topic level. | `false` -`brokerDeduplicationMaxNumberOfProducers` | The maximum number of producers for which information is stored for deduplication purposes. | `10000` -`brokerDeduplicationEntriesInterval` | The number of entries after which a deduplication informational snapshot is taken. A larger interval leads to fewer snapshots being taken, though this lengthens the topic recovery time (the time required for entries published after the snapshot to be replayed). | `1000` -`brokerDeduplicationSnapshotIntervalSeconds`| The time period after which a deduplication informational snapshot is taken. It runs simultaneously with `brokerDeduplicationEntriesInterval`. |`120` -`brokerDeduplicationProducerInactivityTimeoutMinutes` | The time of inactivity (in minutes) after which the broker discards deduplication information related to a disconnected producer. | `360` (6 hours) - -### Set the default value at the broker level - -By default, message deduplication is *disabled* on all Pulsar namespaces/topics. To enable it on all namespaces/topics, set the `brokerDeduplicationEnabled` parameter to `true` and restart the broker. - -Even if you set the value for `brokerDeduplicationEnabled`, enabling or disabling deduplication via the Pulsar admin CLI overrides the default settings at the broker level. - -### Enable message deduplication - -Though message deduplication is disabled by default at the broker level, you can enable message deduplication for a specific namespace or topic using the [`pulsar-admin namespaces set-deduplication`](reference-pulsar-admin.md#namespace-set-deduplication) or the [`pulsar-admin topics set-deduplication`](reference-pulsar-admin.md#topic-set-deduplication) command. You can use the `--enable`/`-e` flag and specify the namespace/topic. - -The following example shows how to enable message deduplication at the namespace level. - -```bash - -$ bin/pulsar-admin namespaces set-deduplication \ - public/default \ - --enable # or just -e - -``` - -### Disable message deduplication - -Even if you enable message deduplication at the broker level, you can disable message deduplication for a specific namespace or topic using the [`pulsar-admin namespace set-deduplication`](reference-pulsar-admin.md#namespace-set-deduplication) or the [`pulsar-admin topics set-deduplication`](reference-pulsar-admin.md#topic-set-deduplication) command. Use the `--disable`/`-d` flag and specify the namespace/topic. - -The following example shows how to disable message deduplication at the namespace level.
- -```bash - -$ bin/pulsar-admin namespaces set-deduplication \ - public/default \ - --disable # or just -d - -``` - -## Pulsar clients - -If you enable message deduplication in Pulsar brokers, you need to complete the following tasks for your client producers: - -1. Specify a name for the producer. -1. Set the message timeout to `0` (namely, no timeout). - -The instructions for Java, Python, and C++ clients are different. - -````mdx-code-block - - - -To enable message deduplication on a [Java producer](client-libraries-java.md#producers), set the producer name using the `producerName` setter, and set the timeout to `0` using the `sendTimeout` setter. - -```java - -import org.apache.pulsar.client.api.Producer; -import org.apache.pulsar.client.api.PulsarClient; -import java.util.concurrent.TimeUnit; - -PulsarClient pulsarClient = PulsarClient.builder() - .serviceUrl("pulsar://localhost:6650") - .build(); -Producer<byte[]> producer = pulsarClient.newProducer() - .producerName("producer-1") - .topic("persistent://public/default/topic-1") - .sendTimeout(0, TimeUnit.SECONDS) - .create(); - -``` - - - - -To enable message deduplication on a [Python producer](client-libraries-python.md#producers), set the producer name using `producer_name`, and set the timeout to `0` using `send_timeout_millis`. - -```python - -import pulsar - -client = pulsar.Client("pulsar://localhost:6650") -producer = client.create_producer( - "persistent://public/default/topic-1", - producer_name="producer-1", - send_timeout_millis=0) - -``` - - - - -To enable message deduplication on a [C++ producer](client-libraries-cpp.md#producer), set the producer name using `setProducerName`, and set the timeout to `0` using `setSendTimeout`. - -```cpp - -#include <pulsar/Client.h> - -std::string serviceUrl = "pulsar://localhost:6650"; -std::string topic = "persistent://some-tenant/ns1/topic-1"; -std::string producerName = "producer-1"; - -Client client(serviceUrl); - -ProducerConfiguration producerConfig; -producerConfig.setSendTimeout(0); -producerConfig.setProducerName(producerName); - -Producer producer; - -Result result = client.createProducer(topic, producerConfig, producer); - -``` - - - - -```` \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/cookbooks-encryption.md b/site2/website/versioned_docs/version-2.8.1-deprecated/cookbooks-encryption.md deleted file mode 100644 index f0d8fb8735eb63..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/cookbooks-encryption.md +++ /dev/null @@ -1,184 +0,0 @@ ---- -id: cookbooks-encryption -title: Pulsar Encryption -sidebar_label: "Encryption" -original_id: cookbooks-encryption ---- - -Pulsar encryption allows applications to encrypt messages at the producer and decrypt them at the consumer. Encryption is performed using the public/private key pair configured by the application. Encrypted messages can only be decrypted by consumers with a valid key. - -## Asymmetric and symmetric encryption - -Pulsar uses a dynamically generated symmetric AES key to encrypt messages (data). The AES key (data key) is encrypted using an application-provided ECDSA/RSA key pair; as a result, there is no need to share the secret with everyone. - -The key is a public/private key pair used for encryption/decryption. The producer key is the public key, and the consumer key is the private key of the key pair. - -The application configures the producer with the public key. This key is used to encrypt the AES data key. The encrypted data key is sent as part of the message header.
Only entities with the private key (in this case the consumer) will be able to decrypt the data key, which is used to decrypt the message. - -A message can be encrypted with more than one key. Any one of the keys used for encrypting the message is sufficient to decrypt the message. - -Pulsar does not store the encryption key anywhere in the Pulsar service. If you lose/delete the private key, your message is irretrievably lost, and is unrecoverable. - -## Producer -![alt text](/assets/pulsar-encryption-producer.jpg "Pulsar Encryption Producer") - -## Consumer -![alt text](/assets/pulsar-encryption-consumer.jpg "Pulsar Encryption Consumer") - -## Here are the steps to get started: - -1. Create your ECDSA or RSA public/private key pair. - -```shell - -openssl ecparam -name secp521r1 -genkey -param_enc explicit -out test_ecdsa_privkey.pem -openssl ec -in test_ecdsa_privkey.pem -pubout -outform pkcs8 -out test_ecdsa_pubkey.pem - -``` - -2. Add the public and private key to your key management system, and configure your producer clients to retrieve public keys and your consumer clients to retrieve private keys. -3. Implement the CryptoKeyReader::getPublicKey() interface for the producer and the CryptoKeyReader::getPrivateKey() interface for the consumer; the Pulsar client invokes these to load the keys. -4. Add the encryption key name to the producer builder: addEncryptionKey("myapp.key") -5. Add the CryptoKeyReader implementation to the producer/consumer builder: cryptoKeyReader(keyReader) -6. Sample producer application: - -```java - -import java.io.IOException; -import java.nio.file.Files; -import java.nio.file.Paths; -import java.util.Map; - -import org.apache.pulsar.client.api.CryptoKeyReader; -import org.apache.pulsar.client.api.EncryptionKeyInfo; -import org.apache.pulsar.client.api.Producer; -import org.apache.pulsar.client.api.PulsarClient; - -class RawFileKeyReader implements CryptoKeyReader { - - String publicKeyFile = ""; - String privateKeyFile = ""; - - RawFileKeyReader(String pubKeyFile, String privKeyFile) { - publicKeyFile = pubKeyFile; - privateKeyFile = privKeyFile; - } - - @Override - public EncryptionKeyInfo getPublicKey(String keyName, Map<String, String> keyMeta) { - EncryptionKeyInfo keyInfo = new EncryptionKeyInfo(); - try { - keyInfo.setKey(Files.readAllBytes(Paths.get(publicKeyFile))); - } catch (IOException e) { - System.out.println("ERROR: Failed to read public key from file " + publicKeyFile); - e.printStackTrace(); - } - return keyInfo; - } - - @Override - public EncryptionKeyInfo getPrivateKey(String keyName, Map<String, String> keyMeta) { - EncryptionKeyInfo keyInfo = new EncryptionKeyInfo(); - try { - keyInfo.setKey(Files.readAllBytes(Paths.get(privateKeyFile))); - } catch (IOException e) { - System.out.println("ERROR: Failed to read private key from file " + privateKeyFile); - e.printStackTrace(); - } - return keyInfo; - } -} - -PulsarClient pulsarClient = PulsarClient.builder() - .serviceUrl("pulsar://localhost:6650") - .build(); - -Producer<byte[]> producer = pulsarClient.newProducer() - .topic("persistent://my-tenant/my-ns/my-topic") - .cryptoKeyReader(new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem")) - .addEncryptionKey("myappkey") - .create(); - -for (int i = 0; i < 10; i++) { - producer.send("my-message".getBytes()); -} - -pulsarClient.close(); - -``` - -7.
Sample Consumer Application: - -```java - -import java.io.IOException; -import java.nio.file.Files; -import java.nio.file.Paths; -import java.util.Map; - -import org.apache.pulsar.client.api.Consumer; -import org.apache.pulsar.client.api.CryptoKeyReader; -import org.apache.pulsar.client.api.EncryptionKeyInfo; -import org.apache.pulsar.client.api.Message; -import org.apache.pulsar.client.api.PulsarClient; - -class RawFileKeyReader implements CryptoKeyReader { - - String publicKeyFile = ""; - String privateKeyFile = ""; - - RawFileKeyReader(String pubKeyFile, String privKeyFile) { - publicKeyFile = pubKeyFile; - privateKeyFile = privKeyFile; - } - - @Override - public EncryptionKeyInfo getPublicKey(String keyName, Map<String, String> keyMeta) { - EncryptionKeyInfo keyInfo = new EncryptionKeyInfo(); - try { - keyInfo.setKey(Files.readAllBytes(Paths.get(publicKeyFile))); - } catch (IOException e) { - System.out.println("ERROR: Failed to read public key from file " + publicKeyFile); - e.printStackTrace(); - } - return keyInfo; - } - - @Override - public EncryptionKeyInfo getPrivateKey(String keyName, Map<String, String> keyMeta) { - EncryptionKeyInfo keyInfo = new EncryptionKeyInfo(); - try { - keyInfo.setKey(Files.readAllBytes(Paths.get(privateKeyFile))); - } catch (IOException e) { - System.out.println("ERROR: Failed to read private key from file " + privateKeyFile); - e.printStackTrace(); - } - return keyInfo; - } -} - -PulsarClient pulsarClient = PulsarClient.builder() - .serviceUrl("pulsar://localhost:6650") - .build(); - -Consumer<byte[]> consumer = pulsarClient.newConsumer() - .topic("persistent://my-tenant/my-ns/my-topic") - .subscriptionName("my-subscriber-name") - .cryptoKeyReader(new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem")) - .subscribe(); - -Message<byte[]> msg = null; - -for (int i = 0; i < 10; i++) { - msg = consumer.receive(); - // do something - System.out.println("Received: " + new String(msg.getData())); -} - -// Acknowledge the consumption of all messages at once -consumer.acknowledgeCumulative(msg); -pulsarClient.close(); - -``` - -## Key rotation -Pulsar generates a new AES data key every 4 hours or after a certain number of messages are published. The asymmetric public key is automatically fetched by the producer every 4 hours by calling CryptoKeyReader::getPublicKey() to retrieve the latest version. - -## Enabling encryption at the producer application: -If you produce messages that are consumed across application boundaries, you need to ensure that consumers in other applications have access to one of the private keys that can decrypt the messages. This can be done in two ways: -1. The consumer application provides you access to their public key, which you add to your producer keys -1. You grant access to one of the private keys from the pairs used by the producer - -In some cases, the producer may want to encrypt the messages with multiple keys. For this, add all such keys to the config. The consumer will be able to decrypt the message, as long as it has access to at least one of the keys. - -For example, if messages need to be encrypted using two keys, myapp.messagekey1 and myapp.messagekey2: - -```java - -conf.addEncryptionKey("myapp.messagekey1"); -conf.addEncryptionKey("myapp.messagekey2"); - -``` - -## Decrypting encrypted messages at the consumer application: -Consumers require access to one of the private keys to decrypt messages produced by the producer. If you would like to receive encrypted messages, create a public/private key pair and give your public key to the producer application to encrypt messages using your public key. - -## Handling Failures: -* Producer/Consumer loses access to the key - * The producer action will fail, indicating the cause of the failure. The application has the option to proceed with sending unencrypted messages in such cases. Call conf.setCryptoFailureAction(ProducerCryptoFailureAction) to control the producer behavior.
The default behavior is to fail the request. - * If consumption fails due to a decryption failure or missing keys at the consumer, the application has the option to consume the encrypted message or discard it. Call conf.setCryptoFailureAction(ConsumerCryptoFailureAction) to control the consumer behavior. The default behavior is to fail the request. -The application will never be able to decrypt the messages if the private key is permanently lost. -* Batch messaging - * If decryption fails and the message contains batch messages, the client will not be able to retrieve individual messages in the batch, hence message consumption fails even if conf.setCryptoFailureAction() is set to CONSUME. -* If decryption fails, the message consumption stops and the application will notice backlog growth in addition to decryption failure messages in the client log. If the application does not have access to the private key to decrypt the message, the only option is to skip/discard backlogged messages. - diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/cookbooks-message-queue.md b/site2/website/versioned_docs/version-2.8.1-deprecated/cookbooks-message-queue.md deleted file mode 100644 index eb43cbde5fb818..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/cookbooks-message-queue.md +++ /dev/null @@ -1,127 +0,0 @@ ---- -id: cookbooks-message-queue -title: Using Pulsar as a message queue -sidebar_label: "Message queue" -original_id: cookbooks-message-queue ---- - -Message queues are essential components of many large-scale data architectures. If every single work object that passes through your system absolutely *must* be processed in spite of the slowness or outright failure of this or that system component, there's a good chance that you'll need a message queue to step in and ensure that unprocessed data is retained---with correct ordering---until the required actions are taken. - -Pulsar is a great choice for a message queue because: - -* it was built with [persistent message storage](concepts-architecture-overview.md#persistent-storage) in mind -* it offers automatic load balancing across [consumers](reference-terminology.md#consumer) for messages on a topic (or custom load balancing if you wish) - -> You can use the same Pulsar installation to act as a real-time message bus and as a message queue if you wish (or just one or the other). You can set aside some topics for real-time purposes and other topics for message queue purposes (or use specific namespaces for either purpose if you wish). - - -# Client configuration changes - -To use a Pulsar [topic](reference-terminology.md#topic) as a message queue, you should distribute the receiver load on that topic across several consumers (the optimal number of consumers will depend on the load). Each consumer must: - -* Establish a [shared subscription](concepts-messaging.md#shared) and use the same subscription name as the other consumers (otherwise the subscription is not shared and the consumers can't act as a processing ensemble) -* If you'd like to have tight control over message dispatching across consumers, set the consumers' **receiver queue** size very low (potentially even to 0 if necessary). Each Pulsar [consumer](reference-terminology.md#consumer) has a receiver queue that determines how many messages the consumer will attempt to fetch at a time. A receiver queue of 1000 (the default), for example, means that the consumer will attempt to process 1000 messages from the topic's backlog upon connection.
Setting the receiver queue to zero essentially means ensuring that each consumer is only doing one thing at a time. - - The downside to restricting the receiver queue size of consumers is that it limits the potential throughput of those consumers and cannot be used with [partitioned topics](reference-terminology.md#partitioned-topic). Whether the performance/control trade-off is worthwhile will depend on your use case. - -## Java clients - -Here's an example Java consumer configuration that uses a shared subscription: - -```java - -import org.apache.pulsar.client.api.Consumer; -import org.apache.pulsar.client.api.PulsarClient; -import org.apache.pulsar.client.api.SubscriptionType; - -String SERVICE_URL = "pulsar://localhost:6650"; -String TOPIC = "persistent://public/default/mq-topic-1"; -String subscription = "sub-1"; - -PulsarClient client = PulsarClient.builder() - .serviceUrl(SERVICE_URL) - .build(); - -Consumer<byte[]> consumer = client.newConsumer() - .topic(TOPIC) - .subscriptionName(subscription) - .subscriptionType(SubscriptionType.Shared) - // If you'd like to restrict the receiver queue size - .receiverQueueSize(10) - .subscribe(); - -``` - -## Python clients - -Here's an example Python consumer configuration that uses a shared subscription: - -```python - -from pulsar import Client, ConsumerType - -SERVICE_URL = "pulsar://localhost:6650" -TOPIC = "persistent://public/default/mq-topic-1" -SUBSCRIPTION = "sub-1" - -client = Client(SERVICE_URL) -consumer = client.subscribe( - TOPIC, - SUBSCRIPTION, - # If you'd like to restrict the receiver queue size - receiver_queue_size=10, - consumer_type=ConsumerType.Shared) - -``` - -## C++ clients - -Here's an example C++ consumer configuration that uses a shared subscription: - -```cpp - -#include <pulsar/Client.h> - -std::string serviceUrl = "pulsar://localhost:6650"; -std::string topic = "persistent://public/default/mq-topic-1"; -std::string subscription = "sub-1"; - -Client client(serviceUrl); - -ConsumerConfiguration consumerConfig; -consumerConfig.setConsumerType(ConsumerShared); -// If you'd like to restrict the receiver queue size -consumerConfig.setReceiverQueueSize(10); - -Consumer consumer; - -Result result = client.subscribe(topic, subscription, consumerConfig, consumer); - -``` - -## Go clients - -Here is an example of a Go consumer configuration that uses a shared subscription: - -```go - -import ( - "log" - - "github.com/apache/pulsar-client-go/pulsar" -) - -client, err := pulsar.NewClient(pulsar.ClientOptions{ - URL: "pulsar://localhost:6650", -}) -if err != nil { - log.Fatal(err) -} -consumer, err := client.Subscribe(pulsar.ConsumerOptions{ - Topic: "persistent://public/default/mq-topic-1", - SubscriptionName: "sub-1", - Type: pulsar.Shared, - ReceiverQueueSize: 10, // If you'd like to restrict the receiver queue size -}) -if err != nil { - log.Fatal(err) -} - -``` - diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/cookbooks-non-persistent.md b/site2/website/versioned_docs/version-2.8.1-deprecated/cookbooks-non-persistent.md deleted file mode 100644 index 178301e86eb8df..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/cookbooks-non-persistent.md +++ /dev/null @@ -1,63 +0,0 @@ ---- -id: cookbooks-non-persistent -title: Non-persistent messaging -sidebar_label: "Non-persistent messaging" -original_id: cookbooks-non-persistent ---- - -**Non-persistent topics** are Pulsar topics in which message data is *never* [persistently stored](concepts-architecture-overview.md#persistent-storage) and kept only in memory.
This cookbook provides: - -* A basic [conceptual overview](#overview) of non-persistent topics -* Information about [configurable parameters](#configuration) related to non-persistent topics -* A guide to the [CLI interface](#cli) for managing non-persistent topics - -## Overview - -By default, Pulsar persistently stores *all* unacknowledged messages on multiple [BookKeeper](#persistent-storage) bookies (storage nodes). Data for messages on persistent topics can thus survive broker restarts and subscriber failover. - -Pulsar also, however, supports **non-persistent topics**, which are topics on which messages are *never* persisted to disk and live only in memory. When using non-persistent delivery, killing a Pulsar [broker](reference-terminology.md#broker) or disconnecting a subscriber from a topic means that all in-transit messages are lost on that (non-persistent) topic, so clients may see message loss. - -Non-persistent topics have names of this form (note the `non-persistent` in the name): - -```http - -non-persistent://tenant/namespace/topic - -``` - -> For more high-level information about non-persistent topics, see the [Concepts and Architecture](concepts-messaging.md#non-persistent-topics) documentation. - -## Using - -> In order to use non-persistent topics, they must be [enabled](#enabling) in your Pulsar broker configuration. - -In order to use non-persistent topics, you only need to differentiate them by name when interacting with them. This [`pulsar-client produce`](reference-cli-tools.md#pulsar-client-produce) command, for example, would produce one message on a non-persistent topic in a standalone cluster: - -```bash - -$ bin/pulsar-client produce non-persistent://public/default/example-np-topic \ - --num-produce 1 \ - --messages "This message will be stored only in memory" - -``` - -> For a more thorough guide to non-persistent topics from an administrative perspective, see the [Non-persistent topics](admin-api-topics.md) guide. - -## Enabling - -In order to enable non-persistent topics in a Pulsar broker, the [`enableNonPersistentTopics`](reference-configuration.md#broker-enableNonPersistentTopics) parameter must be set to `true`. This is the default, and so you won't need to take any action to enable non-persistent messaging. - - -> #### Configuration for standalone mode -> If you're running Pulsar in standalone mode, the same configurable parameters are available but in the [`standalone.conf`](reference-configuration.md#standalone) configuration file. - -If you'd like to enable *only* non-persistent topics in a broker, you can set the [`enablePersistentTopics`](reference-configuration.md#broker-enablePersistentTopics) parameter to `false` and the `enableNonPersistentTopics` parameter to `true`. - -## Managing with CLI - -Non-persistent topics can be managed using the [`pulsar-admin non-persistent`](reference-pulsar-admin.md#non-persistent) command-line interface. With that interface you can perform actions like [create a partitioned non-persistent topic](reference-pulsar-admin.md#non-persistent-create-partitioned-topic), get [stats](reference-pulsar-admin.md#non-persistent-stats) for a non-persistent topic, [list](reference-pulsar-admin.md) non-persistent topics under a namespace, and more. - -## Using with Pulsar clients - -You shouldn't need to make any changes to your Pulsar clients to use non-persistent messaging beyond making sure that you use proper [topic names](#using) with `non-persistent` as the topic type.
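As a hedged illustration of that point, the following minimal Java sketch (the service URL and topic are placeholder values) produces to a non-persistent topic with the completely standard client API; only the topic name differs from the persistent case.

```java

import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;

public class NonPersistentProducerSketch {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")
                .build();

        // The only non-persistent-specific detail is the topic name prefix.
        Producer<byte[]> producer = client.newProducer()
                .topic("non-persistent://public/default/example-np-topic")
                .create();

        producer.send("This message will be kept only in memory".getBytes());

        producer.close();
        client.close();
    }
}
```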
- diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/cookbooks-partitioned.md b/site2/website/versioned_docs/version-2.8.1-deprecated/cookbooks-partitioned.md deleted file mode 100644 index fb9ac354cc6d60..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/cookbooks-partitioned.md +++ /dev/null @@ -1,7 +0,0 @@ ---- -id: cookbooks-partitioned -title: Partitioned topics -sidebar_label: "Partitioned Topics" -original_id: cookbooks-partitioned ---- -For details of the content, refer to [manage topics](admin-api-topics.md). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/cookbooks-retention-expiry.md b/site2/website/versioned_docs/version-2.8.1-deprecated/cookbooks-retention-expiry.md deleted file mode 100644 index 192fbe6c53277f..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/cookbooks-retention-expiry.md +++ /dev/null @@ -1,413 +0,0 @@ ---- -id: cookbooks-retention-expiry -title: Message retention and expiry -sidebar_label: "Message retention and expiry" -original_id: cookbooks-retention-expiry ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -Pulsar brokers are responsible for handling messages that pass through Pulsar, including [persistent storage](concepts-architecture-overview.md#persistent-storage) of messages. By default, for each topic, brokers only retain messages that are in at least one backlog. A backlog is the set of unacknowledged messages for a particular subscription. As a topic can have multiple subscriptions, a topic can have multiple backlogs. - -As a consequence, no messages are retained (by default) on a topic that has not had any subscriptions created for it. - -(Note that messages that are no longer being stored are not necessarily immediately deleted, and may in fact still be accessible until the next ledger rollover. Because clients cannot predict when rollovers may happen, it is not wise to rely on a rollover not happening at an inconvenient point in time.) - -In Pulsar, you can modify this behavior, with namespace granularity, in two ways: - -* You can persistently store messages that are not within a backlog (because they've been acknowledged on every existing subscription, or because there are no subscriptions) by setting [retention policies](#retention-policies). -* Messages that are not acknowledged within a specified timeframe can be automatically acknowledged by specifying the [time to live](#time-to-live-ttl) (TTL). - -Pulsar's [admin interface](admin-api-overview.md) enables you to manage both retention policies and TTL with namespace granularity (and thus within a specific tenant and either on a specific cluster or in the [`global`](concepts-architecture-overview.md#global-cluster) cluster). - - -> #### Retention and TTL solve two different problems -> * Message retention: Keep the data for at least X hours (even if acknowledged) -> * Time-to-live: Discard data after some time (by automatically acknowledging) -> -> Most applications will want to use at most one of these. - - -## Retention policies - -By default, when a Pulsar message arrives at a broker, the message is stored until it has been acknowledged on all subscriptions, at which point it is marked for deletion. You can override this behavior and retain messages that have already been acknowledged on all subscriptions by setting a *retention policy* for all topics in a given namespace.
Retention is based on both a *size limit* and a *time limit*. - -Retention policies are useful when you use the Reader interface. The Reader interface does not use acknowledgements, and messages do not exist within backlogs. You must configure retention for Reader-only use cases. - -When you set a retention policy on topics in a namespace, you must set **both** a *size limit* and a *time limit*. You can refer to the following table to set retention policies in `pulsar-admin` and Java. - -|Time limit|Size limit| Message retention | -|----------|----------|------------------------| -| -1 | -1 | Infinite retention | -| -1 | >0 | Based on the size limit | -| >0 | -1 | Based on the time limit | -| 0 | 0 | Disable message retention (by default) | -| 0 | >0 | Invalid | -| >0 | 0 | Invalid | -| >0 | >0 | Acknowledged messages or messages with no active subscription will not be retained when either time or size reaches the limit. | - -The retention settings apply to all messages on topics that do not have any subscriptions, or to messages that have been acknowledged by all subscriptions. The retention policy settings do not affect unacknowledged messages on topics with subscriptions. The unacknowledged messages are controlled by the backlog quota. - -When a retention limit on a topic is exceeded, the oldest message is marked for deletion until the set of retained messages falls within the specified limits again. - -### Defaults - -You can set message retention at the instance level with the following two parameters: `defaultRetentionTimeInMinutes` and `defaultRetentionSizeInMB`. Both parameters are set to `0` by default. - -For more information about the two parameters, refer to the [`broker.conf`](reference-configuration.md#broker) configuration file. - -### Set retention policy - -You can set a retention policy for a namespace by specifying the namespace, a size limit and a time limit in `pulsar-admin`, REST API and Java. - -````mdx-code-block - - - -You can use the [`set-retention`](reference-pulsar-admin.md#namespaces-set-retention) subcommand and specify a namespace, a size limit using the `-s`/`--size` flag, and a time limit using the `-t`/`--time` flag. - -In the following example, the size limit is set to 10 GB and the time limit is set to 3 hours for each topic within the `my-tenant/my-ns` namespace. -- When the size of messages reaches 10 GB on a topic within 3 hours, the acknowledged messages will not be retained. -- After 3 hours, even if the message size is less than 10 GB, the acknowledged messages will not be retained. - -```shell - -$ pulsar-admin namespaces set-retention my-tenant/my-ns \ - --size 10G \ - --time 3h - -``` - -In the following example, the time is not limited and the size limit is set to 1 TB. The size limit determines the retention. - -```shell - -$ pulsar-admin namespaces set-retention my-tenant/my-ns \ - --size 1T \ - --time -1 - -``` - -In the following example, the size is not limited and the time limit is set to 3 hours. The time limit determines the retention. - -```shell - -$ pulsar-admin namespaces set-retention my-tenant/my-ns \ - --size -1 \ - --time 3h - -``` - -To achieve infinite retention, set both values to `-1`. - -```shell - -$ pulsar-admin namespaces set-retention my-tenant/my-ns \ - --size -1 \ - --time -1 - -``` - -To disable the retention policy, set both values to `0`.
- -```shell - -$ pulsar-admin namespaces set-retention my-tenant/my-ns \ - --size 0 \ - --time 0 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/retention|operation/setRetention?version=@pulsar:version_number@} - -:::note - -To disable the retention policy, you need to set both the size and time limit to `0`. Setting only the size limit or only the time limit to `0` is invalid. - -::: - - - - -```java - -int retentionTime = 10; // 10 minutes -int retentionSize = 500; // 500 megabytes -RetentionPolicies policies = new RetentionPolicies(retentionTime, retentionSize); -admin.namespaces().setRetention(namespace, policies); - -``` - - - - -```` - -### Get retention policy - -You can fetch the retention policy for a namespace by specifying the namespace. The output will be a JSON object with two keys: `retentionTimeInMinutes` and `retentionSizeInMB`. - -#### pulsar-admin - -Use the [`get-retention`](reference-pulsar-admin.md#namespaces) subcommand and specify the namespace. - -##### Example - -```shell - -$ pulsar-admin namespaces get-retention my-tenant/my-ns -{ - "retentionTimeInMinutes": 10, - "retentionSizeInMB": 500 -} - -``` - -#### REST API - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/retention|operation/getRetention?version=@pulsar:version_number@} - -#### Java - -```java - -admin.namespaces().getRetention(namespace); - -``` - -## Backlog quotas - -*Backlogs* are sets of unacknowledged messages for a topic that have been stored by bookies. Pulsar stores all unacknowledged messages in backlogs until they are processed and acknowledged. - -You can control the allowable size of backlogs, at the namespace level, using *backlog quotas*. Setting a backlog quota involves setting: - -* an allowable *size threshold* for each topic in the namespace -* a *retention policy* that determines which action the [broker](reference-terminology.md#broker) takes if the threshold is exceeded. - -The following retention policies are available: - -Policy | Action -:------|:------ -`producer_request_hold` | The broker will hold and not persist produce request payload -`producer_exception` | The broker will disconnect from the client by throwing an exception -`consumer_backlog_eviction` | The broker will begin discarding backlog messages - - -> #### Beware the distinction between retention policy types -> As you may have noticed, there are two definitions of the term "retention policy" in Pulsar, one that applies to persistent storage of messages not in backlogs, and one that applies to messages within backlogs. - - -Backlog quotas are handled at the namespace level. They can be managed via the following operations: - -### Set size/time thresholds and backlog retention policies - -You can set a size and/or time threshold and backlog retention policy for all of the topics in a [namespace](reference-terminology.md#namespace) by specifying the namespace, a size limit and/or a time limit in seconds, and a policy by name. - -#### pulsar-admin - -Use the [`set-backlog-quota`](reference-pulsar-admin.md#namespaces) subcommand and specify a namespace, a size limit using the `-l`/`--limit` flag, and a retention policy using the `-p`/`--policy` flag.
- -##### Example - -```shell - -$ pulsar-admin namespaces set-backlog-quota my-tenant/my-ns \ - --limit 2G \ - --limitTime 36000 \ - --policy producer_request_hold - -``` - -#### REST API - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/setBacklogQuota?version=@pulsar:version_number@} - -#### Java - -```java - -long sizeLimit = 2147483648L; -BacklogQuota.RetentionPolicy policy = BacklogQuota.RetentionPolicy.producer_request_hold; -BacklogQuota quota = new BacklogQuota(sizeLimit, policy); -admin.namespaces().setBacklogQuota(namespace, quota); - -``` - -### Get backlog threshold and backlog retention policy - -You can see which size threshold and backlog retention policy have been applied to a namespace. - -#### pulsar-admin - -Use the [`get-backlog-quotas`](reference-pulsar-admin.md#pulsar-admin-namespaces-get-backlog-quotas) subcommand and specify a namespace. Here's an example: - -```shell - -$ pulsar-admin namespaces get-backlog-quotas my-tenant/my-ns -{ - "destination_storage": { - "limit" : 2147483648, - "policy" : "producer_request_hold" - } -} - -``` - -#### REST API - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/backlogQuotaMap|operation/getBacklogQuotaMap?version=@pulsar:version_number@} - -#### Java - -```java - -Map<BacklogQuota.BacklogQuotaType, BacklogQuota> quotas = - admin.namespaces().getBacklogQuotas(namespace); - -``` - -### Remove backlog quotas - -#### pulsar-admin - -Use the [`remove-backlog-quota`](reference-pulsar-admin.md#pulsar-admin-namespaces-remove-backlog-quota) subcommand and specify a namespace. Here's an example: - -```shell - -$ pulsar-admin namespaces remove-backlog-quota my-tenant/my-ns - -``` - -#### REST API - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/removeBacklogQuota?version=@pulsar:version_number@} - -#### Java - -```java - -admin.namespaces().removeBacklogQuota(namespace); - -``` - -### Clear backlog - -#### pulsar-admin - -Use the [`clear-backlog`](reference-pulsar-admin.md#pulsar-admin-namespaces-clear-backlog) subcommand. - -##### Example - -```shell - -$ pulsar-admin namespaces clear-backlog my-tenant/my-ns - -``` - -By default, you will be prompted to ensure that you really want to clear the backlog for the namespace. You can override the prompt using the `-f`/`--force` flag. - -## Time to live (TTL) - -By default, Pulsar stores all unacknowledged messages forever. This can lead to heavy disk space usage in cases where a lot of messages are going unacknowledged. If disk space is a concern, you can set a time to live (TTL) that determines how long unacknowledged messages will be retained. - -### Set the TTL for a namespace - -#### pulsar-admin - -Use the [`set-message-ttl`](reference-pulsar-admin.md#pulsar-admin-namespaces-set-message-ttl) subcommand and specify a namespace and a TTL (in seconds) using the `-ttl`/`--messageTTL` flag. - -##### Example - -```shell - -$ pulsar-admin namespaces set-message-ttl my-tenant/my-ns \ - --messageTTL 120 # TTL of 2 minutes - -``` - -#### REST API - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/setNamespaceMessageTTL?version=@pulsar:version_number@} - -#### Java - -```java - -admin.namespaces().setNamespaceMessageTTL(namespace, ttlInSeconds); - -``` - -### Get the TTL configuration for a namespace - -#### pulsar-admin - -Use the [`get-message-ttl`](reference-pulsar-admin.md#pulsar-admin-namespaces-get-message-ttl) subcommand and specify a namespace.
- -##### Example - -```shell - -$ pulsar-admin namespaces get-message-ttl my-tenant/my-ns -60 - -``` - -#### REST API - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/getNamespaceMessageTTL?version=@pulsar:version_number@} - -#### Java - -```java - -admin.namespaces().getNamespaceMessageTTL(namespace); - -``` - -### Remove the TTL configuration for a namespace - -#### pulsar-admin - -Use the [`remove-message-ttl`](reference-pulsar-admin.md#pulsar-admin-namespaces-remove-message-ttl) subcommand and specify a namespace. - -##### Example - -```shell - -$ pulsar-admin namespaces remove-message-ttl my-tenant/my-ns - -``` - -#### REST API - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/removeNamespaceMessageTTL?version=@pulsar:version_number@} - -#### Java - -```java - -admin.namespaces().removeNamespaceMessageTTL(namespace); - -``` - -## Delete messages from namespaces - -If you do not have any retention period and never have much of a backlog, the upper limit for retaining acknowledged messages equals the Pulsar segment rollover period + entry log rollover period + (garbage collection interval * garbage collection ratios). - -- **Segment rollover period**: basically, the segment rollover period is how often a new segment is created. Once a new segment is created, the old segment will be deleted. By default, this happens either when you have written 50,000 entries (messages) or have waited 240 minutes. You can tune this in your broker. - -- **Entry log rollover period**: multiple ledgers in BookKeeper are interleaved into an [entry log](https://bookkeeper.apache.org/docs/4.11.1/getting-started/concepts/#entry-logs). Before the data of a deleted ledger can be removed, the entry log containing it must first be rolled over. -The entry log rollover period is configurable, but is purely based on the entry log size. For details, see [here](https://bookkeeper.apache.org/docs/4.11.1/reference/config/#entry-log-settings). Once the entry log is rolled over, the entry log can be garbage collected. - -- **Garbage collection interval**: because entry logs have interleaved ledgers, to free up space, the entry logs need to be rewritten. The garbage collection interval is how often BookKeeper performs garbage collection, which is related to minor compaction and major compaction of entry logs. For details, see [here](https://bookkeeper.apache.org/docs/4.11.1/reference/config/#entry-log-compaction-settings). diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/cookbooks-tiered-storage.md b/site2/website/versioned_docs/version-2.8.1-deprecated/cookbooks-tiered-storage.md deleted file mode 100644 index 4c86166c7b1ceb..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/cookbooks-tiered-storage.md +++ /dev/null @@ -1,342 +0,0 @@ ---- -id: cookbooks-tiered-storage -title: Tiered Storage -sidebar_label: "Tiered Storage" -original_id: cookbooks-tiered-storage ---- - -Pulsar's **Tiered Storage** feature allows older backlog data to be offloaded to long term storage, thereby freeing up space in BookKeeper and reducing storage costs. This cookbook walks you through using tiered storage in your Pulsar cluster. - -* Tiered storage uses [Apache jclouds](https://jclouds.apache.org) to support [Amazon S3](https://aws.amazon.com/s3/) and [Google Cloud Storage](https://cloud.google.com/storage/) (GCS for short) for long term storage.
With jclouds, it is easy to add support for more [cloud storage providers](https://jclouds.apache.org/reference/providers/#blobstore-providers) in the future. - -* Tiered storage uses [Apache Hadoop](http://hadoop.apache.org/) to support filesystems for long term storage. With Hadoop, it is easy to add support for more filesystems in the future. - -## When should I use Tiered Storage? - -Tiered storage should be used when you have a topic for which you want to keep a very long backlog. For example, if you have a topic containing user actions which you use to train your recommendation systems, you may want to keep that data for a long time, so that if you change your recommendation algorithm you can rerun it against your full user history. - -## The offloading mechanism - -A topic in Pulsar is backed by a log, known as a managed ledger. This log is composed of an ordered list of segments. Pulsar only ever writes to the final segment of the log. All previous segments are sealed. The data within the segment is immutable. This is known as a segment-oriented architecture. - -![Tiered storage](/assets/pulsar-tiered-storage.png "Tiered Storage") - -The Tiered Storage offloading mechanism takes advantage of this segment-oriented architecture. When offloading is requested, the segments of the log are copied, one-by-one, to tiered storage. All segments of the log, apart from the segment currently being written to, can be offloaded. - -On the broker, the administrator must configure the bucket and credentials for the cloud storage service. -The configured bucket must exist before attempting to offload. If it does not exist, the offload operation will fail. - -Pulsar uses multi-part objects to upload the segment data. It is possible that a broker could crash while uploading the data. -We recommend you add a lifecycle rule to your bucket to expire incomplete multi-part uploads after a day or two to avoid -getting charged for incomplete uploads. - -When ledgers are offloaded to long term storage, you can still query data in the offloaded ledgers with Pulsar SQL. - -## Configuring the offload driver - -Offloading is configured in ```broker.conf```. - -At a minimum, the administrator must configure the driver, the bucket and the authenticating credentials. -There are also some other knobs to configure, like the bucket region, the max block size in backed storage, etc. - -Currently we support the following driver types: - -- `aws-s3`: [Simple Cloud Storage Service](https://aws.amazon.com/s3/) -- `google-cloud-storage`: [Google Cloud Storage](https://cloud.google.com/storage/) -- `filesystem`: [Filesystem Storage](http://hadoop.apache.org/) - -> Driver names are case-insensitive. There is a fourth driver type, `s3`, which is identical to `aws-s3`, -> though it requires that you specify an endpoint url using `s3ManagedLedgerOffloadServiceEndpoint`. This is useful if -> using an S3-compatible data store, other than AWS. - -```conf - -managedLedgerOffloadDriver=aws-s3 - -``` - -### "aws-s3" Driver configuration - -#### Bucket and Region - -Buckets are the basic containers that hold your data. -Everything that you store in Cloud Storage must be contained in a bucket. -You can use buckets to organize your data and control access to your data, -but unlike directories and folders, you cannot nest buckets. - -```conf - -s3ManagedLedgerOffloadBucket=pulsar-topic-offload - -``` - -The bucket region is the region where the bucket is located. The bucket region is not a required -but a recommended configuration.
If it is not configured, the default region is used.

With AWS S3, the default region is `US East (N. Virginia)`. Page [AWS Regions and Endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html) contains more information.

```conf

s3ManagedLedgerOffloadRegion=eu-west-3

```

#### Authentication with AWS

To be able to access AWS S3, you need to authenticate with AWS S3.
Pulsar does not provide any direct means of configuring authentication for AWS S3,
but relies on the mechanisms supported by the [DefaultAWSCredentialsProviderChain](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html).

Once you have created a set of credentials in the AWS IAM console, they can be configured in a number of ways.

1. Using EC2 instance metadata credentials

If you are on an AWS instance with an instance profile that provides credentials, Pulsar uses these credentials
if no other mechanism is provided.

2. Set the environment variables **AWS_ACCESS_KEY_ID** and **AWS_SECRET_ACCESS_KEY** in ```conf/pulsar_env.sh```.

```bash

export AWS_ACCESS_KEY_ID=ABC123456789
export AWS_SECRET_ACCESS_KEY=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c

```

> "export" is important so that the variables are made available in the environment of spawned processes.


3. Add the Java system properties *aws.accessKeyId* and *aws.secretKey* to **PULSAR_EXTRA_OPTS** in `conf/pulsar_env.sh`.

```bash

PULSAR_EXTRA_OPTS="${PULSAR_EXTRA_OPTS} ${PULSAR_MEM} ${PULSAR_GC} -Daws.accessKeyId=ABC123456789 -Daws.secretKey=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c -Dio.netty.leakDetectionLevel=disabled -Dio.netty.recycler.maxCapacityPerThread=4096"

```

4. Set the access credentials in ```~/.aws/credentials```.

```conf

[default]
aws_access_key_id=ABC123456789
aws_secret_access_key=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c

```

5. Assuming an IAM role

If you want to assume an IAM role, you can specify the following:

```conf

s3ManagedLedgerOffloadRole=<aws role arn>
s3ManagedLedgerOffloadRoleSessionName=pulsar-s3-offload

```

This uses the `DefaultAWSCredentialsProviderChain` for assuming this role.

> The broker must be rebooted for credentials specified in `pulsar_env.sh` to take effect.

#### Configuring the size of block read/write

Pulsar also provides some knobs to configure the size of requests sent to AWS S3.

- ```s3ManagedLedgerOffloadMaxBlockSizeInBytes``` configures the maximum size of
  a "part" sent during a multipart upload. This cannot be smaller than 5MB. Default is 64MB.
- ```s3ManagedLedgerOffloadReadBufferSizeInBytes``` configures the block size for
  each individual read when reading back data from AWS S3. Default is 1MB.

In both cases, these should not be touched unless you know what you are doing.

### "google-cloud-storage" Driver configuration

Buckets are the basic containers that hold your data. Everything that you store in
Cloud Storage must be contained in a bucket. You can use buckets to organize your data and
control access to your data, but unlike directories and folders, you cannot nest buckets.

```conf

gcsManagedLedgerOffloadBucket=pulsar-topic-offload

```

Bucket Region is the region where the bucket is located. Bucket Region is not a required but
a recommended configuration. If it is not configured, the default region is used.

Regarding GCS, buckets are by default created in the `us multi-regional location`;
page [Bucket Locations](https://cloud.google.com/storage/docs/bucket-locations) contains more information.

```conf

gcsManagedLedgerOffloadRegion=europe-west3

```

#### Authentication with GCS

The administrator needs to configure `gcsManagedLedgerOffloadServiceAccountKeyFile` in `broker.conf`
for the broker to be able to access the GCS service. `gcsManagedLedgerOffloadServiceAccountKeyFile` is
a JSON file, containing the GCS credentials of a service account.
The [Service Accounts section of this page](https://support.google.com/googleapi/answer/6158849) contains
more information on how to create this key file for authentication. More information about Google Cloud IAM
is available [here](https://cloud.google.com/storage/docs/access-control/iam).

To generate service account credentials or view the public credentials that you've already generated, follow these steps:

1. Open the [Service accounts page](https://console.developers.google.com/iam-admin/serviceaccounts).
2. Select a project or create a new one.
3. Click **Create service account**.
4. In the **Create service account** window, type a name for the service account, and select **Furnish a new private key**. If you want to [grant G Suite domain-wide authority](https://developers.google.com/identity/protocols/OAuth2ServiceAccount#delegatingauthority) to the service account, also select **Enable G Suite Domain-wide Delegation**.
5. Click **Create**.

> Note: Make sure that the service account you create has permission to operate GCS. You need to assign the **Storage Admin** permission to your service account, as described [here](https://cloud.google.com/storage/docs/access-control/iam).

```conf

gcsManagedLedgerOffloadServiceAccountKeyFile="/Users/hello/Downloads/project-804d5e6a6f33.json"

```

#### Configuring the size of block read/write

Pulsar also provides some knobs to configure the size of requests sent to GCS.

- ```gcsManagedLedgerOffloadMaxBlockSizeInBytes``` configures the maximum size of a "part" sent
  during a multipart upload. This cannot be smaller than 5MB. Default is 64MB.
- ```gcsManagedLedgerOffloadReadBufferSizeInBytes``` configures the block size for each individual
  read when reading back data from GCS. Default is 1MB.

In both cases, these should not be touched unless you know what you are doing.

### "filesystem" Driver configuration


#### Configure connection address

You can configure the connection address in the `broker.conf` file.

```conf

fileSystemURI="hdfs://127.0.0.1:9000"

```

#### Configure Hadoop profile path

The configuration file is stored in the Hadoop profile path. It contains various settings, such as base path, authentication, and so on.

```conf

fileSystemProfilePath="../conf/filesystem_offload_core_site.xml"

```

The model for storing topic data uses `org.apache.hadoop.io.MapFile`. You can use all of the configurations in `org.apache.hadoop.io.MapFile` for Hadoop.

**Example**

```conf

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value></value>
    </property>

    <property>
        <name>hadoop.tmp.dir</name>
        <value>pulsar</value>
    </property>

    <property>
        <name>io.file.buffer.size</name>
        <value>4096</value>
    </property>

    <property>
        <name>io.seqfile.compress.blocksize</name>
        <value>1000000</value>
    </property>

    <property>
        <name>io.seqfile.compression.type</name>
        <value>BLOCK</value>
    </property>

    <property>
        <name>io.map.index.interval</name>
        <value>128</value>
    </property>
</configuration>

```

For more information about the configurations in `org.apache.hadoop.io.MapFile`, see [Filesystem Storage](http://hadoop.apache.org/).
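
Putting the filesystem driver settings together, a minimal sketch of the relevant `broker.conf` entries might look like the following (the HDFS address and profile path are illustrative and must match your environment):

```conf

managedLedgerOffloadDriver=filesystem
fileSystemURI="hdfs://127.0.0.1:9000"
fileSystemProfilePath="../conf/filesystem_offload_core_site.xml"

```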

## Configuring offload to run automatically

Namespace policies can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that the topic has stored on the Pulsar cluster. Once the topic reaches the threshold, an offload operation is triggered. Setting a negative value for the threshold disables automatic offloading. Setting the threshold to 0 causes the broker to offload data as soon as it possibly can.

```bash

$ bin/pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace

```

> Automatic offload runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, offload will not be triggered until the current segment is full.

## Configuring read priority for offloaded messages

By default, once messages are offloaded to long term storage, brokers read them from long term storage, but the messages still exist in BookKeeper for a period that depends on the administrator's configuration. For messages that exist in both BookKeeper and long term storage, if you prefer to read them from BookKeeper, you can use the following commands to change this configuration.

```bash

# default value for -orp is tiered-storage-first
$ bin/pulsar-admin namespaces set-offload-policies my-tenant/my-namespace -orp bookkeeper-first
$ bin/pulsar-admin topics set-offload-policies my-tenant/my-namespace/topic1 -orp bookkeeper-first

```

## Triggering offload manually

Offloading can be manually triggered through a REST endpoint on the Pulsar broker. We provide a CLI which calls this REST endpoint for you.

When triggering offload, you must specify the maximum size, in bytes, of backlog which will be retained locally in BookKeeper. The offload mechanism offloads segments from the start of the topic backlog until this condition is met.

```bash

$ bin/pulsar-admin topics offload --size-threshold 10M my-tenant/my-namespace/topic1
Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1

```

The command that triggers an offload does not wait until the offload operation has completed. To check the status of the offload, use `offload-status`.

```bash

$ bin/pulsar-admin topics offload-status my-tenant/my-namespace/topic1
Offload is currently running

```

To wait for offload to complete, add the `-w` flag.

```bash

$ bin/pulsar-admin topics offload-status -w my-tenant/my-namespace/topic1
Offload was a success

```

If there is an error offloading, the error is propagated to the `offload-status` command.

```bash

$ bin/pulsar-admin topics offload-status persistent://public/default/topic1
Error in offload
null

Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=

```
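
The same operations are also available programmatically. The following is a minimal sketch using the Java admin client; the service URL and topic name are illustrative, error handling is omitted, and API details (such as the public `status` field) vary slightly across Pulsar versions:

```java

import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.client.api.MessageId;

public class OffloadExample {
    public static void main(String[] args) throws Exception {
        PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080") // illustrative broker web service URL
                .build();

        String topic = "persistent://my-tenant/my-namespace/topic1";

        // Request offload of sealed segments up to the given message ID;
        // MessageId.latest is an illustrative choice meaning "everything sealed so far".
        admin.topics().triggerOffload(topic, MessageId.latest);

        // Poll the offload status, equivalent to `pulsar-admin topics offload-status`.
        // In this API version the status is exposed as a public field.
        System.out.println(admin.topics().offloadStatus(topic).status);

        admin.close();
    }
}

```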
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/deploy-aws.md b/site2/website/versioned_docs/version-2.8.1-deprecated/deploy-aws.md
deleted file mode 100644
index 93c389b56e2cf1..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/deploy-aws.md
+++ /dev/null
@@ -1,271 +0,0 @@
---
id: deploy-aws
title: Deploying a Pulsar cluster on AWS using Terraform and Ansible
sidebar_label: "Amazon Web Services"
original_id: deploy-aws
---

> For instructions on deploying a single Pulsar cluster manually rather than using Terraform and Ansible, see [Deploying a Pulsar cluster on bare metal](deploy-bare-metal.md). For instructions on manually deploying a multi-cluster Pulsar instance, see [Deploying a Pulsar instance on bare metal](deploy-bare-metal-multi-cluster.md).

One of the easiest ways to get a Pulsar [cluster](reference-terminology.md#cluster) running on [Amazon Web Services](https://aws.amazon.com/) (AWS) is to use the [Terraform](https://terraform.io) infrastructure provisioning tool and the [Ansible](https://www.ansible.com) server automation tool. Terraform can create the resources necessary for running the Pulsar cluster---[EC2](https://aws.amazon.com/ec2/) instances, networking and security infrastructure, etc.---while Ansible can install and run Pulsar on the provisioned resources.

## Requirements and setup

In order to install a Pulsar cluster on AWS using Terraform and Ansible, you need to prepare the following:

* An [AWS account](https://aws.amazon.com/account/) and the [`aws`](https://aws.amazon.com/cli/) command-line tool
* Python and [pip](https://pip.pypa.io/en/stable/)
* The [`terraform-inventory`](https://github.com/adammck/terraform-inventory) tool, which enables Ansible to use Terraform artifacts

You also need to make sure that you are currently logged into your AWS account via the `aws` tool:

```bash

$ aws configure

```

## Installation

You can install Ansible on Linux or macOS using pip.

```bash

$ pip install ansible

```

You can install Terraform using the instructions [here](https://learn.hashicorp.com/tutorials/terraform/install-cli).

You also need to have the Terraform and Ansible configuration for Pulsar locally on your machine. You can find them in the [GitHub repository](https://github.com/apache/pulsar) of Pulsar, which you can fetch using Git commands:

```bash

$ git clone https://github.com/apache/pulsar
$ cd pulsar/deployment/terraform-ansible/aws

```

## SSH setup

> If you already have an SSH key and want to use it, you can skip the step of generating an SSH key and update the `private_key_file` setting
> in the `ansible.cfg` file and the `public_key_path` setting in the `terraform.tfvars` file.
>
> For example, if you already have a private SSH key in `~/.ssh/pulsar_aws` and a public key in `~/.ssh/pulsar_aws.pub`,
> follow the steps below:
>
> 1. Update `ansible.cfg` with the following values:
>
> ```shell
>
> private_key_file=~/.ssh/pulsar_aws
>
> ```
>
> 2. Update `terraform.tfvars` with the following values:
>
> ```shell
>
> public_key_path=~/.ssh/pulsar_aws.pub
>
> ```


In order to create the necessary AWS resources using Terraform, you need to create an SSH key. Enter the following commands to create a private SSH key in `~/.ssh/id_rsa` and a public key in `~/.ssh/id_rsa.pub`:

```bash

$ ssh-keygen -t rsa

```

Do *not* enter a passphrase (hit **Enter** instead when prompted). Enter the following command to verify that a key has been created:

```bash

$ ls ~/.ssh
id_rsa id_rsa.pub

```

## Create AWS resources using Terraform

To start building AWS resources with Terraform, you need to install all Terraform dependencies. Enter the following command:

```bash

$ terraform init
# This will create a .terraform folder

```

After that, you can apply the default Terraform configuration by entering this command:

```bash

$ terraform apply

```

Then you see the prompt below:

```bash

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value:

```

Type `yes` and hit **Enter**. Applying the configuration could take several minutes. When the apply finishes, you can see `Apply complete!` along with some other information, including the number of resources created.

### Apply a non-default configuration

You can apply a non-default Terraform configuration by changing the values in the `terraform.tfvars` file. The following variables are available (a sample `terraform.tfvars` follows the table):

Variable name | Description | Default
:-------------|:------------|:-------
`public_key_path` | The path of the public key that you have generated. | `~/.ssh/id_rsa.pub`
`region` | The AWS region in which the Pulsar cluster runs | `us-west-2`
`availability_zone` | The AWS availability zone in which the Pulsar cluster runs | `us-west-2a`
`aws_ami` | The [Amazon Machine Image](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html) (AMI) that the cluster uses | `ami-9fa343e7`
`num_zookeeper_nodes` | The number of [ZooKeeper](https://zookeeper.apache.org) nodes in the ZooKeeper cluster | 3
`num_bookie_nodes` | The number of bookies that run in the cluster | 3
`num_broker_nodes` | The number of Pulsar brokers that run in the cluster | 2
`num_proxy_nodes` | The number of Pulsar proxies that run in the cluster | 1
`base_cidr_block` | The root [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) that network assets use for the cluster | `10.0.0.0/16`
`instance_types` | The EC2 instance types to be used. This variable is a map with four keys: `zookeeper` for the ZooKeeper instances, `bookie` for the BookKeeper bookies, and `broker` and `proxy` for Pulsar brokers and proxies | `t2.small` (ZooKeeper), `i3.xlarge` (BookKeeper) and `c5.2xlarge` (Brokers/Proxies)
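
For example, a `terraform.tfvars` that overrides a few of these defaults might look like the following sketch (all values are illustrative; variables you omit keep their defaults):

```shell

public_key_path   = "~/.ssh/pulsar_aws.pub"
region            = "us-west-2"
availability_zone = "us-west-2b"
num_broker_nodes  = 3

```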
### What is installed

When you run the Ansible playbook, the following AWS resources are used:

* 9 total [Elastic Compute Cloud](https://aws.amazon.com/ec2) (EC2) instances running the [ami-9fa343e7](https://access.redhat.com/articles/3135091) Amazon Machine Image (AMI), which runs [Red Hat Enterprise Linux (RHEL) 7.4](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/7.4_release_notes/index). By default, that includes:
  * 3 small VMs for ZooKeeper ([t2.small](https://www.ec2instances.info/?selected=t2.small) instances)
  * 3 larger VMs for BookKeeper [bookies](reference-terminology.md#bookie) ([i3.xlarge](https://www.ec2instances.info/?selected=i3.xlarge) instances)
  * 2 larger VMs for Pulsar [brokers](reference-terminology.md#broker) ([c5.2xlarge](https://www.ec2instances.info/?selected=c5.2xlarge) instances)
  * 1 larger VM for the Pulsar [proxy](reference-terminology.md#proxy) (a [c5.2xlarge](https://www.ec2instances.info/?selected=c5.2xlarge) instance)
* An EC2 [security group](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html)
* A [virtual private cloud](https://aws.amazon.com/vpc/) (VPC) for security
* An [API Gateway](https://aws.amazon.com/api-gateway/) for connections from the outside world
* A [route table](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Route_Tables.html) for the Pulsar cluster's VPC
* A [subnet](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html) for the VPC

All EC2 instances for the cluster run in the [us-west-2](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html) region.

### Fetch your Pulsar connection URL

When you apply the Terraform configuration by entering the command `terraform apply`, Terraform outputs a value for the `pulsar_service_url`. The value should look something like this:

```

pulsar://pulsar-elb-1800761694.us-west-2.elb.amazonaws.com:6650

```

You can fetch that value at any time by entering the command `terraform output pulsar_service_url` or parsing the `terraform.tfstate` file (which is JSON, even though the filename does not reflect that):

```bash

$ cat terraform.tfstate | jq .modules[0].outputs.pulsar_service_url.value

```

### Destroy your cluster

At any point, you can destroy all AWS resources associated with your cluster using Terraform's `destroy` command:

```bash

$ terraform destroy

```

## Setup Disks

Before you run the Pulsar playbook, you need to mount the disks to the correct directories on the bookie nodes. Since different types of machines have different disk layouts, you need to update the task defined in the `setup-disk.yaml` file after changing the `instance_types` in your Terraform config.

To set up disks on bookie nodes, enter this command:

```bash

$ ansible-playbook \
  --user='ec2-user' \
  --inventory=`which terraform-inventory` \
  setup-disk.yaml

```

After that, the disks are mounted under `/mnt/journal` as the journal disk, and `/mnt/storage` as the ledger disk.
Remember to enter this command only once. If you enter this command again after you have run the Pulsar playbook, your disks might be erased, causing the bookies to fail to start up.

## Run the Pulsar playbook

Once you have created the necessary AWS resources using Terraform, you can install and run Pulsar on the Terraform-created EC2 instances using Ansible.

(Optional) If you want to use any [built-in IO connectors](io-connectors.md), edit the `Download Pulsar IO packages` task in the `deploy-pulsar.yaml` file and uncomment the connectors you want to use.

To run the playbook, enter this command:

```bash

$ ansible-playbook \
  --user='ec2-user' \
  --inventory=`which terraform-inventory` \
  ../deploy-pulsar.yaml

```

If you have created a private SSH key at a location different from `~/.ssh/id_rsa`, you can specify the different location using the `--private-key` flag in the following command:

```bash

$ ansible-playbook \
  --user='ec2-user' \
  --inventory=`which terraform-inventory` \
  --private-key="~/.ssh/some-non-default-key" \
  ../deploy-pulsar.yaml

```

## Access the cluster

You can now access your running Pulsar cluster using the unique Pulsar connection URL for your cluster, which you can obtain by following the instructions [above](#fetch-your-pulsar-connection-url).

For a quick demonstration of accessing the cluster, we can use the Python client for Pulsar and the Python shell. First, install the Pulsar Python module using pip:

```bash

$ pip install pulsar-client

```

Now, open up the Python shell using the `python` command:

```bash

$ python

```

Once you are in the shell, enter the following commands (note that the message payload must be bytes):

```python

>>> import pulsar
>>> client = pulsar.Client('pulsar://pulsar-elb-1800761694.us-west-2.elb.amazonaws.com:6650')
# Make sure to use your connection URL
>>> producer = client.create_producer('persistent://public/default/test-topic')
>>> producer.send(b'Hello world')
>>> client.close()

```

If all of these commands are successful, Pulsar clients can now use your cluster!

diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/deploy-bare-metal-multi-cluster.md b/site2/website/versioned_docs/version-2.8.1-deprecated/deploy-bare-metal-multi-cluster.md
deleted file mode 100644
index 1b23eea07a20bf..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/deploy-bare-metal-multi-cluster.md
+++ /dev/null
@@ -1,486 +0,0 @@
---
id: deploy-bare-metal-multi-cluster
title: Deploying a multi-cluster on bare metal
sidebar_label: "Bare metal multi-cluster"
original_id: deploy-bare-metal-multi-cluster
---

:::tip

1. Single-cluster Pulsar installations should be sufficient for all but the most ambitious use cases. If you are interested in experimenting with
Pulsar or using it in a startup or on a single team, it is simplest to opt for a single cluster. For instructions on deploying a single cluster,
see the guide [here](deploy-bare-metal.md).
2. If you want to use all builtin [Pulsar IO](io-overview.md) connectors in your Pulsar deployment, you need to download the `apache-pulsar-io-connectors`
package and install `apache-pulsar-io-connectors` under the `connectors` directory in the pulsar directory on every broker node, or on every function-worker node if you
run a separate cluster of function workers for [Pulsar Functions](functions-overview.md).
3. If you want to use the [Tiered Storage](concepts-tiered-storage.md) feature in your Pulsar deployment, you need to download the `apache-pulsar-offloaders`
package and install `apache-pulsar-offloaders` under the `offloaders` directory in the pulsar directory on every broker node. For more details on how to configure
this feature, you can refer to the [Tiered storage cookbook](cookbooks-tiered-storage.md).

:::

A Pulsar *instance* consists of multiple Pulsar clusters working in unison. You can distribute clusters across data centers or geographical regions and replicate the clusters amongst themselves using [geo-replication](administration-geo.md).
Deploying a multi-cluster Pulsar instance involves the following basic steps: - -* Deploying two separate [ZooKeeper](#deploy-zookeeper) quorums: a [local](#deploy-local-zookeeper) quorum for each cluster in the instance and a [configuration store](#configuration-store) quorum for instance-wide tasks -* Initializing [cluster metadata](#cluster-metadata-initialization) for each cluster -* Deploying a [BookKeeper cluster](#deploy-bookkeeper) of bookies in each Pulsar cluster -* Deploying [brokers](#deploy-brokers) in each Pulsar cluster - -If you want to deploy a single Pulsar cluster, see [Clusters and Brokers](getting-started-standalone.md#start-the-cluster). - -> #### Run Pulsar locally or on Kubernetes? -> This guide shows you how to deploy Pulsar in production in a non-Kubernetes environment. If you want to run a standalone Pulsar cluster on a single machine for development purposes, see the [Setting up a local cluster](getting-started-standalone.md) guide. If you want to run Pulsar on [Kubernetes](https://kubernetes.io), see the [Pulsar on Kubernetes](deploy-kubernetes.md) guide, which includes sections on running Pulsar on Kubernetes on [Google Kubernetes Engine](deploy-kubernetes#pulsar-on-google-kubernetes-engine) and on [Amazon Web Services](deploy-kubernetes#pulsar-on-amazon-web-services). - -## System requirement - -Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install 64-bit JRE/JDK 8 or later versions, JRE/JDK 11 is recommended. - -:::note - -Broker is only supported on 64-bit JVM. - -::: - -## Install Pulsar - -To get started running Pulsar, download a binary tarball release in one of the following ways: - -* by clicking the link below and downloading the release from an Apache mirror: - - * Pulsar @pulsar:version@ binary release - -* from the Pulsar [downloads page](pulsar:download_page_url) -* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) -* using [wget](https://www.gnu.org/software/wget): - - ```shell - - $ wget 'https://www.apache.org/dyn/mirrors/mirrors.cgi?action=download&filename=pulsar/pulsar-@pulsar:version@/apache-pulsar-@pulsar:version@-bin.tar.gz' -O apache-pulsar-@pulsar:version@-bin.tar.gz - - ``` - -Once you download the tarball, untar it and `cd` into the resulting directory: - -```bash - -$ tar xvfz apache-pulsar-@pulsar:version@-bin.tar.gz -$ cd apache-pulsar-@pulsar:version@ - -``` - -## What your package contains - -The Pulsar binary package initially contains the following directories: - -Directory | Contains -:---------|:-------- -`bin` | [Command-line tools](reference-cli-tools.md) of Pulsar, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/) -`conf` | Configuration files for Pulsar, including for [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more -`examples` | A Java JAR file containing example [Pulsar Functions](functions-overview.md) -`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files that Pulsar uses -`licenses` | License files, in `.txt` form, for various components of the Pulsar codebase - -The following directories are created once you begin running Pulsar: - -Directory | Contains -:---------|:-------- -`data` | The data storage directory that ZooKeeper and BookKeeper use -`instances` | Artifacts created for [Pulsar Functions](functions-overview.md) -`logs` | Logs that the 
installation creates


## Deploy ZooKeeper

Each Pulsar instance relies on two separate ZooKeeper quorums.

* [Local ZooKeeper](#deploy-local-zookeeper) operates at the cluster level and provides cluster-specific configuration management and coordination. Each Pulsar cluster needs to have a dedicated ZooKeeper cluster.
* [Configuration Store](#deploy-the-configuration-store) operates at the instance level and provides configuration management for the entire system (and thus across clusters). The configuration store quorum can be provided by an independent cluster of machines or by the same machines that local ZooKeeper uses.


### Deploy local ZooKeeper

ZooKeeper manages a variety of essential coordination-related and configuration-related tasks for Pulsar.

You need to stand up one local ZooKeeper cluster *per Pulsar cluster* for deploying a Pulsar instance.

To begin, add all ZooKeeper servers to the quorum configuration specified in the [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file. Add a `server.N` line for each node in the cluster to the configuration, where `N` is the number of the ZooKeeper node. The following is an example for a three-node cluster:

```properties

server.1=zk1.us-west.example.com:2888:3888
server.2=zk2.us-west.example.com:2888:3888
server.3=zk3.us-west.example.com:2888:3888

```

On each host, you need to specify the ID of the node in the `myid` file of each node, which is in the `data/zookeeper` folder of each server by default (you can change the file location via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter).

> See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed information on `myid` and more.

On a ZooKeeper server at `zk1.us-west.example.com`, for example, you could set the `myid` value like this:

```shell

$ mkdir -p data/zookeeper
$ echo 1 > data/zookeeper/myid

```

On `zk2.us-west.example.com` the command looks like `echo 2 > data/zookeeper/myid` and so on.

Once you add each server to the `zookeeper.conf` configuration and each server has the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:

```shell

$ bin/pulsar-daemon start zookeeper

```

### Deploy the configuration store

The ZooKeeper cluster that is configured and started up in the section above is a *local* ZooKeeper cluster that you can use to manage a single Pulsar cluster. In addition to a local cluster, however, a full Pulsar instance also requires a configuration store for handling some instance-level configuration and coordination tasks.

If you deploy a [single-cluster](#single-cluster-pulsar-instance) instance, you do not need a separate cluster for the configuration store. If, however, you deploy a [multi-cluster](#multi-cluster-pulsar-instance) instance, you should stand up a separate ZooKeeper cluster for configuration tasks.

#### Single-cluster Pulsar instance

If your Pulsar instance consists of just one cluster, then you can deploy a configuration store on the same machines as the local ZooKeeper quorum but running on different TCP ports.

To deploy a ZooKeeper configuration store in a single-cluster instance, add the same ZooKeeper servers that the local quorum uses to the configuration file in [`conf/global_zookeeper.conf`](reference-configuration.md#configuration-store) using the same method as for [local ZooKeeper](#local-zookeeper), but make sure to use a different port (2181 is the default for ZooKeeper). The following is an example that uses port 2184 for a three-node ZooKeeper cluster:

```properties

clientPort=2184
server.1=zk1.us-west.example.com:2185:2186
server.2=zk2.us-west.example.com:2185:2186
server.3=zk3.us-west.example.com:2185:2186

```

As before, create the `myid` files for each server in `data/global-zookeeper/myid`.

#### Multi-cluster Pulsar instance

When you deploy a global Pulsar instance, with clusters distributed across different geographical regions, the configuration store serves as a highly available and strongly consistent metadata store that can tolerate failures and partitions spanning whole regions.

The key here is to make sure the ZK quorum members are spread across at least 3 regions and that the other regions run as observers.

Again, given the very low expected load on the configuration store servers, you can
share the same hosts used for the local ZooKeeper quorum.

For example, assume a Pulsar instance with the following clusters: `us-west`,
`us-east`, `us-central`, `eu-central`, `ap-south`. Also assume that each cluster has its own local ZK servers named as follows:

```

zk[1-3].${CLUSTER}.example.com

```

In this scenario, pick the quorum participants from a few clusters and
let all the others be ZK observers. For example, to form a 7-server quorum, you can pick 3 servers from `us-west`, 2 from `us-central` and 2 from `us-east`.

This method guarantees that writes to the configuration store are possible even if one of these regions is unreachable.

The ZK configuration in all the servers looks like:

```properties

clientPort=2184
server.1=zk1.us-west.example.com:2185:2186
server.2=zk2.us-west.example.com:2185:2186
server.3=zk3.us-west.example.com:2185:2186
server.4=zk1.us-central.example.com:2185:2186
server.5=zk2.us-central.example.com:2185:2186
server.6=zk3.us-central.example.com:2185:2186:observer
server.7=zk1.us-east.example.com:2185:2186
server.8=zk2.us-east.example.com:2185:2186
server.9=zk3.us-east.example.com:2185:2186:observer
server.10=zk1.eu-central.example.com:2185:2186:observer
server.11=zk2.eu-central.example.com:2185:2186:observer
server.12=zk3.eu-central.example.com:2185:2186:observer
server.13=zk1.ap-south.example.com:2185:2186:observer
server.14=zk2.ap-south.example.com:2185:2186:observer
server.15=zk3.ap-south.example.com:2185:2186:observer

```

Additionally, ZK observers need to have the following parameter:

```properties

peerType=observer

```

##### Start the service

Once your configuration store configuration is in place, you can start up the service using [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon):

```shell

$ bin/pulsar-daemon start configuration-store

```

## Cluster metadata initialization

Once you set up the cluster-specific ZooKeeper and configuration store quorums for your instance, you need to write some metadata to ZooKeeper for each cluster in your instance. **You only need to write this metadata once.**

You can initialize this metadata using the [`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata) command of the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool. The following is an example:

```shell

$ bin/pulsar initialize-cluster-metadata \
  --cluster us-west \
  --zookeeper zk1.us-west.example.com:2181 \
  --configuration-store zk1.us-west.example.com:2184 \
  --web-service-url http://pulsar.us-west.example.com:8080/ \
  --web-service-url-tls https://pulsar.us-west.example.com:8443/ \
  --broker-service-url pulsar://pulsar.us-west.example.com:6650/ \
  --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651/

```

As you can see from the example above, you need to specify the following:

* The name of the cluster
* The local ZooKeeper connection string for the cluster
* The configuration store connection string for the entire instance
* The web service URL for the cluster
* A broker service URL enabling interaction with the [brokers](reference-terminology.md#broker) in the cluster

If you use [TLS](security-tls-transport.md), you also need to specify a TLS web service URL for the cluster as well as a TLS broker service URL for the brokers in the cluster.

Make sure to run `initialize-cluster-metadata` for each cluster in your instance.

## Deploy BookKeeper

BookKeeper provides [persistent message storage](concepts-architecture-overview.md#persistent-storage) for Pulsar.

Each Pulsar broker needs to have its own cluster of bookies. The BookKeeper cluster shares a local ZooKeeper quorum with the Pulsar cluster.

### Configure bookies

You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. The most important aspect of configuring each bookie is ensuring that the [`zkServers`](reference-configuration.md#bookkeeper-zkServers) parameter is set to the connection string for the Pulsar cluster's local ZooKeeper.

### Start bookies

You can start a bookie in two ways: in the foreground or as a background daemon.

To start a bookie in the background, use the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:

```bash

$ bin/pulsar-daemon start bookie

```

You can verify that the bookie works properly using the `bookiesanity` command for the [BookKeeper shell](reference-cli-tools.md#bookkeeper-shell):

```shell

$ bin/bookkeeper shell bookiesanity

```

This command creates a new ledger on the local bookie, writes a few entries, reads them back and finally deletes the ledger.

After you have started all bookies, you can use the `simpletest` command for the [BookKeeper shell](reference-cli-tools.md#shell) on any bookie node, to verify that all bookies in the cluster are running (the placeholders are the ensemble size, write quorum, ack quorum, and number of entries to write):

```bash

$ bin/bookkeeper shell simpletest --ensemble <num-bookies> --writeQuorum <num-bookies> --ackQuorum <num-bookies> --numEntries <num-entries>

```

Bookie hosts are responsible for storing message data on disk. For bookies to provide optimal performance, a suitable hardware configuration is essential. The following are key dimensions of bookie hardware capacity:

* Disk I/O capacity read/write
* Storage capacity

Message entries written to bookies are always synced to disk before returning an acknowledgement to the Pulsar broker. To ensure low write latency, BookKeeper is
designed to use multiple devices:

* A **journal** to ensure durability.
For sequential writes, having fast [fsync](https://linux.die.net/man/2/fsync) operations on bookie hosts is critical. Typically, small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) should suffice, or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache. Both solutions can reach fsync latency of ~0.4 ms.
* A **ledger storage device** is where data is stored until all consumers acknowledge the message. Writes happen in the background, so write I/O is not a big concern. Reads happen sequentially most of the time, and the backlog is read back only when a consumer drains it. To store large amounts of data, a typical configuration involves multiple HDDs with a RAID controller.



## Deploy brokers

Once you set up ZooKeeper, initialize cluster metadata, and spin up BookKeeper bookies, you can deploy brokers.

### Broker configuration

You can configure brokers using the [`conf/broker.conf`](reference-configuration.md#broker) configuration file.

The most important element of broker configuration is ensuring that each broker is aware of its local ZooKeeper quorum as well as the configuration store quorum. Make sure that you set the [`zookeeperServers`](reference-configuration.md#broker-zookeeperServers) parameter to reflect the local quorum and the [`configurationStoreServers`](reference-configuration.md#broker-configurationStoreServers) parameter to reflect the configuration store quorum (although you need to specify only those ZooKeeper servers located in the same cluster).

You also need to specify the name of the [cluster](reference-terminology.md#cluster) to which the broker belongs using the [`clusterName`](reference-configuration.md#broker-clusterName) parameter. In addition, you need to match the broker and web service ports provided when you initialized the cluster's metadata (especially when you use a different port from the default).

The following is an example configuration:

```properties

# Local ZooKeeper servers
zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181

# Configuration store quorum connection string.
configurationStoreServers=zk1.us-west.example.com:2184,zk2.us-west.example.com:2184,zk3.us-west.example.com:2184

clusterName=us-west

# Broker data port
brokerServicePort=6650

# Broker data port for TLS
brokerServicePortTls=6651

# Port to use to serve HTTP requests
webServicePort=8080

# Port to use to serve HTTPS requests
webServicePortTls=8443

```

### Broker hardware

Pulsar brokers do not require any special hardware since they do not use the local disk. Choose fast CPUs and a 10Gbps [NIC](https://en.wikipedia.org/wiki/Network_interface_controller) so that the software can take full advantage of them.

### Start the broker service

You can start a broker in the background by using [nohup](https://en.wikipedia.org/wiki/Nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:

```shell

$ bin/pulsar-daemon start broker

```

You can also start brokers in the foreground by using [`pulsar broker`](reference-cli-tools.md#broker):

```shell

$ bin/pulsar broker

```

## Service discovery

[Clients](getting-started-clients.md) connecting to Pulsar brokers need to be able to communicate with an entire Pulsar instance using a single URL.
Pulsar provides a built-in service discovery mechanism that you can set up using the instructions [immediately below](#service-discovery-setup).

You can also use your own service discovery system if you want. If you use your own system, you only need to satisfy one requirement: when a client performs an HTTP request to an [endpoint](reference-configuration.md) for a Pulsar cluster, such as `http://pulsar.us-west.example.com:8080`, the client needs to be redirected to *some* active broker in the desired cluster, whether via DNS, an HTTP or IP redirect, or some other means.

> #### Service discovery already provided by many scheduling systems
> Many large-scale deployment systems, such as [Kubernetes](deploy-kubernetes.md), have service discovery systems built in. If you run Pulsar on such a system, you may not need to provide your own service discovery mechanism.


### Service discovery setup

The service discovery mechanism included with Pulsar maintains a list of active brokers, stored in ZooKeeper, and supports lookup using HTTP as well as Pulsar's [binary protocol](developing-binary-protocol.md).

To get started setting up Pulsar's built-in service discovery, you need to change a few parameters in the [`conf/discovery.conf`](reference-configuration.md#service-discovery) configuration file. Set the [`zookeeperServers`](reference-configuration.md#service-discovery-zookeeperServers) parameter to the ZooKeeper quorum connection string of the cluster and the [`configurationStoreServers`](reference-configuration.md#service-discovery-configurationStoreServers) setting to the [configuration
store](reference-terminology.md#configuration-store) quorum connection string.

```properties

# Zookeeper quorum connection string
zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181

# Global configuration store connection string
configurationStoreServers=zk1.us-west.example.com:2184,zk2.us-west.example.com:2184,zk3.us-west.example.com:2184

```

To start the discovery service:

```shell

$ bin/pulsar-daemon start discovery

```

## Admin client and verification

At this point your Pulsar instance should be ready to use. You can now configure client machines that can serve as [administrative clients](admin-api-overview.md) for each cluster. You can use the [`conf/client.conf`](reference-configuration.md#client) configuration file to configure admin clients.

The most important thing is that you point the [`serviceUrl`](reference-configuration.md#client-serviceUrl) parameter to the correct service URL for the cluster:

```properties

serviceUrl=http://pulsar.us-west.example.com:8080/

```
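
As a quick programmatic check that the instance is reachable, the following sketch uses the Java admin client against the service URL configured above (the URL is illustrative, and error handling is omitted):

```java

import org.apache.pulsar.client.admin.PulsarAdmin;

public class VerifyCluster {
    public static void main(String[] args) throws Exception {
        PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://pulsar.us-west.example.com:8080/") // the serviceUrl from client.conf
                .build();

        // Listing the clusters known to the instance confirms that the web
        // service and the configuration store are wired up correctly.
        System.out.println(admin.clusters().getClusters());

        admin.close();
    }
}

```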

## Provision new tenants

Pulsar is built as a fundamentally multi-tenant system.


If a new tenant wants to use the system, you need to create that tenant. You can create a new tenant by using the [`pulsar-admin`](reference-pulsar-admin.md#tenants) CLI tool:

```shell

$ bin/pulsar-admin tenants create test-tenant \
  --allowed-clusters us-west \
  --admin-roles test-admin-role

```

In this command, users identified with the `test-admin-role` role can administer the configuration for the `test-tenant` tenant. The `test-tenant` tenant can only use the `us-west` cluster. From now on, this tenant can manage its resources.

Once you create a tenant, you need to create [namespaces](reference-terminology.md#namespace) for topics within that tenant.


The first step is to create a namespace. A namespace is an administrative unit that can contain many topics. A common practice is to create a namespace for each different use case from a single tenant.

```shell

$ bin/pulsar-admin namespaces create test-tenant/ns1

```

##### Test producer and consumer


Everything is now ready to send and receive messages. The quickest way to test the system is through the [`pulsar-perf`](reference-cli-tools.md#pulsar-perf) client tool.


You can use a topic in the namespace that you have just created. Topics are automatically created the first time a producer or a consumer tries to use them.

The topic name in this case could be:

```http

persistent://test-tenant/ns1/my-topic

```

Start a consumer that creates a subscription on the topic and waits for messages:

```shell

$ bin/pulsar-perf consume persistent://test-tenant/ns1/my-topic

```

Start a producer that publishes messages at a fixed rate and reports stats every 10 seconds:

```shell

$ bin/pulsar-perf produce persistent://test-tenant/ns1/my-topic

```

To report the topic stats:

```shell

$ bin/pulsar-admin topics stats persistent://test-tenant/ns1/my-topic

```

diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/deploy-bare-metal.md b/site2/website/versioned_docs/version-2.8.1-deprecated/deploy-bare-metal.md
deleted file mode 100644
index bdd05f24f2566d..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/deploy-bare-metal.md
+++ /dev/null
@@ -1,541 +0,0 @@
---
id: deploy-bare-metal
title: Deploy a cluster on bare metal
sidebar_label: "Bare metal"
original_id: deploy-bare-metal
---

:::tip

1. Single-cluster Pulsar installations should be sufficient for all but the most ambitious use cases. If you are interested in experimenting with
Pulsar or using Pulsar in a startup or on a single team, it is simplest to opt for a single cluster. If you do need to run a multi-cluster Pulsar instance,
see the guide [here](deploy-bare-metal-multi-cluster.md).
2. If you want to use all builtin [Pulsar IO](io-overview.md) connectors in your Pulsar deployment, you need to download the `apache-pulsar-io-connectors`
package and install `apache-pulsar-io-connectors` under the `connectors` directory in the pulsar directory on every broker node, or on every function-worker node if you
run a separate cluster of function workers for [Pulsar Functions](functions-overview.md).
3. If you want to use the [Tiered Storage](concepts-tiered-storage.md) feature in your Pulsar deployment, you need to download the `apache-pulsar-offloaders`
package and install `apache-pulsar-offloaders` under the `offloaders` directory in the pulsar directory on every broker node. For more details on how to configure
this feature, you can refer to the [Tiered storage cookbook](cookbooks-tiered-storage.md).

:::

Deploying a Pulsar cluster involves doing the following (in order):

* Deploy a [ZooKeeper](#deploy-a-zookeeper-cluster) cluster (optional)
* Initialize [cluster metadata](#initialize-cluster-metadata)
* Deploy a [BookKeeper](#deploy-a-bookkeeper-cluster) cluster
* Deploy one or more Pulsar [brokers](#deploy-pulsar-brokers)

## Preparation

### Requirements

Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install 64-bit JRE/JDK 8 or later versions; JRE/JDK 11 is recommended.
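
You can confirm which Java version is on a machine's path with:

```bash

$ java -version

```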

> If you already have an existing ZooKeeper cluster and want to reuse it, you do not need to prepare the machines
> for running ZooKeeper.

To run Pulsar on bare metal, the following configuration is recommended:

* At least 6 Linux machines or VMs
  * 3 for running [ZooKeeper](https://zookeeper.apache.org)
  * 3 for running a Pulsar broker, and a [BookKeeper](https://bookkeeper.apache.org) bookie
* A single [DNS](https://en.wikipedia.org/wiki/Domain_Name_System) name covering all of the Pulsar broker hosts

:::note

Broker is only supported on 64-bit JVM.

:::

> If you do not have enough machines, or want to try out Pulsar in cluster mode (and expand the cluster later),
> you can deploy a full Pulsar configuration on one node, where ZooKeeper, a bookie, and a broker run on the same machine.

> If you do not have a DNS server, you can use the multi-host format in the service URL instead.

Each machine in your cluster needs to have [Java 8](https://adoptium.net/?variant=openjdk8) or [Java 11](https://adoptium.net/?variant=openjdk11) installed.

The following is a diagram showing the basic setup:

![alt-text](/assets/pulsar-basic-setup.png)

In this diagram, connecting clients need to be able to communicate with the Pulsar cluster using a single URL. In this case, `pulsar-cluster.acme.com` abstracts over all of the message-handling brokers. Pulsar message brokers run on machines alongside BookKeeper bookies; brokers and bookies, in turn, rely on ZooKeeper.

### Hardware considerations

When you deploy a Pulsar cluster, keep in mind the following basic considerations when you do the capacity planning.

#### ZooKeeper

For machines running ZooKeeper, it is recommended to use less powerful machines or VMs. Pulsar uses ZooKeeper only for periodic coordination-related and configuration-related tasks, *not* for basic operations. If you run Pulsar on [Amazon Web Services](https://aws.amazon.com/) (AWS), for example, a [t2.small](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html) instance would likely suffice.

#### Bookies and Brokers

For machines running a bookie and a Pulsar broker, more powerful machines are required. For an AWS deployment, for example, [i3.4xlarge](https://aws.amazon.com/blogs/aws/now-available-i3-instances-for-demanding-io-intensive-applications/) instances may be appropriate. On those machines you can use the following:

* Fast CPUs and 10Gbps [NIC](https://en.wikipedia.org/wiki/Network_interface_controller) (for Pulsar brokers)
* Small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache (for BookKeeper bookies)

## Install the Pulsar binary package

> You need to install the Pulsar binary package on *each machine in the cluster*, including machines running [ZooKeeper](#deploy-a-zookeeper-cluster) and [BookKeeper](#deploy-a-bookkeeper-cluster).
- -To get started deploying a Pulsar cluster on bare metal, you need to download a binary tarball release in one of the following ways: - -* By clicking on the link below directly, which automatically triggers a download: - * Pulsar @pulsar:version@ binary release -* From the Pulsar [downloads page](pulsar:download_page_url) -* From the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) on [GitHub](https://github.com) -* Using [wget](https://www.gnu.org/software/wget): - -```bash - -$ wget pulsar:binary_release_url - -``` - -Once you download the tarball, untar it and `cd` into the resulting directory: - -```bash - -$ tar xvzf apache-pulsar-@pulsar:version@-bin.tar.gz -$ cd apache-pulsar-@pulsar:version@ - -``` - -The extracted directory contains the following subdirectories: - -Directory | Contains -:---------|:-------- -`bin` |[command-line tools](reference-cli-tools.md) of Pulsar, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/) -`conf` | Configuration files for Pulsar, including for [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more -`data` | The data storage directory that ZooKeeper and BookKeeper use -`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files that Pulsar uses -`logs` | Logs that the installation creates - -## [Install Builtin Connectors (optional)]( https://pulsar.apache.org/docs/en/next/standalone/#install-builtin-connectors-optional) - -> Since Pulsar release `2.1.0-incubating`, Pulsar provides a separate binary distribution, containing all the `builtin` connectors. -> If you want to enable those `builtin` connectors, you can follow the instructions as below; otherwise you can -> skip this section for now. - -To get started using builtin connectors, you need to download the connectors tarball release on every broker node in one of the following ways: - -* by clicking the link below and downloading the release from an Apache mirror: - - * Pulsar IO Connectors @pulsar:version@ release - -* from the Pulsar [downloads page](pulsar:download_page_url) -* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) -* using [wget](https://www.gnu.org/software/wget): - - ```shell - - $ wget pulsar:connector_release_url/{connector}-@pulsar:version@.nar - - ``` - -Once you download the .nar file, copy the file to directory `connectors` in the pulsar directory. -For example, if you download the connector file `pulsar-io-aerospike-@pulsar:version@.nar`: - -```bash - -$ mkdir connectors -$ mv pulsar-io-aerospike-@pulsar:version@.nar connectors - -$ ls connectors -pulsar-io-aerospike-@pulsar:version@.nar -... - -``` - -## [Install Tiered Storage Offloaders (optional)](https://pulsar.apache.org/docs/en/next/standalone/#install-tiered-storage-offloaders-optional) - -> Since Pulsar release `2.2.0`, Pulsar releases a separate binary distribution, containing the tiered storage offloaders. -> If you want to enable tiered storage feature, you can follow the instructions as below; otherwise you can -> skip this section for now. 

To get started using tiered storage offloaders, you need to download the offloaders tarball release on every broker node in one of the following ways:

* by clicking the link below and downloading the release from an Apache mirror:

  * Pulsar Tiered Storage Offloaders @pulsar:version@ release

* from the Pulsar [downloads page](pulsar:download_page_url)
* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
* using [wget](https://www.gnu.org/software/wget):

  ```shell

  $ wget pulsar:offloader_release_url

  ```

Once you download the tarball, untar the offloaders package in the pulsar directory and copy the extracted offloaders to a directory named `offloaders` in the pulsar directory:

```bash

$ tar xvfz apache-pulsar-offloaders-@pulsar:version@-bin.tar.gz

// you can find a directory named `apache-pulsar-offloaders-@pulsar:version@` in the pulsar directory
// then copy the offloaders

$ mv apache-pulsar-offloaders-@pulsar:version@/offloaders offloaders

$ ls offloaders
tiered-storage-jcloud-@pulsar:version@.nar

```

For more details on how to configure the tiered storage feature, you can refer to the [Tiered storage cookbook](cookbooks-tiered-storage.md).


## Deploy a ZooKeeper cluster

> If you already have an existing ZooKeeper cluster and want to use it, you can skip this section.

[ZooKeeper](https://zookeeper.apache.org) manages a variety of essential coordination- and configuration-related tasks for Pulsar. To deploy a Pulsar cluster, you need to deploy ZooKeeper first (before all other components). A 3-node ZooKeeper cluster is the recommended configuration. Pulsar does not make heavy use of ZooKeeper, so more lightweight machines or VMs should suffice for running ZooKeeper.

To begin, add all ZooKeeper servers to the configuration specified in [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) (in the Pulsar directory that you created [above](#install-the-pulsar-binary-package)). The following is an example:

```properties

server.1=zk1.us-west.example.com:2888:3888
server.2=zk2.us-west.example.com:2888:3888
server.3=zk3.us-west.example.com:2888:3888

```

> If you only have one machine on which to deploy Pulsar, you only need to add one server entry in the configuration file.

On each host, you need to specify the ID of the node in the `myid` file, which is in the `data/zookeeper` folder of each server by default (you can change the file location via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter).

> See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed information on `myid` and more.

For example, on a ZooKeeper server like `zk1.us-west.example.com`, you can set the `myid` value as follows:

```bash

$ mkdir -p data/zookeeper
$ echo 1 > data/zookeeper/myid

```

On `zk2.us-west.example.com`, the command is `echo 2 > data/zookeeper/myid` and so on.

Once you add each server to the `zookeeper.conf` configuration and have the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:

```bash

$ bin/pulsar-daemon start zookeeper

```

> If you plan to deploy ZooKeeper and a bookie on the same node, you need to start ZooKeeper with a different stats
> port by configuring `metricsProvider.httpPort` in zookeeper.conf.
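
Before moving on to metadata initialization, you can sanity-check that each ZooKeeper node is serving requests, for example with ZooKeeper's `srvr` four-letter-word command. This sketch assumes `nc` is installed and, on newer ZooKeeper versions, that `srvr` has been enabled via `4lw.commands.whitelist`:

```bash

$ echo srvr | nc zk1.us-west.example.com 2181

```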
## Initialize cluster metadata

Once you deploy ZooKeeper for your cluster, you need to write some metadata to ZooKeeper. You only need to write this data **once**.

You can initialize this metadata using the [`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata) command of the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool. This command can be run on any machine in your Pulsar cluster, so the metadata can be initialized from a ZooKeeper, broker, or bookie machine. The following is an example:

```shell
$ bin/pulsar initialize-cluster-metadata \
  --cluster pulsar-cluster-1 \
  --zookeeper zk1.us-west.example.com:2181 \
  --configuration-store zk1.us-west.example.com:2181 \
  --web-service-url http://pulsar.us-west.example.com:8080 \
  --web-service-url-tls https://pulsar.us-west.example.com:8443 \
  --broker-service-url pulsar://pulsar.us-west.example.com:6650 \
  --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651
```

As you can see from the example above, you need to specify the following:

Flag | Description
:----|:-----------
`--cluster` | A name for the cluster
`--zookeeper` | A "local" ZooKeeper connection string for the cluster. This connection string only needs to include *one* machine in the ZooKeeper cluster.
`--configuration-store` | The configuration store connection string for the entire instance. As with the `--zookeeper` flag, this connection string only needs to include *one* machine in the ZooKeeper cluster.
`--web-service-url` | The web service URL for the cluster, plus a port. This URL should be a standard DNS name. The default port is 8080, and using a different port is not recommended.
`--web-service-url-tls` | If you use [TLS](security-tls-transport.md), you also need to specify a TLS web service URL for the cluster. The default port is 8443, and using a different port is not recommended.
`--broker-service-url` | A broker service URL enabling interaction with the brokers in the cluster. This URL can use the same DNS name as the web service URL but must use the `pulsar` scheme instead, as in the example above. The default port is 6650, and using a different port is not recommended.
`--broker-service-url-tls` | If you use [TLS](security-tls-transport.md), you also need to specify a TLS web service URL for the cluster as well as a TLS broker service URL for the brokers in the cluster. The default port is 6651, and using a different port is not recommended.
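Once the brokers are up (they are deployed later in this guide), you can sanity-check the initialized metadata with `pulsar-admin`; a minimal sketch, assuming `conf/client.conf` points at the cluster's web service URL:

```bash
# List the clusters registered in the instance, then show the service URLs
# stored for the cluster created above.
bin/pulsar-admin clusters list
bin/pulsar-admin clusters get pulsar-cluster-1
```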
> If you do not have a DNS server, you can use the multi-host format in the service URL with the following settings:
>
> ```properties
> --web-service-url http://host1:8080,host2:8080,host3:8080 \
> --web-service-url-tls https://host1:8443,host2:8443,host3:8443 \
> --broker-service-url pulsar://host1:6650,host2:6650,host3:6650 \
> --broker-service-url-tls pulsar+ssl://host1:6651,host2:6651,host3:6651
> ```

> If you want to use an existing BookKeeper cluster, you can add the `--existing-bk-metadata-service-uri` flag as follows:
>
> ```properties
> --existing-bk-metadata-service-uri "zk+null://zk1:2181;zk2:2181/ledgers" \
> --web-service-url http://host1:8080,host2:8080,host3:8080 \
> --web-service-url-tls https://host1:8443,host2:8443,host3:8443 \
> --broker-service-url pulsar://host1:6650,host2:6650,host3:6650 \
> --broker-service-url-tls pulsar+ssl://host1:6651,host2:6651,host3:6651
> ```
>
> You can obtain the metadata service URI of the existing BookKeeper cluster by using the `bin/bookkeeper shell whatisinstanceid` command. You must enclose the value in double quotes since multiple metadata service URIs are separated with semicolons.

## Deploy a BookKeeper cluster

[BookKeeper](https://bookkeeper.apache.org) handles all persistent data storage in Pulsar. You need to deploy a cluster of BookKeeper bookies to use Pulsar. A good choice is to run a **3-bookie BookKeeper cluster**.

You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. The most important step in configuring bookies for our purposes here is ensuring that [`zkServers`](reference-configuration.md#bookkeeper-zkServers) is set to the connection string for the ZooKeeper cluster. The following is an example:

```properties
zkServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
```

Once you appropriately modify the `zkServers` parameter, you can make any other configuration changes that you require. You can find a full listing of the available BookKeeper configuration parameters [here](reference-configuration.md#bookkeeper); for a more in-depth guide, consult the [BookKeeper documentation](http://bookkeeper.apache.org/docs/latest/reference/config/).

Once you apply the desired configuration in `conf/bookkeeper.conf`, you can start up a bookie on each of your BookKeeper hosts. You can start up each bookie either in the background, using [nohup](https://en.wikipedia.org/wiki/Nohup), or in the foreground.

To start the bookie in the background, use the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:

```bash
$ bin/pulsar-daemon start bookie
```

To start the bookie in the foreground:

```bash
$ bin/pulsar bookie
```

You can verify that a bookie works properly by running the `bookiesanity` command on the [BookKeeper shell](reference-cli-tools.md#shell):

```bash
$ bin/bookkeeper shell bookiesanity
```

This command creates an ephemeral BookKeeper ledger on the local bookie, writes a few entries, reads them back, and finally deletes the ledger.

After you start all the bookies, you can run the `simpletest` command of the [BookKeeper shell](reference-cli-tools.md#shell) on any bookie node to verify that all the bookies in the cluster are up and running.
```bash
$ bin/bookkeeper shell simpletest --ensemble <num-bookies> --writeQuorum <num-bookies> --ackQuorum <num-bookies> --numEntries <num-entries>
```

This command creates a ledger with an ensemble of `<num-bookies>` bookies on the cluster, writes a few entries, and finally deletes the ledger.

## Deploy Pulsar brokers

Pulsar brokers are the last thing you need to deploy in your Pulsar cluster. Brokers handle Pulsar messages and provide the administrative interface of Pulsar. A good choice is to run **3 brokers**, one for each machine that already runs a BookKeeper bookie.

### Configure Brokers

The most important element of broker configuration is ensuring that each broker is aware of the ZooKeeper cluster that you have deployed. Ensure that the [`zookeeperServers`](reference-configuration.md#broker-zookeeperServers) and [`configurationStoreServers`](reference-configuration.md#broker-configurationStoreServers) parameters are correct. In this case, since you only have one cluster and no separate configuration store setup, the `configurationStoreServers` parameter points to the same `zookeeperServers`.

```properties
zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
configurationStoreServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
```

You also need to specify the cluster name (matching the name that you provided when you [initialized the metadata of the cluster](#initialize-cluster-metadata)):

```properties
clusterName=pulsar-cluster-1
```

In addition, you need to match the broker and web service ports provided when you initialized the metadata of the cluster (especially if you used ports other than the defaults):

```properties
brokerServicePort=6650
brokerServicePortTls=6651
webServicePort=8080
webServicePortTls=8443
```

> If you deploy Pulsar in a one-node cluster, you should update the replication settings in `conf/broker.conf` to `1`.
>
> ```properties
> # Number of bookies to use when creating a ledger
> managedLedgerDefaultEnsembleSize=1
>
> # Number of copies to store for each message
> managedLedgerDefaultWriteQuorum=1
>
> # Number of guaranteed copies (acks to wait before write is complete)
> managedLedgerDefaultAckQuorum=1
> ```

### Enable Pulsar Functions (optional)

If you want to enable [Pulsar Functions](functions-overview.md), follow the instructions below:

1. Edit `conf/broker.conf` to enable the functions worker by setting `functionsWorkerEnabled` to `true`.

   ```conf
   functionsWorkerEnabled=true
   ```

2. Edit `conf/functions_worker.yml` and set `pulsarFunctionsCluster` to the cluster name that you provided when you [initialized the metadata of the cluster](#initialize-cluster-metadata).

   ```conf
   pulsarFunctionsCluster: pulsar-cluster-1
   ```

For more options for deploying the functions worker, see [Deploy and manage functions worker](functions-worker.md).

### Start Brokers

You can then make any other configuration changes that you want in the [`conf/broker.conf`](reference-configuration.md#broker) file. Once you decide on a configuration, you can start up the brokers for your Pulsar cluster. Like ZooKeeper and BookKeeper, you can start brokers either in the foreground or in the background, using nohup.
You can start a broker in the foreground using the [`pulsar broker`](reference-cli-tools.md#pulsar-broker) command:

```bash
$ bin/pulsar broker
```

You can start a broker in the background using the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:

```bash
$ bin/pulsar-daemon start broker
```

Once you successfully start up all the brokers that you intend to use, your Pulsar cluster should be ready to go!

## Connect to the running cluster

Once your Pulsar cluster is up and running, you should be able to connect to it using Pulsar clients. One such client is the [`pulsar-client`](reference-cli-tools.md#pulsar-client) tool, which is included with the Pulsar binary package. The `pulsar-client` tool can publish messages to and consume messages from Pulsar topics, and thus provides a simple way to make sure that your cluster runs properly.

To use the `pulsar-client` tool, first modify the client configuration file in [`conf/client.conf`](reference-configuration.md#client) in your binary package. You need to change the values for `webServiceUrl` and `brokerServiceUrl`, substituting `localhost` (the default) with the DNS name that you assign to your broker/bookie hosts. The following is an example:

```properties
webServiceUrl=http://us-west.example.com:8080
brokerServiceUrl=pulsar://us-west.example.com:6650
```

> If you do not have a DNS server, you can specify multiple hosts in the service URL as follows:
>
> ```properties
> webServiceUrl=http://host1:8080,host2:8080,host3:8080
> brokerServiceUrl=pulsar://host1:6650,host2:6650,host3:6650
> ```

Once that is complete, you can publish a message to the Pulsar topic:

```bash
$ bin/pulsar-client produce \
  persistent://public/default/test \
  -n 1 \
  -m "Hello Pulsar"
```

> You may need to use a different cluster name in the topic if you specify a cluster name other than `pulsar-cluster-1`.

This command publishes a single message to the Pulsar topic. In addition, you can subscribe to the Pulsar topic in a different terminal before publishing messages, as below:

```bash
$ bin/pulsar-client consume \
  persistent://public/default/test \
  -n 100 \
  -s "consumer-test" \
  -t "Exclusive"
```

Once you successfully publish the above message to the topic, you should see it in the standard output:

```bash
----- got message -----
Hello Pulsar
```

## Run Functions

> If you have [enabled](#enable-pulsar-functions-optional) Pulsar Functions, you can try out Pulsar Functions now.

Create an ExclamationFunction `exclamation`.

```bash
bin/pulsar-admin functions create \
  --jar examples/api-examples.jar \
  --classname org.apache.pulsar.functions.api.examples.ExclamationFunction \
  --inputs persistent://public/default/exclamation-input \
  --output persistent://public/default/exclamation-output \
  --tenant public \
  --namespace default \
  --name exclamation
```

Check whether the function runs as expected by [triggering](functions-deploying.md#triggering-pulsar-functions) the function.

```bash
bin/pulsar-admin functions trigger --name exclamation --trigger-value "hello world"
```

You should see the following output:

```shell
hello world!
```

diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/deploy-dcos.md b/site2/website/versioned_docs/version-2.8.1-deprecated/deploy-dcos.md
deleted file mode 100644
index 952d5f47e30fa7..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/deploy-dcos.md
+++ /dev/null
@@ -1,200 +0,0 @@
---
id: deploy-dcos
title: Deploy Pulsar on DC/OS
sidebar_label: "DC/OS"
original_id: deploy-dcos
---

:::tip

If you want to enable all builtin [Pulsar IO](io-overview.md) connectors in your Pulsar deployment, you can choose to use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. The `apachepulsar/pulsar-all` image has already bundled [all builtin connectors](io-overview.md#working-with-connectors).

:::

[DC/OS](https://dcos.io/) (the DataCenter Operating System) is a distributed operating system used for deploying and managing applications and systems on [Apache Mesos](http://mesos.apache.org/). DC/OS is an open-source tool created and maintained by [Mesosphere](https://mesosphere.com/).

Apache Pulsar is available as a [Marathon Application Group](https://mesosphere.github.io/marathon/docs/application-groups.html), which runs multiple applications as manageable sets.

## Prerequisites

To run Pulsar on DC/OS, you need the following:

* DC/OS version [1.9](https://docs.mesosphere.com/1.9/) or higher
* A [DC/OS cluster](https://docs.mesosphere.com/1.9/installing/) with at least three agent nodes
* The [DC/OS CLI tool](https://docs.mesosphere.com/1.9/cli/install/) installed
* The [`PulsarGroups.json`](https://github.com/apache/pulsar/blob/master/deployment/dcos/PulsarGroups.json) configuration file from the Pulsar GitHub repo.

  ```bash
  $ curl -O https://raw.githubusercontent.com/apache/pulsar/master/deployment/dcos/PulsarGroups.json
  ```

Each node in the DC/OS-managed Mesos cluster must have at least:

* 4 CPU
* 4 GB of memory
* 60 GB of total persistent disk

Alternatively, you can change the configuration in `PulsarGroups.json` to match the resources of your DC/OS cluster.

## Deploy Pulsar using the DC/OS command interface

You can deploy Pulsar on DC/OS using this command:

```bash
$ dcos marathon group add PulsarGroups.json
```

This command deploys Docker container instances in three groups, which together comprise a Pulsar cluster:

* 3 bookies (1 [bookie](reference-terminology.md#bookie) on each agent node and 1 [bookie recovery](http://bookkeeper.apache.org/docs/latest/admin/autorecovery/) instance)
* 3 Pulsar [brokers](reference-terminology.md#broker) (1 broker on each node and 1 admin instance)
* 1 [Prometheus](http://prometheus.io/) instance and 1 [Grafana](https://grafana.com/) instance

> When you run DC/OS, a ZooKeeper cluster already runs at `master.mesos:2181`, so you do not have to install or start up ZooKeeper separately.

After executing the `dcos` command above, click on the **Services** tab in the DC/OS [GUI interface](https://docs.mesosphere.com/latest/gui/), which you can access at [http://m1.dcos](http://m1.dcos) in this example. You should see several applications in the process of deploying.

![DC/OS command executed](/assets/dcos_command_execute.png)

![DC/OS command executed2](/assets/dcos_command_execute2.png)

## The BookKeeper group

To monitor the status of the BookKeeper cluster deployment, click on the **bookkeeper** group in the parent **pulsar** group.
![DC/OS bookkeeper status](/assets/dcos_bookkeeper_status.png)

At this point, 3 [bookies](reference-terminology.md#bookie) should be shown as green, which means that the bookies have been deployed successfully and are now running.

![DC/OS bookkeeper running](/assets/dcos_bookkeeper_run.png)

You can also click into each bookie instance to get more detailed information, such as the bookie running log.

![DC/OS bookie log](/assets/dcos_bookie_log.png)

To display information about BookKeeper in ZooKeeper, you can visit [http://m1.dcos/exhibitor](http://m1.dcos/exhibitor). In this example, 3 bookies are under the `available` directory.

![DC/OS bookkeeper in zk](/assets/dcos_bookkeeper_in_zookeeper.png)

## The Pulsar broker group

Similar to the BookKeeper group above, click into the **brokers** group to check the status of the Pulsar brokers.

![DC/OS broker status](/assets/dcos_broker_status.png)

![DC/OS broker running](/assets/dcos_broker_run.png)

You can also click into each broker instance to get more detailed information, such as the broker running log.

![DC/OS broker log](/assets/dcos_broker_log.png)

Broker cluster information in ZooKeeper is also available through the web UI. In this example, you can see that the `loadbalance` and `managed-ledgers` directories have been created.

![DC/OS broker in zk](/assets/dcos_broker_in_zookeeper.png)

## The monitor group

The **monitor** group consists of Prometheus and Grafana.

![DC/OS monitor status](/assets/dcos_monitor_status.png)

### Prometheus

Click into the instance of `prom` to get the endpoint of Prometheus, which is `192.168.65.121:9090` in this example.

![DC/OS prom endpoint](/assets/dcos_prom_endpoint.png)

If you click that endpoint, you can see the Prometheus dashboard. The [http://192.168.65.121:9090/targets](http://192.168.65.121:9090/targets) URL displays all the bookies and brokers.

![DC/OS prom targets](/assets/dcos_prom_targets.png)

### Grafana

Click into `grafana` to get the endpoint for Grafana, which is `192.168.65.121:3000` in this example.

![DC/OS grafana endpoint](/assets/dcos_grafana_endpoint.png)

If you click that endpoint, you can access the Grafana dashboard.

![DC/OS grafana targets](/assets/dcos_grafana_dashboard.png)

## Run a simple Pulsar consumer and producer on DC/OS

Now that you have a fully deployed Pulsar cluster, you can run a simple consumer and producer to show Pulsar on DC/OS in action.

### Download and prepare the Pulsar Java tutorial

You can clone the [Pulsar Java tutorial](https://github.com/streamlio/pulsar-java-tutorial) repo. This repo contains a simple Pulsar consumer and producer (you can find more information in the `README` file of the repo).

```bash
$ git clone https://github.com/streamlio/pulsar-java-tutorial
```

Change the `SERVICE_URL` from `pulsar://localhost:6650` to `pulsar://a1.dcos:6650` in both [`ConsumerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ConsumerTutorial.java) and [`ProducerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ProducerTutorial.java).
The `pulsar://a1.dcos:6650` endpoint is for the broker service. You can fetch the endpoint details for each broker instance from the DC/OS GUI. `a1.dcos` is a DC/OS client agent that runs a broker, and the client agent IP address can be used instead.
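If you prefer to script that change, something like the following would work from the root of the tutorial checkout (the `sed` invocation is illustrative only; adjust the target URL to your own client agent):

```bash
# Point both tutorial classes at the DC/OS broker endpoint instead of localhost.
sed -i 's|pulsar://localhost:6650|pulsar://a1.dcos:6650|g' \
  src/main/java/tutorial/ConsumerTutorial.java \
  src/main/java/tutorial/ProducerTutorial.java
```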
- -Now, change the message number from 10 to 10000000 in main method of [`ProducerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ProducerTutorial.java) so that it can produce more messages. - -Now compile the project code using the command below: - -```bash - -$ mvn clean package - -``` - -### Run the consumer and producer - -Execute this command to run the consumer: - -```bash - -$ mvn exec:java -Dexec.mainClass="tutorial.ConsumerTutorial" - -``` - -Execute this command to run the producer: - -```bash - -$ mvn exec:java -Dexec.mainClass="tutorial.ProducerTutorial" - -``` - -You can see the producer producing messages and the consumer consuming messages through the DC/OS GUI. - -![DC/OS pulsar producer](/assets/dcos_producer.png) - -![DC/OS pulsar consumer](/assets/dcos_consumer.png) - -### View Grafana metric output - -While the producer and consumer run, you can access running metrics information from Grafana. - -![DC/OS pulsar dashboard](/assets/dcos_metrics.png) - - -## Uninstall Pulsar - -You can shut down and uninstall the `pulsar` application from DC/OS at any time in the following two ways: - -1. Using the DC/OS GUI, you can choose **Delete** at the right end of Pulsar group. - - ![DC/OS pulsar uninstall](/assets/dcos_uninstall.png) - -2. You can use the following command: - - ```bash - - $ dcos marathon group remove /pulsar - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/deploy-docker.md b/site2/website/versioned_docs/version-2.8.1-deprecated/deploy-docker.md deleted file mode 100644 index 8348d78deb2378..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/deploy-docker.md +++ /dev/null @@ -1,60 +0,0 @@ ---- -id: deploy-docker -title: Deploy a cluster on Docker -sidebar_label: "Docker" -original_id: deploy-docker ---- - -To deploy a Pulsar cluster on Docker, complete the following steps: -1. Deploy a ZooKeeper cluster (optional) -2. Initialize cluster metadata -3. Deploy a BookKeeper cluster -4. Deploy one or more Pulsar brokers - -## Prepare - -To run Pulsar on Docker, you need to create a container for each Pulsar component: ZooKeeper, BookKeeper and broker. You can pull the images of ZooKeeper and BookKeeper separately on [Docker Hub](https://hub.docker.com/), and pull a [Pulsar image](https://hub.docker.com/r/apachepulsar/pulsar-all/tags) for the broker. You can also pull only one [Pulsar image](https://hub.docker.com/r/apachepulsar/pulsar-all/tags) and create three containers with this image. This tutorial takes the second option as an example. - -### Pull a Pulsar image -You can pull a Pulsar image from [Docker Hub](https://hub.docker.com/r/apachepulsar/pulsar-all/tags) with the following command. - -``` - -docker pull apachepulsar/pulsar-all:latest - -``` - -### Create three containers -Create containers for ZooKeeper, BookKeeper and broker. In this example, they are named as `zookeeper`, `bookkeeper` and `broker` respectively. You can name them as you want with the `--name` flag. By default, the container names are created randomly. - -``` - -docker run -it --name bookkeeper apachepulsar/pulsar-all:latest /bin/bash -docker run -it --name zookeeper apachepulsar/pulsar-all:latest /bin/bash -docker run -it --name broker apachepulsar/pulsar-all:latest /bin/bash - -``` - -### Create a network -To deploy a Pulsar cluster on Docker, you need to create a `network` and connect the containers of ZooKeeper, BookKeeper and broker to this network. 
The following command creates the network `pulsar`: - -``` - -docker network create pulsar - -``` - -### Connect containers to network -Connect the containers of ZooKeeper, BookKeeper and broker to the `pulsar` network with the following commands. - -``` - -docker network connect pulsar zookeeper -docker network connect pulsar bookkeeper -docker network connect pulsar broker - -``` - -To check whether the containers are successfully connected to the network, enter the `docker network inspect pulsar` command. - -For detailed information about how to deploy ZooKeeper cluster, BookKeeper cluster, brokers, see [deploy a cluster on bare metal](deploy-bare-metal.md). diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/deploy-kubernetes.md b/site2/website/versioned_docs/version-2.8.1-deprecated/deploy-kubernetes.md deleted file mode 100644 index 1aefc6ad79f716..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/deploy-kubernetes.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -id: deploy-kubernetes -title: Deploy Pulsar on Kubernetes -sidebar_label: "Kubernetes" -original_id: deploy-kubernetes ---- - -To get up and running with these charts as fast as possible, in a **non-production** use case, we provide -a [quick start guide](getting-started-helm.md) for Proof of Concept (PoC) deployments. - -To configure and install a Pulsar cluster on Kubernetes for production usage, follow the complete [Installation Guide](helm-install.md). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/deploy-monitoring.md b/site2/website/versioned_docs/version-2.8.1-deprecated/deploy-monitoring.md deleted file mode 100644 index f9fe0e0bb97be7..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/deploy-monitoring.md +++ /dev/null @@ -1,148 +0,0 @@ ---- -id: deploy-monitoring -title: Monitor -sidebar_label: "Monitor" -original_id: deploy-monitoring ---- - -You can use different ways to monitor a Pulsar cluster, exposing both metrics related to the usage of topics and the overall health of the individual components of the cluster. - -## Collect metrics - -You can collect broker stats, ZooKeeper stats, and BookKeeper stats. - -### Broker stats - -You can collect Pulsar broker metrics from brokers and export the metrics in JSON format. The Pulsar broker metrics mainly have two types: - -* *Destination dumps*, which contain stats for each individual topic. You can fetch the destination dumps using the command below: - - ```shell - - bin/pulsar-admin broker-stats destinations - - ``` - -* Broker metrics, which contain the broker information and topics stats aggregated at namespace level. You can fetch the broker metrics by using the following command: - - ```shell - - bin/pulsar-admin broker-stats monitoring-metrics - - ``` - -All the message rates are updated every minute. - -The aggregated broker metrics are also exposed in the [Prometheus](https://prometheus.io) format at: - -```shell - -http://$BROKER_ADDRESS:8080/metrics/ - -``` - -### ZooKeeper stats - -The local ZooKeeper, configuration store server and clients that are shipped with Pulsar can expose detailed stats through Prometheus. - -```shell - -http://$LOCAL_ZK_SERVER:8000/metrics -http://$GLOBAL_ZK_SERVER:8001/metrics - -``` - -The default port of local ZooKeeper is `8000` and the default port of the configuration store is `8001`. You can use a different stats port by configuring `metricsProvider.httpPort` in the `conf/zookeeper.conf` file. 
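A quick way to confirm that the stats endpoints described above respond is to fetch a few lines from each (the hostnames are placeholders for your own servers):

```bash
# Spot-check the local ZooKeeper and configuration store metrics endpoints.
curl -s http://$LOCAL_ZK_SERVER:8000/metrics | head -n 20
curl -s http://$GLOBAL_ZK_SERVER:8001/metrics | head -n 20
```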
### BookKeeper stats

You can configure the stats frameworks for BookKeeper by modifying the `statsProviderClass` in the `conf/bookkeeper.conf` file.

The default BookKeeper configuration enables the Prometheus exporter. The configuration is included with the Pulsar distribution.

```shell
http://$BOOKIE_ADDRESS:8000/metrics
```

The default port for the bookie is `8000`. You can change the port by configuring `prometheusStatsHttpPort` in the `conf/bookkeeper.conf` file.

### Managed cursor acknowledgment state

The acknowledgment state is first persisted to the ledger. When persisting the acknowledgment state to the ledger fails, it is persisted to ZooKeeper. To track acknowledgment stats, you can configure the metrics for the managed cursor.

```
brk_ml_cursor_persistLedgerSucceed(namespace="", ledger_name="", cursor_name="")
brk_ml_cursor_persistLedgerErrors(namespace="", ledger_name="", cursor_name="")
brk_ml_cursor_persistZookeeperSucceed(namespace="", ledger_name="", cursor_name="")
brk_ml_cursor_persistZookeeperErrors(namespace="", ledger_name="", cursor_name="")
brk_ml_cursor_nonContiguousDeletedMessagesRange(namespace="", ledger_name="", cursor_name="")
```

These metrics are exposed through the Prometheus interface, and you can monitor and check the metrics stats in Grafana.

### Function and connector stats

You can collect functions worker stats from `functions-worker` and export the metrics in JSON format, which contains the functions worker JVM metrics.

```
pulsar-admin functions-worker monitoring-metrics
```

You can collect functions and connectors metrics from `functions-worker` and export the metrics in JSON format.

```
pulsar-admin functions-worker function-stats
```

The aggregated functions and connectors metrics can be exposed in the Prometheus format at the address below. You can get [`FUNCTIONS_WORKER_ADDRESS`](http://pulsar.apache.org/docs/en/next/functions-worker/) and `WORKER_PORT` from the `functions_worker.yml` file.

```
http://$FUNCTIONS_WORKER_ADDRESS:$WORKER_PORT/metrics
```

## Configure Prometheus

You can use Prometheus to collect all the metrics exposed for Pulsar components and set up [Grafana](https://grafana.com/) dashboards to display the metrics and monitor your Pulsar cluster. For details, refer to the [Prometheus guide](https://prometheus.io/docs/introduction/getting_started/).

When you run Pulsar on bare metal, you can provide the list of nodes to be probed. When you deploy Pulsar in a Kubernetes cluster, the monitoring is set up automatically. For details, refer to the [Kubernetes instructions](helm-deploy.md).

## Dashboards

When you collect time series statistics, the major problem is to make sure the number of dimensions attached to the data does not explode. Thus, you only need to collect time series of metrics aggregated at the namespace level.

### Pulsar per-topic dashboard

The per-topic dashboard instructions are available at [Pulsar manager](administration-pulsar-manager.md).

### Grafana

You can use Grafana to create dashboards driven by the data that is stored in Prometheus.

When you deploy Pulsar on Kubernetes, a `pulsar-grafana` Docker image is enabled by default. You can use the Docker image with the principal dashboards.
Enter the command below to use the dashboard manually:

```shell
docker run -p3000:3000 \
  -e PROMETHEUS_URL=http://$PROMETHEUS_HOST:9090/ \
  apachepulsar/pulsar-grafana:latest
```

The following are some Grafana dashboard examples:

- [pulsar-grafana](http://pulsar.apache.org/docs/en/deploy-monitoring/#grafana): a Grafana dashboard that displays metrics collected in Prometheus for Pulsar clusters running on Kubernetes.
- [apache-pulsar-grafana-dashboard](https://github.com/streamnative/apache-pulsar-grafana-dashboard): a collection of Grafana dashboard templates for different Pulsar components running on both Kubernetes and on-premise machines.

## Alerting rules

You can set alerting rules according to your Pulsar environment. To configure alerting rules for Apache Pulsar, refer to [alerting rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/).

diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/develop-load-manager.md b/site2/website/versioned_docs/version-2.8.1-deprecated/develop-load-manager.md
deleted file mode 100644
index 509209b6a852d8..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/develop-load-manager.md
+++ /dev/null
@@ -1,227 +0,0 @@
---
id: develop-load-manager
title: Modular load manager
sidebar_label: "Modular load manager"
original_id: develop-load-manager
---

The *modular load manager*, implemented in [`ModularLoadManagerImpl`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/impl/ModularLoadManagerImpl.java), is a flexible alternative to the previously implemented load manager, [`SimpleLoadManagerImpl`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/impl/SimpleLoadManagerImpl.java), which attempts to simplify how load is managed while also providing abstractions so that complex load management strategies may be implemented.

## Usage

There are two ways that you can enable the modular load manager:

1. Change the value of the `loadManagerClassName` parameter in `conf/broker.conf` from `org.apache.pulsar.broker.loadbalance.impl.SimpleLoadManagerImpl` to `org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl`.
2. Using the `pulsar-admin` tool. Here's an example:

   ```shell
   $ pulsar-admin update-dynamic-config \
     --config loadManagerClassName \
     --value org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl
   ```

   You can use the same method to change back to the original value. In either case, any mistake in specifying the load manager will cause Pulsar to default to `SimpleLoadManagerImpl`.

## Verification

There are a few different ways to determine which load manager is being used:

1. Use `pulsar-admin` to examine the `loadManagerClassName` element:

   ```shell
   $ bin/pulsar-admin brokers get-all-dynamic-config
   {
     "loadManagerClassName" : "org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl"
   }
   ```

   If there is no `loadManagerClassName` element, then the default load manager is used.

2. Consult a ZooKeeper load report. With the modular load manager, the load report in `/loadbalance/brokers/...` has many differences. For example, the `systemResourceUsage` sub-elements (`bandwidthIn`, `bandwidthOut`, etc.) are now all at the top level.
Here is an example load report from the module load manager: - - ```json - - { - "bandwidthIn": { - "limit": 10240000.0, - "usage": 4.256510416666667 - }, - "bandwidthOut": { - "limit": 10240000.0, - "usage": 5.287239583333333 - }, - "bundles": [], - "cpu": { - "limit": 2400.0, - "usage": 5.7353247655435915 - }, - "directMemory": { - "limit": 16384.0, - "usage": 1.0 - } - } - - ``` - - With the simple load manager, the load report in `/loadbalance/brokers/...` will look like this: - - ```json - - { - "systemResourceUsage": { - "bandwidthIn": { - "limit": 10240000.0, - "usage": 0.0 - }, - "bandwidthOut": { - "limit": 10240000.0, - "usage": 0.0 - }, - "cpu": { - "limit": 2400.0, - "usage": 0.0 - }, - "directMemory": { - "limit": 16384.0, - "usage": 1.0 - }, - "memory": { - "limit": 8192.0, - "usage": 3903.0 - } - } - } - - ``` - -3. The command-line [broker monitor](reference-cli-tools.md#monitor-brokers) will have a different output format depending on which load manager implementation is being used. - - Here is an example from the modular load manager: - - ``` - - =================================================================================================================== - ||SYSTEM |CPU % |MEMORY % |DIRECT % |BW IN % |BW OUT % |MAX % || - || |0.00 |48.33 |0.01 |0.00 |0.00 |48.33 || - ||COUNT |TOPIC |BUNDLE |PRODUCER |CONSUMER |BUNDLE + |BUNDLE - || - || |4 |4 |0 |2 |4 |0 || - ||LATEST |MSG/S IN |MSG/S OUT |TOTAL |KB/S IN |KB/S OUT |TOTAL || - || |0.00 |0.00 |0.00 |0.00 |0.00 |0.00 || - ||SHORT |MSG/S IN |MSG/S OUT |TOTAL |KB/S IN |KB/S OUT |TOTAL || - || |0.00 |0.00 |0.00 |0.00 |0.00 |0.00 || - ||LONG |MSG/S IN |MSG/S OUT |TOTAL |KB/S IN |KB/S OUT |TOTAL || - || |0.00 |0.00 |0.00 |0.00 |0.00 |0.00 || - =================================================================================================================== - - ``` - - Here is an example from the simple load manager: - - ``` - - =================================================================================================================== - ||COUNT |TOPIC |BUNDLE |PRODUCER |CONSUMER |BUNDLE + |BUNDLE - || - || |4 |4 |0 |2 |0 |0 || - ||RAW SYSTEM |CPU % |MEMORY % |DIRECT % |BW IN % |BW OUT % |MAX % || - || |0.25 |47.94 |0.01 |0.00 |0.00 |47.94 || - ||ALLOC SYSTEM |CPU % |MEMORY % |DIRECT % |BW IN % |BW OUT % |MAX % || - || |0.20 |1.89 | |1.27 |3.21 |3.21 || - ||RAW MSG |MSG/S IN |MSG/S OUT |TOTAL |KB/S IN |KB/S OUT |TOTAL || - || |0.00 |0.00 |0.00 |0.01 |0.01 |0.01 || - ||ALLOC MSG |MSG/S IN |MSG/S OUT |TOTAL |KB/S IN |KB/S OUT |TOTAL || - || |54.84 |134.48 |189.31 |126.54 |320.96 |447.50 || - =================================================================================================================== - - ``` - -It is important to note that the module load manager is _centralized_, meaning that all requests to assign a bundle---whether it's been seen before or whether this is the first time---only get handled by the _lead_ broker (which can change over time). To determine the current lead broker, examine the `/loadbalance/leader` node in ZooKeeper. - -## Implementation - -### Data - -The data monitored by the modular load manager is contained in the [`LoadData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/LoadData.java) class. -Here, the available data is subdivided into the bundle data and the broker data. 
- -#### Broker - -The broker data is contained in the [`BrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/BrokerData.java) class. It is further subdivided into two parts, -one being the local data which every broker individually writes to ZooKeeper, and the other being the historical broker -data which is written to ZooKeeper by the leader broker. - -##### Local Broker Data -The local broker data is contained in the class [`LocalBrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/java/org/apache/pulsar/policies/data/loadbalancer/LocalBrokerData.java) and provides information about the following resources: - -* CPU usage -* JVM heap memory usage -* Direct memory usage -* Bandwidth in/out usage -* Most recent total message rate in/out across all bundles -* Total number of topics, bundles, producers, and consumers -* Names of all bundles assigned to this broker -* Most recent changes in bundle assignments for this broker - -The local broker data is updated periodically according to the service configuration -"loadBalancerReportUpdateMaxIntervalMinutes". After any broker updates their local broker data, the leader broker will -receive the update immediately via a ZooKeeper watch, where the local data is read from the ZooKeeper node -`/loadbalance/brokers/` - -##### Historical Broker Data - -The historical broker data is contained in the [`TimeAverageBrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/TimeAverageBrokerData.java) class. - -In order to reconcile the need to make good decisions in a steady-state scenario and make reactive decisions in a critical scenario, the historical data is split into two parts: the short-term data for reactive decisions, and the long-term data for steady-state decisions. Both time frames maintain the following information: - -* Message rate in/out for the entire broker -* Message throughput in/out for the entire broker - -Unlike the bundle data, the broker data does not maintain samples for the global broker message rates and throughputs, which is not expected to remain steady as new bundles are removed or added. Instead, this data is aggregated over the short-term and long-term data for the bundles. See the section on bundle data to understand how that data is collected and maintained. - -The historical broker data is updated for each broker in memory by the leader broker whenever any broker writes their local data to ZooKeeper. Then, the historical data is written to ZooKeeper by the leader broker periodically according to the configuration `loadBalancerResourceQuotaUpdateIntervalMinutes`. - -##### Bundle Data - -The bundle data is contained in the [`BundleData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/BundleData.java). Like the historical broker data, the bundle data is split into a short-term and a long-term time frame. The information maintained in each time frame: - -* Message rate in/out for this bundle -* Message Throughput In/Out for this bundle -* Current number of samples for this bundle - -The time frames are implemented by maintaining the average of these values over a set, limited number of samples, where -the samples are obtained through the message rate and throughput values in the local data. 
Thus, if the update interval for the local data is 2 minutes, the number of short samples is 10, and the number of long samples is 1000, the short-term data is maintained over a period of `10 samples * 2 minutes / sample = 20 minutes`, while the long-term data similarly covers a period of 2000 minutes. Whenever there are not enough samples to satisfy a given time frame, the average is taken only over the existing samples. When no samples are available, default values are assumed until they are overwritten by the first sample. Currently, the default values are:

* Message rate in/out: 50 messages per second both ways
* Message throughput in/out: 50KB per second both ways

The bundle data is updated in memory on the leader broker whenever any broker writes their local data to ZooKeeper. Then, the bundle data is written to ZooKeeper by the leader broker periodically at the same time as the historical broker data, according to the configuration `loadBalancerResourceQuotaUpdateIntervalMinutes`.

### Traffic Distribution

The modular load manager uses the abstraction provided by [`ModularLoadManagerStrategy`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/ModularLoadManagerStrategy.java) to make decisions about bundle assignment. The strategy makes a decision by considering the service configuration, the entire load data, and the bundle data for the bundle to be assigned. Currently, the only supported strategy is [`LeastLongTermMessageRate`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/impl/LeastLongTermMessageRate.java), though soon users will have the ability to inject their own strategies if desired.

#### Least Long Term Message Rate Strategy

As its name suggests, the least long term message rate strategy attempts to distribute bundles across brokers so that the message rate in the long-term time window for each broker is roughly the same. However, simply balancing load based on message rate does not handle the issue of asymmetric resource burden per message on each broker. Thus, the system resource usages, which are CPU, memory, direct memory, bandwidth in, and bandwidth out, are also considered in the assignment process. This is done by weighting the final message rate according to `1 / (overload_threshold - max_usage)`, where `overload_threshold` corresponds to the configuration `loadBalancerBrokerOverloadedThresholdPercentage` and `max_usage` is the maximum proportion among the system resources that is being utilized by the candidate broker. This multiplier ensures that machines that are more heavily taxed by the same message rates will receive less load. In particular, it tries to ensure that if one machine is overloaded, then all machines are approximately equally overloaded. In the case in which a broker's max usage exceeds the overload threshold, that broker is not considered for bundle assignment. If all brokers are overloaded, the bundle is randomly assigned.
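As a rough illustration of the weighting above (all numbers are made up for the example), consider two candidate brokers under an overload threshold of 0.85:

```bash
# Illustrative only: compute the message-rate multipliers for two brokers.
awk 'BEGIN {
  threshold = 0.85                 # overload threshold, as a proportion
  usage_a = 0.45; usage_b = 0.75   # max resource usage of brokers A and B
  printf "multiplier A = %.2f\n", 1 / (threshold - usage_a)   # 2.50
  printf "multiplier B = %.2f\n", 1 / (threshold - usage_b)   # 10.00
}'
```

Broker B's long-term message rate is inflated four times more than broker A's, so with otherwise equal rates the strategy strongly prefers assigning the bundle to broker A.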
- diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/develop-schema.md b/site2/website/versioned_docs/version-2.8.1-deprecated/develop-schema.md deleted file mode 100644 index 2d4461a5ea2b55..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/develop-schema.md +++ /dev/null @@ -1,62 +0,0 @@ ---- -id: develop-schema -title: Custom schema storage -sidebar_label: "Custom schema storage" -original_id: develop-schema ---- - -By default, Pulsar stores data type [schemas](concepts-schema-registry.md) in [Apache BookKeeper](https://bookkeeper.apache.org) (which is deployed alongside Pulsar). You can, however, use another storage system if you wish. This doc walks you through creating your own schema storage implementation. - -In order to use a non-default (i.e. non-BookKeeper) storage system for Pulsar schemas, you need to implement two Java interfaces: [`SchemaStorage`](#schemastorage-interface) and [`SchemaStorageFactory`](#schemastoragefactory-interface). - -## SchemaStorage interface - -The `SchemaStorage` interface has the following methods: - -```java - -public interface SchemaStorage { - // How schemas are updated - CompletableFuture put(String key, byte[] value, byte[] hash); - - // How schemas are fetched from storage - CompletableFuture get(String key, SchemaVersion version); - - // How schemas are deleted - CompletableFuture delete(String key); - - // Utility method for converting a schema version byte array to a SchemaVersion object - SchemaVersion versionFromBytes(byte[] version); - - // Startup behavior for the schema storage client - void start() throws Exception; - - // Shutdown behavior for the schema storage client - void close() throws Exception; -} - -``` - -> For a full-fledged example schema storage implementation, see the [`BookKeeperSchemaStorage`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/schema/BookkeeperSchemaStorage.java) class. - -## SchemaStorageFactory interface - -```java - -public interface SchemaStorageFactory { - @NotNull - SchemaStorage create(PulsarService pulsar) throws Exception; -} - -``` - -> For a full-fledged example schema storage factory implementation, see the [`BookKeeperSchemaStorageFactory`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/schema/BookkeeperSchemaStorageFactory.java) class. - -## Deployment - -In order to use your custom schema storage implementation, you'll need to: - -1. Package the implementation in a [JAR](https://docs.oracle.com/javase/tutorial/deployment/jar/basicsindex.html) file. -1. Add that jar to the `lib` folder in your Pulsar [binary or source distribution](getting-started-standalone.md#installing-pulsar). -1. Change the `schemaRegistryStorageClassName` configuration in [`broker.conf`](reference-configuration.md#broker) to your custom factory class (i.e. the `SchemaStorageFactory` implementation, not the `SchemaStorage` implementation). -1. Start up Pulsar. 
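A minimal sketch of those deployment steps from a shell, assuming a Maven build and a hypothetical factory class name:

```bash
# Build the implementation and drop the jar into Pulsar's lib directory.
mvn -q clean package
cp target/my-schema-storage-1.0.jar lib/

# Point the broker at the custom factory class (the class name is hypothetical).
# If the key is not already present in your broker.conf, append it instead.
sed -i 's|^schemaRegistryStorageClassName=.*|schemaRegistryStorageClassName=com.example.schema.MySchemaStorageFactory|' conf/broker.conf

bin/pulsar broker
```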
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/develop-tools.md b/site2/website/versioned_docs/version-2.8.1-deprecated/develop-tools.md
deleted file mode 100644
index b5457790b80810..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/develop-tools.md
+++ /dev/null
@@ -1,111 +0,0 @@
---
id: develop-tools
title: Simulation tools
sidebar_label: "Simulation tools"
original_id: develop-tools
---

It is sometimes necessary to create a test environment and incur artificial load to observe how well the load managers handle that load. The load simulation controller, the load simulation client, and the broker monitor were created to make creating this load and observing its effects on the managers easier.

## Simulation Client

The simulation client is a machine that creates and subscribes to topics with configurable message rates and sizes. Because simulating a large load sometimes requires multiple client machines, the user does not interact with the simulation client directly, but instead delegates requests to the simulation controller, which then signals the clients to start incurring load. The client implementation is in the class `org.apache.pulsar.testclient.LoadSimulationClient`.

### Usage

To start a simulation client, use the `pulsar-perf` script with the command `simulation-client` as follows:

```
pulsar-perf simulation-client --port <listen port> --service-url <pulsar service url>
```

The client will then be ready to receive controller commands.

## Simulation Controller

The simulation controller sends signals to the simulation clients, requesting them to create new topics, stop old topics, change the load incurred by topics, and perform several other tasks. It is implemented in the class `org.apache.pulsar.testclient.LoadSimulationController` and presents a shell to the user as an interface for sending commands.

### Usage

To start a simulation controller, use the `pulsar-perf` script with the command `simulation-controller` as follows:

```
pulsar-perf simulation-controller --cluster <cluster to simulate on> \
  --client-port <port the clients are listening on> \
  --clients <comma-separated list of client hostnames>
```

The clients should already be started before the controller is started. You will then be presented with a simple prompt, where you can issue commands to simulation clients. Arguments often refer to tenant names, namespace names, and topic names. In all cases, the BASE names of the tenants, namespaces, and topics are used. For example, for the topic `persistent://my_tenant/my_cluster/my_namespace/my_topic`, the tenant name is `my_tenant`, the namespace name is `my_namespace`, and the topic name is `my_topic`.
The controller can perform the following actions:

* Create a topic with a producer and a consumer
  * `trade <tenant> <namespace> <topic> [--rate <message rate per second>] [--rand-rate <lower>,<upper>] [--size <message size in bytes>]`
* Create a group of topics with a producer and a consumer
  * `trade_group <tenant> <group> <num namespaces> [--rate <message rate per second>] [--rand-rate <lower>,<upper>] [--separation <separation between topic creations in ms>] [--size <message size in bytes>] [--topics-per-namespace <number of topics per namespace>]`
* Change the configuration of an existing topic
  * `change <tenant> <namespace> <topic> [--rate <message rate per second>] [--rand-rate <lower>,<upper>] [--size <message size in bytes>]`
* Change the configuration of a group of topics
  * `change_group <group> [--rate <message rate per second>] [--rand-rate <lower>,<upper>] [--size <message size in bytes>] [--topics-per-namespace <number of topics per namespace>]`
* Shutdown a previously created topic
  * `stop <tenant> <namespace> <topic>`
* Shutdown a previously created group of topics
  * `stop_group <group>`
* Copy the historical data from one ZooKeeper to another and simulate based on the message rates and sizes in that history
  * `copy <source zookeeper> <target zookeeper> [--rate-multiplier value]`
* Simulate the load of the historical data on the current ZooKeeper (should be the same ZooKeeper being simulated on)
  * `simulate <zookeeper> [--rate-multiplier value]`
* Stream the latest data from the given active ZooKeeper to simulate the real-time load of that ZooKeeper
  * `stream <zookeeper> [--rate-multiplier value]`

The "group" arguments in these commands allow the user to create or affect multiple topics at once. Groups are created when calling the `trade_group` command, and all topics from these groups may be subsequently modified or stopped with the `change_group` and `stop_group` commands respectively. All ZooKeeper arguments are of the form `zookeeper_host:port`.

### Difference Between Copy, Simulate, and Stream

The commands `copy`, `simulate`, and `stream` are very similar but have significant differences. `copy` is used when you want to simulate the load of a static, external ZooKeeper on the ZooKeeper you are simulating on. Thus, `source zookeeper` should be the ZooKeeper you want to copy and `target zookeeper` should be the ZooKeeper you are simulating on, and then it will get the full benefit of the historical data of the source in both load manager implementations. `simulate`, on the other hand, takes in only one ZooKeeper, the one you are simulating on. It assumes that you are simulating on a ZooKeeper that has historical data for `SimpleLoadManagerImpl` and creates equivalent historical data for `ModularLoadManagerImpl`. Then, the load according to the historical data is simulated by the clients. Finally, `stream` takes in an active ZooKeeper different from the ZooKeeper being simulated on and streams load data from it, simulating the real-time load. In all cases, the optional `rate-multiplier` argument allows the user to simulate some proportion of the load. For instance, using `--rate-multiplier 0.05` will cause messages to be sent at only `5%` of the rate of the load that is being simulated.

## Broker Monitor

To observe the behavior of the load manager in these simulations, you can use the broker monitor, which is implemented in `org.apache.pulsar.testclient.BrokerMonitor`. The broker monitor prints tabular load data to the console as the data is updated, using watchers.

### Usage

To start a broker monitor, use the `monitor-brokers` command in the `pulsar-perf` script:

```
pulsar-perf monitor-brokers --connect-string <zookeeper host:port>
```

The console will then continuously print load data until it is interrupted.
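Putting the pieces together, a small two-client session might look like the following sketch (the hostnames and the port are examples only):

```bash
# On each client machine, start a simulation client listening for controller commands.
pulsar-perf simulation-client --port 5748 --service-url pulsar://broker1.example.com:6650

# On the controller machine, attach to both clients; at the resulting prompt you
# can then issue commands such as:
#   trade my_tenant my_namespace my_topic --rate 100 --size 1024
pulsar-perf simulation-controller --cluster my-cluster \
  --client-port 5748 \
  --clients client1.example.com,client2.example.com

# In another terminal, watch how the load manager redistributes bundles.
pulsar-perf monitor-brokers --connect-string zk1.example.com:2181
```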
- diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/developing-binary-protocol.md b/site2/website/versioned_docs/version-2.8.1-deprecated/developing-binary-protocol.md deleted file mode 100644 index c5ce6edd4eb02e..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/developing-binary-protocol.md +++ /dev/null @@ -1,606 +0,0 @@ ---- -id: developing-binary-protocol -title: Pulsar binary protocol specification -sidebar_label: "Binary protocol" -original_id: developing-binary-protocol ---- - -Pulsar uses a custom binary protocol for communications between producers/consumers and brokers. This protocol is designed to support required features, such as acknowledgements and flow control, while ensuring maximum transport and implementation efficiency. - -Clients and brokers exchange *commands* with each other. Commands are formatted as binary [protocol buffer](https://developers.google.com/protocol-buffers/) (aka *protobuf*) messages. The format of protobuf commands is specified in the [`PulsarApi.proto`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/proto/PulsarApi.proto) file and also documented in the [Protobuf interface](#protobuf-interface) section below. - -> ### Connection sharing -> Commands for different producers and consumers can be interleaved and sent through the same connection without restriction. - -All commands associated with Pulsar's protocol are contained in a [`BaseCommand`](#pulsar.proto.BaseCommand) protobuf message that includes a [`Type`](#pulsar.proto.Type) [enum](https://developers.google.com/protocol-buffers/docs/proto#enum) with all possible subcommands as optional fields. `BaseCommand` messages can specify only one subcommand. - -## Framing - -Since protobuf doesn't provide any sort of message frame, all messages in the Pulsar protocol are prepended with a 4-byte field that specifies the size of the frame. The maximum allowable size of a single frame is 5 MB. - -The Pulsar protocol allows for two types of commands: - -1. **Simple commands** that do not carry a message payload. -2. **Payload commands** that bear a payload that is used when publishing or delivering messages. In payload commands, the protobuf command data is followed by protobuf [metadata](#message-metadata) and then the payload, which is passed in raw format outside of protobuf. All sizes are passed as 4-byte unsigned big endian integers. - -> Message payloads are passed in raw format rather than protobuf format for efficiency reasons. 
- -### Simple commands - -Simple (payload-free) commands have this basic structure: - -| Component | Description | Size (in bytes) | -|:------------|:----------------------------------------------------------------------------------------|:----------------| -| totalSize | The size of the frame, counting everything that comes after it (in bytes) | 4 | -| commandSize | The size of the protobuf-serialized command | 4 | -| message | The protobuf message serialized in a raw binary format (rather than in protobuf format) | | - -### Payload commands - -Payload commands have this basic structure: - -| Component | Description | Size (in bytes) | -|:-------------|:--------------------------------------------------------------------------------------------|:----------------| -| totalSize | The size of the frame, counting everything that comes after it (in bytes) | 4 | -| commandSize | The size of the protobuf-serialized command | 4 | -| message | The protobuf message serialized in a raw binary format (rather than in protobuf format) | | -| magicNumber | A 2-byte byte array (`0x0e01`) identifying the current format | 2 | -| checksum | A [CRC32-C checksum](http://www.evanjones.ca/crc32c.html) of everything that comes after it | 4 | -| metadataSize | The size of the message [metadata](#message-metadata) | 4 | -| metadata | The message [metadata](#message-metadata) stored as a binary protobuf message | | -| payload | Anything left in the frame is considered the payload and can include any sequence of bytes | | - -## Message metadata - -Message metadata is stored alongside the application-specified payload as a serialized protobuf message. Metadata is created by the producer and passed on unchanged to the consumer. - -| Field | Description | -|:-------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| `producer_name` | The name of the producer that published the message | -| `sequence_id` | The sequence ID of the message, assigned by producer | -| `publish_time` | The publish timestamp in Unix time (i.e. as the number of milliseconds since January 1st, 1970 in UTC) | -| `properties` | A sequence of key/value pairs (using the [`KeyValue`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/proto/PulsarApi.proto#L32) message). These are application-defined keys and values with no special meaning to Pulsar. 
|
-| `replicated_from` *(optional)* | Indicates that the message has been replicated and specifies the name of the [cluster](reference-terminology.md#cluster) where the message was originally published |
-| `partition_key` *(optional)* | When publishing on a partitioned topic, if the key is present, the hash of the key is used to determine which partition to choose |
-| `compression` *(optional)* | Signals that the payload has been compressed and with which compression library |
-| `uncompressed_size` *(optional)* | If compression is used, the producer must fill the uncompressed size field with the original payload size |
-| `num_messages_in_batch` *(optional)* | If this message is actually a [batch](#batch-messages) of multiple entries, this field must be set to the number of messages in the batch |
-
-### Batch messages
-
-When using batch messages, the payload contains a list of entries, each with
-its own individual metadata, defined by the `SingleMessageMetadata`
-object.
-
-For a single batch, the payload format looks like this:
-
-| Field         | Description                                                   |
-|:--------------|:--------------------------------------------------------------|
-| metadataSizeN | The size of the serialized single message metadata (protobuf) |
-| metadataN     | Single message metadata                                       |
-| payloadN      | Message payload passed by the application                     |
-
-Each metadata field looks like this:
-
-| Field                      | Description                                              |
-|:---------------------------|:---------------------------------------------------------|
-| properties                 | Application-defined properties                           |
-| partition key *(optional)* | Key used to hash the message to a particular partition   |
-| payload_size               | Size of the payload for the single message in the batch  |
-
-When compression is enabled, the whole batch is compressed at once.
-
-## Interactions
-
-### Connection establishment
-
-After opening a TCP connection to a broker, typically on port 6650, the client
-is responsible for initiating the session.
-
-![Connect interaction](/assets/binary-protocol-connect.png)
-
-After receiving a `Connected` response from the broker, the client can
-consider the connection ready to use. Alternatively, if the broker fails to
-validate the client's authentication, it replies with an `Error` command and
-closes the TCP connection.
-
-Example:
-
-```protobuf
-
-message CommandConnect {
-  "client_version" : "Pulsar-Client-Java-v1.15.2",
-  "auth_method_name" : "my-authentication-plugin",
-  "auth_data" : "my-auth-data",
-  "protocol_version" : 6
-}
-
-```
-
-Fields:
- * `client_version` → String-based identifier. The format is not enforced
- * `auth_method_name` → *(optional)* Name of the authentication plugin if
-   authentication is enabled
- * `auth_data` → *(optional)* Plugin-specific authentication data
- * `protocol_version` → Indicates the protocol version supported by the
-   client. The broker will not send commands introduced in newer revisions of
-   the protocol. The broker might enforce a minimum version
-
-```protobuf
-
-message CommandConnected {
-  "server_version" : "Pulsar-Broker-v1.15.2",
-  "protocol_version" : 6
-}
-
-```
-
-Fields:
- * `server_version` → String identifier of the broker version
- * `protocol_version` → Protocol version supported by the broker.
Client - must not attempt to send commands introduced in newer revisions of the - protocol - -### Keep Alive - -To identify prolonged network partitions between clients and brokers or cases -in which a machine crashes without interrupting the TCP connection on the remote -end (eg: power outage, kernel panic, hard reboot...), we have introduced a -mechanism to probe for the availability status of the remote peer. - -Both clients and brokers are sending `Ping` commands periodically and they will -close the socket if a `Pong` response is not received within a timeout (default -used by broker is 60s). - -A valid implementation of a Pulsar client is not required to send the `Ping` -probe, though it is required to promptly reply after receiving one from the -broker in order to prevent the remote side from forcibly closing the TCP connection. - - -### Producer - -In order to send messages, a client needs to establish a producer. When creating -a producer, the broker will first verify that this particular client is -authorized to publish on the topic. - -Once the client gets confirmation of the producer creation, it can publish -messages to the broker, referring to the producer id negotiated before. - -![Producer interaction](/assets/binary-protocol-producer.png) - -##### Command Producer - -```protobuf - -message CommandProducer { - "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic", - "producer_id" : 1, - "request_id" : 1 -} - -``` - -Parameters: - * `topic` → Complete topic name to where you want to create the producer on - * `producer_id` → Client generated producer identifier. Needs to be unique - within the same connection - * `request_id` → Identifier for this request. Used to match the response with - the originating request. Needs to be unique within the same connection - * `producer_name` → *(optional)* If a producer name is specified, the name will - be used, otherwise the broker will generate a unique name. Generated - producer name is guaranteed to be globally unique. Implementations are - expected to let the broker generate a new producer name when the producer - is initially created, then reuse it when recreating the producer after - reconnections. - -The broker will reply with either `ProducerSuccess` or `Error` commands. - -##### Command ProducerSuccess - -```protobuf - -message CommandProducerSuccess { - "request_id" : 1, - "producer_name" : "generated-unique-producer-name" -} - -``` - -Parameters: - * `request_id` → Original id of the `CreateProducer` request - * `producer_name` → Generated globally unique producer name or the name - specified by the client, if any. - -##### Command Send - -Command `Send` is used to publish a new message within the context of an -already existing producer. This command is used in a frame that includes command -as well as message payload, for which the complete format is specified in the [payload commands](#payload-commands) section. - -```protobuf - -message CommandSend { - "producer_id" : 1, - "sequence_id" : 0, - "num_messages" : 1 -} - -``` - -Parameters: - * `producer_id` → id of an existing producer - * `sequence_id` → each message has an associated sequence id which is expected - to be implemented with a counter starting at 0. The `SendReceipt` that - acknowledges the effective publishing of messages will refer to it by - its sequence id. - * `num_messages` → *(optional)* Used when publishing a batch of messages at - once. 
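-
-As a companion to the layout in the [payload commands](#payload-commands)
-section, the following sketch assembles the payload portion of a `Send` frame
-(magic number, checksum, metadata size, metadata, payload). It only illustrates
-the byte layout; the helper names are invented here and do not come from any
-Pulsar client library.
-
-```java
-
-import java.io.ByteArrayOutputStream;
-import java.io.DataOutputStream;
-import java.io.IOException;
-import java.util.zip.CRC32C;
-
-public class PayloadSectionBuilder {
-    public static byte[] build(byte[] serializedMetadata, byte[] payload) throws IOException {
-        // The checksum covers everything that comes after it:
-        // metadataSize + metadata + payload.
-        ByteArrayOutputStream checksummedOut = new ByteArrayOutputStream();
-        DataOutputStream checksummed = new DataOutputStream(checksummedOut);
-        checksummed.writeInt(serializedMetadata.length); // metadataSize, 4-byte big endian
-        checksummed.write(serializedMetadata);           // metadata (binary protobuf)
-        checksummed.write(payload);                      // raw payload bytes
-        byte[] body = checksummedOut.toByteArray();
-
-        CRC32C crc = new CRC32C();
-        crc.update(body, 0, body.length);
-
-        ByteArrayOutputStream out = new ByteArrayOutputStream();
-        DataOutputStream section = new DataOutputStream(out);
-        section.writeShort(0x0e01);             // 2-byte magic number
-        section.writeInt((int) crc.getValue()); // CRC32-C checksum
-        section.write(body);
-        return out.toByteArray();
-    }
-}
-
-```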
-
-##### Command SendReceipt
-
-After a message has been persisted on the configured number of replicas, the
-broker sends the acknowledgment receipt to the producer.
-
-```protobuf
-
-message CommandSendReceipt {
-  "producer_id" : 1,
-  "sequence_id" : 0,
-  "message_id" : {
-    "ledgerId" : 123,
-    "entryId" : 456
-  }
-}
-
-```
-
-Parameters:
- * `producer_id` → id of the producer originating the send request
- * `sequence_id` → sequence id of the published message
- * `message_id` → message id assigned by the system to the published message.
-   Unique within a single cluster. The message id is composed of 2 longs, `ledgerId`
-   and `entryId`, reflecting that this unique id is assigned when appending
-   to a BookKeeper ledger
-
-
-##### Command CloseProducer
-
-**Note**: *This command can be sent by either producer or broker*.
-
-When receiving a `CloseProducer` command, the broker stops accepting any
-more messages for the producer, waits until all pending messages are persisted
-and then replies `Success` to the client.
-
-The broker can send a `CloseProducer` command to the client when it is performing
-a graceful failover (e.g. the broker is being restarted, or the topic is being unloaded
-by the load balancer to be transferred to a different broker).
-
-When receiving the `CloseProducer`, the client is expected to go through the
-service discovery lookup again and recreate the producer. The TCP
-connection is not affected.
-
-### Consumer
-
-A consumer is used to attach to a subscription and consume messages from it.
-After every reconnection, a client needs to subscribe to the topic. If a
-subscription is not already there, a new one is created.
-
-![Consumer](/assets/binary-protocol-consumer.png)
-
-#### Flow control
-
-After the consumer is ready, the client needs to *give permission* to the
-broker to push messages. This is done with the `Flow` command.
-
-A `Flow` command gives additional *permits* to send messages to the consumer.
-A typical consumer implementation uses a queue to accumulate these messages
-before the application is ready to consume them.
-
-After the application has dequeued half of the messages in the queue, the consumer
-sends permits to the broker to ask for more messages, equal to half of the queue
-size (a minimal sketch of this strategy appears before the `Flow` command below).
-
-For example, if the queue size is 1000 and the consumer has consumed 500 messages
-from the queue, the consumer sends permits to the broker to ask for 500 more messages.
-
-##### Command Subscribe
-
-```protobuf
-
-message CommandSubscribe {
-  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
-  "subscription" : "my-subscription-name",
-  "subType" : "Exclusive",
-  "consumer_id" : 1,
-  "request_id" : 1
-}
-
-```
-
-Parameters:
- * `topic` → Complete topic name on which to create the consumer
- * `subscription` → Subscription name
- * `subType` → Subscription type: Exclusive, Shared, Failover, Key_Shared
- * `consumer_id` → Client generated consumer identifier. Needs to be unique
-   within the same connection
- * `request_id` → Identifier for this request. Used to match the response with
-   the originating request. Needs to be unique within the same connection
- * `consumer_name` → *(optional)* Clients can specify a consumer name. This
-   name can be used to track a particular consumer in the stats. Also, in the
-   Failover subscription type, the name is used to decide which consumer is
-   elected as *master* (the one receiving messages): consumers are sorted by
-   their consumer name and the first one is elected master.
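-
-Before looking at the `Flow` command itself, here is a minimal sketch of the
-half-queue permit strategy described under flow control above. The class and
-method names are invented for illustration and are not taken from any Pulsar
-client.
-
-```java
-
-public class PermitTracker {
-    private final int queueSize;
-    private int consumedSinceLastFlow = 0;
-
-    public PermitTracker(int queueSize) {
-        this.queueSize = queueSize;
-    }
-
-    // Called each time the application dequeues one message. Returns the
-    // number of permits to send in a Flow command, or 0 if it is not yet
-    // time to ask for more messages.
-    public int onMessageDequeued() {
-        consumedSinceLastFlow++;
-        if (consumedSinceLastFlow >= queueSize / 2) {
-            int permits = consumedSinceLastFlow; // e.g. 500 for a queue of 1000
-            consumedSinceLastFlow = 0;
-            return permits;
-        }
-        return 0;
-    }
-}
-
-```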
-
-##### Command Flow
-
-```protobuf
-
-message CommandFlow {
-  "consumer_id" : 1,
-  "messagePermits" : 1000
-}
-
-```
-
-Parameters:
-* `consumer_id` → Id of an already established consumer
-* `messagePermits` → Number of additional permits to grant to the broker for
-  pushing more messages
-
-##### Command Message
-
-Command `Message` is used by the broker to push messages to an existing consumer,
-within the limits of the given permits.
-
-This command is used in a frame that includes the message payload as well, for
-which the complete format is specified in the [payload commands](#payload-commands)
-section.
-
-```protobuf
-
-message CommandMessage {
-  "consumer_id" : 1,
-  "message_id" : {
-    "ledgerId" : 123,
-    "entryId" : 456
-  }
-}
-
-```
-
-##### Command Ack
-
-An `Ack` is used to signal to the broker that a given message has been
-successfully processed by the application and can be discarded by the broker.
-
-In addition, the broker maintains the consumer position based on the
-acknowledged messages.
-
-```protobuf
-
-message CommandAck {
-  "consumer_id" : 1,
-  "ack_type" : "Individual",
-  "message_id" : {
-    "ledgerId" : 123,
-    "entryId" : 456
-  }
-}
-
-```
-
-Parameters:
- * `consumer_id` → Id of an already established consumer
- * `ack_type` → Type of acknowledgment: `Individual` or `Cumulative`
- * `message_id` → Id of the message to acknowledge
- * `validation_error` → *(optional)* Indicates that the consumer has discarded
-   the messages due to: `UncompressedSizeCorruption`,
-   `DecompressionError`, `ChecksumMismatch`, `BatchDeSerializeError`
- * `properties` → *(optional)* Reserved configuration items
- * `txnid_most_bits` → *(optional)* Same as the Transaction Coordinator ID; `txnid_most_bits`
-   and `txnid_least_bits` uniquely identify a transaction.
- * `txnid_least_bits` → *(optional)* The ID of the transaction opened in a transaction coordinator;
-   `txnid_most_bits` and `txnid_least_bits` uniquely identify a transaction.
- * `request_id` → *(optional)* ID for handling the response and timeout.
-
-##### Command AckResponse
-
-An `AckResponse` is the broker's response to an acknowledgment request sent by the client. It contains the `consumer_id` sent in the request.
-If a transaction is used, it contains both the Transaction ID and the Request ID that were sent in the request. The client finishes the specific request according to the Request ID. If the `error` field is set, it indicates that the request has failed.
-
-An example of an `AckResponse`:
-
-```protobuf
-
-message CommandAckResponse {
-  "consumer_id" : 1,
-  "txnid_least_bits" : 0,
-  "txnid_most_bits" : 1,
-  "request_id" : 5
-}
-
-```
-
-##### Command CloseConsumer
-
-**Note**: *This command can be sent by either consumer or broker*.
-
-This command behaves the same as [`CloseProducer`](#command-closeproducer).
-
-##### Command RedeliverUnacknowledgedMessages
-
-A consumer can ask the broker to redeliver some or all of the pending messages
-that were pushed to that particular consumer and not yet acknowledged.
-
-The protobuf object accepts a list of message ids that the consumer wants to
-be redelivered. If the list is empty, the broker redelivers all the
-pending messages.
-
-On redelivery, messages can be sent to the same consumer or, in the case of a
-shared subscription, spread across all available consumers.
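-
-Tying the `Message` command back to the payload-frame layout, the following
-sketch shows how a consumer implementation might verify the CRC32-C checksum of
-a received payload section before parsing the metadata. The code is
-illustrative only and is not taken from any Pulsar client.
-
-```java
-
-import java.nio.ByteBuffer;
-import java.util.zip.CRC32C;
-
-public class ChecksumValidator {
-    // Expects a buffer positioned at the start of the payload section:
-    // [magicNumber][checksum][metadataSize][metadata][payload].
-    public static boolean isValid(ByteBuffer payloadSection) {
-        short magic = payloadSection.getShort(); // expect 0x0e01
-        if (magic != 0x0e01) {
-            return false;
-        }
-        int expected = payloadSection.getInt();  // checksum carried in the frame
-        CRC32C crc = new CRC32C();
-        crc.update(payloadSection.duplicate());  // everything after the checksum field
-        return expected == (int) crc.getValue();
-    }
-}
-
-```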
- - -##### Command ReachedEndOfTopic - -This is sent by a broker to a particular consumer, whenever the topic -has been "terminated" and all the messages on the subscription were -acknowledged. - -The client should use this command to notify the application that no more -messages are coming from the consumer. - -##### Command ConsumerStats - -This command is sent by the client to retrieve Subscriber and Consumer level -stats from the broker. -Parameters: - * `request_id` → Id of the request, used to correlate the request - and the response. - * `consumer_id` → Id of an already established consumer. - -##### Command ConsumerStatsResponse - -This is the broker's response to ConsumerStats request by the client. -It contains the Subscriber and Consumer level stats of the `consumer_id` sent in the request. -If the `error_code` or the `error_message` field is set it indicates that the request has failed. - -##### Command Unsubscribe - -This command is sent by the client to unsubscribe the `consumer_id` from the associated topic. -Parameters: - * `request_id` → Id of the request. - * `consumer_id` → Id of an already established consumer which needs to unsubscribe. - - -## Service discovery - -### Topic lookup - -Topic lookup needs to be performed each time a client needs to create or -reconnect a producer or a consumer. Lookup is used to discover which particular -broker is serving the topic we are about to use. - -Lookup can be done with a REST call as described in the [admin API](admin-api-topics.md#lookup-of-topic) -docs. - -Since Pulsar-1.16 it is also possible to perform the lookup within the binary -protocol. - -For the sake of example, let's assume we have a service discovery component -running at `pulsar://broker.example.com:6650` - -Individual brokers will be running at `pulsar://broker-1.example.com:6650`, -`pulsar://broker-2.example.com:6650`, ... - -A client can use a connection to the discovery service host to issue a -`LookupTopic` command. The response can either be a broker hostname to -connect to, or a broker hostname to which retry the lookup. - -The `LookupTopic` command has to be used in a connection that has already -gone through the `Connect` / `Connected` initial handshake. - -![Topic lookup](/assets/binary-protocol-topic-lookup.png) - -```protobuf - -message CommandLookupTopic { - "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic", - "request_id" : 1, - "authoritative" : false -} - -``` - -Fields: - * `topic` → Topic name to lookup - * `request_id` → Id of the request that will be passed with its response - * `authoritative` → Initial lookup request should use false. 
When following a - redirect response, client should pass the same value contained in the - response - -##### LookupTopicResponse - -Example of response with successful lookup: - -```protobuf - -message CommandLookupTopicResponse { - "request_id" : 1, - "response" : "Connect", - "brokerServiceUrl" : "pulsar://broker-1.example.com:6650", - "brokerServiceUrlTls" : "pulsar+ssl://broker-1.example.com:6651", - "authoritative" : true -} - -``` - -Example of lookup response with redirection: - -```protobuf - -message CommandLookupTopicResponse { - "request_id" : 1, - "response" : "Redirect", - "brokerServiceUrl" : "pulsar://broker-2.example.com:6650", - "brokerServiceUrlTls" : "pulsar+ssl://broker-2.example.com:6651", - "authoritative" : true -} - -``` - -In this second case, we need to reissue the `LookupTopic` command request -to `broker-2.example.com` and this broker will be able to give a definitive -answer to the lookup request. - -### Partitioned topics discovery - -Partitioned topics metadata discovery is used to find out if a topic is a -"partitioned topic" and how many partitions were set up. - -If the topic is marked as "partitioned", the client is expected to create -multiple producers or consumers, one for each partition, using the `partition-X` -suffix. - -This information only needs to be retrieved the first time a producer or -consumer is created. There is no need to do this after reconnections. - -The discovery of partitioned topics metadata works very similar to the topic -lookup. The client send a request to the service discovery address and the -response will contain actual metadata. - -##### Command PartitionedTopicMetadata - -```protobuf - -message CommandPartitionedTopicMetadata { - "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic", - "request_id" : 1 -} - -``` - -Fields: - * `topic` → the topic for which to check the partitions metadata - * `request_id` → Id of the request that will be passed with its response - - -##### Command PartitionedTopicMetadataResponse - -Example of response with metadata: - -```protobuf - -message CommandPartitionedTopicMetadataResponse { - "request_id" : 1, - "response" : "Success", - "partitions" : 32 -} - -``` - -## Protobuf interface - -All Pulsar's Protobuf definitions can be found {@inject: github:here:/pulsar-common/src/main/proto/PulsarApi.proto}. diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/functions-cli.md b/site2/website/versioned_docs/version-2.8.1-deprecated/functions-cli.md deleted file mode 100644 index c9fcfa201525f0..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/functions-cli.md +++ /dev/null @@ -1,198 +0,0 @@ ---- -id: functions-cli -title: Pulsar Functions command line tool -sidebar_label: "Reference: CLI" -original_id: functions-cli ---- - -The following tables list Pulsar Functions command-line tools. You can learn Pulsar Functions modes, commands, and parameters. - -## localrun - -Run Pulsar Functions locally, rather than deploying it to the Pulsar cluster. - -Name | Description | Default ----|---|--- -auto-ack | Whether or not the framework acknowledges messages automatically. | true | -broker-service-url | The URL for the Pulsar broker. | | -classname | The class name of a Pulsar Function.| | -client-auth-params | Client authentication parameter. | | -client-auth-plugin | Client authentication plugin using which function-process can connect to broker. 
| | -CPU | The CPU in cores that need to be allocated per function instance (applicable only to docker runtime).| | -custom-schema-inputs | The map of input topics to Schema class names (as a JSON string). | | -custom-serde-inputs | The map of input topics to SerDe class names (as a JSON string). | | -dead-letter-topic | The topic where all messages that were not processed successfully are sent. This parameter is not supported in Python Functions. | | -disk | The disk in bytes that need to be allocated per function instance (applicable only to docker runtime). | | -fqfn | The Fully Qualified Function Name (FQFN) for the function. | | -function-config-file | The path to a YAML config file specifying the configuration of a Pulsar Function. | | -go | Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -hostname-verification-enabled | Enable hostname verification. | false -inputs | The input topic or topics of a Pulsar Function (multiple topics can be specified as a comma-separated list). | | -jar | Path to the jar file for the function (if the function is written in Java). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -instance-id-offset | Start the instanceIds from this offset. | 0 -log-topic | The topic to which the logs a Pulsar Function are produced. | | -max-message-retries | How many times should we try to process a message before giving up. | | -name | The name of a Pulsar Function. | | -namespace | The namespace of a Pulsar Function. | | -output | The output topic of a Pulsar Function (If none is specified, no output is written). | | -output-serde-classname | The SerDe class to be used for messages output by the function. | | -parallelism | The parallelism factor of a Pulsar Function (i.e. the number of function instances to run). | | -processing-guarantees | The processing guarantees (delivery semantics) applied to the function. Available values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]. | ATLEAST_ONCE -py | Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -ram | The ram in bytes that need to be allocated per function instance (applicable only to process/docker runtime). | | -retain-ordering | Function consumes and processes messages in order. | | -schema-type | The builtin schema type or custom schema class name to be used for messages output by the function. | | -sliding-interval-count | The number of messages after which the window slides. | | -sliding-interval-duration-ms | The time duration after which the window slides. | | -subs-name | Pulsar source subscription name if user wants a specific subscription-name for the input-topic consumer. | | -tenant | The tenant of a Pulsar Function. | | -timeout-ms | The message timeout in milliseconds. | | -tls-allow-insecure | Allow insecure tls connection. | false -tls-trust-cert-path | tls trust cert file path. 
| | -topics-pattern | The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (only supported in Java Function). | | -use-tls | Use tls connection. | false -user-config | User-defined config key/values. | | -window-length-count | The number of messages per window. | | -window-length-duration-ms | The time duration of the window in milliseconds. | | - - -## create - -Create and deploy a Pulsar Function in cluster mode. - -Name | Description | Default ----|---|--- -auto-ack | Whether or not the framework acknowledges messages automatically. | true | -classname | The class name of a Pulsar Function. | | -CPU | The CPU in cores that need to be allocated per function instance (applicable only to docker runtime).| | -custom-runtime-options | A string that encodes options to customize the runtime, see docs for configured runtime for details | | -custom-schema-inputs | The map of input topics to Schema class names (as a JSON string). | | -custom-serde-inputs | The map of input topics to SerDe class names (as a JSON string). | | -dead-letter-topic | The topic where all messages that were not processed successfully are sent. This parameter is not supported in Python Functions. | | -disk | The disk in bytes that need to be allocated per function instance (applicable only to docker runtime). | | -fqfn | The Fully Qualified Function Name (FQFN) for the function. | | -function-config-file | The path to a YAML config file specifying the configuration of a Pulsar Function. | | -go | Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -inputs | The input topic or topics of a Pulsar Function (multiple topics can be specified as a comma-separated list). | | -jar | Path to the jar file for the function (if the function is written in Java). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -log-topic | The topic to which the logs of a Pulsar Function are produced. | | -max-message-retries | How many times should we try to process a message before giving up. | | -name | The name of a Pulsar Function. | | -namespace | The namespace of a Pulsar Function. | | -output | The output topic of a Pulsar Function (If none is specified, no output is written). | | -output-serde-classname | The SerDe class to be used for messages output by the function. | | -parallelism | The parallelism factor of a Pulsar Function (i.e. the number of function instances to run). | | -processing-guarantees | The processing guarantees (delivery semantics) applied to the function. Available values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]. | ATLEAST_ONCE -py | Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -ram | The ram in bytes that need to be allocated per function instance (applicable only to process/docker runtime). 
| | -retain-ordering | Function consumes and processes messages in order. | | -schema-type | The builtin schema type or custom schema class name to be used for messages output by the function. | | -sliding-interval-count | The number of messages after which the window slides. | | -sliding-interval-duration-ms | The time duration after which the window slides. | | -subs-name | Pulsar source subscription name if user wants a specific subscription-name for the input-topic consumer. | | -tenant | The tenant of a Pulsar Function. | | -timeout-ms | The message timeout in milliseconds. | | -topics-pattern | The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (only supported in Java Function). | | -user-config | User-defined config key/values. | | -window-length-count | The number of messages per window. | | -window-length-duration-ms | The time duration of the window in milliseconds. | | - -## delete - -Delete a Pulsar Function that is running on a Pulsar cluster. - -Name | Description | Default ----|---|--- -fqfn | The Fully Qualified Function Name (FQFN) for the function. | | -name | The name of a Pulsar Function. | | -namespace | The namespace of a Pulsar Function. | | -tenant | The tenant of a Pulsar Function. | | - -## update - -Update a Pulsar Function that has been deployed to a Pulsar cluster. - -Name | Description | Default ----|---|--- -auto-ack | Whether or not the framework acknowledges messages automatically. | true | -classname | The class name of a Pulsar Function. | | -CPU | The CPU in cores that need to be allocated per function instance (applicable only to docker runtime). | | -custom-runtime-options | A string that encodes options to customize the runtime, see docs for configured runtime for details | | -custom-schema-inputs | The map of input topics to Schema class names (as a JSON string). | | -custom-serde-inputs | The map of input topics to SerDe class names (as a JSON string). | | -dead-letter-topic | The topic where all messages that were not processed successfully are sent. This parameter is not supported in Python Functions. | | -disk | The disk in bytes that need to be allocated per function instance (applicable only to docker runtime). | | -fqfn | The Fully Qualified Function Name (FQFN) for the function. | | -function-config-file | The path to a YAML config file specifying the configuration of a Pulsar Function. | | -go | Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -inputs | The input topic or topics of a Pulsar Function (multiple topics can be specified as a comma-separated list). | | -jar | Path to the jar file for the function (if the function is written in Java). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -log-topic | The topic to which the logs of a Pulsar Function are produced. | | -max-message-retries | How many times should we try to process a message before giving up. | | -name | The name of a Pulsar Function. | | -namespace | The namespace of a Pulsar Function. 
| |
-output | The output topic of a Pulsar Function (if none is specified, no output is written). | |
-output-serde-classname | The SerDe class to be used for messages output by the function. | |
-parallelism | The parallelism factor of a Pulsar Function (i.e. the number of function instances to run). | |
-processing-guarantees | The processing guarantees (delivery semantics) applied to the function. Available values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]. | ATLEAST_ONCE
-py | Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | |
-ram | The ram in bytes that need to be allocated per function instance (applicable only to process/docker runtime). | |
-retain-ordering | Function consumes and processes messages in order. | |
-schema-type | The builtin schema type or custom schema class name to be used for messages output by the function. | |
-sliding-interval-count | The number of messages after which the window slides. | |
-sliding-interval-duration-ms | The time duration after which the window slides. | |
-subs-name | Pulsar source subscription name if user wants a specific subscription-name for the input-topic consumer. | |
-tenant | The tenant of a Pulsar Function. | |
-timeout-ms | The message timeout in milliseconds. | |
-topics-pattern | The topic pattern to consume from a list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add a SerDe class name for a pattern in --custom-serde-inputs (only supported in Java Functions). | |
-update-auth-data | Whether or not to update the auth data. | false
-user-config | User-defined config key/values. | |
-window-length-count | The number of messages per window. | |
-window-length-duration-ms | The time duration of the window in milliseconds. | |
-
-## get
-
-Fetch information about a Pulsar Function.
-
-Name | Description | Default
----|---|---
-fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
-name | The name of a Pulsar Function. | |
-namespace | The namespace of a Pulsar Function. | |
-tenant | The tenant of a Pulsar Function. | |
-
-## restart
-
-Restart a function instance.
-
-Name | Description | Default
----|---|---
-fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
-instance-id | The function instanceId (restarts all instances if instance-id is not provided). | |
-name | The name of a Pulsar Function. | |
-namespace | The namespace of a Pulsar Function. | |
-tenant | The tenant of a Pulsar Function. | |
-
-## stop
-
-Stop a function instance.
-
-Name | Description | Default
----|---|---
-fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
-instance-id | The function instanceId (stops all instances if instance-id is not provided). | |
-name | The name of a Pulsar Function. | |
-namespace | The namespace of a Pulsar Function. | |
-tenant | The tenant of a Pulsar Function. | |
-
-## start
-
-Start a stopped function instance.
-
-Name | Description | Default
----|---|---
-fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
-instance-id | The function instanceId (starts all instances if instance-id is not provided). | |
-name | The name of a Pulsar Function. | |
-namespace | The namespace of a Pulsar Function. | |
-tenant | The tenant of a Pulsar Function. | |
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/functions-debug.md b/site2/website/versioned_docs/version-2.8.1-deprecated/functions-debug.md
deleted file mode 100644
index e1d55ae0897aa5..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/functions-debug.md
+++ /dev/null
@@ -1,533 +0,0 @@
----
-id: functions-debug
-title: Debug Pulsar Functions
-sidebar_label: "How-to: Debug"
-original_id: functions-debug
----
-
-You can use the following methods to debug Pulsar Functions:
-
-* [Captured stderr](functions-debug.md#captured-stderr)
-* [Use unit test](functions-debug.md#use-unit-test)
-* [Debug with localrun mode](functions-debug.md#debug-with-localrun-mode)
-* [Use log topic](functions-debug.md#use-log-topic)
-* [Use Functions CLI](functions-debug.md#use-functions-cli)
-
-## Captured stderr
-
-Function startup information and captured stderr output is written to `logs/functions/<tenant>/<namespace>/<function>/<function>-<instance>.log`.
-
-This is useful for debugging why a function fails to start.
-
-## Use unit test
-
-A Pulsar Function is a function with inputs and outputs, so you can test a Pulsar Function the same way you test any other function.
-
-For example, if you have the following Pulsar Function:
-
-```java
-
-import java.util.function.Function;
-
-public class JavaNativeExclamationFunction implements Function<String, String> {
-   @Override
-   public String apply(String input) {
-       return String.format("%s!", input);
-   }
-}
-
-```
-
-You can write a simple unit test to test the Pulsar Function.
-
-:::tip
-
-Pulsar uses TestNG for testing.
-
-:::
-
-```java
-
-@Test
-public void testJavaNativeExclamationFunction() {
-   JavaNativeExclamationFunction exclamation = new JavaNativeExclamationFunction();
-   String output = exclamation.apply("foo");
-   Assert.assertEquals(output, "foo!");
-}
-
-```
-
-The following Pulsar Function implements the `org.apache.pulsar.functions.api.Function` interface.
-
-```java
-
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-
-public class ExclamationFunction implements Function<String, String> {
-   @Override
-   public String process(String input, Context context) {
-       return String.format("%s!", input);
-   }
-}
-
-```
-
-In this situation, you can write a unit test for this function as well. Remember to mock the `Context` parameter. The following is an example.
-
-:::tip
-
-Pulsar uses TestNG for testing.
-
-:::
-
-```java
-
-@Test
-public void testExclamationFunction() {
-   ExclamationFunction exclamation = new ExclamationFunction();
-   String output = exclamation.process("foo", mock(Context.class));
-   Assert.assertEquals(output, "foo!");
-}
-
-```
-
-## Debug with localrun mode
-When you run a Pulsar Function in localrun mode, it launches an instance of the Function on your local machine as a thread.
-
-In this mode, a Pulsar Function consumes and produces actual data to a Pulsar cluster, and mirrors how the function actually runs in a Pulsar cluster.
-
-:::note
-
-Currently, debugging with localrun mode is only supported by Pulsar Functions written in Java. You need Pulsar version 2.4.0 or later to do the following. Even though localrun is available in versions earlier than Pulsar 2.4.0, you cannot debug with localrun mode programmatically or run Functions as threads.
-
-:::
-
-You can launch your function in the following manner.
-
-```java
-
-FunctionConfig functionConfig = new FunctionConfig();
-functionConfig.setName(functionName);
-functionConfig.setInputs(Collections.singleton(sourceTopic));
-functionConfig.setClassName(ExclamationFunction.class.getName());
-functionConfig.setRuntime(FunctionConfig.Runtime.JAVA);
-functionConfig.setOutput(sinkTopic);
-
-LocalRunner localRunner = LocalRunner.builder().functionConfig(functionConfig).build();
-localRunner.start(true);
-
-```
-
-This way, you can easily debug functions using an IDE. Set breakpoints and manually step through a function to debug with real data.
-
-The following example illustrates how to programmatically launch a function in localrun mode.
-
-```java
-
-public class ExclamationFunction implements Function<String, String> {
-
-    @Override
-    public String process(String s, Context context) throws Exception {
-        return s + "!";
-    }
-
-    public static void main(String[] args) throws Exception {
-        FunctionConfig functionConfig = new FunctionConfig();
-        functionConfig.setName("exclamation");
-        functionConfig.setInputs(Collections.singleton("input"));
-        functionConfig.setClassName(ExclamationFunction.class.getName());
-        functionConfig.setRuntime(FunctionConfig.Runtime.JAVA);
-        functionConfig.setOutput("output");
-
-        LocalRunner localRunner = LocalRunner.builder().functionConfig(functionConfig).build();
-        localRunner.start(false);
-    }
-}
-
-```
-
-To use localrun mode programmatically, add the following dependency.
-
-```xml
-
-<dependency>
-   <groupId>org.apache.pulsar</groupId>
-   <artifactId>pulsar-functions-local-runner</artifactId>
-   <version>${pulsar.version}</version>
-</dependency>
-
-```
-
-For complete code samples, see [here](https://github.com/jerrypeng/pulsar-functions-demos/tree/master/debugging).
-
-:::note
-
-Debugging with localrun mode for Pulsar Functions written in other languages will be supported soon.
-
-:::
-
-## Use log topic
-
-In Pulsar Functions, you can send log information generated in functions to a specified log topic. You can configure consumers to consume messages from that log topic to check the log information.
-
-![Pulsar Functions core programming model](/assets/pulsar-functions-overview.png)
-
-**Example**
-
-```java
-
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-import org.slf4j.Logger;
-
-public class LoggingFunction implements Function<String, Void> {
-    @Override
-    public Void process(String input, Context context) {
-        Logger LOG = context.getLogger();
-        String messageId = new String(context.getMessageId());
-
-        if (input.contains("danger")) {
-            LOG.warn("A warning was received in message {}", messageId);
-        } else {
-            LOG.info("Message {} received\nContent: {}", messageId, input);
-        }
-
-        return null;
-    }
-}
-
-```
-
-As shown in the example above, you can get the logger via `context.getLogger()` and assign it to an `slf4j` `Logger` variable (`LOG` in this example), so you can define your desired log information in the function using that variable. Meanwhile, you need to specify the topic to which the log information is produced.
-
-**Example**
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --log-topic persistent://public/default/logging-function-logs \
-  # Other function configs
-
-```
-
-## Use Functions CLI
-
-With [Pulsar Functions CLI](reference-pulsar-admin.md#functions), you can debug Pulsar Functions with the following subcommands:
-
-* `get`
-* `status`
-* `stats`
-* `list`
-* `trigger`
-
-:::tip
-
-For complete commands of the **Pulsar Functions CLI**, see [here](reference-pulsar-admin.md#functions).
-
-:::
-
-### `get`
-
-Get information about a Pulsar Function.
- -**Usage** - -```bash - -$ pulsar-admin functions get options - -``` - -**Options** - -|Flag|Description -|---|--- -|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function. -|`--name`|The name of a Pulsar Function. -|`--namespace`|The namespace of a Pulsar Function. -|`--tenant`|The tenant of a Pulsar Function. - -:::tip - -`--fqfn` consists of `--name`, `--namespace` and `--tenant`, so you can specify either `--fqfn` or `--name`, `--namespace` and `--tenant`. - -::: - -**Example** - -You can specify `--fqfn` to get information about a Pulsar Function. - -```bash - -$ ./bin/pulsar-admin functions get public/default/ExclamationFunctio6 - -``` - -Optionally, you can specify `--name`, `--namespace` and `--tenant` to get information about a Pulsar Function. - -```bash - -$ ./bin/pulsar-admin functions get \ - --tenant public \ - --namespace default \ - --name ExclamationFunctio6 - -``` - -As shown below, the `get` command shows input, output, runtime, and other information about the _ExclamationFunctio6_ function. - -```json - -{ - "tenant": "public", - "namespace": "default", - "name": "ExclamationFunctio6", - "className": "org.example.test.ExclamationFunction", - "inputSpecs": { - "persistent://public/default/my-topic-1": { - "isRegexPattern": false - } - }, - "output": "persistent://public/default/test-1", - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "userConfig": {}, - "runtime": "JAVA", - "autoAck": true, - "parallelism": 1 -} - -``` - -### `status` - -Check the current status of a Pulsar Function. - -**Usage** - -```bash - -$ pulsar-admin functions status options - -``` - -**Options** - -|Flag|Description -|---|--- -|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function. -|`--instance-id`|The instance ID of a Pulsar Function
    If the `--instance-id` is not specified, it gets the IDs of all instances.
    -|`--name`|The name of a Pulsar Function. -|`--namespace`|The namespace of a Pulsar Function. -|`--tenant`|The tenant of a Pulsar Function. - -**Example** - -```bash - -$ ./bin/pulsar-admin functions status \ - --tenant public \ - --namespace default \ - --name ExclamationFunctio6 \ - -``` - -As shown below, the `status` command shows the number of instances, running instances, the instance running under the _ExclamationFunctio6_ function, received messages, successfully processed messages, system exceptions, the average latency and so on. - -```json - -{ - "numInstances" : 1, - "numRunning" : 1, - "instances" : [ { - "instanceId" : 0, - "status" : { - "running" : true, - "error" : "", - "numRestarts" : 0, - "numReceived" : 1, - "numSuccessfullyProcessed" : 1, - "numUserExceptions" : 0, - "latestUserExceptions" : [ ], - "numSystemExceptions" : 0, - "latestSystemExceptions" : [ ], - "averageLatency" : 0.8385, - "lastInvocationTime" : 1557734137987, - "workerId" : "c-standalone-fw-23ccc88ef29b-8080" - } - } ] -} - -``` - -### `stats` - -Get the current stats of a Pulsar Function. - -**Usage** - -```bash - -$ pulsar-admin functions stats options - -``` - -**Options** - -|Flag|Description -|---|--- -|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function. -|`--instance-id`|The instance ID of a Pulsar Function.
    If the `--instance-id` is not specified, it gets the IDs of all instances.
    -|`--name`|The name of a Pulsar Function. -|`--namespace`|The namespace of a Pulsar Function. -|`--tenant`|The tenant of a Pulsar Function. - -**Example** - -```bash - -$ ./bin/pulsar-admin functions stats \ - --tenant public \ - --namespace default \ - --name ExclamationFunctio6 \ - -``` - -The output is shown as follows: - -```json - -{ - "receivedTotal" : 1, - "processedSuccessfullyTotal" : 1, - "systemExceptionsTotal" : 0, - "userExceptionsTotal" : 0, - "avgProcessLatency" : 0.8385, - "1min" : { - "receivedTotal" : 0, - "processedSuccessfullyTotal" : 0, - "systemExceptionsTotal" : 0, - "userExceptionsTotal" : 0, - "avgProcessLatency" : null - }, - "lastInvocation" : 1557734137987, - "instances" : [ { - "instanceId" : 0, - "metrics" : { - "receivedTotal" : 1, - "processedSuccessfullyTotal" : 1, - "systemExceptionsTotal" : 0, - "userExceptionsTotal" : 0, - "avgProcessLatency" : 0.8385, - "1min" : { - "receivedTotal" : 0, - "processedSuccessfullyTotal" : 0, - "systemExceptionsTotal" : 0, - "userExceptionsTotal" : 0, - "avgProcessLatency" : null - }, - "lastInvocation" : 1557734137987, - "userMetrics" : { } - } - } ] -} - -``` - -### `list` - -List all Pulsar Functions running under a specific tenant and namespace. - -**Usage** - -```bash - -$ pulsar-admin functions list options - -``` - -**Options** - -|Flag|Description -|---|--- -|`--namespace`|The namespace of a Pulsar Function. -|`--tenant`|The tenant of a Pulsar Function. - -**Example** - -```bash - -$ ./bin/pulsar-admin functions list \ - --tenant public \ - --namespace default - -``` - -As shown below, the `list` command returns three functions running under the _public_ tenant and the _default_ namespace. - -```text - -ExclamationFunctio1 -ExclamationFunctio2 -ExclamationFunctio3 - -``` - -### `trigger` - -Trigger a specified Pulsar Function with a supplied value. This command simulates the execution process of a Pulsar Function and verifies it. - -**Usage** - -```bash - -$ pulsar-admin functions trigger options - -``` - -**Options** - -|Flag|Description -|---|--- -|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function. -|`--name`|The name of a Pulsar Function. -|`--namespace`|The namespace of a Pulsar Function. -|`--tenant`|The tenant of a Pulsar Function. -|`--topic`|The topic name that a Pulsar Function consumes from. -|`--trigger-file`|The path to a file that contains the data to trigger a Pulsar Function. -|`--trigger-value`|The value to trigger a Pulsar Function. - -**Example** - -```bash - -$ ./bin/pulsar-admin functions trigger \ - --tenant public \ - --namespace default \ - --name ExclamationFunctio6 \ - --topic persistent://public/default/my-topic-1 \ - --trigger-value "hello pulsar functions" - -``` - -As shown below, the `trigger` command returns the following result: - -```text - -This is my function! - -``` - -:::note - -You must specify the [entire topic name](getting-started-pulsar.md#topic-names) when using the `--topic` option. Otherwise, the following error occurs. 
- -```text - -Function in trigger function has unidentified topic -Reason: Function in trigger function has unidentified topic - -``` - -::: - diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/functions-deploy.md b/site2/website/versioned_docs/version-2.8.1-deprecated/functions-deploy.md deleted file mode 100644 index 9c47208f68fa0c..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/functions-deploy.md +++ /dev/null @@ -1,262 +0,0 @@ ---- -id: functions-deploy -title: Deploy Pulsar Functions -sidebar_label: "How-to: Deploy" -original_id: functions-deploy ---- - -## Requirements - -To deploy and manage Pulsar Functions, you need to have a Pulsar cluster running. There are several options for this: - -* You can run a [standalone cluster](getting-started-standalone.md) locally on your own machine. -* You can deploy a Pulsar cluster on [Kubernetes](deploy-kubernetes.md), [Amazon Web Services](deploy-aws.md), [bare metal](deploy-bare-metal.md), [DC/OS](https://dcos.io/), and more. - -If you run a non-[standalone](reference-terminology.md#standalone) cluster, you need to obtain the service URL for the cluster. How you obtain the service URL depends on how you deploy your Pulsar cluster. - -If you want to deploy and trigger Python user-defined functions, you need to install [the pulsar python client](http://pulsar.apache.org/docs/en/client-libraries-python/) on all the machines running [functions workers](functions-worker.md). - -## Command-line interface - -Pulsar Functions are deployed and managed using the [`pulsar-admin functions`](reference-pulsar-admin.md#functions) interface, which contains commands such as [`create`](reference-pulsar-admin.md#functions-create) for deploying functions in [cluster mode](#cluster-mode), [`trigger`](reference-pulsar-admin.md#trigger) for [triggering](#triggering-pulsar-functions) functions, [`list`](reference-pulsar-admin.md#list-2) for listing deployed functions. - -To learn more commands, refer to [`pulsar-admin functions`](reference-pulsar-admin.md#functions). - -### Default arguments - -When managing Pulsar Functions, you need to specify a variety of information about functions, including tenant, namespace, input and output topics, and so on. However, some parameters have default values if you do not specify values for them. The following table lists the default values. - -Parameter | Default -:---------|:------- -Function name | You can specify any value for the class name (except org, library, or similar class names). For example, when you specify the flag `--classname org.example.MyFunction`, the function name is `MyFunction`. -Tenant | Derived from names of the input topics. If the input topics are under the `marketing` tenant, which means the topic names have the form `persistent://marketing/{namespace}/{topicName}`, the tenant is `marketing`. -Namespace | Derived from names of the input topics. If the input topics are under the `asia` namespace under the `marketing` tenant, which means the topic names have the form `persistent://marketing/asia/{topicName}`, then the namespace is `asia`. -Output topic | `{input topic}-{function name}-output`. For example, if an input topic name of a function is `incoming`, and the function name is `exclamation`, then the name of the output topic is `incoming-exclamation-output`. 
-Subscription type | For `at-least-once` and `at-most-once` [processing guarantees](functions-overview.md#processing-guarantees), the [`SHARED`](concepts-messaging.md#shared) mode is applied by default; for `effectively-once` guarantees, the [`FAILOVER`](concepts-messaging.md#failover) mode is applied. -Processing guarantees | [`ATLEAST_ONCE`](functions-overview.md#processing-guarantees) -Pulsar service URL | `pulsar://localhost:6650` - -### Example of default arguments - -Take the `create` command as an example. - -```bash - -$ bin/pulsar-admin functions create \ - --jar my-pulsar-functions.jar \ - --classname org.example.MyFunction \ - --inputs my-function-input-topic1,my-function-input-topic2 - -``` - -The function has default values for the function name (`MyFunction`), tenant (`public`), namespace (`default`), subscription type (`SHARED`), processing guarantees (`ATLEAST_ONCE`), and Pulsar service URL (`pulsar://localhost:6650`). - -## Local run mode - -If you run a Pulsar Function in **local run** mode, it runs on the machine from which you enter the commands (on your laptop, an [AWS EC2](https://aws.amazon.com/ec2/) instance, and so on). The following is a [`localrun`](reference-pulsar-admin.md#localrun) command example. - -```bash - -$ bin/pulsar-admin functions localrun \ - --py myfunc.py \ - --classname myfunc.SomeFunction \ - --inputs persistent://public/default/input-1 \ - --output persistent://public/default/output-1 - -``` - -By default, the function connects to a Pulsar cluster running on the same machine, via a local [broker](reference-terminology.md#broker) service URL of `pulsar://localhost:6650`. If you use local run mode to run a function but connect it to a non-local Pulsar cluster, you can specify a different broker URL using the `--brokerServiceUrl` flag. The following is an example. - -```bash - -$ bin/pulsar-admin functions localrun \ - --broker-service-url pulsar://my-cluster-host:6650 \ - # Other function parameters - -``` - -## Cluster mode - -When you run a Pulsar Function in **cluster** mode, the function code is uploaded to a Pulsar broker and runs *alongside the broker* rather than in your [local environment](#local-run-mode). You can run a function in cluster mode using the [`create`](reference-pulsar-admin.md#create-1) command. - -```bash - -$ bin/pulsar-admin functions create \ - --py myfunc.py \ - --classname myfunc.SomeFunction \ - --inputs persistent://public/default/input-1 \ - --output persistent://public/default/output-1 - -``` - -### Update functions in cluster mode - -You can use the [`update`](reference-pulsar-admin.md#update-1) command to update a Pulsar Function running in cluster mode. The following command updates the function created in the [cluster mode](#cluster-mode) section. - -```bash - -$ bin/pulsar-admin functions update \ - --py myfunc.py \ - --classname myfunc.SomeFunction \ - --inputs persistent://public/default/new-input-topic \ - --output persistent://public/default/new-output-topic - -``` - -### Parallelism - -Pulsar Functions run as processes or threads, which are called **instances**. When you run a Pulsar Function, it runs as a single instance by default. With one localrun command, you can only run a single instance of a function. If you want to run multiple instances, you can use localrun command multiple times. - -When you create a function, you can specify the *parallelism* of a function (the number of instances to run). 
You can set the parallelism factor using the `--parallelism` flag of the [`create`](reference-pulsar-admin.md#functions-create) command. - -```bash - -$ bin/pulsar-admin functions create \ - --parallelism 3 \ - # Other function info - -``` - -You can adjust the parallelism of an already created function using the [`update`](reference-pulsar-admin.md#update-1) interface. - -```bash - -$ bin/pulsar-admin functions update \ - --parallelism 5 \ - # Other function - -``` - -If you specify a function configuration via YAML, use the `parallelism` parameter. The following is a config file example. - -```yaml - -# function-config.yaml -parallelism: 3 -inputs: -- persistent://public/default/input-1 -output: persistent://public/default/output-1 -# other parameters - -``` - -The following is corresponding update command. - -```bash - -$ bin/pulsar-admin functions update \ - --function-config-file function-config.yaml - -``` - -### Function instance resources - -When you run Pulsar Functions in [cluster mode](#cluster-mode), you can specify the resources that are assigned to each function [instance](#parallelism). - -Resource | Specified as | Runtimes -:--------|:----------------|:-------- -CPU | The number of cores | Kubernetes -RAM | The number of bytes | Process, Docker -Disk space | The number of bytes | Docker - -The following function creation command allocates 8 cores, 8 GB of RAM, and 10 GB of disk space to a function. - -```bash - -$ bin/pulsar-admin functions create \ - --jar target/my-functions.jar \ - --classname org.example.functions.MyFunction \ - --cpu 8 \ - --ram 8589934592 \ - --disk 10737418240 - -``` - -> #### Resources are *per instance* -> The resources that you apply to a given Pulsar Function are applied to each instance of the function. For example, if you apply 8 GB of RAM to a function with a parallelism of 5, you are applying 40 GB of RAM for the function in total. Make sure that you take the parallelism (the number of instances) factor into your resource calculations. - -### Use Package management service - -Package management enables version management and simplifies the upgrade and rollback processes for Functions, Sinks, and Sources. When you use the same function, sink and source in different namespaces, you can upload them to a common package management system. - -To use [Package management service](admin-api-packages.md), ensure that the package management service has been enabled in your cluster by setting the following properties in `broker.conf`. - -> Note: Package management service is not enabled by default. - -```yaml - -enablePackagesManagement=true -packagesManagementStorageProvider=org.apache.pulsar.packages.management.storage.bookkeeper.BookKeeperPackagesStorageProvider -packagesReplicas=1 -packagesManagementLedgerRootPath=/ledgers - -``` - -With Package management service enabled, you can upload your function packages by [upload a package](admin-api-packages.md#upload-a-package) to the service and get the [package URL](admin-api-packages.md#package-url). - -When you have a ready to use package URL, you can create the function with package URL by setting `--jar`, `--py`, or `--go` to the package URL with `pulsar-admin functions create`. - -## Trigger Pulsar Functions - -If a Pulsar Function is running in [cluster mode](#cluster-mode), you can **trigger** it at any time using the command line. Triggering a function means that you send a message with a specific value to the function and get the function output (if any) via the command line. 
## Trigger Pulsar Functions

If a Pulsar Function is running in [cluster mode](#cluster-mode), you can **trigger** it at any time using the command line. Triggering a function means that you send a message with a specific value to the function and get the function output (if any) via the command line.

> Triggering a function is to invoke a function by producing a message on one of the input topics. With the [`pulsar-admin functions trigger`](reference-pulsar-admin.md#trigger) command, you can send messages to functions without using the [`pulsar-client`](reference-cli-tools.md#pulsar-client) tool or a language-specific client library.

To learn how to trigger a function, you can start with a Python function that returns a simple string based on the input.

```python

# myfunc.py
def process(input):
    return "This function has been triggered with a value of {0}".format(input)

```

You can run the function in [cluster mode](#cluster-mode).

```bash

$ bin/pulsar-admin functions create \
  --tenant public \
  --namespace default \
  --name myfunc \
  --py myfunc.py \
  --classname myfunc \
  --inputs persistent://public/default/in \
  --output persistent://public/default/out

```

Then assign a consumer to listen on the output topic for messages from the `myfunc` function with the [`pulsar-client consume`](reference-cli-tools.md#consume) command.

```bash

$ bin/pulsar-client consume persistent://public/default/out \
  --subscription-name my-subscription \
  --num-messages 0 # Listen indefinitely

```

And then you can trigger the function.

```bash

$ bin/pulsar-admin functions trigger \
  --tenant public \
  --namespace default \
  --name myfunc \
  --trigger-value "hello world"

```

The consumer listening on the output topic then prints something like the following in its log.

```

----- got message -----
This function has been triggered with a value of hello world

```

> #### Topic info is not required
> In the `trigger` command, you only need to specify basic information about the function (tenant, namespace, and name). To trigger the function, you do not need to know the function input topics.
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/functions-develop.md b/site2/website/versioned_docs/version-2.8.1-deprecated/functions-develop.md
deleted file mode 100644
index 2e29aa1c474005..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/functions-develop.md
+++ /dev/null
@@ -1,1600 +0,0 @@
----
-id: functions-develop
-title: Develop Pulsar Functions
-sidebar_label: "How-to: Develop"
-original_id: functions-develop
----

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````


This guide describes how to develop Pulsar Functions with different APIs for Java, Python, and Go.

## Available APIs
In Java and Python, you have two options to write Pulsar Functions. In Go, you can use the Pulsar Functions SDK for Go.

Interface | Description | Use cases
:---------|:------------|:---------
Language-native interface | No Pulsar-specific libraries or special dependencies required (only core libraries from Java/Python). | Functions that do not require access to the function [context](#context).
Pulsar Function SDK for Java/Python/Go | Pulsar-specific libraries that provide a range of functionality not provided by "native" interfaces. | Functions that require access to the function [context](#context).

The language-native function, which adds an exclamation point to all incoming strings and publishes the resulting string to a topic, has no external dependencies. The following example is a language-native function.
-

````mdx-code-block


```Java

import java.util.function.Function;

public class JavaNativeExclamationFunction implements Function<String, String> {
    @Override
    public String apply(String input) {
        return String.format("%s!", input);
    }
}

```

For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/JavaNativeExclamationFunction.java).



```python

def process(input):
    return "{}!".format(input)

```

For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/python-examples/native_exclamation_function.py).

:::note

You can write Pulsar Functions in python2 or python3. However, Pulsar only looks for `python` as the interpreter.
If you're running Pulsar Functions on an Ubuntu system that only supports python3, you might fail to
start the functions. In this case, you can create a symlink from `python` to `python3`, as shown below. Note that the symlink may break your system if
you subsequently install any other package that depends on Python 2.x. A solution is under development in [Issue 5518](https://github.com/apache/pulsar/issues/5518).

```bash

sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 10

```

:::



````

The following example uses the Pulsar Functions SDK.
````mdx-code-block


```Java

import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;

public class ExclamationFunction implements Function<String, String> {
    @Override
    public String process(String input, Context context) {
        return String.format("%s!", input);
    }
}

```

For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/ExclamationFunction.java).



```python

from pulsar import Function

class ExclamationFunction(Function):
  def __init__(self):
    pass

  def process(self, input, context):
    return input + '!'

```

For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/python-examples/exclamation_function.py).



```Go

package main

import (
	"context"
	"fmt"

	"github.com/apache/pulsar/pulsar-function-go/pf"
)

func HandleRequest(ctx context.Context, in []byte) error {
	fmt.Println(string(in) + "!")
	return nil
}

func main() {
	pf.Start(HandleRequest)
}

```

For complete code, see [here](https://github.com/apache/pulsar/blob/77cf09eafa4f1626a53a1fe2e65dd25f377c1127/pulsar-function-go/examples/inputFunc/inputFunc.go#L20-L36).



````

## Schema registry
Pulsar has a built-in schema registry and is bundled with popular schema types, such as Avro, JSON and Protobuf. Pulsar Functions can leverage the existing schema information from input topics and derive the input type. The schema registry applies to the output topic as well.
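For example, when the input and output topics carry a JSON or Avro schema for a user-defined type, an SDK function can declare that type directly and let Pulsar handle serialization. The following is a minimal sketch, assuming a hypothetical `SensorReading` POJO and topics that use its JSON schema (the class and topic names are illustrative, not part of the Pulsar API):

```java

import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;

// Hypothetical POJO; with a JSON/Avro schema on the topics, Pulsar derives the type.
class SensorReading {
    private String sensorId;
    private double temperature;

    public SensorReading() {}

    public String getSensorId() { return sensorId; }
    public void setSensorId(String sensorId) { this.sensorId = sensorId; }
    public double getTemperature() { return temperature; }
    public void setTemperature(double temperature) { this.temperature = temperature; }
}

// The function is typed on the POJO, so no manual SerDe code is needed.
public class ToFahrenheitFunction implements Function<SensorReading, SensorReading> {
    @Override
    public SensorReading process(SensorReading input, Context context) {
        SensorReading out = new SensorReading();
        out.setSensorId(input.getSensorId());
        // Convert the Celsius reading to Fahrenheit before republishing
        out.setTemperature(input.getTemperature() * 9 / 5 + 32);
        return out;
    }
}

```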
## SerDe
SerDe stands for **Ser**ialization and **De**serialization. Pulsar Functions use SerDe when publishing data to and consuming data from Pulsar topics. How SerDe works by default depends on the language you use for a particular function.

````mdx-code-block


When you write Pulsar Functions in Java, the following basic Java types are built in and supported by default: `String`, `Double`, `Integer`, `Float`, `Long`, `Short`, and `Byte`.

To customize Java types, you need to implement the following interface.
-

```java

public interface SerDe<T> {
    T deserialize(byte[] input);
    byte[] serialize(T input);
}

```

SerDe works in the following ways in Java Functions.
- If the input and output topics have schema, Pulsar Functions use schema for SerDe.
- If the input or output topics do not exist, Pulsar Functions adopt the following rules to determine SerDe:
  - If the schema type is specified, Pulsar Functions use the specified schema type.
  - If SerDe is specified, Pulsar Functions use the specified SerDe, and the schema type for input and output topics is `Byte`.
  - If neither the schema type nor SerDe is specified, Pulsar Functions use the built-in SerDe. For non-primitive schema types, the built-in SerDe serializes and deserializes objects in the `JSON` format.



In Python, the default SerDe is identity, meaning that the type is serialized as whatever type the producer function returns.

You can specify the SerDe when [creating](functions-deploy.md#cluster-mode) or [running](functions-deploy.md#local-run-mode) functions.

```bash

$ bin/pulsar-admin functions create \
  --tenant public \
  --namespace default \
  --name my_function \
  --py my_function.py \
  --classname my_function.MyFunction \
  --custom-serde-inputs '{"input-topic-1":"Serde1","input-topic-2":"Serde2"}' \
  --output-serde-classname Serde3 \
  --output output-topic-1

```

This case contains two input topics: `input-topic-1` and `input-topic-2`, each of which is mapped to a different SerDe class (the map must be specified as a JSON string). The output topic, `output-topic-1`, uses the `Serde3` class for SerDe. At the moment, all Pulsar Functions logic, including the processing function and SerDe classes, must be contained within a single Python file.

When using Pulsar Functions for Python, you have three SerDe options:

1. You can use the [`IdentitySerDe`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L70), which leaves the data unchanged. The `IdentitySerDe` is the **default**. Creating or running a function without explicitly specifying SerDe means that this option is used.
2. You can use the [`PickleSerDe`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L62), which uses Python [`pickle`](https://docs.python.org/3/library/pickle.html) for SerDe.
3. You can create a custom SerDe class by implementing the baseline [`SerDe`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L50) class, which has just two methods: [`serialize`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L53) for converting the object into bytes, and [`deserialize`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L58) for converting bytes into an object of the required application-specific type.

The table below shows when you should use each SerDe.

SerDe option | When to use
:------------|:-----------
`IdentitySerDe` | When you work with simple types like strings, Booleans, integers.
`PickleSerDe` | When you work with complex, application-specific types and are comfortable with the "best effort" approach of `pickle`.
Custom SerDe | When you require explicit control over SerDe, potentially for performance or data compatibility purposes.



Currently, the feature is not available in Go.
-



````

### Example
Imagine that you're writing Pulsar Functions to process tweet objects. You can refer to the following example of a `Tweet` class.

````mdx-code-block


```java

public class Tweet {
    private String username;
    private String tweetContent;

    public Tweet(String username, String tweetContent) {
        this.username = username;
        this.tweetContent = tweetContent;
    }

    // Standard setters and getters
}

```

To pass `Tweet` objects directly between Pulsar Functions, you need to provide a custom SerDe class. In the example below, `Tweet` objects are basically strings in which the username and tweet content are separated by a `|`.

```java

package com.example.serde;

import org.apache.pulsar.functions.api.SerDe;

import java.util.regex.Pattern;

public class TweetSerde implements SerDe<Tweet> {
    public Tweet deserialize(byte[] input) {
        String s = new String(input);
        String[] fields = s.split(Pattern.quote("|"));
        return new Tweet(fields[0], fields[1]);
    }

    public byte[] serialize(Tweet input) {
        return String.format("%s|%s", input.getUsername(), input.getTweetContent()).getBytes();
    }
}

```

To apply this customized SerDe to a particular Pulsar Function, you need to:

* Package the `Tweet` and `TweetSerde` classes into a JAR.
* Specify a path to the JAR and SerDe class name when deploying the function.

The following is an example of the [`create`](reference-pulsar-admin.md#create-1) operation.

```bash

$ bin/pulsar-admin functions create \
  --jar /path/to/your.jar \
  --output-serde-classname com.example.serde.TweetSerde \
  # Other function attributes

```

> #### Custom SerDe classes must be packaged with your function JARs
> Pulsar does not store your custom SerDe classes separately from your Pulsar Functions. So you need to include your SerDe classes in your function JARs. If not, Pulsar returns an error.



```python

class Tweet(object):
    def __init__(self, username, tweet_content):
        self.username = username
        self.tweet_content = tweet_content

```

In order to use this class in Pulsar Functions, you have two options:

1. You can specify `PickleSerDe`, which applies the [`pickle`](https://docs.python.org/3/library/pickle.html) library SerDe.
2. You can create your own SerDe class. The following is an example.

   ```python

   from pulsar import SerDe

   class TweetSerDe(SerDe):

       def serialize(self, input):
           return "{0}|{1}".format(input.username, input.tweet_content).encode('utf-8')

       def deserialize(self, input_bytes):
           tweet_components = input_bytes.decode('utf-8').split('|')
           return Tweet(tweet_components[0], tweet_components[1])

   ```

For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/python-examples/custom_object_function.py).



````

In both languages, however, you can write custom SerDe logic for more complex, application-specific types.

## Context
The Java, Python and Go SDKs provide access to a **context object** that can be used by a function. This context object provides a wide variety of information and functionality to the function.

* The name and ID of a Pulsar Function.
* The message ID of each message. Each Pulsar message is automatically assigned an ID.
* The key, event time, properties and partition key of each message.
* The name of the topic to which the message is sent.
* The names of all input topics as well as the output topic associated with the function.
* The name of the class used for [SerDe](#serde).
* The [tenant](reference-terminology.md#tenant) and namespace associated with the function.
* The ID of the Pulsar Functions instance running the function.
* The version of the function.
* The [logger object](functions-develop.md#logger) used by the function, which can be used to create function log messages.
* Access to arbitrary [user configuration](#user-config) values supplied via the CLI.
* An interface for recording [metrics](#metrics).
* An interface for storing and retrieving state in [state storage](#state-storage).
* A function to publish new messages onto arbitrary topics.
* A function to ack the message being processed (if auto-ack is disabled).
* (Java) Access to the Pulsar admin client.

````mdx-code-block


The [Context](https://github.com/apache/pulsar/blob/master/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/Context.java) interface provides a number of methods that you can use to access the function [context](#context). The various method signatures for the `Context` interface are listed as follows.

```java

public interface Context {
    Record<?> getCurrentRecord();
    Collection<String> getInputTopics();
    String getOutputTopic();
    String getOutputSchemaType();
    String getTenant();
    String getNamespace();
    String getFunctionName();
    String getFunctionId();
    int getInstanceId();
    String getFunctionVersion();
    Logger getLogger();
    void incrCounter(String key, long amount);
    CompletableFuture<Void> incrCounterAsync(String key, long amount);
    long getCounter(String key);
    CompletableFuture<Long> getCounterAsync(String key);
    void putState(String key, ByteBuffer value);
    CompletableFuture<Void> putStateAsync(String key, ByteBuffer value);
    void deleteState(String key);
    ByteBuffer getState(String key);
    CompletableFuture<ByteBuffer> getStateAsync(String key);
    Map<String, Object> getUserConfigMap();
    Optional<Object> getUserConfigValue(String key);
    Object getUserConfigValueOrDefault(String key, Object defaultValue);
    void recordMetric(String metricName, double value);
    <O> CompletableFuture<Void> publish(String topicName, O object, String schemaOrSerdeClassName);
    <O> CompletableFuture<Void> publish(String topicName, O object);
    <O> TypedMessageBuilder<O> newOutputMessage(String topicName, Schema<O> schema) throws PulsarClientException;
    <O> ConsumerBuilder<O> newConsumerBuilder(Schema<O> schema) throws PulsarClientException;
    PulsarAdmin getPulsarAdmin();
    PulsarAdmin getPulsarAdmin(String clusterName);
}

```

The following example uses several methods available via the `Context` object.

```java

import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;
import org.slf4j.Logger;

import java.util.stream.Collectors;

public class ContextFunction implements Function<String, Void> {
    public Void process(String input, Context context) {
        Logger LOG = context.getLogger();
        String inputTopics = context.getInputTopics().stream().collect(Collectors.joining(", "));
        String functionName = context.getFunctionName();

        String logMessage = String.format("A message with a value of \"%s\" has arrived on one of the following topics: %s\n",
                input,
                inputTopics);

        LOG.info(logMessage);

        String metricName = String.format("function-%s-messages-received", functionName);
        context.recordMetric(metricName, 1);

        return null;
    }
}

```



```

class ContextImpl(pulsar.Context):
    def get_message_id(self):
        ...
    def get_message_key(self):
        ...
    def get_message_eventtime(self):
        ...
    def get_message_properties(self):
        ...
    def get_current_message_topic_name(self):
        ...
    def get_partition_key(self):
        ...
- def get_function_name(self): - ... - def get_function_tenant(self): - ... - def get_function_namespace(self): - ... - def get_function_id(self): - ... - def get_instance_id(self): - ... - def get_function_version(self): - ... - def get_logger(self): - ... - def get_user_config_value(self, key): - ... - def get_user_config_map(self): - ... - def record_metric(self, metric_name, metric_value): - ... - def get_input_topics(self): - ... - def get_output_topic(self): - ... - def get_output_serde_class_name(self): - ... - def publish(self, topic_name, message, serde_class_name="serde.IdentitySerDe", - properties=None, compression_type=None, callback=None, message_conf=None): - ... - def ack(self, msgid, topic): - ... - def get_and_reset_metrics(self): - ... - def reset_metrics(self): - ... - def get_metrics(self): - ... - def incr_counter(self, key, amount): - ... - def get_counter(self, key): - ... - def del_counter(self, key): - ... - def put_state(self, key, value): - ... - def get_state(self, key): - ... - -``` - - - - -``` - -func (c *FunctionContext) GetInstanceID() int { - return c.instanceConf.instanceID -} - -func (c *FunctionContext) GetInputTopics() []string { - return c.inputTopics -} - -func (c *FunctionContext) GetOutputTopic() string { - return c.instanceConf.funcDetails.GetSink().Topic -} - -func (c *FunctionContext) GetFuncTenant() string { - return c.instanceConf.funcDetails.Tenant -} - -func (c *FunctionContext) GetFuncName() string { - return c.instanceConf.funcDetails.Name -} - -func (c *FunctionContext) GetFuncNamespace() string { - return c.instanceConf.funcDetails.Namespace -} - -func (c *FunctionContext) GetFuncID() string { - return c.instanceConf.funcID -} - -func (c *FunctionContext) GetFuncVersion() string { - return c.instanceConf.funcVersion -} - -func (c *FunctionContext) GetUserConfValue(key string) interface{} { - return c.userConfigs[key] -} - -func (c *FunctionContext) GetUserConfMap() map[string]interface{} { - return c.userConfigs -} - -func (c *FunctionContext) SetCurrentRecord(record pulsar.Message) { - c.record = record -} - -func (c *FunctionContext) GetCurrentRecord() pulsar.Message { - return c.record -} - -func (c *FunctionContext) NewOutputMessage(topic string) pulsar.Producer { - return c.outputMessage(topic) -} - -``` - -The following example uses several methods available via the `Context` object. - -``` - -import ( - "context" - "fmt" - - "github.com/apache/pulsar/pulsar-function-go/pf" -) - -func contextFunc(ctx context.Context) { - if fc, ok := pf.FromContext(ctx); ok { - fmt.Printf("function ID is:%s, ", fc.GetFuncID()) - fmt.Printf("function version is:%s\n", fc.GetFuncVersion()) - } -} - -``` - -For complete code, see [here](https://github.com/apache/pulsar/blob/77cf09eafa4f1626a53a1fe2e65dd25f377c1127/pulsar-function-go/examples/contextFunc/contextFunc.go#L29-L34). - - - - -```` - -### User config -When you run or update Pulsar Functions created using SDK, you can pass arbitrary key/values to them with the command line with the `--user-config` flag. Key/values must be specified as JSON. The following function creation command passes a user configured key/value to a function. - -```bash - -$ bin/pulsar-admin functions create \ - --name word-filter \ - # Other function configs - --user-config '{"forbidden-word":"rosebud"}' - -``` - -````mdx-code-block - - - -The Java SDK [`Context`](#context) object enables you to access key/value pairs provided to Pulsar Functions via the command line (as JSON). 
The following example passes a key/value pair.

```bash

$ bin/pulsar-admin functions create \
  --user-config '{"word-of-the-day":"verdure"}' \
  # Other function configs

```

To access that value in a Java function:

```java

import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;
import org.slf4j.Logger;

import java.util.Optional;

public class UserConfigFunction implements Function<String, Void> {
    @Override
    public Void apply(String input, Context context) {
        Logger LOG = context.getLogger();
        Optional<Object> wotd = context.getUserConfigValue("word-of-the-day");
        if (wotd.isPresent()) {
            LOG.info("The word of the day is {}", wotd.get());
        } else {
            LOG.warn("No word of the day provided");
        }
        return null;
    }
}

```

The `UserConfigFunction` function will log the string `"The word of the day is verdure"` every time the function is invoked (which means every time a message arrives). The `word-of-the-day` user config will be changed only when the function is updated with a new config value via the command line.

You can also access the entire user config map or set a default value in case no value is present:

```java

// Get the whole config map
Map<String, Object> allConfigs = context.getUserConfigMap();

// Get value or resort to default
String wotd = (String) context.getUserConfigValueOrDefault("word-of-the-day", "perspicacious");

```

> For all key/value pairs passed to Java functions, both the key *and* the value are `String`. To set the value to be a different type, you need to deserialize from the `String` type.



In a Python function, you can access the configuration value like this.

```python

from pulsar import Function

class WordFilter(Function):
    def process(self, input, context):
        forbidden_word = context.get_user_config_map()["forbidden-word"]

        # Don't publish the message if it contains the user-supplied
        # forbidden word
        if forbidden_word in input:
            pass
        # Otherwise publish the message
        else:
            return input

```

The Python SDK [`Context`](#context) object enables you to access key/value pairs provided to Pulsar Functions via the command line (as JSON). The following example passes a key/value pair.

```bash

$ bin/pulsar-admin functions create \
  --user-config '{"word-of-the-day":"verdure"}' \
  # Other function configs

```

To access that value in a Python function:

```python

from pulsar import Function

class UserConfigFunction(Function):
    def process(self, input, context):
        logger = context.get_logger()
        wotd = context.get_user_config_value('word-of-the-day')
        if wotd is None:
            logger.warn('No word of the day provided')
        else:
            logger.info("The word of the day is {0}".format(wotd))

```



The Go SDK [`Context`](#context) object enables you to access key/value pairs provided to Pulsar Functions via the command line (as JSON). The following example passes a key/value pair.
-

```bash

$ bin/pulsar-admin functions create \
  --go path/to/go/binary \
  --user-config '{"word-of-the-day":"lackadaisical"}'

```

To access that value in a Go function:

```go

func contextFunc(ctx context.Context) {
    fc, ok := pf.FromContext(ctx)
    if !ok {
        logutil.Fatal("Function context is not defined")
    }

    wotd := fc.GetUserConfValue("word-of-the-day")

    if wotd == nil {
        logutil.Warn("The word of the day is empty")
    } else {
        logutil.Infof("The word of the day is %s", wotd.(string))
    }
}

```



````

### Logger

````mdx-code-block


Pulsar Functions that use the Java SDK have access to an [SLF4j](https://www.slf4j.org/) [`Logger`](https://www.slf4j.org/api/org/slf4j/Logger.html) object that can be used to produce logs at the chosen log level. The following example logs either a `WARNING`- or `INFO`-level log based on whether the incoming string contains the word `danger`.

```java

import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;
import org.slf4j.Logger;

public class LoggingFunction implements Function<String, Void> {
    @Override
    public Void apply(String input, Context context) {
        Logger LOG = context.getLogger();
        String messageId = context.getCurrentRecord().getMessage()
                .map(msg -> msg.getMessageId().toString())
                .orElse("unknown");

        if (input.contains("danger")) {
            LOG.warn("A warning was received in message {}", messageId);
        } else {
            LOG.info("Message {} received\nContent: {}", messageId, input);
        }

        return null;
    }
}

```

If you want your function to produce logs, you need to specify a log topic when creating or running the function. The following is an example.

```bash

$ bin/pulsar-admin functions create \
  --jar my-functions.jar \
  --classname my.package.LoggingFunction \
  --log-topic persistent://public/default/logging-function-logs \
  # Other function configs

```

All logs produced by `LoggingFunction` above can be accessed via the `persistent://public/default/logging-function-logs` topic.
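For example, you can tail the log topic with the `pulsar-client` tool, in the same way as consuming any other topic (the subscription name here is arbitrary):

```bash

$ bin/pulsar-client consume persistent://public/default/logging-function-logs \
  --subscription-name log-inspection \
  --num-messages 0 # Listen indefinitely

```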
#### Customize Function log level
Additionally, you can use the `functions_log4j2.xml` file to customize the function log level.
To customize the function log level, create or update `functions_log4j2.xml` in your Pulsar conf directory (for example, `/etc/pulsar/` on bare metal, or `/pulsar/conf` on Kubernetes) to contain contents such as:

```xml

<Configuration>
    <name>pulsar-functions-instance</name>
    <monitorInterval>30</monitorInterval>
    <Properties>
        <Property>
            <name>pulsar.log.appender</name>
            <value>RollingFile</value>
        </Property>
        <Property>
            <name>pulsar.log.level</name>
            <value>debug</value>
        </Property>
        <Property>
            <name>bk.log.level</name>
            <value>debug</value>
        </Property>
    </Properties>
    <Appenders>
        <Console>
            <name>Console</name>
            <target>SYSTEM_OUT</target>
            <PatternLayout>
                <Pattern>%d{ISO8601_OFFSET_DATE_TIME_HHMM} [%t] %-5level %logger{36} - %msg%n</Pattern>
            </PatternLayout>
        </Console>
        <RollingFile>
            <name>RollingFile</name>
            <fileName>${sys:pulsar.function.log.dir}/${sys:pulsar.function.log.file}.log</fileName>
            <filePattern>${sys:pulsar.function.log.dir}/${sys:pulsar.function.log.file}-%d{MM-dd-yyyy}-%i.log.gz</filePattern>
            <immediateFlush>true</immediateFlush>
            <PatternLayout>
                <Pattern>%d{ISO8601_OFFSET_DATE_TIME_HHMM} [%t] %-5level %logger{36} - %msg%n</Pattern>
            </PatternLayout>
            <Policies>
                <TimeBasedTriggeringPolicy>
                    <interval>1</interval>
                    <modulate>true</modulate>
                </TimeBasedTriggeringPolicy>
                <SizeBasedTriggeringPolicy>
                    <size>1 GB</size>
                </SizeBasedTriggeringPolicy>
                <CronTriggeringPolicy>
                    <schedule>0 0 0 * * ?</schedule>
                </CronTriggeringPolicy>
            </Policies>
            <DefaultRolloverStrategy>
                <Delete>
                    <basePath>${sys:pulsar.function.log.dir}</basePath>
                    <maxDepth>2</maxDepth>
                    <IfFileName>
                        <glob>*/${sys:pulsar.function.log.file}*log.gz</glob>
                    </IfFileName>
                    <IfLastModified>
                        <age>30d</age>
                    </IfLastModified>
                </Delete>
            </DefaultRolloverStrategy>
        </RollingFile>
        <RollingFile>
            <name>BkRollingFile</name>
            <fileName>${sys:pulsar.function.log.dir}/${sys:pulsar.function.log.file}.bk</fileName>
            <filePattern>${sys:pulsar.function.log.dir}/${sys:pulsar.function.log.file}.bk-%d{MM-dd-yyyy}-%i.log.gz</filePattern>
            <immediateFlush>true</immediateFlush>
            <PatternLayout>
                <Pattern>%d{ISO8601_OFFSET_DATE_TIME_HHMM} [%t] %-5level %logger{36} - %msg%n</Pattern>
            </PatternLayout>
            <Policies>
                <TimeBasedTriggeringPolicy>
                    <interval>1</interval>
                    <modulate>true</modulate>
                </TimeBasedTriggeringPolicy>
                <SizeBasedTriggeringPolicy>
                    <size>1 GB</size>
                </SizeBasedTriggeringPolicy>
                <CronTriggeringPolicy>
                    <schedule>0 0 0 * * ?</schedule>
                </CronTriggeringPolicy>
            </Policies>
            <DefaultRolloverStrategy>
                <Delete>
                    <basePath>${sys:pulsar.function.log.dir}</basePath>
                    <maxDepth>2</maxDepth>
                    <IfFileName>
                        <glob>*/${sys:pulsar.function.log.file}.bk*log.gz</glob>
                    </IfFileName>
                    <IfLastModified>
                        <age>30d</age>
                    </IfLastModified>
                </Delete>
            </DefaultRolloverStrategy>
        </RollingFile>
    </Appenders>
    <Loggers>
        <Logger>
            <name>org.apache.pulsar.functions.runtime.shaded.org.apache.bookkeeper</name>
            <level>${sys:bk.log.level}</level>
            <additivity>false</additivity>
            <AppenderRef>
                <ref>BkRollingFile</ref>
            </AppenderRef>
        </Logger>
        <Root>
            <level>${sys:pulsar.log.level}</level>
            <AppenderRef>
                <ref>${sys:pulsar.log.appender}</ref>
                <level>${sys:pulsar.log.level}</level>
            </AppenderRef>
        </Root>
    </Loggers>
</Configuration>

```

The properties set like:

```xml

<Property>
    <name>pulsar.log.level</name>
    <value>debug</value>
</Property>

```

propagate to places where they are referenced, such as:

```xml

<Root>
    <level>${sys:pulsar.log.level}</level>
    <AppenderRef>
        <ref>${sys:pulsar.log.appender}</ref>
        <level>${sys:pulsar.log.level}</level>
    </AppenderRef>
</Root>

```

In the above example, debug level logging would be applied to ALL function logs.
This may be more verbose than you desire. To be more selective, you can apply different log levels to different classes or modules. For example:

```xml

<Logger>
    <name>com.example.module</name>
    <level>info</level>
    <additivity>false</additivity>
    <AppenderRef>
        <ref>${sys:pulsar.log.appender}</ref>
    </AppenderRef>
</Logger>

```

You can be more specific as well, such as applying a more verbose log level to a class in the module, such as:

```xml

<Logger>
    <name>com.example.module.className</name>
    <level>debug</level>
    <additivity>false</additivity>
    <AppenderRef>
        <ref>Console</ref>
    </AppenderRef>
</Logger>

```

Each `<AppenderRef>` entry allows you to output the log to a target specified in the definition of the Appender.

Additivity pertains to whether log messages will be duplicated if multiple `<Logger>` entries overlap.
To disable additivity, specify

```xml

<additivity>false</additivity>

```

as shown in examples above. Disabling additivity prevents duplication of log messages when one or more `<Logger>` entries contain classes or modules that overlap.

The `<Appender>` is defined in the `<Appenders>` section, such as:

```xml

<Console>
    <name>Console</name>
    <target>SYSTEM_OUT</target>
    <PatternLayout>
        <Pattern>%d{ISO8601_OFFSET_DATE_TIME_HHMM} [%t] %-5level %logger{36} - %msg%n</Pattern>
    </PatternLayout>
</Console>

```



Pulsar Functions that use the Python SDK have access to a logging object that can be used to produce logs at the chosen log level. The following example function logs either a `WARNING`- or `INFO`-level log based on whether the incoming string contains the word `danger`.

```python

from pulsar import Function

class LoggingFunction(Function):
    def process(self, input, context):
        logger = context.get_logger()
        msg_id = context.get_message_id()
        if 'danger' in input:
            logger.warn("A warning was received in message {0}".format(msg_id))
        else:
            logger.info("Message {0} received\nContent: {1}".format(msg_id, input))

```

If you want your function to produce logs on a Pulsar topic, you need to specify a **log topic** when creating or running the function. The following is an example.

```bash

$ bin/pulsar-admin functions create \
  --py logging_function.py \
  --classname logging_function.LoggingFunction \
  --log-topic logging-function-logs \
  # Other function configs

```

All logs produced by `LoggingFunction` above can be accessed via the `logging-function-logs` topic.
Additionally, you can specify the function log level through the `functions_log4j2.xml` file as described in [Customize Function log level](#customize-function-log-level).



The following Go Function example shows different log levels based on the function input.

```

import (
    "context"

    "github.com/apache/pulsar/pulsar-function-go/pf"

    log "github.com/apache/pulsar/pulsar-function-go/logutil"
)

func loggerFunc(ctx context.Context, input []byte) {
    if len(input) <= 100 {
        log.Infof("This input has a length of: %d", len(input))
    } else {
        log.Warnf("This input is getting too long! It has {%d} characters", len(input))
    }
}

func main() {
    pf.Start(loggerFunc)
}

```

When you use `logTopic` related functionalities in a Go Function, import `github.com/apache/pulsar/pulsar-function-go/logutil`, and you do not need to use the `getLogger()` context object.

Additionally, you can specify the function log level through the `functions_log4j2.xml` file, as described here: [Customize Function log level](#customize-function-log-level)



````

### Pulsar admin

Pulsar Functions using the Java SDK have access to the Pulsar admin client, which can be used to issue admin API calls to the current Pulsar cluster or to external clusters (if `external-pulsars` is provided).

````mdx-code-block


Below is an example of how to use the Pulsar admin client exposed from the Function `context`.

```

import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;

/**
 * In this particular example, for every input message,
 * the function resets the cursor of the current function's subscription to a
 * specified timestamp.
 */
public class CursorManagementFunction implements Function<String, String> {

    @Override
    public String process(String input, Context context) throws Exception {
        PulsarAdmin adminClient = context.getPulsarAdmin();
        if (adminClient != null) {
            String topic = context.getCurrentRecord().getTopicName().isPresent() ?
                    context.getCurrentRecord().getTopicName().get() : null;
            String subName = context.getTenant() + "/" + context.getNamespace() + "/" + context.getFunctionName();
            if (topic != null) {
                // 1578188166 below is a random-pick timestamp
                adminClient.topics().resetCursor(topic, subName, 1578188166);
                return "reset cursor successfully";
            }
        }
        return null;
    }
}

```

If you want your function to get access to the Pulsar admin client, you need to enable this feature by setting `exposeAdminClientEnabled=true` in the `functions_worker.yml` file. You can test whether this feature is enabled or not using the command `pulsar-admin functions localrun` with the flag `--web-service-url`.

```

$ bin/pulsar-admin functions localrun \
  --jar my-functions.jar \
  --classname my.package.CursorManagementFunction \
  --web-service-url http://pulsar-web-service:8080 \
  # Other function configs

```



````

## Metrics

Pulsar Functions allow you to easily deploy and manage processing functions that consume messages from and publish messages to Pulsar topics. It is important to ensure that running functions are healthy at all times. Pulsar Functions can publish arbitrary metrics to the metrics interface which can be queried.

:::note

If a Pulsar Function uses the language-native interface for Java or Python, that function is not able to publish metrics and stats to Pulsar.

:::

You can monitor Pulsar Functions that have been deployed with the following methods:

- Check the metrics provided by Pulsar.

  Pulsar Functions expose the metrics that can be collected and used for monitoring the health of **Java, Python, and Go** functions. You can check the metrics by following the [monitoring](deploy-monitoring.md) guide.

  For the complete list of the function metrics, see [here](reference-metrics.md#pulsar-functions).

- Set and check your customized metrics.

  In addition to the metrics provided by Pulsar, Pulsar allows you to customize metrics for **Java and Python** functions.
Function workers collect user-defined metrics to Prometheus automatically and you can check them in Grafana.

Here are examples of how to customize metrics for Java and Python functions.

````mdx-code-block


You can record metrics using the [`Context`](#context) object on a per-key basis. For example, you can set a metric for the `process-count` key and a different metric for the `elevens-count` key every time the function processes a message.

```java

import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;

public class MetricRecorderFunction implements Function<Integer, Void> {
    @Override
    public Void apply(Integer input, Context context) {
        // Records the metric 1 every time a message arrives
        context.recordMetric("hit-count", 1);

        // Records the metric only if the arriving number equals 11
        if (input == 11) {
            context.recordMetric("elevens-count", 1);
        }

        return null;
    }
}

```



You can record metrics using the [`Context`](#context) object on a per-key basis. For example, you can set a metric for the `process-count` key and a different metric for the `elevens-count` key every time the function processes a message. The following is an example.

```python

from pulsar import Function

class MetricRecorderFunction(Function):
    def process(self, input, context):
        context.record_metric('hit-count', 1)

        if input == 11:
            context.record_metric('elevens-count', 1)

```



Currently, the feature is not available in Go.



````

## Security

If you want to enable security on Pulsar Functions, first you should enable security on [Functions Workers](functions-worker.md). For more details, refer to [Security settings](functions-worker.md#security-settings).

Pulsar Functions can support the following providers:

- ClearTextSecretsProvider
- EnvironmentBasedSecretsProvider

> Pulsar Function supports ClearTextSecretsProvider by default.

At the same time, Pulsar Functions provides two interfaces, **SecretsProvider** and **SecretsProviderConfigurator**, allowing users to customize the secrets provider.

````mdx-code-block


You can retrieve a secret using the [`Context`](#context) object. The following is an example:

```java

import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;
import org.slf4j.Logger;

public class GetSecretProviderFunction implements Function<String, Void> {

    @Override
    public Void process(String input, Context context) throws Exception {
        Logger LOG = context.getLogger();
        String secret = context.getSecret(input);

        if (!secret.isEmpty()) {
            LOG.info("The secret is {}", secret);
        } else {
            LOG.warn("No secret provided");
        }

        return null;
    }
}

```



You can retrieve a secret using the [`Context`](#context) object. The following is an example:

```python

from pulsar import Function

class GetSecretProviderFunction(Function):
    def process(self, input, context):
        logger = context.get_logger()
        secret = context.get_secret(input)
        if secret is None:
            logger.warn('No secret provided')
        else:
            logger.info("The secret is {0}".format(secret))

```



Currently, the feature is not available in Go.



````

## State storage
Pulsar Functions use [Apache BookKeeper](https://bookkeeper.apache.org) as a state storage interface.
A Pulsar installation, including the local standalone installation, includes deployment of BookKeeper bookies.

Since the Pulsar 2.1.0 release, Pulsar integrates with Apache BookKeeper's [table service](https://docs.google.com/document/d/155xAwWv5IdOitHh1NVMEwCMGgB28M3FyMiQSxEpjE-Y/edit#heading=h.56rbh52koe3f) to store the `State` for functions. For example, a `WordCount` function can store its `counters` state into the BookKeeper table service via the Pulsar Functions State API.

States are key-value pairs, where the key is a string and the value is arbitrary binary data - counters are stored as 64-bit big-endian binary values. Keys are scoped to an individual Pulsar Function, and shared between instances of that function.

You can access states within Pulsar Java Functions using the `putState`, `putStateAsync`, `getState`, `getStateAsync`, `incrCounter`, `incrCounterAsync`, `getCounter`, `getCounterAsync` and `deleteState` calls on the context object. You can access states within Pulsar Python Functions using the `put_state`, `get_state`, `incr_counter`, `get_counter` and `del_counter` calls on the context object. You can also manage states using the [querystate](#query-state) and [putstate](#putstate) options to `pulsar-admin functions`.

:::note

State storage is not available in Go.

:::

### API

````mdx-code-block


Currently Pulsar Functions expose the following APIs for mutating and accessing State. These APIs are available in the [Context](functions-develop.md#context) object when you are using Java SDK functions.

#### incrCounter

```java

  /**
   * Increment the builtin distributed counter referred by key
   * @param key The name of the key
   * @param amount The amount to be incremented
   */
  void incrCounter(String key, long amount);

```

The application can use `incrCounter` to change the counter of a given `key` by the given `amount`.

#### incrCounterAsync

```java

  /**
   * Increment the builtin distributed counter referred by key
   * but don't wait for the completion of the increment operation
   *
   * @param key The name of the key
   * @param amount The amount to be incremented
   */
  CompletableFuture<Void> incrCounterAsync(String key, long amount);

```

The application can use `incrCounterAsync` to asynchronously change the counter of a given `key` by the given `amount`.

#### getCounter

```java

  /**
   * Retrieve the counter value for the key.
   *
   * @param key name of the key
   * @return the amount of the counter value for this key
   */
  long getCounter(String key);

```

The application can use `getCounter` to retrieve the counter of a given `key` mutated by `incrCounter`.

Besides the `counter` API, Pulsar also exposes a general key/value API for functions to store
general key/value state.

#### getCounterAsync

```java

  /**
   * Retrieve the counter value for the key, but don't wait
   * for the operation to be completed
   *
   * @param key name of the key
   * @return the amount of the counter value for this key
   */
  CompletableFuture<Long> getCounterAsync(String key);

```

The application can use `getCounterAsync` to asynchronously retrieve the counter of a given `key` mutated by `incrCounterAsync`.

#### putState

```java

  /**
   * Update the state value for the key.
   *
   * @param key name of the key
   * @param value state value of the key
   */
  void putState(String key, ByteBuffer value);

```

#### putStateAsync

```java

  /**
   * Update the state value for the key, but don't wait for the operation to be completed
   *
   * @param key name of the key
   * @param value state value of the key
   */
  CompletableFuture<Void> putStateAsync(String key, ByteBuffer value);

```

The application can use `putStateAsync` to asynchronously update the state of a given `key`.

#### getState

```java

  /**
   * Retrieve the state value for the key.
   *
   * @param key name of the key
   * @return the state value for the key.
   */
  ByteBuffer getState(String key);

```

#### getStateAsync

```java

  /**
   * Retrieve the state value for the key, but don't wait for the operation to be completed
   *
   * @param key name of the key
   * @return the state value for the key.
   */
  CompletableFuture<ByteBuffer> getStateAsync(String key);

```

The application can use `getStateAsync` to asynchronously retrieve the state of a given `key`.

#### deleteState

```java

  /**
   * Delete the state value for the key.
   *
   * @param key name of the key
   */
  void deleteState(String key);

```

Counters and binary values share the same keyspace, so this deletes either type.



Currently Pulsar Functions expose the following APIs for mutating and accessing State. These APIs are available in the [Context](#context) object when you are using Python SDK functions.

#### incr_counter

```python

  def incr_counter(self, key, amount):
    """incr the counter of a given key in the managed state"""

```

The application can use `incr_counter` to change the counter of a given `key` by the given `amount`.
If the `key` does not exist, a new key is created.

#### get_counter

```python

  def get_counter(self, key):
    """get the counter of a given key in the managed state"""

```

The application can use `get_counter` to retrieve the counter of a given `key` mutated by `incr_counter`.

Besides the `counter` API, Pulsar also exposes a general key/value API for functions to store
general key/value state.

#### put_state

```python

  def put_state(self, key, value):
    """update the value of a given key in the managed state"""

```

The key is a string, and the value is arbitrary binary data.

#### get_state

```python

  def get_state(self, key):
    """get the value of a given key in the managed state"""

```

#### del_counter

```python

  def del_counter(self, key):
    """delete the counter of a given key in the managed state"""

```

Counters and binary values share the same keyspace, so this deletes either type.



````

### Query State

A Pulsar Function can use the [State API](#api) for storing state into Pulsar's state storage
and retrieving state back from Pulsar's state storage. Additionally, Pulsar provides
CLI commands for querying its state.

```shell

$ bin/pulsar-admin functions querystate \
    --tenant <tenant> \
    --namespace <namespace> \
    --name <function-name> \
    --state-storage-url <state-storage-url> \
    --key <state-key> \
    [--watch]

```

If `--watch` is specified, the CLI will watch the value of the provided `state-key`.
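You can also write a value directly with the `putstate` option. A minimal sketch, assuming a function named `word-count` in the default tenant and namespace (the JSON passed to `--state` follows the admin API's function-state shape with a `key` and a `stringValue` or `byteValue`):

```shell

$ bin/pulsar-admin functions putstate \
    --tenant public \
    --namespace default \
    --name word-count \
    --state '{"key":"pulsar", "stringValue":"hello"}'

```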
### Example

````mdx-code-block


{@inject: github:WordCountFunction:/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/WordCountFunction.java} is a good example that
demonstrates how an application can easily store `state` in Pulsar Functions.

```java

import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;

import java.util.Arrays;

public class WordCountFunction implements Function<String, Void> {
    @Override
    public Void process(String input, Context context) throws Exception {
        Arrays.asList(input.split("\\s+")).forEach(word -> context.incrCounter(word, 1));
        return null;
    }
}

```

The logic of this `WordCount` function is pretty simple and straightforward:

1. The function first splits the received `String` into multiple words using the whitespace regex `\\s+`.
2. For each `word`, the function increments the corresponding `counter` by 1 (via `incrCounter(key, amount)`).



```python

from pulsar import Function

class WordCount(Function):
    def process(self, item, context):
        for word in item.split():
            context.incr_counter(word, 1)

```

The logic of this `WordCount` function is pretty simple and straightforward:

1. The function first splits the received string into multiple words on space.
2. For each `word`, the function increments the corresponding `counter` by 1 (via `incr_counter(key, amount)`).



````
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/functions-metrics.md b/site2/website/versioned_docs/version-2.8.1-deprecated/functions-metrics.md
deleted file mode 100644
index 8add6693160929..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/functions-metrics.md
+++ /dev/null
@@ -1,7 +0,0 @@
----
-id: functions-metrics
-title: Metrics for Pulsar Functions
-sidebar_label: "Metrics"
-original_id: functions-metrics
----
-
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/functions-overview.md b/site2/website/versioned_docs/version-2.8.1-deprecated/functions-overview.md
deleted file mode 100644
index 816d301e0fd0e7..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/functions-overview.md
+++ /dev/null
@@ -1,209 +0,0 @@
----
-id: functions-overview
-title: Pulsar Functions overview
-sidebar_label: "Overview"
-original_id: functions-overview
----

**Pulsar Functions** are lightweight compute processes that

* consume messages from one or more Pulsar topics,
* apply user-supplied processing logic to each message,
* publish the results of the computation to another topic.


## Goals
With Pulsar Functions, you can create complex processing logic without deploying a separate neighboring system (such as [Apache Storm](http://storm.apache.org/), [Apache Heron](https://heron.incubator.apache.org/), [Apache Flink](https://flink.apache.org/)). Pulsar Functions are the computing infrastructure of the Pulsar messaging system.
The core goal is tied to a series of other goals: - -* Developer productivity (language-native vs Pulsar Functions SDK functions) -* Easy troubleshooting -* Operational simplicity (no need for an external processing system) - -## Inspirations -Pulsar Functions are inspired by (and take cues from) several systems and paradigms: - -* Stream processing engines such as [Apache Storm](http://storm.apache.org/), [Apache Heron](https://apache.github.io/incubator-heron), and [Apache Flink](https://flink.apache.org) -* "Serverless" and "Function as a Service" (FaaS) cloud platforms like [Amazon Web Services Lambda](https://aws.amazon.com/lambda/), [Google Cloud Functions](https://cloud.google.com/functions/), and [Azure Cloud Functions](https://azure.microsoft.com/en-us/services/functions/) - -Pulsar Functions can be described as - -* [Lambda](https://aws.amazon.com/lambda/)-style functions that are -* specifically designed to use Pulsar as a message bus. - -## Programming model -Pulsar Functions provide a wide range of functionality, and the core programming model is simple. Functions receive messages from one or more **input [topics](reference-terminology.md#topic)**. Each time a message is received, the function will complete the following tasks. - - * Apply some processing logic to the input and write output to: - * An **output topic** in Pulsar - * [Apache BookKeeper](functions-develop.md#state-storage) - * Write logs to a **log topic** (potentially for debugging purposes) - * Increment a [counter](#word-count-example) - -![Pulsar Functions core programming model](/assets/pulsar-functions-overview.png) - -You can use Pulsar Functions to set up the following processing chain: - -* A Python function listens for the `raw-sentences` topic and "sanitizes" incoming strings (removing extraneous whitespace and converting all characters to lowercase) and then publishes the results to a `sanitized-sentences` topic. -* A Java function listens for the `sanitized-sentences` topic, counts the number of times each word appears within a specified time window, and publishes the results to a `results` topic -* Finally, a Python function listens for the `results` topic and writes the results to a MySQL table. - - -### Word count example - -If you implement the classic word count example using Pulsar Functions, it looks something like this: - -![Pulsar Functions word count example](/assets/pulsar-functions-word-count.png) - -To write the function in Java with [Pulsar Functions SDK for Java](functions-develop.md#available-apis), you can write the function as follows. - -```java - -package org.example.functions; - -import org.apache.pulsar.functions.api.Context; -import org.apache.pulsar.functions.api.Function; - -import java.util.Arrays; - -public class WordCountFunction implements Function { - // This function is invoked every time a message is published to the input topic - @Override - public Void process(String input, Context context) throws Exception { - Arrays.asList(input.split(" ")).forEach(word -> { - String counterKey = word.toLowerCase(); - context.incrCounter(counterKey, 1); - }); - return null; - } -} - -``` - -Bundle and build the JAR file to be deployed, and then deploy it in your Pulsar cluster using the [command line](functions-deploy.md#command-line-interface) as follows. 
-

```bash

$ bin/pulsar-admin functions create \
  --jar target/my-jar-with-dependencies.jar \
  --classname org.example.functions.WordCountFunction \
  --tenant public \
  --namespace default \
  --name word-count \
  --inputs persistent://public/default/sentences \
  --output persistent://public/default/count

```

### Content-based routing example

Pulsar Functions are used in many cases. The following is a sophisticated example that involves content-based routing.

For example, a function takes items (strings) as input and publishes them to either a `fruits` or `vegetables` topic, depending on the item. Or, if an item is neither a fruit nor a vegetable, a warning is logged to a [log topic](functions-develop.md#logger). The following is a visual representation.

![Pulsar Functions routing example](/assets/pulsar-functions-routing-example.png)

If you implement this routing functionality in Python, it looks something like this:

```python

from pulsar import Function

class RoutingFunction(Function):
    def __init__(self):
        self.fruits_topic = "persistent://public/default/fruits"
        self.vegetables_topic = "persistent://public/default/vegetables"

    @staticmethod
    def is_fruit(item):
        return item in [b"apple", b"orange", b"pear", b"other fruits..."]

    @staticmethod
    def is_vegetable(item):
        return item in [b"carrot", b"lettuce", b"radish", b"other vegetables..."]

    def process(self, item, context):
        if self.is_fruit(item):
            context.publish(self.fruits_topic, item)
        elif self.is_vegetable(item):
            context.publish(self.vegetables_topic, item)
        else:
            warning = "The item {0} is neither a fruit nor a vegetable".format(item)
            context.get_logger().warn(warning)

```

If this code is stored in `~/router.py`, then you can deploy it in your Pulsar cluster using the [command line](functions-deploy.md#command-line-interface) as follows.

```bash

$ bin/pulsar-admin functions create \
  --py ~/router.py \
  --classname router.RoutingFunction \
  --tenant public \
  --namespace default \
  --name route-fruit-veg \
  --inputs persistent://public/default/basket-items

```

### Functions, messages and message types
Pulsar Functions take byte arrays as input and produce byte arrays as output. However, in languages that support typed interfaces (Java), you can write typed Functions, and bind messages to types in the following ways.
* [Schema Registry](functions-develop.md#schema-registry)
* [SerDe](functions-develop.md#serde)


## Fully Qualified Function Name (FQFN)
Each Pulsar Function has a **Fully Qualified Function Name** (FQFN) that consists of three elements: the function tenant, namespace, and function name. An FQFN looks like this:

```http

tenant/namespace/name

```

FQFNs enable you to create multiple functions with the same name provided that they are in different namespaces.

## Supported languages
Currently, you can write Pulsar Functions in Java, Python, and Go. For details, refer to [Develop Pulsar Functions](functions-develop.md).

## Processing guarantees
Pulsar Functions provide three different messaging semantics that you can apply to any function.

Delivery semantics | Description
:------------------|:-------
**At-most-once** delivery | Each message sent to the function is processed at most once; it may not be processed at all (hence "at most").
**At-least-once** delivery | Each message sent to the function can be processed more than once (hence the "at least").
**Effectively-once** delivery | Each message sent to the function will have one output associated with it.


### Apply processing guarantees to a function
You can set the processing guarantees for a Pulsar Function when you create the Function. The following [`pulsar-admin functions create`](reference-pulsar-admin.md#create-1) command creates a function with effectively-once guarantees applied.

```bash

$ bin/pulsar-admin functions create \
  --name my-effectively-once-function \
  --processing-guarantees EFFECTIVELY_ONCE \
  # Other function configs

```

The available options for `--processing-guarantees` are:

* `ATMOST_ONCE`
* `ATLEAST_ONCE`
* `EFFECTIVELY_ONCE`

> By default, Pulsar Functions provide at-least-once delivery guarantees. So if you create a function without supplying a value for the `--processing-guarantees` flag, the function provides at-least-once guarantees.

### Update the processing guarantees of a function
You can change the processing guarantees applied to a function using the [`update`](reference-pulsar-admin.md#update-1) command. The following is an example.

```bash

$ bin/pulsar-admin functions update \
  --processing-guarantees ATMOST_ONCE \
  # Other function configs

```

diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/functions-package.md b/site2/website/versioned_docs/version-2.8.1-deprecated/functions-package.md
deleted file mode 100644
index a995d5c1588771..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/functions-package.md
+++ /dev/null
@@ -1,493 +0,0 @@
----
-id: functions-package
-title: Package Pulsar Functions
-sidebar_label: "How-to: Package"
-original_id: functions-package
----

You can package Pulsar functions in Java, Python, and Go. Packaging the window function in Java is the same as [packaging a function in Java](#java).

:::note

Currently, the window function is not available in Python and Go.

:::

## Prerequisite

Before running a Pulsar function, you need to start Pulsar. You can [run a standalone Pulsar in Docker](getting-started-docker.md), or [run Pulsar in Kubernetes](getting-started-helm.md).

To check whether the Docker container is running, you can use the `docker ps` command.

## Java

To package a function in Java, complete the following steps.

1. Create a new maven project with a pom file. In the following code sample, the value of `mainClass` is the fully qualified name of your function class.

   ```xml

   <?xml version="1.0" encoding="UTF-8"?>
   <project xmlns="http://maven.apache.org/POM/4.0.0"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

       <modelVersion>4.0.0</modelVersion>

       <groupId>java-function</groupId>
       <artifactId>java-function</artifactId>
       <version>1.0-SNAPSHOT</version>

       <dependencies>
           <dependency>
               <groupId>org.apache.pulsar</groupId>
               <artifactId>pulsar-functions-api</artifactId>
               <version>2.6.0</version>
           </dependency>
       </dependencies>

       <build>
           <plugins>
               <plugin>
                   <artifactId>maven-assembly-plugin</artifactId>
                   <configuration>
                       <appendAssemblyId>false</appendAssemblyId>
                       <descriptorRefs>
                           <descriptorRef>jar-with-dependencies</descriptorRef>
                       </descriptorRefs>
                       <archive>
                           <manifest>
                               <mainClass>org.example.test.ExclamationFunction</mainClass>
                           </manifest>
                       </archive>
                   </configuration>
                   <executions>
                       <execution>
                           <id>make-assembly</id>
                           <phase>package</phase>
                           <goals>
                               <goal>assembly</goal>
                           </goals>
                       </execution>
                   </executions>
               </plugin>
               <plugin>
                   <groupId>org.apache.maven.plugins</groupId>
                   <artifactId>maven-compiler-plugin</artifactId>
                   <configuration>
                       <source>8</source>
                       <target>8</target>
                   </configuration>
               </plugin>
           </plugins>
       </build>

   </project>

   ```

2. Write a Java function.

   ```

   package org.example.test;

   import java.util.function.Function;

   public class ExclamationFunction implements Function<String, String> {
       @Override
       public String apply(String s) {
           return "This is my function!";
       }
   }

   ```

   For the imported package, you can use one of the following interfaces:
   - Function interface provided by Java 8: `java.util.function.Function`
   - Pulsar Function interface: `org.apache.pulsar.functions.api.Function`

   The main difference between the two interfaces is that the `org.apache.pulsar.functions.api.Function` interface provides the context interface.
-   When you write a function and want to interact with Pulsar, you can use the context to obtain a wide variety of information and functionality for Pulsar Functions.
-
-   The following example uses the `org.apache.pulsar.functions.api.Function` interface with the context.
-
-   ```java
-
-   package org.example.functions;
-   import org.apache.pulsar.functions.api.Context;
-   import org.apache.pulsar.functions.api.Function;
-
-   import java.util.Arrays;
-
-   public class WordCountFunction implements Function<String, Void> {
-       // This function is invoked every time a message is published to the input topic
-       @Override
-       public Void process(String input, Context context) throws Exception {
-           Arrays.asList(input.split(" ")).forEach(word -> {
-               String counterKey = word.toLowerCase();
-               context.incrCounter(counterKey, 1);
-           });
-           return null;
-       }
-   }
-
-   ```
-
-3. Package the Java function.
-
-   ```bash
-
-   mvn package
-
-   ```
-
-   After the Java function is packaged, a `target` directory is created automatically. Open the `target` directory to check if there is a JAR package similar to `java-function-1.0-SNAPSHOT.jar`.
-
-
-4. Run the Java function.
-
-   (1) Copy the packaged JAR file to the Pulsar image.
-
-   ```bash
-
-   docker exec -it [CONTAINER ID] /bin/bash
-   docker cp <path of java-function-1.0-SNAPSHOT.jar> [CONTAINER ID]:/pulsar
-
-   ```
-
-   (2) Run the Java function using the following command.
-
-   ```bash
-
-   ./bin/pulsar-admin functions localrun \
-     --classname org.example.test.ExclamationFunction \
-     --jar java-function-1.0-SNAPSHOT.jar \
-     --inputs persistent://public/default/my-topic-1 \
-     --output persistent://public/default/test-1 \
-     --tenant public \
-     --namespace default \
-     --name JavaFunction
-
-   ```
-
-   The following log indicates that the Java function starts successfully.
-
-   ```text
-
-   ...
-   07:55:03.724 [main] INFO  org.apache.pulsar.functions.runtime.ProcessRuntime - Started process successfully
-   ...
-
-   ```
-
-## Python
-
-Python Function supports the following three formats:
-
-- One python file
-- ZIP file
-- PIP
-
-### One python file
-
-To package a function with **one python file** in Python, complete the following steps.
-
-1. Write a Python function.
-
-   ```python
-
-   from pulsar import Function  # import the Function module from Pulsar
-
-   # The classic ExclamationFunction that appends an exclamation at the end
-   # of the input
-   class ExclamationFunction(Function):
-       def __init__(self):
-           pass
-
-       def process(self, input, context):
-           return input + '!'
-
-   ```
-
-   In this example, when you write a Python function, you need to inherit the Function class and implement the `process()` method.
-
-   The `process()` method mainly has two parameters:
-
-   - `input` represents your input.
-
-   - `context` represents an interface exposed by the Pulsar Function. You can get the attributes in the Python function based on the provided context object.
-
-2. Install a Python client.
-
-   The implementation of a Python function depends on the Python client, so before deploying a Python function, you need to install the corresponding version of the Python client.
-
-   ```bash
-
-   pip install pulsar-client==2.6.0
-
-   ```
-
-3. Run the Python Function.
-
-   (1) Copy the Python function file to the Pulsar image.
-
-   ```bash
-
-   docker exec -it [CONTAINER ID] /bin/bash
-   docker cp <path of Python function file> [CONTAINER ID]:/pulsar
-
-   ```
-
-   (2) Run the Python function using the following command.
-
-   ```bash
-
-   ./bin/pulsar-admin functions localrun \
-     --classname <python file name>.<class name> \
-     --py <path of Python file> \
-     --inputs persistent://public/default/my-topic-1 \
-     --output persistent://public/default/test-1 \
-     --tenant public \
-     --namespace default \
-     --name PythonFunction
-
-   ```
-
-   The following log indicates that the Python function starts successfully.
-
-   ```text
-
-   ...
-   07:55:03.724 [main] INFO  org.apache.pulsar.functions.runtime.ProcessRuntime - Started process successfully
-   ...
-
-   ```
-
-### ZIP file
-
-To package a function with the **ZIP file** in Python, complete the following steps.
-
-1. Prepare the ZIP file.
-
-   The following layout is required when packaging the ZIP file of the Python Function.
-
-   ```text
-
-   Assuming the ZIP file is named `func.zip`, unzipping it should produce:
-   "func/src"
-   "func/requirements.txt"
-   "func/deps"
-
-   ```
-
-   Take [exclamation.zip](https://github.com/apache/pulsar/tree/master/tests/docker-images/latest-version-image/python-examples) as an example. The internal structure of the example is as follows.
-
-   ```text
-
-   .
-   ├── deps
-   │   └── sh-1.12.14-py2.py3-none-any.whl
-   └── src
-       └── exclamation.py
-
-   ```
-
-2. Run the Python Function.
-
-   (1) Copy the ZIP file to the Pulsar image.
-
-   ```bash
-
-   docker exec -it [CONTAINER ID] /bin/bash
-   docker cp <path of ZIP file> [CONTAINER ID]:/pulsar
-
-   ```
-
-   (2) Run the Python function using the following command.
-
-   ```bash
-
-   ./bin/pulsar-admin functions localrun \
-     --classname exclamation \
-     --py <path of ZIP file> \
-     --inputs persistent://public/default/in-topic \
-     --output persistent://public/default/out-topic \
-     --tenant public \
-     --namespace default \
-     --name PythonFunction
-
-   ```
-
-   The following log indicates that the Python function starts successfully.
-
-   ```text
-
-   ...
-   07:55:03.724 [main] INFO  org.apache.pulsar.functions.runtime.ProcessRuntime - Started process successfully
-   ...
-
-   ```
-
-### PIP
-
-The PIP method is only supported in the Kubernetes runtime. To package a function with **PIP** in Python, complete the following steps.
-
-1. Configure the `functions_worker.yml` file.
-
-   ```text
-
-   #### Kubernetes Runtime ####
-   installUserCodeDependencies: true
-
-   ```
-
-2. Write your Python Function.
-
-   ```python
-
-   from pulsar import Function
-   import js2xml
-
-   # The classic ExclamationFunction that appends an exclamation at the end
-   # of the input
-   class ExclamationFunction(Function):
-       def __init__(self):
-           pass
-
-       def process(self, input, context):
-           # add your logic
-           return input + '!'
-
-   ```
-
-   You can introduce additional dependencies. When the Python function detects that the file currently used is a `whl` package and the `installUserCodeDependencies` parameter is specified, the system uses the `pip install` command to install the dependencies required by the Python function.
-
-3. Generate the `whl` file.
-
-   ```shell
-
-   $ cd $PULSAR_HOME/pulsar-functions/scripts/python
-   $ chmod +x generate.sh
-   $ ./generate.sh
-   # e.g: ./generate.sh /path/to/python /path/to/python/output 1.0.0
-
-   ```
-
-   The output is written to `/path/to/python/output`:
-
-   ```text
-
-   -rw-r--r--  1 root  staff   1.8K  8 27 14:29 pulsarfunction-1.0.0-py2-none-any.whl
-   -rw-r--r--  1 root  staff   1.4K  8 27 14:29 pulsarfunction-1.0.0.tar.gz
-   -rw-r--r--  1 root  staff     0B  8 27 14:29 pulsarfunction.whl
-
-   ```
-
-## Go
-
-To package a function in Go, complete the following steps.
-
-1. Write a Go function.
-
-   Currently, a Go function can **only** be implemented using the SDK, and the interface of the function is exposed in the form of the SDK. Before using a Go function, you need to import "github.com/apache/pulsar/pulsar-function-go/pf".
-
-   ```go
-
-   import (
-       "context"
-       "fmt"
-
-       "github.com/apache/pulsar/pulsar-function-go/pf"
-   )
-
-   func HandleRequest(ctx context.Context, input []byte) error {
-       fmt.Println(string(input) + "!")
-       return nil
-   }
-
-   func main() {
-       pf.Start(HandleRequest)
-   }
-
-   ```
-
-   You can use the context to obtain information about the running Go function, as follows.
-
-   ```go
-
-   if fc, ok := pf.FromContext(ctx); ok {
-       fmt.Printf("function ID is:%s, ", fc.GetFuncID())
-       fmt.Printf("function version is:%s\n", fc.GetFuncVersion())
-   }
-
-   ```
-
-   When writing a Go function, remember the following points:
-   - In `main()`, you **only** need to register the function name to `Start()`. **Only** one function name is received in `Start()`.
-   - A Go function uses Go reflection, based on the received function name, to verify whether the parameter list and returned value list are correct. The parameter list and returned value list **must be** one of the following sample signatures:
-
-   ```go
-
-   func ()
-   func () error
-   func (input) error
-   func () (output, error)
-   func (input) (output, error)
-   func (context.Context) error
-   func (context.Context, input) error
-   func (context.Context) (output, error)
-   func (context.Context, input) (output, error)
-
-   ```
-
-2. Build the Go function.
-
-   ```bash
-
-   go build <your Go function filename>.go
-
-   ```
-
-3. Run the Go Function.
-
-   (1) Copy the Go function file to the Pulsar image.
-
-   ```bash
-
-   docker exec -it [CONTAINER ID] /bin/bash
-   docker cp <path of Go function file> [CONTAINER ID]:/pulsar
-
-   ```
-
-   (2) Run the Go function with the following command.
-
-   ```bash
-
-   ./bin/pulsar-admin functions localrun \
-     --go [your go function path] \
-     --inputs [input topics] \
-     --output [output topic] \
-     --tenant [default:public] \
-     --namespace [default:default] \
-     --name [custom unique go function name]
-
-   ```
-
-   The following log indicates that the Go function starts successfully.
-
-   ```text
-
-   ...
-   07:55:03.724 [main] INFO  org.apache.pulsar.functions.runtime.ProcessRuntime - Started process successfully
-   ...
-
-   ```
-
-## Start Functions in cluster mode
-If you want to start a function in cluster mode, replace `localrun` with `create` in the commands above. The following log indicates that your function starts successfully.
-
-  ```text
-
-  "Created successfully"
-
-  ```
-
-For information about the parameters `--classname`, `--jar`, `--py`, `--go`, and `--inputs`, run the command `./bin/pulsar-admin functions` or see [here](reference-pulsar-admin.md#functions).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/functions-runtime.md b/site2/website/versioned_docs/version-2.8.1-deprecated/functions-runtime.md
deleted file mode 100644
index ab7d1c05db421e..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/functions-runtime.md
+++ /dev/null
@@ -1,399 +0,0 @@
----
-id: functions-runtime
-title: Configure Functions runtime
-sidebar_label: "Setup: Configure Functions runtime"
-original_id: functions-runtime
----
-
-You can use the following methods to run functions.
-
-- *Thread*: Invoke functions in threads in the functions worker.
-- *Process*: Invoke functions in processes forked by the functions worker.
-- *Kubernetes*: Submit functions as Kubernetes StatefulSets via the functions worker.
-
-:::note
-
-Pulsar supports adding labels to the Kubernetes StatefulSets and services while launching functions, which facilitates selecting the target Kubernetes objects.
-
-:::
-
-The differences between the thread and process modes are as follows:
-- Thread mode: when a function runs in thread mode, it runs on the same Java virtual machine (JVM) as the functions worker.
-- Process mode: when a function runs in process mode, it runs on the same machine as the functions worker, in a separate process forked by the worker.
-
-## Configure thread runtime
-It is easy to configure the *Thread* runtime. In most cases, you do not need to configure anything. You can customize the thread group name with the following settings:
-
-```yaml
-
-functionRuntimeFactoryClassName: org.apache.pulsar.functions.runtime.thread.ThreadRuntimeFactory
-functionRuntimeFactoryConfigs:
-  threadGroupName: "Your Function Container Group"
-
-```
-
-The *Thread* runtime is only supported for Java functions.
-
-## Configure process runtime
-When you enable the *Process* runtime, you usually do not need to configure anything. The following settings are available if you need to customize it:
-
-```yaml
-
-functionRuntimeFactoryClassName: org.apache.pulsar.functions.runtime.process.ProcessRuntimeFactory
-functionRuntimeFactoryConfigs:
-  # the directory for storing the function logs
-  logDirectory:
-  # change the jar location only when you put the java instance jar in a different location
-  javaInstanceJarLocation:
-  # change the python instance location only when you put the python instance jar in a different location
-  pythonInstanceLocation:
-  # change the extra dependencies location:
-  extraFunctionDependenciesDir:
-
-```
-
-The *Process* runtime is supported for Java, Python, and Go functions.
-
-## Configure Kubernetes runtime
-
-The Kubernetes runtime works by having the functions worker generate Kubernetes manifests and apply them to the cluster. If the functions worker itself runs on Kubernetes, it can use the `serviceAccount` associated with the pod that it is running in. Otherwise, you can configure it to communicate with a Kubernetes cluster.
-
-The manifests generated by the functions worker include a `StatefulSet`, a `Service` (used to communicate with the pods), and a `Secret` for auth credentials (when applicable). The `StatefulSet` manifest (by default) has a single pod, with the number of replicas determined by the "parallelism" of the function. On pod boot, the pod downloads the function payload (via the functions worker REST API). The pod's container image is configurable, but must include the functions runtime.
-
-The Kubernetes runtime supports secrets, so you can create a Kubernetes secret and expose it as an environment variable in the pod. The Kubernetes runtime is also extensible: you can implement classes to customize how Kubernetes manifests are generated, how auth data is passed to pods, and how secrets are integrated.
-
-:::tip
-
-For the rules of translating Pulsar object names into Kubernetes resource labels, see [here](admin-api-overview.md#how-to-define-pulsar-resource-names-when-running-pulsar-in-kubernetes).
-
-:::
-
-### Basic configuration
-
-It is easy to configure the Kubernetes runtime. You can just uncomment the settings of `kubernetesContainerFactory` in the `functions_worker.yml` file. The following is an example.
-
-```yaml
-
-functionRuntimeFactoryClassName: org.apache.pulsar.functions.runtime.kubernetes.KubernetesRuntimeFactory
-functionRuntimeFactoryConfigs:
-  # URI of the Kubernetes cluster. Leave it empty to use the Kubernetes settings of the functions worker.
-  k8Uri:
-  # the Kubernetes namespace to run the function instances in. It is `default` if this setting is left empty.
-  jobNamespace:
-  # The Kubernetes pod name to run the function instances. It is set to
-  # `pf-<tenant>-<namespace>-<function_name>-<random_uuid>` if this setting is left empty
-  jobName:
-  # the docker image used to run function instances. By default it is `apachepulsar/pulsar`.
-  pulsarDockerImageName:
-  # the docker image used to run function instances according to different configurations provided by users.
-  # By default it is `apachepulsar/pulsar`.
-  # e.g:
-  # functionDockerImages:
-  #   JAVA: JAVA_IMAGE_NAME
-  #   PYTHON: PYTHON_IMAGE_NAME
-  #   GO: GO_IMAGE_NAME
-  functionDockerImages:
-  # The image pull policy for the image used to run function instances. By default it is `IfNotPresent`.
-  imagePullPolicy: IfNotPresent
-  # the root directory of the pulsar home directory in `pulsarDockerImageName`. By default it is `/pulsar`.
-  # if you are using your own built image in `pulsarDockerImageName`, you need to set this setting accordingly
-  pulsarRootDir:
-  # The config admin CLI allows users to customize the configuration of the admin CLI tool, such as:
-  # `/bin/pulsar-admin` and `/bin/pulsarctl`. By default it is `/bin/pulsar-admin`. If you want to use `pulsarctl`,
-  # you need to set this setting accordingly
-  configAdminCLI:
-  # this setting only takes effect if `k8Uri` is set to null. If your functions worker is running as a Kubernetes pod,
-  # set this to true to let the functions worker submit functions to the same Kubernetes cluster it is running in.
-  # Set this to false if your functions worker is not running as a Kubernetes pod.
-  submittingInsidePod: false
-  # set the pulsar service url that pulsar functions should use to connect to pulsar
-  # if it is not set, it will use the pulsar service url configured in the worker service
-  pulsarServiceUrl:
-  # set the pulsar admin url that pulsar functions should use to connect to pulsar
-  # if it is not set, it will use the pulsar admin url configured in the worker service
-  pulsarAdminUrl:
-  # The flag that indicates whether to install user code dependencies. (applied to python package)
-  installUserCodeDependencies:
-  # The repository that pulsar functions use to download python dependencies
-  pythonDependencyRepository:
-  # The repository that pulsar functions use to download extra python dependencies
-  pythonExtraDependencyRepository:
-  # the custom labels that the functions worker uses to select the nodes for pods
-  customLabels:
-  # The expected metrics collection interval, in seconds
-  expectedMetricsCollectionInterval: 30
-  # If defined, the Kubernetes runtime periodically checks this ConfigMap
-  # and applies any changes to the Kubernetes-specific settings.
-  changeConfigMap:
-  # The namespace for storing the change config map
-  changeConfigMapNamespace:
-  # The ratio between the CPU request and the CPU limit to be set for a function/source/sink.
-  # The formula for the CPU request is cpuRequest = userRequestCpu / cpuOverCommitRatio
-  cpuOverCommitRatio: 1.0
-  # The ratio between the memory request and the memory limit to be set for a function/source/sink.
-  # The formula for the memory request is memoryRequest = userRequestMemory / memoryOverCommitRatio
-  memoryOverCommitRatio: 1.0
-  # The port inside the function pod which is used by the worker to communicate with the pod
-  grpcPort: 9093
-  # The port inside the function pod on which Prometheus metrics are exposed
-  metricsPort: 9094
-  # The directory inside the function pod where NAR packages will be extracted
-  narExtractionDirectory:
-  # The classpath where function instance files are stored
-  functionInstanceClassPath:
-  # the directory for dropping extra function dependencies
-  # if it is not an absolute path, it is relative to `pulsarRootDir`
-  extraFunctionDependenciesDir:
-  # Additional memory padding added on top of the memory requested by the function, on a per-instance basis
-  percentMemoryPadding: 10
-
-```
-
-If you run the functions worker embedded in a broker on Kubernetes, you can use the default settings.
-
-### Run standalone functions worker on Kubernetes
-
-If you run the functions worker standalone (that is, not embedded) on Kubernetes, you need to configure `pulsarServiceUrl` to be the URL of the broker and `pulsarAdminUrl` to be the URL of the functions worker.
-
-For example, suppose both the Pulsar brokers and the functions workers run in the `pulsar` Kubernetes namespace, the brokers have a service called `broker`, and the functions worker has a service called `func-worker`. The settings are as follows:
-
-```yaml
-
-pulsarServiceUrl: pulsar://broker.pulsar:6650  # or pulsar+ssl://broker.pulsar:6651 if using TLS
-pulsarAdminUrl: http://func-worker.pulsar:8080  # or https://func-worker:8443 if using TLS
-
-```
-
-### Run RBAC in Kubernetes clusters
-
-If you run RBAC in your Kubernetes cluster, make sure that the service account you use for running functions workers (or brokers, if functions workers run along with brokers) has permissions on the following Kubernetes APIs.
-
-- services
-- configmaps
-- pods
-- apps.statefulsets
-
-The following is sufficient:
-
-```yaml
-
-apiVersion: rbac.authorization.k8s.io/v1beta1
-kind: ClusterRole
-metadata:
-  name: functions-worker
-rules:
-- apiGroups: [""]
-  resources:
-  - services
-  - configmaps
-  - pods
-  verbs:
-  - '*'
-- apiGroups:
-  - apps
-  resources:
-  - statefulsets
-  verbs:
-  - '*'
----
-apiVersion: v1
-kind: ServiceAccount
-metadata:
-  name: functions-worker
----
-apiVersion: rbac.authorization.k8s.io/v1beta1
-kind: ClusterRoleBinding
-metadata:
-  name: functions-worker
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: ClusterRole
-  name: functions-worker
-subjects:
-- kind: ServiceAccount
-  name: functions-worker
-
-```
-
-If the service account is not properly configured, an error message similar to this is displayed:
-
-```bash
-
-22:04:27.696 [Timer-0] ERROR org.apache.pulsar.functions.runtime.KubernetesRuntimeFactory - Error while trying to fetch configmap example-pulsar-4qvmb5gur3c6fc9dih0x1xn8b-function-worker-config at namespace pulsar
-io.kubernetes.client.ApiException: Forbidden
- at io.kubernetes.client.ApiClient.handleResponse(ApiClient.java:882) ~[io.kubernetes-client-java-2.0.0.jar:?]
- at io.kubernetes.client.ApiClient.execute(ApiClient.java:798) ~[io.kubernetes-client-java-2.0.0.jar:?]
- at io.kubernetes.client.apis.CoreV1Api.readNamespacedConfigMapWithHttpInfo(CoreV1Api.java:23673) ~[io.kubernetes-client-java-api-2.0.0.jar:?]
- at io.kubernetes.client.apis.CoreV1Api.readNamespacedConfigMap(CoreV1Api.java:23655) ~[io.kubernetes-client-java-api-2.0.0.jar:?]
- at org.apache.pulsar.functions.runtime.KubernetesRuntimeFactory.fetchConfigMap(KubernetesRuntimeFactory.java:284) [org.apache.pulsar-pulsar-functions-runtime-2.4.0-42c3bf949.jar:2.4.0-42c3bf949]
- at org.apache.pulsar.functions.runtime.KubernetesRuntimeFactory$1.run(KubernetesRuntimeFactory.java:275) [org.apache.pulsar-pulsar-functions-runtime-2.4.0-42c3bf949.jar:2.4.0-42c3bf949]
- at java.util.TimerThread.mainLoop(Timer.java:555) [?:1.8.0_212]
- at java.util.TimerThread.run(Timer.java:505) [?:1.8.0_212]
-
-```
-
-### Integrate Kubernetes secrets
-
-In order to safely distribute secrets, Pulsar Functions can reference Kubernetes secrets. To enable this, set the `secretsProviderConfiguratorClassName` to `org.apache.pulsar.functions.secretsproviderconfigurator.KubernetesSecretsProviderConfigurator`.
-
-You can create a secret in the namespace where your functions are deployed. For example, suppose you deploy functions to the `pulsar-func` Kubernetes namespace, and you have a secret named `database-creds` with a field named `password`, which you want to mount in the pod as an environment variable called `DATABASE_PASSWORD`. The following functions configuration enables you to reference that secret and mount the value as an environment variable in the pod.
-
-```Yaml
-
-tenant: "mytenant"
-namespace: "mynamespace"
-name: "myfunction"
-topicName: "persistent://mytenant/mynamespace/myfuncinput"
-className: "com.company.pulsar.myfunction"
-
-secrets:
-  # the secret will be mounted from the `password` field in the `database-creds` secret as an env var called `DATABASE_PASSWORD`
-  DATABASE_PASSWORD:
-    path: "database-creds"
-    key: "password"
-
-```
-
-### Enable token authentication
-
-When you enable authentication for your Pulsar cluster, you need a mechanism for the pod running your function to authenticate with the broker.
-
-The `org.apache.pulsar.functions.auth.KubernetesFunctionAuthProvider` interface provides support for any authentication mechanism. The `functionAuthProviderClassName` in `functions_worker.yml` is used to specify the path to your implementation.
-
-Pulsar includes an implementation of this interface for token authentication, and distributes the certificate authority via the same implementation. The configuration is as follows:
-
-```Yaml
-
-functionAuthProviderClassName: org.apache.pulsar.functions.auth.KubernetesSecretsTokenAuthProvider
-
-```
-
-For token authentication, the functions worker captures the token that is used to deploy (or update) the function. The token is saved as a secret and mounted into the pod.
-
-For custom authentication or TLS, you need to implement this interface or use an alternative mechanism to provide authentication. If you use token authentication and TLS encryption to secure the communication with the cluster, Pulsar passes your certificate authority (CA) to the client, so the client obtains what it needs to authenticate the cluster, and trusts the cluster with your signed certificate.
-
-:::note
-
-The token captured at deployment time is not renewed afterwards. If you deploy functions with tokens that can expire, these tokens will eventually expire and the functions will no longer be able to authenticate.
-
-:::
-
-### Run clusters with authentication
-
-When you run a functions worker in a standalone process (that is, not embedded in the broker) in a cluster with authentication, you must configure your functions worker both to authenticate with the broker and to authenticate incoming requests. That is, you need to configure the same properties that the broker requires for authentication and authorization.
-
-For example, if you use token authentication, you need to configure the following properties in the `functions_worker.yml` file.
-
-```Yaml
-
-clientAuthenticationPlugin: org.apache.pulsar.client.impl.auth.AuthenticationToken
-clientAuthenticationParameters: file:///etc/pulsar/token/admin-token.txt
-configurationStoreServers: zookeeper-cluster:2181 # auth requires a connection to zookeeper
-authenticationProviders:
- - "org.apache.pulsar.broker.authentication.AuthenticationProviderToken"
-authorizationEnabled: true
-authenticationEnabled: true
-superUserRoles:
-  - superuser
-  - proxy
-properties:
-  tokenSecretKey: file:///etc/pulsar/jwt/secret # if using a secret token, the key file must be DER-encoded
-  tokenPublicKey: file:///etc/pulsar/jwt/public.key # if using public/private key tokens, the key file must be DER-encoded
-
-```
-
-:::note
-
-You must configure both sides: the functions worker's authentication and authorization settings, so that the server can authenticate incoming requests, and the client settings, so that the worker can authenticate itself to the broker.
-
-:::
-
-### Customize Kubernetes runtime
-
-The Kubernetes integration enables you to implement a class that customizes how manifests are generated. You can configure it by setting `runtimeCustomizerClassName` in the `functions_worker.yml` file to the fully qualified class name. You must implement the `org.apache.pulsar.functions.runtime.kubernetes.KubernetesManifestCustomizer` interface.
-
-The functions (and sinks/sources) API provides a flag, `customRuntimeOptions`, which is passed to this interface.
-
-To initialize the `KubernetesManifestCustomizer`, you can provide `runtimeCustomizerConfig` in the `functions_worker.yml` file. `runtimeCustomizerConfig` is passed to the `public void initialize(Map<String, Object> config)` function of the interface. `runtimeCustomizerConfig` is different from `customRuntimeOptions` in that `runtimeCustomizerConfig` is the same across all functions. If you provide both `runtimeCustomizerConfig` and `customRuntimeOptions`, you need to decide how to manage these two configurations in your implementation of `KubernetesManifestCustomizer`.
-
-Pulsar includes a built-in implementation. To use the basic implementation, set `runtimeCustomizerClassName` to `org.apache.pulsar.functions.runtime.kubernetes.BasicKubernetesManifestCustomizer`. The built-in implementation, initialized with `runtimeCustomizerConfig`, enables you to pass a JSON document as `customRuntimeOptions` with certain properties that augment how the manifests are generated. If both `runtimeCustomizerConfig` and `customRuntimeOptions` are provided, `BasicKubernetesManifestCustomizer` uses `customRuntimeOptions` to override `runtimeCustomizerConfig` where the two conflict.
-
-Below is an example of `customRuntimeOptions`.
-
-```json
-
-{
-  "jobName": "jobname", // the k8s pod name to run this function instance
-  "jobNamespace": "namespace", // the k8s namespace to run this function in
-  "extraLabels": { // extra labels to attach to the statefulSet, service, and pods
-    "extraLabel": "value"
-  },
-  "extraAnnotations": { // extra annotations to attach to the statefulSet, service, and pods
-    "extraAnnotation": "value"
-  },
-  "nodeSelectorLabels": { // node selector labels to add on to the pod spec
-    "customLabel": "value"
-  },
-  "tolerations": [ // tolerations to add to the pod spec
-    {
-      "key": "custom-key",
-      "value": "value",
-      "effect": "NoSchedule"
-    }
-  ],
-  "resourceRequirements": { // values for cpu and memory should be defined as described here: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container
-    "requests": {
-      "cpu": 1,
-      "memory": "4G"
-    },
-    "limits": {
-      "cpu": 2,
-      "memory": "8G"
-    }
-  }
-}
-
-```
-
-## Run clusters with geo-replication
-
-If you run multiple clusters tied together with geo-replication, it is important to use a different functions namespace for each cluster. Otherwise, the functions share a namespace and may be scheduled across clusters.
-
-For example, if you have two clusters, `east-1` and `west-1`, you can configure the functions workers for `east-1` and `west-1` respectively as follows.
-
-```Yaml
-
-pulsarFunctionsCluster: east-1
-pulsarFunctionsNamespace: public/functions-east-1
-
-```
-
-```Yaml
-
-pulsarFunctionsCluster: west-1
-pulsarFunctionsNamespace: public/functions-west-1
-
-```
-
-This ensures that the two different functions workers use distinct sets of topics for their internal coordination.
-
-## Configure standalone functions worker
-
-When configuring a standalone functions worker, you need to configure the properties that the broker requires, especially if you use TLS, so that the functions worker can communicate with the broker.
-
-You need to configure the following required properties.
-
-```Yaml
-
-workerPort: 8080
-workerPortTls: 8443 # when using TLS
-tlsCertificateFilePath: /etc/pulsar/tls/tls.crt # when using TLS
-tlsKeyFilePath: /etc/pulsar/tls/tls.key # when using TLS
-tlsTrustCertsFilePath: /etc/pulsar/tls/ca.crt # when using TLS
-pulsarServiceUrl: pulsar://broker.pulsar:6650/ # or pulsar+ssl://pulsar-prod-broker.pulsar:6651/ when using TLS
-pulsarWebServiceUrl: http://broker.pulsar:8080/ # or https://pulsar-prod-broker.pulsar:8443/ when using TLS
-useTls: true # when using TLS, critical!
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/functions-worker.md b/site2/website/versioned_docs/version-2.8.1-deprecated/functions-worker.md
deleted file mode 100644
index 49fc76b30bdaa5..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/functions-worker.md
+++ /dev/null
@@ -1,386 +0,0 @@
----
-id: functions-worker
-title: Deploy and manage functions worker
-sidebar_label: "Setup: Pulsar Functions Worker"
-original_id: functions-worker
----
-Before using Pulsar Functions, you need to learn how to set up the Pulsar Functions worker and how to [configure Functions runtime](functions-runtime.md).
-
-Pulsar `functions-worker` is a logical component that runs Pulsar Functions in cluster mode. Two options are available, and you can select either based on your requirements.
-- [run with brokers](#run-functions-worker-with-brokers)
-- [run it separately](#run-functions-worker-separately) on separate machines
-
-:::note
-
-The `--- Service Urls---` lines in the following diagrams represent Pulsar service URLs that Pulsar client and admin use to connect to a Pulsar cluster.
-
-:::
-
-## Run Functions-worker with brokers
-
-The following diagram illustrates the deployment of functions-workers running along with brokers.
-
-![assets/functions-worker-corun.png](/assets/functions-worker-corun.png)
-
-To enable the functions-worker to run as part of a broker, you need to set `functionsWorkerEnabled` to `true` in the `broker.conf` file.
-
-```conf
-
-functionsWorkerEnabled=true
-
-```
-
-If `functionsWorkerEnabled` is set to `true`, the functions-worker is started as part of the broker. You need to configure the `conf/functions_worker.yml` file to customize your functions-worker.
-
-Before you run the functions-worker with a broker, you have to configure the functions-worker, and then start it with the broker.
-
-### Configure Functions-Worker to run with brokers
-In this mode, most of the settings are already inherited from your broker configuration (for example, configurationStore settings, authentication settings, and so on) since the `functions-worker` is running as part of the broker.
-
-Pay attention to the following required settings when configuring the functions-worker in this mode.
-
-- `numFunctionPackageReplicas`: The number of replicas used to store function packages. The default value is `1`, which is good for standalone deployment. For production deployment, to ensure high availability, set it to be larger than `2`.
-- `initializedDlogMetadata`: Whether the distributed log metadata is initialized at runtime. If it is set to `true`, you must ensure that the metadata has been initialized by the `bin/pulsar initialize-cluster-metadata` command.
-
-If authentication is enabled on the BookKeeper cluster, configure the following BookKeeper authentication settings.
-
-- `bookkeeperClientAuthenticationPlugin`: the BookKeeper client authentication plugin name.
-- `bookkeeperClientAuthenticationParametersName`: the BookKeeper client authentication plugin parameters name.
-- `bookkeeperClientAuthenticationParameters`: the BookKeeper client authentication plugin parameters.
-
-### Configure Stateful-Functions to run with broker
-
-If you want to use Stateful-Functions related features (for example, the `putState()` and `queryState()` related interfaces), follow the steps below.
-
-1. Enable the **streamStorage** service in BookKeeper.
-
-   Currently, the service uses the NAR package, so you need to set the configuration in `bookkeeper.conf`.
-
-   ```text
-
-   extraServerComponents=org.apache.bookkeeper.stream.server.StreamStorageLifecycleComponent
-
-   ```
-
-   After starting the bookie, use the following method to check whether the streamStorage service has started correctly.
-
-   Input:
-
-   ```shell
-
-   telnet localhost 4181
-
-   ```
-
-   Output:
-
-   ```text
-
-   Trying 127.0.0.1...
-   Connected to localhost.
-   Escape character is '^]'.
-
-   ```
-
-2. Turn on this feature in `functions_worker.yml`.
-
-   ```text
-
-   stateStorageServiceUrl: bk://<bk-service-url>:4181
-
-   ```
-
-   `<bk-service-url>` is the service URL pointing to the BookKeeper table service.
-
-### Start Functions-worker with broker
-
-Once you have configured the `functions_worker.yml` file, you can start or restart your broker.
-
-Then you can use the following command to verify whether the `functions-worker` is running.
-
-```bash
-
-curl <broker-ip>:8080/admin/v2/worker/cluster
-
-```
-
-After entering the command above, a list of active functions workers in the cluster is returned. The output is similar to the following.
-
-```json
-
-[{"workerId":"<worker-id>","workerHostname":"<worker-hostname>","port":8080}]
-
-```
-
-## Run Functions-worker separately
-
-This section illustrates how to run `functions-worker` as a separate process on separate machines.
-
-![assets/functions-worker-separated.png](/assets/functions-worker-separated.png)
-
-:::note
-
-In this mode, make sure `functionsWorkerEnabled` is set to `false`, so you won't start the `functions-worker` with brokers by mistake. Also, while accessing the `functions-worker` to manage any of the functions, the `pulsar-admin` CLI tool or any of the clients should use the `workerHostname` and `workerPort` that you set in [Worker parameters](#worker-parameters) to generate an `--admin-url`.
-
-:::
-
-### Configure Functions-worker to run separately
-
-To run the functions-worker separately, you have to configure the following parameters.
-
-#### Worker parameters
-
-- `workerId`: A string that is unique across the cluster; it is used to identify a worker machine.
-- `workerHostname`: The hostname of the worker machine.
-- `workerPort`: The port that the worker server listens on. Keep it as default if you don't customize it.
-- `workerPortTls`: The TLS port that the worker server listens on. Keep it as default if you don't customize it.
-
-#### Function package parameter
-
-- `numFunctionPackageReplicas`: The number of replicas used to store function packages. The default value is `1`.
-
-#### Function metadata parameter
-
-- `pulsarServiceUrl`: The Pulsar service URL for your broker cluster.
-- `pulsarWebServiceUrl`: The Pulsar web service URL for your broker cluster.
-- `pulsarFunctionsCluster`: Set the value to your Pulsar cluster name (same as the `clusterName` setting in the broker configuration).
-
-If authentication is enabled for your broker cluster, you *should* configure the authentication plugin and parameters for the functions worker to communicate with the brokers.
-
-- `brokerClientAuthenticationEnabled`: Whether to enable the broker client authentication used by functions workers to talk to brokers.
-- `clientAuthenticationPlugin`: The authentication plugin to be used by the Pulsar client in the worker service.
-- `clientAuthenticationParameters`: The authentication parameters to be used by the Pulsar client in the worker service.
-
-#### Security settings
-
-If you want to enable security on functions workers, you *should*:
-- [Enable TLS transport encryption](#enable-tls-transport-encryption)
-- [Enable Authentication Provider](#enable-authentication-provider)
-- [Enable Authorization Provider](#enable-authorization-provider)
-- [Enable End-to-End Encryption](#enable-end-to-end-encryption)
-
-##### Enable TLS transport encryption
-
-To enable TLS transport encryption, configure the following settings.
-
-```
-
-useTLS: true
-pulsarServiceUrl: pulsar+ssl://localhost:6651/
-pulsarWebServiceUrl: https://localhost:8443
-
-tlsEnabled: true
-tlsCertificateFilePath: /path/to/functions-worker.cert.pem
-tlsKeyFilePath: /path/to/functions-worker.key-pk8.pem
-tlsTrustCertsFilePath: /path/to/ca.cert.pem
-
-# The path to trusted certificates used by the Pulsar client to authenticate with Pulsar brokers
-brokerClientTrustCertsFilePath: /path/to/ca.cert.pem
-
-```
-
-For details on TLS encryption, refer to [Transport Encryption using TLS](security-tls-transport.md).
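-
-After TLS is enabled, you can sanity-check the worker's TLS endpoint before configuring the remaining security features. The following is a minimal sketch, assuming the certificate paths shown above and a worker listening locally on the default `workerPortTls` (8443); the `/admin/v2/worker/cluster` endpoint is the same one used earlier to verify that the worker is running:
-
-```bash
-
-# Probe the worker's TLS listener, trusting the CA that signed the worker certificate.
-# If authentication is also enabled, you must additionally pass credentials
-# (for example, a token header), otherwise the request is rejected.
-curl --cacert /path/to/ca.cert.pem \
-  https://localhost:8443/admin/v2/worker/cluster
-
-```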
-
-##### Enable Authentication Provider
-
-To enable authentication on the functions worker, you need to configure the following settings.
-
-:::note
-
-Substitute the *providers list* with the providers you want to enable.
-
-:::
-
-```
-
-authenticationEnabled: true
-authenticationProviders: [ provider1, provider2 ]
-
-```
-
-For the *TLS Authentication* provider, follow the example below to add the necessary settings.
-See [TLS Authentication](security-tls-authentication.md) for more details.
-
-```
-
-brokerClientAuthenticationPlugin: org.apache.pulsar.client.impl.auth.AuthenticationTls
-brokerClientAuthenticationParameters: tlsCertFile:/path/to/admin.cert.pem,tlsKeyFile:/path/to/admin.key-pk8.pem
-
-authenticationEnabled: true
-authenticationProviders: ['org.apache.pulsar.broker.authentication.AuthenticationProviderTls']
-
-```
-
-For the *SASL Authentication* provider, add `saslJaasClientAllowedIds` and `saslJaasBrokerSectionName`
-under `properties` if needed.
-
-```
-
-properties:
-  saslJaasClientAllowedIds: .*pulsar.*
-  saslJaasBrokerSectionName: Broker
-
-```
-
-For the *Token Authentication* provider, add the necessary settings under `properties` if needed.
-See [Token Authentication](security-jwt.md) for more details.
-Note: key files must be DER-encoded.
-
-```
-
-properties:
-  tokenSecretKey: file://my/secret.key
-  # If using public/private keys:
-  # tokenPublicKey: file:///path/to/public.key
-
-```
-
-##### Enable Authorization Provider
-
-To enable authorization on the functions worker, you need to configure `authorizationEnabled`, `authorizationProvider` and `configurationStoreServers`. The authentication provider connects to `configurationStoreServers` to receive namespace policies.
-
-```yaml
-
-authorizationEnabled: true
-authorizationProvider: org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider
-configurationStoreServers: <configuration-store-servers>
-
-```
-
-You should also configure a list of superuser roles. The superuser roles are able to access any admin API. The following is a configuration example.
-
-```yaml
-
-superUserRoles:
-  - role1
-  - role2
-  - role3
-
-```
-
-##### Enable End-to-End Encryption
-
-You can use the public and private key pair that the application configures to perform encryption. Only the consumers with a valid key can decrypt the encrypted messages.
-
-To enable end-to-end encryption on the functions worker, you can set it by specifying `--producer-config` in the command line terminal. For more information, refer to [here](security-encryption.md).
-
-We include the relevant configuration information of `CryptoConfig` into `ProducerConfig`. The specific configurable fields of `CryptoConfig` are as follows:
-
-```java
-
-public class CryptoConfig {
-    private String cryptoKeyReaderClassName;
-    private Map<String, Object> cryptoKeyReaderConfig;
-
-    private String[] encryptionKeys;
-    private ProducerCryptoFailureAction producerCryptoFailureAction;
-
-    private ConsumerCryptoFailureAction consumerCryptoFailureAction;
-}
-
-```
-
-- `producerCryptoFailureAction`: defines the action to take if the producer fails to encrypt data; one of `FAIL`, `SEND`.
-- `consumerCryptoFailureAction`: defines the action to take if the consumer fails to decrypt data; one of `FAIL`, `DISCARD`, `CONSUME`.
-
-#### BookKeeper Authentication
-
-If authentication is enabled on the BookKeeper cluster, you need to configure the following BookKeeper authentication settings:
-
-- `bookkeeperClientAuthenticationPlugin`: the plugin name of BookKeeper client authentication.
-- `bookkeeperClientAuthenticationParametersName`: the plugin parameters name of BookKeeper client authentication.
-- `bookkeeperClientAuthenticationParameters`: the plugin parameters of BookKeeper client authentication.
-
-### Start Functions-worker
-
-Once you have finished configuring the `functions_worker.yml` configuration file, you can start a `functions-worker` in the background by using [nohup](https://en.wikipedia.org/wiki/Nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
-
-```bash
-
-bin/pulsar-daemon start functions-worker
-
-```
-
-You can also start a `functions-worker` in the foreground by using the `pulsar` CLI tool:
-
-```bash
-
-bin/pulsar functions-worker
-
-```
-
-### Configure Proxies for Functions-workers
-
-When you are running `functions-worker` in a separate cluster, the admin REST endpoints are split into two clusters. The `functions`, `function-worker`, `source` and `sink` endpoints are now served
-by the `functions-worker` cluster, while all the other remaining endpoints are served by the broker cluster.
-Hence you need to configure your `pulsar-admin` to use the right service URL accordingly.
-
-In order to address this inconvenience, you can start a proxy cluster for routing the admin REST requests accordingly. Hence you will have one central entry point for your admin service.
-
-If you already have a proxy cluster, continue reading. If you haven't set up a proxy cluster before, you can follow the [instructions](http://pulsar.apache.org/docs/en/administration-proxy/) to
-start proxies.
-
-![assets/functions-worker-separated.png](/assets/functions-worker-separated-proxy.png)
-
-To enable routing of functions-related admin requests to the `functions-worker` in a proxy, you can edit the `proxy.conf` file to modify the following settings:
-
-```conf
-
-functionWorkerWebServiceURL=<function_worker_web_service_url>
-functionWorkerWebServiceURLTLS=<function_worker_web_service_url_tls>
-
-```
-
-## Compare the Run-with-Broker and Run-separately modes
-
-As described above, you can run the functions-worker with brokers, or run it separately. It is more convenient to run the functions-worker along with brokers. However, running the functions-worker in a separate cluster provides better resource isolation for running functions in `Process` or `Thread` mode.
-
-To determine which mode to use for your case, refer to the following guidelines.
-
-Use the `Run-with-Broker` mode in the following cases:
-- a) if resource isolation is not required when running functions in `Process` or `Thread` mode;
-- b) if you configure the functions-worker to run functions on Kubernetes (where the resource isolation problem is addressed by Kubernetes).
-
-Use the `Run-separately` mode in the following cases:
-- a) you don't have a Kubernetes cluster;
-- b) if you want to run functions and brokers separately.
-
-## Troubleshooting
-
-**Error message: Namespace missing local cluster name in clusters list**
-
-```
-
-Failed to get partitioned topic metadata: org.apache.pulsar.client.api.PulsarClientException$BrokerMetadataException: Namespace missing local cluster name in clusters list: local_cluster=xyz ns=public/functions clusters=[standalone]
-
-```
-
-This error message appears in either of the following cases:
-- a) a broker is started with `functionsWorkerEnabled=true`, but `pulsarFunctionsCluster` is not set to the correct cluster in the `conf/functions_worker.yml` file;
-- b) setting up a geo-replicated Pulsar cluster with `functionsWorkerEnabled=true`, where brokers in one cluster run well, but brokers in the other cluster do not.
-
-**Workaround**
-
-If either of these cases happens, follow the instructions below to fix the problem:
-
-1. Disable the Functions Worker by setting `functionsWorkerEnabled=false`, and restart the brokers.
-
-2. Get the current clusters list of the `public/functions` namespace.
-
-```bash
-
-bin/pulsar-admin namespaces get-clusters public/functions
-
-```
-
-3. Check if the cluster is in the clusters list. If the cluster is not in the list, add it and update the clusters list.
-
-```bash
-
-bin/pulsar-admin namespaces set-clusters --clusters <existing-clusters>,<new-cluster> public/functions
-
-```
-
-4. After setting the cluster successfully, enable the functions worker by setting `functionsWorkerEnabled=true`.
-
-5. Set the correct cluster name in `pulsarFunctionsCluster` in the `conf/functions_worker.yml` file, and restart the brokers.
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/getting-started-concepts-and-architecture.md b/site2/website/versioned_docs/version-2.8.1-deprecated/getting-started-concepts-and-architecture.md
deleted file mode 100644
index fe9c3fbc553b2c..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/getting-started-concepts-and-architecture.md
+++ /dev/null
@@ -1,16 +0,0 @@
----
-id: concepts-architecture
-title: Pulsar concepts and architecture
-sidebar_label: "Concepts and architecture"
-original_id: concepts-architecture
----
-
-
-
-
-
-
-
-
-
-
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/getting-started-docker.md b/site2/website/versioned_docs/version-2.8.1-deprecated/getting-started-docker.md
deleted file mode 100644
index 4f20971d75330c..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/getting-started-docker.md
+++ /dev/null
@@ -1,176 +0,0 @@
----
-id: getting-started-docker
-title: Set up a standalone Pulsar in Docker
-sidebar_label: "Run Pulsar in Docker"
-original_id: getting-started-docker
----
-
-For local development and testing, you can run Pulsar in standalone
-mode on your own machine within a Docker container.
-
-If you have not installed Docker, download the [Community edition](https://www.docker.com/community-edition)
-and follow the instructions for your OS.
-
-## Start Pulsar in Docker
-
-* For MacOS, Linux, and Windows:
-
-  ```shell
-
-  $ docker run -it \
-    -p 6650:6650 \
-    -p 8080:8080 \
-    --mount source=pulsardata,target=/pulsar/data \
-    --mount source=pulsarconf,target=/pulsar/conf \
-    apachepulsar/pulsar:@pulsar:version@ \
-    bin/pulsar standalone
-
-  ```
-
-A few things to note about this command:
- * The data, metadata, and configuration are persisted on Docker volumes so that the container does not start "fresh" every
-time it is restarted. For details on the volumes, you can use `docker volume inspect <source-name>`.
- * For Docker on Windows, make sure to configure it to use Linux containers.
-
-If you start Pulsar successfully, you will see `INFO`-level log messages like this:
-
-```
-
-2017-08-09 22:34:04,030 - INFO  - [main:WebService@213] - Web Service started at http://127.0.0.1:8080
-2017-08-09 22:34:04,038 - INFO  - [main:PulsarService@335] - messaging service is ready, bootstrap service on port=8080, broker url=pulsar://127.0.0.1:6650, cluster=standalone, configs=org.apache.pulsar.broker.ServiceConfiguration@4db60246
-...
-
-```
-
-:::tip
-
-When you start a local standalone cluster, a `public/default` namespace is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics).
- -::: - -## Use Pulsar in Docker - -Pulsar offers client libraries for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python.md) -and [C++](client-libraries-cpp.md). If you're running a local standalone cluster, you can -use one of these root URLs to interact with your cluster: - -* `pulsar://localhost:6650` -* `http://localhost:8080` - -The following example will guide you get started with Pulsar quickly by using the [Python](client-libraries-python.md) -client API. - -Install the Pulsar Python client library directly from [PyPI](https://pypi.org/project/pulsar-client/): - -```shell - -$ pip install pulsar-client - -``` - -### Consume a message - -Create a consumer and subscribe to the topic: - -```python - -import pulsar - -client = pulsar.Client('pulsar://localhost:6650') -consumer = client.subscribe('my-topic', - subscription_name='my-sub') - -while True: - msg = consumer.receive() - print("Received message: '%s'" % msg.data()) - consumer.acknowledge(msg) - -client.close() - -``` - -### Produce a message - -Now start a producer to send some test messages: - -```python - -import pulsar - -client = pulsar.Client('pulsar://localhost:6650') -producer = client.create_producer('my-topic') - -for i in range(10): - producer.send(('hello-pulsar-%d' % i).encode('utf-8')) - -client.close() - -``` - -## Get the topic statistics - -In Pulsar, you can use REST, Java, or command-line tools to control every aspect of the system. -For details on APIs, refer to [Admin API Overview](admin-api-overview.md). - -In the simplest example, you can use curl to probe the stats for a particular topic: - -```shell - -$ curl http://localhost:8080/admin/v2/persistent/public/default/my-topic/stats | python -m json.tool - -``` - -The output is something like this: - -```json - -{ - "averageMsgSize": 0.0, - "msgRateIn": 0.0, - "msgRateOut": 0.0, - "msgThroughputIn": 0.0, - "msgThroughputOut": 0.0, - "publishers": [ - { - "address": "/172.17.0.1:35048", - "averageMsgSize": 0.0, - "clientVersion": "1.19.0-incubating", - "connectedSince": "2017-08-09 20:59:34.621+0000", - "msgRateIn": 0.0, - "msgThroughputIn": 0.0, - "producerId": 0, - "producerName": "standalone-0-1" - } - ], - "replication": {}, - "storageSize": 16, - "subscriptions": { - "my-sub": { - "blockedSubscriptionOnUnackedMsgs": false, - "consumers": [ - { - "address": "/172.17.0.1:35064", - "availablePermits": 996, - "blockedConsumerOnUnackedMsgs": false, - "clientVersion": "1.19.0-incubating", - "connectedSince": "2017-08-09 21:05:39.222+0000", - "consumerName": "166111", - "msgRateOut": 0.0, - "msgRateRedeliver": 0.0, - "msgThroughputOut": 0.0, - "unackedMessages": 0 - } - ], - "msgBacklog": 0, - "msgRateExpired": 0.0, - "msgRateOut": 0.0, - "msgRateRedeliver": 0.0, - "msgThroughputOut": 0.0, - "type": "Exclusive", - "unackedMessages": 0 - } - } -} - -``` - diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/getting-started-helm.md b/site2/website/versioned_docs/version-2.8.1-deprecated/getting-started-helm.md deleted file mode 100644 index 440087c275c053..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/getting-started-helm.md +++ /dev/null @@ -1,441 +0,0 @@ ---- -id: getting-started-helm -title: Get started in Kubernetes -sidebar_label: "Run Pulsar in Kubernetes" -original_id: getting-started-helm ---- - -This section guides you through every step of installing and running Apache Pulsar with Helm on Kubernetes quickly, including the following sections: - -- Install the 
Apache Pulsar on Kubernetes using Helm
-- Start and stop Apache Pulsar
-- Create topics using `pulsar-admin`
-- Produce and consume messages using Pulsar clients
-- Monitor Apache Pulsar status with Prometheus and Grafana
-
-For deploying a Pulsar cluster for production usage, read the documentation on [how to configure and install a Pulsar Helm chart](helm-deploy.md).
-
-## Prerequisite
-
-- Kubernetes server 1.14.0+
-- kubectl 1.14.0+
-- Helm 3.0+
-
-:::tip
-
-For the following steps, step 2 and step 3 are for **developers** and step 4 and step 5 are for **administrators**.
-
-:::
-
-## Step 0: Prepare a Kubernetes cluster
-
-Before installing a Pulsar Helm chart, you have to create a Kubernetes cluster. You can follow [the instructions](helm-prepare.md) to prepare a Kubernetes cluster.
-
-We use [Minikube](https://minikube.sigs.k8s.io/docs/start/) in this quick start guide. To prepare a Kubernetes cluster, follow these steps:
-
-1. Create a Kubernetes cluster on Minikube.
-
-   ```bash
-
-   minikube start --memory=8192 --cpus=4 --kubernetes-version=<k8s-version>
-
-   ```
-
-   The `<k8s-version>` can be any [Kubernetes version supported by your Minikube installation](https://minikube.sigs.k8s.io/docs/reference/configuration/kubernetes/), such as `v1.16.1`.
-
-2. Set `kubectl` to use Minikube.
-
-   ```bash
-
-   kubectl config use-context minikube
-
-   ```
-
-3. To use the [Kubernetes Dashboard](https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/) with the local Kubernetes cluster on Minikube, enter the command below:
-
-   ```bash
-
-   minikube dashboard
-
-   ```
-
-   The command automatically triggers opening a webpage in your browser.
-
-## Step 1: Install Pulsar Helm chart
-
-0. Add the Pulsar charts repo.
-
-   ```bash
-
-   helm repo add apache https://pulsar.apache.org/charts
-
-   ```
-
-   ```bash
-
-   helm repo update
-
-   ```
-
-1. Clone the Pulsar Helm chart repository.
-
-   ```bash
-
-   git clone https://github.com/apache/pulsar-helm-chart
-   cd pulsar-helm-chart
-
-   ```
-
-2. Run the script `prepare_helm_release.sh` to create the secrets required for installing the Apache Pulsar Helm chart. The username `pulsar` and password `pulsar` are used for logging into the Grafana dashboard and Pulsar Manager.
-
-   > **NOTE**
-   > When running the script, you can use `-n` to specify the Kubernetes namespace where the Pulsar Helm chart is installed, `-k` to define the Pulsar Helm release name, and `-c` to create the Kubernetes namespace. For more information about the script, run `./scripts/pulsar/prepare_helm_release.sh --help`.
-
-   ```bash
-
-   ./scripts/pulsar/prepare_helm_release.sh \
-   -n pulsar \
-   -k pulsar-mini \
-   -c
-
-   ```
-
-3. Use the Pulsar Helm chart to install a Pulsar cluster to Kubernetes.
-
-   > **NOTE**
-   > You need to specify `--set initialize=true` when installing Pulsar for the first time. This command installs and starts Apache Pulsar.
-
-   ```bash
-
-   helm install \
-   --values examples/values-minikube.yaml \
-   --set initialize=true \
-   --namespace pulsar \
-   pulsar-mini apache/pulsar
-
-   ```
-
-4. Check the status of all pods.
-
-   ```bash
-
-   kubectl get pods -n pulsar
-
-   ```
-
-   If all pods start up successfully, you can see that their `STATUS` changes to `Running` or `Completed`.
- - **Output** - - ```bash - - NAME READY STATUS RESTARTS AGE - pulsar-mini-bookie-0 1/1 Running 0 9m27s - pulsar-mini-bookie-init-5gphs 0/1 Completed 0 9m27s - pulsar-mini-broker-0 1/1 Running 0 9m27s - pulsar-mini-grafana-6b7bcc64c7-4tkxd 1/1 Running 0 9m27s - pulsar-mini-prometheus-5fcf5dd84c-w8mgz 1/1 Running 0 9m27s - pulsar-mini-proxy-0 1/1 Running 0 9m27s - pulsar-mini-pulsar-init-t7cqt 0/1 Completed 0 9m27s - pulsar-mini-pulsar-manager-9bcbb4d9f-htpcs 1/1 Running 0 9m27s - pulsar-mini-toolset-0 1/1 Running 0 9m27s - pulsar-mini-zookeeper-0 1/1 Running 0 9m27s - - ``` - -5. Check the status of all services in the namespace `pulsar`. - - ```bash - - kubectl get services -n pulsar - - ``` - - **Output** - - ```bash - - NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - pulsar-mini-bookie ClusterIP None 3181/TCP,8000/TCP 11m - pulsar-mini-broker ClusterIP None 8080/TCP,6650/TCP 11m - pulsar-mini-grafana LoadBalancer 10.106.141.246 3000:31905/TCP 11m - pulsar-mini-prometheus ClusterIP None 9090/TCP 11m - pulsar-mini-proxy LoadBalancer 10.97.240.109 80:32305/TCP,6650:31816/TCP 11m - pulsar-mini-pulsar-manager LoadBalancer 10.103.192.175 9527:30190/TCP 11m - pulsar-mini-toolset ClusterIP None 11m - pulsar-mini-zookeeper ClusterIP None 2888/TCP,3888/TCP,2181/TCP 11m - - ``` - -## Step 2: Use pulsar-admin to create Pulsar tenants/namespaces/topics - -`pulsar-admin` is the CLI (command-Line Interface) tool for Pulsar. In this step, you can use `pulsar-admin` to create resources, including tenants, namespaces, and topics. - -1. Enter the `toolset` container. - - ```bash - - kubectl exec -it -n pulsar pulsar-mini-toolset-0 -- /bin/bash - - ``` - -2. In the `toolset` container, create a tenant named `apache`. - - ```bash - - bin/pulsar-admin tenants create apache - - ``` - - Then you can list the tenants to see if the tenant is created successfully. - - ```bash - - bin/pulsar-admin tenants list - - ``` - - You should see a similar output as below. The tenant `apache` has been successfully created. - - ```bash - - "apache" - "public" - "pulsar" - - ``` - -3. In the `toolset` container, create a namespace named `pulsar` in the tenant `apache`. - - ```bash - - bin/pulsar-admin namespaces create apache/pulsar - - ``` - - Then you can list the namespaces of tenant `apache` to see if the namespace is created successfully. - - ```bash - - bin/pulsar-admin namespaces list apache - - ``` - - You should see a similar output as below. The namespace `apache/pulsar` has been successfully created. - - ```bash - - "apache/pulsar" - - ``` - -4. In the `toolset` container, create a topic `test-topic` with `4` partitions in the namespace `apache/pulsar`. - - ```bash - - bin/pulsar-admin topics create-partitioned-topic apache/pulsar/test-topic -p 4 - - ``` - -5. In the `toolset` container, list all the partitioned topics in the namespace `apache/pulsar`. - - ```bash - - bin/pulsar-admin topics list-partitioned-topics apache/pulsar - - ``` - - Then you can see all the partitioned topics in the namespace `apache/pulsar`. - - ```bash - - "persistent://apache/pulsar/test-topic" - - ``` - -## Step 3: Use Pulsar client to produce and consume messages - -You can use the Pulsar client to create producers and consumers to produce and consume messages. - -By default, the Pulsar Helm chart exposes the Pulsar cluster through a Kubernetes `LoadBalancer`. In Minikube, you can use the following command to check the proxy service. 
- -```bash - -kubectl get services -n pulsar | grep pulsar-mini-proxy - -``` - -You will see a similar output as below. - -```bash - -pulsar-mini-proxy LoadBalancer 10.97.240.109 80:32305/TCP,6650:31816/TCP 28m - -``` - -This output tells what are the node ports that Pulsar cluster's binary port and HTTP port are mapped to. The port after `80:` is the HTTP port while the port after `6650:` is the binary port. - -Then you can find the IP address and exposed ports of your Minikube server by running the following command. - -```bash - -minikube service pulsar-mini-proxy -n pulsar - -``` - -**Output** - -```bash - -|-----------|-------------------|-------------|-------------------------| -| NAMESPACE | NAME | TARGET PORT | URL | -|-----------|-------------------|-------------|-------------------------| -| pulsar | pulsar-mini-proxy | http/80 | http://172.17.0.4:32305 | -| | | pulsar/6650 | http://172.17.0.4:31816 | -|-----------|-------------------|-------------|-------------------------| -🏃 Starting tunnel for service pulsar-mini-proxy. -|-----------|-------------------|-------------|------------------------| -| NAMESPACE | NAME | TARGET PORT | URL | -|-----------|-------------------|-------------|------------------------| -| pulsar | pulsar-mini-proxy | | http://127.0.0.1:61853 | -| | | | http://127.0.0.1:61854 | -|-----------|-------------------|-------------|------------------------| - -``` - -At this point, you can get the service URLs to connect to your Pulsar client. Here are URL examples: - -``` - -webServiceUrl=http://127.0.0.1:61853/ -brokerServiceUrl=pulsar://127.0.0.1:61854/ - -``` - -Then you can proceed with the following steps: - -1. Download the Apache Pulsar tarball from the [downloads page](https://pulsar.apache.org/download/). - -2. Decompress the tarball based on your download file. - - ```bash - - tar -xf .tar.gz - - ``` - -3. Expose `PULSAR_HOME`. - - (1) Enter the directory of the decompressed download file. - - (2) Expose `PULSAR_HOME` as the environment variable. - - ```bash - - export PULSAR_HOME=$(pwd) - - ``` - -4. Configure the Pulsar client. - - In the `${PULSAR_HOME}/conf/client.conf` file, replace `webServiceUrl` and `brokerServiceUrl` with the service URLs you get from the above steps. - -5. Create a subscription to consume messages from `apache/pulsar/test-topic`. - - ```bash - - bin/pulsar-client consume -s sub apache/pulsar/test-topic -n 0 - - ``` - -6. Open a new terminal. In the new terminal, create a producer and send 10 messages to the `test-topic` topic. - - ```bash - - bin/pulsar-client produce apache/pulsar/test-topic -m "---------hello apache pulsar-------" -n 10 - - ``` - -7. Verify the results. - - - From the producer side - - **Output** - - The messages have been produced successfully. - - ```bash - - 18:15:15.489 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 10 messages successfully produced - - ``` - - - From the consumer side - - **Output** - - At the same time, you can receive the messages as below. 
- - ```bash - - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - - ``` - -## Step 4: Use Pulsar Manager to manage the cluster - -[Pulsar Manager](administration-pulsar-manager.md) is a web-based GUI management tool for managing and monitoring Pulsar. - -1. By default, the `Pulsar Manager` is exposed as a separate `LoadBalancer`. You can open the Pulsar Manager UI using the following command: - - ```bash - - minikube service -n pulsar pulsar-mini-pulsar-manager - - ``` - -2. The Pulsar Manager UI will be open in your browser. You can use the username `pulsar` and password `pulsar` to log into Pulsar Manager. - -3. In Pulsar Manager UI, you can create an environment. - - - Click `New Environment` button in the top-left corner. - - Type `pulsar-mini` for the field `Environment Name` in the popup window. - - Type `http://pulsar-mini-broker:8080` for the field `Service URL` in the popup window. - - Click `Confirm` button in the popup window. - -4. After successfully creating an environment, you are redirected to the `tenants` page of that environment. Then you can create `tenants`, `namespaces` and `topics` using the Pulsar Manager. - -## Step 5: Use Prometheus and Grafana to monitor cluster - -Grafana is an open-source visualization tool, which can be used for visualizing time series data into dashboards. - -1. By default, the Grafana is exposed as a separate `LoadBalancer`. You can open the Grafana UI using the following command: - - ```bash - - minikube service pulsar-mini-grafana -n pulsar - - ``` - -2. The Grafana UI is open in your browser. You can use the username `pulsar` and password `pulsar` to log into the Grafana Dashboard. - -3. You can view dashboards for different components of a Pulsar cluster. diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/getting-started-pulsar.md b/site2/website/versioned_docs/version-2.8.1-deprecated/getting-started-pulsar.md deleted file mode 100644 index 752590f57b5585..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/getting-started-pulsar.md +++ /dev/null @@ -1,72 +0,0 @@ ---- -id: pulsar-2.0 -title: Pulsar 2.0 -sidebar_label: "Pulsar 2.0" -original_id: pulsar-2.0 ---- - -Pulsar 2.0 is a major new release for Pulsar that brings some bold changes to the platform, including [simplified topic names](#topic-names), the addition of the [Pulsar Functions](functions-overview.md) feature, some terminology changes, and more. - -## New features in Pulsar 2.0 - -Feature | Description -:-------|:----------- -[Pulsar Functions](functions-overview.md) | A lightweight compute option for Pulsar - -## Major changes - -There are a few major changes that you should be aware of, as they may significantly impact your day-to-day usage. - -### Properties versus tenants - -Previously, Pulsar had a concept of properties. A property is essentially the exact same thing as a tenant, so the "property" terminology has been removed in version 2.0. 
The [`pulsar-admin properties`](reference-pulsar-admin.md#pulsar-admin) command-line interface, for example, has been replaced with the [`pulsar-admin tenants`](reference-pulsar-admin.md#pulsar-admin-tenants) interface. In some cases the properties terminology is still used, but it is now considered deprecated and will be removed entirely in a future release.

### Topic names

Prior to version 2.0, *all* Pulsar topics had the following form:

```http
{persistent|non-persistent}://property/cluster/namespace/topic
```

Several important changes have been made in Pulsar 2.0:

* There is no longer a [cluster component](#no-cluster)
* Properties have been [renamed to tenants](#tenants)
* You can use a [flexible](#flexible-topic-naming) naming system to shorten many topic names
* `/` is not allowed in topic names

#### No cluster component

The cluster component has been removed from topic names. Thus, all topic names now have the following form:

```http
{persistent|non-persistent}://tenant/namespace/topic
```

> Existing topics that use the legacy name format will continue to work without any change, and there are no plans to change that.

#### Flexible topic naming

All topic names in Pulsar 2.0 internally have the form shown [above](#no-cluster-component), but you can now use shorthand names in many cases (for the sake of simplicity). The flexible naming system stems from the fact that there is now a default topic type, tenant, and namespace:

Topic aspect | Default
:------------|:-------
topic type | `persistent`
tenant | `public`
namespace | `default`

The table below shows some example topic name translations that use implicit defaults:

Input topic name | Translated topic name
:----------------|:---------------------
`my-topic` | `persistent://public/default/my-topic`
`my-tenant/my-namespace/my-topic` | `persistent://my-tenant/my-namespace/my-topic`

> For [non-persistent topics](concepts-messaging.md#non-persistent-topics), you need to specify the entire topic name, as the default-based rules for persistent topic names don't apply. Thus you cannot use a shorthand name like `non-persistent://my-topic`; use `non-persistent://public/default/my-topic` instead.

diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/getting-started-standalone.md b/site2/website/versioned_docs/version-2.8.1-deprecated/getting-started-standalone.md
deleted file mode 100644
index cea47efd08d4b3..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/getting-started-standalone.md
+++ /dev/null
@@ -1,271 +0,0 @@
---
id: getting-started-standalone
title: Set up a standalone Pulsar locally
sidebar_label: "Run Pulsar locally"
original_id: getting-started-standalone
---

For local development and testing, you can run Pulsar in standalone mode on your machine. The standalone mode includes a Pulsar broker and the necessary ZooKeeper and BookKeeper components running inside of a single Java Virtual Machine (JVM) process.

> #### Pulsar in production?
> If you're looking to run a full production Pulsar installation, see the [Deploying a Pulsar instance](deploy-bare-metal.md) guide.

## Install Pulsar standalone

This tutorial guides you through every step of the installation process.

### System requirements

Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install a 64-bit JRE/JDK 8 or later version; JRE/JDK 11 is recommended.
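Before moving on, you can quickly confirm that the JVM on your `PATH` satisfies this requirement. A minimal check; the `grep` filter assumes the usual HotSpot banner, which typically includes "64-Bit":

```bash
# Print the active Java version; it should report 1.8 (JDK 8) or later.
java -version

# Optionally confirm a 64-bit JVM; HotSpot builds typically print "64-Bit" here.
java -version 2>&1 | grep -i "64-bit"
```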
- -:::tip - -By default, Pulsar allocates 2G JVM heap memory to start. It can be changed in `conf/pulsar_env.sh` file under `PULSAR_MEM`. This is extra options passed into JVM. - -::: - -:::note - -Broker is only supported on 64-bit JVM. - -::: - -### Install Pulsar using binary release - -To get started with Pulsar, download a binary tarball release in one of the following ways: - -* download from the Apache mirror (Pulsar @pulsar:version@ binary release) - -* download from the Pulsar [downloads page](pulsar:download_page_url) - -* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) - -* use [wget](https://www.gnu.org/software/wget): - - ```shell - - $ wget pulsar:binary_release_url - - ``` - -After you download the tarball, untar it and use the `cd` command to navigate to the resulting directory: - -```bash - -$ tar xvfz apache-pulsar-@pulsar:version@-bin.tar.gz -$ cd apache-pulsar-@pulsar:version@ - -``` - -#### What your package contains - -The Pulsar binary package initially contains the following directories: - -Directory | Contains -:---------|:-------- -`bin` | Pulsar's command-line tools, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/). -`conf` | Configuration files for Pulsar, including [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more. -`examples` | A Java JAR file containing [Pulsar Functions](functions-overview.md) example. -`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files used by Pulsar. -`licenses` | License files, in the`.txt` form, for various components of the Pulsar [codebase](https://github.com/apache/pulsar). - -These directories are created once you begin running Pulsar. - -Directory | Contains -:---------|:-------- -`data` | The data storage directory used by ZooKeeper and BookKeeper. -`instances` | Artifacts created for [Pulsar Functions](functions-overview.md). -`logs` | Logs created by the installation. - -:::tip - -If you want to use builtin connectors and tiered storage offloaders, you can install them according to the following instructions: -* [Install builtin connectors (optional)](#install-builtin-connectors-optional) -* [Install tiered storage offloaders (optional)](#install-tiered-storage-offloaders-optional) -Otherwise, skip this step and perform the next step [Start Pulsar standalone](#start-pulsar-standalone). Pulsar can be successfully installed without installing bulitin connectors and tiered storage offloaders. - -::: - -### Install builtin connectors (optional) - -Since `2.1.0-incubating` release, Pulsar releases a separate binary distribution, containing all the `builtin` connectors. -To enable those `builtin` connectors, you can download the connectors tarball release in one of the following ways: - -* download from the Apache mirror Pulsar IO Connectors @pulsar:version@ release - -* download from the Pulsar [downloads page](pulsar:download_page_url) - -* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) - -* use [wget](https://www.gnu.org/software/wget): - - ```shell - - $ wget pulsar:connector_release_url/{connector}-@pulsar:version@.nar - - ``` - -After you download the nar file, copy the file to the `connectors` directory in the pulsar directory. 
-For example, if you download the `pulsar-io-aerospike-@pulsar:version@.nar` connector file, enter the following commands: - -```bash - -$ mkdir connectors -$ mv pulsar-io-aerospike-@pulsar:version@.nar connectors - -$ ls connectors -pulsar-io-aerospike-@pulsar:version@.nar -... - -``` - -:::note - -* If you are running Pulsar in a bare metal cluster, make sure `connectors` tarball is unzipped in every pulsar directory of the broker -(or in every pulsar directory of function-worker if you are running a separate worker cluster for Pulsar Functions). -* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DC/OS](https://dcos.io/)), -you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled [all builtin connectors](io-overview.md#working-with-connectors). - -::: - -### Install tiered storage offloaders (optional) - -:::tip - -Since `2.2.0` release, Pulsar releases a separate binary distribution, containing the tiered storage offloaders. -To enable tiered storage feature, follow the instructions below; otherwise skip this section. - -::: - -To get started with [tiered storage offloaders](concepts-tiered-storage.md), you need to download the offloaders tarball release on every broker node in one of the following ways: - -* download from the Apache mirror Pulsar Tiered Storage Offloaders @pulsar:version@ release - -* download from the Pulsar [downloads page](pulsar:download_page_url) - -* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) - -* use [wget](https://www.gnu.org/software/wget): - - ```shell - - $ wget pulsar:offloader_release_url - - ``` - -After you download the tarball, untar the offloaders package and copy the offloaders as `offloaders` -in the pulsar directory: - -```bash - -$ tar xvfz apache-pulsar-offloaders-@pulsar:version@-bin.tar.gz - -// you will find a directory named `apache-pulsar-offloaders-@pulsar:version@` in the pulsar directory -// then copy the offloaders - -$ mv apache-pulsar-offloaders-@pulsar:version@/offloaders offloaders - -$ ls offloaders -tiered-storage-jcloud-@pulsar:version@.nar - -``` - -For more information on how to configure tiered storage, see [Tiered storage cookbook](cookbooks-tiered-storage.md). - -:::note - -* If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's pulsar directory. -* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DC/OS](https://dcos.io/)), -you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - -::: - -## Start Pulsar standalone - -Once you have an up-to-date local copy of the release, you can start a local cluster using the [`pulsar`](reference-cli-tools.md#pulsar) command, which is stored in the `bin` directory, and specifying that you want to start Pulsar in standalone mode. 
- -```bash - -$ bin/pulsar standalone - -``` - -If you have started Pulsar successfully, you will see `INFO`-level log messages like this: - -```bash - -2017-06-01 14:46:29,192 - INFO - [main:WebSocketService@95] - Configuration Store cache started -2017-06-01 14:46:29,192 - INFO - [main:AuthenticationService@61] - Authentication is disabled -2017-06-01 14:46:29,192 - INFO - [main:WebSocketService@108] - Pulsar WebSocket Service started - -``` - -:::tip - -* The service is running on your terminal, which is under your direct control. If you need to run other commands, open a new terminal window. - -::: - -You can also run the service as a background process using the `pulsar-daemon start standalone` command. For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon). -> -> * By default, there is no encryption, authentication, or authorization configured. Apache Pulsar can be accessed from remote server without any authorization. Please do check [Security Overview](security-overview.md) document to secure your deployment. -> -> * When you start a local standalone cluster, a `public/default` [namespace](concepts-messaging.md#namespaces) is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics). - -## Use Pulsar standalone - -Pulsar provides a CLI tool called [`pulsar-client`](reference-cli-tools.md#pulsar-client). The pulsar-client tool enables you to consume and produce messages to a Pulsar topic in a running cluster. - -### Consume a message - -The following command consumes a message with the subscription name `first-subscription` to the `my-topic` topic: - -```bash - -$ bin/pulsar-client consume my-topic -s "first-subscription" - -``` - -If the message has been successfully consumed, you will see a confirmation like the following in the `pulsar-client` logs: - -``` - -09:56:55.566 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.MultiTopicsConsumerImpl - [TopicsConsumerFakeTopicNamee2df9] [first-subscription] Success subscribe new topic my-topic in topics consumer, partitions: 4, allTopicPartitionsNumber: 4 - -``` - -:::tip - -As you have noticed that we do not explicitly create the `my-topic` topic, from which we consume the message. When you consume a message from a topic that does not yet exist, Pulsar creates that topic for you automatically. Producing a message to a topic that does not exist will automatically create that topic for you as well. - -::: - -### Produce a message - -The following command produces a message saying `hello-pulsar` to the `my-topic` topic: - -```bash - -$ bin/pulsar-client produce my-topic --messages "hello-pulsar" - -``` - -If the message has been successfully published to the topic, you will see a confirmation like the following in the `pulsar-client` logs: - -``` - -13:09:39.356 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully produced - -``` - -## Stop Pulsar standalone - -Press `Ctrl+C` to stop a local standalone Pulsar. - -:::tip - -If the service runs as a background process using the `pulsar-daemon start standalone` command, then use the `pulsar-daemon stop standalone` command to stop the service. -For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon). 
- -::: - diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/helm-deploy.md b/site2/website/versioned_docs/version-2.8.1-deprecated/helm-deploy.md deleted file mode 100644 index 93709f7091c1ea..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/helm-deploy.md +++ /dev/null @@ -1,434 +0,0 @@ ---- -id: helm-deploy -title: Deploy Pulsar cluster using Helm -sidebar_label: "Deployment" -original_id: helm-deploy ---- - -Before running `helm install`, you need to decide how to run Pulsar. -Options can be specified using Helm's `--set option.name=value` command line option. - -## Select configuration options - -In each section, collect the options that are combined to use with the `helm install` command. - -### Kubernetes namespace - -By default, the Pulsar Helm chart is installed to a namespace called `pulsar`. - -```yaml - -namespace: pulsar - -``` - -To install the Pulsar Helm chart into a different Kubernetes namespace, you can include this option in the `helm install` command. - -```bash - ---set namespace= - -``` - -By default, the Pulsar Helm chart doesn't create the namespace. - -```yaml - -namespaceCreate: false - -``` - -To use the Pulsar Helm chart to create the Kubernetes namespace automatically, you can include this option in the `helm install` command. - -```bash - ---set namespaceCreate=true - -``` - -### Persistence - -By default, the Pulsar Helm chart creates Volume Claims with the expectation that a dynamic provisioner creates the underlying Persistent Volumes. - -```yaml - -volumes: - persistence: true - # configure the components to use local persistent volume - # the local provisioner should be installed prior to enable local persistent volume - local_storage: false - -``` - -To use local persistent volumes as the persistent storage for Helm release, you can install the [local storage provisioner](#install-local-storage-provisioner) and include the following option in the `helm install` command. - -```bash - ---set volumes.local_storage=true - -``` - -:::note - -Before installing the production instance of Pulsar, ensure to plan the storage settings to avoid extra storage migration work. Because after initial installation, you must edit Kubernetes objects manually if you want to change storage settings. - -::: - -The Pulsar Helm chart is designed for production use. To use the Pulsar Helm chart in a development environment (such as Minikube), you can disable persistence by including this option in your `helm install` command. - -```bash - ---set volumes.persistence=false - -``` - -### Affinity - -By default, `anti-affinity` is enabled to ensure pods of the same component can run on different nodes. - -```yaml - -affinity: - anti_affinity: true - -``` - -To use the Pulsar Helm chart in a development environment (such as Minikube), you can disable `anti-affinity` by including this option in your `helm install` command. - -```bash - ---set affinity.anti_affinity=false - -``` - -### Components - -The Pulsar Helm chart is designed for production usage. It deploys a production-ready Pulsar cluster, including Pulsar core components and monitoring components. - -You can customize the components to be deployed by turning on/off individual components. 
- -```yaml - -## Components -## -## Control what components of Apache Pulsar to deploy for the cluster -components: - # zookeeper - zookeeper: true - # bookkeeper - bookkeeper: true - # bookkeeper - autorecovery - autorecovery: true - # broker - broker: true - # functions - functions: true - # proxy - proxy: true - # toolset - toolset: true - # pulsar manager - pulsar_manager: true - -## Monitoring Components -## -## Control what components of the monitoring stack to deploy for the cluster -monitoring: - # monitoring - prometheus - prometheus: true - # monitoring - grafana - grafana: true - -``` - -### Docker images - -The Pulsar Helm chart is designed to enable controlled upgrades. So it can configure independent image versions for components. You can customize the images by setting individual component. - -```yaml - -## Images -## -## Control what images to use for each component -images: - zookeeper: - repository: apachepulsar/pulsar-all - tag: 2.5.0 - pullPolicy: IfNotPresent - bookie: - repository: apachepulsar/pulsar-all - tag: 2.5.0 - pullPolicy: IfNotPresent - autorecovery: - repository: apachepulsar/pulsar-all - tag: 2.5.0 - pullPolicy: IfNotPresent - broker: - repository: apachepulsar/pulsar-all - tag: 2.5.0 - pullPolicy: IfNotPresent - proxy: - repository: apachepulsar/pulsar-all - tag: 2.5.0 - pullPolicy: IfNotPresent - functions: - repository: apachepulsar/pulsar-all - tag: 2.5.0 - prometheus: - repository: prom/prometheus - tag: v1.6.3 - pullPolicy: IfNotPresent - grafana: - repository: streamnative/apache-pulsar-grafana-dashboard-k8s - tag: 0.0.4 - pullPolicy: IfNotPresent - pulsar_manager: - repository: apachepulsar/pulsar-manager - tag: v0.1.0 - pullPolicy: IfNotPresent - hasCommand: false - -``` - -### TLS - -The Pulsar Helm chart can be configured to enable TLS (Transport Layer Security) to protect all the traffic between components. Before enabling TLS, you have to provision TLS certificates for the required components. - -#### Provision TLS certificates using cert-manager - -To use the `cert-manager` to provision the TLS certificates, you have to install the [cert-manager](#install-cert-manager) before installing the Pulsar Helm chart. After successfully installing the cert-manager, you can set `certs.internal_issuer.enabled` to `true`. Therefore, the Pulsar Helm chart can use the `cert-manager` to generate `selfsigning` TLS certificates for the configured components. - -```yaml - -certs: - internal_issuer: - enabled: false - component: internal-cert-issuer - type: selfsigning - -``` - -You can also customize the generated TLS certificates by configuring the fields as the following. - -```yaml - -tls: - # common settings for generating certs - common: - # 90d - duration: 2160h - # 15d - renewBefore: 360h - organization: - - pulsar - keySize: 4096 - keyAlgorithm: rsa - keyEncoding: pkcs8 - -``` - -#### Enable TLS - -After installing the `cert-manager`, you can set `tls.enabled` to `true` to enable TLS encryption for the entire cluster. - -```yaml - -tls: - enabled: false - -``` - -You can also configure whether to enable TLS encryption for individual component. 
- -```yaml - -tls: - # settings for generating certs for proxy - proxy: - enabled: false - cert_name: tls-proxy - # settings for generating certs for broker - broker: - enabled: false - cert_name: tls-broker - # settings for generating certs for bookies - bookie: - enabled: false - cert_name: tls-bookie - # settings for generating certs for zookeeper - zookeeper: - enabled: false - cert_name: tls-zookeeper - # settings for generating certs for recovery - autorecovery: - cert_name: tls-recovery - # settings for generating certs for toolset - toolset: - cert_name: tls-toolset - -``` - -### Authentication - -By default, authentication is disabled. You can set `auth.authentication.enabled` to `true` to enable authentication. -Currently, the Pulsar Helm chart only supports JWT authentication provider. You can set `auth.authentication.provider` to `jwt` to use the JWT authentication provider. - -```yaml - -# Enable or disable broker authentication and authorization. -auth: - authentication: - enabled: false - provider: "jwt" - jwt: - # Enable JWT authentication - # If the token is generated by a secret key, set the usingSecretKey as true. - # If the token is generated by a private key, set the usingSecretKey as false. - usingSecretKey: false - superUsers: - # broker to broker communication - broker: "broker-admin" - # proxy to broker communication - proxy: "proxy-admin" - # pulsar-admin client to broker/proxy communication - client: "admin" - -``` - -To enable authentication, you can run [prepare helm release](#prepare-the-helm-release) to generate token secret keys and tokens for three super users specified in the `auth.superUsers` field. The generated token keys and super user tokens are uploaded and stored as Kubernetes secrets prefixed with `-token-`. You can use the following command to find those secrets. - -```bash - -kubectl get secrets -n - -``` - -### Authorization - -By default, authorization is disabled. Authorization can be enabled only when authentication is enabled. - -```yaml - -auth: - authorization: - enabled: false - -``` - -To enable authorization, you can include this option in the `helm install` command. - -```bash - ---set auth.authorization.enabled=true - -``` - -### CPU and RAM resource requirements - -By default, the resource requests and the number of replicas for the Pulsar components in the Pulsar Helm chart are adequate for a small production deployment. If you deploy a non-production instance, you can reduce the defaults to fit into a smaller cluster. - -Once you have all of your configuration options collected, you can install dependent charts before installing the Pulsar Helm chart. - -## Install dependent charts - -### Install local storage provisioner - -To use local persistent volumes as the persistent storage, you need to install a storage provisioner for [local persistent volumes](https://kubernetes.io/blog/2019/04/04/kubernetes-1.14-local-persistent-volumes-ga/). - -One of the easiest way to get started is to use the local storage provisioner provided along with the Pulsar Helm chart. - -``` - -helm repo add streamnative https://charts.streamnative.io -helm repo update -helm install pulsar-storage-provisioner streamnative/local-storage-provisioner - -``` - -### Install cert-manager - -The Pulsar Helm chart uses the [cert-manager](https://github.com/jetstack/cert-manager) to provision and manage TLS certificates automatically. To enable TLS encryption for brokers or proxies, you need to install the cert-manager in advance. 
For details about how to install the cert-manager, follow the [official instructions](https://cert-manager.io/docs/installation/kubernetes/#installing-with-helm).

Alternatively, we provide a bash script [install-cert-manager.sh](https://github.com/apache/pulsar-helm-chart/blob/master/scripts/cert-manager/install-cert-manager.sh) to install a cert-manager release to the namespace `cert-manager`.

```bash
git clone https://github.com/apache/pulsar-helm-chart
cd pulsar-helm-chart
./scripts/cert-manager/install-cert-manager.sh
```

## Prepare Helm release

Once you have installed all the dependent charts and collected all of your configuration options, you can run [prepare_helm_release.sh](https://github.com/apache/pulsar-helm-chart/blob/master/scripts/pulsar/prepare_helm_release.sh) to prepare the Helm release.

```bash
git clone https://github.com/apache/pulsar-helm-chart
cd pulsar-helm-chart
./scripts/pulsar/prepare_helm_release.sh -n <k8s-namespace> -k <pulsar-release-name>
```

The `prepare_helm_release` script creates the following resources:

- A Kubernetes namespace for installing the Pulsar release
- JWT secret keys and tokens for three super users: `broker-admin`, `proxy-admin`, and `admin`. By default, it generates an asymmetric public/private key pair. You can choose to generate a symmetric secret key by specifying `--symmetric`.
  - The `proxy-admin` role is used for proxies to communicate to brokers.
  - The `broker-admin` role is used for inter-broker communications.
  - The `admin` role is used by the admin tools.

## Deploy Pulsar cluster using Helm

Once you have finished the following three things, you can install a Helm release.

- Collect all of your configuration options.
- Install dependent charts.
- Prepare the Helm release.

In this example, we name our Helm release `pulsar`.

```bash
helm repo add apache https://pulsar.apache.org/charts
helm repo update
helm install pulsar apache/pulsar \
    --timeout 10m \
    --set initialize=true \
    --set [your configuration options]
```

:::note

For the first deployment, add the `--set initialize=true` option to initialize bookie and Pulsar cluster metadata.

:::

You can also use the `--version <pulsar-helm-chart-version>` option if you want to install a specific version of the Pulsar Helm chart.

## Monitor deployment

A list of installed resources is output once the Pulsar cluster is deployed. This may take 5-10 minutes.

The status of the deployment can be checked by running the `helm status pulsar` command, which can also be done while the deployment is taking place if you run the command in another terminal.

## Access Pulsar cluster

The default values create a `ClusterIP` for the following resources, which you can use to interact with the cluster.

- Proxy: You can use the IP address to produce and consume messages to the installed Pulsar cluster.
- Pulsar Manager: You can access the Pulsar Manager UI at `http://<pulsar-manager-ip>:9527`.
- Grafana Dashboard: You can access the Grafana dashboard at `http://<grafana-dashboard-ip>:3000`.
To find the IP addresses of those components, run the following command:

```bash
kubectl get service -n <k8s-namespace>
```

diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/helm-install.md b/site2/website/versioned_docs/version-2.8.1-deprecated/helm-install.md
deleted file mode 100644
index 1f4d5eb69d5ddd..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/helm-install.md
+++ /dev/null
@@ -1,44 +0,0 @@
---
id: helm-install
title: Install Apache Pulsar using Helm
sidebar_label: "Install"
original_id: helm-install
---

Install Apache Pulsar on Kubernetes with the official Pulsar Helm chart.

## Requirements

To deploy Apache Pulsar on Kubernetes, the following are required.

- kubectl 1.14 or higher, compatible with your cluster ([+/- 1 minor release from your cluster](https://kubernetes.io/docs/tasks/tools/install-kubectl/#before-you-begin))
- Helm v3 (3.0.2 or higher)
- A Kubernetes cluster, version 1.14 or higher

## Environment setup

Before deploying Pulsar, you need to prepare your environment.

### Tools

Install [`helm`](helm-tools.md) and [`kubectl`](helm-tools.md) on your computer.

## Cloud cluster preparation

:::note

Kubernetes 1.14 or higher is required.

:::

To create and connect to the Kubernetes cluster, follow the instructions:

- [Google Kubernetes Engine](helm-prepare.md#google-kubernetes-engine)

## Pulsar deployment

Once the environment is set up and configuration is generated, you can proceed to the [deployment of Pulsar](helm-deploy.md).

## Pulsar upgrade

To upgrade an existing Kubernetes installation, follow the [upgrade documentation](helm-upgrade.md).

diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/helm-overview.md b/site2/website/versioned_docs/version-2.8.1-deprecated/helm-overview.md
deleted file mode 100644
index 385d535e319b65..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/helm-overview.md
+++ /dev/null
@@ -1,104 +0,0 @@
---
id: helm-overview
title: Apache Pulsar Helm Chart
sidebar_label: "Overview"
original_id: helm-overview
---

This is the officially supported Helm chart to install Apache Pulsar in a cloud-native environment. It was enhanced based on StreamNative's [Helm Chart](https://github.com/streamnative/charts).

## Introduction

The Apache Pulsar Helm chart is one of the most convenient ways to operate Pulsar on Kubernetes. This Pulsar Helm chart contains all the required components to get started and can scale to large deployments.

This chart includes all the components for a complete experience, but each part can be configured to be installed separately.
- Pulsar core components:
  - ZooKeeper
  - Bookies
  - Brokers
  - Function workers
  - Proxies
- Control Center:
  - Pulsar Manager
  - Prometheus
  - Grafana

It includes support for:

- Security
  - Automatically provisioned TLS certificates, using [Jetstack](https://www.jetstack.io/)'s [cert-manager](https://cert-manager.io/docs/)
    - self-signed
    - [Let's Encrypt](https://letsencrypt.org/)
  - TLS Encryption
    - Proxy
    - Broker
    - Toolset
    - Bookie
    - ZooKeeper
  - Authentication
    - JWT
  - Authorization
- Storage
  - Non-persistence storage
  - Persistence volume
  - Local persistent volumes
- Functions
  - Kubernetes Runtime
  - Process Runtime
  - Thread Runtime
- Operations
  - Independent image versions for all components, enabling controlled upgrades

## Pulsar Helm chart quick start

To get up and running with these charts as fast as possible, in a **non-production** use case, we provide a [quick start guide](getting-started-helm.md) for Proof of Concept (PoC) deployments.

This guide walks the user through deploying these charts with default values and features, but it *does not* meet production-ready requirements. To deploy these charts into production under sustained load, follow the complete [Installation Guide](helm-install.md).

## Troubleshooting

We have done our best to make these charts as seamless as possible. Occasionally, issues arise that are outside of our control. We have collected tips and tricks for troubleshooting common issues. Please check them first before raising an [issue](https://github.com/apache/pulsar/issues/new/choose), and feel free to add to them by raising a [Pull Request](https://github.com/apache/pulsar/compare).

## Installation

The Apache Pulsar Helm chart contains all required dependencies.

If you deploy a PoC for testing, we strongly suggest you follow our [Quick Start Guide](getting-started-helm.md) for your first iteration.

1. [Preparation](helm-prepare.md)
2. [Deployment](helm-deploy.md)

## Upgrading

Once the Pulsar Helm chart is installed, use `helm upgrade` to complete configuration changes and chart updates.

```bash
helm repo add apache https://pulsar.apache.org/charts
helm repo update
helm get values <pulsar-release-name> > pulsar.yaml
helm upgrade <pulsar-release-name> apache/pulsar -f pulsar.yaml
```

For more detailed information, see [Upgrading](helm-upgrade.md).

## Uninstallation

To uninstall the Pulsar Helm chart, run the following command:

```bash
helm delete <pulsar-release-name>
```

For the purposes of continuity, these charts have some Kubernetes objects that cannot be removed when performing `helm delete`.
It is recommended to *consciously* remove these items, as they affect re-deployment. A cleanup sketch follows this list.

* PVCs for stateful data: *consciously* remove these items.
  - ZooKeeper: This is your metadata.
  - BookKeeper: This is your data.
  - Prometheus: This is your metrics data, which can be safely removed.
* Secrets: if the secrets are generated by the [prepare release script](https://github.com/apache/pulsar-helm-chart/blob/master/scripts/pulsar/prepare_helm_release.sh), they contain secret keys and tokens. You can use the [cleanup release script](https://github.com/apache/pulsar-helm-chart/blob/master/scripts/pulsar/cleanup_helm_release.sh) to remove these secrets and tokens as needed.
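As a minimal sketch of the cleanup steps above: the release name and namespace (both `pulsar` here) are assumptions, the PVC names are discovered rather than hard-coded, and the cleanup script flags are assumed to mirror `prepare_helm_release.sh`, so verify them with `--help` first.

```bash
# Remove the Helm release itself (release name and namespace are assumptions).
helm delete pulsar -n pulsar

# List the PVCs that remain, then delete them once the metadata (ZooKeeper)
# and data (BookKeeper) they hold are no longer needed.
kubectl get pvc -n pulsar
kubectl delete pvc <pvc-name> -n pulsar

# Remove the generated secrets and tokens; flags are assumed to mirror
# prepare_helm_release.sh, so check ./scripts/pulsar/cleanup_helm_release.sh --help.
./scripts/pulsar/cleanup_helm_release.sh -n pulsar -k pulsar
```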
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/helm-prepare.md b/site2/website/versioned_docs/version-2.8.1-deprecated/helm-prepare.md deleted file mode 100644 index 5e9f2f9ef4f680..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/helm-prepare.md +++ /dev/null @@ -1,92 +0,0 @@ ---- -id: helm-prepare -title: Prepare Kubernetes resources -sidebar_label: "Prepare" -original_id: helm-prepare ---- - -For a fully functional Pulsar cluster, you need a few resources before deploying the Apache Pulsar Helm chart. The following provides instructions to prepare the Kubernetes cluster before deploying the Pulsar Helm chart. - -- [Google Kubernetes Engine](#google-kubernetes-engine) - - [Manual cluster creation](#manual-cluster-creation) - - [Scripted cluster creation](#scripted-cluster-creation) - - [Create cluster with local SSDs](#create-cluster-with-local-ssds) -- [Next Steps](#next-steps) - -## Google Kubernetes Engine - -To get started easier, a script is provided to create the cluster automatically. Alternatively, a cluster can be created manually as well. - -- [Google Kubernetes Engine](#google-kubernetes-engine) - - [Manual cluster creation](#manual-cluster-creation) - - [Scripted cluster creation](#scripted-cluster-creation) - - [Create cluster with local SSDs](#create-cluster-with-local-ssds) -- [Next Steps](#next-steps) - -### Manual cluster creation - -To provision a Kubernetes cluster manually, follow the [GKE instructions](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-cluster). - -Alternatively, you can use the [instructions](#scripted-cluster-creation) below to provision a GKE cluster as needed. - -### Scripted cluster creation - -A [bootstrap script](https://github.com/streamnative/charts/tree/master/scripts/pulsar/gke_bootstrap_script.sh) has been created to automate much of the setup process for users on GCP/GKE. - -The script can: - -1. Create a new GKE cluster. -2. Allow the cluster to modify DNS (Domain Name Server) records. -3. Setup `kubectl`, and connect it to the cluster. - -Google Cloud SDK is a dependency of this script, so ensure it is [set up correctly](helm-tools.md#connect-to-a-gke-cluster) for the script to work. - -The script reads various parameters from environment variables and an argument `up` or `down` for bootstrap and clean-up respectively. - -The following table describes all variables. - -| **Variable** | **Description** | **Default value** | -| ------------ | --------------- | ----------------- | -| PROJECT | ID of your GCP project | No default value. It requires to be set. | -| CLUSTER_NAME | Name of the GKE cluster | `pulsar-dev` | -| CONFDIR | Configuration directory to store Kubernetes configuration | ${HOME}/.config/streamnative | -| INT_NETWORK | IP space to use within this cluster | `default` | -| LOCAL_SSD_COUNT | Number of local SSD counts | 4 | -| MACHINE_TYPE | Type of machine to use for nodes | `n1-standard-4` | -| NUM_NODES | Number of nodes to be created in each of the cluster's zones | 4 | -| PREEMPTIBLE | Create nodes using preemptible VM instances in the new cluster. | false | -| REGION | Compute region for the cluster | `us-east1` | -| USE_LOCAL_SSD | Flag to create a cluster with local SSDs | false | -| ZONE | Compute zone for the cluster | `us-east1-b` | -| ZONE_EXTENSION | The extension (`a`, `b`, `c`) of the zone name of the cluster | `b` | -| EXTRA_CREATE_ARGS | Extra arguments passed to create command | | - -Run the script, by passing in your desired parameters. 
It can work with the default parameters except for `PROJECT`, which is required:

```bash
PROJECT=<gcloud-project-id> scripts/pulsar/gke_bootstrap_script.sh up
```

The script can also be used to clean up the created GKE resources.

```bash
PROJECT=<gcloud-project-id> scripts/pulsar/gke_bootstrap_script.sh down
```

#### Create cluster with local SSDs

To install the Pulsar Helm chart using local persistent volumes, you need to create a GKE cluster with local SSDs. You can do so by setting `USE_LOCAL_SSD` to `true` in the following command to create a Pulsar cluster with local SSDs.

```bash
PROJECT=<gcloud-project-id> USE_LOCAL_SSD=true LOCAL_SSD_COUNT=<local-ssd-count> scripts/pulsar/gke_bootstrap_script.sh up
```

## Next Steps

Continue with the [installation of the chart](helm-deploy.md) once you have the cluster up and running.

diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/helm-tools.md b/site2/website/versioned_docs/version-2.8.1-deprecated/helm-tools.md
deleted file mode 100644
index 6ba89006913b64..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/helm-tools.md
+++ /dev/null
@@ -1,43 +0,0 @@
---
id: helm-tools
title: Required tools for deploying Pulsar Helm Chart
sidebar_label: "Required Tools"
original_id: helm-tools
---

Before deploying Pulsar to your Kubernetes cluster, there are some tools you must have installed locally.

## kubectl

kubectl is the tool that talks to the Kubernetes API. kubectl 1.14 or higher is required, and it needs to be compatible with your cluster ([+/- 1 minor release from your cluster](https://kubernetes.io/docs/tasks/tools/install-kubectl/#before-you-begin)).

To install kubectl locally, follow the [Kubernetes documentation](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl).

The server version of kubectl cannot be obtained until we connect to a cluster.

## Helm

Helm is the package manager for Kubernetes. The Apache Pulsar Helm Chart is tested and supported with Helm v3.

### Get Helm

You can get Helm from the project's [releases page](https://github.com/helm/helm/releases), or follow other options under the official documentation of [installing Helm](https://helm.sh/docs/intro/install/).

### Next steps

Once kubectl and Helm are configured, you can configure your [Kubernetes cluster](helm-prepare.md).

## Additional information

### Templates

Templating in Helm is done through Golang's [text/template](https://golang.org/pkg/text/template/) and [sprig](https://godoc.org/github.com/Masterminds/sprig).

For more information about how all the inner workings behave, check these documents:

- [Functions and Pipelines](https://helm.sh/docs/chart_template_guide/functions_and_pipelines/)
- [Subcharts and Globals](https://helm.sh/docs/chart_template_guide/subcharts_and_globals/)

### Tips and tricks

For additional information on developing with Helm, check the [tips and tricks section](https://helm.sh/docs/howto/charts_tips_and_tricks/) in the Helm repository.
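As a quick sanity check against the requirements above, you can print the locally installed versions; both commands are standard kubectl and Helm options:

```bash
# kubectl must be 1.14 or higher; only the client version is available
# before connecting to a cluster.
kubectl version --client

# Helm must be v3 (3.0.2 or higher).
helm version
```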
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/helm-upgrade.md b/site2/website/versioned_docs/version-2.8.1-deprecated/helm-upgrade.md
deleted file mode 100644
index 7d671e6bfb3c10..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/helm-upgrade.md
+++ /dev/null
@@ -1,43 +0,0 @@
---
id: helm-upgrade
title: Upgrade Pulsar Helm release
sidebar_label: "Upgrade"
original_id: helm-upgrade
---

Before upgrading your Pulsar installation, check the change log corresponding to the specific release you want to upgrade to and look for any release notes that might pertain to the new Pulsar Helm chart version.

We also recommend that you provide all values using the `helm upgrade --set key=value` syntax or the `-f values.yaml` option instead of using `--reuse-values`, because some of the current values might be deprecated.

:::note

You can retrieve your previous `--set` arguments cleanly, with `helm get values <release-name>`. If you direct this into a file (`helm get values <release-name> > pulsar.yaml`), you can safely pass this file through `-f`, namely `helm upgrade <release-name> apache/pulsar -f pulsar.yaml`. This safely replaces the behavior of `--reuse-values`.

:::

## Steps

To upgrade Apache Pulsar to a newer version, follow these steps:

1. Check the change log for the specific version you would like to upgrade to.
2. Go through [deployment documentation](helm-deploy.md) step by step.
3. Extract your previous `--set` arguments with the following command.

   ```bash
   helm get values <release-name> > pulsar.yaml
   ```

4. Decide all the values you need to set.
5. Perform the upgrade, with all `--set` arguments extracted in step 3.

   ```bash
   helm upgrade <release-name> apache/pulsar \
       --version <new-version> \
       -f pulsar.yaml \
       --set ...
   ```

diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/io-aerospike-sink.md b/site2/website/versioned_docs/version-2.8.1-deprecated/io-aerospike-sink.md
deleted file mode 100644
index 63d7338a3ba91c..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/io-aerospike-sink.md
+++ /dev/null
@@ -1,26 +0,0 @@
---
id: io-aerospike-sink
title: Aerospike sink connector
sidebar_label: "Aerospike sink connector"
original_id: io-aerospike-sink
---

The Aerospike sink connector pulls messages from Pulsar topics and persists them to Aerospike clusters.

## Configuration

The configuration of the Aerospike sink connector has the following properties.

### Property

| Name | Type | Required | Default | Description |
|------|------|----------|---------|-------------|
| `seedHosts` | String | true | No default value | The comma-separated list of one or more Aerospike cluster hosts. Each host can be specified as a valid IP address or hostname followed by an optional port number. |
| `keyspace` | String | true | No default value | The Aerospike namespace. |
| `columnName` | String | true | No default value | The Aerospike column name. |
| `userName` | String | false | NULL | The Aerospike username. |
| `password` | String | false | NULL | The Aerospike password. |
| `keySet` | String | false | NULL | The Aerospike set name. |
| `maxConcurrentRequests` | int | false | 100 | The maximum number of concurrent Aerospike transactions that a sink can open. |
| `timeoutMs` | int | false | 100 | This property controls `socketTimeout` and `totalTimeout` for Aerospike transactions. |
| `retries` | int | false | 1 | The maximum number of retries before aborting a write transaction to Aerospike. |
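The original page ends at the property table, so the following is only an illustrative sketch of how these properties could be wired into a sink config and run locally. The config values, topic name, and NAR path are assumptions, and the `pulsar-admin sinks localrun` invocation mirrors the `source localrun` example shown below for the Canal connector.

```bash
# Write a sink config using the properties documented above (all values are examples).
cat > aerospike-sink-config.yaml <<'EOF'
configs:
    seedHosts: "localhost:3000"
    keyspace: "pulsar_test_keyspace"
    columnName: "col"
    keySet: "pulsar_test_set"
    maxConcurrentRequests: 100
EOF

# Run the sink locally against a Pulsar topic (NAR path and topic are assumptions).
bin/pulsar-admin sinks localrun \
    --archive connectors/pulsar-io-aerospike-@pulsar:version@.nar \
    --inputs test-aerospike-topic \
    --name aerospike-sink \
    --sink-config-file aerospike-sink-config.yaml
```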
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/io-canal-source.md b/site2/website/versioned_docs/version-2.8.1-deprecated/io-canal-source.md
deleted file mode 100644
index d1fd43bb0f74e4..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/io-canal-source.md
+++ /dev/null
@@ -1,235 +0,0 @@
---
id: io-canal-source
title: Canal source connector
sidebar_label: "Canal source connector"
original_id: io-canal-source
---

The Canal source connector pulls messages from MySQL to Pulsar topics.

## Configuration

The configuration of the Canal source connector has the following properties.

### Property

| Name | Required | Default | Description |
|------|----------|---------|-------------|
| `username` | true | None | Canal server account (not MySQL). |
| `password` | true | None | Canal server password (not MySQL). |
| `destination` | true | None | The source destination that the Canal source connector connects to. |
| `singleHostname` | false | None | Canal server address. |
| `singlePort` | false | None | Canal server port. |
| `cluster` | true | false | Whether to enable cluster mode based on the Canal server configuration. If set to `true` (**cluster** mode), the connector talks to `zkServers` to figure out the actual database host. If set to `false` (**standalone** mode), it connects to the database specified by `singleHostname` and `singlePort`. |
| `zkServers` | true | None | Address and port of the ZooKeeper that the Canal source connector talks to in order to figure out the actual database host. |
| `batchSize` | false | 1000 | Batch size to fetch from Canal. |

### Example

Before using the Canal connector, you can create a configuration file through one of the following methods.

* JSON

  ```json
  {
      "zkServers": "127.0.0.1:2181",
      "batchSize": "5120",
      "destination": "example",
      "username": "",
      "password": "",
      "cluster": false,
      "singleHostname": "127.0.0.1",
      "singlePort": "11111"
  }
  ```

* YAML

  You can create a YAML file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/canal/src/main/resources/canal-mysql-source-config.yaml) below to your YAML file.

  ```yaml
  configs:
      zkServers: "127.0.0.1:2181"
      batchSize: 5120
      destination: "example"
      username: ""
      password: ""
      cluster: false
      singleHostname: "127.0.0.1"
      singlePort: 11111
  ```

## Usage

Here is an example of storing MySQL data using the configuration file as above.

1. Start a MySQL server.

   ```bash
   $ docker pull mysql:5.7
   $ docker run -d -it --rm --name pulsar-mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=canal -e MYSQL_USER=mysqluser -e MYSQL_PASSWORD=mysqlpw mysql:5.7
   ```

2. Create a configuration file `mysqld.cnf`.

   ```bash
   [mysqld]
   pid-file = /var/run/mysqld/mysqld.pid
   socket = /var/run/mysqld/mysqld.sock
   datadir = /var/lib/mysql
   #log-error = /var/log/mysql/error.log
   # By default we only accept connections from localhost
   #bind-address = 127.0.0.1
   # Disabling symbolic-links is recommended to prevent assorted security risks
   symbolic-links=0
   log-bin=mysql-bin
   binlog-format=ROW
   server_id=1
   ```

3. Copy the configuration file `mysqld.cnf` to the MySQL server.

   ```bash
   $ docker cp mysqld.cnf pulsar-mysql:/etc/mysql/mysql.conf.d/
   ```

4. Restart the MySQL server.

   ```bash
   $ docker restart pulsar-mysql
   ```

5. Create a test database in the MySQL server.

   ```bash
   $ docker exec -it pulsar-mysql /bin/bash
   $ mysql -h 127.0.0.1 -uroot -pcanal -e 'create database test;'
   ```

6. Start a Canal server and connect it to the MySQL server.

   ```bash
   $ docker pull canal/canal-server:v1.1.2
   $ docker run -d -it --link pulsar-mysql -e canal.auto.scan=false -e canal.destinations=test -e canal.instance.master.address=pulsar-mysql:3306 -e canal.instance.dbUsername=root -e canal.instance.dbPassword=canal -e canal.instance.connectionCharset=UTF-8 -e canal.instance.tsdb.enable=true -e canal.instance.gtidon=false --name=pulsar-canal-server -p 8000:8000 -p 2222:2222 -p 11111:11111 -p 11112:11112 -m 4096m canal/canal-server:v1.1.2
   ```

7. Start Pulsar standalone.

   ```bash
   $ docker pull apachepulsar/pulsar:2.3.0
   $ docker run -d -it --link pulsar-canal-server -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-standalone apachepulsar/pulsar:2.3.0 bin/pulsar standalone
   ```

8. Modify the configuration file `canal-mysql-source-config.yaml`.

   ```yaml
   configs:
       zkServers: ""
       batchSize: "5120"
       destination: "test"
       username: ""
       password: ""
       cluster: false
       singleHostname: "pulsar-canal-server"
       singlePort: "11111"
   ```

9. Create a consumer file `pulsar-client.py`.
   ```python
   import pulsar

   client = pulsar.Client('pulsar://localhost:6650')
   consumer = client.subscribe('my-topic',
                               subscription_name='my-sub')

   while True:
       msg = consumer.receive()
       print("Received message: '%s'" % msg.data())
       consumer.acknowledge(msg)

   client.close()
   ```

10. Copy the configuration file `canal-mysql-source-config.yaml` and the consumer file `pulsar-client.py` to the Pulsar server.

    ```bash
    $ docker cp canal-mysql-source-config.yaml pulsar-standalone:/pulsar/conf/
    $ docker cp pulsar-client.py pulsar-standalone:/pulsar/
    ```

11. Download a Canal connector and start it.

    ```bash
    $ docker exec -it pulsar-standalone /bin/bash
    $ wget https://archive.apache.org/dist/pulsar/pulsar-2.3.0/connectors/pulsar-io-canal-2.3.0.nar -P connectors
    $ ./bin/pulsar-admin source localrun \
      --archive ./connectors/pulsar-io-canal-2.3.0.nar \
      --classname org.apache.pulsar.io.canal.CanalStringSource \
      --tenant public \
      --namespace default \
      --name canal \
      --destination-topic-name my-topic \
      --source-config-file /pulsar/conf/canal-mysql-source-config.yaml \
      --parallelism 1
    ```

12. Consume data from MySQL.

    ```bash
    $ docker exec -it pulsar-standalone /bin/bash
    $ python pulsar-client.py
    ```

13. Open another window to log in to the MySQL server.

    ```bash
    $ docker exec -it pulsar-mysql /bin/bash
    $ mysql -h 127.0.0.1 -uroot -pcanal
    ```

14. Create a table, and insert, delete, and update data in the MySQL server.

    ```bash
    mysql> use test;
    mysql> show tables;
    mysql> CREATE TABLE IF NOT EXISTS `test_table` (`test_id` INT UNSIGNED AUTO_INCREMENT,
           `test_title` VARCHAR(100) NOT NULL,
           `test_author` VARCHAR(40) NOT NULL,
           `test_date` DATE, PRIMARY KEY (`test_id`)) ENGINE=InnoDB DEFAULT CHARSET=utf8;
    mysql> INSERT INTO test_table (test_title, test_author, test_date) VALUES("a", "b", NOW());
    mysql> UPDATE test_table SET test_title='c' WHERE test_title='a';
    mysql> DELETE FROM test_table WHERE test_title='c';
    ```

diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/io-cassandra-sink.md b/site2/website/versioned_docs/version-2.8.1-deprecated/io-cassandra-sink.md
deleted file mode 100644
index b27a754f49e182..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/io-cassandra-sink.md
+++ /dev/null
@@ -1,57 +0,0 @@
---
id: io-cassandra-sink
title: Cassandra sink connector
sidebar_label: "Cassandra sink connector"
original_id: io-cassandra-sink
---

The Cassandra sink connector pulls messages from Pulsar topics and persists them to Cassandra clusters.

## Configuration

The configuration of the Cassandra sink connector has the following properties.

### Property

| Name | Type | Required | Default | Description |
|------|------|----------|---------|-------------|
| `roots` | String | true | " " (empty string) | A comma-separated list of Cassandra hosts to connect to. |
| `keyspace` | String | true | " " (empty string) | The keyspace used for writing Pulsar messages. **Note: `keyspace` should be created prior to a Cassandra sink.** |

    **Note: `keyspace` should be created before the Cassandra sink is created.**|
-| `keyname` | String|true| " " (empty string)| The key name of the Cassandra column family.

    The column is used for storing Pulsar message keys.

    If a Pulsar message doesn't have a key associated with it, the message value is used as the key. |
-| `columnFamily` | String|true| " " (empty string)| The Cassandra column family name.

    **Note: `columnFamily` should be created before the Cassandra sink is created.**|
-| `columnName` | String|true| " " (empty string) | The column name of the Cassandra column family.

    The column is used for storing Pulsar message values. | - -### Example - -Before using the Cassandra sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "roots": "localhost:9042", - "keyspace": "pulsar_test_keyspace", - "columnFamily": "pulsar_test_table", - "keyname": "key", - "columnName": "col" - } - - ``` - -* YAML - - ``` - - configs: - roots: "localhost:9042" - keyspace: "pulsar_test_keyspace" - columnFamily: "pulsar_test_table" - keyname: "key" - columnName: "col" - - ``` - -## Usage - -For more information about **how to connect Pulsar with Cassandra**, see [here](io-quickstart.md#connect-pulsar-to-apache-cassandra). diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/io-cdc-debezium.md b/site2/website/versioned_docs/version-2.8.1-deprecated/io-cdc-debezium.md deleted file mode 100644 index 293ccf2b35e8aa..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/io-cdc-debezium.md +++ /dev/null @@ -1,543 +0,0 @@ ---- -id: io-cdc-debezium -title: Debezium source connector -sidebar_label: "Debezium source connector" -original_id: io-cdc-debezium ---- - -The Debezium source connector pulls messages from MySQL or PostgreSQL -and persists the messages to Pulsar topics. - -## Configuration - -The configuration of Debezium source connector has the following properties. - -| Name | Required | Default | Description | -|------|----------|---------|-------------| -| `task.class` | true | null | A source task class that implemented in Debezium. | -| `database.hostname` | true | null | The address of a database server. | -| `database.port` | true | null | The port number of a database server.| -| `database.user` | true | null | The name of a database user that has the required privileges. | -| `database.password` | true | null | The password for a database user that has the required privileges. | -| `database.server.id` | true | null | The connector’s identifier that must be unique within a database cluster and similar to the database’s server-id configuration property. | -| `database.server.name` | true | null | The logical name of a database server/cluster, which forms a namespace and it is used in all the names of Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the Avro Connector is used. | -| `database.whitelist` | false | null | A list of all databases hosted by this server which is monitored by the connector.

    This is optional, and there are other properties for listing databases and tables to include or exclude from monitoring. | -| `key.converter` | true | null | The converter provided by Kafka Connect to convert record key. | -| `value.converter` | true | null | The converter provided by Kafka Connect to convert record value. | -| `database.history` | true | null | The name of the database history class. | -| `database.history.pulsar.topic` | true | null | The name of the database history topic where the connector writes and recovers DDL statements.

    **Note: this topic is for internal use only and should not be used by consumers.** | -| `database.history.pulsar.service.url` | true | null | Pulsar cluster service URL for history topic. | -| `pulsar.service.url` | true | null | Pulsar cluster service URL for the offset topic used in Debezium. You can use the `bin/pulsar-admin --admin-url http://pulsar:8080 sources localrun --source-config-file configs/pg-pulsar-config.yaml` command to point to the target Pulsar cluster.| -| `offset.storage.topic` | true | null | Record the last committed offsets that the connector successfully completes. | -| `mongodb.hosts` | true | null | The comma-separated list of hostname and port pairs (in the form 'host' or 'host:port') of the MongoDB servers in the replica set. The list contains a single hostname and a port pair. If mongodb.members.auto.discover is set to false, the host and port pair are prefixed with the replica set name (e.g., rs0/localhost:27017). | -| `mongodb.name` | true | null | A unique name that identifies the connector and/or MongoDB replica set or shared cluster that this connector monitors. Each server should be monitored by at most one Debezium connector, since this server name prefixes all persisted Kafka topics emanating from the MongoDB replica set or cluster. | -| `mongodb.user` | true | null | Name of the database user to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. | -| `mongodb.password` | true | null | Password to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. | -| `mongodb.task.id` | true | null | The taskId of the MongoDB connector that attempts to use a separate task for each replica set. | - - - -## Example of MySQL - -You need to create a configuration file before using the Pulsar Debezium connector. - -### Configuration - -You can use one of the following methods to create a configuration file. - -* JSON - - ```json - - { - "database.hostname": "localhost", - "database.port": "3306", - "database.user": "debezium", - "database.password": "dbz", - "database.server.id": "184054", - "database.server.name": "dbserver1", - "database.whitelist": "inventory", - "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory", - "database.history.pulsar.topic": "history-topic", - "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650", - "key.converter": "org.apache.kafka.connect.json.JsonConverter", - "value.converter": "org.apache.kafka.connect.json.JsonConverter", - "pulsar.service.url": "pulsar://127.0.0.1:6650", - "offset.storage.topic": "offset-topic" - } - - ``` - -* YAML - - You can create a `debezium-mysql-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/resources/debezium-mysql-source-config.yaml) below to the `debezium-mysql-source-config.yaml` file. 
- - ```yaml - - tenant: "public" - namespace: "default" - name: "debezium-mysql-source" - topicName: "debezium-mysql-topic" - archive: "connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar" - parallelism: 1 - - configs: - - ## config for mysql, docker image: debezium/example-mysql:0.8 - database.hostname: "localhost" - database.port: "3306" - database.user: "debezium" - database.password: "dbz" - database.server.id: "184054" - database.server.name: "dbserver1" - database.whitelist: "inventory" - database.history: "org.apache.pulsar.io.debezium.PulsarDatabaseHistory" - database.history.pulsar.topic: "history-topic" - database.history.pulsar.service.url: "pulsar://127.0.0.1:6650" - - ## KEY_CONVERTER_CLASS_CONFIG, VALUE_CONVERTER_CLASS_CONFIG - key.converter: "org.apache.kafka.connect.json.JsonConverter" - value.converter: "org.apache.kafka.connect.json.JsonConverter" - - ## PULSAR_SERVICE_URL_CONFIG - pulsar.service.url: "pulsar://127.0.0.1:6650" - - ## OFFSET_STORAGE_TOPIC_CONFIG - offset.storage.topic: "offset-topic" - - ``` - -### Usage - -This example shows how to change the data of a MySQL table using the Pulsar Debezium connector. - -1. Start a MySQL server with a database from which Debezium can capture changes. - - ```bash - - $ docker run -it --rm \ - --name mysql \ - -p 3306:3306 \ - -e MYSQL_ROOT_PASSWORD=debezium \ - -e MYSQL_USER=mysqluser \ - -e MYSQL_PASSWORD=mysqlpw debezium/example-mysql:0.8 - - ``` - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - -3. Start the Pulsar Debezium connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - Make sure the nar file is available at `connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar`. - - ```bash - - $ bin/pulsar-admin source localrun \ - --archive connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar \ - --name debezium-mysql-source --destination-topic-name debezium-mysql-topic \ - --tenant public \ - --namespace default \ - --source-config '{"database.hostname": "localhost","database.port": "3306","database.user": "debezium","database.password": "dbz","database.server.id": "184054","database.server.name": "dbserver1","database.whitelist": "inventory","database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory","database.history.pulsar.topic": "history-topic","database.history.pulsar.service.url": "pulsar://127.0.0.1:6650","key.converter": "org.apache.kafka.connect.json.JsonConverter","value.converter": "org.apache.kafka.connect.json.JsonConverter","pulsar.service.url": "pulsar://127.0.0.1:6650","offset.storage.topic": "offset-topic"}' - - ``` - - * Use the **YAML** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin source localrun \ - --source-config-file debezium-mysql-source-config.yaml - - ``` - -4. Subscribe the topic _sub-products_ for the table _inventory.products_. - - ```bash - - $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0 - - ``` - -5. Start a MySQL client in docker. - - ```bash - - $ docker run -it --rm \ - --name mysqlterm \ - --link mysql \ - --rm mysql:5.7 sh \ - -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"' - - ``` - -6. A MySQL client pops out. - - Use the following commands to change the data of the table _products_. 
- - ``` - - mysql> use inventory; - mysql> show tables; - mysql> SELECT * FROM products; - mysql> UPDATE products SET name='1111111111' WHERE id=101; - mysql> UPDATE products SET name='1111111111' WHERE id=107; - - ``` - - In the terminal window of subscribing topic, you can find the data changes have been kept in the _sub-products_ topic. - -## Example of PostgreSQL - -You need to create a configuration file before using the Pulsar Debezium connector. - -### Configuration - -You can use one of the following methods to create a configuration file. - -* JSON - - ```json - - { - "database.hostname": "localhost", - "database.port": "5432", - "database.user": "postgres", - "database.password": "postgres", - "database.dbname": "postgres", - "database.server.name": "dbserver1", - "schema.whitelist": "inventory", - "pulsar.service.url": "pulsar://127.0.0.1:6650" - } - - ``` - -* YAML - - You can create a `debezium-postgres-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/resources/debezium-postgres-source-config.yaml) below to the `debezium-postgres-source-config.yaml` file. - - ```yaml - - tenant: "public" - namespace: "default" - name: "debezium-postgres-source" - topicName: "debezium-postgres-topic" - archive: "connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar" - parallelism: 1 - - configs: - - ## config for pg, docker image: debezium/example-postgress:0.8 - database.hostname: "localhost" - database.port: "5432" - database.user: "postgres" - database.password: "postgres" - database.dbname: "postgres" - database.server.name: "dbserver1" - schema.whitelist: "inventory" - - ## PULSAR_SERVICE_URL_CONFIG - pulsar.service.url: "pulsar://127.0.0.1:6650" - - ``` - -### Usage - -This example shows how to change the data of a PostgreSQL table using the Pulsar Debezium connector. - - -1. Start a PostgreSQL server with a database from which Debezium can capture changes. - - ```bash - - $ docker pull debezium/example-postgres:0.8 - $ docker run -d -it --rm --name pulsar-postgresql -p 5432:5432 debezium/example-postgres:0.8 - - ``` - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - -3. Start the Pulsar Debezium connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - Make sure the nar file is available at `connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar`. - - ```bash - - $ bin/pulsar-admin source localrun \ - --archive connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar \ - --name debezium-postgres-source \ - --destination-topic-name debezium-postgres-topic \ - --tenant public \ - --namespace default \ - --source-config '{"database.hostname": "localhost","database.port": "5432","database.user": "postgres","database.password": "postgres","database.dbname": "postgres","database.server.name": "dbserver1","schema.whitelist": "inventory","pulsar.service.url": "pulsar://127.0.0.1:6650"}' - - ``` - - * Use the **YAML** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin source localrun \ - --source-config-file debezium-postgres-source-config.yaml - - ``` - -4. Subscribe the topic _sub-products_ for the _inventory.products_ table. - - ``` - - $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0 - - ``` - -5. Start a PostgreSQL client in docker. 
- - ```bash - - $ docker exec -it pulsar-postgresql /bin/bash - - ``` - -6. A PostgreSQL client pops out. - - Use the following commands to change the data of the table _products_. - - ``` - - psql -U postgres postgres - postgres=# \c postgres; - You are now connected to database "postgres" as user "postgres". - postgres=# SET search_path TO inventory; - SET - postgres=# select * from products; - id | name | description | weight - -----+--------------------+---------------------------------------------------------+-------- - 102 | car battery | 12V car battery | 8.1 - 103 | 12-pack drill bits | 12-pack of drill bits with sizes ranging from #40 to #3 | 0.8 - 104 | hammer | 12oz carpenter's hammer | 0.75 - 105 | hammer | 14oz carpenter's hammer | 0.875 - 106 | hammer | 16oz carpenter's hammer | 1 - 107 | rocks | box of assorted rocks | 5.3 - 108 | jacket | water resistent black wind breaker | 0.1 - 109 | spare tire | 24 inch spare tire | 22.2 - 101 | 1111111111 | Small 2-wheel scooter | 3.14 - (9 rows) - - postgres=# UPDATE products SET name='1111111111' WHERE id=107; - UPDATE 1 - - ``` - - In the terminal window of subscribing topic, you can receive the following messages. - - ```bash - - ----- got message ----- - {"schema":{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":107}}�{"schema":{"type":"struct","fields":[{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":false,"field":"name"},{"type":"string","optional":true,"field":"description"},{"type":"double","optional":true,"field":"weight"}],"optional":true,"name":"dbserver1.inventory.products.Value","field":"before"},{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":false,"field":"name"},{"type":"string","optional":true,"field":"description"},{"type":"double","optional":true,"field":"weight"}],"optional":true,"name":"dbserver1.inventory.products.Value","field":"after"},{"type":"struct","fields":[{"type":"string","optional":true,"field":"version"},{"type":"string","optional":true,"field":"connector"},{"type":"string","optional":false,"field":"name"},{"type":"string","optional":false,"field":"db"},{"type":"int64","optional":true,"field":"ts_usec"},{"type":"int64","optional":true,"field":"txId"},{"type":"int64","optional":true,"field":"lsn"},{"type":"string","optional":true,"field":"schema"},{"type":"string","optional":true,"field":"table"},{"type":"boolean","optional":true,"default":false,"field":"snapshot"},{"type":"boolean","optional":true,"field":"last_snapshot_record"}],"optional":false,"name":"io.debezium.connector.postgresql.Source","field":"source"},{"type":"string","optional":false,"field":"op"},{"type":"int64","optional":true,"field":"ts_ms"}],"optional":false,"name":"dbserver1.inventory.products.Envelope"},"payload":{"before":{"id":107,"name":"rocks","description":"box of assorted rocks","weight":5.3},"after":{"id":107,"name":"1111111111","description":"box of assorted rocks","weight":5.3},"source":{"version":"0.9.2.Final","connector":"postgresql","name":"dbserver1","db":"postgres","ts_usec":1559208957661080,"txId":577,"lsn":23862872,"schema":"inventory","table":"products","snapshot":false,"last_snapshot_record":null},"op":"u","ts_ms":1559208957692}} - - ``` - -## Example of MongoDB - -You need to create a configuration file before using the Pulsar Debezium connector. 
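-
-### Configuration
-
-You can use one of the following methods to create a configuration file.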
- -* JSON - - ```json - - { - "mongodb.hosts": "rs0/mongodb:27017", - "mongodb.name": "dbserver1", - "mongodb.user": "debezium", - "mongodb.password": "dbz", - "mongodb.task.id": "1", - "database.whitelist": "inventory", - "pulsar.service.url": "pulsar://127.0.0.1:6650" - } - - ``` - -* YAML - - You can create a `debezium-mongodb-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mongodb/src/main/resources/debezium-mongodb-source-config.yaml) below to the `debezium-mongodb-source-config.yaml` file. - - ```yaml - - tenant: "public" - namespace: "default" - name: "debezium-mongodb-source" - topicName: "debezium-mongodb-topic" - archive: "connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar" - parallelism: 1 - - configs: - - ## config for pg, docker image: debezium/example-postgress:0.10 - mongodb.hosts: "rs0/mongodb:27017", - mongodb.name: "dbserver1", - mongodb.user: "debezium", - mongodb.password: "dbz", - mongodb.task.id: "1", - database.whitelist: "inventory", - - ## PULSAR_SERVICE_URL_CONFIG - pulsar.service.url: "pulsar://127.0.0.1:6650" - - ``` - -### Usage - -This example shows how to change the data of a MongoDB table using the Pulsar Debezium connector. - - -1. Start a MongoDB server with a database from which Debezium can capture changes. - - ```bash - - $ docker pull debezium/example-mongodb:0.10 - $ docker run -d -it --rm --name pulsar-mongodb -e MONGODB_USER=mongodb -e MONGODB_PASSWORD=mongodb -p 27017:27017 debezium/example-mongodb:0.10 - - ``` - - Use the following commands to initialize the data. - - ``` bash - - ./usr/local/bin/init-inventory.sh - - ``` - - If the local host cannot access the container network, you can update the file ```/etc/hosts``` and add a rule ```127.0.0.1 6 f114527a95f```. f114527a95f is container id, you can try to get by ```docker ps -a``` - - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - -3. Start the Pulsar Debezium connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - Make sure the nar file is available at `connectors/pulsar-io-mongodb-@pulsar:version@.nar`. - - ```bash - - $ bin/pulsar-admin source localrun \ - --archive connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar \ - --name debezium-mongodb-source \ - --destination-topic-name debezium-mongodb-topic \ - --tenant public \ - --namespace default \ - --source-config '{"mongodb.hosts": "rs0/mongodb:27017","mongodb.name": "dbserver1","mongodb.user": "debezium","mongodb.password": "dbz","mongodb.task.id": "1","database.whitelist": "inventory","pulsar.service.url": "pulsar://127.0.0.1:6650"}' - - ``` - - * Use the **YAML** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin source localrun \ - --source-config-file debezium-mongodb-source-config.yaml - - ``` - -4. Subscribe the topic _sub-products_ for the _inventory.products_ table. - - ``` - - $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0 - - ``` - -5. Start a MongoDB client in docker. - - ```bash - - $ docker exec -it pulsar-mongodb /bin/bash - - ``` - -6. A MongoDB client pops out. - - ```bash - - mongo -u debezium -p dbz --authenticationDatabase admin localhost:27017/inventory - db.products.update({"_id":NumberLong(104)},{$set:{weight:1.25}}) - - ``` - - In the terminal window of subscribing topic, you can receive the following messages. 
- - ```bash - - ----- got message ----- - {"schema":{"type":"struct","fields":[{"type":"string","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":"104"}}, value = {"schema":{"type":"struct","fields":[{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"after"},{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"patch"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"version"},{"type":"string","optional":false,"field":"connector"},{"type":"string","optional":false,"field":"name"},{"type":"int64","optional":false,"field":"ts_ms"},{"type":"string","optional":true,"name":"io.debezium.data.Enum","version":1,"parameters":{"allowed":"true,last,false"},"default":"false","field":"snapshot"},{"type":"string","optional":false,"field":"db"},{"type":"string","optional":false,"field":"rs"},{"type":"string","optional":false,"field":"collection"},{"type":"int32","optional":false,"field":"ord"},{"type":"int64","optional":true,"field":"h"}],"optional":false,"name":"io.debezium.connector.mongo.Source","field":"source"},{"type":"string","optional":true,"field":"op"},{"type":"int64","optional":true,"field":"ts_ms"}],"optional":false,"name":"dbserver1.inventory.products.Envelope"},"payload":{"after":"{\"_id\": {\"$numberLong\": \"104\"},\"name\": \"hammer\",\"description\": \"12oz carpenter's hammer\",\"weight\": 1.25,\"quantity\": 4}","patch":null,"source":{"version":"0.10.0.Final","connector":"mongodb","name":"dbserver1","ts_ms":1573541905000,"snapshot":"true","db":"inventory","rs":"rs0","collection":"products","ord":1,"h":4983083486544392763},"op":"r","ts_ms":1573541909761}}. - - ``` - -## FAQ - -### Debezium postgres connector will hang when create snap - -```$xslt - -#18 prio=5 os_prio=31 tid=0x00007fd83096f800 nid=0xa403 waiting on condition [0x000070000f534000] - java.lang.Thread.State: WAITING (parking) - at sun.misc.Unsafe.park(Native Method) - - parking to wait for <0x00000007ab025a58> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject) - at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) - at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) - at java.util.concurrent.LinkedBlockingDeque.putLast(LinkedBlockingDeque.java:396) - at java.util.concurrent.LinkedBlockingDeque.put(LinkedBlockingDeque.java:649) - at io.debezium.connector.base.ChangeEventQueue.enqueue(ChangeEventQueue.java:132) - at io.debezium.connector.postgresql.PostgresConnectorTask$Lambda$203/385424085.accept(Unknown Source) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.sendCurrentRecord(RecordsSnapshotProducer.java:402) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.readTable(RecordsSnapshotProducer.java:321) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$takeSnapshot$6(RecordsSnapshotProducer.java:226) - at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$240/1347039967.accept(Unknown Source) - at io.debezium.jdbc.JdbcConnection.queryWithBlockingConsumer(JdbcConnection.java:535) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.takeSnapshot(RecordsSnapshotProducer.java:224) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$start$0(RecordsSnapshotProducer.java:87) - at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$206/589332928.run(Unknown Source) - at 
java.util.concurrent.CompletableFuture.uniRun(CompletableFuture.java:705) - at java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:717) - at java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2010) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.start(RecordsSnapshotProducer.java:87) - at io.debezium.connector.postgresql.PostgresConnectorTask.start(PostgresConnectorTask.java:126) - at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:47) - at org.apache.pulsar.io.kafka.connect.KafkaConnectSource.open(KafkaConnectSource.java:127) - at org.apache.pulsar.io.debezium.DebeziumSource.open(DebeziumSource.java:100) - at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupInput(JavaInstanceRunnable.java:690) - at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupJavaInstance(JavaInstanceRunnable.java:200) - at org.apache.pulsar.functions.instance.JavaInstanceRunnable.run(JavaInstanceRunnable.java:230) - at java.lang.Thread.run(Thread.java:748) - -``` - -If you encounter the above problems in synchronizing data, please refer to [this](https://github.com/apache/pulsar/issues/4075) and add the following configuration to the configuration file: - -```$xslt - -max.queue.size= - -``` - diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/io-cdc.md b/site2/website/versioned_docs/version-2.8.1-deprecated/io-cdc.md deleted file mode 100644 index e6e662884826de..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/io-cdc.md +++ /dev/null @@ -1,26 +0,0 @@ ---- -id: io-cdc -title: CDC connector -sidebar_label: "CDC connector" -original_id: io-cdc ---- - -CDC source connectors capture log changes of databases (such as MySQL, MongoDB, and PostgreSQL) into Pulsar. - -> CDC source connectors are built on top of [Canal](https://github.com/alibaba/canal) and [Debezium](https://debezium.io/) and store all data into Pulsar cluster in a persistent, replicated, and partitioned way. - -Currently, Pulsar has the following CDC connectors. - -Name|Java Class -|---|--- -[Canal source connector](io-canal-source.md)|[org.apache.pulsar.io.canal.CanalStringSource.java](https://github.com/apache/pulsar/blob/master/pulsar-io/canal/src/main/java/org/apache/pulsar/io/canal/CanalStringSource.java) -[Debezium source connector](io-cdc-debezium.md)|
[org.apache.pulsar.io.debezium.DebeziumSource.java](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/core/src/main/java/org/apache/pulsar/io/debezium/DebeziumSource.java)
[org.apache.pulsar.io.debezium.mysql.DebeziumMysqlSource.java](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/java/org/apache/pulsar/io/debezium/mysql/DebeziumMysqlSource.java)
[org.apache.pulsar.io.debezium.postgres.DebeziumPostgresSource.java](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/java/org/apache/pulsar/io/debezium/postgres/DebeziumPostgresSource.java)
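-
-To sanity-check what a CDC connector publishes, you can attach a plain Pulsar consumer to the connector's destination topic. The snippet below is a minimal sketch modeled on the `pulsar-client.py` consumer used in the Canal source walkthrough; it assumes a local standalone broker, and `my-cdc-topic` and `cdc-reader` are placeholders for whatever destination topic and subscription name you configure.
-
-```python
-
-import pulsar
-
-# Assumes a broker running locally; adjust the service URL as needed.
-client = pulsar.Client('pulsar://localhost:6650')
-
-# 'my-cdc-topic' stands in for the connector's --destination-topic-name.
-consumer = client.subscribe('my-cdc-topic', subscription_name='cdc-reader')
-
-while True:
-    msg = consumer.receive()
-    print("Received change event: '%s'" % msg.data())
-    consumer.acknowledge(msg)
-
-```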
  807. - -For more information about Canal and Debezium, see the information below. - -Subject | Reference -|---|--- -How to use Canal source connector with MySQL|[Canal guide](https://github.com/alibaba/canal/wiki) -How does Canal work | [Canal tutorial](https://github.com/alibaba/canal/wiki) -How to use Debezium source connector with MySQL | [Debezium guide](https://debezium.io/docs/connectors/mysql/) -How does Debezium work | [Debezium tutorial](https://debezium.io/docs/tutorial/) diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/io-cli.md b/site2/website/versioned_docs/version-2.8.1-deprecated/io-cli.md deleted file mode 100644 index 3d54bb61875e25..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/io-cli.md +++ /dev/null @@ -1,658 +0,0 @@ ---- -id: io-cli -title: Connector Admin CLI -sidebar_label: "CLI" -original_id: io-cli ---- - -The `pulsar-admin` tool helps you manage Pulsar connectors. - -## `sources` - -An interface for managing Pulsar IO sources (ingress data into Pulsar). - -```bash - -$ pulsar-admin sources subcommands - -``` - -Subcommands are: - -* `create` - -* `update` - -* `delete` - -* `get` - -* `status` - -* `list` - -* `stop` - -* `start` - -* `restart` - -* `localrun` - -* `available-sources` - -* `reload` - - -### `create` - -Submit a Pulsar IO source connector to run in a Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sources create options - -``` - -#### Options - -|Flag|Description| -|----|---| -| `-a`, `--archive` | The path to the NAR archive for the source.
    It also supports url-path (http/https/file [file protocol assumes that file already exists on worker host]) from which worker can download the package.
-| `--classname` | The source's class name if `archive` is file-url-path (file://).
-| `--cpu` | The CPU (in cores) that needs to be allocated per source instance (applicable only to Docker runtime).
-| `--deserialization-classname` | The SerDe classname for the source.
-| `--destination-topic-name` | The Pulsar topic to which data is sent.
-| `--disk` | The disk (in bytes) that needs to be allocated per source instance (applicable only to Docker runtime).
-|`--name` | The source's name.
-| `--namespace` | The source's namespace.
-| `--parallelism` | The source's parallelism factor, that is, the number of source instances to run.
-| `--processing-guarantees` | The processing guarantees (also named as delivery semantics) applied to the source. A source connector receives messages from an external system and writes messages to a Pulsar topic. The `--processing-guarantees` is used to ensure the processing guarantees for writing messages to the Pulsar topic.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE. -| `--ram` | The RAM (in bytes) that needs to be allocated per source instance (applicable only to the process and Docker runtimes). -| `-st`, `--schema-type` | The schema type.
    Either a builtin schema (for example, AVRO and JSON) or custom schema class name to be used to encode messages emitted from source.
-| `--source-config` | Source config key/values.
-| `--source-config-file` | The path to a YAML config file specifying the source's configuration.
-| `-t`, `--source-type` | The source's connector provider.
-| `--tenant` | The source's tenant.
-|`--producer-config`| The custom producer configuration (as a JSON string).
-
-### `update`
-
-Update an already submitted Pulsar IO source connector.
-
-#### Usage
-
-```bash
-
-$ pulsar-admin sources update options
-
-```
-
-#### Options
-
-|Flag|Description|
-|----|---|
-| `-a`, `--archive` | The path to the NAR archive for the source.
    It also supports url-path (http/https/file [file protocol assumes that file already exists on worker host]) from which worker can download the package.
-| `--classname` | The source's class name if `archive` is file-url-path (file://).
-| `--cpu` | The CPU (in cores) that needs to be allocated per source instance (applicable only to Docker runtime).
-| `--deserialization-classname` | The SerDe classname for the source.
-| `--destination-topic-name` | The Pulsar topic to which data is sent.
-| `--disk` | The disk (in bytes) that needs to be allocated per source instance (applicable only to Docker runtime).
-|`--name` | The source's name.
-| `--namespace` | The source's namespace.
-| `--parallelism` | The source's parallelism factor, that is, the number of source instances to run.
-| `--processing-guarantees` | The processing guarantees (also named as delivery semantics) applied to the source. A source connector receives messages from an external system and writes messages to a Pulsar topic. The `--processing-guarantees` is used to ensure the processing guarantees for writing messages to the Pulsar topic.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE. -| `--ram` | The RAM (in bytes) that needs to be allocated per source instance (applicable only to the process and Docker runtimes). -| `-st`, `--schema-type` | The schema type.
    Either a builtin schema (for example, AVRO and JSON) or custom schema class name to be used to encode messages emitted from source. -| `--source-config` | Source config key/values. -| `--source-config-file` | The path to a YAML config file specifying the source's configuration. -| `-t`, `--source-type` | The source's connector provider. The `source-type` parameter of the currently built-in connectors is determined by the setting of the `name` parameter specified in the pulsar-io.yaml file. -| `--tenant` | The source's tenant. -| `--update-auth-data` | Whether or not to update the auth data.
    **Default value: false.** - - -### `delete` - -Delete a Pulsar IO source connector. - -#### Usage - -```bash - -$ pulsar-admin sources delete options - -``` - -#### Option - -|Flag|Description| -|---|---| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - -### `get` - -Get the information about a Pulsar IO source connector. - -#### Usage - -```bash - -$ pulsar-admin sources get options - -``` - -#### Options -|Flag|Description| -|---|---| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - - -### `status` - -Check the current status of a Pulsar Source. - -#### Usage - -```bash - -$ pulsar-admin sources status options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The source ID.
    If `instance-id` is not provided, Pulsar gets status of all instances.| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - -### `list` - -List all running Pulsar IO source connectors. - -#### Usage - -```bash - -$ pulsar-admin sources list options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - - -### `stop` - -Stop a source instance. - -#### Usage - -```bash - -$ pulsar-admin sources stop options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The source instanceID.
    If `instance-id` is not provided, Pulsar stops all instances.| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - -### `start` - -Start a source instance. - -#### Usage - -```bash - -$ pulsar-admin sources start options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The source instanceID.
    If `instance-id` is not provided, Pulsar starts all instances.| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - - -### `restart` - -Restart a source instance. - -#### Usage - -```bash - -$ pulsar-admin sources restart options - -``` - -#### Options -|Flag|Description| -|---|---| -|`--instance-id`|The source instanceID.
    If `instance-id` is not provided, Pulsar restarts all instances. -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - - -### `localrun` - -Run a Pulsar IO source connector locally rather than deploying it to the Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sources localrun options - -``` - -#### Options - -|Flag|Description| -|----|---| -| `-a`, `--archive` | The path to the NAR archive for the Source.
    It also supports url-path (http/https/file [file protocol assumes that file already exists on worker host]) from which worker can download the package.
-| `--broker-service-url` | The URL for the Pulsar broker.
-|`--classname`|The source's class name if `archive` is file-url-path (file://).
-| `--client-auth-params` | Client authentication parameter.
-| `--client-auth-plugin` | The client authentication plugin that the function process uses to connect to the broker.
-|`--cpu`|The CPU (in cores) that needs to be allocated per source instance (applicable only to the Docker runtime).|
-|`--deserialization-classname`|The SerDe classname for the source.
-|`--destination-topic-name`|The Pulsar topic to which data is sent.
-|`--disk`|The disk (in bytes) that needs to be allocated per source instance (applicable only to the Docker runtime).|
-|`--hostname-verification-enabled`|Enable hostname verification.
    **Default value: false**.
-|`--name`|The source’s name.|
-|`--namespace`|The source’s namespace.|
-|`--parallelism`|The source’s parallelism factor, that is, the number of source instances to run.|
-|`--processing-guarantees` | The processing guarantees (also named as delivery semantics) applied to the source. A source connector receives messages from an external system and writes messages to a Pulsar topic. The `--processing-guarantees` is used to ensure the processing guarantees for writing messages to the Pulsar topic.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE. -|`--ram`|The RAM (in bytes) that needs to be allocated per source instance (applicable only to the Docker runtime).| -| `-st`, `--schema-type` | The schema type.
    Either a builtin schema (for example, AVRO and JSON) or custom schema class name to be used to encode messages emitted from source. -|`--source-config`|Source config key/values. -|`--source-config-file`|The path to a YAML config file specifying the source’s configuration. -|`--source-type`|The source's connector provider. -|`--tenant`|The source’s tenant. -|`--tls-allow-insecure`|Allow insecure tls connection.
    **Default value: false**. -|`--tls-trust-cert-path`|The tls trust cert file path. -|`--use-tls`|Use tls connection.
    **Default value: false**. -|`--producer-config`| The custom producer configuration (as a JSON string). - -### `available-sources` - -Get the list of Pulsar IO connector sources supported by Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sources available-sources - -``` - -### `reload` - -Reload the available built-in connectors. - -#### Usage - -```bash - -$ pulsar-admin sources reload - -``` - -## `sinks` - -An interface for managing Pulsar IO sinks (egress data from Pulsar). - -```bash - -$ pulsar-admin sinks subcommands - -``` - -Subcommands are: - -* `create` - -* `update` - -* `delete` - -* `get` - -* `status` - -* `list` - -* `stop` - -* `start` - -* `restart` - -* `localrun` - -* `available-sinks` - -* `reload` - - -### `create` - -Submit a Pulsar IO sink connector to run in a Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sinks create options - -``` - -#### Options - -|Flag|Description| -|----|---| -| `-a`, `--archive` | The path to the archive file for the sink.
    It also supports url-path (http/https/file [file protocol assumes that file already exists on worker host]) from which worker can download the package. -| `--auto-ack` | Whether or not the framework will automatically acknowledge messages. -| `--classname` | The sink's class name if `archive` is file-url-path (file://). -| `--cpu` | The CPU (in cores) that needs to be allocated per sink instance (applicable only to Docker runtime). -| `--custom-schema-inputs` | The map of input topics to schema types or class names (as a JSON string). -| `--custom-serde-inputs` | The map of input topics to SerDe class names (as a JSON string). -| `--disk` | The disk (in bytes) that needs to be allocated per sink instance (applicable only to Docker runtime). -|`-i, --inputs` | The sink's input topic or topics (multiple topics can be specified as a comma-separated list). -|`--name` | The sink's name. -| `--namespace` | The sink's namespace. -| ` --parallelism` | The sink's parallelism factor, that is, the number of sink instances to run. -| `--processing-guarantees` | The processing guarantees (also known as delivery semantics) applied to the sink. The `--processing-guarantees` implementation in Pulsar also relies on sink implementation.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE. -| `--ram` | The RAM (in bytes) that needs to be allocated per sink instance (applicable only to the process and Docker runtimes). -| `--retain-ordering` | Sink consumes and sinks messages in order. -| `--sink-config` | sink config key/values. -| `--sink-config-file` | The path to a YAML config file specifying the sink's configuration. -| `-t`, `--sink-type` | The sink's connector provider. The `sink-type` parameter of the currently built-in connectors is determined by the setting of the `name` parameter specified in the pulsar-io.yaml file. -| `--subs-name` | Pulsar source subscription name if user wants a specific subscription-name for input-topic consumer. -| `--tenant` | The sink's tenant. -| `--timeout-ms` | The message timeout in milliseconds. -| `--topics-pattern` | TopicsPattern to consume from list of topics under a namespace that match the pattern.
    `--inputs` and `--topics-pattern` are mutually exclusive.
    Add SerDe class name for a pattern in `--customSerdeInputs` (supported for Java functions only).
-
-### `update`
-
-Update a Pulsar IO sink connector.
-
-#### Usage
-
-```bash
-
-$ pulsar-admin sinks update options
-
-```
-
-#### Options
-
-|Flag|Description|
-|----|---|
-| `-a`, `--archive` | The path to the archive file for the sink.
    It also supports url-path (http/https/file [file protocol assumes that file already exists on worker host]) from which worker can download the package. -| `--auto-ack` | Whether or not the framework will automatically acknowledge messages. -| `--classname` | The sink's class name if `archive` is file-url-path (file://). -| `--cpu` | The CPU (in cores) that needs to be allocated per sink instance (applicable only to Docker runtime). -| `--custom-schema-inputs` | The map of input topics to schema types or class names (as a JSON string). -| `--custom-serde-inputs` | The map of input topics to SerDe class names (as a JSON string). -| `--disk` | The disk (in bytes) that needs to be allocated per sink instance (applicable only to Docker runtime). -|`-i, --inputs` | The sink's input topic or topics (multiple topics can be specified as a comma-separated list). -|`--name` | The sink's name. -| `--namespace` | The sink's namespace. -| ` --parallelism` | The sink's parallelism factor, that is, the number of sink instances to run. -| `--processing-guarantees` | The processing guarantees (also known as delivery semantics) applied to the sink. The `--processing-guarantees` implementation in Pulsar also relies on sink implementation.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE. -| `--ram` | The RAM (in bytes) that needs to be allocated per sink instance (applicable only to the process and Docker runtimes). -| `--retain-ordering` | Sink consumes and sinks messages in order. -| `--sink-config` | sink config key/values. -| `--sink-config-file` | The path to a YAML config file specifying the sink's configuration. -| `-t`, `--sink-type` | The sink's connector provider. -| `--subs-name` | Pulsar source subscription name if user wants a specific subscription-name for input-topic consumer. -| `--tenant` | The sink's tenant. -| `--timeout-ms` | The message timeout in milliseconds. -| `--topics-pattern` | TopicsPattern to consume from list of topics under a namespace that match the pattern.
    `--inputs` and `--topics-pattern` are mutually exclusive.
    Add SerDe class name for a pattern in `--customSerdeInputs` (supported for Java functions only).
-| `--update-auth-data` | Whether or not to update the auth data.
    **Default value: false.** - -### `delete` - -Delete a Pulsar IO sink connector. - -#### Usage - -```bash - -$ pulsar-admin sinks delete options - -``` - -#### Option - -|Flag|Description| -|---|---| -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - -### `get` - -Get the information about a Pulsar IO sink connector. - -#### Usage - -```bash - -$ pulsar-admin sinks get options - -``` - -#### Options -|Flag|Description| -|---|---| -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - - -### `status` - -Check the current status of a Pulsar sink. - -#### Usage - -```bash - -$ pulsar-admin sinks status options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The sink ID.
    If `instance-id` is not provided, Pulsar gets status of all instances.| -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - - -### `list` - -List all running Pulsar IO sink connectors. - -#### Usage - -```bash - -$ pulsar-admin sinks list options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - - -### `stop` - -Stop a sink instance. - -#### Usage - -```bash - -$ pulsar-admin sinks stop options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The sink instanceID.
    If `instance-id` is not provided, Pulsar stops all instances.| -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - -### `start` - -Start a sink instance. - -#### Usage - -```bash - -$ pulsar-admin sinks start options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The sink instanceID.
    If `instance-id` is not provided, Pulsar starts all instances.| -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - - -### `restart` - -Restart a sink instance. - -#### Usage - -```bash - -$ pulsar-admin sinks restart options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The sink instanceID.
    If `instance-id` is not provided, Pulsar restarts all instances. -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - - -### `localrun` - -Run a Pulsar IO sink connector locally rather than deploying it to the Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sinks localrun options - -``` - -#### Options - -|Flag|Description| -|----|---| -| `-a`, `--archive` | The path to the archive file for the sink.
    It also supports url-path (http/https/file [file protocol assumes that file already exists on worker host]) from which worker can download the package.
-| `--auto-ack` | Whether or not the framework will automatically acknowledge messages.
-| `--broker-service-url` | The URL for the Pulsar broker.
-|`--classname`|The sink's class name if `archive` is file-url-path (file://).
-| `--client-auth-params` | Client authentication parameter.
-| `--client-auth-plugin` | The client authentication plugin that the function process uses to connect to the broker.
-|`--cpu`|The CPU (in cores) that needs to be allocated per sink instance (applicable only to the Docker runtime).
-| `--custom-schema-inputs` | The map of input topics to Schema types or class names (as a JSON string).
-| `--max-redeliver-count` | Maximum number of times that a message is redelivered before being sent to the dead letter queue.
-| `--dead-letter-topic` | Name of the dead letter topic where the failing messages are sent.
-| `--custom-serde-inputs` | The map of input topics to SerDe class names (as a JSON string).
-|`--disk`|The disk (in bytes) that needs to be allocated per sink instance (applicable only to the Docker runtime).|
-|`--hostname-verification-enabled`|Enable hostname verification.
    **Default value: false**.
-| `-i`, `--inputs` | The sink's input topic or topics (multiple topics can be specified as a comma-separated list).
-|`--name`|The sink’s name.|
-|`--namespace`|The sink’s namespace.|
-|`--parallelism`|The sink’s parallelism factor, that is, the number of sink instances to run.|
-|`--processing-guarantees`|The processing guarantees (also known as delivery semantics) applied to the sink. The `--processing-guarantees` implementation in Pulsar also relies on sink implementation.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE. -|`--ram`|The RAM (in bytes) that needs to be allocated per sink instance (applicable only to the Docker runtime).| -|`--retain-ordering` | Sink consumes and sinks messages in order. -|`--sink-config`|sink config key/values. -|`--sink-config-file`|The path to a YAML config file specifying the sink’s configuration. -|`--sink-type`|The sink's connector provider. -|`--subs-name` | Pulsar source subscription name if user wants a specific subscription-name for input-topic consumer. -|`--tenant`|The sink’s tenant. -| `--timeout-ms` | The message timeout in milliseconds. -| `--negative-ack-redelivery-delay-ms` | The negatively-acknowledged message redelivery delay in milliseconds. | -|`--tls-allow-insecure`|Allow insecure tls connection.
    **Default value: false**. -|`--tls-trust-cert-path`|The tls trust cert file path. -| `--topics-pattern` | TopicsPattern to consume from list of topics under a namespace that match the pattern.
    `--inputs` and `--topics-pattern` are mutually exclusive.
    Add SerDe class name for a pattern in `--customSerdeInputs` (supported for Java functions only).
-|`--use-tls`|Use tls connection.
    **Default value: false**. - -### `available-sinks` - -Get the list of Pulsar IO connector sinks supported by Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sinks available-sinks - -``` - -### `reload` - -Reload the available built-in connectors. - -#### Usage - -```bash - -$ pulsar-admin sinks reload - -``` - diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/io-connectors.md b/site2/website/versioned_docs/version-2.8.1-deprecated/io-connectors.md deleted file mode 100644 index 8db368e0e70637..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/io-connectors.md +++ /dev/null @@ -1,232 +0,0 @@ ---- -id: io-connectors -title: Built-in connector -sidebar_label: "Built-in connector" -original_id: io-connectors ---- - -Pulsar distribution includes a set of common connectors that have been packaged and tested with the rest of Apache Pulsar. These connectors import and export data from some of the most commonly used data systems. - -Using any of these connectors is as easy as writing a simple connector and running the connector locally or submitting the connector to a Pulsar Functions cluster. - -## Source connector - -Pulsar has various source connectors, which are sorted alphabetically as below. - -### Canal - -* [Configuration](io-canal-source.md#configuration) - -* [Example](io-canal-source.md#usage) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/canal/src/main/java/org/apache/pulsar/io/canal/CanalStringSource.java) - - -### Debezium MySQL - -* [Configuration](io-debezium-source.md#configuration) - -* [Example](io-debezium-source.md#example-of-mysql) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/java/org/apache/pulsar/io/debezium/mysql/DebeziumMysqlSource.java) - -### Debezium PostgreSQL - -* [Configuration](io-debezium-source.md#configuration) - -* [Example](io-debezium-source.md#example-of-postgresql) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/java/org/apache/pulsar/io/debezium/postgres/DebeziumPostgresSource.java) - -### Debezium MongoDB - -* [Configuration](io-debezium-source.md#configuration) - -* [Example](io-debezium-source.md#example-of-mongodb) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mongodb/src/main/java/org/apache/pulsar/io/debezium/mongodb/DebeziumMongoDbSource.java) - -### DynamoDB - -* [Configuration](io-dynamodb-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/dynamodb/src/main/java/org/apache/pulsar/io/dynamodb/DynamoDBSource.java) - -### File - -* [Configuration](io-file-source.md#configuration) - -* [Example](io-file-source.md#usage) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/file/src/main/java/org/apache/pulsar/io/file/FileSource.java) - -### Flume - -* [Configuration](io-flume-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/java/org/apache/pulsar/io/flume/FlumeConnector.java) - -### Twitter firehose - -* [Configuration](io-twitter-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/twitter/src/main/java/org/apache/pulsar/io/twitter/TwitterFireHose.java) - -### Kafka - -* [Configuration](io-kafka-source.md#configuration) - -* [Example](io-kafka-source.md#usage) - -* [Java 
class](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSource.java) - -### Kinesis - -* [Configuration](io-kinesis-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/kinesis/src/main/java/org/apache/pulsar/io/kinesis/KinesisSource.java) - -### Netty - -* [Configuration](io-netty-source.md#configuration) - -* [Example of TCP](io-netty-source.md#tcp) - -* [Example of HTTP](io-netty-source.md#http) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/netty/src/main/java/org/apache/pulsar/io/netty/NettySource.java) - -### NSQ - -* [Configuration](io-nsq-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/nsq/src/main/java/org/apache/pulsar/io/nsq/NSQSource.java) - -### RabbitMQ - -* [Configuration](io-rabbitmq-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/rabbitmq/src/main/java/org/apache/pulsar/io/rabbitmq/RabbitMQSource.java) - -## Sink connector - -Pulsar has various sink connectors, which are sorted alphabetically as below. - -### Aerospike - -* [Configuration](io-aerospike-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/aerospike/src/main/java/org/apache/pulsar/io/aerospike/AerospikeStringSink.java) - -### Cassandra - -* [Configuration](io-cassandra-sink.md#configuration) - -* [Example](io-cassandra-sink.md#usage) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/cassandra/src/main/java/org/apache/pulsar/io/cassandra/CassandraStringSink.java) - -### ElasticSearch - -* [Configuration](io-elasticsearch-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/elastic-search/src/main/java/org/apache/pulsar/io/elasticsearch/ElasticSearchSink.java) - -### Flume - -* [Configuration](io-flume-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/java/org/apache/pulsar/io/flume/sink/StringSink.java) - -### HBase - -* [Configuration](io-hbase-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/hbase/src/main/java/org/apache/pulsar/io/hbase/HbaseAbstractConfig.java) - -### HDFS2 - -* [Configuration](io-hdfs2-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/hdfs2/src/main/java/org/apache/pulsar/io/hdfs2/AbstractHdfsConnector.java) - -### HDFS3 - -* [Configuration](io-hdfs3-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/hdfs3/src/main/java/org/apache/pulsar/io/hdfs3/AbstractHdfsConnector.java) - -### InfluxDB - -* [Configuration](io-influxdb-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/influxdb/src/main/java/org/apache/pulsar/io/influxdb/InfluxDBGenericRecordSink.java) - -### JDBC ClickHouse - -* [Configuration](io-jdbc-sink.md#configuration) - -* [Example](io-jdbc-sink.md#example-for-clickhouse) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/jdbc/clickhouse/src/main/java/org/apache/pulsar/io/jdbc/ClickHouseJdbcAutoSchemaSink.java) - -### JDBC MariaDB - -* [Configuration](io-jdbc-sink.md#configuration) - -* [Example](io-jdbc-sink.md#example-for-mariadb) - -* [Java 
class](https://github.com/apache/pulsar/blob/master/pulsar-io/jdbc/mariadb/src/main/java/org/apache/pulsar/io/jdbc/MariadbJdbcAutoSchemaSink.java) - -### JDBC PostgreSQL - -* [Configuration](io-jdbc-sink.md#configuration) - -* [Example](io-jdbc-sink.md#example-for-postgresql) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/jdbc/postgres/src/main/java/org/apache/pulsar/io/jdbc/PostgresJdbcAutoSchemaSink.java) - -### JDBC SQLite - -* [Configuration](io-jdbc-sink.md#configuration) - -* [Example](io-jdbc-sink.md#example-for-sqlite) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/jdbc/sqlite/src/main/java/org/apache/pulsar/io/jdbc/SqliteJdbcAutoSchemaSink.java) - -### Kafka - -* [Configuration](io-kafka-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSink.java) - -### Kinesis - -* [Configuration](io-kinesis-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/kinesis/src/main/java/org/apache/pulsar/io/kinesis/KinesisSink.java) - -### MongoDB - -* [Configuration](io-mongo-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/mongo/src/main/java/org/apache/pulsar/io/mongodb/MongoSink.java) - -### RabbitMQ - -* [Configuration](io-rabbitmq-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/rabbitmq/src/main/java/org/apache/pulsar/io/rabbitmq/RabbitMQSink.java) - -### Redis - -* [Configuration](io-redis-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/redis/src/main/java/org/apache/pulsar/io/redis/RedisAbstractConfig.java) - -### Solr - -* [Configuration](io-solr-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/solr/src/main/java/org/apache/pulsar/io/solr/SolrSinkConfig.java) - diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/io-debezium-source.md b/site2/website/versioned_docs/version-2.8.1-deprecated/io-debezium-source.md deleted file mode 100644 index 8c3ba0cb20f252..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/io-debezium-source.md +++ /dev/null @@ -1,621 +0,0 @@ ---- -id: io-debezium-source -title: Debezium source connector -sidebar_label: "Debezium source connector" -original_id: io-debezium-source ---- - -The Debezium source connector pulls messages from MySQL or PostgreSQL -and persists the messages to Pulsar topics. - -## Configuration - -The configuration of Debezium source connector has the following properties. - -| Name | Required | Default | Description | -|------|----------|---------|-------------| -| `task.class` | true | null | A source task class that implemented in Debezium. | -| `database.hostname` | true | null | The address of a database server. | -| `database.port` | true | null | The port number of a database server.| -| `database.user` | true | null | The name of a database user that has the required privileges. | -| `database.password` | true | null | The password for a database user that has the required privileges. | -| `database.server.id` | true | null | The connector’s identifier that must be unique within a database cluster and similar to the database’s server-id configuration property. 
|
-| `database.server.name` | true | null | The logical name of a database server/cluster, which forms a namespace and is used in all the names of the Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the Avro converter is used. |
-| `database.whitelist` | false | null | A list of all databases hosted by this server that the connector monitors. This is optional, and there are other properties for listing databases and tables to include or exclude from monitoring. |
-| `key.converter` | true | null | The converter provided by Kafka Connect to convert the record key. |
-| `value.converter` | true | null | The converter provided by Kafka Connect to convert the record value. |
-| `database.history` | true | null | The name of the database history class. |
-| `database.history.pulsar.topic` | true | null | The name of the database history topic where the connector writes and recovers DDL statements. **Note: this topic is for internal use only and should not be used by consumers.** |
-| `database.history.pulsar.service.url` | true | null | The Pulsar cluster service URL for the history topic. |
-| `pulsar.service.url` | true | null | The Pulsar cluster service URL. |
-| `offset.storage.topic` | true | null | The topic that records the last committed offsets that the connector successfully completes. |
-| `json-with-envelope` | false | false | Whether the message consists of both schema and payload (true) or the payload only (false). |
-
-### Converter Options
-
-1. org.apache.kafka.connect.json.JsonConverter
-
-   The `json-with-envelope` config is valid only for the JsonConverter. Its default value is false: the consumer uses the schema `Schema.KeyValue(Schema.AUTO_CONSUME(), Schema.AUTO_CONSUME(), KeyValueEncodingType.SEPARATED)`, and the message consists of the payload only.
-
-   If `json-with-envelope` is set to true, the consumer uses the schema `Schema.KeyValue(Schema.BYTES, Schema.BYTES)`, and the message consists of both schema and payload.
-
-2. org.apache.pulsar.kafka.shade.io.confluent.connect.avro.AvroConverter
-
-   If users select the AvroConverter, the Pulsar consumer should use the schema `Schema.KeyValue(Schema.AUTO_CONSUME(), Schema.AUTO_CONSUME(), KeyValueEncodingType.SEPARATED)`, and the message consists of the payload.
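To make the converter options concrete, here is a minimal consumer sketch for the default `JsonConverter` with `json-with-envelope=false`, decoding each change event as a `KeyValue` of auto-consumed schemas. The service URL and the topic name (`public/default/dbserver1.inventory.products`, taken from the MySQL example below) are illustrative:

```java
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.Schema;
import org.apache.pulsar.client.api.schema.GenericRecord;
import org.apache.pulsar.common.schema.KeyValue;
import org.apache.pulsar.common.schema.KeyValueEncodingType;

public class DebeziumEventConsumer {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://127.0.0.1:6650") // illustrative local standalone URL
                .build();

        // With json-with-envelope=false (the default), each change event is a
        // KeyValue whose key and value schemas are discovered at runtime.
        Consumer<KeyValue<GenericRecord, GenericRecord>> consumer = client
                .newConsumer(Schema.KeyValue(
                        Schema.AUTO_CONSUME(), Schema.AUTO_CONSUME(),
                        KeyValueEncodingType.SEPARATED))
                .topic("public/default/dbserver1.inventory.products") // illustrative per-table topic
                .subscriptionName("sub-products")
                .subscribe();

        while (true) {
            Message<KeyValue<GenericRecord, GenericRecord>> msg = consumer.receive();
            KeyValue<GenericRecord, GenericRecord> event = msg.getValue();
            System.out.println("key=" + event.getKey().getNativeObject()
                    + " value=" + event.getValue().getNativeObject());
            consumer.acknowledge(msg);
        }
    }
}
```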

-### MongoDB Configuration
-
-| Name | Required | Default | Description |
-|------|----------|---------|-------------|
-| `mongodb.hosts` | true | null | The comma-separated list of hostname and port pairs (in the form 'host' or 'host:port') of the MongoDB servers in the replica set. The list contains a single hostname and port pair. If `mongodb.members.auto.discover` is set to false, the host and port pair are prefixed with the replica set name (e.g., rs0/localhost:27017). |
-| `mongodb.name` | true | null | A unique name that identifies the connector and/or the MongoDB replica set or sharded cluster that this connector monitors. Each server should be monitored by at most one Debezium connector, since this server name prefixes all persisted Kafka topics emanating from the MongoDB replica set or cluster. |
-| `mongodb.user` | true | null | The name of the database user to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. |
-| `mongodb.password` | true | null | The password to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. |
-| `mongodb.task.id` | true | null | The taskId of the MongoDB connector that attempts to use a separate task for each replica set. |
-
-## Example of MySQL
-
-You need to create a configuration file before using the Pulsar Debezium connector.
-
-### Configuration
-
-You can use one of the following methods to create a configuration file.
- -* JSON - - ```json - - { - "database.hostname": "localhost", - "database.port": "3306", - "database.user": "debezium", - "database.password": "dbz", - "database.server.id": "184054", - "database.server.name": "dbserver1", - "database.whitelist": "inventory", - "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory", - "database.history.pulsar.topic": "history-topic", - "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650", - "key.converter": "org.apache.kafka.connect.json.JsonConverter", - "value.converter": "org.apache.kafka.connect.json.JsonConverter", - "pulsar.service.url": "pulsar://127.0.0.1:6650", - "offset.storage.topic": "offset-topic" - } - - ``` - -* YAML - - You can create a `debezium-mysql-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/resources/debezium-mysql-source-config.yaml) below to the `debezium-mysql-source-config.yaml` file. - - ```yaml - - tenant: "public" - namespace: "default" - name: "debezium-mysql-source" - topicName: "debezium-mysql-topic" - archive: "connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar" - parallelism: 1 - - configs: - - ## config for mysql, docker image: debezium/example-mysql:0.8 - database.hostname: "localhost" - database.port: "3306" - database.user: "debezium" - database.password: "dbz" - database.server.id: "184054" - database.server.name: "dbserver1" - database.whitelist: "inventory" - database.history: "org.apache.pulsar.io.debezium.PulsarDatabaseHistory" - database.history.pulsar.topic: "history-topic" - database.history.pulsar.service.url: "pulsar://127.0.0.1:6650" - - ## KEY_CONVERTER_CLASS_CONFIG, VALUE_CONVERTER_CLASS_CONFIG - key.converter: "org.apache.kafka.connect.json.JsonConverter" - value.converter: "org.apache.kafka.connect.json.JsonConverter" - - ## PULSAR_SERVICE_URL_CONFIG - pulsar.service.url: "pulsar://127.0.0.1:6650" - - ## OFFSET_STORAGE_TOPIC_CONFIG - offset.storage.topic: "offset-topic" - - ``` - -### Usage - -This example shows how to change the data of a MySQL table using the Pulsar Debezium connector. - -1. Start a MySQL server with a database from which Debezium can capture changes. - - ```bash - - $ docker run -it --rm \ - --name mysql \ - -p 3306:3306 \ - -e MYSQL_ROOT_PASSWORD=debezium \ - -e MYSQL_USER=mysqluser \ - -e MYSQL_PASSWORD=mysqlpw debezium/example-mysql:0.8 - - ``` - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - -3. Start the Pulsar Debezium connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - Make sure the nar file is available at `connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar`. 
- - ```bash - - $ bin/pulsar-admin source localrun \ - --archive connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar \ - --name debezium-mysql-source --destination-topic-name debezium-mysql-topic \ - --tenant public \ - --namespace default \ - --source-config '{"database.hostname": "localhost","database.port": "3306","database.user": "debezium","database.password": "dbz","database.server.id": "184054","database.server.name": "dbserver1","database.whitelist": "inventory","database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory","database.history.pulsar.topic": "history-topic","database.history.pulsar.service.url": "pulsar://127.0.0.1:6650","key.converter": "org.apache.kafka.connect.json.JsonConverter","value.converter": "org.apache.kafka.connect.json.JsonConverter","pulsar.service.url": "pulsar://127.0.0.1:6650","offset.storage.topic": "offset-topic"}' - - ``` - - :::note - - Currently, the destination topic (specified by the `destination-topic-name` option ) is a required configuration but it is not used for the Debezium connector to save data. The Debezium connector saves data in the following 4 types of topics: - - - One topic named with the database server name ( `database.server.name`) for storing the database metadata messages, such as `public/default/database.server.name`. - - One topic (`database.history.pulsar.topic`) for storing the database history information. The connector writes and recovers DDL statements on this topic. - - One topic (`offset.storage.topic`) for storing the offset metadata messages. The connector saves the last successfully-committed offsets on this topic. - - One per-table topic. The connector writes change events for all operations that occur in a table to a single Pulsar topic that is specific to that table. - - If the automatic topic creation is disabled on your broker, you need to manually create the above 4 types of topics and the destination topic. - - ::: - - * Use the **YAML** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin source localrun \ - --source-config-file debezium-mysql-source-config.yaml - - ``` - -4. Subscribe the topic _sub-products_ for the table _inventory.products_. - - ```bash - - $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0 - - ``` - -5. Start a MySQL client in docker. - - ```bash - - $ docker run -it --rm \ - --name mysqlterm \ - --link mysql \ - --rm mysql:5.7 sh \ - -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"' - - ``` - -6. A MySQL client pops out. - - Use the following commands to change the data of the table _products_. - - ``` - - mysql> use inventory; - mysql> show tables; - mysql> SELECT * FROM products; - mysql> UPDATE products SET name='1111111111' WHERE id=101; - mysql> UPDATE products SET name='1111111111' WHERE id=107; - - ``` - - In the terminal window of subscribing topic, you can find the data changes have been kept in the _sub-products_ topic. - -## Example of PostgreSQL - -You need to create a configuration file before using the Pulsar Debezium connector. - -### Configuration - -You can use one of the following methods to create a configuration file. 
- -* JSON - - ```json - - { - "database.hostname": "localhost", - "database.port": "5432", - "database.user": "postgres", - "database.password": "changeme", - "database.dbname": "postgres", - "database.server.name": "dbserver1", - "plugin.name": "pgoutput", - "schema.whitelist": "public", - "table.whitelist": "public.users", - "pulsar.service.url": "pulsar://127.0.0.1:6650" - } - - ``` - -* YAML - - You can create a `debezium-postgres-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/resources/debezium-postgres-source-config.yaml) below to the `debezium-postgres-source-config.yaml` file. - - ```yaml - - tenant: "public" - namespace: "default" - name: "debezium-postgres-source" - topicName: "debezium-postgres-topic" - archive: "connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar" - parallelism: 1 - - configs: - - ## config for postgres version 10+, official docker image: postgres:<10+> - database.hostname: "localhost" - database.port: "5432" - database.user: "postgres" - database.password: "changeme" - database.dbname: "postgres" - database.server.name: "dbserver1" - plugin.name: "pgoutput" - schema.whitelist: "public" - table.whitelist: "public.users" - - ## PULSAR_SERVICE_URL_CONFIG - pulsar.service.url: "pulsar://127.0.0.1:6650" - - ``` - -Notice that `pgoutput` is a standard plugin of Postgres introduced in version 10 - [see Postgres architecture docu](https://www.postgresql.org/docs/10/logical-replication-architecture.html). You don't need to install anything, just make sure the WAL level is set to `logical` (see docker command below and [Postgres docu](https://www.postgresql.org/docs/current/runtime-config-wal.html)). - -### Usage - -This example shows how to change the data of a PostgreSQL table using the Pulsar Debezium connector. - - -1. Start a PostgreSQL server with a database from which Debezium can capture changes. - - ```bash - - $ docker run -d -it --rm \ - --name pulsar-postgres \ - -p 5432:5432 \ - -e POSTGRES_PASSWORD=changeme \ - postgres:13.3 -c wal_level=logical - - ``` - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - -3. Start the Pulsar Debezium connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - Make sure the nar file is available at `connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar`. - - ```bash - - $ bin/pulsar-admin source localrun \ - --archive connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar \ - --name debezium-postgres-source \ - --destination-topic-name debezium-postgres-topic \ - --tenant public \ - --namespace default \ - --source-config '{"database.hostname": "localhost","database.port": "5432","database.user": "postgres","database.password": "changeme","database.dbname": "postgres","database.server.name": "dbserver1","schema.whitelist": "public","table.whitelist": "public.users","pulsar.service.url": "pulsar://127.0.0.1:6650"}' - - ``` - - :::note - - Currently, the destination topic (specified by the `destination-topic-name` option ) is a required configuration but it is not used for the Debezium connector to save data. The Debezium connector saves data in the following 4 types of topics: - - - One topic named with the database server name ( `database.server.name`) for storing the database metadata messages, such as `public/default/database.server.name`. 
- - One topic (`database.history.pulsar.topic`) for storing the database history information. The connector writes and recovers DDL statements on this topic. - - One topic (`offset.storage.topic`) for storing the offset metadata messages. The connector saves the last successfully-committed offsets on this topic. - - One per-table topic. The connector writes change events for all operations that occur in a table to a single Pulsar topic that is specific to that table. - - If the automatic topic creation is disabled on your broker, you need to manually create the above 4 types of topics and the destination topic. - - ::: - - * Use the **YAML** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin source localrun \ - --source-config-file debezium-postgres-source-config.yaml - - ``` - -4. Subscribe the topic _sub-users_ for the _public.users_ table. - - ``` - - $ bin/pulsar-client consume -s "sub-users" public/default/dbserver1.public.users -n 0 - - ``` - -5. Start a PostgreSQL client in docker. - - ```bash - - $ docker exec -it pulsar-postgresql /bin/bash - - ``` - -6. A PostgreSQL client pops out. - - Use the following commands to create sample data in the table _users_. - - ``` - - psql -U postgres -h localhost -p 5432 - Password for user postgres: - - CREATE TABLE users( - id BIGINT GENERATED ALWAYS AS IDENTITY, PRIMARY KEY(id), - hash_firstname TEXT NOT NULL, - hash_lastname TEXT NOT NULL, - gender VARCHAR(6) NOT NULL CHECK (gender IN ('male', 'female')) - ); - - INSERT INTO users(hash_firstname, hash_lastname, gender) - SELECT md5(RANDOM()::TEXT), md5(RANDOM()::TEXT), CASE WHEN RANDOM() < 0.5 THEN 'male' ELSE 'female' END FROM generate_series(1, 100); - - postgres=# select * from users; - - id | hash_firstname | hash_lastname | gender - -------+----------------------------------+----------------------------------+-------- - 1 | 02bf7880eb489edc624ba637f5ab42bd | 3e742c2cc4217d8e3382cc251415b2fb | female - 2 | dd07064326bb9119189032316158f064 | 9c0e938f9eddbd5200ba348965afbc61 | male - 3 | 2c5316fdd9d6595c1cceb70eed12e80c | 8a93d7d8f9d76acfaaa625c82a03ea8b | female - 4 | 3dfa3b4f70d8cd2155567210e5043d2b | 32c156bc28f7f03ab5d28e2588a3dc19 | female - - - postgres=# UPDATE users SET hash_firstname='maxim' WHERE id=1; - UPDATE 1 - - ``` - - In the terminal window of subscribing topic, you can receive the following messages. - - ```bash - - ----- got message ----- - {"before":null,"after":{"id":1,"hash_firstname":"maxim","hash_lastname":"292113d30a3ccee0e19733dd7f88b258","gender":"male"},"source:{"version":"1.0.0.Final","connector":"postgresql","name":"foobar","ts_ms":1624045862644,"snapshot":"false","db":"postgres","schema":"public","table":"users","txId":595,"lsn":24419784,"xmin":null},"op":"u","ts_ms":1624045862648} - ...many more - - ``` - -## Example of MongoDB - -You need to create a configuration file before using the Pulsar Debezium connector. - -### Configuration - -You can use one of the following methods to create a configuration file. 
- -* JSON - - ```json - - { - "mongodb.hosts": "rs0/mongodb:27017", - "mongodb.name": "dbserver1", - "mongodb.user": "debezium", - "mongodb.password": "dbz", - "mongodb.task.id": "1", - "database.whitelist": "inventory", - "pulsar.service.url": "pulsar://127.0.0.1:6650" - } - - ``` - -* YAML - - You can create a `debezium-mongodb-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mongodb/src/main/resources/debezium-mongodb-source-config.yaml) below to the `debezium-mongodb-source-config.yaml` file. - - ```yaml - - tenant: "public" - namespace: "default" - name: "debezium-mongodb-source" - topicName: "debezium-mongodb-topic" - archive: "connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar" - parallelism: 1 - - configs: - - ## config for pg, docker image: debezium/example-mongodb:0.10 - mongodb.hosts: "rs0/mongodb:27017", - mongodb.name: "dbserver1", - mongodb.user: "debezium", - mongodb.password: "dbz", - mongodb.task.id: "1", - database.whitelist: "inventory", - - ## PULSAR_SERVICE_URL_CONFIG - pulsar.service.url: "pulsar://127.0.0.1:6650" - - ``` - -### Usage - -This example shows how to change the data of a MongoDB table using the Pulsar Debezium connector. - - -1. Start a MongoDB server with a database from which Debezium can capture changes. - - ```bash - - $ docker pull debezium/example-mongodb:0.10 - $ docker run -d -it --rm --name pulsar-mongodb -e MONGODB_USER=mongodb -e MONGODB_PASSWORD=mongodb -p 27017:27017 debezium/example-mongodb:0.10 - - ``` - - Use the following commands to initialize the data. - - ``` bash - - ./usr/local/bin/init-inventory.sh - - ``` - - If the local host cannot access the container network, you can update the file ```/etc/hosts``` and add a rule ```127.0.0.1 6 f114527a95f```. f114527a95f is container id, you can try to get by ```docker ps -a``` - - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - -3. Start the Pulsar Debezium connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - Make sure the nar file is available at `connectors/pulsar-io-mongodb-@pulsar:version@.nar`. - - ```bash - - $ bin/pulsar-admin source localrun \ - --archive connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar \ - --name debezium-mongodb-source \ - --destination-topic-name debezium-mongodb-topic \ - --tenant public \ - --namespace default \ - --source-config '{"mongodb.hosts": "rs0/mongodb:27017","mongodb.name": "dbserver1","mongodb.user": "debezium","mongodb.password": "dbz","mongodb.task.id": "1","database.whitelist": "inventory","pulsar.service.url": "pulsar://127.0.0.1:6650"}' - - ``` - - :::note - - Currently, the destination topic (specified by the `destination-topic-name` option ) is a required configuration but it is not used for the Debezium connector to save data. The Debezium connector saves data in the following 4 types of topics: - - - One topic named with the database server name ( `database.server.name`) for storing the database metadata messages, such as `public/default/database.server.name`. - - One topic (`database.history.pulsar.topic`) for storing the database history information. The connector writes and recovers DDL statements on this topic. - - One topic (`offset.storage.topic`) for storing the offset metadata messages. The connector saves the last successfully-committed offsets on this topic. - - One per-table topic. 
The connector writes change events for all operations that occur in a table to a single Pulsar topic that is specific to that table. - - If the automatic topic creation is disabled on your broker, you need to manually create the above 4 types of topics and the destination topic. - - ::: - - * Use the **YAML** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin source localrun \ - --source-config-file debezium-mongodb-source-config.yaml - - ``` - -4. Subscribe the topic _sub-products_ for the _inventory.products_ table. - - ``` - - $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0 - - ``` - -5. Start a MongoDB client in docker. - - ```bash - - $ docker exec -it pulsar-mongodb /bin/bash - - ``` - -6. A MongoDB client pops out. - - ```bash - - mongo -u debezium -p dbz --authenticationDatabase admin localhost:27017/inventory - db.products.update({"_id":NumberLong(104)},{$set:{weight:1.25}}) - - ``` - - In the terminal window of subscribing topic, you can receive the following messages. - - ```bash - - ----- got message ----- - {"schema":{"type":"struct","fields":[{"type":"string","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":"104"}}, value = {"schema":{"type":"struct","fields":[{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"after"},{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"patch"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"version"},{"type":"string","optional":false,"field":"connector"},{"type":"string","optional":false,"field":"name"},{"type":"int64","optional":false,"field":"ts_ms"},{"type":"string","optional":true,"name":"io.debezium.data.Enum","version":1,"parameters":{"allowed":"true,last,false"},"default":"false","field":"snapshot"},{"type":"string","optional":false,"field":"db"},{"type":"string","optional":false,"field":"rs"},{"type":"string","optional":false,"field":"collection"},{"type":"int32","optional":false,"field":"ord"},{"type":"int64","optional":true,"field":"h"}],"optional":false,"name":"io.debezium.connector.mongo.Source","field":"source"},{"type":"string","optional":true,"field":"op"},{"type":"int64","optional":true,"field":"ts_ms"}],"optional":false,"name":"dbserver1.inventory.products.Envelope"},"payload":{"after":"{\"_id\": {\"$numberLong\": \"104\"},\"name\": \"hammer\",\"description\": \"12oz carpenter's hammer\",\"weight\": 1.25,\"quantity\": 4}","patch":null,"source":{"version":"0.10.0.Final","connector":"mongodb","name":"dbserver1","ts_ms":1573541905000,"snapshot":"true","db":"inventory","rs":"rs0","collection":"products","ord":1,"h":4983083486544392763},"op":"r","ts_ms":1573541909761}}. 
- - ``` - -## FAQ - -### Debezium postgres connector will hang when create snap - -```$xslt - -#18 prio=5 os_prio=31 tid=0x00007fd83096f800 nid=0xa403 waiting on condition [0x000070000f534000] - java.lang.Thread.State: WAITING (parking) - at sun.misc.Unsafe.park(Native Method) - - parking to wait for <0x00000007ab025a58> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject) - at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) - at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) - at java.util.concurrent.LinkedBlockingDeque.putLast(LinkedBlockingDeque.java:396) - at java.util.concurrent.LinkedBlockingDeque.put(LinkedBlockingDeque.java:649) - at io.debezium.connector.base.ChangeEventQueue.enqueue(ChangeEventQueue.java:132) - at io.debezium.connector.postgresql.PostgresConnectorTask$Lambda$203/385424085.accept(Unknown Source) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.sendCurrentRecord(RecordsSnapshotProducer.java:402) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.readTable(RecordsSnapshotProducer.java:321) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$takeSnapshot$6(RecordsSnapshotProducer.java:226) - at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$240/1347039967.accept(Unknown Source) - at io.debezium.jdbc.JdbcConnection.queryWithBlockingConsumer(JdbcConnection.java:535) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.takeSnapshot(RecordsSnapshotProducer.java:224) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$start$0(RecordsSnapshotProducer.java:87) - at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$206/589332928.run(Unknown Source) - at java.util.concurrent.CompletableFuture.uniRun(CompletableFuture.java:705) - at java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:717) - at java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2010) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.start(RecordsSnapshotProducer.java:87) - at io.debezium.connector.postgresql.PostgresConnectorTask.start(PostgresConnectorTask.java:126) - at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:47) - at org.apache.pulsar.io.kafka.connect.KafkaConnectSource.open(KafkaConnectSource.java:127) - at org.apache.pulsar.io.debezium.DebeziumSource.open(DebeziumSource.java:100) - at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupInput(JavaInstanceRunnable.java:690) - at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupJavaInstance(JavaInstanceRunnable.java:200) - at org.apache.pulsar.functions.instance.JavaInstanceRunnable.run(JavaInstanceRunnable.java:230) - at java.lang.Thread.run(Thread.java:748) - -``` - -If you encounter the above problems in synchronizing data, please refer to [this](https://github.com/apache/pulsar/issues/4075) and add the following configuration to the configuration file: - -```$xslt - -max.queue.size= - -``` - diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/io-debug.md b/site2/website/versioned_docs/version-2.8.1-deprecated/io-debug.md deleted file mode 100644 index 844e101d00d2a7..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/io-debug.md +++ /dev/null @@ -1,407 +0,0 @@ ---- -id: io-debug -title: How to debug Pulsar connectors -sidebar_label: "Debug" -original_id: io-debug ---- -This guide explains how to debug 
connectors in localrun or cluster mode and gives a debugging checklist. -To better demonstrate how to debug Pulsar connectors, here takes a Mongo sink connector as an example. - -**Deploy a Mongo sink environment** -1. Start a Mongo service. - - ```bash - - docker pull mongo:4 - docker run -d -p 27017:27017 --name pulsar-mongo -v $PWD/data:/data/db mongo:4 - - ``` - -2. Create a DB and a collection. - - ```bash - - docker exec -it pulsar-mongo /bin/bash - mongo - > use pulsar - > db.createCollection('messages') - > exit - - ``` - -3. Start Pulsar standalone. - - ```bash - - docker pull apachepulsar/pulsar:2.4.0 - docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --link pulsar-mongo --name pulsar-mongo-standalone apachepulsar/pulsar:2.4.0 bin/pulsar standalone - - ``` - -4. Configure the Mongo sink with the `mongo-sink-config.yaml` file. - - ```bash - - configs: - mongoUri: "mongodb://pulsar-mongo:27017" - database: "pulsar" - collection: "messages" - batchSize: 2 - batchTimeMs: 500 - - ``` - - ```bash - - docker cp mongo-sink-config.yaml pulsar-mongo-standalone:/pulsar/ - - ``` - -5. Download the Mongo sink nar package. - - ```bash - - docker exec -it pulsar-mongo-standalone /bin/bash - curl -O http://apache.01link.hk/pulsar/pulsar-2.4.0/connectors/pulsar-io-mongo-2.4.0.nar - - ``` - -## Debug in localrun mode -Start the Mongo sink in localrun mode using the `localrun` command. -:::tip - -For more information about the `localrun` command, see [`localrun`](reference-connector-admin.md/#localrun-1). - -::: - -```bash - -./bin/pulsar-admin sinks localrun \ ---archive pulsar-io-mongo-2.4.0.nar \ ---tenant public --namespace default \ ---inputs test-mongo \ ---name pulsar-mongo-sink \ ---sink-config-file mongo-sink-config.yaml \ ---parallelism 1 - -``` - -### Use connector log -Use one of the following methods to get a connector log in localrun mode: -* After executing the `localrun` command, the **log is automatically printed on the console**. -* The log is located at: - - ```bash - - logs/functions/tenant/namespace/function-name/function-name-instance-id.log - - ``` - - **Example** - - The path of the Mongo sink connector is: - - ```bash - - logs/functions/public/default/pulsar-mongo-sink/pulsar-mongo-sink-0.log - - ``` - -To clearly explain the log information, here breaks down the large block of information into small blocks and add descriptions for each block. -* This piece of log information shows the storage path of the nar package after decompression. - - ``` - - 08:21:54.132 [main] INFO org.apache.pulsar.common.nar.NarClassLoader - Created class loader with paths: [file:/tmp/pulsar-nar/pulsar-io-mongo-2.4.0.nar-unpacked/, file:/tmp/pulsar-nar/pulsar-io-mongo-2.4.0.nar-unpacked/META-INF/bundled-dependencies/, - - ``` - - :::tip - - If `class cannot be found` exception is thrown, check whether the nar file is decompressed in the folder `file:/tmp/pulsar-nar/pulsar-io-mongo-2.4.0.nar-unpacked/META-INF/bundled-dependencies/` or not. - - ::: - -* This piece of log information illustrates the basic information about the Mongo sink connector, such as tenant, namespace, name, parallelism, resources, and so on, which can be used to **check whether the Mongo sink connector is configured correctly or not**. 
- - ```bash - - 08:21:55.390 [main] INFO org.apache.pulsar.functions.runtime.ThreadRuntime - ThreadContainer starting function with instance config InstanceConfig(instanceId=0, functionId=853d60a1-0c48-44d5-9a5c-6917386476b2, functionVersion=c2ce1458-b69e-4175-88c0-a0a856a2be8c, functionDetails=tenant: "public" - namespace: "default" - name: "pulsar-mongo-sink" - className: "org.apache.pulsar.functions.api.utils.IdentityFunction" - autoAck: true - parallelism: 1 - source { - typeClassName: "[B" - inputSpecs { - key: "test-mongo" - value { - } - } - cleanupSubscription: true - } - sink { - className: "org.apache.pulsar.io.mongodb.MongoSink" - configs: "{\"mongoUri\":\"mongodb://pulsar-mongo:27017\",\"database\":\"pulsar\",\"collection\":\"messages\",\"batchSize\":2,\"batchTimeMs\":500}" - typeClassName: "[B" - } - resources { - cpu: 1.0 - ram: 1073741824 - disk: 10737418240 - } - componentType: SINK - , maxBufferedTuples=1024, functionAuthenticationSpec=null, port=38459, clusterName=local) - - ``` - -* This piece of log information demonstrates the status of the connections to Mongo and configuration information. - - ```bash - - 08:21:56.231 [cluster-ClusterId{value='5d6396a3c9e77c0569ff00eb', description='null'}-pulsar-mongo:27017] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:1, serverValue:8}] to pulsar-mongo:27017 - 08:21:56.326 [cluster-ClusterId{value='5d6396a3c9e77c0569ff00eb', description='null'}-pulsar-mongo:27017] INFO org.mongodb.driver.cluster - Monitor thread successfully connected to server with description ServerDescription{address=pulsar-mongo:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[4, 2, 0]}, minWireVersion=0, maxWireVersion=8, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=89058800} - - ``` - -* This piece of log information explains the configuration of consumers and clients, including the topic name, subscription name, subscription type, and so on. 
- - ```bash - - 08:21:56.719 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl - Starting Pulsar consumer status recorder with config: { - "topicNames" : [ "test-mongo" ], - "topicsPattern" : null, - "subscriptionName" : "public/default/pulsar-mongo-sink", - "subscriptionType" : "Shared", - "receiverQueueSize" : 1000, - "acknowledgementsGroupTimeMicros" : 100000, - "negativeAckRedeliveryDelayMicros" : 60000000, - "maxTotalReceiverQueueSizeAcrossPartitions" : 50000, - "consumerName" : null, - "ackTimeoutMillis" : 0, - "tickDurationMillis" : 1000, - "priorityLevel" : 0, - "cryptoFailureAction" : "CONSUME", - "properties" : { - "application" : "pulsar-sink", - "id" : "public/default/pulsar-mongo-sink", - "instance_id" : "0" - }, - "readCompacted" : false, - "subscriptionInitialPosition" : "Latest", - "patternAutoDiscoveryPeriod" : 1, - "regexSubscriptionMode" : "PersistentOnly", - "deadLetterPolicy" : null, - "autoUpdatePartitions" : true, - "replicateSubscriptionState" : false, - "resetIncludeHead" : false - } - 08:21:56.726 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl - Pulsar client config: { - "serviceUrl" : "pulsar://localhost:6650", - "authPluginClassName" : null, - "authParams" : null, - "operationTimeoutMs" : 30000, - "statsIntervalSeconds" : 60, - "numIoThreads" : 1, - "numListenerThreads" : 1, - "connectionsPerBroker" : 1, - "useTcpNoDelay" : true, - "useTls" : false, - "tlsTrustCertsFilePath" : null, - "tlsAllowInsecureConnection" : false, - "tlsHostnameVerificationEnable" : false, - "concurrentLookupRequest" : 5000, - "maxLookupRequest" : 50000, - "maxNumberOfRejectedRequestPerConnection" : 50, - "keepAliveIntervalSeconds" : 30, - "connectionTimeoutMs" : 10000, - "requestTimeoutMs" : 60000, - "defaultBackoffIntervalNanos" : 100000000, - "maxBackoffIntervalNanos" : 30000000000 - } - - ``` - -## Debug in cluster mode -You can use the following methods to debug a connector in cluster mode: -* [Use connector log](#use-connector-log) -* [Use admin CLI](#use-admin-cli) -### Use connector log -In cluster mode, multiple connectors can run on a worker. To find the log path of a specified connector, use the `workerId` to locate the connector log. -### Use admin CLI -Pulsar admin CLI helps you debug Pulsar connectors with the following subcommands: -* [`get`](#get) - -* [`status`](#status) -* [`topics stats`](#topics-stats) - -**Create a Mongo sink** - -```bash - -./bin/pulsar-admin sinks create \ ---archive pulsar-io-mongo-2.4.0.nar \ ---tenant public \ ---namespace default \ ---inputs test-mongo \ ---name pulsar-mongo-sink \ ---sink-config-file mongo-sink-config.yaml \ ---parallelism 1 - -``` - -### `get` -Use the `get` command to get the basic information about the Mongo sink connector, such as tenant, namespace, name, parallelism, and so on. 
- -```bash - -./bin/pulsar-admin sinks get --tenant public --namespace default --name pulsar-mongo-sink -{ - "tenant": "public", - "namespace": "default", - "name": "pulsar-mongo-sink", - "className": "org.apache.pulsar.io.mongodb.MongoSink", - "inputSpecs": { - "test-mongo": { - "isRegexPattern": false - } - }, - "configs": { - "mongoUri": "mongodb://pulsar-mongo:27017", - "database": "pulsar", - "collection": "messages", - "batchSize": 2.0, - "batchTimeMs": 500.0 - }, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "autoAck": true -} - -``` - -:::tip - -For more information about the `get` command, see [`get`](reference-connector-admin.md/#get-1). - -::: - -### `status` -Use the `status` command to get the current status about the Mongo sink connector, such as the number of instance, the number of running instance, instanceId, workerId and so on. - -```bash - -./bin/pulsar-admin sinks status ---tenant public \ ---namespace default \ ---name pulsar-mongo-sink -{ -"numInstances" : 1, -"numRunning" : 1, -"instances" : [ { - "instanceId" : 0, - "status" : { - "running" : true, - "error" : "", - "numRestarts" : 0, - "numReadFromPulsar" : 0, - "numSystemExceptions" : 0, - "latestSystemExceptions" : [ ], - "numSinkExceptions" : 0, - "latestSinkExceptions" : [ ], - "numWrittenToSink" : 0, - "lastReceivedTime" : 0, - "workerId" : "c-standalone-fw-5d202832fd18-8080" - } -} ] -} - -``` - -:::tip - -For more information about the `status` command, see [`status`](reference-connector-admin.md/#stauts-1). -If there are multiple connectors running on a worker, `workerId` can locate the worker on which the specified connector is running. - -::: - -### `topics stats` -Use the `topics stats` command to get the stats for a topic and its connected producer and consumer, such as whether the topic has received messages or not, whether there is a backlog of messages or not, the available permits and other key information. All rates are computed over a 1-minute window and are relative to the last completed 1-minute period. - -```bash - -./bin/pulsar-admin topics stats test-mongo -{ - "msgRateIn" : 0.0, - "msgThroughputIn" : 0.0, - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "averageMsgSize" : 0.0, - "storageSize" : 1, - "publishers" : [ ], - "subscriptions" : { - "public/default/pulsar-mongo-sink" : { - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "msgRateRedeliver" : 0.0, - "msgBacklog" : 0, - "blockedSubscriptionOnUnackedMsgs" : false, - "msgDelayed" : 0, - "unackedMessages" : 0, - "type" : "Shared", - "msgRateExpired" : 0.0, - "consumers" : [ { - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "msgRateRedeliver" : 0.0, - "consumerName" : "dffdd", - "availablePermits" : 999, - "unackedMessages" : 0, - "blockedConsumerOnUnackedMsgs" : false, - "metadata" : { - "instance_id" : "0", - "application" : "pulsar-sink", - "id" : "public/default/pulsar-mongo-sink" - }, - "connectedSince" : "2019-08-26T08:48:07.582Z", - "clientVersion" : "2.4.0", - "address" : "/172.17.0.3:57790" - } ], - "isReplicated" : false - } - }, - "replication" : { }, - "deduplicationStatus" : "Disabled" -} - -``` - -:::tip - -For more information about the `topic stats` command, see [`topic stats`](http://pulsar.apache.org/docs/en/pulsar-admin/#stats-1). - -::: - -## Checklist -This checklist indicates the major areas to check when you debug connectors. It is a reminder of what to look for to ensure a thorough review and an evaluation tool to get the status of connectors. 
-* Does Pulsar start successfully? - -* Does the external service run normally? - -* Is the nar package complete? - -* Is the connector configuration file correct? - -* In localrun mode, run a connector and check the printed information (connector log) on the console. - -* In cluster mode: - - * Use the `get` command to get the basic information. - - * Use the `status` command to get the current status. - * Use the `topics stats` command to get the stats for a specified topic and its connected producers and consumers. - - * Check the connector log. -* Enter into the external system and verify the result. diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/io-develop.md b/site2/website/versioned_docs/version-2.8.1-deprecated/io-develop.md deleted file mode 100644 index d6f4f8261ac820..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/io-develop.md +++ /dev/null @@ -1,421 +0,0 @@ ---- -id: io-develop -title: How to develop Pulsar connectors -sidebar_label: "Develop" -original_id: io-develop ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -This guide describes how to develop Pulsar connectors to move data -between Pulsar and other systems. - -Pulsar connectors are special [Pulsar Functions](functions-overview.md), so creating -a Pulsar connector is similar to creating a Pulsar function. - -Pulsar connectors come in two types: - -| Type | Description | Example -|---|---|--- -{@inject: github:Source:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java}|Import data from another system to Pulsar.|[RabbitMQ source connector](io-rabbitmq.md) imports the messages of a RabbitMQ queue to a Pulsar topic. -{@inject: github:Sink:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java}|Export data from Pulsar to another system.|[Kinesis sink connector](io-kinesis.md) exports the messages of a Pulsar topic to a Kinesis stream. - -## Develop - -You can develop Pulsar source connectors and sink connectors. - -### Source - -Developing a source connector is to implement the {@inject: github:Source:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} -interface, which means you need to implement the {@inject: github:open:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} method and the {@inject: github:read:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} method. - -1. Implement the {@inject: github:open:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} method. - - ```java - - /** - * Open connector with configuration - * - * @param config initialization config - * @param sourceContext - * @throws Exception IO type exceptions when opening a connector - */ - void open(final Map config, SourceContext sourceContext) throws Exception; - - ``` - - This method is called when the source connector is initialized. - - In this method, you can retrieve all connector specific settings through the passed-in `config` parameter and initialize all necessary resources. - - For example, a Kafka connector can create a Kafka client in this `open` method. - - Besides, Pulsar runtime also provides a `SourceContext` for the - connector to access runtime resources for tasks like collecting metrics. The implementation can save the `SourceContext` for future use. - -2. Implement the {@inject: github:read:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} method. - - ```java - - /** - * Reads the next message from source. 
- * If the source does not have any new messages, this call should block.
- * @return the next message from the source. The return result should never be null.
- * @throws Exception
- */
- Record<T> read() throws Exception;
-
- ```
-
- If there is nothing to return, the implementation should block rather than return `null`.
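One common way to satisfy this blocking contract is to have a background listener push external events into a bounded queue and let `read()` take from it. A minimal sketch, assuming a hypothetical external listener wired up in `open()` (the class name and queue capacity are illustrative):

```java
import java.util.Map;
import java.util.concurrent.LinkedBlockingQueue;

import org.apache.pulsar.functions.api.Record;
import org.apache.pulsar.io.core.Source;
import org.apache.pulsar.io.core.SourceContext;

public class QueueBackedSource implements Source<byte[]> {

    // A callback registered in open() pushes incoming external events here.
    private final LinkedBlockingQueue<Record<byte[]>> queue = new LinkedBlockingQueue<>(1000);

    @Override
    public void open(Map<String, Object> config, SourceContext sourceContext) throws Exception {
        // Initialize the external client here and register a listener that wraps
        // each incoming event in a Record<byte[]> and calls queue.put(record).
    }

    @Override
    public Record<byte[]> read() throws Exception {
        // take() blocks until a record is available, so read() never returns null.
        return queue.take();
    }

    @Override
    public void close() throws Exception {
        // Release the external client and stop the listener here.
    }
}
```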

- The returned {@inject: github:Record:/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/Record.java} should encapsulate the following information, which is needed by the Pulsar IO runtime.
-
- * {@inject: github:Record:/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/Record.java} should provide the following variables:
-
-   |Variable|Required|Description
-   |---|---|---
-   `TopicName`|No|The Pulsar topic from which the record originates.
-   `Key`|No|Messages can optionally be tagged with keys. For more information, see [Routing modes](concepts-messaging.md#routing-modes).
-   `Value`|Yes|The actual data of the record.
-   `EventTime`|No|The event time of the record from the source.
-   `PartitionId`|No|If the record originates from a partitioned source, its `PartitionId`. `PartitionId` is used as part of the unique identifier by the Pulsar IO runtime to deduplicate messages and achieve the exactly-once processing guarantee.
-   `RecordSequence`|No|If the record originates from a sequential source, its `RecordSequence`. `RecordSequence` is used as part of the unique identifier by the Pulsar IO runtime to deduplicate messages and achieve the exactly-once processing guarantee.
-   `Properties`|No|If the record carries user-defined properties, those properties.
-   `DestinationTopic`|No|The topic to which the message should be written.
-   `Message`|No|A class that carries the data sent by users.
    For more information, see [Message.java](https://github.com/apache/pulsar/blob/master/pulsar-client-api/src/main/java/org/apache/pulsar/client/api/Message.java).| - - * {@inject: github:Record:/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/Record.java} should provide the following methods: - - Method|Description - |---|--- - `ack` |Acknowledge that the record is fully processed. - `fail`|Indicate that the record fails to be processed. - -## Handle schema information - -Pulsar IO automatically handles the schema and provides a strongly typed API based on Java generics. -If you know the schema type that you are producing, you can declare the Java class relative to that type in your sink declaration. - -``` - -public class MySource implements Source { - public Record read() {} -} - -``` - -If you want to implement a source that works with any schema, you can go with `byte[]` (of `ByteBuffer`) and use Schema.AUTO_PRODUCE_BYTES(). - -``` - -public class MySource implements Source { - public Record read() { - - Schema wantedSchema = .... - Record myRecord = new MyRecordImplementation(); - .... - } - class MyRecordImplementation implements Record { - public byte[] getValue() { - return ....encoded byte[]...that represents the value - } - public Schema getSchema() { - return Schema.AUTO_PRODUCE_BYTES(wantedSchema); - } - } -} - -``` - -To handle the `KeyValue` type properly, follow the guidelines for your record implementation: -- It must implement {@inject: github:Record:/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/KVRecord.java} interface and implement `getKeySchema`,`getValueSchema`, and `getKeyValueEncodingType` -- It must return a `KeyValue` object as `Record.getValue()` -- It may return null in `Record.getSchema()` - -When Pulsar IO runtime encounters a `KVRecord`, it brings the following changes automatically: -- Set properly the `KeyValueSchema` -- Encode the Message Key and the Message Value according to the `KeyValueEncoding` (SEPARATED or INLINE) - -:::tip - -For more information about **how to create a source connector**, see {@inject: github:KafkaSource:/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSource.java}. - -::: - -### Sink - -Developing a sink connector **is similar to** developing a source connector, that is, you need to implement the {@inject: github:Sink:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} interface, which means implementing the {@inject: github:open:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} method and the {@inject: github:write:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} method. - -1. Implement the {@inject: github:open:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} method. - - ```java - - /** - * Open connector with configuration - * - * @param config initialization config - * @param sinkContext - * @throws Exception IO type exceptions when opening a connector - */ - void open(final Map config, SinkContext sinkContext) throws Exception; - - ``` - -2. Implement the {@inject: github:write:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} method. 
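As context for the interface signature shown in the next block, a typical `write()` implementation delivers the record to the external system and then acknowledges it, or fails it so the message is redelivered. A minimal at-least-once sketch; `ExternalClient` is a hypothetical stand-in for the target system's client:

```java
import java.util.Map;

import org.apache.pulsar.functions.api.Record;
import org.apache.pulsar.io.core.Sink;
import org.apache.pulsar.io.core.SinkContext;

public class AckingSink implements Sink<byte[]> {

    /** Hypothetical client for the external system. */
    interface ExternalClient extends AutoCloseable {
        void insert(String key, byte[] value) throws Exception;
    }

    private ExternalClient client;

    @Override
    public void open(Map<String, Object> config, SinkContext sinkContext) throws Exception {
        // client = ... connect using values read from config (assumed keys) ...
    }

    @Override
    public void write(Record<byte[]> record) {
        try {
            client.insert(record.getKey().orElse(null), record.getValue());
            record.ack();   // confirm only after the external write succeeds
        } catch (Exception e) {
            record.fail();  // trigger redelivery for at-least-once semantics
        }
    }

    @Override
    public void close() throws Exception {
        if (client != null) {
            client.close();
        }
    }
}
```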
- - ```java - - /** - * Write a message to Sink - * @param record record to write to sink - * @throws Exception - */ - void write(Record record) throws Exception; - - ``` - - During the implementation, you can decide how to write the `Value` and - the `Key` to the actual source, and leverage all the provided information such as - `PartitionId` and `RecordSequence` to achieve different processing guarantees. - - You also need to ack records (if messages are sent successfully) or fail records (if messages fail to send). - -## Handling Schema information - -Pulsar IO handles automatically the Schema and provides a strongly typed API based on Java generics. -If you know the Schema type that you are consuming from you can declare the Java class relative to that type in your Sink declaration. - -``` - -public class MySink implements Sink { - public void write(Record record) {} -} - -``` - -If you want to implement a sink that works with any schema, you can you go with the special GenericObject interface. - -``` - -public class MySink implements Sink { - public void write(Record record) { - Schema schema = record.getSchema(); - GenericObject genericObject = record.getValue(); - if (genericObject != null) { - SchemaType type = genericObject.getSchemaType(); - Object nativeObject = genericObject.getNativeObject(); - ... - } - .... - } -} - -``` - -In the case of AVRO, JSON, and Protobuf records (schemaType=AVRO,JSON,PROTOBUF_NATIVE), you can cast the -`genericObject` variable to `GenericRecord` and use `getFields()` and `getField()` API. -You are able to access the native AVRO record using `genericObject.getNativeObject()`. - -In the case of KeyValue type, you can access both the schema for the key and the schema for the value using this code. - -``` - -public class MySink implements Sink { - public void write(Record record) { - Schema schema = record.getSchema(); - GenericObject genericObject = record.getValue(); - SchemaType type = genericObject.getSchemaType(); - Object nativeObject = genericObject.getNativeObject(); - if (type == SchemaType.KEY_VALUE) { - KeyValue keyValue = (KeyValue) nativeObject; - Object key = keyValue.getKey(); - Object value = keyValue.getValue(); - - KeyValueSchema keyValueSchema = (KeyValueSchema) schema; - Schema keySchema = keyValueSchema.getKeySchema(); - Schema valueSchema = keyValueSchema.getValueSchema(); - } - .... - } -} - -``` - -## Test - -Testing connectors can be challenging because Pulsar IO connectors interact with two systems -that may be difficult to mock—Pulsar and the system to which the connector is connecting. - -It is -recommended writing special tests to test the connector functionalities as below -while mocking the external service. - -### Unit test - -You can create unit tests for your connector. - -### Integration test - -Once you have written sufficient unit tests, you can add -separate integration tests to verify end-to-end functionality. - -Pulsar uses [testcontainers](https://www.testcontainers.org/) **for all integration tests**. - -:::tip - -For more information about **how to create integration tests for Pulsar connectors**, see {@inject: github:IntegrationTests:/tests/integration/src/test/java/org/apache/pulsar/tests/integration/io}. - -::: - -## Package - -Once you've developed and tested your connector, you need to package it so that it can be submitted -to a [Pulsar Functions](functions-overview.md) cluster. - -There are two methods to -work with Pulsar Functions' runtime, that is, [NAR](#nar) and [uber JAR](#uber-jar). 
-:::note
-
-If you plan to package and distribute your connector for others to use, you are obligated to license and copyright your own code properly. Remember to add the license and copyright to all libraries your code uses and to your distribution.
-
-If you use the [NAR](#nar) method, the NAR plugin automatically creates a `DEPENDENCIES` file in the generated NAR package, including the proper licensing and copyrights of all libraries of your connector.
-
-:::
-
-### NAR
-
-**NAR** stands for NiFi Archive. It is a custom packaging mechanism used by Apache NiFi to provide a bit of Java ClassLoader isolation.
-
-:::tip
-
-For more information about **how NAR works**, see [here](https://medium.com/hashmapinc/nifi-nar-files-explained-14113f7796fd).
-
-:::
-
-Pulsar uses the same mechanism for packaging **all** [built-in connectors](io-connectors.md).
-
-The easiest approach to package a Pulsar connector is to create a NAR package using [nifi-nar-maven-plugin](https://mvnrepository.com/artifact/org.apache.nifi/nifi-nar-maven-plugin).
-
-Include the [nifi-nar-maven-plugin](https://mvnrepository.com/artifact/org.apache.nifi/nifi-nar-maven-plugin) in the maven project for your connector as below.
-
-```xml
-
-<plugins>
-  <plugin>
-    <groupId>org.apache.nifi</groupId>
-    <artifactId>nifi-nar-maven-plugin</artifactId>
-    <version>1.2.0</version>
-  </plugin>
-</plugins>
-
-```
-
-You must also create a `resources/META-INF/services/pulsar-io.yaml` file with the following contents:
-
-```yaml
-
-name: connector name
-description: connector description
-sourceClass: fully qualified class name (only if source connector)
-sinkClass: fully qualified class name (only if sink connector)
-
-```
-
-For Gradle users, there is a [Gradle Nar plugin available on the Gradle Plugin Portal](https://plugins.gradle.org/plugin/io.github.lhotari.gradle-nar-plugin).
-
-:::tip
-
-For more information about **how to use NAR for Pulsar connectors**, see {@inject: github:TwitterFirehose:/pulsar-io/twitter/pom.xml}.
-
-:::
-
-### Uber JAR
-
-An alternative approach is to create an **uber JAR** that contains all of the connector's JAR files and other resource files. No internal directory structure is necessary.
-
-You can use [maven-shade-plugin](https://maven.apache.org/plugins/maven-shade-plugin/examples/includes-excludes.html) to create an uber JAR as below:
-
-```xml
-
-<plugin>
-  <groupId>org.apache.maven.plugins</groupId>
-  <artifactId>maven-shade-plugin</artifactId>
-  <version>3.1.1</version>
-  <executions>
-    <execution>
-      <phase>package</phase>
-      <goals>
-        <goal>shade</goal>
-      </goals>
-      <configuration>
-        <filters>
-          <filter>
-            <artifact>*:*</artifact>
-          </filter>
-        </filters>
-      </configuration>
-    </execution>
-  </executions>
-</plugin>
-
-```
-
-## Monitor
-
-Pulsar connectors enable you to move data in and out of Pulsar easily. It is important to ensure that the running connectors are healthy at any time. You can monitor Pulsar connectors that have been deployed with the following methods:
-
-- Check the metrics provided by Pulsar.
-
-  Pulsar connectors expose metrics that can be collected and used for monitoring the health of **Java** connectors. You can check the metrics by following the [monitoring](deploy-monitoring.md) guide.
-
-- Set and check your customized metrics.
-
-  In addition to the metrics provided by Pulsar, Pulsar allows you to customize metrics for **Java** connectors. Function workers collect user-defined metrics to Prometheus automatically, and you can check them in Grafana.
-
-Here is an example of how to customize metrics for a Java connector.
- -````mdx-code-block - - - -``` - -public class TestMetricSink implements Sink { - - @Override - public void open(Map config, SinkContext sinkContext) throws Exception { - sinkContext.recordMetric("foo", 1); - } - - @Override - public void write(Record record) throws Exception { - - } - - @Override - public void close() throws Exception { - - } - } - -``` - - - - -```` diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/io-dynamodb-source.md b/site2/website/versioned_docs/version-2.8.1-deprecated/io-dynamodb-source.md deleted file mode 100644 index ce585786eb0428..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/io-dynamodb-source.md +++ /dev/null @@ -1,80 +0,0 @@ ---- -id: io-dynamodb-source -title: AWS DynamoDB source connector -sidebar_label: "AWS DynamoDB source connector" -original_id: io-dynamodb-source ---- - -The DynamoDB source connector pulls data from DynamoDB table streams and persists data into Pulsar. - -This connector uses the [DynamoDB Streams Kinesis Adapter](https://github.com/awslabs/dynamodb-streams-kinesis-adapter), -which uses the [Kinesis Consumer Library](https://github.com/awslabs/amazon-kinesis-client) (KCL) to do the actual -consuming of messages. The KCL uses DynamoDB to track state for consumers and requires cloudwatch access to log metrics. - - -## Configuration - -The configuration of the DynamoDB source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -`initialPositionInStream`|InitialPositionInStream|false|LATEST|The position where the connector starts from.

    Below are the available options:

  808. `AT_TIMESTAMP`: start from the record at or after the specified timestamp.

  809. `LATEST`: start after the most recent data record.

  810. `TRIM_HORIZON`: start from the oldest available data record.
  811. -`startAtTime`|Date|false|" " (empty string)|If set to `AT_TIMESTAMP`, it specifies the point in time to start consumption. -`applicationName`|String|false|Pulsar IO connector|The name of the KCL application. Must be unique, as it is used to define the table name for the dynamo table used for state tracking.

    By default, the application name is included in the user agent string used to make AWS requests. This can assist with troubleshooting, for example, distinguish requests made by separate connector instances. -`checkpointInterval`|long|false|60000|The frequency of the KCL checkpoint in milliseconds. -`backoffTime`|long|false|3000|The amount of time to delay between requests when the connector encounters a throttling exception from AWS Kinesis in milliseconds. -`numRetries`|int|false|3|The number of re-attempts when the connector encounters an exception while trying to set a checkpoint. -`receiveQueueSize`|int|false|1000|The maximum number of AWS records that can be buffered inside the connector.

    Once the `receiveQueueSize` is reached, the connector does not consume any messages from Kinesis until some messages in the queue are successfully consumed. -`dynamoEndpoint`|String|false|" " (empty string)|The Dynamo end-point URL, which can be found at [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). -`cloudwatchEndpoint`|String|false|" " (empty string)|The Cloudwatch end-point URL, which can be found at [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). -`awsEndpoint`|String|false|" " (empty string)|The DynamoDB Streams end-point URL, which can be found at [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). -`awsRegion`|String|false|" " (empty string)|The AWS region.

    **Example**
    us-west-1, us-west-2
-`awsDynamodbStreamArn`|String|true|" " (empty string)|The DynamoDB stream ARN.
-`awsCredentialPluginName`|String|false|" " (empty string)|The fully-qualified class name of the implementation of {@inject: github:AwsCredentialProviderPlugin:/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java}.

    `awsCredentialProviderPlugin` has the following built-in plugins:

  * `org.apache.pulsar.io.kinesis.AwsDefaultProviderChainPlugin`:
    this plugin uses the default AWS provider chain.
    For more information, see [using the default credential provider chain](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html#credentials-default).

  * `org.apache.pulsar.io.kinesis.STSAssumeRoleProviderPlugin`:
    this plugin takes a configuration via the `awsCredentialPluginParam` that describes a role to assume when running the KCL.
    **JSON configuration example**
    `{"roleArn": "arn...", "roleSessionName": "name"}`

    `awsCredentialPluginName` is a factory class that creates the AWSCredentialsProvider used by the connector.

    If `awsCredentialPluginName` is set to empty, the connector creates a default AWSCredentialsProvider that accepts a JSON map of credentials in `awsCredentialPluginParam`.
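To see how these credential settings combine with the start-position options in practice, here is a minimal sketch that creates this source through the Java admin client instead of `pulsar-admin`. The service URL, source name, topic, and archive path are placeholders, not values taken from this guide; the stream ARN and credential JSON mirror the example configuration further below.

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.common.io.SourceConfig;

public class CreateDynamoDbSource {
    public static void main(String[] args) throws Exception {
        PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080") // assumed local standalone
                .build();

        // Connector configuration keys match the property table above.
        Map<String, Object> configs = new HashMap<>();
        configs.put("awsDynamodbStreamArn", "arn:aws:dynamodb:us-west-2:111122223333:table/TestTable/stream/2015-05-11T21:21:33.291");
        configs.put("awsCredentialPluginParam", "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}");
        configs.put("initialPositionInStream", "AT_TIMESTAMP");
        configs.put("startAtTime", "2019-03-05T19:28:58.000Z");

        SourceConfig sourceConfig = SourceConfig.builder()
                .tenant("public")
                .namespace("default")
                .name("dynamodb-source")
                .topicName("dynamodb-topic")
                .configs(configs)
                .parallelism(1)
                .build();

        // Equivalent to `pulsar-admin sources create` with a local NAR archive.
        admin.sources().createSource(sourceConfig, "connectors/pulsar-io-dynamodb-@pulsar:version@.nar");
        admin.close();
    }
}
```

The same `configs` map also works verbatim as the `configs:` section of a YAML file passed to `pulsar-admin` with `--source-config-file`.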
-`awsCredentialPluginParam`|String |false|" " (empty string)|The JSON parameter to initialize `awsCredentialsProviderPlugin`.
-
-### Example
-
-Before using the DynamoDB source connector, you need to create a configuration file through one of the following methods.
-
-* JSON
-
-  ```json
-  
-  {
-     "awsEndpoint": "https://some.endpoint.aws",
-     "awsRegion": "us-east-1",
-     "awsDynamodbStreamArn": "arn:aws:dynamodb:us-west-2:111122223333:table/TestTable/stream/2015-05-11T21:21:33.291",
-     "awsCredentialPluginParam": "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}",
-     "applicationName": "My test application",
-     "checkpointInterval": "30000",
-     "backoffTime": "4000",
-     "numRetries": "3",
-     "receiveQueueSize": 2000,
-     "initialPositionInStream": "TRIM_HORIZON",
-     "startAtTime": "2019-03-05T19:28:58.000Z"
-  }
-
-  ```
-
-* YAML
-
-  ```yaml
-  
-  configs:
-     awsEndpoint: "https://some.endpoint.aws"
-     awsRegion: "us-east-1"
-     awsDynamodbStreamArn: "arn:aws:dynamodb:us-west-2:111122223333:table/TestTable/stream/2015-05-11T21:21:33.291"
-     awsCredentialPluginParam: "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}"
-     applicationName: "My test application"
-     checkpointInterval: 30000
-     backoffTime: 4000
-     numRetries: 3
-     receiveQueueSize: 2000
-     initialPositionInStream: "TRIM_HORIZON"
-     startAtTime: "2019-03-05T19:28:58.000Z"
-
-  ```
-
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/io-elasticsearch-sink.md b/site2/website/versioned_docs/version-2.8.1-deprecated/io-elasticsearch-sink.md
deleted file mode 100644
index 4acedd3dd0788d..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/io-elasticsearch-sink.md
+++ /dev/null
@@ -1,173 +0,0 @@
----
-id: io-elasticsearch-sink
-title: ElasticSearch sink connector
-sidebar_label: "ElasticSearch sink connector"
-original_id: io-elasticsearch-sink
----
-
-The ElasticSearch sink connector pulls messages from Pulsar topics and persists the messages to indexes.
-
-## Configuration
-
-The configuration of the ElasticSearch sink connector has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `elasticSearchUrl` | String| true |" " (empty string)| The URL of the Elasticsearch cluster to which the connector connects. |
-| `indexName` | String| true |" " (empty string)| The index name to which the connector writes messages. |
-| `typeName` | String | false | "_doc" | The type name to which the connector writes messages.

    The value should be set explicitly to a valid type name other than "_doc" for Elasticsearch versions before 6.2, and left at the default otherwise. |
-| `indexNumberOfShards` | int| false |1| The number of shards of the index. |
-| `indexNumberOfReplicas` | int| false |1 | The number of replicas of the index. |
-| `username` | String| false |" " (empty string)| The username used by the connector to connect to the Elasticsearch cluster.

    If `username` is set, then `password` should also be provided. |
-| `password` | String| false | " " (empty string)|The password used by the connector to connect to the Elasticsearch cluster.

    If `username` is set, then `password` should also be provided. | - -## Example - -Before using the ElasticSearch sink connector, you need to create a configuration file through one of the following methods. - -### Configuration - -#### For Elasticsearch After 6.2 - -* JSON - - ```json - - { - "elasticSearchUrl": "http://localhost:9200", - "indexName": "my_index", - "username": "scooby", - "password": "doobie" - } - - ``` - -* YAML - - ```yaml - - configs: - elasticSearchUrl: "http://localhost:9200" - indexName: "my_index" - username: "scooby" - password: "doobie" - - ``` - -#### For Elasticsearch Before 6.2 - -* JSON - - ```json - - { - "elasticSearchUrl": "http://localhost:9200", - "indexName": "my_index", - "typeName": "doc", - "username": "scooby", - "password": "doobie" - } - - ``` - -* YAML - - ```yaml - - configs: - elasticSearchUrl: "http://localhost:9200" - indexName: "my_index" - typeName: "doc" - username: "scooby" - password: "doobie" - - ``` - -### Usage - -1. Start a single node Elasticsearch cluster. - - ```bash - - $ docker run -p 9200:9200 -p 9300:9300 \ - -e "discovery.type=single-node" \ - docker.elastic.co/elasticsearch/elasticsearch:7.5.1 - - ``` - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - - Make sure the NAR file is available at `connectors/pulsar-io-elastic-search-@pulsar:version@.nar`. - -3. Start the Pulsar Elasticsearch connector in local run mode using one of the following methods. - * Use the **JSON** configuration as shown previously. - - ```bash - - $ bin/pulsar-admin sinks localrun \ - --archive connectors/pulsar-io-elastic-search-@pulsar:version@.nar \ - --tenant public \ - --namespace default \ - --name elasticsearch-test-sink \ - --sink-config '{"elasticSearchUrl":"http://localhost:9200","indexName": "my_index","username": "scooby","password": "doobie"}' \ - --inputs elasticsearch_test - - ``` - - * Use the **YAML** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin sinks localrun \ - --archive connectors/pulsar-io-elastic-search-@pulsar:version@.nar \ - --tenant public \ - --namespace default \ - --name elasticsearch-test-sink \ - --sink-config-file elasticsearch-sink.yml \ - --inputs elasticsearch_test - - ``` - -4. Publish records to the topic. - - ```bash - - $ bin/pulsar-client produce elasticsearch_test --messages "{\"a\":1}" - - ``` - -5. Check documents in Elasticsearch. - - * refresh the index - - ```bash - - $ curl -s http://localhost:9200/my_index/_refresh - - ``` - - - * search documents - - ```bash - - $ curl -s http://localhost:9200/my_index/_search - - ``` - - You can see the record that published earlier has been successfully written into Elasticsearch. 
- - ```json - - {"took":2,"timed_out":false,"_shards":{"total":1,"successful":1,"skipped":0,"failed":0},"hits":{"total":{"value":1,"relation":"eq"},"max_score":1.0,"hits":[{"_index":"my_index","_type":"_doc","_id":"FSxemm8BLjG_iC0EeTYJ","_score":1.0,"_source":{"a":1}}]}} - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/io-file-source.md b/site2/website/versioned_docs/version-2.8.1-deprecated/io-file-source.md deleted file mode 100644 index e9d710cce65e83..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/io-file-source.md +++ /dev/null @@ -1,160 +0,0 @@ ---- -id: io-file-source -title: File source connector -sidebar_label: "File source connector" -original_id: io-file-source ---- - -The File source connector pulls messages from files in directories and persists the messages to Pulsar topics. - -## Configuration - -The configuration of the File source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `inputDirectory` | String|true | No default value|The input directory to pull files. | -| `recurse` | Boolean|false | true | Whether to pull files from subdirectory or not.| -| `keepFile` |Boolean|false | false | If set to true, the file is not deleted after it is processed, which means the file can be picked up continually. | -| `fileFilter` | String|false| [^\\.].* | The file whose name matches the given regular expression is picked up. | -| `pathFilter` | String |false | NULL | If `recurse` is set to true, the subdirectory whose path matches the given regular expression is scanned. | -| `minimumFileAge` | Integer|false | 0 | The minimum age that a file can be processed.

    Any file younger than `minimumFileAge` (according to the last modification date) is ignored. |
-| `maximumFileAge` | Long|false |Long.MAX_VALUE | The maximum age that a file can have to be processed.

    Any file older than `maximumFileAge` (according to the last modification date) is ignored. |
-| `minimumSize` |Integer| false |1 | The minimum size (in bytes) that a file must have to be processed. |
-| `maximumSize` | Double|false |Double.MAX_VALUE| The maximum size (in bytes) that a file can have to be processed. |
-| `ignoreHiddenFiles` |Boolean| false | true| Whether hidden files should be ignored. |
-| `pollingInterval`|Long | false | 10000L | Indicates how long to wait before performing a directory listing. |
-| `numWorkers` | Integer | false | 1 | The number of worker threads that process files.

    This allows you to process a larger number of files concurrently.

    However, setting this to a value greater than 1 makes the data from multiple files mixed in the target topic. | - -### Example - -Before using the File source connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "inputDirectory": "/Users/david", - "recurse": true, - "keepFile": true, - "fileFilter": "[^\\.].*", - "pathFilter": "*", - "minimumFileAge": 0, - "maximumFileAge": 9999999999, - "minimumSize": 1, - "maximumSize": 5000000, - "ignoreHiddenFiles": true, - "pollingInterval": 5000, - "numWorkers": 1 - } - - ``` - -* YAML - - ```yaml - - configs: - inputDirectory: "/Users/david" - recurse: true - keepFile: true - fileFilter: "[^\\.].*" - pathFilter: "*" - minimumFileAge: 0 - maximumFileAge: 9999999999 - minimumSize: 1 - maximumSize: 5000000 - ignoreHiddenFiles: true - pollingInterval: 5000 - numWorkers: 1 - - ``` - -## Usage - -Here is an example of using the File source connecter. - -1. Pull a Pulsar image. - - ```bash - - $ docker pull apachepulsar/pulsar:{version} - - ``` - -2. Start Pulsar standalone. - - ```bash - - $ docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-standalone apachepulsar/pulsar:{version} bin/pulsar standalone - - ``` - -3. Create a configuration file _file-connector.yaml_. - - ```yaml - - configs: - inputDirectory: "/opt" - - ``` - -4. Copy the configuration file _file-connector.yaml_ to the container. - - ```bash - - $ docker cp connectors/file-connector.yaml pulsar-standalone:/pulsar/ - - ``` - -5. Download the File source connector. - - ```bash - - $ curl -O https://mirrors.tuna.tsinghua.edu.cn/apache/pulsar/pulsar-{version}/connectors/pulsar-io-file-{version}.nar - - ``` - -6. Start the File source connector. - - ```bash - - $ docker exec -it pulsar-standalone /bin/bash - - $ ./bin/pulsar-admin sources localrun \ - --archive /pulsar/pulsar-io-file-{version}.nar \ - --name file-test \ - --destination-topic-name pulsar-file-test \ - --source-config-file /pulsar/file-connector.yaml - - ``` - -7. Start a consumer. - - ```bash - - ./bin/pulsar-client consume -s file-test -n 0 pulsar-file-test - - ``` - -8. Write the message to the file _test.txt_. - - ```bash - - echo "hello world!" > /opt/test.txt - - ``` - - The following information appears on the consumer terminal window. - - ```bash - - ----- got message ----- - hello world! - - ``` - - \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/io-flume-sink.md b/site2/website/versioned_docs/version-2.8.1-deprecated/io-flume-sink.md deleted file mode 100644 index b2ace53702f8ca..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/io-flume-sink.md +++ /dev/null @@ -1,56 +0,0 @@ ---- -id: io-flume-sink -title: Flume sink connector -sidebar_label: "Flume sink connector" -original_id: io-flume-sink ---- - -The Flume sink connector pulls messages from Pulsar topics to logs. - -## Configuration - -The configuration of the Flume sink connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -`name`|String|true|"" (empty string)|The name of the agent. -`confFile`|String|true|"" (empty string)|The configuration file. -`noReloadConf`|Boolean|false|false|Whether to reload configuration file if changed. -`zkConnString`|String|true|"" (empty string)|The ZooKeeper connection. 
-`zkBasePath`|String|true|"" (empty string)|The base path in ZooKeeper for agent configuration. - -### Example - -Before using the Flume sink connector, you need to create a configuration file through one of the following methods. - -> For more information about the `sink.conf` in the example below, see [here](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/resources/flume/sink.conf). - -* JSON - - ```json - - { - "name": "a1", - "confFile": "sink.conf", - "noReloadConf": "false", - "zkConnString": "", - "zkBasePath": "" - } - - ``` - -* YAML - - ```yaml - - configs: - name: a1 - confFile: sink.conf - noReloadConf: false - zkConnString: "" - zkBasePath: "" - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/io-flume-source.md b/site2/website/versioned_docs/version-2.8.1-deprecated/io-flume-source.md deleted file mode 100644 index b7fd7edad88111..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/io-flume-source.md +++ /dev/null @@ -1,56 +0,0 @@ ---- -id: io-flume-source -title: Flume source connector -sidebar_label: "Flume source connector" -original_id: io-flume-source ---- - -The Flume source connector pulls messages from logs to Pulsar topics. - -## Configuration - -The configuration of the Flume source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -`name`|String|true|"" (empty string)|The name of the agent. -`confFile`|String|true|"" (empty string)|The configuration file. -`noReloadConf`|Boolean|false|false|Whether to reload configuration file if changed. -`zkConnString`|String|true|"" (empty string)|The ZooKeeper connection. -`zkBasePath`|String|true|"" (empty string)|The base path in ZooKeeper for agent configuration. - -### Example - -Before using the Flume source connector, you need to create a configuration file through one of the following methods. - -> For more information about the `source.conf` in the example below, see [here](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/resources/flume/source.conf). - -* JSON - - ```json - - { - "name": "a1", - "confFile": "source.conf", - "noReloadConf": "false", - "zkConnString": "", - "zkBasePath": "" - } - - ``` - -* YAML - - ```yaml - - configs: - name: a1 - confFile: source.conf - noReloadConf: false - zkConnString: "" - zkBasePath: "" - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/io-hbase-sink.md b/site2/website/versioned_docs/version-2.8.1-deprecated/io-hbase-sink.md deleted file mode 100644 index 1737b00fa26805..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/io-hbase-sink.md +++ /dev/null @@ -1,67 +0,0 @@ ---- -id: io-hbase-sink -title: HBase sink connector -sidebar_label: "HBase sink connector" -original_id: io-hbase-sink ---- - -The HBase sink connector pulls the messages from Pulsar topics -and persists the messages to HBase tables - -## Configuration - -The configuration of the HBase sink connector has the following properties. - -### Property - -| Name | Type|Default | Required | Description | -|------|---------|----------|-------------|--- -| `hbaseConfigResources` | String|None | false | HBase system configuration `hbase-site.xml` file. | -| `zookeeperQuorum` | String|None | true | HBase system configuration about `hbase.zookeeper.quorum` value. 
| -| `zookeeperClientPort` | String|2181 | false | HBase system configuration about `hbase.zookeeper.property.clientPort` value. | -| `zookeeperZnodeParent` | String|/hbase | false | HBase system configuration about `zookeeper.znode.parent` value. | -| `tableName` | None |String | true | HBase table, the value is `namespace:tableName`. | -| `rowKeyName` | String|None | true | HBase table rowkey name. | -| `familyName` | String|None | true | HBase table column family name. | -| `qualifierNames` |String| None | true | HBase table column qualifier names. | -| `batchTimeMs` | Long|1000l| false | HBase table operation timeout in milliseconds. | -| `batchSize` | int|200| false | Batch size of updates made to the HBase table. | - -### Example - -Before using the HBase sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "hbaseConfigResources": "hbase-site.xml", - "zookeeperQuorum": "localhost", - "zookeeperClientPort": "2181", - "zookeeperZnodeParent": "/hbase", - "tableName": "pulsar_hbase", - "rowKeyName": "rowKey", - "familyName": "info", - "qualifierNames": [ 'name', 'address', 'age'] - } - - ``` - -* YAML - - ```yaml - - configs: - hbaseConfigResources: "hbase-site.xml" - zookeeperQuorum: "localhost" - zookeeperClientPort: "2181" - zookeeperZnodeParent: "/hbase" - tableName: "pulsar_hbase" - rowKeyName: "rowKey" - familyName: "info" - qualifierNames: [ 'name', 'address', 'age'] - - ``` - - \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/io-hdfs2-sink.md b/site2/website/versioned_docs/version-2.8.1-deprecated/io-hdfs2-sink.md deleted file mode 100644 index 4a8527154430d0..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/io-hdfs2-sink.md +++ /dev/null @@ -1,64 +0,0 @@ ---- -id: io-hdfs2-sink -title: HDFS2 sink connector -sidebar_label: "HDFS2 sink connector" -original_id: io-hdfs2-sink ---- - -The HDFS2 sink connector pulls the messages from Pulsar topics -and persists the messages to HDFS files. - -## Configuration - -The configuration of the HDFS2 sink connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `hdfsConfigResources` | String|true| None | A file or a comma-separated list containing the Hadoop file system configuration.

    **Example**
    'core-site.xml'
    'hdfs-site.xml' |
-| `directory` | String | true | None|The HDFS directory where files are read from or written to. |
-| `encoding` | String |false |None |The character encoding for the files.

    **Example**
    UTF-8
    ASCII |
-| `compression` | Compression |false |None |The compression codec used to compress or decompress the files on HDFS.

    Below are the available options:
  * BZIP2
  * DEFLATE
  * GZIP
  * LZ4
  * SNAPPY
  |
-| `kerberosUserPrincipal` |String| false| None|The principal account of the Kerberos user used for authentication. |
-| `keytab` | String|false|None| The full pathname of the Kerberos keytab file used for authentication. |
-| `filenamePrefix` |String| true, if `compression` is set to `None`. | None |The prefix of the files created inside the HDFS directory.

    **Example**
    The value of topicA results in files named topicA-. |
-| `fileExtension` | String| true | None | The extension added to the files written to HDFS.

    **Example**
    '.txt'
    '.seq' | -| `separator` | char|false |None |The character used to separate records in a text file.

    If no value is provided, the contents from all records are concatenated together in one continuous byte array. |
-| `syncInterval` | long| false |0| The interval in milliseconds between calls to flush data to the HDFS disk. |
-| `maxPendingRecords` |int| false|Integer.MAX_VALUE | The maximum number of records held in memory before acknowledging.

    Setting this property to 1 sends every record to disk before the record is acked.

    Setting this property to a higher value allows buffering records before flushing them to disk. -| `subdirectoryPattern` | String | false | None | A subdirectory associated with the created time of the sink.
    The pattern defines the date-time format used to name the subdirectory of `directory`.

    See [DateTimeFormatter](https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html) for pattern's syntax. | - -### Example - -Before using the HDFS2 sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "hdfsConfigResources": "core-site.xml", - "directory": "/foo/bar", - "filenamePrefix": "prefix", - "fileExtension": ".log", - "compression": "SNAPPY", - "subdirectoryPattern": "yyyy-MM-dd" - } - - ``` - -* YAML - - ```yaml - - configs: - hdfsConfigResources: "core-site.xml" - directory: "/foo/bar" - filenamePrefix: "prefix" - fileExtension: ".log" - compression: "SNAPPY" - subdirectoryPattern: "yyyy-MM-dd" - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/io-hdfs3-sink.md b/site2/website/versioned_docs/version-2.8.1-deprecated/io-hdfs3-sink.md deleted file mode 100644 index aec065a25db7f4..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/io-hdfs3-sink.md +++ /dev/null @@ -1,59 +0,0 @@ ---- -id: io-hdfs3-sink -title: HDFS3 sink connector -sidebar_label: "HDFS3 sink connector" -original_id: io-hdfs3-sink ---- - -The HDFS3 sink connector pulls the messages from Pulsar topics -and persists the messages to HDFS files. - -## Configuration - -The configuration of the HDFS3 sink connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `hdfsConfigResources` | String|true| None | A file or a comma-separated list containing the Hadoop file system configuration.

    **Example**
    'core-site.xml'
    'hdfs-site.xml' |
-| `directory` | String | true | None|The HDFS directory where files are read from or written to. |
-| `encoding` | String |false |None |The character encoding for the files.

    **Example**
    UTF-8
    ASCII |
-| `compression` | Compression |false |None |The compression codec used to compress or decompress the files on HDFS.

    Below are the available options:
  * BZIP2
  * DEFLATE
  * GZIP
  * LZ4
  * SNAPPY
  |
-| `kerberosUserPrincipal` |String| false| None|The principal account of the Kerberos user used for authentication. |
-| `keytab` | String|false|None| The full pathname of the Kerberos keytab file used for authentication. |
-| `filenamePrefix` |String| false |None |The prefix of the files created inside the HDFS directory.

    **Example**
    The value of topicA results in files named topicA-. |
-| `fileExtension` | String| false | None| The extension added to the files written to HDFS.

    **Example**
    '.txt'
    '.seq' | -| `separator` | char|false |None |The character used to separate records in a text file.

    If no value is provided, the contents from all records are concatenated together in one continuous byte array. |
-| `syncInterval` | long| false |0| The interval in milliseconds between calls to flush data to the HDFS disk. |
-| `maxPendingRecords` |int| false|Integer.MAX_VALUE | The maximum number of records held in memory before acknowledging.

    Setting this property to 1 sends every record to disk before the record is acked.

    Setting this property to a higher value allows buffering records before flushing them to disk. - -### Example - -Before using the HDFS3 sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "hdfsConfigResources": "core-site.xml", - "directory": "/foo/bar", - "filenamePrefix": "prefix", - "compression": "SNAPPY" - } - - ``` - -* YAML - - ```yaml - - configs: - hdfsConfigResources: "core-site.xml" - directory: "/foo/bar" - filenamePrefix: "prefix" - compression: "SNAPPY" - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/io-influxdb-sink.md b/site2/website/versioned_docs/version-2.8.1-deprecated/io-influxdb-sink.md deleted file mode 100644 index 9382f8c03121cc..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/io-influxdb-sink.md +++ /dev/null @@ -1,119 +0,0 @@ ---- -id: io-influxdb-sink -title: InfluxDB sink connector -sidebar_label: "InfluxDB sink connector" -original_id: io-influxdb-sink ---- - -The InfluxDB sink connector pulls messages from Pulsar topics -and persists the messages to InfluxDB. - -The InfluxDB sink provides different configurations for InfluxDBv1 and v2 respectively. - -## Configuration - -The configuration of the InfluxDB sink connector has the following properties. - -### Property -#### InfluxDBv2 -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `influxdbUrl` |String| true|" " (empty string) | The URL of the InfluxDB instance. | -| `token` | String|true| " " (empty string) |The authentication token used to authenticate to InfluxDB. | -| `organization` | String| true|" " (empty string) | The InfluxDB organization to write to. | -| `bucket` |String| true | " " (empty string)| The InfluxDB bucket to write to. | -| `precision` | String|false| ns | The timestamp precision for writing data to InfluxDB.

    Below are the available options:
  * ns
  * us
  * ms
  * s
  |
-| `logLevel` | String|false| NONE|The log level for InfluxDB request and response.

    Below are the available options:
  * NONE
  * BASIC
  * HEADERS
  * FULL
  |
-| `gzipEnable` | boolean|false | false | Whether to enable gzip or not. |
-| `batchTimeMs` |long|false| 1000L | The InfluxDB batch operation interval in milliseconds. |
-| `batchSize` | int|false|200| The batch size of writing to InfluxDB. |
-
-#### InfluxDBv1
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `influxdbUrl` |String| true|" " (empty string) | The URL of the InfluxDB instance. |
-| `username` | String|false| " " (empty string) |The username used to authenticate to InfluxDB. |
-| `password` | String| false|" " (empty string) | The password used to authenticate to InfluxDB. |
-| `database` |String| true | " " (empty string)| The InfluxDB database to which messages are written. |
-| `consistencyLevel` | String|false|ONE | The consistency level for writing data to InfluxDB.

    Below are the available options:
  * ALL
  * ANY
  * ONE
  * QUORUM
  |
-| `logLevel` | String|false| NONE|The log level for InfluxDB request and response.

    Below are the available options:
  * NONE
  * BASIC
  * HEADERS
  * FULL
  |
-| `retentionPolicy` | String|false| autogen| The retention policy for InfluxDB. |
-| `gzipEnable` | boolean|false | false | Whether to enable gzip or not. |
-| `batchTimeMs` |long|false| 1000L | The InfluxDB batch operation interval in milliseconds. |
-| `batchSize` | int|false|200| The batch size of writing to InfluxDB. |
-
-### Example
-Before using the InfluxDB sink connector, you need to create a configuration file through one of the following methods.
-#### InfluxDBv2
-* JSON

-  ```json
-  
-  {
-      "influxdbUrl": "http://localhost:9999",
-      "organization": "example-org",
-      "bucket": "example-bucket",
-      "token": "xxxx",
-      "precision": "ns",
-      "logLevel": "NONE",
-      "gzipEnable": false,
-      "batchTimeMs": 1000,
-      "batchSize": 100
-  }
-
-  ```
-
-
-* YAML
-
-  ```yaml
-  
-  configs:
-      influxdbUrl: "http://localhost:9999"
-      organization: "example-org"
-      bucket: "example-bucket"
-      token: "xxxx"
-      precision: "ns"
-      logLevel: "NONE"
-      gzipEnable: false
-      batchTimeMs: 1000
-      batchSize: 100
-
-  ```
-
-
-#### InfluxDBv1
-
-* JSON
-
-  ```json
-  
-  {
-      "influxdbUrl": "http://localhost:8086",
-      "database": "test_db",
-      "consistencyLevel": "ONE",
-      "logLevel": "NONE",
-      "retentionPolicy": "autogen",
-      "gzipEnable": false,
-      "batchTimeMs": 1000,
-      "batchSize": 100
-  }
-
-  ```
-
-* YAML
-
-  ```yaml
-  
-  configs:
-      influxdbUrl: "http://localhost:8086"
-      database: "test_db"
-      consistencyLevel: "ONE"
-      logLevel: "NONE"
-      retentionPolicy: "autogen"
-      gzipEnable: false
-      batchTimeMs: 1000
-      batchSize: 100
-
-  ```
-
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/io-jdbc-sink.md b/site2/website/versioned_docs/version-2.8.1-deprecated/io-jdbc-sink.md
deleted file mode 100644
index 77dbb61fccd7ed..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/io-jdbc-sink.md
+++ /dev/null
@@ -1,157 +0,0 @@
----
-id: io-jdbc-sink
-title: JDBC sink connector
-sidebar_label: "JDBC sink connector"
-original_id: io-jdbc-sink
----
-
-The JDBC sink connectors allow pulling messages from Pulsar topics
-and persisting the messages to ClickHouse, MariaDB, PostgreSQL, and SQLite.
-
-> Currently, INSERT, DELETE and UPDATE operations are supported.
-
-## Configuration
-
-The configuration of all JDBC sink connectors has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `userName` | String|false | " " (empty string) | The username used to connect to the database specified by `jdbcUrl`.

    **Note: `userName` is case-sensitive.**| -| `password` | String|false | " " (empty string)| The password used to connect to the database specified by `jdbcUrl`.

    **Note: `password` is case-sensitive.**| -| `jdbcUrl` | String|true | " " (empty string) | The JDBC URL of the database to which the connector connects. | -| `tableName` | String|true | " " (empty string) | The name of the table to which the connector writes. | -| `nonKey` | String|false | " " (empty string) | A comma-separated list contains the fields used in updating events. | -| `key` | String|false | " " (empty string) | A comma-separated list contains the fields used in `where` condition of updating and deleting events. | -| `timeoutMs` | int| false|500 | The JDBC operation timeout in milliseconds. | -| `batchSize` | int|false | 200 | The batch size of updates made to the database. | - -### Example for ClickHouse - -* JSON - - ```json - - { - "userName": "clickhouse", - "password": "password", - "jdbcUrl": "jdbc:clickhouse://localhost:8123/pulsar_clickhouse_jdbc_sink", - "tableName": "pulsar_clickhouse_jdbc_sink" - } - - ``` - -* YAML - - ```yaml - - tenant: "public" - namespace: "default" - name: "jdbc-clickhouse-sink" - topicName: "persistent://public/default/jdbc-clickhouse-topic" - sinkType: "jdbc-clickhouse" - configs: - userName: "clickhouse" - password: "password" - jdbcUrl: "jdbc:clickhouse://localhost:8123/pulsar_clickhouse_jdbc_sink" - tableName: "pulsar_clickhouse_jdbc_sink" - - ``` - -### Example for MariaDB - -* JSON - - ```json - - { - "userName": "mariadb", - "password": "password", - "jdbcUrl": "jdbc:mariadb://localhost:3306/pulsar_mariadb_jdbc_sink", - "tableName": "pulsar_mariadb_jdbc_sink" - } - - ``` - -* YAML - - ```yaml - - tenant: "public" - namespace: "default" - name: "jdbc-mariadb-sink" - topicName: "persistent://public/default/jdbc-mariadb-topic" - sinkType: "jdbc-mariadb" - configs: - userName: "mariadb" - password: "password" - jdbcUrl: "jdbc:mariadb://localhost:3306/pulsar_mariadb_jdbc_sink" - tableName: "pulsar_mariadb_jdbc_sink" - - ``` - -### Example for PostgreSQL - -Before using the JDBC PostgreSQL sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "userName": "postgres", - "password": "password", - "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink", - "tableName": "pulsar_postgres_jdbc_sink" - } - - ``` - -* YAML - - ```yaml - - tenant: "public" - namespace: "default" - name: "jdbc-postgres-sink" - topicName: "persistent://public/default/jdbc-postgres-topic" - sinkType: "jdbc-postgres" - configs: - userName: "postgres" - password: "password" - jdbcUrl: "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink" - tableName: "pulsar_postgres_jdbc_sink" - - ``` - -For more information on **how to use this JDBC sink connector**, see [connect Pulsar to PostgreSQL](io-quickstart.md#connect-pulsar-to-postgresql). 
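Since the JDBC sink maps the fields of a message's schema to table columns, it can help to see the producer side as well. The following is a minimal sketch for the PostgreSQL example above; it assumes the `pulsar_postgres_jdbc_sink` table has `id` and `name` columns and that the Pulsar Java client is on the classpath. The class and field names are illustrative, not part of the connector.

```java
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.Schema;

public class JdbcSinkProducerExample {

    // POJO whose fields are expected to match the target table columns.
    public static class TableRow {
        public int id;
        public String name;
    }

    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650") // assumed local standalone
                .build();

        // The JSON schema carries the field names the sink uses to build its SQL statements.
        Producer<TableRow> producer = client
                .newProducer(Schema.JSON(TableRow.class))
                .topic("persistent://public/default/jdbc-postgres-topic")
                .create();

        TableRow row = new TableRow();
        row.id = 1;
        row.name = "hello";
        producer.send(row);

        producer.close();
        client.close();
    }
}
```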
- -### Example for SQLite - -* JSON - - ```json - - { - "jdbcUrl": "jdbc:sqlite:db.sqlite", - "tableName": "pulsar_sqlite_jdbc_sink" - } - - ``` - -* YAML - - ```yaml - - tenant: "public" - namespace: "default" - name: "jdbc-sqlite-sink" - topicName: "persistent://public/default/jdbc-sqlite-topic" - sinkType: "jdbc-sqlite" - configs: - jdbcUrl: "jdbc:sqlite:db.sqlite" - tableName: "pulsar_sqlite_jdbc_sink" - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/io-kafka-sink.md b/site2/website/versioned_docs/version-2.8.1-deprecated/io-kafka-sink.md deleted file mode 100644 index 09dad4ce70bac9..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/io-kafka-sink.md +++ /dev/null @@ -1,72 +0,0 @@ ---- -id: io-kafka-sink -title: Kafka sink connector -sidebar_label: "Kafka sink connector" -original_id: io-kafka-sink ---- - -The Kafka sink connector pulls messages from Pulsar topics and persists the messages -to Kafka topics. - -This guide explains how to configure and use the Kafka sink connector. - -## Configuration - -The configuration of the Kafka sink connector has the following parameters. - -### Property - -| Name | Type| Required | Default | Description -|------|----------|---------|-------------|-------------| -| `bootstrapServers` |String| true | " " (empty string) | A comma-separated list of host and port pairs for establishing the initial connection to the Kafka cluster. | -|`acks`|String|true|" " (empty string) |The number of acknowledgments that the producer requires the leader to receive before a request completes.
    This controls the durability of the sent records.
-|`batchsize`|long|false|16384L|The batch size up to which a Kafka producer attempts to batch records together before sending them to brokers.
-|`maxRequestSize`|long|false|1048576L|The maximum size of a Kafka request in bytes.
-|`topic`|String|true|" " (empty string) |The Kafka topic which receives messages from Pulsar.
-| `keyDeserializationClass` | String|false | org.apache.kafka.common.serialization.StringSerializer | The serializer class for Kafka producers to serialize keys.
-| `valueDeserializationClass` | String|false | org.apache.kafka.common.serialization.ByteArraySerializer | The serializer class for Kafka producers to serialize values.

    The serializer is set by a specific implementation of [`KafkaAbstractSink`](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSink.java). -|`producerConfigProperties`|Map|false|" " (empty string)|The producer configuration properties to be passed to producers.

    **Note: other properties specified in the connector configuration file take precedence over this configuration**.
-
-
-### Example
-
-Before using the Kafka sink connector, you need to create a configuration file through one of the following methods.
-
-* JSON
-
-  ```json
-
-  {
-    "bootstrapServers": "localhost:6667",
-    "topic": "test",
-    "acks": "1",
-    "batchSize": "16384",
-    "maxRequestSize": "1048576",
-    "producerConfigProperties":
-     {
-        "client.id": "test-pulsar-producer",
-        "security.protocol": "SASL_PLAINTEXT",
-        "sasl.mechanism": "GSSAPI",
-        "sasl.kerberos.service.name": "kafka",
-        "acks": "all"
-     }
-  }
-
-  ```
-
-* YAML
-
-  ```yaml
-
-  configs:
-    bootstrapServers: "localhost:6667"
-    topic: "test"
-    acks: "1"
-    batchSize: "16384"
-    maxRequestSize: "1048576"
-    producerConfigProperties:
-      client.id: "test-pulsar-producer"
-      security.protocol: "SASL_PLAINTEXT"
-      sasl.mechanism: "GSSAPI"
-      sasl.kerberos.service.name: "kafka"
-      acks: "all"
-
-  ```
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/io-kafka-source.md b/site2/website/versioned_docs/version-2.8.1-deprecated/io-kafka-source.md
deleted file mode 100644
index 53448699e21b4a..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/io-kafka-source.md
+++ /dev/null
@@ -1,226 +0,0 @@
----
-id: io-kafka-source
-title: Kafka source connector
-sidebar_label: "Kafka source connector"
-original_id: io-kafka-source
----
-
-The Kafka source connector pulls messages from Kafka topics and persists the messages
-to Pulsar topics.
-
-This guide explains how to configure and use the Kafka source connector.
-
-## Configuration
-
-The configuration of the Kafka source connector has the following properties.
-
-### Property
-
-| Name | Type| Required | Default | Description
-|------|----------|---------|-------------|-------------|
-| `bootstrapServers` |String| true | " " (empty string) | A comma-separated list of host and port pairs for establishing the initial connection to the Kafka cluster. |
-| `groupId` |String| true | " " (empty string) | A unique string that identifies the group of consumer processes to which this consumer belongs. |
-| `fetchMinBytes` | long|false | 1 | The minimum number of bytes expected for each fetch response. |
-| `autoCommitEnabled` | boolean |false | true | If set to true, the consumer's offset is periodically committed in the background.

    If the process fails, this committed offset is used as the position from which a new consumer begins. |
-| `autoCommitIntervalMs` | long|false | 5000 | The frequency in milliseconds at which the consumer offsets are auto-committed to Kafka if `autoCommitEnabled` is set to true. |
-| `heartbeatIntervalMs` | long| false | 3000 | The interval between heartbeats to the consumer coordinator when using Kafka's group management facilities.

    **Note: `heartbeatIntervalMs` must be smaller than `sessionTimeoutMs`**.| -| `sessionTimeoutMs` | long|false | 30000 | The timeout used to detect consumer failures when using Kafka's group management facility. | -| `topic` | String|true | " " (empty string)| The Kafka topic which sends messages to Pulsar. | -| `consumerConfigProperties` | Map| false | " " (empty string) | The consumer configuration properties to be passed to consumers.

    **Note: other properties specified in the connector configuration file take precedence over this configuration**. | -| `keyDeserializationClass` | String|false | org.apache.kafka.common.serialization.StringDeserializer | The deserializer class for Kafka consumers to deserialize keys.
    The deserializer is set by a specific implementation of [`KafkaAbstractSource`](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSource.java). -| `valueDeserializationClass` | String|false | org.apache.kafka.common.serialization.ByteArrayDeserializer | The deserializer class for Kafka consumers to deserialize values. -| `autoOffsetReset` | String | false | "earliest" | The default offset reset policy. | - -### Schema Management - -This Kafka source connector applies the schema to the topic depending on the data type that is present on the Kafka topic. -You can detect the data type from the `keyDeserializationClass` and `valueDeserializationClass` configuration parameters. - -If the `valueDeserializationClass` is `org.apache.kafka.common.serialization.StringDeserializer`, you can set Schema.STRING() as schema type on the Pulsar topic. - -If `valueDeserializationClass` is `io.confluent.kafka.serializers.KafkaAvroDeserializer`, Pulsar downloads the AVRO schema from the Confluent Schema Registry® -and sets it properly on the Pulsar topic. - -In this case, you need to set `schema.registry.url` inside of the `consumerConfigProperties` configuration entry -of the source. - -If `keyDeserializationClass` is not `org.apache.kafka.common.serialization.StringDeserializer`, it means -that you do not have a String as key and the Kafka Source uses the KeyValue schema type with the SEPARATED encoding. - -Pulsar supports AVRO format for keys. - -In this case, you can have a Pulsar topic with the following properties: -- Schema: KeyValue schema with SEPARATED encoding -- Key: the content of key of the Kafka message (base64 encoded) -- Value: the content of value of the Kafka message -- KeySchema: the schema detected from `keyDeserializationClass` -- ValueSchema: the schema detected from `valueDeserializationClass` - -Topic compaction and partition routing use the Pulsar key, that contains the Kafka key, and so they are driven by the same value that you have on Kafka. - -When you consume data from Pulsar topics, you can use the `KeyValue` schema. In this way, you can decode the data properly. -If you want to access the raw key, you can use the `Message#getKeyBytes()` API. - -### Example - -Before using the Kafka source connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "bootstrapServers": "pulsar-kafka:9092", - "groupId": "test-pulsar-io", - "topic": "my-topic", - "sessionTimeoutMs": "10000", - "autoCommitEnabled": false - } - - ``` - -* YAML - - ```yaml - - configs: - bootstrapServers: "pulsar-kafka:9092" - groupId: "test-pulsar-io" - topic: "my-topic" - sessionTimeoutMs: "10000" - autoCommitEnabled: false - - ``` - -## Usage - -Here is an example of using the Kafka source connector with the configuration file as shown previously. - -1. Download a Kafka client and a Kafka connector. - - ```bash - - $ wget https://repo1.maven.org/maven2/org/apache/kafka/kafka-clients/0.10.2.1/kafka-clients-0.10.2.1.jar - - $ wget https://archive.apache.org/dist/pulsar/pulsar-2.4.0/connectors/pulsar-io-kafka-2.4.0.nar - - ``` - -2. Create a network. - - ```bash - - $ docker network create kafka-pulsar - - ``` - -3. Pull a ZooKeeper image and start ZooKeeper. - - ```bash - - $ docker pull wurstmeister/zookeeper - - $ docker run -d -it -p 2181:2181 --name pulsar-kafka-zookeeper --network kafka-pulsar wurstmeister/zookeeper - - ``` - -4. Pull a Kafka image and start Kafka. 
- - ```bash - - $ docker pull wurstmeister/kafka:2.11-1.0.2 - - $ docker run -d -it --network kafka-pulsar -p 6667:6667 -p 9092:9092 -e KAFKA_ADVERTISED_HOST_NAME=pulsar-kafka -e KAFKA_ZOOKEEPER_CONNECT=pulsar-kafka-zookeeper:2181 --name pulsar-kafka wurstmeister/kafka:2.11-1.0.2 - - ``` - -5. Pull a Pulsar image and start Pulsar standalone. - - ```bash - - $ docker pull apachepulsar/pulsar:@pulsar:version@ - - $ docker run -d -it --network kafka-pulsar -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-kafka-standalone apachepulsar/pulsar:2.4.0 bin/pulsar standalone - - ``` - -6. Create a producer file _kafka-producer.py_. - - ```python - - from kafka import KafkaProducer - producer = KafkaProducer(bootstrap_servers='pulsar-kafka:9092') - future = producer.send('my-topic', b'hello world') - future.get() - - ``` - -7. Create a consumer file _pulsar-client.py_. - - ```python - - import pulsar - - client = pulsar.Client('pulsar://localhost:6650') - consumer = client.subscribe('my-topic', subscription_name='my-aa') - - while True: - msg = consumer.receive() - print msg - print dir(msg) - print("Received message: '%s'" % msg.data()) - consumer.acknowledge(msg) - - client.close() - - ``` - -8. Copy the following files to Pulsar. - - ```bash - - $ docker cp pulsar-io-kafka-@pulsar:version@.nar pulsar-kafka-standalone:/pulsar - $ docker cp kafkaSourceConfig.yaml pulsar-kafka-standalone:/pulsar/conf - $ docker cp pulsar-client.py pulsar-kafka-standalone:/pulsar/ - $ docker cp kafka-producer.py pulsar-kafka-standalone:/pulsar/ - - ``` - -9. Open a new terminal window and start the Kafka source connector in local run mode. - - ```bash - - $ docker exec -it pulsar-kafka-standalone /bin/bash - - $ ./bin/pulsar-admin source localrun \ - --archive ./pulsar-io-kafka-@pulsar:version@.nar \ - --classname org.apache.pulsar.io.kafka.KafkaBytesSource \ - --tenant public \ - --namespace default \ - --name kafka \ - --destination-topic-name my-topic \ - --source-config-file ./conf/kafkaSourceConfig.yaml \ - --parallelism 1 - - ``` - -10. Open a new terminal window and run the consumer. - - ```bash - - $ docker exec -it pulsar-kafka-standalone /bin/bash - - $ pip install kafka-python - - $ python3 kafka-producer.py - - ``` - - The following information appears on the consumer terminal window. - - ```bash - - Received message: 'hello world' - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/io-kinesis-sink.md b/site2/website/versioned_docs/version-2.8.1-deprecated/io-kinesis-sink.md deleted file mode 100644 index 153587dcfc783e..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/io-kinesis-sink.md +++ /dev/null @@ -1,80 +0,0 @@ ---- -id: io-kinesis-sink -title: Kinesis sink connector -sidebar_label: "Kinesis sink connector" -original_id: io-kinesis-sink ---- - -The Kinesis sink connector pulls data from Pulsar and persists data into Amazon Kinesis. - -## Configuration - -The configuration of the Kinesis sink connector has the following property. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -`messageFormat`|MessageFormat|true|ONLY_RAW_PAYLOAD|Message format in which Kinesis sink converts Pulsar messages and publishes to Kinesis streams.

    Below are the available options:

  * `ONLY_RAW_PAYLOAD`: Kinesis sink directly publishes Pulsar message payload as a message into the configured Kinesis stream.

  * `FULL_MESSAGE_IN_JSON`: Kinesis sink creates a JSON payload with Pulsar message payload, properties and encryptionCtx, and publishes JSON payload into the configured Kinesis stream.

  * `FULL_MESSAGE_IN_FB`: Kinesis sink creates a flatbuffer serialized payload with Pulsar message payload, properties and encryptionCtx, and publishes flatbuffer payload into the configured Kinesis stream.
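For the `FULL_MESSAGE_IN_JSON` and `FULL_MESSAGE_IN_FB` formats, message properties travel to Kinesis alongside the payload. As a sketch of the producer side, properties can be attached like this; the topic and property names are illustrative only, not values from this guide.

```java
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.Schema;

public class KinesisSinkPropertiesExample {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650") // assumed local standalone
                .build();

        Producer<byte[]> producer = client.newProducer(Schema.BYTES)
                .topic("kinesis-sink-input") // illustrative input topic
                .create();

        // With FULL_MESSAGE_IN_JSON / FULL_MESSAGE_IN_FB, these properties are
        // serialized into the record that the sink publishes to the Kinesis stream.
        producer.newMessage()
                .property("source", "sensor-42")
                .value("raw payload".getBytes())
                .send();

        producer.close();
        client.close();
    }
}
```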
-`retainOrdering`|boolean|false|false|Whether the Pulsar connector retains ordering when moving messages from Pulsar to Kinesis.
-`awsEndpoint`|String|false|" " (empty string)|The Kinesis endpoint URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
-`awsRegion`|String|false|" " (empty string)|The AWS region.

    **Example**
    us-west-1, us-west-2
-`awsKinesisStreamName`|String|true|" " (empty string)|The Kinesis stream name.
-`awsCredentialPluginName`|String|false|" " (empty string)|The fully-qualified class name of the implementation of {@inject: github:AwsCredentialProviderPlugin:/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java}.

    It is a factory class that creates the AWSCredentialsProvider used by the Kinesis sink.

    If it is empty, the Kinesis sink creates a default AWSCredentialsProvider which accepts json-map of credentials in `awsCredentialPluginParam`. -`awsCredentialPluginParam`|String |false|" " (empty string)|The JSON parameter to initialize `awsCredentialsProviderPlugin`. - -### Built-in plugins - -The following are built-in `AwsCredentialProviderPlugin` plugins: - -* `org.apache.pulsar.io.aws.AwsDefaultProviderChainPlugin` - - This plugin takes no configuration, it uses the default AWS provider chain. - - For more information, see [AWS documentation](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html#credentials-default). - -* `org.apache.pulsar.io.aws.STSAssumeRoleProviderPlugin` - - This plugin takes a configuration (via the `awsCredentialPluginParam`) that describes a role to assume when running the KCL. - - This configuration takes the form of a small json document like: - - ```json - - {"roleArn": "arn...", "roleSessionName": "name"} - - ``` - -### Example - -Before using the Kinesis sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "awsEndpoint": "some.endpoint.aws", - "awsRegion": "us-east-1", - "awsKinesisStreamName": "my-stream", - "awsCredentialPluginParam": "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}", - "messageFormat": "ONLY_RAW_PAYLOAD", - "retainOrdering": "true" - } - - ``` - -* YAML - - ```yaml - - configs: - awsEndpoint: "some.endpoint.aws" - awsRegion: "us-east-1" - awsKinesisStreamName: "my-stream" - awsCredentialPluginParam: "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}" - messageFormat: "ONLY_RAW_PAYLOAD" - retainOrdering: "true" - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/io-kinesis-source.md b/site2/website/versioned_docs/version-2.8.1-deprecated/io-kinesis-source.md deleted file mode 100644 index 0d07eefc3703b3..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/io-kinesis-source.md +++ /dev/null @@ -1,81 +0,0 @@ ---- -id: io-kinesis-source -title: Kinesis source connector -sidebar_label: "Kinesis source connector" -original_id: io-kinesis-source ---- - -The Kinesis source connector pulls data from Amazon Kinesis and persists data into Pulsar. - -This connector uses the [Kinesis Consumer Library](https://github.com/awslabs/amazon-kinesis-client) (KCL) to do the actual consuming of messages. The KCL uses DynamoDB to track state for consumers. - -> Note: currently, the Kinesis source connector only supports raw messages. If you use KMS encrypted messages, the encrypted messages are sent to downstream. This connector will support decrypting messages in the future release. - - -## Configuration - -The configuration of the Kinesis source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -`initialPositionInStream`|InitialPositionInStream|false|LATEST|The position where the connector starts from.

    Below are the available options:

  * `AT_TIMESTAMP`: start from the record at or after the specified timestamp.

  * `LATEST`: start after the most recent data record.

  * `TRIM_HORIZON`: start from the oldest available data record.
-`startAtTime`|Date|false|" " (empty string)|If set to `AT_TIMESTAMP`, it specifies the point in time to start consumption.
-`applicationName`|String|false|Pulsar IO connector|The name of the Amazon Kinesis application.

    By default, the application name is included in the user agent string used to make AWS requests. This can assist with troubleshooting, for example, distinguishing requests made by separate connector instances.
-`checkpointInterval`|long|false|60000|The frequency of the Kinesis stream checkpoint in milliseconds.
-`backoffTime`|long|false|3000|The amount of time, in milliseconds, to delay between requests when the connector encounters a throttling exception from AWS Kinesis.
-`numRetries`|int|false|3|The number of re-attempts when the connector encounters an exception while trying to set a checkpoint.
-`receiveQueueSize`|int|false|1000|The maximum number of AWS records that can be buffered inside the connector.

    Once the `receiveQueueSize` is reached, the connector does not consume any messages from Kinesis until some messages in the queue are successfully consumed.
-`dynamoEndpoint`|String|false|" " (empty string)|The DynamoDB endpoint URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
-`cloudwatchEndpoint`|String|false|" " (empty string)|The CloudWatch endpoint URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
-`useEnhancedFanOut`|boolean|false|true|If set to true, it uses Kinesis enhanced fan-out.

    If set to false, it uses polling.
-`awsEndpoint`|String|false|" " (empty string)|The Kinesis endpoint URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
-`awsRegion`|String|false|" " (empty string)|The AWS region.

    **Example**
    us-west-1, us-west-2
-`awsKinesisStreamName`|String|true|" " (empty string)|The Kinesis stream name.
-`awsCredentialPluginName`|String|false|" " (empty string)|The fully-qualified class name of the implementation of {@inject: github:AwsCredentialProviderPlugin:/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java}.

    `awsCredentialProviderPlugin` has the following built-in plugins:

  * `org.apache.pulsar.io.kinesis.AwsDefaultProviderChainPlugin`:
    this plugin uses the default AWS provider chain.
    For more information, see [using the default credential provider chain](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html#credentials-default).

  * `org.apache.pulsar.io.kinesis.STSAssumeRoleProviderPlugin`:
    this plugin takes a configuration via the `awsCredentialPluginParam` that describes a role to assume when running the KCL.
    **JSON configuration example**
    `{"roleArn": "arn...", "roleSessionName": "name"}`

    `awsCredentialPluginName` is a factory class that creates the AWSCredentialsProvider used by the connector.

    If `awsCredentialPluginName` is set to empty, the connector creates a default AWSCredentialsProvider that accepts a JSON map of credentials in `awsCredentialPluginParam`.
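If neither built-in plugin fits, a custom implementation can be supplied. The sketch below assumes the interface shape shown in the linked source file, namely `init(String)` plus `getCredentialProvider()`, and hard-codes placeholder credentials where a real plugin would parse `awsCredentialPluginParam`.

```java
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import org.apache.pulsar.io.aws.AwsCredentialProviderPlugin;

public class StaticCredentialPlugin implements AwsCredentialProviderPlugin {

    private AWSCredentialsProvider provider;

    @Override
    public void init(String param) {
        // `param` receives the raw value of `awsCredentialPluginParam`;
        // a real plugin would parse it (for example, as JSON) instead of
        // using the placeholder credentials below.
        provider = new AWSStaticCredentialsProvider(
                new BasicAWSCredentials("placeholder-access-key", "placeholder-secret-key"));
    }

    @Override
    public AWSCredentialsProvider getCredentialProvider() {
        return provider;
    }

    @Override
    public void close() {
        // Nothing to release in this sketch.
    }
}
```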
-`awsCredentialPluginParam`|String |false|" " (empty string)|The JSON parameter to initialize `awsCredentialsProviderPlugin`.
-
-### Example
-
-Before using the Kinesis source connector, you need to create a configuration file through one of the following methods.
-
-* JSON
-
-  ```json
-
-  {
-     "awsEndpoint": "https://some.endpoint.aws",
-     "awsRegion": "us-east-1",
-     "awsKinesisStreamName": "my-stream",
-     "awsCredentialPluginParam": "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}",
-     "applicationName": "My test application",
-     "checkpointInterval": "30000",
-     "backoffTime": "4000",
-     "numRetries": "3",
-     "receiveQueueSize": 2000,
-     "initialPositionInStream": "TRIM_HORIZON",
-     "startAtTime": "2019-03-05T19:28:58.000Z"
-  }
-
-  ```
-
-* YAML
-
-  ```yaml
-
-  configs:
-     awsEndpoint: "https://some.endpoint.aws"
-     awsRegion: "us-east-1"
-     awsKinesisStreamName: "my-stream"
-     awsCredentialPluginParam: "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}"
-     applicationName: "My test application"
-     checkpointInterval: 30000
-     backoffTime: 4000
-     numRetries: 3
-     receiveQueueSize: 2000
-     initialPositionInStream: "TRIM_HORIZON"
-     startAtTime: "2019-03-05T19:28:58.000Z"
-
-  ```
-
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/io-mongo-sink.md b/site2/website/versioned_docs/version-2.8.1-deprecated/io-mongo-sink.md
deleted file mode 100644
index 30c15a6c280938..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/io-mongo-sink.md
+++ /dev/null
@@ -1,56 +0,0 @@
----
-id: io-mongo-sink
-title: MongoDB sink connector
-sidebar_label: "MongoDB sink connector"
-original_id: io-mongo-sink
----
-
-The MongoDB sink connector pulls messages from Pulsar topics
-and persists the messages to collections.
-
-## Configuration
-
-The configuration of the MongoDB sink connector has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `mongoUri` | String| true| " " (empty string) | The MongoDB URI to which the connector connects.

    For more information, see [connection string URI format](https://docs.mongodb.com/manual/reference/connection-string/). |
-| `database` | String| true| " " (empty string)| The database name to which the collection belongs. |
-| `collection` | String| true| " " (empty string)| The collection name to which the connector writes messages. |
-| `batchSize` | int|false|100 | The batch size of writing messages to collections. |
-| `batchTimeMs` |long|false|1000| The batch operation interval in milliseconds. |
-
-
-### Example
-
-Before using the Mongo sink connector, you need to create a configuration file through one of the following methods.
-
-* JSON
-
-  ```json
-
-  {
-      "mongoUri": "mongodb://localhost:27017",
-      "database": "pulsar",
-      "collection": "messages",
-      "batchSize": "2",
-      "batchTimeMs": "500"
-  }
-
-  ```
-
-* YAML
-
-  ```yaml
-
-  configs:
-      mongoUri: "mongodb://localhost:27017"
-      database: "pulsar"
-      collection: "messages"
-      batchSize: 2
-      batchTimeMs: 500
-
-  ```
-
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/io-netty-source.md b/site2/website/versioned_docs/version-2.8.1-deprecated/io-netty-source.md
deleted file mode 100644
index e1ec8d863115b3..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/io-netty-source.md
+++ /dev/null
@@ -1,241 +0,0 @@
----
-id: io-netty-source
-title: Netty source connector
-sidebar_label: "Netty source connector"
-original_id: io-netty-source
----
-
-The Netty source connector opens a port that accepts incoming data via the configured network protocol
-and publishes it to user-defined Pulsar topics.
-
-This connector can be used in a containerized (for example, k8s) deployment. Otherwise, if the connector is running in process or thread mode, the instances may conflict on listening ports.
-
-## Configuration
-
-The configuration of the Netty source connector has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `type` |String| true |tcp | The network protocol over which data is transmitted to Netty.

    Below are the available options:
  858. tcp
  859. http
  860. udp
  861. | -| `host` | String|true | 127.0.0.1 | The host name or address on which the source instance listen. | -| `port` | int|true | 10999 | The port on which the source instance listen. | -| `numberOfThreads` |int| true |1 | The number of threads of Netty TCP server to accept incoming connections and handle the traffic of accepted connections. | - - -### Example - -Before using the Netty source connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "type": "tcp", - "host": "127.0.0.1", - "port": "10911", - "numberOfThreads": "1" - } - - ``` - -* YAML - - ```yaml - - configs: - type: "tcp" - host: "127.0.0.1" - port: 10999 - numberOfThreads: 1 - - ``` - -## Usage - -The following examples show how to use the Netty source connector with TCP and HTTP. - -### TCP - -1. Start Pulsar standalone. - - ```bash - - $ docker pull apachepulsar/pulsar:{version} - - $ docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-netty-standalone apachepulsar/pulsar:{version} bin/pulsar standalone - - ``` - -2. Create a configuration file _netty-source-config.yaml_. - - ```yaml - - configs: - type: "tcp" - host: "127.0.0.1" - port: 10999 - numberOfThreads: 1 - - ``` - -3. Copy the configuration file _netty-source-config.yaml_ to Pulsar server. - - ```bash - - $ docker cp netty-source-config.yaml pulsar-netty-standalone:/pulsar/conf/ - - ``` - -4. Download the Netty source connector. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - curl -O http://mirror-hk.koddos.net/apache/pulsar/pulsar-{version}/connectors/pulsar-io-netty-{version}.nar - - ``` - -5. Start the Netty source connector. - - ```bash - - $ ./bin/pulsar-admin sources localrun \ - --archive pulsar-io-@pulsar:version@.nar \ - --tenant public \ - --namespace default \ - --name netty \ - --destination-topic-name netty-topic \ - --source-config-file netty-source-config.yaml \ - --parallelism 1 - - ``` - -6. Consume data. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - - $ ./bin/pulsar-client consume -t Exclusive -s netty-sub netty-topic -n 0 - - ``` - -7. Open another terminal window to send data to the Netty source. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - - $ apt-get update - - $ apt-get -y install telnet - - $ root@1d19327b2c67:/pulsar# telnet 127.0.0.1 10999 - Trying 127.0.0.1... - Connected to 127.0.0.1. - Escape character is '^]'. - hello - world - - ``` - -8. The following information appears on the consumer terminal window. - - ```bash - - ----- got message ----- - hello - - ----- got message ----- - world - - ``` - -### HTTP - -1. Start Pulsar standalone. - - ```bash - - $ docker pull apachepulsar/pulsar:{version} - - $ docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-netty-standalone apachepulsar/pulsar:{version} bin/pulsar standalone - - ``` - -2. Create a configuration file _netty-source-config.yaml_. - - ```yaml - - configs: - type: "http" - host: "127.0.0.1" - port: 10999 - numberOfThreads: 1 - - ``` - -3. Copy the configuration file _netty-source-config.yaml_ to Pulsar server. - - ```bash - - $ docker cp netty-source-config.yaml pulsar-netty-standalone:/pulsar/conf/ - - ``` - -4. Download the Netty source connector. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - curl -O http://mirror-hk.koddos.net/apache/pulsar/pulsar-{version}/connectors/pulsar-io-netty-{version}.nar - - ``` - -5. Start the Netty source connector. 
- - ```bash - - $ ./bin/pulsar-admin sources localrun \ - --archive pulsar-io-@pulsar:version@.nar \ - --tenant public \ - --namespace default \ - --name netty \ - --destination-topic-name netty-topic \ - --source-config-file netty-source-config.yaml \ - --parallelism 1 - - ``` - -6. Consume data. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - - $ ./bin/pulsar-client consume -t Exclusive -s netty-sub netty-topic -n 0 - - ``` - -7. Open another terminal window to send data to the Netty source. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - - $ curl -X POST --data 'hello, world!' http://127.0.0.1:10999/ - - ``` - -8. The following information appears on the consumer terminal window. - - ```bash - - ----- got message ----- - hello, world! - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/io-nsq-source.md b/site2/website/versioned_docs/version-2.8.1-deprecated/io-nsq-source.md deleted file mode 100644 index b61e7e100c22e1..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/io-nsq-source.md +++ /dev/null @@ -1,21 +0,0 @@ ---- -id: io-nsq-source -title: NSQ source connector -sidebar_label: "NSQ source connector" -original_id: io-nsq-source ---- - -The NSQ source connector receives messages from NSQ topics -and writes messages to Pulsar topics. - -## Configuration - -The configuration of the NSQ source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `lookupds` |String| true | " " (empty string) | A comma-separated list of nsqlookupds to connect to. | -| `topic` | String|true | " " (empty string) | The NSQ topic to transport. | -| `channel` | String |false | pulsar-transport-{$topic} | The channel to consume from on the provided NSQ topic. | \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/io-overview.md b/site2/website/versioned_docs/version-2.8.1-deprecated/io-overview.md deleted file mode 100644 index 3db5ee34042d3f..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/io-overview.md +++ /dev/null @@ -1,164 +0,0 @@ ---- -id: io-overview -title: Pulsar connector overview -sidebar_label: "Overview" -original_id: io-overview ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -Messaging systems are most powerful when you can easily use them with external systems like databases and other messaging systems. - -**Pulsar IO connectors** enable you to easily create, deploy, and manage connectors that interact with external systems, such as [Apache Cassandra](https://cassandra.apache.org), [Aerospike](https://www.aerospike.com), and many others. - - -## Concept - -Pulsar IO connectors come in two types: **source** and **sink**. - -This diagram illustrates the relationship between source, Pulsar, and sink: - -![Pulsar IO diagram](/assets/pulsar-io.png "Pulsar IO connectors (sources and sinks)") - - -### Source - -> Sources **feed data from external systems into Pulsar**. - -Common sources include other messaging systems and firehose-style data pipeline APIs. - -For the complete list of Pulsar built-in source connectors, see [source connector](io-connectors.md#source-connector). - -### Sink - -> Sinks **feed data from Pulsar into external systems**. - -Common sinks include other messaging systems and SQL and NoSQL databases. 
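Because the connectors actually available depend on what was installed alongside the broker, it can be worth confirming what your own cluster reports before picking from the lists referenced below; a minimal check, assuming a running broker and the `bin/pulsar-admin` tool of a Pulsar distribution:

```bash

# List the built-in source connectors this cluster knows about
$ bin/pulsar-admin sources available-sources

# List the built-in sink connectors this cluster knows about
$ bin/pulsar-admin sinks available-sinks

```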
- -For the complete list of Pulsar built-in sink connectors, see [sink connector](io-connectors.md#sink-connector). - -## Processing guarantee - -Processing guarantees are used to handle errors when writing messages to Pulsar topics. - -> Pulsar connectors and Functions use the **same** processing guarantees as below. - -Delivery semantic | Description -:------------------|:------- -`at-most-once` | Each message sent to a connector is to be **processed once** or **not to be processed**. -`at-least-once` | Each message sent to a connector is to be **processed once** or **more than once**. -`effectively-once` | Each message sent to a connector has **one output associated** with it. - -> Processing guarantees for connectors not just rely on Pulsar guarantee but also **relate to external systems**, that is, **the implementation of source and sink**. - -* Source: Pulsar ensures that writing messages to Pulsar topics respects to the processing guarantees. It is within Pulsar's control. - -* Sink: the processing guarantees rely on the sink implementation. If the sink implementation does not handle retries in an idempotent way, the sink does not respect to the processing guarantees. - -### Set - -When creating a connector, you can set the processing guarantee with the following semantics: - -* ATLEAST_ONCE - -* ATMOST_ONCE - -* EFFECTIVELY_ONCE - -> If `--processing-guarantees` is not specified when creating a connector, the default semantic is `ATLEAST_ONCE`. - -Here takes **Admin CLI** as an example. For more information about **REST API** or **JAVA Admin API**, see [here](io-use.md#create). - -````mdx-code-block - - - - -```bash - -$ bin/pulsar-admin sources create \ - --processing-guarantees ATMOST_ONCE \ - # Other source configs - -``` - -For more information about the options of `pulsar-admin sources create`, see [here](reference-connector-admin.md#create). - - - - -```bash - -$ bin/pulsar-admin sinks create \ - --processing-guarantees EFFECTIVELY_ONCE \ - # Other sink configs - -``` - -For more information about the options of `pulsar-admin sinks create`, see [here](reference-connector-admin.md#create-1). - - - - -```` - -### Update - -After creating a connector, you can update the processing guarantee with the following semantics: - -* ATLEAST_ONCE - -* ATMOST_ONCE - -* EFFECTIVELY_ONCE - -Here takes **Admin CLI** as an example. For more information about **REST API** or **JAVA Admin API**, see [here](io-use.md#create). - -````mdx-code-block - - - - -```bash - -$ bin/pulsar-admin sources update \ - --processing-guarantees EFFECTIVELY_ONCE \ - # Other source configs - -``` - -For more information about the options of `pulsar-admin sources update`, see [here](reference-connector-admin.md#update). - - - - -```bash - -$ bin/pulsar-admin sinks update \ - --processing-guarantees ATMOST_ONCE \ - # Other sink configs - -``` - -For more information about the options of `pulsar-admin sinks update`, see [here](reference-connector-admin.md#update-1). - - - - -```` - - -## Work with connector - -You can manage Pulsar connectors (for example, create, update, start, stop, restart, reload, delete and perform other operations on connectors) via the [Connector Admin CLI](reference-connector-admin.md) with [sources](io-cli.md#sources) and [sinks](io-cli.md#sinks) subcommands. - -Connectors (sources and sinks) and Functions are components of instances, and they all run on Functions workers. 
When managing a source, sink or function via [Connector Admin CLI](reference-connector-admin.md) or [Functions Admin CLI](functions-cli.md), an instance is started on a worker. For more information, see [Functions worker](functions-worker.md#run-functions-worker-separately). - diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/io-quickstart.md b/site2/website/versioned_docs/version-2.8.1-deprecated/io-quickstart.md deleted file mode 100644 index 8474c93f51336d..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/io-quickstart.md +++ /dev/null @@ -1,963 +0,0 @@ ---- -id: io-quickstart -title: How to connect Pulsar to database -sidebar_label: "Get started" -original_id: io-quickstart ---- - -This tutorial provides a hands-on look at how you can move data out of Pulsar without writing a single line of code. - -It is helpful to review the [concepts](io-overview.md) for Pulsar I/O with running the steps in this guide to gain a deeper understanding. - -At the end of this tutorial, you are able to: - -- [Connect Pulsar to Cassandra](#Connect-Pulsar-to-Cassandra) - -- [Connect Pulsar to PostgreSQL](#Connect-Pulsar-to-PostgreSQL) - -:::tip - -* These instructions assume you are running Pulsar in [standalone mode](getting-started-standalone.md). However, all -the commands used in this tutorial can be used in a multi-nodes Pulsar cluster without any changes. -* All the instructions are assumed to run at the root directory of a Pulsar binary distribution. - -::: - -## Install Pulsar and built-in connector - -Before connecting Pulsar to a database, you need to install Pulsar and the desired built-in connector. - -For more information about **how to install a standalone Pulsar and built-in connectors**, see [here](getting-started-standalone.md/#installing-pulsar). - -## Start Pulsar standalone - -1. Start Pulsar locally. - - ```bash - - bin/pulsar standalone - - ``` - - All the components of a Pulsar service are start in order. - - You can curl those pulsar service endpoints to make sure Pulsar service is up running correctly. - -2. Check Pulsar binary protocol port. - - ```bash - - telnet localhost 6650 - - ``` - -3. Check Pulsar Function cluster. - - ```bash - - curl -s http://localhost:8080/admin/v2/worker/cluster - - ``` - - **Example output** - - ```json - - [{"workerId":"c-standalone-fw-localhost-6750","workerHostname":"localhost","port":6750}] - - ``` - -4. Make sure a public tenant and a default namespace exist. - - ```bash - - curl -s http://localhost:8080/admin/v2/namespaces/public - - ``` - - **Example output** - - ```json - - ["public/default","public/functions"] - - ``` - -5. All built-in connectors should be listed as available. 
- - ```bash - - curl -s http://localhost:8080/admin/v2/functions/connectors - - ``` - - **Example output** - - ```json - - [{"name":"aerospike","description":"Aerospike database sink","sinkClass":"org.apache.pulsar.io.aerospike.AerospikeStringSink"},{"name":"cassandra","description":"Writes data into Cassandra","sinkClass":"org.apache.pulsar.io.cassandra.CassandraStringSink"},{"name":"kafka","description":"Kafka source and sink connector","sourceClass":"org.apache.pulsar.io.kafka.KafkaStringSource","sinkClass":"org.apache.pulsar.io.kafka.KafkaBytesSink"},{"name":"kinesis","description":"Kinesis sink connector","sinkClass":"org.apache.pulsar.io.kinesis.KinesisSink"},{"name":"rabbitmq","description":"RabbitMQ source connector","sourceClass":"org.apache.pulsar.io.rabbitmq.RabbitMQSource"},{"name":"twitter","description":"Ingest data from Twitter firehose","sourceClass":"org.apache.pulsar.io.twitter.TwitterFireHose"}] - - ``` - - If an error occurs when starting Pulsar service, you may see an exception at the terminal running `pulsar/standalone`, - or you can navigate to the `logs` directory under the Pulsar directory to view the logs. - -## Connect Pulsar to Cassandra - -This section demonstrates how to connect Pulsar to Cassandra. - -:::tip - -* Make sure you have Docker installed. If you do not have one, see [install Docker](https://docs.docker.com/docker-for-mac/install/). -* The Cassandra sink connector reads messages from Pulsar topics and writes the messages into Cassandra tables. For more information, see [Cassandra sink connector](io-cassandra-sink.md). - -::: - -### Setup a Cassandra cluster - -This example uses `cassandra` Docker image to start a single-node Cassandra cluster in Docker. - -1. Start a Cassandra cluster. - - ```bash - - docker run -d --rm --name=cassandra -p 9042:9042 cassandra - - ``` - - :::note - - Before moving to the next steps, make sure the Cassandra cluster is running. - - ::: - -2. Make sure the Docker process is running. - - ```bash - - docker ps - - ``` - -3. Check the Cassandra logs to make sure the Cassandra process is running as expected. - - ```bash - - docker logs cassandra - - ``` - -4. Check the status of the Cassandra cluster. - - ```bash - - docker exec cassandra nodetool status - - ``` - - **Example output** - - ``` - - Datacenter: datacenter1 - ======================= - Status=Up/Down - |/ State=Normal/Leaving/Joining/Moving - -- Address Load Tokens Owns (effective) Host ID Rack - UN 172.17.0.2 103.67 KiB 256 100.0% af0e4b2f-84e0-4f0b-bb14-bd5f9070ff26 rack1 - - ``` - -5. Use `cqlsh` to connect to the Cassandra cluster. - - ```bash - - $ docker exec -ti cassandra cqlsh localhost - Connected to Test Cluster at localhost:9042. - [cqlsh 5.0.1 | Cassandra 3.11.2 | CQL spec 3.4.4 | Native protocol v4] - Use HELP for help. - cqlsh> - - ``` - -6. Create a keyspace `pulsar_test_keyspace`. - - ```bash - - cqlsh> CREATE KEYSPACE pulsar_test_keyspace WITH replication = {'class':'SimpleStrategy', 'replication_factor':1}; - - ``` - -7. Create a table `pulsar_test_table`. - - ```bash - - cqlsh> USE pulsar_test_keyspace; - cqlsh:pulsar_test_keyspace> CREATE TABLE pulsar_test_table (key text PRIMARY KEY, col text); - - ``` - -### Configure a Cassandra sink - -Now that we have a Cassandra cluster running locally. - -In this section, you need to configure a Cassandra sink connector. - -To run a Cassandra sink connector, you need to prepare a configuration file including the information that Pulsar connector runtime needs to know. 
- -For example, how Pulsar connector can find the Cassandra cluster, what is the keyspace and the table that Pulsar connector uses for writing Pulsar messages to, and so on. - -You can create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "roots": "localhost:9042", - "keyspace": "pulsar_test_keyspace", - "columnFamily": "pulsar_test_table", - "keyname": "key", - "columnName": "col" - } - - ``` - -* YAML - - ```yaml - - configs: - roots: "localhost:9042" - keyspace: "pulsar_test_keyspace" - columnFamily: "pulsar_test_table" - keyname: "key" - columnName: "col" - - ``` - -For more information, see [Cassandra sink connector](io-cassandra-sink.md). - -### Create a Cassandra sink - -You can use the [Connector Admin CLI](io-cli.md) -to create a sink connector and perform other operations on them. - -Run the following command to create a Cassandra sink connector with sink type _cassandra_ and the config file _examples/cassandra-sink.yml_ created previously. - -#### Note -> The `sink-type` parameter of the currently built-in connectors is determined by the setting of the `name` parameter specified in the pulsar-io.yaml file. - -```bash - -bin/pulsar-admin sinks create \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink \ - --sink-type cassandra \ - --sink-config-file examples/cassandra-sink.yml \ - --inputs test_cassandra - -``` - -Once the command is executed, Pulsar creates the sink connector _cassandra-test-sink_. - -This sink connector runs -as a Pulsar Function and writes the messages produced in the topic _test_cassandra_ to the Cassandra table _pulsar_test_table_. - -### Inspect a Cassandra sink - -You can use the [Connector Admin CLI](io-cli.md) -to monitor a connector and perform other operations on it. - -* Get the information of a Cassandra sink. - - ```bash - - bin/pulsar-admin sinks get \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink - - ``` - - **Example output** - - ```json - - { - "tenant": "public", - "namespace": "default", - "name": "cassandra-test-sink", - "className": "org.apache.pulsar.io.cassandra.CassandraStringSink", - "inputSpecs": { - "test_cassandra": { - "isRegexPattern": false - } - }, - "configs": { - "roots": "localhost:9042", - "keyspace": "pulsar_test_keyspace", - "columnFamily": "pulsar_test_table", - "keyname": "key", - "columnName": "col" - }, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "autoAck": true, - "archive": "builtin://cassandra" - } - - ``` - -* Check the status of a Cassandra sink. - - ```bash - - bin/pulsar-admin sinks status \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink - - ``` - - **Example output** - - ```json - - { - "numInstances" : 1, - "numRunning" : 1, - "instances" : [ { - "instanceId" : 0, - "status" : { - "running" : true, - "error" : "", - "numRestarts" : 0, - "numReadFromPulsar" : 0, - "numSystemExceptions" : 0, - "latestSystemExceptions" : [ ], - "numSinkExceptions" : 0, - "latestSinkExceptions" : [ ], - "numWrittenToSink" : 0, - "lastReceivedTime" : 0, - "workerId" : "c-standalone-fw-localhost-8080" - } - } ] - } - - ``` - -### Verify a Cassandra sink - -1. Produce some messages to the input topic of the Cassandra sink _test_cassandra_. - - ```bash - - for i in {0..9}; do bin/pulsar-client produce -m "key-$i" -n 1 test_cassandra; done - - ``` - -2. Inspect the status of the Cassandra sink _test_cassandra_. 
- - ```bash - - bin/pulsar-admin sinks status \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink - - ``` - - You can see 10 messages are processed by the Cassandra sink _test_cassandra_. - - **Example output** - - ```json - - { - "numInstances" : 1, - "numRunning" : 1, - "instances" : [ { - "instanceId" : 0, - "status" : { - "running" : true, - "error" : "", - "numRestarts" : 0, - "numReadFromPulsar" : 10, - "numSystemExceptions" : 0, - "latestSystemExceptions" : [ ], - "numSinkExceptions" : 0, - "latestSinkExceptions" : [ ], - "numWrittenToSink" : 10, - "lastReceivedTime" : 1551685489136, - "workerId" : "c-standalone-fw-localhost-8080" - } - } ] - } - - ``` - -3. Use `cqlsh` to connect to the Cassandra cluster. - - ```bash - - docker exec -ti cassandra cqlsh localhost - - ``` - -4. Check the data of the Cassandra table _pulsar_test_table_. - - ```bash - - cqlsh> use pulsar_test_keyspace; - cqlsh:pulsar_test_keyspace> select * from pulsar_test_table; - - key | col - --------+-------- - key-5 | key-5 - key-0 | key-0 - key-9 | key-9 - key-2 | key-2 - key-1 | key-1 - key-3 | key-3 - key-6 | key-6 - key-7 | key-7 - key-4 | key-4 - key-8 | key-8 - - ``` - -### Delete a Cassandra Sink - -You can use the [Connector Admin CLI](io-cli.md) -to delete a connector and perform other operations on it. - -```bash - -bin/pulsar-admin sinks delete \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink - -``` - -## Connect Pulsar to PostgreSQL - -This section demonstrates how to connect Pulsar to PostgreSQL. - -:::tip - -* Make sure you have Docker installed. If you do not have one, see [install Docker](https://docs.docker.com/docker-for-mac/install/). -* The JDBC sink connector pulls messages from Pulsar topics and persists the messages to ClickHouse, MariaDB, PostgreSQL, or SQlite. - -::: - ->For more information, see [JDBC sink connector](io-jdbc-sink.md). - - -### Setup a PostgreSQL cluster - -This example uses the PostgreSQL 12 docker image to start a single-node PostgreSQL cluster in Docker. - -1. Pull the PostgreSQL 12 image from Docker. - - ```bash - - $ docker pull postgres:12 - - ``` - -2. Start PostgreSQL. - - ```bash - - $ docker run -d -it --rm \ - --name pulsar-postgres \ - -p 5432:5432 \ - -e POSTGRES_PASSWORD=password \ - -e POSTGRES_USER=postgres \ - postgres:12 - - ``` - - #### Tip - - Flag | Description | This example - ---|---|---| - `-d` | To start a container in detached mode. | / - `-it` | Keep STDIN open even if not attached and allocate a terminal. | / - `--rm` | Remove the container automatically when it exits. | / - `-name` | Assign a name to the container. | This example specifies _pulsar-postgres_ for the container. - `-p` | Publish the port of the container to the host. | This example publishes the port _5432_ of the container to the host. - `-e` | Set environment variables. | This example sets the following variables:
    - The password for the user is _password_.
    - The name for the user is _postgres_. - - :::tip - - For more information about Docker commands, see [Docker CLI](https://docs.docker.com/engine/reference/commandline/run/). - - ::: - -3. Check if PostgreSQL has been started successfully. - - ```bash - - $ docker logs -f pulsar-postgres - - ``` - - PostgreSQL has been started successfully if the following message appears. - - ```text - - 2020-05-11 20:09:24.492 UTC [1] LOG: starting PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit - 2020-05-11 20:09:24.492 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 - 2020-05-11 20:09:24.492 UTC [1] LOG: listening on IPv6 address "::", port 5432 - 2020-05-11 20:09:24.499 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" - 2020-05-11 20:09:24.523 UTC [55] LOG: database system was shut down at 2020-05-11 20:09:24 UTC - 2020-05-11 20:09:24.533 UTC [1] LOG: database system is ready to accept connections - - ``` - -4. Access to PostgreSQL. - - ```bash - - $ docker exec -it pulsar-postgres /bin/bash - - ``` - -5. Create a PostgreSQL table _pulsar_postgres_jdbc_sink_. - - ```bash - - $ psql -U postgres postgres - - postgres=# create table if not exists pulsar_postgres_jdbc_sink - ( - id serial PRIMARY KEY, - name VARCHAR(255) NOT NULL - ); - - ``` - -### Configure a JDBC sink - -Now we have a PostgreSQL running locally. - -In this section, you need to configure a JDBC sink connector. - -1. Add a configuration file. - - To run a JDBC sink connector, you need to prepare a YAML configuration file including the information that Pulsar connector runtime needs to know. - - For example, how Pulsar connector can find the PostgreSQL cluster, what is the JDBC URL and the table that Pulsar connector uses for writing messages to. - - Create a _pulsar-postgres-jdbc-sink.yaml_ file, copy the following contents to this file, and place the file in the `pulsar/connectors` folder. - - ```yaml - - configs: - userName: "postgres" - password: "password" - jdbcUrl: "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink" - tableName: "pulsar_postgres_jdbc_sink" - - ``` - -2. Create a schema. - - Create a _avro-schema_ file, copy the following contents to this file, and place the file in the `pulsar/connectors` folder. - - ```json - - { - "type": "AVRO", - "schema": "{\"type\":\"record\",\"name\":\"Test\",\"fields\":[{\"name\":\"id\",\"type\":[\"null\",\"int\"]},{\"name\":\"name\",\"type\":[\"null\",\"string\"]}]}", - "properties": {} - } - - ``` - - :::tip - - For more information about AVRO, see [Apache Avro](https://avro.apache.org/docs/1.9.1/). - - ::: - -3. Upload a schema to a topic. - - This example uploads the _avro-schema_ schema to the _pulsar-postgres-jdbc-sink-topic_ topic. - - ```bash - - $ bin/pulsar-admin schemas upload pulsar-postgres-jdbc-sink-topic -f ./connectors/avro-schema - - ``` - -4. Check if the schema has been uploaded successfully. - - ```bash - - $ bin/pulsar-admin schemas get pulsar-postgres-jdbc-sink-topic - - ``` - - The schema has been uploaded successfully if the following message appears. - - ```json - - {"name":"pulsar-postgres-jdbc-sink-topic","schema":"{\"type\":\"record\",\"name\":\"Test\",\"fields\":[{\"name\":\"id\",\"type\":[\"null\",\"int\"]},{\"name\":\"name\",\"type\":[\"null\",\"string\"]}]}","type":"AVRO","properties":{}} - - ``` - -### Create a JDBC sink - -You can use the [Connector Admin CLI](io-cli.md) -to create a sink connector and perform other operations on it. 
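Before creating the sink, it can help to confirm that the archive and the configuration file prepared above sit where the create command expects them; a quick sanity check (the paths simply mirror the ones used in the command below):

```bash

# Both files should be listed; if either is missing, revisit the previous steps
$ ls ./connectors/pulsar-io-jdbc-postgres-*.nar ./connectors/pulsar-postgres-jdbc-sink.yaml

```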
- -This example creates a sink connector and specifies the desired information. - -```bash - -$ bin/pulsar-admin sinks create \ ---archive ./connectors/pulsar-io-jdbc-postgres-@pulsar:version@.nar \ ---inputs pulsar-postgres-jdbc-sink-topic \ ---name pulsar-postgres-jdbc-sink \ ---sink-config-file ./connectors/pulsar-postgres-jdbc-sink.yaml \ ---parallelism 1 - -``` - -Once the command is executed, Pulsar creates a sink connector _pulsar-postgres-jdbc-sink_. - -This sink connector runs as a Pulsar Function and writes the messages produced in the topic _pulsar-postgres-jdbc-sink-topic_ to the PostgreSQL table _pulsar_postgres_jdbc_sink_. - - #### Tip - - Flag | Description | This example - ---|---|---| - `--archive` | The path to the archive file for the sink. | _pulsar-io-jdbc-postgres-@pulsar:version@.nar_ | - `--inputs` | The input topic(s) of the sink.

    Multiple topics can be specified as a comma-separated list.|| - `--name` | The name of the sink. | _pulsar-postgres-jdbc-sink_ | - `--sink-config-file` | The path to a YAML config file specifying the configuration of the sink. | _pulsar-postgres-jdbc-sink.yaml_ | - `--parallelism` | The parallelism factor of the sink.

    For example, the number of sink instances to run. | _1_ | - -:::tip - -For more information about `pulsar-admin sinks create options`, see [here](io-cli.md#sinks). - -::: - -The sink has been created successfully if the following message appears. - -```bash - -"Created successfully" - -``` - -### Inspect a JDBC sink - -You can use the [Connector Admin CLI](io-cli.md) -to monitor a connector and perform other operations on it. - -* List all running JDBC sink(s). - - ```bash - - $ bin/pulsar-admin sinks list \ - --tenant public \ - --namespace default - - ``` - - :::tip - - For more information about `pulsar-admin sinks list options`, see [here](io-cli.md/#list-1). - - ::: - - The result shows that only the _postgres-jdbc-sink_ sink is running. - - ```json - - [ - "pulsar-postgres-jdbc-sink" - ] - - ``` - -* Get the information of a JDBC sink. - - ```bash - - $ bin/pulsar-admin sinks get \ - --tenant public \ - --namespace default \ - --name pulsar-postgres-jdbc-sink - - ``` - - :::tip - - For more information about `pulsar-admin sinks get options`, see [here](io-cli.md/#get-1). - - ::: - - The result shows the information of the sink connector, including tenant, namespace, topic and so on. - - ```json - - { - "tenant": "public", - "namespace": "default", - "name": "pulsar-postgres-jdbc-sink", - "className": "org.apache.pulsar.io.jdbc.PostgresJdbcAutoSchemaSink", - "inputSpecs": { - "pulsar-postgres-jdbc-sink-topic": { - "isRegexPattern": false - } - }, - "configs": { - "password": "password", - "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink", - "userName": "postgres", - "tableName": "pulsar_postgres_jdbc_sink" - }, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "autoAck": true - } - - ``` - -* Get the status of a JDBC sink - - ```bash - - $ bin/pulsar-admin sinks status \ - --tenant public \ - --namespace default \ - --name pulsar-postgres-jdbc-sink - - ``` - - :::tip - - For more information about `pulsar-admin sinks status options`, see [here](io-cli.md/#status-1). - - ::: - - The result shows the current status of sink connector, including the number of instance, running status, worker ID and so on. - - ```json - - { - "numInstances" : 1, - "numRunning" : 1, - "instances" : [ { - "instanceId" : 0, - "status" : { - "running" : true, - "error" : "", - "numRestarts" : 0, - "numReadFromPulsar" : 0, - "numSystemExceptions" : 0, - "latestSystemExceptions" : [ ], - "numSinkExceptions" : 0, - "latestSinkExceptions" : [ ], - "numWrittenToSink" : 0, - "lastReceivedTime" : 0, - "workerId" : "c-standalone-fw-192.168.2.52-8080" - } - } ] - } - - ``` - -### Stop a JDBC sink - -You can use the [Connector Admin CLI](io-cli.md) -to stop a connector and perform other operations on it. - -```bash - -$ bin/pulsar-admin sinks stop \ ---tenant public \ ---namespace default \ ---name pulsar-postgres-jdbc-sink - -``` - -:::tip - -For more information about `pulsar-admin sinks stop options`, see [here](io-cli.md/#stop-1). - -::: - -The sink instance has been stopped successfully if the following message disappears. - -```bash - -"Stopped successfully" - -``` - -### Restart a JDBC sink - -You can use the [Connector Admin CLI](io-cli.md) -to restart a connector and perform other operations on it. - -```bash - -$ bin/pulsar-admin sinks restart \ ---tenant public \ ---namespace default \ ---name pulsar-postgres-jdbc-sink - -``` - -:::tip - -For more information about `pulsar-admin sinks restart options`, see [here](io-cli.md/#restart-1). 
-
-:::
-
-The sink instance has been started successfully if the following message appears.
-
-```bash
-
-"Started successfully"
-
-```
-
-:::tip
-
-* Optionally, you can run a standalone sink connector using `pulsar-admin sinks localrun options`.
-Note that `pulsar-admin sinks localrun options` **runs a sink connector locally**, while `pulsar-admin sinks start options` **starts a sink connector in a cluster**.
-* For more information about `pulsar-admin sinks localrun options`, see [here](io-cli.md#localrun-1).
-
-:::
-
-### Update a JDBC sink
-
-You can use the [Connector Admin CLI](io-cli.md)
-to update a connector and perform other operations on it.
-
-This example updates the parallelism of the _pulsar-postgres-jdbc-sink_ sink connector to 2.
-
-```bash
-
-$ bin/pulsar-admin sinks update \
---name pulsar-postgres-jdbc-sink \
---parallelism 2
-
-```
-
-:::tip
-
-For more information about `pulsar-admin sinks update options`, see [here](io-cli.md/#update-1).
-
-:::
-
-The sink connector has been updated successfully if the following message appears.
-
-```bash
-
-"Updated successfully"
-
-```
-
-This example double-checks the information.
-
-```bash
-
-$ bin/pulsar-admin sinks get \
---tenant public \
---namespace default \
---name pulsar-postgres-jdbc-sink
-
-```
-
-The result shows that the parallelism is 2.
-
-```json
-
-{
-  "tenant": "public",
-  "namespace": "default",
-  "name": "pulsar-postgres-jdbc-sink",
-  "className": "org.apache.pulsar.io.jdbc.PostgresJdbcAutoSchemaSink",
-  "inputSpecs": {
-    "pulsar-postgres-jdbc-sink-topic": {
-      "isRegexPattern": false
-    }
-  },
-  "configs": {
-    "password": "password",
-    "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink",
-    "userName": "postgres",
-    "tableName": "pulsar_postgres_jdbc_sink"
-  },
-  "parallelism": 2,
-  "processingGuarantees": "ATLEAST_ONCE",
-  "retainOrdering": false,
-  "autoAck": true
-}
-
-```
-
-### Delete a JDBC sink
-
-You can use the [Connector Admin CLI](io-cli.md)
-to delete a connector and perform other operations on it.
-
-This example deletes the _pulsar-postgres-jdbc-sink_ sink connector.
-
-```bash
-
-$ bin/pulsar-admin sinks delete \
---tenant public \
---namespace default \
---name pulsar-postgres-jdbc-sink
-
-```
-
-:::tip
-
-For more information about `pulsar-admin sinks delete options`, see [here](io-cli.md/#delete-1).
-
-:::
-
-The sink connector has been deleted successfully if the following message appears.
-
-```text
-
-"Deleted successfully"
-
-```
-
-This example double-checks the status of the sink connector.
-
-```bash
-
-$ bin/pulsar-admin sinks get \
---tenant public \
---namespace default \
---name pulsar-postgres-jdbc-sink
-
-```
-
-The result shows that the sink connector does not exist.
-
-```text
-
-HTTP 404 Not Found
-
-Reason: Sink pulsar-postgres-jdbc-sink doesn't exist
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/io-rabbitmq-sink.md b/site2/website/versioned_docs/version-2.8.1-deprecated/io-rabbitmq-sink.md
deleted file mode 100644
index d7fda99460dc97..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/io-rabbitmq-sink.md
+++ /dev/null
@@ -1,85 +0,0 @@
----
-id: io-rabbitmq-sink
-title: RabbitMQ sink connector
-sidebar_label: "RabbitMQ sink connector"
-original_id: io-rabbitmq-sink
----
-
-The RabbitMQ sink connector pulls messages from Pulsar topics
-and persists the messages to RabbitMQ queues.
-
-
-## Configuration
-
-The configuration of the RabbitMQ sink connector has the following properties. &#8203;
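The properties in the table below end up in a configuration file like the ones shown in the Example section further down; as a hedged sketch of how such a file is typically wired in (the archive path, topic, sink name, and file name here are illustrative assumptions, not values taken from this page):

```bash

# Create the RabbitMQ sink from a YAML config file (names and paths are illustrative)
$ bin/pulsar-admin sinks create \
  --archive ./connectors/pulsar-io-rabbitmq-@pulsar:version@.nar \
  --inputs test-rabbitmq-topic \
  --name rabbitmq-sink \
  --sink-config-file rabbitmq-sink-config.yaml

```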
- - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `connectionName` |String| true | " " (empty string) | The connection name. | -| `host` | String| true | " " (empty string) | The RabbitMQ host. | -| `port` | int |true | 5672 | The RabbitMQ port. | -| `virtualHost` |String|true | / | The virtual host used to connect to RabbitMQ. | -| `username` | String|false | guest | The username used to authenticate to RabbitMQ. | -| `password` | String|false | guest | The password used to authenticate to RabbitMQ. | -| `queueName` | String|true | " " (empty string) | The RabbitMQ queue name that messages should be read from or written to. | -| `requestedChannelMax` | int|false | 0 | The initially requested maximum channel number.

    0 means unlimited. | -| `requestedFrameMax` | int|false |0 | The initially requested maximum frame size in octets.

    0 means unlimited. | -| `connectionTimeout` | int|false | 60000 | The timeout of TCP connection establishment in milliseconds.

    0 means infinite. |
-| `handshakeTimeout` | int|false | 10000 | The timeout of AMQP0-9-1 protocol handshake in milliseconds. |
-| `requestedHeartbeat` | int|false | 60 | The requested heartbeat timeout in seconds. |
-| `exchangeName` | String|true | " " (empty string) | The exchange to publish messages. |
-| `routingKey` |String|true | " " (empty string) |The routing key used to publish messages. |
-
-
-### Example
-
-Before using the RabbitMQ sink connector, you need to create a configuration file through one of the following methods.
-
-* JSON
-
-  ```json
-  
-  {
-      "host": "localhost",
-      "port": "5672",
-      "virtualHost": "/",
-      "username": "guest",
-      "password": "guest",
-      "queueName": "test-queue",
-      "connectionName": "test-connection",
-      "requestedChannelMax": "0",
-      "requestedFrameMax": "0",
-      "connectionTimeout": "60000",
-      "handshakeTimeout": "10000",
-      "requestedHeartbeat": "60",
-      "exchangeName": "test-exchange",
-      "routingKey": "test-key"
-  }
-
-  ```
-
-* YAML
-
-  ```yaml
-  
-  configs:
-      host: "localhost"
-      port: 5672
-      virtualHost: "/"
-      username: "guest"
-      password: "guest"
-      queueName: "test-queue"
-      connectionName: "test-connection"
-      requestedChannelMax: 0
-      requestedFrameMax: 0
-      connectionTimeout: 60000
-      handshakeTimeout: 10000
-      requestedHeartbeat: 60
-      exchangeName: "test-exchange"
-      routingKey: "test-key"
-
-  ```
-
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/io-rabbitmq-source.md b/site2/website/versioned_docs/version-2.8.1-deprecated/io-rabbitmq-source.md
deleted file mode 100644
index c2c31cc97d10d9..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/io-rabbitmq-source.md
+++ /dev/null
@@ -1,85 +0,0 @@
----
-id: io-rabbitmq-source
-title: RabbitMQ source connector
-sidebar_label: "RabbitMQ source connector"
-original_id: io-rabbitmq-source
----
-
-The RabbitMQ source connector receives messages from RabbitMQ clusters
-and writes messages to Pulsar topics.
-
-## Configuration
-
-The configuration of the RabbitMQ source connector has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `connectionName` |String| true | " " (empty string) | The connection name. |
-| `host` | String| true | " " (empty string) | The RabbitMQ host. |
-| `port` | int |true | 5672 | The RabbitMQ port. |
-| `virtualHost` |String|true | / | The virtual host used to connect to RabbitMQ. |
-| `username` | String|false | guest | The username used to authenticate to RabbitMQ. |
-| `password` | String|false | guest | The password used to authenticate to RabbitMQ. |
-| `queueName` | String|true | " " (empty string) | The RabbitMQ queue name that messages should be read from or written to. |
-| `requestedChannelMax` | int|false | 0 | The initially requested maximum channel number.&#8203;

    0 means unlimited. | -| `requestedFrameMax` | int|false |0 | The initially requested maximum frame size in octets.

    0 means unlimited. | -| `connectionTimeout` | int|false | 60000 | The timeout of TCP connection establishment in milliseconds.

    0 means infinite. | -| `handshakeTimeout` | int|false | 10000 | The timeout of AMQP0-9-1 protocol handshake in milliseconds. | -| `requestedHeartbeat` | int|false | 60 | The requested heartbeat timeout in seconds. | -| `prefetchCount` | int|false | 0 | The maximum number of messages that the server delivers.

    0 means unlimited. | -| `prefetchGlobal` | boolean|false | false |Whether the setting should be applied to the entire channel rather than each consumer. | -| `passive` | boolean|false | false | Whether the rabbitmq consumer should create its own queue or bind to an existing one. | - -### Example - -Before using the RabbitMQ source connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "host": "localhost", - "port": "5672", - "virtualHost": "/", - "username": "guest", - "password": "guest", - "queueName": "test-queue", - "connectionName": "test-connection", - "requestedChannelMax": "0", - "requestedFrameMax": "0", - "connectionTimeout": "60000", - "handshakeTimeout": "10000", - "requestedHeartbeat": "60", - "prefetchCount": "0", - "prefetchGlobal": "false", - "passive": "false" - } - - ``` - -* YAML - - ```yaml - - configs: - host: "localhost" - port: 5672 - virtualHost: "/" - username: "guest" - password: "guest" - queueName: "test-queue" - connectionName: "test-connection" - requestedChannelMax: 0 - requestedFrameMax: 0 - connectionTimeout: 60000 - handshakeTimeout: 10000 - requestedHeartbeat: 60 - prefetchCount: 0 - prefetchGlobal: "false" - passive: "false" - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/io-redis-sink.md b/site2/website/versioned_docs/version-2.8.1-deprecated/io-redis-sink.md deleted file mode 100644 index 793d74a5f2cb38..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/io-redis-sink.md +++ /dev/null @@ -1,74 +0,0 @@ ---- -id: io-redis-sink -title: Redis sink connector -sidebar_label: "Redis sink connector" -original_id: io-redis-sink ---- - -The Redis sink connector pulls messages from Pulsar topics -and persists the messages to a Redis database. - - - -## Configuration - -The configuration of the Redis sink connector has the following properties. - - - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `redisHosts` |String|true|" " (empty string) | A comma-separated list of Redis hosts to connect to. | -| `redisPassword` |String|false|" " (empty string) | The password used to connect to Redis. | -| `redisDatabase` | int|true|0 | The Redis database to connect to. | -| `clientMode` |String| false|Standalone | The client mode when interacting with Redis cluster.

    Below are the available options:
  * Standalone&#8203;
  * Cluster&#8203;
  |
-| `autoReconnect` | boolean|false|true | Whether the Redis client automatically reconnects or not. |
-| `requestQueue` | int|false|2147483647 | The maximum number of queued requests to Redis. |
-| `tcpNoDelay` |boolean| false| false | Whether to enable TCP with no delay or not. |
-| `keepAlive` | boolean|false | false |Whether to enable a keepalive to Redis or not. |
-| `connectTimeout` |long| false|10000 | The time to wait before timing out when connecting in milliseconds. |
-| `operationTimeout` | long|false|10000 | The time before an operation is marked as timed out in milliseconds. |
-| `batchTimeMs` | int|false|1000 | The Redis operation time in milliseconds. |
-| `batchSize` | int|false|200 | The batch size of writing to Redis database. |
-
-
-### Example
-
-Before using the Redis sink connector, you need to create a configuration file through one of the following methods.
-
-* JSON
-
-  ```json
-  
-  {
-      "redisHosts": "localhost:6379",
-      "redisPassword": "fake@123",
-      "redisDatabase": "1",
-      "clientMode": "Standalone",
-      "operationTimeout": "2000",
-      "batchSize": "100",
-      "batchTimeMs": "1000",
-      "connectTimeout": "3000"
-  }
-
-  ```
-
-* YAML
-
-  ```yaml
-  
-  configs:
-      redisHosts: "localhost:6379"
-      redisPassword: "fake@123"
-      redisDatabase: 1
-      clientMode: "Standalone"
-      operationTimeout: 2000
-      batchSize: 100
-      batchTimeMs: 1000
-      connectTimeout: 3000
-
-  ```
-
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/io-solr-sink.md b/site2/website/versioned_docs/version-2.8.1-deprecated/io-solr-sink.md
deleted file mode 100644
index df2c3612c38eb6..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/io-solr-sink.md
+++ /dev/null
@@ -1,65 +0,0 @@
----
-id: io-solr-sink
-title: Solr sink connector
-sidebar_label: "Solr sink connector"
-original_id: io-solr-sink
----
-
-The Solr sink connector pulls messages from Pulsar topics
-and persists the messages to Solr collections.
-
-
-
-## Configuration
-
-The configuration of the Solr sink connector has the following properties.
-
-
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `solrUrl` | String|true|" " (empty string) |&#8203;
  * Comma-separated zookeeper hosts with chroot used in the SolrCloud mode.&#8203;
    **Example**
    `localhost:2181,localhost:2182/chroot`

  * URL to connect to Solr used in standalone mode.&#8203;
    **Example**
    `localhost:8983/solr`
  |
-| `solrMode` | String|true|SolrCloud| The client mode when interacting with the Solr cluster.&#8203;

    Below are the available options:
  * Standalone&#8203;
  * SolrCloud&#8203;
  |
-| `solrCollection` |String|true| " " (empty string) | Solr collection name to which records need to be written. |
-| `solrCommitWithinMs` |int| false|10 | The time in milliseconds within which Solr commits updates.|
-| `username` |String|false| " " (empty string) | The username for basic authentication.&#8203;

    **Note: `username` is case-sensitive.** |
-| `password` | String|false| " " (empty string) | The password for basic authentication.&#8203;

    **Note: `password` is case-sensitive.** |
-
-
-
-### Example
-
-Before using the Solr sink connector, you need to create a configuration file through one of the following methods.
-
-* JSON
-
-  ```json
-  
-  {
-      "solrUrl": "localhost:2181,localhost:2182/chroot",
-      "solrMode": "SolrCloud",
-      "solrCollection": "techproducts",
-      "solrCommitWithinMs": 100,
-      "username": "fakeuser",
-      "password": "fake@123"
-  }
-
-  ```
-
-* YAML
-
-  ```yaml
-  
-  configs:
-      solrUrl: "localhost:2181,localhost:2182/chroot"
-      solrMode: "SolrCloud"
-      solrCollection: "techproducts"
-      solrCommitWithinMs: 100
-      username: "fakeuser"
-      password: "fake@123"
-
-  ```
-
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/io-twitter-source.md b/site2/website/versioned_docs/version-2.8.1-deprecated/io-twitter-source.md
deleted file mode 100644
index 8de3504dd0fef2..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/io-twitter-source.md
+++ /dev/null
@@ -1,28 +0,0 @@
----
-id: io-twitter-source
-title: Twitter Firehose source connector
-sidebar_label: "Twitter Firehose source connector"
-original_id: io-twitter-source
----
-
-The Twitter Firehose source connector receives tweets from Twitter Firehose and
-writes the tweets to Pulsar topics.
-
-## Configuration
-
-The configuration of the Twitter Firehose source connector has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `consumerKey` | String|true | " " (empty string) | The twitter OAuth consumer key.&#8203;

    For more information, see [Access tokens](https://developer.twitter.com/en/docs/basics/authentication/guides/access-tokens). |
-| `consumerSecret` | String |true | " " (empty string) | The twitter OAuth consumer secret. |
-| `token` | String|true | " " (empty string) | The twitter OAuth token. |
-| `tokenSecret` | String|true | " " (empty string) | The twitter OAuth token secret. |
-| `guestimateTweetTime`|Boolean|false|false|Most firehose events have null createdAt time.&#8203;

    If `guestimateTweetTime` is set to true, the connector estimates the createdTime of each firehose event to be the current time.
-| `clientName` | String |false | openconnector-twitter-source| The twitter firehose client name. |
-| `clientHosts` |String| false | Constants.STREAM_HOST | The twitter firehose hosts to which client connects. |
-| `clientBufferSize` | int|false | 50000 | The buffer size for buffering tweets fetched from twitter firehose. |
-
-> For more information about OAuth credentials, see [Twitter developers portal](https://developer.twitter.com/en.html).
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/io-twitter.md b/site2/website/versioned_docs/version-2.8.1-deprecated/io-twitter.md
deleted file mode 100644
index 3b2f6325453c3c..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/io-twitter.md
+++ /dev/null
@@ -1,7 +0,0 @@
----
-id: io-twitter
-title: Twitter Firehose Connector
-sidebar_label: "Twitter Firehose Connector"
-original_id: io-twitter
----
-
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/io-use.md b/site2/website/versioned_docs/version-2.8.1-deprecated/io-use.md
deleted file mode 100644
index da9ed746c4d372..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/io-use.md
+++ /dev/null
@@ -1,1787 +0,0 @@
----
-id: io-use
-title: How to use Pulsar connectors
-sidebar_label: "Use"
-original_id: io-use
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-This guide describes how to use Pulsar connectors.
-
-## Install a connector
-
-Pulsar bundles several [builtin connectors](io-connectors.md) used to move data in and out of commonly used systems (such as databases and messaging systems). Optionally, you can create and use your desired non-builtin connectors.
-
-:::note
-
-When using a non-builtin connector, you need to specify the path of an archive file for the connector.
-
-:::
-
-To set up a builtin connector, follow
-the instructions [here](getting-started-standalone.md#installing-builtin-connectors).
-
-After the setup, the builtin connector is automatically discovered by Pulsar brokers (or function-workers), so no additional installation steps are required.
-
-## Configure a connector
-
-You can configure the following information:
-
-* [Configure a default storage location for a connector](#configure-a-default-storage-location-for-a-connector)
-
-* [Configure a connector with a YAML file](#configure-a-connector-with-yaml-file)
-
-### Configure a default storage location for a connector
-
-To configure a default folder for builtin connectors, set the `connectorsDirectory` parameter in the `./conf/functions_worker.yml` configuration file.
-
-**Example**
-
-Set the `./connectors` folder as the default storage location for builtin connectors.
-
-```
-
-########################
-# Connectors
-########################
-
-connectorsDirectory: ./connectors
-
-```
-
-### Configure a connector with a YAML file
-
-To configure a connector, you need to provide a YAML configuration file when creating a connector.
-
-The YAML configuration file tells Pulsar where to locate connectors and how to connect connectors with Pulsar topics. &#8203;
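The file only takes effect when it is handed to the CLI at creation time; a minimal sketch, assuming the Cassandra configuration of Example 1 below has been saved as `examples/cassandra-sink.yml` (the path is an assumption):

```bash

# Attach the YAML config while creating the sink
$ bin/pulsar-admin sinks create \
  --sink-type cassandra \
  --sink-config-file examples/cassandra-sink.yml \
  --inputs test_cassandra

```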
- -**Example 1** - -Below is a YAML configuration file of a Cassandra sink, which tells Pulsar: - -* Which Cassandra cluster to connect - -* What is the `keyspace` and `columnFamily` to be used in Cassandra for collecting data - -* How to map Pulsar messages into Cassandra table key and columns - -```shell - -tenant: public -namespace: default -name: cassandra-test-sink -... -# cassandra specific config -configs: - roots: "localhost:9042" - keyspace: "pulsar_test_keyspace" - columnFamily: "pulsar_test_table" - keyname: "key" - columnName: "col" - -``` - -**Example 2** - -Below is a YAML configuration file of a Kafka source. - -```shell - -configs: - bootstrapServers: "pulsar-kafka:9092" - groupId: "test-pulsar-io" - topic: "my-topic" - sessionTimeoutMs: "10000" - autoCommitEnabled: "false" - -``` - -**Example 3** - -Below is a YAML configuration file of a PostgreSQL JDBC sink. - -```shell - -configs: - userName: "postgres" - password: "password" - jdbcUrl: "jdbc:postgresql://localhost:5432/test_jdbc" - tableName: "test_jdbc" - -``` - -## Get available connectors - -Before starting using connectors, you can perform the following operations: - -* [Reload connectors](#reload) - -* [Get a list of available connectors](#get-available-connectors) - -### `reload` - -If you add or delete a nar file in a connector folder, reload the available builtin connector before using it. - -#### Source - -Use the `reload` subcommand. - -```shell - -$ pulsar-admin sources reload - -``` - -For more information, see [`here`](io-cli.md#reload). - -#### Sink - -Use the `reload` subcommand. - -```shell - -$ pulsar-admin sinks reload - -``` - -For more information, see [`here`](io-cli.md#reload-1). - -### `available` - -After reloading connectors (optional), you can get a list of available connectors. - -#### Source - -Use the `available-sources` subcommand. - -```shell - -$ pulsar-admin sources available-sources - -``` - -#### Sink - -Use the `available-sinks` subcommand. - -```shell - -$ pulsar-admin sinks available-sinks - -``` - -## Run a connector - -To run a connector, you can perform the following operations: - -* [Create a connector](#create) - -* [Start a connector](#start) - -* [Run a connector locally](#localrun) - -### `create` - -You can create a connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Create a source connector. - -````mdx-code-block - - - - -Use the `create` subcommand. - -``` - -$ pulsar-admin sources create options - -``` - -For more information, see [here](io-cli.md#create). - - - - -Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/registerSource?version=@pulsar:version_number@} - - - - -* Create a source connector with a **local file**. - - ```java - - void createSource(SourceConfig sourceConfig, - String fileName) - throws PulsarAdminException - - ``` - - **Parameter** - - |Name|Description - |---|--- - `sourceConfig` | The source configuration object - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`createSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#createSource-SourceConfig-java.lang.String-). - -* Create a source connector using a **remote file** with a URL from which fun-pkg can be downloaded. - - ```java - - void createSourceWithUrl(SourceConfig sourceConfig, - String pkgUrl) - throws PulsarAdminException - - ``` - - Supported URLs are `http` and `file`. 
- - **Example** - - * HTTP: http://www.repo.com/fileName.jar - - * File: file:///dir/fileName.jar - - **Parameter** - - Parameter| Description - |---|--- - `sourceConfig` | The source configuration object - `pkgUrl` | URL from which pkg can be downloaded - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`createSourceWithUrl`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#createSourceWithUrl-SourceConfig-java.lang.String-). - - - - -```` - -#### Sink - -Create a sink connector. - -````mdx-code-block - - - - -Use the `create` subcommand. - -``` - -$ pulsar-admin sinks create options - -``` - -For more information, see [here](io-cli.md#create-1). - - - - -Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sinkName|operation/registerSink?version=@pulsar:version_number@} - - - - -* Create a sink connector with a **local file**. - - ```java - - void createSink(SinkConfig sinkConfig, - String fileName) - throws PulsarAdminException - - ``` - - **Parameter** - - |Name|Description - |---|--- - `sinkConfig` | The sink configuration object - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`createSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#createSink-SinkConfig-java.lang.String-). - -* Create a sink connector using a **remote file** with a URL from which fun-pkg can be downloaded. - - ```java - - void createSinkWithUrl(SinkConfig sinkConfig, - String pkgUrl) - throws PulsarAdminException - - ``` - - Supported URLs are `http` and `file`. - - **Example** - - * HTTP: http://www.repo.com/fileName.jar - - * File: file:///dir/fileName.jar - - **Parameter** - - Parameter| Description - |---|--- - `sinkConfig` | The sink configuration object - `pkgUrl` | URL from which pkg can be downloaded - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`createSinkWithUrl`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#createSinkWithUrl-SinkConfig-java.lang.String-). - - - - -```` - -### `start` - -You can start a connector using **Admin CLI** or **REST API**. - -#### Source - -Start a source connector. - -````mdx-code-block - - - - -Use the `start` subcommand. - -``` - -$ pulsar-admin sources start options - -``` - -For more information, see [here](io-cli.md#start). - - - - -* Start **all** source connectors. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/start|operation/startSource?version=@pulsar:version_number@} - -* Start a **specified** source connector. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/:instanceId/start|operation/startSource?version=@pulsar:version_number@} - - - - -```` - -#### Sink - -Start a sink connector. - -````mdx-code-block - - - - -Use the `start` subcommand. - -``` - -$ pulsar-admin sinks start options - -``` - -For more information, see [here](io-cli.md#start-1). - - - - -* Start **all** sink connectors. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sinkName/start|operation/startSink?version=@pulsar:version_number@} - -* Start a **specified** sink connector. 
- - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sourceName/:instanceId/start|operation/startSink?version=@pulsar:version_number@} - - - - -```` - -### `localrun` - -You can run a connector locally rather than deploying it on a Pulsar cluster using **Admin CLI**. - -#### Source - -Run a source connector locally. - -````mdx-code-block - - - - -Use the `localrun` subcommand. - -``` - -$ pulsar-admin sources localrun options - -``` - -For more information, see [here](io-cli.md#localrun). - - - - -```` - -#### Sink - -Run a sink connector locally. - -````mdx-code-block - - - - -Use the `localrun` subcommand. - -``` - -$ pulsar-admin sinks localrun options - -``` - -For more information, see [here](io-cli.md#localrun-1). - - - - -```` - -## Monitor a connector - -To monitor a connector, you can perform the following operations: - -* [Get the information of a connector](#get) - -* [Get the list of all running connectors](#list) - -* [Get the current status of a connector](#status) - -### `get` - -You can get the information of a connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Get the information of a source connector. - -````mdx-code-block - - - - -Use the `get` subcommand. - -``` - -$ pulsar-admin sources get options - -``` - -For more information, see [here](io-cli.md#get). - - - - -Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/getSourceInfo?version=@pulsar:version_number@} - - - - -```java - -SourceConfig getSource(String tenant, - String namespace, - String source) - throws PulsarAdminException - -``` - -**Example** - -This is a sourceConfig. - -```java - -{ - "tenant": "tenantName", - "namespace": "namespaceName", - "name": "sourceName", - "className": "className", - "topicName": "topicName", - "configs": {}, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "resources": { - "cpu": 1.0, - "ram": 1073741824, - "disk": 10737418240 - } -} - -``` - -This is a sourceConfig example. - -``` - -{ - "tenant": "public", - "namespace": "default", - "name": "debezium-mysql-source", - "className": "org.apache.pulsar.io.debezium.mysql.DebeziumMysqlSource", - "topicName": "debezium-mysql-topic", - "configs": { - "database.user": "debezium", - "database.server.id": "184054", - "database.server.name": "dbserver1", - "database.port": "3306", - "database.hostname": "localhost", - "database.password": "dbz", - "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650", - "value.converter": "org.apache.kafka.connect.json.JsonConverter", - "database.whitelist": "inventory", - "key.converter": "org.apache.kafka.connect.json.JsonConverter", - "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory", - "pulsar.service.url": "pulsar://127.0.0.1:6650", - "database.history.pulsar.topic": "history-topic2" - }, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "resources": { - "cpu": 1.0, - "ram": 1073741824, - "disk": 10737418240 - } -} - -``` - -**Exception** - -Exception name | Description -|---|--- -`PulsarAdminException.NotAuthorizedException` | You don't have the admin permission -`PulsarAdminException.NotFoundException` | Cluster doesn't exist -`PulsarAdminException` | Unexpected error - -For more information, see [`getSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#getSource-java.lang.String-java.lang.String-java.lang.String-). 
- - - - -```` - -#### Sink - -Get the information of a sink connector. - -````mdx-code-block - - - - -Use the `get` subcommand. - -``` - -$ pulsar-admin sinks get options - -``` - -For more information, see [here](io-cli.md#get-1). - - - - -Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sinks/:tenant/:namespace/:sinkName|operation/getSinkInfo?version=@pulsar:version_number@} - - - - -```java - -SinkConfig getSink(String tenant, - String namespace, - String sink) - throws PulsarAdminException - -``` - -**Example** - -This is a sinkConfig. - -```json - -{ -"tenant": "tenantName", -"namespace": "namespaceName", -"name": "sinkName", -"className": "className", -"inputSpecs": { -"topicName": { - "isRegexPattern": false -} -}, -"configs": {}, -"parallelism": 1, -"processingGuarantees": "ATLEAST_ONCE", -"retainOrdering": false, -"autoAck": true -} - -``` - -This is a sinkConfig example. - -```json - -{ - "tenant": "public", - "namespace": "default", - "name": "pulsar-postgres-jdbc-sink", - "className": "org.apache.pulsar.io.jdbc.PostgresJdbcAutoSchemaSink", - "inputSpecs": { - "pulsar-postgres-jdbc-sink-topic": { - "isRegexPattern": false - } - }, - "configs": { - "password": "password", - "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink", - "userName": "postgres", - "tableName": "pulsar_postgres_jdbc_sink" - }, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "autoAck": true -} - -``` - -**Parameter description** - -Name| Description -|---|--- -`tenant` | Tenant name -`namespace` | Namespace name -`sink` | Sink name - -For more information, see [`getSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#getSink-java.lang.String-java.lang.String-java.lang.String-). - - - - -```` - -### `list` - -You can get the list of all running connectors using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Get the list of all running source connectors. - -````mdx-code-block - - - - -Use the `list` subcommand. - -``` - -$ pulsar-admin sources list options - -``` - -For more information, see [here](io-cli.md#list). - - - - -Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sources/:tenant/:namespace|operation/listSources?version=@pulsar:version_number@} - - - - -```java - -List listSources(String tenant, - String namespace) - throws PulsarAdminException - -``` - -**Response example** - -```java - -["f1", "f2", "f3"] - -``` - -**Exception** - -Exception name | Description -|---|--- -`PulsarAdminException.NotAuthorizedException` | You don't have the admin permission -`PulsarAdminException` | Unexpected error - -For more information, see [`listSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#listSources-java.lang.String-java.lang.String-). - - - - -```` - -#### Sink - -Get the list of all running sink connectors. - -````mdx-code-block - - - - -Use the `list` subcommand. - -``` - -$ pulsar-admin sinks list options - -``` - -For more information, see [here](io-cli.md#list-1). 
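-
-For example, a minimal sketch that lists the sinks running under the `public` tenant and `default` namespace (both values are assumptions; substitute your own):
-
-```
-
-$ pulsar-admin sinks list \
-  --tenant public \
-  --namespace default
-
-```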
- - - - -Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sinks/:tenant/:namespace|operation/listSinks?version=@pulsar:version_number@} - - - - -```java - -List listSinks(String tenant, - String namespace) - throws PulsarAdminException - -``` - -**Response example** - -```java - -["f1", "f2", "f3"] - -``` - -**Exception** - -Exception name | Description -|---|--- -`PulsarAdminException.NotAuthorizedException` | You don't have the admin permission -`PulsarAdminException` | Unexpected error - -For more information, see [`listSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#listSinks-java.lang.String-java.lang.String-). - - - - -```` - -### `status` - -You can get the current status of a connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Get the current status of a source connector. - -````mdx-code-block - - - - -Use the `status` subcommand. - -``` - -$ pulsar-admin sources status options - -``` - -For more information, see [here](io-cli.md#status). - - - - -* Get the current status of **all** source connectors. - - Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sources/:tenant/:namespace/:sourceName/status|operation/getSourceStatus?version=@pulsar:version_number@} - -* Gets the current status of a **specified** source connector. - - Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sources/:tenant/:namespace/:sourceName/:instanceId/status|operation/getSourceStatus?version=@pulsar:version_number@} - - - - -* Get the current status of **all** source connectors. - - ```java - - SourceStatus getSourceStatus(String tenant, - String namespace, - String source) - throws PulsarAdminException - - ``` - - **Parameter** - - Parameter| Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `sink` | Source name - - **Exception** - - Name | Description - |---|--- - `PulsarAdminException` | Unexpected error - - For more information, see [`getSourceStatus`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#getSource-java.lang.String-java.lang.String-java.lang.String-). - -* Gets the current status of a **specified** source connector. - - ```java - - SourceStatus.SourceInstanceStatus.SourceInstanceStatusData getSourceStatus(String tenant, - String namespace, - String source, - int id) - throws PulsarAdminException - - ``` - - **Parameter** - - Parameter| Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `sink` | Source name - `id` | Source instanceID - - **Exception** - - Exception name | Description - |---|--- - `PulsarAdminException` | Unexpected error - - For more information, see [`getSourceStatus`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#getSourceStatus-java.lang.String-java.lang.String-java.lang.String-int-). - - - - -```` - -#### Sink - -Get the current status of a Pulsar sink connector. - -````mdx-code-block - - - - -Use the `status` subcommand. - -``` - -$ pulsar-admin sinks status options - -``` - -For more information, see [here](io-cli.md#status-1). - - - - -* Get the current status of **all** sink connectors. - - Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sinks/:tenant/:namespace/:sinkName/status|operation/getSinkStatus?version=@pulsar:version_number@} - -* Gets the current status of a **specified** sink connector. 
- - Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sinks/:tenant/:namespace/:sourceName/:instanceId/status|operation/getSinkInstanceStatus?version=@pulsar:version_number@} - - - - -* Get the current status of **all** sink connectors. - - ```java - - SinkStatus getSinkStatus(String tenant, - String namespace, - String sink) - throws PulsarAdminException - - ``` - - **Parameter** - - Parameter| Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `sink` | Source name - - **Exception** - - Exception name | Description - |---|--- - `PulsarAdminException` | Unexpected error - - For more information, see [`getSinkStatus`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#getSinkStatus-java.lang.String-java.lang.String-java.lang.String-). - -* Gets the current status of a **specified** source connector. - - ```java - - SinkStatus.SinkInstanceStatus.SinkInstanceStatusData getSinkStatus(String tenant, - String namespace, - String sink, - int id) - throws PulsarAdminException - - ``` - - **Parameter** - - Parameter| Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `sink` | Source name - `id` | Sink instanceID - - **Exception** - - Exception name | Description - |---|--- - `PulsarAdminException` | Unexpected error - - For more information, see [`getSinkStatusWithInstanceID`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#getSinkStatus-java.lang.String-java.lang.String-java.lang.String-int-). - - - - -```` - -## Update a connector - -### `update` - -You can update a running connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Update a running Pulsar source connector. - -````mdx-code-block - - - - -Use the `update` subcommand. - -``` - -$ pulsar-admin sources update options - -``` - -For more information, see [here](io-cli.md#update). - - - - -Send a `PUT` request to this endpoint: {@inject: endpoint|PUT|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/updateSource?version=@pulsar:version_number@} - - - - -* Update a running source connector with a **local file**. - - ```java - - void updateSource(SourceConfig sourceConfig, - String fileName) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - |`sourceConfig` | The source configuration object - - **Exception** - - |Name|Description| - |---|--- - |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission - | `PulsarAdminException.NotFoundException` | Cluster doesn't exist - | `PulsarAdminException` | Unexpected error - - For more information, see [`updateSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#updateSource-SourceConfig-java.lang.String-). - -* Update a source connector using a **remote file** with a URL from which fun-pkg can be downloaded. - - ```java - - void updateSourceWithUrl(SourceConfig sourceConfig, - String pkgUrl) - throws PulsarAdminException - - ``` - - Supported URLs are `http` and `file`. 
- - **Example** - - * HTTP: http://www.repo.com/fileName.jar - - * File: file:///dir/fileName.jar - - **Parameter** - - | Name | Description - |---|--- - | `sourceConfig` | The source configuration object - | `pkgUrl` | URL from which pkg can be downloaded - - **Exception** - - |Name|Description| - |---|--- - |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission - | `PulsarAdminException.NotFoundException` | Cluster doesn't exist - | `PulsarAdminException` | Unexpected error - -For more information, see [`createSourceWithUrl`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#updateSourceWithUrl-SourceConfig-java.lang.String-). - - - - -```` - -#### Sink - -Update a running Pulsar sink connector. - -````mdx-code-block - - - - -Use the `update` subcommand. - -``` - -$ pulsar-admin sinks update options - -``` - -For more information, see [here](io-cli.md#update-1). - - - - -Send a `PUT` request to this endpoint: {@inject: endpoint|PUT|/admin/v3/sinks/:tenant/:namespace/:sinkName|operation/updateSink?version=@pulsar:version_number@} - - - - -* Update a running sink connector with a **local file**. - - ```java - - void updateSink(SinkConfig sinkConfig, - String fileName) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - |`sinkConfig` | The sink configuration object - - **Exception** - - |Name|Description| - |---|--- - |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission - | `PulsarAdminException.NotFoundException` | Cluster doesn't exist - | `PulsarAdminException` | Unexpected error - - For more information, see [`updateSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#updateSink-SinkConfig-java.lang.String-). - -* Update a sink connector using a **remote file** with a URL from which fun-pkg can be downloaded. - - ```java - - void updateSinkWithUrl(SinkConfig sinkConfig, - String pkgUrl) - throws PulsarAdminException - - ``` - - Supported URLs are `http` and `file`. - - **Example** - - * HTTP: http://www.repo.com/fileName.jar - - * File: file:///dir/fileName.jar - - **Parameter** - - | Name | Description - |---|--- - | `sinkConfig` | The sink configuration object - | `pkgUrl` | URL from which pkg can be downloaded - - **Exception** - - |Name|Description| - |---|--- - |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission - |`PulsarAdminException.NotFoundException` | Cluster doesn't exist - |`PulsarAdminException` | Unexpected error - -For more information, see [`updateSinkWithUrl`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#updateSinkWithUrl-SinkConfig-java.lang.String-). - - - - -```` - -## Stop a connector - -### `stop` - -You can stop a connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Stop a source connector. - -````mdx-code-block - - - - -Use the `stop` subcommand. - -``` - -$ pulsar-admin sources stop options - -``` - -For more information, see [here](io-cli.md#stop). - - - - -* Stop **all** source connectors. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/stopSource?version=@pulsar:version_number@} - -* Stop a **specified** source connector. 
- - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/:instanceId|operation/stopSource?version=@pulsar:version_number@} - - - - -* Stop **all** source connectors. - - ```java - - void stopSource(String tenant, - String namespace, - String source) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `source` | Source name - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`stopSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#stopSource-java.lang.String-java.lang.String-java.lang.String-). - -* Stop a **specified** source connector. - - ```java - - void stopSource(String tenant, - String namespace, - String source, - int instanceId) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `source` | Source name - `instanceId` | Source instanceID - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`stopSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#stopSource-java.lang.String-java.lang.String-java.lang.String-int-). - - - - -```` - -#### Sink - -Stop a sink connector. - -````mdx-code-block - - - - -Use the `stop` subcommand. - -``` - -$ pulsar-admin sinks stop options - -``` - -For more information, see [here](io-cli.md#stop-1). - - - - -* Stop **all** sink connectors. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sinkName/stop|operation/stopSink?version=@pulsar:version_number@} - -* Stop a **specified** sink connector. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sinkeName/:instanceId/stop|operation/stopSink?version=@pulsar:version_number@} - - - - -* Stop **all** sink connectors. - - ```java - - void stopSink(String tenant, - String namespace, - String sink) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `source` | Source name - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`stopSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#stopSink-java.lang.String-java.lang.String-java.lang.String-). - -* Stop a **specified** sink connector. - - ```java - - void stopSink(String tenant, - String namespace, - String sink, - int instanceId) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `source` | Source name - `instanceId` | Source instanceID - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`stopSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#stopSink-java.lang.String-java.lang.String-java.lang.String-int-). - - - - -```` - -## Restart a connector - -### `restart` - -You can restart a connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Restart a source connector. - -````mdx-code-block - - - - -Use the `restart` subcommand. 
- -``` - -$ pulsar-admin sources restart options - -``` - -For more information, see [here](io-cli.md#restart). - - - - -* Restart **all** source connectors. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/restart|operation/restartSource?version=@pulsar:version_number@} - -* Restart a **specified** source connector. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/:instanceId/restart|operation/restartSource?version=@pulsar:version_number@} - - - - -* Restart **all** source connectors. - - ```java - - void restartSource(String tenant, - String namespace, - String source) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `source` | Source name - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`restartSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#restartSource-java.lang.String-java.lang.String-java.lang.String-). - -* Restart a **specified** source connector. - - ```java - - void restartSource(String tenant, - String namespace, - String source, - int instanceId) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `source` | Source name - `instanceId` | Source instanceID - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`restartSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#restartSource-java.lang.String-java.lang.String-java.lang.String-int-). - - - - -```` - -#### Sink - -Restart a sink connector. - -````mdx-code-block - - - - -Use the `restart` subcommand. - -``` - -$ pulsar-admin sinks restart options - -``` - -For more information, see [here](io-cli.md#restart-1). - - - - -* Restart **all** sink connectors. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sinkName/restart|operation/restartSource?version=@pulsar:version_number@} - -* Restart a **specified** sink connector. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sinkName/:instanceId/restart|operation/restartSource?version=@pulsar:version_number@} - - - - -* Restart all Pulsar sink connectors. - - ```java - - void restartSink(String tenant, - String namespace, - String sink) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `sink` | Sink name - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`restartSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#restartSink-java.lang.String-java.lang.String-java.lang.String-). - -* Restart a **specified** sink connector. 
- - ```java - - void restartSink(String tenant, - String namespace, - String sink, - int instanceId) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `source` | Source name - `instanceId` | Sink instanceID - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`restartSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#restartSink-java.lang.String-java.lang.String-java.lang.String-int-). - - - - -```` - -## Delete a connector - -### `delete` - -You can delete a connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Delete a source connector. - -````mdx-code-block - - - - -Use the `delete` subcommand. - -``` - -$ pulsar-admin sources delete options - -``` - -For more information, see [here](io-cli.md#delete). - - - - -Delete al Pulsar source connector. - -Send a `DELETE` request to this endpoint: {@inject: endpoint|DELETE|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/deregisterSource?version=@pulsar:version_number@} - - - - -Delete a source connector. - -```java - -void deleteSource(String tenant, - String namespace, - String source) - throws PulsarAdminException - -``` - -**Parameter** - -| Name | Description -|---|--- -`tenant` | Tenant name -`namespace` | Namespace name -`source` | Source name - -**Exception** - -|Name|Description| -|---|--- -|`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission -| `PulsarAdminException.NotFoundException` | Cluster doesn't exist -| `PulsarAdminException.PreconditionFailedException` | Cluster is not empty -| `PulsarAdminException` | Unexpected error - -For more information, see [`deleteSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#deleteSource-java.lang.String-java.lang.String-java.lang.String-). - - - - -```` - -#### Sink - -Delete a sink connector. - -````mdx-code-block - - - - -Use the `delete` subcommand. - -``` - -$ pulsar-admin sinks delete options - -``` - -For more information, see [here](io-cli.md#delete-1). - - - - -Delete a sink connector. - -Send a `DELETE` request to this endpoint: {@inject: endpoint|DELETE|/admin/v3/sinks/:tenant/:namespace/:sinkName|operation/deregisterSink?version=@pulsar:version_number@} - - - - -Delete a Pulsar sink connector. - -```java - -void deleteSink(String tenant, - String namespace, - String source) - throws PulsarAdminException - -``` - -**Parameter** - -| Name | Description -|---|--- -`tenant` | Tenant name -`namespace` | Namespace name -`sink` | Sink name - -**Exception** - -|Name|Description| -|---|--- -|`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission -| `PulsarAdminException.NotFoundException` | Cluster doesn't exist -| `PulsarAdminException.PreconditionFailedException` | Cluster is not empty -| `PulsarAdminException` | Unexpected error - -For more information, see [`deleteSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#deleteSink-java.lang.String-java.lang.String-java.lang.String-). 
- - - - -```` diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/kubernetes-helm.md b/site2/website/versioned_docs/version-2.8.1-deprecated/kubernetes-helm.md deleted file mode 100644 index ea92a0968cd7d0..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/kubernetes-helm.md +++ /dev/null @@ -1,441 +0,0 @@ ---- -id: kubernetes-helm -title: Get started in Kubernetes -sidebar_label: "Run Pulsar in Kubernetes" -original_id: kubernetes-helm ---- - -This section guides you through every step of installing and running Apache Pulsar with Helm on Kubernetes quickly, including the following sections: - -- Install the Apache Pulsar on Kubernetes using Helm -- Start and stop Apache Pulsar -- Create topics using `pulsar-admin` -- Produce and consume messages using Pulsar clients -- Monitor Apache Pulsar status with Prometheus and Grafana - -For deploying a Pulsar cluster for production usage, read the documentation on [how to configure and install a Pulsar Helm chart](helm-deploy.md). - -## Prerequisite - -- Kubernetes server 1.14.0+ -- kubectl 1.14.0+ -- Helm 3.0+ - -:::tip - -For the following steps, step 2 and step 3 are for **developers** and step 4 and step 5 are for **administrators**. - -::: - -## Step 0: Prepare a Kubernetes cluster - -Before installing a Pulsar Helm chart, you have to create a Kubernetes cluster. You can follow [the instructions](helm-prepare.md) to prepare a Kubernetes cluster. - -We use [Minikube](https://minikube.sigs.k8s.io/docs/start/) in this quick start guide. To prepare a Kubernetes cluster, follow these steps: - -1. Create a Kubernetes cluster on Minikube. - - ```bash - - minikube start --memory=8192 --cpus=4 --kubernetes-version= - - ``` - - The `` can be any [Kubernetes version supported by your Minikube installation](https://minikube.sigs.k8s.io/docs/reference/configuration/kubernetes/), such as `v1.16.1`. - -2. Set `kubectl` to use Minikube. - - ```bash - - kubectl config use-context minikube - - ``` - -3. To use the [Kubernetes Dashboard](https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/) with the local Kubernetes cluster on Minikube, enter the command below: - - ```bash - - minikube dashboard - - ``` - - The command automatically triggers opening a webpage in your browser. - -## Step 1: Install Pulsar Helm chart - -1. Add Pulsar charts repo. - - ```bash - - helm repo add apache https://pulsar.apache.org/charts - - ``` - - ```bash - - helm repo update - - ``` - -2. Clone the Pulsar Helm chart repository. - - ```bash - - git clone https://github.com/apache/pulsar-helm-chart - cd pulsar-helm-chart - - ``` - -3. Run the script `prepare_helm_release.sh` to create secrets required for installing the Apache Pulsar Helm chart. The username `pulsar` and password `pulsar` are used for logging into the Grafana dashboard and Pulsar Manager. - - ```bash - - ./scripts/pulsar/prepare_helm_release.sh \ - -n pulsar \ - -k pulsar-mini \ - -c - - ``` - -4. Use the Pulsar Helm chart to install a Pulsar cluster to Kubernetes. - - :::note - - You need to specify `--set initialize=true` when installing Pulsar the first time. This command installs and starts Apache Pulsar. - - ::: - - ```bash - - helm install \ - --values examples/values-minikube.yaml \ - --set initialize=true \ - --namespace pulsar \ - pulsar-mini apache/pulsar - - ``` - -5. Check the status of all pods. 
- - ```bash - - kubectl get pods -n pulsar - - ``` - - If all pods start up successfully, you can see that the `STATUS` is changed to `Running` or `Completed`. - - **Output** - - ```bash - - NAME READY STATUS RESTARTS AGE - pulsar-mini-bookie-0 1/1 Running 0 9m27s - pulsar-mini-bookie-init-5gphs 0/1 Completed 0 9m27s - pulsar-mini-broker-0 1/1 Running 0 9m27s - pulsar-mini-grafana-6b7bcc64c7-4tkxd 1/1 Running 0 9m27s - pulsar-mini-prometheus-5fcf5dd84c-w8mgz 1/1 Running 0 9m27s - pulsar-mini-proxy-0 1/1 Running 0 9m27s - pulsar-mini-pulsar-init-t7cqt 0/1 Completed 0 9m27s - pulsar-mini-pulsar-manager-9bcbb4d9f-htpcs 1/1 Running 0 9m27s - pulsar-mini-toolset-0 1/1 Running 0 9m27s - pulsar-mini-zookeeper-0 1/1 Running 0 9m27s - - ``` - -6. Check the status of all services in the namespace `pulsar`. - - ```bash - - kubectl get services -n pulsar - - ``` - - **Output** - - ```bash - - NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - pulsar-mini-bookie ClusterIP None 3181/TCP,8000/TCP 11m - pulsar-mini-broker ClusterIP None 8080/TCP,6650/TCP 11m - pulsar-mini-grafana LoadBalancer 10.106.141.246 3000:31905/TCP 11m - pulsar-mini-prometheus ClusterIP None 9090/TCP 11m - pulsar-mini-proxy LoadBalancer 10.97.240.109 80:32305/TCP,6650:31816/TCP 11m - pulsar-mini-pulsar-manager LoadBalancer 10.103.192.175 9527:30190/TCP 11m - pulsar-mini-toolset ClusterIP None 11m - pulsar-mini-zookeeper ClusterIP None 2888/TCP,3888/TCP,2181/TCP 11m - - ``` - -## Step 2: Use pulsar-admin to create Pulsar tenants/namespaces/topics - -`pulsar-admin` is the CLI (command-Line Interface) tool for Pulsar. In this step, you can use `pulsar-admin` to create resources, including tenants, namespaces, and topics. - -1. Enter the `toolset` container. - - ```bash - - kubectl exec -it -n pulsar pulsar-mini-toolset-0 -- /bin/bash - - ``` - -2. In the `toolset` container, create a tenant named `apache`. - - ```bash - - bin/pulsar-admin tenants create apache - - ``` - - Then you can list the tenants to see if the tenant is created successfully. - - ```bash - - bin/pulsar-admin tenants list - - ``` - - You should see a similar output as below. The tenant `apache` has been successfully created. - - ```bash - - "apache" - "public" - "pulsar" - - ``` - -3. In the `toolset` container, create a namespace named `pulsar` in the tenant `apache`. - - ```bash - - bin/pulsar-admin namespaces create apache/pulsar - - ``` - - Then you can list the namespaces of tenant `apache` to see if the namespace is created successfully. - - ```bash - - bin/pulsar-admin namespaces list apache - - ``` - - You should see a similar output as below. The namespace `apache/pulsar` has been successfully created. - - ```bash - - "apache/pulsar" - - ``` - -4. In the `toolset` container, create a topic `test-topic` with `4` partitions in the namespace `apache/pulsar`. - - ```bash - - bin/pulsar-admin topics create-partitioned-topic apache/pulsar/test-topic -p 4 - - ``` - -5. In the `toolset` container, list all the partitioned topics in the namespace `apache/pulsar`. - - ```bash - - bin/pulsar-admin topics list-partitioned-topics apache/pulsar - - ``` - - Then you can see all the partitioned topics in the namespace `apache/pulsar`. - - ```bash - - "persistent://apache/pulsar/test-topic" - - ``` - -## Step 3: Use Pulsar client to produce and consume messages - -You can use the Pulsar client to create producers and consumers to produce and consume messages. - -By default, the Pulsar Helm chart exposes the Pulsar cluster through a Kubernetes `LoadBalancer`. 
In Minikube, you can use the following command to check the proxy service. - -```bash - -kubectl get services -n pulsar | grep pulsar-mini-proxy - -``` - -You will see a similar output as below. - -```bash - -pulsar-mini-proxy LoadBalancer 10.97.240.109 80:32305/TCP,6650:31816/TCP 28m - -``` - -This output tells what are the node ports that Pulsar cluster's binary port and HTTP port are mapped to. The port after `80:` is the HTTP port while the port after `6650:` is the binary port. - -Then you can find the IP address and exposed ports of your Minikube server by running the following command. - -```bash - -minikube service pulsar-mini-proxy -n pulsar - -``` - -**Output** - -```bash - -|-----------|-------------------|-------------|-------------------------| -| NAMESPACE | NAME | TARGET PORT | URL | -|-----------|-------------------|-------------|-------------------------| -| pulsar | pulsar-mini-proxy | http/80 | http://172.17.0.4:32305 | -| | | pulsar/6650 | http://172.17.0.4:31816 | -|-----------|-------------------|-------------|-------------------------| -🏃 Starting tunnel for service pulsar-mini-proxy. -|-----------|-------------------|-------------|------------------------| -| NAMESPACE | NAME | TARGET PORT | URL | -|-----------|-------------------|-------------|------------------------| -| pulsar | pulsar-mini-proxy | | http://127.0.0.1:61853 | -| | | | http://127.0.0.1:61854 | -|-----------|-------------------|-------------|------------------------| - -``` - -At this point, you can get the service URLs to connect to your Pulsar client. Here are URL examples: - -``` - -webServiceUrl=http://127.0.0.1:61853/ -brokerServiceUrl=pulsar://127.0.0.1:61854/ - -``` - -Then you can proceed with the following steps: - -1. Download the Apache Pulsar tarball from the [downloads page](https://pulsar.apache.org/download/). - -2. Decompress the tarball based on your download file. - - ```bash - - tar -xf .tar.gz - - ``` - -3. Expose `PULSAR_HOME`. - - (1) Enter the directory of the decompressed download file. - - (2) Expose `PULSAR_HOME` as the environment variable. - - ```bash - - export PULSAR_HOME=$(pwd) - - ``` - -4. Configure the Pulsar client. - - In the `${PULSAR_HOME}/conf/client.conf` file, replace `webServiceUrl` and `brokerServiceUrl` with the service URLs you get from the above steps. - -5. Create a subscription to consume messages from `apache/pulsar/test-topic`. - - ```bash - - bin/pulsar-client consume -s sub apache/pulsar/test-topic -n 0 - - ``` - -6. Open a new terminal. In the new terminal, create a producer and send 10 messages to the `test-topic` topic. - - ```bash - - bin/pulsar-client produce apache/pulsar/test-topic -m "---------hello apache pulsar-------" -n 10 - - ``` - -7. Verify the results. - - - From the producer side - - **Output** - - The messages have been produced successfully. - - ```bash - - 18:15:15.489 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 10 messages successfully produced - - ``` - - - From the consumer side - - **Output** - - At the same time, you can receive the messages as below. 
- - ```bash - - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - - ``` - -## Step 4: Use Pulsar Manager to manage the cluster - -[Pulsar Manager](administration-pulsar-manager.md) is a web-based GUI management tool for managing and monitoring Pulsar. - -1. By default, the `Pulsar Manager` is exposed as a separate `LoadBalancer`. You can open the Pulsar Manager UI using the following command: - - ```bash - - minikube service -n pulsar pulsar-mini-pulsar-manager - - ``` - -2. The Pulsar Manager UI will be open in your browser. You can use the username `pulsar` and password `pulsar` to log into Pulsar Manager. - -3. In Pulsar Manager UI, you can create an environment. - - - Click `New Environment` button in the top-left corner. - - Type `pulsar-mini` for the field `Environment Name` in the popup window. - - Type `http://pulsar-mini-broker:8080` for the field `Service URL` in the popup window. - - Click `Confirm` button in the popup window. - -4. After successfully creating an environment, you are redirected to the `tenants` page of that environment. Then you can create `tenants`, `namespaces` and `topics` using the Pulsar Manager. - -## Step 5: Use Prometheus and Grafana to monitor cluster - -Grafana is an open-source visualization tool, which can be used for visualizing time series data into dashboards. - -1. By default, the Grafana is exposed as a separate `LoadBalancer`. You can open the Grafana UI using the following command: - - ```bash - - minikube service pulsar-mini-grafana -n pulsar - - ``` - -2. The Grafana UI is open in your browser. You can use the username `pulsar` and password `pulsar` to log into the Grafana Dashboard. - -3. You can view dashboards for different components of a Pulsar cluster. diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/performance-pulsar-perf.md b/site2/website/versioned_docs/version-2.8.1-deprecated/performance-pulsar-perf.md deleted file mode 100644 index 7b7f312bbb3ca2..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/performance-pulsar-perf.md +++ /dev/null @@ -1,227 +0,0 @@ ---- -id: performance-pulsar-perf -title: Pulsar Perf -sidebar_label: "Pulsar Perf" -original_id: performance-pulsar-perf ---- - -The Pulsar Perf is a built-in performance test tool for Apache Pulsar. You can use the Pulsar Perf to test message writing or reading performance. For detailed information about performance tuning, see [here](https://streamnative.io/en/blog/tech/2021-01-14-pulsar-architecture-performance-tuning). - -## Produce messages - -This example shows how the Pulsar Perf produces messages with default options. For all configuration options available for the `pulsar-perf produce` command, see [configuration options](#configuration-options-for-pulsar-perf-produce). - -``` - -bin/pulsar-perf produce my-topic - -``` - -After the command is executed, the test data is continuously output on the Console. 
- -**Output** - -``` - -19:53:31.459 [pulsar-perf-producer-exec-1-1] INFO org.apache.pulsar.testclient.PerformanceProducer - Created 1 producers -19:53:31.482 [pulsar-timer-5-1] WARN com.scurrilous.circe.checksum.Crc32cIntChecksum - Failed to load Circe JNI library. Falling back to Java based CRC32c provider -19:53:40.861 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 93.7 msg/s --- 0.7 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.575 ms - med: 3.460 - 95pct: 4.790 - 99pct: 5.308 - 99.9pct: 5.834 - 99.99pct: 6.609 - Max: 6.609 -19:53:50.909 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.437 ms - med: 3.328 - 95pct: 4.656 - 99pct: 5.071 - 99.9pct: 5.519 - 99.99pct: 5.588 - Max: 5.588 -19:54:00.926 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.376 ms - med: 3.276 - 95pct: 4.520 - 99pct: 4.939 - 99.9pct: 5.440 - 99.99pct: 5.490 - Max: 5.490 -19:54:10.940 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.298 ms - med: 3.220 - 95pct: 4.474 - 99pct: 4.926 - 99.9pct: 5.645 - 99.99pct: 5.654 - Max: 5.654 -19:54:20.956 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.1 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.308 ms - med: 3.199 - 95pct: 4.532 - 99pct: 4.871 - 99.9pct: 5.291 - 99.99pct: 5.323 - Max: 5.323 -19:54:30.972 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.249 ms - med: 3.144 - 95pct: 4.437 - 99pct: 4.970 - 99.9pct: 5.329 - 99.99pct: 5.414 - Max: 5.414 -19:54:40.987 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.435 ms - med: 3.361 - 95pct: 4.772 - 99pct: 5.150 - 99.9pct: 5.373 - 99.99pct: 5.837 - Max: 5.837 -^C19:54:44.325 [Thread-1] INFO org.apache.pulsar.testclient.PerformanceProducer - Aggregated throughput stats --- 7286 records sent --- 99.140 msg/s --- 0.775 Mbit/s -19:54:44.336 [Thread-1] INFO org.apache.pulsar.testclient.PerformanceProducer - Aggregated latency stats --- Latency: mean: 3.383 ms - med: 3.293 - 95pct: 4.610 - 99pct: 5.059 - 99.9pct: 5.588 - 99.99pct: 5.837 - 99.999pct: 6.609 - Max: 6.609 - -``` - -From the above test data, you can get the throughput statistics and the write latency statistics. The aggregated statistics is printed when the Pulsar Perf is stopped. You can press **Ctrl**+**C** to stop the Pulsar Perf. After the Pulsar Perf is stopped, the [HdrHistogram](http://hdrhistogram.github.io/HdrHistogram/) formatted test result appears under your directory. The document looks like `perf-producer-1589370810837.hgrm`. You can also check the test result through [HdrHistogram Plotter](https://hdrhistogram.github.io/HdrHistogram/plotFiles.html). For details about how to check the test result through [HdrHistogram Plotter](https://hdrhistogram.github.io/HdrHistogram/plotFiles.html), see [HdrHistogram Plotter](#hdrhistogram-plotter). - -### Configuration options for `pulsar-perf produce` - -You can get all options by executing the `bin/pulsar-perf produce -h` command. Therefore, you can modify these options as required. 
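-
-For example, the sketch below publishes 512-byte messages from two producers at a target rate of 1000 msg/s for 120 seconds. The long-form flags mirror the option names in the table that follows; confirm the exact spellings with `bin/pulsar-perf produce -h`, since they can vary between versions.
-
-```
-
-bin/pulsar-perf produce my-topic \
-  --rate 1000 \
-  --size 512 \
-  --num-producers 2 \
-  --test-duration 120
-
-```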
- -The following table lists configuration options available for the `pulsar-perf produce` command. - -| Option | Description | Default value| -|----|----|----| -| access-mode | Set the producer access mode. Valid values are `Shared`, `Exclusive` and `WaitForExclusive`. | Shared | -| admin-url | Set the Pulsar admin URL. | N/A | -| auth-params | Set the authentication parameters, whose format is determined by the implementation of the `configure` method in the authentication plugin class, such as "key1:val1,key2:val2" or "{"key1":"val1","key2":"val2"}". | N/A | -| auth_plugin | Set the authentication plugin class name. | N/A | -| listener-name | Set the listener name for the broker. | N/A | -| batch-max-bytes | Set the maximum number of bytes for each batch. | 4194304 | -| batch-max-messages | Set the maximum number of messages for each batch. | 1000 | -| batch-time-window | Set a window for a batch of messages. | 1 ms | -| busy-wait | Enable or disable Busy-Wait on the Pulsar client. | false | -| chunking | Configure whether to split the message and publish in chunks if message size is larger than allowed max size. | false | -| compression | Compress the message payload. | N/A | -| conf-file | Set the configuration file. | N/A | -| delay | Mark messages with a given delay. | 0s | -| encryption-key-name | Set the name of the public key used to encrypt the payload. | N/A | -| encryption-key-value-file | Set the file which contains the public key used to encrypt the payload. | N/A | -| exit-on-failure | Configure whether to exit from the process on publish failure. | false | -| format-class | Set the custom formatter class name. | org.apache.pulsar.testclient.DefaultMessageFormatter | -| format-payload | Configure whether to format %i as a message index in the stream from producer and/or %t as the timestamp nanoseconds. | false | -| help | Configure the help message. | false | -| max-connections | Set the maximum number of TCP connections to a single broker. | 100 | -| max-outstanding | Set the maximum number of outstanding messages. | 1000 | -| max-outstanding-across-partitions | Set the maximum number of outstanding messages across partitions. | 50000 | -| message-key-generation-mode | Set the generation mode of message key. Valid options are `autoIncrement`, `random`. | N/A | -| num-io-threads | Set the number of threads to be used for handling connections to brokers. | 1 | -| num-messages | Set the number of messages to be published in total. If it is set to 0, it keeps publishing messages. | 0 | -| num-producers | Set the number of producers for each topic. | 1 | -| num-test-threads | Set the number of test threads. | 1 | -| num-topic | Set the number of topics. | 1 | -| partitions | Configure whether to create partitioned topics with the given number of partitions. | N/A | -| payload-delimiter | Set the delimiter used to split lines when using payload from a file. | \n | -| payload-file | Use the payload from an UTF-8 encoded text file and a payload is randomly selected when messages are published. | N/A | -| producer-name | Set the producer name. | N/A | -| rate | Set the publish rate of messages across topics. | 100 | -| send-timeout | Set the sendTimeout. | 0 | -| separator | Set the separator between the topic and topic number. | - | -| service-url | Set the Pulsar service URL. | | -| size | Set the message size. | 1024 bytes | -| stats-interval-seconds | Set the statistics interval. If it is set to 0, statistics is disabled. | 0 | -| test-duration | Set the test duration. 
If it is set to 0, it keeps publishing tests. | 0s | -| trust-cert-file | Set the path for the trusted TLS certificate file. | | | -| warmup-time | Set the warm-up time. | 1s | -| tls-allow-insecure | Set the allowed insecure TLS connection. | N/A | - -## Consume messages - -This example shows how the Pulsar Perf consumes messages with default options. - -``` - -bin/pulsar-perf consume my-topic - -``` - -After the command is executed, the test data is continuously output on the Console. - -**Output** - -``` - -20:35:37.071 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Start receiving from 1 consumers on 1 topics -20:35:41.150 [pulsar-client-io-1-9] WARN com.scurrilous.circe.checksum.Crc32cIntChecksum - Failed to load Circe JNI library. Falling back to Java based CRC32c provider -20:35:47.092 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 59.572 msg/s -- 0.465 Mbit/s --- Latency: mean: 11.298 ms - med: 10 - 95pct: 15 - 99pct: 98 - 99.9pct: 137 - 99.99pct: 152 - Max: 152 -20:35:57.104 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 99.958 msg/s -- 0.781 Mbit/s --- Latency: mean: 9.176 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 18 - Max: 18 -20:36:07.115 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 100.006 msg/s -- 0.781 Mbit/s --- Latency: mean: 9.316 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 17 - Max: 17 -20:36:17.125 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 100.085 msg/s -- 0.782 Mbit/s --- Latency: mean: 9.327 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 17 - Max: 17 -20:36:27.136 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 99.900 msg/s -- 0.780 Mbit/s --- Latency: mean: 9.404 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 17 - Max: 17 -20:36:37.147 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 99.985 msg/s -- 0.781 Mbit/s --- Latency: mean: 8.998 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 17 - Max: 17 -^C20:36:42.755 [Thread-1] INFO org.apache.pulsar.testclient.PerformanceConsumer - Aggregated throughput stats --- 6051 records received --- 92.125 msg/s --- 0.720 Mbit/s -20:36:42.759 [Thread-1] INFO org.apache.pulsar.testclient.PerformanceConsumer - Aggregated latency stats --- Latency: mean: 9.422 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 98 - 99.99pct: 137 - 99.999pct: 152 - Max: 152 - -``` - -From the output test data, you can get the throughput statistics and the end-to-end latency statistics. The aggregated statistics is printed after the Pulsar Perf is stopped. You can press **Ctrl**+**C** to stop the Pulsar Perf. - -### Configuration options for `pulsar-perf consume` - -You can get all options by executing the `bin/pulsar-perf consume -h` command. Therefore, you can modify these options as required. - -The following table lists configuration options available for the `pulsar-perf consume` command. - -| Option | Description | Default value | -|----|----|----| -| acks-delay-millis | Set the acknowledgment grouping delay in milliseconds. | 100 ms | -| auth-params | Set the authentication parameters, whose format is determined by the implementation of the `configure` method in the authentication plugin class, such as "key1:val1,key2:val2" or "{"key1":"val1","key2":"val2"}". | N/A | -| auth_plugin | Set the authentication plugin class name. 
| N/A | -| auto_ack_chunk_q_full | Configure whether to automatically ack for the oldest message in receiver queue if the queue is full. | false | -| listener-name | Set the listener name for the broker. | N/A | -| batch-index-ack | Enable or disable the batch index acknowledgment. | false | -| busy-wait | Enable or disable Busy-Wait on the Pulsar client. | false | -| conf-file | Set the configuration file. | N/A | -| encryption-key-name | Set the name of the public key used to encrypt the payload. | N/A | -| encryption-key-value-file | Set the file which contains the public key used to encrypt the payload. | N/A | -| help | Configure the help message. | false | -| expire_time_incomplete_chunked_messages | Set the expiration time for incomplete chunk messages (in milliseconds). | 0 | -| max-connections | Set the maximum number of TCP connections to a single broker. | 100 | -| max_chunked_msg | Set the max pending chunk messages. | 0 | -| num-consumers | Set the number of consumers for each topic. | 1 | -| num-io-threads |Set the number of threads to be used for handling connections to brokers. | 1 | -| num-subscriptions | Set the number of subscriptions (per topic). | 1 | -| num-topic | Set the number of topics. | 1 | -| pool-messages | Configure whether to use the pooled message. | true | -| rate | Simulate a slow message consumer (rate in msg/s). | 0.0 | -| receiver-queue-size | Set the size of the receiver queue. | 1000 | -| receiver-queue-size-across-partitions | Set the max total size of the receiver queue across partitions. | 50000 | -| replicated | Configure whether the subscription status should be replicated. | false | -| service-url | Set the Pulsar service URL. | | -| stats-interval-seconds | Set the statistics interval. If it is set to 0, statistics is disabled. | 0 | -| subscriber-name | Set the subscriber name prefix. | sub | -| subscription-position | Set the subscription position. Valid values are `Latest`, `Earliest`.| Latest | -| subscription-type | Set the subscription type.
Valid values are `Exclusive`, `Shared`, `Failover`, `Key_Shared`. | Exclusive |
-| test-duration | Set the test duration (in seconds). If the value is 0 or negative, the test keeps consuming messages. | 0 |
-| tls-allow-insecure | Set the allowed insecure TLS connection. | N/A |
-| trust-cert-file | Set the path for the trusted TLS certificate file. | |
-
-## Configurations
-
-By default, the Pulsar Perf uses `conf/client.conf` as the default configuration and uses `conf/log4j2.yaml` as the default Log4j configuration. If you want to connect to other Pulsar clusters, you can update the `brokerServiceUrl` in the client configuration.
-
-You can use the following commands to change the configuration file and the Log4j configuration file.
-
-```
-
-export PULSAR_CLIENT_CONF=<path-to-client-config-file>
-export PULSAR_LOG_CONF=<path-to-log4j-config-file>
-
-```
-
-In addition, you can use the following command to configure the JVM configuration through environment variables:
-
-```
-
-export PULSAR_EXTRA_OPTS='-Xms4g -Xmx4g -XX:MaxDirectMemorySize=4g'
-
-```
-
-## HdrHistogram Plotter
-
-The [HdrHistogram Plotter](https://hdrhistogram.github.io/HdrHistogram/plotFiles.html) is a visualization tool that makes it easier to inspect Pulsar Perf test results.
-
-To check test results through the HdrHistogram Plotter, follow these steps:
-
-1. Clone the HdrHistogram repository from GitHub to your local machine.
-
-   ```
-
-   git clone https://github.com/HdrHistogram/HdrHistogram.git
-
-   ```
-
-2. Switch to the HdrHistogram folder.
-
-   ```
-
-   cd HdrHistogram
-
-   ```
-
-3. Install the HdrHistogram Plotter.
-
-   ```
-
-   mvn clean install -DskipTests
-
-   ```
-
-4. Transform the file generated by the Pulsar Perf.
-
-   ```
-
-   ./HistogramLogProcessor -i <input-hdr-file> -o <output-file>
-
-   ```
-
-5. You will get two output files. Upload the output file with the filename extension of .hgrm to the [HdrHistogram Plotter](https://hdrhistogram.github.io/HdrHistogram/plotFiles.html).
-
-6. Check the test result through the Graphical User Interface of the HdrHistogram Plotter, as shown below.
-
-   ![](/assets/perf-produce.png)
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/reference-cli-tools.md b/site2/website/versioned_docs/version-2.8.1-deprecated/reference-cli-tools.md
deleted file mode 100644
index 309009f635e295..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/reference-cli-tools.md
+++ /dev/null
@@ -1,958 +0,0 @@
----
-id: reference-cli-tools
-title: Pulsar command-line tools
-sidebar_label: "Pulsar CLI tools"
-original_id: reference-cli-tools
----
-
-Pulsar offers several command-line tools that you can use for managing Pulsar installations, performance testing, using command-line producers and consumers, and more.
-
-All Pulsar command-line tools can be run from the `bin` directory of your [installed Pulsar package](getting-started-standalone.md). The following tools are currently documented:
-
-* [`pulsar`](#pulsar)
-* [`pulsar-client`](#pulsar-client)
-* [`pulsar-daemon`](#pulsar-daemon)
-* [`pulsar-perf`](#pulsar-perf)
-* [`bookkeeper`](#bookkeeper)
-* [`broker-tool`](#broker-tool)
-
-> ### Getting help
-> You can get help for any CLI tool, command, or subcommand using the `--help` flag, or `-h` for short. Here's an example:
-
-> ```shell
->
-> $ bin/pulsar broker --help
->
->
-> ```
-
-
-## `pulsar`
-
-The pulsar tool is used to start Pulsar components, such as bookies and ZooKeeper, in the foreground.
-
-These processes can also be started in the background with nohup by using the pulsar-daemon tool, which has the same command interface as pulsar.
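-
-For example, the sketch below starts the same standalone service in the foreground with `pulsar` and in the background with `pulsar-daemon` (described later in this document):
-
-```bash
-
-# Runs in the foreground and blocks the current terminal
-$ bin/pulsar standalone
-
-# Runs the same service in the background via nohup
-$ bin/pulsar-daemon start standalone
-
-```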
- -Usage: - -```bash - -$ pulsar command - -``` - -Commands: -* `bookie` -* `broker` -* `compact-topic` -* `discovery` -* `configuration-store` -* `initialize-cluster-metadata` -* `proxy` -* `standalone` -* `websocket` -* `zookeeper` -* `zookeeper-shell` - -Example: - -```bash - -$ PULSAR_BROKER_CONF=/path/to/broker.conf pulsar broker - -``` - -The table below lists the environment variables that you can use to configure the `pulsar` tool. - -|Variable|Description|Default| -|---|---|---| -|`PULSAR_LOG_CONF`|Log4j configuration file|`conf/log4j2.yaml`| -|`PULSAR_BROKER_CONF`|Configuration file for broker|`conf/broker.conf`| -|`PULSAR_BOOKKEEPER_CONF`|description: Configuration file for bookie|`conf/bookkeeper.conf`| -|`PULSAR_ZK_CONF`|Configuration file for zookeeper|`conf/zookeeper.conf`| -|`PULSAR_CONFIGURATION_STORE_CONF`|Configuration file for the configuration store|`conf/global_zookeeper.conf`| -|`PULSAR_DISCOVERY_CONF`|Configuration file for discovery service|`conf/discovery.conf`| -|`PULSAR_WEBSOCKET_CONF`|Configuration file for websocket proxy|`conf/websocket.conf`| -|`PULSAR_STANDALONE_CONF`|Configuration file for standalone|`conf/standalone.conf`| -|`PULSAR_EXTRA_OPTS`|Extra options to be passed to the jvm|| -|`PULSAR_EXTRA_CLASSPATH`|Extra paths for Pulsar's classpath|| -|`PULSAR_PID_DIR`|Folder where the pulsar server PID file should be stored|| -|`PULSAR_STOP_TIMEOUT`|Wait time before forcefully killing the Bookie server instance if attempts to stop it are not successful|| - - - -### `bookie` - -Starts up a bookie server - -Usage: - -```bash - -$ pulsar bookie options - -``` - -Options - -|Option|Description|Default| -|---|---|---| -|`-readOnly`|Force start a read-only bookie server|false| -|`-withAutoRecovery`|Start auto-recover service bookie server|false| - - -Example - -```bash - -$ PULSAR_BOOKKEEPER_CONF=/path/to/bookkeeper.conf pulsar bookie \ - -readOnly \ - -withAutoRecovery - -``` - -### `broker` - -Starts up a Pulsar broker - -Usage - -```bash - -$ pulsar broker options - -``` - -Options - -|Option|Description|Default| -|---|---|---| -|`-bc` , `--bookie-conf`|Configuration file for BookKeeper|| -|`-rb` , `--run-bookie`|Run a BookKeeper bookie on the same host as the Pulsar broker|false| -|`-ra` , `--run-bookie-autorecovery`|Run a BookKeeper autorecovery daemon on the same host as the Pulsar broker|false| - -Example - -```bash - -$ PULSAR_BROKER_CONF=/path/to/broker.conf pulsar broker - -``` - -### `compact-topic` - -Run compaction against a Pulsar topic (in a new process) - -Usage - -```bash - -$ pulsar compact-topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-t` , `--topic`|The Pulsar topic that you would like to compact|| - -Example - -```bash - -$ pulsar compact-topic --topic topic-to-compact - -``` - -### `discovery` - -Run a discovery server - -Usage - -```bash - -$ pulsar discovery - -``` - -Example - -```bash - -$ PULSAR_DISCOVERY_CONF=/path/to/discovery.conf pulsar discovery - -``` - -### `configuration-store` - -Starts up the Pulsar configuration store - -Usage - -```bash - -$ pulsar configuration-store - -``` - -Example - -```bash - -$ PULSAR_CONFIGURATION_STORE_CONF=/path/to/configuration_store.conf pulsar configuration-store - -``` - -### `initialize-cluster-metadata` - -One-time cluster metadata initialization - -Usage - -```bash - -$ pulsar initialize-cluster-metadata options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-ub` , `--broker-service-url`|The broker service URL for the new cluster|| -|`-tb` 
### `proxy`

Manages the Pulsar proxy

Usage

```bash

$ pulsar proxy options

```

Options

|Flag|Description|Default|
|---|---|---|
|`--configuration-store`|Configuration store connection string||
|`-zk` , `--zookeeper-servers`|Local ZooKeeper connection string||

Example

```bash

$ PULSAR_PROXY_CONF=/path/to/proxy.conf pulsar proxy \
  --zookeeper-servers zk-0,zk-1,zk-2 \
  --configuration-store zk-0,zk-1,zk-2

```

### `standalone`

Run a broker service with local bookies and local ZooKeeper

Usage

```bash

$ pulsar standalone options

```

Options

|Flag|Description|Default|
|---|---|---|
|`-a` , `--advertised-address`|The standalone broker advertised address||
|`--bookkeeper-dir`|Local bookies’ base data directory|data/standalone/bookkeeper|
|`--bookkeeper-port`|Local bookies’ base port|3181|
|`--no-broker`|Only start ZooKeeper and BookKeeper services, not the broker|false|
|`--num-bookies`|The number of local bookies|1|
|`--only-broker`|Only start the Pulsar broker service (not ZooKeeper or BookKeeper)||
|`--wipe-data`|Clean up previous ZooKeeper/BookKeeper data||
|`--zookeeper-dir`|Local ZooKeeper’s data directory|data/standalone/zookeeper|
|`--zookeeper-port` |Local ZooKeeper’s port|2181|

Example

```bash

$ PULSAR_STANDALONE_CONF=/path/to/standalone.conf pulsar standalone

```

### `websocket`

Starts up a WebSocket proxy server

Usage

```bash

$ pulsar websocket

```

Example

```bash

$ PULSAR_WEBSOCKET_CONF=/path/to/websocket.conf pulsar websocket

```

### `zookeeper`

Starts up a ZooKeeper cluster

Usage

```bash

$ pulsar zookeeper

```

Example

```bash

$ PULSAR_ZK_CONF=/path/to/zookeeper.conf pulsar zookeeper

```

### `zookeeper-shell`

Connects to a running ZooKeeper cluster using the ZooKeeper shell

Usage

```bash

$ pulsar zookeeper-shell options

```

Options

|Flag|Description|Default|
|---|---|---|
|`-c`, `--conf`|Configuration file for ZooKeeper||
|`-server`|The ZooKeeper server address to connect to, e.g. `127.0.0.1:2181`||
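Example

For instance, a minimal sketch that opens a shell against a local ZooKeeper instance (the address is shown for illustration only):

```bash

$ pulsar zookeeper-shell -server 127.0.0.1:2181

```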
## `pulsar-client`

The pulsar-client tool

Usage

```bash

$ pulsar-client command

```

Commands
* `produce`
* `consume`


Options

|Flag|Description|Default|
|---|---|---|
|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class, for example "key1:val1,key2:val2" or "{\"key1\":\"val1\",\"key2\":\"val2\"}"|{"saslJaasClientSectionName":"PulsarClient", "serverType":"broker"}|
|`--auth-plugin`|Authentication plugin class name|org.apache.pulsar.client.impl.auth.AuthenticationSasl|
|`--listener-name`|Listener name for the broker||
|`--proxy-protocol`|Proxy protocol to select type of routing at proxy||
|`--proxy-url`|Proxy-server URL to which to connect||
|`--url`|Broker URL to which to connect|pulsar://localhost:6650/ (or ws://localhost:8080 for WebSocket)|
|`-v`, `--version`|Get the version of the Pulsar client||
|`-h`, `--help`|Show this help||


### `produce`
Send a message or messages to a specific broker and topic

Usage

```bash

$ pulsar-client produce topic options

```

Options

|Flag|Description|Default|
|---|---|---|
|`-f`, `--files`|Comma-separated file paths to send; either -m or -f must be specified|[]|
|`-m`, `--messages`|Comma-separated string of messages to send; either -m or -f must be specified|[]|
|`-n`, `--num-produce`|The number of times to send the message(s); the count of messages/files * num-produce should be below 1000|1|
|`-r`, `--rate`|Rate (in messages per second) at which to produce; a value 0 means to produce messages as fast as possible|0.0|
|`-c`, `--chunking`|Split the message and publish in chunks if the message size is larger than the allowed max size|false|
|`-s`, `--separator`|Character to split messages string with.|","|
|`-k`, `--key`|Message key to add|key=value string, like k1=v1,k2=v2.|
|`-p`, `--properties`|Properties to add. If you want to add multiple properties, use the comma as the separator, e.g. `k1=v1,k2=v2`.| |
|`-ekn`, `--encryption-key-name`|The public key name to encrypt payload.| |
|`-ekv`, `--encryption-key-value`|The URI of public key to encrypt payload. For example, `file:///path/to/public.key` or `data:application/x-pem-file;base64,*****`.| |
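Example

For example, to send five copies of a short test message to a topic (the topic name is illustrative), using only the flags documented above:

```bash

$ pulsar-client produce my-topic \
  --messages "hello-pulsar" \
  --num-produce 5

```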
### `consume`
Consume messages from a specific broker and topic

Usage

```bash

$ pulsar-client consume topic options

```

Options

|Flag|Description|Default|
|---|---|---|
|`--hex`|Display binary messages in hexadecimal format.|false|
|`-n`, `--num-messages`|Number of messages to consume, 0 means to consume forever.|1|
|`-r`, `--rate`|Rate (in messages per second) at which to consume; a value 0 means to consume messages as fast as possible|0.0|
|`--regex`|Indicate the topic name is a regex pattern|false|
|`-s`, `--subscription-name`|Subscription name||
|`-t`, `--subscription-type`|The type of the subscription. Possible values: Exclusive, Shared, Failover, Key_Shared.|Exclusive|
|`-p`, `--subscription-position`|The position of the subscription. Possible values: Latest, Earliest.|Latest|
|`-m`, `--subscription-mode`|Subscription mode. Possible values: Durable, NonDurable.|Durable|
|`-q`, `--queue-size`|The size of consumer's receiver queue.|0|
|`-mc`, `--max_chunked_msg`|Max pending chunk messages.|0|
|`-ac`, `--auto_ack_chunk_q_full`|Auto ack for the oldest message in consumer's receiver queue if the queue is full.|false|
|`--hide-content`|Do not print the message to the console.|false|
|`-st`, `--schema-type`|Set the schema type. Use `auto_consume` to dump AVRO and other structured data types. Possible values: bytes, auto_consume.|bytes|
|`-ekv`, `--encryption-key-value`|The URI of the private key to decrypt payload. For example, `file:///path/to/private.key` or `data:application/x-pem-file;base64,*****`.| |
|`-pm`, `--pool-messages`|Use the pooled message.|true|
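Example

To consume from the same topic until interrupted (the subscription name is illustrative; `--num-messages 0` means consume forever, per the table above):

```bash

$ pulsar-client consume my-topic \
  --subscription-name my-sub \
  --num-messages 0

```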
## `pulsar-daemon`
A wrapper around the `pulsar` tool that’s used to start and stop processes, such as ZooKeeper, bookies, and Pulsar brokers, in the background using `nohup`.

`pulsar-daemon` has a similar interface to the `pulsar` command but adds `start` and `stop` commands for various services. For a listing of those services, run `pulsar-daemon` to see the help output or see the documentation for the `pulsar` command.

Usage

```bash

$ pulsar-daemon command

```

Commands
* `start`
* `stop`


### `start`
Start a service in the background using nohup.

Usage

```bash

$ pulsar-daemon start service

```

### `stop`
Stop a service that’s already been started using `start`.

Usage

```bash

$ pulsar-daemon stop service options

```

Options

|Flag|Description|Default|
|---|---|---|
|`-force`|Stop the service forcefully if not stopped by normal shutdown.|false|



## `pulsar-perf`
A tool for performance testing a Pulsar broker.

Usage

```bash

$ pulsar-perf command

```

Commands
* `consume`
* `produce`
* `read`
* `websocket-producer`
* `managed-ledger`
* `monitor-brokers`
* `simulation-client`
* `simulation-controller`
* `help`

Environment variables

The table below lists the environment variables that you can use to configure the pulsar-perf tool.

|Variable|Description|Default|
|---|---|---|
|`PULSAR_LOG_CONF`|Log4j configuration file|conf/log4j2.yaml|
|`PULSAR_CLIENT_CONF`|Configuration file for the client|conf/client.conf|
|`PULSAR_EXTRA_OPTS`|Extra options to be passed to the JVM||
|`PULSAR_EXTRA_CLASSPATH`|Extra paths for Pulsar's classpath||


### `consume`
Run a consumer

Usage

```bash

$ pulsar-perf consume options

```

Options

|Flag|Description|Default|
|---|---|---|
|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.||
|`--auth_plugin`|Authentication plugin class name||
|`-ac`, `--auto_ack_chunk_q_full`|Auto ack for the oldest message in consumer's receiver queue if the queue is full|false|
|`--listener-name`|Listener name for the broker||
|`--acks-delay-millis`|Acknowledgements grouping delay in millis|100|
|`--batch-index-ack`|Enable or disable the batch index acknowledgment|false|
|`-bw`, `--busy-wait`|Enable or disable Busy-Wait on the Pulsar client|false|
|`-v`, `--encryption-key-value-file`|The file which contains the private key to decrypt payload||
|`-h`, `--help`|Help message|false|
|`--conf-file`|Configuration file||
|`-e`, `--expire_time_incomplete_chunked_messages`|The expiration time for incomplete chunk messages (in milliseconds)|0|
|`-c`, `--max-connections`|Max number of TCP connections to a single broker|100|
|`-mc`, `--max_chunked_msg`|Max pending chunk messages|0|
|`-n`, `--num-consumers`|Number of consumers (per topic)|1|
|`-ioThreads`, `--num-io-threads`|Set the number of threads to be used for handling connections to brokers|1|
|`-ns`, `--num-subscriptions`|Number of subscriptions (per topic)|1|
|`-t`, `--num-topics`|The number of topics|1|
|`-pm`, `--pool-messages`|Use the pooled message|true|
|`-r`, `--rate`|Simulate a slow message consumer (rate in msg/s)|0|
|`-q`, `--receiver-queue-size`|Size of the receiver queue|1000|
|`-p`, `--receiver-queue-size-across-partitions`|Max total size of the receiver queue across partitions|50000|
|`--replicated`|Whether the subscription status should be replicated|false|
|`-u`, `--service-url`|Pulsar service URL||
|`-i`, `--stats-interval-seconds`|Statistics interval seconds. If 0, statistics will be disabled|0|
|`-s`, `--subscriber-name`|Subscriber name prefix|sub|
|`-ss`, `--subscriptions`|A list of subscriptions to consume on (e.g. sub1,sub2)|sub|
|`-st`, `--subscription-type`|Subscriber type. Possible values are Exclusive, Shared, Failover, Key_Shared.|Exclusive|
|`-sp`, `--subscription-position`|Subscriber position. Possible values are Latest, Earliest.|Latest|
|`-time`, `--test-duration`|Test duration (in seconds). If the value is 0 or smaller than 0, it keeps consuming messages|0|
|`--trust-cert-file`|Path for the trusted TLS certificate file||
|`--tls-allow-insecure`|Allow insecure TLS connection||
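Example

A minimal run, with the topic and service URL chosen for illustration (both flags appear in the table above):

```bash

$ pulsar-perf consume my-topic \
  --service-url pulsar://localhost:6650 \
  --num-consumers 1

```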
### `produce`
Run a producer

Usage

```bash

$ pulsar-perf produce options

```

Options

|Flag|Description|Default|
|---|---|---|
|`-am`, `--access-mode`|Producer access mode. Valid values are `Shared`, `Exclusive` and `WaitForExclusive`|Shared|
|`-au`, `--admin-url`|Pulsar admin URL||
|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.||
|`--auth_plugin`|Authentication plugin class name||
|`--listener-name`|Listener name for the broker||
|`-b`, `--batch-time-window`|Batch messages in a window of the specified number of milliseconds|1|
|`-bb`, `--batch-max-bytes`|Maximum number of bytes per batch|4194304|
|`-bm`, `--batch-max-messages`|Maximum number of messages per batch|1000|
|`-bw`, `--busy-wait`|Enable or disable Busy-Wait on the Pulsar client|false|
|`-ch`, `--chunking`|Split the message and publish in chunks if the message size is larger than allowed max size|false|
|`-d`, `--delay`|Mark messages with a given delay in seconds|0s|
|`-z`, `--compression`|Compress messages’ payload. Possible values are NONE, LZ4, ZLIB, ZSTD or SNAPPY.||
|`--conf-file`|Configuration file||
|`-k`, `--encryption-key-name`|The public key name to encrypt payload||
|`-v`, `--encryption-key-value-file`|The file which contains the public key to encrypt payload||
|`-ef`, `--exit-on-failure`|Exit from the process on publish failure|false|
|`-fc`, `--format-class`|Custom Formatter class name|org.apache.pulsar.testclient.DefaultMessageFormatter|
|`-fp`, `--format-payload`|Format %i as a message index in the stream from producer and/or %t as the timestamp nanoseconds|false|
|`-h`, `--help`|Help message|false|
|`-c`, `--max-connections`|Max number of TCP connections to a single broker|100|
|`-o`, `--max-outstanding`|Max number of outstanding messages|1000|
|`-p`, `--max-outstanding-across-partitions`|Max number of outstanding messages across partitions|50000|
|`-mk`, `--message-key-generation-mode`|The generation mode of message key. Valid options are `autoIncrement`, `random`||
|`-ioThreads`, `--num-io-threads`|Set the number of threads to be used for handling connections to brokers|1|
|`-m`, `--num-messages`|Number of messages to publish in total. If the value is 0 or smaller than 0, it keeps publishing messages.|0|
|`-n`, `--num-producers`|The number of producers (per topic)|1|
|`-threads`, `--num-test-threads`|Number of test threads|1|
|`-t`, `--num-topic`|The number of topics|1|
|`-np`, `--partitions`|Create partitioned topics with the given number of partitions. 
Setting this value to 0 means not trying to create a topic|| -|`-f`, `--payload-file`|Use payload from an UTF-8 encoded text file and a payload will be randomly selected when publishing messages|| -|`-e`, `--payload-delimiter`|The delimiter used to split lines when using payload from a file|\n| -|`-pn`, `--producer-name`|Producer Name|| -|`-r`, `--rate`|Publish rate msg/s across topics|100| -|`--send-timeout`|Set the sendTimeout|0| -|`--separator`|Separator between the topic and topic number|-| -|`-u`, `--service-url`|Pulsar service URL|| -|`-s`, `--size`|Message size (in bytes)|1024| -|`-i`, `--stats-interval-seconds`|Statistics interval seconds. If 0, statistics will be disabled.|0| -|`-time`, `--test-duration`|Test duration (in seconds). If the value is 0 or smaller than 0, it keeps publishing messages.|0| -|`--trust-cert-file`|Path for the trusted TLS certificate file|| -|`--warmup-time`|Warm-up time in seconds|1| -|`--tls-allow-insecure`|Allow insecure TLS connection|| - - -### `read` -Run a topic reader - -Usage - -```bash - -$ pulsar-perf read options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.|| -|`--auth_plugin`|Authentication plugin class name|| -|`--listener-name`|Listener name for the broker|| -|`--conf-file`|Configuration file|| -|`-h`, `--help`|Help message|false| -|`-c`, `--max-connections`|Max number of TCP connections to a single broker|100| -|`-ioThreads`, `--num-io-threads`|Set the number of threads to be used for handling connections to brokers|1| -|`-t`, `--num-topics`|The number of topics|1| -|`-r`, `--rate`|Simulate a slow message reader (rate in msg/s)|0| -|`-q`, `--receiver-queue-size`|Size of the receiver queue|1000| -|`-u`, `--service-url`|Pulsar service URL|| -|`-m`, `--start-message-id`|Start message id. This can be either 'earliest', 'latest' or a specific message id by using 'lid:eid'|earliest| -|`-i`, `--stats-interval-seconds`|Statistics interval seconds. If 0, statistics will be disabled.|0| -|`-time`, `--test-duration`|Test duration (in seconds). If the value is 0 or smaller than 0, it keeps consuming messages.|0| -|`--trust-cert-file`|Path for the trusted TLS certificate file|| -|`--use-tls`|Use TLS encryption on the connection|false| -|`--tls-allow-insecure`|Allow insecure TLS connection|| - -### `websocket-producer` -Run a websocket producer - -Usage - -```bash - -$ pulsar-perf websocket-producer options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.|| -|`--auth_plugin`|Authentication plugin class name|| -|`--conf-file`|Configuration file|| -|`-h`, `--help`|Help message|false| -|`-m`, `--num-messages`|Number of messages to publish in total. If the value is 0 or smaller than 0, it keeps publishing messages|0| -|`-t`, `--num-topic`|The number of topics|1| -|`-f`, `--payload-file`|Use payload from a file instead of empty buffer|| -|`-u`, `--proxy-url`|Pulsar Proxy URL, e.g., "ws://localhost:8080/"|| -|`-r`, `--rate`|Publish rate msg/s across topics|100| -|`-s`, `--size`|Message size in byte|1024| -|`-time`, `--test-duration`|Test duration (in seconds). 
If the value is 0 or smaller than 0, it keeps publishing messages|0|


### `managed-ledger`
Write directly on managed-ledgers

Usage

```bash

$ pulsar-perf managed-ledger options

```

Options

|Flag|Description|Default|
|---|---|---|
|`-a`, `--ack-quorum`|Ledger ack quorum|1|
|`-dt`, `--digest-type`|BookKeeper digest type. Possible Values: [CRC32, MAC, CRC32C, DUMMY]|CRC32C|
|`-e`, `--ensemble-size`|Ledger ensemble size|1|
|`-h`, `--help`|Help message|false|
|`-c`, `--max-connections`|Max number of TCP connections to a single bookie|1|
|`-o`, `--max-outstanding`|Max number of outstanding requests|1000|
|`-m`, `--num-messages`|Number of messages to publish in total. If the value is 0 or smaller than 0, it keeps publishing messages|0|
|`-t`, `--num-topic`|Number of managed ledgers|1|
|`-r`, `--rate`|Write rate msg/s across managed ledgers|100|
|`-s`, `--size`|Message size in bytes|1024|
|`-time`, `--test-duration`|Test duration (in seconds). If the value is 0 or smaller than 0, it keeps publishing messages|0|
|`--threads`|Number of threads writing|1|
|`-w`, `--write-quorum`|Ledger write quorum|1|
|`-zk`, `--zookeeperServers`|ZooKeeper connection string||
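Example

A minimal sketch against a local ZooKeeper (the address and message count are illustrative; both flags are documented above):

```bash

$ pulsar-perf managed-ledger \
  --zookeeperServers localhost:2181 \
  --num-messages 1000

```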
### `monitor-brokers`
Continuously receive broker data and/or load reports

Usage

```bash

$ pulsar-perf monitor-brokers options

```

Options

|Flag|Description|Default|
|---|---|---|
|`--connect-string`|A connection string for one or more ZooKeeper servers||
|`-h`, `--help`|Help message|false|


### `simulation-client`
Run a simulation server acting as a Pulsar client. Uses the client configuration specified in `conf/client.conf`.

Usage

```bash

$ pulsar-perf simulation-client options

```

Options

|Flag|Description|Default|
|---|---|---|
|`--port`|Port to listen on for controller|0|
|`--service-url`|Pulsar Service URL||
|`-h`, `--help`|Help message|false|

### `simulation-controller`
Run a simulation controller to give commands to servers

Usage

```bash

$ pulsar-perf simulation-controller options

```

Options

|Flag|Description|Default|
|---|---|---|
|`--client-port`|The port that the clients are listening on|0|
|`--clients`|Comma-separated list of client hostnames||
|`--cluster`|The cluster to test on||
|`-h`, `--help`|Help message|false|


### `help`
This help message

Usage

```bash

$ pulsar-perf help

```

## `bookkeeper`
A tool for managing BookKeeper.

Usage

```bash

$ bookkeeper command

```

Commands
* `autorecovery`
* `bookie`
* `localbookie`
* `upgrade`
* `shell`


Environment variables

The table below lists the environment variables that you can use to configure the bookkeeper tool.

|Variable|Description|Default|
|---|---|---|
|BOOKIE_LOG_CONF|Log4j configuration file|conf/log4j2.yaml|
|BOOKIE_CONF|BookKeeper configuration file|conf/bk_server.conf|
|BOOKIE_EXTRA_OPTS|Extra options to be passed to the JVM||
|BOOKIE_EXTRA_CLASSPATH|Extra paths for BookKeeper's classpath||
|ENTRY_FORMATTER_CLASS|The Java class used to format entries||
|BOOKIE_PID_DIR|Folder where the BookKeeper server PID file should be stored||
|BOOKIE_STOP_TIMEOUT|Wait time before forcefully killing the Bookie server instance if attempts to stop it are not successful||


### `autorecovery`
Runs an auto-recovery service

Usage

```bash

$ bookkeeper autorecovery options

```

Options

|Flag|Description|Default|
|---|---|---|
|`-c`, `--conf`|Configuration file for the auto-recovery service||


### `bookie`
Starts up a BookKeeper server (aka bookie)

Usage

```bash

$ bookkeeper bookie options

```

Options

|Flag|Description|Default|
|---|---|---|
|`-c`, `--conf`|Configuration file for the bookie||
|`-readOnly`|Force start a read-only bookie server|false|
|`-withAutoRecovery`|Start the auto-recovery service on the bookie server|false|


### `localbookie`
Runs a test ensemble of N bookies locally

Usage

```bash

$ bookkeeper localbookie N

```

### `upgrade`
Upgrade the bookie’s filesystem

Usage

```bash

$ bookkeeper upgrade options

```

Options

|Flag|Description|Default|
|---|---|---|
|`-c`, `--conf`|Configuration file for the upgrade||
|`-u`, `--upgrade`|Upgrade the bookie’s directories||


### `shell`
Run a shell for admin commands. To see a full listing of those commands, run `bookkeeper shell` without an argument.

Usage

```bash

$ bookkeeper shell

```

Example

```bash

$ bookkeeper shell bookiesanity

```

## `broker-tool`

The `broker-tool` is used for operations on a specific broker.

Usage

```bash

$ broker-tool command

```

Commands
* `load-report`
* `help`

Example

There are two ways to get more information about a command:

```bash

$ broker-tool help command
$ broker-tool command --help

```

### `load-report`

Collect the load report of a specific broker.
The command is run on a broker and is used to troubleshoot why the broker cannot collect the correct load report.

Options

|Flag|Description|Default|
|---|---|---|
|`-i`, `--interval`| Interval to collect load report, in milliseconds ||
|`-h`, `--help`| Display help information ||

diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/reference-configuration.md b/site2/website/versioned_docs/version-2.8.1-deprecated/reference-configuration.md
deleted file mode 100644
index 68124dd94f29eb..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/reference-configuration.md
+++ /dev/null
@@ -1,789 +0,0 @@
----
-id: reference-configuration
-title: Pulsar configuration
-sidebar_label: "Pulsar configuration"
-original_id: reference-configuration
----
-
-
-
-You can manage Pulsar configuration by configuration files in the [`conf`](https://github.com/apache/pulsar/tree/master/conf) directory of a Pulsar [installation](getting-started-standalone.md). 
- -- [BookKeeper](#bookkeeper) -- [Broker](#broker) -- [Client](#client) -- [Service discovery](#service-discovery) -- [Log4j](#log4j) -- [Log4j shell](#log4j-shell) -- [Standalone](#standalone) -- [WebSocket](#websocket) -- [Pulsar proxy](#pulsar-proxy) -- [ZooKeeper](#zookeeper) - -## BookKeeper - -BookKeeper is a replicated log storage system that Pulsar uses for durable storage of all messages. - - -|Name|Description|Default| -|---|---|---| -|bookiePort|The port on which the bookie server listens.|3181| -|allowLoopback|Whether the bookie is allowed to use a loopback interface as its primary interface (that is the interface used to establish its identity). By default, loopback interfaces are not allowed to work as the primary interface. Using a loopback interface as the primary interface usually indicates a configuration error. For example, it’s fairly common in some VPS setups to not configure a hostname or to have the hostname resolve to `127.0.0.1`. If this is the case, then all bookies in the cluster will establish their identities as `127.0.0.1:3181` and only one will be able to join the cluster. For VPSs configured like this, you should explicitly set the listening interface.|false| -|listeningInterface|The network interface on which the bookie listens. By default, the bookie listens on all interfaces.|eth0| -|advertisedAddress|Configure a specific hostname or IP address that the bookie should use to advertise itself to clients. By default, the bookie advertises either its own IP address or hostname according to the `listeningInterface` and `useHostNameAsBookieID` settings.|N/A| -|allowMultipleDirsUnderSameDiskPartition|Configure the bookie to enable/disable multiple ledger/index/journal directories in the same filesystem disk partition.|false| -|minUsableSizeForIndexFileCreation|The minimum safe usable size available in index directory for bookie to create index files while replaying journal at the time of bookie starts in Readonly Mode (in bytes).|1073741824| -|journalDirectory|The directory where BookKeeper outputs its write-ahead log (WAL).|data/bookkeeper/journal| -|journalDirectories|Directories that BookKeeper outputs its write ahead log. Multiple directories are available, being separated by `,`. For example: `journalDirectories=/tmp/bk-journal1,/tmp/bk-journal2`. If `journalDirectories` is set, the bookies skip `journalDirectory` and use this setting directory.|/tmp/bk-journal| -|ledgerDirectories|The directory where BookKeeper outputs ledger snapshots. This could define multiple directories to store snapshots separated by `,`, for example `ledgerDirectories=/tmp/bk1-data,/tmp/bk2-data`. Ideally, ledger dirs and the journal dir are each in a different device, which reduces the contention between random I/O and sequential write. It is possible to run with a single disk, but performance will be significantly lower.|data/bookkeeper/ledgers| -|ledgerManagerType|The type of ledger manager used to manage how ledgers are stored, managed, and garbage collected. See [BookKeeper Internals](http://bookkeeper.apache.org/docs/latest/getting-started/concepts) for more info.|hierarchical| -|zkLedgersRootPath|The root ZooKeeper path used to store ledger metadata. 
This parameter is used by the ZooKeeper-based ledger manager as a root znode to store all ledgers.|/ledgers| -|ledgerStorageClass|Ledger storage implementation class|org.apache.bookkeeper.bookie.storage.ldb.DbLedgerStorage| -|entryLogFilePreallocationEnabled|Enable or disable entry logger preallocation|true| -|logSizeLimit|Max file size of the entry logger, in bytes. A new entry log file will be created when the old one reaches the file size limitation.|1073741824| -|minorCompactionThreshold|Threshold of minor compaction. Entry log files whose remaining size percentage reaches below this threshold will be compacted in a minor compaction. If set to less than zero, the minor compaction is disabled.|0.2| -|minorCompactionInterval|Time interval to run minor compaction, in seconds. If set to less than zero, the minor compaction is disabled. Note: should be greater than gcWaitTime. |3600| -|majorCompactionThreshold|The threshold of major compaction. Entry log files whose remaining size percentage reaches below this threshold will be compacted in a major compaction. Those entry log files whose remaining size percentage is still higher than the threshold will never be compacted. If set to less than zero, the minor compaction is disabled.|0.5| -|majorCompactionInterval|The time interval to run major compaction, in seconds. If set to less than zero, the major compaction is disabled. Note: should be greater than gcWaitTime. |86400| -|readOnlyModeEnabled|If `readOnlyModeEnabled=true`, then on all full ledger disks, bookie will be converted to read-only mode and serve only read requests. Otherwise the bookie will be shutdown.|true| -|forceReadOnlyBookie|Whether the bookie is force started in read only mode.|false| -|persistBookieStatusEnabled|Persist the bookie status locally on the disks. So the bookies can keep their status upon restarts.|false| -|compactionMaxOutstandingRequests|Sets the maximum number of entries that can be compacted without flushing. When compacting, the entries are written to the entrylog and the new offsets are cached in memory. Once the entrylog is flushed the index is updated with the new offsets. This parameter controls the number of entries added to the entrylog before a flush is forced. A higher value for this parameter means more memory will be used for offsets. Each offset consists of 3 longs. This parameter should not be modified unless you’re fully aware of the consequences.|100000| -|compactionRate|The rate at which compaction will read entries, in adds per second.|1000| -|isThrottleByBytes|Throttle compaction by bytes or by entries.|false| -|compactionRateByEntries|The rate at which compaction will read entries, in adds per second.|1000| -|compactionRateByBytes|Set the rate at which compaction reads entries. The unit is bytes added per second.|1000000| -|journalMaxSizeMB|Max file size of journal file, in megabytes. A new journal file will be created when the old one reaches the file size limitation.|2048| -|journalMaxBackups|The max number of old journal files to keep. 
Keeping a number of old journal files would help data recovery in special cases.|5| -|journalPreAllocSizeMB|How space to pre-allocate at a time in the journal.|16| -|journalWriteBufferSizeKB|The of the write buffers used for the journal.|64| -|journalRemoveFromPageCache|Whether pages should be removed from the page cache after force write.|true| -|journalAdaptiveGroupWrites|Whether to group journal force writes, which optimizes group commit for higher throughput.|true| -|journalMaxGroupWaitMSec|The maximum latency to impose on a journal write to achieve grouping.|1| -|journalAlignmentSize|All the journal writes and commits should be aligned to given size|4096| -|journalBufferedWritesThreshold|Maximum writes to buffer to achieve grouping|524288| -|journalFlushWhenQueueEmpty|If we should flush the journal when journal queue is empty|false| -|numJournalCallbackThreads|The number of threads that should handle journal callbacks|8| -|openLedgerRereplicationGracePeriod | The grace period, in milliseconds, that the replication worker waits before fencing and replicating a ledger fragment that's still being written to upon bookie failure. | 30000 | -|rereplicationEntryBatchSize|The number of max entries to keep in fragment for re-replication|100| -|autoRecoveryDaemonEnabled|Whether the bookie itself can start auto-recovery service.|true| -|lostBookieRecoveryDelay|How long to wait, in seconds, before starting auto recovery of a lost bookie.|0| -|gcWaitTime|How long the interval to trigger next garbage collection, in milliseconds. Since garbage collection is running in background, too frequent gc will heart performance. It is better to give a higher number of gc interval if there is enough disk capacity.|900000| -|gcOverreplicatedLedgerWaitTime|How long the interval to trigger next garbage collection of overreplicated ledgers, in milliseconds. This should not be run very frequently since we read the metadata for all the ledgers on the bookie from zk.|86400000| -|flushInterval|How long the interval to flush ledger index pages to disk, in milliseconds. Flushing index files will introduce much random disk I/O. If separating journal dir and ledger dirs each on different devices, flushing would not affect performance. But if putting journal dir and ledger dirs on same device, performance degrade significantly on too frequent flushing. You can consider increment flush interval to get better performance, but you need to pay more time on bookie server restart after failure.|60000| -|bookieDeathWatchInterval|Interval to watch whether bookie is dead or not, in milliseconds|1000| -|allowStorageExpansion|Allow the bookie storage to expand. Newly added ledger and index dirs must be empty.|false| -|zkServers|A list of one of more servers on which zookeeper is running. The server list can be comma separated values, for example: zkServers=zk1:2181,zk2:2181,zk3:2181.|localhost:2181| -|zkTimeout|ZooKeeper client session timeout in milliseconds Bookie server will exit if it received SESSION_EXPIRED because it was partitioned off from ZooKeeper for more than the session timeout JVM garbage collection, disk I/O will cause SESSION_EXPIRED. 
Increment this value could help avoiding this issue|30000| -|zkRetryBackoffStartMs|The start time that the Zookeeper client backoff retries in milliseconds.|1000| -|zkRetryBackoffMaxMs|The maximum time that the Zookeeper client backoff retries in milliseconds.|10000| -|zkEnableSecurity|Set ACLs on every node written on ZooKeeper, allowing users to read and write BookKeeper metadata stored on ZooKeeper. In order to make ACLs work you need to setup ZooKeeper JAAS authentication. All the bookies and Client need to share the same user, and this is usually done using Kerberos authentication. See ZooKeeper documentation.|false| -|httpServerEnabled|The flag enables/disables starting the admin http server.|false| -|httpServerPort|The HTTP server port to listen on. By default, the value is `8080`. If you want to keep it consistent with the Prometheus stats provider, you can set it to `8000`.|8080 -|httpServerClass|The http server class.|org.apache.bookkeeper.http.vertx.VertxHttpServer| -|serverTcpNoDelay|This settings is used to enabled/disabled Nagle’s algorithm, which is a means of improving the efficiency of TCP/IP networks by reducing the number of packets that need to be sent over the network. If you are sending many small messages, such that more than one can fit in a single IP packet, setting server.tcpnodelay to false to enable Nagle algorithm can provide better performance.|true| -|serverSockKeepalive|This setting is used to send keep-alive messages on connection-oriented sockets.|true| -|serverTcpLinger|The socket linger timeout on close. When enabled, a close or shutdown will not return until all queued messages for the socket have been successfully sent or the linger timeout has been reached. Otherwise, the call returns immediately and the closing is done in the background.|0| -|byteBufAllocatorSizeMax|The maximum buf size of the received ByteBuf allocator.|1048576| -|nettyMaxFrameSizeBytes|The maximum netty frame size in bytes. Any message received larger than this will be rejected.|5253120| -|openFileLimit|Max number of ledger index files could be opened in bookie server If number of ledger index files reaches this limitation, bookie server started to swap some ledgers from memory to disk. Too frequent swap will affect performance. You can tune this number to gain performance according your requirements.|0| -|pageSize|Size of a index page in ledger cache, in bytes A larger index page can improve performance writing page to disk, which is efficient when you have small number of ledgers and these ledgers have similar number of entries. If you have large number of ledgers and each ledger has fewer entries, smaller index page would improve memory usage.|8192| -|pageLimit|How many index pages provided in ledger cache If number of index pages reaches this limitation, bookie server starts to swap some ledgers from memory to disk. You can increment this value when you found swap became more frequent. But make sure pageLimit*pageSize should not more than JVM max memory limitation, otherwise you would got OutOfMemoryException. In general, incrementing pageLimit, using smaller index page would gain better performance in lager number of ledgers with fewer entries case If pageLimit is -1, bookie server will use 1/3 of JVM memory to compute the limitation of number of index pages.|0| -|readOnlyModeEnabled|If all ledger directories configured are full, then support only read requests for clients. 
If "readOnlyModeEnabled=true" then on all ledger disks full, bookie will be converted to read-only mode and serve only read requests. Otherwise the bookie will be shutdown. By default this will be disabled.|true| -|diskUsageThreshold|For each ledger dir, maximum disk space which can be used. Default is 0.95f. i.e. 95% of disk can be used at most after which nothing will be written to that partition. If all ledger dir partitions are full, then bookie will turn to readonly mode if ‘readOnlyModeEnabled=true’ is set, else it will shutdown. Valid values should be in between 0 and 1 (exclusive).|0.95| -|diskCheckInterval|Disk check interval in milli seconds, interval to check the ledger dirs usage.|10000| -|auditorPeriodicCheckInterval|Interval at which the auditor will do a check of all ledgers in the cluster. By default this runs once a week. The interval is set in seconds. To disable the periodic check completely, set this to 0. Note that periodic checking will put extra load on the cluster, so it should not be run more frequently than once a day.|604800| -|sortedLedgerStorageEnabled|Whether sorted-ledger storage is enabled.|true| -|auditorPeriodicBookieCheckInterval|The interval between auditor bookie checks. The auditor bookie check, checks ledger metadata to see which bookies should contain entries for each ledger. If a bookie which should contain entries is unavailable, thea the ledger containing that entry is marked for recovery. Setting this to 0 disabled the periodic check. Bookie checks will still run when a bookie fails. The interval is specified in seconds.|86400| -|numAddWorkerThreads|The number of threads that should handle write requests. if zero, the writes would be handled by netty threads directly.|0| -|numReadWorkerThreads|The number of threads that should handle read requests. if zero, the reads would be handled by netty threads directly.|8| -|numHighPriorityWorkerThreads|The umber of threads that should be used for high priority requests (i.e. recovery reads and adds, and fencing).|8| -|maxPendingReadRequestsPerThread|If read workers threads are enabled, limit the number of pending requests, to avoid the executor queue to grow indefinitely.|2500| -|maxPendingAddRequestsPerThread|The limited number of pending requests, which is used to avoid the executor queue to grow indefinitely when add workers threads are enabled.|10000| -|isForceGCAllowWhenNoSpace|Whether force compaction is allowed when the disk is full or almost full. Forcing GC could get some space back, but could also fill up the disk space more quickly. This is because new log files are created before GC, while old garbage log files are deleted after GC.|false| -|verifyMetadataOnGC|True if the bookie should double check `readMetadata` prior to GC.|false| -|flushEntrylogBytes|Entry log flush interval in bytes. Flushing in smaller chunks but more frequently reduces spikes in disk I/O. Flushing too frequently may also affect performance negatively.|268435456| -|readBufferSizeBytes|The number of bytes we should use as capacity for BufferedReadChannel.|4096| -|writeBufferSizeBytes|The number of bytes used as capacity for the write buffer|65536| -|useHostNameAsBookieID|Whether the bookie should use its hostname to register with the coordination service (e.g.: zookeeper service). When false, bookie will use its ip address for the registration.|false| -|bookieId | If you want to custom a bookie ID or use a dynamic network address for the bookie, you can set the `bookieId`.

    Bookie advertises itself using the `bookieId` rather than the `BookieSocketAddress` (`hostname:port` or `IP:port`). If you set the `bookieId`, then the `useHostNameAsBookieID` does not take effect.

    The `bookieId` is a non-empty string that can contain ASCII digits and letters ([a-zA-Z0-9]), colons, dashes, and dots.

    For more information about `bookieId`, see [here](http://bookkeeper.apache.org/bps/BP-41-bookieid/).|N/A| -|allowEphemeralPorts|Whether the bookie is allowed to use an ephemeral port (port 0) as its server port. By default, an ephemeral port is not allowed. Using an ephemeral port as the service port usually indicates a configuration error. However, in unit tests, using an ephemeral port will address port conflict problems and allow running tests in parallel.|false| -|enableLocalTransport|Whether the bookie is allowed to listen for the BookKeeper clients executed on the local JVM.|false| -|disableServerSocketBind|Whether the bookie is allowed to disable bind on network interfaces. This bookie will be available only to BookKeeper clients executed on the local JVM.|false| -|skipListArenaChunkSize|The number of bytes that we should use as chunk allocation for `org.apache.bookkeeper.bookie.SkipListArena`.|4194304| -|skipListArenaMaxAllocSize|The maximum size that we should allocate from the skiplist arena. Allocations larger than this should be allocated directly by the VM to avoid fragmentation.|131072| -|bookieAuthProviderFactoryClass|The factory class name of the bookie authentication provider. If this is null, then there is no authentication.|null| -|statsProviderClass||org.apache.bookkeeper.stats.prometheus.PrometheusMetricsProvider| -|prometheusStatsHttpPort||8000| -|dbStorage_writeCacheMaxSizeMb|Size of Write Cache. Memory is allocated from JVM direct memory. Write cache is used to buffer entries before flushing into the entry log. For good performance, it should be big enough to hold a substantial amount of entries in the flush interval.|25% of direct memory| -|dbStorage_readAheadCacheMaxSizeMb|Size of Read cache. Memory is allocated from JVM direct memory. This read cache is pre-filled doing read-ahead whenever a cache miss happens. By default, it is allocated to 25% of the available direct memory.|N/A| -|dbStorage_readAheadCacheBatchSize|How many entries to pre-fill in cache after a read cache miss|1000| -|dbStorage_rocksDB_blockCacheSize|Size of RocksDB block-cache. For best performance, this cache should be big enough to hold a significant portion of the index database which can reach ~2GB in some cases. By default, it uses 10% of direct memory.|N/A| -|dbStorage_rocksDB_writeBufferSizeMB||64| -|dbStorage_rocksDB_sstSizeInMB||64| -|dbStorage_rocksDB_blockSize||65536| -|dbStorage_rocksDB_bloomFilterBitsPerKey||10| -|dbStorage_rocksDB_numLevels||-1| -|dbStorage_rocksDB_numFilesInLevel0||4| -|dbStorage_rocksDB_maxSizeInLevel1MB||256| - -## Broker - -Pulsar brokers are responsible for handling incoming messages from producers, dispatching messages to consumers, replicating data between clusters, and more. - -|Name|Description|Default| -|---|---|---| -|advertisedListeners|Specify multiple advertised listeners for the broker.

    The format is `<listener_name>:pulsar://<host>:<port>`.

    If there are multiple listeners, separate them with commas.

    **Note**: do not use this configuration with `advertisedAddress` and `brokerServicePort`. If the value of this configuration is empty, the broker uses `advertisedAddress` and `brokerServicePort`|/| -|internalListenerName|Specify the internal listener name for the broker.

    **Note**: the listener name must be contained in `advertisedListeners`.

    If the value of this configuration is empty, the broker uses the first listener as the internal listener.|/| -|authenticateOriginalAuthData| If this flag is set to `true`, the broker authenticates the original Auth data; else it just accepts the originalPrincipal and authorizes it (if required). |false| -|enablePersistentTopics| Whether persistent topics are enabled on the broker |true| -|enableNonPersistentTopics| Whether non-persistent topics are enabled on the broker |true| -|functionsWorkerEnabled| Whether the Pulsar Functions worker service is enabled in the broker |false| -|exposePublisherStats|Whether to enable topic level metrics.|true| -|statsUpdateFrequencyInSecs||60| -|statsUpdateInitialDelayInSecs||60| -|zookeeperServers| Zookeeper quorum connection string || -|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300| -|configurationStoreServers| Configuration store connection string (as a comma-separated list) || -|brokerServicePort| Broker data port |6650| -|brokerServicePortTls| Broker data port for TLS |6651| -|webServicePort| Port to use to server HTTP request |8080| -|webServicePortTls| Port to use to server HTTPS request |8443| -|webSocketServiceEnabled| Enable the WebSocket API service in broker |false| -|webSocketNumIoThreads|The number of IO threads in Pulsar Client used in WebSocket proxy.|8| -|webSocketConnectionsPerBroker|The number of connections per Broker in Pulsar Client used in WebSocket proxy.|8| -|webSocketSessionIdleTimeoutMillis|Time in milliseconds that idle WebSocket session times out.|300000| -|webSocketMaxTextFrameSize|The maximum size of a text message during parsing in WebSocket proxy.|1048576| -|exposeTopicLevelMetricsInPrometheus|Whether to enable topic level metrics.|true| -|exposeConsumerLevelMetricsInPrometheus|Whether to enable consumer level metrics.|false| -|jvmGCMetricsLoggerClassName|Classname of Pluggable JVM GC metrics logger that can log GC specific metrics.|N/A| -|bindAddress| Hostname or IP address the service binds on, default is 0.0.0.0. |0.0.0.0| -|advertisedAddress| Hostname or IP address the service advertises to the outside world. If not set, the value of `InetAddress.getLocalHost().getHostName()` is used. || -|clusterName| Name of the cluster to which this broker belongs to || -|maxTenants|The maximum number of tenants that can be created in each Pulsar cluster. When the number of tenants reaches the threshold, the broker rejects the request of creating a new tenant. The default value 0 disables the check. |0| -| maxNamespacesPerTenant | The maximum number of namespaces that can be created in each tenant. When the number of namespaces reaches this threshold, the broker rejects the request of creating a new tenant. The default value 0 disables the check. |0| -|brokerDeduplicationEnabled| Sets the default behavior for message deduplication in the broker. If enabled, the broker will reject messages that were already stored in the topic. This setting can be overridden on a per-namespace basis. |false| -|brokerDeduplicationMaxNumberOfProducers| The maximum number of producers for which information will be stored for deduplication purposes. |10000| -|brokerDeduplicationEntriesInterval| The number of entries after which a deduplication informational snapshot is taken. A larger interval will lead to fewer snapshots being taken, though this would also lengthen the topic recovery time (the time required for entries published after the snapshot to be replayed). 
|1000| -|brokerDeduplicationSnapshotIntervalSeconds| The time period after which a deduplication informational snapshot is taken. It runs simultaneously with `brokerDeduplicationEntriesInterval`. |120| -|brokerDeduplicationProducerInactivityTimeoutMinutes| The time of inactivity (in minutes) after which the broker will discard deduplication information related to a disconnected producer. |360| -|dispatchThrottlingRatePerReplicatorInMsg| The default messages per second dispatch throttling-limit for every replicator in replication. The value of `0` means disabling replication message dispatch-throttling| 0 | -|dispatchThrottlingRatePerReplicatorInByte| The default bytes per second dispatch throttling-limit for every replicator in replication. The value of `0` means disabling replication message-byte dispatch-throttling| 0 | -|zooKeeperSessionTimeoutMillis| Zookeeper session timeout in milliseconds |30000| -|brokerShutdownTimeoutMs| Time to wait for broker graceful shutdown. After this time elapses, the process will be killed |60000| -|skipBrokerShutdownOnOOM| Flag to skip broker shutdown when broker handles Out of memory error. |false| -|backlogQuotaCheckEnabled| Enable backlog quota check. Enforces action on topic when the quota is reached |true| -|backlogQuotaCheckIntervalInSeconds| How often to check for topics that have reached the quota |60| -|backlogQuotaDefaultLimitGB| The default per-topic backlog quota limit. Being less than 0 means no limitation. By default, it is -1. | -1 | -|backlogQuotaDefaultRetentionPolicy|The defaulted backlog quota retention policy. By Default, it is `producer_request_hold`.
  - 'producer_request_hold' Policy which holds producer's send request until the resource becomes available (or holding times out)
  - 'producer_exception' Policy which throws `javax.jms.ResourceAllocationException` to the producer
  - 'consumer_backlog_eviction' Policy which evicts the oldest message from the slowest consumer's backlog
  |producer_request_hold|
-|allowAutoTopicCreation| Enable topic auto creation if a new producer or consumer connected |true|
-|allowAutoTopicCreationType| The type of topic that is allowed to be automatically created.(partitioned/non-partitioned) |non-partitioned|
-|allowAutoSubscriptionCreation| Enable subscription auto creation if a new consumer connected |true|
-|defaultNumPartitions| The number of partitioned topics that is allowed to be automatically created if `allowAutoTopicCreationType` is partitioned |1|
-|brokerDeleteInactiveTopicsEnabled| Enable the deletion of inactive topics |true|
-|brokerDeleteInactiveTopicsFrequencySeconds| How often to check for inactive topics |60|
-| brokerDeleteInactiveTopicsMode | Set the mode to delete inactive topics.
  - `delete_when_no_subscriptions`: delete the topic which has no subscriptions or active producers.
  - `delete_when_subscriptions_caught_up`: delete the topic whose subscriptions have no backlogs and which has no active producers or consumers.
  | `delete_when_no_subscriptions` |
-| brokerDeleteInactiveTopicsMaxInactiveDurationSeconds | Set the maximum duration for inactive topics. If it is not specified, the `brokerDeleteInactiveTopicsFrequencySeconds` parameter is adopted. | N/A |
-|forceDeleteTenantAllowed| Enable you to delete a tenant forcefully. |false|
-|forceDeleteNamespaceAllowed| Enable you to delete a namespace forcefully. |false|
-|messageExpiryCheckIntervalInMinutes| The frequency of proactively checking and purging expired messages. |5|
-|brokerServiceCompactionMonitorIntervalInSeconds| Interval between checks to determine whether topics with compaction policies need compaction. |60|
-|brokerServiceCompactionThresholdInBytes|If the estimated backlog size is greater than this threshold, compression is triggered.

    Set this threshold to 0 means disabling the compression check.|N/A -|delayedDeliveryEnabled| Whether to enable the delayed delivery for messages. If disabled, messages will be immediately delivered and there will be no tracking overhead.|true| -|delayedDeliveryTickTimeMillis|Control the tick time for retrying on delayed delivery, which affects the accuracy of the delivery time compared to the scheduled time. By default, it is 1 second.|1000| -|activeConsumerFailoverDelayTimeMillis| How long to delay rewinding cursor and dispatching messages when active consumer is changed. |1000| -|clientLibraryVersionCheckEnabled| Enable check for minimum allowed client library version |false| -|clientLibraryVersionCheckAllowUnversioned| Allow client libraries with no version information |true| -|statusFilePath| Path for the file used to determine the rotation status for the broker when responding to service discovery health checks || -|preferLaterVersions| If true, (and ModularLoadManagerImpl is being used), the load manager will attempt to use only brokers running the latest software version (to minimize impact to bundles) |false| -|maxNumPartitionsPerPartitionedTopic|Max number of partitions per partitioned topic. Use 0 or negative number to disable the check|0| -| maxSubscriptionsPerTopic | Maximum number of subscriptions allowed to subscribe to a topic. Once this limit reaches, the broker rejects new subscriptions until the number of subscriptions decreases. When the value is set to 0, the limit check is disabled. | 0 | -| maxProducersPerTopic | Maximum number of producers allowed to connect to a topic. Once this limit reaches, the broker rejects new producers until the number of connected producers decreases. When the value is set to 0, the limit check is disabled. | 0 | -| maxConsumersPerTopic | Maximum number of consumers allowed to connect to a topic. Once this limit reaches, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 | -| maxConsumersPerSubscription | Maximum number of consumers allowed to connect to a subscription. Once this limit reaches, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 | -|tlsEnabled|Deprecated - Use `webServicePortTls` and `brokerServicePortTls` instead. |false| -|tlsCertificateFilePath| Path for the TLS certificate file || -|tlsKeyFilePath| Path for the TLS private key file || -|tlsTrustCertsFilePath| Path for the trusted TLS certificate file. This cert is used to verify that any certs presented by connecting clients are signed by a certificate authority. If this verification fails, then the certs are untrusted and the connections are dropped. || -|tlsAllowInsecureConnection| Accept untrusted TLS certificate from client. If it is set to `true`, a client with a cert which cannot be verified with the 'tlsTrustCertsFilePath' cert will be allowed to connect to the server, though the cert will not be used for client authentication. |false| -|tlsProtocols|Specify the tls protocols the broker will use to negotiate during TLS Handshake. Multiple values can be specified, separated by commas. Example:- ```TLSv1.3```, ```TLSv1.2``` || -|tlsCiphers|Specify the tls cipher the broker will use to negotiate during TLS Handshake. Multiple values can be specified, separated by commas. 
Example:- ```TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256```|| -|tlsEnabledWithKeyStore| Enable TLS with KeyStore type configuration in broker |false| -|tlsProvider| TLS Provider for KeyStore type || -|tlsKeyStoreType| LS KeyStore type configuration in broker: JKS, PKCS12 |JKS| -|tlsKeyStore| TLS KeyStore path in broker || -|tlsKeyStorePassword| TLS KeyStore password for broker || -|brokerClientTlsEnabledWithKeyStore| Whether internal client use KeyStore type to authenticate with Pulsar brokers |false| -|brokerClientSslProvider| The TLS Provider used by internal client to authenticate with other Pulsar brokers || -|brokerClientTlsTrustStoreType| TLS TrustStore type configuration for internal client: JKS, PKCS12, used by the internal client to authenticate with Pulsar brokers |JKS| -|brokerClientTlsTrustStore| TLS TrustStore path for internal client, used by the internal client to authenticate with Pulsar brokers || -|brokerClientTlsTrustStorePassword| TLS TrustStore password for internal client, used by the internal client to authenticate with Pulsar brokers || -|brokerClientTlsCiphers| Specify the tls cipher the internal client will use to negotiate during TLS Handshake. (a comma-separated list of ciphers) e.g. [TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256]|| -|brokerClientTlsProtocols|Specify the tls protocols the broker will use to negotiate during TLS handshake. (a comma-separated list of protocol names). e.g. `TLSv1.3`, `TLSv1.2` || -|ttlDurationDefaultInSeconds|The default Time to Live (TTL) for namespaces if the TTL is not configured at namespace policies. When the value is set to `0`, TTL is disabled. By default, TTL is disabled. |0| -|tokenSecretKey| Configure the secret key to be used to validate auth tokens. The key can be specified like: `tokenSecretKey=data:;base64,xxxxxxxxx` or `tokenSecretKey=file:///my/secret.key`. Note: key file must be DER-encoded.|| -|tokenPublicKey| Configure the public key to be used to validate auth tokens. The key can be specified like: `tokenPublicKey=data:;base64,xxxxxxxxx` or `tokenPublicKey=file:///my/secret.key`. Note: key file must be DER-encoded.|| -|tokenPublicAlg| Configure the algorithm to be used to validate auth tokens. This can be any of the asymettric algorithms supported by Java JWT (https://github.com/jwtk/jjwt#signature-algorithms-keys) |RS256| -|tokenAuthClaim| Specify which of the token's claims will be used as the authentication "principal" or "role". The default "sub" claim will be used if this is left blank || -|tokenAudienceClaim| The token audience "claim" name, e.g. "aud", that will be used to get the audience from token. If not set, audience will not be verified. || -|tokenAudience| The token audience stands for this broker. The field `tokenAudienceClaim` of a valid token, need contains this. || -|maxUnackedMessagesPerConsumer| Max number of unacknowledged messages allowed to receive messages by a consumer on a shared subscription. Broker will stop sending messages to consumer once, this limit reaches until consumer starts acknowledging messages back. Using a value of 0, is disabling unackeMessage limit check and consumer can receive messages without any restriction |50000| -|maxUnackedMessagesPerSubscription| Max number of unacknowledged messages allowed per shared subscription. Broker will stop dispatching messages to all consumers of the subscription once this limit reaches until consumer starts acknowledging messages back and unack count reaches to limit/2. 
Using a value of 0 disables the unacked-message limit check, and the dispatcher can dispatch messages without any restriction |200000| -|subscriptionRedeliveryTrackerEnabled| Enable subscription message redelivery tracker |true| -|subscriptionExpirationTimeMinutes | How long to wait before deleting inactive subscriptions, measured from the last consumption.

    Setting this configuration to a value **greater than 0** deletes inactive subscriptions automatically.
    Setting this configuration to **0** does not delete inactive subscriptions automatically.

    Since this configuration takes effect on all topics, if there is even one topic whose subscriptions should not be deleted automatically, you need to set it to 0.
    Instead, you can set a subscription expiration time for each **namespace** using the [`pulsar-admin namespaces set-subscription-expiration-time options` command](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-subscription-expiration-time-em-). | 0 | -|maxConcurrentLookupRequest| Max number of concurrent lookup requests that the broker allows, used to throttle heavy incoming lookup traffic |50000| -|maxConcurrentTopicLoadRequest| Max number of concurrent topic loading requests that the broker allows, used to control the number of ZooKeeper operations |5000| -|authenticationEnabled| Enable authentication |false| -|authenticationProviders| Authentication provider name list (a comma-separated list of class names) || -| authenticationRefreshCheckSeconds | Interval of time for checking for expired authentication credentials | 60 | -|authorizationEnabled| Enforce authorization |false| -|superUserRoles| Role names that are treated as "super-user", meaning they will be able to do all admin operations and publish/consume from all topics || -|brokerClientAuthenticationPlugin| Authentication settings of the broker itself. Used when the broker connects to other brokers, either in the same or other clusters || -|brokerClientAuthenticationParameters||| -|athenzDomainNames| Supported Athenz provider domain names (comma-separated) for authentication || -|exposePreciseBacklogInPrometheus| Enable exposing the precise backlog stats. Set to false to calculate from the published counter and consumed counter instead, which is more efficient but may be inaccurate. |false| -|schemaRegistryStorageClassName|The schema storage implementation used by this broker.|org.apache.pulsar.broker.service.schema.BookkeeperSchemaStorageFactory| -|isSchemaValidationEnforced|Enforce schema validation in the following case: if a producer without a schema attempts to produce to a topic with a schema, the producer fails to connect. Please be careful when using this, since non-Java clients don't support schema: if this setting is enabled, non-Java clients fail to produce.|false| -| topicFencingTimeoutSeconds | If a topic remains fenced for a certain time period (in seconds), it is closed forcefully. If set to 0 or a negative number, the fenced topic is not closed. | 0 | -|offloadersDirectory|The directory for all the offloader implementations.|./offloaders| -|bookkeeperMetadataServiceUri| Metadata service URI that BookKeeper uses for loading the corresponding metadata driver and resolving its metadata service location. This value can be fetched using the `bookkeeper shell whatisinstanceid` command in a BookKeeper cluster. For example: zk+hierarchical://localhost:2181/ledgers. The metadata service URI list can also be semicolon-separated values like below: zk+hierarchical://zk1:2181;zk2:2181;zk3:2181/ledgers || -|bookkeeperClientAuthenticationPlugin| Authentication plugin to use when connecting to bookies || -|bookkeeperClientAuthenticationParametersName| BookKeeper auth plugin implementation-specific parameter names and values || -|bookkeeperClientAuthenticationParameters||| -|bookkeeperClientNumWorkerThreads| Number of BookKeeper client worker threads. 
Default is Runtime.getRuntime().availableProcessors() || -|bookkeeperClientTimeoutInSeconds| Timeout for BK add / read operations |30| -|bookkeeperClientSpeculativeReadTimeoutInMillis| Speculative reads are initiated if a read request doesn’t complete within a certain time Using a value of 0, is disabling the speculative reads |0| -|bookkeeperNumberOfChannelsPerBookie| Number of channels per bookie |16| -|bookkeeperClientHealthCheckEnabled| Enable bookies health check. Bookies that have more than the configured number of failure within the interval will be quarantined for some time. During this period, new ledgers won’t be created on these bookies |true| -|bookkeeperClientHealthCheckIntervalSeconds||60| -|bookkeeperClientHealthCheckErrorThresholdPerInterval||5| -|bookkeeperClientHealthCheckQuarantineTimeInSeconds ||1800| -|bookkeeperClientRackawarePolicyEnabled| Enable rack-aware bookie selection policy. BK will chose bookies from different racks when forming a new bookie ensemble |true| -|bookkeeperClientRegionawarePolicyEnabled| Enable region-aware bookie selection policy. BK will chose bookies from different regions and racks when forming a new bookie ensemble. If enabled, the value of bookkeeperClientRackawarePolicyEnabled is ignored |false| -|bookkeeperClientMinNumRacksPerWriteQuorum| Minimum number of racks per write quorum. BK rack-aware bookie selection policy will try to get bookies from at least 'bookkeeperClientMinNumRacksPerWriteQuorum' racks for a write quorum. |2| -|bookkeeperClientEnforceMinNumRacksPerWriteQuorum| Enforces rack-aware bookie selection policy to pick bookies from 'bookkeeperClientMinNumRacksPerWriteQuorum' racks for a writeQuorum. If BK can't find bookie then it would throw BKNotEnoughBookiesException instead of picking random one. |false| -|bookkeeperClientReorderReadSequenceEnabled| Enable/disable reordering read sequence on reading entries. |false| -|bookkeeperClientIsolationGroups| Enable bookie isolation by specifying a list of bookie groups to choose from. Any bookie outside the specified groups will not be used by the broker || -|bookkeeperClientSecondaryIsolationGroups| Enable bookie secondary-isolation group if bookkeeperClientIsolationGroups doesn't have enough bookie available. || -|bookkeeperClientMinAvailableBookiesInIsolationGroups| Minimum bookies that should be available as part of bookkeeperClientIsolationGroups else broker will include bookkeeperClientSecondaryIsolationGroups bookies in isolated list. || -|bookkeeperClientGetBookieInfoIntervalSeconds| Set the interval to periodically check bookie info |86400| -|bookkeeperClientGetBookieInfoRetryIntervalSeconds| Set the interval to retry a failed bookie info lookup |60| -|bookkeeperEnableStickyReads | Enable/disable having read operations for a ledger to be sticky to a single bookie. If this flag is enabled, the client will use one single bookie (by preference) to read all entries for a ledger. | true | -|managedLedgerDefaultEnsembleSize| Number of bookies to use when creating a ledger |2| -|managedLedgerDefaultWriteQuorum| Number of copies to store for each message |2| -|managedLedgerDefaultAckQuorum| Number of guaranteed copies (acks to wait before write is complete) |2| -|managedLedgerCacheSizeMB| Amount of memory to use for caching data payload in managed ledger. This memory is allocated from JVM direct memory and it’s shared across all the topics running in the same broker. 
By default, uses 1/5th of available direct memory || -|managedLedgerCacheCopyEntries| Whether to make a copy of the entry payloads when inserting them in the cache| false| -|managedLedgerCacheEvictionWatermark| Threshold to bring the cache level down to when eviction is triggered |0.9| -|managedLedgerCacheEvictionFrequency| Configure the cache eviction frequency for the managed ledger cache (evictions/sec) | 100.0 | -|managedLedgerCacheEvictionTimeThresholdMillis| All entries that have stayed in the cache for longer than the configured time are evicted | 1000 | -|managedLedgerCursorBackloggedThreshold| Configure the threshold (in number of entries) beyond which a cursor is considered 'backlogged' and thus set as inactive. | 1000| -|managedLedgerDefaultMarkDeleteRateLimit| Rate limit the amount of writes per second generated by consumers acking messages |1.0| -|managedLedgerMaxEntriesPerLedger| The max number of entries to append to a ledger before triggering a rollover. A ledger rollover is triggered after the min rollover time has passed and one of the following conditions is true:
    • The max rollover time has been reached
    • The max entries have been written to the ledger
    • The max ledger size has been written to the ledger
    |50000| -|managedLedgerMinLedgerRolloverTimeMinutes| Minimum time between ledger rollover for a topic |10| -|managedLedgerMaxLedgerRolloverTimeMinutes| Maximum time before forcing a ledger rollover for a topic |240| -|managedLedgerCursorMaxEntriesPerLedger| Max number of entries to append to a cursor ledger |50000| -|managedLedgerCursorRolloverTimeInSeconds| Max time before triggering a rollover on a cursor ledger |14400| -|managedLedgerMaxUnackedRangesToPersist| Max number of "acknowledgment holes" that are going to be persistently stored. When acknowledging out of order, a consumer will leave holes that are supposed to be quickly filled by acking all the messages. The information of which messages are acknowledged is persisted by compressing in "ranges" of messages that were acknowledged. After the max number of ranges is reached, the information will only be tracked in memory and messages will be redelivered in case of crashes. |1000| -|autoSkipNonRecoverableData| Skip reading non-recoverable/unreadable data-ledger under managed-ledger’s list.It helps when data-ledgers gets corrupted at bookkeeper and managed-cursor is stuck at that ledger. |false| -|loadBalancerEnabled| Enable load balancer |true| -|loadBalancerPlacementStrategy| Strategy to assign a new bundle weightedRandomSelection || -|loadBalancerReportUpdateThresholdPercentage| Percentage of change to trigger load report update |10| -|loadBalancerReportUpdateMaxIntervalMinutes| maximum interval to update load report |15| -|loadBalancerHostUsageCheckIntervalMinutes| Frequency of report to collect |1| -|loadBalancerSheddingIntervalMinutes| Load shedding interval. Broker periodically checks whether some traffic should be offload from some over-loaded broker to other under-loaded brokers |30| -|loadBalancerSheddingGracePeriodMinutes| Prevent the same topics to be shed and moved to other broker more than once within this timeframe |30| -|loadBalancerBrokerMaxTopics| Usage threshold to allocate max number of topics to broker |50000| -|loadBalancerBrokerUnderloadedThresholdPercentage| Usage threshold to determine a broker as under-loaded |1| -|loadBalancerBrokerOverloadedThresholdPercentage| Usage threshold to determine a broker as over-loaded |85| -|loadBalancerResourceQuotaUpdateIntervalMinutes| Interval to update namespace bundle resource quota |15| -|loadBalancerBrokerComfortLoadLevelPercentage| Usage threshold to determine a broker is having just right level of load |65| -|loadBalancerAutoBundleSplitEnabled| enable/disable namespace bundle auto split |false| -|loadBalancerNamespaceBundleMaxTopics| maximum topics in a bundle, otherwise bundle split will be triggered |1000| -|loadBalancerNamespaceBundleMaxSessions| maximum sessions (producers + consumers) in a bundle, otherwise bundle split will be triggered |1000| -|loadBalancerNamespaceBundleMaxMsgRate| maximum msgRate (in + out) in a bundle, otherwise bundle split will be triggered |1000| -|loadBalancerNamespaceBundleMaxBandwidthMbytes| maximum bandwidth (in + out) in a bundle, otherwise bundle split will be triggered |100| -|loadBalancerNamespaceMaximumBundles| maximum number of bundles in a namespace |128| -|replicationMetricsEnabled| Enable replication metrics |true| -|replicationConnectionsPerBroker| Max number of connections to open for each broker in a remote cluster More connections host-to-host lead to better throughput over high-latency links. 
|16| -|replicationProducerQueueSize| Replicator producer queue size |1000| -|replicatorPrefix| Replicator prefix used for replicator producer name and cursor name pulsar.repl|| -|replicationTlsEnabled| Enable TLS when talking with other clusters to replicate messages |false| -|brokerServicePurgeInactiveFrequencyInSeconds|Deprecated. Use `brokerDeleteInactiveTopicsFrequencySeconds`.|60| -|transactionCoordinatorEnabled|Whether to enable transaction coordinator in broker.|true| -|transactionMetadataStoreProviderClassName| |org.apache.pulsar.transaction.coordinator.impl.InMemTransactionMetadataStoreProvider| -|defaultRetentionTimeInMinutes| Default message retention time. 0 means retention is disabled. -1 means data is not removed by time quota |0| -|defaultRetentionSizeInMB| Default retention size. 0 means retention is disabled. -1 means data is not removed by size quota |0| -|keepAliveIntervalSeconds| How often to check whether the connections are still alive |30| -|bootstrapNamespaces| The bootstrap name. | N/A | -|loadManagerClassName| Name of load manager to use |org.apache.pulsar.broker.loadbalance.impl.SimpleLoadManagerImpl| -|supportedNamespaceBundleSplitAlgorithms| Supported algorithms name for namespace bundle split |[range_equally_divide,topic_count_equally_divide]| -|defaultNamespaceBundleSplitAlgorithm| Default algorithm name for namespace bundle split |range_equally_divide| -|managedLedgerOffloadDriver| The directory for all the offloader implementations `offloadersDirectory=./offloaders`. Driver to use to offload old data to long term storage (Possible values: S3, aws-s3, google-cloud-storage). When using google-cloud-storage, Make sure both Google Cloud Storage and Google Cloud Storage JSON API are enabled for the project (check from Developers Console -> Api&auth -> APIs). || -|managedLedgerOffloadMaxThreads| Maximum number of thread pool threads for ledger offloading |2| -|managedLedgerOffloadPrefetchRounds|The maximum prefetch rounds for ledger reading for offloading.|1| -|managedLedgerUnackedRangesOpenCacheSetEnabled| Use Open Range-Set to cache unacknowledged messages |true| -|managedLedgerOffloadDeletionLagMs|Delay between a ledger being successfully offloaded to long term storage and the ledger being deleted from bookkeeper | 14400000| -|managedLedgerOffloadAutoTriggerSizeThresholdBytes|The number of bytes before triggering automatic offload to long term storage |-1 (disabled)| -|s3ManagedLedgerOffloadRegion| For Amazon S3 ledger offload, AWS region || -|s3ManagedLedgerOffloadBucket| For Amazon S3 ledger offload, Bucket to place offloaded ledger into || -|s3ManagedLedgerOffloadServiceEndpoint| For Amazon S3 ledger offload, Alternative endpoint to connect to (useful for testing) || -|s3ManagedLedgerOffloadMaxBlockSizeInBytes| For Amazon S3 ledger offload, Max block size in bytes. (64MB by default, 5MB minimum) |67108864| -|s3ManagedLedgerOffloadReadBufferSizeInBytes| For Amazon S3 ledger offload, Read buffer size in bytes (1MB by default) |1048576| -|gcsManagedLedgerOffloadRegion|For Google Cloud Storage ledger offload, region where offload bucket is located. Go to this page for more details: https://cloud.google.com/storage/docs/bucket-locations .|N/A| -|gcsManagedLedgerOffloadBucket|For Google Cloud Storage ledger offload, Bucket to place offloaded ledger into.|N/A| -|gcsManagedLedgerOffloadMaxBlockSizeInBytes|For Google Cloud Storage ledger offload, the maximum block size in bytes. 
(64MB by default, 5MB minimum)|67108864| -|gcsManagedLedgerOffloadReadBufferSizeInBytes|For Google Cloud Storage ledger offload, Read buffer size in bytes. (1MB by default)|1048576| -|gcsManagedLedgerOffloadServiceAccountKeyFile|For Google Cloud Storage, path to json file containing service account credentials. For more details, see the "Service Accounts" section of https://support.google.com/googleapi/answer/6158849 .|N/A| -|fileSystemProfilePath|For File System Storage, file system profile path.|../conf/filesystem_offload_core_site.xml| -|fileSystemURI|For File System Storage, file system uri.|N/A| -|s3ManagedLedgerOffloadRole| For Amazon S3 ledger offload, provide a role to assume before writing to s3 || -|s3ManagedLedgerOffloadRoleSessionName| For Amazon S3 ledger offload, provide a role session name when using a role |pulsar-s3-offload| -| acknowledgmentAtBatchIndexLevelEnabled | Enable or disable the batch index acknowledgement. | false | -|enableReplicatedSubscriptions|Whether to enable tracking of replicated subscriptions state across clusters.|true| -|replicatedSubscriptionsSnapshotFrequencyMillis|The frequency of snapshots for replicated subscriptions tracking.|1000| -|replicatedSubscriptionsSnapshotTimeoutSeconds|The timeout for building a consistent snapshot for tracking replicated subscriptions state.|30| -|replicatedSubscriptionsSnapshotMaxCachedPerSubscription|The maximum number of snapshot to be cached per subscription.|10| -|maxMessagePublishBufferSizeInMB|The maximum memory size for a broker to handle messages that are sent by producers. If the processing message size exceeds this value, the broker stops reading data from the connection. The processing messages refer to the messages that are sent to the broker but the broker has not sent response to the client. Usually the messages are waiting to be written to bookies. It is shared across all the topics running in the same broker. The value `-1` disables the memory limitation. By default, it is 50% of direct memory.|N/A| -|messagePublishBufferCheckIntervalInMillis|Interval between checks to see if message publish buffer size exceeds the maximum. Use `0` or negative number to disable the max publish buffer limiting.|100| -|retentionCheckIntervalInSeconds|Check between intervals to see if consumed ledgers need to be trimmed. Use 0 or negative number to disable the check.|120| -| maxMessageSize | Set the maximum size of a message. | 5242880 | -| preciseTopicPublishRateLimiterEnable | Enable precise topic publish rate limiting. | false | -| lazyCursorRecovery | Whether to recover cursors lazily when trying to recover a managed ledger backing a persistent topic. It can improve write availability of topics. The caveat is now when recovered ledger is ready to write we're not sure if all old consumers' last mark delete position(ack position) can be recovered or not. So user can make the trade off or have custom logic in application to checkpoint consumer state.| false | -|haProxyProtocolEnabled | Enable or disable the [HAProxy](http://www.haproxy.org/) protocol. |false| -| maxTopicsPerNamespace | The maximum number of persistent topics that can be created in the namespace. When the number of topics reaches this threshold, the broker rejects the request of creating a new topic, including the auto-created topics by the producer or consumer, until the number of connected consumers decreases. The default value 0 disables the check. 
| 0 | -|subscriptionTypesEnabled| Enable all subscription types, which are exclusive, shared, failover, and key_shared. | Exclusive, Shared, Failover, Key_Shared | -| managedLedgerInfoCompressionType | ManagedLedgerInfo compression type; option values are NONE, LZ4, ZLIB, ZSTD, and SNAPPY. If the value is NONE or invalid, the ManagedLedgerInfo is not compressed. Note: after enabling this configuration, if you want to downgrade the broker, you should first change the configuration to `NONE` and make sure all ledger metadata is saved without compression. | None | -|narExtractionDirectory | The extraction directory of the NAR package.
    Available for Protocol Handler, Additional Servlets, Offloaders, Broker Interceptor. | System.getProperty("java.io.tmpdir") | - -## Client - -You can use the [`pulsar-client`](reference-cli-tools.md#pulsar-client) CLI tool to publish messages to and consume messages from Pulsar topics. You can use this tool in place of a client library. - -|Name|Description|Default| -|---|---|---| -|webServiceUrl| The web URL for the cluster. |http://localhost:8080/| -|brokerServiceUrl| The Pulsar protocol URL for the cluster. |pulsar://localhost:6650/| -|authPlugin| The authentication plugin. || -|authParams| The authentication parameters for the cluster, as a comma-separated string. || -|useTls| Whether to enforce the TLS authentication in the cluster. |false| -| tlsAllowInsecureConnection | Allow TLS connections to servers whose certificate cannot be verified to have been signed by a trusted certificate authority. | false | -| tlsEnableHostnameVerification | Whether the server hostname must match the common name of the certificate that is used by the server. | false | -|tlsTrustCertsFilePath||| -| useKeyStoreTls | Enable TLS with KeyStore type configuration in the broker. | false | -| tlsTrustStoreType | TLS TrustStore type configuration.
  • JKS
  • PKCS12
  |JKS| -| tlsTrustStore | TLS TrustStore path. | | -| tlsTrustStorePassword | TLS TrustStore password. | | - - ## Service discovery - |Name|Description|Default| |---|---|---| |zookeeperServers| ZooKeeper quorum connection string (comma-separated) || -|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300| -|configurationStoreServers| Configuration store connection string (as a comma-separated list) || -|zookeeperSessionTimeoutMs| ZooKeeper session timeout |30000| -|servicePort| Port to use to serve binary-proto requests |6650| -|servicePortTls| Port to use to serve binary-proto-tls requests |6651| -|webServicePort| Port that the discovery service listens on |8080| -|webServicePortTls| Port to use to serve HTTPS requests |8443| -|bindOnLocalhost| Control whether to bind directly on localhost rather than on the normal hostname |false| -|authenticationEnabled| Enable authentication |false| -|authenticationProviders| Authentication provider name list (a comma-separated list of class names) || -|authorizationEnabled| Enforce authorization |false| -|superUserRoles| Role names that are treated as "super-user", meaning they will be able to do all admin operations and publish/consume from all topics (comma-separated) || -|tlsEnabled| Enable TLS |false| -|tlsCertificateFilePath| Path for the TLS certificate file || -|tlsKeyFilePath| Path for the TLS private key file || - - - ## Log4j - You can set the log level and configuration in the [log4j2.yaml](https://github.com/apache/pulsar/blob/d557e0aa286866363bc6261dec87790c055db1b0/conf/log4j2.yaml#L155) file. The following logging configuration parameters are available. - |Name|Default| |---|---| |pulsar.root.logger| WARN,CONSOLE| -|pulsar.log.dir| logs| -|pulsar.log.file| pulsar.log| -|log4j.rootLogger| ${pulsar.root.logger}| -|log4j.appender.CONSOLE| org.apache.log4j.ConsoleAppender| -|log4j.appender.CONSOLE.Threshold| DEBUG| -|log4j.appender.CONSOLE.layout| org.apache.log4j.PatternLayout| -|log4j.appender.CONSOLE.layout.ConversionPattern| %d{ISO8601} - %-5p - [%t:%C{1}@%L] - %m%n| -|log4j.appender.ROLLINGFILE| org.apache.log4j.DailyRollingFileAppender| -|log4j.appender.ROLLINGFILE.Threshold| DEBUG| -|log4j.appender.ROLLINGFILE.File| ${pulsar.log.dir}/${pulsar.log.file}| -|log4j.appender.ROLLINGFILE.layout| org.apache.log4j.PatternLayout| -|log4j.appender.ROLLINGFILE.layout.ConversionPattern| %d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n| -|log4j.appender.TRACEFILE| org.apache.log4j.FileAppender| -|log4j.appender.TRACEFILE.Threshold| TRACE| -|log4j.appender.TRACEFILE.File| pulsar-trace.log| -|log4j.appender.TRACEFILE.layout| org.apache.log4j.PatternLayout| -|log4j.appender.TRACEFILE.layout.ConversionPattern| %d{ISO8601} - %-5p [%t:%C{1}@%L][%x] - %m%n| - > Note: 'topic' in log4j2.appender is configurable. > - If you want to append all logs to a single topic, set the same topic name. > - If you want to append logs to different topics, you can set different topic names. 
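
As a quick illustration of the parameters above, here is a minimal sketch of overriding the logging properties at startup instead of editing `log4j2.yaml`. It assumes that your release's start scripts honor the `PULSAR_EXTRA_OPTS` environment variable and that the shipped `conf/log4j2.yaml` resolves `pulsar.log.dir`, `pulsar.log.file`, and `pulsar.root.logger` as JVM system properties; check `conf/pulsar_env.sh` and `conf/log4j2.yaml` in your distribution before relying on this.

```shell
# Sketch only: route logs to a custom directory and file, and set the root
# logger to INFO with the rolling-file appender. PULSAR_EXTRA_OPTS and the
# -D property names are assumptions taken from the table above, not a
# verified recipe for every release.
PULSAR_EXTRA_OPTS="-Dpulsar.log.dir=/var/log/pulsar -Dpulsar.log.file=broker.log -Dpulsar.root.logger=INFO,ROLLINGFILE" \
bin/pulsar standalone
```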
- -## Log4j shell - -|Name|Default| -|---|---| -|bookkeeper.root.logger| ERROR,CONSOLE| -|log4j.rootLogger| ${bookkeeper.root.logger}| -|log4j.appender.CONSOLE| org.apache.log4j.ConsoleAppender| -|log4j.appender.CONSOLE.Threshold| DEBUG| -|log4j.appender.CONSOLE.layout| org.apache.log4j.PatternLayout| -|log4j.appender.CONSOLE.layout.ConversionPattern| %d{ABSOLUTE} %-5p %m%n| -|log4j.logger.org.apache.zookeeper| ERROR| -|log4j.logger.org.apache.bookkeeper| ERROR| -|log4j.logger.org.apache.bookkeeper.bookie.BookieShell| INFO| - - -## Standalone - -|Name|Description|Default| -|---|---|---| -|authenticateOriginalAuthData| If this flag is set to `true`, the broker authenticates the original Auth data; else it just accepts the originalPrincipal and authorizes it (if required). |false| -|zookeeperServers| The quorum connection string for local ZooKeeper || -|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300| -|configurationStoreServers| Configuration store connection string (as a comma-separated list) || -|brokerServicePort| The port on which the standalone broker listens for connections |6650| -|webServicePort| The port used by the standalone broker for HTTP requests |8080| -|bindAddress| The hostname or IP address on which the standalone service binds |0.0.0.0| -|advertisedAddress| The hostname or IP address that the standalone service advertises to the outside world. If not set, the value of `InetAddress.getLocalHost().getHostName()` is used. || -| numAcceptorThreads | Number of threads to use for Netty Acceptor | 1 | -| numIOThreads | Number of threads to use for Netty IO | 2 * Runtime.getRuntime().availableProcessors() | -| numHttpServerThreads | Number of threads to use for HTTP requests processing | 2 * Runtime.getRuntime().availableProcessors()| -|isRunningStandalone|This flag controls features that are meant to be used when running in standalone mode.|N/A| -|clusterName| The name of the cluster that this broker belongs to. |standalone| -| failureDomainsEnabled | Enable cluster's failure-domain which can distribute brokers into logical region. | false | -|zooKeeperSessionTimeoutMillis| The ZooKeeper session timeout, in milliseconds. |30000| -|zooKeeperOperationTimeoutSeconds|ZooKeeper operation timeout in seconds.|30| -|brokerShutdownTimeoutMs| The time to wait for graceful broker shutdown. After this time elapses, the process will be killed. |60000| -|skipBrokerShutdownOnOOM| Flag to skip broker shutdown when broker handles Out of memory error. |false| -|backlogQuotaCheckEnabled| Enable the backlog quota check, which enforces a specified action when the quota is reached. |true| -|backlogQuotaCheckIntervalInSeconds| How often to check for topics that have reached the backlog quota. |60| -|backlogQuotaDefaultLimitGB| The default per-topic backlog quota limit. Being less than 0 means no limitation. By default, it is -1. |-1| -|ttlDurationDefaultInSeconds|The default Time to Live (TTL) for namespaces if the TTL is not configured at namespace policies. When the value is set to `0`, TTL is disabled. By default, TTL is disabled. |0| -|brokerDeleteInactiveTopicsEnabled| Enable the deletion of inactive topics. |true| -|brokerDeleteInactiveTopicsFrequencySeconds| How often to check for inactive topics, in seconds. |60| -| maxPendingPublishRequestsPerConnection | Maximum pending publish requests per connection to avoid keeping large number of pending requests in memory | 1000| -|messageExpiryCheckIntervalInMinutes| How often to proactively check and purged expired messages. 
|5| -|activeConsumerFailoverDelayTimeMillis| How long to delay rewinding cursor and dispatching messages when active consumer is changed. |1000| -| subscriptionExpirationTimeMinutes | How long to delete inactive subscriptions from last consumption. When it is set to 0, inactive subscriptions are not deleted automatically | 0 | -| subscriptionRedeliveryTrackerEnabled | Enable subscription message redelivery tracker to send redelivery count to consumer. | true | -|subscriptionKeySharedEnable|Whether to enable the Key_Shared subscription.|true| -| subscriptionKeySharedUseConsistentHashing | In Key_Shared subscription type, with default AUTO_SPLIT mode, use splitting ranges or consistent hashing to reassign keys to new consumers. | false | -| subscriptionKeySharedConsistentHashingReplicaPoints | In Key_Shared subscription type, the number of points in the consistent-hashing ring. The greater the number, the more equal the assignment of keys to consumers. | 100 | -| subscriptionExpiryCheckIntervalInMinutes | How frequently to proactively check and purge expired subscription |5 | -| brokerDeduplicationEnabled | Set the default behavior for message deduplication in the broker. This can be overridden per-namespace. If it is enabled, the broker rejects messages that are already stored in the topic. | false | -| brokerDeduplicationMaxNumberOfProducers | Maximum number of producer information that it's going to be persisted for deduplication purposes | 10000 | -| brokerDeduplicationEntriesInterval | Number of entries after which a deduplication information snapshot is taken. A greater interval leads to less snapshots being taken though it would increase the topic recovery time, when the entries published after the snapshot need to be replayed. | 1000 | -| brokerDeduplicationProducerInactivityTimeoutMinutes | The time of inactivity (in minutes) after which the broker discards deduplication information related to a disconnected producer. | 360 | -| defaultNumberOfNamespaceBundles | When a namespace is created without specifying the number of bundles, this value is used as the default setting.| 4 | -|clientLibraryVersionCheckEnabled| Enable checks for minimum allowed client library version. |false| -|clientLibraryVersionCheckAllowUnversioned| Allow client libraries with no version information |true| -|statusFilePath| The path for the file used to determine the rotation status for the broker when responding to service discovery health checks |/usr/local/apache/htdocs| -|maxUnackedMessagesPerConsumer| The maximum number of unacknowledged messages allowed to be received by consumers on a shared subscription. The broker will stop sending messages to a consumer once this limit is reached or until the consumer begins acknowledging messages. A value of 0 disables the unacked message limit check and thus allows consumers to receive messages without any restrictions. |50000| -|maxUnackedMessagesPerSubscription| The same as above, except per subscription rather than per consumer. |200000| -| maxUnackedMessagesPerBroker | Maximum number of unacknowledged messages allowed per broker. Once this limit reaches, the broker stops dispatching messages to all shared subscriptions which has a higher number of unacknowledged messages until subscriptions start acknowledging messages back and unacknowledged messages count reaches to limit/2. When the value is set to 0, unacknowledged message limit check is disabled and broker does not block dispatchers. 
| 0 | -| maxUnackedMessagesPerSubscriptionOnBrokerBlocked | Once the broker reaches maxUnackedMessagesPerBroker limit, it blocks subscriptions which have higher unacknowledged messages than this percentage limit and subscription does not receive any new messages until that subscription acknowledges messages back. | 0.16 | -| unblockStuckSubscriptionEnabled|Broker periodically checks if subscription is stuck and unblock if flag is enabled.|false| -|zookeeperSessionExpiredPolicy|There are two policies when ZooKeeper session expired happens, "shutdown" and "reconnect". If it is set to "shutdown" policy, when ZooKeeper session expired happens, the broker is shutdown. If it is set to "reconnect" policy, the broker tries to reconnect to ZooKeeper server and re-register metadata to ZooKeeper. Note: the "reconnect" policy is an experiment feature.|shutdown| -| topicPublisherThrottlingTickTimeMillis | Tick time to schedule task that checks topic publish rate limiting across all topics. A lower value can improve accuracy while throttling publish but it uses more CPU to perform frequent check. (Disable publish throttling with value 0) | 10| -| brokerPublisherThrottlingTickTimeMillis | Tick time to schedule task that checks broker publish rate limiting across all topics. A lower value can improve accuracy while throttling publish but it uses more CPU to perform frequent check. When the value is set to 0, publish throttling is disabled. |50 | -| brokerPublisherThrottlingMaxMessageRate | Maximum rate (in 1 second) of messages allowed to publish for a broker if the message rate limiting is enabled. When the value is set to 0, message rate limiting is disabled. | 0| -| brokerPublisherThrottlingMaxByteRate | Maximum rate (in 1 second) of bytes allowed to publish for a broker if the byte rate limiting is enabled. When the value is set to 0, the byte rate limiting is disabled. | 0 | -|subscribeThrottlingRatePerConsumer|Too many subscribe requests from a consumer can cause broker rewinding consumer cursors and loading data from bookies, hence causing high network bandwidth usage. When the positive value is set, broker will throttle the subscribe requests for one consumer. Otherwise, the throttling will be disabled. By default, throttling is disabled.|0| -|subscribeRatePeriodPerConsumerInSecond|Rate period for {subscribeThrottlingRatePerConsumer}. By default, it is 30s.|30| -| dispatchThrottlingRatePerTopicInMsg | Default messages (per second) dispatch throttling-limit for every topic. When the value is set to 0, default message dispatch throttling-limit is disabled. |0 | -| dispatchThrottlingRatePerTopicInByte | Default byte (per second) dispatch throttling-limit for every topic. When the value is set to 0, default byte dispatch throttling-limit is disabled. | 0| -| dispatchThrottlingRateRelativeToPublishRate | Enable dispatch rate-limiting relative to publish rate. | false | -|dispatchThrottlingRatePerSubscriptionInMsg|The defaulted number of message dispatching throttling-limit for a subscription. The value of 0 disables message dispatch-throttling.|0| -|dispatchThrottlingRatePerSubscriptionInByte|The default number of message-bytes dispatching throttling-limit for a subscription. The value of 0 disables message-byte dispatch-throttling.|0| -| dispatchThrottlingOnNonBacklogConsumerEnabled | Enable dispatch-throttling for both caught up consumers as well as consumers who have backlogs. | true | -|dispatcherMaxReadBatchSize|The maximum number of entries to read from BookKeeper. 
By default, it is 100 entries.|100| -|dispatcherMaxReadSizeBytes|The maximum size in bytes of entries to read from BookKeeper. By default, it is 5MB.|5242880| -|dispatcherMinReadBatchSize|The minimum number of entries to read from BookKeeper. By default, it is 1 entry. When there is an error occurred on reading entries from bookkeeper, the broker will backoff the batch size to this minimum number.|1| -|dispatcherMaxRoundRobinBatchSize|The maximum number of entries to dispatch for a shared subscription. By default, it is 20 entries.|20| -| preciseDispatcherFlowControl | Precise dispathcer flow control according to history message number of each entry. | false | -| streamingDispatch | Whether to use streaming read dispatcher. It can be useful when there's a huge backlog to drain and instead of read with micro batch we can streamline the read from bookkeeper to make the most of consumer capacity till we hit bookkeeper read limit or consumer process limit, then we can use consumer flow control to tune the speed. This feature is currently in preview and can be changed in subsequent release. | false | -| maxConcurrentLookupRequest | Maximum number of concurrent lookup request that the broker allows to throttle heavy incoming lookup traffic. | 50000 | -| maxConcurrentTopicLoadRequest | Maximum number of concurrent topic loading request that the broker allows to control the number of zk-operations. | 5000 | -| maxConcurrentNonPersistentMessagePerConnection | Maximum number of concurrent non-persistent message that can be processed per connection. | 1000 | -| numWorkerThreadsForNonPersistentTopic | Number of worker threads to serve non-persistent topic. | 8 | -| enablePersistentTopics | Enable broker to load persistent topics. | true | -| enableNonPersistentTopics | Enable broker to load non-persistent topics. | true | -| maxSubscriptionsPerTopic | Maximum number of subscriptions allowed to subscribe to a topic. Once this limit reaches, the broker rejects new subscriptions until the number of subscriptions decreases. When the value is set to 0, the limit check is disabled. | 0 | -| maxProducersPerTopic | Maximum number of producers allowed to connect to a topic. Once this limit reaches, the broker rejects new producers until the number of connected producers decreases. When the value is set to 0, the limit check is disabled. | 0 | -| maxConsumersPerTopic | Maximum number of consumers allowed to connect to a topic. Once this limit reaches, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 | -| maxConsumersPerSubscription | Maximum number of consumers allowed to connect to a subscription. Once this limit reaches, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 | -| maxNumPartitionsPerPartitionedTopic | Maximum number of partitions per partitioned topic. When the value is set to a negative number or is set to 0, the check is disabled. | 0 | -| tlsCertRefreshCheckDurationSec | TLS certificate refresh duration in seconds. When the value is set to 0, check the TLS certificate on every new connection. | 300 | -| tlsCertificateFilePath | Path for the TLS certificate file. | | -| tlsKeyFilePath | Path for the TLS private key file. | | -| tlsTrustCertsFilePath | Path for the trusted TLS certificate file.| | -| tlsAllowInsecureConnection | Accept untrusted TLS certificate from the client. 
If it is set to true, a client with a certificate which cannot be verified with the 'tlsTrustCertsFilePath' certificate is allowed to connect to the server, though the certificate is not be used for client authentication. | false | -| tlsProtocols | Specify the TLS protocols the broker uses to negotiate during TLS handshake. | | -| tlsCiphers | Specify the TLS cipher the broker uses to negotiate during TLS Handshake. | | -| tlsRequireTrustedClientCertOnConnect | Trusted client certificates are required for to connect TLS. Reject the Connection if the client certificate is not trusted. In effect, this requires that all connecting clients perform TLS client authentication. | false | -| tlsEnabledWithKeyStore | Enable TLS with KeyStore type configuration in broker. | false | -| tlsProvider | TLS Provider for KeyStore type. | | -| tlsKeyStoreType | TLS KeyStore type configuration in the broker.
  • JKS
  • PKCS12
  |JKS| -| tlsKeyStore | TLS KeyStore path in the broker. | | -| tlsKeyStorePassword | TLS KeyStore password for the broker. | | -| tlsTrustStoreType | TLS TrustStore type configuration in the broker:
  • JKS
  • PKCS12
  |JKS| -| tlsTrustStore | TLS TrustStore path in the broker. | | -| tlsTrustStorePassword | TLS TrustStore password for the broker. | | -| brokerClientTlsEnabledWithKeyStore | Configure whether the internal client uses the KeyStore type to authenticate with Pulsar brokers. | false | -| brokerClientSslProvider | The TLS Provider used by the internal client to authenticate with other Pulsar brokers. | | -| brokerClientTlsTrustStoreType | TLS TrustStore type configuration for the internal client to authenticate with Pulsar brokers:
  • JKS
  • PKCS12
  | JKS | -| brokerClientTlsTrustStore | TLS TrustStore path for the internal client to authenticate with Pulsar brokers. | | -| brokerClientTlsTrustStorePassword | TLS TrustStore password for the internal client to authenticate with Pulsar brokers. | | -| brokerClientTlsCiphers | Specify the TLS cipher that the internal client uses to negotiate during TLS Handshake. | | -| brokerClientTlsProtocols | Specify the TLS protocols that the broker uses to negotiate during TLS handshake. | | -| systemTopicEnabled | Enable/Disable system topics. | false | -| topicLevelPoliciesEnabled | Enable or disable topic level policies. Topic level policies depend on the system topic. Please enable the system topic first. | false | -| topicFencingTimeoutSeconds | If a topic remains fenced for a certain time period (in seconds), it is closed forcefully. If set to 0 or a negative number, the fenced topic is not closed. | 0 | -| proxyRoles | Role names that are treated as "proxy roles". If the broker sees a request with role as proxyRoles, it demands to see a valid original principal. | | -|authenticationEnabled| Enable authentication for the broker. |false| -|authenticationProviders| A comma-separated list of class names for authentication providers. |false| -|authorizationEnabled| Enforce authorization in brokers. |false| -| authorizationProvider | Authorization provider fully qualified class-name. | org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider | -| authorizationAllowWildcardsMatching | Allow wildcard matching in authorization. Wildcard matching is applicable only when the wildcard character (*) appears at the **first** or **last** position. | false | -|superUserRoles| Role names that are treated as "superusers." Superusers are authorized to perform all admin tasks. | | -|brokerClientAuthenticationPlugin| The authentication settings of the broker itself. Used when the broker connects to other brokers either in the same cluster or from other clusters. | | -|brokerClientAuthenticationParameters| The parameters that go along with the plugin specified using brokerClientAuthenticationPlugin. | | -|athenzDomainNames| Supported Athenz authentication provider domain names as a comma-separated list. | | -| anonymousUserRole | When this parameter is not empty, unauthenticated users perform as anonymousUserRole. | | -|tokenSecretKey| Configure the secret key to be used to validate auth tokens. The key can be specified like: `tokenSecretKey=data:;base64,xxxxxxxxx` or `tokenSecretKey=file:///my/secret.key`. Note: key file must be DER-encoded.|| -|tokenPublicKey| Configure the public key to be used to validate auth tokens. The key can be specified like: `tokenPublicKey=data:;base64,xxxxxxxxx` or `tokenPublicKey=file:///my/secret.key`. Note: key file must be DER-encoded.|| -|tokenAuthClaim| Specify the token claim that will be used as the authentication "principal" or "role". The "subject" field will be used if this is left blank || -|tokenAudienceClaim| The token audience "claim" name, e.g. "aud". It is used to get the audience from token. If it is not set, the audience is not verified. || -| tokenAudience | The token audience stands for this broker. The field `tokenAudienceClaim` of a valid token needs to contain this parameter.| | -|saslJaasClientAllowedIds|This is a regexp, which limits the range of possible ids which can connect to the Broker using SASL. 
By default, it is set to `SaslConstants.JAAS_CLIENT_ALLOWED_IDS_DEFAULT`, which is ".*pulsar.*", so only clients whose id contains 'pulsar' are allowed to connect.|N/A| -|saslJaasBrokerSectionName|Service Principal, for login context name. By default, it is set to `SaslConstants.JAAS_DEFAULT_BROKER_SECTION_NAME`, which is "Broker".|N/A| -|httpMaxRequestSize|If the value is larger than 0, it rejects all HTTP requests with bodies larger than the configured limit.|-1| -|exposePreciseBacklogInPrometheus| Enable exposing the precise backlog stats. Set to false to calculate from the published counter and consumed counter instead, which is more efficient but may be inaccurate. |false| -|bookkeeperMetadataServiceUri|Metadata service URI that BookKeeper uses for loading the corresponding metadata driver and resolving its metadata service location. This value can be fetched using the `bookkeeper shell whatisinstanceid` command in a BookKeeper cluster. For example: `zk+hierarchical://localhost:2181/ledgers`. The metadata service URI list can also be semicolon-separated values like: `zk+hierarchical://zk1:2181;zk2:2181;zk3:2181/ledgers`.|N/A| -|bookkeeperClientAuthenticationPlugin| Authentication plugin to be used when connecting to bookies (BookKeeper servers). || -|bookkeeperClientAuthenticationParametersName| BookKeeper authentication plugin implementation parameters and values. || -|bookkeeperClientAuthenticationParameters| Parameters associated with the bookkeeperClientAuthenticationParametersName || -|bookkeeperClientNumWorkerThreads| Number of BookKeeper client worker threads. Default is Runtime.getRuntime().availableProcessors() || -|bookkeeperClientTimeoutInSeconds| Timeout for BookKeeper add and read operations. |30| -|bookkeeperClientSpeculativeReadTimeoutInMillis| Speculative reads are initiated if a read request doesn’t complete within a certain time. A value of 0 disables speculative reads. |0| -|bookkeeperUseV2WireProtocol|Use the older BookKeeper wire protocol with bookies.|true| -|bookkeeperClientHealthCheckEnabled| Enable bookie health checks. |true| -|bookkeeperClientHealthCheckIntervalSeconds| The time interval, in seconds, at which health checks are performed. New ledgers are not created during health checks. |60| -|bookkeeperClientHealthCheckErrorThresholdPerInterval| Error threshold for health checks. |5| -|bookkeeperClientHealthCheckQuarantineTimeInSeconds| How long to quarantine bookies that have more than the allowed number of failures within the time interval specified by bookkeeperClientHealthCheckIntervalSeconds |1800| -|bookkeeperClientGetBookieInfoIntervalSeconds|Specify options for the GetBookieInfo check. This setting helps ensure that the list of bookies on the brokers is up to date.|86400| -|bookkeeperClientGetBookieInfoRetryIntervalSeconds|Specify options for the GetBookieInfo check. This setting helps ensure that the list of bookies on the brokers is up to date.|60| -|bookkeeperClientRackawarePolicyEnabled| |true| -|bookkeeperClientRegionawarePolicyEnabled| |false| -|bookkeeperClientMinNumRacksPerWriteQuorum| |2| -|bookkeeperClientEnforceMinNumRacksPerWriteQuorum| |false| -|bookkeeperClientReorderReadSequenceEnabled| |false| -|bookkeeperClientIsolationGroups||| -|bookkeeperClientSecondaryIsolationGroups| Enable bookie secondary-isolation group if bookkeeperClientIsolationGroups doesn't have enough bookies available. 
|| -|bookkeeperClientMinAvailableBookiesInIsolationGroups| Minimum bookies that should be available as part of bookkeeperClientIsolationGroups else broker will include bookkeeperClientSecondaryIsolationGroups bookies in isolated list. || -| bookkeeperTLSProviderFactoryClass | Set the client security provider factory class name. | org.apache.bookkeeper.tls.TLSContextFactory | -| bookkeeperTLSClientAuthentication | Enable TLS authentication with bookie. | false | -| bookkeeperTLSKeyFileType | Supported type: PEM, JKS, PKCS12. | PEM | -| bookkeeperTLSTrustCertTypes | Supported type: PEM, JKS, PKCS12. | PEM | -| bookkeeperTLSKeyStorePasswordPath | Path to file containing keystore password, if the client keystore is password protected. | | -| bookkeeperTLSTrustStorePasswordPath | Path to file containing truststore password, if the client truststore is password protected. | | -| bookkeeperTLSKeyFilePath | Path for the TLS private key file. | | -| bookkeeperTLSCertificateFilePath | Path for the TLS certificate file. | | -| bookkeeperTLSTrustCertsFilePath | Path for the trusted TLS certificate file. | | -| bookkeeperDiskWeightBasedPlacementEnabled | Enable/Disable disk weight based placement. | false | -| bookkeeperExplicitLacIntervalInMills | Set the interval to check the need for sending an explicit LAC. When the value is set to 0, no explicit LAC is sent. | 0 | -| bookkeeperClientExposeStatsToPrometheus | Expose BookKeeper client managed ledger stats to Prometheus. | false | -|managedLedgerDefaultEnsembleSize| |1| -|managedLedgerDefaultWriteQuorum| |1| -|managedLedgerDefaultAckQuorum| |1| -| managedLedgerDigestType | Default type of checksum to use when writing to BookKeeper. | CRC32C | -| managedLedgerNumSchedulerThreads | Number of threads to be used for managed ledger scheduled tasks. | 8 | -|managedLedgerCacheSizeMB| |N/A| -|managedLedgerCacheCopyEntries| Whether to copy the entry payloads when inserting in cache.| false| -|managedLedgerCacheEvictionWatermark| |0.9| -|managedLedgerCacheEvictionFrequency| Configure the cache eviction frequency for the managed ledger cache (evictions/sec) | 100.0 | -|managedLedgerCacheEvictionTimeThresholdMillis| All entries that have stayed in cache for more than the configured time, will be evicted | 1000 | -|managedLedgerCursorBackloggedThreshold| Configure the threshold (in number of entries) from where a cursor should be considered 'backlogged' and thus should be set as inactive. | 1000| -|managedLedgerUnackedRangesOpenCacheSetEnabled| Use Open Range-Set to cache unacknowledged messages |true| -|managedLedgerDefaultMarkDeleteRateLimit| |0.1| -|managedLedgerMaxEntriesPerLedger| |50000| -|managedLedgerMinLedgerRolloverTimeMinutes| |10| -|managedLedgerMaxLedgerRolloverTimeMinutes| |240| -|managedLedgerCursorMaxEntriesPerLedger| |50000| -|managedLedgerCursorRolloverTimeInSeconds| |14400| -| managedLedgerMaxSizePerLedgerMbytes | Maximum ledger size before triggering a rollover for a topic. | 2048 | -| managedLedgerMaxUnackedRangesToPersist | Maximum number of "acknowledgment holes" that are going to be persistently stored. When acknowledging out of order, a consumer leaves holes that are supposed to be quickly filled by acknowledging all the messages. The information of which messages are acknowledged is persisted by compressing in "ranges" of messages that were acknowledged. After the max number of ranges is reached, the information is only tracked in memory and messages are redelivered in case of crashes. 
| 10000 | -| managedLedgerMaxUnackedRangesToPersistInZooKeeper | Maximum number of "acknowledgment holes" that can be stored in Zookeeper. If the number of unacknowledged message range is higher than this limit, the broker persists unacknowledged ranges into bookkeeper to avoid additional data overhead into Zookeeper. | 1000 | -|autoSkipNonRecoverableData| |false| -| managedLedgerMetadataOperationsTimeoutSeconds | Operation timeout while updating managed-ledger metadata. | 60 | -| managedLedgerReadEntryTimeoutSeconds | Read entries timeout when the broker tries to read messages from BookKeeper. | 0 | -| managedLedgerAddEntryTimeoutSeconds | Add entry timeout when the broker tries to publish message to BookKeeper. | 0 | -| managedLedgerNewEntriesCheckDelayInMillis | New entries check delay for the cursor under the managed ledger. If no new messages in the topic, the cursor tries to check again after the delay time. For consumption latency sensitive scenario, you can set the value to a smaller value or 0. Of course, a smaller value may degrade consumption throughput.|10 ms| -| managedLedgerPrometheusStatsLatencyRolloverSeconds | Managed ledger prometheus stats latency rollover seconds. | 60 | -| managedLedgerTraceTaskExecution | Whether to trace managed ledger task execution time. | true | -|managedLedgerNewEntriesCheckDelayInMillis|New entries check delay for the cursor under the managed ledger. If no new messages in the topic, the cursor will try to check again after the delay time. For consumption latency sensitive scenario, it can be set to a smaller value or 0. A smaller value degrades consumption throughput. By default, it is 10ms.|10| -|loadBalancerEnabled| |false| -|loadBalancerPlacementStrategy| |weightedRandomSelection| -|loadBalancerReportUpdateThresholdPercentage| |10| -|loadBalancerReportUpdateMaxIntervalMinutes| |15| -|loadBalancerHostUsageCheckIntervalMinutes| |1| -|loadBalancerSheddingIntervalMinutes| |30| -|loadBalancerSheddingGracePeriodMinutes| |30| -|loadBalancerBrokerMaxTopics| |50000| -|loadBalancerBrokerUnderloadedThresholdPercentage| |1| -|loadBalancerBrokerOverloadedThresholdPercentage| |85| -|loadBalancerResourceQuotaUpdateIntervalMinutes| |15| -|loadBalancerBrokerComfortLoadLevelPercentage| |65| -|loadBalancerAutoBundleSplitEnabled| |false| -| loadBalancerAutoUnloadSplitBundlesEnabled | Enable/Disable automatic unloading of split bundles. | true | -|loadBalancerNamespaceBundleMaxTopics| |1000| -|loadBalancerNamespaceBundleMaxSessions| |1000| -|loadBalancerNamespaceBundleMaxMsgRate| |1000| -|loadBalancerNamespaceBundleMaxBandwidthMbytes| |100| -|loadBalancerNamespaceMaximumBundles| |128| -| loadBalancerBrokerThresholdShedderPercentage | The broker resource usage threshold. When the broker resource usage is greater than the pulsar cluster average resource usage, the threshold shedder is triggered to offload bundles from the broker. It only takes effect in the ThresholdShedder strategy. | 10 | -| loadBalancerHistoryResourcePercentage | The history usage when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 0.9 | -| loadBalancerBandwithInResourceWeight | The BandWithIn usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 | -| loadBalancerBandwithOutResourceWeight | The BandWithOut usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. 
| 1.0 | -| loadBalancerCPUResourceWeight | The CPU usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 | -| loadBalancerMemoryResourceWeight | The heap memory usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 | -| loadBalancerDirectMemoryResourceWeight | The direct memory usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 | -| loadBalancerBundleUnloadMinThroughputThreshold | Bundle unload minimum throughput threshold. Avoid bundle unload frequently. It only takes effect in the ThresholdShedder strategy. | 10 | -|replicationMetricsEnabled| |true| -|replicationConnectionsPerBroker| |16| -|replicationProducerQueueSize| |1000| -| replicationPolicyCheckDurationSeconds | Duration to check replication policy to avoid replicator inconsistency due to missing ZooKeeper watch. When the value is set to 0, disable checking replication policy. | 600 | -|defaultRetentionTimeInMinutes| |0| -|defaultRetentionSizeInMB| |0| -|keepAliveIntervalSeconds| |30| -|haProxyProtocolEnabled | Enable or disable the [HAProxy](http://www.haproxy.org/) protocol. |false| -|bookieId | If you want to custom a bookie ID or use a dynamic network address for a bookie, you can set the `bookieId`.

    Bookie advertises itself using the `bookieId` rather than the `BookieSocketAddress` (`hostname:port` or `IP:port`).

    The `bookieId` is a non-empty string that can contain ASCII digits and letters ([a-zA-Z0-9]), colons, dashes, and dots.

    For more information about `bookieId`, see [here](http://bookkeeper.apache.org/bps/BP-41-bookieid/).|/| -| maxTopicsPerNamespace | The maximum number of persistent topics that can be created in the namespace. When the number of topics reaches this threshold, the broker rejects the request of creating a new topic, including the auto-created topics by the producer or consumer, until the number of connected consumers decreases. The default value 0 disables the check. | 0 | - -## WebSocket - -|Name|Description|Default| -|---|---|---| -|configurationStoreServers ||| -|zooKeeperSessionTimeoutMillis| |30000| -|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300| -|serviceUrl||| -|serviceUrlTls||| -|brokerServiceUrl||| -|brokerServiceUrlTls||| -|webServicePort||8080| -|webServicePortTls||8443| -|bindAddress||0.0.0.0| -|clusterName ||| -|authenticationEnabled||false| -|authenticationProviders||| -|authorizationEnabled||false| -|superUserRoles ||| -|brokerClientAuthenticationPlugin||| -|brokerClientAuthenticationParameters||| -|tlsEnabled||false| -|tlsAllowInsecureConnection||false| -|tlsCertificateFilePath||| -|tlsKeyFilePath ||| -|tlsTrustCertsFilePath||| - -## Pulsar proxy - -The [Pulsar proxy](concepts-architecture-overview.md#pulsar-proxy) can be configured in the `conf/proxy.conf` file. - - -|Name|Description|Default| -|---|---|---| -|forwardAuthorizationCredentials| Forward client authorization credentials to Broker for re-authorization, and make sure authentication is enabled for this to take effect. |false| -|zookeeperServers| The ZooKeeper quorum connection string (as a comma-separated list) || -|configurationStoreServers| Configuration store connection string (as a comma-separated list) || -| brokerServiceURL | The service URL pointing to the broker cluster. Must begin with `pulsar://`. | | -| brokerServiceURLTLS | The TLS service URL pointing to the broker cluster. Must begin with `pulsar+ssl://`. | | -| brokerWebServiceURL | The Web service URL pointing to the broker cluster | | -| brokerWebServiceURLTLS | The TLS Web service URL pointing to the broker cluster | | -| functionWorkerWebServiceURL | The Web service URL pointing to the function worker cluster. It is only configured when you setup function workers in a separate cluster. | | -| functionWorkerWebServiceURLTLS | The TLS Web service URL pointing to the function worker cluster. It is only configured when you setup function workers in a separate cluster. | | -|zookeeperSessionTimeoutMs| ZooKeeper session timeout (in milliseconds) |30000| -|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300| -|advertisedAddress|Hostname or IP address the service advertises to the outside world. If not set, the value of `InetAddress.getLocalHost().getHostname()` is used.|N/A| -|servicePort| The port to use for server binary Protobuf requests |6650| -|servicePortTls| The port to use to server binary Protobuf TLS requests |6651| -|statusFilePath| Path for the file used to determine the rotation status for the proxy instance when responding to service discovery health checks || -| proxyLogLevel | Proxy log level
  • 0: Do not log any TCP channel information.
  • 1: Parse and log any TCP channel information and command information without message body.
  • 2: Parse and log channel information, command information and message body.
| 0 |
-|authenticationEnabled| Whether authentication is enabled for the Pulsar proxy |false|
-|authenticateMetricsEndpoint| Whether the '/metrics' endpoint requires authentication. Defaults to true. 'authenticationEnabled' must also be set for this to take effect. |true|
-|authenticationProviders| Authentication provider name list (a comma-separated list of class names) ||
-|authorizationEnabled| Whether authorization is enforced by the Pulsar proxy |false|
-|authorizationProvider| Authorization provider as a fully qualified class name |org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider|
-| anonymousUserRole | When this parameter is not empty, unauthenticated users perform as anonymousUserRole. | |
-|brokerClientAuthenticationPlugin| The authentication plugin used by the Pulsar proxy to authenticate with Pulsar brokers ||
-|brokerClientAuthenticationParameters| The authentication parameters used by the Pulsar proxy to authenticate with Pulsar brokers ||
-|brokerClientTrustCertsFilePath| The path to trusted certificates used by the Pulsar proxy to authenticate with Pulsar brokers ||
-|superUserRoles| Role names that are treated as "super-users" and can perform all admin operations ||
-|maxConcurrentInboundConnections| Max concurrent inbound connections. The proxy rejects requests beyond that. |10000|
-|maxConcurrentLookupRequests| Max concurrent outbound connections. The proxy errors out requests beyond that. |50000|
-|tlsEnabledInProxy| Deprecated - use `servicePortTls` and `webServicePortTls` instead. |false|
-|tlsEnabledWithBroker| Whether TLS is enabled when communicating with Pulsar brokers. |false|
-| tlsCertRefreshCheckDurationSec | TLS certificate refresh duration in seconds. If the value is set to 0, the TLS certificate is checked on every new connection. | 300 |
-|tlsCertificateFilePath| Path for the TLS certificate file ||
-|tlsKeyFilePath| Path for the TLS private key file ||
-|tlsTrustCertsFilePath| Path for the trusted TLS certificate PEM file ||
-|tlsHostnameVerificationEnabled| Whether the hostname is validated when the proxy creates a TLS connection with brokers |false|
-|tlsRequireTrustedClientCertOnConnect| Whether client certificates are required for TLS. Connections are rejected if the client certificate is not trusted. |false|
-|tlsProtocols|Specify the TLS protocols the broker uses to negotiate during the TLS handshake. Multiple values can be specified, separated by commas. Example: ```TLSv1.3```, ```TLSv1.2``` ||
-|tlsCiphers|Specify the TLS ciphers the broker uses to negotiate during the TLS handshake. Multiple values can be specified, separated by commas. Example: ```TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256```||
-| httpReverseProxyConfigs | HTTP paths to redirect to non-Pulsar services | |
-| httpOutputBufferSize | HTTP output buffer size. The amount of data that will be buffered for HTTP requests before it is flushed to the channel. A larger buffer size may result in higher HTTP throughput, though it may take longer for the client to see data. If using HTTP streaming via the reverse proxy, this should be set to the minimum value (1) so that clients see the data as soon as possible. | 32768 |
-| httpNumThreads | Number of threads to use for HTTP request processing | 2 * Runtime.getRuntime().availableProcessors() |
-|tokenSecretKey| Configure the secret key to be used to validate auth tokens. The key can be specified like: `tokenSecretKey=data:;base64,xxxxxxxxx` or `tokenSecretKey=file:///my/secret.key`.
Note: key file must be DER-encoded.|| -|tokenPublicKey| Configure the public key to be used to validate auth tokens. The key can be specified like: `tokenPublicKey=data:;base64,xxxxxxxxx` or `tokenPublicKey=file:///my/secret.key`. Note: key file must be DER-encoded.|| -|tokenAuthClaim| Specify the token claim that will be used as the authentication "principal" or "role". The "subject" field will be used if this is left blank || -|tokenAudienceClaim| The token audience "claim" name, e.g. "aud". It is used to get the audience from token. If it is not set, the audience is not verified. || -| tokenAudience | The token audience stands for this broker. The field `tokenAudienceClaim` of a valid token need contains this parameter.| | -|haProxyProtocolEnabled | Enable or disable the [HAProxy](http://www.haproxy.org/) protocol. |false| - -## ZooKeeper - -ZooKeeper handles a broad range of essential configuration- and coordination-related tasks for Pulsar. The default configuration file for ZooKeeper is in the `conf/zookeeper.conf` file in your Pulsar installation. The following parameters are available: - - -|Name|Description|Default| -|---|---|---| -|tickTime| The tick is the basic unit of time in ZooKeeper, measured in milliseconds and used to regulate things like heartbeats and timeouts. tickTime is the length of a single tick. |2000| -|initLimit| The maximum time, in ticks, that the leader ZooKeeper server allows follower ZooKeeper servers to successfully connect and sync. The tick time is set in milliseconds using the tickTime parameter. |10| -|syncLimit| The maximum time, in ticks, that a follower ZooKeeper server is allowed to sync with other ZooKeeper servers. The tick time is set in milliseconds using the tickTime parameter. |5| -|dataDir| The location where ZooKeeper will store in-memory database snapshots as well as the transaction log of updates to the database. |data/zookeeper| -|clientPort| The port on which the ZooKeeper server will listen for connections. |2181| -|admin.enableServer|The port at which the admin listens.|true| -|admin.serverPort|The port at which the admin listens.|9990| -|autopurge.snapRetainCount| In ZooKeeper, auto purge determines how many recent snapshots of the database stored in dataDir to retain within the time interval specified by autopurge.purgeInterval (while deleting the rest). |3| -|autopurge.purgeInterval| The time interval, in hours, by which the ZooKeeper database purge task is triggered. Setting to a non-zero number will enable auto purge; setting to 0 will disable. Read this guide before enabling auto purge. |1| -|forceSync|Requires updates to be synced to media of the transaction log before finishing processing the update. If this option is set to 'no', ZooKeeper will not require updates to be synced to the media. WARNING: it's not recommended to run a production ZK cluster with `forceSync` disabled.|yes| -|maxClientCnxns| The maximum number of client connections. Increase this if you need to handle more ZooKeeper clients. |60| - - - - -In addition to the parameters in the table above, configuring ZooKeeper for Pulsar involves adding -a `server.N` line to the `conf/zookeeper.conf` file for each node in the ZooKeeper cluster, where `N` is the number of the ZooKeeper node. 
Here's an example for a three-node ZooKeeper cluster: - -```properties - -server.1=zk1.us-west.example.com:2888:3888 -server.2=zk2.us-west.example.com:2888:3888 -server.3=zk3.us-west.example.com:2888:3888 - -``` - -> We strongly recommend consulting the [ZooKeeper Administrator's Guide](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html) for a more thorough and comprehensive introduction to ZooKeeper configuration diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/reference-connector-admin.md b/site2/website/versioned_docs/version-2.8.1-deprecated/reference-connector-admin.md deleted file mode 100644 index 7b73ae80750cd4..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/reference-connector-admin.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -id: reference-connector-admin -title: Connector Admin CLI -sidebar_label: "Connector Admin CLI" -original_id: reference-connector-admin ---- - -> **Important** -> -> For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more information, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/). -> \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/reference-metrics.md b/site2/website/versioned_docs/version-2.8.1-deprecated/reference-metrics.md deleted file mode 100644 index 2d7944e0c83360..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/reference-metrics.md +++ /dev/null @@ -1,555 +0,0 @@ ---- -id: reference-metrics -title: Pulsar Metrics -sidebar_label: "Pulsar Metrics" -original_id: reference-metrics ---- - - - -Pulsar exposes the following metrics in Prometheus format. You can monitor your clusters with those metrics. - -* [ZooKeeper](#zookeeper) -* [BookKeeper](#bookkeeper) -* [Broker](#broker) -* [Pulsar Functions](#pulsar-functions) -* [Proxy](#proxy) -* [Pulsar SQL Worker](#pulsar-sql-worker) -* [Pulsar transaction](#pulsar-transaction) - -The following types of metrics are available: - -- [Counter](https://prometheus.io/docs/concepts/metric_types/#counter): a cumulative metric that represents a single monotonically increasing counter. The value increases by default. You can reset the value to zero or restart your cluster. -- [Gauge](https://prometheus.io/docs/concepts/metric_types/#gauge): a metric that represents a single numerical value that can arbitrarily go up and down. -- [Histogram](https://prometheus.io/docs/concepts/metric_types/#histogram): a histogram samples observations (usually things like request durations or response sizes) and counts them in configurable buckets. The `_bucket` suffix is the number of observations within a histogram bucket, configured with parameter `{le=""}`. The `_count` suffix is the number of observations, shown as a time series and behaves like a counter. The `_sum` suffix is the sum of observed values, also shown as a time series and behaves like a counter. These suffixes are together denoted by `_*` in this doc. -- [Summary](https://prometheus.io/docs/concepts/metric_types/#summary): similar to a histogram, a summary samples observations (usually things like request durations and response sizes). While it also provides a total count of observations and a sum of all observed values, it calculates configurable quantiles over a sliding time window. - -## ZooKeeper - -The ZooKeeper metrics are exposed under "/metrics" at port `8000`. You can use a different port by configuring the `metricsProvider.httpPort` in conf/zookeeper.conf. 
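For a quick sanity check, you can scrape the endpoint directly. A minimal sketch, assuming ZooKeeper runs locally with the Prometheus metrics provider enabled on the default port:

```shell

# Fetch ZooKeeper metrics in Prometheus text format and show the first lines
curl -s http://localhost:8000/metrics | head -n 20

```

If you changed `metricsProvider.httpPort`, substitute that port in the URL.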
- -ZooKeeper provides a New Metrics System since 3.6.0. For more detailed metrics, refer to the [ZooKeeper Monitor Guide](https://zookeeper.apache.org/doc/r3.7.0/zookeeperMonitor.html). - -## BookKeeper - -The BookKeeper metrics are exposed under "/metrics" at port `8000`. You can change the port by updating `prometheusStatsHttpPort` -in the `bookkeeper.conf` configuration file. - -### Server metrics - -| Name | Type | Description | -|---|---|---| -| bookie_SERVER_STATUS | Gauge | The server status for bookie server.
    • 1: the bookie is running in writable mode.
    • 0: the bookie is running in readonly mode.
    | -| bookkeeper_server_ADD_ENTRY_count | Counter | The total number of ADD_ENTRY requests received at the bookie. The `success` label is used to distinguish successes and failures. | -| bookkeeper_server_READ_ENTRY_count | Counter | The total number of READ_ENTRY requests received at the bookie. The `success` label is used to distinguish successes and failures. | -| bookie_WRITE_BYTES | Counter | The total number of bytes written to the bookie. | -| bookie_READ_BYTES | Counter | The total number of bytes read from the bookie. | -| bookkeeper_server_ADD_ENTRY_REQUEST | Summary | The summary of request latency of ADD_ENTRY requests at the bookie. The `success` label is used to distinguish successes and failures. | -| bookkeeper_server_READ_ENTRY_REQUEST | Summary | The summary of request latency of READ_ENTRY requests at the bookie. The `success` label is used to distinguish successes and failures. | - -### Journal metrics - -| Name | Type | Description | -|---|---|---| -| bookie_journal_JOURNAL_SYNC_count | Counter | The total number of journal fsync operations happening at the bookie. The `success` label is used to distinguish successes and failures. | -| bookie_journal_JOURNAL_QUEUE_SIZE | Gauge | The total number of requests pending in the journal queue. | -| bookie_journal_JOURNAL_FORCE_WRITE_QUEUE_SIZE | Gauge | The total number of force write (fsync) requests pending in the force-write queue. | -| bookie_journal_JOURNAL_CB_QUEUE_SIZE | Gauge | The total number of callbacks pending in the callback queue. | -| bookie_journal_JOURNAL_ADD_ENTRY | Summary | The summary of request latency of adding entries to the journal. | -| bookie_journal_JOURNAL_SYNC | Summary | The summary of fsync latency of syncing data to the journal disk. | - -### Storage metrics - -| Name | Type | Description | -|---|---|---| -| bookie_ledgers_count | Gauge | The total number of ledgers stored in the bookie. | -| bookie_entries_count | Gauge | The total number of entries stored in the bookie. | -| bookie_write_cache_size | Gauge | The bookie write cache size (in bytes). | -| bookie_read_cache_size | Gauge | The bookie read cache size (in bytes). | -| bookie_DELETED_LEDGER_COUNT | Counter | The total number of ledgers deleted since the bookie has started. | -| bookie_ledger_writable_dirs | Gauge | The number of writable directories in the bookie. | - -## Broker - -The broker metrics are exposed under "/metrics" at port `8080`. You can change the port by updating `webServicePort` to a different port -in the `broker.conf` configuration file. - -All the metrics exposed by a broker are labelled with `cluster=${pulsar_cluster}`. The name of Pulsar cluster is the value of `${pulsar_cluster}`, which you have configured in the `broker.conf` file. 
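To see this label in practice, you can scrape a broker and filter for a single metric. A minimal sketch, assuming a broker reachable locally on the default web service port `8080` (`pulsar_topics_count` is described below):

```shell

# Each sample should carry the cluster label, e.g. cluster="standalone"
curl -s http://localhost:8080/metrics | grep 'pulsar_topics_count'

```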
- -The following metrics are available for broker: - -- [ZooKeeper](#zookeeper) - - [Server metrics](#server-metrics) - - [Request metrics](#request-metrics) -- [BookKeeper](#bookkeeper) - - [Server metrics](#server-metrics-1) - - [Journal metrics](#journal-metrics) - - [Storage metrics](#storage-metrics) -- [Broker](#broker) - - [Namespace metrics](#namespace-metrics) - - [Replication metrics](#replication-metrics) - - [Topic metrics](#topic-metrics) - - [Replication metrics](#replication-metrics-1) - - [ManagedLedgerCache metrics](#managedledgercache-metrics) - - [ManagedLedger metrics](#managedledger-metrics) - - [LoadBalancing metrics](#loadbalancing-metrics) - - [BundleUnloading metrics](#bundleunloading-metrics) - - [BundleSplit metrics](#bundlesplit-metrics) - - [Subscription metrics](#subscription-metrics) - - [Consumer metrics](#consumer-metrics) - - [Managed ledger bookie client metrics](#managed-ledger-bookie-client-metrics) - - [Token metrics](#token-metrics) - - [Authentication metrics](#authentication-metrics) - - [Connection metrics](#connection-metrics) - - [Jetty metrics](#jetty-metrics) -- [Pulsar Functions](#pulsar-functions) -- [Proxy](#proxy) -- [Pulsar SQL Worker](#pulsar-sql-worker) -- [Pulsar transaction](#pulsar-transaction) - -### Namespace metrics - -> Namespace metrics are only exposed when `exposeTopicLevelMetricsInPrometheus` is set to `false`. - -All the namespace metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you configured in `broker.conf`. -- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name. - -| Name | Type | Description | -|---|---|---| -| pulsar_topics_count | Gauge | The number of Pulsar topics of the namespace owned by this broker. | -| pulsar_subscriptions_count | Gauge | The number of Pulsar subscriptions of the namespace served by this broker. | -| pulsar_producers_count | Gauge | The number of active producers of the namespace connected to this broker. | -| pulsar_consumers_count | Gauge | The number of active consumers of the namespace connected to this broker. | -| pulsar_rate_in | Gauge | The total message rate of the namespace coming into this broker (messages/second). | -| pulsar_rate_out | Gauge | The total message rate of the namespace going out from this broker (messages/second). | -| pulsar_throughput_in | Gauge | The total throughput of the namespace coming into this broker (bytes/second). | -| pulsar_throughput_out | Gauge | The total throughput of the namespace going out from this broker (bytes/second). | -| pulsar_storage_size | Gauge | The total storage size of the topics in this namespace owned by this broker (bytes). | -| pulsar_storage_backlog_size | Gauge | The total backlog size of the topics of this namespace owned by this broker (messages). | -| pulsar_storage_offloaded_size | Gauge | The total amount of the data in this namespace offloaded to the tiered storage (bytes). | -| pulsar_storage_write_rate | Gauge | The total message batches (entries) written to the storage for this namespace (message batches / second). | -| pulsar_storage_read_rate | Gauge | The total message batches (entries) read from the storage for this namespace (message batches / second). | -| pulsar_subscription_delayed | Gauge | The total message batches (entries) are delayed for dispatching. 
|
-| pulsar_storage_write_latency_le_* | Histogram | The entry rate of a namespace whose storage write latency is smaller than a given threshold.
    Available thresholds:
    • pulsar_storage_write_latency_le_0_5: <= 0.5ms
    • pulsar_storage_write_latency_le_1: <= 1ms
    • pulsar_storage_write_latency_le_5: <= 5ms
    • pulsar_storage_write_latency_le_10: <= 10ms
    • pulsar_storage_write_latency_le_20: <= 20ms
    • pulsar_storage_write_latency_le_50: <= 50ms
    • pulsar_storage_write_latency_le_100: <= 100ms
    • pulsar_storage_write_latency_le_200: <= 200ms
    • pulsar_storage_write_latency_le_1000: <= 1s
    • pulsar_storage_write_latency_le_overflow: > 1s
|
-| pulsar_entry_size_le_* | Histogram | The entry rate of a namespace whose entry size is smaller than a given threshold.
    Available thresholds:
    • pulsar_entry_size_le_128: <= 128 bytes
    • pulsar_entry_size_le_512: <= 512 bytes
    • pulsar_entry_size_le_1_kb: <= 1 KB
    • pulsar_entry_size_le_2_kb: <= 2 KB
    • pulsar_entry_size_le_4_kb: <= 4 KB
    • pulsar_entry_size_le_16_kb: <= 16 KB
    • pulsar_entry_size_le_100_kb: <= 100 KB
    • pulsar_entry_size_le_1_mb: <= 1 MB
    • pulsar_entry_size_le_overflow: > 1 MB
    | - -#### Replication metrics - -If a namespace is configured to be replicated among multiple Pulsar clusters, the corresponding replication metrics is also exposed when `replicationMetricsEnabled` is enabled. - -All the replication metrics are also labelled with `remoteCluster=${pulsar_remote_cluster}`. - -| Name | Type | Description | -|---|---|---| -| pulsar_replication_rate_in | Gauge | The total message rate of the namespace replicating from remote cluster (messages/second). | -| pulsar_replication_rate_out | Gauge | The total message rate of the namespace replicating to remote cluster (messages/second). | -| pulsar_replication_throughput_in | Gauge | The total throughput of the namespace replicating from remote cluster (bytes/second). | -| pulsar_replication_throughput_out | Gauge | The total throughput of the namespace replicating to remote cluster (bytes/second). | -| pulsar_replication_backlog | Gauge | The total backlog of the namespace replicating to remote cluster (messages). | -| pulsar_replication_rate_expired | Gauge | Total rate of messages expired (messages/second). | -| pulsar_replication_connected_count | Gauge | The count of replication-subscriber up and running to replicate to remote cluster. | -| pulsar_replication_delay_in_seconds | Gauge | Time in seconds from the time a message was produced to the time when it is about to be replicated. | - -### Topic metrics - -> Topic metrics are only exposed when `exposeTopicLevelMetricsInPrometheus` is set to `true`. - -All the topic metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you configured in `broker.conf`. -- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name. -- *topic*: `topic=${pulsar_topic}`. `${pulsar_topic}` is the topic name. - -| Name | Type | Description | -|---|---|---| -| pulsar_subscriptions_count | Gauge | The number of Pulsar subscriptions of the topic served by this broker. | -| pulsar_producers_count | Gauge | The number of active producers of the topic connected to this broker. | -| pulsar_consumers_count | Gauge | The number of active consumers of the topic connected to this broker. | -| pulsar_rate_in | Gauge | The total message rate of the topic coming into this broker (messages/second). | -| pulsar_rate_out | Gauge | The total message rate of the topic going out from this broker (messages/second). | -| pulsar_throughput_in | Gauge | The total throughput of the topic coming into this broker (bytes/second). | -| pulsar_throughput_out | Gauge | The total throughput of the topic going out from this broker (bytes/second). | -| pulsar_storage_size | Gauge | The total storage size of the topics in this topic owned by this broker (bytes). | -| pulsar_storage_backlog_size | Gauge | The total backlog size of the topics of this topic owned by this broker (messages). | -| pulsar_storage_offloaded_size | Gauge | The total amount of the data in this topic offloaded to the tiered storage (bytes). | -| pulsar_storage_backlog_quota_limit | Gauge | The total amount of the data in this topic that limit the backlog quota (bytes). | -| pulsar_storage_write_rate | Gauge | The total message batches (entries) written to the storage for this topic (message batches / second). | -| pulsar_storage_read_rate | Gauge | The total message batches (entries) read from the storage for this topic (message batches / second). 
|
-| pulsar_subscription_delayed | Gauge | The total number of message batches (entries) that are delayed for dispatching. |
-| pulsar_storage_write_latency_le_* | Histogram | The entry rate of a topic whose storage write latency is smaller than a given threshold.
    Available thresholds:
    • pulsar_storage_write_latency_le_0_5: <= 0.5ms
    • pulsar_storage_write_latency_le_1: <= 1ms
    • pulsar_storage_write_latency_le_5: <= 5ms
    • pulsar_storage_write_latency_le_10: <= 10ms
    • pulsar_storage_write_latency_le_20: <= 20ms
    • pulsar_storage_write_latency_le_50: <= 50ms
    • pulsar_storage_write_latency_le_100: <= 100ms
    • pulsar_storage_write_latency_le_200: <= 200ms
    • pulsar_storage_write_latency_le_1000: <= 1s
    • pulsar_storage_write_latency_le_overflow: > 1s
|
-| pulsar_entry_size_le_* | Histogram | The entry rate of a topic whose entry size is smaller than a given threshold.
    Available thresholds:
    • pulsar_entry_size_le_128: <= 128 bytes
    • pulsar_entry_size_le_512: <= 512 bytes
    • pulsar_entry_size_le_1_kb: <= 1 KB
    • pulsar_entry_size_le_2_kb: <= 2 KB
    • pulsar_entry_size_le_4_kb: <= 4 KB
    • pulsar_entry_size_le_16_kb: <= 16 KB
    • pulsar_entry_size_le_100_kb: <= 100 KB
    • pulsar_entry_size_le_1_mb: <= 1 MB
    • pulsar_entry_size_le_overflow: > 1 MB
    | -| pulsar_in_bytes_total | Counter | The total number of bytes received for this topic | -| pulsar_in_messages_total | Counter | The total number of messages received for this topic | -| pulsar_out_bytes_total | Counter | The total number of bytes read from this topic | -| pulsar_out_messages_total | Counter | The total number of messages read from this topic | - -#### Replication metrics - -If a namespace that a topic belongs to is configured to be replicated among multiple Pulsar clusters, the corresponding replication metrics is also exposed when `replicationMetricsEnabled` is enabled. - -All the replication metrics are labelled with `remoteCluster=${pulsar_remote_cluster}`. - -| Name | Type | Description | -|---|---|---| -| pulsar_replication_rate_in | Gauge | The total message rate of the topic replicating from remote cluster (messages/second). | -| pulsar_replication_rate_out | Gauge | The total message rate of the topic replicating to remote cluster (messages/second). | -| pulsar_replication_throughput_in | Gauge | The total throughput of the topic replicating from remote cluster (bytes/second). | -| pulsar_replication_throughput_out | Gauge | The total throughput of the topic replicating to remote cluster (bytes/second). | -| pulsar_replication_backlog | Gauge | The total backlog of the topic replicating to remote cluster (messages). | - -### ManagedLedgerCache metrics -All the ManagedLedgerCache metrics are labelled with the following labels: -- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file. - -| Name | Type | Description | -| --- | --- | --- | -| pulsar_ml_cache_evictions | Gauge | The number of cache evictions during the last minute. | -| pulsar_ml_cache_hits_rate | Gauge | The number of cache hits per second. | -| pulsar_ml_cache_hits_throughput | Gauge | The amount of data is retrieved from the cache in byte/s | -| pulsar_ml_cache_misses_rate | Gauge | The number of cache misses per second | -| pulsar_ml_cache_misses_throughput | Gauge | The amount of data is retrieved from the cache in byte/s | -| pulsar_ml_cache_pool_active_allocations | Gauge | The number of currently active allocations in direct arena | -| pulsar_ml_cache_pool_active_allocations_huge | Gauge | The number of currently active huge allocation in direct arena | -| pulsar_ml_cache_pool_active_allocations_normal | Gauge | The number of currently active normal allocations in direct arena | -| pulsar_ml_cache_pool_active_allocations_small | Gauge | The number of currently active small allocations in direct arena | -| pulsar_ml_cache_pool_active_allocations_tiny | Gauge | The number of currently active tiny allocations in direct arena | -| pulsar_ml_cache_pool_allocated | Gauge | The total allocated memory of chunk lists in direct arena | -| pulsar_ml_cache_pool_used | Gauge | The total used memory of chunk lists in direct arena | -| pulsar_ml_cache_used_size | Gauge | The size in byte used to store the entries payloads | -| pulsar_ml_count | Gauge | The number of currently opened managed ledgers | - -### ManagedLedger metrics -All the managedLedger metrics are labelled with the following labels: -- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file. -- namespace: namespace=${pulsar_namespace}. ${pulsar_namespace} is the namespace name. -- quantile: quantile=${quantile}. 
Quantile only applies to `Histogram` type metrics and represents the threshold for the given buckets.
-
-| Name | Type | Description |
-| --- | --- | --- |
-| pulsar_ml_AddEntryBytesRate | Gauge | The bytes/s rate of messages added |
-| pulsar_ml_AddEntryErrors | Gauge | The number of addEntry requests that failed |
-| pulsar_ml_AddEntryLatencyBuckets | Histogram | The latency of adding a ledger entry with a given quantile (threshold), including time spent waiting in the queue on the broker side
    Available quantiles:
    • quantile="0.0_0.5" is AddEntryLatency between (0.0ms, 0.5ms]
    • quantile="0.5_1.0" is AddEntryLatency between (0.5ms, 1.0ms]
    • quantile="1.0_5.0" is AddEntryLatency between (1ms, 5ms]
    • quantile="5.0_10.0" is AddEntryLatency between (5ms, 10ms]
    • quantile="10.0_20.0" is AddEntryLatency between (10ms, 20ms]
    • quantile="20.0_50.0" is AddEntryLatency between (20ms, 50ms]
    • quantile="50.0_100.0" is AddEntryLatency between (50ms, 100ms]
    • quantile="100.0_200.0" is AddEntryLatency between (100ms, 200ms]
    • quantile="200.0_1000.0" is AddEntryLatency between (200ms, 1s]
    | -| pulsar_ml_AddEntryLatencyBuckets_OVERFLOW | Gauge | The number of times the AddEntryLatency is longer than 1 second | -| pulsar_ml_AddEntryMessagesRate | Gauge | The msg/s rate of messages added | -| pulsar_ml_AddEntrySucceed | Gauge | The number of addEntry requests that succeeded | -| pulsar_ml_EntrySizeBuckets | Histogram | The added entry size of a ledger with a given quantile.
    Available quantiles:
    • quantile="0.0_128.0" is EntrySize between (0byte, 128byte]
    • quantile="128.0_512.0" is EntrySize between (128byte, 512byte]
    • quantile="512.0_1024.0" is EntrySize between (512byte, 1KB]
    • quantile="1024.0_2048.0" is EntrySize between (1KB, 2KB]
    • quantile="2048.0_4096.0" is EntrySize between (2KB, 4KB]
    • quantile="4096.0_16384.0" is EntrySize between (4KB, 16KB]
    • quantile="16384.0_102400.0" is EntrySize between (16KB, 100KB]
    • quantile="102400.0_1232896.0" is EntrySize between (100KB, 1MB]
|
-| pulsar_ml_EntrySizeBuckets_OVERFLOW | Gauge | The number of times the EntrySize is larger than 1MB |
-| pulsar_ml_LedgerSwitchLatencyBuckets | Histogram | The ledger switch latency with a given quantile.
    Available quantiles:
    • quantile="0.0_0.5" is EntrySize between (0ms, 0.5ms]
    • quantile="0.5_1.0" is EntrySize between (0.5ms, 1ms]
    • quantile="1.0_5.0" is EntrySize between (1ms, 5ms]
    • quantile="5.0_10.0" is EntrySize between (5ms, 10ms]
    • quantile="10.0_20.0" is EntrySize between (10ms, 20ms]
    • quantile="20.0_50.0" is EntrySize between (20ms, 50ms]
    • quantile="50.0_100.0" is EntrySize between (50ms, 100ms]
    • quantile="100.0_200.0" is EntrySize between (100ms, 200ms]
    • quantile="200.0_1000.0" is EntrySize between (200ms, 1000ms]
|
-| pulsar_ml_LedgerSwitchLatencyBuckets_OVERFLOW | Gauge | The number of times the ledger switch latency is longer than 1 second |
-| pulsar_ml_LedgerAddEntryLatencyBuckets | Histogram | The latency for the bookie client to persist a ledger entry from the broker to the BookKeeper service with a given quantile (threshold).
    Available quantiles:
    • quantile="0.0_0.5" is LedgerAddEntryLatency between (0.0ms, 0.5ms]
    • quantile="0.5_1.0" is LedgerAddEntryLatency between (0.5ms, 1.0ms]
    • quantile="1.0_5.0" is LedgerAddEntryLatency between (1ms, 5ms]
    • quantile="5.0_10.0" is LedgerAddEntryLatency between (5ms, 10ms]
    • quantile="10.0_20.0" is LedgerAddEntryLatency between (10ms, 20ms]
    • quantile="20.0_50.0" is LedgerAddEntryLatency between (20ms, 50ms]
    • quantile="50.0_100.0" is LedgerAddEntryLatency between (50ms, 100ms]
    • quantile="100.0_200.0" is LedgerAddEntryLatency between (100ms, 200ms]
    • quantile="200.0_1000.0" is LedgerAddEntryLatency between (200ms, 1s]
    | -| pulsar_ml_LedgerAddEntryLatencyBuckets_OVERFLOW | Gauge | The number of times the LedgerAddEntryLatency is longer than 1 second | -| pulsar_ml_MarkDeleteRate | Gauge | The rate of mark-delete ops/s | -| pulsar_ml_NumberOfMessagesInBacklog | Gauge | The number of backlog messages for all the consumers | -| pulsar_ml_ReadEntriesBytesRate | Gauge | The bytes/s rate of messages read | -| pulsar_ml_ReadEntriesErrors | Gauge | The number of readEntries requests that failed | -| pulsar_ml_ReadEntriesRate | Gauge | The msg/s rate of messages read | -| pulsar_ml_ReadEntriesSucceeded | Gauge | The number of readEntries requests that succeeded | -| pulsar_ml_StoredMessagesSize | Gauge | The total size of the messages in active ledgers (accounting for the multiple copies stored) | - -### Managed cursor acknowledgment state - -The acknowledgment state is persistent to the ledger first. When the acknowledgment state fails to be persistent to the ledger, they are persistent to ZooKeeper. To track the stats of acknowledgment, you can configure the metrics for the managed cursor. - -All the cursor acknowledgment state metrics are labelled with the following labels: - -- namespace: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name. - -- ledger_name: `ledger_name=${pulsar_ledger_name}`. `${pulsar_ledger_name}` is the ledger name. - -- cursor_name: `ledger_name=${pulsar_cursor_name}`. `${pulsar_cursor_name}` is the cursor name. - -Name |Type |Description -|---|---|--- -brk_ml_cursor_persistLedgerSucceed(namespace=", ledger_name="", cursor_name:")|Gauge|The number of acknowledgment states that is persistent to a ledger.| -brk_ml_cursor_persistLedgerErrors(namespace="", ledger_name="", cursor_name:"")|Gauge|The number of ledger errors occurred when acknowledgment states fail to be persistent to the ledger.| -brk_ml_cursor_persistZookeeperSucceed(namespace="", ledger_name="", cursor_name:"")|Gauge|The number of acknowledgment states that is persistent to ZooKeeper. -brk_ml_cursor_persistZookeeperErrors(namespace="", ledger_name="", cursor_name:"")|Gauge|The number of ledger errors occurred when acknowledgment states fail to be persistent to ZooKeeper. -brk_ml_cursor_nonContiguousDeletedMessagesRange(namespace="", ledger_name="", cursor_name:"")|Gauge|The number of non-contiguous deleted messages ranges. - -### LoadBalancing metrics -All the loadbalancing metrics are labelled with the following labels: -- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file. -- broker: broker=${broker}. ${broker} is the IP address of the broker -- metric: metric="loadBalancing". - -| Name | Type | Description | -| --- | --- | --- | -| pulsar_lb_bandwidth_in_usage | Gauge | The broker inbound bandwith usage (in percent). | -| pulsar_lb_bandwidth_out_usage | Gauge | The broker outbound bandwith usage (in percent). | -| pulsar_lb_cpu_usage | Gauge | The broker cpu usage (in percent). | -| pulsar_lb_directMemory_usage | Gauge | The broker process direct memory usage (in percent). | -| pulsar_lb_memory_usage | Gauge | The broker process memory usage (in percent). | - -#### BundleUnloading metrics -All the bundleUnloading metrics are labelled with the following labels: -- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file. -- metric: metric="bundleUnloading". 
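Because every sample carries `metric="bundleUnloading"`, the counters are easy to isolate when scraping. A minimal sketch, assuming the leader broker is reachable locally on port `8080`:

```shell

# Isolate the bundle-unloading counters listed in the table below
curl -s http://localhost:8080/metrics | grep 'metric="bundleUnloading"'

```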

-| Name | Type | Description |
-| --- | --- | --- |
-| pulsar_lb_unload_broker_count | Counter | Unload broker count in this bundle unloading |
-| pulsar_lb_unload_bundle_count | Counter | Bundle unload count in this bundle unloading |
-
-#### BundleSplit metrics
-All the bundleSplit metrics are labelled with the following labels:
-- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file.
-- metric: metric="bundlesSplit".
-
-| Name | Type | Description |
-| --- | --- | --- |
-| pulsar_lb_bundles_split_count | Counter | The total count of bundle splits in this leader broker |
-
-### Subscription metrics
-
-> Subscription metrics are only exposed when `exposeTopicLevelMetricsInPrometheus` is set to `true`.
-
-All the subscription metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.
-- *topic*: `topic=${pulsar_topic}`. `${pulsar_topic}` is the topic name.
-- *subscription*: `subscription=${subscription}`. `${subscription}` is the topic subscription name.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_subscription_back_log | Gauge | The total backlog of a subscription (messages). |
-| pulsar_subscription_delayed | Gauge | The total number of messages that are delayed to be dispatched for a subscription (messages). |
-| pulsar_subscription_msg_rate_redeliver | Gauge | The total message rate for messages being redelivered (messages/second). |
-| pulsar_subscription_unacked_messages | Gauge | The total number of unacknowledged messages of a subscription (messages). |
-| pulsar_subscription_blocked_on_unacked_messages | Gauge | Indicates whether a subscription is blocked on unacknowledged messages or not.
    • 1 means the subscription is blocked waiting for unacknowledged messages to be acked.
    • 0 means the subscription is not blocked waiting for unacknowledged messages to be acked.
    | -| pulsar_subscription_msg_rate_out | Gauge | The total message dispatch rate for a subscription (messages/second). | -| pulsar_subscription_msg_throughput_out | Gauge | The total message dispatch throughput for a subscription (bytes/second). | - -### Consumer metrics - -> Consumer metrics are only exposed when both `exposeTopicLevelMetricsInPrometheus` and `exposeConsumerLevelMetricsInPrometheus` are set to `true`. - -All the consumer metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file. -- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name. -- *topic*: `topic=${pulsar_topic}`. `${pulsar_topic}` is the topic name. -- *subscription*: `subscription=${subscription}`. `${subscription}` is the topic subscription name. -- *consumer_name*: `consumer_name=${consumer_name}`. `${consumer_name}` is the topic consumer name. -- *consumer_id*: `consumer_id=${consumer_id}`. `${consumer_id}` is the topic consumer id. - -| Name | Type | Description | -|---|---|---| -| pulsar_consumer_msg_rate_redeliver | Gauge | The total message rate for message being redelivered (messages/second). | -| pulsar_consumer_unacked_messages | Gauge | The total number of unacknowledged messages of a consumer (messages). | -| pulsar_consumer_blocked_on_unacked_messages | Gauge | Indicate whether a consumer is blocked on unacknowledged messages or not.
    • 1 means the consumer is blocked waiting for unacknowledged messages to be acked.
    • 0 means the consumer is not blocked waiting for unacknowledged messages to be acked.
|
-| pulsar_consumer_msg_rate_out | Gauge | The total message dispatch rate for a consumer (messages/second). |
-| pulsar_consumer_msg_throughput_out | Gauge | The total message dispatch throughput for a consumer (bytes/second). |
-| pulsar_consumer_available_permits | Gauge | The available permits for a consumer. |
-
-### Managed ledger bookie client metrics
-
-All the managed ledger bookie client metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-
-| Name | Type | Description |
-| --- | --- | --- |
-| pulsar_managedLedger_client_bookkeeper_ml_scheduler_completed_tasks_* | Gauge | The number of tasks that the scheduler executor has completed.
    The number of metrics is determined by the number of scheduler executor threads, which is configured by `managedLedgerNumSchedulerThreads` in `broker.conf`.
    | -| pulsar_managedLedger_client_bookkeeper_ml_scheduler_queue_* | Gauge | The number of tasks queued in the scheduler executor's queue.
    The number of metrics is determined by the number of scheduler executor threads, which is configured by `managedLedgerNumSchedulerThreads` in `broker.conf`.
    | -| pulsar_managedLedger_client_bookkeeper_ml_scheduler_total_tasks_* | Gauge | The total number of tasks the scheduler executor received.
    The number of metrics is determined by the number of scheduler executor threads, which is configured by `managedLedgerNumSchedulerThreads` in `broker.conf`.
    | -| pulsar_managedLedger_client_bookkeeper_ml_scheduler_task_execution | Summary | The scheduler task execution latency calculated in milliseconds. | -| pulsar_managedLedger_client_bookkeeper_ml_scheduler_task_queued | Summary | The scheduler task queued latency calculated in milliseconds. | - -### Token metrics - -All the token metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file. - -| Name | Type | Description | -|---|---|---| -| pulsar_expired_token_count | Counter | The number of expired tokens in Pulsar. | -| pulsar_expiring_token_minutes | Histogram | The remaining time of expiring tokens in minutes. | - -### Authentication metrics - -All the authentication metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file. -- *provider_name*: `provider_name=${provider_name}`. `${provider_name}` is the class name of the authentication provider. -- *auth_method*: `auth_method=${auth_method}`. `${auth_method}` is the authentication method of the authentication provider. -- *reason*: `reason=${reason}`. `${reason}` is the reason for failing authentication operation. (This label is only for `pulsar_authentication_failures_count`.) - -| Name | Type | Description | -|---|---|---| -| pulsar_authentication_success_count| Counter | The number of successful authentication operations. | -| pulsar_authentication_failures_count | Counter | The number of failing authentication operations. | - -### Connection metrics - -All the connection metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file. -- *broker*: `broker=${advertised_address}`. `${advertised_address}` is the advertised address of the broker. -- *metric*: `metric=${metric}`. `${metric}` is the connection metric collective name. - -| Name | Type | Description | -|---|---|---| -| pulsar_active_connections| Gauge | The number of active connections. | -| pulsar_connection_created_total_count | Gauge | The total number of connections. | -| pulsar_connection_create_success_count | Gauge | The number of successfully created connections. | -| pulsar_connection_create_fail_count | Gauge | The number of failed connections. | -| pulsar_connection_closed_total_count | Gauge | The total number of closed connections. | -| pulsar_broker_throttled_connections | Gauge | The number of throttled connections. | -| pulsar_broker_throttled_connections_global_limit | Gauge | The number of throttled connections because of per-connection limit. | - -### Jetty metrics - -> For a functions-worker running separately from brokers, its Jetty metrics are only exposed when `includeStandardPrometheusMetrics` is set to `true`. - -All the jetty metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file. - -| Name | Type | Description | -|---|---|---| -| jetty_requests_total | Counter | Number of requests. | -| jetty_requests_active | Gauge | Number of requests currently active. | -| jetty_requests_active_max | Gauge | Maximum number of requests that have been active at once. | -| jetty_request_time_max_seconds | Gauge | Maximum time spent handling requests. 
| -| jetty_request_time_seconds_total | Counter | Total time spent in all request handling. | -| jetty_dispatched_total | Counter | Number of dispatches. | -| jetty_dispatched_active | Gauge | Number of dispatches currently active. | -| jetty_dispatched_active_max | Gauge | Maximum number of active dispatches being handled. | -| jetty_dispatched_time_max | Gauge | Maximum time spent in dispatch handling. | -| jetty_dispatched_time_seconds_total | Counter | Total time spent in dispatch handling. | -| jetty_async_requests_total | Counter | Total number of async requests. | -| jetty_async_requests_waiting | Gauge | Currently waiting async requests. | -| jetty_async_requests_waiting_max | Gauge | Maximum number of waiting async requests. | -| jetty_async_dispatches_total | Counter | Number of requested that have been asynchronously dispatched. | -| jetty_expires_total | Counter | Number of async requests requests that have expired. | -| jetty_responses_total | Counter | Number of responses, labeled by status code. The `code` label can be "1xx", "2xx", "3xx", "4xx", or "5xx". | -| jetty_stats_seconds | Gauge | Time in seconds stats have been collected for. | -| jetty_responses_bytes_total | Counter | Total number of bytes across all responses. | - -## Pulsar Functions - -All the Pulsar Functions metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file. -- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name. - -| Name | Type | Description | -|---|---|---| -| pulsar_function_processed_successfully_total | Counter | The total number of messages processed successfully. | -| pulsar_function_processed_successfully_total_1min | Counter | The total number of messages processed successfully in the last 1 minute. | -| pulsar_function_system_exceptions_total | Counter | The total number of system exceptions. | -| pulsar_function_system_exceptions_total_1min | Counter | The total number of system exceptions in the last 1 minute. | -| pulsar_function_user_exceptions_total | Counter | The total number of user exceptions. | -| pulsar_function_user_exceptions_total_1min | Counter | The total number of user exceptions in the last 1 minute. | -| pulsar_function_process_latency_ms | Summary | The process latency in milliseconds. | -| pulsar_function_process_latency_ms_1min | Summary | The process latency in milliseconds in the last 1 minute. | -| pulsar_function_last_invocation | Gauge | The timestamp of the last invocation of the function. | -| pulsar_function_received_total | Counter | The total number of messages received from source. | -| pulsar_function_received_total_1min | Counter | The total number of messages received from source in the last 1 minute. | -pulsar_function_user_metric_ | Summary|The user-defined metrics. - -## Connectors - -All the Pulsar connector metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file. -- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name. - -Connector metrics contain **source** metrics and **sink** metrics. - -- **Source** metrics - - | Name | Type | Description | - |---|---|---| - pulsar_source_written_total|Counter|The total number of records written to a Pulsar topic. 
- pulsar_source_written_total_1min|Counter|The total number of records written to a Pulsar topic in the last 1 minute. - pulsar_source_received_total|Counter|The total number of records received from source. - pulsar_source_received_total_1min|Counter|The total number of records received from source in the last 1 minute. - pulsar_source_last_invocation|Gauge|The timestamp of the last invocation of the source. - pulsar_source_source_exception|Gauge|The exception from a source. - pulsar_source_source_exceptions_total|Counter|The total number of source exceptions. - pulsar_source_source_exceptions_total_1min |Counter|The total number of source exceptions in the last 1 minute. - pulsar_source_system_exception|Gauge|The exception from system code. - pulsar_source_system_exceptions_total|Counter|The total number of system exceptions. - pulsar_source_system_exceptions_total_1min|Counter|The total number of system exceptions in the last 1 minute. - pulsar_source_user_metric_ | Summary|The user-defined metrics. - -- **Sink** metrics - - | Name | Type | Description | - |---|---|---| - pulsar_sink_written_total|Counter| The total number of records processed by a sink. - pulsar_sink_written_total_1min|Counter| The total number of records processed by a sink in the last 1 minute. - pulsar_sink_received_total_1min|Counter| The total number of messages that a sink has received from Pulsar topics in the last 1 minute. - pulsar_sink_received_total|Counter| The total number of records that a sink has received from Pulsar topics. - pulsar_sink_last_invocation|Gauge|The timestamp of the last invocation of the sink. - pulsar_sink_sink_exception|Gauge|The exception from a sink. - pulsar_sink_sink_exceptions_total|Counter|The total number of sink exceptions. - pulsar_sink_sink_exceptions_total_1min |Counter|The total number of sink exceptions in the last 1 minute. - pulsar_sink_system_exception|Gauge|The exception from system code. - pulsar_sink_system_exceptions_total|Counter|The total number of system exceptions. - pulsar_sink_system_exceptions_total_1min|Counter|The total number of system exceptions in the last 1 minute. - pulsar_sink_user_metric_ | Summary|The user-defined metrics. - -## Proxy - -All the proxy metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file. -- *kubernetes_pod_name*: `kubernetes_pod_name=${kubernetes_pod_name}`. `${kubernetes_pod_name}` is the Kubernetes pod name. - -| Name | Type | Description | -|---|---|---| -| pulsar_proxy_active_connections | Gauge | Number of connections currently active in the proxy. | -| pulsar_proxy_new_connections | Counter | Counter of connections being opened in the proxy. | -| pulsar_proxy_rejected_connections | Counter | Counter for connections rejected due to throttling. | -| pulsar_proxy_binary_ops | Counter | Counter of proxy operations. | -| pulsar_proxy_binary_bytes | Counter | Counter of proxy bytes. | - -## Pulsar SQL Worker - -| Name | Type | Description | -|---|---|---| -| split_bytes_read | Counter | Number of bytes read from BookKeeper. | -| split_num_messages_deserialized | Counter | Number of messages deserialized. | -| split_num_record_deserialized | Counter | Number of records deserialized. | -| split_bytes_read_per_query | Summary | Total number of bytes read per query. | -| split_entry_deserialize_time | Summary | Time spent on derserializing entries. 
|
-| split_entry_deserialize_time_per_query | Summary | Time spent on deserializing entries per query. |
-| split_entry_queue_dequeue_wait_time | Summary | Time spent waiting to get an entry from the entry queue because it is empty. |
-| split_entry_queue_dequeue_wait_time_per_query | Summary | Total time spent waiting to get an entry from the entry queue per query. |
-| split_message_queue_dequeue_wait_time_per_query | Summary | Time spent waiting to dequeue from the message queue because it is empty, per query. |
-| split_message_queue_enqueue_wait_time | Summary | Time spent waiting to enqueue to the message queue because it is full. |
-| split_message_queue_enqueue_wait_time_per_query | Summary | Time spent waiting to enqueue to the message queue because it is full, per query. |
-| split_num_entries_per_batch | Summary | Number of entries per batch. |
-| split_num_entries_per_query | Summary | Number of entries per query. |
-| split_num_messages_deserialized_per_entry | Summary | Number of messages deserialized per entry. |
-| split_num_messages_deserialized_per_query | Summary | Number of messages deserialized per query. |
-| split_read_attempts | Summary | Number of read attempts (a read fails if the queues are full). |
-| split_read_attempts_per_query | Summary | Number of read attempts per query. |
-| split_read_latency_per_batch | Summary | Latency of reads per batch. |
-| split_read_latency_per_query | Summary | Total read latency per query. |
-| split_record_deserialize_time | Summary | Time spent on deserializing a message to a record, for example, Avro, JSON, and so on. |
-| split_record_deserialize_time_per_query | Summary | Time spent on deserializing messages to records per query. |
-| split_total_execution_time | Summary | The total execution time. |
-
-## Pulsar transaction
-
-All the transaction metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-- *coordinator_id*: `coordinator_id=${coordinator_id}`. `${coordinator_id}` is the coordinator id.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_txn_active_count | Gauge | Number of active transactions. |
-| pulsar_txn_created_count | Counter | Number of created transactions. |
-| pulsar_txn_committed_count | Counter | Number of committed transactions. |
-| pulsar_txn_aborted_count | Counter | Number of aborted transactions of this coordinator. |
-| pulsar_txn_timeout_count | Counter | Number of timed-out transactions. |
-| pulsar_txn_append_log_count | Counter | Number of transaction log append operations. |
-| pulsar_txn_execution_latency_le_* | Histogram | Transaction execution latency.
    Available latency buckets:
    • latency="10" is TransactionExecutionLatency between (0ms, 10ms]
    • latency="20" is TransactionExecutionLatency between (10ms, 20ms]
    • latency="50" is TransactionExecutionLatency between (20ms, 50ms]
    • latency="100" is TransactionExecutionLatency between (50ms, 100ms]
    • latency="500" is TransactionExecutionLatency between (100ms, 500ms]
    • latency="1000" is TransactionExecutionLatency between (500ms, 1000ms]
    • latency="5000" is TransactionExecutionLatency between (1s, 5s]
    • latency="15000" is TransactionExecutionLatency between (5s, 15s]
    • latency="30000" is TransactionExecutionLatency between (15s, 30s]
    • latency="60000" is TransactionExecutionLatency between (30s, 60s]
    • latency="300000" is TransactionExecutionLatency between (1m,5m]
    • latency="1500000" is TransactionExecutionLatency between (5m,15m]
    • latency="3000000" is TransactionExecutionLatency between (15m,30m]
    • latency="overflow" is TransactionExecutionLatency between (30m,∞]
    | diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/reference-pulsar-admin.md b/site2/website/versioned_docs/version-2.8.1-deprecated/reference-pulsar-admin.md deleted file mode 100644 index bba1d6379dd972..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/reference-pulsar-admin.md +++ /dev/null @@ -1,3338 +0,0 @@ ---- -id: reference-pulsar-admin -title: Pulsar admin CLI -sidebar_label: "Pulsar Admin CLI" -original_id: reference-pulsar-admin ---- - -> **Important** -> -> This page is deprecated and not updated anymore. For the latest and complete information about `pulsar-admin`, including commands, flags, descriptions, and more, see [pulsar-admin doc](https://pulsar.apache.org/tools/pulsar-admin/). - -The `pulsar-admin` tool enables you to manage Pulsar installations, including clusters, brokers, namespaces, tenants, and more. - -Usage - -```bash - -$ pulsar-admin command - -``` - -Commands -* `broker-stats` -* `brokers` -* `clusters` -* `functions` -* `functions-worker` -* `namespaces` -* `ns-isolation-policy` -* `sources` - - For more information, see [here](io-cli.md#sources) -* `sinks` - - For more information, see [here](io-cli.md#sinks) -* `topics` -* `tenants` -* `resource-quotas` -* `schemas` - -## `broker-stats` - -Operations to collect broker statistics - -```bash - -$ pulsar-admin broker-stats subcommand - -``` - -Subcommands -* `allocator-stats` -* `topics(destinations)` -* `mbeans` -* `monitoring-metrics` -* `load-report` - - -### `allocator-stats` - -Dump allocator stats - -Usage - -```bash - -$ pulsar-admin broker-stats allocator-stats allocator-name - -``` - -### `topics(destinations)` - -Dump topic stats - -Usage - -```bash - -$ pulsar-admin broker-stats topics options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-i`, `--indent`|Indent JSON output|false| - -### `mbeans` - -Dump Mbean stats - -Usage - -```bash - -$ pulsar-admin broker-stats mbeans options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-i`, `--indent`|Indent JSON output|false| - - -### `monitoring-metrics` - -Dump metrics for monitoring - -Usage - -```bash - -$ pulsar-admin broker-stats monitoring-metrics options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-i`, `--indent`|Indent JSON output|false| - - -### `load-report` - -Dump broker load-report - -Usage - -```bash - -$ pulsar-admin broker-stats load-report - -``` - -## `brokers` - -Operations about brokers - -```bash - -$ pulsar-admin brokers subcommand - -``` - -Subcommands -* `list` -* `namespaces` -* `update-dynamic-config` -* `list-dynamic-config` -* `get-all-dynamic-config` -* `get-internal-config` -* `get-runtime-config` -* `healthcheck` - -### `list` -List active brokers of the cluster - -Usage - -```bash - -$ pulsar-admin brokers list cluster-name - -``` - -### `leader-broker` -Get the information of the leader broker - -Usage - -```bash - -$ pulsar-admin brokers leader-broker - -``` - -### `namespaces` -List namespaces owned by the broker - -Usage - -```bash - -$ pulsar-admin brokers namespaces cluster-name options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--url`|The URL for the broker|| - - -### `update-dynamic-config` -Update a broker's dynamic service configuration - -Usage - -```bash - -$ pulsar-admin brokers update-dynamic-config options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--config`|Service configuration parameter name|| -|`--value`|Value for the configuration parameter value 
specified using the `--config` flag||
-
-
-### `list-dynamic-config`
-Get the list of updatable configuration names
-
-Usage
-
-```bash
-
-$ pulsar-admin brokers list-dynamic-config
-
-```
-
-### `delete-dynamic-config`
-Delete the dynamic service configuration of a broker
-
-Usage
-
-```bash
-
-$ pulsar-admin brokers delete-dynamic-config options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--config`|Service configuration parameter name||
-
-
-### `get-all-dynamic-config`
-Get all overridden dynamic-configuration values
-
-Usage
-
-```bash
-
-$ pulsar-admin brokers get-all-dynamic-config
-
-```
-
-### `get-internal-config`
-Get internal configuration information
-
-Usage
-
-```bash
-
-$ pulsar-admin brokers get-internal-config
-
-```
-
-### `get-runtime-config`
-Get runtime configuration values
-
-Usage
-
-```bash
-
-$ pulsar-admin brokers get-runtime-config
-
-```
-
-### `healthcheck`
-Run a health check against the broker
-
-Usage
-
-```bash
-
-$ pulsar-admin brokers healthcheck
-
-```
-
-## `clusters`
-Operations about clusters
-
-Usage
-
-```bash
-
-$ pulsar-admin clusters subcommand
-
-```
-
-Subcommands
-* `get`
-* `create`
-* `update`
-* `delete`
-* `list`
-* `update-peer-clusters`
-* `get-peer-clusters`
-* `get-failure-domain`
-* `create-failure-domain`
-* `update-failure-domain`
-* `delete-failure-domain`
-* `list-failure-domains`
-
-
-### `get`
-Get the configuration data for the specified cluster
-
-Usage
-
-```bash
-
-$ pulsar-admin clusters get cluster-name
-
-```
-
-### `create`
-Provision a new cluster. This operation requires Pulsar super-user privileges.
-
-Usage
-
-```bash
-
-$ pulsar-admin clusters create cluster-name options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--broker-url`|The URL for the broker service||
-|`--broker-url-secure`|The broker service URL for a secure connection||
-|`--url`|service-url||
-|`--url-secure`|service-url for a secure connection||
-
-
-### `update`
-Update the configuration for a cluster
-
-Usage
-
-```bash
-
-$ pulsar-admin clusters update cluster-name options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--broker-url`|The URL for the broker service||
-|`--broker-url-secure`|The broker service URL for a secure connection||
-|`--url`|service-url||
-|`--url-secure`|service-url for a secure connection||
-
-
-### `delete`
-Delete an existing cluster
-
-Usage
-
-```bash
-
-$ pulsar-admin clusters delete cluster-name
-
-```
-
-### `list`
-List the existing clusters
-
-Usage
-
-```bash
-
-$ pulsar-admin clusters list
-
-```
-
-### `update-peer-clusters`
-Update peer cluster names
-
-Usage
-
-```bash
-
-$ pulsar-admin clusters update-peer-clusters cluster-name options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--peer-clusters`|Comma-separated peer cluster names (pass an empty string "" to delete the list)||
-
-### `get-peer-clusters`
-Get the list of peer clusters
-
-Usage
-
-```bash
-
-$ pulsar-admin clusters get-peer-clusters
-
-```
-
-### `get-failure-domain`
-Get the configured brokers of a failure domain
-
-Usage
-
-```bash
-
-$ pulsar-admin clusters get-failure-domain cluster-name options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster||
-
-### `create-failure-domain`
-Create a new failure domain for a cluster (updates it if already created)
-
-Usage
-
-```bash
-
-$ pulsar-admin clusters create-failure-domain cluster-name options
-
-```
-
-Options
-
-|Flag|Description|Default| -|---|---|---| -|`--broker-list`|Comma separated broker list|| -|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster|| - -### `update-failure-domain` -Update failure domain for a cluster (creates a new one if not exist) - -Usage - -```bash - -$ pulsar-admin clusters update-failure-domain cluster-name options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--broker-list`|Comma separated broker list|| -|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster|| - -### `delete-failure-domain` -Delete an existing failure domain - -Usage - -```bash - -$ pulsar-admin clusters delete-failure-domain cluster-name options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster|| - -### `list-failure-domains` -List the existing failure domains for a cluster - -Usage - -```bash - -$ pulsar-admin clusters list-failure-domains cluster-name - -``` - -## `functions` - -A command-line interface for Pulsar Functions - -Usage - -```bash - -$ pulsar-admin functions subcommand - -``` - -Subcommands -* `localrun` -* `create` -* `delete` -* `update` -* `get` -* `restart` -* `stop` -* `start` -* `status` -* `stats` -* `list` -* `querystate` -* `putstate` -* `trigger` - - -### `localrun` -Run the Pulsar Function locally (rather than deploying it to the Pulsar cluster) - - -Usage - -```bash - -$ pulsar-admin functions localrun options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--cpu`|The cpu in cores that need to be allocated per function instance(applicable only to docker runtime)|| -|`--ram`|The ram in bytes that need to be allocated per function instance(applicable only to process/docker runtime)|| -|`--disk`|The disk in bytes that need to be allocated per function instance(applicable only to docker runtime)|| -|`--auto-ack`|Whether or not the framework will automatically acknowledge messages|| -|`--subs-name`|Pulsar source subscription name if user wants a specific subscription-name for input-topic consumer|| -|`--broker-service-url `|The URL of the Pulsar broker|| -|`--classname`|The function's class name|| -|`--custom-serde-inputs`|The map of input topics to SerDe class names (as a JSON string)|| -|`--custom-schema-inputs`|The map of input topics to Schema class names (as a JSON string)|| -|`--client-auth-params`|Client authentication param|| -|`--client-auth-plugin`|Client authentication plugin using which function-process can connect to broker|| -|`--function-config-file`|The path to a YAML config file specifying the function's configuration|| -|`--hostname-verification-enabled`|Enable hostname verification|false| -|`--instance-id-offset`|Start the instanceIds from this offset|0| -|`--inputs`|The function's input topic or topics (multiple topics can be specified as a comma-separated list)|| -|`--log-topic`|The topic to which the function's logs are produced|| -|`--jar`|Path to the jar file for the function (if the function is written in Java). 
It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.||
-|`--name`|The function's name||
-|`--namespace`|The function's namespace||
-|`--output`|The function's output topic (If none is specified, no output is written)||
-|`--output-serde-classname`|The SerDe class to be used for messages output by the function||
-|`--parallelism`|The function's parallelism factor, i.e. the number of instances of the function to run|1|
-|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the function. Possible values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]|ATLEAST_ONCE|
-|`--py`|Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.||
-|`--go`|Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.||
-|`--schema-type`|The built-in schema type or custom schema class name to be used for messages output by the function||
-|`--sliding-interval-count`|The number of messages after which the window slides||
-|`--sliding-interval-duration-ms`|The time duration after which the window slides||
-|`--state-storage-service-url`|The URL for the state storage service. By default, it is set to the service URL of Apache BookKeeper. This service URL must be added manually when the Pulsar Function runs locally.||
-|`--tenant`|The function's tenant||
-|`--topics-pattern`|The topic pattern to consume from a list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add the SerDe class name for a pattern in --custom-serde-inputs (supported for Java functions only)||
-|`--user-config`|User-defined config key/values||
-|`--window-length-count`|The number of messages per window||
-|`--window-length-duration-ms`|The time duration of the window in milliseconds||
-|`--dead-letter-topic`|The topic where all messages which could not be processed successfully are sent||
-|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function||
-|`--max-message-retries`|How many times to try to process a message before giving up||
-|`--retain-ordering`|Function consumes and processes messages in order||
-|`--retain-key-ordering`|Function consumes and processes messages in key order||
-|`--timeout-ms`|The message timeout in milliseconds||
-|`--tls-allow-insecure`|Allow insecure TLS connection|false|
-|`--tls-trust-cert-path`|The TLS trust cert file path||
-|`--use-tls`|Use TLS connection|false|
-|`--producer-config`|The custom producer configuration (as a JSON string)||
-
-
-### `create`
-Create a Pulsar Function in cluster mode (i.e. 
deploy it on a Pulsar cluster) - -Usage - -``` - -$ pulsar-admin functions create options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--cpu`|The cpu in cores that need to be allocated per function instance(applicable only to docker runtime)|| -|`--ram`|The ram in bytes that need to be allocated per function instance(applicable only to process/docker runtime)|| -|`--disk`|The disk in bytes that need to be allocated per function instance(applicable only to docker runtime)|| -|`--auto-ack`|Whether or not the framework will automatically acknowledge messages|| -|`--subs-name`|Pulsar source subscription name if user wants a specific subscription-name for input-topic consumer|| -|`--classname`|The function's class name|| -|`--custom-serde-inputs`|The map of input topics to SerDe class names (as a JSON string)|| -|`--custom-schema-inputs`|The map of input topics to Schema class names (as a JSON string)|| -|`--function-config-file`|The path to a YAML config file specifying the function's configuration|| -|`--inputs`|The function's input topic or topics (multiple topics can be specified as a comma-separated list)|| -|`--log-topic`|The topic to which the function's logs are produced|| -|`--jar`|Path to the jar file for the function (if the function is written in Java). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--name`|The function's name|| -|`--namespace`|The function’s namespace|| -|`--output`|The function's output topic (If none is specified, no output is written)|| -|`--output-serde-classname`|The SerDe class to be used for messages output by the function|| -|`--parallelism`|The function’s parallelism factor, i.e. the number of instances of the function to run|1| -|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the function. Possible Values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]|ATLEAST_ONCE| -|`--py`|Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--go`|Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--schema-type`|The builtin schema type or custom schema class name to be used for messages output by the function|| -|`--sliding-interval-count`|The number of messages after which the window slides|| -|`--sliding-interval-duration-ms`|The time duration after which the window slides|| -|`--tenant`|The function’s tenant|| -|`--topics-pattern`|The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. 
Add the SerDe class name for a pattern in --custom-serde-inputs (supported for Java functions only)||
-|`--user-config`|User-defined config key/values||
-|`--window-length-count`|The number of messages per window||
-|`--window-length-duration-ms`|The time duration of the window in milliseconds||
-|`--dead-letter-topic`|The topic where all messages which could not be processed successfully are sent||
-|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function||
-|`--max-message-retries`|How many times to try to process a message before giving up||
-|`--retain-ordering`|Function consumes and processes messages in order||
-|`--retain-key-ordering`|Function consumes and processes messages in key order||
-|`--timeout-ms`|The message timeout in milliseconds||
-|`--producer-config`|The custom producer configuration (as a JSON string)||
-
-
-### `delete`
-Delete a Pulsar Function that's running on a Pulsar cluster
-
-Usage
-
-```bash
-
-$ pulsar-admin functions delete options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function||
-|`--name`|The function's name||
-|`--namespace`|The function's namespace||
-|`--tenant`|The function's tenant||
-
-
-### `update`
-Update a Pulsar Function that's been deployed to a Pulsar cluster
-
-Usage
-
-```bash
-
-$ pulsar-admin functions update options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--cpu`|The cpu in cores that need to be allocated per function instance (applicable only to docker runtime)||
-|`--ram`|The ram in bytes that need to be allocated per function instance (applicable only to process/docker runtime)||
-|`--disk`|The disk in bytes that need to be allocated per function instance (applicable only to docker runtime)||
-|`--auto-ack`|Whether or not the framework will automatically acknowledge messages||
-|`--subs-name`|Pulsar source subscription name if the user wants a specific subscription name for the input-topic consumer||
-|`--classname`|The function's class name||
-|`--custom-serde-inputs`|The map of input topics to SerDe class names (as a JSON string)||
-|`--custom-schema-inputs`|The map of input topics to Schema class names (as a JSON string)||
-|`--function-config-file`|The path to a YAML config file specifying the function's configuration||
-|`--inputs`|The function's input topic or topics (multiple topics can be specified as a comma-separated list)||
-|`--log-topic`|The topic to which the function's logs are produced||
-|`--jar`|Path to the jar file for the function (if the function is written in Java). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.||
-|`--name`|The function's name||
-|`--namespace`|The function's namespace||
-|`--output`|The function's output topic (If none is specified, no output is written)||
-|`--output-serde-classname`|The SerDe class to be used for messages output by the function||
-|`--parallelism`|The function's parallelism factor, i.e. the number of instances of the function to run|1|
-|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the function. Possible values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]|ATLEAST_ONCE|
-|`--py`|Path to the main Python file/Python Wheel file for the function (if the function is written in Python). 
It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--go`|Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--schema-type`|The builtin schema type or custom schema class name to be used for messages output by the function|| -|`--sliding-interval-count`|The number of messages after which the window slides|| -|`--sliding-interval-duration-ms`|The time duration after which the window slides|| -|`--tenant`|The function’s tenant|| -|`--topics-pattern`|The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (supported for java fun only)|| -|`--user-config`|User-defined config key/values|| -|`--window-length-count`|The number of messages per window|| -|`--window-length-duration-ms`|The time duration of the window in milliseconds|| -|`--dead-letter-topic`|The topic where all messages which could not be processed|| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--max-message-retries`|How many times should we try to process a message before giving up|| -|`--retain-ordering`|Function consumes and processes messages in order|| -|`--retain-key-ordering`|Function consumes and processes messages in key order|| -|`--timeout-ms`|The message timeout in milliseconds|| -|`--producer-config`| The custom producer configuration (as a JSON string) | | - - -### `get` -Fetch information about a Pulsar Function - -Usage - -```bash - -$ pulsar-admin functions get options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `restart` -Restart function instance - -Usage - -```bash - -$ pulsar-admin functions restart options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--instance-id`|The function instanceId (restart all instances if instance-id is not provided)|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `stop` -Stops function instance - -Usage - -```bash - -$ pulsar-admin functions stop options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--instance-id`|The function instanceId (stop all instances if instance-id is not provided)|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `start` -Starts a stopped function instance - -Usage - -```bash - -$ pulsar-admin functions start options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--instance-id`|The function instanceId (start all instances if instance-id is not provided)|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The 
function's tenant|| - - -### `status` -Check the current status of a Pulsar Function - -Usage - -```bash - -$ pulsar-admin functions status options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--instance-id`|The function instanceId (Get-status of all instances if instance-id is not provided)|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `stats` -Get the current stats of a Pulsar Function - -Usage - -```bash - -$ pulsar-admin functions stats options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--instance-id`|The function instanceId (Get-stats of all instances if instance-id is not provided)|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - -### `list` -List all of the Pulsar Functions running under a specific tenant and namespace - -Usage - -```bash - -$ pulsar-admin functions list options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `querystate` -Fetch the current state associated with a Pulsar Function running in cluster mode - -Usage - -```bash - -$ pulsar-admin functions querystate options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`-k`, `--key`|The key for the state you want to fetch|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| -|`-w`, `--watch`|Watch for changes in the value associated with a key for a Pulsar Function|false| - -### `putstate` -Put a key/value pair to the state associated with a Pulsar Function - -Usage - -```bash - -$ pulsar-admin functions putstate options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the Pulsar Function|| -|`--name`|The name of a Pulsar Function|| -|`--namespace`|The namespace of a Pulsar Function|| -|`--tenant`|The tenant of a Pulsar Function|| -|`-s`, `--state`|The FunctionState that needs to be put|| - -### `trigger` -Triggers the specified Pulsar Function with a supplied value - -Usage - -```bash - -$ pulsar-admin functions trigger options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| -|`--topic`|The specific topic name that the function consumes from that you want to inject the data to|| -|`--trigger-file`|The path to the file that contains the data with which you'd like to trigger the function|| -|`--trigger-value`|The value with which you want to trigger the function|| - - -## `functions-worker` -Operations to collect function-worker statistics - -```bash - -$ pulsar-admin functions-worker subcommand - -``` - -Subcommands - -* `function-stats` -* `get-cluster` -* `get-cluster-leader` -* `get-function-assignments` -* `monitoring-metrics` - -### `function-stats` - -Dump all functions stats running on this broker - -Usage - -```bash - -$ pulsar-admin functions-worker function-stats - -``` - -### `get-cluster` - -Get all workers belonging to this cluster - -Usage - -```bash - -$ 
pulsar-admin functions-worker get-cluster - -``` - -### `get-cluster-leader` - -Get the leader of the worker cluster - -Usage - -```bash - -$ pulsar-admin functions-worker get-cluster-leader - -``` - -### `get-function-assignments` - -Get the assignments of the functions across the worker cluster - -Usage - -```bash - -$ pulsar-admin functions-worker get-function-assignments - -``` - -### `monitoring-metrics` - -Dump metrics for Monitoring - -Usage - -```bash - -$ pulsar-admin functions-worker monitoring-metrics - -``` - -## `namespaces` - -Operations for managing namespaces - -```bash - -$ pulsar-admin namespaces subcommand - -``` - -Subcommands -* `list` -* `topics` -* `policies` -* `create` -* `delete` -* `set-deduplication` -* `set-auto-topic-creation` -* `remove-auto-topic-creation` -* `set-auto-subscription-creation` -* `remove-auto-subscription-creation` -* `permissions` -* `grant-permission` -* `revoke-permission` -* `grant-subscription-permission` -* `revoke-subscription-permission` -* `set-clusters` -* `get-clusters` -* `get-backlog-quotas` -* `set-backlog-quota` -* `remove-backlog-quota` -* `get-persistence` -* `set-persistence` -* `get-message-ttl` -* `set-message-ttl` -* `remove-message-ttl` -* `get-anti-affinity-group` -* `set-anti-affinity-group` -* `get-anti-affinity-namespaces` -* `delete-anti-affinity-group` -* `get-retention` -* `set-retention` -* `unload` -* `split-bundle` -* `set-dispatch-rate` -* `get-dispatch-rate` -* `set-replicator-dispatch-rate` -* `get-replicator-dispatch-rate` -* `set-subscribe-rate` -* `get-subscribe-rate` -* `set-subscription-dispatch-rate` -* `get-subscription-dispatch-rate` -* `clear-backlog` -* `unsubscribe` -* `set-encryption-required` -* `set-delayed-delivery` -* `get-delayed-delivery` -* `set-subscription-auth-mode` -* `get-max-producers-per-topic` -* `set-max-producers-per-topic` -* `get-max-consumers-per-topic` -* `set-max-consumers-per-topic` -* `get-max-consumers-per-subscription` -* `set-max-consumers-per-subscription` -* `get-max-unacked-messages-per-subscription` -* `set-max-unacked-messages-per-subscription` -* `get-max-unacked-messages-per-consumer` -* `set-max-unacked-messages-per-consumer` -* `get-compaction-threshold` -* `set-compaction-threshold` -* `get-offload-threshold` -* `set-offload-threshold` -* `get-offload-deletion-lag` -* `set-offload-deletion-lag` -* `clear-offload-deletion-lag` -* `get-schema-autoupdate-strategy` -* `set-schema-autoupdate-strategy` -* `set-offload-policies` -* `get-offload-policies` -* `set-max-subscriptions-per-topic` -* `get-max-subscriptions-per-topic` -* `remove-max-subscriptions-per-topic` - - -### `list` -Get the namespaces for a tenant - -Usage - -```bash - -$ pulsar-admin namespaces list tenant-name - -``` - -### `topics` -Get the list of topics for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces topics tenant/namespace - -``` - -### `policies` -Get the configuration policies of a namespace - -Usage - -```bash - -$ pulsar-admin namespaces policies tenant/namespace - -``` - -### `create` -Create a new namespace - -Usage - -```bash - -$ pulsar-admin namespaces create tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-b`, `--bundles`|The number of bundles to activate|0| -|`-c`, `--clusters`|List of clusters this namespace will be assigned|| - - -### `delete` -Deletes a namespace. 
The namespace needs to be empty - -Usage - -```bash - -$ pulsar-admin namespaces delete tenant/namespace - -``` - -### `set-deduplication` -Enable or disable message deduplication on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-deduplication tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--enable`, `-e`|Enable message deduplication on the specified namespace|false| -|`--disable`, `-d`|Disable message deduplication on the specified namespace|false| - -### `set-auto-topic-creation` -Enable or disable autoTopicCreation for a namespace, overriding broker settings - -Usage - -```bash - -$ pulsar-admin namespaces set-auto-topic-creation tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--enable`, `-e`|Enable allowAutoTopicCreation on namespace|false| -|`--disable`, `-d`|Disable allowAutoTopicCreation on namespace|false| -|`--type`, `-t`|Type of topic to be auto-created. Possible values: (partitioned, non-partitioned)|non-partitioned| -|`--num-partitions`, `-n`|Default number of partitions of topic to be auto-created, applicable to partitioned topics only|| - -### `remove-auto-topic-creation` -Remove override of autoTopicCreation for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces remove-auto-topic-creation tenant/namespace - -``` - -### `set-auto-subscription-creation` -Enable autoSubscriptionCreation for a namespace, overriding broker settings - -Usage - -```bash - -$ pulsar-admin namespaces set-auto-subscription-creation tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--enable`, `-e`|Enable allowAutoSubscriptionCreation on namespace|false| - -### `remove-auto-subscription-creation` -Remove override of autoSubscriptionCreation for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces remove-auto-subscription-creation tenant/namespace - -``` - -### `permissions` -Get the permissions on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces permissions tenant/namespace - -``` - -### `grant-permission` -Grant permissions on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces grant-permission tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--actions`|Actions to be granted (`produce` or `consume`)|| -|`--role`|The client role to which to grant the permissions|| - - -### `revoke-permission` -Revoke permissions on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces revoke-permission tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--role`|The client role to which to revoke the permissions|| - -### `grant-subscription-permission` -Grant permissions to access subscription admin-api - -Usage - -```bash - -$ pulsar-admin namespaces grant-subscription-permission tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--roles`|The client roles to which to grant the permissions (comma separated roles)|| -|`--subscription`|The subscription name for which permission will be granted to roles|| - -### `revoke-subscription-permission` -Revoke permissions to access subscription admin-api - -Usage - -```bash - -$ pulsar-admin namespaces revoke-subscription-permission tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--role`|The client role to which to revoke the permissions|| -|`--subscription`|The subscription name for which permission will be revoked to roles|| - 
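-For example, granting a role produce and consume permissions on a namespace and later revoking them might look like this (a sketch; `my-tenant/my-ns` and `my-role` are placeholder names):
-
-```bash
-
-$ pulsar-admin namespaces grant-permission my-tenant/my-ns \
---role my-role \
---actions produce,consume
-
-$ pulsar-admin namespaces revoke-permission my-tenant/my-ns \
---role my-role
-
-```
-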
-### `set-clusters`
-Set replication clusters for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-clusters tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-c`, `--clusters`|Replication cluster ID list (comma-separated values)||
-
-
-### `get-clusters`
-Get replication clusters for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-clusters tenant/namespace
-
-```
-
-### `get-backlog-quotas`
-Get the backlog quota policies for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-backlog-quotas tenant/namespace
-
-```
-
-### `set-backlog-quota`
-Set a backlog quota policy for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-backlog-quota tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-l`, `--limit`|The backlog size limit (for example `10M` or `16G`)||
-|`-p`, `--policy`|The retention policy to enforce when the limit is reached. The valid options are: `producer_request_hold`, `producer_exception` or `consumer_backlog_eviction`||
-
-Example
-
-```bash
-
-$ pulsar-admin namespaces set-backlog-quota my-tenant/my-ns \
---limit 2G \
---policy producer_request_hold
-
-```
-
-### `remove-backlog-quota`
-Remove a backlog quota policy from a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces remove-backlog-quota tenant/namespace
-
-```
-
-### `get-persistence`
-Get the persistence policies for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-persistence tenant/namespace
-
-```
-
-### `set-persistence`
-Set the persistence policies for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-persistence tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-a`, `--bookkeeper-ack-quorum`|The number of acks (guaranteed copies) to wait for each entry|0|
-|`-e`, `--bookkeeper-ensemble`|The number of bookies to use for a topic|0|
-|`-w`, `--bookkeeper-write-quorum`|How many writes to make of each entry|0|
-|`-r`, `--ml-mark-delete-max-rate`|Throttling rate of mark-delete operation (0 means no throttle)||
-
-
-### `get-message-ttl`
-Get the message TTL for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-message-ttl tenant/namespace
-
-```
-
-### `set-message-ttl`
-Set the message TTL for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-message-ttl tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-ttl`, `--messageTTL`|Message TTL in seconds. When the value is set to `0`, TTL is disabled. TTL is disabled by default.|0|
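-
-For example, a TTL of two hours is specified in seconds (a sketch; `my-tenant/my-ns` is a placeholder namespace):
-
-```bash
-
-$ pulsar-admin namespaces set-message-ttl my-tenant/my-ns \
---messageTTL 7200
-
-```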
-
-### `remove-message-ttl`
-Remove the message TTL for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces remove-message-ttl tenant/namespace
-
-```
-
-### `get-anti-affinity-group`
-Get the anti-affinity group name for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-anti-affinity-group tenant/namespace
-
-```
-
-### `set-anti-affinity-group`
-Set the anti-affinity group name for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-anti-affinity-group tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-g`, `--group`|Anti-affinity group name||
-
-### `get-anti-affinity-namespaces`
-Get the anti-affinity namespaces grouped under the given anti-affinity group name
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-anti-affinity-namespaces options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-c`, `--cluster`|Cluster name||
-|`-g`, `--group`|Anti-affinity group name||
-|`-p`, `--tenant`|The tenant is only used for authorization. The client has to be an admin of one of the tenants to access this API||
-
-### `delete-anti-affinity-group`
-Remove the anti-affinity group name for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces delete-anti-affinity-group tenant/namespace
-
-```
-
-### `get-retention`
-Get the retention policy that is applied to each topic within the specified namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-retention tenant/namespace
-
-```
-
-### `set-retention`
-Set the retention policy for each topic within the specified namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-retention tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-s`, `--size`|The retention size limits (for example 10M, 16G or 3T) for each topic in the namespace. 0 means no retention and -1 means infinite size retention||
-|`-t`, `--time`|The retention time in minutes, hours, days, or weeks. Examples: 100m, 13h, 2d, 5w. 0 means no retention and -1 means infinite time retention||
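-
-For example, to keep acknowledged messages for up to two days, capped at 10 GB per topic (a sketch; `my-tenant/my-ns` is a placeholder namespace):
-
-```bash
-
-$ pulsar-admin namespaces set-retention my-tenant/my-ns \
---time 2d \
---size 10G
-
-```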
-
-
-### `unload`
-Unload a namespace or namespace bundle from the current serving broker.
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces unload tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-b`, `--bundle`|{start-boundary}_{end-boundary} (e.g. 0x00000000_0xffffffff)||
-
-### `split-bundle`
-Split a namespace-bundle from the current serving broker
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces split-bundle tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-b`, `--bundle`|{start-boundary}_{end-boundary} (e.g. 0x00000000_0xffffffff)||
-|`-u`, `--unload`|Unload newly split bundles after splitting the old bundle|false|
-
-### `set-dispatch-rate`
-Set the message-dispatch-rate for all topics of the namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-dispatch-rate tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-bd`, `--byte-dispatch-rate`|The byte dispatch rate (the default -1 applies if not specified)|-1|
-|`-dt`, `--dispatch-rate-period`|The dispatch rate period, in seconds (the default 1 second applies if not specified)|1|
-|`-md`, `--msg-dispatch-rate`|The message dispatch rate (the default -1 applies if not specified)|-1|
-
-### `get-dispatch-rate`
-Get the configured message-dispatch-rate for all topics of the namespace (disabled if the value is < 0)
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-dispatch-rate tenant/namespace
-
-```
-
-### `set-replicator-dispatch-rate`
-Set the replicator message-dispatch-rate for all topics of the namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-replicator-dispatch-rate tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-bd`, `--byte-dispatch-rate`|The byte dispatch rate (the default -1 applies if not specified)|-1|
-|`-dt`, `--dispatch-rate-period`|The dispatch rate period, in seconds (the default 1 second applies if not specified)|1|
-|`-md`, `--msg-dispatch-rate`|The message dispatch rate (the default -1 applies if not specified)|-1|
-
-### `get-replicator-dispatch-rate`
-Get the configured replicator message-dispatch-rate for all topics of the namespace (disabled if the value is < 0)
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-replicator-dispatch-rate tenant/namespace
-
-```
-
-### `set-subscribe-rate`
-Set the subscribe-rate per consumer for all topics of the namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-subscribe-rate tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-sr`, `--subscribe-rate`|The subscribe rate (the default -1 applies if not specified)|-1|
-|`-st`, `--subscribe-rate-period`|The subscribe rate period, in seconds (the default 30 seconds applies if not specified)|30|
-
-### `get-subscribe-rate`
-Get the configured subscribe-rate per consumer for all topics of the namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-subscribe-rate tenant/namespace
-
-```
-
-### `set-subscription-dispatch-rate`
-Set the subscription message-dispatch-rate for all subscriptions of the namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-subscription-dispatch-rate tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-bd`, `--byte-dispatch-rate`|The byte dispatch rate (the default -1 applies if not specified)|-1|
-|`-dt`, `--dispatch-rate-period`|The dispatch rate period, in seconds (the default 1 second applies if not specified)|1|
-|`-md`, `--sub-msg-dispatch-rate`|The message dispatch rate (the default -1 applies if not specified)|-1|
-
-### `get-subscription-dispatch-rate`
-Get the configured subscription message-dispatch-rate for all topics of the namespace (disabled if the value is < 0)
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-subscription-dispatch-rate tenant/namespace
-
-```
-
-### `clear-backlog`
-Clear the backlog for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces clear-backlog tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-b`, 
`--bundle`|{start-boundary}_{end-boundary} (e.g. 0x00000000_0xffffffff)|| -|`-force`, `--force`|Whether to force a clear backlog without prompt|false| -|`-s`, `--sub`|The subscription name|| - - -### `unsubscribe` -Unsubscribe the given subscription on all destinations on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces unsubscribe tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-b`, `--bundle`|{start-boundary}_{end-boundary} (e.g. 0x00000000_0xffffffff)|| -|`-s`, `--sub`|The subscription name|| - -### `set-encryption-required` -Enable or disable message encryption required for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-encryption-required tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-d`, `--disable`|Disable message encryption required|false| -|`-e`, `--enable`|Enable message encryption required|false| - -### `set-delayed-delivery` -Set the delayed delivery policy on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-delayed-delivery tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-d`, `--disable`|Disable delayed delivery messages|false| -|`-e`, `--enable`|Enable delayed delivery messages|false| -|`-t`, `--time`|The tick time for when retrying on delayed delivery messages|1s| - - -### `get-delayed-delivery` -Get the delayed delivery policy on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-delayed-delivery-time tenant/namespace - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-t`, `--time`|The tick time for when retrying on delayed delivery messages|1s| - - -### `set-subscription-auth-mode` -Set subscription auth mode on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-subscription-auth-mode tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-m`, `--subscription-auth-mode`|Subscription authorization mode for Pulsar policies. 
Valid options are: [None, Prefix]|| - -### `get-max-producers-per-topic` -Get maxProducersPerTopic for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-max-producers-per-topic tenant/namespace - -``` - -### `set-max-producers-per-topic` -Set maxProducersPerTopic for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-max-producers-per-topic tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-p`, `--max-producers-per-topic`|maxProducersPerTopic for a namespace|0| - -### `get-max-consumers-per-topic` -Get maxConsumersPerTopic for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-max-consumers-per-topic tenant/namespace - -``` - -### `set-max-consumers-per-topic` -Set maxConsumersPerTopic for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-max-consumers-per-topic tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-c`, `--max-consumers-per-topic`|maxConsumersPerTopic for a namespace|0| - -### `get-max-consumers-per-subscription` -Get maxConsumersPerSubscription for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-max-consumers-per-subscription tenant/namespace - -``` - -### `set-max-consumers-per-subscription` -Set maxConsumersPerSubscription for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-max-consumers-per-subscription tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-c`, `--max-consumers-per-subscription`|maxConsumersPerSubscription for a namespace|0| - -### `get-max-unacked-messages-per-subscription` -Get maxUnackedMessagesPerSubscription for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-max-unacked-messages-per-subscription tenant/namespace - -``` - -### `set-max-unacked-messages-per-subscription` -Set maxUnackedMessagesPerSubscription for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-max-unacked-messages-per-subscription tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-c`, `--max-unacked-messages-per-subscription`|maxUnackedMessagesPerSubscription for a namespace|-1| - -### `get-max-unacked-messages-per-consumer` -Get maxUnackedMessagesPerConsumer for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-max-unacked-messages-per-consumer tenant/namespace - -``` - -### `set-max-unacked-messages-per-consumer` -Set maxUnackedMessagesPerConsumer for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-max-unacked-messages-per-consumer tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-c`, `--max-unacked-messages-per-consumer`|maxUnackedMessagesPerConsumer for a namespace|-1| - - -### `get-compaction-threshold` -Get compactionThreshold for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-compaction-threshold tenant/namespace - -``` - -### `set-compaction-threshold` -Set compactionThreshold for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-compaction-threshold tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-t`, `--threshold`|Maximum number of bytes in a topic backlog before compaction is triggered (eg: 10M, 16G, 3T). 
0 disables automatic compaction|0| - - -### `get-offload-threshold` -Get offloadThreshold for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-offload-threshold tenant/namespace - -``` - -### `set-offload-threshold` -Set offloadThreshold for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-offload-threshold tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-s`, `--size`|Maximum number of bytes stored in the pulsar cluster for a topic before data will start being automatically offloaded to longterm storage (eg: 10M, 16G, 3T, 100). Negative values disable automatic offload. 0 triggers offloading as soon as possible.|-1| - -### `get-offload-deletion-lag` -Get offloadDeletionLag, in minutes, for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-offload-deletion-lag tenant/namespace - -``` - -### `set-offload-deletion-lag` -Set offloadDeletionLag for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-offload-deletion-lag tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-l`, `--lag`|Duration to wait after offloading a ledger segment, before deleting the copy of that segment from cluster local storage. (eg: 10m, 5h, 3d, 2w).|-1| - -### `clear-offload-deletion-lag` -Clear offloadDeletionLag for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces clear-offload-deletion-lag tenant/namespace - -``` - -### `get-schema-autoupdate-strategy` -Get the schema auto-update strategy for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-schema-autoupdate-strategy tenant/namespace - -``` - -### `set-schema-autoupdate-strategy` -Set the schema auto-update strategy for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-schema-autoupdate-strategy tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-c`, `--compatibility`|Compatibility level required for new schemas created via a Producer. Possible values (Full, Backward, Forward, None).|Full| -|`-d`, `--disabled`|Disable automatic schema updates.|false| - -### `get-publish-rate` -Get the message publish rate for each topic in a namespace, in bytes as well as messages per second - -Usage - -```bash - -$ pulsar-admin namespaces get-publish-rate tenant/namespace - -``` - -### `set-publish-rate` -Set the message publish rate for each topic in a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-publish-rate tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-m`, `--msg-publish-rate`|Threshold for number of messages per second per topic in the namespace (-1 implies not set, 0 for no limit).|-1| -|`-b`, `--byte-publish-rate`|Threshold for number of bytes per second per topic in the namespace (-1 implies not set, 0 for no limit).|-1| - -### `set-offload-policies` -Set the offload policy for a namespace. 
- -Usage - -```bash - -$ pulsar-admin namespaces set-offload-policies tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-d`, `--driver`|Driver to use to offload old data to long term storage,(Possible values: S3, aws-s3, google-cloud-storage)|| -|`-r`, `--region`|The long term storage region|| -|`-b`, `--bucket`|Bucket to place offloaded ledger into|| -|`-e`, `--endpoint`|Alternative endpoint to connect to|| -|`-i`, `--aws-id`|AWS Credential Id to use when using driver S3 or aws-s3|| -|`-s`, `--aws-secret`|AWS Credential Secret to use when using driver S3 or aws-s3|| -|`-ro`, `--s3-role`|S3 Role used for STSAssumeRoleSessionCredentialsProvider using driver S3 or aws-s3|| -|`-rsn`, `--s3-role-session-name`|S3 role session name used for STSAssumeRoleSessionCredentialsProvider using driver S3 or aws-s3|| -|`-mbs`, `--maxBlockSize`|Max block size|64MB| -|`-rbs`, `--readBufferSize`|Read buffer size|1MB| -|`-oat`, `--offloadAfterThreshold`|Offload after threshold size (eg: 1M, 5M)|| -|`-oae`, `--offloadAfterElapsed`|Offload after elapsed in millis (or minutes, hours,days,weeks eg: 100m, 3h, 2d, 5w).|| - -### `get-offload-policies` -Get the offload policy for a namespace. - -Usage - -```bash - -$ pulsar-admin namespaces get-offload-policies tenant/namespace - -``` - -### `set-max-subscriptions-per-topic` -Set the maximum subscription per topic for a namespace. - -Usage - -```bash - -$ pulsar-admin namespaces set-max-subscriptions-per-topic tenant/namespace - -``` - -### `get-max-subscriptions-per-topic` -Get the maximum subscription per topic for a namespace. - -Usage - -```bash - -$ pulsar-admin namespaces get-max-subscriptions-per-topic tenant/namespace - -``` - -### `remove-max-subscriptions-per-topic` -Remove the maximum subscription per topic for a namespace. - -Usage - -```bash - -$ pulsar-admin namespaces remove-max-subscriptions-per-topic tenant/namespace - -``` - -## `ns-isolation-policy` -Operations for managing namespace isolation policies. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy subcommand - -``` - -Subcommands -* `set` -* `get` -* `list` -* `delete` -* `brokers` -* `broker` - -### `set` -Create/update a namespace isolation policy for a cluster. This operation requires Pulsar superuser privileges. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy set cluster-name policy-name options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`--auto-failover-policy-params`|Comma-separated name=value auto failover policy parameters|[]| -|`--auto-failover-policy-type`|Auto failover policy type name. Currently available options: min_available.|[]| -|`--namespaces`|Comma-separated namespaces regex list|[]| -|`--primary`|Comma-separated primary broker regex list|[]| -|`--secondary`|Comma-separated secondary broker regex list|[]| - - -### `get` -Get the namespace isolation policy of a cluster. This operation requires Pulsar superuser privileges. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy get cluster-name policy-name - -``` - -### `list` -List all namespace isolation policies of a cluster. This operation requires Pulsar superuser privileges. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy list cluster-name - -``` - -### `delete` -Delete namespace isolation policy of a cluster. This operation requires superuser privileges. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy delete - -``` - -### `brokers` -List all brokers with namespace-isolation policies attached to it. 
This operation requires Pulsar super-user privileges. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy brokers cluster-name - -``` - -### `broker` -Get broker with namespace-isolation policies attached to it. This operation requires Pulsar super-user privileges. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy broker cluster-name options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`--broker`|Broker name to get namespace-isolation policies attached to it|| - -## `topics` -Operations for managing Pulsar topics (both persistent and non-persistent). - -Usage - -```bash - -$ pulsar-admin topics subcommand - -``` - -From Pulsar 2.7.0, some namespace-level policies are available on topic level. To enable topic-level policy in Pulsar, you need to configure the following parameters in the `broker.conf` file. - -```shell - -systemTopicEnabled=true -topicLevelPoliciesEnabled=true - -``` - -Subcommands -* `compact` -* `compaction-status` -* `offload` -* `offload-status` -* `create-partitioned-topic` -* `create-missed-partitions` -* `delete-partitioned-topic` -* `create` -* `get-partitioned-topic-metadata` -* `update-partitioned-topic` -* `list-partitioned-topics` -* `list` -* `terminate` -* `permissions` -* `grant-permission` -* `revoke-permission` -* `lookup` -* `bundle-range` -* `delete` -* `unload` -* `create-subscription` -* `subscriptions` -* `unsubscribe` -* `stats` -* `stats-internal` -* `info-internal` -* `partitioned-stats` -* `partitioned-stats-internal` -* `skip` -* `clear-backlog` -* `expire-messages` -* `expire-messages-all-subscriptions` -* `peek-messages` -* `reset-cursor` -* `get-message-by-id` -* `last-message-id` -* `get-backlog-quotas` -* `set-backlog-quota` -* `remove-backlog-quota` -* `get-persistence` -* `set-persistence` -* `remove-persistence` -* `get-message-ttl` -* `set-message-ttl` -* `remove-message-ttl` -* `get-deduplication` -* `set-deduplication` -* `remove-deduplication` -* `get-retention` -* `set-retention` -* `remove-retention` -* `get-dispatch-rate` -* `set-dispatch-rate` -* `remove-dispatch-rate` -* `get-max-unacked-messages-per-subscription` -* `set-max-unacked-messages-per-subscription` -* `remove-max-unacked-messages-per-subscription` -* `get-max-unacked-messages-per-consumer` -* `set-max-unacked-messages-per-consumer` -* `remove-max-unacked-messages-per-consumer` -* `get-delayed-delivery` -* `set-delayed-delivery` -* `remove-delayed-delivery` -* `get-max-producers` -* `set-max-producers` -* `remove-max-producers` -* `get-max-consumers` -* `set-max-consumers` -* `remove-max-consumers` -* `get-compaction-threshold` -* `set-compaction-threshold` -* `remove-compaction-threshold` -* `get-offload-policies` -* `set-offload-policies` -* `remove-offload-policies` -* `get-inactive-topic-policies` -* `set-inactive-topic-policies` -* `remove-inactive-topic-policies` -* `set-max-subscriptions` -* `get-max-subscriptions` -* `remove-max-subscriptions` - -### `compact` -Run compaction on the specified topic (persistent topics only) - -Usage - -``` - -$ pulsar-admin topics compact persistent://tenant/namespace/topic - -``` - -### `compaction-status` -Check the status of a topic compaction (persistent topics only) - -Usage - -```bash - -$ pulsar-admin topics compaction-status persistent://tenant/namespace/topic - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-w`, `--wait-complete`|Wait for compaction to complete|false| - - -### `offload` -Trigger offload of data from a topic to long-term storage (e.g. 
Amazon S3)
-
-Usage
-
-```bash
-
-$ pulsar-admin topics offload persistent://tenant/namespace/topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-s`, `--size-threshold`|The maximum amount of data to keep in BookKeeper for the specific topic||
-
-
-### `offload-status`
-Check the status of data offloading from a topic to long-term storage
-
-Usage
-
-```bash
-
-$ pulsar-admin topics offload-status persistent://tenant/namespace/topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-w`, `--wait-complete`|Wait for offloading to complete|false|
-
-
-### `create-partitioned-topic`
-Create a partitioned topic. A partitioned topic must be created before producers can publish to it.
-
-:::note
-
-By default, topics are considered inactive and deleted automatically 60 seconds after creation, to prevent generating trash data.
-To disable this feature, set `brokerDeleteInactiveTopicsEnabled` to `false`.
-To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to your desired value.
-For more information about these two parameters, see [here](reference-configuration.md#broker).
-
-:::
-
-Usage
-
-```bash
-
-$ pulsar-admin topics create-partitioned-topic {persistent|non-persistent}://tenant/namespace/topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-p`, `--partitions`|The number of partitions for the topic|0|
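-
-For example, creating a four-partition topic might look like this (a sketch; the tenant, namespace, and topic names are placeholders):
-
-```bash
-
-$ pulsar-admin topics create-partitioned-topic persistent://my-tenant/my-ns/my-topic \
---partitions 4
-
-```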
- -Usage - -```bash - -$ pulsar-admin topics update-partitioned-topic {persistent|non-persistent}://tenant/namespace/topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-p`, `--partitions`|The number of partitions for the topic|0| - -### `list-partitioned-topics` -Get the list of partitioned topics under a namespace. - -Usage - -```bash - -$ pulsar-admin topics list-partitioned-topics tenant/namespace - -``` - -### `list` -Get the list of topics under a namespace - -Usage - -``` - -$ pulsar-admin topics list tenant/cluster/namespace - -``` - -### `terminate` -Terminate a persistent topic (disallow further messages from being published on the topic) - -Usage - -```bash - -$ pulsar-admin topics terminate persistent://tenant/namespace/topic - -``` - -### `permissions` -Get the permissions on a topic. Retrieve the effective permissions for a destination. These permissions are defined by the permissions set at the namespace level combined (union) with any eventual specific permissions set on the topic. - -Usage - -```bash - -$ pulsar-admin topics permissions topic - -``` - -### `grant-permission` -Grant a new permission to a client role on a single topic - -Usage - -```bash - -$ pulsar-admin topics grant-permission {persistent|non-persistent}://tenant/namespace/topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--actions`|Actions to be granted (`produce` or `consume`)|| -|`--role`|The client role to which to grant the permissions|| - - -### `revoke-permission` -Revoke permissions to a client role on a single topic. If the permission was not set at the topic level, but rather at the namespace level, this operation will return an error (HTTP status code 412). - -Usage - -```bash - -$ pulsar-admin topics revoke-permission topic - -``` - -### `lookup` -Look up a topic from the current serving broker - -Usage - -```bash - -$ pulsar-admin topics lookup topic - -``` - -### `bundle-range` -Get the namespace bundle which contains the given topic - -Usage - -```bash - -$ pulsar-admin topics bundle-range topic - -``` - -### `delete` -Delete a topic. The topic cannot be deleted if there are any active subscriptions or producers connected to the topic. - -Usage - -```bash - -$ pulsar-admin topics delete topic - -``` - -### `unload` -Unload a topic - -Usage - -```bash - -$ pulsar-admin topics unload topic - -``` - -### `create-subscription` -Create a new subscription on a topic. - -Usage - -```bash - -$ pulsar-admin topics create-subscription [options] persistent://tenant/namespace/topic - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-m`, `--messageId`|messageId where to create the subscription. It can be either 'latest', 'earliest' or (ledgerId:entryId)|latest| -|`-s`, `--subscription`|Subscription to reset position on|| - -### `subscriptions` -Get the list of subscriptions on the topic - -Usage - -```bash - -$ pulsar-admin topics subscriptions topic - -``` - -### `unsubscribe` -Delete a durable subscriber from a topic - -Usage - -```bash - -$ pulsar-admin topics unsubscribe topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-s`, `--subscription`|The subscription to delete|| -|`-f`, `--force`|Disconnect and close all consumers and delete subscription forcefully|false| - - -### `stats` -Get the stats for the topic and its connected producers and consumers. All rates are computed over a 1-minute window and are relative to the last completed 1-minute period. 
- -Usage - -```bash - -$ pulsar-admin topics stats topic - -``` - -:::note - -The unit of `storageSize` and `averageMsgSize` is Byte. - -::: - -### `stats-internal` -Get the internal stats for the topic - -Usage - -```bash - -$ pulsar-admin topics stats-internal topic - -``` - -### `info-internal` -Get the internal metadata info for the topic - -Usage - -```bash - -$ pulsar-admin topics info-internal topic - -``` - -### `partitioned-stats` -Get the stats for the partitioned topic and its connected producers and consumers. All rates are computed over a 1-minute window and are relative to the last completed 1-minute period. - -Usage - -```bash - -$ pulsar-admin topics partitioned-stats topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--per-partition`|Get per-partition stats|false| - -### `partitioned-stats-internal` -Get the internal stats for the partitioned topic and its connected producers and consumers. All the rates are computed over a 1 minute window and are relative the last completed 1 minute period. - -Usage - -```bash - -$ pulsar-admin topics partitioned-stats-internal topic - -``` - -### `skip` -Skip some messages for the subscription - -Usage - -```bash - -$ pulsar-admin topics skip topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-n`, `--count`|The number of messages to skip|0| -|`-s`, `--subscription`|The subscription on which to skip messages|| - - -### `clear-backlog` -Clear backlog (skip all the messages) for the subscription - -Usage - -```bash - -$ pulsar-admin topics clear-backlog topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-s`, `--subscription`|The subscription to clear|| - - -### `expire-messages` -Expire messages that are older than the given expiry time (in seconds) for the subscription. - -Usage - -```bash - -$ pulsar-admin topics expire-messages topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-t`, `--expireTime`|Expire messages older than the time (in seconds)|0| -|`-s`, `--subscription`|The subscription to skip messages on|| - - -### `expire-messages-all-subscriptions` -Expire messages older than the given expiry time (in seconds) for all subscriptions - -Usage - -```bash - -$ pulsar-admin topics expire-messages-all-subscriptions topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-t`, `--expireTime`|Expire messages older than the time (in seconds)|0| - - -### `peek-messages` -Peek some messages for the subscription. - -Usage - -```bash - -$ pulsar-admin topics peek-messages topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-n`, `--count`|The number of messages|0| -|`-s`, `--subscription`|Subscription to get messages from|| - - -### `reset-cursor` -Reset position for subscription to a position that is closest to timestamp or messageId. - -Usage - -```bash - -$ pulsar-admin topics reset-cursor topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-s`, `--subscription`|Subscription to reset position on|| -|`-t`, `--time`|The time in minutes to reset back to (or minutes, hours, days, weeks, etc.). Examples: `100m`, `3h`, `2d`, `5w`.|| -|`-m`, `--messageId`| The messageId to reset back to (ledgerId:entryId). 
|| - -### `get-message-by-id` -Get message by ledger id and entry id - -Usage - -```bash - -$ pulsar-admin topics get-message-by-id topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-l`, `--ledgerId`|The ledger id |0| -|`-e`, `--entryId`|The entry id |0| - -### `last-message-id` -Get the last commit message ID of the topic. - -Usage - -```bash - -$ pulsar-admin topics last-message-id persistent://tenant/namespace/topic - -``` - -### `get-backlog-quotas` -Get the backlog quota policies for a topic. - -Usage - -```bash - -$ pulsar-admin topics get-backlog-quotas tenant/namespace/topic - -``` - -### `set-backlog-quota` -Set a backlog quota policy for a topic. - -Usage - -```bash - -$ pulsar-admin topics set-backlog-quota tenant/namespace/topic options - -``` - -### `remove-backlog-quota` -Remove a backlog quota policy from a topic. - -Usage - -```bash - -$ pulsar-admin topics remove-backlog-quota tenant/namespace/topic - -``` - -### `get-persistence` -Get the persistence policies for a topic. - -Usage - -```bash - -$ pulsar-admin topics get-persistence tenant/namespace/topic - -``` - -### `set-persistence` -Set the persistence policies for a topic. - -Usage - -```bash - -$ pulsar-admin topics set-persistence tenant/namespace/topic options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-e`, `--bookkeeper-ensemble`|Number of bookies to use for a topic|0| -|`-w`, `--bookkeeper-write-quorum`|How many writes to make of each entry|0| -|`-a`, `--bookkeeper-ack-quorum`|Number of acks (guaranteed copies) to wait for each entry|0| -|`-r`, `--ml-mark-delete-max-rate`|Throttling rate of mark-delete operation (0 means no throttle)|| - -### `remove-persistence` -Remove the persistence policy for a topic. - -Usage - -```bash - -$ pulsar-admin topics remove-persistence tenant/namespace/topic - -``` - -### `get-message-ttl` -Get the message TTL for a topic. - -Usage - -```bash - -$ pulsar-admin topics get-message-ttl tenant/namespace/topic - -``` - -### `set-message-ttl` -Set the message TTL for a topic. - -Usage - -```bash - -$ pulsar-admin topics set-message-ttl tenant/namespace/topic options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-ttl`, `--messageTTL`|Message TTL for a topic in second, allowed range from 1 to `Integer.MAX_VALUE` |0| - -### `remove-message-ttl` -Remove the message TTL for a topic. - -Usage - -```bash - -$ pulsar-admin topics remove-message-ttl tenant/namespace/topic - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--enable`, `-e`|Enable message deduplication on the specified topic.|false| -|`--disable`, `-d`|Disable message deduplication on the specified topic.|false| - -### `get-deduplication` -Get a deduplication policy for a topic. - -Usage - -```bash - -$ pulsar-admin topics get-deduplication tenant/namespace/topic - -``` - -### `set-deduplication` -Set a deduplication policy for a topic. - -Usage - -```bash - -$ pulsar-admin topics set-deduplication tenant/namespace/topic options - -``` - -### `remove-deduplication` -Remove a deduplication policy for a topic. 
Usage

```bash
$ pulsar-admin topics remove-deduplication tenant/namespace/topic
```

## `tenants`
Operations for managing tenants

Usage

```bash
$ pulsar-admin tenants subcommand
```

Subcommands
* `list`
* `get`
* `create`
* `update`
* `delete`

### `list`
List the existing tenants

Usage

```bash
$ pulsar-admin tenants list
```

### `get`
Gets the configuration of a tenant

Usage

```bash
$ pulsar-admin tenants get tenant-name
```

### `create`
Creates a new tenant

Usage

```bash
$ pulsar-admin tenants create tenant-name options
```

Options

|Flag|Description|Default|
|----|---|---|
|`-r`, `--admin-roles`|Comma-separated admin roles||
|`-c`, `--allowed-clusters`|Comma-separated allowed clusters||

### `update`
Updates a tenant

Usage

```bash
$ pulsar-admin tenants update tenant-name options
```

Options

|Flag|Description|Default|
|----|---|---|
|`-r`, `--admin-roles`|Comma-separated admin roles||
|`-c`, `--allowed-clusters`|Comma-separated allowed clusters||


### `delete`
Deletes an existing tenant

Usage

```bash
$ pulsar-admin tenants delete tenant-name
```

Options

|Flag|Description|Default|
|----|---|---|
|`-f`, `--force`|Delete a tenant forcefully by deleting all namespaces under it.|false|


## `resource-quotas`
Operations for managing resource quotas

Usage

```bash
$ pulsar-admin resource-quotas subcommand
```

Subcommands
* `get`
* `set`
* `reset-namespace-bundle-quota`


### `get`
Get the resource quota for a specified namespace bundle, or the default quota if no namespace/bundle is specified.

Usage

```bash
$ pulsar-admin resource-quotas get options
```

Options

|Flag|Description|Default|
|----|---|---|
|`-b`, `--bundle`|A bundle of the form {start-boundary}_{end_boundary}. This must be specified together with -n/--namespace.||
|`-n`, `--namespace`|The namespace||


### `set`
Set the resource quota for the specified namespace bundle, or the default quota if no namespace/bundle is specified.

Usage

```bash
$ pulsar-admin resource-quotas set options
```

Options

|Flag|Description|Default|
|----|---|---|
|`-bi`, `--bandwidthIn`|The expected inbound bandwidth (in bytes/second)|0|
|`-bo`, `--bandwidthOut`|The expected outbound bandwidth (in bytes/second)|0|
|`-b`, `--bundle`|A bundle of the form {start-boundary}_{end_boundary}. This must be specified together with -n/--namespace.||
|`-d`, `--dynamic`|Allow the quota to be dynamically re-calculated (or not)|false|
|`-mem`, `--memory`|Expected memory usage (in megabytes)|0|
|`-mi`, `--msgRateIn`|Expected incoming messages per second|0|
|`-mo`, `--msgRateOut`|Expected outgoing messages per second|0|
|`-n`, `--namespace`|The namespace as tenant/namespace, for example my-tenant/my-ns. Must be specified together with -b/--bundle.||


### `reset-namespace-bundle-quota`
Reset the specified namespace bundle's resource quota to a default value.

Usage

```bash
$ pulsar-admin resource-quotas reset-namespace-bundle-quota options
```

Options

|Flag|Description|Default|
|----|---|---|
|`-b`, `--bundle`|A bundle of the form {start-boundary}_{end_boundary}. This must be specified together with -n/--namespace.||
|`-n`, `--namespace`|The namespace||



## `schemas`
Operations related to Schemas associated with Pulsar topics.
- -Usage - -``` - -$ pulsar-admin schemas subcommand - -``` - -Subcommands -* `upload` -* `delete` -* `get` -* `extract` - - -### `upload` -Upload the schema definition for a topic - -Usage - -```bash - -$ pulsar-admin schemas upload persistent://tenant/namespace/topic options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`--filename`|The path to the schema definition file. An example schema file is available under conf directory.|| - - -### `delete` -Delete the schema definition associated with a topic - -Usage - -```bash - -$ pulsar-admin schemas delete persistent://tenant/namespace/topic - -``` - -### `get` -Retrieve the schema definition associated with a topic (at a given version if version is supplied). - -Usage - -```bash - -$ pulsar-admin schemas get persistent://tenant/namespace/topic options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`--version`|The version of the schema definition to retrieve for a topic.|| - -### `extract` -Provide the schema definition for a topic via Java class name contained in a JAR file - -Usage - -```bash - -$ pulsar-admin schemas extract persistent://tenant/namespace/topic options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-c`, `--classname`|The Java class name|| -|`-j`, `--jar`|A path to the JAR file which contains the above Java class|| -|`-t`, `--type`|The type of the schema (avro or json)|| diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/reference-rest-api-overview.md b/site2/website/versioned_docs/version-2.8.1-deprecated/reference-rest-api-overview.md deleted file mode 100644 index 4bdcf23483a2b5..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/reference-rest-api-overview.md +++ /dev/null @@ -1,18 +0,0 @@ ---- -id: reference-rest-api-overview -title: Pulsar REST APIs -sidebar_label: "Pulsar REST APIs" ---- - -A REST API (also known as RESTful API, REpresentational State Transfer Application Programming Interface) is a set of definitions and protocols for building and integrating application software, using HTTP requests to GET, PUT, POST, and DELETE data following the REST standards. In essence, REST API is a set of remote calls using standard methods to request and return data in a specific format between two systems. - -Pulsar provides a variety of REST APIs that enable you to interact with Pulsar to retrieve information or perform an action. - -| REST API category | Description | -| --- | --- | -| [Admin](https://pulsar.apache.org/admin-rest-api/?version=master) | REST APIs for administrative operations.| -| [Functions](https://pulsar.apache.org/functions-rest-api/?version=master) | REST APIs for function-specific operations.| -| [Sources](https://pulsar.apache.org/source-rest-api/?version=master) | REST APIs for source-specific operations.| -| [Sinks](https://pulsar.apache.org/sink-rest-api/?version=master) | REST APIs for sink-specific operations.| -| [Packages](https://pulsar.apache.org/packages-rest-api/?version=master) | REST APIs for package-specific operations. 
A package can be a group of functions, sources, and sinks.|

diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/reference-terminology.md b/site2/website/versioned_docs/version-2.8.1-deprecated/reference-terminology.md
deleted file mode 100644
index e5099141c3231e..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/reference-terminology.md
+++ /dev/null
@@ -1,176 +0,0 @@
---
id: reference-terminology
title: Pulsar Terminology
sidebar_label: "Terminology"
original_id: reference-terminology
---

Here is a glossary of terms related to Apache Pulsar:

### Concepts

#### Pulsar

Pulsar is a distributed messaging system originally created by Yahoo but now under the stewardship of the Apache Software Foundation.

#### Message

Messages are the basic unit of Pulsar. They're what [producers](#producer) publish to [topics](#topic) and what [consumers](#consumer) then consume from topics.

#### Topic

A named channel used to pass messages published by [producers](#producer) to [consumers](#consumer) who process those [messages](#message).

#### Partitioned Topic

A topic that is served by multiple Pulsar [brokers](#broker), which enables higher throughput.

#### Namespace

A grouping mechanism for related [topics](#topic).

#### Namespace Bundle

A virtual group of [topics](#topic) that belong to the same [namespace](#namespace). A namespace bundle is defined as a range between two 32-bit hashes, such as 0x00000000 and 0xffffffff.

#### Tenant

An administrative unit for allocating capacity and enforcing an authentication/authorization scheme.

#### Subscription

A lease on a [topic](#topic) established by a group of [consumers](#consumer). Pulsar has four subscription modes (exclusive, shared, failover and key_shared).

#### Pub-Sub

A messaging pattern in which [producer](#producer) processes publish messages on [topics](#topic) that are then consumed (processed) by [consumer](#consumer) processes.

#### Producer

A process that publishes [messages](#message) to a Pulsar [topic](#topic).

#### Consumer

A process that establishes a subscription to a Pulsar [topic](#topic) and processes messages published to that topic by [producers](#producer).

#### Reader

Pulsar readers are message processors much like Pulsar [consumers](#consumer) but with two crucial differences:

- you can specify *where* on a topic readers begin processing messages (consumers always begin with the latest available unacked message);
- readers don't retain data or acknowledge messages.

#### Cursor

The subscription position for a [consumer](#consumer).

#### Acknowledgment (ack)

A message sent to a Pulsar broker by a [consumer](#consumer) indicating that a message has been successfully processed. An acknowledgement (ack) is Pulsar's way of knowing that the message can be deleted from the system; if a message is not acknowledged, it is retained until it is processed.

#### Negative Acknowledgment (nack)

When an application fails to process a particular message, it can send a "negative ack" to Pulsar to signal that the message should be replayed at a later time. (By default, failed messages are replayed after a 1 minute delay.) Be aware that negative acknowledgment on ordered subscription types, such as Exclusive, Failover and Key_Shared, can cause failed messages to arrive at consumers out of the original order.
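A minimal sketch of this flow with the Java client is shown below; the service URL, topic, and subscription names are hypothetical, and `negativeAckRedeliveryDelay` is set explicitly only to mirror the default 1-minute replay delay mentioned above.

```java
import java.util.concurrent.TimeUnit;

import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.PulsarClient;

public class NegativeAckExample {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")           // hypothetical service URL
                .build();

        Consumer<byte[]> consumer = client.newConsumer()
                .topic("my-topic")                               // hypothetical topic
                .subscriptionName("my-subscription")             // hypothetical subscription
                .negativeAckRedeliveryDelay(1, TimeUnit.MINUTES) // mirrors the default replay delay
                .subscribe();

        Message<byte[]> msg = consumer.receive();
        try {
            // process the message here ...
            consumer.acknowledge(msg);
        } catch (Exception e) {
            // processing failed: ask Pulsar to replay the message later
            consumer.negativeAcknowledge(msg);
        }

        consumer.close();
        client.close();
    }
}
```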
- -#### Unacknowledged - -A message that has been delivered to a consumer for processing but not yet confirmed as processed by the consumer. - -#### Retention Policy - -Size and time limits that you can set on a [namespace](#namespace) to configure retention of [messages](#message) -that have already been [acknowledged](#acknowledgement-ack). - -#### Multi-Tenancy - -The ability to isolate [namespaces](#namespace), specify quotas, and configure authentication and authorization -on a per-[tenant](#tenant) basis. - -#### Failure Domain - -A logical domain under a Pulsar cluster. Each logical domain contains a pre-configured list of brokers. - -#### Anti-affinity Namespaces - -A group of namespaces that have anti-affinity to each other. - -### Architecture - -#### Standalone - -A lightweight Pulsar broker in which all components run in a single Java Virtual Machine (JVM) process. Standalone -clusters can be run on a single machine and are useful for development purposes. - -#### Cluster - -A set of Pulsar [brokers](#broker) and [BookKeeper](#bookkeeper) servers (aka [bookies](#bookie)). -Clusters can reside in different geographical regions and replicate messages to one another -in a process called [geo-replication](#geo-replication). - -#### Instance - -A group of Pulsar [clusters](#cluster) that act together as a single unit. - -#### Geo-Replication - -Replication of messages across Pulsar [clusters](#cluster), potentially in different datacenters -or geographical regions. - -#### Configuration Store - -Pulsar's configuration store (previously known as configuration store) is a ZooKeeper quorum that -is used for configuration-specific tasks. A multi-cluster Pulsar installation requires just one -configuration store across all [clusters](#cluster). - -#### Topic Lookup - -A service provided by Pulsar [brokers](#broker) that enables connecting clients to automatically determine -which Pulsar [cluster](#cluster) is responsible for a [topic](#topic) (and thus where message traffic for -the topic needs to be routed). - -#### Service Discovery - -A mechanism provided by Pulsar that enables connecting clients to use just a single URL to interact -with all the [brokers](#broker) in a [cluster](#cluster). - -#### Broker - -A stateless component of Pulsar [clusters](#cluster) that runs two other components: an HTTP server -exposing a REST interface for administration and topic lookup and a [dispatcher](#dispatcher) that -handles all message transfers. Pulsar clusters typically consist of multiple brokers. - -#### Dispatcher - -An asynchronous TCP server used for all data transfers in-and-out a Pulsar [broker](#broker). The Pulsar -dispatcher uses a custom binary protocol for all communications. - -### Storage - -#### BookKeeper - -[Apache BookKeeper](http://bookkeeper.apache.org/) is a scalable, low-latency persistent log storage -service that Pulsar uses to store data. - -#### Bookie - -Bookie is the name of an individual BookKeeper server. It is effectively the storage server of Pulsar. - -#### Ledger - -An append-only data structure in [BookKeeper](#bookkeeper) that is used to persistently store messages in Pulsar [topics](#topic). - -### Functions - -Pulsar Functions are lightweight functions that can consume messages from Pulsar topics, apply custom processing logic, and, if desired, publish results to topics. 
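As an illustrative sketch (the class name and upper-casing logic are hypothetical), a Pulsar Function written in Java implements the `org.apache.pulsar.functions.api.Function` interface; the input and output topics are configured when the function is submitted.

```java
import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;

// Consumes a string from the input topic, applies custom logic,
// and returns a result, which Pulsar publishes to the output topic.
public class UppercaseFunction implements Function<String, String> {
    @Override
    public String process(String input, Context context) {
        context.getLogger().info("Processing message: {}", input);
        return input.toUpperCase();
    }
}
```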
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/schema-evolution-compatibility.md b/site2/website/versioned_docs/version-2.8.1-deprecated/schema-evolution-compatibility.md deleted file mode 100644 index 3e78429df69da2..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/schema-evolution-compatibility.md +++ /dev/null @@ -1,201 +0,0 @@ ---- -id: schema-evolution-compatibility -title: Schema evolution and compatibility -sidebar_label: "Schema evolution and compatibility" -original_id: schema-evolution-compatibility ---- - -Normally, schemas do not stay the same over a long period of time. Instead, they undergo evolutions to satisfy new needs. - -This chapter examines how Pulsar schema evolves and what Pulsar schema compatibility check strategies are. - -## Schema evolution - -Pulsar schema is defined in a data structure called `SchemaInfo`. - -Each `SchemaInfo` stored with a topic has a version. The version is used to manage the schema changes happening within a topic. - -The message produced with `SchemaInfo` is tagged with a schema version. When a message is consumed by a Pulsar client, the Pulsar client can use the schema version to retrieve the corresponding `SchemaInfo` and use the correct schema information to deserialize data. - -### What is schema evolution? - -Schemas store the details of attributes and types. To satisfy new business requirements, you need to update schemas inevitably over time, which is called **schema evolution**. - -Any schema changes affect downstream consumers. Schema evolution ensures that the downstream consumers can seamlessly handle data encoded with both old schemas and new schemas. - -### How Pulsar schema should evolve? - -The answer is Pulsar schema compatibility check strategy. It determines how schema compares old schemas with new schemas in topics. - -For more information, see [Schema compatibility check strategy](#schema-compatibility-check-strategy). - -### How does Pulsar support schema evolution? - -1. When a producer/consumer/reader connects to a broker, the broker deploys the schema compatibility checker configured by `schemaRegistryCompatibilityCheckers` to enforce schema compatibility check. - - The schema compatibility checker is one instance per schema type. - - Currently, Avro and JSON have their own compatibility checkers, while all the other schema types share the default compatibility checker which disables schema evolution. - -2. The producer/consumer/reader sends its client `SchemaInfo` to the broker. - -3. The broker knows the schema type and locates the schema compatibility checker for that type. - -4. The broker uses the checker to check if the `SchemaInfo` is compatible with the latest schema of the topic by applying its compatibility check strategy. - - Currently, the compatibility check strategy is configured at the namespace level and applied to all the topics within that namespace. - -## Schema compatibility check strategy - -Pulsar has 8 schema compatibility check strategies, which are summarized in the following table. - -Suppose that you have a topic containing three schemas (V1, V2, and V3), V1 is the oldest and V3 is the latest: - -| Compatibility check strategy | Definition | Changes allowed | Check against which schema | Upgrade first | -| --- | --- | --- | --- | --- | -| `ALWAYS_COMPATIBLE` | Disable schema compatibility check. | All changes are allowed | All previous versions | Any order | -| `ALWAYS_INCOMPATIBLE` | Disable schema evolution. 
| All changes are disabled | None | None |
| `BACKWARD` | Consumers using the schema V3 can process data written by producers using the schema V3 or V2. | <li>Add optional fields</li><li>Delete fields</li> | Latest version | Consumers |
| `BACKWARD_TRANSITIVE` | Consumers using the schema V3 can process data written by producers using the schema V3, V2 or V1. | <li>Add optional fields</li><li>Delete fields</li> | All previous versions | Consumers |
| `FORWARD` | Consumers using the schema V3 or V2 can process data written by producers using the schema V3. | <li>Add fields</li><li>Delete optional fields</li> | Latest version | Producers |
| `FORWARD_TRANSITIVE` | Consumers using the schema V3, V2 or V1 can process data written by producers using the schema V3. | <li>Add fields</li><li>Delete optional fields</li> | All previous versions | Producers |
| `FULL` | Backward and forward compatible between the schema V3 and V2. | <li>Modify optional fields</li> | Latest version | Any order |
| `FULL_TRANSITIVE` | Backward and forward compatible among the schema V3, V2, and V1. | <li>Modify optional fields</li> | All previous versions | Any order |

### ALWAYS_COMPATIBLE and ALWAYS_INCOMPATIBLE

| Compatibility check strategy | Definition | Note |
| --- | --- | --- |
| `ALWAYS_COMPATIBLE` | Disable schema compatibility check. | None |
| `ALWAYS_INCOMPATIBLE` | Disable schema evolution, that is, any schema change is rejected. | <li>For all schema types except Avro and JSON, the default schema compatibility check strategy is `ALWAYS_INCOMPATIBLE`.</li><li>For Avro and JSON, the default schema compatibility check strategy is `FULL`.</li> |

#### Example

* Example 1

  In some situations, an application needs to store events of several different types in the same Pulsar topic.

  In particular, when developing a data model in an `Event Sourcing` style, you might have several kinds of events that affect the state of an entity.

  For example, for a user entity, there are `userCreated`, `userAddressChanged` and `userEnquiryReceived` events. The application requires that those events are always read in the same order.

  Consequently, those events need to go in the same Pulsar partition to maintain order. This application can use `ALWAYS_COMPATIBLE` to let different kinds of events coexist in the same topic.

* Example 2

  Sometimes we also make incompatible changes.

  For example, you are modifying a field type from `string` to `int`.

  In this case, you need to:

  * Upgrade all producers and consumers to the new schema versions at the same time.

  * Optionally, create a new topic and start migrating applications to use the new topic and the new schema, avoiding the need to handle two incompatible versions in the same topic.

### BACKWARD and BACKWARD_TRANSITIVE

Suppose that you have a topic containing three schemas (V1, V2, and V3), where V1 is the oldest and V3 is the latest:

| Compatibility check strategy | Definition | Description |
|---|---|---|
| `BACKWARD` | Consumers using the new schema can process data written by producers using the **last schema**. | The consumers using the schema V3 can process data written by producers using the schema V3 or V2. |
| `BACKWARD_TRANSITIVE` | Consumers using the new schema can process data written by producers using **all previous schemas**. | The consumers using the schema V3 can process data written by producers using the schema V3, V2, or V1. |

#### Example

* Example 1

  Remove a field.

  A consumer constructed to process events without one field can process events written with the old schema containing the field, and the consumer will ignore that field.

* Example 2

  You want to load all Pulsar data into a Hive data warehouse and run SQL queries against the data.

  The same SQL queries must continue to work even when the data changes. To support this, you can evolve the schemas using the `BACKWARD` strategy.

### FORWARD and FORWARD_TRANSITIVE

Suppose that you have a topic containing three schemas (V1, V2, and V3), where V1 is the oldest and V3 is the latest:

| Compatibility check strategy | Definition | Description |
|---|---|---|
| `FORWARD` | Consumers using the **last schema** can process data written by producers using a new schema, even though they may not be able to use the full capabilities of the new schema. | The consumers using the schema V3 or V2 can process data written by producers using the schema V3. |
| `FORWARD_TRANSITIVE` | Consumers using **all previous schemas** can process data written by producers using a new schema. | The consumers using the schema V3, V2, or V1 can process data written by producers using the schema V3. |

#### Example

* Example 1

  Add a field.

  In most data formats, consumers written to process events without new fields can continue doing so even when they receive new events containing new fields.

* Example 2

  If a consumer has application logic tied to a full version of a schema, the application logic may not be updated instantly when the schema evolves.

  In this case, you need to project data with a new schema onto an old schema that the application understands.
- - Consequently, you can evolve the schemas using the `FORWARD` strategy to ensure that the old schema can process data encoded with the new schema. - -### FULL and FULL_TRANSITIVE - -Suppose that you have a topic containing three schemas (V1, V2, and V3), V1 is the oldest and V3 is the latest: - -| Compatibility check strategy | Definition | Description | Note | -| --- | --- | --- | --- | -| `FULL` | Schemas are both backward and forward compatible, which means: Consumers using the last schema can process data written by producers using the new schema. AND Consumers using the new schema can process data written by producers using the last schema. | Consumers using the schema V3 can process data written by producers using the schema V3 or V2. AND Consumers using the schema V3 or V2 can process data written by producers using the schema V3. |
<li>For Avro and JSON, the default schema compatibility check strategy is `FULL`.</li><li>For all schema types except Avro and JSON, the default schema compatibility check strategy is `ALWAYS_INCOMPATIBLE`.</li> |
| `FULL_TRANSITIVE` | The new schema is backward and forward compatible with all previously registered schemas. | Consumers using the schema V3 can process data written by producers using the schema V3, V2 or V1. AND Consumers using the schema V3, V2 or V1 can process data written by producers using the schema V3. | None |

#### Example

In some data formats, for example, Avro, you can define fields with default values. Consequently, adding or removing a field with a default value is a fully compatible change.

## Schema verification

When a producer or a consumer tries to connect to a topic, a broker performs some checks to verify a schema.

### Producer

When a producer tries to connect to a topic (leaving schema auto-creation aside), a broker does the following checks:

* Check whether the schema carried by the producer exists in the schema registry or not.

  * If the schema is already registered, then the producer is connected to a broker and produces messages with that schema.

  * If the schema is not registered, then Pulsar verifies whether the schema is allowed to be registered based on the configured compatibility check strategy.

### Consumer

When a consumer tries to connect to a topic, a broker checks if a carried schema is compatible with a registered schema based on the configured schema compatibility check strategy.

| Compatibility check strategy | Check logic |
| --- | --- |
| `ALWAYS_COMPATIBLE` | All pass |
| `ALWAYS_INCOMPATIBLE` | No pass |
| `BACKWARD` | Can read the last schema |
| `BACKWARD_TRANSITIVE` | Can read all schemas |
| `FORWARD` | Can read the last schema |
| `FORWARD_TRANSITIVE` | Can read the last schema |
| `FULL` | Can read the last schema |
| `FULL_TRANSITIVE` | Can read all schemas |

## Order of upgrading clients

The order of upgrading client applications is determined by the compatibility check strategy.

For example, suppose that producers use schemas to write data to Pulsar and consumers use schemas to read data from Pulsar.

| Compatibility check strategy | Upgrade first | Description |
| --- | --- | --- |
| `ALWAYS_COMPATIBLE` | Any order | The compatibility check is disabled. Consequently, you can upgrade the producers and consumers in **any order**. |
| `ALWAYS_INCOMPATIBLE` | None | The schema evolution is disabled. |
| <li>`BACKWARD`</li><li>`BACKWARD_TRANSITIVE`</li> | Consumers | There is no guarantee that consumers using the old schema can read data produced using the new schema. Consequently, **upgrade all consumers first**, and then start producing new data. |
| <li>`FORWARD`</li><li>`FORWARD_TRANSITIVE`</li> | Producers | There is no guarantee that consumers using the new schema can read data produced using the old schema. Consequently, **upgrade all producers first** to use the new schema and ensure that the data already produced using the old schemas are not available to consumers, and then upgrade the consumers. |
| <li>`FULL`</li><li>`FULL_TRANSITIVE`</li> | Any order | It is guaranteed that consumers using the old schema can read data produced using the new schema and consumers using the new schema can read data produced using the old schema. Consequently, you can upgrade the producers and consumers in **any order**. |

diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/schema-get-started.md b/site2/website/versioned_docs/version-2.8.1-deprecated/schema-get-started.md
deleted file mode 100644
index afacb0fa51f2ef..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/schema-get-started.md
+++ /dev/null
@@ -1,102 +0,0 @@
---
id: schema-get-started
title: Get started
sidebar_label: "Get started"
original_id: schema-get-started
---

This chapter introduces Pulsar schemas and explains why they are important.

## Schema Registry

Type safety is extremely important in any application built around a message bus like Pulsar.

Producers and consumers need some kind of mechanism for coordinating types at the topic level to avoid various potential problems arising, for example, serialization and deserialization issues.

Applications typically adopt one of the following approaches to guarantee type safety in messaging. Both approaches are available in Pulsar, and you're free to adopt one or the other or to mix and match on a per-topic basis.

#### Note
>
> Currently, the Pulsar schema registry is only available for the [Java client](client-libraries-java.md), [CGo client](client-libraries-cgo.md), [Python client](client-libraries-python.md), and [C++ client](client-libraries-cpp.md).

### Client-side approach

Producers and consumers are responsible for not only serializing and deserializing messages (which consist of raw bytes) but also "knowing" which types are being transmitted via which topics.

If a producer is sending temperature sensor data on the topic `topic-1`, consumers of that topic will run into trouble if they attempt to parse that data as moisture sensor readings.

Producers and consumers can send and receive messages consisting of raw byte arrays and leave all type safety enforcement to the application on an "out-of-band" basis.

### Server-side approach

Producers and consumers inform the system which data types can be transmitted via the topic.

With this approach, the messaging system enforces type safety and ensures that producers and consumers remain synced.

Pulsar has a built-in **schema registry** that enables clients to upload data schemas on a per-topic basis. Those schemas dictate which data types are recognized as valid for that topic.

## Why use schema

Without a schema, Pulsar does not parse data: it takes bytes as inputs and sends bytes as outputs. Since data has meaning beyond bytes, you need to parse it and might encounter parse exceptions, which mainly occur in the following situations:

* The field does not exist

* The field type has changed (for example, `string` is changed to `int`)

There are a few methods to prevent and overcome these exceptions. For example, you can catch exceptions on parsing errors, but this makes code hard to maintain; or you can adopt a schema management system that performs schema evolution, does not break downstream applications, and enforces type safety to the maximum extent in the language you are using. That solution is Pulsar schema.
- -Pulsar schema enables you to use language-specific types of data when constructing and handling messages from simple types like `string` to more complex application-specific types. - -**Example** - -You can use the _User_ class to define the messages sent to Pulsar topics. - -``` - -public class User { - String name; - int age; -} - -``` - -When constructing a producer with the _User_ class, you can specify a schema or not as below. - -### Without schema - -If you construct a producer without specifying a schema, then the producer can only produce messages of type `byte[]`. If you have a POJO class, you need to serialize the POJO into bytes before sending messages. - -**Example** - -``` - -Producer producer = client.newProducer() - .topic(topic) - .create(); -User user = new User("Tom", 28); -byte[] message = … // serialize the `user` by yourself; -producer.send(message); - -``` - -### With schema - -If you construct a producer with specifying a schema, then you can send a class to a topic directly without worrying about how to serialize POJOs into bytes. - -**Example** - -This example constructs a producer with the _JSONSchema_, and you can send the _User_ class to topics directly without worrying about how to serialize it into bytes. - -``` - -Producer producer = client.newProducer(JSONSchema.of(User.class)) - .topic(topic) - .create(); -User user = new User("Tom", 28); -producer.send(user); - -``` - -### Summary - -When constructing a producer with a schema, you do not need to serialize messages into bytes, instead Pulsar schema does this job in the background. diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/schema-manage.md b/site2/website/versioned_docs/version-2.8.1-deprecated/schema-manage.md deleted file mode 100644 index c588aae619eee9..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/schema-manage.md +++ /dev/null @@ -1,639 +0,0 @@ ---- -id: schema-manage -title: Manage schema -sidebar_label: "Manage schema" -original_id: schema-manage ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -This guide demonstrates the ways to manage schemas: - -* Automatically - - * [Schema AutoUpdate](#schema-autoupdate) - -* Manually - - * [Schema manual management](#schema-manual-management) - - * [Custom schema storage](#custom-schema-storage) - -## Schema AutoUpdate - -If a schema passes the schema compatibility check, Pulsar producer automatically updates this schema to the topic it produces by default. - -### AutoUpdate for producer - -For a producer, the `AutoUpdate` happens in the following cases: - -* If a **topic doesn’t have a schema**, Pulsar registers a schema automatically. - -* If a **topic has a schema**: - - * If a **producer doesn’t carry a schema**: - - * If `isSchemaValidationEnforced` or `schemaValidationEnforced` is **disabled** in the namespace to which the topic belongs, the producer is allowed to connect to the topic and produce data. - - * If `isSchemaValidationEnforced` or `schemaValidationEnforced` is **enabled** in the namespace to which the topic belongs, the producer is rejected and disconnected. - - * If a **producer carries a schema**: - - A broker performs the compatibility check based on the configured compatibility check strategy of the namespace to which the topic belongs. - - * If the schema is registered, a producer is connected to a broker. 
- - * If the schema is not registered: - - * If `isAllowAutoUpdateSchema` sets to **false**, the producer is rejected to connect to a broker. - - * If `isAllowAutoUpdateSchema` sets to **true**: - - * If the schema passes the compatibility check, then the broker registers a new schema automatically for the topic and the producer is connected. - - * If the schema does not pass the compatibility check, then the broker does not register a schema and the producer is rejected to connect to a broker. - -![AutoUpdate Producer](/assets/schema-producer.png) - -### AutoUpdate for consumer - -For a consumer, the `AutoUpdate` happens in the following cases: - -* If a **consumer connects to a topic without a schema** (which means the consumer receiving raw bytes), the consumer can connect to the topic successfully without doing any compatibility check. - -* If a **consumer connects to a topic with a schema**. - - * If a topic does not have all of them (a schema/data/a local consumer and a local producer): - - * If `isAllowAutoUpdateSchema` sets to **true**, then the consumer registers a schema and it is connected to a broker. - - * If `isAllowAutoUpdateSchema` sets to **false**, then the consumer is rejected to connect to a broker. - - * If a topic has one of them (a schema/data/a local consumer and a local producer), then the schema compatibility check is performed. - - * If the schema passes the compatibility check, then the consumer is connected to the broker. - - * If the schema does not pass the compatibility check, then the consumer is rejected to connect to the broker. - -![AutoUpdate Consumer](/assets/schema-consumer.png) - - -### Manage AutoUpdate strategy - -You can use the `pulsar-admin` command to manage the `AutoUpdate` strategy as below: - -* [Enable AutoUpdate](#enable-autoupdate) - -* [Disable AutoUpdate](#disable-autoupdate) - -* [Adjust compatibility](#adjust-compatibility) - -#### Enable AutoUpdate - -To enable `AutoUpdate` on a namespace, you can use the `pulsar-admin` command. - -```bash - -bin/pulsar-admin namespaces set-is-allow-auto-update-schema --enable tenant/namespace - -``` - -#### Disable AutoUpdate - -To disable `AutoUpdate` on a namespace, you can use the `pulsar-admin` command. - -```bash - -bin/pulsar-admin namespaces set-is-allow-auto-update-schema --disable tenant/namespace - -``` - -Once the `AutoUpdate` is disabled, you can only register a new schema using the `pulsar-admin` command. - -#### Adjust compatibility - -To adjust the schema compatibility level on a namespace, you can use the `pulsar-admin` command. - -```bash - -bin/pulsar-admin namespaces set-schema-compatibility-strategy --compatibility tenant/namespace - -``` - -### Schema validation - -By default, `schemaValidationEnforced` is **disabled** for producers: - -* This means a producer without a schema can produce any kind of messages to a topic with schemas, which may result in producing trash data to the topic. - -* This allows non-java language clients that don’t support schema can produce messages to a topic with schemas. - -However, if you want a stronger guarantee on the topics with schemas, you can enable `schemaValidationEnforced` across the whole cluster or on a per-namespace basis. - -#### Enable schema validation - -To enable `schemaValidationEnforced` on a namespace, you can use the `pulsar-admin` command. 
- -```bash - -bin/pulsar-admin namespaces set-schema-validation-enforce --enable tenant/namespace - -``` - -#### Disable schema validation - -To disable `schemaValidationEnforced` on a namespace, you can use the `pulsar-admin` command. - -```bash - -bin/pulsar-admin namespaces set-schema-validation-enforce --disable tenant/namespace - -``` - -## Schema manual management - -To manage schemas, you can use one of the following methods. - -| Method | Description | -| --- | --- | -| **Admin CLI**
| You can use the `pulsar-admin` tool to manage Pulsar schemas, brokers, clusters, sources, sinks, topics, tenants and so on. For more information about how to use the `pulsar-admin` tool, see [here](reference-pulsar-admin.md). |
| **REST API** | Pulsar exposes schema related management API in Pulsar’s admin RESTful API. You can access the admin RESTful endpoint directly to manage schemas. For more information about how to use the Pulsar REST API, see [here](http://pulsar.apache.org/admin-rest-api/). |
| **Java Admin API** | Pulsar provides a Java admin library. |

### Upload a schema

To upload (register) a new schema for a topic, you can use one of the following methods.

````mdx-code-block

Use the `upload` subcommand.

```bash
$ pulsar-admin schemas upload --filename <schema-definition-file>
```

The `schema-definition-file` is in JSON format.

```json
{
  "type": "",
  "schema": "",
  "properties": {} // the properties associated with the schema
}
```

The `schema-definition-file` includes the following fields:

| Field | Description |
| --- | --- |
| `type` | The schema type. |
| `schema` | The schema definition data, which is encoded in UTF 8 charset. <li>If the schema is a **primitive** schema, this field should be blank.</li><li>If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition.</li> |
| `properties` | The additional properties associated with the schema. |

Here are examples of the `schema-definition-file` for a JSON schema.

**Example 1**

```json
{
  "type": "JSON",
  "schema": "{\"type\":\"record\",\"name\":\"User\",\"namespace\":\"com.foo\",\"fields\":[{\"name\":\"file1\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"file2\",\"type\":\"string\",\"default\":null},{\"name\":\"file3\",\"type\":[\"null\",\"string\"],\"default\":\"dfdf\"}]}",
  "properties": {}
}
```

**Example 2**

```json
{
  "type": "STRING",
  "schema": "",
  "properties": {
    "key1": "value1"
  }
}
```
    - - -Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v2/schemas/:tenant/:namespace/:topic/schema|operation/uploadSchema?version=@pulsar:version_number@} - -The post payload is in JSON format. - -```json - -{ - "type": "", - "schema": "", - "properties": {} // the properties associated with the schema -} - -``` - -The post payload includes the following fields: - -| Field | Description | -| --- | --- | -| `type` | The schema type. | -| `schema` | The schema definition data, which is encoded in UTF 8 charset.
<li>If the schema is a **primitive** schema, this field should be blank.</li><li>If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition.</li> |
| `properties` | The additional properties associated with the schema. |
    - - -```java - -void createSchema(String topic, PostSchemaPayload schemaPayload) - -``` - -The `PostSchemaPayload` includes the following fields: - -| Field | Description | -| --- | --- | -| `type` | The schema type. | -| `schema` | The schema definition data, which is encoded in UTF 8 charset.
<li>If the schema is a **primitive** schema, this field should be blank.</li><li>If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition.</li> |
| `properties` | The additional properties associated with the schema. |

Here is an example of `PostSchemaPayload`:

```java
PulsarAdmin admin = …;

PostSchemaPayload payload = new PostSchemaPayload();
payload.setType("INT8");
payload.setSchema("");

admin.createSchema("my-tenant/my-ns/my-topic", payload);
```
    - -
    -```` - -### Get a schema (latest) - -To get the latest schema for a topic, you can use one of the following methods. - -````mdx-code-block - - - - -Use the `get` subcommand. - -```bash - -$ pulsar-admin schemas get - -{ - "version": 0, - "type": "String", - "timestamp": 0, - "data": "string", - "properties": { - "property1": "string", - "property2": "string" - } -} - -``` - - - - -Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v2/schemas/:tenant/:namespace/:topic/schema|operation/getSchema?version=@pulsar:version_number@} - -Here is an example of a response, which is returned in JSON format. - -```json - -{ - "version": "", - "type": "", - "timestamp": "", - "data": "", - "properties": {} // the properties associated with the schema -} - -``` - -The response includes the following fields: - -| Field | Description | -| --- | --- | -| `version` | The schema version, which is a long number. | -| `type` | The schema type. | -| `timestamp` | The timestamp of creating this version of schema. | -| `data` | The schema definition data, which is encoded in UTF 8 charset.
<li>If the schema is a **primitive** schema, this field should be blank.</li><li>If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition.</li> |
| `properties` | The additional properties associated with the schema. |
    - - -```java - -SchemaInfo createSchema(String topic) - -``` - -The `SchemaInfo` includes the following fields: - -| Field | Description | -| --- | --- | -| `name` | The schema name. | -| `type` | The schema type. | -| `schema` | A byte array of the schema definition data, which is encoded in UTF 8 charset.
<li>If the schema is a **primitive** schema, this byte array should be empty.</li><li>If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition converted to a byte array.</li> |
| `properties` | The additional properties associated with the schema. |

Here is an example of `SchemaInfo`:

```java
PulsarAdmin admin = …;

SchemaInfo si = admin.getSchema("my-tenant/my-ns/my-topic");
```
    - -
    -```` - -### Get a schema (specific) - -To get a specific version of a schema, you can use one of the following methods. - -````mdx-code-block - - - - -Use the `get` subcommand. - -```bash - -$ pulsar-admin schemas get --version= - -``` - - - - -Send a `GET` request to a schema endpoint: {@inject: endpoint|GET|/admin/v2/schemas/:tenant/:namespace/:topic/schema/:version|operation/getSchema?version=@pulsar:version_number@} - -Here is an example of a response, which is returned in JSON format. - -```json - -{ - "version": "", - "type": "", - "timestamp": "", - "data": "", - "properties": {} // the properties associated with the schema -} - -``` - -The response includes the following fields: - -| Field | Description | -| --- | --- | -| `version` | The schema version, which is a long number. | -| `type` | The schema type. | -| `timestamp` | The timestamp of creating this version of schema. | -| `data` | The schema definition data, which is encoded in UTF 8 charset.
<li>If the schema is a **primitive** schema, this field should be blank.</li><li>If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition.</li> |
| `properties` | The additional properties associated with the schema. |
    - - -```java - -SchemaInfo createSchema(String topic, long version) - -``` - -The `SchemaInfo` includes the following fields: - -| Field | Description | -| --- | --- | -| `name` | The schema name. | -| `type` | The schema type. | -| `schema` | A byte array of the schema definition data, which is encoded in UTF 8.
<li>If the schema is a **primitive** schema, this byte array should be empty.</li><li>If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition converted to a byte array.</li> |
| `properties` | The additional properties associated with the schema. |

Here is an example of `SchemaInfo`:

```java
PulsarAdmin admin = …;

SchemaInfo si = admin.getSchema("my-tenant/my-ns/my-topic", 1L);
```
    - -
    -```` - -### Extract a schema - -To provide a schema via a topic, you can use the following method. - -````mdx-code-block - - - - -Use the `extract` subcommand. - -```bash - -$ pulsar-admin schemas extract --classname --jar --type - -``` - - - - -```` - -### Delete a schema - -To delete a schema for a topic, you can use one of the following methods. - -:::note - -In any case, the **delete** action deletes **all versions** of a schema registered for a topic. - -::: - -````mdx-code-block - - - - -Use the `delete` subcommand. - -```bash - -$ pulsar-admin schemas delete - -``` - - - - -Send a `DELETE` request to a schema endpoint: {@inject: endpoint|DELETE|/admin/v2/schemas/:tenant/:namespace/:topic/schema|operation/deleteSchema?version=@pulsar:version_number@} - -Here is an example of a response, which is returned in JSON format. - -```json - -{ - "version": "", -} - -``` - -The response includes the following field: - -Field | Description | ----|---| -`version` | The schema version, which is a long number. | - - - - -```java - -void deleteSchema(String topic) - -``` - -Here is an example of deleting a schema. - -```java - -PulsarAdmin admin = …; - -admin.deleteSchema("my-tenant/my-ns/my-topic"); - -``` - - - - -```` - -## Custom schema storage - -By default, Pulsar stores various data types of schemas in [Apache BookKeeper](https://bookkeeper.apache.org) deployed alongside Pulsar. - -However, you can use another storage system if needed. - -### Implement - -To use a non-default (non-BookKeeper) storage system for Pulsar schemas, you need to implement the following Java interfaces: - -* [SchemaStorage interface](#schemastorage-interface) - -* [SchemaStorageFactory interface](#schemastoragefactory-interface) - -#### SchemaStorage interface - -The `SchemaStorage` interface has the following methods: - -```java - -public interface SchemaStorage { - // How schemas are updated - CompletableFuture put(String key, byte[] value, byte[] hash); - - // How schemas are fetched from storage - CompletableFuture get(String key, SchemaVersion version); - - // How schemas are deleted - CompletableFuture delete(String key); - - // Utility method for converting a schema version byte array to a SchemaVersion object - SchemaVersion versionFromBytes(byte[] version); - - // Startup behavior for the schema storage client - void start() throws Exception; - - // Shutdown behavior for the schema storage client - void close() throws Exception; -} - -``` - -:::tip - -For a complete example of **schema storage** implementation, see [BookKeeperSchemaStorage](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/schema/BookkeeperSchemaStorage.java) class. - -::: - -#### SchemaStorageFactory interface - -The `SchemaStorageFactory` interface has the following method: - -```java - -public interface SchemaStorageFactory { - @NotNull - SchemaStorage create(PulsarService pulsar) throws Exception; -} - -``` - -:::tip - -For a complete example of **schema storage factory** implementation, see [BookKeeperSchemaStorageFactory](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/schema/BookkeeperSchemaStorageFactory.java) class. - -::: - -### Deploy - -To use your custom schema storage implementation, perform the following steps. - -1. Package the implementation in a [JAR](https://docs.oracle.com/javase/tutorial/deployment/jar/basicsindex.html) file. - -2. 
Add the JAR file to the `lib` folder in your Pulsar binary or source distribution. - -3. Change the `schemaRegistryStorageClassName` configuration in `broker.conf` to your custom factory class. - -4. Start Pulsar. diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/schema-understand.md b/site2/website/versioned_docs/version-2.8.1-deprecated/schema-understand.md deleted file mode 100644 index a86b02add435e1..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/schema-understand.md +++ /dev/null @@ -1,556 +0,0 @@ ---- -id: schema-understand -title: Understand schema -sidebar_label: "Understand schema" -original_id: schema-understand ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -This chapter explains the basic concepts of Pulsar schema, focuses on the topics of particular importance, and provides additional background. - -## SchemaInfo - -Pulsar schema is defined in a data structure called `SchemaInfo`. - -The `SchemaInfo` is stored and enforced on a per-topic basis and cannot be stored at the namespace or tenant level. - -A `SchemaInfo` consists of the following fields: - -| Field | Description | -| --- | --- | -| `name` | Schema name (a string). | -| `type` | Schema type, which determines how to interpret the schema data.
- Predefined schema: see [here](schema-understand.md#schema-type).
- Customized schema: it is left as an empty string.
  986. | -| `schema`(`payload`) | Schema data, which is a sequence of 8-bit unsigned bytes and schema-type specific. | -| `properties` | It is a user defined properties as a string/string map. Applications can use this bag for carrying any application specific logics. Possible properties might be the Git hash associated with the schema, an environment string like `dev` or `prod`. | - -**Example** - -This is the `SchemaInfo` of a string. - -```json - -{ - "name": "test-string-schema", - "type": "STRING", - "schema": "", - "properties": {} -} - -``` - -## Schema type - -Pulsar supports various schema types, which are mainly divided into two categories: - -* Primitive type - -* Complex type - -### Primitive type - -Currently, Pulsar supports the following primitive types: - -| Primitive Type | Description | -|---|---| -| `BOOLEAN` | A binary value | -| `INT8` | A 8-bit signed integer | -| `INT16` | A 16-bit signed integer | -| `INT32` | A 32-bit signed integer | -| `INT64` | A 64-bit signed integer | -| `FLOAT` | A single precision (32-bit) IEEE 754 floating-point number | -| `DOUBLE` | A double-precision (64-bit) IEEE 754 floating-point number | -| `BYTES` | A sequence of 8-bit unsigned bytes | -| `STRING` | A Unicode character sequence | -| `TIMESTAMP` (`DATE`, `TIME`) | A logic type represents a specific instant in time with millisecond precision.
It stores the number of milliseconds since `January 1, 1970, 00:00:00 GMT` as an `INT64` value |
| INSTANT | A single instantaneous point on the time-line with nanosecond precision |
| LOCAL_DATE | An immutable date-time object that represents a date, often viewed as year-month-day |
| LOCAL_TIME | An immutable date-time object that represents a time, often viewed as hour-minute-second. Time is represented to nanosecond precision. |
| LOCAL_DATE_TIME | An immutable date-time object that represents a date-time, often viewed as year-month-day-hour-minute-second |

For primitive types, Pulsar does not store any schema data in `SchemaInfo`. The `type` in `SchemaInfo` is used to determine how to serialize and deserialize the data.

Some of the primitive schema implementations can use `properties` to store implementation-specific tunable settings. For example, a `string` schema can use `properties` to store the encoding charset to serialize and deserialize strings.

The conversions between **Pulsar schema types** and **language-specific primitive types** are as below.

| Schema Type | Java Type | Python Type | Go Type |
|---|---|---|---|
| BOOLEAN | boolean | bool | bool |
| INT8 | byte | | int8 |
| INT16 | short | | int16 |
| INT32 | int | | int32 |
| INT64 | long | | int64 |
| FLOAT | float | float | float32 |
| DOUBLE | double | float | float64 |
| BYTES | byte[], ByteBuffer, ByteBuf | bytes | []byte |
| STRING | string | str | string |
| TIMESTAMP | java.sql.Timestamp | | |
| TIME | java.sql.Time | | |
| DATE | java.util.Date | | |
| INSTANT | java.time.Instant | | |
| LOCAL_DATE | java.time.LocalDate | | |
| LOCAL_TIME | java.time.LocalTime | | |
| LOCAL_DATE_TIME | java.time.LocalDateTime | | |

**Example**

This example demonstrates how to use a string schema.

1. Create a producer with a string schema and send messages.

   ```java

   Producer<String> producer = client.newProducer(Schema.STRING).create();
   producer.newMessage().value("Hello Pulsar!").send();

   ```

2. Create a consumer with a string schema and receive messages.

   ```java

   Consumer<String> consumer = client.newConsumer(Schema.STRING).subscribe();
   consumer.receive();

   ```

### Complex type

Currently, Pulsar supports the following complex types:

| Complex Type | Description |
|---|---|
| `keyvalue` | Represents a complex type of a key/value pair. |
| `struct` | Handles structured data. It supports `AvroBaseStructSchema` and `ProtobufNativeSchema`. |

#### keyvalue

`Keyvalue` schema helps applications define schemas for both key and value.

For `SchemaInfo` of `keyvalue` schema, Pulsar stores the `SchemaInfo` of key schema and the `SchemaInfo` of value schema together.

Pulsar provides the following methods to encode a key/value pair in messages:

* `INLINE`

* `SEPARATED`

You can choose the encoding type when constructing the key/value schema.

````mdx-code-block



Key/value pairs are encoded together in the message payload.



Key is encoded in the message key and the value is encoded in the message payload.

**Example**

This example shows how to construct a key/value schema and then use it to produce and consume messages.

1. Construct a key/value schema with `INLINE` encoding type.

   ```java

   Schema<KeyValue<Integer, String>> kvSchema = Schema.KeyValue(
       Schema.INT32,
       Schema.STRING,
       KeyValueEncodingType.INLINE
   );

   ```

2. Optionally, construct a key/value schema with `SEPARATED` encoding type.

   ```java

   Schema<KeyValue<Integer, String>> kvSchema = Schema.KeyValue(
       Schema.INT32,
       Schema.STRING,
       KeyValueEncodingType.SEPARATED
   );

   ```

3. Produce messages using a key/value schema.

   ```java

   Schema<KeyValue<Integer, String>> kvSchema = Schema.KeyValue(
       Schema.INT32,
       Schema.STRING,
       KeyValueEncodingType.SEPARATED
   );

   Producer<KeyValue<Integer, String>> producer = client.newProducer(kvSchema)
       .topic(TOPIC)
       .create();

   final int key = 100;
   final String value = "value-100";

   // send the key/value message
   producer.newMessage()
       .value(new KeyValue<>(key, value))
       .send();

   ```

4. Consume messages using a key/value schema.

   ```java

   Schema<KeyValue<Integer, String>> kvSchema = Schema.KeyValue(
       Schema.INT32,
       Schema.STRING,
       KeyValueEncodingType.SEPARATED
   );

   Consumer<KeyValue<Integer, String>> consumer = client.newConsumer(kvSchema)
       ...
       .topic(TOPIC)
       .subscriptionName(SubscriptionName).subscribe();

   // receive key/value pair
   Message<KeyValue<Integer, String>> msg = consumer.receive();
   KeyValue<Integer, String> kv = msg.getValue();

   ```



````

#### struct

This section describes the details of type and usage of the `struct` schema.

##### Type

`struct` schema supports `AvroBaseStructSchema` and `ProtobufNativeSchema`.

|Type|Description|
---|---|
`AvroBaseStructSchema`|Pulsar uses [Avro Specification](http://avro.apache.org/docs/current/spec.html) to declare the schema definition for `AvroBaseStructSchema`, which supports `AvroSchema`, `JsonSchema`, and `ProtobufSchema`.

    This allows Pulsar:
    - to use the same tools to manage schema definitions
    - to use different serialization or deserialization methods to handle data| -`ProtobufNativeSchema`|`ProtobufNativeSchema` is based on protobuf native Descriptor.

    This allows Pulsar:
    - to use native protobuf-v3 to serialize or deserialize data
    - to use `AutoConsume` to deserialize data. - -##### Usage - -Pulsar provides the following methods to use the `struct` schema: - -* `static` - -* `generic` - -* `SchemaDefinition` - -````mdx-code-block - - - - -You can predefine the `struct` schema, which can be a POJO in Java, a `struct` in Go, or classes generated by Avro or Protobuf tools. - -**Example** - -Pulsar gets the schema definition from the predefined `struct` using an Avro library. The schema definition is the schema data stored as a part of the `SchemaInfo`. - -1. Create the _User_ class to define the messages sent to Pulsar topics. - - ```java - - @Builder - @AllArgsConstructor - @NoArgsConstructor - public static class User { - String name; - int age; - } - - ``` - -2. Create a producer with a `struct` schema and send messages. - - ```java - - Producer producer = client.newProducer(Schema.AVRO(User.class)).create(); - producer.newMessage().value(User.builder().name("pulsar-user").age(1).build()).send(); - - ``` - -3. Create a consumer with a `struct` schema and receive messages - - ```java - - Consumer consumer = client.newConsumer(Schema.AVRO(User.class)).subscribe(); - User user = consumer.receive(); - - ``` - - - - -Sometimes applications do not have pre-defined structs, and you can use this method to define schema and access data. - -You can define the `struct` schema using the `GenericSchemaBuilder`, generate a generic struct using `GenericRecordBuilder` and consume messages into `GenericRecord`. - -**Example** - -1. Use `RecordSchemaBuilder` to build a schema. - - ```java - - RecordSchemaBuilder recordSchemaBuilder = SchemaBuilder.record("schemaName"); - recordSchemaBuilder.field("intField").type(SchemaType.INT32); - SchemaInfo schemaInfo = recordSchemaBuilder.build(SchemaType.AVRO); - - Producer producer = client.newProducer(Schema.generic(schemaInfo)).create(); - - ``` - -2. Use `RecordBuilder` to build the struct records. - - ```java - - producer.newMessage().value(schema.newRecordBuilder() - .set("intField", 32) - .build()).send(); - - ``` - - - - -You can define the `schemaDefinition` to generate a `struct` schema. - -**Example** - -1. Create the _User_ class to define the messages sent to Pulsar topics. - - ```java - - @Builder - @AllArgsConstructor - @NoArgsConstructor - public static class User { - String name; - int age; - } - - ``` - -2. Create a producer with a `SchemaDefinition` and send messages. - - ```java - - SchemaDefinition schemaDefinition = SchemaDefinition.builder().withPojo(User.class).build(); - Producer producer = client.newProducer(Schema.AVRO(schemaDefinition)).create(); - producer.newMessage().value(User.builder().name("pulsar-user").age(1).build()).send(); - - ``` - -3. Create a consumer with a `SchemaDefinition` schema and receive messages - - ```java - - SchemaDefinition schemaDefinition = SchemaDefinition.builder().withPojo(User.class).build(); - Consumer consumer = client.newConsumer(Schema.AVRO(schemaDefinition)).subscribe(); - User user = consumer.receive().getValue(); - - ``` - - - - -```` - -### Auto Schema - -If you don't know the schema type of a Pulsar topic in advance, you can use AUTO schema to produce or consume generic records to or from brokers. - -| Auto Schema Type | Description | -|---|---| -| `AUTO_PRODUCE` | This is useful for transferring data **from a producer to a Pulsar topic that has a schema**. | -| `AUTO_CONSUME` | This is useful for transferring data **from a Pulsar topic that has a schema to a consumer**. 
|

#### AUTO_PRODUCE

`AUTO_PRODUCE` schema helps a producer validate whether the bytes it sends are compatible with the schema of a topic.

**Example**

Suppose that:

* You have a producer processing messages from a Kafka topic _K_.

* You have a Pulsar topic _P_, and you do not know its schema type.

* Your application reads the messages from _K_ and writes the messages to _P_.

In this case, you can use `AUTO_PRODUCE` to verify whether the bytes produced by _K_ can be sent to _P_ or not.

```java

Producer<byte[]> pulsarProducer = client.newProducer(Schema.AUTO_PRODUCE_BYTES())
    …
    .create();

byte[] kafkaMessageBytes = … ;

pulsarProducer.send(kafkaMessageBytes);

```

#### AUTO_CONSUME

`AUTO_CONSUME` schema helps a consumer validate whether the bytes it receives from a Pulsar topic are compatible with what it expects; that is, the Pulsar client deserializes messages into language-specific objects using the `SchemaInfo` retrieved from the broker side.

Currently, `AUTO_CONSUME` supports AVRO, JSON and ProtobufNativeSchema schemas. It deserializes messages into `GenericRecord`.

**Example**

Suppose that:

* You have a Pulsar topic _P_.

* You have a consumer (for example, MySQL) receiving messages from the topic _P_.

* Your application reads the messages from _P_ and writes the messages to MySQL.

In this case, you can use `AUTO_CONSUME` to verify whether the bytes produced by _P_ can be sent to MySQL or not.

```java

Consumer<GenericRecord> pulsarConsumer = client.newConsumer(Schema.AUTO_CONSUME())
    …
    .subscribe();

Message<GenericRecord> msg = pulsarConsumer.receive();
GenericRecord record = msg.getValue();

```

## Schema version

Each `SchemaInfo` stored with a topic has a version. Schema version manages schema changes happening within a topic.

Messages produced with a given `SchemaInfo` are tagged with a schema version, so when a message is consumed by a Pulsar client, the Pulsar client can use the schema version to retrieve the corresponding `SchemaInfo` and then use the `SchemaInfo` to deserialize data.

Schemas are versioned in succession. Schema storage happens in a broker that handles the associated topics so that version assignments can be made.

Once a version is assigned to or fetched for a schema, all subsequent messages produced by that producer are tagged with the appropriate version.

**Example**

The following example illustrates how the schema version works.

Suppose that a Pulsar [Java client](client-libraries-java.md) created using the code below attempts to connect to Pulsar and begins to send messages:

```java

PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar://localhost:6650")
    .build();

Producer<SensorReading> producer = client.newProducer(JSONSchema.of(SensorReading.class))
    .topic("sensor-data")
    .sendTimeout(3, TimeUnit.SECONDS)
    .create();

```

The table below lists the possible scenarios when this connection attempt occurs and what happens in each scenario:

| Scenario | What happens |
| --- | --- |
| No schema exists for the topic. | (1) The producer is created using the given schema. (2) Since no existing schema is compatible with the `SensorReading` schema, the schema is transmitted to the broker and stored. (3) Any consumer created using the same schema or topic can consume messages from the `sensor-data` topic. |
| A schema already exists. The producer connects using the same schema that is already stored. | (1) The schema is transmitted to the broker. (2) The broker determines that the schema is compatible. (3) The broker attempts to store the schema in [BookKeeper](concepts-architecture-overview.md#persistent-storage) but then determines that it's already stored, so it is used to tag produced messages. |
| A schema already exists. The producer connects using a new schema that is compatible. | (1) The schema is transmitted to the broker. (2) The broker determines that the schema is compatible and stores the new schema as the current version (with a new version number). |

## How schema works

Pulsar schemas are applied and enforced at the **topic** level (schemas cannot be applied at the namespace or tenant level).

Producers and consumers upload schemas to brokers, so Pulsar schemas work on the producer side and the consumer side.

### Producer side

This diagram illustrates how schema works on the producer side.

![Schema works at the producer side](/assets/schema-producer.png)

1. The application uses a schema instance to construct a producer instance.

   The schema instance defines the schema for the data being produced using the producer instance.

   Taking AVRO as an example, Pulsar extracts the schema definition from the POJO class and constructs the `SchemaInfo` that the producer needs to pass to a broker when it connects.

2. The producer connects to the broker with the `SchemaInfo` extracted from the passed-in schema instance.

3. The broker looks up the schema in the schema storage to check if it is already a registered schema.

4. If yes, the broker skips the schema validation since it is a known schema, and returns the schema version to the producer.

5. If no, the broker verifies whether a schema can be automatically created in this namespace:

   * If `isAllowAutoUpdateSchema` is set to **true**, then a schema can be created, and the broker validates the schema based on the schema compatibility check strategy defined for the topic.

   * If `isAllowAutoUpdateSchema` is set to **false**, then a schema cannot be created, and the producer's connection to the broker is rejected.

**Tip**:

`isAllowAutoUpdateSchema` can be set via **Pulsar admin API** or **REST API.**

For how to set `isAllowAutoUpdateSchema` via Pulsar admin API, see [Manage AutoUpdate Strategy](schema-manage.md/#manage-autoupdate-strategy).

6. If the schema is allowed to be updated, then the compatibility strategy check is performed.

   * If the schema is compatible, the broker stores it and returns the schema version to the producer.

     All the messages produced by this producer are tagged with the schema version.

   * If the schema is incompatible, the broker rejects it.

### Consumer side

This diagram illustrates how schema works on the consumer side.

![Schema works at the consumer side](/assets/schema-consumer.png)

1. The application uses a schema instance to construct a consumer instance.

   The schema instance defines the schema that the consumer uses for decoding messages received from a broker.

2. The consumer connects to the broker with the `SchemaInfo` extracted from the passed-in schema instance.

3. The broker determines whether the topic has any of the following: a schema, data, a local consumer, or a local producer.

4. If the topic has none of the above:

   * If `isAllowAutoUpdateSchema` is set to **true**, then the consumer registers a schema and connects to the broker.

   * If `isAllowAutoUpdateSchema` is set to **false**, then the consumer's connection to the broker is rejected.

5. If the topic has any of the above, then the schema compatibility check is performed.

   * If the schema passes the compatibility check, then the consumer is connected to the broker.

   * If the schema does not pass the compatibility check, then the consumer's connection to the broker is rejected.

6. The consumer receives messages from the broker.

   If the schema used by the consumer supports schema versioning (for example, AVRO schema), the consumer fetches the `SchemaInfo` of the version tagged in messages and uses the passed-in schema and the schema tagged in messages to decode the messages.
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/security-athenz.md b/site2/website/versioned_docs/version-2.8.1-deprecated/security-athenz.md
deleted file mode 100644
index 8a39fe25316d07..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/security-athenz.md
+++ /dev/null
@@ -1,98 +0,0 @@
----
-id: security-athenz
-title: Authentication using Athenz
-sidebar_label: "Authentication using Athenz"
-original_id: security-athenz
----

[Athenz](https://github.com/AthenZ/athenz) is a role-based authentication/authorization system. In Pulsar, you can use Athenz role tokens (also known as *z-tokens*) to establish the identity of the client.

## Athenz authentication settings

A [decentralized Athenz system](https://github.com/AthenZ/athenz/blob/master/docs/decent_authz_flow.md) contains an [authori**Z**ation **M**anagement **S**ystem](https://github.com/AthenZ/athenz/blob/master/docs/setup_zms.md) (ZMS) server and an [authori**Z**ation **T**oken **S**ystem](https://github.com/AthenZ/athenz/blob/master/docs/setup_zts) (ZTS) server.

To begin, you need to set up Athenz service access control. You need to create domains for the *provider* (which provides some resources to other services with some authentication/authorization policies) and the *tenant* (which is provisioned to access some resources in a provider). In this case, the provider corresponds to the Pulsar service itself and the tenant corresponds to each application using Pulsar (typically, a [tenant](reference-terminology.md#tenant) in Pulsar).

### Create the tenant domain and service

On the [tenant](reference-terminology.md#tenant) side, you need to do the following things:

1. Create a domain, such as `shopping`
2. Generate a private/public key pair (see the `openssl` sketch below)
3. Create a service, such as `some_app`, on the domain with the public key

Note that you need to specify the private key generated in step 2 when the Pulsar client connects to the [broker](reference-terminology.md#broker) (see client configuration examples for [Java](client-libraries-java.md#tls-authentication) and [C++](client-libraries-cpp.md#tls-authentication)).

For more specific steps involving the Athenz UI, refer to [Example Service Access Control Setup](https://github.com/AthenZ/athenz/blob/master/docs/example_service_athenz_setup.md#client-tenant-domain).

### Create the provider domain and add the tenant service to some role members

On the provider side, you need to do the following things:

1. Create a domain, such as `pulsar`
2. Create a role
3. Add the tenant service to members of the role

Note that you can specify any action and resource in step 2 since they are not used on Pulsar. In other words, Pulsar uses the Athenz role token only for authentication, *not* for authorization.

For more specific steps involving the UI, refer to [Example Service Access Control Setup](https://github.com/AthenZ/athenz/blob/master/docs/example_service_athenz_setup.md#server-provider-domain).
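For step 2 of the tenant domain setup above, one way to generate the key pair is with standard `openssl` commands. This is a minimal sketch; the file names are placeholders, and your Athenz deployment may require a particular key algorithm or size:

```shell

# Generate a 2048-bit RSA private key for the tenant service (kept by the Pulsar client)
openssl genrsa -out tenant_some_app_private.pem 2048

# Derive the public key that is registered with the `some_app` service in Athenz
openssl rsa -in tenant_some_app_private.pem -pubout -out tenant_some_app_public.pem

```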
- -## Configure the broker for Athenz - -> ### TLS encryption -> -> Note that when you are using Athenz as an authentication provider, you had better use TLS encryption -> as it can protect role tokens from being intercepted and reused. (for more details involving TLS encryption see [Architecture - Data Model](https://github.com/AthenZ/athenz/blob/master/docs/data_model)). - -In the `conf/broker.conf` configuration file in your Pulsar installation, you need to provide the class name of the Athenz authentication provider as well as a comma-separated list of provider domain names. - -```properties - -# Add the Athenz auth provider -authenticationEnabled=true -authorizationEnabled=true -authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderAthenz -athenzDomainNames=pulsar - -# Enable TLS -tlsEnabled=true -tlsCertificateFilePath=/path/to/broker-cert.pem -tlsKeyFilePath=/path/to/broker-key.pem - -# Authentication settings of the broker itself. Used when the broker connects to other brokers, either in same or other clusters -brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationAthenz -brokerClientAuthenticationParameters={"tenantDomain":"shopping","tenantService":"some_app","providerDomain":"pulsar","privateKey":"file:///path/to/private.pem","keyId":"v1"} - -``` - -> A full listing of parameters is available in the `conf/broker.conf` file, you can also find the default -> values for those parameters in [Broker Configuration](reference-configuration.md#broker). - -## Configure clients for Athenz - -For more information on Pulsar client authentication using Athenz, see the following language-specific docs: - -* [Java client](client-libraries-java.md#athenz) - -## Configure CLI tools for Athenz - -[Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-pulsar-admin.md), [`pulsar-perf`](reference-cli-tools.md#pulsar-perf), and [`pulsar-client`](reference-cli-tools.md#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation. - -You need to add the following authentication parameters to the `conf/client.conf` config file to use Athenz with CLI tools of Pulsar: - -```properties - -# URL for the broker -serviceUrl=https://broker.example.com:8443/ - -# Set Athenz auth plugin and its parameters -authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationAthenz -authParams={"tenantDomain":"shopping","tenantService":"some_app","providerDomain":"pulsar","privateKey":"file:///path/to/private.pem","keyId":"v1"} - -# Enable TLS -useTls=true -tlsAllowInsecureConnection=false -tlsTrustCertsFilePath=/path/to/cacert.pem - -``` - diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/security-authorization.md b/site2/website/versioned_docs/version-2.8.1-deprecated/security-authorization.md deleted file mode 100644 index 9cfd7c8c203f63..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/security-authorization.md +++ /dev/null @@ -1,130 +0,0 @@ ---- -id: security-authorization -title: Authentication and authorization in Pulsar -sidebar_label: "Authorization and ACLs" -original_id: security-authorization ---- - - -In Pulsar, the [authentication provider](security-overview.md#authentication-providers) is responsible for properly identifying clients and associating the clients with [role tokens](security-overview.md#role-tokens). If you only enable authentication, an authenticated role token has the ability to access all resources in the cluster. 
*Authorization* is the process that determines *what* clients are able to do. - -The role tokens with the most privileges are the *superusers*. The *superusers* can create and destroy tenants, along with having full access to all tenant resources. - -When a superuser creates a [tenant](reference-terminology.md#tenant), that tenant is assigned an admin role. A client with the admin role token can then create, modify and destroy namespaces, and grant and revoke permissions to *other role tokens* on those namespaces. - -## Broker and Proxy Setup - -### Enable authorization and assign superusers -You can enable the authorization and assign the superusers in the broker ([`conf/broker.conf`](reference-configuration.md#broker)) configuration files. - -```properties - -authorizationEnabled=true -superUserRoles=my-super-user-1,my-super-user-2 - -``` - -> A full list of parameters is available in the `conf/broker.conf` file. -> You can also find the default values for those parameters in [Broker Configuration](reference-configuration.md#broker). - -Typically, you use superuser roles for administrators, clients as well as broker-to-broker authorization. When you use [geo-replication](concepts-replication.md), every broker needs to be able to publish to all the other topics of clusters. - -You can also enable the authorization for the proxy in the proxy configuration file (`conf/proxy.conf`). Once you enable the authorization on the proxy, the proxy does an additional authorization check before forwarding the request to a broker. -If you enable authorization on the broker, the broker checks the authorization of the request when the broker receives the forwarded request. - -### Proxy Roles - -By default, the broker treats the connection between a proxy and the broker as a normal user connection. The broker authenticates the user as the role configured in `proxy.conf`(see ["Enable TLS Authentication on Proxies"](security-tls-authentication.md#enable-tls-authentication-on-proxies)). However, when the user connects to the cluster through a proxy, the user rarely requires the authentication. The user expects to be able to interact with the cluster as the role for which they have authenticated with the proxy. - -Pulsar uses *Proxy roles* to enable the authentication. Proxy roles are specified in the broker configuration file, [`conf/broker.conf`](reference-configuration.md#broker). If a client that is authenticated with a broker is one of its ```proxyRoles```, all requests from that client must also carry information about the role of the client that is authenticated with the proxy. This information is called the *original principal*. If the *original principal* is absent, the client is not able to access anything. - -You must authorize both the *proxy role* and the *original principal* to access a resource to ensure that the resource is accessible via the proxy. Administrators can take two approaches to authorize the *proxy role* and the *original principal*. - -The more secure approach is to grant access to the proxy roles each time you grant access to a resource. For example, if you have a proxy role named `proxy1`, when the superuser creates a tenant, you should specify `proxy1` as one of the admin roles. When a role is granted permissions to produce or consume from a namespace, if that client wants to produce or consume through a proxy, you should also grant `proxy1` the same permissions. - -Another approach is to make the proxy role a superuser. This allows the proxy to access all resources. 
The client still needs to authenticate with the proxy, and all requests made through the proxy have their role downgraded to the *original principal* of the authenticated client. However, if the proxy is compromised, a bad actor could get full access to your cluster. - -You can specify the roles as proxy roles in [`conf/broker.conf`](reference-configuration.md#broker). - -```properties - -proxyRoles=my-proxy-role - -# if you want to allow superusers to use the proxy (see above) -superUserRoles=my-super-user-1,my-super-user-2,my-proxy-role - -``` - -## Administer tenants - -Pulsar [instance](reference-terminology.md#instance) administrators or some kind of self-service portal typically provisions a Pulsar [tenant](reference-terminology.md#tenant). - -You can manage tenants using the [`pulsar-admin`](reference-pulsar-admin.md) tool. - -### Create a new tenant - -The following is an example tenant creation command: - -```shell - -$ bin/pulsar-admin tenants create my-tenant \ - --admin-roles my-admin-role \ - --allowed-clusters us-west,us-east - -``` - -This command creates a new tenant `my-tenant` that is allowed to use the clusters `us-west` and `us-east`. - -A client that successfully identifies itself as having the role `my-admin-role` is allowed to perform all administrative tasks on this tenant. - -The structure of topic names in Pulsar reflects the hierarchy between tenants, clusters, and namespaces: - -```shell - -persistent://tenant/namespace/topic - -``` - -### Manage permissions - -You can use [Pulsar Admin Tools](admin-api-permissions.md) for managing permission in Pulsar. - -### Pulsar admin authentication - -```java - -PulsarAdmin admin = PulsarAdmin.builder() - .serviceHttpUrl("http://broker:8080") - .authentication("com.org.MyAuthPluginClass", "param1:value1") - .build(); - -``` - -To use TLS: - -```java - -PulsarAdmin admin = PulsarAdmin.builder() - .serviceHttpUrl("https://broker:8080") - .authentication("com.org.MyAuthPluginClass", "param1:value1") - .tlsTrustCertsFilePath("/path/to/trust/cert") - .build(); - -``` - -## Authorize an authenticated client with multiple roles - -When a client is identified with multiple roles in a token (the type of role claim in the token is an array) during the authentication process, Pulsar supports to check the permissions of all the roles and further authorize the client as long as one of its roles has the required permissions. - -> **Note**
    -> This authorization method is only compatible with [JWT authentication](security-jwt.md). - -To enable this authorization method, configure the authorization provider as `MultiRolesTokenAuthorizationProvider` in the `conf/broker.conf` file. - - ```properties - - # Authorization provider fully qualified class-name - authorizationProvider=org.apache.pulsar.broker.authorization.MultiRolesTokenAuthorizationProvider - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/security-basic-auth.md b/site2/website/versioned_docs/version-2.8.1-deprecated/security-basic-auth.md deleted file mode 100644 index 2585526bb478af..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/security-basic-auth.md +++ /dev/null @@ -1,127 +0,0 @@ ---- -id: security-basic-auth -title: Authentication using HTTP basic -sidebar_label: "Authentication using HTTP basic" ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - -[Basic authentication](https://en.wikipedia.org/wiki/Basic_access_authentication) is a simple authentication scheme built into the HTTP protocol, which uses base64-encoded username and password pairs as credentials. - -## Prerequisites - -Install [`htpasswd`](https://httpd.apache.org/docs/2.4/programs/htpasswd.html) in your environment to create a password file for storing username-password pairs. - -* For Ubuntu/Debian, run the following command to install `htpasswd`. - - ``` - apt install apache2-utils - ``` - -* For CentOS/RHEL, run the following command to install `htpasswd`. - - ``` - yum install httpd-tools - ``` - -## Create your authentication file - -:::note -Currently, you can use MD5 (recommended) and CRYPT encryption to authenticate your password. -::: - -Create a password file named `.htpasswd` with a user account `superuser/admin`: -* Use MD5 encryption (recommended): - - ``` - htpasswd -cmb /path/to/.htpasswd superuser admin - ``` - -* Use CRYPT encryption: - - ``` - htpasswd -cdb /path/to/.htpasswd superuser admin - ``` - -You can preview the content of your password file by running the following command: - -``` -cat path/to/.htpasswd -superuser:$apr1$GBIYZYFZ$MzLcPrvoUky16mLcK6UtX/ -``` - -## Enable basic authentication on brokers - -To configure brokers to authenticate clients, complete the following steps. - -1. Add the following parameters to the `conf/broker.conf` file. If you use a standalone Pulsar, you need to add these parameters to the `conf/standalone.conf` file. - - ``` - # Configuration to enable Basic authentication - authenticationEnabled=true - authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderBasic - - # Authentication settings of the broker itself. Used when the broker connects to other brokers, either in same or other clusters - brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationBasic - brokerClientAuthenticationParameters={"userId":"superuser","password":"admin"} - - # If this flag is set then the broker authenticates the original Auth data - # else it just accepts the originalPrincipal and authorizes it (if required). - authenticateOriginalAuthData=true - ``` - -2. Set an environment variable named `PULSAR_EXTRA_OPTS` and the value is `-Dpulsar.auth.basic.conf=/path/to/.htpasswd`. Pulsar reads this environment variable to implement HTTP basic authentication. - -## Enable basic authentication on proxies - -To configure proxies to authenticate clients, complete the following steps. - -1. 
Add the following parameters to the `conf/proxy.conf` file: - - ``` - # For clients connecting to the proxy - authenticationEnabled=true - authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderBasic - - # For the proxy to connect to brokers - brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationBasic - brokerClientAuthenticationParameters={"userId":"superuser","password":"admin"} - - # Whether client authorization credentials are forwarded to the broker for re-authorization. - # Authentication must be enabled via authenticationEnabled=true for this to take effect. - forwardAuthorizationCredentials=true - ``` - -2. Set an environment variable named `PULSAR_EXTRA_OPTS` and the value is `-Dpulsar.auth.basic.conf=/path/to/.htpasswd`. Pulsar reads this environment variable to implement HTTP basic authentication. - -## Configure basic authentication in CLI tools - -[Command-line tools](/docs/next/reference-cli-tools), such as [Pulsar-admin](/tools/pulsar-admin/), [Pulsar-perf](/tools/pulsar-perf/) and [Pulsar-client](/tools/pulsar-client/), use the `conf/client.conf` file in your Pulsar installation. To configure basic authentication in Pulsar CLI tools, you need to add the following parameters to the `conf/client.conf` file. - -``` -authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationBasic -authParams={"userId":"superuser","password":"admin"} -``` - - -## Configure basic authentication in Pulsar clients - -The following example shows how to configure basic authentication when using Pulsar clients. - - - - - ```java - AuthenticationBasic auth = new AuthenticationBasic(); - auth.configure("{\"userId\":\"superuser\",\"password\":\"admin\"}"); - PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://broker.example.com:6650") - .authentication(auth) - .build(); - ``` - - - diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/security-bouncy-castle.md b/site2/website/versioned_docs/version-2.8.1-deprecated/security-bouncy-castle.md deleted file mode 100644 index be937055d8e311..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/security-bouncy-castle.md +++ /dev/null @@ -1,157 +0,0 @@ ---- -id: security-bouncy-castle -title: Bouncy Castle Providers -sidebar_label: "Bouncy Castle Providers" -original_id: security-bouncy-castle ---- - -## BouncyCastle Introduce - -`Bouncy Castle` is a Java library that complements the default Java Cryptographic Extension (JCE), -and it provides more cipher suites and algorithms than the default JCE provided by Sun. - -In addition to that, `Bouncy Castle` has lots of utilities for reading arcane formats like PEM and ASN.1 that no sane person would want to rewrite themselves. - -In Pulsar, security and crypto have dependencies on BouncyCastle Jars. For the detailed installing and configuring Bouncy Castle FIPS, see [BC FIPS Documentation](https://www.bouncycastle.org/documentation.html), especially the **User Guides** and **Security Policy** PDFs. - -`Bouncy Castle` provides both [FIPS](https://www.bouncycastle.org/fips_faq.html) and non-FIPS version. But in a JVM, you can not include both of the 2 versions, and you need to exclude the current version before include the other. - -In Pulsar, the security and crypto methods also depends on `Bouncy Castle`, especially in [TLS Authentication](security-tls-authentication.md) and [Transport Encryption](security-encryption.md). 
This document describes how to configure the BouncyCastle FIPS (BC-FIPS) and non-FIPS (BC-non-FIPS) versions while using Pulsar.

## How BouncyCastle modules are packaged in Pulsar

Pulsar's `bouncy-castle` module provides 2 sub-modules: `bouncy-castle-bc` (for the non-FIPS version) and `bouncy-castle-bcfips` (for the FIPS version), which package the BC jars together to make including and excluding `Bouncy Castle` easier.

To achieve this goal, several `bouncy-castle` jars need to be packaged together into the `bouncy-castle-bc` or `bouncy-castle-bcfips` jar.
Each of the original bouncy-castle jars is security-related, so BouncyCastle dutifully signs each JAR.
But when we re-package, the Maven shade plugin explodes the BouncyCastle jar file and puts the signatures into META-INF;
these signatures aren't valid for the new uber-jar (signatures are only valid for the original BC jars).
Usually, you will see an error like `java.lang.SecurityException: Invalid signature file digest for Manifest main attributes`.

You could exclude these signatures in the Maven pom file to avoid the above error:

```access transformers

META-INF/*.SF
META-INF/*.DSA
META-INF/*.RSA

```

But this can also lead to new, cryptic errors, e.g. `java.security.NoSuchAlgorithmException: PBEWithSHA256And256BitAES-CBC-BC SecretKeyFactory not available`.
By explicitly specifying where to find the algorithm, like this: `SecretKeyFactory.getInstance("PBEWithSHA256And256BitAES-CBC-BC","BC")`,
you get the real error: `java.security.NoSuchProviderException: JCE cannot authenticate the provider BC`.

So we use an [executable packer plugin](https://github.com/nthuemmel/executable-packer-maven-plugin) that takes a jar-in-jar approach to preserve the BouncyCastle signatures in a single, executable jar.

### Include dependencies of BC-non-FIPS

The Pulsar module `bouncy-castle-bc`, defined by `bouncy-castle/bc/pom.xml`, contains the needed non-FIPS jars for Pulsar and is packaged as a jar-in-jar (you need to provide the `pkg` classifier). It bundles the following dependencies:

```xml

  <dependency>
    <groupId>org.bouncycastle</groupId>
    <artifactId>bcpkix-jdk15on</artifactId>
    <version>${bouncycastle.version}</version>
  </dependency>

  <dependency>
    <groupId>org.bouncycastle</groupId>
    <artifactId>bcprov-ext-jdk15on</artifactId>
    <version>${bouncycastle.version}</version>
  </dependency>

```

By using this `bouncy-castle-bc` module, you can easily include and exclude the BouncyCastle non-FIPS jars.

### Modules that include the BC-non-FIPS module (`bouncy-castle-bc`)

Pulsar clients need the bouncy-castle module, so `pulsar-client-original` includes the `bouncy-castle-bc` module, with the `pkg` classifier set to reference the `jar-in-jar` package.
It is included as in the following example:

```xml

  <dependency>
    <groupId>org.apache.pulsar</groupId>
    <artifactId>bouncy-castle-bc</artifactId>
    <version>${pulsar.version}</version>
    <classifier>pkg</classifier>
  </dependency>

```

By default, `bouncy-castle-bc` is already included in `pulsar-client-original`, and `pulsar-client-original` is in turn included in a lot of other modules like `pulsar-client-admin` and `pulsar-broker`.
But for the shaded-jar and signature reasons above, we should not package Pulsar's `bouncy-castle` module into `pulsar-client-all` or other shaded modules directly, such as `pulsar-client-shaded`, `pulsar-client-admin-shaded` and `pulsar-broker-shaded`.
So in the shaded modules, we exclude the `bouncy-castle` modules:

```xml

  <filters>
    <filter>
      <artifact>org.apache.pulsar:pulsar-client-original</artifact>
      <includes>
        <include>**</include>
      </includes>
      <excludes>
        <exclude>org/bouncycastle/**</exclude>
      </excludes>
    </filter>
  </filters>

```

That means the `bouncy-castle`-related jars are not shaded in these fat jars.

### Module BC-FIPS (`bouncy-castle-bcfips`)

The Pulsar module `bouncy-castle-bcfips`, defined by `bouncy-castle/bcfips/pom.xml`, contains the needed FIPS jars for Pulsar.
-Similar to `bouncy-castle-bc`, `bouncy-castle-bcfips` also packaged as a `jar-in-jar` package for easy include/exclude. - -```xml - - - org.bouncycastle - bc-fips - ${bouncycastlefips.version} - - - - org.bouncycastle - bcpkix-fips - ${bouncycastlefips.version} - - -``` - -### Exclude BC-non-FIPS and include BC-FIPS - -If you want to switch from BC-non-FIPS to BC-FIPS version, Here is an example for `pulsar-broker` module: - -```xml - - - org.apache.pulsar - pulsar-broker - ${pulsar.version} - - - org.apache.pulsar - bouncy-castle-bc - - - - - - org.apache.pulsar - bouncy-castle-bcfips - ${pulsar.version} - pkg - - -``` - - -For more example, you can reference module `bcfips-include-test`. - diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/security-encryption.md b/site2/website/versioned_docs/version-2.8.1-deprecated/security-encryption.md deleted file mode 100644 index c2f3530d94d9e4..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/security-encryption.md +++ /dev/null @@ -1,200 +0,0 @@ ---- -id: security-encryption -title: Pulsar Encryption -sidebar_label: "End-to-End Encryption" -original_id: security-encryption ---- - -Applications can use Pulsar encryption to encrypt messages on the producer side and decrypt messages on the consumer side. You can use the public and private key pair that the application configures to perform encryption. Only the consumers with a valid key can decrypt the encrypted messages. - -## Asymmetric and symmetric encryption - -Pulsar uses a dynamically generated symmetric AES key to encrypt messages(data). You can use the application-provided ECDSA (Elliptic Curve Digital Signature Algorithm) or RSA (Rivest–Shamir–Adleman) key pair to encrypt the AES key(data key), so you do not have to share the secret with everyone. - -Key is a public and private key pair used for encryption or decryption. The producer key is the public key of the key pair, and the consumer key is the private key of the key pair. - -The application configures the producer with the public key. You can use this key to encrypt the AES data key. The encrypted data key is sent as part of message header. Only entities with the private key (in this case the consumer) are able to decrypt the data key which is used to decrypt the message. - -You can encrypt a message with more than one key. Any one of the keys used for encrypting the message is sufficient to decrypt the message. - -Pulsar does not store the encryption key anywhere in the Pulsar service. If you lose or delete the private key, your message is irretrievably lost, and is unrecoverable. - -## Producer -![alt text](/assets/pulsar-encryption-producer.jpg "Pulsar Encryption Producer") - -## Consumer -![alt text](/assets/pulsar-encryption-consumer.jpg "Pulsar Encryption Consumer") - -## Get started - -1. Create your ECDSA or RSA public and private key pair by using the following commands. - * ECDSA(for Java clients only) - - ```shell - - openssl ecparam -name secp521r1 -genkey -param_enc explicit -out test_ecdsa_privkey.pem - openssl ec -in test_ecdsa_privkey.pem -pubout -outform pem -out test_ecdsa_pubkey.pem - - ``` - - * RSA (for C++, Python and Node.js clients) - - ```shell - - openssl genrsa -out test_rsa_privkey.pem 2048 - openssl rsa -in test_rsa_privkey.pem -pubout -outform pkcs8 -out test_rsa_pubkey.pem - - ``` - -2. Add the public and private key to the key management and configure your producers to retrieve public keys and consumers clients to retrieve private keys. - -3. 
Implement the `CryptoKeyReader` interface, specifically `CryptoKeyReader.getPublicKey()` for producer and `CryptoKeyReader.getPrivateKey()` for consumer, which Pulsar client invokes to load the key. - -4. Add the encryption key name to the producer builder: PulsarClient.newProducer().addEncryptionKey("myapp.key"). - -5. Add CryptoKeyReader implementation to producer or consumer builder: PulsarClient.newProducer().cryptoKeyReader(keyReader) / PulsarClient.newConsumer().cryptoKeyReader(keyReader). - -6. Sample producer application: - -```java - -class RawFileKeyReader implements CryptoKeyReader { - - String publicKeyFile = ""; - String privateKeyFile = ""; - - RawFileKeyReader(String pubKeyFile, String privKeyFile) { - publicKeyFile = pubKeyFile; - privateKeyFile = privKeyFile; - } - - @Override - public EncryptionKeyInfo getPublicKey(String keyName, Map keyMeta) { - EncryptionKeyInfo keyInfo = new EncryptionKeyInfo(); - try { - keyInfo.setKey(Files.readAllBytes(Paths.get(publicKeyFile))); - } catch (IOException e) { - System.out.println("ERROR: Failed to read public key from file " + publicKeyFile); - e.printStackTrace(); - } - return keyInfo; - } - - @Override - public EncryptionKeyInfo getPrivateKey(String keyName, Map keyMeta) { - EncryptionKeyInfo keyInfo = new EncryptionKeyInfo(); - try { - keyInfo.setKey(Files.readAllBytes(Paths.get(privateKeyFile))); - } catch (IOException e) { - System.out.println("ERROR: Failed to read private key from file " + privateKeyFile); - e.printStackTrace(); - } - return keyInfo; - } -} - -PulsarClient pulsarClient = PulsarClient.builder().serviceUrl("pulsar://localhost:6650").build(); - -Producer producer = pulsarClient.newProducer() - .topic("persistent://my-tenant/my-ns/my-topic") - .addEncryptionKey("myappkey") - .cryptoKeyReader(new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem")) - .create(); - -for (int i = 0; i < 10; i++) { - producer.send("my-message".getBytes()); -} - -producer.close(); -pulsarClient.close(); - -``` - -7. 
Sample Consumer Application: - -```java - -class RawFileKeyReader implements CryptoKeyReader { - - String publicKeyFile = ""; - String privateKeyFile = ""; - - RawFileKeyReader(String pubKeyFile, String privKeyFile) { - publicKeyFile = pubKeyFile; - privateKeyFile = privKeyFile; - } - - @Override - public EncryptionKeyInfo getPublicKey(String keyName, Map keyMeta) { - EncryptionKeyInfo keyInfo = new EncryptionKeyInfo(); - try { - keyInfo.setKey(Files.readAllBytes(Paths.get(publicKeyFile))); - } catch (IOException e) { - System.out.println("ERROR: Failed to read public key from file " + publicKeyFile); - e.printStackTrace(); - } - return keyInfo; - } - - @Override - public EncryptionKeyInfo getPrivateKey(String keyName, Map keyMeta) { - EncryptionKeyInfo keyInfo = new EncryptionKeyInfo(); - try { - keyInfo.setKey(Files.readAllBytes(Paths.get(privateKeyFile))); - } catch (IOException e) { - System.out.println("ERROR: Failed to read private key from file " + privateKeyFile); - e.printStackTrace(); - } - return keyInfo; - } -} - -PulsarClient pulsarClient = PulsarClient.builder().serviceUrl("pulsar://localhost:6650").build(); -Consumer consumer = pulsarClient.newConsumer() - .topic("persistent://my-tenant/my-ns/my-topic") - .subscriptionName("my-subscriber-name") - .cryptoKeyReader(new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem")) - .subscribe(); -Message msg = null; - -for (int i = 0; i < 10; i++) { - msg = consumer.receive(); - // do something - System.out.println("Received: " + new String(msg.getData())); -} - -// Acknowledge the consumption of all messages at once -consumer.acknowledgeCumulative(msg); -consumer.close(); -pulsarClient.close(); - -``` - -## Key rotation -Pulsar generates a new AES data key every 4 hours or after publishing a certain number of messages. A producer fetches the asymmetric public key every 4 hours by calling CryptoKeyReader.getPublicKey() to retrieve the latest version. - -## Enable encryption at the producer application -If you produce messages that are consumed across application boundaries, you need to ensure that consumers in other applications have access to one of the private keys that can decrypt the messages. You can do this in two ways: -1. The consumer application provides you access to their public key, which you add to your producer keys. -2. You grant access to one of the private keys from the pairs that producer uses. - -When producers want to encrypt the messages with multiple keys, producers add all such keys to the config. Consumer can decrypt the message as long as the consumer has access to at least one of the keys. - -If you need to encrypt the messages using 2 keys (`myapp.messagekey1` and `myapp.messagekey2`), refer to the following example. - -```java - -PulsarClient.newProducer().addEncryptionKey("myapp.messagekey1").addEncryptionKey("myapp.messagekey2"); - -``` - -## Decrypt encrypted messages at the consumer application -Consumers require to access one of the private keys to decrypt messages that the producer produces. If you want to receive encrypted messages, create a public or private key and give your public key to the producer application to encrypt messages using your public key. - -## Handle failures -* Producer/Consumer loses access to the key - * Producer action fails to indicate the cause of the failure. Application has the option to proceed with sending unencrypted messages in such cases. Call `PulsarClient.newProducer().cryptoFailureAction(ProducerCryptoFailureAction)` to control the producer behavior. 
The default behavior is to fail the request. - * If consumption fails due to decryption failure or missing keys in consumer, the application has the option to consume the encrypted message or discard it. Call `PulsarClient.newConsumer().cryptoFailureAction(ConsumerCryptoFailureAction)` to control the consumer behavior. The default behavior is to fail the request. Application is never able to decrypt the messages if the private key is permanently lost. -* Batch messaging - * If decryption fails and the message contains batch messages, client is not able to retrieve individual messages in the batch, hence message consumption fails even if cryptoFailureAction() is set to `ConsumerCryptoFailureAction.CONSUME`. -* If decryption fails, the message consumption stops and the application notices backlog growth in addition to decryption failure messages in the client log. If the application does not have access to the private key to decrypt the message, the only option is to skip or discard backlogged messages. diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/security-extending.md b/site2/website/versioned_docs/version-2.8.1-deprecated/security-extending.md deleted file mode 100644 index e7484453b8beb8..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/security-extending.md +++ /dev/null @@ -1,207 +0,0 @@ ---- -id: security-extending -title: Extending Authentication and Authorization in Pulsar -sidebar_label: "Extending" -original_id: security-extending ---- - -Pulsar provides a way to use custom authentication and authorization mechanisms. - -## Authentication - -Pulsar supports mutual TLS and Athenz authentication plugins. For how to use these authentication plugins, you can refer to the description in [Security](security-overview.md). - -You can use a custom authentication mechanism by providing the implementation in the form of two plugins. One plugin is for the Client library and the other plugin is for the Pulsar Proxy and/or Pulsar Broker to validate the credentials. - -### Client authentication plugin - -For the client library, you need to implement `org.apache.pulsar.client.api.Authentication`. By entering the command below you can pass this class when you create a Pulsar client: - -```java - -PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://localhost:6650") - .authentication(new MyAuthentication()) - .build(); - -``` - -You can use 2 interfaces to implement on the client side: - * `Authentication` -> http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/Authentication.html - * `AuthenticationDataProvider` -> http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/AuthenticationDataProvider.html - - -This in turn needs to provide the client credentials in the form of `org.apache.pulsar.client.api.AuthenticationDataProvider`. This leaves the chance to return different kinds of authentication token for different types of connection or by passing a certificate chain to use for TLS. - - -You can find examples for client authentication providers at: - - * Mutual TLS Auth -- https://github.com/apache/pulsar/tree/master/pulsar-client/src/main/java/org/apache/pulsar/client/impl/auth - * Athenz -- https://github.com/apache/pulsar/tree/master/pulsar-client-auth-athenz/src/main/java/org/apache/pulsar/client/impl/auth - -### Proxy/Broker authentication plugin - -On the proxy/broker side, you need to configure the corresponding plugin to validate the credentials that the client sends. 
The Proxy and Broker can support multiple authentication providers at the same time. - -In `conf/broker.conf` you can choose to specify a list of valid providers: - -```properties - -# Authentication provider name list, which is comma separated list of class names -authenticationProviders= - -``` - -To implement `org.apache.pulsar.broker.authentication.AuthenticationProvider` on one single interface: - -```java - -/** - * Provider of authentication mechanism - */ -public interface AuthenticationProvider extends Closeable { - - /** - * Perform initialization for the authentication provider - * - * @param config - * broker config object - * @throws IOException - * if the initialization fails - */ - void initialize(ServiceConfiguration config) throws IOException; - - /** - * @return the authentication method name supported by this provider - */ - String getAuthMethodName(); - - /** - * Validate the authentication for the given credentials with the specified authentication data - * - * @param authData - * provider specific authentication data - * @return the "role" string for the authenticated connection, if the authentication was successful - * @throws AuthenticationException - * if the credentials are not valid - */ - String authenticate(AuthenticationDataSource authData) throws AuthenticationException; - -} - -``` - -The following is the example for Broker authentication plugins: - - * Mutual TLS -- https://github.com/apache/pulsar/blob/master/pulsar-broker-common/src/main/java/org/apache/pulsar/broker/authentication/AuthenticationProviderTls.java - * Athenz -- https://github.com/apache/pulsar/blob/master/pulsar-broker-auth-athenz/src/main/java/org/apache/pulsar/broker/authentication/AuthenticationProviderAthenz.java - -## Authorization - -Authorization is the operation that checks whether a particular "role" or "principal" has permission to perform a certain operation. - -By default, you can use the embedded authorization provider provided by Pulsar. You can also configure a different authorization provider through a plugin. -Note that although the Authentication plugin is designed for use in both the Proxy and Broker, -the Authorization plugin is designed only for use on the Broker however the Proxy does perform some simple Authorization checks of Roles if authorization is enabled. - -To provide a custom provider, you need to implement the `org.apache.pulsar.broker.authorization.AuthorizationProvider` interface, put this class in the Pulsar broker classpath and configure the class in `conf/broker.conf`: - - ```properties - - # Authorization provider fully qualified class-name - authorizationProvider=org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider - - ``` - -```java - -/** - * Provider of authorization mechanism - */ -public interface AuthorizationProvider extends Closeable { - - /** - * Perform initialization for the authorization provider - * - * @param conf - * broker config object - * @param configCache - * pulsar zk configuration cache service - * @throws IOException - * if the initialization fails - */ - void initialize(ServiceConfiguration conf, ConfigurationCacheService configCache) throws IOException; - - /** - * Check if the specified role has permission to send messages to the specified fully qualified topic name. - * - * @param topicName - * the fully qualified topic name associated with the topic. - * @param role - * the app id used to send messages to the topic. 
-## Authorization
-
-Authorization is the operation that checks whether a particular "role" or "principal" has permission to perform a certain operation.
-
-By default, you can use the embedded authorization provider provided by Pulsar. You can also configure a different authorization provider through a plugin.
-Note that although the Authentication plugin is designed for use in both the Proxy and Broker, the Authorization plugin is designed only for use on the Broker. The Proxy does, however, perform some simple authorization checks of roles if authorization is enabled.
-
-To provide a custom provider, you need to implement the `org.apache.pulsar.broker.authorization.AuthorizationProvider` interface, put this class in the Pulsar broker classpath and configure the class in `conf/broker.conf`:
-
- ```properties
-
- # Authorization provider fully qualified class-name
- authorizationProvider=org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider
-
- ```
-
-```java
-
-/**
- * Provider of authorization mechanism
- */
-public interface AuthorizationProvider extends Closeable {
-
-    /**
-     * Perform initialization for the authorization provider
-     *
-     * @param conf
-     *            broker config object
-     * @param configCache
-     *            pulsar zk configuration cache service
-     * @throws IOException
-     *             if the initialization fails
-     */
-    void initialize(ServiceConfiguration conf, ConfigurationCacheService configCache) throws IOException;
-
-    /**
-     * Check if the specified role has permission to send messages to the specified fully qualified topic name.
-     *
-     * @param topicName
-     *            the fully qualified topic name associated with the topic.
-     * @param role
-     *            the app id used to send messages to the topic.
-     */
-    CompletableFuture<Boolean> canProduceAsync(TopicName topicName, String role,
-            AuthenticationDataSource authenticationData);
-
-    /**
-     * Check if the specified role has permission to receive messages from the specified fully qualified topic name.
-     *
-     * @param topicName
-     *            the fully qualified topic name associated with the topic.
-     * @param role
-     *            the app id used to receive messages from the topic.
-     * @param subscription
-     *            the subscription name defined by the client
-     */
-    CompletableFuture<Boolean> canConsumeAsync(TopicName topicName, String role,
-            AuthenticationDataSource authenticationData, String subscription);
-
-    /**
-     * Check whether the specified role can perform a lookup for the specified topic.
-     *
-     * For that the caller needs to have producer or consumer permission.
-     *
-     * @param topicName
-     * @param role
-     * @return
-     * @throws Exception
-     */
-    CompletableFuture<Boolean> canLookupAsync(TopicName topicName, String role,
-            AuthenticationDataSource authenticationData);
-
-    /**
-     * Grant authorization-action permission on a namespace to the given client
-     *
-     * @param namespace
-     * @param actions
-     * @param role
-     * @param authDataJson
-     *            additional authdata in json format
-     * @return CompletableFuture<Void>
-     * @completesWith
-     *            IllegalArgumentException when namespace not found
-     *            IllegalStateException when failed to grant permission
-     */
-    CompletableFuture<Void> grantPermissionAsync(NamespaceName namespace, Set<AuthAction> actions, String role,
-            String authDataJson);
-
-    /**
-     * Grant authorization-action permission on a topic to the given client
-     *
-     * @param topicName
-     * @param role
-     * @param authDataJson
-     *            additional authdata in json format
-     * @return CompletableFuture<Void>
-     * @completesWith
-     *            IllegalArgumentException when namespace not found
-     *            IllegalStateException when failed to grant permission
-     */
-    CompletableFuture<Void> grantPermissionAsync(TopicName topicName, Set<AuthAction> actions, String role,
-            String authDataJson);
-
-}
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/security-jwt.md b/site2/website/versioned_docs/version-2.8.1-deprecated/security-jwt.md
deleted file mode 100644
index 1fa65b7c27f60c..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/security-jwt.md
+++ /dev/null
@@ -1,331 +0,0 @@
----
-id: security-jwt
-title: Client authentication using tokens based on JSON Web Tokens
-sidebar_label: "Authentication using JWT"
-original_id: security-jwt
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-## Token authentication overview
-
-Pulsar supports authenticating clients using security tokens that are based on [JSON Web Tokens](https://jwt.io/introduction/) ([RFC-7519](https://tools.ietf.org/html/rfc7519)).
-
-You can use tokens to identify a Pulsar client and associate it with some "principal" (or "role") that
-is permitted to do some actions (e.g., publish to a topic or consume from a topic).
-
-A user typically gets a token string from the administrator (or some automated service).
-
-The compact representation of a signed JWT is a string that looks like the following:
-
-```
-
-eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY
-
-```
-
-The application specifies the token when it creates the client instance. An alternative is to pass a "token supplier" (a function that returns the token when the client library needs one).
-
-> #### Always use TLS transport encryption
-> Sending a token is equivalent to sending a password over the wire. Always use TLS encryption when you connect to the Pulsar service. See
-> [Transport Encryption using TLS](security-tls-transport.md) for more details.
-
-### CLI Tools
-
-[Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-pulsar-admin.md), [`pulsar-perf`](reference-cli-tools.md#pulsar-perf), and [`pulsar-client`](reference-cli-tools.md#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation.
-
-You need to add the following parameters to that file to use token authentication with the CLI tools of Pulsar:
-
-```properties
-
-webServiceUrl=http://broker.example.com:8080/
-brokerServiceUrl=pulsar://broker.example.com:6650/
-authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken
-authParams=token:eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY
-
-```
-
-The token string can also be read from a file, for example:
-
-```
-
-authParams=file:///path/to/token/file
-
-```
-
-### Pulsar client
-
-You can use tokens to authenticate the following Pulsar clients.
-
-````mdx-code-block
-
-
-```java
-
-PulsarClient client = PulsarClient.builder()
-    .serviceUrl("pulsar://broker.example.com:6650/")
-    .authentication(
-        AuthenticationFactory.token("eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY"))
-    .build();
-
-```
-
-Similarly, you can also pass a `Supplier`:
-
-```java
-
-PulsarClient client = PulsarClient.builder()
-    .serviceUrl("pulsar://broker.example.com:6650/")
-    .authentication(
-        AuthenticationFactory.token(() -> {
-            // Read token from custom source
-            return readToken();
-        }))
-    .build();
-
-```
-
-
-
-```python
-
-from pulsar import Client, AuthenticationToken
-
-client = Client('pulsar://broker.example.com:6650/',
-                authentication=AuthenticationToken('eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY'))
-
-```
-
-Alternatively, you can also pass a `Supplier`:
-
-```python
-
-def read_token():
-    with open('/path/to/token.txt') as tf:
-        return tf.read().strip()
-
-client = Client('pulsar://broker.example.com:6650/',
-                authentication=AuthenticationToken(read_token))
-
-```
-
-
-
-```go
-
-client, err := NewClient(ClientOptions{
-    URL:            "pulsar://localhost:6650",
-    Authentication: NewAuthenticationToken("eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY"),
-})
-
-```
-
-Similarly, you can also pass a `Supplier`:
-
-```go
-
-client, err := NewClient(ClientOptions{
-    URL: "pulsar://localhost:6650",
-    Authentication: NewAuthenticationTokenSupplier(func () string {
-        // Read token from custom source
-        return readToken()
-    }),
-})
-
-```
-
-
-
-```c++
-
-#include <pulsar/Client.h>
-
-pulsar::ClientConfiguration config;
-config.setAuth(pulsar::AuthToken::createWithToken("eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY"));
-
-pulsar::Client client("pulsar://broker.example.com:6650/", config);
-
-```
-
-
-
-```c#
-
-var client = PulsarClient.Builder()
-    .AuthenticateUsingToken("eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY")
-    .Build();
-
-```
-
-
-````
-
-## Enable token authentication
-
-For how to enable token authentication on a Pulsar cluster, refer to the guide below.
-
-JWT supports two different kinds of keys in order to generate and validate the tokens:
-
- * Symmetric:
-   - You can use a single ***Secret*** key to generate and validate tokens.
- * Asymmetric: a pair of keys consisting of the Private key and the Public key.
-   - You can use the ***Private*** key to generate tokens.
-   - You can use the ***Public*** key to validate tokens.
-
-### Create a secret key
-
-When you use a secret key, the administrator creates the key and uses the key to generate the client tokens. You also configure this key on the brokers so that they can validate clients.
-
-The output file is generated in the root of your Pulsar installation directory. You can also provide an absolute path for the output file using the command below.
-
-```shell
-
-$ bin/pulsar tokens create-secret-key --output my-secret.key
-
-```
-
-Enter this command to generate a base64-encoded secret key:
-
-```shell
-
-$ bin/pulsar tokens create-secret-key --output /opt/my-secret.key --base64
-
-```
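-If you prefer to mint tokens programmatically instead of using the CLI, the sketch below uses the jjwt library (the JWT library that Pulsar's token utilities are built on). It is an illustration only: the class name is invented, and the base64-encoded secret reuses the sample key string shown later on this page.
-
-```java
-
-import java.util.Base64;
-import javax.crypto.SecretKey;
-import io.jsonwebtoken.Jwts;
-import io.jsonwebtoken.security.Keys;
-
-public class MintToken {
-    public static void main(String[] args) {
-        // Base64-encoded secret, as produced by `tokens create-secret-key --base64`
-        byte[] secret = Base64.getDecoder().decode("FLFyW0oLJ2Fi22KKCm21J18mbAdztfSHN/lAT5ucEKU=");
-        SecretKey key = Keys.hmacShaKeyFor(secret);
-
-        // The "subject" is the principal/role the token authenticates as
-        String token = Jwts.builder()
-                .setSubject("test-user")
-                .signWith(key)
-                .compact();
-        System.out.println(token);
-    }
-}
-
-```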
-### Create a key pair
-
-To use public and private keys, you need to create a key pair. Pulsar supports all algorithms that the Java JWT library (shown [here](https://github.com/jwtk/jjwt#signature-algorithms-keys)) supports.
-
-The output file is generated in the root of your Pulsar installation directory. You can also provide an absolute path for the output file using the command below.
-
-```shell
-
-$ bin/pulsar tokens create-key-pair --output-private-key my-private.key --output-public-key my-public.key
-
-```
-
- * Store `my-private.key` in a safe location; only the administrator can use `my-private.key` to generate new tokens.
- * `my-public.key` is distributed to all Pulsar brokers. You can publicly share this file without any security concern.
-
-### Generate tokens
-
-A token is the credential associated with a user. The association is done through the "principal" or "role". In the case of JWT tokens, this field is typically referred to as the **subject**, though they are exactly the same concept.
-
-Use this command to generate a token with the **subject** field set:
-
-```shell
-
-$ bin/pulsar tokens create --secret-key file:///path/to/my-secret.key \
- --subject test-user
-
-```
-
-This command prints the token string on stdout.
-
-Similarly, you can create a token by passing the "private" key using the command below:
-
-```shell
-
-$ bin/pulsar tokens create --private-key file:///path/to/my-private.key \
- --subject test-user
-
-```
-
-Finally, you can enter the following command to create a token with a pre-defined TTL, after which the token is automatically invalidated:
-
-```shell
-
-$ bin/pulsar tokens create --secret-key file:///path/to/my-secret.key \
- --subject test-user \
- --expiry-time 1y
-
-```
-
-### Authorization
-
-The token itself does not have any permissions associated with it. The authorization engine determines whether the token should have permissions or not. Once you have created the token, you can grant permission for this token to do certain actions. The following is an example.
-
-```shell
-
-$ bin/pulsar-admin namespaces grant-permission my-tenant/my-namespace \
- --role test-user \
- --actions produce,consume
-
-```
-
-### Enable token authentication on Brokers
-
-To configure brokers to authenticate clients, add the following parameters to `broker.conf`:
-
-```properties
-
-# Configuration to enable authentication and authorization
-authenticationEnabled=true
-authorizationEnabled=true
-authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken
-
-# Authentication settings of the broker itself. Used when the broker connects to other brokers, either in same or other clusters
-brokerClientTlsEnabled=true
-brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken
-brokerClientAuthenticationParameters={"token":"eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0LXVzZXIifQ.9OHgE9ZUDeBTZs7nSMEFIuGNEX18FLR3qvy8mqxSxXw"}
-# Either configure the token string or specify to read it from a file. The following three available formats are all valid:
-# brokerClientAuthenticationParameters={"token":"your-token-string"}
-# brokerClientAuthenticationParameters=token:your-token-string
-# brokerClientAuthenticationParameters=file:///path/to/token
-brokerClientTrustCertsFilePath=/path/my-ca/certs/ca.cert.pem
-
-# If this flag is set, then the broker authenticates the original auth data;
-# otherwise, it just accepts the originalPrincipal and authorizes it (if required).
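-# (This check matters when clients connect through a proxy that forwards the
-# original client's credentials, i.e. when the proxy sets forwardAuthorizationCredentials=true.)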
-authenticateOriginalAuthData=true
-
-# If using secret key (Note: key files must be DER-encoded)
-tokenSecretKey=file:///path/to/secret.key
-# The key can also be passed inline:
-# tokenSecretKey=data:;base64,FLFyW0oLJ2Fi22KKCm21J18mbAdztfSHN/lAT5ucEKU=
-
-# If using public/private (Note: key files must be DER-encoded)
-# tokenPublicKey=file:///path/to/public.key
-
-```
-
-### Enable token authentication on Proxies
-
-To configure proxies to authenticate clients, add the following parameters to `proxy.conf`.
-
-The proxy uses its own token when connecting to brokers. You need to configure the role of this token in the `proxyRoles` setting of the brokers. For more details, see the [authorization guide](security-authorization.md).
-
-```properties
-
-# For clients connecting to the proxy
-authenticationEnabled=true
-authorizationEnabled=true
-authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken
-tokenSecretKey=file:///path/to/secret.key
-
-# For the proxy to connect to brokers
-brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken
-brokerClientAuthenticationParameters={"token":"eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0LXVzZXIifQ.9OHgE9ZUDeBTZs7nSMEFIuGNEX18FLR3qvy8mqxSxXw"}
-# Either configure the token string or specify to read it from a file. The following three available formats are all valid:
-# brokerClientAuthenticationParameters={"token":"your-token-string"}
-# brokerClientAuthenticationParameters=token:your-token-string
-# brokerClientAuthenticationParameters=file:///path/to/token
-
-# Whether client authorization credentials are forwarded to the broker for re-authorization.
-# Authentication must be enabled via authenticationEnabled=true for this to take effect.
-forwardAuthorizationCredentials=true
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/security-kerberos.md b/site2/website/versioned_docs/version-2.8.1-deprecated/security-kerberos.md
deleted file mode 100644
index c49fa3bea1fce0..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/security-kerberos.md
+++ /dev/null
@@ -1,443 +0,0 @@
----
-id: security-kerberos
-title: Authentication using Kerberos
-sidebar_label: "Authentication using Kerberos"
-original_id: security-kerberos
----
-
-[Kerberos](https://web.mit.edu/kerberos/) is a network authentication protocol. By using secret-key cryptography, [Kerberos](https://web.mit.edu/kerberos/) is designed to provide strong authentication for client applications and server applications.
-
-In Pulsar, you can use Kerberos with [SASL](https://en.wikipedia.org/wiki/Simple_Authentication_and_Security_Layer) as a choice for authentication, and Pulsar uses the [Java Authentication and Authorization Service (JAAS)](https://en.wikipedia.org/wiki/Java_Authentication_and_Authorization_Service) for SASL configuration. You need to provide JAAS configurations for Kerberos authentication.
-
-This document introduces how to configure `Kerberos` with `SASL` between Pulsar clients and brokers and how to configure Kerberos for Pulsar proxy in detail.
-
-## Configuration for Kerberos between Client and Broker
-
-### Prerequisites
-
-To begin, you need to set up (or already have) a [Key Distribution Center (KDC)](https://en.wikipedia.org/wiki/Key_distribution_center). You also need to configure and run the [Key Distribution Center (KDC)](https://en.wikipedia.org/wiki/Key_distribution_center) in advance.
-
-If your organization already uses a Kerberos server (for example, by using `Active Directory`), you do not have to install a new server for Pulsar. If your organization does not use a Kerberos server, you need to install one. Your Linux vendor might have packages for `Kerberos`. For how to install and configure Kerberos, refer to [Ubuntu](https://help.ubuntu.com/community/Kerberos) and
-[Redhat](https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Managing_Smart_Cards/installing-kerberos.html).
-
-Note that if you use Oracle Java, you need to download JCE policy files for your Java version and copy them to the `$JAVA_HOME/jre/lib/security` directory.
-
-#### Kerberos principals
-
-If you use an existing Kerberos system, ask your Kerberos administrator for a principal for each broker in your cluster and for every operating system user that accesses Pulsar with Kerberos authentication (via clients and tools).
-
-If you have installed your own Kerberos system, you can create these principals with the following commands:
-
-```shell
-
-### add Principals for broker
-sudo /usr/sbin/kadmin.local -q 'addprinc -randkey broker/{hostname}@{REALM}'
-sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{broker-keytabname}.keytab broker/{hostname}@{REALM}"
-### add Principals for client
-sudo /usr/sbin/kadmin.local -q 'addprinc -randkey client/{hostname}@{REALM}'
-sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{client-keytabname}.keytab client/{hostname}@{REALM}"
-
-```
-
-Note that *Kerberos* requires that all your hosts can be resolved with their FQDNs.
-
-The first part of a broker principal (for example, `broker` in `broker/{hostname}@{REALM}`) is the `serverType` of each host. The suggested values of `serverType` are `broker` (the host machine runs the Pulsar Broker service) and `proxy` (the host machine runs the Pulsar Proxy service).
-
-#### Configure how to connect to KDC
-
-You need to set the JVM option below to specify the path to the `krb5.conf` file for both the client side and the broker side. The content of the `krb5.conf` file indicates the default realm and KDC information. See [JDK's Kerberos Requirements](https://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/KerberosReq.html) for more details.
-
-```shell
-
--Djava.security.krb5.conf=/etc/pulsar/krb5.conf
-
-```
-
-Here is an example of the krb5.conf file:
-
-In the configuration file, `EXAMPLE.COM` is the default realm; `kdc = localhost:62037` is the KDC server URL for realm `EXAMPLE.COM`:
-
-```
-
-[libdefaults]
- default_realm = EXAMPLE.COM
-
-[realms]
- EXAMPLE.COM = {
-  kdc = localhost:62037
- }
-
-```
-
-Usually, machines configured with Kerberos already have a system-wide configuration, in which case this configuration is optional.
-
-#### JAAS configuration file
-
-You need a JAAS configuration file for the client side and the broker side. The JAAS configuration file provides the information that is used to connect to the KDC.
Here is an example named `pulsar_jaas.conf`:
-
-```
-
- PulsarBroker {
-   com.sun.security.auth.module.Krb5LoginModule required
-   useKeyTab=true
-   storeKey=true
-   useTicketCache=false
-   keyTab="/etc/security/keytabs/pulsarbroker.keytab"
-   principal="broker/localhost@EXAMPLE.COM";
-};
-
- PulsarClient {
-   com.sun.security.auth.module.Krb5LoginModule required
-   useKeyTab=true
-   storeKey=true
-   useTicketCache=false
-   keyTab="/etc/security/keytabs/pulsarclient.keytab"
-   principal="client/localhost@EXAMPLE.COM";
-};
-
-```
-
-You need to set the `JAAS` configuration file path as a JVM parameter for the client and the broker. For example:
-
-```shell
-
- -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf
-
-```
-
-In the `pulsar_jaas.conf` file above:
-
-1. `PulsarBroker` is a section name in the JAAS file that each broker uses. This section tells the broker which principal to use inside Kerberos and the location of the keytab where the principal is stored. `PulsarBroker` allows the broker to use the keytab specified in this section.
-2. `PulsarClient` is a section name in the JAAS file that each client uses. This section tells the client which principal to use inside Kerberos and the location of the keytab where the principal is stored. `PulsarClient` allows the client to use the keytab specified in this section.
-   The following example also reuses this `PulsarClient` section in both the Pulsar internal admin configuration and in the CLI commands `bin/pulsar-client`, `bin/pulsar-perf` and `bin/pulsar-admin`. You can also add different sections for different use cases.
-
-You can have 2 separate JAAS configuration files:
-* the file for a broker that has sections for both `PulsarBroker` and `PulsarClient`;
-* the file for a client that only has a `PulsarClient` section.
-
-
-### Kerberos configuration for Brokers
-
-#### Configure the `broker.conf` file
-
- In the `broker.conf` file, set the Kerberos-related configurations.
-
- - Set `authenticationEnabled` to `true`;
- - Set `authenticationProviders` to `AuthenticationProviderSasl`;
- - Set `saslJaasClientAllowedIds` to a regex of the principals that are allowed to connect to the broker;
- - Set `saslJaasBrokerSectionName` to the section in the JAAS configuration file that the broker uses;
-
- To make the Pulsar internal admin client work properly, you need to set the configuration in the `broker.conf` file as below:
- - Set `brokerClientAuthenticationPlugin` to the client plugin `AuthenticationSasl`;
- - Set `brokerClientAuthenticationParameters` to the JSON string `{"saslJaasClientSectionName":"PulsarClient", "serverType":"broker"}`, in which `PulsarClient` is the section name in the `pulsar_jaas.conf` file, and `"serverType":"broker"` indicates that the internal admin client connects to a Pulsar Broker;
-
- Here is an example:
-
-```
-
-authenticationEnabled=true
-authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderSasl
-saslJaasClientAllowedIds=.*client.*
-saslJaasBrokerSectionName=PulsarBroker
-
-## Authentication settings of the broker itself. Used when the broker connects to other brokers
-brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationSasl
-brokerClientAuthenticationParameters={"saslJaasClientSectionName":"PulsarClient", "serverType":"broker"}
-
-```
-
-#### Set Broker JVM parameter
-
- Set the JVM parameters for the JAAS configuration file and the krb5 configuration file with additional options.
-
-```shell
-
-   -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf
-
-```
-
-You can add this at the end of `PULSAR_EXTRA_OPTS` in the file [`pulsar_env.sh`](https://github.com/apache/pulsar/blob/master/conf/pulsar_env.sh).
-
-You must ensure that the operating system user who starts the broker can reach the keytabs configured in the `pulsar_jaas.conf` file and the KDC server configured in the `krb5.conf` file.
-
-### Kerberos configuration for clients
-
-#### Java Client and Java Admin Client
-
-In your client application, include `pulsar-client-auth-sasl` as a project dependency.
-
-```
-
-<dependency>
-  <groupId>org.apache.pulsar</groupId>
-  <artifactId>pulsar-client-auth-sasl</artifactId>
-  <version>${pulsar.version}</version>
-</dependency>
-
-```
-
-Configure the authentication type to use `AuthenticationSasl`, and also provide the authentication parameters to it.
-
-You need 2 parameters:
-- `saslJaasClientSectionName`. This parameter corresponds to the section in the JAAS configuration file for the client;
-- `serverType`. This parameter indicates whether this client connects to a broker or a proxy. The client uses this parameter to know which server-side principal should be used.
-
-When you authenticate between client and broker with the settings in the above JAAS configuration file, you need to set `saslJaasClientSectionName` to `PulsarClient` and set `serverType` to `broker`.
-
-The following is an example of creating a Java client:
-
- ```java
-
- System.setProperty("java.security.auth.login.config", "/etc/pulsar/pulsar_jaas.conf");
- System.setProperty("java.security.krb5.conf", "/etc/pulsar/krb5.conf");
-
- Map<String, String> authParams = Maps.newHashMap();
- authParams.put("saslJaasClientSectionName", "PulsarClient");
- authParams.put("serverType", "broker");
-
- Authentication saslAuth = AuthenticationFactory
-         .create(org.apache.pulsar.client.impl.auth.AuthenticationSasl.class.getName(), authParams);
-
- PulsarClient client = PulsarClient.builder()
-         .serviceUrl("pulsar://my-broker.com:6650")
-         .authentication(saslAuth)
-         .build();
-
- ```
-
-> The first two lines in the example above are hard-coded. Alternatively, you can set additional JVM parameters for the JAAS and krb5 configuration files when you run the application, like below:
-
-```
-
-java -cp -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf $APP-jar-with-dependencies.jar $CLASSNAME
-
-```
-
-You must ensure that the operating system user who starts the Pulsar client can reach the keytabs configured in the `pulsar_jaas.conf` file and the KDC server configured in the `krb5.conf` file.
-
-#### Configure CLI tools
-
-If you use a command-line tool (such as `bin/pulsar-client`, `bin/pulsar-perf` and `bin/pulsar-admin`), you need to perform the following steps:
-
-Step 1. Configure your `client.conf`:
-
-```shell
-
-authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationSasl
-authParams={"saslJaasClientSectionName":"PulsarClient", "serverType":"broker"}
-
-```
-
-Step 2. Set the JVM parameters for the JAAS configuration file and the krb5 configuration file with additional options.
-
-```shell
-
-   -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf
-
-```
-
-You can add this at the end of `PULSAR_EXTRA_OPTS` in the file [`pulsar_tools_env.sh`](https://github.com/apache/pulsar/blob/master/conf/pulsar_tools_env.sh),
-or add this line `OPTS="$OPTS -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf "` directly to the CLI tool script.
-
-The meaning of these configurations is the same as in the Java client section.
-
-## Kerberos configuration for working with Pulsar Proxy
-
-With the above configuration, the client and broker can authenticate each other using Kerberos.
-
-A client that connects to the Pulsar Proxy is a little different. The Pulsar Proxy (as a SASL server in Kerberos) authenticates the client (as a SASL client in Kerberos) first; and then the Pulsar broker authenticates the Pulsar Proxy.
-
-In comparison with the above configuration between client and broker, the following shows how to configure the Pulsar Proxy.
-
-### Create principal for Pulsar Proxy in Kerberos
-
-Compared with the above configuration, you need to add new principals for the Pulsar Proxy. If you already have principals for the client and broker, you only need to add the proxy principal here.
-
-```shell
-
-### add Principals for Pulsar Proxy
-sudo /usr/sbin/kadmin.local -q 'addprinc -randkey proxy/{hostname}@{REALM}'
-sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{proxy-keytabname}.keytab proxy/{hostname}@{REALM}"
-### add Principals for broker
-sudo /usr/sbin/kadmin.local -q 'addprinc -randkey broker/{hostname}@{REALM}'
-sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{broker-keytabname}.keytab broker/{hostname}@{REALM}"
-### add Principals for client
-sudo /usr/sbin/kadmin.local -q 'addprinc -randkey client/{hostname}@{REALM}'
-sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{client-keytabname}.keytab client/{hostname}@{REALM}"
-
-```
-
-### Add a section in JAAS configuration file for Pulsar Proxy
-
-In comparison with the above configuration, add a new section for the Pulsar Proxy in the JAAS configuration file.
-
-Here is an example named `pulsar_jaas.conf`:
-
-```
-
- PulsarBroker {
-   com.sun.security.auth.module.Krb5LoginModule required
-   useKeyTab=true
-   storeKey=true
-   useTicketCache=false
-   keyTab="/etc/security/keytabs/pulsarbroker.keytab"
-   principal="broker/localhost@EXAMPLE.COM";
-};
-
- PulsarProxy {
-   com.sun.security.auth.module.Krb5LoginModule required
-   useKeyTab=true
-   storeKey=true
-   useTicketCache=false
-   keyTab="/etc/security/keytabs/pulsarproxy.keytab"
-   principal="proxy/localhost@EXAMPLE.COM";
-};
-
- PulsarClient {
-   com.sun.security.auth.module.Krb5LoginModule required
-   useKeyTab=true
-   storeKey=true
-   useTicketCache=false
-   keyTab="/etc/security/keytabs/pulsarclient.keytab"
-   principal="client/localhost@EXAMPLE.COM";
-};
-
-```
-
-### Proxy client configuration
-
-The Pulsar client configuration is similar to the client-and-broker configuration, except that you need to set `serverType` to `proxy` instead of `broker`, because the client performs Kerberos authentication with the proxy.
-
- ```java
-
- System.setProperty("java.security.auth.login.config", "/etc/pulsar/pulsar_jaas.conf");
- System.setProperty("java.security.krb5.conf", "/etc/pulsar/krb5.conf");
-
- Map<String, String> authParams = Maps.newHashMap();
- authParams.put("saslJaasClientSectionName", "PulsarClient");
- authParams.put("serverType", "proxy");        // ** this is the difference **
-
- Authentication saslAuth = AuthenticationFactory
-         .create(org.apache.pulsar.client.impl.auth.AuthenticationSasl.class.getName(), authParams);
-
- PulsarClient client = PulsarClient.builder()
-         .serviceUrl("pulsar://my-broker.com:6650")
-         .authentication(saslAuth)
-         .build();
-
- ```
-
-> The first two lines in the example above are hard-coded. Alternatively, you can set additional JVM parameters for the JAAS and krb5 configuration files when you run the application, like below:
-
-```
-
-java -cp -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf $APP-jar-with-dependencies.jar $CLASSNAME
-
-```
-
-### Kerberos configuration for Pulsar proxy service
-
-In the `proxy.conf` file, set the Kerberos-related configuration. Here is an example:
-
-```shell
-
-## related to authenticating clients
-authenticationEnabled=true
-authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderSasl
-saslJaasClientAllowedIds=.*client.*
-saslJaasBrokerSectionName=PulsarProxy
-
-## related to being authenticated by the broker
-brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationSasl
-brokerClientAuthenticationParameters={"saslJaasClientSectionName":"PulsarProxy", "serverType":"broker"}
-forwardAuthorizationCredentials=true
-
-```
-
-The first part relates to authentication between the client and the Pulsar Proxy. In this phase, the client acts as a SASL client, while the Pulsar Proxy acts as a SASL server.
-
-The second part relates to authentication between the Pulsar Proxy and the Pulsar Broker. In this phase, the Pulsar Proxy acts as a SASL client, while the Pulsar Broker acts as a SASL server.
-
-### Broker side configuration
-
-The broker-side configuration file is the same as the above `broker.conf`; you do not need any special configuration for the Pulsar Proxy.
-
-```
-
-authenticationEnabled=true
-authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderSasl
-saslJaasClientAllowedIds=.*client.*
-saslJaasBrokerSectionName=PulsarBroker
-
-```
-
-## Regarding authorization and role token
-
-For Kerberos authentication, the authenticated principal is usually used as the role token for Pulsar authorization. For more information about authorization in Pulsar, see [security authorization](security-authorization.md).
-
-If you enable `authorizationEnabled`, you need to set `superUserRoles` in `broker.conf` to the name registered in the KDC.
-
-For example:
-
-```bash
-
-superUserRoles=client/{clientIp}@EXAMPLE.COM
-
-```
-
-## Regarding authentication between ZooKeeper and Broker
-
-The Pulsar Broker acts as a Kerberos client when it authenticates with ZooKeeper.
According to the [ZooKeeper documentation](https://cwiki.apache.org/confluence/display/ZOOKEEPER/Client-Server+mutual+authentication), you need these settings in `conf/zookeeper.conf`:
-
-```
-
-authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
-requireClientAuthScheme=sasl
-
-```
-
-Add a `Client` section to the `pulsar_jaas.conf` file that the Pulsar Broker uses:
-
-```
-
- Client {
-   com.sun.security.auth.module.Krb5LoginModule required
-   useKeyTab=true
-   storeKey=true
-   useTicketCache=false
-   keyTab="/etc/security/keytabs/pulsarbroker.keytab"
-   principal="broker/localhost@EXAMPLE.COM";
-};
-
-```
-
-In this setting, the principal of the Pulsar Broker and the keyTab file indicate the role of the broker when it authenticates with ZooKeeper.
-
-## Regarding authentication between BookKeeper and Broker
-
-The Pulsar Broker acts as a Kerberos client when it authenticates with a bookie. According to the [BookKeeper documentation](http://bookkeeper.apache.org/docs/latest/security/sasl/), you need to add the `bookkeeperClientAuthenticationPlugin` parameter in `broker.conf`:
-
-```
-
-bookkeeperClientAuthenticationPlugin=org.apache.bookkeeper.sasl.SASLClientProviderFactory
-
-```
-
-In this setting, `SASLClientProviderFactory` creates a BookKeeper SASL client in a broker, and the broker uses the created SASL client to authenticate with a bookie node.
-
-Add a `BookKeeper` section to the `pulsar_jaas.conf` file that the Pulsar Broker uses:
-
-```
-
- BookKeeper {
-   com.sun.security.auth.module.Krb5LoginModule required
-   useKeyTab=true
-   storeKey=true
-   useTicketCache=false
-   keyTab="/etc/security/keytabs/pulsarbroker.keytab"
-   principal="broker/localhost@EXAMPLE.COM";
-};
-
-```
-
-In this setting, the principal of the Pulsar Broker and the keyTab file indicate the role of the broker when it authenticates with a bookie.
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/security-oauth2.md b/site2/website/versioned_docs/version-2.8.1-deprecated/security-oauth2.md
deleted file mode 100644
index 24b1530cc848ae..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/security-oauth2.md
+++ /dev/null
@@ -1,232 +0,0 @@
----
-id: security-oauth2
-title: Client authentication using OAuth 2.0 access tokens
-sidebar_label: "Authentication using OAuth 2.0 access tokens"
-original_id: security-oauth2
----
-
-Pulsar supports authenticating clients using OAuth 2.0 access tokens. You can use OAuth 2.0 access tokens to identify a Pulsar client and associate the Pulsar client with some "principal" (or "role"), which is permitted to do some actions, such as publishing messages to a topic or consuming messages from a topic.
-
-This module is used to support the Pulsar client authentication plugin for OAuth 2.0. After communicating with the OAuth 2.0 server, the Pulsar client gets an `access token` from the OAuth 2.0 server, and passes this `access token` to the Pulsar broker to do the authentication. The broker can use `org.apache.pulsar.broker.authentication.AuthenticationProviderToken`, or you can add your own `AuthenticationProvider` to work with this module.
-
-## Authentication provider configuration
-
-This library allows you to authenticate the Pulsar client by using an access token that is obtained from an OAuth 2.0 authorization service, which acts as a _token issuer_.
-
-### Authentication types
-
-The authentication type determines how to obtain an access token through an OAuth 2.0 authorization flow.
-
-:::note
-
-Currently, the Pulsar Java client only supports the `client_credentials` authentication type.
-
-:::
-
-#### Client credentials
-
-The following table lists the parameters supported for the `client_credentials` authentication type.
-
-| Parameter | Description | Example | Required or not |
-| --- | --- | --- | --- |
-| `type` | OAuth 2.0 authentication type. | `client_credentials` (default) | Optional |
-| `issuerUrl` | URL of the authentication provider which allows the Pulsar client to obtain an access token | `https://accounts.google.com` | Required |
-| `privateKey` | URL to a JSON credentials file | Supports the following pattern formats:
-  1. `file:///path/to/file`
-  2. `file:/path/to/file`
-  3. `data:application/json;base64,`
-  | Required |
-| `audience` | An OAuth 2.0 "resource server" identifier for the Pulsar cluster | `https://broker.example.com` | Required |
-
-The credentials file contains the service account credentials used with the client authentication type. The following shows an example of a credentials file `credentials_file.json`.
-
-```json
-
-{
-  "type": "client_credentials",
-  "client_id": "d9ZyX97q1ef8Cr81WHVC4hFQ64vSlDK3",
-  "client_secret": "on1uJ...k6F6R",
-  "client_email": "1234567890-abcdefghijklmnopqrstuvwxyz@developer.gserviceaccount.com",
-  "issuer_url": "https://accounts.google.com"
-}
-
-```
-
-In the above example, the authentication type is set to `client_credentials` by default, and the fields `client_id` and `client_secret` are required.
-
-### Typical original OAuth2 request mapping
-
-The following shows a typical original OAuth2 request, which is used to obtain the access token from the OAuth2 server.
-
-```bash
-
-curl --request POST \
-  --url https://dev-kt-aa9ne.us.auth0.com/oauth/token \
-  --header 'content-type: application/json' \
-  --data '{
-  "client_id":"Xd23RHsUnvUlP7wchjNYOaIfazgeHd9x",
-  "client_secret":"rT7ps7WY8uhdVuBTKWZkttwLdQotmdEliaM5rLfmgNibvqziZ-g07ZH52N_poGAb",
-  "audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/",
-  "grant_type":"client_credentials"}'
-
-```
-
-In the above example, the mapping relationship is as follows.
-
-- The `issuerUrl` parameter in this plugin is mapped to `--url https://dev-kt-aa9ne.us.auth0.com`.
-- The `privateKey` file parameter in this plugin should contain at least the `client_id` and `client_secret` fields.
-- The `audience` parameter in this plugin is mapped to `"audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/"`.
-
-## Client Configuration
-
-You can use the OAuth2 authentication provider with the following Pulsar clients.
-
-### Java
-
-You can use the factory method to configure authentication for the Pulsar Java client.
-
-```java
-
-import java.net.URL;
-import org.apache.pulsar.client.impl.auth.oauth2.AuthenticationFactoryOAuth2;
-
-URL issuerUrl = new URL("https://dev-kt-aa9ne.us.auth0.com");
-URL credentialsUrl = new URL("file:///path/to/KeyFile.json");
-String audience = "https://dev-kt-aa9ne.us.auth0.com/api/v2/";
-
-PulsarClient client = PulsarClient.builder()
-    .serviceUrl("pulsar://broker.example.com:6650/")
-    .authentication(
-        AuthenticationFactoryOAuth2.clientCredentials(issuerUrl, credentialsUrl, audience))
-    .build();
-
-```
-
-In addition, you can also use encoded parameters to configure authentication for the Pulsar Java client.
-
-```java
-
-Authentication auth = AuthenticationFactory
-    .create(AuthenticationOAuth2.class.getName(),
-        "{\"type\":\"client_credentials\",\"privateKey\":\"./key/path/..\",\"issuerUrl\":\"...\",\"audience\":\"...\"}");
-PulsarClient client = PulsarClient.builder()
-    .serviceUrl("pulsar://broker.example.com:6650/")
-    .authentication(auth)
-    .build();
-
-```
-
-### C++ client
-
-The C++ client is similar to the Java client. You need to provide the `issuerUrl`, `private_key` (the credentials file path), and `audience` parameters.
-
-```c++
-
-#include <pulsar/Client.h>
-
-pulsar::ClientConfiguration config;
-std::string params = R"({
-    "issuer_url": "https://dev-kt-aa9ne.us.auth0.com",
-    "private_key": "../../pulsar-broker/src/test/resources/authentication/token/cpp_credentials_file.json",
-    "audience": "https://dev-kt-aa9ne.us.auth0.com/api/v2/"})";
-
-config.setAuth(pulsar::AuthOauth2::create(params));
-
-pulsar::Client client("pulsar://broker.example.com:6650/", config);
-
-```
-
-### Go client
-
-To enable OAuth2 authentication in the Go client, you need to configure OAuth2 authentication.
-This example shows how to configure OAuth2 authentication in the Go client.
-
-```go
-
-oauth := pulsar.NewAuthenticationOAuth2(map[string]string{
-		"type":       "client_credentials",
-		"issuerUrl":  "https://dev-kt-aa9ne.us.auth0.com",
-		"audience":   "https://dev-kt-aa9ne.us.auth0.com/api/v2/",
-		"privateKey": "/path/to/privateKey",
-		"clientId":   "0Xx...Yyxeny",
-	})
-client, err := pulsar.NewClient(pulsar.ClientOptions{
-		URL:            "pulsar://my-cluster:6650",
-		Authentication: oauth,
-})
-
-```
-
-### Python client
-
-To enable OAuth2 authentication in the Python client, you need to configure OAuth2 authentication.
-This example shows how to configure OAuth2 authentication in the Python client.
-
-```python
-
-from pulsar import Client, AuthenticationOauth2
-
-params = '''
-{
-    "issuer_url": "https://dev-kt-aa9ne.us.auth0.com",
-    "private_key": "/path/to/privateKey",
-    "audience": "https://dev-kt-aa9ne.us.auth0.com/api/v2/"
-}
-'''
-
-client = Client("pulsar://my-cluster:6650", authentication=AuthenticationOauth2(params))
-
-```
-
-## CLI configuration
-
-This section describes how to use Pulsar CLI tools to connect to a cluster through the OAuth2 authentication plugin.
-
-### pulsar-admin
-
-This example shows how to use pulsar-admin to connect to a cluster through the OAuth2 authentication plugin.
-
-```shell script
-
-bin/pulsar-admin --admin-url https://streamnative.cloud:443 \
---auth-plugin org.apache.pulsar.client.impl.auth.oauth2.AuthenticationOAuth2 \
---auth-params '{"privateKey":"file:///path/to/key/file.json",
-    "issuerUrl":"https://dev-kt-aa9ne.us.auth0.com",
-    "audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/"}' \
-tenants list
-
-```
-
-Set the `admin-url` parameter to the web service URL. A web service URL is a combination of the protocol, hostname and port, such as `http://localhost:8080`.
-Set the `privateKey`, `issuerUrl`, and `audience` parameters to the values based on the configuration in the key file. For details, see [authentication types](#authentication-types).
-
-### pulsar-client
-
-This example shows how to use pulsar-client to connect to a cluster through the OAuth2 authentication plugin.
-
-```shell script
-
-bin/pulsar-client \
---url SERVICE_URL \
---auth-plugin org.apache.pulsar.client.impl.auth.oauth2.AuthenticationOAuth2 \
---auth-params '{"privateKey":"file:///path/to/key/file.json",
-    "issuerUrl":"https://dev-kt-aa9ne.us.auth0.com",
-    "audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/"}' \
-produce test-topic -m "test-message" -n 10
-
-```
-
-Set the `url` parameter to the broker service URL. A broker service URL is a combination of the protocol, hostname and port, such as `pulsar://localhost:6650`.
-Set the `privateKey`, `issuerUrl`, and `audience` parameters to the values based on the configuration in the key file. For details, see [authentication types](#authentication-types).
-
-### pulsar-perf
-
-This example shows how to use pulsar-perf to connect to a cluster through the OAuth2 authentication plugin.
-
-```shell script
-
-bin/pulsar-perf produce --service-url pulsar+ssl://streamnative.cloud:6651 \
---auth_plugin org.apache.pulsar.client.impl.auth.oauth2.AuthenticationOAuth2 \
---auth-params '{"privateKey":"file:///path/to/key/file.json",
-    "issuerUrl":"https://dev-kt-aa9ne.us.auth0.com",
-    "audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/"}' \
--r 1000 -s 1024 test-topic
-
-```
-
-Set the `service-url` parameter to the broker service URL. A broker service URL is a combination of the protocol, hostname and port, such as `pulsar://localhost:6650`.
-Set the `privateKey`, `issuerUrl`, and `audience` parameters to the values based on the configuration in the key file. For details, see [authentication types](#authentication-types).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/security-overview.md b/site2/website/versioned_docs/version-2.8.1-deprecated/security-overview.md
deleted file mode 100644
index c6bd9b64e4f766..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/security-overview.md
+++ /dev/null
@@ -1,36 +0,0 @@
----
-id: security-overview
-title: Pulsar security overview
-sidebar_label: "Overview"
-original_id: security-overview
----
-
-As the central message bus for a business, Apache Pulsar is frequently used for storing mission-critical data. Therefore, enabling security features in Pulsar is crucial.
-
-By default, Pulsar configures no encryption, authentication, or authorization. Any client can communicate with Apache Pulsar via plain-text service URLs, so you must ensure that access to Pulsar via these plain-text service URLs is restricted to trusted clients only. In such cases, you can use network segmentation and/or authorization ACLs to restrict access to trusted IPs. If you use neither, the cluster is wide open and anyone can access it.
-
-Pulsar supports a pluggable authentication mechanism, and Pulsar clients use this mechanism to authenticate with brokers and proxies. You can also configure Pulsar to support multiple authentication sources.
-
-The Pulsar broker validates the authentication credentials when a connection is established. After the initial connection is authenticated, the "principal" token is stored for authorization, though the connection is not re-authenticated. The broker periodically checks the expiration status of every `ServerCnx` object. You can set `authenticationRefreshCheckSeconds` on the broker to control how frequently the expiration status is checked; by default, it is set to 60s. When the authentication expires, the broker forces the connection to re-authenticate. If the re-authentication fails, the broker disconnects the client.
-
-The broker supports learning whether a particular client supports authentication refreshing. If a client supports authentication refreshing and the credential is expired, the authentication provider calls the `refreshAuthentication` method to initiate the refreshing process. If a client does not support authentication refreshing and the credential is expired, the broker disconnects the client.
-
-You should secure the service components in your Apache Pulsar deployment.
-
-## Role tokens
-
-In Pulsar, a *role* is a string, like `admin` or `app1`, which can represent a single client or multiple clients. You can use roles to control clients' permission to produce or consume from certain topics, administer the configuration for tenants, and so on.
-
-Apache Pulsar uses an [Authentication Provider](#authentication-providers) to establish the identity of a client and then assign a *role token* to that client. This role token is then used for [Authorization and ACLs](security-authorization.md) to determine what the client is authorized to do.
-
-## Authentication providers
-
-Currently, Pulsar supports the following authentication providers:
-
-- [TLS Authentication](security-tls-authentication.md)
-- [Athenz](security-athenz.md)
-- [Kerberos](security-kerberos.md)
-- [JSON Web Token Authentication](security-jwt.md)
-- [OAuth 2.0 authentication](security-oauth2.md)
-- [HTTP basic authentication](security-basic-auth.md)
-
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/security-tls-authentication.md b/site2/website/versioned_docs/version-2.8.1-deprecated/security-tls-authentication.md
deleted file mode 100644
index 85d2240f413060..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/security-tls-authentication.md
+++ /dev/null
@@ -1,222 +0,0 @@
----
-id: security-tls-authentication
-title: Authentication using TLS
-sidebar_label: "Authentication using TLS"
-original_id: security-tls-authentication
----
-
-## TLS authentication overview
-
-TLS authentication is an extension of [TLS transport encryption](security-tls-transport.md). Not only do servers have keys and certs that the client uses to verify their identity; clients also have keys and certs that the server uses to verify the identity of clients. You must have TLS transport encryption configured on your cluster before you can use TLS authentication. This guide assumes you already have TLS transport encryption configured.
-
-`Bouncy Castle Provider` provides TLS-related cipher suites and algorithms in Pulsar. If you need the [FIPS](https://www.bouncycastle.org/fips_faq.html) version of `Bouncy Castle Provider`, refer to the [Bouncy Castle page](security-bouncy-castle.md).
-
-### Create client certificates
-
-Client certificates are generated using the certificate authority. Server certificates are also generated with the same certificate authority.
-
-The biggest difference between client certs and server certs is that the **common name** for the client certificate is the **role token** that the client is authenticated as.
-
-To use client certificates, you need to set `tlsRequireTrustedClientCertOnConnect=true` on the broker side. For details, refer to [TLS broker configuration](security-tls-transport.md#configure-broker).
-
-First, you need to enter the following command to generate the key:
-
-```bash
-
-$ openssl genrsa -out admin.key.pem 2048
-
-```
-
-Similar to the broker, the client expects the key to be in [PKCS 8](https://en.wikipedia.org/wiki/PKCS_8) format, so you need to convert it by entering the following command:
-
-```bash
-
-$ openssl pkcs8 -topk8 -inform PEM -outform PEM \
-      -in admin.key.pem -out admin.key-pk8.pem -nocrypt
-
-```
-
-Next, enter the command below to generate the certificate request. When you are asked for a **common name**, enter the **role token** that you want this key pair to authenticate a client as.
-
-```bash
-
-$ openssl req -config openssl.cnf \
-      -key admin.key.pem -new -sha256 -out admin.csr.pem
-
-```
-
-:::note
-
-If openssl.cnf is not specified, read [Certificate authority](http://pulsar.apache.org/docs/en/security-tls-transport/#certificate-authority) to get the openssl.cnf.
-
-:::
Note that the client certs uses the **usr_cert** extension, which allows the cert to be used for client authentication. - -```bash - -$ openssl ca -config openssl.cnf -extensions usr_cert \ - -days 1000 -notext -md sha256 \ - -in admin.csr.pem -out admin.cert.pem - -``` - -You can get a cert, `admin.cert.pem`, and a key, `admin.key-pk8.pem` from this command. With `ca.cert.pem`, clients can use this cert and this key to authenticate themselves to brokers and proxies as the role token ``admin``. - -:::note - -If the "unable to load CA private key" error occurs and the reason of this error is "No such file or directory: /etc/pki/CA/private/cakey.pem" in this step. Try the command below: - -```bash - -$ cd /etc/pki/tls/misc/CA -$ ./CA -newca - -``` - -to generate `cakey.pem` . - -::: - -## Enable TLS authentication on brokers - -To configure brokers to authenticate clients, add the following parameters to `broker.conf`, alongside [the configuration to enable tls transport](security-tls-transport.md#broker-configuration): - -```properties - -# Configuration to enable authentication -authenticationEnabled=true -authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderTls - -# operations and publish/consume from all topics -superUserRoles=admin - -# Authentication settings of the broker itself. Used when the broker connects to other brokers, either in same or other clusters -brokerClientTlsEnabled=true -brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationTls -brokerClientAuthenticationParameters={"tlsCertFile":"/path/my-ca/admin.cert.pem","tlsKeyFile":"/path/my-ca/admin.key-pk8.pem"} -brokerClientTrustCertsFilePath=/path/my-ca/certs/ca.cert.pem - -``` - -## Enable TLS authentication on proxies - -To configure proxies to authenticate clients, add the following parameters to `proxy.conf`, alongside [the configuration to enable tls transport](security-tls-transport.md#proxy-configuration): - -The proxy should have its own client key pair for connecting to brokers. You need to configure the role token for this key pair in the ``proxyRoles`` of the brokers. See the [authorization guide](security-authorization.md) for more details. - -```properties - -# For clients connecting to the proxy -authenticationEnabled=true -authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderTls - -# For the proxy to connect to brokers -brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationTls -brokerClientAuthenticationParameters=tlsCertFile:/path/to/proxy.cert.pem,tlsKeyFile:/path/to/proxy.key-pk8.pem - -``` - -## Client configuration - -When you use TLS authentication, client connects via TLS transport. You need to configure the client to use ```https://``` and 8443 port for the web service URL, ```pulsar+ssl://``` and 6651 port for the broker service URL. - -### CLI tools - -[Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-pulsar-admin.md), [`pulsar-perf`](reference-cli-tools.md#pulsar-perf), and [`pulsar-client`](reference-cli-tools.md#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation. 
-
-You need to add the following parameters to that file to use TLS authentication with the CLI tools of Pulsar:
-
-```properties
-
-webServiceUrl=https://broker.example.com:8443/
-brokerServiceUrl=pulsar+ssl://broker.example.com:6651/
-useTls=true
-tlsAllowInsecureConnection=false
-tlsTrustCertsFilePath=/path/to/ca.cert.pem
-authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationTls
-authParams=tlsCertFile:/path/to/my-role.cert.pem,tlsKeyFile:/path/to/my-role.key-pk8.pem
-
-```
-
-### Java client
-
-```java
-
-import org.apache.pulsar.client.api.PulsarClient;
-
-PulsarClient client = PulsarClient.builder()
-    .serviceUrl("pulsar+ssl://broker.example.com:6651/")
-    .enableTls(true)
-    .tlsTrustCertsFilePath("/path/to/ca.cert.pem")
-    .authentication("org.apache.pulsar.client.impl.auth.AuthenticationTls",
-                    "tlsCertFile:/path/to/my-role.cert.pem,tlsKeyFile:/path/to/my-role.key-pk8.pem")
-    .build();
-
-```
-
-### Python client
-
-```python
-
-from pulsar import Client, AuthenticationTLS
-
-auth = AuthenticationTLS("/path/to/my-role.cert.pem", "/path/to/my-role.key-pk8.pem")
-client = Client("pulsar+ssl://broker.example.com:6651/",
-                tls_trust_certs_file_path="/path/to/ca.cert.pem",
-                tls_allow_insecure_connection=False,
-                authentication=auth)
-
-```
-
-### C++ client
-
-```c++
-
-#include <pulsar/Client.h>
-
-pulsar::ClientConfiguration config;
-config.setUseTls(true);
-config.setTlsTrustCertsFilePath("/path/to/ca.cert.pem");
-config.setTlsAllowInsecureConnection(false);
-
-pulsar::AuthenticationPtr auth = pulsar::AuthTls::create("/path/to/my-role.cert.pem",
-                                                         "/path/to/my-role.key-pk8.pem");
-config.setAuth(auth);
-
-pulsar::Client client("pulsar+ssl://broker.example.com:6651/", config);
-
-```
-
-### Node.js client
-
-```JavaScript
-
-const Pulsar = require('pulsar-client');
-
-(async () => {
-  const auth = new Pulsar.AuthenticationTls({
-    certificatePath: '/path/to/my-role.cert.pem',
-    privateKeyPath: '/path/to/my-role.key-pk8.pem',
-  });
-
-  const client = new Pulsar.Client({
-    serviceUrl: 'pulsar+ssl://broker.example.com:6651/',
-    authentication: auth,
-    tlsTrustCertsFilePath: '/path/to/ca.cert.pem',
-  });
-})();
-
-```
-
-### C# client
-
-```c#
-
-var clientCertificate = new X509Certificate2("admin.pfx");
-var client = PulsarClient.Builder()
-    .AuthenticateUsingClientCertificate(clientCertificate)
-    .Build();
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/security-tls-keystore.md b/site2/website/versioned_docs/version-2.8.1-deprecated/security-tls-keystore.md
deleted file mode 100644
index b09500e3daf04c..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/security-tls-keystore.md
+++ /dev/null
@@ -1,322 +0,0 @@
----
-id: security-tls-keystore
-title: Using TLS with KeyStore configure
-sidebar_label: "Using TLS with KeyStore configure"
-original_id: security-tls-keystore
----
-
-## Overview
-
-Apache Pulsar supports [TLS encryption](security-tls-transport.md) and [TLS authentication](security-tls-authentication.md) between clients and the Apache Pulsar service.
-By default, it uses PEM-format file configuration. This page describes how to use [KeyStore](https://en.wikipedia.org/wiki/Java_KeyStore)-type configuration for TLS.
-
-
-## TLS encryption with KeyStore configuration
-
-### Generate TLS key and certificate
-
-The first step of deploying TLS is to generate the key and the certificate for each machine in the cluster.
-You can use Java's `keytool` utility to accomplish this task.
We will initially generate the key into a temporary keystore
for the broker, so that we can export and sign it later with the CA.
-
-```shell
-
-keytool -keystore broker.keystore.jks -alias localhost -validity {validity} -genkeypair -keyalg RSA
-
-```
-
-You need to specify two parameters in the above command:
-
-1. `keystore`: the keystore file that stores the certificate. The *keystore* file contains the private key of
-   the certificate; hence, it needs to be kept safely.
-2. `validity`: the valid time of the certificate in days.
-
-> Ensure that the common name (CN) matches exactly the fully qualified domain name (FQDN) of the server.
-The client compares the CN with the DNS domain name to ensure that it is indeed connecting to the desired server, not a malicious one.
-
-### Creating your own CA
-
-After the first step, each broker in the cluster has a public-private key pair, and a certificate to identify the machine.
-The certificate, however, is unsigned, which means that an attacker can create such a certificate to pretend to be any machine.
-
-Therefore, it is important to prevent forged certificates by signing them for each machine in the cluster.
-A `certificate authority (CA)` is responsible for signing certificates. A CA works like a government that issues passports:
-the government stamps (signs) each passport so that the passport becomes difficult to forge. Other governments verify the stamps
-to ensure the passport is authentic. Similarly, the CA signs the certificates, and the cryptography guarantees that a signed
-certificate is computationally difficult to forge. Thus, as long as the CA is a genuine and trusted authority, the clients have
-high assurance that they are connecting to the authentic machines.
-
-```shell
-
-openssl req -new -x509 -keyout ca-key -out ca-cert -days 365
-
-```
-
-The generated CA is simply a *public-private* key pair and certificate, and it is intended to sign other certificates.
-
-The next step is to add the generated CA to the clients' truststore so that the clients can trust this CA:
-
-```shell
-
-keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert
-
-```
-
-NOTE: If you configure the brokers to require client authentication by setting `tlsRequireTrustedClientCertOnConnect` to `true` in the
-broker configuration, then you must also provide a truststore for the brokers, and it should have all the CA certificates that client keys were signed by.
-
-```shell
-
-keytool -keystore broker.truststore.jks -alias CARoot -import -file ca-cert
-
-```
-
-In contrast to the keystore, which stores each machine's own identity, the truststore of a client stores all the certificates
-that the client should trust. Importing a certificate into one's truststore also means trusting all certificates that are signed
-by that certificate. As in the analogy above, trusting the government (CA) also means trusting all passports (certificates) that
-it has issued. This attribute is called the chain of trust, and it is particularly useful when deploying TLS on a large cluster.
-You can sign all certificates in the cluster with a single CA, and have all machines share the same truststore that trusts the CA.
-That way all machines can authenticate all other machines.
-
-
-### Signing the certificate
-
-The next step is to sign all certificates in the keystore with the CA we generated.
First, you need to export the certificate from the keystore:

```shell

keytool -keystore broker.keystore.jks -alias localhost -certreq -file cert-file

```

Then sign it with the CA:

```shell

openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days {validity} -CAcreateserial -passin pass:{ca-password}

```

Finally, you need to import both the certificate of the CA and the signed certificate into the keystore:

```shell

keytool -keystore broker.keystore.jks -alias CARoot -import -file ca-cert
keytool -keystore broker.keystore.jks -alias localhost -import -file cert-signed

```

The definitions of the parameters are the following:

1. `keystore`: the location of the keystore
2. `ca-cert`: the certificate of the CA
3. `ca-key`: the private key of the CA
4. `ca-password`: the passphrase of the CA
5. `cert-file`: the exported, unsigned certificate of the broker
6. `cert-signed`: the signed certificate of the broker
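
You can confirm the result with `keytool -list` (a quick sanity check; the store password is the one you chose when creating the keystore):

```shell

# Both the CARoot and localhost entries should now appear in the keystore.
keytool -list -keystore broker.keystore.jks

```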

### Configuring brokers

Brokers enable TLS by providing valid `brokerServicePortTls` and `webServicePortTls` values, and by setting `tlsEnabledWithKeyStore` to `true` to use the KeyStore type configuration.
Besides this, the KeyStore path, KeyStore password, TrustStore path, and TrustStore password need to be provided.
Since the broker creates internal client/admin clients to communicate with other brokers, you also need to provide configuration for them, similar to how you configure the outside client/admin client.
If `tlsRequireTrustedClientCertOnConnect` is `true`, the broker rejects the connection if the client certificate is not trusted.

The following TLS configs are needed on the broker side:

```properties

tlsEnabledWithKeyStore=true
# key store
tlsKeyStoreType=JKS
tlsKeyStore=/var/private/tls/broker.keystore.jks
tlsKeyStorePassword=brokerpw

# trust store
tlsTrustStoreType=JKS
tlsTrustStore=/var/private/tls/broker.truststore.jks
tlsTrustStorePassword=brokerpw

# internal client/admin-client config
brokerClientTlsEnabled=true
brokerClientTlsEnabledWithKeyStore=true
brokerClientTlsTrustStoreType=JKS
brokerClientTlsTrustStore=/var/private/tls/client.truststore.jks
brokerClientTlsTrustStorePassword=clientpw

```

NOTE: it is important to restrict access to the store files via filesystem permissions.

Optional settings that may be worth considering:

1. `tlsClientAuthentication=false`: Enable/Disable using TLS for authentication. When enabled, this config authenticates the other end
   of the communication channel. It should be enabled on both brokers and clients for mutual TLS.
2. `tlsCiphers=[TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256]`: A cipher suite is a named combination of authentication, encryption, MAC and key exchange
   algorithms used to negotiate the security settings for a network connection using the TLS network protocol. By default,
   it is null. [OpenSSL Ciphers](https://www.openssl.org/docs/man1.0.2/apps/ciphers.html)
   [JDK Ciphers](http://docs.oracle.com/javase/8/docs/technotes/guides/security/StandardNames.html#ciphersuites)
3. `tlsProtocols=[TLSv1.3,TLSv1.2]`: list out the TLS protocols that you are going to accept from clients.
   By default, it is not set.

### Configuring Clients

This is similar to [TLS encryption configuration for clients with PEM type](security-tls-transport.md#Client configuration).
For a minimal configuration, you need to provide the TrustStore information.

e.g.
1. for [Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-cli-tools#pulsar-admin), [`pulsar-perf`](reference-cli-tools#pulsar-perf), and [`pulsar-client`](reference-cli-tools#pulsar-client), use the `conf/client.conf` config file in a Pulsar installation.

   ```properties
   
   webServiceUrl=https://broker.example.com:8443/
   brokerServiceUrl=pulsar+ssl://broker.example.com:6651/
   useKeyStoreTls=true
   tlsTrustStoreType=JKS
   tlsTrustStorePath=/var/private/tls/client.truststore.jks
   tlsTrustStorePassword=clientpw
   
   ```

1. for the Java client

   ```java
   
   import org.apache.pulsar.client.api.PulsarClient;
   
   PulsarClient client = PulsarClient.builder()
       .serviceUrl("pulsar+ssl://broker.example.com:6651/")
       .enableTls(true)
       .useKeyStoreTls(true)
       .tlsTrustStorePath("/var/private/tls/client.truststore.jks")
       .tlsTrustStorePassword("clientpw")
       .allowTlsInsecureConnection(false)
       .build();
   
   ```

1. for the Java admin client

   ```java
   
   PulsarAdmin admin = PulsarAdmin.builder().serviceHttpUrl("https://broker.example.com:8443")
       .useKeyStoreTls(true)
       .tlsTrustStorePath("/var/private/tls/client.truststore.jks")
       .tlsTrustStorePassword("clientpw")
       .allowTlsInsecureConnection(false)
       .build();
   
   ```

## TLS authentication with KeyStore configuration

This is similar to [TLS authentication with PEM type](security-tls-authentication.md).

### broker authentication config

`broker.conf`

```properties

# Configuration to enable authentication
authenticationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderTls

# this should be the CN of one of the client keystores
superUserRoles=admin

# Enable KeyStore type
tlsEnabledWithKeyStore=true
tlsRequireTrustedClientCertOnConnect=true

# key store
tlsKeyStoreType=JKS
tlsKeyStore=/var/private/tls/broker.keystore.jks
tlsKeyStorePassword=brokerpw

# trust store
tlsTrustStoreType=JKS
tlsTrustStore=/var/private/tls/broker.truststore.jks
tlsTrustStorePassword=brokerpw

# internal client/admin-client config
brokerClientTlsEnabled=true
brokerClientTlsEnabledWithKeyStore=true
brokerClientTlsTrustStoreType=JKS
brokerClientTlsTrustStore=/var/private/tls/client.truststore.jks
brokerClientTlsTrustStorePassword=clientpw
# internal auth config
brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationKeyStoreTls
brokerClientAuthenticationParameters={"keyStoreType":"JKS","keyStorePath":"/var/private/tls/client.keystore.jks","keyStorePassword":"clientpw"}
# currently the WebSocket service does not support the KeyStore type
webSocketServiceEnabled=false

```

### client authentication config

Besides the TLS encryption configuration, the main work is configuring the KeyStore for the client, which contains a valid CN as the client role.

e.g.
1. for [Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-cli-tools#pulsar-admin), [`pulsar-perf`](reference-cli-tools#pulsar-perf), and [`pulsar-client`](reference-cli-tools#pulsar-client), use the `conf/client.conf` config file in a Pulsar installation.

   ```properties
   
   webServiceUrl=https://broker.example.com:8443/
   brokerServiceUrl=pulsar+ssl://broker.example.com:6651/
   useKeyStoreTls=true
   tlsTrustStoreType=JKS
   tlsTrustStorePath=/var/private/tls/client.truststore.jks
   tlsTrustStorePassword=clientpw
   authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationKeyStoreTls
   authParams={"keyStoreType":"JKS","keyStorePath":"/path/to/keystorefile","keyStorePassword":"keystorepw"}
   
   ```

1. for the Java client

   ```java
   
   import org.apache.pulsar.client.api.PulsarClient;
   
   PulsarClient client = PulsarClient.builder()
       .serviceUrl("pulsar+ssl://broker.example.com:6651/")
       .enableTls(true)
       .useKeyStoreTls(true)
       .tlsTrustStorePath("/var/private/tls/client.truststore.jks")
       .tlsTrustStorePassword("clientpw")
       .allowTlsInsecureConnection(false)
       .authentication(
           "org.apache.pulsar.client.impl.auth.AuthenticationKeyStoreTls",
           "keyStoreType:JKS,keyStorePath:/var/private/tls/client.keystore.jks,keyStorePassword:clientpw")
       .build();
   
   ```

1. for the Java admin client

   ```java
   
   PulsarAdmin admin = PulsarAdmin.builder().serviceHttpUrl("https://broker.example.com:8443")
       .useKeyStoreTls(true)
       .tlsTrustStorePath("/var/private/tls/client.truststore.jks")
       .tlsTrustStorePassword("clientpw")
       .allowTlsInsecureConnection(false)
       .authentication(
           "org.apache.pulsar.client.impl.auth.AuthenticationKeyStoreTls",
           "keyStoreType:JKS,keyStorePath:/var/private/tls/client.keystore.jks,keyStorePassword:clientpw")
       .build();
   
   ```

## Enabling TLS Logging

You can enable TLS debug logging at the JVM level by starting the brokers and/or clients with the `javax.net.debug` system property. For example:

```shell

-Djavax.net.debug=all

```

You can find more details on this in the [Oracle documentation](http://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/ReadDebug.html) on [debugging SSL/TLS connections](http://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/ReadDebug.html).
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/security-tls-transport.md b/site2/website/versioned_docs/version-2.8.1-deprecated/security-tls-transport.md
deleted file mode 100644
index 2cad17a78c3507..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/security-tls-transport.md
+++ /dev/null
@@ -1,295 +0,0 @@
---
id: security-tls-transport
title: Transport Encryption using TLS
sidebar_label: "Transport Encryption using TLS"
original_id: security-tls-transport
---

## TLS overview

By default, Apache Pulsar clients communicate with the Apache Pulsar service in plain text. This means that all data is sent in the clear. You can use TLS to encrypt this traffic to protect it from snooping by a man-in-the-middle attacker.

You can also configure TLS for both encryption and authentication. Use this guide to configure just TLS transport encryption and refer to [here](security-tls-authentication.md) for TLS authentication configuration. Alternatively, you can use [another authentication mechanism](security-athenz.md) on top of TLS transport encryption.

> Note that enabling TLS may impact the performance due to encryption overhead.

## TLS concepts

TLS is a form of [public key cryptography](https://en.wikipedia.org/wiki/Public-key_cryptography). Encryption is performed using key pairs consisting of a public key and a private key. The public key encrypts the messages and the private key decrypts the messages.
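
You can see this asymmetry in action with a minimal openssl sketch (the file names `demo.key` and `demo.pub` are placeholders and are not part of any Pulsar setup):

```shell

# Generate a demo RSA key pair; demo.key holds the private key,
# demo.pub holds the public half.
openssl genrsa -out demo.key 2048
openssl rsa -in demo.key -pubout -out demo.pub

# Anyone holding the public key can encrypt a message...
echo 'hello pulsar' | openssl pkeyutl -encrypt -pubin -inkey demo.pub -out msg.enc

# ...but only the private key holder can decrypt it.
openssl pkeyutl -decrypt -inkey demo.key -in msg.enc

```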

To use TLS transport encryption, you need two kinds of key pairs, **server key pairs** and a **certificate authority**.

You can use a third kind of key pair, **client key pairs**, for [client authentication](security-tls-authentication.md).

You should store the **certificate authority** private key in a very secure location (a fully encrypted, disconnected, air gapped computer). As for the certificate authority public key, the **trust cert**, you can freely share it.

For both client and server key pairs, the administrator first generates a private key and a certificate request, then uses the certificate authority private key to sign the certificate request, and finally generates a certificate. This certificate is the public key for the server/client key pair.

For TLS transport encryption, the clients can use the **trust cert** to verify that the server has a key pair that the certificate authority signed when the clients are talking to the server. A man-in-the-middle attacker does not have access to the certificate authority, so they cannot create a server with such a key pair.

For TLS authentication, the server uses the **trust cert** to verify that the client has a key pair that the certificate authority signed. The common name of the **client cert** is then used as the client's role token (see [Overview](security-overview.md)).

`Bouncy Castle Provider` provides cipher suites and algorithms in Pulsar. If you need the [FIPS](https://www.bouncycastle.org/fips_faq.html) version of `Bouncy Castle Provider`, please refer to the [Bouncy Castle page](security-bouncy-castle.md).

## Create TLS certificates

Creating TLS certificates for Pulsar involves creating a [certificate authority](#certificate-authority) (CA), [server certificate](#server-certificate), and [client certificate](#client-certificate).

Follow the guide below to set up a certificate authority. You can also refer to plenty of resources on the internet for more details. We recommend [this guide](https://jamielinux.com/docs/openssl-certificate-authority/index.html) for your detailed reference.

### Certificate authority

1. Create the certificate for the CA. You can use the CA to sign both the broker and client certificates. This ensures that each party will trust the others. You should store the CA in a very secure location (ideally completely disconnected from networks, air gapped, and fully encrypted).

2. Enter the following command to create a directory for your CA, and place [this openssl configuration file](https://github.com/apache/pulsar/tree/master/site2/website/static/examples/openssl.cnf) in the directory. You may want to modify the default answers for company name and department in the configuration file. Export the location of the CA directory to the environment variable, CA_HOME. The configuration file uses this environment variable to find the rest of the files and directories that the CA needs.

```bash

mkdir my-ca
cd my-ca
wget https://raw.githubusercontent.com/apache/pulsar-site/main/site2/website/static/examples/openssl.cnf
export CA_HOME=$(pwd)

```

3. Enter the commands below to create the necessary directories, keys and certs.

```bash

mkdir certs crl newcerts private
chmod 700 private/
touch index.txt
echo 1000 > serial
openssl genrsa -aes256 -out private/ca.key.pem 4096
chmod 400 private/ca.key.pem
openssl req -config openssl.cnf -key private/ca.key.pem \
    -new -x509 -days 7300 -sha256 -extensions v3_ca \
    -out certs/ca.cert.pem
chmod 444 certs/ca.cert.pem

```

4. After you answer the question prompts, CA-related files are stored in the `./my-ca` directory. Within that directory:

* `certs/ca.cert.pem` is the public certificate. This public certificate is meant to be distributed to all parties involved.
* `private/ca.key.pem` is the private key. You only need it when you are signing a new certificate for either brokers or clients, and you must safely guard this private key.

### Server certificate

Once you have created a CA certificate, you can create certificate requests and sign them with the CA.

The following commands ask you a few questions and then create the certificates. When you are asked for the common name, you should match the hostname of the broker. You can also use a wildcard to match a group of broker hostnames, for example, `*.broker.usw.example.com`. This ensures that multiple machines can reuse the same certificate.

:::tip

Sometimes matching the hostname is not possible or makes no sense,
such as when you create the brokers with random hostnames, or you
plan to connect to the hosts via their IP. In these cases, you
should configure the client to disable TLS hostname verification. For more
details, you can see [the host verification section in client configuration](#hostname-verification).

:::

1. Enter the command below to generate the key.

```bash

openssl genrsa -out broker.key.pem 2048

```

The broker expects the key to be in [PKCS 8](https://en.wikipedia.org/wiki/PKCS_8) format, so enter the following command to convert it.

```bash

openssl pkcs8 -topk8 -inform PEM -outform PEM \
    -in broker.key.pem -out broker.key-pk8.pem -nocrypt

```

2. Enter the following command to generate the certificate request.

```bash

openssl req -config openssl.cnf \
    -key broker.key.pem -new -sha256 -out broker.csr.pem

```

3. Sign it with the certificate authority by entering the command below.

```bash

openssl ca -config openssl.cnf -extensions server_cert \
    -days 1000 -notext -md sha256 \
    -in broker.csr.pem -out broker.cert.pem

```

At this point, you have a cert, `broker.cert.pem`, and a key, `broker.key-pk8.pem`, which you can use along with `ca.cert.pem` to configure TLS transport encryption for your broker and proxy nodes.
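
Before wiring these files into Pulsar, you can optionally confirm that the signed broker certificate verifies against your CA (a quick check using the file names from the steps above):

```shell

# Should print: broker.cert.pem: OK
openssl verify -CAfile certs/ca.cert.pem broker.cert.pem

```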

## Configure broker

To configure a Pulsar [broker](reference-terminology.md#broker) to use TLS transport encryption, you need to make some changes to `broker.conf`, which is located in the `conf` directory of your [Pulsar installation](getting-started-standalone.md).

Add these values to the configuration file (substituting the appropriate certificate paths where necessary):

```properties

tlsEnabled=true
tlsRequireTrustedClientCertOnConnect=true
tlsCertificateFilePath=/path/to/broker.cert.pem
tlsKeyFilePath=/path/to/broker.key-pk8.pem
tlsTrustCertsFilePath=/path/to/ca.cert.pem

```

> You can find a full list of parameters available in the `conf/broker.conf` file,
> as well as the default values for those parameters, in [Broker Configuration](reference-configuration.md#broker).

### TLS Protocol Version and Cipher

You can configure the broker (and proxy) to require specific TLS protocol versions and ciphers for TLS negotiation. You can use the TLS protocol versions and ciphers to stop clients from requesting downgraded TLS protocol versions or ciphers that may have weaknesses.

Both the TLS protocol versions and cipher properties can take multiple values, separated by commas. The possible values for protocol version and ciphers depend on the TLS provider that you are using. Pulsar uses OpenSSL if it is available, and otherwise defaults back to the JDK implementation.

```properties

tlsProtocols=TLSv1.3,TLSv1.2
tlsCiphers=TLS_DH_RSA_WITH_AES_256_GCM_SHA384,TLS_DH_RSA_WITH_AES_256_CBC_SHA

```

OpenSSL currently supports ```TLSv1.1```, ```TLSv1.2``` and ```TLSv1.3``` for the protocol version. You can acquire a list of supported ciphers from the openssl ciphers command, e.g. ```openssl ciphers -tls1_3```.

For JDK 11, you can obtain a list of supported values from the documentation:
- [TLS protocol](https://docs.oracle.com/en/java/javase/11/security/oracle-providers.html#GUID-7093246A-31A3-4304-AC5F-5FB6400405E2__SUNJSSEPROVIDERPROTOCOLPARAMETERS-BBF75009)
- [Ciphers](https://docs.oracle.com/en/java/javase/11/security/oracle-providers.html#GUID-7093246A-31A3-4304-AC5F-5FB6400405E2__SUNJSSE_CIPHER_SUITES)

## Proxy Configuration

Proxies need to configure TLS in two directions, for clients connecting to the proxy, and for the proxy connecting to brokers.

```properties

# For clients connecting to the proxy
tlsEnabledInProxy=true
tlsCertificateFilePath=/path/to/broker.cert.pem
tlsKeyFilePath=/path/to/broker.key-pk8.pem
tlsTrustCertsFilePath=/path/to/ca.cert.pem

# For the proxy to connect to brokers
tlsEnabledWithBroker=true
brokerClientTrustCertsFilePath=/path/to/ca.cert.pem

```

## Client configuration

When you enable the TLS transport encryption, you need to configure the client to use ```https://``` and port 8443 for the web service URL, and ```pulsar+ssl://``` and port 6651 for the broker service URL.

As the server certificate that you generated above does not belong to any of the default trust chains, you also need to either specify the path to the **trust cert** (recommended), or tell the client to allow untrusted server certs.

### Hostname verification

Hostname verification is a TLS security feature whereby a client can refuse to connect to a server if the "CommonName" does not match the hostname to which the client is connecting. By default, Pulsar clients disable hostname verification, as it requires that each broker has a DNS record and a unique cert.

Moreover, as the administrator has full control of the certificate authority, a bad actor is unlikely to be able to pull off a man-in-the-middle attack. "allowInsecureConnection" allows the client to connect to servers whose cert has not been signed by an approved CA.
The client disables "allowInsecureConnection" by default, and you should always disable "allowInsecureConnection" in production environments. As long as you disable "allowInsecureConnection", a man-in-the-middle attack requires that the attacker has access to the CA.

One scenario where you may want to enable hostname verification is where you have multiple proxy nodes behind a VIP, and the VIP has a DNS record, for example, pulsar.mycompany.com. In this case, you can generate a TLS cert with pulsar.mycompany.com as the "CommonName," and then enable hostname verification on the client.

The examples below show how hostname verification, which is disabled by default, is configured for the CLI tools and the Java/Python/C++/Node.js/C# clients.

### CLI tools

[Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-cli-tools.md#pulsar-admin), [`pulsar-perf`](reference-cli-tools.md#pulsar-perf), and [`pulsar-client`](reference-cli-tools.md#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation.

You need to add the following parameters to that file to use TLS transport with the CLI tools of Pulsar:

```properties

webServiceUrl=https://broker.example.com:8443/
brokerServiceUrl=pulsar+ssl://broker.example.com:6651/
useTls=true
tlsAllowInsecureConnection=false
tlsTrustCertsFilePath=/path/to/ca.cert.pem
tlsEnableHostnameVerification=false

```

#### Java client

```java

import org.apache.pulsar.client.api.PulsarClient;

PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar+ssl://broker.example.com:6651/")
    .enableTls(true)
    .tlsTrustCertsFilePath("/path/to/ca.cert.pem")
    .enableTlsHostnameVerification(false) // false by default, in any case
    .allowTlsInsecureConnection(false) // false by default, in any case
    .build();

```

#### Python client

```python

from pulsar import Client

client = Client("pulsar+ssl://broker.example.com:6651/",
                tls_hostname_verification=False,
                tls_trust_certs_file_path="/path/to/ca.cert.pem",
                tls_allow_insecure_connection=False)  # defaults to false from v2.2.0 onwards

```

#### C++ client

```c++

#include <pulsar/Client.h>

ClientConfiguration config = ClientConfiguration();
config.setUseTls(true);  // shouldn't be needed soon
config.setTlsTrustCertsFilePath(caPath);
config.setTlsAllowInsecureConnection(false);
config.setAuth(pulsar::AuthTls::create(clientPublicKeyPath, clientPrivateKeyPath));
config.setValidateHostName(false);

```

#### Node.js client

```JavaScript

const Pulsar = require('pulsar-client');

(async () => {
  const client = new Pulsar.Client({
    serviceUrl: 'pulsar+ssl://broker.example.com:6651/',
    tlsTrustCertsFilePath: '/path/to/ca.cert.pem',
    useTls: true,
    tlsValidateHostname: false,
    tlsAllowInsecureConnection: false,
  });
})();

```

#### C# client

```c#

var certificate = new X509Certificate2("ca.cert.pem");
var client = PulsarClient.Builder()
    .TrustedCertificateAuthority(certificate) //If the CA is not trusted on the host, you can add it explicitly.
    .VerifyCertificateAuthority(true) //Default is 'true'
    .VerifyCertificateName(false) //Default is 'false'
    .Build();

```

> Note that `VerifyCertificateName` refers to the configuration of hostname verification in the C# client.
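
Independently of any client library, you can sanity-check a broker's TLS listener with `openssl s_client` (a quick sketch; the hostname and CA path follow the examples above):

```shell

# Connect to the broker's TLS port and verify its certificate chain against
# the CA cert; look for "Verify return code: 0 (ok)" in the output.
openssl s_client -connect broker.example.com:6651 -CAfile /path/to/ca.cert.pem </dev/null

```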
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/security-token-admin.md b/site2/website/versioned_docs/version-2.8.1-deprecated/security-token-admin.md
deleted file mode 100644
index a265f6320d28fe..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/security-token-admin.md
+++ /dev/null
@@ -1,183 +0,0 @@
---
id: security-token-admin
title: Token authentication admin
sidebar_label: "Token authentication admin"
original_id: security-token-admin
---

## Token Authentication Overview

Pulsar supports authenticating clients using security tokens that are based on [JSON Web Tokens](https://jwt.io/introduction/) ([RFC-7519](https://tools.ietf.org/html/rfc7519)).

Tokens are used to identify a Pulsar client and associate it with some "principal" (or "role"), which
will then be granted permissions to perform some actions (e.g., publish to or consume from a topic).

A user is typically given a token string by an administrator (or some automated service).

The compact representation of a signed JWT is a string that looks like:

```

 eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY

```

The application specifies the token when creating the client instance. An alternative is to pass
a "token supplier", that is to say a function that returns the token when the client library
needs one.

> #### Always use TLS transport encryption
> Sending a token is equivalent to sending a password over the wire. It is strongly recommended to
> always use TLS encryption when talking to the Pulsar service. See
> [Transport Encryption using TLS](security-tls-transport.md)

## Secret vs Public/Private keys

JWT supports two different kinds of keys in order to generate and validate the tokens:

 * Symmetric:
    - There is a single ***Secret*** key that is used both to generate and validate tokens.
 * Asymmetric: there is a pair of keys.
    - The ***Private*** key is used to generate tokens.
    - The ***Public*** key is used to validate tokens.

### Secret key

When using a secret key, the administrator creates the key and uses it to generate the client tokens. This key is also configured on the
brokers to allow them to validate the clients.

#### Creating a secret key

> The output file is generated in the root of your Pulsar installation directory. You can also provide an absolute path for the output file.

```shell

$ bin/pulsar tokens create-secret-key --output my-secret.key

```

To generate a base64-encoded secret key:

```shell

$ bin/pulsar tokens create-secret-key --output /opt/my-secret.key --base64

```

### Public/Private keys

With public/private keys, we need to create a pair of keys. Pulsar supports all algorithms supported by the Java JWT library shown [here](https://github.com/jwtk/jjwt#signature-algorithms-keys).

#### Creating a key pair

> The output file is generated in the root of your Pulsar installation directory. You can also provide an absolute path for the output file.

```shell

$ bin/pulsar tokens create-key-pair --output-private-key my-private.key --output-public-key my-public.key

```

 * `my-private.key` is stored in a safe location and only used by the administrator to generate
   new tokens.
 * `my-public.key` is distributed to all Pulsar brokers. This file can be publicly shared without
   any security concern.

## Generating tokens

A token is the credential associated with a user. The association is done through the "principal",
or "role". In the case of JWTs, this field is typically referred to as the **subject**, though
it is exactly the same concept.

The generated token is then required to have a **subject** field set.

```shell

$ bin/pulsar tokens create --secret-key file:///path/to/my-secret.key \
            --subject test-user

```

This command prints the token string on stdout.

Similarly, one can create a token by passing the "private" key:

```shell

$ bin/pulsar tokens create --private-key file:///path/to/my-private.key \
            --subject test-user

```

Finally, a token can also be created with a pre-defined TTL. After that time,
the token is automatically invalidated.

```shell

$ bin/pulsar tokens create --secret-key file:///path/to/my-secret.key \
            --subject test-user \
            --expiry-time 1y

```
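
A client can then present this token through the token authentication plugin. For instance, here is a quick sketch with the `pulsar-client` tool (the service URL and topic name are placeholders, and the token value is abbreviated):

```shell

$ bin/pulsar-client \
    --url pulsar://broker.example.com:6650 \
    --auth-plugin org.apache.pulsar.client.impl.auth.AuthenticationToken \
    --auth-params "token:eyJhbGciOiJIUzI1NiJ9..." \
    produce my-topic --messages "hello-pulsar"

```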

## Authorization

The token itself does not have any permissions associated with it. Permissions are determined by the
authorization engine. Once the token is created, you can grant permissions for this role to perform certain
actions. For example:

```shell

$ bin/pulsar-admin namespaces grant-permission my-tenant/my-namespace \
            --role test-user \
            --actions produce,consume

```

## Enabling Token Authentication ...

### ... on Brokers

To configure brokers to authenticate clients, put the following in `broker.conf`:

```properties

# Configuration to enable authentication and authorization
authenticationEnabled=true
authorizationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken

# If using secret key (Note: key files must be DER-encoded)
tokenSecretKey=file:///path/to/secret.key
# The key can also be passed inline:
# tokenSecretKey=data:;base64,FLFyW0oLJ2Fi22KKCm21J18mbAdztfSHN/lAT5ucEKU=

# If using public/private (Note: key files must be DER-encoded)
# tokenPublicKey=file:///path/to/public.key

```

### ... on Proxies

The proxy uses its own token when talking to brokers. The role associated with this
token should be configured in the ``proxyRoles`` setting of the brokers. See the [authorization guide](security-authorization.md) for more details.

To configure proxies to authenticate clients, put the following in `proxy.conf`:

```properties

# For clients connecting to the proxy
authenticationEnabled=true
authorizationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken
tokenSecretKey=file:///path/to/secret.key

# For the proxy to connect to brokers
brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken
brokerClientAuthenticationParameters={"token":"eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0LXVzZXIifQ.9OHgE9ZUDeBTZs7nSMEFIuGNEX18FLR3qvy8mqxSxXw"}
# Or, alternatively, read token from file
# brokerClientAuthenticationParameters=file:///path/to/proxy-token.txt

```

diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/sql-deployment-configurations.md b/site2/website/versioned_docs/version-2.8.1-deprecated/sql-deployment-configurations.md
deleted file mode 100644
index 02d8bc78f6cb9d..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/sql-deployment-configurations.md
+++ /dev/null
@@ -1,199 +0,0 @@
---
id: sql-deployment-configurations
title: Pulsar SQL configuration and deployment
sidebar_label: "Configuration and deployment"
original_id: sql-deployment-configurations
---

You can configure the Presto Pulsar connector and deploy a cluster with the following instructions.

## Configure Presto Pulsar Connector
You can configure the Presto Pulsar Connector in the `${project.root}/conf/presto/catalog/pulsar.properties` properties file. The configuration for the connector and the default values are as follows.

```properties

# name of the connector to be displayed in the catalog
connector.name=pulsar

# the url of Pulsar broker service
pulsar.web-service-url=http://localhost:8080

# URI of Zookeeper cluster
pulsar.zookeeper-uri=localhost:2181

# minimum number of entries to read at a single time
pulsar.entry-read-batch-size=100

# default number of splits to use per query
pulsar.target-num-splits=4

```

You can connect Presto to a Pulsar cluster with multiple hosts. To configure multiple hosts for brokers, add multiple URLs to `pulsar.web-service-url`. To configure multiple hosts for ZooKeeper, add multiple URIs to `pulsar.zookeeper-uri`. The following is an example.

```

pulsar.web-service-url=http://localhost:8080,localhost:8081,localhost:8082
pulsar.zookeeper-uri=localhost1,localhost2:2181

```

**Note: by default, Pulsar SQL does not get the last message in a topic**. This is by design and controlled by settings. By default, the BookKeeper LAC only advances when subsequent entries are added. If there is no subsequent entry added, the last written entry is not visible to readers until the ledger is closed. This is not a problem for Pulsar, which uses managed ledgers, but Pulsar SQL reads directly from the BookKeeper ledger.

If you want to get the last message in a topic, set the following configurations:

1. For the broker configuration, set `bookkeeperExplicitLacIntervalInMills` > 0 in `broker.conf` or `standalone.conf`.

2. For the Presto configuration, set `pulsar.bookkeeper-explicit-interval` > 0 and `pulsar.bookkeeper-use-v2-protocol=false`.

However, using the BookKeeper V3 protocol introduces additional GC overhead to BookKeeper as it uses Protobuf.

## Query data from existing Presto clusters

If you already have a Presto cluster, you can copy the Presto Pulsar connector plugin to your existing cluster. Download the archived plugin package with the following command.

```bash

$ wget pulsar:binary_release_url

```
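
After downloading, unpack the archive and copy the Pulsar plugin into the plugin directory on every node of your existing cluster. The paths below are assumptions for illustration only; check the actual layout of the archive you downloaded and of your Presto installation before copying:

```bash

# Hypothetical paths; adjust them to the real archive layout and your Presto install.
$ tar xvfz apache-pulsar-@pulsar:version@-bin.tar.gz
$ cp -r apache-pulsar-@pulsar:version@/lib/presto/plugin/pulsar ${PRESTO_HOME}/plugin/pulsar

```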

## Deploy a new cluster

Since Pulsar SQL is powered by [Trino (formerly Presto SQL)](https://trino.io), the deployment configuration for the Pulsar SQL worker is the same as Presto's.

:::note

For how to set up a standalone single node environment, refer to [Query data](sql-getting-started.md).

:::

You can use the same CLI args as the Presto launcher.

```bash

$ ./bin/pulsar sql-worker --help
Usage: launcher [options] command

Commands: run, start, stop, restart, kill, status

Options:
  -h, --help            show this help message and exit
  -v, --verbose         Run verbosely
  --etc-dir=DIR         Defaults to INSTALL_PATH/etc
  --launcher-config=FILE
                        Defaults to INSTALL_PATH/bin/launcher.properties
  --node-config=FILE    Defaults to ETC_DIR/node.properties
  --jvm-config=FILE     Defaults to ETC_DIR/jvm.config
  --config=FILE         Defaults to ETC_DIR/config.properties
  --log-levels-file=FILE
                        Defaults to ETC_DIR/log.properties
  --data-dir=DIR        Defaults to INSTALL_PATH
  --pid-file=FILE       Defaults to DATA_DIR/var/run/launcher.pid
  --launcher-log-file=FILE
                        Defaults to DATA_DIR/var/log/launcher.log (only in
                        daemon mode)
  --server-log-file=FILE
                        Defaults to DATA_DIR/var/log/server.log (only in
                        daemon mode)
  -D NAME=VALUE         Set a Java system property

```

The default configuration for the cluster is located in `${project.root}/conf/presto`. You can customize your deployment by modifying the default configuration.

You can set the worker to read from a different configuration directory, or set a different directory to write data.

```bash

$ ./bin/pulsar sql-worker run --etc-dir /tmp/incubator-pulsar/conf/presto --data-dir /tmp/presto-1

```

You can also start the worker as a daemon process.

```bash

$ ./bin/pulsar sql-worker start

```

### Deploy a cluster on multiple nodes

You can deploy a Pulsar SQL cluster or Presto cluster on multiple nodes. The following example shows how to deploy a three-node cluster.

1. Copy the Pulsar binary distribution to three nodes.

The first node runs as the Presto coordinator. The minimal configuration requirement in the `${project.root}/conf/presto/config.properties` file is as follows.

```properties

coordinator=true
node-scheduler.include-coordinator=true
http-server.http.port=8080
query.max-memory=50GB
query.max-memory-per-node=1GB
discovery-server.enabled=true
discovery.uri=

```

The other two nodes serve as worker nodes. You can use the following configuration for the worker nodes.

```properties

coordinator=false
http-server.http.port=8080
query.max-memory=50GB
query.max-memory-per-node=1GB
discovery.uri=

```

2. Modify the `pulsar.web-service-url` and `pulsar.zookeeper-uri` configuration in the `${project.root}/conf/presto/catalog/pulsar.properties` file accordingly for the three nodes.

3. Start the coordinator node.

```

$ ./bin/pulsar sql-worker run

```

4. Start the worker nodes.

```

$ ./bin/pulsar sql-worker run

```

5. Start the SQL CLI and check the status of your cluster.

```bash

$ ./bin/pulsar sql --server 

```

6. Check the status of your nodes.

```bash

presto> SELECT * FROM system.runtime.nodes;
 node_id |        http_uri         | node_version | coordinator | state
---------+-------------------------+--------------+-------------+--------
 1       | http://192.168.2.1:8081 | testversion  | true        | active
 3       | http://192.168.2.2:8081 | testversion  | false       | active
 2       | http://192.168.2.3:8081 | testversion  | false       | active

```

For more information about deployment in Presto, refer to [Presto deployment](https://trino.io/docs/current/installation/deployment.html).

:::note

The broker does not advance the LAC, so when Pulsar SQL bypasses the broker to query data, it can only read entries up to the LAC that all the bookies have learned.
You can enable periodic LAC writes on the broker by setting `bookkeeperExplicitLacIntervalInMills` in `broker.conf`.

:::

diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/sql-getting-started.md b/site2/website/versioned_docs/version-2.8.1-deprecated/sql-getting-started.md
deleted file mode 100644
index 8a5cd7199b365a..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/sql-getting-started.md
+++ /dev/null
@@ -1,187 +0,0 @@
---
id: sql-getting-started
title: Query data with Pulsar SQL
sidebar_label: "Query data"
original_id: sql-getting-started
---

Before querying data in Pulsar, you need to install Pulsar and the built-in connectors.

## Requirements
1. Install [Pulsar](getting-started-standalone.md#install-pulsar-standalone).
2. Install Pulsar [built-in connectors](getting-started-standalone.md#install-builtin-connectors-optional).

## Query data in Pulsar
To query data in Pulsar with Pulsar SQL, complete the following steps.

1. Start a Pulsar standalone cluster.

```bash

./bin/pulsar standalone

```

2. Start a Pulsar SQL worker.

```bash

./bin/pulsar sql-worker run

```

3. After initializing the Pulsar standalone cluster and the SQL worker, run the SQL CLI.

```bash

./bin/pulsar sql

```

4. Test with SQL commands.

```bash

presto> show catalogs;
 Catalog
---------
 pulsar
 system
(2 rows)

Query 20180829_211752_00004_7qpwh, FINISHED, 1 node
Splits: 19 total, 19 done (100.00%)
0:00 [0 rows, 0B] [0 rows/s, 0B/s]


presto> show schemas in pulsar;
        Schema
-----------------------
 information_schema
 public/default
 public/functions
 sample/standalone/ns1
(4 rows)

Query 20180829_211818_00005_7qpwh, FINISHED, 1 node
Splits: 19 total, 19 done (100.00%)
0:00 [4 rows, 89B] [21 rows/s, 471B/s]


presto> show tables in pulsar."public/default";
 Table
-------
(0 rows)

Query 20180829_211839_00006_7qpwh, FINISHED, 1 node
Splits: 19 total, 19 done (100.00%)
0:00 [0 rows, 0B] [0 rows/s, 0B/s]

```

Since there is no data in Pulsar, no records are returned.

5. Start the built-in connector _DataGeneratorSource_ and ingest some mock data.

```bash

./bin/pulsar-admin sources create --name generator --destinationTopicName generator_test --source-type data-generator

```

Then you can query a topic in the namespace "public/default".

```bash

presto> show tables in pulsar."public/default";
     Table
----------------
 generator_test
(1 row)

Query 20180829_213202_00000_csyeu, FINISHED, 1 node
Splits: 19 total, 19 done (100.00%)
0:02 [1 rows, 38B] [0 rows/s, 17B/s]

```

You can now query the data within the topic "generator_test".

```bash

presto> select * from pulsar."public/default".generator_test;

 firstname | middlename | lastname | email | username | password | telephonenumber | age | companyemail | nationalidentitycardnumber |
-------------+-------------+-------------+----------------------------------+--------------+----------+-----------------+-----+-----------------------------------------------+----------------------------+
 Genesis | Katherine | Wiley | genesis.wiley@gmail.com | genesisw | y9D2dtU3 | 959-197-1860 | 71 | genesis.wiley@interdemconsulting.eu | 880-58-9247 |
 Brayden | | Stanton | brayden.stanton@yahoo.com | braydens | ZnjmhXik | 220-027-867 | 81 | brayden.stanton@supermemo.eu | 604-60-7069 |
 Benjamin | Julian | Velasquez | benjamin.velasquez@yahoo.com | benjaminv | 8Bc7m3eb | 298-377-0062 | 21 | benjamin.velasquez@hostesltd.biz | 213-32-5882 |
 Michael | Thomas | Donovan | donovan@mail.com | michaeld | OqBm9MLs | 078-134-4685 | 55 | michael.donovan@memortech.eu | 443-30-3442 |
 Brooklyn | Avery | Roach | brooklynroach@yahoo.com | broach | IxtBLafO | 387-786-2998 | 68 | brooklyn.roach@warst.biz | 085-88-3973 |
 Skylar | | Bradshaw | skylarbradshaw@yahoo.com | skylarb | p6eC6cKy | 210-872-608 | 96 | skylar.bradshaw@flyhigh.eu | 453-46-0334 |
.
.
.

```

You can query the mock data.

## Query your own data
If you want to query your own data, you need to ingest your own data first. You can write a simple producer and write custom-defined data to Pulsar. The following is an example.

```java

import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.impl.schema.AvroSchema;

public class TestProducer {

    public static class Foo {
        private int field1 = 1;
        private String field2;
        private long field3;

        public Foo() {
        }

        public int getField1() {
            return field1;
        }

        public void setField1(int field1) {
            this.field1 = field1;
        }

        public String getField2() {
            return field2;
        }

        public void setField2(String field2) {
            this.field2 = field2;
        }

        public long getField3() {
            return field3;
        }

        public void setField3(long field3) {
            this.field3 = field3;
        }
    }

    public static void main(String[] args) throws Exception {
        PulsarClient pulsarClient = PulsarClient.builder().serviceUrl("pulsar://localhost:6650").build();
        Producer<Foo> producer = pulsarClient.newProducer(AvroSchema.of(Foo.class)).topic("test_topic").create();

        for (int i = 0; i < 1000; i++) {
            Foo foo = new Foo();
            foo.setField1(i);
            foo.setField2("foo" + i);
            foo.setField3(System.currentTimeMillis());
            producer.newMessage().value(foo).send();
        }
        producer.close();
        pulsarClient.close();
    }
}

```

diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/sql-overview.md b/site2/website/versioned_docs/version-2.8.1-deprecated/sql-overview.md
deleted file mode 100644
index 0235d0256c5f19..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/sql-overview.md
+++ /dev/null
@@ -1,18 +0,0 @@
---
id: sql-overview
title: Pulsar SQL Overview
sidebar_label: "Overview"
original_id: sql-overview
---

Apache Pulsar is used to store streams of event data, and the event data is structured with predefined fields. With the implementation of the [Schema Registry](schema-get-started.md), you can store structured data in Pulsar and query the data by using [Trino (formerly Presto SQL)](https://trino.io/).

As the core of Pulsar SQL, the Presto Pulsar connector enables Presto workers within a Presto cluster to query data from Pulsar.

![The Pulsar consumer and reader interfaces](/assets/pulsar-sql-arch-2.png)

The query performance is efficient and highly scalable because Pulsar adopts a [two-level segment-based architecture](concepts-architecture-overview.md#apache-bookkeeper).

Topics in Pulsar are stored as segments in [Apache BookKeeper](https://bookkeeper.apache.org/). Each topic segment is replicated to some BookKeeper nodes, which enables concurrent reads and high read throughput. You can configure the number of BookKeeper nodes, and the default number is `3`. In the Presto Pulsar connector, data is read directly from BookKeeper, so Presto workers can read concurrently from a horizontally scalable number of BookKeeper nodes.

![The Pulsar consumer and reader interfaces](/assets/pulsar-sql-arch-1.png)
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/sql-rest-api.md b/site2/website/versioned_docs/version-2.8.1-deprecated/sql-rest-api.md
deleted file mode 100644
index c92fd62f7d8703..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/sql-rest-api.md
+++ /dev/null
@@ -1,192 +0,0 @@
---
id: sql-rest-api
title: Pulsar SQL REST APIs
sidebar_label: "REST APIs"
original_id: sql-rest-api
---

This section lists resources that make up version v1 of the Presto REST API.

## Request for Presto services

All requests for Presto services should use version v1 of the Presto REST API.

To request services, use the explicit URL `http://presto.service:8081/v1`. Replace `presto.service:8081` with your real Presto address before sending requests.

`POST` requests require the `X-Presto-User` header. If you use authentication, you must use the same `username` that is specified in the authentication configuration. If you do not use authentication, you can specify anything for `username`.

```properties

X-Presto-User: username

```

For more information about headers, refer to [PrestoHeaders](https://github.com/trinodb/trino).

## Schema

You submit a statement in the HTTP body. All data is received as a JSON document that might contain a `nextUri` link. If the received JSON document contains a `nextUri` link, the request continues with the `nextUri` link until the received data no longer contains a `nextUri` link. If no error is returned, the query completes successfully. If an `error` field appears in `stats`, the query fails.

The following is an example of `show catalogs`. The query continues until the received JSON document does not contain a `nextUri` link. Since no `error` is displayed in `stats`, it means that the query completes successfully.
- -```powershell - -➜ ~ curl --header "X-Presto-User: test-user" --request POST --data 'show catalogs' http://localhost:8081/v1/statement -{ - "infoUri" : "http://localhost:8081/ui/query.html?20191113_033653_00006_dg6hb", - "stats" : { - "queued" : true, - "nodes" : 0, - "userTimeMillis" : 0, - "cpuTimeMillis" : 0, - "wallTimeMillis" : 0, - "processedBytes" : 0, - "processedRows" : 0, - "runningSplits" : 0, - "queuedTimeMillis" : 0, - "queuedSplits" : 0, - "completedSplits" : 0, - "totalSplits" : 0, - "scheduled" : false, - "peakMemoryBytes" : 0, - "state" : "QUEUED", - "elapsedTimeMillis" : 0 - }, - "id" : "20191113_033653_00006_dg6hb", - "nextUri" : "http://localhost:8081/v1/statement/20191113_033653_00006_dg6hb/1" -} - -➜ ~ curl http://localhost:8081/v1/statement/20191113_033653_00006_dg6hb/1 -{ - "infoUri" : "http://localhost:8081/ui/query.html?20191113_033653_00006_dg6hb", - "nextUri" : "http://localhost:8081/v1/statement/20191113_033653_00006_dg6hb/2", - "id" : "20191113_033653_00006_dg6hb", - "stats" : { - "state" : "PLANNING", - "totalSplits" : 0, - "queued" : false, - "userTimeMillis" : 0, - "completedSplits" : 0, - "scheduled" : false, - "wallTimeMillis" : 0, - "runningSplits" : 0, - "queuedSplits" : 0, - "cpuTimeMillis" : 0, - "processedRows" : 0, - "processedBytes" : 0, - "nodes" : 0, - "queuedTimeMillis" : 1, - "elapsedTimeMillis" : 2, - "peakMemoryBytes" : 0 - } -} - -➜ ~ curl http://localhost:8081/v1/statement/20191113_033653_00006_dg6hb/2 -{ - "id" : "20191113_033653_00006_dg6hb", - "data" : [ - [ - "pulsar" - ], - [ - "system" - ] - ], - "infoUri" : "http://localhost:8081/ui/query.html?20191113_033653_00006_dg6hb", - "columns" : [ - { - "typeSignature" : { - "rawType" : "varchar", - "arguments" : [ - { - "kind" : "LONG_LITERAL", - "value" : 6 - } - ], - "literalArguments" : [], - "typeArguments" : [] - }, - "name" : "Catalog", - "type" : "varchar(6)" - } - ], - "stats" : { - "wallTimeMillis" : 104, - "scheduled" : true, - "userTimeMillis" : 14, - "progressPercentage" : 100, - "totalSplits" : 19, - "nodes" : 1, - "cpuTimeMillis" : 16, - "queued" : false, - "queuedTimeMillis" : 1, - "state" : "FINISHED", - "peakMemoryBytes" : 0, - "elapsedTimeMillis" : 111, - "processedBytes" : 0, - "processedRows" : 0, - "queuedSplits" : 0, - "rootStage" : { - "cpuTimeMillis" : 1, - "runningSplits" : 0, - "state" : "FINISHED", - "completedSplits" : 1, - "subStages" : [ - { - "cpuTimeMillis" : 14, - "runningSplits" : 0, - "state" : "FINISHED", - "completedSplits" : 17, - "subStages" : [ - { - "wallTimeMillis" : 7, - "subStages" : [], - "stageId" : "2", - "done" : true, - "nodes" : 1, - "totalSplits" : 1, - "processedBytes" : 22, - "processedRows" : 2, - "queuedSplits" : 0, - "userTimeMillis" : 1, - "cpuTimeMillis" : 1, - "runningSplits" : 0, - "state" : "FINISHED", - "completedSplits" : 1 - } - ], - "wallTimeMillis" : 92, - "nodes" : 1, - "done" : true, - "stageId" : "1", - "userTimeMillis" : 12, - "processedRows" : 2, - "processedBytes" : 51, - "queuedSplits" : 0, - "totalSplits" : 17 - } - ], - "wallTimeMillis" : 5, - "done" : true, - "nodes" : 1, - "stageId" : "0", - "userTimeMillis" : 1, - "processedRows" : 2, - "processedBytes" : 22, - "totalSplits" : 1, - "queuedSplits" : 0 - }, - "runningSplits" : 0, - "completedSplits" : 19 - } -} - -``` - -:::note - -Since the response data is not in sync with the query state from the perspective of clients, you cannot rely on the response data to determine whether the query completes. 

:::

For more information about the Presto REST API, refer to [Presto HTTP Protocol](https://github.com/prestosql/presto/wiki/HTTP-Protocol).
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/standalone-docker.md b/site2/website/versioned_docs/version-2.8.1-deprecated/standalone-docker.md
deleted file mode 100644
index 1710ec819d7a4e..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/standalone-docker.md
+++ /dev/null
@@ -1,214 +0,0 @@
---
id: standalone-docker
title: Set up a standalone Pulsar in Docker
sidebar_label: "Run Pulsar in Docker"
original_id: standalone-docker
---

For local development and testing, you can run Pulsar in standalone mode on your own machine within a Docker container.

If you have not installed Docker, download the [Community edition](https://www.docker.com/community-edition) and follow the instructions for your OS.

## Start Pulsar in Docker

* For MacOS, Linux, and Windows:

  ```shell
  
  $ docker run -it -p 6650:6650 -p 8080:8080 --mount source=pulsardata,target=/pulsar/data --mount source=pulsarconf,target=/pulsar/conf apachepulsar/pulsar:@pulsar:version@ bin/pulsar standalone
  
  ```

A few things to note about this command:
 * The data, metadata, and configuration are persisted on Docker volumes so that the container does not start "fresh" every
time it is restarted. For details on the volumes, you can use `docker volume inspect `
 * For Docker on Windows, make sure to configure it to use Linux containers

If you start Pulsar successfully, you will see `INFO`-level log messages like this:

```

08:18:30.970 [main] INFO org.apache.pulsar.broker.web.WebService - HTTP Service started at http://0.0.0.0:8080
...
07:53:37.322 [main] INFO org.apache.pulsar.broker.PulsarService - messaging service is ready, bootstrap service port = 8080, broker url= pulsar://localhost:6650, cluster=standalone, configs=org.apache.pulsar.broker.ServiceConfiguration@98b63c1
...

```

:::tip

When you start a local standalone cluster, a `public/default` namespace is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics).

:::

## Use Pulsar in Docker

Pulsar offers client libraries for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python.md)
and [C++](client-libraries-cpp.md). If you're running a local standalone cluster, you can
use one of these root URLs to interact with your cluster:

* `pulsar://localhost:6650`
* `http://localhost:8080`

The following example shows how to get started with Pulsar quickly by using the Python
- -Install the Pulsar Python client library directly from [PyPI](https://pypi.org/project/pulsar-client/): - -```shell - -$ pip install pulsar-client - -``` - -### Consume a message - -Create a consumer and subscribe to the topic: - -```python - -import pulsar - -client = pulsar.Client('pulsar://localhost:6650') -consumer = client.subscribe('my-topic', - subscription_name='my-sub') - -while True: - msg = consumer.receive() - print("Received message: '%s'" % msg.data()) - consumer.acknowledge(msg) - -client.close() - -``` - -### Produce a message - -Now start a producer to send some test messages: - -```python - -import pulsar - -client = pulsar.Client('pulsar://localhost:6650') -producer = client.create_producer('my-topic') - -for i in range(10): - producer.send(('hello-pulsar-%d' % i).encode('utf-8')) - -client.close() - -``` - -## Get the topic statistics - -In Pulsar, you can use REST, Java, or command-line tools to control every aspect of the system. -For details on APIs, refer to [Admin API Overview](admin-api-overview.md). - -In the simplest example, you can use curl to probe the stats for a particular topic: - -```shell - -$ curl http://localhost:8080/admin/v2/persistent/public/default/my-topic/stats | python -m json.tool - -``` - -The output is something like this: - -```json - -{ - "msgRateIn": 0.0, - "msgThroughputIn": 0.0, - "msgRateOut": 1.8332950480217471, - "msgThroughputOut": 91.33142602871978, - "bytesInCounter": 7097, - "msgInCounter": 143, - "bytesOutCounter": 6607, - "msgOutCounter": 133, - "averageMsgSize": 0.0, - "msgChunkPublished": false, - "storageSize": 7097, - "backlogSize": 0, - "offloadedStorageSize": 0, - "publishers": [ - { - "accessMode": "Shared", - "msgRateIn": 0.0, - "msgThroughputIn": 0.0, - "averageMsgSize": 0.0, - "chunkedMessageRate": 0.0, - "producerId": 0, - "metadata": {}, - "address": "/127.0.0.1:35604", - "connectedSince": "2021-07-04T09:05:43.04788Z", - "clientVersion": "2.8.0", - "producerName": "standalone-2-5" - } - ], - "waitingPublishers": 0, - "subscriptions": { - "my-sub": { - "msgRateOut": 1.8332950480217471, - "msgThroughputOut": 91.33142602871978, - "bytesOutCounter": 6607, - "msgOutCounter": 133, - "msgRateRedeliver": 0.0, - "chunkedMessageRate": 0, - "msgBacklog": 0, - "backlogSize": 0, - "msgBacklogNoDelayed": 0, - "blockedSubscriptionOnUnackedMsgs": false, - "msgDelayed": 0, - "unackedMessages": 0, - "type": "Exclusive", - "activeConsumerName": "3c544f1daa", - "msgRateExpired": 0.0, - "totalMsgExpired": 0, - "lastExpireTimestamp": 0, - "lastConsumedFlowTimestamp": 1625389101290, - "lastConsumedTimestamp": 1625389546070, - "lastAckedTimestamp": 1625389546162, - "lastMarkDeleteAdvancedTimestamp": 1625389546163, - "consumers": [ - { - "msgRateOut": 1.8332950480217471, - "msgThroughputOut": 91.33142602871978, - "bytesOutCounter": 6607, - "msgOutCounter": 133, - "msgRateRedeliver": 0.0, - "chunkedMessageRate": 0.0, - "consumerName": "3c544f1daa", - "availablePermits": 867, - "unackedMessages": 0, - "avgMessagesPerEntry": 6, - "blockedConsumerOnUnackedMsgs": false, - "lastAckedTimestamp": 1625389546162, - "lastConsumedTimestamp": 1625389546070, - "metadata": {}, - "address": "/127.0.0.1:35472", - "connectedSince": "2021-07-04T08:58:21.287682Z", - "clientVersion": "2.8.0" - } - ], - "isDurable": true, - "isReplicated": false, - "allowOutOfOrderDelivery": false, - "consumersAfterMarkDeletePosition": {}, - "nonContiguousDeletedMessagesRanges": 0, - "nonContiguousDeletedMessagesRangesSerializedSize": 0, - "durable": true, - "replicated": 
false
    }
  },
  "replication": {},
  "deduplicationStatus": "Disabled",
  "nonContiguousDeletedMessagesRanges": 0,
  "nonContiguousDeletedMessagesRangesSerializedSize": 0
}

```

diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/standalone.md b/site2/website/versioned_docs/version-2.8.1-deprecated/standalone.md
deleted file mode 100644
index 25afa11a91b117..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/standalone.md
+++ /dev/null
@@ -1,271 +0,0 @@
---
id: standalone
title: Set up a standalone Pulsar locally
sidebar_label: "Run Pulsar locally"
original_id: standalone
---

For local development and testing, you can run Pulsar in standalone mode on your machine. The standalone mode includes a Pulsar broker, the necessary ZooKeeper and BookKeeper components running inside of a single Java Virtual Machine (JVM) process.

> #### Pulsar in production?
> If you're looking to run a full production Pulsar installation, see the [Deploying a Pulsar instance](deploy-bare-metal.md) guide.

## Install Pulsar standalone

This tutorial guides you through every step of the installation process.

### System requirements

Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install 64-bit JRE/JDK 8 or later; JRE/JDK 11 is recommended.

:::tip

By default, Pulsar allocates 2G JVM heap memory to start. It can be changed in the `conf/pulsar_env.sh` file under `PULSAR_MEM`. These are extra options passed into the JVM.

:::

:::note

Broker is only supported on 64-bit JVM.

:::

### Install Pulsar using binary release

To get started with Pulsar, download a binary tarball release in one of the following ways:

* download from the Apache mirror (Pulsar @pulsar:version@ binary release)

* download from the Pulsar [downloads page](pulsar:download_page_url)

* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)

* use [wget](https://www.gnu.org/software/wget):

  ```shell
  
  $ wget pulsar:binary_release_url
  
  ```

After you download the tarball, untar it and use the `cd` command to navigate to the resulting directory:

```bash

$ tar xvfz apache-pulsar-@pulsar:version@-bin.tar.gz
$ cd apache-pulsar-@pulsar:version@

```

#### What your package contains

The Pulsar binary package initially contains the following directories:

Directory | Contains
:---------|:--------
`bin` | Pulsar's command-line tools, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/).
`conf` | Configuration files for Pulsar, including [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more.
`examples` | A Java JAR file containing [Pulsar Functions](functions-overview.md) example.
`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files used by Pulsar.
`licenses` | License files, in the`.txt` form, for various components of the Pulsar [codebase](https://github.com/apache/pulsar).

These directories are created once you begin running Pulsar.

Directory | Contains
:---------|:--------
`data` | The data storage directory used by ZooKeeper and BookKeeper.
`instances` | Artifacts created for [Pulsar Functions](functions-overview.md).
`logs` | Logs created by the installation.
-
-:::tip
-
-If you want to use builtin connectors and tiered storage offloaders, you can install them according to the following instructions:
-* [Install builtin connectors (optional)](#install-builtin-connectors-optional)
-* [Install tiered storage offloaders (optional)](#install-tiered-storage-offloaders-optional)
-Otherwise, skip this step and perform the next step [Start Pulsar standalone](#start-pulsar-standalone). Pulsar can be successfully installed without installing builtin connectors and tiered storage offloaders.
-
-:::
-
-### Install builtin connectors (optional)
-
-Since the `2.1.0-incubating` release, Pulsar releases a separate binary distribution, containing all the `builtin` connectors.
-To enable those `builtin` connectors, you can download the connectors tarball release in one of the following ways:
-
-* download from the Apache mirror Pulsar IO Connectors @pulsar:version@ release
-
-* download from the Pulsar [downloads page](pulsar:download_page_url)
-
-* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
-
-* use [wget](https://www.gnu.org/software/wget):
-
-  ```shell
-
-  $ wget pulsar:connector_release_url/{connector}-@pulsar:version@.nar
-
-  ```
-
-After you download the NAR file, copy the file to the `connectors` directory in the pulsar directory.
-For example, if you download the `pulsar-io-aerospike-@pulsar:version@.nar` connector file, enter the following commands:
-
-```bash
-
-$ mkdir connectors
-$ mv pulsar-io-aerospike-@pulsar:version@.nar connectors
-
-$ ls connectors
-pulsar-io-aerospike-@pulsar:version@.nar
-...
-
-```
-
-:::note
-
-* If you are running Pulsar in a bare metal cluster, make sure the `connectors` tarball is unzipped in every pulsar directory of the broker
-(or in every pulsar directory of function-worker if you are running a separate worker cluster for Pulsar Functions).
-* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DC/OS](https://dcos.io/)),
-you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. The `apachepulsar/pulsar-all` image has already bundled [all builtin connectors](io-overview.md#working-with-connectors).
-
-:::
-
-### Install tiered storage offloaders (optional)
-
-:::tip
-
-Since the `2.2.0` release, Pulsar releases a separate binary distribution, containing the tiered storage offloaders.
-To enable the tiered storage feature, follow the instructions below; otherwise, skip this section.
-
-:::
-
-To get started with [tiered storage offloaders](concepts-tiered-storage.md), you need to download the offloaders tarball release on every broker node in one of the following ways:
-
-* download from the Apache mirror Pulsar Tiered Storage Offloaders @pulsar:version@ release
-
-* download from the Pulsar [downloads page](pulsar:download_page_url)
-
-* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
-
-* use [wget](https://www.gnu.org/software/wget):
-
-  ```shell
-
-  $ wget pulsar:offloader_release_url
-
-  ```
-
-After you download the tarball, untar the offloaders package and copy the offloaders as `offloaders`
-in the pulsar directory:
-
-```bash
-
-$ tar xvfz apache-pulsar-offloaders-@pulsar:version@-bin.tar.gz
-
-// you will find a directory named `apache-pulsar-offloaders-@pulsar:version@` in the pulsar directory
-// then copy the offloaders
-
-$ mv apache-pulsar-offloaders-@pulsar:version@/offloaders offloaders
-
-$ ls offloaders
-tiered-storage-jcloud-@pulsar:version@.nar
-
-```
-
-For more information on how to configure tiered storage, see [Tiered storage cookbook](cookbooks-tiered-storage.md).
-
-:::note
-
-* If you are running Pulsar in a bare metal cluster, make sure that the `offloaders` tarball is unzipped in every broker's pulsar directory.
-* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DC/OS](https://dcos.io/)),
-you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. The `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders.
-
-:::
-
-## Start Pulsar standalone
-
-Once you have an up-to-date local copy of the release, you can start a local cluster using the [`pulsar`](reference-cli-tools.md#pulsar) command, which is stored in the `bin` directory, specifying that you want to start Pulsar in standalone mode.
-
-```bash
-
-$ bin/pulsar standalone
-
-```
-
-If you have started Pulsar successfully, you will see `INFO`-level log messages like this:
-
-```bash
-
-2017-06-01 14:46:29,192 - INFO - [main:WebSocketService@95] - Configuration Store cache started
-2017-06-01 14:46:29,192 - INFO - [main:AuthenticationService@61] - Authentication is disabled
-2017-06-01 14:46:29,192 - INFO - [main:WebSocketService@108] - Pulsar WebSocket Service started
-
-```
-
-:::tip
-
-* The service is running on your terminal, which is under your direct control. If you need to run other commands, open a new terminal window.
-
-:::
-
-You can also run the service as a background process using the `pulsar-daemon start standalone` command. For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon).
-
-> * By default, there is no encryption, authentication, or authorization configured. Apache Pulsar can be accessed from a remote server without any authorization. Check the [Security Overview](security-overview.md) document to secure your deployment.
->
-> * When you start a local standalone cluster, a `public/default` [namespace](concepts-messaging.md#namespaces) is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics).
-
-## Use Pulsar standalone
-
-Pulsar provides a CLI tool called [`pulsar-client`](reference-cli-tools.md#pulsar-client). The pulsar-client tool enables you to consume messages from and produce messages to a Pulsar topic in a running cluster.
-
-### Consume a message
-
-The following command consumes a message with the subscription name `first-subscription` from the `my-topic` topic:
-
-```bash
-
-$ bin/pulsar-client consume my-topic -s "first-subscription"
-
-```
-
-If the message has been successfully consumed, you will see a confirmation like the following in the `pulsar-client` logs:
-
-```
-
-09:56:55.566 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.MultiTopicsConsumerImpl - [TopicsConsumerFakeTopicNamee2df9] [first-subscription] Success subscribe new topic my-topic in topics consumer, partitions: 4, allTopicPartitionsNumber: 4
-
-```
-
-:::tip
-
-As you may have noticed, we did not explicitly create the `my-topic` topic from which we consumed the message. When you consume a message from a topic that does not yet exist, Pulsar creates that topic for you automatically. Producing a message to a topic that does not exist will automatically create that topic for you as well.
-
-:::
-
-### Produce a message
-
-The following command produces a message saying `hello-pulsar` to the `my-topic` topic:
-
-```bash
-
-$ bin/pulsar-client produce my-topic --messages "hello-pulsar"
-
-```
-
-If the message has been successfully published to the topic, you will see a confirmation like the following in the `pulsar-client` logs:
-
-```
-
-13:09:39.356 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully produced
-
-```
-
-## Stop Pulsar standalone
-
-Press `Ctrl+C` to stop a local standalone Pulsar.
-
-:::tip
-
-If the service runs as a background process using the `pulsar-daemon start standalone` command, then use the `pulsar-daemon stop standalone` command to stop the service.
-For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon).
-
-:::
-
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/tiered-storage-aliyun.md b/site2/website/versioned_docs/version-2.8.1-deprecated/tiered-storage-aliyun.md
deleted file mode 100644
index 5772f162b5e26d..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/tiered-storage-aliyun.md
+++ /dev/null
@@ -1,257 +0,0 @@
----
-id: tiered-storage-aliyun
-title: Use Aliyun OSS offloader with Pulsar
-sidebar_label: "Aliyun OSS offloader"
-original_id: tiered-storage-aliyun
----
-
-This chapter guides you through every step of installing and configuring the Aliyun Object Storage Service (OSS) offloader and using it with Pulsar.
-
-## Installation
-
-Follow the steps below to install the Aliyun OSS offloader.
-
-### Prerequisite
-
-- Pulsar: 2.8.0 or later versions
-
-### Step
-
-This example uses Pulsar 2.8.0.
-
-1. Download the Pulsar tarball. For details, see [here](https://pulsar.apache.org/docs/en/standalone/#install-pulsar-using-binary-release).
-
-2. Download and untar the Pulsar offloaders package, then copy the Pulsar offloaders as `offloaders` in the Pulsar directory. For details, see [here](https://pulsar.apache.org/docs/en/standalone/#install-tiered-storage-offloaders-optional).
-
-   **Output**
-
-   As shown from the output, Pulsar uses [Apache jclouds](https://jclouds.apache.org) to support [AWS S3](https://aws.amazon.com/s3/), [GCS](https://cloud.google.com/storage/), [Azure](https://portal.azure.com/#home), and [Aliyun OSS](https://www.aliyun.com/product/oss) for long-term storage.
-
-   ```
-
-   tiered-storage-file-system-2.8.0.nar
-   tiered-storage-jcloud-2.8.0.nar
-
-   ```
-
-   :::note
-
-   * If you are running Pulsar in a bare-metal cluster, make sure that the `offloaders` tarball is unzipped in every broker's Pulsar directory.
-   * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8s and DCOS), you can use the `apachepulsar/pulsar-all` image. The `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders.
-
-   :::
-
-## Configuration
-
-:::note
-
-Before offloading data from BookKeeper to Aliyun OSS, you need to configure some properties of the Aliyun OSS offload driver.
-
-:::
-
-You can also configure the Aliyun OSS offloader to run automatically, or trigger it manually.
-
-### Configure Aliyun OSS offloader driver
-
-You can configure the Aliyun OSS offloader driver in the configuration file `broker.conf` or `standalone.conf`.
-
-- **Required** configurations are as below.
-
-  | Required configuration | Description | Example value |
-  | --- | --- | --- |
-  | `managedLedgerOffloadDriver` | Offloader driver name, which is case-insensitive. | aliyun-oss |
-  | `offloadersDirectory` | Offloader directory | offloaders |
-  | `managedLedgerOffloadBucket` | Bucket | pulsar-topic-offload |
-  | `managedLedgerOffloadServiceEndpoint` | Endpoint | http://oss-cn-hongkong.aliyuncs.com |
-
-- **Optional** configurations are as below.
-
-  | Optional | Description | Example value |
-  | --- | --- | --- |
-  | `managedLedgerOffloadReadBufferSizeInBytes` | Size of block read | 1 MB |
-  | `managedLedgerOffloadMaxBlockSizeInBytes` | Size of block write | 64 MB |
-  | `managedLedgerMinLedgerRolloverTimeMinutes` | Minimum time between ledger rollover for a topic.<br><br>**Note**: it is not recommended that you set this configuration in the production environment. | 2 |
-  | `managedLedgerMaxEntriesPerLedger` | Maximum number of entries to append to a ledger before triggering a rollover.<br><br>**Note**: it is not recommended that you set this configuration in the production environment. | 5000 |
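-
-Putting the required settings together, a minimal `broker.conf` (or `standalone.conf`) sketch for the Aliyun OSS driver could look like the following; the bucket and endpoint are only the example values from the tables above and must be replaced with your own.
-
-```conf
-
-managedLedgerOffloadDriver=aliyun-oss
-offloadersDirectory=offloaders
-managedLedgerOffloadBucket=pulsar-topic-offload
-managedLedgerOffloadServiceEndpoint=http://oss-cn-hongkong.aliyuncs.com
-
-```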
-#### Bucket (required)
-
-A bucket is a basic container that holds your data. Everything you store in Aliyun OSS must be contained in a bucket. You can use a bucket to organize your data and control access to your data, but unlike directories and folders, buckets cannot be nested.
-
-##### Example
-
-This example names the bucket _pulsar-topic-offload_.
-
-```conf
-
-managedLedgerOffloadBucket=pulsar-topic-offload
-
-```
-
-#### Endpoint (required)
-
-The endpoint is the access address of the region where the bucket is located.
-
-:::tip
-
-For more information about Aliyun OSS regions and endpoints, see [International website](https://www.alibabacloud.com/help/doc-detail/31837.htm) or [Chinese website](https://help.aliyun.com/document_detail/31837.html).
-
-:::
-
-##### Example
-
-This example sets the endpoint as _oss-us-west-1-internal_.
-
-```
-
-managedLedgerOffloadServiceEndpoint=http://oss-us-west-1-internal.aliyuncs.com
-
-```
-
-#### Authentication (required)
-
-To be able to access Aliyun OSS, you need to authenticate with Aliyun OSS.
-
-Set the environment variables `ALIYUN_OSS_ACCESS_KEY_ID` and `ALIYUN_OSS_ACCESS_KEY_SECRET` in `conf/pulsar_env.sh`.
-
-"export" is important so that the variables are made available in the environment of spawned processes.
-
-```bash
-
-export ALIYUN_OSS_ACCESS_KEY_ID=ABC123456789
-export ALIYUN_OSS_ACCESS_KEY_SECRET=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c
-
-```
-
-#### Size of block read/write
-
-You can configure the size of a request sent to or read from Aliyun OSS in the configuration file `broker.conf` or `standalone.conf`.
-
-| Configuration | Description | Default value |
-| --- | --- | --- |
-| `managedLedgerOffloadReadBufferSizeInBytes` | Block size for each individual read when reading back data from Aliyun OSS. | 1 MB |
-| `managedLedgerOffloadMaxBlockSizeInBytes` | Maximum size of a "part" sent during a multipart upload to Aliyun OSS. It **cannot** be smaller than 5 MB. | 64 MB |
-
-### Run Aliyun OSS offloader automatically
-
-A namespace policy can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic reaches the threshold, an offloading operation is triggered automatically.
-
-| Threshold value | Action |
-| --- | --- |
-| > 0 | It triggers the offloading operation if the topic storage reaches its threshold. |
-| = 0 | It causes a broker to offload data as soon as possible. |
-| < 0 | It disables automatic offloading operation. |
-
-Automatic offloading runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, the offloader does not work until the current segment is full.
-
-You can configure the threshold size using CLI tools, such as pulsar-admin.
-
-The offload configurations in `broker.conf` and `standalone.conf` are used for the namespaces that do not have namespace-level offload policies. Each namespace can have its own offload policy. If you want to set an offload policy for a namespace, use the [`pulsar-admin namespaces set-offload-policies`](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-policies-em-) command.
-
-#### Example
-
-This example sets the Aliyun OSS offloader threshold size to 10 MB using pulsar-admin.
- -```bash - -bin/pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace - -``` - -:::tip - -For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-threshold-em-). - -::: - -### Run Aliyun OSS offloader manually - -For individual topics, you can trigger the Aliyun OSS offloader manually using one of the following methods: - -- Use REST endpoint. - -- Use CLI tools (such as pulsar-admin). - - To trigger it via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are moved to Aliyun OSS until the threshold is no longer exceeded. Older segments are moved first. - -#### Example - -- This example triggers the Aliyun OSS offloader to run manually using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload --size-threshold 10M my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1 - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-em-). - - ::: - -- This example checks the Aliyun OSS offloader status using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload is currently running - - ``` - - To wait for the Aliyun OSS offloader to complete the job, add the `-w` flag. - - ```bash - - bin/pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Offload was a success - - ``` - - If there is an error in offloading, the error is propagated to the `pulsar-admin topics offload-status` command. - - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Error in offload - null - - Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g= - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-status-em-). 
-
-   :::
-
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/tiered-storage-aws.md b/site2/website/versioned_docs/version-2.8.1-deprecated/tiered-storage-aws.md
deleted file mode 100644
index a83de62643638e..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/tiered-storage-aws.md
+++ /dev/null
@@ -1,329 +0,0 @@
----
-id: tiered-storage-aws
-title: Use AWS S3 offloader with Pulsar
-sidebar_label: "AWS S3 offloader"
-original_id: tiered-storage-aws
----
-
-This chapter guides you through every step of installing and configuring the AWS S3 offloader and using it with Pulsar.
-
-## Installation
-
-Follow the steps below to install the AWS S3 offloader.
-
-### Prerequisite
-
-- Pulsar: 2.4.2 or later versions
-
-### Step
-
-This example uses Pulsar 2.5.1.
-
-1. Download the Pulsar tarball using one of the following ways:
-
-   * Download from the [Apache mirror](https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz)
-
-   * Download from the Pulsar [downloads page](https://pulsar.apache.org/download)
-
-   * Use [wget](https://www.gnu.org/software/wget):
-
-     ```shell
-
-     wget https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz
-
-     ```
-
-2. Download and untar the Pulsar offloaders package.
-
-   ```bash
-
-   wget https://downloads.apache.org/pulsar/pulsar-2.5.1/apache-pulsar-offloaders-2.5.1-bin.tar.gz
-   tar xvfz apache-pulsar-offloaders-2.5.1-bin.tar.gz
-
-   ```
-
-3. Copy the Pulsar offloaders as `offloaders` in the Pulsar directory.
-
-   ```
-
-   mv apache-pulsar-offloaders-2.5.1/offloaders apache-pulsar-2.5.1/offloaders
-
-   ls offloaders
-
-   ```
-
-   **Output**
-
-   As shown from the output, Pulsar uses [Apache jclouds](https://jclouds.apache.org) to support [AWS S3](https://aws.amazon.com/s3/) and [GCS](https://cloud.google.com/storage/) for long-term storage.
-
-   ```
-
-   tiered-storage-file-system-2.5.1.nar
-   tiered-storage-jcloud-2.5.1.nar
-
-   ```
-
-   :::note
-
-   * If you are running Pulsar in a bare metal cluster, make sure that the `offloaders` tarball is unzipped in every broker's Pulsar directory.
-   * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8s and DCOS), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. The `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders.
-
-   :::
-
-## Configuration
-
-:::note
-
-Before offloading data from BookKeeper to AWS S3, you need to configure some properties of the AWS S3 offload driver.
-
-:::
-
-You can also configure the AWS S3 offloader to run automatically, or trigger it manually.
-
-### Configure AWS S3 offloader driver
-
-You can configure the AWS S3 offloader driver in the configuration file `broker.conf` or `standalone.conf`.
-
-- **Required** configurations are as below.
-
-  Required configuration | Description | Example value
-  |---|---|---
-  `managedLedgerOffloadDriver` | Offloader driver name, which is case-insensitive.<br><br>**Note**: there is a third driver type, S3, which is identical to AWS S3, though S3 requires that you specify an endpoint URL using `s3ManagedLedgerOffloadServiceEndpoint`. This is useful when using an S3-compatible data store other than AWS S3. | aws-s3
-  `offloadersDirectory` | Offloader directory | offloaders
-  `s3ManagedLedgerOffloadBucket` | Bucket | pulsar-topic-offload
-
-- **Optional** configurations are as below.
-
-  Optional | Description | Example value
-  |---|---|---
-  `s3ManagedLedgerOffloadRegion` | Bucket region<br><br>**Note**: before specifying a value for this parameter, you need to set the following configurations. Otherwise, you might get an error.<br><br>- Set [`s3ManagedLedgerOffloadServiceEndpoint`](https://docs.aws.amazon.com/general/latest/gr/s3.html). Example: `s3ManagedLedgerOffloadServiceEndpoint=https://s3.YOUR_REGION.amazonaws.com`<br><br>- Grant `GetBucketLocation` permission to a user. For how to grant `GetBucketLocation` permission to a user, see [here](https://docs.aws.amazon.com/AmazonS3/latest/dev/using-with-s3-actions.html#using-with-s3-actions-related-to-buckets). | eu-west-3
-  `s3ManagedLedgerOffloadReadBufferSizeInBytes` | Size of block read | 1 MB
-  `s3ManagedLedgerOffloadMaxBlockSizeInBytes` | Size of block write | 64 MB
-  `managedLedgerMinLedgerRolloverTimeMinutes` | Minimum time between ledger rollover for a topic.<br><br>**Note**: it is not recommended that you set this configuration in the production environment. | 2
-  `managedLedgerMaxEntriesPerLedger` | Maximum number of entries to append to a ledger before triggering a rollover.<br><br>**Note**: it is not recommended that you set this configuration in the production environment. | 5000
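-
-For reference, the same kind of minimal `broker.conf` (or `standalone.conf`) sketch for the AWS S3 driver, again using only the example values from the tables above; credentials are supplied separately, as described under Authentication below.
-
-```conf
-
-managedLedgerOffloadDriver=aws-s3
-offloadersDirectory=offloaders
-s3ManagedLedgerOffloadBucket=pulsar-topic-offload
-s3ManagedLedgerOffloadRegion=eu-west-3
-
-```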
-#### Bucket (required)
-
-A bucket is a basic container that holds your data. Everything you store in AWS S3 must be contained in a bucket. You can use a bucket to organize your data and control access to your data, but unlike directories and folders, buckets cannot be nested.
-
-##### Example
-
-This example names the bucket _pulsar-topic-offload_.
-
-```conf
-
-s3ManagedLedgerOffloadBucket=pulsar-topic-offload
-
-```
-
-#### Bucket region
-
-A bucket region is the region where a bucket is located. If a bucket region is not specified, the **default** region (`US East (N. Virginia)`) is used.
-
-:::tip
-
-For more information about AWS regions and endpoints, see [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
-
-:::
-
-##### Example
-
-This example sets the bucket region as _eu-west-3_.
-
-```
-
-s3ManagedLedgerOffloadRegion=eu-west-3
-
-```
-
-#### Authentication (required)
-
-To be able to access AWS S3, you need to authenticate with AWS S3.
-
-Pulsar does not provide any direct methods of configuring authentication for AWS S3,
-but relies on the mechanisms supported by the [DefaultAWSCredentialsProviderChain](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html).
-
-Once you have created a set of credentials in the AWS IAM console, you can configure credentials using one of the following methods.
-
-* Use EC2 instance metadata credentials.
-
-  If you are on an AWS instance with an instance profile that provides credentials, Pulsar uses these credentials if no other mechanism is provided.
-
-* Set the environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` in `conf/pulsar_env.sh`.
-
-  "export" is important so that the variables are made available in the environment of spawned processes.
-
-  ```bash
-
-  export AWS_ACCESS_KEY_ID=ABC123456789
-  export AWS_SECRET_ACCESS_KEY=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c
-
-  ```
-
-* Add the Java system properties `aws.accessKeyId` and `aws.secretKey` to `PULSAR_EXTRA_OPTS` in `conf/pulsar_env.sh`.
-
-  ```bash
-
-  PULSAR_EXTRA_OPTS="${PULSAR_EXTRA_OPTS} ${PULSAR_MEM} ${PULSAR_GC} -Daws.accessKeyId=ABC123456789 -Daws.secretKey=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c -Dio.netty.leakDetectionLevel=disabled -Dio.netty.recycler.maxCapacityPerThread=4096"
-
-  ```
-
-* Set the access credentials in `~/.aws/credentials`.
-
-  ```conf
-
-  [default]
-  aws_access_key_id=ABC123456789
-  aws_secret_access_key=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c
-
-  ```
-
-* Assume an IAM role.
-
-  This example uses the `DefaultAWSCredentialsProviderChain` for assuming this role.
-
-  The broker must be rebooted for credentials specified in `pulsar_env` to take effect.
-
-  ```conf
-
-  s3ManagedLedgerOffloadRole=
-  s3ManagedLedgerOffloadRoleSessionName=pulsar-s3-offload
-
-  ```
-
-#### Size of block read/write
-
-You can configure the size of a request sent to or read from AWS S3 in the configuration file `broker.conf` or `standalone.conf`.
-
-Configuration | Description | Default value
-|---|---|---
-`s3ManagedLedgerOffloadReadBufferSizeInBytes` | Block size for each individual read when reading back data from AWS S3. | 1 MB
-`s3ManagedLedgerOffloadMaxBlockSizeInBytes` | Maximum size of a "part" sent during a multipart upload to AWS S3. It **cannot** be smaller than 5 MB. | 64 MB
-
-### Configure AWS S3 offloader to run automatically
-
-A namespace policy can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic reaches the threshold, an offloading operation is triggered automatically.
-
-| Threshold value | Action |
-|---|---|
-| > 0 | It triggers the offloading operation if the topic storage reaches its threshold. |
-| = 0 | It causes a broker to offload data as soon as possible. |
-| < 0 | It disables automatic offloading operation. |
-
-Automatic offloading runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, the offloader does not work until the current segment is full.
-
-You can configure the threshold size using CLI tools, such as pulsar-admin.
-
-The offload configurations in `broker.conf` and `standalone.conf` are used for the namespaces that do not have namespace-level offload policies. Each namespace can have its own offload policy. If you want to set an offload policy for a namespace, use the [`pulsar-admin namespaces set-offload-policies`](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-policies-em-) command.
-
-#### Example
-
-This example sets the AWS S3 offloader threshold size to 10 MB using pulsar-admin.
-
-```bash
-
-bin/pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace
-
-```
-
-:::tip
-
-For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-threshold-em-).
-
-:::
-
-### Configure AWS S3 offloader to run manually
-
-For individual topics, you can trigger the AWS S3 offloader manually using one of the following methods:
-
-- Use REST endpoint.
-
-- Use CLI tools (such as pulsar-admin).
-
-  To trigger it via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are moved to AWS S3 until the threshold is no longer exceeded. Older segments are moved first.
-
-#### Example
-
-- This example triggers the AWS S3 offloader to run manually using pulsar-admin.
-
-  ```bash
-
-  bin/pulsar-admin topics offload --size-threshold 10M my-tenant/my-namespace/topic1
-
-  ```
-
-  **Output**
-
-  ```bash
-
-  Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1
-
-  ```
-
-  :::tip
-
-  For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-em-).
-
-  :::
-
-- This example checks the AWS S3 offloader status using pulsar-admin.
-
-  ```bash
-
-  bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1
-
-  ```
-
-  **Output**
-
-  ```bash
-
-  Offload is currently running
-
-  ```
-
-  To wait for the AWS S3 offloader to complete the job, add the `-w` flag.
-
-  ```bash
-
-  bin/pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1
-
-  ```
-
-  **Output**
-
-  ```
-
-  Offload was a success
-
-  ```
-
-  If there is an error in offloading, the error is propagated to the `pulsar-admin topics offload-status` command.
- - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Error in offload - null - - Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g= - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-status-em-). - - ::: - -## Tutorial - -For the complete and step-by-step instructions on how to use the AWS S3 offloader with Pulsar, see [here](https://hub.streamnative.io/offloaders/aws-s3/2.5.1#usage). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/tiered-storage-azure.md b/site2/website/versioned_docs/version-2.8.1-deprecated/tiered-storage-azure.md deleted file mode 100644 index e1485af3984e31..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/tiered-storage-azure.md +++ /dev/null @@ -1,264 +0,0 @@ ---- -id: tiered-storage-azure -title: Use Azure BlobStore offloader with Pulsar -sidebar_label: "Azure BlobStore offloader" -original_id: tiered-storage-azure ---- - -This chapter guides you through every step of installing and configuring the Azure BlobStore offloader and using it with Pulsar. - -## Installation - -Follow the steps below to install the Azure BlobStore offloader. - -### Prerequisite - -- Pulsar: 2.6.2 or later versions - -### Step - -This example uses Pulsar 2.6.2. - -1. Download the Pulsar tarball using one of the following ways: - - * Download from the [Apache mirror](https://archive.apache.org/dist/pulsar/pulsar-2.6.2/apache-pulsar-2.6.2-bin.tar.gz) - - * Download from the Pulsar [downloads page](https://pulsar.apache.org/download) - - * Use [wget](https://www.gnu.org/software/wget): - - ```shell - - wget https://archive.apache.org/dist/pulsar/pulsar-2.6.2/apache-pulsar-2.6.2-bin.tar.gz - - ``` - -2. Download and untar the Pulsar offloaders package. - - ```bash - - wget https://downloads.apache.org/pulsar/pulsar-2.6.2/apache-pulsar-offloaders-2.6.2-bin.tar.gz - tar xvfz apache-pulsar-offloaders-2.6.2-bin.tar.gz - - ``` - -3. Copy the Pulsar offloaders as `offloaders` in the Pulsar directory. - - ``` - - mv apache-pulsar-offloaders-2.6.2/offloaders apache-pulsar-2.6.2/offloaders - - ls offloaders - - ``` - - **Output** - - As shown from the output, Pulsar uses [Apache jclouds](https://jclouds.apache.org) to support [AWS S3](https://aws.amazon.com/s3/), [GCS](https://cloud.google.com/storage/) and [Azure](https://portal.azure.com/#home) for long term storage. - - ``` - - tiered-storage-file-system-2.6.2.nar - tiered-storage-jcloud-2.6.2.nar - - ``` - - :::note - - * If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's Pulsar directory. 
-   * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8s and DCOS), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. The `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders.
-
-   :::
-
-## Configuration
-
-:::note
-
-Before offloading data from BookKeeper to Azure BlobStore, you need to configure some properties of the Azure BlobStore offload driver.
-
-:::
-
-You can also configure the Azure BlobStore offloader to run automatically, or trigger it manually.
-
-### Configure Azure BlobStore offloader driver
-
-You can configure the Azure BlobStore offloader driver in the configuration file `broker.conf` or `standalone.conf`.
-
-- **Required** configurations are as below.
-
-  Required configuration | Description | Example value
-  |---|---|---
-  `managedLedgerOffloadDriver` | Offloader driver name | azureblob
-  `offloadersDirectory` | Offloader directory | offloaders
-  `managedLedgerOffloadBucket` | Bucket | pulsar-topic-offload
-
-- **Optional** configurations are as below.
-
-  Optional | Description | Example value
-  |---|---|---
-  `managedLedgerOffloadReadBufferSizeInBytes` | Size of block read | 1 MB
-  `managedLedgerOffloadMaxBlockSizeInBytes` | Size of block write | 64 MB
-  `managedLedgerMinLedgerRolloverTimeMinutes` | Minimum time between ledger rollover for a topic.<br><br>**Note**: it is not recommended that you set this configuration in the production environment. | 2
-  `managedLedgerMaxEntriesPerLedger` | Maximum number of entries to append to a ledger before triggering a rollover.<br><br>**Note**: it is not recommended that you set this configuration in the production environment. | 5000
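-
-A corresponding minimal `broker.conf` (or `standalone.conf`) sketch for the Azure BlobStore driver, using the example values above; the storage account credentials are set through the environment variables described below.
-
-```conf
-
-managedLedgerOffloadDriver=azureblob
-offloadersDirectory=offloaders
-managedLedgerOffloadBucket=pulsar-topic-offload
-
-```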
-#### Bucket (required)
-
-A bucket is a basic container that holds your data. Everything you store in Azure BlobStore must be contained in a bucket. You can use a bucket to organize your data and control access to your data, but unlike directories and folders, buckets cannot be nested.
-
-##### Example
-
-This example names the bucket _pulsar-topic-offload_.
-
-```conf
-
-managedLedgerOffloadBucket=pulsar-topic-offload
-
-```
-
-#### Authentication (required)
-
-To be able to access Azure BlobStore, you need to authenticate with Azure BlobStore.
-
-* Set the environment variables `AZURE_STORAGE_ACCOUNT` and `AZURE_STORAGE_ACCESS_KEY` in `conf/pulsar_env.sh`.
-
-  "export" is important so that the variables are made available in the environment of spawned processes.
-
-  ```bash
-
-  export AZURE_STORAGE_ACCOUNT=ABC123456789
-  export AZURE_STORAGE_ACCESS_KEY=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c
-
-  ```
-
-#### Size of block read/write
-
-You can configure the size of a request sent to or read from Azure BlobStore in the configuration file `broker.conf` or `standalone.conf`.
-
-Configuration | Description | Default value
-|---|---|---
-`managedLedgerOffloadReadBufferSizeInBytes` | Block size for each individual read when reading back data from Azure BlobStore. | 1 MB
-`managedLedgerOffloadMaxBlockSizeInBytes` | Maximum size of a "part" sent during a multipart upload to Azure BlobStore. It **cannot** be smaller than 5 MB. | 64 MB
-
-### Configure Azure BlobStore offloader to run automatically
-
-A namespace policy can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic reaches the threshold, an offloading operation is triggered automatically.
-
-| Threshold value | Action |
-|---|---|
-| > 0 | It triggers the offloading operation if the topic storage reaches its threshold. |
-| = 0 | It causes a broker to offload data as soon as possible. |
-| < 0 | It disables automatic offloading operation. |
-
-Automatic offloading runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, the offloader does not work until the current segment is full.
-
-You can configure the threshold size using CLI tools, such as pulsar-admin.
-
-The offload configurations in `broker.conf` and `standalone.conf` are used for the namespaces that do not have namespace-level offload policies. Each namespace can have its own offload policy. If you want to set an offload policy for a namespace, use the [`pulsar-admin namespaces set-offload-policies`](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-policies-em-) command.
-
-#### Example
-
-This example sets the Azure BlobStore offloader threshold size to 10 MB using pulsar-admin.
-
-```bash
-
-bin/pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace
-
-```
-
-:::tip
-
-For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-threshold-em-).
-
-:::
-
-### Configure Azure BlobStore offloader to run manually
-
-For individual topics, you can trigger the Azure BlobStore offloader manually using one of the following methods:
-
-- Use REST endpoint.
-
-- Use CLI tools (such as pulsar-admin).
-
-  To trigger it via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are moved to Azure BlobStore until the threshold is no longer exceeded. Older segments are moved first.
-
-#### Example
-
-- This example triggers the Azure BlobStore offloader to run manually using pulsar-admin.
-
-  ```bash
-
-  bin/pulsar-admin topics offload --size-threshold 10M my-tenant/my-namespace/topic1
-
-  ```
-
-  **Output**
-
-  ```bash
-
-  Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1
-
-  ```
-
-  :::tip
-
-  For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-em-).
-
-  :::
-
-- This example checks the Azure BlobStore offloader status using pulsar-admin.
-
-  ```bash
-
-  bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1
-
-  ```
-
-  **Output**
-
-  ```bash
-
-  Offload is currently running
-
-  ```
-
-  To wait for the Azure BlobStore offloader to complete the job, add the `-w` flag.
-
-  ```bash
-
-  bin/pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1
-
-  ```
-
-  **Output**
-
-  ```
-
-  Offload was a success
-
-  ```
-
-  If there is an error in offloading, the error is propagated to the `pulsar-admin topics offload-status` command.
-
-  ```bash
-
-  bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1
-
-  ```
-
-  **Output**
-
-  ```
-
-  Error in offload
-  null
-
-  Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException:
-
-  ```
-
-  :::tip
-
-  For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-status-em-).
-
-  :::
-
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/tiered-storage-filesystem.md b/site2/website/versioned_docs/version-2.8.1-deprecated/tiered-storage-filesystem.md
deleted file mode 100644
index 7d18b68385d57e..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/tiered-storage-filesystem.md
+++ /dev/null
@@ -1,630 +0,0 @@
----
-id: tiered-storage-filesystem
-title: Use filesystem offloader with Pulsar
-sidebar_label: "Filesystem offloader"
-original_id: tiered-storage-filesystem
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-This chapter guides you through every step of installing and configuring the filesystem offloader and using it with Pulsar.
-
-## Installation
-
-This section describes how to install the filesystem offloader.
-
-### Prerequisite
-
-- Pulsar: 2.4.2 or higher versions
-
-### Step
-
-This example uses Pulsar 2.5.1.
-
-1. Download the Pulsar tarball using one of the following ways:
-
-   * Download the Pulsar tarball from the [Apache mirror](https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz)
-
-   * Download the Pulsar tarball from the Pulsar [download page](https://pulsar.apache.org/download)
-
-   * Use the [wget](https://www.gnu.org/software/wget) command to download the Pulsar tarball.
-
-     ```shell
-
-     wget https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz
-
-     ```
-
-2. Download and untar the Pulsar offloaders package.
-
-   ```bash
-
-   wget https://downloads.apache.org/pulsar/pulsar-2.5.1/apache-pulsar-offloaders-2.5.1-bin.tar.gz
-
-   tar xvfz apache-pulsar-offloaders-2.5.1-bin.tar.gz
-
-   ```
-
-   :::note
-
-   * If you run Pulsar in a bare metal cluster, ensure that the `offloaders` tarball is unzipped in every broker's Pulsar directory.
-   * If you run Pulsar in Docker or deploy Pulsar using a Docker image (such as K8S and DCOS), you can use the `apachepulsar/pulsar-all` image. The `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders.
-
-   :::
-
-3. Copy the Pulsar offloaders as `offloaders` in the Pulsar directory.
-
-   ```
-
-   mv apache-pulsar-offloaders-2.5.1/offloaders apache-pulsar-2.5.1/offloaders
-
-   ls offloaders
-
-   ```
-
-   **Output**
-
-   ```
-
-   tiered-storage-file-system-2.5.1.nar
-   tiered-storage-jcloud-2.5.1.nar
-
-   ```
-
-   :::note
-
-   * If you run Pulsar in a bare metal cluster, ensure that the `offloaders` tarball is unzipped in every broker's Pulsar directory.
-   * If you run Pulsar in Docker or deploy Pulsar using a Docker image (such as K8s and DCOS), you can use the `apachepulsar/pulsar-all` image. The `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders.
-
-   :::
-
-## Configuration
-
-:::note
-
-Before offloading data from BookKeeper to the filesystem, you need to configure some properties of the filesystem offloader driver.
-
-:::
-
-You can also configure the filesystem offloader to run automatically, or trigger it manually.
-
-### Configure filesystem offloader driver
-
-You can configure the filesystem offloader driver in the `broker.conf` or `standalone.conf` configuration file.
-
-````mdx-code-block
-<Tabs
-  defaultValue="HDFS"
-  values={[{"label":"HDFS","value":"HDFS"},{"label":"NFS","value":"NFS"}]}>
-<TabItem value="HDFS">
-
-- **Required** configurations are as below.
-
-  Parameter | Description | Example value
-  |---|---|---
-  `managedLedgerOffloadDriver` | Offloader driver name, which is case-insensitive. | filesystem
-  `fileSystemURI` | Connection address, which is the URI to access the default Hadoop distributed file system. | hdfs://127.0.0.1:9000
-  `offloadersDirectory` | Offloader directory | offloaders
-  `fileSystemProfilePath` | Hadoop profile path. The configuration file is stored in the Hadoop profile path. It contains various settings for Hadoop performance tuning. | conf/filesystem_offload_core_site.xml
-
-- **Optional** configurations are as below.
-
-  Parameter | Description | Example value
-  |---|---|---
-  `managedLedgerMinLedgerRolloverTimeMinutes` | Minimum time between ledger rollover for a topic.<br><br>**Note**: it is not recommended to set this parameter in the production environment. | 2
-  `offloadersDirectory` | Offloader directory | offloaders
-  `managedLedgerMaxEntriesPerLedger` | Maximum number of entries to append to a ledger before triggering a rollover.<br><br>**Note**: it is not recommended to set this parameter in the production environment. | 5000
-
-</TabItem>
-<TabItem value="NFS">
-
-- **Required** configurations are as below.
-
-  Parameter | Description | Example value
-  |---|---|---
-  `managedLedgerOffloadDriver` | Offloader driver name, which is case-insensitive. | filesystem
-  `fileSystemProfilePath` | NFS profile path. The configuration file is stored in the NFS profile path. It contains various settings for performance tuning. | conf/filesystem_offload_core_site.xml
-
-- **Optional** configurations are as below.
-
-  Parameter | Description | Example value
-  |---|---|---
-  `managedLedgerMinLedgerRolloverTimeMinutes` | Minimum time between ledger rollover for a topic.<br><br>**Note**: it is not recommended to set this parameter in the production environment. | 2
-  `managedLedgerMaxEntriesPerLedger` | Maximum number of entries to append to a ledger before triggering a rollover.<br><br>**Note**: it is not recommended to set this parameter in the production environment. | 5000
-
-</TabItem>
-</Tabs>
-````
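-
-As a sketch, the HDFS variant of the settings above can be combined into one `standalone.conf`/`broker.conf` fragment; the URI and profile path are the example values from the table, and the same fragment reappears in the tutorial below.
-
-```conf
-
-managedLedgerOffloadDriver=filesystem
-fileSystemURI=hdfs://127.0.0.1:9000
-fileSystemProfilePath=conf/filesystem_offload_core_site.xml
-
-```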
-
-### Run filesystem offloader automatically
-
-You can configure the namespace policy to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic storage reaches the threshold, an offload operation is triggered automatically.
-
-| Threshold value | Action |
-|---|---|
-| > 0 | It triggers the offloading operation if the topic storage reaches its threshold. |
-| = 0 | It causes a broker to offload data as soon as possible. |
-| < 0 | It disables automatic offloading operation. |
-
-Automatic offload runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, the filesystem offloader does not work until the current segment is full.
-
-You can configure the threshold using CLI tools, such as pulsar-admin.
-
-#### Example
-
-This example sets the filesystem offloader threshold to 10 MB using pulsar-admin.
-
-```bash
-
-pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace
-
-```
-
-:::tip
-
-For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#set-offload-threshold).
-
-:::
-
-### Run filesystem offloader manually
-
-For individual topics, you can trigger the filesystem offloader manually using one of the following methods:
-
-- Use the REST endpoint.
-
-- Use CLI tools (such as pulsar-admin).
-
-To manually trigger the filesystem offloader via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are offloaded to the filesystem until the threshold is no longer exceeded. Older segments are offloaded first.
-
-#### Example
-
-- This example manually runs the filesystem offloader using pulsar-admin.
-
-  ```bash
-
-  pulsar-admin topics offload --size-threshold 10M persistent://my-tenant/my-namespace/topic1
-
-  ```
-
-  **Output**
-
-  ```bash
-
-  Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1
-
-  ```
-
-  :::tip
-
-  For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#offload).
-
-  :::
-
-- This example checks filesystem offloader status using pulsar-admin.
-
-  ```bash
-
-  pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1
-
-  ```
-
-  **Output**
-
-  ```bash
-
-  Offload is currently running
-
-  ```
-
-  To wait for the filesystem offloader to complete the job, add the `-w` flag.
-
-  ```bash
-
-  pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1
-
-  ```
-
-  **Output**
-
-  ```
-
-  Offload was a success
-
-  ```
-
-  If there is an error in the offloading operation, the error is propagated to the `pulsar-admin topics offload-status` command.
-
-  ```bash
-
-  pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1
-
-  ```
-
-  **Output**
-
-  ```
-
-  Error in offload
-  null
-
-  Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate.
(Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g= - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#offload-status). - - ::: - -## Tutorial - -This section provides step-by-step instructions on how to use the filesystem offloader to move data from Pulsar to Hadoop Distributed File System (HDFS) or Network File system (NFS). - -````mdx-code-block - - - -To move data from Pulsar to HDFS, follow these steps. - -### Step 1: Prepare the HDFS environment - -This tutorial sets up a Hadoop single node cluster and uses Hadoop 3.2.1. - -:::tip - -For details about how to set up a Hadoop single node cluster, see [here](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html). - -::: - -1. Download and uncompress Hadoop 3.2.1. - - ``` - - wget https://mirrors.bfsu.edu.cn/apache/hadoop/common/hadoop-3.2.1/hadoop-3.2.1.tar.gz - - tar -zxvf hadoop-3.2.1.tar.gz -C $HADOOP_HOME - - ``` - -2. Configure Hadoop. - - ``` - - # $HADOOP_HOME/etc/hadoop/core-site.xml - - - fs.defaultFS - hdfs://localhost:9000 - - - - # $HADOOP_HOME/etc/hadoop/hdfs-site.xml - - - dfs.replication - 1 - - - - ``` - -3. Set passphraseless ssh. - - ``` - - # Now check that you can ssh to the localhost without a passphrase: - $ ssh localhost - # If you cannot ssh to localhost without a passphrase, execute the following commands - $ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa - $ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys - $ chmod 0600 ~/.ssh/authorized_keys - - ``` - -4. Start HDFS. - - ``` - - # don't execute this command repeatedly, repeat execute will cauld the clusterId of the datanode is not consistent with namenode - $HADOOP_HOME/bin/hadoop namenode -format - $HADOOP_HOME/sbin/start-dfs.sh - - ``` - -5. Navigate to the [HDFS website](http://localhost:9870/). - - You can see the **Overview** page. - - ![](/assets/FileSystem-1.png) - - 1. At the top navigation bar, click **Datanodes** to check DataNode information. - - ![](/assets/FileSystem-2.png) - - 2. Click **HTTP Address** to get more detailed information about localhost:9866. - - As can be seen below, the size of **Capacity Used** is 4 KB, which is the initial value. - - ![](/assets/FileSystem-3.png) - -### Step 2: Install the filesystem offloader - -For details, see [installation](#installation). - -### Step 3: Configure the filesystem offloader - -As indicated in the [configuration](#configuration) section, you need to configure some properties for the filesystem offloader driver before using it. This tutorial assumes that you have configured the filesystem offloader driver as below and run Pulsar in **standalone** mode. - -Set the following configurations in the `conf/standalone.conf` file. - -```conf - -managedLedgerOffloadDriver=filesystem -fileSystemURI=hdfs://127.0.0.1:9000 -fileSystemProfilePath=conf/filesystem_offload_core_site.xml - -``` - -:::note - -For testing purposes, you can set the following two configurations to speed up ledger rollover, but it is not recommended that you set them in the production environment. 
-
-:::
-
-```
-
-managedLedgerMinLedgerRolloverTimeMinutes=1
-managedLedgerMaxEntriesPerLedger=100
-
-```
-
-:::note
-
-In this section, it is assumed that you have enabled NFS service and set the shared path of your NFS service. In this section, `/Users/test` is used as the shared path of NFS service.
-
-:::
-
-To offload data to NFS, follow these steps.
-
-### Step 1: Install the filesystem offloader
-
-For details, see [installation](#installation).
-
-### Step 2: Mount your NFS to your local filesystem
-
-This example mounts the NFS shared path */Users/test* to the local path */Users/pulsar_nfs*.
-
-```
-
-mount -e 192.168.0.103:/Users/test /Users/pulsar_nfs
-
-```
-
-### Step 3: Configure the filesystem offloader driver
-
-As indicated in the [configuration](#configuration) section, you need to configure some properties for the filesystem offloader driver before using it. This tutorial assumes that you have configured the filesystem offloader driver as below and run Pulsar in **standalone** mode.
-
-1. Set the following configurations in the `conf/standalone.conf` file.
-
-   ```conf
-
-   managedLedgerOffloadDriver=filesystem
-   fileSystemProfilePath=conf/filesystem_offload_core_site.xml
-
-   ```
-
-2. Modify the *filesystem_offload_core_site.xml* as follows.
-
-   ```
-
-   <property>
-       <name>fs.defaultFS</name>
-       <value>file:///</value>
-   </property>
-
-   <property>
-       <name>hadoop.tmp.dir</name>
-       <value>file:///Users/pulsar_nfs</value>
-   </property>
-
-   <property>
-       <name>io.file.buffer.size</name>
-       <value>4096</value>
-   </property>
-
-   <property>
-       <name>io.seqfile.compress.blocksize</name>
-       <value>1000000</value>
-   </property>
-
-   <property>
-       <name>io.seqfile.compression.type</name>
-       <value>BLOCK</value>
-   </property>
-
-   <property>
-       <name>io.map.index.interval</name>
-       <value>128</value>
-   </property>
-
-   ```
-
-````
-
-### Step 4: Offload data from BookKeeper to filesystem
-
-Execute the following commands in the directory where you downloaded the Pulsar tarball. For example, `~/path/to/apache-pulsar-2.5.1`.
-
-1. Start Pulsar standalone.
-
-   ```
-
-   bin/pulsar standalone -a 127.0.0.1
-
-   ```
-
-2. To ensure the data generated is not deleted immediately, it is recommended to set the [retention policy](https://pulsar.apache.org/docs/en/next/cookbooks-retention-expiry/#retention-policies), which can be either a **size** limit or a **time** limit. The larger the value you set for the retention policy, the longer the data can be retained.
-
-   ```
-
-   bin/pulsar-admin namespaces set-retention public/default --size 100M --time 2d
-
-   ```
-
-   :::tip
-
-   For more information about the `pulsarctl namespaces set-retention options` command, including flags, descriptions, default values, and shorthands, see [here](https://docs.streamnative.io/pulsarctl/v2.7.0.6/#-em-set-retention-em-).
-
-   :::
-
-3. Produce data using pulsar-client.
-
-   ```
-
-   bin/pulsar-client produce -m "Hello FileSystem Offloader" -n 1000 public/default/fs-test
-
-   ```
-
-4. The offloading operation starts after a ledger rollover is triggered. To ensure the data is offloaded successfully, it is recommended that you wait until several ledger rollovers are triggered. In this case, you might need to wait for a second. You can check the ledger status using pulsar-admin.
-
-   ```
-
-   bin/pulsar-admin topics stats-internal public/default/fs-test
-
-   ```
-
-   **Output**
-
-   The data of the ledger 696 is not offloaded.
-
-   ```
-
-   {
-     "version": 1,
-     "creationDate": "2020-06-16T21:46:25.807+08:00",
-     "modificationDate": "2020-06-16T21:46:25.821+08:00",
-     "ledgers": [
-       {
-         "ledgerId": 696,
-         "isOffloaded": false
-       }
-     ],
-     "cursors": {}
-   }
-
-   ```
-
-5. Wait a second and send more messages to the topic.
-
-   ```
-
-   bin/pulsar-client produce -m "Hello FileSystem Offloader" -n 1000 public/default/fs-test
-
-   ```
-
-6. Check the ledger status using pulsar-admin.
- - ``` - - bin/pulsar-admin topics stats-internal public/default/fs-test - - ``` - - **Output** - - The ledger 696 is rolled over. - - ``` - - { - "version": 2, - "creationDate": "2020-06-16T21:46:25.807+08:00", - "modificationDate": "2020-06-16T21:48:52.288+08:00", - "ledgers": [ - { - "ledgerId": 696, - "entries": 1001, - "size": 81695, - "isOffloaded": false - }, - { - "ledgerId": 697, - "isOffloaded": false - } - ], - "cursors": {} - } - - ``` - -7. Trigger the offloading operation manually using pulsarctl. - - ``` - - bin/pulsar-admin topics offload -s 0 public/default/fs-test - - ``` - - **Output** - - Data in ledgers before the ledge 697 is offloaded. - - ``` - - # offload info, the ledgers before 697 will be offloaded - Offload triggered for persistent://public/default/fs-test3 for messages before 697:0:-1 - - ``` - -8. Check the ledger status using pulsarctl. - - ``` - - bin/pulsar-admin topics stats-internal public/default/fs-test - - ``` - - **Output** - - The data of the ledger 696 is offloaded. - - ``` - - { - "version": 4, - "creationDate": "2020-06-16T21:46:25.807+08:00", - "modificationDate": "2020-06-16T21:52:13.25+08:00", - "ledgers": [ - { - "ledgerId": 696, - "entries": 1001, - "size": 81695, - "isOffloaded": true - }, - { - "ledgerId": 697, - "isOffloaded": false - } - ], - "cursors": {} - } - - ``` - - And the **Capacity Used** is changed from 4 KB to 116.46 KB. - - ![](/assets/FileSystem-8.png) \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/tiered-storage-gcs.md b/site2/website/versioned_docs/version-2.8.1-deprecated/tiered-storage-gcs.md deleted file mode 100644 index 81e7c5c6e6a44b..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/tiered-storage-gcs.md +++ /dev/null @@ -1,319 +0,0 @@ ---- -id: tiered-storage-gcs -title: Use GCS offloader with Pulsar -sidebar_label: "GCS offloader" -original_id: tiered-storage-gcs ---- - -This chapter guides you through every step of installing and configuring the GCS offloader and using it with Pulsar. - -## Installation - -Follow the steps below to install the GCS offloader. - -### Prerequisite - -- Pulsar: 2.4.2 or later versions - -### Step - -This example uses Pulsar 2.5.1. - -1. Download the Pulsar tarball using one of the following ways: - - * Download from the [Apache mirror](https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz) - - * Download from the Pulsar [download page](https://pulsar.apache.org/download) - - * Use [wget](https://www.gnu.org/software/wget) - - ```shell - - wget https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz - - ``` - -2. Download and untar the Pulsar offloaders package. - - ```bash - - wget https://downloads.apache.org/pulsar/pulsar-2.5.1/apache-pulsar-offloaders-2.5.1-bin.tar.gz - - tar xvfz apache-pulsar-offloaders-2.5.1-bin.tar.gz - - ``` - - :::note - - * If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's Pulsar directory. - * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8S and DCOS), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - - ::: - -3. Copy the Pulsar offloaders as `offloaders` in the Pulsar directory. 
-
-   ```
-
-   mv apache-pulsar-offloaders-2.5.1/offloaders apache-pulsar-2.5.1/offloaders
-
-   ls offloaders
-
-   ```
-
-   **Output**
-
-   As shown in the output, Pulsar uses [Apache jclouds](https://jclouds.apache.org) to support GCS and AWS S3 for long term storage.
-
-   ```
-
-   tiered-storage-file-system-2.5.1.nar
-   tiered-storage-jcloud-2.5.1.nar
-
-   ```
-
-## Configuration
-
-:::note
-
-Before offloading data from BookKeeper to GCS, you need to configure some properties of the GCS offloader driver.
-
-:::
-
-In addition, you can configure the GCS offloader to run automatically or trigger it manually.
-
-### Configure GCS offloader driver
-
-You can configure the GCS offloader driver in the configuration file `broker.conf` or `standalone.conf`.
-
-- **Required** configurations are as follows.
-
-  **Required** configuration | Description | Example value
-  |---|---|---
-  `managedLedgerOffloadDriver`|Offloader driver name, which is case-insensitive.|google-cloud-storage
-  `offloadersDirectory`|Offloader directory|offloaders
-  `gcsManagedLedgerOffloadBucket`|Bucket|pulsar-topic-offload
-  `gcsManagedLedgerOffloadRegion`|Bucket region|europe-west3
-  `gcsManagedLedgerOffloadServiceAccountKeyFile`|Authentication|/Users/user-name/Downloads/project-804d5e6a6f33.json
-
-- **Optional** configurations are as follows.
-
-  Optional configuration|Description|Example value
-  |---|---|---
-  `gcsManagedLedgerOffloadReadBufferSizeInBytes`|Size of block read|1 MB
-  `gcsManagedLedgerOffloadMaxBlockSizeInBytes`|Size of block write|64 MB
-  `managedLedgerMinLedgerRolloverTimeMinutes`|Minimum time between ledger rollovers for a topic.|2
-  `managedLedgerMaxEntriesPerLedger`|Maximum number of entries to append to a ledger before triggering a rollover.|5000
-
-#### Bucket (required)
-
-A bucket is a basic container that holds your data. Everything you store in GCS **must** be contained in a bucket. You can use a bucket to organize your data and control access to your data, but unlike directories and folders, buckets cannot be nested.
-
-##### Example
-
-This example names the bucket _pulsar-topic-offload_.
-
-```conf
-
-gcsManagedLedgerOffloadBucket=pulsar-topic-offload
-
-```
-
-#### Bucket region (required)
-
-Bucket region is the region where a bucket is located. If a bucket region is not specified, the **default** region (`us multi-regional location`) is used.
-
-:::tip
-
-For more information about bucket location, see [here](https://cloud.google.com/storage/docs/bucket-locations).
-
-:::
-
-##### Example
-
-This example sets the bucket region to _europe-west3_.
-
-```
-
-gcsManagedLedgerOffloadRegion=europe-west3
-
-```
-
-#### Authentication (required)
-
-To enable a broker to access GCS, you need to configure `gcsManagedLedgerOffloadServiceAccountKeyFile` in the configuration file `broker.conf`.
-
-`gcsManagedLedgerOffloadServiceAccountKeyFile` is a JSON file containing the GCS credentials of a service account.
-
-##### Example
-
-To generate service account credentials or view the public credentials that you've already generated, perform the following steps.
-
-1. Navigate to the [Service accounts page](https://console.developers.google.com/iam-admin/serviceaccounts).
-
-2. Select a project or create a new one.
-
-3. Click **Create service account**.
-
-4. In the **Create service account** window, type a name for the service account and select **Furnish a new private key**.
-
-   If you want to [grant G Suite domain-wide authority](https://developers.google.com/identity/protocols/OAuth2ServiceAccount#delegatingauthority) to the service account, select **Enable G Suite Domain-wide Delegation**.
-
-5. Click **Create**.
-
-   :::note
-
-   Make sure the service account you create has permission to operate GCS. You need to assign the **Storage Admin** permission to your service account as described [here](https://cloud.google.com/storage/docs/access-control/iam).
-
-   :::
-
-6. Download the JSON key file of the service account and set its path in `broker.conf`.
-
-   ```conf
-
-   gcsManagedLedgerOffloadServiceAccountKeyFile="/Users/user-name/Downloads/project-804d5e6a6f33.json"
-
-   ```
-
-   :::tip
-
-   - For more information about how to create `gcsManagedLedgerOffloadServiceAccountKeyFile`, see [here](https://support.google.com/googleapi/answer/6158849).
-   - For more information about Google Cloud IAM, see [here](https://cloud.google.com/storage/docs/access-control/iam).
-
-   :::
-
-#### Size of block read/write
-
-You can configure the size of a request sent to or read from GCS in the configuration file `broker.conf`.
-
-Configuration|Description
-|---|---
-`gcsManagedLedgerOffloadReadBufferSizeInBytes`|Block size for each individual read when reading back data from GCS.<br><br>The **default** value is 1 MB.
-`gcsManagedLedgerOffloadMaxBlockSizeInBytes`|Maximum size of a "part" sent during a multipart upload to GCS.<br><br>It **cannot** be smaller than 5 MB.<br><br>The **default** value is 64 MB.
-
-### Configure GCS offloader to run automatically
-
-Namespace policy can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic reaches the threshold, an offload operation is triggered automatically.
-
-Threshold value|Action
-|---|---
-> 0 | It triggers the offloading operation if the topic storage reaches its threshold.
-= 0 | It causes a broker to offload data as soon as possible.
-< 0 | It disables the automatic offloading operation.
-
-Automatic offloading runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, the offloader does not work until the current segment is full.
-
-You can configure the threshold size using CLI tools, such as pulsar-admin.
-
-The offload configurations in `broker.conf` and `standalone.conf` are used for the namespaces that do not have namespace-level offload policies. Each namespace can have its own offload policy. If you want to set an offload policy for a namespace, use the [`pulsar-admin namespaces set-offload-policies options`](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-policies-em-) command.
-
-#### Example
-
-This example sets the GCS offloader threshold size to 10 MB using pulsar-admin.
-
-```bash
-
-pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace
-
-```
-
-:::tip
-
-For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#set-offload-threshold).
-
-:::
-
-### Configure GCS offloader to run manually
-
-For individual topics, you can trigger the GCS offloader manually using one of the following methods:
-
-- Use the REST endpoint.
-
-- Use CLI tools (such as pulsar-admin).
-
-  To trigger the GCS offloader via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are moved to GCS until the threshold is no longer exceeded. Older segments are moved first.
-
-#### Example
-
-- This example triggers the GCS offloader to run manually using pulsar-admin with the command `pulsar-admin topics offload (topic-name) (threshold)`.
-
-  ```bash
-
-  pulsar-admin topics offload persistent://my-tenant/my-namespace/topic1 10M
-
-  ```
-
-  **Output**
-
-  ```bash
-
-  Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1
-
-  ```
-
-  :::tip
-
-  For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#offload).
-
-  :::
-
-- This example checks the GCS offloader status using pulsar-admin with the command `pulsar-admin topics offload-status options`.
-
-  ```bash
-
-  pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1
-
-  ```
-
-  **Output**
-
-  ```bash
-
-  Offload is currently running
-
-  ```
-
-  To wait for GCS to complete the job, add the `-w` flag.
- - ```bash - - pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Offload was a success - - ``` - - If there is an error in offloading, the error is propagated to the `pulsar-admin topics offload-status` command. - - ```bash - - pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Error in offload - null - - Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g= - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#offload-status). - - ::: - -## Tutorial - -For the complete and step-by-step instructions on how to use the GCS offloader with Pulsar, see [here](https://hub.streamnative.io/offloaders/gcs/2.5.1#usage). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/tiered-storage-overview.md b/site2/website/versioned_docs/version-2.8.1-deprecated/tiered-storage-overview.md deleted file mode 100644 index c635034f463b46..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/tiered-storage-overview.md +++ /dev/null @@ -1,52 +0,0 @@ ---- -id: tiered-storage-overview -title: Overview of tiered storage -sidebar_label: "Overview" -original_id: tiered-storage-overview ---- - -Pulsar's **Tiered Storage** feature allows older backlog data to be moved from BookKeeper to long term and cheaper storage, while still allowing clients to access the backlog as if nothing has changed. - -* Tiered storage uses [Apache jclouds](https://jclouds.apache.org) to support [Amazon S3](https://aws.amazon.com/s3/) and [GCS (Google Cloud Storage)](https://cloud.google.com/storage/) for long term storage. - - With jclouds, it is easy to add support for more [cloud storage providers](https://jclouds.apache.org/reference/providers/#blobstore-providers) in the future. - - :::tip - - - For more information about how to use the AWS S3 offloader with Pulsar, see [here](tiered-storage-aws.md). - - - For more information about how to use the GCS offloader with Pulsar, see [here](tiered-storage-gcs.md). - - ::: - -* Tiered storage uses [Apache Hadoop](http://hadoop.apache.org/) to support filesystems for long term storage. - - With Hadoop, it is easy to add support for more filesystems in the future. - - :::tip - - For more information about how to use the filesystem offloader with Pulsar, see [here](tiered-storage-filesystem.md). - - ::: - -## When to use tiered storage? - -Tiered storage should be used when you have a topic for which you want to keep a very long backlog for a long time. - -For example, if you have a topic containing user actions which you use to train your recommendation systems, you may want to keep that data for a long time, so that if you change your recommendation algorithm, you can rerun it against your full user history. - -## How does tiered storage work? 
- -A topic in Pulsar is backed by a **log**, known as a **managed ledger**. This log is composed of an ordered list of segments. Pulsar only writes to the final segment of the log. All previous segments are sealed. The data within the segment is immutable. This is known as a **segment oriented architecture**. - -![Tiered storage](/assets/pulsar-tiered-storage.png "Tiered Storage") - -The tiered storage offloading mechanism takes advantage of segment oriented architecture. When offloading is requested, the segments of the log are copied one-by-one to tiered storage. All segments of the log (apart from the current segment) written to tiered storage can be offloaded. - -Data written to BookKeeper is replicated to 3 physical machines by default. However, once a segment is sealed in BookKeeper, it becomes immutable and can be copied to long term storage. Long term storage can achieve cost savings by using mechanisms such as [Reed-Solomon error correction](https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction) to require fewer physical copies of data. - -Before offloading ledgers to long term storage, you need to configure buckets, credentials, and other properties for the cloud storage service. Additionally, Pulsar uses multi-part objects to upload the segment data and brokers may crash while uploading the data. It is recommended that you add a life cycle rule for your bucket to expire incomplete multi-part upload after a day or two days to avoid getting charged for incomplete uploads. Moreover, you can trigger the offloading operation manually (via REST API or CLI) or automatically (via CLI). - -After offloading ledgers to long term storage, you can still query data in the offloaded ledgers with Pulsar SQL. - -For more information about tiered storage for Pulsar topics, see [here](https://github.com/apache/pulsar/wiki/PIP-17:-Tiered-storage-for-Pulsar-topics). diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/transaction-api.md b/site2/website/versioned_docs/version-2.8.1-deprecated/transaction-api.md deleted file mode 100644 index fedc314646c938..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/transaction-api.md +++ /dev/null @@ -1,172 +0,0 @@ ---- -id: transactions-api -title: Transactions API -sidebar_label: "Transactions API" -original_id: transactions-api ---- - -All messages in a transaction are available only to consumers after the transaction has been committed. If a transaction has been aborted, all the writes and acknowledgments in this transaction roll back. - -## Prerequisites - -1. To enable transactions in Pulsar, you need to configure the parameter in `broker.conf` file or `standalone.conf` file. - -``` - -transactionCoordinatorEnabled=true - -``` - -2. Initialize transaction coordinator metadata, so the transaction coordinators can leverage advantages of the partitioned topic, such as load balance. - -``` - -bin/pulsar initialize-transaction-coordinator-metadata -cs 127.0.0.1:2181 -c standalone - -``` - -After initializing transaction coordinator metadata, you can use the transactions API. The following APIs are available. - -## Initialize Pulsar client - -You can enable transaction for transaction client and initialize transaction coordinator client. - -``` - -PulsarClient pulsarClient = PulsarClient.builder() - .serviceUrl("pulsar://localhost:6650") - .enableTransaction(true) - .build(); - -``` - -## Start transactions -You can start transaction in the following way. 
- -``` - -Transaction txn = pulsarClient - .newTransaction() - .withTransactionTimeout(5, TimeUnit.MINUTES) - .build() - .get(); - -``` - -## Produce transaction messages - -A transaction parameter is required when producing new transaction messages. The semantic of the transaction messages in Pulsar is `read-committed`, so the consumer cannot receive the ongoing transaction messages before the transaction is committed. - -``` - -producer.newMessage(txn).value("Hello Pulsar Transaction".getBytes()).sendAsync(); - -``` - -## Acknowledge the messages with the transaction - -The transaction acknowledgement requires a transaction parameter. The transaction acknowledgement marks the messages state to pending-ack state. When the transaction is committed, the pending-ack state becomes ack state. If the transaction is aborted, the pending-ack state becomes unack state. - -``` - -Message message = consumer.receive(); -consumer.acknowledgeAsync(message.getMessageId(), txn); - -``` - -## Commit transactions - -When the transaction is committed, consumers receive the transaction messages and the pending-ack state becomes ack state. - -``` - -txn.commit().get(); - -``` - -## Abort transaction - -When the transaction is aborted, the transaction acknowledgement is canceled and the pending-ack messages are redelivered. - -``` - -txn.abort().get(); - -``` - -### Example -The following example shows how messages are processed in transaction. - -``` - -PulsarClient pulsarClient = PulsarClient.builder() - .serviceUrl(getPulsarServiceList().get(0).getBrokerServiceUrl()) - .statsInterval(0, TimeUnit.SECONDS) - .enableTransaction(true) - .build(); - -String sourceTopic = "public/default/source-topic"; -String sinkTopic = "public/default/sink-topic"; - -Producer sourceProducer = pulsarClient - .newProducer(Schema.STRING) - .topic(sourceTopic) - .create(); -sourceProducer.newMessage().value("hello pulsar transaction").sendAsync(); - -Consumer sourceConsumer = pulsarClient - .newConsumer(Schema.STRING) - .topic(sourceTopic) - .subscriptionName("test") - .subscriptionType(SubscriptionType.Shared) - .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest) - .subscribe(); - -Producer sinkProducer = pulsarClient - .newProducer(Schema.STRING) - .topic(sinkTopic) - .create(); - -Transaction txn = pulsarClient - .newTransaction() - .withTransactionTimeout(5, TimeUnit.MINUTES) - .build() - .get(); - -// source message acknowledgement and sink message produce belong to one transaction, -// they are combined into an atomic operation. -Message message = sourceConsumer.receive(); -sourceConsumer.acknowledgeAsync(message.getMessageId(), txn); -sinkProducer.newMessage(txn).value("sink data").sendAsync(); - -txn.commit().get(); - -``` - -## Enable batch messages in transactions - -To enable batch messages in transactions, you need to enable the batch index acknowledgement feature. The transaction acks check whether the batch index acknowledgement conflicts. - -To enable batch index acknowledgement, you need to set `acknowledgmentAtBatchIndexLevelEnabled` to `true` in the `broker.conf` or `standalone.conf` file. - -``` - -acknowledgmentAtBatchIndexLevelEnabled=true - -``` - -And then you need to call the `enableBatchIndexAcknowledgment(true)` method in the consumer builder. 
- -``` - -Consumer sinkConsumer = pulsarClient - .newConsumer() - .topic(transferTopic) - .subscriptionName("sink-topic") - .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest) - .subscriptionType(SubscriptionType.Shared) - .enableBatchIndexAcknowledgment(true) // enable batch index acknowledgement - .subscribe(); - -``` - diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/transaction-guarantee.md b/site2/website/versioned_docs/version-2.8.1-deprecated/transaction-guarantee.md deleted file mode 100644 index 9db2d254e159f6..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/transaction-guarantee.md +++ /dev/null @@ -1,17 +0,0 @@ ---- -id: transactions-guarantee -title: Transactions Guarantee -sidebar_label: "Transactions Guarantee" -original_id: transactions-guarantee ---- - -Pulsar transactions support the following guarantee. - -## Atomic multi-partition writes and multi-subscription acknowledges -Transactions enable atomic writes to multiple topics and partitions. A batch of messages in a transaction can be received from, produced to, and acknowledged by many partitions. All the operations involved in a transaction succeed or fail as a single unit. - -## Read transactional message -All the messages in a transaction are available only for consumers until the transaction is committed. - -## Acknowledge transactional message -A message is acknowledged successfully only once by a consumer under the subscription when acknowledging the message with the transaction ID. \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/txn-how.md b/site2/website/versioned_docs/version-2.8.1-deprecated/txn-how.md deleted file mode 100644 index add072448aeb34..00000000000000 --- a/site2/website/versioned_docs/version-2.8.1-deprecated/txn-how.md +++ /dev/null @@ -1,151 +0,0 @@ ---- -id: txn-how -title: How transactions work? -sidebar_label: "How transactions work?" -original_id: txn-how ---- - -This section describes transaction components and how the components work together. For the complete design details, see [PIP-31: Transactional Streaming](https://docs.google.com/document/d/145VYp09JKTw9jAT-7yNyFU255FptB2_B2Fye100ZXDI/edit#heading=h.bm5ainqxosrx). - -## Key concept - -It is important to know the following key concepts, which is a prerequisite for understanding how transactions work. - -### Transaction coordinator - -The transaction coordinator (TC) is a module running inside a Pulsar broker. - -* It maintains the entire life cycle of transactions and prevents a transaction from getting into an incorrect status. - -* It handles transaction timeout, and ensures that the transaction is aborted after a transaction timeout. - -### Transaction log - -All the transaction metadata persists in the transaction log. The transaction log is backed by a Pulsar topic. If the transaction coordinator crashes, it can restore the transaction metadata from the transaction log. - -The transaction log stores the transaction status rather than actual messages in the transaction (the actual messages are stored in the actual topic partitions). - -### Transaction buffer - -Messages produced to a topic partition within a transaction are stored in the transaction buffer (TB) of that topic partition. The messages in the transaction buffer are not visible to consumers until the transactions are committed. The messages in the transaction buffer are discarded when the transactions are aborted. 
- -Transaction buffer stores all ongoing and aborted transactions in memory. All messages are sent to the actual partitioned Pulsar topics. After transactions are committed, the messages in the transaction buffer are materialized (visible) to consumers. When the transactions are aborted, the messages in the transaction buffer are discarded. - -### Transaction ID - -Transaction ID (TxnID) identifies a unique transaction in Pulsar. The transaction ID is 128-bit. The highest 16 bits are reserved for the ID of the transaction coordinator, and the remaining bits are used for monotonically increasing numbers in each transaction coordinator. It is easy to locate the transaction crash with the TxnID. - -### Pending acknowledge state - -Pending acknowledge state maintains message acknowledgments within a transaction before a transaction completes. If a message is in the pending acknowledge state, the message cannot be acknowledged by other transactions until the message is removed from the pending acknowledge state. - -The pending acknowledge state is persisted to the pending acknowledge log (cursor ledger). A new broker can restore the state from the pending acknowledge log to ensure the acknowledgement is not lost. - -## Data flow - -At a high level, the data flow can be split into several steps: - -1. Begin a transaction. - -2. Publish messages with a transaction. - -3. Acknowledge messages with a transaction. - -4. End a transaction. - -To help you debug or tune the transaction for better performance, review the following diagrams and descriptions. - -### 1. Begin a transaction - -Before introducing the transaction in Pulsar, a producer is created and then messages are sent to brokers and stored in data logs. - -![](/assets/txn-3.png) - -Let’s walk through the steps for _beginning a transaction_. - -| Step | Description | -| --- | --- | -| 1.1 | The first step is that the Pulsar client finds the transaction coordinator. | -| 1.2 | The transaction coordinator allocates a transaction ID for the transaction. In the transaction log, the transaction is logged with its transaction ID and status (OPEN), which ensures the transaction status is persisted regardless of transaction coordinator crashes. | -| 1.3 | The transaction log sends the result of persisting the transaction ID to the transaction coordinator. | -| 1.4 | After the transaction status entry is logged, the transaction coordinator brings the transaction ID back to the Pulsar client. | - -### 2. Publish messages with a transaction - -In this stage, the Pulsar client enters a transaction loop, repeating the `consume-process-produce` operation for all the messages that comprise the transaction. This is a long phase and is potentially composed of multiple produce and acknowledgement requests. - -![](/assets/txn-4.png) - -Let’s walk through the steps for _publishing messages with a transaction_. - -| Step | Description | -| --- | --- | -| 2.1.1 | Before the Pulsar client produces messages to a new topic partition, it sends a request to the transaction coordinator to add the partition to the transaction. | -| 2.1.2 | The transaction coordinator logs the partition changes of the transaction into the transaction log for durability, which ensures the transaction coordinator knows all the partitions that a transaction is handling. The transaction coordinator can commit or abort changes on each partition at the end-partition phase. 
| -| 2.1.3 | The transaction log sends the result of logging the new partition (used for producing messages) to the transaction coordinator. | -| 2.1.4 | The transaction coordinator sends the result of adding a new produced partition to the transaction. | -| 2.2.1 | The Pulsar client starts producing messages to partitions. The flow of this part is the same as the normal flow of producing messages except that the batch of messages produced by a transaction contains transaction IDs. | -| 2.2.2 | The broker writes messages to a partition. | - -### 3. Acknowledge messages with a transaction - -In this phase, the Pulsar client sends a request to the transaction coordinator and a new subscription is acknowledged as a part of a transaction. - -![](/assets/txn-5.png) - -Let’s walk through the steps for _acknowledging messages with a transaction_. - -| Step | Description | -| --- | --- | -| 3.1.1 | The Pulsar client sends a request to add an acknowledged subscription to the transaction coordinator. | -| 3.1.2 | The transaction coordinator logs the addition of subscription, which ensures that it knows all subscriptions handled by a transaction and can commit or abort changes on each subscription at the end phase. | -| 3.1.3 | The transaction log sends the result of logging the new partition (used for acknowledging messages) to the transaction coordinator. | -| 3.1.4 | The transaction coordinator sends the result of adding the new acknowledged partition to the transaction. | -| 3.2 | The Pulsar client acknowledges messages on the subscription. The flow of this part is the same as the normal flow of acknowledging messages except that the acknowledged request carries a transaction ID. | -| 3.3 | The broker receiving the acknowledgement request checks if the acknowledgment belongs to a transaction or not. | - -### 4. End a transaction - -At the end of a transaction, the Pulsar client decides to commit or abort the transaction. The transaction can be aborted when a conflict is detected on acknowledging messages. - -#### 4.1 End transaction request - -When the Pulsar client finishes a transaction, it issues an end transaction request. - -![](/assets/txn-6.png) - -Let’s walk through the steps for _ending the transaction_. - -| Step | Description | -| --- | --- | -| 4.1.1 | The Pulsar client issues an end transaction request (with a field indicating whether the transaction is to be committed or aborted) to the transaction coordinator. | -| 4.1.2 | The transaction coordinator writes a COMMITTING or ABORTING message to its transaction log. | -| 4.1.3 | The transaction log sends the result of logging the committing or aborting status. | - -#### 4.2 Finalize a transaction - -The transaction coordinator starts the process of committing or aborting messages to all the partitions involved in this transaction. - -![](/assets/txn-7.png) - -Let’s walk through the steps for _finalizing a transaction_. - -| Step | Description | -| --- | --- | -| 4.2.1 | The transaction coordinator commits transactions on subscriptions and commits transactions on partitions at the same time. | -| 4.2.2 | The broker (produce) writes produced committed markers to the actual partitions. At the same time, the broker (ack) writes acked committed marks to the subscription pending ack partitions. | -| 4.2.3 | The data log sends the result of writing produced committed marks to the broker. At the same time, pending ack data log sends the result of writing acked committed marks to the broker. The cursor moves to the next position. 
|
-
-#### 4.3 Mark a transaction as COMMITTED or ABORTED
-
-The transaction coordinator writes the final transaction status to the transaction log to complete the transaction.
-
-![](/assets/txn-8.png)
-
-Let’s walk through the steps for _marking a transaction as COMMITTED or ABORTED_.
-
-| Step | Description |
-| --- | --- |
-| 4.3.1 | After all produced messages and acknowledgements to all partitions involved in this transaction have been successfully committed or aborted, the transaction coordinator writes the final COMMITTED or ABORTED transaction status messages to its transaction log, indicating that the transaction is complete. All the messages associated with the transaction in its transaction log can be safely removed. |
-| 4.3.2 | The transaction log sends the result of the committed transaction to the transaction coordinator. |
-| 4.3.3 | The transaction coordinator sends the result of the committed transaction to the Pulsar client. |
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/txn-monitor.md b/site2/website/versioned_docs/version-2.8.1-deprecated/txn-monitor.md
deleted file mode 100644
index 5b50953772d092..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/txn-monitor.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-id: txn-monitor
-title: How to monitor transactions?
-sidebar_label: "How to monitor transactions?"
-original_id: txn-monitor
----
-
-You can monitor the status of the transactions in Prometheus and Grafana using the [transaction metrics](https://pulsar.apache.org/docs/en/next/reference-metrics/#pulsar-transaction).
-
-For how to configure Prometheus and Grafana, see [here](https://pulsar.apache.org/docs/en/next/deploy-monitoring).
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/txn-use.md b/site2/website/versioned_docs/version-2.8.1-deprecated/txn-use.md
deleted file mode 100644
index de0e4a92f1b27e..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/txn-use.md
+++ /dev/null
@@ -1,105 +0,0 @@
----
-id: txn-use
-title: How to use transactions?
-sidebar_label: "How to use transactions?"
-original_id: txn-use
----
-
-## Transaction API
-
-The transaction feature is primarily a server-side and protocol-level feature. You can use the transaction feature via the [transaction API](https://pulsar.apache.org/api/admin/), which is available in **Pulsar 2.8.0 or later**.
-
-To use the transaction API, you do not need any additional settings in the Pulsar client. **By default**, transactions are **disabled**.
-
-Currently, the transaction API is only available for **Java** clients. Support for other language clients will be added in future releases.
-
-## Quick start
-
-This section provides an example of how to use the transaction API to send and receive messages in a Java client.
-
-1. Start Pulsar 2.8.0 or later.
-
-2. Enable transactions.
-
-   Change the configuration in the `broker.conf` file.
-
-   ```
-
-   transactionCoordinatorEnabled=true
-
-   ```
-
-   If you want to enable batch messages in transactions, follow the steps below.
-
-   Set `acknowledgmentAtBatchIndexLevelEnabled` to `true` in the `broker.conf` or `standalone.conf` file.
-
-   ```
-
-   acknowledgmentAtBatchIndexLevelEnabled=true
-
-   ```
-
-3. Initialize transaction coordinator metadata, so that the transaction coordinators can leverage the advantages of partitioned topics (such as load balancing).
-
-   **Input**
-
-   ```
-
-   bin/pulsar initialize-transaction-coordinator-metadata -cs 127.0.0.1:2181 -c standalone
-
-   ```
-
-   **Output**
-
-   ```
-
-   Transaction coordinator metadata setup success
-
-   ```
-
-4. Initialize a Pulsar client.
-
-   ```
-
-   PulsarClient client = PulsarClient.builder()
-       .serviceUrl("pulsar://localhost:6650")
-       .enableTransaction(true)
-       .build();
-
-   ```
-
-Now you can start using the transaction API to send and receive messages. Below is an example of a `consume-process-produce` application written in Java.
-
-![](/assets/txn-9.png)
-
-Let’s walk through this example step by step.
-
-| Step | Description |
-| --- | --- |
-| 1. Start a transaction. | The application opens a new transaction by calling PulsarClient.newTransaction. It specifies the transaction timeout as 1 minute. If the transaction is not committed within 1 minute, the transaction is automatically aborted. |
-| 2. Receive messages from topics. | The application creates two normal consumers to receive messages from topics input-topic-1 and input-topic-2 respectively. |
-| 3. Publish messages to topics with the transaction. | The application creates two producers to produce the resulting messages to the output topics _output-topic-1_ and _output-topic-2_ respectively. The application applies the processing logic and generates two output messages. The application sends those two output messages as part of the transaction opened in the first step via Producer.newMessage(Transaction). |
-| 4. Acknowledge the messages with the transaction. | In the same transaction, the application acknowledges the two input messages. |
-| 5. Commit the transaction. | The application commits the transaction by calling Transaction.commit() on the open transaction. The commit operation ensures the two input messages are marked as acknowledged and the two output messages are written successfully to the output topics. |
-
-[1] Example of enabling batch message acknowledgement in transactions in the consumer builder.
-
-```
-
-Consumer<byte[]> sinkConsumer = pulsarClient
-    .newConsumer()
-    .topic(transferTopic)
-    .subscriptionName("sink-topic")
-    .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest)
-    .subscriptionType(SubscriptionType.Shared)
-    .enableBatchIndexAcknowledgment(true) // enable batch index acknowledgement
-    .subscribe();
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/txn-what.md b/site2/website/versioned_docs/version-2.8.1-deprecated/txn-what.md
deleted file mode 100644
index 844f19a700f8f0..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/txn-what.md
+++ /dev/null
@@ -1,60 +0,0 @@
----
-id: txn-what
-title: What are transactions?
-sidebar_label: "What are transactions?"
-original_id: txn-what
----
-
-Transactions strengthen the message delivery semantics of Apache Pulsar and [processing guarantees of Pulsar Functions](https://pulsar.apache.org/docs/en/next/functions-overview/#processing-guarantees). The Pulsar Transaction API supports atomic writes and acknowledgments across multiple topics.
-
-Transactions allow:
-
-- A producer to send a batch of messages to multiple topics where all messages in the batch are eventually visible to any consumer, or none are ever visible to consumers.
-
-- End-to-end exactly-once semantics (execute a `consume-process-produce` operation exactly once).
-
-## Transaction semantics
-
-Pulsar transactions have the following semantics:
-
-* All operations within a transaction are committed as a single unit.
-
-  * Either all messages are committed, or none of them are.
-
-  * Each message is written or processed exactly once, without data loss or duplicates (even in the event of failures).
-
-  * If a transaction is aborted, all the writes and acknowledgments in this transaction roll back.
-
-* A group of messages in a transaction can be received from, produced to, and acknowledged by multiple partitions.
-
-  * Consumers are only allowed to read committed (acked) messages. In other words, the broker does not deliver transactional messages which are part of an open transaction or messages which are part of an aborted transaction.
-
-  * Message writes across multiple partitions are atomic.
-
-  * Message acks across multiple subscriptions are atomic. A message is acked successfully only once by a consumer under the subscription when acknowledging the message with the transaction ID.
-
-## Transactions and stream processing
-
-Stream processing on Pulsar is a `consume-process-produce` operation on Pulsar topics:
-
-* `Consume`: a source operator that runs a Pulsar consumer reads messages from one or multiple Pulsar topics.
-
-* `Process`: a processing operator transforms the messages.
-
-* `Produce`: a sink operator that runs a Pulsar producer writes the resulting messages to one or multiple Pulsar topics.
-
-![](/assets/txn-2.png)
-
-Pulsar transactions support end-to-end exactly-once stream processing, which means messages are not lost from a source operator and messages are not duplicated to a sink operator.
-
-## Use case
-
-Prior to Pulsar 2.8.0, there was no easy way to build stream processing applications with Pulsar to achieve exactly-once processing guarantees. With the transaction feature introduced in Pulsar 2.8.0, the following services support exactly-once semantics:
-
-* [Pulsar Flink connector](https://flink.apache.org/2021/01/07/pulsar-flink-connector-270.html)
-
-  Prior to Pulsar 2.8.0, if you wanted to build stream applications using Pulsar and Flink, the Pulsar Flink connector only supported an exactly-once source connector and an at-least-once sink connector. This means the highest end-to-end processing guarantee was at-least-once, and there was a possibility that streaming applications would produce duplicate messages to the resulting topics in Pulsar.
-
-  With the transaction feature introduced in Pulsar 2.8.0, the Pulsar Flink sink connector can support exactly-once semantics by implementing the designated `TwoPhaseCommitSinkFunction` and hooking up the Flink sink message lifecycle with the Pulsar transaction API.
-
-* Support for Pulsar Functions and other connectors will be added in future releases.
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/txn-why.md b/site2/website/versioned_docs/version-2.8.1-deprecated/txn-why.md
deleted file mode 100644
index 1ed8769977654e..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/txn-why.md
+++ /dev/null
@@ -1,45 +0,0 @@
----
-id: txn-why
-title: Why transactions?
-sidebar_label: "Why transactions?"
-original_id: txn-why
----
-
-Pulsar transactions (txn) enable event streaming applications to consume, process, and produce messages in one atomic operation. The reasons for developing this feature are summarized below.
-
-## Demand for stream processing
-
-The demand for stream processing applications with stronger processing guarantees has grown along with the rise of stream processing. For example, in the financial industry, financial institutions use stream processing engines to process debits and credits for users. This type of use case requires that every message is processed exactly once, without exception.
-
-In other words, if a stream processing application consumes message A and produces the result as a message B (B = f(A)), then the exactly-once processing guarantee means that A is marked as consumed if and only if B is successfully produced, and vice versa.
-
-![](/assets/txn-1.png)
-
-The Pulsar transactions API strengthens the message delivery semantics and the processing guarantees for stream processing. It enables stream processing applications to consume, process, and produce messages in one atomic operation. That means a batch of messages in a transaction can be received from, produced to, and acknowledged by many topic partitions. All the operations involved in a transaction succeed or fail as a single unit.
-
-## Limitation of idempotent producer
-
-Avoiding data loss or duplication can be achieved by using the Pulsar idempotent producer, but it does not provide guarantees for writes across multiple partitions.
-
-In Pulsar, the highest level of message delivery guarantee is using an [idempotent producer](https://pulsar.apache.org/docs/en/next/concepts-messaging/#producer-idempotency) with exactly-once semantics on a single partition, that is, each message is persisted exactly once without data loss or duplication. However, there are some limitations in this solution:
-
-- Due to the monotonically increasing sequence ID, this solution only works on a single partition and within a single producer session (that is, for producing one message), so there is no atomicity when producing multiple messages to one or multiple partitions.
-
-  In this case, if there are some failures (for example, client / broker / bookie crashes, network failure, and more) in the process of producing and receiving messages, messages are re-processed and re-delivered, which may cause data loss or data duplication:
-
-  - For the producer: if the producer retries sending messages, some messages are persisted multiple times; if the producer does not retry sending messages, some messages are persisted once and other messages are lost.
-
-  - For the consumer: since the consumer does not know whether the broker has received messages or not, the consumer may not retry sending acks, which causes it to receive duplicate messages.
-
-- Similarly, Pulsar Functions only guarantee exactly-once semantics for an idempotent function on a single event, not for processing multiple events or producing multiple results exactly once.
-
-  For example, if a function accepts multiple events and produces one result (for example, a window function), the function may fail between producing the result and acknowledging the incoming messages, or even between acknowledging individual events. This causes all (or some) incoming messages to be re-delivered and reprocessed, and a new result to be generated.
-
-  However, many scenarios need atomic guarantees across multiple partitions and sessions.
-
-- Consumers need to rely on additional mechanisms to acknowledge (ack) messages exactly once.
-
-  For example, consumers are required to store the MessageID along with its acked state. After the topic is unloaded, the subscription can recover the acked state of this MessageID in memory when the topic is loaded again.
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.8.1-deprecated/window-functions-context.md b/site2/website/versioned_docs/version-2.8.1-deprecated/window-functions-context.md
deleted file mode 100644
index f80fea57989ef0..00000000000000
--- a/site2/website/versioned_docs/version-2.8.1-deprecated/window-functions-context.md
+++ /dev/null
@@ -1,581 +0,0 @@
----
-id: window-functions-context
-title: Window Functions Context
-sidebar_label: "Window Functions: Context"
-original_id: window-functions-context
----
-
-The Java SDK provides access to a **window context object** that can be used by a window function. This context object provides a wide variety of information and functionality for Pulsar window functions, as described below.
-
-- [Spec](#spec)
-
-  * Names of all input topics and the output topic associated with the function.
-  * Tenant and namespace associated with the function.
-  * Pulsar window function name, ID, and version.
-  * ID of the Pulsar function instance running the window function.
-  * Number of instances that invoke the window function.
-  * Built-in type or custom class name of the output schema.
-
-- [Logger](#logger)
-
-  * Logger object used by the window function, which can be used to create window function log messages.
-
-- [User config](#user-config)
-
-  * Access to arbitrary user configuration values.
-
-- [Routing](#routing)
-
-  * Routing is supported in Pulsar window functions. Pulsar window functions send messages to arbitrary topics as per the `publish` interface.
-
-- [Metrics](#metrics)
-
-  * Interface for recording metrics.
-
-- [State storage](#state-storage)
-
-  * Interface for storing and retrieving state in [state storage](#state-storage).
-
-## Spec
-
-Spec contains the basic information of a function.
-
-### Get input topics
-
-The `getInputTopics` method gets the **name list** of all input topics.
-
-This example demonstrates how to get the name list of all input topics in a Java window function.
-
-```java
-
-public class GetInputTopicsWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        Collection<String> inputTopics = context.getInputTopics();
-        System.out.println(inputTopics);
-
-        return null;
-    }
-
-}
-
-```
-
-### Get output topic
-
-The `getOutputTopic` method gets the **name of a topic** to which the message is sent.
-
-This example demonstrates how to get the name of an output topic in a Java window function.
-
-```java
-
-public class GetOutputTopicWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        String outputTopic = context.getOutputTopic();
-        System.out.println(outputTopic);
-
-        return null;
-    }
-}
-
-```
-
-### Get tenant
-
-The `getTenant` method gets the tenant name associated with the window function.
-
-This example demonstrates how to get the tenant name in a Java window function.
-
-```java
-
-public class GetTenantWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        String tenant = context.getTenant();
-        System.out.println(tenant);
-
-        return null;
-    }
-
-}
-
-```
-
-### Get namespace
-
-The `getNamespace` method gets the namespace associated with the window function.
-
-This example demonstrates how to get the namespace in a Java window function.
-
-```java
-
-public class GetNamespaceWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        String ns = context.getNamespace();
-        System.out.println(ns);
-
-        return null;
-    }
-
-}
-
-```
-
-### Get function name
-
-The `getFunctionName` method gets the window function name.
-
-This example demonstrates how to get the function name in a Java window function.
-
-```java
-
-public class GetNameOfWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        String functionName = context.getFunctionName();
-        System.out.println(functionName);
-
-        return null;
-    }
-
-}
-
-```
-
-### Get function ID
-
-The `getFunctionId` method gets the window function ID.
-
-This example demonstrates how to get the function ID in a Java window function.
-
-```java
-
-public class GetFunctionIDWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        String functionID = context.getFunctionId();
-        System.out.println(functionID);
-
-        return null;
-    }
-
-}
-
-```
-
-### Get function version
-
-The `getFunctionVersion` method gets the window function version.
-
-This example demonstrates how to get the function version of a Java window function.
-
-```java
-
-public class GetVersionOfWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        String functionVersion = context.getFunctionVersion();
-        System.out.println(functionVersion);
-
-        return null;
-    }
-
-}
-
-```
-
-### Get instance ID
-
-The `getInstanceId` method gets the instance ID of a window function.
-
-This example demonstrates how to get the instance ID in a Java window function.
-
-```java
-
-public class GetInstanceIDWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        int instanceId = context.getInstanceId();
-        System.out.println(instanceId);
-
-        return null;
-    }
-
-}
-
-```
-
-### Get num instances
-
-The `getNumInstances` method gets the number of instances that invoke the window function.
-
-This example demonstrates how to get the number of instances in a Java window function.
-
-```java
-
-public class GetNumInstancesWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        int numInstances = context.getNumInstances();
-        System.out.println(numInstances);
-
-        return null;
-    }
-
-}
-
-```
-
-### Get output schema type
-
-The `getOutputSchemaType` method gets the built-in type or custom class name of the output schema.
-
-This example demonstrates how to get the output schema type of a Java window function.
-
-```java
-
-public class GetOutputSchemaTypeWindowFunction implements WindowFunction<String, Void> {
-
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        String schemaType = context.getOutputSchemaType();
-        System.out.println(schemaType);
-
-        return null;
-    }
-}
-
-```
-
-## Logger
-
-Pulsar window functions that use the Java SDK have access to an [SLF4j](https://www.slf4j.org/) [`Logger`](https://www.slf4j.org/api/org/apache/log4j/Logger.html) object that can be used to produce logs at the chosen log level.
-
-This example logs an `INFO`-level message for each record received in a window in a Java function.
-
-```java
-
-import java.util.Collection;
-import org.apache.pulsar.functions.api.Record;
-import org.apache.pulsar.functions.api.WindowContext;
-import org.apache.pulsar.functions.api.WindowFunction;
-import org.slf4j.Logger;
-
-public class LoggingWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        Logger log = context.getLogger();
-        for (Record<String> record : inputs) {
-            log.info(record + "-window-log");
-        }
-        return null;
-    }
-
-}
-
-```
-
-If you need your function to produce logs, specify a log topic when creating or running the function.
-
-```bash
-
-bin/pulsar-admin functions create \
-  --jar my-functions.jar \
-  --classname my.package.LoggingFunction \
-  --log-topic persistent://public/default/logging-function-logs \
-  # Other function configs
-
-```
-
-You can access all logs produced by `LoggingFunction` via the `persistent://public/default/logging-function-logs` topic.
-
-## Metrics
-
-Pulsar window functions can publish arbitrary metrics to the metrics interface which can be queried.
-
-:::note
-
-If a Pulsar window function uses the language-native interface for Java, that function is not able to publish metrics and stats to Pulsar.
-
-:::
-
-You can record metrics using the context object on a per-key basis.
-
-This example records a metric for the `MessageEventTime` key every time the function processes a message that carries an event time in a Java function.
-
-```java
-
-import java.util.Collection;
-import org.apache.pulsar.functions.api.Record;
-import org.apache.pulsar.functions.api.WindowContext;
-import org.apache.pulsar.functions.api.WindowFunction;
-
-
-/**
- * Example function that wants to keep track of
- * the event time of each message sent.
- */
-public class UserMetricWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-
-        for (Record<String> record : inputs) {
-            if (record.getEventTime().isPresent()) {
-                context.recordMetric("MessageEventTime", record.getEventTime().get().doubleValue());
-            }
-        }
-
-        return null;
-    }
-}
-
-```
-
-## User config
-
-When you run or update Pulsar Functions that are created using the SDK, you can pass arbitrary key/value pairs to them with the `--user-config` flag. Key/value pairs **must** be specified as JSON.
-
-This example passes a user-configured key/value pair to a function.
-
-```bash
-
-bin/pulsar-admin functions create \
-  --name word-filter \
-  --user-config '{"forbidden-word":"rosebud"}' \
-  # Other function configs
-
-```
-
-### API
-
-You can use the following APIs to get user-defined information for window functions.
-
-#### getUserConfigMap
-
-The `getUserConfigMap` API gets a map of all user-defined key/value configurations for the window function.
-
-```java
-
-/**
- * Get a map of all user-defined key/value configs for the function.
- *
- * @return The full map of user-defined config values
- */
-Map<String, Object> getUserConfigMap();
-
-```
-
-#### getUserConfigValue
-
-The `getUserConfigValue` API gets a user-defined key/value.
-
-```java
-
-/**
- * Get any user-defined key/value.
- *
- * @param key The key
- * @return The Optional value specified by the user for that key.
- */
-Optional<Object> getUserConfigValue(String key);
-
-```
-
-#### getUserConfigValueOrDefault
-
-The `getUserConfigValueOrDefault` API gets a user-defined key/value or a default value if none is present.
-
-```java
-
-/**
- * Get any user-defined key/value or a default value if none is present.
- *
- * @param key
- * @param defaultValue
- * @return Either the user config value associated with a given key or a supplied default value
- */
-Object getUserConfigValueOrDefault(String key, Object defaultValue);
-
-```
-
-This example demonstrates how to access key/value pairs provided to Pulsar window functions.
-
-The Java SDK context object enables you to access key/value pairs provided to Pulsar window functions via the command line (as JSON).
-
-:::tip
-
-For all key/value pairs passed to Java window functions, both the `key` and the `value` are `String`. To set the value to be a different type, you need to deserialize it from the `String` type.
-
-:::
-
-This example passes a key/value pair in a Java window function.
-
-```bash
-
-bin/pulsar-admin functions create \
-  --user-config '{"word-of-the-day":"verdure"}' \
-  # Other function configs
-
-```
-
-This example accesses values in a Java window function.
-
-The `UserConfigWindowFunction` function returns the value of the `WhatToWrite` user config key every time a message arrives. The user config is changed **only** when the function is updated with a new config value, for example via the command line tool or the REST API.
-
-```java
-
-import org.apache.pulsar.functions.api.Record;
-import org.apache.pulsar.functions.api.WindowContext;
-import org.apache.pulsar.functions.api.WindowFunction;
-
-import java.util.Collection;
-import java.util.Optional;
-
-public class UserConfigWindowFunction implements WindowFunction<String, String> {
-    @Override
-    public String process(Collection<Record<String>> input, WindowContext context) throws Exception {
-        Optional<Object> whatToWrite = context.getUserConfigValue("WhatToWrite");
-        if (whatToWrite.get() != null) {
-            return (String) whatToWrite.get();
-        } else {
-            return "Not a nice way";
-        }
-    }
-
-}
-
-```
-
-If no value is provided, you can access the entire user config map or set a default value.
-
-```java
-
-// Get the whole config map
-Map<String, Object> allConfigs = context.getUserConfigMap();
-
-// Get value or resort to default
-String wotd = (String) context.getUserConfigValueOrDefault("word-of-the-day", "perspicacious");
-
-```
-
-## Routing
-
-You can use the `context.publish()` interface to publish as many results as you want.
-
-This example shows how the `PublishWindowFunction` class uses the built-in `context.publish()` method to publish messages to the topic configured under the `publish-topic` user config key in a Java function.
-
-```java
-
-public class PublishWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> input, WindowContext context) throws Exception {
-        String publishTopic = (String) context.getUserConfigValueOrDefault("publish-topic", "publishtopic");
-        String output = String.format("%s!", input);
-        context.publish(publishTopic, output);
-
-        return null;
-    }
-
-}
-
-```
-
-## State storage
-
-Pulsar window functions use [Apache BookKeeper](https://bookkeeper.apache.org) as a state storage interface. An Apache Pulsar installation (including the standalone installation) includes the deployment of BookKeeper bookies.
-
-Apache Pulsar integrates with the Apache BookKeeper `table service` to store the `state` for functions. For example, the `WordCount` function can store its `counters` state into the BookKeeper table service via the Pulsar Functions state APIs.
-
-States are key-value pairs, where the key is a string and the value is arbitrary binary data. Counters are stored as 64-bit big-endian binary values. Keys are scoped to an individual Pulsar Function and shared between instances of that function.
-
-Currently, Pulsar window functions expose a Java API to access, update, and manage states. These APIs are available in the context object when you use Java SDK functions.
-
-| Java API | Description
-|---|---
-|`incrCounter`|Increases a built-in distributed counter referred by key.
-|`getCounter`|Gets the counter value for the key.
-|`putState`|Updates the state value for the key.
-
-You can use the following APIs to access, update, and manage states in Java window functions.
-
-#### incrCounter
-
-The `incrCounter` API increases a built-in distributed counter referred by key.
-
-Applications use the `incrCounter` API to change the counter of a given `key` by the given `amount`. If the `key` does not exist, a new key is created.
-
-```java
-
-/**
- * Increment the builtin distributed counter referred by key
- * @param key The name of the key
- * @param amount The amount to be incremented
- */
-void incrCounter(String key, long amount);
-
-```
-
-#### getCounter
-
-The `getCounter` API gets the counter value for the key.
-
-Applications use the `getCounter` API to retrieve the counter of a given `key` changed by the `incrCounter` API.
-
-```java
-
-/**
- * Retrieve the counter value for the key.
- *
- * @param key name of the key
- * @return the amount of the counter value for this key
- */
-long getCounter(String key);
-
-```
-
-Besides the `getCounter` API, Pulsar also exposes a general key/value API (`putState`) for functions to store general key/value state.
-
-#### putState
-
-The `putState` API updates the state value for the key.
-
-```java
-
-/**
- * Update the state value for the key.
- *
- * @param key name of the key
- * @param value state value of the key
- */
-void putState(String key, ByteBuffer value);
-
-```
-
-This example demonstrates how applications store states in Pulsar window functions.
-
-The logic of the `WordCountWindowFunction` is simple and straightforward.
-
-1. The function first splits the received string into multiple words using the regex `\\.` (splitting on the `.` character).
-
-2. For each `word`, the function increments the corresponding `counter` by 1 via `incrCounter(key, amount)`.
-
-```java
-
-import org.apache.pulsar.functions.api.Record;
-import org.apache.pulsar.functions.api.WindowContext;
-import org.apache.pulsar.functions.api.WindowFunction;
-
-import java.util.Arrays;
-import java.util.Collection;
-
-public class WordCountWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        for (Record<String> input : inputs) {
-            Arrays.asList(input.getValue().split("\\.")).forEach(word -> context.incrCounter(word, 1));
-        }
-        return null;
-
-    }
-}
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/about.md b/site2/website/versioned_docs/version-2.8.2-deprecated/about.md
deleted file mode 100644
index 2b86e7273ae25e..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/about.md
+++ /dev/null
@@ -1,56 +0,0 @@
----
-slug: /
-id: about
-title: Welcome to the doc portal!
-sidebar_label: "About"
----
-
-import BlockLinks from "@site/src/components/BlockLinks";
-import BlockLink from "@site/src/components/BlockLink";
-import { docUrl } from "@site/src/utils/index";
-
-
-# Welcome to the doc portal!
-***
-
-This portal holds a variety of support documents to help you work with Pulsar. If you’re a beginner, there are tutorials and explainers to help you understand Pulsar and how it works.
-If you’re a beginner, there are tutorials and explainers to help you understand Pulsar and how it works.
-
-If you’re an experienced coder, review this page to learn the easiest way to access the specific content you’re looking for.
-
-## Get Started Now
-
-
-
-## Navigation
-***
-
-There are several ways to get around in the doc portal. The index navigation pane is a table of contents for the entire archive. The archive is divided into sections, like chapters in a book. Click the title of the topic to view it.
-
-In-context links provide an easy way to immediately reference related topics. Click the underlined term to view the topic.
-
-Links to related topics can be found at the bottom of each topic page. Click the link to view the topic.
-
-![Page Linking](/assets/page-linking.png)
-
-## Continuous Improvement
-***
-As you probably know, we are working on a new user experience for our documentation portal that will make learning about and building on top of Apache Pulsar a much better experience. Whether you need overview concepts, how-to procedures, curated guides, or quick references, we’re building content to support it. This welcome page is just the first step. We will be providing updates every month.
-
-## Help Improve These Documents
-***
-
-You’ll notice an Edit button at the top and bottom of each page. Click it to open a landing page with instructions for requesting changes to posted documents. These are your resources. Participation is not only welcomed – it’s essential!
-
-## Join the Community!
-***
-
-The Pulsar community on GitHub is active, passionate, and knowledgeable. Join discussions, voice opinions, suggest features, and dive into the code itself. Find your Pulsar family here at [apache/pulsar](https://github.com/apache/pulsar).
-
-An equally passionate community can be found in the [Pulsar Slack channel](https://apache-pulsar.slack.com/). You’ll need an invitation to join, but many GitHub Pulsar community members are Slack members too. Join, hang out, learn, and make some new friends.
-
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/adaptors-kafka.md b/site2/website/versioned_docs/version-2.8.2-deprecated/adaptors-kafka.md
deleted file mode 100644
index ad0d886a9e04b8..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/adaptors-kafka.md
+++ /dev/null
@@ -1,274 +0,0 @@
----
-id: adaptors-kafka
-title: Pulsar adaptor for Apache Kafka
-sidebar_label: "Kafka client wrapper"
-original_id: adaptors-kafka
----
-
-
-Pulsar provides an easy option for applications that are currently written using the [Apache Kafka](http://kafka.apache.org) Java client API.
-
-## Using the Pulsar Kafka compatibility wrapper
-
-In an existing application, replace the regular Kafka client dependency with the Pulsar Kafka wrapper. Remove the following dependency in `pom.xml`:
-
-```xml
-
-<dependency>
-  <groupId>org.apache.kafka</groupId>
-  <artifactId>kafka-clients</artifactId>
-  <version>0.10.2.1</version>
-</dependency>
-
-```
-
-Then include this dependency for the Pulsar Kafka wrapper:
-
-```xml
-
-<dependency>
-  <groupId>org.apache.pulsar</groupId>
-  <artifactId>pulsar-client-kafka</artifactId>
-  <version>@pulsar:version@</version>
-</dependency>
-
-```
-
-With the new dependency, the existing code works without any changes. You only need to adjust the configuration so that producers and consumers point to the Pulsar service rather than to Kafka, and use a particular Pulsar topic.
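-
-For instance, a typical migration only touches the client properties. The following is a minimal sketch, assuming a standalone Pulsar broker at `localhost:6650`; the service URL and topic name are illustrative:
-
-```java
-
-Properties props = new Properties();
-
-// Before: props.put("bootstrap.servers", "kafka-broker:9092");
-// After: the same property now points at a Pulsar service URL
-props.put("bootstrap.servers", "pulsar://localhost:6650");
-
-// Topics need to be valid Pulsar topic names
-String topic = "persistent://public/default/my-topic";
-
-```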
-
-## Using the Pulsar Kafka compatibility wrapper together with existing Kafka client
-
-When migrating from Kafka to Pulsar, the application might need to use the original Kafka client
-and the Pulsar Kafka wrapper together during the migration. In that case, use the
-unshaded Pulsar Kafka client wrapper.
-
-```xml
-
-<dependency>
-  <groupId>org.apache.pulsar</groupId>
-  <artifactId>pulsar-client-kafka-original</artifactId>
-  <version>@pulsar:version@</version>
-</dependency>
-
-```
-
-When using this dependency, construct producers using `org.apache.kafka.clients.producer.PulsarKafkaProducer`
-instead of `org.apache.kafka.clients.producer.KafkaProducer`, and construct consumers using `org.apache.kafka.clients.consumer.PulsarKafkaConsumer`.
-
-## Producer example
-
-```java
-
-// Topic needs to be a regular Pulsar topic
-String topic = "persistent://public/default/my-topic";
-
-Properties props = new Properties();
-// Point to a Pulsar service
-props.put("bootstrap.servers", "pulsar://localhost:6650");
-
-props.put("key.serializer", IntegerSerializer.class.getName());
-props.put("value.serializer", StringSerializer.class.getName());
-
-Producer<Integer, String> producer = new KafkaProducer<>(props);
-
-for (int i = 0; i < 10; i++) {
-    producer.send(new ProducerRecord<>(topic, i, "hello-" + i));
-    log.info("Message {} sent successfully", i);
-}
-
-producer.close();
-
-```
-
-## Consumer example
-
-```java
-
-String topic = "persistent://public/default/my-topic";
-
-Properties props = new Properties();
-// Point to a Pulsar service
-props.put("bootstrap.servers", "pulsar://localhost:6650");
-props.put("group.id", "my-subscription-name");
-props.put("enable.auto.commit", "false");
-props.put("key.deserializer", IntegerDeserializer.class.getName());
-props.put("value.deserializer", StringDeserializer.class.getName());
-
-Consumer<Integer, String> consumer = new KafkaConsumer<>(props);
-consumer.subscribe(Arrays.asList(topic));
-
-while (true) {
-    ConsumerRecords<Integer, String> records = consumer.poll(100);
-    records.forEach(record -> {
-        log.info("Received record: {}", record);
-    });
-
-    // Commit last offset
-    consumer.commitSync();
-}
-
-```
-
-## Complete Examples
-
-You can find the complete producer and consumer examples [here](https://github.com/apache/pulsar-adapters/tree/master/pulsar-client-kafka-compat/pulsar-client-kafka-tests/src/test/java/org/apache/pulsar/client/kafka/compat/examples).
-
-## Compatibility matrix
-
-Currently, the Pulsar Kafka wrapper supports most of the operations offered by the Kafka API.
-
-### Producer
-
-APIs:
-
-| Producer Method | Supported | Notes |
-|:------------------------------------------------------------------------------|:----------|:-------------------------------------------------------------------------|
-| `Future<RecordMetadata> send(ProducerRecord<K, V> record)` | Yes | |
-| `Future<RecordMetadata> send(ProducerRecord<K, V> record, Callback callback)` | Yes | |
-| `void flush()` | Yes | |
-| `List<PartitionInfo> partitionsFor(String topic)` | No | |
-| `Map<MetricName, ? extends Metric> metrics()` | No | |
-| `void close()` | Yes | |
-| `void close(long timeout, TimeUnit unit)` | Yes | |
-
-Properties:
-
-| Config property | Supported | Notes |
-|:----------------------------------------|:----------|:------------------------------------------------------------------------------|
-| `acks` | Ignored | Durability and quorum writes are configured at the namespace level |
-| `auto.offset.reset` | Yes | It uses a default value of `earliest` if you do not give a specific setting. |
-| `batch.size` | Ignored | |
-| `bootstrap.servers` | Yes | |
-| `buffer.memory` | Ignored | |
-| `client.id` | Ignored | |
-| `compression.type` | Yes | Allows `gzip` and `lz4`. No `snappy`. |
-| `connections.max.idle.ms` | Yes | Only supports up to 2,147,483,647,000 (Integer.MAX_VALUE * 1000) ms of idle time |
-| `interceptor.classes` | Yes | |
-| `key.serializer` | Yes | |
-| `linger.ms` | Yes | Controls the group commit time when batching messages |
-| `max.block.ms` | Ignored | |
-| `max.in.flight.requests.per.connection` | Ignored | In Pulsar, ordering is maintained even with multiple requests in flight |
-| `max.request.size` | Ignored | |
-| `metric.reporters` | Ignored | |
-| `metrics.num.samples` | Ignored | |
-| `metrics.sample.window.ms` | Ignored | |
-| `partitioner.class` | Yes | |
-| `receive.buffer.bytes` | Ignored | |
-| `reconnect.backoff.ms` | Ignored | |
-| `request.timeout.ms` | Ignored | |
-| `retries` | Ignored | Pulsar client retries with exponential backoff until the send timeout expires. |
-| `send.buffer.bytes` | Ignored | |
-| `timeout.ms` | Yes | |
-| `value.serializer` | Yes | |
-
-
-### Consumer
-
-The following table lists consumer APIs.
-
-| Consumer Method | Supported | Notes |
-|:--------------------------------------------------------------------------------------------------------|:----------|:------|
-| `Set<TopicPartition> assignment()` | No | |
-| `Set<String> subscription()` | Yes | |
-| `void subscribe(Collection<String> topics)` | Yes | |
-| `void subscribe(Collection<String> topics, ConsumerRebalanceListener callback)` | No | |
-| `void assign(Collection<TopicPartition> partitions)` | No | |
-| `void subscribe(Pattern pattern, ConsumerRebalanceListener callback)` | No | |
-| `void unsubscribe()` | Yes | |
-| `ConsumerRecords<K, V> poll(long timeoutMillis)` | Yes | |
-| `void commitSync()` | Yes | |
-| `void commitSync(Map<TopicPartition, OffsetAndMetadata> offsets)` | Yes | |
-| `void commitAsync()` | Yes | |
-| `void commitAsync(OffsetCommitCallback callback)` | Yes | |
-| `void commitAsync(Map<TopicPartition, OffsetAndMetadata> offsets, OffsetCommitCallback callback)` | Yes | |
-| `void seek(TopicPartition partition, long offset)` | Yes | |
-| `void seekToBeginning(Collection<TopicPartition> partitions)` | Yes | |
-| `void seekToEnd(Collection<TopicPartition> partitions)` | Yes | |
-| `long position(TopicPartition partition)` | Yes | |
-| `OffsetAndMetadata committed(TopicPartition partition)` | Yes | |
-| `Map<MetricName, ? extends Metric> metrics()` | No | |
-| `List<PartitionInfo> partitionsFor(String topic)` | No | |
-| `Map<String, List<PartitionInfo>> listTopics()` | No | |
-| `Set<TopicPartition> paused()` | No | |
-| `void pause(Collection<TopicPartition> partitions)` | No | |
-| `void resume(Collection<TopicPartition> partitions)` | No | |
-| `Map<TopicPartition, OffsetAndTimestamp> offsetsForTimes(Map<TopicPartition, Long> timestampsToSearch)` | No | |
-| `Map<TopicPartition, Long> beginningOffsets(Collection<TopicPartition> partitions)` | No | |
-| `Map<TopicPartition, Long> endOffsets(Collection<TopicPartition> partitions)` | No | |
-| `void close()` | Yes | |
-| `void close(long timeout, TimeUnit unit)` | Yes | |
-| `void wakeup()` | No | |
-
-Properties:
-
-| Config property | Supported | Notes |
-|:--------------------------------|:----------|:------------------------------------------------------|
-| `group.id` | Yes | Maps to a Pulsar subscription name |
-| `max.poll.records` | Yes | |
-| `max.poll.interval.ms` | Ignored | Messages are "pushed" from the broker |
-| `session.timeout.ms` | Ignored | |
-| `heartbeat.interval.ms` | Ignored | |
-| `bootstrap.servers` | Yes | Needs to point to a single Pulsar service URL |
-| `enable.auto.commit` | Yes | |
-| `auto.commit.interval.ms` | Ignored | With auto-commit, acks are sent immediately to the broker |
-| `partition.assignment.strategy` | Ignored | |
-| `auto.offset.reset` | Yes | Only supports `earliest` and `latest`. |
| -| `fetch.min.bytes` | Ignored | | -| `fetch.max.bytes` | Ignored | | -| `fetch.max.wait.ms` | Ignored | | -| `interceptor.classes` | Yes | | -| `metadata.max.age.ms` | Ignored | | -| `max.partition.fetch.bytes` | Ignored | | -| `send.buffer.bytes` | Ignored | | -| `receive.buffer.bytes` | Ignored | | -| `client.id` | Ignored | | - - -## Customize Pulsar configurations - -You can configure Pulsar authentication provider directly from the Kafka properties. - -### Pulsar client properties - -| Config property | Default | Notes | -|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------| -| [`pulsar.authentication.class`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setAuthentication-org.apache.pulsar.client.api.Authentication-) | | Configure to auth provider. For example, `org.apache.pulsar.client.impl.auth.AuthenticationTls`.| -| [`pulsar.authentication.params.map`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setAuthentication-java.lang.String-java.util.Map-) | | Map which represents parameters for the Authentication-Plugin. | -| [`pulsar.authentication.params.string`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setAuthentication-java.lang.String-java.lang.String-) | | String which represents parameters for the Authentication-Plugin, for example, `key1:val1,key2:val2`. | -| [`pulsar.use.tls`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setUseTls-boolean-) | `false` | Enable TLS transport encryption. | -| [`pulsar.tls.trust.certs.file.path`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setTlsTrustCertsFilePath-java.lang.String-) | | Path for the TLS trust certificate store. | -| [`pulsar.tls.allow.insecure.connection`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setTlsAllowInsecureConnection-boolean-) | `false` | Accept self-signed certificates from brokers. | -| [`pulsar.operation.timeout.ms`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setOperationTimeout-int-java.util.concurrent.TimeUnit-) | `30000` | General operations timeout. | -| [`pulsar.stats.interval.seconds`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setStatsInterval-long-java.util.concurrent.TimeUnit-) | `60` | Pulsar client lib stats printing interval. | -| [`pulsar.num.io.threads`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setIoThreads-int-) | `1` | The number of Netty IO threads to use. | -| [`pulsar.connections.per.broker`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setConnectionsPerBroker-int-) | `1` | The maximum number of connection to each broker. | -| [`pulsar.use.tcp.nodelay`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setUseTcpNoDelay-boolean-) | `true` | TCP no-delay. | -| [`pulsar.concurrent.lookup.requests`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setConcurrentLookupRequest-int-) | `50000` | The maximum number of concurrent topic lookups. 
| -| [`pulsar.max.number.rejected.request.per.connection`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setMaxNumberOfRejectedRequestPerConnection-int-) | `50` | The threshold of errors to forcefully close a connection. | -| [`pulsar.keepalive.interval.ms`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientBuilder.html#keepAliveInterval-int-java.util.concurrent.TimeUnit-)| `30000` | Keep alive interval for each client-broker-connection. | - - -### Pulsar producer properties - -| Config property | Default | Notes | -|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------| -| [`pulsar.producer.name`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setProducerName-java.lang.String-) | | Specify the producer name. | -| [`pulsar.producer.initial.sequence.id`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setInitialSequenceId-long-) | | Specify baseline for sequence ID of this producer. | -| [`pulsar.producer.max.pending.messages`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setMaxPendingMessages-int-) | `1000` | Set the maximum size of the message queue pending to receive an acknowledgment from the broker. | -| [`pulsar.producer.max.pending.messages.across.partitions`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setMaxPendingMessagesAcrossPartitions-int-) | `50000` | Set the maximum number of pending messages across all the partitions. | -| [`pulsar.producer.batching.enabled`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setBatchingEnabled-boolean-) | `true` | Control whether automatic batching of messages is enabled for the producer. | -| [`pulsar.producer.batching.max.messages`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setBatchingMaxMessages-int-) | `1000` | The maximum number of messages in a batch. | -| [`pulsar.block.if.producer.queue.full`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setBlockIfQueueFull-boolean-) | | Specify the block producer if queue is full. | - - -### Pulsar consumer Properties - -| Config property | Default | Notes | -|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------| -| [`pulsar.consumer.name`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setConsumerName-java.lang.String-) | | Specify the consumer name. | -| [`pulsar.consumer.receiver.queue.size`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setReceiverQueueSize-int-) | 1000 | Set the size of the consumer receiver queue. | -| [`pulsar.consumer.acknowledgments.group.time.millis`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#acknowledgmentGroupTime-long-java.util.concurrent.TimeUnit-) | 100 | Set the maximum amount of group time for consumers to send the acknowledgments to the broker. 
| [`pulsar.consumer.total.receiver.queue.size.across.partitions`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setMaxTotalReceiverQueueSizeAcrossPartitions-int-) | 50000 | Set the maximum size of the total receiver queue across partitions. |
-| [`pulsar.consumer.subscription.topics.mode`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#subscriptionTopicsMode-Mode-) | PersistentOnly | Set the subscription topic mode for consumers. |
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/adaptors-spark.md b/site2/website/versioned_docs/version-2.8.2-deprecated/adaptors-spark.md
deleted file mode 100644
index e14f13b5d4b079..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/adaptors-spark.md
+++ /dev/null
@@ -1,91 +0,0 @@
----
-id: adaptors-spark
-title: Pulsar adaptor for Apache Spark
-sidebar_label: "Apache Spark"
-original_id: adaptors-spark
----
-
-## Spark Streaming receiver
-
-The Spark Streaming receiver for Pulsar is a custom receiver that enables Apache [Spark Streaming](https://spark.apache.org/streaming/) to receive raw data from Pulsar.
-
-An application can receive data in [Resilient Distributed Dataset](https://spark.apache.org/docs/latest/programming-guide.html#resilient-distributed-datasets-rdds) (RDD) format via the Spark Streaming receiver and can process it in a variety of ways.
-
-### Prerequisites
-
-To use the receiver, include a dependency for the `pulsar-spark` library in your Java configuration.
-
-#### Maven
-
-If you're using Maven, add this to your `pom.xml`:
-
-```xml
-
-<properties>
-  <pulsar.version>@pulsar:version@</pulsar.version>
-</properties>
-
-<dependency>
-  <groupId>org.apache.pulsar</groupId>
-  <artifactId>pulsar-spark</artifactId>
-  <version>${pulsar.version}</version>
-</dependency>
-
-```
-
-#### Gradle
-
-If you're using Gradle, add this to your `build.gradle` file:
-
-```groovy
-
-def pulsarVersion = "@pulsar:version@"
-
-dependencies {
-    compile group: 'org.apache.pulsar', name: 'pulsar-spark', version: pulsarVersion
-}
-
-```
-
-### Usage
-
-Pass an instance of `SparkStreamingPulsarReceiver` to the `receiverStream` method in `JavaStreamingContext`:
-
-```java
-
-    String serviceUrl = "pulsar://localhost:6650/";
-    String topic = "persistent://public/default/test_src";
-    String subs = "test_sub";
-
-    SparkConf sparkConf = new SparkConf().setMaster("local[*]").setAppName("Pulsar Spark Example");
-
-    JavaStreamingContext jsc = new JavaStreamingContext(sparkConf, Durations.seconds(60));
-
-    ConsumerConfigurationData<byte[]> pulsarConf = new ConsumerConfigurationData<>();
-
-    Set<String> set = new HashSet<>();
-    set.add(topic);
-    pulsarConf.setTopicNames(set);
-    pulsarConf.setSubscriptionName(subs);
-
-    SparkStreamingPulsarReceiver pulsarReceiver = new SparkStreamingPulsarReceiver(
-        serviceUrl,
-        pulsarConf,
-        new AuthenticationDisabled());
-
-    JavaReceiverInputDStream<byte[]> lineDStream = jsc.receiverStream(pulsarReceiver);
-
-```
-
-For a complete example, click [here](https://github.com/apache/pulsar-adapters/blob/master/examples/spark/src/main/java/org/apache/spark/streaming/receiver/example/SparkStreamingPulsarReceiverExample.java). In this example, the number of received messages that contain the string "Pulsar" is counted.
-
-Note that if needed, other Pulsar authentication classes can be used.
-For example, in order to use a token during authentication, the following parameters for the `SparkStreamingPulsarReceiver` constructor can be set:
-
-```java
-
-SparkStreamingPulsarReceiver pulsarReceiver = new SparkStreamingPulsarReceiver(
-    serviceUrl,
-    pulsarConf,
-    new AuthenticationToken("token:<valid-token>"));
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/adaptors-storm.md b/site2/website/versioned_docs/version-2.8.2-deprecated/adaptors-storm.md
deleted file mode 100644
index 76d507164777db..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/adaptors-storm.md
+++ /dev/null
@@ -1,96 +0,0 @@
----
-id: adaptors-storm
-title: Pulsar adaptor for Apache Storm
-sidebar_label: "Apache Storm"
-original_id: adaptors-storm
----
-
-Pulsar Storm is an adaptor for integrating with [Apache Storm](http://storm.apache.org/) topologies. It provides core Storm implementations for sending and receiving data.
-
-An application can inject data into a Storm topology via a generic Pulsar spout, as well as consume data from a Storm topology via a generic Pulsar bolt.
-
-## Using the Pulsar Storm Adaptor
-
-Include the dependency for the Pulsar Storm adaptor:
-
-```xml
-
-<dependency>
-  <groupId>org.apache.pulsar</groupId>
-  <artifactId>pulsar-storm</artifactId>
-  <version>${pulsar.version}</version>
-</dependency>
-
-```
-
-## Pulsar Spout
-
-The Pulsar spout allows data published on a topic to be consumed by a Storm topology. It emits a Storm tuple based on the message received and the `MessageToValuesMapper` provided by the client.
-
-Tuples that fail to be processed by the downstream bolts are re-injected by the spout with an exponential backoff, within a configurable timeout (the default is 60 seconds) or a configurable number of retries, whichever comes first, after which the message is acknowledged by the consumer. Here's an example construction of a spout:
-
-```java
-
-MessageToValuesMapper messageToValuesMapper = new MessageToValuesMapper() {
-
-    @Override
-    public Values toValues(Message<byte[]> msg) {
-        return new Values(new String(msg.getData()));
-    }
-
-    @Override
-    public void declareOutputFields(OutputFieldsDeclarer declarer) {
-        // declare the output fields
-        declarer.declare(new Fields("string"));
-    }
-};
-
-// Configure a Pulsar Spout
-PulsarSpoutConfiguration spoutConf = new PulsarSpoutConfiguration();
-spoutConf.setServiceUrl("pulsar://broker.messaging.usw.example.com:6650");
-spoutConf.setTopic("persistent://my-property/usw/my-ns/my-topic1");
-spoutConf.setSubscriptionName("my-subscriber-name1");
-spoutConf.setMessageToValuesMapper(messageToValuesMapper);
-
-// Create a Pulsar Spout
-PulsarSpout spout = new PulsarSpout(spoutConf);
-
-```
-
-For a complete example, click [here](https://github.com/apache/pulsar-adapters/blob/master/pulsar-storm/src/test/java/org/apache/pulsar/storm/PulsarSpoutTest.java).
-
-## Pulsar Bolt
-
-The Pulsar bolt allows data in a Storm topology to be published on a topic. It publishes messages based on the Storm tuple received and the `TupleToMessageMapper` provided by the client.
-
-A partitioned topic can also be used to publish messages on different partitions. In the implementation of the `TupleToMessageMapper`, a "key" needs to be provided in the message, which routes messages with the same key to the same partition.
-Here's an example bolt:
-
-```java
-
-TupleToMessageMapper tupleToMessageMapper = new TupleToMessageMapper() {
-
-    @Override
-    public TypedMessageBuilder<byte[]> toMessage(TypedMessageBuilder<byte[]> msgBuilder, Tuple tuple) {
-        String receivedMessage = tuple.getString(0);
-        // message processing
-        String processedMsg = receivedMessage + "-processed";
-        return msgBuilder.value(processedMsg.getBytes());
-    }
-
-    @Override
-    public void declareOutputFields(OutputFieldsDeclarer declarer) {
-        // declare the output fields
-    }
-};
-
-// Configure a Pulsar Bolt
-PulsarBoltConfiguration boltConf = new PulsarBoltConfiguration();
-boltConf.setServiceUrl("pulsar://broker.messaging.usw.example.com:6650");
-boltConf.setTopic("persistent://my-property/usw/my-ns/my-topic2");
-boltConf.setTupleToMessageMapper(tupleToMessageMapper);
-
-// Create a Pulsar Bolt
-PulsarBolt bolt = new PulsarBolt(boltConf);
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-brokers.md b/site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-brokers.md
deleted file mode 100644
index aea8f70f87a7d3..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-brokers.md
+++ /dev/null
@@ -1,286 +0,0 @@
----
-id: admin-api-brokers
-title: Managing Brokers
-sidebar_label: "Brokers"
-original_id: admin-api-brokers
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-> **Important**
->
-> This page only shows **some frequently used operations**.
->
-> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more information, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/).
->
-> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc.
->
-> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/).
-
-Pulsar brokers consist of two components:
-
-1. An HTTP server exposing a {@inject: rest:REST:/} interface for administration and [topic](reference-terminology.md#topic) lookup.
-2. A dispatcher that handles all Pulsar [message](reference-terminology.md#message) transfers.
-
-[Brokers](reference-terminology.md#broker) can be managed via:
-
-* The [`brokers`](reference-pulsar-admin.md#brokers) command of the [`pulsar-admin`](reference-pulsar-admin.md) tool
-* The `/admin/v2/brokers` endpoint of the admin {@inject: rest:REST:/} API
-* The `brokers` method of the {@inject: javadoc:PulsarAdmin:/admin/org/apache/pulsar/client/admin/PulsarAdmin.html} object in the [Java API](client-libraries-java.md)
-
-In addition to being configurable when you start them up, brokers can also be [dynamically configured](#dynamic-broker-configuration).
-
-> See the [Configuration](reference-configuration.md#broker) page for a full listing of broker-specific configuration parameters.
-
-## Brokers resources
-
-### List active brokers
-
-Fetch all available active brokers that are serving traffic, given the cluster name.
-
- -````mdx-code-block - - - -```shell - -$ pulsar-admin brokers list use - -``` - -``` - -broker1.use.org.com:8080 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/brokers/:cluster|operation/getActiveBrokers?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().getActiveBrokers(clusterName) - -``` - - - - -```` - -### Get the information of the leader broker - -Fetch the information of the leader broker, for example, the service url. - -````mdx-code-block - - - -```shell - -$ pulsar-admin brokers leader-broker - -``` - -``` - -BrokerInfo(serviceUrl=broker1.use.org.com:8080) - -``` - - - - -{@inject: endpoint|GET|/admin/v2/brokers/leaderBroker|operation/getLeaderBroker?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().getLeaderBroker() - -``` - -For the detail of the code above, see [here](https://github.com/apache/pulsar/blob/master/pulsar-client-admin/src/main/java/org/apache/pulsar/client/admin/internal/BrokersImpl.java#L80) - - - - -```` - -#### list of namespaces owned by a given broker - -It finds all namespaces which are owned and served by a given broker. - -````mdx-code-block - - - -```shell - -$ pulsar-admin brokers namespaces use \ - --url broker1.use.org.com:8080 - -``` - -```json - -{ - "my-property/use/my-ns/0x00000000_0xffffffff": { - "broker_assignment": "shared", - "is_controlled": false, - "is_active": true - } -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/brokers/:cluster/:broker/ownedNamespaces|operation/getOwnedNamespaes?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().getOwnedNamespaces(cluster,brokerUrl); - -``` - - - - -```` - -### Dynamic broker configuration - -One way to configure a Pulsar [broker](reference-terminology.md#broker) is to supply a [configuration](reference-configuration.md#broker) when the broker is [started up](reference-cli-tools.md#pulsar-broker). - -But since all broker configuration in Pulsar is stored in ZooKeeper, configuration values can also be dynamically updated *while the broker is running*. When you update broker configuration dynamically, ZooKeeper will notify the broker of the change and the broker will then override any existing configuration values. - -* The [`brokers`](reference-pulsar-admin.md#brokers) command for the [`pulsar-admin`](reference-pulsar-admin.md) tool has a variety of subcommands that enable you to manipulate a broker's configuration dynamically, enabling you to [update config values](#update-dynamic-configuration) and more. -* In the Pulsar admin {@inject: rest:REST:/} API, dynamic configuration is managed through the `/admin/v2/brokers/configuration` endpoint. - -### Update dynamic configuration - -````mdx-code-block - - - -The [`update-dynamic-config`](reference-pulsar-admin.md#brokers-update-dynamic-config) subcommand will update existing configuration. It takes two arguments: the name of the parameter and the new value using the `config` and `value` flag respectively. 
Here's an example for the [`brokerShutdownTimeoutMs`](reference-configuration.md#broker-brokerShutdownTimeoutMs) parameter: - -```shell - -$ pulsar-admin brokers update-dynamic-config --config brokerShutdownTimeoutMs --value 100 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/brokers/configuration/:configName/:configValue|operation/updateDynamicConfiguration?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().updateDynamicConfiguration(configName, configValue); - -``` - - - - -```` - -### List updated values - -Fetch a list of all potentially updatable configuration parameters. -````mdx-code-block - - - -```shell - -$ pulsar-admin brokers list-dynamic-config -brokerShutdownTimeoutMs - -``` - - - - -{@inject: endpoint|GET|/admin/v2/brokers/configuration|operation/getDynamicConfigurationName?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().getDynamicConfigurationNames(); - -``` - - - - -```` - -### List all - -Fetch a list of all parameters that have been dynamically updated. - -````mdx-code-block - - - -```shell - -$ pulsar-admin brokers get-all-dynamic-config -brokerShutdownTimeoutMs:100 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/brokers/configuration/values|operation/getAllDynamicConfigurations?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().getAllDynamicConfigurations(); - -``` - - - - -```` diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-clusters.md b/site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-clusters.md deleted file mode 100644 index ce085345640a4f..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-clusters.md +++ /dev/null @@ -1,318 +0,0 @@ ---- -id: admin-api-clusters -title: Managing Clusters -sidebar_label: "Clusters" -original_id: admin-api-clusters ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -> **Important** -> -> This page only shows **some frequently used operations**. -> -> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more information, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/). -> -> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. -> -> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/). - -Pulsar clusters consist of one or more Pulsar [brokers](reference-terminology.md#broker), one or more [BookKeeper](reference-terminology.md#bookkeeper) -servers (aka [bookies](reference-terminology.md#bookie)), and a [ZooKeeper](https://zookeeper.apache.org) cluster that provides configuration and coordination management. - -Clusters can be managed via: - -* The [`clusters`](reference-pulsar-admin.md#clusters) command of the [`pulsar-admin`](reference-pulsar-admin.md) tool -* The `/admin/v2/clusters` endpoint of the admin {@inject: rest:REST:/} API -* The `clusters` method of the {@inject: javadoc:PulsarAdmin:/admin/org/apache/pulsar/client/admin/PulsarAdmin} object in the [Java API](client-libraries-java.md) - -## Clusters resources - -### Provision - -New clusters can be provisioned using the admin interface. - -> Please note that this operation requires superuser privileges. 
````mdx-code-block
-
-
-
-You can provision a new cluster using the [`create`](reference-pulsar-admin.md#clusters-create) subcommand. Here's an example:
-
-```shell
-
-$ pulsar-admin clusters create cluster-1 \
-  --url http://my-cluster.org.com:8080 \
-  --broker-url pulsar://my-cluster.org.com:6650
-
-```
-
-
-
-
-{@inject: endpoint|PUT|/admin/v2/clusters/:cluster|operation/createCluster?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-ClusterData clusterData = new ClusterData(
-    serviceUrl,
-    serviceUrlTls,
-    brokerServiceUrl,
-    brokerServiceUrlTls
-);
-admin.clusters().createCluster(clusterName, clusterData);
-
-```
-
-
-
-
-````
-
-### Initialize cluster metadata
-
-When provisioning a new cluster, you need to initialize that cluster's [metadata](concepts-architecture-overview.md#metadata-store). When initializing cluster metadata, you need to specify all of the following:
-
-* The name of the cluster
-* The local ZooKeeper connection string for the cluster
-* The configuration store connection string for the entire instance
-* The web service URL for the cluster
-* A broker service URL enabling interaction with the [brokers](reference-terminology.md#broker) in the cluster
-
-You must initialize cluster metadata *before* starting up any [brokers](admin-api-brokers.md) that will belong to the cluster.
-
-> **No cluster metadata initialization through the REST API or the Java admin API**
->
-> Unlike most other admin functions in Pulsar, cluster metadata initialization cannot be performed via the admin REST API
-> or the admin Java client, as metadata initialization involves communicating with ZooKeeper directly.
-> Instead, you can use the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool, in particular
-> the [`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata) command.
-
-Here's an example cluster metadata initialization command:
-
-```shell
-
-bin/pulsar initialize-cluster-metadata \
-  --cluster us-west \
-  --zookeeper zk1.us-west.example.com:2181 \
-  --configuration-store zk1.us-west.example.com:2184 \
-  --web-service-url http://pulsar.us-west.example.com:8080/ \
-  --web-service-url-tls https://pulsar.us-west.example.com:8443/ \
-  --broker-service-url pulsar://pulsar.us-west.example.com:6650/ \
-  --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651/
-
-```
-
-You'll need to use `--*-tls` flags only if you're using [TLS authentication](security-tls-authentication.md) in your instance.
-
-### Get configuration
-
-You can fetch the [configuration](reference-configuration.md) for an existing cluster at any time.
-
-````mdx-code-block
-
-
-
-Use the [`get`](reference-pulsar-admin.md#clusters-get) subcommand and specify the name of the cluster. Here's an example:
-
-```shell
-
-$ pulsar-admin clusters get cluster-1
-{
-  "serviceUrl": "http://my-cluster.org.com:8080/",
-  "serviceUrlTls": null,
-  "brokerServiceUrl": "pulsar://my-cluster.org.com:6650/",
-  "brokerServiceUrlTls": null,
-  "peerClusterNames": null
-}
-
-```
-
-
-
-
-{@inject: endpoint|GET|/admin/v2/clusters/:cluster|operation/getCluster?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.clusters().getCluster(clusterName);
-
-```
-
-
-
-
-````
-
-### Update
-
-You can update the configuration for an existing cluster at any time.
-
-````mdx-code-block
-
-
-
-Use the [`update`](reference-pulsar-admin.md#clusters-update) subcommand and specify new configuration values using flags.
-
-```shell
-
-$ pulsar-admin clusters update cluster-1 \
-  --url http://my-cluster.org.com:4081 \
-  --broker-url pulsar://my-cluster.org.com:3350
-
-```
-
-
-
-
-{@inject: endpoint|POST|/admin/v2/clusters/:cluster|operation/updateCluster?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-ClusterData clusterData = new ClusterData(
-    serviceUrl,
-    serviceUrlTls,
-    brokerServiceUrl,
-    brokerServiceUrlTls
-);
-admin.clusters().updateCluster(clusterName, clusterData);
-
-```
-
-
-
-
-````
-
-### Delete
-
-Clusters can be deleted from a Pulsar [instance](reference-terminology.md#instance).
-
-````mdx-code-block
-
-
-
-Use the [`delete`](reference-pulsar-admin.md#clusters-delete) subcommand and specify the name of the cluster.
-
-```
-
-$ pulsar-admin clusters delete cluster-1
-
-```
-
-
-
-
-{@inject: endpoint|DELETE|/admin/v2/clusters/:cluster|operation/deleteCluster?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.clusters().deleteCluster(clusterName);
-
-```
-
-
-
-
-````
-
-### List
-
-You can fetch a list of all clusters in a Pulsar [instance](reference-terminology.md#instance).
-
-````mdx-code-block
-
-
-
-Use the [`list`](reference-pulsar-admin.md#clusters-list) subcommand.
-
-```shell
-
-$ pulsar-admin clusters list
-cluster-1
-cluster-2
-
-```
-
-
-
-
-{@inject: endpoint|GET|/admin/v2/clusters|operation/getClusters?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.clusters().getClusters();
-
-```
-
-
-
-
-````
-
-### Update peer-cluster data
-
-Peer clusters can be configured for a given cluster in a Pulsar [instance](reference-terminology.md#instance).
-
-````mdx-code-block
-
-
-
-Use the [`update-peer-clusters`](reference-pulsar-admin.md#clusters-update-peer-clusters) subcommand and specify the list of peer-cluster names.
-
-```
-
-$ pulsar-admin clusters update-peer-clusters cluster-1 --peer-clusters cluster-2
-
-```
-
-
-
-
-{@inject: endpoint|POST|/admin/v2/clusters/:cluster/peers|operation/setPeerClusterNames?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.clusters().updatePeerClusterNames(clusterName, peerClusterList);
-
-```
-
-
-
-
-````
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-functions.md b/site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-functions.md
deleted file mode 100644
index d6693fda9240e3..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-functions.md
+++ /dev/null
@@ -1,830 +0,0 @@
----
-id: admin-api-functions
-title: Manage Functions
-sidebar_label: "Functions"
-original_id: admin-api-functions
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-> **Important**
->
-> This page only shows **some frequently used operations**.
->
-> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more information, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/).
->
-> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc.
->
-> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/).
- -**Pulsar Functions** are lightweight compute processes that - -* consume messages from one or more Pulsar topics -* apply a user-supplied processing logic to each message -* publish the results of the computation to another topic - -Functions can be managed via the following methods. - -Method | Description ----|--- -**Admin CLI** | The [`functions`](reference-pulsar-admin.md#functions) command of the [`pulsar-admin`](reference-pulsar-admin.md) tool. -**REST API** |The `/admin/v3/functions` endpoint of the admin {@inject: rest:REST:/} API. -**Java Admin API**| The `functions` method of the {@inject: javadoc:PulsarAdmin:/admin/org/apache/pulsar/client/admin/PulsarAdmin} object in the [Java API](client-libraries-java.md). - -## Function resources - -You can perform the following operations on functions. - -### Create a function - -You can create a Pulsar function in cluster mode (deploy it on a Pulsar cluster) using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`create`](reference-pulsar-admin.md#functions-create) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions create \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --inputs test-input-topic \ - --output persistent://public/default/test-output-topic \ - --classname org.apache.pulsar.functions.api.examples.ExclamationFunction \ - --jar /examples/api-examples.jar - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName|operation/registerFunction?version=@pulsar:version_number@} - - - - -```java - -FunctionConfig functionConfig = new FunctionConfig(); -functionConfig.setTenant(tenant); -functionConfig.setNamespace(namespace); -functionConfig.setName(functionName); -functionConfig.setRuntime(FunctionConfig.Runtime.JAVA); -functionConfig.setParallelism(1); -functionConfig.setClassName("org.apache.pulsar.functions.api.examples.ExclamationFunction"); -functionConfig.setProcessingGuarantees(FunctionConfig.ProcessingGuarantees.ATLEAST_ONCE); -functionConfig.setTopicsPattern(sourceTopicPattern); -functionConfig.setSubName(subscriptionName); -functionConfig.setAutoAck(true); -functionConfig.setOutput(sinkTopic); -admin.functions().createFunction(functionConfig, fileName); - -``` - - - - -```` - -### Update a function - -You can update a Pulsar function that has been deployed to a Pulsar cluster using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`update`](reference-pulsar-admin.md#functions-update) subcommand. 
- -**Example** - -```shell - -$ pulsar-admin functions update \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --output persistent://public/default/update-output-topic \ - # other options - -``` - - - - -{@inject: endpoint|PUT|/admin/v3/functions/:tenant/:namespace/:functionName|operation/updateFunction?version=@pulsar:version_number@} - - - - -```java - -FunctionConfig functionConfig = new FunctionConfig(); -functionConfig.setTenant(tenant); -functionConfig.setNamespace(namespace); -functionConfig.setName(functionName); -functionConfig.setRuntime(FunctionConfig.Runtime.JAVA); -functionConfig.setParallelism(1); -functionConfig.setClassName("org.apache.pulsar.functions.api.examples.ExclamationFunction"); -UpdateOptions updateOptions = new UpdateOptions(); -updateOptions.setUpdateAuthData(updateAuthData); -admin.functions().updateFunction(functionConfig, userCodeFile, updateOptions); - -``` - - - - -```` - -### Start an instance of a function - -You can start a stopped function instance with `instance-id` using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`start`](reference-pulsar-admin.md#functions-start) subcommand. - -```shell - -$ pulsar-admin functions start \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/start|operation/startFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().startFunction(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Start all instances of a function - -You can start all stopped function instances using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`start`](reference-pulsar-admin.md#functions-start) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions start \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/start|operation/startFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().startFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### Stop an instance of a function - -You can stop a function instance with `instance-id` using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`stop`](reference-pulsar-admin.md#functions-stop) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions stop \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/stop|operation/stopFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().stopFunction(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Stop all instances of a function - -You can stop all function instances using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`stop`](reference-pulsar-admin.md#functions-stop) subcommand. 
- -**Example** - -```shell - -$ pulsar-admin functions stop \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/stop|operation/stopFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().stopFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### Restart an instance of a function - -Restart a function instance with `instance-id` using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`restart`](reference-pulsar-admin.md#functions-restart) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions restart \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/restart|operation/restartFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().restartFunction(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Restart all instances of a function - -You can restart all function instances using Admin CLI, REST API or Java admin API. - -````mdx-code-block - - - -Use the [`restart`](reference-pulsar-admin.md#functions-restart) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions restart \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/restart|operation/restartFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().restartFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### List all functions - -You can list all Pulsar functions running under a specific tenant and namespace using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`list`](reference-pulsar-admin.md#functions-list) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions list \ - --tenant public \ - --namespace default - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace|operation/listFunctions?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctions(tenant, namespace); - -``` - - - - -```` - -### Delete a function - -You can delete a Pulsar function that is running on a Pulsar cluster using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`delete`](reference-pulsar-admin.md#functions-delete) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions delete \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) - -``` - - - - -{@inject: endpoint|DELETE|/admin/v3/functions/:tenant/:namespace/:functionName|operation/deregisterFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().deleteFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### Get info about a function - -You can get information about a Pulsar function currently running in cluster mode using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`get`](reference-pulsar-admin.md#functions-get) subcommand. 
- -**Example** - -```shell - -$ pulsar-admin functions get \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName|operation/getFunctionInfo?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### Get status of an instance of a function - -You can get the current status of a Pulsar function instance with `instance-id` using Admin CLI, REST API or Java Admin API. -````mdx-code-block - - - -Use the [`status`](reference-pulsar-admin.md#functions-status) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions status \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/status|operation/getFunctionInstanceStatus?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionStatus(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Get status of all instances of a function - -You can get the current status of a Pulsar function instance using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`status`](reference-pulsar-admin.md#functions-status) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions status \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/status|operation/getFunctionStatus?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionStatus(tenant, namespace, functionName); - -``` - - - - -```` - -### Get stats of an instance of a function - -You can get the current stats of a Pulsar Function instance with `instance-id` using Admin CLI, REST API or Java admin API. -````mdx-code-block - - - -Use the [`stats`](reference-pulsar-admin.md#functions-stats) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions stats \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/stats|operation/getFunctionInstanceStats?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionStats(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Get stats of all instances of a function - -You can get the current stats of a Pulsar function using Admin CLI, REST API or Java admin API. - -````mdx-code-block - - - -Use the [`stats`](reference-pulsar-admin.md#functions-stats) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions stats \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/stats|operation/getFunctionStats?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionStats(tenant, namespace, functionName); - -``` - - - - -```` - -### Trigger a function - -You can trigger a specified Pulsar function with a supplied value using Admin CLI, REST API or Java admin API. - -````mdx-code-block - - - -Use the [`trigger`](reference-pulsar-admin.md#functions-trigger) subcommand. 
- -**Example** - -```shell - -$ pulsar-admin functions trigger \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --topic (the name of input topic) \ - --trigger-value \"hello pulsar\" - # or --trigger-file (the path of trigger file) - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/trigger|operation/triggerFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().triggerFunction(tenant, namespace, functionName, topic, triggerValue, triggerFile); - -``` - - - - -```` - -### Put state associated with a function - -You can put the state associated with a Pulsar function using Admin CLI, REST API or Java admin API. - -````mdx-code-block - - - -Use the [`putstate`](reference-pulsar-admin.md#functions-putstate) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions putstate \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --state "{\"key\":\"pulsar\", \"stringValue\":\"hello pulsar\"}" - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/state/:key|operation/putFunctionState?version=@pulsar:version_number@} - - - - -```java - -TypeReference typeRef = new TypeReference() {}; -FunctionState stateRepr = ObjectMapperFactory.getThreadLocal().readValue(state, typeRef); -admin.functions().putFunctionState(tenant, namespace, functionName, stateRepr); - -``` - - - - -```` - -### Fetch state associated with a function - -You can fetch the current state associated with a Pulsar function using Admin CLI, REST API or Java admin API. - -````mdx-code-block - - - -Use the [`querystate`](reference-pulsar-admin.md#functions-querystate) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions querystate \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --key (the key of state) - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/state/:key|operation/getFunctionState?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionState(tenant, namespace, functionName, key); - -``` - - - - -```` \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-namespaces.md b/site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-namespaces.md deleted file mode 100644 index a5197c263ca5e1..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-namespaces.md +++ /dev/null @@ -1,1324 +0,0 @@ ---- -id: admin-api-namespaces -title: Managing Namespaces -sidebar_label: "Namespaces" -original_id: admin-api-namespaces ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -> **Important** -> -> This page only shows **some frequently used operations**. -> -> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more information, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/). -> -> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. -> -> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/). 
- -Pulsar [namespaces](reference-terminology.md#namespace) are logical groupings of [topics](reference-terminology.md#topic). - -Namespaces can be managed via: - -* The [`namespaces`](reference-pulsar-admin.md#clusters) command of the [`pulsar-admin`](reference-pulsar-admin.md) tool -* The `/admin/v2/namespaces` endpoint of the admin {@inject: rest:REST:/} API -* The `namespaces` method of the {@inject: javadoc:PulsarAdmin:/admin/org/apache/pulsar/client/admin/PulsarAdmin} object in the [Java API](client-libraries-java.md) - -## Namespaces resources - -### Create namespaces - -You can create new namespaces under a given [tenant](reference-terminology.md#tenant). - -````mdx-code-block - - - -Use the [`create`](reference-pulsar-admin.md#namespaces-create) subcommand and specify the namespace by name: - -```shell - -$ pulsar-admin namespaces create test-tenant/test-namespace - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace|operation/createNamespace?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().createNamespace(namespace); - -``` - - - - -```` - -### Get policies - -You can fetch the current policies associated with a namespace at any time. - -````mdx-code-block - - - -Use the [`policies`](reference-pulsar-admin.md#namespaces-policies) subcommand and specify the namespace: - -```shell - -$ pulsar-admin namespaces policies test-tenant/test-namespace -{ - "auth_policies": { - "namespace_auth": {}, - "destination_auth": {} - }, - "replication_clusters": [], - "bundles_activated": true, - "bundles": { - "boundaries": [ - "0x00000000", - "0xffffffff" - ], - "numBundles": 1 - }, - "backlog_quota_map": {}, - "persistence": null, - "latency_stats_sample_rate": {}, - "message_ttl_in_seconds": 0, - "retention_policies": null, - "deleted": false -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace|operation/getPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getPolicies(namespace); - -``` - - - - -```` - -### List namespaces - -You can list all namespaces within a given Pulsar [tenant](reference-terminology.md#tenant). - -````mdx-code-block - - - -Use the [`list`](reference-pulsar-admin.md#namespaces-list) subcommand and specify the tenant: - -```shell - -$ pulsar-admin namespaces list test-tenant -test-tenant/ns1 -test-tenant/ns2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant|operation/getTenantNamespaces?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getNamespaces(tenant); - -``` - - - - -```` - -### Delete namespaces - -You can delete existing namespaces from a tenant. - -````mdx-code-block - - - -Use the [`delete`](reference-pulsar-admin.md#namespaces-delete) subcommand and specify the namespace: - -```shell - -$ pulsar-admin namespaces delete test-tenant/ns1 - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace|operation/deleteNamespace?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().deleteNamespace(namespace); - -``` - - - - -```` - -### Configure replication clusters - -#### Set replication cluster - -It sets replication clusters for a namespace, so Pulsar can internally replicate publish message from one colo to another colo. 
-
-````mdx-code-block
-
-```
-
-$ pulsar-admin namespaces set-clusters test-tenant/ns1 \
-  --clusters cl1
-
-```
-
-{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/replication|operation/setNamespaceReplicationClusters?version=@pulsar:version_number@}
-
-```java
-
-admin.namespaces().setNamespaceReplicationClusters(namespace, clusters);
-
-```
-
-````
-
-#### Get replication cluster
-
-This command lists the replication clusters configured for a given namespace.
-
-````mdx-code-block
-
-```
-
-$ pulsar-admin namespaces get-clusters test-tenant/ns1
-
-```
-
-```
-
-cl1
-
-```
-
-{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/replication|operation/getNamespaceReplicationClusters?version=@pulsar:version_number@}
-
-```java
-
-admin.namespaces().getNamespaceReplicationClusters(namespace)
-
-```
-
-````
-
-### Configure backlog quota policies
-
-#### Set backlog quota policies
-
-A backlog quota lets the broker restrict the bandwidth/storage of a namespace once it reaches a certain threshold. An admin can set the limit and choose the action the broker takes when the limit is reached:
-
-  1. producer_request_hold: the broker holds the produce request and does not persist its payload.
-
-  2. producer_exception: the broker disconnects the client with an exception.
-
-  3. consumer_backlog_eviction: the broker starts discarding backlog messages.
-
-  The backlog quota restriction is applied by defining the backlog-quota-type `destination_storage`.
-
-````mdx-code-block
-
-```
-
-$ pulsar-admin namespaces set-backlog-quota --limit 10G --limitTime 36000 --policy producer_request_hold test-tenant/ns1
-
-```
-
-```
-
-N/A
-
-```
-
-{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/setBacklogQuota?version=@pulsar:version_number@}
-
-```java
-
-admin.namespaces().setBacklogQuota(namespace, new BacklogQuota(limit, limitTime, policy))
-
-```
-
-````
-
-#### Get backlog quota policies
-
-This command shows the configured backlog quota of a given namespace.
-
-````mdx-code-block
-
-```
-
-$ pulsar-admin namespaces get-backlog-quotas test-tenant/ns1
-
-```
-
-```json
-
-{
-  "destination_storage": {
-    "limit": 10,
-    "policy": "producer_request_hold"
-  }
-}
-
-```
-
-{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/backlogQuotaMap|operation/getBacklogQuotaMap?version=@pulsar:version_number@}
-
-```java
-
-admin.namespaces().getBacklogQuotaMap(namespace);
-
-```
-
-````
-
-#### Remove backlog quota policies
-
-This command removes the backlog quota policies of a given namespace.
-
-````mdx-code-block
-
-```
-
-$ pulsar-admin namespaces remove-backlog-quota test-tenant/ns1
-
-```
-
-```
-
-N/A
-
-```
-
-{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/removeBacklogQuota?version=@pulsar:version_number@}
-
-```java
-
-admin.namespaces().removeBacklogQuota(namespace, backlogQuotaType)
-
-```
-
-````
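-
-Putting concrete values on the Java call above, the following is a minimal sketch, assuming the three-argument `BacklogQuota(limit, limitTime, policy)` constructor shown in the example; the size limit is expressed in bytes and the time limit in seconds:
-
-```java
-
-import org.apache.pulsar.client.admin.PulsarAdmin;
-import org.apache.pulsar.common.policies.data.BacklogQuota;
-
-public class BacklogQuotaExample {
-    public static void main(String[] args) throws Exception {
-        PulsarAdmin admin = PulsarAdmin.builder()
-                .serviceHttpUrl("http://localhost:8080") // assumes a local broker
-                .build();
-
-        // Equivalent of: set-backlog-quota --limit 10G --limitTime 36000 --policy producer_request_hold
-        long tenGigabytes = 10L * 1024 * 1024 * 1024; // bytes
-        admin.namespaces().setBacklogQuota("test-tenant/ns1",
-                new BacklogQuota(tenGigabytes, 36000, BacklogQuota.RetentionPolicy.producer_request_hold));
-
-        admin.close();
-    }
-}
-
-```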
-### Configure persistence policies
-
-#### Set persistence policies
-
-Persistence policies allow you to configure the persistence level for all topic messages under a given namespace.
-
-  - Bookkeeper-ack-quorum: The number of acks (guaranteed copies) to wait for each entry. Default: 0.
-
-  - Bookkeeper-ensemble: The number of bookies to use for a topic. Default: 0.
-
-  - Bookkeeper-write-quorum: The number of writes to make for each entry. Default: 0.
-
-  - Ml-mark-delete-max-rate: The throttling rate of the mark-delete operation (0 means no throttle). Default: 0.0.
-
-````mdx-code-block
-
-```
-
-$ pulsar-admin namespaces set-persistence --bookkeeper-ack-quorum 2 --bookkeeper-ensemble 3 --bookkeeper-write-quorum 2 --ml-mark-delete-max-rate 0 test-tenant/ns1
-
-```
-
-```
-
-N/A
-
-```
-
-{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/setPersistence?version=@pulsar:version_number@}
-
-```java
-
-admin.namespaces().setPersistence(namespace, new PersistencePolicies(bookkeeperEnsemble, bookkeeperWriteQuorum, bookkeeperAckQuorum, managedLedgerMaxMarkDeleteRate))
-
-```
-
-````
-
-#### Get persistence policies
-
-This command shows the configured persistence policies of a given namespace.
-
-````mdx-code-block
-
-```
-
-$ pulsar-admin namespaces get-persistence test-tenant/ns1
-
-```
-
-```json
-
-{
-  "bookkeeperEnsemble": 3,
-  "bookkeeperWriteQuorum": 2,
-  "bookkeeperAckQuorum": 2,
-  "managedLedgerMaxMarkDeleteRate": 0
-}
-
-```
-
-{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/getPersistence?version=@pulsar:version_number@}
-
-```java
-
-admin.namespaces().getPersistence(namespace)
-
-```
-
-````
-
-### Configure namespace bundles
-
-#### Unload namespace bundles
-
-A namespace bundle is a virtual group of topics that belong to the same namespace. If a broker gets overloaded by the number of bundles it serves, this command can unload a bundle from that broker so that it can be served by another, less-loaded broker. The namespace bundle ID ranges from 0x00000000 to 0xffffffff.
-
-````mdx-code-block
-
-```
-
-$ pulsar-admin namespaces unload --bundle 0x00000000_0xffffffff test-tenant/ns1
-
-```
-
-```
-
-N/A
-
-```
-
-{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace/:bundle/unload|operation/unloadNamespaceBundle?version=@pulsar:version_number@}
-
-```java
-
-admin.namespaces().unloadNamespaceBundle(namespace, bundle)
-
-```
-
-````
-
-#### Split namespace bundles
-
-Each namespace bundle can contain multiple topics, and each bundle can be served by only one broker.
-If a single bundle creates excessive load on a broker, an admin can split the bundle using this command, permitting one or more of the new bundles to be unloaded and thus spreading the load across brokers.
-
-````mdx-code-block
-
-```
-
-$ pulsar-admin namespaces split-bundle --bundle 0x00000000_0xffffffff test-tenant/ns1
-
-```
-
-```
-
-N/A
-
-```
-
-{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace/:bundle/split|operation/splitNamespaceBundle?version=@pulsar:version_number@}
-
-```java
-
-admin.namespaces().splitNamespaceBundle(namespace, bundle)
-
-```
-
-````
-
-### Configure message TTL
-
-#### Set message-ttl
-
-This command configures the time-to-live (TTL) duration of messages, in seconds.
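-
-For instance, the following is a minimal Java round-trip sketch (the namespace name is illustrative) that sets a 100-second TTL and reads it back:
-
-```java
-
-import org.apache.pulsar.client.admin.PulsarAdmin;
-
-public class MessageTtlExample {
-    public static void main(String[] args) throws Exception {
-        PulsarAdmin admin = PulsarAdmin.builder()
-                .serviceHttpUrl("http://localhost:8080") // assumes a local broker
-                .build();
-
-        String namespace = "test-tenant/ns1";
-        // Messages left unacknowledged after 100 seconds are expired.
-        admin.namespaces().setNamespaceMessageTTL(namespace, 100);
-        int ttl = admin.namespaces().getNamespaceMessageTTL(namespace); // returns 100
-        System.out.println("Message TTL: " + ttl + "s");
-
-        admin.close();
-    }
-}
-
-```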
-
-````mdx-code-block
-
-```
-
-$ pulsar-admin namespaces set-message-ttl --messageTTL 100 test-tenant/ns1
-
-```
-
-```
-
-N/A
-
-```
-
-{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/setNamespaceMessageTTL?version=@pulsar:version_number@}
-
-```java
-
-admin.namespaces().setNamespaceMessageTTL(namespace, messageTTL)
-
-```
-
-````
-
-#### Get message-ttl
-
-This command shows the configured message TTL of a given namespace.
-
-````mdx-code-block
-
-```
-
-$ pulsar-admin namespaces get-message-ttl test-tenant/ns1
-
-```
-
-```
-
-100
-
-```
-
-{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/getNamespaceMessageTTL?version=@pulsar:version_number@}
-
-```java
-
-admin.namespaces().getNamespaceMessageTTL(namespace)
-
-```
-
-````
-
-#### Remove message-ttl
-
-This command removes the message TTL of a given namespace.
-
-````mdx-code-block
-
-```
-
-$ pulsar-admin namespaces remove-message-ttl test-tenant/ns1
-
-```
-
-```
-
-100
-
-```
-
-{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/removeNamespaceMessageTTL?version=@pulsar:version_number@}
-
-```java
-
-admin.namespaces().removeNamespaceMessageTTL(namespace)
-
-```
-
-````
-
-
-### Clear backlog
-
-#### Clear namespace backlog
-
-This command clears the message backlog of all the topics that belong to a specific namespace. You can also clear the backlog of a specific subscription.
-
-````mdx-code-block
-
-```
-
-$ pulsar-admin namespaces clear-backlog --sub my-subscription test-tenant/ns1
-
-```
-
-```
-
-N/A
-
-```
-
-{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/clearBacklog|operation/clearNamespaceBacklogForSubscription?version=@pulsar:version_number@}
-
-```java
-
-admin.namespaces().clearNamespaceBacklogForSubscription(namespace, subscription)
-
-```
-
-````
-
-#### Clear bundle backlog
-
-This command clears the message backlog of all the topics that belong to a specific namespace bundle. You can also clear the backlog of a specific subscription.
-
-````mdx-code-block
-
-```
-
-$ pulsar-admin namespaces clear-backlog --bundle 0x00000000_0xffffffff --sub my-subscription test-tenant/ns1
-
-```
-
-```
-
-N/A
-
-```
-
-{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/:bundle/clearBacklog|operation/clearNamespaceBundleBacklogForSubscription?version=@pulsar:version_number@}
-
-```java
-
-admin.namespaces().clearNamespaceBundleBacklogForSubscription(namespace, bundle, subscription)
-
-```
-
-````
-
-### Configure retention
-
-#### Set retention
-
-Each namespace contains multiple topics, and each topic's retention size (storage size) should not exceed a specific threshold, or messages should be stored only for a certain period. This command configures the retention size and time of topics in a given namespace.
-
-````mdx-code-block
-
-```
-
-$ pulsar-admin namespaces set-retention --size 100 --time 10 test-tenant/ns1
-
-```
-
-```
-
-N/A
-
-```
-
-{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/retention|operation/setRetention?version=@pulsar:version_number@}
-
-```java
-
-admin.namespaces().setRetention(namespace, new RetentionPolicies(retentionTimeInMin, retentionSizeInMB))
-
-```
-
-````
-
-#### Get retention
-
-This command shows the retention configuration of a given namespace.
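-
-As a round-trip illustration, the following minimal Java sketch sets the retention shown above and reads it back; note that the `RetentionPolicies` constructor takes time (in minutes) first and size (in MB) second, the reverse of the CLI flag order (the getter names are assumptions based on the JSON field names above):
-
-```java
-
-import org.apache.pulsar.client.admin.PulsarAdmin;
-import org.apache.pulsar.common.policies.data.RetentionPolicies;
-
-public class RetentionExample {
-    public static void main(String[] args) throws Exception {
-        PulsarAdmin admin = PulsarAdmin.builder()
-                .serviceHttpUrl("http://localhost:8080") // assumes a local broker
-                .build();
-
-        String namespace = "test-tenant/ns1";
-        // 10 minutes of retention time, 100 MB of retention size.
-        admin.namespaces().setRetention(namespace, new RetentionPolicies(10, 100));
-
-        RetentionPolicies retention = admin.namespaces().getRetention(namespace);
-        System.out.println(retention.getRetentionTimeInMinutes() + " min / "
-                + retention.getRetentionSizeInMB() + " MB");
-
-        admin.close();
-    }
-}
-
-```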
- -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-retention test-tenant/ns1 - -``` - -```json - -{ - "retentionTimeInMinutes": 10, - "retentionSizeInMB": 100 -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/retention|operation/getRetention?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getRetention(namespace) - -``` - - - - -```` - -### Configure dispatch throttling for topics - -#### Set dispatch throttling for topics - -It sets message dispatch rate for all the topics under a given namespace. -The dispatch rate can be restricted by the number of messages per X seconds (`msg-dispatch-rate`) or by the number of message-bytes per X second (`byte-dispatch-rate`). -dispatch rate is in second and it can be configured with `dispatch-rate-period`. Default value of `msg-dispatch-rate` and `byte-dispatch-rate` is -1 which -disables the throttling. - -:::note - -- If neither `clusterDispatchRate` nor `topicDispatchRate` is configured, dispatch throttling is disabled. - -- If `topicDispatchRate` is not configured, `clusterDispatchRate` takes effect. - -- If `topicDispatchRate` is configured, `topicDispatchRate` takes effect. - -::: - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-dispatch-rate test-tenant/ns1 \ - --msg-dispatch-rate 1000 \ - --byte-dispatch-rate 1048576 \ - --dispatch-rate-period 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/dispatchRate|operation/setDispatchRate?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setDispatchRate(namespace, new DispatchRate(1000, 1048576, 1)) - -``` - - - - -```` - -#### Get configured message-rate for topics - -It shows configured message-rate for the namespace (topics under this namespace can dispatch this many messages per second) - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-dispatch-rate test-tenant/ns1 - -``` - -```json - -{ - "dispatchThrottlingRatePerTopicInMsg" : 1000, - "dispatchThrottlingRatePerTopicInByte" : 1048576, - "ratePeriodInSecond" : 1 -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/dispatchRate|operation/getDispatchRate?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getDispatchRate(namespace) - -``` - - - - -```` - -### Configure dispatch throttling for subscription - -#### Set dispatch throttling for subscription - -It sets message dispatch rate for all the subscription of topics under a given namespace. -The dispatch rate can be restricted by the number of messages per X seconds (`msg-dispatch-rate`) or by the number of message-bytes per X second (`byte-dispatch-rate`). -dispatch rate is in second and it can be configured with `dispatch-rate-period`. Default value of `msg-dispatch-rate` and `byte-dispatch-rate` is -1 which -disables the throttling. 
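-
-Since a rate of -1 disables throttling, the following minimal Java sketch (values are illustrative, using the `DispatchRate(msgRate, byteRate, period)` constructor from the examples below) first throttles the subscriptions of a namespace and then removes the throttle again:
-
-```java
-
-import org.apache.pulsar.client.admin.PulsarAdmin;
-import org.apache.pulsar.common.policies.data.DispatchRate;
-
-public class SubscriptionDispatchRateExample {
-    public static void main(String[] args) throws Exception {
-        PulsarAdmin admin = PulsarAdmin.builder()
-                .serviceHttpUrl("http://localhost:8080") // assumes a local broker
-                .build();
-
-        String namespace = "test-tenant/ns1";
-        // Throttle each subscription to 1000 msgs and 1 MiB per 1-second period.
-        admin.namespaces().setSubscriptionDispatchRate(namespace, new DispatchRate(1000, 1048576, 1));
-        // -1 disables the corresponding limit again.
-        admin.namespaces().setSubscriptionDispatchRate(namespace, new DispatchRate(-1, -1, 1));
-
-        admin.close();
-    }
-}
-
-```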
- -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-subscription-dispatch-rate test-tenant/ns1 \ - --msg-dispatch-rate 1000 \ - --byte-dispatch-rate 1048576 \ - --dispatch-rate-period 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/subscriptionDispatchRate|operation/setDispatchRate?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setSubscriptionDispatchRate(namespace, new DispatchRate(1000, 1048576, 1)) - -``` - - - - -```` - -#### Get configured message-rate for subscription - -It shows configured message-rate for the namespace (topics under this namespace can dispatch this many messages per second) - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-subscription-dispatch-rate test-tenant/ns1 - -``` - -```json - -{ - "dispatchThrottlingRatePerTopicInMsg" : 1000, - "dispatchThrottlingRatePerTopicInByte" : 1048576, - "ratePeriodInSecond" : 1 -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/subscriptionDispatchRate|operation/getDispatchRate?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getSubscriptionDispatchRate(namespace) - -``` - - - - -```` - -### Configure dispatch throttling for replicator - -#### Set dispatch throttling for replicator - -It sets message dispatch rate for all the replicator between replication clusters under a given namespace. -The dispatch rate can be restricted by the number of messages per X seconds (`msg-dispatch-rate`) or by the number of message-bytes per X second (`byte-dispatch-rate`). -dispatch rate is in second and it can be configured with `dispatch-rate-period`. Default value of `msg-dispatch-rate` and `byte-dispatch-rate` is -1 which -disables the throttling. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-replicator-dispatch-rate test-tenant/ns1 \ - --msg-dispatch-rate 1000 \ - --byte-dispatch-rate 1048576 \ - --dispatch-rate-period 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/replicatorDispatchRate|operation/setDispatchRate?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setReplicatorDispatchRate(namespace, new DispatchRate(1000, 1048576, 1)) - -``` - - - - -```` - -#### Get configured message-rate for replicator - -It shows configured message-rate for the namespace (topics under this namespace can dispatch this many messages per second) - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-replicator-dispatch-rate test-tenant/ns1 - -``` - -```json - -{ - "dispatchThrottlingRatePerTopicInMsg" : 1000, - "dispatchThrottlingRatePerTopicInByte" : 1048576, - "ratePeriodInSecond" : 1 -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/replicatorDispatchRate|operation/getDispatchRate?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getReplicatorDispatchRate(namespace) - -``` - - - - -```` - -### Configure deduplication snapshot interval - -#### Get deduplication snapshot interval - -It shows configured `deduplicationSnapshotInterval` for a namespace (Each topic under the namespace will take a deduplication snapshot according to this interval) - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-deduplication-snapshot-interval test-tenant/ns1 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/deduplicationSnapshotInterval|operation/getDeduplicationSnapshotInterval?version=@pulsar:version_number@} - - - - -```java - 
-admin.namespaces().getDeduplicationSnapshotInterval(namespace) - -``` - - - - -```` - -#### Set deduplication snapshot interval - -Set configured `deduplicationSnapshotInterval` for a namespace. Each topic under the namespace will take a deduplication snapshot according to this interval. -`brokerDeduplicationEnabled` must be set to `true` for this property to take effect. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-deduplication-snapshot-interval test-tenant/ns1 --interval 1000 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/deduplicationSnapshotInterval|operation/setDeduplicationSnapshotInterval?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setDeduplicationSnapshotInterval(namespace, 1000) - -``` - - - - -```` - -#### Remove deduplication snapshot interval - -Remove configured `deduplicationSnapshotInterval` of a namespace (Each topic under the namespace will take a deduplication snapshot according to this interval) - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces remove-deduplication-snapshot-interval test-tenant/ns1 - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/deduplicationSnapshotInterval|operation/deleteDeduplicationSnapshotInterval?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().removeDeduplicationSnapshotInterval(namespace) - -``` - - - - -```` - -### Namespace isolation - -You can use the [Pulsar isolation policy](administration-isolation.md) to allocate resources (broker and bookie) for a namespace. - -### Unload namespaces from a broker - -You can unload a namespace, or a [namespace bundle](reference-terminology.md#namespace-bundle), from the Pulsar [broker](reference-terminology.md#broker) that is currently responsible for it. - -#### pulsar-admin - -Use the [`unload`](reference-pulsar-admin.md#unload) subcommand of the [`namespaces`](reference-pulsar-admin.md#namespaces) command. - -````mdx-code-block - - - -```shell - -$ pulsar-admin namespaces unload my-tenant/my-ns - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace/unload|operation/unloadNamespace?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().unload(namespace) - -``` - - - - -```` diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-non-partitioned-topics.md b/site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-non-partitioned-topics.md deleted file mode 100644 index e6347bb8c363a1..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-non-partitioned-topics.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -id: admin-api-non-partitioned-topics -title: Managing non-partitioned topics -sidebar_label: "Non-partitioned topics" -original_id: admin-api-non-partitioned-topics ---- - -For details of the content, refer to [manage topics](admin-api-topics.md). 
\ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-non-persistent-topics.md b/site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-non-persistent-topics.md deleted file mode 100644 index 3126a6494c7153..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-non-persistent-topics.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -id: admin-api-non-persistent-topics -title: Managing non-persistent topics -sidebar_label: "Non-Persistent topics" -original_id: admin-api-non-persistent-topics ---- - -For details of the content, refer to [manage topics](admin-api-topics.md). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-overview.md b/site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-overview.md deleted file mode 100644 index 6aa4aa067ae8d5..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-overview.md +++ /dev/null @@ -1,144 +0,0 @@ ---- -id: admin-api-overview -title: Pulsar admin interface -sidebar_label: "Overview" -original_id: admin-api-overview ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -The Pulsar admin interface enables you to manage all important entities in a Pulsar instance, such as tenants, topics, and namespaces. - -You can interact with the admin interface via: - -- The `pulsar-admin` CLI tool, which is available in the `bin` folder of your Pulsar installation: - - ```shell - - bin/pulsar-admin - - ``` - - > **Important** - > - > For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more information, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/). - -- HTTP calls, which are made against the admin {@inject: rest:REST:/} API provided by Pulsar brokers. For some RESTful APIs, they might be redirected to the owner brokers for serving with [`307 Temporary Redirect`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/307), hence the HTTP callers should handle `307 Temporary Redirect`. If you use `curl` commands, you should specify `-L` to handle redirections. - - > **Important** - > - > For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. - -- A Java client interface. - - > **Important** - > - > For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/). - -> **The REST API is the admin interface**. Both the `pulsar-admin` CLI tool and the Java client use the REST API. If you implement your own admin interface client, you should use the REST API. - -## Admin setup - -Each of the three admin interfaces (the `pulsar-admin` CLI tool, the {@inject: rest:REST:/} API, and the [Java admin API](/api/admin)) requires some special setup if you have enabled authentication in your Pulsar instance. - -````mdx-code-block - - - -If you have enabled authentication, you need to provide an auth configuration to use the `pulsar-admin` tool. By default, the configuration for the `pulsar-admin` tool is in the [`conf/client.conf`](reference-configuration.md#client) file. 
The following are the available parameters:
-
-|Name|Description|Default|
-|----|-----------|-------|
-|webServiceUrl|The web URL for the cluster.|http://localhost:8080/|
-|brokerServiceUrl|The Pulsar protocol URL for the cluster.|pulsar://localhost:6650/|
-|authPlugin|The authentication plugin.| |
-|authParams|The authentication parameters for the cluster, as a comma-separated string.| |
-|useTls|Whether or not TLS authentication will be enforced in the cluster.|false|
-|tlsAllowInsecureConnection|Accept untrusted TLS certificate from client.|false|
-|tlsTrustCertsFilePath|Path for the trusted TLS certificate file.| |
-
-You can find details for the REST API exposed by Pulsar brokers in this {@inject: rest:document:/}.
-
-To use the Java admin API, instantiate a {@inject: javadoc:PulsarAdmin:/admin/org/apache/pulsar/client/admin/PulsarAdmin} object, and specify a URL for a Pulsar broker and a {@inject: javadoc:PulsarAdminBuilder:/admin/org/apache/pulsar/client/admin/PulsarAdminBuilder}. The following is a minimal example using `localhost`:
-
-```java
-
-String url = "http://localhost:8080";
-// Pass auth-plugin class fully-qualified name if Pulsar-security enabled
-String authPluginClassName = "com.org.MyAuthPluginClass";
-// Pass auth-param if auth-plugin class requires it
-String authParams = "param1=value1";
-boolean useTls = false;
-boolean tlsAllowInsecureConnection = false;
-String tlsTrustCertsFilePath = null;
-PulsarAdmin admin = PulsarAdmin.builder()
-    .authentication(authPluginClassName, authParams)
-    .serviceHttpUrl(url)
-    .tlsTrustCertsFilePath(tlsTrustCertsFilePath)
-    .allowTlsInsecureConnection(tlsAllowInsecureConnection)
-    .build();
-
-```
-
-If you use multiple brokers, you can specify a multi-host service URL, just as for the Pulsar client. For example,
-
-```java
-
-String url = "http://localhost:8080,localhost:8081,localhost:8082";
-// Pass auth-plugin class fully-qualified name if Pulsar-security enabled
-String authPluginClassName = "com.org.MyAuthPluginClass";
-// Pass auth-param if auth-plugin class requires it
-String authParams = "param1=value1";
-boolean useTls = false;
-boolean tlsAllowInsecureConnection = false;
-String tlsTrustCertsFilePath = null;
-PulsarAdmin admin = PulsarAdmin.builder()
-    .authentication(authPluginClassName, authParams)
-    .serviceHttpUrl(url)
-    .tlsTrustCertsFilePath(tlsTrustCertsFilePath)
-    .allowTlsInsecureConnection(tlsAllowInsecureConnection)
-    .build();
-
-```
-
-````
-
-## How to define Pulsar resource names when running Pulsar in Kubernetes
-If you run Pulsar Functions or connectors on Kubernetes, you need to follow the Kubernetes naming convention to define the names of your Pulsar resources, whichever admin interface you use.
-
-Kubernetes requires a name that can be used as a DNS subdomain name as defined in [RFC 1123](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names). Pulsar supports more legal characters than the Kubernetes naming convention. If you create a Pulsar resource name with special characters that are not supported by Kubernetes (for example, including colons in a Pulsar namespace name), the Kubernetes runtime translates the Pulsar object names into Kubernetes resource labels which are in RFC 1123-compliant forms. Consequently, you can run functions or connectors using the Kubernetes runtime. The rules for translating Pulsar object names into Kubernetes resource labels are as below (a sketch of the translation follows the list):
-
-- Truncate to 63 characters
-
-- Replace the following characters with dashes (-):
-
-  - Non-alphanumeric characters
-
-  - Underscores (_)
-
-  - Dots (.)
-
-- Replace beginning and ending non-alphanumeric characters with 0
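-
-As an illustration, the following is a minimal Java sketch of these rules; it is a direct transcription of the list above, not Pulsar's actual implementation:
-
-```java
-
-public class KubernetesLabelSanitizer {
-
-    // Transcription of the translation rules listed above; illustrative only.
-    static String toKubernetesLabel(String pulsarName) {
-        // Truncate to 63 characters.
-        String label = pulsarName.length() > 63 ? pulsarName.substring(0, 63) : pulsarName;
-        // Replace non-alphanumeric characters (including '_' and '.') with dashes.
-        label = label.replaceAll("[^a-zA-Z0-9]", "-");
-        // Replace beginning and ending non-alphanumeric characters with 0.
-        label = label.replaceAll("^-", "0").replaceAll("-$", "0");
-        return label;
-    }
-
-    public static void main(String[] args) {
-        // A namespace name containing a colon, which Kubernetes labels do not allow:
-        System.out.println(toKubernetesLabel("my-tenant/my:ns")); // my-tenant-my-ns
-    }
-}
-
-```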
-
-:::tip
-
-- If you get an error in translating Pulsar object names into Kubernetes resource labels (for example, you may have a naming collision if your Pulsar object name is too long) or want to customize the translating rules, see [customize Kubernetes runtime](https://pulsar.apache.org/docs/en/next/functions-runtime/#customize-kubernetes-runtime).
-- For how to configure Kubernetes runtime, see [here](https://pulsar.apache.org/docs/en/next/functions-runtime/#configure-kubernetes-runtime).
-
-:::
-
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-packages.md b/site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-packages.md
deleted file mode 100644
index efb4dcfabcdffa..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-packages.md
+++ /dev/null
@@ -1,391 +0,0 @@
----
-id: admin-api-packages
-title: Manage packages
-sidebar_label: "Packages"
-original_id: admin-api-packages
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-> **Important**
->
-> This page only shows **some frequently used operations**.
->
-> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more information, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/).
->
-> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc.
->
-> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/).
-
-Package management enables version management and simplifies the upgrade and rollback processes for Functions, Sinks, and Sources. When you use the same function, sink, or source in different namespaces, you can upload it to a common package management system.
-
-## Package name
-
-A `package` is identified by five parts: `type`, `tenant`, `namespace`, `package name`, and `version`.
-
-| Part | Description |
-|-------|-------------|
-|`type` |The type of the package. The following types are supported: `function`, `sink` and `source`. |
-| `name`|The fully qualified name of the package: `<tenant>/<namespace>/<package name>`.|
-|`version`|The version of the package.|
-
-The following is a code sample.
-
-```java
-
-class PackageName {
-    private final PackageType type;
-    private final String namespace;
-    private final String tenant;
-    private final String name;
-    private final String version;
-}
-
-enum PackageType {
-    FUNCTION("function"), SINK("sink"), SOURCE("source");
-}
-
-```
-
-## Package URL
-A package is located using a URL. The package URL is written in the following format:
-
-```shell
-
-<type>://<tenant>/<namespace>/<package name>@<version>
-
-```
-
-The following are package URL examples:
-
-`sink://public/default/mysql-sink@1.0`
-`function://my-tenant/my-ns/my-function@0.1`
-`source://my-tenant/my-ns/mysql-cdc-source@2.3`
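-
-To make the format concrete, here is a small hypothetical parser for such URLs (the regex and class are illustrative, not Pulsar's own `PackageName` implementation):
-
-```java
-
-import java.util.regex.Matcher;
-import java.util.regex.Pattern;
-
-public class PackageUrlParser {
-
-    // <type>://<tenant>/<namespace>/<package name>@<version>
-    private static final Pattern PACKAGE_URL =
-            Pattern.compile("(function|sink|source)://([^/]+)/([^/]+)/([^@]+)@(.+)");
-
-    public static void main(String[] args) {
-        Matcher m = PACKAGE_URL.matcher("sink://public/default/mysql-sink@1.0");
-        if (m.matches()) {
-            System.out.println("type      = " + m.group(1)); // sink
-            System.out.println("tenant    = " + m.group(2)); // public
-            System.out.println("namespace = " + m.group(3)); // default
-            System.out.println("name      = " + m.group(4)); // mysql-sink
-            System.out.println("version   = " + m.group(5)); // 1.0
-        }
-    }
-}
-
-```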
-
-The package management system stores the data, versions and metadata of each package. The metadata is shown in the following table.
-
-| metadata | Description |
-|----------|-------------|
-|description|The description of the package.|
-|contact |The contact information of a package. For example, team email.|
-|create_time| The time when the package is created.|
-|modification_time| The time when the package is modified.|
-|properties |A key/value map that stores your own information.|
-
-## Permissions
-
-The packages are organized by the tenant and namespace, so you can apply the tenant and namespace permissions to packages directly.
-
-## Package resources
-You can manage packages with the command line tool, REST API, and Java client.
-
-### Upload a package
-You can upload a package to the package management service in the following ways.
-
-````mdx-code-block
-
-```shell
-
-bin/pulsar-admin packages upload function://public/default/example@v0.1 --path package-file --description package-description
-
-```
-
-{@inject: endpoint|POST|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version|operation/upload?version=@pulsar:version_number@}
-
-Upload a package to the package management service synchronously.
-
-```java
-
-    void upload(PackageMetadata metadata, String packageName, String path) throws PulsarAdminException;
-
-```
-
-Upload a package to the package management service asynchronously.
-
-```java
-
-    CompletableFuture<Void> uploadAsync(PackageMetadata metadata, String packageName, String path);
-
-```
-
-````
-
-### Download a package
-You can download a package from the package management service in the following ways.
-
-````mdx-code-block
-
-```shell
-
-bin/pulsar-admin packages download function://public/default/example@v0.1 --path package-file
-
-```
-
-{@inject: endpoint|GET|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version|operation/download?version=@pulsar:version_number@}
-
-Download a package from the package management service synchronously.
-
-```java
-
-    void download(String packageName, String path) throws PulsarAdminException;
-
-```
-
-Download a package from the package management service asynchronously.
-
-```java
-
-    CompletableFuture<Void> downloadAsync(String packageName, String path);
-
-```
-
-````
-
-### List all versions of a package
-You can get a list of all versions of a package in the following ways.
-````mdx-code-block
-
-```shell
-
-bin/pulsar-admin packages list-versions function://public/default/example
-
-```
-
-{@inject: endpoint|GET|/admin/v3/packages/:type/:tenant/:namespace/:packageName|operation/listPackageVersion?version=@pulsar:version_number@}
-
-List all versions of a package synchronously.
-
-```java
-
-    List<String> listPackageVersions(String packageName) throws PulsarAdminException;
-
-```
-
-List all versions of a package asynchronously.
-
-```java
-
-    CompletableFuture<List<String>> listPackageVersionsAsync(String packageName);
-
-```
-
-````
-
-### List all the specified type packages under a namespace
-You can get a list of all the packages with the given type in a namespace in the following ways.
-````mdx-code-block
-
-```shell
-
-bin/pulsar-admin packages list --type function public/default
-
-```
-
-{@inject: endpoint|GET|/admin/v3/packages/:type/:tenant/:namespace|operation/listPackages?version=@pulsar:version_number@}
-
-List all the packages with the given type in a namespace synchronously.
-
-```java
-
-    List<String> listPackages(String type, String namespace) throws PulsarAdminException;
-
-```
-
-List all the packages with the given type in a namespace asynchronously.
-
-```java
-
-    CompletableFuture<List<String>> listPackagesAsync(String type, String namespace);
-
-```
-
-````
-
-### Get the metadata of a package
-You can get the metadata of a package in the following ways.
-
-````mdx-code-block
-
-```shell
-
-bin/pulsar-admin packages get-metadata function://public/default/test@v1
-
-```
-
-{@inject: endpoint|GET|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version/metadata|operation/getMeta?version=@pulsar:version_number@}
-
-Get the metadata of a package synchronously.
-
-```java
-
-    PackageMetadata getMetadata(String packageName) throws PulsarAdminException;
-
-```
-
-Get the metadata of a package asynchronously.
-
-```java
-
-    CompletableFuture<PackageMetadata> getMetadataAsync(String packageName);
-
-```
-
-````
-
-### Update the metadata of a package
-You can update the metadata of a package in the following ways.
-````mdx-code-block
-
-```shell
-
-bin/pulsar-admin packages update-metadata function://public/default/example@v0.1 --description update-description
-
-```
-
-{@inject: endpoint|PUT|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version/metadata|operation/updateMeta?version=@pulsar:version_number@}
-
-Update the metadata of a package synchronously.
-
-```java
-
-    void updateMetadata(String packageName, PackageMetadata metadata) throws PulsarAdminException;
-
-```
-
-Update the metadata of a package asynchronously.
-
-```java
-
-    CompletableFuture<Void> updateMetadataAsync(String packageName, PackageMetadata metadata);
-
-```
-
-````
-
-### Delete a specified package
-You can delete a specified package with its package name in the following ways.
-
-````mdx-code-block
-
-The following command example deletes a package of version 0.1.
-
-```shell
-
-bin/pulsar-admin packages delete function://public/default/example@v0.1
-
-```
-
-{@inject: endpoint|DELETE|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version|operation/delete?version=@pulsar:version_number@}
-
-Delete a specified package synchronously.
-
-```java
-
-    void delete(String packageName) throws PulsarAdminException;
-
-```
-
-Delete a specified package asynchronously.
-
-```java
-
-    CompletableFuture<Void> deleteAsync(String packageName);
-
-```
-
-````
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-partitioned-topics.md b/site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-partitioned-topics.md
deleted file mode 100644
index 5ce182282e0324..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-partitioned-topics.md
+++ /dev/null
@@ -1,8 +0,0 @@
----
-id: admin-api-partitioned-topics
-title: Managing partitioned topics
-sidebar_label: "Partitioned topics"
-original_id: admin-api-partitioned-topics
----
-
-For details of the content, refer to [manage topics](admin-api-topics.md).
\ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-permissions.md b/site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-permissions.md deleted file mode 100644 index 741d58ac7bf7a5..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-permissions.md +++ /dev/null @@ -1,189 +0,0 @@ ---- -id: admin-api-permissions -title: Managing permissions -sidebar_label: "Permissions" -original_id: admin-api-permissions ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -> **Important** -> -> This page only shows **some frequently used operations**. -> -> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more information, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/). -> -> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. -> -> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/). - -Pulsar allows you to grant namespace-level or topic-level permission to users. - -- If you grant a namespace-level permission to a user, then the user can access all the topics under the namespace. - -- If you grant a topic-level permission to a user, then the user can access only the topic. - -The chapters below demonstrate how to grant namespace-level permissions to users. For how to grant topic-level permissions to users, see [manage topics](admin-api-topics.md/#grant-permission). - -## Grant permissions - -You can grant permissions to specific roles for lists of operations such as `produce` and `consume`. - -````mdx-code-block - - - -Use the [`grant-permission`](reference-pulsar-admin.md#grant-permission) subcommand and specify a namespace, actions using the `--actions` flag, and a role using the `--role` flag: - -```shell - -$ pulsar-admin namespaces grant-permission test-tenant/ns1 \ - --actions produce,consume \ - --role admin10 - -``` - -Wildcard authorization can be performed when `authorizationAllowWildcardsMatching` is set to `true` in `broker.conf`. - -e.g. - -```shell - -$ pulsar-admin namespaces grant-permission test-tenant/ns1 \ - --actions produce,consume \ - --role 'my.role.*' - -``` - -Then, roles `my.role.1`, `my.role.2`, `my.role.foo`, `my.role.bar`, etc. can produce and consume. - -```shell - -$ pulsar-admin namespaces grant-permission test-tenant/ns1 \ - --actions produce,consume \ - --role '*.role.my' - -``` - -Then, roles `1.role.my`, `2.role.my`, `foo.role.my`, `bar.role.my`, etc. can produce and consume. - -**Note**: A wildcard matching works at **the beginning or end of the role name only**. - -e.g. - -```shell - -$ pulsar-admin namespaces grant-permission test-tenant/ns1 \ - --actions produce,consume \ - --role 'my.*.role' - -``` - -In this case, only the role `my.*.role` has permissions. -Roles `my.1.role`, `my.2.role`, `my.foo.role`, `my.bar.role`, etc. **cannot** produce and consume. 
- - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/permissions/:role|operation/grantPermissionOnNamespace?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().grantPermissionOnNamespace(namespace, role, getAuthActions(actions)); - -``` - - - - -```` - -## Get permissions - -You can see which permissions have been granted to which roles in a namespace. - -````mdx-code-block - - - -Use the [`permissions`](reference-pulsar-admin#permissions) subcommand and specify a namespace: - -```shell - -$ pulsar-admin namespaces permissions test-tenant/ns1 -{ - "admin10": [ - "produce", - "consume" - ] -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/permissions|operation/getPermissions?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getPermissions(namespace); - -``` - - - - -```` - -## Revoke permissions - -You can revoke permissions from specific roles, which means that those roles will no longer have access to the specified namespace. - -````mdx-code-block - - - -Use the [`revoke-permission`](reference-pulsar-admin.md#revoke-permission) subcommand and specify a namespace and a role using the `--role` flag: - -```shell - -$ pulsar-admin namespaces revoke-permission test-tenant/ns1 \ - --role admin10 - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/permissions/:role|operation/revokePermissionsOnNamespace?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().revokePermissionsOnNamespace(namespace, role); - -``` - - - - -```` \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-persistent-topics.md b/site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-persistent-topics.md deleted file mode 100644 index 50d135b72f5424..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-persistent-topics.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -id: admin-api-persistent-topics -title: Managing persistent topics -sidebar_label: "Persistent topics" -original_id: admin-api-persistent-topics ---- - -For details of the content, refer to [manage topics](admin-api-topics.md). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-schemas.md b/site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-schemas.md deleted file mode 100644 index 9ffe21f5b0f750..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-schemas.md +++ /dev/null @@ -1,7 +0,0 @@ ---- -id: admin-api-schemas -title: Managing Schemas -sidebar_label: "Schemas" -original_id: admin-api-schemas ---- - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-tenants.md b/site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-tenants.md deleted file mode 100644 index 2a5e308e6488cb..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-tenants.md +++ /dev/null @@ -1,238 +0,0 @@ ---- -id: admin-api-tenants -title: Managing Tenants -sidebar_label: "Tenants" -original_id: admin-api-tenants ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -> **Important** -> -> This page only shows **some frequently used operations**. -> -> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more information, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/). 
-> -> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. -> -> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/). - -Tenants, like namespaces, can be managed using the [admin API](admin-api-overview.md). There are currently two configurable aspects of tenants: - -* Admin roles -* Allowed clusters - -## Tenant resources - -### List - -You can list all of the tenants associated with an [instance](reference-terminology.md#instance). - -````mdx-code-block - - - -Use the [`list`](reference-pulsar-admin.md#tenants-list) subcommand. - -```shell - -$ pulsar-admin tenants list -my-tenant-1 -my-tenant-2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/tenants|operation/getTenants?version=@pulsar:version_number@} - - - - -```java - -admin.tenants().getTenants(); - -``` - - - - -```` - -### Create - -You can create a new tenant. - -````mdx-code-block - - - -Use the [`create`](reference-pulsar-admin.md#tenants-create) subcommand: - -```shell - -$ pulsar-admin tenants create my-tenant - -``` - -When creating a tenant, you can assign admin roles using the `-r`/`--admin-roles` flag. You can specify multiple roles as a comma-separated list. Here are some examples: - -```shell - -$ pulsar-admin tenants create my-tenant \ - --admin-roles role1,role2,role3 - -$ pulsar-admin tenants create my-tenant \ - -r role1 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/tenants/:tenant|operation/createTenant?version=@pulsar:version_number@} - - - - -```java - -admin.tenants().createTenant(tenantName, tenantInfo); - -``` - - - - -```` - -### Get configuration - -You can fetch the [configuration](reference-configuration.md) for an existing tenant at any time. - -````mdx-code-block - - - -Use the [`get`](reference-pulsar-admin.md#tenants-get) subcommand and specify the name of the tenant. Here's an example: - -```shell - -$ pulsar-admin tenants get my-tenant -{ - "adminRoles": [ - "admin1", - "admin2" - ], - "allowedClusters": [ - "cl1", - "cl2" - ] -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/tenants/:cluster|operation/getTenant?version=@pulsar:version_number@} - - - - -```java - -admin.tenants().getTenantInfo(tenantName); - -``` - - - - -```` - -### Delete - -Tenants can be deleted from a Pulsar [instance](reference-terminology.md#instance). - -````mdx-code-block - - - -Use the [`delete`](reference-pulsar-admin.md#tenants-delete) subcommand and specify the name of the tenant. - -```shell - -$ pulsar-admin tenants delete my-tenant - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/tenants/:cluster|operation/deleteTenant?version=@pulsar:version_number@} - - - - -```java - -admin.Tenants().deleteTenant(tenantName); - -``` - - - - -```` - -### Update - -You can update a tenant's configuration. - -````mdx-code-block - - - -Use the [`update`](reference-pulsar-admin.md#tenants-update) subcommand. 
- -```shell - -$ pulsar-admin tenants update my-tenant - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/tenants/:cluster|operation/updateTenant?version=@pulsar:version_number@} - - - - -```java - -admin.tenants().updateTenant(tenantName, tenantInfo); - -``` - - - - -```` diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-topics.md b/site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-topics.md deleted file mode 100644 index d858a9e828418b..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/admin-api-topics.md +++ /dev/null @@ -1,2142 +0,0 @@ ---- -id: admin-api-topics -title: Manage topics -sidebar_label: "Topics" -original_id: admin-api-topics ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -> **Important** -> -> This page only shows **some frequently used operations**. -> -> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more information, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/). -> -> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. -> -> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/). - -Pulsar has persistent and non-persistent topics. Persistent topic is a logical endpoint for publishing and consuming messages. The topic name structure for persistent topics is: - -```shell - -persistent://tenant/namespace/topic - -``` - -Non-persistent topics are used in applications that only consume real-time published messages and do not need persistent guarantee. In this way, it reduces message-publish latency by removing overhead of persisting messages. The topic name structure for non-persistent topics is: - -```shell - -non-persistent://tenant/namespace/topic - -``` - -## Manage topic resources -Whether it is persistent or non-persistent topic, you can obtain the topic resources through `pulsar-admin` tool, REST API and Java. - -:::note - -In REST API, `:schema` stands for persistent or non-persistent. `:tenant`, `:namespace`, `:x` are variables, replace them with the real tenant, namespace, and `x` names when using them. -Take {@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getList?version=@pulsar:version_number@} as an example, to get the list of persistent topics in REST API, use `https://pulsar.apache.org/admin/v2/persistent/my-tenant/my-namespace`. To get the list of non-persistent topics in REST API, use `https://pulsar.apache.org/admin/v2/non-persistent/my-tenant/my-namespace`. - -::: - -### List of topics - -You can get the list of topics under a given namespace in the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics list \ - my-tenant/my-namespace - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getList?version=@pulsar:version_number@} - - - - -```java - -String namespace = "my-tenant/my-namespace"; -admin.topics().getList(namespace); - -``` - - - - -```` - -### Grant permission - -You can grant permissions on a client role to perform specific actions on a given topic in the following ways. 
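-
-Alongside the Guava-based example in the Java tab below, the following is an equivalent, dependency-free sketch using `EnumSet` (the topic and role names are illustrative):
-
-```java
-
-import java.util.EnumSet;
-import java.util.Set;
-import org.apache.pulsar.client.admin.PulsarAdmin;
-import org.apache.pulsar.common.policies.data.AuthAction;
-
-public class GrantTopicPermissionExample {
-    public static void main(String[] args) throws Exception {
-        PulsarAdmin admin = PulsarAdmin.builder()
-                .serviceHttpUrl("http://localhost:8080") // assumes a local broker
-                .build();
-
-        String topic = "persistent://test-tenant/ns1/tp1";
-        // EnumSet avoids the Guava dependency used in the tabbed example below.
-        Set<AuthAction> actions = EnumSet.of(AuthAction.produce, AuthAction.consume);
-        admin.topics().grantPermission(topic, "application1", actions);
-
-        admin.close();
-    }
-}
-
-```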
-
-````mdx-code-block
-
-```shell
-
-$ pulsar-admin topics grant-permission \
-  --actions produce,consume --role application1 \
-  persistent://test-tenant/ns1/tp1
-
-```
-
-{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/permissions/:role|operation/grantPermissionsOnTopic?version=@pulsar:version_number@}
-
-```java
-
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-String role = "test-role";
-Set<AuthAction> actions = Sets.newHashSet(AuthAction.produce, AuthAction.consume);
-admin.topics().grantPermission(topic, role, actions);
-
-```
-
-````
-
-### Get permission
-
-You can fetch permissions in the following ways.
-
-````mdx-code-block
-
-```shell
-
-$ pulsar-admin topics permissions \
-  persistent://test-tenant/ns1/tp1
-
-{
-  "application1": [
-    "consume",
-    "produce"
-  ]
-}
-
-```
-
-{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/permissions|operation/getPermissionsOnTopic?version=@pulsar:version_number@}
-
-```java
-
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-admin.topics().getPermissions(topic);
-
-```
-
-````
-
-### Revoke permission
-
-You can revoke a permission granted on a client role in the following ways.
-````mdx-code-block
-
-```shell
-
-$ pulsar-admin topics revoke-permission \
-  --role application1 \
-  persistent://test-tenant/ns1/tp1
-
-{
-  "application1": [
-    "consume",
-    "produce"
-  ]
-}
-
-```
-
-{@inject: endpoint|DELETE|/admin/v2/:schema/:tenant/:namespace/:topic/permissions/:role|operation/revokePermissionsOnTopic?version=@pulsar:version_number@}
-
-```java
-
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-String role = "test-role";
-admin.topics().revokePermissions(topic, role);
-
-```
-
-````
-
-### Delete topic
-
-You can delete a topic in the following ways. You cannot delete a topic if any active subscriptions or producers are connected to it.
-
-````mdx-code-block
-
-```shell
-
-$ pulsar-admin topics delete \
-  persistent://test-tenant/ns1/tp1
-
-```
-
-{@inject: endpoint|DELETE|/admin/v2/:schema/:tenant/:namespace/:topic|operation/deleteTopic?version=@pulsar:version_number@}
-
-```java
-
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-admin.topics().delete(topic);
-
-```
-
-````
-
-### Unload topic
-
-You can unload a topic in the following ways.
-````mdx-code-block
-
-```shell
-
-$ pulsar-admin topics unload \
-  persistent://test-tenant/ns1/tp1
-
-```
-
-{@inject: endpoint|PUT|/admin/v2/:schema/:tenant/:namespace/:topic/unload|operation/unloadTopic?version=@pulsar:version_number@}
-
-```java
-
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-admin.topics().unload(topic);
-
-```
-
-````
-
-### Get stats
-
-You can check the following statistics of a given non-partitioned topic.
-
-  - **msgRateIn**: The sum of all local and replication publishers' publish rates (msg/s).
-
-  - **msgThroughputIn**: The sum of all local and replication publishers' publish rates (bytes/s).
-
-  - **msgRateOut**: The sum of all local and replication consumers' dispatch rates (msg/s).
-
-  - **msgThroughputOut**: The sum of all local and replication consumers' dispatch rates (bytes/s).
-
-  - **averageMsgSize**: The average size (in bytes) of messages published within the last interval.
-
-  - **storageSize**: The sum of the ledgers' storage size for this topic. The space used to store the messages for the topic.
- - - **publishers**: The list of all local publishers into the topic. The list ranges from zero to thousands. - - - **msgRateIn**: The total rate of messages (msg/s) published by this publisher. - - - **msgThroughputIn**: The total throughput (bytes/s) of the messages published by this publisher. - - - **averageMsgSize**: The average message size in bytes from this publisher within the last interval. - - - **producerId**: The internal identifier for this producer on this topic. - - - **producerName**: The internal identifier for this producer, generated by the client library. - - - **address**: The IP address and source port for the connection of this producer. - - - **connectedSince**: The timestamp when this producer is created or reconnected last time. - - - **subscriptions**: The list of all local subscriptions to the topic. - - - **my-subscription**: The name of this subscription. It is defined by the client. - - - **msgRateOut**: The total rate of messages (msg/s) delivered on this subscription. - - - **msgThroughputOut**: The total throughput (bytes/s) delivered on this subscription. - - - **msgBacklog**: The number of messages in the subscription backlog. - - - **type**: The subscription type. - - - **msgRateExpired**: The rate at which messages were discarded instead of dispatched from this subscription due to TTL. - - - **lastExpireTimestamp**: The timestamp of the last message expire execution. - - - **lastConsumedFlowTimestamp**: The timestamp of the last flow command received. - - - **lastConsumedTimestamp**: The latest timestamp of all the consumed timestamp of the consumers. - - - **lastAckedTimestamp**: The latest timestamp of all the acked timestamp of the consumers. - - - **consumers**: The list of connected consumers for this subscription. - - - **msgRateOut**: The total rate of messages (msg/s) delivered to the consumer. - - - **msgThroughputOut**: The total throughput (bytes/s) delivered to the consumer. - - - **consumerName**: The internal identifier for this consumer, generated by the client library. - - - **availablePermits**: The number of messages that the consumer has space for in the client library's listen queue. `0` means the client library's queue is full and `receive()` isn't being called. A non-zero value means this consumer is ready for dispatched messages. - - - **unackedMessages**: The number of unacknowledged messages for the consumer, where an unacknowledged message is one that has been sent to the consumer but not yet acknowledged. This field is only meaningful when using a subscription that tracks individual message acknowledgement. - - - **blockedConsumerOnUnackedMsgs**: The flag used to verify if the consumer is blocked due to reaching threshold of the unacknowledged messages. - - - **lastConsumedTimestamp**: The timestamp when the consumer reads a message the last time. - - - **lastAckedTimestamp**: The timestamp when the consumer acknowledges a message the last time. - - - **replication**: This section gives the stats for cross-colo replication of this topic - - - **msgRateIn**: The total rate (msg/s) of messages received from the remote cluster. - - - **msgThroughputIn**: The total throughput (bytes/s) received from the remote cluster. - - - **msgRateOut**: The total rate of messages (msg/s) delivered to the replication-subscriber. - - - **msgThroughputOut**: The total throughput (bytes/s) delivered to the replication-subscriber. - - - **msgRateExpired**: The total rate of messages (msg/s) expired. 
-
-  - **replicationBacklog**: The number of messages pending to be replicated to the remote cluster.
-
-  - **connected**: Whether the outbound replicator is connected.
-
-  - **replicationDelayInSeconds**: How long the oldest message has been waiting to be sent through the connection, if connected is `true`.
-
-  - **inboundConnection**: The IP and port of the broker in the remote cluster's publisher connection to this broker.
-
-  - **inboundConnectedSince**: The TCP connection being used to publish messages to the remote cluster. If there are no local publishers connected, this connection is automatically closed after a minute.
-
-  - **outboundConnection**: The address of the outbound replication connection.
-
-  - **outboundConnectedSince**: The timestamp when the outbound connection was established.
-
-The following is an example of a topic status.
-
-```json
-
-{
-  "msgRateIn": 4641.528542257553,
-  "msgThroughputIn": 44663039.74947473,
-  "msgRateOut": 0,
-  "msgThroughputOut": 0,
-  "averageMsgSize": 1232439.816728665,
-  "storageSize": 135532389160,
-  "publishers": [
-    {
-      "msgRateIn": 57.855383881403576,
-      "msgThroughputIn": 558994.7078932219,
-      "averageMsgSize": 613135,
-      "producerId": 0,
-      "producerName": null,
-      "address": null,
-      "connectedSince": null
-    }
-  ],
-  "subscriptions": {
-    "my-topic_subscription": {
-      "msgRateOut": 0,
-      "msgThroughputOut": 0,
-      "msgBacklog": 116632,
-      "type": null,
-      "msgRateExpired": 36.98245516804671,
-      "consumers": []
-    }
-  },
-  "replication": {}
-}
-
-```
-
-To get the status of a topic, you can use the following ways.
-
-````mdx-code-block
-
-```shell
-
-$ pulsar-admin topics stats \
-  persistent://test-tenant/ns1/tp1
-
-```
-
-{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/stats|operation/getStats?version=@pulsar:version_number@}
-
-```java
-
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-admin.topics().getStats(topic);
-
-```
-
-````
-
-### Get internal stats
-
-You can get the detailed statistics of a topic.
-
-  - **entriesAddedCounter**: Messages published since this broker loaded this topic.
-
-  - **numberOfEntries**: The total number of messages being tracked.
-
-  - **totalSize**: The total storage size in bytes of all messages.
-
-  - **currentLedgerEntries**: The count of messages written to the ledger that is currently open for writing.
-
-  - **currentLedgerSize**: The size in bytes of messages written to the ledger that is currently open for writing.
-
-  - **lastLedgerCreatedTimestamp**: The time when the last ledger was created.
-
-  - **lastLedgerCreationFailureTimestamp**: The time when the last ledger creation failed.
-
-  - **waitingCursorsCount**: The number of cursors that are "caught up" and waiting for a new message to be published.
-
-  - **pendingAddEntriesCount**: The number of messages with (asynchronous) write requests pending completion.
-
-  - **lastConfirmedEntry**: The ledgerid:entryid of the last message successfully written. If the entryid is `-1`, the ledger is open but no entries have been written yet.
-
-  - **state**: The state of this ledger for writing. The state `LedgerOpened` means that a ledger is open for saving published messages.
-
-  - **ledgers**: The ordered list of all ledgers for this topic holding messages.
-
-    - **ledgerId**: The ID of this ledger.
-
-    - **entries**: The total number of entries that belong to this ledger.
-
-    - **size**: The size of messages written to this ledger (in bytes).
-
-    - **offloaded**: Whether this ledger is offloaded.
- - - **metadata**: The ledger metadata. - - - **schemaLedgers**: The ordered list of all ledgers for this topic schema. - - - **ledgerId**: The ID of this ledger. - - - **entries**: The total number of entries that belong to this ledger. - - - **size**: The size of messages written to this ledger (in bytes). - - - **offloaded**: Whether this ledger is offloaded. - - - **metadata**: The ledger metadata. - - - **compactedLedger**: The ledgers holding un-acked messages after topic compaction. - - - **ledgerId**: The ID of this ledger. - - - **entries**: The total number of entries that belong to this ledger. - - - **size**: The size of messages written to this ledger (in bytes). - - - **offloaded**: Whether this ledger is offloaded. The value is `false` for the compacted topic ledger. - - - **cursors**: The list of all cursors on this topic. Each subscription in the topic stats has a cursor. - - - **markDeletePosition**: All messages before the markDeletePosition are acknowledged by the subscriber. - - - **readPosition**: The latest position of subscriber for reading message. - - - **waitingReadOp**: This is true when the subscription has read the latest message published to the topic and is waiting for new messages to be published. - - - **pendingReadOps**: The counter for how many outstanding read requests to the BookKeepers in progress. - - - **messagesConsumedCounter**: The number of messages this cursor has acked since this broker loaded this topic. - - - **cursorLedger**: The ledger being used to persistently store the current markDeletePosition. - - - **cursorLedgerLastEntry**: The last entryid used to persistently store the current markDeletePosition. - - - **individuallyDeletedMessages**: If acknowledges are being done out of order, the ranges of messages acknowledged between the markDeletePosition and the read-position shows. - - - **lastLedgerSwitchTimestamp**: The last time the cursor ledger is rolled over. - - - **state**: The state of the cursor ledger: `Open` means you have a cursor ledger for saving updates of the markDeletePosition. - -The following is an example of the detailed statistics of a topic. - -```json - -{ - "entriesAddedCounter":0, - "numberOfEntries":0, - "totalSize":0, - "currentLedgerEntries":0, - "currentLedgerSize":0, - "lastLedgerCreatedTimestamp":"2021-01-22T21:12:14.868+08:00", - "lastLedgerCreationFailureTimestamp":null, - "waitingCursorsCount":0, - "pendingAddEntriesCount":0, - "lastConfirmedEntry":"3:-1", - "state":"LedgerOpened", - "ledgers":[ - { - "ledgerId":3, - "entries":0, - "size":0, - "offloaded":false, - "metadata":null - } - ], - "cursors":{ - "test":{ - "markDeletePosition":"3:-1", - "readPosition":"3:-1", - "waitingReadOp":false, - "pendingReadOps":0, - "messagesConsumedCounter":0, - "cursorLedger":4, - "cursorLedgerLastEntry":1, - "individuallyDeletedMessages":"[]", - "lastLedgerSwitchTimestamp":"2021-01-22T21:12:14.966+08:00", - "state":"Open", - "numberOfEntriesSinceFirstNotAckedMessage":0, - "totalNonContiguousDeletedMessagesRange":0, - "properties":{ - - } - } - }, - "schemaLedgers":[ - { - "ledgerId":1, - "entries":11, - "size":10, - "offloaded":false, - "metadata":null - } - ], - "compactedLedger":{ - "ledgerId":-1, - "entries":-1, - "size":-1, - "offloaded":false, - "metadata":null - } -} - -``` - -To get the internal status of a topic, you can use the following ways. 
-````mdx-code-block - - - -```shell - -$ pulsar-admin topics stats-internal \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/internalStats|operation/getInternalStats?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().getInternalStats(topic); - -``` - - - - -```` - -### Peek messages - -You can peek a number of messages for a specific subscription of a given topic in the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics peek-messages \ - --count 10 --subscription my-subscription \ - persistent://test-tenant/ns1/tp1 \ - -Message ID: 315674752:0 -Properties: { "X-Pulsar-publish-time" : "2015-07-13 17:40:28.451" } -msg-payload - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/position/:messagePosition|operation/peekNthMessage?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String subName = "my-subscription"; -int numMessages = 1; -admin.topics().peekMessages(topic, subName, numMessages); - -``` - - - - -```` - -### Get message by ID - -You can fetch the message with the given ledger ID and entry ID in the following ways. - -````mdx-code-block - - - -```shell - -$ ./bin/pulsar-admin topics get-message-by-id \ - persistent://public/default/my-topic \ - -l 10 -e 0 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/ledger/:ledgerId/entry/:entryId|operation/getMessageById?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -long ledgerId = 10; -long entryId = 10; -admin.topics().getMessageById(topic, ledgerId, entryId); - -``` - - - - -```` - -### Skip messages - -You can skip a number of messages for a specific subscription of a given topic in the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics skip \ - --count 10 --subscription my-subscription \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/skip/:numMessages|operation/skipMessages?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String subName = "my-subscription"; -int numMessages = 1; -admin.topics().skipMessages(topic, subName, numMessages); - -``` - - - - -```` - -### Skip all messages - -You can skip all the old messages for a specific subscription of a given topic. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics skip-all \ - --subscription my-subscription \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/skip_all|operation/skipAllMessages?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String subName = "my-subscription"; -admin.topics().skipAllMessages(topic, subName); - -``` - - - - -```` - -### Reset cursor - -You can reset a subscription cursor position back to the position which is recorded X minutes before. It essentially calculates time and position of cursor at X minutes before and resets it at that position. You can reset the cursor in the following ways. 
-
-````mdx-code-block
-
-```shell
-
-$ pulsar-admin topics reset-cursor \
-  --subscription my-subscription --time 10 \
-  persistent://test-tenant/ns1/tp1
-
-```
-
-{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/resetcursor/:timestamp|operation/resetCursor?version=@pulsar:version_number@}
-
-```java
-
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-String subName = "my-subscription";
-long timestamp = 2342343L;
-admin.topics().resetCursor(topic, subName, timestamp);
-
-```
-
-````
-
-### Look up a topic's owner broker
-
-You can locate the owner broker of a given topic in the following ways.
-
-````mdx-code-block
-
-```shell
-
-$ pulsar-admin topics lookup \
-  persistent://test-tenant/ns1/tp1
-
- "pulsar://broker1.org.com:4480"
-
-```
-
-{@inject: endpoint|GET|/lookup/v2/topic/:schema/:tenant/:namespace/:topic|/?version=@pulsar:version_number@}
-
-```java
-
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-admin.lookups().lookupTopic(topic);
-
-```
-
-````
-
-### Get bundle
-
-You can get the range of the bundle that a given topic belongs to in the following ways.
-
-````mdx-code-block
-
-```shell
-
-$ pulsar-admin topics bundle-range \
-  persistent://test-tenant/ns1/tp1
-
- "0x00000000_0xffffffff"
-
-```
-
-{@inject: endpoint|GET|/lookup/v2/topic/:topic_domain/:tenant/:namespace/:topic/bundle|/?version=@pulsar:version_number@}
-
-```java
-
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-admin.lookups().getBundleRange(topic);
-
-```
-
-````
-
-### Get subscriptions
-
-You can check all subscription names for a given topic in the following ways.
-
-````mdx-code-block
-
-```shell
-
-$ pulsar-admin topics subscriptions \
-  persistent://test-tenant/ns1/tp1
-
- my-subscription
-
-```
-
-{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/subscriptions|operation/getSubscriptions?version=@pulsar:version_number@}
-
-```java
-
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-admin.topics().getSubscriptions(topic);
-
-```
-
-````
-
-### Get last message ID
-
-You can get the last committed message ID for a persistent topic. This feature is available since the 2.3.0 release.
-
-````mdx-code-block
-
-```shell
-
-pulsar-admin topics last-message-id topic-name
-
-```
-
-{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/lastMessageId|operation/getLastMessageId?version=@pulsar:version_number@}
-
-```java
-
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-admin.topics().getLastMessageId(topic);
-
-```
-
-````
-
-### Configure deduplication snapshot interval
-
-#### Get deduplication snapshot interval
-
-To get the topic-level deduplication snapshot interval, use one of the following methods.
-
-````mdx-code-block
-
-```shell
-
-pulsar-admin topics get-deduplication-snapshot-interval options
-
-```
-
-{@inject: endpoint|GET|/admin/v2/topics/:tenant/:namespace/:topic/deduplicationSnapshotInterval|operation/getDeduplicationSnapshotInterval?version=@pulsar:version_number@}
-
-```java
-
-admin.topics().getDeduplicationSnapshotInterval(topic);
-
-```
-
-````
-
-#### Set deduplication snapshot interval
-
-To set the topic-level deduplication snapshot interval, use one of the following methods.
-
-> **Prerequisite** `brokerDeduplicationEnabled` must be set to `true`.
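-
-If deduplication is not yet enabled on your brokers, a minimal `broker.conf` sketch for satisfying this prerequisite could look like the following (this assumes you edit the broker configuration file directly):
-
-```properties
-
-# Enable broker-side message deduplication; required before a
-# topic-level deduplication snapshot interval takes effect.
-brokerDeduplicationEnabled=true
-
-```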
- -````mdx-code-block - - - -``` - -pulsar-admin topics set-deduplication-snapshot-interval options - -``` - - - - -{@inject: endpoint|POST|/admin/v2/topics/:tenant/:namespace/:topic/deduplicationSnapshotInterval|operation/setDeduplicationSnapshotInterval?version=@pulsar:version_number@} - - - - -```java - -admin.topics().setDeduplicationSnapshotInterval(topic, 1000) - -``` - - - - -```` - -#### Remove deduplication snapshot interval - -To remove the topic-level deduplication snapshot interval, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics remove-deduplication-snapshot-interval options - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/topics/:tenant/:namespace/:topic/deduplicationSnapshotInterval|operation/deleteDeduplicationSnapshotInterval?version=@pulsar:version_number@} - - - - -```java - -admin.topics().removeDeduplicationSnapshotInterval(topic) - -``` - - - - -```` - - -### Configure inactive topic policies - -#### Get inactive topic policies - -To get the topic-level inactive topic policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics get-inactive-topic-policies options - -``` - - - - -{@inject: endpoint|GET|/admin/v2/topics/:tenant/:namespace/:topic/inactiveTopicPolicies|operation/getInactiveTopicPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getInactiveTopicPolicies(topic) - -``` - - - - -```` - -#### Set inactive topic policies - -To set the topic-level inactive topic policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics set-inactive-topic-policies options - -``` - - - - -{@inject: endpoint|POST|/admin/v2/topics/:tenant/:namespace/:topic/inactiveTopicPolicies|operation/setInactiveTopicPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().setInactiveTopicPolicies(topic, inactiveTopicPolicies) - -``` - - - - -```` - -#### Remove inactive topic policies - -To remove the topic-level inactive topic policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics remove-inactive-topic-policies options - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/topics/:tenant/:namespace/:topic/inactiveTopicPolicies|operation/removeInactiveTopicPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().removeInactiveTopicPolicies(topic) - -``` - - - - -```` - - -### Configure offload policies - -#### Get offload policies - -To get the topic-level offload policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics get-offload-policies options - -``` - - - - -{@inject: endpoint|GET|/admin/v2/topics/:tenant/:namespace/:topic/offloadPolicies|operation/getOffloadPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getOffloadPolicies(topic) - -``` - - - - -```` - -#### Set offload policies - -To set the topic-level offload policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics set-offload-policies options - -``` - - - - -{@inject: endpoint|POST|/admin/v2/topics/:tenant/:namespace/:topic/offloadPolicies|operation/setOffloadPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().setOffloadPolicies(topic, offloadPolicies) - -``` - - - - -```` - -#### Remove offload policies - -To remove the topic-level offload policies, use one of the following methods. 
- -````mdx-code-block - - - -``` - -pulsar-admin topics remove-offload-policies options - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/topics/:tenant/:namespace/:topic/offloadPolicies|operation/removeOffloadPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().removeOffloadPolicies(topic) - -``` - - - - -```` - - -## Manage non-partitioned topics -You can use Pulsar [admin API](admin-api-overview.md) to create, delete and check status of non-partitioned topics. - -### Create -Non-partitioned topics must be explicitly created. When creating a new non-partitioned topic, you need to provide a name for the topic. - -By default, 60 seconds after creation, topics are considered inactive and deleted automatically to avoid generating trash data. To disable this feature, set `brokerDeleteInactiveTopicsEnabled` to `false`. To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to a specific value. - -For more information about the two parameters, see [here](reference-configuration.md#broker). - -You can create non-partitioned topics in the following ways. -````mdx-code-block - - - -When you create non-partitioned topics with the [`create`](reference-pulsar-admin.md#create-3) command, you need to specify the topic name as an argument. - -```shell - -$ bin/pulsar-admin topics create \ - persistent://my-tenant/my-namespace/my-topic - -``` - -:::note - -When you create a non-partitioned topic with the suffix '-partition-' followed by numeric value like 'xyz-topic-partition-x' for the topic name, if a partitioned topic with same suffix 'xyz-topic-partition-y' exists, then the numeric value(x) for the non-partitioned topic must be larger than the number of partitions(y) of the partitioned topic. Otherwise, you cannot create such a non-partitioned topic. - -::: - - - - -{@inject: endpoint|PUT|/admin/v2/:schema/:tenant/:namespace/:topic|operation/createNonPartitionedTopic?version=@pulsar:version_number@} - - - - -```java - -String topicName = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().createNonPartitionedTopic(topicName); - -``` - - - - -```` - -### Delete -You can delete non-partitioned topics in the following ways. -````mdx-code-block - - - -```shell - -$ bin/pulsar-admin topics delete \ - persistent://my-tenant/my-namespace/my-topic - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/:schema/:tenant/:namespace/:topic|operation/deleteTopic?version=@pulsar:version_number@} - - - - -```java - -admin.topics().delete(topic); - -``` - - - - -```` - -### List - -You can get the list of topics under a given namespace in the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics list tenant/namespace -persistent://tenant/namespace/topic1 -persistent://tenant/namespace/topic2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getList?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getList(namespace); - -``` - - - - -```` - -### Stats - -You can check the current statistics of a given topic. The following is an example. For description of each stats, refer to [get stats](#get-stats). 
- -```json - -{ - "msgRateIn": 4641.528542257553, - "msgThroughputIn": 44663039.74947473, - "msgRateOut": 0, - "msgThroughputOut": 0, - "averageMsgSize": 1232439.816728665, - "storageSize": 135532389160, - "publishers": [ - { - "msgRateIn": 57.855383881403576, - "msgThroughputIn": 558994.7078932219, - "averageMsgSize": 613135, - "producerId": 0, - "producerName": null, - "address": null, - "connectedSince": null - } - ], - "subscriptions": { - "my-topic_subscription": { - "msgRateOut": 0, - "msgThroughputOut": 0, - "msgBacklog": 116632, - "type": null, - "msgRateExpired": 36.98245516804671, - "consumers": [] - } - }, - "replication": {} -} - -``` - -You can check the current statistics of a given topic and its connected producers and consumers in the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics stats \ - persistent://test-tenant/namespace/topic \ - --get-precise-backlog - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/stats|operation/getStats?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getStats(topic, false /* is precise backlog */); - -``` - - - - -```` - -## Manage partitioned topics -You can use Pulsar [admin API](admin-api-overview.md) to create, update, delete and check status of partitioned topics. - -### Create - -Partitioned topics must be explicitly created. When creating a new partitioned topic, you need to provide a name and the number of partitions for the topic. - -By default, 60 seconds after creation, topics are considered inactive and deleted automatically to avoid generating trash data. To disable this feature, set `brokerDeleteInactiveTopicsEnabled` to `false`. To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to a specific value. - -For more information about the two parameters, see [here](reference-configuration.md#broker). - -You can create partitioned topics in the following ways. -````mdx-code-block - - - -When you create partitioned topics with the [`create-partitioned-topic`](reference-pulsar-admin.md#create-partitioned-topic) -command, you need to specify the topic name as an argument and the number of partitions using the `-p` or `--partitions` flag. - -```shell - -$ bin/pulsar-admin topics create-partitioned-topic \ - persistent://my-tenant/my-namespace/my-topic \ - --partitions 4 - -``` - -:::note - -If a non-partitioned topic with the suffix '-partition-' followed by a numeric value like 'xyz-topic-partition-10', you can not create a partitioned topic with name 'xyz-topic', because the partitions of the partitioned topic could override the existing non-partitioned topic. To create such partitioned topic, you have to delete that non-partitioned topic first. - -::: - - - - -{@inject: endpoint|PUT|/admin/v2/:schema/:tenant/:namespace/:topic/partitions|operation/createPartitionedTopic?version=@pulsar:version_number@} - - - - -```java - -String topicName = "persistent://my-tenant/my-namespace/my-topic"; -int numPartitions = 4; -admin.topics().createPartitionedTopic(topicName, numPartitions); - -``` - - - - -```` - -### Create missed partitions - -When topic auto-creation is disabled, and you have a partitioned topic without any partitions, you can use the [`create-missed-partitions`](reference-pulsar-admin.md#create-missed-partitions) command to create partitions for the topic. 
- -````mdx-code-block - - - -You can create missed partitions with the [`create-missed-partitions`](reference-pulsar-admin.md#create-missed-partitions) command and specify the topic name as an argument. - -```shell - -$ bin/pulsar-admin topics create-missed-partitions \ - persistent://my-tenant/my-namespace/my-topic \ - -``` - - - - -{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic|operation/createMissedPartitions?version=@pulsar:version_number@} - - - - -```java - -String topicName = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().createMissedPartitions(topicName); - -``` - - - - -```` - -### Get metadata - -Partitioned topics are associated with metadata, you can view it as a JSON object. The following metadata field is available. - -Field | Description -:-----|:------- -`partitions` | The number of partitions into which the topic is divided. - -````mdx-code-block - - - -You can check the number of partitions in a partitioned topic with the [`get-partitioned-topic-metadata`](reference-pulsar-admin.md#get-partitioned-topic-metadata) subcommand. - -```shell - -$ pulsar-admin topics get-partitioned-topic-metadata \ - persistent://my-tenant/my-namespace/my-topic -{ - "partitions": 4 -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/partitions|operation/getPartitionedMetadata?version=@pulsar:version_number@} - - - - -```java - -String topicName = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().getPartitionedTopicMetadata(topicName); - -``` - - - - -```` - -### Update - -You can update the number of partitions for an existing partitioned topic *if* the topic is non-global. However, you can only add the partition number. Decrementing the number of partitions would delete the topic, which is not supported in Pulsar. - -Producers and consumers can find the newly created partitions automatically. - -````mdx-code-block - - - -You can update partitioned topics with the [`update-partitioned-topic`](reference-pulsar-admin.md#update-partitioned-topic) command. - -```shell - -$ pulsar-admin topics update-partitioned-topic \ - persistent://my-tenant/my-namespace/my-topic \ - --partitions 8 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:cluster/:namespace/:destination/partitions|operation/updatePartitionedTopic?version=@pulsar:version_number@} - - - - -```java - -admin.topics().updatePartitionedTopic(topic, numPartitions); - -``` - - - - -```` - -### Delete -You can delete partitioned topics with the [`delete-partitioned-topic`](reference-pulsar-admin.md#delete-partitioned-topic) command, REST API and Java. - -````mdx-code-block - - - -```shell - -$ bin/pulsar-admin topics delete-partitioned-topic \ - persistent://my-tenant/my-namespace/my-topic - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/:schema/:topic/:namespace/:destination/partitions|operation/deletePartitionedTopic?version=@pulsar:version_number@} - - - - -```java - -admin.topics().delete(topic); - -``` - - - - -```` - -### List -You can get the list of partitioned topics under a given namespace in the following ways. 
-````mdx-code-block - - - -```shell - -$ pulsar-admin topics list-partitioned-topics tenant/namespace -persistent://tenant/namespace/topic1 -persistent://tenant/namespace/topic2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getPartitionedTopicList?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getPartitionedTopicList(namespace); - -``` - - - - -```` - -### Stats - -You can check the current statistics of a given partitioned topic. The following is an example. For description of each stats, refer to [get stats](#get-stats). - -Note that in the subscription JSON object, `chuckedMessageRate` is deprecated. Please use `chunkedMessageRate`. Both will be sent in the JSON for now. - -```json - -{ - "msgRateIn" : 999.992947159793, - "msgThroughputIn" : 1070918.4635439808, - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "bytesInCounter" : 270318763, - "msgInCounter" : 252489, - "bytesOutCounter" : 0, - "msgOutCounter" : 0, - "averageMsgSize" : 1070.926056966454, - "msgChunkPublished" : false, - "storageSize" : 270316646, - "backlogSize" : 200921133, - "publishers" : [ { - "msgRateIn" : 999.992947159793, - "msgThroughputIn" : 1070918.4635439808, - "averageMsgSize" : 1070.3333333333333, - "chunkedMessageRate" : 0.0, - "producerId" : 0 - } ], - "subscriptions" : { - "test" : { - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "bytesOutCounter" : 0, - "msgOutCounter" : 0, - "msgRateRedeliver" : 0.0, - "chuckedMessageRate" : 0, - "chunkedMessageRate" : 0, - "msgBacklog" : 144318, - "msgBacklogNoDelayed" : 144318, - "blockedSubscriptionOnUnackedMsgs" : false, - "msgDelayed" : 0, - "unackedMessages" : 0, - "msgRateExpired" : 0.0, - "lastExpireTimestamp" : 0, - "lastConsumedFlowTimestamp" : 0, - "lastConsumedTimestamp" : 0, - "lastAckedTimestamp" : 0, - "consumers" : [ ], - "isDurable" : true, - "isReplicated" : false - } - }, - "replication" : { }, - "metadata" : { - "partitions" : 3 - }, - "partitions" : { } -} - -``` - -You can check the current statistics of a given partitioned topic and its connected producers and consumers in the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics partitioned-stats \ - persistent://test-tenant/namespace/topic \ - --per-partition - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/partitioned-stats|operation/getPartitionedStats?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getPartitionedStats(topic, true /* per partition */, false /* is precise backlog */); - -``` - - - - -```` - -### Internal stats - -You can check the detailed statistics of a topic. The following is an example. For description of each stats, refer to [get internal stats](#get-internal-stats). 
- -```json - -{ - "entriesAddedCounter": 20449518, - "numberOfEntries": 3233, - "totalSize": 331482, - "currentLedgerEntries": 3233, - "currentLedgerSize": 331482, - "lastLedgerCreatedTimestamp": "2016-06-29 03:00:23.825", - "lastLedgerCreationFailureTimestamp": null, - "waitingCursorsCount": 1, - "pendingAddEntriesCount": 0, - "lastConfirmedEntry": "324711539:3232", - "state": "LedgerOpened", - "ledgers": [ - { - "ledgerId": 324711539, - "entries": 0, - "size": 0 - } - ], - "cursors": { - "my-subscription": { - "markDeletePosition": "324711539:3133", - "readPosition": "324711539:3233", - "waitingReadOp": true, - "pendingReadOps": 0, - "messagesConsumedCounter": 20449501, - "cursorLedger": 324702104, - "cursorLedgerLastEntry": 21, - "individuallyDeletedMessages": "[(324711539:3134‥324711539:3136], (324711539:3137‥324711539:3140], ]", - "lastLedgerSwitchTimestamp": "2016-06-29 01:30:19.313", - "state": "Open" - } - } -} - -``` - -You can get the internal stats for the partitioned topic in the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics stats-internal \ - persistent://test-tenant/namespace/topic - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/internalStats|operation/getInternalStats?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getInternalStats(topic); - -``` - - - - -```` - -## Publish to partitioned topics - -By default, Pulsar topics are served by a single broker, which limits the maximum throughput of a topic. *Partitioned topics* can span multiple brokers and thus allow for higher throughput. - -You can publish to partitioned topics using Pulsar client libraries. When publishing to partitioned topics, you must specify a routing mode. If you do not specify any routing mode when you create a new producer, the round robin routing mode is used. - -### Routing mode - -You can specify the routing mode in the ProducerConfiguration object that you use to configure your producer. The routing mode determines which partition(internal topic) that each message should be published to. - -The following {@inject: javadoc:MessageRoutingMode:/client/org/apache/pulsar/client/api/MessageRoutingMode} options are available. - -Mode | Description -:--------|:------------ -`RoundRobinPartition` | If no key is provided, the producer publishes messages across all partitions in round-robin policy to achieve the maximum throughput. Round-robin is not done per individual message, round-robin is set to the same boundary of batching delay to ensure that batching is effective. If a key is specified on the message, the partitioned producer hashes the key and assigns message to a particular partition. This is the default mode. -`SinglePartition` | If no key is provided, the producer picks a single partition randomly and publishes all messages into that partition. If a key is specified on the message, the partitioned producer hashes the key and assigns message to a particular partition. -`CustomPartition` | Use custom message router implementation that is called to determine the partition for a particular message. You can create a custom routing mode by using the Java client and implementing the {@inject: javadoc:MessageRouter:/client/org/apache/pulsar/client/api/MessageRouter} interface. 
- -The following is an example: - -```java - -String pulsarBrokerRootUrl = "pulsar://localhost:6650"; -String topic = "persistent://my-tenant/my-namespace/my-topic"; - -PulsarClient pulsarClient = PulsarClient.builder().serviceUrl(pulsarBrokerRootUrl).build(); -Producer producer = pulsarClient.newProducer() - .topic(topic) - .messageRoutingMode(MessageRoutingMode.SinglePartition) - .create(); -producer.send("Partitioned topic message".getBytes()); - -``` - -### Custom message router - -To use a custom message router, you need to provide an implementation of the {@inject: javadoc:MessageRouter:/client/org/apache/pulsar/client/api/MessageRouter} interface, which has just one `choosePartition` method: - -```java - -public interface MessageRouter extends Serializable { - int choosePartition(Message msg); -} - -``` - -The following router routes every message to partition 10: - -```java - -public class AlwaysTenRouter implements MessageRouter { - public int choosePartition(Message msg) { - return 10; - } -} - -``` - -With that implementation, you can send - -```java - -String pulsarBrokerRootUrl = "pulsar://localhost:6650"; -String topic = "persistent://my-tenant/my-cluster-my-namespace/my-topic"; - -PulsarClient pulsarClient = PulsarClient.builder().serviceUrl(pulsarBrokerRootUrl).build(); -Producer producer = pulsarClient.newProducer() - .topic(topic) - .messageRouter(new AlwaysTenRouter()) - .create(); -producer.send("Partitioned topic message".getBytes()); - -``` - -### How to choose partitions when using a key -If a message has a key, it supersedes the round robin routing policy. The following example illustrates how to choose the partition when using a key. - -```java - -// If the message has a key, it supersedes the round robin routing policy - if (msg.hasKey()) { - return signSafeMod(hash.makeHash(msg.getKey()), topicMetadata.numPartitions()); - } - - if (isBatchingEnabled) { // if batching is enabled, choose partition on `partitionSwitchMs` boundary. - long currentMs = clock.millis(); - return signSafeMod(currentMs / partitionSwitchMs + startPtnIdx, topicMetadata.numPartitions()); - } else { - return signSafeMod(PARTITION_INDEX_UPDATER.getAndIncrement(this), topicMetadata.numPartitions()); - } - -``` - -## Manage subscriptions -You can use [Pulsar admin API](admin-api-overview.md) to create, check, and delete subscriptions. -### Create subscription -You can create a subscription for a topic using one of the following methods. -````mdx-code-block - - - -```shell - -pulsar-admin topics create-subscription \ ---subscription my-subscription \ -persistent://test-tenant/ns1/tp1 - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/persistent/:tenant/:namespace/:topic/subscription/:subscription|operation/createSubscriptions?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String subscriptionName = "my-subscription"; -admin.topics().createSubscription(topic, subscriptionName, MessageId.latest); - -``` - - - - -```` -### Get subscription -You can check all subscription names for a given topic using one of the following methods. 
-````mdx-code-block - - - -```shell - -pulsar-admin topics subscriptions \ -persistent://test-tenant/ns1/tp1 \ -my-subscription - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/subscriptions|operation/getSubscriptions?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().getSubscriptions(topic); - -``` - - - - -```` -### Unsubscribe subscription -When a subscription does not process messages any more, you can unsubscribe it using one of the following methods. -````mdx-code-block - - - -```shell - -pulsar-admin topics unsubscribe \ ---subscription my-subscription \ -persistent://test-tenant/ns1/tp1 - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/:topic/subscription/:subscription|operation/deleteSubscription?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String subscriptionName = "my-subscription"; -admin.topics().deleteSubscription(topic, subscriptionName); - -``` - - - - -```` \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/administration-dashboard.md b/site2/website/versioned_docs/version-2.8.2-deprecated/administration-dashboard.md deleted file mode 100644 index 25f976609b40bf..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/administration-dashboard.md +++ /dev/null @@ -1,76 +0,0 @@ ---- -id: administration-dashboard -title: Pulsar dashboard -sidebar_label: "Dashboard" -original_id: administration-dashboard ---- - -:::note - -Pulsar dashboard is deprecated. If you want to manage and monitor the stats of your topics, use [Pulsar Manager](administration-pulsar-manager.md). - -::: - -Pulsar dashboard is a web application that enables users to monitor current stats for all [topics](reference-terminology.md#topic) in tabular form. - -The dashboard is a data collector that polls stats from all the brokers in a Pulsar instance (across multiple clusters) and stores all the information in a [PostgreSQL](https://www.postgresql.org/) database. - -You can use the [Django](https://www.djangoproject.com) web app to render the collected data. - -## Install - -The easiest way to use the dashboard is to run it inside a [Docker](https://www.docker.com/products/docker) container. - -```shell - -$ SERVICE_URL=http://broker.example.com:8080/ -$ docker run -p 80:80 \ - -e SERVICE_URL=$SERVICE_URL \ - apachepulsar/pulsar-dashboard:@pulsar:version@ - -``` - -You can find the {@inject: github:Dockerfile:/dashboard/Dockerfile} in the `dashboard` directory and build an image from scratch as well: - -```shell - -$ docker build -t apachepulsar/pulsar-dashboard dashboard - -``` - -If token authentication is enabled: -> Provided token should have super-user access. - -```shell - -$ SERVICE_URL=http://broker.example.com:8080/ -$ JWT_TOKEN=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c -$ docker run -p 80:80 \ - -e SERVICE_URL=$SERVICE_URL \ - -e JWT_TOKEN=$JWT_TOKEN \ - apachepulsar/pulsar-dashboard - -``` - - -You need to specify only one service URL for a Pulsar cluster. Internally, the collector figures out all the existing clusters and the brokers from where it needs to pull the metrics. If you connect the dashboard to Pulsar running in standalone mode, the URL is `http://:8080` by default. 
`` is the ip address or hostname of the machine running Pulsar standalone. The ip address or hostname should be accessible from the docker instance running dashboard. - -Once the Docker container runs, the web dashboard is accessible via `localhost` or whichever host that Docker uses. - -> The `SERVICE_URL` that the dashboard uses needs to be reachable from inside the Docker container - -If the Pulsar service runs in standalone mode in `localhost`, the `SERVICE_URL` has to -be the IP of the machine. - -Similarly, given the Pulsar standalone advertises itself with localhost by default, you need to -explicitly set the advertise address to the host IP. For example: - -```shell - -$ bin/pulsar standalone --advertised-address 1.2.3.4 - -``` - -### Known issues - -Currently, only Pulsar Token [authentication](security-overview.md#authentication-providers) is supported. diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/administration-geo.md b/site2/website/versioned_docs/version-2.8.2-deprecated/administration-geo.md deleted file mode 100644 index f5dc1b2f79a3fa..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/administration-geo.md +++ /dev/null @@ -1,215 +0,0 @@ ---- -id: administration-geo -title: Pulsar geo-replication -sidebar_label: "Geo-replication" -original_id: administration-geo ---- - -*Geo-replication* is the replication of persistently stored message data across multiple clusters of a Pulsar instance. - -## How geo-replication works - -The diagram below illustrates the process of geo-replication across Pulsar clusters: - -![Replication Diagram](/assets/geo-replication.png) - -In this diagram, whenever **P1**, **P2**, and **P3** producers publish messages to the **T1** topic on **Cluster-A**, **Cluster-B**, and **Cluster-C** clusters respectively, those messages are instantly replicated across clusters. Once the messages are replicated, **C1** and **C2** consumers can consume those messages from their respective clusters. - -Without geo-replication, **C1** and **C2** consumers are not able to consume messages that **P3** producer publishes. - -## Geo-replication and Pulsar properties - -You must enable geo-replication on a per-tenant basis in Pulsar. You can enable geo-replication between clusters only when a tenant is created that allows access to both clusters. - -Although geo-replication must be enabled between two clusters, actually geo-replication is managed at the namespace level. You must complete the following tasks to enable geo-replication for a namespace: - -* [Enable geo-replication namespaces](#enable-geo-replication-namespaces) -* Configure that namespace to replicate across two or more provisioned clusters - -Any message published on *any* topic in that namespace is replicated to all clusters in the specified set. - -## Local persistence and forwarding - -When messages are produced on a Pulsar topic, messages are first persisted in the local cluster, and then forwarded asynchronously to the remote clusters. - -In normal cases, when connectivity issues are none, messages are replicated immediately, at the same time as they are dispatched to local consumers. Typically, the network [round-trip time](https://en.wikipedia.org/wiki/Round-trip_delay_time) (RTT) between the remote regions defines end-to-end delivery latency. - -Applications can create producers and consumers in any of the clusters, even when the remote clusters are not reachable (like during a network partition). 
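-
-To make this concrete, the following is a minimal Java sketch of a producer that talks only to its local cluster (the service URL and topic name are placeholders). The `send()` call returns once the message is persisted locally; forwarding to the remote clusters happens asynchronously.
-
-```java
-
-import org.apache.pulsar.client.api.Producer;
-import org.apache.pulsar.client.api.PulsarClient;
-
-public class LocalClusterProducer {
-    public static void main(String[] args) throws Exception {
-        // Connect to the local cluster only (placeholder URL).
-        PulsarClient client = PulsarClient.builder()
-                .serviceUrl("pulsar://us-west-broker.example.com:6650")
-                .build();
-        Producer<byte[]> producer = client.newProducer()
-                .topic("persistent://my-tenant/my-namespace/my-topic")
-                .create();
-        // send() completes after local persistence; replication to
-        // remote clusters proceeds asynchronously afterwards.
-        producer.send("geo-replicated payload".getBytes());
-        client.close();
-    }
-}
-
-```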
-
-Producers and consumers can publish messages to and consume messages from any cluster in a Pulsar instance. However, subscriptions are not only local to the cluster where they are created; they can also be transferred between clusters once replicated subscriptions are enabled. With replicated subscriptions, subscription state is kept in sync while a topic is asynchronously replicated across multiple geographical regions. In case of failover, a consumer can restart consuming messages from the failure point in a different cluster.
-
-In the aforementioned example, the **T1** topic is replicated among three clusters, **Cluster-A**, **Cluster-B**, and **Cluster-C**.
-
-All messages produced in any of the three clusters are delivered to all subscriptions in other clusters. In this case, **C1** and **C2** consumers receive all messages that **P1**, **P2**, and **P3** producers publish. Ordering is still guaranteed on a per-producer basis.
-
-## Configure replication
-
-As stated in the [Geo-replication and Pulsar properties](#geo-replication-and-pulsar-properties) section, geo-replication in Pulsar is managed at the [tenant](reference-terminology.md#tenant) level.
-
-The following example connects three clusters: **us-east**, **us-west**, and **us-cent**.
-
-### Connect replication clusters
-
-To replicate data among clusters, you need to configure each cluster to connect to the others. You can use the [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/) tool to create a connection.
-
-**Example**
-
-Suppose that you have 3 replication clusters: `us-west`, `us-cent`, and `us-east`.
-
-1. Configure the connection from `us-west` to `us-east`.
-
-   Run the following command on `us-west`.
-
-```shell
-
-$ bin/pulsar-admin clusters create \
-  --broker-url pulsar://<DNS-OF-US-EAST>:<PORT> \
-  --url http://<DNS-OF-US-EAST>:<PORT> \
-  us-east
-
-```
-
-   :::tip
-
-   - If you want to use a secure connection for a cluster, you can use the flags `--broker-url-secure` and `--url-secure`. For more information, see [pulsar-admin clusters create](https://pulsar.apache.org/tools/pulsar-admin/).
-   - Different clusters may use different authentication methods. You can use the flags `--auth-plugin` and `--auth-parameters` together to set cluster authentication, which overrides `brokerClientAuthenticationPlugin` and `brokerClientAuthenticationParameters` if `authenticationEnabled` is set to `true` in `broker.conf` and `standalone.conf`. For more information, see [authentication and authorization](concepts-authentication.md).
-
-   :::
-
-2. Configure the connection from `us-west` to `us-cent`.
-
-   Run the following command on `us-west`.
-
-```shell
-
-$ bin/pulsar-admin clusters create \
-  --broker-url pulsar://<DNS-OF-US-CENT>:<PORT> \
-  --url http://<DNS-OF-US-CENT>:<PORT> \
-  us-cent
-
-```
-
-3. Run similar commands on `us-east` and `us-cent` to create connections among the clusters.
-
-### Grant permissions to properties
-
-To replicate to a cluster, the tenant needs permission to use that cluster. You can grant permission to the tenant when you create the tenant, or grant it later.
-
-Specify all the intended clusters when you create a tenant:
-
-```shell
-
-$ bin/pulsar-admin tenants create my-tenant \
-  --admin-roles my-admin-role \
-  --allowed-clusters us-west,us-east,us-cent
-
-```
-
-To update permissions of an existing tenant, use `update` instead of `create`.
-
-### Enable geo-replication namespaces
-
-You can create a namespace with the following command sample.
- -```shell - -$ bin/pulsar-admin namespaces create my-tenant/my-namespace - -``` - -Initially, the namespace is not assigned to any cluster. You can assign the namespace to clusters using the `set-clusters` subcommand: - -```shell - -$ bin/pulsar-admin namespaces set-clusters my-tenant/my-namespace \ - --clusters us-west,us-east,us-cent - -``` - -You can change the replication clusters for a namespace at any time, without disruption to ongoing traffic. Replication channels are immediately set up or stopped in all clusters as soon as the configuration changes. - -### Use topics with geo-replication - -Once you create a geo-replication namespace, any topics that producers or consumers create within that namespace is replicated across clusters. Typically, each application uses the `serviceUrl` for the local cluster. - -#### Selective replication - -By default, messages are replicated to all clusters configured for the namespace. You can restrict replication selectively by specifying a replication list for a message, and then that message is replicated only to the subset in the replication list. - -The following is an example for the [Java API](client-libraries-java.md). Note the use of the `setReplicationClusters` method when you construct the {@inject: javadoc:Message:/client/org/apache/pulsar/client/api/Message} object: - -```java - -List restrictReplicationTo = Arrays.asList( - "us-west", - "us-east" -); - -Producer producer = client.newProducer() - .topic("some-topic") - .create(); - -producer.newMessage() - .value("my-payload".getBytes()) - .setReplicationClusters(restrictReplicationTo) - .send(); - -``` - -#### Topic stats - -Topic-specific statistics for geo-replication topics are available via the [`pulsar-admin`](reference-pulsar-admin.md) tool and {@inject: rest:REST:/} API: - -```shell - -$ bin/pulsar-admin persistent stats persistent://my-tenant/my-namespace/my-topic - -``` - -Each cluster reports its own local stats, including the incoming and outgoing replication rates and backlogs. - -#### Delete a geo-replication topic - -Given that geo-replication topics exist in multiple regions, directly deleting a geo-replication topic is not possible. Instead, you should rely on automatic topic garbage collection. - -In Pulsar, a topic is automatically deleted when the topic meets the following three conditions: -- no producers or consumers are connected to it; -- no subscriptions to it; -- no more messages are kept for retention. -For geo-replication topics, each region uses a fault-tolerant mechanism to decide when deleting the topic locally is safe. - -You can explicitly disable topic garbage collection by setting `brokerDeleteInactiveTopicsEnabled` to `false` in your [broker configuration](reference-configuration.md#broker). - -To delete a geo-replication topic, close all producers and consumers on the topic, and delete all of its local subscriptions in every replication cluster. When Pulsar determines that no valid subscription for the topic remains across the system, it will garbage collect the topic. - -## Replicated subscriptions - -Pulsar supports replicated subscriptions, so you can keep subscription state in sync, within a sub-second timeframe, in the context of a topic that is being asynchronously replicated across multiple geographical regions. - -In case of failover, a consumer can restart consuming from the failure point in a different cluster. - -### Enable replicated subscription - -Replicated subscription is disabled by default. 
You can enable replicated subscription when creating a consumer. - -```java - -Consumer consumer = client.newConsumer(Schema.STRING) - .topic("my-topic") - .subscriptionName("my-subscription") - .replicateSubscriptionState(true) - .subscribe(); - -``` - -### Advantages - - * It is easy to implement the logic. - * You can choose to enable or disable replicated subscription. - * When you enable it, the overhead is low, and it is easy to configure. - * When you disable it, the overhead is zero. - -### Limitations - -* When you enable replicated subscription, you're creating a consistent distributed snapshot to establish an association between message ids from different clusters. The snapshots are taken periodically. The default value is `1 second`. It means that a consumer failing over to a different cluster can potentially receive 1 second of duplicates. You can also configure the frequency of the snapshot in the `broker.conf` file. -* Only the base line cursor position is synced in replicated subscriptions while the individual acknowledgments are not synced. This means the messages acknowledged out-of-order could end up getting delivered again, in the case of a cluster failover. diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/administration-isolation.md b/site2/website/versioned_docs/version-2.8.2-deprecated/administration-isolation.md deleted file mode 100644 index d2de042a2e7415..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/administration-isolation.md +++ /dev/null @@ -1,115 +0,0 @@ ---- -id: administration-isolation -title: Pulsar isolation -sidebar_label: "Pulsar isolation" -original_id: administration-isolation ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -In an organization, a Pulsar instance provides services to multiple teams. When organizing the resources across multiple teams, you want to make a suitable isolation plan to avoid the resource competition between different teams and applications and provide high-quality messaging service. In this case, you need to take resource isolation into consideration and weigh your intended actions against expected and unexpected consequences. - -To enforce resource isolation, you can use the Pulsar isolation policy, which allows you to allocate resources (**broker** and **bookie**) for the namespace. - -## Broker isolation - -In Pulsar, when namespaces (more specifically, namespace bundles) are assigned dynamically to brokers, the namespace isolation policy limits the set of brokers that can be used for assignment. Before topics are assigned to brokers, you can set the namespace isolation policy with a primary or a secondary regex to select desired brokers. - -You can set a namespace isolation policy for a cluster using one of the following methods. - -````mdx-code-block - - - - -``` - -pulsar-admin ns-isolation-policy set options - -``` - -For more information about the command `pulsar-admin ns-isolation-policy set options`, see [here](https://pulsar.apache.org/tools/pulsar-admin/). 
- -**Example** - -```shell - -bin/pulsar-admin ns-isolation-policy set \ ---auto-failover-policy-type min_available \ ---auto-failover-policy-params min_limit=1,usage_threshold=80 \ ---namespaces my-tenant/my-namespace \ ---primary 10.193.216.* my-cluster policy-name - -``` - - - - -[PUT /admin/v2/namespaces/{tenant}/{namespace}](https://pulsar.apache.org/admin-rest-api/?version=master&apiversion=v2#operation/createNamespace) - - - - -For how to set namespace isolation policy using Java admin API, see [here](https://github.com/apache/pulsar/blob/master/pulsar-client-admin/src/main/java/org/apache/pulsar/client/admin/internal/NamespacesImpl.java#L251). - - - - -```` - -## Bookie isolation - -A namespace can be isolated into user-defined groups of bookies, which guarantees all the data that belongs to the namespace is stored in desired bookies. The bookie affinity group uses the BookKeeper [rack-aware placement policy](https://bookkeeper.apache.org/docs/latest/api/javadoc/org/apache/bookkeeper/client/EnsemblePlacementPolicy.html) and it is a way to feed rack information which is stored as JSON format in znode. - -You can set a bookie affinity group using one of the following methods. - -````mdx-code-block - - - - -``` - -pulsar-admin namespaces set-bookie-affinity-group options - -``` - -For more information about the command `pulsar-admin namespaces set-bookie-affinity-group options`, see [here](https://pulsar.apache.org/tools/pulsar-admin/). - -**Example** - -```shell - -bin/pulsar-admin bookies set-bookie-rack \ ---bookie 127.0.0.1:3181 \ ---hostname 127.0.0.1:3181 \ ---group group-bookie1 \ ---rack rack1 - -bin/pulsar-admin namespaces set-bookie-affinity-group public/default \ ---primary-group group-bookie1 - -``` - - - - -[POST /admin/v2/namespaces/{tenant}/{namespace}/persistence/bookieAffinity](https://pulsar.apache.org/admin-rest-api/?version=master&apiversion=v2#operation/setBookieAffinityGroup) - - - - -For how to set bookie affinity group for a namespace using Java admin API, see [here](https://github.com/apache/pulsar/blob/master/pulsar-client-admin/src/main/java/org/apache/pulsar/client/admin/internal/NamespacesImpl.java#L1164). - - - - -```` diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/administration-load-balance.md b/site2/website/versioned_docs/version-2.8.2-deprecated/administration-load-balance.md deleted file mode 100644 index 134c39e0956b23..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/administration-load-balance.md +++ /dev/null @@ -1,256 +0,0 @@ ---- -id: administration-load-balance -title: Pulsar load balance -sidebar_label: "Load balance" -original_id: administration-load-balance ---- - -## Load balance across Pulsar brokers - -Pulsar is an horizontally scalable messaging system, so the traffic -in a logical cluster must be spread across all the available Pulsar brokers as evenly as possible, which is a core requirement. - -You can use multiple settings and tools to control the traffic distribution which require a bit of context to understand how the traffic is managed in Pulsar. Though, in most cases, the core requirement mentioned above is true out of the box and you should not worry about it. - -## Pulsar load manager architecture - -The following part introduces the basic architecture of the Pulsar load manager. - -### Assign topics to brokers dynamically - -Topics are dynamically assigned to brokers based on the load conditions of all brokers in the cluster. 
- -When a client starts using new topics that are not assigned to any broker, a process is triggered to choose the best suited broker to acquire ownership of these topics according to the load conditions. - -In case of partitioned topics, different partitions are assigned to different brokers. Here "topic" means either a non-partitioned topic or one partition of a topic. - -The assignment is "dynamic" because the assignment changes quickly. For example, if the broker owning the topic crashes, the topic is reassigned immediately to another broker. Another scenario is that the broker owning the topic becomes overloaded. In this case, the topic is reassigned to a less loaded broker. - -The stateless nature of brokers makes the dynamic assignment possible, so you can quickly expand or shrink the cluster based on usage. - -#### Assignment granularity - -The assignment of topics or partitions to brokers is not done at the topics or partitions level, but done at the Bundle level (a higher level). The reason is to amortize the amount of information that you need to keep track. Based on CPU, memory, traffic load and other indexes, topics are assigned to a particular broker dynamically. - -Instead of individual topic or partition assignment, each broker takes ownership of a subset of the topics for a namespace. This subset is called a "*bundle*" and effectively this subset is a sharding mechanism. - -The namespace is the "administrative" unit: many config knobs or operations are done at the namespace level. - -For assignment, a namespaces is sharded into a list of "bundles", with each bundle comprising -a portion of overall hash range of the namespace. - -Topics are assigned to a particular bundle by taking the hash of the topic name and checking in which -bundle the hash falls into. - -Each bundle is independent of the others and thus is independently assigned to different brokers. - -### Create namespaces and bundles - -When you create a new namespace, the new namespace sets to use the default number of bundles. You can set this in `conf/broker.conf`: - -```properties - -# When a namespace is created without specifying the number of bundle, this -# value will be used as the default -defaultNumberOfNamespaceBundles=4 - -``` - -You can either change the system default, or override it when you create a new namespace: - -```shell - -$ bin/pulsar-admin namespaces create my-tenant/my-namespace --clusters us-west --bundles 16 - -``` - -With this command, you create a namespace with 16 initial bundles. Therefore the topics for this namespaces can immediately be spread across up to 16 brokers. - -In general, if you know the expected traffic and number of topics in advance, you had better start with a reasonable number of bundles instead of waiting for the system to auto-correct the distribution. - -On the same note, it is beneficial to start with more bundles than the number of brokers, because of the hashing nature of the distribution of topics into bundles. For example, for a namespace with 1000 topics, using something like 64 bundles achieves a good distribution of traffic across 16 brokers. - -### Unload topics and bundles - -You can "unload" a topic in Pulsar with admin operation. Unloading means to close the topics, -release ownership and reassign the topics to a new broker, based on current load. - -When unloading happens, the client experiences a small latency blip, typically in the order of tens of milliseconds, while the topic is reassigned. 
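-
-If you manage assignments programmatically rather than with the CLI commands shown below, the same unload operations are exposed through the Java admin client. The following is a minimal sketch (the admin URL, topic, and namespace names are placeholders):
-
-```java
-
-import org.apache.pulsar.client.admin.PulsarAdmin;
-
-public class UnloadExample {
-    public static void main(String[] args) throws Exception {
-        // Admin endpoint of any broker in the cluster (placeholder URL).
-        PulsarAdmin admin = PulsarAdmin.builder()
-                .serviceHttpUrl("http://localhost:8080")
-                .build();
-        // Close, release, and reassign a single topic based on current load.
-        admin.topics().unload("persistent://my-tenant/my-namespace/my-topic");
-        // Unload every topic in a namespace and trigger reassignments.
-        admin.namespaces().unload("my-tenant/my-namespace");
-        admin.close();
-    }
-}
-
-```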
- -Unloading is the mechanism that the load-manager uses to perform the load shedding, but you can also trigger the unloading manually, for example to correct the assignments and redistribute traffic even before having any broker overloaded. - -Unloading a topic has no effect on the assignment, but just closes and reopens the particular topic: - -```shell - -pulsar-admin topics unload persistent://tenant/namespace/topic - -``` - -To unload all topics for a namespace and trigger reassignments: - -```shell - -pulsar-admin namespaces unload tenant/namespace - -``` - -### Split namespace bundles - -Since the load for the topics in a bundle might change over time, or predicting upfront might just be hard, brokers can split bundles into two. The new smaller bundles can be reassigned to different brokers. - -The splitting happens based on some tunable thresholds. Any existing bundle that exceeds any of the threshold is a candidate to be split. By default the newly split bundles are also immediately offloaded to other brokers, to facilitate the traffic distribution. - -```properties - -# enable/disable namespace bundle auto split -loadBalancerAutoBundleSplitEnabled=true - -# enable/disable automatic unloading of split bundles -loadBalancerAutoUnloadSplitBundlesEnabled=true - -# maximum topics in a bundle, otherwise bundle split will be triggered -loadBalancerNamespaceBundleMaxTopics=1000 - -# maximum sessions (producers + consumers) in a bundle, otherwise bundle split will be triggered -loadBalancerNamespaceBundleMaxSessions=1000 - -# maximum msgRate (in + out) in a bundle, otherwise bundle split will be triggered -loadBalancerNamespaceBundleMaxMsgRate=30000 - -# maximum bandwidth (in + out) in a bundle, otherwise bundle split will be triggered -loadBalancerNamespaceBundleMaxBandwidthMbytes=100 - -# maximum number of bundles in a namespace (for auto-split) -loadBalancerNamespaceMaximumBundles=128 - -``` - -### Shed load automatically - -The support for automatic load shedding is available in the load manager of Pulsar. This means that whenever the system recognizes a particular broker is overloaded, the system forces some traffic to be reassigned to less loaded brokers. - -When a broker is identified as overloaded, the broker forces to "unload" a subset of the bundles, the -ones with higher traffic, that make up for the overload percentage. - -For example, the default threshold is 85% and if a broker is over quota at 95% CPU usage, then the broker unloads the percent difference plus a 5% margin: `(95% - 85%) + 5% = 15%`. - -Given the selection of bundles to offload is based on traffic (as a proxy measure for cpu, network -and memory), broker unloads bundles for at least 15% of traffic. - -The automatic load shedding is enabled by default and you can disable the automatic load shedding with this setting: - -```properties - -# Enable/disable automatic bundle unloading for load-shedding -loadBalancerSheddingEnabled=true - -``` - -Additional settings that apply to shedding: - -```properties - -# Load shedding interval. Broker periodically checks whether some traffic should be offload from -# some over-loaded broker to other under-loaded brokers -loadBalancerSheddingIntervalMinutes=1 - -# Prevent the same topics to be shed and moved to other brokers more that once within this timeframe -loadBalancerSheddingGracePeriodMinutes=30 - -``` - -#### Broker overload thresholds - -The determinations of when a broker is overloaded is based on threshold of CPU, network and memory usage. 

### Unload topics and bundles

You can "unload" a topic in Pulsar with an admin operation. Unloading means closing the topic, releasing ownership, and reassigning the topic to a new broker, based on current load.

When unloading happens, the client experiences a small latency blip, typically on the order of tens of milliseconds, while the topic is reassigned.

Unloading is the mechanism that the load manager uses to perform load shedding, but you can also trigger it manually, for example to correct the assignments and redistribute traffic before any broker becomes overloaded.

Unloading a single topic has no effect on the overall assignment; it just closes and reopens that topic:

```shell

pulsar-admin topics unload persistent://tenant/namespace/topic

```

To unload all topics for a namespace and trigger reassignments:

```shell

pulsar-admin namespaces unload tenant/namespace

```
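
If you prefer the Java admin API, the equivalent calls are sketched below, assuming a `PulsarAdmin` instance built as in the earlier namespace-creation sketch:

```java

import org.apache.pulsar.client.admin.PulsarAdmin;

public class UnloadExample {
    public static void main(String[] args) throws Exception {
        PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080")
                .build();

        // Unload a single topic: it is closed and then reassigned
        // based on the current load.
        admin.topics().unload("persistent://tenant/namespace/topic");

        // Unload all topics of a namespace and trigger reassignments.
        admin.namespaces().unload("tenant/namespace");

        admin.close();
    }
}

```
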

### Split namespace bundles

Since the load for the topics in a bundle might change over time, and predicting it upfront can be hard, brokers can split a bundle into two. The new, smaller bundles can then be reassigned to different brokers.

The splitting is based on some tunable thresholds. Any existing bundle that exceeds any of the thresholds is a candidate to be split. By default, the newly split bundles are also immediately unloaded to other brokers, to facilitate the traffic distribution.

```properties

# enable/disable namespace bundle auto split
loadBalancerAutoBundleSplitEnabled=true

# enable/disable automatic unloading of split bundles
loadBalancerAutoUnloadSplitBundlesEnabled=true

# maximum topics in a bundle, otherwise bundle split will be triggered
loadBalancerNamespaceBundleMaxTopics=1000

# maximum sessions (producers + consumers) in a bundle, otherwise bundle split will be triggered
loadBalancerNamespaceBundleMaxSessions=1000

# maximum msgRate (in + out) in a bundle, otherwise bundle split will be triggered
loadBalancerNamespaceBundleMaxMsgRate=30000

# maximum bandwidth (in + out) in a bundle, otherwise bundle split will be triggered
loadBalancerNamespaceBundleMaxBandwidthMbytes=100

# maximum number of bundles in a namespace (for auto-split)
loadBalancerNamespaceMaximumBundles=128

```

### Shed load automatically

The load manager of Pulsar supports automatic load shedding: whenever the system recognizes that a particular broker is overloaded, it forces some traffic to be reassigned to less-loaded brokers.

When a broker is identified as overloaded, it "unloads" a subset of its bundles, the ones with higher traffic, that together account for the overload percentage.

For example, the default threshold is 85% and if a broker is over quota at 95% CPU usage, then the broker unloads the percent difference plus a 5% margin: `(95% - 85%) + 5% = 15%`.

Given that the selection of bundles to offload is based on traffic (as a proxy measure for CPU, network, and memory), the broker unloads bundles carrying at least 15% of its traffic.

Automatic load shedding is enabled by default, and you can disable it with this setting:

```properties

# Enable/disable automatic bundle unloading for load-shedding
loadBalancerSheddingEnabled=true

```

Additional settings that apply to shedding:

```properties

# Load shedding interval. Broker periodically checks whether some traffic should be offloaded from
# some overloaded broker to other underloaded brokers
loadBalancerSheddingIntervalMinutes=1

# Prevent the same topics from being shed and moved to other brokers more than once within this timeframe
loadBalancerSheddingGracePeriodMinutes=30

```

#### Broker overload thresholds

The determination of when a broker is overloaded is based on thresholds of CPU, network, and memory usage. Whenever any of those metrics reaches the threshold, the system triggers shedding (if enabled).

By default, the overload threshold is set at 85%:

```properties

# Usage threshold to determine a broker as over-loaded
loadBalancerBrokerOverloadedThresholdPercentage=85

```

Pulsar gathers the usage stats from the system metrics.

For network utilization, in some environments the network interface speed that Linux reports is not correct and needs to be manually overridden. This is the case on AWS EC2 instances with a 1Gbps NIC, for which the OS reports a 10Gbps speed.

Because of the incorrect max speed, the Pulsar load manager might think the broker has not reached the NIC capacity, while in fact it is already using all the bandwidth and traffic is being slowed down.

You can use the following setting to correct the max NIC speed:

```properties

# Override the auto-detection of the network interfaces max speed.
# This option is useful in some environments (eg: EC2 VMs) where the max speed
# reported by Linux is not reflecting the real bandwidth available to the broker.
# Since the network usage is employed by the load manager to decide when a broker
# is overloaded, it is important to make sure the info is correct or override it
# with the right value here. The configured value can be a double (eg: 0.8) and that
# can be used to trigger load-shedding even before hitting the NIC limits.
loadBalancerOverrideBrokerNicSpeedGbps=

```

When the value is empty, Pulsar uses the value that the OS reports.

### Distribute anti-affinity namespaces across failure domains

When your application has multiple namespaces and you want at least one of them available at all times to avoid any downtime, you can group these namespaces and distribute them across different [failure domains](reference-terminology.md#failure-domain) and different brokers. Thus, if one of the failure domains is down (due to a release rollout or broker restarts), only the namespaces owned by that failure domain are disrupted; the namespaces owned by other domains remain available without any impact.

The namespaces in such a group have anti-affinity to each other; that is, all the namespaces in this group are [anti-affinity namespaces](reference-terminology.md#anti-affinity-namespaces) and are distributed to different failure domains in a load-balanced manner.

As illustrated in the following figure, Pulsar has 2 failure domains (Domain1 and Domain2) and each domain has 2 brokers in it. You can create an anti-affinity namespace group with 4 namespaces, all of which have anti-affinity to each other. The load manager tries to distribute namespaces evenly across all the brokers in the same domain. Since each domain has 2 brokers, each domain ends up owning 2 namespaces and each broker owns 1 namespace from this anti-affinity namespace group.

![Distribute anti-affinity namespaces across failure domains](/assets/anti-affinity-namespaces-across-failure-domains.svg)

The load manager follows an even distribution policy across failure domains to assign anti-affinity namespaces. The following table outlines the evenly distributed assignment sequence illustrated in the above figure.

| Assignment sequence | Namespace | Failure domain candidates | Broker candidates | Selected broker |
|:---|:------------|:------------------|:------------------------------------|:-----------------|
| 1 | Namespace1 | Domain1, Domain2 | Broker1, Broker2, Broker3, Broker4 | Domain1:Broker1 |
| 2 | Namespace2 | Domain2 | Broker3, Broker4 | Domain2:Broker3 |
| 3 | Namespace3 | Domain1, Domain2 | Broker2, Broker4 | Domain1:Broker2 |
| 4 | Namespace4 | Domain2 | Broker4 | Domain2:Broker4 |

:::tip

* Each namespace belongs to only one anti-affinity group. If a namespace with an existing anti-affinity assignment is assigned to another anti-affinity group, the original assignment is dropped.

* If there are more anti-affinity namespaces than failure domains, the load manager distributes namespaces evenly across all the domains, and every domain distributes its namespaces evenly across all of its brokers.

:::

#### Create a failure domain and register brokers

:::note

One broker can only be registered to a single failure domain.

:::

To create a domain under a specific cluster and register brokers, run the following command:

```bash

pulsar-admin clusters create-failure-domain <cluster-name> --domain-name <domain-name> --broker-list <broker-list>

```

You can also view, update, and delete domains under a specific cluster. For more information, refer to [Pulsar admin doc](/tools/pulsar-admin/).

#### Create an anti-affinity namespace group

An anti-affinity group is created automatically when the first namespace is assigned to the group. To assign a namespace to an anti-affinity group, run the following command. It sets an anti-affinity group name for a namespace.

```bash

pulsar-admin namespaces set-anti-affinity-group <namespace> --group <group-name>

```

For more information about `anti-affinity-group` related commands, refer to [Pulsar admin doc](/tools/pulsar-admin/).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/administration-proxy.md b/site2/website/versioned_docs/version-2.8.2-deprecated/administration-proxy.md
deleted file mode 100644
index c046ed34d46b22..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/administration-proxy.md
+++ /dev/null
@@ -1,86 +0,0 @@
---
id: administration-proxy
title: Pulsar proxy
sidebar_label: "Pulsar proxy"
original_id: administration-proxy
---

Pulsar proxy is an optional gateway. Pulsar proxy is used when direct connections between clients and Pulsar brokers are either infeasible or undesirable. For example, when you run Pulsar in a cloud environment or on [Kubernetes](https://kubernetes.io) or an analogous platform, you can run Pulsar proxy.

## Configure the proxy

Before using the proxy, you need to configure it with the broker addresses in the cluster. You can configure the proxy to connect directly to service discovery, or specify a broker URL in the configuration.

### Use service discovery

Pulsar uses [ZooKeeper](https://zookeeper.apache.org) for service discovery. To connect the proxy to ZooKeeper, specify the following in `conf/proxy.conf`.

```properties

zookeeperServers=zk-0,zk-1,zk-2
configurationStoreServers=zk-0:2184,zk-remote:2184

```

> To use service discovery, you need to open the network ACLs, so the proxy can connect to the ZooKeeper nodes through the ZooKeeper client port (port `2181`) and the configuration store client port (port `2184`).

> However, it is not secure to use service discovery: if the network ACL is open and someone compromises a proxy, they gain full access to ZooKeeper.

### Use broker URLs

It is more secure to specify a URL to connect to the brokers.

Proxy authorization requires access to ZooKeeper, so if you use these broker URLs to connect to the brokers, you need to disable authorization at the proxy level. Brokers still authorize requests after the proxy forwards them.

You can configure the broker URLs in `conf/proxy.conf` as follows.

```properties

brokerServiceURL=pulsar://brokers.example.com:6650
brokerWebServiceURL=http://brokers.example.com:8080
functionWorkerWebServiceURL=http://function-workers.example.com:8080

```

If you use TLS, configure the broker URLs in the following way:

```properties

brokerServiceURLTLS=pulsar+ssl://brokers.example.com:6651
brokerWebServiceURLTLS=https://brokers.example.com:8443
functionWorkerWebServiceURL=https://function-workers.example.com:8443

```

The hostname in the URLs provided should be a DNS entry which points to multiple brokers, or a virtual IP address backed by multiple broker IP addresses, so that the proxy does not lose connectivity to the Pulsar cluster if a single broker becomes unavailable.

The ports to connect to the brokers (6650 and 8080, or in the case of TLS, 6651 and 8443) should be open in the network ACLs.

Note that if you do not use functions, you do not need to configure `functionWorkerWebServiceURL`.

## Start the proxy

To start the proxy:

```bash

$ cd /path/to/pulsar/directory
$ bin/pulsar proxy

```

> You can run multiple instances of the Pulsar proxy in a cluster.

## Stop the proxy

Pulsar proxy runs in the foreground by default. To stop the proxy, simply stop the process in which the proxy is running.

## Proxy frontends

You can run Pulsar proxy behind some kind of load-distributing frontend, such as an [HAProxy](https://www.digitalocean.com/community/tutorials/an-introduction-to-haproxy-and-load-balancing-concepts) load balancer.

## Use Pulsar clients with the proxy

Once your Pulsar proxy is up and running, preferably behind a load-distributing [frontend](#proxy-frontends), clients can connect to the proxy via whichever address the frontend uses. If the address is the DNS address `pulsar.cluster.default`, for example, the connection URL for clients is `pulsar://pulsar.cluster.default:6650`.
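
As an illustration, here is a minimal Java client that connects through the proxy. It is a sketch that assumes the `pulsar.cluster.default` frontend address from above and a hypothetical topic named `my-topic` in the default tenant and namespace:

```java

import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;

public class ProxyClientExample {
    public static void main(String[] args) throws Exception {
        // Clients address the proxy (or its frontend), not individual brokers.
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://pulsar.cluster.default:6650")
                .build();

        Producer<byte[]> producer = client.newProducer()
                .topic("persistent://public/default/my-topic")
                .create();

        producer.send("Hello through the proxy".getBytes());

        producer.close();
        client.close();
    }
}

```
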

For more information on proxy configuration, refer to [Pulsar proxy](reference-configuration.md#pulsar-proxy).
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/administration-pulsar-manager.md b/site2/website/versioned_docs/version-2.8.2-deprecated/administration-pulsar-manager.md
deleted file mode 100644
index 0e3800d847f0c8..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/administration-pulsar-manager.md
+++ /dev/null
@@ -1,205 +0,0 @@
---
id: administration-pulsar-manager
title: Pulsar Manager
sidebar_label: "Pulsar Manager"
original_id: administration-pulsar-manager
---

Pulsar Manager is a web-based GUI management and monitoring tool that helps administrators and users manage and monitor tenants, namespaces, topics, subscriptions, brokers, clusters, and so on. It also supports dynamic configuration of multiple environments.

:::note

If you monitor your current stats with Pulsar dashboard, you can try to use Pulsar Manager instead. Pulsar dashboard is deprecated.
- -::: - -## Install - -The easiest way to use the Pulsar Manager is to run it inside a [Docker](https://www.docker.com/products/docker) container. - -```shell - -docker pull apachepulsar/pulsar-manager:v0.2.0 -docker run -it \ - -p 9527:9527 -p 7750:7750 \ - -e SPRING_CONFIGURATION_FILE=/pulsar-manager/pulsar-manager/application.properties \ - apachepulsar/pulsar-manager:v0.2.0 - -``` - -* `SPRING_CONFIGURATION_FILE`: Default configuration file for spring. - -### Set administrator account and password - - ```shell - - CSRF_TOKEN=$(curl http://localhost:7750/pulsar-manager/csrf-token) - curl \ - -H 'X-XSRF-TOKEN: $CSRF_TOKEN' \ - -H 'Cookie: XSRF-TOKEN=$CSRF_TOKEN;' \ - -H "Content-Type: application/json" \ - -X PUT http://localhost:7750/pulsar-manager/users/superuser \ - -d '{"name": "admin", "password": "apachepulsar", "description": "test", "email": "username@test.org"}' - - ``` - -You can find the docker image in the [Docker Hub](https://github.com/apache/pulsar-manager/tree/master/docker) directory and build an image from the source code as well: - -``` - -git clone https://github.com/apache/pulsar-manager -cd pulsar-manager/front-end -npm install --save -npm run build:prod -cd .. -./gradlew build -x test -cd .. -docker build -f docker/Dockerfile --build-arg BUILD_DATE=`date -u +"%Y-%m-%dT%H:%M:%SZ"` --build-arg VCS_REF=`latest` --build-arg VERSION=`latest` -t apachepulsar/pulsar-manager . - -``` - -### Use custom databases - -If you have a large amount of data, you can use a custom database. The following is an example of PostgreSQL. - -1. Initialize database and table structures using the [file](https://github.com/apache/pulsar-manager/tree/master/src/main/resources/META-INF/sql/postgresql-schema.sql). - -2. Modify the [configuration file](https://github.com/apache/pulsar-manager/blob/master/src/main/resources/application.properties) and add PostgreSQL configuration. - -``` - -spring.datasource.driver-class-name=org.postgresql.Driver -spring.datasource.url=jdbc:postgresql://127.0.0.1:5432/pulsar_manager -spring.datasource.username=postgres -spring.datasource.password=postgres - -``` - -3. Compile to generate a new executable jar package. - -``` - -./gradlew build -x test - -``` - -### Enable JWT authentication - -If you want to turn on JWT authentication, configure the following parameters: - -* `backend.jwt.token`: token for the superuser. You need to configure this parameter during cluster initialization. -* `jwt.broker.token.mode`: multiple modes of generating token, including PUBLIC, PRIVATE, and SECRET. -* `jwt.broker.public.key`: configure this option if you use the PUBLIC mode. -* `jwt.broker.private.key`: configure this option if you use the PRIVATE mode. -* `jwt.broker.secret.key`: configure this option if you use the SECRET mode. - -For more information, see [Token Authentication Admin of Pulsar](http://pulsar.apache.org/docs/en/security-token-admin/). - - -If you want to enable JWT authentication, use one of the following methods. 
- - -* Method 1: use command-line tool - -``` - -wget https://dist.apache.org/repos/dist/release/pulsar/pulsar-manager/pulsar-manager-0.2.0/apache-pulsar-manager-0.2.0-bin.tar.gz -tar -zxvf apache-pulsar-manager-0.2.0-bin.tar.gz -cd pulsar-manager -tar -zxvf pulsar-manager.tar -cd pulsar-manager -cp -r ../dist ui -./bin/pulsar-manager --redirect.host=http://localhost --redirect.port=9527 insert.stats.interval=600000 --backend.jwt.token=token --jwt.broker.token.mode=PRIVATE --jwt.broker.private.key=file:///path/broker-private.key --jwt.broker.public.key=file:///path/broker-public.key - -``` - -Firstly, [set the administrator account and password](#set-administrator-account-and-password) - -Secondly, log in to Pulsar manager through http://localhost:7750/ui/index.html. - -* Method 2: configure the application.properties file - -``` - -backend.jwt.token=token - -jwt.broker.token.mode=PRIVATE -jwt.broker.public.key=file:///path/broker-public.key -jwt.broker.private.key=file:///path/broker-private.key - -or -jwt.broker.token.mode=SECRET -jwt.broker.secret.key=file:///path/broker-secret.key - -``` - -* Method 3: use Docker and enable token authentication. - -``` - -export JWT_TOKEN="your-token" -docker run -it -p 9527:9527 -p 7750:7750 -e REDIRECT_HOST=http://localhost -e REDIRECT_PORT=9527 -e DRIVER_CLASS_NAME=org.postgresql.Driver -e URL='jdbc:postgresql://127.0.0.1:5432/pulsar_manager' -e USERNAME=pulsar -e PASSWORD=pulsar -e LOG_LEVEL=DEBUG -e JWT_TOKEN=$JWT_TOKEN -v $PWD:/data apachepulsar/pulsar-manager:v0.2.0 /bin/sh - -``` - -* `JWT_TOKEN`: the token of superuser configured for the broker. It is generated by the `bin/pulsar tokens create --secret-key` or `bin/pulsar tokens create --private-key` command. -* `REDIRECT_HOST`: the IP address of the front-end server. -* `REDIRECT_PORT`: the port of the front-end server. -* `DRIVER_CLASS_NAME`: the driver class name of the PostgreSQL database. -* `URL`: the JDBC URL of your PostgreSQL database, such as jdbc:postgresql://127.0.0.1:5432/pulsar_manager. The docker image automatically start a local instance of the PostgreSQL database. -* `USERNAME`: the username of PostgreSQL. -* `PASSWORD`: the password of PostgreSQL. -* `LOG_LEVEL`: the level of log. - -* Method 4: use Docker and turn on **token authentication** and **token management** by private key and public key. - -``` - -export JWT_TOKEN="your-token" -export PRIVATE_KEY="file:///pulsar-manager/secret/my-private.key" -export PUBLIC_KEY="file:///pulsar-manager/secret/my-public.key" -docker run -it -p 9527:9527 -p 7750:7750 -e REDIRECT_HOST=http://localhost -e REDIRECT_PORT=9527 -e DRIVER_CLASS_NAME=org.postgresql.Driver -e URL='jdbc:postgresql://127.0.0.1:5432/pulsar_manager' -e USERNAME=pulsar -e PASSWORD=pulsar -e LOG_LEVEL=DEBUG -e JWT_TOKEN=$JWT_TOKEN -e PRIVATE_KEY=$PRIVATE_KEY -e PUBLIC_KEY=$PUBLIC_KEY -v $PWD:/data -v $PWD/secret:/pulsar-manager/secret apachepulsar/pulsar-manager:v0.2.0 /bin/sh - -``` - -* `JWT_TOKEN`: the token of superuser configured for the broker. It is generated by the `bin/pulsar tokens create --private-key` command. -* `PRIVATE_KEY`: private key path mounted in container, generated by `bin/pulsar tokens create-key-pair` command. -* `PUBLIC_KEY`: public key path mounted in container, generated by `bin/pulsar tokens create-key-pair` command. -* `$PWD/secret`: the folder where the private key and public key generated by the `bin/pulsar tokens create-key-pair` command are placed locally -* `REDIRECT_HOST`: the IP address of the front-end server. 
-* `REDIRECT_PORT`: the port of the front-end server. -* `DRIVER_CLASS_NAME`: the driver class name of the PostgreSQL database. -* `URL`: the JDBC URL of your PostgreSQL database, such as jdbc:postgresql://127.0.0.1:5432/pulsar_manager. The docker image automatically start a local instance of the PostgreSQL database. -* `USERNAME`: the username of PostgreSQL. -* `PASSWORD`: the password of PostgreSQL. -* `LOG_LEVEL`: the level of log. - -* Method 5: use Docker and turn on **token authentication** and **token management** by secret key. - -``` - -export JWT_TOKEN="your-token" -export SECRET_KEY="file:///pulsar-manager/secret/my-secret.key" -docker run -it -p 9527:9527 -p 7750:7750 -e REDIRECT_HOST=http://localhost -e REDIRECT_PORT=9527 -e DRIVER_CLASS_NAME=org.postgresql.Driver -e URL='jdbc:postgresql://127.0.0.1:5432/pulsar_manager' -e USERNAME=pulsar -e PASSWORD=pulsar -e LOG_LEVEL=DEBUG -e JWT_TOKEN=$JWT_TOKEN -e SECRET_KEY=$SECRET_KEY -v $PWD:/data -v $PWD/secret:/pulsar-manager/secret apachepulsar/pulsar-manager:v0.2.0 /bin/sh - -``` - -* `JWT_TOKEN`: the token of superuser configured for the broker. It is generated by the `bin/pulsar tokens create --secret-key` command. -* `SECRET_KEY`: secret key path mounted in container, generated by `bin/pulsar tokens create-secret-key` command. -* `$PWD/secret`: the folder where the secret key generated by the `bin/pulsar tokens create-secret-key` command are placed locally -* `REDIRECT_HOST`: the IP address of the front-end server. -* `REDIRECT_PORT`: the port of the front-end server. -* `DRIVER_CLASS_NAME`: the driver class name of the PostgreSQL database. -* `URL`: the JDBC URL of your PostgreSQL database, such as jdbc:postgresql://127.0.0.1:5432/pulsar_manager. The docker image automatically start a local instance of the PostgreSQL database. -* `USERNAME`: the username of PostgreSQL. -* `PASSWORD`: the password of PostgreSQL. -* `LOG_LEVEL`: the level of log. - -* For more information about backend configurations, see [here](https://github.com/apache/pulsar-manager/blob/master/src/README). -* For more information about frontend configurations, see [here](https://github.com/apache/pulsar-manager/tree/master/front-end). - -## Log in - -[Set the administrator account and password](#set-administrator-account-and-password). - -Visit http://localhost:9527 to log in. diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/administration-stats.md b/site2/website/versioned_docs/version-2.8.2-deprecated/administration-stats.md deleted file mode 100644 index ac0c03602f36d5..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/administration-stats.md +++ /dev/null @@ -1,64 +0,0 @@ ---- -id: administration-stats -title: Pulsar stats -sidebar_label: "Pulsar statistics" -original_id: administration-stats ---- - -## Partitioned topics - -|Stat|Description| -|---|---| -|msgRateIn| The sum of publish rates of all local and replication publishers in messages per second.| -|msgThroughputIn| Same as msgRateIn but in bytes per second instead of messages per second.| -|msgRateOut| The sum of dispatch rates of all local and replication consumers in messages per second.| -|msgThroughputOut| Same as msgRateOut but in bytes per second instead of messages per second.| -|averageMsgSize| Average message size, in bytes, from this publisher within the last interval.| -|storageSize| The sum of storage size of the ledgers for this topic.| -|publishers| The list of all local publishers into the topic. 
Publishers can be anywhere from zero to thousands.| -|producerId| Internal identifier for this producer on this topic.| -|producerName| Internal identifier for this producer, generated by the client library.| -|address| IP address and source port for the connection of this producer.| -|connectedSince| Timestamp this producer is created or last reconnected.| -|subscriptions| The list of all local subscriptions to the topic.| -|my-subscription| The name of this subscription (client defined).| -|msgBacklog| The count of messages in backlog for this subscription.| -|type| This subscription type.| -|msgRateExpired| The rate at which messages are discarded instead of dispatched from this subscription due to TTL.| -|consumers| The list of connected consumers for this subscription.| -|consumerName| Internal identifier for this consumer, generated by the client library.| -|availablePermits| The number of messages this consumer has space for in the listen queue of client library. A value of 0 means the queue of client library is full and receive() is not being called. A nonzero value means this consumer is ready to be dispatched messages.| -|replication| This section gives the stats for cross-colo replication of this topic.| -|replicationBacklog| The outbound replication backlog in messages.| -|connected| Whether the outbound replicator is connected.| -|replicationDelayInSeconds| How long the oldest message has been waiting to be sent through the connection, if connected is true.| -|inboundConnection| The IP and port of the broker in the publisher connection of remote cluster to this broker. | -|inboundConnectedSince| The TCP connection being used to publish messages to the remote cluster. If no local publishers are connected, this connection is automatically closed after a minute.| - - -## Topics - -|Stat|Description| -|---|---| -|entriesAddedCounter| Messages published since this broker loads this topic.| -|numberOfEntries| Total number of messages being tracked.| -|totalSize| Total storage size in bytes of all messages.| -|currentLedgerEntries| Count of messages written to the ledger currently open for writing.| -|currentLedgerSize| Size in bytes of messages written to ledger currently open for writing.| -|lastLedgerCreatedTimestamp| Time when last ledger is created.| -|lastLedgerCreationFailureTimestamp| Time when last ledger is failed.| -|waitingCursorsCount| How many cursors are caught up and waiting for a new message to be published.| -|pendingAddEntriesCount| How many messages have (asynchronous) write requests you are waiting on completion.| -|lastConfirmedEntry| The ledgerid:entryid of the last message successfully written. If the entryid is -1, then the ledger is opened or is being currently opened but has no entries written yet.| -|state| The state of the cursor ledger. Open means you have a cursor ledger for saving updates of the markDeletePosition.| -|ledgers| The ordered list of all ledgers for this topic holding its messages.| -|cursors| The list of all cursors on this topic. 
Every subscription you saw in the topic stats has one.|
|markDeletePosition| The ack position: the last message the subscriber has acknowledged receiving.|
|readPosition| The latest read position of the subscriber.|
|waitingReadOp| This is true when the subscription has read the latest message published to the topic and is waiting for new messages to be published.|
|pendingReadOps| The counter for how many outstanding read requests to BookKeeper are in progress.|
|messagesConsumedCounter| The number of messages this cursor has acked since this broker loaded this topic.|
|cursorLedger| The ledger used to persistently store the current markDeletePosition.|
|cursorLedgerLastEntry| The last entryid used to persistently store the current markDeletePosition.|
|individuallyDeletedMessages| If acks are done out of order, shows the ranges of messages acked between the markDeletePosition and the read position.|
|lastLedgerSwitchTimestamp| The last time the cursor ledger was rolled over.|
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/administration-upgrade.md b/site2/website/versioned_docs/version-2.8.2-deprecated/administration-upgrade.md
deleted file mode 100644
index 72d136b6460f62..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/administration-upgrade.md
+++ /dev/null
@@ -1,168 +0,0 @@
---
id: administration-upgrade
title: Upgrade Guide
sidebar_label: "Upgrade"
original_id: administration-upgrade
---

## Upgrade guidelines

Apache Pulsar comprises multiple components: ZooKeeper, bookies, and brokers. These components are either stateful or stateless. You do not have to upgrade ZooKeeper nodes unless you have special requirements. While you upgrade, you need to pay attention to bookies (stateful), brokers and proxies (stateless).

The following are some guidelines on upgrading a Pulsar cluster. Read the guidelines before upgrading.

- Back up all your configuration files before upgrading.
- Read the guide entirely, make a plan, and then execute the plan. When you make your upgrade plan, take your specific requirements and environment into consideration.
- Pay attention to the upgrading order of components. In general, you do not need to upgrade your ZooKeeper or configuration store cluster (the global ZooKeeper cluster). You need to upgrade bookies first, and then upgrade brokers, proxies, and your clients.
- If `autorecovery` is enabled, disable `autorecovery` during the upgrade process, and re-enable it after completing the process.
- Read the release notes carefully for each release. Release notes describe features and configuration changes that might impact your upgrade.
- Upgrade a small subset of nodes of each type to canary test the new version before upgrading all nodes of that type in the cluster. When you have upgraded the canary nodes, let them run for a while to ensure that they work correctly.
- Upgrade one data center to verify the new version before upgrading all data centers if your cluster runs in multi-cluster replicated mode.

> Note: Currently, Apache Pulsar is compatible between versions.

## Upgrade sequence

To upgrade an Apache Pulsar cluster, follow the upgrade sequence.

1. Upgrade ZooKeeper (optional)
- Canary test: test an upgraded version in one or a small set of ZooKeeper nodes.
- Rolling upgrade: roll out the upgraded version to all ZooKeeper servers incrementally, one at a time. Monitor your dashboard during the whole rolling upgrade process.
2. 
Upgrade bookies
- Canary test: test an upgraded version in one or a small set of bookies.
- Rolling upgrade:

  a. Disable `autorecovery` with the following command.

   ```shell

   bin/bookkeeper shell autorecovery -disable

   ```

  b. Roll out the upgraded version to all bookies in the cluster after you determine that the version is safe after the canary test.

  c. After you upgrade all bookies, re-enable `autorecovery` with the following command.

   ```shell

   bin/bookkeeper shell autorecovery -enable

   ```

3. Upgrade brokers
- Canary test: test an upgraded version in one or a small set of brokers.
- Rolling upgrade: roll out the upgraded version to all brokers in the cluster after you determine that the version is safe after the canary test.
4. Upgrade proxies
- Canary test: test an upgraded version in one or a small set of proxies.
- Rolling upgrade: roll out the upgraded version to all proxies in the cluster after you determine that the version is safe after the canary test.

## Upgrade ZooKeeper (optional)
While you upgrade ZooKeeper servers, you can do a canary test first, and then upgrade all ZooKeeper servers in the cluster.

### Canary test

You can test an upgraded version on one of the ZooKeeper servers before upgrading all ZooKeeper servers in your cluster.

To upgrade a ZooKeeper server to a new version, complete the following steps:

1. Stop the ZooKeeper server.
2. Upgrade the binary and configuration files.
3. Start the ZooKeeper server with the new binary files.
4. Use `pulsar zookeeper-shell` to connect to the newly upgraded ZooKeeper server and run a few commands to verify that it works as expected.
5. Run the ZooKeeper server for a few days, then observe it and make sure the ZooKeeper cluster runs well.

#### Canary rollback

If issues occur during the canary test, you can shut down the problematic ZooKeeper node, revert the binary and configuration, and restart ZooKeeper with the reverted binary.

### Upgrade all ZooKeeper servers

After you have canary tested one ZooKeeper server in your cluster, you can upgrade all ZooKeeper servers in your cluster.

You can upgrade all ZooKeeper servers one by one, following the steps in the canary test.

## Upgrade bookies

While you upgrade bookies, you can do a canary test first, and then upgrade all bookies in the cluster.
For more details, you can read the Apache BookKeeper [Upgrade guide](http://bookkeeper.apache.org/docs/latest/admin/upgrade).

### Canary test

You can test an upgraded version on one or a small set of bookies before upgrading all bookies in your cluster.

To upgrade a bookie to a new version, complete the following steps:

1. Stop the bookie.
2. Upgrade the binary and configuration files.
3. Start the bookie in `ReadOnly` mode to verify that the bookie of this new version runs well for the read workload.

   ```shell

   bin/pulsar bookie --readOnly

   ```

4. When the bookie runs successfully in `ReadOnly` mode, stop the bookie and restart it in `Write/Read` mode.

   ```shell

   bin/pulsar bookie

   ```

5. Observe and make sure the cluster serves both write and read traffic.

#### Canary rollback

If issues occur during the canary test, you can shut down the problematic bookie node. The other bookies in the cluster replace the problematic bookie node with autorecovery.

### Upgrade all bookies

After you have canary tested some bookies in your cluster, you can upgrade all bookies in your cluster. 

Before upgrading, decide between a rolling upgrade and a downtime upgrade of the whole cluster.

In a rolling upgrade scenario, upgrade one bookie at a time. In a downtime upgrade scenario, shut down the entire cluster, upgrade each bookie, and then start the cluster.

In both scenarios, the upgrade procedure is the same for each bookie.

1. Stop the bookie.
2. Upgrade the software (either new binary or new configuration files).
3. Start the bookie.

> **Advanced operations**
> When you upgrade a large BookKeeper cluster in a rolling upgrade scenario, upgrading one bookie at a time is slow. If you configure a rack-aware or region-aware placement policy, you can upgrade bookies rack by rack or region by region, which speeds up the whole upgrade process.

## Upgrade brokers and proxies

The upgrade procedure for brokers and proxies is the same. Brokers and proxies are `stateless`, so upgrading the two services is easy.

### Canary test

You can test an upgraded version on one or a small set of nodes before upgrading all nodes in your cluster.

To upgrade to a new version, complete the following steps:

1. Stop a broker (or proxy).
2. Upgrade the binary and configuration file.
3. Start the broker (or proxy).

#### Canary rollback

If issues occur during the canary test, you can shut down the problematic broker (or proxy) node. Revert to the old version and restart the broker (or proxy).

### Upgrade all brokers or proxies

After you have canary tested some brokers or proxies in your cluster, you can upgrade all brokers or proxies in your cluster.

Before upgrading, decide between a rolling upgrade and a downtime upgrade of the whole cluster.

In a rolling upgrade scenario, you can upgrade one broker or one proxy at a time if the size of the cluster is small. If your cluster is large, you can upgrade brokers or proxies in batches. When you upgrade a batch of brokers or proxies, make sure the remaining brokers and proxies in the cluster have enough capacity to handle the traffic during the upgrade.

In a downtime upgrade scenario, shut down the entire cluster, upgrade each broker or proxy, and then start the cluster.

In both scenarios, the upgrade procedure is the same for each broker or proxy.

1. Stop the broker or proxy.
2. Upgrade the software (either new binary or new configuration files).
3. Start the broker or proxy.
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/administration-zk-bk.md b/site2/website/versioned_docs/version-2.8.2-deprecated/administration-zk-bk.md
deleted file mode 100644
index 2c080123ca81df..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/administration-zk-bk.md
+++ /dev/null
@@ -1,386 +0,0 @@
---
id: administration-zk-bk
title: ZooKeeper and BookKeeper administration
sidebar_label: "ZooKeeper and BookKeeper"
original_id: administration-zk-bk
---

Pulsar relies on two external systems for essential tasks:

* [ZooKeeper](https://zookeeper.apache.org/) is responsible for a wide variety of configuration-related and coordination-related tasks.
* [BookKeeper](http://bookkeeper.apache.org/) is responsible for [persistent storage](concepts-architecture-overview.md#persistent-storage) of message data.

ZooKeeper and BookKeeper are both open-source [Apache](https://www.apache.org/) projects. 
- -> Skip to the [How Pulsar uses ZooKeeper and BookKeeper](#how-pulsar-uses-zookeeper-and-bookkeeper) section below for a more schematic explanation of the role of these two systems in Pulsar. - - -## ZooKeeper - -Each Pulsar instance relies on two separate ZooKeeper quorums. - -* [Local ZooKeeper](#deploy-local-zookeeper) operates at the cluster level and provides cluster-specific configuration management and coordination. Each Pulsar cluster needs to have a dedicated ZooKeeper cluster. -* [Configuration Store](#deploy-configuration-store) operates at the instance level and provides configuration management for the entire system (and thus across clusters). An independent cluster of machines or the same machines that local ZooKeeper uses can provide the configuration store quorum. - -### Deploy local ZooKeeper - -ZooKeeper manages a variety of essential coordination-related and configuration-related tasks for Pulsar. - -To deploy a Pulsar instance, you need to stand up one local ZooKeeper cluster *per Pulsar cluster*. - -To begin, add all ZooKeeper servers to the quorum configuration specified in the [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file. Add a `server.N` line for each node in the cluster to the configuration, where `N` is the number of the ZooKeeper node. The following is an example for a three-node cluster: - -```properties - -server.1=zk1.us-west.example.com:2888:3888 -server.2=zk2.us-west.example.com:2888:3888 -server.3=zk3.us-west.example.com:2888:3888 - -``` - -On each host, you need to specify the node ID in `myid` file of each node, which is in `data/zookeeper` folder of each server by default (you can change the file location via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter). - -> See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed information on `myid` and more. - - -On a ZooKeeper server at `zk1.us-west.example.com`, for example, you can set the `myid` value like this: - -```shell - -$ mkdir -p data/zookeeper -$ echo 1 > data/zookeeper/myid - -``` - -On `zk2.us-west.example.com` the command is `echo 2 > data/zookeeper/myid` and so on. - -Once you add each server to the `zookeeper.conf` configuration and each server has the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool: - -```shell - -$ bin/pulsar-daemon start zookeeper - -``` - -### Deploy configuration store - -The ZooKeeper cluster configured and started up in the section above is a *local* ZooKeeper cluster that you can use to manage a single Pulsar cluster. In addition to a local cluster, however, a full Pulsar instance also requires a configuration store for handling some instance-level configuration and coordination tasks. - -If you deploy a [single-cluster](#single-cluster-pulsar-instance) instance, you do not need a separate cluster for the configuration store. If, however, you deploy a [multi-cluster](#multi-cluster-pulsar-instance) instance, you need to stand up a separate ZooKeeper cluster for configuration tasks. - -#### Single-cluster Pulsar instance - -If your Pulsar instance consists of just one cluster, then you can deploy a configuration store on the same machines as the local ZooKeeper quorum but run on different TCP ports. 
- -To deploy a ZooKeeper configuration store in a single-cluster instance, add the same ZooKeeper servers that the local quorum uses to the configuration file in [`conf/global_zookeeper.conf`](reference-configuration.md#configuration-store) using the same method for [local ZooKeeper](#local-zookeeper), but make sure to use a different port (2181 is the default for ZooKeeper). The following is an example that uses port 2184 for a three-node ZooKeeper cluster: - -```properties - -clientPort=2184 -server.1=zk1.us-west.example.com:2185:2186 -server.2=zk2.us-west.example.com:2185:2186 -server.3=zk3.us-west.example.com:2185:2186 - -``` - -As before, create the `myid` files for each server on `data/global-zookeeper/myid`. - -#### Multi-cluster Pulsar instance - -When you deploy a global Pulsar instance, with clusters distributed across different geographical regions, the configuration store serves as a highly available and strongly consistent metadata store that can tolerate failures and partitions spanning whole regions. - -The key here is to make sure the ZK quorum members are spread across at least 3 regions and that other regions run as observers. - -Again, given the very low expected load on the configuration store servers, you can share the same hosts used for the local ZooKeeper quorum. - -For example, you can assume a Pulsar instance with the following clusters `us-west`, `us-east`, `us-central`, `eu-central`, `ap-south`. Also you can assume, each cluster has its own local ZK servers named such as - -``` - -zk[1-3].${CLUSTER}.example.com - -``` - -In this scenario you want to pick the quorum participants from few clusters and let all the others be ZK observers. For example, to form a 7 servers quorum, you can pick 3 servers from `us-west`, 2 from `us-central` and 2 from `us-east`. - -This guarantees that writes to configuration store is possible even if one of these regions is unreachable. - -The ZK configuration in all the servers looks like: - -```properties - -clientPort=2184 -server.1=zk1.us-west.example.com:2185:2186 -server.2=zk2.us-west.example.com:2185:2186 -server.3=zk3.us-west.example.com:2185:2186 -server.4=zk1.us-central.example.com:2185:2186 -server.5=zk2.us-central.example.com:2185:2186 -server.6=zk3.us-central.example.com:2185:2186:observer -server.7=zk1.us-east.example.com:2185:2186 -server.8=zk2.us-east.example.com:2185:2186 -server.9=zk3.us-east.example.com:2185:2186:observer -server.10=zk1.eu-central.example.com:2185:2186:observer -server.11=zk2.eu-central.example.com:2185:2186:observer -server.12=zk3.eu-central.example.com:2185:2186:observer -server.13=zk1.ap-south.example.com:2185:2186:observer -server.14=zk2.ap-south.example.com:2185:2186:observer -server.15=zk3.ap-south.example.com:2185:2186:observer - -``` - -Additionally, ZK observers need to have: - -```properties - -peerType=observer - -``` - -##### Start the service - -Once your configuration store configuration is in place, you can start up the service using [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) - -```shell - -$ bin/pulsar-daemon start configuration-store - -``` - -### ZooKeeper configuration - -In Pulsar, ZooKeeper configuration is handled by two separate configuration files in the `conf` directory of your Pulsar installation: `conf/zookeeper.conf` for [local ZooKeeper](#local-zookeeper) and `conf/global-zookeeper.conf` for [configuration store](#configuration-store). 

#### Local ZooKeeper

The [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file handles the configuration for local ZooKeeper. The table below shows the available parameters:

|Name|Description|Default|
|---|---|---|
|tickTime| The tick is the basic unit of time in ZooKeeper, measured in milliseconds and used to regulate things like heartbeats and timeouts. tickTime is the length of a single tick. |2000|
|initLimit| The maximum time, in ticks, that the leader ZooKeeper server allows follower ZooKeeper servers to successfully connect and sync. The tick time is set in milliseconds using the tickTime parameter. |10|
|syncLimit| The maximum time, in ticks, that a follower ZooKeeper server is allowed to sync with other ZooKeeper servers. The tick time is set in milliseconds using the tickTime parameter. |5|
|dataDir| The location where ZooKeeper stores in-memory database snapshots as well as the transaction log of updates to the database. |data/zookeeper|
|clientPort| The port on which the ZooKeeper server listens for connections. |2181|
|autopurge.snapRetainCount| In ZooKeeper, auto purge determines how many recent snapshots of the database stored in dataDir to retain within the time interval specified by autopurge.purgeInterval (while deleting the rest). |3|
|autopurge.purgeInterval| The time interval, in hours, which triggers the ZooKeeper database purge task. Setting this to a non-zero number enables auto purge; setting it to 0 disables auto purge. Read this guide before enabling auto purge. |1|
|maxClientCnxns| The maximum number of client connections. Increase this if you need to handle more ZooKeeper clients. |60|


#### Configuration Store

The [`conf/global-zookeeper.conf`](reference-configuration.md#configuration-store) file handles the configuration for the configuration store. It accepts the same parameters as local ZooKeeper shown above.


## BookKeeper

BookKeeper stores all durable messages in Pulsar. BookKeeper is a distributed [write-ahead log](https://en.wikipedia.org/wiki/Write-ahead_logging) (WAL) system that guarantees read consistency of independent message logs called ledgers. Individual BookKeeper servers are also called *bookies*.

> To manage message persistence, retention, and expiry in Pulsar, refer to [cookbook](cookbooks-retention-expiry.md).

### Hardware requirements

Bookie hosts store message data on disk. To provide optimal performance, ensure that the bookies have a suitable hardware configuration. The following are two key dimensions of bookie hardware capacity:

- Disk I/O capacity read/write
- Storage capacity

By default, message entries written to bookies are always synced to disk before an acknowledgement is returned to the Pulsar broker. To ensure low write latency, BookKeeper is designed to use multiple devices:

- A **journal** to ensure durability. For sequential writes, it is critical to have fast [fsync](https://linux.die.net/man/2/fsync) operations on bookie hosts. Typically, small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) should suffice, or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache. Both solutions can reach fsync latency of ~0.4 ms.
- A **ledger storage device** that stores data. Writes happen in the background, so write I/O is not a big concern. Reads happen sequentially most of the time, and the backlog is drained only in the case of consumer drain. To store large amounts of data, a typical configuration involves multiple HDDs with a RAID controller.

### Configure BookKeeper

You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. When you configure each bookie, ensure that the [`zkServers`](reference-configuration.md#bookkeeper-zkServers) parameter is set to the connection string for the local ZooKeeper of the Pulsar cluster.

The minimum configuration changes required in `conf/bookkeeper.conf` are as follows:

:::note

Set `journalDirectory` and `ledgerDirectories` carefully. It is difficult to change them later.

:::

```properties

# Change to point to journal disk mount point
journalDirectory=data/bookkeeper/journal

# Point to ledger storage disk mount point
ledgerDirectories=data/bookkeeper/ledgers

# Point to local ZK quorum
zkServers=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181

# It is recommended to set this parameter. Otherwise, BookKeeper can't start normally in certain environments (for example, Huawei Cloud).
advertisedAddress=

```

To change the ZooKeeper root path that BookKeeper uses, use `zkLedgersRootPath=/MY-PREFIX/ledgers` instead of `zkServers=localhost:2181/MY-PREFIX`.

> For more information about BookKeeper, refer to the official [BookKeeper docs](http://bookkeeper.apache.org).

### Deploy BookKeeper

BookKeeper provides [persistent message storage](concepts-architecture-overview.md#persistent-storage) for Pulsar. Each Pulsar cluster has its own cluster of bookies. The BookKeeper cluster shares a local ZooKeeper quorum with the Pulsar cluster.

### Start bookies manually

You can start a bookie in the foreground or as a background daemon.

To start a bookie in the foreground, use the [`bookkeeper`](reference-cli-tools.md#bookkeeper) CLI tool:

```bash

$ bin/bookkeeper bookie

```

To start a bookie in the background, use the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:

```bash

$ bin/pulsar-daemon start bookie

```

You can verify whether a bookie works properly with the `bookiesanity` command for the [BookKeeper shell](reference-cli-tools.md#bookkeeper-shell):

```shell

$ bin/bookkeeper shell bookiesanity

```

This command creates a new ledger on the local bookie, writes a few entries, reads them back, and finally deletes the ledger.

### Decommission bookies cleanly

Before you decommission a bookie, check your environment and meet the following requirements.

1. Ensure the state of your cluster supports decommissioning the target bookie. Check that `EnsembleSize >= Write Quorum >= Ack Quorum` still holds with one less bookie.

2. Ensure the target bookie is listed in the output of the `listbookies` command.

3. Ensure that no other process is ongoing (upgrade, etc.).

Then you can decommission bookies safely. To decommission bookies, complete the following steps.

1. Log in to the bookie node and check whether there are underreplicated ledgers. The decommission command forces the underreplicated ledgers to be replicated.
`$ bin/bookkeeper shell listunderreplicated`

2. Stop the bookie by killing the bookie process. Make sure that no liveness/readiness probes are set up for the bookies to spin them back up if you deploy in a Kubernetes environment.

3. Run the decommission command.
   - If you have logged in to the node to be decommissioned, you do not need to provide `-bookieid`.
   - If you are running the decommission command for the target bookie node from another bookie node, you should mention the target bookie ID in the arguments for `-bookieid`.
   `$ bin/bookkeeper shell decommissionbookie`
   or
   `$ bin/bookkeeper shell decommissionbookie -bookieid <bookieid>`

4. Validate that no ledgers remain on the decommissioned bookie.
`$ bin/bookkeeper shell listledgers -bookieid <bookieid>`

You can run the following command to check whether the bookie you have decommissioned is still listed in the bookies list:

```bash

./bookkeeper shell listbookies -rw -h
./bookkeeper shell listbookies -ro -h

```

## BookKeeper persistence policies

In Pulsar, you can set *persistence policies* at the namespace level, which determine how BookKeeper handles persistent storage of messages. Policies determine four things:

* The number of acks (guaranteed copies) to wait for each ledger entry.
* The number of bookies to use for a topic.
* The number of writes to make for each ledger entry.
* The throttling rate for mark-delete operations.

### Set persistence policies

You can set persistence policies for BookKeeper at the [namespace](reference-terminology.md#namespace) level.

#### Pulsar-admin

Use the [`set-persistence`](reference-pulsar-admin.md#namespaces-set-persistence) subcommand and specify a namespace as well as any policies that you want to apply. The available flags are:

Flag | Description | Default
:----|:------------|:-------
`-a`, `--bookkeeper-ack-quorum` | The number of acks (guaranteed copies) to wait on for each entry | 0
`-e`, `--bookkeeper-ensemble` | The number of [bookies](reference-terminology.md#bookie) to use for topics in the namespace | 0
`-w`, `--bookkeeper-write-quorum` | The number of writes to make for each entry | 0
`-r`, `--ml-mark-delete-max-rate` | Throttling rate for mark-delete operations (0 means no throttle) | 0

The following is an example (note that the values satisfy `ensemble >= write quorum >= ack quorum`):

```shell

$ pulsar-admin namespaces set-persistence my-tenant/my-ns \
  --bookkeeper-ensemble 3 \
  --bookkeeper-write-quorum 2 \
  --bookkeeper-ack-quorum 2

```

#### REST API

{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/setPersistence?version=@pulsar:version_number@}

#### Java

```java

int bkEnsemble = 3;
int bkWriteQuorum = 2;
int bkAckQuorum = 2;
double markDeleteRate = 0.7;
PersistencePolicies policies =
    new PersistencePolicies(bkEnsemble, bkWriteQuorum, bkAckQuorum, markDeleteRate);
admin.namespaces().setPersistence(namespace, policies);

```
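
As noted in the decommissioning checklist above, these values are constrained by `EnsembleSize >= Write Quorum >= Ack Quorum`. A small illustrative check, not part of the Pulsar API, that you could run before applying a policy:

```java

public class PersistencePolicyCheck {

    // Enforce EnsembleSize >= WriteQuorum >= AckQuorum before applying
    // a persistence policy.
    static void validateQuorums(int ensemble, int writeQuorum, int ackQuorum) {
        if (ensemble < writeQuorum || writeQuorum < ackQuorum) {
            throw new IllegalArgumentException(
                    "Require ensemble >= writeQuorum >= ackQuorum, got "
                            + ensemble + "/" + writeQuorum + "/" + ackQuorum);
        }
    }

    public static void main(String[] args) {
        validateQuorums(3, 2, 2); // matches the example above
    }
}

```
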

### List persistence policies

You can see which persistence policy currently applies to a namespace.

#### Pulsar-admin

Use the [`get-persistence`](reference-pulsar-admin.md#namespaces-get-persistence) subcommand and specify the namespace.

The following is an example:

```shell

$ pulsar-admin namespaces get-persistence my-tenant/my-ns
{
  "bookkeeperEnsemble": 1,
  "bookkeeperWriteQuorum": 1,
  "bookkeeperAckQuorum": 1,
  "managedLedgerMaxMarkDeleteRate": 0
}

```

#### REST API

{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/getPersistence?version=@pulsar:version_number@}

#### Java

```java

PersistencePolicies policies = admin.namespaces().getPersistence(namespace);

```

## How Pulsar uses ZooKeeper and BookKeeper

This diagram illustrates the role of ZooKeeper and BookKeeper in a Pulsar cluster:

![ZooKeeper and BookKeeper](/assets/pulsar-system-architecture.png)

Each Pulsar cluster consists of one or more message brokers. 
Each broker relies on an ensemble of bookies. diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/client-libraries-cgo.md b/site2/website/versioned_docs/version-2.8.2-deprecated/client-libraries-cgo.md deleted file mode 100644 index f352f942b77144..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/client-libraries-cgo.md +++ /dev/null @@ -1,579 +0,0 @@ ---- -id: client-libraries-cgo -title: Pulsar CGo client -sidebar_label: "CGo(deprecated)" -original_id: client-libraries-cgo ---- - -You can use Pulsar Go client to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Go (aka Golang). - -All the methods in [producers](#producers), [consumers](#consumers), and [readers](#readers) of a Go client are thread-safe. - -Currently, the following Go clients are maintained in two repositories. - -| Language | Project | Maintainer | License | Description | -|----------|---------|------------|---------|-------------| -| CGo | [pulsar-client-go](https://github.com/apache/pulsar/tree/master/pulsar-client-go) | [Apache Pulsar](https://github.com/apache/pulsar) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | CGo client that depends on C++ client library | -| Go | [pulsar-client-go](https://github.com/apache/pulsar-client-go) | [Apache Pulsar](https://github.com/apache/pulsar) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | A native golang client | - -> **API docs available as well** -> For standard API docs, consult the [Godoc](https://godoc.org/github.com/apache/pulsar/pulsar-client-go/pulsar). - -## Installation - -### Requirements - -Pulsar Go client library is based on the C++ client library. Follow -the instructions for [C++ library](client-libraries-cpp.md) for installing the binaries through [RPM](client-libraries-cpp.md#rpm), [Deb](client-libraries-cpp.md#deb) or [Homebrew packages](client-libraries-cpp.md#macos). - -### Install go package - -> **Compatibility Warning** -> The version number of the Go client **must match** the version number of the Pulsar C++ client library. - -You can install the `pulsar` library locally using `go get`. Note that `go get` doesn't support fetching a specific tag - it will always pull in master's version of the Go client. You'll need a C++ client library that matches master. - -```bash - -$ go get -u github.com/apache/pulsar/pulsar-client-go/pulsar - -``` - -Or you can use [dep](https://github.com/golang/dep) for managing the dependencies. - -```bash - -$ dep ensure -add github.com/apache/pulsar/pulsar-client-go/pulsar@v@pulsar:version@ - -``` - -Once installed locally, you can import it into your project: - -```go - -import "github.com/apache/pulsar/pulsar-client-go/pulsar" - -``` - -## Connection URLs - -To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL. - -Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme and have a default port of 6650. 
Here's an example for `localhost`: - -```http - -pulsar://localhost:6650 - -``` - -A URL for a production Pulsar cluster may look something like this: - -```http - -pulsar://pulsar.us-west.example.com:6650 - -``` - -If you're using [TLS](security-tls-authentication.md) authentication, the URL will look like something like this: - -```http - -pulsar+ssl://pulsar.us-west.example.com:6651 - -``` - -## Create a client - -In order to interact with Pulsar, you'll first need a `Client` object. You can create a client object using the `NewClient` function, passing in a `ClientOptions` object (more on configuration [below](#client-configuration)). Here's an example: - -```go - -import ( - "log" - "runtime" - - "github.com/apache/pulsar/pulsar-client-go/pulsar" -) - -func main() { - client, err := pulsar.NewClient(pulsar.ClientOptions{ - URL: "pulsar://localhost:6650", - OperationTimeoutSeconds: 5, - MessageListenerThreads: runtime.NumCPU(), - }) - - if err != nil { - log.Fatalf("Could not instantiate Pulsar client: %v", err) - } -} - -``` - -The following configurable parameters are available for Pulsar clients: - -Parameter | Description | Default -:---------|:------------|:------- -`URL` | The connection URL for the Pulsar cluster. See [above](#urls) for more info | -`IOThreads` | The number of threads to use for handling connections to Pulsar [brokers](reference-terminology.md#broker) | 1 -`OperationTimeoutSeconds` | The timeout for some Go client operations (creating producers, subscribing to and unsubscribing from [topics](reference-terminology.md#topic)). Retries will occur until this threshold is reached, at which point the operation will fail. | 30 -`MessageListenerThreads` | The number of threads used by message listeners ([consumers](#consumers) and [readers](#readers)) | 1 -`ConcurrentLookupRequests` | The number of concurrent lookup requests that can be sent on each broker connection. Setting a maximum helps to keep from overloading brokers. You should set values over the default of 5000 only if the client needs to produce and/or subscribe to thousands of Pulsar topics. | 5000 -`Logger` | A custom logger implementation for the client (as a function that takes a log level, file path, line number, and message). All info, warn, and error messages will be routed to this function. | `nil` -`TLSTrustCertsFilePath` | The file path for the trusted TLS certificate | -`TLSAllowInsecureConnection` | Whether the client accepts untrusted TLS certificates from the broker | `false` -`Authentication` | Configure the authentication provider. (default: no authentication). Example: `Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem")` | `nil` -`StatsIntervalInSeconds` | The interval (in seconds) at which client stats are published | 60 - -## Producers - -Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Go producers using a `ProducerOptions` object. 
-
-## Producers
-
-Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Go producers using a `ProducerOptions` object. Here's an example:
-
-```go
-
-producer, err := client.CreateProducer(pulsar.ProducerOptions{
-    Topic: "my-topic",
-})
-
-if err != nil {
-    log.Fatalf("Could not instantiate Pulsar producer: %v", err)
-}
-
-defer producer.Close()
-
-msg := pulsar.ProducerMessage{
-    Payload: []byte("Hello, Pulsar"),
-}
-
-if err := producer.Send(context.Background(), msg); err != nil {
-    log.Fatalf("Producer could not send message: %v", err)
-}
-
-```
-
-> **Blocking operation**
-> When you create a new Pulsar producer, the operation will block (waiting on a go channel) until either a producer is successfully created or an error is thrown.
-
-
-### Producer operations
-
-Pulsar Go producers have the following methods available:
-
-Method | Description | Return type
-:------|:------------|:-----------
-`Topic()` | Fetches the producer's [topic](reference-terminology.md#topic)| `string`
-`Name()` | Fetches the producer's name | `string`
-`Send(context.Context, ProducerMessage)` | Publishes a [message](#messages) to the producer's topic. This call will block until the message is successfully acknowledged by the Pulsar broker, or an error will be thrown if the timeout set using the `SendTimeout` in the producer's [configuration](#producer-configuration) is exceeded. | `error`
-`SendAndGetMsgID(context.Context, ProducerMessage)`| Sends a message. This call blocks until the message is successfully acknowledged by the Pulsar broker. | (MessageID, error)
-`SendAsync(context.Context, ProducerMessage, func(ProducerMessage, error))` | Publishes a [message](#messages) to the producer's topic asynchronously. The third argument is a callback function that specifies what happens either when the message is acknowledged or an error is thrown. |
-`SendAndGetMsgIDAsync(context.Context, ProducerMessage, func(MessageID, error))`| Sends a message in asynchronous mode. The callback reports back the message ID of the published message and the eventual error in publishing. |
-`LastSequenceID()` | Get the last sequence id that was published by this producer. This represents either the automatically assigned or custom sequence id (set on the ProducerMessage) that was published and acknowledged by the broker. | int64
-`Flush()`| Flush all the messages buffered in the client and wait until all messages have been successfully persisted. | error
-`Close()` | Closes the producer and releases all resources allocated to it. If `Close()` is called then no more messages will be accepted from the publisher. This method will block until all pending publish requests have been persisted by Pulsar. If an error is thrown, no pending writes will be retried. | `error`
-`Schema()` | | Schema
-
-Here's a more involved example usage of a producer:
-
-```go
-
-import (
-    "context"
-    "fmt"
-    "log"
-
-    "github.com/apache/pulsar/pulsar-client-go/pulsar"
-)
-
-func main() {
-    // Instantiate a Pulsar client
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-        URL: "pulsar://localhost:6650",
-    })
-
-    if err != nil { log.Fatal(err) }
-
-    // Use the client to instantiate a producer
-    producer, err := client.CreateProducer(pulsar.ProducerOptions{
-        Topic: "my-topic",
-    })
-
-    if err != nil { log.Fatal(err) }
-
-    ctx := context.Background()
-
-    // Send 10 messages synchronously and 10 messages asynchronously
-    for i := 0; i < 10; i++ {
-        // Create a message
-        msg := pulsar.ProducerMessage{
-            Payload: []byte(fmt.Sprintf("message-%d", i)),
-        }
-
-        // Attempt to send the message
-        if err := producer.Send(ctx, msg); err != nil {
-            log.Fatal(err)
-        }
-
-        // Create a different message to send asynchronously
-        asyncMsg := pulsar.ProducerMessage{
-            Payload: []byte(fmt.Sprintf("async-message-%d", i)),
-        }
-
-        // Attempt to send the message asynchronously and handle the response
-        producer.SendAsync(ctx, asyncMsg, func(msg pulsar.ProducerMessage, err error) {
-            if err != nil { log.Fatal(err) }
-
-            fmt.Printf("message %s successfully published\n", string(msg.Payload))
-        })
-    }
-}
-
-```
-
-### Producer configuration
-
-Parameter | Description | Default
-:---------|:------------|:-------
-`Topic` | The Pulsar [topic](reference-terminology.md#topic) to which the producer will publish messages |
-`Name` | A name for the producer. If you don't explicitly assign a name, Pulsar will automatically generate a globally unique name that you can access later using the `Name()` method. If you choose to explicitly assign a name, it will need to be unique across *all* Pulsar clusters, otherwise the creation operation will throw an error. |
-`Properties`| Attach a set of application-defined properties to the producer. These properties will be visible in the topic stats |
-`SendTimeout` | When publishing a message to a topic, the producer will wait for an acknowledgment from the responsible Pulsar [broker](reference-terminology.md#broker). If a message is not acknowledged within the threshold set by this parameter, an error will be thrown. If you set `SendTimeout` to -1, the timeout will be set to infinity (and thus removed). Removing the send timeout is recommended when using Pulsar's [message de-duplication](cookbooks-deduplication.md) feature. | 30 seconds
-`MaxPendingMessages` | The maximum size of the queue holding pending messages (i.e. messages waiting to receive an acknowledgment from the [broker](reference-terminology.md#broker)). By default, when the queue is full all calls to the `Send` and `SendAsync` methods will fail *unless* `BlockIfQueueFull` is set to `true`. |
-`MaxPendingMessagesAcrossPartitions` | Set the number of max pending messages across all the partitions. This setting will be used to lower the max pending messages for each partition `MaxPendingMessages(int)`, if the total exceeds the configured value.|
-`BlockIfQueueFull` | If set to `true`, the producer's `Send` and `SendAsync` methods will block when the outgoing message queue is full rather than failing and throwing an error (the size of that queue is dictated by the `MaxPendingMessages` parameter); if set to `false` (the default), `Send` and `SendAsync` operations will fail and throw a `ProducerQueueIsFullError` when the queue is full. | `false`
-`MessageRoutingMode` | The message routing logic (for producers on [partitioned topics](concepts-architecture-overview.md#partitioned-topics)). This logic is applied only when no key is set on messages. The available options are: round robin (`pulsar.RoundRobinDistribution`, the default), publishing all messages to a single partition (`pulsar.UseSinglePartition`), or a custom partitioning scheme (`pulsar.CustomPartition`). | `pulsar.RoundRobinDistribution`
-`HashingScheme` | The hashing function that determines the partition on which a particular message is published (partitioned topics only). The available options are: `pulsar.JavaStringHash` (the equivalent of `String.hashCode()` in Java), `pulsar.Murmur3_32Hash` (applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function), or `pulsar.BoostHash` (applies the hashing function from C++'s [Boost](https://www.boost.org/doc/libs/1_62_0/doc/html/hash.html) library) | `pulsar.JavaStringHash`
-`CompressionType` | The message data compression type used by the producer. The available options are [`LZ4`](https://github.com/lz4/lz4), [`ZLIB`](https://zlib.net/), [`ZSTD`](https://facebook.github.io/zstd/) and [`SNAPPY`](https://google.github.io/snappy/). | No compression
-`MessageRouter` | By default, Pulsar uses a round-robin routing scheme for [partitioned topics](cookbooks-partitioned.md). The `MessageRouter` parameter enables you to specify custom routing logic via a function that takes the Pulsar message and topic metadata as an argument and returns the partition index as an integer, i.e. a function signature of `func(Message, TopicMetadata) int`. |
-`Batching` | Control whether automatic batching of messages is enabled for the producer. | false
-`BatchingMaxPublishDelay` | Set the time period within which the messages sent will be batched (default: 1ms) if batch messages are enabled. If set to a non-zero value, messages will be queued until this time interval elapses or until the `BatchingMaxMessages` threshold is reached, whichever comes first. | 1ms
-`BatchingMaxMessages` | Set the maximum number of messages permitted in a batch. (default: 1000) If set to a value greater than 1, messages will be queued until this threshold is reached or batch interval has elapsed | 1000
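-
-As a quick illustration, the sketch below combines a few of these options. The field names come from the table above; the values are illustrative assumptions, not recommendations:
-
-```go
-
-// A minimal sketch of a tuned producer (illustrative values only).
-producer, err := client.CreateProducer(pulsar.ProducerOptions{
-    Topic:            "my-topic",
-    Name:             "my-unique-producer", // must be unique across clusters
-    BlockIfQueueFull: true,                 // block instead of erroring when the queue is full
-    Batching:         true,                 // enable automatic batching
-})
-if err != nil {
-    log.Fatal(err)
-}
-defer producer.Close()
-
-```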
-
-## Consumers
-
-Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on that topic/those topics. You can [configure](#consumer-configuration) Go consumers using a `ConsumerOptions` object. Here's a basic example that uses channels:
-
-```go
-
-msgChannel := make(chan pulsar.ConsumerMessage)
-
-consumerOpts := pulsar.ConsumerOptions{
-    Topic:            "my-topic",
-    SubscriptionName: "my-subscription-1",
-    Type:             pulsar.Exclusive,
-    MessageChannel:   msgChannel,
-}
-
-consumer, err := client.Subscribe(consumerOpts)
-
-if err != nil {
-    log.Fatalf("Could not establish subscription: %v", err)
-}
-
-defer consumer.Close()
-
-for cm := range msgChannel {
-    msg := cm.Message
-
-    fmt.Printf("Message ID: %s", msg.ID())
-    fmt.Printf("Message value: %s", string(msg.Payload()))
-
-    consumer.Ack(msg)
-}
-
-```
-
-> **Blocking operation**
-> When you create a new Pulsar consumer, the operation will block (on a go channel) until either a consumer is successfully created or an error is thrown.
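-
-Because `Receive` blocks until a message is available, it is often paired with a context deadline. The following is a small sketch of that pattern, using the `Receive` and `Ack` methods documented in the table below:
-
-```go
-
-// Bound Receive with a timeout so a quiet topic cannot block forever.
-ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
-defer cancel()
-
-msg, err := consumer.Receive(ctx)
-if err != nil {
-    log.Fatalf("No message received before the deadline: %v", err)
-}
-
-consumer.Ack(msg)
-
-```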
-
-### Consumer operations
-
-Pulsar Go consumers have the following methods available:
-
-Method | Description | Return type
-:------|:------------|:-----------
-`Topic()` | Returns the consumer's [topic](reference-terminology.md#topic) | `string`
-`Subscription()` | Returns the consumer's subscription name | `string`
-`Unsubscribe()` | Unsubscribes the consumer from the assigned topic. Throws an error if the unsubscribe operation is somehow unsuccessful. | `error`
-`Receive(context.Context)` | Receives a single message from the topic. This method blocks until a message is available. | `(Message, error)`
-`Ack(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) | `error`
-`AckID(MessageID)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID | `error`
-`AckCumulative(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message. The `AckCumulative` method will block until the ack has been sent to the broker. After that, the messages will *not* be redelivered to the consumer. Cumulative acking cannot be used with a [shared](concepts-messaging.md#shared) subscription type. | `error`
-`AckCumulativeID(MessageID)` | Ack the reception of all the messages in the stream up to (and including) the provided message. This method will block until the acknowledge has been sent to the broker. After that, the messages will not be re-delivered to this consumer. | error
-`Nack(Message)` | Acknowledge the failure to process a single message. | `error`
-`NackID(MessageID)` | Acknowledge the failure to process a single message. | `error`
-`Close()` | Closes the consumer, disabling its ability to receive messages from the broker | `error`
-`RedeliverUnackedMessages()` | Redelivers *all* unacknowledged messages on the topic. In [failover](concepts-messaging.md#failover) mode, this request is ignored if the consumer isn't active on the specified topic; in [shared](concepts-messaging.md#shared) mode, redelivered messages are distributed across all consumers connected to the topic. **Note**: this is a *non-blocking* operation that doesn't throw an error. |
-`Seek(msgID MessageID)` | Resets the subscription associated with this consumer to a specific message id. The message id can either be a specific message or represent the first or last messages in the topic. | error
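-
-For instance, rewinding a subscription with `Seek` might look like the sketch below, assuming (per the table above) that the predefined `pulsar.EarliestMessage` ID is accepted as the "first message" position:
-
-```go
-
-// Rewind the subscription to the first available message in the topic
-// (pulsar.EarliestMessage is assumed here as the predefined MessageID).
-if err := consumer.Seek(pulsar.EarliestMessage); err != nil {
-    log.Fatalf("Could not seek: %v", err)
-}
-
-```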
-
-#### Receive example
-
-Here's an example usage of a Go consumer that uses the `Receive()` method to process incoming messages:
-
-```go
-
-import (
-    "context"
-    "log"
-
-    "github.com/apache/pulsar/pulsar-client-go/pulsar"
-)
-
-func main() {
-    // Instantiate a Pulsar client
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-        URL: "pulsar://localhost:6650",
-    })
-
-    if err != nil { log.Fatal(err) }
-
-    // Use the client object to instantiate a consumer
-    consumer, err := client.Subscribe(pulsar.ConsumerOptions{
-        Topic:            "my-golang-topic",
-        SubscriptionName: "sub-1",
-        Type:             pulsar.Exclusive,
-    })
-
-    if err != nil { log.Fatal(err) }
-
-    defer consumer.Close()
-
-    ctx := context.Background()
-
-    // Listen indefinitely on the topic
-    for {
-        msg, err := consumer.Receive(ctx)
-        if err != nil { log.Fatal(err) }
-
-        // Do something with the message
-        err = processMessage(msg)
-
-        if err == nil {
-            // Message processed successfully
-            consumer.Ack(msg)
-        } else {
-            // Failed to process the message
-            consumer.Nack(msg)
-        }
-    }
-}
-
-```
-
-### Consumer configuration
-
-Parameter | Description | Default
-:---------|:------------|:-------
-`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the consumer will establish a subscription and listen for messages |
-`Topics` | Specify a list of topics this consumer will subscribe on. Either a topic, a list of topics or a topics pattern are required when subscribing |
-`TopicsPattern` | Specify a regular expression to subscribe to multiple topics under the same namespace. Either a topic, a list of topics or a topics pattern are required when subscribing |
-`SubscriptionName` | The subscription name for this consumer |
-`Properties` | Attach a set of application-defined properties to the consumer. These properties will be visible in the topic stats|
-`Name` | The name of the consumer |
-`AckTimeout` | Set the timeout for unacked messages | 0
-`NackRedeliveryDelay` | The delay after which to redeliver the messages that failed to be processed. Default is 1min. (See `Consumer.Nack()`) | 1 minute
-`Type` | Available options are `Exclusive`, `Shared`, and `Failover` | `Exclusive`
-`SubscriptionInitPos` | InitialPosition at which the cursor will be set when subscribing | Latest
-`MessageChannel` | The Go channel used by the consumer. Messages that arrive from the Pulsar topic(s) will be passed to this channel. |
-`ReceiverQueueSize` | Sets the size of the consumer's receiver queue, i.e. the number of messages that can be accumulated by the consumer before the application calls `Receive`. A value higher than the default of 1000 could increase consumer throughput, though at the expense of more memory utilization. | 1000
-`MaxTotalReceiverQueueSizeAcrossPartitions` | Set the max total receiver queue size across partitions. This setting will be used to reduce the receiver queue size for individual partitions if the total exceeds this value | 50000
-`ReadCompacted` | If enabled, the consumer will read messages from the compacted topic rather than reading the full message backlog of the topic. This means that, if the topic has been compacted, the consumer will only see the latest value for each key in the topic, up until the point in the topic message backlog that has been compacted. Beyond that point, the messages will be sent as normal. |
-
-## Readers
-
-Pulsar readers process messages from Pulsar topics. Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recent unacked message). You can [configure](#reader-configuration) Go readers using a `ReaderOptions` object. Here's an example:
-
-```go
-
-reader, err := client.CreateReader(pulsar.ReaderOptions{
-    Topic:          "my-golang-topic",
-    StartMessageID: pulsar.LatestMessage,
-})
-
-```
-
-> **Blocking operation**
-> When you create a new Pulsar reader, the operation will block (on a go channel) until either a reader is successfully created or an error is thrown.
-
-
-### Reader operations
-
-Pulsar Go readers have the following methods available:
-
-Method | Description | Return type
-:------|:------------|:-----------
-`Topic()` | Returns the reader's [topic](reference-terminology.md#topic) | `string`
-`Next(context.Context)` | Receives the next message on the topic (analogous to the `Receive` method for [consumers](#consumer-operations)). This method blocks until a message is available. | `(Message, error)`
-`HasNext()` | Check if there is any message available to read from the current position | (bool, error)
-`Close()` | Closes the reader, disabling its ability to receive messages from the broker | `error`
-
-#### "Next" example
-
-Here's an example usage of a Go reader that uses the `Next()` method to process incoming messages:
-
-```go
-
-import (
-    "context"
-    "log"
-
-    "github.com/apache/pulsar/pulsar-client-go/pulsar"
-)
-
-func main() {
-    // Instantiate a Pulsar client
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-        URL: "pulsar://localhost:6650",
-    })
-
-    if err != nil { log.Fatalf("Could not create client: %v", err) }
-
-    // Use the client to instantiate a reader
-    reader, err := client.CreateReader(pulsar.ReaderOptions{
-        Topic:          "my-golang-topic",
-        StartMessageID: pulsar.EarliestMessage,
-    })
-
-    if err != nil { log.Fatalf("Could not create reader: %v", err) }
-
-    defer reader.Close()
-
-    ctx := context.Background()
-
-    // Listen on the topic for incoming messages
-    for {
-        msg, err := reader.Next(ctx)
-        if err != nil { log.Fatalf("Error reading from topic: %v", err) }
-
-        // Process the message
-    }
-}
-
-```
-
-In the example above, the reader begins reading from the earliest available message (specified by `pulsar.EarliestMessage`). The reader can also begin reading from the latest message (`pulsar.LatestMessage`) or some other message ID specified by bytes using the `DeserializeMessageID` function, which takes a byte array and returns a `MessageID` object. Here's an example:
-
-```go
-
-// Read the last saved message ID from an external store as a byte slice
-// (readLastSavedID is a placeholder for your own persistence code).
-lastSavedID := readLastSavedID()
-
-reader, err := client.CreateReader(pulsar.ReaderOptions{
-    Topic:          "my-golang-topic",
-    StartMessageID: pulsar.DeserializeMessageID(lastSavedID),
-})
-
-```
-
-### Reader configuration
-
-Parameter | Description | Default
-:---------|:------------|:-------
-`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the reader will establish a subscription and listen for messages |
-`Name` | The name of the reader |
-`StartMessageID` | The initial reader position, i.e. the message at which the reader begins processing messages. The options are `pulsar.EarliestMessage` (the earliest available message on the topic), `pulsar.LatestMessage` (the latest available message on the topic), or a `MessageID` object for a position that isn't earliest or latest. |
-`MessageChannel` | The Go channel used by the reader. Messages that arrive from the Pulsar topic(s) will be passed to this channel. |
-`ReceiverQueueSize` | Sets the size of the reader's receiver queue, i.e. the number of messages that can be accumulated by the reader before the application calls `Next`. A value higher than the default of 1000 could increase reader throughput, though at the expense of more memory utilization. | 1000
-`SubscriptionRolePrefix` | The subscription role prefix. | `reader`
-`ReadCompacted` | If enabled, the reader will read messages from the compacted topic rather than reading the full message backlog of the topic. This means that, if the topic has been compacted, the reader will only see the latest value for each key in the topic, up until the point in the topic message backlog that has been compacted. Beyond that point, the messages will be sent as normal.|
-
-## Messages
-
-The Pulsar Go client provides a `ProducerMessage` interface that you can use to construct messages to produce on Pulsar topics. Here's an example message:
-
-```go
-
-msg := pulsar.ProducerMessage{
-    Payload: []byte("Here is some message data"),
-    Key:     "message-key",
-    Properties: map[string]string{
-        "foo": "bar",
-    },
-    EventTime:           time.Now(),
-    ReplicationClusters: []string{"cluster1", "cluster3"},
-}
-
-if err := producer.Send(context.Background(), msg); err != nil {
-    log.Fatalf("Could not publish message due to: %v", err)
-}
-
-```
-
-The following parameters are available for `ProducerMessage` objects:
-
-Parameter | Description
-:---------|:-----------
-`Payload` | The actual data payload of the message
-`Value` | Value and payload are mutually exclusive; `Value interface{}` is used for schema messages.
-`Key` | The optional key associated with the message (particularly useful for things like topic compaction)
-`Properties` | A key-value map (both keys and values must be strings) for any application-specific metadata attached to the message
-`EventTime` | The timestamp associated with the message
-`ReplicationClusters` | The clusters to which this message will be replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default.
-`SequenceID` | Set the sequence id to assign to the current message
-
-## TLS encryption and authentication
-
-In order to use [TLS encryption](security-tls-transport.md), you'll need to configure your client to do so:
-
- * Use `pulsar+ssl` URL type
- * Set `TLSTrustCertsFilePath` to the path to the TLS certs used by your client and the Pulsar broker
- * Configure `Authentication` option
-
-Here's an example:
-
-```go
-
-opts := pulsar.ClientOptions{
-    URL:                   "pulsar+ssl://my-cluster.com:6651",
-    TLSTrustCertsFilePath: "/path/to/certs/my-cert.csr",
-    Authentication:        NewAuthenticationTLS("my-cert.pem", "my-key.pem"),
-}
-
-```
-
-## Schema
-
-This example shows how to create a producer and consumer with schema.
- -```go - -var exampleSchemaDef = "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," + - "\"fields\":[{\"name\":\"ID\",\"type\":\"int\"},{\"name\":\"Name\",\"type\":\"string\"}]}" -jsonSchema := NewJsonSchema(exampleSchemaDef, nil) -// create producer -producer, err := client.CreateProducerWithSchema(ProducerOptions{ - Topic: "jsonTopic", -}, jsonSchema) -err = producer.Send(context.Background(), ProducerMessage{ - Value: &testJson{ - ID: 100, - Name: "pulsar", - }, -}) -if err != nil { - log.Fatal(err) -} -defer producer.Close() -//create consumer -var s testJson -consumerJS := NewJsonSchema(exampleSchemaDef, nil) -consumer, err := client.SubscribeWithSchema(ConsumerOptions{ - Topic: "jsonTopic", - SubscriptionName: "sub-2", -}, consumerJS) -if err != nil { - log.Fatal(err) -} -msg, err := consumer.Receive(context.Background()) -if err != nil { - log.Fatal(err) -} -err = msg.GetValue(&s) -if err != nil { - log.Fatal(err) -} -fmt.Println(s.ID) // output: 100 -fmt.Println(s.Name) // output: pulsar -defer consumer.Close() - -``` - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/client-libraries-cpp.md b/site2/website/versioned_docs/version-2.8.2-deprecated/client-libraries-cpp.md deleted file mode 100644 index 08622b9e830714..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/client-libraries-cpp.md +++ /dev/null @@ -1,408 +0,0 @@ ---- -id: client-libraries-cpp -title: Pulsar C++ client -sidebar_label: "C++" -original_id: client-libraries-cpp ---- - -You can use Pulsar C++ client to create Pulsar producers and consumers in C++. - -All the methods in producer, consumer, and reader of a C++ client are thread-safe. - -## Supported platforms - -Pulsar C++ client is supported on **Linux** and **MacOS** platforms. - -[Doxygen](http://www.doxygen.nl/)-generated API docs for the C++ client are available [here](/api/cpp). - -## System requirements - -You need to install the following components before using the C++ client: - -* [CMake](https://cmake.org/) -* [Boost](http://www.boost.org/) -* [Protocol Buffers](https://developers.google.com/protocol-buffers/) 2.6 -* [libcurl](https://curl.haxx.se/libcurl/) -* [Google Test](https://github.com/google/googletest) - -## Linux - -### Compilation - -1. Clone the Pulsar repository. - -```shell - -$ git clone https://github.com/apache/pulsar - -``` - -2. Install all necessary dependencies. - -```shell - -$ apt-get install cmake libssl-dev libcurl4-openssl-dev liblog4cxx-dev \ - libprotobuf-dev protobuf-compiler libboost-all-dev google-mock libgtest-dev libjsoncpp-dev - -``` - -3. Compile and install [Google Test](https://github.com/google/googletest). - -```shell - -# libgtest-dev version is 1.18.0 or above -$ cd /usr/src/googletest -$ sudo cmake . -$ sudo make -$ sudo cp ./googlemock/libgmock.a ./googlemock/gtest/libgtest.a /usr/lib/ - -# less than 1.18.0 -$ cd /usr/src/gtest -$ sudo cmake . -$ sudo make -$ sudo cp libgtest.a /usr/lib - -$ cd /usr/src/gmock -$ sudo cmake . -$ sudo make -$ sudo cp libgmock.a /usr/lib - -``` - -4. Compile the Pulsar client library for C++ inside the Pulsar repository. - -```shell - -$ cd pulsar-client-cpp -$ cmake . -$ make - -``` - -After you install the components successfully, the files `libpulsar.so` and `libpulsar.a` are in the `lib` folder of the repository. The tools `perfProducer` and `perfConsumer` are in the `perf` directory. - -### Install Dependencies - -> Since 2.1.0 release, Pulsar ships pre-built RPM and Debian packages. 
You can download and install those packages directly.
-
-After you download and install RPM or DEB, the `libpulsar.so`, `libpulsarnossl.so`, `libpulsar.a`, and `libpulsarwithdeps.a` libraries are in your `/usr/lib` directory.
-
-By default, they are built in the code path `${PULSAR_HOME}/pulsar-client-cpp`. You can build with the command below.
-
- `cmake . -DBUILD_TESTS=OFF -DLINK_STATIC=ON && make pulsarShared pulsarSharedNossl pulsarStatic pulsarStaticWithDeps -j 3`.
-
-These libraries rely on some other libraries. If you want the detailed versions of the dependencies, see the [RPM](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/pkg/rpm/Dockerfile) or [DEB](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/pkg/deb/Dockerfile) files.
-
-1. `libpulsar.so` is a shared library, containing statically linked `boost` and `openssl`. It also dynamically links all other necessary libraries. You can use this Pulsar library with the command below.
-
-```bash
-
- g++ --std=c++11 PulsarTest.cpp -o test /usr/lib/libpulsar.so -I/usr/local/ssl/include
-
-```
-
-2. `libpulsarnossl.so` is a shared library, similar to `libpulsar.so` except that the libraries `openssl` and `crypto` are dynamically linked. You can use this Pulsar library with the command below.
-
-```bash
-
- g++ --std=c++11 PulsarTest.cpp -o test /usr/lib/libpulsarnossl.so -lssl -lcrypto -I/usr/local/ssl/include -L/usr/local/ssl/lib
-
-```
-
-3. `libpulsar.a` is a static library. You need to load dependencies before using this library. You can use this Pulsar library with the command below.
-
-```bash
-
- g++ --std=c++11 PulsarTest.cpp -o test /usr/lib/libpulsar.a -lssl -lcrypto -ldl -lpthread -I/usr/local/ssl/include -L/usr/local/ssl/lib -lboost_system -lboost_regex -lcurl -lprotobuf -lzstd -lz
-
-```
-
-4. `libpulsarwithdeps.a` is a static library, based on `libpulsar.a`. It additionally archives the dependencies `libboost_regex`, `libboost_system`, `libcurl`, `libprotobuf`, `libzstd` and `libz`. You can use this Pulsar library with the command below.
-
-```bash
-
- g++ --std=c++11 PulsarTest.cpp -o test /usr/lib/libpulsarwithdeps.a -lssl -lcrypto -ldl -lpthread -I/usr/local/ssl/include -L/usr/local/ssl/lib
-
-```
-
-`libpulsarwithdeps.a` does not include the OpenSSL-related libraries `libssl` and `libcrypto`, because these two libraries are security-related. It is more reasonable and easier to use the versions provided by the local system to handle security issues and library upgrades.
-
-### Install RPM
-
-1. Download an RPM package from the links in the table.
-
-| Link | Crypto files |
-|------|--------------|
-| [client](@pulsar:dist_rpm:client@) | [asc](@pulsar:dist_rpm:client@.asc), [sha512](@pulsar:dist_rpm:client@.sha512) |
-| [client-debuginfo](@pulsar:dist_rpm:client-debuginfo@) | [asc](@pulsar:dist_rpm:client-debuginfo@.asc), [sha512](@pulsar:dist_rpm:client-debuginfo@.sha512) |
-| [client-devel](@pulsar:dist_rpm:client-devel@) | [asc](@pulsar:dist_rpm:client-devel@.asc), [sha512](@pulsar:dist_rpm:client-devel@.sha512) |
-
-2. Install the package using the following command.
-
-```bash
-
-$ rpm -ivh apache-pulsar-client*.rpm
-
-```
-
-After you install RPM successfully, Pulsar libraries are in the `/usr/lib` directory.
-
-### Install Debian
-
-1. Download a Debian package from the links in the table.
-
-| Link | Crypto files |
-|------|--------------|
-| [client](@pulsar:deb:client@) | [asc](@pulsar:dist_deb:client@.asc), [sha512](@pulsar:dist_deb:client@.sha512) |
-| [client-devel](@pulsar:deb:client-devel@) | [asc](@pulsar:dist_deb:client-devel@.asc), [sha512](@pulsar:dist_deb:client-devel@.sha512) |
-
-2. Install the package using the following command.
-
-```bash
-
-$ apt install ./apache-pulsar-client*.deb
-
-```
-
-After you install DEB successfully, Pulsar libraries are in the `/usr/lib` directory.
-
-### Build
-
-> If you want to build RPM and Debian packages from the latest master, follow the instructions below. You should run all the instructions at the root directory of your cloned Pulsar repository.
-
-There are recipes that build RPM and Debian packages containing a statically linked `libpulsar.so` / `libpulsarnossl.so` / `libpulsar.a` / `libpulsarwithdeps.a` with all required dependencies.
-
-To build the C++ library packages, you need to build the Java packages first.
-
-```shell
-
-mvn install -DskipTests
-
-```
-
-#### RPM
-
-To build the RPM inside a Docker container, use the command below. The RPMs are in the `pulsar-client-cpp/pkg/rpm/RPMS/x86_64/` path.
-
-```shell
-
-pulsar-client-cpp/pkg/rpm/docker-build-rpm.sh
-
-```
-
-| Package name | Content |
-|-----|-----|
-| pulsar-client | Shared library `libpulsar.so` and `libpulsarnossl.so` |
-| pulsar-client-devel | Static library `libpulsar.a`, `libpulsarwithdeps.a` and C++ and C headers |
-| pulsar-client-debuginfo | Debug symbols for `libpulsar.so` |
-
-#### Debian
-
-To build Debian packages, enter the following command.
-
-```shell
-
-pulsar-client-cpp/pkg/deb/docker-build-deb.sh
-
-```
-
-Debian packages are created in the `pulsar-client-cpp/pkg/deb/BUILD/DEB/` path.
-
-| Package name | Content |
-|-----|-----|
-| pulsar-client | Shared library `libpulsar.so` and `libpulsarnossl.so` |
-| pulsar-client-dev | Static library `libpulsar.a`, `libpulsarwithdeps.a` and C++ and C headers |
-
-## MacOS
-
-### Compilation
-
-1. Clone the Pulsar repository.
-
-```shell
-
-$ git clone https://github.com/apache/pulsar
-
-```
-
-2. Install all necessary dependencies.
-
-```shell
-
-# OpenSSL installation
-$ brew install openssl
-$ export OPENSSL_INCLUDE_DIR=/usr/local/opt/openssl/include/
-$ export OPENSSL_ROOT_DIR=/usr/local/opt/openssl/
-
-# Protocol Buffers installation
-$ brew tap homebrew/versions
-$ brew install protobuf260
-$ brew install boost
-$ brew install log4cxx
-
-# Google Test installation
-$ git clone https://github.com/google/googletest.git
-$ cd googletest
-$ git checkout release-1.12.1
-$ cmake .
-$ make install
-
-```
-
-3. Compile the Pulsar client library in the repository that you cloned.
-
-```shell
-
-$ cd pulsar-client-cpp
-$ cmake .
-$ make
-
-```
-
-### Install `libpulsar`
-
-Pulsar releases are available in the [Homebrew](https://brew.sh/) core repository. You can install the C++ client library with the following command. The package is installed with the library and headers.
-
-```shell
-
-brew install libpulsar
-
-```
-
-## Connection URLs
-
-To connect to Pulsar using client libraries, you need to specify a Pulsar protocol URL.
-
-Pulsar protocol URLs are assigned to specific clusters and use the `pulsar` URI scheme. The default port is `6650`. The following is an example for localhost.
-
-```http
-
-pulsar://localhost:6650
-
-```
-
-In a Pulsar cluster in production, the URL looks as follows.
-
-```http
-
-pulsar://pulsar.us-west.example.com:6650
-
-```
-
-If you use TLS authentication, you need to add `ssl`, and the default port is `6651`. The following is an example.
-
-```http
-
-pulsar+ssl://pulsar.us-west.example.com:6651
-
-```
-
-## Create a consumer
-
-To use Pulsar as a consumer, you need to create a consumer on the C++ client. The following is an example.
-
-```c++
-
-Client client("pulsar://localhost:6650");
-
-Consumer consumer;
-Result result = client.subscribe("my-topic", "my-subscription-name", consumer);
-if (result != ResultOk) {
-    LOG_ERROR("Failed to subscribe: " << result);
-    return -1;
-}
-
-Message msg;
-
-while (true) {
-    consumer.receive(msg);
-    LOG_INFO("Received: " << msg
-            << "  with payload '" << msg.getDataAsString() << "'");
-
-    consumer.acknowledge(msg);
-}
-
-client.close();
-
-```
-
-## Create a producer
-
-To use Pulsar as a producer, you need to create a producer on the C++ client. The following is an example.
-
-```c++
-
-Client client("pulsar://localhost:6650");
-
-Producer producer;
-Result result = client.createProducer("my-topic", producer);
-if (result != ResultOk) {
-    LOG_ERROR("Error creating producer: " << result);
-    return -1;
-}
-
-// Publish 10 messages to the topic
-for (int i = 0; i < 10; i++){
-    Message msg = MessageBuilder().setContent("my-message").build();
-    Result res = producer.send(msg);
-    LOG_INFO("Message sent: " << res);
-}
-client.close();
-
-```
-
-## Enable authentication in connection URLs
-If you use TLS authentication when connecting to Pulsar, you need to add `ssl` in the connection URLs, and the default port is `6651`. The following is an example.
-
-```cpp
-
-ClientConfiguration config = ClientConfiguration();
-config.setUseTls(true);
-config.setTlsTrustCertsFilePath("/path/to/cacert.pem");
-config.setTlsAllowInsecureConnection(false);
-config.setAuth(pulsar::AuthTls::create(
-    "/path/to/client-cert.pem", "/path/to/client-key.pem"));
-
-Client client("pulsar+ssl://my-broker.com:6651", config);
-
-```
-
-For complete examples, refer to [C++ client examples](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp/examples).
-
-## Schema
-
-This section describes some examples about schema. For more information about schema, see [Pulsar schema](schema-get-started.md).
-
-### Create producer with Avro schema
-
-The following example shows how to create a producer with an Avro schema.
-
-```cpp
-
-static const std::string exampleSchema =
-    "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\","
-    "\"fields\":[{\"name\":\"a\",\"type\":\"int\"},{\"name\":\"b\",\"type\":\"int\"}]}";
-Producer producer;
-ProducerConfiguration producerConf;
-producerConf.setSchema(SchemaInfo(AVRO, "Avro", exampleSchema));
-client.createProducer("topic-avro", producerConf, producer);
-
-```
-
-### Create consumer with Avro schema
-
-The following example shows how to create a consumer with an Avro schema.
-
-```cpp
-
-static const std::string exampleSchema =
-    "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\","
-    "\"fields\":[{\"name\":\"a\",\"type\":\"int\"},{\"name\":\"b\",\"type\":\"int\"}]}";
-ConsumerConfiguration consumerConf;
-Consumer consumer;
-consumerConf.setSchema(SchemaInfo(AVRO, "Avro", exampleSchema));
-client.subscribe("topic-avro", "sub-2", consumerConf, consumer);
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/client-libraries-dotnet.md b/site2/website/versioned_docs/version-2.8.2-deprecated/client-libraries-dotnet.md
deleted file mode 100644
index fbec5e473be69c..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/client-libraries-dotnet.md
+++ /dev/null
@@ -1,434 +0,0 @@
----
-id: client-libraries-dotnet
-title: Pulsar C# client
-sidebar_label: "C#"
-original_id: client-libraries-dotnet
----
-
-You can use the Pulsar C# client (DotPulsar) to create Pulsar producers and consumers in C#. All the methods in the producer, consumer, and reader of a C# client are thread-safe. The official documentation for DotPulsar is available [here](https://github.com/apache/pulsar-dotpulsar/wiki).
-
-## Installation
-
-You can install the Pulsar C# client library either through the dotnet CLI or through Visual Studio. This section describes how to install the Pulsar C# client library through the dotnet CLI. For information about how to install the Pulsar C# client library through Visual Studio, see [here](https://docs.microsoft.com/en-us/visualstudio/mac/nuget-walkthrough?view=vsmac-2019).
-
-### Prerequisites
-
-Install the [.NET Core SDK](https://dotnet.microsoft.com/download/), which provides the dotnet command-line tool. Starting in Visual Studio 2017, the dotnet CLI is automatically installed with any .NET Core related workloads.
-
-### Procedures
-
-To install the Pulsar C# client library, follow these steps:
-
-1. Create a project.
-
-   1. Create a folder for the project.
-
-   2. Open a terminal window and switch to the new folder.
-
-   3. Create the project using the following command.
-
-      ```
-      
-      dotnet new console
-      
-      ```
-
-   4. Use `dotnet run` to test that the app has been created properly.
-
-2. Add the DotPulsar NuGet package.
-
-   1. Use the following command to install the `DotPulsar` package.
-
-      ```
-      
-      dotnet add package DotPulsar
-      
-      ```
-
-   2. After the command completes, open the `.csproj` file to see the added reference.
-
-      ```xml
-      
-      <ItemGroup>
-        <PackageReference Include="DotPulsar" Version="..." />
-      </ItemGroup>
-      
-      ```
-
-## Client
-
-This section describes some configuration examples for the Pulsar C# client.
-
-### Create client
-
-This example shows how to create a Pulsar C# client connected to localhost.
-
-```c#
-
-var client = PulsarClient.Builder().Build();
-
-```
-
-To create a Pulsar C# client by using the builder, you can specify the following options.
-
-| Option | Description | Default |
-| ---- | ---- | ---- |
-| ServiceUrl | Set the service URL for the Pulsar cluster. | pulsar://localhost:6650 |
-| RetryInterval | Set the time to wait before retrying an operation or a reconnection. | 3s |
-
-### Create producer
-
-This section describes how to create a producer.
-
-- Create a producer by using the builder.
-
-  ```c#
-  
-  var producer = client.NewProducer()
-      .Topic("persistent://public/default/mytopic")
-      .Create();
-  
-  ```
-
-- Create a producer without using the builder.
-
-  ```c#
-  
-  var options = new ProducerOptions("persistent://public/default/mytopic");
-  var producer = client.CreateProducer(options);
-  
-  ```
-
-### Create consumer
-
-This section describes how to create a consumer.
-
-- Create a consumer by using the builder.
-
-  ```c#
-  
-  var consumer = client.NewConsumer()
-      .SubscriptionName("MySubscription")
-      .Topic("persistent://public/default/mytopic")
-      .Create();
-  
-  ```
-
-- Create a consumer without using the builder.
-
-  ```c#
-  
-  var options = new ConsumerOptions("MySubscription", "persistent://public/default/mytopic");
-  var consumer = client.CreateConsumer(options);
-  
-  ```
-
-### Create reader
-
-This section describes how to create a reader.
-
-- Create a reader by using the builder.
-
-  ```c#
-  
-  var reader = client.NewReader()
-      .StartMessageId(MessageId.Earliest)
-      .Topic("persistent://public/default/mytopic")
-      .Create();
-  
-  ```
-
-- Create a reader without using the builder.
-
-  ```c#
-  
-  var options = new ReaderOptions(MessageId.Earliest, "persistent://public/default/mytopic");
-  var reader = client.CreateReader(options);
-  
-  ```
-
-### Configure encryption policies
-
-The Pulsar C# client supports four kinds of encryption policies:
-
-- `EnforceUnencrypted`: always use unencrypted connections.
-- `EnforceEncrypted`: always use encrypted connections.
-- `PreferUnencrypted`: use unencrypted connections, if possible.
-- `PreferEncrypted`: use encrypted connections, if possible.
-
-This example shows how to set the `EnforceEncrypted` encryption policy.
-
-```c#
-
-var client = PulsarClient.Builder()
-    .ConnectionSecurity(EncryptionPolicy.EnforceEncrypted)
-    .Build();
-
-```
-
-### Configure authentication
-
-Currently, the Pulsar C# client supports the TLS (Transport Layer Security) and JWT (JSON Web Token) authentication.
-
-If you have followed [Authentication using TLS](security-tls-authentication.md), you get a certificate and a key. To use them from the Pulsar C# client, follow these steps:
-
-1. Create an unencrypted and password-less pfx file.
-
-   ```bash
-   
-   openssl pkcs12 -export -keypbe NONE -certpbe NONE -out admin.pfx -inkey admin.key.pem -in admin.cert.pem -passout pass:
-   
-   ```
-
-2. Use the admin.pfx file to create an X509Certificate2 and pass it to the Pulsar C# client.
-
-   ```c#
-   
-   var clientCertificate = new X509Certificate2("admin.pfx");
-   var client = PulsarClient.Builder()
-       .AuthenticateUsingClientCertificate(clientCertificate)
-       .Build();
-   
-   ```
-
-## Producer
-
-A producer is a process that attaches to a topic and publishes messages to a Pulsar broker for processing. This section describes some configuration examples about the producer.
-
-### Send data
-
-This example shows how to send data.
-
-```c#
-
-var data = Encoding.UTF8.GetBytes("Hello World");
-await producer.Send(data);
-
-```
-
-### Send messages with customized metadata
-
-- Send messages with customized metadata by using the builder.
-
-  ```c#
-  
-  var data = Encoding.UTF8.GetBytes("Hello World");
-  var messageId = await producer.NewMessage()
-      .Property("SomeKey", "SomeValue")
-      .Send(data);
-  
-  ```
-
-- Send messages with customized metadata without using the builder.
-
-  ```c#
-  
-  var data = Encoding.UTF8.GetBytes("Hello World");
-  var metadata = new MessageMetadata();
-  metadata["SomeKey"] = "SomeValue";
-  var messageId = await producer.Send(metadata, data);
-  
-  ```
-
-## Consumer
-
-A consumer is a process that attaches to a topic through a subscription and then receives messages. This section describes some configuration examples about the consumer.
-
-### Receive messages
-
-This example shows how a consumer receives messages from a topic.
-
-```c#
-
-await foreach (var message in consumer.Messages())
-{
-    Console.WriteLine("Received: " + Encoding.UTF8.GetString(message.Data.ToArray()));
-}
-
-```
-
-### Acknowledge messages
-
-Messages can be acknowledged individually or cumulatively. For details about message acknowledgement, see [acknowledgement](concepts-messaging.md#acknowledgement).
-
-- Acknowledge messages individually.
-
-  ```c#
-  
-  await foreach (var message in consumer.Messages())
-  {
-      Console.WriteLine("Received: " + Encoding.UTF8.GetString(message.Data.ToArray()));
-      await consumer.Acknowledge(message);
-  }
-  
-  ```
-
-- Acknowledge messages cumulatively.
-
-  ```c#
-  
-  await consumer.AcknowledgeCumulative(message);
-  
-  ```
-
-### Unsubscribe from topics
-
-This example shows how a consumer unsubscribes from a topic.
-
-```c#
-
-await consumer.Unsubscribe();
-
-```
-
-#### Note
-
-> A consumer cannot be used and is disposed once it unsubscribes from a topic.
-
-## Reader
-
-A reader is actually just a consumer without a cursor. This means that Pulsar does not keep track of your progress and there is no need to acknowledge messages.
-
-This example shows how a reader receives messages.
-
-```c#
-
-await foreach (var message in reader.Messages())
-{
-    Console.WriteLine("Received: " + Encoding.UTF8.GetString(message.Data.ToArray()));
-}
-
-```
-
-## Monitoring
-
-This section describes how to monitor the producer, consumer, and reader state.
-
-### Monitor producer state
-
-The following table lists states available for the producer.
-
-| State | Description |
-| ---- | ----|
-| Closed | The producer or the Pulsar client has been disposed. |
-| Connected | All is well. |
-| Disconnected | The connection is lost and attempts are being made to reconnect. |
-| Faulted | An unrecoverable error has occurred. |
-
-This example shows how to monitor the producer state.
-
-```c#
-
-private static async ValueTask Monitor(IProducer producer, CancellationToken cancellationToken)
-{
-    var state = ProducerState.Disconnected;
-
-    while (!cancellationToken.IsCancellationRequested)
-    {
-        state = await producer.StateChangedFrom(state, cancellationToken);
-
-        var stateMessage = state switch
-        {
-            ProducerState.Connected => $"The producer is connected",
-            ProducerState.Disconnected => $"The producer is disconnected",
-            ProducerState.Closed => $"The producer has closed",
-            ProducerState.Faulted => $"The producer has faulted",
-            _ => $"The producer has an unknown state '{state}'"
-        };
-
-        Console.WriteLine(stateMessage);
-
-        if (producer.IsFinalState(state))
-            return;
-    }
-}
-
-```
-
-### Monitor consumer state
-
-The following table lists states available for the consumer.
-
-| State | Description |
-| ---- | ----|
-| Active | All is well. |
-| Inactive | All is well. The subscription type is `Failover` and you are not the active consumer. |
-| Closed | The consumer or the Pulsar client has been disposed. |
-| Disconnected | The connection is lost and attempts are being made to reconnect. |
-| Faulted | An unrecoverable error has occurred. |
-| ReachedEndOfTopic | No more messages are delivered. |
-
-This example shows how to monitor the consumer state.
- -```c# - -private static async ValueTask Monitor(IConsumer consumer, CancellationToken cancellationToken) -{ - var state = ConsumerState.Disconnected; - - while (!cancellationToken.IsCancellationRequested) - { - state = await consumer.StateChangedFrom(state, cancellationToken); - - var stateMessage = state switch - { - ConsumerState.Active => "The consumer is active", - ConsumerState.Inactive => "The consumer is inactive", - ConsumerState.Disconnected => "The consumer is disconnected", - ConsumerState.Closed => "The consumer has closed", - ConsumerState.ReachedEndOfTopic => "The consumer has reached end of topic", - ConsumerState.Faulted => "The consumer has faulted", - _ => $"The consumer has an unknown state '{state}'" - }; - - Console.WriteLine(stateMessage); - - if (consumer.IsFinalState(state)) - return; - } -} - -``` - -### Monitor reader state - -The following table lists states available for the reader. - -| State | Description | -| ---- | ----| -| Closed | The reader or the Pulsar client has been disposed. | -| Connected | All is well. | -| Disconnected | The connection is lost and attempts are being made to reconnect. -| Faulted | An unrecoverable error has occurred. | -| ReachedEndOfTopic | No more messages are delivered. | - -This example shows how to monitor the reader state. - -```c# - -private static async ValueTask Monitor(IReader reader, CancellationToken cancellationToken) -{ - var state = ReaderState.Disconnected; - - while (!cancellationToken.IsCancellationRequested) - { - state = await reader.StateChangedFrom(state, cancellationToken); - - var stateMessage = state switch - { - ReaderState.Connected => "The reader is connected", - ReaderState.Disconnected => "The reader is disconnected", - ReaderState.Closed => "The reader has closed", - ReaderState.ReachedEndOfTopic => "The reader has reached end of topic", - ReaderState.Faulted => "The reader has faulted", - _ => $"The reader has an unknown state '{state}'" - }; - - Console.WriteLine(stateMessage); - - if (reader.IsFinalState(state)) - return; - } -} - -``` - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/client-libraries-go.md b/site2/website/versioned_docs/version-2.8.2-deprecated/client-libraries-go.md deleted file mode 100644 index 6281b03dd8c805..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/client-libraries-go.md +++ /dev/null @@ -1,885 +0,0 @@ ---- -id: client-libraries-go -title: Pulsar Go client -sidebar_label: "Go" -original_id: client-libraries-go ---- - -> Tips: Currently, the CGo client will be deprecated, if you want to know more about the CGo client, please refer to [CGo client docs](client-libraries-cgo.md) - -You can use Pulsar [Go client](https://github.com/apache/pulsar-client-go) to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Go (aka Golang). - -> **API docs available as well** -> For standard API docs, consult the [Godoc](https://godoc.org/github.com/apache/pulsar-client-go/pulsar). - - -## Installation - -### Install go package - -You can install the `pulsar` library locally using `go get`. - -```bash - -$ go get -u "github.com/apache/pulsar-client-go/pulsar" - -``` - -Once installed locally, you can import it into your project: - -```go - -import "github.com/apache/pulsar-client-go/pulsar" - -``` - -## Connection URLs - -To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL. 
-
-Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme and have a default port of 6650. Here's an example for `localhost`:
-
-```http
-
-pulsar://localhost:6650
-
-```
-
-If you have multiple brokers, you can set the URL as below.
-
-```
-
-pulsar://localhost:6650,localhost:6651,localhost:6652
-
-```
-
-A URL for a production Pulsar cluster may look something like this:
-
-```http
-
-pulsar://pulsar.us-west.example.com:6650
-
-```
-
-If you're using [TLS](security-tls-authentication.md) authentication, the URL will look something like this:
-
-```http
-
-pulsar+ssl://pulsar.us-west.example.com:6651
-
-```
-
-## Create a client
-
-In order to interact with Pulsar, you'll first need a `Client` object. You can create a client object using the `NewClient` function, passing in a `ClientOptions` object (more on configuration [below](#client-configuration)). Here's an example:
-
-```go
-
-import (
-    "log"
-    "time"
-
-    "github.com/apache/pulsar-client-go/pulsar"
-)
-
-func main() {
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-        URL:               "pulsar://localhost:6650",
-        OperationTimeout:  30 * time.Second,
-        ConnectionTimeout: 30 * time.Second,
-    })
-    if err != nil {
-        log.Fatalf("Could not instantiate Pulsar client: %v", err)
-    }
-
-    defer client.Close()
-}
-
-```
-
-If you have multiple brokers, you can initiate a client object as below.
-
-```go
-
-import (
-    "log"
-    "time"
-
-    "github.com/apache/pulsar-client-go/pulsar"
-)
-
-func main() {
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-        URL:               "pulsar://localhost:6650,localhost:6651,localhost:6652",
-        OperationTimeout:  30 * time.Second,
-        ConnectionTimeout: 30 * time.Second,
-    })
-    if err != nil {
-        log.Fatalf("Could not instantiate Pulsar client: %v", err)
-    }
-
-    defer client.Close()
-}
-
-```
-
-The following configurable parameters are available for Pulsar clients:
-
- Name | Description | Default
-| :-------- | :---------- |:---------- |
-| URL | Configure the service URL for the Pulsar service. If you have multiple brokers, you can set multiple Pulsar cluster addresses for a client. This parameter is **required**. | None |
-| ConnectionTimeout | Timeout for the establishment of a TCP connection | 30s |
-| OperationTimeout| Set the operation timeout. Producer-create, subscribe and unsubscribe operations will be retried until this interval, after which the operation will be marked as failed| 30s|
-| Authentication | Configure the authentication provider. Example: `Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem")` | no authentication |
-| TLSTrustCertsFilePath | Set the path to the trusted TLS certificate file | |
-| TLSAllowInsecureConnection | Configure whether the Pulsar client accepts untrusted TLS certificates from the broker | false |
-| TLSValidateHostname | Configure whether the Pulsar client verifies the validity of the host name from the broker | false |
-| ListenerName | Configure the net model for VPC users to connect to the Pulsar broker | |
-| MaxConnectionsPerBroker | Max number of connections to a single broker that is kept in the pool | 1 |
-| CustomMetricsLabels | Add custom labels to all the metrics reported by this client instance | |
-| Logger | Configure the logger used by the client | logrus.StandardLogger |
-
-## Producers
-
-Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Go producers using a `ProducerOptions` object. Here's an example:
-
-```go
-
-producer, err := client.CreateProducer(pulsar.ProducerOptions{
-    Topic: "my-topic",
-})
-
-if err != nil {
-    log.Fatal(err)
-}
-
-_, err = producer.Send(context.Background(), &pulsar.ProducerMessage{
-    Payload: []byte("hello"),
-})
-
-defer producer.Close()
-
-if err != nil {
-    fmt.Println("Failed to publish message", err)
-}
-fmt.Println("Published message")
-
-```
-
-### Producer operations
-
-Pulsar Go producers have the following methods available:
-
-Method | Description | Return type
-:------|:------------|:-----------
-`Topic()` | Fetches the producer's [topic](reference-terminology.md#topic)| `string`
-`Name()` | Fetches the producer's name | `string`
-`Send(context.Context, *ProducerMessage)` | Publishes a [message](#messages) to the producer's topic. This call will block until the message is successfully acknowledged by the Pulsar broker, or an error will be thrown if the timeout set using the `SendTimeout` in the producer's [configuration](#producer-configuration) is exceeded. | (MessageID, error)
-`SendAsync(context.Context, *ProducerMessage, func(MessageID, *ProducerMessage, error))`| Publishes a [message](#messages) to the producer's topic asynchronously. The callback is invoked with the message ID, the message, and any error once the send completes. |
-`LastSequenceID()` | Get the last sequence id that was published by this producer. This represents either the automatically assigned or custom sequence id (set on the ProducerMessage) that was published and acknowledged by the broker. | int64
-`Flush()`| Flush all the messages buffered in the client and wait until all messages have been successfully persisted. | error
-`Close()` | Closes the producer and releases all resources allocated to it. If `Close()` is called then no more messages will be accepted from the publisher. This method will block until all pending publish requests have been persisted by Pulsar. If an error is thrown, no pending writes will be retried. |
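-
-To make the asynchronous path concrete, here is a brief sketch using the `SendAsync` and `Flush` signatures from the table above (the payload and logging are illustrative):
-
-```go
-
-// Publish asynchronously; the callback receives the message ID, the
-// original message, and any error from the broker.
-producer.SendAsync(context.Background(), &pulsar.ProducerMessage{
-    Payload: []byte("hello-async"),
-}, func(id pulsar.MessageID, msg *pulsar.ProducerMessage, err error) {
-    if err != nil {
-        log.Printf("Failed to publish: %v", err)
-        return
-    }
-    log.Printf("Published message %v", id)
-})
-
-// Flush blocks until all buffered messages have been persisted.
-if err := producer.Flush(); err != nil {
-    log.Fatal(err)
-}
-
-```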
-
-### Producer Example
-
-#### How to use message router in producer
-
-```go
-
-client, err := pulsar.NewClient(pulsar.ClientOptions{
-    URL: serviceURL,
-})
-
-if err != nil {
-    log.Fatal(err)
-}
-defer client.Close()
-
-// Only subscribe on the specific partition
-consumer, err := client.Subscribe(pulsar.ConsumerOptions{
-    Topic:            "my-partitioned-topic-partition-2",
-    SubscriptionName: "my-sub",
-})
-
-if err != nil {
-    log.Fatal(err)
-}
-defer consumer.Close()
-
-producer, err := client.CreateProducer(pulsar.ProducerOptions{
-    Topic: "my-partitioned-topic",
-    MessageRouter: func(msg *pulsar.ProducerMessage, tm pulsar.TopicMetadata) int {
-        fmt.Println("Routing message ", msg, " -- Partitions: ", tm.NumPartitions())
-        return 2
-    },
-})
-
-if err != nil {
-    log.Fatal(err)
-}
-defer producer.Close()
-
-```
-
-#### How to use schema interface in producer
-
-```go
-
-type testJSON struct {
-    ID   int    `json:"id"`
-    Name string `json:"name"`
-}
-
-```
-
-```go
-
-var (
-    exampleSchemaDef = "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," +
-        "\"fields\":[{\"name\":\"ID\",\"type\":\"int\"},{\"name\":\"Name\",\"type\":\"string\"}]}"
-)
-
-```
-
-```go
-
-client, err := pulsar.NewClient(pulsar.ClientOptions{
-    URL: "pulsar://localhost:6650",
-})
-if err != nil {
-    log.Fatal(err)
-}
-defer client.Close()
-
-properties := make(map[string]string)
-properties["pulsar"] = "hello"
-jsonSchemaWithProperties := pulsar.NewJSONSchema(exampleSchemaDef, properties)
-producer, err := client.CreateProducer(pulsar.ProducerOptions{
-    Topic:  "jsonTopic",
-    Schema: jsonSchemaWithProperties,
-})
-if err != nil {
-    log.Fatal(err)
-}
-
-_, err = producer.Send(context.Background(), &pulsar.ProducerMessage{
-    Value: &testJSON{
-        ID:   100,
-        Name: "pulsar",
-    },
-})
-if err != nil {
-    log.Fatal(err)
-}
-producer.Close()
-
-```
-
-#### How to use delay relative in producer
-
-```go
-
-client, err := pulsar.NewClient(pulsar.ClientOptions{
-    URL: "pulsar://localhost:6650",
-})
-if err != nil {
-    log.Fatal(err)
-}
-defer client.Close()
-
-topicName := newTopicName()
-producer, err := client.CreateProducer(pulsar.ProducerOptions{
-    Topic:           topicName,
-    DisableBatching: true,
-})
-if err != nil {
-    log.Fatal(err)
-}
-defer producer.Close()
-
-consumer, err := client.Subscribe(pulsar.ConsumerOptions{
-    Topic:            topicName,
-    SubscriptionName: "subName",
-    Type:             pulsar.Shared,
-})
-if err != nil {
-    log.Fatal(err)
-}
-defer consumer.Close()
-
-ID, err := producer.Send(context.Background(), &pulsar.ProducerMessage{
-    Payload:      []byte("test"),
-    DeliverAfter: 3 * time.Second,
-})
-if err != nil {
-    log.Fatal(err)
-}
-fmt.Println(ID)
-
-ctx, canc := context.WithTimeout(context.Background(), 1*time.Second)
-msg, err := consumer.Receive(ctx)
-if err != nil {
-    log.Fatal(err)
-}
-fmt.Println(msg.Payload())
-canc()
-
-ctx, canc = context.WithTimeout(context.Background(), 5*time.Second)
-msg, err = consumer.Receive(ctx)
-if err != nil {
-    log.Fatal(err)
-}
-fmt.Println(msg.Payload())
-canc()
-
-```
-
-### Producer configuration
-
- Name | Description | Default
-| :-------- | :---------- |:---------- |
-| Topic | Topic specifies the topic this producer will publish to. This argument is required when constructing the producer. | |
-| Name | Name specifies a name for the producer. If not assigned, the system will generate a globally unique name which can be accessed with Producer.ProducerName(). | |
-| Properties | Properties attach a set of application-defined properties to the producer. These properties will be visible in the topic stats | |
-| SendTimeout | SendTimeout sets the timeout for a message that is not acknowledged by the server | 30s |
-| DisableBlockIfQueueFull | DisableBlockIfQueueFull controls whether Send and SendAsync block when the producer's message queue is full | false |
-| MaxPendingMessages| MaxPendingMessages sets the max size of the queue holding the messages pending to receive an acknowledgment from the broker. | |
-| HashingScheme | HashingScheme changes the `HashingScheme` used to choose the partition on which to publish a particular message. | JavaStringHash |
-| CompressionType | CompressionType sets the compression type for the producer. | not compressed |
-| CompressionLevel | Define the desired compression level. Options: Default, Faster and Better | Default |
-| MessageRouter | MessageRouter sets a custom message routing policy by passing an implementation of MessageRouter | |
-| DisableBatching | DisableBatching controls whether automatic batching of messages is enabled for the producer. | false |
-| BatchingMaxPublishDelay | BatchingMaxPublishDelay sets the time period within which the messages sent will be batched | 1ms |
-| BatchingMaxMessages | BatchingMaxMessages sets the maximum number of messages permitted in a batch. | 1000 |
-| BatchingMaxSize | BatchingMaxSize sets the maximum number of bytes permitted in a batch. | 128KB |
-| Schema | Schema sets a custom schema type by passing an implementation of `Schema` | bytes[] |
-| Interceptors | A chain of interceptors. These interceptors are called at some points defined in the `ProducerInterceptor` interface. | None |
-| MaxReconnectToBroker | MaxReconnectToBroker sets the maximum retry number of reconnectToBroker | ultimate |
-| BatcherBuilderType | BatcherBuilderType sets the batch builder type. This is used to create a batch container when batching is enabled. Options: DefaultBatchBuilder and KeyBasedBatchBuilder | DefaultBatchBuilder |
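-
-The sketch below combines a few of the batching and compression options from this table. The duration and count values are illustrative, and the `pulsar.LZ4` constant is an assumption based on the `CompressionType` description:
-
-```go
-
-// Sketch: a producer with tuned batching and compression (illustrative values).
-producer, err := client.CreateProducer(pulsar.ProducerOptions{
-    Topic:                   "my-topic",
-    CompressionType:         pulsar.LZ4, // assumed constant for the LZ4 option
-    BatchingMaxPublishDelay: 10 * time.Millisecond,
-    BatchingMaxMessages:     500,
-})
-if err != nil {
-    log.Fatal(err)
-}
-defer producer.Close()
-
-```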
| | -| Properties | Properties attach a set of application defined properties to the producer This properties will be visible in the topic stats | | -| SendTimeout | SendTimeout set the timeout for a message that is not acknowledged by the server | 30s | -| DisableBlockIfQueueFull | DisableBlockIfQueueFull control whether Send and SendAsync block if producer's message queue is full | false | -| MaxPendingMessages| MaxPendingMessages set the max size of the queue holding the messages pending to receive an acknowledgment from the broker. | | -| HashingScheme | HashingScheme change the `HashingScheme` used to chose the partition on where to publish a particular message. | JavaStringHash | -| CompressionType | CompressionType set the compression type for the producer. | not compressed | -| CompressionLevel | Define the desired compression level. Options: Default, Faster and Better | Default | -| MessageRouter | MessageRouter set a custom message routing policy by passing an implementation of MessageRouter | | -| DisableBatching | DisableBatching control whether automatic batching of messages is enabled for the producer. | false | -| BatchingMaxPublishDelay | BatchingMaxPublishDelay set the time period within which the messages sent will be batched | 1ms | -| BatchingMaxMessages | BatchingMaxMessages set the maximum number of messages permitted in a batch. | 1000 | -| BatchingMaxSize | BatchingMaxSize sets the maximum number of bytes permitted in a batch. | 128KB | -| Schema | Schema set a custom schema type by passing an implementation of `Schema` | bytes[] | -| Interceptors | A chain of interceptors. These interceptors are called at some points defined in the `ProducerInterceptor` interface. | None | -| MaxReconnectToBroker | MaxReconnectToBroker set the maximum retry number of reconnectToBroker | ultimate | -| BatcherBuilderType | BatcherBuilderType sets the batch builder type. This is used to create a batch container when batching is enabled. Options: DefaultBatchBuilder and KeyBasedBatchBuilder | DefaultBatchBuilder | - -## Consumers - -Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on that topic/those topics. You can [configure](#consumer-configuration) Go consumers using a `ConsumerOptions` object. Here's a basic example that uses channels: - -```go - -consumer, err := client.Subscribe(pulsar.ConsumerOptions{ - Topic: "topic-1", - SubscriptionName: "my-sub", - Type: pulsar.Shared, -}) -if err != nil { - log.Fatal(err) -} -defer consumer.Close() - -for i := 0; i < 10; i++ { - msg, err := consumer.Receive(context.Background()) - if err != nil { - log.Fatal(err) - } - - fmt.Printf("Received message msgId: %#v -- content: '%s'\n", - msg.ID(), string(msg.Payload())) - - consumer.Ack(msg) -} - -if err := consumer.Unsubscribe(); err != nil { - log.Fatal(err) -} - -``` - -### Consumer operations - -Pulsar Go consumers have the following methods available: - -Method | Description | Return type -:------|:------------|:----------- -`Subscription()` | Returns the consumer's subscription name | `string` -`Unsubcribe()` | Unsubscribes the consumer from the assigned topic. Throws an error if the unsubscribe operation is somehow unsuccessful. | `error` -`Receive(context.Context)` | Receives a single message from the topic. This method blocks until a message is available. | `(Message, error)` -`Chan()` | Chan returns a channel from which to consume messages. 
| `<-chan ConsumerMessage` -`Ack(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) | -`AckID(MessageID)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID | -`ReconsumeLater(msg Message, delay time.Duration)` | ReconsumeLater mark a message for redelivery after custom delay | -`Nack(Message)` | Acknowledge the failure to process a single message. | -`NackID(MessageID)` | Acknowledge the failure to process a single message. | -`Seek(msgID MessageID)` | Reset the subscription associated with this consumer to a specific message id. The message id can either be a specific message or represent the first or last messages in the topic. | `error` -`SeekByTime(time time.Time)` | Reset the subscription associated with this consumer to a specific message publish time. | `error` -`Close()` | Closes the consumer, disabling its ability to receive messages from the broker | -`Name()` | Name returns the name of consumer | `string` - -### Receive example - -#### How to use regex consumer - -```go - -client, err := pulsar.NewClient(pulsar.ClientOptions{ - URL: "pulsar://localhost:6650", -}) - -defer client.Close() - -p, err := client.CreateProducer(pulsar.ProducerOptions{ - Topic: topicInRegex, - DisableBatching: true, -}) -if err != nil { - log.Fatal(err) -} -defer p.Close() - -topicsPattern := fmt.Sprintf("persistent://%s/foo.*", namespace) -opts := pulsar.ConsumerOptions{ - TopicsPattern: topicsPattern, - SubscriptionName: "regex-sub", -} -consumer, err := client.Subscribe(opts) -if err != nil { - log.Fatal(err) -} -defer consumer.Close() - -``` - -#### How to use multi topics Consumer - -```go - -func newTopicName() string { - return fmt.Sprintf("my-topic-%v", time.Now().Nanosecond()) -} - - -topic1 := "topic-1" -topic2 := "topic-2" - -client, err := NewClient(pulsar.ClientOptions{ - URL: "pulsar://localhost:6650", -}) -if err != nil { - log.Fatal(err) -} -topics := []string{topic1, topic2} -consumer, err := client.Subscribe(pulsar.ConsumerOptions{ - Topics: topics, - SubscriptionName: "multi-topic-sub", -}) -if err != nil { - log.Fatal(err) -} -defer consumer.Close() - -``` - -#### How to use consumer listener - -```go - -import ( - "fmt" - "log" - - "github.com/apache/pulsar-client-go/pulsar" -) - -func main() { - client, err := pulsar.NewClient(pulsar.ClientOptions{URL: "pulsar://localhost:6650"}) - if err != nil { - log.Fatal(err) - } - - defer client.Close() - - channel := make(chan pulsar.ConsumerMessage, 100) - - options := pulsar.ConsumerOptions{ - Topic: "topic-1", - SubscriptionName: "my-subscription", - Type: pulsar.Shared, - } - - options.MessageChannel = channel - - consumer, err := client.Subscribe(options) - if err != nil { - log.Fatal(err) - } - - defer consumer.Close() - - // Receive messages from channel. The channel returns a struct which contains message and the consumer from where - // the message was received. 
It's not necessary here since we have 1 single consumer, but the channel could be - // shared across multiple consumers as well - for cm := range channel { - msg := cm.Message - fmt.Printf("Received message msgId: %v -- content: '%s'\n", - msg.ID(), string(msg.Payload())) - - consumer.Ack(msg) - } -} - -``` - -#### How to use consumer receive timeout - -```go - -client, err := NewClient(pulsar.ClientOptions{ - URL: "pulsar://localhost:6650", -}) -if err != nil { - log.Fatal(err) -} -defer client.Close() - -topic := "test-topic-with-no-messages" -ctx, cancel := context.WithTimeout(context.Background(), 500*time.Millisecond) -defer cancel() - -// create consumer -consumer, err := client.Subscribe(pulsar.ConsumerOptions{ - Topic: topic, - SubscriptionName: "my-sub1", - Type: Shared, -}) -if err != nil { - log.Fatal(err) -} -defer consumer.Close() - -msg, err := consumer.Receive(ctx) -fmt.Println(msg.Payload()) -if err != nil { - log.Fatal(err) -} - -``` - -#### How to use schema in consumer - -```go - -type testJSON struct { - ID int `json:"id"` - Name string `json:"name"` -} - -``` - -```go - -var ( - exampleSchemaDef = "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," + - "\"fields\":[{\"name\":\"ID\",\"type\":\"int\"},{\"name\":\"Name\",\"type\":\"string\"}]}" -) - -``` - -```go - -client, err := NewClient(pulsar.ClientOptions{ - URL: "pulsar://localhost:6650", -}) -if err != nil { - log.Fatal(err) -} -defer client.Close() - -var s testJSON - -consumerJS := NewJSONSchema(exampleSchemaDef, nil) -consumer, err := client.Subscribe(ConsumerOptions{ - Topic: "jsonTopic", - SubscriptionName: "sub-1", - Schema: consumerJS, - SubscriptionInitialPosition: SubscriptionPositionEarliest, -}) -assert.Nil(t, err) -msg, err := consumer.Receive(context.Background()) -assert.Nil(t, err) -err = msg.GetSchemaValue(&s) -if err != nil { - log.Fatal(err) -} - -defer consumer.Close() - -``` - -### Consumer configuration - - Name | Description | Default -| :-------- | :---------- |:---------- | -| Topic | Topic specify the topic this consumer will subscribe to. This argument is required when constructing the reader. | | -| Topics | Specify a list of topics this consumer will subscribe on. Either a topic, a list of topics or a topics pattern are required when subscribing| | -| TopicsPattern | Specify a regular expression to subscribe to multiple topics under the same namespace. Either a topic, a list of topics or a topics pattern are required when subscribing | | -| AutoDiscoveryPeriod | Specify the interval in which to poll for new partitions or new topics if using a TopicsPattern. | | -| SubscriptionName | Specify the subscription name for this consumer. This argument is required when subscribing | | -| Name | Set the consumer name | | -| Properties | Properties attach a set of application defined properties to the producer This properties will be visible in the topic stats | | -| Type | Select the subscription type to be used when subscribing to the topic. | Exclusive | -| SubscriptionInitialPosition | InitialPosition at which the cursor will be set when subscribe | Latest | -| DLQ | Configuration for Dead Letter Queue consumer policy. | no DLQ | -| MessageChannel | Sets a `MessageChannel` for the consumer. When a message is received, it will be pushed to the channel for consumption | | -| ReceiverQueueSize | Sets the size of the consumer receive queue. 
| 1000| -| NackRedeliveryDelay | The delay after which to redeliver the messages that failed to be processed | 1min | -| ReadCompacted | If enabled, the consumer will read messages from the compacted topic rather than reading the full message backlog of the topic | false | -| ReplicateSubscriptionState | Mark the subscription as replicated to keep it in sync across clusters | false | -| KeySharedPolicy | Configuration for Key Shared consumer policy. | | -| RetryEnable | Auto retry send messages to default filled DLQPolicy topics | false | -| Interceptors | A chain of interceptors. These interceptors are called at some points defined in the `ConsumerInterceptor` interface. | | -| MaxReconnectToBroker | MaxReconnectToBroker set the maximum retry number of reconnectToBroker. | ultimate | -| Schema | Schema set a custom schema type by passing an implementation of `Schema` | bytes[] | - -## Readers - -Pulsar readers process messages from Pulsar topics. Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recent unacked message). You can [configure](#reader-configuration) Go readers using a `ReaderOptions` object. Here's an example: - -```go - -reader, err := client.CreateReader(pulsar.ReaderOptions{ - Topic: "topic-1", - StartMessageID: pulsar.EarliestMessageID(), -}) -if err != nil { - log.Fatal(err) -} -defer reader.Close() - -``` - -### Reader operations - -Pulsar Go readers have the following methods available: - -Method | Description | Return type -:------|:------------|:----------- -`Topic()` | Returns the reader's [topic](reference-terminology.md#topic) | `string` -`Next(context.Context)` | Receives the next message on the topic (analogous to the `Receive` method for [consumers](#consumer-operations)). This method blocks until a message is available. | `(Message, error)` -`HasNext()` | Check if there is any message available to read from the current position| (bool, error) -`Close()` | Closes the reader, disabling its ability to receive messages from the broker | `error` -`Seek(MessageID)` | Reset the subscription associated with this reader to a specific message ID | `error` -`SeekByTime(time time.Time)` | Reset the subscription associated with this reader to a specific message publish time | `error` - -### Reader example - -#### How to use reader to read 'next' message - -Here's an example usage of a Go reader that uses the `Next()` method to process incoming messages: - -```go - -import ( - "context" - "fmt" - "log" - - "github.com/apache/pulsar-client-go/pulsar" -) - -func main() { - client, err := pulsar.NewClient(pulsar.ClientOptions{URL: "pulsar://localhost:6650"}) - if err != nil { - log.Fatal(err) - } - - defer client.Close() - - reader, err := client.CreateReader(pulsar.ReaderOptions{ - Topic: "topic-1", - StartMessageID: pulsar.EarliestMessageID(), - }) - if err != nil { - log.Fatal(err) - } - defer reader.Close() - - for reader.HasNext() { - msg, err := reader.Next(context.Background()) - if err != nil { - log.Fatal(err) - } - - fmt.Printf("Received message msgId: %#v -- content: '%s'\n", - msg.ID(), string(msg.Payload())) - } -} - -``` - -In the example above, the reader begins reading from the earliest available message (specified by `pulsar.EarliestMessage`). 
The reader can also begin reading from the latest message (`pulsar.LatestMessage`) or some other message ID specified by bytes using the `DeserializeMessageID` function, which takes a byte array and returns a `MessageID` object. Here's an example: - -```go - -lastSavedId := // Read last saved message id from external store as byte[] - -reader, err := client.CreateReader(pulsar.ReaderOptions{ - Topic: "my-golang-topic", - StartMessageID: pulsar.DeserializeMessageID(lastSavedId), -}) - -``` - -#### How to use reader to read specific message - -```go - -client, err := NewClient(pulsar.ClientOptions{ - URL: lookupURL, -}) - -if err != nil { - log.Fatal(err) -} -defer client.Close() - -topic := "topic-1" -ctx := context.Background() - -// create producer -producer, err := client.CreateProducer(pulsar.ProducerOptions{ - Topic: topic, - DisableBatching: true, -}) -if err != nil { - log.Fatal(err) -} -defer producer.Close() - -// send 10 messages -msgIDs := [10]MessageID{} -for i := 0; i < 10; i++ { - msgID, err := producer.Send(ctx, &pulsar.ProducerMessage{ - Payload: []byte(fmt.Sprintf("hello-%d", i)), - }) - assert.NoError(t, err) - assert.NotNil(t, msgID) - msgIDs[i] = msgID -} - -// create reader on 5th message (not included) -reader, err := client.CreateReader(pulsar.ReaderOptions{ - Topic: topic, - StartMessageID: msgIDs[4], -}) - -if err != nil { - log.Fatal(err) -} -defer reader.Close() - -// receive the remaining 5 messages -for i := 5; i < 10; i++ { - msg, err := reader.Next(context.Background()) - if err != nil { - log.Fatal(err) -} - -// create reader on 5th message (included) -readerInclusive, err := client.CreateReader(pulsar.ReaderOptions{ - Topic: topic, - StartMessageID: msgIDs[4], - StartMessageIDInclusive: true, -}) - -if err != nil { - log.Fatal(err) -} -defer readerInclusive.Close() - -``` - -### Reader configuration - - Name | Description | Default -| :-------- | :---------- |:---------- | -| Topic | Topic specify the topic this consumer will subscribe to. This argument is required when constructing the reader. | | -| Name | Name set the reader name. | | -| Properties | Attach a set of application defined properties to the reader. This properties will be visible in the topic stats | | -| StartMessageID | StartMessageID initial reader positioning is done by specifying a message id. | | -| StartMessageIDInclusive | If true, the reader will start at the `StartMessageID`, included. Default is `false` and the reader will start from the "next" message | false | -| MessageChannel | MessageChannel sets a `MessageChannel` for the consumer When a message is received, it will be pushed to the channel for consumption| | -| ReceiverQueueSize | ReceiverQueueSize sets the size of the consumer receive queue. | 1000 | -| SubscriptionRolePrefix| SubscriptionRolePrefix set the subscription role prefix. | "reader" | -| ReadCompacted | If enabled, the reader will read messages from the compacted topic rather than reading the full message backlog of the topic. ReadCompacted can only be enabled when reading from a persistent topic. | false| - -## Messages - -The Pulsar Go client provides a `ProducerMessage` interface that you can use to construct messages to producer on Pulsar topics. 
Here's an example message:
-
-```go
-
-msg := pulsar.ProducerMessage{
-    Payload: []byte("Here is some message data"),
-    Key: "message-key",
-    Properties: map[string]string{
-        "foo": "bar",
-    },
-    EventTime: time.Now(),
-    ReplicationClusters: []string{"cluster1", "cluster3"},
-}
-
-// Send takes a context and a *ProducerMessage and blocks until the broker acknowledges the message.
-if _, err := producer.Send(context.Background(), &msg); err != nil {
-    log.Fatalf("Could not publish message due to: %v", err)
-}
-
-```
-
-The following parameters are available for `ProducerMessage` objects:
-
-Parameter | Description
-:---------|:-----------
-`Payload` | The actual data payload of the message
-`Value` | Value and payload are mutuallyly exclusive; use `Value interface{}` for schema messages.
-`Key` | The optional key associated with the message (particularly useful for things like topic compaction)
-`OrderingKey` | OrderingKey sets the ordering key of the message.
-`Properties` | A key-value map (both keys and values must be strings) for any application-specific metadata attached to the message
-`EventTime` | The timestamp associated with the message
-`ReplicationClusters` | The clusters to which this message will be replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default.
-`SequenceID` | Set the sequence id to assign to the current message
-`DeliverAfter` | Request to deliver the message only after the specified relative delay
-`DeliverAt` | Deliver the message only at or after the specified absolute timestamp
-
-## TLS encryption and authentication
-
-In order to use [TLS encryption](security-tls-transport.md), you'll need to configure your client to do so:
-
- * Use the `pulsar+ssl` URL type
- * Set `TLSTrustCertsFilePath` to the path to the TLS certs used by your client and the Pulsar broker
- * Configure the `Authentication` option
-
-Here's an example:
-
-```go
-
-opts := pulsar.ClientOptions{
-    URL: "pulsar+ssl://my-cluster.com:6651",
-    TLSTrustCertsFilePath: "/path/to/certs/my-cert.csr",
-    Authentication: pulsar.NewAuthenticationTLS("my-cert.pem", "my-key.pem"),
-}
-
-```
-
-## OAuth2 authentication
-
-To use [OAuth2 authentication](security-oauth2.md), configure your client as shown in the following example.
-
-```go
-
-oauth := pulsar.NewAuthenticationOAuth2(map[string]string{
-    "type":       "client_credentials",
-    "issuerUrl":  "https://dev-kt-aa9ne.us.auth0.com",
-    "audience":   "https://dev-kt-aa9ne.us.auth0.com/api/v2/",
-    "privateKey": "/path/to/privateKey",
-    "clientId":   "0Xx...Yyxeny",
-})
-client, err := pulsar.NewClient(pulsar.ClientOptions{
-    URL:            "pulsar://my-cluster:6650",
-    Authentication: oauth,
-})
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/client-libraries-java.md b/site2/website/versioned_docs/version-2.8.2-deprecated/client-libraries-java.md
deleted file mode 100644
index 0ff9a22936fafc..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/client-libraries-java.md
+++ /dev/null
@@ -1,1038 +0,0 @@
----
-id: client-libraries-java
-title: Pulsar Java client
-sidebar_label: "Java"
-original_id: client-libraries-java
----
-
-You can use the Pulsar Java client to create Java [producer](#producer), [consumer](#consumer), and [reader](#reader-interface) objects for messages and to perform [administrative tasks](admin-api-overview.md). The current version of the Java client is **@pulsar:version@**. 
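-
-As a quick orientation before the detailed sections below, here is a minimal sketch of a full round trip. It assumes a broker at `pulsar://localhost:6650`; the topic and subscription names are placeholders, and imports from `org.apache.pulsar.client.api` are omitted:
-
-```java
-
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar://localhost:6650")
-        .build();
-
-// Subscribe first so the subscription exists before the message below is published.
-Consumer<String> consumer = client.newConsumer(Schema.STRING)
-        .topic("my-topic")
-        .subscriptionName("my-subscription")
-        .subscribe();
-
-Producer<String> producer = client.newProducer(Schema.STRING)
-        .topic("my-topic")
-        .create();
-producer.send("hello");
-
-Message<String> msg = consumer.receive();
-consumer.acknowledge(msg);
-
-client.close();
-
-```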
- -All the methods in [producer](#producer), [consumer](#consumer), and [reader](#reader) of a Java client are thread-safe. - -Javadoc for the Pulsar client is divided into two domains by package as follows. - -Package | Description | Maven Artifact -:-------|:------------|:-------------- -[`org.apache.pulsar.client.api`](/api/client) | The producer and consumer API | [org.apache.pulsar:pulsar-client:@pulsar:version@](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client%7C@pulsar:version@%7Cjar) -[`org.apache.pulsar.client.admin`](/api/admin) | The Java [admin API](admin-api-overview.md) | [org.apache.pulsar:pulsar-client-admin:@pulsar:version@](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client-admin%7C@pulsar:version@%7Cjar) -`org.apache.pulsar.client.all` |Includes both `pulsar-client` and `pulsar-client-admin`

    Both `pulsar-client` and `pulsar-client-admin` are shaded packages, and they shade their dependencies independently. Consequently, applications that use both `pulsar-client` and `pulsar-client-admin` contain redundant shaded classes, and it is easy to introduce new dependencies but forget to update the shading rules.

    In this case, you can use `pulsar-client-all`, which shades dependencies only one time and reduces the size of dependencies. |[org.apache.pulsar:pulsar-client-all:@pulsar:version@](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client-all%7C@pulsar:version@%7Cjar) - -This document focuses only on the client API for producing and consuming messages on Pulsar topics. For how to use the Java admin client, see [Pulsar admin interface](admin-api-overview.md). - -## Installation - -The latest version of the Pulsar Java client library is available via [Maven Central](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client%7C@pulsar:version@%7Cjar). To use the latest version, add the `pulsar-client` library to your build configuration. - -### Maven - -If you use Maven, add the following information to the `pom.xml` file. - -```xml - - -@pulsar:version@ - - - - org.apache.pulsar - pulsar-client - ${pulsar.version} - - -``` - -### Gradle - -If you use Gradle, add the following information to the `build.gradle` file. - -```groovy - -def pulsarVersion = '@pulsar:version@' - -dependencies { - compile group: 'org.apache.pulsar', name: 'pulsar-client', version: pulsarVersion -} - -``` - -## Connection URLs - -To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL. - -You can assign Pulsar protocol URLs to specific clusters and use the `pulsar` scheme. The default port is `6650`. The following is an example of `localhost`. - -```http - -pulsar://localhost:6650 - -``` - -If you have multiple brokers, the URL is as follows. - -```http - -pulsar://localhost:6550,localhost:6651,localhost:6652 - -``` - -A URL for a production Pulsar cluster is as follows. - -```http - -pulsar://pulsar.us-west.example.com:6650 - -``` - -If you use [TLS](security-tls-authentication.md) authentication, the URL is as follows. - -```http - -pulsar+ssl://pulsar.us-west.example.com:6651 - -``` - -## Client - -You can instantiate a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object using just a URL for the target Pulsar [cluster](reference-terminology.md#cluster) like this: - -```java - -PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://localhost:6650") - .build(); - -``` - -If you have multiple brokers, you can initiate a PulsarClient like this: - -```java - -PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://localhost:6650,localhost:6651,localhost:6652") - .build(); - -``` - -> ### Default broker URLs for standalone clusters -> If you run a cluster in [standalone mode](getting-started-standalone.md), the broker is available at the `pulsar://localhost:6650` URL by default. - -If you create a client, you can use the `loadConf` configuration. The following parameters are available in `loadConf`. - -| Type | Name |
    Description
    | Default -|---|---|---|--- -String | `serviceUrl` |Service URL provider for Pulsar service | None -String | `authPluginClassName` | Name of the authentication plugin | None -String | `authParams` | String represents parameters for the authentication plugin

    **Example**
    key1:val1,key2:val2|None -long|`operationTimeoutMs`|Operation timeout |30000 -long|`statsIntervalSeconds`|Interval between each stats info

    Stats are activated with a positive `statsInterval`

    Set `statsIntervalSeconds` to 1 second at least |60 -int|`numIoThreads`| The number of threads used for handling connections to brokers | 1 -int|`numListenerThreads`|The number of threads used for handling message listeners. The listener thread pool is shared across all the consumers and readers using the "listener" model to get messages. For a given consumer, the listener is always invoked from the same thread to ensure ordering. If you want multiple threads to process a single topic, you need to create a [`shared`](https://pulsar.apache.org/docs/en/next/concepts-messaging/#shared) subscription and multiple consumers for this subscription. This does not ensure ordering.| 1 -boolean|`useTcpNoDelay`|Whether to use TCP no-delay flag on the connection to disable Nagle algorithm |true -boolean |`useTls` |Whether to use TLS encryption on the connection| false -string | `tlsTrustCertsFilePath` |Path to the trusted TLS certificate file|None -boolean|`tlsAllowInsecureConnection`|Whether the Pulsar client accepts untrusted TLS certificate from broker | false -boolean | `tlsHostnameVerificationEnable` | Whether to enable TLS hostname verification|false -int|`concurrentLookupRequest`|The number of concurrent lookup requests allowed to send on each broker connection to prevent overload on broker|5000 -int|`maxLookupRequest`|The maximum number of lookup requests allowed on each broker connection to prevent overload on broker | 50000 -int|`maxNumberOfRejectedRequestPerConnection`|The maximum number of rejected requests of a broker in a certain time frame (30 seconds) after the current connection is closed and the client creates a new connection to connect to a different broker|50 -int|`keepAliveIntervalSeconds`|Seconds of keeping alive interval for each client broker connection|30 -int|`connectionTimeoutMs`|Duration of waiting for a connection to a broker to be established

    If the duration passes without a response from a broker, the connection attempt is dropped|10000 -int|`requestTimeoutMs`|Maximum duration for completing a request |60000 -int|`defaultBackoffIntervalNanos`| Default duration for a backoff interval | TimeUnit.MILLISECONDS.toNanos(100); -long|`maxBackoffIntervalNanos`|Maximum duration for a backoff interval|TimeUnit.SECONDS.toNanos(30) -SocketAddress|`socks5ProxyAddress`|SOCKS5 proxy address | None -String|`socks5ProxyUsername`|SOCKS5 proxy username | None -String|`socks5ProxyPassword`|SOCKS5 proxy password | None - -Check out the Javadoc for the {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} class for a full list of configurable parameters. - -> In addition to client-level configuration, you can also apply [producer](#configuring-producers) and [consumer](#configuring-consumers) specific configuration as described in sections below. - -## Producer - -In Pulsar, producers write messages to topics. Once you've instantiated a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object (as in the section [above](#client-configuration)), you can create a {@inject: javadoc:Producer:/client/org/apache/pulsar/client/api/Producer} for a specific Pulsar [topic](reference-terminology.md#topic). - -```java - -Producer producer = client.newProducer() - .topic("my-topic") - .create(); - -// You can then send messages to the broker and topic you specified: -producer.send("My message".getBytes()); - -``` - -By default, producers produce messages that consist of byte arrays. You can produce different types by specifying a message [schema](#schemas). - -```java - -Producer stringProducer = client.newProducer(Schema.STRING) - .topic("my-topic") - .create(); -stringProducer.send("My message"); - -``` - -> Make sure that you close your producers, consumers, and clients when you do not need them. - -> ```java -> -> producer.close(); -> consumer.close(); -> client.close(); -> -> -> ``` - -> -> Close operations can also be asynchronous: - -> ```java -> -> producer.closeAsync() -> .thenRun(() -> System.out.println("Producer closed")) -> .exceptionally((ex) -> { -> System.err.println("Failed to close producer: " + ex); -> return null; -> }); -> -> -> ``` - - -### Configure producer - -If you instantiate a `Producer` object by specifying only a topic name as the example above, use the default configuration for producer. - -If you create a producer, you can use the `loadConf` configuration. The following parameters are available in `loadConf`. - -Type | Name|
    Description
    | Default -|---|---|---|--- -String| `topicName`| Topic name| null| -String|`producerName`|Producer name| null -long|`sendTimeoutMs`|Message send timeout in ms.

    If a message is not acknowledged by a server before the `sendTimeout` expires, an error occurs.|30000 -boolean|`blockIfQueueFull`|If it is set to `true`, when the outgoing message queue is full, the `Send` and `SendAsync` methods of producer block, rather than failing and throwing errors.

    If it is set to `false`, when the outgoing message queue is full, the `Send` and `SendAsync` methods of producer fail and `ProducerQueueIsFullError` exceptions occur.

    The `MaxPendingMessages` parameter determines the size of the outgoing message queue.|false -int|`maxPendingMessages`|The maximum size of a queue holding pending messages.

    That is, messages waiting to receive an acknowledgment from a [broker](reference-terminology.md#broker).

    By default, when the queue is full, all calls to the `Send` and `SendAsync` methods fail **unless** you set `BlockIfQueueFull` to `true`.|1000 -int|`maxPendingMessagesAcrossPartitions`|The maximum number of pending messages across partitions.

    Use the setting to lower the max pending messages for each partition ({@link #setMaxPendingMessages(int)}) if the total number exceeds the configured value.|50000 -MessageRoutingMode|`messageRoutingMode`|Message routing logic for producers on [partitioned topics](concepts-architecture-overview.md#partitioned-topics).

    This logic is applied only when no key is set on messages.

    Available options are as follows:

  1. `pulsar.RoundRobinDistribution`: round robin

  2. `pulsar.UseSinglePartition`: publish all messages to a single partition

  3. `pulsar.CustomPartition`: a custom partitioning scheme
  |`pulsar.RoundRobinDistribution`
-HashingScheme|`hashingScheme`|Hashing function determining the partition where you publish a particular message (**partitioned topics only**).

    Available options are as follows:

  1. `pulsar.JavaStringHash`: the equivalent of `String.hashCode()` in Java

  2. `pulsar.Murmur3_32Hash`: applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function

  3. `pulsar.BoostHash`: applies the hashing function from C++'s [Boost](https://www.boost.org/doc/libs/1_62_0/doc/html/hash.html) library
  |`HashingScheme.JavaStringHash`
-ProducerCryptoFailureAction|`cryptoFailureAction`|The action the producer takes when encryption fails.

  1. **FAIL**: if encryption fails, unencrypted messages fail to send.

  2. **SEND**: if encryption fails, unencrypted messages are sent.
  |`ProducerCryptoFailureAction.FAIL`
-long|`batchingMaxPublishDelayMicros`|Batching time period of sending messages.|TimeUnit.MILLISECONDS.toMicros(1)
-int|`batchingMaxMessages`|The maximum number of messages permitted in a batch.|1000
-boolean|`batchingEnabled`|Enable batching of messages. |true
-CompressionType|`compressionType`|Message data compression type used by a producer.

    Available options:
  1. [`LZ4`](https://github.com/lz4/lz4)
  2. [`ZLIB`](https://zlib.net/)
  3. [`ZSTD`](https://facebook.github.io/zstd/)
  4. [`SNAPPY`](https://google.github.io/snappy/)
  | No compression
-
-You can configure parameters if you do not want to use the default configuration.
-
-For a full list, see the Javadoc for the {@inject: javadoc:ProducerBuilder:/client/org/apache/pulsar/client/api/ProducerBuilder} class. The following is an example.
-
-```java
-
-Producer<byte[]> producer = client.newProducer()
-        .topic("my-topic")
-        .batchingMaxPublishDelay(10, TimeUnit.MILLISECONDS)
-        .sendTimeout(10, TimeUnit.SECONDS)
-        .blockIfQueueFull(true)
-        .create();
-
-```
-
-### Message routing
-
-When using partitioned topics, you can specify the routing mode whenever you publish messages using a producer. For more information on specifying a routing mode using the Java client, see the [Partitioned Topics](cookbooks-partitioned.md) cookbook.
-
-### Async send
-
-You can publish messages [asynchronously](concepts-messaging.md#send-modes) using the Java client. With async send, the producer puts the message in a blocking queue and returns immediately. The client library then sends the message to the broker in the background. If the queue is full (the maximum size is configurable), the producer blocks or fails immediately when calling the API, depending on the arguments passed to the producer.
-
-The following is an example.
-
-```java
-
-producer.sendAsync("my-async-message".getBytes()).thenAccept(msgId -> {
-    System.out.println("Message with ID " + msgId + " successfully sent");
-});
-
-```
-
-As you can see from the example above, async send operations return a {@inject: javadoc:MessageId:/client/org/apache/pulsar/client/api/MessageId} wrapped in a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture).
-
-### Configure messages
-
-In addition to a value, you can set additional items on a given message:
-
-```java
-
-producer.newMessage()
-    .key("my-message-key")
-    .value("my-async-message".getBytes())
-    .property("my-key", "my-value")
-    .property("my-other-key", "my-other-value")
-    .send();
-
-```
-
-You can terminate the builder chain with `sendAsync()` to receive a `CompletableFuture` in return.
-
-## Consumer
-
-In Pulsar, consumers subscribe to topics and handle messages that producers publish to those topics. You can instantiate a new [consumer](reference-terminology.md#consumer) by first instantiating a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object and passing it a URL for a Pulsar broker (as [above](#client-configuration)).
-
-Once you've instantiated a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object, you can create a {@inject: javadoc:Consumer:/client/org/apache/pulsar/client/api/Consumer} by specifying a [topic](reference-terminology.md#topic) and a [subscription](concepts-messaging.md#subscription-modes).
-
-```java
-
-Consumer<byte[]> consumer = client.newConsumer()
-       .topic("my-topic")
-       .subscriptionName("my-subscription")
-       .subscribe();
-
-```
-
-The `subscribe` method automatically subscribes the consumer to the specified topic and subscription. One way to make the consumer listen on the topic is to set up a `while` loop. In this example loop, the consumer listens for messages, prints the contents of any received message, and then [acknowledges](reference-terminology.md#acknowledgment-ack) that the message has been processed. If the processing logic fails, you can use [negative acknowledgement](reference-terminology.md#acknowledgment-ack) to redeliver the message later. 
- -```java - -while (true) { - // Wait for a message - Message msg = consumer.receive(); - - try { - // Do something with the message - System.out.println("Message received: " + new String(msg.getData())); - - // Acknowledge the message so that it can be deleted by the message broker - consumer.acknowledge(msg); - } catch (Exception e) { - // Message failed to process, redeliver later - consumer.negativeAcknowledge(msg); - } -} - -``` - -If you don't want to block your main thread and rather listen constantly for new messages, consider using a `MessageListener`. - -```java - -MessageListener myMessageListener = (consumer, msg) -> { - try { - System.out.println("Message received: " + new String(msg.getData())); - consumer.acknowledge(msg); - } catch (Exception e) { - consumer.negativeAcknowledge(msg); - } -} - -Consumer consumer = client.newConsumer() - .topic("my-topic") - .subscriptionName("my-subscription") - .messageListener(myMessageListener) - .subscribe(); - -``` - -### Configure consumer - -If you instantiate a `Consumer` object by specifying only a topic and subscription name as in the example above, the consumer uses the default configuration. - -When you create a consumer, you can use the `loadConf` configuration. The following parameters are available in `loadConf`. - -Type | Name|
    Description
    | Default -|---|---|---|--- -Set<String>| `topicNames`| Topic name| Sets.newTreeSet() -Pattern| `topicsPattern`| Topic pattern |None -String| `subscriptionName`| Subscription name| None -SubscriptionType| `subscriptionType`| Subscription type

    Four subscription types are available:
  1. Exclusive
  2. Failover
  3. Shared
  4. Key_Shared
  |SubscriptionType.Exclusive
-int | `receiverQueueSize` | Size of a consumer's receiver queue.

    For example, the number of messages accumulated by a consumer before an application calls `Receive`.

    A value higher than the default value increases consumer throughput, though at the expense of more memory utilization.| 1000 -long|`acknowledgementsGroupTimeMicros`|Group a consumer acknowledgment for a specified time.

    By default, a consumer uses 100ms grouping time to send out acknowledgments to a broker.

    Setting a group time of 0 sends out acknowledgments immediately.

    A longer ack group time is more efficient at the expense of a slight increase in message re-deliveries after a failure.|TimeUnit.MILLISECONDS.toMicros(100) -long|`negativeAckRedeliveryDelayMicros`|Delay to wait before redelivering messages that failed to be processed.

    When an application uses {@link Consumer#negativeAcknowledge(Message)}, failed messages are redelivered after a fixed timeout. |TimeUnit.MINUTES.toMicros(1) -int |`maxTotalReceiverQueueSizeAcrossPartitions`|The max total receiver queue size across partitions.

    This setting reduces the receiver queue size for individual partitions if the total receiver queue size exceeds this value.|50000 -String|`consumerName`|Consumer name|null -long|`ackTimeoutMillis`|Timeout of unacked messages|0 -long|`tickDurationMillis`|Granularity of the ack-timeout redelivery.

    Using a higher `tickDurationMillis` reduces the memory overhead of tracking messages when the ack-timeout is set to a larger value (for example, 1 hour).|1000
-int|`priorityLevel`|Priority level for a consumer. The broker gives more priority to a consumer with a higher priority level while dispatching messages in the Shared subscription type.

    The broker follows descending priorities. For example, 0=max-priority, 1, 2,...

    In the Shared subscription type, the broker **first dispatches messages to the consumers with the highest priority level if they have permits**. Otherwise, the broker considers the consumers at the next priority level.

    **Example 1**

    If a subscription has consumerA with `priorityLevel` 0 and consumerB with `priorityLevel` 1, then the broker **only dispatches messages to consumerA until it runs out of permits** and then starts dispatching messages to consumerB.

    **Example 2**

    Consumer, Priority Level, Permits
    C1, 0, 2
    C2, 0, 1
    C3, 0, 1
    C4, 1, 2
    C5, 1, 1

    The order in which the broker dispatches messages to these consumers is: C1, C2, C3, C1, C4, C5, C4.|0
-ConsumerCryptoFailureAction|`cryptoFailureAction`|The action the consumer takes when it receives a message that cannot be decrypted.

  1. **FAIL**: this is the default option; messages fail until crypto succeeds.

  2. **DISCARD**: silently acknowledge the message and do not deliver it to the application.

  3. **CONSUME**: deliver encrypted messages to the application. It is the application's responsibility to decrypt the message.

    Note that decompression of the message fails because the payload is still encrypted.

    If messages contain batch messages, the client is not able to retrieve individual messages from the batch.

    A delivered encrypted message contains an {@link EncryptionContext} with the encryption and compression information that the application can use to decrypt the consumed message payload.
  |ConsumerCryptoFailureAction.FAIL
-SortedMap|`properties`|A name or value property of this consumer.

    `properties` is application-defined metadata attached to a consumer.

    When you get topic stats, this metadata is associated with the consumer stats for easier identification.|new TreeMap()
-boolean|`readCompacted`|If `readCompacted` is enabled, a consumer reads messages from a compacted topic rather than reading the full message backlog of a topic.

    A consumer only sees the latest value for each key in the compacted topic, up until the point in the topic message backlog that has been compacted. Beyond that point, messages are sent as normal.

    `readCompacted` can only be enabled on subscriptions to persistent topics that have a single active consumer (such as failover or exclusive subscriptions).

    Attempting to enable it on subscriptions to non-persistent topics or on shared subscriptions leads to the subscription call throwing a `PulsarClientException`.|false
-SubscriptionInitialPosition|`subscriptionInitialPosition`|Initial position at which to set the cursor when subscribing to a topic for the first time.|SubscriptionInitialPosition.Latest
-int|`patternAutoDiscoveryPeriod`|Topic auto-discovery period when using a pattern for the topic's consumer.

    The default and minimum value is 1 minute.|1 -RegexSubscriptionMode|`regexSubscriptionMode`|When subscribing to a topic using a regular expression, you can pick a certain type of topics.

  1. **PersistentOnly**: only subscribe to persistent topics.

  2. **NonPersistentOnly**: only subscribe to non-persistent topics.

  3. **AllTopics**: subscribe to both persistent and non-persistent topics.
  |RegexSubscriptionMode.PersistentOnly
-DeadLetterPolicy|`deadLetterPolicy`|Dead letter policy for consumers.

    By default, some messages may be redelivered many times, possibly without ever stopping.

    By using the dead letter mechanism, messages have the max redelivery count. **When exceeding the maximum number of redeliveries, messages are sent to the Dead Letter Topic and acknowledged automatically**.

    You can enable the dead letter mechanism by setting `deadLetterPolicy`.

    **Example**

    client.newConsumer()
    .deadLetterPolicy(DeadLetterPolicy.builder().maxRedeliverCount(10).build())
    .subscribe();


    Default dead letter topic name is `{TopicName}-{Subscription}-DLQ`.

    To set a custom dead letter topic name:
    client.newConsumer()
    .deadLetterPolicy(DeadLetterPolicy.builder().maxRedeliverCount(10)
    .deadLetterTopic("your-topic-name").build())
    .subscribe();


    When you specify the dead letter policy but do not specify `ackTimeoutMillis`, the ack timeout is set to 30000 milliseconds.|None
-boolean|`autoUpdatePartitions`|If `autoUpdatePartitions` is enabled, a consumer automatically subscribes to new partitions as the number of partitions increases.

    **Note**: this is only for partitioned consumers.|true -boolean|`replicateSubscriptionState`|If `replicateSubscriptionState` is enabled, a subscription state is replicated to geo-replicated clusters.|false - -You can configure parameters if you do not want to use the default configuration. For a full list, see the Javadoc for the {@inject: javadoc:ConsumerBuilder:/client/org/apache/pulsar/client/api/ConsumerBuilder} class. - -The following is an example. - -```java - -Consumer consumer = client.newConsumer() - .topic("my-topic") - .subscriptionName("my-subscription") - .ackTimeout(10, TimeUnit.SECONDS) - .subscriptionType(SubscriptionType.Exclusive) - .subscribe(); - -``` - -### Async receive - -The `receive` method receives messages synchronously (the consumer process is blocked until a message is available). You can also use [async receive](concepts-messaging.md#receive-modes), which returns a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture) object immediately once a new message is available. - -The following is an example. - -```java - -CompletableFuture asyncMessage = consumer.receiveAsync(); - -``` - -Async receive operations return a {@inject: javadoc:Message:/client/org/apache/pulsar/client/api/Message} wrapped inside of a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture). - -### Batch receive - -Use `batchReceive` to receive multiple messages for each call. - -The following is an example. - -```java - -Messages messages = consumer.batchReceive(); -for (Object message : messages) { - // do something -} -consumer.acknowledge(messages) - -``` - -:::note - -Batch receive policy limits the number and bytes of messages in a single batch. You can specify a timeout to wait for enough messages. -The batch receive is completed if any of the following condition is met: enough number of messages, bytes of messages, wait timeout. - -```java - -Consumer consumer = client.newConsumer() -.topic("my-topic") -.subscriptionName("my-subscription") -.batchReceivePolicy(BatchReceivePolicy.builder() -.maxNumMessages(100) -.maxNumBytes(1024 * 1024) -.timeout(200, TimeUnit.MILLISECONDS) -.build()) -.subscribe(); - -``` - -The default batch receive policy is: - -```java - -BatchReceivePolicy.builder() -.maxNumMessage(-1) -.maxNumBytes(10 * 1024 * 1024) -.timeout(100, TimeUnit.MILLISECONDS) -.build(); - -``` - -::: - -### Multi-topic subscriptions - -In addition to subscribing a consumer to a single Pulsar topic, you can also subscribe to multiple topics simultaneously using [multi-topic subscriptions](concepts-messaging.md#multi-topic-subscriptions). To use multi-topic subscriptions you can supply either a regular expression (regex) or a `List` of topics. If you select topics via regex, all topics must be within the same Pulsar namespace. - -The followings are some examples. 
- -```java - -import org.apache.pulsar.client.api.Consumer; -import org.apache.pulsar.client.api.PulsarClient; - -import java.util.Arrays; -import java.util.List; -import java.util.regex.Pattern; - -ConsumerBuilder consumerBuilder = pulsarClient.newConsumer() - .subscriptionName(subscription); - -// Subscribe to all topics in a namespace -Pattern allTopicsInNamespace = Pattern.compile("public/default/.*"); -Consumer allTopicsConsumer = consumerBuilder - .topicsPattern(allTopicsInNamespace) - .subscribe(); - -// Subscribe to a subsets of topics in a namespace, based on regex -Pattern someTopicsInNamespace = Pattern.compile("public/default/foo.*"); -Consumer allTopicsConsumer = consumerBuilder - .topicsPattern(someTopicsInNamespace) - .subscribe(); - -``` - -In the above example, the consumer subscribes to the `persistent` topics that can match the topic name pattern. If you want the consumer subscribes to all `persistent` and `non-persistent` topics that can match the topic name pattern, set `subscriptionTopicsMode` to `RegexSubscriptionMode.AllTopics`. - -```java - -Pattern pattern = Pattern.compile("public/default/.*"); -pulsarClient.newConsumer() - .subscriptionName("my-sub") - .topicsPattern(pattern) - .subscriptionTopicsMode(RegexSubscriptionMode.AllTopics) - .subscribe(); - -``` - -:::note - -By default, the `subscriptionTopicsMode` of the consumer is `PersistentOnly`. Available options of `subscriptionTopicsMode` are `PersistentOnly`, `NonPersistentOnly`, and `AllTopics`. - -::: - -You can also subscribe to an explicit list of topics (across namespaces if you wish): - -```java - -List topics = Arrays.asList( - "topic-1", - "topic-2", - "topic-3" -); - -Consumer multiTopicConsumer = consumerBuilder - .topics(topics) - .subscribe(); - -// Alternatively: -Consumer multiTopicConsumer = consumerBuilder - .topic( - "topic-1", - "topic-2", - "topic-3" - ) - .subscribe(); - -``` - -You can also subscribe to multiple topics asynchronously using the `subscribeAsync` method rather than the synchronous `subscribe` method. The following is an example. - -```java - -Pattern allTopicsInNamespace = Pattern.compile("persistent://public/default.*"); -consumerBuilder - .topics(topics) - .subscribeAsync() - .thenAccept(this::receiveMessageFromConsumer); - -private void receiveMessageFromConsumer(Object consumer) { - ((Consumer)consumer).receiveAsync().thenAccept(message -> { - // Do something with the received message - receiveMessageFromConsumer(consumer); - }); -} - -``` - -### Subscription types - -Pulsar has various [subscription types](concepts-messaging#subscription-types) to match different scenarios. A topic can have multiple subscriptions with different subscription types. However, a subscription can only have one subscription type at a time. - -A subscription is identical with the subscription name; a subscription name can specify only one subscription type at a time. To change the subscription type, you should first stop all consumers of this subscription. - -Different subscription types have different message distribution modes. This section describes the differences of subscription types and how to use them. - -In order to better describe their differences, assuming you have a topic named "my-topic", and the producer has published 10 messages. 
- -```java - -Producer producer = client.newProducer(Schema.STRING) - .topic("my-topic") - .enableBatching(false) - .create(); -// 3 messages with "key-1", 3 messages with "key-2", 2 messages with "key-3" and 2 messages with "key-4" -producer.newMessage().key("key-1").value("message-1-1").send(); -producer.newMessage().key("key-1").value("message-1-2").send(); -producer.newMessage().key("key-1").value("message-1-3").send(); -producer.newMessage().key("key-2").value("message-2-1").send(); -producer.newMessage().key("key-2").value("message-2-2").send(); -producer.newMessage().key("key-2").value("message-2-3").send(); -producer.newMessage().key("key-3").value("message-3-1").send(); -producer.newMessage().key("key-3").value("message-3-2").send(); -producer.newMessage().key("key-4").value("message-4-1").send(); -producer.newMessage().key("key-4").value("message-4-2").send(); - -``` - -#### Exclusive - -Create a new consumer and subscribe with the `Exclusive` subscription type. - -```java - -Consumer consumer = client.newConsumer() - .topic("my-topic") - .subscriptionName("my-subscription") - .subscriptionType(SubscriptionType.Exclusive) - .subscribe() - -``` - -Only the first consumer is allowed to the subscription, other consumers receive an error. The first consumer receives all 10 messages, and the consuming order is the same as the producing order. - -:::note - -If topic is a partitioned topic, the first consumer subscribes to all partitioned topics, other consumers are not assigned with partitions and receive an error. - -::: - -#### Failover - -Create new consumers and subscribe with the`Failover` subscription type. - -```java - -Consumer consumer1 = client.newConsumer() - .topic("my-topic") - .subscriptionName("my-subscription") - .subscriptionType(SubscriptionType.Failover) - .subscribe() -Consumer consumer2 = client.newConsumer() - .topic("my-topic") - .subscriptionName("my-subscription") - .subscriptionType(SubscriptionType.Failover) - .subscribe() -//conumser1 is the active consumer, consumer2 is the standby consumer. -//consumer1 receives 5 messages and then crashes, consumer2 takes over as an active consumer. - -``` - -Multiple consumers can attach to the same subscription, yet only the first consumer is active, and others are standby. When the active consumer is disconnected, messages will be dispatched to one of standby consumers, and the standby consumer then becomes active consumer. - -If the first active consumer is disconnected after receiving 5 messages, the standby consumer becomes active consumer. Consumer1 will receive: - -``` - -("key-1", "message-1-1") -("key-1", "message-1-2") -("key-1", "message-1-3") -("key-2", "message-2-1") -("key-2", "message-2-2") - -``` - -consumer2 will receive: - -``` - -("key-2", "message-2-3") -("key-3", "message-3-1") -("key-3", "message-3-2") -("key-4", "message-4-1") -("key-4", "message-4-2") - -``` - -:::note - -If a topic is a partitioned topic, each partition has only one active consumer, messages of one partition are distributed to only one consumer, and messages of multiple partitions are distributed to multiple consumers. - -::: - -#### Shared - -Create new consumers and subscribe with `Shared` subscription type. 
- -```java - -Consumer consumer1 = client.newConsumer() - .topic("my-topic") - .subscriptionName("my-subscription") - .subscriptionType(SubscriptionType.Shared) - .subscribe() - -Consumer consumer2 = client.newConsumer() - .topic("my-topic") - .subscriptionName("my-subscription") - .subscriptionType(SubscriptionType.Shared) - .subscribe() -//Both consumer1 and consumer2 are active consumers. - -``` - -In shared subscription type, multiple consumers can attach to the same subscription and messages are delivered in a round robin distribution across consumers. - -If a broker dispatches only one message at a time, consumer1 receives the following information. - -``` - -("key-1", "message-1-1") -("key-1", "message-1-3") -("key-2", "message-2-2") -("key-3", "message-3-1") -("key-4", "message-4-1") - -``` - -consumer2 receives the following information. - -``` - -("key-1", "message-1-2") -("key-2", "message-2-1") -("key-2", "message-2-3") -("key-3", "message-3-2") -("key-4", "message-4-2") - -``` - -`Shared` subscription is different from `Exclusive` and `Failover` subscription types. `Shared` subscription has better flexibility, but cannot provide order guarantee. - -#### Key_shared - -This is a new subscription type since 2.4.0 release. Create new consumers and subscribe with `Key_Shared` subscription type. - -```java - -Consumer consumer1 = client.newConsumer() - .topic("my-topic") - .subscriptionName("my-subscription") - .subscriptionType(SubscriptionType.Key_Shared) - .subscribe() - -Consumer consumer2 = client.newConsumer() - .topic("my-topic") - .subscriptionName("my-subscription") - .subscriptionType(SubscriptionType.Key_Shared) - .subscribe() -//Both consumer1 and consumer2 are active consumers. - -``` - -`Key_Shared` subscription is like `Shared` subscription, all consumers can attach to the same subscription. But it is different from `Key_Shared` subscription, messages with the same key are delivered to only one consumer in order. The possible distribution of messages between different consumers (by default we do not know in advance which keys will be assigned to a consumer, but a key will only be assigned to a consumer at the same time). - -consumer1 receives the following information. - -``` - -("key-1", "message-1-1") -("key-1", "message-1-2") -("key-1", "message-1-3") -("key-3", "message-3-1") -("key-3", "message-3-2") - -``` - -consumer2 receives the following information. - -``` - -("key-2", "message-2-1") -("key-2", "message-2-2") -("key-2", "message-2-3") -("key-4", "message-4-1") -("key-4", "message-4-2") - -``` - -If batching is enabled at the producer side, messages with different keys are added to a batch by default. The broker will dispatch the batch to the consumer, so the default batch mechanism may break the Key_Shared subscription guaranteed message distribution semantics. The producer needs to use the `KeyBasedBatcher`. - -```java - -Producer producer = client.newProducer() - .topic("my-topic") - .batcherBuilder(BatcherBuilder.KEY_BASED) - .create(); - -``` - -Or the producer can disable batching. - -```java - -Producer producer = client.newProducer() - .topic("my-topic") - .enableBatching(false) - .create(); - -``` - -:::note - -If the message key is not specified, messages without key are dispatched to one consumer in order by default. - -::: - -## Reader - -With the [reader interface](concepts-clients.md#reader-interface), Pulsar clients can "manually position" themselves within a topic and reading all messages from a specified message onward. 
The Pulsar API for Java enables you to create {@inject: javadoc:Reader:/client/org/apache/pulsar/client/api/Reader} objects by specifying a topic and a {@inject: javadoc:MessageId:/client/org/apache/pulsar/client/api/MessageId}.

The following is an example.

```java

byte[] msgIdBytes = // Some message ID byte array
MessageId id = MessageId.fromByteArray(msgIdBytes);
Reader reader = pulsarClient.newReader()
    .topic(topic)
    .startMessageId(id)
    .create();

while (true) {
    Message message = reader.readNext();
    // Process message
}

```

In the example above, a `Reader` object is instantiated for a specific topic and message (by ID); the reader iterates over each message in the topic after the message identified by `msgIdBytes` (how that value is obtained depends on the application).

The code sample above shows pointing the `Reader` object to a specific message (by ID), but you can also use `MessageId.earliest` to point to the earliest available message on the topic or `MessageId.latest` to point to the most recent available message.

### Configure reader

When you create a reader, you can use the `loadConf` configuration. The following parameters are available in `loadConf`.

| Type | Name | Description | Default
|---|---|---|---
String|`topicName`|Topic name.|None
int|`receiverQueueSize`|Size of a consumer's receiver queue, that is, the number of messages that can be accumulated by a consumer before an application calls `receive`. A value higher than the default value increases consumer throughput, though at the expense of more memory utilization.|1000
ReaderListener&lt;T&gt;|`readerListener`|A listener that is called for each message received.|None
String|`readerName`|Reader name.|null
String|`subscriptionName`|Subscription name.|When there is a single topic, the default subscription name is `"reader-" + 10-digit UUID`. When there are multiple topics, the default subscription name is `"multiTopicsReader-" + 10-digit UUID`.
String|`subscriptionRolePrefix`|Prefix of subscription role.|null
CryptoKeyReader|`cryptoKeyReader`|Interface that abstracts the access to a key store.|null
ConsumerCryptoFailureAction|`cryptoFailureAction`|The action the consumer takes when it receives a message that cannot be decrypted.<br /><br />**FAIL**: the default option; fail messages until crypto succeeds.<br /><br />**DISCARD**: silently acknowledge the message and do not deliver it to the application.<br /><br />**CONSUME**: deliver encrypted messages to the application. It is the application's responsibility to decrypt the message. Message decompression fails, and if the message is a batch, the client is not able to retrieve individual messages from the batch. The delivered encrypted message contains {@link EncryptionContext}, which carries the encryption and compression information the application can use to decrypt the consumed message payload.|ConsumerCryptoFailureAction.FAIL
boolean|`readCompacted`|If `readCompacted` is enabled, a consumer reads messages from a compacted topic rather than the full message backlog of the topic. The consumer only sees the latest value for each key in the compacted topic, up until the point in the backlog where compaction stopped; beyond that point, messages are sent as normal.<br /><br />`readCompacted` can only be enabled on subscriptions to persistent topics that have a single active consumer (for example, failover or exclusive subscriptions). Attempting to enable it on subscriptions to non-persistent topics or on shared subscriptions causes the subscription call to throw a `PulsarClientException`.|false
boolean|`resetIncludeHead`|If set to true, the first message to be returned is the one specified by `messageId`.
    If set to false, the first message to be returned is the one next to the message specified by `messageId`.|false - -### Sticky key range reader - -In sticky key range reader, broker will only dispatch messages which hash of the message key contains by the specified key hash range. Multiple key hash ranges can be specified on a reader. - -The following is an example to create a sticky key range reader. - -```java - -pulsarClient.newReader() - .topic(topic) - .startMessageId(MessageId.earliest) - .keyHashRange(Range.of(0, 10000), Range.of(20001, 30000)) - .create(); - -``` - -Total hash range size is 65536, so the max end of the range should be less than or equal to 65535. - -## Schema - -In Pulsar, all message data consists of byte arrays "under the hood." [Message schemas](schema-get-started.md) enable you to use other types of data when constructing and handling messages (from simple types like strings to more complex, application-specific types.md). If you construct, say, a [producer](#producers) without specifying a schema, then the producer can only produce messages of type `byte[]`. The following is an example. - -```java - -Producer producer = client.newProducer() - .topic(topic) - .create(); - -``` - -The producer above is equivalent to a `Producer` (in fact, you should *always* explicitly specify the type). If you'd like to use a producer for a different type of data, you'll need to specify a **schema** that informs Pulsar which data type will be transmitted over the [topic](reference-terminology.md#topic). - -### AvroBaseStructSchema example - -Let's say that you have a `SensorReading` class that you'd like to transmit over a Pulsar topic: - -```java - -public class SensorReading { - public float temperature; - - public SensorReading(float temperature) { - this.temperature = temperature; - } - - // A no-arg constructor is required - public SensorReading() { - } - - public float getTemperature() { - return temperature; - } - - public void setTemperature(float temperature) { - this.temperature = temperature; - } -} - -``` - -You could then create a `Producer` (or `Consumer`) like this: - -```java - -Producer producer = client.newProducer(JSONSchema.of(SensorReading.class)) - .topic("sensor-readings") - .create(); - -``` - -The following schema formats are currently available for Java: - -* No schema or the byte array schema (which can be applied using `Schema.BYTES`): - - ```java - - Producer bytesProducer = client.newProducer(Schema.BYTES) - .topic("some-raw-bytes-topic") - .create(); - - ``` - - Or, equivalently: - - ```java - - Producer bytesProducer = client.newProducer() - .topic("some-raw-bytes-topic") - .create(); - - ``` - -* `String` for normal UTF-8-encoded string data. Apply the schema using `Schema.STRING`: - - ```java - - Producer stringProducer = client.newProducer(Schema.STRING) - .topic("some-string-topic") - .create(); - - ``` - -* Create JSON schemas for POJOs using `Schema.JSON`. The following is an example. - - ```java - - Producer pojoProducer = client.newProducer(Schema.JSON(MyPojo.class)) - .topic("some-pojo-topic") - .create(); - - ``` - -* Generate Protobuf schemas using `Schema.PROTOBUF`. The following example shows how to create the Protobuf schema and use it to instantiate a new producer: - - ```java - - Producer protobufProducer = client.newProducer(Schema.PROTOBUF(MyProtobuf.class)) - .topic("some-protobuf-topic") - .create(); - - ``` - -* Define Avro schemas with `Schema.AVRO`. 
The following code snippet demonstrates how to create and use Avro schema. - - ```java - - Producer avroProducer = client.newProducer(Schema.AVRO(MyAvro.class)) - .topic("some-avro-topic") - .create(); - - ``` - -### ProtobufNativeSchema example - -For example of ProtobufNativeSchema, see [`SchemaDefinition` in `Complex type`](schema-understand.md#complex-type). - -## Authentication - -Pulsar currently supports three authentication schemes: [TLS](security-tls-authentication.md), [Athenz](security-athenz.md), and [Oauth2](security-oauth2.md). You can use the Pulsar Java client with all of them. - -### TLS Authentication - -To use [TLS](security-tls-authentication.md), `enableTls` method is deprecated and you need to use "pulsar+ssl://" in serviceUrl to enable, point your Pulsar client to a TLS cert path, and provide paths to cert and key files. - -The following is an example. - -```java - -Map authParams = new HashMap(); -authParams.put("tlsCertFile", "/path/to/client-cert.pem"); -authParams.put("tlsKeyFile", "/path/to/client-key.pem"); - -Authentication tlsAuth = AuthenticationFactory - .create(AuthenticationTls.class.getName(), authParams); - -PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar+ssl://my-broker.com:6651") - .tlsTrustCertsFilePath("/path/to/cacert.pem") - .authentication(tlsAuth) - .build(); - -``` - -### Athenz - -To use [Athenz](security-athenz.md) as an authentication provider, you need to [use TLS](#tls-authentication) and provide values for four parameters in a hash: - -* `tenantDomain` -* `tenantService` -* `providerDomain` -* `privateKey` - -You can also set an optional `keyId`. The following is an example. - -```java - -Map authParams = new HashMap(); -authParams.put("tenantDomain", "shopping"); // Tenant domain name -authParams.put("tenantService", "some_app"); // Tenant service name -authParams.put("providerDomain", "pulsar"); // Provider domain name -authParams.put("privateKey", "file:///path/to/private.pem"); // Tenant private key path -authParams.put("keyId", "v1"); // Key id for the tenant private key (optional, default: "0") - -Authentication athenzAuth = AuthenticationFactory - .create(AuthenticationAthenz.class.getName(), authParams); - -PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar+ssl://my-broker.com:6651") - .tlsTrustCertsFilePath("/path/to/cacert.pem") - .authentication(athenzAuth) - .build(); - -``` - -> #### Supported pattern formats -> The `privateKey` parameter supports the following three pattern formats: -> * `file:///path/to/file` -> * `file:/path/to/file` -> * `data:application/x-pem-file;base64,` - -### Oauth2 - -The following example shows how to use [Oauth2](security-oauth2.md) as an authentication provider for the Pulsar Java client. - -You can use the factory method to configure authentication for Pulsar Java client. - -```java - -PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://broker.example.com:6650/") - .authentication( - AuthenticationFactoryOAuth2.clientCredentials(this.issuerUrl, this.credentialsUrl, this.audience)) - .build(); - -``` - -In addition, you can also use the encoded parameters to configure authentication for Pulsar Java client. 
- -```java - -Authentication auth = AuthenticationFactory - .create(AuthenticationOAuth2.class.getName(), "{"type":"client_credentials","privateKey":"...","issuerUrl":"...","audience":"..."}"); -PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://broker.example.com:6650/") - .authentication(auth) - .build(); - -``` - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/client-libraries-node.md b/site2/website/versioned_docs/version-2.8.2-deprecated/client-libraries-node.md deleted file mode 100644 index 8031d287c6f2ad..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/client-libraries-node.md +++ /dev/null @@ -1,643 +0,0 @@ ---- -id: client-libraries-node -title: The Pulsar Node.js client -sidebar_label: "Node.js" -original_id: client-libraries-node ---- - -The Pulsar Node.js client can be used to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Node.js. - -All the methods in [producers](#producers), [consumers](#consumers), and [readers](#readers) of a Node.js client are thread-safe. - -For 1.3.0 or later versions, [type definitions](https://github.com/apache/pulsar-client-node/blob/master/index.d.ts) used in TypeScript are available. - -## Installation - -You can install the [`pulsar-client`](https://www.npmjs.com/package/pulsar-client) library via [npm](https://www.npmjs.com/). - -### Requirements -Pulsar Node.js client library is based on the C++ client library. -Follow [these instructions](client-libraries-cpp.md#compilation) and install the Pulsar C++ client library. - -### Compatibility - -Compatibility between each version of the Node.js client and the C++ client is as follows: - -| Node.js client | C++ client | -| :------------- | :------------- | -| 1.0.0 | 2.3.0 or later | -| 1.1.0 | 2.4.0 or later | -| 1.2.0 | 2.5.0 or later | - -If an incompatible version of the C++ client is installed, you may fail to build or run this library. - -### Installation using npm - -Install the `pulsar-client` library via [npm](https://www.npmjs.com/): - -```shell - -$ npm install pulsar-client - -``` - -:::note - -Also, this library works only in Node.js 10.x or later because it uses the [`node-addon-api`](https://github.com/nodejs/node-addon-api) module to wrap the C++ library. - -::: - -## Connection URLs -To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL. - -Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme and have a default port of 6650. Here is an example for `localhost`: - -```http - -pulsar://localhost:6650 - -``` - -A URL for a production Pulsar cluster may look something like this: - -```http - -pulsar://pulsar.us-west.example.com:6650 - -``` - -If you are using [TLS encryption](security-tls-transport.md) or [TLS Authentication](security-tls-authentication.md), the URL looks like this: - -```http - -pulsar+ssl://pulsar.us-west.example.com:6651 - -``` - -## Create a client - -In order to interact with Pulsar, you first need a client object. You can create a client instance using a `new` operator and the `Client` method, passing in a client options object (more on configuration [below](#client-configuration)). 
- -Here is an example: - -```JavaScript - -const Pulsar = require('pulsar-client'); - -(async () => { - const client = new Pulsar.Client({ - serviceUrl: 'pulsar://localhost:6650', - }); - - await client.close(); -})(); - -``` - -### Client configuration - -The following configurable parameters are available for Pulsar clients: - -| Parameter | Description | Default | -| :-------- | :---------- | :------ | -| `serviceUrl` | The connection URL for the Pulsar cluster. See [above](#connection-urls) for more info. | | -| `authentication` | Configure the authentication provider. (default: no authentication). See [TLS Authentication](security-tls-authentication.md) for more info. | | -| `operationTimeoutSeconds` | The timeout for Node.js client operations (creating producers, subscribing to and unsubscribing from [topics](reference-terminology.md#topic)). Retries occur until this threshold is reached, at which point the operation fails. | 30 | -| `ioThreads` | The number of threads to use for handling connections to Pulsar [brokers](reference-terminology.md#broker). | 1 | -| `messageListenerThreads` | The number of threads used by message listeners ([consumers](#consumers) and [readers](#readers)). | 1 | -| `concurrentLookupRequest` | The number of concurrent lookup requests that can be sent on each broker connection. Setting a maximum helps to keep from overloading brokers. You should set values over the default of 50000 only if the client needs to produce and/or subscribe to thousands of Pulsar topics. | 50000 | -| `tlsTrustCertsFilePath` | The file path for the trusted TLS certificate. | | -| `tlsValidateHostname` | The boolean value of setup whether to enable TLS hostname verification. | `false` | -| `tlsAllowInsecureConnection` | The boolean value of setup whether the Pulsar client accepts untrusted TLS certificate from broker. | `false` | -| `statsIntervalInSeconds` | Interval between each stat info. Stats is activated with positive statsInterval. The value should be set to 1 second at least | 600 | -| `log` | A function that is used for logging. | `console.log` | - -## Producers - -Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Node.js producers using a producer configuration object. - -Here is an example: - -```JavaScript - -const producer = await client.createProducer({ - topic: 'my-topic', // or 'my-tenant/my-namespace/my-topic' to specify topic's tenant and namespace -}); - -await producer.send({ - data: Buffer.from("Hello, Pulsar"), -}); - -await producer.close(); - -``` - -> #### Promise operation -> When you create a new Pulsar producer, the operation returns `Promise` object and get producer instance or an error through executor function. -> In this example, using await operator instead of executor function. - -### Producer operations - -Pulsar Node.js producers have the following methods available: - -| Method | Description | Return type | -| :----- | :---------- | :---------- | -| `send(Object)` | Publishes a [message](#messages) to the producer's topic. When the message is successfully acknowledged by the Pulsar broker, or an error is thrown, the Promise object whose result is the message ID runs executor function. | `Promise` | -| `flush()` | Sends message from send queue to Pulsar broker. When the message is successfully acknowledged by the Pulsar broker, or an error is thrown, the Promise object runs executor function. | `Promise` | -| `close()` | Closes the producer and releases all resources allocated to it. 
Once `close()` is called, no more messages are accepted from the publisher. This method returns a Promise object. It runs the executor function when all pending publish requests are persisted by Pulsar. If an error is thrown, no pending writes are retried. | `Promise` |
| `getProducerName()` | Getter method of the producer name. | `string` |
| `getTopic()` | Getter method of the name of the topic. | `string` |

### Producer configuration

| Parameter | Description | Default |
| :-------- | :---------- | :------ |
| `topic` | The Pulsar [topic](reference-terminology.md#topic) to which the producer publishes messages. The topic format is `<topic-name>` or `<tenant>/<namespace>/<topic-name>`. For example, `sample/ns1/my-topic`. | |
| `producerName` | A name for the producer. If you do not explicitly assign a name, Pulsar automatically generates a globally unique name. If you choose to explicitly assign a name, it needs to be unique across *all* Pulsar clusters, otherwise the creation operation throws an error. | |
| `sendTimeoutMs` | When publishing a message to a topic, the producer waits for an acknowledgment from the responsible Pulsar [broker](reference-terminology.md#broker). If a message is not acknowledged within the threshold set by this parameter, an error is thrown. If you set `sendTimeoutMs` to -1, the timeout is set to infinity (and thus removed). Removing the send timeout is recommended when using Pulsar's [message de-duplication](cookbooks-deduplication.md) feature. | 30000 |
| `initialSequenceId` | The initial sequence ID of the message. When the producer sends a message, the sequence ID is added to the message and incremented for each message sent. | |
| `maxPendingMessages` | The maximum size of the queue holding pending messages (i.e. messages waiting to receive an acknowledgment from the [broker](reference-terminology.md#broker)). By default, when the queue is full, all calls to the `send` method fail *unless* `blockIfQueueFull` is set to `true`. | 1000 |
| `maxPendingMessagesAcrossPartitions` | The maximum size of the sum of the partitions' pending queues. | 50000 |
| `blockIfQueueFull` | If set to `true`, the producer's `send` method waits when the outgoing message queue is full rather than failing and throwing an error (the size of that queue is dictated by the `maxPendingMessages` parameter); if set to `false` (the default), `send` operations fail and throw an error when the queue is full. | `false` |
| `messageRoutingMode` | The message routing logic (for producers on [partitioned topics](concepts-messaging.md#partitioned-topics)). This logic is applied only when no key is set on messages. The available options are: round robin (`RoundRobinDistribution`), or publishing all messages to a single partition (`UseSinglePartition`, the default). | `UseSinglePartition` |
| `hashingScheme` | The hashing function that determines the partition on which a particular message is published (partitioned topics only). The available options are: `JavaStringHash` (the equivalent of `String.hashCode()` in Java), `Murmur3_32Hash` (applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function), or `BoostHash` (applies the hashing function from C++'s [Boost](https://www.boost.org/doc/libs/1_62_0/doc/html/hash.html) library). | `BoostHash` |
| `compressionType` | The message data compression type used by the producer. The available options are [`LZ4`](https://github.com/lz4/lz4), [`Zlib`](https://zlib.net/), [`ZSTD`](https://github.com/facebook/zstd/), and [`SNAPPY`](https://github.com/google/snappy/).
| Compression None | -| `batchingEnabled` | If set to `true`, the producer send message as batch. | `true` | -| `batchingMaxPublishDelayMs` | The maximum time of delay sending message in batching. | 10 | -| `batchingMaxMessages` | The maximum size of sending message in each time of batching. | 1000 | -| `properties` | The metadata of producer. | | - -### Producer example - -This example creates a Node.js producer for the `my-topic` topic and sends 10 messages to that topic: - -```JavaScript - -const Pulsar = require('pulsar-client'); - -(async () => { - // Create a client - const client = new Pulsar.Client({ - serviceUrl: 'pulsar://localhost:6650', - }); - - // Create a producer - const producer = await client.createProducer({ - topic: 'my-topic', - }); - - // Send messages - for (let i = 0; i < 10; i += 1) { - const msg = `my-message-${i}`; - producer.send({ - data: Buffer.from(msg), - }); - console.log(`Sent message: ${msg}`); - } - await producer.flush(); - - await producer.close(); - await client.close(); -})(); - -``` - -## Consumers - -Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on that topic/those topics. You can [configure](#consumer-configuration) Node.js consumers using a consumer configuration object. - -Here is an example: - -```JavaScript - -const consumer = await client.subscribe({ - topic: 'my-topic', - subscription: 'my-subscription', -}); - -const msg = await consumer.receive(); -console.log(msg.getData().toString()); -consumer.acknowledge(msg); - -await consumer.close(); - -``` - -> #### Promise operation -> When you create a new Pulsar consumer, the operation returns `Promise` object and get consumer instance or an error through executor function. -> In this example, using await operator instead of executor function. - -### Consumer operations - -Pulsar Node.js consumers have the following methods available: - -| Method | Description | Return type | -| :----- | :---------- | :---------- | -| `receive()` | Receives a single message from the topic. When the message is available, the Promise object run executor function and get message object. | `Promise` | -| `receive(Number)` | Receives a single message from the topic with specific timeout in milliseconds. | `Promise` | -| `acknowledge(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message object. | `void` | -| `acknowledgeId(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID object. | `void` | -| `acknowledgeCumulative(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message. The `acknowledgeCumulative` method returns void, and send the ack to the broker asynchronously. After that, the messages are *not* redelivered to the consumer. Cumulative acking can not be used with a [shared](concepts-messaging.md#shared) subscription type. | `void` | -| `acknowledgeCumulativeId(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message ID. | `void` | -| `negativeAcknowledge(Message)`| [Negatively acknowledges](reference-terminology.md#negative-acknowledgment-nack) a message to the Pulsar broker by message object. 
| `void` |
| `negativeAcknowledgeId(MessageId)` | [Negatively acknowledges](reference-terminology.md#negative-acknowledgment-nack) a message to the Pulsar broker by message ID object. | `void` |
| `close()` | Closes the consumer, disabling its ability to receive messages from the broker. | `Promise` |
| `unsubscribe()` | Unsubscribes the subscription. | `Promise` |

### Consumer configuration

| Parameter | Description | Default |
| :-------- | :---------- | :------ |
| `topic` | The Pulsar topic on which the consumer establishes a subscription and listens for messages. | |
| `topics` | The array of topics. | |
| `topicsPattern` | The regular expression for topics. | |
| `subscription` | The subscription name for this consumer. | |
| `subscriptionType` | Available options are `Exclusive`, `Shared`, `Key_Shared`, and `Failover`. | `Exclusive` |
| `subscriptionInitialPosition` | Initial position at which to set the cursor when subscribing to a topic for the first time. | `SubscriptionInitialPosition.Latest` |
| `ackTimeoutMs` | Acknowledge timeout in milliseconds. | 0 |
| `nAckRedeliverTimeoutMs` | Delay to wait before redelivering messages that failed to be processed. | 60000 |
| `receiverQueueSize` | Sets the size of the consumer's receiver queue, i.e. the number of messages that can be accumulated by the consumer before the application calls `receive`. A value higher than the default of 1000 could increase consumer throughput, though at the expense of more memory utilization. | 1000 |
| `receiverQueueSizeAcrossPartitions` | Set the max total receiver queue size across partitions. This setting is used to reduce the receiver queue size for individual partitions if the total exceeds this value. | 50000 |
| `consumerName` | The name of the consumer. Currently (v2.4.1), [failover](concepts-messaging.md#failover) mode uses the consumer name for ordering. | |
| `properties` | The metadata of the consumer. | |
| `listener`| A listener that is called for a message received. | |
| `readCompacted`| If `readCompacted` is enabled, a consumer reads messages from a compacted topic rather than the full message backlog of the topic. The consumer only sees the latest value for each key in the compacted topic, up until the point in the backlog where compaction stopped; beyond that point, messages are sent as normal.<br /><br />`readCompacted` can only be enabled on subscriptions to persistent topics that have a single active consumer (for example, failover or exclusive subscriptions).<br /><br />
    Attempting to enable it on subscriptions to non-persistent topics or on shared subscriptions leads to a subscription call throwing a `PulsarClientException`. | false | - -### Consumer example - -This example creates a Node.js consumer with the `my-subscription` subscription on the `my-topic` topic, receives messages, prints the content that arrive, and acknowledges each message to the Pulsar broker for 10 times: - -```JavaScript - -const Pulsar = require('pulsar-client'); - -(async () => { - // Create a client - const client = new Pulsar.Client({ - serviceUrl: 'pulsar://localhost:6650', - }); - - // Create a consumer - const consumer = await client.subscribe({ - topic: 'my-topic', - subscription: 'my-subscription', - subscriptionType: 'Exclusive', - }); - - // Receive messages - for (let i = 0; i < 10; i += 1) { - const msg = await consumer.receive(); - console.log(msg.getData().toString()); - consumer.acknowledge(msg); - } - - await consumer.close(); - await client.close(); -})(); - -``` - -Instead a consumer can be created with `listener` to process messages. - -```JavaScript - -// Create a consumer -const consumer = await client.subscribe({ - topic: 'my-topic', - subscription: 'my-subscription', - subscriptionType: 'Exclusive', - listener: (msg, msgConsumer) => { - console.log(msg.getData().toString()); - msgConsumer.acknowledge(msg); - }, -}); - -``` - -## Readers - -Pulsar readers process messages from Pulsar topics. Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recently unacked message). You can [configure](#reader-configuration) Node.js readers using a reader configuration object. - -Here is an example: - -```JavaScript - -const reader = await client.createReader({ - topic: 'my-topic', - startMessageId: Pulsar.MessageId.earliest(), -}); - -const msg = await reader.readNext(); -console.log(msg.getData().toString()); - -await reader.close(); - -``` - -### Reader operations - -Pulsar Node.js readers have the following methods available: - -| Method | Description | Return type | -| :----- | :---------- | :---------- | -| `readNext()` | Receives the next message on the topic (analogous to the `receive` method for [consumers](#consumer-operations)). When the message is available, the Promise object run executor function and get message object. | `Promise` | -| `readNext(Number)` | Receives a single message from the topic with specific timeout in milliseconds. | `Promise` | -| `hasNext()` | Return whether the broker has next message in target topic. | `Boolean` | -| `close()` | Closes the reader, disabling its ability to receive messages from the broker. | `Promise` | - -### Reader configuration - -| Parameter | Description | Default | -| :-------- | :---------- | :------ | -| `topic` | The Pulsar [topic](reference-terminology.md#topic) on which the reader establishes a subscription and listen for messages. | | -| `startMessageId` | The initial reader position, i.e. the message at which the reader begins processing messages. The options are `Pulsar.MessageId.earliest` (the earliest available message on the topic), `Pulsar.MessageId.latest` (the latest available message on the topic), or a message ID object for a position that is not earliest or latest. | | -| `receiverQueueSize` | Sets the size of the reader's receiver queue, i.e. 
the number of messages that can be accumulated by the reader before the application calls `readNext`. A value higher than the default of 1000 could increase reader throughput, though at the expense of more memory utilization. | 1000 |
| `readerName` | The name of the reader. | |
| `subscriptionRolePrefix` | The subscription role prefix. | |
| `readCompacted` | If `readCompacted` is enabled, the reader reads messages from a compacted topic rather than the full message backlog of the topic. The reader only sees the latest value for each key in the compacted topic, up until the point in the backlog where compaction stopped; beyond that point, messages are sent as normal.<br /><br />`readCompacted` can only be enabled on subscriptions to persistent topics that have a single active consumer (for example, failover or exclusive subscriptions).<br /><br />
    Attempting to enable it on subscriptions to non-persistent topics or on shared subscriptions leads to a subscription call throwing a `PulsarClientException`. | `false` | - - -### Reader example - -This example creates a Node.js reader with the `my-topic` topic, reads messages, and prints the content that arrive for 10 times: - -```JavaScript - -const Pulsar = require('pulsar-client'); - -(async () => { - // Create a client - const client = new Pulsar.Client({ - serviceUrl: 'pulsar://localhost:6650', - operationTimeoutSeconds: 30, - }); - - // Create a reader - const reader = await client.createReader({ - topic: 'my-topic', - startMessageId: Pulsar.MessageId.earliest(), - }); - - // read messages - for (let i = 0; i < 10; i += 1) { - const msg = await reader.readNext(); - console.log(msg.getData().toString()); - } - - await reader.close(); - await client.close(); -})(); - -``` - -## Messages - -In Pulsar Node.js client, you have to construct producer message object for producer. - -Here is an example message: - -```JavaScript - -const msg = { - data: Buffer.from('Hello, Pulsar'), - partitionKey: 'key1', - properties: { - 'foo': 'bar', - }, - eventTimestamp: Date.now(), - replicationClusters: [ - 'cluster1', - 'cluster2', - ], -} - -await producer.send(msg); - -``` - -The following keys are available for producer message objects: - -| Parameter | Description | -| :-------- | :---------- | -| `data` | The actual data payload of the message. | -| `properties` | A Object for any application-specific metadata attached to the message. | -| `eventTimestamp` | The timestamp associated with the message. | -| `sequenceId` | The sequence ID of the message. | -| `partitionKey` | The optional key associated with the message (particularly useful for things like topic compaction). | -| `replicationClusters` | The clusters to which this message is replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default. | -| `deliverAt` | The absolute timestamp at or after which the message is delivered. | | -| `deliverAfter` | The relative delay after which the message is delivered. | | - -### Message object operations - -In Pulsar Node.js client, you can receive (or read) message object as consumer (or reader). - -The message object have the following methods available: - -| Method | Description | Return type | -| :----- | :---------- | :---------- | -| `getTopicName()` | Getter method of topic name. | `String` | -| `getProperties()` | Getter method of properties. | `Array` | -| `getData()` | Getter method of message data. | `Buffer` | -| `getMessageId()` | Getter method of [message id object](#message-id-object-operations). | `Object` | -| `getPublishTimestamp()` | Getter method of publish timestamp. | `Number` | -| `getEventTimestamp()` | Getter method of event timestamp. | `Number` | -| `getRedeliveryCount()` | Getter method of redelivery count. | `Number` | -| `getPartitionKey()` | Getter method of partition key. | `String` | - -### Message ID object operations - -In Pulsar Node.js client, you can get message id object from message object. - -The message id object have the following methods available: - -| Method | Description | Return type | -| :----- | :---------- | :---------- | -| `serialize()` | Serialize the message id into a Buffer for storing. | `Buffer` | -| `toString()` | Get message id as String. | `String` | - -The client has static method of message id object. 
You can access it as `Pulsar.MessageId.someStaticMethod` too. - -The following static methods are available for the message id object: - -| Method | Description | Return type | -| :----- | :---------- | :---------- | -| `earliest()` | MessageId representing the earliest, or oldest available message stored in the topic. | `Object` | -| `latest()` | MessageId representing the latest, or last published message in the topic. | `Object` | -| `deserialize(Buffer)` | Deserialize a message id object from a Buffer. | `Object` | - -## End-to-end encryption - -[End-to-end encryption](https://pulsar.apache.org/docs/en/next/cookbooks-encryption/#docsNav) allows applications to encrypt messages at producers and decrypt at consumers. - -### Configuration - -If you want to use the end-to-end encryption feature in the Node.js client, you need to configure `publicKeyPath` and `privateKeyPath` for both producer and consumer. - -``` - -publicKeyPath: "./public.pem" -privateKeyPath: "./private.pem" - -``` - -### Tutorial - -This section provides step-by-step instructions on how to use the end-to-end encryption feature in the Node.js client. - -**Prerequisite** - -- Pulsar C++ client 2.7.1 or later - -**Step** - -1. Create both public and private key pairs. - - **Input** - - ```shell - - openssl genrsa -out private.pem 2048 - openssl rsa -in private.pem -pubout -out public.pem - - ``` - -2. Create a producer to send encrypted messages. - - **Input** - - ```nodejs - - const Pulsar = require('pulsar-client'); - - (async () => { - // Create a client - const client = new Pulsar.Client({ - serviceUrl: 'pulsar://localhost:6650', - operationTimeoutSeconds: 30, - }); - - // Create a producer - const producer = await client.createProducer({ - topic: 'persistent://public/default/my-topic', - sendTimeoutMs: 30000, - batchingEnabled: true, - publicKeyPath: "./public.pem", - privateKeyPath: "./private.pem", - encryptionKey: "encryption-key" - }); - - console.log(producer.ProducerConfig) - // Send messages - for (let i = 0; i < 10; i += 1) { - const msg = `my-message-${i}`; - producer.send({ - data: Buffer.from(msg), - }); - console.log(`Sent message: ${msg}`); - } - await producer.flush(); - - await producer.close(); - await client.close(); - })(); - - ``` - -3. Create a consumer to receive encrypted messages. - - **Input** - - ```nodejs - - const Pulsar = require('pulsar-client'); - - (async () => { - // Create a client - const client = new Pulsar.Client({ - serviceUrl: 'pulsar://172.25.0.3:6650', - operationTimeoutSeconds: 30 - }); - - // Create a consumer - const consumer = await client.subscribe({ - topic: 'persistent://public/default/my-topic', - subscription: 'sub1', - subscriptionType: 'Shared', - ackTimeoutMs: 10000, - publicKeyPath: "./public.pem", - privateKeyPath: "./private.pem" - }); - - console.log(consumer) - // Receive messages - for (let i = 0; i < 10; i += 1) { - const msg = await consumer.receive(); - console.log(msg.getData().toString()); - consumer.acknowledge(msg); - } - - await consumer.close(); - await client.close(); - })(); - - ``` - -4. Run the consumer to receive encrypted messages. - - **Input** - - ```shell - - node consumer.js - - ``` - -5. In a new terminal tab, run the producer to produce encrypted messages. - - **Input** - - ```shell - - node producer.js - - ``` - - Now you can see the producer sends messages and the consumer receives messages successfully. - - **Output** - - This is from the producer side. 
- - ``` - - Sent message: my-message-0 - Sent message: my-message-1 - Sent message: my-message-2 - Sent message: my-message-3 - Sent message: my-message-4 - Sent message: my-message-5 - Sent message: my-message-6 - Sent message: my-message-7 - Sent message: my-message-8 - Sent message: my-message-9 - - ``` - - This is from the consumer side. - - ``` - - my-message-0 - my-message-1 - my-message-2 - my-message-3 - my-message-4 - my-message-5 - my-message-6 - my-message-7 - my-message-8 - my-message-9 - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/client-libraries-python.md b/site2/website/versioned_docs/version-2.8.2-deprecated/client-libraries-python.md deleted file mode 100644 index f30cf55387d92e..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/client-libraries-python.md +++ /dev/null @@ -1,456 +0,0 @@ ---- -id: client-libraries-python -title: Pulsar Python client -sidebar_label: "Python" -original_id: client-libraries-python ---- - -Pulsar Python client library is a wrapper over the existing [C++ client library](client-libraries-cpp.md) and exposes all of the [same features](/api/cpp). You can find the code in the [`python` subdirectory](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp/python) of the C++ client code. - -All the methods in producer, consumer, and reader of a Python client are thread-safe. - -[pdoc](https://github.com/BurntSushi/pdoc)-generated API docs for the Python client are available [here](/api/python). - -## Install - -You can install the [`pulsar-client`](https://pypi.python.org/pypi/pulsar-client) library either via [PyPi](https://pypi.python.org/pypi), using [pip](#installation-using-pip), or by building the library from source. - -### Install using pip - -To install the `pulsar-client` library as a pre-built package using the [pip](https://pip.pypa.io/en/stable/) package manager: - -```shell - -$ pip install pulsar-client==@pulsar:version_number@ - -``` - -### Optional dependencies - -To support aspects like pulsar functions or Avro serialization, additional optional components can be installed alongside the `pulsar-client` library - -```shell - -# avro serialization -$ pip install pulsar-client[avro]=='@pulsar:version_number@' - -# functions runtime -$ pip install pulsar-client[functions]=='@pulsar:version_number@' - -# all optional components -$ pip install pulsar-client[all]=='@pulsar:version_number@' - -``` - -Installation via PyPi is available for the following Python versions: - -Platform | Supported Python versions -:--------|:------------------------- -MacOS
10.13 (High Sierra), 10.14 (Mojave)
    | 2.7, 3.7 -Linux | 2.7, 3.4, 3.5, 3.6, 3.7, 3.8 - -### Install from source - -To install the `pulsar-client` library by building from source, follow [instructions](client-libraries-cpp.md#compilation) and compile the Pulsar C++ client library. That builds the Python binding for the library. - -To install the built Python bindings: - -```shell - -$ git clone https://github.com/apache/pulsar -$ cd pulsar/pulsar-client-cpp/python -$ sudo python setup.py install - -``` - -## API Reference - -The complete Python API reference is available at [api/python](/api/python). - -## Examples - -You can find a variety of Python code examples for the `pulsar-client` library. - -### Producer example - -The following example creates a Python producer for the `my-topic` topic and sends 10 messages on that topic: - -```python - -import pulsar - -client = pulsar.Client('pulsar://localhost:6650') - -producer = client.create_producer('my-topic') - -for i in range(10): - producer.send(('Hello-%d' % i).encode('utf-8')) - -client.close() - -``` - -### Consumer example - -The following example creates a consumer with the `my-subscription` subscription name on the `my-topic` topic, receives incoming messages, prints the content and ID of messages that arrive, and acknowledges each message to the Pulsar broker. - -```python - -import pulsar - -client = pulsar.Client('pulsar://localhost:6650') - -consumer = client.subscribe('my-topic', 'my-subscription') - -while True: - msg = consumer.receive() - try: - print("Received message '{}' id='{}'".format(msg.data(), msg.message_id())) - # Acknowledge successful processing of the message - consumer.acknowledge(msg) - except: - # Message failed to be processed - consumer.negative_acknowledge(msg) - -client.close() - -``` - -This example shows how to configure negative acknowledgement. - -```python - -from pulsar import Client, schema -client = Client('pulsar://localhost:6650') -consumer = client.subscribe('negative_acks','test',schema=schema.StringSchema()) -producer = client.create_producer('negative_acks',schema=schema.StringSchema()) -for i in range(10): - print('send msg "hello-%d"' % i) - producer.send_async('hello-%d' % i, callback=None) -producer.flush() -for i in range(10): - msg = consumer.receive() - consumer.negative_acknowledge(msg) - print('receive and nack msg "%s"' % msg.data()) -for i in range(10): - msg = consumer.receive() - consumer.acknowledge(msg) - print('receive and ack msg "%s"' % msg.data()) -try: - # No more messages expected - msg = consumer.receive(100) -except: - print("no more msg") - pass - -``` - -### Reader interface example - -You can use the Pulsar Python API to use the Pulsar [reader interface](concepts-clients.md#reader-interface). Here's an example: - -```python - -# MessageId taken from a previously fetched message -msg_id = msg.message_id() - -reader = client.create_reader('my-topic', msg_id) - -while True: - msg = reader.read_next() - print("Received message '{}' id='{}'".format(msg.data(), msg.message_id())) - # No acknowledgment - -``` - -### Multi-topic subscriptions - -In addition to subscribing a consumer to a single Pulsar topic, you can also subscribe to multiple topics simultaneously. To use multi-topic subscriptions, you can supply a regular expression (regex) or a `List` of topics. If you select topics via regex, all topics must be within the same Pulsar namespace. - -The following is an example. 
- -```python - -import re -consumer = client.subscribe(re.compile('persistent://public/default/topic-*'), 'my-subscription') -while True: - msg = consumer.receive() - try: - print("Received message '{}' id='{}'".format(msg.data(), msg.message_id())) - # Acknowledge successful processing of the message - consumer.acknowledge(msg) - except: - # Message failed to be processed - consumer.negative_acknowledge(msg) -client.close() - -``` - -## Schema - -### Declare and validate schema - -You can declare a schema by passing a class that inherits -from `pulsar.schema.Record` and defines the fields as -class variables. For example: - -```python - -from pulsar.schema import * - -class Example(Record): - a = String() - b = Integer() - c = Boolean() - -``` - -With this simple schema definition, you can create producers, consumers and readers instances that refer to that. - -```python - -producer = client.create_producer( - topic='my-topic', - schema=AvroSchema(Example) ) - -producer.send(Example(a='Hello', b=1)) - -``` - -After creating the producer, the Pulsar broker validates that the existing topic schema is indeed of "Avro" type and that the format is compatible with the schema definition of the `Example` class. - -If there is a mismatch, an exception occurs in the producer creation. - -Once a producer is created with a certain schema definition, -it will only accept objects that are instances of the declared -schema class. - -Similarly, for a consumer/reader, the consumer will return an -object, instance of the schema record class, rather than the raw -bytes: - -```python - -consumer = client.subscribe( - topic='my-topic', - subscription_name='my-subscription', - schema=AvroSchema(Example) ) - -while True: - msg = consumer.receive() - ex = msg.value() - try: - print("Received message a={} b={} c={}".format(ex.a, ex.b, ex.c)) - # Acknowledge successful processing of the message - consumer.acknowledge(msg) - except: - # Message failed to be processed - consumer.negative_acknowledge(msg) - -``` - -### Supported schema types - -You can use different builtin schema types in Pulsar. All the definitions are in the `pulsar.schema` package. - -| Schema | Notes | -| ------ | ----- | -| `BytesSchema` | Get the raw payload as a `bytes` object. No serialization/deserialization are performed. This is the default schema mode | -| `StringSchema` | Encode/decode payload as a UTF-8 string. Uses `str` objects | -| `JsonSchema` | Require record definition. Serializes the record into standard JSON payload | -| `AvroSchema` | Require record definition. Serializes in AVRO format | - -### Schema definition reference - -The schema definition is done through a class that inherits from `pulsar.schema.Record`. - -This class has a number of fields which can be of either -`pulsar.schema.Field` type or another nested `Record`. All the -fields are specified in the `pulsar.schema` package. The fields -are matching the AVRO fields types. - -| Field Type | Python Type | Notes | -| ---------- | ----------- | ----- | -| `Boolean` | `bool` | | -| `Integer` | `int` | | -| `Long` | `int` | | -| `Float` | `float` | | -| `Double` | `float` | | -| `Bytes` | `bytes` | | -| `String` | `str` | | -| `Array` | `list` | Need to specify record type for items. | -| `Map` | `dict` | Key is always `String`. Need to specify value type. | - -Additionally, any Python `Enum` type can be used as a valid field type. - -#### Fields parameters - -When adding a field, you can use these parameters in the constructor. 
- -| Argument | Default | Notes | -| ---------- | --------| ----- | -| `default` | `None` | Set a default value for the field. Eg: `a = Integer(default=5)` | -| `required` | `False` | Mark the field as "required". It is set in the schema accordingly. | - -#### Schema definition examples - -##### Simple definition - -```python - -class Example(Record): - a = String() - b = Integer() - c = Array(String()) - i = Map(String()) - -``` - -##### Using enums - -```python - -from enum import Enum - -class Color(Enum): - red = 1 - green = 2 - blue = 3 - -class Example(Record): - name = String() - color = Color - -``` - -##### Complex types - -```python - -class MySubRecord(Record): - x = Integer() - y = Long() - z = String() - -class Example(Record): - a = String() - sub = MySubRecord() - -``` - -## End-to-end encryption - -[End-to-end encryption](https://pulsar.apache.org/docs/en/next/cookbooks-encryption/#docsNav) allows applications to encrypt messages at producers and decrypt messages at consumers. - -### Configuration - -To use the end-to-end encryption feature in the Python client, you need to configure `publicKeyPath` and `privateKeyPath` for both producer and consumer. - -``` - -publicKeyPath: "./public.pem" -privateKeyPath: "./private.pem" - -``` - -### Tutorial - -This section provides step-by-step instructions on how to use the end-to-end encryption feature in the Python client. - -**Prerequisite** - -- Pulsar Python client 2.7.1 or later - -**Step** - -1. Create both public and private key pairs. - - **Input** - - ```shell - - openssl genrsa -out private.pem 2048 - openssl rsa -in private.pem -pubout -out public.pem - - ``` - -2. Create a producer to send encrypted messages. - - **Input** - - ```python - - import pulsar - - publicKeyPath = "./public.pem" - privateKeyPath = "./private.pem" - crypto_key_reader = pulsar.CryptoKeyReader(publicKeyPath, privateKeyPath) - client = pulsar.Client('pulsar://localhost:6650') - producer = client.create_producer(topic='encryption', encryption_key='encryption', crypto_key_reader=crypto_key_reader) - producer.send('encryption message'.encode('utf8')) - print('sent message') - producer.close() - client.close() - - ``` - -3. Create a consumer to receive encrypted messages. - - **Input** - - ```python - - import pulsar - - publicKeyPath = "./public.pem" - privateKeyPath = "./private.pem" - crypto_key_reader = pulsar.CryptoKeyReader(publicKeyPath, privateKeyPath) - client = pulsar.Client('pulsar://localhost:6650') - consumer = client.subscribe(topic='encryption', subscription_name='encryption-sub', crypto_key_reader=crypto_key_reader) - msg = consumer.receive() - print("Received msg '{}' id = '{}'".format(msg.data(), msg.message_id())) - consumer.close() - client.close() - - ``` - -4. Run the consumer to receive encrypted messages. - - **Input** - - ```shell - - python consumer.py - - ``` - -5. In a new terminal tab, run the producer to produce encrypted messages. - - **Input** - - ```shell - - python producer.py - - ``` - - Now you can see the producer sends messages and the consumer receives messages successfully. - - **Output** - - This is from the producer side. - - ``` - - sent message - - ``` - - This is from the consumer side. 
- - ``` - - Received msg 'b'encryption message'' id = '(0,0,-1,-1)' - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/client-libraries-websocket.md b/site2/website/versioned_docs/version-2.8.2-deprecated/client-libraries-websocket.md deleted file mode 100644 index e2da41f0461b40..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/client-libraries-websocket.md +++ /dev/null @@ -1,657 +0,0 @@ ---- -id: client-libraries-websocket -title: Pulsar WebSocket API -sidebar_label: "WebSocket" -original_id: client-libraries-websocket ---- - -Pulsar [WebSocket](https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API) API provides a simple way to interact with Pulsar using languages that do not have an official [client library](getting-started-clients.md). Through WebSocket, you can publish and consume messages and use features available on the [Client Features Matrix](https://github.com/apache/pulsar/wiki/Client-Features-Matrix) page. - - -> You can use Pulsar WebSocket API with any WebSocket client library. See examples for Python and Node.js [below](#client-examples). - -## Running the WebSocket service - -The standalone variant of Pulsar that we recommend using for [local development](getting-started-standalone.md) already has the WebSocket service enabled. - -In non-standalone mode, there are two ways to deploy the WebSocket service: - -* [embedded](#embedded-with-a-pulsar-broker) with a Pulsar broker -* as a [separate component](#as-a-separate-component) - -### Embedded with a Pulsar broker - -In this mode, the WebSocket service will run within the same HTTP service that's already running in the broker. To enable this mode, set the [`webSocketServiceEnabled`](reference-configuration.md#broker-webSocketServiceEnabled) parameter in the [`conf/broker.conf`](reference-configuration.md#broker) configuration file in your installation. - -```properties - -webSocketServiceEnabled=true - -``` - -### As a separate component - -In this mode, the WebSocket service will be run from a Pulsar [broker](reference-terminology.md#broker) as a separate service. Configuration for this mode is handled in the [`conf/websocket.conf`](reference-configuration.md#websocket) configuration file. You'll need to set *at least* the following parameters: - -* [`configurationStoreServers`](reference-configuration.md#websocket-configurationStoreServers) -* [`webServicePort`](reference-configuration.md#websocket-webServicePort) -* [`clusterName`](reference-configuration.md#websocket-clusterName) - -Here's an example: - -```properties - -configurationStoreServers=zk1:2181,zk2:2181,zk3:2181 -webServicePort=8080 -clusterName=my-cluster - -``` - -### Security settings - -To enable TLS encryption on WebSocket service: - -```properties - -tlsEnabled=true -tlsAllowInsecureConnection=false -tlsCertificateFilePath=/path/to/client-websocket.cert.pem -tlsKeyFilePath=/path/to/client-websocket.key-pk8.pem -tlsTrustCertsFilePath=/path/to/ca.cert.pem - -``` - -### Starting the broker - -When the configuration is set, you can start the service using the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) tool: - -```shell - -$ bin/pulsar-daemon start websocket - -``` - -## API Reference - -Pulsar's WebSocket API offers three endpoints for [producing](#producer-endpoint) messages, [consuming](#consumer-endpoint) messages and [reading](#reader-endpoint) messages. - -All exchanges via the WebSocket API use JSON. 
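Since every frame on the wire is JSON, binary message payloads travel as Base64-encoded strings. The following is a small, hypothetical Python sketch (standard library only) of how a publish frame for the producer endpoint could be assembled; the exact fields for each endpoint are specified in the sections below.

```python

import base64
import json

# Hypothetical sketch: assemble a publish frame for the producer endpoint.
# The payload must be Base64-encoded because all exchanges are JSON.
frame = json.dumps({
    "payload": base64.b64encode(b"Hello World").decode("ascii"),
    "properties": {"key1": "value1"},
    "context": "1",  # application-defined request identifier
})

print(frame)
# {"payload": "SGVsbG8gV29ybGQ=", "properties": {"key1": "value1"}, "context": "1"}

```
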
- -### Authentication - -#### Browser javascript WebSocket client - -Use the query param `token` transport the authentication token. - -```http - -ws://broker-service-url:8080/path?token=token - -``` - -### Producer endpoint - -The producer endpoint requires you to specify a tenant, namespace, and topic in the URL: - -```http - -ws://broker-service-url:8080/ws/v2/producer/persistent/:tenant/:namespace/:topic - -``` - -##### Query param - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`sendTimeoutMillis` | long | no | Send timeout (default: 30 secs) -`batchingEnabled` | boolean | no | Enable batching of messages (default: false) -`batchingMaxMessages` | int | no | Maximum number of messages permitted in a batch (default: 1000) -`maxPendingMessages` | int | no | Set the max size of the internal-queue holding the messages (default: 1000) -`batchingMaxPublishDelay` | long | no | Time period within which the messages will be batched (default: 10ms) -`messageRoutingMode` | string | no | Message [routing mode](https://pulsar.apache.org/api/client/index.html?org/apache/pulsar/client/api/ProducerConfiguration.MessageRoutingMode.html) for the partitioned producer: `SinglePartition`, `RoundRobinPartition` -`compressionType` | string | no | Compression [type](https://pulsar.apache.org/api/client/index.html?org/apache/pulsar/client/api/CompressionType.html): `LZ4`, `ZLIB` -`producerName` | string | no | Specify the name for the producer. Pulsar will enforce only one producer with same name can be publishing on a topic -`initialSequenceId` | long | no | Set the baseline for the sequence ids for messages published by the producer. -`hashingScheme` | string | no | [Hashing function](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.HashingScheme.html) to use when publishing on a partitioned topic: `JavaStringHash`, `Murmur3_32Hash` -`token` | string | no | Authentication token, this is used for the browser javascript client - - -#### Publishing a message - -```json - -{ - "payload": "SGVsbG8gV29ybGQ=", - "properties": {"key1": "value1", "key2": "value2"}, - "context": "1" -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`payload` | string | yes | Base-64 encoded payload -`properties` | key-value pairs | no | Application-defined properties -`context` | string | no | Application-defined request identifier -`key` | string | no | For partitioned topics, decides which partition to use -`replicationClusters` | array | no | Restrict replication to this list of [clusters](reference-terminology.md#cluster), specified by name - - -##### Example success response - -```json - -{ - "result": "ok", - "messageId": "CAAQAw==", - "context": "1" - } - -``` - -##### Example failure response - -```json - - { - "result": "send-error:3", - "errorMsg": "Failed to de-serialize from JSON", - "context": "1" - } - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`result` | string | yes | `ok` if successful or an error message if unsuccessful -`messageId` | string | yes | Message ID assigned to the published message -`context` | string | no | Application-defined request identifier - - -### Consumer endpoint - -The consumer endpoint requires you to specify a tenant, namespace, and topic, as well as a subscription, in the URL: - -```http - -ws://broker-service-url:8080/ws/v2/consumer/persistent/:tenant/:namespace/:topic/:subscription - -``` - -##### Query param - -Key | Type | Required? 
| Explanation -:---|:-----|:----------|:----------- -`ackTimeoutMillis` | long | no | Set the timeout for unacked messages (default: 0) -`subscriptionType` | string | no | [Subscription type](https://pulsar.apache.org/api/client/index.html?org/apache/pulsar/client/api/SubscriptionType.html): `Exclusive`, `Failover`, `Shared`, `Key_Shared` -`receiverQueueSize` | int | no | Size of the consumer receive queue (default: 1000) -`consumerName` | string | no | Consumer name -`priorityLevel` | int | no | Define a [priority](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setPriorityLevel-int-) for the consumer -`maxRedeliverCount` | int | no | Define a [maxRedeliverCount](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#deadLetterPolicy-org.apache.pulsar.client.api.DeadLetterPolicy-) for the consumer (default: 0). Activates [Dead Letter Topic](https://github.com/apache/pulsar/wiki/PIP-22%3A-Pulsar-Dead-Letter-Topic) feature. -`deadLetterTopic` | string | no | Define a [deadLetterTopic](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#deadLetterPolicy-org.apache.pulsar.client.api.DeadLetterPolicy-) for the consumer (default: {topic}-{subscription}-DLQ). Activates [Dead Letter Topic](https://github.com/apache/pulsar/wiki/PIP-22%3A-Pulsar-Dead-Letter-Topic) feature. -`pullMode` | boolean | no | Enable pull mode (default: false). See "Flow Control" below. -`negativeAckRedeliveryDelay` | int | no | When a message is negatively acknowledged, it will be redelivered to the DLQ. -`token` | string | no | Authentication token, this is used for the browser javascript client - -NB: these parameter (except `pullMode`) apply to the internal consumer of the WebSocket service. -So messages will be subject to the redelivery settings as soon as the get into the receive queue, -even if the client doesn't consume on the WebSocket. - -##### Receiving messages - -Server will push messages on the WebSocket session: - -```json - -{ - "messageId": "CAMQADAA", - "payload": "hvXcJvHW7kOSrUn17P2q71RA5SdiXwZBqw==", - "properties": {}, - "publishTime": "2021-10-29T16:01:38.967-07:00", - "redeliveryCount": 0, - "encryptionContext": { - "keys": { - "client-rsa.pem": { - "keyValue": "jEuwS+PeUzmCo7IfLNxqoj4h7txbLjCQjkwpaw5AWJfZ2xoIdMkOuWDkOsqgFmWwxiecakS6GOZHs94x3sxzKHQx9Oe1jpwBg2e7L4fd26pp+WmAiLm/ArZJo6JotTeFSvKO3u/yQtGTZojDDQxiqFOQ1ZbMdtMZA8DpSMuq+Zx7PqLo43UdW1+krjQfE5WD+y+qE3LJQfwyVDnXxoRtqWLpVsAROlN2LxaMbaftv5HckoejJoB4xpf/dPOUqhnRstwQHf6klKT5iNhjsY4usACt78uILT0pEPd14h8wEBidBz/vAlC/zVMEqiDVzgNS7dqEYS4iHbf7cnWVCn3Hxw==", - "metadata": {} - } - }, - "param": "Tfu1PxVm6S9D3+Hk", - "compressionType": "NONE", - "uncompressedMessageSize": 0, - "batchSize": { - "empty": false, - "present": true - } - } - -``` - -Below are the parameters in the WebSocket consumer response. - -- General parameters - - Key | Type | Required? 

##### Receiving messages

The server pushes messages to the WebSocket session:

```json

{
  "messageId": "CAMQADAA",
  "payload": "hvXcJvHW7kOSrUn17P2q71RA5SdiXwZBqw==",
  "properties": {},
  "publishTime": "2021-10-29T16:01:38.967-07:00",
  "redeliveryCount": 0,
  "encryptionContext": {
    "keys": {
      "client-rsa.pem": {
        "keyValue": "jEuwS+PeUzmCo7IfLNxqoj4h7txbLjCQjkwpaw5AWJfZ2xoIdMkOuWDkOsqgFmWwxiecakS6GOZHs94x3sxzKHQx9Oe1jpwBg2e7L4fd26pp+WmAiLm/ArZJo6JotTeFSvKO3u/yQtGTZojDDQxiqFOQ1ZbMdtMZA8DpSMuq+Zx7PqLo43UdW1+krjQfE5WD+y+qE3LJQfwyVDnXxoRtqWLpVsAROlN2LxaMbaftv5HckoejJoB4xpf/dPOUqhnRstwQHf6klKT5iNhjsY4usACt78uILT0pEPd14h8wEBidBz/vAlC/zVMEqiDVzgNS7dqEYS4iHbf7cnWVCn3Hxw==",
        "metadata": {}
      }
    },
    "param": "Tfu1PxVm6S9D3+Hk",
    "compressionType": "NONE",
    "uncompressedMessageSize": 0,
    "batchSize": {
      "empty": false,
      "present": true
    }
  }
}

```

Below are the parameters in the WebSocket consumer response.

- General parameters

  Key | Type | Required? | Explanation
  :---|:-----|:----------|:-----------
  `messageId` | string | yes | Message ID
  `payload` | string | yes | Base-64 encoded payload
  `publishTime` | string | yes | Publish timestamp
  `redeliveryCount` | number | yes | Number of times this message was already delivered
  `properties` | key-value pairs | no | Application-defined properties
  `key` | string | no | Original routing key set by producer
  `encryptionContext` | EncryptionContext | no | Encryption context that consumers can use to decrypt received messages
  `param` | string | no | Initialization vector for cipher (Base64 encoding)
  `batchSize` | string | no | Number of entries in a message (if it is a batch message)
  `uncompressedMessageSize` | string | no | Message size before compression
  `compressionType` | string | no | Algorithm used to compress the message payload

- `encryptionContext` related parameters

  Key | Type | Required? | Explanation
  :---|:-----|:----------|:-----------
  `keys` |key-EncryptionKey pairs | yes | Key in `key-EncryptionKey` pairs is an encryption key name. Value in `key-EncryptionKey` pairs is an encryption key object.

- `encryptionKey` related parameters

  Key | Type | Required? | Explanation
  :---|:-----|:----------|:-----------
  `keyValue` | string | yes | Encryption key (Base64 encoding)
  `metadata` | key-value pairs | no | Application-defined metadata

#### Acknowledging the message

The consumer needs to acknowledge the successful processing of a message so that
the Pulsar broker can delete it.

```json

{
  "messageId": "CAAQAw=="
}

```

Key | Type | Required? | Explanation
:---|:-----|:----------|:-----------
`messageId`| string | yes | Message ID of the processed message

#### Negatively acknowledging messages

```json

{
  "type": "negativeAcknowledge",
  "messageId": "CAAQAw=="
}

```

Key | Type | Required? | Explanation
:---|:-----|:----------|:-----------
`type`| string | yes | Type of command. Must be `negativeAcknowledge`
`messageId`| string | yes | Message ID of the message being negatively acknowledged

#### Flow control

##### Push Mode

By default (`pullMode=false`), the consumer endpoint uses the `receiverQueueSize` parameter both to size its
internal receive queue and to limit the number of unacknowledged messages that are passed to the WebSocket client.
In this mode, if you don't send acknowledgements, the Pulsar WebSocket service stops sending messages after
`receiverQueueSize` unacknowledged messages have been sent to the WebSocket client.

##### Pull Mode

If you set `pullMode` to `true`, the WebSocket client needs to send `permit` commands to permit the
Pulsar WebSocket service to send more messages.

```json

{
  "type": "permit",
  "permitMessages": 100
}

```

Key | Type | Required? | Explanation
:---|:-----|:----------|:-----------
`type`| string | yes | Type of command. Must be `permit`
`permitMessages`| int | yes | Number of messages to permit

NB: in this mode, messages can be acknowledged on a different connection.
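
The following sketch shows the pull-mode exchange end to end. It is a minimal example using the [`websocket-client`](https://pypi.python.org/pypi/websocket-client) package from the client examples below; the topic and subscription names are assumptions:

```python

import json

import websocket

url = ('ws://localhost:8080/ws/v2/consumer/persistent/public/default/'
       'my-topic/my-sub?pullMode=true')
ws = websocket.create_connection(url)

# Grant the WebSocket service permission to push up to 10 messages
ws.send(json.dumps({'type': 'permit', 'permitMessages': 10}))

for _ in range(10):
    msg = json.loads(ws.recv())
    # Acknowledge each message so the broker can delete it
    ws.send(json.dumps({'messageId': msg['messageId']}))

ws.close()

```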

#### Check if the end of topic has been reached

The consumer can check whether it has reached the end of the topic by sending an `isEndOfTopic` request.

**Request**

```json

{
  "type": "isEndOfTopic"
}

```

Key | Type | Required? | Explanation
:---|:-----|:----------|:-----------
`type`| string | yes | Type of command. Must be `isEndOfTopic`

**Response**

```json

{
  "endOfTopic": "true/false"
}

```

### Reader endpoint

The reader endpoint requires you to specify a tenant, namespace, and topic in the URL:

```http

ws://broker-service-url:8080/ws/v2/reader/persistent/:tenant/:namespace/:topic

```

##### Query param

Key | Type | Required? | Explanation
:---|:-----|:----------|:-----------
`readerName` | string | no | Reader name
`receiverQueueSize` | int | no | Size of the consumer receive queue (default: 1000)
`messageId` | int or enum | no | Message ID to start from, `earliest` or `latest` (default: `latest`)
`token` | string | no | Authentication token; used by the browser JavaScript client

##### Receiving messages

The server pushes messages to the WebSocket session:

```json

{
  "messageId": "CAAQAw==",
  "payload": "SGVsbG8gV29ybGQ=",
  "properties": {"key1": "value1", "key2": "value2"},
  "publishTime": "2016-08-30 16:45:57.785",
  "redeliveryCount": 4
}

```

Key | Type | Required? | Explanation
:---|:-----|:----------|:-----------
`messageId` | string | yes | Message ID
`payload` | string | yes | Base-64 encoded payload
`publishTime` | string | yes | Publish timestamp
`redeliveryCount` | number | yes | Number of times this message was already delivered
`properties` | key-value pairs | no | Application-defined properties
`key` | string | no | Original routing key set by producer

#### Acknowledging the message

In the WebSocket API, the reader needs to acknowledge the successful processing of a message so that
the Pulsar WebSocket service can update the number of pending messages.
If you don't send acknowledgements, the Pulsar WebSocket service stops sending messages after reaching the pending messages limit.

```json

{
  "messageId": "CAAQAw=="
}

```

Key | Type | Required? | Explanation
:---|:-----|:----------|:-----------
`messageId`| string | yes | Message ID of the processed message

#### Check if the end of topic has been reached

The reader can check whether it has reached the end of the topic by sending an `isEndOfTopic` request.

**Request**

```json

{
  "type": "isEndOfTopic"
}

```

Key | Type | Required? | Explanation
:---|:-----|:----------|:-----------
`type`| string | yes | Type of command. Must be `isEndOfTopic`

**Response**

```json

{
  "endOfTopic": "true/false"
}

```

### Error codes

In case of error, the server closes the WebSocket session using one of the following error codes:

Error Code | Error Message
:----------|:-------------
1 | Failed to create producer
2 | Failed to subscribe
3 | Failed to deserialize from JSON
4 | Failed to serialize to JSON
5 | Failed to authenticate client
6 | Client is not authorized
7 | Invalid payload encoding
8 | Unknown error

> The application is responsible for re-establishing a new WebSocket session after a backoff period.

## Client examples

Below you'll find code examples for the Pulsar WebSocket API in [Python](#python) and [Node.js](#nodejs).

### Python

This example uses the [`websocket-client`](https://pypi.python.org/pypi/websocket-client) package. You can install it using [pip](https://pypi.python.org/pypi/pip):

```shell

$ pip install websocket-client

```

You can also download it from [PyPI](https://pypi.python.org/pypi/websocket-client).
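
If your cluster has authentication enabled, append the `token` query parameter described in the Authentication section to any of the endpoint URLs used in the following examples. The sketch below is illustrative; the token value is a placeholder:

```python

import websocket

AUTH_TOKEN = 'eyJhbGciOi...'  # placeholder JWT issued for your client

url = ('ws://localhost:8080/ws/v2/producer/persistent/public/default/my-topic'
       '?token=' + AUTH_TOKEN)
ws = websocket.create_connection(url)
ws.close()

```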

#### Python producer

Here's an example Python producer that sends a simple message to a Pulsar [topic](reference-terminology.md#topic):

```python

import websocket, base64, json

# If you set enable_TLS to True, you also have to set tlsEnabled to true in conf/websocket.conf.
enable_TLS = False
scheme = 'ws'
if enable_TLS:
    scheme = 'wss'

TOPIC = scheme + '://localhost:8080/ws/v2/producer/persistent/public/default/my-topic'

ws = websocket.create_connection(TOPIC)

# Send one message as JSON
ws.send(json.dumps({
    'payload': base64.b64encode(b'Hello World').decode('utf-8'),
    'properties': {
        'key1': 'value1',
        'key2': 'value2'
    },
    'context': '5'
}))

response = json.loads(ws.recv())
if response['result'] == 'ok':
    print('Message published successfully')
else:
    print('Failed to publish message:', response)
ws.close()

```

#### Python consumer

Here's an example Python consumer that listens on a Pulsar topic and prints the message ID whenever a message arrives:

```python

import websocket, base64, json

# If you set enable_TLS to True, you also have to set tlsEnabled to true in conf/websocket.conf.
enable_TLS = False
scheme = 'ws'
if enable_TLS:
    scheme = 'wss'

TOPIC = scheme + '://localhost:8080/ws/v2/consumer/persistent/public/default/my-topic/my-sub'

ws = websocket.create_connection(TOPIC)

while True:
    msg = json.loads(ws.recv())
    if not msg:
        break

    print("Received: {} - payload: {}".format(msg, base64.b64decode(msg['payload'])))

    # Acknowledge successful processing
    ws.send(json.dumps({'messageId': msg['messageId']}))

ws.close()

```

#### Python reader

Here's an example Python reader that listens on a Pulsar topic and prints the message ID whenever a message arrives:

```python

import websocket, base64, json

# If you set enable_TLS to True, you also have to set tlsEnabled to true in conf/websocket.conf.
enable_TLS = False
scheme = 'ws'
if enable_TLS:
    scheme = 'wss'

TOPIC = scheme + '://localhost:8080/ws/v2/reader/persistent/public/default/my-topic'
ws = websocket.create_connection(TOPIC)

while True:
    msg = json.loads(ws.recv())
    if not msg:
        break

    print("Received: {} - payload: {}".format(msg, base64.b64decode(msg['payload'])))

    # Acknowledge successful processing
    ws.send(json.dumps({'messageId': msg['messageId']}))

ws.close()

```
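
The reader starts from the latest message by default. The `messageId` query parameter described in the reader endpoint section selects a different starting position; the sketch below is illustrative:

```python

import websocket

# Start the reader from the earliest available message instead of the latest
TOPIC = ('ws://localhost:8080/ws/v2/reader/persistent/public/default/my-topic'
         '?messageId=earliest')
ws = websocket.create_connection(TOPIC)

```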

### Node.js

This example uses the [`ws`](https://websockets.github.io/ws/) package. You can install it using [npm](https://www.npmjs.com/):

```shell

$ npm install ws

```

#### Node.js producer

Here's an example Node.js producer that sends a simple message to a Pulsar topic:

```javascript

const WebSocket = require('ws');

// If you set enableTLS to true, you also have to set tlsEnabled to true in conf/websocket.conf.
const enableTLS = false;
const topic = `${enableTLS ? 'wss' : 'ws'}://localhost:8080/ws/v2/producer/persistent/public/default/my-topic`;
const ws = new WebSocket(topic);

var message = {
  "payload" : Buffer.from("Hello World").toString('base64'),
  "properties": {
    "key1" : "value1",
    "key2" : "value2"
  },
  "context" : "1"
};

ws.on('open', function() {
  // Send one message
  ws.send(JSON.stringify(message));
});

ws.on('message', function(message) {
  console.log('received ack: %s', message);
});

```

#### Node.js consumer

Here's an example Node.js consumer that listens on the same topic used by the producer above:

```javascript

const WebSocket = require('ws');

// If you set enableTLS to true, you also have to set tlsEnabled to true in conf/websocket.conf.
const enableTLS = false;
const topic = `${enableTLS ? 'wss' : 'ws'}://localhost:8080/ws/v2/consumer/persistent/public/default/my-topic/my-sub`;
const ws = new WebSocket(topic);

ws.on('message', function(message) {
  var receiveMsg = JSON.parse(message);
  console.log('Received: %s - payload: %s', message, Buffer.from(receiveMsg.payload, 'base64').toString());
  var ackMsg = {"messageId" : receiveMsg.messageId};
  ws.send(JSON.stringify(ackMsg));
});

```

#### Node.js reader

Here's an example Node.js reader that listens on the same topic and acknowledges each message it receives:

```javascript

const WebSocket = require('ws');

// If you set enableTLS to true, you also have to set tlsEnabled to true in conf/websocket.conf.
const enableTLS = false;
const topic = `${enableTLS ? 'wss' : 'ws'}://localhost:8080/ws/v2/reader/persistent/public/default/my-topic`;
const ws = new WebSocket(topic);

ws.on('message', function(message) {
  var receiveMsg = JSON.parse(message);
  console.log('Received: %s - payload: %s', message, Buffer.from(receiveMsg.payload, 'base64').toString());
  var ackMsg = {"messageId" : receiveMsg.messageId};
  ws.send(JSON.stringify(ackMsg));
});

```

diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/client-libraries.md b/site2/website/versioned_docs/version-2.8.2-deprecated/client-libraries.md
deleted file mode 100644
index 00d128c514040f..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/client-libraries.md
+++ /dev/null
@@ -1,35 +0,0 @@
---
id: client-libraries
title: Pulsar client libraries
sidebar_label: "Overview"
original_id: client-libraries
---

Pulsar supports the following client libraries:

- [Java client](client-libraries-java.md)
- [Go client](client-libraries-go.md)
- [Python client](client-libraries-python.md)
- [C++ client](client-libraries-cpp.md)
- [Node.js client](client-libraries-node.md)
- [WebSocket client](client-libraries-websocket.md)
- [C# client](client-libraries-dotnet.md)

## Feature matrix

The Pulsar client feature matrix for different languages is listed on the [Client Features Matrix](https://github.com/apache/pulsar/wiki/Client-Features-Matrix) page.

## Third-party clients

Besides the officially released clients, multiple third-party projects provide Pulsar clients in different languages.

> If you have developed a new Pulsar client, feel free to submit a pull request and add your client to the list below.

| Language | Project | Maintainer | License | Description |
|----------|---------|------------|---------|-------------|
| Go | [pulsar-client-go](https://github.com/Comcast/pulsar-client-go) | [Comcast](https://github.com/Comcast) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | A native golang client |
| Go | [go-pulsar](https://github.com/t2y/go-pulsar) | [t2y](https://github.com/t2y) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | |
| Haskell | [supernova](https://github.com/cr-org/supernova) | [Chatroulette](https://github.com/cr-org) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Native Pulsar client for Haskell |
| Scala | [neutron](https://github.com/cr-org/neutron) | [Chatroulette](https://github.com/cr-org) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Purely functional Apache Pulsar client for Scala built on top of Fs2 |
| Scala | [pulsar4s](https://github.com/sksamuel/pulsar4s) | [sksamuel](https://github.com/sksamuel) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Idiomatic, typesafe, and reactive Scala client for Apache Pulsar |
| Rust | [pulsar-rs](https://github.com/wyyerd/pulsar-rs) | [Wyyerd Group](https://github.com/wyyerd) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Future-based Rust bindings for Apache Pulsar |
| .NET | [pulsar-client-dotnet](https://github.com/fsharplang-ru/pulsar-client-dotnet) | [Lanayx](https://github.com/Lanayx) | [![GitHub](https://img.shields.io/badge/license-MIT-green.svg)](https://opensource.org/licenses/MIT) | Native .NET client for C#/F#/VB |

diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/concepts-architecture-overview.md b/site2/website/versioned_docs/version-2.8.2-deprecated/concepts-architecture-overview.md
deleted file mode 100644
index f3e75c3e307e0c..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/concepts-architecture-overview.md
+++ /dev/null
@@ -1,172 +0,0 @@
---
id: concepts-architecture-overview
title: Architecture Overview
sidebar_label: "Architecture"
original_id: concepts-architecture-overview
---

At the highest level, a Pulsar instance is composed of one or more Pulsar clusters. Clusters within an instance can [replicate](concepts-replication.md) data amongst themselves.

In a Pulsar cluster:

* One or more brokers handle and load balance incoming messages from producers, dispatch messages to consumers, communicate with the Pulsar configuration store to handle various coordination tasks, store messages in BookKeeper instances (aka bookies), rely on a cluster-specific ZooKeeper cluster for certain tasks, and more.
* A BookKeeper cluster consisting of one or more bookies handles [persistent storage](#persistent-storage) of messages.
* A ZooKeeper cluster specific to that cluster handles coordination tasks between Pulsar clusters.
- -The diagram below provides an illustration of a Pulsar cluster: - -![Pulsar architecture diagram](/assets/pulsar-system-architecture.png) - -At the broader instance level, an instance-wide ZooKeeper cluster called the configuration store handles coordination tasks involving multiple clusters, for example [geo-replication](concepts-replication.md). - -## Brokers - -The Pulsar message broker is a stateless component that's primarily responsible for running two other components: - -* An HTTP server that exposes a {@inject: rest:REST:/} API for both administrative tasks and [topic lookup](concepts-clients.md#client-setup-phase) for producers and consumers. The producers connect to the brokers to publish messages and the consumers connect to the brokers to consume the messages. -* A dispatcher, which is an asynchronous TCP server over a custom [binary protocol](developing-binary-protocol.md) used for all data transfers - -Messages are typically dispatched out of a [managed ledger](#managed-ledgers) cache for the sake of performance, *unless* the backlog exceeds the cache size. If the backlog grows too large for the cache, the broker will start reading entries from BookKeeper. - -Finally, to support geo-replication on global topics, the broker manages replicators that tail the entries published in the local region and republish them to the remote region using the Pulsar [Java client library](client-libraries-java.md). - -> For a guide to managing Pulsar brokers, see the [brokers](admin-api-brokers.md) guide. - -## Clusters - -A Pulsar instance consists of one or more Pulsar *clusters*. Clusters, in turn, consist of: - -* One or more Pulsar [brokers](#brokers) -* A ZooKeeper quorum used for cluster-level configuration and coordination -* An ensemble of bookies used for [persistent storage](#persistent-storage) of messages - -Clusters can replicate amongst themselves using [geo-replication](concepts-replication.md). - -> For a guide to managing Pulsar clusters, see the [clusters](admin-api-clusters.md) guide. - -## Metadata store - -The Pulsar metadata store maintains all the metadata of a Pulsar cluster, such as topic metadata, schema, broker load data, and so on. Pulsar uses [Apache ZooKeeper](https://zookeeper.apache.org/) for metadata storage, cluster configuration, and coordination. The Pulsar metadata store can be deployed on a separate ZooKeeper cluster or deployed on an existing ZooKeeper cluster. You can use one ZooKeeper cluster for both Pulsar metadata store and BookKeeper metadata store. If you want to deploy Pulsar brokers connected to an existing BookKeeper cluster, you need to deploy separate ZooKeeper clusters for Pulsar metadata store and BookKeeper metadata store respectively. - -In a Pulsar instance: - -* A configuration store quorum stores configuration for tenants, namespaces, and other entities that need to be globally consistent. -* Each cluster has its own local ZooKeeper ensemble that stores cluster-specific configuration and coordination such as which brokers are responsible for which topics as well as ownership metadata, broker load reports, BookKeeper ledger metadata, and more. - -## Configuration store - -The configuration store maintains all the configurations of a Pulsar instance, such as clusters, tenants, namespaces, partitioned topic related configurations, and so on. A Pulsar instance can have a single local cluster, multiple local clusters, or multiple cross-region clusters. 
Consequently, the configuration store can share the configurations across multiple clusters under a Pulsar instance. The configuration store can be deployed on a separate ZooKeeper cluster or deployed on an existing ZooKeeper cluster.

## Persistent storage

Pulsar provides guaranteed message delivery for applications. If a message successfully reaches a Pulsar broker, it will be delivered to its intended target.

This guarantee requires that non-acknowledged messages are stored in a durable manner until they can be delivered to and acknowledged by consumers. This mode of messaging is commonly called *persistent messaging*. In Pulsar, N copies of all messages are stored and synced on disk, for example 4 copies across two servers with mirrored [RAID](https://en.wikipedia.org/wiki/RAID) volumes on each server.

### Apache BookKeeper

Pulsar uses a system called [Apache BookKeeper](http://bookkeeper.apache.org/) for persistent message storage. BookKeeper is a distributed [write-ahead log](https://en.wikipedia.org/wiki/Write-ahead_logging) (WAL) system that provides a number of crucial advantages for Pulsar:

* It enables Pulsar to utilize many independent logs, called [ledgers](#ledgers). Multiple ledgers can be created for topics over time.
* It offers very efficient storage for sequential data that handles entry replication.
* It guarantees read consistency of ledgers in the presence of various system failures.
* It offers even distribution of I/O across bookies.
* It's horizontally scalable in both capacity and throughput. Capacity can be immediately increased by adding more bookies to a cluster.
* Bookies are designed to handle thousands of ledgers with concurrent reads and writes. By using multiple disk devices---one for the journal and another for general storage---bookies are able to isolate the effects of read operations from the latency of ongoing write operations.

In addition to message data, *cursors* are also persistently stored in BookKeeper. Cursors are [subscription](reference-terminology.md#subscription) positions for [consumers](reference-terminology.md#consumer). BookKeeper enables Pulsar to store consumer position in a scalable fashion.

At the moment, Pulsar supports persistent message storage. This accounts for the `persistent` in all topic names. Here's an example:

```http

persistent://my-tenant/my-namespace/my-topic

```

> Pulsar also supports ephemeral ([non-persistent](concepts-messaging.md#non-persistent-topics)) message storage.


You can see an illustration of how brokers and bookies interact in the diagram below:

![Brokers and bookies](/assets/broker-bookie.png)


### Ledgers

A ledger is an append-only data structure with a single writer that is assigned to multiple BookKeeper storage nodes, or bookies. Ledger entries are replicated to multiple bookies. Ledgers themselves have very simple semantics:

* A Pulsar broker can create a ledger, append entries to the ledger, and close the ledger.
* After the ledger has been closed---either explicitly or because the writer process crashed---it can then be opened only in read-only mode.
* Finally, when entries in the ledger are no longer needed, the whole ledger can be deleted from the system (across all bookies).

#### Ledger read consistency

The main strength of BookKeeper is that it guarantees read consistency in ledgers in the presence of failures.
Since the ledger can only be written to by a single process, that process is free to append entries very efficiently, without needing to obtain consensus. After a failure, the ledger will go through a recovery process that will finalize the state of the ledger and establish which entry was last committed to the log. After that point, all readers of the ledger are guaranteed to see the exact same content.

#### Managed ledgers

Given that BookKeeper ledgers provide a single log abstraction, a library was developed on top of the ledger called the *managed ledger* that represents the storage layer for a single topic. A managed ledger represents the abstraction of a stream of messages with a single writer that keeps appending at the end of the stream and multiple cursors that are consuming the stream, each with its own associated position.

Internally, a single managed ledger uses multiple BookKeeper ledgers to store the data. There are two reasons to have multiple ledgers:

1. After a failure, a ledger is no longer writable and a new one needs to be created.
2. A ledger can be deleted when all cursors have consumed the messages it contains. This allows for periodic rollover of ledgers.

### Journal storage

In BookKeeper, *journal* files contain BookKeeper transaction logs. Before making an update to a [ledger](#ledgers), a bookie needs to ensure that a transaction describing the update is written to persistent (non-volatile) storage. A new journal file is created once the bookie starts or the older journal file reaches the journal file size threshold (configured using the [`journalMaxSizeMB`](reference-configuration.md#bookkeeper-journalMaxSizeMB) parameter).

## Pulsar proxy

One way for Pulsar clients to interact with a Pulsar [cluster](#clusters) is by connecting to Pulsar message [brokers](#brokers) directly. In some cases, however, this kind of direct connection is either infeasible or undesirable because the client doesn't have direct access to broker addresses. If you're running Pulsar in a cloud environment or on [Kubernetes](https://kubernetes.io) or an analogous platform, for example, then direct client connections to brokers are likely not possible.

The **Pulsar proxy** provides a solution to this problem by acting as a single gateway for all of the brokers in a cluster. If you run the Pulsar proxy (which, again, is optional), all client connections with the Pulsar cluster will flow through the proxy rather than communicating with brokers.

> For the sake of performance and fault tolerance, you can run as many instances of the Pulsar proxy as you'd like.

Architecturally, the Pulsar proxy gets all the information it requires from ZooKeeper. When starting the proxy on a machine, you only need to provide ZooKeeper connection strings for the cluster-specific and instance-wide configuration store clusters. Here's an example:

```bash

$ bin/pulsar proxy \
  --zookeeper-servers zk-0,zk-1,zk-2 \
  --configuration-store-servers zk-0,zk-1,zk-2

```

> #### Pulsar proxy docs
> For documentation on using the Pulsar proxy, see the [Pulsar proxy admin documentation](administration-proxy.md).


Some important things to know about the Pulsar proxy:

* Connecting clients don't need to provide *any* specific configuration to use the Pulsar proxy. You won't need to update the client configuration for existing applications beyond updating the IP used for the service URL (for example, if you're running a load balancer over the Pulsar proxy).
-* [TLS encryption](security-tls-transport.md) and [authentication](security-tls-authentication.md) is supported by the Pulsar proxy - -## Service discovery - -[Clients](getting-started-clients.md) connecting to Pulsar brokers need to be able to communicate with an entire Pulsar instance using a single URL. Pulsar provides a built-in service discovery mechanism that you can set up using the instructions in the [Deploying a Pulsar instance](deploy-bare-metal.md#service-discovery-setup) guide. - -You can use your own service discovery system if you'd like. If you use your own system, there is just one requirement: when a client performs an HTTP request to an endpoint, such as `http://pulsar.us-west.example.com:8080`, the client needs to be redirected to *some* active broker in the desired cluster, whether via DNS, an HTTP or IP redirect, or some other means. - -The diagram below illustrates Pulsar service discovery: - -![alt-text](/assets/pulsar-service-discovery.png) - -In this diagram, the Pulsar cluster is addressable via a single DNS name: `pulsar-cluster.acme.com`. A [Python client](client-libraries-python.md), for example, could access this Pulsar cluster like this: - -```python - -from pulsar import Client - -client = Client('pulsar://pulsar-cluster.acme.com:6650') - -``` - -:::note - -In Pulsar, each topic is handled by only one broker. Initial requests from a client to read, update or delete a topic are sent to a broker that may not be the topic owner. If the broker cannot handle the request for this topic, it redirects the request to the appropriate broker. - -::: - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/concepts-authentication.md b/site2/website/versioned_docs/version-2.8.2-deprecated/concepts-authentication.md deleted file mode 100644 index f6307890c904a7..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/concepts-authentication.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -id: concepts-authentication -title: Authentication and Authorization -sidebar_label: "Authentication and Authorization" -original_id: concepts-authentication ---- - -Pulsar supports a pluggable [authentication](security-overview.md) mechanism which can be configured at the proxy and/or the broker. Pulsar also supports a pluggable [authorization](security-authorization.md) mechanism. These mechanisms work together to identify the client and its access rights on topics, namespaces and tenants. - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/concepts-clients.md b/site2/website/versioned_docs/version-2.8.2-deprecated/concepts-clients.md deleted file mode 100644 index 4040624f7d6366..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/concepts-clients.md +++ /dev/null @@ -1,92 +0,0 @@ ---- -id: concepts-clients -title: Pulsar Clients -sidebar_label: "Clients" -original_id: concepts-clients ---- - -Pulsar exposes a client API with language bindings for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python.md), [C++](client-libraries-cpp.md) and [C#](client-libraries-dotnet.md). The client API optimizes and encapsulates Pulsar's client-broker communication protocol and exposes a simple and intuitive API for use by applications. - -Under the hood, the current official Pulsar client libraries support transparent reconnection and/or connection failover to brokers, queuing of messages until acknowledged by the broker, and heuristics such as connection retries with backoff. 
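
For example, publishing a message with the official Python client takes only a few lines; topic lookup, connection pooling, reconnection, and retries are all handled internally. The sketch below assumes a broker reachable at `localhost`:

```python

import pulsar

# The client performs topic lookup, connection handling, and retries internally
client = pulsar.Client('pulsar://localhost:6650')

producer = client.create_producer('persistent://public/default/my-topic')
producer.send(b'Hello Pulsar')

client.close()

```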

> **Custom client libraries**
> If you'd like to create your own client library, we recommend consulting the documentation on Pulsar's custom [binary protocol](developing-binary-protocol.md).


## Client setup phase

Before an application creates a producer/consumer, the Pulsar client library needs to initiate a setup phase including two steps:

1. The client attempts to determine the owner of the topic by sending an HTTP lookup request to the broker. The request could reach one of the active brokers which, by looking at the (cached) ZooKeeper metadata, knows who is serving the topic or, in case nobody is serving it, tries to assign the topic to the least loaded broker.
1. Once the client library has the broker address, it creates a TCP connection (or reuses an existing connection from the pool) and authenticates it. Within this connection, the client and broker exchange binary commands of a custom protocol. At this point, the client sends a command to the broker to create a producer/consumer, which the broker complies with after having validated the authorization policy.

Whenever the TCP connection breaks, the client immediately re-initiates this setup phase and keeps trying with exponential backoff to re-establish the producer or consumer until the operation succeeds.

## Reader interface

In Pulsar, the "standard" [consumer interface](concepts-messaging.md#consumers) involves using consumers to listen on [topics](reference-terminology.md#topic), process incoming messages, and finally acknowledge those messages when they are processed. Whenever a new subscription is created, it is initially positioned at the end of the topic (by default), and consumers associated with that subscription begin reading with the first message created afterwards. Whenever a consumer connects to a topic using a pre-existing subscription, it begins reading from the earliest unacknowledged message within that subscription. In summary, with the consumer interface, subscription cursors are automatically managed by Pulsar in response to [message acknowledgements](concepts-messaging.md#acknowledgement).

The **reader interface** for Pulsar enables applications to manually manage cursors. When you use a reader to connect to a topic---rather than a consumer---you need to specify *which* message the reader begins reading from when it connects to a topic. When connecting to a topic, the reader interface enables you to begin with:

* The **earliest** available message in the topic
* The **latest** available message in the topic
* Some other message between the earliest and the latest. If you select this option, you'll need to explicitly provide a message ID. Your application will be responsible for "knowing" this message ID in advance, perhaps fetching it from a persistent data store or cache.

The reader interface is helpful for use cases like using Pulsar to provide effectively-once processing semantics for a stream processing system. For this use case, it's essential that the stream processing system be able to "rewind" topics to a specific message and begin reading there. The reader interface provides Pulsar clients with the low-level abstraction necessary to "manually position" themselves within a topic.

Internally, the reader interface is implemented as a consumer using an exclusive, non-durable subscription to the topic with a randomly-allocated name.

[ **IMPORTANT** ]

Unlike subscriptions/consumers, readers are non-durable in nature and do not prevent data in a topic from being deleted; thus it is ***strongly*** advised that [data retention](cookbooks-retention-expiry.md) be configured. If data retention for a topic is not configured for an adequate amount of time, messages that the reader has not yet read might be deleted. This causes the reader to essentially skip messages. Configuring data retention for a topic guarantees the reader a certain duration in which to read a message.

Please also note that a reader can have a "backlog", but the metric is only used for users to know how far behind the reader is. The metric is not considered for any backlog quota calculations.

![The Pulsar consumer and reader interfaces](/assets/pulsar-reader-consumer-interfaces.png)

Here's a Java example that begins reading from the earliest available message on a topic:

```java

import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.MessageId;
import org.apache.pulsar.client.api.Reader;

// Create a reader on a topic and for a specific message (and onward)
Reader<byte[]> reader = pulsarClient.newReader()
    .topic("reader-api-test")
    .startMessageId(MessageId.earliest)
    .create();

while (true) {
    Message<byte[]> message = reader.readNext();

    // Process the message
}

```

To create a reader that reads from the latest available message:

```java

Reader<byte[]> reader = pulsarClient.newReader()
    .topic(topic)
    .startMessageId(MessageId.latest)
    .create();

```

To create a reader that reads from some message between the earliest and the latest:

```java

byte[] msgIdBytes = // Some byte array
MessageId id = MessageId.fromByteArray(msgIdBytes);
Reader<byte[]> reader = pulsarClient.newReader()
    .topic(topic)
    .startMessageId(id)
    .create();

```

diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/concepts-messaging.md b/site2/website/versioned_docs/version-2.8.2-deprecated/concepts-messaging.md
deleted file mode 100644
index b76728f109b5d0..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/concepts-messaging.md
+++ /dev/null
@@ -1,700 +0,0 @@
---
id: concepts-messaging
title: Messaging
sidebar_label: "Messaging"
original_id: concepts-messaging
---

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````


Pulsar is built on the [publish-subscribe](https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern) pattern (often abbreviated to pub-sub). In this pattern, [producers](#producers) publish messages to [topics](#topics). [Consumers](#consumers) [subscribe](#subscription-types) to those topics, process incoming messages, and send an acknowledgement when processing is complete.

When a subscription is created, Pulsar [retains](concepts-architecture-overview.md#persistent-storage) all messages, even if the consumer is disconnected. Retained messages are discarded only when a consumer acknowledges that those messages are processed successfully.

## Messages

Messages are the basic "unit" of Pulsar. The following table lists the components of messages.

Component | Description
:---------|:-------
Value / data payload | The data carried by the message. All Pulsar messages contain raw bytes, although message data can also conform to data [schemas](schema-get-started.md).
Key | Messages are optionally tagged with keys, which is useful for things like [topic compaction](concepts-topic-compaction.md).
-Properties | An optional key/value map of user-defined properties. -Producer name | The name of the producer who produces the message. If you do not specify a producer name, the default name is used. -Sequence ID | Each Pulsar message belongs to an ordered sequence on its topic. The sequence ID of the message is its order in that sequence. -Publish time | The timestamp of when the message is published. The timestamp is automatically applied by the producer. -Event time | An optional timestamp attached to a message by applications. For example, applications attach a timestamp on when the message is processed. If nothing is set to event time, the value is `0`. -TypedMessageBuilder | It is used to construct a message. You can set message properties such as the message key, message value with `TypedMessageBuilder`.
    When you set `TypedMessageBuilder`, set the key as a string. If you set the key as other types, for example, an AVRO object, the key is sent as bytes, and it is difficult to get the AVRO object back on the consumer. - -The default size of a message is 5 MB. You can configure the max size of a message with the following configurations. - -- In the `broker.conf` file. - - ```bash - - # The max size of a message (in bytes). - maxMessageSize=5242880 - - ``` - -- In the `bookkeeper.conf` file. - - ```bash - - # The max size of the netty frame (in bytes). Any messages received larger than this value are rejected. The default value is 5 MB. - nettyMaxFrameSizeBytes=5253120 - - ``` - -> For more information on Pulsar message contents, see Pulsar [binary protocol](developing-binary-protocol.md). - -## Producers - -A producer is a process that attaches to a topic and publishes messages to a Pulsar [broker](reference-terminology.md#broker). The Pulsar broker process the messages. - -### Send modes - -Producers send messages to brokers synchronously (sync) or asynchronously (async). - -| Mode | Description | -|:-----------|-----------| -| Sync send | The producer waits for an acknowledgement from the broker after sending every message. If the acknowledgment is not received, the producer treats the sending operation as a failure. | -| Async send | The producer puts a message in a blocking queue and returns immediately. The client library sends the message to the broker in the background. If the queue is full (you can [configure](reference-configuration.md#broker) the maximum size), the producer is blocked or fails immediately when calling the API, depending on arguments passed to the producer. | - -### Access mode - -You can have different types of access modes on topics for producers. - -|Access mode | Description -|---|--- -`Shared`|Multiple producers can publish on a topic.

    This is the **default** setting. -`Exclusive`|Only one producer can publish on a topic.

    If there is already a producer connected, other producers trying to publish on this topic get errors immediately.

    The "old" producer is evicted and a "new" producer is selected to be the next exclusive producer if the "old" producer experiences a network partition with the broker. -`WaitForExclusive`|If there is already a producer connected, the producer creation is pending (rather than timing out) until the producer gets the `Exclusive` access.

    The producer that succeeds in becoming the exclusive one is treated as the leader. Consequently, if you want to implement the leader election scheme for your application, you can use this access mode. - -:::note - -Once an application creates a producer with the `Exclusive` or `WaitForExclusive` access mode successfully, the instance of the application is guaranteed to be the **only one writer** on the topic. Other producers trying to produce on this topic get errors immediately or have to wait until they get the `Exclusive` access. -For more information, see [PIP 68: Exclusive Producer](https://github.com/apache/pulsar/wiki/PIP-68:-Exclusive-Producer). - -::: - -You can set producer access mode through Java Client API. For more information, see `ProducerAccessMode` in [ProducerBuilder.java](https://github.com/apache/pulsar/blob/fc5768ca3bbf92815d142fe30e6bfad70a1b4fc6/pulsar-client-api/src/main/java/org/apache/pulsar/client/api/ProducerBuilder.java). - - -### Compression - -You can compress messages published by producers during transportation. Pulsar currently supports the following types of compression: - -* [LZ4](https://github.com/lz4/lz4) -* [ZLIB](https://zlib.net/) -* [ZSTD](https://facebook.github.io/zstd/) -* [SNAPPY](https://google.github.io/snappy/) - -### Batching - -When batching is enabled, the producer accumulates and sends a batch of messages in a single request. The batch size is defined by the maximum number of messages and the maximum publish latency. Therefore, the backlog size represents the total number of batches instead of the total number of messages. - -In Pulsar, batches are tracked and stored as single units rather than as individual messages. Consumer unbundles a batch into individual messages. However, scheduled messages (configured through the `deliverAt` or the `deliverAfter` parameter) are always sent as individual messages even batching is enabled. - -In general, a batch is acknowledged when all of its messages are acknowledged by a consumer. It means unexpected failures, negative acknowledgements, or acknowledgement timeouts can result in redelivery of all messages in a batch, even if some of the messages are acknowledged. - -To avoid redelivering acknowledged messages in a batch to the consumer, Pulsar introduces batch index acknowledgement since Pulsar 2.6.0. When batch index acknowledgement is enabled, the consumer filters out the batch index that has been acknowledged and sends the batch index acknowledgement request to the broker. The broker maintains the batch index acknowledgement status and tracks the acknowledgement status of each batch index to avoid dispatching acknowledged messages to the consumer. When all indexes of the batch message are acknowledged, the batch message is deleted. - -By default, batch index acknowledgement is disabled (`acknowledgmentAtBatchIndexLevelEnabled=false`). You can enable batch index acknowledgement by setting the `acknowledgmentAtBatchIndexLevelEnabled` parameter to `true` at the broker side. Enabling batch index acknowledgement results in more memory overheads. - -### Chunking -When you enable chunking, read the following instructions. -- Batching and chunking cannot be enabled simultaneously. To enable chunking, you must disable batching in advance. -- Chunking is only supported for persisted topics. -- Chunking is only supported for the exclusive and failover subscription types. 

When chunking is enabled (`chunkingEnabled=true`), if the message size is greater than the allowed maximum publish-payload size, the producer splits the original message into chunked messages and publishes them with chunked metadata to the broker separately and in order. At the broker side, the chunked messages are stored in the managed-ledger in the same way as ordinary messages. The only difference is that the consumer needs to buffer the chunked messages and combine them into the real message when all chunked messages have been collected. The chunked messages in the managed-ledger can be interwoven with ordinary messages. If the producer fails to publish all the chunks of a message, the consumer can expire the incomplete chunks if it fails to receive all chunks within the expiry time. By default, the expiry time is set to one minute.

The consumer consumes the chunked messages and buffers them until it receives all the chunks of a message. The consumer then stitches the chunked messages together and places them into the receiver-queue. Clients consume messages from the receiver-queue. Once the consumer consumes the entire large message and acknowledges it, the consumer internally sends acknowledgements for all the chunked messages associated with that large message. You can set the `maxPendingChunkedMessage` parameter on the consumer. When the threshold is reached, the consumer drops the unchunked messages by silently acknowledging them or asking the broker to redeliver them later by marking them unacknowledged.

The broker does not require any changes to support chunking for non-shared subscriptions. The broker only uses `chunkedMessageRate` to record the chunked message rate on the topic.

#### Handle chunked messages with one producer and one ordered consumer

As shown in the following figure, a topic can have one producer which publishes a large message payload in chunked messages along with regular non-chunked messages. The producer publishes message M1 in three chunks M1-C1, M1-C2 and M1-C3. The broker stores all the three chunked messages in the managed-ledger and dispatches them to the ordered (exclusive/failover) consumer in the same order. The consumer buffers all the chunked messages in memory until it receives all the chunked messages, combines them into one message and then hands over the original message M1 to the client.

![](/assets/chunking-01.png)

#### Handle chunked messages with multiple producers and one ordered consumer

When multiple publishers publish chunked messages into a single topic, the broker stores all the chunked messages coming from different publishers in the same managed-ledger. As shown below, Producer 1 publishes message M1 in three chunks M1-C1, M1-C2 and M1-C3. Producer 2 publishes message M2 in three chunks M2-C1, M2-C2 and M2-C3. All chunked messages of a specific message are still in order but might not be consecutive in the managed-ledger. This brings some memory pressure to the consumer because the consumer keeps a separate buffer for each large message to aggregate all its chunks and combine them into one message.

![](/assets/chunking-02.png)

## Consumers

A consumer is a process that attaches to a topic via a subscription and then receives messages.

A consumer sends a [flow permit request](developing-binary-protocol.md#flow-control) to a broker to get messages. There is a queue at the consumer side to receive messages pushed from the broker.
You can configure the queue size with the [`receiverQueueSize`](client-libraries-java.md#configure-consumer) parameter. The default size is `1000`. Each time `consumer.receive()` is called, a message is dequeued from the buffer.

### Receive modes

Messages are received from [brokers](reference-terminology.md#broker) either synchronously (sync) or asynchronously (async).

| Mode          | Description |
|:--------------|:------------|
| Sync receive  | A sync receive is blocked until a message is available. |
| Async receive | An async receive returns immediately with a future value—for example, a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture) in Java—that completes once a new message is available. |

### Listeners

Client libraries provide listener implementations for consumers. For example, the [Java client](client-libraries-java.md) provides a {@inject: javadoc:MessageListener:/client/org/apache/pulsar/client/api/MessageListener} interface. In this interface, the `received` method is called whenever a new message is received.

### Acknowledgement

When a consumer consumes a message successfully, the consumer sends an acknowledgement request to the broker. The message is durably stored, and it is deleted only after all the subscriptions have acknowledged it. If you want to store a message that has already been acknowledged by a consumer, you need to configure the [message retention policy](concepts-messaging.md#message-retention-and-expiry).

For a batch message, if batch index acknowledgement is enabled, the broker maintains the batch index acknowledgement status and tracks the acknowledgement status of each batch index to avoid dispatching acknowledged messages to the consumer. When all indexes of the batch message are acknowledged, the batch message is deleted. For details about batch index acknowledgement, see [batching](#batching).

Messages can be acknowledged in the following two ways:

- Messages are acknowledged individually. With individual acknowledgement, the consumer needs to acknowledge each message and sends an acknowledgement request to the broker.
- Messages are acknowledged cumulatively. With cumulative acknowledgement, the consumer only needs to acknowledge the last message it received. All messages in the stream up to (and including) the provided message are not re-delivered to that consumer.

:::note

Cumulative acknowledgement cannot be used in the [Shared subscription type](#subscription-types), because this subscription type involves multiple consumers which have access to the same subscription. In the Shared subscription type, messages are acknowledged individually.

:::

### Negative acknowledgement

When a consumer fails to consume a message and wants to consume it again, the consumer sends a negative acknowledgement to the broker, and then the broker redelivers the message.

Messages are negatively acknowledged either individually or cumulatively, depending on the consumption subscription type.

In the exclusive and failover subscription types, consumers only negatively acknowledge the last message they receive.

In the shared and Key_Shared subscription types, you can negatively acknowledge messages individually.
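
For example, with the Python client, a message that fails processing can be negatively acknowledged so that the broker redelivers it later. The sketch below is minimal; `process` is a placeholder for application logic:

```python

import pulsar

client = pulsar.Client('pulsar://localhost:6650')
consumer = client.subscribe('my-topic', 'my-subscription',
                            consumer_type=pulsar.ConsumerType.Shared)

msg = consumer.receive()
try:
    process(msg.data())                 # placeholder application logic
    consumer.acknowledge(msg)           # allow the broker to delete the message
except Exception:
    consumer.negative_acknowledge(msg)  # ask the broker to redeliver it later

client.close()

```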

Be aware that negative acknowledgment on ordered subscription types, such as Exclusive, Failover and Key_Shared, can cause failed messages to arrive at consumers out of the original order.

:::note

If batching is enabled, all other messages in the same batch as the negatively acknowledged messages are redelivered to the consumer.

:::

### Acknowledgement timeout

If a message is not consumed successfully and you want the broker to redeliver it automatically, you can adopt the unacknowledged message automatic re-delivery mechanism. When the acknowledgement timeout is specified, the client tracks the unacknowledged messages within the entire `acktimeout` time range and automatically sends a `redeliver unacknowledged messages` request to the broker.

:::note

If batching is enabled, all other messages in the same batch as the unacknowledged messages are redelivered to the consumer.

:::

:::note

Prefer negative acknowledgements over acknowledgement timeout. Negative acknowledgement controls the re-delivery of individual messages with more precision, and avoids invalid redeliveries when the message processing time exceeds the acknowledgement timeout.

:::

### Dead letter topic

The dead letter topic enables you to continue consuming new messages even when some messages cannot be consumed successfully by a consumer. In this mechanism, messages that fail to be consumed are stored in a separate topic, which is called the dead letter topic. You can decide how to handle the messages in the dead letter topic.

The following example shows how to enable a dead letter topic in a Java client using the default dead letter topic:

```java

Consumer<byte[]> consumer = pulsarClient.newConsumer(Schema.BYTES)
        .topic(topic)
        .subscriptionName("my-subscription")
        .subscriptionType(SubscriptionType.Shared)
        .deadLetterPolicy(DeadLetterPolicy.builder()
                .maxRedeliverCount(maxRedeliveryCount)
                .build())
        .subscribe();

```

The default dead letter topic uses this format:

```

<topicname>-<subscriptionname>-DLQ

```

If you want to specify the name of the dead letter topic, use this Java client example:

```java

Consumer<byte[]> consumer = pulsarClient.newConsumer(Schema.BYTES)
        .topic(topic)
        .subscriptionName("my-subscription")
        .subscriptionType(SubscriptionType.Shared)
        .deadLetterPolicy(DeadLetterPolicy.builder()
                .maxRedeliverCount(maxRedeliveryCount)
                .deadLetterTopic("your-topic-name")
                .build())
        .subscribe();

```

The dead letter topic depends on message re-delivery. Messages are redelivered either due to [acknowledgement timeout](#acknowledgement-timeout) or [negative acknowledgement](#negative-acknowledgement). If you are going to use negative acknowledgement on a message, make sure it is negatively acknowledged before the acknowledgement timeout.

:::note

Currently, the dead letter topic is enabled in the shared and Key_Shared subscription types.

:::

### Retry letter topic

For many online business systems, a message needs to be re-consumed because an exception occurs during business logic processing. To configure the delay time for re-consuming failed messages, you can configure the producer to send messages to both the business topic and the retry letter topic, and enable automatic retry on the consumer. When automatic retry is enabled on the consumer, a message is stored in the retry letter topic if it is not consumed successfully, and therefore the consumer automatically consumes the failed messages from the retry letter topic after a specified delay time.

By default, automatic retry is disabled. You can set `enableRetry` to `true` to enable automatic retry on the consumer.

This example shows how to consume messages from a retry letter topic.

```java

Consumer<byte[]> consumer = pulsarClient.newConsumer(Schema.BYTES)
        .topic(topic)
        .subscriptionName("my-subscription")
        .subscriptionType(SubscriptionType.Shared)
        .enableRetry(true)
        .receiverQueueSize(100)
        .deadLetterPolicy(DeadLetterPolicy.builder()
                .maxRedeliverCount(maxRedeliveryCount)
                .retryLetterTopic("persistent://my-property/my-ns/my-subscription-custom-Retry")
                .build())
        .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest)
        .subscribe();

```

## Topics

As in other pub-sub systems, topics in Pulsar are named channels for transmitting messages from producers to consumers. Topic names are URLs that have a well-defined structure:

```http

{persistent|non-persistent}://tenant/namespace/topic

```

Topic name component | Description
:--------------------|:-----------
`persistent` / `non-persistent` | This identifies the type of topic. Pulsar supports two kinds of topics: [persistent](concepts-architecture-overview.md#persistent-storage) and [non-persistent](#non-persistent-topics). The default is persistent, so if you do not specify a type, the topic is persistent. With persistent topics, all messages are durably persisted on disks (if the broker is not standalone, messages are durably persisted on multiple disks), whereas data for non-persistent topics is not persisted to storage disks.
`tenant` | The topic tenant within the instance. Tenants are essential to multi-tenancy in Pulsar, and are spread across clusters.
`namespace` | The administrative unit of the topic, which acts as a grouping mechanism for related topics. Most topic configuration is performed at the [namespace](#namespaces) level. Each tenant has one or multiple namespaces.
`topic` | The final part of the name. Topic names have no special meaning in a Pulsar instance.

> **No need to explicitly create new topics**
> You do not need to explicitly create topics in Pulsar. If a client attempts to write or receive messages to/from a topic that does not yet exist, Pulsar creates that topic under the namespace provided in the [topic name](#topics) automatically.
> If no tenant or namespace is specified when a client creates a topic, the topic is created in the default tenant and namespace. You can also create a topic in a specified tenant and namespace, such as `persistent://my-tenant/my-namespace/my-topic`. `persistent://my-tenant/my-namespace/my-topic` means the `my-topic` topic is created in the `my-namespace` namespace of the `my-tenant` tenant.

## Namespaces

A namespace is a logical nomenclature within a tenant. A tenant can create multiple namespaces via the [admin API](admin-api-namespaces.md#create). For instance, a tenant with different applications can create a separate namespace for each application. A namespace allows the application to create and manage a hierarchy of topics. For example, `my-tenant/app1` is the namespace for the application `app1` of the tenant `my-tenant`. You can create any number of [topics](#topics) under the namespace.

## Subscriptions

A subscription is a named configuration rule that determines how messages are delivered to consumers. Four subscription types are available in Pulsar: [exclusive](#exclusive), [shared](#shared), [failover](#failover), and [key_shared](#key_shared). These types are illustrated in the figure below.

![Subscription types](/assets/pulsar-subscription-types.png)
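
In the Python client, for example, the subscription type is selected when subscribing; the `ConsumerType` values in the sketch below map to the four types described in this section (topic and subscription names are assumptions):

```python

import pulsar

client = pulsar.Client('pulsar://localhost:6650')

# One consumer per subscription type on the same topic
exclusive = client.subscribe('my-topic', 'sub-exclusive',
                             consumer_type=pulsar.ConsumerType.Exclusive)
failover = client.subscribe('my-topic', 'sub-failover',
                            consumer_type=pulsar.ConsumerType.Failover)
shared = client.subscribe('my-topic', 'sub-shared',
                          consumer_type=pulsar.ConsumerType.Shared)
key_shared = client.subscribe('my-topic', 'sub-key-shared',
                              consumer_type=pulsar.ConsumerType.KeyShared)

```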
-
-![Subscription types](/assets/pulsar-subscription-types.png)
-
-> **Pub-Sub or Queuing**
-> In Pulsar, you can use different subscriptions flexibly.
-> * If you want to achieve traditional "fan-out pub-sub messaging" among consumers, specify a unique subscription name for each consumer. This is the exclusive subscription type.
-> * If you want to achieve "message queuing" among consumers, share the same subscription name among multiple consumers (shared, failover, key_shared).
-> * If you want to achieve both effects simultaneously, combine the exclusive subscription type with other subscription types for consumers.
-
-### Subscription types
-When a subscription has no consumers, its subscription type is undefined. The type of a subscription is defined when a consumer connects to it, and the type can be changed by restarting all consumers with a different configuration.
-
-#### Exclusive
-
-In the *exclusive* type, only a single consumer is allowed to attach to the subscription. If multiple consumers subscribe to a topic using the same subscription, an error occurs.
-
-In the diagram below, only **Consumer A-0** is allowed to consume messages.
-
-> Exclusive is the default subscription type.
-
-![Exclusive subscriptions](/assets/pulsar-exclusive-subscriptions.png)
-
-#### Failover
-
-In the *failover* type, multiple consumers can attach to the same subscription. A master consumer is picked for a non-partitioned topic, or for each partition of a partitioned topic, and receives messages. When the master consumer disconnects, all (non-acknowledged and subsequent) messages are delivered to the next consumer in line.
-
-For partitioned topics, the broker sorts consumers by priority level and by the lexicographical order of consumer names. The broker then tries to evenly assign partitions to the consumers with the highest priority level.
-
-For a non-partitioned topic, the broker picks consumers in the order in which they subscribe to the topic.
-
-In the diagram below, **Consumer-B-0** is the master consumer while **Consumer-B-1** would be the next consumer in line to receive messages if **Consumer-B-0** is disconnected.
-
-![Failover subscriptions](/assets/pulsar-failover-subscriptions.png)
-
-#### Shared
-
-In *shared* or *round robin* mode, multiple consumers can attach to the same subscription. Messages are delivered in a round-robin distribution across consumers, and any given message is delivered to only one consumer. When a consumer disconnects, all the messages that were sent to it and not acknowledged are rescheduled for sending to the remaining consumers.
-
-In the diagram below, **Consumer-C-1** and **Consumer-C-2** are able to subscribe to the topic, but **Consumer-C-3** and others could as well.
-
-> **Limitations of Shared type**
-> When using Shared type, be aware that:
-> * Message ordering is not guaranteed.
-> * You cannot use cumulative acknowledgment with Shared type.
-
-![Shared subscriptions](/assets/pulsar-shared-subscriptions.png)
-
-#### Key_Shared
-
-In *Key_Shared* mode, multiple consumers can attach to the same subscription. Messages are distributed across consumers, and messages with the same key or same ordering key are delivered to only one consumer. No matter how many times a message is re-delivered, it is delivered to the same consumer. When a consumer connects or disconnects, the consumer serving some keys of messages changes.
-
-> **Limitations of Key_Shared type**
-> When you use Key_Shared type, be aware that:
-> * You need to specify a key or orderingKey for messages.
-> * You cannot use cumulative acknowledgment with Key_Shared type.
-> * Your producers should disable batching or use a key-based batch builder (see the sketch after this list).
-
-![Key_Shared subscriptions](/assets/pulsar-key-shared-subscriptions.png)
-
-**You can disable Key_Shared subscription in the `broker.conf` file.**
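-
-As a sketch of the batching limitation above, a producer publishing to a topic that is consumed through a Key_Shared subscription can keep batching enabled by switching to the key-based batch builder. The topic name and key below are placeholders:
-
-```java
-
-// Key-based batching groups messages with the same key into the same batch
-Producer<byte[]> keySharedProducer = pulsarClient.newProducer()
-        .topic("my-topic")
-        .batcherBuilder(BatcherBuilder.KEY_BASED)
-        .create();
-
-keySharedProducer.newMessage()
-        .key("customer-42") // messages with the same key go to the same consumer
-        .value("payload".getBytes())
-        .send();
-
-```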
-
-### Subscription modes
-
-#### What is a subscription mode
-
-The subscription mode indicates the cursor type.
-
-- When a subscription is created, an associated cursor is created to record the last consumed position.
-- When a consumer of the subscription restarts, it can continue consuming from the last consumed position.
-
-Subscription mode | Description | Note
-|---|---|---
-`Durable`|The cursor is durable, which retains messages and persists the current position. <br /><br />If a broker restarts from a failure, it can recover the cursor from the persistent storage (BookKeeper), so that messages can continue to be consumed from the last consumed position.|`Durable` is the **default** subscription mode.
-`NonDurable`|The cursor is non-durable. <br /><br />Once a broker stops, the cursor is lost and can never be recovered, so that messages **cannot** continue to be consumed from the last consumed position.|The Reader's subscription mode is `NonDurable` in nature and does not prevent data in a topic from being deleted. The Reader's subscription mode **cannot** be changed.
-
-A [subscription](#subscriptions) can have one or more consumers. When a consumer subscribes to a topic, it must specify the subscription name. A durable subscription and a non-durable subscription can have the same name; they are independent of each other. If a consumer specifies a subscription that does not already exist, the subscription is automatically created.
-
-#### When to use
-
-By default, messages of a topic without any durable subscriptions are marked as deleted. If you want to prevent the messages from being marked as deleted, you can create a durable subscription for this topic. In this case, only acknowledged messages are marked as deleted. For more information, see [message retention and expiry](cookbooks-retention-expiry.md).
-
-#### How to use
-
-After a consumer is created, the default subscription mode of the consumer is `Durable`. You can change the subscription mode to `NonDurable` by making changes to the consumer's configuration.
-
-````mdx-code-block
-<Tabs defaultValue="Durable"
-  values={[{"label":"Durable","value":"Durable"},{"label":"Non-durable","value":"Non-durable"}]}>
-<TabItem value="Durable">
-
-```java
-
-Consumer<byte[]> consumer = pulsarClient.newConsumer()
-        .topic("my-topic")
-        .subscriptionName("my-sub")
-        .subscriptionMode(SubscriptionMode.Durable)
-        .subscribe();
-
-```
-
-</TabItem>
-<TabItem value="Non-durable">
-
-```java
-
-Consumer<byte[]> consumer = pulsarClient.newConsumer()
-        .topic("my-topic")
-        .subscriptionName("my-sub")
-        .subscriptionMode(SubscriptionMode.NonDurable)
-        .subscribe();
-
-```
-
-</TabItem>
-</Tabs>
-````
-
-For how to create, check, or delete a durable subscription, see [manage subscriptions](admin-api-topics.md/#manage-subscriptions).
-
-## Multi-topic subscriptions
-
-When a consumer subscribes to a Pulsar topic, by default it subscribes to one specific topic, such as `persistent://public/default/my-topic`. As of Pulsar version 1.23.0-incubating, however, Pulsar consumers can simultaneously subscribe to multiple topics. You can define a list of topics in two ways:
-
-* On the basis of a [**reg**ular **ex**pression](https://en.wikipedia.org/wiki/Regular_expression) (regex), for example `persistent://public/default/finance-.*`
-* By explicitly defining a list of topics
-
-> When subscribing to multiple topics by regex, all topics must be in the same [namespace](#namespaces).
-
-When subscribing to multiple topics, the Pulsar client automatically makes a call to the Pulsar API to discover the topics that match the regex pattern/list, and then subscribes to all of them. If any of the topics do not exist, the consumer auto-subscribes to them once the topics are created.
-
-> **No ordering guarantees across multiple topics**
-> When a producer sends messages to a single topic, all messages are guaranteed to be read from that topic in the same order. However, these guarantees do not hold across multiple topics. So when a producer sends messages to multiple topics, the order in which messages are read from those topics is not guaranteed to be the same.
-
-The following are multi-topic subscription examples for Java.
-
-```java
-
-import java.util.regex.Pattern;
-
-import org.apache.pulsar.client.api.Consumer;
-import org.apache.pulsar.client.api.PulsarClient;
-
-PulsarClient pulsarClient = PulsarClient.builder()
-        .serviceUrl("pulsar://localhost:6650")
-        .build();
-
-// Subscribe to all topics in a namespace
-Pattern allTopicsInNamespace = Pattern.compile("persistent://public/default/.*");
-Consumer<byte[]> allTopicsConsumer = pulsarClient.newConsumer()
-        .topicsPattern(allTopicsInNamespace)
-        .subscriptionName("subscription-1")
-        .subscribe();
-
-// Subscribe to a subset of topics in a namespace, based on regex
-Pattern someTopicsInNamespace = Pattern.compile("persistent://public/default/foo.*");
-Consumer<byte[]> someTopicsConsumer = pulsarClient.newConsumer()
-        .topicsPattern(someTopicsInNamespace)
-        .subscriptionName("subscription-1")
-        .subscribe();
-
-```
-
-For code examples, see [Java](client-libraries-java.md#multi-topic-subscriptions).
-
-## Partitioned topics
-
-Normal topics are served only by a single broker, which limits the maximum throughput of the topic. *Partitioned topics* are a special type of topic that is handled by multiple brokers, thus allowing for higher throughput.
-
-A partitioned topic is actually implemented as N internal topics, where N is the number of partitions. When publishing messages to a partitioned topic, each message is routed to one of several brokers. The distribution of partitions across brokers is handled automatically by Pulsar.
-
-The diagram below illustrates this:
-
-![](/assets/partitioning.png)
-
-The **Topic1** topic has five partitions (**P0** through **P4**) split across three brokers. Because there are more partitions than brokers, two brokers handle two partitions apiece, while the third handles only one (again, Pulsar handles this distribution of partitions automatically).
-
-Messages for this topic are broadcast to two consumers. The [routing mode](#routing-modes) determines which partition each message is published to, while the [subscription type](#subscription-types) determines which messages go to which consumers.
-
-Decisions about routing and subscription types can be made separately in most cases. In general, throughput concerns should guide partitioning/routing decisions while subscription decisions should be guided by application semantics.
-
-There is no difference between partitioned topics and normal topics in terms of how subscription types work, as partitioning only determines what happens between when a message is published by a producer and processed and acknowledged by a consumer.
-
-Partitioned topics need to be explicitly created via the [admin API](admin-api-overview.md). The number of partitions can be specified when creating the topic.
-
-### Routing modes
-
-When publishing to partitioned topics, you must specify a *routing mode*. The routing mode determines which partition---that is, which internal topic---each message should be published to.
-
-There are three {@inject: javadoc:MessageRoutingMode:/client/org/apache/pulsar/client/api/MessageRoutingMode} options available:
-
-Mode | Description
-:--------|:------------
-`RoundRobinPartition` | If no key is provided, the producer publishes messages across all partitions in round-robin fashion to achieve maximum throughput. Note that round-robin is not done per individual message; instead, it is applied at the boundary of the batching delay to ensure that batching is effective. If a key is specified on the message, the partitioned producer hashes the key and assigns the message to a particular partition. This is the default mode.
-`SinglePartition` | If no key is provided, the producer randomly picks one single partition and publishes all messages into that partition. If a key is specified on the message, the partitioned producer hashes the key and assigns the message to a particular partition.
-`CustomPartition` | Use a custom message router implementation that is called to determine the partition for a particular message. Users can create a custom routing mode by using the [Java client](client-libraries-java.md) and implementing the {@inject: javadoc:MessageRouter:/client/org/apache/pulsar/client/api/MessageRouter} interface.
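-
-As a rough illustration of the `CustomPartition` mode, the sketch below implements the `MessageRouter` interface with a simple hash of the message key; the routing logic and topic name are placeholders:
-
-```java
-
-MessageRouter customRouter = new MessageRouter() {
-    @Override
-    public int choosePartition(Message<?> msg, TopicMetadata metadata) {
-        // Route keyed messages by a non-negative hash; keyless messages go to partition 0
-        String key = msg.hasKey() ? msg.getKey() : "";
-        return (key.hashCode() & Integer.MAX_VALUE) % metadata.numPartitions();
-    }
-};
-
-Producer<byte[]> producer = pulsarClient.newProducer()
-        .topic("persistent://public/default/partitioned-topic")
-        .messageRouter(customRouter)
-        .create();
-
-```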
-
-### Ordering guarantee
-
-The ordering of messages is related to the MessageRoutingMode and the message key. Usually, users want a per-key-partition ordering guarantee.
-
-If there is a key attached to a message, the messages are routed to corresponding partitions based on the hashing scheme specified by {@inject: javadoc:HashingScheme:/client/org/apache/pulsar/client/api/HashingScheme} in {@inject: javadoc:ProducerBuilder:/client/org/apache/pulsar/client/api/ProducerBuilder}, when using either `SinglePartition` or `RoundRobinPartition` mode.
-
-Ordering guarantee | Description | Routing Mode and Key
-:------------------|:------------|:------------
-Per-key-partition | All the messages with the same key will be in order and be placed in the same partition. | Use either `SinglePartition` or `RoundRobinPartition` mode, and a key is provided with each message.
-Per-producer | All the messages from the same producer will be in order. | Use `SinglePartition` mode, and no key is provided with messages.
-
-### Hashing scheme
-
-{@inject: javadoc:HashingScheme:/client/org/apache/pulsar/client/api/HashingScheme} is an enum that represents the set of standard hashing functions available when choosing the partition to use for a particular message.
-
-There are two types of standard hashing functions available: `JavaStringHash` and `Murmur3_32Hash`.
-The default hashing function for a producer is `JavaStringHash`.
-Note that `JavaStringHash` is not useful when producers are written in multiple client languages; in that case, it is recommended to use `Murmur3_32Hash`.
-
-
-
-## Non-persistent topics
-
-
-By default, Pulsar persistently stores *all* unacknowledged messages on multiple [BookKeeper](concepts-architecture-overview.md#persistent-storage) bookies (storage nodes). Data for messages on persistent topics can thus survive broker restarts and subscriber failover.
-
-Pulsar also, however, supports **non-persistent topics**, which are topics on which messages are *never* persisted to disk and live only in memory. When using non-persistent delivery, killing a Pulsar broker or disconnecting a subscriber from a topic means that all in-transit messages are lost on that (non-persistent) topic, meaning that clients may see message loss.
-
-Non-persistent topics have names of this form (note the `non-persistent` in the name):
-
-```http
-
-non-persistent://tenant/namespace/topic
-
-```
-
-> For more info on using non-persistent topics, see the [Non-persistent messaging cookbook](cookbooks-non-persistent.md).
-
-In non-persistent topics, brokers immediately deliver messages to all connected subscribers *without persisting them* in [BookKeeper](concepts-architecture-overview.md#persistent-storage).
If a subscriber is disconnected, the broker will not be able to deliver those in-transit messages, and subscribers will never be able to receive those messages again. Eliminating the persistent storage step makes messaging on non-persistent topics slightly faster than on persistent topics in some cases, but with the caveat that some of the core benefits of Pulsar are lost.
-
-> With non-persistent topics, message data lives only in memory. If a message broker fails or message data can otherwise not be retrieved from memory, your message data may be lost. Use non-persistent topics only if you're *certain* that your use case requires it and can sustain it.
-
-By default, non-persistent topics are enabled on Pulsar brokers. You can disable them in the broker's [configuration](reference-configuration.md#broker-enableNonPersistentTopics). You can manage non-persistent topics using the `pulsar-admin topics` command. For more information, see [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/).
-
-### Performance
-
-Non-persistent messaging is usually faster than persistent messaging because brokers don't persist messages and immediately send acks back to the producer as soon as the message is delivered to connected subscribers. Producers thus see comparatively low publish latency with non-persistent topics.
-
-### Client API
-
-Producers and consumers can connect to non-persistent topics in the same way as persistent topics, with the crucial difference that the topic name must start with `non-persistent`. The [exclusive](#exclusive), [shared](#shared), and [failover](#failover) subscription types are all supported for non-persistent topics.
-
-Here's an example [Java consumer](client-libraries-java.md#consumers) for a non-persistent topic:
-
-```java
-
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar://localhost:6650")
-        .build();
-String npTopic = "non-persistent://public/default/my-topic";
-String subscriptionName = "my-subscription-name";
-
-Consumer<byte[]> consumer = client.newConsumer()
-        .topic(npTopic)
-        .subscriptionName(subscriptionName)
-        .subscribe();
-
-```
-
-Here's an example [Java producer](client-libraries-java.md#producer) for the same non-persistent topic:
-
-```java
-
-Producer<byte[]> producer = client.newProducer()
-        .topic(npTopic)
-        .create();
-
-```
-
-
-## System topic
-
-A system topic is a predefined topic for internal use within Pulsar. It can be either a persistent or a non-persistent topic.
-
-System topics serve to implement certain features and eliminate dependencies on third-party components, such as transactions, heartbeat detections, topic-level policies, and resource group services. System topics empower the implementation of these features to be simplified, independent, and flexible. Take heartbeat detection as an example: you can leverage the system topic for health checks to internally enable a producer/reader to produce/consume messages under the heartbeat namespace, which can detect whether the current service is still alive.
-
-There are diverse system topics depending on namespaces. The following table outlines the available system topics for each specific namespace.
- -| Namespace | TopicName | Domain | Count | Usage | -|-----------|-----------|--------|-------|-------| -| pulsar/system | `transaction_coordinator_assign_${id}` | Persistent | Default 16 | Transaction coordinator | -| pulsar/system | `_transaction_log${tc_id}` | Persistent | Default 16 | Transaction log | -| pulsar/system | `resource-usage` | Non-persistent | Default 4 | Resource group service | -| host/port | `heartbeat` | Persistent | 1 | Heartbeat detection | -| User-defined-ns | [`__change_events`](concepts-multi-tenancy.md#namespace-change-events-and-topic-level-policies) | Persistent | Default 4 | Topic events | -| User-defined-ns | `__transaction_buffer_snapshot` | Persistent | One per namespace | Transaction buffer snapshots | -| User-defined-ns | `${topicName}__transaction_pending_ack` | Persistent | One per every topic subscription acknowledged with transactions | Acknowledgements with transactions | - -:::note - -* You cannot create any system topics. -* By default, system topics are disabled. To enable system topics, you need to change the following configurations in the `conf/broker.conf` or `conf/standalone.conf` file. - - ```conf - systemTopicEnabled=true - topicLevelPoliciesEnabled=true - ``` - -::: - - -## Message retention and expiry - -By default, Pulsar message brokers: - -* immediately delete *all* messages that have been acknowledged by a consumer, and -* [persistently store](concepts-architecture-overview.md#persistent-storage) all unacknowledged messages in a message backlog. - -Pulsar has two features, however, that enable you to override this default behavior: - -* Message **retention** enables you to store messages that have been acknowledged by a consumer -* Message **expiry** enables you to set a time to live (TTL) for messages that have not yet been acknowledged - -> All message retention and expiry is managed at the [namespace](#namespaces) level. For a how-to, see the [Message retention and expiry](cookbooks-retention-expiry.md) cookbook. - -The diagram below illustrates both concepts: - -![Message retention and expiry](/assets/retention-expiry.png) - -With message retention, shown at the top, a retention policy applied to all topics in a namespace dictates that some messages are durably stored in Pulsar even though they've already been acknowledged. Acknowledged messages that are not covered by the retention policy are deleted. Without a retention policy, *all* of the acknowledged messages would be deleted. - -With message expiry, shown at the bottom, some messages are deleted, even though they haven't been acknowledged, because they've expired according to the TTL applied to the namespace (for example because a TTL of 5 minutes has been applied and the messages haven't been acknowledged but are 10 minutes old). - -## Message deduplication - -Message duplication occurs when a message is [persisted](concepts-architecture-overview.md#persistent-storage) by Pulsar more than once. Message deduplication is an optional Pulsar feature that prevents unnecessary message duplication by processing each message only once, even if the message is received more than once. - -The following diagram illustrates what happens when message deduplication is disabled vs. enabled: - -![Pulsar message deduplication](/assets/message-deduplication.png) - - -Message deduplication is disabled in the scenario shown at the top. 
Here, a producer publishes message 1 on a topic; the message reaches a Pulsar broker and is [persisted](concepts-architecture-overview.md#persistent-storage) to BookKeeper. The producer then sends message 1 again (in this case due to some retry logic), and the message is received by the broker and stored in BookKeeper again, which means that duplication has occurred.
-
-In the second scenario at the bottom, the producer publishes message 1, which is received by the broker and persisted, as in the first scenario. When the producer attempts to publish the message again, however, the broker knows that it has already seen message 1 and thus does not persist the message.
-
-> Message deduplication is handled at the namespace level or the topic level. For more instructions, see the [message deduplication cookbook](cookbooks-deduplication.md).
-
-
-### Producer idempotency
-
-The other available approach to message deduplication is to ensure that each message is *only produced once*. This approach is typically called **producer idempotency**. The drawback of this approach is that it defers the work of message deduplication to the application. In Pulsar, this is handled at the [broker](reference-terminology.md#broker) level, so you do not need to modify your Pulsar client code. Instead, you only need to make administrative changes. For details, see [Managing message deduplication](cookbooks-deduplication.md).
-
-### Deduplication and effectively-once semantics
-
-Message deduplication makes Pulsar an ideal messaging system to be used in conjunction with stream processing engines (SPEs) and other systems seeking to provide effectively-once processing semantics. Messaging systems that do not offer automatic message deduplication require the SPE or other system to guarantee deduplication, which means that strict message ordering comes at the cost of burdening the application with the responsibility of deduplication. With Pulsar, strict ordering guarantees come at no application-level cost.
-
-> You can find more in-depth information in [this post](https://www.splunk.com/en_us/blog/it/exactly-once-is-not-exactly-the-same.html).
-
-## Delayed message delivery
-Delayed message delivery enables you to consume a message later rather than immediately. In this mechanism, a message is stored in BookKeeper as usual; after the message is published to a broker, the `DelayedDeliveryTracker` maintains the time index (time -> messageId) in memory, and the message is delivered to a consumer once the specified delay has passed.
-
-Delayed message delivery only works in the Shared subscription type. In the Exclusive and Failover subscription types, the delayed message is dispatched immediately.
-
-The diagram below illustrates the concept of delayed message delivery:
-
-![Delayed Message Delivery](/assets/message_delay.png)
-
-A broker saves a message without any check. When a consumer consumes a message, if the message is set to be delayed, the message is added to the `DelayedDeliveryTracker`. A subscription checks the `DelayedDeliveryTracker` and retrieves the messages whose delay has expired.
-
-### Broker
-Delayed message delivery is enabled by default. You can change it in the broker configuration file as below:
-
-```
-
-# Whether to enable the delayed delivery for messages.
-# If disabled, messages are immediately delivered and there is no tracking overhead.
-delayedDeliveryEnabled=true
-
-# Control the ticking time for the retry of delayed message delivery,
-# affecting the accuracy of the delivery time compared to the scheduled time.
-# Default is 1 second.
-delayedDeliveryTickTimeMillis=1000
-
-```
-
-### Producer
-The following is an example of delayed message delivery for a producer in Java:
-
-```java
-
-// message to be delivered after the configured delay interval
-producer.newMessage().deliverAfter(3L, TimeUnit.MINUTES).value("Hello Pulsar!").send();
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/concepts-multi-tenancy.md b/site2/website/versioned_docs/version-2.8.2-deprecated/concepts-multi-tenancy.md
deleted file mode 100644
index 93a59557b2efca..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/concepts-multi-tenancy.md
+++ /dev/null
@@ -1,67 +0,0 @@
----
-id: concepts-multi-tenancy
-title: Multi Tenancy
-sidebar_label: "Multi Tenancy"
-original_id: concepts-multi-tenancy
----
-
-Pulsar was created from the ground up as a multi-tenant system. To support multi-tenancy, Pulsar has a concept of tenants. Tenants can be spread across clusters and can each have their own [authentication and authorization](security-overview.md) scheme applied to them. They are also the administrative unit at which storage quotas, [message TTL](cookbooks-retention-expiry.md#time-to-live-ttl), and isolation policies can be managed.
-
-The multi-tenant nature of Pulsar is reflected most visibly in topic URLs, which have this structure:
-
-```http
-
-persistent://tenant/namespace/topic
-
-```
-
-As you can see, the tenant is the most basic unit of categorization for topics (more fundamental than the namespace and topic name).
-
-## Tenants
-
-To each tenant in a Pulsar instance you can assign:
-
-* An [authorization](security-authorization.md) scheme
-* The set of [clusters](reference-terminology.md#cluster) to which the tenant's configuration applies
-
-## Namespaces
-
-Tenants and namespaces are two key concepts of Pulsar to support multi-tenancy.
-
-* Pulsar is provisioned for specified tenants with appropriate capacity allocated to each tenant.
-* A namespace is the administrative unit nomenclature within a tenant. The configuration policies set on a namespace apply to all the topics created in that namespace. A tenant may create multiple namespaces via self-administration using the REST API and the [`pulsar-admin`](reference-pulsar-admin.md) CLI tool. For instance, a tenant with different applications can create a separate namespace for each application.
-
-Names for topics in the same namespace will look like this:
-
-```http
-
-persistent://tenant/app1/topic-1
-
-persistent://tenant/app1/topic-2
-
-persistent://tenant/app1/topic-3
-
-```
-
-### Namespace change events and topic-level policies
-
-Pulsar is a multi-tenant event streaming system. Administrators can manage the tenants and namespaces by setting policies at different levels. However, the policies, such as the retention policy and the storage quota policy, are only available at the namespace level. In many use cases, users need to set a policy at the topic level. The namespace change events approach is proposed to support topic-level policies in an efficient way. In this approach, Pulsar is used as an event log to store namespace change events (such as topic policy changes). This approach has a few benefits:
-- It avoids adding more load to ZooKeeper.
-- It uses Pulsar as an event log for propagating the policy cache, which can scale efficiently.
-- It allows Pulsar SQL to query the namespace changes and audit the system.
-
-Each namespace has a [system topic](concepts-messaging.md#system-topic) named `__change_events`.
This system topic stores change events for a given namespace. The following figure illustrates how to leverage it to update topic-level policies.
-
-![Leverage the system topic to update topic-level policies](/assets/system-topic-for-topic-level-policies.svg)
-
-1. Pulsar Admin clients communicate with the Admin RESTful API to update topic-level policies.
-2. Any broker that receives the Admin HTTP request publishes a topic policy change event to the corresponding system topic (`__change_events`) of the namespace.
-3. Each broker that owns one or more namespace bundles subscribes to the system topic (`__change_events`) to receive the change events of the namespace.
-4. Each broker applies the change events to its policy cache.
-5. Once the policy cache is updated, the broker sends the response back to the Pulsar Admin clients.
-
-:::note
-
-By default, the system topic is disabled. To enable topic-level policies (`topicLevelPoliciesEnabled=true`), you need to enable the system topic by setting `systemTopicEnabled` to `true` in the `conf/broker.conf` or `conf/standalone.conf` file.
-
-:::
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/concepts-multiple-advertised-listeners.md b/site2/website/versioned_docs/version-2.8.2-deprecated/concepts-multiple-advertised-listeners.md
deleted file mode 100644
index f2e1ae0aadc7ca..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/concepts-multiple-advertised-listeners.md
+++ /dev/null
@@ -1,44 +0,0 @@
----
-id: concepts-multiple-advertised-listeners
-title: Multiple advertised listeners
-sidebar_label: "Multiple advertised listeners"
-original_id: concepts-multiple-advertised-listeners
----
-
-When a Pulsar cluster is deployed in a production environment, it may be necessary to expose multiple advertised addresses for the broker. For example, when you deploy a Pulsar cluster in Kubernetes and want other clients, which are not in the same Kubernetes cluster, to connect to the Pulsar cluster, you need to assign a broker URL to external clients. But clients in the same Kubernetes cluster can still connect to the Pulsar cluster through the internal network of Kubernetes.
-
-## Advertised listeners
-
-To ensure clients in both internal and external networks can connect to a Pulsar cluster, Pulsar introduces the `advertisedListeners` and `internalListenerName` configuration options in the [broker configuration file](reference-configuration.md#broker), so that the broker supports exposing multiple advertised listeners and supports the separation of internal and external network traffic.
-
-- The `advertisedListeners` option is used to specify multiple advertised listeners. The broker uses the listener as the broker identifier in the load manager and the bundle owner data. The `advertisedListeners` is formatted as `<listener_name>:pulsar://<host>:<port>, <listener_name>:pulsar+ssl://<host>:<port>`. You can set up the `advertisedListeners` like `advertisedListeners=internal:pulsar://192.168.1.11:6660,internal:pulsar+ssl://192.168.1.11:6651`.
-
-- The `internalListenerName` option is used to specify the internal service URL that the broker uses. You can specify the `internalListenerName` by choosing one of the `advertisedListeners`. The broker uses the listener name of the first advertised listener as the `internalListenerName` if the `internalListenerName` is absent.
-
-After setting up the `advertisedListeners`, clients can choose one of the listeners as the service URL to create a connection to the broker, as long as the network is accessible.
However, if the client creates producers or consumers on a topic, the client must send a lookup request to the broker to get the owner broker, and then connect to the owner broker to publish or consume messages. Therefore, you must allow the client to get the corresponding service URL with the same advertised listener name as the one used by the client. This helps keep the client side simple and secure.
-
-## Use multiple advertised listeners
-
-This example shows how a Pulsar client uses multiple advertised listeners.
-
-1. Configure multiple advertised listeners in the broker configuration file.
-
-```shell
-
-advertisedListeners={listenerName}:pulsar://xxxx:6650,
-{listenerName}:pulsar+ssl://xxxx:6651
-
-```
-
-2. Specify the listener name for the client.
-
-```java
-
-PulsarClient client = PulsarClient.builder()
-    .serviceUrl("pulsar://xxxx:6650")
-    .listenerName("external")
-    .build();
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/concepts-overview.md b/site2/website/versioned_docs/version-2.8.2-deprecated/concepts-overview.md
deleted file mode 100644
index e8a2f4b9d321a7..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/concepts-overview.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-id: concepts-overview
-title: Pulsar Overview
-sidebar_label: "Overview"
-original_id: concepts-overview
----
-
-Pulsar is a multi-tenant, high-performance solution for server-to-server messaging. Originally developed by Yahoo, Pulsar is under the stewardship of the [Apache Software Foundation](https://www.apache.org/).
-
-Key features of Pulsar are listed below:
-
-* Native support for multiple clusters in a Pulsar instance, with seamless [geo-replication](administration-geo.md) of messages across clusters.
-* Very low publish and end-to-end latency.
-* Seamless scalability to over a million topics.
-* A simple [client API](concepts-clients.md) with bindings for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python.md) and [C++](client-libraries-cpp.md).
-* Multiple [subscription types](concepts-messaging.md#subscription-types) ([exclusive](concepts-messaging.md#exclusive), [shared](concepts-messaging.md#shared), and [failover](concepts-messaging.md#failover)) for topics.
-* Guaranteed message delivery with [persistent message storage](concepts-architecture-overview.md#persistent-storage) provided by [Apache BookKeeper](http://bookkeeper.apache.org/).
-* A serverless lightweight computing framework, [Pulsar Functions](functions-overview.md), offers the capability for stream-native data processing.
-* A serverless connector framework, [Pulsar IO](io-overview.md), which is built on Pulsar Functions, makes it easier to move data in and out of Apache Pulsar.
-* [Tiered Storage](concepts-tiered-storage.md) offloads data from hot/warm storage to cold/long-term storage (such as S3 and GCS) when the data is aging out.
-
-## Contents
-
-- [Messaging Concepts](concepts-messaging.md)
-- [Architecture Overview](concepts-architecture-overview.md)
-- [Pulsar Clients](concepts-clients.md)
-- [Geo Replication](concepts-replication.md)
-- [Multi Tenancy](concepts-multi-tenancy.md)
-- [Authentication and Authorization](concepts-authentication.md)
-- [Topic Compaction](concepts-topic-compaction.md)
-- [Tiered Storage](concepts-tiered-storage.md)
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/concepts-proxy-sni-routing.md b/site2/website/versioned_docs/version-2.8.2-deprecated/concepts-proxy-sni-routing.md
deleted file mode 100644
index 51419a66cefe6e..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/concepts-proxy-sni-routing.md
+++ /dev/null
@@ -1,180 +0,0 @@
----
-id: concepts-proxy-sni-routing
-title: Proxy support with SNI routing
-sidebar_label: "Proxy support with SNI routing"
-original_id: concepts-proxy-sni-routing
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-A proxy server is an intermediary server that forwards requests from multiple clients to different servers across the Internet. The proxy server acts as a "traffic cop" in both forward and reverse proxy scenarios, and brings benefits to your system such as load balancing, performance, security, auto-scaling, and so on.
-
-The proxy in Pulsar acts as a reverse proxy, and creates a gateway in front of brokers. Third-party proxies such as Apache Traffic Server (ATS), HAProxy, Nginx, and Envoy are not natively aware of the Pulsar protocol. However, these proxy servers support **SNI routing**, which is used to route traffic to a destination without terminating the SSL connection. Layer 4 routing provides greater transparency because the outbound connection is determined by examining the destination address in the client TCP packets.
-
-Pulsar clients (Java, C++, Python) support the [SNI routing protocol](https://github.com/apache/pulsar/wiki/PIP-60:-Support-Proxy-server-with-SNI-routing), so you can connect to brokers through such a proxy. This document walks you through how to set up the ATS proxy, enable SNI routing, and connect Pulsar clients to brokers through the ATS proxy.
-
-## ATS-SNI Routing in Pulsar
-To support [layer-4 SNI routing](https://docs.trafficserver.apache.org/en/latest/admin-guide/layer-4-routing.en.html) with ATS, the inbound connection must be a TLS connection. The Pulsar client supports the SNI routing protocol on TLS connections, so when Pulsar clients connect to brokers through the ATS proxy, Pulsar uses ATS as a reverse proxy.
-
-Pulsar supports SNI routing for geo-replication, so brokers can connect to brokers in other clusters through the ATS proxy.
-
-This section explains how to set up and use ATS as a reverse proxy, so Pulsar clients can connect to brokers through the ATS proxy using the SNI routing protocol on TLS connections.
-
-### Set up ATS Proxy for layer-4 SNI routing
-To support layer 4 SNI routing, you need to configure the `records.config` and `ssl_server_name.config` files.
-
-![Pulsar client SNI](/assets/pulsar-sni-client.png)
-
-The [records.config](https://docs.trafficserver.apache.org/en/latest/admin-guide/files/records.config.en.html) file is located in the `/usr/local/etc/trafficserver/` directory by default. The file lists configurable variables used by ATS.
-
-To configure the `records.config` file, complete the following steps.
1. Update the TLS port (`http.server_ports`) on which the proxy listens, and update the proxy certs (`ssl.client.cert.path` and `ssl.client.cert.filename`) to secure TLS tunneling.
-2. Configure the server ports (`http.connect_ports`) used for tunneling to the broker. If Pulsar brokers are listening on ports `4443` and `6651`, add these broker service ports to the `http.connect_ports` configuration.
-
-The following is an example.
-
-```
-
-# PROXY TLS PORT
-CONFIG proxy.config.http.server_ports STRING 4443:ssl 4080
-# PROXY CERTS FILE PATH
-CONFIG proxy.config.ssl.client.cert.path STRING /proxy-cert.pem
-# PROXY KEY FILE PATH
-CONFIG proxy.config.ssl.client.cert.filename STRING /proxy-key.pem
-
-
-# The range of origin server ports that can be used for tunneling via CONNECT.
-# Traffic Server allows tunnels only to the specified ports. Supports both wildcards (*) and ranges (e.g. 0-1023).
-CONFIG proxy.config.http.connect_ports STRING 4443 6651
-
-```
-
-The `ssl_server_name` file is used to configure TLS connection handling for inbound and outbound connections. The configuration is determined by the SNI values provided by the inbound connection. The file consists of a set of configuration items, each identified by an SNI value (`fqdn`). When an inbound TLS connection is made, the SNI value from the TLS negotiation is matched against the items specified in this file. If the values match, the values specified in that item override the default values.
-
-The following example shows the mapping between the inbound SNI hostname coming from the client and the actual broker service URL to which the request should be redirected. For example, if the client sends the SNI header `pulsar-broker1`, the proxy creates a TLS tunnel by redirecting the request to the `pulsar-broker1:6651` service URL.
-
-```
-
-server_config = {
-  {
-     fqdn = 'pulsar-broker-vip',
-     # Forward to Pulsar broker which is listening on 6651
-     tunnel_route = 'pulsar-broker-vip:6651'
-  },
-  {
-     fqdn = 'pulsar-broker1',
-     # Forward to Pulsar broker-1 which is listening on 6651
-     tunnel_route = 'pulsar-broker1:6651'
-  },
-  {
-     fqdn = 'pulsar-broker2',
-     # Forward to Pulsar broker-2 which is listening on 6651
-     tunnel_route = 'pulsar-broker2:6651'
-  },
-}
-
-```
-
-After you configure the `ssl_server_name.config` and `records.config` files, the ATS proxy server handles SNI routing and creates a TCP tunnel between the client and the broker.
-
-### Configure Pulsar-client with SNI routing
-ATS SNI routing works only with TLS. You need to enable TLS for the ATS proxy and brokers first, configure the SNI routing protocol, and then connect Pulsar clients to brokers through the ATS proxy. Pulsar clients support SNI routing by connecting to the proxy and sending the target broker URL in the SNI header. This is handled internally. You only need to supply the following proxy configuration when you create a Pulsar client to use the SNI routing protocol.
-
-````mdx-code-block
-<Tabs defaultValue="Java"
-  values={[{"label":"Java","value":"Java"},{"label":"C++","value":"C++"},{"label":"Python","value":"Python"}]}>
-<TabItem value="Java">
-
-```java
-
-String brokerServiceUrl = "pulsar+ssl://pulsar-broker-vip:6651/";
-String proxyUrl = "pulsar+ssl://ats-proxy:443";
-ClientBuilder clientBuilder = PulsarClient.builder()
-        .serviceUrl(brokerServiceUrl)
-        .tlsTrustCertsFilePath(TLS_TRUST_CERT_FILE_PATH)
-        .enableTls(true)
-        .allowTlsInsecureConnection(false)
-        .proxyServiceUrl(proxyUrl, ProxyProtocol.SNI)
-        .operationTimeout(1000, TimeUnit.MILLISECONDS);
-
-Map<String, String> authParams = new HashMap<>();
-authParams.put("tlsCertFile", TLS_CLIENT_CERT_FILE_PATH);
-authParams.put("tlsKeyFile", TLS_CLIENT_KEY_FILE_PATH);
-clientBuilder.authentication(AuthenticationTls.class.getName(), authParams);
-
-PulsarClient pulsarClient = clientBuilder.build();
-
-```
-
-</TabItem>
-<TabItem value="C++">
-
-```c++
-
-ClientConfiguration config = ClientConfiguration();
-config.setUseTls(true);
-config.setTlsTrustCertsFilePath("/path/to/cacert.pem");
-config.setTlsAllowInsecureConnection(false);
-config.setAuth(pulsar::AuthTls::create(
-            "/path/to/client-cert.pem", "/path/to/client-key.pem"));
-
-Client client("pulsar+ssl://ats-proxy:443", config);
-
-```
-
-</TabItem>
-<TabItem value="Python">
-
-```python
-
-from pulsar import Client, AuthenticationTLS
-
-auth = AuthenticationTLS("/path/to/my-role.cert.pem", "/path/to/my-role.key-pk8.pem")
-client = Client("pulsar+ssl://ats-proxy:443",
-                tls_trust_certs_file_path="/path/to/ca.cert.pem",
-                tls_allow_insecure_connection=False,
-                authentication=auth)
-
-```
-
-</TabItem>
-</Tabs>
-````
-
-### Pulsar geo-replication with SNI routing
-You can use the ATS proxy for geo-replication. Pulsar brokers can connect to brokers in geo-replication by using SNI routing. To enable SNI routing for broker connections across clusters, you need to configure the SNI proxy URL in the cluster metadata. If you have configured the SNI proxy URL in the cluster metadata, brokers can connect to brokers in other clusters through the proxy over SNI routing.
-
-![Pulsar client SNI](/assets/pulsar-sni-geo.png)
-
-In this example, a Pulsar cluster is deployed into two separate regions, `us-west` and `us-east`. Both regions are configured with ATS proxy, and brokers in each region run behind the ATS proxy. We configure the cluster metadata for both clusters, so brokers in one cluster can use SNI routing and connect to brokers in other clusters through the ATS proxy.
-
-(a) Configure the cluster metadata for `us-east` with the `us-east` broker service URL and the `us-east` ATS proxy URL with SNI proxy-protocol.
-
-```
-
-./pulsar-admin clusters update \
---broker-url-secure pulsar+ssl://east-broker-vip:6651 \
---url http://east-broker-vip:8080 \
---proxy-protocol SNI \
---proxy-url pulsar+ssl://east-ats-proxy:443
-
-```
-
-(b) Configure the cluster metadata for `us-west` with the `us-west` broker service URL and the `us-west` ATS proxy URL with SNI proxy-protocol.
-
-```
-
-./pulsar-admin clusters update \
---broker-url-secure pulsar+ssl://west-broker-vip:6651 \
---url http://west-broker-vip:8080 \
---proxy-protocol SNI \
---proxy-url pulsar+ssl://west-ats-proxy:443
-
-```
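-
-After updating the cluster metadata, you can read it back to confirm that the proxy URL and proxy protocol were applied. The following is a sketch using the Java admin client; the admin service URL and cluster name come from the example above, and the exact `ClusterData` getters may vary by client version:
-
-```java
-
-import org.apache.pulsar.client.admin.PulsarAdmin;
-import org.apache.pulsar.common.policies.data.ClusterData;
-
-try (PulsarAdmin admin = PulsarAdmin.builder()
-        .serviceHttpUrl("http://east-broker-vip:8080")
-        .build()) {
-    ClusterData east = admin.clusters().getCluster("us-east");
-    System.out.println(east.getProxyServiceUrl()); // pulsar+ssl://east-ats-proxy:443
-    System.out.println(east.getProxyProtocol());   // SNI
-}
-
-```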
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/concepts-replication.md b/site2/website/versioned_docs/version-2.8.2-deprecated/concepts-replication.md
deleted file mode 100644
index 799f0eb4d92c6b..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/concepts-replication.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-id: concepts-replication
-title: Geo Replication
-sidebar_label: "Geo Replication"
-original_id: concepts-replication
----
-
-Pulsar enables messages to be produced and consumed in different geo-locations. For instance, your application may be publishing data in one region or market and you would like to process it for consumption in other regions or markets. [Geo-replication](administration-geo.md) in Pulsar enables you to do that.
-
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/concepts-tiered-storage.md b/site2/website/versioned_docs/version-2.8.2-deprecated/concepts-tiered-storage.md
deleted file mode 100644
index f6988e53a8cd4e..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/concepts-tiered-storage.md
+++ /dev/null
@@ -1,18 +0,0 @@
----
-id: concepts-tiered-storage
-title: Tiered Storage
-sidebar_label: "Tiered Storage"
-original_id: concepts-tiered-storage
----
-
-Pulsar's segment-oriented architecture allows for topic backlogs to grow very large, effectively without limit. However, this can become expensive over time.
-
-One way to alleviate this cost is to use Tiered Storage. With tiered storage, older messages in the backlog can be moved from BookKeeper to a cheaper storage mechanism, while still allowing clients to access the backlog as if nothing had changed.
-
-![Tiered Storage](/assets/pulsar-tiered-storage.png)
-
-> Data written to BookKeeper is replicated to 3 physical machines by default. However, once a segment is sealed in BookKeeper it becomes immutable and can be copied to long-term storage. Long-term storage can achieve cost savings by using mechanisms such as [Reed-Solomon error correction](https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction) to require fewer physical copies of data.
-
-Pulsar currently supports S3, Google Cloud Storage (GCS), and filesystem for [long-term storage](https://pulsar.apache.org/docs/en/cookbooks-tiered-storage/). Offloading to long-term storage is triggered via a REST API or command-line interface. The user passes in the amount of topic data they wish to retain on BookKeeper, and the broker copies the backlog data to long-term storage. The original data is then deleted from BookKeeper after a configured delay (4 hours by default).
-
-> For a guide for setting up tiered storage, see the [Tiered storage cookbook](cookbooks-tiered-storage.md).
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/concepts-topic-compaction.md b/site2/website/versioned_docs/version-2.8.2-deprecated/concepts-topic-compaction.md
deleted file mode 100644
index 34b7ed7fbbd31e..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/concepts-topic-compaction.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-id: concepts-topic-compaction
-title: Topic Compaction
-sidebar_label: "Topic Compaction"
-original_id: concepts-topic-compaction
----
-
-Pulsar was built with highly scalable [persistent storage](concepts-architecture-overview.md#persistent-storage) of message data as a primary objective. Pulsar topics enable you to persistently store as many unacknowledged messages as you need while preserving message ordering. By default, Pulsar stores *all* unacknowledged/unprocessed messages produced on a topic. Accumulating many unacknowledged messages on a topic is necessary for many Pulsar use cases, but it can also be very time-intensive for Pulsar consumers to "rewind" through the entire log of messages.
-
-> For a more practical guide to topic compaction, see the [Topic compaction cookbook](cookbooks-compaction.md).
-
-For some use cases consumers don't need a complete "image" of the topic log.
They may only need a few values to construct a more "shallow" image of the log, perhaps even just the most recent value. For these kinds of use cases Pulsar offers **topic compaction**. When you run compaction on a topic, Pulsar goes through the topic's backlog and removes messages that are *obscured* by later messages, i.e. it goes through the topic on a per-key basis and leaves only the most recent message associated with each key.
-
-Pulsar's topic compaction feature:
-
-* Allows for faster "rewind" through topic logs
-* Applies only to [persistent topics](concepts-architecture-overview.md#persistent-storage)
-* Is triggered automatically when the backlog reaches a certain size, or can be triggered manually via the command line. See the [Topic compaction cookbook](cookbooks-compaction.md)
-* Is conceptually and operationally distinct from [retention and expiry](concepts-messaging.md#message-retention-and-expiry). Topic compaction *does*, however, respect retention. If retention has removed a message from the message backlog of a topic, the message will also not be readable from the compacted topic ledger.
-
-> #### Topic compaction example: the stock ticker
-> An example use case for a compacted Pulsar topic would be a stock ticker topic. On a stock ticker topic, each message bears a timestamped dollar value for stocks for purchase (with the message key holding the stock symbol, e.g. `AAPL` or `GOOG`). With a stock ticker you may care only about the most recent value(s) of the stock and have no interest in historical data (i.e. you don't need to construct a complete image of the topic's sequence of messages per key). Compaction would be highly beneficial in this case because it would keep consumers from needing to rewind through obscured messages.
-
-
-## How topic compaction works
-
-When topic compaction is triggered [via the CLI](cookbooks-compaction.md), Pulsar will iterate over the entire topic from beginning to end. For each key that it encounters, the compaction routine keeps a record of the latest occurrence of that key.
-
-After that, the broker will create a new [BookKeeper ledger](concepts-architecture-overview.md#ledgers) and make a second iteration through each message on the topic. For each message, if the key matches the latest occurrence of that key, then the key's data payload, message ID, and metadata will be written to the newly created ledger. If the key doesn't match the latest occurrence, then the message will be skipped and left alone. If any given message has an empty payload, it will be skipped and considered deleted (akin to the concept of [tombstones](https://en.wikipedia.org/wiki/Tombstone_(data_store)) in key-value databases). At the end of this second iteration through the topic, the newly created BookKeeper ledger is closed and two things are written to the topic's metadata: the ID of the BookKeeper ledger and the message ID of the last compacted message (this is known as the **compaction horizon** of the topic). Once this metadata is written, compaction is complete.
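-
-The two iterations can be sketched with plain collections. The simulation below is illustrative only (it stands in for real topic iteration and ledger writes), but it shows the key-tracking pass, the rewrite pass, and the tombstone rule described above:
-
-```java
-
-import java.util.LinkedHashMap;
-import java.util.List;
-import java.util.Map;
-
-public class CompactionSketch {
-    record Msg(long id, String key, String payload) {}
-
-    public static void main(String[] args) {
-        List<Msg> topic = List.of(
-                new Msg(0, "AAPL", "100"), new Msg(1, "GOOG", "200"),
-                new Msg(2, "AAPL", "101"), new Msg(3, "GOOG", "")); // empty payload = tombstone
-
-        // First iteration: record the latest message ID seen for each key
-        Map<String, Long> latest = new LinkedHashMap<>();
-        topic.forEach(m -> latest.put(m.key(), m.id()));
-
-        // Second iteration: keep a message only if it is the latest for its key
-        // and its payload is non-empty (empty payloads are treated as deletions)
-        Map<String, String> compacted = new LinkedHashMap<>();
-        for (Msg m : topic) {
-            if (latest.get(m.key()) == m.id() && !m.payload().isEmpty()) {
-                compacted.put(m.key(), m.payload());
-            }
-        }
-        System.out.println(compacted); // prints {AAPL=101}
-    }
-}
-
-```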
-
-After the initial compaction operation, the Pulsar [broker](reference-terminology.md#broker) that owns the topic is notified whenever any future changes are made to the compaction horizon and compacted backlog. When such changes occur:
-
-* Clients (consumers and readers) that have `readCompacted` enabled will attempt to read messages from the topic and either:
-  * Read from the topic like normal (if the message ID is greater than or equal to the compaction horizon) or
-  * Read beginning at the compaction horizon (if the message ID is lower than the compaction horizon)
-
-
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/concepts-transactions.md b/site2/website/versioned_docs/version-2.8.2-deprecated/concepts-transactions.md
deleted file mode 100644
index 08490ba06b5d7e..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/concepts-transactions.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-id: transactions
-title: Transactions
-sidebar_label: "Overview"
-original_id: transactions
----
-
-Transactional semantics enable event streaming applications to consume, process, and produce messages in one atomic operation. In Pulsar, a producer or consumer can work with messages across multiple topics and partitions and ensure those messages are processed as a single unit.
-
-The following concepts help you understand Pulsar transactions.
-
-## Transaction coordinator and transaction log
-The transaction coordinator maintains the topics and subscriptions that interact in a transaction. When a transaction is committed, the transaction coordinator interacts with the topic owner broker to complete the transaction.
-
-The transaction coordinator maintains the entire life cycle of transactions, and prevents a transaction from entering an incorrect status.
-
-The transaction coordinator handles transaction timeouts, and ensures that a transaction is aborted after its timeout elapses.
-
-All the transaction metadata is persisted in the transaction log. The transaction log is backed by a Pulsar topic. If the transaction coordinator crashes, it can restore the transaction metadata from the transaction log.
-
-## Transaction ID
-The transaction ID (TxnID) identifies a unique transaction in Pulsar. The transaction ID is 128 bits. The highest 16 bits are reserved for the ID of the transaction coordinator, and the remaining bits are used for monotonically increasing numbers in each transaction coordinator. The TxnID makes it easy to locate a crashed transaction.
-
-## Transaction buffer
-Messages produced within a transaction are stored in the transaction buffer. The messages in the transaction buffer are not materialized (visible) to consumers until the transaction is committed. The messages in the transaction buffer are discarded when the transaction is aborted.
-
-## Pending acknowledge state
-Message acknowledgements within a transaction are maintained by the pending acknowledge state before the transaction completes. If a message is in the pending acknowledge state, the message cannot be acknowledged by other transactions until the message is removed from the pending acknowledge state.
-
-The pending acknowledge state is persisted to the pending acknowledge log. The pending acknowledge log is backed by a Pulsar topic. A new broker can restore the state from the pending acknowledge log to ensure the acknowledgement is not lost.
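-
-The concepts above come together in the client API. The following is a minimal, illustrative sketch of consuming and producing atomically; it assumes a broker with `transactionCoordinatorEnabled=true`, and the topic and subscription names are placeholders:
-
-```java
-
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar://localhost:6650")
-        .enableTransaction(true)
-        .build();
-
-Transaction txn = client.newTransaction()
-        .withTransactionTimeout(5, TimeUnit.MINUTES)
-        .build()
-        .get();
-
-Consumer<byte[]> consumer = client.newConsumer()
-        .topic("input-topic")
-        .subscriptionName("txn-sub")
-        .subscribe();
-Producer<byte[]> producer = client.newProducer()
-        .topic("output-topic")
-        .sendTimeout(0, TimeUnit.SECONDS) // transactional producers must disable the send timeout
-        .create();
-
-Message<byte[]> msg = consumer.receive();
-producer.newMessage(txn).value(msg.getData()).send();     // held in the transaction buffer
-consumer.acknowledgeAsync(msg.getMessageId(), txn).get(); // held in the pending acknowledge state
-txn.commit().get();                                       // both take effect atomically
-
-```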
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/cookbooks-bookkeepermetadata.md b/site2/website/versioned_docs/version-2.8.2-deprecated/cookbooks-bookkeepermetadata.md
deleted file mode 100644
index b0fa98dc3b65d5..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/cookbooks-bookkeepermetadata.md
+++ /dev/null
@@ -1,21 +0,0 @@
----
-id: cookbooks-bookkeepermetadata
-title: BookKeeper Ledger Metadata
-original_id: cookbooks-bookkeepermetadata
----
-
-Pulsar stores data in BookKeeper ledgers. You can understand the contents of a ledger by inspecting the metadata attached to the ledger.
-Such metadata is stored in ZooKeeper and is readable using the BookKeeper APIs.
-
-Description of current metadata:
-
-| Scope | Metadata name | Metadata value |
-| ------------- | ------------- | ------------- |
-| All ledgers | application | 'pulsar' |
-| All ledgers | component | 'managed-ledger', 'schema', 'compacted-topic' |
-| Managed ledgers | pulsar/managed-ledger | name of the ledger |
-| Cursor | pulsar/cursor | name of the cursor |
-| Compacted topic | pulsar/compactedTopic | name of the original topic |
-| Compacted topic | pulsar/compactedTo | id of the last compacted message |
-
-
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/cookbooks-compaction.md b/site2/website/versioned_docs/version-2.8.2-deprecated/cookbooks-compaction.md
deleted file mode 100644
index dfa314727241a8..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/cookbooks-compaction.md
+++ /dev/null
@@ -1,142 +0,0 @@
----
-id: cookbooks-compaction
-title: Topic compaction
-sidebar_label: "Topic compaction"
-original_id: cookbooks-compaction
----
-
-Pulsar's [topic compaction](concepts-topic-compaction.md#compaction) feature enables you to create **compacted** topics in which older, "obscured" entries are pruned from the topic, allowing for faster reads through the topic's history (which messages are deemed obscured/outdated/irrelevant will depend on your use case).
-
-To use compaction:
-
-* You need to give messages keys, as topic compaction in Pulsar takes place on a *per-key basis* (i.e. messages are compacted based on their key). For a stock ticker use case, the stock symbol---e.g. `AAPL` or `GOOG`---could serve as the key (more on this [below](#when-should-i-use-compacted-topics)). Messages without keys will be left alone by the compaction process.
-* Compaction can be configured to run [automatically](#configuring-compaction-to-run-automatically), or you can manually [trigger](#triggering-compaction-manually) compaction using the Pulsar administrative API.
-* Your consumers must be [configured](#consumer-configuration) to read from compacted topics ([Java consumers](#java), for example, have a `readCompacted` setting that must be set to `true`). If this configuration is not set, consumers will still be able to read from the non-compacted topic.
-
-
-> Compaction only works on messages that have keys (as in the stock ticker example, where the stock symbol serves as the key for each message). Keys can thus be thought of as the axis along which compaction is applied. Messages that don't have keys are simply ignored by compaction.
-
-## When should I use compacted topics?
-
-The classic example of a topic that could benefit from compaction would be a stock ticker topic through which consumers can access up-to-date values for specific stocks.
Imagine a scenario in which messages carrying stock value data use the stock symbol as the key (`GOOG`, `AAPL`, `TWTR`, etc.). Compacting this topic would give consumers on the topic two options:

* They can read from the "original," non-compacted topic in case they need access to "historical" values, i.e. the entirety of the topic's messages.
* They can read from the compacted topic if they only want to see the most up-to-date messages.

Thus, if you're using a Pulsar topic called `stock-values`, some consumers could have access to all messages in the topic (perhaps because they're performing some kind of number crunching of all values in the last hour) while the consumers used to power the real-time stock ticker only see the compacted topic (and thus aren't forced to process outdated messages). Which variant of the topic any given consumer pulls messages from is determined by the consumer's [configuration](#consumer-configuration).

> One of the benefits of compaction in Pulsar is that you aren't forced to choose between compacted and non-compacted topics, as the compaction process leaves the original topic as-is and essentially adds an alternate topic. In other words, you can run compaction on a topic and consumers that need access to the non-compacted version of the topic will not be adversely affected.


## Configuring compaction to run automatically

Tenant administrators can configure a policy for compaction at the namespace level. The policy specifies how large the topic backlog can grow before compaction is triggered.

For example, to trigger compaction when the backlog reaches 100MB:

```bash

$ bin/pulsar-admin namespaces set-compaction-threshold \
  --threshold 100M my-tenant/my-namespace

```

Configuring the compaction threshold on a namespace will apply to all topics within that namespace.

## Triggering compaction manually

In order to run compaction on a topic, you need to use the [`topics compact`](reference-pulsar-admin.md#topics-compact) command for the [`pulsar-admin`](reference-pulsar-admin.md) CLI tool. Here's an example:

```bash

$ bin/pulsar-admin topics compact \
  persistent://my-tenant/my-namespace/my-topic

```

The `pulsar-admin` tool runs compaction via the Pulsar {@inject: rest:REST:/} API. To run compaction in its own dedicated process, i.e. *not* through the REST API, you can use the [`pulsar compact-topic`](reference-cli-tools.md#pulsar-compact-topic) command. Here's an example:

```bash

$ bin/pulsar compact-topic \
  --topic persistent://my-tenant/my-namespace/my-topic

```

> Running compaction in its own process is recommended when you want to avoid interfering with the broker's performance. Broker performance should only be affected, however, when running compaction on topics with a large keyspace (i.e. when there are many keys on the topic). The first phase of the compaction process keeps a copy of each key in the topic, which can create memory pressure as the number of keys grows. Using the `pulsar-admin topics compact` command to run compaction through the REST API should present no issues in the overwhelming majority of cases; using `pulsar compact-topic` should correspondingly be considered an edge case.

The `pulsar compact-topic` command communicates with [ZooKeeper](https://zookeeper.apache.org) directly. In order to establish communication with ZooKeeper, though, the `pulsar` CLI tool will need to have a valid [broker configuration](reference-configuration.md#broker).
You can either supply a proper configuration in `conf/broker.conf` or specify a non-default location for the configuration: - -```bash - -$ bin/pulsar compact-topic \ - --broker-conf /path/to/broker.conf \ - --topic persistent://my-tenant/my-namespace/my-topic - -# If the configuration is in conf/broker.conf -$ bin/pulsar compact-topic \ - --topic persistent://my-tenant/my-namespace/my-topic - -``` - -#### When should I trigger compaction? - -How often you [trigger compaction](#triggering-compaction-manually) will vary widely based on the use case. If you want a compacted topic to be extremely speedy on read, then you should run compaction fairly frequently. - -## Consumer configuration - -Pulsar consumers and readers need to be configured to read from compacted topics. The sections below show you how to enable compacted topic reads for Pulsar's language clients. - -### Java - -In order to read from a compacted topic using a Java consumer, the `readCompacted` parameter must be set to `true`. Here's an example consumer for a compacted topic: - -```java - -Consumer compactedTopicConsumer = client.newConsumer() - .topic("some-compacted-topic") - .readCompacted(true) - .subscribe(); - -``` - -As mentioned above, topic compaction in Pulsar works on a *per-key basis*. That means that messages that you produce on compacted topics need to have keys (the content of the key will depend on your use case). Messages that don't have keys will be ignored by the compaction process. Here's an example Pulsar message with a key: - -```java - -import org.apache.pulsar.client.api.Message; -import org.apache.pulsar.client.api.MessageBuilder; - -Message msg = MessageBuilder.create() - .setContent(someByteArray) - .setKey("some-key") - .build(); - -``` - -The example below shows a message with a key being produced on a compacted Pulsar topic: - -```java - -import org.apache.pulsar.client.api.Message; -import org.apache.pulsar.client.api.MessageBuilder; -import org.apache.pulsar.client.api.Producer; -import org.apache.pulsar.client.api.PulsarClient; - -PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://localhost:6650") - .build(); - -Producer compactedTopicProducer = client.newProducer() - .topic("some-compacted-topic") - .create(); - -Message msg = MessageBuilder.create() - .setContent(someByteArray) - .setKey("some-key") - .build(); - -compactedTopicProducer.send(msg); - -``` - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/cookbooks-deduplication.md b/site2/website/versioned_docs/version-2.8.2-deprecated/cookbooks-deduplication.md deleted file mode 100644 index f7f9e3d7bb425b..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/cookbooks-deduplication.md +++ /dev/null @@ -1,151 +0,0 @@ ---- -id: cookbooks-deduplication -title: Message deduplication -sidebar_label: "Message deduplication" -original_id: cookbooks-deduplication ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -When **Message deduplication** is enabled, it ensures that each message produced on Pulsar topics is persisted to disk *only once*, even if the message is produced more than once. Message deduplication is handled automatically on the server side. - -To use message deduplication in Pulsar, you need to configure your Pulsar brokers and clients. - -## How it works - -You can enable or disable message deduplication at the namespace level or the topic level. By default, it is disabled on all namespaces or topics. 
You can enable it in the following ways: - -* Enable deduplication for all namespaces/topics at the broker-level. -* Enable deduplication for a specific namespace with the `pulsar-admin namespaces` interface. -* Enable deduplication for a specific topic with the `pulsar-admin topics` interface. - -## Configure message deduplication - -You can configure message deduplication in Pulsar using the [`broker.conf`](reference-configuration.md#broker) configuration file. The following deduplication-related parameters are available. - -Parameter | Description | Default -:---------|:------------|:------- -`brokerDeduplicationEnabled` | Sets the default behavior for message deduplication in the Pulsar broker. If it is set to `true`, message deduplication is enabled on all namespaces/topics. If it is set to `false`, you have to enable or disable deduplication at the namespace level or the topic level. | `false` -`brokerDeduplicationMaxNumberOfProducers` | The maximum number of producers for which information is stored for deduplication purposes. | `10000` -`brokerDeduplicationEntriesInterval` | The number of entries after which a deduplication informational snapshot is taken. A larger interval leads to fewer snapshots being taken, though this lengthens the topic recovery time (the time required for entries published after the snapshot to be replayed). | `1000` -`brokerDeduplicationSnapshotIntervalSeconds`| The time period after which a deduplication informational snapshot is taken. It runs simultaneously with `brokerDeduplicationEntriesInterval`. |`120` -`brokerDeduplicationProducerInactivityTimeoutMinutes` | The time of inactivity (in minutes) after which the broker discards deduplication information related to a disconnected producer. | `360` (6 hours) - -### Set default value at the broker-level - -By default, message deduplication is *disabled* on all Pulsar namespaces/topics. To enable it on all namespaces/topics, set the `brokerDeduplicationEnabled` parameter to `true` and re-start the broker. - -Even if you set the value for `brokerDeduplicationEnabled`, enabling or disabling via Pulsar admin CLI overrides the default settings at the broker-level. - -### Enable message deduplication - -Though message deduplication is disabled by default at the broker level, you can enable message deduplication for a specific namespace or topic using the [`pulsar-admin namespaces set-deduplication`](reference-pulsar-admin.md#namespace-set-deduplication) or the [`pulsar-admin topics set-deduplication`](reference-pulsar-admin.md#topic-set-deduplication) command. You can use the `--enable`/`-e` flag and specify the namespace/topic. - -The following example shows how to enable message deduplication at the namespace level. - -```bash - -$ bin/pulsar-admin namespaces set-deduplication \ - public/default \ - --enable # or just -e - -``` - -### Disable message deduplication - -Even if you enable message deduplication at the broker level, you can disable message deduplication for a specific namespace or topic using the [`pulsar-admin namespace set-deduplication`](reference-pulsar-admin.md#namespace-set-deduplication) or the [`pulsar-admin topics set-deduplication`](reference-pulsar-admin.md#topic-set-deduplication) command. Use the `--disable`/`-d` flag and specify the namespace/topic. - -The following example shows how to disable message deduplication at the namespace level. 
```bash

$ bin/pulsar-admin namespaces set-deduplication \
  public/default \
  --disable # or just -d

```

## Pulsar clients

If you enable message deduplication in Pulsar brokers, you need to complete the following tasks for your client producers:

1. Specify a name for the producer.
1. Set the message timeout to `0` (namely, no timeout).

The instructions for Java, Python, and C++ clients are different.

````mdx-code-block


To enable message deduplication on a [Java producer](client-libraries-java.md#producers), set the producer name using the `producerName` setter, and set the timeout to `0` using the `sendTimeout` setter.

```java

import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;
import java.util.concurrent.TimeUnit;

PulsarClient pulsarClient = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650")
        .build();
Producer producer = pulsarClient.newProducer()
        .producerName("producer-1")
        .topic("persistent://public/default/topic-1")
        .sendTimeout(0, TimeUnit.SECONDS)
        .create();

```


To enable message deduplication on a [Python producer](client-libraries-python.md#producers), set the producer name using `producer_name`, and set the timeout to `0` using `send_timeout_millis`.

```python

import pulsar

client = pulsar.Client("pulsar://localhost:6650")
producer = client.create_producer(
    "persistent://public/default/topic-1",
    producer_name="producer-1",
    send_timeout_millis=0)

```


To enable message deduplication on a [C++ producer](client-libraries-cpp.md#producer), set the producer name using `setProducerName`, and set the timeout to `0` using `setSendTimeout`.

```cpp

#include <pulsar/Client.h>

std::string serviceUrl = "pulsar://localhost:6650";
std::string topic = "persistent://some-tenant/ns1/topic-1";
std::string producerName = "producer-1";

Client client(serviceUrl);

ProducerConfiguration producerConfig;
producerConfig.setSendTimeout(0);
producerConfig.setProducerName(producerName);

Producer producer;

Result result = client.createProducer(topic, producerConfig, producer);

```


````
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/cookbooks-encryption.md b/site2/website/versioned_docs/version-2.8.2-deprecated/cookbooks-encryption.md
deleted file mode 100644
index f0d8fb8735eb63..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/cookbooks-encryption.md
+++ /dev/null
@@ -1,184 +0,0 @@
---
id: cookbooks-encryption
title: Pulsar Encryption
sidebar_label: "Encryption"
original_id: cookbooks-encryption
---

Pulsar encryption allows applications to encrypt messages at the producer and decrypt them at the consumer. Encryption is performed using the public/private key pair configured by the application. Encrypted messages can only be decrypted by consumers with a valid key.

## Asymmetric and symmetric encryption

Pulsar uses a dynamically generated symmetric AES key to encrypt messages (data). The AES key (data key) is encrypted using an application-provided ECDSA/RSA key pair; as a result, there is no need to share the secret with everyone.

A key is a public/private key pair used for encryption/decryption. The producer key is the public key, and the consumer key is the private key of the key pair.

The application configures the producer with the public key. This key is used to encrypt the AES data key. The encrypted data key is sent as part of the message header.
Only entities with the private key (in this case, the consumer) will be able to decrypt the data key, which is used to decrypt the message.

A message can be encrypted with more than one key. Any one of the keys used for encrypting the message is sufficient to decrypt the message.

Pulsar does not store the encryption key anywhere in the Pulsar service. If you lose/delete the private key, your message is irretrievably lost and unrecoverable.

## Producer
![alt text](/assets/pulsar-encryption-producer.jpg "Pulsar Encryption Producer")

## Consumer
![alt text](/assets/pulsar-encryption-consumer.jpg "Pulsar Encryption Consumer")

## Here are the steps to get started:

1. Create your ECDSA or RSA public/private key pair.

```shell

openssl ecparam -name secp521r1 -genkey -param_enc explicit -out test_ecdsa_privkey.pem
openssl ec -in test_ecdsa_privkey.pem -pubout -outform pkcs8 -out test_ecdsa_pubkey.pem

```

2. Add the public and private keys to your key management system, and configure your producer clients to retrieve public keys and your consumer clients to retrieve private keys.
3. Implement the CryptoKeyReader::getPublicKey() interface on the producer side and the CryptoKeyReader::getPrivateKey() interface on the consumer side; these are invoked by the Pulsar client to load the keys.
4. Add the encryption key to the producer configuration: `conf.addEncryptionKey("myapp.key")`
5. Add the CryptoKeyReader implementation to the producer/consumer configuration: `conf.setCryptoKeyReader(keyReader)`
6. Sample producer application:

```java

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Map;

import org.apache.pulsar.client.api.*;

class RawFileKeyReader implements CryptoKeyReader {

    String publicKeyFile = "";
    String privateKeyFile = "";

    RawFileKeyReader(String pubKeyFile, String privKeyFile) {
        publicKeyFile = pubKeyFile;
        privateKeyFile = privKeyFile;
    }

    @Override
    public EncryptionKeyInfo getPublicKey(String keyName, Map<String, String> keyMeta) {
        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
        try {
            keyInfo.setKey(Files.readAllBytes(Paths.get(publicKeyFile)));
        } catch (IOException e) {
            System.out.println("ERROR: Failed to read public key from file " + publicKeyFile);
            e.printStackTrace();
        }
        return keyInfo;
    }

    @Override
    public EncryptionKeyInfo getPrivateKey(String keyName, Map<String, String> keyMeta) {
        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
        try {
            keyInfo.setKey(Files.readAllBytes(Paths.get(privateKeyFile)));
        } catch (IOException e) {
            System.out.println("ERROR: Failed to read private key from file " + privateKeyFile);
            e.printStackTrace();
        }
        return keyInfo;
    }
}

PulsarClient pulsarClient = PulsarClient.create("http://localhost:8080");

ProducerConfiguration prodConf = new ProducerConfiguration();
prodConf.setCryptoKeyReader(new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem"));
prodConf.addEncryptionKey("myappkey");

Producer producer = pulsarClient.createProducer("persistent://my-tenant/my-ns/my-topic", prodConf);

for (int i = 0; i < 10; i++) {
    producer.send("my-message".getBytes());
}

pulsarClient.close();

```
7. Sample consumer application:

```java

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Map;

import org.apache.pulsar.client.api.*;

class RawFileKeyReader implements CryptoKeyReader {

    String publicKeyFile = "";
    String privateKeyFile = "";

    RawFileKeyReader(String pubKeyFile, String privKeyFile) {
        publicKeyFile = pubKeyFile;
        privateKeyFile = privKeyFile;
    }

    @Override
    public EncryptionKeyInfo getPublicKey(String keyName, Map<String, String> keyMeta) {
        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
        try {
            keyInfo.setKey(Files.readAllBytes(Paths.get(publicKeyFile)));
        } catch (IOException e) {
            System.out.println("ERROR: Failed to read public key from file " + publicKeyFile);
            e.printStackTrace();
        }
        return keyInfo;
    }

    @Override
    public EncryptionKeyInfo getPrivateKey(String keyName, Map<String, String> keyMeta) {
        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
        try {
            keyInfo.setKey(Files.readAllBytes(Paths.get(privateKeyFile)));
        } catch (IOException e) {
            System.out.println("ERROR: Failed to read private key from file " + privateKeyFile);
            e.printStackTrace();
        }
        return keyInfo;
    }
}

ConsumerConfiguration consConf = new ConsumerConfiguration();
consConf.setCryptoKeyReader(new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem"));
PulsarClient pulsarClient = PulsarClient.create("http://localhost:8080");
Consumer consumer = pulsarClient.subscribe("persistent://my-tenant/my-ns/my-topic", "my-subscriber-name", consConf);
Message msg = null;

for (int i = 0; i < 10; i++) {
    msg = consumer.receive();
    // do something
    System.out.println("Received: " + new String(msg.getData()));
}

// Acknowledge the consumption of all messages at once
consumer.acknowledgeCumulative(msg);
pulsarClient.close();

```

## Key rotation
Pulsar generates a new AES data key every 4 hours or after a certain number of messages are published. The asymmetric public key is automatically fetched by the producer every 4 hours by calling CryptoKeyReader::getPublicKey() to retrieve the latest version.

## Enabling encryption at the producer application:
If you produce messages that are consumed across application boundaries, you need to ensure that consumers in other applications have access to one of the private keys that can decrypt the messages. This can be done in two ways:
1. The consumer application provides you access to their public key, which you add to your producer keys
1. You grant access to one of the private keys from the pairs used by the producer

In some cases, the producer may want to encrypt the messages with multiple keys. For this, add all such keys to the config. The consumer will be able to decrypt the message, as long as it has access to at least one of the keys.

For example, if messages need to be encrypted using two keys, myapp.messagekey1 and myapp.messagekey2:

```java

conf.addEncryptionKey("myapp.messagekey1");
conf.addEncryptionKey("myapp.messagekey2");

```

## Decrypting encrypted messages at the consumer application:
Consumers require access to one of the private keys to decrypt messages produced by the producer. If you would like to receive encrypted messages, create a public/private key pair and give your public key to the producer application to encrypt messages using your public key.

## Handling Failures:
* Producer/consumer loses access to the key
  * The producer action will fail, indicating the cause of the failure. The application has the option to proceed with sending an unencrypted message in such cases. Call conf.setCryptoFailureAction(ProducerCryptoFailureAction) to control the producer behavior.
The default behavior is to fail the request.
  * If consumption fails due to a decryption failure or missing keys on the consumer, the application has the option to consume the encrypted message or discard it. Call conf.setCryptoFailureAction(ConsumerCryptoFailureAction) to control the consumer behavior. The default behavior is to fail the request. The application will never be able to decrypt the messages if the private key is permanently lost.
* Batch messaging
  * If decryption fails and the message contains batch messages, the client will not be able to retrieve individual messages in the batch, hence message consumption fails even if conf.setCryptoFailureAction() is set to CONSUME.
* If decryption fails, the message consumption stops and the application will notice backlog growth in addition to decryption failure messages in the client log. If the application does not have access to the private key to decrypt the message, the only option is to skip/discard backlogged messages.

diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/cookbooks-message-queue.md b/site2/website/versioned_docs/version-2.8.2-deprecated/cookbooks-message-queue.md
deleted file mode 100644
index eb43cbde5fb818..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/cookbooks-message-queue.md
+++ /dev/null
@@ -1,127 +0,0 @@
---
id: cookbooks-message-queue
title: Using Pulsar as a message queue
sidebar_label: "Message queue"
original_id: cookbooks-message-queue
---

Message queues are essential components of many large-scale data architectures. If every single work object that passes through your system absolutely *must* be processed in spite of the slowness or downright failure of this or that system component, there's a good chance that you'll need a message queue to step in and ensure that unprocessed data is retained---with correct ordering---until the required actions are taken.

Pulsar is a great choice for a message queue because:

* it was built with [persistent message storage](concepts-architecture-overview.md#persistent-storage) in mind
* it offers automatic load balancing across [consumers](reference-terminology.md#consumer) for messages on a topic (or custom load balancing if you wish)

> You can use the same Pulsar installation to act as a real-time message bus and as a message queue if you wish (or just one or the other). You can set aside some topics for real-time purposes and other topics for message queue purposes (or use specific namespaces for either purpose if you wish).


# Client configuration changes

To use a Pulsar [topic](reference-terminology.md#topic) as a message queue, you should distribute the receiver load on that topic across several consumers (the optimal number of consumers will depend on the load). Each consumer must:

* Establish a [shared subscription](concepts-messaging.md#shared) and use the same subscription name as the other consumers (otherwise the subscription is not shared and the consumers can't act as a processing ensemble)
* If you'd like to have tight control over message dispatching across consumers, set the consumers' **receiver queue** size very low (potentially even to 0 if necessary). Each Pulsar [consumer](reference-terminology.md#consumer) has a receiver queue that determines how many messages the consumer will attempt to fetch at a time. A receiver queue of 1000 (the default), for example, means that the consumer will attempt to process 1000 messages from the topic's backlog upon connection.
Setting the receiver queue to zero essentially means ensuring that each consumer is only doing one thing at a time.

  The downside to restricting the receiver queue size of consumers is that it limits the potential throughput of those consumers and cannot be used with [partitioned topics](reference-terminology.md#partitioned-topic). Whether the performance/control trade-off is worthwhile will depend on your use case.

## Java clients

Here's an example Java consumer configuration that uses a shared subscription:

```java

import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.SubscriptionType;

String SERVICE_URL = "pulsar://localhost:6650";
String TOPIC = "persistent://public/default/mq-topic-1";
String subscription = "sub-1";

PulsarClient client = PulsarClient.builder()
        .serviceUrl(SERVICE_URL)
        .build();

Consumer consumer = client.newConsumer()
        .topic(TOPIC)
        .subscriptionName(subscription)
        .subscriptionType(SubscriptionType.Shared)
        // If you'd like to restrict the receiver queue size
        .receiverQueueSize(10)
        .subscribe();

```

## Python clients

Here's an example Python consumer configuration that uses a shared subscription:

```python

from pulsar import Client, ConsumerType

SERVICE_URL = "pulsar://localhost:6650"
TOPIC = "persistent://public/default/mq-topic-1"
SUBSCRIPTION = "sub-1"

client = Client(SERVICE_URL)
consumer = client.subscribe(
    TOPIC,
    SUBSCRIPTION,
    # If you'd like to restrict the receiver queue size
    receiver_queue_size=10,
    consumer_type=ConsumerType.Shared)

```

## C++ clients

Here's an example C++ consumer configuration that uses a shared subscription:

```cpp

#include <pulsar/Client.h>

std::string serviceUrl = "pulsar://localhost:6650";
std::string topic = "persistent://public/default/mq-topic-1";
std::string subscription = "sub-1";

Client client(serviceUrl);

ConsumerConfiguration consumerConfig;
consumerConfig.setConsumerType(ConsumerShared);
// If you'd like to restrict the receiver queue size
consumerConfig.setReceiverQueueSize(10);

Consumer consumer;

Result result = client.subscribe(topic, subscription, consumerConfig, consumer);

```

## Go clients

Here is an example of a Go consumer configuration that uses a shared subscription:

```go

import "github.com/apache/pulsar-client-go/pulsar"

client, err := pulsar.NewClient(pulsar.ClientOptions{
    URL: "pulsar://localhost:6650",
})
if err != nil {
    log.Fatal(err)
}
consumer, err := client.Subscribe(pulsar.ConsumerOptions{
    Topic:             "persistent://public/default/mq-topic-1",
    SubscriptionName:  "sub-1",
    Type:              pulsar.Shared,
    ReceiverQueueSize: 10, // If you'd like to restrict the receiver queue size
})
if err != nil {
    log.Fatal(err)
}

```

diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/cookbooks-non-persistent.md b/site2/website/versioned_docs/version-2.8.2-deprecated/cookbooks-non-persistent.md
deleted file mode 100644
index 178301e86eb8df..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/cookbooks-non-persistent.md
+++ /dev/null
@@ -1,63 +0,0 @@
---
id: cookbooks-non-persistent
title: Non-persistent messaging
sidebar_label: "Non-persistent messaging"
original_id: cookbooks-non-persistent
---

**Non-persistent topics** are Pulsar topics in which message data is *never* [persistently stored](concepts-architecture-overview.md#persistent-storage) and kept only in memory.
This cookbook provides: - -* A basic [conceptual overview](#overview) of non-persistent topics -* Information about [configurable parameters](#configuration) related to non-persistent topics -* A guide to the [CLI interface](#cli) for managing non-persistent topics - -## Overview - -By default, Pulsar persistently stores *all* unacknowledged messages on multiple [BookKeeper](#persistent-storage) bookies (storage nodes). Data for messages on persistent topics can thus survive broker restarts and subscriber failover. - -Pulsar also, however, supports **non-persistent topics**, which are topics on which messages are *never* persisted to disk and live only in memory. When using non-persistent delivery, killing a Pulsar [broker](reference-terminology.md#broker) or disconnecting a subscriber to a topic means that all in-transit messages are lost on that (non-persistent) topic, meaning that clients may see message loss. - -Non-persistent topics have names of this form (note the `non-persistent` in the name): - -```http - -non-persistent://tenant/namespace/topic - -``` - -> For more high-level information about non-persistent topics, see the [Concepts and Architecture](concepts-messaging.md#non-persistent-topics) documentation. - -## Using - -> In order to use non-persistent topics, they must be [enabled](#enabling) in your Pulsar broker configuration. - -In order to use non-persistent topics, you only need to differentiate them by name when interacting with them. This [`pulsar-client produce`](reference-cli-tools.md#pulsar-client-produce) command, for example, would produce one message on a non-persistent topic in a standalone cluster: - -```bash - -$ bin/pulsar-client produce non-persistent://public/default/example-np-topic \ - --num-produce 1 \ - --messages "This message will be stored only in memory" - -``` - -> For a more thorough guide to non-persistent topics from an administrative perspective, see the [Non-persistent topics](admin-api-topics.md) guide. - -## Enabling - -In order to enable non-persistent topics in a Pulsar broker, the [`enableNonPersistentTopics`](reference-configuration.md#broker-enableNonPersistentTopics) must be set to `true`. This is the default, and so you won't need to take any action to enable non-persistent messaging. - - -> #### Configuration for standalone mode -> If you're running Pulsar in standalone mode, the same configurable parameters are available but in the [`standalone.conf`](reference-configuration.md#standalone) configuration file. - -If you'd like to enable *only* non-persistent topics in a broker, you can set the [`enablePersistentTopics`](reference-configuration.md#broker-enablePersistentTopics) parameter to `false` and the `enableNonPersistentTopics` parameter to `true`. - -## Managing with cli - -Non-persistent topics can be managed using the [`pulsar-admin non-persistent`](reference-pulsar-admin.md#non-persistent) command-line interface. With that interface you can perform actions like [create a partitioned non-persistent topic](reference-pulsar-admin.md#non-persistent-create-partitioned-topic), get [stats](reference-pulsar-admin.md#non-persistent-stats) for a non-persistent topic, [list](reference-pulsar-admin.md) non-persistent topics under a namespace, and more. - -## Using with Pulsar clients - -You shouldn't need to make any changes to your Pulsar clients to use non-persistent messaging beyond making sure that you use proper [topic names](#using) with `non-persistent` as the topic type. 
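For instance, a minimal Java producer and consumer for the non-persistent topic used above might look like the sketch below; only the topic name differs from the persistent case (the service URL is a placeholder for a standalone cluster):

```java

import org.apache.pulsar.client.api.*;

PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650")
        .build();

// Subscribe first: with non-persistent topics, messages sent while no
// subscriber is connected are simply dropped.
Consumer<byte[]> consumer = client.newConsumer()
        .topic("non-persistent://public/default/example-np-topic")
        .subscriptionName("np-sub")
        .subscribe();

Producer<byte[]> producer = client.newProducer()
        .topic("non-persistent://public/default/example-np-topic")
        .create();

producer.send("This message will be stored only in memory".getBytes());

Message<byte[]> msg = consumer.receive();
System.out.println("Received: " + new String(msg.getData()));
consumer.acknowledge(msg);

client.close();

```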
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/cookbooks-partitioned.md b/site2/website/versioned_docs/version-2.8.2-deprecated/cookbooks-partitioned.md
deleted file mode 100644
index fb9ac354cc6d60..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/cookbooks-partitioned.md
+++ /dev/null
@@ -1,7 +0,0 @@
---
id: cookbooks-partitioned
title: Partitioned topics
sidebar_label: "Partitioned Topics"
original_id: cookbooks-partitioned
---
For details of the content, refer to [manage topics](admin-api-topics.md).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/cookbooks-retention-expiry.md b/site2/website/versioned_docs/version-2.8.2-deprecated/cookbooks-retention-expiry.md
deleted file mode 100644
index 192fbe6c53277f..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/cookbooks-retention-expiry.md
+++ /dev/null
@@ -1,413 +0,0 @@
---
id: cookbooks-retention-expiry
title: Message retention and expiry
sidebar_label: "Message retention and expiry"
original_id: cookbooks-retention-expiry
---

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````


Pulsar brokers are responsible for handling messages that pass through Pulsar, including [persistent storage](concepts-architecture-overview.md#persistent-storage) of messages. By default, for each topic, brokers only retain messages that are in at least one backlog. A backlog is the set of unacknowledged messages for a particular subscription. As a topic can have multiple subscriptions, a topic can have multiple backlogs.

As a consequence, no messages are retained (by default) on a topic that has not had any subscriptions created for it.

(Note that messages that are no longer being stored are not necessarily immediately deleted, and may in fact still be accessible until the next ledger rollover. Because clients cannot predict when rollovers may happen, it is not wise to rely on a rollover not happening at an inconvenient point in time.)

In Pulsar, you can modify this behavior, with namespace granularity, in two ways:

* You can persistently store messages that are not within a backlog (because they've been acknowledged on every existing subscription, or because there are no subscriptions) by setting [retention policies](#retention-policies).
* Messages that are not acknowledged within a specified timeframe can be automatically acknowledged by specifying the [time to live](#time-to-live-ttl) (TTL).

Pulsar's [admin interface](admin-api-overview.md) enables you to manage both retention policies and TTL with namespace granularity (and thus within a specific tenant and either on a specific cluster or in the [`global`](concepts-architecture-overview.md#global-cluster) cluster).


> #### Retention and TTL solve two different problems
> * Message retention: Keep the data for at least X hours (even if acknowledged)
> * Time-to-live: Discard data after some time (by automatically acknowledging)
>
> Most applications will want to use at most one of these.


## Retention policies

By default, when a Pulsar message arrives at a broker, the message is stored until it has been acknowledged on all subscriptions, at which point it is marked for deletion. You can override this behavior and retain messages that have already been acknowledged on all subscriptions by setting a *retention policy* for all topics in a given namespace.
Retention is based on both a *size limit* and a *time limit*. - -Retention policies are useful when you use the Reader interface. The Reader interface does not use acknowledgements, and messages do not exist within backlogs. It is required to configure retention for Reader-only use cases. - -When you set a retention policy on topics in a namespace, you must set **both** a *size limit* and a *time limit*. You can refer to the following table to set retention policies in `pulsar-admin` and Java. - -|Time limit|Size limit| Message retention | -|----------|----------|------------------------| -| -1 | -1 | Infinite retention | -| -1 | >0 | Based on the size limit | -| >0 | -1 | Based on the time limit | -| 0 | 0 | Disable message retention (by default) | -| 0 | >0 | Invalid | -| >0 | 0 | Invalid | -| >0 | >0 | Acknowledged messages or messages with no active subscription will not be retained when either time or size reaches the limit. | - -The retention settings apply to all messages on topics that do not have any subscriptions, or to messages that have been acknowledged by all subscriptions. The retention policy settings do not affect unacknowledged messages on topics with subscriptions. The unacknowledged messages are controlled by the backlog quota. - -When a retention limit on a topic is exceeded, the oldest message is marked for deletion until the set of retained messages falls within the specified limits again. - -### Defaults - -You can set message retention at instance level with the following two parameters: `defaultRetentionTimeInMinutes` and `defaultRetentionSizeInMB`. Both parameters are set to `0` by default. - -For more information of the two parameters, refer to the [`broker.conf`](reference-configuration.md#broker) configuration file. - -### Set retention policy - -You can set a retention policy for a namespace by specifying the namespace, a size limit and a time limit in `pulsar-admin`, REST API and Java. - -````mdx-code-block - - - -You can use the [`set-retention`](reference-pulsar-admin.md#namespaces-set-retention) subcommand and specify a namespace, a size limit using the `-s`/`--size` flag, and a time limit using the `-t`/`--time` flag. - -In the following example, the size limit is set to 10 GB and the time limit is set to 3 hours for each topic within the `my-tenant/my-ns` namespace. -- When the size of messages reaches 10 GB on a topic within 3 hours, the acknowledged messages will not be retained. -- After 3 hours, even if the message size is less than 10 GB, the acknowledged messages will not be retained. - -```shell - -$ pulsar-admin namespaces set-retention my-tenant/my-ns \ - --size 10G \ - --time 3h - -``` - -In the following example, the time is not limited and the size limit is set to 1 TB. The size limit determines the retention. - -```shell - -$ pulsar-admin namespaces set-retention my-tenant/my-ns \ - --size 1T \ - --time -1 - -``` - -In the following example, the size is not limited and the time limit is set to 3 hours. The time limit determines the retention. - -```shell - -$ pulsar-admin namespaces set-retention my-tenant/my-ns \ - --size -1 \ - --time 3h - -``` - -To achieve infinite retention, set both values to `-1`. - -```shell - -$ pulsar-admin namespaces set-retention my-tenant/my-ns \ - --size -1 \ - --time -1 - -``` - -To disable the retention policy, set both values to `0`. 
- -```shell - -$ pulsar-admin namespaces set-retention my-tenant/my-ns \ - --size 0 \ - --time 0 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/retention|operation/setRetention?version=@pulsar:version_number@} - -:::note - -To disable the retention policy, you need to set both the size and time limit to `0`. Set either size or time limit to `0` is invalid. - -::: - - - - -```java - -int retentionTime = 10; // 10 minutes -int retentionSize = 500; // 500 megabytes -RetentionPolicies policies = new RetentionPolicies(retentionTime, retentionSize); -admin.namespaces().setRetention(namespace, policies); - -``` - - - - -```` - -### Get retention policy - -You can fetch the retention policy for a namespace by specifying the namespace. The output will be a JSON object with two keys: `retentionTimeInMinutes` and `retentionSizeInMB`. - -#### pulsar-admin - -Use the [`get-retention`](reference-pulsar-admin.md#namespaces) subcommand and specify the namespace. - -##### Example - -```shell - -$ pulsar-admin namespaces get-retention my-tenant/my-ns -{ - "retentionTimeInMinutes": 10, - "retentionSizeInMB": 500 -} - -``` - -#### REST API - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/retention|operation/getRetention?version=@pulsar:version_number@} - -#### Java - -```java - -admin.namespaces().getRetention(namespace); - -``` - -## Backlog quotas - -*Backlogs* are sets of unacknowledged messages for a topic that have been stored by bookies. Pulsar stores all unacknowledged messages in backlogs until they are processed and acknowledged. - -You can control the allowable size of backlogs, at the namespace level, using *backlog quotas*. Setting a backlog quota involves setting: - -TODO: Expand on is this per backlog or per topic? - -* an allowable *size threshold* for each topic in the namespace -* a *retention policy* that determines which action the [broker](reference-terminology.md#broker) takes if the threshold is exceeded. - -The following retention policies are available: - -Policy | Action -:------|:------ -`producer_request_hold` | The broker will hold and not persist produce request payload -`producer_exception` | The broker will disconnect from the client by throwing an exception -`consumer_backlog_eviction` | The broker will begin discarding backlog messages - - -> #### Beware the distinction between retention policy types -> As you may have noticed, there are two definitions of the term "retention policy" in Pulsar, one that applies to persistent storage of messages not in backlogs, and one that applies to messages within backlogs. - - -Backlog quotas are handled at the namespace level. They can be managed via: - -### Set size/time thresholds and backlog retention policies - -You can set a size and/or time threshold and backlog retention policy for all of the topics in a [namespace](reference-terminology.md#namespace) by specifying the namespace, a size limit and/or a time limit in second, and a policy by name. - -#### pulsar-admin - -Use the [`set-backlog-quota`](reference-pulsar-admin.md#namespaces) subcommand and specify a namespace, a size limit using the `-l`/`--limit` flag, and a retention policy using the `-p`/`--policy` flag. 
- -##### Example - -```shell - -$ pulsar-admin namespaces set-backlog-quota my-tenant/my-ns \ - --limit 2G \ - --limitTime 36000 \ - --policy producer_request_hold - -``` - -#### REST API - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/getBacklogQuotaMap?version=@pulsar:version_number@} - -#### Java - -```java - -long sizeLimit = 2147483648L; -BacklogQuota.RetentionPolicy policy = BacklogQuota.RetentionPolicy.producer_request_hold; -BacklogQuota quota = new BacklogQuota(sizeLimit, policy); -admin.namespaces().setBacklogQuota(namespace, quota); - -``` - -### Get backlog threshold and backlog retention policy - -You can see which size threshold and backlog retention policy has been applied to a namespace. - -#### pulsar-admin - -Use the [`get-backlog-quotas`](reference-pulsar-admin.md#pulsar-admin-namespaces-get-backlog-quotas) subcommand and specify a namespace. Here's an example: - -```shell - -$ pulsar-admin namespaces get-backlog-quotas my-tenant/my-ns -{ - "destination_storage": { - "limit" : 2147483648, - "policy" : "producer_request_hold" - } -} - -``` - -#### REST API - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/backlogQuotaMap|operation/getBacklogQuotaMap?version=@pulsar:version_number@} - -#### Java - -```java - -Map quotas = - admin.namespaces().getBacklogQuotas(namespace); - -``` - -### Remove backlog quotas - -#### pulsar-admin - -Use the [`remove-backlog-quota`](reference-pulsar-admin.md#pulsar-admin-namespaces-remove-backlog-quota) subcommand and specify a namespace. Here's an example: - -```shell - -$ pulsar-admin namespaces remove-backlog-quota my-tenant/my-ns - -``` - -#### REST API - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/removeBacklogQuota?version=@pulsar:version_number@} - -#### Java - -```java - -admin.namespaces().removeBacklogQuota(namespace); - -``` - -### Clear backlog - -#### pulsar-admin - -Use the [`clear-backlog`](reference-pulsar-admin.md#pulsar-admin-namespaces-clear-backlog) subcommand. - -##### Example - -```shell - -$ pulsar-admin namespaces clear-backlog my-tenant/my-ns - -``` - -By default, you will be prompted to ensure that you really want to clear the backlog for the namespace. You can override the prompt using the `-f`/`--force` flag. - -## Time to live (TTL) - -By default, Pulsar stores all unacknowledged messages forever. This can lead to heavy disk space usage in cases where a lot of messages are going unacknowledged. If disk space is a concern, you can set a time to live (TTL) that determines how long unacknowledged messages will be retained. - -### Set the TTL for a namespace - -#### pulsar-admin - -Use the [`set-message-ttl`](reference-pulsar-admin.md#pulsar-admin-namespaces-set-message-ttl) subcommand and specify a namespace and a TTL (in seconds) using the `-ttl`/`--messageTTL` flag. - -##### Example - -```shell - -$ pulsar-admin namespaces set-message-ttl my-tenant/my-ns \ - --messageTTL 120 # TTL of 2 minutes - -``` - -#### REST API - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/setNamespaceMessageTTL?version=@pulsar:version_number@} - -#### Java - -```java - -admin.namespaces().setNamespaceMessageTTL(namespace, ttlInSeconds); - -``` - -### Get the TTL configuration for a namespace - -#### pulsar-admin - -Use the [`get-message-ttl`](reference-pulsar-admin.md#pulsar-admin-namespaces-get-message-ttl) subcommand and specify a namespace. 
##### Example

```shell

$ pulsar-admin namespaces get-message-ttl my-tenant/my-ns
60

```

#### REST API

{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/getNamespaceMessageTTL?version=@pulsar:version_number@}

#### Java

```java

admin.namespaces().getNamespaceMessageTTL(namespace)

```

### Remove the TTL configuration for a namespace

#### pulsar-admin

Use the [`remove-message-ttl`](reference-pulsar-admin.md#pulsar-admin-namespaces-remove-message-ttl) subcommand and specify a namespace.

##### Example

```shell

$ pulsar-admin namespaces remove-message-ttl my-tenant/my-ns

```

#### REST API

{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/removeNamespaceMessageTTL?version=@pulsar:version_number@}

#### Java

```java

admin.namespaces().removeNamespaceMessageTTL(namespace)

```

## Delete messages from namespaces

If you do not have a retention period and never have much of a backlog, the upper limit for retaining acknowledged messages equals the Pulsar segment rollover period + entry log rollover period + (garbage collection interval * garbage collection ratios).

- **Segment rollover period**: basically, the segment rollover period is how often a new segment is created. Once a new segment is created, the old segment will be deleted. By default, this happens either when you have written 50,000 entries (messages) or have waited 240 minutes. You can tune this in your broker.

- **Entry log rollover period**: multiple ledgers in BookKeeper are interleaved into an [entry log](https://bookkeeper.apache.org/docs/4.11.1/getting-started/concepts/#entry-logs). Before a deleted ledger's data can be reclaimed, the entry logs containing its entries must all be rolled over.
The entry log rollover period is configurable, but is purely based on the entry log size. For details, see [here](https://bookkeeper.apache.org/docs/4.11.1/reference/config/#entry-log-settings). Once the entry log is rolled over, the entry log can be garbage collected.

- **Garbage collection interval**: because entry logs have interleaved ledgers, to free up space, the entry logs need to be rewritten. The garbage collection interval is how often BookKeeper performs garbage collection, which is related to minor compaction and major compaction of entry logs. For details, see [here](https://bookkeeper.apache.org/docs/4.11.1/reference/config/#entry-log-compaction-settings).
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/cookbooks-tiered-storage.md b/site2/website/versioned_docs/version-2.8.2-deprecated/cookbooks-tiered-storage.md
deleted file mode 100644
index 4c86166c7b1ceb..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/cookbooks-tiered-storage.md
+++ /dev/null
@@ -1,342 +0,0 @@
---
id: cookbooks-tiered-storage
title: Tiered Storage
sidebar_label: "Tiered Storage"
original_id: cookbooks-tiered-storage
---

Pulsar's **Tiered Storage** feature allows older backlog data to be offloaded to long term storage, thereby freeing up space in BookKeeper and reducing storage costs. This cookbook walks you through using tiered storage in your Pulsar cluster.

* Tiered storage uses [Apache jclouds](https://jclouds.apache.org) to support [Amazon S3](https://aws.amazon.com/s3/) and [Google Cloud Storage](https://cloud.google.com/storage/) (GCS for short) for long term storage.
With jclouds, it is easy to add support for more [cloud storage providers](https://jclouds.apache.org/reference/providers/#blobstore-providers) in the future.

* Tiered storage uses [Apache Hadoop](http://hadoop.apache.org/) to support filesystems for long term storage. With Hadoop, it is easy to add support for more filesystems in the future.

## When should I use Tiered Storage?

Tiered storage should be used when you have a topic for which you want to keep a very long backlog for a long time. For example, if you have a topic containing user actions which you use to train your recommendation systems, you may want to keep that data for a long time, so that if you change your recommendation algorithm you can rerun it against your full user history.

## The offloading mechanism

A topic in Pulsar is backed by a log, known as a managed ledger. This log is composed of an ordered list of segments. Pulsar only ever writes to the final segment of the log. All previous segments are sealed. The data within the segment is immutable. This is known as a segment oriented architecture.

![Tiered storage](/assets/pulsar-tiered-storage.png "Tiered Storage")

The Tiered Storage offloading mechanism takes advantage of this segment oriented architecture. When offloading is requested, the segments of the log are copied, one-by-one, to tiered storage. All segments of the log, apart from the segment currently being written to, can be offloaded.

On the broker, the administrator must configure the bucket and credentials for the cloud storage service.
The configured bucket must exist before attempting to offload. If it does not exist, the offload operation will fail.

Pulsar uses multi-part objects to upload the segment data. It is possible that a broker could crash while uploading the data.
We recommend you add a life cycle rule to your bucket to expire incomplete multi-part uploads after a day or two to avoid
getting charged for incomplete uploads.

When ledgers are offloaded to long term storage, you can still query data in the offloaded ledgers with Pulsar SQL.

## Configuring the offload driver

Offloading is configured in ```broker.conf```.

At a minimum, the administrator must configure the driver, the bucket and the authenticating credentials.
There are also some other knobs to configure, like the bucket region, the max block size in the backing storage, etc.

Currently, the following driver types are supported:

- `aws-s3`: [Simple Cloud Storage Service](https://aws.amazon.com/s3/)
- `google-cloud-storage`: [Google Cloud Storage](https://cloud.google.com/storage/)
- `filesystem`: [Filesystem Storage](http://hadoop.apache.org/)

> Driver names are case-insensitive. There is a third driver type, `s3`, which is identical to `aws-s3`,
> though it requires that you specify an endpoint url using `s3ManagedLedgerOffloadServiceEndpoint`. This is useful if
> using a S3 compatible data store, other than AWS.

```conf

managedLedgerOffloadDriver=aws-s3

```

### "aws-s3" Driver configuration

#### Bucket and Region

Buckets are the basic containers that hold your data.
Everything that you store in Cloud Storage must be contained in a bucket.
You can use buckets to organize your data and control access to your data,
but unlike directories and folders, you cannot nest buckets.

```conf

s3ManagedLedgerOffloadBucket=pulsar-topic-offload

```

Bucket Region is the region where the bucket is located. Bucket Region is not a required
but a recommended configuration.
If it is not configured, it will use the default region.

With AWS S3, the default region is `US East (N. Virginia)`. Page [AWS Regions and Endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html) contains more information.

```conf

s3ManagedLedgerOffloadRegion=eu-west-3

```

#### Authentication with AWS

To be able to access AWS S3, you need to authenticate with AWS S3.
Pulsar does not provide any direct means of configuring authentication for AWS S3,
but relies on the mechanisms supported by the [DefaultAWSCredentialsProviderChain](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html).

Once you have created a set of credentials in the AWS IAM console, they can be configured in a number of ways.

1. Using EC2 instance metadata credentials

If you are on an AWS instance with an instance profile that provides credentials, Pulsar will use these credentials
if no other mechanism is provided.

2. Set the environment variables **AWS_ACCESS_KEY_ID** and **AWS_SECRET_ACCESS_KEY** in ```conf/pulsar_env.sh```.

```bash

export AWS_ACCESS_KEY_ID=ABC123456789
export AWS_SECRET_ACCESS_KEY=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c

```

> \"export\" is important so that the variables are made available in the environment of spawned processes.


3. Add the Java system properties *aws.accessKeyId* and *aws.secretKey* to **PULSAR_EXTRA_OPTS** in `conf/pulsar_env.sh`.

```bash

PULSAR_EXTRA_OPTS="${PULSAR_EXTRA_OPTS} ${PULSAR_MEM} ${PULSAR_GC} -Daws.accessKeyId=ABC123456789 -Daws.secretKey=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c -Dio.netty.leakDetectionLevel=disabled -Dio.netty.recycler.maxCapacityPerThread=4096"

```

4. Set the access credentials in ```~/.aws/credentials```.

```conf

[default]
aws_access_key_id=ABC123456789
aws_secret_access_key=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c

```

5. Assuming an IAM role

If you want to assume an IAM role, this can be done by specifying the following:

```conf

s3ManagedLedgerOffloadRole=<aws role arn>
s3ManagedLedgerOffloadRoleSessionName=pulsar-s3-offload

```

This will use the `DefaultAWSCredentialsProviderChain` for assuming this role.

> The broker must be rebooted for credentials specified in pulsar_env to take effect.

#### Configuring the size of block read/write

Pulsar also provides some knobs to configure the size of requests sent to AWS S3.

- ```s3ManagedLedgerOffloadMaxBlockSizeInBytes``` configures the maximum size of
  a "part" sent during a multipart upload. This cannot be smaller than 5MB. Default is 64MB.
- ```s3ManagedLedgerOffloadReadBufferSizeInBytes``` configures the block size for
  each individual read when reading back data from AWS S3. Default is 1MB.

In both cases, these should not be touched unless you know what you are doing.

### "google-cloud-storage" Driver configuration

Buckets are the basic containers that hold your data. Everything that you store in
Cloud Storage must be contained in a bucket. You can use buckets to organize your data and
control access to your data, but unlike directories and folders, you cannot nest buckets.

```conf

gcsManagedLedgerOffloadBucket=pulsar-topic-offload

```

Bucket Region is the region where the bucket is located. Bucket Region is not a required but
a recommended configuration. If it is not configured, it will use the default region.
Regarding GCS, buckets are created in the `us` multi-regional location by default;
page [Bucket Locations](https://cloud.google.com/storage/docs/bucket-locations) contains more information.

```conf

gcsManagedLedgerOffloadRegion=europe-west3

```

#### Authentication with GCS

The administrator needs to configure `gcsManagedLedgerOffloadServiceAccountKeyFile` in `broker.conf`
for the broker to be able to access the GCS service. `gcsManagedLedgerOffloadServiceAccountKeyFile` is
a JSON file containing the GCS credentials of a service account.
The [Service Accounts section of this page](https://support.google.com/googleapi/answer/6158849) contains
more information about how to create this key file for authentication. More information about Google Cloud IAM
is available [here](https://cloud.google.com/storage/docs/access-control/iam).

To generate service account credentials or view the public credentials that you've already generated, complete the following steps:

1. Open the [Service accounts page](https://console.developers.google.com/iam-admin/serviceaccounts).
2. Select a project or create a new one.
3. Click **Create service account**.
4. In the **Create service account** window, type a name for the service account, and select **Furnish a new private key**. If you want to [grant G Suite domain-wide authority](https://developers.google.com/identity/protocols/OAuth2ServiceAccount#delegatingauthority) to the service account, also select **Enable G Suite Domain-wide Delegation**.
5. Click **Create**.

> Note: To ensure that the service account you create has permission to operate GCS, assign the **Storage Admin** permission to your service account as described [here](https://cloud.google.com/storage/docs/access-control/iam).

```conf

gcsManagedLedgerOffloadServiceAccountKeyFile="/Users/hello/Downloads/project-804d5e6a6f33.json"

```

#### Configuring the size of block read/write

Pulsar also provides some knobs to configure the size of requests sent to GCS.

- ```gcsManagedLedgerOffloadMaxBlockSizeInBytes``` configures the maximum size of a "part" sent
  during a multipart upload. This cannot be smaller than 5MB. Default is 64MB.
- ```gcsManagedLedgerOffloadReadBufferSizeInBytes``` configures the block size for each individual
  read when reading back data from GCS. Default is 1MB.

In both cases, these should not be touched unless you know what you are doing.

### "filesystem" Driver configuration


#### Configure connection address

You can configure the connection address in the `broker.conf` file.

```conf

fileSystemURI="hdfs://127.0.0.1:9000"

```

#### Configure Hadoop profile path

The configuration file is stored in the Hadoop profile path. It contains various settings, such as base path, authentication, and so on.

```conf

fileSystemProfilePath="../conf/filesystem_offload_core_site.xml"

```

The model for storing topic data uses `org.apache.hadoop.io.MapFile`. You can use all of the configurations in `org.apache.hadoop.io.MapFile` for Hadoop.

**Example**

```conf

    <property>
        <name>fs.defaultFS</name>
        <value></value>
    </property>

    <property>
        <name>hadoop.tmp.dir</name>
        <value>pulsar</value>
    </property>

    <property>
        <name>io.file.buffer.size</name>
        <value>4096</value>
    </property>

    <property>
        <name>io.seqfile.compress.blocksize</name>
        <value>1000000</value>
    </property>

    <property>
        <name>io.seqfile.compression.type</name>
        <value>BLOCK</value>
    </property>

    <property>
        <name>io.map.index.interval</name>
        <value>128</value>
    </property>

```

For more information about the configurations in `org.apache.hadoop.io.MapFile`, see [Filesystem Storage](http://hadoop.apache.org/).
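Putting the filesystem settings together, a minimal `broker.conf` sketch for enabling this driver might look like the following (the URI and profile path are the placeholder values used above):

```conf

managedLedgerOffloadDriver=filesystem
fileSystemURI="hdfs://127.0.0.1:9000"
fileSystemProfilePath="../conf/filesystem_offload_core_site.xml"

```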
## Configuring offload to run automatically

Namespace policies can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that the topic has stored on the Pulsar cluster. Once the topic reaches the threshold, an offload operation is triggered. Setting the threshold to a negative value disables automatic offloading. Setting the threshold to 0 causes the broker to offload data as soon as it possibly can.

```bash
$ bin/pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace
```

> Automatic offload runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, offload will not run until the current segment is full.

## Configuring read priority for offloaded messages

By default, once messages are offloaded to long-term storage, brokers read them from long-term storage, even though the messages still exist in BookKeeper for a period that depends on the administrator's configuration. For messages that exist in both BookKeeper and long-term storage, if you prefer to read them from BookKeeper, you can use the following commands to change this configuration.

```bash
# default value for -orp is tiered-storage-first
$ bin/pulsar-admin namespaces set-offload-policies my-tenant/my-namespace -orp bookkeeper-first
$ bin/pulsar-admin topics set-offload-policies my-tenant/my-namespace/topic1 -orp bookkeeper-first
```

## Triggering offload manually

Offloading can be manually triggered through a REST endpoint on the Pulsar broker. Pulsar provides a CLI that calls this REST endpoint for you.

When triggering offload, you must specify the maximum size, in bytes, of backlog that will be retained locally on BookKeeper. The offload mechanism offloads segments from the start of the topic backlog until this condition is met.

```bash
$ bin/pulsar-admin topics offload --size-threshold 10M my-tenant/my-namespace/topic1
Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1
```

The command that triggers an offload does not wait until the offload operation has completed. To check the status of the offload, use `offload-status`.

```bash
$ bin/pulsar-admin topics offload-status my-tenant/my-namespace/topic1
Offload is currently running
```

To wait for offload to complete, add the `-w` flag.

```bash
$ bin/pulsar-admin topics offload-status -w my-tenant/my-namespace/topic1
Offload was a success
```

If there is an error offloading, the error is propagated to the `offload-status` command.

```bash
$ bin/pulsar-admin topics offload-status persistent://public/default/topic1
Error in offload
null

Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate.
(Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=
```

diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/deploy-aws.md b/site2/website/versioned_docs/version-2.8.2-deprecated/deploy-aws.md
deleted file mode 100644
index 662e99ceb9b09e..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/deploy-aws.md
+++ /dev/null
@@ -1,271 +0,0 @@
---
id: deploy-aws
title: Deploying a Pulsar cluster on AWS using Terraform and Ansible
sidebar_label: "Amazon Web Services"
original_id: deploy-aws
---

> For instructions on deploying a single Pulsar cluster manually rather than using Terraform and Ansible, see [Deploying a Pulsar cluster on bare metal](deploy-bare-metal.md). For instructions on manually deploying a multi-cluster Pulsar instance, see [Deploying a Pulsar instance on bare metal](deploy-bare-metal-multi-cluster.md).

One of the easiest ways to get a Pulsar [cluster](reference-terminology.md#cluster) running on [Amazon Web Services](https://aws.amazon.com/) (AWS) is to use the [Terraform](https://terraform.io) infrastructure provisioning tool and the [Ansible](https://www.ansible.com) server automation tool. Terraform can create the resources necessary for running the Pulsar cluster---[EC2](https://aws.amazon.com/ec2/) instances, networking and security infrastructure, etc.---while Ansible can install and run Pulsar on the provisioned resources.

## Requirements and setup

In order to install a Pulsar cluster on AWS using Terraform and Ansible, you need the following:

* An [AWS account](https://aws.amazon.com/account/) and the [`aws`](https://aws.amazon.com/cli/) command-line tool
* Python and [pip](https://pip.pypa.io/en/stable/)
* The [`terraform-inventory`](https://github.com/adammck/terraform-inventory) tool, which enables Ansible to use Terraform artifacts

You also need to make sure that you are currently logged into your AWS account via the `aws` tool:

```bash
$ aws configure
```

## Installation

You can install Ansible on Linux or macOS using pip.

```bash
$ pip install ansible
```

You can install Terraform using the instructions [here](https://learn.hashicorp.com/tutorials/terraform/install-cli).

You also need to have the Terraform and Ansible configuration for Pulsar locally on your machine. You can find it in the [GitHub repository](https://github.com/apache/pulsar) of Pulsar, which you can fetch using Git:

```bash
$ git clone https://github.com/apache/pulsar
$ cd pulsar/deployment/terraform-ansible/aws
```

## SSH setup

> If you already have an SSH key and want to use it, you can skip generating an SSH key and instead update the `private_key_file` setting
> in the `ansible.cfg` file and the `public_key_path` setting in the `terraform.tfvars` file.
>
> For example, if you already have a private SSH key in `~/.ssh/pulsar_aws` and a public key in `~/.ssh/pulsar_aws.pub`,
> follow the steps below:
>
> 1. Update `ansible.cfg` with the following values:
>
>    ```shell
>    private_key_file=~/.ssh/pulsar_aws
>    ```
>
> 2.
Update `terraform.tfvars` with the following values:
>
>    ```shell
>    public_key_path=~/.ssh/pulsar_aws.pub
>    ```

In order to create the necessary AWS resources using Terraform, you need to create an SSH key. Enter the following commands to create a private SSH key in `~/.ssh/id_rsa` and a public key in `~/.ssh/id_rsa.pub`:

```bash
$ ssh-keygen -t rsa
```

Do *not* enter a passphrase (hit **Enter** instead when prompted). Enter the following command to verify that a key has been created:

```bash
$ ls ~/.ssh
id_rsa id_rsa.pub
```

## Create AWS resources using Terraform

To start building AWS resources with Terraform, you need to install all Terraform dependencies. Enter the following command:

```bash
$ terraform init
# This will create a .terraform folder
```

After that, you can apply the default Terraform configuration by entering this command:

```bash
$ terraform apply
```

You then see the prompt below:

```bash
Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value:
```

Type `yes` and hit **Enter**. Applying the configuration could take several minutes. When it finishes, you see `Apply complete!` along with some other information, including the number of resources created.

### Apply a non-default configuration

You can apply a non-default Terraform configuration by changing the values in the `terraform.tfvars` file. The following variables are available:

Variable name | Description | Default
:-------------|:------------|:-------
`public_key_path` | The path of the public key that you have generated. | `~/.ssh/id_rsa.pub`
`region` | The AWS region in which the Pulsar cluster runs | `us-west-2`
`availability_zone` | The AWS availability zone in which the Pulsar cluster runs | `us-west-2a`
`aws_ami` | The [Amazon Machine Image](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html) (AMI) that the cluster uses | `ami-9fa343e7`
`num_zookeeper_nodes` | The number of [ZooKeeper](https://zookeeper.apache.org) nodes in the ZooKeeper cluster | 3
`num_bookie_nodes` | The number of bookies that run in the cluster | 3
`num_broker_nodes` | The number of Pulsar brokers that run in the cluster | 2
`num_proxy_nodes` | The number of Pulsar proxies that run in the cluster | 1
`base_cidr_block` | The root [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) that network assets use for the cluster | `10.0.0.0/16`
`instance_types` | The EC2 instance types to be used. This variable is a map with four keys: `zookeeper` for the ZooKeeper instances, `bookie` for the BookKeeper bookies, and `broker` and `proxy` for the Pulsar brokers and proxies | `t2.small` (ZooKeeper), `i3.xlarge` (BookKeeper) and `c5.2xlarge` (Brokers/Proxies)

### What is installed

When you run the Ansible playbook, the following AWS resources are used:

* 9 total [Elastic Compute Cloud](https://aws.amazon.com/ec2) (EC2) instances running the [ami-9fa343e7](https://access.redhat.com/articles/3135091) Amazon Machine Image (AMI), which runs [Red Hat Enterprise Linux (RHEL) 7.4](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/7.4_release_notes/index).
By default, that includes:
  * 3 small VMs for ZooKeeper ([t2.small](https://www.ec2instances.info/?selected=t2.small) instances)
  * 3 larger VMs for BookKeeper [bookies](reference-terminology.md#bookie) ([i3.xlarge](https://www.ec2instances.info/?selected=i3.xlarge) instances)
  * 2 larger VMs for Pulsar [brokers](reference-terminology.md#broker) ([c5.2xlarge](https://www.ec2instances.info/?selected=c5.2xlarge) instances)
  * 1 larger VM for the Pulsar [proxy](reference-terminology.md#proxy) (a [c5.2xlarge](https://www.ec2instances.info/?selected=c5.2xlarge) instance)
* An EC2 [security group](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html)
* A [virtual private cloud](https://aws.amazon.com/vpc/) (VPC) for security
* An [API Gateway](https://aws.amazon.com/api-gateway/) for connections from the outside world
* A [route table](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Route_Tables.html) for the Pulsar cluster's VPC
* A [subnet](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html) for the VPC

All EC2 instances for the cluster run in the [us-west-2](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html) region.

### Fetch your Pulsar connection URL

When you apply the Terraform configuration by entering the command `terraform apply`, Terraform outputs a value for the `pulsar_service_url`. The value should look something like this:

```
pulsar://pulsar-elb-1800761694.us-west-2.elb.amazonaws.com:6650
```

You can fetch that value at any time by entering the command `terraform output pulsar_service_url` or by parsing the `terraform.tfstate` file (which is JSON, even though the filename does not reflect that):

```bash
$ cat terraform.tfstate | jq .modules[0].outputs.pulsar_service_url.value
```

### Destroy your cluster

At any point, you can destroy all AWS resources associated with your cluster using Terraform's `destroy` command:

```bash
$ terraform destroy
```

## Setup Disks

Before you run the Pulsar playbook, you need to mount the disks to the correct directories on the bookie nodes. Since different types of machines have different disk layouts, you need to update the task defined in the `setup-disk.yaml` file after changing the `instance_types` in your Terraform config.

To set up disks on the bookie nodes, enter this command:

```bash
$ ansible-playbook \
  --user='ec2-user' \
  --inventory=`which terraform-inventory` \
  setup-disk.yaml
```

After that, the disks are mounted under `/mnt/journal` as the journal disk and `/mnt/storage` as the ledger disk. Remember to enter this command only once. If you attempt to enter this command again after you have run the Pulsar playbook, your disks might be erased again, causing the bookies to fail to start up.

## Run the Pulsar playbook

Once you have created the necessary AWS resources using Terraform, you can install and run Pulsar on the Terraform-created EC2 instances using Ansible.

(Optional) If you want to use any [built-in IO connectors](io-connectors.md), edit the `Download Pulsar IO packages` task in the `deploy-pulsar.yaml` file and uncomment the connectors you want to use.
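Before running the playbook, you can optionally spot-check that the bookie disks were mounted where the playbook expects them. The following is a sketch using an Ansible ad-hoc command; it assumes the same `ec2-user` login and `terraform-inventory` setup used elsewhere in this guide:

```bash
# Hypothetical sanity check: confirm the journal and ledger mounts exist on every node
$ ansible all \
    --user='ec2-user' \
    --inventory=`which terraform-inventory` \
    -m shell -a 'df -h /mnt/journal /mnt/storage'
```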
To run the playbook, enter this command:

```bash
$ ansible-playbook \
  --user='ec2-user' \
  --inventory=`which terraform-inventory` \
  ../deploy-pulsar.yaml
```

If you have created a private SSH key at a location different from `~/.ssh/id_rsa`, you can specify the different location using the `--private-key` flag in the following command:

```bash
$ ansible-playbook \
  --user='ec2-user' \
  --inventory=`which terraform-inventory` \
  --private-key="~/.ssh/some-non-default-key" \
  ../deploy-pulsar.yaml
```

## Access the cluster

You can now access your running Pulsar cluster using the unique Pulsar connection URL for your cluster, which you can obtain by following the instructions [above](#fetch-your-pulsar-connection-url).

For a quick demonstration of accessing the cluster, we can use the Python client for Pulsar and the Python shell. First, install the Pulsar Python module using pip:

```bash
$ pip install pulsar-client
```

Now, open up the Python shell using the `python` command:

```bash
$ python
```

Once you are in the shell, enter the following commands:

```python
>>> import pulsar
>>> client = pulsar.Client('pulsar://pulsar-elb-1800761694.us-west-2.elb.amazonaws.com:6650')
# Make sure to use your connection URL
>>> producer = client.create_producer('persistent://public/default/test-topic')
>>> producer.send(b'Hello world')  # the Python client expects message content as bytes
>>> client.close()
```

If all of these commands are successful, Pulsar clients can now use your cluster!

diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/deploy-bare-metal-multi-cluster.md b/site2/website/versioned_docs/version-2.8.2-deprecated/deploy-bare-metal-multi-cluster.md
deleted file mode 100644
index 1b23eea07a20bf..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/deploy-bare-metal-multi-cluster.md
+++ /dev/null
@@ -1,486 +0,0 @@
---
id: deploy-bare-metal-multi-cluster
title: Deploying a multi-cluster on bare metal
sidebar_label: "Bare metal multi-cluster"
original_id: deploy-bare-metal-multi-cluster
---

:::tip

1. Single-cluster Pulsar installations should be sufficient for all but the most ambitious use cases. If you are interested in experimenting with Pulsar or using it in a startup or on a single team, it is best to opt for a single cluster. For instructions on deploying a single cluster, see the guide [here](deploy-bare-metal.md).
2. If you want to use all builtin [Pulsar IO](io-overview.md) connectors in your Pulsar deployment, you need to download the `apache-pulsar-io-connectors` package and install `apache-pulsar-io-connectors` under the `connectors` directory in the pulsar directory on every broker node, or on every function-worker node if you run a separate cluster of function workers for [Pulsar Functions](functions-overview.md).
3. If you want to use the [Tiered Storage](concepts-tiered-storage.md) feature in your Pulsar deployment, you need to download the `apache-pulsar-offloaders` package and install `apache-pulsar-offloaders` under the `offloaders` directory in the pulsar directory on every broker node. For more details on how to configure this feature, you can refer to the [Tiered storage cookbook](cookbooks-tiered-storage.md).

:::

A Pulsar *instance* consists of multiple Pulsar clusters working in unison. You can distribute clusters across data centers or geographical regions and replicate the clusters amongst themselves using [geo-replication](administration-geo.md).
Deploying a multi-cluster Pulsar instance involves the following basic steps: - -* Deploying two separate [ZooKeeper](#deploy-zookeeper) quorums: a [local](#deploy-local-zookeeper) quorum for each cluster in the instance and a [configuration store](#configuration-store) quorum for instance-wide tasks -* Initializing [cluster metadata](#cluster-metadata-initialization) for each cluster -* Deploying a [BookKeeper cluster](#deploy-bookkeeper) of bookies in each Pulsar cluster -* Deploying [brokers](#deploy-brokers) in each Pulsar cluster - -If you want to deploy a single Pulsar cluster, see [Clusters and Brokers](getting-started-standalone.md#start-the-cluster). - -> #### Run Pulsar locally or on Kubernetes? -> This guide shows you how to deploy Pulsar in production in a non-Kubernetes environment. If you want to run a standalone Pulsar cluster on a single machine for development purposes, see the [Setting up a local cluster](getting-started-standalone.md) guide. If you want to run Pulsar on [Kubernetes](https://kubernetes.io), see the [Pulsar on Kubernetes](deploy-kubernetes.md) guide, which includes sections on running Pulsar on Kubernetes on [Google Kubernetes Engine](deploy-kubernetes#pulsar-on-google-kubernetes-engine) and on [Amazon Web Services](deploy-kubernetes#pulsar-on-amazon-web-services). - -## System requirement - -Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install 64-bit JRE/JDK 8 or later versions, JRE/JDK 11 is recommended. - -:::note - -Broker is only supported on 64-bit JVM. - -::: - -## Install Pulsar - -To get started running Pulsar, download a binary tarball release in one of the following ways: - -* by clicking the link below and downloading the release from an Apache mirror: - - * Pulsar @pulsar:version@ binary release - -* from the Pulsar [downloads page](pulsar:download_page_url) -* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) -* using [wget](https://www.gnu.org/software/wget): - - ```shell - - $ wget 'https://www.apache.org/dyn/mirrors/mirrors.cgi?action=download&filename=pulsar/pulsar-@pulsar:version@/apache-pulsar-@pulsar:version@-bin.tar.gz' -O apache-pulsar-@pulsar:version@-bin.tar.gz - - ``` - -Once you download the tarball, untar it and `cd` into the resulting directory: - -```bash - -$ tar xvfz apache-pulsar-@pulsar:version@-bin.tar.gz -$ cd apache-pulsar-@pulsar:version@ - -``` - -## What your package contains - -The Pulsar binary package initially contains the following directories: - -Directory | Contains -:---------|:-------- -`bin` | [Command-line tools](reference-cli-tools.md) of Pulsar, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/) -`conf` | Configuration files for Pulsar, including for [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more -`examples` | A Java JAR file containing example [Pulsar Functions](functions-overview.md) -`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files that Pulsar uses -`licenses` | License files, in `.txt` form, for various components of the Pulsar codebase - -The following directories are created once you begin running Pulsar: - -Directory | Contains -:---------|:-------- -`data` | The data storage directory that ZooKeeper and BookKeeper use -`instances` | Artifacts created for [Pulsar Functions](functions-overview.md) -`logs` | Logs that the 
installation creates

## Deploy ZooKeeper

Each Pulsar instance relies on two separate ZooKeeper quorums.

* [Local ZooKeeper](#deploy-local-zookeeper) operates at the cluster level and provides cluster-specific configuration management and coordination. Each Pulsar cluster needs a dedicated ZooKeeper cluster.
* [Configuration Store](#deploy-the-configuration-store) operates at the instance level and provides configuration management for the entire system (and thus across clusters). The configuration store quorum can be provided by an independent cluster of machines or by the same machines used by local ZooKeeper.

### Deploy local ZooKeeper

ZooKeeper manages a variety of essential coordination-related and configuration-related tasks for Pulsar.

You need to stand up one local ZooKeeper cluster *per Pulsar cluster* for deploying a Pulsar instance.

To begin, add all ZooKeeper servers to the quorum configuration specified in the [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file. Add a `server.N` line for each node in the cluster to the configuration, where `N` is the number of the ZooKeeper node. The following is an example for a three-node cluster:

```properties
server.1=zk1.us-west.example.com:2888:3888
server.2=zk2.us-west.example.com:2888:3888
server.3=zk3.us-west.example.com:2888:3888
```

On each host, you need to specify the ID of the node in the `myid` file, which is in the `data/zookeeper` folder of each server by default (you can change the file location via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter).

> See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed information on `myid` and more.

On a ZooKeeper server at `zk1.us-west.example.com`, for example, you could set the `myid` value like this:

```shell
$ mkdir -p data/zookeeper
$ echo 1 > data/zookeeper/myid
```

On `zk2.us-west.example.com` the command looks like `echo 2 > data/zookeeper/myid` and so on.

Once you add each server to the `zookeeper.conf` configuration and each server has the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:

```shell
$ bin/pulsar-daemon start zookeeper
```

### Deploy the configuration store

The ZooKeeper cluster that is configured and started up in the section above is a *local* ZooKeeper cluster that you can use to manage a single Pulsar cluster. In addition to a local cluster, however, a full Pulsar instance also requires a configuration store for handling some instance-level configuration and coordination tasks.

If you deploy a [single-cluster](#single-cluster-pulsar-instance) instance, you do not need a separate cluster for the configuration store. If, however, you deploy a [multi-cluster](#multi-cluster-pulsar-instance) instance, you should stand up a separate ZooKeeper cluster for configuration tasks.

#### Single-cluster Pulsar instance

If your Pulsar instance consists of just one cluster, then you can deploy a configuration store on the same machines as the local ZooKeeper quorum but running on different TCP ports.
To deploy a ZooKeeper configuration store in a single-cluster instance, add the same ZooKeeper servers that the local quorum uses to the configuration file in [`conf/global_zookeeper.conf`](reference-configuration.md#configuration-store) using the same method as for [local ZooKeeper](#deploy-local-zookeeper), but make sure to use a different port (2181 is the default for ZooKeeper). The following is an example that uses port 2184 for a three-node ZooKeeper cluster:

```properties
clientPort=2184
server.1=zk1.us-west.example.com:2185:2186
server.2=zk2.us-west.example.com:2185:2186
server.3=zk3.us-west.example.com:2185:2186
```

As before, create the `myid` files for each server in `data/global-zookeeper/myid`.

#### Multi-cluster Pulsar instance

When you deploy a global Pulsar instance, with clusters distributed across different geographical regions, the configuration store serves as a highly available and strongly consistent metadata store that can tolerate failures and partitions spanning whole regions.

The key here is to make sure the ZK quorum members are spread across at least 3 regions and that the other regions run as observers.

Again, given the very low expected load on the configuration store servers, you can share the same hosts used for the local ZooKeeper quorum.

For example, assume a Pulsar instance with the following clusters: `us-west`, `us-east`, `us-central`, `eu-central`, and `ap-south`. Also assume that each cluster has its own local ZK servers named as follows:

```
zk[1-3].${CLUSTER}.example.com
```

In this scenario, you want to pick the quorum participants from a few clusters and let all the others be ZK observers. For example, to form a 7-server quorum, you can pick 3 servers from `us-west`, 2 from `us-central`, and 2 from `us-east`.

This method guarantees that writes to the configuration store are possible even if one of these regions is unreachable.

The ZK configuration in all the servers looks like:

```properties
clientPort=2184
server.1=zk1.us-west.example.com:2185:2186
server.2=zk2.us-west.example.com:2185:2186
server.3=zk3.us-west.example.com:2185:2186
server.4=zk1.us-central.example.com:2185:2186
server.5=zk2.us-central.example.com:2185:2186
server.6=zk3.us-central.example.com:2185:2186:observer
server.7=zk1.us-east.example.com:2185:2186
server.8=zk2.us-east.example.com:2185:2186
server.9=zk3.us-east.example.com:2185:2186:observer
server.10=zk1.eu-central.example.com:2185:2186:observer
server.11=zk2.eu-central.example.com:2185:2186:observer
server.12=zk3.eu-central.example.com:2185:2186:observer
server.13=zk1.ap-south.example.com:2185:2186:observer
server.14=zk2.ap-south.example.com:2185:2186:observer
server.15=zk3.ap-south.example.com:2185:2186:observer
```

Additionally, ZK observers need to have the following parameter:

```properties
peerType=observer
```

##### Start the service

Once your configuration store configuration is in place, you can start up the service using [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon):

```shell
$ bin/pulsar-daemon start configuration-store
```

## Cluster metadata initialization

Once you set up the cluster-specific ZooKeeper and configuration store quorums for your instance, you need to write some metadata to ZooKeeper for each cluster in your instance. **You need to write this metadata only once.**
- -You can initialize this metadata using the [`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata) command of the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool. The following is an example: - -```shell - -$ bin/pulsar initialize-cluster-metadata \ - --cluster us-west \ - --zookeeper zk1.us-west.example.com:2181 \ - --configuration-store zk1.us-west.example.com:2184 \ - --web-service-url http://pulsar.us-west.example.com:8080/ \ - --web-service-url-tls https://pulsar.us-west.example.com:8443/ \ - --broker-service-url pulsar://pulsar.us-west.example.com:6650/ \ - --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651/ - -``` - -As you can see from the example above, you need to specify the following: - -* The name of the cluster -* The local ZooKeeper connection string for the cluster -* The configuration store connection string for the entire instance -* The web service URL for the cluster -* A broker service URL enabling interaction with the [brokers](reference-terminology.md#broker) in the cluster - -If you use [TLS](security-tls-transport.md), you also need to specify a TLS web service URL for the cluster as well as a TLS broker service URL for the brokers in the cluster. - -Make sure to run `initialize-cluster-metadata` for each cluster in your instance. - -## Deploy BookKeeper - -BookKeeper provides [persistent message storage](concepts-architecture-overview.md#persistent-storage) for Pulsar. - -Each Pulsar broker needs to have its own cluster of bookies. The BookKeeper cluster shares a local ZooKeeper quorum with the Pulsar cluster. - -### Configure bookies - -You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. The most important aspect of configuring each bookie is ensuring that the [`zkServers`](reference-configuration.md#bookkeeper-zkServers) parameter is set to the connection string for the local ZooKeeper of Pulsar cluster. - -### Start bookies - -You can start a bookie in two ways: in the foreground or as a background daemon. - -To start a bookie in the background, use the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool: - -```bash - -$ bin/pulsar-daemon start bookie - -``` - -You can verify that the bookie works properly using the `bookiesanity` command for the [BookKeeper shell](reference-cli-tools.md#bookkeeper-shell): - -```shell - -$ bin/bookkeeper shell bookiesanity - -``` - -This command creates a new ledger on the local bookie, writes a few entries, reads them back and finally deletes the ledger. - -After you have started all bookies, you can use the `simpletest` command for [BookKeeper shell](reference-cli-tools.md#shell) on any bookie node, to verify that all bookies in the cluster are running. - -```bash - -$ bin/bookkeeper shell simpletest --ensemble --writeQuorum --ackQuorum --numEntries - -``` - -Bookie hosts are responsible for storing message data on disk. In order for bookies to provide optimal performance, having a suitable hardware configuration is essential for the bookies. The following are key dimensions for bookie hardware capacity. - -* Disk I/O capacity read/write -* Storage capacity - -Message entries written to bookies are always synced to disk before returning an acknowledgement to the Pulsar broker. To ensure low write latency, BookKeeper is -designed to use multiple devices: - -* A **journal** to ensure durability. 
For sequential writes, having fast [fsync](https://linux.die.net/man/2/fsync) operations on bookie hosts is critical. Typically, small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) should suffice, or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache. Both solutions can reach fsync latency of ~0.4 ms.
* A **ledger storage device** is where data is stored until all consumers acknowledge the message. Writes happen in the background, so write I/O is not a big concern. Reads happen sequentially most of the time, and the backlog is drained only in case of a consumer drain. To store large amounts of data, a typical configuration involves multiple HDDs with a RAID controller.

## Deploy brokers

Once you set up ZooKeeper, initialize cluster metadata, and spin up BookKeeper bookies, you can deploy brokers.

### Broker configuration

You can configure brokers using the [`conf/broker.conf`](reference-configuration.md#broker) configuration file.

The most important element of broker configuration is ensuring that each broker is aware of its local ZooKeeper quorum as well as the configuration store quorum. Make sure that you set the [`zookeeperServers`](reference-configuration.md#broker-zookeeperServers) parameter to reflect the local quorum and the [`configurationStoreServers`](reference-configuration.md#broker-configurationStoreServers) parameter to reflect the configuration store quorum (although you need to specify only those ZooKeeper servers located in the same cluster).

You also need to specify the name of the [cluster](reference-terminology.md#cluster) to which the broker belongs using the [`clusterName`](reference-configuration.md#broker-clusterName) parameter. In addition, you need to match the broker and web service ports provided when you initialized the metadata of the cluster (especially when you use a port other than the default).

The following is an example configuration:

```properties
# Local ZooKeeper servers
zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181

# Configuration store quorum connection string.
configurationStoreServers=zk1.us-west.example.com:2184,zk2.us-west.example.com:2184,zk3.us-west.example.com:2184

clusterName=us-west

# Broker data port
brokerServicePort=6650

# Broker data port for TLS
brokerServicePortTls=6651

# Port to use to serve HTTP requests
webServicePort=8080

# Port to use to serve HTTPS requests
webServicePortTls=8443
```

### Broker hardware

Pulsar brokers do not require any special hardware since they do not use the local disk. Fast CPUs and a 10Gbps [NIC](https://en.wikipedia.org/wiki/Network_interface_controller) are recommended so that the software can take full advantage of them.

### Start the broker service

You can start a broker in the background by using [nohup](https://en.wikipedia.org/wiki/Nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:

```shell
$ bin/pulsar-daemon start broker
```

You can also start brokers in the foreground by using [`pulsar broker`](reference-cli-tools.md#broker):

```shell
$ bin/pulsar broker
```

## Service discovery

[Clients](getting-started-clients.md) connecting to Pulsar brokers need to be able to communicate with an entire Pulsar instance using a single URL.
Pulsar provides a built-in service discovery mechanism that you can set up using the instructions [immediately below](#service-discovery-setup).

You can also use your own service discovery system if you want. If you use your own system, you only need to satisfy one requirement: when a client performs an HTTP request to an [endpoint](reference-configuration.md) for a Pulsar cluster, such as `http://pulsar.us-west.example.com:8080`, the client needs to be redirected to *some* active broker in the desired cluster, whether via DNS, an HTTP or IP redirect, or some other means.

> #### Service discovery already provided by many scheduling systems
> Many large-scale deployment systems, such as [Kubernetes](deploy-kubernetes.md), have service discovery systems built in. If you run Pulsar on such a system, you may not need to provide your own service discovery mechanism.

### Service discovery setup

The service discovery mechanism included with Pulsar maintains a list of active brokers, stored in ZooKeeper, and supports lookup using HTTP as well as Pulsar's [binary protocol](developing-binary-protocol.md).

To get started setting up Pulsar's built-in service discovery, you need to change a few parameters in the [`conf/discovery.conf`](reference-configuration.md#service-discovery) configuration file. Set the [`zookeeperServers`](reference-configuration.md#service-discovery-zookeeperServers) parameter to the ZooKeeper quorum connection string of the cluster and the [`configurationStoreServers`](reference-configuration.md#service-discovery-configurationStoreServers) setting to the [configuration store](reference-terminology.md#configuration-store) quorum connection string.

```properties
# Zookeeper quorum connection string
zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181

# Global configuration store connection string
configurationStoreServers=zk1.us-west.example.com:2184,zk2.us-west.example.com:2184,zk3.us-west.example.com:2184
```

To start the discovery service:

```shell
$ bin/pulsar-daemon start discovery
```

## Admin client and verification

At this point your Pulsar instance should be ready to use. You can now configure client machines that can serve as [administrative clients](admin-api-overview.md) for each cluster. You can use the [`conf/client.conf`](reference-configuration.md#client) configuration file to configure admin clients.

The most important thing is to point the [`serviceUrl`](reference-configuration.md#client-serviceUrl) parameter to the correct service URL for the cluster:

```properties
serviceUrl=http://pulsar.us-west.example.com:8080/
```

## Provision new tenants

Pulsar is built as a fundamentally multi-tenant system.

If a new tenant wants to use the system, you need to create a new tenant. You can do so with the [`pulsar-admin`](reference-pulsar-admin.md#tenants) CLI tool:

```shell
$ bin/pulsar-admin tenants create test-tenant \
  --allowed-clusters us-west \
  --admin-roles test-admin-role
```

In this command, users who identify with the `test-admin-role` role can administer the configuration for the `test-tenant` tenant. The `test-tenant` tenant can only use the `us-west` cluster. From now on, this tenant can manage its resources.

Once you create a tenant, you need to create [namespaces](reference-terminology.md#namespace) for topics within that tenant.
- - -The first step is to create a namespace. A namespace is an administrative unit that can contain many topics. A common practice is to create a namespace for each different use case from a single tenant. - -```shell - -$ bin/pulsar-admin namespaces create test-tenant/ns1 - -``` - -##### Test producer and consumer - - -Everything is now ready to send and receive messages. The quickest way to test the system is through the [`pulsar-perf`](reference-cli-tools.md#pulsar-perf) client tool. - - -You can use a topic in the namespace that you have just created. Topics are automatically created the first time when a producer or a consumer tries to use them. - -The topic name in this case could be: - -```http - -persistent://test-tenant/ns1/my-topic - -``` - -Start a consumer that creates a subscription on the topic and waits for messages: - -```shell - -$ bin/pulsar-perf consume persistent://test-tenant/ns1/my-topic - -``` - -Start a producer that publishes messages at a fixed rate and reports stats every 10 seconds: - -```shell - -$ bin/pulsar-perf produce persistent://test-tenant/ns1/my-topic - -``` - -To report the topic stats: - -```shell - -$ bin/pulsar-admin topics stats persistent://test-tenant/ns1/my-topic - -``` - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/deploy-bare-metal.md b/site2/website/versioned_docs/version-2.8.2-deprecated/deploy-bare-metal.md deleted file mode 100644 index 8152694e39ee79..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/deploy-bare-metal.md +++ /dev/null @@ -1,541 +0,0 @@ ---- -id: deploy-bare-metal -title: Deploy a cluster on bare metal -sidebar_label: "Bare metal" -original_id: deploy-bare-metal ---- - -:::tip - -1. Single-cluster Pulsar installations should be sufficient for all but the most ambitious use cases. If you are interested in experimenting with -Pulsar or using Pulsar in a startup or on a single team, it is simplest to opt for a single cluster. If you do need to run a multi-cluster Pulsar instance, -see the guide [here](deploy-bare-metal-multi-cluster.md). -2. If you want to use all builtin [Pulsar IO](io-overview.md) connectors in your Pulsar deployment, you need to download `apache-pulsar-io-connectors` -package and install `apache-pulsar-io-connectors` under `connectors` directory in the pulsar directory on every broker node or on every function-worker node if you -have run a separate cluster of function workers for [Pulsar Functions](functions-overview.md). -3. If you want to use [Tiered Storage](concepts-tiered-storage.md) feature in your Pulsar deployment, you need to download `apache-pulsar-offloaders` -package and install `apache-pulsar-offloaders` under `offloaders` directory in the pulsar directory on every broker node. For more details of how to configure -this feature, you can refer to the [Tiered storage cookbook](cookbooks-tiered-storage.md). - -::: - -Deploying a Pulsar cluster involves doing the following (in order): - -* Deploy a [ZooKeeper](#deploy-a-zookeeper-cluster) cluster (optional) -* Initialize [cluster metadata](#initialize-cluster-metadata) -* Deploy a [BookKeeper](#deploy-a-bookkeeper-cluster) cluster -* Deploy one or more Pulsar [brokers](#deploy-pulsar-brokers) - -## Preparation - -### Requirements - -Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install 64-bit JRE/JDK 8 or later versions. 
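A quick way to confirm that a machine meets this requirement is to check the installed JVM; the exact output varies by vendor, but any 64-bit build of version 8 or later works:

```bash
$ java -version
# e.g. openjdk version "11.0.16" ... OpenJDK 64-Bit Server VM
```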
> If you already have an existing ZooKeeper cluster and want to reuse it, you do not need to prepare the machines
> for running ZooKeeper.

To run Pulsar on bare metal, the following configuration is recommended:

* At least 6 Linux machines or VMs
  * 3 for running [ZooKeeper](https://zookeeper.apache.org)
  * 3 for running a Pulsar broker and a [BookKeeper](https://bookkeeper.apache.org) bookie
* A single [DNS](https://en.wikipedia.org/wiki/Domain_Name_System) name covering all of the Pulsar broker hosts

:::note

Broker is only supported on 64-bit JVM.

:::

> If you do not have enough machines, or want to try out Pulsar in cluster mode (and expand the cluster later),
> you can deploy a full Pulsar configuration on one node, where ZooKeeper, the bookie, and the broker run on the same machine.

> If you do not have a DNS server, you can use the multi-host format in the service URL instead.

Each machine in your cluster needs to have [Java 8](https://adoptium.net/?variant=openjdk8) or [Java 11](https://adoptium.net/?variant=openjdk11) installed.

The following is a diagram showing the basic setup:

![alt-text](/assets/pulsar-basic-setup.png)

In this diagram, connecting clients need to be able to communicate with the Pulsar cluster using a single URL. In this case, `pulsar-cluster.acme.com` abstracts over all of the message-handling brokers. Pulsar message brokers run on machines alongside BookKeeper bookies; brokers and bookies, in turn, rely on ZooKeeper.

### Hardware considerations

When you deploy a Pulsar cluster, keep in mind the following basic considerations when you do capacity planning.

#### ZooKeeper

For machines running ZooKeeper, it is recommended to use less powerful machines or VMs. Pulsar uses ZooKeeper only for periodic coordination-related and configuration-related tasks, *not* for basic operations. If you run Pulsar on [Amazon Web Services](https://aws.amazon.com/) (AWS), for example, a [t2.small](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html) instance will likely suffice.

#### Bookies and Brokers

For machines running a bookie and a Pulsar broker, more powerful machines are required. For an AWS deployment, for example, [i3.4xlarge](https://aws.amazon.com/blogs/aws/now-available-i3-instances-for-demanding-io-intensive-applications/) instances may be appropriate. On those machines you can use the following:

* Fast CPUs and 10Gbps [NIC](https://en.wikipedia.org/wiki/Network_interface_controller) (for Pulsar brokers)
* Small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache (for BookKeeper bookies)

## Install the Pulsar binary package

> You need to install the Pulsar binary package on *each machine in the cluster*, including machines running [ZooKeeper](#deploy-a-zookeeper-cluster) and [BookKeeper](#deploy-a-bookkeeper-cluster).
- -To get started deploying a Pulsar cluster on bare metal, you need to download a binary tarball release in one of the following ways: - -* By clicking on the link below directly, which automatically triggers a download: - * Pulsar @pulsar:version@ binary release -* From the Pulsar [downloads page](pulsar:download_page_url) -* From the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) on [GitHub](https://github.com) -* Using [wget](https://www.gnu.org/software/wget): - -```bash - -$ wget pulsar:binary_release_url - -``` - -Once you download the tarball, untar it and `cd` into the resulting directory: - -```bash - -$ tar xvzf apache-pulsar-@pulsar:version@-bin.tar.gz -$ cd apache-pulsar-@pulsar:version@ - -``` - -The extracted directory contains the following subdirectories: - -Directory | Contains -:---------|:-------- -`bin` |[command-line tools](reference-cli-tools.md) of Pulsar, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/) -`conf` | Configuration files for Pulsar, including for [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more -`data` | The data storage directory that ZooKeeper and BookKeeper use -`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files that Pulsar uses -`logs` | Logs that the installation creates - -## [Install Builtin Connectors (optional)]( https://pulsar.apache.org/docs/en/next/standalone/#install-builtin-connectors-optional) - -> Since Pulsar release `2.1.0-incubating`, Pulsar provides a separate binary distribution, containing all the `builtin` connectors. -> If you want to enable those `builtin` connectors, you can follow the instructions as below; otherwise you can -> skip this section for now. - -To get started using builtin connectors, you need to download the connectors tarball release on every broker node in one of the following ways: - -* by clicking the link below and downloading the release from an Apache mirror: - - * Pulsar IO Connectors @pulsar:version@ release - -* from the Pulsar [downloads page](pulsar:download_page_url) -* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) -* using [wget](https://www.gnu.org/software/wget): - - ```shell - - $ wget pulsar:connector_release_url/{connector}-@pulsar:version@.nar - - ``` - -Once you download the .nar file, copy the file to directory `connectors` in the pulsar directory. -For example, if you download the connector file `pulsar-io-aerospike-@pulsar:version@.nar`: - -```bash - -$ mkdir connectors -$ mv pulsar-io-aerospike-@pulsar:version@.nar connectors - -$ ls connectors -pulsar-io-aerospike-@pulsar:version@.nar -... - -``` - -## [Install Tiered Storage Offloaders (optional)](https://pulsar.apache.org/docs/en/next/standalone/#install-tiered-storage-offloaders-optional) - -> Since Pulsar release `2.2.0`, Pulsar releases a separate binary distribution, containing the tiered storage offloaders. -> If you want to enable tiered storage feature, you can follow the instructions as below; otherwise you can -> skip this section for now. 
To get started using tiered storage offloaders, you need to download the offloaders tarball release on every broker node in one of the following ways:

* by clicking the link below and downloading the release from an Apache mirror:

  * Pulsar Tiered Storage Offloaders @pulsar:version@ release

* from the Pulsar [downloads page](pulsar:download_page_url)
* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
* using [wget](https://www.gnu.org/software/wget):

  ```shell
  $ wget pulsar:offloader_release_url
  ```

Once you download the tarball, untar the offloaders package in the pulsar directory and copy the extracted offloaders into an `offloaders` directory there:

```bash
$ tar xvfz apache-pulsar-offloaders-@pulsar:version@-bin.tar.gz

// you can find a directory named `apache-pulsar-offloaders-@pulsar:version@` in the pulsar directory
// then copy the offloaders

$ mv apache-pulsar-offloaders-@pulsar:version@/offloaders offloaders

$ ls offloaders
tiered-storage-jcloud-@pulsar:version@.nar
```

For more details on how to configure the tiered storage feature, you can refer to the [Tiered storage cookbook](cookbooks-tiered-storage.md).

## Deploy a ZooKeeper cluster

> If you already have an existing ZooKeeper cluster and want to use it, you can skip this section.

[ZooKeeper](https://zookeeper.apache.org) manages a variety of essential coordination- and configuration-related tasks for Pulsar. To deploy a Pulsar cluster, you need to deploy ZooKeeper first (before all other components). A 3-node ZooKeeper cluster is the recommended configuration. Pulsar does not make heavy use of ZooKeeper, so more lightweight machines or VMs should suffice for running ZooKeeper.

To begin, add all ZooKeeper servers to the configuration specified in [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) (in the Pulsar directory that you created [above](#install-the-pulsar-binary-package)). The following is an example:

```properties
server.1=zk1.us-west.example.com:2888:3888
server.2=zk2.us-west.example.com:2888:3888
server.3=zk3.us-west.example.com:2888:3888
```

> If you only have one machine on which to deploy Pulsar, you only need to add one server entry in the configuration file.

On each host, you need to specify the ID of the node in the `myid` file, which is in the `data/zookeeper` folder of each server by default (you can change the file location via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter).

> See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed information on `myid` and more.

For example, on a ZooKeeper server like `zk1.us-west.example.com`, you can set the `myid` value as follows:

```bash
$ mkdir -p data/zookeeper
$ echo 1 > data/zookeeper/myid
```

On `zk2.us-west.example.com`, the command is `echo 2 > data/zookeeper/myid` and so on.

Once you add each server to the `zookeeper.conf` configuration and have the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:

```bash
$ bin/pulsar-daemon start zookeeper
```

> If you plan to deploy ZooKeeper with a bookie on the same node, you need to start ZooKeeper with a different stats port by configuring `metricsProvider.httpPort` in `zookeeper.conf`.
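For example, the following is a sketch of the relevant `conf/zookeeper.conf` lines when ZooKeeper shares a node with a bookie. It assumes ZooKeeper's built-in Prometheus metrics provider, whose default port is 7000; any free port works:

```properties
# Expose ZooKeeper metrics on a port that does not clash with other services on this node
metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
metricsProvider.httpPort=7001
```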
- -## Initialize cluster metadata - -Once you deploy ZooKeeper for your cluster, you need to write some metadata to ZooKeeper. You only need to write this data **once**. - -You can initialize this metadata using the [`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata) command of the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool. This command can be run on any machine in your Pulsar cluster, so the metadata can be initialized from a ZooKeeper, broker, or bookie machine. The following is an example: - -```shell - -$ bin/pulsar initialize-cluster-metadata \ - --cluster pulsar-cluster-1 \ - --zookeeper zk1.us-west.example.com:2181 \ - --configuration-store zk1.us-west.example.com:2181 \ - --web-service-url http://pulsar.us-west.example.com:8080 \ - --web-service-url-tls https://pulsar.us-west.example.com:8443 \ - --broker-service-url pulsar://pulsar.us-west.example.com:6650 \ - --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651 - -``` - -As you can see from the example above, you will need to specify the following: - -Flag | Description -:----|:----------- -`--cluster` | A name for the cluster -`--zookeeper` | A "local" ZooKeeper connection string for the cluster. This connection string only needs to include *one* machine in the ZooKeeper cluster. -`--configuration-store` | The configuration store connection string for the entire instance. As with the `--zookeeper` flag, this connection string only needs to include *one* machine in the ZooKeeper cluster. -`--web-service-url` | The web service URL for the cluster, plus a port. This URL should be a standard DNS name. The default port is 8080 (you had better not use a different port). -`--web-service-url-tls` | If you use [TLS](security-tls-transport.md), you also need to specify a TLS web service URL for the cluster. The default port is 8443 (you had better not use a different port). -`--broker-service-url` | A broker service URL enabling interaction with the brokers in the cluster. This URL should not use the same DNS name as the web service URL but should use the `pulsar` scheme instead. The default port is 6650 (you had better not use a different port). -`--broker-service-url-tls` | If you use [TLS](security-tls-transport.md), you also need to specify a TLS web service URL for the cluster as well as a TLS broker service URL for the brokers in the cluster. The default port is 6651 (you had better not use a different port). 
- - -> If you do not have a DNS server, you can use multi-host format in the service URL with the following settings: -> - -> ```properties -> -> --web-service-url http://host1:8080,host2:8080,host3:8080 \ -> --web-service-url-tls https://host1:8443,host2:8443,host3:8443 \ -> --broker-service-url pulsar://host1:6650,host2:6650,host3:6650 \ -> --broker-service-url-tls pulsar+ssl://host1:6651,host2:6651,host3:6651 -> -> -> ``` - -> -> If you want to use an existing BookKeeper cluster, you can add the `--existing-bk-metadata-service-uri` flag as follows: -> - -> ```properties -> -> --existing-bk-metadata-service-uri "zk+null://zk1:2181;zk2:2181/ledgers" \ -> --web-service-url http://host1:8080,host2:8080,host3:8080 \ -> --web-service-url-tls https://host1:8443,host2:8443,host3:8443 \ -> --broker-service-url pulsar://host1:6650,host2:6650,host3:6650 \ -> --broker-service-url-tls pulsar+ssl://host1:6651,host2:6651,host3:6651 -> -> -> ``` - -> You can obtain the metadata service URI of the existing BookKeeper cluster by using the `bin/bookkeeper shell whatisinstanceid` command. You must enclose the value in double quotes since the multiple metadata service URIs are separated with semicolons. - -## Deploy a BookKeeper cluster - -[BookKeeper](https://bookkeeper.apache.org) handles all persistent data storage in Pulsar. You need to deploy a cluster of BookKeeper bookies to use Pulsar. You can choose to run a **3-bookie BookKeeper cluster**. - -You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. The most important step in configuring bookies for our purposes here is ensuring that [`zkServers`](reference-configuration.md#bookkeeper-zkServers) is set to the connection string for the ZooKeeper cluster. The following is an example: - -```properties - -zkServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181 - -``` - -Once you appropriately modify the `zkServers` parameter, you can make any other configuration changes that you require. You can find a full listing of the available BookKeeper configuration parameters [here](reference-configuration.md#bookkeeper). However, consulting the [BookKeeper documentation](http://bookkeeper.apache.org/docs/latest/reference/config/) for a more in-depth guide might be a better choice. - -Once you apply the desired configuration in `conf/bookkeeper.conf`, you can start up a bookie on each of your BookKeeper hosts. You can start up each bookie either in the background, using [nohup](https://en.wikipedia.org/wiki/Nohup), or in the foreground. - -To start the bookie in the background, use the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool: - -```bash - -$ bin/pulsar-daemon start bookie - -``` - -To start the bookie in the foreground: - -```bash - -$ bin/pulsar bookie - -``` - -You can verify that a bookie works properly by running the `bookiesanity` command on the [BookKeeper shell](reference-cli-tools.md#shell): - -```bash - -$ bin/bookkeeper shell bookiesanity - -``` - -This command creates an ephemeral BookKeeper ledger on the local bookie, writes a few entries, reads them back, and finally deletes the ledger. - -After you start all the bookies, you can use `simpletest` command for [BookKeeper shell](reference-cli-tools.md#shell) on any bookie node, to verify all the bookies in the cluster are up running. 
- -```bash - -$ bin/bookkeeper shell simpletest --ensemble --writeQuorum --ackQuorum --numEntries - -``` - -This command creates a `num-bookies` sized ledger on the cluster, writes a few entries, and finally deletes the ledger. - - -## Deploy Pulsar brokers - -Pulsar brokers are the last thing you need to deploy in your Pulsar cluster. Brokers handle Pulsar messages and provide the administrative interface of Pulsar. A good choice is to run **3 brokers**, one for each machine that already runs a BookKeeper bookie. - -### Configure Brokers - -The most important element of broker configuration is ensuring that each broker is aware of the ZooKeeper cluster that you have deployed. Ensure that the [`zookeeperServers`](reference-configuration.md#broker-zookeeperServers) and [`configurationStoreServers`](reference-configuration.md#broker-configurationStoreServers) parameters are correct. In this case, since you only have 1 cluster and no configuration store setup, the `configurationStoreServers` point to the same `zookeeperServers`. - -```properties - -zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181 -configurationStoreServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181 - -``` - -You also need to specify the cluster name (matching the name that you provided when you [initialize the metadata of the cluster](#initialize-cluster-metadata)): - -```properties - -clusterName=pulsar-cluster-1 - -``` - -In addition, you need to match the broker and web service ports provided when you initialize the metadata of the cluster (especially when you use a different port than the default): - -```properties - -brokerServicePort=6650 -brokerServicePortTls=6651 -webServicePort=8080 -webServicePortTls=8443 - -``` - -> If you deploy Pulsar in a one-node cluster, you should update the replication settings in `conf/broker.conf` to `1`. -> - -> ```properties -> -> # Number of bookies to use when creating a ledger -> managedLedgerDefaultEnsembleSize=1 -> -> # Number of copies to store for each message -> managedLedgerDefaultWriteQuorum=1 -> -> # Number of guaranteed copies (acks to wait before write is complete) -> managedLedgerDefaultAckQuorum=1 -> -> -> ``` - - -### Enable Pulsar Functions (optional) - -If you want to enable [Pulsar Functions](functions-overview.md), you can follow the instructions as below: - -1. Edit `conf/broker.conf` to enable functions worker, by setting `functionsWorkerEnabled` to `true`. - - ```conf - - functionsWorkerEnabled=true - - ``` - -2. Edit `conf/functions_worker.yml` and set `pulsarFunctionsCluster` to the cluster name that you provide when you [initialize the metadata of the cluster](#initialize-cluster-metadata). - - ```conf - - pulsarFunctionsCluster: pulsar-cluster-1 - - ``` - -If you want to learn more options about deploying the functions worker, check out [Deploy and manage functions worker](functions-worker.md). - -### Start Brokers - -You can then provide any other configuration changes that you want in the [`conf/broker.conf`](reference-configuration.md#broker) file. Once you decide on a configuration, you can start up the brokers for your Pulsar cluster. Like ZooKeeper and BookKeeper, you can start brokers either in the foreground or in the background, using nohup. 
You can start a broker in the foreground using the [`pulsar broker`](reference-cli-tools.md#pulsar-broker) command:

```bash
$ bin/pulsar broker
```

You can start a broker in the background using the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:

```bash
$ bin/pulsar-daemon start broker
```

Once you successfully start up all the brokers that you intend to use, your Pulsar cluster should be ready to go!

## Connect to the running cluster

Once your Pulsar cluster is up and running, you should be able to connect to it using Pulsar clients. One such client is the [`pulsar-client`](reference-cli-tools.md#pulsar-client) tool, which is included with the Pulsar binary package. The `pulsar-client` tool can publish messages to and consume messages from Pulsar topics, and thus provides a simple way to make sure that your cluster runs properly.

To use the `pulsar-client` tool, first modify the client configuration file [`conf/client.conf`](reference-configuration.md#client) in your binary package. You need to change the values for `webServiceUrl` and `brokerServiceUrl`, substituting `localhost` (the default) with the DNS name that you assign to your broker/bookie hosts. The following is an example:

```properties
webServiceUrl=http://us-west.example.com:8080
brokerServiceUrl=pulsar://us-west.example.com:6650
```

> If you do not have a DNS server, you can specify multiple hosts in the service URLs as follows:
>
> ```properties
> webServiceUrl=http://host1:8080,host2:8080,host3:8080
> brokerServiceUrl=pulsar://host1:6650,host2:6650,host3:6650
> ```

Once that is complete, you can publish a message to the Pulsar topic:

```bash
$ bin/pulsar-client produce \
  persistent://public/default/test \
  -n 1 \
  -m "Hello Pulsar"
```

> You may need to use a different cluster name in the topic if you specify a cluster name other than `pulsar-cluster-1`.

This command publishes a single message to the Pulsar topic. In addition, you can subscribe to the Pulsar topic in a different terminal before publishing messages, as below:

```bash
$ bin/pulsar-client consume \
  persistent://public/default/test \
  -n 100 \
  -s "consumer-test" \
  -t "Exclusive"
```

Once you successfully publish the above message to the topic, you should see it in the standard output:

```bash
----- got message -----
Hello Pulsar
```

## Run Functions

> If you have [enabled](#enable-pulsar-functions-optional) Pulsar Functions, you can try out Pulsar Functions now.

Create an ExclamationFunction `exclamation`:

```bash
bin/pulsar-admin functions create \
  --jar examples/api-examples.jar \
  --classname org.apache.pulsar.functions.api.examples.ExclamationFunction \
  --inputs persistent://public/default/exclamation-input \
  --output persistent://public/default/exclamation-output \
  --tenant public \
  --namespace default \
  --name exclamation
```

Check whether the function runs as expected by [triggering](functions-deploying.md#triggering-pulsar-functions) the function:

```bash
bin/pulsar-admin functions trigger --name exclamation --trigger-value "hello world"
```

You should see the following output:

```shell
hello world!
- -``` - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/deploy-dcos.md b/site2/website/versioned_docs/version-2.8.2-deprecated/deploy-dcos.md deleted file mode 100644 index 952d5f47e30fa7..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/deploy-dcos.md +++ /dev/null @@ -1,200 +0,0 @@ ---- -id: deploy-dcos -title: Deploy Pulsar on DC/OS -sidebar_label: "DC/OS" -original_id: deploy-dcos ---- - -:::tip - -If you want to enable all builtin [Pulsar IO](io-overview.md) connectors in your Pulsar deployment, you can choose to use `apachepulsar/pulsar-all` image instead of -`apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled [all builtin connectors](io-overview.md#working-with-connectors). - -::: - -[DC/OS](https://dcos.io/) (the DataCenter Operating System) is a distributed operating system used for deploying and managing applications and systems on [Apache Mesos](http://mesos.apache.org/). DC/OS is an open-source tool that [Mesosphere](https://mesosphere.com/) creates and maintains . - -Apache Pulsar is available as a [Marathon Application Group](https://mesosphere.github.io/marathon/docs/application-groups.html), which runs multiple applications as manageable sets. - -## Prerequisites - -In order to run Pulsar on DC/OS, you need the following: - -* DC/OS version [1.9](https://docs.mesosphere.com/1.9/) or higher -* A [DC/OS cluster](https://docs.mesosphere.com/1.9/installing/) with at least three agent nodes -* The [DC/OS CLI tool](https://docs.mesosphere.com/1.9/cli/install/) installed -* The [`PulsarGroups.json`](https://github.com/apache/pulsar/blob/master/deployment/dcos/PulsarGroups.json) configuration file from the Pulsar GitHub repo. - - ```bash - - $ curl -O https://raw.githubusercontent.com/apache/pulsar/master/deployment/dcos/PulsarGroups.json - - ``` - -Each node in the DC/OS-managed Mesos cluster must have at least: - -* 4 CPU -* 4 GB of memory -* 60 GB of total persistent disk - -Alternatively, you can change the configuration in `PulsarGroups.json` according to match your resources of DC/OS cluster. - -## Deploy Pulsar using the DC/OS command interface - -You can deploy Pulsar on DC/OS using this command: - -```bash - -$ dcos marathon group add PulsarGroups.json - -``` - -This command deploys Docker container instances in three groups, which together comprise a Pulsar cluster: - -* 3 bookies (1 [bookie](reference-terminology.md#bookie) on each agent node and 1 [bookie recovery](http://bookkeeper.apache.org/docs/latest/admin/autorecovery/) instance) -* 3 Pulsar [brokers](reference-terminology.md#broker) (1 broker on each node and 1 admin instance) -* 1 [Prometheus](http://prometheus.io/) instance and 1 [Grafana](https://grafana.com/) instance - - -> When you run DC/OS, a ZooKeeper cluster already runs at `master.mesos:2181`, thus you do not have to install or start up ZooKeeper separately. - -After executing the `dcos` command above, click on the **Services** tab in the DC/OS [GUI interface](https://docs.mesosphere.com/latest/gui/), which you can access at [http://m1.dcos](http://m1.dcos) in this example. You should see several applications in the process of deploying. - -![DC/OS command executed](/assets/dcos_command_execute.png) - -![DC/OS command executed2](/assets/dcos_command_execute2.png) - -## The BookKeeper group - -To monitor the status of the BookKeeper cluster deployment, click on the **bookkeeper** group in the parent **pulsar** group. 
- -![DC/OS bookkeeper status](/assets/dcos_bookkeeper_status.png) - -At this point, 3 [bookies](reference-terminology.md#bookie) should be shown as green, which means that the bookies have been deployed successfully and are now running. - -![DC/OS bookkeeper running](/assets/dcos_bookkeeper_run.png) - -You can also click into each bookie instance to get more detailed information, such as the bookie running log. - -![DC/OS bookie log](/assets/dcos_bookie_log.png) - -To display information about the BookKeeper in ZooKeeper, you can visit [http://m1.dcos/exhibitor](http://m1.dcos/exhibitor). In this example, 3 bookies are under the `available` directory. - -![DC/OS bookkeeper in zk](/assets/dcos_bookkeeper_in_zookeeper.png) - -## The Pulsar broker Group - -Similar to the BookKeeper group above, click into the **brokers** to check the status of the Pulsar brokers. - -![DC/OS broker status](/assets/dcos_broker_status.png) - -![DC/OS broker running](/assets/dcos_broker_run.png) - -You can also click into each broker instance to get more detailed information, such as the broker running log. - -![DC/OS broker log](/assets/dcos_broker_log.png) - -Broker cluster information in Zookeeper is also available through the web UI. In this example, you can see that the `loadbalance` and `managed-ledgers` directories have been created. - -![DC/OS broker in zk](/assets/dcos_broker_in_zookeeper.png) - -## Monitor Group - -The **monitory** group consists of Prometheus and Grafana. - -![DC/OS monitor status](/assets/dcos_monitor_status.png) - -### Prometheus - -Click into the instance of `prom` to get the endpoint of Prometheus, which is `192.168.65.121:9090` in this example. - -![DC/OS prom endpoint](/assets/dcos_prom_endpoint.png) - -If you click that endpoint, you can see the Prometheus dashboard. The [http://192.168.65.121:9090/targets](http://192.168.65.121:9090/targets) URL display all the bookies and brokers. - -![DC/OS prom targets](/assets/dcos_prom_targets.png) - -### Grafana - -Click into `grafana` to get the endpoint for Grafana, which is `192.168.65.121:3000` in this example. - -![DC/OS grafana endpoint](/assets/dcos_grafana_endpoint.png) - -If you click that endpoint, you can access the Grafana dashboard. - -![DC/OS grafana targets](/assets/dcos_grafana_dashboard.png) - -## Run a simple Pulsar consumer and producer on DC/OS - -Now that you have a fully deployed Pulsar cluster, you can run a simple consumer and producer to show Pulsar on DC/OS in action. - -### Download and prepare the Pulsar Java tutorial - -You can clone a [Pulsar Java tutorial](https://github.com/streamlio/pulsar-java-tutorial) repo. This repo contains a simple Pulsar consumer and producer (you can find more information in the `README` file of the repo). - -```bash - -$ git clone https://github.com/streamlio/pulsar-java-tutorial - -``` - -Change the `SERVICE_URL` from `pulsar://localhost:6650` to `pulsar://a1.dcos:6650` in both [`ConsumerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ConsumerTutorial.java) and [`ProducerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ProducerTutorial.java). -The `pulsar://a1.dcos:6650` endpoint is for the broker service. You can fetch the endpoint details for each broker instance from the DC/OS GUI. `a1.dcos` is a DC/OS client agent, which runs a broker. The client agent IP address can also replace this. 
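For example, the updated endpoint in both files might look like this (a sketch; the exact constant name and surrounding code come from the tutorial repo):

```java
// Hypothetical sketch of the edit described above: point the tutorial's
// client at a DC/OS client agent instead of localhost.
private static final String SERVICE_URL = "pulsar://a1.dcos:6650";
```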
- -Now, change the message number from 10 to 10000000 in main method of [`ProducerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ProducerTutorial.java) so that it can produce more messages. - -Now compile the project code using the command below: - -```bash - -$ mvn clean package - -``` - -### Run the consumer and producer - -Execute this command to run the consumer: - -```bash - -$ mvn exec:java -Dexec.mainClass="tutorial.ConsumerTutorial" - -``` - -Execute this command to run the producer: - -```bash - -$ mvn exec:java -Dexec.mainClass="tutorial.ProducerTutorial" - -``` - -You can see the producer producing messages and the consumer consuming messages through the DC/OS GUI. - -![DC/OS pulsar producer](/assets/dcos_producer.png) - -![DC/OS pulsar consumer](/assets/dcos_consumer.png) - -### View Grafana metric output - -While the producer and consumer run, you can access running metrics information from Grafana. - -![DC/OS pulsar dashboard](/assets/dcos_metrics.png) - - -## Uninstall Pulsar - -You can shut down and uninstall the `pulsar` application from DC/OS at any time in the following two ways: - -1. Using the DC/OS GUI, you can choose **Delete** at the right end of Pulsar group. - - ![DC/OS pulsar uninstall](/assets/dcos_uninstall.png) - -2. You can use the following command: - - ```bash - - $ dcos marathon group remove /pulsar - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/deploy-docker.md b/site2/website/versioned_docs/version-2.8.2-deprecated/deploy-docker.md deleted file mode 100644 index 8348d78deb2378..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/deploy-docker.md +++ /dev/null @@ -1,60 +0,0 @@ ---- -id: deploy-docker -title: Deploy a cluster on Docker -sidebar_label: "Docker" -original_id: deploy-docker ---- - -To deploy a Pulsar cluster on Docker, complete the following steps: -1. Deploy a ZooKeeper cluster (optional) -2. Initialize cluster metadata -3. Deploy a BookKeeper cluster -4. Deploy one or more Pulsar brokers - -## Prepare - -To run Pulsar on Docker, you need to create a container for each Pulsar component: ZooKeeper, BookKeeper and broker. You can pull the images of ZooKeeper and BookKeeper separately on [Docker Hub](https://hub.docker.com/), and pull a [Pulsar image](https://hub.docker.com/r/apachepulsar/pulsar-all/tags) for the broker. You can also pull only one [Pulsar image](https://hub.docker.com/r/apachepulsar/pulsar-all/tags) and create three containers with this image. This tutorial takes the second option as an example. - -### Pull a Pulsar image -You can pull a Pulsar image from [Docker Hub](https://hub.docker.com/r/apachepulsar/pulsar-all/tags) with the following command. - -``` - -docker pull apachepulsar/pulsar-all:latest - -``` - -### Create three containers -Create containers for ZooKeeper, BookKeeper and broker. In this example, they are named as `zookeeper`, `bookkeeper` and `broker` respectively. You can name them as you want with the `--name` flag. By default, the container names are created randomly. - -``` - -docker run -it --name bookkeeper apachepulsar/pulsar-all:latest /bin/bash -docker run -it --name zookeeper apachepulsar/pulsar-all:latest /bin/bash -docker run -it --name broker apachepulsar/pulsar-all:latest /bin/bash - -``` - -### Create a network -To deploy a Pulsar cluster on Docker, you need to create a `network` and connect the containers of ZooKeeper, BookKeeper and broker to this network. 
The following command creates the network `pulsar`: - -``` - -docker network create pulsar - -``` - -### Connect containers to network -Connect the containers of ZooKeeper, BookKeeper and broker to the `pulsar` network with the following commands. - -``` - -docker network connect pulsar zookeeper -docker network connect pulsar bookkeeper -docker network connect pulsar broker - -``` - -To check whether the containers are successfully connected to the network, enter the `docker network inspect pulsar` command. - -For detailed information about how to deploy ZooKeeper cluster, BookKeeper cluster, brokers, see [deploy a cluster on bare metal](deploy-bare-metal.md). diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/deploy-kubernetes.md b/site2/website/versioned_docs/version-2.8.2-deprecated/deploy-kubernetes.md deleted file mode 100644 index 1aefc6ad79f716..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/deploy-kubernetes.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -id: deploy-kubernetes -title: Deploy Pulsar on Kubernetes -sidebar_label: "Kubernetes" -original_id: deploy-kubernetes ---- - -To get up and running with these charts as fast as possible, in a **non-production** use case, we provide -a [quick start guide](getting-started-helm.md) for Proof of Concept (PoC) deployments. - -To configure and install a Pulsar cluster on Kubernetes for production usage, follow the complete [Installation Guide](helm-install.md). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/deploy-monitoring.md b/site2/website/versioned_docs/version-2.8.2-deprecated/deploy-monitoring.md deleted file mode 100644 index f9fe0e0bb97be7..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/deploy-monitoring.md +++ /dev/null @@ -1,148 +0,0 @@ ---- -id: deploy-monitoring -title: Monitor -sidebar_label: "Monitor" -original_id: deploy-monitoring ---- - -You can use different ways to monitor a Pulsar cluster, exposing both metrics related to the usage of topics and the overall health of the individual components of the cluster. - -## Collect metrics - -You can collect broker stats, ZooKeeper stats, and BookKeeper stats. - -### Broker stats - -You can collect Pulsar broker metrics from brokers and export the metrics in JSON format. The Pulsar broker metrics mainly have two types: - -* *Destination dumps*, which contain stats for each individual topic. You can fetch the destination dumps using the command below: - - ```shell - - bin/pulsar-admin broker-stats destinations - - ``` - -* Broker metrics, which contain the broker information and topics stats aggregated at namespace level. You can fetch the broker metrics by using the following command: - - ```shell - - bin/pulsar-admin broker-stats monitoring-metrics - - ``` - -All the message rates are updated every minute. - -The aggregated broker metrics are also exposed in the [Prometheus](https://prometheus.io) format at: - -```shell - -http://$BROKER_ADDRESS:8080/metrics/ - -``` - -### ZooKeeper stats - -The local ZooKeeper, configuration store server and clients that are shipped with Pulsar can expose detailed stats through Prometheus. - -```shell - -http://$LOCAL_ZK_SERVER:8000/metrics -http://$GLOBAL_ZK_SERVER:8001/metrics - -``` - -The default port of local ZooKeeper is `8000` and the default port of the configuration store is `8001`. You can use a different stats port by configuring `metricsProvider.httpPort` in the `conf/zookeeper.conf` file. 
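For example, a minimal override in `conf/zookeeper.conf` might look like the following (a sketch; `8000` simply restates the local ZooKeeper default explicitly):

```properties
# conf/zookeeper.conf -- set the ZooKeeper stats port explicitly
# (8000 is the default for the local ZooKeeper)
metricsProvider.httpPort=8000
```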
### BookKeeper stats

You can configure the stats frameworks for BookKeeper by modifying the `statsProviderClass` in the `conf/bookkeeper.conf` file.

The default BookKeeper configuration enables the Prometheus exporter. The configuration is included with the Pulsar distribution.

```shell
http://$BOOKIE_ADDRESS:8000/metrics
```

The default port for a bookie is `8000`. You can change the port by configuring `prometheusStatsHttpPort` in the `conf/bookkeeper.conf` file.

### Managed cursor acknowledgment state

The acknowledgment state is persisted to the ledger first. When the acknowledgment state fails to be persisted to the ledger, it is persisted to ZooKeeper. To track the stats of acknowledgment, you can configure the metrics for the managed cursor:

```
brk_ml_cursor_persistLedgerSucceed(namespace="", ledger_name="", cursor_name:"")
brk_ml_cursor_persistLedgerErrors(namespace="", ledger_name="", cursor_name:"")
brk_ml_cursor_persistZookeeperSucceed(namespace="", ledger_name="", cursor_name:"")
brk_ml_cursor_persistZookeeperErrors(namespace="", ledger_name="", cursor_name:"")
brk_ml_cursor_nonContiguousDeletedMessagesRange(namespace="", ledger_name="", cursor_name:"")
```

These metrics are exposed through the Prometheus interface, so you can monitor and check the metrics stats in Grafana.

### Function and connector stats

You can collect functions worker stats from `functions-worker` and export the metrics in JSON format, which contains functions worker JVM metrics:

```
pulsar-admin functions-worker monitoring-metrics
```

You can collect functions and connectors metrics from `functions-worker` and export the metrics in JSON format:

```
pulsar-admin functions-worker function-stats
```

The aggregated functions and connectors metrics can be exposed in the Prometheus format as below. You can get [`FUNCTIONS_WORKER_ADDRESS`](http://pulsar.apache.org/docs/en/next/functions-worker/) and `WORKER_PORT` from the `functions_worker.yml` file.

```
http://$FUNCTIONS_WORKER_ADDRESS:$WORKER_PORT/metrics
```

## Configure Prometheus

You can use Prometheus to collect all the metrics exposed by Pulsar components and set up [Grafana](https://grafana.com/) dashboards to display the metrics and monitor your Pulsar cluster. For details, refer to the [Prometheus guide](https://prometheus.io/docs/introduction/getting_started/).

When you run Pulsar on bare metal, you can provide the list of nodes to be probed. When you deploy Pulsar in a Kubernetes cluster, the monitoring is set up automatically. For details, refer to the [Kubernetes instructions](helm-deploy.md).

## Dashboards

When you collect time series statistics, the major problem is to make sure the number of dimensions attached to the data does not explode. Thus, you only need to collect time series of metrics aggregated at the namespace level.

### Pulsar per-topic dashboard

The per-topic dashboard instructions are available at [Pulsar manager](administration-pulsar-manager.md).

### Grafana

You can use Grafana to create dashboards driven by the data stored in Prometheus.

When you deploy Pulsar on Kubernetes, a `pulsar-grafana` Docker image is enabled by default. You can use the Docker image with the principal dashboards.
- -Enter the command below to use the dashboard manually: - -```shell - -docker run -p3000:3000 \ - -e PROMETHEUS_URL=http://$PROMETHEUS_HOST:9090/ \ - apachepulsar/pulsar-grafana:latest - -``` - -The following are some Grafana dashboards examples: - -- [pulsar-grafana](http://pulsar.apache.org/docs/en/deploy-monitoring/#grafana): a Grafana dashboard that displays metrics collected in Prometheus for Pulsar clusters running on Kubernetes. -- [apache-pulsar-grafana-dashboard](https://github.com/streamnative/apache-pulsar-grafana-dashboard): a collection of Grafana dashboard templates for different Pulsar components running on both Kubernetes and on-premise machines. - -## Alerting rules -You can set alerting rules according to your Pulsar environment. To configure alerting rules for Apache Pulsar, refer to [alerting rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/). diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/develop-load-manager.md b/site2/website/versioned_docs/version-2.8.2-deprecated/develop-load-manager.md deleted file mode 100644 index 509209b6a852d8..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/develop-load-manager.md +++ /dev/null @@ -1,227 +0,0 @@ ---- -id: develop-load-manager -title: Modular load manager -sidebar_label: "Modular load manager" -original_id: develop-load-manager ---- - -The *modular load manager*, implemented in [`ModularLoadManagerImpl`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/impl/ModularLoadManagerImpl.java), is a flexible alternative to the previously implemented load manager, [`SimpleLoadManagerImpl`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/impl/SimpleLoadManagerImpl.java), which attempts to simplify how load is managed while also providing abstractions so that complex load management strategies may be implemented. - -## Usage - -There are two ways that you can enable the modular load manager: - -1. Change the value of the `loadManagerClassName` parameter in `conf/broker.conf` from `org.apache.pulsar.broker.loadbalance.impl.SimpleLoadManagerImpl` to `org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl`. -2. Using the `pulsar-admin` tool. Here's an example: - - ```shell - - $ pulsar-admin update-dynamic-config \ - --config loadManagerClassName \ - --value org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl - - ``` - - You can use the same method to change back to the original value. In either case, any mistake in specifying the load manager will cause Pulsar to default to `SimpleLoadManagerImpl`. - -## Verification - -There are a few different ways to determine which load manager is being used: - -1. Use `pulsar-admin` to examine the `loadManagerClassName` element: - - ```shell - - $ bin/pulsar-admin brokers get-all-dynamic-config - { - "loadManagerClassName" : "org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl" - } - - ``` - - If there is no `loadManagerClassName` element, then the default load manager is used. - -2. Consult a ZooKeeper load report. With the module load manager, the load report in `/loadbalance/brokers/...` will have many differences. for example the `systemResourceUsage` sub-elements (`bandwidthIn`, `bandwidthOut`, etc.) are now all at the top level. 
Here is an example load report from the module load manager: - - ```json - - { - "bandwidthIn": { - "limit": 10240000.0, - "usage": 4.256510416666667 - }, - "bandwidthOut": { - "limit": 10240000.0, - "usage": 5.287239583333333 - }, - "bundles": [], - "cpu": { - "limit": 2400.0, - "usage": 5.7353247655435915 - }, - "directMemory": { - "limit": 16384.0, - "usage": 1.0 - } - } - - ``` - - With the simple load manager, the load report in `/loadbalance/brokers/...` will look like this: - - ```json - - { - "systemResourceUsage": { - "bandwidthIn": { - "limit": 10240000.0, - "usage": 0.0 - }, - "bandwidthOut": { - "limit": 10240000.0, - "usage": 0.0 - }, - "cpu": { - "limit": 2400.0, - "usage": 0.0 - }, - "directMemory": { - "limit": 16384.0, - "usage": 1.0 - }, - "memory": { - "limit": 8192.0, - "usage": 3903.0 - } - } - } - - ``` - -3. The command-line [broker monitor](reference-cli-tools.md#monitor-brokers) will have a different output format depending on which load manager implementation is being used. - - Here is an example from the modular load manager: - - ``` - - =================================================================================================================== - ||SYSTEM |CPU % |MEMORY % |DIRECT % |BW IN % |BW OUT % |MAX % || - || |0.00 |48.33 |0.01 |0.00 |0.00 |48.33 || - ||COUNT |TOPIC |BUNDLE |PRODUCER |CONSUMER |BUNDLE + |BUNDLE - || - || |4 |4 |0 |2 |4 |0 || - ||LATEST |MSG/S IN |MSG/S OUT |TOTAL |KB/S IN |KB/S OUT |TOTAL || - || |0.00 |0.00 |0.00 |0.00 |0.00 |0.00 || - ||SHORT |MSG/S IN |MSG/S OUT |TOTAL |KB/S IN |KB/S OUT |TOTAL || - || |0.00 |0.00 |0.00 |0.00 |0.00 |0.00 || - ||LONG |MSG/S IN |MSG/S OUT |TOTAL |KB/S IN |KB/S OUT |TOTAL || - || |0.00 |0.00 |0.00 |0.00 |0.00 |0.00 || - =================================================================================================================== - - ``` - - Here is an example from the simple load manager: - - ``` - - =================================================================================================================== - ||COUNT |TOPIC |BUNDLE |PRODUCER |CONSUMER |BUNDLE + |BUNDLE - || - || |4 |4 |0 |2 |0 |0 || - ||RAW SYSTEM |CPU % |MEMORY % |DIRECT % |BW IN % |BW OUT % |MAX % || - || |0.25 |47.94 |0.01 |0.00 |0.00 |47.94 || - ||ALLOC SYSTEM |CPU % |MEMORY % |DIRECT % |BW IN % |BW OUT % |MAX % || - || |0.20 |1.89 | |1.27 |3.21 |3.21 || - ||RAW MSG |MSG/S IN |MSG/S OUT |TOTAL |KB/S IN |KB/S OUT |TOTAL || - || |0.00 |0.00 |0.00 |0.01 |0.01 |0.01 || - ||ALLOC MSG |MSG/S IN |MSG/S OUT |TOTAL |KB/S IN |KB/S OUT |TOTAL || - || |54.84 |134.48 |189.31 |126.54 |320.96 |447.50 || - =================================================================================================================== - - ``` - -It is important to note that the module load manager is _centralized_, meaning that all requests to assign a bundle---whether it's been seen before or whether this is the first time---only get handled by the _lead_ broker (which can change over time). To determine the current lead broker, examine the `/loadbalance/leader` node in ZooKeeper. - -## Implementation - -### Data - -The data monitored by the modular load manager is contained in the [`LoadData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/LoadData.java) class. -Here, the available data is subdivided into the bundle data and the broker data. 
- -#### Broker - -The broker data is contained in the [`BrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/BrokerData.java) class. It is further subdivided into two parts, -one being the local data which every broker individually writes to ZooKeeper, and the other being the historical broker -data which is written to ZooKeeper by the leader broker. - -##### Local Broker Data -The local broker data is contained in the class [`LocalBrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/java/org/apache/pulsar/policies/data/loadbalancer/LocalBrokerData.java) and provides information about the following resources: - -* CPU usage -* JVM heap memory usage -* Direct memory usage -* Bandwidth in/out usage -* Most recent total message rate in/out across all bundles -* Total number of topics, bundles, producers, and consumers -* Names of all bundles assigned to this broker -* Most recent changes in bundle assignments for this broker - -The local broker data is updated periodically according to the service configuration -"loadBalancerReportUpdateMaxIntervalMinutes". After any broker updates their local broker data, the leader broker will -receive the update immediately via a ZooKeeper watch, where the local data is read from the ZooKeeper node -`/loadbalance/brokers/` - -##### Historical Broker Data - -The historical broker data is contained in the [`TimeAverageBrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/TimeAverageBrokerData.java) class. - -In order to reconcile the need to make good decisions in a steady-state scenario and make reactive decisions in a critical scenario, the historical data is split into two parts: the short-term data for reactive decisions, and the long-term data for steady-state decisions. Both time frames maintain the following information: - -* Message rate in/out for the entire broker -* Message throughput in/out for the entire broker - -Unlike the bundle data, the broker data does not maintain samples for the global broker message rates and throughputs, which is not expected to remain steady as new bundles are removed or added. Instead, this data is aggregated over the short-term and long-term data for the bundles. See the section on bundle data to understand how that data is collected and maintained. - -The historical broker data is updated for each broker in memory by the leader broker whenever any broker writes their local data to ZooKeeper. Then, the historical data is written to ZooKeeper by the leader broker periodically according to the configuration `loadBalancerResourceQuotaUpdateIntervalMinutes`. - -##### Bundle Data - -The bundle data is contained in the [`BundleData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/BundleData.java). Like the historical broker data, the bundle data is split into a short-term and a long-term time frame. The information maintained in each time frame: - -* Message rate in/out for this bundle -* Message Throughput In/Out for this bundle -* Current number of samples for this bundle - -The time frames are implemented by maintaining the average of these values over a set, limited number of samples, where -the samples are obtained through the message rate and throughput values in the local data. 
Thus, if the update interval for the local data is 2 minutes, the number of short samples is 10, and the number of long samples is 1000, the short-term data is maintained over a period of `10 samples * 2 minutes / sample = 20 minutes`, while the long-term data is similarly maintained over a period of 2000 minutes. Whenever there are not enough samples to satisfy a given time frame, the average is taken only over the existing samples. When no samples are available, default values are assumed until they are overwritten by the first sample. Currently, the default values are:

* Message rate in/out: 50 messages per second both ways
* Message throughput in/out: 50 KB per second both ways

The bundle data is updated in memory on the leader broker whenever any broker writes its local data to ZooKeeper. Then, the bundle data is written to ZooKeeper by the leader broker periodically, at the same time as the historical broker data, according to the configuration `loadBalancerResourceQuotaUpdateIntervalMinutes`.

### Traffic Distribution

The modular load manager uses the abstraction provided by [`ModularLoadManagerStrategy`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/ModularLoadManagerStrategy.java) to make decisions about bundle assignment. The strategy makes a decision by considering the service configuration, the entire load data, and the bundle data for the bundle to be assigned. Currently, the only supported strategy is [`LeastLongTermMessageRate`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/impl/LeastLongTermMessageRate.java), though soon users will have the ability to inject their own strategies if desired.

#### Least Long Term Message Rate Strategy

As its name suggests, the least long term message rate strategy attempts to distribute bundles across brokers so that the message rate in the long-term time window for each broker is roughly the same. However, simply balancing load based on message rate does not handle the issue of asymmetric resource burden per message on each broker. Thus, the system resource usages, which are CPU, memory, direct memory, bandwidth in, and bandwidth out, are also considered in the assignment process. This is done by weighting the final message rate according to `1 / (overload_threshold - max_usage)`, where `overload_threshold` corresponds to the configuration `loadBalancerBrokerOverloadedThresholdPercentage` and `max_usage` is the maximum proportion among the system resources that is being utilized by the candidate broker. This multiplier ensures that machines that are more heavily taxed by the same message rates will receive less load. In particular, it tries to ensure that if one machine is overloaded, then all machines are approximately overloaded. In the case in which a broker's max usage exceeds the overload threshold, that broker is not considered for bundle assignment. If all brokers are overloaded, the bundle is randomly assigned.
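A minimal sketch of that weighting, under the definitions above (illustrative only, not the actual `LeastLongTermMessageRate` code; the method name is hypothetical):

```java
// Illustrative sketch of the weighting described above; usages and the
// threshold are percentages. Not the actual LeastLongTermMessageRate code.
static double weightedMessageRate(double longTermMsgRate,
                                  double maxUsage,
                                  double overloadThreshold) {
    if (maxUsage >= overloadThreshold) {
        // An overloaded broker is not considered for bundle assignment.
        return Double.POSITIVE_INFINITY;
    }
    // More heavily taxed brokers get a larger weighted rate, hence less load.
    return longTermMsgRate / (overloadThreshold - maxUsage);
}
```

The strategy would then prefer the candidate broker with the smallest weighted rate.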
- diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/develop-schema.md b/site2/website/versioned_docs/version-2.8.2-deprecated/develop-schema.md deleted file mode 100644 index 2d4461a5ea2b55..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/develop-schema.md +++ /dev/null @@ -1,62 +0,0 @@ ---- -id: develop-schema -title: Custom schema storage -sidebar_label: "Custom schema storage" -original_id: develop-schema ---- - -By default, Pulsar stores data type [schemas](concepts-schema-registry.md) in [Apache BookKeeper](https://bookkeeper.apache.org) (which is deployed alongside Pulsar). You can, however, use another storage system if you wish. This doc walks you through creating your own schema storage implementation. - -In order to use a non-default (i.e. non-BookKeeper) storage system for Pulsar schemas, you need to implement two Java interfaces: [`SchemaStorage`](#schemastorage-interface) and [`SchemaStorageFactory`](#schemastoragefactory-interface). - -## SchemaStorage interface - -The `SchemaStorage` interface has the following methods: - -```java - -public interface SchemaStorage { - // How schemas are updated - CompletableFuture put(String key, byte[] value, byte[] hash); - - // How schemas are fetched from storage - CompletableFuture get(String key, SchemaVersion version); - - // How schemas are deleted - CompletableFuture delete(String key); - - // Utility method for converting a schema version byte array to a SchemaVersion object - SchemaVersion versionFromBytes(byte[] version); - - // Startup behavior for the schema storage client - void start() throws Exception; - - // Shutdown behavior for the schema storage client - void close() throws Exception; -} - -``` - -> For a full-fledged example schema storage implementation, see the [`BookKeeperSchemaStorage`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/schema/BookkeeperSchemaStorage.java) class. - -## SchemaStorageFactory interface - -```java - -public interface SchemaStorageFactory { - @NotNull - SchemaStorage create(PulsarService pulsar) throws Exception; -} - -``` - -> For a full-fledged example schema storage factory implementation, see the [`BookKeeperSchemaStorageFactory`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/schema/BookkeeperSchemaStorageFactory.java) class. - -## Deployment - -In order to use your custom schema storage implementation, you'll need to: - -1. Package the implementation in a [JAR](https://docs.oracle.com/javase/tutorial/deployment/jar/basicsindex.html) file. -1. Add that jar to the `lib` folder in your Pulsar [binary or source distribution](getting-started-standalone.md#installing-pulsar). -1. Change the `schemaRegistryStorageClassName` configuration in [`broker.conf`](reference-configuration.md#broker) to your custom factory class (i.e. the `SchemaStorageFactory` implementation, not the `SchemaStorage` implementation). -1. Start up Pulsar. 
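For example, with a hypothetical factory class in a `com.example` package, the configuration change might look like this:

```properties
# broker.conf -- hypothetical custom factory; substitute the fully qualified
# class name of your SchemaStorageFactory implementation
schemaRegistryStorageClassName=com.example.schema.MySchemaStorageFactory
```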
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/develop-tools.md b/site2/website/versioned_docs/version-2.8.2-deprecated/develop-tools.md
deleted file mode 100644
index b5457790b80810..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/develop-tools.md
+++ /dev/null
@@ -1,111 +0,0 @@
---
id: develop-tools
title: Simulation tools
sidebar_label: "Simulation tools"
original_id: develop-tools
---

It is sometimes necessary to create a test environment and incur artificial load to observe how well load managers handle the load. The load simulation controller, the load simulation client, and the broker monitor were created to make creating this load and observing its effects on the managers easier.

## Simulation Client
The simulation client is a machine that creates and subscribes to topics with configurable message rates and sizes. Because simulating a large load sometimes requires multiple client machines, the user does not interact with the simulation client directly, but instead delegates requests to the simulation controller, which then sends signals to clients to start incurring load. The client implementation is in the class `org.apache.pulsar.testclient.LoadSimulationClient`.

### Usage
To start a simulation client, use the `pulsar-perf` script with the command `simulation-client` as follows:

```
pulsar-perf simulation-client --port --service-url
```

The client will then be ready to receive controller commands.

## Simulation Controller
The simulation controller sends signals to the simulation clients, requesting them to create new topics, stop old topics, change the load incurred by topics, and perform several other tasks. It is implemented in the class `org.apache.pulsar.testclient.LoadSimulationController` and presents a shell to the user as an interface with which to send commands.

### Usage
To start a simulation controller, use the `pulsar-perf` script with the command `simulation-controller` as follows:

```
pulsar-perf simulation-controller --cluster --client-port --clients
```

The clients should already be started before the controller is started. You will then be presented with a simple prompt, where you can issue commands to simulation clients. Arguments often refer to tenant names, namespace names, and topic names. In all cases, the BASE name of the tenants, namespaces, and topics is used. For example, for the topic `persistent://my_tenant/my_cluster/my_namespace/my_topic`, the tenant name is `my_tenant`, the namespace name is `my_namespace`, and the topic name is `my_topic`.
The controller can perform the following actions: - -* Create a topic with a producer and a consumer - * `trade [--rate ] - [--rand-rate ,] - [--size ]` -* Create a group of topics with a producer and a consumer - * `trade_group [--rate ] - [--rand-rate ,] - [--separation ] [--size ] - [--topics-per-namespace ]` -* Change the configuration of an existing topic - * `change [--rate ] - [--rand-rate ,] - [--size ]` -* Change the configuration of a group of topics - * `change_group [--rate ] [--rand-rate ,] - [--size ] [--topics-per-namespace ]` -* Shutdown a previously created topic - * `stop ` -* Shutdown a previously created group of topics - * `stop_group ` -* Copy the historical data from one ZooKeeper to another and simulate based on the message rates and sizes in that history - * `copy [--rate-multiplier value]` -* Simulate the load of the historical data on the current ZooKeeper (should be same ZooKeeper being simulated on) - * `simulate [--rate-multiplier value]` -* Stream the latest data from the given active ZooKeeper to simulate the real-time load of that ZooKeeper. - * `stream [--rate-multiplier value]` - -The "group" arguments in these commands allow the user to create or affect multiple topics at once. Groups are created -when calling the `trade_group` command, and all topics from these groups may be subsequently modified or stopped -with the `change_group` and `stop_group` commands respectively. All ZooKeeper arguments are of the form -`zookeeper_host:port`. - -### Difference Between Copy, Simulate, and Stream -The commands `copy`, `simulate`, and `stream` are very similar but have significant differences. `copy` is used when -you want to simulate the load of a static, external ZooKeeper on the ZooKeeper you are simulating on. Thus, -`source zookeeper` should be the ZooKeeper you want to copy and `target zookeeper` should be the ZooKeeper you are -simulating on, and then it will get the full benefit of the historical data of the source in both load manager -implementations. `simulate` on the other hand takes in only one ZooKeeper, the one you are simulating on. It assumes -that you are simulating on a ZooKeeper that has historical data for `SimpleLoadManagerImpl` and creates equivalent -historical data for `ModularLoadManagerImpl`. Then, the load according to the historical data is simulated by the -clients. Finally, `stream` takes in an active ZooKeeper different than the ZooKeeper being simulated on and streams -load data from it and simulates the real-time load. In all cases, the optional `rate-multiplier` argument allows the -user to simulate some proportion of the load. For instance, using `--rate-multiplier 0.05` will cause messages to -be sent at only `5%` of the rate of the load that is being simulated. - -## Broker Monitor -To observe the behavior of the load manager in these simulations, one may utilize the broker monitor, which is -implemented in `org.apache.pulsar.testclient.BrokerMonitor`. The broker monitor will print tabular load data to the -console as it is updated using watchers. - -### Usage -To start a broker monitor, use the `monitor-brokers` command in the `pulsar-perf` script: - -``` - -pulsar-perf monitor-brokers --connect-string - -``` - -The console will then continuously print load data until it is interrupted. 
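Putting this together, a hypothetical controller session might look like the following (hosts, ports, names, rates, and the positional argument order are placeholders, not output from a real run):

```
pulsar-perf simulation-controller --cluster test-cluster --client-port 9092 --clients client1,client2
> trade my_tenant my_namespace my_topic --rate 100 --size 1024
> change my_tenant my_namespace my_topic --rate 500
> stop my_tenant my_namespace my_topic
```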
- diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/developing-binary-protocol.md b/site2/website/versioned_docs/version-2.8.2-deprecated/developing-binary-protocol.md deleted file mode 100644 index cb9a5f69ed0ddf..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/developing-binary-protocol.md +++ /dev/null @@ -1,606 +0,0 @@ ---- -id: developing-binary-protocol -title: Pulsar binary protocol specification -sidebar_label: "Binary protocol" -original_id: developing-binary-protocol ---- - -Pulsar uses a custom binary protocol for communications between producers/consumers and brokers. This protocol is designed to support required features, such as acknowledgements and flow control, while ensuring maximum transport and implementation efficiency. - -Clients and brokers exchange *commands* with each other. Commands are formatted as binary [protocol buffer](https://developers.google.com/protocol-buffers/) (aka *protobuf*) messages. The format of protobuf commands is specified in the [`PulsarApi.proto`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/proto/PulsarApi.proto) file and also documented in the [Protobuf interface](#protobuf-interface) section below. - -> ### Connection sharing -> Commands for different producers and consumers can be interleaved and sent through the same connection without restriction. - -All commands associated with Pulsar's protocol are contained in a [`BaseCommand`](#pulsar.proto.BaseCommand) protobuf message that includes a [`Type`](#pulsar.proto.Type) [enum](https://developers.google.com/protocol-buffers/docs/proto#enum) with all possible subcommands as optional fields. `BaseCommand` messages can specify only one subcommand. - -## Framing - -Since protobuf doesn't provide any sort of message frame, all messages in the Pulsar protocol are prepended with a 4-byte field that specifies the size of the frame. The maximum allowable size of a single frame is 5 MB. - -The Pulsar protocol allows for two types of commands: - -1. **Simple commands** that do not carry a message payload. -2. **Payload commands** that bear a payload that is used when publishing or delivering messages. In payload commands, the protobuf command data is followed by protobuf [metadata](#message-metadata) and then the payload, which is passed in raw format outside of protobuf. All sizes are passed as 4-byte unsigned big endian integers. - -> Message payloads are passed in raw format rather than protobuf format for efficiency reasons. 
- -### Simple commands - -Simple (payload-free) commands have this basic structure: - -| Component | Description | Size (in bytes) | -|:------------|:----------------------------------------------------------------------------------------|:----------------| -| totalSize | The size of the frame, counting everything that comes after it (in bytes) | 4 | -| commandSize | The size of the protobuf-serialized command | 4 | -| message | The protobuf message serialized in a raw binary format (rather than in protobuf format) | | - -### Payload commands - -Payload commands have this basic structure: - -| Component | Description | Size (in bytes) | -|:-------------|:--------------------------------------------------------------------------------------------|:----------------| -| totalSize | The size of the frame, counting everything that comes after it (in bytes) | 4 | -| commandSize | The size of the protobuf-serialized command | 4 | -| message | The protobuf message serialized in a raw binary format (rather than in protobuf format) | | -| magicNumber | A 2-byte byte array (`0x0e01`) identifying the current format | 2 | -| checksum | A [CRC32-C checksum](http://www.evanjones.ca/crc32c.html) of everything that comes after it | 4 | -| metadataSize | The size of the message [metadata](#message-metadata) | 4 | -| metadata | The message [metadata](#message-metadata) stored as a binary protobuf message | | -| payload | Anything left in the frame is considered the payload and can include any sequence of bytes | | - -## Message metadata - -Message metadata is stored alongside the application-specified payload as a serialized protobuf message. Metadata is created by the producer and passed on unchanged to the consumer. - -| Field | Description | -|:-------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| `producer_name` | The name of the producer that published the message | -| `sequence_id` | The sequence ID of the message, assigned by producer | -| `publish_time` | The publish timestamp in Unix time (i.e. as the number of milliseconds since January 1st, 1970 in UTC) | -| `properties` | A sequence of key/value pairs (using the [`KeyValue`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/proto/PulsarApi.proto#L32) message). These are application-defined keys and values with no special meaning to Pulsar. 
| -| `replicated_from` *(optional)* | Indicates that the message has been replicated and specifies the name of the [cluster](reference-terminology.md#cluster) where the message was originally published | -| `partition_key` *(optional)* | While publishing on a partition topic, if the key is present, the hash of the key is used to determine which partition to choose | -| `compression` *(optional)* | Signals that payload has been compressed and with which compression library | -| `uncompressed_size` *(optional)* | If compression is used, the producer must fill the uncompressed size field with the original payload size | -| `num_messages_in_batch` *(optional)* | If this message is really a [batch](#batch-messages) of multiple entries, this field must be set to the number of messages in the batch | - -### Batch messages - -When using batch messages, the payload will be containing a list of entries, -each of them with its individual metadata, defined by the `SingleMessageMetadata` -object. - - -For a single batch, the payload format will look like this: - - -| Field | Description | -|:--------------|:------------------------------------------------------------| -| metadataSizeN | The size of the single message metadata serialized Protobuf | -| metadataN | Single message metadata | -| payloadN | Message payload passed by application | - -Each metadata field looks like this; - -| Field | Description | -|:---------------------------|:--------------------------------------------------------| -| properties | Application-defined properties | -| partition key *(optional)* | Key to indicate the hashing to a particular partition | -| payload_size | Size of the payload for the single message in the batch | - -When compression is enabled, the whole batch will be compressed at once. - -## Interactions - -### Connection establishment - -After opening a TCP connection to a broker, typically on port 6650, the client -is responsible to initiate the session. - -![Connect interaction](/assets/binary-protocol-connect.png) - -After receiving a `Connected` response from the broker, the client can -consider the connection ready to use. Alternatively, if the broker doesn't -validate the client authentication, it will reply with an `Error` command and -close the TCP connection. - -Example: - -```protobuf - -message CommandConnect { - "client_version" : "Pulsar-Client-Java-v1.15.2", - "auth_method_name" : "my-authentication-plugin", - "auth_data" : "my-auth-data", - "protocol_version" : 6 -} - -``` - -Fields: - * `client_version` → String based identifier. Format is not enforced - * `auth_method_name` → *(optional)* Name of the authentication plugin if auth - enabled - * `auth_data` → *(optional)* Plugin specific authentication data - * `protocol_version` → Indicates the protocol version supported by the - client. Broker will not send commands introduced in newer revisions of the - protocol. Broker might be enforcing a minimum version - -```protobuf - -message CommandConnected { - "server_version" : "Pulsar-Broker-v1.15.2", - "protocol_version" : 6 -} - -``` - -Fields: - * `server_version` → String identifier of broker version - * `protocol_version` → Protocol version supported by the broker. 
Client - must not attempt to send commands introduced in newer revisions of the - protocol - -### Keep Alive - -To identify prolonged network partitions between clients and brokers or cases -in which a machine crashes without interrupting the TCP connection on the remote -end (eg: power outage, kernel panic, hard reboot...), we have introduced a -mechanism to probe for the availability status of the remote peer. - -Both clients and brokers are sending `Ping` commands periodically and they will -close the socket if a `Pong` response is not received within a timeout (default -used by broker is 60s). - -A valid implementation of a Pulsar client is not required to send the `Ping` -probe, though it is required to promptly reply after receiving one from the -broker in order to prevent the remote side from forcibly closing the TCP connection. - - -### Producer - -In order to send messages, a client needs to establish a producer. When creating -a producer, the broker will first verify that this particular client is -authorized to publish on the topic. - -Once the client gets confirmation of the producer creation, it can publish -messages to the broker, referring to the producer id negotiated before. - -![Producer interaction](/assets/binary-protocol-producer.png) - -##### Command Producer - -```protobuf - -message CommandProducer { - "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic", - "producer_id" : 1, - "request_id" : 1 -} - -``` - -Parameters: - * `topic` → Complete topic name to where you want to create the producer on - * `producer_id` → Client generated producer identifier. Needs to be unique - within the same connection - * `request_id` → Identifier for this request. Used to match the response with - the originating request. Needs to be unique within the same connection - * `producer_name` → *(optional)* If a producer name is specified, the name will - be used, otherwise the broker will generate a unique name. Generated - producer name is guaranteed to be globally unique. Implementations are - expected to let the broker generate a new producer name when the producer - is initially created, then reuse it when recreating the producer after - reconnections. - -The broker will reply with either `ProducerSuccess` or `Error` commands. - -##### Command ProducerSuccess - -```protobuf - -message CommandProducerSuccess { - "request_id" : 1, - "producer_name" : "generated-unique-producer-name" -} - -``` - -Parameters: - * `request_id` → Original id of the `CreateProducer` request - * `producer_name` → Generated globally unique producer name or the name - specified by the client, if any. - -##### Command Send - -Command `Send` is used to publish a new message within the context of an -already existing producer. This command is used in a frame that includes command -as well as message payload, for which the complete format is specified in the [payload commands](#payload-commands) section. - -```protobuf - -message CommandSend { - "producer_id" : 1, - "sequence_id" : 0, - "num_messages" : 1 -} - -``` - -Parameters: - * `producer_id` → id of an existing producer - * `sequence_id` → each message has an associated sequence id which is expected - to be implemented with a counter starting at 0. The `SendReceipt` that - acknowledges the effective publishing of messages will refer to it by - its sequence id. - * `num_messages` → *(optional)* Used when publishing a batch of messages at - once. 
- -##### Command SendReceipt - -After a message has been persisted on the configured number of replicas, the -broker will send the acknowledgment receipt to the producer. - -```protobuf - -message CommandSendReceipt { - "producer_id" : 1, - "sequence_id" : 0, - "message_id" : { - "ledgerId" : 123, - "entryId" : 456 - } -} - -``` - -Parameters: - * `producer_id` → id of producer originating the send request - * `sequence_id` → sequence id of the published message - * `message_id` → message id assigned by the system to the published message - Unique within a single cluster. Message id is composed of 2 longs, `ledgerId` - and `entryId`, that reflect that this unique id is assigned when appending - to a BookKeeper ledger - - -##### Command CloseProducer - -**Note**: *This command can be sent by either producer or broker*. - -When receiving a `CloseProducer` command, the broker will stop accepting any -more messages for the producer, wait until all pending messages are persisted -and then reply `Success` to the client. - -The broker can send a `CloseProducer` command to client when it's performing -a graceful failover (eg: broker is being restarted, or the topic is being unloaded -by load balancer to be transferred to a different broker). - -When receiving the `CloseProducer`, the client is expected to go through the -service discovery lookup again and recreate the producer again. The TCP -connection is not affected. - -### Consumer - -A consumer is used to attach to a subscription and consume messages from it. -After every reconnection, a client needs to subscribe to the topic. If a -subscription is not already there, a new one will be created. - -![Consumer](/assets/binary-protocol-consumer.png) - -#### Flow control - -After the consumer is ready, the client needs to *give permission* to the -broker to push messages. This is done with the `Flow` command. - -A `Flow` command gives additional *permits* to send messages to the consumer. -A typical consumer implementation will use a queue to accumulate these messages -before the application is ready to consume them. - -After the application has dequeued half of the messages in the queue, the consumer -sends permits to the broker to ask for more messages (equals to half of the messages in the queue). - -For example, if the queue size is 1000 and the consumer consumes 500 messages in the queue. -Then the consumer sends permits to the broker to ask for 500 messages. - -##### Command Subscribe - -```protobuf - -message CommandSubscribe { - "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic", - "subscription" : "my-subscription-name", - "subType" : "Exclusive", - "consumer_id" : 1, - "request_id" : 1 -} - -``` - -Parameters: - * `topic` → Complete topic name to where you want to create the consumer on - * `subscription` → Subscription name - * `subType` → Subscription type: Exclusive, Shared, Failover, Key_Shared - * `consumer_id` → Client generated consumer identifier. Needs to be unique - within the same connection - * `request_id` → Identifier for this request. Used to match the response with - the originating request. Needs to be unique within the same connection - * `consumer_name` → *(optional)* Clients can specify a consumer name. This - name can be used to track a particular consumer in the stats. Also, in - Failover subscription type, the name is used to decide which consumer is - elected as *master* (the one receiving messages): consumers are sorted by - their consumer name and the first one is elected master. 
-
-##### Command Flow
-
-```protobuf
-
-message CommandFlow {
-  "consumer_id" : 1,
-  "messagePermits" : 1000
-}
-
-```
-
-Parameters:
-* `consumer_id` → Id of an already established consumer
-* `messagePermits` → Number of additional permits to grant to the broker for
-  pushing more messages
-
-##### Command Message
-
-Command `Message` is used by the broker to push messages to an existing consumer,
-within the limits of the given permits.
-
-This command is used in a frame that includes the message payload as well, for
-which the complete format is specified in the [payload commands](#payload-commands)
-section.
-
-```protobuf
-
-message CommandMessage {
-  "consumer_id" : 1,
-  "message_id" : {
-    "ledgerId" : 123,
-    "entryId" : 456
-  }
-}
-
-```
-
-##### Command Ack
-
-An `Ack` is used to signal to the broker that a given message has been
-successfully processed by the application and can be discarded by the broker.
-
-In addition, the broker will also maintain the consumer position based on the
-acknowledged messages.
-
-```protobuf
-
-message CommandAck {
-  "consumer_id" : 1,
-  "ack_type" : "Individual",
-  "message_id" : {
-    "ledgerId" : 123,
-    "entryId" : 456
-  }
-}
-
-```
-
-Parameters:
- * `consumer_id` → Id of an already established consumer
- * `ack_type` → Type of acknowledgment: `Individual` or `Cumulative`
- * `message_id` → Id of the message to acknowledge
- * `validation_error` → *(optional)* Indicates that the consumer has discarded
-   the messages due to: `UncompressedSizeCorruption`,
-   `DecompressionError`, `ChecksumMismatch`, `BatchDeSerializeError`
- * `properties` → *(optional)* Reserved configuration items
- * `txnid_most_bits` → *(optional)* The most significant bits of the transaction
-   ID; this matches the Transaction Coordinator ID. `txnid_most_bits` and
-   `txnid_least_bits` together uniquely identify a transaction.
- * `txnid_least_bits` → *(optional)* The least significant bits of the transaction
-   ID, i.e. the ID of the transaction opened in a transaction coordinator.
-   `txnid_most_bits` and `txnid_least_bits` together uniquely identify a transaction.
- * `request_id` → *(optional)* ID for handling the response and timeout.
-
-##### Command AckResponse
-
-An `AckResponse` is the broker's response to an acknowledgment request sent by the client. It contains the `consumer_id` sent in the request.
-If a transaction is used, it contains both the Transaction ID and the Request ID that were sent in the request. The client finishes the specific request according to the Request ID. If the `error` field is set, it indicates that the request has failed.
-
-An example of an `AckResponse` carrying transaction information:
-
-```protobuf
-
-message CommandAckResponse {
-  "consumer_id" : 1,
-  "txnid_least_bits" : 0,
-  "txnid_most_bits" : 1,
-  "request_id" : 5
-}
-
-```
-
-##### Command CloseConsumer
-
-**Note**: *This command can be sent by either consumer or broker*.
-
-This command behaves the same as [`CloseProducer`](#command-closeproducer).
-
-##### Command RedeliverUnacknowledgedMessages
-
-A consumer can ask the broker to redeliver some or all of the pending messages
-that were pushed to that particular consumer and not yet acknowledged.
-
-The protobuf object accepts a list of message ids that the consumer wants to
-be redelivered. If the list is empty, the broker will redeliver all the
-pending messages.
-
-On redelivery, messages can be sent to the same consumer or, in the case of a
-shared subscription, spread across all available consumers.
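-
-The sketch below shows the client-side bookkeeping that typically sits behind
-`Ack` and `RedeliverUnacknowledgedMessages`: remember what was pushed, forget
-what was acknowledged, and ask for the rest again when needed. It is
-illustrative only; `UnackedMessageTracker` and its methods are hypothetical
-names, and `MessageId` is a simplified stand-in for the protocol's
-(`ledgerId`, `entryId`) pair.
-
-```java
-
-import java.util.Set;
-import java.util.concurrent.ConcurrentHashMap;
-
-// Hypothetical helper: tracks delivered-but-unacknowledged message ids.
-class UnackedMessageTracker {
-    record MessageId(long ledgerId, long entryId) {}
-
-    private final Set<MessageId> outstanding = ConcurrentHashMap.newKeySet();
-
-    // Called for every message pushed by the broker.
-    void onMessagePushed(MessageId id) {
-        outstanding.add(id);
-    }
-
-    // Called when the application acks a message (CommandAck, ack_type = Individual).
-    void onIndividualAck(MessageId id) {
-        outstanding.remove(id);
-    }
-
-    // Ids to place in a RedeliverUnacknowledgedMessages command, e.g. after an
-    // application-level timeout. An empty list would request all pending messages.
-    Set<MessageId> pendingForRedelivery() {
-        return Set.copyOf(outstanding);
-    }
-}
-
-```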
-
-
-##### Command ReachedEndOfTopic
-
-This command is sent by the broker to a particular consumer whenever the topic
-has been "terminated" and all the messages on the subscription have been
-acknowledged.
-
-The client should use this command to notify the application that no more
-messages will be coming from that consumer.
-
-##### Command ConsumerStats
-
-This command is sent by the client to retrieve Subscriber and Consumer level
-stats from the broker.
-
-Parameters:
- * `request_id` → Id of the request, used to correlate the request
-   and the response.
- * `consumer_id` → Id of an already established consumer.
-
-##### Command ConsumerStatsResponse
-
-This is the broker's response to a ConsumerStats request by the client.
-It contains the Subscriber and Consumer level stats of the `consumer_id` sent in the request.
-If the `error_code` or the `error_message` field is set, it indicates that the request has failed.
-
-##### Command Unsubscribe
-
-This command is sent by the client to unsubscribe the `consumer_id` from the associated topic.
-
-Parameters:
- * `request_id` → Id of the request.
- * `consumer_id` → Id of an already established consumer which needs to unsubscribe.
-
-
-## Service discovery
-
-### Topic lookup
-
-Topic lookup needs to be performed each time a client needs to create or
-reconnect a producer or a consumer. Lookup is used to discover which particular
-broker is serving the topic we are about to use.
-
-Lookup can be done with a REST call as described in the [admin API](admin-api-topics.md#lookup-of-topic)
-docs.
-
-Since Pulsar 1.16, it is also possible to perform the lookup within the binary
-protocol.
-
-For the sake of example, let's assume we have a service discovery component
-running at `pulsar://broker.example.com:6650`.
-
-Individual brokers will be running at `pulsar://broker-1.example.com:6650`,
-`pulsar://broker-2.example.com:6650`, ...
-
-A client can use a connection to the discovery service host to issue a
-`LookupTopic` command. The response can either be the hostname of a broker to
-connect to, or the hostname of a broker against which to retry the lookup.
-
-The `LookupTopic` command has to be used in a connection that has already
-gone through the `Connect` / `Connected` initial handshake.
-
-![Topic lookup](/assets/binary-protocol-topic-lookup.png)
-
-```protobuf
-
-message CommandLookupTopic {
-  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
-  "request_id" : 1,
-  "authoritative" : false
-}
-
-```
-
-Fields:
- * `topic` → Topic name to look up
- * `request_id` → Id of the request that will be passed with its response
- * `authoritative` → The initial lookup request should use false.
 When following a
-   redirect response, the client should pass the same value contained in the
-   response.
-
-##### LookupTopicResponse
-
-Example of a response with a successful lookup:
-
-```protobuf
-
-message CommandLookupTopicResponse {
-  "request_id" : 1,
-  "response" : "Connect",
-  "brokerServiceUrl" : "pulsar://broker-1.example.com:6650",
-  "brokerServiceUrlTls" : "pulsar+ssl://broker-1.example.com:6651",
-  "authoritative" : true
-}
-
-```
-
-Example of a lookup response with redirection:
-
-```protobuf
-
-message CommandLookupTopicResponse {
-  "request_id" : 1,
-  "response" : "Redirect",
-  "brokerServiceUrl" : "pulsar://broker-2.example.com:6650",
-  "brokerServiceUrlTls" : "pulsar+ssl://broker-2.example.com:6651",
-  "authoritative" : true
-}
-
-```
-
-In this second case, we need to reissue the `LookupTopic` command request
-to `broker-2.example.com`, and this broker will be able to give a definitive
-answer to the lookup request.
-
-### Partitioned topics discovery
-
-Partitioned topics metadata discovery is used to find out whether a topic is a
-"partitioned topic" and how many partitions were set up.
-
-If the topic is marked as "partitioned", the client is expected to create
-multiple producers or consumers, one for each partition, using the `partition-X`
-suffix.
-
-This information only needs to be retrieved the first time a producer or
-consumer is created. There is no need to do this after reconnections.
-
-The discovery of partitioned topics metadata works very similarly to the topic
-lookup. The client sends a request to the service discovery address, and the
-response contains the actual metadata.
-
-##### Command PartitionedTopicMetadata
-
-```protobuf
-
-message CommandPartitionedTopicMetadata {
-  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
-  "request_id" : 1
-}
-
-```
-
-Fields:
- * `topic` → the topic for which to check the partitions metadata
- * `request_id` → Id of the request that will be passed with its response
-
-
-##### Command PartitionedTopicMetadataResponse
-
-Example of a response with metadata:
-
-```protobuf
-
-message CommandPartitionedTopicMetadataResponse {
-  "request_id" : 1,
-  "response" : "Success",
-  "partitions" : 32
-}
-
-```
-
-## Protobuf interface
-
-All Pulsar's Protobuf definitions can be found {@inject: github:here:/pulsar-common/src/main/proto/PulsarApi.proto}.
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/functions-cli.md b/site2/website/versioned_docs/version-2.8.2-deprecated/functions-cli.md
deleted file mode 100644
index c9fcfa201525f0..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/functions-cli.md
+++ /dev/null
@@ -1,198 +0,0 @@
----
-id: functions-cli
-title: Pulsar Functions command line tool
-sidebar_label: "Reference: CLI"
-original_id: functions-cli
----
-
-The following tables list the Pulsar Functions command-line subcommands, along with their modes and parameters.
-
-## localrun
-
-Run a Pulsar Function locally, rather than deploying it to the Pulsar cluster.
-
-Name | Description | Default
----|---|---
-auto-ack | Whether or not the framework acknowledges messages automatically. | true |
-broker-service-url | The URL for the Pulsar broker. | |
-classname | The class name of a Pulsar Function. | |
-client-auth-params | Client authentication parameter. | |
-client-auth-plugin | Client authentication plugin the function process uses to connect to the broker.
| | -CPU | The CPU in cores that need to be allocated per function instance (applicable only to docker runtime).| | -custom-schema-inputs | The map of input topics to Schema class names (as a JSON string). | | -custom-serde-inputs | The map of input topics to SerDe class names (as a JSON string). | | -dead-letter-topic | The topic where all messages that were not processed successfully are sent. This parameter is not supported in Python Functions. | | -disk | The disk in bytes that need to be allocated per function instance (applicable only to docker runtime). | | -fqfn | The Fully Qualified Function Name (FQFN) for the function. | | -function-config-file | The path to a YAML config file specifying the configuration of a Pulsar Function. | | -go | Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -hostname-verification-enabled | Enable hostname verification. | false -inputs | The input topic or topics of a Pulsar Function (multiple topics can be specified as a comma-separated list). | | -jar | Path to the jar file for the function (if the function is written in Java). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -instance-id-offset | Start the instanceIds from this offset. | 0 -log-topic | The topic to which the logs a Pulsar Function are produced. | | -max-message-retries | How many times should we try to process a message before giving up. | | -name | The name of a Pulsar Function. | | -namespace | The namespace of a Pulsar Function. | | -output | The output topic of a Pulsar Function (If none is specified, no output is written). | | -output-serde-classname | The SerDe class to be used for messages output by the function. | | -parallelism | The parallelism factor of a Pulsar Function (i.e. the number of function instances to run). | | -processing-guarantees | The processing guarantees (delivery semantics) applied to the function. Available values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]. | ATLEAST_ONCE -py | Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -ram | The ram in bytes that need to be allocated per function instance (applicable only to process/docker runtime). | | -retain-ordering | Function consumes and processes messages in order. | | -schema-type | The builtin schema type or custom schema class name to be used for messages output by the function. | | -sliding-interval-count | The number of messages after which the window slides. | | -sliding-interval-duration-ms | The time duration after which the window slides. | | -subs-name | Pulsar source subscription name if user wants a specific subscription-name for the input-topic consumer. | | -tenant | The tenant of a Pulsar Function. | | -timeout-ms | The message timeout in milliseconds. | | -tls-allow-insecure | Allow insecure tls connection. | false -tls-trust-cert-path | tls trust cert file path. 
| | -topics-pattern | The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (only supported in Java Function). | | -use-tls | Use tls connection. | false -user-config | User-defined config key/values. | | -window-length-count | The number of messages per window. | | -window-length-duration-ms | The time duration of the window in milliseconds. | | - - -## create - -Create and deploy a Pulsar Function in cluster mode. - -Name | Description | Default ----|---|--- -auto-ack | Whether or not the framework acknowledges messages automatically. | true | -classname | The class name of a Pulsar Function. | | -CPU | The CPU in cores that need to be allocated per function instance (applicable only to docker runtime).| | -custom-runtime-options | A string that encodes options to customize the runtime, see docs for configured runtime for details | | -custom-schema-inputs | The map of input topics to Schema class names (as a JSON string). | | -custom-serde-inputs | The map of input topics to SerDe class names (as a JSON string). | | -dead-letter-topic | The topic where all messages that were not processed successfully are sent. This parameter is not supported in Python Functions. | | -disk | The disk in bytes that need to be allocated per function instance (applicable only to docker runtime). | | -fqfn | The Fully Qualified Function Name (FQFN) for the function. | | -function-config-file | The path to a YAML config file specifying the configuration of a Pulsar Function. | | -go | Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -inputs | The input topic or topics of a Pulsar Function (multiple topics can be specified as a comma-separated list). | | -jar | Path to the jar file for the function (if the function is written in Java). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -log-topic | The topic to which the logs of a Pulsar Function are produced. | | -max-message-retries | How many times should we try to process a message before giving up. | | -name | The name of a Pulsar Function. | | -namespace | The namespace of a Pulsar Function. | | -output | The output topic of a Pulsar Function (If none is specified, no output is written). | | -output-serde-classname | The SerDe class to be used for messages output by the function. | | -parallelism | The parallelism factor of a Pulsar Function (i.e. the number of function instances to run). | | -processing-guarantees | The processing guarantees (delivery semantics) applied to the function. Available values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]. | ATLEAST_ONCE -py | Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -ram | The ram in bytes that need to be allocated per function instance (applicable only to process/docker runtime). 
| | -retain-ordering | Function consumes and processes messages in order. | | -schema-type | The builtin schema type or custom schema class name to be used for messages output by the function. | | -sliding-interval-count | The number of messages after which the window slides. | | -sliding-interval-duration-ms | The time duration after which the window slides. | | -subs-name | Pulsar source subscription name if user wants a specific subscription-name for the input-topic consumer. | | -tenant | The tenant of a Pulsar Function. | | -timeout-ms | The message timeout in milliseconds. | | -topics-pattern | The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (only supported in Java Function). | | -user-config | User-defined config key/values. | | -window-length-count | The number of messages per window. | | -window-length-duration-ms | The time duration of the window in milliseconds. | | - -## delete - -Delete a Pulsar Function that is running on a Pulsar cluster. - -Name | Description | Default ----|---|--- -fqfn | The Fully Qualified Function Name (FQFN) for the function. | | -name | The name of a Pulsar Function. | | -namespace | The namespace of a Pulsar Function. | | -tenant | The tenant of a Pulsar Function. | | - -## update - -Update a Pulsar Function that has been deployed to a Pulsar cluster. - -Name | Description | Default ----|---|--- -auto-ack | Whether or not the framework acknowledges messages automatically. | true | -classname | The class name of a Pulsar Function. | | -CPU | The CPU in cores that need to be allocated per function instance (applicable only to docker runtime). | | -custom-runtime-options | A string that encodes options to customize the runtime, see docs for configured runtime for details | | -custom-schema-inputs | The map of input topics to Schema class names (as a JSON string). | | -custom-serde-inputs | The map of input topics to SerDe class names (as a JSON string). | | -dead-letter-topic | The topic where all messages that were not processed successfully are sent. This parameter is not supported in Python Functions. | | -disk | The disk in bytes that need to be allocated per function instance (applicable only to docker runtime). | | -fqfn | The Fully Qualified Function Name (FQFN) for the function. | | -function-config-file | The path to a YAML config file specifying the configuration of a Pulsar Function. | | -go | Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -inputs | The input topic or topics of a Pulsar Function (multiple topics can be specified as a comma-separated list). | | -jar | Path to the jar file for the function (if the function is written in Java). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -log-topic | The topic to which the logs of a Pulsar Function are produced. | | -max-message-retries | How many times should we try to process a message before giving up. | | -name | The name of a Pulsar Function. | | -namespace | The namespace of a Pulsar Function. 
| |
-output | The output topic of a Pulsar Function (If none is specified, no output is written). | |
-output-serde-classname | The SerDe class to be used for messages output by the function. | |
-parallelism | The parallelism factor of a Pulsar Function (i.e. the number of function instances to run). | |
-processing-guarantees | The processing guarantees (delivery semantics) applied to the function. Available values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]. | ATLEAST_ONCE
-py | Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | |
-ram | The ram in bytes that need to be allocated per function instance (applicable only to process/docker runtime). | |
-retain-ordering | Function consumes and processes messages in order. | |
-schema-type | The builtin schema type or custom schema class name to be used for messages output by the function. | |
-sliding-interval-count | The number of messages after which the window slides. | |
-sliding-interval-duration-ms | The time duration after which the window slides. | |
-subs-name | Pulsar source subscription name if user wants a specific subscription-name for the input-topic consumer. | |
-tenant | The tenant of a Pulsar Function. | |
-timeout-ms | The message timeout in milliseconds. | |
-topics-pattern | The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (only supported in Java Function). | |
-update-auth-data | Whether or not to update the auth data. | false
-user-config | User-defined config key/values. | |
-window-length-count | The number of messages per window. | |
-window-length-duration-ms | The time duration of the window in milliseconds. | |
-
-## get
-
-Fetch information about a Pulsar Function.
-
-Name | Description | Default
----|---|---
-fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
-name | The name of a Pulsar Function. | |
-namespace | The namespace of a Pulsar Function. | |
-tenant | The tenant of a Pulsar Function. | |
-
-## restart
-
-Restart a function instance.
-
-Name | Description | Default
----|---|---
-fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
-instance-id | The function instanceId (restarts all instances if instance-id is not provided). | |
-name | The name of a Pulsar Function. | |
-namespace | The namespace of a Pulsar Function. | |
-tenant | The tenant of a Pulsar Function. | |
-
-## stop
-
-Stop a function instance.
-
-Name | Description | Default
----|---|---
-fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
-instance-id | The function instanceId (stops all instances if instance-id is not provided). | |
-name | The name of a Pulsar Function. | |
-namespace | The namespace of a Pulsar Function. | |
-tenant | The tenant of a Pulsar Function. | |
-
-## start
-
-Start a stopped function instance.
-
-Name | Description | Default
----|---|---
-fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
-instance-id | The function instanceId (starts all instances if instance-id is not provided). | |
-name | The name of a Pulsar Function. | |
-namespace | The namespace of a Pulsar Function.
| |
-tenant | The tenant of a Pulsar Function. | |
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/functions-debug.md b/site2/website/versioned_docs/version-2.8.2-deprecated/functions-debug.md
deleted file mode 100644
index e1d55ae0897aa5..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/functions-debug.md
+++ /dev/null
@@ -1,533 +0,0 @@
----
-id: functions-debug
-title: Debug Pulsar Functions
-sidebar_label: "How-to: Debug"
-original_id: functions-debug
----
-
-You can use the following methods to debug Pulsar Functions:
-
-* [Captured stderr](functions-debug.md#captured-stderr)
-* [Use unit test](functions-debug.md#use-unit-test)
-* [Debug with localrun mode](functions-debug.md#debug-with-localrun-mode)
-* [Use log topic](functions-debug.md#use-log-topic)
-* [Use Functions CLI](functions-debug.md#use-functions-cli)
-
-## Captured stderr
-
-Function startup information and captured stderr output is written to `logs/functions/<tenant>/<namespace>/<function>/<function>-<instance>.log`.
-
-This is useful for debugging why a function fails to start.
-
-## Use unit test
-
-A Pulsar Function is a function with inputs and outputs, so you can test a Pulsar Function in a similar way as you test any other function.
-
-For example, if you have the following Pulsar Function:
-
-```java
-
-import java.util.function.Function;
-
-public class JavaNativeExclamationFunction implements Function<String, String> {
-    @Override
-    public String apply(String input) {
-        return String.format("%s!", input);
-    }
-}
-
-```
-
-You can write a simple unit test to test this Pulsar Function.
-
-:::tip
-
-Pulsar uses TestNG for testing.
-
-:::
-
-```java
-
-@Test
-public void testJavaNativeExclamationFunction() {
-    JavaNativeExclamationFunction exclamation = new JavaNativeExclamationFunction();
-    String output = exclamation.apply("foo");
-    Assert.assertEquals(output, "foo!");
-}
-
-```
-
-The following Pulsar Function implements the `org.apache.pulsar.functions.api.Function` interface.
-
-```java
-
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-
-public class ExclamationFunction implements Function<String, String> {
-    @Override
-    public String process(String input, Context context) {
-        return String.format("%s!", input);
-    }
-}
-
-```
-
-In this situation, you can write a unit test for this function as well. Remember to mock the `Context` parameter. The following is an example.
-
-:::tip
-
-Pulsar uses TestNG for testing.
-
-:::
-
-```java
-
-@Test
-public void testExclamationFunction() {
-    ExclamationFunction exclamation = new ExclamationFunction();
-    String output = exclamation.process("foo", mock(Context.class));
-    Assert.assertEquals(output, "foo!");
-}
-
-```
-
-## Debug with localrun mode
-
-When you run a Pulsar Function in localrun mode, it launches an instance of the function on your local machine as a thread.
-
-In this mode, a Pulsar Function consumes and produces actual data to a Pulsar cluster, and mirrors how the function actually runs in a Pulsar cluster.
-
-:::note
-
-Currently, debugging with localrun mode is only supported by Pulsar Functions written in Java. You need Pulsar version 2.4.0 or later to do the following. Even though localrun is available in versions earlier than Pulsar 2.4.0, you cannot debug with localrun mode programmatically or run functions as threads.
-
-:::
-
-You can launch your function in the following manner.
-
-```java
-
-FunctionConfig functionConfig = new FunctionConfig();
-functionConfig.setName(functionName);
-functionConfig.setInputs(Collections.singleton(sourceTopic));
-functionConfig.setClassName(ExclamationFunction.class.getName());
-functionConfig.setRuntime(FunctionConfig.Runtime.JAVA);
-functionConfig.setOutput(sinkTopic);
-
-LocalRunner localRunner = LocalRunner.builder().functionConfig(functionConfig).build();
-localRunner.start(true);
-
-```
-
-In this way, you can easily debug functions using an IDE. Set breakpoints and manually step through a function to debug with real data.
-
-The following example illustrates how to programmatically launch a function in localrun mode.
-
-```java
-
-public class ExclamationFunction implements Function<String, String> {
-
-    @Override
-    public String process(String s, Context context) throws Exception {
-        return s + "!";
-    }
-
-    public static void main(String[] args) throws Exception {
-        FunctionConfig functionConfig = new FunctionConfig();
-        functionConfig.setName("exclamation");
-        functionConfig.setInputs(Collections.singleton("input"));
-        functionConfig.setClassName(ExclamationFunction.class.getName());
-        functionConfig.setRuntime(FunctionConfig.Runtime.JAVA);
-        functionConfig.setOutput("output");
-
-        LocalRunner localRunner = LocalRunner.builder().functionConfig(functionConfig).build();
-        localRunner.start(false);
-    }
-}
-
-```
-
-To use localrun mode programmatically, add the following dependency.
-
-```xml
-
-<dependency>
-    <groupId>org.apache.pulsar</groupId>
-    <artifactId>pulsar-functions-local-runner</artifactId>
-    <version>${pulsar.version}</version>
-</dependency>
-
-```
-
-For complete code samples, see [here](https://github.com/jerrypeng/pulsar-functions-demos/tree/master/debugging).
-
-:::note
-
-Debugging with localrun mode for Pulsar Functions written in other languages will be supported soon.
-
-:::
-
-## Use log topic
-
-In Pulsar Functions, you can produce the log information defined in a function to a specified log topic. You can configure consumers to consume messages from the specified log topic to check the log information.
-
-![Pulsar Functions core programming model](/assets/pulsar-functions-overview.png)
-
-**Example**
-
-```java
-
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-import org.slf4j.Logger;
-
-public class LoggingFunction implements Function<String, Void> {
-    @Override
-    public Void process(String input, Context context) {
-        Logger LOG = context.getLogger();
-        String messageId = new String(context.getMessageId());
-
-        if (input.contains("danger")) {
-            LOG.warn("A warning was received in message {}", messageId);
-        } else {
-            LOG.info("Message {} received\nContent: {}", messageId, input);
-        }
-
-        return null;
-    }
-}
-
-```
-
-As shown in the example above, you can get the logger via `context.getLogger()` and assign it to a `LOG` variable of the `slf4j` `Logger` type, so you can define your desired log information in a function using the `LOG` variable. Meanwhile, you need to specify the topic to which the log information is produced.
-
-**Example**
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --log-topic persistent://public/default/logging-function-logs \
-  # Other function configs
-
-```
-
-## Use Functions CLI
-
-With [Pulsar Functions CLI](reference-pulsar-admin.md#functions), you can debug Pulsar Functions with the following subcommands:
-
-* `get`
-* `status`
-* `stats`
-* `list`
-* `trigger`
-
-:::tip
-
-For the complete commands of the **Pulsar Functions CLI**, see [here](reference-pulsar-admin.md#functions).
-
-:::
-
-### `get`
-
-Get information about a Pulsar Function.
- -**Usage** - -```bash - -$ pulsar-admin functions get options - -``` - -**Options** - -|Flag|Description -|---|--- -|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function. -|`--name`|The name of a Pulsar Function. -|`--namespace`|The namespace of a Pulsar Function. -|`--tenant`|The tenant of a Pulsar Function. - -:::tip - -`--fqfn` consists of `--name`, `--namespace` and `--tenant`, so you can specify either `--fqfn` or `--name`, `--namespace` and `--tenant`. - -::: - -**Example** - -You can specify `--fqfn` to get information about a Pulsar Function. - -```bash - -$ ./bin/pulsar-admin functions get public/default/ExclamationFunctio6 - -``` - -Optionally, you can specify `--name`, `--namespace` and `--tenant` to get information about a Pulsar Function. - -```bash - -$ ./bin/pulsar-admin functions get \ - --tenant public \ - --namespace default \ - --name ExclamationFunctio6 - -``` - -As shown below, the `get` command shows input, output, runtime, and other information about the _ExclamationFunctio6_ function. - -```json - -{ - "tenant": "public", - "namespace": "default", - "name": "ExclamationFunctio6", - "className": "org.example.test.ExclamationFunction", - "inputSpecs": { - "persistent://public/default/my-topic-1": { - "isRegexPattern": false - } - }, - "output": "persistent://public/default/test-1", - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "userConfig": {}, - "runtime": "JAVA", - "autoAck": true, - "parallelism": 1 -} - -``` - -### `status` - -Check the current status of a Pulsar Function. - -**Usage** - -```bash - -$ pulsar-admin functions status options - -``` - -**Options** - -|Flag|Description -|---|--- -|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function. -|`--instance-id`|The instance ID of a Pulsar Function
    If the `--instance-id` is not specified, it gets the IDs of all instances.
    -|`--name`|The name of a Pulsar Function. -|`--namespace`|The namespace of a Pulsar Function. -|`--tenant`|The tenant of a Pulsar Function. - -**Example** - -```bash - -$ ./bin/pulsar-admin functions status \ - --tenant public \ - --namespace default \ - --name ExclamationFunctio6 \ - -``` - -As shown below, the `status` command shows the number of instances, running instances, the instance running under the _ExclamationFunctio6_ function, received messages, successfully processed messages, system exceptions, the average latency and so on. - -```json - -{ - "numInstances" : 1, - "numRunning" : 1, - "instances" : [ { - "instanceId" : 0, - "status" : { - "running" : true, - "error" : "", - "numRestarts" : 0, - "numReceived" : 1, - "numSuccessfullyProcessed" : 1, - "numUserExceptions" : 0, - "latestUserExceptions" : [ ], - "numSystemExceptions" : 0, - "latestSystemExceptions" : [ ], - "averageLatency" : 0.8385, - "lastInvocationTime" : 1557734137987, - "workerId" : "c-standalone-fw-23ccc88ef29b-8080" - } - } ] -} - -``` - -### `stats` - -Get the current stats of a Pulsar Function. - -**Usage** - -```bash - -$ pulsar-admin functions stats options - -``` - -**Options** - -|Flag|Description -|---|--- -|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function. -|`--instance-id`|The instance ID of a Pulsar Function.
    If the `--instance-id` is not specified, it gets the IDs of all instances.
    -|`--name`|The name of a Pulsar Function. -|`--namespace`|The namespace of a Pulsar Function. -|`--tenant`|The tenant of a Pulsar Function. - -**Example** - -```bash - -$ ./bin/pulsar-admin functions stats \ - --tenant public \ - --namespace default \ - --name ExclamationFunctio6 \ - -``` - -The output is shown as follows: - -```json - -{ - "receivedTotal" : 1, - "processedSuccessfullyTotal" : 1, - "systemExceptionsTotal" : 0, - "userExceptionsTotal" : 0, - "avgProcessLatency" : 0.8385, - "1min" : { - "receivedTotal" : 0, - "processedSuccessfullyTotal" : 0, - "systemExceptionsTotal" : 0, - "userExceptionsTotal" : 0, - "avgProcessLatency" : null - }, - "lastInvocation" : 1557734137987, - "instances" : [ { - "instanceId" : 0, - "metrics" : { - "receivedTotal" : 1, - "processedSuccessfullyTotal" : 1, - "systemExceptionsTotal" : 0, - "userExceptionsTotal" : 0, - "avgProcessLatency" : 0.8385, - "1min" : { - "receivedTotal" : 0, - "processedSuccessfullyTotal" : 0, - "systemExceptionsTotal" : 0, - "userExceptionsTotal" : 0, - "avgProcessLatency" : null - }, - "lastInvocation" : 1557734137987, - "userMetrics" : { } - } - } ] -} - -``` - -### `list` - -List all Pulsar Functions running under a specific tenant and namespace. - -**Usage** - -```bash - -$ pulsar-admin functions list options - -``` - -**Options** - -|Flag|Description -|---|--- -|`--namespace`|The namespace of a Pulsar Function. -|`--tenant`|The tenant of a Pulsar Function. - -**Example** - -```bash - -$ ./bin/pulsar-admin functions list \ - --tenant public \ - --namespace default - -``` - -As shown below, the `list` command returns three functions running under the _public_ tenant and the _default_ namespace. - -```text - -ExclamationFunctio1 -ExclamationFunctio2 -ExclamationFunctio3 - -``` - -### `trigger` - -Trigger a specified Pulsar Function with a supplied value. This command simulates the execution process of a Pulsar Function and verifies it. - -**Usage** - -```bash - -$ pulsar-admin functions trigger options - -``` - -**Options** - -|Flag|Description -|---|--- -|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function. -|`--name`|The name of a Pulsar Function. -|`--namespace`|The namespace of a Pulsar Function. -|`--tenant`|The tenant of a Pulsar Function. -|`--topic`|The topic name that a Pulsar Function consumes from. -|`--trigger-file`|The path to a file that contains the data to trigger a Pulsar Function. -|`--trigger-value`|The value to trigger a Pulsar Function. - -**Example** - -```bash - -$ ./bin/pulsar-admin functions trigger \ - --tenant public \ - --namespace default \ - --name ExclamationFunctio6 \ - --topic persistent://public/default/my-topic-1 \ - --trigger-value "hello pulsar functions" - -``` - -As shown below, the `trigger` command returns the following result: - -```text - -This is my function! - -``` - -:::note - -You must specify the [entire topic name](getting-started-pulsar.md#topic-names) when using the `--topic` option. Otherwise, the following error occurs. 
- -```text - -Function in trigger function has unidentified topic -Reason: Function in trigger function has unidentified topic - -``` - -::: - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/functions-deploy.md b/site2/website/versioned_docs/version-2.8.2-deprecated/functions-deploy.md deleted file mode 100644 index 9c47208f68fa0c..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/functions-deploy.md +++ /dev/null @@ -1,262 +0,0 @@ ---- -id: functions-deploy -title: Deploy Pulsar Functions -sidebar_label: "How-to: Deploy" -original_id: functions-deploy ---- - -## Requirements - -To deploy and manage Pulsar Functions, you need to have a Pulsar cluster running. There are several options for this: - -* You can run a [standalone cluster](getting-started-standalone.md) locally on your own machine. -* You can deploy a Pulsar cluster on [Kubernetes](deploy-kubernetes.md), [Amazon Web Services](deploy-aws.md), [bare metal](deploy-bare-metal.md), [DC/OS](https://dcos.io/), and more. - -If you run a non-[standalone](reference-terminology.md#standalone) cluster, you need to obtain the service URL for the cluster. How you obtain the service URL depends on how you deploy your Pulsar cluster. - -If you want to deploy and trigger Python user-defined functions, you need to install [the pulsar python client](http://pulsar.apache.org/docs/en/client-libraries-python/) on all the machines running [functions workers](functions-worker.md). - -## Command-line interface - -Pulsar Functions are deployed and managed using the [`pulsar-admin functions`](reference-pulsar-admin.md#functions) interface, which contains commands such as [`create`](reference-pulsar-admin.md#functions-create) for deploying functions in [cluster mode](#cluster-mode), [`trigger`](reference-pulsar-admin.md#trigger) for [triggering](#triggering-pulsar-functions) functions, [`list`](reference-pulsar-admin.md#list-2) for listing deployed functions. - -To learn more commands, refer to [`pulsar-admin functions`](reference-pulsar-admin.md#functions). - -### Default arguments - -When managing Pulsar Functions, you need to specify a variety of information about functions, including tenant, namespace, input and output topics, and so on. However, some parameters have default values if you do not specify values for them. The following table lists the default values. - -Parameter | Default -:---------|:------- -Function name | You can specify any value for the class name (except org, library, or similar class names). For example, when you specify the flag `--classname org.example.MyFunction`, the function name is `MyFunction`. -Tenant | Derived from names of the input topics. If the input topics are under the `marketing` tenant, which means the topic names have the form `persistent://marketing/{namespace}/{topicName}`, the tenant is `marketing`. -Namespace | Derived from names of the input topics. If the input topics are under the `asia` namespace under the `marketing` tenant, which means the topic names have the form `persistent://marketing/asia/{topicName}`, then the namespace is `asia`. -Output topic | `{input topic}-{function name}-output`. For example, if an input topic name of a function is `incoming`, and the function name is `exclamation`, then the name of the output topic is `incoming-exclamation-output`. 
-Subscription type | For `at-least-once` and `at-most-once` [processing guarantees](functions-overview.md#processing-guarantees), the [`SHARED`](concepts-messaging.md#shared) mode is applied by default; for `effectively-once` guarantees, the [`FAILOVER`](concepts-messaging.md#failover) mode is applied. -Processing guarantees | [`ATLEAST_ONCE`](functions-overview.md#processing-guarantees) -Pulsar service URL | `pulsar://localhost:6650` - -### Example of default arguments - -Take the `create` command as an example. - -```bash - -$ bin/pulsar-admin functions create \ - --jar my-pulsar-functions.jar \ - --classname org.example.MyFunction \ - --inputs my-function-input-topic1,my-function-input-topic2 - -``` - -The function has default values for the function name (`MyFunction`), tenant (`public`), namespace (`default`), subscription type (`SHARED`), processing guarantees (`ATLEAST_ONCE`), and Pulsar service URL (`pulsar://localhost:6650`). - -## Local run mode - -If you run a Pulsar Function in **local run** mode, it runs on the machine from which you enter the commands (on your laptop, an [AWS EC2](https://aws.amazon.com/ec2/) instance, and so on). The following is a [`localrun`](reference-pulsar-admin.md#localrun) command example. - -```bash - -$ bin/pulsar-admin functions localrun \ - --py myfunc.py \ - --classname myfunc.SomeFunction \ - --inputs persistent://public/default/input-1 \ - --output persistent://public/default/output-1 - -``` - -By default, the function connects to a Pulsar cluster running on the same machine, via a local [broker](reference-terminology.md#broker) service URL of `pulsar://localhost:6650`. If you use local run mode to run a function but connect it to a non-local Pulsar cluster, you can specify a different broker URL using the `--brokerServiceUrl` flag. The following is an example. - -```bash - -$ bin/pulsar-admin functions localrun \ - --broker-service-url pulsar://my-cluster-host:6650 \ - # Other function parameters - -``` - -## Cluster mode - -When you run a Pulsar Function in **cluster** mode, the function code is uploaded to a Pulsar broker and runs *alongside the broker* rather than in your [local environment](#local-run-mode). You can run a function in cluster mode using the [`create`](reference-pulsar-admin.md#create-1) command. - -```bash - -$ bin/pulsar-admin functions create \ - --py myfunc.py \ - --classname myfunc.SomeFunction \ - --inputs persistent://public/default/input-1 \ - --output persistent://public/default/output-1 - -``` - -### Update functions in cluster mode - -You can use the [`update`](reference-pulsar-admin.md#update-1) command to update a Pulsar Function running in cluster mode. The following command updates the function created in the [cluster mode](#cluster-mode) section. - -```bash - -$ bin/pulsar-admin functions update \ - --py myfunc.py \ - --classname myfunc.SomeFunction \ - --inputs persistent://public/default/new-input-topic \ - --output persistent://public/default/new-output-topic - -``` - -### Parallelism - -Pulsar Functions run as processes or threads, which are called **instances**. When you run a Pulsar Function, it runs as a single instance by default. With one localrun command, you can only run a single instance of a function. If you want to run multiple instances, you can use localrun command multiple times. - -When you create a function, you can specify the *parallelism* of a function (the number of instances to run). 
You can set the parallelism factor using the `--parallelism` flag of the [`create`](reference-pulsar-admin.md#functions-create) command.
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --parallelism 3 \
-  # Other function info
-
-```
-
-You can adjust the parallelism of an already created function using the [`update`](reference-pulsar-admin.md#update-1) interface.
-
-```bash
-
-$ bin/pulsar-admin functions update \
-  --parallelism 5 \
-  # Other function
-
-```
-
-If you specify a function configuration via YAML, use the `parallelism` parameter. The following is a config file example.
-
-```yaml
-
-# function-config.yaml
-parallelism: 3
-inputs:
-- persistent://public/default/input-1
-output: persistent://public/default/output-1
-# other parameters
-
-```
-
-The following is the corresponding update command.
-
-```bash
-
-$ bin/pulsar-admin functions update \
-  --function-config-file function-config.yaml
-
-```
-
-### Function instance resources
-
-When you run Pulsar Functions in [cluster mode](#cluster-mode), you can specify the resources that are assigned to each function [instance](#parallelism).
-
-Resource | Specified as | Runtimes
-:--------|:----------------|:--------
-CPU | The number of cores | Kubernetes
-RAM | The number of bytes | Process, Docker
-Disk space | The number of bytes | Docker
-
-The following function creation command allocates 8 cores, 8 GB of RAM, and 10 GB of disk space to a function.
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --jar target/my-functions.jar \
-  --classname org.example.functions.MyFunction \
-  --cpu 8 \
-  --ram 8589934592 \
-  --disk 10737418240
-
-```
-
-> #### Resources are *per instance*
-> The resources that you apply to a given Pulsar Function are applied to each instance of the function. For example, if you apply 8 GB of RAM to a function with a parallelism of 5, you are applying 40 GB of RAM for the function in total. Make sure that you take the parallelism (the number of instances) into account in your resource calculations.
-
-### Use Package management service
-
-Package management enables version management and simplifies the upgrade and rollback processes for Functions, Sinks, and Sources. When you use the same function, sink, and source in different namespaces, you can upload them to a common package management system.
-
-To use the [Package management service](admin-api-packages.md), ensure that the package management service has been enabled in your cluster by setting the following properties in `broker.conf`.
-
-> Note: The package management service is not enabled by default.
-
-```yaml
-
-enablePackagesManagement=true
-packagesManagementStorageProvider=org.apache.pulsar.packages.management.storage.bookkeeper.BookKeeperPackagesStorageProvider
-packagesReplicas=1
-packagesManagementLedgerRootPath=/ledgers
-
-```
-
-With the package management service enabled, you can upload your function packages to the service by [uploading a package](admin-api-packages.md#upload-a-package) and get the [package URL](admin-api-packages.md#package-url).
-
-When you have a ready-to-use package URL, you can create the function with the package URL by setting `--jar`, `--py`, or `--go` to the package URL in `pulsar-admin functions create`.
-
-## Trigger Pulsar Functions
-
-If a Pulsar Function is running in [cluster mode](#cluster-mode), you can **trigger** it at any time using the command line. Triggering a function means that you send a message with a specific value to the function and get the function output (if any) via the command line.
-
-> Triggering a function is to invoke a function by producing a message on one of the input topics. With the [`pulsar-admin functions trigger`](reference-pulsar-admin.md#trigger) command, you can send messages to functions without using the [`pulsar-client`](reference-cli-tools.md#pulsar-client) tool or a language-specific client library.
-
-To learn how to trigger a function, you can start with a Python function that returns a simple string based on the input.
-
-```python
-
-# myfunc.py
-def process(input):
-    return "This function has been triggered with a value of {0}".format(input)
-
-```
-
-You can deploy the function in [cluster mode](#cluster-mode).
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --tenant public \
-  --namespace default \
-  --name myfunc \
-  --py myfunc.py \
-  --classname myfunc \
-  --inputs persistent://public/default/in \
-  --output persistent://public/default/out
-
-```
-
-Then assign a consumer to listen on the output topic for messages from the `myfunc` function with the [`pulsar-client consume`](reference-cli-tools.md#consume) command.
-
-```bash
-
-$ bin/pulsar-client consume persistent://public/default/out \
-  --subscription-name my-subscription \
-  --num-messages 0 # Listen indefinitely
-
-```
-
-And then you can trigger the function.
-
-```bash
-
-$ bin/pulsar-admin functions trigger \
-  --tenant public \
-  --namespace default \
-  --name myfunc \
-  --trigger-value "hello world"
-
-```
-
-The consumer listening on the output topic prints something like the following in its log.
-
-```
-
------ got message -----
-This function has been triggered with a value of hello world
-
-```
-
-> #### Topic info is not required
-> In the `trigger` command, you only need to specify basic information about the function (tenant, namespace, and name). To trigger the function, you do not need to know the function input topics.
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/functions-develop.md b/site2/website/versioned_docs/version-2.8.2-deprecated/functions-develop.md
deleted file mode 100644
index 2e29aa1c474005..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/functions-develop.md
+++ /dev/null
@@ -1,1600 +0,0 @@
----
-id: functions-develop
-title: Develop Pulsar Functions
-sidebar_label: "How-to: Develop"
-original_id: functions-develop
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-In this section, you learn how to develop Pulsar Functions with the different APIs for Java, Python, and Go.
-
-## Available APIs
-
-In Java and Python, you have two options to write Pulsar Functions. In Go, you can use the Pulsar Functions SDK for Go.
-
-Interface | Description | Use cases
-:---------|:------------|:---------
-Language-native interface | No Pulsar-specific libraries or special dependencies required (only core libraries from Java/Python). | Functions that do not require access to the function [context](#context).
-Pulsar Function SDK for Java/Python/Go | Pulsar-specific libraries that provide a range of functionality not provided by "native" interfaces. | Functions that require access to the function [context](#context).
-
-The language-native function, which adds an exclamation point to all incoming strings and publishes the resulting string to a topic, has no external dependencies. The following example is a language-native function.
-
-````mdx-code-block
-<Tabs>
-<TabItem value="Java">
-
-```Java
-
-import java.util.function.Function;
-
-public class JavaNativeExclamationFunction implements Function<String, String> {
-    @Override
-    public String apply(String input) {
-        return String.format("%s!", input);
-    }
-}
-
-```
-
-For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/JavaNativeExclamationFunction.java).
-
-</TabItem>
-<TabItem value="Python">
-
-```python
-
-def process(input):
    return "{}!".format(input)
-
-```
-
-For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/python-examples/native_exclamation_function.py).
-
-:::note
-
-You can write Pulsar Functions in python2 or python3. However, Pulsar only looks for `python` as the interpreter.
-If you're running Pulsar Functions on an Ubuntu system that only supports python3, you might fail to
-start the functions. In this case, you can create a symlink as shown below. Note, however, that your system
-will fail if you subsequently install any other package that depends on Python 2.x. A solution is under
-development in [Issue 5518](https://github.com/apache/pulsar/issues/5518).
-
-```bash
-
-sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 10
-
-```
-
-:::
-
-</TabItem>
-</Tabs>
-````
-
-The following example uses the Pulsar Functions SDK.
-````mdx-code-block
-<Tabs>
-<TabItem value="Java">
-
-```Java
-
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-
-public class ExclamationFunction implements Function<String, String> {
-    @Override
-    public String process(String input, Context context) {
-        return String.format("%s!", input);
-    }
-}
-
-```
-
-For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/ExclamationFunction.java).
-
-</TabItem>
-<TabItem value="Python">
-
-```python
-
-from pulsar import Function
-
-class ExclamationFunction(Function):
-    def __init__(self):
-        pass
-
-    def process(self, input, context):
-        return input + '!'
-
-```
-
-For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/python-examples/exclamation_function.py).
-
-</TabItem>
-<TabItem value="Go">
-
-```Go
-
-package main
-
-import (
-	"context"
-	"fmt"
-
-	"github.com/apache/pulsar/pulsar-function-go/pf"
-)
-
-func HandleRequest(ctx context.Context, in []byte) error {
-	fmt.Println(string(in) + "!")
-	return nil
-}
-
-func main() {
-	pf.Start(HandleRequest)
-}
-
-```
-
-For complete code, see [here](https://github.com/apache/pulsar/blob/77cf09eafa4f1626a53a1fe2e65dd25f377c1127/pulsar-function-go/examples/inputFunc/inputFunc.go#L20-L36).
-
-</TabItem>
-</Tabs>
-````
-
-## Schema registry
-
-Pulsar has a built-in schema registry and is bundled with popular schema types, such as Avro, JSON and Protobuf. Pulsar Functions can leverage the existing schema information from input topics and derive the input type. The schema registry applies to the output topic as well.
-
-## SerDe
-
-SerDe stands for **Ser**ialization and **De**serialization. Pulsar Functions uses SerDe when publishing data to and consuming data from Pulsar topics. How SerDe works by default depends on the language you use for a particular function.
-
-````mdx-code-block
-<Tabs>
-<TabItem value="Java">
-
-When you write Pulsar Functions in Java, the following basic Java types are built in and supported by default: `String`, `Double`, `Integer`, `Float`, `Long`, `Short`, and `Byte`.
-
-To customize Java types, you need to implement the following interface.
-
-```java
-
-public interface SerDe<T> {
-    T deserialize(byte[] input);
-    byte[] serialize(T input);
-}
-
-```
-
-SerDe works in the following ways in Java Functions.
-- If the input and output topics have a schema, Pulsar Functions use the schema for SerDe.
-- If the input or output topics do not exist, Pulsar Functions adopt the following rules to determine SerDe:
-  - If the schema type is specified, Pulsar Functions use the specified schema type.
-  - If SerDe is specified, Pulsar Functions use the specified SerDe, and the schema type for input and output topics is `Byte`.
-  - If neither the schema type nor SerDe is specified, Pulsar Functions use the built-in SerDe. For non-primitive schema types, the built-in SerDe serializes and deserializes objects in the `JSON` format.
-
-</TabItem>
-<TabItem value="Python">
-
-In Python, the default SerDe is identity, meaning that the type is serialized as whatever type the producer function returns.
-
-You can specify the SerDe when [creating](functions-deploy.md#cluster-mode) or [running](functions-deploy.md#local-run-mode) functions.
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --tenant public \
-  --namespace default \
-  --name my_function \
-  --py my_function.py \
-  --classname my_function.MyFunction \
-  --custom-serde-inputs '{"input-topic-1":"Serde1","input-topic-2":"Serde2"}' \
-  --output-serde-classname Serde3 \
-  --output output-topic-1
-
-```
-
-This case contains two input topics: `input-topic-1` and `input-topic-2`, each of which is mapped to a different SerDe class (the map must be specified as a JSON string). The output topic, `output-topic-1`, uses the `Serde3` class for SerDe. At the moment, all Pulsar Functions logic, including the processing function and SerDe classes, must be contained within a single Python file.
-
-When using Pulsar Functions for Python, you have three SerDe options:
-
-1. You can use the [`IdentitySerDe`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L70), which leaves the data unchanged. The `IdentitySerDe` is the **default**. Creating or running a function without explicitly specifying SerDe means that this option is used.
-2. You can use the [`PickleSerDe`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L62), which uses Python [`pickle`](https://docs.python.org/3/library/pickle.html) for SerDe.
-3. You can create a custom SerDe class by implementing the baseline [`SerDe`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L50) class, which has just two methods: [`serialize`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L53) for converting the object into bytes, and [`deserialize`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L58) for converting bytes into an object of the required application-specific type.
-
-The table below shows when you should use each SerDe.
-
-SerDe option | When to use
-:------------|:-----------
-`IdentitySerDe` | When you work with simple types like strings, Booleans, integers.
-`PickleSerDe` | When you work with complex, application-specific types and are comfortable with the "best effort" approach of `pickle`.
-Custom SerDe | When you require explicit control over SerDe, potentially for performance or data compatibility purposes.
-
-</TabItem>
-<TabItem value="Go">
-
-Currently, the feature is not available in Go.
- - - - -```` - -### Example -Imagine that you're writing Pulsar Functions that are processing tweet objects, you can refer to the following example of `Tweet` class. - -````mdx-code-block - - - -```java - -public class Tweet { - private String username; - private String tweetContent; - - public Tweet(String username, String tweetContent) { - this.username = username; - this.tweetContent = tweetContent; - } - - // Standard setters and getters -} - -``` - -To pass `Tweet` objects directly between Pulsar Functions, you need to provide a custom SerDe class. In the example below, `Tweet` objects are basically strings in which the username and tweet content are separated by a `|`. - -```java - -package com.example.serde; - -import org.apache.pulsar.functions.api.SerDe; - -import java.util.regex.Pattern; - -public class TweetSerde implements SerDe { - public Tweet deserialize(byte[] input) { - String s = new String(input); - String[] fields = s.split(Pattern.quote("|")); - return new Tweet(fields[0], fields[1]); - } - - public byte[] serialize(Tweet input) { - return "%s|%s".format(input.getUsername(), input.getTweetContent()).getBytes(); - } -} - -``` - -To apply this customized SerDe to a particular Pulsar Function, you need to: - -* Package the `Tweet` and `TweetSerde` classes into a JAR. -* Specify a path to the JAR and SerDe class name when deploying the function. - -The following is an example of [`create`](reference-pulsar-admin.md#create-1) operation. - -```bash - -$ bin/pulsar-admin functions create \ - --jar /path/to/your.jar \ - --output-serde-classname com.example.serde.TweetSerde \ - # Other function attributes - -``` - -> #### Custom SerDe classes must be packaged with your function JARs -> Pulsar does not store your custom SerDe classes separately from your Pulsar Functions. So you need to include your SerDe classes in your function JARs. If not, Pulsar returns an error. - - - - -```python - -class Tweet(object): - def __init__(self, username, tweet_content): - self.username = username - self.tweet_content = tweet_content - -``` - -In order to use this class in Pulsar Functions, you have two options: - -1. You can specify `PickleSerDe`, which applies the [`pickle`](https://docs.python.org/3/library/pickle.html) library SerDe. -2. You can create your own SerDe class. The following is an example. - - ```python - - from pulsar import SerDe - - class TweetSerDe(SerDe): - - def serialize(self, input): - return bytes("{0}|{1}".format(input.username, input.tweet_content)) - - def deserialize(self, input_bytes): - tweet_components = str(input_bytes).split('|') - return Tweet(tweet_components[0], tweet_componentsp[1]) - - ``` - -For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/python-examples/custom_object_function.py). - - - - -```` - -In both languages, however, you can write custom SerDe logic for more complex, application-specific types. - -## Context -Java, Python and Go SDKs provide access to a **context object** that can be used by a function. This context object provides a wide variety of information and functionality to the function. - -* The name and ID of a Pulsar Function. -* The message ID of each message. Each Pulsar message is automatically assigned with an ID. -* The key, event time, properties and partition key of each message. -* The name of the topic to which the message is sent. -* The names of all input topics as well as the output topic associated with the function. -* The name of the class used for [SerDe](#serde). 
-* The [tenant](reference-terminology.md#tenant) and namespace associated with the function. -* The ID of the Pulsar Functions instance running the function. -* The version of the function. -* The [logger object](functions-develop.md#logger) used by the function, which can be used to create function log messages. -* Access to arbitrary [user configuration](#user-config) values supplied via the CLI. -* An interface for recording [metrics](#metrics). -* An interface for storing and retrieving state in [state storage](#state-storage). -* A function to publish new messages onto arbitrary topics. -* A function to ack the message being processed (if auto-ack is disabled). -* (Java) get Pulsar admin client. - -````mdx-code-block - - - -The [Context](https://github.com/apache/pulsar/blob/master/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/Context.java) interface provides a number of methods that you can use to access the function [context](#context). The various method signatures for the `Context` interface are listed as follows. - -```java - -public interface Context { - Record getCurrentRecord(); - Collection getInputTopics(); - String getOutputTopic(); - String getOutputSchemaType(); - String getTenant(); - String getNamespace(); - String getFunctionName(); - String getFunctionId(); - String getInstanceId(); - String getFunctionVersion(); - Logger getLogger(); - void incrCounter(String key, long amount); - void incrCounterAsync(String key, long amount); - long getCounter(String key); - long getCounterAsync(String key); - void putState(String key, ByteBuffer value); - void putStateAsync(String key, ByteBuffer value); - void deleteState(String key); - ByteBuffer getState(String key); - ByteBuffer getStateAsync(String key); - Map getUserConfigMap(); - Optional getUserConfigValue(String key); - Object getUserConfigValueOrDefault(String key, Object defaultValue); - void recordMetric(String metricName, double value); - CompletableFuture publish(String topicName, O object, String schemaOrSerdeClassName); - CompletableFuture publish(String topicName, O object); - TypedMessageBuilder newOutputMessage(String topicName, Schema schema) throws PulsarClientException; - ConsumerBuilder newConsumerBuilder(Schema schema) throws PulsarClientException; - PulsarAdmin getPulsarAdmin(); - PulsarAdmin getPulsarAdmin(String clusterName); -} - -``` - -The following example uses several methods available via the `Context` object. - -```java - -import org.apache.pulsar.functions.api.Context; -import org.apache.pulsar.functions.api.Function; -import org.slf4j.Logger; - -import java.util.stream.Collectors; - -public class ContextFunction implements Function { - public Void process(String input, Context context) { - Logger LOG = context.getLogger(); - String inputTopics = context.getInputTopics().stream().collect(Collectors.joining(", ")); - String functionName = context.getFunctionName(); - - String logMessage = String.format("A message with a value of \"%s\" has arrived on one of the following topics: %s\n", - input, - inputTopics); - - LOG.info(logMessage); - - String metricName = String.format("function-%s-messages-received", functionName); - context.recordMetric(metricName, 1); - - return null; - } -} - -``` - - - - -``` - -class ContextImpl(pulsar.Context): - def get_message_id(self): - ... - def get_message_key(self): - ... - def get_message_eventtime(self): - ... - def get_message_properties(self): - ... - def get_current_message_topic_name(self): - ... - def get_partition_key(self): - ... 
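  # The stubs below continue the same interface: function metadata
  # (name, tenant, namespace, IDs, version), user configuration,
  # logging, metrics, publishing and acking, and counter/key-value state.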
- def get_function_name(self): - ... - def get_function_tenant(self): - ... - def get_function_namespace(self): - ... - def get_function_id(self): - ... - def get_instance_id(self): - ... - def get_function_version(self): - ... - def get_logger(self): - ... - def get_user_config_value(self, key): - ... - def get_user_config_map(self): - ... - def record_metric(self, metric_name, metric_value): - ... - def get_input_topics(self): - ... - def get_output_topic(self): - ... - def get_output_serde_class_name(self): - ... - def publish(self, topic_name, message, serde_class_name="serde.IdentitySerDe", - properties=None, compression_type=None, callback=None, message_conf=None): - ... - def ack(self, msgid, topic): - ... - def get_and_reset_metrics(self): - ... - def reset_metrics(self): - ... - def get_metrics(self): - ... - def incr_counter(self, key, amount): - ... - def get_counter(self, key): - ... - def del_counter(self, key): - ... - def put_state(self, key, value): - ... - def get_state(self, key): - ... - -``` - - - - -``` - -func (c *FunctionContext) GetInstanceID() int { - return c.instanceConf.instanceID -} - -func (c *FunctionContext) GetInputTopics() []string { - return c.inputTopics -} - -func (c *FunctionContext) GetOutputTopic() string { - return c.instanceConf.funcDetails.GetSink().Topic -} - -func (c *FunctionContext) GetFuncTenant() string { - return c.instanceConf.funcDetails.Tenant -} - -func (c *FunctionContext) GetFuncName() string { - return c.instanceConf.funcDetails.Name -} - -func (c *FunctionContext) GetFuncNamespace() string { - return c.instanceConf.funcDetails.Namespace -} - -func (c *FunctionContext) GetFuncID() string { - return c.instanceConf.funcID -} - -func (c *FunctionContext) GetFuncVersion() string { - return c.instanceConf.funcVersion -} - -func (c *FunctionContext) GetUserConfValue(key string) interface{} { - return c.userConfigs[key] -} - -func (c *FunctionContext) GetUserConfMap() map[string]interface{} { - return c.userConfigs -} - -func (c *FunctionContext) SetCurrentRecord(record pulsar.Message) { - c.record = record -} - -func (c *FunctionContext) GetCurrentRecord() pulsar.Message { - return c.record -} - -func (c *FunctionContext) NewOutputMessage(topic string) pulsar.Producer { - return c.outputMessage(topic) -} - -``` - -The following example uses several methods available via the `Context` object. - -``` - -import ( - "context" - "fmt" - - "github.com/apache/pulsar/pulsar-function-go/pf" -) - -func contextFunc(ctx context.Context) { - if fc, ok := pf.FromContext(ctx); ok { - fmt.Printf("function ID is:%s, ", fc.GetFuncID()) - fmt.Printf("function version is:%s\n", fc.GetFuncVersion()) - } -} - -``` - -For complete code, see [here](https://github.com/apache/pulsar/blob/77cf09eafa4f1626a53a1fe2e65dd25f377c1127/pulsar-function-go/examples/contextFunc/contextFunc.go#L29-L34). - - - - -```` - -### User config -When you run or update Pulsar Functions created using SDK, you can pass arbitrary key/values to them with the command line with the `--user-config` flag. Key/values must be specified as JSON. The following function creation command passes a user configured key/value to a function. - -```bash - -$ bin/pulsar-admin functions create \ - --name word-filter \ - # Other function configs - --user-config '{"forbidden-word":"rosebud"}' - -``` - -````mdx-code-block - - - -The Java SDK [`Context`](#context) object enables you to access key/value pairs provided to Pulsar Functions via the command line (as JSON). 
The following example passes a key/value pair.

```bash

$ bin/pulsar-admin functions create \
  # Other function configs
  --user-config '{"word-of-the-day":"verdure"}'

```

To access that value in a Java function:

```java

import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;
import org.slf4j.Logger;

import java.util.Optional;

public class UserConfigFunction implements Function<String, Void> {
    @Override
    public Void process(String input, Context context) {
        Logger LOG = context.getLogger();
        Optional<Object> wotd = context.getUserConfigValue("word-of-the-day");
        if (wotd.isPresent()) {
            LOG.info("The word of the day is {}", wotd.get());
        } else {
            LOG.warn("No word of the day provided");
        }
        return null;
    }
}

```

The `UserConfigFunction` function will log the string `"The word of the day is verdure"` every time the function is invoked (which means every time a message arrives). The `word-of-the-day` user config will be changed only when the function is updated with a new config value via the command line.

You can also access the entire user config map or set a default value in case no value is present:

```java

// Get the whole config map
Map<String, Object> allConfigs = context.getUserConfigMap();

// Get value or resort to default
String wotd = (String) context.getUserConfigValueOrDefault("word-of-the-day", "perspicacious");

```

> For all key/value pairs passed to Java functions, both the key *and* the value are `String`. To set the value to a different type, you need to deserialize it from the `String` type.

In a Python function, you can access the configuration value as follows.

```python

from pulsar import Function

class WordFilter(Function):
    def process(self, input, context):
        forbidden_word = context.get_user_config_map()["forbidden-word"]

        # Don't publish the message if it contains the user-supplied
        # forbidden word
        if forbidden_word in input:
            pass
        # Otherwise publish the message
        else:
            return input

```

The Python SDK [`Context`](#context) object enables you to access key/value pairs provided to Pulsar Functions via the command line (as JSON). The following example passes a key/value pair.

```bash

$ bin/pulsar-admin functions create \
  # Other function configs \
  --user-config '{"word-of-the-day":"verdure"}'

```

To access that value in a Python function:

```python

from pulsar import Function

class UserConfigFunction(Function):
    def process(self, input, context):
        logger = context.get_logger()
        wotd = context.get_user_config_value('word-of-the-day')
        if wotd is None:
            logger.warn('No word of the day provided')
        else:
            logger.info("The word of the day is {0}".format(wotd))

```

The Go SDK [`Context`](#context) object enables you to access key/value pairs provided to Pulsar Functions via the command line (as JSON). The following example passes a key/value pair.
- -```bash - -$ bin/pulsar-admin functions create \ - --go path/to/go/binary - --user-config '{"word-of-the-day":"lackadaisical"}' - -``` - -To access that value in a Go function: - -```go - -func contextFunc(ctx context.Context) { - fc, ok := pf.FromContext(ctx) - if !ok { - logutil.Fatal("Function context is not defined") - } - - wotd := fc.GetUserConfValue("word-of-the-day") - - if wotd == nil { - logutil.Warn("The word of the day is empty") - } else { - logutil.Infof("The word of the day is %s", wotd.(string)) - } -} - -``` - - - - -```` - -### Logger - -````mdx-code-block - - - -Pulsar Functions that use the Java SDK have access to an [SLF4j](https://www.slf4j.org/) [`Logger`](https://www.slf4j.org/api/org/apache/log4j/Logger.html) object that can be used to produce logs at the chosen log level. The following example logs either a `WARNING`- or `INFO`-level log based on whether the incoming string contains the word `danger`. - -```java - -import org.apache.pulsar.functions.api.Context; -import org.apache.pulsar.functions.api.Function; -import org.slf4j.Logger; - -public class LoggingFunction implements Function { - @Override - public void apply(String input, Context context) { - Logger LOG = context.getLogger(); - String messageId = new String(context.getMessageId()); - - if (input.contains("danger")) { - LOG.warn("A warning was received in message {}", messageId); - } else { - LOG.info("Message {} received\nContent: {}", messageId, input); - } - - return null; - } -} - -``` - -If you want your function to produce logs, you need to specify a log topic when creating or running the function. The following is an example. - -```bash - -$ bin/pulsar-admin functions create \ - --jar my-functions.jar \ - --classname my.package.LoggingFunction \ - --log-topic persistent://public/default/logging-function-logs \ - # Other function configs - -``` - -All logs produced by `LoggingFunction` above can be accessed via the `persistent://public/default/logging-function-logs` topic. - -#### Customize Function log level -Additionally, you can use the XML file, `functions_log4j2.xml`, to customize the function log level. -To customize the function log level, create or update `functions_log4j2.xml` in your Pulsar conf directory (for example, `/etc/pulsar/` on bare-metal, or `/pulsar/conf` on Kubernetes) to contain contents such as: - -```xml - - - pulsar-functions-instance - 30 - - - pulsar.log.appender - RollingFile - - - pulsar.log.level - debug - - - bk.log.level - debug - - - - - Console - SYSTEM_OUT - - %d{ISO8601_OFFSET_DATE_TIME_HHMM} [%t] %-5level %logger{36} - %msg%n - - - - RollingFile - ${sys:pulsar.function.log.dir}/${sys:pulsar.function.log.file}.log - ${sys:pulsar.function.log.dir}/${sys:pulsar.function.log.file}-%d{MM-dd-yyyy}-%i.log.gz - true - - %d{ISO8601_OFFSET_DATE_TIME_HHMM} [%t] %-5level %logger{36} - %msg%n - - - - 1 - true - - - 1 GB - - - 0 0 0 * * ? - - - - - ${sys:pulsar.function.log.dir} - 2 - - */${sys:pulsar.function.log.file}*log.gz - - - 30d - - - - - - BkRollingFile - ${sys:pulsar.function.log.dir}/${sys:pulsar.function.log.file}.bk - ${sys:pulsar.function.log.dir}/${sys:pulsar.function.log.file}.bk-%d{MM-dd-yyyy}-%i.log.gz - true - - %d{ISO8601_OFFSET_DATE_TIME_HHMM} [%t] %-5level %logger{36} - %msg%n - - - - 1 - true - - - 1 GB - - - 0 0 0 * * ? 
- - - - - ${sys:pulsar.function.log.dir} - 2 - - */${sys:pulsar.function.log.file}.bk*log.gz - - - 30d - - - - - - - - org.apache.pulsar.functions.runtime.shaded.org.apache.bookkeeper - ${sys:bk.log.level} - false - - BkRollingFile - - - - ${sys:pulsar.log.level} - - ${sys:pulsar.log.appender} - ${sys:pulsar.log.level} - - - - - -``` - -The properties set like: - -```xml - - - pulsar.log.level - debug - - -``` - -propagate to places where they are referenced, such as: - -```xml - - - ${sys:pulsar.log.level} - - ${sys:pulsar.log.appender} - ${sys:pulsar.log.level} - - - -``` - -In the above example, debug level logging would be applied to ALL function logs. -This may be more verbose than you desire. To be more selective, you can apply different log levels to different classes or modules. For example: - -```xml - - - com.example.module - info - false - - ${sys:pulsar.log.appender} - - - -``` - -You can be more specific as well, such as applying a more verbose log level to a class in the module, such as: - -```xml - - - com.example.module.className - debug - false - - Console - - - -``` - -Each `` entry allows you to output the log to a target specified in the definition of the Appender. - -Additivity pertains to whether log messages will be duplicated if multiple Logger entries overlap. -To disable additivity, specify - -```xml - -false - -``` - -as shown in examples above. Disabling additivity prevents duplication of log messages when one or more `` entries contain classes or modules that overlap. - -The `` is defined in the `` section, such as: - -```xml - - - Console - SYSTEM_OUT - - %d{ISO8601_OFFSET_DATE_TIME_HHMM} [%t] %-5level %logger{36} - %msg%n - - - -``` - - - - -Pulsar Functions that use the Python SDK have access to a logging object that can be used to produce logs at the chosen log level. The following example function that logs either a `WARNING`- or `INFO`-level log based on whether the incoming string contains the word `danger`. - -```python - -from pulsar import Function - -class LoggingFunction(Function): - def process(self, input, context): - logger = context.get_logger() - msg_id = context.get_message_id() - if 'danger' in input: - logger.warn("A warning was received in message {0}".format(context.get_message_id())) - else: - logger.info("Message {0} received\nContent: {1}".format(msg_id, input)) - -``` - -If you want your function to produce logs on a Pulsar topic, you need to specify a **log topic** when creating or running the function. The following is an example. - -```bash - -$ bin/pulsar-admin functions create \ - --py logging_function.py \ - --classname logging_function.LoggingFunction \ - --log-topic logging-function-logs \ - # Other function configs - -``` - -All logs produced by `LoggingFunction` above can be accessed via the `logging-function-logs` topic. -Additionally, you can specify the function log level through the broker XML file as described in [Customize Function log level](#customize-function-log-level). - - - - -The following Go Function example shows different log levels based on the function input. - -``` - -import ( - "context" - - "github.com/apache/pulsar/pulsar-function-go/pf" - - log "github.com/apache/pulsar/pulsar-function-go/logutil" -) - -func loggerFunc(ctx context.Context, input []byte) { - if len(input) <= 100 { - log.Infof("This input has a length of: %d", len(input)) - } else { - log.Warnf("This input is getting too long! 
It has {%d} characters", len(input))
	}
}

func main() {
	pf.Start(loggerFunc)
}

```

When you use `logTopic`-related functionalities in a Go Function, import `github.com/apache/pulsar/pulsar-function-go/logutil`; you do not have to use the `getLogger()` context object.

Additionally, you can specify the function log level through the broker XML file, as described in [Customize Function log level](#customize-function-log-level).

````

### Pulsar admin

Pulsar Functions that use the Java SDK have access to the Pulsar admin client, which allows a function to make admin API calls to the current Pulsar cluster or to external clusters (if `external-pulsars` is provided).

````mdx-code-block

Below is an example of how to use the Pulsar admin client exposed from the Function `context`.

```java

import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;

/**
 * In this particular example, for every input message,
 * the function resets the cursor of the current function's subscription to a
 * specified timestamp.
 */
public class CursorManagementFunction implements Function<String, String> {

    @Override
    public String process(String input, Context context) throws Exception {
        PulsarAdmin adminClient = context.getPulsarAdmin();
        if (adminClient != null) {
            String topic = context.getCurrentRecord().getTopicName().isPresent() ?
                    context.getCurrentRecord().getTopicName().get() : null;
            String subName = context.getTenant() + "/" + context.getNamespace() + "/" + context.getFunctionName();
            if (topic != null) {
                // 1578188166 below is a random-pick timestamp
                adminClient.topics().resetCursor(topic, subName, 1578188166);
                return "reset cursor successfully";
            }
        }
        return null;
    }
}

```

If you want your function to get access to the Pulsar admin client, you need to enable this feature by setting `exposeAdminClientEnabled=true` in the `functions_worker.yml` file. You can test whether this feature is enabled using the `pulsar-admin functions localrun` command with the `--web-service-url` flag.

```bash

$ bin/pulsar-admin functions localrun \
 --jar my-functions.jar \
 --classname my.package.CursorManagementFunction \
 --web-service-url http://pulsar-web-service:8080 \
 # Other function configs

```

````
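Beyond cursor manipulation, the same admin handle can serve read-only calls. The following is a hedged sketch (the class is illustrative; it likewise assumes `exposeAdminClientEnabled=true`, and the exact `TopicStats` accessor should be verified against your Pulsar version) that logs the storage size of the topic the current record arrived on:

```java

import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;

// Illustrative sketch: look up stats for the current record's topic and
// log its storage size. Requires exposeAdminClientEnabled=true.
public class TopicStatsFunction implements Function<String, String> {

    @Override
    public String process(String input, Context context) throws Exception {
        PulsarAdmin adminClient = context.getPulsarAdmin();
        String topic = context.getCurrentRecord().getTopicName().orElse(null);
        if (adminClient != null && topic != null) {
            long storageSize = adminClient.topics().getStats(topic).getStorageSize();
            context.getLogger().info("Topic {} currently stores {} bytes", topic, storageSize);
        }
        return input;
    }
}

```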
## Metrics

Pulsar Functions allow you to easily deploy and manage processing functions that consume messages from and publish messages to Pulsar topics. It is important to ensure that the running functions are healthy at all times. Pulsar Functions can publish arbitrary metrics to the metrics interface, which can be queried.

:::note

If a Pulsar Function uses the language-native interface for Java or Python, that function is not able to publish metrics and stats to Pulsar.

:::

You can monitor Pulsar Functions that have been deployed with the following methods:

- Check the metrics provided by Pulsar.

  Pulsar Functions expose metrics that can be collected and used for monitoring the health of **Java, Python, and Go** functions. You can check the metrics by following the [monitoring](deploy-monitoring.md) guide.

  For the complete list of the function metrics, see [here](reference-metrics.md#pulsar-functions).

- Set and check your customized metrics.

  In addition to the metrics provided by Pulsar, Pulsar allows you to customize metrics for **Java and Python** functions. Function workers collect user-defined metrics and expose them to Prometheus automatically, and you can check them in Grafana.

Here are examples of how to customize metrics for Java and Python functions.

````mdx-code-block

You can record metrics using the [`Context`](#context) object on a per-key basis. For example, you can set a metric for the `process-count` key and a different metric for the `elevens-count` key every time the function processes a message.

```java

import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;

public class MetricRecorderFunction implements Function<Integer, Void> {
    @Override
    public Void process(Integer input, Context context) {
        // Records the metric 1 every time a message arrives
        context.recordMetric("hit-count", 1);

        // Records the metric only if the arriving number equals 11
        if (input == 11) {
            context.recordMetric("elevens-count", 1);
        }

        return null;
    }
}

```

You can record metrics using the [`Context`](#context) object on a per-key basis. For example, you can set a metric for the `process-count` key and a different metric for the `elevens-count` key every time the function processes a message. The following is an example.

```python

from pulsar import Function

class MetricRecorderFunction(Function):
    def process(self, input, context):
        context.record_metric('hit-count', 1)

        if input == 11:
            context.record_metric('elevens-count', 1)

```

Currently, the feature is not available in Go.

````

## Security

If you want to enable security on Pulsar Functions, first you should enable security on [Functions Workers](functions-worker.md). For more details, refer to [Security settings](functions-worker.md#security-settings).

Pulsar Functions can support the following providers:

- ClearTextSecretsProvider
- EnvironmentBasedSecretsProvider

> Pulsar Functions support ClearTextSecretsProvider by default.

At the same time, Pulsar Functions provide two interfaces, **SecretsProvider** and **SecretsProviderConfigurator**, allowing users to customize the secret provider.

````mdx-code-block

You can get the secret provider using the [`Context`](#context) object. The following is an example:

```java

import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;
import org.slf4j.Logger;

public class GetSecretProviderFunction implements Function<String, Void> {

    @Override
    public Void process(String input, Context context) throws Exception {
        Logger LOG = context.getLogger();
        String secretProvider = context.getSecret(input);

        if (!secretProvider.isEmpty()) {
            LOG.info("The secret provider is {}", secretProvider);
        } else {
            LOG.warn("No secret provider");
        }

        return null;
    }
}

```

You can get the secret provider using the [`Context`](#context) object. The following is an example:

```python

from pulsar import Function

class GetSecretProviderFunction(Function):
    def process(self, input, context):
        logger = context.get_logger()
        secret_provider = context.get_secret(input)
        if secret_provider is None:
            logger.warn('No secret provider')
        else:
            logger.info("The secret provider is {0}".format(secret_provider))

```

Currently, the feature is not available in Go.

````
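As a sketch of the customization path, a `SecretsProvider` implementation mainly has to resolve a secret name to a value. The class below is illustrative; it assumes the interface shape used by `EnvironmentBasedSecretsProvider` (an `init` hook plus `provideSecret`), which should be verified against your Pulsar version:

```java

import java.util.Map;
import org.apache.pulsar.functions.secretsprovider.SecretsProvider;

// Illustrative sketch: resolve secrets from environment variables that
// carry a fixed prefix. The interface shape is assumed from
// EnvironmentBasedSecretsProvider; verify it against your Pulsar version.
public class PrefixedEnvSecretsProvider implements SecretsProvider {

    private static final String PREFIX = "FN_SECRET_";

    @Override
    public void init(Map<String, String> config) {
        // No configuration needed for this sketch.
    }

    @Override
    public String provideSecret(String secretName, Object pathToSecret) {
        return System.getenv(PREFIX + secretName);
    }
}

```

A custom provider is typically wired in through worker configuration via the **SecretsProviderConfigurator** interface mentioned above, rather than inside the function itself.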
## State storage
Pulsar Functions use [Apache BookKeeper](https://bookkeeper.apache.org) as a state storage interface. Pulsar installations, including the local standalone installation, include deployment of BookKeeper bookies.

Since the Pulsar 2.1.0 release, Pulsar integrates with the Apache BookKeeper [table service](https://docs.google.com/document/d/155xAwWv5IdOitHh1NVMEwCMGgB28M3FyMiQSxEpjE-Y/edit#heading=h.56rbh52koe3f) to store the `State` for functions. For example, a `WordCount` function can store its `counters` state in the BookKeeper table service via the Pulsar Functions State API.

States are key-value pairs, where the key is a string and the value is arbitrary binary data; counters are stored as 64-bit big-endian binary values. Keys are scoped to an individual Pulsar Function and shared between instances of that function.

You can access states within Pulsar Java Functions using the `putState`, `putStateAsync`, `getState`, `getStateAsync`, `incrCounter`, `incrCounterAsync`, `getCounter`, `getCounterAsync` and `deleteState` calls on the context object. You can access states within Pulsar Python Functions using the `putState`, `getState`, `incrCounter`, `getCounter` and `deleteState` calls on the context object. You can also manage states using the [querystate](#query-state) and [putstate](#putstate) options to `pulsar-admin functions`.

:::note

State storage is not available in Go.

:::

### API

````mdx-code-block

Currently, Pulsar Functions expose the following APIs for mutating and accessing state. These APIs are available in the [Context](functions-develop.md#context) object when you are using Java SDK functions.

#### incrCounter

```java

  /**
   * Increment the built-in distributed counter referred to by key
   * @param key The name of the key
   * @param amount The amount to be incremented
   */
  void incrCounter(String key, long amount);

```

The application can use `incrCounter` to change the counter of a given `key` by the given `amount`.

#### incrCounterAsync

```java

  /**
   * Increment the built-in distributed counter referred to by key,
   * but don't wait for the completion of the increment operation
   *
   * @param key The name of the key
   * @param amount The amount to be incremented
   */
  CompletableFuture<Void> incrCounterAsync(String key, long amount);

```

The application can use `incrCounterAsync` to asynchronously change the counter of a given `key` by the given `amount`.

#### getCounter

```java

  /**
   * Retrieve the counter value for the key.
   *
   * @param key name of the key
   * @return the amount of the counter value for this key
   */
  long getCounter(String key);

```

The application can use `getCounter` to retrieve the counter of a given `key` mutated by `incrCounter`.

#### getCounterAsync

```java

  /**
   * Retrieve the counter value for the key, but don't wait
   * for the operation to be completed
   *
   * @param key name of the key
   * @return the amount of the counter value for this key
   */
  CompletableFuture<Long> getCounterAsync(String key);

```

The application can use `getCounterAsync` to asynchronously retrieve the counter of a given `key` mutated by `incrCounterAsync`.
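Taken together, the counter calls are enough for simple streaming aggregation. The following minimal sketch (the class and log message are illustrative) keeps one distributed counter per distinct input value and reads the running total back after each increment:

```java

import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;

// Minimal sketch: maintain one distributed counter per distinct input
// value and read the running total back after each increment.
public class OccurrenceCounterFunction implements Function<String, Void> {
    @Override
    public Void process(String input, Context context) {
        context.incrCounter(input, 1);
        long seen = context.getCounter(input);
        context.getLogger().info("Value '{}' seen {} times", input, seen);
        return null;
    }
}

```

Because keys are shared between instances of a function, the totals stay consistent even when the function runs with parallelism greater than one.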
- * - * @param key name of the key - * @param value state value of the key - */ - void putState(String key, ByteBuffer value); - -``` - -#### putStateAsync - -```java - - /** - * Update the state value for the key, but don't wait for the operation to be completed - * - * @param key name of the key - * @param value state value of the key - */ - CompletableFuture putStateAsync(String key, ByteBuffer value); - -``` - -The application can use `putStateAsync` to asynchronously update the state of a given `key`. - -#### getState - -```java - - /** - * Retrieve the state value for the key. - * - * @param key name of the key - * @return the state value for the key. - */ - ByteBuffer getState(String key); - -``` - -#### getStateAsync - -```java - - /** - * Retrieve the state value for the key, but don't wait for the operation to be completed - * - * @param key name of the key - * @return the state value for the key. - */ - CompletableFuture getStateAsync(String key); - -``` - -The application can use `getStateAsync` to asynchronously retrieve the state of a given `key`. - -#### deleteState - -```java - - /** - * Delete the state value for the key. - * - * @param key name of the key - */ - -``` - -Counters and binary values share the same keyspace, so this deletes either type. - - - - -Currently Pulsar Functions expose the following APIs for mutating and accessing State. These APIs are available in the [Context](#context) object when you are using Python SDK functions. - -#### incr_counter - -```python - - def incr_counter(self, key, amount): - ""incr the counter of a given key in the managed state"" - -``` - -Application can use `incr_counter` to change the counter of a given `key` by the given `amount`. -If the `key` does not exist, a new key is created. - -#### get_counter - -```python - - def get_counter(self, key): - """get the counter of a given key in the managed state""" - -``` - -Application can use `get_counter` to retrieve the counter of a given `key` mutated by `incrCounter`. - -Except the `counter` API, Pulsar also exposes a general key/value API for functions to store -general key/value state. - -#### put_state - -```python - - def put_state(self, key, value): - """update the value of a given key in the managed state""" - -``` - -The key is a string, and the value is arbitrary binary data. - -#### get_state - -```python - - def get_state(self, key): - """get the value of a given key in the managed state""" - -``` - -#### del_counter - -```python - - def del_counter(self, key): - """delete the counter of a given key in the managed state""" - -``` - -Counters and binary values share the same keyspace, so this deletes either type. - - - - -```` - -### Query State - -A Pulsar Function can use the [State API](#api) for storing state into Pulsar's state storage -and retrieving state back from Pulsar's state storage. Additionally Pulsar also provides -CLI commands for querying its state. - -```shell - -$ bin/pulsar-admin functions querystate \ - --tenant \ - --namespace \ - --name \ - --state-storage-url \ - --key \ - [---watch] - -``` - -If `--watch` is specified, the CLI will watch the value of the provided `state-key`. - -### Example - -````mdx-code-block - - - -{@inject: github:WordCountFunction:/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/WordCountFunction.java} is a very good example -demonstrating on how Application can easily store `state` in Pulsar Functions. 
- -```java - -import org.apache.pulsar.functions.api.Context; -import org.apache.pulsar.functions.api.Function; - -import java.util.Arrays; - -public class WordCountFunction implements Function { - @Override - public Void process(String input, Context context) throws Exception { - Arrays.asList(input.split("\\.")).forEach(word -> context.incrCounter(word, 1)); - return null; - } -} - -``` - -The logic of this `WordCount` function is pretty simple and straightforward: - -1. The function first splits the received `String` into multiple words using regex `\\.`. -2. For each `word`, the function increments the corresponding `counter` by 1 (via `incrCounter(key, amount)`). - - - - -```python - -from pulsar import Function - -class WordCount(Function): - def process(self, item, context): - for word in item.split(): - context.incr_counter(word, 1) - -``` - -The logic of this `WordCount` function is pretty simple and straightforward: - -1. The function first splits the received string into multiple words on space. -2. For each `word`, the function increments the corresponding `counter` by 1 (via `incr_counter(key, amount)`). - - - - -```` diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/functions-metrics.md b/site2/website/versioned_docs/version-2.8.2-deprecated/functions-metrics.md deleted file mode 100644 index 8add6693160929..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/functions-metrics.md +++ /dev/null @@ -1,7 +0,0 @@ ---- -id: functions-metrics -title: Metrics for Pulsar Functions -sidebar_label: "Metrics" -original_id: functions-metrics ---- - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/functions-overview.md b/site2/website/versioned_docs/version-2.8.2-deprecated/functions-overview.md deleted file mode 100644 index 816d301e0fd0e7..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/functions-overview.md +++ /dev/null @@ -1,209 +0,0 @@ ---- -id: functions-overview -title: Pulsar Functions overview -sidebar_label: "Overview" -original_id: functions-overview ---- - -**Pulsar Functions** are lightweight compute processes that - -* consume messages from one or more Pulsar topics, -* apply a user-supplied processing logic to each message, -* publish the results of the computation to another topic. - - -## Goals -With Pulsar Functions, you can create complex processing logic without deploying a separate neighboring system (such as [Apache Storm](http://storm.apache.org/), [Apache Heron](https://heron.incubator.apache.org/), [Apache Flink](https://flink.apache.org/)). Pulsar Functions are computing infrastructure of Pulsar messaging system. 
The core goal is tied to a series of other goals: - -* Developer productivity (language-native vs Pulsar Functions SDK functions) -* Easy troubleshooting -* Operational simplicity (no need for an external processing system) - -## Inspirations -Pulsar Functions are inspired by (and take cues from) several systems and paradigms: - -* Stream processing engines such as [Apache Storm](http://storm.apache.org/), [Apache Heron](https://apache.github.io/incubator-heron), and [Apache Flink](https://flink.apache.org) -* "Serverless" and "Function as a Service" (FaaS) cloud platforms like [Amazon Web Services Lambda](https://aws.amazon.com/lambda/), [Google Cloud Functions](https://cloud.google.com/functions/), and [Azure Cloud Functions](https://azure.microsoft.com/en-us/services/functions/) - -Pulsar Functions can be described as - -* [Lambda](https://aws.amazon.com/lambda/)-style functions that are -* specifically designed to use Pulsar as a message bus. - -## Programming model -Pulsar Functions provide a wide range of functionality, and the core programming model is simple. Functions receive messages from one or more **input [topics](reference-terminology.md#topic)**. Each time a message is received, the function will complete the following tasks. - - * Apply some processing logic to the input and write output to: - * An **output topic** in Pulsar - * [Apache BookKeeper](functions-develop.md#state-storage) - * Write logs to a **log topic** (potentially for debugging purposes) - * Increment a [counter](#word-count-example) - -![Pulsar Functions core programming model](/assets/pulsar-functions-overview.png) - -You can use Pulsar Functions to set up the following processing chain: - -* A Python function listens for the `raw-sentences` topic and "sanitizes" incoming strings (removing extraneous whitespace and converting all characters to lowercase) and then publishes the results to a `sanitized-sentences` topic. -* A Java function listens for the `sanitized-sentences` topic, counts the number of times each word appears within a specified time window, and publishes the results to a `results` topic -* Finally, a Python function listens for the `results` topic and writes the results to a MySQL table. - - -### Word count example - -If you implement the classic word count example using Pulsar Functions, it looks something like this: - -![Pulsar Functions word count example](/assets/pulsar-functions-word-count.png) - -To write the function in Java with [Pulsar Functions SDK for Java](functions-develop.md#available-apis), you can write the function as follows. - -```java - -package org.example.functions; - -import org.apache.pulsar.functions.api.Context; -import org.apache.pulsar.functions.api.Function; - -import java.util.Arrays; - -public class WordCountFunction implements Function { - // This function is invoked every time a message is published to the input topic - @Override - public Void process(String input, Context context) throws Exception { - Arrays.asList(input.split(" ")).forEach(word -> { - String counterKey = word.toLowerCase(); - context.incrCounter(counterKey, 1); - }); - return null; - } -} - -``` - -Bundle and build the JAR file to be deployed, and then deploy it in your Pulsar cluster using the [command line](functions-deploy.md#command-line-interface) as follows. 
- -```bash - -$ bin/pulsar-admin functions create \ - --jar target/my-jar-with-dependencies.jar \ - --classname org.example.functions.WordCountFunction \ - --tenant public \ - --namespace default \ - --name word-count \ - --inputs persistent://public/default/sentences \ - --output persistent://public/default/count - -``` - -### Content-based routing example - -Pulsar Functions are used in many cases. The following is a sophisticated example that involves content-based routing. - -For example, a function takes items (strings) as input and publishes them to either a `fruits` or `vegetables` topic, depending on the item. Or, if an item is neither fruit nor vegetable, a warning is logged to a [log topic](functions-develop.md#logger). The following is a visual representation. - -![Pulsar Functions routing example](/assets/pulsar-functions-routing-example.png) - -If you implement this routing functionality in Python, it looks something like this: - -```python - -from pulsar import Function - -class RoutingFunction(Function): - def __init__(self): - self.fruits_topic = "persistent://public/default/fruits" - self.vegetables_topic = "persistent://public/default/vegetables" - - @staticmethod - def is_fruit(item): - return item in [b"apple", b"orange", b"pear", b"other fruits..."] - - @staticmethod - def is_vegetable(item): - return item in [b"carrot", b"lettuce", b"radish", b"other vegetables..."] - - def process(self, item, context): - if self.is_fruit(item): - context.publish(self.fruits_topic, item) - elif self.is_vegetable(item): - context.publish(self.vegetables_topic, item) - else: - warning = "The item {0} is neither a fruit nor a vegetable".format(item) - context.get_logger().warn(warning) - -``` - -If this code is stored in `~/router.py`, then you can deploy it in your Pulsar cluster using the [command line](functions-deploy.md#command-line-interface) as follows. - -```bash - -$ bin/pulsar-admin functions create \ - --py ~/router.py \ - --classname router.RoutingFunction \ - --tenant public \ - --namespace default \ - --name route-fruit-veg \ - --inputs persistent://public/default/basket-items - -``` - -### Functions, messages and message types -Pulsar Functions take byte arrays as inputs and spit out byte arrays as output. However in languages that support typed interfaces(Java), you can write typed Functions, and bind messages to types in the following ways. -* [Schema Registry](functions-develop.md#schema-registry) -* [SerDe](functions-develop.md#serde) - - -## Fully Qualified Function Name (FQFN) -Each Pulsar Function has a **Fully Qualified Function Name** (FQFN) that consists of three elements: the function tenant, namespace, and function name. FQFN looks like this: - -```http - -tenant/namespace/name - -``` - -FQFNs enable you to create multiple functions with the same name provided that they are in different namespaces. - -## Supported languages -Currently, you can write Pulsar Functions in Java, Python, and Go. For details, refer to [Develop Pulsar Functions](functions-develop.md). - -## Processing guarantees -Pulsar Functions provide three different messaging semantics that you can apply to any function. - -Delivery semantics | Description -:------------------|:------- -**At-most-once** delivery | Each message sent to the function is likely to be processed, or not to be processed (hence "at most"). -**At-least-once** delivery | Each message sent to the function can be processed more than once (hence the "at least"). 
-**Effectively-once** delivery | Each message sent to the function will have one output associated with it. - - -### Apply processing guarantees to a function -You can set the processing guarantees for a Pulsar Function when you create the Function. The following [`pulsar-function create`](reference-pulsar-admin.md#create-1) command creates a function with effectively-once guarantees applied. - -```bash - -$ bin/pulsar-admin functions create \ - --name my-effectively-once-function \ - --processing-guarantees EFFECTIVELY_ONCE \ - # Other function configs - -``` - -The available options for `--processing-guarantees` are: - -* `ATMOST_ONCE` -* `ATLEAST_ONCE` -* `EFFECTIVELY_ONCE` - -> By default, Pulsar Functions provide at-least-once delivery guarantees. So if you create a function without supplying a value for the `--processingGuarantees` flag, the function provides at-least-once guarantees. - -### Update the processing guarantees of a function -You can change the processing guarantees applied to a function using the [`update`](reference-pulsar-admin.md#update-1) command. The following is an example. - -```bash - -$ bin/pulsar-admin functions update \ - --processing-guarantees ATMOST_ONCE \ - # Other function configs - -``` - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/functions-package.md b/site2/website/versioned_docs/version-2.8.2-deprecated/functions-package.md deleted file mode 100644 index a995d5c1588771..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/functions-package.md +++ /dev/null @@ -1,493 +0,0 @@ ---- -id: functions-package -title: Package Pulsar Functions -sidebar_label: "How-to: Package" -original_id: functions-package ---- - -You can package Pulsar functions in Java, Python, and Go. Packaging the window function in Java is the same as [packaging a function in Java](#java). - -:::note - -Currently, the window function is not available in Python and Go. - -::: - -## Prerequisite - -Before running a Pulsar function, you need to start Pulsar. You can [run a standalone Pulsar in Docker](getting-started-docker.md), or [run Pulsar in Kubernetes](getting-started-helm.md). - -To check whether the Docker image starts, you can use the `docker ps` command. - -## Java - -To package a function in Java, complete the following steps. - -1. Create a new maven project with a pom file. In the following code sample, the value of `mainClass` is your package name. - - ```Java - - - - 4.0.0 - - java-function - java-function - 1.0-SNAPSHOT - - - - org.apache.pulsar - pulsar-functions-api - 2.6.0 - - - - - - - maven-assembly-plugin - - false - - jar-with-dependencies - - - - org.example.test.ExclamationFunction - - - - - - make-assembly - package - - assembly - - - - - - org.apache.maven.plugins - maven-compiler-plugin - - 8 - 8 - - - - - - - - ``` - -2. Write a Java function. - - ``` - - package org.example.test; - - import java.util.function.Function; - - public class ExclamationFunction implements Function { - @Override - public String apply(String s) { - return "This is my function!"; - } - } - - ``` - - For the imported package, you can use one of the following interfaces: - - Function interface provided by Java 8: `java.util.function.Function` - - Pulsar Function interface: `org.apache.pulsar.functions.api.Function` - - The main difference between the two interfaces is that the `org.apache.pulsar.functions.api.Function` interface provides the context interface. 
When you write a function and want to interact with it, you can use context to obtain a wide variety of information and functionality for Pulsar Functions. - - The following example uses `org.apache.pulsar.functions.api.Function` interface with context. - - ``` - - package org.example.functions; - import org.apache.pulsar.functions.api.Context; - import org.apache.pulsar.functions.api.Function; - - import java.util.Arrays; - public class WordCountFunction implements Function { - // This function is invoked every time a message is published to the input topic - @Override - public Void process(String input, Context context) throws Exception { - Arrays.asList(input.split(" ")).forEach(word -> { - String counterKey = word.toLowerCase(); - context.incrCounter(counterKey, 1); - }); - return null; - } - } - - ``` - -3. Package the Java function. - - ```bash - - mvn package - - ``` - - After the Java function is packaged, a `target` directory is created automatically. Open the `target` directory to check if there is a JAR package similar to `java-function-1.0-SNAPSHOT.jar`. - - -4. Run the Java function. - - (1) Copy the packaged jar file to the Pulsar image. - - ```bash - - docker exec -it [CONTAINER ID] /bin/bash - docker cp CONTAINER ID:/pulsar - - ``` - - (2) Run the Java function using the following command. - - ```bash - - ./bin/pulsar-admin functions localrun \ - --classname org.example.test.ExclamationFunction \ - --jar java-function-1.0-SNAPSHOT.jar \ - --inputs persistent://public/default/my-topic-1 \ - --output persistent://public/default/test-1 \ - --tenant public \ - --namespace default \ - --name JavaFunction - - ``` - - The following log indicates that the Java function starts successfully. - - ```text - - ... - 07:55:03.724 [main] INFO org.apache.pulsar.functions.runtime.ProcessRuntime - Started process successfully - ... - - ``` - -## Python - -Python Function supports the following three formats: - -- One python file -- ZIP file -- PIP - -### One python file - -To package a function with **one python file** in Python, complete the following steps. - -1. Write a Python function. - - ``` - - from pulsar import Function // import the Function module from Pulsar - - # The classic ExclamationFunction that appends an exclamation at the end - # of the input - class ExclamationFunction(Function): - def __init__(self): - pass - - def process(self, input, context): - return input + '!' - - ``` - - In this example, when you write a Python function, you need to inherit the Function class and implement the `process()` method. - - `process()` mainly has two parameters: - - - `input` represents your input. - - - `context` represents an interface exposed by the Pulsar Function. You can get the attributes in the Python function based on the provided context object. - -2. Install a Python client. - - The implementation of a Python function depends on the Python client, so before deploying a Python function, you need to install the corresponding version of the Python client. - - ```bash - - pip install pulsar-client==2.6.0 - - ``` - -3. Run the Python Function. - - (1) Copy the Python function file to the Pulsar image. - - ```bash - - docker exec -it [CONTAINER ID] /bin/bash - docker cp CONTAINER ID:/pulsar - - ``` - - (2) Run the Python function using the following command. - - ```bash - - ./bin/pulsar-admin functions localrun \ - --classname . 
\ - --py \ - --inputs persistent://public/default/my-topic-1 \ - --output persistent://public/default/test-1 \ - --tenant public \ - --namespace default \ - --name PythonFunction - - ``` - - The following log indicates that the Python function starts successfully. - - ```text - - ... - 07:55:03.724 [main] INFO org.apache.pulsar.functions.runtime.ProcessRuntime - Started process successfully - ... - - ``` - -### ZIP file - -To package a function with the **ZIP file** in Python, complete the following steps. - -1. Prepare the ZIP file. - - The following is required when packaging the ZIP file of the Python Function. - - ```text - - Assuming the zip file is named as `func.zip`, unzip the `func.zip` folder: - "func/src" - "func/requirements.txt" - "func/deps" - - ``` - - Take [exclamation.zip](https://github.com/apache/pulsar/tree/master/tests/docker-images/latest-version-image/python-examples) as an example. The internal structure of the example is as follows. - - ```text - - . - ├── deps - │   └── sh-1.12.14-py2.py3-none-any.whl - └── src - └── exclamation.py - - ``` - -2. Run the Python Function. - - (1) Copy the ZIP file to the Pulsar image. - - ```bash - - docker exec -it [CONTAINER ID] /bin/bash - docker cp CONTAINER ID:/pulsar - - ``` - - (2) Run the Python function using the following command. - - ```bash - - ./bin/pulsar-admin functions localrun \ - --classname exclamation \ - --py \ - --inputs persistent://public/default/in-topic \ - --output persistent://public/default/out-topic \ - --tenant public \ - --namespace default \ - --name PythonFunction - - ``` - - The following log indicates that the Python function starts successfully. - - ```text - - ... - 07:55:03.724 [main] INFO org.apache.pulsar.functions.runtime.ProcessRuntime - Started process successfully - ... - - ``` - -### PIP - -The PIP method is only supported in Kubernetes runtime. To package a function with **PIP** in Python, complete the following steps. - -1. Configure the `functions_worker.yml` file. - - ```text - - #### Kubernetes Runtime #### - installUserCodeDependencies: true - - ``` - -2. Write your Python Function. - - ``` - - from pulsar import Function - import js2xml - - # The classic ExclamationFunction that appends an exclamation at the end - # of the input - class ExclamationFunction(Function): - def __init__(self): - pass - - def process(self, input, context): - // add your logic - return input + '!' - - ``` - - You can introduce additional dependencies. When Python Function detects that the file currently used is `whl` and the `installUserCodeDependencies` parameter is specified, the system uses the `pip install` command to install the dependencies required in Python Function. - -3. Generate the `whl` file. - - ```shell script - - $ cd $PULSAR_HOME/pulsar-functions/scripts/python - $ chmod +x generate.sh - $ ./generate.sh - # e.g: ./generate.sh /path/to/python /path/to/python/output 1.0.0 - - ``` - - The output is written in `/path/to/python/output`: - - ```text - - -rw-r--r-- 1 root staff 1.8K 8 27 14:29 pulsarfunction-1.0.0-py2-none-any.whl - -rw-r--r-- 1 root staff 1.4K 8 27 14:29 pulsarfunction-1.0.0.tar.gz - -rw-r--r-- 1 root staff 0B 8 27 14:29 pulsarfunction.whl - - ``` - -## Go - -To package a function in Go, complete the following steps. - -1. Write a Go function. - - Currently, Go function can be **only** implemented using SDK and the interface of the function is exposed in the form of SDK. Before using the Go function, you need to import "github.com/apache/pulsar/pulsar-function-go/pf". 
- - ``` - - import ( - "context" - "fmt" - - "github.com/apache/pulsar/pulsar-function-go/pf" - ) - - func HandleRequest(ctx context.Context, input []byte) error { - fmt.Println(string(input) + "!") - return nil - } - - func main() { - pf.Start(HandleRequest) - } - - ``` - - You can use context to connect to the Go function. - - ``` - - if fc, ok := pf.FromContext(ctx); ok { - fmt.Printf("function ID is:%s, ", fc.GetFuncID()) - fmt.Printf("function version is:%s\n", fc.GetFuncVersion()) - } - - ``` - - When writing a Go function, remember that - - In `main()`, you **only** need to register the function name to `Start()`. **Only** one function name is received in `Start()`. - - Go function uses Go reflection, which is based on the received function name, to verify whether the parameter list and returned value list are correct. The parameter list and returned value list **must be** one of the following sample functions: - - ``` - - func () - func () error - func (input) error - func () (output, error) - func (input) (output, error) - func (context.Context) error - func (context.Context, input) error - func (context.Context) (output, error) - func (context.Context, input) (output, error) - - ``` - -2. Build the Go function. - - ``` - - go build .go - - ``` - -3. Run the Go Function. - - (1) Copy the Go function file to the Pulsar image. - - ```bash - - docker exec -it [CONTAINER ID] /bin/bash - docker cp CONTAINER ID:/pulsar - - ``` - - (2) Run the Go function with the following command. - - ``` - - ./bin/pulsar-admin functions localrun \ - --go [your go function path] - --inputs [input topics] \ - --output [output topic] \ - --tenant [default:public] \ - --namespace [default:default] \ - --name [custom unique go function name] - - ``` - - The following log indicates that the Go function starts successfully. - - ```text - - ... - 07:55:03.724 [main] INFO org.apache.pulsar.functions.runtime.ProcessRuntime - Started process successfully - ... - - ``` - -## Start Functions in cluster mode -If you want to start a function in cluster mode, replace `localrun` with `create` in the commands above. The following log indicates that your function starts successfully. - - ```text - - "Created successfully" - - ``` - -For information about parameters on `--classname`, `--jar`, `--py`, `--go`, `--inputs`, run the command `./bin/pulsar-admin functions` or see [here](reference-pulsar-admin.md#functions). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/functions-runtime.md b/site2/website/versioned_docs/version-2.8.2-deprecated/functions-runtime.md deleted file mode 100644 index ab7d1c05db421e..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/functions-runtime.md +++ /dev/null @@ -1,399 +0,0 @@ ---- -id: functions-runtime -title: Configure Functions runtime -sidebar_label: "Setup: Configure Functions runtime" -original_id: functions-runtime ---- - -You can use the following methods to run functions. - -- *Thread*: Invoke functions threads in functions worker. -- *Process*: Invoke functions in processes forked by functions worker. -- *Kubernetes*: Submit functions as Kubernetes StatefulSets by functions worker. - -:::note - -Pulsar supports adding labels to the Kubernetes StatefulSets and services while launching functions, which facilitates selecting the target Kubernetes objects. 
-
-## Start Functions in cluster mode
-If you want to start a function in cluster mode, replace `localrun` with `create` in the commands above. The following log indicates that your function starts successfully.
-
-  ```text
-
-  "Created successfully"
-
-  ```
-
-For information about parameters on `--classname`, `--jar`, `--py`, `--go`, `--inputs`, run the command `./bin/pulsar-admin functions` or see [here](reference-pulsar-admin.md#functions).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/functions-runtime.md b/site2/website/versioned_docs/version-2.8.2-deprecated/functions-runtime.md
deleted file mode 100644
index ab7d1c05db421e..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/functions-runtime.md
+++ /dev/null
@@ -1,399 +0,0 @@
----
-id: functions-runtime
-title: Configure Functions runtime
-sidebar_label: "Setup: Configure Functions runtime"
-original_id: functions-runtime
----
-
-You can use the following methods to run functions.
-
-- *Thread*: Invoke functions in threads in the functions worker.
-- *Process*: Invoke functions in processes forked by the functions worker.
-- *Kubernetes*: Submit functions as Kubernetes StatefulSets from the functions worker.
-
-:::note
-
-Pulsar supports adding labels to the Kubernetes StatefulSets and services while launching functions, which facilitates selecting the target Kubernetes objects.
-
-:::
-
-The differences between the thread and process modes are:
-- Thread mode: when a function runs in thread mode, it runs in the same Java virtual machine (JVM) as the functions worker.
-- Process mode: when a function runs in process mode, it runs in a separate process forked by the functions worker, on the same machine.
-
-## Configure thread runtime
-It is easy to configure the *Thread* runtime. In most cases, you do not need to configure anything. You can customize the thread group name with the following settings:
-
-```yaml
-
-functionRuntimeFactoryClassName: org.apache.pulsar.functions.runtime.thread.ThreadRuntimeFactory
-functionRuntimeFactoryConfigs:
-  threadGroupName: "Your Function Container Group"
-
-```
-
-The *Thread* runtime is only supported in Java functions.
-
-## Configure process runtime
-When you enable the *Process* runtime, you do not need to configure anything.
-
-```yaml
-
-functionRuntimeFactoryClassName: org.apache.pulsar.functions.runtime.process.ProcessRuntimeFactory
-functionRuntimeFactoryConfigs:
-  # the directory for storing the function logs
-  logDirectory:
-  # change the jar location only when you put the java instance jar in a different location
-  javaInstanceJarLocation:
-  # change the python instance location only when you put the python instance jar in a different location
-  pythonInstanceLocation:
-  # change the extra dependencies location:
-  extraFunctionDependenciesDir:
-
-```
-
-The *Process* runtime is supported in Java, Python, and Go functions.
-
-## Configure Kubernetes runtime
-
-The Kubernetes runtime works by having the functions worker generate Kubernetes manifests and apply them. If the functions worker is running on Kubernetes, it can use the `serviceAccount` associated with its own pod. Otherwise, you can configure it to communicate with a Kubernetes cluster.
-
-The manifests, generated by the functions worker, include a `StatefulSet`, a `Service` (used to communicate with the pods), and a `Secret` for auth credentials (when applicable). The `StatefulSet` manifest (by default) has a single pod, with the number of replicas determined by the "parallelism" of the function. On pod boot, the pod downloads the function payload (via the functions worker REST API). The pod's container image is configurable, but must have the functions runtime.
-
-The Kubernetes runtime supports secrets, so you can create a Kubernetes secret and expose it as an environment variable in the pod. The Kubernetes runtime is also extensible: you can implement your own classes to customize how Kubernetes manifests are generated, how auth data is passed to pods, and how secrets are integrated.
-
-:::tip
-
-For the rules of translating Pulsar object names into Kubernetes resource labels, see [here](admin-api-overview.md#how-to-define-pulsar-resource-names-when-running-pulsar-in-kubernetes).
-
-:::
-
-### Basic configuration
-
-It is easy to configure the Kubernetes runtime. You can just uncomment the settings of `kubernetesContainerFactory` in the `functions_worker.yml` file. The following is an example.
-
-```yaml
-
-functionRuntimeFactoryClassName: org.apache.pulsar.functions.runtime.kubernetes.KubernetesRuntimeFactory
-functionRuntimeFactoryConfigs:
-  # uri to kubernetes cluster, leave it to empty and it will use the kubernetes settings in function worker
-  k8Uri:
-  # the kubernetes namespace to run the function instances. it is `default`, if this setting is left to be empty
-  jobNamespace:
-  # The Kubernetes pod name to run the function instances. It is set to
-  # `pf-<tenant>-<namespace>-<function_name>` if this setting is left to be empty
-  jobName:
-  # the docker image to run function instance. by default it is `apachepulsar/pulsar`
-  pulsarDockerImageName:
-  # the docker image to run function instance according to different configurations provided by users.
-  # By default it is `apachepulsar/pulsar`.
-  # e.g:
-  # functionDockerImages:
-  #   JAVA: JAVA_IMAGE_NAME
-  #   PYTHON: PYTHON_IMAGE_NAME
-  #   GO: GO_IMAGE_NAME
-  functionDockerImages:
-  # The image pull policy for the image used to run the function instance. By default it is `IfNotPresent`
-  imagePullPolicy: IfNotPresent
-  # the root directory of the pulsar home directory in `pulsarDockerImageName`. by default it is `/pulsar`.
-  # if you are using your own built image in `pulsarDockerImageName`, you need to set this setting accordingly
-  pulsarRootDir:
-  # The config admin CLI allows users to customize the configuration of the admin cli tool, such as:
-  # `/bin/pulsar-admin` and `/bin/pulsarctl`. By default it is `/bin/pulsar-admin`. If you want to use `pulsarctl`,
-  # you need to set this setting accordingly
-  configAdminCLI:
-  # this setting only takes effect if `k8Uri` is set to null. if your functions worker is running as a k8s pod,
-  # setting this to true lets the functions worker submit functions to the same k8s cluster it is running in.
-  # set this to false if your functions worker is not running as a k8s pod.
-  submittingInsidePod: false
-  # the pulsar service url that pulsar functions should use to connect to pulsar
-  # if it is not set, it will use the pulsar service url configured in the worker service
-  pulsarServiceUrl:
-  # the pulsar admin url that pulsar functions should use to connect to pulsar
-  # if it is not set, it will use the pulsar admin url configured in the worker service
-  pulsarAdminUrl:
-  # The flag indicates whether to install user code dependencies. (applied to python packages)
-  installUserCodeDependencies:
-  # The repository that pulsar functions use to download python dependencies
-  pythonDependencyRepository:
-  # The repository that pulsar functions use to download extra python dependencies
-  pythonExtraDependencyRepository:
-  # the custom labels that the functions worker uses to select the nodes for pods
-  customLabels:
-  # The expected metrics collection interval, in seconds
-  expectedMetricsCollectionInterval: 30
-  # if defined, the Kubernetes runtime periodically checks this configMap
-  # and applies any changes to the Kubernetes-specific settings found there
-  changeConfigMap:
-  # The namespace for storing the change config map
-  changeConfigMapNamespace:
-  # The ratio between the cpu request and the cpu limit to be set for a function/source/sink.
-  # The formula for the cpu request is cpuRequest = userRequestCpu / cpuOverCommitRatio
-  cpuOverCommitRatio: 1.0
-  # The ratio between the memory request and the memory limit to be set for a function/source/sink.
-  # The formula for the memory request is memoryRequest = userRequestMemory / memoryOverCommitRatio
-  memoryOverCommitRatio: 1.0
-  # The port inside the function pod which is used by the worker to communicate with the pod
-  grpcPort: 9093
-  # The port inside the function pod on which prometheus metrics are exposed
-  metricsPort: 9094
-  # The directory inside the function pod where nar packages will be extracted
-  narExtractionDirectory:
-  # The classpath where function instance files are stored
-  functionInstanceClassPath:
-  # the directory for dropping extra function dependencies
-  # if it is not an absolute path, it is relative to `pulsarRootDir`
-  extraFunctionDependenciesDir:
-  # Additional memory padding added on top of the memory requested by the function, on a per-instance basis
-  percentMemoryPadding: 10
-
-```
-
-If you run the functions worker embedded in a broker on Kubernetes, you can use the default settings.
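-
-After a function is submitted, you can sanity-check what the runtime created with standard Kubernetes tooling. A sketch, assuming the default `jobNamespace` of `default`:
-
-```bash
-
-kubectl get statefulsets,services,pods -n default
-
-```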
-
-### Run standalone functions worker on Kubernetes
-
-If you run the functions worker standalone (that is, not embedded) on Kubernetes, you need to configure `pulsarServiceUrl` to be the URL of the broker and `pulsarAdminUrl` as the URL to the functions worker.
-
-For example, suppose both the Pulsar brokers and the functions workers run in the `pulsar` K8S namespace, the brokers have a service called `brokers`, and the functions worker has a service called `func-worker`. The settings are as follows:
-
-```yaml
-
-pulsarServiceUrl: pulsar://brokers.pulsar:6650 # or pulsar+ssl://brokers.pulsar:6651 if using TLS
-pulsarAdminUrl: http://func-worker.pulsar:8080 # or https://func-worker.pulsar:8443 if using TLS
-
-```
-
-### Run RBAC in Kubernetes clusters
-
-If you run RBAC in your Kubernetes cluster, make sure that the service account you use for running functions workers (or brokers, if functions workers run along with brokers) has permissions on the following Kubernetes APIs.
-
-- services
-- configmaps
-- pods
-- apps.statefulsets
-
-The following is sufficient:
-
-```yaml
-
-apiVersion: rbac.authorization.k8s.io/v1beta1
-kind: ClusterRole
-metadata:
-  name: functions-worker
-rules:
-- apiGroups: [""]
-  resources:
-  - services
-  - configmaps
-  - pods
-  verbs:
-  - '*'
-- apiGroups:
-  - apps
-  resources:
-  - statefulsets
-  verbs:
-  - '*'
----
-apiVersion: v1
-kind: ServiceAccount
-metadata:
-  name: functions-worker
----
-apiVersion: rbac.authorization.k8s.io/v1beta1
-kind: ClusterRoleBinding
-metadata:
-  name: functions-worker
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: ClusterRole
-  name: functions-worker
-subjects:
-- kind: ServiceAccount
-  name: functions-worker
-
-```
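-
-Assuming you save the manifest above as `functions-worker-rbac.yaml` (an illustrative file name), you can apply it with:
-
-```bash
-
-kubectl apply -f functions-worker-rbac.yaml
-
-```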
-
-If the service-account is not properly configured, an error message similar to this is displayed:
-
-```bash
-
-22:04:27.696 [Timer-0] ERROR org.apache.pulsar.functions.runtime.KubernetesRuntimeFactory - Error while trying to fetch configmap example-pulsar-4qvmb5gur3c6fc9dih0x1xn8b-function-worker-config at namespace pulsar
-io.kubernetes.client.ApiException: Forbidden
- at io.kubernetes.client.ApiClient.handleResponse(ApiClient.java:882) ~[io.kubernetes-client-java-2.0.0.jar:?]
- at io.kubernetes.client.ApiClient.execute(ApiClient.java:798) ~[io.kubernetes-client-java-2.0.0.jar:?]
- at io.kubernetes.client.apis.CoreV1Api.readNamespacedConfigMapWithHttpInfo(CoreV1Api.java:23673) ~[io.kubernetes-client-java-api-2.0.0.jar:?]
- at io.kubernetes.client.apis.CoreV1Api.readNamespacedConfigMap(CoreV1Api.java:23655) ~[io.kubernetes-client-java-api-2.0.0.jar:?]
- at org.apache.pulsar.functions.runtime.KubernetesRuntimeFactory.fetchConfigMap(KubernetesRuntimeFactory.java:284) [org.apache.pulsar-pulsar-functions-runtime-2.4.0-42c3bf949.jar:2.4.0-42c3bf949]
- at org.apache.pulsar.functions.runtime.KubernetesRuntimeFactory$1.run(KubernetesRuntimeFactory.java:275) [org.apache.pulsar-pulsar-functions-runtime-2.4.0-42c3bf949.jar:2.4.0-42c3bf949]
- at java.util.TimerThread.mainLoop(Timer.java:555) [?:1.8.0_212]
- at java.util.TimerThread.run(Timer.java:505) [?:1.8.0_212]
-
-```
-
-### Integrate Kubernetes secrets
-
-In order to safely distribute secrets, Pulsar Functions can reference Kubernetes secrets. To enable this, set `secretsProviderConfiguratorClassName` to `org.apache.pulsar.functions.secretsproviderconfigurator.KubernetesSecretsProviderConfigurator`.
-
-You can create a secret in the namespace where your functions are deployed. For example, suppose you deploy functions to the `pulsar-func` Kubernetes namespace, and you have a secret named `database-creds` with a field named `password`, which you want to mount in the pod as an environment variable called `DATABASE_PASSWORD`. The following functions configuration enables you to reference that secret and mount the value as an environment variable in the pod.
-
-```yaml
-
-tenant: "mytenant"
-namespace: "mynamespace"
-name: "myfunction"
-topicName: "persistent://mytenant/mynamespace/myfuncinput"
-className: "com.company.pulsar.myfunction"
-
-secrets:
-  # the secret will be mounted from the `password` field in the `database-creds` secret as an env var called `DATABASE_PASSWORD`
-  DATABASE_PASSWORD:
-    path: "database-creds"
-    key: "password"
-
-```
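-
-For reference, a secret matching this example can be created ahead of time with standard Kubernetes tooling. A sketch; the password value is illustrative:
-
-```bash
-
-kubectl create secret generic database-creds \
-  --namespace pulsar-func \
-  --from-literal=password='my-db-password'
-
-```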
-
-### Enable token authentication
-
-When you enable authentication for your Pulsar cluster, you need a mechanism for the pod running your function to authenticate with the broker.
-
-The `org.apache.pulsar.functions.auth.KubernetesFunctionAuthProvider` interface provides support for any authentication mechanism. The `functionAuthProviderClassName` setting in `functions_worker.yml` is used to specify the path to your implementation.
-
-Pulsar includes an implementation of this interface for token authentication, which also distributes the certificate authority via the same implementation. The configuration is as follows:
-
-```yaml
-
-functionAuthProviderClassName: org.apache.pulsar.functions.auth.KubernetesSecretsTokenAuthProvider
-
-```
-
-For token authentication, the functions worker captures the token that is used to deploy (or update) the function. The token is saved as a secret and mounted into the pod.
-
-For custom authentication or TLS, you need to implement this interface or use an alternative mechanism to provide authentication. If you use token authentication and TLS encryption to secure the communication with the cluster, Pulsar passes your certificate authority (CA) to the client, so the client obtains what it needs to authenticate the cluster, and trusts the cluster with your signed certificate.
-
-:::note
-
-If the token used to deploy a function has an expiry time, the token saved in the secret expires as well and is not renewed automatically.
-
-:::
-
-### Run clusters with authentication
-
-When you run a functions worker in a standalone process (that is, not embedded in the broker) in a cluster with authentication, you must configure your functions worker to interact with the broker and authenticate incoming requests. So you need to configure properties that the broker requires for authentication or authorization.
-
-For example, if you use token authentication, you need to configure the following properties in the `functions_worker.yml` file.
-
-```yaml
-
-clientAuthenticationPlugin: org.apache.pulsar.client.impl.auth.AuthenticationToken
-clientAuthenticationParameters: file:///etc/pulsar/token/admin-token.txt
-configurationStoreServers: zookeeper-cluster:2181 # auth requires a connection to zookeeper
-authenticationProviders:
-  - "org.apache.pulsar.broker.authentication.AuthenticationProviderToken"
-authorizationEnabled: true
-authenticationEnabled: true
-superUserRoles:
-  - superuser
-  - proxy
-properties:
-  tokenSecretKey: file:///etc/pulsar/jwt/secret # if using a secret token, key file must be DER-encoded
-  tokenPublicKey: file:///etc/pulsar/jwt/public.key # if using public/private key tokens, key file must be DER-encoded
-
-```
-
-:::note
-
-You must configure the functions worker both as a server, so that it can authenticate incoming requests, and as a client, so that it can authenticate itself when communicating with the broker.
-
-:::
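-
-For reference, the admin token referenced above can be generated with Pulsar's token tool, using the same secret key the broker is configured with. A sketch, assuming a symmetric secret key:
-
-```bash
-
-bin/pulsar tokens create \
-  --secret-key file:///etc/pulsar/jwt/secret \
-  --subject admin > /etc/pulsar/token/admin-token.txt
-
-```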
-
-### Customize Kubernetes runtime
-
-The Kubernetes integration enables you to implement a class that customizes how manifests are generated. You can configure it by setting `runtimeCustomizerClassName` in the `functions_worker.yml` file to the fully qualified class name. You must implement the `org.apache.pulsar.functions.runtime.kubernetes.KubernetesManifestCustomizer` interface.
-
-The functions (and sinks/sources) API provides a flag, `customRuntimeOptions`, which is passed to this interface.
-
-To initialize the `KubernetesManifestCustomizer`, you can provide `runtimeCustomizerConfig` in the `functions_worker.yml` file. `runtimeCustomizerConfig` is passed to the `public void initialize(Map<String, Object> config)` function of the interface. `runtimeCustomizerConfig` is different from `customRuntimeOptions` in that `runtimeCustomizerConfig` is the same across all functions. If you provide both `runtimeCustomizerConfig` and `customRuntimeOptions`, you need to decide how to manage these two configurations in your implementation of `KubernetesManifestCustomizer`.
-
-Pulsar includes a built-in implementation. To use it, set `runtimeCustomizerClassName` to `org.apache.pulsar.functions.runtime.kubernetes.BasicKubernetesManifestCustomizer`. The built-in implementation, initialized with `runtimeCustomizerConfig`, enables you to pass a JSON document as `customRuntimeOptions` with certain properties that augment how the manifests are generated. If both `runtimeCustomizerConfig` and `customRuntimeOptions` are provided, `BasicKubernetesManifestCustomizer` uses `customRuntimeOptions` to override the configuration where the two conflict.
-
-Below is an example of `customRuntimeOptions`.
-
-```json
-
-{
-  "jobName": "jobname", // the k8s pod name to run this function instance
-  "jobNamespace": "namespace", // the k8s namespace to run this function in
-  "extraLabels": { // extra labels to attach to the statefulSet, service, and pods
-    "extraLabel": "value"
-  },
-  "extraAnnotations": { // extra annotations to attach to the statefulSet, service, and pods
-    "extraAnnotation": "value"
-  },
-  "nodeSelectorLabels": { // node selector labels to add on to the pod spec
-    "customLabel": "value"
-  },
-  "tolerations": [ // tolerations to add to the pod spec
-    {
-      "key": "custom-key",
-      "value": "value",
-      "effect": "NoSchedule"
-    }
-  ],
-  "resourceRequirements": { // values for cpu and memory should be defined as described here: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container
-    "requests": {
-      "cpu": 1,
-      "memory": "4G"
-    },
-    "limits": {
-      "cpu": 2,
-      "memory": "8G"
-    }
-  }
-}
-
-```
-
-## Run clusters with geo-replication
-
-If you run multiple clusters tied together with geo-replication, it is important to use a different functions namespace for each cluster. Otherwise, the functions workers share a namespace, and functions may be scheduled across clusters.
-
-For example, if you have two clusters, `east-1` and `west-1`, you can configure the functions workers for `east-1` and `west-1` respectively as follows.
-
-```yaml
-
-pulsarFunctionsCluster: east-1
-pulsarFunctionsNamespace: public/functions-east-1
-
-```
-
-```yaml
-
-pulsarFunctionsCluster: west-1
-pulsarFunctionsNamespace: public/functions-west-1
-
-```
-
-This ensures that the two different functions workers use distinct sets of topics for their internal coordination.
-
-## Configure standalone functions worker
-
-When configuring a standalone functions worker, you need to configure the properties that the broker requires, especially if you use TLS, so that the functions worker can communicate with the broker.
-
-You need to configure the following required properties.
-
-```yaml
-
-workerPort: 8080
-workerPortTls: 8443 # when using TLS
-tlsCertificateFilePath: /etc/pulsar/tls/tls.crt # when using TLS
-tlsKeyFilePath: /etc/pulsar/tls/tls.key # when using TLS
-tlsTrustCertsFilePath: /etc/pulsar/tls/ca.crt # when using TLS
-pulsarServiceUrl: pulsar://broker.pulsar:6650/ # or pulsar+ssl://pulsar-prod-broker.pulsar:6651/ when using TLS
-pulsarWebServiceUrl: http://broker.pulsar:8080/ # or https://pulsar-prod-broker.pulsar:8443/ when using TLS
-useTls: true # when using TLS, critical!
-
-```
-
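-With the worker ports configured this way, admin tools can talk to the functions worker directly instead of going through a broker. A quick sanity check (a sketch; the host name is illustrative):
-
-```bash
-
-bin/pulsar-admin --admin-url http://func-worker.pulsar:8080 functions list \
-  --tenant public \
-  --namespace default
-
-```
-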
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/functions-worker.md b/site2/website/versioned_docs/version-2.8.2-deprecated/functions-worker.md
deleted file mode 100644
index 49fc76b30bdaa5..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/functions-worker.md
+++ /dev/null
@@ -1,386 +0,0 @@
----
-id: functions-worker
-title: Deploy and manage functions worker
-sidebar_label: "Setup: Pulsar Functions Worker"
-original_id: functions-worker
----
-Before using Pulsar Functions, you need to learn how to set up the Pulsar Functions worker and how to [configure Functions runtime](functions-runtime.md).
-
-Pulsar `functions-worker` is a logical component that runs Pulsar Functions in cluster mode. Two options are available, and you can select either based on your requirements.
-- [run with brokers](#run-functions-worker-with-brokers)
-- [run it separately](#run-functions-worker-separately) on different machines
-
-:::note
-
-The `--- Service Urls---` lines in the following diagrams represent the Pulsar service URLs that Pulsar clients and admin tools use to connect to a Pulsar cluster.
-
-:::
-
-## Run Functions-worker with brokers
-
-The following diagram illustrates the deployment of functions-workers running along with brokers.
-
-![assets/functions-worker-corun.png](/assets/functions-worker-corun.png)
-
-To enable the functions-worker to run as part of a broker, you need to set `functionsWorkerEnabled` to `true` in the `broker.conf` file.
-
-```conf
-
-functionsWorkerEnabled=true
-
-```
-
-If `functionsWorkerEnabled` is set to `true`, the functions-worker is started as part of the broker. You need to configure the `conf/functions_worker.yml` file to customize your functions worker.
-
-Before you run the functions-worker with brokers, you have to configure the functions-worker, and then start it with the brokers.
-
-### Configure Functions-Worker to run with brokers
-In this mode, most of the settings are already inherited from your broker configuration (for example, configurationStore settings, authentication settings, and so on) since the `functions-worker` is running as part of the broker.
-
-Pay attention to the following required settings when configuring the functions-worker in this mode.
-
-- `numFunctionPackageReplicas`: The number of replicas used to store function packages. The default value is `1`, which is good for standalone deployment. For production deployment, to ensure high availability, set it to be larger than `2`.
-- `initializedDlogMetadata`: Whether to initialize the distributed log metadata at runtime. If it is set to `true`, you must ensure that the metadata has been initialized by the `bin/pulsar initialize-cluster-metadata` command.
-
-If authentication is enabled on the BookKeeper cluster, configure the following BookKeeper authentication settings.
-
-- `bookkeeperClientAuthenticationPlugin`: the BookKeeper client authentication plugin name.
-- `bookkeeperClientAuthenticationParametersName`: the BookKeeper client authentication plugin parameters name.
-- `bookkeeperClientAuthenticationParameters`: the BookKeeper client authentication plugin parameters.
-
-### Configure Stateful-Functions to run with broker
-
-If you want to use stateful functions (for example, the `putState()` and `queryState()` related interfaces), follow the steps below.
-
-1. Enable the **streamStorage** service in BookKeeper.
-
-   Currently, the service uses the NAR package, so you need to set the following configuration in `bookkeeper.conf`.
-
-   ```text
-
-   extraServerComponents=org.apache.bookkeeper.stream.server.StreamStorageLifecycleComponent
-
-   ```
-
-   After starting the bookie, use the following method to check whether the streamStorage service has started correctly.
-
-   Input:
-
-   ```shell
-
-   telnet localhost 4181
-
-   ```
-
-   Output:
-
-   ```text
-
-   Trying 127.0.0.1...
-   Connected to localhost.
-   Escape character is '^]'.
-
-   ```
-
-2. Turn on this function in `functions_worker.yml`.
-
-   ```text
-
-   stateStorageServiceUrl: bk://<bk-service-url>:4181
-
-   ```
-
-   `<bk-service-url>` is the service URL pointing to the BookKeeper table service.
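-
-   Once state storage is enabled, you can read a function's state from the CLI. A sketch; the function name and key are illustrative:
-
-   ```shell
-
-   bin/pulsar-admin functions querystate \
-     --tenant public \
-     --namespace default \
-     --name word-count \
-     --key my-key
-
-   ```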
-
-### Start Functions-worker with broker
-
-Once you have configured the `functions_worker.yml` file, you can start or restart your broker.
-
-And then you can use the following command to verify if the `functions-worker` is running well.
-
-```bash
-
-curl <worker-hostname>:8080/admin/v2/worker/cluster
-
-```
-
-After entering the command above, a list of active functions workers in the cluster is returned. The output is similar to the following.
-
-```json
-
-[{"workerId":"<worker-id>","workerHostname":"<worker-hostname>","port":8080}]
-
-```
-
-## Run Functions-worker separately
-
-This section illustrates how to run `functions-worker` as a separate process on separate machines.
-
-![assets/functions-worker-separated.png](/assets/functions-worker-separated.png)
-
-:::note
-
-In this mode, make sure `functionsWorkerEnabled` is set to `false`, so you won't start the `functions-worker` with brokers by mistake. Also, while accessing the `functions-worker` to manage any of the functions, the `pulsar-admin` CLI tool or any of the clients should use the `workerHostname` and `workerPort` that you set in [Worker parameters](#worker-parameters) to generate an `--admin-url`.
-
-:::
-
-### Configure Functions-worker to run separately
-
-To run the functions-worker separately, you have to configure the following parameters.
-
-#### Worker parameters
-
-- `workerId`: A string that identifies a worker machine; it must be unique across clusters.
-- `workerHostname`: The hostname of the worker machine.
-- `workerPort`: The port that the worker server listens on. Keep it as the default if you don't need to customize it.
-- `workerPortTls`: The TLS port that the worker server listens on. Keep it as the default if you don't need to customize it.
-
-#### Function package parameter
-
-- `numFunctionPackageReplicas`: The number of replicas used to store function packages. The default value is `1`.
-
-#### Function metadata parameter
-
-- `pulsarServiceUrl`: The Pulsar service URL for your broker cluster.
-- `pulsarWebServiceUrl`: The Pulsar web service URL for your broker cluster.
-- `pulsarFunctionsCluster`: Set the value to your Pulsar cluster name (same as the `clusterName` setting in the broker configuration).
-
-If authentication is enabled for your broker cluster, you *should* configure the authentication plugin and parameters for the functions worker to communicate with the brokers.
-
-- `brokerClientAuthenticationEnabled`: Whether to enable the broker client authentication used by functions workers to talk to brokers.
-- `clientAuthenticationPlugin`: The authentication plugin to be used by the Pulsar client in the worker service.
-- `clientAuthenticationParameters`: The authentication parameters to be used by the Pulsar client in the worker service.
-
-#### Security settings
-
-If you want to enable security on functions workers, you *should*:
-- [Enable TLS transport encryption](#enable-tls-transport-encryption)
-- [Enable Authentication Provider](#enable-authentication-provider)
-- [Enable Authorization Provider](#enable-authorization-provider)
-- [Enable End-to-End Encryption](#enable-end-to-end-encryption)
-
-##### Enable TLS transport encryption
-
-To enable TLS transport encryption, configure the following settings.
-
-```
-
-useTLS: true
-pulsarServiceUrl: pulsar+ssl://localhost:6651/
-pulsarWebServiceUrl: https://localhost:8443
-
-tlsEnabled: true
-tlsCertificateFilePath: /path/to/functions-worker.cert.pem
-tlsKeyFilePath: /path/to/functions-worker.key-pk8.pem
-tlsTrustCertsFilePath: /path/to/ca.cert.pem
-
-# The path to trusted certificates used by the Pulsar client to authenticate with Pulsar brokers
-brokerClientTrustCertsFilePath: /path/to/ca.cert.pem
-
-```
-
-For details on TLS encryption, refer to [Transport Encryption using TLS](security-tls-transport.md).
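-
-Once the worker is restarted with these settings, you can confirm that the TLS port presents the expected certificate chain using the standard `openssl` tool. A sketch; the host and port follow the example above:
-
-```
-
-openssl s_client -connect localhost:8443 -CAfile /path/to/ca.cert.pem </dev/null
-
-```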
-
-##### Enable Authentication Provider
-
-To enable authentication on the functions worker, you need to configure the following settings.
-
-:::note
-
-Substitute the *providers list* with the providers you want to enable.
-
-:::
-
-```
-
-authenticationEnabled: true
-authenticationProviders: [ provider1, provider2 ]
-
-```
-
-For the *TLS Authentication* provider, follow the example below to add the necessary settings.
-See [TLS Authentication](security-tls-authentication.md) for more details.
-
-```
-
-brokerClientAuthenticationPlugin: org.apache.pulsar.client.impl.auth.AuthenticationTls
-brokerClientAuthenticationParameters: tlsCertFile:/path/to/admin.cert.pem,tlsKeyFile:/path/to/admin.key-pk8.pem
-
-authenticationEnabled: true
-authenticationProviders: ['org.apache.pulsar.broker.authentication.AuthenticationProviderTls']
-
-```
-
-For the *SASL Authentication* provider, add `saslJaasClientAllowedIds` and `saslJaasBrokerSectionName`
-under `properties` if needed.
-
-```
-
-properties:
-  saslJaasClientAllowedIds: .*pulsar.*
-  saslJaasBrokerSectionName: Broker
-
-```
-
-For the *Token Authentication* provider, add the necessary settings under `properties` if needed.
-See [Token Authentication](security-jwt.md) for more details.
-Note: key files must be DER-encoded.
-
-```
-
-properties:
-  tokenSecretKey: file://my/secret.key
-  # If using public/private keys
-  # tokenPublicKey: file:///path/to/public.key
-
-```
-
-##### Enable Authorization Provider
-
-To enable authorization on the functions worker, you need to configure `authorizationEnabled`, `authorizationProvider` and `configurationStoreServers`. The authorization provider connects to `configurationStoreServers` to receive namespace policies.
-
-```yaml
-
-authorizationEnabled: true
-authorizationProvider: org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider
-configurationStoreServers: <configuration-store-servers>
-
-```
-
-You should also configure a list of superuser roles. The superuser roles are able to access any admin API. The following is a configuration example.
-
-```yaml
-
-superUserRoles:
-  - role1
-  - role2
-  - role3
-
-```
-
-##### Enable End-to-End Encryption
-
-You can use the public and private key pair that the application configures to perform encryption. Only consumers with a valid key can decrypt the encrypted messages.
-
-To enable end-to-end encryption on the functions worker, specify `--producer-config` on the command line. For more information, refer to [here](security-encryption.md).
-
-The relevant `CryptoConfig` configuration is included in `ProducerConfig`. The configurable fields of `CryptoConfig` are as follows:
-
-```text
-
-public class CryptoConfig {
-    private String cryptoKeyReaderClassName;
-    private Map<String, Object> cryptoKeyReaderConfig;
-
-    private String[] encryptionKeys;
-    private ProducerCryptoFailureAction producerCryptoFailureAction;
-
-    private ConsumerCryptoFailureAction consumerCryptoFailureAction;
-}
-
-```
-
-- `producerCryptoFailureAction`: defines the action to take if the producer fails to encrypt data; one of `FAIL`, `SEND`.
-- `consumerCryptoFailureAction`: defines the action to take if the consumer fails to decrypt data; one of `FAIL`, `DISCARD`, `CONSUME`.
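-
-For reference, the value passed via `--producer-config` is a JSON document whose field names mirror the class above. A sketch; the `cryptoConfig` wrapper key, the class name, and the key names are assumptions for illustration:
-
-```json
-
-{
-  "cryptoConfig": {
-    "cryptoKeyReaderClassName": "com.example.MyCryptoKeyReader",
-    "cryptoKeyReaderConfig": { "publicKeyPath": "/path/to/public.key" },
-    "encryptionKeys": ["my-app-key"],
-    "producerCryptoFailureAction": "FAIL"
-  }
-}
-
-```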
-
-#### BookKeeper Authentication
-
-If authentication is enabled on the BookKeeper cluster, you need to configure the following BookKeeper authentication settings:
-
-- `bookkeeperClientAuthenticationPlugin`: the plugin name for BookKeeper client authentication.
-- `bookkeeperClientAuthenticationParametersName`: the plugin parameters name for BookKeeper client authentication.
-- `bookkeeperClientAuthenticationParameters`: the plugin parameters for BookKeeper client authentication.
-
-### Start Functions-worker
-
-Once you have finished configuring the `functions_worker.yml` configuration file, you can start a `functions-worker` in the background by using [nohup](https://en.wikipedia.org/wiki/Nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
-
-```bash
-
-bin/pulsar-daemon start functions-worker
-
-```
-
-You can also start the `functions-worker` in the foreground by using the `pulsar` CLI tool:
-
-```bash
-
-bin/pulsar functions-worker
-
-```
-
-### Configure Proxies for Functions-workers
-
-When you are running `functions-worker` in a separate cluster, the admin rest endpoints are split between two clusters. The `functions`, `function-worker`, `source` and `sink` endpoints are now served
-by the `functions-worker` cluster, while all the other remaining endpoints are served by the broker cluster.
-Hence you need to configure your `pulsar-admin` to use the right service URL accordingly.
-
-In order to address this inconvenience, you can start a proxy cluster for routing the admin rest requests accordingly. Hence, you will have one central entry point for your admin service.
-
-If you already have a proxy cluster, continue reading. If you haven't set up a proxy cluster before, you can follow the [instructions](http://pulsar.apache.org/docs/en/administration-proxy/) to
-start proxies.
-
-![assets/functions-worker-separated.png](/assets/functions-worker-separated-proxy.png)
-
-To enable routing of functions-related admin requests to the `functions-worker` in a proxy, you can edit the `proxy.conf` file to modify the following settings:
-
-```conf
-
-functionWorkerWebServiceURL=<pulsar-functions-worker-web-service-url>
-functionWorkerWebServiceURLTLS=<pulsar-functions-worker-web-service-tls-url>
-
-```
-
-## Compare the Run-with-Broker and Run-separately modes
-
-As described above, you can run the functions-worker with brokers, or run it separately. It is more convenient to run functions-workers along with brokers; however, running functions-workers in a separate cluster provides better resource isolation for running functions in `Process` or `Thread` mode.
-
-To determine which mode fits your case, refer to the following guidelines.
-
-Use the `Run-with-Broker` mode in the following cases:
-- a) if resource isolation is not required when running functions in `Process` or `Thread` mode;
-- b) if you configure the functions-worker to run functions on Kubernetes (where the resource isolation problem is addressed by Kubernetes).
-
-Use the `Run-separately` mode in the following cases:
-- a) you don't have a Kubernetes cluster;
-- b) if you want to run functions and brokers separately.
-
-## Troubleshooting
-
-**Error message: Namespace missing local cluster name in clusters list**
-
-```
-
-Failed to get partitioned topic metadata: org.apache.pulsar.client.api.PulsarClientException$BrokerMetadataException: Namespace missing local cluster name in clusters list: local_cluster=xyz ns=public/functions clusters=[standalone]
-
-```
-
-This error message appears when either of the following cases occurs:
-- a) a broker is started with `functionsWorkerEnabled=true`, but `pulsarFunctionsCluster` is not set to the correct cluster in the `conf/functions_worker.yml` file;
-- b) setting up a geo-replicated Pulsar cluster with `functionsWorkerEnabled=true`, where the brokers in one cluster run well, but the brokers in the other cluster do not.
-
-**Workaround**
-
-If either of these cases happens, follow the instructions below to fix the problem:
-
-1. Disable the Functions Worker by setting `functionsWorkerEnabled=false`, and restart the brokers.
-
-2. Get the current clusters list of the `public/functions` namespace.
-
-```bash
-
-bin/pulsar-admin namespaces get-clusters public/functions
-
-```
-
-3. Check if the cluster is in the clusters list. If the cluster is not in the list, add it and update the clusters list.
-
-```bash
-
-bin/pulsar-admin namespaces set-clusters --clusters <existing-clusters>,<new-cluster> public/functions
-
-```
-
-4. After setting the cluster successfully, enable the functions worker by setting `functionsWorkerEnabled=true`.
-
-5. Set the correct cluster name in `pulsarFunctionsCluster` in the `conf/functions_worker.yml` file, and restart the brokers.
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/getting-started-concepts-and-architecture.md b/site2/website/versioned_docs/version-2.8.2-deprecated/getting-started-concepts-and-architecture.md
deleted file mode 100644
index fe9c3fbc553b2c..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/getting-started-concepts-and-architecture.md
+++ /dev/null
@@ -1,16 +0,0 @@
----
-id: concepts-architecture
-title: Pulsar concepts and architecture
-sidebar_label: "Concepts and architecture"
-original_id: concepts-architecture
----
-
-
-
-
-
-
-
-
-
-
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/getting-started-docker.md b/site2/website/versioned_docs/version-2.8.2-deprecated/getting-started-docker.md
deleted file mode 100644
index 4f20971d75330c..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/getting-started-docker.md
+++ /dev/null
@@ -1,176 +0,0 @@
----
-id: getting-started-docker
-title: Set up a standalone Pulsar in Docker
-sidebar_label: "Run Pulsar in Docker"
-original_id: getting-started-docker
----
-
-For local development and testing, you can run Pulsar in standalone
-mode on your own machine within a Docker container.
-
-If you have not installed Docker, download the [Community edition](https://www.docker.com/community-edition)
-and follow the instructions for your OS.
-
-## Start Pulsar in Docker
-
-* For MacOS, Linux, and Windows:
-
-  ```shell
-
-  $ docker run -it \
-  -p 6650:6650 \
-  -p 8080:8080 \
-  --mount source=pulsardata,target=/pulsar/data \
-  --mount source=pulsarconf,target=/pulsar/conf \
-  apachepulsar/pulsar:@pulsar:version@ \
-  bin/pulsar standalone
-
-  ```
-
-A few things to note about this command:
- * The data, metadata, and configuration are persisted on Docker volumes in order to not start "fresh" every
-time the container is restarted. For details on the volumes, you can use `docker volume inspect <source-name>`.
- * For Docker on Windows, make sure to configure it to use Linux containers.
-
-If you start Pulsar successfully, you will see `INFO`-level log messages like this:
-
-```
-
-2017-08-09 22:34:04,030 - INFO - [main:WebService@213] - Web Service started at http://127.0.0.1:8080
-2017-08-09 22:34:04,038 - INFO - [main:PulsarService@335] - messaging service is ready, bootstrap service on port=8080, broker url=pulsar://127.0.0.1:6650, cluster=standalone, configs=org.apache.pulsar.broker.ServiceConfiguration@4db60246
-...
-
-```
-
-:::tip
-
-When you start a local standalone cluster, a `public/default` namespace is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics).
- -::: - -## Use Pulsar in Docker - -Pulsar offers client libraries for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python.md) -and [C++](client-libraries-cpp.md). If you're running a local standalone cluster, you can -use one of these root URLs to interact with your cluster: - -* `pulsar://localhost:6650` -* `http://localhost:8080` - -The following example will guide you get started with Pulsar quickly by using the [Python](client-libraries-python.md) -client API. - -Install the Pulsar Python client library directly from [PyPI](https://pypi.org/project/pulsar-client/): - -```shell - -$ pip install pulsar-client - -``` - -### Consume a message - -Create a consumer and subscribe to the topic: - -```python - -import pulsar - -client = pulsar.Client('pulsar://localhost:6650') -consumer = client.subscribe('my-topic', - subscription_name='my-sub') - -while True: - msg = consumer.receive() - print("Received message: '%s'" % msg.data()) - consumer.acknowledge(msg) - -client.close() - -``` - -### Produce a message - -Now start a producer to send some test messages: - -```python - -import pulsar - -client = pulsar.Client('pulsar://localhost:6650') -producer = client.create_producer('my-topic') - -for i in range(10): - producer.send(('hello-pulsar-%d' % i).encode('utf-8')) - -client.close() - -``` - -## Get the topic statistics - -In Pulsar, you can use REST, Java, or command-line tools to control every aspect of the system. -For details on APIs, refer to [Admin API Overview](admin-api-overview.md). - -In the simplest example, you can use curl to probe the stats for a particular topic: - -```shell - -$ curl http://localhost:8080/admin/v2/persistent/public/default/my-topic/stats | python -m json.tool - -``` - -The output is something like this: - -```json - -{ - "averageMsgSize": 0.0, - "msgRateIn": 0.0, - "msgRateOut": 0.0, - "msgThroughputIn": 0.0, - "msgThroughputOut": 0.0, - "publishers": [ - { - "address": "/172.17.0.1:35048", - "averageMsgSize": 0.0, - "clientVersion": "1.19.0-incubating", - "connectedSince": "2017-08-09 20:59:34.621+0000", - "msgRateIn": 0.0, - "msgThroughputIn": 0.0, - "producerId": 0, - "producerName": "standalone-0-1" - } - ], - "replication": {}, - "storageSize": 16, - "subscriptions": { - "my-sub": { - "blockedSubscriptionOnUnackedMsgs": false, - "consumers": [ - { - "address": "/172.17.0.1:35064", - "availablePermits": 996, - "blockedConsumerOnUnackedMsgs": false, - "clientVersion": "1.19.0-incubating", - "connectedSince": "2017-08-09 21:05:39.222+0000", - "consumerName": "166111", - "msgRateOut": 0.0, - "msgRateRedeliver": 0.0, - "msgThroughputOut": 0.0, - "unackedMessages": 0 - } - ], - "msgBacklog": 0, - "msgRateExpired": 0.0, - "msgRateOut": 0.0, - "msgRateRedeliver": 0.0, - "msgThroughputOut": 0.0, - "type": "Exclusive", - "unackedMessages": 0 - } - } -} - -``` - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/getting-started-helm.md b/site2/website/versioned_docs/version-2.8.2-deprecated/getting-started-helm.md deleted file mode 100644 index 440087c275c053..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/getting-started-helm.md +++ /dev/null @@ -1,441 +0,0 @@ ---- -id: getting-started-helm -title: Get started in Kubernetes -sidebar_label: "Run Pulsar in Kubernetes" -original_id: getting-started-helm ---- - -This section guides you through every step of installing and running Apache Pulsar with Helm on Kubernetes quickly, including the following sections: - -- Install the 
Apache Pulsar on Kubernetes using Helm -- Start and stop Apache Pulsar -- Create topics using `pulsar-admin` -- Produce and consume messages using Pulsar clients -- Monitor Apache Pulsar status with Prometheus and Grafana - -For deploying a Pulsar cluster for production usage, read the documentation on [how to configure and install a Pulsar Helm chart](helm-deploy.md). - -## Prerequisite - -- Kubernetes server 1.14.0+ -- kubectl 1.14.0+ -- Helm 3.0+ - -:::tip - -For the following steps, step 2 and step 3 are for **developers** and step 4 and step 5 are for **administrators**. - -::: - -## Step 0: Prepare a Kubernetes cluster - -Before installing a Pulsar Helm chart, you have to create a Kubernetes cluster. You can follow [the instructions](helm-prepare.md) to prepare a Kubernetes cluster. - -We use [Minikube](https://minikube.sigs.k8s.io/docs/start/) in this quick start guide. To prepare a Kubernetes cluster, follow these steps: - -1. Create a Kubernetes cluster on Minikube. - - ```bash - - minikube start --memory=8192 --cpus=4 --kubernetes-version= - - ``` - - The `` can be any [Kubernetes version supported by your Minikube installation](https://minikube.sigs.k8s.io/docs/reference/configuration/kubernetes/), such as `v1.16.1`. - -2. Set `kubectl` to use Minikube. - - ```bash - - kubectl config use-context minikube - - ``` - -3. To use the [Kubernetes Dashboard](https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/) with the local Kubernetes cluster on Minikube, enter the command below: - - ```bash - - minikube dashboard - - ``` - - The command automatically triggers opening a webpage in your browser. - -## Step 1: Install Pulsar Helm chart - -0. Add Pulsar charts repo. - - ```bash - - helm repo add apache https://pulsar.apache.org/charts - - ``` - - ```bash - - helm repo update - - ``` - -1. Clone the Pulsar Helm chart repository. - - ```bash - - git clone https://github.com/apache/pulsar-helm-chart - cd pulsar-helm-chart - - ``` - -2. Run the script `prepare_helm_release.sh` to create secrets required for installing the Apache Pulsar Helm chart. The username `pulsar` and password `pulsar` are used for logging into the Grafana dashboard and Pulsar Manager. - - > **NOTE** - > When running the script, you can use `-n` to specify the Kubernetes namespace where the Pulsar Helm chart is installed, `-k` to define the Pulsar Helm release name, and `-c` to create the Kubernetes namespace. For more information about the script, run `./scripts/pulsar/prepare_helm_release.sh --help`. - - ```bash - - ./scripts/pulsar/prepare_helm_release.sh \ - -n pulsar \ - -k pulsar-mini \ - -c - - ``` - -3. Use the Pulsar Helm chart to install a Pulsar cluster to Kubernetes. - - > **NOTE** - > You need to specify `--set initialize=true` when installing Pulsar the first time. This command installs and starts Apache Pulsar. - - ```bash - - helm install \ - --values examples/values-minikube.yaml \ - --set initialize=true \ - --namespace pulsar \ - pulsar-mini apache/pulsar - - ``` - -4. Check the status of all pods. - - ```bash - - kubectl get pods -n pulsar - - ``` - - If all pods start up successfully, you can see that the `STATUS` is changed to `Running` or `Completed`. 
- - **Output** - - ```bash - - NAME READY STATUS RESTARTS AGE - pulsar-mini-bookie-0 1/1 Running 0 9m27s - pulsar-mini-bookie-init-5gphs 0/1 Completed 0 9m27s - pulsar-mini-broker-0 1/1 Running 0 9m27s - pulsar-mini-grafana-6b7bcc64c7-4tkxd 1/1 Running 0 9m27s - pulsar-mini-prometheus-5fcf5dd84c-w8mgz 1/1 Running 0 9m27s - pulsar-mini-proxy-0 1/1 Running 0 9m27s - pulsar-mini-pulsar-init-t7cqt 0/1 Completed 0 9m27s - pulsar-mini-pulsar-manager-9bcbb4d9f-htpcs 1/1 Running 0 9m27s - pulsar-mini-toolset-0 1/1 Running 0 9m27s - pulsar-mini-zookeeper-0 1/1 Running 0 9m27s - - ``` - -5. Check the status of all services in the namespace `pulsar`. - - ```bash - - kubectl get services -n pulsar - - ``` - - **Output** - - ```bash - - NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - pulsar-mini-bookie ClusterIP None 3181/TCP,8000/TCP 11m - pulsar-mini-broker ClusterIP None 8080/TCP,6650/TCP 11m - pulsar-mini-grafana LoadBalancer 10.106.141.246 3000:31905/TCP 11m - pulsar-mini-prometheus ClusterIP None 9090/TCP 11m - pulsar-mini-proxy LoadBalancer 10.97.240.109 80:32305/TCP,6650:31816/TCP 11m - pulsar-mini-pulsar-manager LoadBalancer 10.103.192.175 9527:30190/TCP 11m - pulsar-mini-toolset ClusterIP None 11m - pulsar-mini-zookeeper ClusterIP None 2888/TCP,3888/TCP,2181/TCP 11m - - ``` - -## Step 2: Use pulsar-admin to create Pulsar tenants/namespaces/topics - -`pulsar-admin` is the CLI (command-Line Interface) tool for Pulsar. In this step, you can use `pulsar-admin` to create resources, including tenants, namespaces, and topics. - -1. Enter the `toolset` container. - - ```bash - - kubectl exec -it -n pulsar pulsar-mini-toolset-0 -- /bin/bash - - ``` - -2. In the `toolset` container, create a tenant named `apache`. - - ```bash - - bin/pulsar-admin tenants create apache - - ``` - - Then you can list the tenants to see if the tenant is created successfully. - - ```bash - - bin/pulsar-admin tenants list - - ``` - - You should see a similar output as below. The tenant `apache` has been successfully created. - - ```bash - - "apache" - "public" - "pulsar" - - ``` - -3. In the `toolset` container, create a namespace named `pulsar` in the tenant `apache`. - - ```bash - - bin/pulsar-admin namespaces create apache/pulsar - - ``` - - Then you can list the namespaces of tenant `apache` to see if the namespace is created successfully. - - ```bash - - bin/pulsar-admin namespaces list apache - - ``` - - You should see a similar output as below. The namespace `apache/pulsar` has been successfully created. - - ```bash - - "apache/pulsar" - - ``` - -4. In the `toolset` container, create a topic `test-topic` with `4` partitions in the namespace `apache/pulsar`. - - ```bash - - bin/pulsar-admin topics create-partitioned-topic apache/pulsar/test-topic -p 4 - - ``` - -5. In the `toolset` container, list all the partitioned topics in the namespace `apache/pulsar`. - - ```bash - - bin/pulsar-admin topics list-partitioned-topics apache/pulsar - - ``` - - Then you can see all the partitioned topics in the namespace `apache/pulsar`. - - ```bash - - "persistent://apache/pulsar/test-topic" - - ``` - -## Step 3: Use Pulsar client to produce and consume messages - -You can use the Pulsar client to create producers and consumers to produce and consume messages. - -By default, the Pulsar Helm chart exposes the Pulsar cluster through a Kubernetes `LoadBalancer`. In Minikube, you can use the following command to check the proxy service. 
- -```bash - -kubectl get services -n pulsar | grep pulsar-mini-proxy - -``` - -You will see a similar output as below. - -```bash - -pulsar-mini-proxy LoadBalancer 10.97.240.109 80:32305/TCP,6650:31816/TCP 28m - -``` - -This output tells what are the node ports that Pulsar cluster's binary port and HTTP port are mapped to. The port after `80:` is the HTTP port while the port after `6650:` is the binary port. - -Then you can find the IP address and exposed ports of your Minikube server by running the following command. - -```bash - -minikube service pulsar-mini-proxy -n pulsar - -``` - -**Output** - -```bash - -|-----------|-------------------|-------------|-------------------------| -| NAMESPACE | NAME | TARGET PORT | URL | -|-----------|-------------------|-------------|-------------------------| -| pulsar | pulsar-mini-proxy | http/80 | http://172.17.0.4:32305 | -| | | pulsar/6650 | http://172.17.0.4:31816 | -|-----------|-------------------|-------------|-------------------------| -🏃 Starting tunnel for service pulsar-mini-proxy. -|-----------|-------------------|-------------|------------------------| -| NAMESPACE | NAME | TARGET PORT | URL | -|-----------|-------------------|-------------|------------------------| -| pulsar | pulsar-mini-proxy | | http://127.0.0.1:61853 | -| | | | http://127.0.0.1:61854 | -|-----------|-------------------|-------------|------------------------| - -``` - -At this point, you can get the service URLs to connect to your Pulsar client. Here are URL examples: - -``` - -webServiceUrl=http://127.0.0.1:61853/ -brokerServiceUrl=pulsar://127.0.0.1:61854/ - -``` - -Then you can proceed with the following steps: - -1. Download the Apache Pulsar tarball from the [downloads page](https://pulsar.apache.org/download/). - -2. Decompress the tarball based on your download file. - - ```bash - - tar -xf .tar.gz - - ``` - -3. Expose `PULSAR_HOME`. - - (1) Enter the directory of the decompressed download file. - - (2) Expose `PULSAR_HOME` as the environment variable. - - ```bash - - export PULSAR_HOME=$(pwd) - - ``` - -4. Configure the Pulsar client. - - In the `${PULSAR_HOME}/conf/client.conf` file, replace `webServiceUrl` and `brokerServiceUrl` with the service URLs you get from the above steps. - -5. Create a subscription to consume messages from `apache/pulsar/test-topic`. - - ```bash - - bin/pulsar-client consume -s sub apache/pulsar/test-topic -n 0 - - ``` - -6. Open a new terminal. In the new terminal, create a producer and send 10 messages to the `test-topic` topic. - - ```bash - - bin/pulsar-client produce apache/pulsar/test-topic -m "---------hello apache pulsar-------" -n 10 - - ``` - -7. Verify the results. - - - From the producer side - - **Output** - - The messages have been produced successfully. - - ```bash - - 18:15:15.489 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 10 messages successfully produced - - ``` - - - From the consumer side - - **Output** - - At the same time, you can receive the messages as below. 
- - ```bash - - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - - ``` - -## Step 4: Use Pulsar Manager to manage the cluster - -[Pulsar Manager](administration-pulsar-manager.md) is a web-based GUI management tool for managing and monitoring Pulsar. - -1. By default, the `Pulsar Manager` is exposed as a separate `LoadBalancer`. You can open the Pulsar Manager UI using the following command: - - ```bash - - minikube service -n pulsar pulsar-mini-pulsar-manager - - ``` - -2. The Pulsar Manager UI will be open in your browser. You can use the username `pulsar` and password `pulsar` to log into Pulsar Manager. - -3. In Pulsar Manager UI, you can create an environment. - - - Click `New Environment` button in the top-left corner. - - Type `pulsar-mini` for the field `Environment Name` in the popup window. - - Type `http://pulsar-mini-broker:8080` for the field `Service URL` in the popup window. - - Click `Confirm` button in the popup window. - -4. After successfully creating an environment, you are redirected to the `tenants` page of that environment. Then you can create `tenants`, `namespaces` and `topics` using the Pulsar Manager. - -## Step 5: Use Prometheus and Grafana to monitor cluster - -Grafana is an open-source visualization tool, which can be used for visualizing time series data into dashboards. - -1. By default, the Grafana is exposed as a separate `LoadBalancer`. You can open the Grafana UI using the following command: - - ```bash - - minikube service pulsar-mini-grafana -n pulsar - - ``` - -2. The Grafana UI is open in your browser. You can use the username `pulsar` and password `pulsar` to log into the Grafana Dashboard. - -3. You can view dashboards for different components of a Pulsar cluster. diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/getting-started-pulsar.md b/site2/website/versioned_docs/version-2.8.2-deprecated/getting-started-pulsar.md deleted file mode 100644 index 752590f57b5585..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/getting-started-pulsar.md +++ /dev/null @@ -1,72 +0,0 @@ ---- -id: pulsar-2.0 -title: Pulsar 2.0 -sidebar_label: "Pulsar 2.0" -original_id: pulsar-2.0 ---- - -Pulsar 2.0 is a major new release for Pulsar that brings some bold changes to the platform, including [simplified topic names](#topic-names), the addition of the [Pulsar Functions](functions-overview.md) feature, some terminology changes, and more. - -## New features in Pulsar 2.0 - -Feature | Description -:-------|:----------- -[Pulsar Functions](functions-overview.md) | A lightweight compute option for Pulsar - -## Major changes - -There are a few major changes that you should be aware of, as they may significantly impact your day-to-day usage. - -### Properties versus tenants - -Previously, Pulsar had a concept of properties. A property is essentially the exact same thing as a tenant, so the "property" terminology has been removed in version 2.0. 
The [`pulsar-admin properties`](reference-pulsar-admin.md#pulsar-admin) command-line interface, for example, has been replaced with the [`pulsar-admin tenants`](reference-pulsar-admin.md#pulsar-admin-tenants) interface. In some cases the properties terminology is still used, but it is now considered deprecated and will be removed entirely in a future release.
-
-### Topic names
-
-Prior to version 2.0, *all* Pulsar topics had the following form:
-
-```http
-
-{persistent|non-persistent}://property/cluster/namespace/topic
-
-```
-
-Several important changes have been made in Pulsar 2.0:
-
-* There is no longer a [cluster component](#no-cluster-component)
-* Properties have been [renamed to tenants](#properties-versus-tenants)
-* You can use a [flexible](#flexible-topic-naming) naming system to shorten many topic names
-* `/` is not allowed in topic names
-
-#### No cluster component
-
-The cluster component has been removed from topic names. Thus, all topic names now have the following form:
-
-```http
-
-{persistent|non-persistent}://tenant/namespace/topic
-
-```
-
-> Existing topics that use the legacy name format will continue to work without any change, and there are no plans to change that.
-
-
-#### Flexible topic naming
-
-All topic names in Pulsar 2.0 internally have the form shown [above](#no-cluster-component), but you can now use shorthand names in many cases (for the sake of simplicity). The flexible naming system stems from the fact that there is now a default topic type, tenant, and namespace:
-
-Topic aspect | Default
-:------------|:-------
-topic type | `persistent`
-tenant | `public`
-namespace | `default`
-
-The table below shows some example topic name translations that use implicit defaults:
-
-Input topic name | Translated topic name
-:----------------|:---------------------
-`my-topic` | `persistent://public/default/my-topic`
-`my-tenant/my-namespace/my-topic` | `persistent://my-tenant/my-namespace/my-topic`
-
-> For [non-persistent topics](concepts-messaging.md#non-persistent-topics) you'll need to continue to specify the entire topic name, as the default-based rules for persistent topic names don't apply. Thus you cannot use a shorthand name like `non-persistent://my-topic` and would need to use `non-persistent://public/default/my-topic` instead.
-
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/getting-started-standalone.md b/site2/website/versioned_docs/version-2.8.2-deprecated/getting-started-standalone.md
deleted file mode 100644
index cea47efd08d4b3..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/getting-started-standalone.md
+++ /dev/null
@@ -1,271 +0,0 @@
----
-id: getting-started-standalone
-title: Set up a standalone Pulsar locally
-sidebar_label: "Run Pulsar locally"
-original_id: getting-started-standalone
----
-
-For local development and testing, you can run Pulsar in standalone mode on your machine. The standalone mode includes a Pulsar broker and the necessary ZooKeeper and BookKeeper components running inside of a single Java Virtual Machine (JVM) process.
-
-> #### Pulsar in production?
-> If you're looking to run a full production Pulsar installation, see the [Deploying a Pulsar instance](deploy-bare-metal.md) guide.
-
-## Install Pulsar standalone
-
-This tutorial guides you through every step of the installation process.
-
-### System requirements
-
-Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install a 64-bit JRE/JDK 8 or later version; JRE/JDK 11 is recommended.
-
      -:::tip
      -
      -By default, Pulsar allocates 2G JVM heap memory to start. It can be changed in `conf/pulsar_env.sh` file under `PULSAR_MEM`. These are extra options passed into the JVM.
      -
      -:::
      -
      -:::note
      -
      -Broker is only supported on 64-bit JVM.
      -
      -:::
      -
      -### Install Pulsar using binary release
      -
      -To get started with Pulsar, download a binary tarball release in one of the following ways:
      -
      -* download from the Apache mirror (Pulsar @pulsar:version@ binary release)
      -
      -* download from the Pulsar [downloads page](pulsar:download_page_url)
      -
      -* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
      -
      -* use [wget](https://www.gnu.org/software/wget):
      -
      -  ```shell
      -
      -  $ wget pulsar:binary_release_url
      -
      -  ```
      -
      -After you download the tarball, untar it and use the `cd` command to navigate to the resulting directory:
      -
      -```bash
      -
      -$ tar xvfz apache-pulsar-@pulsar:version@-bin.tar.gz
      -$ cd apache-pulsar-@pulsar:version@
      -
      -```
      -
      -#### What your package contains
      -
      -The Pulsar binary package initially contains the following directories:
      -
      -Directory | Contains
      -:---------|:--------
      -`bin` | Pulsar's command-line tools, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/).
      -`conf` | Configuration files for Pulsar, including [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more.
      -`examples` | A Java JAR file containing [Pulsar Functions](functions-overview.md) example.
      -`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files used by Pulsar.
      -`licenses` | License files, in the `.txt` form, for various components of the Pulsar [codebase](https://github.com/apache/pulsar).
      -
      -These directories are created once you begin running Pulsar.
      -
      -Directory | Contains
      -:---------|:--------
      -`data` | The data storage directory used by ZooKeeper and BookKeeper.
      -`instances` | Artifacts created for [Pulsar Functions](functions-overview.md).
      -`logs` | Logs created by the installation.
      -
      -:::tip
      -
      -If you want to use builtin connectors and tiered storage offloaders, you can install them according to the following instructions:
      -* [Install builtin connectors (optional)](#install-builtin-connectors-optional)
      -* [Install tiered storage offloaders (optional)](#install-tiered-storage-offloaders-optional)
      -Otherwise, skip this step and perform the next step [Start Pulsar standalone](#start-pulsar-standalone). Pulsar can be successfully installed without installing builtin connectors and tiered storage offloaders.
      -
      -:::
      -
      -### Install builtin connectors (optional)
      -
      -Since the `2.1.0-incubating` release, Pulsar releases a separate binary distribution, containing all the `builtin` connectors.
      -To enable those `builtin` connectors, you can download the connectors tarball release in one of the following ways:
      -
      -* download from the Apache mirror Pulsar IO Connectors @pulsar:version@ release
      -
      -* download from the Pulsar [downloads page](pulsar:download_page_url)
      -
      -* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
      -
      -* use [wget](https://www.gnu.org/software/wget):
      -
      -  ```shell
      -
      -  $ wget pulsar:connector_release_url/{connector}-@pulsar:version@.nar
      -
      -  ```
      -
      -After you download the NAR file, copy the file to the `connectors` directory in the pulsar directory.
-For example, if you download the `pulsar-io-aerospike-@pulsar:version@.nar` connector file, enter the following commands: - -```bash - -$ mkdir connectors -$ mv pulsar-io-aerospike-@pulsar:version@.nar connectors - -$ ls connectors -pulsar-io-aerospike-@pulsar:version@.nar -... - -``` - -:::note - -* If you are running Pulsar in a bare metal cluster, make sure `connectors` tarball is unzipped in every pulsar directory of the broker -(or in every pulsar directory of function-worker if you are running a separate worker cluster for Pulsar Functions). -* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DC/OS](https://dcos.io/)), -you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled [all builtin connectors](io-overview.md#working-with-connectors). - -::: - -### Install tiered storage offloaders (optional) - -:::tip - -Since `2.2.0` release, Pulsar releases a separate binary distribution, containing the tiered storage offloaders. -To enable tiered storage feature, follow the instructions below; otherwise skip this section. - -::: - -To get started with [tiered storage offloaders](concepts-tiered-storage.md), you need to download the offloaders tarball release on every broker node in one of the following ways: - -* download from the Apache mirror Pulsar Tiered Storage Offloaders @pulsar:version@ release - -* download from the Pulsar [downloads page](pulsar:download_page_url) - -* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) - -* use [wget](https://www.gnu.org/software/wget): - - ```shell - - $ wget pulsar:offloader_release_url - - ``` - -After you download the tarball, untar the offloaders package and copy the offloaders as `offloaders` -in the pulsar directory: - -```bash - -$ tar xvfz apache-pulsar-offloaders-@pulsar:version@-bin.tar.gz - -// you will find a directory named `apache-pulsar-offloaders-@pulsar:version@` in the pulsar directory -// then copy the offloaders - -$ mv apache-pulsar-offloaders-@pulsar:version@/offloaders offloaders - -$ ls offloaders -tiered-storage-jcloud-@pulsar:version@.nar - -``` - -For more information on how to configure tiered storage, see [Tiered storage cookbook](cookbooks-tiered-storage.md). - -:::note - -* If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's pulsar directory. -* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DC/OS](https://dcos.io/)), -you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - -::: - -## Start Pulsar standalone - -Once you have an up-to-date local copy of the release, you can start a local cluster using the [`pulsar`](reference-cli-tools.md#pulsar) command, which is stored in the `bin` directory, and specifying that you want to start Pulsar in standalone mode. 
-
      -```bash
      -
      -$ bin/pulsar standalone
      -
      -```
      -
      -If you have started Pulsar successfully, you will see `INFO`-level log messages like this:
      -
      -```bash
      -
      -2017-06-01 14:46:29,192 - INFO - [main:WebSocketService@95] - Configuration Store cache started
      -2017-06-01 14:46:29,192 - INFO - [main:AuthenticationService@61] - Authentication is disabled
      -2017-06-01 14:46:29,192 - INFO - [main:WebSocketService@108] - Pulsar WebSocket Service started
      -
      -```
      -
      -:::tip
      -
      -* The service is running on your terminal, which is under your direct control. If you need to run other commands, open a new terminal window.
      -
      -:::
      -
      -You can also run the service as a background process using the `pulsar-daemon start standalone` command. For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon).
      ->
      -> * By default, there is no encryption, authentication, or authorization configured. Apache Pulsar can be accessed from a remote server without any authorization. Check the [Security Overview](security-overview.md) document to learn how to secure your deployment.
      ->
      -> * When you start a local standalone cluster, a `public/default` [namespace](concepts-messaging.md#namespaces) is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics).
      -
      -## Use Pulsar standalone
      -
      -Pulsar provides a CLI tool called [`pulsar-client`](reference-cli-tools.md#pulsar-client). The pulsar-client tool enables you to consume messages from and produce messages to a Pulsar topic in a running cluster.
      -
      -### Consume a message
      -
      -The following command consumes a message from the `my-topic` topic with the subscription name `first-subscription`:
      -
      -```bash
      -
      -$ bin/pulsar-client consume my-topic -s "first-subscription"
      -
      -```
      -
      -If the message has been successfully consumed, you will see a confirmation like the following in the `pulsar-client` logs:
      -
      -```
      -
      -09:56:55.566 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.MultiTopicsConsumerImpl - [TopicsConsumerFakeTopicNamee2df9] [first-subscription] Success subscribe new topic my-topic in topics consumer, partitions: 4, allTopicPartitionsNumber: 4
      -
      -```
      -
      -:::tip
      -
      -As you may have noticed, we do not explicitly create the `my-topic` topic from which we consume the message. When you consume a message from a topic that does not yet exist, Pulsar creates that topic for you automatically. Producing a message to a topic that does not exist will automatically create that topic for you as well.
      -
      -:::
      -
      -### Produce a message
      -
      -The following command produces a message saying `hello-pulsar` to the `my-topic` topic:
      -
      -```bash
      -
      -$ bin/pulsar-client produce my-topic --messages "hello-pulsar"
      -
      -```
      -
      -If the message has been successfully published to the topic, you will see a confirmation like the following in the `pulsar-client` logs:
      -
      -```
      -
      -13:09:39.356 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully produced
      -
      -```
      -
      -## Stop Pulsar standalone
      -
      -Press `Ctrl+C` to stop a local standalone Pulsar.
      -
      -:::tip
      -
      -If the service runs as a background process using the `pulsar-daemon start standalone` command, then use the `pulsar-daemon stop standalone` command to stop the service.
      -For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon).
-
      -:::
      -
      diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/helm-deploy.md b/site2/website/versioned_docs/version-2.8.2-deprecated/helm-deploy.md
      deleted file mode 100644
      index 93709f7091c1ea..00000000000000
      --- a/site2/website/versioned_docs/version-2.8.2-deprecated/helm-deploy.md
      +++ /dev/null
      @@ -1,434 +0,0 @@
      ----
      -id: helm-deploy
      -title: Deploy Pulsar cluster using Helm
      -sidebar_label: "Deployment"
      -original_id: helm-deploy
      ----
      -
      -Before running `helm install`, you need to decide how to run Pulsar.
      -Options can be specified using Helm's `--set option.name=value` command line option.
      -
      -## Select configuration options
      -
      -In each section, collect the options to combine with the `helm install` command.
      -
      -### Kubernetes namespace
      -
      -By default, the Pulsar Helm chart is installed to a namespace called `pulsar`.
      -
      -```yaml
      -
      -namespace: pulsar
      -
      -```
      -
      -To install the Pulsar Helm chart into a different Kubernetes namespace, you can include this option in the `helm install` command.
      -
      -```bash
      -
      ---set namespace=<different-k8s-namespace>
      -
      -```
      -
      -By default, the Pulsar Helm chart doesn't create the namespace.
      -
      -```yaml
      -
      -namespaceCreate: false
      -
      -```
      -
      -To use the Pulsar Helm chart to create the Kubernetes namespace automatically, you can include this option in the `helm install` command.
      -
      -```bash
      -
      ---set namespaceCreate=true
      -
      -```
      -
      -### Persistence
      -
      -By default, the Pulsar Helm chart creates Volume Claims with the expectation that a dynamic provisioner creates the underlying Persistent Volumes.
      -
      -```yaml
      -
      -volumes:
      -  persistence: true
      -  # configure the components to use local persistent volume
      -  # the local provisioner should be installed prior to enabling local persistent volumes
      -  local_storage: false
      -
      -```
      -
      -To use local persistent volumes as the persistent storage for the Helm release, you can install the [local storage provisioner](#install-local-storage-provisioner) and include the following option in the `helm install` command.
      -
      -```bash
      -
      ---set volumes.local_storage=true
      -
      -```
      -
      -:::note
      -
      -Before installing the production instance of Pulsar, make sure to plan the storage settings to avoid extra storage migration work, because after the initial installation you must edit Kubernetes objects manually if you want to change storage settings.
      -
      -:::
      -
      -The Pulsar Helm chart is designed for production use. To use the Pulsar Helm chart in a development environment (such as Minikube), you can disable persistence by including this option in your `helm install` command.
      -
      -```bash
      -
      ---set volumes.persistence=false
      -
      -```
      -
      -### Affinity
      -
      -By default, `anti-affinity` is enabled to ensure pods of the same component can run on different nodes.
      -
      -```yaml
      -
      -affinity:
      -  anti_affinity: true
      -
      -```
      -
      -To use the Pulsar Helm chart in a development environment (such as Minikube), you can disable `anti-affinity` by including this option in your `helm install` command.
      -
      -```bash
      -
      ---set affinity.anti_affinity=false
      -
      -```
      -
      -### Components
      -
      -The Pulsar Helm chart is designed for production usage. It deploys a production-ready Pulsar cluster, including Pulsar core components and monitoring components.
      -
      -You can customize the components to be deployed by turning on/off individual components.
-
      -```yaml
      -
      -## Components
      -##
      -## Control what components of Apache Pulsar to deploy for the cluster
      -components:
      -  # zookeeper
      -  zookeeper: true
      -  # bookkeeper
      -  bookkeeper: true
      -  # bookkeeper - autorecovery
      -  autorecovery: true
      -  # broker
      -  broker: true
      -  # functions
      -  functions: true
      -  # proxy
      -  proxy: true
      -  # toolset
      -  toolset: true
      -  # pulsar manager
      -  pulsar_manager: true
      -
      -## Monitoring Components
      -##
      -## Control what components of the monitoring stack to deploy for the cluster
      -monitoring:
      -  # monitoring - prometheus
      -  prometheus: true
      -  # monitoring - grafana
      -  grafana: true
      -
      -```
      -
      -### Docker images
      -
      -The Pulsar Helm chart is designed to enable controlled upgrades, so it can configure independent image versions for components. You can customize the image for each component individually.
      -
      -```yaml
      -
      -## Images
      -##
      -## Control what images to use for each component
      -images:
      -  zookeeper:
      -    repository: apachepulsar/pulsar-all
      -    tag: 2.5.0
      -    pullPolicy: IfNotPresent
      -  bookie:
      -    repository: apachepulsar/pulsar-all
      -    tag: 2.5.0
      -    pullPolicy: IfNotPresent
      -  autorecovery:
      -    repository: apachepulsar/pulsar-all
      -    tag: 2.5.0
      -    pullPolicy: IfNotPresent
      -  broker:
      -    repository: apachepulsar/pulsar-all
      -    tag: 2.5.0
      -    pullPolicy: IfNotPresent
      -  proxy:
      -    repository: apachepulsar/pulsar-all
      -    tag: 2.5.0
      -    pullPolicy: IfNotPresent
      -  functions:
      -    repository: apachepulsar/pulsar-all
      -    tag: 2.5.0
      -  prometheus:
      -    repository: prom/prometheus
      -    tag: v1.6.3
      -    pullPolicy: IfNotPresent
      -  grafana:
      -    repository: streamnative/apache-pulsar-grafana-dashboard-k8s
      -    tag: 0.0.4
      -    pullPolicy: IfNotPresent
      -  pulsar_manager:
      -    repository: apachepulsar/pulsar-manager
      -    tag: v0.1.0
      -    pullPolicy: IfNotPresent
      -    hasCommand: false
      -
      -```
      -
      -### TLS
      -
      -The Pulsar Helm chart can be configured to enable TLS (Transport Layer Security) to protect all the traffic between components. Before enabling TLS, you have to provision TLS certificates for the required components.
      -
      -#### Provision TLS certificates using cert-manager
      -
      -To use the `cert-manager` to provision the TLS certificates, you have to install the [cert-manager](#install-cert-manager) before installing the Pulsar Helm chart. After successfully installing the cert-manager, you can set `certs.internal_issuer.enabled` to `true`. Then the Pulsar Helm chart can use the `cert-manager` to generate `selfsigning` TLS certificates for the configured components.
      -
      -```yaml
      -
      -certs:
      -  internal_issuer:
      -    enabled: false
      -    component: internal-cert-issuer
      -    type: selfsigning
      -
      -```
      -
      -You can also customize the generated TLS certificates by configuring the fields as follows.
      -
      -```yaml
      -
      -tls:
      -  # common settings for generating certs
      -  common:
      -    # 90d
      -    duration: 2160h
      -    # 15d
      -    renewBefore: 360h
      -    organization:
      -      - pulsar
      -    keySize: 4096
      -    keyAlgorithm: rsa
      -    keyEncoding: pkcs8
      -
      -```
      -
      -#### Enable TLS
      -
      -After installing the `cert-manager`, you can set `tls.enabled` to `true` to enable TLS encryption for the entire cluster.
      -
      -```yaml
      -
      -tls:
      -  enabled: false
      -
      -```
      -
      -You can also configure whether to enable TLS encryption for each component individually.
-
      -```yaml
      -
      -tls:
      -  # settings for generating certs for proxy
      -  proxy:
      -    enabled: false
      -    cert_name: tls-proxy
      -  # settings for generating certs for broker
      -  broker:
      -    enabled: false
      -    cert_name: tls-broker
      -  # settings for generating certs for bookies
      -  bookie:
      -    enabled: false
      -    cert_name: tls-bookie
      -  # settings for generating certs for zookeeper
      -  zookeeper:
      -    enabled: false
      -    cert_name: tls-zookeeper
      -  # settings for generating certs for recovery
      -  autorecovery:
      -    cert_name: tls-recovery
      -  # settings for generating certs for toolset
      -  toolset:
      -    cert_name: tls-toolset
      -
      -```
      -
      -### Authentication
      -
      -By default, authentication is disabled. You can set `auth.authentication.enabled` to `true` to enable authentication.
      -Currently, the Pulsar Helm chart only supports the JWT authentication provider. You can set `auth.authentication.provider` to `jwt` to use the JWT authentication provider.
      -
      -```yaml
      -
      -# Enable or disable broker authentication and authorization.
      -auth:
      -  authentication:
      -    enabled: false
      -    provider: "jwt"
      -    jwt:
      -      # Enable JWT authentication
      -      # If the token is generated by a secret key, set the usingSecretKey as true.
      -      # If the token is generated by a private key, set the usingSecretKey as false.
      -      usingSecretKey: false
      -  superUsers:
      -    # broker to broker communication
      -    broker: "broker-admin"
      -    # proxy to broker communication
      -    proxy: "proxy-admin"
      -    # pulsar-admin client to broker/proxy communication
      -    client: "admin"
      -
      -```
      -
      -To enable authentication, you can run [prepare helm release](#prepare-the-helm-release) to generate token secret keys and tokens for three super users specified in the `auth.superUsers` field. The generated token keys and super user tokens are uploaded and stored as Kubernetes secrets prefixed with `<pulsar-release-name>-token-`. You can use the following command to find those secrets.
      -
      -```bash
      -
      -kubectl get secrets -n <k8s-namespace>
      -
      -```
      -
      -### Authorization
      -
      -By default, authorization is disabled. Authorization can be enabled only when authentication is enabled.
      -
      -```yaml
      -
      -auth:
      -  authorization:
      -    enabled: false
      -
      -```
      -
      -To enable authorization, you can include this option in the `helm install` command.
      -
      -```bash
      -
      ---set auth.authorization.enabled=true
      -
      -```
      -
      -### CPU and RAM resource requirements
      -
      -By default, the resource requests and the number of replicas for the Pulsar components in the Pulsar Helm chart are adequate for a small production deployment. If you deploy a non-production instance, you can reduce the defaults to fit into a smaller cluster.
      -
      -Once you have all of your configuration options collected, you can install dependent charts before installing the Pulsar Helm chart.
      -
      -## Install dependent charts
      -
      -### Install local storage provisioner
      -
      -To use local persistent volumes as the persistent storage, you need to install a storage provisioner for [local persistent volumes](https://kubernetes.io/blog/2019/04/04/kubernetes-1.14-local-persistent-volumes-ga/).
      -
      -One of the easiest ways to get started is to use the local storage provisioner provided along with the Pulsar Helm chart.
      -
      -```
      -
      -helm repo add streamnative https://charts.streamnative.io
      -helm repo update
      -helm install pulsar-storage-provisioner streamnative/local-storage-provisioner
      -
      -```
      -
      -### Install cert-manager
      -
      -The Pulsar Helm chart uses the [cert-manager](https://github.com/jetstack/cert-manager) to provision and manage TLS certificates automatically. To enable TLS encryption for brokers or proxies, you need to install the cert-manager in advance.
-
      -For details about how to install the cert-manager, follow the [official instructions](https://cert-manager.io/docs/installation/kubernetes/#installing-with-helm).
      -
      -Alternatively, we provide a bash script [install-cert-manager.sh](https://github.com/apache/pulsar-helm-chart/blob/master/scripts/cert-manager/install-cert-manager.sh) to install a cert-manager release to the namespace `cert-manager`.
      -
      -```bash
      -
      -git clone https://github.com/apache/pulsar-helm-chart
      -cd pulsar-helm-chart
      -./scripts/cert-manager/install-cert-manager.sh
      -
      -```
      -
      -## Prepare Helm release
      -
      -Once you have installed all the dependent charts and collected all of your configuration options, you can run [prepare_helm_release.sh](https://github.com/apache/pulsar-helm-chart/blob/master/scripts/pulsar/prepare_helm_release.sh) to prepare the Helm release.
      -
      -```bash
      -
      -git clone https://github.com/apache/pulsar-helm-chart
      -cd pulsar-helm-chart
      -./scripts/pulsar/prepare_helm_release.sh -n <k8s-namespace> -k <pulsar-release-name>
      -
      -```
      -
      -The `prepare_helm_release` creates the following resources:
      -
      -- A Kubernetes namespace for installing the Pulsar release
      -- JWT secret keys and tokens for three super users: `broker-admin`, `proxy-admin`, and `admin`. By default, it generates an asymmetric public/private key pair. You can choose to generate a symmetric secret key by specifying `--symmetric`.
      -  - `proxy-admin` role is used for proxies to communicate to brokers.
      -  - `broker-admin` role is used for inter-broker communications.
      -  - `admin` role is used by the admin tools.
      -
      -## Deploy Pulsar cluster using Helm
      -
      -Once you have finished the following three things, you can install a Helm release.
      -
      -- Collect all of your configuration options.
      -- Install dependent charts.
      -- Prepare the Helm release.
      -
      -In this example, we name our Helm release `pulsar`.
      -
      -```bash
      -
      -helm repo add apache https://pulsar.apache.org/charts
      -helm repo update
      -helm install pulsar apache/pulsar \
      -    --timeout 10m \
      -    --set initialize=true \
      -    --set [your configuration options]
      -
      -```
      -
      -:::note
      -
      -For the first deployment, add the `--set initialize=true` option to initialize bookie and Pulsar cluster metadata.
      -
      -:::
      -
      -You can also use the `--version <installation version>` option if you want to install a specific version of the Pulsar Helm chart.
      -
      -## Monitor deployment
      -
      -A list of installed resources is output once the Pulsar cluster is deployed. This may take 5-10 minutes.
      -
      -The status of the deployment can be checked by running the `helm status pulsar` command, which can also be done while the deployment is taking place if you run the command in another terminal.
      -
      -## Access Pulsar cluster
      -
      -The default values will create a `ClusterIP` for the following resources, which you can use to interact with the cluster.
      -
      -- Proxy: You can use the IP address to produce and consume messages to the installed Pulsar cluster.
      -- Pulsar Manager: You can access the Pulsar Manager UI at `http://<pulsar-manager-ip>:9527`.
      -- Grafana Dashboard: You can access the Grafana dashboard at `http://<grafana-dashboard-ip>:3000`.
-
      -To find the IP addresses of those components, run the following command:
      -
      -```bash
      -
      -kubectl get service -n <k8s-namespace>
      -
      -```
      -
      diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/helm-install.md b/site2/website/versioned_docs/version-2.8.2-deprecated/helm-install.md
      deleted file mode 100644
      index 1f4d5eb69d5ddd..00000000000000
      --- a/site2/website/versioned_docs/version-2.8.2-deprecated/helm-install.md
      +++ /dev/null
      @@ -1,44 +0,0 @@
      ----
      -id: helm-install
      -title: Install Apache Pulsar using Helm
      -sidebar_label: "Install"
      -original_id: helm-install
      ----
      -
      -Install Apache Pulsar on Kubernetes with the official Pulsar Helm chart.
      -
      -## Requirements
      -
      -To deploy Apache Pulsar on Kubernetes, the following are required.
      -
      -- kubectl 1.14 or higher, compatible with your cluster ([+/- 1 minor release from your cluster](https://kubernetes.io/docs/tasks/tools/install-kubectl/#before-you-begin))
      -- Helm v3 (3.0.2 or higher)
      -- A Kubernetes cluster, version 1.14 or higher
      -
      -## Environment setup
      -
      -Before deploying Pulsar, you need to prepare your environment.
      -
      -### Tools
      -
      -Install [`helm`](helm-tools.md) and [`kubectl`](helm-tools.md) on your computer.
      -
      -## Cloud cluster preparation
      -
      -:::note
      -
      -Kubernetes 1.14 or higher is required.
      -
      -:::
      -
      -To create and connect to the Kubernetes cluster, follow the instructions:
      -
      -- [Google Kubernetes Engine](helm-prepare.md#google-kubernetes-engine)
      -
      -## Pulsar deployment
      -
      -Once the environment is set up and the configuration is generated, you can proceed to the [deployment of Pulsar](helm-deploy.md).
      -
      -## Pulsar upgrade
      -
      -To upgrade an existing Kubernetes installation, follow the [upgrade documentation](helm-upgrade.md).
      diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/helm-overview.md b/site2/website/versioned_docs/version-2.8.2-deprecated/helm-overview.md
      deleted file mode 100644
      index 385d535e319b65..00000000000000
      --- a/site2/website/versioned_docs/version-2.8.2-deprecated/helm-overview.md
      +++ /dev/null
      @@ -1,104 +0,0 @@
      ----
      -id: helm-overview
      -title: Apache Pulsar Helm Chart
      -sidebar_label: "Overview"
      -original_id: helm-overview
      ----
      -
      -This is the officially supported Helm chart to install Apache Pulsar on a cloud-native environment. It was enhanced based on StreamNative's [Helm Chart](https://github.com/streamnative/charts).
      -
      -## Introduction
      -
      -The Apache Pulsar Helm chart is one of the most convenient ways to operate Pulsar on Kubernetes. This Pulsar Helm chart contains all the required components to get started and can scale to large deployments.
      -
      -This chart includes all the components for a complete experience, but each part can be configured to be installed separately.
-
      -- Pulsar core components:
      -  - ZooKeeper
      -  - Bookies
      -  - Brokers
      -  - Function workers
      -  - Proxies
      -- Control Center:
      -  - Pulsar Manager
      -  - Prometheus
      -  - Grafana
      -
      -It includes support for:
      -
      -- Security
      -  - Automatically provisioned TLS certificates, using [Jetstack](https://www.jetstack.io/)'s [cert-manager](https://cert-manager.io/docs/)
      -    - self-signed
      -    - [Let's Encrypt](https://letsencrypt.org/)
      -  - TLS Encryption
      -    - Proxy
      -    - Broker
      -    - Toolset
      -    - Bookie
      -    - ZooKeeper
      -  - Authentication
      -    - JWT
      -  - Authorization
      -- Storage
      -  - Non-persistence storage
      -  - Persistence volume
      -  - Local persistent volumes
      -- Functions
      -  - Kubernetes Runtime
      -  - Process Runtime
      -  - Thread Runtime
      -- Operations
      -  - Independent image versions for all components, enabling controlled upgrades
      -
      -## Pulsar Helm chart quick start
      -
      -To get up and running with these charts as fast as possible, in a **non-production** use case, we provide a [quick start guide](getting-started-helm.md) for Proof of Concept (PoC) deployments.
      -
      -This guide walks the user through deploying these charts with default values and features, but *does not* meet production-ready requirements. To deploy these charts into production under sustained load, follow the complete [Installation Guide](helm-install.md).
      -
      -## Troubleshooting
      -
      -We have done our best to make these charts as seamless as possible. Occasionally, issues arise that are outside of our control. We have collected tips and tricks for troubleshooting common issues. Please check them first before raising an [issue](https://github.com/apache/pulsar/issues/new/choose), and feel free to add to them by raising a [Pull Request](https://github.com/apache/pulsar/compare).
      -
      -## Installation
      -
      -The Apache Pulsar Helm chart contains all required dependencies.
      -
      -If you deploy a PoC for testing, we strongly suggest you follow our [Quick Start Guide](getting-started-helm.md) for your first iteration.
      -
      -1. [Preparation](helm-prepare.md)
      -2. [Deployment](helm-deploy.md)
      -
      -## Upgrading
      -
      -Once the Pulsar Helm chart is installed, use `helm upgrade` to apply configuration changes and chart updates.
      -
      -```bash
      -
      -helm repo add apache https://pulsar.apache.org/charts
      -helm repo update
      -helm get values <pulsar-release-name> > pulsar.yaml
      -helm upgrade <pulsar-release-name> apache/pulsar -f pulsar.yaml
      -
      -```
      -
      -For more detailed information, see [Upgrading](helm-upgrade.md).
      -
      -## Uninstallation
      -
      -To uninstall the Pulsar Helm chart, run the following command:
      -
      -```bash
      -
      -helm delete <pulsar-release-name>
      -
      -```
      -
      -For the purposes of continuity, these charts have some Kubernetes objects that cannot be removed when performing `helm delete`.
      -It is recommended to *consciously* remove these items, as they affect re-deployment.
      -
      -* PVCs for stateful data: *consciously* remove these items, as shown in the sketch below.
      -  - ZooKeeper: This is your metadata.
      -  - BookKeeper: This is your data.
      -  - Prometheus: This is your metrics data, which can be safely removed.
      -* Secrets: if the secrets are generated by the [prepare release script](https://github.com/apache/pulsar-helm-chart/blob/master/scripts/pulsar/prepare_helm_release.sh), they contain secret keys and tokens. You can use the [cleanup release script](https://github.com/apache/pulsar-helm-chart/blob/master/scripts/pulsar/cleanup_helm_release.sh) to remove these secrets and tokens as needed.
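-
      -For example, a minimal clean-up sketch with `kubectl` might look like the following, assuming the release was installed into the `pulsar` namespace (the PVC name is a placeholder to replace with your own):
      -
      -```bash
      -
      -# List the PVCs that the release left behind
      -kubectl get pvc -n pulsar
      -
      -# Consciously delete a PVC once you are sure its data is no longer needed
      -kubectl delete pvc <pvc-name> -n pulsar
      -
      -```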
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/helm-prepare.md b/site2/website/versioned_docs/version-2.8.2-deprecated/helm-prepare.md
      deleted file mode 100644
      index 5e9f2f9ef4f680..00000000000000
      --- a/site2/website/versioned_docs/version-2.8.2-deprecated/helm-prepare.md
      +++ /dev/null
      @@ -1,92 +0,0 @@
      ----
      -id: helm-prepare
      -title: Prepare Kubernetes resources
      -sidebar_label: "Prepare"
      -original_id: helm-prepare
      ----
      -
      -For a fully functional Pulsar cluster, you need a few resources before deploying the Apache Pulsar Helm chart. The following provides instructions to prepare the Kubernetes cluster before deploying the Pulsar Helm chart.
      -
      -- [Google Kubernetes Engine](#google-kubernetes-engine)
      -  - [Manual cluster creation](#manual-cluster-creation)
      -  - [Scripted cluster creation](#scripted-cluster-creation)
      -  - [Create cluster with local SSDs](#create-cluster-with-local-ssds)
      -- [Next Steps](#next-steps)
      -
      -## Google Kubernetes Engine
      -
      -To get started more easily, a script is provided to create the cluster automatically. Alternatively, a cluster can be created manually as well.
      -
      -### Manual cluster creation
      -
      -To provision a Kubernetes cluster manually, follow the [GKE instructions](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-cluster).
      -
      -Alternatively, you can use the [instructions](#scripted-cluster-creation) below to provision a GKE cluster as needed.
      -
      -### Scripted cluster creation
      -
      -A [bootstrap script](https://github.com/streamnative/charts/tree/master/scripts/pulsar/gke_bootstrap_script.sh) has been created to automate much of the setup process for users on GCP/GKE.
      -
      -The script can:
      -
      -1. Create a new GKE cluster.
      -2. Allow the cluster to modify DNS (Domain Name Server) records.
      -3. Set up `kubectl`, and connect it to the cluster.
      -
      -Google Cloud SDK is a dependency of this script, so ensure it is [set up correctly](helm-tools.md#connect-to-a-gke-cluster) for the script to work.
      -
      -The script reads various parameters from environment variables and an argument `up` or `down` for bootstrap and clean-up respectively.
      -
      -The following table describes all variables.
      -
      -| **Variable** | **Description** | **Default value** |
      -| ------------ | --------------- | ----------------- |
      -| PROJECT | ID of your GCP project | No default value; it must be set. |
      -| CLUSTER_NAME | Name of the GKE cluster | `pulsar-dev` |
      -| CONFDIR | Configuration directory to store Kubernetes configuration | ${HOME}/.config/streamnative |
      -| INT_NETWORK | IP space to use within this cluster | `default` |
      -| LOCAL_SSD_COUNT | Number of local SSDs | 4 |
      -| MACHINE_TYPE | Type of machine to use for nodes | `n1-standard-4` |
      -| NUM_NODES | Number of nodes to be created in each of the cluster's zones | 4 |
      -| PREEMPTIBLE | Create nodes using preemptible VM instances in the new cluster. | false |
      -| REGION | Compute region for the cluster | `us-east1` |
      -| USE_LOCAL_SSD | Flag to create a cluster with local SSDs | false |
      -| ZONE | Compute zone for the cluster | `us-east1-b` |
      -| ZONE_EXTENSION | The extension (`a`, `b`, `c`) of the zone name of the cluster | `b` |
      -| EXTRA_CREATE_ARGS | Extra arguments passed to create command | |
      -
      -Run the script by passing in your desired parameters.
 It can work with the default parameters except for `PROJECT`, which is required:
      -
      -```bash
      -
      -PROJECT=<gcloud project id> scripts/pulsar/gke_bootstrap_script.sh up
      -
      -```
      -
      -The script can also be used to clean up the created GKE resources.
      -
      -```bash
      -
      -PROJECT=<gcloud project id> scripts/pulsar/gke_bootstrap_script.sh down
      -
      -```
      -
      -#### Create cluster with local SSDs
      -
      -To install the Pulsar Helm chart using local persistent volumes, you need to create a GKE cluster with local SSDs. You can do so by specifying `USE_LOCAL_SSD` to be `true` in the following command to create a Pulsar cluster with local SSDs.
      -
      -```
      -
      -PROJECT=<gcloud project id> USE_LOCAL_SSD=true LOCAL_SSD_COUNT=<local-ssd-count> scripts/pulsar/gke_bootstrap_script.sh up
      -
      -```
      -
      -## Next Steps
      -
      -Continue with the [installation of the chart](helm-deploy.md) once you have the cluster up and running.
      diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/helm-tools.md b/site2/website/versioned_docs/version-2.8.2-deprecated/helm-tools.md
      deleted file mode 100644
      index 6ba89006913b64..00000000000000
      --- a/site2/website/versioned_docs/version-2.8.2-deprecated/helm-tools.md
      +++ /dev/null
      @@ -1,43 +0,0 @@
      ----
      -id: helm-tools
      -title: Required tools for deploying Pulsar Helm Chart
      -sidebar_label: "Required Tools"
      -original_id: helm-tools
      ----
      -
      -Before deploying Pulsar to your Kubernetes cluster, there are some tools you must have installed locally.
      -
      -## kubectl
      -
      -kubectl is the tool that talks to the Kubernetes API. kubectl 1.14 or higher is required and it needs to be compatible with your cluster ([+/- 1 minor release from your cluster](https://kubernetes.io/docs/tasks/tools/install-kubectl/#before-you-begin)).
      -
      -To install kubectl locally, follow the [Kubernetes documentation](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl).
      -
      -The server version of kubectl cannot be obtained until we connect to a cluster.
      -
      -## Helm
      -
      -Helm is the package manager for Kubernetes. The Apache Pulsar Helm Chart is tested and supported with Helm v3.
      -
      -### Get Helm
      -
      -You can get Helm from the project's [releases page](https://github.com/helm/helm/releases), or follow other options under the official documentation of [installing Helm](https://helm.sh/docs/intro/install/).
      -
      -### Next steps
      -
      -Once kubectl and Helm are configured, you can configure your [Kubernetes cluster](helm-prepare.md).
      -
      -## Additional information
      -
      -### Templates
      -
      -Templating in Helm is done through Golang's [text/template](https://golang.org/pkg/text/template/) and [sprig](https://godoc.org/github.com/Masterminds/sprig).
      -
      -For more information about how all the inner workings behave, check these documents:
      -
      -- [Functions and Pipelines](https://helm.sh/docs/chart_template_guide/functions_and_pipelines/)
      -- [Subcharts and Globals](https://helm.sh/docs/chart_template_guide/subcharts_and_globals/)
      -
      -### Tips and tricks
      -
      -For additional information on developing with Helm, check the [tips and tricks section](https://helm.sh/docs/howto/charts_tips_and_tricks/) in the Helm repository.
\ No newline at end of file
      diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/helm-upgrade.md b/site2/website/versioned_docs/version-2.8.2-deprecated/helm-upgrade.md
      deleted file mode 100644
      index 7d671e6bfb3c10..00000000000000
      --- a/site2/website/versioned_docs/version-2.8.2-deprecated/helm-upgrade.md
      +++ /dev/null
      @@ -1,43 +0,0 @@
      ----
      -id: helm-upgrade
      -title: Upgrade Pulsar Helm release
      -sidebar_label: "Upgrade"
      -original_id: helm-upgrade
      ----
      -
      -Before upgrading your Pulsar installation, you need to check the change log corresponding to the specific release you want to upgrade to and look for any release notes that might pertain to the new Pulsar Helm chart version.
      -
      -We also recommend that you provide all values using the `helm upgrade --set key=value` syntax or the `-f values.yaml` option instead of using `--reuse-values`, because some of the current values might be deprecated.
      -
      -:::note
      -
      -You can retrieve your previous `--set` arguments cleanly, with `helm get values <release-name>`. If you direct this into a file (`helm get values <release-name> > pulsar.yaml`), you can safely pass this file through `-f`, namely `helm upgrade <release-name> apache/pulsar -f pulsar.yaml`. This safely replaces the behavior of `--reuse-values`.
      -
      -:::
      -
      -## Steps
      -
      -To upgrade Apache Pulsar to a newer version, follow these steps:
      -
      -1. Check the change log for the specific version you would like to upgrade to.
      -2. Go through [deployment documentation](helm-deploy.md) step by step.
      -3. Extract your previous `--set` arguments with the following command.
      -
      -   ```bash
      -
      -   helm get values <release-name> > pulsar.yaml
      -
      -   ```
      -
      -4. Decide all the values you need to set.
      -5. Perform the upgrade, with all `--set` arguments extracted in step 3.
      -
      -   ```bash
      -
      -   helm upgrade <release-name> apache/pulsar \
      -    --version <new version> \
      -    -f pulsar.yaml \
      -    --set ...
      -
      -   ```
      -
      diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/io-aerospike-sink.md b/site2/website/versioned_docs/version-2.8.2-deprecated/io-aerospike-sink.md
      deleted file mode 100644
      index 63d7338a3ba91c..00000000000000
      --- a/site2/website/versioned_docs/version-2.8.2-deprecated/io-aerospike-sink.md
      +++ /dev/null
      @@ -1,26 +0,0 @@
      ----
      -id: io-aerospike-sink
      -title: Aerospike sink connector
      -sidebar_label: "Aerospike sink connector"
      -original_id: io-aerospike-sink
      ----
      -
      -The Aerospike sink connector pulls messages from Pulsar topics to Aerospike clusters.
      -
      -## Configuration
      -
      -The configuration of the Aerospike sink connector has the following properties.
      -
      -### Property
      -
      -| Name | Type|Required | Default | Description
      -|------|----------|----------|---------|-------------|
      -| `seedHosts` |String| true | No default value| The comma-separated list of one or more Aerospike cluster hosts.

    Each host can be specified as a valid IP address or hostname followed by an optional port number. | -| `keyspace` | String| true |No default value |The Aerospike namespace. | -| `columnName` | String | true| No default value|The Aerospike column name. | -|`userName`|String|false|NULL|The Aerospike username.| -|`password`|String|false|NULL|The Aerospike password.| -| `keySet` | String|false |NULL | The Aerospike set name. | -| `maxConcurrentRequests` |int| false | 100 | The maximum number of concurrent Aerospike transactions that a sink can open. | -| `timeoutMs` | int|false | 100 | This property controls `socketTimeout` and `totalTimeout` for Aerospike transactions. | -| `retries` | int|false | 1 |The maximum number of retries before aborting a write transaction to Aerospike. | diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/io-canal-source.md b/site2/website/versioned_docs/version-2.8.2-deprecated/io-canal-source.md deleted file mode 100644 index d1fd43bb0f74e4..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/io-canal-source.md +++ /dev/null @@ -1,235 +0,0 @@ ---- -id: io-canal-source -title: Canal source connector -sidebar_label: "Canal source connector" -original_id: io-canal-source ---- - -The Canal source connector pulls messages from MySQL to Pulsar topics. - -## Configuration - -The configuration of Canal source connector has the following properties. - -### Property - -| Name | Required | Default | Description | -|------|----------|---------|-------------| -| `username` | true | None | Canal server account (not MySQL).| -| `password` | true | None | Canal server password (not MySQL). | -|`destination`|true|None|Source destination that Canal source connector connects to. -| `singleHostname` | false | None | Canal server address.| -| `singlePort` | false | None | Canal server port.| -| `cluster` | true | false | Whether to enable cluster mode based on Canal server configuration or not.

  1. true: **cluster** mode.<br />
    If set to true, it talks to `zkServers` to figure out the actual database host.

  2. false: **standalone** mode.<br />
    If set to false, it connects to the database specified by `singleHostname` and `singlePort`.
|
      -| `zkServers` | true | None | Address and port of the ZooKeeper that the Canal source connector talks to in order to figure out the actual database host.|
      -| `batchSize` | false | 1000 | Batch size to fetch from Canal. |
      -
      -### Example
      -
      -Before using the Canal connector, you can create a configuration file through one of the following methods.
      -
      -* JSON
      -
      -  ```json
      -
      -  {
      -      "zkServers": "127.0.0.1:2181",
      -      "batchSize": "5120",
      -      "destination": "example",
      -      "username": "",
      -      "password": "",
      -      "cluster": false,
      -      "singleHostname": "127.0.0.1",
      -      "singlePort": "11111"
      -  }
      -
      -  ```
      -
      -* YAML
      -
      -  You can create a YAML file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/canal/src/main/resources/canal-mysql-source-config.yaml) below to your YAML file.
      -
      -  ```yaml
      -
      -  configs:
      -      zkServers: "127.0.0.1:2181"
      -      batchSize: 5120
      -      destination: "example"
      -      username: ""
      -      password: ""
      -      cluster: false
      -      singleHostname: "127.0.0.1"
      -      singlePort: 11111
      -
      -  ```
      -
      -## Usage
      -
      -Here is an example of storing MySQL data using the configuration file above.
      -
      -1. Start a MySQL server.
      -
      -   ```bash
      -
      -   $ docker pull mysql:5.7
      -   $ docker run -d -it --rm --name pulsar-mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=canal -e MYSQL_USER=mysqluser -e MYSQL_PASSWORD=mysqlpw mysql:5.7
      -
      -   ```
      -
      -2. Create a configuration file `mysqld.cnf`.
      -
      -   ```bash
      -
      -   [mysqld]
      -   pid-file = /var/run/mysqld/mysqld.pid
      -   socket = /var/run/mysqld/mysqld.sock
      -   datadir = /var/lib/mysql
      -   #log-error = /var/log/mysql/error.log
      -   # By default we only accept connections from localhost
      -   #bind-address = 127.0.0.1
      -   # Disabling symbolic-links is recommended to prevent assorted security risks
      -   symbolic-links=0
      -   log-bin=mysql-bin
      -   binlog-format=ROW
      -   server_id=1
      -
      -   ```
      -
      -3. Copy the configuration file `mysqld.cnf` to the MySQL server.
      -
      -   ```bash
      -
      -   $ docker cp mysqld.cnf pulsar-mysql:/etc/mysql/mysql.conf.d/
      -
      -   ```
      -
      -4. Restart the MySQL server.
      -
      -   ```bash
      -
      -   $ docker restart pulsar-mysql
      -
      -   ```
      -
      -5. Create a test database in the MySQL server.
      -
      -   ```bash
      -
      -   $ docker exec -it pulsar-mysql /bin/bash
      -   $ mysql -h 127.0.0.1 -uroot -pcanal -e 'create database test;'
      -
      -   ```
      -
      -6. Start a Canal server and connect to the MySQL server.
      -
      -   ```
      -
      -   $ docker pull canal/canal-server:v1.1.2
      -   $ docker run -d -it --link pulsar-mysql -e canal.auto.scan=false -e canal.destinations=test -e canal.instance.master.address=pulsar-mysql:3306 -e canal.instance.dbUsername=root -e canal.instance.dbPassword=canal -e canal.instance.connectionCharset=UTF-8 -e canal.instance.tsdb.enable=true -e canal.instance.gtidon=false --name=pulsar-canal-server -p 8000:8000 -p 2222:2222 -p 11111:11111 -p 11112:11112 -m 4096m canal/canal-server:v1.1.2
      -
      -   ```
      -
      -7. Start Pulsar standalone.
      -
      -   ```bash
      -
      -   $ docker pull apachepulsar/pulsar:2.3.0
      -   $ docker run -d -it --link pulsar-canal-server -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-standalone apachepulsar/pulsar:2.3.0 bin/pulsar standalone
      -
      -   ```
      -
      -8. Modify the configuration file `canal-mysql-source-config.yaml`.
      -
      -   ```yaml
      -
      -   configs:
      -       zkServers: ""
      -       batchSize: "5120"
      -       destination: "test"
      -       username: ""
      -       password: ""
      -       cluster: false
      -       singleHostname: "pulsar-canal-server"
      -       singlePort: "11111"
      -
      -   ```
      -
      -9. Create a consumer file `pulsar-client.py`.
- - ```python - - import pulsar - - client = pulsar.Client('pulsar://localhost:6650') - consumer = client.subscribe('my-topic', - subscription_name='my-sub') - - while True: - msg = consumer.receive() - print("Received message: '%s'" % msg.data()) - consumer.acknowledge(msg) - - client.close() - - ``` - -10. Copy the configuration file `canal-mysql-source-config.yaml` and the consumer file `pulsar-client.py` to Pulsar server. - - ```bash - - $ docker cp canal-mysql-source-config.yaml pulsar-standalone:/pulsar/conf/ - $ docker cp pulsar-client.py pulsar-standalone:/pulsar/ - - ``` - -11. Download a Canal connector and start it. - - ```bash - - $ docker exec -it pulsar-standalone /bin/bash - $ wget https://archive.apache.org/dist/pulsar/pulsar-2.3.0/connectors/pulsar-io-canal-2.3.0.nar -P connectors - $ ./bin/pulsar-admin source localrun \ - --archive ./connectors/pulsar-io-canal-2.3.0.nar \ - --classname org.apache.pulsar.io.canal.CanalStringSource \ - --tenant public \ - --namespace default \ - --name canal \ - --destination-topic-name my-topic \ - --source-config-file /pulsar/conf/canal-mysql-source-config.yaml \ - --parallelism 1 - - ``` - -12. Consume data from MySQL. - - ```bash - - $ docker exec -it pulsar-standalone /bin/bash - $ python pulsar-client.py - - ``` - -13. Open another window to log in MySQL server. - - ```bash - - $ docker exec -it pulsar-mysql /bin/bash - $ mysql -h 127.0.0.1 -uroot -pcanal - - ``` - -14. Create a table, and insert, delete, and update data in MySQL server. - - ```bash - - mysql> use test; - mysql> show tables; - mysql> CREATE TABLE IF NOT EXISTS `test_table`(`test_id` INT UNSIGNED AUTO_INCREMENT,`test_title` VARCHAR(100) NOT NULL, - `test_author` VARCHAR(40) NOT NULL, - `test_date` DATE,PRIMARY KEY ( `test_id` ))ENGINE=InnoDB DEFAULT CHARSET=utf8; - mysql> INSERT INTO test_table (test_title, test_author, test_date) VALUES("a", "b", NOW()); - mysql> UPDATE test_table SET test_title='c' WHERE test_title='a'; - mysql> DELETE FROM test_table WHERE test_title='c'; - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/io-cassandra-sink.md b/site2/website/versioned_docs/version-2.8.2-deprecated/io-cassandra-sink.md deleted file mode 100644 index b27a754f49e182..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/io-cassandra-sink.md +++ /dev/null @@ -1,57 +0,0 @@ ---- -id: io-cassandra-sink -title: Cassandra sink connector -sidebar_label: "Cassandra sink connector" -original_id: io-cassandra-sink ---- - -The Cassandra sink connector pulls messages from Pulsar topics to Cassandra clusters. - -## Configuration - -The configuration of the Cassandra sink connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `roots` | String|true | " " (empty string) | A comma-separated list of Cassandra hosts to connect to.| -| `keyspace` | String|true| " " (empty string)| The key space used for writing pulsar messages.

    **Note: `keyspace` should be created prior to a Cassandra sink.**| -| `keyname` | String|true| " " (empty string)| The key name of the Cassandra column family.

    The column is used for storing Pulsar message keys.

    If a Pulsar message doesn't have any key associated, the message value is used as the key. | -| `columnFamily` | String|true| " " (empty string)| The Cassandra column family name.

    **Note: `columnFamily` should be created prior to a Cassandra sink.**| -| `columnName` | String|true| " " (empty string) | The column name of the Cassandra column family.

    The column is used for storing Pulsar message values. | - -### Example - -Before using the Cassandra sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "roots": "localhost:9042", - "keyspace": "pulsar_test_keyspace", - "columnFamily": "pulsar_test_table", - "keyname": "key", - "columnName": "col" - } - - ``` - -* YAML - - ``` - - configs: - roots: "localhost:9042" - keyspace: "pulsar_test_keyspace" - columnFamily: "pulsar_test_table" - keyname: "key" - columnName: "col" - - ``` - -## Usage - -For more information about **how to connect Pulsar with Cassandra**, see [here](io-quickstart.md#connect-pulsar-to-apache-cassandra). diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/io-cdc-debezium.md b/site2/website/versioned_docs/version-2.8.2-deprecated/io-cdc-debezium.md deleted file mode 100644 index 293ccf2b35e8aa..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/io-cdc-debezium.md +++ /dev/null @@ -1,543 +0,0 @@ ---- -id: io-cdc-debezium -title: Debezium source connector -sidebar_label: "Debezium source connector" -original_id: io-cdc-debezium ---- - -The Debezium source connector pulls messages from MySQL or PostgreSQL -and persists the messages to Pulsar topics. - -## Configuration - -The configuration of Debezium source connector has the following properties. - -| Name | Required | Default | Description | -|------|----------|---------|-------------| -| `task.class` | true | null | A source task class that implemented in Debezium. | -| `database.hostname` | true | null | The address of a database server. | -| `database.port` | true | null | The port number of a database server.| -| `database.user` | true | null | The name of a database user that has the required privileges. | -| `database.password` | true | null | The password for a database user that has the required privileges. | -| `database.server.id` | true | null | The connector’s identifier that must be unique within a database cluster and similar to the database’s server-id configuration property. | -| `database.server.name` | true | null | The logical name of a database server/cluster, which forms a namespace and it is used in all the names of Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the Avro Connector is used. | -| `database.whitelist` | false | null | A list of all databases hosted by this server which is monitored by the connector.

    This is optional, and there are other properties for listing databases and tables to include or exclude from monitoring. | -| `key.converter` | true | null | The converter provided by Kafka Connect to convert record key. | -| `value.converter` | true | null | The converter provided by Kafka Connect to convert record value. | -| `database.history` | true | null | The name of the database history class. | -| `database.history.pulsar.topic` | true | null | The name of the database history topic where the connector writes and recovers DDL statements.

    **Note: this topic is for internal use only and should not be used by consumers.** | -| `database.history.pulsar.service.url` | true | null | Pulsar cluster service URL for history topic. | -| `pulsar.service.url` | true | null | Pulsar cluster service URL for the offset topic used in Debezium. You can use the `bin/pulsar-admin --admin-url http://pulsar:8080 sources localrun --source-config-file configs/pg-pulsar-config.yaml` command to point to the target Pulsar cluster.| -| `offset.storage.topic` | true | null | Record the last committed offsets that the connector successfully completes. | -| `mongodb.hosts` | true | null | The comma-separated list of hostname and port pairs (in the form 'host' or 'host:port') of the MongoDB servers in the replica set. The list contains a single hostname and a port pair. If mongodb.members.auto.discover is set to false, the host and port pair are prefixed with the replica set name (e.g., rs0/localhost:27017). | -| `mongodb.name` | true | null | A unique name that identifies the connector and/or MongoDB replica set or shared cluster that this connector monitors. Each server should be monitored by at most one Debezium connector, since this server name prefixes all persisted Kafka topics emanating from the MongoDB replica set or cluster. | -| `mongodb.user` | true | null | Name of the database user to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. | -| `mongodb.password` | true | null | Password to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. | -| `mongodb.task.id` | true | null | The taskId of the MongoDB connector that attempts to use a separate task for each replica set. | - - - -## Example of MySQL - -You need to create a configuration file before using the Pulsar Debezium connector. - -### Configuration - -You can use one of the following methods to create a configuration file. - -* JSON - - ```json - - { - "database.hostname": "localhost", - "database.port": "3306", - "database.user": "debezium", - "database.password": "dbz", - "database.server.id": "184054", - "database.server.name": "dbserver1", - "database.whitelist": "inventory", - "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory", - "database.history.pulsar.topic": "history-topic", - "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650", - "key.converter": "org.apache.kafka.connect.json.JsonConverter", - "value.converter": "org.apache.kafka.connect.json.JsonConverter", - "pulsar.service.url": "pulsar://127.0.0.1:6650", - "offset.storage.topic": "offset-topic" - } - - ``` - -* YAML - - You can create a `debezium-mysql-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/resources/debezium-mysql-source-config.yaml) below to the `debezium-mysql-source-config.yaml` file. 
- - ```yaml - - tenant: "public" - namespace: "default" - name: "debezium-mysql-source" - topicName: "debezium-mysql-topic" - archive: "connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar" - parallelism: 1 - - configs: - - ## config for mysql, docker image: debezium/example-mysql:0.8 - database.hostname: "localhost" - database.port: "3306" - database.user: "debezium" - database.password: "dbz" - database.server.id: "184054" - database.server.name: "dbserver1" - database.whitelist: "inventory" - database.history: "org.apache.pulsar.io.debezium.PulsarDatabaseHistory" - database.history.pulsar.topic: "history-topic" - database.history.pulsar.service.url: "pulsar://127.0.0.1:6650" - - ## KEY_CONVERTER_CLASS_CONFIG, VALUE_CONVERTER_CLASS_CONFIG - key.converter: "org.apache.kafka.connect.json.JsonConverter" - value.converter: "org.apache.kafka.connect.json.JsonConverter" - - ## PULSAR_SERVICE_URL_CONFIG - pulsar.service.url: "pulsar://127.0.0.1:6650" - - ## OFFSET_STORAGE_TOPIC_CONFIG - offset.storage.topic: "offset-topic" - - ``` - -### Usage - -This example shows how to change the data of a MySQL table using the Pulsar Debezium connector. - -1. Start a MySQL server with a database from which Debezium can capture changes. - - ```bash - - $ docker run -it --rm \ - --name mysql \ - -p 3306:3306 \ - -e MYSQL_ROOT_PASSWORD=debezium \ - -e MYSQL_USER=mysqluser \ - -e MYSQL_PASSWORD=mysqlpw debezium/example-mysql:0.8 - - ``` - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - -3. Start the Pulsar Debezium connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - Make sure the nar file is available at `connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar`. - - ```bash - - $ bin/pulsar-admin source localrun \ - --archive connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar \ - --name debezium-mysql-source --destination-topic-name debezium-mysql-topic \ - --tenant public \ - --namespace default \ - --source-config '{"database.hostname": "localhost","database.port": "3306","database.user": "debezium","database.password": "dbz","database.server.id": "184054","database.server.name": "dbserver1","database.whitelist": "inventory","database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory","database.history.pulsar.topic": "history-topic","database.history.pulsar.service.url": "pulsar://127.0.0.1:6650","key.converter": "org.apache.kafka.connect.json.JsonConverter","value.converter": "org.apache.kafka.connect.json.JsonConverter","pulsar.service.url": "pulsar://127.0.0.1:6650","offset.storage.topic": "offset-topic"}' - - ``` - - * Use the **YAML** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin source localrun \ - --source-config-file debezium-mysql-source-config.yaml - - ``` - -4. Subscribe the topic _sub-products_ for the table _inventory.products_. - - ```bash - - $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0 - - ``` - -5. Start a MySQL client in docker. - - ```bash - - $ docker run -it --rm \ - --name mysqlterm \ - --link mysql \ - --rm mysql:5.7 sh \ - -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"' - - ``` - -6. A MySQL client pops out. - - Use the following commands to change the data of the table _products_. 
-
-   ```
-
-   mysql> use inventory;
-   mysql> show tables;
-   mysql> SELECT * FROM products;
-   mysql> UPDATE products SET name='1111111111' WHERE id=101;
-   mysql> UPDATE products SET name='1111111111' WHERE id=107;
-
-   ```
-
-   In the terminal window where you subscribed to the topic, you can see that the data changes have been recorded in the _sub-products_ topic.
-
-## Example of PostgreSQL
-
-You need to create a configuration file before using the Pulsar Debezium connector.
-
-### Configuration
-
-You can use one of the following methods to create a configuration file.
-
-* JSON
-
-  ```json
-
-  {
-     "database.hostname": "localhost",
-     "database.port": "5432",
-     "database.user": "postgres",
-     "database.password": "postgres",
-     "database.dbname": "postgres",
-     "database.server.name": "dbserver1",
-     "schema.whitelist": "inventory",
-     "pulsar.service.url": "pulsar://127.0.0.1:6650"
-  }
-
-  ```
-
-* YAML
-
-  You can create a `debezium-postgres-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/resources/debezium-postgres-source-config.yaml) below to the `debezium-postgres-source-config.yaml` file.
-
-  ```yaml
-
-  tenant: "public"
-  namespace: "default"
-  name: "debezium-postgres-source"
-  topicName: "debezium-postgres-topic"
-  archive: "connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar"
-  parallelism: 1
-
-  configs:
-
-    ## config for pg, docker image: debezium/example-postgres:0.8
-    database.hostname: "localhost"
-    database.port: "5432"
-    database.user: "postgres"
-    database.password: "postgres"
-    database.dbname: "postgres"
-    database.server.name: "dbserver1"
-    schema.whitelist: "inventory"
-
-    ## PULSAR_SERVICE_URL_CONFIG
-    pulsar.service.url: "pulsar://127.0.0.1:6650"
-
-  ```
-
-### Usage
-
-This example shows how to change the data of a PostgreSQL table using the Pulsar Debezium connector.
-
-
-1. Start a PostgreSQL server with a database from which Debezium can capture changes.
-
-   ```bash
-
-   $ docker pull debezium/example-postgres:0.8
-   $ docker run -d -it --rm --name pulsar-postgresql -p 5432:5432 debezium/example-postgres:0.8
-
-   ```
-
-2. Start a Pulsar service locally in standalone mode.
-
-   ```bash
-
-   $ bin/pulsar standalone
-
-   ```
-
-3. Start the Pulsar Debezium connector in local run mode using one of the following methods.
-
-   * Use the **JSON** configuration file as shown previously.
-
-     Make sure the nar file is available at `connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar`.
-
-     ```bash
-
-     $ bin/pulsar-admin source localrun \
-     --archive connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar \
-     --name debezium-postgres-source \
-     --destination-topic-name debezium-postgres-topic \
-     --tenant public \
-     --namespace default \
-     --source-config '{"database.hostname": "localhost","database.port": "5432","database.user": "postgres","database.password": "postgres","database.dbname": "postgres","database.server.name": "dbserver1","schema.whitelist": "inventory","pulsar.service.url": "pulsar://127.0.0.1:6650"}'
-
-     ```
-
-   * Use the **YAML** configuration file as shown previously.
-
-     ```bash
-
-     $ bin/pulsar-admin source localrun \
-     --source-config-file debezium-postgres-source-config.yaml
-
-     ```
-
-4. Subscribe to the topic _sub-products_ for the _inventory.products_ table.
-
-   ```
-
-   $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0
-
-   ```
-
-5. Start a PostgreSQL client in docker.
-
-   ```bash
-
-   $ docker exec -it pulsar-postgresql /bin/bash
-
-   ```
-
-6. A PostgreSQL client prompt opens.
-
-   Use the following commands to change the data of the table _products_.
-
-   ```
-
-   psql -U postgres postgres
-   postgres=# \c postgres;
-   You are now connected to database "postgres" as user "postgres".
-   postgres=# SET search_path TO inventory;
-   SET
-   postgres=# select * from products;
-    id  |        name        |                       description                        | weight
-   -----+--------------------+----------------------------------------------------------+--------
-    102 | car battery        | 12V car battery                                          |    8.1
-    103 | 12-pack drill bits | 12-pack of drill bits with sizes ranging from #40 to #3  |    0.8
-    104 | hammer             | 12oz carpenter's hammer                                  |   0.75
-    105 | hammer             | 14oz carpenter's hammer                                  |  0.875
-    106 | hammer             | 16oz carpenter's hammer                                  |      1
-    107 | rocks              | box of assorted rocks                                    |    5.3
-    108 | jacket             | water resistent black wind breaker                       |    0.1
-    109 | spare tire         | 24 inch spare tire                                       |   22.2
-    101 | 1111111111         | Small 2-wheel scooter                                    |   3.14
-   (9 rows)
-
-   postgres=# UPDATE products SET name='1111111111' WHERE id=107;
-   UPDATE 1
-
-   ```
-
-   In the terminal window where you subscribed to the topic, you can receive messages like the following.
-
-   ```bash
-
-   ----- got message -----
-   {"schema":{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":107}}, value = {"schema":{"type":"struct","fields":[{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":false,"field":"name"},{"type":"string","optional":true,"field":"description"},{"type":"double","optional":true,"field":"weight"}],"optional":true,"name":"dbserver1.inventory.products.Value","field":"before"},{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":false,"field":"name"},{"type":"string","optional":true,"field":"description"},{"type":"double","optional":true,"field":"weight"}],"optional":true,"name":"dbserver1.inventory.products.Value","field":"after"},{"type":"struct","fields":[{"type":"string","optional":true,"field":"version"},{"type":"string","optional":true,"field":"connector"},{"type":"string","optional":false,"field":"name"},{"type":"string","optional":false,"field":"db"},{"type":"int64","optional":true,"field":"ts_usec"},{"type":"int64","optional":true,"field":"txId"},{"type":"int64","optional":true,"field":"lsn"},{"type":"string","optional":true,"field":"schema"},{"type":"string","optional":true,"field":"table"},{"type":"boolean","optional":true,"default":false,"field":"snapshot"},{"type":"boolean","optional":true,"field":"last_snapshot_record"}],"optional":false,"name":"io.debezium.connector.postgresql.Source","field":"source"},{"type":"string","optional":false,"field":"op"},{"type":"int64","optional":true,"field":"ts_ms"}],"optional":false,"name":"dbserver1.inventory.products.Envelope"},"payload":{"before":{"id":107,"name":"rocks","description":"box of assorted rocks","weight":5.3},"after":{"id":107,"name":"1111111111","description":"box of assorted rocks","weight":5.3},"source":{"version":"0.9.2.Final","connector":"postgresql","name":"dbserver1","db":"postgres","ts_usec":1559208957661080,"txId":577,"lsn":23862872,"schema":"inventory","table":"products","snapshot":false,"last_snapshot_record":null},"op":"u","ts_ms":1559208957692}}
-
-   ```
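-
-   The envelope shown above is plain JSON, so it can be inspected with any JSON library. The following is a minimal, hypothetical sketch using Jackson; the `jackson-databind` dependency and the abbreviated sample value are assumptions added for illustration, not part of the connector itself:
-
-   ```java
-
-   import com.fasterxml.jackson.databind.JsonNode;
-   import com.fasterxml.jackson.databind.ObjectMapper;
-
-   public class EnvelopeParser {
-       public static void main(String[] args) throws Exception {
-           // An abbreviated value envelope like the sample above.
-           String json = "{\"schema\":{},\"payload\":{\"op\":\"u\","
-                   + "\"after\":{\"id\":107,\"name\":\"1111111111\"}}}";
-           ObjectMapper mapper = new ObjectMapper();
-           JsonNode payload = mapper.readTree(json).get("payload");
-           // "op" is the operation type; "after" holds the row state after the change.
-           System.out.println("op = " + payload.get("op").asText());
-           System.out.println("after = " + payload.get("after"));
-       }
-   }
-
-   ```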
-
-## Example of MongoDB
-
-You need to create a configuration file before using the Pulsar Debezium connector.
-
-### Configuration
-
-You can use one of the following methods to create a configuration file.
-
-* JSON
-
-  ```json
-
-  {
-     "mongodb.hosts": "rs0/mongodb:27017",
-     "mongodb.name": "dbserver1",
-     "mongodb.user": "debezium",
-     "mongodb.password": "dbz",
-     "mongodb.task.id": "1",
-     "database.whitelist": "inventory",
-     "pulsar.service.url": "pulsar://127.0.0.1:6650"
-  }
-
-  ```
-
-* YAML
-
-  You can create a `debezium-mongodb-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mongodb/src/main/resources/debezium-mongodb-source-config.yaml) below to the `debezium-mongodb-source-config.yaml` file.
-
-  ```yaml
-
-  tenant: "public"
-  namespace: "default"
-  name: "debezium-mongodb-source"
-  topicName: "debezium-mongodb-topic"
-  archive: "connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar"
-  parallelism: 1
-
-  configs:
-
-    ## config for mongodb, docker image: debezium/example-mongodb:0.10
-    mongodb.hosts: "rs0/mongodb:27017"
-    mongodb.name: "dbserver1"
-    mongodb.user: "debezium"
-    mongodb.password: "dbz"
-    mongodb.task.id: "1"
-    database.whitelist: "inventory"
-
-    ## PULSAR_SERVICE_URL_CONFIG
-    pulsar.service.url: "pulsar://127.0.0.1:6650"
-
-  ```
-
-### Usage
-
-This example shows how to change the data of a MongoDB collection using the Pulsar Debezium connector.
-
-
-1. Start a MongoDB server with a database from which Debezium can capture changes.
-
-   ```bash
-
-   $ docker pull debezium/example-mongodb:0.10
-   $ docker run -d -it --rm --name pulsar-mongodb -e MONGODB_USER=mongodb -e MONGODB_PASSWORD=mongodb -p 27017:27017 debezium/example-mongodb:0.10
-
-   ```
-
-   Use the following command to initialize the data.
-
-   ```bash
-
-   /usr/local/bin/init-inventory.sh
-
-   ```
-
-   If the local host cannot access the container network, you can update the `/etc/hosts` file and add a rule such as `127.0.0.1 f114527a95f`, where `f114527a95f` is the container ID. You can get the container ID by running `docker ps -a`.
-
-
-2. Start a Pulsar service locally in standalone mode.
-
-   ```bash
-
-   $ bin/pulsar standalone
-
-   ```
-
-3. Start the Pulsar Debezium connector in local run mode using one of the following methods.
-
-   * Use the **JSON** configuration file as shown previously.
-
-     Make sure the nar file is available at `connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar`.
-
-     ```bash
-
-     $ bin/pulsar-admin source localrun \
-     --archive connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar \
-     --name debezium-mongodb-source \
-     --destination-topic-name debezium-mongodb-topic \
-     --tenant public \
-     --namespace default \
-     --source-config '{"mongodb.hosts": "rs0/mongodb:27017","mongodb.name": "dbserver1","mongodb.user": "debezium","mongodb.password": "dbz","mongodb.task.id": "1","database.whitelist": "inventory","pulsar.service.url": "pulsar://127.0.0.1:6650"}'
-
-     ```
-
-   * Use the **YAML** configuration file as shown previously.
-
-     ```bash
-
-     $ bin/pulsar-admin source localrun \
-     --source-config-file debezium-mongodb-source-config.yaml
-
-     ```
-
-4. Subscribe to the topic _sub-products_ for the _inventory.products_ table.
-
-   ```
-
-   $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0
-
-   ```
-
-5. Start a MongoDB client in docker.
-
-   ```bash
-
-   $ docker exec -it pulsar-mongodb /bin/bash
-
-   ```
-
-6. A MongoDB client prompt opens.
-
-   ```bash
-
-   mongo -u debezium -p dbz --authenticationDatabase admin localhost:27017/inventory
-   db.products.update({"_id":NumberLong(104)},{$set:{weight:1.25}})
-
-   ```
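-
-   As an aside (not part of the original walkthrough), instead of the `pulsar-client` command in step 4 you could watch the same topic with a minimal Java consumer such as the sketch below. The service URL, topic, and subscription mirror this example; the class name is hypothetical, and the `pulsar-client` Java dependency is assumed.
-
-   ```java
-
-   import org.apache.pulsar.client.api.Consumer;
-   import org.apache.pulsar.client.api.Message;
-   import org.apache.pulsar.client.api.PulsarClient;
-
-   public class ProductsChangeWatcher {
-       public static void main(String[] args) throws Exception {
-           PulsarClient client = PulsarClient.builder()
-                   .serviceUrl("pulsar://127.0.0.1:6650")
-                   .build();
-           // Same topic and subscription as the pulsar-client command in step 4.
-           Consumer<byte[]> consumer = client.newConsumer()
-                   .topic("public/default/dbserver1.inventory.products")
-                   .subscriptionName("sub-products")
-                   .subscribe();
-           while (true) {
-               Message<byte[]> msg = consumer.receive();
-               System.out.println("----- got message -----");
-               System.out.println(new String(msg.getData()));
-               consumer.acknowledge(msg);
-           }
-       }
-   }
-
-   ```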
-
-   In the terminal window where you subscribed to the topic, you can receive messages like the following.
-
-   ```bash
-
-   ----- got message -----
-   {"schema":{"type":"struct","fields":[{"type":"string","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":"104"}}, value = {"schema":{"type":"struct","fields":[{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"after"},{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"patch"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"version"},{"type":"string","optional":false,"field":"connector"},{"type":"string","optional":false,"field":"name"},{"type":"int64","optional":false,"field":"ts_ms"},{"type":"string","optional":true,"name":"io.debezium.data.Enum","version":1,"parameters":{"allowed":"true,last,false"},"default":"false","field":"snapshot"},{"type":"string","optional":false,"field":"db"},{"type":"string","optional":false,"field":"rs"},{"type":"string","optional":false,"field":"collection"},{"type":"int32","optional":false,"field":"ord"},{"type":"int64","optional":true,"field":"h"}],"optional":false,"name":"io.debezium.connector.mongo.Source","field":"source"},{"type":"string","optional":true,"field":"op"},{"type":"int64","optional":true,"field":"ts_ms"}],"optional":false,"name":"dbserver1.inventory.products.Envelope"},"payload":{"after":"{\"_id\": {\"$numberLong\": \"104\"},\"name\": \"hammer\",\"description\": \"12oz carpenter's hammer\",\"weight\": 1.25,\"quantity\": 4}","patch":null,"source":{"version":"0.10.0.Final","connector":"mongodb","name":"dbserver1","ts_ms":1573541905000,"snapshot":"true","db":"inventory","rs":"rs0","collection":"products","ord":1,"h":4983083486544392763},"op":"r","ts_ms":1573541909761}}
-
-   ```
-
-## FAQ
-
-### The Debezium PostgreSQL connector hangs when creating a snapshot
-
-```
-
-#18 prio=5 os_prio=31 tid=0x00007fd83096f800 nid=0xa403 waiting on condition [0x000070000f534000]
-    java.lang.Thread.State: WAITING (parking)
-    at sun.misc.Unsafe.park(Native Method)
-    - parking to wait for  <0x00000007ab025a58> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
-    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
-    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
-    at java.util.concurrent.LinkedBlockingDeque.putLast(LinkedBlockingDeque.java:396)
-    at java.util.concurrent.LinkedBlockingDeque.put(LinkedBlockingDeque.java:649)
-    at io.debezium.connector.base.ChangeEventQueue.enqueue(ChangeEventQueue.java:132)
-    at io.debezium.connector.postgresql.PostgresConnectorTask$Lambda$203/385424085.accept(Unknown Source)
-    at io.debezium.connector.postgresql.RecordsSnapshotProducer.sendCurrentRecord(RecordsSnapshotProducer.java:402)
-    at io.debezium.connector.postgresql.RecordsSnapshotProducer.readTable(RecordsSnapshotProducer.java:321)
-    at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$takeSnapshot$6(RecordsSnapshotProducer.java:226)
-    at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$240/1347039967.accept(Unknown Source)
-    at io.debezium.jdbc.JdbcConnection.queryWithBlockingConsumer(JdbcConnection.java:535)
-    at io.debezium.connector.postgresql.RecordsSnapshotProducer.takeSnapshot(RecordsSnapshotProducer.java:224)
-    at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$start$0(RecordsSnapshotProducer.java:87)
-    at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$206/589332928.run(Unknown Source)
-    at java.util.concurrent.CompletableFuture.uniRun(CompletableFuture.java:705)
-    at java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:717)
-    at java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2010)
-    at io.debezium.connector.postgresql.RecordsSnapshotProducer.start(RecordsSnapshotProducer.java:87)
-    at io.debezium.connector.postgresql.PostgresConnectorTask.start(PostgresConnectorTask.java:126)
-    at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:47)
-    at org.apache.pulsar.io.kafka.connect.KafkaConnectSource.open(KafkaConnectSource.java:127)
-    at org.apache.pulsar.io.debezium.DebeziumSource.open(DebeziumSource.java:100)
-    at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupInput(JavaInstanceRunnable.java:690)
-    at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupJavaInstance(JavaInstanceRunnable.java:200)
-    at org.apache.pulsar.functions.instance.JavaInstanceRunnable.run(JavaInstanceRunnable.java:230)
-    at java.lang.Thread.run(Thread.java:748)
-
-```
-
-If you encounter this problem while synchronizing data, refer to [this issue](https://github.com/apache/pulsar/issues/4075) and add the following configuration to the configuration file:
-
-```
-
-max.queue.size=
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/io-cdc.md b/site2/website/versioned_docs/version-2.8.2-deprecated/io-cdc.md
deleted file mode 100644
index e6e662884826de..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/io-cdc.md
+++ /dev/null
@@ -1,26 +0,0 @@
----
-id: io-cdc
-title: CDC connector
-sidebar_label: "CDC connector"
-original_id: io-cdc
----
-
-CDC source connectors capture log changes of databases (such as MySQL, MongoDB, and PostgreSQL) into Pulsar.
-
-> CDC source connectors are built on top of [Canal](https://github.com/alibaba/canal) and [Debezium](https://debezium.io/) and store all data in a Pulsar cluster in a persistent, replicated, and partitioned way.
-
-Currently, Pulsar has the following CDC connectors.
-
-Name|Java Class
-|---|---
-[Canal source connector](io-canal-source.md)|[org.apache.pulsar.io.canal.CanalStringSource.java](https://github.com/apache/pulsar/blob/master/pulsar-io/canal/src/main/java/org/apache/pulsar/io/canal/CanalStringSource.java)
-[Debezium source connector](io-cdc-debezium.md)|[org.apache.pulsar.io.debezium.DebeziumSource.java](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/core/src/main/java/org/apache/pulsar/io/debezium/DebeziumSource.java)<br/>[org.apache.pulsar.io.debezium.mysql.DebeziumMysqlSource.java](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/java/org/apache/pulsar/io/debezium/mysql/DebeziumMysqlSource.java)<br/>[org.apache.pulsar.io.debezium.postgres.DebeziumPostgresSource.java](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/java/org/apache/pulsar/io/debezium/postgres/DebeziumPostgresSource.java)
-
-For more information about Canal and Debezium, see the information below.
-
-Subject | Reference
-|---|---
-How to use Canal source connector with MySQL|[Canal guide](https://github.com/alibaba/canal/wiki)
-How does Canal work | [Canal tutorial](https://github.com/alibaba/canal/wiki)
-How to use Debezium source connector with MySQL | [Debezium guide](https://debezium.io/docs/connectors/mysql/)
-How does Debezium work | [Debezium tutorial](https://debezium.io/docs/tutorial/)
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/io-cli.md b/site2/website/versioned_docs/version-2.8.2-deprecated/io-cli.md
deleted file mode 100644
index 3d54bb61875e25..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/io-cli.md
+++ /dev/null
@@ -1,658 +0,0 @@
----
-id: io-cli
-title: Connector Admin CLI
-sidebar_label: "CLI"
-original_id: io-cli
----
-
-The `pulsar-admin` tool helps you manage Pulsar connectors.
-
-## `sources`
-
-An interface for managing Pulsar IO sources (ingress data into Pulsar).
-
-```bash
-
-$ pulsar-admin sources subcommands
-
-```
-
-Subcommands are:
-
-* `create`
-
-* `update`
-
-* `delete`
-
-* `get`
-
-* `status`
-
-* `list`
-
-* `stop`
-
-* `start`
-
-* `restart`
-
-* `localrun`
-
-* `available-sources`
-
-* `reload`
-
-
-### `create`
-
-Submit a Pulsar IO source connector to run in a Pulsar cluster.
-
-#### Usage
-
-```bash
-
-$ pulsar-admin sources create options
-
-```
-
-#### Options
-
-|Flag|Description|
-|----|---|
-| `-a`, `--archive` | The path to the NAR archive for the source.
    It also supports url-path (http/https/file [file protocol assumes that file already exists on worker host]) from which worker can download the package. -| `--classname` | The source's class name if `archive` is file-url-path (file://). -| `--cpu` | The CPU (in cores) that needs to be allocated per source instance (applicable only to Docker runtime). -| `--deserialization-classname` | The SerDe classname for the source. -| `--destination-topic-name` | The Pulsar topic to which data is sent. -| `--disk` | The disk (in bytes) that needs to be allocated per source instance (applicable only to Docker runtime). -|`--name` | The source's name. -| `--namespace` | The source's namespace. -| ` --parallelism` | The source's parallelism factor, that is, the number of source instances to run. -| `--processing-guarantees` | The processing guarantees (also named as delivery semantics) applied to the source. A source connector receives messages from external system and writes messages to a Pulsar topic. The `--processing-guarantees` is used to ensure the processing guarantees for writing messages to the Pulsar topic.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE. -| `--ram` | The RAM (in bytes) that needs to be allocated per source instance (applicable only to the process and Docker runtimes). -| `-st`, `--schema-type` | The schema type.
Either a builtin schema (for example, AVRO and JSON) or custom schema class name to be used to encode messages emitted from source.
-| `--source-config` | Source config key/values.
-| `--source-config-file` | The path to a YAML config file specifying the source's configuration.
-| `-t`, `--source-type` | The source's connector provider.
-| `--tenant` | The source's tenant.
-|`--producer-config`| The custom producer configuration (as a JSON string).
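-
-The same submission can also be performed programmatically. The sketch below uses the Java admin client and is illustrative only: the admin URL, names, config keys, and archive path are assumptions, and the `pulsar-client-admin` dependency is required.
-
-```java
-
-import java.util.HashMap;
-import java.util.Map;
-import org.apache.pulsar.client.admin.PulsarAdmin;
-import org.apache.pulsar.common.io.SourceConfig;
-
-public class CreateSourceExample {
-    public static void main(String[] args) throws Exception {
-        PulsarAdmin admin = PulsarAdmin.builder()
-                .serviceHttpUrl("http://localhost:8080") // hypothetical admin URL
-                .build();
-        Map<String, Object> configs = new HashMap<>();
-        configs.put("some-source-key", "some-value"); // hypothetical connector config
-        SourceConfig source = SourceConfig.builder()
-                .tenant("public")
-                .namespace("default")
-                .name("my-source")
-                .topicName("my-destination-topic")
-                .parallelism(1)
-                .configs(configs)
-                .build();
-        // Equivalent to: pulsar-admin sources create --archive <nar> ...
-        admin.sources().createSource(source, "connectors/my-source.nar");
-        admin.close();
-    }
-}
-
-```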
-
-### `update`
-
-Update an already submitted Pulsar IO source connector.
-
-#### Usage
-
-```bash
-
-$ pulsar-admin sources update options
-
-```
-
-#### Options
-
-|Flag|Description|
-|----|---|
-| `-a`, `--archive` | The path to the NAR archive for the source.
    It also supports url-path (http/https/file [file protocol assumes that file already exists on worker host]) from which worker can download the package.
-| `--classname` | The source's class name if `archive` is file-url-path (file://).
-| `--cpu` | The CPU (in cores) that needs to be allocated per source instance (applicable only to Docker runtime).
-| `--deserialization-classname` | The SerDe classname for the source.
-| `--destination-topic-name` | The Pulsar topic to which data is sent.
-| `--disk` | The disk (in bytes) that needs to be allocated per source instance (applicable only to Docker runtime).
-|`--name` | The source's name.
-| `--namespace` | The source's namespace.
-| ` --parallelism` | The source's parallelism factor, that is, the number of source instances to run.
-| `--processing-guarantees` | The processing guarantees (also named as delivery semantics) applied to the source. A source connector receives messages from external system and writes messages to a Pulsar topic. The `--processing-guarantees` is used to ensure the processing guarantees for writing messages to the Pulsar topic.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE. -| `--ram` | The RAM (in bytes) that needs to be allocated per source instance (applicable only to the process and Docker runtimes). -| `-st`, `--schema-type` | The schema type.
    Either a builtin schema (for example, AVRO and JSON) or custom schema class name to be used to encode messages emitted from source. -| `--source-config` | Source config key/values. -| `--source-config-file` | The path to a YAML config file specifying the source's configuration. -| `-t`, `--source-type` | The source's connector provider. The `source-type` parameter of the currently built-in connectors is determined by the setting of the `name` parameter specified in the pulsar-io.yaml file. -| `--tenant` | The source's tenant. -| `--update-auth-data` | Whether or not to update the auth data.
    **Default value: false.** - - -### `delete` - -Delete a Pulsar IO source connector. - -#### Usage - -```bash - -$ pulsar-admin sources delete options - -``` - -#### Option - -|Flag|Description| -|---|---| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - -### `get` - -Get the information about a Pulsar IO source connector. - -#### Usage - -```bash - -$ pulsar-admin sources get options - -``` - -#### Options -|Flag|Description| -|---|---| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - - -### `status` - -Check the current status of a Pulsar Source. - -#### Usage - -```bash - -$ pulsar-admin sources status options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The source ID.
    If `instance-id` is not provided, Pulsar gets status of all instances.| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - -### `list` - -List all running Pulsar IO source connectors. - -#### Usage - -```bash - -$ pulsar-admin sources list options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - - -### `stop` - -Stop a source instance. - -#### Usage - -```bash - -$ pulsar-admin sources stop options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The source instanceID.
    If `instance-id` is not provided, Pulsar stops all instances.| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - -### `start` - -Start a source instance. - -#### Usage - -```bash - -$ pulsar-admin sources start options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The source instanceID.
    If `instance-id` is not provided, Pulsar starts all instances.| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - - -### `restart` - -Restart a source instance. - -#### Usage - -```bash - -$ pulsar-admin sources restart options - -``` - -#### Options -|Flag|Description| -|---|---| -|`--instance-id`|The source instanceID.
    If `instance-id` is not provided, Pulsar restarts all instances. -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - - -### `localrun` - -Run a Pulsar IO source connector locally rather than deploying it to the Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sources localrun options - -``` - -#### Options - -|Flag|Description| -|----|---| -| `-a`, `--archive` | The path to the NAR archive for the Source.
    It also supports url-path (http/https/file [file protocol assumes that file already exists on worker host]) from which worker can download the package. -| `--broker-service-url` | The URL for the Pulsar broker. -|`--classname`|The source's class name if `archive` is file-url-path (file://). -| `--client-auth-params` | Client authentication parameter. -| `--client-auth-plugin` | Client authentication plugin using which function-process can connect to broker. -|`--cpu`|The CPU (in cores) that needs to be allocated per source instance (applicable only to the Docker runtime).| -|`--deserialization-classname`|The SerDe classname for the source. -|`--destination-topic-name`|The Pulsar topic to which data is sent. -|`--disk`|The disk (in bytes) that needs to be allocated per source instance (applicable only to the Docker runtime).| -|`--hostname-verification-enabled`|Enable hostname verification.
**Default value: false**.
-|`--name`|The source’s name.|
-|`--namespace`|The source’s namespace.|
-|`--parallelism`|The source’s parallelism factor, that is, the number of source instances to run.|
-|`--processing-guarantees` | The processing guarantees (also named as delivery semantics) applied to the source. A source connector receives messages from external system and writes messages to a Pulsar topic. The `--processing-guarantees` is used to ensure the processing guarantees for writing messages to the Pulsar topic.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE. -|`--ram`|The RAM (in bytes) that needs to be allocated per source instance (applicable only to the Docker runtime).| -| `-st`, `--schema-type` | The schema type.
    Either a builtin schema (for example, AVRO and JSON) or custom schema class name to be used to encode messages emitted from source. -|`--source-config`|Source config key/values. -|`--source-config-file`|The path to a YAML config file specifying the source’s configuration. -|`--source-type`|The source's connector provider. -|`--tenant`|The source’s tenant. -|`--tls-allow-insecure`|Allow insecure tls connection.
    **Default value: false**. -|`--tls-trust-cert-path`|The tls trust cert file path. -|`--use-tls`|Use tls connection.
    **Default value: false**. -|`--producer-config`| The custom producer configuration (as a JSON string). - -### `available-sources` - -Get the list of Pulsar IO connector sources supported by Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sources available-sources - -``` - -### `reload` - -Reload the available built-in connectors. - -#### Usage - -```bash - -$ pulsar-admin sources reload - -``` - -## `sinks` - -An interface for managing Pulsar IO sinks (egress data from Pulsar). - -```bash - -$ pulsar-admin sinks subcommands - -``` - -Subcommands are: - -* `create` - -* `update` - -* `delete` - -* `get` - -* `status` - -* `list` - -* `stop` - -* `start` - -* `restart` - -* `localrun` - -* `available-sinks` - -* `reload` - - -### `create` - -Submit a Pulsar IO sink connector to run in a Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sinks create options - -``` - -#### Options - -|Flag|Description| -|----|---| -| `-a`, `--archive` | The path to the archive file for the sink.
    It also supports url-path (http/https/file [file protocol assumes that file already exists on worker host]) from which worker can download the package. -| `--auto-ack` | Whether or not the framework will automatically acknowledge messages. -| `--classname` | The sink's class name if `archive` is file-url-path (file://). -| `--cpu` | The CPU (in cores) that needs to be allocated per sink instance (applicable only to Docker runtime). -| `--custom-schema-inputs` | The map of input topics to schema types or class names (as a JSON string). -| `--custom-serde-inputs` | The map of input topics to SerDe class names (as a JSON string). -| `--disk` | The disk (in bytes) that needs to be allocated per sink instance (applicable only to Docker runtime). -|`-i, --inputs` | The sink's input topic or topics (multiple topics can be specified as a comma-separated list). -|`--name` | The sink's name. -| `--namespace` | The sink's namespace. -| ` --parallelism` | The sink's parallelism factor, that is, the number of sink instances to run. -| `--processing-guarantees` | The processing guarantees (also known as delivery semantics) applied to the sink. The `--processing-guarantees` implementation in Pulsar also relies on sink implementation.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE. -| `--ram` | The RAM (in bytes) that needs to be allocated per sink instance (applicable only to the process and Docker runtimes). -| `--retain-ordering` | Sink consumes and sinks messages in order. -| `--sink-config` | sink config key/values. -| `--sink-config-file` | The path to a YAML config file specifying the sink's configuration. -| `-t`, `--sink-type` | The sink's connector provider. The `sink-type` parameter of the currently built-in connectors is determined by the setting of the `name` parameter specified in the pulsar-io.yaml file. -| `--subs-name` | Pulsar source subscription name if user wants a specific subscription-name for input-topic consumer. -| `--tenant` | The sink's tenant. -| `--timeout-ms` | The message timeout in milliseconds. -| `--topics-pattern` | TopicsPattern to consume from list of topics under a namespace that match the pattern.
`--input` and `--topics-Pattern` are mutually exclusive.
    Add SerDe class name for a pattern in `--customSerdeInputs` (supported for Java functions only).
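-
-As with sources, a sink can also be submitted programmatically. The sketch below uses the Java admin client and is illustrative only: the admin URL, names, and archive path are assumptions, and the `pulsar-client-admin` dependency is required.
-
-```java
-
-import java.util.Collections;
-import org.apache.pulsar.client.admin.PulsarAdmin;
-import org.apache.pulsar.common.io.SinkConfig;
-
-public class CreateSinkExample {
-    public static void main(String[] args) throws Exception {
-        PulsarAdmin admin = PulsarAdmin.builder()
-                .serviceHttpUrl("http://localhost:8080") // hypothetical admin URL
-                .build();
-        SinkConfig sink = SinkConfig.builder()
-                .tenant("public")
-                .namespace("default")
-                .name("my-sink")
-                .inputs(Collections.singletonList("my-input-topic"))
-                .parallelism(1)
-                .build();
-        // Equivalent to: pulsar-admin sinks create --archive <file> -i my-input-topic ...
-        admin.sinks().createSink(sink, "connectors/my-sink.nar");
-        admin.close();
-    }
-}
-
-```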
-
-### `update`
-
-Update a Pulsar IO sink connector.
-
-#### Usage
-
-```bash
-
-$ pulsar-admin sinks update options
-
-```
-
-#### Options
-
-|Flag|Description|
-|----|---|
-| `-a`, `--archive` | The path to the archive file for the sink.
    It also supports url-path (http/https/file [file protocol assumes that file already exists on worker host]) from which worker can download the package. -| `--auto-ack` | Whether or not the framework will automatically acknowledge messages. -| `--classname` | The sink's class name if `archive` is file-url-path (file://). -| `--cpu` | The CPU (in cores) that needs to be allocated per sink instance (applicable only to Docker runtime). -| `--custom-schema-inputs` | The map of input topics to schema types or class names (as a JSON string). -| `--custom-serde-inputs` | The map of input topics to SerDe class names (as a JSON string). -| `--disk` | The disk (in bytes) that needs to be allocated per sink instance (applicable only to Docker runtime). -|`-i, --inputs` | The sink's input topic or topics (multiple topics can be specified as a comma-separated list). -|`--name` | The sink's name. -| `--namespace` | The sink's namespace. -| ` --parallelism` | The sink's parallelism factor, that is, the number of sink instances to run. -| `--processing-guarantees` | The processing guarantees (also known as delivery semantics) applied to the sink. The `--processing-guarantees` implementation in Pulsar also relies on sink implementation.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE. -| `--ram` | The RAM (in bytes) that needs to be allocated per sink instance (applicable only to the process and Docker runtimes). -| `--retain-ordering` | Sink consumes and sinks messages in order. -| `--sink-config` | sink config key/values. -| `--sink-config-file` | The path to a YAML config file specifying the sink's configuration. -| `-t`, `--sink-type` | The sink's connector provider. -| `--subs-name` | Pulsar source subscription name if user wants a specific subscription-name for input-topic consumer. -| `--tenant` | The sink's tenant. -| `--timeout-ms` | The message timeout in milliseconds. -| `--topics-pattern` | TopicsPattern to consume from list of topics under a namespace that match the pattern.
    `--input` and `--topics-Pattern` are mutually exclusive.
Add SerDe class name for a pattern in `--customSerdeInputs` (supported for Java functions only).
-| `--update-auth-data` | Whether or not to update the auth data.
    **Default value: false.** - -### `delete` - -Delete a Pulsar IO sink connector. - -#### Usage - -```bash - -$ pulsar-admin sinks delete options - -``` - -#### Option - -|Flag|Description| -|---|---| -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - -### `get` - -Get the information about a Pulsar IO sink connector. - -#### Usage - -```bash - -$ pulsar-admin sinks get options - -``` - -#### Options -|Flag|Description| -|---|---| -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - - -### `status` - -Check the current status of a Pulsar sink. - -#### Usage - -```bash - -$ pulsar-admin sinks status options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The sink ID.
    If `instance-id` is not provided, Pulsar gets status of all instances.| -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - - -### `list` - -List all running Pulsar IO sink connectors. - -#### Usage - -```bash - -$ pulsar-admin sinks list options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - - -### `stop` - -Stop a sink instance. - -#### Usage - -```bash - -$ pulsar-admin sinks stop options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The sink instanceID.
    If `instance-id` is not provided, Pulsar stops all instances.| -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - -### `start` - -Start a sink instance. - -#### Usage - -```bash - -$ pulsar-admin sinks start options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The sink instanceID.
    If `instance-id` is not provided, Pulsar starts all instances.| -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - - -### `restart` - -Restart a sink instance. - -#### Usage - -```bash - -$ pulsar-admin sinks restart options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The sink instanceID.
    If `instance-id` is not provided, Pulsar restarts all instances. -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - - -### `localrun` - -Run a Pulsar IO sink connector locally rather than deploying it to the Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sinks localrun options - -``` - -#### Options - -|Flag|Description| -|----|---| -| `-a`, `--archive` | The path to the archive file for the sink.
    It also supports url-path (http/https/file [file protocol assumes that file already exists on worker host]) from which worker can download the package. -| `--auto-ack` | Whether or not the framework will automatically acknowledge messages. -| `--broker-service-url` | The URL for the Pulsar broker. -|`--classname`|The sink's class name if `archive` is file-url-path (file://). -| `--client-auth-params` | Client authentication parameter. -| `--client-auth-plugin` | Client authentication plugin using which function-process can connect to broker. -|`--cpu`|The CPU (in cores) that needs to be allocated per sink instance (applicable only to the Docker runtime). -| `--custom-schema-inputs` | The map of input topics to Schema types or class names (as a JSON string). -| `--max-redeliver-count` | Maximum number of times that a message is redelivered before being sent to the dead letter queue. -| `--dead-letter-topic` | Name of the dead letter topic where the failing messages are sent. -| `--custom-serde-inputs` | The map of input topics to SerDe class names (as a JSON string). -|`--disk`|The disk (in bytes) that needs to be allocated per sink instance (applicable only to the Docker runtime).| -|`--hostname-verification-enabled`|Enable hostname verification.
**Default value: false**.
-| `-i`, `--inputs` | The sink's input topic or topics (multiple topics can be specified as a comma-separated list).
-|`--name`|The sink’s name.|
-|`--namespace`|The sink’s namespace.|
-|`--parallelism`|The sink’s parallelism factor, that is, the number of sink instances to run.|
-|`--processing-guarantees`|The processing guarantees (also known as delivery semantics) applied to the sink. The `--processing-guarantees` implementation in Pulsar also relies on sink implementation.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE. -|`--ram`|The RAM (in bytes) that needs to be allocated per sink instance (applicable only to the Docker runtime).| -|`--retain-ordering` | Sink consumes and sinks messages in order. -|`--sink-config`|sink config key/values. -|`--sink-config-file`|The path to a YAML config file specifying the sink’s configuration. -|`--sink-type`|The sink's connector provider. -|`--subs-name` | Pulsar source subscription name if user wants a specific subscription-name for input-topic consumer. -|`--tenant`|The sink’s tenant. -| `--timeout-ms` | The message timeout in milliseconds. -| `--negative-ack-redelivery-delay-ms` | The negatively-acknowledged message redelivery delay in milliseconds. | -|`--tls-allow-insecure`|Allow insecure tls connection.
    **Default value: false**. -|`--tls-trust-cert-path`|The tls trust cert file path. -| `--topics-pattern` | TopicsPattern to consume from list of topics under a namespace that match the pattern.
    `--input` and `--topics-Pattern` are mutually exclusive.
Add SerDe class name for a pattern in `--customSerdeInputs` (supported for Java functions only).
-|`--use-tls`|Use tls connection.
    **Default value: false**. - -### `available-sinks` - -Get the list of Pulsar IO connector sinks supported by Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sinks available-sinks - -``` - -### `reload` - -Reload the available built-in connectors. - -#### Usage - -```bash - -$ pulsar-admin sinks reload - -``` - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/io-connectors.md b/site2/website/versioned_docs/version-2.8.2-deprecated/io-connectors.md deleted file mode 100644 index 8db368e0e70637..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/io-connectors.md +++ /dev/null @@ -1,232 +0,0 @@ ---- -id: io-connectors -title: Built-in connector -sidebar_label: "Built-in connector" -original_id: io-connectors ---- - -Pulsar distribution includes a set of common connectors that have been packaged and tested with the rest of Apache Pulsar. These connectors import and export data from some of the most commonly used data systems. - -Using any of these connectors is as easy as writing a simple connector and running the connector locally or submitting the connector to a Pulsar Functions cluster. - -## Source connector - -Pulsar has various source connectors, which are sorted alphabetically as below. - -### Canal - -* [Configuration](io-canal-source.md#configuration) - -* [Example](io-canal-source.md#usage) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/canal/src/main/java/org/apache/pulsar/io/canal/CanalStringSource.java) - - -### Debezium MySQL - -* [Configuration](io-debezium-source.md#configuration) - -* [Example](io-debezium-source.md#example-of-mysql) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/java/org/apache/pulsar/io/debezium/mysql/DebeziumMysqlSource.java) - -### Debezium PostgreSQL - -* [Configuration](io-debezium-source.md#configuration) - -* [Example](io-debezium-source.md#example-of-postgresql) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/java/org/apache/pulsar/io/debezium/postgres/DebeziumPostgresSource.java) - -### Debezium MongoDB - -* [Configuration](io-debezium-source.md#configuration) - -* [Example](io-debezium-source.md#example-of-mongodb) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mongodb/src/main/java/org/apache/pulsar/io/debezium/mongodb/DebeziumMongoDbSource.java) - -### DynamoDB - -* [Configuration](io-dynamodb-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/dynamodb/src/main/java/org/apache/pulsar/io/dynamodb/DynamoDBSource.java) - -### File - -* [Configuration](io-file-source.md#configuration) - -* [Example](io-file-source.md#usage) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/file/src/main/java/org/apache/pulsar/io/file/FileSource.java) - -### Flume - -* [Configuration](io-flume-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/java/org/apache/pulsar/io/flume/FlumeConnector.java) - -### Twitter firehose - -* [Configuration](io-twitter-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/twitter/src/main/java/org/apache/pulsar/io/twitter/TwitterFireHose.java) - -### Kafka - -* [Configuration](io-kafka-source.md#configuration) - -* [Example](io-kafka-source.md#usage) - -* [Java 
class](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSource.java) - -### Kinesis - -* [Configuration](io-kinesis-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/kinesis/src/main/java/org/apache/pulsar/io/kinesis/KinesisSource.java) - -### Netty - -* [Configuration](io-netty-source.md#configuration) - -* [Example of TCP](io-netty-source.md#tcp) - -* [Example of HTTP](io-netty-source.md#http) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/netty/src/main/java/org/apache/pulsar/io/netty/NettySource.java) - -### NSQ - -* [Configuration](io-nsq-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/nsq/src/main/java/org/apache/pulsar/io/nsq/NSQSource.java) - -### RabbitMQ - -* [Configuration](io-rabbitmq-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/rabbitmq/src/main/java/org/apache/pulsar/io/rabbitmq/RabbitMQSource.java) - -## Sink connector - -Pulsar has various sink connectors, which are sorted alphabetically as below. - -### Aerospike - -* [Configuration](io-aerospike-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/aerospike/src/main/java/org/apache/pulsar/io/aerospike/AerospikeStringSink.java) - -### Cassandra - -* [Configuration](io-cassandra-sink.md#configuration) - -* [Example](io-cassandra-sink.md#usage) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/cassandra/src/main/java/org/apache/pulsar/io/cassandra/CassandraStringSink.java) - -### ElasticSearch - -* [Configuration](io-elasticsearch-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/elastic-search/src/main/java/org/apache/pulsar/io/elasticsearch/ElasticSearchSink.java) - -### Flume - -* [Configuration](io-flume-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/java/org/apache/pulsar/io/flume/sink/StringSink.java) - -### HBase - -* [Configuration](io-hbase-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/hbase/src/main/java/org/apache/pulsar/io/hbase/HbaseAbstractConfig.java) - -### HDFS2 - -* [Configuration](io-hdfs2-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/hdfs2/src/main/java/org/apache/pulsar/io/hdfs2/AbstractHdfsConnector.java) - -### HDFS3 - -* [Configuration](io-hdfs3-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/hdfs3/src/main/java/org/apache/pulsar/io/hdfs3/AbstractHdfsConnector.java) - -### InfluxDB - -* [Configuration](io-influxdb-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/influxdb/src/main/java/org/apache/pulsar/io/influxdb/InfluxDBGenericRecordSink.java) - -### JDBC ClickHouse - -* [Configuration](io-jdbc-sink.md#configuration) - -* [Example](io-jdbc-sink.md#example-for-clickhouse) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/jdbc/clickhouse/src/main/java/org/apache/pulsar/io/jdbc/ClickHouseJdbcAutoSchemaSink.java) - -### JDBC MariaDB - -* [Configuration](io-jdbc-sink.md#configuration) - -* [Example](io-jdbc-sink.md#example-for-mariadb) - -* [Java 
class](https://github.com/apache/pulsar/blob/master/pulsar-io/jdbc/mariadb/src/main/java/org/apache/pulsar/io/jdbc/MariadbJdbcAutoSchemaSink.java) - -### JDBC PostgreSQL - -* [Configuration](io-jdbc-sink.md#configuration) - -* [Example](io-jdbc-sink.md#example-for-postgresql) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/jdbc/postgres/src/main/java/org/apache/pulsar/io/jdbc/PostgresJdbcAutoSchemaSink.java) - -### JDBC SQLite - -* [Configuration](io-jdbc-sink.md#configuration) - -* [Example](io-jdbc-sink.md#example-for-sqlite) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/jdbc/sqlite/src/main/java/org/apache/pulsar/io/jdbc/SqliteJdbcAutoSchemaSink.java) - -### Kafka - -* [Configuration](io-kafka-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSink.java) - -### Kinesis - -* [Configuration](io-kinesis-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/kinesis/src/main/java/org/apache/pulsar/io/kinesis/KinesisSink.java) - -### MongoDB - -* [Configuration](io-mongo-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/mongo/src/main/java/org/apache/pulsar/io/mongodb/MongoSink.java) - -### RabbitMQ - -* [Configuration](io-rabbitmq-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/rabbitmq/src/main/java/org/apache/pulsar/io/rabbitmq/RabbitMQSink.java) - -### Redis - -* [Configuration](io-redis-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/redis/src/main/java/org/apache/pulsar/io/redis/RedisAbstractConfig.java) - -### Solr - -* [Configuration](io-solr-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/solr/src/main/java/org/apache/pulsar/io/solr/SolrSinkConfig.java) - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/io-debezium-source.md b/site2/website/versioned_docs/version-2.8.2-deprecated/io-debezium-source.md deleted file mode 100644 index 8c3ba0cb20f252..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/io-debezium-source.md +++ /dev/null @@ -1,621 +0,0 @@ ---- -id: io-debezium-source -title: Debezium source connector -sidebar_label: "Debezium source connector" -original_id: io-debezium-source ---- - -The Debezium source connector pulls messages from MySQL or PostgreSQL -and persists the messages to Pulsar topics. - -## Configuration - -The configuration of Debezium source connector has the following properties. - -| Name | Required | Default | Description | -|------|----------|---------|-------------| -| `task.class` | true | null | A source task class that implemented in Debezium. | -| `database.hostname` | true | null | The address of a database server. | -| `database.port` | true | null | The port number of a database server.| -| `database.user` | true | null | The name of a database user that has the required privileges. | -| `database.password` | true | null | The password for a database user that has the required privileges. | -| `database.server.id` | true | null | The connector’s identifier that must be unique within a database cluster and similar to the database’s server-id configuration property. 
|
-| `database.server.name` | true | null | The logical name of a database server/cluster, which forms a namespace and is used in all the names of Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the Avro Connector is used. |
-| `database.whitelist` | false | null | A list of all databases hosted by this server that the connector monitors.

    This is optional, and there are other properties for listing databases and tables to include or exclude from monitoring. | -| `key.converter` | true | null | The converter provided by Kafka Connect to convert record key. | -| `value.converter` | true | null | The converter provided by Kafka Connect to convert record value. | -| `database.history` | true | null | The name of the database history class. | -| `database.history.pulsar.topic` | true | null | The name of the database history topic where the connector writes and recovers DDL statements.

**Note: this topic is for internal use only and should not be used by consumers.** |
-| `database.history.pulsar.service.url` | true | null | Pulsar cluster service URL for history topic. |
-| `pulsar.service.url` | true | null | Pulsar cluster service URL. |
-| `offset.storage.topic` | true | null | Records the last committed offsets that the connector successfully completed. |
-| `json-with-envelope` | false | false | Whether the consumed message consists of both schema and payload (`true`) or the payload only (`false`). |
-
-### Converter Options
-
-1. org.apache.kafka.connect.json.JsonConverter
-
-The `json-with-envelope` config is valid only for the JsonConverter. Its default value is false, in which case the consumer uses the schema
-`Schema.KeyValue(Schema.AUTO_CONSUME(), Schema.AUTO_CONSUME(), KeyValueEncodingType.SEPARATED)`
-and the message consists of the payload only.
-
-If the `json-with-envelope` value is true, the consumer uses the schema
-`Schema.KeyValue(Schema.BYTES, Schema.BYTES)`, and the message consists of both schema and payload.
-
-2. org.apache.pulsar.kafka.shade.io.confluent.connect.avro.AvroConverter
-
-If users select the AvroConverter, the Pulsar consumer should use the schema `Schema.KeyValue(Schema.AUTO_CONSUME(),
-Schema.AUTO_CONSUME(), KeyValueEncodingType.SEPARATED)`, and the message consists of the payload only. A consumer-side sketch is shown below.
-
-### MongoDB Configuration
-| Name | Required | Default | Description |
-|------|----------|---------|-------------|
-| `mongodb.hosts` | true | null | The comma-separated list of hostname and port pairs (in the form 'host' or 'host:port') of the MongoDB servers in the replica set. The list contains a single hostname and port pair. If mongodb.members.auto.discover is set to false, the host and port pair are prefixed with the replica set name (e.g., rs0/localhost:27017). |
-| `mongodb.name` | true | null | A unique name that identifies the connector and/or MongoDB replica set or sharded cluster that this connector monitors. Each server should be monitored by at most one Debezium connector, since this server name prefixes all persisted Kafka topics emanating from the MongoDB replica set or cluster. |
-| `mongodb.user` | true | null | Name of the database user to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. |
-| `mongodb.password` | true | null | Password to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. |
-| `mongodb.task.id` | true | null | The taskId of the MongoDB connector that attempts to use a separate task for each replica set. |
-
-
-
-## Example of MySQL
-
-You need to create a configuration file before using the Pulsar Debezium connector.
-
-### Configuration
-
-You can use one of the following methods to create a configuration file.
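-
-Before the JSON and YAML methods below, here is the consumer-side sketch matching the converter options described earlier. It is illustrative only: the topic and subscription names are hypothetical, and the `pulsar-client` Java dependency is assumed.
-
-```java
-
-import org.apache.pulsar.client.api.Consumer;
-import org.apache.pulsar.client.api.Message;
-import org.apache.pulsar.client.api.PulsarClient;
-import org.apache.pulsar.client.api.Schema;
-import org.apache.pulsar.client.api.schema.GenericRecord;
-import org.apache.pulsar.common.schema.KeyValue;
-import org.apache.pulsar.common.schema.KeyValueEncodingType;
-
-public class DebeziumKeyValueConsumer {
-    public static void main(String[] args) throws Exception {
-        PulsarClient client = PulsarClient.builder()
-                .serviceUrl("pulsar://127.0.0.1:6650")
-                .build();
-        // Default json-with-envelope=false: key and value are decoded separately.
-        Consumer<KeyValue<GenericRecord, GenericRecord>> consumer = client
-                .newConsumer(Schema.KeyValue(Schema.AUTO_CONSUME(), Schema.AUTO_CONSUME(),
-                        KeyValueEncodingType.SEPARATED))
-                .topic("public/default/dbserver1.inventory.products") // hypothetical topic
-                .subscriptionName("sub-products")
-                .subscribe();
-        while (true) {
-            Message<KeyValue<GenericRecord, GenericRecord>> msg = consumer.receive();
-            KeyValue<GenericRecord, GenericRecord> kv = msg.getValue();
-            System.out.println("key = " + kv.getKey() + ", value = " + kv.getValue());
-            consumer.acknowledge(msg);
-        }
-    }
-}
-
-```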
- -* JSON - - ```json - - { - "database.hostname": "localhost", - "database.port": "3306", - "database.user": "debezium", - "database.password": "dbz", - "database.server.id": "184054", - "database.server.name": "dbserver1", - "database.whitelist": "inventory", - "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory", - "database.history.pulsar.topic": "history-topic", - "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650", - "key.converter": "org.apache.kafka.connect.json.JsonConverter", - "value.converter": "org.apache.kafka.connect.json.JsonConverter", - "pulsar.service.url": "pulsar://127.0.0.1:6650", - "offset.storage.topic": "offset-topic" - } - - ``` - -* YAML - - You can create a `debezium-mysql-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/resources/debezium-mysql-source-config.yaml) below to the `debezium-mysql-source-config.yaml` file. - - ```yaml - - tenant: "public" - namespace: "default" - name: "debezium-mysql-source" - topicName: "debezium-mysql-topic" - archive: "connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar" - parallelism: 1 - - configs: - - ## config for mysql, docker image: debezium/example-mysql:0.8 - database.hostname: "localhost" - database.port: "3306" - database.user: "debezium" - database.password: "dbz" - database.server.id: "184054" - database.server.name: "dbserver1" - database.whitelist: "inventory" - database.history: "org.apache.pulsar.io.debezium.PulsarDatabaseHistory" - database.history.pulsar.topic: "history-topic" - database.history.pulsar.service.url: "pulsar://127.0.0.1:6650" - - ## KEY_CONVERTER_CLASS_CONFIG, VALUE_CONVERTER_CLASS_CONFIG - key.converter: "org.apache.kafka.connect.json.JsonConverter" - value.converter: "org.apache.kafka.connect.json.JsonConverter" - - ## PULSAR_SERVICE_URL_CONFIG - pulsar.service.url: "pulsar://127.0.0.1:6650" - - ## OFFSET_STORAGE_TOPIC_CONFIG - offset.storage.topic: "offset-topic" - - ``` - -### Usage - -This example shows how to change the data of a MySQL table using the Pulsar Debezium connector. - -1. Start a MySQL server with a database from which Debezium can capture changes. - - ```bash - - $ docker run -it --rm \ - --name mysql \ - -p 3306:3306 \ - -e MYSQL_ROOT_PASSWORD=debezium \ - -e MYSQL_USER=mysqluser \ - -e MYSQL_PASSWORD=mysqlpw debezium/example-mysql:0.8 - - ``` - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - -3. Start the Pulsar Debezium connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - Make sure the nar file is available at `connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar`. 
-
-     ```bash
-
-     $ bin/pulsar-admin source localrun \
-     --archive connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar \
-     --name debezium-mysql-source --destination-topic-name debezium-mysql-topic \
-     --tenant public \
-     --namespace default \
-     --source-config '{"database.hostname": "localhost","database.port": "3306","database.user": "debezium","database.password": "dbz","database.server.id": "184054","database.server.name": "dbserver1","database.whitelist": "inventory","database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory","database.history.pulsar.topic": "history-topic","database.history.pulsar.service.url": "pulsar://127.0.0.1:6650","key.converter": "org.apache.kafka.connect.json.JsonConverter","value.converter": "org.apache.kafka.connect.json.JsonConverter","pulsar.service.url": "pulsar://127.0.0.1:6650","offset.storage.topic": "offset-topic"}'
-
-     ```
-
-     :::note
-
-     Currently, the destination topic (specified by the `destination-topic-name` option) is a required configuration but it is not used for the Debezium connector to save data. The Debezium connector saves data in the following 4 types of topics:
-
-     - One topic named with the database server name (`database.server.name`) for storing the database metadata messages, such as `public/default/database.server.name`.
-     - One topic (`database.history.pulsar.topic`) for storing the database history information. The connector writes and recovers DDL statements on this topic.
-     - One topic (`offset.storage.topic`) for storing the offset metadata messages. The connector saves the last successfully-committed offsets on this topic.
-     - One per-table topic. The connector writes change events for all operations that occur in a table to a single Pulsar topic that is specific to that table.
-
-     If the automatic topic creation is disabled on your broker, you need to manually create the above 4 types of topics and the destination topic.
-
-     :::
-
-   * Use the **YAML** configuration file as shown previously.
-
-     ```bash
-
-     $ bin/pulsar-admin source localrun \
-     --source-config-file debezium-mysql-source-config.yaml
-
-     ```
-
-4. Subscribe to the topic _sub-products_ for the table _inventory.products_.
-
-   ```bash
-
-   $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0
-
-   ```
-
-5. Start a MySQL client in docker.
-
-   ```bash
-
-   $ docker run -it --rm \
-   --name mysqlterm \
-   --link mysql \
-   mysql:5.7 sh \
-   -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
-
-   ```
-
-6. A MySQL client prompt opens.
-
-   Use the following commands to change the data of the table _products_.
-
-   ```
-
-   mysql> use inventory;
-   mysql> show tables;
-   mysql> SELECT * FROM products;
-   mysql> UPDATE products SET name='1111111111' WHERE id=101;
-   mysql> UPDATE products SET name='1111111111' WHERE id=107;
-
-   ```
-
-   In the terminal window where you subscribed to the topic, you can see that the data changes have been recorded in the _sub-products_ topic.
-
-## Example of PostgreSQL
-
-You need to create a configuration file before using the Pulsar Debezium connector.
-
-### Configuration
-
-You can use one of the following methods to create a configuration file.
- -* JSON - - ```json - - { - "database.hostname": "localhost", - "database.port": "5432", - "database.user": "postgres", - "database.password": "changeme", - "database.dbname": "postgres", - "database.server.name": "dbserver1", - "plugin.name": "pgoutput", - "schema.whitelist": "public", - "table.whitelist": "public.users", - "pulsar.service.url": "pulsar://127.0.0.1:6650" - } - - ``` - -* YAML - - You can create a `debezium-postgres-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/resources/debezium-postgres-source-config.yaml) below to the `debezium-postgres-source-config.yaml` file. - - ```yaml - - tenant: "public" - namespace: "default" - name: "debezium-postgres-source" - topicName: "debezium-postgres-topic" - archive: "connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar" - parallelism: 1 - - configs: - - ## config for postgres version 10+, official docker image: postgres:<10+> - database.hostname: "localhost" - database.port: "5432" - database.user: "postgres" - database.password: "changeme" - database.dbname: "postgres" - database.server.name: "dbserver1" - plugin.name: "pgoutput" - schema.whitelist: "public" - table.whitelist: "public.users" - - ## PULSAR_SERVICE_URL_CONFIG - pulsar.service.url: "pulsar://127.0.0.1:6650" - - ``` - -Notice that `pgoutput` is a standard plugin of Postgres introduced in version 10 - [see Postgres architecture docu](https://www.postgresql.org/docs/10/logical-replication-architecture.html). You don't need to install anything, just make sure the WAL level is set to `logical` (see docker command below and [Postgres docu](https://www.postgresql.org/docs/current/runtime-config-wal.html)). - -### Usage - -This example shows how to change the data of a PostgreSQL table using the Pulsar Debezium connector. - - -1. Start a PostgreSQL server with a database from which Debezium can capture changes. - - ```bash - - $ docker run -d -it --rm \ - --name pulsar-postgres \ - -p 5432:5432 \ - -e POSTGRES_PASSWORD=changeme \ - postgres:13.3 -c wal_level=logical - - ``` - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - -3. Start the Pulsar Debezium connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - Make sure the nar file is available at `connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar`. - - ```bash - - $ bin/pulsar-admin source localrun \ - --archive connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar \ - --name debezium-postgres-source \ - --destination-topic-name debezium-postgres-topic \ - --tenant public \ - --namespace default \ - --source-config '{"database.hostname": "localhost","database.port": "5432","database.user": "postgres","database.password": "changeme","database.dbname": "postgres","database.server.name": "dbserver1","schema.whitelist": "public","table.whitelist": "public.users","pulsar.service.url": "pulsar://127.0.0.1:6650"}' - - ``` - - :::note - - Currently, the destination topic (specified by the `destination-topic-name` option ) is a required configuration but it is not used for the Debezium connector to save data. The Debezium connector saves data in the following 4 types of topics: - - - One topic named with the database server name ( `database.server.name`) for storing the database metadata messages, such as `public/default/database.server.name`. 
- One topic (`database.history.pulsar.topic`) for storing the database history information. The connector writes and recovers DDL statements on this topic.
  - One topic (`offset.storage.topic`) for storing the offset metadata messages. The connector saves the last successfully-committed offsets on this topic.
  - One per-table topic. The connector writes change events for all operations that occur in a table to a single Pulsar topic that is specific to that table.

  If the automatic topic creation is disabled on your broker, you need to manually create the above 4 types of topics and the destination topic.

  :::

  * Use the **YAML** configuration file as shown previously.

    ```bash
    
    $ bin/pulsar-admin source localrun \
    --source-config-file debezium-postgres-source-config.yaml
    
    ```

4. Subscribe to the topic _sub-users_ for the _public.users_ table.

   ```
   
   $ bin/pulsar-client consume -s "sub-users" public/default/dbserver1.public.users -n 0
   
   ```

5. Start a PostgreSQL client in docker.

   ```bash
   
   $ docker exec -it pulsar-postgres /bin/bash
   
   ```

6. A PostgreSQL client prompt opens.

   Use the following commands to create sample data in the table _users_.

   ```
   
   psql -U postgres -h localhost -p 5432
   Password for user postgres:
   
   CREATE TABLE users(
     id BIGINT GENERATED ALWAYS AS IDENTITY, PRIMARY KEY(id),
     hash_firstname TEXT NOT NULL,
     hash_lastname TEXT NOT NULL,
     gender VARCHAR(6) NOT NULL CHECK (gender IN ('male', 'female'))
   );
   
   INSERT INTO users(hash_firstname, hash_lastname, gender)
     SELECT md5(RANDOM()::TEXT), md5(RANDOM()::TEXT), CASE WHEN RANDOM() < 0.5 THEN 'male' ELSE 'female' END FROM generate_series(1, 100);
   
   postgres=# select * from users;
   
     id | hash_firstname                   | hash_lastname                    | gender
   -----+----------------------------------+----------------------------------+--------
      1 | 02bf7880eb489edc624ba637f5ab42bd | 3e742c2cc4217d8e3382cc251415b2fb | female
      2 | dd07064326bb9119189032316158f064 | 9c0e938f9eddbd5200ba348965afbc61 | male
      3 | 2c5316fdd9d6595c1cceb70eed12e80c | 8a93d7d8f9d76acfaaa625c82a03ea8b | female
      4 | 3dfa3b4f70d8cd2155567210e5043d2b | 32c156bc28f7f03ab5d28e2588a3dc19 | female
   
   
   postgres=# UPDATE users SET hash_firstname='maxim' WHERE id=1;
   UPDATE 1
   
   ```

   In the terminal window of the subscribed topic, you can receive messages such as the following.

   ```bash
   
   ----- got message -----
   {"before":null,"after":{"id":1,"hash_firstname":"maxim","hash_lastname":"292113d30a3ccee0e19733dd7f88b258","gender":"male"},"source":{"version":"1.0.0.Final","connector":"postgresql","name":"foobar","ts_ms":1624045862644,"snapshot":"false","db":"postgres","schema":"public","table":"users","txId":595,"lsn":24419784,"xmin":null},"op":"u","ts_ms":1624045862648}
   ...many more
   
   ```
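If you prefer consuming these change events from Java rather than the CLI, a minimal sketch follows. The topic matches the per-table topic from step 4, the subscription name `sub-users-java` is an arbitrary choice, and `Schema.AUTO_CONSUME()` lets the client decode whatever schema the connector attaches.

```java
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.Schema;
import org.apache.pulsar.client.api.schema.GenericRecord;

public class ConsumeDebeziumEvents {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://127.0.0.1:6650")
                .build();

        // AUTO_CONSUME decodes the schema the connector attached to the per-table topic.
        Consumer<GenericRecord> consumer = client.newConsumer(Schema.AUTO_CONSUME())
                .topic("public/default/dbserver1.public.users")
                .subscriptionName("sub-users-java") // arbitrary subscription name
                .subscribe();

        while (true) {
            Message<GenericRecord> msg = consumer.receive();
            // Each message is one change event (create, update, or delete).
            System.out.println(msg.getValue().getNativeObject());
            consumer.acknowledge(msg);
        }
    }
}
```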
## Example of MongoDB

You need to create a configuration file before using the Pulsar Debezium connector.

### Configuration

You can use one of the following methods to create a configuration file.

* JSON

  ```json
  
  {
    "mongodb.hosts": "rs0/mongodb:27017",
    "mongodb.name": "dbserver1",
    "mongodb.user": "debezium",
    "mongodb.password": "dbz",
    "mongodb.task.id": "1",
    "database.whitelist": "inventory",
    "pulsar.service.url": "pulsar://127.0.0.1:6650"
  }
  
  ```

* YAML

  You can create a `debezium-mongodb-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mongodb/src/main/resources/debezium-mongodb-source-config.yaml) below to the `debezium-mongodb-source-config.yaml` file.

  ```yaml
  
  tenant: "public"
  namespace: "default"
  name: "debezium-mongodb-source"
  topicName: "debezium-mongodb-topic"
  archive: "connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar"
  parallelism: 1

  configs:

    ## config for mongodb, docker image: debezium/example-mongodb:0.10
    mongodb.hosts: "rs0/mongodb:27017"
    mongodb.name: "dbserver1"
    mongodb.user: "debezium"
    mongodb.password: "dbz"
    mongodb.task.id: "1"
    database.whitelist: "inventory"

    ## PULSAR_SERVICE_URL_CONFIG
    pulsar.service.url: "pulsar://127.0.0.1:6650"

  ```

### Usage

This example shows how to change the data of a MongoDB collection using the Pulsar Debezium connector.


1. Start a MongoDB server with a database from which Debezium can capture changes.

   ```bash
   
   $ docker pull debezium/example-mongodb:0.10
   $ docker run -d -it --rm --name pulsar-mongodb -e MONGODB_USER=mongodb -e MONGODB_PASSWORD=mongodb -p 27017:27017 debezium/example-mongodb:0.10
   
   ```

   Use the following commands to initialize the data.

   ``` bash
   
   /usr/local/bin/init-inventory.sh
   
   ```

   If the local host cannot access the container network, you can update the `/etc/hosts` file and add a rule such as `127.0.0.1 f114527a95f`, where `f114527a95f` is the container ID. You can get the container ID with `docker ps -a`.


2. Start a Pulsar service locally in standalone mode.

   ```bash
   
   $ bin/pulsar standalone
   
   ```

3. Start the Pulsar Debezium connector in local run mode using one of the following methods.

   * Use the **JSON** configuration file as shown previously.

     Make sure the nar file is available at `connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar`.

     ```bash
     
     $ bin/pulsar-admin source localrun \
     --archive connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar \
     --name debezium-mongodb-source \
     --destination-topic-name debezium-mongodb-topic \
     --tenant public \
     --namespace default \
     --source-config '{"mongodb.hosts": "rs0/mongodb:27017","mongodb.name": "dbserver1","mongodb.user": "debezium","mongodb.password": "dbz","mongodb.task.id": "1","database.whitelist": "inventory","pulsar.service.url": "pulsar://127.0.0.1:6650"}'
     
     ```

     :::note

     Currently, the destination topic (specified by the `destination-topic-name` option ) is a required configuration but it is not used for the Debezium connector to save data. The Debezium connector saves data in the following 4 types of topics:

     - One topic named with the database server name ( `database.server.name`) for storing the database metadata messages, such as `public/default/database.server.name`.
     - One topic (`database.history.pulsar.topic`) for storing the database history information. The connector writes and recovers DDL statements on this topic.
     - One topic (`offset.storage.topic`) for storing the offset metadata messages. The connector saves the last successfully-committed offsets on this topic.
     - One per-table topic.
The connector writes change events for all operations that occur in a table to a single Pulsar topic that is specific to that table. - - If the automatic topic creation is disabled on your broker, you need to manually create the above 4 types of topics and the destination topic. - - ::: - - * Use the **YAML** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin source localrun \ - --source-config-file debezium-mongodb-source-config.yaml - - ``` - -4. Subscribe the topic _sub-products_ for the _inventory.products_ table. - - ``` - - $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0 - - ``` - -5. Start a MongoDB client in docker. - - ```bash - - $ docker exec -it pulsar-mongodb /bin/bash - - ``` - -6. A MongoDB client pops out. - - ```bash - - mongo -u debezium -p dbz --authenticationDatabase admin localhost:27017/inventory - db.products.update({"_id":NumberLong(104)},{$set:{weight:1.25}}) - - ``` - - In the terminal window of subscribing topic, you can receive the following messages. - - ```bash - - ----- got message ----- - {"schema":{"type":"struct","fields":[{"type":"string","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":"104"}}, value = {"schema":{"type":"struct","fields":[{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"after"},{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"patch"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"version"},{"type":"string","optional":false,"field":"connector"},{"type":"string","optional":false,"field":"name"},{"type":"int64","optional":false,"field":"ts_ms"},{"type":"string","optional":true,"name":"io.debezium.data.Enum","version":1,"parameters":{"allowed":"true,last,false"},"default":"false","field":"snapshot"},{"type":"string","optional":false,"field":"db"},{"type":"string","optional":false,"field":"rs"},{"type":"string","optional":false,"field":"collection"},{"type":"int32","optional":false,"field":"ord"},{"type":"int64","optional":true,"field":"h"}],"optional":false,"name":"io.debezium.connector.mongo.Source","field":"source"},{"type":"string","optional":true,"field":"op"},{"type":"int64","optional":true,"field":"ts_ms"}],"optional":false,"name":"dbserver1.inventory.products.Envelope"},"payload":{"after":"{\"_id\": {\"$numberLong\": \"104\"},\"name\": \"hammer\",\"description\": \"12oz carpenter's hammer\",\"weight\": 1.25,\"quantity\": 4}","patch":null,"source":{"version":"0.10.0.Final","connector":"mongodb","name":"dbserver1","ts_ms":1573541905000,"snapshot":"true","db":"inventory","rs":"rs0","collection":"products","ord":1,"h":4983083486544392763},"op":"r","ts_ms":1573541909761}}. 
  ```

## FAQ

### The Debezium postgres connector hangs when creating a snapshot

```

#18 prio=5 os_prio=31 tid=0x00007fd83096f800 nid=0xa403 waiting on condition [0x000070000f534000]
    java.lang.Thread.State: WAITING (parking)
     at sun.misc.Unsafe.park(Native Method)
     - parking to wait for <0x00000007ab025a58> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
     at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
     at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
     at java.util.concurrent.LinkedBlockingDeque.putLast(LinkedBlockingDeque.java:396)
     at java.util.concurrent.LinkedBlockingDeque.put(LinkedBlockingDeque.java:649)
     at io.debezium.connector.base.ChangeEventQueue.enqueue(ChangeEventQueue.java:132)
     at io.debezium.connector.postgresql.PostgresConnectorTask$Lambda$203/385424085.accept(Unknown Source)
     at io.debezium.connector.postgresql.RecordsSnapshotProducer.sendCurrentRecord(RecordsSnapshotProducer.java:402)
     at io.debezium.connector.postgresql.RecordsSnapshotProducer.readTable(RecordsSnapshotProducer.java:321)
     at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$takeSnapshot$6(RecordsSnapshotProducer.java:226)
     at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$240/1347039967.accept(Unknown Source)
     at io.debezium.jdbc.JdbcConnection.queryWithBlockingConsumer(JdbcConnection.java:535)
     at io.debezium.connector.postgresql.RecordsSnapshotProducer.takeSnapshot(RecordsSnapshotProducer.java:224)
     at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$start$0(RecordsSnapshotProducer.java:87)
     at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$206/589332928.run(Unknown Source)
     at java.util.concurrent.CompletableFuture.uniRun(CompletableFuture.java:705)
     at java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:717)
     at java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2010)
     at io.debezium.connector.postgresql.RecordsSnapshotProducer.start(RecordsSnapshotProducer.java:87)
     at io.debezium.connector.postgresql.PostgresConnectorTask.start(PostgresConnectorTask.java:126)
     at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:47)
     at org.apache.pulsar.io.kafka.connect.KafkaConnectSource.open(KafkaConnectSource.java:127)
     at org.apache.pulsar.io.debezium.DebeziumSource.open(DebeziumSource.java:100)
     at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupInput(JavaInstanceRunnable.java:690)
     at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupJavaInstance(JavaInstanceRunnable.java:200)
     at org.apache.pulsar.functions.instance.JavaInstanceRunnable.run(JavaInstanceRunnable.java:230)
     at java.lang.Thread.run(Thread.java:748)

```

If you encounter the above problem when synchronizing data, refer to [this issue](https://github.com/apache/pulsar/issues/4075) and add the following configuration to the configuration file, setting `max.queue.size` to a value large enough for your snapshot:

```

max.queue.size=

```
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/io-debug.md b/site2/website/versioned_docs/version-2.8.2-deprecated/io-debug.md
deleted file mode 100644
index 844e101d00d2a7..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/io-debug.md
+++ /dev/null
@@ -1,407 +0,0 @@
----
-id: io-debug
-title: How to debug Pulsar connectors
-sidebar_label: "Debug"
-original_id: io-debug
----
-This guide explains how to debug connectors in localrun or cluster mode and gives a debugging checklist.
To better demonstrate how to debug Pulsar connectors, this guide takes a Mongo sink connector as an example.

**Deploy a Mongo sink environment**
1. Start a Mongo service.

   ```bash
   
   docker pull mongo:4
   docker run -d -p 27017:27017 --name pulsar-mongo -v $PWD/data:/data/db mongo:4
   
   ```

2. Create a DB and a collection.

   ```bash
   
   docker exec -it pulsar-mongo /bin/bash
   mongo
   > use pulsar
   > db.createCollection('messages')
   > exit
   
   ```

3. Start Pulsar standalone.

   ```bash
   
   docker pull apachepulsar/pulsar:2.4.0
   docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --link pulsar-mongo --name pulsar-mongo-standalone apachepulsar/pulsar:2.4.0 bin/pulsar standalone
   
   ```

4. Configure the Mongo sink with the `mongo-sink-config.yaml` file and copy it into the container.

   ```yaml
   
   configs:
     mongoUri: "mongodb://pulsar-mongo:27017"
     database: "pulsar"
     collection: "messages"
     batchSize: 2
     batchTimeMs: 500
   
   ```

   ```bash
   
   docker cp mongo-sink-config.yaml pulsar-mongo-standalone:/pulsar/
   
   ```

5. Download the Mongo sink nar package.

   ```bash
   
   docker exec -it pulsar-mongo-standalone /bin/bash
   curl -O http://apache.01link.hk/pulsar/pulsar-2.4.0/connectors/pulsar-io-mongo-2.4.0.nar
   
   ```

## Debug in localrun mode
Start the Mongo sink in localrun mode using the `localrun` command.
:::tip

For more information about the `localrun` command, see [`localrun`](reference-connector-admin.md/#localrun-1).

:::

```bash

./bin/pulsar-admin sinks localrun \
--archive pulsar-io-mongo-2.4.0.nar \
--tenant public --namespace default \
--inputs test-mongo \
--name pulsar-mongo-sink \
--sink-config-file mongo-sink-config.yaml \
--parallelism 1

```

### Use connector log
Use one of the following methods to get a connector log in localrun mode:
* After executing the `localrun` command, the **log is automatically printed on the console**.
* The log is located at:

  ```bash
  
  logs/functions/tenant/namespace/function-name/function-name-instance-id.log
  
  ```

  **Example**

  The path of the Mongo sink connector is:

  ```bash
  
  logs/functions/public/default/pulsar-mongo-sink/pulsar-mongo-sink-0.log
  
  ```

To explain the log information clearly, this guide breaks the large block of information into small blocks and adds a description for each block.
* This piece of log information shows the storage path of the nar package after decompression.

  ```
  
  08:21:54.132 [main] INFO org.apache.pulsar.common.nar.NarClassLoader - Created class loader with paths: [file:/tmp/pulsar-nar/pulsar-io-mongo-2.4.0.nar-unpacked/, file:/tmp/pulsar-nar/pulsar-io-mongo-2.4.0.nar-unpacked/META-INF/bundled-dependencies/,
  
  ```

  :::tip

  If a `class cannot be found` exception is thrown, check whether the nar file has been decompressed into the folder `file:/tmp/pulsar-nar/pulsar-io-mongo-2.4.0.nar-unpacked/META-INF/bundled-dependencies/` or not.

  :::

* This piece of log information illustrates the basic information about the Mongo sink connector, such as tenant, namespace, name, parallelism, resources, and so on, which can be used to **check whether the Mongo sink connector is configured correctly or not**.
- - ```bash - - 08:21:55.390 [main] INFO org.apache.pulsar.functions.runtime.ThreadRuntime - ThreadContainer starting function with instance config InstanceConfig(instanceId=0, functionId=853d60a1-0c48-44d5-9a5c-6917386476b2, functionVersion=c2ce1458-b69e-4175-88c0-a0a856a2be8c, functionDetails=tenant: "public" - namespace: "default" - name: "pulsar-mongo-sink" - className: "org.apache.pulsar.functions.api.utils.IdentityFunction" - autoAck: true - parallelism: 1 - source { - typeClassName: "[B" - inputSpecs { - key: "test-mongo" - value { - } - } - cleanupSubscription: true - } - sink { - className: "org.apache.pulsar.io.mongodb.MongoSink" - configs: "{\"mongoUri\":\"mongodb://pulsar-mongo:27017\",\"database\":\"pulsar\",\"collection\":\"messages\",\"batchSize\":2,\"batchTimeMs\":500}" - typeClassName: "[B" - } - resources { - cpu: 1.0 - ram: 1073741824 - disk: 10737418240 - } - componentType: SINK - , maxBufferedTuples=1024, functionAuthenticationSpec=null, port=38459, clusterName=local) - - ``` - -* This piece of log information demonstrates the status of the connections to Mongo and configuration information. - - ```bash - - 08:21:56.231 [cluster-ClusterId{value='5d6396a3c9e77c0569ff00eb', description='null'}-pulsar-mongo:27017] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:1, serverValue:8}] to pulsar-mongo:27017 - 08:21:56.326 [cluster-ClusterId{value='5d6396a3c9e77c0569ff00eb', description='null'}-pulsar-mongo:27017] INFO org.mongodb.driver.cluster - Monitor thread successfully connected to server with description ServerDescription{address=pulsar-mongo:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[4, 2, 0]}, minWireVersion=0, maxWireVersion=8, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=89058800} - - ``` - -* This piece of log information explains the configuration of consumers and clients, including the topic name, subscription name, subscription type, and so on. 
- - ```bash - - 08:21:56.719 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl - Starting Pulsar consumer status recorder with config: { - "topicNames" : [ "test-mongo" ], - "topicsPattern" : null, - "subscriptionName" : "public/default/pulsar-mongo-sink", - "subscriptionType" : "Shared", - "receiverQueueSize" : 1000, - "acknowledgementsGroupTimeMicros" : 100000, - "negativeAckRedeliveryDelayMicros" : 60000000, - "maxTotalReceiverQueueSizeAcrossPartitions" : 50000, - "consumerName" : null, - "ackTimeoutMillis" : 0, - "tickDurationMillis" : 1000, - "priorityLevel" : 0, - "cryptoFailureAction" : "CONSUME", - "properties" : { - "application" : "pulsar-sink", - "id" : "public/default/pulsar-mongo-sink", - "instance_id" : "0" - }, - "readCompacted" : false, - "subscriptionInitialPosition" : "Latest", - "patternAutoDiscoveryPeriod" : 1, - "regexSubscriptionMode" : "PersistentOnly", - "deadLetterPolicy" : null, - "autoUpdatePartitions" : true, - "replicateSubscriptionState" : false, - "resetIncludeHead" : false - } - 08:21:56.726 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl - Pulsar client config: { - "serviceUrl" : "pulsar://localhost:6650", - "authPluginClassName" : null, - "authParams" : null, - "operationTimeoutMs" : 30000, - "statsIntervalSeconds" : 60, - "numIoThreads" : 1, - "numListenerThreads" : 1, - "connectionsPerBroker" : 1, - "useTcpNoDelay" : true, - "useTls" : false, - "tlsTrustCertsFilePath" : null, - "tlsAllowInsecureConnection" : false, - "tlsHostnameVerificationEnable" : false, - "concurrentLookupRequest" : 5000, - "maxLookupRequest" : 50000, - "maxNumberOfRejectedRequestPerConnection" : 50, - "keepAliveIntervalSeconds" : 30, - "connectionTimeoutMs" : 10000, - "requestTimeoutMs" : 60000, - "defaultBackoffIntervalNanos" : 100000000, - "maxBackoffIntervalNanos" : 30000000000 - } - - ``` - -## Debug in cluster mode -You can use the following methods to debug a connector in cluster mode: -* [Use connector log](#use-connector-log) -* [Use admin CLI](#use-admin-cli) -### Use connector log -In cluster mode, multiple connectors can run on a worker. To find the log path of a specified connector, use the `workerId` to locate the connector log. -### Use admin CLI -Pulsar admin CLI helps you debug Pulsar connectors with the following subcommands: -* [`get`](#get) - -* [`status`](#status) -* [`topics stats`](#topics-stats) - -**Create a Mongo sink** - -```bash - -./bin/pulsar-admin sinks create \ ---archive pulsar-io-mongo-2.4.0.nar \ ---tenant public \ ---namespace default \ ---inputs test-mongo \ ---name pulsar-mongo-sink \ ---sink-config-file mongo-sink-config.yaml \ ---parallelism 1 - -``` - -### `get` -Use the `get` command to get the basic information about the Mongo sink connector, such as tenant, namespace, name, parallelism, and so on. 
```bash

./bin/pulsar-admin sinks get --tenant public --namespace default --name pulsar-mongo-sink
{
  "tenant": "public",
  "namespace": "default",
  "name": "pulsar-mongo-sink",
  "className": "org.apache.pulsar.io.mongodb.MongoSink",
  "inputSpecs": {
    "test-mongo": {
      "isRegexPattern": false
    }
  },
  "configs": {
    "mongoUri": "mongodb://pulsar-mongo:27017",
    "database": "pulsar",
    "collection": "messages",
    "batchSize": 2.0,
    "batchTimeMs": 500.0
  },
  "parallelism": 1,
  "processingGuarantees": "ATLEAST_ONCE",
  "retainOrdering": false,
  "autoAck": true
}

```

:::tip

For more information about the `get` command, see [`get`](reference-connector-admin.md/#get-1).

:::

### `status`
Use the `status` command to get the current status of the Mongo sink connector, such as the number of instances, the number of running instances, the `instanceId`, the `workerId`, and so on.

```bash

./bin/pulsar-admin sinks status \
--tenant public \
--namespace default \
--name pulsar-mongo-sink
{
"numInstances" : 1,
"numRunning" : 1,
"instances" : [ {
  "instanceId" : 0,
  "status" : {
    "running" : true,
    "error" : "",
    "numRestarts" : 0,
    "numReadFromPulsar" : 0,
    "numSystemExceptions" : 0,
    "latestSystemExceptions" : [ ],
    "numSinkExceptions" : 0,
    "latestSinkExceptions" : [ ],
    "numWrittenToSink" : 0,
    "lastReceivedTime" : 0,
    "workerId" : "c-standalone-fw-5d202832fd18-8080"
  }
} ]
}

```

:::tip

For more information about the `status` command, see [`status`](reference-connector-admin.md/#status-1).
If there are multiple connectors running on a worker, the `workerId` can locate the worker on which the specified connector is running.

:::

### `topics stats`
Use the `topics stats` command to get the stats for a topic and its connected producers and consumers, such as whether the topic has received messages or not, whether there is a backlog of messages or not, the available permits, and other key information. All rates are computed over a 1-minute window and are relative to the last completed 1-minute period.

```bash

./bin/pulsar-admin topics stats test-mongo
{
  "msgRateIn" : 0.0,
  "msgThroughputIn" : 0.0,
  "msgRateOut" : 0.0,
  "msgThroughputOut" : 0.0,
  "averageMsgSize" : 0.0,
  "storageSize" : 1,
  "publishers" : [ ],
  "subscriptions" : {
    "public/default/pulsar-mongo-sink" : {
      "msgRateOut" : 0.0,
      "msgThroughputOut" : 0.0,
      "msgRateRedeliver" : 0.0,
      "msgBacklog" : 0,
      "blockedSubscriptionOnUnackedMsgs" : false,
      "msgDelayed" : 0,
      "unackedMessages" : 0,
      "type" : "Shared",
      "msgRateExpired" : 0.0,
      "consumers" : [ {
        "msgRateOut" : 0.0,
        "msgThroughputOut" : 0.0,
        "msgRateRedeliver" : 0.0,
        "consumerName" : "dffdd",
        "availablePermits" : 999,
        "unackedMessages" : 0,
        "blockedConsumerOnUnackedMsgs" : false,
        "metadata" : {
          "instance_id" : "0",
          "application" : "pulsar-sink",
          "id" : "public/default/pulsar-mongo-sink"
        },
        "connectedSince" : "2019-08-26T08:48:07.582Z",
        "clientVersion" : "2.4.0",
        "address" : "/172.17.0.3:57790"
      } ],
      "isReplicated" : false
    }
  },
  "replication" : { },
  "deduplicationStatus" : "Disabled"
}

```

:::tip

For more information about the `topic stats` command, see [`topic stats`](http://pulsar.apache.org/docs/en/pulsar-admin/#stats-1).

:::
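The same information is available programmatically through the Java admin client. The sketch below assumes a local standalone deployment (admin URL `http://localhost:8080`) and the Mongo sink deployed above; it illustrates the admin API rather than any dedicated debugging tool.

```java
import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.common.policies.data.SinkStatus;
import org.apache.pulsar.common.policies.data.TopicStats;

public class CheckSinkHealth {
    public static void main(String[] args) throws Exception {
        PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080") // assumed standalone admin URL
                .build();

        // Equivalent to `pulsar-admin sinks status`.
        SinkStatus status = admin.sinks().getSinkStatus("public", "default", "pulsar-mongo-sink");
        System.out.println("running instances: " + status.getNumRunning());

        // Equivalent to `pulsar-admin topics stats`.
        TopicStats stats = admin.topics().getStats("test-mongo");
        System.out.println("backlog: " + stats.getSubscriptions()
                .get("public/default/pulsar-mongo-sink").getMsgBacklog());

        admin.close();
    }
}
```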
## Checklist
This checklist indicates the major areas to check when you debug connectors. It reminds you of what to look for, both to ensure a thorough review and to evaluate the status of connectors.

* Does Pulsar start successfully?

* Does the external service run normally?

* Is the nar package complete?

* Is the connector configuration file correct?

* In localrun mode, run a connector and check the printed information (connector log) on the console.

* In cluster mode:

  * Use the `get` command to get the basic information.

  * Use the `status` command to get the current status.

  * Use the `topics stats` command to get the stats for a specified topic and its connected producers and consumers.

  * Check the connector log.

* Enter into the external system and verify the result.
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/io-develop.md b/site2/website/versioned_docs/version-2.8.2-deprecated/io-develop.md
deleted file mode 100644
index d6f4f8261ac820..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/io-develop.md
+++ /dev/null
@@ -1,421 +0,0 @@
----
-id: io-develop
-title: How to develop Pulsar connectors
-sidebar_label: "Develop"
-original_id: io-develop
----

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````


This guide describes how to develop Pulsar connectors to move data
between Pulsar and other systems.

Pulsar connectors are special [Pulsar Functions](functions-overview.md), so creating
a Pulsar connector is similar to creating a Pulsar function.

Pulsar connectors come in two types:

| Type | Description | Example
|---|---|---
{@inject: github:Source:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java}|Import data from another system to Pulsar.|[RabbitMQ source connector](io-rabbitmq.md) imports the messages of a RabbitMQ queue to a Pulsar topic.
{@inject: github:Sink:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java}|Export data from Pulsar to another system.|[Kinesis sink connector](io-kinesis.md) exports the messages of a Pulsar topic to a Kinesis stream.

## Develop

You can develop Pulsar source connectors and sink connectors.

### Source

Developing a source connector means implementing the {@inject: github:Source:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java}
interface, that is, implementing the {@inject: github:open:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} method and the {@inject: github:read:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} method.

1. Implement the {@inject: github:open:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} method.

   ```java
   
   /**
    * Open connector with configuration
    *
    * @param config initialization config
    * @param sourceContext
    * @throws Exception IO type exceptions when opening a connector
    */
   void open(final Map<String, Object> config, SourceContext sourceContext) throws Exception;
   
   ```

   This method is called when the source connector is initialized.

   In this method, you can retrieve all connector-specific settings through the passed-in `config` parameter and initialize all necessary resources.

   For example, a Kafka connector can create a Kafka client in this `open` method.

   Besides, the Pulsar runtime also provides a `SourceContext` for the
   connector to access runtime resources for tasks like collecting metrics. The implementation can save the `SourceContext` for future use.

2. Implement the {@inject: github:read:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} method.

   ```java
   
   /**
    * Reads the next message from source.
    * If source does not have any new messages, this call should block.
    * @return next message from source. The return result should never be null
    * @throws Exception
    */
   Record<T> read() throws Exception;
   
   ```

   If there is nothing to return, the implementation should block rather than return `null`.

   The returned {@inject: github:Record:/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/Record.java} should encapsulate the following information, which is needed by the Pulsar IO runtime.

   * {@inject: github:Record:/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/Record.java} should provide the following variables:

     |Variable|Required|Description
     |---|---|---
     `TopicName`|No|The Pulsar topic from which the record originates.
     `Key`|No|Messages can optionally be tagged with keys.<br/><br/>For more information, see [Routing modes](concepts-messaging.md#routing-modes).|
     `Value`|Yes|Actual data of the record.
     `EventTime`|No|Event time of the record from the source.
     `PartitionId`|No|If the record originates from a partitioned source, it returns its `PartitionId`.<br/><br/>`PartitionId` is used as a part of the unique identifier by the Pulsar IO runtime to deduplicate messages and achieve the exactly-once processing guarantee.
     `RecordSequence`|No|If the record originates from a sequential source, it returns its `RecordSequence`.<br/><br/>`RecordSequence` is used as a part of the unique identifier by the Pulsar IO runtime to deduplicate messages and achieve the exactly-once processing guarantee.
     `Properties`|No|If the record carries user-defined properties, it returns those properties.
     `DestinationTopic`|No|The topic to which the message should be written.
     `Message`|No|A class which carries data sent by users.<br/><br/>For more information, see [Message.java](https://github.com/apache/pulsar/blob/master/pulsar-client-api/src/main/java/org/apache/pulsar/client/api/Message.java).|

   * {@inject: github:Record:/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/Record.java} should provide the following methods:

     Method|Description
     |---|---
     `ack`|Acknowledge that the record is fully processed.
     `fail`|Indicate that the record fails to be processed.
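To make the two tables above concrete, here is a minimal, hypothetical `Record<byte[]>` implementation. Only `getValue` is strictly required; the other accessors, as well as `ack` and `fail`, have default implementations that you override as needed.

```java
import java.util.Optional;
import org.apache.pulsar.functions.api.Record;

public class SimpleByteRecord implements Record<byte[]> {

    private final byte[] payload;

    public SimpleByteRecord(byte[] payload) {
        this.payload = payload;
    }

    @Override
    public Optional<String> getKey() {
        return Optional.empty(); // no key, so the message is not tagged
    }

    @Override
    public byte[] getValue() {
        return payload; // the only required variable
    }

    @Override
    public void ack() {
        // invoked by the runtime once the record is fully processed,
        // e.g. commit the corresponding position in the external system
    }

    @Override
    public void fail() {
        // invoked if processing fails, e.g. schedule the record for redelivery
    }
}
```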
## Handle schema information

Pulsar IO automatically handles the schema and provides a strongly typed API based on Java generics.
If you know the schema type that you are producing, you can declare the Java class relative to that type in your source declaration.

```

public class MySource implements Source<MyClass> {
    public Record<MyClass> read() {}
}

```

If you want to implement a source that works with any schema, you can go with `byte[]` (or `ByteBuffer`) and use `Schema.AUTO_PRODUCE_BYTES()`.

```

public class MySource implements Source<byte[]> {
    public Record<byte[]> read() {
        
        Schema wantedSchema = ....
        Record<byte[]> myRecord = new MyRecordImplementation();
        ....
    }
    class MyRecordImplementation implements Record<byte[]> {
        public byte[] getValue() {
            return ....encoded byte[]...that represents the value
        }
        public Schema<byte[]> getSchema() {
            return Schema.AUTO_PRODUCE_BYTES(wantedSchema);
        }
    }
}

```

To handle the `KeyValue` type properly, follow these guidelines for your record implementation:
- It must implement the {@inject: github:KVRecord:/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/KVRecord.java} interface and implement `getKeySchema`, `getValueSchema`, and `getKeyValueEncodingType`
- It must return a `KeyValue` object as `Record.getValue()`
- It may return null in `Record.getSchema()`

When the Pulsar IO runtime encounters a `KVRecord`, it automatically:
- Sets the `KeyValueSchema` properly
- Encodes the message key and the message value according to the `KeyValueEncodingType` (SEPARATED or INLINE)
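Following these guidelines, a minimal, hypothetical `KVRecord` sketch looks like the following; the `String`/`Integer` types and the `SEPARATED` encoding are illustrative choices.

```java
import org.apache.pulsar.client.api.Schema;
import org.apache.pulsar.common.schema.KeyValue;
import org.apache.pulsar.common.schema.KeyValueEncodingType;
import org.apache.pulsar.functions.api.KVRecord;

public class UserKVRecord implements KVRecord<String, Integer> {

    private final KeyValue<String, Integer> kv;

    public UserKVRecord(String key, Integer value) {
        this.kv = new KeyValue<>(key, value);
    }

    @Override
    public KeyValue<String, Integer> getValue() {
        return kv; // guideline 2: the record value is the KeyValue object
    }

    @Override
    public Schema<String> getKeySchema() {
        return Schema.STRING;
    }

    @Override
    public Schema<Integer> getValueSchema() {
        return Schema.INT32;
    }

    @Override
    public KeyValueEncodingType getKeyValueEncodingType() {
        return KeyValueEncodingType.SEPARATED; // the key travels in the message key
    }
}
```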
:::tip

For more information about **how to create a source connector**, see {@inject: github:KafkaSource:/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSource.java}.

:::

### Sink

Developing a sink connector **is similar to** developing a source connector, that is, you need to implement the {@inject: github:Sink:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} interface, which means implementing the {@inject: github:open:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} method and the {@inject: github:write:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} method.

1. Implement the {@inject: github:open:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} method.

   ```java
   
   /**
    * Open connector with configuration
    *
    * @param config initialization config
    * @param sinkContext
    * @throws Exception IO type exceptions when opening a connector
    */
   void open(final Map<String, Object> config, SinkContext sinkContext) throws Exception;
   
   ```

2. Implement the {@inject: github:write:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} method.

   ```java
   
   /**
    * Write a message to Sink
    * @param record record to write to sink
    * @throws Exception
    */
   void write(Record<T> record) throws Exception;
   
   ```

   During the implementation, you can decide how to write the `Value` and
   the `Key` to the actual sink, and leverage all the provided information such as
   `PartitionId` and `RecordSequence` to achieve different processing guarantees.

   You also need to ack records (if messages are sent successfully) or fail records (if messages fail to send).

## Handling Schema information

Pulsar IO automatically handles the schema and provides a strongly typed API based on Java generics.
If you know the schema type that you are consuming from, you can declare the Java class relative to that type in your sink declaration.

```

public class MySink implements Sink<MyClass> {
    public void write(Record<MyClass> record) {}
}

```

If you want to implement a sink that works with any schema, you can go with the special `GenericObject` interface.

```

public class MySink implements Sink<GenericObject> {
    public void write(Record<GenericObject> record) {
        Schema<GenericObject> schema = record.getSchema();
        GenericObject genericObject = record.getValue();
        if (genericObject != null) {
            SchemaType type = genericObject.getSchemaType();
            Object nativeObject = genericObject.getNativeObject();
            ...
        }
        ....
    }
}

```

In the case of AVRO, JSON, and Protobuf records (schemaType=AVRO,JSON,PROTOBUF_NATIVE), you can cast the
`genericObject` variable to `GenericRecord` and use the `getFields()` and `getField()` APIs.
You are able to access the native AVRO record using `genericObject.getNativeObject()`.

In the case of the KeyValue type, you can access both the schema for the key and the schema for the value using this code.

```

public class MySink implements Sink<GenericObject> {
    public void write(Record<GenericObject> record) {
        Schema<GenericObject> schema = record.getSchema();
        GenericObject genericObject = record.getValue();
        SchemaType type = genericObject.getSchemaType();
        Object nativeObject = genericObject.getNativeObject();
        if (type == SchemaType.KEY_VALUE) {
            KeyValue keyValue = (KeyValue) nativeObject;
            Object key = keyValue.getKey();
            Object value = keyValue.getValue();
            
            KeyValueSchema keyValueSchema = (KeyValueSchema) schema;
            Schema keySchema = keyValueSchema.getKeySchema();
            Schema valueSchema = keyValueSchema.getValueSchema();
        }
        ....
    }
}

```

## Test

Testing connectors can be challenging because Pulsar IO connectors interact with two systems
that may be difficult to mock: Pulsar and the system to which the connector is connecting.

It is recommended that you write dedicated tests for the connector functionality, as described below,
while mocking the external service.

### Unit test

You can create unit tests for your connector.
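As a sketch of such a unit test, the snippet below uses TestNG and Mockito against a hypothetical `MySink` typed `Sink<byte[]>` with a hypothetical `batchSize` option; the external service is assumed to be mocked inside the sink or replaced with a test double.

```java
import static org.mockito.Mockito.mock;

import java.util.HashMap;
import java.util.Map;
import org.apache.pulsar.io.core.SinkContext;
import org.testng.annotations.Test;

public class MySinkTest {

    @Test
    public void writeAcceptsRecordsAfterOpen() throws Exception {
        MySink sink = new MySink(); // hypothetical sink under test

        Map<String, Object> config = new HashMap<>();
        config.put("batchSize", 2); // hypothetical config key

        sink.open(config, mock(SinkContext.class));

        // getValue() is the only abstract method on Record, so a lambda
        // can stand in for a record in tests.
        sink.write(() -> "first".getBytes());
        sink.write(() -> "second".getBytes());

        sink.close();
        // A real test would assert against the mocked external client here.
    }
}
```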
- -:::note - -If you plan to package and distribute your connector for others to use, you are obligated to - -::: - -license and copyright your own code properly. Remember to add the license and copyright to -all libraries your code uses and to your distribution. -> -> If you use the [NAR](#nar) method, the NAR plugin -automatically creates a `DEPENDENCIES` file in the generated NAR package, including the proper -licensing and copyrights of all libraries of your connector. - -### NAR - -**NAR** stands for NiFi Archive, which is a custom packaging mechanism used by Apache NiFi, to provide -a bit of Java ClassLoader isolation. - -:::tip - -For more information about **how NAR works**, see [here](https://medium.com/hashmapinc/nifi-nar-files-explained-14113f7796fd). - -::: - -Pulsar uses the same mechanism for packaging **all** [built-in connectors](io-connectors.md). - -The easiest approach to package a Pulsar connector is to create a NAR package using [nifi-nar-maven-plugin](https://mvnrepository.com/artifact/org.apache.nifi/nifi-nar-maven-plugin). - -Include this [nifi-nar-maven-plugin](https://mvnrepository.com/artifact/org.apache.nifi/nifi-nar-maven-plugin) in your maven project for your connector as below. - -```xml - - - - org.apache.nifi - nifi-nar-maven-plugin - 1.2.0 - - - -``` - -You must also create a `resources/META-INF/services/pulsar-io.yaml` file with the following contents: - -```yaml - -name: connector name -description: connector description -sourceClass: fully qualified class name (only if source connector) -sinkClass: fully qualified class name (only if sink connector) - -``` - -For Gradle users, there is a [Gradle Nar plugin available on the Gradle Plugin Portal](https://plugins.gradle.org/plugin/io.github.lhotari.gradle-nar-plugin). - -:::tip - -For more information about an **how to use NAR for Pulsar connectors**, see {@inject: github:TwitterFirehose:/pulsar-io/twitter/pom.xml}. - -::: - -### Uber JAR - -An alternative approach is to create an **uber JAR** that contains all of the connector's JAR files -and other resource files. No directory internal structure is necessary. - -You can use [maven-shade-plugin](https://maven.apache.org/plugins/maven-shade-plugin/examples/includes-excludes.html) to create a uber JAR as below: - -```xml - - - org.apache.maven.plugins - maven-shade-plugin - 3.1.1 - - - package - - shade - - - - - *:* - - - - - - - -``` - -## Monitor - -Pulsar connectors enable you to move data in and out of Pulsar easily. It is important to ensure that the running connectors are healthy at any time. You can monitor Pulsar connectors that have been deployed with the following methods: - -- Check the metrics provided by Pulsar. - - Pulsar connectors expose the metrics that can be collected and used for monitoring the health of **Java** connectors. You can check the metrics by following the [monitoring](deploy-monitoring.md) guide. - -- Set and check your customized metrics. - - In addition to the metrics provided by Pulsar, Pulsar allows you to customize metrics for **Java** connectors. Function workers collect user-defined metrics to Prometheus automatically and you can check them in Grafana. - -Here is an example of how to customize metrics for a Java connector. 
````mdx-code-block
<Tabs
  defaultValue="Java"
  values={[{"label":"Java","value":"Java"}]}>

<TabItem value="Java">

```java

public class TestMetricSink implements Sink<String> {

    @Override
    public void open(Map<String, Object> config, SinkContext sinkContext) throws Exception {
        sinkContext.recordMetric("foo", 1);
    }

    @Override
    public void write(Record<String> record) throws Exception {

    }

    @Override
    public void close() throws Exception {

    }
}

```

</TabItem>

</Tabs>
````
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/io-dynamodb-source.md b/site2/website/versioned_docs/version-2.8.2-deprecated/io-dynamodb-source.md
deleted file mode 100644
index ce585786eb0428..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/io-dynamodb-source.md
+++ /dev/null
@@ -1,80 +0,0 @@
----
-id: io-dynamodb-source
-title: AWS DynamoDB source connector
-sidebar_label: "AWS DynamoDB source connector"
-original_id: io-dynamodb-source
----

The DynamoDB source connector pulls data from DynamoDB table streams and persists data into Pulsar.

This connector uses the [DynamoDB Streams Kinesis Adapter](https://github.com/awslabs/dynamodb-streams-kinesis-adapter),
which uses the [Kinesis Client Library](https://github.com/awslabs/amazon-kinesis-client) (KCL) to do the actual
consuming of messages. The KCL uses DynamoDB to track state for consumers and requires CloudWatch access to log metrics.


## Configuration

The configuration of the DynamoDB source connector has the following properties.

### Property

| Name | Type|Required | Default | Description
|------|----------|----------|---------|-------------|

    Below are the available options:

  1043. `AT_TIMESTAMP`: start from the record at or after the specified timestamp.

  1044. `LATEST`: start after the most recent data record.

  1045. `TRIM_HORIZON`: start from the oldest available data record.
  1046. -`startAtTime`|Date|false|" " (empty string)|If set to `AT_TIMESTAMP`, it specifies the point in time to start consumption. -`applicationName`|String|false|Pulsar IO connector|The name of the KCL application. Must be unique, as it is used to define the table name for the dynamo table used for state tracking.

    By default, the application name is included in the user agent string used to make AWS requests. This can assist with troubleshooting, for example, distinguish requests made by separate connector instances. -`checkpointInterval`|long|false|60000|The frequency of the KCL checkpoint in milliseconds. -`backoffTime`|long|false|3000|The amount of time to delay between requests when the connector encounters a throttling exception from AWS Kinesis in milliseconds. -`numRetries`|int|false|3|The number of re-attempts when the connector encounters an exception while trying to set a checkpoint. -`receiveQueueSize`|int|false|1000|The maximum number of AWS records that can be buffered inside the connector.

    Once the `receiveQueueSize` is reached, the connector does not consume any messages from Kinesis until some messages in the queue are successfully consumed. -`dynamoEndpoint`|String|false|" " (empty string)|The Dynamo end-point URL, which can be found at [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). -`cloudwatchEndpoint`|String|false|" " (empty string)|The Cloudwatch end-point URL, which can be found at [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). -`awsEndpoint`|String|false|" " (empty string)|The DynamoDB Streams end-point URL, which can be found at [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). -`awsRegion`|String|false|" " (empty string)|The AWS region.

    **Example**
    us-west-1, us-west-2 -`awsDynamodbStreamArn`|String|true|" " (empty string)|The DynamoDB stream arn. -`awsCredentialPluginName`|String|false|" " (empty string)|The fully-qualified class name of implementation of {@inject: github:AwsCredentialProviderPlugin:/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java}.

    `awsCredentialProviderPlugin` has the following built-in plugs:

  1047. `org.apache.pulsar.io.kinesis.AwsDefaultProviderChainPlugin`:
    this plugin uses the default AWS provider chain.
    For more information, see [using the default credential provider chain](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html#credentials-default).

  1048. `org.apache.pulsar.io.kinesis.STSAssumeRoleProviderPlugin`:
    this plugin takes a configuration via the `awsCredentialPluginParam` that describes a role to assume when running the KCL.
    **JSON configuration example**
    `{"roleArn": "arn...", "roleSessionName": "name"}`

    `awsCredentialPluginName` is a factory class which creates an AWSCredentialsProvider that is used by Kinesis sink.

    If `awsCredentialPluginName` set to empty, the Kinesis sink creates a default AWSCredentialsProvider which accepts json-map of credentials in `awsCredentialPluginParam`.
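Besides passing a configuration file to `pulsar-admin`, you can deploy the source programmatically with the Java admin client. In this sketch, the admin URL, the connector nar path, and the source and topic names are assumptions for illustration only.

```java
import java.util.HashMap;
import java.util.Map;
import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.common.io.SourceConfig;

public class CreateDynamoDbSource {
    public static void main(String[] args) throws Exception {
        PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080") // assumed standalone admin URL
                .build();

        Map<String, Object> configs = new HashMap<>();
        configs.put("awsRegion", "us-east-1");
        configs.put("awsDynamodbStreamArn",
                "arn:aws:dynamodb:us-west-2:111122223333:table/TestTable/stream/2015-05-11T21:21:33.291");
        configs.put("initialPositionInStream", "TRIM_HORIZON");

        SourceConfig sourceConfig = SourceConfig.builder()
                .tenant("public")
                .namespace("default")
                .name("dynamodb-source")     // hypothetical source name
                .topicName("dynamodb-topic") // hypothetical destination topic
                .parallelism(1)
                .configs(configs)
                .build();

        // Equivalent to `pulsar-admin sources create --archive ...`;
        // the nar path below is an assumed local filename.
        admin.sources().createSource(sourceConfig,
                "connectors/pulsar-io-dynamodb-@pulsar:version@.nar");
        admin.close();
    }
}
```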
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/io-elasticsearch-sink.md b/site2/website/versioned_docs/version-2.8.2-deprecated/io-elasticsearch-sink.md
deleted file mode 100644
index 4acedd3dd0788d..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/io-elasticsearch-sink.md
+++ /dev/null
@@ -1,173 +0,0 @@
----
-id: io-elasticsearch-sink
-title: ElasticSearch sink connector
-sidebar_label: "ElasticSearch sink connector"
-original_id: io-elasticsearch-sink
----

The ElasticSearch sink connector pulls messages from Pulsar topics and persists the messages to indexes.

## Configuration

The configuration of the ElasticSearch sink connector has the following properties.

### Property

| Name | Type|Required | Default | Description
|------|----------|----------|---------|-------------|
| `elasticSearchUrl` | String| true |" " (empty string)| The URL of the Elasticsearch cluster to which the connector connects. |
| `indexName` | String| true |" " (empty string)| The index name to which the connector writes messages. |
| `typeName` | String | false | "_doc" | The type name to which the connector writes messages.<br/><br/>The value should be set explicitly to a valid type name other than "_doc" for Elasticsearch versions before 6.2, and left to default otherwise. |
| `indexNumberOfShards` | int| false |1| The number of shards of the index. |
| `indexNumberOfReplicas` | int| false |1 | The number of replicas of the index. |
| `username` | String| false |" " (empty string)| The username used by the connector to connect to the Elasticsearch cluster.<br/><br/>If `username` is set, then `password` should also be provided. |
| `password` | String| false | " " (empty string)|The password used by the connector to connect to the Elasticsearch cluster.<br/><br/>If `username` is set, then `password` should also be provided. |

## Example

Before using the ElasticSearch sink connector, you need to create a configuration file through one of the following methods.

### Configuration

#### For Elasticsearch After 6.2

* JSON

  ```json
  
  {
     "elasticSearchUrl": "http://localhost:9200",
     "indexName": "my_index",
     "username": "scooby",
     "password": "doobie"
  }
  
  ```

* YAML

  ```yaml
  
  configs:
     elasticSearchUrl: "http://localhost:9200"
     indexName: "my_index"
     username: "scooby"
     password: "doobie"
  
  ```

#### For Elasticsearch Before 6.2

* JSON

  ```json
  
  {
     "elasticSearchUrl": "http://localhost:9200",
     "indexName": "my_index",
     "typeName": "doc",
     "username": "scooby",
     "password": "doobie"
  }
  
  ```

* YAML

  ```yaml
  
  configs:
     elasticSearchUrl: "http://localhost:9200"
     indexName: "my_index"
     typeName: "doc"
     username: "scooby"
     password: "doobie"
  
  ```
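Once the sink is running (see the usage steps below), any JSON string published to its input topic is indexed as a document. As a minimal Java equivalent of the CLI command in step 4 below, assuming the standalone service URL:

```java
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.Schema;

public class PublishToElasticsearchSink {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650") // assumed standalone URL
                .build();

        // The sink indexes the message value as a JSON document.
        Producer<String> producer = client.newProducer(Schema.STRING)
                .topic("elasticsearch_test")
                .create();

        producer.send("{\"a\":1}"); // same payload as the CLI example

        producer.close();
        client.close();
    }
}
```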
### Usage

1. Start a single-node Elasticsearch cluster.

   ```bash
   
   $ docker run -p 9200:9200 -p 9300:9300 \
     -e "discovery.type=single-node" \
     docker.elastic.co/elasticsearch/elasticsearch:7.5.1
   
   ```

2. Start a Pulsar service locally in standalone mode.

   ```bash
   
   $ bin/pulsar standalone
   
   ```

   Make sure the NAR file is available at `connectors/pulsar-io-elastic-search-@pulsar:version@.nar`.

3. Start the Pulsar Elasticsearch connector in local run mode using one of the following methods.
   * Use the **JSON** configuration as shown previously.

     ```bash
     
     $ bin/pulsar-admin sinks localrun \
     --archive connectors/pulsar-io-elastic-search-@pulsar:version@.nar \
     --tenant public \
     --namespace default \
     --name elasticsearch-test-sink \
     --sink-config '{"elasticSearchUrl":"http://localhost:9200","indexName": "my_index","username": "scooby","password": "doobie"}' \
     --inputs elasticsearch_test
     
     ```

   * Use the **YAML** configuration file as shown previously.

     ```bash
     
     $ bin/pulsar-admin sinks localrun \
     --archive connectors/pulsar-io-elastic-search-@pulsar:version@.nar \
     --tenant public \
     --namespace default \
     --name elasticsearch-test-sink \
     --sink-config-file elasticsearch-sink.yml \
     --inputs elasticsearch_test
     
     ```

4. Publish records to the topic.

   ```bash
   
   $ bin/pulsar-client produce elasticsearch_test --messages "{\"a\":1}"
   
   ```

5. Check documents in Elasticsearch.

   * Refresh the index.

     ```bash
     
     $ curl -s http://localhost:9200/my_index/_refresh
     
     ```


   * Search documents.

     ```bash
     
     $ curl -s http://localhost:9200/my_index/_search
     
     ```

   You can see that the record published earlier has been successfully written into Elasticsearch.
  ```json
  
  {"took":2,"timed_out":false,"_shards":{"total":1,"successful":1,"skipped":0,"failed":0},"hits":{"total":{"value":1,"relation":"eq"},"max_score":1.0,"hits":[{"_index":"my_index","_type":"_doc","_id":"FSxemm8BLjG_iC0EeTYJ","_score":1.0,"_source":{"a":1}}]}}
  
  ```

diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/io-file-source.md b/site2/website/versioned_docs/version-2.8.2-deprecated/io-file-source.md
deleted file mode 100644
index e9d710cce65e83..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/io-file-source.md
+++ /dev/null
@@ -1,160 +0,0 @@
----
-id: io-file-source
-title: File source connector
-sidebar_label: "File source connector"
-original_id: io-file-source
----

The File source connector pulls messages from files in directories and persists the messages to Pulsar topics.

## Configuration

The configuration of the File source connector has the following properties.

### Property

| Name | Type|Required | Default | Description
|------|----------|----------|---------|-------------|
| `inputDirectory` | String|true | No default value|The input directory from which to pull files. |
| `recurse` | Boolean|false | true | Whether to pull files from subdirectories or not.|
| `keepFile` |Boolean|false | false | If set to true, the file is not deleted after it is processed, which means the file can be picked up continually. |
| `fileFilter` | String|false| [^\\.].* | The file whose name matches the given regular expression is picked up. |
| `pathFilter` | String |false | NULL | If `recurse` is set to true, the subdirectory whose path matches the given regular expression is scanned. |
| `minimumFileAge` | Integer|false | 0 | The minimum age that a file can be processed.<br/><br/>Any file younger than `minimumFileAge` (according to the last modification date) is ignored. |
| `maximumFileAge` | Long|false |Long.MAX_VALUE | The maximum age that a file can be processed.<br/><br/>Any file older than `maximumFileAge` (according to the last modification date) is ignored. |
| `minimumSize` |Integer| false |1 | The minimum size (in bytes) that a file can be processed. |
| `maximumSize` | Double|false |Double.MAX_VALUE| The maximum size (in bytes) that a file can be processed. |
| `ignoreHiddenFiles` |Boolean| false | true| Whether the hidden files should be ignored or not. |
| `pollingInterval`|Long | false | 10000L | Indicates how long to wait before performing a directory listing. |
| `numWorkers` | Integer | false | 1 | The number of worker threads that process files.<br/><br/>This allows you to process a larger number of files concurrently.<br/><br/>However, setting this to a value greater than 1 makes the data from multiple files mixed in the target topic. |
    This allows you to process a larger number of files concurrently.

    However, setting this to a value greater than 1 makes the data from multiple files mixed in the target topic. | - -### Example - -Before using the File source connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "inputDirectory": "/Users/david", - "recurse": true, - "keepFile": true, - "fileFilter": "[^\\.].*", - "pathFilter": "*", - "minimumFileAge": 0, - "maximumFileAge": 9999999999, - "minimumSize": 1, - "maximumSize": 5000000, - "ignoreHiddenFiles": true, - "pollingInterval": 5000, - "numWorkers": 1 - } - - ``` - -* YAML - - ```yaml - - configs: - inputDirectory: "/Users/david" - recurse: true - keepFile: true - fileFilter: "[^\\.].*" - pathFilter: "*" - minimumFileAge: 0 - maximumFileAge: 9999999999 - minimumSize: 1 - maximumSize: 5000000 - ignoreHiddenFiles: true - pollingInterval: 5000 - numWorkers: 1 - - ``` - -## Usage - -Here is an example of using the File source connecter. - -1. Pull a Pulsar image. - - ```bash - - $ docker pull apachepulsar/pulsar:{version} - - ``` - -2. Start Pulsar standalone. - - ```bash - - $ docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-standalone apachepulsar/pulsar:{version} bin/pulsar standalone - - ``` - -3. Create a configuration file _file-connector.yaml_. - - ```yaml - - configs: - inputDirectory: "/opt" - - ``` - -4. Copy the configuration file _file-connector.yaml_ to the container. - - ```bash - - $ docker cp connectors/file-connector.yaml pulsar-standalone:/pulsar/ - - ``` - -5. Download the File source connector. - - ```bash - - $ curl -O https://mirrors.tuna.tsinghua.edu.cn/apache/pulsar/pulsar-{version}/connectors/pulsar-io-file-{version}.nar - - ``` - -6. Start the File source connector. - - ```bash - - $ docker exec -it pulsar-standalone /bin/bash - - $ ./bin/pulsar-admin sources localrun \ - --archive /pulsar/pulsar-io-file-{version}.nar \ - --name file-test \ - --destination-topic-name pulsar-file-test \ - --source-config-file /pulsar/file-connector.yaml - - ``` - -7. Start a consumer. - - ```bash - - ./bin/pulsar-client consume -s file-test -n 0 pulsar-file-test - - ``` - -8. Write the message to the file _test.txt_. - - ```bash - - echo "hello world!" > /opt/test.txt - - ``` - - The following information appears on the consumer terminal window. - - ```bash - - ----- got message ----- - hello world! - - ``` - - \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/io-flume-sink.md b/site2/website/versioned_docs/version-2.8.2-deprecated/io-flume-sink.md deleted file mode 100644 index b2ace53702f8ca..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/io-flume-sink.md +++ /dev/null @@ -1,56 +0,0 @@ ---- -id: io-flume-sink -title: Flume sink connector -sidebar_label: "Flume sink connector" -original_id: io-flume-sink ---- - -The Flume sink connector pulls messages from Pulsar topics to logs. - -## Configuration - -The configuration of the Flume sink connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -`name`|String|true|"" (empty string)|The name of the agent. -`confFile`|String|true|"" (empty string)|The configuration file. -`noReloadConf`|Boolean|false|false|Whether to reload configuration file if changed. -`zkConnString`|String|true|"" (empty string)|The ZooKeeper connection. 
-`zkBasePath`|String|true|"" (empty string)|The base path in ZooKeeper for agent configuration. - -### Example - -Before using the Flume sink connector, you need to create a configuration file through one of the following methods. - -> For more information about the `sink.conf` in the example below, see [here](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/resources/flume/sink.conf). - -* JSON - - ```json - - { - "name": "a1", - "confFile": "sink.conf", - "noReloadConf": "false", - "zkConnString": "", - "zkBasePath": "" - } - - ``` - -* YAML - - ```yaml - - configs: - name: a1 - confFile: sink.conf - noReloadConf: false - zkConnString: "" - zkBasePath: "" - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/io-flume-source.md b/site2/website/versioned_docs/version-2.8.2-deprecated/io-flume-source.md deleted file mode 100644 index b7fd7edad88111..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/io-flume-source.md +++ /dev/null @@ -1,56 +0,0 @@ ---- -id: io-flume-source -title: Flume source connector -sidebar_label: "Flume source connector" -original_id: io-flume-source ---- - -The Flume source connector pulls messages from logs to Pulsar topics. - -## Configuration - -The configuration of the Flume source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -`name`|String|true|"" (empty string)|The name of the agent. -`confFile`|String|true|"" (empty string)|The configuration file. -`noReloadConf`|Boolean|false|false|Whether to reload configuration file if changed. -`zkConnString`|String|true|"" (empty string)|The ZooKeeper connection. -`zkBasePath`|String|true|"" (empty string)|The base path in ZooKeeper for agent configuration. - -### Example - -Before using the Flume source connector, you need to create a configuration file through one of the following methods. - -> For more information about the `source.conf` in the example below, see [here](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/resources/flume/source.conf). - -* JSON - - ```json - - { - "name": "a1", - "confFile": "source.conf", - "noReloadConf": "false", - "zkConnString": "", - "zkBasePath": "" - } - - ``` - -* YAML - - ```yaml - - configs: - name: a1 - confFile: source.conf - noReloadConf: false - zkConnString: "" - zkBasePath: "" - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/io-hbase-sink.md b/site2/website/versioned_docs/version-2.8.2-deprecated/io-hbase-sink.md deleted file mode 100644 index 1737b00fa26805..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/io-hbase-sink.md +++ /dev/null @@ -1,67 +0,0 @@ ---- -id: io-hbase-sink -title: HBase sink connector -sidebar_label: "HBase sink connector" -original_id: io-hbase-sink ---- - -The HBase sink connector pulls the messages from Pulsar topics -and persists the messages to HBase tables - -## Configuration - -The configuration of the HBase sink connector has the following properties. - -### Property - -| Name | Type|Default | Required | Description | -|------|---------|----------|-------------|--- -| `hbaseConfigResources` | String|None | false | HBase system configuration `hbase-site.xml` file. | -| `zookeeperQuorum` | String|None | true | HBase system configuration about `hbase.zookeeper.quorum` value. 
| -| `zookeeperClientPort` | String|2181 | false | HBase system configuration about `hbase.zookeeper.property.clientPort` value. | -| `zookeeperZnodeParent` | String|/hbase | false | HBase system configuration about `zookeeper.znode.parent` value. | -| `tableName` | None |String | true | HBase table, the value is `namespace:tableName`. | -| `rowKeyName` | String|None | true | HBase table rowkey name. | -| `familyName` | String|None | true | HBase table column family name. | -| `qualifierNames` |String| None | true | HBase table column qualifier names. | -| `batchTimeMs` | Long|1000l| false | HBase table operation timeout in milliseconds. | -| `batchSize` | int|200| false | Batch size of updates made to the HBase table. | - -### Example - -Before using the HBase sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "hbaseConfigResources": "hbase-site.xml", - "zookeeperQuorum": "localhost", - "zookeeperClientPort": "2181", - "zookeeperZnodeParent": "/hbase", - "tableName": "pulsar_hbase", - "rowKeyName": "rowKey", - "familyName": "info", - "qualifierNames": [ 'name', 'address', 'age'] - } - - ``` - -* YAML - - ```yaml - - configs: - hbaseConfigResources: "hbase-site.xml" - zookeeperQuorum: "localhost" - zookeeperClientPort: "2181" - zookeeperZnodeParent: "/hbase" - tableName: "pulsar_hbase" - rowKeyName: "rowKey" - familyName: "info" - qualifierNames: [ 'name', 'address', 'age'] - - ``` - - \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/io-hdfs2-sink.md b/site2/website/versioned_docs/version-2.8.2-deprecated/io-hdfs2-sink.md deleted file mode 100644 index 4a8527154430d0..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/io-hdfs2-sink.md +++ /dev/null @@ -1,64 +0,0 @@ ---- -id: io-hdfs2-sink -title: HDFS2 sink connector -sidebar_label: "HDFS2 sink connector" -original_id: io-hdfs2-sink ---- - -The HDFS2 sink connector pulls the messages from Pulsar topics -and persists the messages to HDFS files. - -## Configuration - -The configuration of the HDFS2 sink connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `hdfsConfigResources` | String|true| None | A file or a comma-separated list containing the Hadoop file system configuration.

    **Example**
    'core-site.xml'
    'hdfs-site.xml' | -| `directory` | String | true | None|The HDFS directory where files read from or written to. | -| `encoding` | String |false |None |The character encoding for the files.

    **Example**
    UTF-8
    ASCII | -| `compression` | Compression |false |None |The compression code used to compress or de-compress the files on HDFS.

    Below are the available options:
  1050. BZIP2
  1051. DEFLATE
  1052. GZIP
  1053. LZ4
  1054. SNAPPY
  1055. | -| `kerberosUserPrincipal` |String| false| None|The principal account of Kerberos user used for authentication. | -| `keytab` | String|false|None| The full pathname of the Kerberos keytab file used for authentication. | -| `filenamePrefix` |String| true, if `compression` is set to `None`. | None |The prefix of the files created inside the HDFS directory.

    **Example**
    The value of topicA result in files named topicA-. | -| `fileExtension` | String| true | None | The extension added to the files written to HDFS.

    **Example**
    '.txt'
    '.seq' | -| `separator` | char|false |None |The character used to separate records in a text file.

    If no value is provided, the contents from all records are concatenated together in one continuous byte array. | -| `syncInterval` | long| false |0| The interval between calls to flush data to HDFS disk in milliseconds. | -| `maxPendingRecords` |int| false|Integer.MAX_VALUE | The maximum number of records that hold in memory before acking.

    Setting this property to 1 makes every record send to disk before the record is acked.

    Setting this property to a higher value allows buffering records before flushing them to disk. -| `subdirectoryPattern` | String | false | None | A subdirectory associated with the created time of the sink.
    The pattern determines the name of the date-based subdirectory created under `directory`.

    See [DateTimeFormatter](https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html) for pattern's syntax. | - -### Example - -Before using the HDFS2 sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "hdfsConfigResources": "core-site.xml", - "directory": "/foo/bar", - "filenamePrefix": "prefix", - "fileExtension": ".log", - "compression": "SNAPPY", - "subdirectoryPattern": "yyyy-MM-dd" - } - - ``` - -* YAML - - ```yaml - - configs: - hdfsConfigResources: "core-site.xml" - directory: "/foo/bar" - filenamePrefix: "prefix" - fileExtension: ".log" - compression: "SNAPPY" - subdirectoryPattern: "yyyy-MM-dd" - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/io-hdfs3-sink.md b/site2/website/versioned_docs/version-2.8.2-deprecated/io-hdfs3-sink.md deleted file mode 100644 index aec065a25db7f4..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/io-hdfs3-sink.md +++ /dev/null @@ -1,59 +0,0 @@ ---- -id: io-hdfs3-sink -title: HDFS3 sink connector -sidebar_label: "HDFS3 sink connector" -original_id: io-hdfs3-sink ---- - -The HDFS3 sink connector pulls the messages from Pulsar topics -and persists the messages to HDFS files. - -## Configuration - -The configuration of the HDFS3 sink connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `hdfsConfigResources` | String|true| None | A file or a comma-separated list containing the Hadoop file system configuration.

    **Example**
    'core-site.xml'
    'hdfs-site.xml' | -| `directory` | String | true | None|The HDFS directory where files read from or written to. | -| `encoding` | String |false |None |The character encoding for the files.

    **Example**
    UTF-8
    ASCII | -| `compression` | Compression |false |None |The compression code used to compress or de-compress the files on HDFS.

    Below are the available options:
  1056. BZIP2
  1057. DEFLATE
  1058. GZIP
  1059. LZ4
  1060. SNAPPY
  1061. | -| `kerberosUserPrincipal` |String| false| None|The principal account of Kerberos user used for authentication. | -| `keytab` | String|false|None| The full pathname of the Kerberos keytab file used for authentication. | -| `filenamePrefix` |String| false |None |The prefix of the files created inside the HDFS directory.

    **Example**
    The value of topicA result in files named topicA-. | -| `fileExtension` | String| false | None| The extension added to the files written to HDFS.

    **Example**
    '.txt'
    '.seq' | -| `separator` | char|false |None |The character used to separate records in a text file.

    If no value is provided, the contents from all records are concatenated together in one continuous byte array. | -| `syncInterval` | long| false |0| The interval between calls to flush data to HDFS disk in milliseconds. | -| `maxPendingRecords` |int| false|Integer.MAX_VALUE | The maximum number of records that hold in memory before acking.

    Setting this property to 1 makes every record send to disk before the record is acked.

    Setting this property to a higher value allows buffering records before flushing them to disk. - -### Example - -Before using the HDFS3 sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "hdfsConfigResources": "core-site.xml", - "directory": "/foo/bar", - "filenamePrefix": "prefix", - "compression": "SNAPPY" - } - - ``` - -* YAML - - ```yaml - - configs: - hdfsConfigResources: "core-site.xml" - directory: "/foo/bar" - filenamePrefix: "prefix" - compression: "SNAPPY" - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/io-influxdb-sink.md b/site2/website/versioned_docs/version-2.8.2-deprecated/io-influxdb-sink.md deleted file mode 100644 index 9382f8c03121cc..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/io-influxdb-sink.md +++ /dev/null @@ -1,119 +0,0 @@ ---- -id: io-influxdb-sink -title: InfluxDB sink connector -sidebar_label: "InfluxDB sink connector" -original_id: io-influxdb-sink ---- - -The InfluxDB sink connector pulls messages from Pulsar topics -and persists the messages to InfluxDB. - -The InfluxDB sink provides different configurations for InfluxDBv1 and v2 respectively. - -## Configuration - -The configuration of the InfluxDB sink connector has the following properties. - -### Property -#### InfluxDBv2 -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `influxdbUrl` |String| true|" " (empty string) | The URL of the InfluxDB instance. | -| `token` | String|true| " " (empty string) |The authentication token used to authenticate to InfluxDB. | -| `organization` | String| true|" " (empty string) | The InfluxDB organization to write to. | -| `bucket` |String| true | " " (empty string)| The InfluxDB bucket to write to. | -| `precision` | String|false| ns | The timestamp precision for writing data to InfluxDB.

    Below are the available options:
  1062. ns
  1063. us
  1064. ms
  1065. s
  1066. | -| `logLevel` | String|false| NONE|The log level for InfluxDB request and response.

    Below are the available options:
  1067. NONE
  1068. BASIC
  1069. HEADERS
  1070. FULL
  1071. | -| `gzipEnable` | boolean|false | false | Whether to enable gzip or not. | -| `batchTimeMs` |long|false| 1000L | The InfluxDB operation time in milliseconds. | -| `batchSize` | int|false|200| The batch size of writing to InfluxDB. | - -#### InfluxDBv1 -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `influxdbUrl` |String| true|" " (empty string) | The URL of the InfluxDB instance. | -| `username` | String|false| " " (empty string) |The username used to authenticate to InfluxDB. | -| `password` | String| false|" " (empty string) | The password used to authenticate to InfluxDB. | -| `database` |String| true | " " (empty string)| The InfluxDB to which write messages. | -| `consistencyLevel` | String|false|ONE | The consistency level for writing data to InfluxDB.

    Below are the available options:
  1072. ALL
  1073. ANY
  1074. ONE
  1075. QUORUM
  1076. | -| `logLevel` | String|false| NONE|The log level for InfluxDB request and response.

    Below are the available options:
  1077. NONE
  1078. BASIC
  1079. HEADERS
  1080. FULL
  1081. | -| `retentionPolicy` | String|false| autogen| The retention policy for InfluxDB. | -| `gzipEnable` | boolean|false | false | Whether to enable gzip or not. | -| `batchTimeMs` |long|false| 1000L | The InfluxDB operation time in milliseconds. | -| `batchSize` | int|false|200| The batch size of writing to InfluxDB. | - -### Example -Before using the InfluxDB sink connector, you need to create a configuration file through one of the following methods. -#### InfluxDBv2 -* JSON - - ```json - - { - "influxdbUrl": "http://localhost:9999", - "organization": "example-org", - "bucket": "example-bucket", - "token": "xxxx", - "precision": "ns", - "logLevel": "NONE", - "gzipEnable": false, - "batchTimeMs": 1000, - "batchSize": 100 - } - - ``` - - -* YAML - - ```yaml - - configs: - influxdbUrl: "http://localhost:9999" - organization: "example-org" - bucket: "example-bucket" - token: "xxxx" - precision: "ns" - logLevel: "NONE" - gzipEnable: false - batchTimeMs: 1000 - batchSize: 100 - - ``` - - -#### InfluxDBv1 - -* JSON - - ```json - - { - "influxdbUrl": "http://localhost:8086", - "database": "test_db", - "consistencyLevel": "ONE", - "logLevel": "NONE", - "retentionPolicy": "autogen", - "gzipEnable": false, - "batchTimeMs": 1000, - "batchSize": 100 - } - - ``` - -* YAML - - ```yaml - - configs: - influxdbUrl: "http://localhost:8086" - database: "test_db" - consistencyLevel: "ONE" - logLevel: "NONE" - retentionPolicy: "autogen" - gzipEnable: false - batchTimeMs: 1000 - batchSize: 100 - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/io-jdbc-sink.md b/site2/website/versioned_docs/version-2.8.2-deprecated/io-jdbc-sink.md deleted file mode 100644 index 77dbb61fccd7ed..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/io-jdbc-sink.md +++ /dev/null @@ -1,157 +0,0 @@ ---- -id: io-jdbc-sink -title: JDBC sink connector -sidebar_label: "JDBC sink connector" -original_id: io-jdbc-sink ---- - -The JDBC sink connectors allow pulling messages from Pulsar topics -and persists the messages to ClickHouse, MariaDB, PostgreSQL, and SQLite. - -> Currently, INSERT, DELETE and UPDATE operations are supported. - -## Configuration - -The configuration of all JDBC sink connectors has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `userName` | String|false | " " (empty string) | The username used to connect to the database specified by `jdbcUrl`.

    **Note: `userName` is case-sensitive.**| -| `password` | String|false | " " (empty string)| The password used to connect to the database specified by `jdbcUrl`.

    **Note: `password` is case-sensitive.**| -| `jdbcUrl` | String|true | " " (empty string) | The JDBC URL of the database to which the connector connects. | -| `tableName` | String|true | " " (empty string) | The name of the table to which the connector writes. | -| `nonKey` | String|false | " " (empty string) | A comma-separated list contains the fields used in updating events. | -| `key` | String|false | " " (empty string) | A comma-separated list contains the fields used in `where` condition of updating and deleting events. | -| `timeoutMs` | int| false|500 | The JDBC operation timeout in milliseconds. | -| `batchSize` | int|false | 200 | The batch size of updates made to the database. | - -### Example for ClickHouse - -* JSON - - ```json - - { - "userName": "clickhouse", - "password": "password", - "jdbcUrl": "jdbc:clickhouse://localhost:8123/pulsar_clickhouse_jdbc_sink", - "tableName": "pulsar_clickhouse_jdbc_sink" - } - - ``` - -* YAML - - ```yaml - - tenant: "public" - namespace: "default" - name: "jdbc-clickhouse-sink" - topicName: "persistent://public/default/jdbc-clickhouse-topic" - sinkType: "jdbc-clickhouse" - configs: - userName: "clickhouse" - password: "password" - jdbcUrl: "jdbc:clickhouse://localhost:8123/pulsar_clickhouse_jdbc_sink" - tableName: "pulsar_clickhouse_jdbc_sink" - - ``` - -### Example for MariaDB - -* JSON - - ```json - - { - "userName": "mariadb", - "password": "password", - "jdbcUrl": "jdbc:mariadb://localhost:3306/pulsar_mariadb_jdbc_sink", - "tableName": "pulsar_mariadb_jdbc_sink" - } - - ``` - -* YAML - - ```yaml - - tenant: "public" - namespace: "default" - name: "jdbc-mariadb-sink" - topicName: "persistent://public/default/jdbc-mariadb-topic" - sinkType: "jdbc-mariadb" - configs: - userName: "mariadb" - password: "password" - jdbcUrl: "jdbc:mariadb://localhost:3306/pulsar_mariadb_jdbc_sink" - tableName: "pulsar_mariadb_jdbc_sink" - - ``` - -### Example for PostgreSQL - -Before using the JDBC PostgreSQL sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "userName": "postgres", - "password": "password", - "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink", - "tableName": "pulsar_postgres_jdbc_sink" - } - - ``` - -* YAML - - ```yaml - - tenant: "public" - namespace: "default" - name: "jdbc-postgres-sink" - topicName: "persistent://public/default/jdbc-postgres-topic" - sinkType: "jdbc-postgres" - configs: - userName: "postgres" - password: "password" - jdbcUrl: "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink" - tableName: "pulsar_postgres_jdbc_sink" - - ``` - -For more information on **how to use this JDBC sink connector**, see [connect Pulsar to PostgreSQL](io-quickstart.md#connect-pulsar-to-postgresql). 
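As a concrete illustration of how such a configuration file is used, the sketch below creates the PostgreSQL sink from the YAML above with `pulsar-admin`. The archive file name and paths are assumptions patterned on the other connectors in these docs; substitute the actual JDBC sink NAR shipped with your release:

```bash

# create the sink from the YAML configuration shown above
# (archive name and file locations are illustrative assumptions)
$ bin/pulsar-admin sinks create \
    --archive connectors/pulsar-io-jdbc-postgres-@pulsar:version@.nar \
    --sink-config-file pulsar-postgres-jdbc-sink.yaml \
    --parallelism 1

```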
- -### Example for SQLite - -* JSON - - ```json - - { - "jdbcUrl": "jdbc:sqlite:db.sqlite", - "tableName": "pulsar_sqlite_jdbc_sink" - } - - ``` - -* YAML - - ```yaml - - tenant: "public" - namespace: "default" - name: "jdbc-sqlite-sink" - topicName: "persistent://public/default/jdbc-sqlite-topic" - sinkType: "jdbc-sqlite" - configs: - jdbcUrl: "jdbc:sqlite:db.sqlite" - tableName: "pulsar_sqlite_jdbc_sink" - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/io-kafka-sink.md b/site2/website/versioned_docs/version-2.8.2-deprecated/io-kafka-sink.md deleted file mode 100644 index 09dad4ce70bac9..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/io-kafka-sink.md +++ /dev/null @@ -1,72 +0,0 @@ ---- -id: io-kafka-sink -title: Kafka sink connector -sidebar_label: "Kafka sink connector" -original_id: io-kafka-sink ---- - -The Kafka sink connector pulls messages from Pulsar topics and persists the messages -to Kafka topics. - -This guide explains how to configure and use the Kafka sink connector. - -## Configuration - -The configuration of the Kafka sink connector has the following parameters. - -### Property - -| Name | Type| Required | Default | Description -|------|----------|---------|-------------|-------------| -| `bootstrapServers` |String| true | " " (empty string) | A comma-separated list of host and port pairs for establishing the initial connection to the Kafka cluster. | -|`acks`|String|true|" " (empty string) |The number of acknowledgments that the producer requires the leader to receive before a request completes.
    This controls the durability of the sent records. -|`batchsize`|long|false|16384L|The maximum size, in bytes, to which a Kafka producer attempts to batch records together before sending them to brokers. -|`maxRequestSize`|long|false|1048576L|The maximum size of a Kafka request in bytes. -|`topic`|String|true|" " (empty string) |The Kafka topic which receives messages from Pulsar. -| `keyDeserializationClass` | String|false | org.apache.kafka.common.serialization.StringSerializer | The serializer class for Kafka producers to serialize keys. -| `valueDeserializationClass` | String|false | org.apache.kafka.common.serialization.ByteArraySerializer | The serializer class for Kafka producers to serialize values.

    The serializer is set by a specific implementation of [`KafkaAbstractSink`](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSink.java). -|`producerConfigProperties`|Map|false|" " (empty string)|The producer configuration properties to be passed to producers.

    **Note: other properties specified in the connector configuration file take precedence over this configuration**. - - -### Example - -Before using the Kafka sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "bootstrapServers": "localhost:6667", - "topic": "test", - "acks": "1", - "batchSize": "16384", - "maxRequestSize": "1048576", - "producerConfigProperties": - { - "client.id": "test-pulsar-producer", - "security.protocol": "SASL_PLAINTEXT", - "sasl.mechanism": "GSSAPI", - "sasl.kerberos.service.name": "kafka", - "acks": "all" - } - } - -* YAML - - ``` - -yaml - configs: - bootstrapServers: "localhost:6667" - topic: "test" - acks: "1" - batchSize: "16384" - maxRequestSize: "1048576" - producerConfigProperties: - client.id: "test-pulsar-producer" - security.protocol: "SASL_PLAINTEXT" - sasl.mechanism: "GSSAPI" - sasl.kerberos.service.name: "kafka" - acks: "all" - ``` diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/io-kafka-source.md b/site2/website/versioned_docs/version-2.8.2-deprecated/io-kafka-source.md deleted file mode 100644 index 53448699e21b4a..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/io-kafka-source.md +++ /dev/null @@ -1,226 +0,0 @@ ---- -id: io-kafka-source -title: Kafka source connector -sidebar_label: "Kafka source connector" -original_id: io-kafka-source ---- - -The Kafka source connector pulls messages from Kafka topics and persists the messages -to Pulsar topics. - -This guide explains how to configure and use the Kafka source connector. - -## Configuration - -The configuration of the Kafka source connector has the following properties. - -### Property - -| Name | Type| Required | Default | Description -|------|----------|---------|-------------|-------------| -| `bootstrapServers` |String| true | " " (empty string) | A comma-separated list of host and port pairs for establishing the initial connection to the Kafka cluster. | -| `groupId` |String| true | " " (empty string) | A unique string that identifies the group of consumer processes to which this consumer belongs. | -| `fetchMinBytes` | long|false | 1 | The minimum byte expected for each fetch response. | -| `autoCommitEnabled` | boolean |false | true | If set to true, the consumer's offset is periodically committed in the background.

    This committed offset is used when the process fails as the position from which a new consumer begins. | -| `autoCommitIntervalMs` | long|false | 5000 | The frequency in milliseconds that the consumer offsets are auto-committed to Kafka if `autoCommitEnabled` is set to true. | -| `heartbeatIntervalMs` | long| false | 3000 | The interval between heartbeats to the consumer when using Kafka's group management facilities.

    **Note: `heartbeatIntervalMs` must be smaller than `sessionTimeoutMs`**.| -| `sessionTimeoutMs` | long|false | 30000 | The timeout used to detect consumer failures when using Kafka's group management facility. | -| `topic` | String|true | " " (empty string)| The Kafka topic which sends messages to Pulsar. | -| `consumerConfigProperties` | Map| false | " " (empty string) | The consumer configuration properties to be passed to consumers.

    **Note: other properties specified in the connector configuration file take precedence over this configuration**. | -| `keyDeserializationClass` | String|false | org.apache.kafka.common.serialization.StringDeserializer | The deserializer class for Kafka consumers to deserialize keys.
    The deserializer is set by a specific implementation of [`KafkaAbstractSource`](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSource.java). -| `valueDeserializationClass` | String|false | org.apache.kafka.common.serialization.ByteArrayDeserializer | The deserializer class for Kafka consumers to deserialize values. -| `autoOffsetReset` | String | false | "earliest" | The default offset reset policy. | - -### Schema Management - -This Kafka source connector applies the schema to the topic depending on the data type that is present on the Kafka topic. -You can detect the data type from the `keyDeserializationClass` and `valueDeserializationClass` configuration parameters. - -If the `valueDeserializationClass` is `org.apache.kafka.common.serialization.StringDeserializer`, you can set Schema.STRING() as schema type on the Pulsar topic. - -If `valueDeserializationClass` is `io.confluent.kafka.serializers.KafkaAvroDeserializer`, Pulsar downloads the AVRO schema from the Confluent Schema Registry® -and sets it properly on the Pulsar topic. - -In this case, you need to set `schema.registry.url` inside of the `consumerConfigProperties` configuration entry -of the source. - -If `keyDeserializationClass` is not `org.apache.kafka.common.serialization.StringDeserializer`, it means -that you do not have a String as key and the Kafka Source uses the KeyValue schema type with the SEPARATED encoding. - -Pulsar supports AVRO format for keys. - -In this case, you can have a Pulsar topic with the following properties: -- Schema: KeyValue schema with SEPARATED encoding -- Key: the content of key of the Kafka message (base64 encoded) -- Value: the content of value of the Kafka message -- KeySchema: the schema detected from `keyDeserializationClass` -- ValueSchema: the schema detected from `valueDeserializationClass` - -Topic compaction and partition routing use the Pulsar key, that contains the Kafka key, and so they are driven by the same value that you have on Kafka. - -When you consume data from Pulsar topics, you can use the `KeyValue` schema. In this way, you can decode the data properly. -If you want to access the raw key, you can use the `Message#getKeyBytes()` API. - -### Example - -Before using the Kafka source connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "bootstrapServers": "pulsar-kafka:9092", - "groupId": "test-pulsar-io", - "topic": "my-topic", - "sessionTimeoutMs": "10000", - "autoCommitEnabled": false - } - - ``` - -* YAML - - ```yaml - - configs: - bootstrapServers: "pulsar-kafka:9092" - groupId: "test-pulsar-io" - topic: "my-topic" - sessionTimeoutMs: "10000" - autoCommitEnabled: false - - ``` - -## Usage - -Here is an example of using the Kafka source connector with the configuration file as shown previously. - -1. Download a Kafka client and a Kafka connector. - - ```bash - - $ wget https://repo1.maven.org/maven2/org/apache/kafka/kafka-clients/0.10.2.1/kafka-clients-0.10.2.1.jar - - $ wget https://archive.apache.org/dist/pulsar/pulsar-2.4.0/connectors/pulsar-io-kafka-2.4.0.nar - - ``` - -2. Create a network. - - ```bash - - $ docker network create kafka-pulsar - - ``` - -3. Pull a ZooKeeper image and start ZooKeeper. - - ```bash - - $ docker pull wurstmeister/zookeeper - - $ docker run -d -it -p 2181:2181 --name pulsar-kafka-zookeeper --network kafka-pulsar wurstmeister/zookeeper - - ``` - -4. Pull a Kafka image and start Kafka. 
- - ```bash - - $ docker pull wurstmeister/kafka:2.11-1.0.2 - - $ docker run -d -it --network kafka-pulsar -p 6667:6667 -p 9092:9092 -e KAFKA_ADVERTISED_HOST_NAME=pulsar-kafka -e KAFKA_ZOOKEEPER_CONNECT=pulsar-kafka-zookeeper:2181 --name pulsar-kafka wurstmeister/kafka:2.11-1.0.2 - - ``` - -5. Pull a Pulsar image and start Pulsar standalone. - - ```bash - - $ docker pull apachepulsar/pulsar:@pulsar:version@ - - $ docker run -d -it --network kafka-pulsar -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-kafka-standalone apachepulsar/pulsar:2.4.0 bin/pulsar standalone - - ``` - -6. Create a producer file _kafka-producer.py_. - - ```python - - from kafka import KafkaProducer - producer = KafkaProducer(bootstrap_servers='pulsar-kafka:9092') - future = producer.send('my-topic', b'hello world') - future.get() - - ``` - -7. Create a consumer file _pulsar-client.py_. - - ```python - - import pulsar - - client = pulsar.Client('pulsar://localhost:6650') - consumer = client.subscribe('my-topic', subscription_name='my-aa') - - while True: - msg = consumer.receive() - print msg - print dir(msg) - print("Received message: '%s'" % msg.data()) - consumer.acknowledge(msg) - - client.close() - - ``` - -8. Copy the following files to Pulsar. - - ```bash - - $ docker cp pulsar-io-kafka-@pulsar:version@.nar pulsar-kafka-standalone:/pulsar - $ docker cp kafkaSourceConfig.yaml pulsar-kafka-standalone:/pulsar/conf - $ docker cp pulsar-client.py pulsar-kafka-standalone:/pulsar/ - $ docker cp kafka-producer.py pulsar-kafka-standalone:/pulsar/ - - ``` - -9. Open a new terminal window and start the Kafka source connector in local run mode. - - ```bash - - $ docker exec -it pulsar-kafka-standalone /bin/bash - - $ ./bin/pulsar-admin source localrun \ - --archive ./pulsar-io-kafka-@pulsar:version@.nar \ - --classname org.apache.pulsar.io.kafka.KafkaBytesSource \ - --tenant public \ - --namespace default \ - --name kafka \ - --destination-topic-name my-topic \ - --source-config-file ./conf/kafkaSourceConfig.yaml \ - --parallelism 1 - - ``` - -10. Open a new terminal window and run the consumer. - - ```bash - - $ docker exec -it pulsar-kafka-standalone /bin/bash - - $ pip install kafka-python - - $ python3 kafka-producer.py - - ``` - - The following information appears on the consumer terminal window. - - ```bash - - Received message: 'hello world' - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/io-kinesis-sink.md b/site2/website/versioned_docs/version-2.8.2-deprecated/io-kinesis-sink.md deleted file mode 100644 index 153587dcfc783e..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/io-kinesis-sink.md +++ /dev/null @@ -1,80 +0,0 @@ ---- -id: io-kinesis-sink -title: Kinesis sink connector -sidebar_label: "Kinesis sink connector" -original_id: io-kinesis-sink ---- - -The Kinesis sink connector pulls data from Pulsar and persists data into Amazon Kinesis. - -## Configuration - -The configuration of the Kinesis sink connector has the following property. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -`messageFormat`|MessageFormat|true|ONLY_RAW_PAYLOAD|Message format in which Kinesis sink converts Pulsar messages and publishes to Kinesis streams.

    Below are the available options:

  1082. `ONLY_RAW_PAYLOAD`: Kinesis sink directly publishes Pulsar message payload as a message into the configured Kinesis stream.

  1083. `FULL_MESSAGE_IN_JSON`: Kinesis sink creates a JSON payload with Pulsar message payload, properties and encryptionCtx, and publishes JSON payload into the configured Kinesis stream.

  1084. `FULL_MESSAGE_IN_FB`: Kinesis sink creates a flatbuffer serialized payload with Pulsar message payload, properties and encryptionCtx, and publishes flatbuffer payload into the configured Kinesis stream.
  1085. -`retainOrdering`|boolean|false|false|Whether Pulsar connectors to retain ordering when moving messages from Pulsar to Kinesis or not. -`awsEndpoint`|String|false|" " (empty string)|The Kinesis end-point URL, which can be found at [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). -`awsRegion`|String|false|" " (empty string)|The AWS region.

    **Example**
    us-west-1, us-west-2 -`awsKinesisStreamName`|String|true|" " (empty string)|The Kinesis stream name. -`awsCredentialPluginName`|String|false|" " (empty string)|The fully-qualified class name of implementation of {@inject: github:AwsCredentialProviderPlugin:/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java}.

    It is a factory class which creates an AWSCredentialsProvider that is used by Kinesis sink.

    If it is empty, the Kinesis sink creates a default AWSCredentialsProvider which accepts json-map of credentials in `awsCredentialPluginParam`. -`awsCredentialPluginParam`|String |false|" " (empty string)|The JSON parameter to initialize `awsCredentialsProviderPlugin`. - -### Built-in plugins - -The following are built-in `AwsCredentialProviderPlugin` plugins: - -* `org.apache.pulsar.io.aws.AwsDefaultProviderChainPlugin` - - This plugin takes no configuration, it uses the default AWS provider chain. - - For more information, see [AWS documentation](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html#credentials-default). - -* `org.apache.pulsar.io.aws.STSAssumeRoleProviderPlugin` - - This plugin takes a configuration (via the `awsCredentialPluginParam`) that describes a role to assume when running the KCL. - - This configuration takes the form of a small json document like: - - ```json - - {"roleArn": "arn...", "roleSessionName": "name"} - - ``` - -### Example - -Before using the Kinesis sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "awsEndpoint": "some.endpoint.aws", - "awsRegion": "us-east-1", - "awsKinesisStreamName": "my-stream", - "awsCredentialPluginParam": "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}", - "messageFormat": "ONLY_RAW_PAYLOAD", - "retainOrdering": "true" - } - - ``` - -* YAML - - ```yaml - - configs: - awsEndpoint: "some.endpoint.aws" - awsRegion: "us-east-1" - awsKinesisStreamName: "my-stream" - awsCredentialPluginParam: "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}" - messageFormat: "ONLY_RAW_PAYLOAD" - retainOrdering: "true" - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/io-kinesis-source.md b/site2/website/versioned_docs/version-2.8.2-deprecated/io-kinesis-source.md deleted file mode 100644 index 0d07eefc3703b3..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/io-kinesis-source.md +++ /dev/null @@ -1,81 +0,0 @@ ---- -id: io-kinesis-source -title: Kinesis source connector -sidebar_label: "Kinesis source connector" -original_id: io-kinesis-source ---- - -The Kinesis source connector pulls data from Amazon Kinesis and persists data into Pulsar. - -This connector uses the [Kinesis Consumer Library](https://github.com/awslabs/amazon-kinesis-client) (KCL) to do the actual consuming of messages. The KCL uses DynamoDB to track state for consumers. - -> Note: currently, the Kinesis source connector only supports raw messages. If you use KMS encrypted messages, the encrypted messages are sent to downstream. This connector will support decrypting messages in the future release. - - -## Configuration - -The configuration of the Kinesis source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -`initialPositionInStream`|InitialPositionInStream|false|LATEST|The position where the connector starts from.

    Below are the available options:

  1086. `AT_TIMESTAMP`: start from the record at or after the specified timestamp.

  1087. `LATEST`: start after the most recent data record.

  1088. `TRIM_HORIZON`: start from the oldest available data record.
  1089. -`startAtTime`|Date|false|" " (empty string)|If set to `AT_TIMESTAMP`, it specifies the point in time to start consumption. -`applicationName`|String|false|Pulsar IO connector|The name of the Amazon Kinesis application.

    By default, the application name is included in the user agent string used to make AWS requests. This can assist with troubleshooting, for example, distinguish requests made by separate connector instances. -`checkpointInterval`|long|false|60000|The frequency of the Kinesis stream checkpoint in milliseconds. -`backoffTime`|long|false|3000|The amount of time to delay between requests when the connector encounters a throttling exception from AWS Kinesis in milliseconds. -`numRetries`|int|false|3|The number of re-attempts when the connector encounters an exception while trying to set a checkpoint. -`receiveQueueSize`|int|false|1000|The maximum number of AWS records that can be buffered inside the connector.

    Once the `receiveQueueSize` is reached, the connector does not consume any messages from Kinesis until some messages in the queue are successfully consumed. -`dynamoEndpoint`|String|false|" " (empty string)|The Dynamo end-point URL, which can be found at [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). -`cloudwatchEndpoint`|String|false|" " (empty string)|The Cloudwatch end-point URL, which can be found at [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). -`useEnhancedFanOut`|boolean|false|true|If set to true, it uses Kinesis enhanced fan-out.

    If set to false, it uses polling. -`awsEndpoint`|String|false|" " (empty string)|The Kinesis end-point URL, which can be found at [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). -`awsRegion`|String|false|" " (empty string)|The AWS region.

    **Example**
    us-west-1, us-west-2 -`awsKinesisStreamName`|String|true|" " (empty string)|The Kinesis stream name. -`awsCredentialPluginName`|String|false|" " (empty string)|The fully-qualified class name of implementation of {@inject: github:AwsCredentialProviderPlugin:/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java}.

    `awsCredentialProviderPlugin` has the following built-in plugs:

  * `org.apache.pulsar.io.kinesis.AwsDefaultProviderChainPlugin`:
    this plugin uses the default AWS provider chain.
    For more information, see [using the default credential provider chain](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html#credentials-default).

  * `org.apache.pulsar.io.kinesis.STSAssumeRoleProviderPlugin`:
    this plugin takes a configuration via the `awsCredentialPluginParam` that describes a role to assume when running the KCL.
    **JSON configuration example**
    `{"roleArn": "arn...", "roleSessionName": "name"}`

    `awsCredentialPluginName` is a factory class that creates the AWSCredentialsProvider used by the Kinesis source.

    If `awsCredentialPluginName` is set to empty, the Kinesis source creates a default AWSCredentialsProvider that accepts a JSON map of credentials in `awsCredentialPluginParam`.
  1092. -`awsCredentialPluginParam`|String |false|" " (empty string)|The JSON parameter to initialize `awsCredentialsProviderPlugin`. - -### Example - -Before using the Kinesis source connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "awsEndpoint": "https://some.endpoint.aws", - "awsRegion": "us-east-1", - "awsKinesisStreamName": "my-stream", - "awsCredentialPluginParam": "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}", - "applicationName": "My test application", - "checkpointInterval": "30000", - "backoffTime": "4000", - "numRetries": "3", - "receiveQueueSize": 2000, - "initialPositionInStream": "TRIM_HORIZON", - "startAtTime": "2019-03-05T19:28:58.000Z" - } - - ``` - -* YAML - - ```yaml - - configs: - awsEndpoint: "https://some.endpoint.aws" - awsRegion: "us-east-1" - awsKinesisStreamName: "my-stream" - awsCredentialPluginParam: "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}" - applicationName: "My test application" - checkpointInterval: 30000 - backoffTime: 4000 - numRetries: 3 - receiveQueueSize: 2000 - initialPositionInStream: "TRIM_HORIZON" - startAtTime: "2019-03-05T19:28:58.000Z" - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/io-mongo-sink.md b/site2/website/versioned_docs/version-2.8.2-deprecated/io-mongo-sink.md deleted file mode 100644 index 30c15a6c280938..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/io-mongo-sink.md +++ /dev/null @@ -1,56 +0,0 @@ ---- -id: io-mongo-sink -title: MongoDB sink connector -sidebar_label: "MongoDB sink connector" -original_id: io-mongo-sink ---- - -The MongoDB sink connector pulls messages from Pulsar topics -and persists the messages to collections. - -## Configuration - -The configuration of the MongoDB sink connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `mongoUri` | String| true| " " (empty string) | The MongoDB URI to which the connector connects.

    For more information, see [connection string URI format](https://docs.mongodb.com/manual/reference/connection-string/). | -| `database` | String| true| " " (empty string)| The database name to which the collection belongs. | -| `collection` | String| true| " " (empty string)| The collection name to which the connector writes messages. | -| `batchSize` | int|false|100 | The batch size of writing messages to collections. | -| `batchTimeMs` |long|false|1000| The batch operation interval in milliseconds. | - - -### Example - -Before using the Mongo sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "mongoUri": "mongodb://localhost:27017", - "database": "pulsar", - "collection": "messages", - "batchSize": "2", - "batchTimeMs": "500" - } - - ``` - -* YAML - - ```yaml - - configs: - mongoUri: "mongodb://localhost:27017" - database: "pulsar" - collection: "messages" - batchSize: 2 - batchTimeMs: 500 - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/io-netty-source.md b/site2/website/versioned_docs/version-2.8.2-deprecated/io-netty-source.md deleted file mode 100644 index e1ec8d863115b3..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/io-netty-source.md +++ /dev/null @@ -1,241 +0,0 @@ ---- -id: io-netty-source -title: Netty source connector -sidebar_label: "Netty source connector" -original_id: io-netty-source ---- - -The Netty source connector opens a port that accepts incoming data via the configured network protocol -and publish it to user-defined Pulsar topics. - -This connector can be used in a containerized (for example, k8s) deployment. Otherwise, if the connector is running in process or thread mode, the instance may be conflicting on listening to ports. - -## Configuration - -The configuration of the Netty source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `type` |String| true |tcp | The network protocol over which data is transmitted to netty.

    Below are the available options:
  1093. tcp
  1094. http
  1095. udp
  1096. | -| `host` | String|true | 127.0.0.1 | The host name or address on which the source instance listen. | -| `port` | int|true | 10999 | The port on which the source instance listen. | -| `numberOfThreads` |int| true |1 | The number of threads of Netty TCP server to accept incoming connections and handle the traffic of accepted connections. | - - -### Example - -Before using the Netty source connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "type": "tcp", - "host": "127.0.0.1", - "port": "10911", - "numberOfThreads": "1" - } - - ``` - -* YAML - - ```yaml - - configs: - type: "tcp" - host: "127.0.0.1" - port: 10999 - numberOfThreads: 1 - - ``` - -## Usage - -The following examples show how to use the Netty source connector with TCP and HTTP. - -### TCP - -1. Start Pulsar standalone. - - ```bash - - $ docker pull apachepulsar/pulsar:{version} - - $ docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-netty-standalone apachepulsar/pulsar:{version} bin/pulsar standalone - - ``` - -2. Create a configuration file _netty-source-config.yaml_. - - ```yaml - - configs: - type: "tcp" - host: "127.0.0.1" - port: 10999 - numberOfThreads: 1 - - ``` - -3. Copy the configuration file _netty-source-config.yaml_ to Pulsar server. - - ```bash - - $ docker cp netty-source-config.yaml pulsar-netty-standalone:/pulsar/conf/ - - ``` - -4. Download the Netty source connector. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - curl -O http://mirror-hk.koddos.net/apache/pulsar/pulsar-{version}/connectors/pulsar-io-netty-{version}.nar - - ``` - -5. Start the Netty source connector. - - ```bash - - $ ./bin/pulsar-admin sources localrun \ - --archive pulsar-io-@pulsar:version@.nar \ - --tenant public \ - --namespace default \ - --name netty \ - --destination-topic-name netty-topic \ - --source-config-file netty-source-config.yaml \ - --parallelism 1 - - ``` - -6. Consume data. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - - $ ./bin/pulsar-client consume -t Exclusive -s netty-sub netty-topic -n 0 - - ``` - -7. Open another terminal window to send data to the Netty source. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - - $ apt-get update - - $ apt-get -y install telnet - - $ root@1d19327b2c67:/pulsar# telnet 127.0.0.1 10999 - Trying 127.0.0.1... - Connected to 127.0.0.1. - Escape character is '^]'. - hello - world - - ``` - -8. The following information appears on the consumer terminal window. - - ```bash - - ----- got message ----- - hello - - ----- got message ----- - world - - ``` - -### HTTP - -1. Start Pulsar standalone. - - ```bash - - $ docker pull apachepulsar/pulsar:{version} - - $ docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-netty-standalone apachepulsar/pulsar:{version} bin/pulsar standalone - - ``` - -2. Create a configuration file _netty-source-config.yaml_. - - ```yaml - - configs: - type: "http" - host: "127.0.0.1" - port: 10999 - numberOfThreads: 1 - - ``` - -3. Copy the configuration file _netty-source-config.yaml_ to Pulsar server. - - ```bash - - $ docker cp netty-source-config.yaml pulsar-netty-standalone:/pulsar/conf/ - - ``` - -4. Download the Netty source connector. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - curl -O http://mirror-hk.koddos.net/apache/pulsar/pulsar-{version}/connectors/pulsar-io-netty-{version}.nar - - ``` - -5. Start the Netty source connector. 
- - ```bash - - $ ./bin/pulsar-admin sources localrun \ - --archive pulsar-io-@pulsar:version@.nar \ - --tenant public \ - --namespace default \ - --name netty \ - --destination-topic-name netty-topic \ - --source-config-file netty-source-config.yaml \ - --parallelism 1 - - ``` - -6. Consume data. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - - $ ./bin/pulsar-client consume -t Exclusive -s netty-sub netty-topic -n 0 - - ``` - -7. Open another terminal window to send data to the Netty source. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - - $ curl -X POST --data 'hello, world!' http://127.0.0.1:10999/ - - ``` - -8. The following information appears on the consumer terminal window. - - ```bash - - ----- got message ----- - hello, world! - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/io-nsq-source.md b/site2/website/versioned_docs/version-2.8.2-deprecated/io-nsq-source.md deleted file mode 100644 index b61e7e100c22e1..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/io-nsq-source.md +++ /dev/null @@ -1,21 +0,0 @@ ---- -id: io-nsq-source -title: NSQ source connector -sidebar_label: "NSQ source connector" -original_id: io-nsq-source ---- - -The NSQ source connector receives messages from NSQ topics -and writes messages to Pulsar topics. - -## Configuration - -The configuration of the NSQ source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `lookupds` |String| true | " " (empty string) | A comma-separated list of nsqlookupds to connect to. | -| `topic` | String|true | " " (empty string) | The NSQ topic to transport. | -| `channel` | String |false | pulsar-transport-{$topic} | The channel to consume from on the provided NSQ topic. | \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/io-overview.md b/site2/website/versioned_docs/version-2.8.2-deprecated/io-overview.md deleted file mode 100644 index 3db5ee34042d3f..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/io-overview.md +++ /dev/null @@ -1,164 +0,0 @@ ---- -id: io-overview -title: Pulsar connector overview -sidebar_label: "Overview" -original_id: io-overview ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -Messaging systems are most powerful when you can easily use them with external systems like databases and other messaging systems. - -**Pulsar IO connectors** enable you to easily create, deploy, and manage connectors that interact with external systems, such as [Apache Cassandra](https://cassandra.apache.org), [Aerospike](https://www.aerospike.com), and many others. - - -## Concept - -Pulsar IO connectors come in two types: **source** and **sink**. - -This diagram illustrates the relationship between source, Pulsar, and sink: - -![Pulsar IO diagram](/assets/pulsar-io.png "Pulsar IO connectors (sources and sinks)") - - -### Source - -> Sources **feed data from external systems into Pulsar**. - -Common sources include other messaging systems and firehose-style data pipeline APIs. - -For the complete list of Pulsar built-in source connectors, see [source connector](io-connectors.md#source-connector). - -### Sink - -> Sinks **feed data from Pulsar into external systems**. - -Common sinks include other messaging systems and SQL and NoSQL databases. 
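To see which built-in sources and sinks are available in your own installation, you can ask `pulsar-admin` directly. This is an illustrative aside, assuming a running Pulsar service and that the commands are run from the Pulsar directory:

```bash

# list the built-in source connectors known to this installation
$ bin/pulsar-admin sources available-sources

# list the built-in sink connectors known to this installation
$ bin/pulsar-admin sinks available-sinks

```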
- -For the complete list of Pulsar built-in sink connectors, see [sink connector](io-connectors.md#sink-connector). - -## Processing guarantee - -Processing guarantees are used to handle errors when writing messages to Pulsar topics. - -> Pulsar connectors and Functions use the **same** processing guarantees as below. - -Delivery semantic | Description -:------------------|:------- -`at-most-once` | Each message sent to a connector is to be **processed once** or **not to be processed**. -`at-least-once` | Each message sent to a connector is to be **processed once** or **more than once**. -`effectively-once` | Each message sent to a connector has **one output associated** with it. - -> Processing guarantees for connectors not just rely on Pulsar guarantee but also **relate to external systems**, that is, **the implementation of source and sink**. - -* Source: Pulsar ensures that writing messages to Pulsar topics respects to the processing guarantees. It is within Pulsar's control. - -* Sink: the processing guarantees rely on the sink implementation. If the sink implementation does not handle retries in an idempotent way, the sink does not respect to the processing guarantees. - -### Set - -When creating a connector, you can set the processing guarantee with the following semantics: - -* ATLEAST_ONCE - -* ATMOST_ONCE - -* EFFECTIVELY_ONCE - -> If `--processing-guarantees` is not specified when creating a connector, the default semantic is `ATLEAST_ONCE`. - -Here takes **Admin CLI** as an example. For more information about **REST API** or **JAVA Admin API**, see [here](io-use.md#create). - -````mdx-code-block - - - - -```bash - -$ bin/pulsar-admin sources create \ - --processing-guarantees ATMOST_ONCE \ - # Other source configs - -``` - -For more information about the options of `pulsar-admin sources create`, see [here](reference-connector-admin.md#create). - - - - -```bash - -$ bin/pulsar-admin sinks create \ - --processing-guarantees EFFECTIVELY_ONCE \ - # Other sink configs - -``` - -For more information about the options of `pulsar-admin sinks create`, see [here](reference-connector-admin.md#create-1). - - - - -```` - -### Update - -After creating a connector, you can update the processing guarantee with the following semantics: - -* ATLEAST_ONCE - -* ATMOST_ONCE - -* EFFECTIVELY_ONCE - -Here takes **Admin CLI** as an example. For more information about **REST API** or **JAVA Admin API**, see [here](io-use.md#create). - -````mdx-code-block - - - - -```bash - -$ bin/pulsar-admin sources update \ - --processing-guarantees EFFECTIVELY_ONCE \ - # Other source configs - -``` - -For more information about the options of `pulsar-admin sources update`, see [here](reference-connector-admin.md#update). - - - - -```bash - -$ bin/pulsar-admin sinks update \ - --processing-guarantees ATMOST_ONCE \ - # Other sink configs - -``` - -For more information about the options of `pulsar-admin sinks update`, see [here](reference-connector-admin.md#update-1). - - - - -```` - - -## Work with connector - -You can manage Pulsar connectors (for example, create, update, start, stop, restart, reload, delete and perform other operations on connectors) via the [Connector Admin CLI](reference-connector-admin.md) with [sources](io-cli.md#sources) and [sinks](io-cli.md#sinks) subcommands. - -Connectors (sources and sinks) and Functions are components of instances, and they all run on Functions workers. 
When managing a source, sink or function via [Connector Admin CLI](reference-connector-admin.md) or [Functions Admin CLI](functions-cli.md), an instance is started on a worker. For more information, see [Functions worker](functions-worker.md#run-functions-worker-separately). - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/io-quickstart.md b/site2/website/versioned_docs/version-2.8.2-deprecated/io-quickstart.md deleted file mode 100644 index 8474c93f51336d..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/io-quickstart.md +++ /dev/null @@ -1,963 +0,0 @@ ---- -id: io-quickstart -title: How to connect Pulsar to database -sidebar_label: "Get started" -original_id: io-quickstart ---- - -This tutorial provides a hands-on look at how you can move data out of Pulsar without writing a single line of code. - -It is helpful to review the [concepts](io-overview.md) for Pulsar I/O with running the steps in this guide to gain a deeper understanding. - -At the end of this tutorial, you are able to: - -- [Connect Pulsar to Cassandra](#Connect-Pulsar-to-Cassandra) - -- [Connect Pulsar to PostgreSQL](#Connect-Pulsar-to-PostgreSQL) - -:::tip - -* These instructions assume you are running Pulsar in [standalone mode](getting-started-standalone.md). However, all -the commands used in this tutorial can be used in a multi-nodes Pulsar cluster without any changes. -* All the instructions are assumed to run at the root directory of a Pulsar binary distribution. - -::: - -## Install Pulsar and built-in connector - -Before connecting Pulsar to a database, you need to install Pulsar and the desired built-in connector. - -For more information about **how to install a standalone Pulsar and built-in connectors**, see [here](getting-started-standalone.md/#installing-pulsar). - -## Start Pulsar standalone - -1. Start Pulsar locally. - - ```bash - - bin/pulsar standalone - - ``` - - All the components of a Pulsar service are start in order. - - You can curl those pulsar service endpoints to make sure Pulsar service is up running correctly. - -2. Check Pulsar binary protocol port. - - ```bash - - telnet localhost 6650 - - ``` - -3. Check Pulsar Function cluster. - - ```bash - - curl -s http://localhost:8080/admin/v2/worker/cluster - - ``` - - **Example output** - - ```json - - [{"workerId":"c-standalone-fw-localhost-6750","workerHostname":"localhost","port":6750}] - - ``` - -4. Make sure a public tenant and a default namespace exist. - - ```bash - - curl -s http://localhost:8080/admin/v2/namespaces/public - - ``` - - **Example output** - - ```json - - ["public/default","public/functions"] - - ``` - -5. All built-in connectors should be listed as available. 
- - ```bash - - curl -s http://localhost:8080/admin/v2/functions/connectors - - ``` - - **Example output** - - ```json - - [{"name":"aerospike","description":"Aerospike database sink","sinkClass":"org.apache.pulsar.io.aerospike.AerospikeStringSink"},{"name":"cassandra","description":"Writes data into Cassandra","sinkClass":"org.apache.pulsar.io.cassandra.CassandraStringSink"},{"name":"kafka","description":"Kafka source and sink connector","sourceClass":"org.apache.pulsar.io.kafka.KafkaStringSource","sinkClass":"org.apache.pulsar.io.kafka.KafkaBytesSink"},{"name":"kinesis","description":"Kinesis sink connector","sinkClass":"org.apache.pulsar.io.kinesis.KinesisSink"},{"name":"rabbitmq","description":"RabbitMQ source connector","sourceClass":"org.apache.pulsar.io.rabbitmq.RabbitMQSource"},{"name":"twitter","description":"Ingest data from Twitter firehose","sourceClass":"org.apache.pulsar.io.twitter.TwitterFireHose"}] - - ``` - - If an error occurs when starting Pulsar service, you may see an exception at the terminal running `pulsar/standalone`, - or you can navigate to the `logs` directory under the Pulsar directory to view the logs. - -## Connect Pulsar to Cassandra - -This section demonstrates how to connect Pulsar to Cassandra. - -:::tip - -* Make sure you have Docker installed. If you do not have one, see [install Docker](https://docs.docker.com/docker-for-mac/install/). -* The Cassandra sink connector reads messages from Pulsar topics and writes the messages into Cassandra tables. For more information, see [Cassandra sink connector](io-cassandra-sink.md). - -::: - -### Setup a Cassandra cluster - -This example uses `cassandra` Docker image to start a single-node Cassandra cluster in Docker. - -1. Start a Cassandra cluster. - - ```bash - - docker run -d --rm --name=cassandra -p 9042:9042 cassandra - - ``` - - :::note - - Before moving to the next steps, make sure the Cassandra cluster is running. - - ::: - -2. Make sure the Docker process is running. - - ```bash - - docker ps - - ``` - -3. Check the Cassandra logs to make sure the Cassandra process is running as expected. - - ```bash - - docker logs cassandra - - ``` - -4. Check the status of the Cassandra cluster. - - ```bash - - docker exec cassandra nodetool status - - ``` - - **Example output** - - ``` - - Datacenter: datacenter1 - ======================= - Status=Up/Down - |/ State=Normal/Leaving/Joining/Moving - -- Address Load Tokens Owns (effective) Host ID Rack - UN 172.17.0.2 103.67 KiB 256 100.0% af0e4b2f-84e0-4f0b-bb14-bd5f9070ff26 rack1 - - ``` - -5. Use `cqlsh` to connect to the Cassandra cluster. - - ```bash - - $ docker exec -ti cassandra cqlsh localhost - Connected to Test Cluster at localhost:9042. - [cqlsh 5.0.1 | Cassandra 3.11.2 | CQL spec 3.4.4 | Native protocol v4] - Use HELP for help. - cqlsh> - - ``` - -6. Create a keyspace `pulsar_test_keyspace`. - - ```bash - - cqlsh> CREATE KEYSPACE pulsar_test_keyspace WITH replication = {'class':'SimpleStrategy', 'replication_factor':1}; - - ``` - -7. Create a table `pulsar_test_table`. - - ```bash - - cqlsh> USE pulsar_test_keyspace; - cqlsh:pulsar_test_keyspace> CREATE TABLE pulsar_test_table (key text PRIMARY KEY, col text); - - ``` - -### Configure a Cassandra sink - -Now that we have a Cassandra cluster running locally. - -In this section, you need to configure a Cassandra sink connector. - -To run a Cassandra sink connector, you need to prepare a configuration file including the information that Pulsar connector runtime needs to know. 
- -For example, how Pulsar connector can find the Cassandra cluster, what is the keyspace and the table that Pulsar connector uses for writing Pulsar messages to, and so on. - -You can create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "roots": "localhost:9042", - "keyspace": "pulsar_test_keyspace", - "columnFamily": "pulsar_test_table", - "keyname": "key", - "columnName": "col" - } - - ``` - -* YAML - - ```yaml - - configs: - roots: "localhost:9042" - keyspace: "pulsar_test_keyspace" - columnFamily: "pulsar_test_table" - keyname: "key" - columnName: "col" - - ``` - -For more information, see [Cassandra sink connector](io-cassandra-sink.md). - -### Create a Cassandra sink - -You can use the [Connector Admin CLI](io-cli.md) -to create a sink connector and perform other operations on them. - -Run the following command to create a Cassandra sink connector with sink type _cassandra_ and the config file _examples/cassandra-sink.yml_ created previously. - -#### Note -> The `sink-type` parameter of the currently built-in connectors is determined by the setting of the `name` parameter specified in the pulsar-io.yaml file. - -```bash - -bin/pulsar-admin sinks create \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink \ - --sink-type cassandra \ - --sink-config-file examples/cassandra-sink.yml \ - --inputs test_cassandra - -``` - -Once the command is executed, Pulsar creates the sink connector _cassandra-test-sink_. - -This sink connector runs -as a Pulsar Function and writes the messages produced in the topic _test_cassandra_ to the Cassandra table _pulsar_test_table_. - -### Inspect a Cassandra sink - -You can use the [Connector Admin CLI](io-cli.md) -to monitor a connector and perform other operations on it. - -* Get the information of a Cassandra sink. - - ```bash - - bin/pulsar-admin sinks get \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink - - ``` - - **Example output** - - ```json - - { - "tenant": "public", - "namespace": "default", - "name": "cassandra-test-sink", - "className": "org.apache.pulsar.io.cassandra.CassandraStringSink", - "inputSpecs": { - "test_cassandra": { - "isRegexPattern": false - } - }, - "configs": { - "roots": "localhost:9042", - "keyspace": "pulsar_test_keyspace", - "columnFamily": "pulsar_test_table", - "keyname": "key", - "columnName": "col" - }, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "autoAck": true, - "archive": "builtin://cassandra" - } - - ``` - -* Check the status of a Cassandra sink. - - ```bash - - bin/pulsar-admin sinks status \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink - - ``` - - **Example output** - - ```json - - { - "numInstances" : 1, - "numRunning" : 1, - "instances" : [ { - "instanceId" : 0, - "status" : { - "running" : true, - "error" : "", - "numRestarts" : 0, - "numReadFromPulsar" : 0, - "numSystemExceptions" : 0, - "latestSystemExceptions" : [ ], - "numSinkExceptions" : 0, - "latestSinkExceptions" : [ ], - "numWrittenToSink" : 0, - "lastReceivedTime" : 0, - "workerId" : "c-standalone-fw-localhost-8080" - } - } ] - } - - ``` - -### Verify a Cassandra sink - -1. Produce some messages to the input topic of the Cassandra sink _test_cassandra_. - - ```bash - - for i in {0..9}; do bin/pulsar-client produce -m "key-$i" -n 1 test_cassandra; done - - ``` - -2. Inspect the status of the Cassandra sink _test_cassandra_. 
- - ```bash - - bin/pulsar-admin sinks status \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink - - ``` - - You can see 10 messages are processed by the Cassandra sink _test_cassandra_. - - **Example output** - - ```json - - { - "numInstances" : 1, - "numRunning" : 1, - "instances" : [ { - "instanceId" : 0, - "status" : { - "running" : true, - "error" : "", - "numRestarts" : 0, - "numReadFromPulsar" : 10, - "numSystemExceptions" : 0, - "latestSystemExceptions" : [ ], - "numSinkExceptions" : 0, - "latestSinkExceptions" : [ ], - "numWrittenToSink" : 10, - "lastReceivedTime" : 1551685489136, - "workerId" : "c-standalone-fw-localhost-8080" - } - } ] - } - - ``` - -3. Use `cqlsh` to connect to the Cassandra cluster. - - ```bash - - docker exec -ti cassandra cqlsh localhost - - ``` - -4. Check the data of the Cassandra table _pulsar_test_table_. - - ```bash - - cqlsh> use pulsar_test_keyspace; - cqlsh:pulsar_test_keyspace> select * from pulsar_test_table; - - key | col - --------+-------- - key-5 | key-5 - key-0 | key-0 - key-9 | key-9 - key-2 | key-2 - key-1 | key-1 - key-3 | key-3 - key-6 | key-6 - key-7 | key-7 - key-4 | key-4 - key-8 | key-8 - - ``` - -### Delete a Cassandra Sink - -You can use the [Connector Admin CLI](io-cli.md) -to delete a connector and perform other operations on it. - -```bash - -bin/pulsar-admin sinks delete \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink - -``` - -## Connect Pulsar to PostgreSQL - -This section demonstrates how to connect Pulsar to PostgreSQL. - -:::tip - -* Make sure you have Docker installed. If you do not have one, see [install Docker](https://docs.docker.com/docker-for-mac/install/). -* The JDBC sink connector pulls messages from Pulsar topics and persists the messages to ClickHouse, MariaDB, PostgreSQL, or SQlite. - -::: - ->For more information, see [JDBC sink connector](io-jdbc-sink.md). - - -### Setup a PostgreSQL cluster - -This example uses the PostgreSQL 12 docker image to start a single-node PostgreSQL cluster in Docker. - -1. Pull the PostgreSQL 12 image from Docker. - - ```bash - - $ docker pull postgres:12 - - ``` - -2. Start PostgreSQL. - - ```bash - - $ docker run -d -it --rm \ - --name pulsar-postgres \ - -p 5432:5432 \ - -e POSTGRES_PASSWORD=password \ - -e POSTGRES_USER=postgres \ - postgres:12 - - ``` - - #### Tip - - Flag | Description | This example - ---|---|---| - `-d` | To start a container in detached mode. | / - `-it` | Keep STDIN open even if not attached and allocate a terminal. | / - `--rm` | Remove the container automatically when it exits. | / - `-name` | Assign a name to the container. | This example specifies _pulsar-postgres_ for the container. - `-p` | Publish the port of the container to the host. | This example publishes the port _5432_ of the container to the host. - `-e` | Set environment variables. | This example sets the following variables:
    - The password for the user is _password_.
    - The name for the user is _postgres_. - - :::tip - - For more information about Docker commands, see [Docker CLI](https://docs.docker.com/engine/reference/commandline/run/). - - ::: - -3. Check if PostgreSQL has been started successfully. - - ```bash - - $ docker logs -f pulsar-postgres - - ``` - - PostgreSQL has been started successfully if the following message appears. - - ```text - - 2020-05-11 20:09:24.492 UTC [1] LOG: starting PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit - 2020-05-11 20:09:24.492 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 - 2020-05-11 20:09:24.492 UTC [1] LOG: listening on IPv6 address "::", port 5432 - 2020-05-11 20:09:24.499 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" - 2020-05-11 20:09:24.523 UTC [55] LOG: database system was shut down at 2020-05-11 20:09:24 UTC - 2020-05-11 20:09:24.533 UTC [1] LOG: database system is ready to accept connections - - ``` - -4. Access to PostgreSQL. - - ```bash - - $ docker exec -it pulsar-postgres /bin/bash - - ``` - -5. Create a PostgreSQL table _pulsar_postgres_jdbc_sink_. - - ```bash - - $ psql -U postgres postgres - - postgres=# create table if not exists pulsar_postgres_jdbc_sink - ( - id serial PRIMARY KEY, - name VARCHAR(255) NOT NULL - ); - - ``` - -### Configure a JDBC sink - -Now we have a PostgreSQL running locally. - -In this section, you need to configure a JDBC sink connector. - -1. Add a configuration file. - - To run a JDBC sink connector, you need to prepare a YAML configuration file including the information that Pulsar connector runtime needs to know. - - For example, how Pulsar connector can find the PostgreSQL cluster, what is the JDBC URL and the table that Pulsar connector uses for writing messages to. - - Create a _pulsar-postgres-jdbc-sink.yaml_ file, copy the following contents to this file, and place the file in the `pulsar/connectors` folder. - - ```yaml - - configs: - userName: "postgres" - password: "password" - jdbcUrl: "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink" - tableName: "pulsar_postgres_jdbc_sink" - - ``` - -2. Create a schema. - - Create a _avro-schema_ file, copy the following contents to this file, and place the file in the `pulsar/connectors` folder. - - ```json - - { - "type": "AVRO", - "schema": "{\"type\":\"record\",\"name\":\"Test\",\"fields\":[{\"name\":\"id\",\"type\":[\"null\",\"int\"]},{\"name\":\"name\",\"type\":[\"null\",\"string\"]}]}", - "properties": {} - } - - ``` - - :::tip - - For more information about AVRO, see [Apache Avro](https://avro.apache.org/docs/1.9.1/). - - ::: - -3. Upload a schema to a topic. - - This example uploads the _avro-schema_ schema to the _pulsar-postgres-jdbc-sink-topic_ topic. - - ```bash - - $ bin/pulsar-admin schemas upload pulsar-postgres-jdbc-sink-topic -f ./connectors/avro-schema - - ``` - -4. Check if the schema has been uploaded successfully. - - ```bash - - $ bin/pulsar-admin schemas get pulsar-postgres-jdbc-sink-topic - - ``` - - The schema has been uploaded successfully if the following message appears. - - ```json - - {"name":"pulsar-postgres-jdbc-sink-topic","schema":"{\"type\":\"record\",\"name\":\"Test\",\"fields\":[{\"name\":\"id\",\"type\":[\"null\",\"int\"]},{\"name\":\"name\",\"type\":[\"null\",\"string\"]}]}","type":"AVRO","properties":{}} - - ``` - -### Create a JDBC sink - -You can use the [Connector Admin CLI](io-cli.md) -to create a sink connector and perform other operations on it. 
- -This example creates a sink connector and specifies the desired information. - -```bash - -$ bin/pulsar-admin sinks create \ ---archive ./connectors/pulsar-io-jdbc-postgres-@pulsar:version@.nar \ ---inputs pulsar-postgres-jdbc-sink-topic \ ---name pulsar-postgres-jdbc-sink \ ---sink-config-file ./connectors/pulsar-postgres-jdbc-sink.yaml \ ---parallelism 1 - -``` - -Once the command is executed, Pulsar creates a sink connector _pulsar-postgres-jdbc-sink_. - -This sink connector runs as a Pulsar Function and writes the messages produced in the topic _pulsar-postgres-jdbc-sink-topic_ to the PostgreSQL table _pulsar_postgres_jdbc_sink_. - - #### Tip - - Flag | Description | This example - ---|---|---| - `--archive` | The path to the archive file for the sink. | _pulsar-io-jdbc-postgres-@pulsar:version@.nar_ | - `--inputs` | The input topic(s) of the sink.

    Multiple topics can be specified as a comma-separated list.|| - `--name` | The name of the sink. | _pulsar-postgres-jdbc-sink_ | - `--sink-config-file` | The path to a YAML config file specifying the configuration of the sink. | _pulsar-postgres-jdbc-sink.yaml_ | - `--parallelism` | The parallelism factor of the sink.

    For example, the number of sink instances to run. | _1_ | - -:::tip - -For more information about `pulsar-admin sinks create options`, see [here](io-cli.md#sinks). - -::: - -The sink has been created successfully if the following message appears. - -```bash - -"Created successfully" - -``` - -### Inspect a JDBC sink - -You can use the [Connector Admin CLI](io-cli.md) -to monitor a connector and perform other operations on it. - -* List all running JDBC sink(s). - - ```bash - - $ bin/pulsar-admin sinks list \ - --tenant public \ - --namespace default - - ``` - - :::tip - - For more information about `pulsar-admin sinks list options`, see [here](io-cli.md/#list-1). - - ::: - - The result shows that only the _postgres-jdbc-sink_ sink is running. - - ```json - - [ - "pulsar-postgres-jdbc-sink" - ] - - ``` - -* Get the information of a JDBC sink. - - ```bash - - $ bin/pulsar-admin sinks get \ - --tenant public \ - --namespace default \ - --name pulsar-postgres-jdbc-sink - - ``` - - :::tip - - For more information about `pulsar-admin sinks get options`, see [here](io-cli.md/#get-1). - - ::: - - The result shows the information of the sink connector, including tenant, namespace, topic and so on. - - ```json - - { - "tenant": "public", - "namespace": "default", - "name": "pulsar-postgres-jdbc-sink", - "className": "org.apache.pulsar.io.jdbc.PostgresJdbcAutoSchemaSink", - "inputSpecs": { - "pulsar-postgres-jdbc-sink-topic": { - "isRegexPattern": false - } - }, - "configs": { - "password": "password", - "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink", - "userName": "postgres", - "tableName": "pulsar_postgres_jdbc_sink" - }, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "autoAck": true - } - - ``` - -* Get the status of a JDBC sink - - ```bash - - $ bin/pulsar-admin sinks status \ - --tenant public \ - --namespace default \ - --name pulsar-postgres-jdbc-sink - - ``` - - :::tip - - For more information about `pulsar-admin sinks status options`, see [here](io-cli.md/#status-1). - - ::: - - The result shows the current status of sink connector, including the number of instance, running status, worker ID and so on. - - ```json - - { - "numInstances" : 1, - "numRunning" : 1, - "instances" : [ { - "instanceId" : 0, - "status" : { - "running" : true, - "error" : "", - "numRestarts" : 0, - "numReadFromPulsar" : 0, - "numSystemExceptions" : 0, - "latestSystemExceptions" : [ ], - "numSinkExceptions" : 0, - "latestSinkExceptions" : [ ], - "numWrittenToSink" : 0, - "lastReceivedTime" : 0, - "workerId" : "c-standalone-fw-192.168.2.52-8080" - } - } ] - } - - ``` - -### Stop a JDBC sink - -You can use the [Connector Admin CLI](io-cli.md) -to stop a connector and perform other operations on it. - -```bash - -$ bin/pulsar-admin sinks stop \ ---tenant public \ ---namespace default \ ---name pulsar-postgres-jdbc-sink - -``` - -:::tip - -For more information about `pulsar-admin sinks stop options`, see [here](io-cli.md/#stop-1). - -::: - -The sink instance has been stopped successfully if the following message disappears. - -```bash - -"Stopped successfully" - -``` - -### Restart a JDBC sink - -You can use the [Connector Admin CLI](io-cli.md) -to restart a connector and perform other operations on it. - -```bash - -$ bin/pulsar-admin sinks restart \ ---tenant public \ ---namespace default \ ---name pulsar-postgres-jdbc-sink - -``` - -:::tip - -For more information about `pulsar-admin sinks restart options`, see [here](io-cli.md/#restart-1). 
- -::: - -The sink instance has been started successfully if the following message disappears. - -```bash - -"Started successfully" - -``` - -:::tip - -* Optionally, you can run a standalone sink connector using `pulsar-admin sinks localrun options`. -Note that `pulsar-admin sinks localrun options` **runs a sink connector locally**, while `pulsar-admin sinks start options` **starts a sink connector in a cluster**. -* For more information about `pulsar-admin sinks localrun options`, see [here](io-cli.md#localrun-1). - -::: - -### Update a JDBC sink - -You can use the [Connector Admin CLI](io-cli.md) -to update a connector and perform other operations on it. - -This example updates the parallelism of the _pulsar-postgres-jdbc-sink_ sink connector to 2. - -```bash - -$ bin/pulsar-admin sinks update \ ---name pulsar-postgres-jdbc-sink \ ---parallelism 2 - -``` - -:::tip - -For more information about `pulsar-admin sinks update options`, see [here](io-cli.md/#update-1). - -::: - -The sink connector has been updated successfully if the following message disappears. - -```bash - -"Updated successfully" - -``` - -This example double-checks the information. - -```bash - -$ bin/pulsar-admin sinks get \ ---tenant public \ ---namespace default \ ---name pulsar-postgres-jdbc-sink - -``` - -The result shows that the parallelism is 2. - -```json - -{ - "tenant": "public", - "namespace": "default", - "name": "pulsar-postgres-jdbc-sink", - "className": "org.apache.pulsar.io.jdbc.PostgresJdbcAutoSchemaSink", - "inputSpecs": { - "pulsar-postgres-jdbc-sink-topic": { - "isRegexPattern": false - } - }, - "configs": { - "password": "password", - "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink", - "userName": "postgres", - "tableName": "pulsar_postgres_jdbc_sink" - }, - "parallelism": 2, - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "autoAck": true -} - -``` - -### Delete a JDBC sink - -You can use the [Connector Admin CLI](io-cli.md) -to delete a connector and perform other operations on it. - -This example deletes the _pulsar-postgres-jdbc-sink_ sink connector. - -```bash - -$ bin/pulsar-admin sinks delete \ ---tenant public \ ---namespace default \ ---name pulsar-postgres-jdbc-sink - -``` - -:::tip - -For more information about `pulsar-admin sinks delete options`, see [here](io-cli.md/#delete-1). - -::: - -The sink connector has been deleted successfully if the following message appears. - -```text - -"Deleted successfully" - -``` - -This example double-checks the status of the sink connector. - -```bash - -$ bin/pulsar-admin sinks get \ ---tenant public \ ---namespace default \ ---name pulsar-postgres-jdbc-sink - -``` - -The result shows that the sink connector does not exist. - -```text - -HTTP 404 Not Found - -Reason: Sink pulsar-postgres-jdbc-sink doesn't exist - -``` - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/io-rabbitmq-sink.md b/site2/website/versioned_docs/version-2.8.2-deprecated/io-rabbitmq-sink.md deleted file mode 100644 index d7fda99460dc97..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/io-rabbitmq-sink.md +++ /dev/null @@ -1,85 +0,0 @@ ---- -id: io-rabbitmq-sink -title: RabbitMQ sink connector -sidebar_label: "RabbitMQ sink connector" -original_id: io-rabbitmq-sink ---- - -The RabbitMQ sink connector pulls messages from Pulsar topics -and persist the messages to RabbitMQ queues. - - -## Configuration - -The configuration of the RabbitMQ sink connector has the following properties. 
- - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `connectionName` |String| true | " " (empty string) | The connection name. | -| `host` | String| true | " " (empty string) | The RabbitMQ host. | -| `port` | int |true | 5672 | The RabbitMQ port. | -| `virtualHost` |String|true | / | The virtual host used to connect to RabbitMQ. | -| `username` | String|false | guest | The username used to authenticate to RabbitMQ. | -| `password` | String|false | guest | The password used to authenticate to RabbitMQ. | -| `queueName` | String|true | " " (empty string) | The RabbitMQ queue name that messages should be read from or written to. | -| `requestedChannelMax` | int|false | 0 | The initially requested maximum channel number.

    0 means unlimited. | -| `requestedFrameMax` | int|false |0 | The initially requested maximum frame size in octets.

    0 means unlimited. | -| `connectionTimeout` | int|false | 60000 | The timeout of TCP connection establishment in milliseconds.

    0 means infinite. | -| `handshakeTimeout` | int|false | 10000 | The timeout of AMQP0-9-1 protocol handshake in milliseconds. | -| `requestedHeartbeat` | int|false | 60 | The exchange to publish messages. | -| `exchangeName` | String|true | " " (empty string) | The maximum number of messages that the server delivers.

    0 means unlimited. | -| `prefetchGlobal` |String|true | " " (empty string) |The routing key used to publish messages. | - - -### Example - -Before using the RabbitMQ sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "host": "localhost", - "port": "5672", - "virtualHost": "/", - "username": "guest", - "password": "guest", - "queueName": "test-queue", - "connectionName": "test-connection", - "requestedChannelMax": "0", - "requestedFrameMax": "0", - "connectionTimeout": "60000", - "handshakeTimeout": "10000", - "requestedHeartbeat": "60", - "exchangeName": "test-exchange", - "routingKey": "test-key" - } - - ``` - -* YAML - - ```yaml - - configs: - host: "localhost" - port: 5672 - virtualHost: "/", - username: "guest" - password: "guest" - queueName: "test-queue" - connectionName: "test-connection" - requestedChannelMax: 0 - requestedFrameMax: 0 - connectionTimeout: 60000 - handshakeTimeout: 10000 - requestedHeartbeat: 60 - exchangeName: "test-exchange" - routingKey: "test-key" - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/io-rabbitmq-source.md b/site2/website/versioned_docs/version-2.8.2-deprecated/io-rabbitmq-source.md deleted file mode 100644 index c2c31cc97d10d9..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/io-rabbitmq-source.md +++ /dev/null @@ -1,85 +0,0 @@ ---- -id: io-rabbitmq-source -title: RabbitMQ source connector -sidebar_label: "RabbitMQ source connector" -original_id: io-rabbitmq-source ---- - -The RabbitMQ source connector receives messages from RabbitMQ clusters -and writes messages to Pulsar topics. - -## Configuration - -The configuration of the RabbitMQ source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `connectionName` |String| true | " " (empty string) | The connection name. | -| `host` | String| true | " " (empty string) | The RabbitMQ host. | -| `port` | int |true | 5672 | The RabbitMQ port. | -| `virtualHost` |String|true | / | The virtual host used to connect to RabbitMQ. | -| `username` | String|false | guest | The username used to authenticate to RabbitMQ. | -| `password` | String|false | guest | The password used to authenticate to RabbitMQ. | -| `queueName` | String|true | " " (empty string) | The RabbitMQ queue name that messages should be read from or written to. | -| `requestedChannelMax` | int|false | 0 | The initially requested maximum channel number.

    0 means unlimited. | -| `requestedFrameMax` | int|false |0 | The initially requested maximum frame size in octets.

    0 means unlimited. | -| `connectionTimeout` | int|false | 60000 | The timeout of TCP connection establishment in milliseconds.

    0 means infinite. | -| `handshakeTimeout` | int|false | 10000 | The timeout of AMQP0-9-1 protocol handshake in milliseconds. | -| `requestedHeartbeat` | int|false | 60 | The requested heartbeat timeout in seconds. | -| `prefetchCount` | int|false | 0 | The maximum number of messages that the server delivers.

    0 means unlimited. | -| `prefetchGlobal` | boolean|false | false |Whether the setting should be applied to the entire channel rather than each consumer. | -| `passive` | boolean|false | false | Whether the rabbitmq consumer should create its own queue or bind to an existing one. | - -### Example - -Before using the RabbitMQ source connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "host": "localhost", - "port": "5672", - "virtualHost": "/", - "username": "guest", - "password": "guest", - "queueName": "test-queue", - "connectionName": "test-connection", - "requestedChannelMax": "0", - "requestedFrameMax": "0", - "connectionTimeout": "60000", - "handshakeTimeout": "10000", - "requestedHeartbeat": "60", - "prefetchCount": "0", - "prefetchGlobal": "false", - "passive": "false" - } - - ``` - -* YAML - - ```yaml - - configs: - host: "localhost" - port: 5672 - virtualHost: "/" - username: "guest" - password: "guest" - queueName: "test-queue" - connectionName: "test-connection" - requestedChannelMax: 0 - requestedFrameMax: 0 - connectionTimeout: 60000 - handshakeTimeout: 10000 - requestedHeartbeat: 60 - prefetchCount: 0 - prefetchGlobal: "false" - passive: "false" - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/io-redis-sink.md b/site2/website/versioned_docs/version-2.8.2-deprecated/io-redis-sink.md deleted file mode 100644 index 793d74a5f2cb38..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/io-redis-sink.md +++ /dev/null @@ -1,74 +0,0 @@ ---- -id: io-redis-sink -title: Redis sink connector -sidebar_label: "Redis sink connector" -original_id: io-redis-sink ---- - -The Redis sink connector pulls messages from Pulsar topics -and persists the messages to a Redis database. - - - -## Configuration - -The configuration of the Redis sink connector has the following properties. - - - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `redisHosts` |String|true|" " (empty string) | A comma-separated list of Redis hosts to connect to. | -| `redisPassword` |String|false|" " (empty string) | The password used to connect to Redis. | -| `redisDatabase` | int|true|0 | The Redis database to connect to. | -| `clientMode` |String| false|Standalone | The client mode when interacting with Redis cluster.

    Below are the available options:
  1097. Standalone
  1098. Cluster
  1099. | -| `autoReconnect` | boolean|false|true | Whether the Redis client automatically reconnect or not. | -| `requestQueue` | int|false|2147483647 | The maximum number of queued requests to Redis. | -| `tcpNoDelay` |boolean| false| false | Whether to enable TCP with no delay or not. | -| `keepAlive` | boolean|false | false |Whether to enable a keepalive to Redis or not. | -| `connectTimeout` |long| false|10000 | The time to wait before timing out when connecting in milliseconds. | -| `operationTimeout` | long|false|10000 | The time before an operation is marked as timed out in milliseconds . | -| `batchTimeMs` | int|false|1000 | The Redis operation time in milliseconds. | -| `batchSize` | int|false|200 | The batch size of writing to Redis database. | - - -### Example - -Before using the Redis sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "redisHosts": "localhost:6379", - "redisPassword": "fake@123", - "redisDatabase": "1", - "clientMode": "Standalone", - "operationTimeout": "2000", - "batchSize": "100", - "batchTimeMs": "1000", - "connectTimeout": "3000" - } - - ``` - -* YAML - - ```yaml - - { - redisHosts: "localhost:6379" - redisPassword: "fake@123" - redisDatabase: 1 - clientMode: "Standalone" - operationTimeout: 2000 - batchSize: 100 - batchTimeMs: 1000 - connectTimeout: 3000 - } - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/io-solr-sink.md b/site2/website/versioned_docs/version-2.8.2-deprecated/io-solr-sink.md deleted file mode 100644 index df2c3612c38eb6..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/io-solr-sink.md +++ /dev/null @@ -1,65 +0,0 @@ ---- -id: io-solr-sink -title: Solr sink connector -sidebar_label: "Solr sink connector" -original_id: io-solr-sink ---- - -The Solr sink connector pulls messages from Pulsar topics -and persists the messages to Solr collections. - - - -## Configuration - -The configuration of the Solr sink connector has the following properties. - - - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `solrUrl` | String|true|" " (empty string) |
  1100. Comma-separated zookeeper hosts with chroot used in the SolrCloud mode.
    **Example**
    `localhost:2181,localhost:2182/chroot`

  1101. URL to connect to Solr used in standalone mode.
    **Example**
    `localhost:8983/solr`
  1102. | -| `solrMode` | String|true|SolrCloud| The client mode when interacting with the Solr cluster.

    Below are the available options:
  1103. Standalone
  1104. SolrCloud
  1105. | -| `solrCollection` |String|true| " " (empty string) | Solr collection name to which records need to be written. | -| `solrCommitWithinMs` |int| false|10 | The time within million seconds for Solr updating commits.| -| `username` |String|false| " " (empty string) | The username for basic authentication.

    **Note: `usename` is case-sensitive.** | -| `password` | String|false| " " (empty string) | The password for basic authentication.

    **Note: `password` is case-sensitive.** | - - - -### Example - -Before using the Solr sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "solrUrl": "localhost:2181,localhost:2182/chroot", - "solrMode": "SolrCloud", - "solrCollection": "techproducts", - "solrCommitWithinMs": 100, - "username": "fakeuser", - "password": "fake@123" - } - - ``` - -* YAML - - ```yaml - - { - solrUrl: "localhost:2181,localhost:2182/chroot" - solrMode: "SolrCloud" - solrCollection: "techproducts" - solrCommitWithinMs: 100 - username: "fakeuser" - password: "fake@123" - } - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/io-twitter-source.md b/site2/website/versioned_docs/version-2.8.2-deprecated/io-twitter-source.md deleted file mode 100644 index 8de3504dd0fef2..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/io-twitter-source.md +++ /dev/null @@ -1,28 +0,0 @@ ---- -id: io-twitter-source -title: Twitter Firehose source connector -sidebar_label: "Twitter Firehose source connector" -original_id: io-twitter-source ---- - -The Twitter Firehose source connector receives tweets from Twitter Firehose and -writes the tweets to Pulsar topics. - -## Configuration - -The configuration of the Twitter Firehose source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `consumerKey` | String|true | " " (empty string) | The twitter OAuth consumer key.

    For more information, see [Access tokens](https://developer.twitter.com/en/docs/basics/authentication/guides/access-tokens). | -| `consumerSecret` | String |true | " " (empty string) | The twitter OAuth consumer secret. | -| `token` | String|true | " " (empty string) | The twitter OAuth token. | -| `tokenSecret` | String|true | " " (empty string) | The twitter OAuth secret. | -| `guestimateTweetTime`|Boolean|false|false|Most firehose events have null createdAt time.

    If `guestimateTweetTime` set to true, the connector estimates the createdTime of each firehose event to be current time. -| `clientName` | String |false | openconnector-twitter-source| The twitter firehose client name. | -| `clientHosts` |String| false | Constants.STREAM_HOST | The twitter firehose hosts to which client connects. | -| `clientBufferSize` | int|false | 50000 | The buffer size for buffering tweets fetched from twitter firehose. | - -> For more information about OAuth credentials, see [Twitter developers portal](https://developer.twitter.com/en.html). diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/io-twitter.md b/site2/website/versioned_docs/version-2.8.2-deprecated/io-twitter.md deleted file mode 100644 index 3b2f6325453c3c..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/io-twitter.md +++ /dev/null @@ -1,7 +0,0 @@ ---- -id: io-twitter -title: Twitter Firehose Connector -sidebar_label: "Twitter Firehose Connector" -original_id: io-twitter ---- - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/io-use.md b/site2/website/versioned_docs/version-2.8.2-deprecated/io-use.md deleted file mode 100644 index da9ed746c4d372..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/io-use.md +++ /dev/null @@ -1,1787 +0,0 @@ ---- -id: io-use -title: How to use Pulsar connectors -sidebar_label: "Use" -original_id: io-use ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -This guide describes how to use Pulsar connectors. - -## Install a connector - -Pulsar bundles several [builtin connectors](io-connectors.md) used to move data in and out of commonly used systems (such as database and messaging system). Optionally, you can create and use your desired non-builtin connectors. - -:::note - -When using a non-builtin connector, you need to specify the path of a archive file for the connector. - -::: - -To set up a builtin connector, follow -the instructions [here](getting-started-standalone.md#installing-builtin-connectors). - -After the setup, the builtin connector is automatically discovered by Pulsar brokers (or function-workers), so no additional installation steps are required. - -## Configure a connector - -You can configure the following information: - -* [Configure a default storage location for a connector](#configure-a-default-storage-location-for-a-connector) - -* [Configure a connector with a YAML file](#configure-a-connector-with-yaml-file) - -### Configure a default storage location for a connector - -To configure a default folder for builtin connectors, set the `connectorsDirectory` parameter in the `./conf/functions_worker.yml` configuration file. - -**Example** - -Set the `./connectors` folder as the default storage location for builtin connectors. - -``` - -######################## -# Connectors -######################## - -connectorsDirectory: ./connectors - -``` - -### Configure a connector with a YAML file - -To configure a connector, you need to provide a YAML configuration file when creating a connector. - -The YAML configuration file tells Pulsar where to locate connectors and how to connect connectors with Pulsar topics. 
- -**Example 1** - -Below is a YAML configuration file of a Cassandra sink, which tells Pulsar: - -* Which Cassandra cluster to connect - -* What is the `keyspace` and `columnFamily` to be used in Cassandra for collecting data - -* How to map Pulsar messages into Cassandra table key and columns - -```shell - -tenant: public -namespace: default -name: cassandra-test-sink -... -# cassandra specific config -configs: - roots: "localhost:9042" - keyspace: "pulsar_test_keyspace" - columnFamily: "pulsar_test_table" - keyname: "key" - columnName: "col" - -``` - -**Example 2** - -Below is a YAML configuration file of a Kafka source. - -```shell - -configs: - bootstrapServers: "pulsar-kafka:9092" - groupId: "test-pulsar-io" - topic: "my-topic" - sessionTimeoutMs: "10000" - autoCommitEnabled: "false" - -``` - -**Example 3** - -Below is a YAML configuration file of a PostgreSQL JDBC sink. - -```shell - -configs: - userName: "postgres" - password: "password" - jdbcUrl: "jdbc:postgresql://localhost:5432/test_jdbc" - tableName: "test_jdbc" - -``` - -## Get available connectors - -Before starting using connectors, you can perform the following operations: - -* [Reload connectors](#reload) - -* [Get a list of available connectors](#get-available-connectors) - -### `reload` - -If you add or delete a nar file in a connector folder, reload the available builtin connector before using it. - -#### Source - -Use the `reload` subcommand. - -```shell - -$ pulsar-admin sources reload - -``` - -For more information, see [`here`](io-cli.md#reload). - -#### Sink - -Use the `reload` subcommand. - -```shell - -$ pulsar-admin sinks reload - -``` - -For more information, see [`here`](io-cli.md#reload-1). - -### `available` - -After reloading connectors (optional), you can get a list of available connectors. - -#### Source - -Use the `available-sources` subcommand. - -```shell - -$ pulsar-admin sources available-sources - -``` - -#### Sink - -Use the `available-sinks` subcommand. - -```shell - -$ pulsar-admin sinks available-sinks - -``` - -## Run a connector - -To run a connector, you can perform the following operations: - -* [Create a connector](#create) - -* [Start a connector](#start) - -* [Run a connector locally](#localrun) - -### `create` - -You can create a connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Create a source connector. - -````mdx-code-block - - - - -Use the `create` subcommand. - -``` - -$ pulsar-admin sources create options - -``` - -For more information, see [here](io-cli.md#create). - - - - -Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/registerSource?version=@pulsar:version_number@} - - - - -* Create a source connector with a **local file**. - - ```java - - void createSource(SourceConfig sourceConfig, - String fileName) - throws PulsarAdminException - - ``` - - **Parameter** - - |Name|Description - |---|--- - `sourceConfig` | The source configuration object - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`createSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#createSource-SourceConfig-java.lang.String-). - -* Create a source connector using a **remote file** with a URL from which fun-pkg can be downloaded. - - ```java - - void createSourceWithUrl(SourceConfig sourceConfig, - String pkgUrl) - throws PulsarAdminException - - ``` - - Supported URLs are `http` and `file`. 
- - **Example** - - * HTTP: http://www.repo.com/fileName.jar - - * File: file:///dir/fileName.jar - - **Parameter** - - Parameter| Description - |---|--- - `sourceConfig` | The source configuration object - `pkgUrl` | URL from which pkg can be downloaded - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`createSourceWithUrl`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#createSourceWithUrl-SourceConfig-java.lang.String-). - - - - -```` - -#### Sink - -Create a sink connector. - -````mdx-code-block - - - - -Use the `create` subcommand. - -``` - -$ pulsar-admin sinks create options - -``` - -For more information, see [here](io-cli.md#create-1). - - - - -Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sinkName|operation/registerSink?version=@pulsar:version_number@} - - - - -* Create a sink connector with a **local file**. - - ```java - - void createSink(SinkConfig sinkConfig, - String fileName) - throws PulsarAdminException - - ``` - - **Parameter** - - |Name|Description - |---|--- - `sinkConfig` | The sink configuration object - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`createSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#createSink-SinkConfig-java.lang.String-). - -* Create a sink connector using a **remote file** with a URL from which fun-pkg can be downloaded. - - ```java - - void createSinkWithUrl(SinkConfig sinkConfig, - String pkgUrl) - throws PulsarAdminException - - ``` - - Supported URLs are `http` and `file`. - - **Example** - - * HTTP: http://www.repo.com/fileName.jar - - * File: file:///dir/fileName.jar - - **Parameter** - - Parameter| Description - |---|--- - `sinkConfig` | The sink configuration object - `pkgUrl` | URL from which pkg can be downloaded - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`createSinkWithUrl`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#createSinkWithUrl-SinkConfig-java.lang.String-). - - - - -```` - -### `start` - -You can start a connector using **Admin CLI** or **REST API**. - -#### Source - -Start a source connector. - -````mdx-code-block - - - - -Use the `start` subcommand. - -``` - -$ pulsar-admin sources start options - -``` - -For more information, see [here](io-cli.md#start). - - - - -* Start **all** source connectors. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/start|operation/startSource?version=@pulsar:version_number@} - -* Start a **specified** source connector. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/:instanceId/start|operation/startSource?version=@pulsar:version_number@} - - - - -```` - -#### Sink - -Start a sink connector. - -````mdx-code-block - - - - -Use the `start` subcommand. - -``` - -$ pulsar-admin sinks start options - -``` - -For more information, see [here](io-cli.md#start-1). - - - - -* Start **all** sink connectors. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sinkName/start|operation/startSink?version=@pulsar:version_number@} - -* Start a **specified** sink connector. 
- - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sourceName/:instanceId/start|operation/startSink?version=@pulsar:version_number@} - - - - -```` - -### `localrun` - -You can run a connector locally rather than deploying it on a Pulsar cluster using **Admin CLI**. - -#### Source - -Run a source connector locally. - -````mdx-code-block - - - - -Use the `localrun` subcommand. - -``` - -$ pulsar-admin sources localrun options - -``` - -For more information, see [here](io-cli.md#localrun). - - - - -```` - -#### Sink - -Run a sink connector locally. - -````mdx-code-block - - - - -Use the `localrun` subcommand. - -``` - -$ pulsar-admin sinks localrun options - -``` - -For more information, see [here](io-cli.md#localrun-1). - - - - -```` - -## Monitor a connector - -To monitor a connector, you can perform the following operations: - -* [Get the information of a connector](#get) - -* [Get the list of all running connectors](#list) - -* [Get the current status of a connector](#status) - -### `get` - -You can get the information of a connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Get the information of a source connector. - -````mdx-code-block - - - - -Use the `get` subcommand. - -``` - -$ pulsar-admin sources get options - -``` - -For more information, see [here](io-cli.md#get). - - - - -Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/getSourceInfo?version=@pulsar:version_number@} - - - - -```java - -SourceConfig getSource(String tenant, - String namespace, - String source) - throws PulsarAdminException - -``` - -**Example** - -This is a sourceConfig. - -```java - -{ - "tenant": "tenantName", - "namespace": "namespaceName", - "name": "sourceName", - "className": "className", - "topicName": "topicName", - "configs": {}, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "resources": { - "cpu": 1.0, - "ram": 1073741824, - "disk": 10737418240 - } -} - -``` - -This is a sourceConfig example. - -``` - -{ - "tenant": "public", - "namespace": "default", - "name": "debezium-mysql-source", - "className": "org.apache.pulsar.io.debezium.mysql.DebeziumMysqlSource", - "topicName": "debezium-mysql-topic", - "configs": { - "database.user": "debezium", - "database.server.id": "184054", - "database.server.name": "dbserver1", - "database.port": "3306", - "database.hostname": "localhost", - "database.password": "dbz", - "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650", - "value.converter": "org.apache.kafka.connect.json.JsonConverter", - "database.whitelist": "inventory", - "key.converter": "org.apache.kafka.connect.json.JsonConverter", - "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory", - "pulsar.service.url": "pulsar://127.0.0.1:6650", - "database.history.pulsar.topic": "history-topic2" - }, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "resources": { - "cpu": 1.0, - "ram": 1073741824, - "disk": 10737418240 - } -} - -``` - -**Exception** - -Exception name | Description -|---|--- -`PulsarAdminException.NotAuthorizedException` | You don't have the admin permission -`PulsarAdminException.NotFoundException` | Cluster doesn't exist -`PulsarAdminException` | Unexpected error - -For more information, see [`getSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#getSource-java.lang.String-java.lang.String-java.lang.String-). 
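-
-As an illustration, a minimal sketch of calling this method through `PulsarAdmin` might look like the following; the service URL and the connector name are assumptions for the example.
-
-```java
-
-import org.apache.pulsar.client.admin.PulsarAdmin;
-import org.apache.pulsar.common.io.SourceConfig;
-
-public class GetSourceExample {
-    public static void main(String[] args) throws Exception {
-        // Assumed admin endpoint of a locally running broker.
-        try (PulsarAdmin admin = PulsarAdmin.builder()
-                .serviceHttpUrl("http://localhost:8080")
-                .build()) {
-            SourceConfig config = admin.sources().getSource(
-                    "public", "default", "debezium-mysql-source");
-            System.out.println(config.getClassName());
-        }
-    }
-}
-
-```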
- - - - -```` - -#### Sink - -Get the information of a sink connector. - -````mdx-code-block - - - - -Use the `get` subcommand. - -``` - -$ pulsar-admin sinks get options - -``` - -For more information, see [here](io-cli.md#get-1). - - - - -Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sinks/:tenant/:namespace/:sinkName|operation/getSinkInfo?version=@pulsar:version_number@} - - - - -```java - -SinkConfig getSink(String tenant, - String namespace, - String sink) - throws PulsarAdminException - -``` - -**Example** - -This is a sinkConfig. - -```json - -{ -"tenant": "tenantName", -"namespace": "namespaceName", -"name": "sinkName", -"className": "className", -"inputSpecs": { -"topicName": { - "isRegexPattern": false -} -}, -"configs": {}, -"parallelism": 1, -"processingGuarantees": "ATLEAST_ONCE", -"retainOrdering": false, -"autoAck": true -} - -``` - -This is a sinkConfig example. - -```json - -{ - "tenant": "public", - "namespace": "default", - "name": "pulsar-postgres-jdbc-sink", - "className": "org.apache.pulsar.io.jdbc.PostgresJdbcAutoSchemaSink", - "inputSpecs": { - "pulsar-postgres-jdbc-sink-topic": { - "isRegexPattern": false - } - }, - "configs": { - "password": "password", - "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink", - "userName": "postgres", - "tableName": "pulsar_postgres_jdbc_sink" - }, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "autoAck": true -} - -``` - -**Parameter description** - -Name| Description -|---|--- -`tenant` | Tenant name -`namespace` | Namespace name -`sink` | Sink name - -For more information, see [`getSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#getSink-java.lang.String-java.lang.String-java.lang.String-). - - - - -```` - -### `list` - -You can get the list of all running connectors using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Get the list of all running source connectors. - -````mdx-code-block - - - - -Use the `list` subcommand. - -``` - -$ pulsar-admin sources list options - -``` - -For more information, see [here](io-cli.md#list). - - - - -Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sources/:tenant/:namespace|operation/listSources?version=@pulsar:version_number@} - - - - -```java - -List listSources(String tenant, - String namespace) - throws PulsarAdminException - -``` - -**Response example** - -```java - -["f1", "f2", "f3"] - -``` - -**Exception** - -Exception name | Description -|---|--- -`PulsarAdminException.NotAuthorizedException` | You don't have the admin permission -`PulsarAdminException` | Unexpected error - -For more information, see [`listSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#listSources-java.lang.String-java.lang.String-). - - - - -```` - -#### Sink - -Get the list of all running sink connectors. - -````mdx-code-block - - - - -Use the `list` subcommand. - -``` - -$ pulsar-admin sinks list options - -``` - -For more information, see [here](io-cli.md#list-1). 
- - - - -Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sinks/:tenant/:namespace|operation/listSinks?version=@pulsar:version_number@} - - - - -```java - -List listSinks(String tenant, - String namespace) - throws PulsarAdminException - -``` - -**Response example** - -```java - -["f1", "f2", "f3"] - -``` - -**Exception** - -Exception name | Description -|---|--- -`PulsarAdminException.NotAuthorizedException` | You don't have the admin permission -`PulsarAdminException` | Unexpected error - -For more information, see [`listSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#listSinks-java.lang.String-java.lang.String-). - - - - -```` - -### `status` - -You can get the current status of a connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Get the current status of a source connector. - -````mdx-code-block - - - - -Use the `status` subcommand. - -``` - -$ pulsar-admin sources status options - -``` - -For more information, see [here](io-cli.md#status). - - - - -* Get the current status of **all** source connectors. - - Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sources/:tenant/:namespace/:sourceName/status|operation/getSourceStatus?version=@pulsar:version_number@} - -* Gets the current status of a **specified** source connector. - - Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sources/:tenant/:namespace/:sourceName/:instanceId/status|operation/getSourceStatus?version=@pulsar:version_number@} - - - - -* Get the current status of **all** source connectors. - - ```java - - SourceStatus getSourceStatus(String tenant, - String namespace, - String source) - throws PulsarAdminException - - ``` - - **Parameter** - - Parameter| Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `sink` | Source name - - **Exception** - - Name | Description - |---|--- - `PulsarAdminException` | Unexpected error - - For more information, see [`getSourceStatus`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#getSource-java.lang.String-java.lang.String-java.lang.String-). - -* Gets the current status of a **specified** source connector. - - ```java - - SourceStatus.SourceInstanceStatus.SourceInstanceStatusData getSourceStatus(String tenant, - String namespace, - String source, - int id) - throws PulsarAdminException - - ``` - - **Parameter** - - Parameter| Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `sink` | Source name - `id` | Source instanceID - - **Exception** - - Exception name | Description - |---|--- - `PulsarAdminException` | Unexpected error - - For more information, see [`getSourceStatus`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#getSourceStatus-java.lang.String-java.lang.String-java.lang.String-int-). - - - - -```` - -#### Sink - -Get the current status of a Pulsar sink connector. - -````mdx-code-block - - - - -Use the `status` subcommand. - -``` - -$ pulsar-admin sinks status options - -``` - -For more information, see [here](io-cli.md#status-1). - - - - -* Get the current status of **all** sink connectors. - - Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sinks/:tenant/:namespace/:sinkName/status|operation/getSinkStatus?version=@pulsar:version_number@} - -* Gets the current status of a **specified** sink connector. 
- - Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sinks/:tenant/:namespace/:sourceName/:instanceId/status|operation/getSinkInstanceStatus?version=@pulsar:version_number@} - - - - -* Get the current status of **all** sink connectors. - - ```java - - SinkStatus getSinkStatus(String tenant, - String namespace, - String sink) - throws PulsarAdminException - - ``` - - **Parameter** - - Parameter| Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `sink` | Source name - - **Exception** - - Exception name | Description - |---|--- - `PulsarAdminException` | Unexpected error - - For more information, see [`getSinkStatus`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#getSinkStatus-java.lang.String-java.lang.String-java.lang.String-). - -* Gets the current status of a **specified** source connector. - - ```java - - SinkStatus.SinkInstanceStatus.SinkInstanceStatusData getSinkStatus(String tenant, - String namespace, - String sink, - int id) - throws PulsarAdminException - - ``` - - **Parameter** - - Parameter| Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `sink` | Source name - `id` | Sink instanceID - - **Exception** - - Exception name | Description - |---|--- - `PulsarAdminException` | Unexpected error - - For more information, see [`getSinkStatusWithInstanceID`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#getSinkStatus-java.lang.String-java.lang.String-java.lang.String-int-). - - - - -```` - -## Update a connector - -### `update` - -You can update a running connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Update a running Pulsar source connector. - -````mdx-code-block - - - - -Use the `update` subcommand. - -``` - -$ pulsar-admin sources update options - -``` - -For more information, see [here](io-cli.md#update). - - - - -Send a `PUT` request to this endpoint: {@inject: endpoint|PUT|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/updateSource?version=@pulsar:version_number@} - - - - -* Update a running source connector with a **local file**. - - ```java - - void updateSource(SourceConfig sourceConfig, - String fileName) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - |`sourceConfig` | The source configuration object - - **Exception** - - |Name|Description| - |---|--- - |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission - | `PulsarAdminException.NotFoundException` | Cluster doesn't exist - | `PulsarAdminException` | Unexpected error - - For more information, see [`updateSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#updateSource-SourceConfig-java.lang.String-). - -* Update a source connector using a **remote file** with a URL from which fun-pkg can be downloaded. - - ```java - - void updateSourceWithUrl(SourceConfig sourceConfig, - String pkgUrl) - throws PulsarAdminException - - ``` - - Supported URLs are `http` and `file`. 
- - **Example** - - * HTTP: http://www.repo.com/fileName.jar - - * File: file:///dir/fileName.jar - - **Parameter** - - | Name | Description - |---|--- - | `sourceConfig` | The source configuration object - | `pkgUrl` | URL from which pkg can be downloaded - - **Exception** - - |Name|Description| - |---|--- - |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission - | `PulsarAdminException.NotFoundException` | Cluster doesn't exist - | `PulsarAdminException` | Unexpected error - -For more information, see [`createSourceWithUrl`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#updateSourceWithUrl-SourceConfig-java.lang.String-). - - - - -```` - -#### Sink - -Update a running Pulsar sink connector. - -````mdx-code-block - - - - -Use the `update` subcommand. - -``` - -$ pulsar-admin sinks update options - -``` - -For more information, see [here](io-cli.md#update-1). - - - - -Send a `PUT` request to this endpoint: {@inject: endpoint|PUT|/admin/v3/sinks/:tenant/:namespace/:sinkName|operation/updateSink?version=@pulsar:version_number@} - - - - -* Update a running sink connector with a **local file**. - - ```java - - void updateSink(SinkConfig sinkConfig, - String fileName) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - |`sinkConfig` | The sink configuration object - - **Exception** - - |Name|Description| - |---|--- - |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission - | `PulsarAdminException.NotFoundException` | Cluster doesn't exist - | `PulsarAdminException` | Unexpected error - - For more information, see [`updateSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#updateSink-SinkConfig-java.lang.String-). - -* Update a sink connector using a **remote file** with a URL from which fun-pkg can be downloaded. - - ```java - - void updateSinkWithUrl(SinkConfig sinkConfig, - String pkgUrl) - throws PulsarAdminException - - ``` - - Supported URLs are `http` and `file`. - - **Example** - - * HTTP: http://www.repo.com/fileName.jar - - * File: file:///dir/fileName.jar - - **Parameter** - - | Name | Description - |---|--- - | `sinkConfig` | The sink configuration object - | `pkgUrl` | URL from which pkg can be downloaded - - **Exception** - - |Name|Description| - |---|--- - |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission - |`PulsarAdminException.NotFoundException` | Cluster doesn't exist - |`PulsarAdminException` | Unexpected error - -For more information, see [`updateSinkWithUrl`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#updateSinkWithUrl-SinkConfig-java.lang.String-). - - - - -```` - -## Stop a connector - -### `stop` - -You can stop a connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Stop a source connector. - -````mdx-code-block - - - - -Use the `stop` subcommand. - -``` - -$ pulsar-admin sources stop options - -``` - -For more information, see [here](io-cli.md#stop). - - - - -* Stop **all** source connectors. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/stopSource?version=@pulsar:version_number@} - -* Stop a **specified** source connector. 
-
-  Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/:instanceId/stop|operation/stopSource?version=@pulsar:version_number@}
-
-
-
-
-* Stop **all** source connectors.
-
-  ```java
-
-  void stopSource(String tenant,
-                  String namespace,
-                  String source)
-          throws PulsarAdminException
-
-  ```
-
-  **Parameter**
-
-  | Name | Description
-  |---|---
-  `tenant` | Tenant name
-  `namespace` | Namespace name
-  `source` | Source name
-
-  **Exception**
-
-  |Name|Description|
-  |---|---
-  | `PulsarAdminException` | Unexpected error
-
-  For more information, see [`stopSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#stopSource-java.lang.String-java.lang.String-java.lang.String-).
-
-* Stop a **specified** source connector.
-
-  ```java
-
-  void stopSource(String tenant,
-                  String namespace,
-                  String source,
-                  int instanceId)
-          throws PulsarAdminException
-
-  ```
-
-  **Parameter**
-
-  | Name | Description
-  |---|---
-  `tenant` | Tenant name
-  `namespace` | Namespace name
-  `source` | Source name
-  `instanceId` | Source instanceID
-
-  **Exception**
-
-  |Name|Description|
-  |---|---
-  | `PulsarAdminException` | Unexpected error
-
-  For more information, see [`stopSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#stopSource-java.lang.String-java.lang.String-java.lang.String-int-).
-
-
-
-
-````
-
-#### Sink
-
-Stop a sink connector.
-
-````mdx-code-block
-
-
-
-
-Use the `stop` subcommand.
-
-```
-
-$ pulsar-admin sinks stop options
-
-```
-
-For more information, see [here](io-cli.md#stop-1).
-
-
-
-
-* Stop **all** sink connectors.
-
-  Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sinkName/stop|operation/stopSink?version=@pulsar:version_number@}
-
-* Stop a **specified** sink connector.
-
-  Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sinkName/:instanceId/stop|operation/stopSink?version=@pulsar:version_number@}
-
-
-
-
-* Stop **all** sink connectors.
-
-  ```java
-
-  void stopSink(String tenant,
-                String namespace,
-                String sink)
-          throws PulsarAdminException
-
-  ```
-
-  **Parameter**
-
-  | Name | Description
-  |---|---
-  `tenant` | Tenant name
-  `namespace` | Namespace name
-  `sink` | Sink name
-
-  **Exception**
-
-  |Name|Description|
-  |---|---
-  | `PulsarAdminException` | Unexpected error
-
-  For more information, see [`stopSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#stopSink-java.lang.String-java.lang.String-java.lang.String-).
-
-* Stop a **specified** sink connector.
-
-  ```java
-
-  void stopSink(String tenant,
-                String namespace,
-                String sink,
-                int instanceId)
-          throws PulsarAdminException
-
-  ```
-
-  **Parameter**
-
-  | Name | Description
-  |---|---
-  `tenant` | Tenant name
-  `namespace` | Namespace name
-  `sink` | Sink name
-  `instanceId` | Sink instanceID
-
-  **Exception**
-
-  |Name|Description|
-  |---|---
-  | `PulsarAdminException` | Unexpected error
-
-  For more information, see [`stopSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#stopSink-java.lang.String-java.lang.String-java.lang.String-int-).
-
-
-
-
-````
-
-## Restart a connector
-
-### `restart`
-
-You can restart a connector using **Admin CLI**, **REST API** or **JAVA admin API**.
-
-#### Source
-
-Restart a source connector.
-
-````mdx-code-block
-
-
-
-
-Use the `restart` subcommand.
- -``` - -$ pulsar-admin sources restart options - -``` - -For more information, see [here](io-cli.md#restart). - - - - -* Restart **all** source connectors. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/restart|operation/restartSource?version=@pulsar:version_number@} - -* Restart a **specified** source connector. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/:instanceId/restart|operation/restartSource?version=@pulsar:version_number@} - - - - -* Restart **all** source connectors. - - ```java - - void restartSource(String tenant, - String namespace, - String source) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `source` | Source name - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`restartSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#restartSource-java.lang.String-java.lang.String-java.lang.String-). - -* Restart a **specified** source connector. - - ```java - - void restartSource(String tenant, - String namespace, - String source, - int instanceId) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `source` | Source name - `instanceId` | Source instanceID - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`restartSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#restartSource-java.lang.String-java.lang.String-java.lang.String-int-). - - - - -```` - -#### Sink - -Restart a sink connector. - -````mdx-code-block - - - - -Use the `restart` subcommand. - -``` - -$ pulsar-admin sinks restart options - -``` - -For more information, see [here](io-cli.md#restart-1). - - - - -* Restart **all** sink connectors. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sinkName/restart|operation/restartSource?version=@pulsar:version_number@} - -* Restart a **specified** sink connector. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sinkName/:instanceId/restart|operation/restartSource?version=@pulsar:version_number@} - - - - -* Restart all Pulsar sink connectors. - - ```java - - void restartSink(String tenant, - String namespace, - String sink) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `sink` | Sink name - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`restartSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#restartSink-java.lang.String-java.lang.String-java.lang.String-). - -* Restart a **specified** sink connector. 
-
-  ```java
-
-  void restartSink(String tenant,
-                   String namespace,
-                   String sink,
-                   int instanceId)
-      throws PulsarAdminException
-
-  ```
-
-  **Parameter**
-
-  | Name | Description
-  |---|---
-  `tenant` | Tenant name
-  `namespace` | Namespace name
-  `sink` | Sink name
-  `instanceId` | Sink instanceID
-
-  **Exception**
-
-  |Name|Description|
-  |---|---
-  | `PulsarAdminException` | Unexpected error
-
-  For more information, see [`restartSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#restartSink-java.lang.String-java.lang.String-java.lang.String-int-).
-
-
-
-
-````
-
-## Delete a connector
-
-### `delete`
-
-You can delete a connector using **Admin CLI**, **REST API** or **JAVA admin API**.
-
-#### Source
-
-Delete a source connector.
-
-````mdx-code-block
-
-
-
-
-Use the `delete` subcommand.
-
-```
-
-$ pulsar-admin sources delete options
-
-```
-
-For more information, see [here](io-cli.md#delete).
-
-
-
-
-Delete a Pulsar source connector.
-
-Send a `DELETE` request to this endpoint: {@inject: endpoint|DELETE|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/deregisterSource?version=@pulsar:version_number@}
-
-
-
-
-Delete a source connector.
-
-```java
-
-void deleteSource(String tenant,
-                  String namespace,
-                  String source)
-     throws PulsarAdminException
-
-```
-
-**Parameter**
-
-| Name | Description
-|---|---
-`tenant` | Tenant name
-`namespace` | Namespace name
-`source` | Source name
-
-**Exception**
-
-|Name|Description|
-|---|---
-|`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission
-| `PulsarAdminException.NotFoundException` | Source doesn't exist
-| `PulsarAdminException.PreconditionFailedException` | Precondition for the operation isn't met
-| `PulsarAdminException` | Unexpected error
-
-For more information, see [`deleteSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#deleteSource-java.lang.String-java.lang.String-java.lang.String-).
-
-
-
-
-````
-
-#### Sink
-
-Delete a sink connector.
-
-````mdx-code-block
-
-
-
-
-Use the `delete` subcommand.
-
-```
-
-$ pulsar-admin sinks delete options
-
-```
-
-For more information, see [here](io-cli.md#delete-1).
-
-
-
-
-Delete a sink connector.
-
-Send a `DELETE` request to this endpoint: {@inject: endpoint|DELETE|/admin/v3/sinks/:tenant/:namespace/:sinkName|operation/deregisterSink?version=@pulsar:version_number@}
-
-
-
-
-Delete a Pulsar sink connector.
-
-```java
-
-void deleteSink(String tenant,
-                String namespace,
-                String sink)
-    throws PulsarAdminException
-
-```
-
-**Parameter**
-
-| Name | Description
-|---|---
-`tenant` | Tenant name
-`namespace` | Namespace name
-`sink` | Sink name
-
-**Exception**
-
-|Name|Description|
-|---|---
-|`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission
-| `PulsarAdminException.NotFoundException` | Sink doesn't exist
-| `PulsarAdminException.PreconditionFailedException` | Precondition for the operation isn't met
-| `PulsarAdminException` | Unexpected error
-
-For more information, see [`deleteSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#deleteSink-java.lang.String-java.lang.String-java.lang.String-).
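-
-As a usage sketch, deleting a sink such as the `pulsar-postgres-jdbc-sink` example earlier on this page could look like the following; the admin service URL here is an assumption for a local deployment.
-
-```java
-
-import org.apache.pulsar.client.admin.PulsarAdmin;
-
-PulsarAdmin admin = PulsarAdmin.builder()
-        .serviceHttpUrl("http://localhost:8080") // assumed local admin endpoint
-        .build();
-
-// Remove the sink "pulsar-postgres-jdbc-sink" from the public/default namespace
-admin.sinks().deleteSink("public", "default", "pulsar-postgres-jdbc-sink");
-
-admin.close();
-
-```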
- - - - -```` diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/kubernetes-helm.md b/site2/website/versioned_docs/version-2.8.2-deprecated/kubernetes-helm.md deleted file mode 100644 index ea92a0968cd7d0..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/kubernetes-helm.md +++ /dev/null @@ -1,441 +0,0 @@ ---- -id: kubernetes-helm -title: Get started in Kubernetes -sidebar_label: "Run Pulsar in Kubernetes" -original_id: kubernetes-helm ---- - -This section guides you through every step of installing and running Apache Pulsar with Helm on Kubernetes quickly, including the following sections: - -- Install the Apache Pulsar on Kubernetes using Helm -- Start and stop Apache Pulsar -- Create topics using `pulsar-admin` -- Produce and consume messages using Pulsar clients -- Monitor Apache Pulsar status with Prometheus and Grafana - -For deploying a Pulsar cluster for production usage, read the documentation on [how to configure and install a Pulsar Helm chart](helm-deploy.md). - -## Prerequisite - -- Kubernetes server 1.14.0+ -- kubectl 1.14.0+ -- Helm 3.0+ - -:::tip - -For the following steps, step 2 and step 3 are for **developers** and step 4 and step 5 are for **administrators**. - -::: - -## Step 0: Prepare a Kubernetes cluster - -Before installing a Pulsar Helm chart, you have to create a Kubernetes cluster. You can follow [the instructions](helm-prepare.md) to prepare a Kubernetes cluster. - -We use [Minikube](https://minikube.sigs.k8s.io/docs/start/) in this quick start guide. To prepare a Kubernetes cluster, follow these steps: - -1. Create a Kubernetes cluster on Minikube. - - ```bash - - minikube start --memory=8192 --cpus=4 --kubernetes-version= - - ``` - - The `` can be any [Kubernetes version supported by your Minikube installation](https://minikube.sigs.k8s.io/docs/reference/configuration/kubernetes/), such as `v1.16.1`. - -2. Set `kubectl` to use Minikube. - - ```bash - - kubectl config use-context minikube - - ``` - -3. To use the [Kubernetes Dashboard](https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/) with the local Kubernetes cluster on Minikube, enter the command below: - - ```bash - - minikube dashboard - - ``` - - The command automatically triggers opening a webpage in your browser. - -## Step 1: Install Pulsar Helm chart - -1. Add Pulsar charts repo. - - ```bash - - helm repo add apache https://pulsar.apache.org/charts - - ``` - - ```bash - - helm repo update - - ``` - -2. Clone the Pulsar Helm chart repository. - - ```bash - - git clone https://github.com/apache/pulsar-helm-chart - cd pulsar-helm-chart - - ``` - -3. Run the script `prepare_helm_release.sh` to create secrets required for installing the Apache Pulsar Helm chart. The username `pulsar` and password `pulsar` are used for logging into the Grafana dashboard and Pulsar Manager. - - ```bash - - ./scripts/pulsar/prepare_helm_release.sh \ - -n pulsar \ - -k pulsar-mini \ - -c - - ``` - -4. Use the Pulsar Helm chart to install a Pulsar cluster to Kubernetes. - - :::note - - You need to specify `--set initialize=true` when installing Pulsar the first time. This command installs and starts Apache Pulsar. - - ::: - - ```bash - - helm install \ - --values examples/values-minikube.yaml \ - --set initialize=true \ - --namespace pulsar \ - pulsar-mini apache/pulsar - - ``` - -5. Check the status of all pods. 
- - ```bash - - kubectl get pods -n pulsar - - ``` - - If all pods start up successfully, you can see that the `STATUS` is changed to `Running` or `Completed`. - - **Output** - - ```bash - - NAME READY STATUS RESTARTS AGE - pulsar-mini-bookie-0 1/1 Running 0 9m27s - pulsar-mini-bookie-init-5gphs 0/1 Completed 0 9m27s - pulsar-mini-broker-0 1/1 Running 0 9m27s - pulsar-mini-grafana-6b7bcc64c7-4tkxd 1/1 Running 0 9m27s - pulsar-mini-prometheus-5fcf5dd84c-w8mgz 1/1 Running 0 9m27s - pulsar-mini-proxy-0 1/1 Running 0 9m27s - pulsar-mini-pulsar-init-t7cqt 0/1 Completed 0 9m27s - pulsar-mini-pulsar-manager-9bcbb4d9f-htpcs 1/1 Running 0 9m27s - pulsar-mini-toolset-0 1/1 Running 0 9m27s - pulsar-mini-zookeeper-0 1/1 Running 0 9m27s - - ``` - -6. Check the status of all services in the namespace `pulsar`. - - ```bash - - kubectl get services -n pulsar - - ``` - - **Output** - - ```bash - - NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - pulsar-mini-bookie ClusterIP None 3181/TCP,8000/TCP 11m - pulsar-mini-broker ClusterIP None 8080/TCP,6650/TCP 11m - pulsar-mini-grafana LoadBalancer 10.106.141.246 3000:31905/TCP 11m - pulsar-mini-prometheus ClusterIP None 9090/TCP 11m - pulsar-mini-proxy LoadBalancer 10.97.240.109 80:32305/TCP,6650:31816/TCP 11m - pulsar-mini-pulsar-manager LoadBalancer 10.103.192.175 9527:30190/TCP 11m - pulsar-mini-toolset ClusterIP None 11m - pulsar-mini-zookeeper ClusterIP None 2888/TCP,3888/TCP,2181/TCP 11m - - ``` - -## Step 2: Use pulsar-admin to create Pulsar tenants/namespaces/topics - -`pulsar-admin` is the CLI (command-Line Interface) tool for Pulsar. In this step, you can use `pulsar-admin` to create resources, including tenants, namespaces, and topics. - -1. Enter the `toolset` container. - - ```bash - - kubectl exec -it -n pulsar pulsar-mini-toolset-0 -- /bin/bash - - ``` - -2. In the `toolset` container, create a tenant named `apache`. - - ```bash - - bin/pulsar-admin tenants create apache - - ``` - - Then you can list the tenants to see if the tenant is created successfully. - - ```bash - - bin/pulsar-admin tenants list - - ``` - - You should see a similar output as below. The tenant `apache` has been successfully created. - - ```bash - - "apache" - "public" - "pulsar" - - ``` - -3. In the `toolset` container, create a namespace named `pulsar` in the tenant `apache`. - - ```bash - - bin/pulsar-admin namespaces create apache/pulsar - - ``` - - Then you can list the namespaces of tenant `apache` to see if the namespace is created successfully. - - ```bash - - bin/pulsar-admin namespaces list apache - - ``` - - You should see a similar output as below. The namespace `apache/pulsar` has been successfully created. - - ```bash - - "apache/pulsar" - - ``` - -4. In the `toolset` container, create a topic `test-topic` with `4` partitions in the namespace `apache/pulsar`. - - ```bash - - bin/pulsar-admin topics create-partitioned-topic apache/pulsar/test-topic -p 4 - - ``` - -5. In the `toolset` container, list all the partitioned topics in the namespace `apache/pulsar`. - - ```bash - - bin/pulsar-admin topics list-partitioned-topics apache/pulsar - - ``` - - Then you can see all the partitioned topics in the namespace `apache/pulsar`. - - ```bash - - "persistent://apache/pulsar/test-topic" - - ``` - -## Step 3: Use Pulsar client to produce and consume messages - -You can use the Pulsar client to create producers and consumers to produce and consume messages. - -By default, the Pulsar Helm chart exposes the Pulsar cluster through a Kubernetes `LoadBalancer`. 
In Minikube, you can use the following command to check the proxy service. - -```bash - -kubectl get services -n pulsar | grep pulsar-mini-proxy - -``` - -You will see a similar output as below. - -```bash - -pulsar-mini-proxy LoadBalancer 10.97.240.109 80:32305/TCP,6650:31816/TCP 28m - -``` - -This output tells what are the node ports that Pulsar cluster's binary port and HTTP port are mapped to. The port after `80:` is the HTTP port while the port after `6650:` is the binary port. - -Then you can find the IP address and exposed ports of your Minikube server by running the following command. - -```bash - -minikube service pulsar-mini-proxy -n pulsar - -``` - -**Output** - -```bash - -|-----------|-------------------|-------------|-------------------------| -| NAMESPACE | NAME | TARGET PORT | URL | -|-----------|-------------------|-------------|-------------------------| -| pulsar | pulsar-mini-proxy | http/80 | http://172.17.0.4:32305 | -| | | pulsar/6650 | http://172.17.0.4:31816 | -|-----------|-------------------|-------------|-------------------------| -🏃 Starting tunnel for service pulsar-mini-proxy. -|-----------|-------------------|-------------|------------------------| -| NAMESPACE | NAME | TARGET PORT | URL | -|-----------|-------------------|-------------|------------------------| -| pulsar | pulsar-mini-proxy | | http://127.0.0.1:61853 | -| | | | http://127.0.0.1:61854 | -|-----------|-------------------|-------------|------------------------| - -``` - -At this point, you can get the service URLs to connect to your Pulsar client. Here are URL examples: - -``` - -webServiceUrl=http://127.0.0.1:61853/ -brokerServiceUrl=pulsar://127.0.0.1:61854/ - -``` - -Then you can proceed with the following steps: - -1. Download the Apache Pulsar tarball from the [downloads page](https://pulsar.apache.org/download/). - -2. Decompress the tarball based on your download file. - - ```bash - - tar -xf .tar.gz - - ``` - -3. Expose `PULSAR_HOME`. - - (1) Enter the directory of the decompressed download file. - - (2) Expose `PULSAR_HOME` as the environment variable. - - ```bash - - export PULSAR_HOME=$(pwd) - - ``` - -4. Configure the Pulsar client. - - In the `${PULSAR_HOME}/conf/client.conf` file, replace `webServiceUrl` and `brokerServiceUrl` with the service URLs you get from the above steps. - -5. Create a subscription to consume messages from `apache/pulsar/test-topic`. - - ```bash - - bin/pulsar-client consume -s sub apache/pulsar/test-topic -n 0 - - ``` - -6. Open a new terminal. In the new terminal, create a producer and send 10 messages to the `test-topic` topic. - - ```bash - - bin/pulsar-client produce apache/pulsar/test-topic -m "---------hello apache pulsar-------" -n 10 - - ``` - -7. Verify the results. - - - From the producer side - - **Output** - - The messages have been produced successfully. - - ```bash - - 18:15:15.489 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 10 messages successfully produced - - ``` - - - From the consumer side - - **Output** - - At the same time, you can receive the messages as below. 
- - ```bash - - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - - ``` - -## Step 4: Use Pulsar Manager to manage the cluster - -[Pulsar Manager](administration-pulsar-manager.md) is a web-based GUI management tool for managing and monitoring Pulsar. - -1. By default, the `Pulsar Manager` is exposed as a separate `LoadBalancer`. You can open the Pulsar Manager UI using the following command: - - ```bash - - minikube service -n pulsar pulsar-mini-pulsar-manager - - ``` - -2. The Pulsar Manager UI will be open in your browser. You can use the username `pulsar` and password `pulsar` to log into Pulsar Manager. - -3. In Pulsar Manager UI, you can create an environment. - - - Click `New Environment` button in the top-left corner. - - Type `pulsar-mini` for the field `Environment Name` in the popup window. - - Type `http://pulsar-mini-broker:8080` for the field `Service URL` in the popup window. - - Click `Confirm` button in the popup window. - -4. After successfully creating an environment, you are redirected to the `tenants` page of that environment. Then you can create `tenants`, `namespaces` and `topics` using the Pulsar Manager. - -## Step 5: Use Prometheus and Grafana to monitor cluster - -Grafana is an open-source visualization tool, which can be used for visualizing time series data into dashboards. - -1. By default, the Grafana is exposed as a separate `LoadBalancer`. You can open the Grafana UI using the following command: - - ```bash - - minikube service pulsar-mini-grafana -n pulsar - - ``` - -2. The Grafana UI is open in your browser. You can use the username `pulsar` and password `pulsar` to log into the Grafana Dashboard. - -3. You can view dashboards for different components of a Pulsar cluster. diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/performance-pulsar-perf.md b/site2/website/versioned_docs/version-2.8.2-deprecated/performance-pulsar-perf.md deleted file mode 100644 index ed00975dc119fc..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/performance-pulsar-perf.md +++ /dev/null @@ -1,227 +0,0 @@ ---- -id: performance-pulsar-perf -title: Pulsar Perf -sidebar_label: "Pulsar Perf" -original_id: performance-pulsar-perf ---- - -The Pulsar Perf is a built-in performance test tool for Apache Pulsar. You can use the Pulsar Perf to test message writing or reading performance. For detailed information about performance tuning, see [here](https://streamnative.io/en/blog/tech/2021-01-14-pulsar-architecture-performance-tuning). - -## Produce messages - -This example shows how the Pulsar Perf produces messages with default options. For all configuration options available for the `pulsar-perf produce` command, see [configuration options](#configuration-options-for-pulsar-perf-produce). - -``` - -bin/pulsar-perf produce my-topic - -``` - -After the command is executed, the test data is continuously output on the Console. 
- -**Output** - -``` - -19:53:31.459 [pulsar-perf-producer-exec-1-1] INFO org.apache.pulsar.testclient.PerformanceProducer - Created 1 producers -19:53:31.482 [pulsar-timer-5-1] WARN com.scurrilous.circe.checksum.Crc32cIntChecksum - Failed to load Circe JNI library. Falling back to Java based CRC32c provider -19:53:40.861 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 93.7 msg/s --- 0.7 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.575 ms - med: 3.460 - 95pct: 4.790 - 99pct: 5.308 - 99.9pct: 5.834 - 99.99pct: 6.609 - Max: 6.609 -19:53:50.909 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.437 ms - med: 3.328 - 95pct: 4.656 - 99pct: 5.071 - 99.9pct: 5.519 - 99.99pct: 5.588 - Max: 5.588 -19:54:00.926 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.376 ms - med: 3.276 - 95pct: 4.520 - 99pct: 4.939 - 99.9pct: 5.440 - 99.99pct: 5.490 - Max: 5.490 -19:54:10.940 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.298 ms - med: 3.220 - 95pct: 4.474 - 99pct: 4.926 - 99.9pct: 5.645 - 99.99pct: 5.654 - Max: 5.654 -19:54:20.956 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.1 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.308 ms - med: 3.199 - 95pct: 4.532 - 99pct: 4.871 - 99.9pct: 5.291 - 99.99pct: 5.323 - Max: 5.323 -19:54:30.972 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.249 ms - med: 3.144 - 95pct: 4.437 - 99pct: 4.970 - 99.9pct: 5.329 - 99.99pct: 5.414 - Max: 5.414 -19:54:40.987 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.435 ms - med: 3.361 - 95pct: 4.772 - 99pct: 5.150 - 99.9pct: 5.373 - 99.99pct: 5.837 - Max: 5.837 -^C19:54:44.325 [Thread-1] INFO org.apache.pulsar.testclient.PerformanceProducer - Aggregated throughput stats --- 7286 records sent --- 99.140 msg/s --- 0.775 Mbit/s -19:54:44.336 [Thread-1] INFO org.apache.pulsar.testclient.PerformanceProducer - Aggregated latency stats --- Latency: mean: 3.383 ms - med: 3.293 - 95pct: 4.610 - 99pct: 5.059 - 99.9pct: 5.588 - 99.99pct: 5.837 - 99.999pct: 6.609 - Max: 6.609 - -``` - -From the above test data, you can get the throughput statistics and the write latency statistics. The aggregated statistics is printed when the Pulsar Perf is stopped. You can press **Ctrl**+**C** to stop the Pulsar Perf. After the Pulsar Perf is stopped, the [HdrHistogram](http://hdrhistogram.github.io/HdrHistogram/) formatted test result appears under your directory. The document looks like `perf-producer-1589370810837.hgrm`. You can also check the test result through [HdrHistogram Plotter](https://hdrhistogram.github.io/HdrHistogram/plotFiles.html). For details about how to check the test result through [HdrHistogram Plotter](https://hdrhistogram.github.io/HdrHistogram/plotFiles.html), see [HdrHistogram Plotter](#hdrhistogram-plotter). - -### Configuration options for `pulsar-perf produce` - -You can get all options by executing the `bin/pulsar-perf produce -h` command. Therefore, you can modify these options as required. 
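-
-For example, the following invocation (an illustrative sketch; the topic name and values are arbitrary) publishes 512-byte messages at a target rate of 1000 msg/s for 60 seconds, using the `rate`, `size`, and `test-duration` options from the table below:
-
-```
-
-bin/pulsar-perf produce my-topic --rate 1000 --size 512 --test-duration 60
-
-```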
- -The following table lists configuration options available for the `pulsar-perf produce` command. - -| Option | Description | Default value| -|----|----|----| -| access-mode | Set the producer access mode. Valid values are `Shared`, `Exclusive` and `WaitForExclusive`. | Shared | -| admin-url | Set the Pulsar admin URL. | N/A | -| auth-params | Set the authentication parameters, whose format is determined by the implementation of the `configure` method in the authentication plugin class, such as "key1:val1,key2:val2" or "{"key1":"val1","key2":"val2"}". | N/A | -| auth_plugin | Set the authentication plugin class name. | N/A | -| listener-name | Set the listener name for the broker. | N/A | -| batch-max-bytes | Set the maximum number of bytes for each batch. | 4194304 | -| batch-max-messages | Set the maximum number of messages for each batch. | 1000 | -| batch-time-window | Set a window for a batch of messages. | 1 ms | -| busy-wait | Enable or disable Busy-Wait on the Pulsar client. | false | -| chunking | Configure whether to split the message and publish in chunks if message size is larger than allowed max size. | false | -| compression | Compress the message payload. | N/A | -| conf-file | Set the configuration file. | N/A | -| delay | Mark messages with a given delay. | 0s | -| encryption-key-name | Set the name of the public key used to encrypt the payload. | N/A | -| encryption-key-value-file | Set the file which contains the public key used to encrypt the payload. | N/A | -| exit-on-failure | Configure whether to exit from the process on publish failure. | false | -| format-class | Set the custom formatter class name. | org.apache.pulsar.testclient.DefaultMessageFormatter | -| format-payload | Configure whether to format %i as a message index in the stream from producer and/or %t as the timestamp nanoseconds. | false | -| help | Configure the help message. | false | -| max-connections | Set the maximum number of TCP connections to a single broker. | 100 | -| max-outstanding | Set the maximum number of outstanding messages. | 1000 | -| max-outstanding-across-partitions | Set the maximum number of outstanding messages across partitions. | 50000 | -| message-key-generation-mode | Set the generation mode of message key. Valid options are `autoIncrement`, `random`. | N/A | -| num-io-threads | Set the number of threads to be used for handling connections to brokers. | 1 | -| num-messages | Set the number of messages to be published in total. If it is set to 0, it keeps publishing messages. | 0 | -| num-producers | Set the number of producers for each topic. | 1 | -| num-test-threads | Set the number of test threads. | 1 | -| num-topic | Set the number of topics. | 1 | -| partitions | Configure whether to create partitioned topics with the given number of partitions. | N/A | -| payload-delimiter | Set the delimiter used to split lines when using payload from a file. | \n | -| payload-file | Use the payload from an UTF-8 encoded text file and a payload is randomly selected when messages are published. | N/A | -| producer-name | Set the producer name. | N/A | -| rate | Set the publish rate of messages across topics. | 100 | -| send-timeout | Set the sendTimeout. | 0 | -| separator | Set the separator between the topic and topic number. | - | -| service-url | Set the Pulsar service URL. | | -| size | Set the message size. | 1024 bytes | -| stats-interval-seconds | Set the statistics interval. If it is set to 0, statistics is disabled. | 0 | -| test-duration | Set the test duration. 
If it is set to 0, it keeps publishing tests. | 0s | -| trust-cert-file | Set the path for the trusted TLS certificate file. | | | -| warmup-time | Set the warm-up time. | 1s | -| tls-allow-insecure | Set the allowed insecure TLS connection. | N/A | - -## Consume messages - -This example shows how the Pulsar Perf consumes messages with default options. - -``` - -bin/pulsar-perf consume my-topic - -``` - -After the command is executed, the test data is continuously output on the Console. - -**Output** - -``` - -20:35:37.071 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Start receiving from 1 consumers on 1 topics -20:35:41.150 [pulsar-client-io-1-9] WARN com.scurrilous.circe.checksum.Crc32cIntChecksum - Failed to load Circe JNI library. Falling back to Java based CRC32c provider -20:35:47.092 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 59.572 msg/s -- 0.465 Mbit/s --- Latency: mean: 11.298 ms - med: 10 - 95pct: 15 - 99pct: 98 - 99.9pct: 137 - 99.99pct: 152 - Max: 152 -20:35:57.104 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 99.958 msg/s -- 0.781 Mbit/s --- Latency: mean: 9.176 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 18 - Max: 18 -20:36:07.115 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 100.006 msg/s -- 0.781 Mbit/s --- Latency: mean: 9.316 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 17 - Max: 17 -20:36:17.125 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 100.085 msg/s -- 0.782 Mbit/s --- Latency: mean: 9.327 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 17 - Max: 17 -20:36:27.136 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 99.900 msg/s -- 0.780 Mbit/s --- Latency: mean: 9.404 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 17 - Max: 17 -20:36:37.147 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 99.985 msg/s -- 0.781 Mbit/s --- Latency: mean: 8.998 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 17 - Max: 17 -^C20:36:42.755 [Thread-1] INFO org.apache.pulsar.testclient.PerformanceConsumer - Aggregated throughput stats --- 6051 records received --- 92.125 msg/s --- 0.720 Mbit/s -20:36:42.759 [Thread-1] INFO org.apache.pulsar.testclient.PerformanceConsumer - Aggregated latency stats --- Latency: mean: 9.422 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 98 - 99.99pct: 137 - 99.999pct: 152 - Max: 152 - -``` - -From the output test data, you can get the throughput statistics and the end-to-end latency statistics. The aggregated statistics is printed after the Pulsar Perf is stopped. You can press **Ctrl**+**C** to stop the Pulsar Perf. - -### Configuration options for `pulsar-perf consume` - -You can get all options by executing the `bin/pulsar-perf consume -h` command. Therefore, you can modify these options as required. - -The following table lists configuration options available for the `pulsar-perf consume` command. - -| Option | Description | Default value | -|----|----|----| -| acks-delay-millis | Set the acknowledgment grouping delay in milliseconds. | 100 ms | -| auth-params | Set the authentication parameters, whose format is determined by the implementation of the `configure` method in the authentication plugin class, such as "key1:val1,key2:val2" or "{"key1":"val1","key2":"val2"}". | N/A | -| auth_plugin | Set the authentication plugin class name. 
| N/A | -| auto_ack_chunk_q_full | Configure whether to automatically ack for the oldest message in receiver queue if the queue is full. | false | -| listener-name | Set the listener name for the broker. | N/A | -| batch-index-ack | Enable or disable the batch index acknowledgment. | false | -| busy-wait | Enable or disable Busy-Wait on the Pulsar client. | false | -| conf-file | Set the configuration file. | N/A | -| encryption-key-name | Set the name of the public key used to encrypt the payload. | N/A | -| encryption-key-value-file | Set the file which contains the public key used to encrypt the payload. | N/A | -| help | Configure the help message. | false | -| expire_time_incomplete_chunked_messages | Set the expiration time for incomplete chunk messages (in milliseconds). | 0 | -| max-connections | Set the maximum number of TCP connections to a single broker. | 100 | -| max_chunked_msg | Set the max pending chunk messages. | 0 | -| num-consumers | Set the number of consumers for each topic. | 1 | -| num-io-threads |Set the number of threads to be used for handling connections to brokers. | 1 | -| num-subscriptions | Set the number of subscriptions (per topic). | 1 | -| num-topic | Set the number of topics. | 1 | -| pool-messages | Configure whether to use the pooled message. | true | -| rate | Simulate a slow message consumer (rate in msg/s). | 0.0 | -| receiver-queue-size | Set the size of the receiver queue. | 1000 | -| receiver-queue-size-across-partitions | Set the max total size of the receiver queue across partitions. | 50000 | -| replicated | Configure whether the subscription status should be replicated. | false | -| service-url | Set the Pulsar service URL. | | -| stats-interval-seconds | Set the statistics interval. If it is set to 0, statistics is disabled. | 0 | -| subscriber-name | Set the subscriber name prefix. | | -| subscription-position | Set the subscription position. Valid values are `Latest`, `Earliest`.| Latest | -| subscription-type | Set the subscription type.
Valid values are `Exclusive`, `Shared`, `Failover`, `Key_Shared`. | Exclusive |
-| test-duration | Set the test duration (in seconds). If the value is 0 or negative, it keeps consuming messages. | 0 |
-| tls-allow-insecure | Allow insecure TLS connections. | N/A |
-| trust-cert-file | Set the path for the trusted TLS certificate file. | |
-
-## Configurations
-
-By default, the Pulsar Perf uses `conf/client.conf` as the default configuration and uses `conf/log4j2.yaml` as the default Log4j configuration. If you want to connect to other Pulsar clusters, you can update the `brokerServiceUrl` in the client configuration.
-
-You can use the following commands to change the configuration file and the Log4j configuration file, where the angle-bracket values are paths that you supply.
-
-```
-
-export PULSAR_CLIENT_CONF=<path-to-client-config-file>
-export PULSAR_LOG_CONF=<path-to-log4j2-config-file>
-
-```
-
-In addition, you can use the following command to configure JVM options through an environment variable:
-
-```
-
-export PULSAR_EXTRA_OPTS='-Xms4g -Xmx4g -XX:MaxDirectMemorySize=4g'
-
-```
-
-## HdrHistogram Plotter
-
-The [HdrHistogram Plotter](https://hdrhistogram.github.io/HdrHistogram/plotFiles.html) is a visualization tool for checking Pulsar Perf test results, which makes the results easier to inspect.
-
-To check test results through the HdrHistogram Plotter, follow these steps:
-
-1. Clone the HdrHistogram repository from GitHub to your local machine.
-
-   ```
-
-   git clone https://github.com/HdrHistogram/HdrHistogram.git
-
-   ```
-
-2. Switch to the HdrHistogram folder.
-
-   ```
-
-   cd HdrHistogram
-
-   ```
-
-3. Build and install HdrHistogram, which provides the `HistogramLogProcessor` tool.
-
-   ```
-
-   mvn clean install -DskipTests
-
-   ```
-
-4. Transform the file generated by the Pulsar Perf, supplying the input file and an output file name.
-
-   ```
-
-   ./HistogramLogProcessor -i <file-generated-by-pulsar-perf> -o <output-file-name>
-
-   ```
-
-5. You will get two output files. Upload the output file with the filename extension of .hgrm to the [HdrHistogram Plotter](https://hdrhistogram.github.io/HdrHistogram/plotFiles.html).
-
-6. Check the test result through the Graphical User Interface of the HdrHistogram Plotter, as shown below.
-
-   ![](/assets/perf-produce.png)
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/reference-cli-tools.md b/site2/website/versioned_docs/version-2.8.2-deprecated/reference-cli-tools.md
deleted file mode 100644
index 2abd4a47f86ef6..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/reference-cli-tools.md
+++ /dev/null
@@ -1,958 +0,0 @@
----
-id: reference-cli-tools
-title: Pulsar command-line tools
-sidebar_label: "Pulsar CLI tools"
-original_id: reference-cli-tools
----
-
-Pulsar offers several command-line tools that you can use for managing Pulsar installations, performance testing, using command-line producers and consumers, and more.
-
-All Pulsar command-line tools can be run from the `bin` directory of your [installed Pulsar package](getting-started-standalone.md). The following tools are currently documented:
-
-* [`pulsar`](#pulsar)
-* [`pulsar-client`](#pulsar-client)
-* [`pulsar-daemon`](#pulsar-daemon)
-* [`pulsar-perf`](#pulsar-perf)
-* [`bookkeeper`](#bookkeeper)
-* [`broker-tool`](#broker-tool)
-
-> ### Getting help
-> You can get help for any CLI tool, command, or subcommand using the `--help` flag, or `-h` for short. Here's an example:
-
-> ```shell
->
-> $ bin/pulsar broker --help
->
-> ```
-
-
-## `pulsar`
-
-The pulsar tool is used to start Pulsar components, such as bookies and ZooKeeper, in the foreground.
-
-These processes can also be started in the background with nohup by using the pulsar-daemon tool, which has the same command interface as pulsar.
- -Usage: - -```bash - -$ pulsar command - -``` - -Commands: -* `bookie` -* `broker` -* `compact-topic` -* `discovery` -* `configuration-store` -* `initialize-cluster-metadata` -* `proxy` -* `standalone` -* `websocket` -* `zookeeper` -* `zookeeper-shell` - -Example: - -```bash - -$ PULSAR_BROKER_CONF=/path/to/broker.conf pulsar broker - -``` - -The table below lists the environment variables that you can use to configure the `pulsar` tool. - -|Variable|Description|Default| -|---|---|---| -|`PULSAR_LOG_CONF`|Log4j configuration file|`conf/log4j2.yaml`| -|`PULSAR_BROKER_CONF`|Configuration file for broker|`conf/broker.conf`| -|`PULSAR_BOOKKEEPER_CONF`|description: Configuration file for bookie|`conf/bookkeeper.conf`| -|`PULSAR_ZK_CONF`|Configuration file for zookeeper|`conf/zookeeper.conf`| -|`PULSAR_CONFIGURATION_STORE_CONF`|Configuration file for the configuration store|`conf/global_zookeeper.conf`| -|`PULSAR_DISCOVERY_CONF`|Configuration file for discovery service|`conf/discovery.conf`| -|`PULSAR_WEBSOCKET_CONF`|Configuration file for websocket proxy|`conf/websocket.conf`| -|`PULSAR_STANDALONE_CONF`|Configuration file for standalone|`conf/standalone.conf`| -|`PULSAR_EXTRA_OPTS`|Extra options to be passed to the jvm|| -|`PULSAR_EXTRA_CLASSPATH`|Extra paths for Pulsar's classpath|| -|`PULSAR_PID_DIR`|Folder where the pulsar server PID file should be stored|| -|`PULSAR_STOP_TIMEOUT`|Wait time before forcefully killing the Bookie server instance if attempts to stop it are not successful|| - - - -### `bookie` - -Starts up a bookie server - -Usage: - -```bash - -$ pulsar bookie options - -``` - -Options - -|Option|Description|Default| -|---|---|---| -|`-readOnly`|Force start a read-only bookie server|false| -|`-withAutoRecovery`|Start auto-recover service bookie server|false| - - -Example - -```bash - -$ PULSAR_BOOKKEEPER_CONF=/path/to/bookkeeper.conf pulsar bookie \ - -readOnly \ - -withAutoRecovery - -``` - -### `broker` - -Starts up a Pulsar broker - -Usage - -```bash - -$ pulsar broker options - -``` - -Options - -|Option|Description|Default| -|---|---|---| -|`-bc` , `--bookie-conf`|Configuration file for BookKeeper|| -|`-rb` , `--run-bookie`|Run a BookKeeper bookie on the same host as the Pulsar broker|false| -|`-ra` , `--run-bookie-autorecovery`|Run a BookKeeper autorecovery daemon on the same host as the Pulsar broker|false| - -Example - -```bash - -$ PULSAR_BROKER_CONF=/path/to/broker.conf pulsar broker - -``` - -### `compact-topic` - -Run compaction against a Pulsar topic (in a new process) - -Usage - -```bash - -$ pulsar compact-topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-t` , `--topic`|The Pulsar topic that you would like to compact|| - -Example - -```bash - -$ pulsar compact-topic --topic topic-to-compact - -``` - -### `discovery` - -Run a discovery server - -Usage - -```bash - -$ pulsar discovery - -``` - -Example - -```bash - -$ PULSAR_DISCOVERY_CONF=/path/to/discovery.conf pulsar discovery - -``` - -### `configuration-store` - -Starts up the Pulsar configuration store - -Usage - -```bash - -$ pulsar configuration-store - -``` - -Example - -```bash - -$ PULSAR_CONFIGURATION_STORE_CONF=/path/to/configuration_store.conf pulsar configuration-store - -``` - -### `initialize-cluster-metadata` - -One-time cluster metadata initialization - -Usage - -```bash - -$ pulsar initialize-cluster-metadata options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-ub` , `--broker-service-url`|The broker service URL for the new cluster|| -|`-tb` 
, `--broker-service-url-tls`|The broker service URL for the new cluster with TLS encryption|| -|`-c` , `--cluster`|Cluster name|| -|`-cs` , `--configuration-store`|The configuration store quorum connection string|| -|`--existing-bk-metadata-service-uri`|The metadata service URI of the existing BookKeeper cluster that you want to use|| -|`-h` , `--help`|Cluster name|false| -|`--initial-num-stream-storage-containers`|The number of storage containers of BookKeeper stream storage|16| -|`--initial-num-transaction-coordinators`|The number of transaction coordinators assigned in a cluster|16| -|`-uw` , `--web-service-url`|The web service URL for the new cluster|| -|`-tw` , `--web-service-url-tls`|The web service URL for the new cluster with TLS encryption|| -|`-zk` , `--zookeeper`|The local ZooKeeper quorum connection string|| -|`--zookeeper-session-timeout-ms`|The local ZooKeeper session timeout. The time unit is in millisecond(ms)|30000| - - -### `proxy` - -Manages the Pulsar proxy - -Usage - -```bash - -$ pulsar proxy options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--configuration-store`|Configuration store connection string|| -|`-zk` , `--zookeeper-servers`|Local ZooKeeper connection string|| - -Example - -```bash - -$ PULSAR_PROXY_CONF=/path/to/proxy.conf pulsar proxy \ - --zookeeper-servers zk-0,zk-1,zk2 \ - --configuration-store zk-0,zk-1,zk-2 - -``` - -### `standalone` - -Run a broker service with local bookies and local ZooKeeper - -Usage - -```bash - -$ pulsar standalone options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-a` , `--advertised-address`|The standalone broker advertised address|| -|`--bookkeeper-dir`|Local bookies’ base data directory|data/standalone/bookkeeper| -|`--bookkeeper-port`|Local bookies’ base port|3181| -|`--no-broker`|Only start ZooKeeper and BookKeeper services, not the broker|false| -|`--num-bookies`|The number of local bookies|1| -|`--only-broker`|Only start the Pulsar broker service (not ZooKeeper or BookKeeper)|| -|`--wipe-data`|Clean up previous ZooKeeper/BookKeeper data|| -|`--zookeeper-dir`|Local ZooKeeper’s data directory|data/standalone/zookeeper| -|`--zookeeper-port` |Local ZooKeeper’s port|2181| - -Example - -```bash - -$ PULSAR_STANDALONE_CONF=/path/to/standalone.conf pulsar standalone - -``` - -### `websocket` - -Usage - -```bash - -$ pulsar websocket - -``` - -Example - -```bash - -$ PULSAR_WEBSOCKET_CONF=/path/to/websocket.conf pulsar websocket - -``` - -### `zookeeper` - -Starts up a ZooKeeper cluster - -Usage - -```bash - -$ pulsar zookeeper - -``` - -Example - -```bash - -$ PULSAR_ZK_CONF=/path/to/zookeeper.conf pulsar zookeeper - -``` - -### `zookeeper-shell` - -Connects to a running ZooKeeper cluster using the ZooKeeper shell - -Usage - -```bash - -$ pulsar zookeeper-shell options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-c`, `--conf`|Configuration file for ZooKeeper|| -|`-server`|Configuration zk address, eg: `127.0.0.1:2181`|| - - - -## `pulsar-client` - -The pulsar-client tool - -Usage - -```bash - -$ pulsar-client command - -``` - -Commands -* `produce` -* `consume` - - -Options - -|Flag|Description|Default| -|---|---|---| -|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class, for example "key1:val1,key2:val2" or "{\"key1\":\"val1\",\"key2\":\"val2\"}"|{"saslJaasClientSectionName":"PulsarClient", "serverType":"broker"}| -|`--auth-plugin`|Authentication plugin class 
name|org.apache.pulsar.client.impl.auth.AuthenticationSasl| -|`--listener-name`|Listener name for the broker|| -|`--proxy-protocol`|Proxy protocol to select type of routing at proxy|| -|`--proxy-url`|Proxy-server URL to which to connect|| -|`--url`|Broker URL to which to connect|pulsar://localhost:6650/
    ws://localhost:8080 | -| `-v`, `--version` | Get the version of the Pulsar client -|`-h`, `--help`|Show this help - - -### `produce` -Send a message or messages to a specific broker and topic - -Usage - -```bash - -$ pulsar-client produce topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-f`, `--files`|Comma-separated file paths to send; either -m or -f must be specified|[]| -|`-m`, `--messages`|Comma-separated string of messages to send; either -m or -f must be specified|[]| -|`-n`, `--num-produce`|The number of times to send the message(s); the count of messages/files * num-produce should be below 1000|1| -|`-r`, `--rate`|Rate (in messages per second) at which to produce; a value 0 means to produce messages as fast as possible|0.0| -|`-c`, `--chunking`|Split the message and publish in chunks if the message size is larger than the allowed max size|false| -|`-s`, `--separator`|Character to split messages string with.|","| -|`-k`, `--key`|Message key to add|key=value string, like k1=v1,k2=v2.| -|`-p`, `--properties`|Properties to add. If you want to add multiple properties, use the comma as the separator, e.g. `k1=v1,k2=v2`.| | -|`-ekn`, `--encryption-key-name`|The public key name to encrypt payload.| | -|`-ekv`, `--encryption-key-value`|The URI of public key to encrypt payload. For example, `file:///path/to/public.key` or `data:application/x-pem-file;base64,*****`.| | - - -### `consume` -Consume messages from a specific broker and topic - -Usage - -```bash - -$ pulsar-client consume topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--hex`|Display binary messages in hexadecimal format.|false| -|`-n`, `--num-messages`|Number of messages to consume, 0 means to consume forever.|1| -|`-r`, `--rate`|Rate (in messages per second) at which to consume; a value 0 means to consume messages as fast as possible|0.0| -|`--regex`|Indicate the topic name is a regex pattern|false| -|`-s`, `--subscription-name`|Subscription name|| -|`-t`, `--subscription-type`|The type of the subscription. Possible values: Exclusive, Shared, Failover, Key_Shared.|Exclusive| -|`-p`, `--subscription-position`|The position of the subscription. Possible values: Latest, Earliest.|Latest| -|`-m`, `--subscription-mode`|Subscription mode. Possible values: Durable, NonDurable.|Durable| -|`-q`, `--queue-size`|The size of consumer's receiver queue.|0| -|`-mc`, `--max_chunked_msg`|Max pending chunk messages.|0| -|`-ac`, `--auto_ack_chunk_q_full`|Auto ack for the oldest message in consumer's receiver queue if the queue full.|false| -|`--hide-content`|Do not print the message to the console.|false| -|`-st`, `--schema-type`|Set the schema type. Use `auto_consume` to dump AVRO and other structured data types. Possible values: bytes, auto_consume.|bytes| -|`-ekv`, `--encryption-key-value`|The URI of public key to encrypt payload. For example, `file:///path/to/public.key` or `data:application/x-pem-file;base64,*****`.| | -|`-pm`, `--pool-messages`|Use the pooled message.|true| - -## `pulsar-daemon` -A wrapper around the pulsar tool that’s used to start and stop processes, such as ZooKeeper, bookies, and Pulsar brokers, in the background using nohup. - -pulsar-daemon has a similar interface to the pulsar command but adds start and stop commands for various services. For a listing of those services, run pulsar-daemon to see the help output or see the documentation for the pulsar command. 
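-
-For example, a broker can be started and later stopped in the background as follows (an illustrative sketch; the service name follows the listing of the pulsar command above):
-
-```bash
-
-$ bin/pulsar-daemon start broker
-$ bin/pulsar-daemon stop broker
-
-```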
-
-Usage
-
-```bash
-
-$ pulsar-daemon command
-
-```
-
-Commands
-* `start`
-* `stop`
-
-
-### `start`
-Start a service in the background using nohup.
-
-Usage
-
-```bash
-
-$ pulsar-daemon start service
-
-```
-
-### `stop`
-Stop a service that’s already been started using `start`.
-
-Usage
-
-```bash
-
-$ pulsar-daemon stop service options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-force`|Stop the service forcefully if it is not stopped by normal shutdown.|false|
-
-
-
-## `pulsar-perf`
-A tool for performance testing a Pulsar broker.
-
-Usage
-
-```bash
-
-$ pulsar-perf command
-
-```
-
-Commands
-* `consume`
-* `produce`
-* `read`
-* `websocket-producer`
-* `managed-ledger`
-* `monitor-brokers`
-* `simulation-client`
-* `simulation-controller`
-* `help`
-
-Environment variables
-
-The table below lists the environment variables that you can use to configure the pulsar-perf tool.
-
-|Variable|Description|Default|
-|---|---|---|
-|`PULSAR_LOG_CONF`|Log4j configuration file|conf/log4j2.yaml|
-|`PULSAR_CLIENT_CONF`|Configuration file for the client|conf/client.conf|
-|`PULSAR_EXTRA_OPTS`|Extra options to be passed to the JVM||
-|`PULSAR_EXTRA_CLASSPATH`|Extra paths for Pulsar's classpath||
-
-
-### `consume`
-Run a consumer
-
-Usage
-
-```bash
-
-$ pulsar-perf consume options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in the authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.||
-|`--auth_plugin`|Authentication plugin class name||
-|`-ac`, `--auto_ack_chunk_q_full`|Auto ack for the oldest message in the consumer's receiver queue if the queue is full|false|
-|`--listener-name`|Listener name for the broker||
-|`--acks-delay-millis`|Acknowledgements grouping delay in milliseconds|100|
-|`--batch-index-ack`|Enable or disable the batch index acknowledgment|false|
-|`-bw`, `--busy-wait`|Enable or disable Busy-Wait on the Pulsar client|false|
-|`-v`, `--encryption-key-value-file`|The file that contains the private key to decrypt the payload||
-|`-h`, `--help`|Help message|false|
-|`--conf-file`|Configuration file||
-|`-e`, `--expire_time_incomplete_chunked_messages`|The expiration time for incomplete chunk messages (in milliseconds)|0|
-|`-c`, `--max-connections`|Max number of TCP connections to a single broker|100|
-|`-mc`, `--max_chunked_msg`|Max pending chunk messages|0|
-|`-n`, `--num-consumers`|Number of consumers (per topic)|1|
-|`-ioThreads`, `--num-io-threads`|Set the number of threads to be used for handling connections to brokers|1|
-|`-ns`, `--num-subscriptions`|Number of subscriptions (per topic)|1|
-|`-t`, `--num-topics`|The number of topics|1|
-|`-pm`, `--pool-messages`|Use pooled messages|true|
-|`-r`, `--rate`|Simulate a slow message consumer (rate in msg/s)|0|
-|`-q`, `--receiver-queue-size`|Size of the receiver queue|1000|
-|`-p`, `--receiver-queue-size-across-partitions`|Max total size of the receiver queue across partitions|50000|
-|`--replicated`|Whether the subscription status should be replicated|false|
-|`-u`, `--service-url`|Pulsar service URL||
-|`-i`, `--stats-interval-seconds`|Statistics interval in seconds. If 0, statistics will be disabled|0|
-|`-s`, `--subscriber-name`|Subscriber name prefix||
-|`-ss`, `--subscriptions`|A list of subscriptions to consume on (e.g. sub1,sub2)|sub|
-|`-st`, `--subscription-type`|Subscriber type.
Possible values are Exclusive, Shared, Failover, Key_Shared.|Exclusive|
-|`-sp`, `--subscription-position`|Subscriber position. Possible values are Latest, Earliest.|Latest|
-|`-time`, `--test-duration`|Test duration (in seconds). If the value is 0 or smaller than 0, it keeps consuming messages|0|
-|`--trust-cert-file`|Path for the trusted TLS certificate file||
-|`--tls-allow-insecure`|Allow insecure TLS connection||
-
-
-### `produce`
-Run a producer
-
-Usage
-
-```bash
-
-$ pulsar-perf produce options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-am`, `--access-mode`|Producer access mode. Valid values are `Shared`, `Exclusive` and `WaitForExclusive`|Shared|
-|`-au`, `--admin-url`|Pulsar admin URL||
-|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in the authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.||
-|`--auth_plugin`|Authentication plugin class name||
-|`--listener-name`|Listener name for the broker||
-|`-b`, `--batch-time-window`|Batch messages in a window of the specified number of milliseconds|1|
-|`-bb`, `--batch-max-bytes`|Maximum number of bytes per batch|4194304|
-|`-bm`, `--batch-max-messages`|Maximum number of messages per batch|1000|
-|`-bw`, `--busy-wait`|Enable or disable Busy-Wait on the Pulsar client|false|
-|`-ch`, `--chunking`|Split the message and publish in chunks if the message size is larger than the allowed max size|false|
-|`-d`, `--delay`|Mark messages with a given delay in seconds|0s|
-|`-z`, `--compression`|Compress messages’ payload. Possible values are NONE, LZ4, ZLIB, ZSTD or SNAPPY.||
-|`--conf-file`|Configuration file||
-|`-k`, `--encryption-key-name`|The public key name to encrypt the payload||
-|`-v`, `--encryption-key-value-file`|The file that contains the public key to encrypt the payload||
-|`-ef`, `--exit-on-failure`|Exit from the process on publish failure|false|
-|`-fc`, `--format-class`|Custom formatter class name|org.apache.pulsar.testclient.DefaultMessageFormatter|
-|`-fp`, `--format-payload`|Format `%i` as the message index in the stream from the producer, and/or `%t` as the timestamp in nanoseconds|false|
-|`-h`, `--help`|Help message|false|
-|`-c`, `--max-connections`|Max number of TCP connections to a single broker|100|
-|`-o`, `--max-outstanding`|Max number of outstanding messages|1000|
-|`-p`, `--max-outstanding-across-partitions`|Max number of outstanding messages across partitions|50000|
-|`-mk`, `--message-key-generation-mode`|The generation mode of the message key. Valid options are `autoIncrement`, `random`||
-|`-ioThreads`, `--num-io-threads`|Set the number of threads to be used for handling connections to brokers|1|
-|`-m`, `--num-messages`|Number of messages to publish in total. If the value is 0 or smaller than 0, it keeps publishing messages.|0|
-|`-n`, `--num-producers`|The number of producers (per topic)|1|
-|`-threads`, `--num-test-threads`|Number of test threads|1|
-|`-t`, `--num-topic`|The number of topics|1|
-|`-np`, `--partitions`|Create partitioned topics with the given number of partitions. 
Setting this value to 0 means not trying to create a topic||
-|`-f`, `--payload-file`|Use payload from a UTF-8 encoded text file; a payload will be randomly selected when publishing messages||
-|`-e`, `--payload-delimiter`|The delimiter used to split lines when using payload from a file|\n|
-|`-pn`, `--producer-name`|Producer name||
-|`-r`, `--rate`|Publish rate msg/s across topics|100|
-|`--send-timeout`|Set the send timeout|0|
-|`--separator`|Separator between the topic and topic number|-|
-|`-u`, `--service-url`|Pulsar service URL||
-|`-s`, `--size`|Message size (in bytes)|1024|
-|`-i`, `--stats-interval-seconds`|Statistics interval in seconds. If 0, statistics will be disabled.|0|
-|`-time`, `--test-duration`|Test duration (in seconds). If the value is 0 or smaller than 0, it keeps publishing messages.|0|
-|`--trust-cert-file`|Path for the trusted TLS certificate file||
-|`--warmup-time`|Warm-up time in seconds|1|
-|`--tls-allow-insecure`|Allow insecure TLS connection||
-
-
-### `read`
-Run a topic reader
-
-Usage
-
-```bash
-
-$ pulsar-perf read options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in the authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.||
-|`--auth_plugin`|Authentication plugin class name||
-|`--listener-name`|Listener name for the broker||
-|`--conf-file`|Configuration file||
-|`-h`, `--help`|Help message|false|
-|`-c`, `--max-connections`|Max number of TCP connections to a single broker|100|
-|`-ioThreads`, `--num-io-threads`|Set the number of threads to be used for handling connections to brokers|1|
-|`-t`, `--num-topics`|The number of topics|1|
-|`-r`, `--rate`|Simulate a slow message reader (rate in msg/s)|0|
-|`-q`, `--receiver-queue-size`|Size of the receiver queue|1000|
-|`-u`, `--service-url`|Pulsar service URL||
-|`-m`, `--start-message-id`|Start message id. This can be either 'earliest', 'latest' or a specific message id by using 'lid:eid'|earliest|
-|`-i`, `--stats-interval-seconds`|Statistics interval in seconds. If 0, statistics will be disabled.|0|
-|`-time`, `--test-duration`|Test duration (in seconds). If the value is 0 or smaller than 0, it keeps consuming messages.|0|
-|`--trust-cert-file`|Path for the trusted TLS certificate file||
-|`--use-tls`|Use TLS encryption on the connection|false|
-|`--tls-allow-insecure`|Allow insecure TLS connection||
-
-### `websocket-producer`
-Run a WebSocket producer
-
-Usage
-
-```bash
-
-$ pulsar-perf websocket-producer options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in the authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.||
-|`--auth_plugin`|Authentication plugin class name||
-|`--conf-file`|Configuration file||
-|`-h`, `--help`|Help message|false|
-|`-m`, `--num-messages`|Number of messages to publish in total. If the value is 0 or smaller than 0, it keeps publishing messages|0|
-|`-t`, `--num-topic`|The number of topics|1|
-|`-f`, `--payload-file`|Use payload from a file instead of an empty buffer||
-|`-u`, `--proxy-url`|Pulsar Proxy URL, e.g., "ws://localhost:8080/"||
-|`-r`, `--rate`|Publish rate msg/s across topics|100|
-|`-s`, `--size`|Message size in bytes|1024|
-|`-time`, `--test-duration`|Test duration (in seconds). 
If the value is 0 or smaller than 0, it keeps publishing messages|0|
-
-
-### `managed-ledger`
-Write directly to managed ledgers
-
-Usage
-
-```bash
-
-$ pulsar-perf managed-ledger options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-a`, `--ack-quorum`|Ledger ack quorum|1|
-|`-dt`, `--digest-type`|BookKeeper digest type. Possible Values: [CRC32, MAC, CRC32C, DUMMY]|CRC32C|
-|`-e`, `--ensemble-size`|Ledger ensemble size|1|
-|`-h`, `--help`|Help message|false|
-|`-c`, `--max-connections`|Max number of TCP connections to a single bookie|1|
-|`-o`, `--max-outstanding`|Max number of outstanding requests|1000|
-|`-m`, `--num-messages`|Number of messages to publish in total. If the value is 0 or smaller than 0, it keeps publishing messages|0|
-|`-t`, `--num-topic`|Number of managed ledgers|1|
-|`-r`, `--rate`|Write rate msg/s across managed ledgers|100|
-|`-s`, `--size`|Message size in bytes|1024|
-|`-time`, `--test-duration`|Test duration (in seconds). If the value is 0 or smaller than 0, it keeps publishing messages|0|
-|`--threads`|Number of writer threads|1|
-|`-w`, `--write-quorum`|Ledger write quorum|1|
-|`-zk`, `--zookeeperServers`|ZooKeeper connection string||
-
-
-### `monitor-brokers`
-Continuously receive broker data and/or load reports
-
-Usage
-
-```bash
-
-$ pulsar-perf monitor-brokers options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--connect-string`|A connection string for one or more ZooKeeper servers||
-|`-h`, `--help`|Help message|false|
-
-
-### `simulation-client`
-Run a simulation server acting as a Pulsar client. Uses the client configuration specified in `conf/client.conf`.
-
-Usage
-
-```bash
-
-$ pulsar-perf simulation-client options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--port`|Port to listen on for the controller|0|
-|`--service-url`|Pulsar service URL||
-|`-h`, `--help`|Help message|false|
-
-### `simulation-controller`
-Run a simulation controller to give commands to servers
-
-Usage
-
-```bash
-
-$ pulsar-perf simulation-controller options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--client-port`|The port that the clients are listening on|0|
-|`--clients`|Comma-separated list of client hostnames||
-|`--cluster`|The cluster to test on||
-|`-h`, `--help`|Help message|false|
-
-
-### `help`
-Print this help message
-
-Usage
-
-```bash
-
-$ pulsar-perf help
-
-```
-
-## `bookkeeper`
-A tool for managing BookKeeper.
-
-Usage
-
-```bash
-
-$ bookkeeper command
-
-```
-
-Commands
-* `autorecovery`
-* `bookie`
-* `localbookie`
-* `upgrade`
-* `shell`
-
-
-Environment variables
-
-The table below lists the environment variables that you can use to configure the bookkeeper tool.
-
-|Variable|Description|Default|
-|---|---|---|
-|BOOKIE_LOG_CONF|Log4j configuration file|conf/log4j2.yaml|
-|BOOKIE_CONF|BookKeeper configuration file|conf/bk_server.conf|
-|BOOKIE_EXTRA_OPTS|Extra options to be passed to the JVM||
-|BOOKIE_EXTRA_CLASSPATH|Extra paths for BookKeeper's classpath||
-|ENTRY_FORMATTER_CLASS|The Java class used to format entries||
-|BOOKIE_PID_DIR|Folder where the BookKeeper server PID file should be stored||
-|BOOKIE_STOP_TIMEOUT|Wait time before forcefully killing the Bookie server instance if attempts to stop it are not successful||
-
-
-### `autorecovery`
-Runs an auto-recovery service
-
-Usage
-
-```bash
-
-$ bookkeeper autorecovery options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-c`, `--conf`|Configuration for the auto-recovery||
-
-
-### `bookie`
-Starts up a BookKeeper server (aka bookie)
-
-Usage
-
-```bash
-
-$ bookkeeper bookie options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-c`, `--conf`|Configuration for the bookie||
-|`-readOnly`|Force start a read-only bookie server|false|
-|`-withAutoRecovery`|Start the auto-recovery service along with the bookie server|false|
-
-
-### `localbookie`
-Runs a test ensemble of N bookies locally
-
-Usage
-
-```bash
-
-$ bookkeeper localbookie N
-
-```
-
-### `upgrade`
-Upgrade the bookie’s filesystem
-
-Usage
-
-```bash
-
-$ bookkeeper upgrade options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-c`, `--conf`|Configuration for the bookie||
-|`-u`, `--upgrade`|Upgrade the bookie’s directories||
-
-
-### `shell`
-Run a shell for admin commands. To see a full listing of those commands, run `bookkeeper shell` without an argument.
-
-Usage
-
-```bash
-
-$ bookkeeper shell
-
-```
-
-Example
-
-```bash
-
-$ bookkeeper shell bookiesanity
-
-```
-
-## `broker-tool`
-
-The `broker-tool` is used for operations on a specific broker.
-
-Usage
-
-```bash
-
-$ broker-tool command
-
-```
-
-Commands
-* `load-report`
-* `help`
-
-Example
-The following are two ways to get more information about a command:
-
-```bash
-
-$ broker-tool help command
-$ broker-tool command --help
-
-```
-
-### `load-report`
-
-Collect the load report of a specific broker.
-The command runs on a broker and is used to troubleshoot why the broker cannot collect the right load report.
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-i`, `--interval`| Interval to collect load report, in milliseconds ||
-|`-h`, `--help`| Display help information ||
-
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/reference-configuration.md b/site2/website/versioned_docs/version-2.8.2-deprecated/reference-configuration.md
deleted file mode 100644
index e761161e99cea4..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/reference-configuration.md
+++ /dev/null
@@ -1,792 +0,0 @@
----
-id: reference-configuration
-title: Pulsar configuration
-sidebar_label: "Pulsar configuration"
-original_id: reference-configuration
----
-
-
-
-You can manage Pulsar configuration through the configuration files in the [`conf`](https://github.com/apache/pulsar/tree/master/conf) directory of a Pulsar [installation](getting-started-standalone.md). 
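-
-Each configuration file is a set of plain `key=value` properties. For example, a minimal sketch of inspecting two broker settings, both documented in the [Broker](#broker) table below, assuming the default values:
-
-```bash
-
-$ grep -E '^(brokerServicePort|webServicePort)=' conf/broker.conf
-brokerServicePort=6650
-webServicePort=8080
-
-```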
-
-- [BookKeeper](#bookkeeper)
-- [Broker](#broker)
-- [Client](#client)
-- [Service discovery](#service-discovery)
-- [Log4j](#log4j)
-- [Log4j shell](#log4j-shell)
-- [Standalone](#standalone)
-- [WebSocket](#websocket)
-- [Pulsar proxy](#pulsar-proxy)
-- [ZooKeeper](#zookeeper)
-
-## BookKeeper
-
-BookKeeper is a replicated log storage system that Pulsar uses for durable storage of all messages.
-
-
-|Name|Description|Default|
-|---|---|---|
-|bookiePort|The port on which the bookie server listens.|3181|
-|allowLoopback|Whether the bookie is allowed to use a loopback interface as its primary interface (that is the interface used to establish its identity). By default, loopback interfaces are not allowed to work as the primary interface. Using a loopback interface as the primary interface usually indicates a configuration error. For example, it’s fairly common in some VPS setups to not configure a hostname or to have the hostname resolve to `127.0.0.1`. If this is the case, then all bookies in the cluster will establish their identities as `127.0.0.1:3181` and only one will be able to join the cluster. For VPSs configured like this, you should explicitly set the listening interface.|false|
-|listeningInterface|The network interface on which the bookie listens. By default, the bookie listens on all interfaces.|eth0|
-|advertisedAddress|Configure a specific hostname or IP address that the bookie should use to advertise itself to clients. By default, the bookie advertises either its own IP address or hostname according to the `listeningInterface` and `useHostNameAsBookieID` settings.|N/A|
-|allowMultipleDirsUnderSameDiskPartition|Configure the bookie to enable/disable multiple ledger/index/journal directories in the same filesystem disk partition.|false|
-|minUsableSizeForIndexFileCreation|The minimum safe usable size (in bytes) available in the index directory for the bookie to create index files while replaying the journal when the bookie starts in read-only mode.|1073741824|
-|journalDirectory|The directory where BookKeeper outputs its write-ahead log (WAL).|data/bookkeeper/journal|
-|journalDirectories|Directories to which BookKeeper outputs its write-ahead log. Multiple directories are available, separated by `,`. For example: `journalDirectories=/tmp/bk-journal1,/tmp/bk-journal2`. If `journalDirectories` is set, the bookies skip `journalDirectory` and use the directories in this setting.|/tmp/bk-journal|
-|ledgerDirectories|The directory where BookKeeper outputs ledger snapshots. This could define multiple directories to store snapshots separated by `,`, for example `ledgerDirectories=/tmp/bk1-data,/tmp/bk2-data`. Ideally, ledger dirs and the journal dir are each in a different device, which reduces the contention between random I/O and sequential write. It is possible to run with a single disk, but performance will be significantly lower.|data/bookkeeper/ledgers|
-|ledgerManagerType|The type of ledger manager used to manage how ledgers are stored, managed, and garbage collected. See [BookKeeper Internals](http://bookkeeper.apache.org/docs/latest/getting-started/concepts) for more info.|hierarchical|
-|zkLedgersRootPath|The root ZooKeeper path used to store ledger metadata. 
This parameter is used by the ZooKeeper-based ledger manager as a root znode to store all ledgers.|/ledgers|
-|ledgerStorageClass|Ledger storage implementation class|org.apache.bookkeeper.bookie.storage.ldb.DbLedgerStorage|
-|entryLogFilePreallocationEnabled|Enable or disable entry logger preallocation|true|
-|logSizeLimit|Max file size of the entry logger, in bytes. A new entry log file will be created when the old one reaches the file size limitation.|1073741824|
-|minorCompactionThreshold|Threshold of minor compaction. Entry log files whose remaining size percentage reaches below this threshold will be compacted in a minor compaction. If set to less than zero, the minor compaction is disabled.|0.2|
-|minorCompactionInterval|Time interval to run minor compaction, in seconds. If set to less than zero, the minor compaction is disabled. Note: should be greater than gcWaitTime. |3600|
-|majorCompactionThreshold|The threshold of major compaction. Entry log files whose remaining size percentage reaches below this threshold will be compacted in a major compaction. Those entry log files whose remaining size percentage is still higher than the threshold will never be compacted. If set to less than zero, the major compaction is disabled.|0.5|
-|majorCompactionInterval|The time interval to run major compaction, in seconds. If set to less than zero, the major compaction is disabled. Note: should be greater than gcWaitTime. |86400|
-|readOnlyModeEnabled|If `readOnlyModeEnabled=true`, then on all full ledger disks, the bookie will be converted to read-only mode and serve only read requests. Otherwise the bookie will be shut down.|true|
-|forceReadOnlyBookie|Whether the bookie is force started in read-only mode.|false|
-|persistBookieStatusEnabled|Persist the bookie status locally on the disks, so the bookies can keep their status upon restarts.|false|
-|compactionMaxOutstandingRequests|Sets the maximum number of entries that can be compacted without flushing. When compacting, the entries are written to the entrylog and the new offsets are cached in memory. Once the entrylog is flushed the index is updated with the new offsets. This parameter controls the number of entries added to the entrylog before a flush is forced. A higher value for this parameter means more memory will be used for offsets. Each offset consists of 3 longs. This parameter should not be modified unless you’re fully aware of the consequences.|100000|
-|compactionRate|The rate at which compaction will read entries, in adds per second.|1000|
-|isThrottleByBytes|Throttle compaction by bytes or by entries.|false|
-|compactionRateByEntries|The rate at which compaction will read entries, in adds per second.|1000|
-|compactionRateByBytes|Set the rate at which compaction reads entries. The unit is bytes added per second.|1000000|
-|journalMaxSizeMB|Max file size of the journal file, in megabytes. A new journal file will be created when the old one reaches the file size limitation.|2048|
-|journalMaxBackups|The max number of old journal files to keep. 
Keeping a number of old journal files would help data recovery in special cases.|5|
-|journalPreAllocSizeMB|How much space to pre-allocate at a time in the journal.|16|
-|journalWriteBufferSizeKB|The size of the write buffers used for the journal.|64|
-|journalRemoveFromPageCache|Whether pages should be removed from the page cache after force write.|true|
-|journalAdaptiveGroupWrites|Whether to group journal force writes, which optimizes group commit for higher throughput.|true|
-|journalMaxGroupWaitMSec|The maximum latency to impose on a journal write to achieve grouping.|1|
-|journalAlignmentSize|All the journal writes and commits should be aligned to the given size|4096|
-|journalBufferedWritesThreshold|Maximum writes to buffer to achieve grouping|524288|
-|journalFlushWhenQueueEmpty|Whether we should flush the journal when the journal queue is empty|false|
-|numJournalCallbackThreads|The number of threads that should handle journal callbacks|8|
-|openLedgerRereplicationGracePeriod | The grace period, in milliseconds, that the replication worker waits before fencing and replicating a ledger fragment that's still being written to upon bookie failure. | 30000 |
-|rereplicationEntryBatchSize|The max number of entries to keep in a fragment for re-replication|100|
-|autoRecoveryDaemonEnabled|Whether the bookie itself can start the auto-recovery service.|true|
-|lostBookieRecoveryDelay|How long to wait, in seconds, before starting auto recovery of a lost bookie.|0|
-|gcWaitTime|The interval to trigger the next garbage collection, in milliseconds. Since garbage collection runs in the background, too frequent GC will hurt performance. It is better to use a higher GC interval if there is enough disk capacity.|900000|
-|gcOverreplicatedLedgerWaitTime|The interval to trigger the next garbage collection of overreplicated ledgers, in milliseconds. This should not run very frequently, since the metadata for all the ledgers on the bookie is read from ZooKeeper.|86400000|
-|flushInterval|The interval to flush ledger index pages to disk, in milliseconds. Flushing index files introduces much random disk I/O. If the journal dir and ledger dirs are each on different devices, flushing does not affect performance. But if the journal dir and ledger dirs are on the same device, performance degrades significantly with too frequent flushing. You can consider incrementing the flush interval to get better performance, but you will pay with more time spent on bookie server restart after a failure.|60000|
-|bookieDeathWatchInterval|Interval to watch whether the bookie is dead or not, in milliseconds|1000|
-|allowStorageExpansion|Allow the bookie storage to expand. Newly added ledger and index dirs must be empty.|false|
-|zkServers|A list of one or more servers on which ZooKeeper is running. The server list can be comma-separated values, for example: zkServers=zk1:2181,zk2:2181,zk3:2181.|localhost:2181|
-|zkTimeout|ZooKeeper client session timeout in milliseconds. The bookie server will exit if it receives SESSION_EXPIRED because it was partitioned off from ZooKeeper for more than the session timeout; JVM garbage collection or disk I/O can cause SESSION_EXPIRED. 
Incrementing this value could help avoid this issue|30000|
-|zkRetryBackoffStartMs|The start time of the ZooKeeper client backoff retries, in milliseconds.|1000|
-|zkRetryBackoffMaxMs|The maximum time of the ZooKeeper client backoff retries, in milliseconds.|10000|
-|zkEnableSecurity|Set ACLs on every node written on ZooKeeper, allowing users to read and write BookKeeper metadata stored on ZooKeeper. In order to make ACLs work you need to set up ZooKeeper JAAS authentication. All the bookies and clients need to share the same user, and this is usually done using Kerberos authentication. See the ZooKeeper documentation.|false|
-|httpServerEnabled|The flag that enables/disables starting the admin HTTP server.|false|
-|httpServerPort|The HTTP server port to listen on. By default, the value is `8080`. If you want to keep it consistent with the Prometheus stats provider, you can set it to `8000`.|8080|
-|httpServerClass|The HTTP server class.|org.apache.bookkeeper.http.vertx.VertxHttpServer|
-|serverTcpNoDelay|This setting is used to enable/disable Nagle’s algorithm, which is a means of improving the efficiency of TCP/IP networks by reducing the number of packets that need to be sent over the network. If you are sending many small messages, such that more than one can fit in a single IP packet, setting server.tcpnodelay to false to enable the Nagle algorithm can provide better performance.|true|
-|serverSockKeepalive|This setting is used to send keep-alive messages on connection-oriented sockets.|true|
-|serverTcpLinger|The socket linger timeout on close. When enabled, a close or shutdown will not return until all queued messages for the socket have been successfully sent or the linger timeout has been reached. Otherwise, the call returns immediately and the closing is done in the background.|0|
-|byteBufAllocatorSizeMax|The maximum buf size of the received ByteBuf allocator.|1048576|
-|nettyMaxFrameSizeBytes|The maximum netty frame size in bytes. Any message received larger than this will be rejected.|5253120|
-|openFileLimit|Max number of ledger index files that can be opened in the bookie server. If the number of ledger index files reaches this limitation, the bookie server starts to swap some ledgers from memory to disk. Too frequent swapping will affect performance. You can tune this number to gain performance according to your requirements.|0|
-|pageSize|Size of an index page in the ledger cache, in bytes. A larger index page can improve the performance of writing pages to disk, which is efficient when you have a small number of ledgers and these ledgers have a similar number of entries. If you have a large number of ledgers and each ledger has fewer entries, a smaller index page would improve memory usage.|8192|
-|pageLimit|How many index pages are provided in the ledger cache. If the number of index pages reaches this limitation, the bookie server starts to swap some ledgers from memory to disk. You can increment this value when you find that swapping becomes more frequent. But make sure pageLimit*pageSize is not more than the JVM max memory limitation, otherwise you would get an OutOfMemoryException. In general, incrementing pageLimit and using a smaller index page would gain better performance in the case of a larger number of ledgers with fewer entries. If pageLimit is -1, the bookie server will use 1/3 of the JVM memory to compute the limitation on the number of index pages.|0|
-|readOnlyModeEnabled|If all configured ledger directories are full, then support only read requests for clients. 
If "readOnlyModeEnabled=true" then on all ledger disks full, bookie will be converted to read-only mode and serve only read requests. Otherwise the bookie will be shutdown. By default this will be disabled.|true| -|diskUsageThreshold|For each ledger dir, maximum disk space which can be used. Default is 0.95f. i.e. 95% of disk can be used at most after which nothing will be written to that partition. If all ledger dir partitions are full, then bookie will turn to readonly mode if ‘readOnlyModeEnabled=true’ is set, else it will shutdown. Valid values should be in between 0 and 1 (exclusive).|0.95| -|diskCheckInterval|Disk check interval in milli seconds, interval to check the ledger dirs usage.|10000| -|auditorPeriodicCheckInterval|Interval at which the auditor will do a check of all ledgers in the cluster. By default this runs once a week. The interval is set in seconds. To disable the periodic check completely, set this to 0. Note that periodic checking will put extra load on the cluster, so it should not be run more frequently than once a day.|604800| -|sortedLedgerStorageEnabled|Whether sorted-ledger storage is enabled.|true| -|auditorPeriodicBookieCheckInterval|The interval between auditor bookie checks. The auditor bookie check, checks ledger metadata to see which bookies should contain entries for each ledger. If a bookie which should contain entries is unavailable, thea the ledger containing that entry is marked for recovery. Setting this to 0 disabled the periodic check. Bookie checks will still run when a bookie fails. The interval is specified in seconds.|86400| -|numAddWorkerThreads|The number of threads that should handle write requests. if zero, the writes would be handled by netty threads directly.|0| -|numReadWorkerThreads|The number of threads that should handle read requests. if zero, the reads would be handled by netty threads directly.|8| -|numHighPriorityWorkerThreads|The umber of threads that should be used for high priority requests (i.e. recovery reads and adds, and fencing).|8| -|maxPendingReadRequestsPerThread|If read workers threads are enabled, limit the number of pending requests, to avoid the executor queue to grow indefinitely.|2500| -|maxPendingAddRequestsPerThread|The limited number of pending requests, which is used to avoid the executor queue to grow indefinitely when add workers threads are enabled.|10000| -|isForceGCAllowWhenNoSpace|Whether force compaction is allowed when the disk is full or almost full. Forcing GC could get some space back, but could also fill up the disk space more quickly. This is because new log files are created before GC, while old garbage log files are deleted after GC.|false| -|verifyMetadataOnGC|True if the bookie should double check `readMetadata` prior to GC.|false| -|flushEntrylogBytes|Entry log flush interval in bytes. Flushing in smaller chunks but more frequently reduces spikes in disk I/O. Flushing too frequently may also affect performance negatively.|268435456| -|readBufferSizeBytes|The number of bytes we should use as capacity for BufferedReadChannel.|4096| -|writeBufferSizeBytes|The number of bytes used as capacity for the write buffer|65536| -|useHostNameAsBookieID|Whether the bookie should use its hostname to register with the coordination service (e.g.: zookeeper service). When false, bookie will use its ip address for the registration.|false| -|bookieId | If you want to custom a bookie ID or use a dynamic network address for the bookie, you can set the `bookieId`.

    Bookie advertises itself using the `bookieId` rather than the `BookieSocketAddress` (`hostname:port` or `IP:port`). If you set the `bookieId`, then the `useHostNameAsBookieID` does not take effect.

    The `bookieId` is a non-empty string that can contain ASCII digits and letters ([a-zA-Z0-9]), colons, dashes, and dots.

    For more information about `bookieId`, see [here](http://bookkeeper.apache.org/bps/BP-41-bookieid/).|N/A| -|allowEphemeralPorts|Whether the bookie is allowed to use an ephemeral port (port 0) as its server port. By default, an ephemeral port is not allowed. Using an ephemeral port as the service port usually indicates a configuration error. However, in unit tests, using an ephemeral port will address port conflict problems and allow running tests in parallel.|false| -|enableLocalTransport|Whether the bookie is allowed to listen for the BookKeeper clients executed on the local JVM.|false| -|disableServerSocketBind|Whether the bookie is allowed to disable bind on network interfaces. This bookie will be available only to BookKeeper clients executed on the local JVM.|false| -|skipListArenaChunkSize|The number of bytes that we should use as chunk allocation for `org.apache.bookkeeper.bookie.SkipListArena`.|4194304| -|skipListArenaMaxAllocSize|The maximum size that we should allocate from the skiplist arena. Allocations larger than this should be allocated directly by the VM to avoid fragmentation.|131072| -|bookieAuthProviderFactoryClass|The factory class name of the bookie authentication provider. If this is null, then there is no authentication.|null| -|statsProviderClass||org.apache.bookkeeper.stats.prometheus.PrometheusMetricsProvider| -|prometheusStatsHttpPort||8000| -|dbStorage_writeCacheMaxSizeMb|Size of Write Cache. Memory is allocated from JVM direct memory. Write cache is used to buffer entries before flushing into the entry log. For good performance, it should be big enough to hold a substantial amount of entries in the flush interval.|25% of direct memory| -|dbStorage_readAheadCacheMaxSizeMb|Size of Read cache. Memory is allocated from JVM direct memory. This read cache is pre-filled doing read-ahead whenever a cache miss happens. By default, it is allocated to 25% of the available direct memory.|N/A| -|dbStorage_readAheadCacheBatchSize|How many entries to pre-fill in cache after a read cache miss|1000| -|dbStorage_rocksDB_blockCacheSize|Size of RocksDB block-cache. For best performance, this cache should be big enough to hold a significant portion of the index database which can reach ~2GB in some cases. By default, it uses 10% of direct memory.|N/A| -|dbStorage_rocksDB_writeBufferSizeMB||64| -|dbStorage_rocksDB_sstSizeInMB||64| -|dbStorage_rocksDB_blockSize||65536| -|dbStorage_rocksDB_bloomFilterBitsPerKey||10| -|dbStorage_rocksDB_numLevels||-1| -|dbStorage_rocksDB_numFilesInLevel0||4| -|dbStorage_rocksDB_maxSizeInLevel1MB||256| - -## Broker - -Pulsar brokers are responsible for handling incoming messages from producers, dispatching messages to consumers, replicating data between clusters, and more. - -|Name|Description|Default| -|---|---|---| -|advertisedListeners|Specify multiple advertised listeners for the broker.

    The format is `<listener_name>:pulsar://<host>:<port>`.

    If there are multiple listeners, separate them with commas.

    **Note**: do not use this configuration with `advertisedAddress` and `brokerServicePort`. If the value of this configuration is empty, the broker uses `advertisedAddress` and `brokerServicePort`.|/|
-|internalListenerName|Specify the internal listener name for the broker.

    **Note**: the listener name must be contained in `advertisedListeners`.

    If the value of this configuration is empty, the broker uses the first listener as the internal listener.|/| -|authenticateOriginalAuthData| If this flag is set to `true`, the broker authenticates the original Auth data; else it just accepts the originalPrincipal and authorizes it (if required). |false| -|enablePersistentTopics| Whether persistent topics are enabled on the broker |true| -|enableNonPersistentTopics| Whether non-persistent topics are enabled on the broker |true| -|functionsWorkerEnabled| Whether the Pulsar Functions worker service is enabled in the broker |false| -|exposePublisherStats|Whether to enable topic level metrics.|true| -|statsUpdateFrequencyInSecs||60| -|statsUpdateInitialDelayInSecs||60| -|zookeeperServers| Zookeeper quorum connection string || -|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300| -|configurationStoreServers| Configuration store connection string (as a comma-separated list) || -|brokerServicePort| Broker data port |6650| -|brokerServicePortTls| Broker data port for TLS |6651| -|webServicePort| Port to use to server HTTP request |8080| -|webServicePortTls| Port to use to server HTTPS request |8443| -|webSocketServiceEnabled| Enable the WebSocket API service in broker |false| -|webSocketNumIoThreads|The number of IO threads in Pulsar Client used in WebSocket proxy.|8| -|webSocketConnectionsPerBroker|The number of connections per Broker in Pulsar Client used in WebSocket proxy.|8| -|webSocketSessionIdleTimeoutMillis|Time in milliseconds that idle WebSocket session times out.|300000| -|webSocketMaxTextFrameSize|The maximum size of a text message during parsing in WebSocket proxy.|1048576| -|exposeTopicLevelMetricsInPrometheus|Whether to enable topic level metrics.|true| -|exposeConsumerLevelMetricsInPrometheus|Whether to enable consumer level metrics.|false| -|jvmGCMetricsLoggerClassName|Classname of Pluggable JVM GC metrics logger that can log GC specific metrics.|N/A| -|bindAddress| Hostname or IP address the service binds on, default is 0.0.0.0. |0.0.0.0| -|advertisedAddress| Hostname or IP address the service advertises to the outside world. If not set, the value of `InetAddress.getLocalHost().getHostName()` is used. || -|clusterName| Name of the cluster to which this broker belongs to || -|maxTenants|The maximum number of tenants that can be created in each Pulsar cluster. When the number of tenants reaches the threshold, the broker rejects the request of creating a new tenant. The default value 0 disables the check. |0| -| maxNamespacesPerTenant | The maximum number of namespaces that can be created in each tenant. When the number of namespaces reaches this threshold, the broker rejects the request of creating a new tenant. The default value 0 disables the check. |0| -|brokerDeduplicationEnabled| Sets the default behavior for message deduplication in the broker. If enabled, the broker will reject messages that were already stored in the topic. This setting can be overridden on a per-namespace basis. |false| -|brokerDeduplicationMaxNumberOfProducers| The maximum number of producers for which information will be stored for deduplication purposes. |10000| -|brokerDeduplicationEntriesInterval| The number of entries after which a deduplication informational snapshot is taken. A larger interval will lead to fewer snapshots being taken, though this would also lengthen the topic recovery time (the time required for entries published after the snapshot to be replayed). 
|1000|
-|brokerDeduplicationSnapshotIntervalSeconds| The time period after which a deduplication informational snapshot is taken. It runs simultaneously with `brokerDeduplicationEntriesInterval`. |120|
-|brokerDeduplicationProducerInactivityTimeoutMinutes| The time of inactivity (in minutes) after which the broker will discard deduplication information related to a disconnected producer. |360|
-|dispatchThrottlingRatePerReplicatorInMsg| The default messages per second dispatch throttling-limit for every replicator in replication. The value of `0` means disabling replication message dispatch-throttling| 0 |
-|dispatchThrottlingRatePerReplicatorInByte| The default bytes per second dispatch throttling-limit for every replicator in replication. The value of `0` means disabling replication message-byte dispatch-throttling| 0 |
-|zooKeeperSessionTimeoutMillis| Zookeeper session timeout in milliseconds |30000|
-|brokerShutdownTimeoutMs| Time to wait for broker graceful shutdown. After this time elapses, the process will be killed |60000|
-|skipBrokerShutdownOnOOM| Flag to skip broker shutdown when the broker handles an out-of-memory error. |false|
-|backlogQuotaCheckEnabled| Enable backlog quota check. Enforces action on topic when the quota is reached |true|
-|backlogQuotaCheckIntervalInSeconds| How often to check for topics that have reached the quota |60|
-|backlogQuotaDefaultLimitGB| The default per-topic backlog quota limit. A value less than 0 means no limitation. By default, it is -1. | -1 |
-|backlogQuotaDefaultRetentionPolicy|The default backlog quota retention policy. By default, it is `producer_request_hold`.
  1. 'producer_request_hold' Policy which holds the producer's send request until the resource becomes available (or holding times out)
  2. 'producer_exception' Policy which throws `javax.jms.ResourceAllocationException` to the producer
  3. 'consumer_backlog_eviction' Policy which evicts the oldest message from the slowest consumer's backlog
  |producer_request_hold|
-|allowAutoTopicCreation| Enable topic auto creation if a new producer or consumer connects |true|
-|allowAutoTopicCreationType| The type of topic that is allowed to be automatically created (partitioned/non-partitioned) |non-partitioned|
-|allowAutoSubscriptionCreation| Enable subscription auto creation if a new consumer connects |true|
-|defaultNumPartitions| The default number of partitions for partitioned topics that are automatically created, if `allowAutoTopicCreationType` is partitioned |1|
-|brokerDeleteInactiveTopicsEnabled| Enable the deletion of inactive topics |true|
-|brokerDeleteInactiveTopicsFrequencySeconds| How often to check for inactive topics |60|
-| brokerDeleteInactiveTopicsMode | Set the mode to delete inactive topics.
  1. `delete_when_no_subscriptions`: delete the topic which has no subscriptions or active producers.
  2. `delete_when_subscriptions_caught_up`: delete the topic whose subscriptions have no backlogs and which has no active producers or consumers.
  | `delete_when_no_subscriptions` |
-| brokerDeleteInactiveTopicsMaxInactiveDurationSeconds | Set the maximum duration for inactive topics. If it is not specified, the `brokerDeleteInactiveTopicsFrequencySeconds` parameter is adopted. | N/A |
-|forceDeleteTenantAllowed| Allow a tenant to be deleted forcefully. |false|
-|forceDeleteNamespaceAllowed| Allow a namespace to be deleted forcefully. |false|
-|messageExpiryCheckIntervalInMinutes| The frequency of proactively checking and purging expired messages. |5|
-|brokerServiceCompactionMonitorIntervalInSeconds| Interval between checks to determine whether topics with compaction policies need compaction. |60|
-|brokerServiceCompactionThresholdInBytes|If the estimated backlog size is greater than this threshold, compaction is triggered.

    Setting this threshold to 0 disables the compaction check.|N/A
-|delayedDeliveryEnabled| Whether to enable the delayed delivery for messages. If disabled, messages will be immediately delivered and there will be no tracking overhead.|true|
-|delayedDeliveryTickTimeMillis|Control the tick time for retrying on delayed delivery, which affects the accuracy of the delivery time compared to the scheduled time. By default, it is 1 second.|1000|
-|activeConsumerFailoverDelayTimeMillis| How long to delay rewinding the cursor and dispatching messages when the active consumer is changed. |1000|
-|clientLibraryVersionCheckEnabled| Enable check for minimum allowed client library version |false|
-|clientLibraryVersionCheckAllowUnversioned| Allow client libraries with no version information |true|
-|statusFilePath| Path for the file used to determine the rotation status for the broker when responding to service discovery health checks ||
-|preferLaterVersions| If true (and ModularLoadManagerImpl is being used), the load manager will attempt to use only brokers running the latest software version (to minimize impact to bundles) |false|
-|maxNumPartitionsPerPartitionedTopic|Max number of partitions per partitioned topic. Use 0 or a negative number to disable the check|0|
-| maxSubscriptionsPerTopic | Maximum number of subscriptions allowed to subscribe to a topic. Once this limit is reached, the broker rejects new subscriptions until the number of subscriptions decreases. When the value is set to 0, the limit check is disabled. | 0 |
-| maxProducersPerTopic | Maximum number of producers allowed to connect to a topic. Once this limit is reached, the broker rejects new producers until the number of connected producers decreases. When the value is set to 0, the limit check is disabled. | 0 |
-| maxConsumersPerTopic | Maximum number of consumers allowed to connect to a topic. Once this limit is reached, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 |
-| maxConsumersPerSubscription | Maximum number of consumers allowed to connect to a subscription. Once this limit is reached, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 |
-|tlsEnabled|Deprecated - Use `webServicePortTls` and `brokerServicePortTls` instead. |false|
-|tlsCertificateFilePath| Path for the TLS certificate file ||
-|tlsKeyFilePath| Path for the TLS private key file ||
-|tlsTrustCertsFilePath| Path for the trusted TLS certificate file. This cert is used to verify that any certs presented by connecting clients are signed by a certificate authority. If this verification fails, then the certs are untrusted and the connections are dropped. ||
-|tlsAllowInsecureConnection| Accept untrusted TLS certificates from clients. If it is set to `true`, a client with a cert which cannot be verified with the 'tlsTrustCertsFilePath' cert will be allowed to connect to the server, though the cert will not be used for client authentication. |false|
-|tlsProtocols|Specify the TLS protocols the broker will use to negotiate during the TLS handshake. Multiple values can be specified, separated by commas. Example: `TLSv1.3`, `TLSv1.2` ||
-|tlsCiphers|Specify the TLS ciphers the broker will use to negotiate during the TLS handshake. Multiple values can be specified, separated by commas. 
Example: `TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256`||
-|tlsEnabledWithKeyStore| Enable TLS with KeyStore type configuration in the broker |false|
-|tlsProvider| TLS Provider for KeyStore type ||
-|tlsKeyStoreType| TLS KeyStore type configuration in the broker: JKS, PKCS12 |JKS|
-|tlsKeyStore| TLS KeyStore path in the broker ||
-|tlsKeyStorePassword| TLS KeyStore password for the broker ||
-|brokerClientTlsEnabledWithKeyStore| Whether the internal client uses the KeyStore type to authenticate with Pulsar brokers |false|
-|brokerClientSslProvider| The TLS Provider used by the internal client to authenticate with other Pulsar brokers ||
-|brokerClientTlsTrustStoreType| TLS TrustStore type configuration for the internal client: JKS, PKCS12, used by the internal client to authenticate with Pulsar brokers |JKS|
-|brokerClientTlsTrustStore| TLS TrustStore path for the internal client, used by the internal client to authenticate with Pulsar brokers ||
-|brokerClientTlsTrustStorePassword| TLS TrustStore password for the internal client, used by the internal client to authenticate with Pulsar brokers ||
-|brokerClientTlsCiphers| Specify the TLS ciphers the internal client will use to negotiate during the TLS handshake (a comma-separated list of ciphers), e.g. [TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256]||
-|brokerClientTlsProtocols|Specify the TLS protocols the internal client will use to negotiate during the TLS handshake (a comma-separated list of protocol names), e.g. `TLSv1.3`, `TLSv1.2` ||
-|ttlDurationDefaultInSeconds|The default Time to Live (TTL) for namespaces if the TTL is not configured in namespace policies. When the value is set to `0`, TTL is disabled. By default, TTL is disabled. |0|
-|tokenSettingPrefix| Configure the prefix of the token-related settings like `tokenSecretKey`, `tokenPublicKey`, `tokenAuthClaim`, `tokenPublicAlg`, `tokenAudienceClaim`, and `tokenAudience`. ||
-|tokenSecretKey| Configure the secret key to be used to validate auth tokens. The key can be specified like: `tokenSecretKey=data:;base64,xxxxxxxxx` or `tokenSecretKey=file:///my/secret.key`. Note: the key file must be DER-encoded.||
-|tokenPublicKey| Configure the public key to be used to validate auth tokens. The key can be specified like: `tokenPublicKey=data:;base64,xxxxxxxxx` or `tokenPublicKey=file:///my/secret.key`. Note: the key file must be DER-encoded.||
-|tokenPublicAlg| Configure the algorithm to be used to validate auth tokens. This can be any of the asymmetric algorithms supported by Java JWT (https://github.com/jwtk/jjwt#signature-algorithms-keys) |RS256|
-|tokenAuthClaim| Specify which of the token's claims will be used as the authentication "principal" or "role". The default "sub" claim will be used if this is left blank ||
-|tokenAudienceClaim| The token audience "claim" name, e.g. "aud", that will be used to get the audience from the token. If not set, the audience will not be verified. ||
-|tokenAudience| The token audience stands for this broker. The field `tokenAudienceClaim` of a valid token needs to contain this. ||
-|maxUnackedMessagesPerConsumer| Max number of unacknowledged messages allowed for a consumer on a shared subscription. The broker will stop sending messages to a consumer once this limit is reached, until the consumer starts acknowledging messages back. Using a value of 0 disables the unacked-message limit check, and the consumer can receive messages without any restriction |50000|
-|maxUnackedMessagesPerSubscription| Max number of unacknowledged messages allowed per shared subscription. 
The broker will stop dispatching messages to all consumers of the subscription once this limit is reached, until consumers start acknowledging messages back and the unacked count drops to limit/2. Using a value of 0 disables the unacked-message limit check, and the dispatcher can dispatch messages without any restriction |200000|
-|subscriptionRedeliveryTrackerEnabled| Enable the subscription message redelivery tracker |true|
-|subscriptionExpirationTimeMinutes | How long, measured from the last time of consumption, before inactive subscriptions are deleted.

    Setting this configuration to a value **greater than 0** deletes inactive subscriptions automatically.
    Setting this configuration to **0** does not delete inactive subscriptions automatically.

    Since this configuration takes effect on all topics, if there is even one topic whose subscriptions should not be deleted automatically, you need to set it to 0.
    Instead, you can set a subscription expiration time for each **namespace** using the [`pulsar-admin namespaces set-subscription-expiration-time options` command](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-subscription-expiration-time-em-). | 0 |
-|maxConcurrentLookupRequest| Max number of concurrent lookup requests the broker allows, to throttle heavy incoming lookup traffic |50000|
-|maxConcurrentTopicLoadRequest| Max number of concurrent topic loading requests the broker allows, to control the number of zk-operations |5000|
-|authenticationEnabled| Enable authentication |false|
-|authenticationProviders| Authentication provider name list, which is a comma-separated list of class names ||
-| authenticationRefreshCheckSeconds | Interval of time for checking for expired authentication credentials | 60 |
-|authorizationEnabled| Enforce authorization |false|
-|superUserRoles| Role names that are treated as "super-user", meaning they will be able to do all admin operations and publish/consume from all topics ||
-|brokerClientAuthenticationPlugin| Authentication settings of the broker itself. Used when the broker connects to other brokers, either in same or other clusters ||
-|brokerClientAuthenticationParameters|||
-|athenzDomainNames| Supported Athenz provider domain names (comma separated) for authentication ||
-|exposePreciseBacklogInPrometheus| Enable exposing the precise backlog stats; set to false to use the published counter and consumed counter to calculate, which is more efficient but may be inaccurate. |false|
-|schemaRegistryStorageClassName|The schema storage implementation used by this broker.|org.apache.pulsar.broker.service.schema.BookkeeperSchemaStorageFactory|
-|isSchemaValidationEnforced|Enforce schema validation in the following case: if a producer without a schema attempts to produce to a topic with a schema, the producer fails to connect. Please be careful when using this, since non-Java clients don't support schema; if this setting is enabled, non-Java clients fail to produce.|false|
-| topicFencingTimeoutSeconds | If a topic remains fenced for a certain time period (in seconds), it is closed forcefully. If set to 0 or a negative number, the fenced topic is not closed. | 0 |
-|offloadersDirectory|The directory for all the offloader implementations.|./offloaders|
-|bookkeeperMetadataServiceUri| Metadata service URI that BookKeeper uses for loading the corresponding metadata driver and resolving its metadata service location. This value can be fetched using the `bookkeeper shell whatisinstanceid` command in a BookKeeper cluster. For example: zk+hierarchical://localhost:2181/ledgers. The metadata service uri list can also be semicolon separated values like below: zk+hierarchical://zk1:2181;zk2:2181;zk3:2181/ledgers ||
-|bookkeeperClientAuthenticationPlugin| Authentication plugin to use when connecting to bookies ||
-|bookkeeperClientAuthenticationParametersName| BookKeeper auth plugin implementation specifics parameters name and values ||
-|bookkeeperClientAuthenticationParameters|||
-|bookkeeperClientNumWorkerThreads| Number of BookKeeper client worker threads. 
Default is Runtime.getRuntime().availableProcessors() ||
-|bookkeeperClientTimeoutInSeconds| Timeout for BK add / read operations |30|
-|bookkeeperClientSpeculativeReadTimeoutInMillis| Speculative reads are initiated if a read request doesn’t complete within a certain time. Using a value of 0 disables the speculative reads |0|
-|bookkeeperNumberOfChannelsPerBookie| Number of channels per bookie |16|
-|bookkeeperClientHealthCheckEnabled| Enable bookies health check. Bookies that have more than the configured number of failures within the interval will be quarantined for some time. During this period, new ledgers won’t be created on these bookies |true|
-|bookkeeperClientHealthCheckIntervalSeconds||60|
-|bookkeeperClientHealthCheckErrorThresholdPerInterval||5|
-|bookkeeperClientHealthCheckQuarantineTimeInSeconds ||1800|
-|bookkeeperClientRackawarePolicyEnabled| Enable rack-aware bookie selection policy. BK will choose bookies from different racks when forming a new bookie ensemble |true|
-|bookkeeperClientRegionawarePolicyEnabled| Enable region-aware bookie selection policy. BK will choose bookies from different regions and racks when forming a new bookie ensemble. If enabled, the value of bookkeeperClientRackawarePolicyEnabled is ignored |false|
-|bookkeeperClientMinNumRacksPerWriteQuorum| Minimum number of racks per write quorum. BK rack-aware bookie selection policy will try to get bookies from at least 'bookkeeperClientMinNumRacksPerWriteQuorum' racks for a write quorum. |2|
-|bookkeeperClientEnforceMinNumRacksPerWriteQuorum| Enforces the rack-aware bookie selection policy to pick bookies from 'bookkeeperClientMinNumRacksPerWriteQuorum' racks for a write quorum. If BK can't find such a bookie, it throws BKNotEnoughBookiesException instead of picking a random one. |false|
-|bookkeeperClientReorderReadSequenceEnabled| Enable/disable reordering read sequence on reading entries. |false|
-|bookkeeperClientIsolationGroups| Enable bookie isolation by specifying a list of bookie groups to choose from. Any bookie outside the specified groups will not be used by the broker ||
-|bookkeeperClientSecondaryIsolationGroups| Enable bookie secondary-isolation group if bookkeeperClientIsolationGroups doesn't have enough bookies available. ||
-|bookkeeperClientMinAvailableBookiesInIsolationGroups| Minimum number of bookies that should be available as part of bookkeeperClientIsolationGroups, else the broker will include bookkeeperClientSecondaryIsolationGroups bookies in the isolated list. ||
-|bookkeeperClientGetBookieInfoIntervalSeconds| Set the interval to periodically check bookie info |86400|
-|bookkeeperClientGetBookieInfoRetryIntervalSeconds| Set the interval to retry a failed bookie info lookup |60|
-|bookkeeperEnableStickyReads | Enable/disable having read operations for a ledger to be sticky to a single bookie. If this flag is enabled, the client will use one single bookie (by preference) to read all entries for a ledger. | true |
-|managedLedgerDefaultEnsembleSize| Number of bookies to use when creating a ledger |2|
-|managedLedgerDefaultWriteQuorum| Number of copies to store for each message |2|
-|managedLedgerDefaultAckQuorum| Number of guaranteed copies (acks to wait for before a write is complete) |2|
-|managedLedgerCacheSizeMB| Amount of memory to use for caching data payloads in managed ledgers. This memory is allocated from JVM direct memory and it’s shared across all the topics running in the same broker. 
By default, uses 1/5th of available direct memory || -|managedLedgerCacheCopyEntries| Whether we should make a copy of the entry payloads when inserting in cache| false| -|managedLedgerCacheEvictionWatermark| Threshold to which bring down the cache level when eviction is triggered |0.9| -|managedLedgerCacheEvictionFrequency| Configure the cache eviction frequency for the managed ledger cache (evictions/sec) | 100.0 | -|managedLedgerCacheEvictionTimeThresholdMillis| All entries that have stayed in cache for more than the configured time, will be evicted | 1000 | -|managedLedgerCursorBackloggedThreshold| Configure the threshold (in number of entries) from where a cursor should be considered 'backlogged' and thus should be set as inactive. | 1000| -|managedLedgerDefaultMarkDeleteRateLimit| Rate limit the amount of writes per second generated by consumer acking the messages |1.0| -|managedLedgerMaxEntriesPerLedger| The max number of entries to append to a ledger before triggering a rollover. A ledger rollover is triggered after the min rollover time has passed and one of the following conditions is true:
    • The max rollover time has been reached
    • The max entries have been written to the ledger
    • The max ledger size has been written to the ledger
    |50000| -|managedLedgerMinLedgerRolloverTimeMinutes| Minimum time between ledger rollover for a topic |10| -|managedLedgerMaxLedgerRolloverTimeMinutes| Maximum time before forcing a ledger rollover for a topic |240| -|managedLedgerCursorMaxEntriesPerLedger| Max number of entries to append to a cursor ledger |50000| -|managedLedgerCursorRolloverTimeInSeconds| Max time before triggering a rollover on a cursor ledger |14400| -|managedLedgerMaxUnackedRangesToPersist| Max number of "acknowledgment holes" that are going to be persistently stored. When acknowledging out of order, a consumer will leave holes that are supposed to be quickly filled by acking all the messages. The information of which messages are acknowledged is persisted by compressing in "ranges" of messages that were acknowledged. After the max number of ranges is reached, the information will only be tracked in memory and messages will be redelivered in case of crashes. |1000| -|autoSkipNonRecoverableData| Skip reading non-recoverable/unreadable data-ledger under managed-ledger’s list.It helps when data-ledgers gets corrupted at bookkeeper and managed-cursor is stuck at that ledger. |false| -|loadBalancerEnabled| Enable load balancer |true| -|loadBalancerPlacementStrategy| Strategy to assign a new bundle weightedRandomSelection || -|loadBalancerReportUpdateThresholdPercentage| Percentage of change to trigger load report update |10| -|loadBalancerReportUpdateMaxIntervalMinutes| maximum interval to update load report |15| -|loadBalancerHostUsageCheckIntervalMinutes| Frequency of report to collect |1| -|loadBalancerSheddingIntervalMinutes| Load shedding interval. Broker periodically checks whether some traffic should be offload from some over-loaded broker to other under-loaded brokers |30| -|loadBalancerSheddingGracePeriodMinutes| Prevent the same topics to be shed and moved to other broker more than once within this timeframe |30| -|loadBalancerBrokerMaxTopics| Usage threshold to allocate max number of topics to broker |50000| -|loadBalancerBrokerUnderloadedThresholdPercentage| Usage threshold to determine a broker as under-loaded |1| -|loadBalancerBrokerOverloadedThresholdPercentage| Usage threshold to determine a broker as over-loaded |85| -|loadBalancerResourceQuotaUpdateIntervalMinutes| Interval to update namespace bundle resource quota |15| -|loadBalancerBrokerComfortLoadLevelPercentage| Usage threshold to determine a broker is having just right level of load |65| -|loadBalancerAutoBundleSplitEnabled| enable/disable namespace bundle auto split |false| -|loadBalancerNamespaceBundleMaxTopics| maximum topics in a bundle, otherwise bundle split will be triggered |1000| -|loadBalancerNamespaceBundleMaxSessions| maximum sessions (producers + consumers) in a bundle, otherwise bundle split will be triggered |1000| -|loadBalancerNamespaceBundleMaxMsgRate| maximum msgRate (in + out) in a bundle, otherwise bundle split will be triggered |1000| -|loadBalancerNamespaceBundleMaxBandwidthMbytes| maximum bandwidth (in + out) in a bundle, otherwise bundle split will be triggered |100| -|loadBalancerNamespaceMaximumBundles| maximum number of bundles in a namespace |128| -|replicationMetricsEnabled| Enable replication metrics |true| -|replicationConnectionsPerBroker| Max number of connections to open for each broker in a remote cluster More connections host-to-host lead to better throughput over high-latency links. 
|16| -|replicationProducerQueueSize| Replicator producer queue size |1000| -|replicatorPrefix| Replicator prefix used for replicator producer name and cursor name pulsar.repl|| -|replicationTlsEnabled| Enable TLS when talking with other clusters to replicate messages |false| -|brokerServicePurgeInactiveFrequencyInSeconds|Deprecated. Use `brokerDeleteInactiveTopicsFrequencySeconds`.|60| -|transactionCoordinatorEnabled|Whether to enable transaction coordinator in broker.|true| -|transactionMetadataStoreProviderClassName| |org.apache.pulsar.transaction.coordinator.impl.InMemTransactionMetadataStoreProvider| -|defaultRetentionTimeInMinutes| Default message retention time. 0 means retention is disabled. -1 means data is not removed by time quota |0| -|defaultRetentionSizeInMB| Default retention size. 0 means retention is disabled. -1 means data is not removed by size quota |0| -|keepAliveIntervalSeconds| How often to check whether the connections are still alive |30| -|bootstrapNamespaces| The bootstrap name. | N/A | -|loadManagerClassName| Name of load manager to use |org.apache.pulsar.broker.loadbalance.impl.SimpleLoadManagerImpl| -|supportedNamespaceBundleSplitAlgorithms| Supported algorithms name for namespace bundle split |[range_equally_divide,topic_count_equally_divide]| -|defaultNamespaceBundleSplitAlgorithm| Default algorithm name for namespace bundle split |range_equally_divide| -|managedLedgerOffloadDriver| The directory for all the offloader implementations `offloadersDirectory=./offloaders`. Driver to use to offload old data to long term storage (Possible values: S3, aws-s3, google-cloud-storage). When using google-cloud-storage, Make sure both Google Cloud Storage and Google Cloud Storage JSON API are enabled for the project (check from Developers Console -> Api&auth -> APIs). || -|managedLedgerOffloadMaxThreads| Maximum number of thread pool threads for ledger offloading |2| -|managedLedgerOffloadPrefetchRounds|The maximum prefetch rounds for ledger reading for offloading.|1| -|managedLedgerUnackedRangesOpenCacheSetEnabled| Use Open Range-Set to cache unacknowledged messages |true| -|managedLedgerOffloadDeletionLagMs|Delay between a ledger being successfully offloaded to long term storage and the ledger being deleted from bookkeeper | 14400000| -|managedLedgerOffloadAutoTriggerSizeThresholdBytes|The number of bytes before triggering automatic offload to long term storage |-1 (disabled)| -|s3ManagedLedgerOffloadRegion| For Amazon S3 ledger offload, AWS region || -|s3ManagedLedgerOffloadBucket| For Amazon S3 ledger offload, Bucket to place offloaded ledger into || -|s3ManagedLedgerOffloadServiceEndpoint| For Amazon S3 ledger offload, Alternative endpoint to connect to (useful for testing) || -|s3ManagedLedgerOffloadMaxBlockSizeInBytes| For Amazon S3 ledger offload, Max block size in bytes. (64MB by default, 5MB minimum) |67108864| -|s3ManagedLedgerOffloadReadBufferSizeInBytes| For Amazon S3 ledger offload, Read buffer size in bytes (1MB by default) |1048576| -|gcsManagedLedgerOffloadRegion|For Google Cloud Storage ledger offload, region where offload bucket is located. Go to this page for more details: https://cloud.google.com/storage/docs/bucket-locations .|N/A| -|gcsManagedLedgerOffloadBucket|For Google Cloud Storage ledger offload, Bucket to place offloaded ledger into.|N/A| -|gcsManagedLedgerOffloadMaxBlockSizeInBytes|For Google Cloud Storage ledger offload, the maximum block size in bytes. 
(64MB by default, 5MB minimum)|67108864| -|gcsManagedLedgerOffloadReadBufferSizeInBytes|For Google Cloud Storage ledger offload, Read buffer size in bytes. (1MB by default)|1048576| -|gcsManagedLedgerOffloadServiceAccountKeyFile|For Google Cloud Storage, path to json file containing service account credentials. For more details, see the "Service Accounts" section of https://support.google.com/googleapi/answer/6158849 .|N/A| -|fileSystemProfilePath|For File System Storage, file system profile path.|../conf/filesystem_offload_core_site.xml| -|fileSystemURI|For File System Storage, file system uri.|N/A| -|s3ManagedLedgerOffloadRole| For Amazon S3 ledger offload, provide a role to assume before writing to s3 || -|s3ManagedLedgerOffloadRoleSessionName| For Amazon S3 ledger offload, provide a role session name when using a role |pulsar-s3-offload| -| acknowledgmentAtBatchIndexLevelEnabled | Enable or disable the batch index acknowledgement. | false | -|enableReplicatedSubscriptions|Whether to enable tracking of replicated subscriptions state across clusters.|true| -|replicatedSubscriptionsSnapshotFrequencyMillis|The frequency of snapshots for replicated subscriptions tracking.|1000| -|replicatedSubscriptionsSnapshotTimeoutSeconds|The timeout for building a consistent snapshot for tracking replicated subscriptions state.|30| -|replicatedSubscriptionsSnapshotMaxCachedPerSubscription|The maximum number of snapshot to be cached per subscription.|10| -|maxMessagePublishBufferSizeInMB|The maximum memory size for a broker to handle messages that are sent by producers. If the processing message size exceeds this value, the broker stops reading data from the connection. The processing messages refer to the messages that are sent to the broker but the broker has not sent response to the client. Usually the messages are waiting to be written to bookies. It is shared across all the topics running in the same broker. The value `-1` disables the memory limitation. By default, it is 50% of direct memory.|N/A| -|messagePublishBufferCheckIntervalInMillis|Interval between checks to see if message publish buffer size exceeds the maximum. Use `0` or negative number to disable the max publish buffer limiting.|100| -|retentionCheckIntervalInSeconds|Check between intervals to see if consumed ledgers need to be trimmed. Use 0 or negative number to disable the check.|120| -| maxMessageSize | Set the maximum size of a message. | 5242880 | -| preciseTopicPublishRateLimiterEnable | Enable precise topic publish rate limiting. | false | -| lazyCursorRecovery | Whether to recover cursors lazily when trying to recover a managed ledger backing a persistent topic. It can improve write availability of topics. The caveat is now when recovered ledger is ready to write we're not sure if all old consumers' last mark delete position(ack position) can be recovered or not. So user can make the trade off or have custom logic in application to checkpoint consumer state.| false | -|haProxyProtocolEnabled | Enable or disable the [HAProxy](http://www.haproxy.org/) protocol. |false| -| maxTopicsPerNamespace | The maximum number of persistent topics that can be created in the namespace. When the number of topics reaches this threshold, the broker rejects the request of creating a new topic, including the auto-created topics by the producer or consumer, until the number of connected consumers decreases. The default value 0 disables the check. 
| 0 | -|subscriptionTypesEnabled| Enable all subscription types, which are exclusive, shared, failover, and key_shared. | Exclusive, Shared, Failover, Key_Shared | -| managedLedgerInfoCompressionType | ManagedLedgerInfo compression type, option values (NONE, LZ4, ZLIB, ZSTD, SNAPPY), if value is NONE or invalid, the managedLedgerInfo will not be compressed. Notice, after enable this configuration, if you want to degrade broker, you should change the configuration to `NONE` and make sure all ledger metadata are saved without compression. | None | -|narExtractionDirectory | The extraction directory of the nar package.
    Available for Protocol Handler, Additional Servlets, Offloaders, Broker Interceptor. | System.getProperty("java.io.tmpdir") | - -## Client - -You can use the [`pulsar-client`](reference-cli-tools.md#pulsar-client) CLI tool to publish messages to and consume messages from Pulsar topics. You can use this tool in place of a client library. - -|Name|Description|Default| -|---|---|---| -|webServiceUrl| The web URL for the cluster. |http://localhost:8080/| -|brokerServiceUrl| The Pulsar protocol URL for the cluster. |pulsar://localhost:6650/| -|authPlugin| The authentication plugin. || -|authParams| The authentication parameters for the cluster, as a comma-separated string. || -|useTls| Whether to enforce the TLS authentication in the cluster. |false| -| tlsAllowInsecureConnection | Allow TLS connections to servers whose certificate cannot be verified to have been signed by a trusted certificate authority. | false | -| tlsEnableHostnameVerification | Whether the server hostname must match the common name of the certificate that is used by the server. | false | -|tlsTrustCertsFilePath||| -| useKeyStoreTls | Enable TLS with KeyStore type configuration in the broker. | false | -| tlsTrustStoreType | TLS TrustStore type configuration.
Available values: JKS, PKCS12. |JKS|
| tlsTrustStore | TLS TrustStore path. | |
| tlsTrustStorePassword | TLS TrustStore password. | |


## Service discovery

|Name|Description|Default|
|---|---|---|
|zookeeperServers| ZooKeeper quorum connection string (comma-separated) ||
|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300|
|configurationStoreServers| Configuration store connection string (as a comma-separated list) ||
|zookeeperSessionTimeoutMs| ZooKeeper session timeout |30000|
|servicePort| Port to use to serve binary-proto requests |6650|
|servicePortTls| Port to use to serve binary-proto-tls requests |6651|
|webServicePort| Port that the discovery service listens on |8080|
|webServicePortTls| Port to use to serve HTTPS requests |8443|
|bindOnLocalhost| Control whether to bind directly on localhost rather than on the normal hostname |false|
|authenticationEnabled| Enable authentication |false|
|authenticationProviders| Authentication provider name list (a comma-separated list of class names) ||
|authorizationEnabled| Enforce authorization |false|
|superUserRoles| Role names that are treated as "super-users", meaning they are able to perform all admin operations and publish/consume from all topics (comma-separated) ||
|tlsEnabled| Enable TLS |false|
|tlsCertificateFilePath| Path for the TLS certificate file ||
|tlsKeyFilePath| Path for the TLS private key file ||



## Log4j

You can set the log level and configuration in the [log4j2.yaml](https://github.com/apache/pulsar/blob/d557e0aa286866363bc6261dec87790c055db1b0/conf/log4j2.yaml#L155) file. The following logging configuration parameters are available.

|Name|Default|
|---|---|
|pulsar.root.logger| WARN,CONSOLE|
|pulsar.log.dir| logs|
|pulsar.log.file| pulsar.log|
|log4j.rootLogger| ${pulsar.root.logger}|
|log4j.appender.CONSOLE| org.apache.log4j.ConsoleAppender|
|log4j.appender.CONSOLE.Threshold| DEBUG|
|log4j.appender.CONSOLE.layout| org.apache.log4j.PatternLayout|
|log4j.appender.CONSOLE.layout.ConversionPattern| %d{ISO8601} - %-5p - [%t:%C{1}@%L] - %m%n|
|log4j.appender.ROLLINGFILE| org.apache.log4j.DailyRollingFileAppender|
|log4j.appender.ROLLINGFILE.Threshold| DEBUG|
|log4j.appender.ROLLINGFILE.File| ${pulsar.log.dir}/${pulsar.log.file}|
|log4j.appender.ROLLINGFILE.layout| org.apache.log4j.PatternLayout|
|log4j.appender.ROLLINGFILE.layout.ConversionPattern| %d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n|
|log4j.appender.TRACEFILE| org.apache.log4j.FileAppender|
|log4j.appender.TRACEFILE.Threshold| TRACE|
|log4j.appender.TRACEFILE.File| pulsar-trace.log|
|log4j.appender.TRACEFILE.layout| org.apache.log4j.PatternLayout|
|log4j.appender.TRACEFILE.layout.ConversionPattern| %d{ISO8601} - %-5p [%t:%C{1}@%L][%x] - %m%n|

> Note: 'topic' in log4j2.appender is configurable.
> - If you want to append all logs to a single topic, set the same topic name.
> - If you want to append logs to different topics, set different topic names.
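For instance, the properties in the table above can be overridden as JVM system properties at launch. The sketch below raises the root logger to DEBUG for a single standalone run; using `PULSAR_EXTRA_OPTS` as the injection point is an assumption about your launch scripts, and editing `conf/log4j2.yaml` directly achieves the same effect.

```shell
# Sketch: override the logging properties above for one standalone run.
# PULSAR_EXTRA_OPTS as the injection point is an assumption; you can
# instead edit conf/log4j2.yaml, which reads the same property names.
PULSAR_EXTRA_OPTS="-Dpulsar.root.logger=DEBUG,CONSOLE -Dpulsar.log.dir=/tmp/pulsar-logs" \
  bin/pulsar standalone
```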
- -## Log4j shell - -|Name|Default| -|---|---| -|bookkeeper.root.logger| ERROR,CONSOLE| -|log4j.rootLogger| ${bookkeeper.root.logger}| -|log4j.appender.CONSOLE| org.apache.log4j.ConsoleAppender| -|log4j.appender.CONSOLE.Threshold| DEBUG| -|log4j.appender.CONSOLE.layout| org.apache.log4j.PatternLayout| -|log4j.appender.CONSOLE.layout.ConversionPattern| %d{ABSOLUTE} %-5p %m%n| -|log4j.logger.org.apache.zookeeper| ERROR| -|log4j.logger.org.apache.bookkeeper| ERROR| -|log4j.logger.org.apache.bookkeeper.bookie.BookieShell| INFO| - - -## Standalone - -|Name|Description|Default| -|---|---|---| -|authenticateOriginalAuthData| If this flag is set to `true`, the broker authenticates the original Auth data; else it just accepts the originalPrincipal and authorizes it (if required). |false| -|zookeeperServers| The quorum connection string for local ZooKeeper || -|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300| -|configurationStoreServers| Configuration store connection string (as a comma-separated list) || -|brokerServicePort| The port on which the standalone broker listens for connections |6650| -|webServicePort| The port used by the standalone broker for HTTP requests |8080| -|bindAddress| The hostname or IP address on which the standalone service binds |0.0.0.0| -|advertisedAddress| The hostname or IP address that the standalone service advertises to the outside world. If not set, the value of `InetAddress.getLocalHost().getHostName()` is used. || -| numAcceptorThreads | Number of threads to use for Netty Acceptor | 1 | -| numIOThreads | Number of threads to use for Netty IO | 2 * Runtime.getRuntime().availableProcessors() | -| numHttpServerThreads | Number of threads to use for HTTP requests processing | 2 * Runtime.getRuntime().availableProcessors()| -|isRunningStandalone|This flag controls features that are meant to be used when running in standalone mode.|N/A| -|clusterName| The name of the cluster that this broker belongs to. |standalone| -| failureDomainsEnabled | Enable cluster's failure-domain which can distribute brokers into logical region. | false | -|zooKeeperSessionTimeoutMillis| The ZooKeeper session timeout, in milliseconds. |30000| -|zooKeeperOperationTimeoutSeconds|ZooKeeper operation timeout in seconds.|30| -|brokerShutdownTimeoutMs| The time to wait for graceful broker shutdown. After this time elapses, the process will be killed. |60000| -|skipBrokerShutdownOnOOM| Flag to skip broker shutdown when broker handles Out of memory error. |false| -|backlogQuotaCheckEnabled| Enable the backlog quota check, which enforces a specified action when the quota is reached. |true| -|backlogQuotaCheckIntervalInSeconds| How often to check for topics that have reached the backlog quota. |60| -|backlogQuotaDefaultLimitGB| The default per-topic backlog quota limit. Being less than 0 means no limitation. By default, it is -1. |-1| -|ttlDurationDefaultInSeconds|The default Time to Live (TTL) for namespaces if the TTL is not configured at namespace policies. When the value is set to `0`, TTL is disabled. By default, TTL is disabled. |0| -|brokerDeleteInactiveTopicsEnabled| Enable the deletion of inactive topics. |true| -|brokerDeleteInactiveTopicsFrequencySeconds| How often to check for inactive topics, in seconds. |60| -| maxPendingPublishRequestsPerConnection | Maximum pending publish requests per connection to avoid keeping large number of pending requests in memory | 1000| -|messageExpiryCheckIntervalInMinutes| How often to proactively check and purged expired messages. 
|5| -|activeConsumerFailoverDelayTimeMillis| How long to delay rewinding cursor and dispatching messages when active consumer is changed. |1000| -| subscriptionExpirationTimeMinutes | How long to delete inactive subscriptions from last consumption. When it is set to 0, inactive subscriptions are not deleted automatically | 0 | -| subscriptionRedeliveryTrackerEnabled | Enable subscription message redelivery tracker to send redelivery count to consumer. | true | -|subscriptionKeySharedEnable|Whether to enable the Key_Shared subscription.|true| -| subscriptionKeySharedUseConsistentHashing | In Key_Shared subscription type, with default AUTO_SPLIT mode, use splitting ranges or consistent hashing to reassign keys to new consumers. | false | -| subscriptionKeySharedConsistentHashingReplicaPoints | In Key_Shared subscription type, the number of points in the consistent-hashing ring. The greater the number, the more equal the assignment of keys to consumers. | 100 | -| subscriptionExpiryCheckIntervalInMinutes | How frequently to proactively check and purge expired subscription |5 | -| brokerDeduplicationEnabled | Set the default behavior for message deduplication in the broker. This can be overridden per-namespace. If it is enabled, the broker rejects messages that are already stored in the topic. | false | -| brokerDeduplicationMaxNumberOfProducers | Maximum number of producer information that it's going to be persisted for deduplication purposes | 10000 | -| brokerDeduplicationEntriesInterval | Number of entries after which a deduplication information snapshot is taken. A greater interval leads to less snapshots being taken though it would increase the topic recovery time, when the entries published after the snapshot need to be replayed. | 1000 | -| brokerDeduplicationProducerInactivityTimeoutMinutes | The time of inactivity (in minutes) after which the broker discards deduplication information related to a disconnected producer. | 360 | -| defaultNumberOfNamespaceBundles | When a namespace is created without specifying the number of bundles, this value is used as the default setting.| 4 | -|clientLibraryVersionCheckEnabled| Enable checks for minimum allowed client library version. |false| -|clientLibraryVersionCheckAllowUnversioned| Allow client libraries with no version information |true| -|statusFilePath| The path for the file used to determine the rotation status for the broker when responding to service discovery health checks |/usr/local/apache/htdocs| -|maxUnackedMessagesPerConsumer| The maximum number of unacknowledged messages allowed to be received by consumers on a shared subscription. The broker will stop sending messages to a consumer once this limit is reached or until the consumer begins acknowledging messages. A value of 0 disables the unacked message limit check and thus allows consumers to receive messages without any restrictions. |50000| -|maxUnackedMessagesPerSubscription| The same as above, except per subscription rather than per consumer. |200000| -| maxUnackedMessagesPerBroker | Maximum number of unacknowledged messages allowed per broker. Once this limit reaches, the broker stops dispatching messages to all shared subscriptions which has a higher number of unacknowledged messages until subscriptions start acknowledging messages back and unacknowledged messages count reaches to limit/2. When the value is set to 0, unacknowledged message limit check is disabled and broker does not block dispatchers. 
| 0 | -| maxUnackedMessagesPerSubscriptionOnBrokerBlocked | Once the broker reaches maxUnackedMessagesPerBroker limit, it blocks subscriptions which have higher unacknowledged messages than this percentage limit and subscription does not receive any new messages until that subscription acknowledges messages back. | 0.16 | -| unblockStuckSubscriptionEnabled|Broker periodically checks if subscription is stuck and unblock if flag is enabled.|false| -|zookeeperSessionExpiredPolicy|There are two policies when ZooKeeper session expired happens, "shutdown" and "reconnect". If it is set to "shutdown" policy, when ZooKeeper session expired happens, the broker is shutdown. If it is set to "reconnect" policy, the broker tries to reconnect to ZooKeeper server and re-register metadata to ZooKeeper. Note: the "reconnect" policy is an experiment feature.|shutdown| -| topicPublisherThrottlingTickTimeMillis | Tick time to schedule task that checks topic publish rate limiting across all topics. A lower value can improve accuracy while throttling publish but it uses more CPU to perform frequent check. (Disable publish throttling with value 0) | 10| -| brokerPublisherThrottlingTickTimeMillis | Tick time to schedule task that checks broker publish rate limiting across all topics. A lower value can improve accuracy while throttling publish but it uses more CPU to perform frequent check. When the value is set to 0, publish throttling is disabled. |50 | -| brokerPublisherThrottlingMaxMessageRate | Maximum rate (in 1 second) of messages allowed to publish for a broker if the message rate limiting is enabled. When the value is set to 0, message rate limiting is disabled. | 0| -| brokerPublisherThrottlingMaxByteRate | Maximum rate (in 1 second) of bytes allowed to publish for a broker if the byte rate limiting is enabled. When the value is set to 0, the byte rate limiting is disabled. | 0 | -|subscribeThrottlingRatePerConsumer|Too many subscribe requests from a consumer can cause broker rewinding consumer cursors and loading data from bookies, hence causing high network bandwidth usage. When the positive value is set, broker will throttle the subscribe requests for one consumer. Otherwise, the throttling will be disabled. By default, throttling is disabled.|0| -|subscribeRatePeriodPerConsumerInSecond|Rate period for {subscribeThrottlingRatePerConsumer}. By default, it is 30s.|30| -| dispatchThrottlingRatePerTopicInMsg | Default messages (per second) dispatch throttling-limit for every topic. When the value is set to 0, default message dispatch throttling-limit is disabled. |0 | -| dispatchThrottlingRatePerTopicInByte | Default byte (per second) dispatch throttling-limit for every topic. When the value is set to 0, default byte dispatch throttling-limit is disabled. | 0| -| dispatchThrottlingRateRelativeToPublishRate | Enable dispatch rate-limiting relative to publish rate. | false | -|dispatchThrottlingRatePerSubscriptionInMsg|The defaulted number of message dispatching throttling-limit for a subscription. The value of 0 disables message dispatch-throttling.|0| -|dispatchThrottlingRatePerSubscriptionInByte|The default number of message-bytes dispatching throttling-limit for a subscription. The value of 0 disables message-byte dispatch-throttling.|0| -| dispatchThrottlingOnNonBacklogConsumerEnabled | Enable dispatch-throttling for both caught up consumers as well as consumers who have backlogs. | true | -|dispatcherMaxReadBatchSize|The maximum number of entries to read from BookKeeper. 
By default, it is 100 entries.|100| -|dispatcherMaxReadSizeBytes|The maximum size in bytes of entries to read from BookKeeper. By default, it is 5MB.|5242880| -|dispatcherMinReadBatchSize|The minimum number of entries to read from BookKeeper. By default, it is 1 entry. When there is an error occurred on reading entries from bookkeeper, the broker will backoff the batch size to this minimum number.|1| -|dispatcherMaxRoundRobinBatchSize|The maximum number of entries to dispatch for a shared subscription. By default, it is 20 entries.|20| -| preciseDispatcherFlowControl | Precise dispathcer flow control according to history message number of each entry. | false | -| streamingDispatch | Whether to use streaming read dispatcher. It can be useful when there's a huge backlog to drain and instead of read with micro batch we can streamline the read from bookkeeper to make the most of consumer capacity till we hit bookkeeper read limit or consumer process limit, then we can use consumer flow control to tune the speed. This feature is currently in preview and can be changed in subsequent release. | false | -| maxConcurrentLookupRequest | Maximum number of concurrent lookup request that the broker allows to throttle heavy incoming lookup traffic. | 50000 | -| maxConcurrentTopicLoadRequest | Maximum number of concurrent topic loading request that the broker allows to control the number of zk-operations. | 5000 | -| maxConcurrentNonPersistentMessagePerConnection | Maximum number of concurrent non-persistent message that can be processed per connection. | 1000 | -| numWorkerThreadsForNonPersistentTopic | Number of worker threads to serve non-persistent topic. | 8 | -| enablePersistentTopics | Enable broker to load persistent topics. | true | -| enableNonPersistentTopics | Enable broker to load non-persistent topics. | true | -| maxSubscriptionsPerTopic | Maximum number of subscriptions allowed to subscribe to a topic. Once this limit reaches, the broker rejects new subscriptions until the number of subscriptions decreases. When the value is set to 0, the limit check is disabled. | 0 | -| maxProducersPerTopic | Maximum number of producers allowed to connect to a topic. Once this limit reaches, the broker rejects new producers until the number of connected producers decreases. When the value is set to 0, the limit check is disabled. | 0 | -| maxConsumersPerTopic | Maximum number of consumers allowed to connect to a topic. Once this limit reaches, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 | -| maxConsumersPerSubscription | Maximum number of consumers allowed to connect to a subscription. Once this limit reaches, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 | -| maxNumPartitionsPerPartitionedTopic | Maximum number of partitions per partitioned topic. When the value is set to a negative number or is set to 0, the check is disabled. | 0 | -| tlsCertRefreshCheckDurationSec | TLS certificate refresh duration in seconds. When the value is set to 0, check the TLS certificate on every new connection. | 300 | -| tlsCertificateFilePath | Path for the TLS certificate file. | | -| tlsKeyFilePath | Path for the TLS private key file. | | -| tlsTrustCertsFilePath | Path for the trusted TLS certificate file.| | -| tlsAllowInsecureConnection | Accept untrusted TLS certificate from the client. 
If it is set to true, a client with a certificate that cannot be verified with the 'tlsTrustCertsFilePath' certificate is allowed to connect to the server, though the certificate is not used for client authentication. | false |
| tlsProtocols | Specify the TLS protocols the broker uses to negotiate during the TLS handshake. | |
| tlsCiphers | Specify the TLS ciphers the broker uses to negotiate during the TLS handshake. | |
| tlsRequireTrustedClientCertOnConnect | Trusted client certificates are required to connect with TLS. The connection is rejected if the client certificate is not trusted. In effect, this requires that all connecting clients perform TLS client authentication. | false |
| tlsEnabledWithKeyStore | Enable TLS with KeyStore type configuration in the broker. | false |
| tlsProvider | TLS provider for KeyStore type. | |
| tlsKeyStoreType | TLS KeyStore type configuration in the broker. Available values: JKS, PKCS12. |JKS|
| tlsKeyStore | TLS KeyStore path in the broker. | |
| tlsKeyStorePassword | TLS KeyStore password for the broker. | |
| tlsTrustStoreType | TLS TrustStore type configuration in the broker. Available values: JKS, PKCS12. |JKS|
| tlsTrustStore | TLS TrustStore path in the broker. | |
| tlsTrustStorePassword | TLS TrustStore password for the broker. | |
| brokerClientTlsEnabledWithKeyStore | Configure whether the internal client uses the KeyStore type to authenticate with Pulsar brokers. | false |
| brokerClientSslProvider | The TLS provider used by the internal client to authenticate with other Pulsar brokers. | |
| brokerClientTlsTrustStoreType | TLS TrustStore type configuration for the internal client to authenticate with Pulsar brokers. Available values: JKS, PKCS12. | JKS |
| brokerClientTlsTrustStore | TLS TrustStore path for the internal client to authenticate with Pulsar brokers. | |
| brokerClientTlsTrustStorePassword | TLS TrustStore password for the internal client to authenticate with Pulsar brokers. | |
| brokerClientTlsCiphers | Specify the TLS ciphers that the internal client uses to negotiate during the TLS handshake. | |
| brokerClientTlsProtocols | Specify the TLS protocols that the broker uses to negotiate during the TLS handshake. | |
| systemTopicEnabled | Enable/Disable system topics. | false |
| topicLevelPoliciesEnabled | Enable or disable topic-level policies. Topic-level policies depend on the system topic, so enable the system topic first. | false |
| topicFencingTimeoutSeconds | If a topic remains fenced for a certain time period (in seconds), it is closed forcefully. If set to 0 or a negative number, the fenced topic is not closed. | 0 |
| proxyRoles | Role names that are treated as "proxy roles". If the broker sees a request with one of the proxy roles, it demands a valid original principal. | |
|authenticationEnabled| Enable authentication for the broker. |false|
|authenticationProviders| A comma-separated list of class names for authentication providers. |false|
|authorizationEnabled| Enforce authorization in brokers. |false|
| authorizationProvider | Authorization provider fully qualified class-name. | org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider |
| authorizationAllowWildcardsMatching | Allow wildcard matching in authorization. Wildcard matching is applicable only when the wildcard character (*) appears at the **first** or **last** position. | false |
|superUserRoles| Role names that are treated as "superusers." Superusers are authorized to perform all admin tasks. | |
|brokerClientAuthenticationPlugin| The authentication settings of the broker itself. Used when the broker connects to other brokers either in the same cluster or from other clusters. | |
|brokerClientAuthenticationParameters| The parameters that go along with the plugin specified using brokerClientAuthenticationPlugin. | |
|athenzDomainNames| Supported Athenz authentication provider domain names as a comma-separated list. | |
| anonymousUserRole | When this parameter is not empty, unauthenticated users perform as anonymousUserRole. | |
|tokenSettingPrefix| Configure the prefix of token-related settings like `tokenSecretKey`, `tokenPublicKey`, `tokenAuthClaim`, `tokenPublicAlg`, `tokenAudienceClaim`, and `tokenAudience`. ||
|tokenSecretKey| Configure the secret key to be used to validate auth tokens. The key can be specified like: `tokenSecretKey=data:;base64,xxxxxxxxx` or `tokenSecretKey=file:///my/secret.key`. Note: the key file must be DER-encoded.||
|tokenPublicKey| Configure the public key to be used to validate auth tokens. The key can be specified like: `tokenPublicKey=data:;base64,xxxxxxxxx` or `tokenPublicKey=file:///my/secret.key`. Note: the key file must be DER-encoded.||
|tokenAuthClaim| Specify the token claim that will be used as the authentication "principal" or "role". The "subject" field will be used if this is left blank. ||
|tokenAudienceClaim| The token audience "claim" name, e.g. "aud". It is used to get the audience from the token. If it is not set, the audience is not verified. ||
| tokenAudience | The token audience that stands for this broker. The field `tokenAudienceClaim` of a valid token needs to contain this parameter. | |
|saslJaasClientAllowedIds|This is a regexp that limits the range of possible IDs that can connect to the broker using SASL. By default, it is set to `SaslConstants.JAAS_CLIENT_ALLOWED_IDS_DEFAULT`, which is ".*pulsar.*", so only clients whose ID contains 'pulsar' are allowed to connect.|N/A|
|saslJaasBrokerSectionName|Service principal, for login context name. By default, it is set to `SaslConstants.JAAS_DEFAULT_BROKER_SECTION_NAME`, which is "Broker".|N/A|
|httpMaxRequestSize|If the value is larger than 0, the broker rejects all HTTP requests with bodies larger than the configured limit.|-1|
|exposePreciseBacklogInPrometheus| Enable exposing the precise backlog stats. Set to false to use the published counter and consumed counter to calculate, which is more efficient but may be inaccurate. |false|
|bookkeeperMetadataServiceUri|Metadata service URI that BookKeeper uses for loading the corresponding metadata driver and resolving its metadata service location. This value can be fetched using the `bookkeeper shell whatisinstanceid` command in the BookKeeper cluster. For example: `zk+hierarchical://localhost:2181/ledgers`. The metadata service URI list can also be semicolon-separated values like: `zk+hierarchical://zk1:2181;zk2:2181;zk3:2181/ledgers`.|N/A|
|bookkeeperClientAuthenticationPlugin| Authentication plugin to be used when connecting to bookies (BookKeeper servers). ||
|bookkeeperClientAuthenticationParametersName| BookKeeper authentication plugin implementation parameters and values. ||
|bookkeeperClientAuthenticationParameters| Parameters associated with the bookkeeperClientAuthenticationParametersName ||
|bookkeeperClientNumWorkerThreads| Number of BookKeeper client worker threads. Default is Runtime.getRuntime().availableProcessors() ||
|bookkeeperClientTimeoutInSeconds| Timeout for BookKeeper add and read operations. |30|
|bookkeeperClientSpeculativeReadTimeoutInMillis| Speculative reads are initiated if a read request doesn’t complete within a certain time. A value of 0 disables speculative reads. |0|
|bookkeeperUseV2WireProtocol|Use the older BookKeeper wire protocol with bookies.|true|
|bookkeeperClientHealthCheckEnabled| Enable bookie health checks. |true|
|bookkeeperClientHealthCheckIntervalSeconds| The time interval, in seconds, at which health checks are performed. New ledgers are not created during health checks. |60|
|bookkeeperClientHealthCheckErrorThresholdPerInterval| Error threshold for health checks. |5|
|bookkeeperClientHealthCheckQuarantineTimeInSeconds| How long, in seconds, bookies are quarantined if they have more than the allowed number of failures within the interval specified by bookkeeperClientHealthCheckIntervalSeconds. |1800|
|bookkeeperClientGetBookieInfoIntervalSeconds|Specify options for the GetBookieInfo check. This setting helps ensure that the list of bookies on the brokers is up to date.|86400|
|bookkeeperClientGetBookieInfoRetryIntervalSeconds|Specify options for the GetBookieInfo check.
This setting helps ensure the list of bookies that are up to date on the brokers.|60| -|bookkeeperClientRackawarePolicyEnabled| |true| -|bookkeeperClientRegionawarePolicyEnabled| |false| -|bookkeeperClientMinNumRacksPerWriteQuorum| |2| -|bookkeeperClientMinNumRacksPerWriteQuorum| |false| -|bookkeeperClientReorderReadSequenceEnabled| |false| -|bookkeeperClientIsolationGroups||| -|bookkeeperClientSecondaryIsolationGroups| Enable bookie secondary-isolation group if bookkeeperClientIsolationGroups doesn't have enough bookie available. || -|bookkeeperClientMinAvailableBookiesInIsolationGroups| Minimum bookies that should be available as part of bookkeeperClientIsolationGroups else broker will include bookkeeperClientSecondaryIsolationGroups bookies in isolated list. || -| bookkeeperTLSProviderFactoryClass | Set the client security provider factory class name. | org.apache.bookkeeper.tls.TLSContextFactory | -| bookkeeperTLSClientAuthentication | Enable TLS authentication with bookie. | false | -| bookkeeperTLSKeyFileType | Supported type: PEM, JKS, PKCS12. | PEM | -| bookkeeperTLSTrustCertTypes | Supported type: PEM, JKS, PKCS12. | PEM | -| bookkeeperTLSKeyStorePasswordPath | Path to file containing keystore password, if the client keystore is password protected. | | -| bookkeeperTLSTrustStorePasswordPath | Path to file containing truststore password, if the client truststore is password protected. | | -| bookkeeperTLSKeyFilePath | Path for the TLS private key file. | | -| bookkeeperTLSCertificateFilePath | Path for the TLS certificate file. | | -| bookkeeperTLSTrustCertsFilePath | Path for the trusted TLS certificate file. | | -| bookkeeperDiskWeightBasedPlacementEnabled | Enable/Disable disk weight based placement. | false | -| bookkeeperExplicitLacIntervalInMills | Set the interval to check the need for sending an explicit LAC. When the value is set to 0, no explicit LAC is sent. | 0 | -| bookkeeperClientExposeStatsToPrometheus | Expose BookKeeper client managed ledger stats to Prometheus. | false | -|managedLedgerDefaultEnsembleSize| |1| -|managedLedgerDefaultWriteQuorum| |1| -|managedLedgerDefaultAckQuorum| |1| -| managedLedgerDigestType | Default type of checksum to use when writing to BookKeeper. | CRC32C | -| managedLedgerNumSchedulerThreads | Number of threads to be used for managed ledger scheduled tasks. | 8 | -|managedLedgerCacheSizeMB| |N/A| -|managedLedgerCacheCopyEntries| Whether to copy the entry payloads when inserting in cache.| false| -|managedLedgerCacheEvictionWatermark| |0.9| -|managedLedgerCacheEvictionFrequency| Configure the cache eviction frequency for the managed ledger cache (evictions/sec) | 100.0 | -|managedLedgerCacheEvictionTimeThresholdMillis| All entries that have stayed in cache for more than the configured time, will be evicted | 1000 | -|managedLedgerCursorBackloggedThreshold| Configure the threshold (in number of entries) from where a cursor should be considered 'backlogged' and thus should be set as inactive. | 1000| -|managedLedgerUnackedRangesOpenCacheSetEnabled| Use Open Range-Set to cache unacknowledged messages |true| -|managedLedgerDefaultMarkDeleteRateLimit| |0.1| -|managedLedgerMaxEntriesPerLedger| |50000| -|managedLedgerMinLedgerRolloverTimeMinutes| |10| -|managedLedgerMaxLedgerRolloverTimeMinutes| |240| -|managedLedgerCursorMaxEntriesPerLedger| |50000| -|managedLedgerCursorRolloverTimeInSeconds| |14400| -| managedLedgerMaxSizePerLedgerMbytes | Maximum ledger size before triggering a rollover for a topic. 
| 2048 | -| managedLedgerMaxUnackedRangesToPersist | Maximum number of "acknowledgment holes" that are going to be persistently stored. When acknowledging out of order, a consumer leaves holes that are supposed to be quickly filled by acknowledging all the messages. The information of which messages are acknowledged is persisted by compressing in "ranges" of messages that were acknowledged. After the max number of ranges is reached, the information is only tracked in memory and messages are redelivered in case of crashes. | 10000 | -| managedLedgerMaxUnackedRangesToPersistInZooKeeper | Maximum number of "acknowledgment holes" that can be stored in Zookeeper. If the number of unacknowledged message range is higher than this limit, the broker persists unacknowledged ranges into bookkeeper to avoid additional data overhead into Zookeeper. | 1000 | -|autoSkipNonRecoverableData| |false| -| managedLedgerMetadataOperationsTimeoutSeconds | Operation timeout while updating managed-ledger metadata. | 60 | -| managedLedgerReadEntryTimeoutSeconds | Read entries timeout when the broker tries to read messages from BookKeeper. | 0 | -| managedLedgerAddEntryTimeoutSeconds | Add entry timeout when the broker tries to publish message to BookKeeper. | 0 | -| managedLedgerNewEntriesCheckDelayInMillis | New entries check delay for the cursor under the managed ledger. If no new messages in the topic, the cursor tries to check again after the delay time. For consumption latency sensitive scenario, you can set the value to a smaller value or 0. Of course, a smaller value may degrade consumption throughput.|10 ms| -| managedLedgerPrometheusStatsLatencyRolloverSeconds | Managed ledger prometheus stats latency rollover seconds. | 60 | -| managedLedgerTraceTaskExecution | Whether to trace managed ledger task execution time. | true | -|managedLedgerNewEntriesCheckDelayInMillis|New entries check delay for the cursor under the managed ledger. If no new messages in the topic, the cursor will try to check again after the delay time. For consumption latency sensitive scenario, it can be set to a smaller value or 0. A smaller value degrades consumption throughput. By default, it is 10ms.|10| -|loadBalancerEnabled| |false| -|loadBalancerPlacementStrategy| |weightedRandomSelection| -|loadBalancerReportUpdateThresholdPercentage| |10| -|loadBalancerReportUpdateMaxIntervalMinutes| |15| -|loadBalancerHostUsageCheckIntervalMinutes| |1| -|loadBalancerSheddingIntervalMinutes| |30| -|loadBalancerSheddingGracePeriodMinutes| |30| -|loadBalancerBrokerMaxTopics| |50000| -|loadBalancerBrokerUnderloadedThresholdPercentage| |1| -|loadBalancerBrokerOverloadedThresholdPercentage| |85| -|loadBalancerResourceQuotaUpdateIntervalMinutes| |15| -|loadBalancerBrokerComfortLoadLevelPercentage| |65| -|loadBalancerAutoBundleSplitEnabled| |false| -| loadBalancerAutoUnloadSplitBundlesEnabled | Enable/Disable automatic unloading of split bundles. | true | -|loadBalancerNamespaceBundleMaxTopics| |1000| -|loadBalancerNamespaceBundleMaxSessions| |1000| -|loadBalancerNamespaceBundleMaxMsgRate| |1000| -|loadBalancerNamespaceBundleMaxBandwidthMbytes| |100| -|loadBalancerNamespaceMaximumBundles| |128| -| loadBalancerBrokerThresholdShedderPercentage | The broker resource usage threshold. When the broker resource usage is greater than the pulsar cluster average resource usage, the threshold shedder is triggered to offload bundles from the broker. It only takes effect in the ThresholdShedder strategy. 
| 10 | -| loadBalancerHistoryResourcePercentage | The history usage when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 0.9 | -| loadBalancerBandwithInResourceWeight | The BandWithIn usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 | -| loadBalancerBandwithOutResourceWeight | The BandWithOut usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 | -| loadBalancerCPUResourceWeight | The CPU usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 | -| loadBalancerMemoryResourceWeight | The heap memory usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 | -| loadBalancerDirectMemoryResourceWeight | The direct memory usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 | -| loadBalancerBundleUnloadMinThroughputThreshold | Bundle unload minimum throughput threshold. Avoid bundle unload frequently. It only takes effect in the ThresholdShedder strategy. | 10 | -|replicationMetricsEnabled| |true| -|replicationConnectionsPerBroker| |16| -|replicationProducerQueueSize| |1000| -| replicationPolicyCheckDurationSeconds | Duration to check replication policy to avoid replicator inconsistency due to missing ZooKeeper watch. When the value is set to 0, disable checking replication policy. | 600 | -|defaultRetentionTimeInMinutes| |0| -|defaultRetentionSizeInMB| |0| -|keepAliveIntervalSeconds| |30| -|haProxyProtocolEnabled | Enable or disable the [HAProxy](http://www.haproxy.org/) protocol. |false| -|bookieId | If you want to custom a bookie ID or use a dynamic network address for a bookie, you can set the `bookieId`.

    Bookie advertises itself using the `bookieId` rather than the `BookieSocketAddress` (`hostname:port` or `IP:port`).

The `bookieId` is a non-empty string that can contain ASCII digits and letters ([a-zA-Z0-9]), colons, dashes, and dots.

    For more information about `bookieId`, see [here](http://bookkeeper.apache.org/bps/BP-41-bookieid/).|/| -| maxTopicsPerNamespace | The maximum number of persistent topics that can be created in the namespace. When the number of topics reaches this threshold, the broker rejects the request of creating a new topic, including the auto-created topics by the producer or consumer, until the number of connected consumers decreases. The default value 0 disables the check. | 0 | - -## WebSocket - -|Name|Description|Default| -|---|---|---| -|configurationStoreServers ||| -|zooKeeperSessionTimeoutMillis| |30000| -|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300| -|serviceUrl||| -|serviceUrlTls||| -|brokerServiceUrl||| -|brokerServiceUrlTls||| -|webServicePort||8080| -|webServicePortTls||8443| -|bindAddress||0.0.0.0| -|clusterName ||| -|authenticationEnabled||false| -|authenticationProviders||| -|authorizationEnabled||false| -|superUserRoles ||| -|brokerClientAuthenticationPlugin||| -|brokerClientAuthenticationParameters||| -|tlsEnabled||false| -|tlsAllowInsecureConnection||false| -|tlsCertificateFilePath||| -|tlsKeyFilePath ||| -|tlsTrustCertsFilePath||| - -## Pulsar proxy - -The [Pulsar proxy](concepts-architecture-overview.md#pulsar-proxy) can be configured in the `conf/proxy.conf` file. - - -|Name|Description|Default| -|---|---|---| -|forwardAuthorizationCredentials| Forward client authorization credentials to Broker for re-authorization, and make sure authentication is enabled for this to take effect. |false| -|zookeeperServers| The ZooKeeper quorum connection string (as a comma-separated list) || -|configurationStoreServers| Configuration store connection string (as a comma-separated list) || -| brokerServiceURL | The service URL pointing to the broker cluster. Must begin with `pulsar://`. | | -| brokerServiceURLTLS | The TLS service URL pointing to the broker cluster. Must begin with `pulsar+ssl://`. | | -| brokerWebServiceURL | The Web service URL pointing to the broker cluster | | -| brokerWebServiceURLTLS | The TLS Web service URL pointing to the broker cluster | | -| functionWorkerWebServiceURL | The Web service URL pointing to the function worker cluster. It is only configured when you setup function workers in a separate cluster. | | -| functionWorkerWebServiceURLTLS | The TLS Web service URL pointing to the function worker cluster. It is only configured when you setup function workers in a separate cluster. | | -|zookeeperSessionTimeoutMs| ZooKeeper session timeout (in milliseconds) |30000| -|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300| -|advertisedAddress|Hostname or IP address the service advertises to the outside world. If not set, the value of `InetAddress.getLocalHost().getHostname()` is used.|N/A| -|servicePort| The port to use for server binary Protobuf requests |6650| -|servicePortTls| The port to use to server binary Protobuf TLS requests |6651| -|statusFilePath| Path for the file used to determine the rotation status for the proxy instance when responding to service discovery health checks || -| proxyLogLevel | Proxy log level
Available values:
  0: Do not log any TCP channel information.
  1: Parse and log any TCP channel information and command information without message body.
  2: Parse and log channel information, command information, and message body. | 0 |
|authenticationEnabled| Whether authentication is enabled for the Pulsar proxy |false|
|authenticateMetricsEndpoint| Whether the '/metrics' endpoint requires authentication. Defaults to true. 'authenticationEnabled' must also be set for this to take effect. |true|
|authenticationProviders| Authentication provider name list (a comma-separated list of class names) ||
|authorizationEnabled| Whether authorization is enforced by the Pulsar proxy |false|
|authorizationProvider| Authorization provider as a fully qualified class name |org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider|
| anonymousUserRole | When this parameter is not empty, unauthenticated users perform as anonymousUserRole. | |
|brokerClientAuthenticationPlugin| The authentication plugin used by the Pulsar proxy to authenticate with Pulsar brokers ||
|brokerClientAuthenticationParameters| The authentication parameters used by the Pulsar proxy to authenticate with Pulsar brokers ||
|brokerClientTrustCertsFilePath| The path to trusted certificates used by the Pulsar proxy to authenticate with Pulsar brokers ||
|superUserRoles| Role names that are treated as "super-users," meaning that they are able to perform all admin operations ||
|maxConcurrentInboundConnections| Max concurrent inbound connections. The proxy rejects requests beyond that. |10000|
|maxConcurrentLookupRequests| Max concurrent outbound connections. The proxy errors out requests beyond that. |50000|
|tlsEnabledInProxy| Deprecated - use `servicePortTls` and `webServicePortTls` instead. |false|
|tlsEnabledWithBroker| Whether TLS is enabled when communicating with Pulsar brokers. |false|
| tlsCertRefreshCheckDurationSec | TLS certificate refresh duration in seconds. If the value is set to 0, the TLS certificate is checked on every new connection. | 300 |
|tlsCertificateFilePath| Path for the TLS certificate file ||
|tlsKeyFilePath| Path for the TLS private key file ||
|tlsTrustCertsFilePath| Path for the trusted TLS certificate pem file ||
|tlsHostnameVerificationEnabled| Whether the hostname is validated when the proxy creates a TLS connection with brokers |false|
|tlsRequireTrustedClientCertOnConnect| Whether client certificates are required for TLS. Connections are rejected if the client certificate isn't trusted. |false|
|tlsProtocols|Specify the TLS protocols the proxy uses to negotiate during the TLS handshake. Multiple values can be specified, separated by commas. Example: `TLSv1.3`, `TLSv1.2` ||
|tlsCiphers|Specify the TLS ciphers the proxy uses to negotiate during the TLS handshake. Multiple values can be specified, separated by commas. Example: `TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256`||
| httpReverseProxyConfigs | HTTP paths to redirect to non-Pulsar services | |
| httpOutputBufferSize | HTTP output buffer size. The amount of data that is buffered for HTTP requests before it is flushed to the channel. A larger buffer size may result in higher HTTP throughput, though it may take longer for the client to see data. If using HTTP streaming via the reverse proxy, this should be set to the minimum value (1) so that clients see the data as soon as possible. | 32768 |
| httpNumThreads | Number of threads to use for HTTP requests processing| 2 * Runtime.getRuntime().availableProcessors() |
|tokenSettingPrefix| Configure the prefix of token-related settings like `tokenSecretKey`, `tokenPublicKey`, `tokenAuthClaim`, `tokenPublicAlg`, `tokenAudienceClaim`, and `tokenAudience`.
|| -|tokenSecretKey| Configure the secret key to be used to validate auth tokens. The key can be specified like: `tokenSecretKey=data:;base64,xxxxxxxxx` or `tokenSecretKey=file:///my/secret.key`. Note: key file must be DER-encoded.|| -|tokenPublicKey| Configure the public key to be used to validate auth tokens. The key can be specified like: `tokenPublicKey=data:;base64,xxxxxxxxx` or `tokenPublicKey=file:///my/secret.key`. Note: key file must be DER-encoded.|| -|tokenAuthClaim| Specify the token claim that will be used as the authentication "principal" or "role". The "subject" field will be used if this is left blank || -|tokenAudienceClaim| The token audience "claim" name, e.g. "aud". It is used to get the audience from token. If it is not set, the audience is not verified. || -| tokenAudience | The token audience stands for this broker. The field `tokenAudienceClaim` of a valid token need contains this parameter.| | -|haProxyProtocolEnabled | Enable or disable the [HAProxy](http://www.haproxy.org/) protocol. |false| - -## ZooKeeper - -ZooKeeper handles a broad range of essential configuration- and coordination-related tasks for Pulsar. The default configuration file for ZooKeeper is in the `conf/zookeeper.conf` file in your Pulsar installation. The following parameters are available: - - -|Name|Description|Default| -|---|---|---| -|tickTime| The tick is the basic unit of time in ZooKeeper, measured in milliseconds and used to regulate things like heartbeats and timeouts. tickTime is the length of a single tick. |2000| -|initLimit| The maximum time, in ticks, that the leader ZooKeeper server allows follower ZooKeeper servers to successfully connect and sync. The tick time is set in milliseconds using the tickTime parameter. |10| -|syncLimit| The maximum time, in ticks, that a follower ZooKeeper server is allowed to sync with other ZooKeeper servers. The tick time is set in milliseconds using the tickTime parameter. |5| -|dataDir| The location where ZooKeeper will store in-memory database snapshots as well as the transaction log of updates to the database. |data/zookeeper| -|clientPort| The port on which the ZooKeeper server will listen for connections. |2181| -|admin.enableServer|The port at which the admin listens.|true| -|admin.serverPort|The port at which the admin listens.|9990| -|autopurge.snapRetainCount| In ZooKeeper, auto purge determines how many recent snapshots of the database stored in dataDir to retain within the time interval specified by autopurge.purgeInterval (while deleting the rest). |3| -|autopurge.purgeInterval| The time interval, in hours, by which the ZooKeeper database purge task is triggered. Setting to a non-zero number will enable auto purge; setting to 0 will disable. Read this guide before enabling auto purge. |1| -|forceSync|Requires updates to be synced to media of the transaction log before finishing processing the update. If this option is set to 'no', ZooKeeper will not require updates to be synced to the media. WARNING: it's not recommended to run a production ZK cluster with `forceSync` disabled.|yes| -|maxClientCnxns| The maximum number of client connections. Increase this if you need to handle more ZooKeeper clients. |60| - - - - -In addition to the parameters in the table above, configuring ZooKeeper for Pulsar involves adding -a `server.N` line to the `conf/zookeeper.conf` file for each node in the ZooKeeper cluster, where `N` is the number of the ZooKeeper node. 
Here's an example for a three-node ZooKeeper cluster:
-
-```properties
-
-server.1=zk1.us-west.example.com:2888:3888
-server.2=zk2.us-west.example.com:2888:3888
-server.3=zk3.us-west.example.com:2888:3888
-
-```
-
-> We strongly recommend consulting the [ZooKeeper Administrator's Guide](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html) for a more thorough and comprehensive introduction to ZooKeeper configuration.
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/reference-connector-admin.md b/site2/website/versioned_docs/version-2.8.2-deprecated/reference-connector-admin.md
deleted file mode 100644
index 7b73ae80750cd4..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/reference-connector-admin.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-id: reference-connector-admin
-title: Connector Admin CLI
-sidebar_label: "Connector Admin CLI"
-original_id: reference-connector-admin
----
-
-> **Important**
->
-> For the latest and complete information about `pulsar-admin`, including commands, flags, descriptions, and more, see the [pulsar-admin doc](https://pulsar.apache.org/tools/pulsar-admin/).
->
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/reference-metrics.md b/site2/website/versioned_docs/version-2.8.2-deprecated/reference-metrics.md
deleted file mode 100644
index 61c734f96a23d6..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/reference-metrics.md
+++ /dev/null
@@ -1,564 +0,0 @@
----
-id: reference-metrics
-title: Pulsar Metrics
-sidebar_label: "Pulsar Metrics"
-original_id: reference-metrics
----
-
-
-
-Pulsar exposes the following metrics in Prometheus format. You can monitor your clusters with these metrics.
-
-* [ZooKeeper](#zookeeper)
-* [BookKeeper](#bookkeeper)
-* [Broker](#broker)
-* [Pulsar Functions](#pulsar-functions)
-* [Proxy](#proxy)
-* [Pulsar SQL Worker](#pulsar-sql-worker)
-* [Pulsar transaction](#pulsar-transaction)
-
-The following types of metrics are available:
-
-- [Counter](https://prometheus.io/docs/concepts/metric_types/#counter): a cumulative metric that represents a single monotonically increasing counter. The value can only increase, or be reset to zero when the process restarts.
-- [Gauge](https://prometheus.io/docs/concepts/metric_types/#gauge): a metric that represents a single numerical value that can arbitrarily go up and down.
-- [Histogram](https://prometheus.io/docs/concepts/metric_types/#histogram): a histogram samples observations (usually things like request durations or response sizes) and counts them in configurable buckets. The `_bucket` suffix is the number of observations within a histogram bucket, configured with the parameter `{le=""}`. The `_count` suffix is the number of observations, shown as a time series that behaves like a counter. The `_sum` suffix is the sum of observed values, also shown as a time series that behaves like a counter. These suffixes are together denoted by `_*` in this doc.
-- [Summary](https://prometheus.io/docs/concepts/metric_types/#summary): similar to a histogram, a summary samples observations (usually things like request durations and response sizes). While it also provides a total count of observations and a sum of all observed values, it calculates configurable quantiles over a sliding time window.
-
-## ZooKeeper
-
-The ZooKeeper metrics are exposed under "/metrics" at port `8000`. You can use a different port by configuring `metricsProvider.httpPort` in `conf/zookeeper.conf`.
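As a quick sanity check, the endpoint can be scraped directly. A minimal sketch, assuming a local ZooKeeper node with `metricsProvider.httpPort` left at the default `8000` (host and port are assumptions, not part of the reference above):

```bash
# Fetch the ZooKeeper Prometheus endpoint and show a few sample lines.
curl -s http://localhost:8000/metrics | head -n 20
```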
-
-ZooKeeper has provided a new metrics system since version 3.6.0. For more detailed metrics, refer to the [ZooKeeper Monitor Guide](https://zookeeper.apache.org/doc/r3.7.0/zookeeperMonitor.html).
-
-## BookKeeper
-
-The BookKeeper metrics are exposed under "/metrics" at port `8000`. You can change the port by updating `prometheusStatsHttpPort`
-in the `bookkeeper.conf` configuration file.
-
-### Server metrics
-
-| Name | Type | Description |
-|---|---|---|
-| bookie_SERVER_STATUS | Gauge | The server status of the bookie server.<br>
    • 1: the bookie is running in writable mode.
    • 0: the bookie is running in readonly mode.
    | -| bookkeeper_server_ADD_ENTRY_count | Counter | The total number of ADD_ENTRY requests received at the bookie. The `success` label is used to distinguish successes and failures. | -| bookkeeper_server_READ_ENTRY_count | Counter | The total number of READ_ENTRY requests received at the bookie. The `success` label is used to distinguish successes and failures. | -| bookie_WRITE_BYTES | Counter | The total number of bytes written to the bookie. | -| bookie_READ_BYTES | Counter | The total number of bytes read from the bookie. | -| bookkeeper_server_ADD_ENTRY_REQUEST | Summary | The summary of request latency of ADD_ENTRY requests at the bookie. The `success` label is used to distinguish successes and failures. | -| bookkeeper_server_READ_ENTRY_REQUEST | Summary | The summary of request latency of READ_ENTRY requests at the bookie. The `success` label is used to distinguish successes and failures. | - -### Journal metrics - -| Name | Type | Description | -|---|---|---| -| bookie_journal_JOURNAL_SYNC_count | Counter | The total number of journal fsync operations happening at the bookie. The `success` label is used to distinguish successes and failures. | -| bookie_journal_JOURNAL_QUEUE_SIZE | Gauge | The total number of requests pending in the journal queue. | -| bookie_journal_JOURNAL_FORCE_WRITE_QUEUE_SIZE | Gauge | The total number of force write (fsync) requests pending in the force-write queue. | -| bookie_journal_JOURNAL_CB_QUEUE_SIZE | Gauge | The total number of callbacks pending in the callback queue. | -| bookie_journal_JOURNAL_ADD_ENTRY | Summary | The summary of request latency of adding entries to the journal. | -| bookie_journal_JOURNAL_SYNC | Summary | The summary of fsync latency of syncing data to the journal disk. | - -### Storage metrics - -| Name | Type | Description | -|---|---|---| -| bookie_ledgers_count | Gauge | The total number of ledgers stored in the bookie. | -| bookie_entries_count | Gauge | The total number of entries stored in the bookie. | -| bookie_write_cache_size | Gauge | The bookie write cache size (in bytes). | -| bookie_read_cache_size | Gauge | The bookie read cache size (in bytes). | -| bookie_DELETED_LEDGER_COUNT | Counter | The total number of ledgers deleted since the bookie has started. | -| bookie_ledger_writable_dirs | Gauge | The number of writable directories in the bookie. | - -## Broker - -The broker metrics are exposed under "/metrics" at port `8080`. You can change the port by updating `webServicePort` to a different port -in the `broker.conf` configuration file. - -All the metrics exposed by a broker are labelled with `cluster=${pulsar_cluster}`. The name of Pulsar cluster is the value of `${pulsar_cluster}`, which you have configured in the `broker.conf` file. 
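As with ZooKeeper, the broker endpoint can be inspected directly. A minimal sketch, assuming a local broker with `webServicePort` left at the default `8080` (the address is an assumption; substitute your broker's web service address):

```bash
# List the Pulsar-specific metric names currently exposed by the broker.
curl -s http://localhost:8080/metrics | grep -o '^pulsar_[a-z_]*' | sort -u
```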
-
-The following metrics are available for the broker:
-
-- [ZooKeeper](#zookeeper)
-  - [Server metrics](#server-metrics)
-  - [Request metrics](#request-metrics)
-- [BookKeeper](#bookkeeper)
-  - [Server metrics](#server-metrics-1)
-  - [Journal metrics](#journal-metrics)
-  - [Storage metrics](#storage-metrics)
-- [Broker](#broker)
-  - [Namespace metrics](#namespace-metrics)
-  - [Replication metrics](#replication-metrics)
-  - [Topic metrics](#topic-metrics)
-  - [Replication metrics](#replication-metrics-1)
-  - [ManagedLedgerCache metrics](#managedledgercache-metrics)
-  - [ManagedLedger metrics](#managedledger-metrics)
-  - [LoadBalancing metrics](#loadbalancing-metrics)
-  - [BundleUnloading metrics](#bundleunloading-metrics)
-  - [BundleSplit metrics](#bundlesplit-metrics)
-  - [Subscription metrics](#subscription-metrics)
-  - [Consumer metrics](#consumer-metrics)
-  - [Managed ledger bookie client metrics](#managed-ledger-bookie-client-metrics)
-  - [Token metrics](#token-metrics)
-  - [Authentication metrics](#authentication-metrics)
-  - [Connection metrics](#connection-metrics)
-  - [Jetty metrics](#jetty-metrics)
-- [Pulsar Functions](#pulsar-functions)
-- [Proxy](#proxy)
-- [Pulsar SQL Worker](#pulsar-sql-worker)
-- [Pulsar transaction](#pulsar-transaction)
-
-### Namespace metrics
-
-> Namespace metrics are only exposed when `exposeTopicLevelMetricsInPrometheus` is set to `false`.
-
-All the namespace metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you configured in `broker.conf`.
-- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_topics_count | Gauge | The number of Pulsar topics of the namespace owned by this broker. |
-| pulsar_subscriptions_count | Gauge | The number of Pulsar subscriptions of the namespace served by this broker. |
-| pulsar_producers_count | Gauge | The number of active producers of the namespace connected to this broker. |
-| pulsar_consumers_count | Gauge | The number of active consumers of the namespace connected to this broker. |
-| pulsar_rate_in | Gauge | The total message rate of the namespace coming into this broker (messages/second). |
-| pulsar_rate_out | Gauge | The total message rate of the namespace going out from this broker (messages/second). |
-| pulsar_throughput_in | Gauge | The total throughput of the namespace coming into this broker (bytes/second). |
-| pulsar_throughput_out | Gauge | The total throughput of the namespace going out from this broker (bytes/second). |
-| pulsar_storage_size | Gauge | The total storage size of the topics in this namespace owned by this broker (bytes). |
-| pulsar_storage_backlog_size | Gauge | The total backlog size of the topics of this namespace owned by this broker (messages). |
-| pulsar_storage_offloaded_size | Gauge | The total amount of the data in this namespace offloaded to the tiered storage (bytes). |
-| pulsar_storage_write_rate | Gauge | The total message batches (entries) written to the storage for this namespace (message batches/second). |
-| pulsar_storage_read_rate | Gauge | The total message batches (entries) read from the storage for this namespace (message batches/second). |
-| pulsar_subscription_delayed | Gauge | The total number of message batches (entries) delayed for dispatching.
|
-| pulsar_storage_write_latency_le_* | Histogram | The entry rate of a namespace whose storage write latency is smaller than a given threshold.<br>
    Available thresholds:
    • pulsar_storage_write_latency_le_0_5: <= 0.5ms
    • pulsar_storage_write_latency_le_1: <= 1ms
    • pulsar_storage_write_latency_le_5: <= 5ms
    • pulsar_storage_write_latency_le_10: <= 10ms
    • pulsar_storage_write_latency_le_20: <= 20ms
    • pulsar_storage_write_latency_le_50: <= 50ms
    • pulsar_storage_write_latency_le_100: <= 100ms
    • pulsar_storage_write_latency_le_200: <= 200ms
    • pulsar_storage_write_latency_le_1000: <= 1s
    • pulsar_storage_write_latency_le_overflow: > 1s
    |
-| pulsar_entry_size_le_* | Histogram | The entry rate of a namespace whose entry size is smaller than a given threshold.<br>
    Available thresholds:
    • pulsar_entry_size_le_128: <= 128 bytes
    • pulsar_entry_size_le_512: <= 512 bytes
    • pulsar_entry_size_le_1_kb: <= 1 KB
    • pulsar_entry_size_le_2_kb: <= 2 KB
    • pulsar_entry_size_le_4_kb: <= 4 KB
    • pulsar_entry_size_le_16_kb: <= 16 KB
    • pulsar_entry_size_le_100_kb: <= 100 KB
    • pulsar_entry_size_le_1_mb: <= 1 MB
    • pulsar_entry_size_le_overflow: > 1 MB
    |
-
-#### Replication metrics
-
-If a namespace is configured to be replicated among multiple Pulsar clusters, the corresponding replication metrics are also exposed when `replicationMetricsEnabled` is enabled.
-
-All the replication metrics are also labelled with `remoteCluster=${pulsar_remote_cluster}`.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_replication_rate_in | Gauge | The total message rate of the namespace replicating from the remote cluster (messages/second). |
-| pulsar_replication_rate_out | Gauge | The total message rate of the namespace replicating to the remote cluster (messages/second). |
-| pulsar_replication_throughput_in | Gauge | The total throughput of the namespace replicating from the remote cluster (bytes/second). |
-| pulsar_replication_throughput_out | Gauge | The total throughput of the namespace replicating to the remote cluster (bytes/second). |
-| pulsar_replication_backlog | Gauge | The total backlog of the namespace replicating to the remote cluster (messages). |
-| pulsar_replication_rate_expired | Gauge | The total rate of messages expired (messages/second). |
-| pulsar_replication_connected_count | Gauge | The number of replication subscribers that are up and running to replicate to the remote cluster. |
-| pulsar_replication_delay_in_seconds | Gauge | Time in seconds from the time a message was produced to the time when it is about to be replicated. |
-
-### Topic metrics
-
-> Topic metrics are only exposed when `exposeTopicLevelMetricsInPrometheus` is set to `true`.
-
-All the topic metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you configured in `broker.conf`.
-- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.
-- *topic*: `topic=${pulsar_topic}`. `${pulsar_topic}` is the topic name.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_subscriptions_count | Gauge | The number of Pulsar subscriptions of the topic served by this broker. |
-| pulsar_producers_count | Gauge | The number of active producers of the topic connected to this broker. |
-| pulsar_consumers_count | Gauge | The number of active consumers of the topic connected to this broker. |
-| pulsar_rate_in | Gauge | The total message rate of the topic coming into this broker (messages/second). |
-| pulsar_rate_out | Gauge | The total message rate of the topic going out from this broker (messages/second). |
-| pulsar_throughput_in | Gauge | The total throughput of the topic coming into this broker (bytes/second). |
-| pulsar_throughput_out | Gauge | The total throughput of the topic going out from this broker (bytes/second). |
-| pulsar_storage_size | Gauge | The total storage size of this topic owned by this broker (bytes). |
-| pulsar_storage_backlog_size | Gauge | The total backlog size of this topic owned by this broker (messages). |
-| pulsar_storage_offloaded_size | Gauge | The total amount of the data in this topic offloaded to the tiered storage (bytes). |
-| pulsar_storage_backlog_quota_limit | Gauge | The backlog quota limit of this topic (bytes). |
-| pulsar_storage_write_rate | Gauge | The total message batches (entries) written to the storage for this topic (message batches/second). |
-| pulsar_storage_read_rate | Gauge | The total message batches (entries) read from the storage for this topic (message batches/second).
|
-| pulsar_subscription_delayed | Gauge | The total number of message batches (entries) delayed for dispatching. |
-| pulsar_storage_write_latency_le_* | Histogram | The entry rate of a topic whose storage write latency is smaller than a given threshold.<br>
    Available thresholds:
    • pulsar_storage_write_latency_le_0_5: <= 0.5ms
    • pulsar_storage_write_latency_le_1: <= 1ms
    • pulsar_storage_write_latency_le_5: <= 5ms
    • pulsar_storage_write_latency_le_10: <= 10ms
    • pulsar_storage_write_latency_le_20: <= 20ms
    • pulsar_storage_write_latency_le_50: <= 50ms
    • pulsar_storage_write_latency_le_100: <= 100ms
    • pulsar_storage_write_latency_le_200: <= 200ms
    • pulsar_storage_write_latency_le_1000: <= 1s
    • pulsar_storage_write_latency_le_overflow: > 1s
    |
-| pulsar_entry_size_le_* | Histogram | The entry rate of a topic whose entry size is smaller than a given threshold.<br>
    Available thresholds:
    • pulsar_entry_size_le_128: <= 128 bytes
    • pulsar_entry_size_le_512: <= 512 bytes
    • pulsar_entry_size_le_1_kb: <= 1 KB
    • pulsar_entry_size_le_2_kb: <= 2 KB
    • pulsar_entry_size_le_4_kb: <= 4 KB
    • pulsar_entry_size_le_16_kb: <= 16 KB
    • pulsar_entry_size_le_100_kb: <= 100 KB
    • pulsar_entry_size_le_1_mb: <= 1 MB
    • pulsar_entry_size_le_overflow: > 1 MB
    |
-| pulsar_in_bytes_total | Counter | The total number of bytes received for this topic. |
-| pulsar_in_messages_total | Counter | The total number of messages received for this topic. |
-| pulsar_out_bytes_total | Counter | The total number of bytes read from this topic. |
-| pulsar_out_messages_total | Counter | The total number of messages read from this topic. |
-| pulsar_compaction_removed_event_count | Gauge | The total number of events removed by compaction. |
-| pulsar_compaction_succeed_count | Gauge | The total number of successful compaction runs. |
-| pulsar_compaction_failed_count | Gauge | The total number of failed compaction runs. |
-| pulsar_compaction_duration_time_in_mills | Gauge | The duration of the compaction in milliseconds. |
-| pulsar_compaction_read_throughput | Gauge | The read throughput of the compaction. |
-| pulsar_compaction_write_throughput | Gauge | The write throughput of the compaction. |
-| pulsar_compaction_latency_le_* | Histogram | The compaction latency with a given quantile (threshold).<br>
    Available thresholds:
    • pulsar_compaction_latency_le_0_5: <= 0.5ms
    • pulsar_compaction_latency_le_1: <= 1ms
    • pulsar_compaction_latency_le_5: <= 5ms
    • pulsar_compaction_latency_le_10: <= 10ms
    • pulsar_compaction_latency_le_20: <= 20ms
    • pulsar_compaction_latency_le_50: <= 50ms
    • pulsar_compaction_latency_le_100: <= 100ms
    • pulsar_compaction_latency_le_200: <= 200ms
    • pulsar_compaction_latency_le_1000: <= 1s
    • pulsar_compaction_latency_le_overflow: > 1s
    |
-| pulsar_compaction_compacted_entries_count | Gauge | The total number of the compacted entries. |
-| pulsar_compaction_compacted_entries_size |Gauge | The total size of the compacted entries. |
-
-#### Replication metrics
-
-If the namespace that a topic belongs to is configured to be replicated among multiple Pulsar clusters, the corresponding replication metrics are also exposed when `replicationMetricsEnabled` is enabled.
-
-All the replication metrics are labelled with `remoteCluster=${pulsar_remote_cluster}`.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_replication_rate_in | Gauge | The total message rate of the topic replicating from the remote cluster (messages/second). |
-| pulsar_replication_rate_out | Gauge | The total message rate of the topic replicating to the remote cluster (messages/second). |
-| pulsar_replication_throughput_in | Gauge | The total throughput of the topic replicating from the remote cluster (bytes/second). |
-| pulsar_replication_throughput_out | Gauge | The total throughput of the topic replicating to the remote cluster (bytes/second). |
-| pulsar_replication_backlog | Gauge | The total backlog of the topic replicating to the remote cluster (messages). |
-
-### ManagedLedgerCache metrics
-All the ManagedLedgerCache metrics are labelled with the following labels:
-- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file.
-
-| Name | Type | Description |
-| --- | --- | --- |
-| pulsar_ml_cache_evictions | Gauge | The number of cache evictions during the last minute. |
-| pulsar_ml_cache_hits_rate | Gauge | The number of cache hits per second. |
-| pulsar_ml_cache_hits_throughput | Gauge | The amount of data retrieved from the cache, in byte/s. |
-| pulsar_ml_cache_misses_rate | Gauge | The number of cache misses per second. |
-| pulsar_ml_cache_misses_throughput | Gauge | The amount of data requested that missed the cache, in byte/s. |
-| pulsar_ml_cache_pool_active_allocations | Gauge | The number of currently active allocations in the direct arena. |
-| pulsar_ml_cache_pool_active_allocations_huge | Gauge | The number of currently active huge allocations in the direct arena. |
-| pulsar_ml_cache_pool_active_allocations_normal | Gauge | The number of currently active normal allocations in the direct arena. |
-| pulsar_ml_cache_pool_active_allocations_small | Gauge | The number of currently active small allocations in the direct arena. |
-| pulsar_ml_cache_pool_active_allocations_tiny | Gauge | The number of currently active tiny allocations in the direct arena. |
-| pulsar_ml_cache_pool_allocated | Gauge | The total allocated memory of chunk lists in the direct arena. |
-| pulsar_ml_cache_pool_used | Gauge | The total used memory of chunk lists in the direct arena. |
-| pulsar_ml_cache_used_size | Gauge | The size in bytes used to store the entries' payloads. |
-| pulsar_ml_count | Gauge | The number of currently opened managed ledgers. |
-
-### ManagedLedger metrics
-All the managedLedger metrics are labelled with the following labels:
-- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file.
-- namespace: namespace=${pulsar_namespace}. ${pulsar_namespace} is the namespace name.
-- quantile: quantile=${quantile}. The quantile label applies only to `Histogram`-type metrics and represents the threshold of a given bucket.
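For example, the quantile-labelled buckets of `pulsar_ml_AddEntryLatencyBuckets` (listed in the table below) are non-cumulative ranges, so shares are computed by summing bucket values. A hedged sketch, assuming a Prometheus server at `localhost:9090` that already scrapes this broker:

```bash
# Fraction of addEntry operations that completed within 5 ms, summing the
# non-cumulative quantile ranges 0.0_0.5, 0.5_1.0, and 1.0_5.0.
# Operations slower than 1 s are tracked by the separate
# pulsar_ml_AddEntryLatencyBuckets_OVERFLOW gauge and are not included here.
curl -s 'http://localhost:9090/api/v1/query' --data-urlencode \
  'query=sum(pulsar_ml_AddEntryLatencyBuckets{quantile=~"0.0_0.5|0.5_1.0|1.0_5.0"}) / sum(pulsar_ml_AddEntryLatencyBuckets)'
```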
-
-| Name | Type | Description |
-| --- | --- | --- |
-| pulsar_ml_AddEntryBytesRate | Gauge | The bytes/s rate of messages added |
-| pulsar_ml_AddEntryErrors | Gauge | The number of addEntry requests that failed |
-| pulsar_ml_AddEntryLatencyBuckets | Histogram | The latency of adding a ledger entry with a given quantile (threshold), including time spent waiting in the queue on the broker side<br>
    Available quantile:
    • quantile="0.0_0.5" is AddEntryLatency between (0.0ms, 0.5ms]
    • quantile="0.5_1.0" is AddEntryLatency between (0.5ms, 1.0ms]
    • quantile="1.0_5.0" is AddEntryLatency between (1ms, 5ms]
    • quantile="5.0_10.0" is AddEntryLatency between (5ms, 10ms]
    • quantile="10.0_20.0" is AddEntryLatency between (10ms, 20ms]
    • quantile="20.0_50.0" is AddEntryLatency between (20ms, 50ms]
    • quantile="50.0_100.0" is AddEntryLatency between (50ms, 100ms]
    • quantile="100.0_200.0" is AddEntryLatency between (100ms, 200ms]
    • quantile="200.0_1000.0" is AddEntryLatency between (200ms, 1s]
    | -| pulsar_ml_AddEntryLatencyBuckets_OVERFLOW | Gauge | The number of times the AddEntryLatency is longer than 1 second | -| pulsar_ml_AddEntryMessagesRate | Gauge | The msg/s rate of messages added | -| pulsar_ml_AddEntrySucceed | Gauge | The number of addEntry requests that succeeded | -| pulsar_ml_EntrySizeBuckets | Histogram | The added entry size of a ledger with a given quantile.
    Available quantile:
    • quantile="0.0_128.0" is EntrySize between (0byte, 128byte]
    • quantile="128.0_512.0" is EntrySize between (128byte, 512byte]
    • quantile="512.0_1024.0" is EntrySize between (512byte, 1KB]
    • quantile="1024.0_2048.0" is EntrySize between (1KB, 2KB]
    • quantile="2048.0_4096.0" is EntrySize between (2KB, 4KB]
    • quantile="4096.0_16384.0" is EntrySize between (4KB, 16KB]
    • quantile="16384.0_102400.0" is EntrySize between (16KB, 100KB]
    • quantile="102400.0_1232896.0" is EntrySize between (100KB, 1MB]
    | -| pulsar_ml_EntrySizeBuckets_OVERFLOW |Gauge | The number of times the EntrySize is larger than 1MB | -| pulsar_ml_LedgerSwitchLatencyBuckets | Histogram | The ledger switch latency with a given quantile.
    Available quantile:
    • quantile="0.0_0.5" is EntrySize between (0ms, 0.5ms]
    • quantile="0.5_1.0" is EntrySize between (0.5ms, 1ms]
    • quantile="1.0_5.0" is EntrySize between (1ms, 5ms]
    • quantile="5.0_10.0" is EntrySize between (5ms, 10ms]
    • quantile="10.0_20.0" is EntrySize between (10ms, 20ms]
    • quantile="20.0_50.0" is EntrySize between (20ms, 50ms]
    • quantile="50.0_100.0" is EntrySize between (50ms, 100ms]
    • quantile="100.0_200.0" is EntrySize between (100ms, 200ms]
    • quantile="200.0_1000.0" is EntrySize between (200ms, 1000ms]
    |
-| pulsar_ml_LedgerSwitchLatencyBuckets_OVERFLOW | Gauge | The number of times the ledger switch latency is longer than 1 second |
-| pulsar_ml_LedgerAddEntryLatencyBuckets | Histogram | The latency for the bookie client to persist a ledger entry from the broker to the BookKeeper service with a given quantile (threshold).<br>
    Available quantile:
    • quantile="0.0_0.5" is LedgerAddEntryLatency between (0.0ms, 0.5ms]
    • quantile="0.5_1.0" is LedgerAddEntryLatency between (0.5ms, 1.0ms]
    • quantile="1.0_5.0" is LedgerAddEntryLatency between (1ms, 5ms]
    • quantile="5.0_10.0" is LedgerAddEntryLatency between (5ms, 10ms]
    • quantile="10.0_20.0" is LedgerAddEntryLatency between (10ms, 20ms]
    • quantile="20.0_50.0" is LedgerAddEntryLatency between (20ms, 50ms]
    • quantile="50.0_100.0" is LedgerAddEntryLatency between (50ms, 100ms]
    • quantile="100.0_200.0" is LedgerAddEntryLatency between (100ms, 200ms]
    • quantile="200.0_1000.0" is LedgerAddEntryLatency between (200ms, 1s]
    |
-| pulsar_ml_LedgerAddEntryLatencyBuckets_OVERFLOW | Gauge | The number of times the LedgerAddEntryLatency is longer than 1 second |
-| pulsar_ml_MarkDeleteRate | Gauge | The rate of mark-delete ops/s |
-| pulsar_ml_NumberOfMessagesInBacklog | Gauge | The number of backlog messages for all the consumers |
-| pulsar_ml_ReadEntriesBytesRate | Gauge | The bytes/s rate of messages read |
-| pulsar_ml_ReadEntriesErrors | Gauge | The number of readEntries requests that failed |
-| pulsar_ml_ReadEntriesRate | Gauge | The msg/s rate of messages read |
-| pulsar_ml_ReadEntriesSucceeded | Gauge | The number of readEntries requests that succeeded |
-| pulsar_ml_StoredMessagesSize | Gauge | The total size of the messages in active ledgers (accounting for the multiple copies stored) |
-
-### Managed cursor acknowledgment state
-
-The acknowledgment state is persisted to the ledger first. When the acknowledgment state fails to be persisted to the ledger, it is persisted to ZooKeeper. To track the acknowledgment stats, you can configure the metrics for the managed cursor.
-
-All the cursor acknowledgment state metrics are labelled with the following labels:
-
-- namespace: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.
-
-- ledger_name: `ledger_name=${pulsar_ledger_name}`. `${pulsar_ledger_name}` is the ledger name.
-
-- cursor_name: `cursor_name=${pulsar_cursor_name}`. `${pulsar_cursor_name}` is the cursor name.
-
-Name |Type |Description
-|---|---|---
-brk_ml_cursor_persistLedgerSucceed(namespace="", ledger_name="", cursor_name="")|Gauge|The number of acknowledgment states that are persisted to a ledger.|
-brk_ml_cursor_persistLedgerErrors(namespace="", ledger_name="", cursor_name="")|Gauge|The number of errors that occurred when acknowledgment states failed to be persisted to the ledger.|
-brk_ml_cursor_persistZookeeperSucceed(namespace="", ledger_name="", cursor_name="")|Gauge|The number of acknowledgment states that are persisted to ZooKeeper.
-brk_ml_cursor_persistZookeeperErrors(namespace="", ledger_name="", cursor_name="")|Gauge|The number of errors that occurred when acknowledgment states failed to be persisted to ZooKeeper.
-brk_ml_cursor_nonContiguousDeletedMessagesRange(namespace="", ledger_name="", cursor_name="")|Gauge|The number of non-contiguous deleted messages ranges.
-
-### LoadBalancing metrics
-All the loadbalancing metrics are labelled with the following labels:
-- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file.
-- broker: broker=${broker}. ${broker} is the IP address of the broker.
-- metric: metric="loadBalancing".
-
-| Name | Type | Description |
-| --- | --- | --- |
-| pulsar_lb_bandwidth_in_usage | Gauge | The broker inbound bandwidth usage (in percent). |
-| pulsar_lb_bandwidth_out_usage | Gauge | The broker outbound bandwidth usage (in percent). |
-| pulsar_lb_cpu_usage | Gauge | The broker CPU usage (in percent). |
-| pulsar_lb_directMemory_usage | Gauge | The broker process direct memory usage (in percent). |
-| pulsar_lb_memory_usage | Gauge | The broker process memory usage (in percent). |
-
-#### BundleUnloading metrics
-All the bundleUnloading metrics are labelled with the following labels:
-- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file.
-- metric: metric="bundleUnloading".
-
-| Name | Type | Description |
-| --- | --- | --- |
-| pulsar_lb_unload_broker_count | Counter | The number of brokers unloaded in this bundle unloading |
-| pulsar_lb_unload_bundle_count | Counter | The number of bundles unloaded in this bundle unloading |
-
-#### BundleSplit metrics
-All the bundleSplit metrics are labelled with the following labels:
-- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file.
-- metric: metric="bundlesSplit".
-
-| Name | Type | Description |
-| --- | --- | --- |
-| pulsar_lb_bundles_split_count | Counter | The total number of bundle splits in this leader broker |
-
-### Subscription metrics
-
-> Subscription metrics are only exposed when `exposeTopicLevelMetricsInPrometheus` is set to `true`.
-
-All the subscription metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.
-- *topic*: `topic=${pulsar_topic}`. `${pulsar_topic}` is the topic name.
-- *subscription*: `subscription=${subscription}`. `${subscription}` is the topic subscription name.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_subscription_back_log | Gauge | The total backlog of a subscription (messages). |
-| pulsar_subscription_delayed | Gauge | The total number of messages delayed to be dispatched for a subscription (messages). |
-| pulsar_subscription_msg_rate_redeliver | Gauge | The total message rate for messages being redelivered (messages/second). |
-| pulsar_subscription_unacked_messages | Gauge | The total number of unacknowledged messages of a subscription (messages). |
-| pulsar_subscription_blocked_on_unacked_messages | Gauge | Indicates whether a subscription is blocked on unacknowledged messages or not.<br>
    • 1 means the subscription is blocked waiting for unacknowledged messages to be acked.<br>
    • 0 means the subscription is not blocked waiting for unacknowledged messages to be acked.<br>
    |
-| pulsar_subscription_msg_rate_out | Gauge | The total message dispatch rate for a subscription (messages/second). |
-| pulsar_subscription_msg_throughput_out | Gauge | The total message dispatch throughput for a subscription (bytes/second). |
-
-### Consumer metrics
-
-> Consumer metrics are only exposed when both `exposeTopicLevelMetricsInPrometheus` and `exposeConsumerLevelMetricsInPrometheus` are set to `true`.
-
-All the consumer metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.
-- *topic*: `topic=${pulsar_topic}`. `${pulsar_topic}` is the topic name.
-- *subscription*: `subscription=${subscription}`. `${subscription}` is the topic subscription name.
-- *consumer_name*: `consumer_name=${consumer_name}`. `${consumer_name}` is the topic consumer name.
-- *consumer_id*: `consumer_id=${consumer_id}`. `${consumer_id}` is the topic consumer id.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_consumer_msg_rate_redeliver | Gauge | The total message rate for messages being redelivered (messages/second). |
-| pulsar_consumer_unacked_messages | Gauge | The total number of unacknowledged messages of a consumer (messages). |
-| pulsar_consumer_blocked_on_unacked_messages | Gauge | Indicates whether a consumer is blocked on unacknowledged messages or not.<br>
    • 1 means the consumer is blocked waiting for unacknowledged messages to be acked.<br>
    • 0 means the consumer is not blocked waiting for unacknowledged messages to be acked.<br>
    |
-| pulsar_consumer_msg_rate_out | Gauge | The total message dispatch rate for a consumer (messages/second). |
-| pulsar_consumer_msg_throughput_out | Gauge | The total message dispatch throughput for a consumer (bytes/second). |
-| pulsar_consumer_available_permits | Gauge | The available permits for a consumer. |
-
-### Managed ledger bookie client metrics
-
-All the managed ledger bookie client metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-
-| Name | Type | Description |
-| --- | --- | --- |
-| pulsar_managedLedger_client_bookkeeper_ml_scheduler_completed_tasks_* | Gauge | The number of tasks completed by the scheduler executor.<br>
    The number of metrics is determined by the number of scheduler executor threads, configured by `managedLedgerNumSchedulerThreads` in `broker.conf`.<br>
    | -| pulsar_managedLedger_client_bookkeeper_ml_scheduler_queue_* | Gauge | The number of tasks queued in the scheduler executor's queue.
    The number of metrics is determined by the number of scheduler executor threads, configured by `managedLedgerNumSchedulerThreads` in `broker.conf`.<br>
    | -| pulsar_managedLedger_client_bookkeeper_ml_scheduler_total_tasks_* | Gauge | The total number of tasks the scheduler executor received.
    The number of metrics is determined by the number of scheduler executor threads, configured by `managedLedgerNumSchedulerThreads` in `broker.conf`.<br>
    |
-| pulsar_managedLedger_client_bookkeeper_ml_scheduler_task_execution | Summary | The scheduler task execution latency calculated in milliseconds. |
-| pulsar_managedLedger_client_bookkeeper_ml_scheduler_task_queued | Summary | The scheduler task queued latency calculated in milliseconds. |
-
-### Token metrics
-
-All the token metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_expired_token_count | Counter | The number of expired tokens in Pulsar. |
-| pulsar_expiring_token_minutes | Histogram | The remaining time of expiring tokens in minutes. |
-
-### Authentication metrics
-
-All the authentication metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-- *provider_name*: `provider_name=${provider_name}`. `${provider_name}` is the class name of the authentication provider.
-- *auth_method*: `auth_method=${auth_method}`. `${auth_method}` is the authentication method of the authentication provider.
-- *reason*: `reason=${reason}`. `${reason}` is the reason for the authentication failure. (This label is only for `pulsar_authentication_failures_count`.)
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_authentication_success_count| Counter | The number of successful authentication operations. |
-| pulsar_authentication_failures_count | Counter | The number of failed authentication operations. |
-
-### Connection metrics
-
-All the connection metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-- *broker*: `broker=${advertised_address}`. `${advertised_address}` is the advertised address of the broker.
-- *metric*: `metric=${metric}`. `${metric}` is the connection metric collective name.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_active_connections| Gauge | The number of active connections. |
-| pulsar_connection_created_total_count | Gauge | The total number of connections. |
-| pulsar_connection_create_success_count | Gauge | The number of successfully created connections. |
-| pulsar_connection_create_fail_count | Gauge | The number of failed connections. |
-| pulsar_connection_closed_total_count | Gauge | The total number of closed connections. |
-| pulsar_broker_throttled_connections | Gauge | The number of throttled connections. |
-| pulsar_broker_throttled_connections_global_limit | Gauge | The number of throttled connections because of the global connection limit. |
-
-### Jetty metrics
-
-> For a functions-worker running separately from brokers, its Jetty metrics are only exposed when `includeStandardPrometheusMetrics` is set to `true`.
-
-All the Jetty metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-
-| Name | Type | Description |
-|---|---|---|
-| jetty_requests_total | Counter | Number of requests. |
-| jetty_requests_active | Gauge | Number of requests currently active. |
-| jetty_requests_active_max | Gauge | Maximum number of requests that have been active at once. |
-| jetty_request_time_max_seconds | Gauge | Maximum time spent handling requests.
|
-| jetty_request_time_seconds_total | Counter | Total time spent in all request handling. |
-| jetty_dispatched_total | Counter | Number of dispatches. |
-| jetty_dispatched_active | Gauge | Number of dispatches currently active. |
-| jetty_dispatched_active_max | Gauge | Maximum number of active dispatches being handled. |
-| jetty_dispatched_time_max | Gauge | Maximum time spent in dispatch handling. |
-| jetty_dispatched_time_seconds_total | Counter | Total time spent in dispatch handling. |
-| jetty_async_requests_total | Counter | Total number of async requests. |
-| jetty_async_requests_waiting | Gauge | Currently waiting async requests. |
-| jetty_async_requests_waiting_max | Gauge | Maximum number of waiting async requests. |
-| jetty_async_dispatches_total | Counter | Number of requests that have been asynchronously dispatched. |
-| jetty_expires_total | Counter | Number of async requests that have expired. |
-| jetty_responses_total | Counter | Number of responses, labeled by status code. The `code` label can be "1xx", "2xx", "3xx", "4xx", or "5xx". |
-| jetty_stats_seconds | Gauge | Time in seconds stats have been collected for. |
-| jetty_responses_bytes_total | Counter | Total number of bytes across all responses. |
-
-## Pulsar Functions
-
-All the Pulsar Functions metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_function_processed_successfully_total | Counter | The total number of messages processed successfully. |
-| pulsar_function_processed_successfully_total_1min | Counter | The total number of messages processed successfully in the last 1 minute. |
-| pulsar_function_system_exceptions_total | Counter | The total number of system exceptions. |
-| pulsar_function_system_exceptions_total_1min | Counter | The total number of system exceptions in the last 1 minute. |
-| pulsar_function_user_exceptions_total | Counter | The total number of user exceptions. |
-| pulsar_function_user_exceptions_total_1min | Counter | The total number of user exceptions in the last 1 minute. |
-| pulsar_function_process_latency_ms | Summary | The process latency in milliseconds. |
-| pulsar_function_process_latency_ms_1min | Summary | The process latency in milliseconds in the last 1 minute. |
-| pulsar_function_last_invocation | Gauge | The timestamp of the last invocation of the function. |
-| pulsar_function_received_total | Counter | The total number of messages received from the source. |
-| pulsar_function_received_total_1min | Counter | The total number of messages received from the source in the last 1 minute. |
-pulsar_function_user_metric_ | Summary|The user-defined metrics.
-
-## Connectors
-
-All the Pulsar connector metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.
-
-Connector metrics contain **source** metrics and **sink** metrics.
-
-- **Source** metrics
-
-  | Name | Type | Description |
-  |---|---|---|
-  pulsar_source_written_total|Counter|The total number of records written to a Pulsar topic.
- pulsar_source_written_total_1min|Counter|The total number of records written to a Pulsar topic in the last 1 minute.
- pulsar_source_received_total|Counter|The total number of records received from the source.
- pulsar_source_received_total_1min|Counter|The total number of records received from the source in the last 1 minute.
- pulsar_source_last_invocation|Gauge|The timestamp of the last invocation of the source.
- pulsar_source_source_exception|Gauge|The exception from a source.
- pulsar_source_source_exceptions_total|Counter|The total number of source exceptions.
- pulsar_source_source_exceptions_total_1min |Counter|The total number of source exceptions in the last 1 minute.
- pulsar_source_system_exception|Gauge|The exception from system code.
- pulsar_source_system_exceptions_total|Counter|The total number of system exceptions.
- pulsar_source_system_exceptions_total_1min|Counter|The total number of system exceptions in the last 1 minute.
- pulsar_source_user_metric_ | Summary|The user-defined metrics.
-
-- **Sink** metrics
-
-  | Name | Type | Description |
-  |---|---|---|
-  pulsar_sink_written_total|Counter| The total number of records processed by a sink.
-  pulsar_sink_written_total_1min|Counter| The total number of records processed by a sink in the last 1 minute.
-  pulsar_sink_received_total_1min|Counter| The total number of records that a sink has received from Pulsar topics in the last 1 minute.
-  pulsar_sink_received_total|Counter| The total number of records that a sink has received from Pulsar topics.
-  pulsar_sink_last_invocation|Gauge|The timestamp of the last invocation of the sink.
-  pulsar_sink_sink_exception|Gauge|The exception from a sink.
-  pulsar_sink_sink_exceptions_total|Counter|The total number of sink exceptions.
-  pulsar_sink_sink_exceptions_total_1min |Counter|The total number of sink exceptions in the last 1 minute.
-  pulsar_sink_system_exception|Gauge|The exception from system code.
-  pulsar_sink_system_exceptions_total|Counter|The total number of system exceptions.
-  pulsar_sink_system_exceptions_total_1min|Counter|The total number of system exceptions in the last 1 minute.
-  pulsar_sink_user_metric_ | Summary|The user-defined metrics.
-
-## Proxy
-
-All the proxy metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-- *kubernetes_pod_name*: `kubernetes_pod_name=${kubernetes_pod_name}`. `${kubernetes_pod_name}` is the Kubernetes pod name.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_proxy_active_connections | Gauge | Number of connections currently active in the proxy. |
-| pulsar_proxy_new_connections | Counter | Counter of connections being opened in the proxy. |
-| pulsar_proxy_rejected_connections | Counter | Counter for connections rejected due to throttling. |
-| pulsar_proxy_binary_ops | Counter | Counter of proxy operations. |
-| pulsar_proxy_binary_bytes | Counter | Counter of proxy bytes. |
-
-## Pulsar SQL Worker
-
-| Name | Type | Description |
-|---|---|---|
-| split_bytes_read | Counter | Number of bytes read from BookKeeper. |
-| split_num_messages_deserialized | Counter | Number of messages deserialized. |
-| split_num_record_deserialized | Counter | Number of records deserialized. |
-| split_bytes_read_per_query | Summary | Total number of bytes read per query. |
-| split_entry_deserialize_time | Summary | Time spent on deserializing entries.
|
-| split_entry_deserialize_time_per_query | Summary | Time spent on deserializing entries per query. |
-| split_entry_queue_dequeue_wait_time | Summary | Time spent on waiting to get an entry from the entry queue because it is empty. |
-| split_entry_queue_dequeue_wait_time_per_query | Summary | Total time spent on waiting to get an entry from the entry queue per query. |
-| split_message_queue_dequeue_wait_time_per_query | Summary | Time spent on waiting to dequeue from the message queue because it is empty, per query. |
-| split_message_queue_enqueue_wait_time | Summary | Time spent on waiting for message queue enqueue because the message queue is full. |
-| split_message_queue_enqueue_wait_time_per_query | Summary | Time spent on waiting for message queue enqueue because the message queue is full, per query. |
-| split_num_entries_per_batch | Summary | Number of entries per batch. |
-| split_num_entries_per_query | Summary | Number of entries per query. |
-| split_num_messages_deserialized_per_entry | Summary | Number of messages deserialized per entry. |
-| split_num_messages_deserialized_per_query | Summary | Number of messages deserialized per query. |
-| split_read_attempts | Summary | Number of read attempts (fail if queues are full). |
-| split_read_attempts_per_query | Summary | Number of read attempts per query. |
-| split_read_latency_per_batch | Summary | Latency of reads per batch. |
-| split_read_latency_per_query | Summary | Total read latency per query. |
-| split_record_deserialize_time | Summary | Time spent on deserializing a message to a record. For example, Avro, JSON, and so on. |
-| split_record_deserialize_time_per_query | Summary | Time spent on deserializing messages to records per query. |
-| split_total_execution_time | Summary | The total execution time. |
-
-## Pulsar transaction
-
-All the transaction metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-- *coordinator_id*: `coordinator_id=${coordinator_id}`. `${coordinator_id}` is the coordinator id.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_txn_active_count | Gauge | Number of active transactions. |
-| pulsar_txn_created_count | Counter | Number of created transactions. |
-| pulsar_txn_committed_count | Counter | Number of committed transactions. |
-| pulsar_txn_aborted_count | Counter | Number of aborted transactions of this coordinator. |
-| pulsar_txn_timeout_count | Counter | Number of timed-out transactions. |
-| pulsar_txn_append_log_count | Counter | Number of appended transaction logs. |
-| pulsar_txn_execution_latency_le_* | Histogram | Transaction execution latency.<br>
    Available latencies are as below:
    • latency="10" is TransactionExecutionLatency between (0ms, 10ms]
    • latency="20" is TransactionExecutionLatency between (10ms, 20ms]
    • latency="50" is TransactionExecutionLatency between (20ms, 50ms]
    • latency="100" is TransactionExecutionLatency between (50ms, 100ms]
    • latency="500" is TransactionExecutionLatency between (100ms, 500ms]
    • latency="1000" is TransactionExecutionLatency between (500ms, 1000ms]
    • latency="5000" is TransactionExecutionLatency between (1s, 5s]
    • latency="15000" is TransactionExecutionLatency between (5s, 15s]
    • latency="30000" is TransactionExecutionLatency between (15s, 30s]
    • latency="60000" is TransactionExecutionLatency between (30s, 60s]
    • latency="300000" is TransactionExecutionLatency between (1m,5m]
    • latency="1500000" is TransactionExecutionLatency between (5m,15m]
    • latency="3000000" is TransactionExecutionLatency between (15m,30m]
    • latency="overflow" is TransactionExecutionLatency between (30m,∞]
    | diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/reference-pulsar-admin.md b/site2/website/versioned_docs/version-2.8.2-deprecated/reference-pulsar-admin.md deleted file mode 100644 index bba1d6379dd972..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/reference-pulsar-admin.md +++ /dev/null @@ -1,3338 +0,0 @@ ---- -id: reference-pulsar-admin -title: Pulsar admin CLI -sidebar_label: "Pulsar Admin CLI" -original_id: reference-pulsar-admin ---- - -> **Important** -> -> This page is deprecated and not updated anymore. For the latest and complete information about `pulsar-admin`, including commands, flags, descriptions, and more, see [pulsar-admin doc](https://pulsar.apache.org/tools/pulsar-admin/). - -The `pulsar-admin` tool enables you to manage Pulsar installations, including clusters, brokers, namespaces, tenants, and more. - -Usage - -```bash - -$ pulsar-admin command - -``` - -Commands -* `broker-stats` -* `brokers` -* `clusters` -* `functions` -* `functions-worker` -* `namespaces` -* `ns-isolation-policy` -* `sources` - - For more information, see [here](io-cli.md#sources) -* `sinks` - - For more information, see [here](io-cli.md#sinks) -* `topics` -* `tenants` -* `resource-quotas` -* `schemas` - -## `broker-stats` - -Operations to collect broker statistics - -```bash - -$ pulsar-admin broker-stats subcommand - -``` - -Subcommands -* `allocator-stats` -* `topics(destinations)` -* `mbeans` -* `monitoring-metrics` -* `load-report` - - -### `allocator-stats` - -Dump allocator stats - -Usage - -```bash - -$ pulsar-admin broker-stats allocator-stats allocator-name - -``` - -### `topics(destinations)` - -Dump topic stats - -Usage - -```bash - -$ pulsar-admin broker-stats topics options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-i`, `--indent`|Indent JSON output|false| - -### `mbeans` - -Dump Mbean stats - -Usage - -```bash - -$ pulsar-admin broker-stats mbeans options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-i`, `--indent`|Indent JSON output|false| - - -### `monitoring-metrics` - -Dump metrics for monitoring - -Usage - -```bash - -$ pulsar-admin broker-stats monitoring-metrics options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-i`, `--indent`|Indent JSON output|false| - - -### `load-report` - -Dump broker load-report - -Usage - -```bash - -$ pulsar-admin broker-stats load-report - -``` - -## `brokers` - -Operations about brokers - -```bash - -$ pulsar-admin brokers subcommand - -``` - -Subcommands -* `list` -* `namespaces` -* `update-dynamic-config` -* `list-dynamic-config` -* `get-all-dynamic-config` -* `get-internal-config` -* `get-runtime-config` -* `healthcheck` - -### `list` -List active brokers of the cluster - -Usage - -```bash - -$ pulsar-admin brokers list cluster-name - -``` - -### `leader-broker` -Get the information of the leader broker - -Usage - -```bash - -$ pulsar-admin brokers leader-broker - -``` - -### `namespaces` -List namespaces owned by the broker - -Usage - -```bash - -$ pulsar-admin brokers namespaces cluster-name options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--url`|The URL for the broker|| - - -### `update-dynamic-config` -Update a broker's dynamic service configuration - -Usage - -```bash - -$ pulsar-admin brokers update-dynamic-config options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--config`|Service configuration parameter name|| -|`--value`|Value for the configuration parameter value 
specified using the `--config` flag||
-
-
-### `list-dynamic-config`
-Get the list of updatable configuration names
-
-Usage
-
-```bash
-
-$ pulsar-admin brokers list-dynamic-config
-
-```
-
-### `delete-dynamic-config`
-Delete the dynamic service configuration of a broker
-
-Usage
-
-```bash
-
-$ pulsar-admin brokers delete-dynamic-config options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--config`|Service configuration parameter name||
-
-
-### `get-all-dynamic-config`
-Get all overridden dynamic-configuration values
-
-Usage
-
-```bash
-
-$ pulsar-admin brokers get-all-dynamic-config
-
-```
-
-### `get-internal-config`
-Get internal configuration information
-
-Usage
-
-```bash
-
-$ pulsar-admin brokers get-internal-config
-
-```
-
-### `get-runtime-config`
-Get runtime configuration values
-
-Usage
-
-```bash
-
-$ pulsar-admin brokers get-runtime-config
-
-```
-
-### `healthcheck`
-Run a health check against the broker
-
-Usage
-
-```bash
-
-$ pulsar-admin brokers healthcheck
-
-```
-
-## `clusters`
-Operations about clusters
-
-Usage
-
-```bash
-
-$ pulsar-admin clusters subcommand
-
-```
-
-Subcommands
-* `get`
-* `create`
-* `update`
-* `delete`
-* `list`
-* `update-peer-clusters`
-* `get-peer-clusters`
-* `get-failure-domain`
-* `create-failure-domain`
-* `update-failure-domain`
-* `delete-failure-domain`
-* `list-failure-domains`
-
-
-### `get`
-Get the configuration data for the specified cluster
-
-Usage
-
-```bash
-
-$ pulsar-admin clusters get cluster-name
-
-```
-
-### `create`
-Provisions a new cluster. This operation requires Pulsar super-user privileges.
-
-Usage
-
-```bash
-
-$ pulsar-admin clusters create cluster-name options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--broker-url`|The URL for the broker service.||
-|`--broker-url-secure`|The broker service URL for a secure connection||
-|`--url`|service-url||
-|`--url-secure`|service-url for a secure connection||
-
-
-### `update`
-Update the configuration for a cluster
-
-Usage
-
-```bash
-
-$ pulsar-admin clusters update cluster-name options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--broker-url`|The URL for the broker service.||
-|`--broker-url-secure`|The broker service URL for a secure connection||
-|`--url`|service-url||
-|`--url-secure`|service-url for a secure connection||
-
-
-### `delete`
-Deletes an existing cluster
-
-Usage
-
-```bash
-
-$ pulsar-admin clusters delete cluster-name
-
-```
-
-### `list`
-List the existing clusters
-
-Usage
-
-```bash
-
-$ pulsar-admin clusters list
-
-```
-
-### `update-peer-clusters`
-Update peer cluster names
-
-Usage
-
-```bash
-
-$ pulsar-admin clusters update-peer-clusters cluster-name options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--peer-clusters`|Comma-separated peer cluster names (pass an empty string "" to delete the list)||
-
-### `get-peer-clusters`
-Get the list of peer clusters
-
-Usage
-
-```bash
-
-$ pulsar-admin clusters get-peer-clusters
-
-```
-
-### `get-failure-domain`
-Get the configured brokers of a failure domain
-
-Usage
-
-```bash
-
-$ pulsar-admin clusters get-failure-domain cluster-name options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster||
-
-### `create-failure-domain`
-Create a new failure domain for a cluster (updates it if already created)
-
-Usage
-
-```bash
-
-$ pulsar-admin clusters create-failure-domain cluster-name options
-
-```
-
-Options
- <br>
-
-|Flag|Description|Default|
-|---|---|---|
-|`--broker-list`|Comma-separated broker list||
-|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster||
-
-### `update-failure-domain`
-Update the failure domain for a cluster (creates a new one if it does not exist)
-
-Usage
-
-```bash
-
-$ pulsar-admin clusters update-failure-domain cluster-name options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--broker-list`|Comma-separated broker list||
-|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster||
-
-### `delete-failure-domain`
-Delete an existing failure domain
-
-Usage
-
-```bash
-
-$ pulsar-admin clusters delete-failure-domain cluster-name options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster||
-
-### `list-failure-domains`
-List the existing failure domains for a cluster
-
-Usage
-
-```bash
-
-$ pulsar-admin clusters list-failure-domains cluster-name
-
-```
-
-## `functions`
-
-A command-line interface for Pulsar Functions
-
-Usage
-
-```bash
-
-$ pulsar-admin functions subcommand
-
-```
-
-Subcommands
-* `localrun`
-* `create`
-* `delete`
-* `update`
-* `get`
-* `restart`
-* `stop`
-* `start`
-* `status`
-* `stats`
-* `list`
-* `querystate`
-* `putstate`
-* `trigger`
-
-
-### `localrun`
-Run the Pulsar Function locally (rather than deploying it to the Pulsar cluster)
-
-
-Usage
-
-```bash
-
-$ pulsar-admin functions localrun options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--cpu`|The CPU in cores that needs to be allocated per function instance (applicable only to the Docker runtime)||
-|`--ram`|The RAM in bytes that needs to be allocated per function instance (applicable only to the process/Docker runtime)||
-|`--disk`|The disk space in bytes that needs to be allocated per function instance (applicable only to the Docker runtime)||
-|`--auto-ack`|Whether or not the framework will automatically acknowledge messages||
-|`--subs-name`|Pulsar source subscription name if the user wants a specific subscription name for the input-topic consumer||
-|`--broker-service-url `|The URL of the Pulsar broker||
-|`--classname`|The function's class name||
-|`--custom-serde-inputs`|The map of input topics to SerDe class names (as a JSON string)||
-|`--custom-schema-inputs`|The map of input topics to Schema class names (as a JSON string)||
-|`--client-auth-params`|Client authentication parameters||
-|`--client-auth-plugin`|Client authentication plugin with which the function process can connect to the broker||
-|`--function-config-file`|The path to a YAML config file specifying the function's configuration||
-|`--hostname-verification-enabled`|Enable hostname verification|false|
-|`--instance-id-offset`|Start the instanceIds from this offset|0|
-|`--inputs`|The function's input topic or topics (multiple topics can be specified as a comma-separated list)||
-|`--log-topic`|The topic to which the function's logs are produced||
-|`--jar`|Path to the jar file for the function (if the function is written in Java).
It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which the worker can download the package.|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--output`|The function's output topic (if none is specified, no output is written)|| -|`--output-serde-classname`|The SerDe class to be used for messages output by the function|| -|`--parallelism`|The function’s parallelism factor, i.e. the number of instances of the function to run|1| -|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the function. Possible Values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]|ATLEAST_ONCE| -|`--py`|Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which the worker can download the package.|| -|`--go`|Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which the worker can download the package.|| -|`--schema-type`|The builtin schema type or custom schema class name to be used for messages output by the function|| -|`--sliding-interval-count`|The number of messages after which the window slides|| -|`--sliding-interval-duration-ms`|The time duration after which the window slides|| -|`--state-storage-service-url`|The URL for the state storage service. By default, it is set to the service URL of the Apache BookKeeper cluster. This service URL must be added manually when the Pulsar Function runs locally.|| -|`--tenant`|The function’s tenant|| -|`--topics-pattern`|The topic pattern to consume from a list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add the SerDe class name for a pattern in --custom-serde-inputs (supported for Java functions only)|| -|`--user-config`|User-defined config key/values|| -|`--window-length-count`|The number of messages per window|| -|`--window-length-duration-ms`|The time duration of the window in milliseconds|| -|`--dead-letter-topic`|The topic where all messages which could not be processed successfully are sent|| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--max-message-retries`|The number of times to try processing a message before giving up|| -|`--retain-ordering`|Function consumes and processes messages in order|| -|`--retain-key-ordering`|Function consumes and processes messages in key order|| -|`--timeout-ms`|The message timeout in milliseconds|| -|`--tls-allow-insecure`|Allow insecure TLS connections|false| -|`--tls-trust-cert-path`|The TLS trust certificate file path|| -|`--use-tls`|Use a TLS connection|false| -|`--producer-config`|The custom producer configuration (as a JSON string)|| -
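- -For example, a minimal sketch of running a Java function locally against a standalone broker (the jar path, class name, and topic names below are placeholders): - -```bash - -$ pulsar-admin functions localrun \ ---jar my-function.jar \ ---classname org.example.MyFunction \ ---inputs persistent://public/default/in \ ---output persistent://public/default/out - -``` -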
- -### `create` -Create a Pulsar Function in cluster mode (i.e. deploy it on a Pulsar cluster) - -Usage - -```bash - -$ pulsar-admin functions create options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--cpu`|The CPU in cores to allocate per function instance (applicable only to the Docker runtime)|| -|`--ram`|The RAM in bytes to allocate per function instance (applicable only to the process/Docker runtimes)|| -|`--disk`|The disk space in bytes to allocate per function instance (applicable only to the Docker runtime)|| -|`--auto-ack`|Whether or not the framework will automatically acknowledge messages|| -|`--subs-name`|The Pulsar source subscription name, if the user wants a specific subscription name for the input-topic consumer|| -|`--classname`|The function's class name|| -|`--custom-serde-inputs`|The map of input topics to SerDe class names (as a JSON string)|| -|`--custom-schema-inputs`|The map of input topics to Schema class names (as a JSON string)|| -|`--function-config-file`|The path to a YAML config file specifying the function's configuration|| -|`--inputs`|The function's input topic or topics (multiple topics can be specified as a comma-separated list)|| -|`--log-topic`|The topic to which the function's logs are produced|| -|`--jar`|Path to the jar file for the function (if the function is written in Java). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which the worker can download the package.|| -|`--name`|The function's name|| -|`--namespace`|The function’s namespace|| -|`--output`|The function's output topic (if none is specified, no output is written)|| -|`--output-serde-classname`|The SerDe class to be used for messages output by the function|| -|`--parallelism`|The function’s parallelism factor, i.e. the number of instances of the function to run|1| -|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the function. Possible Values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]|ATLEAST_ONCE| -|`--py`|Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which the worker can download the package.|| -|`--go`|Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which the worker can download the package.|| -|`--schema-type`|The builtin schema type or custom schema class name to be used for messages output by the function|| -|`--sliding-interval-count`|The number of messages after which the window slides|| -|`--sliding-interval-duration-ms`|The time duration after which the window slides|| -|`--tenant`|The function’s tenant|| -|`--topics-pattern`|The topic pattern to consume from a list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. 
Add the SerDe class name for a pattern in --custom-serde-inputs (supported for Java functions only)|| -|`--user-config`|User-defined config key/values|| -|`--window-length-count`|The number of messages per window|| -|`--window-length-duration-ms`|The time duration of the window in milliseconds|| -|`--dead-letter-topic`|The topic where all messages which could not be processed successfully are sent|| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--max-message-retries`|The number of times to try processing a message before giving up|| -|`--retain-ordering`|Function consumes and processes messages in order|| -|`--retain-key-ordering`|Function consumes and processes messages in key order|| -|`--timeout-ms`|The message timeout in milliseconds|| -|`--producer-config`|The custom producer configuration (as a JSON string)|| - - -### `delete` -Delete a Pulsar Function that's running on a Pulsar cluster - -Usage - -```bash - -$ pulsar-admin functions delete options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `update` -Update a Pulsar Function that's been deployed to a Pulsar cluster - -Usage - -```bash - -$ pulsar-admin functions update options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--cpu`|The CPU in cores to allocate per function instance (applicable only to the Docker runtime)|| -|`--ram`|The RAM in bytes to allocate per function instance (applicable only to the process/Docker runtimes)|| -|`--disk`|The disk space in bytes to allocate per function instance (applicable only to the Docker runtime)|| -|`--auto-ack`|Whether or not the framework will automatically acknowledge messages|| -|`--subs-name`|The Pulsar source subscription name, if the user wants a specific subscription name for the input-topic consumer|| -|`--classname`|The function's class name|| -|`--custom-serde-inputs`|The map of input topics to SerDe class names (as a JSON string)|| -|`--custom-schema-inputs`|The map of input topics to Schema class names (as a JSON string)|| -|`--function-config-file`|The path to a YAML config file specifying the function's configuration|| -|`--inputs`|The function's input topic or topics (multiple topics can be specified as a comma-separated list)|| -|`--log-topic`|The topic to which the function's logs are produced|| -|`--jar`|Path to the jar file for the function (if the function is written in Java). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which the worker can download the package.|| -|`--name`|The function's name|| -|`--namespace`|The function’s namespace|| -|`--output`|The function's output topic (if none is specified, no output is written)|| -|`--output-serde-classname`|The SerDe class to be used for messages output by the function|| -|`--parallelism`|The function’s parallelism factor, i.e. the number of instances of the function to run|1| -|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the function. Possible Values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]|ATLEAST_ONCE| -|`--py`|Path to the main Python file/Python Wheel file for the function (if the function is written in Python). 
It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which the worker can download the package.|| -|`--go`|Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which the worker can download the package.|| -|`--schema-type`|The builtin schema type or custom schema class name to be used for messages output by the function|| -|`--sliding-interval-count`|The number of messages after which the window slides|| -|`--sliding-interval-duration-ms`|The time duration after which the window slides|| -|`--tenant`|The function’s tenant|| -|`--topics-pattern`|The topic pattern to consume from a list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add the SerDe class name for a pattern in --custom-serde-inputs (supported for Java functions only)|| -|`--user-config`|User-defined config key/values|| -|`--window-length-count`|The number of messages per window|| -|`--window-length-duration-ms`|The time duration of the window in milliseconds|| -|`--dead-letter-topic`|The topic where all messages which could not be processed successfully are sent|| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--max-message-retries`|The number of times to try processing a message before giving up|| -|`--retain-ordering`|Function consumes and processes messages in order|| -|`--retain-key-ordering`|Function consumes and processes messages in key order|| -|`--timeout-ms`|The message timeout in milliseconds|| -|`--producer-config`|The custom producer configuration (as a JSON string)|| - - -### `get` -Fetch information about a Pulsar Function - -Usage - -```bash - -$ pulsar-admin functions get options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `restart` -Restart a function instance - -Usage - -```bash - -$ pulsar-admin functions restart options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--instance-id`|The function instanceId (restart all instances if instance-id is not provided)|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `stop` -Stop a function instance - -Usage - -```bash - -$ pulsar-admin functions stop options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--instance-id`|The function instanceId (stop all instances if instance-id is not provided)|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `start` -Start a stopped function instance - -Usage - -```bash - -$ pulsar-admin functions start options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--instance-id`|The function instanceId (start all instances if instance-id is not provided)|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The 
function's tenant|| - - -### `status` -Check the current status of a Pulsar Function - -Usage - -```bash - -$ pulsar-admin functions status options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--instance-id`|The function instanceId (Get-status of all instances if instance-id is not provided)|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `stats` -Get the current stats of a Pulsar Function - -Usage - -```bash - -$ pulsar-admin functions stats options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--instance-id`|The function instanceId (Get-stats of all instances if instance-id is not provided)|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - -### `list` -List all of the Pulsar Functions running under a specific tenant and namespace - -Usage - -```bash - -$ pulsar-admin functions list options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `querystate` -Fetch the current state associated with a Pulsar Function running in cluster mode - -Usage - -```bash - -$ pulsar-admin functions querystate options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`-k`, `--key`|The key for the state you want to fetch|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| -|`-w`, `--watch`|Watch for changes in the value associated with a key for a Pulsar Function|false| - -### `putstate` -Put a key/value pair to the state associated with a Pulsar Function - -Usage - -```bash - -$ pulsar-admin functions putstate options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the Pulsar Function|| -|`--name`|The name of a Pulsar Function|| -|`--namespace`|The namespace of a Pulsar Function|| -|`--tenant`|The tenant of a Pulsar Function|| -|`-s`, `--state`|The FunctionState that needs to be put|| - -### `trigger` -Triggers the specified Pulsar Function with a supplied value - -Usage - -```bash - -$ pulsar-admin functions trigger options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| -|`--topic`|The specific topic name that the function consumes from that you want to inject the data to|| -|`--trigger-file`|The path to the file that contains the data with which you'd like to trigger the function|| -|`--trigger-value`|The value with which you want to trigger the function|| - - -## `functions-worker` -Operations to collect function-worker statistics - -```bash - -$ pulsar-admin functions-worker subcommand - -``` - -Subcommands - -* `function-stats` -* `get-cluster` -* `get-cluster-leader` -* `get-function-assignments` -* `monitoring-metrics` - -### `function-stats` - -Dump all functions stats running on this broker - -Usage - -```bash - -$ pulsar-admin functions-worker function-stats - -``` - -### `get-cluster` - -Get all workers belonging to this cluster - -Usage - -```bash - -$ 
pulsar-admin functions-worker get-cluster - -``` - -### `get-cluster-leader` - -Get the leader of the worker cluster - -Usage - -```bash - -$ pulsar-admin functions-worker get-cluster-leader - -``` - -### `get-function-assignments` - -Get the assignments of the functions across the worker cluster - -Usage - -```bash - -$ pulsar-admin functions-worker get-function-assignments - -``` - -### `monitoring-metrics` - -Dump metrics for Monitoring - -Usage - -```bash - -$ pulsar-admin functions-worker monitoring-metrics - -``` - -## `namespaces` - -Operations for managing namespaces - -```bash - -$ pulsar-admin namespaces subcommand - -``` - -Subcommands -* `list` -* `topics` -* `policies` -* `create` -* `delete` -* `set-deduplication` -* `set-auto-topic-creation` -* `remove-auto-topic-creation` -* `set-auto-subscription-creation` -* `remove-auto-subscription-creation` -* `permissions` -* `grant-permission` -* `revoke-permission` -* `grant-subscription-permission` -* `revoke-subscription-permission` -* `set-clusters` -* `get-clusters` -* `get-backlog-quotas` -* `set-backlog-quota` -* `remove-backlog-quota` -* `get-persistence` -* `set-persistence` -* `get-message-ttl` -* `set-message-ttl` -* `remove-message-ttl` -* `get-anti-affinity-group` -* `set-anti-affinity-group` -* `get-anti-affinity-namespaces` -* `delete-anti-affinity-group` -* `get-retention` -* `set-retention` -* `unload` -* `split-bundle` -* `set-dispatch-rate` -* `get-dispatch-rate` -* `set-replicator-dispatch-rate` -* `get-replicator-dispatch-rate` -* `set-subscribe-rate` -* `get-subscribe-rate` -* `set-subscription-dispatch-rate` -* `get-subscription-dispatch-rate` -* `clear-backlog` -* `unsubscribe` -* `set-encryption-required` -* `set-delayed-delivery` -* `get-delayed-delivery` -* `set-subscription-auth-mode` -* `get-max-producers-per-topic` -* `set-max-producers-per-topic` -* `get-max-consumers-per-topic` -* `set-max-consumers-per-topic` -* `get-max-consumers-per-subscription` -* `set-max-consumers-per-subscription` -* `get-max-unacked-messages-per-subscription` -* `set-max-unacked-messages-per-subscription` -* `get-max-unacked-messages-per-consumer` -* `set-max-unacked-messages-per-consumer` -* `get-compaction-threshold` -* `set-compaction-threshold` -* `get-offload-threshold` -* `set-offload-threshold` -* `get-offload-deletion-lag` -* `set-offload-deletion-lag` -* `clear-offload-deletion-lag` -* `get-schema-autoupdate-strategy` -* `set-schema-autoupdate-strategy` -* `set-offload-policies` -* `get-offload-policies` -* `set-max-subscriptions-per-topic` -* `get-max-subscriptions-per-topic` -* `remove-max-subscriptions-per-topic` - - -### `list` -Get the namespaces for a tenant - -Usage - -```bash - -$ pulsar-admin namespaces list tenant-name - -``` - -### `topics` -Get the list of topics for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces topics tenant/namespace - -``` - -### `policies` -Get the configuration policies of a namespace - -Usage - -```bash - -$ pulsar-admin namespaces policies tenant/namespace - -``` - -### `create` -Create a new namespace - -Usage - -```bash - -$ pulsar-admin namespaces create tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-b`, `--bundles`|The number of bundles to activate|0| -|`-c`, `--clusters`|List of clusters this namespace will be assigned|| - - -### `delete` -Deletes a namespace. 
The namespace needs to be empty - -Usage - -```bash - -$ pulsar-admin namespaces delete tenant/namespace - -``` - -### `set-deduplication` -Enable or disable message deduplication on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-deduplication tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--enable`, `-e`|Enable message deduplication on the specified namespace|false| -|`--disable`, `-d`|Disable message deduplication on the specified namespace|false| - -### `set-auto-topic-creation` -Enable or disable autoTopicCreation for a namespace, overriding broker settings - -Usage - -```bash - -$ pulsar-admin namespaces set-auto-topic-creation tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--enable`, `-e`|Enable allowAutoTopicCreation on namespace|false| -|`--disable`, `-d`|Disable allowAutoTopicCreation on namespace|false| -|`--type`, `-t`|Type of topic to be auto-created. Possible values: (partitioned, non-partitioned)|non-partitioned| -|`--num-partitions`, `-n`|Default number of partitions of topic to be auto-created, applicable to partitioned topics only|| - -### `remove-auto-topic-creation` -Remove override of autoTopicCreation for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces remove-auto-topic-creation tenant/namespace - -``` - -### `set-auto-subscription-creation` -Enable autoSubscriptionCreation for a namespace, overriding broker settings - -Usage - -```bash - -$ pulsar-admin namespaces set-auto-subscription-creation tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--enable`, `-e`|Enable allowAutoSubscriptionCreation on namespace|false| - -### `remove-auto-subscription-creation` -Remove override of autoSubscriptionCreation for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces remove-auto-subscription-creation tenant/namespace - -``` - -### `permissions` -Get the permissions on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces permissions tenant/namespace - -``` - -### `grant-permission` -Grant permissions on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces grant-permission tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--actions`|Actions to be granted (`produce` or `consume`)|| -|`--role`|The client role to which to grant the permissions|| - - -### `revoke-permission` -Revoke permissions on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces revoke-permission tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--role`|The client role to which to revoke the permissions|| - -### `grant-subscription-permission` -Grant permissions to access the subscription admin API - -Usage - -```bash - -$ pulsar-admin namespaces grant-subscription-permission tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--roles`|The client roles to which to grant the permissions (comma-separated roles)|| -|`--subscription`|The subscription name for which permission will be granted to roles|| - -### `revoke-subscription-permission` -Revoke permissions to access the subscription admin API - -Usage - -```bash - -$ pulsar-admin namespaces revoke-subscription-permission tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--role`|The client role to which to revoke the permissions|| -|`--subscription`|The subscription name for which permission will be revoked from roles|| - 
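- -For example, a role can be granted access to a subscription's admin API and later have it revoked (the tenant, namespace, subscription, and role names below are placeholders): - -```bash - -$ pulsar-admin namespaces grant-subscription-permission my-tenant/my-ns \ ---subscription my-sub \ ---roles app-role-1,app-role-2 - -$ pulsar-admin namespaces revoke-subscription-permission my-tenant/my-ns \ ---subscription my-sub \ ---role app-role-1 - -``` -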
-### `set-clusters` -Set replication clusters for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-clusters tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-c`, `--clusters`|Replication clusters ID list (comma-separated values)|| - - -### `get-clusters` -Get replication clusters for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-clusters tenant/namespace - -``` - -### `get-backlog-quotas` -Get the backlog quota policies for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-backlog-quotas tenant/namespace - -``` - -### `set-backlog-quota` -Set a backlog quota policy for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-backlog-quota tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-l`, `--limit`|The backlog size limit (for example `10M` or `16G`)|| -|`-p`, `--policy`|The retention policy to enforce when the limit is reached. The valid options are: `producer_request_hold`, `producer_exception` or `consumer_backlog_eviction`|| - -Example - -```bash - -$ pulsar-admin namespaces set-backlog-quota my-tenant/my-ns \ ---limit 2G \ ---policy producer_request_hold - -``` - -### `remove-backlog-quota` -Remove a backlog quota policy from a namespace - -Usage - -```bash - -$ pulsar-admin namespaces remove-backlog-quota tenant/namespace - -``` - -### `get-persistence` -Get the persistence policies for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-persistence tenant/namespace - -``` - -### `set-persistence` -Set the persistence policies for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-persistence tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-a`, `--bookkeeper-ack-quorum`|The number of acks (guaranteed copies) to wait for each entry|0| -|`-e`, `--bookkeeper-ensemble`|The number of bookies to use for a topic|0| -|`-w`, `--bookkeeper-write-quorum`|The number of writes to make for each entry|0| -|`-r`, `--ml-mark-delete-max-rate`|Throttling rate of mark-delete operation (0 means no throttle)|| - - -### `get-message-ttl` -Get the message TTL for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-message-ttl tenant/namespace - -``` - -### `set-message-ttl` -Set the message TTL for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-message-ttl tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-ttl`, `--messageTTL`|Message TTL in seconds. When the value is set to `0`, TTL is disabled. TTL is disabled by default.|0| -
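- -For example, to expire messages after two hours in a hypothetical namespace (the tenant and namespace names are placeholders): - -```bash - -$ pulsar-admin namespaces set-message-ttl my-tenant/my-ns \ ---messageTTL 7200 - -``` -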
- -### `remove-message-ttl` -Remove the message TTL for a namespace. - -Usage - -```bash - -$ pulsar-admin namespaces remove-message-ttl tenant/namespace - -``` - -### `get-anti-affinity-group` -Get the anti-affinity group name for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-anti-affinity-group tenant/namespace - -``` - -### `set-anti-affinity-group` -Set the anti-affinity group name for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-anti-affinity-group tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-g`, `--group`|Anti-affinity group name|| - -### `get-anti-affinity-namespaces` -Get the namespaces grouped under the given anti-affinity group name - -Usage - -```bash - -$ pulsar-admin namespaces get-anti-affinity-namespaces options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-c`, `--cluster`|Cluster name|| -|`-g`, `--group`|Anti-affinity group name|| -|`-p`, `--tenant`|The tenant is only used for authorization. The client has to be an admin of the tenant to access this API|| - -### `delete-anti-affinity-group` -Remove the anti-affinity group name for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces delete-anti-affinity-group tenant/namespace - -``` - -### `get-retention` -Get the retention policy that is applied to each topic within the specified namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-retention tenant/namespace - -``` - -### `set-retention` -Set the retention policy for each topic within the specified namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-retention tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-s`, `--size`|The retention size limits (for example 10M, 16G or 3T) for each topic in the namespace. 0 means no retention and -1 means infinite size retention|| -|`-t`, `--time`|The retention time in minutes, hours, days, or weeks. Examples: 100m, 13h, 2d, 5w. 0 means no retention and -1 means infinite time retention|| - - -### `unload` -Unload a namespace or namespace bundle from the current serving broker. - -Usage - -```bash - -$ pulsar-admin namespaces unload tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-b`, `--bundle`|{start-boundary}_{end-boundary} (e.g. 0x00000000_0xffffffff)|| -
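- -For example, to unload a single bundle of a hypothetical namespace from its current broker: - -```bash - -$ pulsar-admin namespaces unload my-tenant/my-ns \ ---bundle 0x00000000_0xffffffff - -``` -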
-### `split-bundle` -Split a namespace-bundle from the current serving broker - -Usage - -```bash - -$ pulsar-admin namespaces split-bundle tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-b`, `--bundle`|{start-boundary}_{end-boundary} (e.g. 0x00000000_0xffffffff)|| -|`-u`, `--unload`|Unload newly split bundles after splitting the old bundle|false| - -### `set-dispatch-rate` -Set the message dispatch rate for all topics of the namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-dispatch-rate tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-bd`, `--byte-dispatch-rate`|The byte dispatch rate (the default of -1 is used if not passed)|-1| -|`-dt`, `--dispatch-rate-period`|The dispatch rate period, in seconds (the default of 1 second is used if not passed)|1| -|`-md`, `--msg-dispatch-rate`|The message dispatch rate (the default of -1 is used if not passed)|-1| - -### `get-dispatch-rate` -Get the configured message dispatch rate for all topics of the namespace (disabled if the value is < 0) - -Usage - -```bash - -$ pulsar-admin namespaces get-dispatch-rate tenant/namespace - -``` - -### `set-replicator-dispatch-rate` -Set the replicator message dispatch rate for all topics of the namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-replicator-dispatch-rate tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-bd`, `--byte-dispatch-rate`|The byte dispatch rate (the default of -1 is used if not passed)|-1| -|`-dt`, `--dispatch-rate-period`|The dispatch rate period, in seconds (the default of 1 second is used if not passed)|1| -|`-md`, `--msg-dispatch-rate`|The message dispatch rate (the default of -1 is used if not passed)|-1| - -### `get-replicator-dispatch-rate` -Get the configured replicator message dispatch rate for all topics of the namespace (disabled if the value is < 0) - -Usage - -```bash - -$ pulsar-admin namespaces get-replicator-dispatch-rate tenant/namespace - -``` - -### `set-subscribe-rate` -Set the subscribe rate per consumer for all topics of the namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-subscribe-rate tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-sr`, `--subscribe-rate`|The subscribe rate (the default of -1 is used if not passed)|-1| -|`-st`, `--subscribe-rate-period`|The subscribe rate period, in seconds (the default of 30 seconds is used if not passed)|30| - -### `get-subscribe-rate` -Get the configured subscribe rate per consumer for all topics of the namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-subscribe-rate tenant/namespace - -``` - -### `set-subscription-dispatch-rate` -Set the subscription message dispatch rate for all subscriptions of the namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-subscription-dispatch-rate tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-bd`, `--byte-dispatch-rate`|The byte dispatch rate (the default of -1 is used if not passed)|-1| -|`-dt`, `--dispatch-rate-period`|The dispatch rate period, in seconds (the default of 1 second is used if not passed)|1| -|`-md`, `--sub-msg-dispatch-rate`|The message dispatch rate (the default of -1 is used if not passed)|-1| - -### `get-subscription-dispatch-rate` -Get the configured subscription message dispatch rate for all topics of the namespace (disabled if the value is < 0) - -Usage - -```bash - -$ pulsar-admin namespaces get-subscription-dispatch-rate tenant/namespace - -``` -
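- -For example, to cap dispatch for every topic in a hypothetical namespace at 1000 messages per second, measured over a one-second period: - -```bash - -$ pulsar-admin namespaces set-dispatch-rate my-tenant/my-ns \ ---msg-dispatch-rate 1000 \ ---dispatch-rate-period 1 - -``` -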
-### `clear-backlog` -Clear the backlog for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces clear-backlog tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-b`, `--bundle`|{start-boundary}_{end-boundary} (e.g. 0x00000000_0xffffffff)|| -|`-force`, `--force`|Whether to force-clear the backlog without prompting|false| -|`-s`, `--sub`|The subscription name|| - - -### `unsubscribe` -Unsubscribe the given subscription on all topics of a namespace - -Usage - -```bash - -$ pulsar-admin namespaces unsubscribe tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-b`, `--bundle`|{start-boundary}_{end-boundary} (e.g. 0x00000000_0xffffffff)|| -|`-s`, `--sub`|The subscription name|| - -### `set-encryption-required` -Enable or disable message encryption required for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-encryption-required tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-d`, `--disable`|Disable message encryption required|false| -|`-e`, `--enable`|Enable message encryption required|false| - -### `set-delayed-delivery` -Set the delayed delivery policy on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-delayed-delivery tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-d`, `--disable`|Disable delayed delivery messages|false| -|`-e`, `--enable`|Enable delayed delivery messages|false| -|`-t`, `--time`|The tick time for retrying delayed delivery messages|1s| - - -### `get-delayed-delivery` -Get the delayed delivery policy on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-delayed-delivery tenant/namespace - -``` - -### `set-subscription-auth-mode` -Set the subscription auth mode on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-subscription-auth-mode tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-m`, `--subscription-auth-mode`|Subscription authorization mode for Pulsar policies. 
Valid options are: [None, Prefix]|| - -### `get-max-producers-per-topic` -Get maxProducersPerTopic for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-max-producers-per-topic tenant/namespace - -``` - -### `set-max-producers-per-topic` -Set maxProducersPerTopic for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-max-producers-per-topic tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-p`, `--max-producers-per-topic`|maxProducersPerTopic for a namespace|0| - -### `get-max-consumers-per-topic` -Get maxConsumersPerTopic for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-max-consumers-per-topic tenant/namespace - -``` - -### `set-max-consumers-per-topic` -Set maxConsumersPerTopic for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-max-consumers-per-topic tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-c`, `--max-consumers-per-topic`|maxConsumersPerTopic for a namespace|0| - -### `get-max-consumers-per-subscription` -Get maxConsumersPerSubscription for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-max-consumers-per-subscription tenant/namespace - -``` - -### `set-max-consumers-per-subscription` -Set maxConsumersPerSubscription for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-max-consumers-per-subscription tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-c`, `--max-consumers-per-subscription`|maxConsumersPerSubscription for a namespace|0| - -### `get-max-unacked-messages-per-subscription` -Get maxUnackedMessagesPerSubscription for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-max-unacked-messages-per-subscription tenant/namespace - -``` - -### `set-max-unacked-messages-per-subscription` -Set maxUnackedMessagesPerSubscription for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-max-unacked-messages-per-subscription tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-c`, `--max-unacked-messages-per-subscription`|maxUnackedMessagesPerSubscription for a namespace|-1| - -### `get-max-unacked-messages-per-consumer` -Get maxUnackedMessagesPerConsumer for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-max-unacked-messages-per-consumer tenant/namespace - -``` - -### `set-max-unacked-messages-per-consumer` -Set maxUnackedMessagesPerConsumer for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-max-unacked-messages-per-consumer tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-c`, `--max-unacked-messages-per-consumer`|maxUnackedMessagesPerConsumer for a namespace|-1| - - -### `get-compaction-threshold` -Get compactionThreshold for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-compaction-threshold tenant/namespace - -``` - -### `set-compaction-threshold` -Set compactionThreshold for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-compaction-threshold tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-t`, `--threshold`|Maximum number of bytes in a topic backlog before compaction is triggered (eg: 10M, 16G, 3T). 
0 disables automatic compaction|0| - - -### `get-offload-threshold` -Get offloadThreshold for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-offload-threshold tenant/namespace - -``` - -### `set-offload-threshold` -Set offloadThreshold for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-offload-threshold tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-s`, `--size`|Maximum number of bytes stored in the Pulsar cluster for a topic before data starts being automatically offloaded to long-term storage (eg: 10M, 16G, 3T, 100). Negative values disable automatic offload. 0 triggers offloading as soon as possible.|-1| - -### `get-offload-deletion-lag` -Get offloadDeletionLag, in minutes, for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-offload-deletion-lag tenant/namespace - -``` - -### `set-offload-deletion-lag` -Set offloadDeletionLag for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-offload-deletion-lag tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-l`, `--lag`|Duration to wait after offloading a ledger segment before deleting the copy of that segment from cluster-local storage (eg: 10m, 5h, 3d, 2w).|-1| - -### `clear-offload-deletion-lag` -Clear offloadDeletionLag for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces clear-offload-deletion-lag tenant/namespace - -``` - -### `get-schema-autoupdate-strategy` -Get the schema auto-update strategy for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-schema-autoupdate-strategy tenant/namespace - -``` - -### `set-schema-autoupdate-strategy` -Set the schema auto-update strategy for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-schema-autoupdate-strategy tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-c`, `--compatibility`|Compatibility level required for new schemas created via a Producer. Possible values (Full, Backward, Forward, None).|Full| -|`-d`, `--disabled`|Disable automatic schema updates.|false| - -### `get-publish-rate` -Get the message publish rate for each topic in a namespace, in bytes as well as messages per second - -Usage - -```bash - -$ pulsar-admin namespaces get-publish-rate tenant/namespace - -``` - -### `set-publish-rate` -Set the message publish rate for each topic in a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-publish-rate tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-m`, `--msg-publish-rate`|Threshold for number of messages per second per topic in the namespace (-1 implies not set, 0 for no limit).|-1| -|`-b`, `--byte-publish-rate`|Threshold for number of bytes per second per topic in the namespace (-1 implies not set, 0 for no limit).|-1| -
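- -For example, to limit each topic in a hypothetical namespace to 1000 messages and 1 MB (1048576 bytes) per second: - -```bash - -$ pulsar-admin namespaces set-publish-rate my-tenant/my-ns \ ---msg-publish-rate 1000 \ ---byte-publish-rate 1048576 - -``` -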
- -### `set-offload-policies` -Set the offload policy for a namespace. - -Usage - -```bash - -$ pulsar-admin namespaces set-offload-policies tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-d`, `--driver`|Driver to use to offload old data to long-term storage (possible values: S3, aws-s3, google-cloud-storage)|| -|`-r`, `--region`|The long-term storage region|| -|`-b`, `--bucket`|Bucket to place offloaded ledgers into|| -|`-e`, `--endpoint`|Alternative endpoint to connect to|| -|`-i`, `--aws-id`|AWS Credential Id to use when using driver S3 or aws-s3|| -|`-s`, `--aws-secret`|AWS Credential Secret to use when using driver S3 or aws-s3|| -|`-ro`, `--s3-role`|S3 Role used for STSAssumeRoleSessionCredentialsProvider using driver S3 or aws-s3|| -|`-rsn`, `--s3-role-session-name`|S3 role session name used for STSAssumeRoleSessionCredentialsProvider using driver S3 or aws-s3|| -|`-mbs`, `--maxBlockSize`|Max block size|64MB| -|`-rbs`, `--readBufferSize`|Read buffer size|1MB| -|`-oat`, `--offloadAfterThreshold`|Offload after threshold size (eg: 1M, 5M)|| -|`-oae`, `--offloadAfterElapsed`|Offload after elapsed time, in milliseconds (or minutes, hours, days, weeks, eg: 100m, 3h, 2d, 5w).|| - -### `get-offload-policies` -Get the offload policy for a namespace. - -Usage - -```bash - -$ pulsar-admin namespaces get-offload-policies tenant/namespace - -``` - -### `set-max-subscriptions-per-topic` -Set the maximum number of subscriptions per topic for a namespace. - -Usage - -```bash - -$ pulsar-admin namespaces set-max-subscriptions-per-topic tenant/namespace options - -``` - -### `get-max-subscriptions-per-topic` -Get the maximum number of subscriptions per topic for a namespace. - -Usage - -```bash - -$ pulsar-admin namespaces get-max-subscriptions-per-topic tenant/namespace - -``` - -### `remove-max-subscriptions-per-topic` -Remove the maximum number of subscriptions per topic for a namespace. - -Usage - -```bash - -$ pulsar-admin namespaces remove-max-subscriptions-per-topic tenant/namespace - -``` - -## `ns-isolation-policy` -Operations for managing namespace isolation policies. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy subcommand - -``` - -Subcommands -* `set` -* `get` -* `list` -* `delete` -* `brokers` -* `broker` - -### `set` -Create/update a namespace isolation policy for a cluster. This operation requires Pulsar superuser privileges. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy set cluster-name policy-name options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`--auto-failover-policy-params`|Comma-separated name=value auto failover policy parameters|[]| -|`--auto-failover-policy-type`|Auto failover policy type name. Currently available options: min_available.|[]| -|`--namespaces`|Comma-separated namespaces regex list|[]| -|`--primary`|Comma-separated primary broker regex list|[]| -|`--secondary`|Comma-separated secondary broker regex list|[]| - - -### `get` -Get the namespace isolation policy of a cluster. This operation requires Pulsar superuser privileges. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy get cluster-name policy-name - -``` - -### `list` -List all namespace isolation policies of a cluster. This operation requires Pulsar superuser privileges. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy list cluster-name - -``` - -### `delete` -Delete the namespace isolation policy of a cluster. This operation requires superuser privileges. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy delete cluster-name policy-name - -``` -
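- -For example, a hypothetical policy that pins namespaces matching a regex to a primary broker group and fails over when fewer than one primary broker is available (all names and parameter values below are placeholders): - -```bash - -$ pulsar-admin ns-isolation-policy set my-cluster my-policy \ ---namespaces "my-tenant/my-ns.*" \ ---primary "broker-primary.*" \ ---secondary "broker-secondary.*" \ ---auto-failover-policy-type min_available \ ---auto-failover-policy-params min_limit=1,usage_threshold=80 - -``` -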
-### `brokers` -List all brokers with namespace-isolation policies attached to them. This operation requires Pulsar super-user privileges. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy brokers cluster-name - -``` - -### `broker` -Get a broker with namespace-isolation policies attached to it. This operation requires Pulsar super-user privileges. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy broker cluster-name options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`--broker`|Broker name to get namespace-isolation policies attached to it|| - -## `topics` -Operations for managing Pulsar topics (both persistent and non-persistent). - -Usage - -```bash - -$ pulsar-admin topics subcommand - -``` - -From Pulsar 2.7.0, some namespace-level policies are available at the topic level. To enable topic-level policies in Pulsar, you need to configure the following parameters in the `broker.conf` file. - -```shell - -systemTopicEnabled=true -topicLevelPoliciesEnabled=true - -``` - -Subcommands -* `compact` -* `compaction-status` -* `offload` -* `offload-status` -* `create-partitioned-topic` -* `create-missed-partitions` -* `delete-partitioned-topic` -* `create` -* `get-partitioned-topic-metadata` -* `update-partitioned-topic` -* `list-partitioned-topics` -* `list` -* `terminate` -* `permissions` -* `grant-permission` -* `revoke-permission` -* `lookup` -* `bundle-range` -* `delete` -* `unload` -* `create-subscription` -* `subscriptions` -* `unsubscribe` -* `stats` -* `stats-internal` -* `info-internal` -* `partitioned-stats` -* `partitioned-stats-internal` -* `skip` -* `clear-backlog` -* `expire-messages` -* `expire-messages-all-subscriptions` -* `peek-messages` -* `reset-cursor` -* `get-message-by-id` -* `last-message-id` -* `get-backlog-quotas` -* `set-backlog-quota` -* `remove-backlog-quota` -* `get-persistence` -* `set-persistence` -* `remove-persistence` -* `get-message-ttl` -* `set-message-ttl` -* `remove-message-ttl` -* `get-deduplication` -* `set-deduplication` -* `remove-deduplication` -* `get-retention` -* `set-retention` -* `remove-retention` -* `get-dispatch-rate` -* `set-dispatch-rate` -* `remove-dispatch-rate` -* `get-max-unacked-messages-per-subscription` -* `set-max-unacked-messages-per-subscription` -* `remove-max-unacked-messages-per-subscription` -* `get-max-unacked-messages-per-consumer` -* `set-max-unacked-messages-per-consumer` -* `remove-max-unacked-messages-per-consumer` -* `get-delayed-delivery` -* `set-delayed-delivery` -* `remove-delayed-delivery` -* `get-max-producers` -* `set-max-producers` -* `remove-max-producers` -* `get-max-consumers` -* `set-max-consumers` -* `remove-max-consumers` -* `get-compaction-threshold` -* `set-compaction-threshold` -* `remove-compaction-threshold` -* `get-offload-policies` -* `set-offload-policies` -* `remove-offload-policies` -* `get-inactive-topic-policies` -* `set-inactive-topic-policies` -* `remove-inactive-topic-policies` -* `set-max-subscriptions` -* `get-max-subscriptions` -* `remove-max-subscriptions` - -### `compact` -Run compaction on the specified topic (persistent topics only) - -Usage - -```bash - -$ pulsar-admin topics compact persistent://tenant/namespace/topic - -``` - -### `compaction-status` -Check the status of a topic compaction (persistent topics only) - -Usage - -```bash - -$ pulsar-admin topics compaction-status persistent://tenant/namespace/topic - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-w`, `--wait-complete`|Wait for compaction to complete|false| -
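- -For example, to trigger compaction of a hypothetical persistent topic and then block until the compaction finishes: - -```bash - -$ pulsar-admin topics compact persistent://my-tenant/my-ns/my-topic - -$ pulsar-admin topics compaction-status persistent://my-tenant/my-ns/my-topic \ ---wait-complete - -``` -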
- -### `offload` -Trigger offload of data from a topic to long-term storage (e.g. Amazon S3) - -Usage - -```bash - -$ pulsar-admin topics offload persistent://tenant/namespace/topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-s`, `--size-threshold`|The maximum amount of data to keep in BookKeeper for the specific topic|| - - -### `offload-status` -Check the status of data offloading from a topic to long-term storage - -Usage - -```bash - -$ pulsar-admin topics offload-status persistent://tenant/namespace/topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-w`, `--wait-complete`|Wait for offloading to complete|false| - - -### `create-partitioned-topic` -Create a partitioned topic. A partitioned topic must be created before producers can publish to it. - -:::note - -By default, topics are considered inactive 60 seconds after creation and are deleted automatically to prevent generating trash data. -To disable this feature, set `brokerDeleteInactiveTopicsEnabled` to `false`. -To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to your desired value. -For more information about these two parameters, see [here](reference-configuration.md#broker). - -::: - -Usage - -```bash - -$ pulsar-admin topics create-partitioned-topic {persistent|non-persistent}://tenant/namespace/topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-p`, `--partitions`|The number of partitions for the topic|0| - -### `create-missed-partitions` -Try to create missing partitions for a partitioned topic. This can be used to repair a partitioned topic whose partitions were not created, for example when topic auto-creation is disabled. - -Usage - -```bash - -$ pulsar-admin topics create-missed-partitions persistent://tenant/namespace/topic - -``` - -### `delete-partitioned-topic` -Delete a partitioned topic. This will also delete all the partitions of the topic if they exist. - -Usage - -```bash - -$ pulsar-admin topics delete-partitioned-topic {persistent|non-persistent}://tenant/namespace/topic - -``` - -### `create` -Creates a non-partitioned topic. A non-partitioned topic must be explicitly created by the user if allowAutoTopicCreation or createIfMissing is disabled. - -:::note - -By default, topics are considered inactive 60 seconds after creation and are deleted automatically to prevent generating trash data. -To disable this feature, set `brokerDeleteInactiveTopicsEnabled` to `false`. -To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to your desired value. -For more information about these two parameters, see [here](reference-configuration.md#broker). - -::: - -Usage - -```bash - -$ pulsar-admin topics create {persistent|non-persistent}://tenant/namespace/topic - -``` - -### `get-partitioned-topic-metadata` -Get the partitioned topic metadata. If the topic is not created or is a non-partitioned topic, this will return an empty topic with zero partitions. - -Usage - -```bash - -$ pulsar-admin topics get-partitioned-topic-metadata {persistent|non-persistent}://tenant/namespace/topic - -``` -
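- -For example, to create a hypothetical persistent topic with four partitions: - -```bash - -$ pulsar-admin topics create-partitioned-topic persistent://my-tenant/my-ns/my-topic \ ---partitions 4 - -``` -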
- -### `update-partitioned-topic` -Update an existing non-global partitioned topic. The new number of partitions must be greater than the existing number of partitions. - -Usage - -```bash - -$ pulsar-admin topics update-partitioned-topic {persistent|non-persistent}://tenant/namespace/topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-p`, `--partitions`|The number of partitions for the topic|0| - -### `list-partitioned-topics` -Get the list of partitioned topics under a namespace. - -Usage - -```bash - -$ pulsar-admin topics list-partitioned-topics tenant/namespace - -``` - -### `list` -Get the list of topics under a namespace - -Usage - -```bash - -$ pulsar-admin topics list tenant/namespace - -``` - -### `terminate` -Terminate a persistent topic (disallow further messages from being published on the topic) - -Usage - -```bash - -$ pulsar-admin topics terminate persistent://tenant/namespace/topic - -``` - -### `permissions` -Get the permissions on a topic. Retrieve the effective permissions for a topic. These permissions are defined by the permissions set at the namespace level combined (union) with any eventual specific permissions set on the topic. - -Usage - -```bash - -$ pulsar-admin topics permissions topic - -``` - -### `grant-permission` -Grant a new permission to a client role on a single topic - -Usage - -```bash - -$ pulsar-admin topics grant-permission {persistent|non-persistent}://tenant/namespace/topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--actions`|Actions to be granted (`produce` or `consume`)|| -|`--role`|The client role to which to grant the permissions|| - - -### `revoke-permission` -Revoke permissions from a client role on a single topic. If the permission was not set at the topic level, but rather at the namespace level, this operation will return an error (HTTP status code 412). - -Usage - -```bash - -$ pulsar-admin topics revoke-permission topic - -``` - -### `lookup` -Look up a topic from the current serving broker - -Usage - -```bash - -$ pulsar-admin topics lookup topic - -``` - -### `bundle-range` -Get the namespace bundle which contains the given topic - -Usage - -```bash - -$ pulsar-admin topics bundle-range topic - -``` - -### `delete` -Delete a topic. The topic cannot be deleted if there are any active subscriptions or producers connected to the topic. - -Usage - -```bash - -$ pulsar-admin topics delete topic - -``` - -### `unload` -Unload a topic - -Usage - -```bash - -$ pulsar-admin topics unload topic - -``` - -### `create-subscription` -Create a new subscription on a topic. - -Usage - -```bash - -$ pulsar-admin topics create-subscription [options] persistent://tenant/namespace/topic - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-m`, `--messageId`|The messageId at which to create the subscription. It can be either 'latest', 'earliest' or (ledgerId:entryId)|latest| -|`-s`, `--subscription`|The name of the subscription to create|| - -### `subscriptions` -Get the list of subscriptions on the topic - -Usage - -```bash - -$ pulsar-admin topics subscriptions topic - -``` - -### `unsubscribe` -Delete a durable subscriber from a topic - -Usage - -```bash - -$ pulsar-admin topics unsubscribe topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-s`, `--subscription`|The subscription to delete|| -|`-f`, `--force`|Disconnect and close all consumers and delete the subscription forcefully|false| -
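- -For example, to forcefully delete a hypothetical subscription, disconnecting any connected consumers: - -```bash - -$ pulsar-admin topics unsubscribe persistent://my-tenant/my-ns/my-topic \ ---subscription my-sub \ ---force - -``` -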
- -### `stats` -Get the stats for the topic and its connected producers and consumers. All rates are computed over a 1-minute window and are relative to the last completed 1-minute period. - -Usage - -```bash - -$ pulsar-admin topics stats topic - -``` - -:::note - -The unit of `storageSize` and `averageMsgSize` is bytes. - -::: - -### `stats-internal` -Get the internal stats for the topic - -Usage - -```bash - -$ pulsar-admin topics stats-internal topic - -``` - -### `info-internal` -Get the internal metadata info for the topic - -Usage - -```bash - -$ pulsar-admin topics info-internal topic - -``` - -### `partitioned-stats` -Get the stats for the partitioned topic and its connected producers and consumers. All rates are computed over a 1-minute window and are relative to the last completed 1-minute period. - -Usage - -```bash - -$ pulsar-admin topics partitioned-stats topic options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`--per-partition`|Get per-partition stats|false| - -### `partitioned-stats-internal` -Get the internal stats for the partitioned topic and its connected producers and consumers. All the rates are computed over a 1-minute window and are relative to the last completed 1-minute period. - -Usage - -```bash - -$ pulsar-admin topics partitioned-stats-internal topic - -``` - -### `skip` -Skip some messages for the subscription - -Usage - -```bash - -$ pulsar-admin topics skip topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-n`, `--count`|The number of messages to skip|0| -|`-s`, `--subscription`|The subscription on which to skip messages|| - - -### `clear-backlog` -Clear the backlog (skip all the messages) for the subscription - -Usage - -```bash - -$ pulsar-admin topics clear-backlog topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-s`, `--subscription`|The subscription to clear|| - - -### `expire-messages` -Expire messages that are older than the given expiry time (in seconds) for the subscription. - -Usage - -```bash - -$ pulsar-admin topics expire-messages topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-t`, `--expireTime`|Expire messages older than the time (in seconds)|0| -|`-s`, `--subscription`|The subscription to expire messages on|| - - -### `expire-messages-all-subscriptions` -Expire messages older than the given expiry time (in seconds) for all subscriptions - -Usage - -```bash - -$ pulsar-admin topics expire-messages-all-subscriptions topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-t`, `--expireTime`|Expire messages older than the time (in seconds)|0| - - -### `peek-messages` -Peek some messages for the subscription. - -Usage - -```bash - -$ pulsar-admin topics peek-messages topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-n`, `--count`|The number of messages to peek|0| -|`-s`, `--subscription`|The subscription to get messages from|| -
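- -For example, to peek ten messages from a hypothetical subscription without consuming them: - -```bash - -$ pulsar-admin topics peek-messages persistent://my-tenant/my-ns/my-topic \ ---subscription my-sub \ ---count 10 - -``` -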
|| - -### `get-message-by-id` -Get message by ledger id and entry id - -Usage - -```bash - -$ pulsar-admin topics get-message-by-id topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-l`, `--ledgerId`|The ledger id |0| -|`-e`, `--entryId`|The entry id |0| - -### `last-message-id` -Get the last commit message ID of the topic. - -Usage - -```bash - -$ pulsar-admin topics last-message-id persistent://tenant/namespace/topic - -``` - -### `get-backlog-quotas` -Get the backlog quota policies for a topic. - -Usage - -```bash - -$ pulsar-admin topics get-backlog-quotas tenant/namespace/topic - -``` - -### `set-backlog-quota` -Set a backlog quota policy for a topic. - -Usage - -```bash - -$ pulsar-admin topics set-backlog-quota tenant/namespace/topic options - -``` - -### `remove-backlog-quota` -Remove a backlog quota policy from a topic. - -Usage - -```bash - -$ pulsar-admin topics remove-backlog-quota tenant/namespace/topic - -``` - -### `get-persistence` -Get the persistence policies for a topic. - -Usage - -```bash - -$ pulsar-admin topics get-persistence tenant/namespace/topic - -``` - -### `set-persistence` -Set the persistence policies for a topic. - -Usage - -```bash - -$ pulsar-admin topics set-persistence tenant/namespace/topic options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-e`, `--bookkeeper-ensemble`|Number of bookies to use for a topic|0| -|`-w`, `--bookkeeper-write-quorum`|How many writes to make of each entry|0| -|`-a`, `--bookkeeper-ack-quorum`|Number of acks (guaranteed copies) to wait for each entry|0| -|`-r`, `--ml-mark-delete-max-rate`|Throttling rate of mark-delete operation (0 means no throttle)|| - -### `remove-persistence` -Remove the persistence policy for a topic. - -Usage - -```bash - -$ pulsar-admin topics remove-persistence tenant/namespace/topic - -``` - -### `get-message-ttl` -Get the message TTL for a topic. - -Usage - -```bash - -$ pulsar-admin topics get-message-ttl tenant/namespace/topic - -``` - -### `set-message-ttl` -Set the message TTL for a topic. - -Usage - -```bash - -$ pulsar-admin topics set-message-ttl tenant/namespace/topic options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-ttl`, `--messageTTL`|Message TTL for a topic in second, allowed range from 1 to `Integer.MAX_VALUE` |0| - -### `remove-message-ttl` -Remove the message TTL for a topic. - -Usage - -```bash - -$ pulsar-admin topics remove-message-ttl tenant/namespace/topic - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--enable`, `-e`|Enable message deduplication on the specified topic.|false| -|`--disable`, `-d`|Disable message deduplication on the specified topic.|false| - -### `get-deduplication` -Get a deduplication policy for a topic. - -Usage - -```bash - -$ pulsar-admin topics get-deduplication tenant/namespace/topic - -``` - -### `set-deduplication` -Set a deduplication policy for a topic. - -Usage - -```bash - -$ pulsar-admin topics set-deduplication tenant/namespace/topic options - -``` - -### `remove-deduplication` -Remove a deduplication policy for a topic. 
- -Usage - -```bash - -$ pulsar-admin topics remove-deduplication tenant/namespace/topic - -``` - -## `tenants` -Operations for managing tenants - -Usage - -```bash - -$ pulsar-admin tenants subcommand - -``` - -Subcommands -* `list` -* `get` -* `create` -* `update` -* `delete` - -### `list` -List the existing tenants - -Usage - -```bash - -$ pulsar-admin tenants list - -``` - -### `get` -Gets the configuration of a tenant - -Usage - -```bash - -$ pulsar-admin tenants get tenant-name - -``` - -### `create` -Creates a new tenant - -Usage - -```bash - -$ pulsar-admin tenants create tenant-name options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-r`, `--admin-roles`|Comma-separated admin roles|| -|`-c`, `--allowed-clusters`|Comma-separated allowed clusters|| - -### `update` -Updates a tenant - -Usage - -```bash - -$ pulsar-admin tenants update tenant-name options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-r`, `--admin-roles`|Comma-separated admin roles|| -|`-c`, `--allowed-clusters`|Comma-separated allowed clusters|| - - -### `delete` -Deletes an existing tenant - -Usage - -```bash - -$ pulsar-admin tenants delete tenant-name - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-f`, `--force`|Delete a tenant forcefully by deleting all namespaces under it.|false| - - -## `resource-quotas` -Operations for managing resource quotas - -Usage - -```bash - -$ pulsar-admin resource-quotas subcommand - -``` - -Subcommands -* `get` -* `set` -* `reset-namespace-bundle-quota` - - -### `get` -Get the resource quota for a specified namespace bundle, or default quota if no namespace/bundle is specified. - -Usage - -```bash - -$ pulsar-admin resource-quotas get options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-b`, `--bundle`|A bundle of the form {start-boundary}_{end_boundary}. This must be specified together with -n/--namespace.|| -|`-n`, `--namespace`|The namespace|| - - -### `set` -Set the resource quota for the specified namespace bundle, or default quota if no namespace/bundle is specified. - -Usage - -```bash - -$ pulsar-admin resource-quotas set options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-bi`, `--bandwidthIn`|The expected inbound bandwidth (in bytes/second)|0| -|`-bo`, `--bandwidthOut`|Expected outbound bandwidth (in bytes/second)0| -|`-b`, `--bundle`|A bundle of the form {start-boundary}_{end_boundary}. This must be specified together with -n/--namespace.|| -|`-d`, `--dynamic`|Allow to be dynamically re-calculated (or not)|false| -|`-mem`, `--memory`|Expectred memory usage (in megabytes)|0| -|`-mi`, `--msgRateIn`|Expected incoming messages per second|0| -|`-mo`, `--msgRateOut`|Expected outgoing messages per second|0| -|`-n`, `--namespace`|The namespace as tenant/namespace, for example my-tenant/my-ns. Must be specified together with -b/--bundle.|| - - -### `reset-namespace-bundle-quota` -Reset the specified namespace bundle's resource quota to a default value. - -Usage - -```bash - -$ pulsar-admin resource-quotas reset-namespace-bundle-quota options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-b`, `--bundle`|A bundle of the form {start-boundary}_{end_boundary}. This must be specified together with -n/--namespace.|| -|`-n`, `--namespace`|The namespace|| - - - -## `schemas` -Operations related to Schemas associated with Pulsar topics. 
- -Usage - -``` - -$ pulsar-admin schemas subcommand - -``` - -Subcommands -* `upload` -* `delete` -* `get` -* `extract` - - -### `upload` -Upload the schema definition for a topic - -Usage - -```bash - -$ pulsar-admin schemas upload persistent://tenant/namespace/topic options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`--filename`|The path to the schema definition file. An example schema file is available under conf directory.|| - - -### `delete` -Delete the schema definition associated with a topic - -Usage - -```bash - -$ pulsar-admin schemas delete persistent://tenant/namespace/topic - -``` - -### `get` -Retrieve the schema definition associated with a topic (at a given version if version is supplied). - -Usage - -```bash - -$ pulsar-admin schemas get persistent://tenant/namespace/topic options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`--version`|The version of the schema definition to retrieve for a topic.|| - -### `extract` -Provide the schema definition for a topic via Java class name contained in a JAR file - -Usage - -```bash - -$ pulsar-admin schemas extract persistent://tenant/namespace/topic options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-c`, `--classname`|The Java class name|| -|`-j`, `--jar`|A path to the JAR file which contains the above Java class|| -|`-t`, `--type`|The type of the schema (avro or json)|| diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/reference-rest-api-overview.md b/site2/website/versioned_docs/version-2.8.2-deprecated/reference-rest-api-overview.md deleted file mode 100644 index 4bdcf23483a2b5..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/reference-rest-api-overview.md +++ /dev/null @@ -1,18 +0,0 @@ ---- -id: reference-rest-api-overview -title: Pulsar REST APIs -sidebar_label: "Pulsar REST APIs" ---- - -A REST API (also known as RESTful API, REpresentational State Transfer Application Programming Interface) is a set of definitions and protocols for building and integrating application software, using HTTP requests to GET, PUT, POST, and DELETE data following the REST standards. In essence, REST API is a set of remote calls using standard methods to request and return data in a specific format between two systems. - -Pulsar provides a variety of REST APIs that enable you to interact with Pulsar to retrieve information or perform an action. - -| REST API category | Description | -| --- | --- | -| [Admin](https://pulsar.apache.org/admin-rest-api/?version=master) | REST APIs for administrative operations.| -| [Functions](https://pulsar.apache.org/functions-rest-api/?version=master) | REST APIs for function-specific operations.| -| [Sources](https://pulsar.apache.org/source-rest-api/?version=master) | REST APIs for source-specific operations.| -| [Sinks](https://pulsar.apache.org/sink-rest-api/?version=master) | REST APIs for sink-specific operations.| -| [Packages](https://pulsar.apache.org/packages-rest-api/?version=master) | REST APIs for package-specific operations. 
A package can be a group of functions, sources, and sinks.| - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/reference-terminology.md b/site2/website/versioned_docs/version-2.8.2-deprecated/reference-terminology.md deleted file mode 100644 index e5099141c3231e..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/reference-terminology.md +++ /dev/null @@ -1,176 +0,0 @@ ---- -id: reference-terminology -title: Pulsar Terminology -sidebar_label: "Terminology" -original_id: reference-terminology ---- - -Here is a glossary of terms related to Apache Pulsar: - -### Concepts - -#### Pulsar - -Pulsar is a distributed messaging system originally created by Yahoo but now under the stewardship of the Apache Software Foundation. - -#### Message - -Messages are the basic unit of Pulsar. They're what [producers](#producer) publish to [topics](#topic) -and what [consumers](#consumer) then consume from topics. - -#### Topic - -A named channel used to pass messages published by [producers](#producer) to [consumers](#consumer) who -process those [messages](#message). - -#### Partitioned Topic - -A topic that is served by multiple Pulsar [brokers](#broker), which enables higher throughput. - -#### Namespace - -A grouping mechanism for related [topics](#topic). - -#### Namespace Bundle - -A virtual group of [topics](#topic) that belong to the same [namespace](#namespace). A namespace bundle -is defined as a range between two 32-bit hashes, such as 0x00000000 and 0xffffffff. - -#### Tenant - -An administrative unit for allocating capacity and enforcing an authentication/authorization scheme. - -#### Subscription - -A lease on a [topic](#topic) established by a group of [consumers](#consumer). Pulsar has four subscription -modes (exclusive, shared, failover and key_shared). - -#### Pub-Sub - -A messaging pattern in which [producer](#producer) processes publish messages on [topics](#topic) that -are then consumed (processed) by [consumer](#consumer) processes. - -#### Producer - -A process that publishes [messages](#message) to a Pulsar [topic](#topic). - -#### Consumer - -A process that establishes a subscription to a Pulsar [topic](#topic) and processes messages published -to that topic by [producers](#producer). - -#### Reader - -Pulsar readers are message processors much like Pulsar [consumers](#consumer) but with two crucial differences: - -- you can specify *where* on a topic readers begin processing messages (consumers always begin with the latest - available unacked message); -- readers don't retain data or acknowledge messages. - -#### Cursor - -The subscription position for a [consumer](#consumer). - -#### Acknowledgment (ack) - -A message sent to a Pulsar broker by a [consumer](#consumer) that a message has been successfully processed. -An acknowledgement (ack) is Pulsar's way of knowing that the message can be deleted from the system; -if no acknowledgement, then the message will be retained until it's processed. - -#### Negative Acknowledgment (nack) - -When an application fails to process a particular message, it can send a "negative ack" to Pulsar -to signal that the message should be replayed at a later timer. (By default, failed messages are -replayed after a 1 minute delay). Be aware that negative acknowledgment on ordered subscription types, -such as Exclusive, Failover and Key_Shared, can cause failed messages to arrive consumers out of the original order. 
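The nack flow can be sketched with the Pulsar Java client as follows; `process` is a hypothetical application callback, and the redelivery delay is the client default unless configured otherwise:

```java

Message<String> msg = consumer.receive();
try {
    process(msg);                      // hypothetical application logic
    consumer.acknowledge(msg);         // mark the message as processed
} catch (Exception e) {
    consumer.negativeAcknowledge(msg); // ask Pulsar to redeliver it later
}

```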
- -#### Unacknowledged - -A message that has been delivered to a consumer for processing but not yet confirmed as processed by the consumer. - -#### Retention Policy - -Size and time limits that you can set on a [namespace](#namespace) to configure retention of [messages](#message) -that have already been [acknowledged](#acknowledgement-ack). - -#### Multi-Tenancy - -The ability to isolate [namespaces](#namespace), specify quotas, and configure authentication and authorization -on a per-[tenant](#tenant) basis. - -#### Failure Domain - -A logical domain under a Pulsar cluster. Each logical domain contains a pre-configured list of brokers. - -#### Anti-affinity Namespaces - -A group of namespaces that have anti-affinity to each other. - -### Architecture - -#### Standalone - -A lightweight Pulsar broker in which all components run in a single Java Virtual Machine (JVM) process. Standalone -clusters can be run on a single machine and are useful for development purposes. - -#### Cluster - -A set of Pulsar [brokers](#broker) and [BookKeeper](#bookkeeper) servers (aka [bookies](#bookie)). -Clusters can reside in different geographical regions and replicate messages to one another -in a process called [geo-replication](#geo-replication). - -#### Instance - -A group of Pulsar [clusters](#cluster) that act together as a single unit. - -#### Geo-Replication - -Replication of messages across Pulsar [clusters](#cluster), potentially in different datacenters -or geographical regions. - -#### Configuration Store - -Pulsar's configuration store (previously known as configuration store) is a ZooKeeper quorum that -is used for configuration-specific tasks. A multi-cluster Pulsar installation requires just one -configuration store across all [clusters](#cluster). - -#### Topic Lookup - -A service provided by Pulsar [brokers](#broker) that enables connecting clients to automatically determine -which Pulsar [cluster](#cluster) is responsible for a [topic](#topic) (and thus where message traffic for -the topic needs to be routed). - -#### Service Discovery - -A mechanism provided by Pulsar that enables connecting clients to use just a single URL to interact -with all the [brokers](#broker) in a [cluster](#cluster). - -#### Broker - -A stateless component of Pulsar [clusters](#cluster) that runs two other components: an HTTP server -exposing a REST interface for administration and topic lookup and a [dispatcher](#dispatcher) that -handles all message transfers. Pulsar clusters typically consist of multiple brokers. - -#### Dispatcher - -An asynchronous TCP server used for all data transfers in-and-out a Pulsar [broker](#broker). The Pulsar -dispatcher uses a custom binary protocol for all communications. - -### Storage - -#### BookKeeper - -[Apache BookKeeper](http://bookkeeper.apache.org/) is a scalable, low-latency persistent log storage -service that Pulsar uses to store data. - -#### Bookie - -Bookie is the name of an individual BookKeeper server. It is effectively the storage server of Pulsar. - -#### Ledger - -An append-only data structure in [BookKeeper](#bookkeeper) that is used to persistently store messages in Pulsar [topics](#topic). - -### Functions - -Pulsar Functions are lightweight functions that can consume messages from Pulsar topics, apply custom processing logic, and, if desired, publish results to topics. 
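A minimal sketch of such a function in Java, using the `java.util.function.Function` interface that Pulsar Functions supports; the class name and behavior are illustrative only:

```java

import java.util.function.Function;

// A function that consumes a string from an input topic, upper-cases it,
// and returns it for publication to the configured output topic.
public class UpperCaseFunction implements Function<String, String> {
    @Override
    public String apply(String input) {
        return input.toUpperCase();
    }
}

```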
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/schema-evolution-compatibility.md b/site2/website/versioned_docs/version-2.8.2-deprecated/schema-evolution-compatibility.md deleted file mode 100644 index d36eb7ad8a6131..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/schema-evolution-compatibility.md +++ /dev/null @@ -1,207 +0,0 @@ ---- -id: schema-evolution-compatibility -title: Schema evolution and compatibility -sidebar_label: "Schema evolution and compatibility" -original_id: schema-evolution-compatibility ---- - -Normally, schemas do not stay the same over a long period of time. Instead, they undergo evolutions to satisfy new needs. - -This chapter examines how Pulsar schema evolves and what Pulsar schema compatibility check strategies are. - -## Schema evolution - -Pulsar schema is defined in a data structure called `SchemaInfo`. - -Each `SchemaInfo` stored with a topic has a version. The version is used to manage the schema changes happening within a topic. - -The message produced with `SchemaInfo` is tagged with a schema version. When a message is consumed by a Pulsar client, the Pulsar client can use the schema version to retrieve the corresponding `SchemaInfo` and use the correct schema information to deserialize data. - -### What is schema evolution? - -Schemas store the details of attributes and types. To satisfy new business requirements, you need to update schemas inevitably over time, which is called **schema evolution**. - -Any schema changes affect downstream consumers. Schema evolution ensures that the downstream consumers can seamlessly handle data encoded with both old schemas and new schemas. - -### How Pulsar schema should evolve? - -The answer is Pulsar schema compatibility check strategy. It determines how schema compares old schemas with new schemas in topics. - -For more information, see [Schema compatibility check strategy](#schema-compatibility-check-strategy). - -### How does Pulsar support schema evolution? - -1. When a producer/consumer/reader connects to a broker, the broker deploys the schema compatibility checker configured by `schemaRegistryCompatibilityCheckers` to enforce schema compatibility check. - - The schema compatibility checker is one instance per schema type. - - Currently, Avro and JSON have their own compatibility checkers, while all the other schema types share the default compatibility checker which disables schema evolution. - -2. The producer/consumer/reader sends its client `SchemaInfo` to the broker. - -3. The broker knows the schema type and locates the schema compatibility checker for that type. - -4. The broker uses the checker to check if the `SchemaInfo` is compatible with the latest schema of the topic by applying its compatibility check strategy. - - Currently, the compatibility check strategy is configured at the namespace level and applied to all the topics within that namespace. - -## Schema compatibility check strategy - -Pulsar has 8 schema compatibility check strategies, which are summarized in the following table. - -Suppose that you have a topic containing three schemas (V1, V2, and V3), V1 is the oldest and V3 is the latest: - -| Compatibility check strategy | Definition | Changes allowed | Check against which schema | Upgrade first | -| --- | --- | --- | --- | --- | -| `ALWAYS_COMPATIBLE` | Disable schema compatibility check. | All changes are allowed | All previous versions | Any order | -| `ALWAYS_INCOMPATIBLE` | Disable schema evolution. 
| All changes are disabled | None | None |
| `BACKWARD` | Consumers using the schema V3 can process data written by producers using the schema V3 or V2. | Add optional fields, delete fields | Latest version | Consumers |
| `BACKWARD_TRANSITIVE` | Consumers using the schema V3 can process data written by producers using the schema V3, V2, or V1. | Add optional fields, delete fields | All previous versions | Consumers |
| `FORWARD` | Consumers using the schema V3 or V2 can process data written by producers using the schema V3. | Add fields, delete optional fields | Latest version | Producers |
| `FORWARD_TRANSITIVE` | Consumers using the schema V3, V2, or V1 can process data written by producers using the schema V3. | Add fields, delete optional fields | All previous versions | Producers |
| `FULL` | Backward and forward compatible between the schema V3 and V2. | Modify optional fields | Latest version | Any order |
| `FULL_TRANSITIVE` | Backward and forward compatible among the schema V3, V2, and V1. | Modify optional fields | All previous versions | Any order |

### ALWAYS_COMPATIBLE and ALWAYS_INCOMPATIBLE

| Compatibility check strategy | Definition | Note |
| --- | --- | --- |
| `ALWAYS_COMPATIBLE` | Disable schema compatibility check. | None |
| `ALWAYS_INCOMPATIBLE` | Disable schema evolution, that is, any schema change is rejected. | For all schema types except Avro and JSON, the default schema compatibility check strategy is `ALWAYS_INCOMPATIBLE`. For Avro and JSON, the default is `FULL`. |

#### Example

* Example 1

  In some situations, an application needs to store events of several different types in the same Pulsar topic.

  In particular, when developing a data model in an `Event Sourcing` style, you might have several kinds of events that affect the state of an entity.

  For example, for a user entity, there are `userCreated`, `userAddressChanged` and `userEnquiryReceived` events. The application requires that those events are always read in the same order.

  Consequently, those events need to go in the same Pulsar partition to maintain order. This application can use `ALWAYS_COMPATIBLE` to allow different kinds of events to co-exist in the same topic.

* Example 2

  Sometimes you also need to make incompatible changes, for example, modifying a field type from `string` to `int`.

  In this case, you need to:

  * Upgrade all producers and consumers to the new schema versions at the same time.

  * Optionally, create a new topic and start migrating applications to use the new topic and the new schema, avoiding the need to handle two incompatible versions in the same topic.

### BACKWARD and BACKWARD_TRANSITIVE

Suppose that you have a topic containing three schemas (V1, V2, and V3), where V1 is the oldest and V3 is the latest:

| Compatibility check strategy | Definition | Description |
|---|---|---|
| `BACKWARD` | Consumers using the new schema can process data written by producers using the **last schema**. | The consumers using the schema V3 can process data written by producers using the schema V3 or V2. |
| `BACKWARD_TRANSITIVE` | Consumers using the new schema can process data written by producers using **all previous schemas**. | The consumers using the schema V3 can process data written by producers using the schema V3, V2, or V1. |

#### Example

* Example 1

  Remove a field.

  A consumer constructed to process events without one field can process events written with the old schema containing the field; the consumer simply ignores that field.

* Example 2

  You want to load all Pulsar data into a Hive data warehouse and run SQL queries against the data.

  The same SQL queries must continue to work even if the data changes. To support this, you can evolve the schemas using the `BACKWARD` strategy.

### FORWARD and FORWARD_TRANSITIVE

Suppose that you have a topic containing three schemas (V1, V2, and V3), where V1 is the oldest and V3 is the latest:

| Compatibility check strategy | Definition | Description |
|---|---|---|
| `FORWARD` | Consumers using the **last schema** can process data written by producers using a new schema, even though they may not be able to use the full capabilities of the new schema. | The consumers using the schema V3 or V2 can process data written by producers using the schema V3. |
| `FORWARD_TRANSITIVE` | Consumers using **all previous schemas** can process data written by producers using a new schema. | The consumers using the schema V3, V2, or V1 can process data written by producers using the schema V3. |

#### Example

* Example 1

  Add a field.

  In most data formats, consumers written to process events without new fields can continue doing so even when they receive new events containing new fields.

* Example 2

  If a consumer has application logic tied to a full version of a schema, the application logic may not be updated instantly when the schema evolves.

  In this case, you need to project data with a new schema onto an old schema that the application understands.

  Consequently, you can evolve the schemas using the `FORWARD` strategy to ensure that the old schema can process data encoded with the new schema.
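To make the field-level rules above concrete, here is a minimal Java sketch of the "delete fields" change, assuming the Pulsar Java client is on the classpath and that `SchemaInfo#getSchemaDefinition` is available; the `User` classes and the dropped field are illustrative only:

```java

import org.apache.pulsar.client.api.Schema;

public class CompatibilitySketch {
    // V1 of the payload, as producers originally wrote it.
    public static class UserV1 {
        public String name;
        public int age;
    }

    // V2 deletes the "age" field, which the tables above list as a
    // backward-compatible change.
    public static class UserV2 {
        public String name;
    }

    public static void main(String[] args) {
        // The broker performs the real compatibility check when a client
        // connects; printing the definitions just shows what is compared.
        System.out.println(Schema.AVRO(UserV1.class).getSchemaInfo().getSchemaDefinition());
        System.out.println(Schema.AVRO(UserV2.class).getSchemaInfo().getSchemaDefinition());
    }
}

```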
### FULL and FULL_TRANSITIVE

Suppose that you have a topic containing three schemas (V1, V2, and V3), where V1 is the oldest and V3 is the latest:

| Compatibility check strategy | Definition | Description | Note |
| --- | --- | --- | --- |
| `FULL` | Schemas are both backward and forward compatible, which means: Consumers using the last schema can process data written by producers using the new schema. AND Consumers using the new schema can process data written by producers using the last schema. | Consumers using the schema V3 can process data written by producers using the schema V3 or V2. AND Consumers using the schema V3 or V2 can process data written by producers using the schema V3. | For Avro and JSON, the default schema compatibility check strategy is `FULL`. For all schema types except Avro and JSON, the default is `ALWAYS_INCOMPATIBLE`. |
| `FULL_TRANSITIVE` | The new schema is backward and forward compatible with all previously registered schemas. | Consumers using the schema V3 can process data written by producers using the schema V3, V2 or V1. AND Consumers using the schema V3, V2 or V1 can process data written by producers using the schema V3. | None |

#### Example

In some data formats, for example, Avro, you can define fields with default values. Consequently, adding or removing a field with a default value is a fully compatible change.

:::tip

You can set the schema compatibility check strategy at the namespace or broker level. For how to set the strategy, see [here](schema-manage.md/#set-schema-compatibility-check-strategy).

:::

## Schema verification

When a producer or a consumer tries to connect to a topic, a broker performs some checks to verify a schema.

### Producer

When a producer tries to connect to a topic (setting aside schema auto-creation), a broker does the following checks:

* Check if the schema carried by the producer exists in the schema registry or not.

  * If the schema is already registered, then the producer is connected to a broker and produces messages with that schema.

  * If the schema is not registered, then Pulsar verifies if the schema is allowed to be registered based on the configured compatibility check strategy.

### Consumer

When a consumer tries to connect to a topic, a broker checks if the carried schema is compatible with a registered schema based on the configured schema compatibility check strategy.

| Compatibility check strategy | Check logic |
| --- | --- |
| `ALWAYS_COMPATIBLE` | All pass |
| `ALWAYS_INCOMPATIBLE` | No pass |
| `BACKWARD` | Can read the last schema |
| `BACKWARD_TRANSITIVE` | Can read all schemas |
| `FORWARD` | Can read the last schema |
| `FORWARD_TRANSITIVE` | Can read the last schema |
| `FULL` | Can read the last schema |
| `FULL_TRANSITIVE` | Can read all schemas |

## Order of upgrading clients

The order of upgrading client applications is determined by the compatibility check strategy.

For example, suppose that producers use schemas to write data to Pulsar and consumers use schemas to read data from Pulsar.

| Compatibility check strategy | Upgrade first | Description |
| --- | --- | --- |
| `ALWAYS_COMPATIBLE` | Any order | The compatibility check is disabled. Consequently, you can upgrade the producers and consumers in **any order**. |
| `ALWAYS_INCOMPATIBLE` | None | The schema evolution is disabled. |
| `BACKWARD`, `BACKWARD_TRANSITIVE` | Consumers | There is no guarantee that consumers using the old schema can read data produced using the new schema. Consequently, **upgrade all consumers first**, and then start producing new data. |
| `FORWARD`, `FORWARD_TRANSITIVE` | Producers | There is no guarantee that consumers using the new schema can read data produced using the old schema. Consequently, **upgrade all producers first** to use the new schema and ensure that the data already produced using the old schemas are not available to consumers, and then upgrade the consumers. |
| `FULL`, `FULL_TRANSITIVE` | Any order | There is no guarantee that consumers using the old schema can read data produced using the new schema and consumers using the new schema can read data produced using the old schema. Consequently, you can upgrade the producers and consumers in **any order**. |
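Before upgrading clients in the order above, it can help to confirm which strategy the target namespace actually uses. A sketch with `pulsar-admin`, assuming a `get-schema-compatibility-strategy` subcommand is available in your version; `my-tenant/my-ns` is a placeholder:

```bash

# Check the strategy currently applied to the namespace
$ pulsar-admin namespaces get-schema-compatibility-strategy my-tenant/my-ns

# Pin it explicitly before rolling out new schemas
$ pulsar-admin namespaces set-schema-compatibility-strategy --compatibility BACKWARD my-tenant/my-ns

```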
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/schema-get-started.md b/site2/website/versioned_docs/version-2.8.2-deprecated/schema-get-started.md
deleted file mode 100644
index afacb0fa51f2ef..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/schema-get-started.md
+++ /dev/null
@@ -1,102 +0,0 @@
---
id: schema-get-started
title: Get started
sidebar_label: "Get started"
original_id: schema-get-started
---

This chapter introduces Pulsar schemas and explains why they are important.

## Schema Registry

Type safety is extremely important in any application built around a message bus like Pulsar.

Producers and consumers need some kind of mechanism for coordinating types at the topic level to avoid various potential problems, such as serialization and deserialization issues.

Applications typically adopt one of the following approaches to guarantee type safety in messaging. Both approaches are available in Pulsar, and you're free to adopt one or the other or to mix and match on a per-topic basis.

#### Note
>
> Currently, the Pulsar schema registry is only available for the [Java client](client-libraries-java.md), [CGo client](client-libraries-cgo.md), [Python client](client-libraries-python.md), and [C++ client](client-libraries-cpp.md).

### Client-side approach

Producers and consumers are responsible for not only serializing and deserializing messages (which consist of raw bytes) but also "knowing" which types are being transmitted via which topics.

If a producer is sending temperature sensor data on the topic `topic-1`, consumers of that topic will run into trouble if they attempt to parse that data as moisture sensor readings.

Producers and consumers can send and receive messages consisting of raw byte arrays and leave all type safety enforcement to the application on an "out-of-band" basis.

### Server-side approach

Producers and consumers inform the system which data types can be transmitted via the topic.

With this approach, the messaging system enforces type safety and ensures that producers and consumers remain synced.

Pulsar has a built-in **schema registry** that enables clients to upload data schemas on a per-topic basis. Those schemas dictate which data types are recognized as valid for that topic.

## Why use schema

When a schema is not enabled, Pulsar does not parse data; it takes bytes as inputs and sends bytes as outputs. But data has meaning beyond bytes, so you need to parse it and might encounter parse exceptions, which mainly occur in the following situations:

* The field does not exist

* The field type has changed (for example, `string` is changed to `int`)

There are a few methods to prevent and overcome these exceptions. For example, you can catch exceptions when parse errors occur, but this makes code hard to maintain. Alternatively, you can adopt a schema management system that performs schema evolution without breaking downstream applications and that enforces type safety, to the maximum extent, in the language you are using. That solution is Pulsar Schema.
- -Pulsar schema enables you to use language-specific types of data when constructing and handling messages from simple types like `string` to more complex application-specific types. - -**Example** - -You can use the _User_ class to define the messages sent to Pulsar topics. - -``` - -public class User { - String name; - int age; -} - -``` - -When constructing a producer with the _User_ class, you can specify a schema or not as below. - -### Without schema - -If you construct a producer without specifying a schema, then the producer can only produce messages of type `byte[]`. If you have a POJO class, you need to serialize the POJO into bytes before sending messages. - -**Example** - -``` - -Producer producer = client.newProducer() - .topic(topic) - .create(); -User user = new User("Tom", 28); -byte[] message = … // serialize the `user` by yourself; -producer.send(message); - -``` - -### With schema - -If you construct a producer with specifying a schema, then you can send a class to a topic directly without worrying about how to serialize POJOs into bytes. - -**Example** - -This example constructs a producer with the _JSONSchema_, and you can send the _User_ class to topics directly without worrying about how to serialize it into bytes. - -``` - -Producer producer = client.newProducer(JSONSchema.of(User.class)) - .topic(topic) - .create(); -User user = new User("Tom", 28); -producer.send(user); - -``` - -### Summary - -When constructing a producer with a schema, you do not need to serialize messages into bytes, instead Pulsar schema does this job in the background. diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/schema-manage.md b/site2/website/versioned_docs/version-2.8.2-deprecated/schema-manage.md deleted file mode 100644 index 4607c3b5dcae1c..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/schema-manage.md +++ /dev/null @@ -1,704 +0,0 @@ ---- -id: schema-manage -title: Manage schema -sidebar_label: "Manage schema" -original_id: schema-manage ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -This guide demonstrates the ways to manage schemas: - -* Automatically - - * [Schema AutoUpdate](#schema-autoupdate) - -* Manually - - * [Schema manual management](#schema-manual-management) - - * [Custom schema storage](#custom-schema-storage) - -## Schema AutoUpdate - -If a schema passes the schema compatibility check, Pulsar producer automatically updates this schema to the topic it produces by default. - -### AutoUpdate for producer - -For a producer, the `AutoUpdate` happens in the following cases: - -* If a **topic doesn’t have a schema**, Pulsar registers a schema automatically. - -* If a **topic has a schema**: - - * If a **producer doesn’t carry a schema**: - - * If `isSchemaValidationEnforced` or `schemaValidationEnforced` is **disabled** in the namespace to which the topic belongs, the producer is allowed to connect to the topic and produce data. - - * If `isSchemaValidationEnforced` or `schemaValidationEnforced` is **enabled** in the namespace to which the topic belongs, the producer is rejected and disconnected. - - * If a **producer carries a schema**: - - A broker performs the compatibility check based on the configured compatibility check strategy of the namespace to which the topic belongs. - - * If the schema is registered, a producer is connected to a broker. 
- - * If the schema is not registered: - - * If `isAllowAutoUpdateSchema` sets to **false**, the producer is rejected to connect to a broker. - - * If `isAllowAutoUpdateSchema` sets to **true**: - - * If the schema passes the compatibility check, then the broker registers a new schema automatically for the topic and the producer is connected. - - * If the schema does not pass the compatibility check, then the broker does not register a schema and the producer is rejected to connect to a broker. - -![AutoUpdate Producer](/assets/schema-producer.png) - -### AutoUpdate for consumer - -For a consumer, the `AutoUpdate` happens in the following cases: - -* If a **consumer connects to a topic without a schema** (which means the consumer receiving raw bytes), the consumer can connect to the topic successfully without doing any compatibility check. - -* If a **consumer connects to a topic with a schema**. - - * If a topic does not have all of them (a schema/data/a local consumer and a local producer): - - * If `isAllowAutoUpdateSchema` sets to **true**, then the consumer registers a schema and it is connected to a broker. - - * If `isAllowAutoUpdateSchema` sets to **false**, then the consumer is rejected to connect to a broker. - - * If a topic has one of them (a schema/data/a local consumer and a local producer), then the schema compatibility check is performed. - - * If the schema passes the compatibility check, then the consumer is connected to the broker. - - * If the schema does not pass the compatibility check, then the consumer is rejected to connect to the broker. - -![AutoUpdate Consumer](/assets/schema-consumer.png) - - -### Manage AutoUpdate strategy - -You can use the `pulsar-admin` command to manage the `AutoUpdate` strategy as below: - -* [Enable AutoUpdate](#enable-autoupdate) - -* [Disable AutoUpdate](#disable-autoupdate) - -* [Adjust compatibility](#adjust-compatibility) - -#### Enable AutoUpdate - -To enable `AutoUpdate` on a namespace, you can use the `pulsar-admin` command. - -```bash - -bin/pulsar-admin namespaces set-is-allow-auto-update-schema --enable tenant/namespace - -``` - -#### Disable AutoUpdate - -To disable `AutoUpdate` on a namespace, you can use the `pulsar-admin` command. - -```bash - -bin/pulsar-admin namespaces set-is-allow-auto-update-schema --disable tenant/namespace - -``` - -Once the `AutoUpdate` is disabled, you can only register a new schema using the `pulsar-admin` command. - -#### Adjust compatibility - -To adjust the schema compatibility level on a namespace, you can use the `pulsar-admin` command. - -```bash - -bin/pulsar-admin namespaces set-schema-compatibility-strategy --compatibility tenant/namespace - -``` - -### Schema validation - -By default, `schemaValidationEnforced` is **disabled** for producers: - -* This means a producer without a schema can produce any kind of messages to a topic with schemas, which may result in producing trash data to the topic. - -* This allows non-java language clients that don’t support schema can produce messages to a topic with schemas. - -However, if you want a stronger guarantee on the topics with schemas, you can enable `schemaValidationEnforced` across the whole cluster or on a per-namespace basis. - -#### Enable schema validation - -To enable `schemaValidationEnforced` on a namespace, you can use the `pulsar-admin` command. 
- -```bash - -bin/pulsar-admin namespaces set-schema-validation-enforce --enable tenant/namespace - -``` - -#### Disable schema validation - -To disable `schemaValidationEnforced` on a namespace, you can use the `pulsar-admin` command. - -```bash - -bin/pulsar-admin namespaces set-schema-validation-enforce --disable tenant/namespace - -``` - -## Schema manual management - -To manage schemas, you can use one of the following methods. - -| Method | Description | -| --- | --- | -| **Admin CLI**
You can use the `pulsar-admin` tool to manage Pulsar schemas, brokers, clusters, sources, sinks, topics, tenants and so on. For more information about how to use the `pulsar-admin` tool, see [here](reference-pulsar-admin.md). |
| **REST API** | Pulsar exposes schema related management API in Pulsar's admin RESTful API. You can access the admin RESTful endpoint directly to manage schemas. For more information about how to use the Pulsar REST API, see [here](http://pulsar.apache.org/admin-rest-api/). |
| **Java Admin API** | Pulsar provides a Java admin library. |

### Upload a schema

To upload (register) a new schema for a topic, you can use one of the following methods.

````mdx-code-block


Use the `upload` subcommand.

```bash

$ pulsar-admin schemas upload <topic-name> --filename <schema-definition-file>

```

The `schema-definition-file` is in JSON format.

```json

{
    "type": "",
    "schema": "",
    "properties": {} // the properties associated with the schema
}

```

The `schema-definition-file` includes the following fields:

| Field | Description |
| --- | --- |
| `type` | The schema type. |
| `schema` | The schema definition data, which is encoded in UTF-8 charset. If the schema is a **primitive** schema, this field should be blank. If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition. |
| `properties` | The additional properties associated with the schema. |
Here are examples of the `schema-definition-file` for a JSON schema.

**Example 1**

```json

{
    "type": "JSON",
    "schema": "{\"type\":\"record\",\"name\":\"User\",\"namespace\":\"com.foo\",\"fields\":[{\"name\":\"file1\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"file2\",\"type\":\"string\",\"default\":null},{\"name\":\"file3\",\"type\":[\"null\",\"string\"],\"default\":\"dfdf\"}]}",
    "properties": {}
}

```

**Example 2**

```json

{
    "type": "STRING",
    "schema": "",
    "properties": {
        "key1": "value1"
    }
}

```
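For instance, saving Example 1 above as `my-schema.json` (a hypothetical file name), the upload for a placeholder topic would look like:

```bash

$ pulsar-admin schemas upload persistent://my-tenant/my-ns/my-topic --filename my-schema.json

```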
    - - -Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v2/schemas/:tenant/:namespace/:topic/schema|operation/uploadSchema?version=@pulsar:version_number@} - -The post payload is in JSON format. - -```json - -{ - "type": "", - "schema": "", - "properties": {} // the properties associated with the schema -} - -``` - -The post payload includes the following fields: - -| Field | Description | -| --- | --- | -| `type` | The schema type. | -| `schema` | The schema definition data, which is encoded in UTF 8 charset.
If the schema is a **primitive** schema, this field should be blank. If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition. |
| `properties` | The additional properties associated with the schema. |
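As a concrete sketch of the REST call, assuming a broker serving the admin API on `localhost:8080` and the payload above saved as `payload.json` (placeholder names):

```bash

$ curl -X POST http://localhost:8080/admin/v2/schemas/my-tenant/my-ns/my-topic/schema \
  -H "Content-Type: application/json" \
  -d @payload.json

```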
    - - -```java - -void createSchema(String topic, PostSchemaPayload schemaPayload) - -``` - -The `PostSchemaPayload` includes the following fields: - -| Field | Description | -| --- | --- | -| `type` | The schema type. | -| `schema` | The schema definition data, which is encoded in UTF 8 charset.
If the schema is a **primitive** schema, this field should be blank. If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition. |
| `properties` | The additional properties associated with the schema. |

Here is an example of `PostSchemaPayload`:

```java

PulsarAdmin admin = …;

PostSchemaPayload payload = new PostSchemaPayload();
payload.setType("INT8");
payload.setSchema("");

admin.createSchema("my-tenant/my-ns/my-topic", payload);

```
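If you want to attach the free-form properties mentioned above, here is a sketch under the assumption that `PostSchemaPayload` exposes a `setProperties` setter matching its `properties` field; the key and value are illustrative only:

```java

import java.util.HashMap;
import java.util.Map;

Map<String, String> props = new HashMap<>();
props.put("owner", "data-platform");   // illustrative property only
payload.setProperties(props);          // assumed setter; see note above

```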
    - -
    -```` - -### Get a schema (latest) - -To get the latest schema for a topic, you can use one of the following methods. - -````mdx-code-block - - - - -Use the `get` subcommand. - -```bash - -$ pulsar-admin schemas get - -{ - "version": 0, - "type": "String", - "timestamp": 0, - "data": "string", - "properties": { - "property1": "string", - "property2": "string" - } -} - -``` - - - - -Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v2/schemas/:tenant/:namespace/:topic/schema|operation/getSchema?version=@pulsar:version_number@} - -Here is an example of a response, which is returned in JSON format. - -```json - -{ - "version": "", - "type": "", - "timestamp": "", - "data": "", - "properties": {} // the properties associated with the schema -} - -``` - -The response includes the following fields: - -| Field | Description | -| --- | --- | -| `version` | The schema version, which is a long number. | -| `type` | The schema type. | -| `timestamp` | The timestamp of creating this version of schema. | -| `data` | The schema definition data, which is encoded in UTF 8 charset.
If the schema is a **primitive** schema, this field should be blank. If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition. |
| `properties` | The additional properties associated with the schema. |
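For example, retrieving the latest schema over REST against a broker on `localhost:8080` (placeholder names):

```bash

$ curl http://localhost:8080/admin/v2/schemas/my-tenant/my-ns/my-topic/schema

```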
    - - -```java - -SchemaInfo createSchema(String topic) - -``` - -The `SchemaInfo` includes the following fields: - -| Field | Description | -| --- | --- | -| `name` | The schema name. | -| `type` | The schema type. | -| `schema` | A byte array of the schema definition data, which is encoded in UTF 8 charset.
If the schema is a **primitive** schema, this byte array should be empty. If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition converted to a byte array. |
| `properties` | The additional properties associated with the schema. |

Here is an example of `SchemaInfo`:

```java

PulsarAdmin admin = …;

SchemaInfo si = admin.getSchema("my-tenant/my-ns/my-topic");

```
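A short usage sketch of the returned object, assuming the standard `SchemaInfo` getters:

```java

import java.nio.charset.StandardCharsets;

System.out.println(si.getName());   // schema name
System.out.println(si.getType());   // schema type, e.g. AVRO or JSON
System.out.println(new String(si.getSchema(), StandardCharsets.UTF_8)); // raw definition

```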
    - -
    -```` - -### Get a schema (specific) - -To get a specific version of a schema, you can use one of the following methods. - -````mdx-code-block - - - - -Use the `get` subcommand. - -```bash - -$ pulsar-admin schemas get --version= - -``` - - - - -Send a `GET` request to a schema endpoint: {@inject: endpoint|GET|/admin/v2/schemas/:tenant/:namespace/:topic/schema/:version|operation/getSchema?version=@pulsar:version_number@} - -Here is an example of a response, which is returned in JSON format. - -```json - -{ - "version": "", - "type": "", - "timestamp": "", - "data": "", - "properties": {} // the properties associated with the schema -} - -``` - -The response includes the following fields: - -| Field | Description | -| --- | --- | -| `version` | The schema version, which is a long number. | -| `type` | The schema type. | -| `timestamp` | The timestamp of creating this version of schema. | -| `data` | The schema definition data, which is encoded in UTF 8 charset.
If the schema is a **primitive** schema, this field should be blank. If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition. |
| `properties` | The additional properties associated with the schema. |
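For example, fetching version 1 over REST (the version number and names are placeholders):

```bash

$ curl http://localhost:8080/admin/v2/schemas/my-tenant/my-ns/my-topic/schema/1

```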
    - - -```java - -SchemaInfo createSchema(String topic, long version) - -``` - -The `SchemaInfo` includes the following fields: - -| Field | Description | -| --- | --- | -| `name` | The schema name. | -| `type` | The schema type. | -| `schema` | A byte array of the schema definition data, which is encoded in UTF 8.
If the schema is a **primitive** schema, this byte array should be empty. If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition converted to a byte array. |
| `properties` | The additional properties associated with the schema. |

Here is an example of `SchemaInfo`:

```java

PulsarAdmin admin = …;

SchemaInfo si = admin.getSchema("my-tenant/my-ns/my-topic", 1L);

```
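A small usage sketch that fetches two versions and compares their raw definitions; the version numbers are assumptions:

```java

SchemaInfo v1 = admin.getSchema("my-tenant/my-ns/my-topic", 1L);
SchemaInfo v2 = admin.getSchema("my-tenant/my-ns/my-topic", 2L);
boolean changed = !java.util.Arrays.equals(v1.getSchema(), v2.getSchema());
System.out.println("definition changed between v1 and v2: " + changed);

```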
    - -
    -```` - -### Extract a schema - -To provide a schema via a topic, you can use the following method. - -````mdx-code-block - - - - -Use the `extract` subcommand. - -```bash - -$ pulsar-admin schemas extract --classname --jar --type - -``` - - - - -```` - -### Delete a schema - -To delete a schema for a topic, you can use one of the following methods. - -:::note - -In any case, the **delete** action deletes **all versions** of a schema registered for a topic. - -::: - -````mdx-code-block - - - - -Use the `delete` subcommand. - -```bash - -$ pulsar-admin schemas delete - -``` - - - - -Send a `DELETE` request to a schema endpoint: {@inject: endpoint|DELETE|/admin/v2/schemas/:tenant/:namespace/:topic/schema|operation/deleteSchema?version=@pulsar:version_number@} - -Here is an example of a response, which is returned in JSON format. - -```json - -{ - "version": "", -} - -``` - -The response includes the following field: - -Field | Description | ----|---| -`version` | The schema version, which is a long number. | - - - - -```java - -void deleteSchema(String topic) - -``` - -Here is an example of deleting a schema. - -```java - -PulsarAdmin admin = …; - -admin.deleteSchema("my-tenant/my-ns/my-topic"); - -``` - - - - -```` - -## Custom schema storage - -By default, Pulsar stores various data types of schemas in [Apache BookKeeper](https://bookkeeper.apache.org) deployed alongside Pulsar. - -However, you can use another storage system if needed. - -### Implement - -To use a non-default (non-BookKeeper) storage system for Pulsar schemas, you need to implement the following Java interfaces: - -* [SchemaStorage interface](#schemastorage-interface) - -* [SchemaStorageFactory interface](#schemastoragefactory-interface) - -#### SchemaStorage interface - -The `SchemaStorage` interface has the following methods: - -```java - -public interface SchemaStorage { - // How schemas are updated - CompletableFuture put(String key, byte[] value, byte[] hash); - - // How schemas are fetched from storage - CompletableFuture get(String key, SchemaVersion version); - - // How schemas are deleted - CompletableFuture delete(String key); - - // Utility method for converting a schema version byte array to a SchemaVersion object - SchemaVersion versionFromBytes(byte[] version); - - // Startup behavior for the schema storage client - void start() throws Exception; - - // Shutdown behavior for the schema storage client - void close() throws Exception; -} - -``` - -:::tip - -For a complete example of **schema storage** implementation, see [BookKeeperSchemaStorage](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/schema/BookkeeperSchemaStorage.java) class. - -::: - -#### SchemaStorageFactory interface - -The `SchemaStorageFactory` interface has the following method: - -```java - -public interface SchemaStorageFactory { - @NotNull - SchemaStorage create(PulsarService pulsar) throws Exception; -} - -``` - -:::tip - -For a complete example of **schema storage factory** implementation, see [BookKeeperSchemaStorageFactory](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/schema/BookkeeperSchemaStorageFactory.java) class. - -::: - -### Deploy - -To use your custom schema storage implementation, perform the following steps. - -1. Package the implementation in a [JAR](https://docs.oracle.com/javase/tutorial/deployment/jar/basicsindex.html) file. - -2. 
Add the JAR file to the `lib` folder in your Pulsar binary or source distribution. - -3. Change the `schemaRegistryStorageClassName` configuration in `broker.conf` to your custom factory class. - -4. Start Pulsar. - -## Set schema compatibility check strategy - -You can set [schema compatibility check strategy](schema-evolution-compatibility.md#schema-compatibility-check-strategy) at namespace or broker level. - -- If you set schema compatibility check strategy at both namespace or broker level, it uses the strategy set for the namespace level. - -- If you do not set schema compatibility check strategy at both namespace or broker level, it uses the `FULL` strategy. - -- If you set schema compatibility check strategy at broker level rather than namespace level, it uses the strategy set for the broker level. - -- If you set schema compatibility check strategy at namespace level rather than broker level, it uses the strategy set for the namespace level. - -### Namespace - -You can set schema compatibility check strategy at namespace level using one of the following methods. - -````mdx-code-block - - - - -Use the [`pulsar-admin namespaces set-schema-compatibility-strategy`](https://pulsar.apache.org/tools/pulsar-admin/) command. - -```shell - -pulsar-admin namespaces set-schema-compatibility-strategy options - -``` - - - - -Send a `PUT` request to this endpoint: {@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace|operation/schemaCompatibilityStrategy?version=@pulsar:version_number@} - - - - -Use the [`setSchemaCompatibilityStrategy`](https://pulsar.apache.org/api/admin/)method. - -```java - -admin.namespaces().setSchemaCompatibilityStrategy("test", SchemaCompatibilityStrategy.FULL); - -``` - - - - -```` - -### Broker - -You can set schema compatibility check strategy at broker level by setting `schemaCompatibilityStrategy` in [`broker.conf`](https://github.com/apache/pulsar/blob/f24b4890c278f72a67fe30e7bf22dc36d71aac6a/conf/broker.conf#L1240) or [`standalone.conf`](https://github.com/apache/pulsar/blob/master/conf/standalone.conf) file. - -**Example** - -``` - -schemaCompatibilityStrategy=ALWAYS_INCOMPATIBLE - -``` - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/schema-understand.md b/site2/website/versioned_docs/version-2.8.2-deprecated/schema-understand.md deleted file mode 100644 index a86b02add435e1..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/schema-understand.md +++ /dev/null @@ -1,556 +0,0 @@ ---- -id: schema-understand -title: Understand schema -sidebar_label: "Understand schema" -original_id: schema-understand ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -This chapter explains the basic concepts of Pulsar schema, focuses on the topics of particular importance, and provides additional background. - -## SchemaInfo - -Pulsar schema is defined in a data structure called `SchemaInfo`. - -The `SchemaInfo` is stored and enforced on a per-topic basis and cannot be stored at the namespace or tenant level. - -A `SchemaInfo` consists of the following fields: - -| Field | Description | -| --- | --- | -| `name` | Schema name (a string). | -| `type` | Schema type, which determines how to interpret the schema data.
Predefined schema: see [here](schema-understand.md#schema-type). Customized schema: it is left as an empty string. |
| `schema`(`payload`) | Schema data, which is a sequence of 8-bit unsigned bytes and schema-type specific. |
| `properties` | A user-defined properties map (string/string). Applications can use this bag to carry any application-specific logic. Possible properties might be the Git hash associated with the schema, or an environment string like `dev` or `prod`. |

**Example**

This is the `SchemaInfo` of a string.

```json

{
    "name": "test-string-schema",
    "type": "STRING",
    "schema": "",
    "properties": {}
}

```

## Schema type

Pulsar supports various schema types, which are mainly divided into two categories:

* Primitive type

* Complex type

### Primitive type

Currently, Pulsar supports the following primitive types:

| Primitive Type | Description |
|---|---|
| `BOOLEAN` | A binary value |
| `INT8` | An 8-bit signed integer |
| `INT16` | A 16-bit signed integer |
| `INT32` | A 32-bit signed integer |
| `INT64` | A 64-bit signed integer |
| `FLOAT` | A single-precision (32-bit) IEEE 754 floating-point number |
| `DOUBLE` | A double-precision (64-bit) IEEE 754 floating-point number |
| `BYTES` | A sequence of 8-bit unsigned bytes |
| `STRING` | A Unicode character sequence |
| `TIMESTAMP` (`DATE`, `TIME`) | A logical type that represents a specific instant in time with millisecond precision.
    It stores the number of milliseconds since `January 1, 1970, 00:00:00 GMT` as an `INT64` value | -| INSTANT | A single instantaneous point on the time-line with nanoseconds precision| -| LOCAL_DATE | An immutable date-time object that represents a date, often viewed as year-month-day| -| LOCAL_TIME | An immutable date-time object that represents a time, often viewed as hour-minute-second. Time is represented to nanosecond precision.| -| LOCAL_DATE_TIME | An immutable date-time object that represents a date-time, often viewed as year-month-day-hour-minute-second | - -For primitive types, Pulsar does not store any schema data in `SchemaInfo`. The `type` in `SchemaInfo` is used to determine how to serialize and deserialize the data. - -Some of the primitive schema implementations can use `properties` to store implementation-specific tunable settings. For example, a `string` schema can use `properties` to store the encoding charset to serialize and deserialize strings. - -The conversions between **Pulsar schema types** and **language-specific primitive types** are as below. - -| Schema Type | Java Type| Python Type | Go Type | -|---|---|---|---| -| BOOLEAN | boolean | bool | bool | -| INT8 | byte | | int8 | -| INT16 | short | | int16 | -| INT32 | int | | int32 | -| INT64 | long | | int64 | -| FLOAT | float | float | float32 | -| DOUBLE | double | float | float64| -| BYTES | byte[], ByteBuffer, ByteBuf | bytes | []byte | -| STRING | string | str | string| -| TIMESTAMP | java.sql.Timestamp | | | -| TIME | java.sql.Time | | | -| DATE | java.util.Date | | | -| INSTANT | java.time.Instant | | | -| LOCAL_DATE | java.time.LocalDate | | | -| LOCAL_TIME | java.time.LocalDateTime | | -| LOCAL_DATE_TIME | java.time.LocalTime | | - -**Example** - -This example demonstrates how to use a string schema. - -1. Create a producer with a string schema and send messages. - - ```java - - Producer producer = client.newProducer(Schema.STRING).create(); - producer.newMessage().value("Hello Pulsar!").send(); - - ``` - -2. Create a consumer with a string schema and receive messages. - - ```java - - Consumer consumer = client.newConsumer(Schema.STRING).subscribe(); - consumer.receive(); - - ``` - -### Complex type - -Currently, Pulsar supports the following complex types: - -| Complex Type | Description | -|---|---| -| `keyvalue` | Represents a complex type of a key/value pair. | -| `struct` | Handles structured data. It supports `AvroBaseStructSchema` and `ProtobufNativeSchema`. | - -#### keyvalue - -`Keyvalue` schema helps applications define schemas for both key and value. - -For `SchemaInfo` of `keyvalue` schema, Pulsar stores the `SchemaInfo` of key schema and the `SchemaInfo` of value schema together. - -Pulsar provides the following methods to encode a key/value pair in messages: - -* `INLINE` - -* `SEPARATED` - -You can choose the encoding type when constructing the key/value schema. - -````mdx-code-block - - - - -Key/value pairs are encoded together in the message payload. - - - - -Key is encoded in the message key and the value is encoded in the message payload. - -**Example** - -This example shows how to construct a key/value schema and then use it to produce and consume messages. - -1. Construct a key/value schema with `INLINE` encoding type. - - ```java - - Schema> kvSchema = Schema.KeyValue( - Schema.INT32, - Schema.STRING, - KeyValueEncodingType.INLINE - ); - - ``` - -2. Optionally, construct a key/value schema with `SEPARATED` encoding type. 
-
-   ```java
-
-   Schema<KeyValue<Integer, String>> kvSchema = Schema.KeyValue(
-       Schema.INT32,
-       Schema.STRING,
-       KeyValueEncodingType.SEPARATED
-   );
-
-   ```
-
-3. Produce messages using a key/value schema.
-
-   ```java
-
-   Schema<KeyValue<Integer, String>> kvSchema = Schema.KeyValue(
-       Schema.INT32,
-       Schema.STRING,
-       KeyValueEncodingType.SEPARATED
-   );
-
-   Producer<KeyValue<Integer, String>> producer = client.newProducer(kvSchema)
-       .topic(TOPIC)
-       .create();
-
-   final int key = 100;
-   final String value = "value-100";
-
-   // send the key/value message
-   producer.newMessage()
-       .value(new KeyValue<>(key, value))
-       .send();
-
-   ```
-
-4. Consume messages using a key/value schema.
-
-   ```java
-
-   Schema<KeyValue<Integer, String>> kvSchema = Schema.KeyValue(
-       Schema.INT32,
-       Schema.STRING,
-       KeyValueEncodingType.SEPARATED
-   );
-
-   Consumer<KeyValue<Integer, String>> consumer = client.newConsumer(kvSchema)
-       ...
-       .topic(TOPIC)
-       .subscriptionName(SubscriptionName).subscribe();
-
-   // receive key/value pair
-   Message<KeyValue<Integer, String>> msg = consumer.receive();
-   KeyValue<Integer, String> kv = msg.getValue();
-
-   ```
-
-````
-
-#### struct
-
-This section describes the details of the type and usage of the `struct` schema.
-
-##### Type
-
-`struct` schema supports `AvroBaseStructSchema` and `ProtobufNativeSchema`.
-
-|Type|Description|
----|---|
-`AvroBaseStructSchema`|Pulsar uses the [Avro Specification](http://avro.apache.org/docs/current/spec.html) to declare the schema definition for `AvroBaseStructSchema`, which supports `AvroSchema`, `JsonSchema`, and `ProtobufSchema`.

    This allows Pulsar:
    - to use the same tools to manage schema definitions
- to use different serialization or deserialization methods to handle data|
-`ProtobufNativeSchema`|`ProtobufNativeSchema` is based on the Protobuf native `Descriptor`.

    This allows Pulsar:
    - to use native protobuf-v3 to serialize or deserialize data
- to use `AutoConsume` to deserialize data.
-
-##### Usage
-
-Pulsar provides the following methods to use the `struct` schema:
-
-* `static`
-
-* `generic`
-
-* `SchemaDefinition`
-
-````mdx-code-block
-
-You can predefine the `struct` schema, which can be a POJO in Java, a `struct` in Go, or classes generated by Avro or Protobuf tools.
-
-**Example**
-
-Pulsar gets the schema definition from the predefined `struct` using an Avro library. The schema definition is the schema data stored as a part of the `SchemaInfo`.
-
-1. Create the _User_ class to define the messages sent to Pulsar topics.
-
-   ```java
-
-   @Builder
-   @AllArgsConstructor
-   @NoArgsConstructor
-   public static class User {
-       String name;
-       int age;
-   }
-
-   ```
-
-2. Create a producer with a `struct` schema and send messages.
-
-   ```java
-
-   Producer<User> producer = client.newProducer(Schema.AVRO(User.class)).create();
-   producer.newMessage().value(User.builder().name("pulsar-user").age(1).build()).send();
-
-   ```
-
-3. Create a consumer with a `struct` schema and receive messages.
-
-   ```java
-
-   Consumer<User> consumer = client.newConsumer(Schema.AVRO(User.class)).subscribe();
-   User user = consumer.receive().getValue();
-
-   ```
-
-Sometimes applications do not have pre-defined structs, and you can use this method to define a schema and access data.
-
-You can define the `struct` schema using `GenericSchemaBuilder`, generate a generic struct using `GenericRecordBuilder`, and consume messages into `GenericRecord`.
-
-**Example**
-
-1. Use `RecordSchemaBuilder` to build a schema.
-
-   ```java
-
-   RecordSchemaBuilder recordSchemaBuilder = SchemaBuilder.record("schemaName");
-   recordSchemaBuilder.field("intField").type(SchemaType.INT32);
-   SchemaInfo schemaInfo = recordSchemaBuilder.build(SchemaType.AVRO);
-
-   GenericSchema<GenericRecord> schema = Schema.generic(schemaInfo);
-   Producer<GenericRecord> producer = client.newProducer(schema).create();
-
-   ```
-
-2. Use `RecordBuilder` to build the struct records.
-
-   ```java
-
-   producer.newMessage().value(schema.newRecordBuilder()
-       .set("intField", 32)
-       .build()).send();
-
-   ```
-
-You can define the `schemaDefinition` to generate a `struct` schema.
-
-**Example**
-
-1. Create the _User_ class to define the messages sent to Pulsar topics.
-
-   ```java
-
-   @Builder
-   @AllArgsConstructor
-   @NoArgsConstructor
-   public static class User {
-       String name;
-       int age;
-   }
-
-   ```
-
-2. Create a producer with a `SchemaDefinition` and send messages.
-
-   ```java
-
-   SchemaDefinition<User> schemaDefinition = SchemaDefinition.<User>builder().withPojo(User.class).build();
-   Producer<User> producer = client.newProducer(Schema.AVRO(schemaDefinition)).create();
-   producer.newMessage().value(User.builder().name("pulsar-user").age(1).build()).send();
-
-   ```
-
-3. Create a consumer with a `SchemaDefinition` schema and receive messages.
-
-   ```java
-
-   SchemaDefinition<User> schemaDefinition = SchemaDefinition.<User>builder().withPojo(User.class).build();
-   Consumer<User> consumer = client.newConsumer(Schema.AVRO(schemaDefinition)).subscribe();
-   User user = consumer.receive().getValue();
-
-   ```
-
-````
-
-### Auto Schema
-
-If you don't know the schema type of a Pulsar topic in advance, you can use AUTO schema to produce or consume generic records to or from brokers.
-
-| Auto Schema Type | Description |
-|---|---|
-| `AUTO_PRODUCE` | This is useful for transferring data **from a producer to a Pulsar topic that has a schema**. |
-| `AUTO_CONSUME` | This is useful for transferring data **from a Pulsar topic that has a schema to a consumer**. |
-
-#### AUTO_PRODUCE
-
-`AUTO_PRODUCE` schema helps a producer validate whether the bytes it sends are compatible with the schema of a topic.
-
-**Example**
-
-Suppose that:
-
-* You have a producer processing messages from a Kafka topic _K_.
-
-* You have a Pulsar topic _P_, and you do not know its schema type.
-
-* Your application reads the messages from _K_ and writes the messages to _P_.
-
-In this case, you can use `AUTO_PRODUCE` to verify whether the bytes produced by _K_ can be sent to _P_ or not.
-
-```java
-
-Producer<byte[]> pulsarProducer = client.newProducer(Schema.AUTO_PRODUCE_BYTES())
-    …
-    .create();
-
-byte[] kafkaMessageBytes = … ;
-
-pulsarProducer.send(kafkaMessageBytes);
-
-```
-
-#### AUTO_CONSUME
-
-`AUTO_CONSUME` schema helps a consumer validate whether the bytes received from a topic are compatible with the consumer; that is, the messages are deserialized into language-specific objects using the `SchemaInfo` retrieved from the broker side.
-
-Currently, `AUTO_CONSUME` supports AVRO, JSON and ProtobufNativeSchema schemas. It deserializes messages into `GenericRecord`.
-
-**Example**
-
-Suppose that:
-
-* You have a Pulsar topic _P_.
-
-* You have a consumer (for example, MySQL) receiving messages from the topic _P_.
-
-* Your application reads the messages from _P_ and writes the messages to MySQL.
-
-In this case, you can use `AUTO_CONSUME` to verify whether the bytes produced by _P_ can be sent to MySQL or not.
-
-```java
-
-Consumer<GenericRecord> pulsarConsumer = client.newConsumer(Schema.AUTO_CONSUME())
-    …
-    .subscribe();
-
-Message<GenericRecord> msg = pulsarConsumer.receive();
-GenericRecord record = msg.getValue();
-
-```
-
-## Schema version
-
-Each `SchemaInfo` stored with a topic has a version. The schema version manages schema changes happening within a topic.
-
-Messages produced with a given `SchemaInfo` are tagged with a schema version, so when a message is consumed by a Pulsar client, the Pulsar client can use the schema version to retrieve the corresponding `SchemaInfo` and then use the `SchemaInfo` to deserialize data.
-
-Schemas are versioned in succession. Schema storage happens in the broker that handles the associated topics so that version assignments can be made.
-
-Once a version is assigned to or fetched for a schema, all subsequent messages produced by that producer are tagged with the appropriate version.
-
-**Example**
-
-The following example illustrates how the schema version works.
-
-Suppose that a Pulsar [Java client](client-libraries-java.md) created using the code below attempts to connect to Pulsar and begins to send messages:
-
-```java
-
-PulsarClient client = PulsarClient.builder()
-    .serviceUrl("pulsar://localhost:6650")
-    .build();
-
-Producer<SensorReading> producer = client.newProducer(JSONSchema.of(SensorReading.class))
-    .topic("sensor-data")
-    .sendTimeout(3, TimeUnit.SECONDS)
-    .create();
-
-```
-
-The table below lists the possible scenarios when this connection attempt occurs and what happens in each scenario:
-
-| Scenario | What happens |
-| --- | --- |
-| No schema exists for the topic. | (1) The producer is created using the given schema. (2) Since no existing schema is compatible with the `SensorReading` schema, the schema is transmitted to the broker and stored. (3) Any consumer created using the same schema or topic can consume messages from the `sensor-data` topic. |
-| A schema already exists.<br />The producer connects using the same schema that is already stored. | (1) The schema is transmitted to the broker. (2) The broker determines that the schema is compatible. (3) The broker attempts to store the schema in [BookKeeper](concepts-architecture-overview.md#persistent-storage) but then determines that it's already stored, so it is used to tag produced messages. |
-| A schema already exists.<br />The producer connects using a new schema that is compatible. | (1) The schema is transmitted to the broker. (2) The broker determines that the schema is compatible and stores the new schema as the current version (with a new version number). |
-
-## How schema works
-
-Pulsar schemas are applied and enforced at the **topic** level (schemas cannot be applied at the namespace or tenant level).
-
-Producers and consumers upload schemas to brokers, so Pulsar schemas work on both the producer side and the consumer side.
-
-### Producer side
-
-This diagram illustrates how schema works on the producer side.
-
-![Schema works at the producer side](/assets/schema-producer.png)
-
-1. The application uses a schema instance to construct a producer instance.
-
-   The schema instance defines the schema for the data being produced using the producer instance.
-
-   Take AVRO as an example: Pulsar extracts the schema definition from the POJO class and constructs the `SchemaInfo` that the producer needs to pass to a broker when it connects.
-
-2. The producer connects to the broker with the `SchemaInfo` extracted from the passed-in schema instance.
-
-3. The broker looks up the schema in the schema storage to check if it is already a registered schema.
-
-4. If yes, the broker skips the schema validation since it is a known schema, and returns the schema version to the producer.
-
-5. If no, the broker verifies whether a schema can be automatically created in this namespace:
-
-   * If `isAllowAutoUpdateSchema` is set to **true**, then a schema can be created, and the broker validates the schema based on the schema compatibility check strategy defined for the topic.
-
-   * If `isAllowAutoUpdateSchema` is set to **false**, then a schema cannot be created, and the producer is rejected when it connects to the broker.
-
-**Tip**:
-
-`isAllowAutoUpdateSchema` can be set via **Pulsar admin API** or **REST API.**
-
-For how to set `isAllowAutoUpdateSchema` via Pulsar admin API, see [Manage AutoUpdate Strategy](schema-manage.md/#manage-autoupdate-strategy).
-
-6. If the schema is allowed to be updated, then the compatibility check is performed.
-
-   * If the schema is compatible, the broker stores it and returns the schema version to the producer.
-
-     All the messages produced by this producer are tagged with the schema version.
-
-   * If the schema is incompatible, the broker rejects it.
-
-### Consumer side
-
-This diagram illustrates how schema works on the consumer side.
-
-![Schema works at the consumer side](/assets/schema-consumer.png)
-
-1. The application uses a schema instance to construct a consumer instance.
-
-   The schema instance defines the schema that the consumer uses for decoding messages received from a broker.
-
-2. The consumer connects to the broker with the `SchemaInfo` extracted from the passed-in schema instance.
-
-3. The broker determines whether the topic has any of the following: a schema, data, or a local consumer and a local producer.
-
-4. If the topic does not have all of them (a schema, data, or a local consumer and a local producer):
-
-   * If `isAllowAutoUpdateSchema` is set to **true**, then the consumer registers a schema and is connected to the broker.
-
-   * If `isAllowAutoUpdateSchema` is set to **false**, then the consumer is rejected when it connects to the broker.
-
-5. If the topic has one of them (a schema, data, or a local consumer and a local producer), then the schema compatibility check is performed.
-
-   * If the schema passes the compatibility check, then the consumer is connected to the broker.
-
-   * If the schema does not pass the compatibility check, then the consumer is rejected and cannot connect to the broker.
-
-6. The consumer receives messages from the broker.
-
-   If the schema used by the consumer supports schema versioning (for example, AVRO schema), the consumer fetches the `SchemaInfo` of the version tagged in messages and uses the passed-in schema and the schema tagged in messages to decode the messages.
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/security-athenz.md b/site2/website/versioned_docs/version-2.8.2-deprecated/security-athenz.md
deleted file mode 100644
index 8a39fe25316d07..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/security-athenz.md
+++ /dev/null
@@ -1,98 +0,0 @@
----
-id: security-athenz
-title: Authentication using Athenz
-sidebar_label: "Authentication using Athenz"
-original_id: security-athenz
----
-
-[Athenz](https://github.com/AthenZ/athenz) is a role-based authentication/authorization system. In Pulsar, you can use Athenz role tokens (also known as *z-tokens*) to establish the identity of the client.
-
-## Athenz authentication settings
-
-A [decentralized Athenz system](https://github.com/AthenZ/athenz/blob/master/docs/decent_authz_flow.md) contains an [authori**Z**ation **M**anagement **S**ystem](https://github.com/AthenZ/athenz/blob/master/docs/setup_zms.md) (ZMS) server and an [authori**Z**ation **T**oken **S**ystem](https://github.com/AthenZ/athenz/blob/master/docs/setup_zts) (ZTS) server.
-
-To begin, you need to set up Athenz service access control. You need to create domains for the *provider* (which provides some resources to other services with some authentication/authorization policies) and the *tenant* (which is provisioned to access some resources in a provider). In this case, the provider corresponds to the Pulsar service itself and the tenant corresponds to each application using Pulsar (typically, a [tenant](reference-terminology.md#tenant) in Pulsar).
-
-### Create the tenant domain and service
-
-On the [tenant](reference-terminology.md#tenant) side, you need to do the following things:
-
-1. Create a domain, such as `shopping`
-2. Generate a private/public key pair
-3. Create a service, such as `some_app`, on the domain with the public key
-
-Note that you need to specify the private key generated in step 2 when the Pulsar client connects to the [broker](reference-terminology.md#broker) (see client configuration examples for [Java](client-libraries-java.md#tls-authentication) and [C++](client-libraries-cpp.md#tls-authentication)).
-
-For more specific steps involving the Athenz UI, refer to [Example Service Access Control Setup](https://github.com/AthenZ/athenz/blob/master/docs/example_service_athenz_setup.md#client-tenant-domain).
-
-### Create the provider domain and add the tenant service to some role members
-
-On the provider side, you need to do the following things:
-
-1. Create a domain, such as `pulsar`
-2. Create a role
-3. Add the tenant service to members of the role
-
-Note that you can specify any action and resource in step 2 since they are not used by Pulsar. In other words, Pulsar uses the Athenz role token only for authentication, *not* for authorization.
-
-For more specific steps involving the UI, refer to [Example Service Access Control Setup](https://github.com/AthenZ/athenz/blob/master/docs/example_service_athenz_setup.md#server-provider-domain).
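-
-Once both domains exist, a client authenticates by presenting a role token obtained with the tenant service's private key. The following is a minimal sketch of how the pieces fit together on the Java client side; it assumes the `shopping` tenant domain, the `some_app` service, and the `pulsar` provider domain created above, a hypothetical broker address, and omits exception handling:
-
-```java
-
-import java.util.HashMap;
-import java.util.Map;
-import org.apache.pulsar.client.api.AuthenticationFactory;
-import org.apache.pulsar.client.api.PulsarClient;
-
-Map<String, String> authParams = new HashMap<>();
-authParams.put("tenantDomain", "shopping");                   // tenant domain created in step 1
-authParams.put("tenantService", "some_app");                  // tenant service created in step 3
-authParams.put("providerDomain", "pulsar");                   // provider domain the brokers trust
-authParams.put("privateKey", "file:///path/to/private.pem");  // private key generated in step 2
-authParams.put("keyId", "v1");
-
-PulsarClient client = PulsarClient.builder()
-    // assumed address; use TLS so role tokens are not exposed on the wire
-    .serviceUrl("pulsar+ssl://broker.example.com:6651")
-    .authentication(AuthenticationFactory.create(
-        "org.apache.pulsar.client.impl.auth.AuthenticationAthenz", authParams))
-    .build();
-
-```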
- -## Configure the broker for Athenz - -> ### TLS encryption -> -> Note that when you are using Athenz as an authentication provider, you had better use TLS encryption -> as it can protect role tokens from being intercepted and reused. (for more details involving TLS encryption see [Architecture - Data Model](https://github.com/AthenZ/athenz/blob/master/docs/data_model)). - -In the `conf/broker.conf` configuration file in your Pulsar installation, you need to provide the class name of the Athenz authentication provider as well as a comma-separated list of provider domain names. - -```properties - -# Add the Athenz auth provider -authenticationEnabled=true -authorizationEnabled=true -authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderAthenz -athenzDomainNames=pulsar - -# Enable TLS -tlsEnabled=true -tlsCertificateFilePath=/path/to/broker-cert.pem -tlsKeyFilePath=/path/to/broker-key.pem - -# Authentication settings of the broker itself. Used when the broker connects to other brokers, either in same or other clusters -brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationAthenz -brokerClientAuthenticationParameters={"tenantDomain":"shopping","tenantService":"some_app","providerDomain":"pulsar","privateKey":"file:///path/to/private.pem","keyId":"v1"} - -``` - -> A full listing of parameters is available in the `conf/broker.conf` file, you can also find the default -> values for those parameters in [Broker Configuration](reference-configuration.md#broker). - -## Configure clients for Athenz - -For more information on Pulsar client authentication using Athenz, see the following language-specific docs: - -* [Java client](client-libraries-java.md#athenz) - -## Configure CLI tools for Athenz - -[Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-pulsar-admin.md), [`pulsar-perf`](reference-cli-tools.md#pulsar-perf), and [`pulsar-client`](reference-cli-tools.md#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation. - -You need to add the following authentication parameters to the `conf/client.conf` config file to use Athenz with CLI tools of Pulsar: - -```properties - -# URL for the broker -serviceUrl=https://broker.example.com:8443/ - -# Set Athenz auth plugin and its parameters -authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationAthenz -authParams={"tenantDomain":"shopping","tenantService":"some_app","providerDomain":"pulsar","privateKey":"file:///path/to/private.pem","keyId":"v1"} - -# Enable TLS -useTls=true -tlsAllowInsecureConnection=false -tlsTrustCertsFilePath=/path/to/cacert.pem - -``` - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/security-authorization.md b/site2/website/versioned_docs/version-2.8.2-deprecated/security-authorization.md deleted file mode 100644 index 9cfd7c8c203f63..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/security-authorization.md +++ /dev/null @@ -1,130 +0,0 @@ ---- -id: security-authorization -title: Authentication and authorization in Pulsar -sidebar_label: "Authorization and ACLs" -original_id: security-authorization ---- - - -In Pulsar, the [authentication provider](security-overview.md#authentication-providers) is responsible for properly identifying clients and associating the clients with [role tokens](security-overview.md#role-tokens). If you only enable authentication, an authenticated role token has the ability to access all resources in the cluster. 
*Authorization* is the process that determines *what* clients are able to do.
-
-The role tokens with the most privileges are the *superusers*. The *superusers* can create and destroy tenants, along with having full access to all tenant resources.
-
-When a superuser creates a [tenant](reference-terminology.md#tenant), that tenant is assigned an admin role. A client with the admin role token can then create, modify and destroy namespaces, and grant and revoke permissions to *other role tokens* on those namespaces.
-
-## Broker and Proxy Setup
-
-### Enable authorization and assign superusers
-You can enable authorization and assign the superusers in the broker configuration file ([`conf/broker.conf`](reference-configuration.md#broker)).
-
-```properties
-
-authorizationEnabled=true
-superUserRoles=my-super-user-1,my-super-user-2
-
-```
-
-> A full list of parameters is available in the `conf/broker.conf` file.
-> You can also find the default values for those parameters in [Broker Configuration](reference-configuration.md#broker).
-
-Typically, you use superuser roles for administrators and clients, as well as for broker-to-broker authorization. When you use [geo-replication](concepts-replication.md), every broker needs to be able to publish to the topics of all the other clusters.
-
-You can also enable authorization for the proxy in the proxy configuration file (`conf/proxy.conf`). Once you enable authorization on the proxy, the proxy does an additional authorization check before forwarding the request to a broker.
-If you enable authorization on the broker, the broker checks the authorization of the request when the broker receives the forwarded request.
-
-### Proxy Roles
-
-By default, the broker treats the connection between a proxy and the broker as a normal user connection. The broker authenticates the user as the role configured in `proxy.conf` (see ["Enable TLS Authentication on Proxies"](security-tls-authentication.md#enable-tls-authentication-on-proxies)). However, when a user connects to the cluster through a proxy, the user rarely wants to interact as the proxy's own role. The user expects to be able to interact with the cluster as the role for which they have authenticated with the proxy.
-
-Pulsar uses *proxy roles* to enable this. Proxy roles are specified in the broker configuration file, [`conf/broker.conf`](reference-configuration.md#broker). If a client that is authenticated with a broker is one of its ```proxyRoles```, all requests from that client must also carry information about the role of the client that is authenticated with the proxy. This information is called the *original principal*. If the *original principal* is absent, the client is not able to access anything.
-
-You must authorize both the *proxy role* and the *original principal* to access a resource to ensure that the resource is accessible via the proxy. Administrators can take two approaches to authorize the *proxy role* and the *original principal*.
-
-The more secure approach is to grant access to the proxy roles each time you grant access to a resource. For example, if you have a proxy role named `proxy1`, when the superuser creates a tenant, you should specify `proxy1` as one of the admin roles. When a role is granted permissions to produce or consume from a namespace, if that client wants to produce or consume through a proxy, you should also grant `proxy1` the same permissions.
-
-Another approach is to make the proxy role a superuser. This allows the proxy to access all resources.
The client still needs to authenticate with the proxy, and all requests made through the proxy have their role downgraded to the *original principal* of the authenticated client. However, if the proxy is compromised, a bad actor could get full access to your cluster. - -You can specify the roles as proxy roles in [`conf/broker.conf`](reference-configuration.md#broker). - -```properties - -proxyRoles=my-proxy-role - -# if you want to allow superusers to use the proxy (see above) -superUserRoles=my-super-user-1,my-super-user-2,my-proxy-role - -``` - -## Administer tenants - -Pulsar [instance](reference-terminology.md#instance) administrators or some kind of self-service portal typically provisions a Pulsar [tenant](reference-terminology.md#tenant). - -You can manage tenants using the [`pulsar-admin`](reference-pulsar-admin.md) tool. - -### Create a new tenant - -The following is an example tenant creation command: - -```shell - -$ bin/pulsar-admin tenants create my-tenant \ - --admin-roles my-admin-role \ - --allowed-clusters us-west,us-east - -``` - -This command creates a new tenant `my-tenant` that is allowed to use the clusters `us-west` and `us-east`. - -A client that successfully identifies itself as having the role `my-admin-role` is allowed to perform all administrative tasks on this tenant. - -The structure of topic names in Pulsar reflects the hierarchy between tenants, clusters, and namespaces: - -```shell - -persistent://tenant/namespace/topic - -``` - -### Manage permissions - -You can use [Pulsar Admin Tools](admin-api-permissions.md) for managing permission in Pulsar. - -### Pulsar admin authentication - -```java - -PulsarAdmin admin = PulsarAdmin.builder() - .serviceHttpUrl("http://broker:8080") - .authentication("com.org.MyAuthPluginClass", "param1:value1") - .build(); - -``` - -To use TLS: - -```java - -PulsarAdmin admin = PulsarAdmin.builder() - .serviceHttpUrl("https://broker:8080") - .authentication("com.org.MyAuthPluginClass", "param1:value1") - .tlsTrustCertsFilePath("/path/to/trust/cert") - .build(); - -``` - -## Authorize an authenticated client with multiple roles - -When a client is identified with multiple roles in a token (the type of role claim in the token is an array) during the authentication process, Pulsar supports to check the permissions of all the roles and further authorize the client as long as one of its roles has the required permissions. - -> **Note**
    -> This authorization method is only compatible with [JWT authentication](security-jwt.md). - -To enable this authorization method, configure the authorization provider as `MultiRolesTokenAuthorizationProvider` in the `conf/broker.conf` file. - - ```properties - - # Authorization provider fully qualified class-name - authorizationProvider=org.apache.pulsar.broker.authorization.MultiRolesTokenAuthorizationProvider - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/security-basic-auth.md b/site2/website/versioned_docs/version-2.8.2-deprecated/security-basic-auth.md deleted file mode 100644 index 2585526bb478af..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/security-basic-auth.md +++ /dev/null @@ -1,127 +0,0 @@ ---- -id: security-basic-auth -title: Authentication using HTTP basic -sidebar_label: "Authentication using HTTP basic" ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - -[Basic authentication](https://en.wikipedia.org/wiki/Basic_access_authentication) is a simple authentication scheme built into the HTTP protocol, which uses base64-encoded username and password pairs as credentials. - -## Prerequisites - -Install [`htpasswd`](https://httpd.apache.org/docs/2.4/programs/htpasswd.html) in your environment to create a password file for storing username-password pairs. - -* For Ubuntu/Debian, run the following command to install `htpasswd`. - - ``` - apt install apache2-utils - ``` - -* For CentOS/RHEL, run the following command to install `htpasswd`. - - ``` - yum install httpd-tools - ``` - -## Create your authentication file - -:::note -Currently, you can use MD5 (recommended) and CRYPT encryption to authenticate your password. -::: - -Create a password file named `.htpasswd` with a user account `superuser/admin`: -* Use MD5 encryption (recommended): - - ``` - htpasswd -cmb /path/to/.htpasswd superuser admin - ``` - -* Use CRYPT encryption: - - ``` - htpasswd -cdb /path/to/.htpasswd superuser admin - ``` - -You can preview the content of your password file by running the following command: - -``` -cat path/to/.htpasswd -superuser:$apr1$GBIYZYFZ$MzLcPrvoUky16mLcK6UtX/ -``` - -## Enable basic authentication on brokers - -To configure brokers to authenticate clients, complete the following steps. - -1. Add the following parameters to the `conf/broker.conf` file. If you use a standalone Pulsar, you need to add these parameters to the `conf/standalone.conf` file. - - ``` - # Configuration to enable Basic authentication - authenticationEnabled=true - authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderBasic - - # Authentication settings of the broker itself. Used when the broker connects to other brokers, either in same or other clusters - brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationBasic - brokerClientAuthenticationParameters={"userId":"superuser","password":"admin"} - - # If this flag is set then the broker authenticates the original Auth data - # else it just accepts the originalPrincipal and authorizes it (if required). - authenticateOriginalAuthData=true - ``` - -2. Set an environment variable named `PULSAR_EXTRA_OPTS` and the value is `-Dpulsar.auth.basic.conf=/path/to/.htpasswd`. Pulsar reads this environment variable to implement HTTP basic authentication. - -## Enable basic authentication on proxies - -To configure proxies to authenticate clients, complete the following steps. - -1. 
Add the following parameters to the `conf/proxy.conf` file:
-
-   ```
-   # For clients connecting to the proxy
-   authenticationEnabled=true
-   authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderBasic
-
-   # For the proxy to connect to brokers
-   brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationBasic
-   brokerClientAuthenticationParameters={"userId":"superuser","password":"admin"}
-
-   # Whether client authorization credentials are forwarded to the broker for re-authorization.
-   # Authentication must be enabled via authenticationEnabled=true for this to take effect.
-   forwardAuthorizationCredentials=true
-   ```
-
-2. Set an environment variable named `PULSAR_EXTRA_OPTS` with the value `-Dpulsar.auth.basic.conf=/path/to/.htpasswd`. Pulsar reads this environment variable to implement HTTP basic authentication.
-
-## Configure basic authentication in CLI tools
-
-[Command-line tools](/docs/next/reference-cli-tools), such as [Pulsar-admin](/tools/pulsar-admin/), [Pulsar-perf](/tools/pulsar-perf/) and [Pulsar-client](/tools/pulsar-client/), use the `conf/client.conf` file in your Pulsar installation. To configure basic authentication in Pulsar CLI tools, you need to add the following parameters to the `conf/client.conf` file.
-
-```
-authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationBasic
-authParams={"userId":"superuser","password":"admin"}
-```
-
-## Configure basic authentication in Pulsar clients
-
-The following example shows how to configure basic authentication when using Pulsar clients.
-
-   ```java
-   AuthenticationBasic auth = new AuthenticationBasic();
-   auth.configure("{\"userId\":\"superuser\",\"password\":\"admin\"}");
-   PulsarClient client = PulsarClient.builder()
-       .serviceUrl("pulsar://broker.example.com:6650")
-       .authentication(auth)
-       .build();
-   ```
-
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/security-bouncy-castle.md b/site2/website/versioned_docs/version-2.8.2-deprecated/security-bouncy-castle.md
deleted file mode 100644
index be937055d8e311..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/security-bouncy-castle.md
+++ /dev/null
@@ -1,157 +0,0 @@
----
-id: security-bouncy-castle
-title: Bouncy Castle Providers
-sidebar_label: "Bouncy Castle Providers"
-original_id: security-bouncy-castle
----
-
-## Introduction to BouncyCastle
-
-`Bouncy Castle` is a Java library that complements the default Java Cryptographic Extension (JCE),
-and it provides more cipher suites and algorithms than the default JCE provided by Sun.
-
-In addition to that, `Bouncy Castle` has lots of utilities for reading arcane formats like PEM and ASN.1 that no sane person would want to rewrite themselves.
-
-In Pulsar, security and crypto have dependencies on BouncyCastle jars. For details on installing and configuring Bouncy Castle FIPS, see the [BC FIPS Documentation](https://www.bouncycastle.org/documentation.html), especially the **User Guides** and **Security Policy** PDFs.
-
-`Bouncy Castle` provides both [FIPS](https://www.bouncycastle.org/fips_faq.html) and non-FIPS versions. However, you cannot include both versions in the same JVM; you need to exclude the current version before including the other.
-
-In Pulsar, the security and crypto methods also depend on `Bouncy Castle`, especially in [TLS Authentication](security-tls-authentication.md) and [Transport Encryption](security-encryption.md).
This document describes how to configure between the BouncyCastle FIPS (BC-FIPS) and non-FIPS (BC-non-FIPS) versions while using Pulsar.
-
-## How BouncyCastle modules are packaged in Pulsar
-
-In Pulsar's `bouncy-castle` module, we provide 2 sub-modules: `bouncy-castle-bc` (for the non-FIPS version) and `bouncy-castle-bcfips` (for the FIPS version), which package the BC jars together to make including and excluding `Bouncy Castle` easier.
-
-To achieve this goal, we need to package several `bouncy-castle` jars together into the `bouncy-castle-bc` or `bouncy-castle-bcfips` jar.
-Each of the original bouncy-castle jars is security-related, so BouncyCastle dutifully supplies a signature for each JAR.
-But when we do the re-packaging, the Maven Shade plugin explodes the BouncyCastle jar file, which puts the signatures into META-INF;
-these signatures aren't valid for the new uber-jar (the signatures are only valid for the original BC jars).
-Usually, you will encounter an error like `java.lang.SecurityException: Invalid signature file digest for Manifest main attributes`.
-
-You could exclude these signatures in the Maven POM file to avoid the above error:
-
-```access transformers
-
-META-INF/*.SF
-META-INF/*.DSA
-META-INF/*.RSA
-
-```
-
-But this can also lead to new, cryptic errors, e.g. `java.security.NoSuchAlgorithmException: PBEWithSHA256And256BitAES-CBC-BC SecretKeyFactory not available`.
-By explicitly specifying where to find the algorithm, like this: `SecretKeyFactory.getInstance("PBEWithSHA256And256BitAES-CBC-BC","BC")`,
-you will get the real error: `java.security.NoSuchProviderException: JCE cannot authenticate the provider BC`.
-
-So, we use an [executable packer plugin](https://github.com/nthuemmel/executable-packer-maven-plugin) that uses a jar-in-jar approach to preserve the BouncyCastle signature in a single, executable jar.
-
-### Include dependencies of BC-non-FIPS
-
-The Pulsar module `bouncy-castle-bc`, which is defined by `bouncy-castle/bc/pom.xml`, contains the needed non-FIPS jars for Pulsar, and is packaged as a jar-in-jar (you need to provide the `pkg` classifier).
-
-```xml
-
-<dependency>
-  <groupId>org.bouncycastle</groupId>
-  <artifactId>bcpkix-jdk15on</artifactId>
-  <version>${bouncycastle.version}</version>
-</dependency>
-
-<dependency>
-  <groupId>org.bouncycastle</groupId>
-  <artifactId>bcprov-ext-jdk15on</artifactId>
-  <version>${bouncycastle.version}</version>
-</dependency>
-
-```
-
-By using this `bouncy-castle-bc` module, you can easily include and exclude BouncyCastle non-FIPS jars.
-
-### Modules that include BC-non-FIPS module (`bouncy-castle-bc`)
-
-For the Pulsar client, users need the bouncy-castle module, so `pulsar-client-original` includes the `bouncy-castle-bc` module, with the classifier set to `pkg` to reference the `jar-in-jar` package.
-It is included as in the following example:
-
-```xml
-
-<dependency>
-  <groupId>org.apache.pulsar</groupId>
-  <artifactId>bouncy-castle-bc</artifactId>
-  <version>${pulsar.version}</version>
-  <classifier>pkg</classifier>
-</dependency>
-
-```
-
-By default, `bouncy-castle-bc` is already included in `pulsar-client-original`, and `pulsar-client-original` is included in a lot of other modules like `pulsar-client-admin` and `pulsar-broker`.
-But for the shaded-jar and signature reasons above, we should not package Pulsar's `bouncy-castle` module into `pulsar-client-all` and other shaded modules directly, such as `pulsar-client-shaded`, `pulsar-client-admin-shaded` and `pulsar-broker-shaded`.
-So in the shaded modules, we exclude the `bouncy-castle` modules.
-
-```xml
-
-<filter>
-  <artifact>org.apache.pulsar:pulsar-client-original</artifact>
-  <includes>
-    <include>**</include>
-  </includes>
-  <excludes>
-    <exclude>org/bouncycastle/**</exclude>
-  </excludes>
-</filter>
-
-```
-
-That means `bouncy-castle`-related jars are not shaded into these fat jars.
-
-### Module BC-FIPS (`bouncy-castle-bcfips`)
-
-The Pulsar module `bouncy-castle-bcfips`, which is defined by `bouncy-castle/bcfips/pom.xml`, contains the needed FIPS jars for Pulsar.
-Similar to `bouncy-castle-bc`, `bouncy-castle-bcfips` also packaged as a `jar-in-jar` package for easy include/exclude. - -```xml - - - org.bouncycastle - bc-fips - ${bouncycastlefips.version} - - - - org.bouncycastle - bcpkix-fips - ${bouncycastlefips.version} - - -``` - -### Exclude BC-non-FIPS and include BC-FIPS - -If you want to switch from BC-non-FIPS to BC-FIPS version, Here is an example for `pulsar-broker` module: - -```xml - - - org.apache.pulsar - pulsar-broker - ${pulsar.version} - - - org.apache.pulsar - bouncy-castle-bc - - - - - - org.apache.pulsar - bouncy-castle-bcfips - ${pulsar.version} - pkg - - -``` - - -For more example, you can reference module `bcfips-include-test`. - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/security-encryption.md b/site2/website/versioned_docs/version-2.8.2-deprecated/security-encryption.md deleted file mode 100644 index 58ab73fe4fabc3..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/security-encryption.md +++ /dev/null @@ -1,325 +0,0 @@ ---- -id: security-encryption -title: Pulsar Encryption -sidebar_label: "End-to-End Encryption" -original_id: security-encryption ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -Applications can use Pulsar encryption to encrypt messages at the producer side and decrypt messages at the consumer side. You can use the public and private key pair that the application configures to perform encryption. Only the consumers with a valid key can decrypt the encrypted messages. - -## Asymmetric and symmetric encryption - -Pulsar uses dynamically generated symmetric AES key to encrypt messages(data). You can use the application provided ECDSA/RSA key pair to encrypt the AES key(data key), so you do not have to share the secret with everyone. - -Key is a public and private key pair used for encryption or decryption. The producer key is the public key of the key pair, and the consumer key is the private key of the key pair. - -The application configures the producer with the public key. You can use this key to encrypt the AES data key. The encrypted data key is sent as part of message header. Only entities with the private key (in this case the consumer) are able to decrypt the data key which is used to decrypt the message. - -You can encrypt a message with more than one key. Any one of the keys used for encrypting the message is sufficient to decrypt the message. - -Pulsar does not store the encryption key anywhere in the Pulsar service. If you lose or delete the private key, your message is irretrievably lost, and is unrecoverable. - -## Producer -![alt text](/assets/pulsar-encryption-producer.jpg "Pulsar Encryption Producer") - -## Consumer -![alt text](/assets/pulsar-encryption-consumer.jpg "Pulsar Encryption Consumer") - -## Get started - -1. Enter the commands below to create your ECDSA or RSA public and private key pair. - -```shell - -openssl ecparam -name secp521r1 -genkey -param_enc explicit -out test_ecdsa_privkey.pem -openssl ec -in test_ecdsa_privkey.pem -pubout -outform pem -out test_ecdsa_pubkey.pem - -``` - -2. Add the public and private key to the key management and configure your producers to retrieve public keys and consumers clients to retrieve private keys. - -3. Implement the CryptoKeyReader interface, specifically CryptoKeyReader.getPublicKey() for producer and CryptoKeyReader.getPrivateKey() for consumer, which Pulsar client invokes to load the key. - -4. 
Add encryption key name to producer builder: PulsarClient.newProducer().addEncryptionKey("myapp.key"). - -5. Configure a `CryptoKeyReader` to a producer, consumer or reader. - -````mdx-code-block - - - -```java - -PulsarClient pulsarClient = PulsarClient.builder().serviceUrl("pulsar://localhost:6650").build(); -String topic = "persistent://my-tenant/my-ns/my-topic"; -// RawFileKeyReader is just an example implementation that's not provided by Pulsar -CryptoKeyReader keyReader = new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem"); - -Producer producer = pulsarClient.newProducer() - .topic(topic) - .cryptoKeyReader(keyReader) - .addEncryptionKey("myappkey") - .create(); - -Consumer consumer = pulsarClient.newConsumer() - .topic(topic) - .subscriptionName("my-subscriber-name") - .cryptoKeyReader(keyReader) - .subscribe(); - -Reader reader = pulsarClient.newReader() - .topic(topic) - .startMessageId(MessageId.earliest) - .cryptoKeyReader(keyReader) - .create(); - -``` - - - - -```c++ - -Client client("pulsar://localhost:6650"); -std::string topic = "persistent://my-tenant/my-ns/my-topic"; -// DefaultCryptoKeyReader is a built-in implementation that reads public key and private key from files -auto keyReader = std::make_shared("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem"); - -Producer producer; -ProducerConfiguration producerConf; -producerConf.setCryptoKeyReader(keyReader); -producerConf.addEncryptionKey("myappkey"); -client.createProducer(topic, producerConf, producer); - -Consumer consumer; -ConsumerConfiguration consumerConf; -consumerConf.setCryptoKeyReader(keyReader); -client.subscribe(topic, "my-subscriber-name", consumerConf, consumer); - -Reader reader; -ReaderConfiguration readerConf; -readerConf.setCryptoKeyReader(keyReader); -client.createReader(topic, MessageId::earliest(), readerConf, reader); - -``` - - - - -```python - -from pulsar import Client, CryptoKeyReader - -client = Client('pulsar://localhost:6650') -topic = 'persistent://my-tenant/my-ns/my-topic' -# CryptoKeyReader is a built-in implementation that reads public key and private key from files -key_reader = CryptoKeyReader('test_ecdsa_pubkey.pem', 'test_ecdsa_privkey.pem') - -producer = client.create_producer( - topic=topic, - encryption_key='myappkey', - crypto_key_reader=key_reader -) - -consumer = client.subscribe( - topic=topic, - subscription_name='my-subscriber-name', - crypto_key_reader=key_reader -) - -reader = client.create_reader( - topic=topic, - start_message_id=MessageId.earliest, - crypto_key_reader=key_reader -) - -client.close() - -``` - - - - -```nodejs - -const Pulsar = require('pulsar-client'); - -(async () => { -// Create a client -const client = new Pulsar.Client({ - serviceUrl: 'pulsar://localhost:6650', - operationTimeoutSeconds: 30, -}); - -// Create a producer -const producer = await client.createProducer({ - topic: 'persistent://public/default/my-topic', - sendTimeoutMs: 30000, - batchingEnabled: true, - publicKeyPath: "public-key.client-rsa.pem", - encryptionKey: "encryption-key" -}); - -// Create a consumer -const consumer = await client.subscribe({ - topic: 'persistent://public/default/my-topic', - subscription: 'sub1', - subscriptionType: 'Shared', - ackTimeoutMs: 10000, - privateKeyPath: "private-key.client-rsa.pem" -}); - -// Send messages -for (let i = 0; i < 10; i += 1) { - const msg = `my-message-${i}`; - producer.send({ - data: Buffer.from(msg), - }); - console.log(`Sent message: ${msg}`); -} -await producer.flush(); - -// Receive messages -for (let i = 0; i < 
10; i += 1) { - const msg = await consumer.receive(); - console.log(msg.getData().toString()); - consumer.acknowledge(msg); -} - -await consumer.close(); -await producer.close(); -await client.close(); -})(); - -``` - - - - -```` - -6. Below is an example of a **customized** `CryptoKeyReader` implementation. - -````mdx-code-block - - - -```java - -class RawFileKeyReader implements CryptoKeyReader { - - String publicKeyFile = ""; - String privateKeyFile = ""; - - RawFileKeyReader(String pubKeyFile, String privKeyFile) { - publicKeyFile = pubKeyFile; - privateKeyFile = privKeyFile; - } - - @Override - public EncryptionKeyInfo getPublicKey(String keyName, Map keyMeta) { - EncryptionKeyInfo keyInfo = new EncryptionKeyInfo(); - try { - keyInfo.setKey(Files.readAllBytes(Paths.get(publicKeyFile))); - } catch (IOException e) { - System.out.println("ERROR: Failed to read public key from file " + publicKeyFile); - e.printStackTrace(); - } - return keyInfo; - } - - @Override - public EncryptionKeyInfo getPrivateKey(String keyName, Map keyMeta) { - EncryptionKeyInfo keyInfo = new EncryptionKeyInfo(); - try { - keyInfo.setKey(Files.readAllBytes(Paths.get(privateKeyFile))); - } catch (IOException e) { - System.out.println("ERROR: Failed to read private key from file " + privateKeyFile); - e.printStackTrace(); - } - return keyInfo; - } -} - -``` - - - - -```c++ - -class CustomCryptoKeyReader : public CryptoKeyReader { - public: - Result getPublicKey(const std::string& keyName, std::map& metadata, - EncryptionKeyInfo& encKeyInfo) const override { - // TODO: - return ResultOk; - } - - Result getPrivateKey(const std::string& keyName, std::map& metadata, - EncryptionKeyInfo& encKeyInfo) const override { - // TODO: - return ResultOk; - } -}; - -auto keyReader = std::make_shared(/* ... */); -// TODO: create producer, consumer or reader based on keyReader here - -``` - -Besides, you can use the **default** implementation of `CryptoKeyReader` by specifying the paths of `private key` and `public key`. - - - - -Currently, **customized** `CryptoKeyReader` implementation is not supported in Python. However, you can use the **default** implementation by specifying the path of `private key` and `public key`. - - - - -Currently, **customized** `CryptoKeyReader` implementation is not supported in Node.JS. However, you can use the **default** implementation by specifying the path of `private key` and `public key`. - - - - -```` - -## Key rotation -Pulsar generates a new AES data key every 4 hours or after publishing a certain number of messages. A producer fetches the asymmetric public key every 4 hours by calling CryptoKeyReader.getPublicKey() to retrieve the latest version. - -## Enable encryption at the producer application -If you produce messages that are consumed across application boundaries, you need to ensure that consumers in other applications have access to one of the private keys that can decrypt the messages. You can do this in two ways: -1. The consumer application provides you access to their public key, which you add to your producer keys. -2. You grant access to one of the private keys from the pairs that producer uses. - -When producers want to encrypt the messages with multiple keys, producers add all such keys to the config. Consumer can decrypt the message as long as the consumer has access to at least one of the keys. - -If you need to encrypt the messages using 2 keys (myapp.messagekey1 and myapp.messagekey2), refer to the following example. 
- -```java - -PulsarClient.newProducer().addEncryptionKey("myapp.messagekey1").addEncryptionKey("myapp.messagekey2"); - -``` - -## Decrypt encrypted messages at the consumer application -Consumers require access one of the private keys to decrypt messages that the producer produces. If you want to receive encrypted messages, create a public or private key and give your public key to the producer application to encrypt messages using your public key. - -## Handle failures -* Producer/ Consumer loses access to the key - * Producer action fails indicating the cause of the failure. Application has the option to proceed with sending unencrypted message in such cases. Call PulsarClient.newProducer().cryptoFailureAction(ProducerCryptoFailureAction) to control the producer behavior. The default behavior is to fail the request. - * If consumption fails due to decryption failure or missing keys in consumer, application has the option to consume the encrypted message or discard it. Call PulsarClient.newConsumer().cryptoFailureAction(ConsumerCryptoFailureAction) to control the consumer behavior. The default behavior is to fail the request. Application is never able to decrypt the messages if the private key is permanently lost. -* Batch messaging - * If decryption fails and the message contains batch messages, client is not able to retrieve individual messages in the batch, hence message consumption fails even if cryptoFailureAction() is set to ConsumerCryptoFailureAction.CONSUME. -* If decryption fails, the message consumption stops and application notices backlog growth in addition to decryption failure messages in the client log. If application does not have access to the private key to decrypt the message, the only option is to skip or discard backlogged messages. diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/security-extending.md b/site2/website/versioned_docs/version-2.8.2-deprecated/security-extending.md deleted file mode 100644 index e7484453b8beb8..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/security-extending.md +++ /dev/null @@ -1,207 +0,0 @@ ---- -id: security-extending -title: Extending Authentication and Authorization in Pulsar -sidebar_label: "Extending" -original_id: security-extending ---- - -Pulsar provides a way to use custom authentication and authorization mechanisms. - -## Authentication - -Pulsar supports mutual TLS and Athenz authentication plugins. For how to use these authentication plugins, you can refer to the description in [Security](security-overview.md). - -You can use a custom authentication mechanism by providing the implementation in the form of two plugins. One plugin is for the Client library and the other plugin is for the Pulsar Proxy and/or Pulsar Broker to validate the credentials. - -### Client authentication plugin - -For the client library, you need to implement `org.apache.pulsar.client.api.Authentication`. 
By entering the command below you can pass this class when you create a Pulsar client: - -```java - -PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://localhost:6650") - .authentication(new MyAuthentication()) - .build(); - -``` - -You can use 2 interfaces to implement on the client side: - * `Authentication` -> http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/Authentication.html - * `AuthenticationDataProvider` -> http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/AuthenticationDataProvider.html - - -This in turn needs to provide the client credentials in the form of `org.apache.pulsar.client.api.AuthenticationDataProvider`. This leaves the chance to return different kinds of authentication token for different types of connection or by passing a certificate chain to use for TLS. - - -You can find examples for client authentication providers at: - - * Mutual TLS Auth -- https://github.com/apache/pulsar/tree/master/pulsar-client/src/main/java/org/apache/pulsar/client/impl/auth - * Athenz -- https://github.com/apache/pulsar/tree/master/pulsar-client-auth-athenz/src/main/java/org/apache/pulsar/client/impl/auth - -### Proxy/Broker authentication plugin - -On the proxy/broker side, you need to configure the corresponding plugin to validate the credentials that the client sends. The Proxy and Broker can support multiple authentication providers at the same time. - -In `conf/broker.conf` you can choose to specify a list of valid providers: - -```properties - -# Authentication provider name list, which is comma separated list of class names -authenticationProviders= - -``` - -To implement `org.apache.pulsar.broker.authentication.AuthenticationProvider` on one single interface: - -```java - -/** - * Provider of authentication mechanism - */ -public interface AuthenticationProvider extends Closeable { - - /** - * Perform initialization for the authentication provider - * - * @param config - * broker config object - * @throws IOException - * if the initialization fails - */ - void initialize(ServiceConfiguration config) throws IOException; - - /** - * @return the authentication method name supported by this provider - */ - String getAuthMethodName(); - - /** - * Validate the authentication for the given credentials with the specified authentication data - * - * @param authData - * provider specific authentication data - * @return the "role" string for the authenticated connection, if the authentication was successful - * @throws AuthenticationException - * if the credentials are not valid - */ - String authenticate(AuthenticationDataSource authData) throws AuthenticationException; - -} - -``` - -The following is the example for Broker authentication plugins: - - * Mutual TLS -- https://github.com/apache/pulsar/blob/master/pulsar-broker-common/src/main/java/org/apache/pulsar/broker/authentication/AuthenticationProviderTls.java - * Athenz -- https://github.com/apache/pulsar/blob/master/pulsar-broker-auth-athenz/src/main/java/org/apache/pulsar/broker/authentication/AuthenticationProviderAthenz.java - -## Authorization - -Authorization is the operation that checks whether a particular "role" or "principal" has permission to perform a certain operation. - -By default, you can use the embedded authorization provider provided by Pulsar. You can also configure a different authorization provider through a plugin. 
-Note that although the Authentication plugin is designed for use in both the Proxy and Broker, -the Authorization plugin is designed only for use on the Broker however the Proxy does perform some simple Authorization checks of Roles if authorization is enabled. - -To provide a custom provider, you need to implement the `org.apache.pulsar.broker.authorization.AuthorizationProvider` interface, put this class in the Pulsar broker classpath and configure the class in `conf/broker.conf`: - - ```properties - - # Authorization provider fully qualified class-name - authorizationProvider=org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider - - ``` - -```java - -/** - * Provider of authorization mechanism - */ -public interface AuthorizationProvider extends Closeable { - - /** - * Perform initialization for the authorization provider - * - * @param conf - * broker config object - * @param configCache - * pulsar zk configuration cache service - * @throws IOException - * if the initialization fails - */ - void initialize(ServiceConfiguration conf, ConfigurationCacheService configCache) throws IOException; - - /** - * Check if the specified role has permission to send messages to the specified fully qualified topic name. - * - * @param topicName - * the fully qualified topic name associated with the topic. - * @param role - * the app id used to send messages to the topic. - */ - CompletableFuture canProduceAsync(TopicName topicName, String role, - AuthenticationDataSource authenticationData); - - /** - * Check if the specified role has permission to receive messages from the specified fully qualified topic name. - * - * @param topicName - * the fully qualified topic name associated with the topic. - * @param role - * the app id used to receive messages from the topic. - * @param subscription - * the subscription name defined by the client - */ - CompletableFuture canConsumeAsync(TopicName topicName, String role, - AuthenticationDataSource authenticationData, String subscription); - - /** - * Check whether the specified role can perform a lookup for the specified topic. - * - * For that the caller needs to have producer or consumer permission. - * - * @param topicName - * @param role - * @return - * @throws Exception - */ - CompletableFuture canLookupAsync(TopicName topicName, String role, - AuthenticationDataSource authenticationData); - - /** - * - * Grant authorization-action permission on a namespace to the given client - * - * @param namespace - * @param actions - * @param role - * @param authDataJson - * additional authdata in json format - * @return CompletableFuture - * @completesWith
    - * IllegalArgumentException when namespace not found
    - * IllegalStateException when failed to grant permission - */ - CompletableFuture grantPermissionAsync(NamespaceName namespace, Set actions, String role, - String authDataJson); - - /** - * Grant authorization-action permission on a topic to the given client - * - * @param topicName - * @param role - * @param authDataJson - * additional authdata in json format - * @return CompletableFuture - * @completesWith
    - * IllegalArgumentException when namespace not found
    - * IllegalStateException when failed to grant permission - */ - CompletableFuture grantPermissionAsync(TopicName topicName, Set actions, String role, - String authDataJson); - -} - -``` - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/security-jwt.md b/site2/website/versioned_docs/version-2.8.2-deprecated/security-jwt.md deleted file mode 100644 index 1fa65b7c27f60c..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/security-jwt.md +++ /dev/null @@ -1,331 +0,0 @@ ---- -id: security-jwt -title: Client authentication using tokens based on JSON Web Tokens -sidebar_label: "Authentication using JWT" -original_id: security-jwt ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -## Token authentication overview - -Pulsar supports authenticating clients using security tokens that are based on [JSON Web Tokens](https://jwt.io/introduction/) ([RFC-7519](https://tools.ietf.org/html/rfc7519)). - -You can use tokens to identify a Pulsar client and associate with some "principal" (or "role") that -is permitted to do some actions (eg: publish to a topic or consume from a topic). - -A user typically gets a token string from the administrator (or some automated service). - -The compact representation of a signed JWT is a string that looks like as the following: - -``` - -eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY - -``` - -Application specifies the token when you create the client instance. An alternative is to pass a "token supplier" (a function that returns the token when the client library needs one). - -> #### Always use TLS transport encryption -> Sending a token is equivalent to sending a password over the wire. You had better use TLS encryption all the time when you connect to the Pulsar service. See -> [Transport Encryption using TLS](security-tls-transport.md) for more details. - -### CLI Tools - -[Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-pulsar-admin.md), [`pulsar-perf`](reference-cli-tools.md#pulsar-perf), and [`pulsar-client`](reference-cli-tools.md#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation. - -You need to add the following parameters to that file to use the token authentication with CLI tools of Pulsar: - -```properties - -webServiceUrl=http://broker.example.com:8080/ -brokerServiceUrl=pulsar://broker.example.com:6650/ -authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken -authParams=token:eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY - -``` - -The token string can also be read from a file, for example: - -``` - -authParams=file:///path/to/token/file - -``` - -### Pulsar client - -You can use tokens to authenticate the following Pulsar clients. 
-

````mdx-code-block



```java

PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar://broker.example.com:6650/")
    .authentication(
        AuthenticationFactory.token("eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY"))
    .build();

```

Similarly, you can also pass a `Supplier`:

```java

PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar://broker.example.com:6650/")
    .authentication(
        AuthenticationFactory.token(() -> {
            // Read token from custom source
            return readToken();
        }))
    .build();

```



```python

from pulsar import Client, AuthenticationToken

client = Client('pulsar://broker.example.com:6650/',
                authentication=AuthenticationToken('eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY'))

```

Alternatively, you can also pass a `Supplier`:

```python

def read_token():
    with open('/path/to/token.txt') as tf:
        return tf.read().strip()

client = Client('pulsar://broker.example.com:6650/',
                authentication=AuthenticationToken(read_token))

```



```go

client, err := NewClient(ClientOptions{
    URL:            "pulsar://localhost:6650",
    Authentication: NewAuthenticationToken("eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY"),
})

```

Similarly, you can also pass a `Supplier`:

```go

client, err := NewClient(ClientOptions{
    URL:            "pulsar://localhost:6650",
    Authentication: NewAuthenticationTokenSupplier(func () string {
        // Read token from custom source
        return readToken()
    }),
})

```



```c++

#include <pulsar/Client.h>

pulsar::ClientConfiguration config;
config.setAuth(pulsar::AuthToken::createWithToken("eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY"));

pulsar::Client client("pulsar://broker.example.com:6650/", config);

```



```c#

var client = PulsarClient.Builder()
    .AuthenticateUsingToken("eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY")
    .Build();

```



````

## Enable token authentication

To enable token authentication on a Pulsar cluster, refer to the guide below.

JWT supports two different kinds of keys to generate and validate the tokens:

 * Symmetric:
    - You can use a single ***Secret*** key to generate and validate tokens.
 * Asymmetric: a pair of keys consisting of a Private key and a Public key.
    - You can use the ***Private*** key to generate tokens.
    - You can use the ***Public*** key to validate tokens.

### Create a secret key

When you use a secret key, the administrator creates the key and uses it to generate the client tokens. You also configure this key on the brokers so that they can validate the clients.

The output file is generated in the root of your Pulsar installation directory. You can also provide an absolute path for the output file using the command below.

```shell

$ bin/pulsar tokens create-secret-key --output my-secret.key

```

Enter this command to generate a base64-encoded secret key.

```shell

$ bin/pulsar tokens create-secret-key --output /opt/my-secret.key --base64

```

### Create a key pair

For public and private keys, you need to create a key pair. Pulsar supports all algorithms that the Java JWT library (shown [here](https://github.com/jwtk/jjwt#signature-algorithms-keys)) supports.

The output file is generated in the root of your Pulsar installation directory.
You can also provide absolute path for the output file using the command below. - -```shell - -$ bin/pulsar tokens create-key-pair --output-private-key my-private.key --output-public-key my-public.key - -``` - - * Store `my-private.key` in a safe location and only administrator can use `my-private.key` to generate new tokens. - * `my-public.key` is distributed to all Pulsar brokers. You can publicly share this file without any security concern. - -### Generate tokens - -A token is the credential associated with a user. The association is done through the "principal" or "role". In the case of JWT tokens, this field is typically referred as **subject**, though they are exactly the same concept. - -Then, you need to use this command to require the generated token to have a **subject** field set. - -```shell - -$ bin/pulsar tokens create --secret-key file:///path/to/my-secret.key \ - --subject test-user - -``` - -This command prints the token string on stdout. - -Similarly, you can create a token by passing the "private" key using the command below: - -```shell - -$ bin/pulsar tokens create --private-key file:///path/to/my-private.key \ - --subject test-user - -``` - -Finally, you can enter the following command to create a token with a pre-defined TTL. And then the token is automatically invalidated. - -```shell - -$ bin/pulsar tokens create --secret-key file:///path/to/my-secret.key \ - --subject test-user \ - --expiry-time 1y - -``` - -### Authorization - -The token itself does not have any permission associated. The authorization engine determines whether the token should have permissions or not. Once you have created the token, you can grant permission for this token to do certain actions. The following is an example. - -```shell - -$ bin/pulsar-admin namespaces grant-permission my-tenant/my-namespace \ - --role test-user \ - --actions produce,consume - -``` - -### Enable token authentication on Brokers - -To configure brokers to authenticate clients, add the following parameters to `broker.conf`: - -```properties - -# Configuration to enable authentication and authorization -authenticationEnabled=true -authorizationEnabled=true -authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken - -# Authentication settings of the broker itself. Used when the broker connects to other brokers, either in same or other clusters -brokerClientTlsEnabled=true -brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken -brokerClientAuthenticationParameters={"token":"eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0LXVzZXIifQ.9OHgE9ZUDeBTZs7nSMEFIuGNEX18FLR3qvy8mqxSxXw"} -# Either configure the token string or specify to read it from a file. The following three available formats are all valid: -# brokerClientAuthenticationParameters={"token":"your-token-string"} -# brokerClientAuthenticationParameters=token:your-token-string -# brokerClientAuthenticationParameters=file:///path/to/token -brokerClientTrustCertsFilePath=/path/my-ca/certs/ca.cert.pem - -# If this flag is set then the broker authenticates the original Auth data -# else it just accepts the originalPrincipal and authorizes it (if required). 
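# (This mainly matters when clients connect through a proxy: when enabled, the
# broker re-validates the client's own credentials forwarded by the proxy
# instead of only trusting the proxy's identity.)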
-authenticateOriginalAuthData=true - -# If using secret key (Note: key files must be DER-encoded) -tokenSecretKey=file:///path/to/secret.key -# The key can also be passed inline: -# tokenSecretKey=data:;base64,FLFyW0oLJ2Fi22KKCm21J18mbAdztfSHN/lAT5ucEKU= - -# If using public/private (Note: key files must be DER-encoded) -# tokenPublicKey=file:///path/to/public.key - -``` - -### Enable token authentication on Proxies - -To configure proxies to authenticate clients, add the following parameters to `proxy.conf`: - -The proxy uses its own token when connecting to brokers. You need to configure the role token for this key pair in the `proxyRoles` of the brokers. For more details, see the [authorization guide](security-authorization.md). - -```properties - -# For clients connecting to the proxy -authenticationEnabled=true -authorizationEnabled=true -authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken -tokenSecretKey=file:///path/to/secret.key - -# For the proxy to connect to brokers -brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken -brokerClientAuthenticationParameters={"token":"eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0LXVzZXIifQ.9OHgE9ZUDeBTZs7nSMEFIuGNEX18FLR3qvy8mqxSxXw"} -# Either configure the token string or specify to read it from a file. The following three available formats are all valid: -# brokerClientAuthenticationParameters={"token":"your-token-string"} -# brokerClientAuthenticationParameters=token:your-token-string -# brokerClientAuthenticationParameters=file:///path/to/token - -# Whether client authorization credentials are forwarded to the broker for re-authorization. -# Authentication must be enabled via authenticationEnabled=true for this to take effect. -forwardAuthorizationCredentials=true - -``` - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/security-kerberos.md b/site2/website/versioned_docs/version-2.8.2-deprecated/security-kerberos.md deleted file mode 100644 index c49fa3bea1fce0..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/security-kerberos.md +++ /dev/null @@ -1,443 +0,0 @@ ---- -id: security-kerberos -title: Authentication using Kerberos -sidebar_label: "Authentication using Kerberos" -original_id: security-kerberos ---- - -[Kerberos](https://web.mit.edu/kerberos/) is a network authentication protocol. By using secret-key cryptography, [Kerberos](https://web.mit.edu/kerberos/) is designed to provide strong authentication for client applications and server applications. - -In Pulsar, you can use Kerberos with [SASL](https://en.wikipedia.org/wiki/Simple_Authentication_and_Security_Layer) as a choice for authentication. And Pulsar uses the [Java Authentication and Authorization Service (JAAS)](https://en.wikipedia.org/wiki/Java_Authentication_and_Authorization_Service) for SASL configuration. You need to provide JAAS configurations for Kerberos authentication. - -This document introduces how to configure `Kerberos` with `SASL` between Pulsar clients and brokers and how to configure Kerberos for Pulsar proxy in detail. - -## Configuration for Kerberos between Client and Broker - -### Prerequisites - -To begin, you need to set up (or already have) a [Key Distribution Center(KDC)](https://en.wikipedia.org/wiki/Key_distribution_center). Also you need to configure and run the [Key Distribution Center(KDC)](https://en.wikipedia.org/wiki/Key_distribution_center)in advance. 
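Once the KDC is running, a quick sanity check that a principal can authenticate against it is to use the standard Kerberos tooling. The following is a sketch only; the keytab path and principal are placeholders for the ones you create later in this guide:

```shell

### obtain a ticket for the client principal from its keytab, then list the ticket cache
kinit -kt /etc/security/keytabs/{client-keytabname}.keytab client/{hostname}@{REALM}
klist

```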
- -If your organization already uses a Kerberos server (for example, by using `Active Directory`), you do not have to install a new server for Pulsar. If your organization does not use a Kerberos server, you need to install one. Your Linux vendor might have packages for `Kerberos`. On how to install and configure Kerberos, refer to [Ubuntu](https://help.ubuntu.com/community/Kerberos), -[Redhat](https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Managing_Smart_Cards/installing-kerberos.html). - -Note that if you use Oracle Java, you need to download JCE policy files for your Java version and copy them to the `$JAVA_HOME/jre/lib/security` directory. - -#### Kerberos principals - -If you use the existing Kerberos system, ask your Kerberos administrator for a principal for each Brokers in your cluster and for every operating system user that accesses Pulsar with Kerberos authentication(via clients and tools). - -If you have installed your own Kerberos system, you can create these principals with the following commands: - -```shell - -### add Principals for broker -sudo /usr/sbin/kadmin.local -q 'addprinc -randkey broker/{hostname}@{REALM}' -sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{broker-keytabname}.keytab broker/{hostname}@{REALM}" -### add Principals for client -sudo /usr/sbin/kadmin.local -q 'addprinc -randkey client/{hostname}@{REALM}' -sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{client-keytabname}.keytab client/{hostname}@{REALM}" - -``` - -Note that *Kerberos* requires that all your hosts can be resolved with their FQDNs. - -The first part of Broker principal (for example, `broker` in `broker/{hostname}@{REALM}`) is the `serverType` of each host. The suggested values of `serverType` are `broker` (host machine runs service Pulsar Broker) and `proxy` (host machine runs service Pulsar Proxy). - -#### Configure how to connect to KDC - -You need to enter the command below to specify the path to the `krb5.conf` file for the client side and the broker side. The content of `krb5.conf` file indicates the default Realm and KDC information. See [JDK’s Kerberos Requirements](https://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/KerberosReq.html) for more details. - -```shell - --Djava.security.krb5.conf=/etc/pulsar/krb5.conf - -``` - -Here is an example of the krb5.conf file: - -In the configuration file, `EXAMPLE.COM` is the default realm; `kdc = localhost:62037` is the kdc server url for realm `EXAMPLE.COM `: - -``` - -[libdefaults] - default_realm = EXAMPLE.COM - -[realms] - EXAMPLE.COM = { - kdc = localhost:62037 - } - -``` - -Usually machines configured with kerberos already have a system wide configuration and this configuration is optional. - -#### JAAS configuration file - -You need JAAS configuration file for the client side and the broker side. JAAS configuration file provides the section of information that is used to connect KDC. 
Here is an example named `pulsar_jaas.conf`:

```

 PulsarBroker {
   com.sun.security.auth.module.Krb5LoginModule required
   useKeyTab=true
   storeKey=true
   useTicketCache=false
   keyTab="/etc/security/keytabs/pulsarbroker.keytab"
   principal="broker/localhost@EXAMPLE.COM";
};

 PulsarClient {
   com.sun.security.auth.module.Krb5LoginModule required
   useKeyTab=true
   storeKey=true
   useTicketCache=false
   keyTab="/etc/security/keytabs/pulsarclient.keytab"
   principal="client/localhost@EXAMPLE.COM";
};

```

You need to set the `JAAS` configuration file path as a JVM parameter for the client and the broker. For example:

```shell

 -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf

```

In the `pulsar_jaas.conf` file above:

1. `PulsarBroker` is a section name in the JAAS file that each broker uses. This section tells the broker which principal inside Kerberos to use and where the keytab storing that principal is located. `PulsarBroker` allows the broker to use the keytab specified in this section.
2. `PulsarClient` is a section name in the JAAS file that each client uses. This section tells the client which principal inside Kerberos to use and where the keytab storing that principal is located. `PulsarClient` allows the client to use the keytab specified in this section.
   The following example also reuses this `PulsarClient` section in both the Pulsar internal admin configuration and in the CLI commands `bin/pulsar-client`, `bin/pulsar-perf` and `bin/pulsar-admin`. You can also add different sections for different use cases.

You can have 2 separate JAAS configuration files:
* the file for a broker that has sections for both `PulsarBroker` and `PulsarClient`;
* the file for a client that only has a `PulsarClient` section.


### Kerberos configuration for Brokers

#### Configure the `broker.conf` file

 In the `broker.conf` file, set the Kerberos-related configurations.

 - Set `authenticationEnabled` to `true`;
 - Set `authenticationProviders` to choose `AuthenticationProviderSasl`;
 - Set `saslJaasClientAllowedIds` to a regex for the principals that are allowed to connect to the broker;
 - Set `saslJaasBrokerSectionName` to the section in the JAAS configuration file for the broker;

 To make the Pulsar internal admin client work properly, you need to set the configuration in the `broker.conf` file as below:

 - Set `brokerClientAuthenticationPlugin` to the client plugin `AuthenticationSasl`;
 - Set `brokerClientAuthenticationParameters` to the JSON string `{"saslJaasClientSectionName":"PulsarClient", "serverType":"broker"}`, in which `PulsarClient` is the section name in the `pulsar_jaas.conf` file, and `"serverType":"broker"` indicates that the internal admin client connects to a Pulsar Broker;

 Here is an example:

```

authenticationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderSasl
saslJaasClientAllowedIds=.*client.*
saslJaasBrokerSectionName=PulsarBroker

## Authentication settings of the broker itself. Used when the broker connects to other brokers
brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationSasl
brokerClientAuthenticationParameters={"saslJaasClientSectionName":"PulsarClient", "serverType":"broker"}

```

#### Set Broker JVM parameter

 Set JVM parameters for the JAAS configuration file and the krb5 configuration file with additional options.
- -```shell - - -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf - -``` - -You can add this at the end of `PULSAR_EXTRA_OPTS` in the file [`pulsar_env.sh`](https://github.com/apache/pulsar/blob/master/conf/pulsar_env.sh) - -You must ensure that the operating system user who starts broker can reach the keytabs configured in the `pulsar_jaas.conf` file and kdc server in the `krb5.conf` file. - -### Kerberos configuration for clients - -#### Java Client and Java Admin Client - -In client application, include `pulsar-client-auth-sasl` in your project dependency. - -``` - - - org.apache.pulsar - pulsar-client-auth-sasl - ${pulsar.version} - - -``` - -Configure the authentication type to use `AuthenticationSasl`, and also provide the authentication parameters to it. - -You need 2 parameters: -- `saslJaasClientSectionName`. This parameter corresponds to the section in JAAS configuration file for client; -- `serverType`. This parameter stands for whether this client connects to broker or proxy. And client uses this parameter to know which server side principal should be used. - -When you authenticate between client and broker with the setting in above JAAS configuration file, we need to set `saslJaasClientSectionName` to `PulsarClient` and set `serverType` to `broker`. - -The following is an example of creating a Java client: - - ```java - - System.setProperty("java.security.auth.login.config", "/etc/pulsar/pulsar_jaas.conf"); - System.setProperty("java.security.krb5.conf", "/etc/pulsar/krb5.conf"); - - Map authParams = Maps.newHashMap(); - authParams.put("saslJaasClientSectionName", "PulsarClient"); - authParams.put("serverType", "broker"); - - Authentication saslAuth = AuthenticationFactory - .create(org.apache.pulsar.client.impl.auth.AuthenticationSasl.class.getName(), authParams); - - PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://my-broker.com:6650") - .authentication(saslAuth) - .build(); - - ``` - -> The first two lines in the example above are hard coded, alternatively, you can set additional JVM parameters for JAAS and krb5 configuration file when you run the application like below: - -``` - -java -cp -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf $APP-jar-with-dependencies.jar $CLASSNAME - -``` - -You must ensure that the operating system user who starts pulsar client can reach the keytabs configured in the `pulsar_jaas.conf` file and kdc server in the `krb5.conf` file. - -#### Configure CLI tools - -If you use a command-line tool (such as `bin/pulsar-client`, `bin/pulsar-perf` and `bin/pulsar-admin`), you need to perform the following steps: - -Step 1. Enter the command below to configure your `client.conf`. - -```shell - -authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationSasl -authParams={"saslJaasClientSectionName":"PulsarClient", "serverType":"broker"} - -``` - -Step 2. Enter the command below to set JVM parameters for JAAS configuration file and krb5 configuration file with additional options. 
- -```shell - - -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf - -``` - -You can add this at the end of `PULSAR_EXTRA_OPTS` in the file [`pulsar_tools_env.sh`](https://github.com/apache/pulsar/blob/master/conf/pulsar_tools_env.sh), -or add this line `OPTS="$OPTS -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf "` directly to the CLI tool script. - -The meaning of configurations is the same as the meaning of configurations in Java client section. - -## Kerberos configuration for working with Pulsar Proxy - -With the above configuration, client and broker can do authentication using Kerberos. - -A client that connects to Pulsar Proxy is a little different. Pulsar Proxy (as a SASL Server in Kerberos) authenticates Client (as a SASL client in Kerberos) first; and then Pulsar broker authenticates Pulsar Proxy. - -Now in comparison with the above configuration between client and broker, we show you how to configure Pulsar Proxy as follows. - -### Create principal for Pulsar Proxy in Kerberos - -You need to add new principals for Pulsar Proxy comparing with the above configuration. If you already have principals for client and broker, you only need to add the proxy principal here. - -```shell - -### add Principals for Pulsar Proxy -sudo /usr/sbin/kadmin.local -q 'addprinc -randkey proxy/{hostname}@{REALM}' -sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{proxy-keytabname}.keytab proxy/{hostname}@{REALM}" -### add Principals for broker -sudo /usr/sbin/kadmin.local -q 'addprinc -randkey broker/{hostname}@{REALM}' -sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{broker-keytabname}.keytab broker/{hostname}@{REALM}" -### add Principals for client -sudo /usr/sbin/kadmin.local -q 'addprinc -randkey client/{hostname}@{REALM}' -sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{client-keytabname}.keytab client/{hostname}@{REALM}" - -``` - -### Add a section in JAAS configuration file for Pulsar Proxy - -In comparison with the above configuration, add a new section for Pulsar Proxy in JAAS configuration file. - -Here is an example named `pulsar_jaas.conf`: - -``` - - PulsarBroker { - com.sun.security.auth.module.Krb5LoginModule required - useKeyTab=true - storeKey=true - useTicketCache=false - keyTab="/etc/security/keytabs/pulsarbroker.keytab" - principal="broker/localhost@EXAMPLE.COM"; -}; - - PulsarProxy { - com.sun.security.auth.module.Krb5LoginModule required - useKeyTab=true - storeKey=true - useTicketCache=false - keyTab="/etc/security/keytabs/pulsarproxy.keytab" - principal="proxy/localhost@EXAMPLE.COM"; -}; - - PulsarClient { - com.sun.security.auth.module.Krb5LoginModule required - useKeyTab=true - storeKey=true - useTicketCache=false - keyTab="/etc/security/keytabs/pulsarclient.keytab" - principal="client/localhost@EXAMPLE.COM"; -}; - -``` - -### Proxy client configuration - -Pulsar client configuration is similar with client and broker configuration, except that you need to set `serverType` to `proxy` instead of `broker`, for the reason that you need to do the Kerberos authentication between client and proxy. 
- - ```java - - System.setProperty("java.security.auth.login.config", "/etc/pulsar/pulsar_jaas.conf"); - System.setProperty("java.security.krb5.conf", "/etc/pulsar/krb5.conf"); - - Map authParams = Maps.newHashMap(); - authParams.put("saslJaasClientSectionName", "PulsarClient"); - authParams.put("serverType", "proxy"); // ** here is the different ** - - Authentication saslAuth = AuthenticationFactory - .create(org.apache.pulsar.client.impl.auth.AuthenticationSasl.class.getName(), authParams); - - PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://my-broker.com:6650") - .authentication(saslAuth) - .build(); - - ``` - -> The first two lines in the example above are hard coded, alternatively, you can set additional JVM parameters for JAAS and krb5 configuration file when you run the application like below: - -``` - -java -cp -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf $APP-jar-with-dependencies.jar $CLASSNAME - -``` - -### Kerberos configuration for Pulsar proxy service - -In the `proxy.conf` file, set Kerberos related configuration. Here is an example: - -```shell - -## related to authenticate client. -authenticationEnabled=true -authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderSasl -saslJaasClientAllowedIds=.*client.* -saslJaasBrokerSectionName=PulsarProxy - -## related to be authenticated by broker -brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationSasl -brokerClientAuthenticationParameters={"saslJaasClientSectionName":"PulsarProxy", "serverType":"broker"} -forwardAuthorizationCredentials=true - -``` - -The first part relates to authenticating between client and Pulsar Proxy. In this phase, client works as SASL client, while Pulsar Proxy works as SASL server. - -The second part relates to authenticating between Pulsar Proxy and Pulsar Broker. In this phase, Pulsar Proxy works as SASL client, while Pulsar Broker works as SASL server. - -### Broker side configuration. - -The broker side configuration file is the same with the above `broker.conf`, you do not need special configuration for Pulsar Proxy. - -``` - -authenticationEnabled=true -authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderSasl -saslJaasClientAllowedIds=.*client.* -saslJaasBrokerSectionName=PulsarBroker - -``` - -## Regarding authorization and role token - -For Kerberos authentication, we usually use the authenticated principal as the role token for Pulsar authorization. For more information of authorization in Pulsar, see [security authorization](security-authorization.md). - -If you enable 'authorizationEnabled', you need to set `superUserRoles` in `broker.conf` that corresponds to the name registered in kdc. - -For example: - -```bash - -superUserRoles=client/{clientIp}@EXAMPLE.COM - -``` - -## Regarding authentication between ZooKeeper and Broker - -Pulsar Broker acts as a Kerberos client when you authenticate with Zookeeper. 
According to [ZooKeeper document](https://cwiki.apache.org/confluence/display/ZOOKEEPER/Client-Server+mutual+authentication), you need these settings in `conf/zookeeper.conf`: - -``` - -authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider -requireClientAuthScheme=sasl - -``` - -Enter the following commands to add a section of `Client` configurations in the file `pulsar_jaas.conf`, which Pulsar Broker uses: - -``` - - Client { - com.sun.security.auth.module.Krb5LoginModule required - useKeyTab=true - storeKey=true - useTicketCache=false - keyTab="/etc/security/keytabs/pulsarbroker.keytab" - principal="broker/localhost@EXAMPLE.COM"; -}; - -``` - -In this setting, the principal of Pulsar Broker and keyTab file indicates the role of Broker when you authenticate with ZooKeeper. - -## Regarding authentication between BookKeeper and Broker - -Pulsar Broker acts as a Kerberos client when you authenticate with Bookie. According to [BookKeeper document](http://bookkeeper.apache.org/docs/latest/security/sasl/), you need to add `bookkeeperClientAuthenticationPlugin` parameter in `broker.conf`: - -``` - -bookkeeperClientAuthenticationPlugin=org.apache.bookkeeper.sasl.SASLClientProviderFactory - -``` - -In this setting, `SASLClientProviderFactory` creates a BookKeeper SASL client in a Broker, and the Broker uses the created SASL client to authenticate with a Bookie node. - -Enter the following commands to add a section of `BookKeeper` configurations in the `pulsar_jaas.conf` that Pulsar Broker uses: - -``` - - BookKeeper { - com.sun.security.auth.module.Krb5LoginModule required - useKeyTab=true - storeKey=true - useTicketCache=false - keyTab="/etc/security/keytabs/pulsarbroker.keytab" - principal="broker/localhost@EXAMPLE.COM"; -}; - -``` - -In this setting, the principal of Pulsar Broker and keyTab file indicates the role of Broker when you authenticate with Bookie. diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/security-oauth2.md b/site2/website/versioned_docs/version-2.8.2-deprecated/security-oauth2.md deleted file mode 100644 index 24b1530cc848ae..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/security-oauth2.md +++ /dev/null @@ -1,232 +0,0 @@ ---- -id: security-oauth2 -title: Client authentication using OAuth 2.0 access tokens -sidebar_label: "Authentication using OAuth 2.0 access tokens" -original_id: security-oauth2 ---- - -Pulsar supports authenticating clients using OAuth 2.0 access tokens. You can use OAuth 2.0 access tokens to identify a Pulsar client and associate the Pulsar client with some "principal" (or "role"), which is permitted to do some actions, such as publishing messages to a topic or consume messages from a topic. - -This module is used to support the Pulsar client authentication plugin for OAuth 2.0. After communicating with the Oauth 2.0 server, the Pulsar client gets an `access token` from the Oauth 2.0 server, and passes this `access token` to the Pulsar broker to do the authentication. The broker can use the `org.apache.pulsar.broker.authentication.AuthenticationProviderToken`. Or, you can add your own `AuthenticationProvider` to make it with this module. - -## Authentication provider configuration - -This library allows you to authenticate the Pulsar client by using an access token that is obtained from an OAuth 2.0 authorization service, which acts as a _token issuer_. - -### Authentication types - -The authentication type determines how to obtain an access token through an OAuth 2.0 authorization flow. 
- -:::note - -Currently, the Pulsar Java client only supports the `client_credentials` authentication type. - -::: - -#### Client credentials - -The following table lists parameters supported for the `client credentials` authentication type. - -| Parameter | Description | Example | Required or not | -| --- | --- | --- | --- | -| `type` | Oauth 2.0 authentication type. | `client_credentials` (default) | Optional | -| `issuerUrl` | URL of the authentication provider which allows the Pulsar client to obtain an access token | `https://accounts.google.com` | Required | -| `privateKey` | URL to a JSON credentials file | Support the following pattern formats:
  1. `file:///path/to/file`
  2. `file:/path/to/file`
  3. `data:application/json;base64,<base64-encoded value>` | Required |
| `audience` | An OAuth 2.0 "resource server" identifier for the Pulsar cluster | `https://broker.example.com` | Required |

The credentials file contains service account credentials used with the client authentication type. The following shows an example of a credentials file `credentials_file.json`.

```json

{
  "type": "client_credentials",
  "client_id": "d9ZyX97q1ef8Cr81WHVC4hFQ64vSlDK3",
  "client_secret": "on1uJ...k6F6R",
  "client_email": "1234567890-abcdefghijklmnopqrstuvwxyz@developer.gserviceaccount.com",
  "issuer_url": "https://accounts.google.com"
}

```

In the above example, the authentication type is set to `client_credentials` by default, and the fields "client_id" and "client_secret" are required.

### Typical original OAuth2 request mapping

The following shows a typical original OAuth2 request, which is used to obtain the access token from the OAuth2 server.

```bash

curl --request POST \
  --url https://dev-kt-aa9ne.us.auth0.com/oauth/token \
  --header 'content-type: application/json' \
  --data '{
  "client_id":"Xd23RHsUnvUlP7wchjNYOaIfazgeHd9x",
  "client_secret":"rT7ps7WY8uhdVuBTKWZkttwLdQotmdEliaM5rLfmgNibvqziZ-g07ZH52N_poGAb",
  "audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/",
  "grant_type":"client_credentials"}'

```

In the above example, the mapping relationship is shown below.

- The `issuerUrl` parameter in this plugin is mapped to `--url https://dev-kt-aa9ne.us.auth0.com`.
- The `privateKey` file parameter in this plugin should contain at least the `client_id` and `client_secret` fields.
- The `audience` parameter in this plugin is mapped to `"audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/"`.

## Client Configuration

You can use the OAuth2 authentication provider with the following Pulsar clients.

### Java

You can use the factory method to configure authentication for the Pulsar Java client.

```java

import org.apache.pulsar.client.impl.auth.oauth2.AuthenticationFactoryOAuth2;

String issuerUrl = "https://dev-kt-aa9ne.us.auth0.com";
String credentialsUrl = "file:///path/to/KeyFile.json";
String audience = "https://dev-kt-aa9ne.us.auth0.com/api/v2/";

PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar://broker.example.com:6650/")
    .authentication(
        AuthenticationFactoryOAuth2.clientCredentials(issuerUrl, credentialsUrl, audience))
    .build();

```

In addition, you can also use the encoded parameters to configure authentication for the Pulsar Java client. Note that the quotes inside the JSON parameter string must be escaped:

```java

Authentication auth = AuthenticationFactory
    .create(AuthenticationOAuth2.class.getName(),
        "{\"type\":\"client_credentials\",\"privateKey\":\"./key/path/..\",\"issuerUrl\":\"...\",\"audience\":\"...\"}");
PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar://broker.example.com:6650/")
    .authentication(auth)
    .build();

```

### C++ client

The C++ client is similar to the Java client. You need to provide the `issuerUrl`, the `private_key` (the credentials file path), and the audience.
- -```c++ - -#include - -pulsar::ClientConfiguration config; -std::string params = R"({ - "issuer_url": "https://dev-kt-aa9ne.us.auth0.com", - "private_key": "../../pulsar-broker/src/test/resources/authentication/token/cpp_credentials_file.json", - "audience": "https://dev-kt-aa9ne.us.auth0.com/api/v2/"})"; - -config.setAuth(pulsar::AuthOauth2::create(params)); - -pulsar::Client client("pulsar://broker.example.com:6650/", config); - -``` - -### Go client - -To enable OAuth2 authentication in Go client, you need to configure OAuth2 authentication. -This example shows how to configure OAuth2 authentication in Go client. - -```go - -oauth := pulsar.NewAuthenticationOAuth2(map[string]string{ - "type": "client_credentials", - "issuerUrl": "https://dev-kt-aa9ne.us.auth0.com", - "audience": "https://dev-kt-aa9ne.us.auth0.com/api/v2/", - "privateKey": "/path/to/privateKey", - "clientId": "0Xx...Yyxeny", - }) -client, err := pulsar.NewClient(pulsar.ClientOptions{ - URL: "pulsar://my-cluster:6650", - Authentication: oauth, -}) - -``` - -### Python client - -To enable OAuth2 authentication in Python client, you need to configure OAuth2 authentication. -This example shows how to configure OAuth2 authentication in Python client. - -```python - -from pulsar import Client, AuthenticationOauth2 - -params = ''' -{ - "issuer_url": "https://dev-kt-aa9ne.us.auth0.com", - "private_key": "/path/to/privateKey", - "audience": "https://dev-kt-aa9ne.us.auth0.com/api/v2/" -} -''' - -client = Client("pulsar://my-cluster:6650", authentication=AuthenticationOauth2(params)) - -``` - -## CLI configuration - -This section describes how to use Pulsar CLI tools to connect a cluster through OAuth2 authentication plugin. - -### pulsar-admin - -This example shows how to use pulsar-admin to connect to a cluster through OAuth2 authentication plugin. - -```shell script - -bin/pulsar-admin --admin-url https://streamnative.cloud:443 \ ---auth-plugin org.apache.pulsar.client.impl.auth.oauth2.AuthenticationOAuth2 \ ---auth-params '{"privateKey":"file:///path/to/key/file.json", - "issuerUrl":"https://dev-kt-aa9ne.us.auth0.com", - "audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/"}' \ -tenants list - -``` - -Set the `admin-url` parameter to the Web service URL. A Web service URL is a combination of the protocol, hostname and port ID, such as `pulsar://localhost:6650`. -Set the `privateKey`, `issuerUrl`, and `audience` parameters to the values based on the configuration in the key file. For details, see [authentication types](#authentication-types). - -### pulsar-client - -This example shows how to use pulsar-client to connect to a cluster through OAuth2 authentication plugin. - -```shell script - -bin/pulsar-client \ ---url SERVICE_URL \ ---auth-plugin org.apache.pulsar.client.impl.auth.oauth2.AuthenticationOAuth2 \ ---auth-params '{"privateKey":"file:///path/to/key/file.json", - "issuerUrl":"https://dev-kt-aa9ne.us.auth0.com", - "audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/"}' \ -produce test-topic -m "test-message" -n 10 - -``` - -Set the `admin-url` parameter to the Web service URL. A Web service URL is a combination of the protocol, hostname and port ID, such as `pulsar://localhost:6650`. -Set the `privateKey`, `issuerUrl`, and `audience` parameters to the values based on the configuration in the key file. For details, see [authentication types](#authentication-types). - -### pulsar-perf - -This example shows how to use pulsar-perf to connect to a cluster through OAuth2 authentication plugin. 
- -```shell script - -bin/pulsar-perf produce --service-url pulsar+ssl://streamnative.cloud:6651 \ ---auth_plugin org.apache.pulsar.client.impl.auth.oauth2.AuthenticationOAuth2 \ ---auth-params '{"privateKey":"file:///path/to/key/file.json", - "issuerUrl":"https://dev-kt-aa9ne.us.auth0.com", - "audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/"}' \ --r 1000 -s 1024 test-topic - -``` - -Set the `admin-url` parameter to the Web service URL. A Web service URL is a combination of the protocol, hostname and port ID, such as `pulsar://localhost:6650`. -Set the `privateKey`, `issuerUrl`, and `audience` parameters to the values based on the configuration in the key file. For details, see [authentication types](#authentication-types). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/security-overview.md b/site2/website/versioned_docs/version-2.8.2-deprecated/security-overview.md deleted file mode 100644 index c6bd9b64e4f766..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/security-overview.md +++ /dev/null @@ -1,36 +0,0 @@ ---- -id: security-overview -title: Pulsar security overview -sidebar_label: "Overview" -original_id: security-overview ---- - -As the central message bus for a business, Apache Pulsar is frequently used for storing mission-critical data. Therefore, enabling security features in Pulsar is crucial. - -By default, Pulsar configures no encryption, authentication, or authorization. Any client can communicate to Apache Pulsar via plain text service URLs. So we must ensure that Pulsar accessing via these plain text service URLs is restricted to trusted clients only. In such cases, you can use Network segmentation and/or authorization ACLs to restrict access to trusted IPs. If you use neither, the state of cluster is wide open and anyone can access the cluster. - -Pulsar supports a pluggable authentication mechanism. And Pulsar clients use this mechanism to authenticate with brokers and proxies. You can also configure Pulsar to support multiple authentication sources. - -The Pulsar broker validates the authentication credentials when a connection is established. After the initial connection is authenticated, the "principal" token is stored for authorization though the connection is not re-authenticated. The broker periodically checks the expiration status of every `ServerCnx` object. You can set the `authenticationRefreshCheckSeconds` on the broker to control the frequency to check the expiration status. By default, the `authenticationRefreshCheckSeconds` is set to 60s. When the authentication is expired, the broker forces to re-authenticate the connection. If the re-authentication fails, the broker disconnects the client. - -The broker supports learning whether a particular client supports authentication refreshing. If a client supports authentication refreshing and the credential is expired, the authentication provider calls the `refreshAuthentication` method to initiate the refreshing process. If a client does not support authentication refreshing and the credential is expired, the broker disconnects the client. - -You had better secure the service components in your Apache Pulsar deployment. - -## Role tokens - -In Pulsar, a *role* is a string, like `admin` or `app1`, which can represent a single client or multiple clients. You can use roles to control permission for clients to produce or consume from certain topics, administer the configuration for tenants, and so on. 
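For example, once authorization is enabled, an administrator can grant a role such as `app1` permission to produce and consume on a namespace with `pulsar-admin` (a sketch; the tenant and namespace names are placeholders):

```shell

$ bin/pulsar-admin namespaces grant-permission my-tenant/my-namespace \
  --role app1 \
  --actions produce,consume

```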
- -Apache Pulsar uses an [Authentication Provider](#authentication-providers) to establish the identity of a client and then assign a *role token* to that client. This role token is then used for [Authorization and ACLs](security-authorization.md) to determine what the client is authorized to do. - -## Authentication providers - -Currently Pulsar supports the following authentication providers: - -- [TLS Authentication](security-tls-authentication.md) -- [Athenz](security-athenz.md) -- [Kerberos](security-kerberos.md) -- [JSON Web Token Authentication](security-jwt.md) -- [OAuth 2.0 authentication](security-oauth2.md) -- [HTTP basic authentication](security-basic-auth.md) - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/security-tls-authentication.md b/site2/website/versioned_docs/version-2.8.2-deprecated/security-tls-authentication.md deleted file mode 100644 index 85d2240f413060..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/security-tls-authentication.md +++ /dev/null @@ -1,222 +0,0 @@ ---- -id: security-tls-authentication -title: Authentication using TLS -sidebar_label: "Authentication using TLS" -original_id: security-tls-authentication ---- - -## TLS authentication overview - -TLS authentication is an extension of [TLS transport encryption](security-tls-transport.md). Not only servers have keys and certs that the client uses to verify the identity of servers, clients also have keys and certs that the server uses to verify the identity of clients. You must have TLS transport encryption configured on your cluster before you can use TLS authentication. This guide assumes you already have TLS transport encryption configured. - -`Bouncy Castle Provider` provides TLS related cipher suites and algorithms in Pulsar. If you need [FIPS](https://www.bouncycastle.org/fips_faq.html) version of `Bouncy Castle Provider`, please reference [Bouncy Castle page](security-bouncy-castle.md). - -### Create client certificates - -Client certificates are generated using the certificate authority. Server certificates are also generated with the same certificate authority. - -The biggest difference between client certs and server certs is that the **common name** for the client certificate is the **role token** which that client is authenticated as. - -To use client certificates, you need to set `tlsRequireTrustedClientCertOnConnect=true` at the broker side. For details, refer to [TLS broker configuration](security-tls-transport.md#configure-broker). - -First, you need to enter the following command to generate the key : - -```bash - -$ openssl genrsa -out admin.key.pem 2048 - -``` - -Similar to the broker, the client expects the key to be in [PKCS 8](https://en.wikipedia.org/wiki/PKCS_8) format, so you need to convert it by entering the following command: - -```bash - -$ openssl pkcs8 -topk8 -inform PEM -outform PEM \ - -in admin.key.pem -out admin.key-pk8.pem -nocrypt - -``` - -Next, enter the command below to generate the certificate request. When you are asked for a **common name**, enter the **role token** that you want this key pair to authenticate a client as. - -```bash - -$ openssl req -config openssl.cnf \ - -key admin.key.pem -new -sha256 -out admin.csr.pem - -``` - -:::note - -If openssl.cnf is not specified, read [Certificate authority](http://pulsar.apache.org/docs/en/security-tls-transport/#certificate-authority) to get the openssl.cnf. - -::: - -Then, enter the command below to sign with request with the certificate authority. 
Note that the client certs uses the **usr_cert** extension, which allows the cert to be used for client authentication. - -```bash - -$ openssl ca -config openssl.cnf -extensions usr_cert \ - -days 1000 -notext -md sha256 \ - -in admin.csr.pem -out admin.cert.pem - -``` - -You can get a cert, `admin.cert.pem`, and a key, `admin.key-pk8.pem` from this command. With `ca.cert.pem`, clients can use this cert and this key to authenticate themselves to brokers and proxies as the role token ``admin``. - -:::note - -If the "unable to load CA private key" error occurs and the reason of this error is "No such file or directory: /etc/pki/CA/private/cakey.pem" in this step. Try the command below: - -```bash - -$ cd /etc/pki/tls/misc/CA -$ ./CA -newca - -``` - -to generate `cakey.pem` . - -::: - -## Enable TLS authentication on brokers - -To configure brokers to authenticate clients, add the following parameters to `broker.conf`, alongside [the configuration to enable tls transport](security-tls-transport.md#broker-configuration): - -```properties - -# Configuration to enable authentication -authenticationEnabled=true -authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderTls - -# operations and publish/consume from all topics -superUserRoles=admin - -# Authentication settings of the broker itself. Used when the broker connects to other brokers, either in same or other clusters -brokerClientTlsEnabled=true -brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationTls -brokerClientAuthenticationParameters={"tlsCertFile":"/path/my-ca/admin.cert.pem","tlsKeyFile":"/path/my-ca/admin.key-pk8.pem"} -brokerClientTrustCertsFilePath=/path/my-ca/certs/ca.cert.pem - -``` - -## Enable TLS authentication on proxies - -To configure proxies to authenticate clients, add the following parameters to `proxy.conf`, alongside [the configuration to enable tls transport](security-tls-transport.md#proxy-configuration): - -The proxy should have its own client key pair for connecting to brokers. You need to configure the role token for this key pair in the ``proxyRoles`` of the brokers. See the [authorization guide](security-authorization.md) for more details. - -```properties - -# For clients connecting to the proxy -authenticationEnabled=true -authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderTls - -# For the proxy to connect to brokers -brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationTls -brokerClientAuthenticationParameters=tlsCertFile:/path/to/proxy.cert.pem,tlsKeyFile:/path/to/proxy.key-pk8.pem - -``` - -## Client configuration - -When you use TLS authentication, client connects via TLS transport. You need to configure the client to use ```https://``` and 8443 port for the web service URL, ```pulsar+ssl://``` and 6651 port for the broker service URL. - -### CLI tools - -[Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-pulsar-admin.md), [`pulsar-perf`](reference-cli-tools.md#pulsar-perf), and [`pulsar-client`](reference-cli-tools.md#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation. 
- -You need to add the following parameters to that file to use TLS authentication with the CLI tools of Pulsar: - -```properties - -webServiceUrl=https://broker.example.com:8443/ -brokerServiceUrl=pulsar+ssl://broker.example.com:6651/ -useTls=true -tlsAllowInsecureConnection=false -tlsTrustCertsFilePath=/path/to/ca.cert.pem -authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationTls -authParams=tlsCertFile:/path/to/my-role.cert.pem,tlsKeyFile:/path/to/my-role.key-pk8.pem - -``` - -### Java client - -```java - -import org.apache.pulsar.client.api.PulsarClient; - -PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar+ssl://broker.example.com:6651/") - .enableTls(true) - .tlsTrustCertsFilePath("/path/to/ca.cert.pem") - .authentication("org.apache.pulsar.client.impl.auth.AuthenticationTls", - "tlsCertFile:/path/to/my-role.cert.pem,tlsKeyFile:/path/to/my-role.key-pk8.pem") - .build(); - -``` - -### Python client - -```python - -from pulsar import Client, AuthenticationTLS - -auth = AuthenticationTLS("/path/to/my-role.cert.pem", "/path/to/my-role.key-pk8.pem") -client = Client("pulsar+ssl://broker.example.com:6651/", - tls_trust_certs_file_path="/path/to/ca.cert.pem", - tls_allow_insecure_connection=False, - authentication=auth) - -``` - -### C++ client - -```c++ - -#include - -pulsar::ClientConfiguration config; -config.setUseTls(true); -config.setTlsTrustCertsFilePath("/path/to/ca.cert.pem"); -config.setTlsAllowInsecureConnection(false); - -pulsar::AuthenticationPtr auth = pulsar::AuthTls::create("/path/to/my-role.cert.pem", - "/path/to/my-role.key-pk8.pem") -config.setAuth(auth); - -pulsar::Client client("pulsar+ssl://broker.example.com:6651/", config); - -``` - -### Node.js client - -```JavaScript - -const Pulsar = require('pulsar-client'); - -(async () => { - const auth = new Pulsar.AuthenticationTls({ - certificatePath: '/path/to/my-role.cert.pem', - privateKeyPath: '/path/to/my-role.key-pk8.pem', - }); - - const client = new Pulsar.Client({ - serviceUrl: 'pulsar+ssl://broker.example.com:6651/', - authentication: auth, - tlsTrustCertsFilePath: '/path/to/ca.cert.pem', - }); -})(); - -``` - -### C# client - -```c# - -var clientCertificate = new X509Certificate2("admin.pfx"); -var client = PulsarClient.Builder() - .AuthenticateUsingClientCertificate(clientCertificate) - .Build(); - -``` - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/security-tls-keystore.md b/site2/website/versioned_docs/version-2.8.2-deprecated/security-tls-keystore.md deleted file mode 100644 index 2d80782ba88d35..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/security-tls-keystore.md +++ /dev/null @@ -1,342 +0,0 @@ ---- -id: security-tls-keystore -title: Using TLS with KeyStore configure -sidebar_label: "Using TLS with KeyStore configure" -original_id: security-tls-keystore ---- - -## Overview - -Apache Pulsar supports [TLS encryption](security-tls-transport.md) and [TLS authentication](security-tls-authentication.md) between clients and Apache Pulsar service. -By default it uses PEM format file configuration. This page tries to describe use [KeyStore](https://en.wikipedia.org/wiki/Java_KeyStore) type configure for TLS. - - -## TLS encryption with KeyStore configure - -### Generate TLS key and certificate - -The first step of deploying TLS is to generate the key and the certificate for each machine in the cluster. -You can use Java’s `keytool` utility to accomplish this task. 
We will generate the key into a temporary keystore
initially for the broker, so that we can export and sign it later with a CA.

```shell

keytool -keystore broker.keystore.jks -alias localhost -validity {validity} -genkeypair -keyalg RSA

```

You need to specify two parameters in the above command:

1. `keystore`: the keystore file that stores the certificate. The *keystore* file contains the private key of
   the certificate; hence, it needs to be kept safely.
2. `validity`: the valid time of the certificate in days.

> Ensure that the common name (CN) exactly matches the fully qualified domain name (FQDN) of the server.
The client compares the CN with the DNS domain name to ensure that it is indeed connecting to the desired server, not a malicious one.

### Creating your own CA

After the first step, each broker in the cluster has a public-private key pair and a certificate to identify the machine.
The certificate, however, is unsigned, which means that an attacker can create such a certificate to pretend to be any machine.

Therefore, it is important to prevent forged certificates by signing them for each machine in the cluster.
A `certificate authority (CA)` is responsible for signing certificates. A CA works like a government that issues passports —
the government stamps (signs) each passport so that the passport becomes difficult to forge. Other governments verify the stamps
to ensure the passport is authentic. Similarly, the CA signs the certificates, and the cryptography guarantees that a signed
certificate is computationally difficult to forge. Thus, as long as the CA is a genuine and trusted authority, the clients have
high assurance that they are connecting to the authentic machines.

```shell

openssl req -new -x509 -keyout ca-key -out ca-cert -days 365

```

The generated CA is simply a *public-private* key pair and certificate, and it is intended to sign other certificates.

The next step is to add the generated CA to the clients' truststore so that the clients can trust this CA:

```shell

keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert

```

NOTE: If you configure the brokers to require client authentication by setting `tlsRequireTrustedClientCertOnConnect` to `true` in the
broker configuration, then you must also provide a truststore for the brokers, and it should contain all the CA certificates that the clients' keys were signed by.

```shell

keytool -keystore broker.truststore.jks -alias CARoot -import -file ca-cert

```

In contrast to the keystore, which stores each machine's own identity, the truststore of a client stores all the certificates
that the client should trust. Importing a certificate into one's truststore also means trusting all certificates that are signed
by that certificate. As in the analogy above, trusting the government (CA) also means trusting all passports (certificates) that
it has issued. This attribute is called the chain of trust, and it is particularly useful when deploying TLS on a large cluster.
You can sign all certificates in the cluster with a single CA, and have all machines share the same truststore that trusts the CA.
That way all machines can authenticate all other machines.


### Signing the certificate

The next step is to sign all certificates in the keystore with the CA we generated.
First, you need to export the certificate from the keystore:

```shell

keytool -keystore broker.keystore.jks -alias localhost -certreq -file cert-file

```

Then sign it with the CA:

```shell

openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days {validity} -CAcreateserial -passin pass:{ca-password}

```

Finally, you need to import both the certificate of the CA and the signed certificate into the keystore:

```shell

keytool -keystore broker.keystore.jks -alias CARoot -import -file ca-cert
keytool -keystore broker.keystore.jks -alias localhost -import -file cert-signed

```

The definitions of the parameters are the following:

1. `keystore`: the location of the keystore
2. `ca-cert`: the certificate of the CA
3. `ca-key`: the private key of the CA
4. `ca-password`: the passphrase of the CA
5. `cert-file`: the exported, unsigned certificate of the broker
6. `cert-signed`: the signed certificate of the broker

### Configuring brokers

Brokers enable TLS when you provide a valid `brokerServicePortTls` and `webServicePortTls`; to use the KeyStore type of configuration, you also need to set `tlsEnabledWithKeyStore` to `true`.
Besides this, the KeyStore path, KeyStore password, TrustStore path, and TrustStore password need to be provided.
Since the broker creates an internal client/admin client to communicate with other brokers, you also need to provide configuration for them, similar to how you configure the outside client/admin client.
If `tlsRequireTrustedClientCertOnConnect` is `true`, the broker rejects the connection if the client certificate is not trusted.

The following TLS configs are needed on the broker side:

```properties

tlsEnabledWithKeyStore=true
# key store
tlsKeyStoreType=JKS
tlsKeyStore=/var/private/tls/broker.keystore.jks
tlsKeyStorePassword=brokerpw

# trust store
tlsTrustStoreType=JKS
tlsTrustStore=/var/private/tls/broker.truststore.jks
tlsTrustStorePassword=brokerpw

# internal client/admin-client config
brokerClientTlsEnabled=true
brokerClientTlsEnabledWithKeyStore=true
brokerClientTlsTrustStoreType=JKS
brokerClientTlsTrustStore=/var/private/tls/client.truststore.jks
brokerClientTlsTrustStorePassword=clientpw

```

NOTE: it is important to restrict access to the store files via filesystem permissions.

If you have configured TLS on the broker and want to disable the non-TLS ports, you can set the values of the following configurations to empty as below.

```

brokerServicePort=
webServicePort=

```

In this case, you need to set the following configurations.

```conf

brokerClientTlsEnabled=true // Set this to true
brokerClientTlsEnabledWithKeyStore=true  // Set this to true
brokerClientTlsTrustStore= // Set this to your desired value
brokerClientTlsTrustStorePassword= // Set this to your desired value

```

Optional settings that may be worth considering:

1. tlsClientAuthentication=false: Enable/Disable using TLS for authentication. When enabled, this config authenticates the other end
   of the communication channel. It should be enabled on both brokers and clients for mutual TLS.
2. tlsCiphers=[TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256], A cipher suite is a named combination of authentication, encryption, MAC and key exchange
   algorithms used to negotiate the security settings for a network connection using the TLS network protocol. By default,
   it is null.
   [OpenSSL Ciphers](https://www.openssl.org/docs/man1.0.2/apps/ciphers.html)
   [JDK Ciphers](http://docs.oracle.com/javase/8/docs/technotes/guides/security/StandardNames.html#ciphersuites)
3. `tlsProtocols=[TLSv1.3,TLSv1.2]`: list the TLS protocols that you are going to accept from clients. By default, it is not set.

### Configuring Clients

This is similar to [TLS encryption configuration for clients with the PEM type](security-tls-transport.md#Client configuration). For a minimal configuration, you need to provide the TrustStore information.

For example:

1. for [Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-cli-tools#pulsar-admin), [`pulsar-perf`](reference-cli-tools#pulsar-perf), and [`pulsar-client`](reference-cli-tools#pulsar-client), use the `conf/client.conf` config file in a Pulsar installation.

   ```properties

   webServiceUrl=https://broker.example.com:8443/
   brokerServiceUrl=pulsar+ssl://broker.example.com:6651/
   useKeyStoreTls=true
   tlsTrustStoreType=JKS
   tlsTrustStorePath=/var/private/tls/client.truststore.jks
   tlsTrustStorePassword=clientpw

   ```

1. for the Java client

   ```java

   import org.apache.pulsar.client.api.PulsarClient;

   PulsarClient client = PulsarClient.builder()
       .serviceUrl("pulsar+ssl://broker.example.com:6651/")
       .enableTls(true)
       .useKeyStoreTls(true)
       .tlsTrustStorePath("/var/private/tls/client.truststore.jks")
       .tlsTrustStorePassword("clientpw")
       .allowTlsInsecureConnection(false)
       .build();

   ```

1. for the Java admin client

   ```java

   PulsarAdmin admin = PulsarAdmin.builder().serviceHttpUrl("https://broker.example.com:8443")
       .useKeyStoreTls(true)
       .tlsTrustStorePath("/var/private/tls/client.truststore.jks")
       .tlsTrustStorePassword("clientpw")
       .allowTlsInsecureConnection(false)
       .build();

   ```

## TLS authentication with KeyStore configuration

This is similar to [TLS authentication with the PEM type](security-tls-authentication.md).

### Broker authentication configuration

`broker.conf`

```properties

# Configuration to enable authentication
authenticationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderTls

# this should be the CN of one of the client keystores.
superUserRoles=admin

# Enable KeyStore type
tlsEnabledWithKeyStore=true
requireTrustedClientCertOnConnect=true

# key store
tlsKeyStoreType=JKS
tlsKeyStore=/var/private/tls/broker.keystore.jks
tlsKeyStorePassword=brokerpw

# trust store
tlsTrustStoreType=JKS
tlsTrustStore=/var/private/tls/broker.truststore.jks
tlsTrustStorePassword=brokerpw

# internal client/admin-client config
brokerClientTlsEnabled=true
brokerClientTlsEnabledWithKeyStore=true
brokerClientTlsTrustStoreType=JKS
brokerClientTlsTrustStore=/var/private/tls/client.truststore.jks
brokerClientTlsTrustStorePassword=clientpw
# internal auth config
brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationKeyStoreTls
brokerClientAuthenticationParameters={"keyStoreType":"JKS","keyStorePath":"/var/private/tls/client.keystore.jks","keyStorePassword":"clientpw"}
# currently the WebSocket service does not support the KeyStore type
webSocketServiceEnabled=false

```
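The `superUserRoles=admin` entry above implies that some client presents a certificate whose CN is `admin`. As a sketch of how such a client keystore could be produced (the alias, validity, and file name here are illustrative placeholders, and the resulting certificate still needs to be signed by your CA as described earlier):

```shell

keytool -keystore client.keystore.jks -alias client -validity 365 -genkeypair -keyalg RSA -dname "CN=admin"

```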
### Client authentication configuration

In addition to the TLS encryption configuration, the main work is configuring a KeyStore for the client that contains a certificate with a valid CN as the client role.

For example:

1. for [Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-cli-tools#pulsar-admin), [`pulsar-perf`](reference-cli-tools#pulsar-perf), and [`pulsar-client`](reference-cli-tools#pulsar-client), use the `conf/client.conf` config file in a Pulsar installation.

   ```properties

   webServiceUrl=https://broker.example.com:8443/
   brokerServiceUrl=pulsar+ssl://broker.example.com:6651/
   useKeyStoreTls=true
   tlsTrustStoreType=JKS
   tlsTrustStorePath=/var/private/tls/client.truststore.jks
   tlsTrustStorePassword=clientpw
   authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationKeyStoreTls
   authParams={"keyStoreType":"JKS","keyStorePath":"/path/to/keystorefile","keyStorePassword":"keystorepw"}

   ```

1. for the Java client

   ```java

   import org.apache.pulsar.client.api.PulsarClient;

   PulsarClient client = PulsarClient.builder()
       .serviceUrl("pulsar+ssl://broker.example.com:6651/")
       .enableTls(true)
       .useKeyStoreTls(true)
       .tlsTrustStorePath("/var/private/tls/client.truststore.jks")
       .tlsTrustStorePassword("clientpw")
       .allowTlsInsecureConnection(false)
       .authentication(
           "org.apache.pulsar.client.impl.auth.AuthenticationKeyStoreTls",
           "keyStoreType:JKS,keyStorePath:/var/private/tls/client.keystore.jks,keyStorePassword:clientpw")
       .build();

   ```

1. for the Java admin client

   ```java

   PulsarAdmin admin = PulsarAdmin.builder().serviceHttpUrl("https://broker.example.com:8443")
       .useKeyStoreTls(true)
       .tlsTrustStorePath("/var/private/tls/client.truststore.jks")
       .tlsTrustStorePassword("clientpw")
       .allowTlsInsecureConnection(false)
       .authentication(
           "org.apache.pulsar.client.impl.auth.AuthenticationKeyStoreTls",
           "keyStoreType:JKS,keyStorePath:/var/private/tls/client.keystore.jks,keyStorePassword:clientpw")
       .build();

   ```

## Enabling TLS Logging

You can enable TLS debug logging at the JVM level by starting the brokers and/or clients with the `javax.net.debug` system property. For example:

```shell

-Djavax.net.debug=all

```

You can find more details on [debugging SSL/TLS connections](http://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/ReadDebug.html) in the Oracle documentation.

diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/security-tls-transport.md b/site2/website/versioned_docs/version-2.8.2-deprecated/security-tls-transport.md
deleted file mode 100644
index 2cad17a78c3507..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/security-tls-transport.md
+++ /dev/null
@@ -1,295 +0,0 @@
---
id: security-tls-transport
title: Transport Encryption using TLS
sidebar_label: "Transport Encryption using TLS"
original_id: security-tls-transport
---

## TLS overview

By default, Apache Pulsar clients communicate with the Apache Pulsar service in plain text. This means that all data is sent in the clear. You can use TLS to encrypt this traffic and protect it from the snooping of a man-in-the-middle attacker.

You can also configure TLS for both encryption and authentication. Use this guide to configure just TLS transport encryption, and refer to [here](security-tls-authentication.md) for TLS authentication configuration. Alternatively, you can use [another authentication mechanism](security-athenz.md) on top of TLS transport encryption.

> Note that enabling TLS may impact performance due to encryption overhead.

## TLS concepts

TLS is a form of [public key cryptography](https://en.wikipedia.org/wiki/Public-key_cryptography). Encryption is performed using key pairs consisting of a public key and a private key. The public key encrypts the messages and the private key decrypts the messages.

To use TLS transport encryption, you need two kinds of key pairs, **server key pairs** and a **certificate authority**.

You can use a third kind of key pair, **client key pairs**, for [client authentication](security-tls-authentication.md).

You should store the **certificate authority** private key in a very secure location (a fully encrypted, disconnected, air-gapped computer). As for the certificate authority public key, the **trust cert**, you can share it freely.

For both client and server key pairs, the administrator first generates a private key and a certificate request, then uses the certificate authority private key to sign the certificate request, and finally generates a certificate. This certificate is the public key for the server/client key pair.

For TLS transport encryption, the clients can use the **trust cert** to verify that the server has a key pair that the certificate authority signed when the clients are talking to the server. A man-in-the-middle attacker does not have access to the certificate authority, so they cannot create a server with such a key pair.

For TLS authentication, the server uses the **trust cert** to verify that the client has a key pair that the certificate authority signed. The common name of the **client cert** is then used as the client's role token (see [Overview](security-overview.md)).

`Bouncy Castle Provider` provides cipher suites and algorithms in Pulsar. If you need the [FIPS](https://www.bouncycastle.org/fips_faq.html) version of `Bouncy Castle Provider`, please refer to the [Bouncy Castle page](security-bouncy-castle.md).

## Create TLS certificates

Creating TLS certificates for Pulsar involves creating a [certificate authority](#certificate-authority) (CA), [server certificate](#server-certificate), and [client certificate](#client-certificate).

Follow the guide below to set up a certificate authority. You can also refer to plenty of resources on the internet for more details. We recommend [this guide](https://jamielinux.com/docs/openssl-certificate-authority/index.html) for your detailed reference.

### Certificate authority

1. Create the certificate for the CA. You can use the CA to sign both the broker and client certificates. This ensures that each party will trust the others. You should store the CA in a very secure location (ideally completely disconnected from networks, air-gapped, and fully encrypted).

2. Enter the following commands to create a directory for your CA, and place [this openssl configuration file](https://github.com/apache/pulsar/tree/master/site2/website/static/examples/openssl.cnf) in the directory. You may want to modify the default answers for company name and department in the configuration file. Export the location of the CA directory to the environment variable CA_HOME. The configuration file uses this environment variable to find the rest of the files and directories that the CA needs.

```bash

mkdir my-ca
cd my-ca
wget https://raw.githubusercontent.com/apache/pulsar-site/main/site2/website/static/examples/openssl.cnf
export CA_HOME=$(pwd)

```

3. Enter the commands below to create the necessary directories, keys, and certs.

```bash

mkdir certs crl newcerts private
chmod 700 private/
touch index.txt
echo 1000 > serial
openssl genrsa -aes256 -out private/ca.key.pem 4096
chmod 400 private/ca.key.pem
openssl req -config openssl.cnf -key private/ca.key.pem \
    -new -x509 -days 7300 -sha256 -extensions v3_ca \
    -out certs/ca.cert.pem
chmod 444 certs/ca.cert.pem

```

4. After you answer the question prompts, CA-related files are stored in the `./my-ca` directory. Within that directory:

* `certs/ca.cert.pem` is the public certificate. This public certificate is meant to be distributed to all parties involved.
* `private/ca.key.pem` is the private key. You only need it when signing a new certificate for either brokers or clients, and you must safeguard this private key.

### Server certificate

Once you have created a CA certificate, you can create certificate requests and sign them with the CA.

The following commands ask you a few questions and then create the certificates. When you are asked for the common name, you should match the hostname of the broker. You can also use a wildcard to match a group of broker hostnames, for example, `*.broker.usw.example.com`. This ensures that multiple machines can reuse the same certificate.

:::tip

Sometimes matching the hostname is not possible or makes no sense, such as when you create the brokers with random hostnames, or you plan to connect to the hosts via their IP. In these cases, you should configure the client to disable TLS hostname verification. For more details, see [the hostname verification section in client configuration](#hostname-verification).

:::

1. Enter the command below to generate the key.

```bash

openssl genrsa -out broker.key.pem 2048

```

The broker expects the key to be in [PKCS 8](https://en.wikipedia.org/wiki/PKCS_8) format, so enter the following command to convert it.

```bash

openssl pkcs8 -topk8 -inform PEM -outform PEM \
    -in broker.key.pem -out broker.key-pk8.pem -nocrypt

```

2. Enter the following command to generate the certificate request.

```bash

openssl req -config openssl.cnf \
    -key broker.key.pem -new -sha256 -out broker.csr.pem

```

3. Sign it with the certificate authority by entering the command below.

```bash

openssl ca -config openssl.cnf -extensions server_cert \
    -days 1000 -notext -md sha256 \
    -in broker.csr.pem -out broker.cert.pem

```

At this point, you have a cert, `broker.cert.pem`, and a key, `broker.key-pk8.pem`, which you can use along with `ca.cert.pem` to configure TLS transport encryption for your broker and proxy nodes.
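Before wiring these files into the broker configuration, it can be worth verifying that the signed certificate actually chains back to your CA. A minimal check, assuming the relative paths used in the steps above:

```bash

openssl verify -CAfile certs/ca.cert.pem broker.cert.pem

```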
## Configure broker

To configure a Pulsar [broker](reference-terminology.md#broker) to use TLS transport encryption, you need to make some changes to `broker.conf`, which is located in the `conf` directory of your [Pulsar installation](getting-started-standalone.md).

Add these values to the configuration file (substituting the appropriate certificate paths where necessary):

```properties

tlsEnabled=true
tlsRequireTrustedClientCertOnConnect=true
tlsCertificateFilePath=/path/to/broker.cert.pem
tlsKeyFilePath=/path/to/broker.key-pk8.pem
tlsTrustCertsFilePath=/path/to/ca.cert.pem

```

> You can find a full list of parameters available in the `conf/broker.conf` file, as well as the default values for those parameters, in [Broker Configuration](reference-configuration.md#broker).

### TLS Protocol Version and Cipher

You can configure the broker (and proxy) to require specific TLS protocol versions and ciphers for TLS negotiation. You can use the TLS protocol versions and ciphers to stop clients from requesting downgraded TLS protocol versions or ciphers that may have weaknesses.

Both the TLS protocol versions and cipher properties can take multiple values, separated by commas. The possible values for protocol version and ciphers depend on the TLS provider that you are using. Pulsar uses OpenSSL if it is available; otherwise, it falls back to the JDK implementation.

```properties

tlsProtocols=TLSv1.3,TLSv1.2
tlsCiphers=TLS_DH_RSA_WITH_AES_256_GCM_SHA384,TLS_DH_RSA_WITH_AES_256_CBC_SHA

```

OpenSSL currently supports ```TLSv1.1```, ```TLSv1.2``` and ```TLSv1.3``` for the protocol version. You can acquire a list of supported ciphers from the openssl ciphers command, i.e. ```openssl ciphers -tls1_3```.

For JDK 11, you can obtain a list of supported values from the documentation:
- [TLS protocol](https://docs.oracle.com/en/java/javase/11/security/oracle-providers.html#GUID-7093246A-31A3-4304-AC5F-5FB6400405E2__SUNJSSEPROVIDERPROTOCOLPARAMETERS-BBF75009)
- [Ciphers](https://docs.oracle.com/en/java/javase/11/security/oracle-providers.html#GUID-7093246A-31A3-4304-AC5F-5FB6400405E2__SUNJSSE_CIPHER_SUITES)

## Proxy Configuration

Proxies need to configure TLS in two directions, for clients connecting to the proxy, and for the proxy connecting to brokers.

```properties

# For clients connecting to the proxy
tlsEnabledInProxy=true
tlsCertificateFilePath=/path/to/broker.cert.pem
tlsKeyFilePath=/path/to/broker.key-pk8.pem
tlsTrustCertsFilePath=/path/to/ca.cert.pem

# For the proxy to connect to brokers
tlsEnabledWithBroker=true
brokerClientTrustCertsFilePath=/path/to/ca.cert.pem

```

## Client configuration

When you enable TLS transport encryption, you need to configure the client to use ```https://``` and port 8443 for the web service URL, and ```pulsar+ssl://``` and port 6651 for the broker service URL.

As the server certificate that you generated above does not belong to any of the default trust chains, you also need to either specify the path to the **trust cert** (recommended), or tell the client to allow untrusted server certs.

### Hostname verification

Hostname verification is a TLS security feature whereby a client can refuse to connect to a server if the "CommonName" does not match the hostname to which the client is connecting. By default, Pulsar clients disable hostname verification, as it requires that each broker has a DNS record and a unique cert.

Moreover, as the administrator has full control of the certificate authority, a bad actor is unlikely to be able to pull off a man-in-the-middle attack. "allowInsecureConnection" allows the client to connect to servers whose cert has not been signed by an approved CA.
The client disables "allowInsecureConnection" by default, and you should always disable "allowInsecureConnection" in production environments. As long as you disable "allowInsecureConnection", a man-in-the-middle attack requires that the attacker has access to the CA.

One scenario where you may want to enable hostname verification is where you have multiple proxy nodes behind a VIP, and the VIP has a DNS record, for example, pulsar.mycompany.com. In this case, you can generate a TLS cert with pulsar.mycompany.com as the "CommonName," and then enable hostname verification on the client.

The examples below show that hostname verification is disabled for the CLI tools/Java/Python/C++/Node.js/C# clients by default.

### CLI tools

[Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-cli-tools.md#pulsar-admin), [`pulsar-perf`](reference-cli-tools.md#pulsar-perf), and [`pulsar-client`](reference-cli-tools.md#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation.

You need to add the following parameters to that file to use TLS transport with the CLI tools of Pulsar:

```properties

webServiceUrl=https://broker.example.com:8443/
brokerServiceUrl=pulsar+ssl://broker.example.com:6651/
useTls=true
tlsAllowInsecureConnection=false
tlsTrustCertsFilePath=/path/to/ca.cert.pem
tlsEnableHostnameVerification=false

```

#### Java client

```java

import org.apache.pulsar.client.api.PulsarClient;

PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar+ssl://broker.example.com:6651/")
    .enableTls(true)
    .tlsTrustCertsFilePath("/path/to/ca.cert.pem")
    .enableTlsHostnameVerification(false) // false by default, in any case
    .allowTlsInsecureConnection(false) // false by default, in any case
    .build();

```

#### Python client

```python

from pulsar import Client

client = Client("pulsar+ssl://broker.example.com:6651/",
                tls_hostname_verification=False,
                tls_trust_certs_file_path="/path/to/ca.cert.pem",
                tls_allow_insecure_connection=False)  # defaults to false from v2.2.0 onwards

```

#### C++ client

```c++

#include <pulsar/Client.h>

ClientConfiguration config = ClientConfiguration();
config.setUseTls(true);  // shouldn't be needed soon
config.setTlsTrustCertsFilePath(caPath);
config.setTlsAllowInsecureConnection(false);
config.setAuth(pulsar::AuthTls::create(clientPublicKeyPath, clientPrivateKeyPath));
config.setValidateHostName(false);

```

#### Node.js client

```JavaScript

const Pulsar = require('pulsar-client');

(async () => {
  const client = new Pulsar.Client({
    serviceUrl: 'pulsar+ssl://broker.example.com:6651/',
    tlsTrustCertsFilePath: '/path/to/ca.cert.pem',
    useTls: true,
    tlsValidateHostname: false,
    tlsAllowInsecureConnection: false,
  });
})();

```

#### C# client

```c#

var certificate = new X509Certificate2("ca.cert.pem");
var client = PulsarClient.Builder()
    .TrustedCertificateAuthority(certificate) // If the CA is not trusted on the host, you can add it explicitly.
    .VerifyCertificateAuthority(true) // Default is 'true'
    .VerifyCertificateName(false) // Default is 'false'
    .Build();

```

> Note that `VerifyCertificateName` refers to the configuration of hostname verification in the C# client.
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/security-token-admin.md b/site2/website/versioned_docs/version-2.8.2-deprecated/security-token-admin.md
deleted file mode 100644
index a265f6320d28fe..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/security-token-admin.md
+++ /dev/null
@@ -1,183 +0,0 @@
---
id: security-token-admin
title: Token authentication admin
sidebar_label: "Token authentication admin"
original_id: security-token-admin
---

## Token Authentication Overview

Pulsar supports authenticating clients using security tokens that are based on [JSON Web Tokens](https://jwt.io/introduction/) ([RFC-7519](https://tools.ietf.org/html/rfc7519)).

Tokens are used to identify a Pulsar client and associate it with some "principal" (or "role"), which is then granted permissions to perform some actions (e.g., publish to or consume from a topic).

A user is typically given a token string by an administrator (or some automated service).

The compact representation of a signed JWT is a string that looks like:

```

 eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY

```

The application specifies the token when creating the client instance. An alternative is to pass a "token supplier", that is, a function that returns the token whenever the client library needs one.

> #### Always use TLS transport encryption
> Sending a token is equivalent to sending a password over the wire. It is strongly recommended to always use TLS encryption when talking to the Pulsar service. See [Transport Encryption using TLS](security-tls-transport.md).

## Secret vs Public/Private keys

JWT supports two different kinds of keys in order to generate and validate the tokens:

 * Symmetric: there is a single ***Secret*** key that is used both to generate and validate tokens
 * Asymmetric: there is a pair of keys
   - the ***Private*** key is used to generate tokens
   - the ***Public*** key is used to validate tokens

### Secret key

When using a secret key, the administrator creates the key and uses it to generate the client tokens. The key is also configured on the brokers to allow them to validate the clients.

#### Creating a secret key

> The output file is generated in the root of your Pulsar installation directory. You can also provide an absolute path for the output file.

```shell

$ bin/pulsar tokens create-secret-key --output my-secret.key

```

To generate a base64-encoded secret key:

```shell

$ bin/pulsar tokens create-secret-key --output /opt/my-secret.key --base64

```

### Public/Private keys

With public/private keys, we need to create a pair of keys. Pulsar supports all algorithms supported by the Java JWT library shown [here](https://github.com/jwtk/jjwt#signature-algorithms-keys).

#### Creating a key pair

> The output files are generated in the root of your Pulsar installation directory. You can also provide absolute paths for the output files.

```shell

$ bin/pulsar tokens create-key-pair --output-private-key my-private.key --output-public-key my-public.key

```

 * `my-private.key` is stored in a safe location and only used by the administrator to generate new tokens.
 * `my-public.key` is distributed to all Pulsar brokers. This file can be publicly shared without any security concern.

## Generating tokens

A token is the credential associated with a user. The association is done through the "principal", or "role".
In the case of JWT, this field is typically referred to as the **subject**, though it is exactly the same concept.

The generated token is then required to have a **subject** field set.

```shell

$ bin/pulsar tokens create --secret-key file:///path/to/my-secret.key \
            --subject test-user

```

This command prints the token string on stdout.

Similarly, one can create a token by passing the "private" key:

```shell

$ bin/pulsar tokens create --private-key file:///path/to/my-private.key \
            --subject test-user

```

Finally, a token can also be created with a pre-defined TTL. After that time, the token is automatically invalidated.

```shell

$ bin/pulsar tokens create --secret-key file:///path/to/my-secret.key \
            --subject test-user \
            --expiry-time 1y

```

## Authorization

The token itself does not have any permissions associated with it. These are determined by the authorization engine. Once the token is created, one can grant permissions for this token to do certain actions. For example:

```shell

$ bin/pulsar-admin namespaces grant-permission my-tenant/my-namespace \
            --role test-user \
            --actions produce,consume

```

## Enabling Token Authentication ...

### ... on Brokers

To configure brokers to authenticate clients, put the following in `broker.conf`:

```properties

# Configuration to enable authentication and authorization
authenticationEnabled=true
authorizationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken

# If using secret key (Note: key files must be DER-encoded)
tokenSecretKey=file:///path/to/secret.key
# The key can also be passed inline:
# tokenSecretKey=data:;base64,FLFyW0oLJ2Fi22KKCm21J18mbAdztfSHN/lAT5ucEKU=

# If using public/private (Note: key files must be DER-encoded)
# tokenPublicKey=file:///path/to/public.key

```

### ... on Proxies

To configure proxies to authenticate clients, put the following in `proxy.conf`.

The proxy has its own token that it uses when talking to brokers. The role token for this key pair should be configured in the ``proxyRoles`` of the brokers. See the [authorization guide](security-authorization.md) for more details.

```properties

# For clients connecting to the proxy
authenticationEnabled=true
authorizationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken
tokenSecretKey=file:///path/to/secret.key

# For the proxy to connect to brokers
brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken
brokerClientAuthenticationParameters={"token":"eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0LXVzZXIifQ.9OHgE9ZUDeBTZs7nSMEFIuGNEX18FLR3qvy8mqxSxXw"}
# Or, alternatively, read token from file
# brokerClientAuthenticationParameters=file:///path/to/proxy-token.txt

```
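On the client side, the overview above mentions passing either a token string or a token supplier. A minimal sketch in Java (the service URL and token value are placeholders, and `fetchTokenFromVault()` stands in for whatever retrieval mechanism you use):

```java

import org.apache.pulsar.client.api.AuthenticationFactory;
import org.apache.pulsar.client.api.PulsarClient;

PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar+ssl://broker.example.com:6651/")
        // Pass a literal token string...
        .authentication(AuthenticationFactory.token("eyJhbGciOiJIUzI1NiJ9..."))
        // ...or, alternatively, a supplier that is invoked whenever a token is needed:
        // .authentication(AuthenticationFactory.token(() -> fetchTokenFromVault()))
        .build();

```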
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/sql-deployment-configurations.md b/site2/website/versioned_docs/version-2.8.2-deprecated/sql-deployment-configurations.md
deleted file mode 100644
index 02d8bc78f6cb9d..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/sql-deployment-configurations.md
+++ /dev/null
@@ -1,199 +0,0 @@
---
id: sql-deployment-configurations
title: Pulsar SQL configuration and deployment
sidebar_label: "Configuration and deployment"
original_id: sql-deployment-configurations
---

You can configure the Presto Pulsar connector and deploy a cluster with the following instructions.

## Configure Presto Pulsar Connector

You can configure the Presto Pulsar connector in the `${project.root}/conf/presto/catalog/pulsar.properties` properties file. The configuration for the connector and the default values are as follows.

```properties

# name of the connector to be displayed in the catalog
connector.name=pulsar

# the url of Pulsar broker service
pulsar.web-service-url=http://localhost:8080

# URI of Zookeeper cluster
pulsar.zookeeper-uri=localhost:2181

# minimum number of entries to read at a single time
pulsar.entry-read-batch-size=100

# default number of splits to use per query
pulsar.target-num-splits=4

```

You can connect Presto to a Pulsar cluster with multiple hosts. To configure multiple hosts for brokers, add multiple URLs to `pulsar.web-service-url`. To configure multiple hosts for ZooKeeper, add multiple URIs to `pulsar.zookeeper-uri`. The following is an example.

```

pulsar.web-service-url=http://localhost:8080,localhost:8081,localhost:8082
pulsar.zookeeper-uri=localhost1,localhost2:2181

```

**Note: by default, Pulsar SQL does not get the last message in a topic**. This is by design and controlled by settings. By default, the BookKeeper LAC only advances when subsequent entries are added. If there is no subsequent entry added, the last written entry is not visible to readers until the ledger is closed. This is not a problem for Pulsar, which uses managed ledgers, but Pulsar SQL reads directly from BookKeeper ledgers.

If you want to get the last message in a topic, set the following configurations:

1. For the broker configuration, set `bookkeeperExplicitLacIntervalInMills` > 0 in `broker.conf` or `standalone.conf`.

2. For the Presto configuration, set `pulsar.bookkeeper-explicit-interval` > 0 and `pulsar.bookkeeper-use-v2-protocol=false`.

However, using the BookKeeper V3 protocol introduces additional GC overhead to BookKeeper, as it uses Protobuf.

## Query data from existing Presto clusters

If you already have a Presto cluster, you can copy the Presto Pulsar connector plugin to your existing cluster. Download the archived plugin package with the following command.

```bash

$ wget pulsar:binary_release_url

```
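After downloading, the plugin needs to land in Presto's plugin directory. A sketch of the remaining steps, assuming the archive name shown here, that it unpacks to a directory containing the connector under `pulsar/`, and that your Presto installation lives at `/opt/presto` (all three are assumptions; adjust to your actual layout):

```bash

# Unpack the downloaded archive, copy the connector into Presto's plugin directory,
# then restart the Presto workers so the new plugin is picked up.
tar -xzvf pulsar-presto-connector.tar.gz
cp -r pulsar /opt/presto/plugin/pulsar

```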
## Deploy a new cluster

Since Pulsar SQL is powered by [Trino (formerly Presto SQL)](https://trino.io), the configuration for deployment is the same as for the Pulsar SQL worker.

:::note

For how to set up a standalone single-node environment, refer to [Query data](sql-getting-started.md).

:::

You can use the same CLI args as the Presto launcher.

```bash

$ ./bin/pulsar sql-worker --help
Usage: launcher [options] command

Commands: run, start, stop, restart, kill, status

Options:
  -h, --help            show this help message and exit
  -v, --verbose         Run verbosely
  --etc-dir=DIR         Defaults to INSTALL_PATH/etc
  --launcher-config=FILE
                        Defaults to INSTALL_PATH/bin/launcher.properties
  --node-config=FILE    Defaults to ETC_DIR/node.properties
  --jvm-config=FILE     Defaults to ETC_DIR/jvm.config
  --config=FILE         Defaults to ETC_DIR/config.properties
  --log-levels-file=FILE
                        Defaults to ETC_DIR/log.properties
  --data-dir=DIR        Defaults to INSTALL_PATH
  --pid-file=FILE       Defaults to DATA_DIR/var/run/launcher.pid
  --launcher-log-file=FILE
                        Defaults to DATA_DIR/var/log/launcher.log (only in
                        daemon mode)
  --server-log-file=FILE
                        Defaults to DATA_DIR/var/log/server.log (only in
                        daemon mode)
  -D NAME=VALUE         Set a Java system property

```

The default configuration for the cluster is located in `${project.root}/conf/presto`. You can customize your deployment by modifying the default configuration.

You can set the worker to read from a different configuration directory, or set a different directory to write data.

```bash

$ ./bin/pulsar sql-worker run --etc-dir /tmp/incubator-pulsar/conf/presto --data-dir /tmp/presto-1

```

You can start the worker as a daemon process.

```bash

$ ./bin/pulsar sql-worker start

```

### Deploy a cluster on multiple nodes

You can deploy a Pulsar SQL cluster or Presto cluster on multiple nodes. The following example shows how to deploy a cluster on three nodes.

1. Copy the Pulsar binary distribution to three nodes.

The first node runs as the Presto coordinator. The minimal configuration requirement in the `${project.root}/conf/presto/config.properties` file is as follows.

```properties

coordinator=true
node-scheduler.include-coordinator=true
http-server.http.port=8080
query.max-memory=50GB
query.max-memory-per-node=1GB
discovery-server.enabled=true
discovery.uri=<coordinator-url>

```

The other two nodes serve as worker nodes; you can use the following configuration for them.

```properties

coordinator=false
http-server.http.port=8080
query.max-memory=50GB
query.max-memory-per-node=1GB
discovery.uri=<coordinator-url>

```

2. Modify the `pulsar.web-service-url` and `pulsar.zookeeper-uri` configurations in the `${project.root}/conf/presto/catalog/pulsar.properties` file accordingly for the three nodes.

3. Start the coordinator node.

```

$ ./bin/pulsar sql-worker run

```

4. Start the worker nodes.

```

$ ./bin/pulsar sql-worker run

```

5. Start the SQL CLI and check the status of your cluster.

```bash

$ ./bin/pulsar sql --server <coordinator_url>

```

6. Check the status of your nodes.

```bash

presto> SELECT * FROM system.runtime.nodes;
 node_id |        http_uri         | node_version | coordinator | state
---------+-------------------------+--------------+-------------+--------
 1       | http://192.168.2.1:8081 | testversion  | true        | active
 3       | http://192.168.2.2:8081 | testversion  | false       | active
 2       | http://192.168.2.3:8081 | testversion  | false       | active

```

For more information about deployment in Presto, refer to [Presto deployment](https://trino.io/docs/current/installation/deployment.html).

:::note

The broker does not advance the LAC, so when Pulsar SQL bypasses the broker to query data, it can only read entries up to the LAC that all the bookies have learned.
You can enable periodic LAC writes on the broker by setting "bookkeeperExplicitLacIntervalInMills" in `broker.conf`.

:::

diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/sql-getting-started.md b/site2/website/versioned_docs/version-2.8.2-deprecated/sql-getting-started.md
deleted file mode 100644
index 8a5cd7199b365a..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/sql-getting-started.md
+++ /dev/null
@@ -1,187 +0,0 @@
---
id: sql-getting-started
title: Query data with Pulsar SQL
sidebar_label: "Query data"
original_id: sql-getting-started
---

Before querying data in Pulsar, you need to install Pulsar and the built-in connectors.

## Requirements

1. Install [Pulsar](getting-started-standalone.md#install-pulsar-standalone).
2. Install Pulsar [built-in connectors](getting-started-standalone.md#install-builtin-connectors-optional).

## Query data in Pulsar

To query data in Pulsar with Pulsar SQL, complete the following steps.

1. Start a Pulsar standalone cluster.

```bash

./bin/pulsar standalone

```

2. Start a Pulsar SQL worker.

```bash

./bin/pulsar sql-worker run

```

3. After initializing the Pulsar standalone cluster and the SQL worker, run the SQL CLI.

```bash

./bin/pulsar sql

```

4. Test with SQL commands.

```bash

presto> show catalogs;
 Catalog
---------
 pulsar
 system
(2 rows)

Query 20180829_211752_00004_7qpwh, FINISHED, 1 node
Splits: 19 total, 19 done (100.00%)
0:00 [0 rows, 0B] [0 rows/s, 0B/s]


presto> show schemas in pulsar;
        Schema
-----------------------
 information_schema
 public/default
 public/functions
 sample/standalone/ns1
(4 rows)

Query 20180829_211818_00005_7qpwh, FINISHED, 1 node
Splits: 19 total, 19 done (100.00%)
0:00 [4 rows, 89B] [21 rows/s, 471B/s]


presto> show tables in pulsar."public/default";
 Table
-------
(0 rows)

Query 20180829_211839_00006_7qpwh, FINISHED, 1 node
Splits: 19 total, 19 done (100.00%)
0:00 [0 rows, 0B] [0 rows/s, 0B/s]

```

Since there is no data in Pulsar yet, no records are returned.

5. Start the built-in connector _DataGeneratorSource_ and ingest some mock data.

```bash

./bin/pulsar-admin sources create --name generator --destinationTopicName generator_test --source-type data-generator

```

Then you can query a topic in the namespace "public/default".

```bash

presto> show tables in pulsar."public/default";
     Table
----------------
 generator_test
(1 row)

Query 20180829_213202_00000_csyeu, FINISHED, 1 node
Splits: 19 total, 19 done (100.00%)
0:02 [1 rows, 38B] [0 rows/s, 17B/s]

```

You can now query the data within the topic "generator_test".

```bash

presto> select * from pulsar."public/default".generator_test;

 firstname | middlename | lastname  |            email             | username  | password | telephonenumber | age |             companyemail            | nationalidentitycardnumber |
-----------+------------+-----------+------------------------------+-----------+----------+-----------------+-----+-------------------------------------+----------------------------+
 Genesis   | Katherine  | Wiley     | genesis.wiley@gmail.com      | genesisw  | y9D2dtU3 | 959-197-1860    |  71 | genesis.wiley@interdemconsulting.eu | 880-58-9247                |
 Brayden   |            | Stanton   | brayden.stanton@yahoo.com    | braydens  | ZnjmhXik | 220-027-867     |  81 | brayden.stanton@supermemo.eu        | 604-60-7069                |
 Benjamin  | Julian     | Velasquez | benjamin.velasquez@yahoo.com | benjaminv | 8Bc7m3eb | 298-377-0062    |  21 | benjamin.velasquez@hostesltd.biz    | 213-32-5882                |
 Michael   | Thomas     | Donovan   | donovan@mail.com             | michaeld  | OqBm9MLs | 078-134-4685    |  55 | michael.donovan@memortech.eu        | 443-30-3442                |
 Brooklyn  | Avery      | Roach     | brooklynroach@yahoo.com      | broach    | IxtBLafO | 387-786-2998    |  68 | brooklyn.roach@warst.biz            | 085-88-3973                |
 Skylar    |            | Bradshaw  | skylarbradshaw@yahoo.com     | skylarb   | p6eC6cKy | 210-872-608     |  96 | skylar.bradshaw@flyhigh.eu          | 453-46-0334                |
.
.
.

```

You can query the mock data.

## Query your own data

If you want to query your own data, you need to ingest your own data first. You can write a simple producer that writes custom-defined data to Pulsar. The following is an example.

```java

import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.impl.schema.AvroSchema;

public class TestProducer {

    public static class Foo {
        private int field1 = 1;
        private String field2;
        private long field3;

        public Foo() {
        }

        public int getField1() {
            return field1;
        }

        public void setField1(int field1) {
            this.field1 = field1;
        }

        public String getField2() {
            return field2;
        }

        public void setField2(String field2) {
            this.field2 = field2;
        }

        public long getField3() {
            return field3;
        }

        public void setField3(long field3) {
            this.field3 = field3;
        }
    }

    public static void main(String[] args) throws Exception {
        PulsarClient pulsarClient = PulsarClient.builder().serviceUrl("pulsar://localhost:6650").build();
        Producer<Foo> producer = pulsarClient.newProducer(AvroSchema.of(Foo.class)).topic("test_topic").create();

        for (int i = 0; i < 1000; i++) {
            Foo foo = new Foo();
            foo.setField1(i);
            foo.setField2("foo" + i);
            foo.setField3(System.currentTimeMillis());
            producer.newMessage().value(foo).send();
        }
        producer.close();
        pulsarClient.close();
    }
}

```
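Once the producer has run, the new topic should show up and be queryable in the same way as the mock data above; something along these lines (a sketch, assuming the standalone defaults used throughout this page):

```bash

presto> select field1, field2, field3 from pulsar."public/default".test_topic limit 3;

```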
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/sql-overview.md b/site2/website/versioned_docs/version-2.8.2-deprecated/sql-overview.md
deleted file mode 100644
index 0235d0256c5f19..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/sql-overview.md
+++ /dev/null
@@ -1,18 +0,0 @@
---
id: sql-overview
title: Pulsar SQL Overview
sidebar_label: "Overview"
original_id: sql-overview
---

Apache Pulsar is used to store streams of event data, and the event data is structured with predefined fields. With the implementation of the [Schema Registry](schema-get-started.md), you can store structured data in Pulsar and query the data by using [Trino (formerly Presto SQL)](https://trino.io/).

As the core of Pulsar SQL, the Presto Pulsar connector enables Presto workers within a Presto cluster to query data from Pulsar.

![The Pulsar consumer and reader interfaces](/assets/pulsar-sql-arch-2.png)

The query performance is efficient and highly scalable because Pulsar adopts a [two-level segment-based architecture](concepts-architecture-overview.md#apache-bookkeeper).

Topics in Pulsar are stored as segments in [Apache BookKeeper](https://bookkeeper.apache.org/). Each topic segment is replicated to some BookKeeper nodes, which enables concurrent reads and high read throughput. You can configure the number of BookKeeper nodes; the default number is `3`. In the Presto Pulsar connector, data is read directly from BookKeeper, so Presto workers can read concurrently from a horizontally scalable number of BookKeeper nodes.

![The Pulsar consumer and reader interfaces](/assets/pulsar-sql-arch-1.png)

diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/sql-rest-api.md b/site2/website/versioned_docs/version-2.8.2-deprecated/sql-rest-api.md
deleted file mode 100644
index c92fd62f7d8703..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/sql-rest-api.md
+++ /dev/null
@@ -1,192 +0,0 @@
---
id: sql-rest-api
title: Pulsar SQL REST APIs
sidebar_label: "REST APIs"
original_id: sql-rest-api
---

This section lists the resources that make up version 1 of the Presto REST API.

## Request for Presto services

All requests for Presto services should use version 1 of the Presto REST API.

To request services, use the explicit URL `http://presto.service:8081/v1`. You need to update `presto.service:8081` with your real Presto address before sending requests.

`POST` requests require the `X-Presto-User` header. If you use authentication, you must use the same `username` that is specified in the authentication configuration. If you do not use authentication, you can specify anything for `username`.

```properties

X-Presto-User: username

```

For more information about headers, refer to [PrestoHeaders](https://github.com/trinodb/trino).

## Schema

You can pass the SQL statement in the HTTP body. All data is received as a JSON document that might contain a `nextUri` link. If the received JSON document contains a `nextUri` link, the request continues with the `nextUri` link until the received data does not contain a `nextUri` link. If no error is returned, the query completes successfully. If an `error` field is displayed in `stats`, it means the query has failed.

The following is an example of `show catalogs`. The query continues until the received JSON document does not contain a `nextUri` link. Since no `error` is displayed in `stats`, it means that the query completes successfully.

```powershell

➜  ~ curl --header "X-Presto-User: test-user" --request POST --data 'show catalogs' http://localhost:8081/v1/statement
{
   "infoUri" : "http://localhost:8081/ui/query.html?20191113_033653_00006_dg6hb",
   "stats" : {
      "queued" : true,
      "nodes" : 0,
      "userTimeMillis" : 0,
      "cpuTimeMillis" : 0,
      "wallTimeMillis" : 0,
      "processedBytes" : 0,
      "processedRows" : 0,
      "runningSplits" : 0,
      "queuedTimeMillis" : 0,
      "queuedSplits" : 0,
      "completedSplits" : 0,
      "totalSplits" : 0,
      "scheduled" : false,
      "peakMemoryBytes" : 0,
      "state" : "QUEUED",
      "elapsedTimeMillis" : 0
   },
   "id" : "20191113_033653_00006_dg6hb",
   "nextUri" : "http://localhost:8081/v1/statement/20191113_033653_00006_dg6hb/1"
}

➜  ~ curl http://localhost:8081/v1/statement/20191113_033653_00006_dg6hb/1
{
   "infoUri" : "http://localhost:8081/ui/query.html?20191113_033653_00006_dg6hb",
   "nextUri" : "http://localhost:8081/v1/statement/20191113_033653_00006_dg6hb/2",
   "id" : "20191113_033653_00006_dg6hb",
   "stats" : {
      "state" : "PLANNING",
      "totalSplits" : 0,
      "queued" : false,
      "userTimeMillis" : 0,
      "completedSplits" : 0,
      "scheduled" : false,
      "wallTimeMillis" : 0,
      "runningSplits" : 0,
      "queuedSplits" : 0,
      "cpuTimeMillis" : 0,
      "processedRows" : 0,
      "processedBytes" : 0,
      "nodes" : 0,
      "queuedTimeMillis" : 1,
      "elapsedTimeMillis" : 2,
      "peakMemoryBytes" : 0
   }
}

➜  ~ curl http://localhost:8081/v1/statement/20191113_033653_00006_dg6hb/2
{
   "id" : "20191113_033653_00006_dg6hb",
   "data" : [
      [
         "pulsar"
      ],
      [
         "system"
      ]
   ],
   "infoUri" : "http://localhost:8081/ui/query.html?20191113_033653_00006_dg6hb",
   "columns" : [
      {
         "typeSignature" : {
            "rawType" : "varchar",
            "arguments" : [
               {
                  "kind" : "LONG_LITERAL",
                  "value" : 6
               }
            ],
            "literalArguments" : [],
            "typeArguments" : []
         },
         "name" : "Catalog",
         "type" : "varchar(6)"
      }
   ],
   "stats" : {
      "wallTimeMillis" : 104,
      "scheduled" : true,
      "userTimeMillis" : 14,
      "progressPercentage" : 100,
      "totalSplits" : 19,
      "nodes" : 1,
      "cpuTimeMillis" : 16,
      "queued" : false,
      "queuedTimeMillis" : 1,
      "state" : "FINISHED",
      "peakMemoryBytes" : 0,
      "elapsedTimeMillis" : 111,
      "processedBytes" : 0,
      "processedRows" : 0,
      "queuedSplits" : 0,
      "rootStage" : {
         "cpuTimeMillis" : 1,
         "runningSplits" : 0,
         "state" : "FINISHED",
         "completedSplits" : 1,
         "subStages" : [
            {
               "cpuTimeMillis" : 14,
               "runningSplits" : 0,
               "state" : "FINISHED",
               "completedSplits" : 17,
               "subStages" : [
                  {
                     "wallTimeMillis" : 7,
                     "subStages" : [],
                     "stageId" : "2",
                     "done" : true,
                     "nodes" : 1,
                     "totalSplits" : 1,
                     "processedBytes" : 22,
                     "processedRows" : 2,
                     "queuedSplits" : 0,
                     "userTimeMillis" : 1,
                     "cpuTimeMillis" : 1,
                     "runningSplits" : 0,
                     "state" : "FINISHED",
                     "completedSplits" : 1
                  }
               ],
               "wallTimeMillis" : 92,
               "nodes" : 1,
               "done" : true,
               "stageId" : "1",
               "userTimeMillis" : 12,
               "processedRows" : 2,
               "processedBytes" : 51,
               "queuedSplits" : 0,
               "totalSplits" : 17
            }
         ],
         "wallTimeMillis" : 5,
         "done" : true,
         "nodes" : 1,
         "stageId" : "0",
         "userTimeMillis" : 1,
         "processedRows" : 2,
         "processedBytes" : 22,
         "totalSplits" : 1,
         "queuedSplits" : 0
      },
      "runningSplits" : 0,
      "completedSplits" : 19
   }
}

```

:::note

Since the response data is not in sync with the query state from the perspective of clients, you cannot rely on the response data to determine whether the query completes.

:::

For more information about the Presto REST API, refer to [Presto HTTP Protocol](https://github.com/prestosql/presto/wiki/HTTP-Protocol).

diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/standalone-docker.md b/site2/website/versioned_docs/version-2.8.2-deprecated/standalone-docker.md
deleted file mode 100644
index 1710ec819d7a4e..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/standalone-docker.md
+++ /dev/null
@@ -1,214 +0,0 @@
---
id: standalone-docker
title: Set up a standalone Pulsar in Docker
sidebar_label: "Run Pulsar in Docker"
original_id: standalone-docker
---

For local development and testing, you can run Pulsar in standalone mode on your own machine within a Docker container.

If you have not installed Docker, download the [Community edition](https://www.docker.com/community-edition) and follow the instructions for your OS.

## Start Pulsar in Docker

* For MacOS, Linux, and Windows:

  ```shell

  $ docker run -it -p 6650:6650 -p 8080:8080 --mount source=pulsardata,target=/pulsar/data --mount source=pulsarconf,target=/pulsar/conf apachepulsar/pulsar:@pulsar:version@ bin/pulsar standalone

  ```

A few things to note about this command:
 * The data, metadata, and configuration are persisted on Docker volumes in order to not start "fresh" every time the container is restarted. For details on the volumes, you can use `docker volume inspect <sourcename>`.
 * For Docker on Windows, make sure to configure it to use Linux containers.

If you start Pulsar successfully, you will see `INFO`-level log messages like this:

```

08:18:30.970 [main] INFO org.apache.pulsar.broker.web.WebService - HTTP Service started at http://0.0.0.0:8080
...
07:53:37.322 [main] INFO org.apache.pulsar.broker.PulsarService - messaging service is ready, bootstrap service port = 8080, broker url= pulsar://localhost:6650, cluster=standalone, configs=org.apache.pulsar.broker.ServiceConfiguration@98b63c1
...

```

:::tip

When you start a local standalone cluster, a `public/default` namespace is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics).

:::

## Use Pulsar in Docker

Pulsar offers client libraries for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python.md) and [C++](client-libraries-cpp.md). If you're running a local standalone cluster, you can use one of these root URLs to interact with your cluster:

* `pulsar://localhost:6650`
* `http://localhost:8080`

The following example guides you through getting started with Pulsar quickly by using the [Python client API](client-libraries-python.md).

Install the Pulsar Python client library directly from [PyPI](https://pypi.org/project/pulsar-client/):

```shell

$ pip install pulsar-client

```

### Consume a message

Create a consumer and subscribe to the topic:

```python

import pulsar

client = pulsar.Client('pulsar://localhost:6650')
consumer = client.subscribe('my-topic',
                            subscription_name='my-sub')

while True:
    msg = consumer.receive()
    print("Received message: '%s'" % msg.data())
    consumer.acknowledge(msg)

client.close()

```

### Produce a message

Now start a producer to send some test messages:

```python

import pulsar

client = pulsar.Client('pulsar://localhost:6650')
producer = client.create_producer('my-topic')

for i in range(10):
    producer.send(('hello-pulsar-%d' % i).encode('utf-8'))

client.close()

```

## Get the topic statistics

In Pulsar, you can use REST, Java, or command-line tools to control every aspect of the system. For details on APIs, refer to [Admin API Overview](admin-api-overview.md).

In the simplest example, you can use curl to probe the stats for a particular topic:

```shell

$ curl http://localhost:8080/admin/v2/persistent/public/default/my-topic/stats | python -m json.tool

```

The output is something like this:

```json

{
  "msgRateIn": 0.0,
  "msgThroughputIn": 0.0,
  "msgRateOut": 1.8332950480217471,
  "msgThroughputOut": 91.33142602871978,
  "bytesInCounter": 7097,
  "msgInCounter": 143,
  "bytesOutCounter": 6607,
  "msgOutCounter": 133,
  "averageMsgSize": 0.0,
  "msgChunkPublished": false,
  "storageSize": 7097,
  "backlogSize": 0,
  "offloadedStorageSize": 0,
  "publishers": [
    {
      "accessMode": "Shared",
      "msgRateIn": 0.0,
      "msgThroughputIn": 0.0,
      "averageMsgSize": 0.0,
      "chunkedMessageRate": 0.0,
      "producerId": 0,
      "metadata": {},
      "address": "/127.0.0.1:35604",
      "connectedSince": "2021-07-04T09:05:43.04788Z",
      "clientVersion": "2.8.0",
      "producerName": "standalone-2-5"
    }
  ],
  "waitingPublishers": 0,
  "subscriptions": {
    "my-sub": {
      "msgRateOut": 1.8332950480217471,
      "msgThroughputOut": 91.33142602871978,
      "bytesOutCounter": 6607,
      "msgOutCounter": 133,
      "msgRateRedeliver": 0.0,
      "chunkedMessageRate": 0,
      "msgBacklog": 0,
      "backlogSize": 0,
      "msgBacklogNoDelayed": 0,
      "blockedSubscriptionOnUnackedMsgs": false,
      "msgDelayed": 0,
      "unackedMessages": 0,
      "type": "Exclusive",
      "activeConsumerName": "3c544f1daa",
      "msgRateExpired": 0.0,
      "totalMsgExpired": 0,
      "lastExpireTimestamp": 0,
      "lastConsumedFlowTimestamp": 1625389101290,
      "lastConsumedTimestamp": 1625389546070,
      "lastAckedTimestamp": 1625389546162,
      "lastMarkDeleteAdvancedTimestamp": 1625389546163,
      "consumers": [
        {
          "msgRateOut": 1.8332950480217471,
          "msgThroughputOut": 91.33142602871978,
          "bytesOutCounter": 6607,
          "msgOutCounter": 133,
          "msgRateRedeliver": 0.0,
          "chunkedMessageRate": 0.0,
          "consumerName": "3c544f1daa",
          "availablePermits": 867,
          "unackedMessages": 0,
          "avgMessagesPerEntry": 6,
          "blockedConsumerOnUnackedMsgs": false,
          "lastAckedTimestamp": 1625389546162,
          "lastConsumedTimestamp": 1625389546070,
          "metadata": {},
          "address": "/127.0.0.1:35472",
          "connectedSince": "2021-07-04T08:58:21.287682Z",
          "clientVersion": "2.8.0"
        }
      ],
      "isDurable": true,
      "isReplicated": false,
      "allowOutOfOrderDelivery": false,
      "consumersAfterMarkDeletePosition": {},
      "nonContiguousDeletedMessagesRanges": 0,
      "nonContiguousDeletedMessagesRangesSerializedSize": 0,
      "durable": true,
      "replicated": false
    }
  },
  "replication": {},
  "deduplicationStatus": "Disabled",
  "nonContiguousDeletedMessagesRanges": 0,
  "nonContiguousDeletedMessagesRangesSerializedSize": 0
}

```

diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/standalone.md b/site2/website/versioned_docs/version-2.8.2-deprecated/standalone.md
deleted file mode 100644
index 25afa11a91b117..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/standalone.md
+++ /dev/null
@@ -1,271 +0,0 @@
---
id: standalone
title: Set up a standalone Pulsar locally
sidebar_label: "Run Pulsar locally"
original_id: standalone
---

For local development and testing, you can run Pulsar in standalone mode on your machine. The standalone mode includes a Pulsar broker and the necessary ZooKeeper and BookKeeper components running inside of a single Java Virtual Machine (JVM) process.

> #### Pulsar in production?
> If you're looking to run a full production Pulsar installation, see the [Deploying a Pulsar instance](deploy-bare-metal.md) guide.

## Install Pulsar standalone

This tutorial guides you through every step of the installation process.

### System requirements

Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install 64-bit JRE/JDK 8 or later versions; JRE/JDK 11 is recommended.

:::tip

By default, Pulsar allocates 2G of JVM heap memory to start. This can be changed in the `conf/pulsar_env.sh` file under `PULSAR_MEM`. These are extra options passed into the JVM.

:::

:::note

Broker is only supported on 64-bit JVM.

:::

### Install Pulsar using binary release

To get started with Pulsar, download a binary tarball release in one of the following ways:

* download from the Apache mirror (Pulsar @pulsar:version@ binary release)

* download from the Pulsar [downloads page](pulsar:download_page_url)

* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)

* use [wget](https://www.gnu.org/software/wget):

  ```shell

  $ wget pulsar:binary_release_url

  ```

After you download the tarball, untar it and use the `cd` command to navigate to the resulting directory:

```bash

$ tar xvfz apache-pulsar-@pulsar:version@-bin.tar.gz
$ cd apache-pulsar-@pulsar:version@

```

#### What your package contains

The Pulsar binary package initially contains the following directories:

Directory | Contains
:---------|:--------
`bin` | Pulsar's command-line tools, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/).
`conf` | Configuration files for Pulsar, including [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more.
`examples` | A Java JAR file containing [Pulsar Functions](functions-overview.md) examples.
`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files used by Pulsar.
`licenses` | License files, in the `.txt` form, for various components of the Pulsar [codebase](https://github.com/apache/pulsar).

These directories are created once you begin running Pulsar.

Directory | Contains
:---------|:--------
`data` | The data storage directory used by ZooKeeper and BookKeeper.
`instances` | Artifacts created for [Pulsar Functions](functions-overview.md).
`logs` | Logs created by the installation.

:::tip

If you want to use builtin connectors and tiered storage offloaders, you can install them according to the following instructions:
* [Install builtin connectors (optional)](#install-builtin-connectors-optional)
* [Install tiered storage offloaders (optional)](#install-tiered-storage-offloaders-optional)

Otherwise, skip this step and perform the next step, [Start Pulsar standalone](#start-pulsar-standalone). Pulsar can be successfully installed without installing builtin connectors and tiered storage offloaders.

:::

### Install builtin connectors (optional)

Since the `2.1.0-incubating` release, Pulsar releases a separate binary distribution containing all the `builtin` connectors. To enable those `builtin` connectors, you can download the connectors tarball release in one of the following ways:

* download from the Apache mirror Pulsar IO Connectors @pulsar:version@ release

* download from the Pulsar [downloads page](pulsar:download_page_url)

* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)

* use [wget](https://www.gnu.org/software/wget):

  ```shell

  $ wget pulsar:connector_release_url/{connector}-@pulsar:version@.nar

  ```

After you download the NAR file, copy the file to the `connectors` directory in the pulsar directory. For example, if you download the `pulsar-io-aerospike-@pulsar:version@.nar` connector file, enter the following commands:

```bash

$ mkdir connectors
$ mv pulsar-io-aerospike-@pulsar:version@.nar connectors

$ ls connectors
pulsar-io-aerospike-@pulsar:version@.nar
...

```

:::note

* If you are running Pulsar in a bare metal cluster, make sure the `connectors` tarball is unzipped in every pulsar directory of the broker (or in every pulsar directory of the function worker if you are running a separate worker cluster for Pulsar Functions).
* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DC/OS](https://dcos.io/)), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. The `apachepulsar/pulsar-all` image has already bundled [all builtin connectors](io-overview.md#working-with-connectors).

:::

### Install tiered storage offloaders (optional)

:::tip

Since the `2.2.0` release, Pulsar releases a separate binary distribution containing the tiered storage offloaders. To enable the tiered storage feature, follow the instructions below; otherwise skip this section.
- -::: - -To get started with [tiered storage offloaders](concepts-tiered-storage.md), you need to download the offloaders tarball release on every broker node in one of the following ways: - -* download from the Apache mirror Pulsar Tiered Storage Offloaders @pulsar:version@ release - -* download from the Pulsar [downloads page](pulsar:download_page_url) - -* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) - -* use [wget](https://www.gnu.org/software/wget): - - ```shell - - $ wget pulsar:offloader_release_url - - ``` - -After you download the tarball, untar the offloaders package and copy the offloaders as `offloaders` -in the pulsar directory: - -```bash - -$ tar xvfz apache-pulsar-offloaders-@pulsar:version@-bin.tar.gz - -// you will find a directory named `apache-pulsar-offloaders-@pulsar:version@` in the pulsar directory -// then copy the offloaders - -$ mv apache-pulsar-offloaders-@pulsar:version@/offloaders offloaders - -$ ls offloaders -tiered-storage-jcloud-@pulsar:version@.nar - -``` - -For more information on how to configure tiered storage, see [Tiered storage cookbook](cookbooks-tiered-storage.md). - -:::note - -* If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's pulsar directory. -* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DC/OS](https://dcos.io/)), -you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - -::: - -## Start Pulsar standalone - -Once you have an up-to-date local copy of the release, you can start a local cluster using the [`pulsar`](reference-cli-tools.md#pulsar) command, which is stored in the `bin` directory, and specifying that you want to start Pulsar in standalone mode. - -```bash - -$ bin/pulsar standalone - -``` - -If you have started Pulsar successfully, you will see `INFO`-level log messages like this: - -```bash - -2017-06-01 14:46:29,192 - INFO - [main:WebSocketService@95] - Configuration Store cache started -2017-06-01 14:46:29,192 - INFO - [main:AuthenticationService@61] - Authentication is disabled -2017-06-01 14:46:29,192 - INFO - [main:WebSocketService@108] - Pulsar WebSocket Service started - -``` - -:::tip - -* The service is running on your terminal, which is under your direct control. If you need to run other commands, open a new terminal window. - -::: - -You can also run the service as a background process using the `pulsar-daemon start standalone` command. For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon). -> -> * By default, there is no encryption, authentication, or authorization configured. Apache Pulsar can be accessed from remote server without any authorization. Please do check [Security Overview](security-overview.md) document to secure your deployment. -> -> * When you start a local standalone cluster, a `public/default` [namespace](concepts-messaging.md#namespaces) is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics). - -## Use Pulsar standalone - -Pulsar provides a CLI tool called [`pulsar-client`](reference-cli-tools.md#pulsar-client). 
The pulsar-client tool enables you to consume and produce messages to a Pulsar topic in a running cluster. - -### Consume a message - -The following command consumes a message with the subscription name `first-subscription` to the `my-topic` topic: - -```bash - -$ bin/pulsar-client consume my-topic -s "first-subscription" - -``` - -If the message has been successfully consumed, you will see a confirmation like the following in the `pulsar-client` logs: - -``` - -09:56:55.566 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.MultiTopicsConsumerImpl - [TopicsConsumerFakeTopicNamee2df9] [first-subscription] Success subscribe new topic my-topic in topics consumer, partitions: 4, allTopicPartitionsNumber: 4 - -``` - -:::tip - -As you have noticed that we do not explicitly create the `my-topic` topic, from which we consume the message. When you consume a message from a topic that does not yet exist, Pulsar creates that topic for you automatically. Producing a message to a topic that does not exist will automatically create that topic for you as well. - -::: - -### Produce a message - -The following command produces a message saying `hello-pulsar` to the `my-topic` topic: - -```bash - -$ bin/pulsar-client produce my-topic --messages "hello-pulsar" - -``` - -If the message has been successfully published to the topic, you will see a confirmation like the following in the `pulsar-client` logs: - -``` - -13:09:39.356 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully produced - -``` - -## Stop Pulsar standalone - -Press `Ctrl+C` to stop a local standalone Pulsar. - -:::tip - -If the service runs as a background process using the `pulsar-daemon start standalone` command, then use the `pulsar-daemon stop standalone` command to stop the service. -For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon). - -::: - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/tiered-storage-aliyun.md b/site2/website/versioned_docs/version-2.8.2-deprecated/tiered-storage-aliyun.md deleted file mode 100644 index 5772f162b5e26d..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/tiered-storage-aliyun.md +++ /dev/null @@ -1,257 +0,0 @@ ---- -id: tiered-storage-aliyun -title: Use Aliyun OSS offloader with Pulsar -sidebar_label: "Aliyun OSS offloader" -original_id: tiered-storage-aliyun ---- - -This chapter guides you through every step of installing and configuring the Aliyun Object Storage Service (OSS) offloader and using it with Pulsar. - -## Installation - -Follow the steps below to install the Aliyun OSS offloader. - -### Prerequisite - -- Pulsar: 2.8.0 or later versions - -### Step - -This example uses Pulsar 2.8.0. - -1. Download the Pulsar tarball, see [here](https://pulsar.apache.org/docs/en/standalone/#install-pulsar-using-binary-release). - -2. Download and untar the Pulsar offloaders package, then copy the Pulsar offloaders as `offloaders` in the Pulsar directory, see [here](https://pulsar.apache.org/docs/en/standalone/#install-tiered-storage-offloaders-optional). - - **Output** - - As shown from the output, Pulsar uses [Apache jclouds](https://jclouds.apache.org) to support [AWS S3](https://aws.amazon.com/s3/), [GCS](https://cloud.google.com/storage/), [Azure](https://portal.azure.com/#home), and [Aliyun OSS](https://www.aliyun.com/product/oss) for long-term storage. 
- - ``` - - tiered-storage-file-system-2.8.0.nar - tiered-storage-jcloud-2.8.0.nar - - ``` - - :::note - - * If you are running Pulsar in a bare-metal cluster, make sure that `offloaders` tarball is unzipped in every broker's Pulsar directory. - * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8s and DCOS), you can use the `apachepulsar/pulsar-all` image. The `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - - ::: - -## Configuration - -:::note - -Before offloading data from BookKeeper to Aliyun OSS, you need to configure some properties of the Aliyun OSS offload driver. - -::: - -Besides, you can also configure the Aliyun OSS offloader to run it automatically or trigger it manually. - -### Configure Aliyun OSS offloader driver - -You can configure the Aliyun OSS offloader driver in the configuration file `broker.conf` or `standalone.conf`. - -- **Required** configurations are as below. - - | Required configuration | Description | Example value | - | --- | --- |--- | - | `managedLedgerOffloadDriver` | Offloader driver name, which is case-insensitive. | aliyun-oss | - | `offloadersDirectory` | Offloader directory | offloaders | - | `managedLedgerOffloadBucket` | Bucket | pulsar-topic-offload | - | `managedLedgerOffloadServiceEndpoint` | Endpoint | http://oss-cn-hongkong.aliyuncs.com | - -- **Optional** configurations are as below. - - | Optional | Description | Example value | - | --- | --- | --- | - | `managedLedgerOffloadReadBufferSizeInBytes` | Size of block read | 1 MB | - | `managedLedgerOffloadMaxBlockSizeInBytes` | Size of block write | 64 MB | - | `managedLedgerMinLedgerRolloverTimeMinutes` | Minimum time between ledger rollover for a topic

-  <br/><br/>**Note**: it is not recommended that you set this configuration in the production environment. | 2 |
-  | `managedLedgerMaxEntriesPerLedger` | Maximum number of entries to append to a ledger before triggering a rollover.

    **Note**: it is not recommended that you set this configuration in the production environment. | 5000 | - -#### Bucket (required) - -A bucket is a basic container that holds your data. Everything you store in Aliyun OSS must be contained in a bucket. You can use a bucket to organize your data and control access to your data, but unlike directory and folder, you cannot nest a bucket. - -##### Example - -This example names the bucket as _pulsar-topic-offload_. - -```conf - -managedLedgerOffloadBucket=pulsar-topic-offload - -``` - -#### Endpoint (required) - -The endpoint is the region where a bucket is located. - -:::tip - -For more information about Aliyun OSS regions and endpoints, see [International website](https://www.alibabacloud.com/help/doc-detail/31837.htm) or [Chinese website](https://help.aliyun.com/document_detail/31837.html). - -::: - - -##### Example - -This example sets the endpoint as _oss-us-west-1-internal_. - -``` - -managedLedgerOffloadServiceEndpoint=http://oss-us-west-1-internal.aliyuncs.com - -``` - -#### Authentication (required) - -To be able to access Aliyun OSS, you need to authenticate with Aliyun OSS. - -Set the environment variables `ALIYUN_OSS_ACCESS_KEY_ID` and `ALIYUN_OSS_ACCESS_KEY_SECRET` in `conf/pulsar_env.sh`. - -"export" is important so that the variables are made available in the environment of spawned processes. - -```bash - -export ALIYUN_OSS_ACCESS_KEY_ID=ABC123456789 -export ALIYUN_OSS_ACCESS_KEY_SECRET=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c - -``` - -#### Size of block read/write - -You can configure the size of a request sent to or read from Aliyun OSS in the configuration file `broker.conf` or `standalone.conf`. - -| Configuration | Description | Default value | -| --- | --- | --- | -| `managedLedgerOffloadReadBufferSizeInBytes` | Block size for each individual read when reading back data from Aliyun OSS. | 1 MB | -| `managedLedgerOffloadMaxBlockSizeInBytes` | Maximum size of a "part" sent during a multipart upload to Aliyun OSS. It **cannot** be smaller than 5 MB. | 64 MB | - -### Run Aliyun OSS offloader automatically - -Namespace policy can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic reaches the threshold, an offloading operation is triggered automatically. - -| Threshold value | Action | -| --- | --- | -| > 0 | It triggers the offloading operation if the topic storage reaches its threshold. | -| = 0 | It causes a broker to offload data as soon as possible. | -| < 0 | It disables automatic offloading operation. | - -Automatic offloading runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, the offloader does not work until the current segment is full. - -You can configure the threshold size using CLI tools, such as pulsar-admin. - -The offload configurations in `broker.conf` and `standalone.conf` are used for the namespaces that do not have namespace level offload policies. Each namespace can have its own offload policy. If you want to set offload policy for each namespace, use the command [`pulsar-admin namespaces set-offload-policies options`](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-policies-em-) command. - -#### Example - -This example sets the Aliyun OSS offloader threshold size to 10 MB using pulsar-admin. 
- -```bash - -bin/pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace - -``` - -:::tip - -For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-threshold-em-). - -::: - -### Run Aliyun OSS offloader manually - -For individual topics, you can trigger the Aliyun OSS offloader manually using one of the following methods: - -- Use REST endpoint. - -- Use CLI tools (such as pulsar-admin). - - To trigger it via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are moved to Aliyun OSS until the threshold is no longer exceeded. Older segments are moved first. - -#### Example - -- This example triggers the Aliyun OSS offloader to run manually using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload --size-threshold 10M my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1 - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-em-). - - ::: - -- This example checks the Aliyun OSS offloader status using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload is currently running - - ``` - - To wait for the Aliyun OSS offloader to complete the job, add the `-w` flag. - - ```bash - - bin/pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Offload was a success - - ``` - - If there is an error in offloading, the error is propagated to the `pulsar-admin topics offload-status` command. - - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Error in offload - null - - Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g= - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-status-em-). 
- - ::: - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/tiered-storage-aws.md b/site2/website/versioned_docs/version-2.8.2-deprecated/tiered-storage-aws.md deleted file mode 100644 index a83de62643638e..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/tiered-storage-aws.md +++ /dev/null @@ -1,329 +0,0 @@ ---- -id: tiered-storage-aws -title: Use AWS S3 offloader with Pulsar -sidebar_label: "AWS S3 offloader" -original_id: tiered-storage-aws ---- - -This chapter guides you through every step of installing and configuring the AWS S3 offloader and using it with Pulsar. - -## Installation - -Follow the steps below to install the AWS S3 offloader. - -### Prerequisite - -- Pulsar: 2.4.2 or later versions - -### Step - -This example uses Pulsar 2.5.1. - -1. Download the Pulsar tarball using one of the following ways: - - * Download from the [Apache mirror](https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz) - - * Download from the Pulsar [downloads page](https://pulsar.apache.org/download) - - * Use [wget](https://www.gnu.org/software/wget): - - ```shell - - wget https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz - - ``` - -2. Download and untar the Pulsar offloaders package. - - ```bash - - wget https://downloads.apache.org/pulsar/pulsar-2.5.1/apache-pulsar-offloaders-2.5.1-bin.tar.gz - tar xvfz apache-pulsar-offloaders-2.5.1-bin.tar.gz - - ``` - -3. Copy the Pulsar offloaders as `offloaders` in the Pulsar directory. - - ``` - - mv apache-pulsar-offloaders-2.5.1/offloaders apache-pulsar-2.5.1/offloaders - - ls offloaders - - ``` - - **Output** - - As shown from the output, Pulsar uses [Apache jclouds](https://jclouds.apache.org) to support [AWS S3](https://aws.amazon.com/s3/) and [GCS](https://cloud.google.com/storage/) for long term storage. - - ``` - - tiered-storage-file-system-2.5.1.nar - tiered-storage-jcloud-2.5.1.nar - - ``` - - :::note - - * If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's Pulsar directory. - * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8s and DCOS), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - - ::: - -## Configuration - -:::note - -Before offloading data from BookKeeper to AWS S3, you need to configure some properties of the AWS S3 offload driver. - -::: - -Besides, you can also configure the AWS S3 offloader to run it automatically or trigger it manually. - -### Configure AWS S3 offloader driver - -You can configure the AWS S3 offloader driver in the configuration file `broker.conf` or `standalone.conf`. - -- **Required** configurations are as below. - - Required configuration | Description | Example value - |---|---|--- - `managedLedgerOffloadDriver` | Offloader driver name, which is case-insensitive.

-  <br/><br/>**Note**: there is a third driver type, S3, which is identical to AWS S3, except that S3 requires you to specify an endpoint URL using `s3ManagedLedgerOffloadServiceEndpoint`. This is useful when you use an S3-compatible data store other than AWS S3. | aws-s3
-  `offloadersDirectory` | Offloader directory | offloaders
-  `s3ManagedLedgerOffloadBucket` | Bucket | pulsar-topic-offload
-
-- **Optional** configurations are as below.
-
-  Optional | Description | Example value
-  |---|---|---
-  `s3ManagedLedgerOffloadRegion` | Bucket region

-  <br/><br/>**Note**: before specifying a value for this parameter, you need to set the following configurations. Otherwise, you might get an error.<br/><br/>- Set [`s3ManagedLedgerOffloadServiceEndpoint`](https://docs.aws.amazon.com/general/latest/gr/s3.html).<br/><br/>Example<br/>`s3ManagedLedgerOffloadServiceEndpoint=https://s3.YOUR_REGION.amazonaws.com`<br/><br/>- Grant `GetBucketLocation` permission to a user.<br/><br/>For how to grant `GetBucketLocation` permission to a user, see [here](https://docs.aws.amazon.com/AmazonS3/latest/dev/using-with-s3-actions.html#using-with-s3-actions-related-to-buckets).| eu-west-3
-  `s3ManagedLedgerOffloadReadBufferSizeInBytes`|Size of block read|1 MB
-  `s3ManagedLedgerOffloadMaxBlockSizeInBytes`|Size of block write|64 MB
-  `managedLedgerMinLedgerRolloverTimeMinutes`|Minimum time between ledger rollover for a topic

-  <br/><br/>**Note**: it is not recommended that you set this configuration in the production environment.|2
-  `managedLedgerMaxEntriesPerLedger`|Maximum number of entries to append to a ledger before triggering a rollover.

    **Note**: it is not recommended that you set this configuration in the production environment.|5000 - -#### Bucket (required) - -A bucket is a basic container that holds your data. Everything you store in AWS S3 must be contained in a bucket. You can use a bucket to organize your data and control access to your data, but unlike directory and folder, you cannot nest a bucket. - -##### Example - -This example names the bucket as _pulsar-topic-offload_. - -```conf - -s3ManagedLedgerOffloadBucket=pulsar-topic-offload - -``` - -#### Bucket region - -A bucket region is a region where a bucket is located. If a bucket region is not specified, the **default** region (`US East (N. Virginia)`) is used. - -:::tip - -For more information about AWS regions and endpoints, see [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). - -::: - - -##### Example - -This example sets the bucket region as _europe-west-3_. - -``` - -s3ManagedLedgerOffloadRegion=eu-west-3 - -``` - -#### Authentication (required) - -To be able to access AWS S3, you need to authenticate with AWS S3. - -Pulsar does not provide any direct methods of configuring authentication for AWS S3, -but relies on the mechanisms supported by the [DefaultAWSCredentialsProviderChain](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html). - -Once you have created a set of credentials in the AWS IAM console, you can configure credentials using one of the following methods. - -* Use EC2 instance metadata credentials. - - If you are on AWS instance with an instance profile that provides credentials, Pulsar uses these credentials if no other mechanism is provided. - -* Set the environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` in `conf/pulsar_env.sh`. - - "export" is important so that the variables are made available in the environment of spawned processes. - - ```bash - - export AWS_ACCESS_KEY_ID=ABC123456789 - export AWS_SECRET_ACCESS_KEY=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c - - ``` - -* Add the Java system properties `aws.accessKeyId` and `aws.secretKey` to `PULSAR_EXTRA_OPTS` in `conf/pulsar_env.sh`. - - ```bash - - PULSAR_EXTRA_OPTS="${PULSAR_EXTRA_OPTS} ${PULSAR_MEM} ${PULSAR_GC} -Daws.accessKeyId=ABC123456789 -Daws.secretKey=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c -Dio.netty.leakDetectionLevel=disabled -Dio.netty.recycler.maxCapacityPerThread=4096" - - ``` - -* Set the access credentials in `~/.aws/credentials`. - - ```conf - - [default] - aws_access_key_id=ABC123456789 - aws_secret_access_key=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c - - ``` - -* Assume an IAM role. - - This example uses the `DefaultAWSCredentialsProviderChain` for assuming this role. - - The broker must be rebooted for credentials specified in `pulsar_env` to take effect. - - ```conf - - s3ManagedLedgerOffloadRole= - s3ManagedLedgerOffloadRoleSessionName=pulsar-s3-offload - - ``` - -#### Size of block read/write - -You can configure the size of a request sent to or read from AWS S3 in the configuration file `broker.conf` or `standalone.conf`. - -Configuration|Description|Default value -|---|---|--- -`s3ManagedLedgerOffloadReadBufferSizeInBytes`|Block size for each individual read when reading back data from AWS S3.|1 MB -`s3ManagedLedgerOffloadMaxBlockSizeInBytes`|Maximum size of a "part" sent during a multipart upload to AWS S3. It **cannot** be smaller than 5 MB. 
|64 MB - -### Configure AWS S3 offloader to run automatically - -Namespace policy can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic reaches the threshold, an offloading operation is triggered automatically. - -Threshold value|Action -|---|--- -> 0 | It triggers the offloading operation if the topic storage reaches its threshold. -= 0|It causes a broker to offload data as soon as possible. -< 0 |It disables automatic offloading operation. - -Automatic offloading runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, offloader does not work until the current segment is full. - -You can configure the threshold size using CLI tools, such as pulsar-admin. - -The offload configurations in `broker.conf` and `standalone.conf` are used for the namespaces that do not have namespace level offload policies. Each namespace can have its own offload policy. If you want to set offload policy for each namespace, use the command [`pulsar-admin namespaces set-offload-policies options`](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-policies-em-) command. - -#### Example - -This example sets the AWS S3 offloader threshold size to 10 MB using pulsar-admin. - -```bash - -bin/pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace - -``` - -:::tip - -For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-threshold-em-). - -::: - -### Configure AWS S3 offloader to run manually - -For individual topics, you can trigger AWS S3 offloader manually using one of the following methods: - -- Use REST endpoint. - -- Use CLI tools (such as pulsar-admin). - - To trigger it via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are moved to AWS S3 until the threshold is no longer exceeded. Older segments are moved first. - -#### Example - -- This example triggers the AWS S3 offloader to run manually using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload --size-threshold 10M my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1 - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-em-). - - ::: - -- This example checks the AWS S3 offloader status using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload is currently running - - ``` - - To wait for the AWS S3 offloader to complete the job, add the `-w` flag. - - ```bash - - bin/pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Offload was a success - - ``` - - If there is an error in offloading, the error is propagated to the `pulsar-admin topics offload-status` command. 
- - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Error in offload - null - - Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g= - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-status-em-). - - ::: - -## Tutorial - -For the complete and step-by-step instructions on how to use the AWS S3 offloader with Pulsar, see [here](https://hub.streamnative.io/offloaders/aws-s3/2.5.1#usage). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/tiered-storage-azure.md b/site2/website/versioned_docs/version-2.8.2-deprecated/tiered-storage-azure.md deleted file mode 100644 index e1485af3984e31..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/tiered-storage-azure.md +++ /dev/null @@ -1,264 +0,0 @@ ---- -id: tiered-storage-azure -title: Use Azure BlobStore offloader with Pulsar -sidebar_label: "Azure BlobStore offloader" -original_id: tiered-storage-azure ---- - -This chapter guides you through every step of installing and configuring the Azure BlobStore offloader and using it with Pulsar. - -## Installation - -Follow the steps below to install the Azure BlobStore offloader. - -### Prerequisite - -- Pulsar: 2.6.2 or later versions - -### Step - -This example uses Pulsar 2.6.2. - -1. Download the Pulsar tarball using one of the following ways: - - * Download from the [Apache mirror](https://archive.apache.org/dist/pulsar/pulsar-2.6.2/apache-pulsar-2.6.2-bin.tar.gz) - - * Download from the Pulsar [downloads page](https://pulsar.apache.org/download) - - * Use [wget](https://www.gnu.org/software/wget): - - ```shell - - wget https://archive.apache.org/dist/pulsar/pulsar-2.6.2/apache-pulsar-2.6.2-bin.tar.gz - - ``` - -2. Download and untar the Pulsar offloaders package. - - ```bash - - wget https://downloads.apache.org/pulsar/pulsar-2.6.2/apache-pulsar-offloaders-2.6.2-bin.tar.gz - tar xvfz apache-pulsar-offloaders-2.6.2-bin.tar.gz - - ``` - -3. Copy the Pulsar offloaders as `offloaders` in the Pulsar directory. - - ``` - - mv apache-pulsar-offloaders-2.6.2/offloaders apache-pulsar-2.6.2/offloaders - - ls offloaders - - ``` - - **Output** - - As shown from the output, Pulsar uses [Apache jclouds](https://jclouds.apache.org) to support [AWS S3](https://aws.amazon.com/s3/), [GCS](https://cloud.google.com/storage/) and [Azure](https://portal.azure.com/#home) for long term storage. - - ``` - - tiered-storage-file-system-2.6.2.nar - tiered-storage-jcloud-2.6.2.nar - - ``` - - :::note - - * If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's Pulsar directory. 
- * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8s and DCOS), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - - ::: - -## Configuration - -:::note - -Before offloading data from BookKeeper to Azure BlobStore, you need to configure some properties of the Azure BlobStore offload driver. - -::: - -Besides, you can also configure the Azure BlobStore offloader to run it automatically or trigger it manually. - -### Configure Azure BlobStore offloader driver - -You can configure the Azure BlobStore offloader driver in the configuration file `broker.conf` or `standalone.conf`. - -- **Required** configurations are as below. - - Required configuration | Description | Example value - |---|---|--- - `managedLedgerOffloadDriver` | Offloader driver name | azureblob - `offloadersDirectory` | Offloader directory | offloaders - `managedLedgerOffloadBucket` | Bucket | pulsar-topic-offload - -- **Optional** configurations are as below. - - Optional | Description | Example value - |---|---|--- - `managedLedgerOffloadReadBufferSizeInBytes`|Size of block read|1 MB - `managedLedgerOffloadMaxBlockSizeInBytes`|Size of block write|64 MB - `managedLedgerMinLedgerRolloverTimeMinutes`|Minimum time between ledger rollover for a topic

-  <br/><br/>**Note**: it is not recommended that you set this configuration in the production environment.|2
-  `managedLedgerMaxEntriesPerLedger`|Maximum number of entries to append to a ledger before triggering a rollover.

    **Note**: it is not recommended that you set this configuration in the production environment.|5000 - -#### Bucket (required) - -A bucket is a basic container that holds your data. Everything you store in Azure BlobStore must be contained in a bucket. You can use a bucket to organize your data and control access to your data, but unlike directory and folder, you cannot nest a bucket. - -##### Example - -This example names the bucket as _pulsar-topic-offload_. - -```conf - -managedLedgerOffloadBucket=pulsar-topic-offload - -``` - -#### Authentication (required) - -To be able to access Azure BlobStore, you need to authenticate with Azure BlobStore. - -* Set the environment variables `AZURE_STORAGE_ACCOUNT` and `AZURE_STORAGE_ACCESS_KEY` in `conf/pulsar_env.sh`. - - "export" is important so that the variables are made available in the environment of spawned processes. - - ```bash - - export AZURE_STORAGE_ACCOUNT=ABC123456789 - export AZURE_STORAGE_ACCESS_KEY=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c - - ``` - -#### Size of block read/write - -You can configure the size of a request sent to or read from Azure BlobStore in the configuration file `broker.conf` or `standalone.conf`. - -Configuration|Description|Default value -|---|---|--- -`managedLedgerOffloadReadBufferSizeInBytes`|Block size for each individual read when reading back data from Azure BlobStore store.|1 MB -`managedLedgerOffloadMaxBlockSizeInBytes`|Maximum size of a "part" sent during a multipart upload to Azure BlobStore store. It **cannot** be smaller than 5 MB. |64 MB - -### Configure Azure BlobStore offloader to run automatically - -Namespace policy can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic reaches the threshold, an offloading operation is triggered automatically. - -Threshold value|Action -|---|--- -> 0 | It triggers the offloading operation if the topic storage reaches its threshold. -= 0|It causes a broker to offload data as soon as possible. -< 0 |It disables automatic offloading operation. - -Automatic offloading runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, offloader does not work until the current segment is full. - -You can configure the threshold size using CLI tools, such as pulsar-admin. - -The offload configurations in `broker.conf` and `standalone.conf` are used for the namespaces that do not have namespace level offload policies. Each namespace can have its own offload policy. If you want to set offload policy for each namespace, use the command [`pulsar-admin namespaces set-offload-policies options`](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-policies-em-) command. - -#### Example - -This example sets the Azure BlobStore offloader threshold size to 10 MB using pulsar-admin. - -```bash - -bin/pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace - -``` - -:::tip - -For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-threshold-em-). - -::: - -### Configure Azure BlobStore offloader to run manually - -For individual topics, you can trigger Azure BlobStore offloader manually using one of the following methods: - -- Use REST endpoint. 
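-
-  For example, a `curl` sketch of the REST call. This assumes a broker admin service on `localhost:8080` and the v2 admin API; the request body is the JSON form of the message ID before which data is offloaded, so verify the exact schema for your Pulsar version before relying on it:
-
-  ```shell
-
-  # Illustrative values; adjust the host, topic, and message ID to your deployment.
-  curl -X PUT http://localhost:8080/admin/v2/persistent/my-tenant/my-namespace/topic1/offload \
-    -H 'Content-Type: application/json' \
-    -d '{"ledgerId":697,"entryId":0,"partitionIndex":-1}'
-
-  ```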
- -- Use CLI tools (such as pulsar-admin). - - To trigger it via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are moved to Azure BlobStore until the threshold is no longer exceeded. Older segments are moved first. - -#### Example - -- This example triggers the Azure BlobStore offloader to run manually using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload --size-threshold 10M my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1 - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-em-). - - ::: - -- This example checks the Azure BlobStore offloader status using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload is currently running - - ``` - - To wait for the Azure BlobStore offloader to complete the job, add the `-w` flag. - - ```bash - - bin/pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Offload was a success - - ``` - - If there is an error in offloading, the error is propagated to the `pulsar-admin topics offload-status` command. - - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Error in offload - null - - Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-status-em-). - - ::: - diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/tiered-storage-filesystem.md b/site2/website/versioned_docs/version-2.8.2-deprecated/tiered-storage-filesystem.md deleted file mode 100644 index 85a1644120fc63..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/tiered-storage-filesystem.md +++ /dev/null @@ -1,630 +0,0 @@ ---- -id: tiered-storage-filesystem -title: Use filesystem offloader with Pulsar -sidebar_label: "Filesystem offloader" -original_id: tiered-storage-filesystem ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -This chapter guides you through every step of installing and configuring the filesystem offloader and using it with Pulsar. - -## Installation - -This section describes how to install the filesystem offloader. - -### Prerequisite - -- Pulsar: 2.4.2 or higher versions - -### Step - -This example uses Pulsar 2.5.1. - -1. Download the Pulsar tarball using one of the following ways: - - * Download the Pulsar tarball from the [Apache mirror](https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz) - - * Download the Pulsar tarball from the Pulsar [download page](https://pulsar.apache.org/download) - - * Use the [wget](https://www.gnu.org/software/wget) command to dowload the Pulsar tarball. 
- - ```shell - - wget https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz - - ``` - -2. Download and untar the Pulsar offloaders package. - - ```bash - - wget https://downloads.apache.org/pulsar/pulsar-2.5.1/apache-pulsar-offloaders-2.5.1-bin.tar.gz - - tar xvfz apache-pulsar-offloaders-2.5.1-bin.tar.gz - - ``` - - :::note - - * If you run Pulsar in a bare metal cluster, ensure that the `offloaders` tarball is unzipped in every broker's Pulsar directory. - * If you run Pulsar in Docker or deploying Pulsar using a Docker image (such as K8S and DCOS), you can use the `apachepulsar/pulsar-all` image. The `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - - ::: - -3. Copy the Pulsar offloaders as `offloaders` in the Pulsar directory. - - ``` - - mv apache-pulsar-offloaders-2.5.1/offloaders apache-pulsar-2.5.1/offloaders - - ls offloaders - - ``` - - **Output** - - ``` - - tiered-storage-file-system-2.5.1.nar - tiered-storage-jcloud-2.5.1.nar - - ``` - - :::note - - * If you run Pulsar in a bare metal cluster, ensure that `offloaders` tarball is unzipped in every broker's Pulsar directory. - * If you run Pulsar in Docker or deploying Pulsar using a Docker image (such as K8s and DCOS), you can use the `apachepulsar/pulsar-all` image. The `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - - ::: - -## Configuration - -:::note - -Before offloading data from BookKeeper to filesystem, you need to configure some properties of the filesystem offloader driver. - -::: - -Besides, you can also configure the filesystem offloader to run it automatically or trigger it manually. - -### Configure filesystem offloader driver - -You can configure the filesystem offloader driver in the `broker.conf` or `standalone.conf` configuration file. - -````mdx-code-block - - - -- **Required** configurations are as below. - - Parameter | Description | Example value - |---|---|--- - `managedLedgerOffloadDriver` | Offloader driver name, which is case-insensitive. | filesystem - `fileSystemURI` | Connection address, which is the URI to access the default Hadoop distributed file system. | hdfs://127.0.0.1:9000 - `offloadersDirectory` | Offloader directory | offloaders - `fileSystemProfilePath` | Hadoop profile path. The configuration file is stored in the Hadoop profile path. It contains various settings for Hadoop performance tuning. | conf/filesystem_offload_core_site.xml - -- **Optional** configurations are as below. - - Parameter| Description | Example value - |---|---|--- - `managedLedgerMinLedgerRolloverTimeMinutes`|Minimum time between ledger rollover for a topic.

-  <br/><br/>**Note**: it is not recommended to set this parameter in the production environment.|2
-  `managedLedgerMaxEntriesPerLedger`|Maximum number of entries to append to a ledger before triggering a rollover.

-  <br/><br/>**Note**: it is not recommended to set this parameter in the production environment.|5000
-
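-
-As a quick reference, a minimal HDFS configuration in `broker.conf` or `standalone.conf` might look like the sketch below (the URI and profile path are illustrative; the same values appear in the tutorial later in this page):
-
-```conf
-
-# Illustrative values; match them to your HDFS deployment.
-managedLedgerOffloadDriver=filesystem
-offloadersDirectory=offloaders
-fileSystemURI=hdfs://127.0.0.1:9000
-fileSystemProfilePath=conf/filesystem_offload_core_site.xml
-
-```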
    - - -- **Required** configurations are as below. - - Parameter | Description | Example value - |---|---|--- - `managedLedgerOffloadDriver` | Offloader driver name, which is case-insensitive. | filesystem - `offloadersDirectory` | Offloader directory | offloaders - `fileSystemProfilePath` | NFS profile path. The configuration file is stored in the NFS profile path. It contains various settings for performance tuning. | conf/filesystem_offload_core_site.xml - -- **Optional** configurations are as below. - - Parameter| Description | Example value - |---|---|--- - `managedLedgerMinLedgerRolloverTimeMinutes`|Minimum time between ledger rollover for a topic.

-  <br/><br/>**Note**: it is not recommended to set this parameter in the production environment.|2
-  `managedLedgerMaxEntriesPerLedger`|Maximum number of entries to append to a ledger before triggering a rollover.

-  <br/><br/>**Note**: it is not recommended to set this parameter in the production environment.|5000
-
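-
-As a quick reference, a minimal NFS configuration might look like the sketch below. Note that there is no `fileSystemURI` here; the NFS share is expected to be mounted into the local filesystem first, as the tutorial later in this page shows:
-
-```conf
-
-# Illustrative values; the NFS share must already be mounted locally.
-managedLedgerOffloadDriver=filesystem
-offloadersDirectory=offloaders
-fileSystemProfilePath=conf/filesystem_offload_core_site.xml
-
-```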
    - -
    -```` - -### Run filesystem offloader automatically - -You can configure the namespace policy to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic storage reaches the threshold, an offload operation is triggered automatically. - -Threshold value|Action -|---|--- -| > 0 | It triggers the offloading operation if the topic storage reaches its threshold. -= 0|It causes a broker to offload data as soon as possible. -< 0 |It disables automatic offloading operation. - -Automatic offload runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, the filesystem offloader does not work until the current segment is full. - -You can configure the threshold using CLI tools, such as pulsar-admin. - -#### Example - -This example sets the filesystem offloader threshold to 10 MB using pulsar-admin. - -```bash - -pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace - -``` - -:::tip - -For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#set-offload-threshold). - -::: - -### Run filesystem offloader manually - -For individual topics, you can trigger the filesystem offloader manually using one of the following methods: - -- Use the REST endpoint. - -- Use CLI tools (such as pulsar-admin). - -To manually trigger the filesystem offloader via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are offloaded to the filesystem until the threshold is no longer exceeded. Older segments are offloaded first. - -#### Example - -- This example manually run the filesystem offloader using pulsar-admin. - - ```bash - - pulsar-admin topics offload --size-threshold 10M persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1 - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#offload). - - ::: - -- This example checks filesystem offloader status using pulsar-admin. - - ```bash - - pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload is currently running - - ``` - - To wait for the filesystem to complete the job, add the `-w` flag. - - ```bash - - pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Offload was a success - - ``` - - If there is an error in the offloading operation, the error is propagated to the `pulsar-admin topics offload-status` command. - - ```bash - - pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Error in offload - null - - Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. 
(Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g= - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#offload-status). - - ::: - -## Tutorial - -This section provides step-by-step instructions on how to use the filesystem offloader to move data from Pulsar to Hadoop Distributed File System (HDFS) or Network File system (NFS). - -````mdx-code-block - - - -To move data from Pulsar to HDFS, follow these steps. - -### Step 1: Prepare the HDFS environment - -This tutorial sets up a Hadoop single node cluster and uses Hadoop 3.2.1. - -:::tip - -For details about how to set up a Hadoop single node cluster, see [here](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html). - -::: - -1. Download and uncompress Hadoop 3.2.1. - - ``` - - wget https://mirrors.bfsu.edu.cn/apache/hadoop/common/hadoop-3.2.1/hadoop-3.2.1.tar.gz - - tar -zxvf hadoop-3.2.1.tar.gz -C $HADOOP_HOME - - ``` - -2. Configure Hadoop. - - ``` - - # $HADOOP_HOME/etc/hadoop/core-site.xml - - - fs.defaultFS - hdfs://localhost:9000 - - - - # $HADOOP_HOME/etc/hadoop/hdfs-site.xml - - - dfs.replication - 1 - - - - ``` - -3. Set passphraseless ssh. - - ``` - - # Now check that you can ssh to the localhost without a passphrase: - $ ssh localhost - # If you cannot ssh to localhost without a passphrase, execute the following commands - $ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa - $ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys - $ chmod 0600 ~/.ssh/authorized_keys - - ``` - -4. Start HDFS. - - ``` - - # don't execute this command repeatedly, repeat execute will cauld the clusterId of the datanode is not consistent with namenode - $HADOOP_HOME/bin/hadoop namenode -format - $HADOOP_HOME/sbin/start-dfs.sh - - ``` - -5. Navigate to the [HDFS website](http://localhost:9870/). - - You can see the **Overview** page. - - ![](/assets/FileSystem-1.png) - - 1. At the top navigation bar, click **Datanodes** to check DataNode information. - - ![](/assets/FileSystem-2.png) - - 2. Click **HTTP Address** to get more detailed information about localhost:9866. - - As can be seen below, the size of **Capacity Used** is 4 KB, which is the initial value. - - ![](/assets/FileSystem-3.png) - -### Step 2: Install the filesystem offloader - -For details, see [installation](#installation). - -### Step 3: Configure the filesystem offloader - -As indicated in the [configuration](#configuration) section, you need to configure some properties for the filesystem offloader driver before using it. This tutorial assumes that you have configured the filesystem offloader driver as below and run Pulsar in **standalone** mode. - -Set the following configurations in the `conf/standalone.conf` file. - -```conf - -managedLedgerOffloadDriver=filesystem -fileSystemURI=hdfs://127.0.0.1:9000 -fileSystemProfilePath=conf/filesystem_offload_core_site.xml - -``` - -:::note - -For testing purposes, you can set the following two configurations to speed up ledger rollover, but it is not recommended that you set them in the production environment. 
- -::: - -``` - -managedLedgerMinLedgerRolloverTimeMinutes=1 -managedLedgerMaxEntriesPerLedger=100 - -``` - - - - -:::note - -In this section, it is assumed that you have enabled NFS service and set the shared path of your NFS service. In this section, `/Users/test` is used as the shared path of NFS service. - -::: - -To offload data to NFS, follow these steps. - -### Step 1: Install the filesystem offloader - -For details, see [installation](#installation). - -### Step 2: Mont your NFS to your local filesystem - -This example mounts mounts */Users/pulsar_nfs* to */Users/test*. - -``` - -mount -e 192.168.0.103:/Users/test/Users/pulsar_nfs - -``` - -### Step 3: Configure the filesystem offloader driver - -As indicated in the [configuration](#configuration) section, you need to configure some properties for the filesystem offloader driver before using it. This tutorial assumes that you have configured the filesystem offloader driver as below and run Pulsar in **standalone** mode. - -1. Set the following configurations in the `conf/standalone.conf` file. - - ```conf - - managedLedgerOffloadDriver=filesystem - fileSystemProfilePath=conf/filesystem_offload_core_site.xml - - ``` - -2. Modify the *filesystem_offload_core_site.xml* as follows. - - ``` - - - fs.defaultFS - file:/// - - - - hadoop.tmp.dir - file:///Users/pulsar_nfs - - - - io.file.buffer.size - 4096 - - - - io.seqfile.compress.blocksize - 1000000 - - - - io.seqfile.compression.type - BLOCK - - - - io.map.index.interval - 128 - - - ``` - - - - -```` - -### Step 4: Offload data from BookKeeper to filesystem - -Execute the following commands in the repository where you download Pulsar tarball. For example, `~/path/to/apache-pulsar-2.5.1`. - -1. Start Pulsar standalone. - - ``` - - bin/pulsar standalone -a 127.0.0.1 - - ``` - -2. To ensure the data generated is not deleted immediately, it is recommended to set the [retention policy](https://pulsar.apache.org/docs/en/next/cookbooks-retention-expiry/#retention-policies), which can be either a **size** limit or a **time** limit. The larger value you set for the retention policy, the longer the data can be retained. - - ``` - - bin/pulsar-admin namespaces set-retention public/default --size 100M --time 2d - - ``` - - :::tip - - For more information about the `pulsarctl namespaces set-retention options` command, including flags, descriptions, default values, and shorthands, see [here](https://docs.streamnative.io/pulsarctl/v2.7.0.6/#-em-set-retention-em-). - - ::: - -3. Produce data using pulsar-client. - - ``` - - bin/pulsar-client produce -m "Hello FileSystem Offloader" -n 1000 public/default/fs-test - - ``` - -4. The offloading operation starts after a ledger rollover is triggered. To ensure offload data successfully, it is recommended that you wait until several ledger rollovers are triggered. In this case, you might need to wait for a second. You can check the ledger status using pulsarctl. - - ``` - - bin/pulsar-admin topics stats-internal public/default/fs-test - - ``` - - **Output** - - The data of the ledger 696 is not offloaded. - - ``` - - { - "version": 1, - "creationDate": "2020-06-16T21:46:25.807+08:00", - "modificationDate": "2020-06-16T21:46:25.821+08:00", - "ledgers": [ - { - "ledgerId": 696, - "isOffloaded": false - } - ], - "cursors": {} - } - - ``` - -5. Wait a second and send more messages to the topic. - - ``` - - bin/pulsar-client produce -m "Hello FileSystem Offloader" -n 1000 public/default/fs-test - - ``` - -6. Check the ledger status using pulsarctl. 
- - ``` - - bin/pulsar-admin topics stats-internal public/default/fs-test - - ``` - - **Output** - - The ledger 696 is rolled over. - - ``` - - { - "version": 2, - "creationDate": "2020-06-16T21:46:25.807+08:00", - "modificationDate": "2020-06-16T21:48:52.288+08:00", - "ledgers": [ - { - "ledgerId": 696, - "entries": 1001, - "size": 81695, - "isOffloaded": false - }, - { - "ledgerId": 697, - "isOffloaded": false - } - ], - "cursors": {} - } - - ``` - -7. Trigger the offloading operation manually using pulsarctl. - - ``` - - bin/pulsar-admin topics offload -s 0 public/default/fs-test - - ``` - - **Output** - - Data in ledgers before the ledge 697 is offloaded. - - ``` - - # offload info, the ledgers before 697 will be offloaded - Offload triggered for persistent://public/default/fs-test3 for messages before 697:0:-1 - - ``` - -8. Check the ledger status using pulsarctl. - - ``` - - bin/pulsar-admin topics stats-internal public/default/fs-test - - ``` - - **Output** - - The data of the ledger 696 is offloaded. - - ``` - - { - "version": 4, - "creationDate": "2020-06-16T21:46:25.807+08:00", - "modificationDate": "2020-06-16T21:52:13.25+08:00", - "ledgers": [ - { - "ledgerId": 696, - "entries": 1001, - "size": 81695, - "isOffloaded": true - }, - { - "ledgerId": 697, - "isOffloaded": false - } - ], - "cursors": {} - } - - ``` - - And the **Capacity Used** is changed from 4 KB to 116.46 KB. - - ![](/assets/FileSystem-8.png) \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/tiered-storage-gcs.md b/site2/website/versioned_docs/version-2.8.2-deprecated/tiered-storage-gcs.md deleted file mode 100644 index 81e7c5c6e6a44b..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/tiered-storage-gcs.md +++ /dev/null @@ -1,319 +0,0 @@ ---- -id: tiered-storage-gcs -title: Use GCS offloader with Pulsar -sidebar_label: "GCS offloader" -original_id: tiered-storage-gcs ---- - -This chapter guides you through every step of installing and configuring the GCS offloader and using it with Pulsar. - -## Installation - -Follow the steps below to install the GCS offloader. - -### Prerequisite - -- Pulsar: 2.4.2 or later versions - -### Step - -This example uses Pulsar 2.5.1. - -1. Download the Pulsar tarball using one of the following ways: - - * Download from the [Apache mirror](https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz) - - * Download from the Pulsar [download page](https://pulsar.apache.org/download) - - * Use [wget](https://www.gnu.org/software/wget) - - ```shell - - wget https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz - - ``` - -2. Download and untar the Pulsar offloaders package. - - ```bash - - wget https://downloads.apache.org/pulsar/pulsar-2.5.1/apache-pulsar-offloaders-2.5.1-bin.tar.gz - - tar xvfz apache-pulsar-offloaders-2.5.1-bin.tar.gz - - ``` - - :::note - - * If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's Pulsar directory. - * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8S and DCOS), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - - ::: - -3. Copy the Pulsar offloaders as `offloaders` in the Pulsar directory. 
-
-   ```
-   
-   mv apache-pulsar-offloaders-2.5.1/offloaders apache-pulsar-2.5.1/offloaders
-
-   ls offloaders
-   
-   ```
-
-   **Output**
-
-   As shown in the output, Pulsar uses [Apache jclouds](https://jclouds.apache.org) to support GCS and AWS S3 for long term storage.
-
-   ```
-   
-   tiered-storage-file-system-2.5.1.nar
-   tiered-storage-jcloud-2.5.1.nar
-   
-   ```
-
-## Configuration
-
-:::note
-
-Before offloading data from BookKeeper to GCS, you need to configure some properties of the GCS offloader driver.
-
-:::
-
-Besides, you can also configure the GCS offloader to run automatically or trigger it manually.
-
-### Configure GCS offloader driver
-
-You can configure the GCS offloader driver in the configuration file `broker.conf` or `standalone.conf`.
-
-- **Required** configurations are as below.
-
-  **Required** configuration | Description | Example value
-  |---|---|---
-  `managedLedgerOffloadDriver`|Offloader driver name, which is case-insensitive.|google-cloud-storage
-  `offloadersDirectory`|Offloader directory|offloaders
-  `gcsManagedLedgerOffloadBucket`|Bucket|pulsar-topic-offload
-  `gcsManagedLedgerOffloadRegion`|Bucket region|europe-west3
-  `gcsManagedLedgerOffloadServiceAccountKeyFile`|Authentication|/Users/user-name/Downloads/project-804d5e6a6f33.json
-
-- **Optional** configurations are as below.
-
-  Optional configuration|Description|Example value
-  |---|---|---
-  `gcsManagedLedgerOffloadReadBufferSizeInBytes`|Size of block read|1 MB
-  `gcsManagedLedgerOffloadMaxBlockSizeInBytes`|Size of block write|64 MB
-  `managedLedgerMinLedgerRolloverTimeMinutes`|Minimum time between ledger rollovers for a topic.|2
-  `managedLedgerMaxEntriesPerLedger`|The max number of entries to append to a ledger before triggering a rollover.|5000
-
-#### Bucket (required)
-
-A bucket is a basic container that holds your data. Everything you store in GCS **must** be contained in a bucket. You can use a bucket to organize your data and control access to your data, but unlike directories and folders, you cannot nest buckets.
-
-##### Example
-
-This example names the bucket _pulsar-topic-offload_.
-
-```conf
-
-gcsManagedLedgerOffloadBucket=pulsar-topic-offload
-
-```
-
-#### Bucket region (required)
-
-The bucket region is the region where a bucket is located. If a bucket region is not specified, the **default** region (`us multi-regional location`) is used.
-
-:::tip
-
-For more information about bucket locations, see [here](https://cloud.google.com/storage/docs/bucket-locations).
-
-:::
-
-##### Example
-
-This example sets the bucket region as _europe-west3_.
-
-```
-
-gcsManagedLedgerOffloadRegion=europe-west3
-
-```
-
-#### Authentication (required)
-
-To enable a broker to access GCS, you need to configure `gcsManagedLedgerOffloadServiceAccountKeyFile` in the configuration file `broker.conf`.
-
-`gcsManagedLedgerOffloadServiceAccountKeyFile` is
-a JSON file containing the GCS credentials of a service account.
-
-##### Example
-
-To generate service account credentials, or to view the public credentials that you have already generated, follow these steps.
-
-1. Navigate to the [Service accounts page](https://console.developers.google.com/iam-admin/serviceaccounts).
-
-2. Select a project or create a new one.
-
-3. Click **Create service account**.
-
-4. In the **Create service account** window, type a name for the service account and select **Furnish a new private key**.
-
-   If you want to [grant G Suite domain-wide authority](https://developers.google.com/identity/protocols/OAuth2ServiceAccount#delegatingauthority) to the service account, select **Enable G Suite Domain-wide Delegation**.
-
-5. Click **Create**.
-
-   :::note
-
-   Make sure the service account you create has permission to operate GCS. You need to assign the **Storage Admin** permission to your service account, as described [here](https://cloud.google.com/storage/docs/access-control/iam).
-
-   :::
-
-6. Get the generated key file and set its path in `broker.conf`.
-
-   ```conf
-   
-   gcsManagedLedgerOffloadServiceAccountKeyFile="/Users/user-name/Downloads/project-804d5e6a6f33.json"
-   
-   ```
-
-   :::tip
-
-   - For more information about how to create `gcsManagedLedgerOffloadServiceAccountKeyFile`, see [here](https://support.google.com/googleapi/answer/6158849).
-   - For more information about Google Cloud IAM, see [here](https://cloud.google.com/storage/docs/access-control/iam).
-
-   :::
-
-#### Size of block read/write
-
-You can configure the size of a request sent to or read from GCS in the configuration file `broker.conf`.
-
-Configuration|Description
-|---|---
-`gcsManagedLedgerOffloadReadBufferSizeInBytes`|Block size for each individual read when reading back data from GCS.<br><br>The **default** value is 1 MB.
-`gcsManagedLedgerOffloadMaxBlockSizeInBytes`|Maximum size of a "part" sent during a multipart upload to GCS.<br><br>It **cannot** be smaller than 5 MB.<br><br>The **default** value is 64 MB.
-
-### Configure GCS offloader to run automatically
-
-A namespace policy can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic reaches the threshold, an offload operation is triggered automatically.
-
-Threshold value|Action
-|---|---
-> 0 | It triggers the offloading operation if the topic storage reaches its threshold.
-= 0|It causes a broker to offload data as soon as possible.
-< 0 |It disables the automatic offloading operation.
-
-Automatic offloading runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, the offloader does not work until the current segment is full.
-
-You can configure the threshold size using CLI tools, such as pulsar-admin.
-
-The offload configurations in `broker.conf` and `standalone.conf` are used for the namespaces that do not have namespace-level offload policies. Each namespace can have its own offload policy. If you want to set an offload policy for a namespace, use the [`pulsar-admin namespaces set-offload-policies options`](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-policies-em-) command.
-
-#### Example
-
-This example sets the GCS offloader threshold size to 10 MB using pulsar-admin.
-
-```bash
-
-pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace
-
-```
-
-:::tip
-
-For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#set-offload-threshold).
-
-:::
-
-### Configure GCS offloader to run manually
-
-For individual topics, you can trigger the GCS offloader manually using one of the following methods:
-
-- Use the REST endpoint.
-
-- Use CLI tools (such as pulsar-admin).
-
-  To trigger the GCS offloader via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are moved to GCS until the threshold is no longer exceeded. Older segments are moved first.
-
-#### Example
-
-- This example triggers the GCS offloader to run manually using pulsar-admin with the command `pulsar-admin topics offload (topic-name) (threshold)`.
-
-  ```bash
-  
-  pulsar-admin topics offload persistent://my-tenant/my-namespace/topic1 10M
-  
-  ```
-
-  **Output**
-
-  ```bash
-  
-  Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1
-  
-  ```
-
-  :::tip
-
-  For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#offload).
-
-  :::
-
-- This example checks the GCS offloader status using pulsar-admin with the command `pulsar-admin topics offload-status options`.
-
-  ```bash
-  
-  pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1
-  
-  ```
-
-  **Output**
-
-  ```bash
-  
-  Offload is currently running
-  
-  ```
-
-  To wait for GCS to complete the job, add the `-w` flag.
- - ```bash - - pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Offload was a success - - ``` - - If there is an error in offloading, the error is propagated to the `pulsar-admin topics offload-status` command. - - ```bash - - pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Error in offload - null - - Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g= - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#offload-status). - - ::: - -## Tutorial - -For the complete and step-by-step instructions on how to use the GCS offloader with Pulsar, see [here](https://hub.streamnative.io/offloaders/gcs/2.5.1#usage). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/tiered-storage-overview.md b/site2/website/versioned_docs/version-2.8.2-deprecated/tiered-storage-overview.md deleted file mode 100644 index c635034f463b46..00000000000000 --- a/site2/website/versioned_docs/version-2.8.2-deprecated/tiered-storage-overview.md +++ /dev/null @@ -1,52 +0,0 @@ ---- -id: tiered-storage-overview -title: Overview of tiered storage -sidebar_label: "Overview" -original_id: tiered-storage-overview ---- - -Pulsar's **Tiered Storage** feature allows older backlog data to be moved from BookKeeper to long term and cheaper storage, while still allowing clients to access the backlog as if nothing has changed. - -* Tiered storage uses [Apache jclouds](https://jclouds.apache.org) to support [Amazon S3](https://aws.amazon.com/s3/) and [GCS (Google Cloud Storage)](https://cloud.google.com/storage/) for long term storage. - - With jclouds, it is easy to add support for more [cloud storage providers](https://jclouds.apache.org/reference/providers/#blobstore-providers) in the future. - - :::tip - - - For more information about how to use the AWS S3 offloader with Pulsar, see [here](tiered-storage-aws.md). - - - For more information about how to use the GCS offloader with Pulsar, see [here](tiered-storage-gcs.md). - - ::: - -* Tiered storage uses [Apache Hadoop](http://hadoop.apache.org/) to support filesystems for long term storage. - - With Hadoop, it is easy to add support for more filesystems in the future. - - :::tip - - For more information about how to use the filesystem offloader with Pulsar, see [here](tiered-storage-filesystem.md). - - ::: - -## When to use tiered storage? - -Tiered storage should be used when you have a topic for which you want to keep a very long backlog for a long time. - -For example, if you have a topic containing user actions which you use to train your recommendation systems, you may want to keep that data for a long time, so that if you change your recommendation algorithm, you can rerun it against your full user history. - -## How does tiered storage work? 
-
-A topic in Pulsar is backed by a **log**, known as a **managed ledger**. This log is composed of an ordered list of segments. Pulsar only writes to the final segment of the log. All previous segments are sealed. The data within a segment is immutable. This is known as a **segment oriented architecture**.
-
-![Tiered storage](/assets/pulsar-tiered-storage.png "Tiered Storage")
-
-The tiered storage offloading mechanism takes advantage of this segment oriented architecture. When offloading is requested, the segments of the log are copied one-by-one to tiered storage. All segments of the log, apart from the segment currently being written to, can be offloaded to tiered storage.
-
-Data written to BookKeeper is replicated to 3 physical machines by default. However, once a segment is sealed in BookKeeper, it becomes immutable and can be copied to long term storage. Long term storage can achieve cost savings by using mechanisms such as [Reed-Solomon error correction](https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction) to require fewer physical copies of data.
-
-Before offloading ledgers to long term storage, you need to configure buckets, credentials, and other properties for the cloud storage service. Additionally, Pulsar uses multi-part objects to upload the segment data, and brokers may crash while uploading the data. It is recommended that you add a life cycle rule for your bucket to expire incomplete multi-part uploads after a day or two to avoid getting charged for incomplete uploads. Moreover, you can trigger the offloading operation manually (via REST API or CLI) or automatically (via CLI).
-
-After offloading ledgers to long term storage, you can still query data in the offloaded ledgers with Pulsar SQL.
-
-For more information about tiered storage for Pulsar topics, see [here](https://github.com/apache/pulsar/wiki/PIP-17:-Tiered-storage-for-Pulsar-topics).
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/transaction-api.md b/site2/website/versioned_docs/version-2.8.2-deprecated/transaction-api.md
deleted file mode 100644
index fedc314646c938..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/transaction-api.md
+++ /dev/null
@@ -1,172 +0,0 @@
----
-id: transactions-api
-title: Transactions API
-sidebar_label: "Transactions API"
-original_id: transactions-api
----
-
-All messages in a transaction are available only to consumers after the transaction has been committed. If a transaction has been aborted, all the writes and acknowledgments in this transaction roll back.
-
-## Prerequisites
-
-1. To enable transactions in Pulsar, you need to configure the following parameter in the `broker.conf` or `standalone.conf` file.
-
-   ```
-   
-   transactionCoordinatorEnabled=true
-   
-   ```
-
-2. Initialize transaction coordinator metadata, so the transaction coordinators can leverage the advantages of the partitioned topic, such as load balancing.
-
-   ```
-   
-   bin/pulsar initialize-transaction-coordinator-metadata -cs 127.0.0.1:2181 -c standalone
-   
-   ```
-
-After initializing the transaction coordinator metadata, you can use the transactions API. The following APIs are available.
-
-## Initialize Pulsar client
-
-You can enable transactions when building the Pulsar client, which also initializes the transaction coordinator client.
-
-```
-
-PulsarClient pulsarClient = PulsarClient.builder()
-        .serviceUrl("pulsar://localhost:6650")
-        .enableTransaction(true)
-        .build();
-
-```
-
-## Start transactions
-You can start a transaction in the following way.
-
-```
-
-Transaction txn = pulsarClient
-        .newTransaction()
-        .withTransactionTimeout(5, TimeUnit.MINUTES)
-        .build()
-        .get();
-
-```
-
-## Produce transaction messages
-
-A transaction parameter is required when producing new transaction messages. The semantics of transaction messages in Pulsar are `read-committed`, so the consumer cannot receive the ongoing transaction messages before the transaction is committed.
-
-```
-
-producer.newMessage(txn).value("Hello Pulsar Transaction".getBytes()).sendAsync();
-
-```
-
-## Acknowledge the messages with the transaction
-
-The transaction acknowledgement requires a transaction parameter. The transaction acknowledgement marks the message as being in the pending-ack state. When the transaction is committed, the pending-ack state becomes the ack state. If the transaction is aborted, the pending-ack state becomes the unack state.
-
-```
-
-Message<byte[]> message = consumer.receive();
-consumer.acknowledgeAsync(message.getMessageId(), txn);
-
-```
-
-## Commit transactions
-
-When the transaction is committed, consumers receive the transaction messages and the pending-ack state becomes the ack state.
-
-```
-
-txn.commit().get();
-
-```
-
-## Abort transactions
-
-When the transaction is aborted, the transaction acknowledgement is canceled and the pending-ack messages are redelivered.
-
-```
-
-txn.abort().get();
-
-```
-
-### Example
-The following example shows how messages are processed in a transaction.
-
-```
-
-PulsarClient pulsarClient = PulsarClient.builder()
-        .serviceUrl(getPulsarServiceList().get(0).getBrokerServiceUrl())
-        .statsInterval(0, TimeUnit.SECONDS)
-        .enableTransaction(true)
-        .build();
-
-String sourceTopic = "public/default/source-topic";
-String sinkTopic = "public/default/sink-topic";
-
-Producer<String> sourceProducer = pulsarClient
-        .newProducer(Schema.STRING)
-        .topic(sourceTopic)
-        .create();
-sourceProducer.newMessage().value("hello pulsar transaction").sendAsync();
-
-Consumer<String> sourceConsumer = pulsarClient
-        .newConsumer(Schema.STRING)
-        .topic(sourceTopic)
-        .subscriptionName("test")
-        .subscriptionType(SubscriptionType.Shared)
-        .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest)
-        .subscribe();
-
-Producer<String> sinkProducer = pulsarClient
-        .newProducer(Schema.STRING)
-        .topic(sinkTopic)
-        .create();
-
-Transaction txn = pulsarClient
-        .newTransaction()
-        .withTransactionTimeout(5, TimeUnit.MINUTES)
-        .build()
-        .get();
-
-// The source message acknowledgement and the sink message produce belong to one transaction,
-// so they are combined into an atomic operation.
-Message<String> message = sourceConsumer.receive();
-sourceConsumer.acknowledgeAsync(message.getMessageId(), txn);
-sinkProducer.newMessage(txn).value("sink data").sendAsync();
-
-txn.commit().get();
-
-```
-
-## Enable batch messages in transactions
-
-To enable batch messages in transactions, you need to enable the batch index acknowledgement feature. Transactional acknowledgement checks whether the batch index acknowledgement conflicts with previous acknowledgements.
-
-To enable batch index acknowledgement, you need to set `acknowledgmentAtBatchIndexLevelEnabled` to `true` in the `broker.conf` or `standalone.conf` file.
-
-```
-
-acknowledgmentAtBatchIndexLevelEnabled=true
-
-```
-
-And then you need to call the `enableBatchIndexAcknowledgment(true)` method in the consumer builder.
-
-```
-
-Consumer<byte[]> sinkConsumer = pulsarClient
-        .newConsumer()
-        .topic(transferTopic)
-        .subscriptionName("sink-topic")
-        .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest)
-        .subscriptionType(SubscriptionType.Shared)
-        .enableBatchIndexAcknowledgment(true) // enable batch index acknowledgement
-        .subscribe();
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/transaction-guarantee.md b/site2/website/versioned_docs/version-2.8.2-deprecated/transaction-guarantee.md
deleted file mode 100644
index 9db2d254e159f6..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/transaction-guarantee.md
+++ /dev/null
@@ -1,17 +0,0 @@
----
-id: transactions-guarantee
-title: Transactions Guarantee
-sidebar_label: "Transactions Guarantee"
-original_id: transactions-guarantee
----
-
-Pulsar transactions support the following guarantees.
-
-## Atomic multi-partition writes and multi-subscription acknowledges
-Transactions enable atomic writes to multiple topics and partitions. A batch of messages in a transaction can be received from, produced to, and acknowledged by many partitions. All the operations involved in a transaction succeed or fail as a single unit.
-
-## Read transactional message
-All the messages in a transaction are available to consumers only after the transaction is committed.
-
-## Acknowledge transactional message
-A message is acknowledged successfully only once by a consumer under the subscription when acknowledging the message with the transaction ID.
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/txn-how.md b/site2/website/versioned_docs/version-2.8.2-deprecated/txn-how.md
deleted file mode 100644
index add072448aeb34..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/txn-how.md
+++ /dev/null
@@ -1,151 +0,0 @@
----
-id: txn-how
-title: How transactions work?
-sidebar_label: "How transactions work?"
-original_id: txn-how
----
-
-This section describes transaction components and how the components work together. For the complete design details, see [PIP-31: Transactional Streaming](https://docs.google.com/document/d/145VYp09JKTw9jAT-7yNyFU255FptB2_B2Fye100ZXDI/edit#heading=h.bm5ainqxosrx).
-
-## Key concepts
-
-It is important to know the following key concepts, as they are a prerequisite for understanding how transactions work.
-
-### Transaction coordinator
-
-The transaction coordinator (TC) is a module running inside a Pulsar broker.
-
-* It maintains the entire life cycle of transactions and prevents a transaction from getting into an incorrect status.
-
-* It handles transaction timeouts and ensures that a transaction is aborted after its timeout.
-
-### Transaction log
-
-All the transaction metadata persists in the transaction log. The transaction log is backed by a Pulsar topic. If the transaction coordinator crashes, it can restore the transaction metadata from the transaction log.
-
-The transaction log stores the transaction status rather than the actual messages in the transaction (the actual messages are stored in the actual topic partitions).
-
-### Transaction buffer
-
-Messages produced to a topic partition within a transaction are stored in the transaction buffer (TB) of that topic partition. The messages in the transaction buffer are not visible to consumers until the transactions are committed. The messages in the transaction buffer are discarded when the transactions are aborted.
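-
-The snippet below is a minimal sketch of this visibility rule. It reuses a transaction-enabled client plus a producer and consumer on the same topic, as in the Transactions API examples earlier; the one-second receive timeout is illustrative.
-
-```
-
-Transaction txn = pulsarClient.newTransaction()
-        .withTransactionTimeout(5, TimeUnit.MINUTES)
-        .build()
-        .get();
-
-// The message lands in the partition's transaction buffer and stays invisible.
-producer.newMessage(txn).value("pending".getBytes()).send();
-
-// A consumer polling now receives nothing: the transaction is still open.
-Message<byte[]> before = consumer.receive(1, TimeUnit.SECONDS); // returns null
-
-txn.commit().get();
-
-// After the commit, the buffered message is materialized and delivered.
-Message<byte[]> after = consumer.receive(1, TimeUnit.SECONDS);
-
-```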
- -Transaction buffer stores all ongoing and aborted transactions in memory. All messages are sent to the actual partitioned Pulsar topics. After transactions are committed, the messages in the transaction buffer are materialized (visible) to consumers. When the transactions are aborted, the messages in the transaction buffer are discarded. - -### Transaction ID - -Transaction ID (TxnID) identifies a unique transaction in Pulsar. The transaction ID is 128-bit. The highest 16 bits are reserved for the ID of the transaction coordinator, and the remaining bits are used for monotonically increasing numbers in each transaction coordinator. It is easy to locate the transaction crash with the TxnID. - -### Pending acknowledge state - -Pending acknowledge state maintains message acknowledgments within a transaction before a transaction completes. If a message is in the pending acknowledge state, the message cannot be acknowledged by other transactions until the message is removed from the pending acknowledge state. - -The pending acknowledge state is persisted to the pending acknowledge log (cursor ledger). A new broker can restore the state from the pending acknowledge log to ensure the acknowledgement is not lost. - -## Data flow - -At a high level, the data flow can be split into several steps: - -1. Begin a transaction. - -2. Publish messages with a transaction. - -3. Acknowledge messages with a transaction. - -4. End a transaction. - -To help you debug or tune the transaction for better performance, review the following diagrams and descriptions. - -### 1. Begin a transaction - -Before introducing the transaction in Pulsar, a producer is created and then messages are sent to brokers and stored in data logs. - -![](/assets/txn-3.png) - -Let’s walk through the steps for _beginning a transaction_. - -| Step | Description | -| --- | --- | -| 1.1 | The first step is that the Pulsar client finds the transaction coordinator. | -| 1.2 | The transaction coordinator allocates a transaction ID for the transaction. In the transaction log, the transaction is logged with its transaction ID and status (OPEN), which ensures the transaction status is persisted regardless of transaction coordinator crashes. | -| 1.3 | The transaction log sends the result of persisting the transaction ID to the transaction coordinator. | -| 1.4 | After the transaction status entry is logged, the transaction coordinator brings the transaction ID back to the Pulsar client. | - -### 2. Publish messages with a transaction - -In this stage, the Pulsar client enters a transaction loop, repeating the `consume-process-produce` operation for all the messages that comprise the transaction. This is a long phase and is potentially composed of multiple produce and acknowledgement requests. - -![](/assets/txn-4.png) - -Let’s walk through the steps for _publishing messages with a transaction_. - -| Step | Description | -| --- | --- | -| 2.1.1 | Before the Pulsar client produces messages to a new topic partition, it sends a request to the transaction coordinator to add the partition to the transaction. | -| 2.1.2 | The transaction coordinator logs the partition changes of the transaction into the transaction log for durability, which ensures the transaction coordinator knows all the partitions that a transaction is handling. The transaction coordinator can commit or abort changes on each partition at the end-partition phase. 
| -| 2.1.3 | The transaction log sends the result of logging the new partition (used for producing messages) to the transaction coordinator. | -| 2.1.4 | The transaction coordinator sends the result of adding a new produced partition to the transaction. | -| 2.2.1 | The Pulsar client starts producing messages to partitions. The flow of this part is the same as the normal flow of producing messages except that the batch of messages produced by a transaction contains transaction IDs. | -| 2.2.2 | The broker writes messages to a partition. | - -### 3. Acknowledge messages with a transaction - -In this phase, the Pulsar client sends a request to the transaction coordinator and a new subscription is acknowledged as a part of a transaction. - -![](/assets/txn-5.png) - -Let’s walk through the steps for _acknowledging messages with a transaction_. - -| Step | Description | -| --- | --- | -| 3.1.1 | The Pulsar client sends a request to add an acknowledged subscription to the transaction coordinator. | -| 3.1.2 | The transaction coordinator logs the addition of subscription, which ensures that it knows all subscriptions handled by a transaction and can commit or abort changes on each subscription at the end phase. | -| 3.1.3 | The transaction log sends the result of logging the new partition (used for acknowledging messages) to the transaction coordinator. | -| 3.1.4 | The transaction coordinator sends the result of adding the new acknowledged partition to the transaction. | -| 3.2 | The Pulsar client acknowledges messages on the subscription. The flow of this part is the same as the normal flow of acknowledging messages except that the acknowledged request carries a transaction ID. | -| 3.3 | The broker receiving the acknowledgement request checks if the acknowledgment belongs to a transaction or not. | - -### 4. End a transaction - -At the end of a transaction, the Pulsar client decides to commit or abort the transaction. The transaction can be aborted when a conflict is detected on acknowledging messages. - -#### 4.1 End transaction request - -When the Pulsar client finishes a transaction, it issues an end transaction request. - -![](/assets/txn-6.png) - -Let’s walk through the steps for _ending the transaction_. - -| Step | Description | -| --- | --- | -| 4.1.1 | The Pulsar client issues an end transaction request (with a field indicating whether the transaction is to be committed or aborted) to the transaction coordinator. | -| 4.1.2 | The transaction coordinator writes a COMMITTING or ABORTING message to its transaction log. | -| 4.1.3 | The transaction log sends the result of logging the committing or aborting status. | - -#### 4.2 Finalize a transaction - -The transaction coordinator starts the process of committing or aborting messages to all the partitions involved in this transaction. - -![](/assets/txn-7.png) - -Let’s walk through the steps for _finalizing a transaction_. - -| Step | Description | -| --- | --- | -| 4.2.1 | The transaction coordinator commits transactions on subscriptions and commits transactions on partitions at the same time. | -| 4.2.2 | The broker (produce) writes produced committed markers to the actual partitions. At the same time, the broker (ack) writes acked committed marks to the subscription pending ack partitions. | -| 4.2.3 | The data log sends the result of writing produced committed marks to the broker. At the same time, pending ack data log sends the result of writing acked committed marks to the broker. The cursor moves to the next position. 
| 
-
-#### 4.3 Mark a transaction as COMMITTED or ABORTED
-
-The transaction coordinator writes the final transaction status to the transaction log to complete the transaction.
-
-![](/assets/txn-8.png)
-
-Let’s walk through the steps for _marking a transaction as COMMITTED or ABORTED_.
-
-| Step | Description |
-| --- | --- |
-| 4.3.1 | After all produced messages and acknowledgements to all partitions involved in this transaction have been successfully committed or aborted, the transaction coordinator writes the final COMMITTED or ABORTED transaction status messages to its transaction log, indicating that the transaction is complete. All the messages associated with the transaction in its transaction log can be safely removed. |
-| 4.3.2 | The transaction log sends the result of the committed transaction to the transaction coordinator. |
-| 4.3.3 | The transaction coordinator sends the result of the committed transaction to the Pulsar client. |
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/txn-monitor.md b/site2/website/versioned_docs/version-2.8.2-deprecated/txn-monitor.md
deleted file mode 100644
index 5b50953772d092..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/txn-monitor.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-id: txn-monitor
-title: How to monitor transactions?
-sidebar_label: "How to monitor transactions?"
-original_id: txn-monitor
----
-
-You can monitor the status of the transactions in Prometheus and Grafana using the [transaction metrics](https://pulsar.apache.org/docs/en/next/reference-metrics/#pulsar-transaction).
-
-For how to configure Prometheus and Grafana, see [here](https://pulsar.apache.org/docs/en/next/deploy-monitoring).
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/txn-use.md b/site2/website/versioned_docs/version-2.8.2-deprecated/txn-use.md
deleted file mode 100644
index de0e4a92f1b27e..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/txn-use.md
+++ /dev/null
@@ -1,105 +0,0 @@
----
-id: txn-use
-title: How to use transactions?
-sidebar_label: "How to use transactions?"
-original_id: txn-use
----
-
-## Transaction API
-
-The transaction feature is primarily a server-side and protocol-level feature. You can use the transaction feature via the [transaction API](https://pulsar.apache.org/api/admin/), which is available in **Pulsar 2.8.0 or later**.
-
-To use the transaction API, you do not need any additional settings in the Pulsar client. **By default**, transactions are **disabled**.
-
-Currently, the transaction API is only available for **Java** clients. Support for other language clients will be added in future releases.
-
-## Quick start
-
-This section provides an example of how to use the transaction API to send and receive messages in a Java client.
-
-1. Start Pulsar 2.8.0 or later.
-
-2. Enable transactions.
-
-   Change the configuration in the `broker.conf` file.
-
-   ```
-   
-   transactionCoordinatorEnabled=true
-   
-   ```
-
-   If you want to enable batch messages in transactions, follow the steps below.
-
-   Set `acknowledgmentAtBatchIndexLevelEnabled` to `true` in the `broker.conf` or `standalone.conf` file.
-
-   ```
-   
-   acknowledgmentAtBatchIndexLevelEnabled=true
-   
-   ```
-
-3. Initialize transaction coordinator metadata.
-
-   The transaction coordinator can leverage the advantages of partitioned topics (such as load balancing).
-
-   **Input**
-
-   ```
-   
-   bin/pulsar initialize-transaction-coordinator-metadata -cs 127.0.0.1:2181 -c standalone
-   
-   ```
-
-   **Output**
-
-   ```
-   
-   Transaction coordinator metadata setup success
-   
-   ```
-
-4. Initialize a Pulsar client.
-
-   ```
-   
-   PulsarClient client = PulsarClient.builder()
-           .serviceUrl("pulsar://localhost:6650")
-           .enableTransaction(true)
-           .build();
-   
-   ```
-
-Now you can start using the transaction API to send and receive messages. Below is an example of a `consume-process-produce` application written in Java.
-
-![](/assets/txn-9.png)
-
-Let’s walk through this example step by step.
-
-| Step | Description |
-| --- | --- |
-| 1. Start a transaction. | The application opens a new transaction by calling PulsarClient.newTransaction. It specifies the transaction timeout as 1 minute. If the transaction is not committed within 1 minute, the transaction is automatically aborted. |
-| 2. Receive messages from topics. | The application creates two normal consumers to receive messages from topic input-topic-1 and input-topic-2 respectively. |
-| 3. Publish messages to topics with the transaction. | The application creates two producers to produce the resulting messages to the output topics _output-topic-1_ and _output-topic-2_ respectively. The application applies the processing logic and generates two output messages. The application sends those two output messages as part of the transaction opened in the first step via Producer.newMessage(Transaction). |
-| 4. Acknowledge the messages with the transaction. | In the same transaction, the application acknowledges the two input messages. |
-| 5. Commit the transaction. | The application commits the transaction by calling Transaction.commit() on the open transaction. The commit operation ensures the two input messages are marked as acknowledged and the two output messages are written successfully to the output topics. |
-
-[1] Example of enabling batch message acknowledgement in transactions in the consumer builder.
-
-```
-
-Consumer<byte[]> sinkConsumer = pulsarClient
-        .newConsumer()
-        .topic(transferTopic)
-        .subscriptionName("sink-topic")
-        .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest)
-        .subscriptionType(SubscriptionType.Shared)
-        .enableBatchIndexAcknowledgment(true) // enable batch index acknowledgement
-        .subscribe();
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/txn-what.md b/site2/website/versioned_docs/version-2.8.2-deprecated/txn-what.md
deleted file mode 100644
index 844f19a700f8f0..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/txn-what.md
+++ /dev/null
@@ -1,60 +0,0 @@
----
-id: txn-what
-title: What are transactions?
-sidebar_label: "What are transactions?"
-original_id: txn-what
----
-
-Transactions strengthen the message delivery semantics of Apache Pulsar and [processing guarantees of Pulsar Functions](https://pulsar.apache.org/docs/en/next/functions-overview/#processing-guarantees). The Pulsar Transaction API supports atomic writes and acknowledgments across multiple topics.
-
-Transactions allow:
-
-- A producer to send a batch of messages to multiple topics where all messages in the batch are eventually visible to any consumer, or none are ever visible to consumers.
-
-- End-to-end exactly-once semantics (execute a `consume-process-produce` operation exactly once).
-
-## Transaction semantics
-
-Pulsar transactions have the following semantics:
-
-* All operations within a transaction are committed as a single unit.
-
-  * Either all messages are committed, or none of them are.
-
-  * Each message is written or processed exactly once, without data loss or duplicates (even in the event of failures).
-
-  * If a transaction is aborted, all the writes and acknowledgments in this transaction roll back.
-
-* A group of messages in a transaction can be received from, produced to, and acknowledged by multiple partitions.
-
-  * Consumers are only allowed to read committed (acked) messages. In other words, the broker does not deliver transactional messages which are part of an open transaction or messages which are part of an aborted transaction.
-
-  * Message writes across multiple partitions are atomic.
-
-  * Message acks across multiple subscriptions are atomic. A message is acked successfully only once by a consumer under the subscription when acknowledging the message with the transaction ID.
-
-## Transactions and stream processing
-
-Stream processing on Pulsar is a `consume-process-produce` operation on Pulsar topics:
-
-* `Consume`: a source operator that runs a Pulsar consumer reads messages from one or multiple Pulsar topics.
-
-* `Process`: a processing operator transforms the messages.
-
-* `Produce`: a sink operator that runs a Pulsar producer writes the resulting messages to one or multiple Pulsar topics.
-
-![](/assets/txn-2.png)
-
-Pulsar transactions support end-to-end exactly-once stream processing, which means messages are not lost from a source operator and messages are not duplicated to a sink operator.
-
-## Use case
-
-Prior to Pulsar 2.8.0, there was no easy way to build stream processing applications with Pulsar to achieve exactly-once processing guarantees. With transactions introduced in Pulsar 2.8.0, the following services support exactly-once semantics:
-
-* [Pulsar Flink connector](https://flink.apache.org/2021/01/07/pulsar-flink-connector-270.html)
-
-  Prior to Pulsar 2.8.0, if you wanted to build stream applications using Pulsar and Flink, the Pulsar Flink connector only supported an exactly-once source connector and an at-least-once sink connector, which meant the highest end-to-end processing guarantee was at-least-once: it was possible for streaming applications to produce duplicate messages to the resulting topics in Pulsar.
-
-  With transactions introduced in Pulsar 2.8.0, the Pulsar Flink sink connector can support exactly-once semantics by implementing the designated `TwoPhaseCommitSinkFunction` and hooking up the Flink sink message lifecycle with the Pulsar transaction API.
-
-* Support for Pulsar Functions and other connectors will be added in future releases.
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/txn-why.md b/site2/website/versioned_docs/version-2.8.2-deprecated/txn-why.md
deleted file mode 100644
index 1ed8769977654e..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/txn-why.md
+++ /dev/null
@@ -1,45 +0,0 @@
----
-id: txn-why
-title: Why transactions?
-sidebar_label: "Why transactions?"
-original_id: txn-why
----
-
-Pulsar transactions (txn) enable event streaming applications to consume, process, and produce messages in one atomic operation. The reasons for developing this feature can be summarized as below.
-
-## Demand of stream processing
-
-The demand for stream processing applications with stronger processing guarantees has grown along with the rise of stream processing. For example, in the financial industry, financial institutions use stream processing engines to process debits and credits for users. This type of use case requires that every message is processed exactly once, without exception.
-
-In other words, if a stream processing application consumes message A and
-produces the result as a message B (B = f(A)), then exactly-once processing
-guarantee means that A can only be marked as consumed if and only if B is
-successfully produced, and vice versa.
-
-![](/assets/txn-1.png)
-
-The Pulsar transactions API strengthens the message delivery semantics and the processing guarantees for stream processing. It enables stream processing applications to consume, process, and produce messages in one atomic operation. That means a batch of messages in a transaction can be received from, produced to, and acknowledged by many topic partitions. All the operations involved in a transaction succeed or fail as one single unit.
-
-## Limitation of idempotent producer
-
-Avoiding data loss or duplication can be achieved by using the Pulsar idempotent producer, but it does not provide guarantees for writes across multiple partitions.
-
-In Pulsar, the highest level of message delivery guarantee is using an [idempotent producer](https://pulsar.apache.org/docs/en/next/concepts-messaging/#producer-idempotency) with the exactly-once semantic at one single partition, that is, each message is persisted exactly once without data loss and duplication. However, there are some limitations in this solution:
-
-- Due to the monotonically increasing sequence ID, this solution only works on a single partition and within a single producer session (that is, for producing one message), so there is no atomicity when producing multiple messages to one or multiple partitions.
-
-  In this case, if there are some failures (for example, client / broker / bookie crashes, network failure, and more) in the process of producing and receiving messages, messages are re-processed and re-delivered, which may cause data loss or data duplication:
-
-  - For the producer: if the producer retries sending messages, some messages are persisted multiple times; if the producer does not retry sending messages, some messages are persisted once and other messages are lost.
-
-  - For the consumer: since the consumer does not know whether the broker has received messages or not, the consumer may not retry sending acks, which causes it to receive duplicate messages.
-
-- Similarly, Pulsar Functions only guarantees exactly-once semantics for an idempotent function on a single event, rather than for processing multiple events or producing multiple results that happen exactly once.
-
-  For example, if a function accepts multiple events and produces one result (for example, a window function), the function may fail between producing the result and acknowledging the incoming messages, or even between acknowledging individual events, which causes all (or some) incoming messages to be re-delivered and reprocessed, and a new result to be generated.
-
-  However, many scenarios need atomic guarantees across multiple partitions and sessions.
-
-- Consumers need to rely on more mechanisms to acknowledge (ack) messages once.
-
-  For example, consumers are required to store the MessageID along with its acked state. After the topic is unloaded, the subscription can recover the acked state of this MessageID in memory when the topic is loaded again.
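-
-As a concrete illustration of the single-partition guarantee discussed above, the following minimal sketch configures such an idempotent producer. It assumes `brokerDeduplicationEnabled=true` in `broker.conf`, and the topic and producer name are illustrative.
-
-```
-
-Producer<String> producer = pulsarClient.newProducer(Schema.STRING)
-        .topic("persistent://public/default/debits")
-        .producerName("debits-writer")     // a stable name lets the broker track sequence IDs
-        .sendTimeout(0, TimeUnit.SECONDS)  // retry forever rather than time out, avoiding gaps
-        .create();
-
-producer.send("debit:user-1:100");         // persisted exactly once on this single partition
-
-```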
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.8.2-deprecated/window-functions-context.md b/site2/website/versioned_docs/version-2.8.2-deprecated/window-functions-context.md
deleted file mode 100644
index f80fea57989ef0..00000000000000
--- a/site2/website/versioned_docs/version-2.8.2-deprecated/window-functions-context.md
+++ /dev/null
@@ -1,581 +0,0 @@
----
-id: window-functions-context
-title: Window Functions Context
-sidebar_label: "Window Functions: Context"
-original_id: window-functions-context
----
-
-The Java SDK provides access to a **window context object** that can be used by a window function. This context object provides a wide variety of information and functionality for Pulsar window functions, as below.
-
-- [Spec](#spec)
-
-  * Names of all input topics and the output topic associated with the function.
-  * Tenant and namespace associated with the function.
-  * Pulsar window function name, ID, and version.
-  * ID of the Pulsar function instance running the window function.
-  * Number of instances that invoke the window function.
-  * Built-in type or custom class name of the output schema.
-
-- [Logger](#logger)
-
-  * Logger object used by the window function, which can be used to create window function log messages.
-
-- [User config](#user-config)
-
-  * Access to arbitrary user configuration values.
-
-- [Routing](#routing)
-
-  * Routing is supported in Pulsar window functions. Pulsar window functions send messages to arbitrary topics as per the `publish` interface.
-
-- [Metrics](#metrics)
-
-  * Interface for recording metrics.
-
-- [State storage](#state-storage)
-
-  * Interface for storing and retrieving state in [state storage](#state-storage).
-
-## Spec
-
-Spec contains the basic information of a function.
-
-### Get input topics
-
-The `getInputTopics` method gets the **name list** of all input topics.
-
-This example demonstrates how to get the name list of all input topics in a Java window function.
-
-```java
-
-public class GetInputTopicsWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        Collection<String> inputTopics = context.getInputTopics();
-        System.out.println(inputTopics);
-
-        return null;
-    }
-
-}
-
-```
-
-### Get output topic
-
-The `getOutputTopic` method gets the **name of the topic** to which the message is sent.
-
-This example demonstrates how to get the name of the output topic in a Java window function.
-
-```java
-
-public class GetOutputTopicWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        String outputTopic = context.getOutputTopic();
-        System.out.println(outputTopic);
-
-        return null;
-    }
-}
-
-```
-
-### Get tenant
-
-The `getTenant` method gets the tenant name associated with the window function.
-
-This example demonstrates how to get the tenant name in a Java window function.
-
-```java
-
-public class GetTenantWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        String tenant = context.getTenant();
-        System.out.println(tenant);
-
-        return null;
-    }
-
-}
-
-```
-
-### Get namespace
-
-The `getNamespace` method gets the namespace associated with the window function.
-
-This example demonstrates how to get the namespace in a Java window function.
-
-```java
-
-public class GetNamespaceWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        String ns = context.getNamespace();
-        System.out.println(ns);
-
-        return null;
-    }
-
-}
-
-```
-
-### Get function name
-
-The `getFunctionName` method gets the window function name.
-
-This example demonstrates how to get the function name in a Java window function.
-
-```java
-
-public class GetNameOfWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        String functionName = context.getFunctionName();
-        System.out.println(functionName);
-
-        return null;
-    }
-
-}
-
-```
-
-### Get function ID
-
-The `getFunctionId` method gets the window function ID.
-
-This example demonstrates how to get the function ID in a Java window function.
-
-```java
-
-public class GetFunctionIDWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        String functionID = context.getFunctionId();
-        System.out.println(functionID);
-
-        return null;
-    }
-
-}
-
-```
-
-### Get function version
-
-The `getFunctionVersion` method gets the window function version.
-
-This example demonstrates how to get the function version of a Java window function.
-
-```java
-
-public class GetVersionOfWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        String functionVersion = context.getFunctionVersion();
-        System.out.println(functionVersion);
-
-        return null;
-    }
-
-}
-
-```
-
-### Get instance ID
-
-The `getInstanceId` method gets the instance ID of a window function.
-
-This example demonstrates how to get the instance ID in a Java window function.
-
-```java
-
-public class GetInstanceIDWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        int instanceId = context.getInstanceId();
-        System.out.println(instanceId);
-
-        return null;
-    }
-
-}
-
-```
-
-### Get num instances
-
-The `getNumInstances` method gets the number of instances that invoke the window function.
-
-This example demonstrates how to get the number of instances in a Java window function.
-
-```java
-
-public class GetNumInstancesWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        int numInstances = context.getNumInstances();
-        System.out.println(numInstances);
-
-        return null;
-    }
-
-}
-
-```
-
-### Get output schema type
-
-The `getOutputSchemaType` method gets the built-in type or custom class name of the output schema.
-
-This example demonstrates how to get the output schema type of a Java window function.
-
-```java
-
-public class GetOutputSchemaTypeWindowFunction implements WindowFunction<String, Void> {
-
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        String schemaType = context.getOutputSchemaType();
-        System.out.println(schemaType);
-
-        return null;
-    }
-}
-
-```
-
-## Logger
-
-Pulsar window functions using the Java SDK have access to an [SLF4j](https://www.slf4j.org/) [`Logger`](https://www.slf4j.org/api/org/apache/log4j/Logger.html) object that can be used to produce logs at the chosen log level.
-
-This example uses the provided logger to record each incoming record at the `INFO` level in a Java window function.
-
-```java
-
-import java.util.Collection;
-import org.apache.pulsar.functions.api.Record;
-import org.apache.pulsar.functions.api.WindowContext;
-import org.apache.pulsar.functions.api.WindowFunction;
-import org.slf4j.Logger;
-
-public class LoggingWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        Logger log = context.getLogger();
-        for (Record<String> record : inputs) {
-            log.info(record + "-window-log");
-        }
-        return null;
-    }
-
-}
-
-```
-
-If you need your function to produce logs, specify a log topic when creating or running the function.
-
-```bash
-
-bin/pulsar-admin functions create \
-  --jar my-functions.jar \
-  --classname my.package.LoggingWindowFunction \
-  --log-topic persistent://public/default/logging-function-logs \
-  # Other function configs
-
-```
-
-You can access all logs produced by `LoggingWindowFunction` via the `persistent://public/default/logging-function-logs` topic.
-
-## Metrics
-
-Pulsar window functions can publish arbitrary metrics to the metrics interface, which can be queried.
-
-:::note
-
-If a Pulsar window function uses the language-native interface for Java, that function is not able to publish metrics and stats to Pulsar.
-
-:::
-
-You can record metrics using the context object on a per-key basis.
-
-This example records the event time of each message under the `MessageEventTime` key every time the function processes a message in a Java window function.
-
-```java
-
-import java.util.Collection;
-import org.apache.pulsar.functions.api.Record;
-import org.apache.pulsar.functions.api.WindowContext;
-import org.apache.pulsar.functions.api.WindowFunction;
-
-
-/**
- * Example function that wants to keep track of
- * the event time of each message sent.
- */
-public class UserMetricWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-
-        for (Record<String> record : inputs) {
-            if (record.getEventTime().isPresent()) {
-                context.recordMetric("MessageEventTime", record.getEventTime().get().doubleValue());
-            }
-        }
-
-        return null;
-    }
-}
-
-```
-
-## User config
-
-When you run or update Pulsar Functions that are created using the SDK, you can pass arbitrary key/value pairs to them with the `--user-config` flag. Key/value pairs **must** be specified as JSON.
-
-This example passes a user-configured key/value to a function.
-
-```bash
-
-bin/pulsar-admin functions create \
-  --name word-filter \
-  --user-config '{"forbidden-word":"rosebud"}' \
-  # Other function configs
-
-```
-
-### API
-You can use the following APIs to get user-defined information for window functions.
-#### getUserConfigMap
-
-The `getUserConfigMap` API gets a map of all user-defined key/value configurations for the window function.
-
-```java
-
-/**
- * Get a map of all user-defined key/value configs for the function.
- *
- * @return The full map of user-defined config values
- */
-Map<String, Object> getUserConfigMap();
-
-```
-
-#### getUserConfigValue
-
-The `getUserConfigValue` API gets a user-defined key/value.
-
-```java
-
-/**
- * Get any user-defined key/value.
- *
- * @param key The key
- * @return The Optional value specified by the user for that key.
- */
-Optional<Object> getUserConfigValue(String key);
-
-```
-
-#### getUserConfigValueOrDefault
-
-The `getUserConfigValueOrDefault` API gets a user-defined key/value or a default value if none is present.
-
-```java
-
-/**
- * Get any user-defined key/value or a default value if none is present.
- *
- * @param key
- * @param defaultValue
- * @return Either the user config value associated with a given key or a supplied default value
- */
-Object getUserConfigValueOrDefault(String key, Object defaultValue);
-
-```
-
-This example demonstrates how to access key/value pairs provided to Pulsar window functions.
-
-The Java SDK context object enables you to access key/value pairs provided to Pulsar window functions via the command line (as JSON).
-
-:::tip
-
-For all key/value pairs passed to Java window functions, both the `key` and the `value` are `String`. To set the value to be a different type, you need to deserialize it from the `String` type.
-
-:::
-
-This example passes a key/value pair in a Java window function.
-
-```bash
-
-bin/pulsar-admin functions create \
-  --user-config '{"word-of-the-day":"verdure"}' \
-  # Other function configs
-
-```
-
-This example accesses values in a Java window function.
-
-The `UserConfigWindowFunction` below returns the value of the `WhatToWrite` user config every time the function is invoked (which means every time a message arrives). The config value is changed **only** when the function is updated with a new config value via
-multiple ways, such as the command line tool or REST API.
-
-```java
-
-import org.apache.pulsar.functions.api.Record;
-import org.apache.pulsar.functions.api.WindowContext;
-import org.apache.pulsar.functions.api.WindowFunction;
-
-import java.util.Collection;
-import java.util.Optional;
-
-public class UserConfigWindowFunction implements WindowFunction<String, String> {
-    @Override
-    public String process(Collection<Record<String>> input, WindowContext context) throws Exception {
-        Optional<Object> whatToWrite = context.getUserConfigValue("WhatToWrite");
-        if (whatToWrite.isPresent()) {
-            return (String) whatToWrite.get();
-        } else {
-            return "Not a nice way";
-        }
-    }
-
-}
-
-```
-
-If no value is provided, you can access the entire user config map or set a default value.
-
-```java
-
-// Get the whole config map
-Map<String, Object> allConfigs = context.getUserConfigMap();
-
-// Get value or resort to default
-String wotd = (String) context.getUserConfigValueOrDefault("word-of-the-day", "perspicacious");
-
-```
-
-## Routing
-
-You can use the `context.publish()` interface to publish as many results as you want.
-
-This example shows that the `PublishWindowFunction` class uses the built-in function in the context to publish messages to the `publishTopic` in a Java function.
-
-```java
-
-public class PublishWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> input, WindowContext context) throws Exception {
-        String publishTopic = (String) context.getUserConfigValueOrDefault("publish-topic", "publishtopic");
-        String output = String.format("%s!", input);
-        context.publish(publishTopic, output);
-
-        return null;
-    }
-
-}
-
-```
-
-## State storage
-
-Pulsar window functions use [Apache BookKeeper](https://bookkeeper.apache.org) as a state storage interface. The Apache Pulsar installation (including the standalone installation) includes the deployment of BookKeeper bookies.
-
-Apache Pulsar integrates with the Apache BookKeeper `table service` to store the `state` for functions. For example, the `WordCount` function can store its `counters` state into the BookKeeper table service via the Pulsar Functions state APIs.
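-
-For example, once a `WordCount`-style function has stored its counters, you can read a counter back from the table service with the CLI below; this is a sketch, and the function name and key are illustrative.
-
-```bash
-
-bin/pulsar-admin functions querystate \
-  --tenant public \
-  --namespace default \
-  --name word-count \
-  --key hello
-
-```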
-
-States are key-value pairs, where the key is a string and the value is arbitrary binary data; counters are stored as 64-bit big-endian binary values. Keys are scoped to an individual Pulsar Function and shared between instances of that function.
-
-Currently, Pulsar window functions expose a Java API to access, update, and manage states. These APIs are available in the context object when you use Java SDK functions.
-
-| Java API| Description
-|---|---
-|`incrCounter`|Increases a built-in distributed counter referred to by the key.
-|`getCounter`|Gets the counter value for the key.
-|`putState`|Updates the state value for the key.
-
-You can use the following APIs to access, update, and manage states in Java window functions.
-
-#### incrCounter
-
-The `incrCounter` API increases a built-in distributed counter referred to by the key.
-
-Applications use the `incrCounter` API to change the counter of a given `key` by the given `amount`. If the `key` does not exist, a new key is created.
-
-```java
-
-    /**
-     * Increment the builtin distributed counter referred by key
-     * @param key The name of the key
-     * @param amount The amount to be incremented
-     */
-    void incrCounter(String key, long amount);
-
-```
-
-#### getCounter
-
-The `getCounter` API gets the counter value for the key.
-
-Applications use the `getCounter` API to retrieve the counter of a given `key` changed by the `incrCounter` API.
-
-```java
-
-    /**
-     * Retrieve the counter value for the key.
-     *
-     * @param key name of the key
-     * @return the amount of the counter value for this key
-     */
-    long getCounter(String key);
-
-```
-
-Besides the `getCounter` API, Pulsar also exposes a general key/value API (`putState`) for functions to store general key/value state.
-
-#### putState
-
-The `putState` API updates the state value for the key.
-
-```java
-
-    /**
-     * Update the state value for the key.
-     *
-     * @param key name of the key
-     * @param value state value of the key
-     */
-    void putState(String key, ByteBuffer value);
-
-```
-
-This example demonstrates how applications store states in Pulsar window functions.
-
-The logic of the `WordCountWindowFunction` is simple and straightforward.
-
-1. The function first splits the received string into multiple words using the regex `\\.`.
-
-2. For each `word`, the function increments the corresponding `counter` by 1 via `incrCounter(key, amount)`.
-
-```java
-
-import org.apache.pulsar.functions.api.Record;
-import org.apache.pulsar.functions.api.WindowContext;
-import org.apache.pulsar.functions.api.WindowFunction;
-
-import java.util.Arrays;
-import java.util.Collection;
-
-public class WordCountWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        for (Record<String> input : inputs) {
-            Arrays.asList(input.getValue().split("\\.")).forEach(word -> context.incrCounter(word, 1));
-        }
-        return null;
-
-    }
-}
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/about.md b/site2/website/versioned_docs/version-2.8.3-deprecated/about.md
deleted file mode 100644
index 729460195c0a26..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/about.md
+++ /dev/null
@@ -1,56 +0,0 @@
----
-slug: /
-id: about
-title: Welcome to the doc portal!
-sidebar_label: "About"
----
-
-import BlockLinks from "@site/src/components/BlockLinks";
-import BlockLink from "@site/src/components/BlockLink";
-import { docUrl } from "@site/src/utils/index";
-
-
-# Welcome to the doc portal!
-***
-
-This portal holds a variety of support documents to help you work with Pulsar.
If you’re a beginner, there are tutorials and explainers to help you understand Pulsar and how it works.
-
-If you’re an experienced coder, review this page to learn the easiest way to access the specific content you’re looking for.
-
-## Get Started Now
-
-
-
-## Navigation
-***
-
-There are several ways to get around in the doc portal. The index navigation pane is a table of contents for the entire archive. The archive is divided into sections, like chapters in a book. Click the title of a topic to view it.
-
-In-context links provide an easy way to immediately reference related topics. Click the underlined term to view the topic.
-
-Links to related topics can be found at the bottom of each topic page. Click the link to view the topic.
-
-![Page Linking](/assets/page-linking.png)
-
-## Continuous Improvement
-***
-
-As you probably know, we are working on a new user experience for our documentation portal that will make learning about and building on top of Apache Pulsar a much better experience. Whether you need overview concepts, how-to procedures, curated guides, or quick references, we’re building content to support it. This welcome page is just the first step. We will be providing updates every month.
-
-## Help Improve These Documents
-***
-
-You’ll notice an Edit button at the bottom and top of each page. Click it to open a landing page with instructions for requesting changes to posted documents. These are your resources. Participation is not only welcomed – it’s essential!
-
-## Join the Community!
-***
-
-The Pulsar community on GitHub is active, passionate, and knowledgeable. Join discussions, voice opinions, suggest features, and dive into the code itself. Find your Pulsar family here at [apache/pulsar](https://github.com/apache/pulsar).
-
-An equally passionate community can be found in the [Pulsar Slack channel](https://apache-pulsar.slack.com/). You’ll need an invitation to join, but many GitHub Pulsar community members are Slack members too. Join, hang out, learn, and make some new friends.
-
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/adaptors-kafka.md b/site2/website/versioned_docs/version-2.8.3-deprecated/adaptors-kafka.md
deleted file mode 100644
index ad0d886a9e04b8..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/adaptors-kafka.md
+++ /dev/null
@@ -1,274 +0,0 @@
----
-id: adaptors-kafka
-title: Pulsar adaptor for Apache Kafka
-sidebar_label: "Kafka client wrapper"
-original_id: adaptors-kafka
----
-
-
-Pulsar provides an easy option for applications that are currently written using the [Apache Kafka](http://kafka.apache.org) Java client API.
-
-## Using the Pulsar Kafka compatibility wrapper
-
-In an existing application, change the regular Kafka client dependency and replace it with the Pulsar Kafka wrapper. Remove the following dependency in `pom.xml`:
-
-```xml
-
-<dependency>
-  <groupId>org.apache.kafka</groupId>
-  <artifactId>kafka-clients</artifactId>
-  <version>0.10.2.1</version>
-</dependency>
-
-```
-
-Then include this dependency for the Pulsar Kafka wrapper:
-
-```xml
-
-<dependency>
-  <groupId>org.apache.pulsar</groupId>
-  <artifactId>pulsar-client-kafka</artifactId>
-  <version>@pulsar:version@</version>
-</dependency>
-
-```
-
-With the new dependency, the existing code works without any changes. You only need to adjust the configuration so that
-the producers and consumers point to a Pulsar service rather than a Kafka cluster, and use a particular
-Pulsar topic. 
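-
-The only changes typically needed are the service URL and the topic name. The following is a minimal sketch, assuming a standalone Pulsar broker on `localhost` and a topic named `my-topic`; both values are placeholders.
-
-```java
-
-Properties props = new Properties();
-// A Pulsar service URL replaces the Kafka bootstrap servers
-props.put("bootstrap.servers", "pulsar://localhost:6650");
-// Kafka topic names map to fully qualified Pulsar topic names
-String topic = "persistent://public/default/my-topic";
-
-```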
-
-## Using the Pulsar Kafka compatibility wrapper together with the existing Kafka client
-
-When migrating from Kafka to Pulsar, the application might need to use the original Kafka client
-and the Pulsar Kafka wrapper together during the migration. In that case, consider using the
-unshaded Pulsar Kafka client wrapper.
-
-```xml
-
-<dependency>
-  <groupId>org.apache.pulsar</groupId>
-  <artifactId>pulsar-client-kafka-original</artifactId>
-  <version>@pulsar:version@</version>
-</dependency>
-
-```
-
-When using this dependency, construct producers using `org.apache.kafka.clients.producer.PulsarKafkaProducer`
-instead of `org.apache.kafka.clients.producer.KafkaProducer`, and construct consumers using
-`org.apache.kafka.clients.consumer.PulsarKafkaConsumer` instead of `org.apache.kafka.clients.consumer.KafkaConsumer`.
-
-## Producer example
-
-```java
-
-// Topic needs to be a regular Pulsar topic
-String topic = "persistent://public/default/my-topic";
-
-Properties props = new Properties();
-// Point to a Pulsar service
-props.put("bootstrap.servers", "pulsar://localhost:6650");
-
-props.put("key.serializer", IntegerSerializer.class.getName());
-props.put("value.serializer", StringSerializer.class.getName());
-
-Producer<Integer, String> producer = new KafkaProducer<>(props);
-
-for (int i = 0; i < 10; i++) {
-    producer.send(new ProducerRecord<>(topic, i, "hello-" + i));
-    log.info("Message {} sent successfully", i);
-}
-
-producer.close();
-
-```
-
-## Consumer example
-
-```java
-
-String topic = "persistent://public/default/my-topic";
-
-Properties props = new Properties();
-// Point to a Pulsar service
-props.put("bootstrap.servers", "pulsar://localhost:6650");
-props.put("group.id", "my-subscription-name");
-props.put("enable.auto.commit", "false");
-props.put("key.deserializer", IntegerDeserializer.class.getName());
-props.put("value.deserializer", StringDeserializer.class.getName());
-
-Consumer<Integer, String> consumer = new KafkaConsumer<>(props);
-consumer.subscribe(Arrays.asList(topic));
-
-while (true) {
-    ConsumerRecords<Integer, String> records = consumer.poll(100);
-    records.forEach(record -> {
-        log.info("Received record: {}", record);
-    });
-
-    // Commit the last offset
-    consumer.commitSync();
-}
-
-```
-
-## Complete Examples
-
-You can find the complete producer and consumer examples [here](https://github.com/apache/pulsar-adapters/tree/master/pulsar-client-kafka-compat/pulsar-client-kafka-tests/src/test/java/org/apache/pulsar/client/kafka/compat/examples).
-
-## Compatibility matrix
-
-Currently, the Pulsar Kafka wrapper supports most of the operations offered by the Kafka API.
-
-### Producer
-
-APIs:
-
-| Producer Method | Supported | Notes |
-|:------------------------------------------------------------------------------|:----------|:-------------------------------------------------------------------------|
-| `Future<RecordMetadata> send(ProducerRecord<K, V> record)` | Yes | |
-| `Future<RecordMetadata> send(ProducerRecord<K, V> record, Callback callback)` | Yes | |
-| `void flush()` | Yes | |
-| `List<PartitionInfo> partitionsFor(String topic)` | No | |
-| `Map<MetricName, ? extends Metric> metrics()` | No | |
-| `void close()` | Yes | |
-| `void close(long timeout, TimeUnit unit)` | Yes | |
-
-Properties:
-
-| Config property | Supported | Notes |
-|:----------------------------------------|:----------|:------------------------------------------------------------------------------|
-| `acks` | Ignored | Durability and quorum writes are configured at the namespace level |
-| `auto.offset.reset` | Yes | It uses a default value of `earliest` if you do not give a specific setting. |
-| `batch.size` | Ignored | |
-| `bootstrap.servers` | Yes | |
-| `buffer.memory` | Ignored | |
-| `client.id` | Ignored | |
-| `compression.type` | Yes | Allows `gzip` and `lz4`. No `snappy`. 
| -| `connections.max.idle.ms` | Yes | Only support up to 2,147,483,647,000(Integer.MAX_VALUE * 1000) ms of idle time| -| `interceptor.classes` | Yes | | -| `key.serializer` | Yes | | -| `linger.ms` | Yes | Controls the group commit time when batching messages | -| `max.block.ms` | Ignored | | -| `max.in.flight.requests.per.connection` | Ignored | In Pulsar ordering is maintained even with multiple requests in flight | -| `max.request.size` | Ignored | | -| `metric.reporters` | Ignored | | -| `metrics.num.samples` | Ignored | | -| `metrics.sample.window.ms` | Ignored | | -| `partitioner.class` | Yes | | -| `receive.buffer.bytes` | Ignored | | -| `reconnect.backoff.ms` | Ignored | | -| `request.timeout.ms` | Ignored | | -| `retries` | Ignored | Pulsar client retries with exponential backoff until the send timeout expires. | -| `send.buffer.bytes` | Ignored | | -| `timeout.ms` | Yes | | -| `value.serializer` | Yes | | - - -### Consumer - -The following table lists consumer APIs. - -| Consumer Method | Supported | Notes | -|:--------------------------------------------------------------------------------------------------------|:----------|:------| -| `Set assignment()` | No | | -| `Set subscription()` | Yes | | -| `void subscribe(Collection topics)` | Yes | | -| `void subscribe(Collection topics, ConsumerRebalanceListener callback)` | No | | -| `void assign(Collection partitions)` | No | | -| `void subscribe(Pattern pattern, ConsumerRebalanceListener callback)` | No | | -| `void unsubscribe()` | Yes | | -| `ConsumerRecords poll(long timeoutMillis)` | Yes | | -| `void commitSync()` | Yes | | -| `void commitSync(Map offsets)` | Yes | | -| `void commitAsync()` | Yes | | -| `void commitAsync(OffsetCommitCallback callback)` | Yes | | -| `void commitAsync(Map offsets, OffsetCommitCallback callback)` | Yes | | -| `void seek(TopicPartition partition, long offset)` | Yes | | -| `void seekToBeginning(Collection partitions)` | Yes | | -| `void seekToEnd(Collection partitions)` | Yes | | -| `long position(TopicPartition partition)` | Yes | | -| `OffsetAndMetadata committed(TopicPartition partition)` | Yes | | -| `Map metrics()` | No | | -| `List partitionsFor(String topic)` | No | | -| `Map> listTopics()` | No | | -| `Set paused()` | No | | -| `void pause(Collection partitions)` | No | | -| `void resume(Collection partitions)` | No | | -| `Map offsetsForTimes(Map timestampsToSearch)` | No | | -| `Map beginningOffsets(Collection partitions)` | No | | -| `Map endOffsets(Collection partitions)` | No | | -| `void close()` | Yes | | -| `void close(long timeout, TimeUnit unit)` | Yes | | -| `void wakeup()` | No | | - -Properties: - -| Config property | Supported | Notes | -|:--------------------------------|:----------|:------------------------------------------------------| -| `group.id` | Yes | Maps to a Pulsar subscription name | -| `max.poll.records` | Yes | | -| `max.poll.interval.ms` | Ignored | Messages are "pushed" from broker | -| `session.timeout.ms` | Ignored | | -| `heartbeat.interval.ms` | Ignored | | -| `bootstrap.servers` | Yes | Needs to point to a single Pulsar service URL | -| `enable.auto.commit` | Yes | | -| `auto.commit.interval.ms` | Ignored | With auto-commit, acks are sent immediately to broker | -| `partition.assignment.strategy` | Ignored | | -| `auto.offset.reset` | Yes | Only support earliest and latest. 
| -| `fetch.min.bytes` | Ignored | | -| `fetch.max.bytes` | Ignored | | -| `fetch.max.wait.ms` | Ignored | | -| `interceptor.classes` | Yes | | -| `metadata.max.age.ms` | Ignored | | -| `max.partition.fetch.bytes` | Ignored | | -| `send.buffer.bytes` | Ignored | | -| `receive.buffer.bytes` | Ignored | | -| `client.id` | Ignored | | - - -## Customize Pulsar configurations - -You can configure Pulsar authentication provider directly from the Kafka properties. - -### Pulsar client properties - -| Config property | Default | Notes | -|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------| -| [`pulsar.authentication.class`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setAuthentication-org.apache.pulsar.client.api.Authentication-) | | Configure to auth provider. For example, `org.apache.pulsar.client.impl.auth.AuthenticationTls`.| -| [`pulsar.authentication.params.map`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setAuthentication-java.lang.String-java.util.Map-) | | Map which represents parameters for the Authentication-Plugin. | -| [`pulsar.authentication.params.string`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setAuthentication-java.lang.String-java.lang.String-) | | String which represents parameters for the Authentication-Plugin, for example, `key1:val1,key2:val2`. | -| [`pulsar.use.tls`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setUseTls-boolean-) | `false` | Enable TLS transport encryption. | -| [`pulsar.tls.trust.certs.file.path`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setTlsTrustCertsFilePath-java.lang.String-) | | Path for the TLS trust certificate store. | -| [`pulsar.tls.allow.insecure.connection`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setTlsAllowInsecureConnection-boolean-) | `false` | Accept self-signed certificates from brokers. | -| [`pulsar.operation.timeout.ms`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setOperationTimeout-int-java.util.concurrent.TimeUnit-) | `30000` | General operations timeout. | -| [`pulsar.stats.interval.seconds`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setStatsInterval-long-java.util.concurrent.TimeUnit-) | `60` | Pulsar client lib stats printing interval. | -| [`pulsar.num.io.threads`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setIoThreads-int-) | `1` | The number of Netty IO threads to use. | -| [`pulsar.connections.per.broker`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setConnectionsPerBroker-int-) | `1` | The maximum number of connection to each broker. | -| [`pulsar.use.tcp.nodelay`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setUseTcpNoDelay-boolean-) | `true` | TCP no-delay. | -| [`pulsar.concurrent.lookup.requests`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setConcurrentLookupRequest-int-) | `50000` | The maximum number of concurrent topic lookups. 
| -| [`pulsar.max.number.rejected.request.per.connection`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setMaxNumberOfRejectedRequestPerConnection-int-) | `50` | The threshold of errors to forcefully close a connection. | -| [`pulsar.keepalive.interval.ms`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientBuilder.html#keepAliveInterval-int-java.util.concurrent.TimeUnit-)| `30000` | Keep alive interval for each client-broker-connection. | - - -### Pulsar producer properties - -| Config property | Default | Notes | -|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------| -| [`pulsar.producer.name`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setProducerName-java.lang.String-) | | Specify the producer name. | -| [`pulsar.producer.initial.sequence.id`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setInitialSequenceId-long-) | | Specify baseline for sequence ID of this producer. | -| [`pulsar.producer.max.pending.messages`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setMaxPendingMessages-int-) | `1000` | Set the maximum size of the message queue pending to receive an acknowledgment from the broker. | -| [`pulsar.producer.max.pending.messages.across.partitions`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setMaxPendingMessagesAcrossPartitions-int-) | `50000` | Set the maximum number of pending messages across all the partitions. | -| [`pulsar.producer.batching.enabled`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setBatchingEnabled-boolean-) | `true` | Control whether automatic batching of messages is enabled for the producer. | -| [`pulsar.producer.batching.max.messages`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setBatchingMaxMessages-int-) | `1000` | The maximum number of messages in a batch. | -| [`pulsar.block.if.producer.queue.full`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setBlockIfQueueFull-boolean-) | | Specify the block producer if queue is full. | - - -### Pulsar consumer Properties - -| Config property | Default | Notes | -|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------| -| [`pulsar.consumer.name`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setConsumerName-java.lang.String-) | | Specify the consumer name. | -| [`pulsar.consumer.receiver.queue.size`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setReceiverQueueSize-int-) | 1000 | Set the size of the consumer receiver queue. | -| [`pulsar.consumer.acknowledgments.group.time.millis`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#acknowledgmentGroupTime-long-java.util.concurrent.TimeUnit-) | 100 | Set the maximum amount of group time for consumers to send the acknowledgments to the broker. 
| -| [`pulsar.consumer.total.receiver.queue.size.across.partitions`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setMaxTotalReceiverQueueSizeAcrossPartitions-int-) | 50000 | Set the maximum size of the total receiver queue across partitions. | -| [`pulsar.consumer.subscription.topics.mode`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#subscriptionTopicsMode-Mode-) | PersistentOnly | Set the subscription topic mode for consumers. | diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/adaptors-spark.md b/site2/website/versioned_docs/version-2.8.3-deprecated/adaptors-spark.md deleted file mode 100644 index e14f13b5d4b079..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/adaptors-spark.md +++ /dev/null @@ -1,91 +0,0 @@ ---- -id: adaptors-spark -title: Pulsar adaptor for Apache Spark -sidebar_label: "Apache Spark" -original_id: adaptors-spark ---- - -## Spark Streaming receiver -The Spark Streaming receiver for Pulsar is a custom receiver that enables Apache [Spark Streaming](https://spark.apache.org/streaming/) to receive raw data from Pulsar. - -An application can receive data in [Resilient Distributed Dataset](https://spark.apache.org/docs/latest/programming-guide.html#resilient-distributed-datasets-rdds) (RDD) format via the Spark Streaming receiver and can process it in a variety of ways. - -### Prerequisites - -To use the receiver, include a dependency for the `pulsar-spark` library in your Java configuration. - -#### Maven - -If you're using Maven, add this to your `pom.xml`: - -```xml - - -@pulsar:version@ - - - - org.apache.pulsar - pulsar-spark - ${pulsar.version} - - -``` - -#### Gradle - -If you're using Gradle, add this to your `build.gradle` file: - -```groovy - -def pulsarVersion = "@pulsar:version@" - -dependencies { - compile group: 'org.apache.pulsar', name: 'pulsar-spark', version: pulsarVersion -} - -``` - -### Usage - -Pass an instance of `SparkStreamingPulsarReceiver` to the `receiverStream` method in `JavaStreamingContext`: - -```java - - String serviceUrl = "pulsar://localhost:6650/"; - String topic = "persistent://public/default/test_src"; - String subs = "test_sub"; - - SparkConf sparkConf = new SparkConf().setMaster("local[*]").setAppName("Pulsar Spark Example"); - - JavaStreamingContext jsc = new JavaStreamingContext(sparkConf, Durations.seconds(60)); - - ConsumerConfigurationData pulsarConf = new ConsumerConfigurationData(); - - Set set = new HashSet(); - set.add(topic); - pulsarConf.setTopicNames(set); - pulsarConf.setSubscriptionName(subs); - - SparkStreamingPulsarReceiver pulsarReceiver = new SparkStreamingPulsarReceiver( - serviceUrl, - pulsarConf, - new AuthenticationDisabled()); - - JavaReceiverInputDStream lineDStream = jsc.receiverStream(pulsarReceiver); - -``` - -For a complete example, click [here](https://github.com/apache/pulsar-adapters/blob/master/examples/spark/src/main/java/org/apache/spark/streaming/receiver/example/SparkStreamingPulsarReceiverExample.java). In this example, the number of messages that contain the string "Pulsar" in received messages is counted. - -Note that if needed, other Pulsar authentication classes can be used. 
For example, in order to use a token during authentication the following parameters for the `SparkStreamingPulsarReceiver` constructor can be set: - -```java - -SparkStreamingPulsarReceiver pulsarReceiver = new SparkStreamingPulsarReceiver( - serviceUrl, - pulsarConf, - new AuthenticationToken("token:")); - -``` - diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/adaptors-storm.md b/site2/website/versioned_docs/version-2.8.3-deprecated/adaptors-storm.md deleted file mode 100644 index 76d507164777db..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/adaptors-storm.md +++ /dev/null @@ -1,96 +0,0 @@ ---- -id: adaptors-storm -title: Pulsar adaptor for Apache Storm -sidebar_label: "Apache Storm" -original_id: adaptors-storm ---- - -Pulsar Storm is an adaptor for integrating with [Apache Storm](http://storm.apache.org/) topologies. It provides core Storm implementations for sending and receiving data. - -An application can inject data into a Storm topology via a generic Pulsar spout, as well as consume data from a Storm topology via a generic Pulsar bolt. - -## Using the Pulsar Storm Adaptor - -Include dependency for Pulsar Storm Adaptor: - -```xml - - - org.apache.pulsar - pulsar-storm - ${pulsar.version} - - -``` - -## Pulsar Spout - -The Pulsar Spout allows for the data published on a topic to be consumed by a Storm topology. It emits a Storm tuple based on the message received and the `MessageToValuesMapper` provided by the client. - -The tuples that fail to be processed by the downstream bolts will be re-injected by the spout with an exponential backoff, within a configurable timeout (the default is 60 seconds) or a configurable number of retries, whichever comes first, after which it is acknowledged by the consumer. Here's an example construction of a spout: - -```java - -MessageToValuesMapper messageToValuesMapper = new MessageToValuesMapper() { - - @Override - public Values toValues(Message msg) { - return new Values(new String(msg.getData())); - } - - @Override - public void declareOutputFields(OutputFieldsDeclarer declarer) { - // declare the output fields - declarer.declare(new Fields("string")); - } -}; - -// Configure a Pulsar Spout -PulsarSpoutConfiguration spoutConf = new PulsarSpoutConfiguration(); -spoutConf.setServiceUrl("pulsar://broker.messaging.usw.example.com:6650"); -spoutConf.setTopic("persistent://my-property/usw/my-ns/my-topic1"); -spoutConf.setSubscriptionName("my-subscriber-name1"); -spoutConf.setMessageToValuesMapper(messageToValuesMapper); - -// Create a Pulsar Spout -PulsarSpout spout = new PulsarSpout(spoutConf); - -``` - -For a complete example, click [here](https://github.com/apache/pulsar-adapters/blob/master/pulsar-storm/src/test/java/org/apache/pulsar/storm/PulsarSpoutTest.java). - -## Pulsar Bolt - -The Pulsar bolt allows data in a Storm topology to be published on a topic. It publishes messages based on the Storm tuple received and the `TupleToMessageMapper` provided by the client. - -A partitioned topic can also be used to publish messages on different topics. In the implementation of the `TupleToMessageMapper`, a "key" will need to be provided in the message which will send the messages with the same key to the same topic. 
Here's an example bolt: - -```java - -TupleToMessageMapper tupleToMessageMapper = new TupleToMessageMapper() { - - @Override - public TypedMessageBuilder toMessage(TypedMessageBuilder msgBuilder, Tuple tuple) { - String receivedMessage = tuple.getString(0); - // message processing - String processedMsg = receivedMessage + "-processed"; - return msgBuilder.value(processedMsg.getBytes()); - } - - @Override - public void declareOutputFields(OutputFieldsDeclarer declarer) { - // declare the output fields - } -}; - -// Configure a Pulsar Bolt -PulsarBoltConfiguration boltConf = new PulsarBoltConfiguration(); -boltConf.setServiceUrl("pulsar://broker.messaging.usw.example.com:6650"); -boltConf.setTopic("persistent://my-property/usw/my-ns/my-topic2"); -boltConf.setTupleToMessageMapper(tupleToMessageMapper); - -// Create a Pulsar Bolt -PulsarBolt bolt = new PulsarBolt(boltConf); - -``` - diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-brokers.md b/site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-brokers.md deleted file mode 100644 index aea8f70f87a7d3..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-brokers.md +++ /dev/null @@ -1,286 +0,0 @@ ---- -id: admin-api-brokers -title: Managing Brokers -sidebar_label: "Brokers" -original_id: admin-api-brokers ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -> **Important** -> -> This page only shows **some frequently used operations**. -> -> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more information, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/). -> -> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. -> -> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/). - -Pulsar brokers consist of two components: - -1. An HTTP server exposing a {@inject: rest:REST:/} interface administration and [topic](reference-terminology.md#topic) lookup. -2. A dispatcher that handles all Pulsar [message](reference-terminology.md#message) transfers. - -[Brokers](reference-terminology.md#broker) can be managed via: - -* The [`brokers`](reference-pulsar-admin.md#brokers) command of the [`pulsar-admin`](reference-pulsar-admin.md) tool -* The `/admin/v2/brokers` endpoint of the admin {@inject: rest:REST:/} API -* The `brokers` method of the {@inject: javadoc:PulsarAdmin:/admin/org/apache/pulsar/client/admin/PulsarAdmin.html} object in the [Java API](client-libraries-java.md) - -In addition to being configurable when you start them up, brokers can also be [dynamically configured](#dynamic-broker-configuration). - -> See the [Configuration](reference-configuration.md#broker) page for a full listing of broker-specific configuration parameters. - -## Brokers resources - -### List active brokers - -Fetch all available active brokers that are serving traffic with cluster name. 
- -````mdx-code-block - - - -```shell - -$ pulsar-admin brokers list use - -``` - -``` - -broker1.use.org.com:8080 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/brokers/:cluster|operation/getActiveBrokers?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().getActiveBrokers(clusterName) - -``` - - - - -```` - -### Get the information of the leader broker - -Fetch the information of the leader broker, for example, the service url. - -````mdx-code-block - - - -```shell - -$ pulsar-admin brokers leader-broker - -``` - -``` - -BrokerInfo(serviceUrl=broker1.use.org.com:8080) - -``` - - - - -{@inject: endpoint|GET|/admin/v2/brokers/leaderBroker|operation/getLeaderBroker?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().getLeaderBroker() - -``` - -For the detail of the code above, see [here](https://github.com/apache/pulsar/blob/master/pulsar-client-admin/src/main/java/org/apache/pulsar/client/admin/internal/BrokersImpl.java#L80) - - - - -```` - -#### list of namespaces owned by a given broker - -It finds all namespaces which are owned and served by a given broker. - -````mdx-code-block - - - -```shell - -$ pulsar-admin brokers namespaces use \ - --url broker1.use.org.com:8080 - -``` - -```json - -{ - "my-property/use/my-ns/0x00000000_0xffffffff": { - "broker_assignment": "shared", - "is_controlled": false, - "is_active": true - } -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/brokers/:cluster/:broker/ownedNamespaces|operation/getOwnedNamespaes?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().getOwnedNamespaces(cluster,brokerUrl); - -``` - - - - -```` - -### Dynamic broker configuration - -One way to configure a Pulsar [broker](reference-terminology.md#broker) is to supply a [configuration](reference-configuration.md#broker) when the broker is [started up](reference-cli-tools.md#pulsar-broker). - -But since all broker configuration in Pulsar is stored in ZooKeeper, configuration values can also be dynamically updated *while the broker is running*. When you update broker configuration dynamically, ZooKeeper will notify the broker of the change and the broker will then override any existing configuration values. - -* The [`brokers`](reference-pulsar-admin.md#brokers) command for the [`pulsar-admin`](reference-pulsar-admin.md) tool has a variety of subcommands that enable you to manipulate a broker's configuration dynamically, enabling you to [update config values](#update-dynamic-configuration) and more. -* In the Pulsar admin {@inject: rest:REST:/} API, dynamic configuration is managed through the `/admin/v2/brokers/configuration` endpoint. - -### Update dynamic configuration - -````mdx-code-block - - - -The [`update-dynamic-config`](reference-pulsar-admin.md#brokers-update-dynamic-config) subcommand will update existing configuration. It takes two arguments: the name of the parameter and the new value using the `config` and `value` flag respectively. 
Here's an example for the [`brokerShutdownTimeoutMs`](reference-configuration.md#broker-brokerShutdownTimeoutMs) parameter: - -```shell - -$ pulsar-admin brokers update-dynamic-config --config brokerShutdownTimeoutMs --value 100 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/brokers/configuration/:configName/:configValue|operation/updateDynamicConfiguration?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().updateDynamicConfiguration(configName, configValue); - -``` - - - - -```` - -### List updated values - -Fetch a list of all potentially updatable configuration parameters. -````mdx-code-block - - - -```shell - -$ pulsar-admin brokers list-dynamic-config -brokerShutdownTimeoutMs - -``` - - - - -{@inject: endpoint|GET|/admin/v2/brokers/configuration|operation/getDynamicConfigurationName?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().getDynamicConfigurationNames(); - -``` - - - - -```` - -### List all - -Fetch a list of all parameters that have been dynamically updated. - -````mdx-code-block - - - -```shell - -$ pulsar-admin brokers get-all-dynamic-config -brokerShutdownTimeoutMs:100 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/brokers/configuration/values|operation/getAllDynamicConfigurations?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().getAllDynamicConfigurations(); - -``` - - - - -```` diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-clusters.md b/site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-clusters.md deleted file mode 100644 index ce085345640a4f..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-clusters.md +++ /dev/null @@ -1,318 +0,0 @@ ---- -id: admin-api-clusters -title: Managing Clusters -sidebar_label: "Clusters" -original_id: admin-api-clusters ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -> **Important** -> -> This page only shows **some frequently used operations**. -> -> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more information, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/). -> -> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. -> -> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/). - -Pulsar clusters consist of one or more Pulsar [brokers](reference-terminology.md#broker), one or more [BookKeeper](reference-terminology.md#bookkeeper) -servers (aka [bookies](reference-terminology.md#bookie)), and a [ZooKeeper](https://zookeeper.apache.org) cluster that provides configuration and coordination management. - -Clusters can be managed via: - -* The [`clusters`](reference-pulsar-admin.md#clusters) command of the [`pulsar-admin`](reference-pulsar-admin.md) tool -* The `/admin/v2/clusters` endpoint of the admin {@inject: rest:REST:/} API -* The `clusters` method of the {@inject: javadoc:PulsarAdmin:/admin/org/apache/pulsar/client/admin/PulsarAdmin} object in the [Java API](client-libraries-java.md) - -## Clusters resources - -### Provision - -New clusters can be provisioned using the admin interface. - -> Please note that this operation requires superuser privileges. 
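-
-Superuser roles are not granted through the admin API itself; they are listed in the broker configuration. The following is a minimal sketch of the relevant `conf/broker.conf` entry, assuming authentication is enabled and `admin` is the role name used by your admin clients:
-
-```properties
-
-# Roles that are allowed to perform superuser operations, such as creating clusters
-superUserRoles=admin
-
-```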
- -````mdx-code-block - - - -You can provision a new cluster using the [`create`](reference-pulsar-admin.md#clusters-create) subcommand. Here's an example: - -```shell - -$ pulsar-admin clusters create cluster-1 \ - --url http://my-cluster.org.com:8080 \ - --broker-url pulsar://my-cluster.org.com:6650 - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/clusters/:cluster|operation/createCluster?version=@pulsar:version_number@} - - - - -```java - -ClusterData clusterData = new ClusterData( - serviceUrl, - serviceUrlTls, - brokerServiceUrl, - brokerServiceUrlTls -); -admin.clusters().createCluster(clusterName, clusterData); - -``` - - - - -```` - -### Initialize cluster metadata - -When provision a new cluster, you need to initialize that cluster's [metadata](concepts-architecture-overview.md#metadata-store). When initializing cluster metadata, you need to specify all of the following: - -* The name of the cluster -* The local ZooKeeper connection string for the cluster -* The configuration store connection string for the entire instance -* The web service URL for the cluster -* A broker service URL enabling interaction with the [brokers](reference-terminology.md#broker) in the cluster - -You must initialize cluster metadata *before* starting up any [brokers](admin-api-brokers.md) that will belong to the cluster. - -> **No cluster metadata initialization through the REST API or the Java admin API** -> -> Unlike most other admin functions in Pulsar, cluster metadata initialization cannot be performed via the admin REST API -> or the admin Java client, as metadata initialization involves communicating with ZooKeeper directly. -> Instead, you can use the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool, in particular -> the [`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata) command. - -Here's an example cluster metadata initialization command: - -```shell - -bin/pulsar initialize-cluster-metadata \ - --cluster us-west \ - --zookeeper zk1.us-west.example.com:2181 \ - --configuration-store zk1.us-west.example.com:2184 \ - --web-service-url http://pulsar.us-west.example.com:8080/ \ - --web-service-url-tls https://pulsar.us-west.example.com:8443/ \ - --broker-service-url pulsar://pulsar.us-west.example.com:6650/ \ - --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651/ - -``` - -You'll need to use `--*-tls` flags only if you're using [TLS authentication](security-tls-authentication.md) in your instance. - -### Get configuration - -You can fetch the [configuration](reference-configuration.md) for an existing cluster at any time. - -````mdx-code-block - - - -Use the [`get`](reference-pulsar-admin.md#clusters-get) subcommand and specify the name of the cluster. Here's an example: - -```shell - -$ pulsar-admin clusters get cluster-1 -{ - "serviceUrl": "http://my-cluster.org.com:8080/", - "serviceUrlTls": null, - "brokerServiceUrl": "pulsar://my-cluster.org.com:6650/", - "brokerServiceUrlTls": null - "peerClusterNames": null -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/clusters/:cluster|operation/getCluster?version=@pulsar:version_number@} - - - - -```java - -admin.clusters().getCluster(clusterName); - -``` - - - - -```` - -### Update - -You can update the configuration for an existing cluster at any time. - -````mdx-code-block - - - -Use the [`update`](reference-pulsar-admin.md#clusters-update) subcommand and specify new configuration values using flags. 
- -```shell - -$ pulsar-admin clusters update cluster-1 \ - --url http://my-cluster.org.com:4081 \ - --broker-url pulsar://my-cluster.org.com:3350 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/clusters/:cluster|operation/updateCluster?version=@pulsar:version_number@} - - - - -```java - -ClusterData clusterData = new ClusterData( - serviceUrl, - serviceUrlTls, - brokerServiceUrl, - brokerServiceUrlTls -); -admin.clusters().updateCluster(clusterName, clusterData); - -``` - - - - -```` - -### Delete - -Clusters can be deleted from a Pulsar [instance](reference-terminology.md#instance). - -````mdx-code-block - - - -Use the [`delete`](reference-pulsar-admin.md#clusters-delete) subcommand and specify the name of the cluster. - -``` - -$ pulsar-admin clusters delete cluster-1 - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/clusters/:cluster|operation/deleteCluster?version=@pulsar:version_number@} - - - - -```java - -admin.clusters().deleteCluster(clusterName); - -``` - - - - -```` - -### List - -You can fetch a list of all clusters in a Pulsar [instance](reference-terminology.md#instance). - -````mdx-code-block - - - -Use the [`list`](reference-pulsar-admin.md#clusters-list) subcommand. - -```shell - -$ pulsar-admin clusters list -cluster-1 -cluster-2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/clusters|operation/getClusters?version=@pulsar:version_number@} - - - - -```java - -admin.clusters().getClusters(); - -``` - - - - -```` - -### Update peer-cluster data - -Peer clusters can be configured for a given cluster in a Pulsar [instance](reference-terminology.md#instance). - -````mdx-code-block - - - -Use the [`update-peer-clusters`](reference-pulsar-admin.md#clusters-update-peer-clusters) subcommand and specify the list of peer-cluster names. - -``` - -$ pulsar-admin update-peer-clusters cluster-1 --peer-clusters cluster-2 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/clusters/:cluster/peers|operation/setPeerClusterNames?version=@pulsar:version_number@} - - - - -```java - -admin.clusters().updatePeerClusterNames(clusterName, peerClusterList); - -``` - - - - -```` \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-functions.md b/site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-functions.md deleted file mode 100644 index d6693fda9240e3..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-functions.md +++ /dev/null @@ -1,830 +0,0 @@ ---- -id: admin-api-functions -title: Manage Functions -sidebar_label: "Functions" -original_id: admin-api-functions ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -> **Important** -> -> This page only shows **some frequently used operations**. -> -> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more information, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/). -> -> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. -> -> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/). 
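-
-The Java examples on this page assume an initialized `PulsarAdmin` client named `admin`. The following is a minimal sketch of how such a client can be created, assuming a broker web service listening on `http://localhost:8080`:
-
-```java
-
-import org.apache.pulsar.client.admin.PulsarAdmin;
-
-PulsarAdmin admin = PulsarAdmin.builder()
-        .serviceHttpUrl("http://localhost:8080")
-        .build();
-
-```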
- -**Pulsar Functions** are lightweight compute processes that - -* consume messages from one or more Pulsar topics -* apply a user-supplied processing logic to each message -* publish the results of the computation to another topic - -Functions can be managed via the following methods. - -Method | Description ----|--- -**Admin CLI** | The [`functions`](reference-pulsar-admin.md#functions) command of the [`pulsar-admin`](reference-pulsar-admin.md) tool. -**REST API** |The `/admin/v3/functions` endpoint of the admin {@inject: rest:REST:/} API. -**Java Admin API**| The `functions` method of the {@inject: javadoc:PulsarAdmin:/admin/org/apache/pulsar/client/admin/PulsarAdmin} object in the [Java API](client-libraries-java.md). - -## Function resources - -You can perform the following operations on functions. - -### Create a function - -You can create a Pulsar function in cluster mode (deploy it on a Pulsar cluster) using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`create`](reference-pulsar-admin.md#functions-create) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions create \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --inputs test-input-topic \ - --output persistent://public/default/test-output-topic \ - --classname org.apache.pulsar.functions.api.examples.ExclamationFunction \ - --jar /examples/api-examples.jar - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName|operation/registerFunction?version=@pulsar:version_number@} - - - - -```java - -FunctionConfig functionConfig = new FunctionConfig(); -functionConfig.setTenant(tenant); -functionConfig.setNamespace(namespace); -functionConfig.setName(functionName); -functionConfig.setRuntime(FunctionConfig.Runtime.JAVA); -functionConfig.setParallelism(1); -functionConfig.setClassName("org.apache.pulsar.functions.api.examples.ExclamationFunction"); -functionConfig.setProcessingGuarantees(FunctionConfig.ProcessingGuarantees.ATLEAST_ONCE); -functionConfig.setTopicsPattern(sourceTopicPattern); -functionConfig.setSubName(subscriptionName); -functionConfig.setAutoAck(true); -functionConfig.setOutput(sinkTopic); -admin.functions().createFunction(functionConfig, fileName); - -``` - - - - -```` - -### Update a function - -You can update a Pulsar function that has been deployed to a Pulsar cluster using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`update`](reference-pulsar-admin.md#functions-update) subcommand. 
- -**Example** - -```shell - -$ pulsar-admin functions update \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --output persistent://public/default/update-output-topic \ - # other options - -``` - - - - -{@inject: endpoint|PUT|/admin/v3/functions/:tenant/:namespace/:functionName|operation/updateFunction?version=@pulsar:version_number@} - - - - -```java - -FunctionConfig functionConfig = new FunctionConfig(); -functionConfig.setTenant(tenant); -functionConfig.setNamespace(namespace); -functionConfig.setName(functionName); -functionConfig.setRuntime(FunctionConfig.Runtime.JAVA); -functionConfig.setParallelism(1); -functionConfig.setClassName("org.apache.pulsar.functions.api.examples.ExclamationFunction"); -UpdateOptions updateOptions = new UpdateOptions(); -updateOptions.setUpdateAuthData(updateAuthData); -admin.functions().updateFunction(functionConfig, userCodeFile, updateOptions); - -``` - - - - -```` - -### Start an instance of a function - -You can start a stopped function instance with `instance-id` using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`start`](reference-pulsar-admin.md#functions-start) subcommand. - -```shell - -$ pulsar-admin functions start \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/start|operation/startFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().startFunction(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Start all instances of a function - -You can start all stopped function instances using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`start`](reference-pulsar-admin.md#functions-start) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions start \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/start|operation/startFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().startFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### Stop an instance of a function - -You can stop a function instance with `instance-id` using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`stop`](reference-pulsar-admin.md#functions-stop) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions stop \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/stop|operation/stopFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().stopFunction(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Stop all instances of a function - -You can stop all function instances using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`stop`](reference-pulsar-admin.md#functions-stop) subcommand. 
- -**Example** - -```shell - -$ pulsar-admin functions stop \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/stop|operation/stopFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().stopFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### Restart an instance of a function - -Restart a function instance with `instance-id` using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`restart`](reference-pulsar-admin.md#functions-restart) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions restart \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/restart|operation/restartFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().restartFunction(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Restart all instances of a function - -You can restart all function instances using Admin CLI, REST API or Java admin API. - -````mdx-code-block - - - -Use the [`restart`](reference-pulsar-admin.md#functions-restart) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions restart \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/restart|operation/restartFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().restartFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### List all functions - -You can list all Pulsar functions running under a specific tenant and namespace using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`list`](reference-pulsar-admin.md#functions-list) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions list \ - --tenant public \ - --namespace default - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace|operation/listFunctions?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctions(tenant, namespace); - -``` - - - - -```` - -### Delete a function - -You can delete a Pulsar function that is running on a Pulsar cluster using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`delete`](reference-pulsar-admin.md#functions-delete) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions delete \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) - -``` - - - - -{@inject: endpoint|DELETE|/admin/v3/functions/:tenant/:namespace/:functionName|operation/deregisterFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().deleteFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### Get info about a function - -You can get information about a Pulsar function currently running in cluster mode using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`get`](reference-pulsar-admin.md#functions-get) subcommand. 
- -**Example** - -```shell - -$ pulsar-admin functions get \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName|operation/getFunctionInfo?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### Get status of an instance of a function - -You can get the current status of a Pulsar function instance with `instance-id` using Admin CLI, REST API or Java Admin API. -````mdx-code-block - - - -Use the [`status`](reference-pulsar-admin.md#functions-status) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions status \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/status|operation/getFunctionInstanceStatus?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionStatus(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Get status of all instances of a function - -You can get the current status of a Pulsar function instance using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`status`](reference-pulsar-admin.md#functions-status) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions status \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/status|operation/getFunctionStatus?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionStatus(tenant, namespace, functionName); - -``` - - - - -```` - -### Get stats of an instance of a function - -You can get the current stats of a Pulsar Function instance with `instance-id` using Admin CLI, REST API or Java admin API. -````mdx-code-block - - - -Use the [`stats`](reference-pulsar-admin.md#functions-stats) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions stats \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/stats|operation/getFunctionInstanceStats?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionStats(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Get stats of all instances of a function - -You can get the current stats of a Pulsar function using Admin CLI, REST API or Java admin API. - -````mdx-code-block - - - -Use the [`stats`](reference-pulsar-admin.md#functions-stats) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions stats \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/stats|operation/getFunctionStats?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionStats(tenant, namespace, functionName); - -``` - - - - -```` - -### Trigger a function - -You can trigger a specified Pulsar function with a supplied value using Admin CLI, REST API or Java admin API. - -````mdx-code-block - - - -Use the [`trigger`](reference-pulsar-admin.md#functions-trigger) subcommand. 
- -**Example** - -```shell - -$ pulsar-admin functions trigger \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --topic (the name of input topic) \ - --trigger-value \"hello pulsar\" - # or --trigger-file (the path of trigger file) - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/trigger|operation/triggerFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().triggerFunction(tenant, namespace, functionName, topic, triggerValue, triggerFile); - -``` - - - - -```` - -### Put state associated with a function - -You can put the state associated with a Pulsar function using Admin CLI, REST API or Java admin API. - -````mdx-code-block - - - -Use the [`putstate`](reference-pulsar-admin.md#functions-putstate) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions putstate \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --state "{\"key\":\"pulsar\", \"stringValue\":\"hello pulsar\"}" - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/state/:key|operation/putFunctionState?version=@pulsar:version_number@} - - - - -```java - -TypeReference typeRef = new TypeReference() {}; -FunctionState stateRepr = ObjectMapperFactory.getThreadLocal().readValue(state, typeRef); -admin.functions().putFunctionState(tenant, namespace, functionName, stateRepr); - -``` - - - - -```` - -### Fetch state associated with a function - -You can fetch the current state associated with a Pulsar function using Admin CLI, REST API or Java admin API. - -````mdx-code-block - - - -Use the [`querystate`](reference-pulsar-admin.md#functions-querystate) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions querystate \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --key (the key of state) - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/state/:key|operation/getFunctionState?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionState(tenant, namespace, functionName, key); - -``` - - - - -```` \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-namespaces.md b/site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-namespaces.md deleted file mode 100644 index cc3ce27d4b529b..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-namespaces.md +++ /dev/null @@ -1,1324 +0,0 @@ ---- -id: admin-api-namespaces -title: Managing Namespaces -sidebar_label: "Namespaces" -original_id: admin-api-namespaces ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -> **Important** -> -> This page only shows **some frequently used operations**. -> -> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more information, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/). -> -> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. -> -> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/). 
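-
-The REST entries on this page reference endpoints rather than complete requests. The following is a minimal sketch of how such an endpoint can be invoked with `curl`, assuming a broker web service on `localhost:8080` and no authentication; the tenant and namespace names are placeholders:
-
-```shell
-
-curl -X PUT http://localhost:8080/admin/v2/namespaces/test-tenant/test-namespace
-
-```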
- -Pulsar [namespaces](reference-terminology.md#namespace) are logical groupings of [topics](reference-terminology.md#topic). - -Namespaces can be managed via: - -* The [`namespaces`](reference-pulsar-admin.md#clusters) command of the [`pulsar-admin`](reference-pulsar-admin.md) tool -* The `/admin/v2/namespaces` endpoint of the admin {@inject: rest:REST:/} API -* The `namespaces` method of the {@inject: javadoc:PulsarAdmin:/admin/org/apache/pulsar/client/admin/PulsarAdmin} object in the [Java API](client-libraries-java.md) - -## Namespaces resources - -### Create namespaces - -You can create new namespaces under a given [tenant](reference-terminology.md#tenant). - -````mdx-code-block - - - -Use the [`create`](reference-pulsar-admin.md#namespaces-create) subcommand and specify the namespace by name: - -```shell - -$ pulsar-admin namespaces create test-tenant/test-namespace - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace|operation/createNamespace?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().createNamespace(namespace); - -``` - - - - -```` - -### Get policies - -You can fetch the current policies associated with a namespace at any time. - -````mdx-code-block - - - -Use the [`policies`](reference-pulsar-admin.md#namespaces-policies) subcommand and specify the namespace: - -```shell - -$ pulsar-admin namespaces policies test-tenant/test-namespace -{ - "auth_policies": { - "namespace_auth": {}, - "destination_auth": {} - }, - "replication_clusters": [], - "bundles_activated": true, - "bundles": { - "boundaries": [ - "0x00000000", - "0xffffffff" - ], - "numBundles": 1 - }, - "backlog_quota_map": {}, - "persistence": null, - "latency_stats_sample_rate": {}, - "message_ttl_in_seconds": 0, - "retention_policies": null, - "deleted": false -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace|operation/getPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getPolicies(namespace); - -``` - - - - -```` - -### List namespaces - -You can list all namespaces within a given Pulsar [tenant](reference-terminology.md#tenant). - -````mdx-code-block - - - -Use the [`list`](reference-pulsar-admin.md#namespaces-list) subcommand and specify the tenant: - -```shell - -$ pulsar-admin namespaces list test-tenant -test-tenant/ns1 -test-tenant/ns2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant|operation/getTenantNamespaces?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getNamespaces(tenant); - -``` - - - - -```` - -### Delete namespaces - -You can delete existing namespaces from a tenant. - -````mdx-code-block - - - -Use the [`delete`](reference-pulsar-admin.md#namespaces-delete) subcommand and specify the namespace: - -```shell - -$ pulsar-admin namespaces delete test-tenant/ns1 - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace|operation/deleteNamespace?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().deleteNamespace(namespace); - -``` - - - - -```` - -### Configure replication clusters - -#### Set replication cluster - -It sets replication clusters for a namespace, so Pulsar can internally replicate publish message from one colo to another colo. 
- -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-clusters test-tenant/ns1 \ - --clusters cl1 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/replication|operation/setNamespaceReplicationClusters?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setNamespaceReplicationClusters(namespace, clusters); - -``` - - - - -```` - -#### Get replication cluster - -It gives a list of replication clusters for a given namespace. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-clusters test-tenant/cl1/ns1 - -``` - -``` - -cl2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/replication|operation/getNamespaceReplicationClusters?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getNamespaceReplicationClusters(namespace) - -``` - - - - -```` - -### Configure backlog quota policies - -#### Set backlog quota policies - -Backlog quota helps the broker to restrict bandwidth/storage of a namespace once it reaches a certain threshold limit. Admin can set the limit and take corresponding action after the limit is reached. - - 1. producer_request_hold: broker will hold and not persist produce request payload - - 2. producer_exception: broker disconnects with the client by giving an exception. - - 3. consumer_backlog_eviction: broker will start discarding backlog messages - - Backlog quota restriction can be taken care by defining restriction of backlog-quota-type: destination_storage - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-backlog-quota --limit 10G --limitTime 36000 --policy producer_request_hold test-tenant/ns1 - -``` - -``` - -N/A - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/setBacklogQuota?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setBacklogQuota(namespace, new BacklogQuota(limit, limitTime, policy)) - -``` - - - - -```` - -#### Get backlog quota policies - -It shows a configured backlog quota for a given namespace. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-backlog-quotas test-tenant/ns1 - -``` - -```json - -{ - "destination_storage": { - "limit": 10, - "policy": "producer_request_hold" - } -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/backlogQuotaMap|operation/getBacklogQuotaMap?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getBacklogQuotaMap(namespace); - -``` - - - - -```` - -#### Remove backlog quota policies - -It removes backlog quota policies for a given namespace - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces remove-backlog-quota test-tenant/ns1 - -``` - -``` - -N/A - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/removeBacklogQuota?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().removeBacklogQuota(namespace, backlogQuotaType) - -``` - - - - -```` - -### Configure persistence policies - -#### Set persistence policies - -Persistence policies allow to configure persistency-level for all topic messages under a given namespace. 
- - - Bookkeeper-ack-quorum: Number of acks (guaranteed copies) to wait for each entry, default: 0 - - - Bookkeeper-ensemble: Number of bookies to use for a topic, default: 0 - - - Bookkeeper-write-quorum: How many writes to make of each entry, default: 0 - - - Ml-mark-delete-max-rate: Throttling rate of mark-delete operation (0 means no throttle), default: 0.0 - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-persistence --bookkeeper-ack-quorum 2 --bookkeeper-ensemble 3 --bookkeeper-write-quorum 2 --ml-mark-delete-max-rate 0 test-tenant/ns1 - -``` - -``` - -N/A - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/setPersistence?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setPersistence(namespace,new PersistencePolicies(bookkeeperEnsemble, bookkeeperWriteQuorum,bookkeeperAckQuorum,managedLedgerMaxMarkDeleteRate)) - -``` - - - - -```` - -#### Get persistence policies - -It shows the configured persistence policies of a given namespace. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-persistence test-tenant/ns1 - -``` - -```json - -{ - "bookkeeperEnsemble": 3, - "bookkeeperWriteQuorum": 2, - "bookkeeperAckQuorum": 2, - "managedLedgerMaxMarkDeleteRate": 0 -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/getPersistence?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getPersistence(namespace) - -``` - - - - -```` - -### Configure namespace bundles - -#### Unload namespace bundles - -The namespace bundle is a virtual group of topics which belong to the same namespace. If the broker gets overloaded with the number of bundles, this command can help unload a bundle from that broker, so it can be served by some other less-loaded brokers. The namespace bundle ID ranges from 0x00000000 to 0xffffffff. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces unload --bundle 0x00000000_0xffffffff test-tenant/ns1 - -``` - -``` - -N/A - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace/:bundle/unload|operation/unloadNamespaceBundle?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().unloadNamespaceBundle(namespace, bundle) - -``` - - - - -```` - -#### Split namespace bundles - -Each namespace bundle can contain multiple topics and each bundle can be served by only one broker. -If a single bundle is creating an excessive load on a broker, an admin splits the bundle using this command permitting one or more of the new bundles to be unloaded thus spreading the load across the brokers. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces split-bundle --bundle 0x00000000_0xffffffff test-tenant/ns1 - -``` - -``` - -N/A - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace/:bundle/split|operation/splitNamespaceBundle?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().splitNamespaceBundle(namespace, bundle) - -``` - - - - -```` - -### Configure message TTL - -#### Set message-ttl - -It configures message’s time to live (in seconds) duration. 
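-
-For example, a minimal Java sketch that applies a two-hour TTL (the value is expressed in seconds; `7200` is illustrative), assuming an initialized `PulsarAdmin` client named `admin`:
-
-```java
-
-// Messages left unacknowledged for more than 7200 seconds (2 hours) are expired.
-admin.namespaces().setNamespaceMessageTTL("test-tenant/ns1", 7200);
-
-```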
-
-````mdx-code-block
-
-
-
-```
-
-$ pulsar-admin namespaces set-message-ttl --messageTTL 100 test-tenant/ns1
-
-```
-
-```
-
-N/A
-
-```
-
-
-
-
-{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/setNamespaceMessageTTL?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.namespaces().setNamespaceMessageTTL(namespace, messageTTL)
-
-```
-
-
-
-
-````
-
-#### Get message-ttl
-
-It shows the configured message TTL (in seconds) of a namespace.
-
-````mdx-code-block
-
-
-
-```
-
-$ pulsar-admin namespaces get-message-ttl test-tenant/ns1
-
-```
-
-```
-
-100
-
-```
-
-
-
-
-{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/getNamespaceMessageTTL?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.namespaces().getNamespaceMessageTTL(namespace)
-
-```
-
-
-
-
-````
-
-#### Remove message-ttl
-
-It removes the configured message TTL of a namespace.
-
-````mdx-code-block
-
-
-
-```
-
-$ pulsar-admin namespaces remove-message-ttl test-tenant/ns1
-
-```
-
-```
-
-N/A
-
-```
-
-
-
-
-{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/removeNamespaceMessageTTL?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.namespaces().removeNamespaceMessageTTL(namespace)
-
-```
-
-
-
-
-````
-
-
-### Clear backlog
-
-#### Clear namespace backlog
-
-It clears all message backlog for all the topics that belong to a specific namespace. You can also clear the backlog for a specific subscription only.
-
-````mdx-code-block
-
-
-
-```
-
-$ pulsar-admin namespaces clear-backlog --sub my-subscription test-tenant/ns1
-
-```
-
-```
-
-N/A
-
-```
-
-
-
-
-{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/clearBacklog|operation/clearNamespaceBacklogForSubscription?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.namespaces().clearNamespaceBacklogForSubscription(namespace, subscription)
-
-```
-
-
-
-
-````
-
-#### Clear bundle backlog
-
-It clears all message backlog for all the topics that belong to a specific NamespaceBundle. You can also clear the backlog for a specific subscription only.
-
-````mdx-code-block
-
-
-
-```
-
-$ pulsar-admin namespaces clear-backlog --bundle 0x00000000_0xffffffff --sub my-subscription test-tenant/ns1
-
-```
-
-```
-
-N/A
-
-```
-
-
-
-
-{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/:bundle/clearBacklog|operation/clearNamespaceBundleBacklogForSubscription?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.namespaces().clearNamespaceBundleBacklogForSubscription(namespace, bundle, subscription)
-
-```
-
-
-
-
-````
-
-### Configure retention
-
-#### Set retention
-
-Each namespace contains multiple topics, and the retention size (storage size) of each topic should not exceed a specific threshold, or it should be stored for a certain period. This command helps configure the retention size and time of topics in a given namespace.
-
-````mdx-code-block
-
-
-
-```
-
-$ pulsar-admin namespaces set-retention --size 100 --time 10 test-tenant/ns1
-
-```
-
-```
-
-N/A
-
-```
-
-
-
-
-{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/retention|operation/setRetention?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.namespaces().setRetention(namespace, new RetentionPolicies(retentionTimeInMin, retentionSizeInMB))
-
-```
-
-
-
-
-````
-
-#### Get retention
-
-It shows the retention information of a given namespace.
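-
-For example, a minimal Java sketch that reads the configured policies back, assuming an initialized `PulsarAdmin` client named `admin`:
-
-```java
-
-import org.apache.pulsar.common.policies.data.RetentionPolicies;
-
-// May be null when no retention policy has been set on the namespace.
-RetentionPolicies retention = admin.namespaces().getRetention("test-tenant/ns1");
-if (retention != null) {
-    System.out.println("retention time (min): " + retention.getRetentionTimeInMinutes());
-    System.out.println("retention size (MB): " + retention.getRetentionSizeInMB());
-}
-
-```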
- -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-retention test-tenant/ns1 - -``` - -```json - -{ - "retentionTimeInMinutes": 10, - "retentionSizeInMB": 100 -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/retention|operation/getRetention?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getRetention(namespace) - -``` - - - - -```` - -### Configure dispatch throttling for topics - -#### Set dispatch throttling for topics - -It sets message dispatch rate for all the topics under a given namespace. -The dispatch rate can be restricted by the number of messages per X seconds (`msg-dispatch-rate`) or by the number of message-bytes per X second (`byte-dispatch-rate`). -dispatch rate is in second and it can be configured with `dispatch-rate-period`. Default value of `msg-dispatch-rate` and `byte-dispatch-rate` is -1 which -disables the throttling. - -:::note - -- If neither `clusterDispatchRate` nor `topicDispatchRate` is configured, dispatch throttling is disabled. -> -- If `topicDispatchRate` is not configured, `clusterDispatchRate` takes effect. -> -- If `topicDispatchRate` is configured, `topicDispatchRate` takes effect. - -::: - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-dispatch-rate test-tenant/ns1 \ - --msg-dispatch-rate 1000 \ - --byte-dispatch-rate 1048576 \ - --dispatch-rate-period 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/dispatchRate|operation/setDispatchRate?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setDispatchRate(namespace, new DispatchRate(1000, 1048576, 1)) - -``` - - - - -```` - -#### Get configured message-rate for topics - -It shows configured message-rate for the namespace (topics under this namespace can dispatch this many messages per second) - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-dispatch-rate test-tenant/ns1 - -``` - -```json - -{ - "dispatchThrottlingRatePerTopicInMsg" : 1000, - "dispatchThrottlingRatePerTopicInByte" : 1048576, - "ratePeriodInSecond" : 1 -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/dispatchRate|operation/getDispatchRate?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getDispatchRate(namespace) - -``` - - - - -```` - -### Configure dispatch throttling for subscription - -#### Set dispatch throttling for subscription - -It sets message dispatch rate for all the subscription of topics under a given namespace. -The dispatch rate can be restricted by the number of messages per X seconds (`msg-dispatch-rate`) or by the number of message-bytes per X second (`byte-dispatch-rate`). -dispatch rate is in second and it can be configured with `dispatch-rate-period`. Default value of `msg-dispatch-rate` and `byte-dispatch-rate` is -1 which -disables the throttling. 
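-
-For example, a minimal Java sketch, assuming an initialized `PulsarAdmin` client named `admin` (the rates below are illustrative, and the `DispatchRate` constructor mirrors the examples on this page):
-
-```java
-
-import org.apache.pulsar.common.policies.data.DispatchRate;
-
-// Throttle every subscription in the namespace to 1000 msg/s and 1 MB/s,
-// evaluated over a 1-second period.
-admin.namespaces().setSubscriptionDispatchRate("test-tenant/ns1", new DispatchRate(1000, 1048576, 1));
-
-// Passing -1 for both rates disables the throttling again.
-admin.namespaces().setSubscriptionDispatchRate("test-tenant/ns1", new DispatchRate(-1, -1, 1));
-
-```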
- -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-subscription-dispatch-rate test-tenant/ns1 \ - --msg-dispatch-rate 1000 \ - --byte-dispatch-rate 1048576 \ - --dispatch-rate-period 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/subscriptionDispatchRate|operation/setDispatchRate?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setSubscriptionDispatchRate(namespace, new DispatchRate(1000, 1048576, 1)) - -``` - - - - -```` - -#### Get configured message-rate for subscription - -It shows configured message-rate for the namespace (topics under this namespace can dispatch this many messages per second) - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-subscription-dispatch-rate test-tenant/ns1 - -``` - -```json - -{ - "dispatchThrottlingRatePerTopicInMsg" : 1000, - "dispatchThrottlingRatePerTopicInByte" : 1048576, - "ratePeriodInSecond" : 1 -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/subscriptionDispatchRate|operation/getDispatchRate?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getSubscriptionDispatchRate(namespace) - -``` - - - - -```` - -### Configure dispatch throttling for replicator - -#### Set dispatch throttling for replicator - -It sets message dispatch rate for all the replicator between replication clusters under a given namespace. -The dispatch rate can be restricted by the number of messages per X seconds (`msg-dispatch-rate`) or by the number of message-bytes per X second (`byte-dispatch-rate`). -dispatch rate is in second and it can be configured with `dispatch-rate-period`. Default value of `msg-dispatch-rate` and `byte-dispatch-rate` is -1 which -disables the throttling. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-replicator-dispatch-rate test-tenant/ns1 \ - --msg-dispatch-rate 1000 \ - --byte-dispatch-rate 1048576 \ - --dispatch-rate-period 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/replicatorDispatchRate|operation/setDispatchRate?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setReplicatorDispatchRate(namespace, new DispatchRate(1000, 1048576, 1)) - -``` - - - - -```` - -#### Get configured message-rate for replicator - -It shows configured message-rate for the namespace (topics under this namespace can dispatch this many messages per second) - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-replicator-dispatch-rate test-tenant/ns1 - -``` - -```json - -{ - "dispatchThrottlingRatePerTopicInMsg" : 1000, - "dispatchThrottlingRatePerTopicInByte" : 1048576, - "ratePeriodInSecond" : 1 -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/replicatorDispatchRate|operation/getDispatchRate?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getReplicatorDispatchRate(namespace) - -``` - - - - -```` - -### Configure deduplication snapshot interval - -#### Get deduplication snapshot interval - -It shows configured `deduplicationSnapshotInterval` for a namespace (Each topic under the namespace will take a deduplication snapshot according to this interval) - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-deduplication-snapshot-interval test-tenant/ns1 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/deduplicationSnapshotInterval|operation/getDeduplicationSnapshotInterval?version=@pulsar:version_number@} - - - - -```java - 
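-// Returns the configured deduplicationSnapshotInterval (in seconds) for the
-// namespace; the result may be null when no interval has been set.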
-admin.namespaces().getDeduplicationSnapshotInterval(namespace) - -``` - - - - -```` - -#### Set deduplication snapshot interval - -Set configured `deduplicationSnapshotInterval` for a namespace. Each topic under the namespace will take a deduplication snapshot according to this interval. -`brokerDeduplicationEnabled` must be set to `true` for this property to take effect. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-deduplication-snapshot-interval test-tenant/ns1 --interval 1000 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/deduplicationSnapshotInterval|operation/setDeduplicationSnapshotInterval?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setDeduplicationSnapshotInterval(namespace, 1000) - -``` - - - - -```` - -#### Remove deduplication snapshot interval - -Remove configured `deduplicationSnapshotInterval` of a namespace (Each topic under the namespace will take a deduplication snapshot according to this interval) - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces remove-deduplication-snapshot-interval test-tenant/ns1 - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/deduplicationSnapshotInterval|operation/deleteDeduplicationSnapshotInterval?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().removeDeduplicationSnapshotInterval(namespace) - -``` - - - - -```` - -### Namespace isolation - -You can use the [Pulsar isolation policy](administration-isolation.md) to allocate resources (broker and bookie) for a namespace. - -### Unload namespaces from a broker - -You can unload a namespace, or a [namespace bundle](reference-terminology.md#namespace-bundle), from the Pulsar [broker](reference-terminology.md#broker) that is currently responsible for it. - -#### pulsar-admin - -Use the [`unload`](reference-pulsar-admin.md#unload) subcommand of the [`namespaces`](reference-pulsar-admin.md#namespaces) command. - -````mdx-code-block - - - -```shell - -$ pulsar-admin namespaces unload my-tenant/my-ns - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace/unload|operation/unloadNamespace?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().unload(namespace) - -``` - - - - -```` diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-non-partitioned-topics.md b/site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-non-partitioned-topics.md deleted file mode 100644 index e6347bb8c363a1..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-non-partitioned-topics.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -id: admin-api-non-partitioned-topics -title: Managing non-partitioned topics -sidebar_label: "Non-partitioned topics" -original_id: admin-api-non-partitioned-topics ---- - -For details of the content, refer to [manage topics](admin-api-topics.md). 
\ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-non-persistent-topics.md b/site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-non-persistent-topics.md deleted file mode 100644 index 3126a6494c7153..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-non-persistent-topics.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -id: admin-api-non-persistent-topics -title: Managing non-persistent topics -sidebar_label: "Non-Persistent topics" -original_id: admin-api-non-persistent-topics ---- - -For details of the content, refer to [manage topics](admin-api-topics.md). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-overview.md b/site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-overview.md deleted file mode 100644 index 6aa4aa067ae8d5..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-overview.md +++ /dev/null @@ -1,144 +0,0 @@ ---- -id: admin-api-overview -title: Pulsar admin interface -sidebar_label: "Overview" -original_id: admin-api-overview ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -The Pulsar admin interface enables you to manage all important entities in a Pulsar instance, such as tenants, topics, and namespaces. - -You can interact with the admin interface via: - -- The `pulsar-admin` CLI tool, which is available in the `bin` folder of your Pulsar installation: - - ```shell - - bin/pulsar-admin - - ``` - - > **Important** - > - > For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more information, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/). - -- HTTP calls, which are made against the admin {@inject: rest:REST:/} API provided by Pulsar brokers. For some RESTful APIs, they might be redirected to the owner brokers for serving with [`307 Temporary Redirect`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/307), hence the HTTP callers should handle `307 Temporary Redirect`. If you use `curl` commands, you should specify `-L` to handle redirections. - - > **Important** - > - > For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. - -- A Java client interface. - - > **Important** - > - > For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/). - -> **The REST API is the admin interface**. Both the `pulsar-admin` CLI tool and the Java client use the REST API. If you implement your own admin interface client, you should use the REST API. - -## Admin setup - -Each of the three admin interfaces (the `pulsar-admin` CLI tool, the {@inject: rest:REST:/} API, and the [Java admin API](/api/admin)) requires some special setup if you have enabled authentication in your Pulsar instance. - -````mdx-code-block - - - -If you have enabled authentication, you need to provide an auth configuration to use the `pulsar-admin` tool. By default, the configuration for the `pulsar-admin` tool is in the [`conf/client.conf`](reference-configuration.md#client) file. 
The following are the available parameters: - -|Name|Description|Default| -|----|-----------|-------| -|webServiceUrl|The web URL for the cluster.|http://localhost:8080/| -|brokerServiceUrl|The Pulsar protocol URL for the cluster.|pulsar://localhost:6650/| -|authPlugin|The authentication plugin.| | -|authParams|The authentication parameters for the cluster, as a comma-separated string.| | -|useTls|Whether or not TLS authentication will be enforced in the cluster.|false| -|tlsAllowInsecureConnection|Accept untrusted TLS certificate from client.|false| -|tlsTrustCertsFilePath|Path for the trusted TLS certificate file.| | - - - - -You can find details for the REST API exposed by Pulsar brokers in this {@inject: rest:document:/}. - - - - -To use the Java admin API, instantiate a {@inject: javadoc:PulsarAdmin:/admin/org/apache/pulsar/client/admin/PulsarAdmin} object, and specify a URL for a Pulsar broker and a {@inject: javadoc:PulsarAdminBuilder:/admin/org/apache/pulsar/client/admin/PulsarAdminBuilder}. The following is a minimal example using `localhost`: - -```java - -String url = "http://localhost:8080"; -// Pass auth-plugin class fully-qualified name if Pulsar-security enabled -String authPluginClassName = "com.org.MyAuthPluginClass"; -// Pass auth-param if auth-plugin class requires it -String authParams = "param1=value1"; -boolean useTls = false; -boolean tlsAllowInsecureConnection = false; -String tlsTrustCertsFilePath = null; -PulsarAdmin admin = PulsarAdmin.builder() -.authentication(authPluginClassName,authParams) -.serviceHttpUrl(url) -.tlsTrustCertsFilePath(tlsTrustCertsFilePath) -.allowTlsInsecureConnection(tlsAllowInsecureConnection) -.build(); - -``` - -If you use multiple brokers, you can use multi-host like Pulsar service. For example, - -```java - -String url = "http://localhost:8080,localhost:8081,localhost:8082"; -// Pass auth-plugin class fully-qualified name if Pulsar-security enabled -String authPluginClassName = "com.org.MyAuthPluginClass"; -// Pass auth-param if auth-plugin class requires it -String authParams = "param1=value1"; -boolean useTls = false; -boolean tlsAllowInsecureConnection = false; -String tlsTrustCertsFilePath = null; -PulsarAdmin admin = PulsarAdmin.builder() -.authentication(authPluginClassName,authParams) -.serviceHttpUrl(url) -.tlsTrustCertsFilePath(tlsTrustCertsFilePath) -.allowTlsInsecureConnection(tlsAllowInsecureConnection) -.build(); - -``` - - - - -```` - -## How to define Pulsar resource names when running Pulsar in Kubernetes -If you run Pulsar Functions or connectors on Kubernetes, you need to follow Kubernetes naming convention to define the names of your Pulsar resources, whichever admin interface you use. - -Kubernetes requires a name that can be used as a DNS subdomain name as defined in [RFC 1123](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names). Pulsar supports more legal characters than Kubernetes naming convention. If you create a Pulsar resource name with special characters that are not supported by Kubernetes (for example, including colons in a Pulsar namespace name), Kubernetes runtime translates the Pulsar object names into Kubernetes resource labels which are in RFC 1123-compliant forms. Consequently, you can run functions or connectors using Kubernetes runtime. 
The rules for translating Pulsar object names into Kubernetes resource labels are as below:
-
-- Truncate to 63 characters
-
-- Replace the following characters with dashes (-):
-
-  - Non-alphanumeric characters
-
-  - Underscores (_)
-
-  - Dots (.)
-
-- Replace beginning and ending non-alphanumeric characters with 0
-
-:::tip
-
-- If you get an error in translating Pulsar object names into Kubernetes resource labels (for example, you may have a naming collision if your Pulsar object name is too long) or want to customize the translating rules, see [customize Kubernetes runtime](https://pulsar.apache.org/docs/en/next/functions-runtime/#customize-kubernetes-runtime).
-- For how to configure Kubernetes runtime, see [here](https://pulsar.apache.org/docs/en/next/functions-runtime/#configure-kubernetes-runtime).
-
-:::
-
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-packages.md b/site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-packages.md
deleted file mode 100644
index efb4dcfabcdffa..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-packages.md
+++ /dev/null
@@ -1,391 +0,0 @@
----
-id: admin-api-packages
-title: Manage packages
-sidebar_label: "Packages"
-original_id: admin-api-packages
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-> **Important**
->
-> This page only shows **some frequently used operations**.
->
-> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more information, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/).
->
-> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc.
->
-> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/).
-
-Package management enables version management and simplifies the upgrade and rollback processes for Functions, Sinks, and Sources. When you use the same function, sink, or source in different namespaces, you can upload it to a common package management system.
-
-## Package name
-
-A `package` is identified by five parts: `type`, `tenant`, `namespace`, `package name`, and `version`.
-
-| Part | Description |
-|-------|-------------|
-|`type` |The type of the package. The following types are supported: `function`, `sink` and `source`. |
-| `name`|The fully qualified name of the package: `<tenant>/<namespace>/<package name>`.|
-|`version`|The version of the package.|
-
-The following is a code sample.
-
-```java
-
-class PackageName {
-    private final PackageType type;
-    private final String namespace;
-    private final String tenant;
-    private final String name;
-    private final String version;
-}
-
-enum PackageType {
-    FUNCTION("function"), SINK("sink"), SOURCE("source");
-}
-
-```
-
-## Package URL
-A package is located using a URL. The package URL is written in the following format:
-
-```shell
-
-<type>://<tenant>/<namespace>/<package name>@<version>
-
-```
-
-The following are package URL examples:
-
-`sink://public/default/mysql-sink@1.0`
-`function://my-tenant/my-ns/my-function@0.1`
-`source://my-tenant/my-ns/mysql-cdc-source@2.3`
-
-The package management system stores the data, versions and metadata of each package. The metadata is shown in the following table.
-
-| metadata | Description |
-|----------|-------------|
-|description|The description of the package.|
-|contact |The contact information of a package. For example, team email.|
-|create_time| The time when the package is created.|
-|modification_time| The time when the package is modified.|
-|properties |A key/value map that stores your own information.|
-
-## Permissions
-
-The packages are organized by the tenant and namespace, so you can apply the tenant and namespace permissions to packages directly.
-
-## Package resources
-You can use the package management system with command line tools, REST API, and the Java client.
-
-### Upload a package
-You can upload a package to the package management service in the following ways.
-
-````mdx-code-block
-
-
-
-```shell
-
-bin/pulsar-admin packages upload function://public/default/example@v0.1 --path package-file --description package-description
-
-```
-
-
-
-
-{@inject: endpoint|POST|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version|operation/upload?version=@pulsar:version_number@}
-
-
-
-
-Upload a package to the package management service synchronously.
-
-```java
-
-   void upload(PackageMetadata metadata, String packageName, String path) throws PulsarAdminException;
-
-```
-
-Upload a package to the package management service asynchronously.
-
-```java
-
-   CompletableFuture<Void> uploadAsync(PackageMetadata metadata, String packageName, String path);
-
-```
-
-
-
-
-````
-
-### Download a package
-You can download a package from the package management service in the following ways.
-
-````mdx-code-block
-
-
-
-```shell
-
-bin/pulsar-admin packages download function://public/default/example@v0.1 --path package-file
-
-```
-
-
-
-
-{@inject: endpoint|GET|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version|operation/download?version=@pulsar:version_number@}
-
-
-
-
-Download a package from the package management service synchronously.
-
-```java
-
-   void download(String packageName, String path) throws PulsarAdminException;
-
-```
-
-Download a package from the package management service asynchronously.
-
-```java
-
-   CompletableFuture<Void> downloadAsync(String packageName, String path);
-
-```
-
-
-
-
-````
-
-### List all versions of a package
-You can get a list of all versions of a package in the following ways.
-````mdx-code-block
-
-
-
-```shell
-
-bin/pulsar-admin packages list-versions function://public/default/example
-
-```
-
-
-
-
-{@inject: endpoint|GET|/admin/v3/packages/:type/:tenant/:namespace/:packageName|operation/listPackageVersion?version=@pulsar:version_number@}
-
-
-
-
-List all versions of a package synchronously.
-
-```java
-
-   List<String> listPackageVersions(String packageName) throws PulsarAdminException;
-
-```
-
-List all versions of a package asynchronously.
-
-```java
-
-   CompletableFuture<List<String>> listPackageVersionsAsync(String packageName);
-
-```
-
-
-
-
-````
-
-### List all the specified type packages under a namespace
-You can get a list of all the packages with the given type in a namespace in the following ways.
-````mdx-code-block
-
-
-
-```shell
-
-bin/pulsar-admin packages list --type function public/default
-
-```
-
-
-
-
-{@inject: endpoint|GET|/admin/v3/packages/:type/:tenant/:namespace|operation/listPackages?version=@pulsar:version_number@}
-
-
-
-
-List all the packages with the given type in a namespace synchronously.
-
-```java
-
-   List<String> listPackages(String type, String namespace) throws PulsarAdminException;
-
-```
-
-List all the packages with the given type in a namespace asynchronously.
-
-```java
-
-   CompletableFuture<List<String>> listPackagesAsync(String type, String namespace);
-
-```
-
-
-
-
-````
-
-### Get the metadata of a package
-You can get the metadata of a package in the following ways.
-
-````mdx-code-block
-
-
-
-```shell
-
-bin/pulsar-admin packages get-metadata function://public/default/test@v1
-
-```
-
-
-
-
-{@inject: endpoint|GET|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version/metadata|operation/getMeta?version=@pulsar:version_number@}
-
-
-
-
-Get the metadata of a package synchronously.
-
-```java
-
-   PackageMetadata getMetadata(String packageName) throws PulsarAdminException;
-
-```
-
-Get the metadata of a package asynchronously.
-
-```java
-
-   CompletableFuture<PackageMetadata> getMetadataAsync(String packageName);
-
-```
-
-
-
-
-````
-
-### Update the metadata of a package
-You can update the metadata of a package in the following ways.
-````mdx-code-block
-
-
-
-```shell
-
-bin/pulsar-admin packages update-metadata function://public/default/example@v0.1 --description update-description
-
-```
-
-
-
-
-{@inject: endpoint|PUT|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version/metadata|operation/updateMeta?version=@pulsar:version_number@}
-
-
-
-
-Update the metadata of a package synchronously.
-
-```java
-
-   void updateMetadata(String packageName, PackageMetadata metadata) throws PulsarAdminException;
-
-```
-
-Update the metadata of a package asynchronously.
-
-```java
-
-   CompletableFuture<Void> updateMetadataAsync(String packageName, PackageMetadata metadata);
-
-```
-
-
-
-
-````
-
-### Delete a specified package
-You can delete a specified package with its package name in the following ways.
-
-````mdx-code-block
-
-
-
-The following command example deletes a package of version 0.1.
-
-```shell
-
-bin/pulsar-admin packages delete function://public/default/example@v0.1
-
-```
-
-
-
-
-{@inject: endpoint|DELETE|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version|operation/delete?version=@pulsar:version_number@}
-
-
-
-
-Delete a specified package synchronously.
-
-```java
-
-   void delete(String packageName) throws PulsarAdminException;
-
-```
-
-Delete a specified package asynchronously.
-
-```java
-
-   CompletableFuture<Void> deleteAsync(String packageName);
-
-```
-
-
-
-
-````
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-partitioned-topics.md b/site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-partitioned-topics.md
deleted file mode 100644
index 5ce182282e0324..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-partitioned-topics.md
+++ /dev/null
@@ -1,8 +0,0 @@
----
-id: admin-api-partitioned-topics
-title: Managing partitioned topics
-sidebar_label: "Partitioned topics"
-original_id: admin-api-partitioned-topics
----
-
-For details of the content, refer to [manage topics](admin-api-topics.md).
\ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-permissions.md b/site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-permissions.md deleted file mode 100644 index 741d58ac7bf7a5..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-permissions.md +++ /dev/null @@ -1,189 +0,0 @@ ---- -id: admin-api-permissions -title: Managing permissions -sidebar_label: "Permissions" -original_id: admin-api-permissions ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -> **Important** -> -> This page only shows **some frequently used operations**. -> -> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more information, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/). -> -> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. -> -> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/). - -Pulsar allows you to grant namespace-level or topic-level permission to users. - -- If you grant a namespace-level permission to a user, then the user can access all the topics under the namespace. - -- If you grant a topic-level permission to a user, then the user can access only the topic. - -The chapters below demonstrate how to grant namespace-level permissions to users. For how to grant topic-level permissions to users, see [manage topics](admin-api-topics.md/#grant-permission). - -## Grant permissions - -You can grant permissions to specific roles for lists of operations such as `produce` and `consume`. - -````mdx-code-block - - - -Use the [`grant-permission`](reference-pulsar-admin.md#grant-permission) subcommand and specify a namespace, actions using the `--actions` flag, and a role using the `--role` flag: - -```shell - -$ pulsar-admin namespaces grant-permission test-tenant/ns1 \ - --actions produce,consume \ - --role admin10 - -``` - -Wildcard authorization can be performed when `authorizationAllowWildcardsMatching` is set to `true` in `broker.conf`. - -e.g. - -```shell - -$ pulsar-admin namespaces grant-permission test-tenant/ns1 \ - --actions produce,consume \ - --role 'my.role.*' - -``` - -Then, roles `my.role.1`, `my.role.2`, `my.role.foo`, `my.role.bar`, etc. can produce and consume. - -```shell - -$ pulsar-admin namespaces grant-permission test-tenant/ns1 \ - --actions produce,consume \ - --role '*.role.my' - -``` - -Then, roles `1.role.my`, `2.role.my`, `foo.role.my`, `bar.role.my`, etc. can produce and consume. - -**Note**: A wildcard matching works at **the beginning or end of the role name only**. - -e.g. - -```shell - -$ pulsar-admin namespaces grant-permission test-tenant/ns1 \ - --actions produce,consume \ - --role 'my.*.role' - -``` - -In this case, only the role `my.*.role` has permissions. -Roles `my.1.role`, `my.2.role`, `my.foo.role`, `my.bar.role`, etc. **cannot** produce and consume. 
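-
-A minimal sketch of the related `conf/broker.conf` settings (the surrounding authentication and authorization settings are illustrative; only `authorizationAllowWildcardsMatching` is discussed above):
-
-```properties
-
-# Authorization must be enabled for role-based grants to be enforced.
-authenticationEnabled=true
-authorizationEnabled=true
-# Allow roles such as 'my.role.*' or '*.role.my' in grant-permission commands.
-authorizationAllowWildcardsMatching=true
-
-```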
- - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/permissions/:role|operation/grantPermissionOnNamespace?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().grantPermissionOnNamespace(namespace, role, getAuthActions(actions)); - -``` - - - - -```` - -## Get permissions - -You can see which permissions have been granted to which roles in a namespace. - -````mdx-code-block - - - -Use the [`permissions`](reference-pulsar-admin#permissions) subcommand and specify a namespace: - -```shell - -$ pulsar-admin namespaces permissions test-tenant/ns1 -{ - "admin10": [ - "produce", - "consume" - ] -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/permissions|operation/getPermissions?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getPermissions(namespace); - -``` - - - - -```` - -## Revoke permissions - -You can revoke permissions from specific roles, which means that those roles will no longer have access to the specified namespace. - -````mdx-code-block - - - -Use the [`revoke-permission`](reference-pulsar-admin.md#revoke-permission) subcommand and specify a namespace and a role using the `--role` flag: - -```shell - -$ pulsar-admin namespaces revoke-permission test-tenant/ns1 \ - --role admin10 - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/permissions/:role|operation/revokePermissionsOnNamespace?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().revokePermissionsOnNamespace(namespace, role); - -``` - - - - -```` \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-persistent-topics.md b/site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-persistent-topics.md deleted file mode 100644 index 50d135b72f5424..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-persistent-topics.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -id: admin-api-persistent-topics -title: Managing persistent topics -sidebar_label: "Persistent topics" -original_id: admin-api-persistent-topics ---- - -For details of the content, refer to [manage topics](admin-api-topics.md). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-schemas.md b/site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-schemas.md deleted file mode 100644 index 9ffe21f5b0f750..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-schemas.md +++ /dev/null @@ -1,7 +0,0 @@ ---- -id: admin-api-schemas -title: Managing Schemas -sidebar_label: "Schemas" -original_id: admin-api-schemas ---- - diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-tenants.md b/site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-tenants.md deleted file mode 100644 index 2a5e308e6488cb..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-tenants.md +++ /dev/null @@ -1,238 +0,0 @@ ---- -id: admin-api-tenants -title: Managing Tenants -sidebar_label: "Tenants" -original_id: admin-api-tenants ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -> **Important** -> -> This page only shows **some frequently used operations**. -> -> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more information, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/). 
->
-> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc.
->
-> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/).
-
-Tenants, like namespaces, can be managed using the [admin API](admin-api-overview.md). There are currently two configurable aspects of tenants:
-
-* Admin roles
-* Allowed clusters
-
-## Tenant resources
-
-### List
-
-You can list all of the tenants associated with an [instance](reference-terminology.md#instance).
-
-````mdx-code-block
-
-
-
-Use the [`list`](reference-pulsar-admin.md#tenants-list) subcommand.
-
-```shell
-
-$ pulsar-admin tenants list
-my-tenant-1
-my-tenant-2
-
-```
-
-
-
-
-{@inject: endpoint|GET|/admin/v2/tenants|operation/getTenants?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.tenants().getTenants();
-
-```
-
-
-
-
-````
-
-### Create
-
-You can create a new tenant.
-
-````mdx-code-block
-
-
-
-Use the [`create`](reference-pulsar-admin.md#tenants-create) subcommand:
-
-```shell
-
-$ pulsar-admin tenants create my-tenant
-
-```
-
-When creating a tenant, you can assign admin roles using the `-r`/`--admin-roles` flag. You can specify multiple roles as a comma-separated list. Here are some examples:
-
-```shell
-
-$ pulsar-admin tenants create my-tenant \
-  --admin-roles role1,role2,role3
-
-$ pulsar-admin tenants create my-tenant \
-  -r role1
-
-```
-
-
-
-
-{@inject: endpoint|POST|/admin/v2/tenants/:tenant|operation/createTenant?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.tenants().createTenant(tenantName, tenantInfo);
-
-```
-
-
-
-
-````
-
-### Get configuration
-
-You can fetch the [configuration](reference-configuration.md) for an existing tenant at any time.
-
-````mdx-code-block
-
-
-
-Use the [`get`](reference-pulsar-admin.md#tenants-get) subcommand and specify the name of the tenant. Here's an example:
-
-```shell
-
-$ pulsar-admin tenants get my-tenant
-{
-  "adminRoles": [
-    "admin1",
-    "admin2"
-  ],
-  "allowedClusters": [
-    "cl1",
-    "cl2"
-  ]
-}
-
-```
-
-
-
-
-{@inject: endpoint|GET|/admin/v2/tenants/:tenant|operation/getTenant?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.tenants().getTenantInfo(tenantName);
-
-```
-
-
-
-
-````
-
-### Delete
-
-Tenants can be deleted from a Pulsar [instance](reference-terminology.md#instance).
-
-````mdx-code-block
-
-
-
-Use the [`delete`](reference-pulsar-admin.md#tenants-delete) subcommand and specify the name of the tenant.
-
-```shell
-
-$ pulsar-admin tenants delete my-tenant
-
-```
-
-
-
-
-{@inject: endpoint|DELETE|/admin/v2/tenants/:tenant|operation/deleteTenant?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.tenants().deleteTenant(tenantName);
-
-```
-
-
-
-
-````
-
-### Update
-
-You can update a tenant's configuration.
-
-````mdx-code-block
-
-
-
-Use the [`update`](reference-pulsar-admin.md#tenants-update) subcommand.
- -```shell - -$ pulsar-admin tenants update my-tenant - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/tenants/:cluster|operation/updateTenant?version=@pulsar:version_number@} - - - - -```java - -admin.tenants().updateTenant(tenantName, tenantInfo); - -``` - - - - -```` diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-topics.md b/site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-topics.md deleted file mode 100644 index d858a9e828418b..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/admin-api-topics.md +++ /dev/null @@ -1,2142 +0,0 @@ ---- -id: admin-api-topics -title: Manage topics -sidebar_label: "Topics" -original_id: admin-api-topics ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -> **Important** -> -> This page only shows **some frequently used operations**. -> -> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more information, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/). -> -> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. -> -> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/). - -Pulsar has persistent and non-persistent topics. Persistent topic is a logical endpoint for publishing and consuming messages. The topic name structure for persistent topics is: - -```shell - -persistent://tenant/namespace/topic - -``` - -Non-persistent topics are used in applications that only consume real-time published messages and do not need persistent guarantee. In this way, it reduces message-publish latency by removing overhead of persisting messages. The topic name structure for non-persistent topics is: - -```shell - -non-persistent://tenant/namespace/topic - -``` - -## Manage topic resources -Whether it is persistent or non-persistent topic, you can obtain the topic resources through `pulsar-admin` tool, REST API and Java. - -:::note - -In REST API, `:schema` stands for persistent or non-persistent. `:tenant`, `:namespace`, `:x` are variables, replace them with the real tenant, namespace, and `x` names when using them. -Take {@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getList?version=@pulsar:version_number@} as an example, to get the list of persistent topics in REST API, use `https://pulsar.apache.org/admin/v2/persistent/my-tenant/my-namespace`. To get the list of non-persistent topics in REST API, use `https://pulsar.apache.org/admin/v2/non-persistent/my-tenant/my-namespace`. - -::: - -### List of topics - -You can get the list of topics under a given namespace in the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics list \ - my-tenant/my-namespace - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getList?version=@pulsar:version_number@} - - - - -```java - -String namespace = "my-tenant/my-namespace"; -admin.topics().getList(namespace); - -``` - - - - -```` - -### Grant permission - -You can grant permissions on a client role to perform specific actions on a given topic in the following ways. 
- -````mdx-code-block - - - -```shell - -$ pulsar-admin topics grant-permission \ - --actions produce,consume --role application1 \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/permissions/:role|operation/grantPermissionsOnTopic?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String role = "test-role"; -Set actions = Sets.newHashSet(AuthAction.produce, AuthAction.consume); -admin.topics().grantPermission(topic, role, actions); - -``` - - - - -```` - -### Get permission - -You can fetch permission in the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics permissions \ - persistent://test-tenant/ns1/tp1 \ - -{ - "application1": [ - "consume", - "produce" - ] -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/permissions|operation/getPermissionsOnTopic?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().getPermissions(topic); - -``` - - - - -```` - -### Revoke permission - -You can revoke a permission granted on a client role in the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics revoke-permission \ - --role application1 \ - persistent://test-tenant/ns1/tp1 \ - -{ - "application1": [ - "consume", - "produce" - ] -} - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/:schema/:tenant/:namespace/:topic/permissions/:role|operation/revokePermissionsOnTopic?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String role = "test-role"; -admin.topics().revokePermissions(topic, role); - -``` - - - - -```` - -### Delete topic - -You can delete a topic in the following ways. You cannot delete a topic if any active subscription or producers is connected to the topic. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics delete \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/:schema/:tenant/:namespace/:topic|operation/deleteTopic?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().delete(topic); - -``` - - - - -```` - -### Unload topic - -You can unload a topic in the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics unload \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/:schema/:tenant/:namespace/:topic/unload|operation/unloadTopic?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().unload(topic); - -``` - - - - -```` - -### Get stats - -You can check the following statistics of a given non-partitioned topic. - - - **msgRateIn**: The sum of all local and replication publishers' publish rates (msg/s). - - - **msgThroughputIn**: The sum of all local and replication publishers' publish rates (bytes/s). - - - **msgRateOut**: The sum of all local and replication consumers' dispatch rates(msg/s). - - - **msgThroughputOut**: The sum of all local and replication consumers' dispatch rates (bytes/s). - - - **averageMsgSize**: The average size (in bytes) of messages published within the last interval. - - - **storageSize**: The sum of the ledgers' storage size for this topic. The space used to store the messages for the topic. 
- - - **publishers**: The list of all local publishers into the topic. The list ranges from zero to thousands. - - - **msgRateIn**: The total rate of messages (msg/s) published by this publisher. - - - **msgThroughputIn**: The total throughput (bytes/s) of the messages published by this publisher. - - - **averageMsgSize**: The average message size in bytes from this publisher within the last interval. - - - **producerId**: The internal identifier for this producer on this topic. - - - **producerName**: The internal identifier for this producer, generated by the client library. - - - **address**: The IP address and source port for the connection of this producer. - - - **connectedSince**: The timestamp when this producer is created or reconnected last time. - - - **subscriptions**: The list of all local subscriptions to the topic. - - - **my-subscription**: The name of this subscription. It is defined by the client. - - - **msgRateOut**: The total rate of messages (msg/s) delivered on this subscription. - - - **msgThroughputOut**: The total throughput (bytes/s) delivered on this subscription. - - - **msgBacklog**: The number of messages in the subscription backlog. - - - **type**: The subscription type. - - - **msgRateExpired**: The rate at which messages were discarded instead of dispatched from this subscription due to TTL. - - - **lastExpireTimestamp**: The timestamp of the last message expire execution. - - - **lastConsumedFlowTimestamp**: The timestamp of the last flow command received. - - - **lastConsumedTimestamp**: The latest timestamp of all the consumed timestamp of the consumers. - - - **lastAckedTimestamp**: The latest timestamp of all the acked timestamp of the consumers. - - - **consumers**: The list of connected consumers for this subscription. - - - **msgRateOut**: The total rate of messages (msg/s) delivered to the consumer. - - - **msgThroughputOut**: The total throughput (bytes/s) delivered to the consumer. - - - **consumerName**: The internal identifier for this consumer, generated by the client library. - - - **availablePermits**: The number of messages that the consumer has space for in the client library's listen queue. `0` means the client library's queue is full and `receive()` isn't being called. A non-zero value means this consumer is ready for dispatched messages. - - - **unackedMessages**: The number of unacknowledged messages for the consumer, where an unacknowledged message is one that has been sent to the consumer but not yet acknowledged. This field is only meaningful when using a subscription that tracks individual message acknowledgement. - - - **blockedConsumerOnUnackedMsgs**: The flag used to verify if the consumer is blocked due to reaching threshold of the unacknowledged messages. - - - **lastConsumedTimestamp**: The timestamp when the consumer reads a message the last time. - - - **lastAckedTimestamp**: The timestamp when the consumer acknowledges a message the last time. - - - **replication**: This section gives the stats for cross-colo replication of this topic - - - **msgRateIn**: The total rate (msg/s) of messages received from the remote cluster. - - - **msgThroughputIn**: The total throughput (bytes/s) received from the remote cluster. - - - **msgRateOut**: The total rate of messages (msg/s) delivered to the replication-subscriber. - - - **msgThroughputOut**: The total throughput (bytes/s) delivered to the replication-subscriber. - - - **msgRateExpired**: The total rate of messages (msg/s) expired. 
- - - **replicationBacklog**: The number of messages pending to be replicated to remote cluster. - - - **connected**: Whether the outbound replicator is connected. - - - **replicationDelayInSeconds**: How long the oldest message has been waiting to be sent through the connection, if connected is `true`. - - - **inboundConnection**: The IP and port of the broker in the remote cluster's publisher connection to this broker. - - - **inboundConnectedSince**: The TCP connection being used to publish messages to the remote cluster. If there are no local publishers connected, this connection is automatically closed after a minute. - - - **outboundConnection**: The address of the outbound replication connection. - - - **outboundConnectedSince**: The timestamp of establishing outbound connection. - -The following is an example of a topic status. - -```json - -{ - "msgRateIn": 4641.528542257553, - "msgThroughputIn": 44663039.74947473, - "msgRateOut": 0, - "msgThroughputOut": 0, - "averageMsgSize": 1232439.816728665, - "storageSize": 135532389160, - "publishers": [ - { - "msgRateIn": 57.855383881403576, - "msgThroughputIn": 558994.7078932219, - "averageMsgSize": 613135, - "producerId": 0, - "producerName": null, - "address": null, - "connectedSince": null - } - ], - "subscriptions": { - "my-topic_subscription": { - "msgRateOut": 0, - "msgThroughputOut": 0, - "msgBacklog": 116632, - "type": null, - "msgRateExpired": 36.98245516804671, - "consumers": [] - } - }, - "replication": {} -} - -``` - -To get the status of a topic, you can use the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics stats \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/stats|operation/getStats?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().getStats(topic); - -``` - - - - -```` - -### Get internal stats - -You can get the detailed statistics of a topic. - - - **entriesAddedCounter**: Messages published since this broker loaded this topic. - - - **numberOfEntries**: The total number of messages being tracked. - - - **totalSize**: The total storage size in bytes of all messages. - - - **currentLedgerEntries**: The count of messages written to the ledger that is currently open for writing. - - - **currentLedgerSize**: The size in bytes of messages written to the ledger that is currently open for writing. - - - **lastLedgerCreatedTimestamp**: The time when the last ledger is created. - - - **lastLedgerCreationFailureTimestamp:** The time when the last ledger failed. - - - **waitingCursorsCount**: The number of cursors that are "caught up" and waiting for a new message to be published. - - - **pendingAddEntriesCount**: The number of messages that complete (asynchronous) write requests. - - - **lastConfirmedEntry**: The ledgerid:entryid of the last message that is written successfully. If the entryid is `-1`, then the ledger is open, yet no entries are written. - - - **state**: The state of this ledger for writing. The state `LedgerOpened` means that a ledger is open for saving published messages. - - - **ledgers**: The ordered list of all ledgers for this topic holding messages. - - - **ledgerId**: The ID of this ledger. - - - **entries**: The total number of entries that belong to this ledger. - - - **size**: The size of messages written to this ledger (in bytes). - - - **offloaded**: Whether this ledger is offloaded. 
- - - **metadata**: The ledger metadata. - - - **schemaLedgers**: The ordered list of all ledgers for this topic schema. - - - **ledgerId**: The ID of this ledger. - - - **entries**: The total number of entries that belong to this ledger. - - - **size**: The size of messages written to this ledger (in bytes). - - - **offloaded**: Whether this ledger is offloaded. - - - **metadata**: The ledger metadata. - - - **compactedLedger**: The ledgers holding un-acked messages after topic compaction. - - - **ledgerId**: The ID of this ledger. - - - **entries**: The total number of entries that belong to this ledger. - - - **size**: The size of messages written to this ledger (in bytes). - - - **offloaded**: Whether this ledger is offloaded. The value is `false` for the compacted topic ledger. - - - **cursors**: The list of all cursors on this topic. Each subscription in the topic stats has a cursor. - - - **markDeletePosition**: All messages before the markDeletePosition are acknowledged by the subscriber. - - - **readPosition**: The latest position of subscriber for reading message. - - - **waitingReadOp**: This is true when the subscription has read the latest message published to the topic and is waiting for new messages to be published. - - - **pendingReadOps**: The counter for how many outstanding read requests to the BookKeepers in progress. - - - **messagesConsumedCounter**: The number of messages this cursor has acked since this broker loaded this topic. - - - **cursorLedger**: The ledger being used to persistently store the current markDeletePosition. - - - **cursorLedgerLastEntry**: The last entryid used to persistently store the current markDeletePosition. - - - **individuallyDeletedMessages**: If acknowledges are being done out of order, the ranges of messages acknowledged between the markDeletePosition and the read-position shows. - - - **lastLedgerSwitchTimestamp**: The last time the cursor ledger is rolled over. - - - **state**: The state of the cursor ledger: `Open` means you have a cursor ledger for saving updates of the markDeletePosition. - -The following is an example of the detailed statistics of a topic. - -```json - -{ - "entriesAddedCounter":0, - "numberOfEntries":0, - "totalSize":0, - "currentLedgerEntries":0, - "currentLedgerSize":0, - "lastLedgerCreatedTimestamp":"2021-01-22T21:12:14.868+08:00", - "lastLedgerCreationFailureTimestamp":null, - "waitingCursorsCount":0, - "pendingAddEntriesCount":0, - "lastConfirmedEntry":"3:-1", - "state":"LedgerOpened", - "ledgers":[ - { - "ledgerId":3, - "entries":0, - "size":0, - "offloaded":false, - "metadata":null - } - ], - "cursors":{ - "test":{ - "markDeletePosition":"3:-1", - "readPosition":"3:-1", - "waitingReadOp":false, - "pendingReadOps":0, - "messagesConsumedCounter":0, - "cursorLedger":4, - "cursorLedgerLastEntry":1, - "individuallyDeletedMessages":"[]", - "lastLedgerSwitchTimestamp":"2021-01-22T21:12:14.966+08:00", - "state":"Open", - "numberOfEntriesSinceFirstNotAckedMessage":0, - "totalNonContiguousDeletedMessagesRange":0, - "properties":{ - - } - } - }, - "schemaLedgers":[ - { - "ledgerId":1, - "entries":11, - "size":10, - "offloaded":false, - "metadata":null - } - ], - "compactedLedger":{ - "ledgerId":-1, - "entries":-1, - "size":-1, - "offloaded":false, - "metadata":null - } -} - -``` - -To get the internal status of a topic, you can use the following ways. 
-````mdx-code-block - - - -```shell - -$ pulsar-admin topics stats-internal \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/internalStats|operation/getInternalStats?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().getInternalStats(topic); - -``` - - - - -```` - -### Peek messages - -You can peek a number of messages for a specific subscription of a given topic in the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics peek-messages \ - --count 10 --subscription my-subscription \ - persistent://test-tenant/ns1/tp1 \ - -Message ID: 315674752:0 -Properties: { "X-Pulsar-publish-time" : "2015-07-13 17:40:28.451" } -msg-payload - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/position/:messagePosition|operation/peekNthMessage?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String subName = "my-subscription"; -int numMessages = 1; -admin.topics().peekMessages(topic, subName, numMessages); - -``` - - - - -```` - -### Get message by ID - -You can fetch the message with the given ledger ID and entry ID in the following ways. - -````mdx-code-block - - - -```shell - -$ ./bin/pulsar-admin topics get-message-by-id \ - persistent://public/default/my-topic \ - -l 10 -e 0 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/ledger/:ledgerId/entry/:entryId|operation/getMessageById?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -long ledgerId = 10; -long entryId = 10; -admin.topics().getMessageById(topic, ledgerId, entryId); - -``` - - - - -```` - -### Skip messages - -You can skip a number of messages for a specific subscription of a given topic in the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics skip \ - --count 10 --subscription my-subscription \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/skip/:numMessages|operation/skipMessages?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String subName = "my-subscription"; -int numMessages = 1; -admin.topics().skipMessages(topic, subName, numMessages); - -``` - - - - -```` - -### Skip all messages - -You can skip all the old messages for a specific subscription of a given topic. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics skip-all \ - --subscription my-subscription \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/skip_all|operation/skipAllMessages?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String subName = "my-subscription"; -admin.topics().skipAllMessages(topic, subName); - -``` - - - - -```` - -### Reset cursor - -You can reset a subscription cursor position back to the position which is recorded X minutes before. It essentially calculates time and position of cursor at X minutes before and resets it at that position. You can reset the cursor in the following ways. 
-
-````mdx-code-block
-
-```shell
-
-$ pulsar-admin topics reset-cursor \
-  --subscription my-subscription --time 10 \
-  persistent://test-tenant/ns1/tp1
-
-```
-
-{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/resetcursor/:timestamp|operation/resetCursor?version=@pulsar:version_number@}
-
-```java
-
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-String subName = "my-subscription";
-long timestamp = 2342343L;
-admin.topics().resetCursor(topic, subName, timestamp);
-
-```
-
-````
-
-### Look up topic's owner broker
-
-You can locate the owner broker of the given topic in the following ways.
-
-````mdx-code-block
-
-```shell
-
-$ pulsar-admin topics lookup \
-  persistent://test-tenant/ns1/tp1
-
- "pulsar://broker1.org.com:4480"
-
-```
-
-{@inject: endpoint|GET|/lookup/v2/topic/:schema/:tenant/:namespace/:topic|/?version=@pulsar:version_number@}
-
-```java
-
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-admin.lookups().lookupTopic(topic);
-
-```
-
-````
-
-### Get bundle
-
-You can get the range of the bundle that the given topic belongs to in the following ways.
-
-````mdx-code-block
-
-```shell
-
-$ pulsar-admin topics bundle-range \
-  persistent://test-tenant/ns1/tp1
-
- "0x00000000_0xffffffff"
-
-```
-
-{@inject: endpoint|GET|/lookup/v2/topic/:topic_domain/:tenant/:namespace/:topic/bundle|/?version=@pulsar:version_number@}
-
-```java
-
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-admin.lookups().getBundleRange(topic);
-
-```
-
-````
-
-### Get subscriptions
-
-You can check all subscription names for a given topic in the following ways.
-
-````mdx-code-block
-
-```shell
-
-$ pulsar-admin topics subscriptions \
-  persistent://test-tenant/ns1/tp1
-
- my-subscription
-
-```
-
-{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/subscriptions|operation/getSubscriptions?version=@pulsar:version_number@}
-
-```java
-
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-admin.topics().getSubscriptions(topic);
-
-```
-
-````
-
-### Get last message ID
-
-You can get the last committed message ID for a persistent topic. This feature is available since the 2.3.0 release.
-
-````mdx-code-block
-
-```shell
-
-pulsar-admin topics last-message-id topic-name
-
-```
-
-{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/lastMessageId|operation/getLastMessageId?version=@pulsar:version_number@}
-
-```java
-
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-admin.topics().getLastMessageId(topic);
-
-```
-
-````
-
-
-### Configure deduplication snapshot interval
-
-#### Get deduplication snapshot interval
-
-To get the topic-level deduplication snapshot interval, use one of the following methods.
-
-````mdx-code-block
-
-```
-
-pulsar-admin topics get-deduplication-snapshot-interval options
-
-```
-
-{@inject: endpoint|GET|/admin/v2/topics/:tenant/:namespace/:topic/deduplicationSnapshotInterval|operation/getDeduplicationSnapshotInterval?version=@pulsar:version_number@}
-
-```java
-
-admin.topics().getDeduplicationSnapshotInterval(topic);
-
-```
-
-````
-
-#### Set deduplication snapshot interval
-
-To set the topic-level deduplication snapshot interval, use one of the following methods.
-
-> **Prerequisite** `brokerDeduplicationEnabled` must be set to `true`.
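-
-For example, a minimal `broker.conf` fragment that satisfies this prerequisite might look like the following sketch (keep the rest of your existing broker configuration unchanged):
-
-```properties
-
-# Required before topic-level deduplication snapshot intervals take effect.
-brokerDeduplicationEnabled=true
-
-```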
- -````mdx-code-block - - - -``` - -pulsar-admin topics set-deduplication-snapshot-interval options - -``` - - - - -{@inject: endpoint|POST|/admin/v2/topics/:tenant/:namespace/:topic/deduplicationSnapshotInterval|operation/setDeduplicationSnapshotInterval?version=@pulsar:version_number@} - - - - -```java - -admin.topics().setDeduplicationSnapshotInterval(topic, 1000) - -``` - - - - -```` - -#### Remove deduplication snapshot interval - -To remove the topic-level deduplication snapshot interval, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics remove-deduplication-snapshot-interval options - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/topics/:tenant/:namespace/:topic/deduplicationSnapshotInterval|operation/deleteDeduplicationSnapshotInterval?version=@pulsar:version_number@} - - - - -```java - -admin.topics().removeDeduplicationSnapshotInterval(topic) - -``` - - - - -```` - - -### Configure inactive topic policies - -#### Get inactive topic policies - -To get the topic-level inactive topic policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics get-inactive-topic-policies options - -``` - - - - -{@inject: endpoint|GET|/admin/v2/topics/:tenant/:namespace/:topic/inactiveTopicPolicies|operation/getInactiveTopicPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getInactiveTopicPolicies(topic) - -``` - - - - -```` - -#### Set inactive topic policies - -To set the topic-level inactive topic policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics set-inactive-topic-policies options - -``` - - - - -{@inject: endpoint|POST|/admin/v2/topics/:tenant/:namespace/:topic/inactiveTopicPolicies|operation/setInactiveTopicPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().setInactiveTopicPolicies(topic, inactiveTopicPolicies) - -``` - - - - -```` - -#### Remove inactive topic policies - -To remove the topic-level inactive topic policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics remove-inactive-topic-policies options - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/topics/:tenant/:namespace/:topic/inactiveTopicPolicies|operation/removeInactiveTopicPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().removeInactiveTopicPolicies(topic) - -``` - - - - -```` - - -### Configure offload policies - -#### Get offload policies - -To get the topic-level offload policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics get-offload-policies options - -``` - - - - -{@inject: endpoint|GET|/admin/v2/topics/:tenant/:namespace/:topic/offloadPolicies|operation/getOffloadPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getOffloadPolicies(topic) - -``` - - - - -```` - -#### Set offload policies - -To set the topic-level offload policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics set-offload-policies options - -``` - - - - -{@inject: endpoint|POST|/admin/v2/topics/:tenant/:namespace/:topic/offloadPolicies|operation/setOffloadPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().setOffloadPolicies(topic, offloadPolicies) - -``` - - - - -```` - -#### Remove offload policies - -To remove the topic-level offload policies, use one of the following methods. 
- -````mdx-code-block - - - -``` - -pulsar-admin topics remove-offload-policies options - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/topics/:tenant/:namespace/:topic/offloadPolicies|operation/removeOffloadPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().removeOffloadPolicies(topic) - -``` - - - - -```` - - -## Manage non-partitioned topics -You can use Pulsar [admin API](admin-api-overview.md) to create, delete and check status of non-partitioned topics. - -### Create -Non-partitioned topics must be explicitly created. When creating a new non-partitioned topic, you need to provide a name for the topic. - -By default, 60 seconds after creation, topics are considered inactive and deleted automatically to avoid generating trash data. To disable this feature, set `brokerDeleteInactiveTopicsEnabled` to `false`. To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to a specific value. - -For more information about the two parameters, see [here](reference-configuration.md#broker). - -You can create non-partitioned topics in the following ways. -````mdx-code-block - - - -When you create non-partitioned topics with the [`create`](reference-pulsar-admin.md#create-3) command, you need to specify the topic name as an argument. - -```shell - -$ bin/pulsar-admin topics create \ - persistent://my-tenant/my-namespace/my-topic - -``` - -:::note - -When you create a non-partitioned topic with the suffix '-partition-' followed by numeric value like 'xyz-topic-partition-x' for the topic name, if a partitioned topic with same suffix 'xyz-topic-partition-y' exists, then the numeric value(x) for the non-partitioned topic must be larger than the number of partitions(y) of the partitioned topic. Otherwise, you cannot create such a non-partitioned topic. - -::: - - - - -{@inject: endpoint|PUT|/admin/v2/:schema/:tenant/:namespace/:topic|operation/createNonPartitionedTopic?version=@pulsar:version_number@} - - - - -```java - -String topicName = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().createNonPartitionedTopic(topicName); - -``` - - - - -```` - -### Delete -You can delete non-partitioned topics in the following ways. -````mdx-code-block - - - -```shell - -$ bin/pulsar-admin topics delete \ - persistent://my-tenant/my-namespace/my-topic - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/:schema/:tenant/:namespace/:topic|operation/deleteTopic?version=@pulsar:version_number@} - - - - -```java - -admin.topics().delete(topic); - -``` - - - - -```` - -### List - -You can get the list of topics under a given namespace in the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics list tenant/namespace -persistent://tenant/namespace/topic1 -persistent://tenant/namespace/topic2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getList?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getList(namespace); - -``` - - - - -```` - -### Stats - -You can check the current statistics of a given topic. The following is an example. For description of each stats, refer to [get stats](#get-stats). 
- -```json - -{ - "msgRateIn": 4641.528542257553, - "msgThroughputIn": 44663039.74947473, - "msgRateOut": 0, - "msgThroughputOut": 0, - "averageMsgSize": 1232439.816728665, - "storageSize": 135532389160, - "publishers": [ - { - "msgRateIn": 57.855383881403576, - "msgThroughputIn": 558994.7078932219, - "averageMsgSize": 613135, - "producerId": 0, - "producerName": null, - "address": null, - "connectedSince": null - } - ], - "subscriptions": { - "my-topic_subscription": { - "msgRateOut": 0, - "msgThroughputOut": 0, - "msgBacklog": 116632, - "type": null, - "msgRateExpired": 36.98245516804671, - "consumers": [] - } - }, - "replication": {} -} - -``` - -You can check the current statistics of a given topic and its connected producers and consumers in the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics stats \ - persistent://test-tenant/namespace/topic \ - --get-precise-backlog - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/stats|operation/getStats?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getStats(topic, false /* is precise backlog */); - -``` - - - - -```` - -## Manage partitioned topics -You can use Pulsar [admin API](admin-api-overview.md) to create, update, delete and check status of partitioned topics. - -### Create - -Partitioned topics must be explicitly created. When creating a new partitioned topic, you need to provide a name and the number of partitions for the topic. - -By default, 60 seconds after creation, topics are considered inactive and deleted automatically to avoid generating trash data. To disable this feature, set `brokerDeleteInactiveTopicsEnabled` to `false`. To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to a specific value. - -For more information about the two parameters, see [here](reference-configuration.md#broker). - -You can create partitioned topics in the following ways. -````mdx-code-block - - - -When you create partitioned topics with the [`create-partitioned-topic`](reference-pulsar-admin.md#create-partitioned-topic) -command, you need to specify the topic name as an argument and the number of partitions using the `-p` or `--partitions` flag. - -```shell - -$ bin/pulsar-admin topics create-partitioned-topic \ - persistent://my-tenant/my-namespace/my-topic \ - --partitions 4 - -``` - -:::note - -If a non-partitioned topic with the suffix '-partition-' followed by a numeric value like 'xyz-topic-partition-10', you can not create a partitioned topic with name 'xyz-topic', because the partitions of the partitioned topic could override the existing non-partitioned topic. To create such partitioned topic, you have to delete that non-partitioned topic first. - -::: - - - - -{@inject: endpoint|PUT|/admin/v2/:schema/:tenant/:namespace/:topic/partitions|operation/createPartitionedTopic?version=@pulsar:version_number@} - - - - -```java - -String topicName = "persistent://my-tenant/my-namespace/my-topic"; -int numPartitions = 4; -admin.topics().createPartitionedTopic(topicName, numPartitions); - -``` - - - - -```` - -### Create missed partitions - -When topic auto-creation is disabled, and you have a partitioned topic without any partitions, you can use the [`create-missed-partitions`](reference-pulsar-admin.md#create-missed-partitions) command to create partitions for the topic. 
- -````mdx-code-block - - - -You can create missed partitions with the [`create-missed-partitions`](reference-pulsar-admin.md#create-missed-partitions) command and specify the topic name as an argument. - -```shell - -$ bin/pulsar-admin topics create-missed-partitions \ - persistent://my-tenant/my-namespace/my-topic \ - -``` - - - - -{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic|operation/createMissedPartitions?version=@pulsar:version_number@} - - - - -```java - -String topicName = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().createMissedPartitions(topicName); - -``` - - - - -```` - -### Get metadata - -Partitioned topics are associated with metadata, you can view it as a JSON object. The following metadata field is available. - -Field | Description -:-----|:------- -`partitions` | The number of partitions into which the topic is divided. - -````mdx-code-block - - - -You can check the number of partitions in a partitioned topic with the [`get-partitioned-topic-metadata`](reference-pulsar-admin.md#get-partitioned-topic-metadata) subcommand. - -```shell - -$ pulsar-admin topics get-partitioned-topic-metadata \ - persistent://my-tenant/my-namespace/my-topic -{ - "partitions": 4 -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/partitions|operation/getPartitionedMetadata?version=@pulsar:version_number@} - - - - -```java - -String topicName = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().getPartitionedTopicMetadata(topicName); - -``` - - - - -```` - -### Update - -You can update the number of partitions for an existing partitioned topic *if* the topic is non-global. However, you can only add the partition number. Decrementing the number of partitions would delete the topic, which is not supported in Pulsar. - -Producers and consumers can find the newly created partitions automatically. - -````mdx-code-block - - - -You can update partitioned topics with the [`update-partitioned-topic`](reference-pulsar-admin.md#update-partitioned-topic) command. - -```shell - -$ pulsar-admin topics update-partitioned-topic \ - persistent://my-tenant/my-namespace/my-topic \ - --partitions 8 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:cluster/:namespace/:destination/partitions|operation/updatePartitionedTopic?version=@pulsar:version_number@} - - - - -```java - -admin.topics().updatePartitionedTopic(topic, numPartitions); - -``` - - - - -```` - -### Delete -You can delete partitioned topics with the [`delete-partitioned-topic`](reference-pulsar-admin.md#delete-partitioned-topic) command, REST API and Java. - -````mdx-code-block - - - -```shell - -$ bin/pulsar-admin topics delete-partitioned-topic \ - persistent://my-tenant/my-namespace/my-topic - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/:schema/:topic/:namespace/:destination/partitions|operation/deletePartitionedTopic?version=@pulsar:version_number@} - - - - -```java - -admin.topics().delete(topic); - -``` - - - - -```` - -### List -You can get the list of partitioned topics under a given namespace in the following ways. 
-````mdx-code-block - - - -```shell - -$ pulsar-admin topics list-partitioned-topics tenant/namespace -persistent://tenant/namespace/topic1 -persistent://tenant/namespace/topic2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getPartitionedTopicList?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getPartitionedTopicList(namespace); - -``` - - - - -```` - -### Stats - -You can check the current statistics of a given partitioned topic. The following is an example. For description of each stats, refer to [get stats](#get-stats). - -Note that in the subscription JSON object, `chuckedMessageRate` is deprecated. Please use `chunkedMessageRate`. Both will be sent in the JSON for now. - -```json - -{ - "msgRateIn" : 999.992947159793, - "msgThroughputIn" : 1070918.4635439808, - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "bytesInCounter" : 270318763, - "msgInCounter" : 252489, - "bytesOutCounter" : 0, - "msgOutCounter" : 0, - "averageMsgSize" : 1070.926056966454, - "msgChunkPublished" : false, - "storageSize" : 270316646, - "backlogSize" : 200921133, - "publishers" : [ { - "msgRateIn" : 999.992947159793, - "msgThroughputIn" : 1070918.4635439808, - "averageMsgSize" : 1070.3333333333333, - "chunkedMessageRate" : 0.0, - "producerId" : 0 - } ], - "subscriptions" : { - "test" : { - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "bytesOutCounter" : 0, - "msgOutCounter" : 0, - "msgRateRedeliver" : 0.0, - "chuckedMessageRate" : 0, - "chunkedMessageRate" : 0, - "msgBacklog" : 144318, - "msgBacklogNoDelayed" : 144318, - "blockedSubscriptionOnUnackedMsgs" : false, - "msgDelayed" : 0, - "unackedMessages" : 0, - "msgRateExpired" : 0.0, - "lastExpireTimestamp" : 0, - "lastConsumedFlowTimestamp" : 0, - "lastConsumedTimestamp" : 0, - "lastAckedTimestamp" : 0, - "consumers" : [ ], - "isDurable" : true, - "isReplicated" : false - } - }, - "replication" : { }, - "metadata" : { - "partitions" : 3 - }, - "partitions" : { } -} - -``` - -You can check the current statistics of a given partitioned topic and its connected producers and consumers in the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics partitioned-stats \ - persistent://test-tenant/namespace/topic \ - --per-partition - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/partitioned-stats|operation/getPartitionedStats?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getPartitionedStats(topic, true /* per partition */, false /* is precise backlog */); - -``` - - - - -```` - -### Internal stats - -You can check the detailed statistics of a topic. The following is an example. For description of each stats, refer to [get internal stats](#get-internal-stats). 
- -```json - -{ - "entriesAddedCounter": 20449518, - "numberOfEntries": 3233, - "totalSize": 331482, - "currentLedgerEntries": 3233, - "currentLedgerSize": 331482, - "lastLedgerCreatedTimestamp": "2016-06-29 03:00:23.825", - "lastLedgerCreationFailureTimestamp": null, - "waitingCursorsCount": 1, - "pendingAddEntriesCount": 0, - "lastConfirmedEntry": "324711539:3232", - "state": "LedgerOpened", - "ledgers": [ - { - "ledgerId": 324711539, - "entries": 0, - "size": 0 - } - ], - "cursors": { - "my-subscription": { - "markDeletePosition": "324711539:3133", - "readPosition": "324711539:3233", - "waitingReadOp": true, - "pendingReadOps": 0, - "messagesConsumedCounter": 20449501, - "cursorLedger": 324702104, - "cursorLedgerLastEntry": 21, - "individuallyDeletedMessages": "[(324711539:3134‥324711539:3136], (324711539:3137‥324711539:3140], ]", - "lastLedgerSwitchTimestamp": "2016-06-29 01:30:19.313", - "state": "Open" - } - } -} - -``` - -You can get the internal stats for the partitioned topic in the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics stats-internal \ - persistent://test-tenant/namespace/topic - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/internalStats|operation/getInternalStats?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getInternalStats(topic); - -``` - - - - -```` - -## Publish to partitioned topics - -By default, Pulsar topics are served by a single broker, which limits the maximum throughput of a topic. *Partitioned topics* can span multiple brokers and thus allow for higher throughput. - -You can publish to partitioned topics using Pulsar client libraries. When publishing to partitioned topics, you must specify a routing mode. If you do not specify any routing mode when you create a new producer, the round robin routing mode is used. - -### Routing mode - -You can specify the routing mode in the ProducerConfiguration object that you use to configure your producer. The routing mode determines which partition(internal topic) that each message should be published to. - -The following {@inject: javadoc:MessageRoutingMode:/client/org/apache/pulsar/client/api/MessageRoutingMode} options are available. - -Mode | Description -:--------|:------------ -`RoundRobinPartition` | If no key is provided, the producer publishes messages across all partitions in round-robin policy to achieve the maximum throughput. Round-robin is not done per individual message, round-robin is set to the same boundary of batching delay to ensure that batching is effective. If a key is specified on the message, the partitioned producer hashes the key and assigns message to a particular partition. This is the default mode. -`SinglePartition` | If no key is provided, the producer picks a single partition randomly and publishes all messages into that partition. If a key is specified on the message, the partitioned producer hashes the key and assigns message to a particular partition. -`CustomPartition` | Use custom message router implementation that is called to determine the partition for a particular message. You can create a custom routing mode by using the Java client and implementing the {@inject: javadoc:MessageRouter:/client/org/apache/pulsar/client/api/MessageRouter} interface. 
- -The following is an example: - -```java - -String pulsarBrokerRootUrl = "pulsar://localhost:6650"; -String topic = "persistent://my-tenant/my-namespace/my-topic"; - -PulsarClient pulsarClient = PulsarClient.builder().serviceUrl(pulsarBrokerRootUrl).build(); -Producer producer = pulsarClient.newProducer() - .topic(topic) - .messageRoutingMode(MessageRoutingMode.SinglePartition) - .create(); -producer.send("Partitioned topic message".getBytes()); - -``` - -### Custom message router - -To use a custom message router, you need to provide an implementation of the {@inject: javadoc:MessageRouter:/client/org/apache/pulsar/client/api/MessageRouter} interface, which has just one `choosePartition` method: - -```java - -public interface MessageRouter extends Serializable { - int choosePartition(Message msg); -} - -``` - -The following router routes every message to partition 10: - -```java - -public class AlwaysTenRouter implements MessageRouter { - public int choosePartition(Message msg) { - return 10; - } -} - -``` - -With that implementation, you can send - -```java - -String pulsarBrokerRootUrl = "pulsar://localhost:6650"; -String topic = "persistent://my-tenant/my-cluster-my-namespace/my-topic"; - -PulsarClient pulsarClient = PulsarClient.builder().serviceUrl(pulsarBrokerRootUrl).build(); -Producer producer = pulsarClient.newProducer() - .topic(topic) - .messageRouter(new AlwaysTenRouter()) - .create(); -producer.send("Partitioned topic message".getBytes()); - -``` - -### How to choose partitions when using a key -If a message has a key, it supersedes the round robin routing policy. The following example illustrates how to choose the partition when using a key. - -```java - -// If the message has a key, it supersedes the round robin routing policy - if (msg.hasKey()) { - return signSafeMod(hash.makeHash(msg.getKey()), topicMetadata.numPartitions()); - } - - if (isBatchingEnabled) { // if batching is enabled, choose partition on `partitionSwitchMs` boundary. - long currentMs = clock.millis(); - return signSafeMod(currentMs / partitionSwitchMs + startPtnIdx, topicMetadata.numPartitions()); - } else { - return signSafeMod(PARTITION_INDEX_UPDATER.getAndIncrement(this), topicMetadata.numPartitions()); - } - -``` - -## Manage subscriptions -You can use [Pulsar admin API](admin-api-overview.md) to create, check, and delete subscriptions. -### Create subscription -You can create a subscription for a topic using one of the following methods. -````mdx-code-block - - - -```shell - -pulsar-admin topics create-subscription \ ---subscription my-subscription \ -persistent://test-tenant/ns1/tp1 - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/persistent/:tenant/:namespace/:topic/subscription/:subscription|operation/createSubscriptions?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String subscriptionName = "my-subscription"; -admin.topics().createSubscription(topic, subscriptionName, MessageId.latest); - -``` - - - - -```` -### Get subscription -You can check all subscription names for a given topic using one of the following methods. 
-````mdx-code-block - - - -```shell - -pulsar-admin topics subscriptions \ -persistent://test-tenant/ns1/tp1 \ -my-subscription - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/subscriptions|operation/getSubscriptions?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().getSubscriptions(topic); - -``` - - - - -```` -### Unsubscribe subscription -When a subscription does not process messages any more, you can unsubscribe it using one of the following methods. -````mdx-code-block - - - -```shell - -pulsar-admin topics unsubscribe \ ---subscription my-subscription \ -persistent://test-tenant/ns1/tp1 - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/:topic/subscription/:subscription|operation/deleteSubscription?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String subscriptionName = "my-subscription"; -admin.topics().deleteSubscription(topic, subscriptionName); - -``` - - - - -```` \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/administration-dashboard.md b/site2/website/versioned_docs/version-2.8.3-deprecated/administration-dashboard.md deleted file mode 100644 index 25f976609b40bf..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/administration-dashboard.md +++ /dev/null @@ -1,76 +0,0 @@ ---- -id: administration-dashboard -title: Pulsar dashboard -sidebar_label: "Dashboard" -original_id: administration-dashboard ---- - -:::note - -Pulsar dashboard is deprecated. If you want to manage and monitor the stats of your topics, use [Pulsar Manager](administration-pulsar-manager.md). - -::: - -Pulsar dashboard is a web application that enables users to monitor current stats for all [topics](reference-terminology.md#topic) in tabular form. - -The dashboard is a data collector that polls stats from all the brokers in a Pulsar instance (across multiple clusters) and stores all the information in a [PostgreSQL](https://www.postgresql.org/) database. - -You can use the [Django](https://www.djangoproject.com) web app to render the collected data. - -## Install - -The easiest way to use the dashboard is to run it inside a [Docker](https://www.docker.com/products/docker) container. - -```shell - -$ SERVICE_URL=http://broker.example.com:8080/ -$ docker run -p 80:80 \ - -e SERVICE_URL=$SERVICE_URL \ - apachepulsar/pulsar-dashboard:@pulsar:version@ - -``` - -You can find the {@inject: github:Dockerfile:/dashboard/Dockerfile} in the `dashboard` directory and build an image from scratch as well: - -```shell - -$ docker build -t apachepulsar/pulsar-dashboard dashboard - -``` - -If token authentication is enabled: -> Provided token should have super-user access. - -```shell - -$ SERVICE_URL=http://broker.example.com:8080/ -$ JWT_TOKEN=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c -$ docker run -p 80:80 \ - -e SERVICE_URL=$SERVICE_URL \ - -e JWT_TOKEN=$JWT_TOKEN \ - apachepulsar/pulsar-dashboard - -``` - - -You need to specify only one service URL for a Pulsar cluster. Internally, the collector figures out all the existing clusters and the brokers from where it needs to pull the metrics. If you connect the dashboard to Pulsar running in standalone mode, the URL is `http://:8080` by default. 
`<broker-ip>` is the IP address or hostname of the machine running Pulsar standalone (the default standalone URL is `http://<broker-ip>:8080`), and it must be reachable from the Docker instance running the dashboard.
-
-Once the Docker container runs, the web dashboard is accessible via `localhost` or whichever host Docker uses.
-
-> The `SERVICE_URL` that the dashboard uses needs to be reachable from inside the Docker container.
-
-If the Pulsar service runs in standalone mode on `localhost`, the `SERVICE_URL` has to
-be the IP of the machine.
-
-Similarly, given that Pulsar standalone advertises itself with `localhost` by default, you need to
-explicitly set the advertised address to the host IP. For example:
-
-```shell
-
-$ bin/pulsar standalone --advertised-address 1.2.3.4
-
-```
-
-### Known issues
-
-Currently, only Pulsar Token [authentication](security-overview.md#authentication-providers) is supported.
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/administration-geo.md b/site2/website/versioned_docs/version-2.8.3-deprecated/administration-geo.md
deleted file mode 100644
index f5dc1b2f79a3fa..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/administration-geo.md
+++ /dev/null
@@ -1,215 +0,0 @@
----
-id: administration-geo
-title: Pulsar geo-replication
-sidebar_label: "Geo-replication"
-original_id: administration-geo
----
-
-*Geo-replication* is the replication of persistently stored message data across multiple clusters of a Pulsar instance.
-
-## How geo-replication works
-
-The diagram below illustrates the process of geo-replication across Pulsar clusters:
-
-![Replication Diagram](/assets/geo-replication.png)
-
-In this diagram, whenever **P1**, **P2**, and **P3** producers publish messages to the **T1** topic on the **Cluster-A**, **Cluster-B**, and **Cluster-C** clusters respectively, those messages are instantly replicated across clusters. Once the messages are replicated, **C1** and **C2** consumers can consume those messages from their respective clusters.
-
-Without geo-replication, the **C1** and **C2** consumers are not able to consume messages that the **P3** producer publishes.
-
-## Geo-replication and Pulsar properties
-
-You must enable geo-replication on a per-tenant basis in Pulsar. You can enable geo-replication between clusters only when a tenant has been created that allows access to both clusters.
-
-Although geo-replication must be enabled between two clusters, it is actually managed at the namespace level. You must complete the following tasks to enable geo-replication for a namespace:
-
-* [Enable geo-replication namespaces](#enable-geo-replication-namespaces)
-* Configure that namespace to replicate across two or more provisioned clusters
-
-Any message published on *any* topic in that namespace is replicated to all clusters in the specified set.
-
-## Local persistence and forwarding
-
-When messages are produced on a Pulsar topic, messages are first persisted in the local cluster, and then forwarded asynchronously to the remote clusters.
-
-In normal cases, when there are no connectivity issues, messages are replicated immediately, at the same time as they are dispatched to local consumers. Typically, the network [round-trip time](https://en.wikipedia.org/wiki/Round-trip_delay_time) (RTT) between the remote regions defines the end-to-end delivery latency.
-
-Applications can create producers and consumers in any of the clusters, even when the remote clusters are not reachable (like during a network partition).
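-
-For example, an application in each region typically connects only to the service URL of its local cluster. The following is a sketch; the service URL and topic below are placeholders, and the topic is assumed to live in a geo-replicated namespace:
-
-```java
-
-// A sketch: the client connects only to its local cluster. Replication to the
-// remote clusters is performed asynchronously by the brokers, not by the client.
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar://us-west-broker.example.com:6650") // placeholder local URL
-        .build();
-
-Producer<byte[]> producer = client.newProducer()
-        .topic("persistent://my-tenant/my-replicated-namespace/my-topic") // placeholder topic
-        .create();
-
-// The message is persisted locally first, then forwarded to the remote clusters.
-producer.send("geo-replicated message".getBytes());
-
-```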
-
-Producers and consumers can publish messages to and consume messages from any cluster in a Pulsar instance. However, subscriptions are not only local to the cluster where they are created; they can also be transferred between clusters after replicated subscriptions are enabled. Once replicated subscriptions are enabled, you can keep the subscription state in sync. Therefore, a topic can be asynchronously replicated across multiple geographical regions. In case of failover, a consumer can restart consuming messages from the failure point in a different cluster.
-
-In the aforementioned example, the **T1** topic is replicated among three clusters, **Cluster-A**, **Cluster-B**, and **Cluster-C**.
-
-All messages produced in any of the three clusters are delivered to all subscriptions in other clusters. In this case, **C1** and **C2** consumers receive all messages that the **P1**, **P2**, and **P3** producers publish. Ordering is still guaranteed on a per-producer basis.
-
-## Configure replication
-
-As stated in the [Geo-replication and Pulsar properties](#geo-replication-and-pulsar-properties) section, geo-replication in Pulsar is managed at the [tenant](reference-terminology.md#tenant) level.
-
-The following example connects three clusters: **us-east**, **us-west**, and **us-cent**.
-
-### Connect replication clusters
-
-To replicate data among clusters, you need to configure each cluster to connect to the others. You can use the [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/) tool to create a connection.
-
-**Example**
-
-Suppose that you have 3 replication clusters: `us-west`, `us-cent`, and `us-east`.
-
-1. Configure the connection from `us-west` to `us-east`.
-
-   Run the following command on `us-west`.
-
-```shell
-
-$ bin/pulsar-admin clusters create \
-  --broker-url pulsar://<DNS-of-us-east>:<port> \
-  --url http://<DNS-of-us-east>:<port> \
-  us-east
-
-```
-
-   :::tip
-
-   - If you want to use a secure connection for a cluster, you can use the flags `--broker-url-secure` and `--url-secure`. For more information, see [pulsar-admin clusters create](https://pulsar.apache.org/tools/pulsar-admin/).
-   - Different clusters may have different authentication settings. You can use the authentication flags `--auth-plugin` and `--auth-parameters` together to set cluster authentication, which overrides `brokerClientAuthenticationPlugin` and `brokerClientAuthenticationParameters` if `authenticationEnabled` is set to `true` in `broker.conf` and `standalone.conf`. For more information, see [authentication and authorization](concepts-authentication.md).
-
-   :::
-
-2. Configure the connection from `us-west` to `us-cent`.
-
-   Run the following command on `us-west`.
-
-```shell
-
-$ bin/pulsar-admin clusters create \
-  --broker-url pulsar://<DNS-of-us-cent>:<port> \
-  --url http://<DNS-of-us-cent>:<port> \
-  us-cent
-
-```
-
-3. Run similar commands on `us-east` and `us-cent` to create connections among clusters.
-
-### Grant permissions to properties
-
-To replicate to a cluster, the tenant needs permission to use that cluster. You can grant permission to the tenant when you create the tenant, or grant it later.
-
-Specify all the intended clusters when you create a tenant:
-
-```shell
-
-$ bin/pulsar-admin tenants create my-tenant \
-  --admin-roles my-admin-role \
-  --allowed-clusters us-west,us-east,us-cent
-
-```
-
-To update permissions of an existing tenant, use `update` instead of `create`.
-
-### Enable geo-replication namespaces
-
-You can create a namespace with the following sample command.
- -```shell - -$ bin/pulsar-admin namespaces create my-tenant/my-namespace - -``` - -Initially, the namespace is not assigned to any cluster. You can assign the namespace to clusters using the `set-clusters` subcommand: - -```shell - -$ bin/pulsar-admin namespaces set-clusters my-tenant/my-namespace \ - --clusters us-west,us-east,us-cent - -``` - -You can change the replication clusters for a namespace at any time, without disruption to ongoing traffic. Replication channels are immediately set up or stopped in all clusters as soon as the configuration changes. - -### Use topics with geo-replication - -Once you create a geo-replication namespace, any topics that producers or consumers create within that namespace is replicated across clusters. Typically, each application uses the `serviceUrl` for the local cluster. - -#### Selective replication - -By default, messages are replicated to all clusters configured for the namespace. You can restrict replication selectively by specifying a replication list for a message, and then that message is replicated only to the subset in the replication list. - -The following is an example for the [Java API](client-libraries-java.md). Note the use of the `setReplicationClusters` method when you construct the {@inject: javadoc:Message:/client/org/apache/pulsar/client/api/Message} object: - -```java - -List restrictReplicationTo = Arrays.asList( - "us-west", - "us-east" -); - -Producer producer = client.newProducer() - .topic("some-topic") - .create(); - -producer.newMessage() - .value("my-payload".getBytes()) - .setReplicationClusters(restrictReplicationTo) - .send(); - -``` - -#### Topic stats - -Topic-specific statistics for geo-replication topics are available via the [`pulsar-admin`](reference-pulsar-admin.md) tool and {@inject: rest:REST:/} API: - -```shell - -$ bin/pulsar-admin persistent stats persistent://my-tenant/my-namespace/my-topic - -``` - -Each cluster reports its own local stats, including the incoming and outgoing replication rates and backlogs. - -#### Delete a geo-replication topic - -Given that geo-replication topics exist in multiple regions, directly deleting a geo-replication topic is not possible. Instead, you should rely on automatic topic garbage collection. - -In Pulsar, a topic is automatically deleted when the topic meets the following three conditions: -- no producers or consumers are connected to it; -- no subscriptions to it; -- no more messages are kept for retention. -For geo-replication topics, each region uses a fault-tolerant mechanism to decide when deleting the topic locally is safe. - -You can explicitly disable topic garbage collection by setting `brokerDeleteInactiveTopicsEnabled` to `false` in your [broker configuration](reference-configuration.md#broker). - -To delete a geo-replication topic, close all producers and consumers on the topic, and delete all of its local subscriptions in every replication cluster. When Pulsar determines that no valid subscription for the topic remains across the system, it will garbage collect the topic. - -## Replicated subscriptions - -Pulsar supports replicated subscriptions, so you can keep subscription state in sync, within a sub-second timeframe, in the context of a topic that is being asynchronously replicated across multiple geographical regions. - -In case of failover, a consumer can restart consuming from the failure point in a different cluster. - -### Enable replicated subscription - -Replicated subscription is disabled by default. 
You can enable replicated subscription when creating a consumer. - -```java - -Consumer consumer = client.newConsumer(Schema.STRING) - .topic("my-topic") - .subscriptionName("my-subscription") - .replicateSubscriptionState(true) - .subscribe(); - -``` - -### Advantages - - * It is easy to implement the logic. - * You can choose to enable or disable replicated subscription. - * When you enable it, the overhead is low, and it is easy to configure. - * When you disable it, the overhead is zero. - -### Limitations - -* When you enable replicated subscription, you're creating a consistent distributed snapshot to establish an association between message ids from different clusters. The snapshots are taken periodically. The default value is `1 second`. It means that a consumer failing over to a different cluster can potentially receive 1 second of duplicates. You can also configure the frequency of the snapshot in the `broker.conf` file. -* Only the base line cursor position is synced in replicated subscriptions while the individual acknowledgments are not synced. This means the messages acknowledged out-of-order could end up getting delivered again, in the case of a cluster failover. diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/administration-isolation.md b/site2/website/versioned_docs/version-2.8.3-deprecated/administration-isolation.md deleted file mode 100644 index d2de042a2e7415..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/administration-isolation.md +++ /dev/null @@ -1,115 +0,0 @@ ---- -id: administration-isolation -title: Pulsar isolation -sidebar_label: "Pulsar isolation" -original_id: administration-isolation ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -In an organization, a Pulsar instance provides services to multiple teams. When organizing the resources across multiple teams, you want to make a suitable isolation plan to avoid the resource competition between different teams and applications and provide high-quality messaging service. In this case, you need to take resource isolation into consideration and weigh your intended actions against expected and unexpected consequences. - -To enforce resource isolation, you can use the Pulsar isolation policy, which allows you to allocate resources (**broker** and **bookie**) for the namespace. - -## Broker isolation - -In Pulsar, when namespaces (more specifically, namespace bundles) are assigned dynamically to brokers, the namespace isolation policy limits the set of brokers that can be used for assignment. Before topics are assigned to brokers, you can set the namespace isolation policy with a primary or a secondary regex to select desired brokers. - -You can set a namespace isolation policy for a cluster using one of the following methods. - -````mdx-code-block - - - - -``` - -pulsar-admin ns-isolation-policy set options - -``` - -For more information about the command `pulsar-admin ns-isolation-policy set options`, see [here](https://pulsar.apache.org/tools/pulsar-admin/). 
- -**Example** - -```shell - -bin/pulsar-admin ns-isolation-policy set \ ---auto-failover-policy-type min_available \ ---auto-failover-policy-params min_limit=1,usage_threshold=80 \ ---namespaces my-tenant/my-namespace \ ---primary 10.193.216.* my-cluster policy-name - -``` - - - - -[PUT /admin/v2/namespaces/{tenant}/{namespace}](https://pulsar.apache.org/admin-rest-api/?version=master&apiversion=v2#operation/createNamespace) - - - - -For how to set namespace isolation policy using Java admin API, see [here](https://github.com/apache/pulsar/blob/master/pulsar-client-admin/src/main/java/org/apache/pulsar/client/admin/internal/NamespacesImpl.java#L251). - - - - -```` - -## Bookie isolation - -A namespace can be isolated into user-defined groups of bookies, which guarantees all the data that belongs to the namespace is stored in desired bookies. The bookie affinity group uses the BookKeeper [rack-aware placement policy](https://bookkeeper.apache.org/docs/latest/api/javadoc/org/apache/bookkeeper/client/EnsemblePlacementPolicy.html) and it is a way to feed rack information which is stored as JSON format in znode. - -You can set a bookie affinity group using one of the following methods. - -````mdx-code-block - - - - -``` - -pulsar-admin namespaces set-bookie-affinity-group options - -``` - -For more information about the command `pulsar-admin namespaces set-bookie-affinity-group options`, see [here](https://pulsar.apache.org/tools/pulsar-admin/). - -**Example** - -```shell - -bin/pulsar-admin bookies set-bookie-rack \ ---bookie 127.0.0.1:3181 \ ---hostname 127.0.0.1:3181 \ ---group group-bookie1 \ ---rack rack1 - -bin/pulsar-admin namespaces set-bookie-affinity-group public/default \ ---primary-group group-bookie1 - -``` - - - - -[POST /admin/v2/namespaces/{tenant}/{namespace}/persistence/bookieAffinity](https://pulsar.apache.org/admin-rest-api/?version=master&apiversion=v2#operation/setBookieAffinityGroup) - - - - -For how to set bookie affinity group for a namespace using Java admin API, see [here](https://github.com/apache/pulsar/blob/master/pulsar-client-admin/src/main/java/org/apache/pulsar/client/admin/internal/NamespacesImpl.java#L1164). - - - - -```` diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/administration-load-balance.md b/site2/website/versioned_docs/version-2.8.3-deprecated/administration-load-balance.md deleted file mode 100644 index 134c39e0956b23..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/administration-load-balance.md +++ /dev/null @@ -1,256 +0,0 @@ ---- -id: administration-load-balance -title: Pulsar load balance -sidebar_label: "Load balance" -original_id: administration-load-balance ---- - -## Load balance across Pulsar brokers - -Pulsar is an horizontally scalable messaging system, so the traffic -in a logical cluster must be spread across all the available Pulsar brokers as evenly as possible, which is a core requirement. - -You can use multiple settings and tools to control the traffic distribution which require a bit of context to understand how the traffic is managed in Pulsar. Though, in most cases, the core requirement mentioned above is true out of the box and you should not worry about it. - -## Pulsar load manager architecture - -The following part introduces the basic architecture of the Pulsar load manager. - -### Assign topics to brokers dynamically - -Topics are dynamically assigned to brokers based on the load conditions of all brokers in the cluster. 
- -When a client starts using new topics that are not assigned to any broker, a process is triggered to choose the best suited broker to acquire ownership of these topics according to the load conditions. - -In case of partitioned topics, different partitions are assigned to different brokers. Here "topic" means either a non-partitioned topic or one partition of a topic. - -The assignment is "dynamic" because the assignment changes quickly. For example, if the broker owning the topic crashes, the topic is reassigned immediately to another broker. Another scenario is that the broker owning the topic becomes overloaded. In this case, the topic is reassigned to a less loaded broker. - -The stateless nature of brokers makes the dynamic assignment possible, so you can quickly expand or shrink the cluster based on usage. - -#### Assignment granularity - -The assignment of topics or partitions to brokers is not done at the topics or partitions level, but done at the Bundle level (a higher level). The reason is to amortize the amount of information that you need to keep track. Based on CPU, memory, traffic load and other indexes, topics are assigned to a particular broker dynamically. - -Instead of individual topic or partition assignment, each broker takes ownership of a subset of the topics for a namespace. This subset is called a "*bundle*" and effectively this subset is a sharding mechanism. - -The namespace is the "administrative" unit: many config knobs or operations are done at the namespace level. - -For assignment, a namespaces is sharded into a list of "bundles", with each bundle comprising -a portion of overall hash range of the namespace. - -Topics are assigned to a particular bundle by taking the hash of the topic name and checking in which -bundle the hash falls into. - -Each bundle is independent of the others and thus is independently assigned to different brokers. - -### Create namespaces and bundles - -When you create a new namespace, the new namespace sets to use the default number of bundles. You can set this in `conf/broker.conf`: - -```properties - -# When a namespace is created without specifying the number of bundle, this -# value will be used as the default -defaultNumberOfNamespaceBundles=4 - -``` - -You can either change the system default, or override it when you create a new namespace: - -```shell - -$ bin/pulsar-admin namespaces create my-tenant/my-namespace --clusters us-west --bundles 16 - -``` - -With this command, you create a namespace with 16 initial bundles. Therefore the topics for this namespaces can immediately be spread across up to 16 brokers. - -In general, if you know the expected traffic and number of topics in advance, you had better start with a reasonable number of bundles instead of waiting for the system to auto-correct the distribution. - -On the same note, it is beneficial to start with more bundles than the number of brokers, because of the hashing nature of the distribution of topics into bundles. For example, for a namespace with 1000 topics, using something like 64 bundles achieves a good distribution of traffic across 16 brokers. - -### Unload topics and bundles - -You can "unload" a topic in Pulsar with admin operation. Unloading means to close the topics, -release ownership and reassign the topics to a new broker, based on current load. - -When unloading happens, the client experiences a small latency blip, typically in the order of tens of milliseconds, while the topic is reassigned. 
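-
-The same reassignment can also be triggered programmatically. The following is a minimal sketch that uses the Java admin client, assuming an already initialized `PulsarAdmin` instance named `admin`:
-
-```java
-
-// A sketch: unloading closes the topic and lets the load manager
-// reassign it based on the current load conditions.
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-admin.topics().unload(topic);
-
-```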
- -Unloading is the mechanism that the load-manager uses to perform the load shedding, but you can also trigger the unloading manually, for example to correct the assignments and redistribute traffic even before having any broker overloaded. - -Unloading a topic has no effect on the assignment, but just closes and reopens the particular topic: - -```shell - -pulsar-admin topics unload persistent://tenant/namespace/topic - -``` - -To unload all topics for a namespace and trigger reassignments: - -```shell - -pulsar-admin namespaces unload tenant/namespace - -``` - -### Split namespace bundles - -Since the load for the topics in a bundle might change over time, or predicting upfront might just be hard, brokers can split bundles into two. The new smaller bundles can be reassigned to different brokers. - -The splitting happens based on some tunable thresholds. Any existing bundle that exceeds any of the threshold is a candidate to be split. By default the newly split bundles are also immediately offloaded to other brokers, to facilitate the traffic distribution. - -```properties - -# enable/disable namespace bundle auto split -loadBalancerAutoBundleSplitEnabled=true - -# enable/disable automatic unloading of split bundles -loadBalancerAutoUnloadSplitBundlesEnabled=true - -# maximum topics in a bundle, otherwise bundle split will be triggered -loadBalancerNamespaceBundleMaxTopics=1000 - -# maximum sessions (producers + consumers) in a bundle, otherwise bundle split will be triggered -loadBalancerNamespaceBundleMaxSessions=1000 - -# maximum msgRate (in + out) in a bundle, otherwise bundle split will be triggered -loadBalancerNamespaceBundleMaxMsgRate=30000 - -# maximum bandwidth (in + out) in a bundle, otherwise bundle split will be triggered -loadBalancerNamespaceBundleMaxBandwidthMbytes=100 - -# maximum number of bundles in a namespace (for auto-split) -loadBalancerNamespaceMaximumBundles=128 - -``` - -### Shed load automatically - -The support for automatic load shedding is available in the load manager of Pulsar. This means that whenever the system recognizes a particular broker is overloaded, the system forces some traffic to be reassigned to less loaded brokers. - -When a broker is identified as overloaded, the broker forces to "unload" a subset of the bundles, the -ones with higher traffic, that make up for the overload percentage. - -For example, the default threshold is 85% and if a broker is over quota at 95% CPU usage, then the broker unloads the percent difference plus a 5% margin: `(95% - 85%) + 5% = 15%`. - -Given the selection of bundles to offload is based on traffic (as a proxy measure for cpu, network -and memory), broker unloads bundles for at least 15% of traffic. - -The automatic load shedding is enabled by default and you can disable the automatic load shedding with this setting: - -```properties - -# Enable/disable automatic bundle unloading for load-shedding -loadBalancerSheddingEnabled=true - -``` - -Additional settings that apply to shedding: - -```properties - -# Load shedding interval. Broker periodically checks whether some traffic should be offload from -# some over-loaded broker to other under-loaded brokers -loadBalancerSheddingIntervalMinutes=1 - -# Prevent the same topics to be shed and moved to other brokers more that once within this timeframe -loadBalancerSheddingGracePeriodMinutes=30 - -``` - -#### Broker overload thresholds - -The determinations of when a broker is overloaded is based on threshold of CPU, network and memory usage. 
Whenever either of those metrics reaches the threshold, the system triggers the shedding (if enabled).
-
-By default, the overload threshold is set at 85%:
-
-```properties
-
-# Usage threshold to determine a broker as over-loaded
-loadBalancerBrokerOverloadedThresholdPercentage=85
-
-```
-
-Pulsar gathers the usage stats from the system metrics.
-
-In case of network utilization, in some cases the network interface speed that Linux reports is
-not correct and needs to be manually overridden. This is the case in AWS EC2 instances with 1Gbps
-NIC speed for which the OS reports 10Gbps speed.
-
-Because of the incorrect max speed, the Pulsar load manager might think the broker has not reached the NIC capacity, while in fact the broker is already using all the bandwidth and the traffic is slowed down.
-
-You can use the following setting to correct the max NIC speed:
-
-```properties
-
-# Override the auto-detection of the network interfaces max speed.
-# This option is useful in some environments (eg: EC2 VMs) where the max speed
-# reported by Linux is not reflecting the real bandwidth available to the broker.
-# Since the network usage is employed by the load manager to decide when a broker
-# is overloaded, it is important to make sure the info is correct or override it
-# with the right value here. The configured value can be a double (eg: 0.8) and that
-# can be used to trigger load-shedding even before hitting the NIC limits.
-loadBalancerOverrideBrokerNicSpeedGbps=
-
-```
-
-When the value is empty, Pulsar uses the value that the OS reports.
-
-### Distribute anti-affinity namespaces across failure domains
-
-When your application has multiple namespaces and you want one of them available all the time to avoid any downtime, you can group these namespaces and distribute them across different [failure domains](reference-terminology.md#failure-domain) and different brokers. Thus, if one of the failure domains is down (due to a release rollout or broker restarts), it only disrupts namespaces owned by that specific failure domain, and the rest of the namespaces owned by other domains remain available without any impact.
-
-Such a group of namespaces has anti-affinity to each other, that is, all the namespaces in this group are [anti-affinity namespaces](reference-terminology.md#anti-affinity-namespaces) and are distributed to different failure domains in a load-balanced manner.
-
-As illustrated in the following figure, Pulsar has 2 failure domains (Domain1 and Domain2) and each domain has 2 brokers in it. You can create an anti-affinity namespace group that has 4 namespaces in it, and all the 4 namespaces have anti-affinity to each other. The load manager tries to distribute namespaces evenly across all the brokers in the same domain. Since each domain has 2 brokers, every broker owns one namespace from this anti-affinity namespace group, and you can see each domain owns 2 namespaces, and each broker owns 1 namespace.
-
-![Distribute anti-affinity namespaces across failure domains](/assets/anti-affinity-namespaces-across-failure-domains.svg)
-
-The load manager follows an even distribution policy across failure domains to assign anti-affinity namespaces. The following table outlines the even-distributed assignment sequence illustrated in the above figure. 
-
-| Assignment sequence | Namespace | Failure domain candidates | Broker candidates | Selected broker |
-|:---|:------------|:------------------|:------------------------------------|:-----------------|
-| 1 | Namespace1 | Domain1, Domain2 | Broker1, Broker2, Broker3, Broker4 | Domain1:Broker1 |
-| 2 | Namespace2 | Domain2 | Broker3, Broker4 | Domain2:Broker3 |
-| 3 | Namespace3 | Domain1, Domain2 | Broker2, Broker4 | Domain1:Broker2 |
-| 4 | Namespace4 | Domain2 | Broker4 | Domain2:Broker4 |
-
-:::tip
-
-* Each namespace belongs to only one anti-affinity group. If a namespace with an existing anti-affinity assignment is assigned to another anti-affinity group, the original assignment is dropped.
-
-* If there are more anti-affinity namespaces than failure domains, the load manager distributes namespaces evenly across all the domains, and every domain also distributes namespaces evenly across all the brokers under that domain.
-
-:::
-
-#### Create a failure domain and register brokers
-
-:::note
-
-One broker can only be registered to a single failure domain.
-
-:::
-
-To create a domain under a specific cluster and register brokers, run the following command:
-
-```bash
-
-pulsar-admin clusters create-failure-domain <cluster-name> --domain-name <domain-name> --broker-list <broker-list>
-
-```
-
-You can also view, update, and delete domains under a specific cluster. For more information, refer to [Pulsar admin doc](/tools/pulsar-admin/).
-
-#### Create an anti-affinity namespace group
-
-An anti-affinity group is created automatically when the first namespace is assigned to the group. To assign a namespace to an anti-affinity group, run the following command. It sets an anti-affinity group name for a namespace.
-
-```bash
-
-pulsar-admin namespaces set-anti-affinity-group <namespace> --group <group-name>
-
-```
-
-For more information about `anti-affinity-group` related commands, refer to [Pulsar admin doc](/tools/pulsar-admin/).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/administration-proxy.md b/site2/website/versioned_docs/version-2.8.3-deprecated/administration-proxy.md
deleted file mode 100644
index 577eff7db0253a..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/administration-proxy.md
+++ /dev/null
@@ -1,125 +0,0 @@
----
-id: administration-proxy
-title: Pulsar proxy
-sidebar_label: "Pulsar proxy"
-original_id: administration-proxy
----
-
-Pulsar proxy is an optional gateway. Pulsar proxy is used when direct connections between clients and Pulsar brokers are either infeasible or undesirable. For example, when you run Pulsar in a cloud environment or on [Kubernetes](https://kubernetes.io) or an analogous platform, you can run Pulsar proxy.
-
-The Pulsar proxy is not intended to be exposed on the public internet. The security considerations in the current design expect network perimeter security. The requirement of network perimeter security can be achieved with private networks.
-
-If a proxy deployment cannot be protected with network perimeter security, the alternative would be to use [Pulsar's "Proxy SNI routing" feature](concepts-proxy-sni-routing.md) with a properly secured and audited solution. In that case, the Pulsar proxy component is not used at all.
-
-## Configure the proxy
-
-Before using a proxy, you need to configure it with a broker's address in the cluster. You can either configure the broker URL in the proxy configuration, or configure the proxy to connect directly using service discovery.
-
-> In a production environment service discovery is not recommended. 
-
-### Use broker URLs
-
-It is more secure to specify a URL to connect to the brokers.
-
-Proxy authorization requires access to ZooKeeper, so if you use these broker URLs to connect to the brokers, you need to disable authorization at the proxy level. Brokers still authorize requests after the proxy forwards them.
-
-You can configure the broker URLs in `conf/proxy.conf` as follows.
-
-```properties
-brokerServiceURL=pulsar://brokers.example.com:6650
-brokerWebServiceURL=http://brokers.example.com:8080
-functionWorkerWebServiceURL=http://function-workers.example.com:8080
-```
-
-If you use TLS, configure the broker URLs in the following way:
-
-```properties
-brokerServiceURLTLS=pulsar+ssl://brokers.example.com:6651
-brokerWebServiceURLTLS=https://brokers.example.com:8443
-functionWorkerWebServiceURL=https://function-workers.example.com:8443
-```
-
-The hostname in the URLs provided should be a DNS entry that points to multiple brokers, or a virtual IP address backed by multiple broker IP addresses, so that the proxy does not lose connectivity to the Pulsar cluster if a single broker becomes unavailable.
-
-The ports to connect to the brokers (6650 and 8080, or in the case of TLS, 6651 and 8443) should be open in the network ACLs.
-
-Note that if you do not use functions, you do not need to configure `functionWorkerWebServiceURL`.
-
-### Use service discovery
-
-Pulsar uses [ZooKeeper](https://zookeeper.apache.org) for service discovery. To connect the proxy to ZooKeeper, specify the following in `conf/proxy.conf`.
-
-```properties
-metadataStoreUrl=my-zk-0:2181,my-zk-1:2181,my-zk-2:2181
-configurationMetadataStoreUrl=my-zk-0:2184,my-zk-remote:2184
-```
-
-> To use service discovery, you need to open the network ACLs, so the proxy can connect to the ZooKeeper nodes through the ZooKeeper client port (port `2181`) and the configuration store client port (port `2184`).
-
-> However, service discovery is not secure: if the network ACL is open and someone compromises a proxy, they have full access to ZooKeeper.
-
-### Restricting target broker addresses to mitigate CVE-2022-24280
-
-The Pulsar proxy trusts clients to provide valid target broker addresses to connect to.
-Unless the Pulsar proxy is explicitly configured to limit access, it is vulnerable as described in the security advisory [Apache Pulsar Proxy target broker address isn't validated (CVE-2022-24280)](https://github.com/apache/pulsar/wiki/CVE-2022-24280).
-
-It is necessary to limit proxied broker connections to known broker addresses by specifying the `brokerProxyAllowedHostNames` and `brokerProxyAllowedIPAddresses` settings.
-
-When specifying `brokerProxyAllowedHostNames`, it is possible to use a wildcard.
-Note that `*` is a wildcard that matches any character in the hostname. It also matches dot `.` characters.
-
-It is recommended to use a pattern that matches only the desired brokers and no other hosts in the local network. Pulsar lookups use the default host name of the broker by default. This can be overridden with the `advertisedAddress` setting in `broker.conf`.
-
-To increase security, it is also possible to restrict access with the `brokerProxyAllowedIPAddresses` setting. It is not mandatory to configure `brokerProxyAllowedIPAddresses` when `brokerProxyAllowedHostNames` is properly configured so that the pattern matches only the target brokers. 
-The `brokerProxyAllowedIPAddresses` setting supports a comma-separated list of IP addresses, IP address ranges, and IP address networks [(supported format reference)](https://seancfoley.github.io/IPAddress/IPAddress/apidocs/inet/ipaddr/IPAddressString.html).
-
-Example: limiting by host name in a Kubernetes deployment
-```yaml
-  # example of limiting to Kubernetes statefulset hostnames that contain "broker-"
-  PULSAR_PREFIX_brokerProxyAllowedHostNames: '*broker-*.*.*.svc.cluster.local'
-```
-
-Example: limiting by both host name and IP address in a `proxy.conf` file for host deployment.
-```properties
-# require "broker" in host name
-brokerProxyAllowedHostNames=*broker*.localdomain
-# limit target ip addresses to a specific network
-brokerProxyAllowedIPAddresses=10.0.0.0/8
-```
-
-Example: limiting by multiple host name patterns and multiple IP address ranges in a `proxy.conf` file for host deployment.
-```properties
-# require "broker" in host name
-brokerProxyAllowedHostNames=*broker*.localdomain,*broker*.otherdomain
-# limit target ip addresses to a specific network or range demonstrating multiple supported formats
-brokerProxyAllowedIPAddresses=10.10.0.0/16,192.168.1.100-120,172.16.2.*,10.1.2.3
-```
-
-
-## Start the proxy
-
-To start the proxy:
-
-```bash
-
-$ cd /path/to/pulsar/directory
-$ bin/pulsar proxy
-
-```
-
-> You can run multiple instances of the Pulsar proxy in a cluster.
-
-## Stop the proxy
-
-Pulsar proxy runs in the foreground by default. To stop the proxy, simply stop the process in which the proxy is running.
-
-## Proxy frontends
-
-You can run the Pulsar proxy behind some kind of load-distributing frontend, such as an [HAProxy](https://www.digitalocean.com/community/tutorials/an-introduction-to-haproxy-and-load-balancing-concepts) load balancer.
-
-## Use Pulsar clients with the proxy
-
-Once your Pulsar proxy is up and running, preferably behind a load-distributing [frontend](#proxy-frontends), clients can connect to the proxy via whichever address the frontend uses. If the address is the DNS address `pulsar.cluster.default`, for example, the connection URL for clients is `pulsar://pulsar.cluster.default:6650`.
-
-For more information on proxy configuration, refer to [Pulsar proxy](reference-configuration.md#pulsar-proxy).
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/administration-pulsar-manager.md b/site2/website/versioned_docs/version-2.8.3-deprecated/administration-pulsar-manager.md
deleted file mode 100644
index 0e3800d847f0c8..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/administration-pulsar-manager.md
+++ /dev/null
@@ -1,205 +0,0 @@
----
-id: administration-pulsar-manager
-title: Pulsar Manager
-sidebar_label: "Pulsar Manager"
-original_id: administration-pulsar-manager
----
-
-Pulsar Manager is a web-based GUI management and monitoring tool that helps administrators and users manage and monitor tenants, namespaces, topics, subscriptions, brokers, clusters, and so on, and supports dynamic configuration of multiple environments.
-
-:::note
-
-If you monitor your current stats with Pulsar dashboard, you can try to use Pulsar Manager instead, as Pulsar dashboard is deprecated.
-
-:::
-
-## Install
-
-The easiest way to use the Pulsar Manager is to run it inside a [Docker](https://www.docker.com/products/docker) container. 
-
-```shell
-
-docker pull apachepulsar/pulsar-manager:v0.2.0
-docker run -it \
-    -p 9527:9527 -p 7750:7750 \
-    -e SPRING_CONFIGURATION_FILE=/pulsar-manager/pulsar-manager/application.properties \
-    apachepulsar/pulsar-manager:v0.2.0
-
-```
-
-* `SPRING_CONFIGURATION_FILE`: Default configuration file for Spring.
-
-### Set administrator account and password
-
- ```shell
-
- CSRF_TOKEN=$(curl http://localhost:7750/pulsar-manager/csrf-token)
- curl \
-   -H "X-XSRF-TOKEN: $CSRF_TOKEN" \
-   -H "Cookie: XSRF-TOKEN=$CSRF_TOKEN;" \
-   -H "Content-Type: application/json" \
-   -X PUT http://localhost:7750/pulsar-manager/users/superuser \
-   -d '{"name": "admin", "password": "apachepulsar", "description": "test", "email": "username@test.org"}'
-
- ```
-
-Note that the `X-XSRF-TOKEN` and `Cookie` headers use double quotes so that the shell expands `$CSRF_TOKEN`.
-
-You can find the Dockerfile in the [docker](https://github.com/apache/pulsar-manager/tree/master/docker) directory and build an image from the source code as well:
-
-```
-
-git clone https://github.com/apache/pulsar-manager
-cd pulsar-manager/front-end
-npm install --save
-npm run build:prod
-cd ..
-./gradlew build -x test
-cd ..
-docker build -f docker/Dockerfile --build-arg BUILD_DATE=`date -u +"%Y-%m-%dT%H:%M:%SZ"` --build-arg VCS_REF=`latest` --build-arg VERSION=`latest` -t apachepulsar/pulsar-manager .
-
-```
-
-### Use custom databases
-
-If you have a large amount of data, you can use a custom database. The following is an example of PostgreSQL.
-
-1. Initialize the database and table structures using the [file](https://github.com/apache/pulsar-manager/tree/master/src/main/resources/META-INF/sql/postgresql-schema.sql).
-
-2. Modify the [configuration file](https://github.com/apache/pulsar-manager/blob/master/src/main/resources/application.properties) and add the PostgreSQL configuration.
-
-```
-
-spring.datasource.driver-class-name=org.postgresql.Driver
-spring.datasource.url=jdbc:postgresql://127.0.0.1:5432/pulsar_manager
-spring.datasource.username=postgres
-spring.datasource.password=postgres
-
-```
-
-3. Compile to generate a new executable jar package.
-
-```
-
-./gradlew build -x test
-
-```
-
-### Enable JWT authentication
-
-If you want to turn on JWT authentication, configure the following parameters:
-
-* `backend.jwt.token`: token for the superuser. You need to configure this parameter during cluster initialization.
-* `jwt.broker.token.mode`: multiple modes of generating the token, including PUBLIC, PRIVATE, and SECRET.
-* `jwt.broker.public.key`: configure this option if you use the PUBLIC mode.
-* `jwt.broker.private.key`: configure this option if you use the PRIVATE mode.
-* `jwt.broker.secret.key`: configure this option if you use the SECRET mode.
-
-For more information, see [Token Authentication Admin of Pulsar](http://pulsar.apache.org/docs/en/security-token-admin/).
-
-
-If you want to enable JWT authentication, use one of the following methods. 
-
-
-* Method 1: use command-line tool
-
-```
-
-wget https://dist.apache.org/repos/dist/release/pulsar/pulsar-manager/pulsar-manager-0.2.0/apache-pulsar-manager-0.2.0-bin.tar.gz
-tar -zxvf apache-pulsar-manager-0.2.0-bin.tar.gz
-cd pulsar-manager
-tar -zxvf pulsar-manager.tar
-cd pulsar-manager
-cp -r ../dist ui
-./bin/pulsar-manager --redirect.host=http://localhost --redirect.port=9527 insert.stats.interval=600000 --backend.jwt.token=token --jwt.broker.token.mode=PRIVATE --jwt.broker.private.key=file:///path/broker-private.key --jwt.broker.public.key=file:///path/broker-public.key
-
-```
-
-First, [set the administrator account and password](#set-administrator-account-and-password).
-
-Then, log in to Pulsar Manager through http://localhost:7750/ui/index.html.
-
-* Method 2: configure the application.properties file
-
-```
-
-backend.jwt.token=token
-
-jwt.broker.token.mode=PRIVATE
-jwt.broker.public.key=file:///path/broker-public.key
-jwt.broker.private.key=file:///path/broker-private.key
-
-or
-jwt.broker.token.mode=SECRET
-jwt.broker.secret.key=file:///path/broker-secret.key
-
-```
-
-* Method 3: use Docker and enable token authentication.
-
-```
-
-export JWT_TOKEN="your-token"
-docker run -it -p 9527:9527 -p 7750:7750 -e REDIRECT_HOST=http://localhost -e REDIRECT_PORT=9527 -e DRIVER_CLASS_NAME=org.postgresql.Driver -e URL='jdbc:postgresql://127.0.0.1:5432/pulsar_manager' -e USERNAME=pulsar -e PASSWORD=pulsar -e LOG_LEVEL=DEBUG -e JWT_TOKEN=$JWT_TOKEN -v $PWD:/data apachepulsar/pulsar-manager:v0.2.0 /bin/sh
-
-```
-
-* `JWT_TOKEN`: the token of the superuser configured for the broker. It is generated by the `bin/pulsar tokens create --secret-key` or `bin/pulsar tokens create --private-key` command.
-* `REDIRECT_HOST`: the IP address of the front-end server.
-* `REDIRECT_PORT`: the port of the front-end server.
-* `DRIVER_CLASS_NAME`: the driver class name of the PostgreSQL database.
-* `URL`: the JDBC URL of your PostgreSQL database, such as jdbc:postgresql://127.0.0.1:5432/pulsar_manager. The docker image automatically starts a local instance of the PostgreSQL database.
-* `USERNAME`: the username of PostgreSQL.
-* `PASSWORD`: the password of PostgreSQL.
-* `LOG_LEVEL`: the log level.
-
-* Method 4: use Docker and turn on **token authentication** and **token management** by private key and public key.
-
-```
-
-export JWT_TOKEN="your-token"
-export PRIVATE_KEY="file:///pulsar-manager/secret/my-private.key"
-export PUBLIC_KEY="file:///pulsar-manager/secret/my-public.key"
-docker run -it -p 9527:9527 -p 7750:7750 -e REDIRECT_HOST=http://localhost -e REDIRECT_PORT=9527 -e DRIVER_CLASS_NAME=org.postgresql.Driver -e URL='jdbc:postgresql://127.0.0.1:5432/pulsar_manager' -e USERNAME=pulsar -e PASSWORD=pulsar -e LOG_LEVEL=DEBUG -e JWT_TOKEN=$JWT_TOKEN -e PRIVATE_KEY=$PRIVATE_KEY -e PUBLIC_KEY=$PUBLIC_KEY -v $PWD:/data -v $PWD/secret:/pulsar-manager/secret apachepulsar/pulsar-manager:v0.2.0 /bin/sh
-
-```
-
-* `JWT_TOKEN`: the token of the superuser configured for the broker. It is generated by the `bin/pulsar tokens create --private-key` command.
-* `PRIVATE_KEY`: private key path mounted in the container, generated by the `bin/pulsar tokens create-key-pair` command.
-* `PUBLIC_KEY`: public key path mounted in the container, generated by the `bin/pulsar tokens create-key-pair` command.
-* `$PWD/secret`: the folder where the private key and public key generated by the `bin/pulsar tokens create-key-pair` command are placed locally.
-* `REDIRECT_HOST`: the IP address of the front-end server. 
-* `REDIRECT_PORT`: the port of the front-end server.
-* `DRIVER_CLASS_NAME`: the driver class name of the PostgreSQL database.
-* `URL`: the JDBC URL of your PostgreSQL database, such as jdbc:postgresql://127.0.0.1:5432/pulsar_manager. The docker image automatically starts a local instance of the PostgreSQL database.
-* `USERNAME`: the username of PostgreSQL.
-* `PASSWORD`: the password of PostgreSQL.
-* `LOG_LEVEL`: the log level.
-
-* Method 5: use Docker and turn on **token authentication** and **token management** by secret key.
-
-```
-
-export JWT_TOKEN="your-token"
-export SECRET_KEY="file:///pulsar-manager/secret/my-secret.key"
-docker run -it -p 9527:9527 -p 7750:7750 -e REDIRECT_HOST=http://localhost -e REDIRECT_PORT=9527 -e DRIVER_CLASS_NAME=org.postgresql.Driver -e URL='jdbc:postgresql://127.0.0.1:5432/pulsar_manager' -e USERNAME=pulsar -e PASSWORD=pulsar -e LOG_LEVEL=DEBUG -e JWT_TOKEN=$JWT_TOKEN -e SECRET_KEY=$SECRET_KEY -v $PWD:/data -v $PWD/secret:/pulsar-manager/secret apachepulsar/pulsar-manager:v0.2.0 /bin/sh
-
-```
-
-* `JWT_TOKEN`: the token of the superuser configured for the broker. It is generated by the `bin/pulsar tokens create --secret-key` command.
-* `SECRET_KEY`: secret key path mounted in the container, generated by the `bin/pulsar tokens create-secret-key` command.
-* `$PWD/secret`: the folder where the secret key generated by the `bin/pulsar tokens create-secret-key` command is placed locally.
-* `REDIRECT_HOST`: the IP address of the front-end server.
-* `REDIRECT_PORT`: the port of the front-end server.
-* `DRIVER_CLASS_NAME`: the driver class name of the PostgreSQL database.
-* `URL`: the JDBC URL of your PostgreSQL database, such as jdbc:postgresql://127.0.0.1:5432/pulsar_manager. The docker image automatically starts a local instance of the PostgreSQL database.
-* `USERNAME`: the username of PostgreSQL.
-* `PASSWORD`: the password of PostgreSQL.
-* `LOG_LEVEL`: the log level.
-
-* For more information about backend configurations, see [here](https://github.com/apache/pulsar-manager/blob/master/src/README).
-* For more information about frontend configurations, see [here](https://github.com/apache/pulsar-manager/tree/master/front-end).
-
-## Log in
-
-[Set the administrator account and password](#set-administrator-account-and-password).
-
-Visit http://localhost:9527 to log in.
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/administration-stats.md b/site2/website/versioned_docs/version-2.8.3-deprecated/administration-stats.md
deleted file mode 100644
index ac0c03602f36d5..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/administration-stats.md
+++ /dev/null
@@ -1,64 +0,0 @@
----
-id: administration-stats
-title: Pulsar stats
-sidebar_label: "Pulsar statistics"
-original_id: administration-stats
----
-
-## Partitioned topics
-
-|Stat|Description|
-|---|---|
-|msgRateIn| The sum of publish rates of all local and replication publishers in messages per second.|
-|msgThroughputIn| Same as msgRateIn but in bytes per second instead of messages per second.|
-|msgRateOut| The sum of dispatch rates of all local and replication consumers in messages per second.|
-|msgThroughputOut| Same as msgRateOut but in bytes per second instead of messages per second.|
-|averageMsgSize| The average message size, in bytes, from this publisher within the last interval.|
-|storageSize| The sum of the storage size of the ledgers for this topic.|
-|publishers| The list of all local publishers into the topic. 
Publishers can be anywhere from zero to thousands.|
-|producerId| The internal identifier for this producer on this topic.|
-|producerName| The internal identifier for this producer, generated by the client library.|
-|address| The IP address and source port for the connection of this producer.|
-|connectedSince| The timestamp when this producer was created or last reconnected.|
-|subscriptions| The list of all local subscriptions to the topic.|
-|my-subscription| The name of this subscription (client defined).|
-|msgBacklog| The count of messages in backlog for this subscription.|
-|type| The type of this subscription.|
-|msgRateExpired| The rate at which messages are discarded instead of dispatched from this subscription due to TTL.|
-|consumers| The list of connected consumers for this subscription.|
-|consumerName| The internal identifier for this consumer, generated by the client library.|
-|availablePermits| The number of messages this consumer has space for in the client library's listen queue. A value of 0 means the client library's queue is full and receive() is not being called. A nonzero value means this consumer is ready to be dispatched messages.|
-|replication| This section gives the stats for cross-colo replication of this topic.|
-|replicationBacklog| The outbound replication backlog in messages.|
-|connected| Whether the outbound replicator is connected.|
-|replicationDelayInSeconds| How long the oldest message has been waiting to be sent through the connection, if connected is true.|
-|inboundConnection| The IP and port of the broker in the remote cluster's publisher connection to this broker.|
-|inboundConnectedSince| The TCP connection being used to publish messages to the remote cluster. If no local publishers are connected, this connection is automatically closed after a minute.|
-
-
-## Topics
-
-|Stat|Description|
-|---|---|
-|entriesAddedCounter| Messages published since this broker loaded this topic.|
-|numberOfEntries| The total number of messages being tracked.|
-|totalSize| The total storage size in bytes of all messages.|
-|currentLedgerEntries| The count of messages written to the ledger currently open for writing.|
-|currentLedgerSize| The size in bytes of messages written to the ledger currently open for writing.|
-|lastLedgerCreatedTimestamp| The time when the last ledger was created.|
-|lastLedgerCreationFailureTimestamp| The time when the last ledger creation failed.|
-|waitingCursorsCount| How many cursors are caught up and waiting for a new message to be published.|
-|pendingAddEntriesCount| How many messages have (asynchronous) write requests pending completion.|
-|lastConfirmedEntry| The ledgerid:entryid of the last message successfully written. If the entryid is -1, then the ledger is open or is currently being opened, but has no entries written yet.|
-|state| The state of the cursor ledger. Open means you have a cursor ledger for saving updates of the markDeletePosition.|
-|ledgers| The ordered list of all ledgers for this topic holding its messages.|
-|cursors| The list of all cursors on this topic. 
Every subscription you saw in the topic stats has one.|
-|markDeletePosition| The ack position: the last message the subscriber acknowledges receiving.|
-|readPosition| The latest position of the subscriber for reading messages.|
-|waitingReadOp| This is true when the subscription has read the latest message published to the topic and is waiting for new messages to be published.|
-|pendingReadOps| The counter for how many outstanding read requests to the bookies are in progress.|
-|messagesConsumedCounter| The number of messages this cursor has acked since this broker loaded this topic.|
-|cursorLedger| The ledger used to persistently store the current markDeletePosition.|
-|cursorLedgerLastEntry| The last entryid used to persistently store the current markDeletePosition.|
-|individuallyDeletedMessages| If acks are done out of order, shows the ranges of messages acked between the markDeletePosition and the read position.|
-|lastLedgerSwitchTimestamp| The last time the cursor ledger was rolled over.|
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/administration-upgrade.md b/site2/website/versioned_docs/version-2.8.3-deprecated/administration-upgrade.md
deleted file mode 100644
index 72d136b6460f62..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/administration-upgrade.md
+++ /dev/null
@@ -1,168 +0,0 @@
----
-id: administration-upgrade
-title: Upgrade Guide
-sidebar_label: "Upgrade"
-original_id: administration-upgrade
----
-
-## Upgrade guidelines
-
-Apache Pulsar comprises multiple components: ZooKeeper, bookies, and brokers. These components are either stateful or stateless. You do not have to upgrade ZooKeeper nodes unless you have a special requirement. While you upgrade, you need to pay attention to bookies (stateful), brokers and proxies (stateless).
-
-The following are some guidelines on upgrading a Pulsar cluster. Read the guidelines before upgrading.
-
-- Back up all your configuration files before upgrading.
-- Read the guide entirely, make a plan, and then execute the plan. When you make your upgrade plan, you need to take your specific requirements and environment into consideration.
-- Pay attention to the upgrading order of components. In general, you do not need to upgrade your ZooKeeper or configuration store cluster (the global ZooKeeper cluster). You need to upgrade bookies first, and then upgrade brokers, proxies, and your clients.
-- If `autorecovery` is enabled, you need to disable `autorecovery` during the upgrade process, and re-enable it after completing the process.
-- Read the release notes carefully for each release. Release notes contain feature and configuration changes that might impact your upgrade.
-- Upgrade a small subset of nodes of each type to canary test the new version before upgrading all nodes of that type in the cluster. When you have upgraded the canary nodes, run them for a while to ensure that they work correctly.
-- Upgrade one data center to verify the new version before upgrading all data centers if your cluster runs in multi-cluster replicated mode.
-
-> Note: Currently, Apache Pulsar is compatible between versions.
-
-## Upgrade sequence
-
-To upgrade an Apache Pulsar cluster, follow the upgrade sequence.
-
-1. Upgrade ZooKeeper (optional)
-- Canary test: test an upgraded version in one or a small set of ZooKeeper nodes.
-- Rolling upgrade: roll out the upgraded version to all ZooKeeper servers incrementally, one at a time. Monitor your dashboard during the whole rolling upgrade process.
-2. 
Upgrade bookies
-- Canary test: test an upgraded version in one or a small set of bookies.
-- Rolling upgrade:
-
-  a. Disable `autorecovery` with the following command.
-
-   ```shell
-
-   bin/bookkeeper shell autorecovery -disable
-
-   ```
-
-
-
-  b. Roll out the upgraded version to all bookies in the cluster after you determine that a version is safe after canary.
-
-  c. After you upgrade all bookies, re-enable `autorecovery` with the following command.
-
-   ```shell
-
-   bin/bookkeeper shell autorecovery -enable
-
-   ```
-
-3. Upgrade brokers
-- Canary test: test an upgraded version in one or a small set of brokers.
-- Rolling upgrade: roll out the upgraded version to all brokers in the cluster after you determine that a version is safe after canary.
-4. Upgrade proxies
-- Canary test: test an upgraded version in one or a small set of proxies.
-- Rolling upgrade: roll out the upgraded version to all proxies in the cluster after you determine that a version is safe after canary.
-
-## Upgrade ZooKeeper (optional)
-While you upgrade ZooKeeper servers, you can do a canary test first, and then upgrade all ZooKeeper servers in the cluster.
-
-### Canary test
-
-You can test an upgraded version in one of the ZooKeeper servers before upgrading all ZooKeeper servers in your cluster.
-
-To upgrade a ZooKeeper server to a new version, complete the following steps:
-
-1. Stop a ZooKeeper server.
-2. Upgrade the binary and configuration files.
-3. Start the ZooKeeper server with the new binary files.
-4. Use `pulsar zookeeper-shell` to connect to the newly upgraded ZooKeeper server and run a few commands to verify if it works as expected.
-5. Run the ZooKeeper server for a few days, observe it, and make sure the ZooKeeper cluster runs well.
-
-#### Canary rollback
-
-If issues occur during the canary test, you can shut down the problematic ZooKeeper node, revert the binary and configuration, and restart ZooKeeper with the reverted binary.
-
-### Upgrade all ZooKeeper servers
-
-After a canary test to upgrade one ZooKeeper in your cluster, you can upgrade all ZooKeeper servers in your cluster.
-
-You can upgrade all ZooKeeper servers one by one by following the steps in the canary test.
-
-## Upgrade bookies
-
-While you upgrade bookies, you can do a canary test first, and then upgrade all bookies in the cluster.
-For more details, you can read the Apache BookKeeper [Upgrade guide](http://bookkeeper.apache.org/docs/latest/admin/upgrade).
-
-### Canary test
-
-You can test an upgraded version in one or a small set of bookies before upgrading all bookies in your cluster.
-
-To upgrade a bookie to a new version, complete the following steps:
-
-1. Stop a bookie.
-2. Upgrade the binary and configuration files.
-3. Start the bookie in `ReadOnly` mode to verify if the bookie of this new version runs well for read workload.
-
-   ```shell
-
-   bin/pulsar bookie --readOnly
-
-   ```
-
-4. When the bookie runs successfully in `ReadOnly` mode, stop the bookie and restart it in `Write/Read` mode.
-
-   ```shell
-
-   bin/pulsar bookie
-
-   ```
-
-5. Observe and make sure the cluster serves both write and read traffic.
-
-#### Canary rollback
-
-If issues occur during the canary test, you can shut down the problematic bookie node. Other bookies in the cluster replace this problematic bookie node via autorecovery.
-
-### Upgrade all bookies
-
-After a canary test to upgrade some bookies in your cluster, you can upgrade all bookies in your cluster. 
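-
-For example, one rolling pass over the cluster might look like the following sketch (hostnames, paths, and SSH-based access are hypothetical; it assumes `pulsar-daemon`-managed bookies and follows the per-bookie steps described below):
-
-```shell
-
-# Hypothetical rolling upgrade: one bookie at a time, verifying each before moving on.
-for host in bookie1.example.com bookie2.example.com bookie3.example.com; do
-  ssh "$host" "/opt/pulsar/bin/pulsar-daemon stop bookie"
-  ssh "$host" "tar xzf /tmp/apache-pulsar-new-bin.tar.gz -C /opt/pulsar --strip-components=1"
-  ssh "$host" "/opt/pulsar/bin/pulsar-daemon start bookie"
-  # Sanity-check the upgraded bookie before touching the next one.
-  ssh "$host" "/opt/pulsar/bin/bookkeeper shell bookiesanity"
-done
-
-```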
-
-Before upgrading, you have to decide whether to upgrade the whole cluster at once (a downtime upgrade) or one node at a time (a rolling upgrade).
-
-In a rolling upgrade scenario, upgrade one bookie at a time. In a downtime upgrade scenario, shut down the entire cluster, upgrade each bookie, and then start the cluster.
-
-In both scenarios, the procedure is the same for each bookie.
-
-1. Stop the bookie.
-2. Upgrade the software (either new binary or new configuration files).
-3. Start the bookie.
-
-> **Advanced operations**
-> When you upgrade a large BookKeeper cluster in a rolling upgrade scenario, upgrading one bookie at a time is slow. If you configure a rack-aware or region-aware placement policy, you can upgrade bookies rack by rack or region by region, which speeds up the whole upgrade process.
-
-## Upgrade brokers and proxies
-
-The upgrade procedure for brokers and proxies is the same. Brokers and proxies are `stateless`, so upgrading the two services is easy.
-
-### Canary test
-
-You can test an upgraded version in one or a small set of nodes before upgrading all nodes in your cluster.
-
-To upgrade to a new version, complete the following steps:
-
-1. Stop a broker (or proxy).
-2. Upgrade the binary and configuration file.
-3. Start a broker (or proxy).
-
-#### Canary rollback
-
-If issues occur during the canary test, you can shut down the problematic broker (or proxy) node. Revert to the old version and restart the broker (or proxy).
-
-### Upgrade all brokers or proxies
-
-After a canary test to upgrade some brokers or proxies in your cluster, you can upgrade all brokers or proxies in your cluster.
-
-Before upgrading, you have to decide whether to upgrade the whole cluster at once (a downtime upgrade) or incrementally (a rolling upgrade).
-
-In a rolling upgrade scenario, you can upgrade one broker or one proxy at a time if the size of the cluster is small. If your cluster is large, you can upgrade brokers or proxies in batches. When you upgrade a batch of brokers or proxies, make sure the remaining brokers and proxies in the cluster have enough capacity to handle the traffic during the upgrade.
-
-In a downtime upgrade scenario, shut down the entire cluster, upgrade each broker or proxy, and then start the cluster.
-
-In both scenarios, the procedure is the same for each broker or proxy.
-
-1. Stop the broker or proxy.
-2. Upgrade the software (either new binary or new configuration files).
-3. Start the broker or proxy.
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/administration-zk-bk.md b/site2/website/versioned_docs/version-2.8.3-deprecated/administration-zk-bk.md
deleted file mode 100644
index 2c080123ca81df..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/administration-zk-bk.md
+++ /dev/null
@@ -1,386 +0,0 @@
----
-id: administration-zk-bk
-title: ZooKeeper and BookKeeper administration
-sidebar_label: "ZooKeeper and BookKeeper"
-original_id: administration-zk-bk
----
-
-Pulsar relies on two external systems for essential tasks:
-
-* [ZooKeeper](https://zookeeper.apache.org/) is responsible for a wide variety of configuration-related and coordination-related tasks.
-* [BookKeeper](http://bookkeeper.apache.org/) is responsible for [persistent storage](concepts-architecture-overview.md#persistent-storage) of message data.
-
-ZooKeeper and BookKeeper are both open-source [Apache](https://www.apache.org/) projects. 
-
-> Skip to the [How Pulsar uses ZooKeeper and BookKeeper](#how-pulsar-uses-zookeeper-and-bookkeeper) section below for a more schematic explanation of the role of these two systems in Pulsar.
-
-
-## ZooKeeper
-
-Each Pulsar instance relies on two separate ZooKeeper quorums.
-
-* [Local ZooKeeper](#deploy-local-zookeeper) operates at the cluster level and provides cluster-specific configuration management and coordination. Each Pulsar cluster needs to have a dedicated ZooKeeper cluster.
-* [Configuration Store](#deploy-configuration-store) operates at the instance level and provides configuration management for the entire system (and thus across clusters). An independent cluster of machines, or the same machines that local ZooKeeper uses, can provide the configuration store quorum.
-
-### Deploy local ZooKeeper
-
-ZooKeeper manages a variety of essential coordination-related and configuration-related tasks for Pulsar.
-
-To deploy a Pulsar instance, you need to stand up one local ZooKeeper cluster *per Pulsar cluster*.
-
-To begin, add all ZooKeeper servers to the quorum configuration specified in the [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file. Add a `server.N` line for each node in the cluster to the configuration, where `N` is the number of the ZooKeeper node. The following is an example for a three-node cluster:
-
-```properties
-
-server.1=zk1.us-west.example.com:2888:3888
-server.2=zk2.us-west.example.com:2888:3888
-server.3=zk3.us-west.example.com:2888:3888
-
-```
-
-On each host, you need to specify the node ID in the `myid` file of each node, which is in the `data/zookeeper` folder of each server by default (you can change the file location via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter).
-
-> See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed information on `myid` and more.
-
-
-On a ZooKeeper server at `zk1.us-west.example.com`, for example, you can set the `myid` value like this:
-
-```shell
-
-$ mkdir -p data/zookeeper
-$ echo 1 > data/zookeeper/myid
-
-```
-
-On `zk2.us-west.example.com` the command is `echo 2 > data/zookeeper/myid` and so on.
-
-Once you add each server to the `zookeeper.conf` configuration and each server has the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
-
-```shell
-
-$ bin/pulsar-daemon start zookeeper
-
-```
-
-### Deploy configuration store
-
-The ZooKeeper cluster configured and started up in the section above is a *local* ZooKeeper cluster that you can use to manage a single Pulsar cluster. In addition to a local cluster, however, a full Pulsar instance also requires a configuration store for handling some instance-level configuration and coordination tasks.
-
-If you deploy a [single-cluster](#single-cluster-pulsar-instance) instance, you do not need a separate cluster for the configuration store. If, however, you deploy a [multi-cluster](#multi-cluster-pulsar-instance) instance, you need to stand up a separate ZooKeeper cluster for configuration tasks.
-
-#### Single-cluster Pulsar instance
-
-If your Pulsar instance consists of just one cluster, then you can deploy a configuration store on the same machines as the local ZooKeeper quorum, but running on different TCP ports. 
-
-To deploy a ZooKeeper configuration store in a single-cluster instance, add the same ZooKeeper servers that the local quorum uses to the configuration file in [`conf/global_zookeeper.conf`](reference-configuration.md#configuration-store) using the same method for [local ZooKeeper](#local-zookeeper), but make sure to use a different port (2181 is the default for ZooKeeper). The following is an example that uses port 2184 for a three-node ZooKeeper cluster:
-
-```properties
-
-clientPort=2184
-server.1=zk1.us-west.example.com:2185:2186
-server.2=zk2.us-west.example.com:2185:2186
-server.3=zk3.us-west.example.com:2185:2186
-
-```
-
-As before, create the `myid` files for each server in `data/global-zookeeper/myid`.
-
-#### Multi-cluster Pulsar instance
-
-When you deploy a global Pulsar instance, with clusters distributed across different geographical regions, the configuration store serves as a highly available and strongly consistent metadata store that can tolerate failures and partitions spanning whole regions.
-
-The key here is to make sure the ZK quorum members are spread across at least 3 regions and that other regions run as observers.
-
-Again, given the very low expected load on the configuration store servers, you can share the same hosts used for the local ZooKeeper quorum.
-
-For example, assume a Pulsar instance with the following clusters: `us-west`, `us-east`, `us-central`, `eu-central`, and `ap-south`. Also assume that each cluster has its own local ZK servers named like
-
-```
-
-zk[1-3].${CLUSTER}.example.com
-
-```
-
-In this scenario you want to pick the quorum participants from a few clusters and let all the others be ZK observers. For example, to form a 7-server quorum, you can pick 3 servers from `us-west`, 2 from `us-central`, and 2 from `us-east`.
-
-This guarantees that writes to the configuration store are possible even if one of these regions is unreachable.
-
-The ZK configuration in all the servers looks like:
-
-```properties
-
-clientPort=2184
-server.1=zk1.us-west.example.com:2185:2186
-server.2=zk2.us-west.example.com:2185:2186
-server.3=zk3.us-west.example.com:2185:2186
-server.4=zk1.us-central.example.com:2185:2186
-server.5=zk2.us-central.example.com:2185:2186
-server.6=zk3.us-central.example.com:2185:2186:observer
-server.7=zk1.us-east.example.com:2185:2186
-server.8=zk2.us-east.example.com:2185:2186
-server.9=zk3.us-east.example.com:2185:2186:observer
-server.10=zk1.eu-central.example.com:2185:2186:observer
-server.11=zk2.eu-central.example.com:2185:2186:observer
-server.12=zk3.eu-central.example.com:2185:2186:observer
-server.13=zk1.ap-south.example.com:2185:2186:observer
-server.14=zk2.ap-south.example.com:2185:2186:observer
-server.15=zk3.ap-south.example.com:2185:2186:observer
-
-```
-
-Additionally, ZK observers need to have:
-
-```properties
-
-peerType=observer
-
-```
-
-##### Start the service
-
-Once your configuration store configuration is in place, you can start up the service using [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon)
-
-```shell
-
-$ bin/pulsar-daemon start configuration-store
-
-```
-
-### ZooKeeper configuration
-
-In Pulsar, ZooKeeper configuration is handled by two separate configuration files in the `conf` directory of your Pulsar installation: `conf/zookeeper.conf` for [local ZooKeeper](#local-zookeeper) and `conf/global-zookeeper.conf` for [configuration store](#configuration-store). 
-
-#### Local ZooKeeper
-
-The [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file handles the configuration for local ZooKeeper. The table below shows the available parameters:
-
-|Name|Description|Default|
-|---|---|---|
-|tickTime| The tick is the basic unit of time in ZooKeeper, measured in milliseconds and used to regulate things like heartbeats and timeouts. tickTime is the length of a single tick. |2000|
-|initLimit| The maximum time, in ticks, that the leader ZooKeeper server allows follower ZooKeeper servers to successfully connect and sync. The tick time is set in milliseconds using the tickTime parameter. |10|
-|syncLimit| The maximum time, in ticks, that a follower ZooKeeper server is allowed to sync with other ZooKeeper servers. The tick time is set in milliseconds using the tickTime parameter. |5|
-|dataDir| The location where ZooKeeper stores in-memory database snapshots as well as the transaction log of updates to the database. |data/zookeeper|
-|clientPort| The port on which the ZooKeeper server listens for connections. |2181|
-|autopurge.snapRetainCount| In ZooKeeper, auto purge determines how many recent snapshots of the database stored in dataDir to retain within the time interval specified by autopurge.purgeInterval (while deleting the rest). |3|
-|autopurge.purgeInterval| The time interval, in hours, which triggers the ZooKeeper database purge task. Setting to a non-zero number enables auto purge; setting to 0 disables it. Read this guide before enabling auto purge. |1|
-|maxClientCnxns| The maximum number of client connections. Increase this if you need to handle more ZooKeeper clients. |60|
-
-
-#### Configuration Store
-
-The [`conf/global-zookeeper.conf`](reference-configuration.md#configuration-store) file handles the configuration for the configuration store. The table below shows the available parameters:
-
-
-## BookKeeper
-
-BookKeeper stores all durable messages in Pulsar. BookKeeper is a distributed [write-ahead log](https://en.wikipedia.org/wiki/Write-ahead_logging) (WAL) system that guarantees read consistency of independent message logs called ledgers. Individual BookKeeper servers are also called *bookies*.
-
-> To manage message persistence, retention, and expiry in Pulsar, refer to [cookbook](cookbooks-retention-expiry.md).
-
-### Hardware requirements
-
-Bookie hosts store message data on disk. To provide optimal performance, ensure that the bookies have a suitable hardware configuration. The following are two key dimensions of bookie hardware capacity:
-
-- Disk I/O capacity (read/write)
-- Storage capacity
-
-Message entries written to bookies are always synced to disk before returning an acknowledgement to the Pulsar broker by default. To ensure low write latency, BookKeeper is designed to use multiple devices:
-
-- A **journal** to ensure durability. For sequential writes, it is critical to have fast [fsync](https://linux.die.net/man/2/fsync) operations on bookie hosts. Typically, small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) should suffice, or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache. Both solutions can reach fsync latency of ~0.4 ms.
-- A **ledger storage device** stores data. Writes happen in the background, so write I/O is not a big concern. Reads happen sequentially most of the time, and the backlog is drained only in case of consumer drain. 
To store large amounts of data, a typical configuration involves multiple HDDs with a RAID controller.
-
-### Configure BookKeeper
-
-You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. When you configure each bookie, ensure that the [`zkServers`](reference-configuration.md#bookkeeper-zkServers) parameter is set to the connection string for the local ZooKeeper of the Pulsar cluster.
-
-The minimum configuration changes required in `conf/bookkeeper.conf` are as follows:
-
-:::note
-
-Set `journalDirectory` and `ledgerDirectories` carefully. It is difficult to change them later.
-
-:::
-
-```properties
-
-# Change to point to journal disk mount point
-journalDirectory=data/bookkeeper/journal
-
-# Point to ledger storage disk mount point
-ledgerDirectories=data/bookkeeper/ledgers
-
-# Point to local ZK quorum
-zkServers=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
-
-# It is recommended to set this parameter. Otherwise, BookKeeper can't start normally in certain environments (for example, Huawei Cloud).
-advertisedAddress=
-
-```
-
-To change the ZooKeeper root path that BookKeeper uses, use `zkLedgersRootPath=/MY-PREFIX/ledgers` instead of `zkServers=localhost:2181/MY-PREFIX`.
-
-> For more information about BookKeeper, refer to the official [BookKeeper docs](http://bookkeeper.apache.org).
-
-### Deploy BookKeeper
-
-BookKeeper provides [persistent message storage](concepts-architecture-overview.md#persistent-storage) for Pulsar. Each Pulsar broker has its own cluster of bookies. The BookKeeper cluster shares a local ZooKeeper quorum with the Pulsar cluster.
-
-### Start bookies manually
-
-You can start a bookie in the foreground or as a background daemon.
-
-To start a bookie in the foreground, use the [`bookkeeper`](reference-cli-tools.md#bookkeeper) CLI tool:
-
-```bash
-
-$ bin/bookkeeper bookie
-
-```
-
-To start a bookie in the background, use the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
-
-```bash
-
-$ bin/pulsar-daemon start bookie
-
-```
-
-You can verify whether the bookie works properly with the `bookiesanity` command for the [BookKeeper shell](reference-cli-tools.md#bookkeeper-shell):
-
-```shell
-
-$ bin/bookkeeper shell bookiesanity
-
-```
-
-This command creates a new ledger on the local bookie, writes a few entries, reads them back, and finally deletes the ledger.
-
-### Decommission bookies cleanly
-
-Before you decommission a bookie, you need to check your environment and meet the following requirements.
-
-1. Ensure the state of your cluster supports decommissioning the target bookie. Check if `EnsembleSize >= Write Quorum >= Ack Quorum` is still `true` with one less bookie.
-
-2. Ensure the target bookie is listed after using the `listbookies` command.
-
-3. Ensure that no other process is ongoing (upgrade, etc.).
-
-Then you can decommission bookies safely. To decommission bookies, complete the following steps.
-
-1. Log in to the bookie node and check if there are underreplicated ledgers. The decommission command forces replication of the underreplicated ledgers.
-`$ bin/bookkeeper shell listunderreplicated`
-
-2. Stop the bookie by killing the bookie process. Make sure that no liveness/readiness probes are set up for the bookies to spin them back up if you deploy them in a Kubernetes environment.
-
-3. Run the decommission command.
-- If you have logged in to the node to be decommissioned, you do not need to provide `-bookieid`. 
- If you are running the decommission command for the target bookie node from another bookie node, you should mention the target bookie ID in the arguments for `-bookieid`.
-`$ bin/bookkeeper shell decommissionbookie`
-or
-`$ bin/bookkeeper shell decommissionbookie -bookieid <bookie-id>`
-
-4. Validate that no ledgers are on the decommissioned bookie.
-`$ bin/bookkeeper shell listledgers -bookieid <bookie-id>`
-
-You can run the following command to check if the bookie you have decommissioned is listed in the bookies list:
-
-```bash
-
-./bookkeeper shell listbookies -rw -h
-./bookkeeper shell listbookies -ro -h
-
-```
-
-## BookKeeper persistence policies
-
-In Pulsar, you can set *persistence policies* at the namespace level, which determine how BookKeeper handles persistent storage of messages. Policies determine four things:
-
-* The number of acks (guaranteed copies) to wait for on each ledger entry.
-* The number of bookies to use for a topic.
-* The number of writes to make for each ledger entry.
-* The throttling rate for mark-delete operations.
-
-### Set persistence policies
-
-You can set persistence policies for BookKeeper at the [namespace](reference-terminology.md#namespace) level.
-
-#### Pulsar-admin
-
-Use the [`set-persistence`](reference-pulsar-admin.md#namespaces-set-persistence) subcommand and specify a namespace as well as any policies that you want to apply. The available flags are:
-
-Flag | Description | Default
-:----|:------------|:-------
-`-a`, `--bookkeeper-ack-quorum` | The number of acks (guaranteed copies) to wait on for each entry | 0
-`-e`, `--bookkeeper-ensemble` | The number of [bookies](reference-terminology.md#bookie) to use for topics in the namespace | 0
-`-w`, `--bookkeeper-write-quorum` | The number of writes to make for each entry | 0
-`-r`, `--ml-mark-delete-max-rate` | Throttling rate for mark-delete operations (0 means no throttle) | 0
-
-The following is an example:
-
-```shell
-
-$ pulsar-admin namespaces set-persistence my-tenant/my-ns \
-  --bookkeeper-ack-quorum 3 \
-  --bookkeeper-ensemble 2
-
-```
-
-#### REST API
-
-{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/setPersistence?version=@pulsar:version_number@}
-
-#### Java
-
-```java
-
-int bkEnsemble = 2;
-int bkQuorum = 3;
-int bkAckQuorum = 2;
-double markDeleteRate = 0.7;
-PersistencePolicies policies =
-  new PersistencePolicies(bkEnsemble, bkQuorum, bkAckQuorum, markDeleteRate);
-admin.namespaces().setPersistence(namespace, policies);
-
-```
-
-### List persistence policies
-
-You can see which persistence policy currently applies to a namespace.
-
-#### Pulsar-admin
-
-Use the [`get-persistence`](reference-pulsar-admin.md#namespaces-get-persistence) subcommand and specify the namespace.
-
-The following is an example:
-
-```shell
-
-$ pulsar-admin namespaces get-persistence my-tenant/my-ns
-{
-  "bookkeeperEnsemble": 1,
-  "bookkeeperWriteQuorum": 1,
-  "bookkeeperAckQuorum": 1,
-  "managedLedgerMaxMarkDeleteRate": 0
-}
-
-```
-
-#### REST API
-
-{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/getPersistence?version=@pulsar:version_number@}
-
-#### Java
-
-```java
-
-PersistencePolicies policies = admin.namespaces().getPersistence(namespace);
-
-```
-
-## How Pulsar uses ZooKeeper and BookKeeper
-
-This diagram illustrates the role of ZooKeeper and BookKeeper in a Pulsar cluster:
-
-![ZooKeeper and BookKeeper](/assets/pulsar-system-architecture.png)
-
-Each Pulsar cluster consists of one or more message brokers. 
Each broker relies on an ensemble of bookies.
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/client-libraries-cgo.md b/site2/website/versioned_docs/version-2.8.3-deprecated/client-libraries-cgo.md
deleted file mode 100644
index f352f942b77144..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/client-libraries-cgo.md
+++ /dev/null
@@ -1,579 +0,0 @@
----
-id: client-libraries-cgo
-title: Pulsar CGo client
-sidebar_label: "CGo(deprecated)"
-original_id: client-libraries-cgo
----
-
-You can use the Pulsar Go client to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Go (aka Golang).
-
-All the methods in [producers](#producers), [consumers](#consumers), and [readers](#readers) of a Go client are thread-safe.
-
-Currently, the following Go clients are maintained in two repositories.
-
-| Language | Project | Maintainer | License | Description |
-|----------|---------|------------|---------|-------------|
-| CGo | [pulsar-client-go](https://github.com/apache/pulsar/tree/master/pulsar-client-go) | [Apache Pulsar](https://github.com/apache/pulsar) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | CGo client that depends on the C++ client library |
-| Go | [pulsar-client-go](https://github.com/apache/pulsar-client-go) | [Apache Pulsar](https://github.com/apache/pulsar) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | A native Golang client |
-
-> **API docs available as well**
-> For standard API docs, consult the [Godoc](https://godoc.org/github.com/apache/pulsar/pulsar-client-go/pulsar).
-
-## Installation
-
-### Requirements
-
-The Pulsar Go client library is based on the C++ client library. Follow
-the instructions for the [C++ library](client-libraries-cpp.md) to install the binaries through [RPM](client-libraries-cpp.md#rpm), [Deb](client-libraries-cpp.md#deb) or [Homebrew packages](client-libraries-cpp.md#macos).
-
-### Install go package
-
-> **Compatibility Warning**
-> The version number of the Go client **must match** the version number of the Pulsar C++ client library.
-
-You can install the `pulsar` library locally using `go get`. Note that `go get` doesn't support fetching a specific tag - it will always pull in master's version of the Go client. You'll need a C++ client library that matches master.
-
-```bash
-
-$ go get -u github.com/apache/pulsar/pulsar-client-go/pulsar
-
-```
-
-Or you can use [dep](https://github.com/golang/dep) for managing the dependencies.
-
-```bash
-
-$ dep ensure -add github.com/apache/pulsar/pulsar-client-go/pulsar@v@pulsar:version@
-
-```
-
-Once installed locally, you can import it into your project:
-
-```go
-
-import "github.com/apache/pulsar/pulsar-client-go/pulsar"
-
-```
-
-## Connection URLs
-
-To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL.
-
-Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme and have a default port of 6650. 
Here's an example for `localhost`: - -```http - -pulsar://localhost:6650 - -``` - -A URL for a production Pulsar cluster may look something like this: - -```http - -pulsar://pulsar.us-west.example.com:6650 - -``` - -If you're using [TLS](security-tls-authentication.md) authentication, the URL will look like something like this: - -```http - -pulsar+ssl://pulsar.us-west.example.com:6651 - -``` - -## Create a client - -In order to interact with Pulsar, you'll first need a `Client` object. You can create a client object using the `NewClient` function, passing in a `ClientOptions` object (more on configuration [below](#client-configuration)). Here's an example: - -```go - -import ( - "log" - "runtime" - - "github.com/apache/pulsar/pulsar-client-go/pulsar" -) - -func main() { - client, err := pulsar.NewClient(pulsar.ClientOptions{ - URL: "pulsar://localhost:6650", - OperationTimeoutSeconds: 5, - MessageListenerThreads: runtime.NumCPU(), - }) - - if err != nil { - log.Fatalf("Could not instantiate Pulsar client: %v", err) - } -} - -``` - -The following configurable parameters are available for Pulsar clients: - -Parameter | Description | Default -:---------|:------------|:------- -`URL` | The connection URL for the Pulsar cluster. See [above](#urls) for more info | -`IOThreads` | The number of threads to use for handling connections to Pulsar [brokers](reference-terminology.md#broker) | 1 -`OperationTimeoutSeconds` | The timeout for some Go client operations (creating producers, subscribing to and unsubscribing from [topics](reference-terminology.md#topic)). Retries will occur until this threshold is reached, at which point the operation will fail. | 30 -`MessageListenerThreads` | The number of threads used by message listeners ([consumers](#consumers) and [readers](#readers)) | 1 -`ConcurrentLookupRequests` | The number of concurrent lookup requests that can be sent on each broker connection. Setting a maximum helps to keep from overloading brokers. You should set values over the default of 5000 only if the client needs to produce and/or subscribe to thousands of Pulsar topics. | 5000 -`Logger` | A custom logger implementation for the client (as a function that takes a log level, file path, line number, and message). All info, warn, and error messages will be routed to this function. | `nil` -`TLSTrustCertsFilePath` | The file path for the trusted TLS certificate | -`TLSAllowInsecureConnection` | Whether the client accepts untrusted TLS certificates from the broker | `false` -`Authentication` | Configure the authentication provider. (default: no authentication). Example: `Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem")` | `nil` -`StatsIntervalInSeconds` | The interval (in seconds) at which client stats are published | 60 - -## Producers - -Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Go producers using a `ProducerOptions` object. 
Here's an example:
-
-```go
-
-producer, err := client.CreateProducer(pulsar.ProducerOptions{
-    Topic: "my-topic",
-})
-
-if err != nil {
-    log.Fatalf("Could not instantiate Pulsar producer: %v", err)
-}
-
-defer producer.Close()
-
-msg := pulsar.ProducerMessage{
-    Payload: []byte("Hello, Pulsar"),
-}
-
-if err := producer.Send(context.Background(), msg); err != nil {
-    log.Fatalf("Producer could not send message: %v", err)
-}
-
-```
-
-> **Blocking operation**
-> When you create a new Pulsar producer, the operation will block (waiting on a go channel) until either a producer is successfully created or an error is thrown.
-
-
-### Producer operations
-
-Pulsar Go producers have the following methods available:
-
-Method | Description | Return type
-:------|:------------|:-----------
-`Topic()` | Fetches the producer's [topic](reference-terminology.md#topic)| `string`
-`Name()` | Fetches the producer's name | `string`
-`Send(context.Context, ProducerMessage)` | Publishes a [message](#messages) to the producer's topic. This call will block until the message is successfully acknowledged by the Pulsar broker, or an error will be thrown if the timeout set using the `SendTimeout` in the producer's [configuration](#producer-configuration) is exceeded. | `error`
-`SendAndGetMsgID(context.Context, ProducerMessage)`| Sends a message; this call blocks until the message is successfully acknowledged by the Pulsar broker. | (MessageID, error)
-`SendAsync(context.Context, ProducerMessage, func(ProducerMessage, error))` | Publishes a [message](#messages) to the producer's topic asynchronously. The third argument is a callback function that specifies what happens either when the message is acknowledged or an error is thrown. |
-`SendAndGetMsgIDAsync(context.Context, ProducerMessage, func(MessageID, error))`| Sends a message in asynchronous mode. The callback reports back the published message ID and any error encountered while publishing. |
-`LastSequenceID()` | Get the last sequence id that was published by this producer. This represents either the automatically assigned or custom sequence id (set on the ProducerMessage) that was published and acknowledged by the broker. | int64
-`Flush()`| Flush all the messages buffered in the client and wait until all messages have been successfully persisted. | error
-`Close()` | Closes the producer and releases all resources allocated to it. If `Close()` is called then no more messages will be accepted from the publisher. This method will block until all pending publish requests have been persisted by Pulsar. If an error is thrown, no pending writes will be retried. | `error`
-`Schema()` | Fetches the schema used by the producer | Schema
-
-Here's a more involved example usage of a producer:
-
-```go
-
-import (
-    "context"
-    "fmt"
-    "log"
-
-    "github.com/apache/pulsar/pulsar-client-go/pulsar"
-)
-
-func main() {
-    // Instantiate a Pulsar client
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-        URL: "pulsar://localhost:6650",
-    })
-
-    if err != nil { log.Fatal(err) }
-
-    // Use the client to instantiate a producer
-    producer, err := client.CreateProducer(pulsar.ProducerOptions{
-        Topic: "my-topic",
-    })
-
-    if err != nil { log.Fatal(err) }
-
-    ctx := context.Background()
-
-    // Send 10 messages synchronously and 10 messages asynchronously
-    for i := 0; i < 10; i++ {
-        // Create a message
-        msg := pulsar.ProducerMessage{
-            Payload: []byte(fmt.Sprintf("message-%d", i)),
-        }
-
-        // Attempt to send the message
-        if err := producer.Send(ctx, msg); err != nil {
-            log.Fatal(err)
-        }
-
-        // Create a different message to send asynchronously
-        asyncMsg := pulsar.ProducerMessage{
-            Payload: []byte(fmt.Sprintf("async-message-%d", i)),
-        }
-
-        // Attempt to send the message asynchronously and handle the response
-        producer.SendAsync(ctx, asyncMsg, func(msg pulsar.ProducerMessage, err error) {
-            if err != nil { log.Fatal(err) }
-
-            fmt.Printf("the %s successfully published\n", string(msg.Payload))
-        })
-    }
-}
-
-```
-
-### Producer configuration
-
-Parameter | Description | Default
-:---------|:------------|:-------
-`Topic` | The Pulsar [topic](reference-terminology.md#topic) to which the producer will publish messages |
-`Name` | A name for the producer. If you don't explicitly assign a name, Pulsar will automatically generate a globally unique name that you can access later using the `Name()` method. If you choose to explicitly assign a name, it will need to be unique across *all* Pulsar clusters, otherwise the creation operation will throw an error. |
-`Properties`| Attach a set of application-defined properties to the producer. These properties will be visible in the topic stats |
-`SendTimeout` | When publishing a message to a topic, the producer will wait for an acknowledgment from the responsible Pulsar [broker](reference-terminology.md#broker). If a message is not acknowledged within the threshold set by this parameter, an error will be thrown. If you set `SendTimeout` to -1, the timeout will be set to infinity (and thus removed). Removing the send timeout is recommended when using Pulsar's [message de-duplication](cookbooks-deduplication.md) feature. | 30 seconds
-`MaxPendingMessages` | The maximum size of the queue holding pending messages (i.e. messages waiting to receive an acknowledgment from the [broker](reference-terminology.md#broker)). By default, when the queue is full all calls to the `Send` and `SendAsync` methods will fail *unless* `BlockIfQueueFull` is set to `true`. |
-`MaxPendingMessagesAcrossPartitions` | Set the number of max pending messages across all the partitions. This setting will be used to lower the max pending messages for each partition `MaxPendingMessages(int)`, if the total exceeds the configured value.|
-`BlockIfQueueFull` | If set to `true`, the producer's `Send` and `SendAsync` methods will block when the outgoing message queue is full rather than failing and throwing an error (the size of that queue is dictated by the `MaxPendingMessages` parameter); if set to `false` (the default), `Send` and `SendAsync` operations will fail and throw a `ProducerQueueIsFullError` when the queue is full. | `false`
-`MessageRoutingMode` | The message routing logic (for producers on [partitioned topics](concepts-architecture-overview.md#partitioned-topics)). This logic is applied only when no key is set on messages. The available options are: round robin (`pulsar.RoundRobinDistribution`, the default), publishing all messages to a single partition (`pulsar.UseSinglePartition`), or a custom partitioning scheme (`pulsar.CustomPartition`). | `pulsar.RoundRobinDistribution`
-`HashingScheme` | The hashing function that determines the partition on which a particular message is published (partitioned topics only). The available options are: `pulsar.JavaStringHash` (the equivalent of `String.hashCode()` in Java), `pulsar.Murmur3_32Hash` (applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function), or `pulsar.BoostHash` (applies the hashing function from C++'s [Boost](https://www.boost.org/doc/libs/1_62_0/doc/html/hash.html) library) | `pulsar.JavaStringHash`
-`CompressionType` | The message data compression type used by the producer. The available options are [`LZ4`](https://github.com/lz4/lz4), [`ZLIB`](https://zlib.net/), [`ZSTD`](https://facebook.github.io/zstd/) and [`SNAPPY`](https://google.github.io/snappy/). | No compression
-`MessageRouter` | By default, Pulsar uses a round-robin routing scheme for [partitioned topics](cookbooks-partitioned.md). The `MessageRouter` parameter enables you to specify custom routing logic via a function that takes the Pulsar message and topic metadata as an argument and returns an integer (the index of the partition to publish to), i.e. a function signature of `func(Message, TopicMetadata) int`. |
-`Batching` | Control whether automatic batching of messages is enabled for the producer. | false
-`BatchingMaxPublishDelay` | Set the time period within which the messages sent will be batched (default: 1ms) if batch messages are enabled. If set to a non-zero value, messages will be queued until this time interval elapses or until the batch is full (see `BatchingMaxMessages`). | 1ms
-`BatchingMaxMessages` | Set the maximum number of messages permitted in a batch. (default: 1000) If set to a value greater than 1, messages will be queued until this threshold is reached or the batch interval has elapsed | 1000
-
-## Consumers
-
-Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on that topic/those topics. You can [configure](#consumer-configuration) Go consumers using a `ConsumerOptions` object. Here's a basic example that uses channels:
-
-```go
-
-msgChannel := make(chan pulsar.ConsumerMessage)
-
-consumerOpts := pulsar.ConsumerOptions{
-    Topic:            "my-topic",
-    SubscriptionName: "my-subscription-1",
-    Type:             pulsar.Exclusive,
-    MessageChannel:   msgChannel,
-}
-
-consumer, err := client.Subscribe(consumerOpts)
-
-if err != nil {
-    log.Fatalf("Could not establish subscription: %v", err)
-}
-
-defer consumer.Close()
-
-for cm := range msgChannel {
-    msg := cm.Message
-
-    fmt.Printf("Message ID: %s", msg.ID())
-    fmt.Printf("Message value: %s", string(msg.Payload()))
-
-    consumer.Ack(msg)
-}
-
-```
-
-> **Blocking operation**
-> When you create a new Pulsar consumer, the operation will block (on a go channel) until either a consumer is successfully created or an error is thrown.
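-
-Because the channel-based consumer above delivers `ConsumerMessage` values on a plain Go channel, it composes with standard Go concurrency patterns. The following is a minimal sketch rather than part of the official examples: the `drainMessages` helper is hypothetical, and it assumes the `consumer` and `msgChannel` created above plus the usual `context`, `fmt`, and pulsar imports:
-
-```go
-
-// drainMessages is an illustrative helper (not part of the client API). It
-// consumes from the message channel until ctx is cancelled or the channel is
-// closed, acknowledging each message as in the example above.
-func drainMessages(ctx context.Context, consumer pulsar.Consumer, msgChannel <-chan pulsar.ConsumerMessage) {
-    for {
-        select {
-        case cm, ok := <-msgChannel:
-            if !ok {
-                return // channel closed, e.g. because the consumer was closed
-            }
-            msg := cm.Message
-            fmt.Printf("Message value: %s", string(msg.Payload()))
-            consumer.Ack(msg)
-        case <-ctx.Done():
-            return // application shutdown requested
-        }
-    }
-}
-
-```
-
-Selecting on `ctx.Done()` means a shutdown does not depend on the client ever closing the message channel.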
- 
-
-### Consumer operations
-
-Pulsar Go consumers have the following methods available:
-
-Method | Description | Return type
-:------|:------------|:-----------
-`Topic()` | Returns the consumer's [topic](reference-terminology.md#topic) | `string`
-`Subscription()` | Returns the consumer's subscription name | `string`
-`Unsubscribe()` | Unsubscribes the consumer from the assigned topic. Throws an error if the unsubscribe operation is somehow unsuccessful. | `error`
-`Receive(context.Context)` | Receives a single message from the topic. This method blocks until a message is available. | `(Message, error)`
-`Ack(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) | `error`
-`AckID(MessageID)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID | `error`
-`AckCumulative(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message. The `AckCumulative` method will block until the ack has been sent to the broker. After that, the messages will *not* be redelivered to the consumer. Cumulative acking cannot be used with a [shared](concepts-messaging.md#shared) subscription type. | `error`
-`AckCumulativeID(MessageID)` |Ack the reception of all the messages in the stream up to (and including) the provided message. This method will block until the acknowledge has been sent to the broker. After that, the messages will not be re-delivered to this consumer. | error
-`Nack(Message)` | Acknowledge the failure to process a single message. | `error`
-`NackID(MessageID)` | Acknowledge the failure to process a single message by message ID. | `error`
-`Close()` | Closes the consumer, disabling its ability to receive messages from the broker | `error`
-`RedeliverUnackedMessages()` | Redelivers *all* unacknowledged messages on the topic. In [failover](concepts-messaging.md#failover) mode, this request is ignored if the consumer isn't active on the specified topic; in [shared](concepts-messaging.md#shared) mode, redelivered messages are distributed across all consumers connected to the topic. **Note**: this is a *non-blocking* operation that doesn't throw an error. |
-`Seek(msgID MessageID)` | Reset the subscription associated with this consumer to a specific message id. The message id can either be a specific message or represent the first or last messages in the topic. | error
-
-#### Receive example
-
-Here's an example usage of a Go consumer that uses the `Receive()` method to process incoming messages:
-
-```go
-
-import (
-    "context"
-    "log"
-
-    "github.com/apache/pulsar/pulsar-client-go/pulsar"
-)
-
-func main() {
-    // Instantiate a Pulsar client
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-        URL: "pulsar://localhost:6650",
-    })
-
-    if err != nil { log.Fatal(err) }
-
-    // Use the client object to instantiate a consumer
-    consumer, err := client.Subscribe(pulsar.ConsumerOptions{
-        Topic:            "my-golang-topic",
-        SubscriptionName: "sub-1",
-        Type:             pulsar.Exclusive,
-    })
-
-    if err != nil { log.Fatal(err) }
-
-    defer consumer.Close()
-
-    ctx := context.Background()
-
-    // Listen indefinitely on the topic
-    for {
-        msg, err := consumer.Receive(ctx)
-        if err != nil { log.Fatal(err) }
-
-        // Do something with the message
-        err = processMessage(msg)
-
-        if err == nil {
-            // Message processed successfully
-            consumer.Ack(msg)
-        } else {
-            // Failed to process messages
-            consumer.Nack(msg)
-        }
-    }
-}
-
-```
-
-### Consumer configuration
-
-Parameter | Description | Default
-:---------|:------------|:-------
-`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the consumer will establish a subscription and listen for messages |
-`Topics` | Specify a list of topics this consumer will subscribe on. Either a topic, a list of topics, or a topics pattern is required when subscribing |
-`TopicsPattern` | Specify a regular expression to subscribe to multiple topics under the same namespace. Either a topic, a list of topics, or a topics pattern is required when subscribing |
-`SubscriptionName` | The subscription name for this consumer |
-`Properties` | Attach a set of application-defined properties to the consumer. These properties will be visible in the topic stats|
-`Name` | The name of the consumer |
-`AckTimeout` | Set the timeout for unacked messages | 0
-`NackRedeliveryDelay` | The delay after which to redeliver the messages that failed to be processed. Default is 1min. (See `Consumer.Nack()`) | 1 minute
-`Type` | Available options are `Exclusive`, `Shared`, and `Failover` | `Exclusive`
-`SubscriptionInitPos` | InitialPosition at which the cursor will be set when subscribing | Latest
-`MessageChannel` | The Go channel used by the consumer. Messages that arrive from the Pulsar topic(s) will be passed to this channel. |
-`ReceiverQueueSize` | Sets the size of the consumer's receiver queue, i.e. the number of messages that can be accumulated by the consumer before the application calls `Receive`. A value higher than the default of 1000 could increase consumer throughput, though at the expense of more memory utilization. | 1000
-`MaxTotalReceiverQueueSizeAcrossPartitions` |Set the max total receiver queue size across partitions. This setting will be used to reduce the receiver queue size for individual partitions if the total exceeds this value | 50000
-`ReadCompacted` | If enabled, the consumer will read messages from the compacted topic rather than reading the full message backlog of the topic. This means that, if the topic has been compacted, the consumer will only see the latest value for each key in the topic, up until the point in the topic message backlog that has been compacted. Beyond that point, the messages will be sent as normal. |
-
-## Readers
-
-Pulsar readers process messages from Pulsar topics.
Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recent unacked message). You can [configure](#reader-configuration) Go readers using a `ReaderOptions` object. Here's an example:
-
-```go
-
-reader, err := client.CreateReader(pulsar.ReaderOptions{
-    Topic:          "my-golang-topic",
-    StartMessageID: pulsar.LatestMessage,
-})
-
-```
-
-> **Blocking operation**
-> When you create a new Pulsar reader, the operation will block (on a go channel) until either a reader is successfully created or an error is thrown.
-
-
-### Reader operations
-
-Pulsar Go readers have the following methods available:
-
-Method | Description | Return type
-:------|:------------|:-----------
-`Topic()` | Returns the reader's [topic](reference-terminology.md#topic) | `string`
-`Next(context.Context)` | Receives the next message on the topic (analogous to the `Receive` method for [consumers](#consumer-operations)). This method blocks until a message is available. | `(Message, error)`
-`HasNext()` | Check if there is any message available to read from the current position| (bool, error)
-`Close()` | Closes the reader, disabling its ability to receive messages from the broker | `error`
-
-#### "Next" example
-
-Here's an example usage of a Go reader that uses the `Next()` method to process incoming messages:
-
-```go
-
-import (
-    "context"
-    "log"
-
-    "github.com/apache/pulsar/pulsar-client-go/pulsar"
-)
-
-func main() {
-    // Instantiate a Pulsar client
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-        URL: "pulsar://localhost:6650",
-    })
-
-    if err != nil { log.Fatalf("Could not create client: %v", err) }
-
-    // Use the client to instantiate a reader
-    reader, err := client.CreateReader(pulsar.ReaderOptions{
-        Topic:          "my-golang-topic",
-        StartMessageID: pulsar.EarliestMessage,
-    })
-
-    if err != nil { log.Fatalf("Could not create reader: %v", err) }
-
-    defer reader.Close()
-
-    ctx := context.Background()
-
-    // Listen on the topic for incoming messages
-    for {
-        msg, err := reader.Next(ctx)
-        if err != nil { log.Fatalf("Error reading from topic: %v", err) }
-
-        // Process the message
-    }
-}
-
-```
-
-In the example above, the reader begins reading from the earliest available message (specified by `pulsar.EarliestMessage`). The reader can also begin reading from the latest message (`pulsar.LatestMessage`) or some other message ID specified by bytes using the `DeserializeMessageID` function, which takes a byte array and returns a `MessageID` object. Here's an example:
-
-```go
-
-var lastSavedId []byte // Read the last saved message ID from an external store
-
-reader, err := client.CreateReader(pulsar.ReaderOptions{
-    Topic:          "my-golang-topic",
-    StartMessageID: pulsar.DeserializeMessageID(lastSavedId),
-})
-
-```
-
-### Reader configuration
-
-Parameter | Description | Default
-:---------|:------------|:-------
-`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the reader will establish a subscription and listen for messages |
-`Name` | The name of the reader |
-`StartMessageID` | The initial reader position, i.e. the message at which the reader begins processing messages. The options are `pulsar.EarliestMessage` (the earliest available message on the topic), `pulsar.LatestMessage` (the latest available message on the topic), or a `MessageID` object for a position that isn't earliest or latest. |
-`MessageChannel` | The Go channel used by the reader. Messages that arrive from the Pulsar topic(s) will be passed to this channel. |
-`ReceiverQueueSize` | Sets the size of the reader's receiver queue, i.e. the number of messages that can be accumulated by the reader before the application calls `Next`. A value higher than the default of 1000 could increase reader throughput, though at the expense of more memory utilization. | 1000
-`SubscriptionRolePrefix` | The subscription role prefix. | `reader`
-`ReadCompacted` | If enabled, the reader will read messages from the compacted topic rather than reading the full message backlog of the topic. This means that, if the topic has been compacted, the reader will only see the latest value for each key in the topic, up until the point in the topic message backlog that has been compacted. Beyond that point, the messages will be sent as normal.|
-
-## Messages
-
-The Pulsar Go client provides a `ProducerMessage` interface that you can use to construct messages to produce on Pulsar topics. Here's an example message:
-
-```go
-
-msg := pulsar.ProducerMessage{
-    Payload: []byte("Here is some message data"),
-    Key: "message-key",
-    Properties: map[string]string{
-        "foo": "bar",
-    },
-    EventTime: time.Now(),
-    ReplicationClusters: []string{"cluster1", "cluster3"},
-}
-
-if err := producer.Send(context.Background(), msg); err != nil {
-    log.Fatalf("Could not publish message due to: %v", err)
-}
-
-```
-
-The following parameters are available for `ProducerMessage` objects:
-
-Parameter | Description
-:---------|:-----------
-`Payload` | The actual data payload of the message
-`Value` | Value and payload are mutually exclusive; use `Value interface{}` for schema-based messages.
-`Key` | The optional key associated with the message (particularly useful for things like topic compaction)
-`Properties` | A key-value map (both keys and values must be strings) for any application-specific metadata attached to the message
-`EventTime` | The timestamp associated with the message
-`ReplicationClusters` | The clusters to which this message will be replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default.
-`SequenceID` | Set the sequence id to assign to the current message
-
-## TLS encryption and authentication
-
-In order to use [TLS encryption](security-tls-transport.md), you'll need to configure your client to do so:
-
- * Use `pulsar+ssl` URL type
- * Set `TLSTrustCertsFilePath` to the path to the TLS certs used by your client and the Pulsar broker
- * Configure `Authentication` option
-
-Here's an example:
-
-```go
-
-opts := pulsar.ClientOptions{
-    URL: "pulsar+ssl://my-cluster.com:6651",
-    TLSTrustCertsFilePath: "/path/to/certs/my-cert.csr",
-    Authentication: pulsar.NewAuthenticationTLS("my-cert.pem", "my-key.pem"),
-}
-
-```
-
-## Schema
-
-This example shows how to create a producer and consumer with schema.
- -```go - -var exampleSchemaDef = "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," + - "\"fields\":[{\"name\":\"ID\",\"type\":\"int\"},{\"name\":\"Name\",\"type\":\"string\"}]}" -jsonSchema := NewJsonSchema(exampleSchemaDef, nil) -// create producer -producer, err := client.CreateProducerWithSchema(ProducerOptions{ - Topic: "jsonTopic", -}, jsonSchema) -err = producer.Send(context.Background(), ProducerMessage{ - Value: &testJson{ - ID: 100, - Name: "pulsar", - }, -}) -if err != nil { - log.Fatal(err) -} -defer producer.Close() -//create consumer -var s testJson -consumerJS := NewJsonSchema(exampleSchemaDef, nil) -consumer, err := client.SubscribeWithSchema(ConsumerOptions{ - Topic: "jsonTopic", - SubscriptionName: "sub-2", -}, consumerJS) -if err != nil { - log.Fatal(err) -} -msg, err := consumer.Receive(context.Background()) -if err != nil { - log.Fatal(err) -} -err = msg.GetValue(&s) -if err != nil { - log.Fatal(err) -} -fmt.Println(s.ID) // output: 100 -fmt.Println(s.Name) // output: pulsar -defer consumer.Close() - -``` - diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/client-libraries-cpp.md b/site2/website/versioned_docs/version-2.8.3-deprecated/client-libraries-cpp.md deleted file mode 100644 index 08622b9e830714..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/client-libraries-cpp.md +++ /dev/null @@ -1,408 +0,0 @@ ---- -id: client-libraries-cpp -title: Pulsar C++ client -sidebar_label: "C++" -original_id: client-libraries-cpp ---- - -You can use Pulsar C++ client to create Pulsar producers and consumers in C++. - -All the methods in producer, consumer, and reader of a C++ client are thread-safe. - -## Supported platforms - -Pulsar C++ client is supported on **Linux** and **MacOS** platforms. - -[Doxygen](http://www.doxygen.nl/)-generated API docs for the C++ client are available [here](/api/cpp). - -## System requirements - -You need to install the following components before using the C++ client: - -* [CMake](https://cmake.org/) -* [Boost](http://www.boost.org/) -* [Protocol Buffers](https://developers.google.com/protocol-buffers/) 2.6 -* [libcurl](https://curl.haxx.se/libcurl/) -* [Google Test](https://github.com/google/googletest) - -## Linux - -### Compilation - -1. Clone the Pulsar repository. - -```shell - -$ git clone https://github.com/apache/pulsar - -``` - -2. Install all necessary dependencies. - -```shell - -$ apt-get install cmake libssl-dev libcurl4-openssl-dev liblog4cxx-dev \ - libprotobuf-dev protobuf-compiler libboost-all-dev google-mock libgtest-dev libjsoncpp-dev - -``` - -3. Compile and install [Google Test](https://github.com/google/googletest). - -```shell - -# libgtest-dev version is 1.18.0 or above -$ cd /usr/src/googletest -$ sudo cmake . -$ sudo make -$ sudo cp ./googlemock/libgmock.a ./googlemock/gtest/libgtest.a /usr/lib/ - -# less than 1.18.0 -$ cd /usr/src/gtest -$ sudo cmake . -$ sudo make -$ sudo cp libgtest.a /usr/lib - -$ cd /usr/src/gmock -$ sudo cmake . -$ sudo make -$ sudo cp libgmock.a /usr/lib - -``` - -4. Compile the Pulsar client library for C++ inside the Pulsar repository. - -```shell - -$ cd pulsar-client-cpp -$ cmake . -$ make - -``` - -After you install the components successfully, the files `libpulsar.so` and `libpulsar.a` are in the `lib` folder of the repository. The tools `perfProducer` and `perfConsumer` are in the `perf` directory. - -### Install Dependencies - -> Since 2.1.0 release, Pulsar ships pre-built RPM and Debian packages. 
You can download and install those packages directly. - -After you download and install RPM or DEB, the `libpulsar.so`, `libpulsarnossl.so`, `libpulsar.a`, and `libpulsarwithdeps.a` libraries are in your `/usr/lib` directory. - -By default, they are built in code path `${PULSAR_HOME}/pulsar-client-cpp`. You can build with the command below. - - `cmake . -DBUILD_TESTS=OFF -DLINK_STATIC=ON && make pulsarShared pulsarSharedNossl pulsarStatic pulsarStaticWithDeps -j 3`. - -These libraries rely on some other libraries. If you want to get detailed version of dependencies, see [RPM](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/pkg/rpm/Dockerfile) or [DEB](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/pkg/deb/Dockerfile) files. - -1. `libpulsar.so` is a shared library, containing statically linked `boost` and `openssl`. It also dynamically links all other necessary libraries. You can use this Pulsar library with the command below. - -```bash - - g++ --std=c++11 PulsarTest.cpp -o test /usr/lib/libpulsar.so -I/usr/local/ssl/include - -``` - -2. `libpulsarnossl.so` is a shared library, similar to `libpulsar.so` except that the libraries `openssl` and `crypto` are dynamically linked. You can use this Pulsar library with the command below. - -```bash - - g++ --std=c++11 PulsarTest.cpp -o test /usr/lib/libpulsarnossl.so -lssl -lcrypto -I/usr/local/ssl/include -L/usr/local/ssl/lib - -``` - -3. `libpulsar.a` is a static library. You need to load dependencies before using this library. You can use this Pulsar library with the command below. - -```bash - - g++ --std=c++11 PulsarTest.cpp -o test /usr/lib/libpulsar.a -lssl -lcrypto -ldl -lpthread -I/usr/local/ssl/include -L/usr/local/ssl/lib -lboost_system -lboost_regex -lcurl -lprotobuf -lzstd -lz - -``` - -4. `libpulsarwithdeps.a` is a static library, based on `libpulsar.a`. It is archived in the dependencies of `libboost_regex`, `libboost_system`, `libcurl`, `libprotobuf`, `libzstd` and `libz`. You can use this Pulsar library with the command below. - -```bash - - g++ --std=c++11 PulsarTest.cpp -o test /usr/lib/libpulsarwithdeps.a -lssl -lcrypto -ldl -lpthread -I/usr/local/ssl/include -L/usr/local/ssl/lib - -``` - -The `libpulsarwithdeps.a` does not include library openssl related libraries `libssl` and `libcrypto`, because these two libraries are related to security. It is more reasonable and easier to use the versions provided by the local system to handle security issues and upgrade libraries. - -### Install RPM - -1. Download a RPM package from the links in the table. - -| Link | Crypto files | -|------|--------------| -| [client](@pulsar:dist_rpm:client@) | [asc](@pulsar:dist_rpm:client@.asc), [sha512](@pulsar:dist_rpm:client@.sha512) | -| [client-debuginfo](@pulsar:dist_rpm:client-debuginfo@) | [asc](@pulsar:dist_rpm:client-debuginfo@.asc), [sha512](@pulsar:dist_rpm:client-debuginfo@.sha512) | -| [client-devel](@pulsar:dist_rpm:client-devel@) | [asc](@pulsar:dist_rpm:client-devel@.asc), [sha512](@pulsar:dist_rpm:client-devel@.sha512) | - -2. Install the package using the following command. - -```bash - -$ rpm -ivh apache-pulsar-client*.rpm - -``` - -After you install RPM successfully, Pulsar libraries are in the `/usr/lib` directory. - -### Install Debian - -1. Download a Debian package from the links in the table. 
- -| Link | Crypto files | -|------|--------------| -| [client](@pulsar:deb:client@) | [asc](@pulsar:dist_deb:client@.asc), [sha512](@pulsar:dist_deb:client@.sha512) | -| [client-devel](@pulsar:deb:client-devel@) | [asc](@pulsar:dist_deb:client-devel@.asc), [sha512](@pulsar:dist_deb:client-devel@.sha512) | - -2. Install the package using the following command. - -```bash - -$ apt install ./apache-pulsar-client*.deb - -``` - -After you install DEB successfully, Pulsar libraries are in the `/usr/lib` directory. - -### Build - -> If you want to build RPM and Debian packages from the latest master, follow the instructions below. You should run all the instructions at the root directory of your cloned Pulsar repository. - -There are recipes that build RPM and Debian packages containing a -statically linked `libpulsar.so` / `libpulsarnossl.so` / `libpulsar.a` / `libpulsarwithdeps.a` with all required dependencies. - -To build the C++ library packages, you need to build the Java packages first. - -```shell - -mvn install -DskipTests - -``` - -#### RPM - -To build the RPM inside a Docker container, use the command below. The RPMs are in the `pulsar-client-cpp/pkg/rpm/RPMS/x86_64/` path. - -```shell - -pulsar-client-cpp/pkg/rpm/docker-build-rpm.sh - -``` - -| Package name | Content | -|-----|-----| -| pulsar-client | Shared library `libpulsar.so` and `libpulsarnossl.so` | -| pulsar-client-devel | Static library `libpulsar.a`, `libpulsarwithdeps.a`and C++ and C headers | -| pulsar-client-debuginfo | Debug symbols for `libpulsar.so` | - -#### Debian - -To build Debian packages, enter the following command. - -```shell - -pulsar-client-cpp/pkg/deb/docker-build-deb.sh - -``` - -Debian packages are created in the `pulsar-client-cpp/pkg/deb/BUILD/DEB/` path. - -| Package name | Content | -|-----|-----| -| pulsar-client | Shared library `libpulsar.so` and `libpulsarnossl.so` | -| pulsar-client-dev | Static library `libpulsar.a`, `libpulsarwithdeps.a` and C++ and C headers | - -## MacOS - -### Compilation - -1. Clone the Pulsar repository. - -```shell - -$ git clone https://github.com/apache/pulsar - -``` - -2. Install all necessary dependencies. - -```shell - -# OpenSSL installation -$ brew install openssl -$ export OPENSSL_INCLUDE_DIR=/usr/local/opt/openssl/include/ -$ export OPENSSL_ROOT_DIR=/usr/local/opt/openssl/ - -# Protocol Buffers installation -$ brew tap homebrew/versions -$ brew install protobuf260 -$ brew install boost -$ brew install log4cxx - -# Google Test installation -$ git clone https://github.com/google/googletest.git -$ cd googletest -$ git checkout release-1.12.1 -$ cmake . -$ make install - -``` - -3. Compile the Pulsar client library in the repository that you cloned. - -```shell - -$ cd pulsar-client-cpp -$ cmake . -$ make - -``` - -### Install `libpulsar` - -Pulsar releases are available in the [Homebrew](https://brew.sh/) core repository. You can install the C++ client library with the following command. The package is installed with the library and headers. - -```shell - -brew install libpulsar - -``` - -## Connection URLs - -To connect Pulsar using client libraries, you need to specify a Pulsar protocol URL. - -Pulsar protocol URLs are assigned to specific clusters, you can use the Pulsar URI scheme. The default port is `6650`. The following is an example for localhost. - -```http - -pulsar://localhost:6650 - -``` - -In a Pulsar cluster in production, the URL looks as follows. 
- -```http - -pulsar://pulsar.us-west.example.com:6650 - -``` - -If you use TLS authentication, you need to add `ssl`, and the default port is `6651`. The following is an example. - -```http - -pulsar+ssl://pulsar.us-west.example.com:6651 - -``` - -## Create a consumer - -To use Pulsar as a consumer, you need to create a consumer on the C++ client. The following is an example. - -```c++ - -Client client("pulsar://localhost:6650"); - -Consumer consumer; -Result result = client.subscribe("my-topic", "my-subscription-name", consumer); -if (result != ResultOk) { - LOG_ERROR("Failed to subscribe: " << result); - return -1; -} - -Message msg; - -while (true) { - consumer.receive(msg); - LOG_INFO("Received: " << msg - << " with payload '" << msg.getDataAsString() << "'"); - - consumer.acknowledge(msg); -} - -client.close(); - -``` - -## Create a producer - -To use Pulsar as a producer, you need to create a producer on the C++ client. The following is an example. - -```c++ - -Client client("pulsar://localhost:6650"); - -Producer producer; -Result result = client.createProducer("my-topic", producer); -if (result != ResultOk) { - LOG_ERROR("Error creating producer: " << result); - return -1; -} - -// Publish 10 messages to the topic -for (int i = 0; i < 10; i++){ - Message msg = MessageBuilder().setContent("my-message").build(); - Result res = producer.send(msg); - LOG_INFO("Message sent: " << res); -} -client.close(); - -``` - -## Enable authentication in connection URLs -If you use TLS authentication when connecting to Pulsar, you need to add `ssl` in the connection URLs, and the default port is `6651`. The following is an example. - -```cpp - -ClientConfiguration config = ClientConfiguration(); -config.setUseTls(true); -config.setTlsTrustCertsFilePath("/path/to/cacert.pem"); -config.setTlsAllowInsecureConnection(false); -config.setAuth(pulsar::AuthTls::create( - "/path/to/client-cert.pem", "/path/to/client-key.pem");); - -Client client("pulsar+ssl://my-broker.com:6651", config); - -``` - -For complete examples, refer to [C++ client examples](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp/examples). - -## Schema - -This section describes some examples about schema. For more information about schema, see [Pulsar schema](schema-get-started.md). - -### Create producer with Avro schema - -The following example shows how to create a producer with an Avro schema. - -```cpp - -static const std::string exampleSchema = - "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," - "\"fields\":[{\"name\":\"a\",\"type\":\"int\"},{\"name\":\"b\",\"type\":\"int\"}]}"; -Producer producer; -ProducerConfiguration producerConf; -producerConf.setSchema(SchemaInfo(AVRO, "Avro", exampleSchema)); -client.createProducer("topic-avro", producerConf, producer); - -``` - -### Create consumer with Avro schema - -The following example shows how to create a consumer with an Avro schema. 
- -```cpp - -static const std::string exampleSchema = - "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," - "\"fields\":[{\"name\":\"a\",\"type\":\"int\"},{\"name\":\"b\",\"type\":\"int\"}]}"; -ConsumerConfiguration consumerConf; -Consumer consumer; -consumerConf.setSchema(SchemaInfo(AVRO, "Avro", exampleSchema)); -client.subscribe("topic-avro", "sub-2", consumerConf, consumer) - -``` - diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/client-libraries-dotnet.md b/site2/website/versioned_docs/version-2.8.3-deprecated/client-libraries-dotnet.md deleted file mode 100644 index fbec5e473be69c..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/client-libraries-dotnet.md +++ /dev/null @@ -1,434 +0,0 @@ ---- -id: client-libraries-dotnet -title: Pulsar C# client -sidebar_label: "C#" -original_id: client-libraries-dotnet ---- - -You can use the Pulsar C# client (DotPulsar) to create Pulsar producers and consumers in C#. All the methods in the producer, consumer, and reader of a C# client are thread-safe. The official documentation for DotPulsar is available [here](https://github.com/apache/pulsar-dotpulsar/wiki). - -## Installation - -You can install the Pulsar C# client library either through the dotnet CLI or through the Visual Studio. This section describes how to install the Pulsar C# client library through the dotnet CLI. For information about how to install the Pulsar C# client library through the Visual Studio, see [here](https://docs.microsoft.com/en-us/visualstudio/mac/nuget-walkthrough?view=vsmac-2019). - -### Prerequisites - -Install the [.NET Core SDK](https://dotnet.microsoft.com/download/), which provides the dotnet command-line tool. Starting in Visual Studio 2017, the dotnet CLI is automatically installed with any .NET Core related workloads. - -### Procedures - -To install the Pulsar C# client library, following these steps: - -1. Create a project. - - 1. Create a folder for the project. - - 2. Open a terminal window and switch to the new folder. - - 3. Create the project using the following command. - - ``` - - dotnet new console - - ``` - - 4. Use `dotnet run` to test that the app has been created properly. - -2. Add the DotPulsar NuGet package. - - 1. Use the following command to install the `DotPulsar` package. - - ``` - - dotnet add package DotPulsar - - ``` - - 2. After the command completes, open the `.csproj` file to see the added reference. - - ```xml - - - - - - ``` - -## Client - -This section describes some configuration examples for the Pulsar C# client. - -### Create client - -This example shows how to create a Pulsar C# client connected to localhost. - -```c# - -var client = PulsarClient.Builder().Build(); - -``` - -To create a Pulsar C# client by using the builder, you can specify the following options. - -| Option | Description | Default | -| ---- | ---- | ---- | -| ServiceUrl | Set the service URL for the Pulsar cluster. | pulsar://localhost:6650 | -| RetryInterval | Set the time to wait before retrying an operation or a reconnection. | 3s | - -### Create producer - -This section describes how to create a producer. - -- Create a producer by using the builder. - - ```c# - - var producer = client.NewProducer() - .Topic("persistent://public/default/mytopic") - .Create(); - - ``` - -- Create a producer without using the builder. 
- 
-
-  ```c#
-
-  var options = new ProducerOptions("persistent://public/default/mytopic");
-  var producer = client.CreateProducer(options);
-
-  ```
-
-### Create consumer
-
-This section describes how to create a consumer.
-
-- Create a consumer by using the builder.
-
-  ```c#
-
-  var consumer = client.NewConsumer()
-      .SubscriptionName("MySubscription")
-      .Topic("persistent://public/default/mytopic")
-      .Create();
-
-  ```
-
-- Create a consumer without using the builder.
-
-  ```c#
-
-  var options = new ConsumerOptions("MySubscription", "persistent://public/default/mytopic");
-  var consumer = client.CreateConsumer(options);
-
-  ```
-
-### Create reader
-
-This section describes how to create a reader.
-
-- Create a reader by using the builder.
-
-  ```c#
-
-  var reader = client.NewReader()
-      .StartMessageId(MessageId.Earliest)
-      .Topic("persistent://public/default/mytopic")
-      .Create();
-
-  ```
-
-- Create a reader without using the builder.
-
-  ```c#
-
-  var options = new ReaderOptions(MessageId.Earliest, "persistent://public/default/mytopic");
-  var reader = client.CreateReader(options);
-
-  ```
-
-### Configure encryption policies
-
-The Pulsar C# client supports four kinds of encryption policies:
-
-- `EnforceUnencrypted`: always use unencrypted connections.
-- `EnforceEncrypted`: always use encrypted connections.
-- `PreferUnencrypted`: use unencrypted connections, if possible.
-- `PreferEncrypted`: use encrypted connections, if possible.
-
-This example shows how to set the `EnforceEncrypted` encryption policy.
-
-```c#
-
-var client = PulsarClient.Builder()
-    .ConnectionSecurity(EncryptionPolicy.EnforceEncrypted)
-    .Build();
-
-```
-
-### Configure authentication
-
-Currently, the Pulsar C# client supports TLS (Transport Layer Security) and JWT (JSON Web Token) authentication.
-
-If you have followed [Authentication using TLS](security-tls-authentication.md), you get a certificate and a key. To use them from the Pulsar C# client, follow these steps:
-
-1. Create an unencrypted and password-less pfx file.
-
-   ```bash
-
-   openssl pkcs12 -export -keypbe NONE -certpbe NONE -out admin.pfx -inkey admin.key.pem -in admin.cert.pem -passout pass:
-
-   ```
-
-2. Use the admin.pfx file to create an X509Certificate2 and pass it to the Pulsar C# client.
-
-   ```c#
-
-   var clientCertificate = new X509Certificate2("admin.pfx");
-   var client = PulsarClient.Builder()
-       .AuthenticateUsingClientCertificate(clientCertificate)
-       .Build();
-
-   ```
-
-## Producer
-
-A producer is a process that attaches to a topic and publishes messages to a Pulsar broker for processing. This section describes some configuration examples for the producer.
-
-### Send data
-
-This example shows how to send data.
-
-```c#
-
-var data = Encoding.UTF8.GetBytes("Hello World");
-await producer.Send(data);
-
-```
-
-### Send messages with customized metadata
-
-- Send messages with customized metadata by using the builder.
-
-  ```c#
-
-  var data = Encoding.UTF8.GetBytes("Hello World");
-  var messageId = await producer.NewMessage()
-      .Property("SomeKey", "SomeValue")
-      .Send(data);
-
-  ```
-
-- Send messages with customized metadata without using the builder.
-
-  ```c#
-
-  var data = Encoding.UTF8.GetBytes("Hello World");
-  var metadata = new MessageMetadata();
-  metadata["SomeKey"] = "SomeValue";
-  var messageId = await producer.Send(metadata, data);
-
-  ```
-
-## Consumer
-
-A consumer is a process that attaches to a topic through a subscription and then receives messages. This section describes some configuration examples for the consumer.
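-
-Before the individual operations, note that consumption can be made cancellation-aware. The sketch below is illustrative only: it assumes the `Messages()` enumeration accepts an optional `CancellationToken` (check the DotPulsar version you are using), and the `cts` token source is not part of the original examples.
-
-```c#
-
-// Sketch: stop consuming when cancellation is requested (for example, on shutdown).
-using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(30));
-
-try
-{
-    await foreach (var message in consumer.Messages(cts.Token))
-    {
-        Console.WriteLine("Received: " + Encoding.UTF8.GetString(message.Data.ToArray()));
-        await consumer.Acknowledge(message);
-    }
-}
-catch (OperationCanceledException)
-{
-    // The token was cancelled; stop receiving and continue with cleanup.
-}
-
-```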
-
-### Receive messages
-
-This example shows how a consumer receives messages from a topic.
-
-```c#
-
-await foreach (var message in consumer.Messages())
-{
-    Console.WriteLine("Received: " + Encoding.UTF8.GetString(message.Data.ToArray()));
-}
-
-```
-
-### Acknowledge messages
-
-Messages can be acknowledged individually or cumulatively. For details about message acknowledgement, see [acknowledgement](concepts-messaging.md#acknowledgement).
-
-- Acknowledge messages individually.
-
-  ```c#
-
-  await foreach (var message in consumer.Messages())
-  {
-      Console.WriteLine("Received: " + Encoding.UTF8.GetString(message.Data.ToArray()));
-      await consumer.Acknowledge(message);
-  }
-
-  ```
-
-- Acknowledge messages cumulatively.
-
-  ```c#
-
-  await consumer.AcknowledgeCumulative(message);
-
-  ```
-
-### Unsubscribe from topics
-
-This example shows how a consumer unsubscribes from a topic.
-
-```c#
-
-await consumer.Unsubscribe();
-
-```
-
-#### Note
-
-> A consumer cannot be used and is disposed once the consumer unsubscribes from a topic.
-
-## Reader
-
-A reader is actually just a consumer without a cursor. This means that Pulsar does not keep track of your progress and there is no need to acknowledge messages.
-
-This example shows how a reader receives messages.
-
-```c#
-
-await foreach (var message in reader.Messages())
-{
-    Console.WriteLine("Received: " + Encoding.UTF8.GetString(message.Data.ToArray()));
-}
-
-```
-
-## Monitoring
-
-This section describes how to monitor the producer, consumer, and reader state.
-
-### Monitor producer
-
-The following table lists states available for the producer.
-
-| State | Description |
-| ---- | ----|
-| Closed | The producer or the Pulsar client has been disposed. |
-| Connected | All is well. |
-| Disconnected | The connection is lost and attempts are being made to reconnect. |
-| Faulted | An unrecoverable error has occurred. |
-
-This example shows how to monitor the producer state.
-
-```c#
-
-private static async ValueTask Monitor(IProducer producer, CancellationToken cancellationToken)
-{
-    var state = ProducerState.Disconnected;
-
-    while (!cancellationToken.IsCancellationRequested)
-    {
-        state = await producer.StateChangedFrom(state, cancellationToken);
-
-        var stateMessage = state switch
-        {
-            ProducerState.Connected => $"The producer is connected",
-            ProducerState.Disconnected => $"The producer is disconnected",
-            ProducerState.Closed => $"The producer has closed",
-            ProducerState.Faulted => $"The producer has faulted",
-            _ => $"The producer has an unknown state '{state}'"
-        };
-
-        Console.WriteLine(stateMessage);
-
-        if (producer.IsFinalState(state))
-            return;
-    }
-}
-
-```
-
-### Monitor consumer state
-
-The following table lists states available for the consumer.
-
-| State | Description |
-| ---- | ----|
-| Active | All is well. |
-| Inactive | All is well. The subscription type is `Failover` and you are not the active consumer. |
-| Closed | The consumer or the Pulsar client has been disposed. |
-| Disconnected | The connection is lost and attempts are being made to reconnect. |
-| Faulted | An unrecoverable error has occurred. |
-| ReachedEndOfTopic | No more messages are delivered. |
-
-This example shows how to monitor the consumer state.
- -```c# - -private static async ValueTask Monitor(IConsumer consumer, CancellationToken cancellationToken) -{ - var state = ConsumerState.Disconnected; - - while (!cancellationToken.IsCancellationRequested) - { - state = await consumer.StateChangedFrom(state, cancellationToken); - - var stateMessage = state switch - { - ConsumerState.Active => "The consumer is active", - ConsumerState.Inactive => "The consumer is inactive", - ConsumerState.Disconnected => "The consumer is disconnected", - ConsumerState.Closed => "The consumer has closed", - ConsumerState.ReachedEndOfTopic => "The consumer has reached end of topic", - ConsumerState.Faulted => "The consumer has faulted", - _ => $"The consumer has an unknown state '{state}'" - }; - - Console.WriteLine(stateMessage); - - if (consumer.IsFinalState(state)) - return; - } -} - -``` - -### Monitor reader state - -The following table lists states available for the reader. - -| State | Description | -| ---- | ----| -| Closed | The reader or the Pulsar client has been disposed. | -| Connected | All is well. | -| Disconnected | The connection is lost and attempts are being made to reconnect. -| Faulted | An unrecoverable error has occurred. | -| ReachedEndOfTopic | No more messages are delivered. | - -This example shows how to monitor the reader state. - -```c# - -private static async ValueTask Monitor(IReader reader, CancellationToken cancellationToken) -{ - var state = ReaderState.Disconnected; - - while (!cancellationToken.IsCancellationRequested) - { - state = await reader.StateChangedFrom(state, cancellationToken); - - var stateMessage = state switch - { - ReaderState.Connected => "The reader is connected", - ReaderState.Disconnected => "The reader is disconnected", - ReaderState.Closed => "The reader has closed", - ReaderState.ReachedEndOfTopic => "The reader has reached end of topic", - ReaderState.Faulted => "The reader has faulted", - _ => $"The reader has an unknown state '{state}'" - }; - - Console.WriteLine(stateMessage); - - if (reader.IsFinalState(state)) - return; - } -} - -``` - diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/client-libraries-go.md b/site2/website/versioned_docs/version-2.8.3-deprecated/client-libraries-go.md deleted file mode 100644 index 6281b03dd8c805..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/client-libraries-go.md +++ /dev/null @@ -1,885 +0,0 @@ ---- -id: client-libraries-go -title: Pulsar Go client -sidebar_label: "Go" -original_id: client-libraries-go ---- - -> Tips: Currently, the CGo client will be deprecated, if you want to know more about the CGo client, please refer to [CGo client docs](client-libraries-cgo.md) - -You can use Pulsar [Go client](https://github.com/apache/pulsar-client-go) to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Go (aka Golang). - -> **API docs available as well** -> For standard API docs, consult the [Godoc](https://godoc.org/github.com/apache/pulsar-client-go/pulsar). - - -## Installation - -### Install go package - -You can install the `pulsar` library locally using `go get`. - -```bash - -$ go get -u "github.com/apache/pulsar-client-go/pulsar" - -``` - -Once installed locally, you can import it into your project: - -```go - -import "github.com/apache/pulsar-client-go/pulsar" - -``` - -## Connection URLs - -To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL. 
- -Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme and have a default port of 6650. Here's an example for `localhost`: - -```http - -pulsar://localhost:6650 - -``` - -If you have multiple brokers, you can set the URL as below. - -``` - -pulsar://localhost:6550,localhost:6651,localhost:6652 - -``` - -A URL for a production Pulsar cluster may look something like this: - -```http - -pulsar://pulsar.us-west.example.com:6650 - -``` - -If you're using [TLS](security-tls-authentication.md) authentication, the URL will look like something like this: - -```http - -pulsar+ssl://pulsar.us-west.example.com:6651 - -``` - -## Create a client - -In order to interact with Pulsar, you'll first need a `Client` object. You can create a client object using the `NewClient` function, passing in a `ClientOptions` object (more on configuration [below](#client-configuration)). Here's an example: - -```go - -import ( - "log" - "time" - - "github.com/apache/pulsar-client-go/pulsar" -) - -func main() { - client, err := pulsar.NewClient(pulsar.ClientOptions{ - URL: "pulsar://localhost:6650", - OperationTimeout: 30 * time.Second, - ConnectionTimeout: 30 * time.Second, - }) - if err != nil { - log.Fatalf("Could not instantiate Pulsar client: %v", err) - } - - defer client.Close() -} - -``` - -If you have multiple brokers, you can initiate a client object as below. - -```go - -import ( - "log" - "time" - "github.com/apache/pulsar-client-go/pulsar" -) - -func main() { - client, err := pulsar.NewClient(pulsar.ClientOptions{ - URL: "pulsar://localhost:6650,localhost:6651,localhost:6652", - OperationTimeout: 30 * time.Second, - ConnectionTimeout: 30 * time.Second, - }) - if err != nil { - log.Fatalf("Could not instantiate Pulsar client: %v", err) - } - - defer client.Close() -} - -``` - -The following configurable parameters are available for Pulsar clients: - - Name | Description | Default -| :-------- | :---------- |:---------- | -| URL | Configure the service URL for the Pulsar service.

If you have multiple brokers, you can set multiple Pulsar cluster addresses for a client. This parameter is **required**. |None |
-| ConnectionTimeout | Timeout for the establishment of a TCP connection | 30s |
-| OperationTimeout| Set the operation timeout. Producer-create, subscribe and unsubscribe operations will be retried until this interval, after which the operation will be marked as failed| 30s|
-| Authentication | Configure the authentication provider. Example: `Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem")` | no authentication |
-| TLSTrustCertsFilePath | Set the path to the trusted TLS certificate file | |
-| TLSAllowInsecureConnection | Configure whether the Pulsar client accepts an untrusted TLS certificate from the broker | false |
-| TLSValidateHostname | Configure whether the Pulsar client verifies the validity of the host name from the broker | false |
-| ListenerName | Configure the net model for VPC users to connect to the Pulsar broker | |
-| MaxConnectionsPerBroker | Max number of connections to a single broker that is kept in the pool | 1 |
-| CustomMetricsLabels | Add custom labels to all the metrics reported by this client instance | |
-| Logger | Configure the logger used by the client | logrus.StandardLogger |
-
-## Producers
-
-Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Go producers using a `ProducerOptions` object. Here's an example:
-
-```go
-
-producer, err := client.CreateProducer(pulsar.ProducerOptions{
-    Topic: "my-topic",
-})
-
-if err != nil {
-    log.Fatal(err)
-}
-
-defer producer.Close()
-
-_, err = producer.Send(context.Background(), &pulsar.ProducerMessage{
-    Payload: []byte("hello"),
-})
-
-if err != nil {
-    fmt.Println("Failed to publish message", err)
-}
-fmt.Println("Published message")
-
-```
-
-### Producer operations
-
-Pulsar Go producers have the following methods available:
-
-Method | Description | Return type
-:------|:------------|:-----------
-`Topic()` | Fetches the producer's [topic](reference-terminology.md#topic)| `string`
-`Name()` | Fetches the producer's name | `string`
-`Send(context.Context, *ProducerMessage)` | Publishes a [message](#messages) to the producer's topic. This call will block until the message is successfully acknowledged by the Pulsar broker, or an error will be thrown if the timeout set using the `SendTimeout` in the producer's [configuration](#producer-configuration) is exceeded. | (MessageID, error)
-`SendAsync(context.Context, *ProducerMessage, func(MessageID, *ProducerMessage, error))`| Publishes a message to the producer's topic asynchronously. The callback is invoked once the message has been acknowledged by the Pulsar broker, or with an error if publishing failed. |
-`LastSequenceID()` | Get the last sequence id that was published by this producer. This represents either the automatically assigned or custom sequence id (set on the ProducerMessage) that was published and acknowledged by the broker. | int64
-`Flush()`| Flush all the messages buffered in the client and wait until all messages have been successfully persisted. | error
-`Close()` | Closes the producer and releases all resources allocated to it. If `Close()` is called then no more messages will be accepted from the publisher. This method will block until all pending publish requests have been persisted by Pulsar. If an error is thrown, no pending writes will be retried.
### Producer Example

#### How to use message router in producer

```go

client, err := pulsar.NewClient(pulsar.ClientOptions{
	URL: serviceURL,
})
if err != nil {
	log.Fatal(err)
}
defer client.Close()

// Only subscribe on the specific partition
consumer, err := client.Subscribe(pulsar.ConsumerOptions{
	Topic:            "my-partitioned-topic-partition-2",
	SubscriptionName: "my-sub",
})
if err != nil {
	log.Fatal(err)
}
defer consumer.Close()

producer, err := client.CreateProducer(pulsar.ProducerOptions{
	Topic: "my-partitioned-topic",
	MessageRouter: func(msg *pulsar.ProducerMessage, tm pulsar.TopicMetadata) int {
		fmt.Println("Routing message ", msg, " -- Partitions: ", tm.NumPartitions())
		return 2
	},
})
if err != nil {
	log.Fatal(err)
}
defer producer.Close()

```

#### How to use schema interface in producer

```go

type testJSON struct {
	ID   int    `json:"id"`
	Name string `json:"name"`
}

```

```go

var (
	exampleSchemaDef = "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," +
		"\"fields\":[{\"name\":\"ID\",\"type\":\"int\"},{\"name\":\"Name\",\"type\":\"string\"}]}"
)

```

```go

client, err := pulsar.NewClient(pulsar.ClientOptions{
	URL: "pulsar://localhost:6650",
})
if err != nil {
	log.Fatal(err)
}
defer client.Close()

properties := make(map[string]string)
properties["pulsar"] = "hello"
jsonSchemaWithProperties := pulsar.NewJSONSchema(exampleSchemaDef, properties)
producer, err := client.CreateProducer(pulsar.ProducerOptions{
	Topic:  "jsonTopic",
	Schema: jsonSchemaWithProperties,
})
if err != nil {
	log.Fatal(err)
}

_, err = producer.Send(context.Background(), &pulsar.ProducerMessage{
	Value: &testJSON{
		ID:   100,
		Name: "pulsar",
	},
})
if err != nil {
	log.Fatal(err)
}
producer.Close()

```

#### How to use delay relative in producer

```go

client, err := pulsar.NewClient(pulsar.ClientOptions{
	URL: "pulsar://localhost:6650",
})
if err != nil {
	log.Fatal(err)
}
defer client.Close()

// newTopicName is the helper defined in the multi-topics consumer example below
topicName := newTopicName()
producer, err := client.CreateProducer(pulsar.ProducerOptions{
	Topic:           topicName,
	DisableBatching: true,
})
if err != nil {
	log.Fatal(err)
}
defer producer.Close()

consumer, err := client.Subscribe(pulsar.ConsumerOptions{
	Topic:            topicName,
	SubscriptionName: "subName",
	Type:             pulsar.Shared,
})
if err != nil {
	log.Fatal(err)
}
defer consumer.Close()

ID, err := producer.Send(context.Background(), &pulsar.ProducerMessage{
	Payload:      []byte("test"),
	DeliverAfter: 3 * time.Second,
})
if err != nil {
	log.Fatal(err)
}
fmt.Println(ID)

// The message is delayed by 3 seconds, so a 1-second receive times out.
ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
if msg, err := consumer.Receive(ctx); err != nil {
	fmt.Println("No message within 1 second, as expected:", err)
} else {
	fmt.Println("Unexpected early delivery:", string(msg.Payload()))
}
cancel()

// Within 5 seconds, the delayed message arrives.
ctx, cancel = context.WithTimeout(context.Background(), 5*time.Second)
msg, err := consumer.Receive(ctx)
if err != nil {
	log.Fatal(err)
}
fmt.Println(string(msg.Payload()))
cancel()

```
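Before the full configuration reference below, here is a sketch of a producer with tuned batching and compression. The option names come from `ProducerOptions`; the values are illustrative, not recommendations.

```go

producer, err := client.CreateProducer(pulsar.ProducerOptions{
	Topic:                   "my-topic",
	DisableBlockIfQueueFull: true,                  // fail fast instead of blocking when the queue is full
	BatchingMaxPublishDelay: 10 * time.Millisecond, // batch for at most 10ms...
	BatchingMaxMessages:     500,                   // ...or until 500 messages are buffered
	CompressionType:         pulsar.LZ4,
})
if err != nil {
	log.Fatal(err)
}
defer producer.Close()

```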
### Producer configuration

 Name | Description | Default
| :-------- | :---------- |:---------- |
| Topic | Specify the topic this producer will publish on. This argument is required when constructing the producer. | |
| Name | Specify a name for the producer. If not assigned, the system generates a globally unique name, which can be accessed with `Producer.Name()`. | |
| Properties | Attach a set of application-defined properties to the producer. These properties are visible in the topic stats. | |
| SendTimeout | Set the timeout for a message that is not acknowledged by the server | 30s |
| DisableBlockIfQueueFull | Control whether `Send` and `SendAsync` block if the producer's message queue is full | false |
| MaxPendingMessages | Set the max size of the queue holding messages pending an acknowledgment from the broker. | |
| HashingScheme | Change the `HashingScheme` used to choose the partition where a particular message is published. | JavaStringHash |
| CompressionType | Set the compression type for the producer. | not compressed |
| CompressionLevel | Define the desired compression level. Options: Default, Faster and Better | Default |
| MessageRouter | Set a custom message routing policy by passing an implementation of `MessageRouter` | |
| DisableBatching | Control whether automatic batching of messages is enabled for the producer. | false |
| BatchingMaxPublishDelay | Set the time period within which the messages sent will be batched | 1ms |
| BatchingMaxMessages | Set the maximum number of messages permitted in a batch. | 1000 |
| BatchingMaxSize | Set the maximum number of bytes permitted in a batch. | 128KB |
| Schema | Set a custom schema type by passing an implementation of `Schema` | `[]byte` |
| Interceptors | A chain of interceptors. These interceptors are called at some points defined in the `ProducerInterceptor` interface. | None |
| MaxReconnectToBroker | Set the maximum number of reconnectToBroker retries | unlimited |
| BatcherBuilderType | Set the batch builder type. This is used to create a batch container when batching is enabled. Options: DefaultBatchBuilder and KeyBasedBatchBuilder | DefaultBatchBuilder |

## Consumers

Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on those topics. You can [configure](#consumer-configuration) Go consumers using a `ConsumerOptions` object. Here's a basic example that uses the `Receive` method:

```go

consumer, err := client.Subscribe(pulsar.ConsumerOptions{
	Topic:            "topic-1",
	SubscriptionName: "my-sub",
	Type:             pulsar.Shared,
})
if err != nil {
	log.Fatal(err)
}
defer consumer.Close()

for i := 0; i < 10; i++ {
	msg, err := consumer.Receive(context.Background())
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("Received message msgId: %#v -- content: '%s'\n",
		msg.ID(), string(msg.Payload()))

	consumer.Ack(msg)
}

if err := consumer.Unsubscribe(); err != nil {
	log.Fatal(err)
}

```
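If message processing fails, you can negatively acknowledge the message so the broker redelivers it later; the operations table below lists the related methods. Here is a minimal sketch that assumes the `consumer` from the example above and a hypothetical application-level `handle` function:

```go

msg, err := consumer.Receive(context.Background())
if err != nil {
	log.Fatal(err)
}

// handle is a placeholder for your own processing logic.
if err := handle(msg); err != nil {
	// Negative acknowledgement: the broker redelivers the message
	// after the consumer's NackRedeliveryDelay (1 minute by default).
	consumer.Nack(msg)
} else {
	consumer.Ack(msg)
}

```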
### Consumer operations

Pulsar Go consumers have the following methods available:

Method | Description | Return type
:------|:------------|:-----------
`Subscription()` | Returns the consumer's subscription name | `string`
`Unsubscribe()` | Unsubscribes the consumer from the assigned topic. Returns an error if the unsubscribe operation fails. | `error`
`Receive(context.Context)` | Receives a single message from the topic. This method blocks until a message is available. | `(Message, error)`
`Chan()` | Returns a channel from which to consume messages. | `<-chan ConsumerMessage`
`Ack(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) |
`AckID(MessageID)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID |
`ReconsumeLater(msg Message, delay time.Duration)` | Marks a message for redelivery after a custom delay |
`Nack(Message)` | Acknowledges the failure to process a single message. |
`NackID(MessageID)` | Acknowledges the failure to process a single message by message ID. |
`Seek(msgID MessageID)` | Resets the subscription associated with this consumer to a specific message ID. The message ID can either be a specific message or represent the first or last messages in the topic. | `error`
`SeekByTime(time time.Time)` | Resets the subscription associated with this consumer to a specific message publish time. | `error`
`Close()` | Closes the consumer, disabling its ability to receive messages from the broker |
`Name()` | Returns the name of the consumer | `string`

### Receive example

#### How to use regex consumer

```go

client, err := pulsar.NewClient(pulsar.ClientOptions{
	URL: "pulsar://localhost:6650",
})
if err != nil {
	log.Fatal(err)
}
defer client.Close()

// namespace and topicInRegex are illustrative values matching the pattern below
namespace := "public/default"
topicInRegex := fmt.Sprintf("persistent://%s/foo-topic", namespace)

p, err := client.CreateProducer(pulsar.ProducerOptions{
	Topic:           topicInRegex,
	DisableBatching: true,
})
if err != nil {
	log.Fatal(err)
}
defer p.Close()

topicsPattern := fmt.Sprintf("persistent://%s/foo.*", namespace)
opts := pulsar.ConsumerOptions{
	TopicsPattern:    topicsPattern,
	SubscriptionName: "regex-sub",
}
consumer, err := client.Subscribe(opts)
if err != nil {
	log.Fatal(err)
}
defer consumer.Close()

```

#### How to use multi topics Consumer

```go

func newTopicName() string {
	return fmt.Sprintf("my-topic-%v", time.Now().Nanosecond())
}

topic1 := "topic-1"
topic2 := "topic-2"

client, err := pulsar.NewClient(pulsar.ClientOptions{
	URL: "pulsar://localhost:6650",
})
if err != nil {
	log.Fatal(err)
}
topics := []string{topic1, topic2}
consumer, err := client.Subscribe(pulsar.ConsumerOptions{
	Topics:           topics,
	SubscriptionName: "multi-topic-sub",
})
if err != nil {
	log.Fatal(err)
}
defer consumer.Close()

```

#### How to use consumer listener

```go

import (
	"fmt"
	"log"

	"github.com/apache/pulsar-client-go/pulsar"
)

func main() {
	client, err := pulsar.NewClient(pulsar.ClientOptions{URL: "pulsar://localhost:6650"})
	if err != nil {
		log.Fatal(err)
	}

	defer client.Close()

	channel := make(chan pulsar.ConsumerMessage, 100)

	options := pulsar.ConsumerOptions{
		Topic:            "topic-1",
		SubscriptionName: "my-subscription",
		Type:             pulsar.Shared,
	}

	options.MessageChannel = channel

	consumer, err := client.Subscribe(options)
	if err != nil {
		log.Fatal(err)
	}

	defer consumer.Close()

	// Receive messages from channel. The channel returns a struct that contains the message and the consumer from which
	// the message was received. It's not necessary here since we have a single consumer, but the channel could be
	// shared across multiple consumers as well
	for cm := range channel {
		msg := cm.Message
		fmt.Printf("Received message msgId: %v -- content: '%s'\n",
			msg.ID(), string(msg.Payload()))

		consumer.Ack(msg)
	}
}

```

#### How to use consumer receive timeout

```go

client, err := pulsar.NewClient(pulsar.ClientOptions{
	URL: "pulsar://localhost:6650",
})
if err != nil {
	log.Fatal(err)
}
defer client.Close()

topic := "test-topic-with-no-messages"
ctx, cancel := context.WithTimeout(context.Background(), 500*time.Millisecond)
defer cancel()

// create consumer
consumer, err := client.Subscribe(pulsar.ConsumerOptions{
	Topic:            topic,
	SubscriptionName: "my-sub1",
	Type:             pulsar.Shared,
})
if err != nil {
	log.Fatal(err)
}
defer consumer.Close()

// Receive returns an error once the context times out
msg, err := consumer.Receive(ctx)
if err != nil {
	log.Fatal(err)
}
fmt.Println(msg.Payload())

```

#### How to use schema in consumer

```go

type testJSON struct {
	ID   int    `json:"id"`
	Name string `json:"name"`
}

```

```go

var (
	exampleSchemaDef = "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," +
		"\"fields\":[{\"name\":\"ID\",\"type\":\"int\"},{\"name\":\"Name\",\"type\":\"string\"}]}"
)

```

```go

client, err := pulsar.NewClient(pulsar.ClientOptions{
	URL: "pulsar://localhost:6650",
})
if err != nil {
	log.Fatal(err)
}
defer client.Close()

var s testJSON

consumerJS := pulsar.NewJSONSchema(exampleSchemaDef, nil)
consumer, err := client.Subscribe(pulsar.ConsumerOptions{
	Topic:                       "jsonTopic",
	SubscriptionName:            "sub-1",
	Schema:                      consumerJS,
	SubscriptionInitialPosition: pulsar.SubscriptionPositionEarliest,
})
if err != nil {
	log.Fatal(err)
}
defer consumer.Close()

msg, err := consumer.Receive(context.Background())
if err != nil {
	log.Fatal(err)
}
err = msg.GetSchemaValue(&s)
if err != nil {
	log.Fatal(err)
}

```

### Consumer configuration

 Name | Description | Default
| :-------- | :---------- |:---------- |
| Topic | Specify the topic this consumer will subscribe to. Either a topic, a list of topics, or a topics pattern is required when subscribing. | |
| Topics | Specify a list of topics this consumer will subscribe on. Either a topic, a list of topics, or a topics pattern is required when subscribing. | |
| TopicsPattern | Specify a regular expression to subscribe to multiple topics under the same namespace. Either a topic, a list of topics, or a topics pattern is required when subscribing. | |
| AutoDiscoveryPeriod | Specify the interval at which to poll for new partitions or new topics when using a TopicsPattern. | |
| SubscriptionName | Specify the subscription name for this consumer. This argument is required when subscribing. | |
| Name | Set the consumer name | |
| Properties | Attach a set of application-defined properties to the consumer. These properties are visible in the topic stats. | |
| Type | Select the subscription type to be used when subscribing to the topic. | Exclusive |
| SubscriptionInitialPosition | The initial position at which the cursor is set when subscribing | Latest |
| DLQ | Configuration for the dead letter queue consumer policy. | no DLQ |
| MessageChannel | Sets a `MessageChannel` for the consumer. When a message is received, it is pushed to the channel for consumption | |
| ReceiverQueueSize | Sets the size of the consumer receive queue. | 1000 |
| NackRedeliveryDelay | The delay after which to redeliver messages that failed to be processed | 1min |
| ReadCompacted | If enabled, the consumer reads messages from the compacted topic rather than reading the full message backlog of the topic | false |
| ReplicateSubscriptionState | Mark the subscription as replicated to keep it in sync across clusters | false |
| KeySharedPolicy | Configuration for the Key_Shared consumer policy. | |
| RetryEnable | Automatically retry sending messages to the default-filled DLQPolicy topics | false |
| Interceptors | A chain of interceptors. These interceptors are called at some points defined in the `ConsumerInterceptor` interface. | |
| MaxReconnectToBroker | Set the maximum number of reconnectToBroker retries | unlimited |
| Schema | Set a custom schema type by passing an implementation of `Schema` | `[]byte` |
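Several of the options above (`Type`, `NackRedeliveryDelay`, and `DLQ`) work together. Here is a sketch of a shared consumer with a dead letter policy; the topic names and values are placeholders.

```go

consumer, err := client.Subscribe(pulsar.ConsumerOptions{
	Topic:               "my-topic",
	SubscriptionName:    "my-sub",
	Type:                pulsar.Shared,
	NackRedeliveryDelay: 20 * time.Second,
	DLQ: &pulsar.DLQPolicy{
		MaxDeliveries:   3,              // after 3 failed deliveries...
		DeadLetterTopic: "my-topic-dlq", // ...route the message here
	},
})
if err != nil {
	log.Fatal(err)
}
defer consumer.Close()

```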
## Readers

Pulsar readers process messages from Pulsar topics. Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recent unacked message). You can [configure](#reader-configuration) Go readers using a `ReaderOptions` object. Here's an example:

```go

reader, err := client.CreateReader(pulsar.ReaderOptions{
	Topic:          "topic-1",
	StartMessageID: pulsar.EarliestMessageID(),
})
if err != nil {
	log.Fatal(err)
}
defer reader.Close()

```

### Reader operations

Pulsar Go readers have the following methods available:

Method | Description | Return type
:------|:------------|:-----------
`Topic()` | Returns the reader's [topic](reference-terminology.md#topic) | `string`
`Next(context.Context)` | Receives the next message on the topic (analogous to the `Receive` method for [consumers](#consumer-operations)). This method blocks until a message is available. | `(Message, error)`
`HasNext()` | Checks if there is any message available to read from the current position | `bool`
`Close()` | Closes the reader, disabling its ability to receive messages from the broker | `error`
`Seek(MessageID)` | Resets the subscription associated with this reader to a specific message ID | `error`
`SeekByTime(time time.Time)` | Resets the subscription associated with this reader to a specific message publish time | `error`

### Reader example

#### How to use reader to read 'next' message

Here's an example usage of a Go reader that uses the `Next()` method to process incoming messages:

```go

import (
	"context"
	"fmt"
	"log"

	"github.com/apache/pulsar-client-go/pulsar"
)

func main() {
	client, err := pulsar.NewClient(pulsar.ClientOptions{URL: "pulsar://localhost:6650"})
	if err != nil {
		log.Fatal(err)
	}

	defer client.Close()

	reader, err := client.CreateReader(pulsar.ReaderOptions{
		Topic:          "topic-1",
		StartMessageID: pulsar.EarliestMessageID(),
	})
	if err != nil {
		log.Fatal(err)
	}
	defer reader.Close()

	for reader.HasNext() {
		msg, err := reader.Next(context.Background())
		if err != nil {
			log.Fatal(err)
		}

		fmt.Printf("Received message msgId: %#v -- content: '%s'\n",
			msg.ID(), string(msg.Payload()))
	}
}

```

In the example above, the reader begins reading from the earliest available message (specified by `pulsar.EarliestMessageID()`).
The reader can also begin reading from the latest message (`pulsar.LatestMessageID()`) or some other message ID specified by bytes using the `DeserializeMessageID` function, which takes a byte array and returns a `MessageID` and an error. Here's an example:

```go

// Read the last saved message ID from an external store as a byte slice
var lastSavedId []byte

msgID, err := pulsar.DeserializeMessageID(lastSavedId)
if err != nil {
	log.Fatal(err)
}

reader, err := client.CreateReader(pulsar.ReaderOptions{
	Topic:          "my-golang-topic",
	StartMessageID: msgID,
})

```

#### How to use reader to read specific message

```go

client, err := pulsar.NewClient(pulsar.ClientOptions{
	URL: "pulsar://localhost:6650",
})
if err != nil {
	log.Fatal(err)
}
defer client.Close()

topic := "topic-1"
ctx := context.Background()

// create producer
producer, err := client.CreateProducer(pulsar.ProducerOptions{
	Topic:           topic,
	DisableBatching: true,
})
if err != nil {
	log.Fatal(err)
}
defer producer.Close()

// send 10 messages
msgIDs := [10]pulsar.MessageID{}
for i := 0; i < 10; i++ {
	msgID, err := producer.Send(ctx, &pulsar.ProducerMessage{
		Payload: []byte(fmt.Sprintf("hello-%d", i)),
	})
	if err != nil {
		log.Fatal(err)
	}
	msgIDs[i] = msgID
}

// create reader on 5th message (not included)
reader, err := client.CreateReader(pulsar.ReaderOptions{
	Topic:          topic,
	StartMessageID: msgIDs[4],
})
if err != nil {
	log.Fatal(err)
}
defer reader.Close()

// receive the remaining 5 messages
for i := 5; i < 10; i++ {
	msg, err := reader.Next(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(msg.Payload()))
}

// create reader on 5th message (included)
readerInclusive, err := client.CreateReader(pulsar.ReaderOptions{
	Topic:                   topic,
	StartMessageID:          msgIDs[4],
	StartMessageIDInclusive: true,
})
if err != nil {
	log.Fatal(err)
}
defer readerInclusive.Close()

```

### Reader configuration

 Name | Description | Default
| :-------- | :---------- |:---------- |
| Topic | Specify the topic this reader will read from. This argument is required when constructing the reader. | |
| Name | Set the reader name. | |
| Properties | Attach a set of application-defined properties to the reader. These properties are visible in the topic stats. | |
| StartMessageID | Initial reader positioning is done by specifying a message ID. | |
| StartMessageIDInclusive | If true, the reader starts at the `StartMessageID`, included. Default is `false`, and the reader starts from the "next" message. | false |
| MessageChannel | Sets a `MessageChannel` for the reader. When a message is received, it is pushed to the channel for consumption. | |
| ReceiverQueueSize | Sets the size of the reader receive queue. | 1000 |
| SubscriptionRolePrefix | Set the subscription role prefix. | "reader" |
| ReadCompacted | If enabled, the reader reads messages from the compacted topic rather than reading the full message backlog of the topic. `ReadCompacted` can only be enabled when reading from a persistent topic. | false |
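The `Seek` and `SeekByTime` operations have no example above; here is a minimal sketch that assumes the `reader` from the earlier examples and rewinds it by one hour:

```go

// Rewind the reader to the messages published in the last hour.
if err := reader.SeekByTime(time.Now().Add(-1 * time.Hour)); err != nil {
	log.Fatal(err)
}

```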
## Messages

The Pulsar Go client provides a `ProducerMessage` interface that you can use to construct messages to produce on Pulsar topics. Here's an example message:

```go

msg := pulsar.ProducerMessage{
	Payload: []byte("Here is some message data"),
	Key:     "message-key",
	Properties: map[string]string{
		"foo": "bar",
	},
	EventTime:           time.Now(),
	ReplicationClusters: []string{"cluster1", "cluster3"},
}

if _, err := producer.Send(context.Background(), &msg); err != nil {
	log.Fatalf("Could not publish message due to: %v", err)
}

```

The following parameters are available for `ProducerMessage` objects:

Parameter | Description
:---------|:-----------
`Payload` | The actual data payload of the message
`Value` | Value and payload are mutually exclusive; `Value interface{}` is for schema messages.
`Key` | The optional key associated with the message (particularly useful for things like topic compaction)
`OrderingKey` | Sets the ordering key of the message.
`Properties` | A key-value map (both keys and values must be strings) for any application-specific metadata attached to the message
`EventTime` | The timestamp associated with the message
`ReplicationClusters` | The clusters to which this message will be replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default.
`SequenceID` | Set the sequence ID to assign to the current message
`DeliverAfter` | Request to deliver the message only after the specified relative delay
`DeliverAt` | Deliver the message only at or after the specified absolute timestamp

## TLS encryption and authentication

In order to use [TLS encryption](security-tls-transport.md), you'll need to configure your client to do so:

 * Use the `pulsar+ssl` URL type
 * Set `TLSTrustCertsFilePath` to the path to the TLS certs used by your client and the Pulsar broker
 * Configure the `Authentication` option

Here's an example:

```go

opts := pulsar.ClientOptions{
	URL:                   "pulsar+ssl://my-cluster.com:6651",
	TLSTrustCertsFilePath: "/path/to/certs/my-cert.csr",
	Authentication:        pulsar.NewAuthenticationTLS("my-cert.pem", "my-key.pem"),
}

```

## OAuth2 authentication

To use [OAuth2 authentication](security-oauth2.md), you'll need to configure your client as shown in the following example.

```go

oauth := pulsar.NewAuthenticationOAuth2(map[string]string{
	"type":       "client_credentials",
	"issuerUrl":  "https://dev-kt-aa9ne.us.auth0.com",
	"audience":   "https://dev-kt-aa9ne.us.auth0.com/api/v2/",
	"privateKey": "/path/to/privateKey",
	"clientId":   "0Xx...Yyxeny",
})
client, err := pulsar.NewClient(pulsar.ClientOptions{
	URL:            "pulsar://my-cluster:6650",
	Authentication: oauth,
})

```

diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/client-libraries-java.md b/site2/website/versioned_docs/version-2.8.3-deprecated/client-libraries-java.md
deleted file mode 100644
index 0ff9a22936fafc..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/client-libraries-java.md
+++ /dev/null
@@ -1,1038 +0,0 @@
----
-id: client-libraries-java
-title: Pulsar Java client
-sidebar_label: "Java"
-original_id: client-libraries-java
----
-
-You can use the Pulsar Java client to create Java [producers](#producer), [consumers](#consumer), and [readers](#reader-interface) of messages and to perform [administrative tasks](admin-api-overview.md). The current version of the Java client is **@pulsar:version@**.
- -All the methods in [producer](#producer), [consumer](#consumer), and [reader](#reader) of a Java client are thread-safe. - -Javadoc for the Pulsar client is divided into two domains by package as follows. - -Package | Description | Maven Artifact -:-------|:------------|:-------------- -[`org.apache.pulsar.client.api`](/api/client) | The producer and consumer API | [org.apache.pulsar:pulsar-client:@pulsar:version@](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client%7C@pulsar:version@%7Cjar) -[`org.apache.pulsar.client.admin`](/api/admin) | The Java [admin API](admin-api-overview.md) | [org.apache.pulsar:pulsar-client-admin:@pulsar:version@](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client-admin%7C@pulsar:version@%7Cjar) -`org.apache.pulsar.client.all` |Includes both `pulsar-client` and `pulsar-client-admin`

    Both `pulsar-client` and `pulsar-client-admin` are shaded packages and they shade dependencies independently. Consequently, the applications using both `pulsar-client` and `pulsar-client-admin` have redundant shaded classes. It would be troublesome if you introduce new dependencies but forget to update shading rules.

    In this case, you can use `pulsar-client-all`, which shades dependencies only one time and reduces the size of dependencies. |[org.apache.pulsar:pulsar-client-all:@pulsar:version@](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client-all%7C@pulsar:version@%7Cjar)
-
-This document focuses only on the client API for producing and consuming messages on Pulsar topics. For how to use the Java admin client, see [Pulsar admin interface](admin-api-overview.md).
-
-## Installation
-
-The latest version of the Pulsar Java client library is available via [Maven Central](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client%7C@pulsar:version@%7Cjar). To use the latest version, add the `pulsar-client` library to your build configuration.
-
-### Maven
-
-If you use Maven, add the following information to the `pom.xml` file.
-
-```xml
-
-<!-- in your <properties> block -->
-<pulsar.version>@pulsar:version@</pulsar.version>
-
-<!-- in your <dependencies> block -->
-<dependency>
-  <groupId>org.apache.pulsar</groupId>
-  <artifactId>pulsar-client</artifactId>
-  <version>${pulsar.version}</version>
-</dependency>
-
-```
-
-### Gradle
-
-If you use Gradle, add the following information to the `build.gradle` file.
-
-```groovy
-
-def pulsarVersion = '@pulsar:version@'
-
-dependencies {
-    compile group: 'org.apache.pulsar', name: 'pulsar-client', version: pulsarVersion
-}
-
-```
-
-## Connection URLs
-
-To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL.
-
-You can assign Pulsar protocol URLs to specific clusters and use the `pulsar` scheme. The default port is `6650`. The following is an example of `localhost`.
-
-```http
-
-pulsar://localhost:6650
-
-```
-
-If you have multiple brokers, the URL is as follows.
-
-```http
-
-pulsar://localhost:6650,localhost:6651,localhost:6652
-
-```
-
-A URL for a production Pulsar cluster is as follows.
-
-```http
-
-pulsar://pulsar.us-west.example.com:6650
-
-```
-
-If you use [TLS](security-tls-authentication.md) authentication, the URL is as follows.
-
-```http
-
-pulsar+ssl://pulsar.us-west.example.com:6651
-
-```
-
-## Client
-
-You can instantiate a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object using just a URL for the target Pulsar [cluster](reference-terminology.md#cluster) like this:
-
-```java
-
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar://localhost:6650")
-        .build();
-
-```
-
-If you have multiple brokers, you can initiate a PulsarClient like this:
-
-```java
-
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar://localhost:6650,localhost:6651,localhost:6652")
-        .build();
-
-```
-
-> ### Default broker URLs for standalone clusters
-> If you run a cluster in [standalone mode](getting-started-standalone.md), the broker is available at the `pulsar://localhost:6650` URL by default.
-
-If you create a client, you can use the `loadConf` configuration. The following parameters are available in `loadConf`.
-
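-The following is a minimal sketch of building a client from a `loadConf` map; the keys mirror the table below, and the values are illustrative rather than recommended.
-
-```java
-
-// assumes java.util.Map and java.util.HashMap are imported
-Map<String, Object> config = new HashMap<>();
-config.put("serviceUrl", "pulsar://localhost:6650");
-config.put("operationTimeoutMs", 30000);
-config.put("numIoThreads", 4);
-
-PulsarClient client = PulsarClient.builder()
-        .loadConf(config)
-        .build();
-
-```
-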
-| Type | Name | Description | Default
-|---|---|---
-String | `serviceUrl` | Service URL provider for Pulsar service | None
-String | `authPluginClassName` | Name of the authentication plugin | None
-String | `authParams` | A string that represents parameters for the authentication plugin

    **Example**
    key1:val1,key2:val2|None -long|`operationTimeoutMs`|Operation timeout |30000 -long|`statsIntervalSeconds`|Interval between each stats info

    Stats are activated with a positive `statsIntervalSeconds` value

    Set `statsIntervalSeconds` to 1 second at least |60 -int|`numIoThreads`| The number of threads used for handling connections to brokers | 1 -int|`numListenerThreads`|The number of threads used for handling message listeners. The listener thread pool is shared across all the consumers and readers using the "listener" model to get messages. For a given consumer, the listener is always invoked from the same thread to ensure ordering. If you want multiple threads to process a single topic, you need to create a [`shared`](https://pulsar.apache.org/docs/en/next/concepts-messaging/#shared) subscription and multiple consumers for this subscription. This does not ensure ordering.| 1 -boolean|`useTcpNoDelay`|Whether to use TCP no-delay flag on the connection to disable Nagle algorithm |true -boolean |`useTls` |Whether to use TLS encryption on the connection| false -string | `tlsTrustCertsFilePath` |Path to the trusted TLS certificate file|None -boolean|`tlsAllowInsecureConnection`|Whether the Pulsar client accepts untrusted TLS certificate from broker | false -boolean | `tlsHostnameVerificationEnable` | Whether to enable TLS hostname verification|false -int|`concurrentLookupRequest`|The number of concurrent lookup requests allowed to send on each broker connection to prevent overload on broker|5000 -int|`maxLookupRequest`|The maximum number of lookup requests allowed on each broker connection to prevent overload on broker | 50000 -int|`maxNumberOfRejectedRequestPerConnection`|The maximum number of rejected requests of a broker in a certain time frame (30 seconds) after the current connection is closed and the client creates a new connection to connect to a different broker|50 -int|`keepAliveIntervalSeconds`|Seconds of keeping alive interval for each client broker connection|30 -int|`connectionTimeoutMs`|Duration of waiting for a connection to a broker to be established

    If the duration passes without a response from a broker, the connection attempt is dropped|10000 -int|`requestTimeoutMs`|Maximum duration for completing a request |60000 -int|`defaultBackoffIntervalNanos`| Default duration for a backoff interval | TimeUnit.MILLISECONDS.toNanos(100); -long|`maxBackoffIntervalNanos`|Maximum duration for a backoff interval|TimeUnit.SECONDS.toNanos(30) -SocketAddress|`socks5ProxyAddress`|SOCKS5 proxy address | None -String|`socks5ProxyUsername`|SOCKS5 proxy username | None -String|`socks5ProxyPassword`|SOCKS5 proxy password | None - -Check out the Javadoc for the {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} class for a full list of configurable parameters. - -> In addition to client-level configuration, you can also apply [producer](#configuring-producers) and [consumer](#configuring-consumers) specific configuration as described in sections below. - -## Producer - -In Pulsar, producers write messages to topics. Once you've instantiated a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object (as in the section [above](#client-configuration)), you can create a {@inject: javadoc:Producer:/client/org/apache/pulsar/client/api/Producer} for a specific Pulsar [topic](reference-terminology.md#topic). - -```java - -Producer producer = client.newProducer() - .topic("my-topic") - .create(); - -// You can then send messages to the broker and topic you specified: -producer.send("My message".getBytes()); - -``` - -By default, producers produce messages that consist of byte arrays. You can produce different types by specifying a message [schema](#schemas). - -```java - -Producer stringProducer = client.newProducer(Schema.STRING) - .topic("my-topic") - .create(); -stringProducer.send("My message"); - -``` - -> Make sure that you close your producers, consumers, and clients when you do not need them. - -> ```java -> -> producer.close(); -> consumer.close(); -> client.close(); -> -> -> ``` - -> -> Close operations can also be asynchronous: - -> ```java -> -> producer.closeAsync() -> .thenRun(() -> System.out.println("Producer closed")) -> .exceptionally((ex) -> { -> System.err.println("Failed to close producer: " + ex); -> return null; -> }); -> -> -> ``` - - -### Configure producer - -If you instantiate a `Producer` object by specifying only a topic name as the example above, use the default configuration for producer. - -If you create a producer, you can use the `loadConf` configuration. The following parameters are available in `loadConf`. - -Type | Name|
    Description
    | Default -|---|---|---|--- -String| `topicName`| Topic name| null| -String|`producerName`|Producer name| null -long|`sendTimeoutMs`|Message send timeout in ms.

    If a message is not acknowledged by a server before the `sendTimeout` expires, an error occurs.|30000 -boolean|`blockIfQueueFull`|If it is set to `true`, when the outgoing message queue is full, the `Send` and `SendAsync` methods of producer block, rather than failing and throwing errors.

    If it is set to `false`, when the outgoing message queue is full, the `Send` and `SendAsync` methods of producer fail and `ProducerQueueIsFullError` exceptions occur.

    The `MaxPendingMessages` parameter determines the size of the outgoing message queue.|false -int|`maxPendingMessages`|The maximum size of a queue holding pending messages.

    That is, messages waiting to receive an acknowledgment from a [broker](reference-terminology.md#broker).

    By default, when the queue is full, all calls to the `Send` and `SendAsync` methods fail **unless** you set `BlockIfQueueFull` to `true`.|1000 -int|`maxPendingMessagesAcrossPartitions`|The maximum number of pending messages across partitions.

    Use the setting to lower the max pending messages for each partition ({@link #setMaxPendingMessages(int)}) if the total number exceeds the configured value.|50000 -MessageRoutingMode|`messageRoutingMode`|Message routing logic for producers on [partitioned topics](concepts-architecture-overview.md#partitioned-topics).

    The logic applies only when no key is set on messages.

    Available options are as follows:

  1. `pulsar.RoundRobinDistribution`: round robin

  2. `pulsar.UseSinglePartition`: publish all messages to a single partition

  3. `pulsar.CustomPartition`: a custom partitioning scheme
  |`pulsar.RoundRobinDistribution`
-HashingScheme|`hashingScheme`|Hashing function determining the partition where you publish a particular message (**partitioned topics only**).

    Available options are as follows:

  1. `pulsar.JavaStringHash`: the equivalent of `String.hashCode()` in Java

  2. `pulsar.Murmur3_32Hash`: applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function

  3. `pulsar.BoostHash`: applies the hashing function from C++'s [Boost](https://www.boost.org/doc/libs/1_62_0/doc/html/hash.html) library
  |`HashingScheme.JavaStringHash`
-ProducerCryptoFailureAction|`cryptoFailureAction`|The action the producer takes when encryption fails.

  1. **FAIL**: if encryption fails, unencrypted messages fail to send.

  2. **SEND**: if encryption fails, unencrypted messages are sent.
  |`ProducerCryptoFailureAction.FAIL`
-long|`batchingMaxPublishDelayMicros`|Batching time period of sending messages.|TimeUnit.MILLISECONDS.toMicros(1)
-int|`batchingMaxMessages`|The maximum number of messages permitted in a batch.|1000
-boolean|`batchingEnabled`|Enable batching of messages. |true
-CompressionType|`compressionType`|Message data compression type used by a producer.

    Available options:
  1. [`LZ4`](https://github.com/lz4/lz4)
  2. [`ZLIB`](https://zlib.net/)
  3. [`ZSTD`](https://facebook.github.io/zstd/)
  4. [`SNAPPY`](https://google.github.io/snappy/)
  | No compression

-You can configure parameters if you do not want to use the default configuration.
-
-For a full list, see the Javadoc for the {@inject: javadoc:ProducerBuilder:/client/org/apache/pulsar/client/api/ProducerBuilder} class. The following is an example.
-
-```java
-
-Producer producer = client.newProducer()
-    .topic("my-topic")
-    .batchingMaxPublishDelay(10, TimeUnit.MILLISECONDS)
-    .sendTimeout(10, TimeUnit.SECONDS)
-    .blockIfQueueFull(true)
-    .create();
-
-```
-
-### Message routing
-
-When using partitioned topics, you can specify the routing mode whenever you publish messages using a producer. For more information on specifying a routing mode using the Java client, see the [Partitioned Topics](cookbooks-partitioned.md) cookbook.
-
-### Async send
-
-You can publish messages [asynchronously](concepts-messaging.md#send-modes) using the Java client. With async send, the producer puts the message in a blocking queue and returns immediately. The client library then sends the message to the broker in the background. If the queue is full (the max size is configurable), the producer is blocked or fails immediately when calling the API, depending on the arguments passed to the producer.
-
-The following is an example.
-
-```java
-
-producer.sendAsync("my-async-message".getBytes()).thenAccept(msgId -> {
-    System.out.println("Message with ID " + msgId + " successfully sent");
-});
-
-```
-
-As you can see from the example above, async send operations return a {@inject: javadoc:MessageId:/client/org/apache/pulsar/client/api/MessageId} wrapped in a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture).
-
-### Configure messages
-
-In addition to a value, you can set additional items on a given message:
-
-```java
-
-producer.newMessage()
-    .key("my-message-key")
-    .value("my-async-message".getBytes())
-    .property("my-key", "my-value")
-    .property("my-other-key", "my-other-value")
-    .send();
-
-```
-
-You can terminate the builder chain with `sendAsync()` to get a `CompletableFuture` in return.
-
-## Consumer
-
-In Pulsar, consumers subscribe to topics and handle messages that producers publish to those topics. You can instantiate a new [consumer](reference-terminology.md#consumer) by first instantiating a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object and passing it a URL for a Pulsar broker (as [above](#client-configuration)).
-
-Once you've instantiated a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object, you can create a {@inject: javadoc:Consumer:/client/org/apache/pulsar/client/api/Consumer} by specifying a [topic](reference-terminology.md#topic) and a [subscription](concepts-messaging.md#subscription-modes).
-
-```java
-
-Consumer consumer = client.newConsumer()
-        .topic("my-topic")
-        .subscriptionName("my-subscription")
-        .subscribe();
-
-```
-
-The `subscribe` method automatically subscribes the consumer to the specified topic and subscription. One way to make the consumer listen on the topic is to set up a `while` loop. In this example loop, the consumer listens for messages, prints the contents of any received message, and then [acknowledges](reference-terminology.md#acknowledgment-ack) that the message has been processed. If the processing logic fails, you can use [negative acknowledgement](reference-terminology.md#acknowledgment-ack) to redeliver the message later.
- -```java - -while (true) { - // Wait for a message - Message msg = consumer.receive(); - - try { - // Do something with the message - System.out.println("Message received: " + new String(msg.getData())); - - // Acknowledge the message so that it can be deleted by the message broker - consumer.acknowledge(msg); - } catch (Exception e) { - // Message failed to process, redeliver later - consumer.negativeAcknowledge(msg); - } -} - -``` - -If you don't want to block your main thread and rather listen constantly for new messages, consider using a `MessageListener`. - -```java - -MessageListener myMessageListener = (consumer, msg) -> { - try { - System.out.println("Message received: " + new String(msg.getData())); - consumer.acknowledge(msg); - } catch (Exception e) { - consumer.negativeAcknowledge(msg); - } -} - -Consumer consumer = client.newConsumer() - .topic("my-topic") - .subscriptionName("my-subscription") - .messageListener(myMessageListener) - .subscribe(); - -``` - -### Configure consumer - -If you instantiate a `Consumer` object by specifying only a topic and subscription name as in the example above, the consumer uses the default configuration. - -When you create a consumer, you can use the `loadConf` configuration. The following parameters are available in `loadConf`. - -Type | Name|
    Description
    | Default -|---|---|---|--- -Set<String>| `topicNames`| Topic name| Sets.newTreeSet() -Pattern| `topicsPattern`| Topic pattern |None -String| `subscriptionName`| Subscription name| None -SubscriptionType| `subscriptionType`| Subscription type

    Four subscription types are available:
  1. Exclusive
  2. Failover
  3. Shared
  4. Key_Shared
  |SubscriptionType.Exclusive
-int | `receiverQueueSize` | Size of a consumer's receiver queue.

    For example, the number of messages accumulated by a consumer before an application calls `Receive`.

    A value higher than the default value increases consumer throughput, though at the expense of more memory utilization.| 1000 -long|`acknowledgementsGroupTimeMicros`|Group a consumer acknowledgment for a specified time.

    By default, a consumer uses 100ms grouping time to send out acknowledgments to a broker.

    Setting a group time of 0 sends out acknowledgments immediately.

    A longer ack group time is more efficient at the expense of a slight increase in message re-deliveries after a failure.|TimeUnit.MILLISECONDS.toMicros(100) -long|`negativeAckRedeliveryDelayMicros`|Delay to wait before redelivering messages that failed to be processed.

    When an application uses {@link Consumer#negativeAcknowledge(Message)}, failed messages are redelivered after a fixed timeout. |TimeUnit.MINUTES.toMicros(1) -int |`maxTotalReceiverQueueSizeAcrossPartitions`|The max total receiver queue size across partitions.

    This setting reduces the receiver queue size for individual partitions if the total receiver queue size exceeds this value.|50000 -String|`consumerName`|Consumer name|null -long|`ackTimeoutMillis`|Timeout of unacked messages|0 -long|`tickDurationMillis`|Granularity of the ack-timeout redelivery.

    Using a higher `tickDurationMillis` reduces the memory overhead of tracking messages when the ack-timeout is set to a larger value (for example, 1 hour).|1000
-int|`priorityLevel`|Priority level for a consumer to which a broker gives more priority while dispatching messages in Shared subscription type.

    The broker follows descending priorities. For example, 0=max-priority, 1, 2,...

    In the Shared subscription type, the broker **first dispatches messages to the max priority level consumers if they have permits**. Otherwise, the broker considers consumers at the next priority level.

    **Example 1**

    If a subscription has consumerA with `priorityLevel` 0 and consumerB with `priorityLevel` 1, then the broker **only dispatches messages to consumerA until it runs out of permits** and then starts dispatching messages to consumerB.

    **Example 2**

    Consumer, Priority Level, Permits
    C1, 0, 2
    C2, 0, 1
    C3, 0, 1
    C4, 1, 2
    C5, 1, 1

    The order in which a broker dispatches messages to consumers is: C1, C2, C3, C1, C4, C5, C4.|0
-ConsumerCryptoFailureAction|`cryptoFailureAction`|The action a consumer should take when it receives a message that cannot be decrypted.

  1. **FAIL**: this is the default option; messages fail until crypto succeeds.

  2. **DISCARD**: silently acknowledge the message and do not deliver it to the application.

  3. **CONSUME**: deliver encrypted messages to the application. It is the application's responsibility to decrypt the message.

    If the message also needs decompression, the decompression fails.

    If messages contain batch messages, a client is not able to retrieve individual messages in the batch.

    A delivered encrypted message contains an {@link EncryptionContext} with the encryption and compression information, which the application can use to decrypt the consumed message payload.
  |ConsumerCryptoFailureAction.FAIL
-SortedMap|`properties`|A name or value property of this consumer.

    `properties` is application defined metadata attached to a consumer.

    When you get topic stats, this metadata is associated with the consumer stats for easier identification.|new TreeMap()
-boolean|`readCompacted`|If `readCompacted` is enabled, a consumer reads messages from a compacted topic rather than reading the full message backlog of a topic.

    A consumer only sees the latest value for each key in the compacted topic, up until the point in the topic message backlog where compaction has already been applied. Beyond that point, messages are sent as normal.

    `readCompacted` can only be enabled on subscriptions to persistent topics that have a single active consumer (such as failover or exclusive subscriptions).

    Attempting to enable it on subscriptions to non-persistent topics or on shared subscriptions leads to a subscription call throwing a `PulsarClientException`.|false -SubscriptionInitialPosition|`subscriptionInitialPosition`|Initial position at which to set cursor when subscribing to a topic at first time.|SubscriptionInitialPosition.Latest -int|`patternAutoDiscoveryPeriod`|Topic auto discovery period when using a pattern for topic's consumer.

    The default and minimum value is 1 minute.|1 -RegexSubscriptionMode|`regexSubscriptionMode`|When subscribing to a topic using a regular expression, you can pick a certain type of topics.

  1. **PersistentOnly**: only subscribe to persistent topics.

  2. **NonPersistentOnly**: only subscribe to non-persistent topics.

  3. **AllTopics**: subscribe to both persistent and non-persistent topics.
  |RegexSubscriptionMode.PersistentOnly
-DeadLetterPolicy|`deadLetterPolicy`|Dead letter policy for consumers.

    By default, some messages may be redelivered many times, possibly without ever stopping.

    With the dead letter mechanism, each message has a maximum redelivery count. **When the maximum number of redeliveries is exceeded, messages are sent to the Dead Letter Topic and acknowledged automatically**.

    You can enable the dead letter mechanism by setting `deadLetterPolicy`.

    **Example**

    client.newConsumer()
    .deadLetterPolicy(DeadLetterPolicy.builder().maxRedeliverCount(10).build())
    .subscribe();


    Default dead letter topic name is `{TopicName}-{Subscription}-DLQ`.

    To set a custom dead letter topic name:
    client.newConsumer()
    .deadLetterPolicy(DeadLetterPolicy.builder().maxRedeliverCount(10)
    .deadLetterTopic("your-topic-name").build())
    .subscribe();


    If you specify the dead letter policy without specifying `ackTimeoutMillis`, the ack timeout is set to 30000 milliseconds.|None
-boolean|`autoUpdatePartitions`|If `autoUpdatePartitions` is enabled, a consumer subscribes to partition increases automatically.

    **Note**: this is only for partitioned consumers.|true -boolean|`replicateSubscriptionState`|If `replicateSubscriptionState` is enabled, a subscription state is replicated to geo-replicated clusters.|false - -You can configure parameters if you do not want to use the default configuration. For a full list, see the Javadoc for the {@inject: javadoc:ConsumerBuilder:/client/org/apache/pulsar/client/api/ConsumerBuilder} class. - -The following is an example. - -```java - -Consumer consumer = client.newConsumer() - .topic("my-topic") - .subscriptionName("my-subscription") - .ackTimeout(10, TimeUnit.SECONDS) - .subscriptionType(SubscriptionType.Exclusive) - .subscribe(); - -``` - -### Async receive - -The `receive` method receives messages synchronously (the consumer process is blocked until a message is available). You can also use [async receive](concepts-messaging.md#receive-modes), which returns a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture) object immediately once a new message is available. - -The following is an example. - -```java - -CompletableFuture asyncMessage = consumer.receiveAsync(); - -``` - -Async receive operations return a {@inject: javadoc:Message:/client/org/apache/pulsar/client/api/Message} wrapped inside of a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture). - -### Batch receive - -Use `batchReceive` to receive multiple messages for each call. - -The following is an example. - -```java - -Messages messages = consumer.batchReceive(); -for (Object message : messages) { - // do something -} -consumer.acknowledge(messages) - -``` - -:::note - -Batch receive policy limits the number and bytes of messages in a single batch. You can specify a timeout to wait for enough messages. -The batch receive is completed if any of the following condition is met: enough number of messages, bytes of messages, wait timeout. - -```java - -Consumer consumer = client.newConsumer() -.topic("my-topic") -.subscriptionName("my-subscription") -.batchReceivePolicy(BatchReceivePolicy.builder() -.maxNumMessages(100) -.maxNumBytes(1024 * 1024) -.timeout(200, TimeUnit.MILLISECONDS) -.build()) -.subscribe(); - -``` - -The default batch receive policy is: - -```java - -BatchReceivePolicy.builder() -.maxNumMessage(-1) -.maxNumBytes(10 * 1024 * 1024) -.timeout(100, TimeUnit.MILLISECONDS) -.build(); - -``` - -::: - -### Multi-topic subscriptions - -In addition to subscribing a consumer to a single Pulsar topic, you can also subscribe to multiple topics simultaneously using [multi-topic subscriptions](concepts-messaging.md#multi-topic-subscriptions). To use multi-topic subscriptions you can supply either a regular expression (regex) or a `List` of topics. If you select topics via regex, all topics must be within the same Pulsar namespace. - -The followings are some examples. 
- -```java - -import org.apache.pulsar.client.api.Consumer; -import org.apache.pulsar.client.api.PulsarClient; - -import java.util.Arrays; -import java.util.List; -import java.util.regex.Pattern; - -ConsumerBuilder consumerBuilder = pulsarClient.newConsumer() - .subscriptionName(subscription); - -// Subscribe to all topics in a namespace -Pattern allTopicsInNamespace = Pattern.compile("public/default/.*"); -Consumer allTopicsConsumer = consumerBuilder - .topicsPattern(allTopicsInNamespace) - .subscribe(); - -// Subscribe to a subsets of topics in a namespace, based on regex -Pattern someTopicsInNamespace = Pattern.compile("public/default/foo.*"); -Consumer allTopicsConsumer = consumerBuilder - .topicsPattern(someTopicsInNamespace) - .subscribe(); - -``` - -In the above example, the consumer subscribes to the `persistent` topics that can match the topic name pattern. If you want the consumer subscribes to all `persistent` and `non-persistent` topics that can match the topic name pattern, set `subscriptionTopicsMode` to `RegexSubscriptionMode.AllTopics`. - -```java - -Pattern pattern = Pattern.compile("public/default/.*"); -pulsarClient.newConsumer() - .subscriptionName("my-sub") - .topicsPattern(pattern) - .subscriptionTopicsMode(RegexSubscriptionMode.AllTopics) - .subscribe(); - -``` - -:::note - -By default, the `subscriptionTopicsMode` of the consumer is `PersistentOnly`. Available options of `subscriptionTopicsMode` are `PersistentOnly`, `NonPersistentOnly`, and `AllTopics`. - -::: - -You can also subscribe to an explicit list of topics (across namespaces if you wish): - -```java - -List topics = Arrays.asList( - "topic-1", - "topic-2", - "topic-3" -); - -Consumer multiTopicConsumer = consumerBuilder - .topics(topics) - .subscribe(); - -// Alternatively: -Consumer multiTopicConsumer = consumerBuilder - .topic( - "topic-1", - "topic-2", - "topic-3" - ) - .subscribe(); - -``` - -You can also subscribe to multiple topics asynchronously using the `subscribeAsync` method rather than the synchronous `subscribe` method. The following is an example. - -```java - -Pattern allTopicsInNamespace = Pattern.compile("persistent://public/default.*"); -consumerBuilder - .topics(topics) - .subscribeAsync() - .thenAccept(this::receiveMessageFromConsumer); - -private void receiveMessageFromConsumer(Object consumer) { - ((Consumer)consumer).receiveAsync().thenAccept(message -> { - // Do something with the received message - receiveMessageFromConsumer(consumer); - }); -} - -``` - -### Subscription types - -Pulsar has various [subscription types](concepts-messaging#subscription-types) to match different scenarios. A topic can have multiple subscriptions with different subscription types. However, a subscription can only have one subscription type at a time. - -A subscription is identical with the subscription name; a subscription name can specify only one subscription type at a time. To change the subscription type, you should first stop all consumers of this subscription. - -Different subscription types have different message distribution modes. This section describes the differences of subscription types and how to use them. - -In order to better describe their differences, assuming you have a topic named "my-topic", and the producer has published 10 messages. 
- -```java - -Producer producer = client.newProducer(Schema.STRING) - .topic("my-topic") - .enableBatching(false) - .create(); -// 3 messages with "key-1", 3 messages with "key-2", 2 messages with "key-3" and 2 messages with "key-4" -producer.newMessage().key("key-1").value("message-1-1").send(); -producer.newMessage().key("key-1").value("message-1-2").send(); -producer.newMessage().key("key-1").value("message-1-3").send(); -producer.newMessage().key("key-2").value("message-2-1").send(); -producer.newMessage().key("key-2").value("message-2-2").send(); -producer.newMessage().key("key-2").value("message-2-3").send(); -producer.newMessage().key("key-3").value("message-3-1").send(); -producer.newMessage().key("key-3").value("message-3-2").send(); -producer.newMessage().key("key-4").value("message-4-1").send(); -producer.newMessage().key("key-4").value("message-4-2").send(); - -``` - -#### Exclusive - -Create a new consumer and subscribe with the `Exclusive` subscription type. - -```java - -Consumer consumer = client.newConsumer() - .topic("my-topic") - .subscriptionName("my-subscription") - .subscriptionType(SubscriptionType.Exclusive) - .subscribe() - -``` - -Only the first consumer is allowed to the subscription, other consumers receive an error. The first consumer receives all 10 messages, and the consuming order is the same as the producing order. - -:::note - -If topic is a partitioned topic, the first consumer subscribes to all partitioned topics, other consumers are not assigned with partitions and receive an error. - -::: - -#### Failover - -Create new consumers and subscribe with the`Failover` subscription type. - -```java - -Consumer consumer1 = client.newConsumer() - .topic("my-topic") - .subscriptionName("my-subscription") - .subscriptionType(SubscriptionType.Failover) - .subscribe() -Consumer consumer2 = client.newConsumer() - .topic("my-topic") - .subscriptionName("my-subscription") - .subscriptionType(SubscriptionType.Failover) - .subscribe() -//conumser1 is the active consumer, consumer2 is the standby consumer. -//consumer1 receives 5 messages and then crashes, consumer2 takes over as an active consumer. - -``` - -Multiple consumers can attach to the same subscription, yet only the first consumer is active, and others are standby. When the active consumer is disconnected, messages will be dispatched to one of standby consumers, and the standby consumer then becomes active consumer. - -If the first active consumer is disconnected after receiving 5 messages, the standby consumer becomes active consumer. Consumer1 will receive: - -``` - -("key-1", "message-1-1") -("key-1", "message-1-2") -("key-1", "message-1-3") -("key-2", "message-2-1") -("key-2", "message-2-2") - -``` - -consumer2 will receive: - -``` - -("key-2", "message-2-3") -("key-3", "message-3-1") -("key-3", "message-3-2") -("key-4", "message-4-1") -("key-4", "message-4-2") - -``` - -:::note - -If a topic is a partitioned topic, each partition has only one active consumer, messages of one partition are distributed to only one consumer, and messages of multiple partitions are distributed to multiple consumers. - -::: - -#### Shared - -Create new consumers and subscribe with `Shared` subscription type. 
- -```java - -Consumer consumer1 = client.newConsumer() - .topic("my-topic") - .subscriptionName("my-subscription") - .subscriptionType(SubscriptionType.Shared) - .subscribe() - -Consumer consumer2 = client.newConsumer() - .topic("my-topic") - .subscriptionName("my-subscription") - .subscriptionType(SubscriptionType.Shared) - .subscribe() -//Both consumer1 and consumer2 are active consumers. - -``` - -In shared subscription type, multiple consumers can attach to the same subscription and messages are delivered in a round robin distribution across consumers. - -If a broker dispatches only one message at a time, consumer1 receives the following information. - -``` - -("key-1", "message-1-1") -("key-1", "message-1-3") -("key-2", "message-2-2") -("key-3", "message-3-1") -("key-4", "message-4-1") - -``` - -consumer2 receives the following information. - -``` - -("key-1", "message-1-2") -("key-2", "message-2-1") -("key-2", "message-2-3") -("key-3", "message-3-2") -("key-4", "message-4-2") - -``` - -`Shared` subscription is different from `Exclusive` and `Failover` subscription types. `Shared` subscription has better flexibility, but cannot provide order guarantee. - -#### Key_shared - -This is a new subscription type since 2.4.0 release. Create new consumers and subscribe with `Key_Shared` subscription type. - -```java - -Consumer consumer1 = client.newConsumer() - .topic("my-topic") - .subscriptionName("my-subscription") - .subscriptionType(SubscriptionType.Key_Shared) - .subscribe() - -Consumer consumer2 = client.newConsumer() - .topic("my-topic") - .subscriptionName("my-subscription") - .subscriptionType(SubscriptionType.Key_Shared) - .subscribe() -//Both consumer1 and consumer2 are active consumers. - -``` - -`Key_Shared` subscription is like `Shared` subscription, all consumers can attach to the same subscription. But it is different from `Key_Shared` subscription, messages with the same key are delivered to only one consumer in order. The possible distribution of messages between different consumers (by default we do not know in advance which keys will be assigned to a consumer, but a key will only be assigned to a consumer at the same time). - -consumer1 receives the following information. - -``` - -("key-1", "message-1-1") -("key-1", "message-1-2") -("key-1", "message-1-3") -("key-3", "message-3-1") -("key-3", "message-3-2") - -``` - -consumer2 receives the following information. - -``` - -("key-2", "message-2-1") -("key-2", "message-2-2") -("key-2", "message-2-3") -("key-4", "message-4-1") -("key-4", "message-4-2") - -``` - -If batching is enabled at the producer side, messages with different keys are added to a batch by default. The broker will dispatch the batch to the consumer, so the default batch mechanism may break the Key_Shared subscription guaranteed message distribution semantics. The producer needs to use the `KeyBasedBatcher`. - -```java - -Producer producer = client.newProducer() - .topic("my-topic") - .batcherBuilder(BatcherBuilder.KEY_BASED) - .create(); - -``` - -Or the producer can disable batching. - -```java - -Producer producer = client.newProducer() - .topic("my-topic") - .enableBatching(false) - .create(); - -``` - -:::note - -If the message key is not specified, messages without key are dispatched to one consumer in order by default. - -::: - -## Reader - -With the [reader interface](concepts-clients.md#reader-interface), Pulsar clients can "manually position" themselves within a topic and reading all messages from a specified message onward. 
-The Pulsar API for Java enables you to create {@inject: javadoc:Reader:/client/org/apache/pulsar/client/api/Reader} objects by specifying a topic and a {@inject: javadoc:MessageId:/client/org/apache/pulsar/client/api/MessageId}.
-
-The following is an example.
-
-```java
-
-byte[] msgIdBytes = // Some message ID byte array
-MessageId id = MessageId.fromByteArray(msgIdBytes);
-Reader reader = pulsarClient.newReader()
-        .topic(topic)
-        .startMessageId(id)
-        .create();
-
-while (true) {
-    Message message = reader.readNext();
-    // Process message
-}
-
-```
-
-In the example above, a `Reader` object is instantiated for a specific topic and message (by ID); the reader iterates over each message in the topic after the message identified by `msgIdBytes` (how that value is obtained depends on the application).
-
-The code sample above shows pointing the `Reader` object to a specific message (by ID), but you can also use `MessageId.earliest` to point to the earliest available message on the topic or `MessageId.latest` to point to the most recent available message.
-
-### Configure reader
-
-When you create a reader, you can use the `loadConf` configuration. The following parameters are available in `loadConf`.
-
-| Type | Name | Description | Default |
-|---|---|---|---|
-String|`topicName`|Topic name.|None
-int|`receiverQueueSize`|Size of a consumer's receiver queue, that is, the number of messages that can be accumulated by a consumer before an application calls `receive`.<br/><br/>A value higher than the default value increases consumer throughput, though at the expense of more memory utilization.|1000
-ReaderListener<T>|`readerListener`|A listener that is called for each received message.|None
-String|`readerName`|Reader name.|null
-String|`subscriptionName`|Subscription name.|When there is a single topic, the default subscription name is `"reader-" + 10-digit UUID`.<br/>When there are multiple topics, the default subscription name is `"multiTopicsReader-" + 10-digit UUID`.
-String|`subscriptionRolePrefix`|Prefix of subscription role.|null
-CryptoKeyReader|`cryptoKeyReader`|Interface that abstracts the access to a key store.|null
-ConsumerCryptoFailureAction|`cryptoFailureAction`|The action the consumer takes when it receives a message that cannot be decrypted.<br/><br/>**FAIL**: this is the default option; messages fail until crypto succeeds.<br/><br/>**DISCARD**: silently acknowledge the message and do not deliver it to the application.<br/><br/>**CONSUME**: deliver encrypted messages to the application. It is the application's responsibility to decrypt the message. In this case, message decompression fails, and if messages are batched, the client is not able to retrieve individual messages from the batch. A delivered encrypted message contains {@link EncryptionContext}, which holds the encryption and compression information that the application can use to decrypt the consumed message payload.|ConsumerCryptoFailureAction.FAIL
-boolean|`readCompacted`|If `readCompacted` is enabled, a consumer reads messages from a compacted topic rather than the full message backlog of the topic.<br/><br/>A consumer only sees the latest value for each key in the compacted topic, up until reaching the point in the topic's message backlog where compaction took place. Beyond that point, messages are sent as normal.<br/><br/>`readCompacted` can only be enabled on subscriptions to persistent topics that have a single active consumer (for example, failover or exclusive subscriptions).<br/><br/>Attempting to enable it on subscriptions to non-persistent topics or on shared subscriptions leads to the subscription call throwing a `PulsarClientException`.|false
-boolean|`resetIncludeHead`|If set to `true`, the first message to be returned is the one specified by `messageId`.<br/><br/>If set to `false`, the first message to be returned is the one next to the message specified by `messageId`.|false
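-
-The following is a minimal sketch of passing some of these parameters through `loadConf`; the specific parameter values shown here are illustrative, not recommendations.
-
-```java
-
-// Assemble reader configuration as a map; keys match the table above
-Map<String, Object> readerConf = new HashMap<>();
-readerConf.put("topicName", "my-topic");
-readerConf.put("receiverQueueSize", 2000);
-readerConf.put("readerName", "my-reader");
-
-Reader<byte[]> reader = pulsarClient.newReader()
-        .loadConf(readerConf)
-        .startMessageId(MessageId.earliest)
-        .create();
-
-```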
-
-### Sticky key range reader
-
-In a sticky key range reader, the broker only dispatches messages whose message-key hash falls within the specified key hash ranges. Multiple key hash ranges can be specified on a reader.
-
-The following is an example of creating a sticky key range reader.
-
-```java
-
-pulsarClient.newReader()
-        .topic(topic)
-        .startMessageId(MessageId.earliest)
-        .keyHashRange(Range.of(0, 10000), Range.of(20001, 30000))
-        .create();
-
-```
-
-The total hash range size is 65536, so the maximum end of a range must be less than or equal to 65535.
-
-## Schema
-
-In Pulsar, all message data consists of byte arrays "under the hood." [Message schemas](schema-get-started.md) enable you to use other types of data when constructing and handling messages (from simple types like strings to more complex, application-specific types). If you construct, say, a [producer](#producers) without specifying a schema, then the producer can only produce messages of type `byte[]`. The following is an example.
-
-```java
-
-Producer<byte[]> producer = client.newProducer()
-        .topic(topic)
-        .create();
-
-```
-
-The producer above is equivalent to a `Producer<byte[]>` (in fact, you should *always* explicitly specify the type). If you'd like to use a producer for a different type of data, you'll need to specify a **schema** that informs Pulsar which data type will be transmitted over the [topic](reference-terminology.md#topic).
-
-### AvroBaseStructSchema example
-
-Let's say that you have a `SensorReading` class that you'd like to transmit over a Pulsar topic:
-
-```java
-
-public class SensorReading {
-    public float temperature;
-
-    public SensorReading(float temperature) {
-        this.temperature = temperature;
-    }
-
-    // A no-arg constructor is required
-    public SensorReading() {
-    }
-
-    public float getTemperature() {
-        return temperature;
-    }
-
-    public void setTemperature(float temperature) {
-        this.temperature = temperature;
-    }
-}
-
-```
-
-You could then create a `Producer<SensorReading>` (or `Consumer<SensorReading>`) like this:
-
-```java
-
-Producer<SensorReading> producer = client.newProducer(JSONSchema.of(SensorReading.class))
-        .topic("sensor-readings")
-        .create();
-
-```
-
-The following schema formats are currently available for Java:
-
-* No schema or the byte array schema (which can be applied using `Schema.BYTES`):
-
-  ```java
-
-  Producer<byte[]> bytesProducer = client.newProducer(Schema.BYTES)
-          .topic("some-raw-bytes-topic")
-          .create();
-
-  ```
-
-  Or, equivalently:
-
-  ```java
-
-  Producer<byte[]> bytesProducer = client.newProducer()
-          .topic("some-raw-bytes-topic")
-          .create();
-
-  ```
-
-* `String` for normal UTF-8-encoded string data. Apply the schema using `Schema.STRING`:
-
-  ```java
-
-  Producer<String> stringProducer = client.newProducer(Schema.STRING)
-          .topic("some-string-topic")
-          .create();
-
-  ```
-
-* Create JSON schemas for POJOs using `Schema.JSON`. The following is an example.
-
-  ```java
-
-  Producer<MyPojo> pojoProducer = client.newProducer(Schema.JSON(MyPojo.class))
-          .topic("some-pojo-topic")
-          .create();
-
-  ```
-
-* Generate Protobuf schemas using `Schema.PROTOBUF`. The following example shows how to create the Protobuf schema and use it to instantiate a new producer:
-
-  ```java
-
-  Producer<MyProtobuf> protobufProducer = client.newProducer(Schema.PROTOBUF(MyProtobuf.class))
-          .topic("some-protobuf-topic")
-          .create();
-
-  ```
-
-* Define Avro schemas with `Schema.AVRO`. The following code snippet demonstrates how to create and use an Avro schema.
-
-  ```java
-
-  Producer<MyAvro> avroProducer = client.newProducer(Schema.AVRO(MyAvro.class))
-          .topic("some-avro-topic")
-          .create();
-
-  ```
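-
-Consumers accept a schema in the same way. The following is a minimal sketch of subscribing with the JSON schema for the `SensorReading` class defined above; the topic and subscription names are illustrative.
-
-```java
-
-// Subscribe with the same schema the producer uses
-Consumer<SensorReading> consumer = client.newConsumer(JSONSchema.of(SensorReading.class))
-        .topic("sensor-readings")
-        .subscriptionName("sensor-subscription")
-        .subscribe();
-
-// getValue() deserializes the payload into a SensorReading instance
-SensorReading reading = consumer.receive().getValue();
-
-```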
-
-### ProtobufNativeSchema example
-
-For an example of `ProtobufNativeSchema`, see [`SchemaDefinition` in `Complex type`](schema-understand.md#complex-type).
-
-## Authentication
-
-Pulsar currently supports three authentication schemes: [TLS](security-tls-authentication.md), [Athenz](security-athenz.md), and [Oauth2](security-oauth2.md). You can use the Pulsar Java client with all of them.
-
-### TLS Authentication
-
-To use [TLS](security-tls-authentication.md), note that the `enableTls` method is deprecated; instead, enable TLS by using `pulsar+ssl://` in the service URL. You also need to point your Pulsar client to a TLS trust certificate path and provide paths to the client certificate and key files.
-
-The following is an example.
-
-```java
-
-Map<String, String> authParams = new HashMap<>();
-authParams.put("tlsCertFile", "/path/to/client-cert.pem");
-authParams.put("tlsKeyFile", "/path/to/client-key.pem");
-
-Authentication tlsAuth = AuthenticationFactory
-        .create(AuthenticationTls.class.getName(), authParams);
-
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar+ssl://my-broker.com:6651")
-        .tlsTrustCertsFilePath("/path/to/cacert.pem")
-        .authentication(tlsAuth)
-        .build();
-
-```
-
-### Athenz
-
-To use [Athenz](security-athenz.md) as an authentication provider, you need to [use TLS](#tls-authentication) and provide values for four parameters in a hash:
-
-* `tenantDomain`
-* `tenantService`
-* `providerDomain`
-* `privateKey`
-
-You can also set an optional `keyId`. The following is an example.
-
-```java
-
-Map<String, String> authParams = new HashMap<>();
-authParams.put("tenantDomain", "shopping"); // Tenant domain name
-authParams.put("tenantService", "some_app"); // Tenant service name
-authParams.put("providerDomain", "pulsar"); // Provider domain name
-authParams.put("privateKey", "file:///path/to/private.pem"); // Tenant private key path
-authParams.put("keyId", "v1"); // Key id for the tenant private key (optional, default: "0")
-
-Authentication athenzAuth = AuthenticationFactory
-        .create(AuthenticationAthenz.class.getName(), authParams);
-
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar+ssl://my-broker.com:6651")
-        .tlsTrustCertsFilePath("/path/to/cacert.pem")
-        .authentication(athenzAuth)
-        .build();
-
-```
-
-> #### Supported pattern formats
-> The `privateKey` parameter supports the following three pattern formats:
-> * `file:///path/to/file`
-> * `file:/path/to/file`
-> * `data:application/x-pem-file;base64,`
-
-### Oauth2
-
-The following example shows how to use [Oauth2](security-oauth2.md) as an authentication provider for the Pulsar Java client.
-
-You can use the factory method to configure authentication for the Pulsar Java client.
-
-```java
-
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar://broker.example.com:6650/")
-        .authentication(
-                AuthenticationFactoryOAuth2.clientCredentials(this.issuerUrl, this.credentialsUrl, this.audience))
-        .build();
-
-```
-
-In addition, you can also use encoded parameters to configure authentication for the Pulsar Java client.
-
-```java
-
-Authentication auth = AuthenticationFactory
-        .create(AuthenticationOAuth2.class.getName(),
-                "{\"type\":\"client_credentials\",\"privateKey\":\"...\",\"issuerUrl\":\"...\",\"audience\":\"...\"}");
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar://broker.example.com:6650/")
-        .authentication(auth)
-        .build();
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/client-libraries-node.md b/site2/website/versioned_docs/version-2.8.3-deprecated/client-libraries-node.md
deleted file mode 100644
index 8031d287c6f2ad..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/client-libraries-node.md
+++ /dev/null
@@ -1,643 +0,0 @@
----
-id: client-libraries-node
-title: The Pulsar Node.js client
-sidebar_label: "Node.js"
-original_id: client-libraries-node
----
-
-The Pulsar Node.js client can be used to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Node.js.
-
-All the methods in [producers](#producers), [consumers](#consumers), and [readers](#readers) of a Node.js client are thread-safe.
-
-For 1.3.0 or later versions, [type definitions](https://github.com/apache/pulsar-client-node/blob/master/index.d.ts) used in TypeScript are available.
-
-## Installation
-
-You can install the [`pulsar-client`](https://www.npmjs.com/package/pulsar-client) library via [npm](https://www.npmjs.com/).
-
-### Requirements
-
-The Pulsar Node.js client library is based on the C++ client library. Follow [these instructions](client-libraries-cpp.md#compilation) to install the Pulsar C++ client library first.
-
-### Compatibility
-
-Compatibility between each version of the Node.js client and the C++ client is as follows:
-
-| Node.js client | C++ client |
-| :------------- | :------------- |
-| 1.0.0 | 2.3.0 or later |
-| 1.1.0 | 2.4.0 or later |
-| 1.2.0 | 2.5.0 or later |
-
-If an incompatible version of the C++ client is installed, you may fail to build or run this library.
-
-### Installation using npm
-
-Install the `pulsar-client` library via [npm](https://www.npmjs.com/):
-
-```shell
-
-$ npm install pulsar-client
-
-```
-
-:::note
-
-This library works only in Node.js 10.x or later because it uses the [`node-addon-api`](https://github.com/nodejs/node-addon-api) module to wrap the C++ library.
-
-:::
-
-## Connection URLs
-
-To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL.
-
-Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme, and have a default port of 6650. Here is an example for `localhost`:
-
-```http
-
-pulsar://localhost:6650
-
-```
-
-A URL for a production Pulsar cluster may look something like this:
-
-```http
-
-pulsar://pulsar.us-west.example.com:6650
-
-```
-
-If you are using [TLS encryption](security-tls-transport.md) or [TLS Authentication](security-tls-authentication.md), the URL looks like this:
-
-```http
-
-pulsar+ssl://pulsar.us-west.example.com:6651
-
-```
-
-## Create a client
-
-In order to interact with Pulsar, you first need a client object. You can create a client instance using a `new` operator and the `Client` method, passing in a client options object (more on configuration [below](#client-configuration)).
-
-Here is an example:
-
-```JavaScript
-
-const Pulsar = require('pulsar-client');
-
-(async () => {
-  const client = new Pulsar.Client({
-    serviceUrl: 'pulsar://localhost:6650',
-  });
-
-  await client.close();
-})();
-
-```
-
-### Client configuration
-
-The following configurable parameters are available for Pulsar clients:
-
-| Parameter | Description | Default |
-| :-------- | :---------- | :------ |
-| `serviceUrl` | The connection URL for the Pulsar cluster. See [above](#connection-urls) for more info. | |
-| `authentication` | Configure the authentication provider (default: no authentication). See [TLS Authentication](security-tls-authentication.md) for more info. | |
-| `operationTimeoutSeconds` | The timeout for Node.js client operations (creating producers, subscribing to and unsubscribing from [topics](reference-terminology.md#topic)). Retries occur until this threshold is reached, at which point the operation fails. | 30 |
-| `ioThreads` | The number of threads to use for handling connections to Pulsar [brokers](reference-terminology.md#broker). | 1 |
-| `messageListenerThreads` | The number of threads used by message listeners ([consumers](#consumers) and [readers](#readers)). | 1 |
-| `concurrentLookupRequest` | The number of concurrent lookup requests that can be sent on each broker connection. Setting a maximum helps to keep brokers from being overloaded. You should set values over the default of 50000 only if the client needs to produce and/or subscribe to thousands of Pulsar topics. | 50000 |
-| `tlsTrustCertsFilePath` | The file path for the trusted TLS certificate. | |
-| `tlsValidateHostname` | Whether to enable TLS hostname verification. | `false` |
-| `tlsAllowInsecureConnection` | Whether the Pulsar client accepts untrusted TLS certificates from the broker. | `false` |
-| `statsIntervalInSeconds` | Interval between stats reports. Stats are activated when the interval is positive. The minimum value is 1 second. | 600 |
-| `log` | A function that is used for logging. | `console.log` |
-
-## Producers
-
-Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Node.js producers using a producer configuration object.
-
-Here is an example:
-
-```JavaScript
-
-const producer = await client.createProducer({
-  topic: 'my-topic', // or 'my-tenant/my-namespace/my-topic' to specify topic's tenant and namespace
-});
-
-await producer.send({
-  data: Buffer.from("Hello, Pulsar"),
-});
-
-await producer.close();
-
-```
-
-> #### Promise operation
-> When you create a new Pulsar producer, the operation returns a `Promise` object through which you get the producer instance or an error via an executor function.
-> This example uses the `await` operator instead of an executor function.
-
-### Producer operations
-
-Pulsar Node.js producers have the following methods available:
-
-| Method | Description | Return type |
-| :----- | :---------- | :---------- |
-| `send(Object)` | Publishes a [message](#messages) to the producer's topic. When the message is successfully acknowledged by the Pulsar broker, or an error is thrown, the Promise object whose result is the message ID runs its executor function. | `Promise` |
-| `flush()` | Sends messages from the send queue to the Pulsar broker. When the messages are successfully acknowledged by the Pulsar broker, or an error is thrown, the Promise object runs its executor function. | `Promise` |
-| `close()` | Closes the producer and releases all resources allocated to it. Once `close()` is called, no more messages are accepted from the publisher. This method returns a Promise object that runs its executor function when all pending publish requests are persisted by Pulsar. If an error is thrown, no pending writes are retried. | `Promise` |
-| `getProducerName()` | Getter method of the producer name. | `string` |
-| `getTopic()` | Getter method of the name of the topic. | `string` |
-
-### Producer configuration
-
-| Parameter | Description | Default |
-| :-------- | :---------- | :------ |
-| `topic` | The Pulsar [topic](reference-terminology.md#topic) to which the producer publishes messages. The topic format is `<topic-name>` or `<tenant>/<namespace>/<topic-name>`. For example, `sample/ns1/my-topic`. | |
-| `producerName` | A name for the producer. If you do not explicitly assign a name, Pulsar automatically generates a globally unique name. If you choose to explicitly assign a name, it needs to be unique across *all* Pulsar clusters, otherwise the creation operation throws an error. | |
-| `sendTimeoutMs` | When publishing a message to a topic, the producer waits for an acknowledgment from the responsible Pulsar [broker](reference-terminology.md#broker). If a message is not acknowledged within the threshold set by this parameter, an error is thrown. If you set `sendTimeoutMs` to -1, the timeout is set to infinity (and thus removed). Removing the send timeout is recommended when using Pulsar's [message de-duplication](cookbooks-deduplication.md) feature. | 30000 |
-| `initialSequenceId` | The initial sequence ID of the message. The producer attaches a sequence ID to each message it sends, and the ID increases with each send. | |
-| `maxPendingMessages` | The maximum size of the queue holding pending messages (i.e. messages waiting to receive an acknowledgment from the [broker](reference-terminology.md#broker)). By default, when the queue is full, all calls to the `send` method fail *unless* `blockIfQueueFull` is set to `true`. | 1000 |
-| `maxPendingMessagesAcrossPartitions` | The maximum size of the sum of the partitions' pending queues. | 50000 |
-| `blockIfQueueFull` | If set to `true`, the producer's `send` method waits when the outgoing message queue is full rather than failing and throwing an error (the size of that queue is dictated by the `maxPendingMessages` parameter); if set to `false` (the default), `send` operations fail and throw an error when the queue is full. | `false` |
-| `messageRoutingMode` | The message routing logic (for producers on [partitioned topics](concepts-messaging.md#partitioned-topics)). This logic is applied only when no key is set on messages. The available options are: round robin (`RoundRobinDistribution`), or publishing all messages to a single partition (`UseSinglePartition`, the default). | `UseSinglePartition` |
-| `hashingScheme` | The hashing function that determines the partition on which a particular message is published (partitioned topics only). The available options are: `JavaStringHash` (the equivalent of `String.hashCode()` in Java), `Murmur3_32Hash` (applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function), or `BoostHash` (applies the hashing function from C++'s [Boost](https://www.boost.org/doc/libs/1_62_0/doc/html/hash.html) library). | `BoostHash` |
-| `compressionType` | The message data compression type used by the producer. The available options are [`LZ4`](https://github.com/lz4/lz4), [`Zlib`](https://zlib.net/), [ZSTD](https://github.com/facebook/zstd/), and [SNAPPY](https://github.com/google/snappy/). | Compression None |
-| `batchingEnabled` | If set to `true`, the producer sends messages in batches. | `true` |
-| `batchingMaxPublishDelayMs` | The maximum delay before a batch is sent. | 10 |
-| `batchingMaxMessages` | The maximum number of messages in a batch. | 1000 |
-| `properties` | The metadata of the producer. | |
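-
-The following is a minimal sketch of tuning batching when creating a producer; the delay and size values shown are illustrative, not recommendations.
-
-```JavaScript
-
-const producer = await client.createProducer({
-  topic: 'my-topic',
-  batchingEnabled: true,
-  batchingMaxPublishDelayMs: 50, // wait up to 50 ms to fill a batch
-  batchingMaxMessages: 500,      // or flush once 500 messages are queued
-});
-
-```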
-
-### Producer example
-
-This example creates a Node.js producer for the `my-topic` topic and sends 10 messages to that topic:
-
-```JavaScript
-
-const Pulsar = require('pulsar-client');
-
-(async () => {
-  // Create a client
-  const client = new Pulsar.Client({
-    serviceUrl: 'pulsar://localhost:6650',
-  });
-
-  // Create a producer
-  const producer = await client.createProducer({
-    topic: 'my-topic',
-  });
-
-  // Send messages
-  for (let i = 0; i < 10; i += 1) {
-    const msg = `my-message-${i}`;
-    producer.send({
-      data: Buffer.from(msg),
-    });
-    console.log(`Sent message: ${msg}`);
-  }
-  await producer.flush();
-
-  await producer.close();
-  await client.close();
-})();
-
-```
-
-## Consumers
-
-Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on those topics. You can [configure](#consumer-configuration) Node.js consumers using a consumer configuration object.
-
-Here is an example:
-
-```JavaScript
-
-const consumer = await client.subscribe({
-  topic: 'my-topic',
-  subscription: 'my-subscription',
-});
-
-const msg = await consumer.receive();
-console.log(msg.getData().toString());
-consumer.acknowledge(msg);
-
-await consumer.close();
-
-```
-
-> #### Promise operation
-> When you create a new Pulsar consumer, the operation returns a `Promise` object through which you get the consumer instance or an error via an executor function.
-> This example uses the `await` operator instead of an executor function.
-
-### Consumer operations
-
-Pulsar Node.js consumers have the following methods available:
-
-| Method | Description | Return type |
-| :----- | :---------- | :---------- |
-| `receive()` | Receives a single message from the topic. When the message is available, the Promise resolves with the message object. | `Promise` |
-| `receive(Number)` | Receives a single message from the topic, with a specific timeout in milliseconds. | `Promise` |
-| `acknowledge(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message object. | `void` |
-| `acknowledgeId(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID object. | `void` |
-| `acknowledgeCumulative(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message. The `acknowledgeCumulative` method returns void and sends the ack to the broker asynchronously. After that, the messages are *not* redelivered to the consumer. Cumulative acking cannot be used with a [shared](concepts-messaging.md#shared) subscription type. | `void` |
-| `acknowledgeCumulativeId(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message ID. | `void` |
-| `negativeAcknowledge(Message)` | [Negatively acknowledges](reference-terminology.md#negative-acknowledgment-nack) a message to the Pulsar broker by message object. | `void` |
-| `negativeAcknowledgeId(MessageId)` | [Negatively acknowledges](reference-terminology.md#negative-acknowledgment-nack) a message to the Pulsar broker by message ID object. | `void` |
-| `close()` | Closes the consumer, disabling its ability to receive messages from the broker. | `Promise` |
-| `unsubscribe()` | Unsubscribes the subscription. | `Promise` |
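-
-For example, here is a minimal sketch of acknowledging on success and negatively acknowledging on failure so that failed messages are redelivered after `nAckRedeliverTimeoutMs`; the `processMessage` function is a hypothetical application-specific handler.
-
-```JavaScript
-
-const msg = await consumer.receive();
-try {
-  await processMessage(msg); // hypothetical application-specific processing
-  consumer.acknowledge(msg);
-} catch (err) {
-  // Ask the broker to redeliver the message later
-  consumer.negativeAcknowledge(msg);
-}
-
-```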
-
-### Consumer configuration
-
-| Parameter | Description | Default |
-| :-------- | :---------- | :------ |
-| `topic` | The Pulsar topic on which the consumer establishes a subscription and listens for messages. | |
-| `topics` | The array of topics. | |
-| `topicsPattern` | The regular expression for topics. | |
-| `subscription` | The subscription name for this consumer. | |
-| `subscriptionType` | Available options are `Exclusive`, `Shared`, `Key_Shared`, and `Failover`. | `Exclusive` |
-| `subscriptionInitialPosition` | The initial position of the cursor when subscribing to a topic for the first time. | `SubscriptionInitialPosition.Latest` |
-| `ackTimeoutMs` | Acknowledge timeout in milliseconds. | 0 |
-| `nAckRedeliverTimeoutMs` | Delay to wait before redelivering messages that failed to be processed. | 60000 |
-| `receiverQueueSize` | Sets the size of the consumer's receiver queue, i.e. the number of messages that can be accumulated by the consumer before the application calls `receive`. A value higher than the default of 1000 could increase consumer throughput, though at the expense of more memory utilization. | 1000 |
-| `receiverQueueSizeAcrossPartitions` | Sets the maximum total receiver queue size across partitions. This setting is used to reduce the receiver queue size for individual partitions if the total exceeds this value. | 50000 |
-| `consumerName` | The name of the consumer. Currently (v2.4.1), the [failover](concepts-messaging.md#failover) mode uses the consumer name for ordering. | |
-| `properties` | The metadata of the consumer. | |
-| `listener` | A listener that is called for each received message. | |
-| `readCompacted` | If `readCompacted` is enabled, the consumer reads messages from a compacted topic rather than the full message backlog of the topic.<br/><br/>The consumer only sees the latest value for each key in the compacted topic, up until reaching the point in the topic's message backlog where compaction took place. Beyond that point, messages are sent as normal.<br/><br/>`readCompacted` can only be enabled on subscriptions to persistent topics that have a single active consumer (like failover or exclusive subscriptions).<br/><br/>Attempting to enable it on subscriptions to non-persistent topics or on shared subscriptions leads to the subscription call throwing a `PulsarClientException`. | false |
-
-### Consumer example
-
-This example creates a Node.js consumer with the `my-subscription` subscription on the `my-topic` topic, receives 10 messages, prints their content, and acknowledges each message to the Pulsar broker:
-
-```JavaScript
-
-const Pulsar = require('pulsar-client');
-
-(async () => {
-  // Create a client
-  const client = new Pulsar.Client({
-    serviceUrl: 'pulsar://localhost:6650',
-  });
-
-  // Create a consumer
-  const consumer = await client.subscribe({
-    topic: 'my-topic',
-    subscription: 'my-subscription',
-    subscriptionType: 'Exclusive',
-  });
-
-  // Receive messages
-  for (let i = 0; i < 10; i += 1) {
-    const msg = await consumer.receive();
-    console.log(msg.getData().toString());
-    consumer.acknowledge(msg);
-  }
-
-  await consumer.close();
-  await client.close();
-})();
-
-```
-
-Alternatively, a consumer can be created with a `listener` to process messages.
-
-```JavaScript
-
-// Create a consumer
-const consumer = await client.subscribe({
-  topic: 'my-topic',
-  subscription: 'my-subscription',
-  subscriptionType: 'Exclusive',
-  listener: (msg, msgConsumer) => {
-    console.log(msg.getData().toString());
-    msgConsumer.acknowledge(msg);
-  },
-});
-
-```
-
-## Readers
-
-Pulsar readers process messages from Pulsar topics. Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recent unacknowledged message). You can [configure](#reader-configuration) Node.js readers using a reader configuration object.
-
-Here is an example:
-
-```JavaScript
-
-const reader = await client.createReader({
-  topic: 'my-topic',
-  startMessageId: Pulsar.MessageId.earliest(),
-});
-
-const msg = await reader.readNext();
-console.log(msg.getData().toString());
-
-await reader.close();
-
-```
-
-### Reader operations
-
-Pulsar Node.js readers have the following methods available:
-
-| Method | Description | Return type |
-| :----- | :---------- | :---------- |
-| `readNext()` | Receives the next message on the topic (analogous to the `receive` method for [consumers](#consumer-operations)). When the message is available, the Promise resolves with the message object. | `Promise` |
-| `readNext(Number)` | Receives a single message from the topic, with a specific timeout in milliseconds. | `Promise` |
-| `hasNext()` | Returns whether the broker has a next message in the target topic. | `Boolean` |
-| `close()` | Closes the reader, disabling its ability to receive messages from the broker. | `Promise` |
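-
-For example, here is a minimal sketch of polling with `readNext(Number)`; it assumes the returned promise rejects when the timeout elapses, and the 5000 ms value is illustrative.
-
-```JavaScript
-
-try {
-  // Wait up to 5 seconds for the next message
-  const msg = await reader.readNext(5000);
-  console.log(msg.getData().toString());
-} catch (err) {
-  // Assumed behavior: the promise rejects if no message arrives in time
-  console.error('No message received within the timeout', err);
-}
-
-```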
-
-### Reader configuration
-
-| Parameter | Description | Default |
-| :-------- | :---------- | :------ |
-| `topic` | The Pulsar [topic](reference-terminology.md#topic) on which the reader establishes a subscription and listens for messages. | |
-| `startMessageId` | The initial reader position, i.e. the message at which the reader begins processing messages. The options are `Pulsar.MessageId.earliest` (the earliest available message on the topic), `Pulsar.MessageId.latest` (the latest available message on the topic), or a message ID object for a position that is not earliest or latest. | |
-| `receiverQueueSize` | Sets the size of the reader's receiver queue, i.e. the number of messages that can be accumulated by the reader before the application calls `readNext`. A value higher than the default of 1000 could increase reader throughput, though at the expense of more memory utilization. | 1000 |
-| `readerName` | The name of the reader. | |
-| `subscriptionRolePrefix` | The subscription role prefix. | |
-| `readCompacted` | If `readCompacted` is enabled, the reader reads messages from a compacted topic rather than the full message backlog of the topic.<br/><br/>The reader only sees the latest value for each key in the compacted topic, up until reaching the point in the topic's message backlog where compaction took place. Beyond that point, messages are sent as normal.<br/><br/>`readCompacted` can only be enabled on subscriptions to persistent topics that have a single active consumer (like failover or exclusive subscriptions).<br/><br/>

    Attempting to enable it on subscriptions to non-persistent topics or on shared subscriptions leads to a subscription call throwing a `PulsarClientException`. | `false` | - - -### Reader example - -This example creates a Node.js reader with the `my-topic` topic, reads messages, and prints the content that arrive for 10 times: - -```JavaScript - -const Pulsar = require('pulsar-client'); - -(async () => { - // Create a client - const client = new Pulsar.Client({ - serviceUrl: 'pulsar://localhost:6650', - operationTimeoutSeconds: 30, - }); - - // Create a reader - const reader = await client.createReader({ - topic: 'my-topic', - startMessageId: Pulsar.MessageId.earliest(), - }); - - // read messages - for (let i = 0; i < 10; i += 1) { - const msg = await reader.readNext(); - console.log(msg.getData().toString()); - } - - await reader.close(); - await client.close(); -})(); - -``` - -## Messages - -In Pulsar Node.js client, you have to construct producer message object for producer. - -Here is an example message: - -```JavaScript - -const msg = { - data: Buffer.from('Hello, Pulsar'), - partitionKey: 'key1', - properties: { - 'foo': 'bar', - }, - eventTimestamp: Date.now(), - replicationClusters: [ - 'cluster1', - 'cluster2', - ], -} - -await producer.send(msg); - -``` - -The following keys are available for producer message objects: - -| Parameter | Description | -| :-------- | :---------- | -| `data` | The actual data payload of the message. | -| `properties` | A Object for any application-specific metadata attached to the message. | -| `eventTimestamp` | The timestamp associated with the message. | -| `sequenceId` | The sequence ID of the message. | -| `partitionKey` | The optional key associated with the message (particularly useful for things like topic compaction). | -| `replicationClusters` | The clusters to which this message is replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default. | -| `deliverAt` | The absolute timestamp at or after which the message is delivered. | | -| `deliverAfter` | The relative delay after which the message is delivered. | | - -### Message object operations - -In Pulsar Node.js client, you can receive (or read) message object as consumer (or reader). - -The message object have the following methods available: - -| Method | Description | Return type | -| :----- | :---------- | :---------- | -| `getTopicName()` | Getter method of topic name. | `String` | -| `getProperties()` | Getter method of properties. | `Array` | -| `getData()` | Getter method of message data. | `Buffer` | -| `getMessageId()` | Getter method of [message id object](#message-id-object-operations). | `Object` | -| `getPublishTimestamp()` | Getter method of publish timestamp. | `Number` | -| `getEventTimestamp()` | Getter method of event timestamp. | `Number` | -| `getRedeliveryCount()` | Getter method of redelivery count. | `Number` | -| `getPartitionKey()` | Getter method of partition key. | `String` | - -### Message ID object operations - -In Pulsar Node.js client, you can get message id object from message object. - -The message id object have the following methods available: - -| Method | Description | Return type | -| :----- | :---------- | :---------- | -| `serialize()` | Serialize the message id into a Buffer for storing. | `Buffer` | -| `toString()` | Get message id as String. | `String` | - -The client has static method of message id object. 
You can access it as `Pulsar.MessageId.someStaticMethod` too. - -The following static methods are available for the message id object: - -| Method | Description | Return type | -| :----- | :---------- | :---------- | -| `earliest()` | MessageId representing the earliest, or oldest available message stored in the topic. | `Object` | -| `latest()` | MessageId representing the latest, or last published message in the topic. | `Object` | -| `deserialize(Buffer)` | Deserialize a message id object from a Buffer. | `Object` | - -## End-to-end encryption - -[End-to-end encryption](https://pulsar.apache.org/docs/en/next/cookbooks-encryption/#docsNav) allows applications to encrypt messages at producers and decrypt at consumers. - -### Configuration - -If you want to use the end-to-end encryption feature in the Node.js client, you need to configure `publicKeyPath` and `privateKeyPath` for both producer and consumer. - -``` - -publicKeyPath: "./public.pem" -privateKeyPath: "./private.pem" - -``` - -### Tutorial - -This section provides step-by-step instructions on how to use the end-to-end encryption feature in the Node.js client. - -**Prerequisite** - -- Pulsar C++ client 2.7.1 or later - -**Step** - -1. Create both public and private key pairs. - - **Input** - - ```shell - - openssl genrsa -out private.pem 2048 - openssl rsa -in private.pem -pubout -out public.pem - - ``` - -2. Create a producer to send encrypted messages. - - **Input** - - ```nodejs - - const Pulsar = require('pulsar-client'); - - (async () => { - // Create a client - const client = new Pulsar.Client({ - serviceUrl: 'pulsar://localhost:6650', - operationTimeoutSeconds: 30, - }); - - // Create a producer - const producer = await client.createProducer({ - topic: 'persistent://public/default/my-topic', - sendTimeoutMs: 30000, - batchingEnabled: true, - publicKeyPath: "./public.pem", - privateKeyPath: "./private.pem", - encryptionKey: "encryption-key" - }); - - console.log(producer.ProducerConfig) - // Send messages - for (let i = 0; i < 10; i += 1) { - const msg = `my-message-${i}`; - producer.send({ - data: Buffer.from(msg), - }); - console.log(`Sent message: ${msg}`); - } - await producer.flush(); - - await producer.close(); - await client.close(); - })(); - - ``` - -3. Create a consumer to receive encrypted messages. - - **Input** - - ```nodejs - - const Pulsar = require('pulsar-client'); - - (async () => { - // Create a client - const client = new Pulsar.Client({ - serviceUrl: 'pulsar://172.25.0.3:6650', - operationTimeoutSeconds: 30 - }); - - // Create a consumer - const consumer = await client.subscribe({ - topic: 'persistent://public/default/my-topic', - subscription: 'sub1', - subscriptionType: 'Shared', - ackTimeoutMs: 10000, - publicKeyPath: "./public.pem", - privateKeyPath: "./private.pem" - }); - - console.log(consumer) - // Receive messages - for (let i = 0; i < 10; i += 1) { - const msg = await consumer.receive(); - console.log(msg.getData().toString()); - consumer.acknowledge(msg); - } - - await consumer.close(); - await client.close(); - })(); - - ``` - -4. Run the consumer to receive encrypted messages. - - **Input** - - ```shell - - node consumer.js - - ``` - -5. In a new terminal tab, run the producer to produce encrypted messages. - - **Input** - - ```shell - - node producer.js - - ``` - - Now you can see the producer sends messages and the consumer receives messages successfully. - - **Output** - - This is from the producer side. 
- - ``` - - Sent message: my-message-0 - Sent message: my-message-1 - Sent message: my-message-2 - Sent message: my-message-3 - Sent message: my-message-4 - Sent message: my-message-5 - Sent message: my-message-6 - Sent message: my-message-7 - Sent message: my-message-8 - Sent message: my-message-9 - - ``` - - This is from the consumer side. - - ``` - - my-message-0 - my-message-1 - my-message-2 - my-message-3 - my-message-4 - my-message-5 - my-message-6 - my-message-7 - my-message-8 - my-message-9 - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/client-libraries-python.md b/site2/website/versioned_docs/version-2.8.3-deprecated/client-libraries-python.md deleted file mode 100644 index f30cf55387d92e..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/client-libraries-python.md +++ /dev/null @@ -1,456 +0,0 @@ ---- -id: client-libraries-python -title: Pulsar Python client -sidebar_label: "Python" -original_id: client-libraries-python ---- - -Pulsar Python client library is a wrapper over the existing [C++ client library](client-libraries-cpp.md) and exposes all of the [same features](/api/cpp). You can find the code in the [`python` subdirectory](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp/python) of the C++ client code. - -All the methods in producer, consumer, and reader of a Python client are thread-safe. - -[pdoc](https://github.com/BurntSushi/pdoc)-generated API docs for the Python client are available [here](/api/python). - -## Install - -You can install the [`pulsar-client`](https://pypi.python.org/pypi/pulsar-client) library either via [PyPi](https://pypi.python.org/pypi), using [pip](#installation-using-pip), or by building the library from source. - -### Install using pip - -To install the `pulsar-client` library as a pre-built package using the [pip](https://pip.pypa.io/en/stable/) package manager: - -```shell - -$ pip install pulsar-client==@pulsar:version_number@ - -``` - -### Optional dependencies - -To support aspects like pulsar functions or Avro serialization, additional optional components can be installed alongside the `pulsar-client` library - -```shell - -# avro serialization -$ pip install pulsar-client[avro]=='@pulsar:version_number@' - -# functions runtime -$ pip install pulsar-client[functions]=='@pulsar:version_number@' - -# all optional components -$ pip install pulsar-client[all]=='@pulsar:version_number@' - -``` - -Installation via PyPi is available for the following Python versions: - -Platform | Supported Python versions -:--------|:------------------------- -MacOS
    10.13 (High Sierra), 10.14 (Mojave)
    | 2.7, 3.7 -Linux | 2.7, 3.4, 3.5, 3.6, 3.7, 3.8 - -### Install from source - -To install the `pulsar-client` library by building from source, follow [instructions](client-libraries-cpp.md#compilation) and compile the Pulsar C++ client library. That builds the Python binding for the library. - -To install the built Python bindings: - -```shell - -$ git clone https://github.com/apache/pulsar -$ cd pulsar/pulsar-client-cpp/python -$ sudo python setup.py install - -``` - -## API Reference - -The complete Python API reference is available at [api/python](/api/python). - -## Examples - -You can find a variety of Python code examples for the `pulsar-client` library. - -### Producer example - -The following example creates a Python producer for the `my-topic` topic and sends 10 messages on that topic: - -```python - -import pulsar - -client = pulsar.Client('pulsar://localhost:6650') - -producer = client.create_producer('my-topic') - -for i in range(10): - producer.send(('Hello-%d' % i).encode('utf-8')) - -client.close() - -``` - -### Consumer example - -The following example creates a consumer with the `my-subscription` subscription name on the `my-topic` topic, receives incoming messages, prints the content and ID of messages that arrive, and acknowledges each message to the Pulsar broker. - -```python - -import pulsar - -client = pulsar.Client('pulsar://localhost:6650') - -consumer = client.subscribe('my-topic', 'my-subscription') - -while True: - msg = consumer.receive() - try: - print("Received message '{}' id='{}'".format(msg.data(), msg.message_id())) - # Acknowledge successful processing of the message - consumer.acknowledge(msg) - except: - # Message failed to be processed - consumer.negative_acknowledge(msg) - -client.close() - -``` - -This example shows how to configure negative acknowledgement. - -```python - -from pulsar import Client, schema -client = Client('pulsar://localhost:6650') -consumer = client.subscribe('negative_acks','test',schema=schema.StringSchema()) -producer = client.create_producer('negative_acks',schema=schema.StringSchema()) -for i in range(10): - print('send msg "hello-%d"' % i) - producer.send_async('hello-%d' % i, callback=None) -producer.flush() -for i in range(10): - msg = consumer.receive() - consumer.negative_acknowledge(msg) - print('receive and nack msg "%s"' % msg.data()) -for i in range(10): - msg = consumer.receive() - consumer.acknowledge(msg) - print('receive and ack msg "%s"' % msg.data()) -try: - # No more messages expected - msg = consumer.receive(100) -except: - print("no more msg") - pass - -``` - -### Reader interface example - -You can use the Pulsar Python API to use the Pulsar [reader interface](concepts-clients.md#reader-interface). Here's an example: - -```python - -# MessageId taken from a previously fetched message -msg_id = msg.message_id() - -reader = client.create_reader('my-topic', msg_id) - -while True: - msg = reader.read_next() - print("Received message '{}' id='{}'".format(msg.data(), msg.message_id())) - # No acknowledgment - -``` - -### Multi-topic subscriptions - -In addition to subscribing a consumer to a single Pulsar topic, you can also subscribe to multiple topics simultaneously. To use multi-topic subscriptions, you can supply a regular expression (regex) or a `List` of topics. If you select topics via regex, all topics must be within the same Pulsar namespace. - -The following is an example. 
- -```python - -import re -consumer = client.subscribe(re.compile('persistent://public/default/topic-*'), 'my-subscription') -while True: - msg = consumer.receive() - try: - print("Received message '{}' id='{}'".format(msg.data(), msg.message_id())) - # Acknowledge successful processing of the message - consumer.acknowledge(msg) - except: - # Message failed to be processed - consumer.negative_acknowledge(msg) -client.close() - -``` - -## Schema - -### Declare and validate schema - -You can declare a schema by passing a class that inherits -from `pulsar.schema.Record` and defines the fields as -class variables. For example: - -```python - -from pulsar.schema import * - -class Example(Record): - a = String() - b = Integer() - c = Boolean() - -``` - -With this simple schema definition, you can create producers, consumers and readers instances that refer to that. - -```python - -producer = client.create_producer( - topic='my-topic', - schema=AvroSchema(Example) ) - -producer.send(Example(a='Hello', b=1)) - -``` - -After creating the producer, the Pulsar broker validates that the existing topic schema is indeed of "Avro" type and that the format is compatible with the schema definition of the `Example` class. - -If there is a mismatch, an exception occurs in the producer creation. - -Once a producer is created with a certain schema definition, -it will only accept objects that are instances of the declared -schema class. - -Similarly, for a consumer/reader, the consumer will return an -object, instance of the schema record class, rather than the raw -bytes: - -```python - -consumer = client.subscribe( - topic='my-topic', - subscription_name='my-subscription', - schema=AvroSchema(Example) ) - -while True: - msg = consumer.receive() - ex = msg.value() - try: - print("Received message a={} b={} c={}".format(ex.a, ex.b, ex.c)) - # Acknowledge successful processing of the message - consumer.acknowledge(msg) - except: - # Message failed to be processed - consumer.negative_acknowledge(msg) - -``` - -### Supported schema types - -You can use different builtin schema types in Pulsar. All the definitions are in the `pulsar.schema` package. - -| Schema | Notes | -| ------ | ----- | -| `BytesSchema` | Get the raw payload as a `bytes` object. No serialization/deserialization are performed. This is the default schema mode | -| `StringSchema` | Encode/decode payload as a UTF-8 string. Uses `str` objects | -| `JsonSchema` | Require record definition. Serializes the record into standard JSON payload | -| `AvroSchema` | Require record definition. Serializes in AVRO format | - -### Schema definition reference - -The schema definition is done through a class that inherits from `pulsar.schema.Record`. - -This class has a number of fields which can be of either -`pulsar.schema.Field` type or another nested `Record`. All the -fields are specified in the `pulsar.schema` package. The fields -are matching the AVRO fields types. - -| Field Type | Python Type | Notes | -| ---------- | ----------- | ----- | -| `Boolean` | `bool` | | -| `Integer` | `int` | | -| `Long` | `int` | | -| `Float` | `float` | | -| `Double` | `float` | | -| `Bytes` | `bytes` | | -| `String` | `str` | | -| `Array` | `list` | Need to specify record type for items. | -| `Map` | `dict` | Key is always `String`. Need to specify value type. | - -Additionally, any Python `Enum` type can be used as a valid field type. - -#### Fields parameters - -When adding a field, you can use these parameters in the constructor. 
- -| Argument | Default | Notes | -| ---------- | --------| ----- | -| `default` | `None` | Set a default value for the field. Eg: `a = Integer(default=5)` | -| `required` | `False` | Mark the field as "required". It is set in the schema accordingly. | - -#### Schema definition examples - -##### Simple definition - -```python - -class Example(Record): - a = String() - b = Integer() - c = Array(String()) - i = Map(String()) - -``` - -##### Using enums - -```python - -from enum import Enum - -class Color(Enum): - red = 1 - green = 2 - blue = 3 - -class Example(Record): - name = String() - color = Color - -``` - -##### Complex types - -```python - -class MySubRecord(Record): - x = Integer() - y = Long() - z = String() - -class Example(Record): - a = String() - sub = MySubRecord() - -``` - -## End-to-end encryption - -[End-to-end encryption](https://pulsar.apache.org/docs/en/next/cookbooks-encryption/#docsNav) allows applications to encrypt messages at producers and decrypt messages at consumers. - -### Configuration - -To use the end-to-end encryption feature in the Python client, you need to configure `publicKeyPath` and `privateKeyPath` for both producer and consumer. - -``` - -publicKeyPath: "./public.pem" -privateKeyPath: "./private.pem" - -``` - -### Tutorial - -This section provides step-by-step instructions on how to use the end-to-end encryption feature in the Python client. - -**Prerequisite** - -- Pulsar Python client 2.7.1 or later - -**Step** - -1. Create both public and private key pairs. - - **Input** - - ```shell - - openssl genrsa -out private.pem 2048 - openssl rsa -in private.pem -pubout -out public.pem - - ``` - -2. Create a producer to send encrypted messages. - - **Input** - - ```python - - import pulsar - - publicKeyPath = "./public.pem" - privateKeyPath = "./private.pem" - crypto_key_reader = pulsar.CryptoKeyReader(publicKeyPath, privateKeyPath) - client = pulsar.Client('pulsar://localhost:6650') - producer = client.create_producer(topic='encryption', encryption_key='encryption', crypto_key_reader=crypto_key_reader) - producer.send('encryption message'.encode('utf8')) - print('sent message') - producer.close() - client.close() - - ``` - -3. Create a consumer to receive encrypted messages. - - **Input** - - ```python - - import pulsar - - publicKeyPath = "./public.pem" - privateKeyPath = "./private.pem" - crypto_key_reader = pulsar.CryptoKeyReader(publicKeyPath, privateKeyPath) - client = pulsar.Client('pulsar://localhost:6650') - consumer = client.subscribe(topic='encryption', subscription_name='encryption-sub', crypto_key_reader=crypto_key_reader) - msg = consumer.receive() - print("Received msg '{}' id = '{}'".format(msg.data(), msg.message_id())) - consumer.close() - client.close() - - ``` - -4. Run the consumer to receive encrypted messages. - - **Input** - - ```shell - - python consumer.py - - ``` - -5. In a new terminal tab, run the producer to produce encrypted messages. - - **Input** - - ```shell - - python producer.py - - ``` - - Now you can see the producer sends messages and the consumer receives messages successfully. - - **Output** - - This is from the producer side. - - ``` - - sent message - - ``` - - This is from the consumer side. 
- - ``` - - Received msg 'b'encryption message'' id = '(0,0,-1,-1)' - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/client-libraries-websocket.md b/site2/website/versioned_docs/version-2.8.3-deprecated/client-libraries-websocket.md deleted file mode 100644 index e2da41f0461b40..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/client-libraries-websocket.md +++ /dev/null @@ -1,657 +0,0 @@ ---- -id: client-libraries-websocket -title: Pulsar WebSocket API -sidebar_label: "WebSocket" -original_id: client-libraries-websocket ---- - -Pulsar [WebSocket](https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API) API provides a simple way to interact with Pulsar using languages that do not have an official [client library](getting-started-clients.md). Through WebSocket, you can publish and consume messages and use features available on the [Client Features Matrix](https://github.com/apache/pulsar/wiki/Client-Features-Matrix) page. - - -> You can use Pulsar WebSocket API with any WebSocket client library. See examples for Python and Node.js [below](#client-examples). - -## Running the WebSocket service - -The standalone variant of Pulsar that we recommend using for [local development](getting-started-standalone.md) already has the WebSocket service enabled. - -In non-standalone mode, there are two ways to deploy the WebSocket service: - -* [embedded](#embedded-with-a-pulsar-broker) with a Pulsar broker -* as a [separate component](#as-a-separate-component) - -### Embedded with a Pulsar broker - -In this mode, the WebSocket service will run within the same HTTP service that's already running in the broker. To enable this mode, set the [`webSocketServiceEnabled`](reference-configuration.md#broker-webSocketServiceEnabled) parameter in the [`conf/broker.conf`](reference-configuration.md#broker) configuration file in your installation. - -```properties - -webSocketServiceEnabled=true - -``` - -### As a separate component - -In this mode, the WebSocket service will be run from a Pulsar [broker](reference-terminology.md#broker) as a separate service. Configuration for this mode is handled in the [`conf/websocket.conf`](reference-configuration.md#websocket) configuration file. You'll need to set *at least* the following parameters: - -* [`configurationStoreServers`](reference-configuration.md#websocket-configurationStoreServers) -* [`webServicePort`](reference-configuration.md#websocket-webServicePort) -* [`clusterName`](reference-configuration.md#websocket-clusterName) - -Here's an example: - -```properties - -configurationStoreServers=zk1:2181,zk2:2181,zk3:2181 -webServicePort=8080 -clusterName=my-cluster - -``` - -### Security settings - -To enable TLS encryption on WebSocket service: - -```properties - -tlsEnabled=true -tlsAllowInsecureConnection=false -tlsCertificateFilePath=/path/to/client-websocket.cert.pem -tlsKeyFilePath=/path/to/client-websocket.key-pk8.pem -tlsTrustCertsFilePath=/path/to/ca.cert.pem - -``` - -### Starting the broker - -When the configuration is set, you can start the service using the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) tool: - -```shell - -$ bin/pulsar-daemon start websocket - -``` - -## API Reference - -Pulsar's WebSocket API offers three endpoints for [producing](#producer-endpoint) messages, [consuming](#consumer-endpoint) messages and [reading](#reader-endpoint) messages. - -All exchanges via the WebSocket API use JSON. 
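-
-For example, here is a minimal sketch of publishing one message over WebSocket from Node.js using the producer endpoint described below; it assumes the [`ws`](https://www.npmjs.com/package/ws) npm package and a WebSocket service running locally on port 8080.
-
-```JavaScript
-
-const WebSocket = require('ws');
-
-// Producer endpoint for persistent://public/default/my-topic
-const ws = new WebSocket('ws://localhost:8080/ws/v2/producer/persistent/public/default/my-topic');
-
-ws.on('open', () => {
-  // All frames are JSON; payloads are Base64-encoded
-  ws.send(JSON.stringify({ payload: Buffer.from('Hello, Pulsar').toString('base64') }));
-});
-
-ws.on('message', (frame) => {
-  console.log('Broker response:', frame.toString());
-  ws.close();
-});
-
-```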
-
-### Authentication
-
-#### Browser JavaScript WebSocket client
-
-Use the query param `token` to transport the authentication token.
-
-```http
-
-ws://broker-service-url:8080/path?token=token
-
-```
-
-### Producer endpoint
-
-The producer endpoint requires you to specify a tenant, namespace, and topic in the URL:
-
-```http
-
-ws://broker-service-url:8080/ws/v2/producer/persistent/:tenant/:namespace/:topic
-
-```
-
-##### Query param
-
-Key | Type | Required? | Explanation
-:---|:-----|:----------|:-----------
-`sendTimeoutMillis` | long | no | Send timeout (default: 30 secs)
-`batchingEnabled` | boolean | no | Enable batching of messages (default: false)
-`batchingMaxMessages` | int | no | Maximum number of messages permitted in a batch (default: 1000)
-`maxPendingMessages` | int | no | Set the max size of the internal-queue holding the messages (default: 1000)
-`batchingMaxPublishDelay` | long | no | Time period within which the messages will be batched (default: 10ms)
-`messageRoutingMode` | string | no | Message [routing mode](https://pulsar.apache.org/api/client/index.html?org/apache/pulsar/client/api/ProducerConfiguration.MessageRoutingMode.html) for the partitioned producer: `SinglePartition`, `RoundRobinPartition`
-`compressionType` | string | no | Compression [type](https://pulsar.apache.org/api/client/index.html?org/apache/pulsar/client/api/CompressionType.html): `LZ4`, `ZLIB`
-`producerName` | string | no | Specify the name for the producer. Pulsar enforces that only one producer with the same name can publish on a topic
-`initialSequenceId` | long | no | Set the baseline for the sequence ids for messages published by the producer.
-`hashingScheme` | string | no | [Hashing function](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.HashingScheme.html) to use when publishing on a partitioned topic: `JavaStringHash`, `Murmur3_32Hash`
-`token` | string | no | Authentication token; this is used for the browser JavaScript client
-
-
-#### Publishing a message
-
-```json
-
-{
-  "payload": "SGVsbG8gV29ybGQ=",
-  "properties": {"key1": "value1", "key2": "value2"},
-  "context": "1"
-}
-
-```
-
-Key | Type | Required? | Explanation
-:---|:-----|:----------|:-----------
-`payload` | string | yes | Base-64 encoded payload
-`properties` | key-value pairs | no | Application-defined properties
-`context` | string | no | Application-defined request identifier
-`key` | string | no | For partitioned topics, decides which partition to use
-`replicationClusters` | array | no | Restrict replication to this list of [clusters](reference-terminology.md#cluster), specified by name
-
-
-##### Example success response
-
-```json
-
-{
-   "result": "ok",
-   "messageId": "CAAQAw==",
-   "context": "1"
-}
-
-```
-
-##### Example failure response
-
-```json
-
-{
-   "result": "send-error:3",
-   "errorMsg": "Failed to de-serialize from JSON",
-   "context": "1"
-}
-
-```
-
-Key | Type | Required? | Explanation
-:---|:-----|:----------|:-----------
-`result` | string | yes | `ok` if successful or an error message if unsuccessful
-`messageId` | string | yes | Message ID assigned to the published message
-`context` | string | no | Application-defined request identifier
-
-
-### Consumer endpoint
-
-The consumer endpoint requires you to specify a tenant, namespace, and topic, as well as a subscription, in the URL:
-
-```http
-
-ws://broker-service-url:8080/ws/v2/consumer/persistent/:tenant/:namespace/:topic/:subscription
-
-```
-
-##### Query param
-
-Key | Type | Required? | Explanation
-:---|:-----|:----------|:-----------
-`ackTimeoutMillis` | long | no | Set the timeout for unacked messages (default: 0)
-`subscriptionType` | string | no | [Subscription type](https://pulsar.apache.org/api/client/index.html?org/apache/pulsar/client/api/SubscriptionType.html): `Exclusive`, `Failover`, `Shared`, `Key_Shared`
-`receiverQueueSize` | int | no | Size of the consumer receive queue (default: 1000)
-`consumerName` | string | no | Consumer name
-`priorityLevel` | int | no | Define a [priority](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setPriorityLevel-int-) for the consumer
-`maxRedeliverCount` | int | no | Define a [maxRedeliverCount](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#deadLetterPolicy-org.apache.pulsar.client.api.DeadLetterPolicy-) for the consumer (default: 0). Activates the [Dead Letter Topic](https://github.com/apache/pulsar/wiki/PIP-22%3A-Pulsar-Dead-Letter-Topic) feature.
-`deadLetterTopic` | string | no | Define a [deadLetterTopic](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#deadLetterPolicy-org.apache.pulsar.client.api.DeadLetterPolicy-) for the consumer (default: {topic}-{subscription}-DLQ). Activates the [Dead Letter Topic](https://github.com/apache/pulsar/wiki/PIP-22%3A-Pulsar-Dead-Letter-Topic) feature.
-`pullMode` | boolean | no | Enable pull mode (default: false). See "Flow Control" below.
-`negativeAckRedeliveryDelay` | int | no | Delay (in milliseconds) before a negatively acknowledged message is redelivered (default: 1 minute)
-`token` | string | no | Authentication token; this is used for the browser JavaScript client
-
-NB: these parameters (except `pullMode`) apply to the internal consumer of the WebSocket service,
-so messages will be subject to the redelivery settings as soon as they get into the receive queue,
-even if the client doesn't consume them over the WebSocket.
-
-##### Receiving messages
-
-Server will push messages on the WebSocket session:
-
-```json
-
-{
-  "messageId": "CAMQADAA",
-  "payload": "hvXcJvHW7kOSrUn17P2q71RA5SdiXwZBqw==",
-  "properties": {},
-  "publishTime": "2021-10-29T16:01:38.967-07:00",
-  "redeliveryCount": 0,
-  "encryptionContext": {
-    "keys": {
-      "client-rsa.pem": {
-        "keyValue": "jEuwS+PeUzmCo7IfLNxqoj4h7txbLjCQjkwpaw5AWJfZ2xoIdMkOuWDkOsqgFmWwxiecakS6GOZHs94x3sxzKHQx9Oe1jpwBg2e7L4fd26pp+WmAiLm/ArZJo6JotTeFSvKO3u/yQtGTZojDDQxiqFOQ1ZbMdtMZA8DpSMuq+Zx7PqLo43UdW1+krjQfE5WD+y+qE3LJQfwyVDnXxoRtqWLpVsAROlN2LxaMbaftv5HckoejJoB4xpf/dPOUqhnRstwQHf6klKT5iNhjsY4usACt78uILT0pEPd14h8wEBidBz/vAlC/zVMEqiDVzgNS7dqEYS4iHbf7cnWVCn3Hxw==",
-        "metadata": {}
-      }
-    },
-    "param": "Tfu1PxVm6S9D3+Hk",
-    "compressionType": "NONE",
-    "uncompressedMessageSize": 0,
-    "batchSize": {
-      "empty": false,
-      "present": true
-    }
-  }
-}
-
-```
-
-Below are the parameters in the WebSocket consumer response.
-
-- General parameters
-
-  Key | Type | Required? 
| Explanation - :---|:-----|:----------|:----------- - `messageId` | string | yes | Message ID - `payload` | string | yes | Base-64 encoded payload - `publishTime` | string | yes | Publish timestamp - `redeliveryCount` | number | yes | Number of times this message was already delivered - `properties` | key-value pairs | no | Application-defined properties - `key` | string | no | Original routing key set by producer - `encryptionContext` | EncryptionContext | no | Encryption context that consumers can use to decrypt received messages - `param` | string | no | Initialization vector for cipher (Base64 encoding) - `batchSize` | string | no | Number of entries in a message (if it is a batch message) - `uncompressedMessageSize` | string | no | Message size before compression - `compressionType` | string | no | Algorithm used to compress the message payload - -- `encryptionContext` related parameter - - Key | Type | Required? | Explanation - :---|:-----|:----------|:----------- - `keys` |key-EncryptionKey pairs | yes | Key in `key-EncryptionKey` pairs is an encryption key name. Value in `key-EncryptionKey` pairs is an encryption key object. - -- `encryptionKey` related parameters - - Key | Type | Required? | Explanation - :---|:-----|:----------|:----------- - `keyValue` | string | yes | Encryption key (Base64 encoding) - `metadata` | key-value pairs | no | Application-defined metadata - -#### Acknowledging the message - -Consumer needs to acknowledge the successful processing of the message to -have the Pulsar broker delete it. - -```json - -{ - "messageId": "CAAQAw==" -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`messageId`| string | yes | Message ID of the processed message - -#### Negatively acknowledging messages - -```json - -{ - "type": "negativeAcknowledge", - "messageId": "CAAQAw==" -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`messageId`| string | yes | Message ID of the processed message - -#### Flow control - -##### Push Mode - -By default (`pullMode=false`), the consumer endpoint will use the `receiverQueueSize` parameter both to size its -internal receive queue and to limit the number of unacknowledged messages that are passed to the WebSocket client. -In this mode, if you don't send acknowledgements, the Pulsar WebSocket service will stop sending messages after reaching -`receiverQueueSize` unacked messages sent to the WebSocket client. - -##### Pull Mode - -If you set `pullMode` to `true`, the WebSocket client will need to send `permit` commands to permit the -Pulsar WebSocket service to send more messages. - -```json - -{ - "type": "permit", - "permitMessages": 100 -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`type`| string | yes | Type of command. Must be `permit` -`permitMessages`| int | yes | Number of messages to permit - -NB: in this mode it's possible to acknowledge messages in a different connection. - -#### Check if reach end of topic - -Consumer can check if it has reached end of topic by sending `isEndOfTopic` request. - -**Request** - -```json - -{ - "type": "isEndOfTopic" -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`type`| string | yes | Type of command. 
Must be `isEndOfTopic` - -**Response** - -```json - -{ - "endOfTopic": "true/false" - } - -``` - -### Reader endpoint - -The reader endpoint requires you to specify a tenant, namespace, and topic in the URL: - -```http - -ws://broker-service-url:8080/ws/v2/reader/persistent/:tenant/:namespace/:topic - -``` - -##### Query param - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`readerName` | string | no | Reader name -`receiverQueueSize` | int | no | Size of the consumer receive queue (default: 1000) -`messageId` | int or enum | no | Message ID to start from, `earliest` or `latest` (default: `latest`) -`token` | string | no | Authentication token, this is used for the browser javascript client - -##### Receiving messages - -Server will push messages on the WebSocket session: - -```json - -{ - "messageId": "CAAQAw==", - "payload": "SGVsbG8gV29ybGQ=", - "properties": {"key1": "value1", "key2": "value2"}, - "publishTime": "2016-08-30 16:45:57.785", - "redeliveryCount": 4 -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`messageId` | string | yes | Message ID -`payload` | string | yes | Base-64 encoded payload -`publishTime` | string | yes | Publish timestamp -`redeliveryCount` | number | yes | Number of times this message was already delivered -`properties` | key-value pairs | no | Application-defined properties -`key` | string | no | Original routing key set by producer - -#### Acknowledging the message - -**In WebSocket**, Reader needs to acknowledge the successful processing of the message to -have the Pulsar WebSocket service update the number of pending messages. -If you don't send acknowledgements, Pulsar WebSocket service will stop sending messages after reaching the pendingMessages limit. - -```json - -{ - "messageId": "CAAQAw==" -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`messageId`| string | yes | Message ID of the processed message - -#### Check if reach end of topic - -Consumer can check if it has reached end of topic by sending `isEndOfTopic` request. - -**Request** - -```json - -{ - "type": "isEndOfTopic" -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`type`| string | yes | Type of command. Must be `isEndOfTopic` - -**Response** - -```json - -{ - "endOfTopic": "true/false" - } - -``` - -### Error codes - -In case of error the server will close the WebSocket session using the -following error codes: - -Error Code | Error Message -:----------|:------------- -1 | Failed to create producer -2 | Failed to subscribe -3 | Failed to deserialize from JSON -4 | Failed to serialize to JSON -5 | Failed to authenticate client -6 | Client is not authorized -7 | Invalid payload encoding -8 | Unknown error - -> The application is responsible for re-establishing a new WebSocket session after a backoff period. - -## Client examples - -Below you'll find code examples for the Pulsar WebSocket API in [Python](#python) and [Node.js](#nodejs). - -### Python - -This example uses the [`websocket-client`](https://pypi.python.org/pypi/websocket-client) package. You can install it using [pip](https://pypi.python.org/pypi/pip): - -```shell - -$ pip install websocket-client - -``` - -You can also download it from [PyPI](https://pypi.python.org/pypi/websocket-client). 
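-
-Since the server closes the WebSocket session on the errors listed above and leaves re-establishing it to the application, a client typically wraps connection setup in a retry loop. Below is a minimal, hypothetical sketch of such a backoff loop using the `websocket-client` package installed above:
-
-```python
-
-import time
-import websocket
-
-def connect_with_backoff(url, max_backoff_secs=30):
-    """Re-establish a WebSocket session, doubling the wait after each failure."""
-    backoff = 1
-    while True:
-        try:
-            return websocket.create_connection(url)
-        except Exception:
-            time.sleep(backoff)
-            backoff = min(backoff * 2, max_backoff_secs)
-
-```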
-
-#### Python producer
-
-Here's an example Python producer that sends a simple message to a Pulsar [topic](reference-terminology.md#topic):
-
-```python
-
-import websocket, base64, json
-
-# If you set enable_TLS to True, you also have to set tlsEnabled to true in conf/websocket.conf.
-enable_TLS = False
-scheme = 'ws'
-if enable_TLS:
-    scheme = 'wss'
-
-TOPIC = scheme + '://localhost:8080/ws/v2/producer/persistent/public/default/my-topic'
-
-ws = websocket.create_connection(TOPIC)
-
-# Send one message as JSON
-ws.send(json.dumps({
-    'payload' : base64.b64encode(b'Hello World').decode('utf-8'),
-    'properties': {
-        'key1' : 'value1',
-        'key2' : 'value2'
-    },
-    'context' : 5
-}))
-
-response = json.loads(ws.recv())
-if response['result'] == 'ok':
-    print('Message published successfully')
-else:
-    print('Failed to publish message:', response)
-ws.close()
-
-```
-
-#### Python consumer
-
-Here's an example Python consumer that listens on a Pulsar topic and prints the message ID whenever a message arrives:
-
-```python
-
-import websocket, base64, json
-
-# If you set enable_TLS to True, you also have to set tlsEnabled to true in conf/websocket.conf.
-enable_TLS = False
-scheme = 'ws'
-if enable_TLS:
-    scheme = 'wss'
-
-TOPIC = scheme + '://localhost:8080/ws/v2/consumer/persistent/public/default/my-topic/my-sub'
-
-ws = websocket.create_connection(TOPIC)
-
-while True:
-    msg = json.loads(ws.recv())
-    if not msg: break
-
-    print("Received: {} - payload: {}".format(msg, base64.b64decode(msg['payload'])))
-
-    # Acknowledge successful processing
-    ws.send(json.dumps({'messageId' : msg['messageId']}))
-
-ws.close()
-
-```
-
-#### Python reader
-
-Here's an example Python reader that listens on a Pulsar topic and prints the message ID whenever a message arrives:
-
-```python
-
-import websocket, base64, json
-
-# If you set enable_TLS to True, you also have to set tlsEnabled to true in conf/websocket.conf.
-enable_TLS = False
-scheme = 'ws'
-if enable_TLS:
-    scheme = 'wss'
-
-TOPIC = scheme + '://localhost:8080/ws/v2/reader/persistent/public/default/my-topic'
-ws = websocket.create_connection(TOPIC)
-
-while True:
-    msg = json.loads(ws.recv())
-    if not msg: break
-
-    print("Received: {} - payload: {}".format(msg, base64.b64decode(msg['payload'])))
-
-    # Acknowledge successful processing
-    ws.send(json.dumps({'messageId' : msg['messageId']}))
-
-ws.close()
-
-```
-
-### Node.js
-
-This example uses the [`ws`](https://websockets.github.io/ws/) package. You can install it using [npm](https://www.npmjs.com/):
-
-```shell
-
-$ npm install ws
-
-```
-
-#### Node.js producer
-
-Here's an example Node.js producer that sends a simple message to a Pulsar topic:
-
-```javascript
-
-const WebSocket = require('ws');
-
-// If you set enableTLS to true, you also have to set tlsEnabled to true in conf/websocket.conf.
-const enableTLS = false;
-const topic = `${enableTLS ? 'wss' : 'ws'}://localhost:8080/ws/v2/producer/persistent/public/default/my-topic`;
-const ws = new WebSocket(topic);
-
-var message = {
-    "payload" : Buffer.from("Hello World").toString('base64'),
-    "properties": {
-        "key1" : "value1",
-        "key2" : "value2"
-    },
-    "context" : "1"
-};
-
-ws.on('open', function() {
-    // Send one message
-    ws.send(JSON.stringify(message));
-});
-
-ws.on('message', function(message) {
-    console.log('received ack: %s', message);
-});
-
-```
-
-#### Node.js consumer
-
-Here's an example Node.js consumer that listens on the same topic used by the producer above:
-
-```javascript
-
-const WebSocket = require('ws');
-
-// If you set enableTLS to true, you also have to set tlsEnabled to true in conf/websocket.conf.
-const enableTLS = false;
-const topic = `${enableTLS ? 'wss' : 'ws'}://localhost:8080/ws/v2/consumer/persistent/public/default/my-topic/my-sub`;
-const ws = new WebSocket(topic);
-
-ws.on('message', function(message) {
-    var receiveMsg = JSON.parse(message);
-    console.log('Received: %s - payload: %s', message, Buffer.from(receiveMsg.payload, 'base64').toString());
-    var ackMsg = {"messageId" : receiveMsg.messageId};
-    ws.send(JSON.stringify(ackMsg));
-});
-
-```
-
-#### Node.js reader
-
-```javascript
-
-const WebSocket = require('ws');
-
-// If you set enableTLS to true, you also have to set tlsEnabled to true in conf/websocket.conf.
-const enableTLS = false;
-const topic = `${enableTLS ? 'wss' : 'ws'}://localhost:8080/ws/v2/reader/persistent/public/default/my-topic`;
-const ws = new WebSocket(topic);
-
-ws.on('message', function(message) {
-    var receiveMsg = JSON.parse(message);
-    console.log('Received: %s - payload: %s', message, Buffer.from(receiveMsg.payload, 'base64').toString());
-    var ackMsg = {"messageId" : receiveMsg.messageId};
-    ws.send(JSON.stringify(ackMsg));
-});
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/client-libraries.md b/site2/website/versioned_docs/version-2.8.3-deprecated/client-libraries.md
deleted file mode 100644
index 00d128c514040f..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/client-libraries.md
+++ /dev/null
@@ -1,35 +0,0 @@
----
-id: client-libraries
-title: Pulsar client libraries
-sidebar_label: "Overview"
-original_id: client-libraries
----
-
-Pulsar supports the following client libraries:
-
-- [Java client](client-libraries-java.md)
-- [Go client](client-libraries-go.md)
-- [Python client](client-libraries-python.md)
-- [C++ client](client-libraries-cpp.md)
-- [Node.js client](client-libraries-node.md)
-- [WebSocket client](client-libraries-websocket.md)
-- [C# client](client-libraries-dotnet.md)
-
-## Feature matrix
-The Pulsar client feature matrix for different languages is listed on the [Client Features Matrix](https://github.com/apache/pulsar/wiki/Client-Features-Matrix) page.
-
-## Third-party clients
-
-Besides the officially released clients, multiple projects for developing Pulsar clients are available in different languages.
-
-> If you have developed a new Pulsar client, feel free to submit a pull request and add your client to the list below. 
-
-| Language | Project | Maintainer | License | Description |
-|----------|---------|------------|---------|-------------|
-| Go | [pulsar-client-go](https://github.com/Comcast/pulsar-client-go) | [Comcast](https://github.com/Comcast) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | A native golang client |
-| Go | [go-pulsar](https://github.com/t2y/go-pulsar) | [t2y](https://github.com/t2y) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | |
-| Haskell | [supernova](https://github.com/cr-org/supernova) | [Chatroulette](https://github.com/cr-org) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Native Pulsar client for Haskell |
-| Scala | [neutron](https://github.com/cr-org/neutron) | [Chatroulette](https://github.com/cr-org) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Purely functional Apache Pulsar client for Scala built on top of Fs2 |
-| Scala | [pulsar4s](https://github.com/sksamuel/pulsar4s) | [sksamuel](https://github.com/sksamuel) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Idiomatic, typesafe, and reactive Scala client for Apache Pulsar |
-| Rust | [pulsar-rs](https://github.com/wyyerd/pulsar-rs) | [Wyyerd Group](https://github.com/wyyerd) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Future-based Rust bindings for Apache Pulsar |
-| .NET | [pulsar-client-dotnet](https://github.com/fsharplang-ru/pulsar-client-dotnet) | [Lanayx](https://github.com/Lanayx) | [![GitHub](https://img.shields.io/badge/license-MIT-green.svg)](https://opensource.org/licenses/MIT) | Native .NET client for C#/F#/VB |
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/concepts-architecture-overview.md b/site2/website/versioned_docs/version-2.8.3-deprecated/concepts-architecture-overview.md
deleted file mode 100644
index f3e75c3e307e0c..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/concepts-architecture-overview.md
+++ /dev/null
@@ -1,172 +0,0 @@
----
-id: concepts-architecture-overview
-title: Architecture Overview
-sidebar_label: "Architecture"
-original_id: concepts-architecture-overview
----
-
-At the highest level, a Pulsar instance is composed of one or more Pulsar clusters. Clusters within an instance can [replicate](concepts-replication.md) data amongst themselves.
-
-In a Pulsar cluster:
-
-* One or more brokers handle and load balance incoming messages from producers, dispatch messages to consumers, communicate with the Pulsar configuration store to handle various coordination tasks, store messages in BookKeeper instances (aka bookies), rely on a cluster-specific ZooKeeper cluster for certain tasks, and more.
-* A BookKeeper cluster consisting of one or more bookies handles [persistent storage](#persistent-storage) of messages.
-* A ZooKeeper cluster specific to that cluster handles coordination tasks for that Pulsar cluster. 
- -The diagram below provides an illustration of a Pulsar cluster: - -![Pulsar architecture diagram](/assets/pulsar-system-architecture.png) - -At the broader instance level, an instance-wide ZooKeeper cluster called the configuration store handles coordination tasks involving multiple clusters, for example [geo-replication](concepts-replication.md). - -## Brokers - -The Pulsar message broker is a stateless component that's primarily responsible for running two other components: - -* An HTTP server that exposes a {@inject: rest:REST:/} API for both administrative tasks and [topic lookup](concepts-clients.md#client-setup-phase) for producers and consumers. The producers connect to the brokers to publish messages and the consumers connect to the brokers to consume the messages. -* A dispatcher, which is an asynchronous TCP server over a custom [binary protocol](developing-binary-protocol.md) used for all data transfers - -Messages are typically dispatched out of a [managed ledger](#managed-ledgers) cache for the sake of performance, *unless* the backlog exceeds the cache size. If the backlog grows too large for the cache, the broker will start reading entries from BookKeeper. - -Finally, to support geo-replication on global topics, the broker manages replicators that tail the entries published in the local region and republish them to the remote region using the Pulsar [Java client library](client-libraries-java.md). - -> For a guide to managing Pulsar brokers, see the [brokers](admin-api-brokers.md) guide. - -## Clusters - -A Pulsar instance consists of one or more Pulsar *clusters*. Clusters, in turn, consist of: - -* One or more Pulsar [brokers](#brokers) -* A ZooKeeper quorum used for cluster-level configuration and coordination -* An ensemble of bookies used for [persistent storage](#persistent-storage) of messages - -Clusters can replicate amongst themselves using [geo-replication](concepts-replication.md). - -> For a guide to managing Pulsar clusters, see the [clusters](admin-api-clusters.md) guide. - -## Metadata store - -The Pulsar metadata store maintains all the metadata of a Pulsar cluster, such as topic metadata, schema, broker load data, and so on. Pulsar uses [Apache ZooKeeper](https://zookeeper.apache.org/) for metadata storage, cluster configuration, and coordination. The Pulsar metadata store can be deployed on a separate ZooKeeper cluster or deployed on an existing ZooKeeper cluster. You can use one ZooKeeper cluster for both Pulsar metadata store and BookKeeper metadata store. If you want to deploy Pulsar brokers connected to an existing BookKeeper cluster, you need to deploy separate ZooKeeper clusters for Pulsar metadata store and BookKeeper metadata store respectively. - -In a Pulsar instance: - -* A configuration store quorum stores configuration for tenants, namespaces, and other entities that need to be globally consistent. -* Each cluster has its own local ZooKeeper ensemble that stores cluster-specific configuration and coordination such as which brokers are responsible for which topics as well as ownership metadata, broker load reports, BookKeeper ledger metadata, and more. - -## Configuration store - -The configuration store maintains all the configurations of a Pulsar instance, such as clusters, tenants, namespaces, partitioned topic related configurations, and so on. A Pulsar instance can have a single local cluster, multiple local clusters, or multiple cross-region clusters. 
Consequently, the configuration store can share the configurations across multiple clusters under a Pulsar instance. The configuration store can be deployed on a separate ZooKeeper cluster or deployed on an existing ZooKeeper cluster. - -## Persistent storage - -Pulsar provides guaranteed message delivery for applications. If a message successfully reaches a Pulsar broker, it will be delivered to its intended target. - -This guarantee requires that non-acknowledged messages are stored in a durable manner until they can be delivered to and acknowledged by consumers. This mode of messaging is commonly called *persistent messaging*. In Pulsar, N copies of all messages are stored and synced on disk, for example 4 copies across two servers with mirrored [RAID](https://en.wikipedia.org/wiki/RAID) volumes on each server. - -### Apache BookKeeper - -Pulsar uses a system called [Apache BookKeeper](http://bookkeeper.apache.org/) for persistent message storage. BookKeeper is a distributed [write-ahead log](https://en.wikipedia.org/wiki/Write-ahead_logging) (WAL) system that provides a number of crucial advantages for Pulsar: - -* It enables Pulsar to utilize many independent logs, called [ledgers](#ledgers). Multiple ledgers can be created for topics over time. -* It offers very efficient storage for sequential data that handles entry replication. -* It guarantees read consistency of ledgers in the presence of various system failures. -* It offers even distribution of I/O across bookies. -* It's horizontally scalable in both capacity and throughput. Capacity can be immediately increased by adding more bookies to a cluster. -* Bookies are designed to handle thousands of ledgers with concurrent reads and writes. By using multiple disk devices---one for journal and another for general storage--bookies are able to isolate the effects of read operations from the latency of ongoing write operations. - -In addition to message data, *cursors* are also persistently stored in BookKeeper. Cursors are [subscription](reference-terminology.md#subscription) positions for [consumers](reference-terminology.md#consumer). BookKeeper enables Pulsar to store consumer position in a scalable fashion. - -At the moment, Pulsar supports persistent message storage. This accounts for the `persistent` in all topic names. Here's an example: - -```http - -persistent://my-tenant/my-namespace/my-topic - -``` - -> Pulsar also supports ephemeral ([non-persistent](concepts-messaging.md#non-persistent-topics)) message storage. - - -You can see an illustration of how brokers and bookies interact in the diagram below: - -![Brokers and bookies](/assets/broker-bookie.png) - - -### Ledgers - -A ledger is an append-only data structure with a single writer that is assigned to multiple BookKeeper storage nodes, or bookies. Ledger entries are replicated to multiple bookies. Ledgers themselves have very simple semantics: - -* A Pulsar broker can create a ledger, append entries to the ledger, and close the ledger. -* After the ledger has been closed---either explicitly or because the writer process crashed---it can then be opened only in read-only mode. -* Finally, when entries in the ledger are no longer needed, the whole ledger can be deleted from the system (across all bookies). - -#### Ledger read consistency - -The main strength of Bookkeeper is that it guarantees read consistency in ledgers in the presence of failures. 
Since the ledger can only be written to by a single process, that process is free to append entries very efficiently, without need to obtain consensus. After a failure, the ledger will go through a recovery process that will finalize the state of the ledger and establish which entry was last committed to the log. After that point, all readers of the ledger are guaranteed to see the exact same content. - -#### Managed ledgers - -Given that Bookkeeper ledgers provide a single log abstraction, a library was developed on top of the ledger called the *managed ledger* that represents the storage layer for a single topic. A managed ledger represents the abstraction of a stream of messages with a single writer that keeps appending at the end of the stream and multiple cursors that are consuming the stream, each with its own associated position. - -Internally, a single managed ledger uses multiple BookKeeper ledgers to store the data. There are two reasons to have multiple ledgers: - -1. After a failure, a ledger is no longer writable and a new one needs to be created. -2. A ledger can be deleted when all cursors have consumed the messages it contains. This allows for periodic rollover of ledgers. - -### Journal storage - -In BookKeeper, *journal* files contain BookKeeper transaction logs. Before making an update to a [ledger](#ledgers), a bookie needs to ensure that a transaction describing the update is written to persistent (non-volatile) storage. A new journal file is created once the bookie starts or the older journal file reaches the journal file size threshold (configured using the [`journalMaxSizeMB`](reference-configuration.md#bookkeeper-journalMaxSizeMB) parameter). - -## Pulsar proxy - -One way for Pulsar clients to interact with a Pulsar [cluster](#clusters) is by connecting to Pulsar message [brokers](#brokers) directly. In some cases, however, this kind of direct connection is either infeasible or undesirable because the client doesn't have direct access to broker addresses. If you're running Pulsar in a cloud environment or on [Kubernetes](https://kubernetes.io) or an analogous platform, for example, then direct client connections to brokers are likely not possible. - -The **Pulsar proxy** provides a solution to this problem by acting as a single gateway for all of the brokers in a cluster. If you run the Pulsar proxy (which, again, is optional), all client connections with the Pulsar cluster will flow through the proxy rather than communicating with brokers. - -> For the sake of performance and fault tolerance, you can run as many instances of the Pulsar proxy as you'd like. - -Architecturally, the Pulsar proxy gets all the information it requires from ZooKeeper. When starting the proxy on a machine, you only need to provide ZooKeeper connection strings for the cluster-specific and instance-wide configuration store clusters. Here's an example: - -```bash - -$ bin/pulsar proxy \ - --zookeeper-servers zk-0,zk-1,zk-2 \ - --configuration-store-servers zk-0,zk-1,zk-2 - -``` - -> #### Pulsar proxy docs -> For documentation on using the Pulsar proxy, see the [Pulsar proxy admin documentation](administration-proxy.md). - - -Some important things to know about the Pulsar proxy: - -* Connecting clients don't need to provide *any* specific configuration to use the Pulsar proxy. You won't need to update the client configuration for existing applications beyond updating the IP used for the service URL (for example if you're running a load balancer over the Pulsar proxy). 
-* [TLS encryption](security-tls-transport.md) and [authentication](security-tls-authentication.md) is supported by the Pulsar proxy - -## Service discovery - -[Clients](getting-started-clients.md) connecting to Pulsar brokers need to be able to communicate with an entire Pulsar instance using a single URL. Pulsar provides a built-in service discovery mechanism that you can set up using the instructions in the [Deploying a Pulsar instance](deploy-bare-metal.md#service-discovery-setup) guide. - -You can use your own service discovery system if you'd like. If you use your own system, there is just one requirement: when a client performs an HTTP request to an endpoint, such as `http://pulsar.us-west.example.com:8080`, the client needs to be redirected to *some* active broker in the desired cluster, whether via DNS, an HTTP or IP redirect, or some other means. - -The diagram below illustrates Pulsar service discovery: - -![alt-text](/assets/pulsar-service-discovery.png) - -In this diagram, the Pulsar cluster is addressable via a single DNS name: `pulsar-cluster.acme.com`. A [Python client](client-libraries-python.md), for example, could access this Pulsar cluster like this: - -```python - -from pulsar import Client - -client = Client('pulsar://pulsar-cluster.acme.com:6650') - -``` - -:::note - -In Pulsar, each topic is handled by only one broker. Initial requests from a client to read, update or delete a topic are sent to a broker that may not be the topic owner. If the broker cannot handle the request for this topic, it redirects the request to the appropriate broker. - -::: - diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/concepts-authentication.md b/site2/website/versioned_docs/version-2.8.3-deprecated/concepts-authentication.md deleted file mode 100644 index f6307890c904a7..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/concepts-authentication.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -id: concepts-authentication -title: Authentication and Authorization -sidebar_label: "Authentication and Authorization" -original_id: concepts-authentication ---- - -Pulsar supports a pluggable [authentication](security-overview.md) mechanism which can be configured at the proxy and/or the broker. Pulsar also supports a pluggable [authorization](security-authorization.md) mechanism. These mechanisms work together to identify the client and its access rights on topics, namespaces and tenants. - diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/concepts-clients.md b/site2/website/versioned_docs/version-2.8.3-deprecated/concepts-clients.md deleted file mode 100644 index 4040624f7d6366..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/concepts-clients.md +++ /dev/null @@ -1,92 +0,0 @@ ---- -id: concepts-clients -title: Pulsar Clients -sidebar_label: "Clients" -original_id: concepts-clients ---- - -Pulsar exposes a client API with language bindings for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python.md), [C++](client-libraries-cpp.md) and [C#](client-libraries-dotnet.md). The client API optimizes and encapsulates Pulsar's client-broker communication protocol and exposes a simple and intuitive API for use by applications. - -Under the hood, the current official Pulsar client libraries support transparent reconnection and/or connection failover to brokers, queuing of messages until acknowledged by the broker, and heuristics such as connection retries with backoff. 
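-
-As an illustration of how little of this machinery applications see, here is a minimal produce-and-consume round trip with the official Python client. This is a sketch that assumes a broker reachable at `pulsar://localhost:6650` and the `pulsar-client` package; it is not the only way to use the API:
-
-```python
-
-import pulsar
-
-# Reconnection, queuing and retries happen inside the client library
-client = pulsar.Client('pulsar://localhost:6650')
-
-# Create the subscription first so the message below is retained for it
-consumer = client.subscribe('my-topic', 'my-subscription')
-
-producer = client.create_producer('my-topic')
-producer.send('Hello Pulsar'.encode('utf-8'))
-
-msg = consumer.receive()
-print('Received: %s' % msg.data().decode('utf-8'))
-consumer.acknowledge(msg)
-
-client.close()
-
-```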
- -> **Custom client libraries** -> If you'd like to create your own client library, we recommend consulting the documentation on Pulsar's custom [binary protocol](developing-binary-protocol.md). - - -## Client setup phase - -Before an application creates a producer/consumer, the Pulsar client library needs to initiate a setup phase including two steps: - -1. The client attempts to determine the owner of the topic by sending an HTTP lookup request to the broker. The request could reach one of the active brokers which, by looking at the (cached) zookeeper metadata knows who is serving the topic or, in case nobody is serving it, tries to assign it to the least loaded broker. -1. Once the client library has the broker address, it creates a TCP connection (or reuse an existing connection from the pool) and authenticates it. Within this connection, client and broker exchange binary commands from a custom protocol. At this point the client sends a command to create producer/consumer to the broker, which will comply after having validated the authorization policy. - -Whenever the TCP connection breaks, the client immediately re-initiates this setup phase and keeps trying with exponential backoff to re-establish the producer or consumer until the operation succeeds. - -## Reader interface - -In Pulsar, the "standard" [consumer interface](concepts-messaging.md#consumers) involves using consumers to listen on [topics](reference-terminology.md#topic), process incoming messages, and finally acknowledge those messages when they are processed. Whenever a new subscription is created, it is initially positioned at the end of the topic (by default), and consumers associated with that subscription begin reading with the first message created afterwards. Whenever a consumer connects to a topic using a pre-existing subscription, it begins reading from the earliest message un-acked within that subscription. In summary, with the consumer interface, subscription cursors are automatically managed by Pulsar in response to [message acknowledgements](concepts-messaging.md#acknowledgement). - -The **reader interface** for Pulsar enables applications to manually manage cursors. When you use a reader to connect to a topic---rather than a consumer---you need to specify *which* message the reader begins reading from when it connects to a topic. When connecting to a topic, the reader interface enables you to begin with: - -* The **earliest** available message in the topic -* The **latest** available message in the topic -* Some other message between the earliest and the latest. If you select this option, you'll need to explicitly provide a message ID. Your application will be responsible for "knowing" this message ID in advance, perhaps fetching it from a persistent data store or cache. - -The reader interface is helpful for use cases like using Pulsar to provide effectively-once processing semantics for a stream processing system. For this use case, it's essential that the stream processing system be able to "rewind" topics to a specific message and begin reading there. The reader interface provides Pulsar clients with the low-level abstraction necessary to "manually position" themselves within a topic. - -Internally, the reader interface is implemented as a consumer using an exclusive, non-durable subscription to the topic with a randomly-allocated name. 
-
-[ **IMPORTANT** ]
-
-Unlike subscriptions/consumers, readers are non-durable in nature and do not prevent data in a topic from being deleted; thus it is ***strongly*** advised that [data retention](cookbooks-retention-expiry.md) be configured. If data retention for a topic is not configured for an adequate amount of time, messages that the reader has not yet read might be deleted. This causes the readers to essentially skip messages. Configuring data retention for a topic guarantees the reader a certain duration in which to read messages.
-
-Please also note that a reader can have a "backlog", but the metric is only used for users to know how far behind the reader is. The metric is not considered for any backlog quota calculations.
-
-![The Pulsar consumer and reader interfaces](/assets/pulsar-reader-consumer-interfaces.png)
-
-Here's a Java example that begins reading from the earliest available message on a topic:
-
-```java
-
-import org.apache.pulsar.client.api.Message;
-import org.apache.pulsar.client.api.MessageId;
-import org.apache.pulsar.client.api.Reader;
-
-// Create a reader on a topic and for a specific message (and onward)
-Reader<byte[]> reader = pulsarClient.newReader()
-    .topic("reader-api-test")
-    .startMessageId(MessageId.earliest)
-    .create();
-
-while (true) {
-    Message<byte[]> message = reader.readNext();
-
-    // Process the message
-}
-
-```
-
-To create a reader that reads from the latest available message:
-
-```java
-
-Reader<byte[]> reader = pulsarClient.newReader()
-    .topic(topic)
-    .startMessageId(MessageId.latest)
-    .create();
-
-```
-
-To create a reader that reads from some message between the earliest and the latest:
-
-```java
-
-byte[] msgIdBytes = // Some byte array
-MessageId id = MessageId.fromByteArray(msgIdBytes);
-Reader<byte[]> reader = pulsarClient.newReader()
-    .topic(topic)
-    .startMessageId(id)
-    .create();
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/concepts-messaging.md b/site2/website/versioned_docs/version-2.8.3-deprecated/concepts-messaging.md
deleted file mode 100644
index b76728f109b5d0..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/concepts-messaging.md
+++ /dev/null
@@ -1,700 +0,0 @@
----
-id: concepts-messaging
-title: Messaging
-sidebar_label: "Messaging"
-original_id: concepts-messaging
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-Pulsar is built on the [publish-subscribe](https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern) pattern (often abbreviated to pub-sub). In this pattern, [producers](#producers) publish messages to [topics](#topics). [Consumers](#consumers) [subscribe](#subscription-types) to those topics, process incoming messages, and send an acknowledgement when processing is complete.
-
-When a subscription is created, Pulsar [retains](concepts-architecture-overview.md#persistent-storage) all messages, even if the consumer is disconnected. Retained messages are discarded only when a consumer acknowledges that those messages are processed successfully.
-
-## Messages
-
-Messages are the basic "unit" of Pulsar. The following table lists the components of messages.
-
-Component | Description
-:---------|:-------
-Value / data payload | The data carried by the message. All Pulsar messages contain raw bytes, although message data can also conform to data [schemas](schema-get-started.md). 
-Properties | An optional key/value map of user-defined properties. -Producer name | The name of the producer who produces the message. If you do not specify a producer name, the default name is used. -Sequence ID | Each Pulsar message belongs to an ordered sequence on its topic. The sequence ID of the message is its order in that sequence. -Publish time | The timestamp of when the message is published. The timestamp is automatically applied by the producer. -Event time | An optional timestamp attached to a message by applications. For example, applications attach a timestamp on when the message is processed. If nothing is set to event time, the value is `0`. -TypedMessageBuilder | It is used to construct a message. You can set message properties such as the message key, message value with `TypedMessageBuilder`.
    When you set `TypedMessageBuilder`, set the key as a string. If you set the key as other types, for example, an AVRO object, the key is sent as bytes, and it is difficult to get the AVRO object back on the consumer. - -The default size of a message is 5 MB. You can configure the max size of a message with the following configurations. - -- In the `broker.conf` file. - - ```bash - - # The max size of a message (in bytes). - maxMessageSize=5242880 - - ``` - -- In the `bookkeeper.conf` file. - - ```bash - - # The max size of the netty frame (in bytes). Any messages received larger than this value are rejected. The default value is 5 MB. - nettyMaxFrameSizeBytes=5253120 - - ``` - -> For more information on Pulsar message contents, see Pulsar [binary protocol](developing-binary-protocol.md). - -## Producers - -A producer is a process that attaches to a topic and publishes messages to a Pulsar [broker](reference-terminology.md#broker). The Pulsar broker process the messages. - -### Send modes - -Producers send messages to brokers synchronously (sync) or asynchronously (async). - -| Mode | Description | -|:-----------|-----------| -| Sync send | The producer waits for an acknowledgement from the broker after sending every message. If the acknowledgment is not received, the producer treats the sending operation as a failure. | -| Async send | The producer puts a message in a blocking queue and returns immediately. The client library sends the message to the broker in the background. If the queue is full (you can [configure](reference-configuration.md#broker) the maximum size), the producer is blocked or fails immediately when calling the API, depending on arguments passed to the producer. | - -### Access mode - -You can have different types of access modes on topics for producers. - -|Access mode | Description -|---|--- -`Shared`|Multiple producers can publish on a topic.

    This is the **default** setting. -`Exclusive`|Only one producer can publish on a topic.

    If there is already a producer connected, other producers trying to publish on this topic get errors immediately.

    The "old" producer is evicted and a "new" producer is selected to be the next exclusive producer if the "old" producer experiences a network partition with the broker. -`WaitForExclusive`|If there is already a producer connected, the producer creation is pending (rather than timing out) until the producer gets the `Exclusive` access.

    The producer that succeeds in becoming the exclusive one is treated as the leader. Consequently, if you want to implement the leader election scheme for your application, you can use this access mode. - -:::note - -Once an application creates a producer with the `Exclusive` or `WaitForExclusive` access mode successfully, the instance of the application is guaranteed to be the **only one writer** on the topic. Other producers trying to produce on this topic get errors immediately or have to wait until they get the `Exclusive` access. -For more information, see [PIP 68: Exclusive Producer](https://github.com/apache/pulsar/wiki/PIP-68:-Exclusive-Producer). - -::: - -You can set producer access mode through Java Client API. For more information, see `ProducerAccessMode` in [ProducerBuilder.java](https://github.com/apache/pulsar/blob/fc5768ca3bbf92815d142fe30e6bfad70a1b4fc6/pulsar-client-api/src/main/java/org/apache/pulsar/client/api/ProducerBuilder.java). - - -### Compression - -You can compress messages published by producers during transportation. Pulsar currently supports the following types of compression: - -* [LZ4](https://github.com/lz4/lz4) -* [ZLIB](https://zlib.net/) -* [ZSTD](https://facebook.github.io/zstd/) -* [SNAPPY](https://google.github.io/snappy/) - -### Batching - -When batching is enabled, the producer accumulates and sends a batch of messages in a single request. The batch size is defined by the maximum number of messages and the maximum publish latency. Therefore, the backlog size represents the total number of batches instead of the total number of messages. - -In Pulsar, batches are tracked and stored as single units rather than as individual messages. Consumer unbundles a batch into individual messages. However, scheduled messages (configured through the `deliverAt` or the `deliverAfter` parameter) are always sent as individual messages even batching is enabled. - -In general, a batch is acknowledged when all of its messages are acknowledged by a consumer. It means unexpected failures, negative acknowledgements, or acknowledgement timeouts can result in redelivery of all messages in a batch, even if some of the messages are acknowledged. - -To avoid redelivering acknowledged messages in a batch to the consumer, Pulsar introduces batch index acknowledgement since Pulsar 2.6.0. When batch index acknowledgement is enabled, the consumer filters out the batch index that has been acknowledged and sends the batch index acknowledgement request to the broker. The broker maintains the batch index acknowledgement status and tracks the acknowledgement status of each batch index to avoid dispatching acknowledged messages to the consumer. When all indexes of the batch message are acknowledged, the batch message is deleted. - -By default, batch index acknowledgement is disabled (`acknowledgmentAtBatchIndexLevelEnabled=false`). You can enable batch index acknowledgement by setting the `acknowledgmentAtBatchIndexLevelEnabled` parameter to `true` at the broker side. Enabling batch index acknowledgement results in more memory overheads. - -### Chunking -When you enable chunking, read the following instructions. -- Batching and chunking cannot be enabled simultaneously. To enable chunking, you must disable batching in advance. -- Chunking is only supported for persisted topics. -- Chunking is only supported for the exclusive and failover subscription types. 
-
-When chunking is enabled (`chunkingEnabled=true`), if the message size is greater than the allowed maximum publish-payload size, the producer splits the original message into chunked messages and publishes them with chunked metadata to the broker separately and in order. At the broker side, the chunked messages are stored in the managed-ledger in the same way as ordinary messages. The only difference is that the consumer needs to buffer the chunked messages and combine them into the real message when all chunked messages have been collected. The chunked messages in the managed-ledger can be interwoven with ordinary messages. If the producer fails to publish all the chunks of a message, the consumer can expire the incomplete chunks when it does not receive them within the expiration time. By default, the expiration time is set to one minute.
-
-The consumer consumes the chunked messages and buffers them until the consumer receives all the chunks of a message. The consumer then stitches the chunked messages together and places them into the receiver-queue. Clients consume messages from the receiver-queue. Once the consumer consumes the entire large message and acknowledges it, the consumer internally sends acknowledgement of all the chunk messages associated with that large message. You can set the `maxPendingChunkedMessage` parameter on the consumer. When the threshold is reached, the consumer drops the unchunked messages by silently acknowledging them or asking the broker to redeliver them later by marking them unacknowledged.
-
-The broker does not require any changes to support chunking for non-shared subscriptions. The broker only uses `chunkedMessageRate` to record the chunked message rate on the topic.
-
-#### Handle chunked messages with one producer and one ordered consumer
-
-The following figure shows a topic that has one producer which publishes a large message payload in chunked messages along with regular non-chunked messages. The producer publishes message M1 in three chunks M1-C1, M1-C2 and M1-C3. The broker stores all the three chunked messages in the managed-ledger and dispatches them to the ordered (exclusive/failover) consumer in the same order. The consumer buffers all the chunked messages in memory until it receives all the chunked messages, combines them into one message and then hands over the original message M1 to the client.
-
-![](/assets/chunking-01.png)
-
-#### Handle chunked messages with multiple producers and one ordered consumer
-
-When multiple publishers publish chunked messages into a single topic, the broker stores all the chunked messages coming from different publishers in the same managed-ledger. As shown below, Producer 1 publishes message M1 in three chunks M1-C1, M1-C2 and M1-C3. Producer 2 publishes message M2 in three chunks M2-C1, M2-C2 and M2-C3. All chunked messages of the specific message are still in order but might not be consecutive in the managed-ledger. This brings some memory pressure to the consumer because the consumer keeps a separate buffer for each large message to aggregate all chunks of the large message and combine them into one message.
-
-![](/assets/chunking-02.png)
-
-## Consumers
-
-A consumer is a process that attaches to a topic via a subscription and then receives messages.
-
-A consumer sends a [flow permit request](developing-binary-protocol.md#flow-control) to a broker to get messages. There is a queue at the consumer side to receive messages pushed from the broker. You can configure the queue size with the [`receiverQueueSize`](client-libraries-java.md#configure-consumer) parameter. The default size is `1000`. Each time `consumer.receive()` is called, a message is dequeued from the buffer.
-
-### Receive modes
-
-Messages are received from [brokers](reference-terminology.md#broker) either synchronously (sync) or asynchronously (async).
-
-| Mode          | Description |
-|:--------------|:------------|
-| Sync receive  | A sync receive is blocked until a message is available. |
-| Async receive | An async receive returns immediately with a future value—for example, a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture) in Java—that completes once a new message is available. |
-
-### Listeners
-
-Client libraries provide listener implementations for consumers. For example, the [Java client](client-libraries-java.md) provides a {@inject: javadoc:MessageListener:/client/org/apache/pulsar/client/api/MessageListener} interface. In this interface, the `received` method is called whenever a new message is received.
-
-### Acknowledgement
-
-When a consumer consumes a message successfully, the consumer sends an acknowledgement request to the broker. This message is permanently stored, and then deleted only after all the subscriptions have acknowledged it. If you want to store the message that has been acknowledged by a consumer, you need to configure the [message retention policy](concepts-messaging.md#message-retention-and-expiry).
-
-For a batch message, if batch index acknowledgement is enabled, the broker maintains the batch index acknowledgement status and tracks the acknowledgement status of each batch index to avoid dispatching acknowledged messages to the consumer. When all indexes of the batch message are acknowledged, the batch message is deleted. For details about the batch index acknowledgement, see [batching](#batching).
-
-Messages can be acknowledged in the following two ways:
-
-- Messages are acknowledged individually. With individual acknowledgement, the consumer needs to acknowledge each message and send an acknowledgement request to the broker.
-- Messages are acknowledged cumulatively. With cumulative acknowledgement, the consumer only needs to acknowledge the last message it received. All messages in the stream up to (and including) the provided message are not re-delivered to that consumer.
-
-:::note
-
-Cumulative acknowledgement cannot be used in the [Shared subscription type](#subscription-types), because this subscription type involves multiple consumers which have access to the same subscription. In the Shared subscription type, messages are acknowledged individually.
-
-:::
-
-### Negative acknowledgement
-
-When a consumer fails to consume a message and wants to consume it again, the consumer sends a negative acknowledgement to the broker, and then the broker redelivers the message.
-
-Messages are negatively acknowledged either individually or cumulatively, depending on the subscription type.
-
-In the exclusive and failover subscription types, consumers only negatively acknowledge the last message they receive.
-
-In the shared and Key_Shared subscription types, you can negatively acknowledge messages individually. 
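-
-For example, with the official Python client, a consumer hands a message back for redelivery instead of acknowledging it. The following is a sketch assuming a broker at `pulsar://localhost:6650` and the `pulsar-client` package; `process` stands in for hypothetical application logic:
-
-```python
-
-import pulsar
-
-client = pulsar.Client('pulsar://localhost:6650')
-consumer = client.subscribe('my-topic', 'my-subscription',
-                            consumer_type=pulsar.ConsumerType.Shared)
-
-msg = consumer.receive()
-try:
-    process(msg.data())          # hypothetical application logic
-    consumer.acknowledge(msg)    # success: the broker can delete the message
-except Exception:
-    # failure: ask the broker to redeliver the message after the nack delay
-    consumer.negative_acknowledge(msg)
-
-client.close()
-
-```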
-
-Be aware that negative acknowledgment on ordered subscription types, such as Exclusive, Failover and Key_Shared, can cause failed messages to arrive at consumers out of the original order.
-
-:::note
-
-If batching is enabled, other messages and the negatively acknowledged messages in the same batch are redelivered to the consumer.
-
-:::
-
-### Acknowledgement timeout
-
-If a message is not consumed successfully, and you want to trigger the broker to redeliver the message automatically, you can adopt the unacknowledged message automatic re-delivery mechanism. The client tracks unacknowledged messages within the entire `acktimeout` time range, and sends a `redeliver unacknowledged messages` request to the broker automatically when the acknowledgement timeout is specified.
-
-:::note
-
-If batching is enabled, other messages and the unacknowledged messages in the same batch are redelivered to the consumer.
-
-:::
-
-:::note
-
-Prefer negative acknowledgements over acknowledgement timeout. Negative acknowledgement controls the re-delivery of individual messages with more precision, and avoids invalid redeliveries when the message processing time exceeds the acknowledgement timeout.
-
-:::
-
-### Dead letter topic
-
-Dead letter topic enables you to consume new messages when some messages cannot be consumed successfully by a consumer. In this mechanism, messages that fail to be consumed are stored in a separate topic, which is called the dead letter topic. You can decide how to handle messages in the dead letter topic.
-
-The following example shows how to enable dead letter topic in a Java client using the default dead letter topic:
-
-```java
-
-Consumer<byte[]> consumer = pulsarClient.newConsumer(Schema.BYTES)
-              .topic(topic)
-              .subscriptionName("my-subscription")
-              .subscriptionType(SubscriptionType.Shared)
-              .deadLetterPolicy(DeadLetterPolicy.builder()
-                    .maxRedeliverCount(maxRedeliveryCount)
-                    .build())
-              .subscribe();
-
-```
-
-The default dead letter topic uses this format:
-
-```
-
-<topicname>-<subscriptionname>-DLQ
-
-```
-
-
-If you want to specify the name of the dead letter topic, use this Java client example:
-
-```java
-
-Consumer<byte[]> consumer = pulsarClient.newConsumer(Schema.BYTES)
-              .topic(topic)
-              .subscriptionName("my-subscription")
-              .subscriptionType(SubscriptionType.Shared)
-              .deadLetterPolicy(DeadLetterPolicy.builder()
-                    .maxRedeliverCount(maxRedeliveryCount)
-                    .deadLetterTopic("your-topic-name")
-                    .build())
-              .subscribe();
-
-```
-
-Dead letter topic depends on message re-delivery. Messages are redelivered either due to [acknowledgement timeout](#acknowledgement-timeout) or [negative acknowledgement](#negative-acknowledgement). If you are going to use negative acknowledgement on a message, make sure it is negatively acknowledged before the acknowledgement timeout.
-
-:::note
-
-Currently, dead letter topic is enabled in the shared and Key_Shared subscription types.
-
-:::
-
-### Retry letter topic
-
-For many online business systems, a message needs to be re-consumed because an exception occurs in the business logic processing. To configure the delay time for re-consuming the failed messages, you can configure the producer to send messages to both the business topic and the retry letter topic, and enable automatic retry on the consumer. When automatic retry is enabled on the consumer, a message is stored in the retry letter topic if it is not consumed, and therefore the consumer automatically consumes the failed messages from the retry letter topic after a specified delay time. 
This example shows how to consume messages from a retry letter topic.

```java

Consumer<byte[]> consumer = pulsarClient.newConsumer(Schema.BYTES)
                .topic(topic)
                .subscriptionName("my-subscription")
                .subscriptionType(SubscriptionType.Shared)
                .enableRetry(true)
                .receiverQueueSize(100)
                .deadLetterPolicy(DeadLetterPolicy.builder()
                        .maxRedeliverCount(maxRedeliveryCount)
                        .retryLetterTopic("persistent://my-property/my-ns/my-subscription-custom-Retry")
                        .build())
                .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest)
                .subscribe();

```

## Topics

As in other pub-sub systems, topics in Pulsar are named channels for transmitting messages from producers to consumers. Topic names are URLs that have a well-defined structure:

```http

{persistent|non-persistent}://tenant/namespace/topic

```

Topic name component | Description
:--------------------|:-----------
`persistent` / `non-persistent` | This identifies the type of topic. Pulsar supports two kinds of topics: [persistent](concepts-architecture-overview.md#persistent-storage) and [non-persistent](#non-persistent-topics). The default is persistent, so if you do not specify a type, the topic is persistent. With persistent topics, all messages are durably persisted on disks (if the broker is not standalone, messages are durably persisted on multiple disks), whereas data for non-persistent topics is not persisted to storage disks.
`tenant` | The topic tenant within the instance. Tenants are essential to multi-tenancy in Pulsar, and are spread across clusters.
`namespace` | The administrative unit of the topic, which acts as a grouping mechanism for related topics. Most topic configuration is performed at the [namespace](#namespaces) level. Each tenant has one or multiple namespaces.
`topic` | The final part of the name. Topic names have no special meaning in a Pulsar instance.

> **No need to explicitly create new topics**
> You do not need to explicitly create topics in Pulsar. If a client attempts to write or receive messages to/from a topic that does not yet exist, Pulsar creates that topic under the namespace provided in the [topic name](#topics) automatically.
> If no tenant or namespace is specified when a client creates a topic, the topic is created in the default tenant and namespace. You can also create a topic in a specified tenant and namespace, such as `persistent://my-tenant/my-namespace/my-topic`. `persistent://my-tenant/my-namespace/my-topic` means the `my-topic` topic is created in the `my-namespace` namespace of the `my-tenant` tenant.

## Namespaces

A namespace is a logical nomenclature within a tenant. A tenant can create multiple namespaces via the [admin API](admin-api-namespaces.md#create). For instance, a tenant with different applications can create a separate namespace for each application. A namespace allows the application to create and manage a hierarchy of topics. For example, `my-tenant/app1` is the namespace for the application `app1` of the tenant `my-tenant`. You can create any number of [topics](#topics) under the namespace, as in the sketch below.
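A hypothetical setup with the `pulsar-admin` CLI (a sketch; the tenant and namespace names are placeholders, and your deployment may require additional flags such as allowed clusters):

```shell

# Create a tenant and a namespace for the application "app1"
bin/pulsar-admin tenants create my-tenant
bin/pulsar-admin namespaces create my-tenant/app1

# Topics under the namespace are created automatically on first use,
# e.g. persistent://my-tenant/app1/topic-1

```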
## Subscriptions

A subscription is a named configuration rule that determines how messages are delivered to consumers. Four subscription types are available in Pulsar: [exclusive](#exclusive), [shared](#shared), [failover](#failover), and [key_shared](#key_shared). These types are illustrated in the figure below.

![Subscription types](/assets/pulsar-subscription-types.png)

> **Pub-Sub or Queuing**
> In Pulsar, you can use different subscriptions flexibly.
> * If you want to achieve traditional "fan-out pub-sub messaging" among consumers, specify a unique subscription name for each consumer. This is the exclusive subscription type.
> * If you want to achieve "message queuing" among consumers, share the same subscription name among multiple consumers (shared, failover, key_shared).
> * If you want to achieve both effects simultaneously, combine the exclusive subscription type with other subscription types for consumers.

### Subscription types

When a subscription has no consumers, its subscription type is undefined. The type of a subscription is defined when a consumer connects to it, and the type can be changed by restarting all consumers with a different configuration.

#### Exclusive

In the *exclusive* type, only a single consumer is allowed to attach to the subscription. If multiple consumers subscribe to a topic using the same subscription, an error occurs.

In the diagram below, only **Consumer A-0** is allowed to consume messages.

> Exclusive is the default subscription type.

![Exclusive subscriptions](/assets/pulsar-exclusive-subscriptions.png)

#### Failover

In the *failover* type, multiple consumers can attach to the same subscription. A master consumer is picked for a non-partitioned topic or for each partition of a partitioned topic, and receives messages. When the master consumer disconnects, all (non-acknowledged and subsequent) messages are delivered to the next consumer in line.

For partitioned topics, the broker sorts consumers by priority level and by the lexicographical order of consumer names. The broker then tries to evenly assign partitions to the consumers with the highest priority level.

For non-partitioned topics, the broker picks consumers in the order in which they subscribe to the topic.

In the diagram below, **Consumer-B-0** is the master consumer, while **Consumer-B-1** would be the next consumer in line to receive messages if **Consumer-B-0** is disconnected.

![Failover subscriptions](/assets/pulsar-failover-subscriptions.png)

#### Shared

In the *shared* or *round robin* type, multiple consumers can attach to the same subscription. Messages are delivered in a round-robin distribution across consumers, and any given message is delivered to only one consumer. When a consumer disconnects, all the messages that were sent to it and not acknowledged are rescheduled for sending to the remaining consumers.

In the diagram below, **Consumer-C-1** and **Consumer-C-2** are able to subscribe to the topic, but **Consumer-C-3** and others could as well.

> **Limitations of Shared type**
> When using the Shared type, be aware that:
> * Message ordering is not guaranteed.
> * You cannot use cumulative acknowledgment with the Shared type.

![Shared subscriptions](/assets/pulsar-shared-subscriptions.png)

#### Key_Shared

In the *Key_Shared* type, multiple consumers can attach to the same subscription. Messages are delivered in a distribution across consumers, and messages with the same key or same ordering key are delivered to only one consumer. No matter how many times a message is re-delivered, it is delivered to the same consumer. When a consumer connects or disconnects, the consumer serving some keys of messages changes.

> **Limitations of Key_Shared type**
> When you use the Key_Shared type, be aware that:
> * You need to specify a key or orderingKey for messages.
> * You cannot use cumulative acknowledgment with the Key_Shared type.
> * Your producers should disable batching or use a key-based batch builder, as in the sketch below.

![Key_Shared subscriptions](/assets/pulsar-key-shared-subscriptions.png)

**You can disable Key_Shared subscriptions in the `broker.conf` file.**
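The following is a minimal Java sketch of the producer and consumer settings that the Key_Shared type typically requires (the topic and subscription names are placeholders):

```java

// Producer: use a key-based batch builder so that one batch never mixes keys
Producer<String> producer = pulsarClient.newProducer(Schema.STRING)
        .topic("my-topic")
        .batcherBuilder(BatcherBuilder.KEY_BASED)
        .create();

producer.newMessage().key("user-1").value("event").send();

// Consumer: attach to the subscription with the Key_Shared type
Consumer<String> consumer = pulsarClient.newConsumer(Schema.STRING)
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .subscriptionType(SubscriptionType.Key_Shared)
        .subscribe();

```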
### Subscription modes

#### What is a subscription mode

The subscription mode indicates the cursor type.

- When a subscription is created, an associated cursor is created to record the last consumed position.
- When a consumer of the subscription restarts, it can continue consuming from the last consumed position.

Subscription mode | Description | Note
|---|---|---
`Durable`|The cursor is durable, which retains messages and persists the current position.<br /><br />If a broker restarts from a failure, it can recover the cursor from the persistent storage (BookKeeper), so that messages can continue to be consumed from the last consumed position.|`Durable` is the **default** subscription mode.
`NonDurable`|The cursor is non-durable.<br /><br />Once a broker stops, the cursor is lost and can never be recovered, so that messages **cannot** continue to be consumed from the last consumed position.|A reader's subscription mode is `NonDurable` in nature, and it does not prevent data in a topic from being deleted. A reader's subscription mode **cannot** be changed.

A [subscription](#subscriptions) can have one or more consumers. When a consumer subscribes to a topic, it must specify the subscription name. A durable subscription and a non-durable subscription can have the same name; they are independent of each other. If a consumer specifies a subscription that does not already exist, the subscription is automatically created.

#### When to use

By default, messages of a topic without any durable subscriptions are marked as deleted. If you want to prevent the messages from being marked as deleted, you can create a durable subscription for this topic. In this case, only acknowledged messages are marked as deleted. For more information, see [message retention and expiry](cookbooks-retention-expiry.md).

#### How to use

After a consumer is created, the default subscription mode of the consumer is `Durable`. You can change the subscription mode to `NonDurable` by making changes to the consumer's configuration.

````mdx-code-block
<Tabs defaultValue="Durable" values={[{label: "Durable", value: "Durable"}, {label: "Non-durable", value: "Non-durable"}]}>
<TabItem value="Durable">

```java

Consumer<byte[]> consumer = pulsarClient.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-sub")
        .subscriptionMode(SubscriptionMode.Durable)
        .subscribe();

```

</TabItem>
<TabItem value="Non-durable">

```java

Consumer<byte[]> consumer = pulsarClient.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-sub")
        .subscriptionMode(SubscriptionMode.NonDurable)
        .subscribe();

```

</TabItem>
</Tabs>
````

For how to create, check, or delete a durable subscription, see [manage subscriptions](admin-api-topics.md/#manage-subscriptions).

## Multi-topic subscriptions

When a consumer subscribes to a Pulsar topic, by default it subscribes to one specific topic, such as `persistent://public/default/my-topic`. As of Pulsar version 1.23.0-incubating, however, Pulsar consumers can simultaneously subscribe to multiple topics. You can define a list of topics in two ways:

* On the basis of a [**reg**ular **ex**pression](https://en.wikipedia.org/wiki/Regular_expression) (regex), for example `persistent://public/default/finance-.*`
* By explicitly defining a list of topics

> When subscribing to multiple topics by regex, all topics must be in the same [namespace](#namespaces).

When subscribing to multiple topics, the Pulsar client automatically makes a call to the Pulsar API to discover the topics that match the regex pattern/list, and then subscribes to all of them. If any of the topics do not exist, the consumer auto-subscribes to them once the topics are created.

> **No ordering guarantees across multiple topics**
> When a producer sends messages to a single topic, all messages are guaranteed to be read from that topic in the same order. However, these guarantees do not hold across multiple topics. So when a producer sends messages to multiple topics, the order in which messages are read from those topics is not guaranteed to be the same.

The following are multi-topic subscription examples for Java.
```java

import java.util.regex.Pattern;

import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.PulsarClient;

// Instantiate a Pulsar client object
PulsarClient pulsarClient = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650")
        .build();

// Subscribe to all topics in a namespace
Pattern allTopicsInNamespace = Pattern.compile("persistent://public/default/.*");
Consumer<byte[]> allTopicsConsumer = pulsarClient.newConsumer()
        .topicsPattern(allTopicsInNamespace)
        .subscriptionName("subscription-1")
        .subscribe();

// Subscribe to a subset of topics in a namespace, based on regex
Pattern someTopicsInNamespace = Pattern.compile("persistent://public/default/foo.*");
Consumer<byte[]> someTopicsConsumer = pulsarClient.newConsumer()
        .topicsPattern(someTopicsInNamespace)
        .subscriptionName("subscription-2")
        .subscribe();

```

For code examples, see [Java](client-libraries-java.md#multi-topic-subscriptions).

## Partitioned topics

Normal topics are served only by a single broker, which limits the maximum throughput of the topic. *Partitioned topics* are a special type of topic that are handled by multiple brokers, thus allowing for higher throughput.

A partitioned topic is actually implemented as N internal topics, where N is the number of partitions. When publishing messages to a partitioned topic, each message is routed to one of several brokers. The distribution of partitions across brokers is handled automatically by Pulsar.

The diagram below illustrates this:

![](/assets/partitioning.png)

The **Topic1** topic has five partitions (**P0** through **P4**) split across three brokers. Because there are more partitions than brokers, two brokers handle two partitions apiece, while the third handles only one (again, Pulsar handles this distribution of partitions automatically).

Messages for this topic are broadcast to two consumers. The [routing mode](#routing-modes) determines which partition each message is published to, while the [subscription type](#subscription-types) determines which messages go to which consumers.

Decisions about routing and subscription types can be made separately in most cases. In general, throughput concerns should guide partitioning/routing decisions, while subscription decisions should be guided by application semantics.

There is no difference between partitioned topics and normal topics in terms of how subscription types work, as partitioning only determines what happens between when a message is published by a producer and processed and acknowledged by a consumer.

Partitioned topics need to be explicitly created via the [admin API](admin-api-overview.md). The number of partitions can be specified when creating the topic.

### Routing modes

When publishing to partitioned topics, you must specify a *routing mode*. The routing mode determines which partition---that is, which internal topic---each message should be published to.

Three {@inject: javadoc:MessageRoutingMode:/client/org/apache/pulsar/client/api/MessageRoutingMode} options are available:

Mode | Description
:--------|:------------
`RoundRobinPartition` | If no key is provided, the producer publishes messages across all partitions in round-robin fashion to achieve maximum throughput. Note that round-robin is not done per individual message; instead, it is aligned with the batching delay boundary to ensure that batching is effective. If a key is specified on the message, the partitioned producer hashes the key and assigns the message to a particular partition.
This is the default mode.
`SinglePartition` | If no key is provided, the producer randomly picks one single partition and publishes all messages to that partition. If a key is specified on the message, the partitioned producer hashes the key and assigns the message to a particular partition.
`CustomPartition` | Uses a custom message router implementation that is called to determine the partition for a particular message. You can create a custom routing mode by using the [Java client](client-libraries-java.md) and implementing the {@inject: javadoc:MessageRouter:/client/org/apache/pulsar/client/api/MessageRouter} interface.

### Ordering guarantee

The ordering of messages is related to the MessageRoutingMode and the message key. Usually, users want a per-key-partition ordering guarantee.

If there is a key attached to a message, the messages are routed to the corresponding partitions based on the hashing scheme specified by {@inject: javadoc:HashingScheme:/client/org/apache/pulsar/client/api/HashingScheme} in {@inject: javadoc:ProducerBuilder:/client/org/apache/pulsar/client/api/ProducerBuilder}, when using either the `SinglePartition` or the `RoundRobinPartition` mode.

Ordering guarantee | Description | Routing Mode and Key
:------------------|:------------|:------------
Per-key-partition | All the messages with the same key will be in order and be placed in the same partition. | Use either `SinglePartition` or `RoundRobinPartition` mode, and provide a key with each message.
Per-producer | All the messages from the same producer will be in order. | Use `SinglePartition` mode, and provide no key with the messages.

### Hashing scheme

{@inject: javadoc:HashingScheme:/client/org/apache/pulsar/client/api/HashingScheme} is an enum that represents the set of standard hashing functions available when choosing the partition for a particular message.

Two standard hashing functions are available: `JavaStringHash` and `Murmur3_32Hash`. The default hashing function for producers is `JavaStringHash`. Note that `JavaStringHash` is not useful when producers are written in multiple client languages; in that case, it is recommended to use `Murmur3_32Hash`.

## Non-persistent topics

By default, Pulsar persistently stores *all* unacknowledged messages on multiple [BookKeeper](concepts-architecture-overview.md#persistent-storage) bookies (storage nodes). Data for messages on persistent topics can thus survive broker restarts and subscriber failover.

Pulsar also, however, supports **non-persistent topics**, which are topics on which messages are *never* persisted to disk and live only in memory. When using non-persistent delivery, killing a Pulsar broker or disconnecting a subscriber from a topic means that all in-transit messages are lost on that (non-persistent) topic, so clients may see message loss.

Non-persistent topics have names of this form (note the `non-persistent` in the name):

```http

non-persistent://tenant/namespace/topic

```

> For more info on using non-persistent topics, see the [Non-persistent messaging cookbook](cookbooks-non-persistent.md).

In non-persistent topics, brokers immediately deliver messages to all connected subscribers *without persisting them* in [BookKeeper](concepts-architecture-overview.md#persistent-storage).
If a subscriber is disconnected, the broker cannot deliver those in-transit messages, and subscribers can never receive those messages again. Eliminating the persistent storage step makes messaging on non-persistent topics slightly faster than on persistent topics in some cases, but with the caveat that some of the core benefits of Pulsar are lost.

> With non-persistent topics, message data lives only in memory. If a message broker fails or message data can otherwise not be retrieved from memory, your message data may be lost. Use non-persistent topics only if you're *certain* that your use case requires it and can sustain it.

By default, non-persistent topics are enabled on Pulsar brokers. You can disable them in the broker's [configuration](reference-configuration.md#broker-enableNonPersistentTopics). You can manage non-persistent topics using the `pulsar-admin topics` command. For more information, see [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/).

### Performance

Non-persistent messaging is usually faster than persistent messaging because brokers don't persist messages and immediately send acks back to the producer as soon as the message is delivered to connected consumers. Producers thus see comparatively low publish latency with non-persistent topics.

### Client API

Producers and consumers can connect to non-persistent topics in the same way as persistent topics, with the crucial difference that the topic name must start with `non-persistent`. All three subscription types---[exclusive](#exclusive), [shared](#shared), and [failover](#failover)---are supported for non-persistent topics.

Here's an example [Java consumer](client-libraries-java.md#consumers) for a non-persistent topic:

```java

PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650")
        .build();
String npTopic = "non-persistent://public/default/my-topic";
String subscriptionName = "my-subscription-name";

Consumer<byte[]> consumer = client.newConsumer()
        .topic(npTopic)
        .subscriptionName(subscriptionName)
        .subscribe();

```

Here's an example [Java producer](client-libraries-java.md#producer) for the same non-persistent topic:

```java

Producer<byte[]> producer = client.newProducer()
        .topic(npTopic)
        .create();

```

## System topic

A system topic is a predefined topic for internal use within Pulsar. It can be either a persistent or a non-persistent topic.

System topics serve to implement certain features and eliminate dependencies on third-party components, such as transactions, heartbeat detections, topic-level policies, and resource group services. System topics keep the implementation of these features simple, self-contained, and flexible. Take heartbeat detection for example: you can leverage the system topic for health checks to internally enable a producer/reader to produce/consume messages under the heartbeat namespace, which detects whether the current service is still alive.

There are diverse system topics depending on namespaces. The following table outlines the available system topics for each specific namespace.
| Namespace | TopicName | Domain | Count | Usage |
|-----------|-----------|--------|-------|-------|
| pulsar/system | `transaction_coordinator_assign_${id}` | Persistent | Default 16 | Transaction coordinator |
| pulsar/system | `__transaction_log_${tc_id}` | Persistent | Default 16 | Transaction log |
| pulsar/system | `resource-usage` | Non-persistent | Default 4 | Resource group service |
| host/port | `heartbeat` | Persistent | 1 | Heartbeat detection |
| User-defined-ns | [`__change_events`](concepts-multi-tenancy.md#namespace-change-events-and-topic-level-policies) | Persistent | Default 4 | Topic events |
| User-defined-ns | `__transaction_buffer_snapshot` | Persistent | One per namespace | Transaction buffer snapshots |
| User-defined-ns | `${topicName}__transaction_pending_ack` | Persistent | One per every topic subscription acknowledged with transactions | Acknowledgements with transactions |

:::note

* You cannot create any system topics.
* By default, system topics are disabled. To enable system topics, you need to change the following configurations in the `conf/broker.conf` or `conf/standalone.conf` file.

  ```conf
  systemTopicEnabled=true
  topicLevelPoliciesEnabled=true
  ```

:::

## Message retention and expiry

By default, Pulsar message brokers:

* immediately delete *all* messages that have been acknowledged by a consumer, and
* [persistently store](concepts-architecture-overview.md#persistent-storage) all unacknowledged messages in a message backlog.

Pulsar has two features, however, that enable you to override this default behavior:

* Message **retention** enables you to store messages that have been acknowledged by a consumer
* Message **expiry** enables you to set a time to live (TTL) for messages that have not yet been acknowledged

> All message retention and expiry is managed at the [namespace](#namespaces) level. For a how-to, see the [Message retention and expiry](cookbooks-retention-expiry.md) cookbook.

The diagram below illustrates both concepts:

![Message retention and expiry](/assets/retention-expiry.png)

With message retention, shown at the top, a retention policy applied to all topics in a namespace dictates that some messages are durably stored in Pulsar even though they've already been acknowledged. Acknowledged messages that are not covered by the retention policy are deleted. Without a retention policy, *all* of the acknowledged messages would be deleted.

With message expiry, shown at the bottom, some messages are deleted, even though they haven't been acknowledged, because they've expired according to the TTL applied to the namespace (for example because a TTL of 5 minutes has been applied and the messages haven't been acknowledged but are 10 minutes old).

## Message deduplication

Message duplication occurs when a message is [persisted](concepts-architecture-overview.md#persistent-storage) by Pulsar more than once. Message deduplication is an optional Pulsar feature that prevents unnecessary message duplication by processing each message only once, even if the message is received more than once.

The following diagram illustrates what happens when message deduplication is disabled vs. enabled:

![Pulsar message deduplication](/assets/message-deduplication.png)

Message deduplication is disabled in the scenario shown at the top. Here, a producer publishes message 1 on a topic; the message reaches a Pulsar broker and is [persisted](concepts-architecture-overview.md#persistent-storage) to BookKeeper. The producer then sends message 1 again (in this case due to some retry logic), and the message is received by the broker and stored in BookKeeper again, which means that duplication has occurred.

In the second scenario at the bottom, the producer publishes message 1, which is received by the broker and persisted, as in the first scenario. When the producer attempts to publish the message again, however, the broker knows that it has already seen message 1 and thus does not persist the message. Deduplication can be enabled administratively, as in the sketch below.
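For instance, deduplication might be turned on for a namespace with the `pulsar-admin` CLI (a sketch; the namespace name is a placeholder):

```shell

# Enable broker-side message deduplication for all topics in the namespace
bin/pulsar-admin namespaces set-deduplication my-tenant/my-namespace --enable

```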
> Message deduplication is handled at the namespace level or the topic level. For more instructions, see the [message deduplication cookbook](cookbooks-deduplication.md).

### Producer idempotency

The other available approach to message deduplication is to ensure that each message is *only produced once*. This approach is typically called **producer idempotency**. The drawback of this approach is that it defers the work of message deduplication to the application. In Pulsar, deduplication is handled at the [broker](reference-terminology.md#broker) level, so you do not need to modify your Pulsar client code. Instead, you only need to make administrative changes. For details, see [Managing message deduplication](cookbooks-deduplication.md).

### Deduplication and effectively-once semantics

Message deduplication makes Pulsar an ideal messaging system to be used in conjunction with stream processing engines (SPEs) and other systems seeking to provide effectively-once processing semantics. Messaging systems that do not offer automatic message deduplication require the SPE or other system to guarantee deduplication, which means that strict message ordering comes at the cost of burdening the application with the responsibility of deduplication. With Pulsar, strict ordering guarantees come at no application-level cost.

> You can find more in-depth information in [this post](https://www.splunk.com/en_us/blog/it/exactly-once-is-not-exactly-the-same.html).

## Delayed message delivery

Delayed message delivery enables you to consume a message later rather than immediately. In this mechanism, a message is stored in BookKeeper; after the message is published to a broker, the `DelayedDeliveryTracker` maintains the time index (time -> messageId) in memory, and the message is delivered to a consumer once the specified delay has passed.

Delayed message delivery only works in the Shared subscription type. In the Exclusive and Failover subscription types, delayed messages are dispatched immediately.

The diagram below illustrates the concept of delayed message delivery:

![Delayed Message Delivery](/assets/message_delay.png)

A broker saves a message without any check. When a consumer consumes a message, if the message is set to delay, then the message is added to the `DelayedDeliveryTracker`. A subscription checks and gets timed-out messages from the `DelayedDeliveryTracker`.

### Broker

Delayed message delivery is enabled by default. You can change it in the broker configuration file as below:

```

# Whether to enable the delayed delivery for messages.
# If disabled, messages are immediately delivered and there is no tracking overhead.
delayedDeliveryEnabled=true

# Control the ticking time for the retry of delayed message delivery,
# affecting the accuracy of the delivery time compared to the scheduled time.
# Default is 1 second.
delayedDeliveryTickTimeMillis=1000

```

### Producer

The following is an example of delayed message delivery for a producer in Java:

```java

// message to be delivered 3 minutes after it is sent
producer.newMessage().deliverAfter(3L, TimeUnit.MINUTES).value("Hello Pulsar!").send();

```

diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/concepts-multi-tenancy.md b/site2/website/versioned_docs/version-2.8.3-deprecated/concepts-multi-tenancy.md
deleted file mode 100644
index 93a59557b2efca..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/concepts-multi-tenancy.md
+++ /dev/null
@@ -1,67 +0,0 @@
---
id: concepts-multi-tenancy
title: Multi Tenancy
sidebar_label: "Multi Tenancy"
original_id: concepts-multi-tenancy
---

Pulsar was created from the ground up as a multi-tenant system. To support multi-tenancy, Pulsar has a concept of tenants. Tenants can be spread across clusters and can each have their own [authentication and authorization](security-overview.md) scheme applied to them. They are also the administrative unit at which storage quotas, [message TTL](cookbooks-retention-expiry.md#time-to-live-ttl), and isolation policies can be managed.

The multi-tenant nature of Pulsar is reflected most visibly in topic URLs, which have this structure:

```http

persistent://tenant/namespace/topic

```

As you can see, the tenant is the most basic unit of categorization for topics (more fundamental than the namespace and topic name).

## Tenants

To each tenant in a Pulsar instance you can assign:

* An [authorization](security-authorization.md) scheme
* The set of [clusters](reference-terminology.md#cluster) to which the tenant's configuration applies

## Namespaces

Tenants and namespaces are two key concepts of Pulsar to support multi-tenancy.

* Pulsar is provisioned for specified tenants with appropriate capacity allocated to each tenant.
* A namespace is the administrative unit nomenclature within a tenant. The configuration policies set on a namespace apply to all the topics created in that namespace. A tenant may create multiple namespaces via self-administration using the REST API and the [`pulsar-admin`](reference-pulsar-admin.md) CLI tool. For instance, a tenant with different applications can create a separate namespace for each application.

Names for topics in the same namespace will look like this:

```http

persistent://tenant/app1/topic-1

persistent://tenant/app1/topic-2

persistent://tenant/app1/topic-3

```

### Namespace change events and topic-level policies

Pulsar is a multi-tenant event streaming system. Administrators can manage the tenants and namespaces by setting policies at different levels. However, some policies, such as the retention policy and the storage quota policy, are only available at the namespace level. In many use cases, users need to set a policy at the topic level. The namespace change events approach was proposed to support topic-level policies in an efficient way. In this approach, Pulsar is used as an event log to store namespace change events (such as topic policy changes). This approach has a few benefits:

- It avoids using ZooKeeper and introducing more load to ZooKeeper.
- It uses Pulsar as an event log for propagating the policy cache, which can scale efficiently.
- It allows Pulsar SQL to query the namespace changes and audit the system.

Each namespace has a [system topic](concepts-messaging.md#system-topic) named `__change_events`.
This system topic stores change events for a given namespace. The following figure illustrates how to leverage it to update topic-level policies.

![Leverage the system topic to update topic-level policies](/assets/system-topic-for-topic-level-policies.svg)

1. Pulsar Admin clients communicate with the Admin RESTful API to update topic-level policies.
2. Any broker that receives the Admin HTTP request publishes a topic policy change event to the corresponding system topic (`__change_events`) of the namespace.
3. Each broker that owns a namespace bundle(s) subscribes to the system topic (`__change_events`) to receive the change events of the namespace.
4. Each broker applies the change events to its policy cache.
5. Once the policy cache is updated, the broker sends the response back to the Pulsar Admin clients.

:::note

By default, the system topic is disabled. To enable topic-level policies (`topicLevelPoliciesEnabled=true`), you need to enable the system topic by setting `systemTopicEnabled` to `true` in the `conf/broker.conf` or `conf/standalone.conf` file.

:::

diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/concepts-multiple-advertised-listeners.md b/site2/website/versioned_docs/version-2.8.3-deprecated/concepts-multiple-advertised-listeners.md
deleted file mode 100644
index f2e1ae0aadc7ca..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/concepts-multiple-advertised-listeners.md
+++ /dev/null
@@ -1,44 +0,0 @@
---
id: concepts-multiple-advertised-listeners
title: Multiple advertised listeners
sidebar_label: "Multiple advertised listeners"
original_id: concepts-multiple-advertised-listeners
---

When a Pulsar cluster is deployed in a production environment, it may be necessary to expose multiple advertised addresses for the broker. For example, when you deploy a Pulsar cluster in Kubernetes and want clients that are not in the same Kubernetes cluster to connect to it, you need to assign a broker URL to those external clients, while clients inside the Kubernetes cluster can still connect to the Pulsar cluster through the internal network of Kubernetes.

## Advertised listeners

To ensure clients in both internal and external networks can connect to a Pulsar cluster, Pulsar introduces the `advertisedListeners` and `internalListenerName` configuration options in the [broker configuration file](reference-configuration.md#broker), so that the broker supports exposing multiple advertised listeners and separating internal and external network traffic.

- The `advertisedListeners` option is used to specify multiple advertised listeners. The broker uses the listener as the broker identifier in the load manager and the bundle owner data. The `advertisedListeners` option is formatted as `<listenerName>:pulsar://<host>:<port>,<listenerName>:pulsar+ssl://<host>:<port>`. For example, you can set `advertisedListeners=internal:pulsar://192.168.1.11:6660,internal:pulsar+ssl://192.168.1.11:6651`.

- The `internalListenerName` option is used to specify the internal service URL that the broker uses. You can specify the `internalListenerName` by choosing one of the `advertisedListeners`. The broker uses the listener name of the first advertised listener as the `internalListenerName` if the `internalListenerName` is absent.

After setting up the `advertisedListeners`, clients can choose one of the listeners as the service URL to create a connection to the broker, as long as the network is accessible.
However, if the client creates a producer or consumer on a topic, it must send a lookup request to the broker to get the owner broker, and then connect to the owner broker to publish or consume messages. Therefore, you must allow the client to get the corresponding service URL with the same advertised listener name as the one used by the client. This helps keep the client side simple and secure.

## Use multiple advertised listeners

This example shows how a Pulsar client uses multiple advertised listeners.

1. Configure multiple advertised listeners in the broker configuration file.

```shell

advertisedListeners={listenerName}:pulsar://xxxx:6650,{listenerName}:pulsar+ssl://xxxx:6651

```

2. Specify the listener name for the client.

```java

PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar://xxxx:6650")
    .listenerName("external")
    .build();

```

diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/concepts-overview.md b/site2/website/versioned_docs/version-2.8.3-deprecated/concepts-overview.md
deleted file mode 100644
index e8a2f4b9d321a7..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/concepts-overview.md
+++ /dev/null
@@ -1,31 +0,0 @@
---
id: concepts-overview
title: Pulsar Overview
sidebar_label: "Overview"
original_id: concepts-overview
---

Pulsar is a multi-tenant, high-performance solution for server-to-server messaging. Originally developed by Yahoo, Pulsar is under the stewardship of the [Apache Software Foundation](https://www.apache.org/).

Key features of Pulsar are listed below:

* Native support for multiple clusters in a Pulsar instance, with seamless [geo-replication](administration-geo.md) of messages across clusters.
* Very low publish and end-to-end latency.
* Seamless scalability to over a million topics.
* A simple [client API](concepts-clients.md) with bindings for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python.md) and [C++](client-libraries-cpp.md).
* Multiple [subscription types](concepts-messaging.md#subscription-types) ([exclusive](concepts-messaging.md#exclusive), [shared](concepts-messaging.md#shared), and [failover](concepts-messaging.md#failover)) for topics.
* Guaranteed message delivery with [persistent message storage](concepts-architecture-overview.md#persistent-storage) provided by [Apache BookKeeper](http://bookkeeper.apache.org/).
* A serverless, lightweight computing framework, [Pulsar Functions](functions-overview.md), which offers the capability for stream-native data processing.
* A serverless connector framework, [Pulsar IO](io-overview.md), which is built on Pulsar Functions and makes it easier to move data in and out of Apache Pulsar.
* [Tiered Storage](concepts-tiered-storage.md), which offloads data from hot/warm storage to cold/long-term storage (such as S3 and GCS) when the data is aging out.
## Contents

- [Messaging Concepts](concepts-messaging.md)
- [Architecture Overview](concepts-architecture-overview.md)
- [Pulsar Clients](concepts-clients.md)
- [Geo Replication](concepts-replication.md)
- [Multi Tenancy](concepts-multi-tenancy.md)
- [Authentication and Authorization](concepts-authentication.md)
- [Topic Compaction](concepts-topic-compaction.md)
- [Tiered Storage](concepts-tiered-storage.md)

diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/concepts-proxy-sni-routing.md b/site2/website/versioned_docs/version-2.8.3-deprecated/concepts-proxy-sni-routing.md
deleted file mode 100644
index 51419a66cefe6e..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/concepts-proxy-sni-routing.md
+++ /dev/null
@@ -1,180 +0,0 @@
---
id: concepts-proxy-sni-routing
title: Proxy support with SNI routing
sidebar_label: "Proxy support with SNI routing"
original_id: concepts-proxy-sni-routing
---

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````

A proxy server is an intermediary server that forwards requests from multiple clients to different servers across the Internet. The proxy server acts as a "traffic cop" in both forward and reverse proxy scenarios, and brings benefits such as load balancing, performance, security, and auto-scaling to your system.

The proxy in Pulsar acts as a reverse proxy and creates a gateway in front of brokers. Proxies such as Apache Traffic Server (ATS), HAProxy, Nginx, and Envoy are not bundled with Pulsar, but these proxy servers support **SNI routing**. SNI routing is used to route traffic to a destination without terminating the SSL connection. Layer 4 routing provides greater transparency because the outbound connection is determined by examining the destination address in the client TCP packets.

Pulsar clients (Java, C++, Python) support the [SNI routing protocol](https://github.com/apache/pulsar/wiki/PIP-60:-Support-Proxy-server-with-SNI-routing), so you can connect to brokers through such a proxy. This document walks you through how to set up the ATS proxy, enable SNI routing, and connect a Pulsar client to the broker through the ATS proxy.

## ATS-SNI Routing in Pulsar

To support [layer-4 SNI routing](https://docs.trafficserver.apache.org/en/latest/admin-guide/layer-4-routing.en.html) with ATS, the inbound connection must be a TLS connection. The Pulsar client supports the SNI routing protocol on TLS connections, so when Pulsar clients connect to a broker through the ATS proxy, Pulsar uses ATS as a reverse proxy.

Pulsar supports SNI routing for geo-replication, so brokers can connect to brokers in other clusters through the ATS proxy.

This section explains how to set up and use ATS as a reverse proxy, so Pulsar clients can connect to brokers through the ATS proxy using the SNI routing protocol on a TLS connection.

### Set up ATS Proxy for layer-4 SNI routing

To support layer-4 SNI routing, you need to configure the `records.conf` and `ssl_server_name.conf` files.

![Pulsar client SNI](/assets/pulsar-sni-client.png)

The [records.config](https://docs.trafficserver.apache.org/en/latest/admin-guide/files/records.config.en.html) file is located in the `/usr/local/etc/trafficserver/` directory by default. The file lists configurable variables used by the ATS.

To configure the `records.config` file, complete the following steps.
1. Update the TLS port (`http.server_ports`) on which the proxy listens, and update the proxy certificates (`ssl.client.cert.path` and `ssl.client.cert.filename`) to secure TLS tunneling.
2. Configure the server ports (`http.connect_ports`) used for tunneling to the brokers. If Pulsar brokers are listening on ports `4443` and `6651`, add these broker service ports to the `http.connect_ports` configuration.

The following is an example.

```

# PROXY TLS PORT
CONFIG proxy.config.http.server_ports STRING 4443:ssl 4080
# PROXY CERTS FILE PATH
CONFIG proxy.config.ssl.client.cert.path STRING /proxy-cert.pem
# PROXY KEY FILE PATH
CONFIG proxy.config.ssl.client.cert.filename STRING /proxy-key.pem


# The range of origin server ports that can be used for tunneling via CONNECT.
# Traffic Server allows tunnels only to the specified ports.
# Supports both wildcards (*) and ranges (e.g. 0-1023).
CONFIG proxy.config.http.connect_ports STRING 4443 6651

```

The `ssl_server_name` file is used to configure TLS connection handling for inbound and outbound connections. The configuration is determined by the SNI values provided by the inbound connection. The file consists of a set of configuration items, each identified by an SNI value (`fqdn`). When an inbound TLS connection is made, the SNI value from the TLS negotiation is matched against the items specified in this file. If the values match, the values specified in that item override the default values.

The following example shows the mapping of the inbound SNI hostname coming from the client to the actual broker service URL where the request should be redirected. For example, if the client sends the SNI header `pulsar-broker1`, the proxy creates a TLS tunnel by redirecting the request to the `pulsar-broker1:6651` service URL.

```

server_config = {
  {
     fqdn = 'pulsar-broker-vip',
     # Forward to Pulsar broker which is listening on 6651
     tunnel_route = 'pulsar-broker-vip:6651'
  },
  {
     fqdn = 'pulsar-broker1',
     # Forward to Pulsar broker-1 which is listening on 6651
     tunnel_route = 'pulsar-broker1:6651'
  },
  {
     fqdn = 'pulsar-broker2',
     # Forward to Pulsar broker-2 which is listening on 6651
     tunnel_route = 'pulsar-broker2:6651'
  },
}

```

After you configure the `ssl_server_name.config` and `records.config` files, the ATS proxy server handles SNI routing and creates a TCP tunnel between the client and the broker.

### Configure Pulsar-client with SNI routing

ATS SNI routing works only with TLS. You need to enable TLS for the ATS proxy and brokers first, configure the SNI routing protocol, and then connect Pulsar clients to brokers through the ATS proxy. Pulsar clients support SNI routing by connecting to the proxy and sending the target broker URL in the SNI header. This handling happens internally. You only need to configure the following proxy settings when you initially create a Pulsar client to use the SNI routing protocol.
````mdx-code-block
<Tabs defaultValue="Java" values={[{label: "Java", value: "Java"}, {label: "C++", value: "C++"}, {label: "Python", value: "Python"}]}>
<TabItem value="Java">

```java

String brokerServiceUrl = "pulsar+ssl://pulsar-broker-vip:6651/";
String proxyUrl = "pulsar+ssl://ats-proxy:443";
ClientBuilder clientBuilder = PulsarClient.builder()
    .serviceUrl(brokerServiceUrl)
    .tlsTrustCertsFilePath(TLS_TRUST_CERT_FILE_PATH)
    .enableTls(true)
    .allowTlsInsecureConnection(false)
    .proxyServiceUrl(proxyUrl, ProxyProtocol.SNI)
    .operationTimeout(1000, TimeUnit.MILLISECONDS);

Map<String, String> authParams = new HashMap<>();
authParams.put("tlsCertFile", TLS_CLIENT_CERT_FILE_PATH);
authParams.put("tlsKeyFile", TLS_CLIENT_KEY_FILE_PATH);
clientBuilder.authentication(AuthenticationTls.class.getName(), authParams);

PulsarClient pulsarClient = clientBuilder.build();

```

</TabItem>
<TabItem value="C++">

```c++

ClientConfiguration config = ClientConfiguration();
config.setUseTls(true);
config.setTlsTrustCertsFilePath("/path/to/cacert.pem");
config.setTlsAllowInsecureConnection(false);
config.setAuth(pulsar::AuthTls::create(
    "/path/to/client-cert.pem", "/path/to/client-key.pem"));

Client client("pulsar+ssl://ats-proxy:443", config);

```

</TabItem>
<TabItem value="Python">

```python

from pulsar import Client, AuthenticationTLS

auth = AuthenticationTLS("/path/to/my-role.cert.pem", "/path/to/my-role.key-pk8.pem")
client = Client("pulsar+ssl://ats-proxy:443",
                tls_trust_certs_file_path="/path/to/ca.cert.pem",
                tls_allow_insecure_connection=False,
                authentication=auth)

```

</TabItem>
</Tabs>
````

### Pulsar geo-replication with SNI routing

You can use the ATS proxy for geo-replication. Pulsar brokers can connect to brokers in other clusters for geo-replication by using SNI routing. To enable SNI routing for broker connections across clusters, you need to configure the SNI proxy URL in the cluster metadata. If the SNI proxy URL is configured in the cluster metadata, brokers can connect to brokers in other clusters through the proxy over SNI routing.

![Pulsar client SNI](/assets/pulsar-sni-geo.png)

In this example, a Pulsar cluster is deployed into two separate regions, `us-west` and `us-east`. Both regions are configured with the ATS proxy, and brokers in each region run behind the ATS proxy. We configure the cluster metadata for both clusters, so brokers in one cluster can use SNI routing and connect to brokers in the other cluster through the ATS proxy.

(a) Configure the cluster metadata for `us-east` with the `us-east` broker service URL and the `us-east` ATS proxy URL with the SNI proxy protocol.

```

./pulsar-admin clusters update \
--broker-url-secure pulsar+ssl://east-broker-vip:6651 \
--url http://east-broker-vip:8080 \
--proxy-protocol SNI \
--proxy-url pulsar+ssl://east-ats-proxy:443

```

(b) Configure the cluster metadata for `us-west` with the `us-west` broker service URL and the `us-west` ATS proxy URL with the SNI proxy protocol.

```

./pulsar-admin clusters update \
--broker-url-secure pulsar+ssl://west-broker-vip:6651 \
--url http://west-broker-vip:8080 \
--proxy-protocol SNI \
--proxy-url pulsar+ssl://west-ats-proxy:443

```

diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/concepts-replication.md b/site2/website/versioned_docs/version-2.8.3-deprecated/concepts-replication.md
deleted file mode 100644
index 799f0eb4d92c6b..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/concepts-replication.md
+++ /dev/null
@@ -1,9 +0,0 @@
---
id: concepts-replication
title: Geo Replication
sidebar_label: "Geo Replication"
original_id: concepts-replication
---

Pulsar enables messages to be produced and consumed in different geo-locations.
For instance, your application may be publishing data in one region or market and you would like to process it for consumption in other regions or markets. [Geo-replication](administration-geo.md) in Pulsar enables you to do that.

diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/concepts-tiered-storage.md b/site2/website/versioned_docs/version-2.8.3-deprecated/concepts-tiered-storage.md
deleted file mode 100644
index f6988e53a8cd4e..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/concepts-tiered-storage.md
+++ /dev/null
@@ -1,18 +0,0 @@
---
id: concepts-tiered-storage
title: Tiered Storage
sidebar_label: "Tiered Storage"
original_id: concepts-tiered-storage
---

Pulsar's segment-oriented architecture allows for topic backlogs to grow very large, effectively without limit. However, this can become expensive over time.

One way to alleviate this cost is to use tiered storage. With tiered storage, older messages in the backlog can be moved from BookKeeper to a cheaper storage mechanism, while still allowing clients to access the backlog as if nothing had changed.

![Tiered Storage](/assets/pulsar-tiered-storage.png)

> Data written to BookKeeper is replicated to 3 physical machines by default. However, once a segment is sealed in BookKeeper it becomes immutable and can be copied to long-term storage. Long-term storage can achieve cost savings by using mechanisms such as [Reed-Solomon error correction](https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction) to require fewer physical copies of data.

Pulsar currently supports S3, Google Cloud Storage (GCS), and filesystem for [long-term storage](https://pulsar.apache.org/docs/en/cookbooks-tiered-storage/). Offloading to long-term storage is triggered via a REST API or command-line interface, as in the sketch below. The user passes in the amount of topic data they wish to retain on BookKeeper, and the broker copies the backlog data to long-term storage. The original data is then deleted from BookKeeper after a configured delay (4 hours by default).

> For a guide for setting up tiered storage, see the [Tiered storage cookbook](cookbooks-tiered-storage.md).
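A hypothetical offload run with the `pulsar-admin` CLI (a sketch; the topic name and threshold are placeholders, and an offload driver is assumed to be configured):

```shell

# Offload everything except the most recent 10MB of backlog to long-term storage
bin/pulsar-admin topics offload --size-threshold 10M persistent://my-tenant/my-namespace/my-topic

# Check the status of the offload operation
bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/my-topic

```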
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/concepts-topic-compaction.md b/site2/website/versioned_docs/version-2.8.3-deprecated/concepts-topic-compaction.md
deleted file mode 100644
index 34b7ed7fbbd31e..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/concepts-topic-compaction.md
+++ /dev/null
@@ -1,37 +0,0 @@
---
id: concepts-topic-compaction
title: Topic Compaction
sidebar_label: "Topic Compaction"
original_id: concepts-topic-compaction
---

Pulsar was built with highly scalable [persistent storage](concepts-architecture-overview.md#persistent-storage) of message data as a primary objective. Pulsar topics enable you to persistently store as many unacknowledged messages as you need while preserving message ordering. By default, Pulsar stores *all* unacknowledged/unprocessed messages produced on a topic. Accumulating many unacknowledged messages on a topic is necessary for many Pulsar use cases, but it can also be very time-intensive for Pulsar consumers to "rewind" through the entire log of messages.

> For a more practical guide to topic compaction, see the [Topic compaction cookbook](cookbooks-compaction.md).

For some use cases consumers don't need a complete "image" of the topic log. They may only need a few values to construct a more "shallow" image of the log, perhaps even just the most recent value. For these kinds of use cases Pulsar offers **topic compaction**. When you run compaction on a topic, Pulsar goes through a topic's backlog and removes messages that are *obscured* by later messages, i.e. it goes through the topic on a per-key basis and leaves only the most recent message associated with that key.

Pulsar's topic compaction feature:

* Allows for faster "rewind" through topic logs
* Applies only to [persistent topics](concepts-architecture-overview.md#persistent-storage)
* Is triggered automatically when the backlog reaches a certain size, or can be triggered manually via the command line. See the [Topic compaction cookbook](cookbooks-compaction.md)
* Is conceptually and operationally distinct from [retention and expiry](concepts-messaging.md#message-retention-and-expiry). Topic compaction *does*, however, respect retention. If retention has removed a message from the message backlog of a topic, the message will also not be readable from the compacted topic ledger.

> #### Topic compaction example: the stock ticker
> An example use case for a compacted Pulsar topic would be a stock ticker topic. On a stock ticker topic, each message bears a timestamped dollar value for stocks for purchase (with the message key holding the stock symbol, e.g. `AAPL` or `GOOG`). With a stock ticker you may care only about the most recent value(s) of the stock and have no interest in historical data (i.e. you don't need to construct a complete image of the topic's sequence of messages per key). Compaction would be highly beneficial in this case because it would keep consumers from needing to rewind through obscured messages.

## How topic compaction works

When topic compaction is triggered [via the CLI](cookbooks-compaction.md), Pulsar iterates over the entire topic from beginning to end. For each key that it encounters, the compaction routine keeps a record of the latest occurrence of that key.

After that, the broker creates a new [BookKeeper ledger](concepts-architecture-overview.md#ledgers) and makes a second iteration through each message on the topic. For each message, if the key matches the latest occurrence of that key, then the key's data payload, message ID, and metadata are written to the newly created ledger. If the key doesn't match the latest occurrence, the message is skipped and left alone. If any given message has an empty payload, it is skipped and considered deleted (akin to the concept of [tombstones](https://en.wikipedia.org/wiki/Tombstone_(data_store)) in key-value databases). At the end of this second iteration through the topic, the newly created BookKeeper ledger is closed and two things are written to the topic's metadata: the ID of the BookKeeper ledger and the message ID of the last compacted message (this is known as the **compaction horizon** of the topic). Once this metadata is written, compaction is complete.

After the initial compaction operation, the Pulsar [broker](reference-terminology.md#broker) that owns the topic is notified whenever any future changes are made to the compaction horizon and compacted backlog.
When such changes occur:

* Clients (consumers and readers) that have `readCompacted` enabled will attempt to read messages from the topic and either:
  * Read from the topic like normal (if the message ID is greater than or equal to the compaction horizon), or
  * Read beginning at the compaction horizon (if the message ID is lower than the compaction horizon)

diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/concepts-transactions.md b/site2/website/versioned_docs/version-2.8.3-deprecated/concepts-transactions.md
deleted file mode 100644
index 08490ba06b5d7e..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/concepts-transactions.md
+++ /dev/null
@@ -1,30 +0,0 @@
---
id: transactions
title: Transactions
sidebar_label: "Overview"
original_id: transactions
---

Transactional semantics enable event streaming applications to consume, process, and produce messages in one atomic operation. In Pulsar, a producer or consumer can work with messages across multiple topics and partitions and ensure those messages are processed as a single unit.

The following concepts help you understand Pulsar transactions.

## Transaction coordinator and transaction log

The transaction coordinator maintains the topics and subscriptions that interact in a transaction. When a transaction is committed, the transaction coordinator interacts with the topic owner broker to complete the transaction.

The transaction coordinator maintains the entire life cycle of transactions, and prevents a transaction from entering an incorrect status.

The transaction coordinator handles transaction timeouts, and ensures that a transaction is aborted after its timeout expires.

All the transaction metadata is persisted in the transaction log. The transaction log is backed by a Pulsar topic. After the transaction coordinator crashes, it can restore the transaction metadata from the transaction log.

## Transaction ID

The transaction ID (TxnID) identifies a unique transaction in Pulsar. The transaction ID is 128 bits long. The highest 16 bits are reserved for the ID of the transaction coordinator, and the remaining bits are used for monotonically increasing numbers in each transaction coordinator. The TxnID makes it easy to locate a crashed transaction.

## Transaction buffer

Messages produced within a transaction are stored in the transaction buffer. The messages in the transaction buffer are not materialized (visible) to consumers until the transaction is committed. The messages in the transaction buffer are discarded when the transaction is aborted.

## Pending acknowledge state

Message acknowledgements within a transaction are maintained by the pending acknowledge state before the transaction completes. If a message is in the pending acknowledge state, the message cannot be acknowledged by other transactions until the message is removed from the pending acknowledge state.

The pending acknowledge state is persisted to the pending acknowledge log. The pending acknowledge log is backed by a Pulsar topic. A new broker can restore the state from the pending acknowledge log to ensure the acknowledgement is not lost. An end-to-end sketch of how these pieces appear in the client API is shown below.
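A minimal Java sketch tying these concepts together (it assumes transactions are enabled on the broker via `transactionCoordinatorEnabled=true`, that the client was built with `enableTransaction(true)`, and that `inputConsumer`, `outputProducer`, and the `process` helper are hypothetical names defined elsewhere):

```java

// Build a transaction; the timeout is enforced by the transaction coordinator
Transaction txn = pulsarClient
        .newTransaction()
        .withTransactionTimeout(5, TimeUnit.MINUTES)
        .build()
        .get();

// Consume, process, and produce as one atomic unit
Message<String> msg = inputConsumer.receive();
outputProducer.newMessage(txn).value(process(msg.getValue())).sendAsync();
inputConsumer.acknowledgeAsync(msg.getMessageId(), txn);

// Committing makes the produced message visible and the acknowledgement
// permanent; aborting instead would discard both.
txn.commit().get();

```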
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/cookbooks-bookkeepermetadata.md b/site2/website/versioned_docs/version-2.8.3-deprecated/cookbooks-bookkeepermetadata.md
deleted file mode 100644
index b0fa98dc3b65d5..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/cookbooks-bookkeepermetadata.md
+++ /dev/null
@@ -1,21 +0,0 @@
---
id: cookbooks-bookkeepermetadata
title: BookKeeper Ledger Metadata
original_id: cookbooks-bookkeepermetadata
---

Pulsar stores data in BookKeeper ledgers. You can understand the contents of a ledger by inspecting the metadata attached to it. Such metadata is stored in ZooKeeper and is readable using the BookKeeper APIs.

Description of the current metadata:

| Scope | Metadata name | Metadata value |
| ------------- | ------------- | ------------- |
| All ledgers | application | 'pulsar' |
| All ledgers | component | 'managed-ledger', 'schema', 'compacted-topic' |
| Managed ledgers | pulsar/managed-ledger | name of the ledger |
| Cursor | pulsar/cursor | name of the cursor |
| Compacted topic | pulsar/compactedTopic | name of the original topic |
| Compacted topic | pulsar/compactedTo | id of the last compacted message |


diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/cookbooks-compaction.md b/site2/website/versioned_docs/version-2.8.3-deprecated/cookbooks-compaction.md
deleted file mode 100644
index dfa314727241a8..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/cookbooks-compaction.md
+++ /dev/null
@@ -1,142 +0,0 @@
---
id: cookbooks-compaction
title: Topic compaction
sidebar_label: "Topic compaction"
original_id: cookbooks-compaction
---

Pulsar's [topic compaction](concepts-topic-compaction.md#compaction) feature enables you to create **compacted** topics in which older, "obscured" entries are pruned from the topic, allowing for faster reads through the topic's history (which messages are deemed obscured/outdated/irrelevant will depend on your use case).

To use compaction:

* You need to give messages keys, as topic compaction in Pulsar takes place on a *per-key basis* (i.e. messages are compacted based on their key). For a stock ticker use case, the stock symbol---e.g. `AAPL` or `GOOG`---could serve as the key (more on this [below](#when-should-i-use-compacted-topics)). Messages without keys will be left alone by the compaction process.
* Compaction can be configured to run [automatically](#configuring-compaction-to-run-automatically), or you can manually [trigger](#triggering-compaction-manually) compaction using the Pulsar administrative API.
* Your consumers must be [configured](#consumer-configuration) to read from compacted topics ([Java consumers](#java), for example, have a `readCompacted` setting that must be set to `true`). If this configuration is not set, consumers will still be able to read from the non-compacted topic.


> Compaction only works on messages that have keys (as in the stock ticker example, where the stock symbol serves as the key for each message). Keys can thus be thought of as the axis along which compaction is applied. Messages that don't have keys are simply ignored by compaction.

## When should I use compacted topics?

The classic example of a topic that could benefit from compaction would be a stock ticker topic through which consumers can access up-to-date values for specific stocks.
Imagine a scenario in which messages carrying stock value data use the stock symbol as the key (`GOOG`, `AAPL`, `TWTR`, etc.). Compacting this topic would give consumers on the topic two options:

* They can read from the "original," non-compacted topic in case they need access to "historical" values, i.e. the entirety of the topic's messages.
* They can read from the compacted topic if they only want to see the most up-to-date messages.

Thus, if you're using a Pulsar topic called `stock-values`, some consumers could have access to all messages in the topic (perhaps because they're performing some kind of number crunching of all values in the last hour) while the consumers used to power the real-time stock ticker only see the compacted topic (and thus aren't forced to process outdated messages). Which variant of the topic any given consumer pulls messages from is determined by the consumer's [configuration](#consumer-configuration).

> One of the benefits of compaction in Pulsar is that you aren't forced to choose between compacted and non-compacted topics, as the compaction process leaves the original topic as-is and essentially adds an alternate topic. In other words, you can run compaction on a topic and consumers that need access to the non-compacted version of the topic will not be adversely affected.


## Configuring compaction to run automatically

Tenant administrators can configure a policy for compaction at the namespace level. The policy specifies how large the topic backlog can grow before compaction is triggered.

For example, to trigger compaction when the backlog reaches 100MB:

```bash

$ bin/pulsar-admin namespaces set-compaction-threshold \
  --threshold 100M my-tenant/my-namespace

```

Configuring the compaction threshold on a namespace will apply to all topics within that namespace.

## Triggering compaction manually

In order to run compaction on a topic, you need to use the [`topics compact`](reference-pulsar-admin.md#topics-compact) command of the [`pulsar-admin`](reference-pulsar-admin.md) CLI tool. Here's an example:

```bash

$ bin/pulsar-admin topics compact \
  persistent://my-tenant/my-namespace/my-topic

```

The `pulsar-admin` tool runs compaction via the Pulsar {@inject: rest:REST:/} API. To run compaction in its own dedicated process, i.e. *not* through the REST API, you can use the [`pulsar compact-topic`](reference-cli-tools.md#pulsar-compact-topic) command. Here's an example:

```bash

$ bin/pulsar compact-topic \
  --topic persistent://my-tenant/my-namespace/my-topic

```

> Running compaction in its own process is recommended when you want to avoid interfering with the broker's performance. Broker performance should only be affected, however, when running compaction on topics with a large keyspace (i.e. when there are many keys on the topic). The first phase of the compaction process keeps a copy of each key in the topic, which can create memory pressure as the number of keys grows. Using the `pulsar-admin topics compact` command to run compaction through the REST API should present no issues in the overwhelming majority of cases; using `pulsar compact-topic` should correspondingly be considered an edge case.

The `pulsar compact-topic` command communicates with [ZooKeeper](https://zookeeper.apache.org) directly. In order to establish communication with ZooKeeper, though, the `pulsar` CLI tool will need to have a valid [broker configuration](reference-configuration.md#broker).
You can either supply a proper configuration in `conf/broker.conf` or specify a non-default location for the configuration:

```bash

$ bin/pulsar compact-topic \
  --broker-conf /path/to/broker.conf \
  --topic persistent://my-tenant/my-namespace/my-topic

# If the configuration is in conf/broker.conf
$ bin/pulsar compact-topic \
  --topic persistent://my-tenant/my-namespace/my-topic

```

#### When should I trigger compaction?

How often you [trigger compaction](#triggering-compaction-manually) will vary widely based on the use case. If you want a compacted topic to be extremely fast to read, then you should run compaction fairly frequently.

## Consumer configuration

Pulsar consumers and readers need to be configured to read from compacted topics. The sections below show you how to enable compacted topic reads for Pulsar's language clients.

### Java

In order to read from a compacted topic using a Java consumer, the `readCompacted` parameter must be set to `true`. Here's an example consumer for a compacted topic:

```java

Consumer<byte[]> compactedTopicConsumer = client.newConsumer()
        .topic("some-compacted-topic")
        .readCompacted(true)
        .subscribe();

```

As mentioned above, topic compaction in Pulsar works on a *per-key basis*. That means that messages that you produce on compacted topics need to have keys (the content of the key will depend on your use case). Messages that don't have keys will be ignored by the compaction process. Here's an example of attaching a key to a Pulsar message using the producer's message builder:

```java

producer.newMessage()
        .key("some-key")
        .value(someByteArray);

```

The example below shows a message with a key being produced on a compacted Pulsar topic:

```java

import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;

PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650")
        .build();

Producer<byte[]> compactedTopicProducer = client.newProducer()
        .topic("some-compacted-topic")
        .create();

compactedTopicProducer.newMessage()
        .key("some-key")
        .value(someByteArray)
        .send();

```

diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/cookbooks-deduplication.md b/site2/website/versioned_docs/version-2.8.3-deprecated/cookbooks-deduplication.md
deleted file mode 100644
index f7f9e3d7bb425b..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/cookbooks-deduplication.md
+++ /dev/null
@@ -1,151 +0,0 @@
---
id: cookbooks-deduplication
title: Message deduplication
sidebar_label: "Message deduplication"
original_id: cookbooks-deduplication
---

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````


When **message deduplication** is enabled, it ensures that each message produced on a Pulsar topic is persisted to disk *only once*, even if the message is produced more than once. Message deduplication is handled automatically on the server side.

To use message deduplication in Pulsar, you need to configure your Pulsar brokers and clients.

## How it works

You can enable or disable message deduplication at the namespace level or the topic level. By default, it is disabled on all namespaces and topics.
You can enable it in the following ways:

* Enable deduplication for all namespaces/topics at the broker level.
* Enable deduplication for a specific namespace with the `pulsar-admin namespaces` interface.
* Enable deduplication for a specific topic with the `pulsar-admin topics` interface.

## Configure message deduplication

You can configure message deduplication in Pulsar using the [`broker.conf`](reference-configuration.md#broker) configuration file. The following deduplication-related parameters are available.

Parameter | Description | Default
:---------|:------------|:-------
`brokerDeduplicationEnabled` | Sets the default behavior for message deduplication in the Pulsar broker. If it is set to `true`, message deduplication is enabled on all namespaces/topics. If it is set to `false`, you have to enable or disable deduplication at the namespace level or the topic level. | `false`
`brokerDeduplicationMaxNumberOfProducers` | The maximum number of producers for which information is stored for deduplication purposes. | `10000`
`brokerDeduplicationEntriesInterval` | The number of entries after which a deduplication informational snapshot is taken. A larger interval leads to fewer snapshots being taken, though this lengthens the topic recovery time (the time required for entries published after the snapshot to be replayed). | `1000`
`brokerDeduplicationSnapshotIntervalSeconds`| The time period after which a deduplication informational snapshot is taken. It works together with `brokerDeduplicationEntriesInterval`: a snapshot is taken when either limit is reached. |`120`
`brokerDeduplicationProducerInactivityTimeoutMinutes` | The time of inactivity (in minutes) after which the broker discards deduplication information related to a disconnected producer. | `360` (6 hours)

### Set default value at the broker level

By default, message deduplication is *disabled* on all Pulsar namespaces/topics. To enable it on all namespaces/topics, set the `brokerDeduplicationEnabled` parameter to `true` and restart the broker.

Regardless of the value of `brokerDeduplicationEnabled`, enabling or disabling deduplication via the Pulsar admin CLI overrides the default settings at the broker level.

### Enable message deduplication

Though message deduplication is disabled by default at the broker level, you can enable message deduplication for a specific namespace or topic using the [`pulsar-admin namespaces set-deduplication`](reference-pulsar-admin.md#namespace-set-deduplication) or the [`pulsar-admin topics set-deduplication`](reference-pulsar-admin.md#topic-set-deduplication) command. You can use the `--enable`/`-e` flag and specify the namespace/topic.

The following example shows how to enable message deduplication at the namespace level.

```bash

$ bin/pulsar-admin namespaces set-deduplication \
  public/default \
  --enable # or just -e

```

### Disable message deduplication

Even if you enable message deduplication at the broker level, you can disable message deduplication for a specific namespace or topic using the [`pulsar-admin namespace set-deduplication`](reference-pulsar-admin.md#namespace-set-deduplication) or the [`pulsar-admin topics set-deduplication`](reference-pulsar-admin.md#topic-set-deduplication) command. Use the `--disable`/`-d` flag and specify the namespace/topic.

The following example shows how to disable message deduplication at the namespace level.
```bash

$ bin/pulsar-admin namespaces set-deduplication \
  public/default \
  --disable # or just -d

```

## Pulsar clients

If you enable message deduplication in Pulsar brokers, you need to complete the following tasks for your client producers:

1. Specify a name for the producer.
1. Set the message timeout to `0` (that is, no timeout).

The instructions for Java, Python, and C++ clients are different.

````mdx-code-block
<Tabs
  defaultValue="Java clients"
  values={[{"label":"Java clients","value":"Java clients"},{"label":"Python clients","value":"Python clients"},{"label":"C++ clients","value":"C++ clients"}]}>
<TabItem value="Java clients">

To enable message deduplication on a [Java producer](client-libraries-java.md#producers), set the producer name using the `producerName` setter, and set the timeout to `0` using the `sendTimeout` setter.

```java

import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;
import java.util.concurrent.TimeUnit;

PulsarClient pulsarClient = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650")
        .build();
Producer<byte[]> producer = pulsarClient.newProducer()
        .producerName("producer-1")
        .topic("persistent://public/default/topic-1")
        .sendTimeout(0, TimeUnit.SECONDS)
        .create();

```

</TabItem>
<TabItem value="Python clients">

To enable message deduplication on a [Python producer](client-libraries-python.md#producers), set the producer name using `producer_name`, and set the timeout to `0` using `send_timeout_millis`.

```python

import pulsar

client = pulsar.Client("pulsar://localhost:6650")
producer = client.create_producer(
    "persistent://public/default/topic-1",
    producer_name="producer-1",
    send_timeout_millis=0)

```

</TabItem>
<TabItem value="C++ clients">

To enable message deduplication on a [C++ producer](client-libraries-cpp.md#producer), set the producer name using `setProducerName`, and set the timeout to `0` using `setSendTimeout`.

```cpp

#include <pulsar/Client.h>

std::string serviceUrl = "pulsar://localhost:6650";
std::string topic = "persistent://some-tenant/ns1/topic-1";
std::string producerName = "producer-1";

Client client(serviceUrl);

ProducerConfiguration producerConfig;
producerConfig.setSendTimeout(0);
producerConfig.setProducerName(producerName);

Producer producer;

Result result = client.createProducer(topic, producerConfig, producer);

```

</TabItem>
</Tabs>
````
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/cookbooks-encryption.md b/site2/website/versioned_docs/version-2.8.3-deprecated/cookbooks-encryption.md
deleted file mode 100644
index f0d8fb8735eb63..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/cookbooks-encryption.md
+++ /dev/null
@@ -1,184 +0,0 @@
---
id: cookbooks-encryption
title: Pulsar Encryption
sidebar_label: "Encryption"
original_id: cookbooks-encryption
---

Pulsar encryption allows applications to encrypt messages at the producer and decrypt them at the consumer. Encryption is performed using a public/private key pair configured by the application. Encrypted messages can only be decrypted by consumers with a valid key.

## Asymmetric and symmetric encryption

Pulsar uses a dynamically generated symmetric AES key to encrypt messages (data). The AES key (data key) is encrypted using an application-provided ECDSA/RSA key pair; as a result, there is no need to share the secret with everyone.

The key is a public/private key pair used for encryption/decryption. The producer key is the public key, and the consumer key is the private key of the key pair.

The application configures the producer with the public key. This key is used to encrypt the AES data key. The encrypted data key is sent as part of the message header.
Only entities with the private key (in this case the consumer) will be able to decrypt the data key, which is used to decrypt the message.

A message can be encrypted with more than one key. Any one of the keys used for encrypting the message is sufficient to decrypt the message.

Pulsar does not store the encryption key anywhere in the Pulsar service. If you lose or delete the private key, your message is irretrievably lost.

## Producer
![alt text](/assets/pulsar-encryption-producer.jpg "Pulsar Encryption Producer")

## Consumer
![alt text](/assets/pulsar-encryption-consumer.jpg "Pulsar Encryption Consumer")

## Here are the steps to get started:

1. Create your ECDSA or RSA public/private key pair.

```shell

openssl ecparam -name secp521r1 -genkey -param_enc explicit -out test_ecdsa_privkey.pem
openssl ec -in test_ecdsa_privkey.pem -pubout -outform pkcs8 -out test_ecdsa_pubkey.pem

```

2. Add the public and private key to your key management system, and configure your producers to retrieve public keys and your consumers to retrieve private keys.
3. Implement the `CryptoKeyReader::getPublicKey()` interface for the producer and the `CryptoKeyReader::getPrivateKey()` interface for the consumer, which will be invoked by the Pulsar client to load the key.
4. Add the encryption key to the producer builder: `producerBuilder.addEncryptionKey("myapp.key")`
5. Add the `CryptoKeyReader` implementation to the producer/consumer builder: `producerBuilder.cryptoKeyReader(keyReader)`
6. Sample producer application:

```java

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Map;

import org.apache.pulsar.client.api.CryptoKeyReader;
import org.apache.pulsar.client.api.EncryptionKeyInfo;
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;

class RawFileKeyReader implements CryptoKeyReader {

    String publicKeyFile = "";
    String privateKeyFile = "";

    RawFileKeyReader(String pubKeyFile, String privKeyFile) {
        publicKeyFile = pubKeyFile;
        privateKeyFile = privKeyFile;
    }

    @Override
    public EncryptionKeyInfo getPublicKey(String keyName, Map<String, String> keyMeta) {
        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
        try {
            keyInfo.setKey(Files.readAllBytes(Paths.get(publicKeyFile)));
        } catch (IOException e) {
            System.out.println("ERROR: Failed to read public key from file " + publicKeyFile);
            e.printStackTrace();
        }
        return keyInfo;
    }

    @Override
    public EncryptionKeyInfo getPrivateKey(String keyName, Map<String, String> keyMeta) {
        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
        try {
            keyInfo.setKey(Files.readAllBytes(Paths.get(privateKeyFile)));
        } catch (IOException e) {
            System.out.println("ERROR: Failed to read private key from file " + privateKeyFile);
            e.printStackTrace();
        }
        return keyInfo;
    }
}

PulsarClient pulsarClient = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650")
        .build();

Producer<byte[]> producer = pulsarClient.newProducer()
        .topic("persistent://my-tenant/my-ns/my-topic")
        .cryptoKeyReader(new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem"))
        .addEncryptionKey("myappkey")
        .create();

for (int i = 0; i < 10; i++) {
    producer.send("my-message".getBytes());
}

pulsarClient.close();

```

7.
Sample consumer application:

```java

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Map;

import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.CryptoKeyReader;
import org.apache.pulsar.client.api.EncryptionKeyInfo;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.PulsarClient;

class RawFileKeyReader implements CryptoKeyReader {

    String publicKeyFile = "";
    String privateKeyFile = "";

    RawFileKeyReader(String pubKeyFile, String privKeyFile) {
        publicKeyFile = pubKeyFile;
        privateKeyFile = privKeyFile;
    }

    @Override
    public EncryptionKeyInfo getPublicKey(String keyName, Map<String, String> keyMeta) {
        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
        try {
            keyInfo.setKey(Files.readAllBytes(Paths.get(publicKeyFile)));
        } catch (IOException e) {
            System.out.println("ERROR: Failed to read public key from file " + publicKeyFile);
            e.printStackTrace();
        }
        return keyInfo;
    }

    @Override
    public EncryptionKeyInfo getPrivateKey(String keyName, Map<String, String> keyMeta) {
        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
        try {
            keyInfo.setKey(Files.readAllBytes(Paths.get(privateKeyFile)));
        } catch (IOException e) {
            System.out.println("ERROR: Failed to read private key from file " + privateKeyFile);
            e.printStackTrace();
        }
        return keyInfo;
    }
}

PulsarClient pulsarClient = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650")
        .build();

Consumer<byte[]> consumer = pulsarClient.newConsumer()
        .topic("persistent://my-tenant/my-ns/my-topic")
        .subscriptionName("my-subscriber-name")
        .cryptoKeyReader(new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem"))
        .subscribe();

Message<byte[]> msg = null;

for (int i = 0; i < 10; i++) {
    msg = consumer.receive();
    // do something
    System.out.println("Received: " + new String(msg.getData()));
}

// Acknowledge the consumption of all messages at once
consumer.acknowledgeCumulative(msg);
pulsarClient.close();

```

## Key rotation
Pulsar generates a new AES data key every 4 hours or after a certain number of messages are published. The asymmetric public key is automatically fetched by the producer every 4 hours by calling `CryptoKeyReader::getPublicKey()` to retrieve the latest version.

## Enabling encryption at the producer application:
If you produce messages that are consumed across application boundaries, you need to ensure that consumers in other applications have access to one of the private keys that can decrypt the messages. This can be done in two ways:
1. The consumer application provides you access to its public key, which you add to your producer keys.
1. You grant access to one of the private keys from the pairs used by the producer.

In some cases, the producer may want to encrypt the messages with multiple keys. For this, add all such keys to the configuration. The consumer will be able to decrypt the message as long as it has access to at least one of the keys.

For example, if messages need to be encrypted using two keys, `myapp.messagekey1` and `myapp.messagekey2`:

```java

producerBuilder.addEncryptionKey("myapp.messagekey1");
producerBuilder.addEncryptionKey("myapp.messagekey2");

```

## Decrypting encrypted messages at the consumer application:
Consumers require access to one of the private keys to decrypt messages produced by the producer. If you would like to receive encrypted messages, create a public/private key pair and give your public key to the producer application to encrypt messages using your public key.

## Handling Failures:
* Producer/consumer loses access to the key
  * The producer action will fail, indicating the cause of the failure. The application has the option to proceed with sending unencrypted messages in such cases. Call `producerBuilder.cryptoFailureAction(ProducerCryptoFailureAction)` to control the producer behavior.
The default behavior is to fail the request.
  * If consumption fails due to a decryption failure or missing keys on the consumer, the application has the option to consume the encrypted message or discard it. Call `consumerBuilder.cryptoFailureAction(ConsumerCryptoFailureAction)` to control the consumer behavior. The default behavior is to fail the request. The application will never be able to decrypt the messages if the private key is permanently lost.
* Batch messaging
  * If decryption fails and the message contains batch messages, the client will not be able to retrieve the individual messages in the batch, so message consumption fails even if the failure action is set to `CONSUME`.
* If decryption fails, message consumption stops and the application will notice backlog growth in addition to decryption failure messages in the client log. If the application does not have access to the private key to decrypt the message, the only option is to skip or discard the backlogged messages.

diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/cookbooks-message-queue.md b/site2/website/versioned_docs/version-2.8.3-deprecated/cookbooks-message-queue.md
deleted file mode 100644
index eb43cbde5fb818..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/cookbooks-message-queue.md
+++ /dev/null
@@ -1,127 +0,0 @@
---
id: cookbooks-message-queue
title: Using Pulsar as a message queue
sidebar_label: "Message queue"
original_id: cookbooks-message-queue
---

Message queues are essential components of many large-scale data architectures. If every single work object that passes through your system absolutely *must* be processed in spite of the slowness or downright failure of this or that system component, there's a good chance that you'll need a message queue to step in and ensure that unprocessed data is retained---with correct ordering---until the required actions are taken.

Pulsar is a great choice for a message queue because:

* it was built with [persistent message storage](concepts-architecture-overview.md#persistent-storage) in mind
* it offers automatic load balancing across [consumers](reference-terminology.md#consumer) for messages on a topic (or custom load balancing if you wish)

> You can use the same Pulsar installation to act as a real-time message bus and as a message queue if you wish (or just one or the other). You can set aside some topics for real-time purposes and other topics for message queue purposes (or use specific namespaces for either purpose if you wish).


# Client configuration changes

To use a Pulsar [topic](reference-terminology.md#topic) as a message queue, you should distribute the receiver load on that topic across several consumers (the optimal number of consumers will depend on the load). Each consumer must:

* Establish a [shared subscription](concepts-messaging.md#shared) and use the same subscription name as the other consumers (otherwise the subscription is not shared and the consumers can't act as a processing ensemble)
* If you'd like to have tight control over message dispatching across consumers, set the consumers' **receiver queue** size very low (potentially even to 0 if necessary). Each Pulsar [consumer](reference-terminology.md#consumer) has a receiver queue that determines how many messages the consumer will attempt to fetch at a time. A receiver queue of 1000 (the default), for example, means that the consumer will attempt to process 1000 messages from the topic's backlog upon connection.
Setting the receiver queue to zero essentially means ensuring that each consumer is only doing one thing at a time.

  The downside to restricting the receiver queue size of consumers is that it limits the potential throughput of those consumers and cannot be used with [partitioned topics](reference-terminology.md#partitioned-topic). Whether the performance/control trade-off is worthwhile will depend on your use case.

## Java clients

Here's an example Java consumer configuration that uses a shared subscription:

```java

import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.SubscriptionType;

String SERVICE_URL = "pulsar://localhost:6650";
String TOPIC = "persistent://public/default/mq-topic-1";
String subscription = "sub-1";

PulsarClient client = PulsarClient.builder()
        .serviceUrl(SERVICE_URL)
        .build();

Consumer<byte[]> consumer = client.newConsumer()
        .topic(TOPIC)
        .subscriptionName(subscription)
        .subscriptionType(SubscriptionType.Shared)
        // If you'd like to restrict the receiver queue size
        .receiverQueueSize(10)
        .subscribe();

```

## Python clients

Here's an example Python consumer configuration that uses a shared subscription:

```python

from pulsar import Client, ConsumerType

SERVICE_URL = "pulsar://localhost:6650"
TOPIC = "persistent://public/default/mq-topic-1"
SUBSCRIPTION = "sub-1"

client = Client(SERVICE_URL)
consumer = client.subscribe(
    TOPIC,
    SUBSCRIPTION,
    # If you'd like to restrict the receiver queue size
    receiver_queue_size=10,
    consumer_type=ConsumerType.Shared)

```

## C++ clients

Here's an example C++ consumer configuration that uses a shared subscription:

```cpp

#include <pulsar/Client.h>

std::string serviceUrl = "pulsar://localhost:6650";
std::string topic = "persistent://public/default/mq-topic-1";
std::string subscription = "sub-1";

Client client(serviceUrl);

ConsumerConfiguration consumerConfig;
consumerConfig.setConsumerType(ConsumerType::ConsumerShared);
// If you'd like to restrict the receiver queue size
consumerConfig.setReceiverQueueSize(10);

Consumer consumer;

Result result = client.subscribe(topic, subscription, consumerConfig, consumer);

```

## Go clients

Here is an example of a Go consumer configuration that uses a shared subscription:

```go

import (
	"log"

	"github.com/apache/pulsar-client-go/pulsar"
)

client, err := pulsar.NewClient(pulsar.ClientOptions{
	URL: "pulsar://localhost:6650",
})
if err != nil {
	log.Fatal(err)
}
consumer, err := client.Subscribe(pulsar.ConsumerOptions{
	Topic:             "persistent://public/default/mq-topic-1",
	SubscriptionName:  "sub-1",
	Type:              pulsar.Shared,
	ReceiverQueueSize: 10, // If you'd like to restrict the receiver queue size
})
if err != nil {
	log.Fatal(err)
}

```

diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/cookbooks-non-persistent.md b/site2/website/versioned_docs/version-2.8.3-deprecated/cookbooks-non-persistent.md
deleted file mode 100644
index 178301e86eb8df..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/cookbooks-non-persistent.md
+++ /dev/null
@@ -1,63 +0,0 @@
---
id: cookbooks-non-persistent
title: Non-persistent messaging
sidebar_label: "Non-persistent messaging"
original_id: cookbooks-non-persistent
---

**Non-persistent topics** are Pulsar topics in which message data is *never* [persistently stored](concepts-architecture-overview.md#persistent-storage) and is kept only in memory.
This cookbook provides:

* A basic [conceptual overview](#overview) of non-persistent topics
* Information about [configurable parameters](#configuration) related to non-persistent topics
* A guide to the [CLI interface](#cli) for managing non-persistent topics

## Overview

By default, Pulsar persistently stores *all* unacknowledged messages on multiple [BookKeeper](#persistent-storage) bookies (storage nodes). Data for messages on persistent topics can thus survive broker restarts and subscriber failover.

Pulsar also, however, supports **non-persistent topics**, which are topics on which messages are *never* persisted to disk and live only in memory. When using non-persistent delivery, killing a Pulsar [broker](reference-terminology.md#broker) or disconnecting a subscriber to a topic means that all in-transit messages are lost on that (non-persistent) topic, so clients may see message loss.

Non-persistent topics have names of this form (note the `non-persistent` in the name):

```http

non-persistent://tenant/namespace/topic

```

> For more high-level information about non-persistent topics, see the [Concepts and Architecture](concepts-messaging.md#non-persistent-topics) documentation.

## Using

> In order to use non-persistent topics, they must be [enabled](#enabling) in your Pulsar broker configuration.

Beyond that, you only need to differentiate non-persistent topics by name when interacting with them. This [`pulsar-client produce`](reference-cli-tools.md#pulsar-client-produce) command, for example, would produce one message on a non-persistent topic in a standalone cluster:

```bash

$ bin/pulsar-client produce non-persistent://public/default/example-np-topic \
  --num-produce 1 \
  --messages "This message will be stored only in memory"

```

> For a more thorough guide to non-persistent topics from an administrative perspective, see the [Non-persistent topics](admin-api-topics.md) guide.

## Enabling

In order to enable non-persistent topics in a Pulsar broker, the [`enableNonPersistentTopics`](reference-configuration.md#broker-enableNonPersistentTopics) parameter must be set to `true`. This is the default, so you won't need to take any action to enable non-persistent messaging.


> #### Configuration for standalone mode
> If you're running Pulsar in standalone mode, the same configurable parameters are available, but in the [`standalone.conf`](reference-configuration.md#standalone) configuration file.

If you'd like to enable *only* non-persistent topics in a broker, you can set the [`enablePersistentTopics`](reference-configuration.md#broker-enablePersistentTopics) parameter to `false` and the `enableNonPersistentTopics` parameter to `true`.

## Managing with cli

Non-persistent topics can be managed using the [`pulsar-admin non-persistent`](reference-pulsar-admin.md#non-persistent) command-line interface. With that interface you can perform actions like [creating a partitioned non-persistent topic](reference-pulsar-admin.md#non-persistent-create-partitioned-topic), getting [stats](reference-pulsar-admin.md#non-persistent-stats) for a non-persistent topic, [listing](reference-pulsar-admin.md) non-persistent topics under a namespace, and more.

## Using with Pulsar clients

You shouldn't need to make any changes to your Pulsar clients to use non-persistent messaging beyond making sure that you use proper [topic names](#using) with `non-persistent` as the topic type.
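For example, here is a minimal Java sketch of the same produce operation as the CLI example above. It assumes a standalone cluster at `pulsar://localhost:6650`; only the topic name distinguishes it from the persistent case:

```java

import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;

PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650")
        .build();

// The only difference from a persistent topic is the `non-persistent` prefix.
Producer<byte[]> producer = client.newProducer()
        .topic("non-persistent://public/default/example-np-topic")
        .create();

producer.send("This message will be stored only in memory".getBytes());

client.close();

```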
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/cookbooks-partitioned.md b/site2/website/versioned_docs/version-2.8.3-deprecated/cookbooks-partitioned.md
deleted file mode 100644
index fb9ac354cc6d60..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/cookbooks-partitioned.md
+++ /dev/null
@@ -1,7 +0,0 @@
---
id: cookbooks-partitioned
title: Partitioned topics
sidebar_label: "Partitioned Topics"
original_id: cookbooks-partitioned
---
For details, refer to [manage topics](admin-api-topics.md).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/cookbooks-retention-expiry.md b/site2/website/versioned_docs/version-2.8.3-deprecated/cookbooks-retention-expiry.md
deleted file mode 100644
index 192fbe6c53277f..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/cookbooks-retention-expiry.md
+++ /dev/null
@@ -1,413 +0,0 @@
---
id: cookbooks-retention-expiry
title: Message retention and expiry
sidebar_label: "Message retention and expiry"
original_id: cookbooks-retention-expiry
---

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````


Pulsar brokers are responsible for handling messages that pass through Pulsar, including [persistent storage](concepts-architecture-overview.md#persistent-storage) of messages. By default, for each topic, brokers only retain messages that are in at least one backlog. A backlog is the set of unacknowledged messages for a particular subscription. As a topic can have multiple subscriptions, a topic can have multiple backlogs.

As a consequence, no messages are retained (by default) on a topic that has not had any subscriptions created for it.

(Note that messages that are no longer being stored are not necessarily immediately deleted, and may in fact still be accessible until the next ledger rollover. Because clients cannot predict when rollovers may happen, it is not wise to rely on a rollover not happening at an inconvenient point in time.)

In Pulsar, you can modify this behavior, with namespace granularity, in two ways:

* You can persistently store messages that are not within a backlog (because they've been acknowledged on every existing subscription, or because there are no subscriptions) by setting [retention policies](#retention-policies).
* Messages that are not acknowledged within a specified timeframe can be automatically acknowledged by specifying the [time to live](#time-to-live-ttl) (TTL).

Pulsar's [admin interface](admin-api-overview.md) enables you to manage both retention policies and TTL with namespace granularity (and thus within a specific tenant and either on a specific cluster or in the [`global`](concepts-architecture-overview.md#global-cluster) cluster).


> #### Retention and TTL solve two different problems
> * Message retention: Keep the data for at least X hours (even if acknowledged)
> * Time-to-live: Discard data after some time (by automatically acknowledging)
>
> Most applications will want to use at most one of these.


## Retention policies

By default, when a Pulsar message arrives at a broker, the message is stored until it has been acknowledged on all subscriptions, at which point it is marked for deletion. You can override this behavior and retain messages that have already been acknowledged on all subscriptions by setting a *retention policy* for all topics in a given namespace.
Retention is based on both a *size limit* and a *time limit*.

Retention policies are useful when you use the Reader interface. The Reader interface does not use acknowledgments, so messages never exist within backlogs. You must configure retention for Reader-only use cases.

When you set a retention policy on topics in a namespace, you must set **both** a *size limit* and a *time limit*. You can refer to the following table to set retention policies in `pulsar-admin` and Java.

| Time limit | Size limit | Message retention |
|------------|------------|------------------------|
| -1 | -1 | Infinite retention |
| -1 | >0 | Based on the size limit |
| >0 | -1 | Based on the time limit |
| 0 | 0 | Disable message retention (by default) |
| 0 | >0 | Invalid |
| >0 | 0 | Invalid |
| >0 | >0 | Acknowledged messages or messages with no active subscription will not be retained when either the time or size reaches the limit. |

The retention settings apply to all messages on topics that do not have any subscriptions, or to messages that have been acknowledged by all subscriptions. The retention policy settings do not affect unacknowledged messages on topics with subscriptions. Unacknowledged messages are controlled by the backlog quota.

When a retention limit on a topic is exceeded, the oldest messages are marked for deletion until the set of retained messages falls within the specified limits again.

### Defaults

You can set message retention at the instance level with the following two parameters: `defaultRetentionTimeInMinutes` and `defaultRetentionSizeInMB`. Both parameters are set to `0` by default.

For more information on the two parameters, refer to the [`broker.conf`](reference-configuration.md#broker) configuration file.

### Set retention policy

You can set a retention policy for a namespace by specifying the namespace, a size limit and a time limit in `pulsar-admin`, REST API and Java.

````mdx-code-block
<Tabs
  defaultValue="pulsar-admin"
  values={[{"label":"pulsar-admin","value":"pulsar-admin"},{"label":"REST API","value":"REST API"},{"label":"Java","value":"Java"}]}>
<TabItem value="pulsar-admin">

You can use the [`set-retention`](reference-pulsar-admin.md#namespaces-set-retention) subcommand and specify a namespace, a size limit using the `-s`/`--size` flag, and a time limit using the `-t`/`--time` flag.

In the following example, the size limit is set to 10 GB and the time limit is set to 3 hours for each topic within the `my-tenant/my-ns` namespace.
- When the size of messages reaches 10 GB on a topic within 3 hours, the acknowledged messages will not be retained.
- After 3 hours, even if the message size is less than 10 GB, the acknowledged messages will not be retained.

```shell

$ pulsar-admin namespaces set-retention my-tenant/my-ns \
  --size 10G \
  --time 3h

```

In the following example, the time is not limited and the size limit is set to 1 TB. The size limit determines the retention.

```shell

$ pulsar-admin namespaces set-retention my-tenant/my-ns \
  --size 1T \
  --time -1

```

In the following example, the size is not limited and the time limit is set to 3 hours. The time limit determines the retention.

```shell

$ pulsar-admin namespaces set-retention my-tenant/my-ns \
  --size -1 \
  --time 3h

```

To achieve infinite retention, set both values to `-1`.

```shell

$ pulsar-admin namespaces set-retention my-tenant/my-ns \
  --size -1 \
  --time -1

```

To disable the retention policy, set both values to `0`.
```shell

$ pulsar-admin namespaces set-retention my-tenant/my-ns \
  --size 0 \
  --time 0

```

</TabItem>
<TabItem value="REST API">

{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/retention|operation/setRetention?version=@pulsar:version_number@}

:::note

To disable the retention policy, you need to set both the size and time limits to `0`. Setting only one of them to `0` is invalid.

:::

</TabItem>
<TabItem value="Java">

```java

int retentionTime = 10; // 10 minutes
int retentionSize = 500; // 500 megabytes
RetentionPolicies policies = new RetentionPolicies(retentionTime, retentionSize);
admin.namespaces().setRetention(namespace, policies);

```

</TabItem>
</Tabs>
````

### Get retention policy

You can fetch the retention policy for a namespace by specifying the namespace. The output will be a JSON object with two keys: `retentionTimeInMinutes` and `retentionSizeInMB`.

#### pulsar-admin

Use the [`get-retention`](reference-pulsar-admin.md#namespaces) subcommand and specify the namespace.

##### Example

```shell

$ pulsar-admin namespaces get-retention my-tenant/my-ns
{
  "retentionTimeInMinutes": 10,
  "retentionSizeInMB": 500
}

```

#### REST API

{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/retention|operation/getRetention?version=@pulsar:version_number@}

#### Java

```java

admin.namespaces().getRetention(namespace);

```

## Backlog quotas

*Backlogs* are sets of unacknowledged messages for a topic that have been stored by bookies. Pulsar stores all unacknowledged messages in backlogs until they are processed and acknowledged.

You can control the allowable size of backlogs, at the namespace level, using *backlog quotas*. Setting a backlog quota involves setting:

* an allowable *size threshold* for each topic in the namespace
* a *retention policy* that determines which action the [broker](reference-terminology.md#broker) takes if the threshold is exceeded.

The following retention policies are available:

Policy | Action
:------|:------
`producer_request_hold` | The broker will hold and not persist produce request payloads
`producer_exception` | The broker will disconnect from the client by throwing an exception
`consumer_backlog_eviction` | The broker will begin discarding backlog messages


> #### Beware the distinction between retention policy types
> As you may have noticed, there are two definitions of the term "retention policy" in Pulsar: one that applies to persistent storage of messages not in backlogs, and one that applies to messages within backlogs.


Backlog quotas are handled at the namespace level. They can be managed via the `pulsar-admin` tool, the REST API, or the Java admin client, as shown below.

### Set size/time thresholds and backlog retention policies

You can set a size and/or time threshold and a backlog retention policy for all of the topics in a [namespace](reference-terminology.md#namespace) by specifying the namespace, a size limit and/or a time limit in seconds, and a policy by name.

#### pulsar-admin

Use the [`set-backlog-quota`](reference-pulsar-admin.md#namespaces) subcommand and specify a namespace, a size limit using the `-l`/`--limit` flag, and a retention policy using the `-p`/`--policy` flag.
##### Example

```shell

$ pulsar-admin namespaces set-backlog-quota my-tenant/my-ns \
  --limit 2G \
  --limitTime 36000 \
  --policy producer_request_hold

```

#### REST API

{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/getBacklogQuotaMap?version=@pulsar:version_number@}

#### Java

```java

long sizeLimit = 2147483648L;
BacklogQuota.RetentionPolicy policy = BacklogQuota.RetentionPolicy.producer_request_hold;
BacklogQuota quota = new BacklogQuota(sizeLimit, policy);
admin.namespaces().setBacklogQuota(namespace, quota);

```

### Get backlog threshold and backlog retention policy

You can see which size threshold and backlog retention policy have been applied to a namespace.

#### pulsar-admin

Use the [`get-backlog-quotas`](reference-pulsar-admin.md#pulsar-admin-namespaces-get-backlog-quotas) subcommand and specify a namespace. Here's an example:

```shell

$ pulsar-admin namespaces get-backlog-quotas my-tenant/my-ns
{
  "destination_storage": {
    "limit" : 2147483648,
    "policy" : "producer_request_hold"
  }
}

```

#### REST API

{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/backlogQuotaMap|operation/getBacklogQuotaMap?version=@pulsar:version_number@}

#### Java

```java

Map<BacklogQuotaType, BacklogQuota> quotas =
  admin.namespaces().getBacklogQuotas(namespace);

```

### Remove backlog quotas

#### pulsar-admin

Use the [`remove-backlog-quota`](reference-pulsar-admin.md#pulsar-admin-namespaces-remove-backlog-quota) subcommand and specify a namespace. Here's an example:

```shell

$ pulsar-admin namespaces remove-backlog-quota my-tenant/my-ns

```

#### REST API

{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/removeBacklogQuota?version=@pulsar:version_number@}

#### Java

```java

admin.namespaces().removeBacklogQuota(namespace);

```

### Clear backlog

#### pulsar-admin

Use the [`clear-backlog`](reference-pulsar-admin.md#pulsar-admin-namespaces-clear-backlog) subcommand.

##### Example

```shell

$ pulsar-admin namespaces clear-backlog my-tenant/my-ns

```

By default, you will be prompted to ensure that you really want to clear the backlog for the namespace. You can override the prompt using the `-f`/`--force` flag.

## Time to live (TTL)

By default, Pulsar stores all unacknowledged messages forever. This can lead to heavy disk space usage in cases where a lot of messages are going unacknowledged. If disk space is a concern, you can set a time to live (TTL) that determines how long unacknowledged messages will be retained.

### Set the TTL for a namespace

#### pulsar-admin

Use the [`set-message-ttl`](reference-pulsar-admin.md#pulsar-admin-namespaces-set-message-ttl) subcommand and specify a namespace and a TTL (in seconds) using the `-ttl`/`--messageTTL` flag.

##### Example

```shell

$ pulsar-admin namespaces set-message-ttl my-tenant/my-ns \
  --messageTTL 120 # TTL of 2 minutes

```

#### REST API

{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/setNamespaceMessageTTL?version=@pulsar:version_number@}

#### Java

```java

admin.namespaces().setNamespaceMessageTTL(namespace, ttlInSeconds);

```

### Get the TTL configuration for a namespace

#### pulsar-admin

Use the [`get-message-ttl`](reference-pulsar-admin.md#pulsar-admin-namespaces-get-message-ttl) subcommand and specify a namespace.
##### Example

```shell

$ pulsar-admin namespaces get-message-ttl my-tenant/my-ns
60

```

#### REST API

{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/getNamespaceMessageTTL?version=@pulsar:version_number@}

#### Java

```java

admin.namespaces().getNamespaceMessageTTL(namespace);

```

### Remove the TTL configuration for a namespace

#### pulsar-admin

Use the [`remove-message-ttl`](reference-pulsar-admin.md#pulsar-admin-namespaces-remove-message-ttl) subcommand and specify a namespace.

##### Example

```shell

$ pulsar-admin namespaces remove-message-ttl my-tenant/my-ns

```

#### REST API

{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/removeNamespaceMessageTTL?version=@pulsar:version_number@}

#### Java

```java

admin.namespaces().removeNamespaceMessageTTL(namespace);

```

## Delete messages from namespaces

If you do not have any retention period and never have much of a backlog, the upper limit for retaining acknowledged messages equals the Pulsar segment rollover period + the entry log rollover period + (the garbage collection interval * the garbage collection ratios).

- **Segment rollover period**: basically, the segment rollover period is how often a new segment is created. Once a new segment is created, the old segment will be deleted. By default, this happens either when you have written 50,000 entries (messages) or have waited 240 minutes. You can tune this in your broker.

- **Entry log rollover period**: multiple ledgers in BookKeeper are interleaved into an [entry log](https://bookkeeper.apache.org/docs/4.11.1/getting-started/concepts/#entry-logs). For a deleted ledger's data to be removed, the entry logs containing it must all be rolled over. The entry log rollover period is configurable, but is purely based on the entry log size. For details, see [here](https://bookkeeper.apache.org/docs/4.11.1/reference/config/#entry-log-settings). Once the entry log is rolled over, the entry log can be garbage collected.

- **Garbage collection interval**: because entry logs have interleaved ledgers, to free up space, the entry logs need to be rewritten. The garbage collection interval is how often BookKeeper performs garbage collection, which is related to minor compaction and major compaction of entry logs. For details, see [here](https://bookkeeper.apache.org/docs/4.11.1/reference/config/#entry-log-compaction-settings).
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/cookbooks-tiered-storage.md b/site2/website/versioned_docs/version-2.8.3-deprecated/cookbooks-tiered-storage.md
deleted file mode 100644
index 8f6a7fb35206e2..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/cookbooks-tiered-storage.md
+++ /dev/null
@@ -1,342 +0,0 @@
---
id: cookbooks-tiered-storage
title: Tiered Storage
sidebar_label: "Tiered Storage"
original_id: cookbooks-tiered-storage
---

Pulsar's **Tiered Storage** feature allows older backlog data to be offloaded to long-term storage, thereby freeing up space in BookKeeper and reducing storage costs. This cookbook walks you through using tiered storage in your Pulsar cluster.

* Tiered storage uses [Apache jclouds](https://jclouds.apache.org) to support [Amazon S3](https://aws.amazon.com/s3/) and [Google Cloud Storage](https://cloud.google.com/storage/) (GCS for short) for long-term storage.
With jclouds, it is easy to add support for more [cloud storage providers](https://jclouds.apache.org/reference/providers/#blobstore-providers) in the future.

* Tiered storage uses [Apache Hadoop](http://hadoop.apache.org/) to support filesystems for long-term storage. With Hadoop, it is easy to add support for more filesystems in the future.

## When should I use Tiered Storage?

Tiered storage should be used when you have a topic for which you want to keep a very long backlog. For example, if you have a topic containing user actions which you use to train your recommendation systems, you may want to keep that data for a long time, so that if you change your recommendation algorithm, you can rerun it against your full user history.

## The offloading mechanism

A topic in Pulsar is backed by a log, known as a managed ledger. This log is composed of an ordered list of segments. Pulsar only ever writes to the final segment of the log. All previous segments are sealed. The data within a segment is immutable. This is known as a segment-oriented architecture.

![Tiered storage](/assets/pulsar-tiered-storage.png "Tiered Storage")

The Tiered Storage offloading mechanism takes advantage of this segment-oriented architecture. When offloading is requested, the segments of the log are copied, one-by-one, to tiered storage. All segments of the log, apart from the segment currently being written to, can be offloaded.

On the broker, the administrator must configure the bucket and credentials for the cloud storage service. The configured bucket must exist before attempting to offload. If it does not exist, the offload operation will fail.

Pulsar uses multi-part objects to upload the segment data, and it is possible that a broker could crash while uploading the data. We recommend you add a lifecycle rule to your bucket to expire incomplete multi-part uploads after a day or two to avoid getting charged for incomplete uploads.

When ledgers are offloaded to long-term storage, you can still query data in the offloaded ledgers with Pulsar SQL.

## Configuring the offload driver

Offloading is configured in ```broker.conf```.

At a minimum, the administrator must configure the driver, the bucket and the authenticating credentials. There are also some other knobs to configure, like the bucket region, the max block size in the backing storage, etc.

Currently we support the following driver types:

- `aws-s3`: [Simple Cloud Storage Service](https://aws.amazon.com/s3/)
- `google-cloud-storage`: [Google Cloud Storage](https://cloud.google.com/storage/)
- `filesystem`: [Filesystem Storage](http://hadoop.apache.org/)

> Driver names are case-insensitive. There is an additional driver type, `s3`, which is identical to `aws-s3`, though it requires that you specify an endpoint URL using `s3ManagedLedgerOffloadServiceEndpoint`. This is useful if you are using an S3-compatible data store other than AWS.

```conf

managedLedgerOffloadDriver=aws-s3

```

### "aws-s3" Driver configuration

#### Bucket and Region

Buckets are the basic containers that hold your data. Everything that you store in Cloud Storage must be contained in a bucket. You can use buckets to organize your data and control access to your data, but unlike directories and folders, you cannot nest buckets.

```conf

s3ManagedLedgerOffloadBucket=pulsar-topic-offload

```

The bucket region is the region where the bucket is located. It is not a required but a recommended configuration.
If it is not configured, the default region is used.

With AWS S3, the default region is `US East (N. Virginia)`. The page [AWS Regions and Endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html) contains more information.

```conf

s3ManagedLedgerOffloadRegion=eu-west-3

```

#### Authentication with AWS

To be able to access AWS S3, you need to authenticate with AWS S3. Pulsar does not provide any direct means of configuring authentication for AWS S3, but relies on the mechanisms supported by the [DefaultAWSCredentialsProviderChain](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html).

Once you have created a set of credentials in the AWS IAM console, they can be configured in a number of ways.

1. Using EC2 instance metadata credentials

If you are on an AWS instance with an instance profile that provides credentials, Pulsar will use these credentials if no other mechanism is provided.

2. Set the environment variables **AWS_ACCESS_KEY_ID** and **AWS_SECRET_ACCESS_KEY** in ```conf/pulsar_env.sh```.

```bash

export AWS_ACCESS_KEY_ID=ABC123456789
export AWS_SECRET_ACCESS_KEY=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c

```

> "export" is important so that the variables are made available in the environment of spawned processes.


3. Add the Java system properties *aws.accessKeyId* and *aws.secretKey* to **PULSAR_EXTRA_OPTS** in `conf/pulsar_env.sh`.

```bash

PULSAR_EXTRA_OPTS="${PULSAR_EXTRA_OPTS} ${PULSAR_MEM} ${PULSAR_GC} -Daws.accessKeyId=ABC123456789 -Daws.secretKey=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c -Dio.netty.leakDetectionLevel=disabled -Dio.netty.recycler.maxCapacity.default=1000 -Dio.netty.recycler.linkCapacity=1024"

```

4. Set the access credentials in ```~/.aws/credentials```.

```conf

[default]
aws_access_key_id=ABC123456789
aws_secret_access_key=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c

```

5. Assuming an IAM role

If you want to assume an IAM role, this can be done by specifying the following:

```conf

s3ManagedLedgerOffloadRole=<aws role arn>
s3ManagedLedgerOffloadRoleSessionName=pulsar-s3-offload

```

This will use the `DefaultAWSCredentialsProviderChain` for assuming this role.

> The broker must be restarted for credentials specified in `pulsar_env.sh` to take effect.

#### Configuring the size of block read/write

Pulsar also provides some knobs to configure the size of requests sent to AWS S3.

- ```s3ManagedLedgerOffloadMaxBlockSizeInBytes``` configures the maximum size of a "part" sent during a multipart upload. This cannot be smaller than 5MB. Default is 64MB.
- ```s3ManagedLedgerOffloadReadBufferSizeInBytes``` configures the block size for each individual read when reading back data from AWS S3. Default is 1MB.

In both cases, these should not be touched unless you know what you are doing.

### "google-cloud-storage" Driver configuration

Buckets are the basic containers that hold your data. Everything that you store in Cloud Storage must be contained in a bucket. You can use buckets to organize your data and control access to your data, but unlike directories and folders, you cannot nest buckets.

```conf

gcsManagedLedgerOffloadBucket=pulsar-topic-offload

```

The bucket region is the region where the bucket is located. It is not a required but a recommended configuration. If it is not configured, the default region is used.
### "google-cloud-storage" Driver configuration

Buckets are the basic containers that hold your data. Everything that you store in
Cloud Storage must be contained in a bucket. You can use buckets to organize your data and
control access to your data, but unlike directories and folders, you cannot nest buckets.

```conf

gcsManagedLedgerOffloadBucket=pulsar-topic-offload

```

The bucket region is the region where the bucket is located. It is not a required,
but a recommended, configuration. If it is not configured, the default region is used.

With GCS, buckets are created in the `us multi-regional location` by default. The
page [Bucket Locations](https://cloud.google.com/storage/docs/bucket-locations) contains more information.

```conf

gcsManagedLedgerOffloadRegion=europe-west3

```

#### Authentication with GCS

The administrator needs to configure `gcsManagedLedgerOffloadServiceAccountKeyFile` in `broker.conf`
for the broker to be able to access the GCS service. `gcsManagedLedgerOffloadServiceAccountKeyFile` is
a JSON file containing the GCS credentials of a service account.
The [Service Accounts section of this page](https://support.google.com/googleapi/answer/6158849) contains
more information on how to create this key file for authentication. More information about Google Cloud IAM
is available [here](https://cloud.google.com/storage/docs/access-control/iam).

To generate service account credentials or view the public credentials that you have already generated, follow these steps:

1. Open the [Service accounts page](https://console.developers.google.com/iam-admin/serviceaccounts).
2. Select a project or create a new one.
3. Click **Create service account**.
4. In the **Create service account** window, type a name for the service account and select **Furnish a new private key**. If you want to [grant G Suite domain-wide authority](https://developers.google.com/identity/protocols/OAuth2ServiceAccount#delegatingauthority) to the service account, also select **Enable G Suite Domain-wide Delegation**.
5. Click **Create**.

> Note: Make sure that the service account you create has permission to operate GCS; you need to assign the **Storage Admin** permission to your service account as described [here](https://cloud.google.com/storage/docs/access-control/iam).

```conf

gcsManagedLedgerOffloadServiceAccountKeyFile="/Users/hello/Downloads/project-804d5e6a6f33.json"

```
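If you prefer the command line to the console workflow above, a key for an existing service account can also be created with the `gcloud` CLI. In this sketch, the service account address and the output path are placeholders:

```bash

$ gcloud iam service-accounts keys create /path/to/gcs-offload-key.json \
  --iam-account=pulsar-offload@my-project.iam.gserviceaccount.com

```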
#### Configuring the size of block read/write

Pulsar also provides some knobs to configure the size of requests sent to GCS.

- ```gcsManagedLedgerOffloadMaxBlockSizeInBytes``` configures the maximum size of a "part" sent
  during a multipart upload. This cannot be smaller than 5MB. Default is 64MB.
- ```gcsManagedLedgerOffloadReadBufferSizeInBytes``` configures the block size for each individual
  read when reading back data from GCS. Default is 1MB.

In both cases, these should not be touched unless you know what you are doing.

### "filesystem" Driver configuration

#### Configure connection address

You can configure the connection address in the `broker.conf` file.

```conf

fileSystemURI="hdfs://127.0.0.1:9000"

```

#### Configure Hadoop profile path

The configuration file is stored in the Hadoop profile path. It contains various settings, such as base path, authentication, and so on.

```conf

fileSystemProfilePath="../conf/filesystem_offload_core_site.xml"

```

The model for storing topic data uses `org.apache.hadoop.io.MapFile`. You can use all of the configurations in `org.apache.hadoop.io.MapFile` for Hadoop.

**Example**

```conf

<property>
    <name>fs.defaultFS</name>
    <value></value>
</property>

<property>
    <name>hadoop.tmp.dir</name>
    <value>pulsar</value>
</property>

<property>
    <name>io.file.buffer.size</name>
    <value>4096</value>
</property>

<property>
    <name>io.seqfile.compress.blocksize</name>
    <value>1000000</value>
</property>

<property>
    <name>io.seqfile.compression.type</name>
    <value>BLOCK</value>
</property>

<property>
    <name>io.map.index.interval</name>
    <value>128</value>
</property>

```

For more information about the configurations in `org.apache.hadoop.io.MapFile`, see [Filesystem Storage](http://hadoop.apache.org/).

## Configuring offload to run automatically

Namespace policies can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that the topic has stored on the Pulsar cluster. Once the topic reaches the threshold, an offload operation is triggered. Setting a negative value for the threshold disables automatic offloading. Setting the threshold to 0 causes the broker to offload data as soon as it possibly can.

```bash

$ bin/pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace

```

> Automatic offload runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, offload will not be triggered until the current segment is full.
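You can check the threshold currently in effect for a namespace with the matching `get` command:

```bash

$ bin/pulsar-admin namespaces get-offload-threshold my-tenant/my-namespace

```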
## Configuring read priority for offloaded messages

By default, once messages are offloaded to long term storage, brokers read them from long term storage, even though the messages still exist in BookKeeper for a period that depends on the administrator's configuration. For messages that exist in both BookKeeper and long term storage, if you prefer to read them from BookKeeper, you can use the following commands to change this configuration.

```bash

# default value for -orp is tiered-storage-first
$ bin/pulsar-admin namespaces set-offload-policies my-tenant/my-namespace -orp bookkeeper-first
$ bin/pulsar-admin topics set-offload-policies my-tenant/my-namespace/topic1 -orp bookkeeper-first

```

## Triggering offload manually

Offloading can be triggered manually through a REST endpoint on the Pulsar broker. We provide a CLI which calls this REST endpoint for you.

When triggering offload, you must specify the maximum size, in bytes, of backlog which will be retained locally in BookKeeper. The offload mechanism offloads segments from the start of the topic backlog until this condition is met.

```bash

$ bin/pulsar-admin topics offload --size-threshold 10M my-tenant/my-namespace/topic1
Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1

```

The command that triggers an offload does not wait until the offload operation has completed. To check the status of the offload, use `offload-status`.

```bash

$ bin/pulsar-admin topics offload-status my-tenant/my-namespace/topic1
Offload is currently running

```

To wait for offload to complete, add the `-w` flag.

```bash

$ bin/pulsar-admin topics offload-status -w my-tenant/my-namespace/topic1
Offload was a success

```

If there is an error offloading, the error is propagated to the `offload-status` command.

```bash

$ bin/pulsar-admin topics offload-status persistent://public/default/topic1
Error in offload
null

Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=

```

diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/deploy-aws.md b/site2/website/versioned_docs/version-2.8.3-deprecated/deploy-aws.md
deleted file mode 100644
index 662e99ceb9b09e..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/deploy-aws.md
+++ /dev/null
@@ -1,271 +0,0 @@
---
id: deploy-aws
title: Deploying a Pulsar cluster on AWS using Terraform and Ansible
sidebar_label: "Amazon Web Services"
original_id: deploy-aws
---

> For instructions on deploying a single Pulsar cluster manually rather than using Terraform and Ansible, see [Deploying a Pulsar cluster on bare metal](deploy-bare-metal.md). For instructions on manually deploying a multi-cluster Pulsar instance, see [Deploying a Pulsar instance on bare metal](deploy-bare-metal-multi-cluster.md).

One of the easiest ways to get a Pulsar [cluster](reference-terminology.md#cluster) running on [Amazon Web Services](https://aws.amazon.com/) (AWS) is to use the [Terraform](https://terraform.io) infrastructure provisioning tool and the [Ansible](https://www.ansible.com) server automation tool. Terraform can create the resources necessary for running the Pulsar cluster---[EC2](https://aws.amazon.com/ec2/) instances, networking and security infrastructure, etc.---while Ansible can install and run Pulsar on the provisioned resources.

## Requirements and setup

In order to install a Pulsar cluster on AWS using Terraform and Ansible, you need to prepare the following:

* An [AWS account](https://aws.amazon.com/account/) and the [`aws`](https://aws.amazon.com/cli/) command-line tool
* Python and [pip](https://pip.pypa.io/en/stable/)
* The [`terraform-inventory`](https://github.com/adammck/terraform-inventory) tool, which enables Ansible to use Terraform artifacts

You also need to make sure that you are currently logged into your AWS account via the `aws` tool:

```bash

$ aws configure

```

## Installation

You can install Ansible on Linux or macOS using pip.

```bash

$ pip install ansible

```

You can install Terraform using the instructions [here](https://learn.hashicorp.com/tutorials/terraform/install-cli).

You also need to have the Terraform and Ansible configuration for Pulsar locally on your machine. You can find them in the [GitHub repository](https://github.com/apache/pulsar) of Pulsar, which you can fetch using Git commands:

```bash

$ git clone https://github.com/apache/pulsar
$ cd pulsar/deployment/terraform-ansible/aws

```

## SSH setup

> If you already have an SSH key and want to use it, you can skip the step of generating an SSH key and update the `private_key_file` setting
> in the `ansible.cfg` file and the `public_key_path` setting in the `terraform.tfvars` file.
>
> For example, if you already have a private SSH key in `~/.ssh/pulsar_aws` and a public key in `~/.ssh/pulsar_aws.pub`,
> follow the steps below:
>
> 1. Update `ansible.cfg` with the following values:
>
> ```shell
>
> private_key_file=~/.ssh/pulsar_aws
>
> ```
>
> 2. Update `terraform.tfvars` with the following values:
>
> ```shell
>
> public_key_path=~/.ssh/pulsar_aws.pub
>
> ```

In order to create the necessary AWS resources using Terraform, you need to create an SSH key. Enter the following commands to create a private SSH key in `~/.ssh/id_rsa` and a public key in `~/.ssh/id_rsa.pub`:

```bash

$ ssh-keygen -t rsa

```

Do *not* enter a passphrase (hit **Enter** instead when prompted). Enter the following command to verify that a key has been created:

```bash

$ ls ~/.ssh
id_rsa id_rsa.pub

```

## Create AWS resources using Terraform

To start building AWS resources with Terraform, you need to install all Terraform dependencies. Enter the following command:

```bash

$ terraform init
# This will create a .terraform folder

```

After that, you can apply the default Terraform configuration by entering this command:

```bash

$ terraform apply

```

Then you see the prompt below:

```bash

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value:

```

Type `yes` and hit **Enter**. Applying the configuration can take several minutes. When it finishes, you can see `Apply complete!` along with some other information, including the number of resources created.

### Apply a non-default configuration

You can apply a non-default Terraform configuration by changing the values in the `terraform.tfvars` file; a short sketch follows the table below. The following variables are available:

Variable name | Description | Default
:-------------|:------------|:-------
`public_key_path` | The path of the public key that you have generated. | `~/.ssh/id_rsa.pub`
`region` | The AWS region in which the Pulsar cluster runs | `us-west-2`
`availability_zone` | The AWS availability zone in which the Pulsar cluster runs | `us-west-2a`
`aws_ami` | The [Amazon Machine Image](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html) (AMI) that the cluster uses | `ami-9fa343e7`
`num_zookeeper_nodes` | The number of [ZooKeeper](https://zookeeper.apache.org) nodes in the ZooKeeper cluster | 3
`num_bookie_nodes` | The number of bookies that run in the cluster | 3
`num_broker_nodes` | The number of Pulsar brokers that run in the cluster | 2
`num_proxy_nodes` | The number of Pulsar proxies that run in the cluster | 1
`base_cidr_block` | The root [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) that network assets use for the cluster | `10.0.0.0/16`
`instance_types` | The EC2 instance types to be used. This variable is a map with four keys: `zookeeper` for the ZooKeeper instances, `bookie` for the BookKeeper bookies, and `broker` and `proxy` for the Pulsar brokers and proxies | `t2.small` (ZooKeeper), `i3.xlarge` (BookKeeper) and `c5.2xlarge` (Brokers/Proxies)
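As an illustration, a `terraform.tfvars` that overrides a few of these defaults might look like the following sketch (the values are placeholders, not recommendations):

```shell

public_key_path   = "~/.ssh/pulsar_aws.pub"
region            = "us-east-1"
availability_zone = "us-east-1a"
num_broker_nodes  = 3

```

Note that AMI IDs are region-specific, so changing `region` usually means changing `aws_ami` as well.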
### What is installed

When you run the Ansible playbook, the following AWS resources are used:

* 9 total [Elastic Compute Cloud](https://aws.amazon.com/ec2) (EC2) instances running the [ami-9fa343e7](https://access.redhat.com/articles/3135091) Amazon Machine Image (AMI), which runs [Red Hat Enterprise Linux (RHEL) 7.4](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/7.4_release_notes/index). By default, that includes:
  * 3 small VMs for ZooKeeper ([t2.small](https://www.ec2instances.info/?selected=t2.small) instances)
  * 3 larger VMs for BookKeeper [bookies](reference-terminology.md#bookie) ([i3.xlarge](https://www.ec2instances.info/?selected=i3.xlarge) instances)
  * 2 larger VMs for Pulsar [brokers](reference-terminology.md#broker) ([c5.2xlarge](https://www.ec2instances.info/?selected=c5.2xlarge) instances)
  * 1 larger VM for the Pulsar [proxy](reference-terminology.md#proxy) (a [c5.2xlarge](https://www.ec2instances.info/?selected=c5.2xlarge) instance)
* An EC2 [security group](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html)
* A [virtual private cloud](https://aws.amazon.com/vpc/) (VPC) for security
* An [API Gateway](https://aws.amazon.com/api-gateway/) for connections from the outside world
* A [route table](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Route_Tables.html) for the Pulsar cluster's VPC
* A [subnet](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html) for the VPC

All EC2 instances for the cluster run in the [us-west-2](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html) region.

### Fetch your Pulsar connection URL

When you apply the Terraform configuration by entering the command `terraform apply`, Terraform outputs a value for the `pulsar_service_url`. The value should look something like this:

```

pulsar://pulsar-elb-1800761694.us-west-2.elb.amazonaws.com:6650

```

You can fetch that value at any time by entering the command `terraform output pulsar_service_url` or by parsing the `terraform.tfstate` file (which is JSON, even though the filename does not reflect that):

```bash

$ cat terraform.tfstate | jq .modules[0].outputs.pulsar_service_url.value

```

### Destroy your cluster

At any point, you can destroy all AWS resources associated with your cluster using Terraform's `destroy` command:

```bash

$ terraform destroy

```

## Setup Disks

Before you run the Pulsar playbook, you need to mount the disks to the correct directories on the bookie nodes. Since different types of machines have different disk layouts, you need to update the task defined in the `setup-disk.yaml` file after changing the `instance_types` in your Terraform config.

To set up disks on bookie nodes, enter this command:

```bash

$ ansible-playbook \
  --user='ec2-user' \
  --inventory=`which terraform-inventory` \
  setup-disk.yaml

```

After that, the disks are mounted under `/mnt/journal` as the journal disk and `/mnt/storage` as the ledger disk.
Remember to enter this command only once. If you enter this command again after you have run the Pulsar playbook, your disks might be erased, causing the bookies to fail to start up.

## Run the Pulsar playbook

Once you have created the necessary AWS resources using Terraform, you can install and run Pulsar on the Terraform-created EC2 instances using Ansible.

(Optional) If you want to use any [built-in IO connectors](io-connectors.md), edit the `Download Pulsar IO packages` task in the `deploy-pulsar.yaml` file and uncomment the connectors you want to use.
To run the playbook, enter this command:

```bash

$ ansible-playbook \
  --user='ec2-user' \
  --inventory=`which terraform-inventory` \
  ../deploy-pulsar.yaml

```

If you have created a private SSH key at a location different from `~/.ssh/id_rsa`, you can specify the different location using the `--private-key` flag in the following command:

```bash

$ ansible-playbook \
  --user='ec2-user' \
  --inventory=`which terraform-inventory` \
  --private-key="~/.ssh/some-non-default-key" \
  ../deploy-pulsar.yaml

```

## Access the cluster

You can now access your running Pulsar cluster using the unique Pulsar connection URL for your cluster, which you can obtain by following the instructions [above](#fetch-your-pulsar-connection-url).

For a quick demonstration of accessing the cluster, we can use the Python client for Pulsar and the Python shell. First, install the Pulsar Python module using pip:

```bash

$ pip install pulsar-client

```

Now, open up the Python shell using the `python` command:

```bash

$ python

```

Once you are in the shell, enter the following commands (the client expects message payloads as bytes, so the payload is encoded before sending):

```python

>>> import pulsar
>>> client = pulsar.Client('pulsar://pulsar-elb-1800761694.us-west-2.elb.amazonaws.com:6650')
# Make sure to use your connection URL
>>> producer = client.create_producer('persistent://public/default/test-topic')
>>> producer.send('Hello world'.encode('utf-8'))
>>> client.close()

```

If all of these commands are successful, Pulsar clients can now use your cluster!

diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/deploy-bare-metal-multi-cluster.md b/site2/website/versioned_docs/version-2.8.3-deprecated/deploy-bare-metal-multi-cluster.md
deleted file mode 100644
index 1b23eea07a20bf..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/deploy-bare-metal-multi-cluster.md
+++ /dev/null
@@ -1,486 +0,0 @@
---
id: deploy-bare-metal-multi-cluster
title: Deploying a multi-cluster on bare metal
sidebar_label: "Bare metal multi-cluster"
original_id: deploy-bare-metal-multi-cluster
---

:::tip

1. Single-cluster Pulsar installations should be sufficient for all but the most ambitious use cases. If you are interested in experimenting with
Pulsar or using it in a startup or on a single team, it is best to opt for a single cluster. For instructions on deploying a single cluster,
see the guide [here](deploy-bare-metal.md).
2. If you want to use all builtin [Pulsar IO](io-overview.md) connectors in your Pulsar deployment, you need to download the `apache-pulsar-io-connectors`
package and install `apache-pulsar-io-connectors` under the `connectors` directory in the pulsar directory on every broker node, or on every function-worker node if you
run a separate cluster of function workers for [Pulsar Functions](functions-overview.md).
3. If you want to use the [Tiered Storage](concepts-tiered-storage.md) feature in your Pulsar deployment, you need to download the `apache-pulsar-offloaders`
package and install `apache-pulsar-offloaders` under the `offloaders` directory in the pulsar directory on every broker node. For more details on how to configure
this feature, you can refer to the [Tiered storage cookbook](cookbooks-tiered-storage.md).

:::

A Pulsar *instance* consists of multiple Pulsar clusters working in unison. You can distribute clusters across data centers or geographical regions and replicate the clusters amongst themselves using [geo-replication](administration-geo.md).
Deploying a multi-cluster Pulsar instance involves the following basic steps:

* Deploying two separate [ZooKeeper](#deploy-zookeeper) quorums: a [local](#deploy-local-zookeeper) quorum for each cluster in the instance and a [configuration store](#configuration-store) quorum for instance-wide tasks
* Initializing [cluster metadata](#cluster-metadata-initialization) for each cluster
* Deploying a [BookKeeper cluster](#deploy-bookkeeper) of bookies in each Pulsar cluster
* Deploying [brokers](#deploy-brokers) in each Pulsar cluster

If you want to deploy a single Pulsar cluster, see [Clusters and Brokers](getting-started-standalone.md#start-the-cluster).

> #### Run Pulsar locally or on Kubernetes?
> This guide shows you how to deploy Pulsar in production in a non-Kubernetes environment. If you want to run a standalone Pulsar cluster on a single machine for development purposes, see the [Setting up a local cluster](getting-started-standalone.md) guide. If you want to run Pulsar on [Kubernetes](https://kubernetes.io), see the [Pulsar on Kubernetes](deploy-kubernetes.md) guide, which includes sections on running Pulsar on Kubernetes on [Google Kubernetes Engine](deploy-kubernetes#pulsar-on-google-kubernetes-engine) and on [Amazon Web Services](deploy-kubernetes#pulsar-on-amazon-web-services).

## System requirement

Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install 64-bit JRE/JDK 8 or later versions; JRE/JDK 11 is recommended.

:::note

Broker is only supported on 64-bit JVM.

:::

## Install Pulsar

To get started running Pulsar, download a binary tarball release in one of the following ways:

* by clicking the link below and downloading the release from an Apache mirror:

  * Pulsar @pulsar:version@ binary release

* from the Pulsar [downloads page](pulsar:download_page_url)
* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
* using [wget](https://www.gnu.org/software/wget):

  ```shell

  $ wget 'https://www.apache.org/dyn/mirrors/mirrors.cgi?action=download&filename=pulsar/pulsar-@pulsar:version@/apache-pulsar-@pulsar:version@-bin.tar.gz' -O apache-pulsar-@pulsar:version@-bin.tar.gz

  ```

Once you download the tarball, untar it and `cd` into the resulting directory:

```bash

$ tar xvfz apache-pulsar-@pulsar:version@-bin.tar.gz
$ cd apache-pulsar-@pulsar:version@

```

## What your package contains

The Pulsar binary package initially contains the following directories:

Directory | Contains
:---------|:--------
`bin` | [Command-line tools](reference-cli-tools.md) of Pulsar, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/)
`conf` | Configuration files for Pulsar, including for [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more
`examples` | A Java JAR file containing example [Pulsar Functions](functions-overview.md)
`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files that Pulsar uses
`licenses` | License files, in `.txt` form, for various components of the Pulsar codebase

The following directories are created once you begin running Pulsar:

Directory | Contains
:---------|:--------
`data` | The data storage directory that ZooKeeper and BookKeeper use
`instances` | Artifacts created for [Pulsar Functions](functions-overview.md)
`logs` | Logs that the installation creates
## Deploy ZooKeeper

Each Pulsar instance relies on two separate ZooKeeper quorums.

* [Local ZooKeeper](#deploy-local-zookeeper) operates at the cluster level and provides cluster-specific configuration management and coordination. Each Pulsar cluster needs a dedicated ZooKeeper cluster.
* [Configuration Store](#deploy-the-configuration-store) operates at the instance level and provides configuration management for the entire system (and thus across clusters). The configuration store quorum can be provided by an independent cluster of machines or by the same machines that local ZooKeeper uses.

### Deploy local ZooKeeper

ZooKeeper manages a variety of essential coordination-related and configuration-related tasks for Pulsar.

You need to stand up one local ZooKeeper cluster *per Pulsar cluster* for deploying a Pulsar instance.

To begin, add all ZooKeeper servers to the quorum configuration specified in the [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file. Add a `server.N` line for each node in the cluster to the configuration, where `N` is the number of the ZooKeeper node. The following is an example for a three-node cluster:

```properties

server.1=zk1.us-west.example.com:2888:3888
server.2=zk2.us-west.example.com:2888:3888
server.3=zk3.us-west.example.com:2888:3888

```

On each host, you need to specify the ID of the node in the `myid` file, which is in the `data/zookeeper` folder of each server by default (you can change the file location via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter).

> See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed information on `myid` and more.

On a ZooKeeper server at `zk1.us-west.example.com`, for example, you could set the `myid` value like this:

```shell

$ mkdir -p data/zookeeper
$ echo 1 > data/zookeeper/myid

```

On `zk2.us-west.example.com` the command looks like `echo 2 > data/zookeeper/myid` and so on.

Once you add each server to the `zookeeper.conf` configuration and each server has the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:

```shell

$ bin/pulsar-daemon start zookeeper

```
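Before continuing, it can be worth confirming that each ZooKeeper server is serving requests. One common approach is ZooKeeper's four-letter-word commands; note that recent ZooKeeper versions require these to be whitelisted via `4lw.commands.whitelist`, so treat this as a sketch:

```shell

# Expect the reply "imok" if the server is up
$ echo ruok | nc zk1.us-west.example.com 2181

```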
### Deploy the configuration store

The ZooKeeper cluster that is configured and started up in the section above is a *local* ZooKeeper cluster that you can use to manage a single Pulsar cluster. In addition to a local cluster, however, a full Pulsar instance also requires a configuration store for handling some instance-level configuration and coordination tasks.

If you deploy a [single-cluster](#single-cluster-pulsar-instance) instance, you do not need a separate cluster for the configuration store. If, however, you deploy a [multi-cluster](#multi-cluster-pulsar-instance) instance, you should stand up a separate ZooKeeper cluster for configuration tasks.

#### Single-cluster Pulsar instance

If your Pulsar instance consists of just one cluster, then you can deploy a configuration store on the same machines as the local ZooKeeper quorum, but running on different TCP ports.

To deploy a ZooKeeper configuration store in a single-cluster instance, add the same ZooKeeper servers that the local quorum uses to the configuration file in [`conf/global_zookeeper.conf`](reference-configuration.md#configuration-store) using the same method as for [local ZooKeeper](#local-zookeeper), but make sure to use a different port (2181 is the default for ZooKeeper). The following is an example that uses port 2184 for a three-node ZooKeeper cluster:

```properties

clientPort=2184
server.1=zk1.us-west.example.com:2185:2186
server.2=zk2.us-west.example.com:2185:2186
server.3=zk3.us-west.example.com:2185:2186

```

As before, create the `myid` files for each server in `data/global-zookeeper/myid`.

#### Multi-cluster Pulsar instance

When you deploy a global Pulsar instance, with clusters distributed across different geographical regions, the configuration store serves as a highly available and strongly consistent metadata store that can tolerate failures and partitions spanning whole regions.

The key here is to make sure the ZK quorum members are spread across at least 3 regions and that the other regions run as observers.

Again, given the very low expected load on the configuration store servers, you can
share the same hosts used for the local ZooKeeper quorum.

For example, assume a Pulsar instance with the following clusters: `us-west`,
`us-east`, `us-central`, `eu-central`, and `ap-south`. Also assume that each cluster has its own local ZK servers named as follows:

```

zk[1-3].${CLUSTER}.example.com

```

In this scenario, you pick the quorum participants from a few clusters and
let all the others be ZK observers. For example, to form a 7-server quorum, you can pick 3 servers from `us-west`, 2 from `us-central`, and 2 from `us-east`.

This method guarantees that writes to the configuration store are possible even if one of these regions is unreachable.

The ZK configuration in all the servers looks like:

```properties

clientPort=2184
server.1=zk1.us-west.example.com:2185:2186
server.2=zk2.us-west.example.com:2185:2186
server.3=zk3.us-west.example.com:2185:2186
server.4=zk1.us-central.example.com:2185:2186
server.5=zk2.us-central.example.com:2185:2186
server.6=zk3.us-central.example.com:2185:2186:observer
server.7=zk1.us-east.example.com:2185:2186
server.8=zk2.us-east.example.com:2185:2186
server.9=zk3.us-east.example.com:2185:2186:observer
server.10=zk1.eu-central.example.com:2185:2186:observer
server.11=zk2.eu-central.example.com:2185:2186:observer
server.12=zk3.eu-central.example.com:2185:2186:observer
server.13=zk1.ap-south.example.com:2185:2186:observer
server.14=zk2.ap-south.example.com:2185:2186:observer
server.15=zk3.ap-south.example.com:2185:2186:observer

```

Additionally, ZK observers need to have the following parameter:

```properties

peerType=observer

```

##### Start the service

Once your configuration store configuration is in place, you can start up the service using [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon):

```shell

$ bin/pulsar-daemon start configuration-store

```

## Cluster metadata initialization

Once you set up the cluster-specific ZooKeeper and configuration store quorums for your instance, you need to write some metadata to ZooKeeper for each cluster in your instance. **You only need to write this metadata once.**
You can initialize this metadata using the [`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata) command of the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool. The following is an example:

```shell

$ bin/pulsar initialize-cluster-metadata \
  --cluster us-west \
  --zookeeper zk1.us-west.example.com:2181 \
  --configuration-store zk1.us-west.example.com:2184 \
  --web-service-url http://pulsar.us-west.example.com:8080/ \
  --web-service-url-tls https://pulsar.us-west.example.com:8443/ \
  --broker-service-url pulsar://pulsar.us-west.example.com:6650/ \
  --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651/

```

As you can see from the example above, you need to specify the following:

* The name of the cluster
* The local ZooKeeper connection string for the cluster
* The configuration store connection string for the entire instance
* The web service URL for the cluster
* A broker service URL enabling interaction with the [brokers](reference-terminology.md#broker) in the cluster

If you use [TLS](security-tls-transport.md), you also need to specify a TLS web service URL for the cluster as well as a TLS broker service URL for the brokers in the cluster.

Make sure to run `initialize-cluster-metadata` for each cluster in your instance.

## Deploy BookKeeper

BookKeeper provides [persistent message storage](concepts-architecture-overview.md#persistent-storage) for Pulsar.

Each Pulsar broker needs to have its own cluster of bookies. The BookKeeper cluster shares a local ZooKeeper quorum with the Pulsar cluster.

### Configure bookies

You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. The most important aspect of configuring each bookie is ensuring that the [`zkServers`](reference-configuration.md#bookkeeper-zkServers) parameter is set to the connection string for the Pulsar cluster's local ZooKeeper.

### Start bookies

You can start a bookie in two ways: in the foreground or as a background daemon.

To start a bookie in the background, use the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:

```bash

$ bin/pulsar-daemon start bookie

```

You can verify that the bookie works properly using the `bookiesanity` command for the [BookKeeper shell](reference-cli-tools.md#bookkeeper-shell):

```shell

$ bin/bookkeeper shell bookiesanity

```

This command creates a new ledger on the local bookie, writes a few entries, reads them back, and finally deletes the ledger.

After you have started all bookies, you can use the `simpletest` command for the [BookKeeper shell](reference-cli-tools.md#shell) on any bookie node to verify that all bookies in the cluster are running:

```bash

$ bin/bookkeeper shell simpletest --ensemble <num-bookies> --writeQuorum <num-bookies> --ackQuorum <num-bookies> --numEntries <num-entries>

```

Bookie hosts are responsible for storing message data on disk. For bookies to provide optimal performance, a suitable hardware configuration is essential. The following are key dimensions of bookie hardware capacity:

* Disk I/O capacity read/write
* Storage capacity

Message entries written to bookies are always synced to disk before returning an acknowledgement to the Pulsar broker. To ensure low write latency, BookKeeper is
designed to use multiple devices:
* A **journal** to ensure durability. For sequential writes, having fast [fsync](https://linux.die.net/man/2/fsync) operations on bookie hosts is critical. Typically, small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) should suffice, or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache. Both solutions can reach fsync latency of ~0.4 ms.
* A **ledger storage device** is where data is stored until all consumers acknowledge the message. Writes happen in the background, so write I/O is not a big concern. Reads happen sequentially most of the time, and the backlog is read back only when a consumer drains it. To store large amounts of data, a typical configuration involves multiple HDDs with a RAID controller.

## Deploy brokers

Once you set up ZooKeeper, initialize cluster metadata, and spin up BookKeeper bookies, you can deploy brokers.

### Broker configuration

You can configure brokers using the [`conf/broker.conf`](reference-configuration.md#broker) configuration file.

The most important element of broker configuration is ensuring that each broker is aware of its local ZooKeeper quorum as well as the configuration store quorum. Make sure that you set the [`zookeeperServers`](reference-configuration.md#broker-zookeeperServers) parameter to reflect the local quorum and the [`configurationStoreServers`](reference-configuration.md#broker-configurationStoreServers) parameter to reflect the configuration store quorum (although you need to specify only those ZooKeeper servers located in the same cluster).

You also need to specify the name of the [cluster](reference-terminology.md#cluster) to which the broker belongs using the [`clusterName`](reference-configuration.md#broker-clusterName) parameter. In addition, you need to match the broker and web service ports provided when you initialized the cluster's metadata (especially when you use a port other than the default).

The following is an example configuration:

```properties

# Local ZooKeeper servers
zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181

# Configuration store quorum connection string.
configurationStoreServers=zk1.us-west.example.com:2184,zk2.us-west.example.com:2184,zk3.us-west.example.com:2184

clusterName=us-west

# Broker data port
brokerServicePort=6650

# Broker data port for TLS
brokerServicePortTls=6651

# Port to use to serve HTTP requests
webServicePort=8080

# Port to use to serve HTTPS requests
webServicePortTls=8443

```

### Broker hardware

Pulsar brokers do not require any special hardware since they do not use the local disk. It is best to choose fast CPUs and a 10Gbps [NIC](https://en.wikipedia.org/wiki/Network_interface_controller) so that the software can take full advantage of them.

### Start the broker service

You can start a broker in the background by using [nohup](https://en.wikipedia.org/wiki/Nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:

```shell

$ bin/pulsar-daemon start broker

```

You can also start brokers in the foreground by using [`pulsar broker`](reference-cli-tools.md#broker):

```shell

$ bin/pulsar broker

```
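Once the brokers are up, you can sanity-check them with the `pulsar-admin` tool, for example:

```shell

# Run a health check against the broker serving the admin endpoint
$ bin/pulsar-admin brokers healthcheck

# List the active brokers registered in the us-west cluster
$ bin/pulsar-admin brokers list us-west

```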
## Service discovery

[Clients](getting-started-clients.md) connecting to Pulsar brokers need to be able to communicate with an entire Pulsar instance using a single URL. Pulsar provides a built-in service discovery mechanism that you can set up using the instructions [immediately below](#service-discovery-setup).

You can also use your own service discovery system if you want. If you use your own system, you only need to satisfy one requirement: when a client performs an HTTP request to an [endpoint](reference-configuration.md) for a Pulsar cluster, such as `http://pulsar.us-west.example.com:8080`, the client needs to be redirected to *some* active broker in the desired cluster, whether via DNS, an HTTP or IP redirect, or some other means.

> #### Service discovery already provided by many scheduling systems
> Many large-scale deployment systems, such as [Kubernetes](deploy-kubernetes.md), have service discovery systems built in. If you run Pulsar on such a system, you may not need to provide your own service discovery mechanism.

### Service discovery setup

The service discovery mechanism included with Pulsar maintains a list of active brokers, stored in ZooKeeper, and supports lookup using HTTP as well as Pulsar's [binary protocol](developing-binary-protocol.md).

To get started setting up Pulsar's built-in service discovery, you need to change a few parameters in the [`conf/discovery.conf`](reference-configuration.md#service-discovery) configuration file. Set the [`zookeeperServers`](reference-configuration.md#service-discovery-zookeeperServers) parameter to the cluster's ZooKeeper quorum connection string and the [`configurationStoreServers`](reference-configuration.md#service-discovery-configurationStoreServers) setting to the [configuration store](reference-terminology.md#configuration-store) quorum connection string.

```properties

# Zookeeper quorum connection string
zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181

# Global configuration store connection string
configurationStoreServers=zk1.us-west.example.com:2184,zk2.us-west.example.com:2184,zk3.us-west.example.com:2184

```

To start the discovery service:

```shell

$ bin/pulsar-daemon start discovery

```

## Admin client and verification

At this point your Pulsar instance should be ready to use. You can now configure client machines that can serve as [administrative clients](admin-api-overview.md) for each cluster. You can use the [`conf/client.conf`](reference-configuration.md#client) configuration file to configure admin clients.

The most important thing is to point the [`serviceUrl`](reference-configuration.md#client-serviceUrl) parameter to the correct service URL for the cluster:

```properties

serviceUrl=http://pulsar.us-west.example.com:8080/

```

## Provision new tenants

Pulsar is built as a fundamentally multi-tenant system.

If a new tenant wants to use the system, you need to create one. You can create a new tenant by using the [`pulsar-admin`](reference-pulsar-admin.md#tenants) CLI tool:

```shell

$ bin/pulsar-admin tenants create test-tenant \
  --allowed-clusters us-west \
  --admin-roles test-admin-role

```

In this command, users who identify with the `test-admin-role` role can administer the configuration for the `test-tenant` tenant. The `test-tenant` tenant can only use the `us-west` cluster. From now on, this tenant can manage its resources.
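You can confirm that the tenant exists, and inspect its configuration, with the corresponding read commands:

```shell

$ bin/pulsar-admin tenants list
$ bin/pulsar-admin tenants get test-tenant

```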
Once you create a tenant, you need to create [namespaces](reference-terminology.md#namespace) for topics within that tenant.

The first step is to create a namespace. A namespace is an administrative unit that can contain many topics. A common practice is to create a namespace for each different use case from a single tenant.

```shell

$ bin/pulsar-admin namespaces create test-tenant/ns1

```

##### Test producer and consumer

Everything is now ready to send and receive messages. The quickest way to test the system is through the [`pulsar-perf`](reference-cli-tools.md#pulsar-perf) client tool.

You can use a topic in the namespace that you have just created. Topics are automatically created the first time a producer or a consumer tries to use them.

The topic name in this case could be:

```http

persistent://test-tenant/ns1/my-topic

```

Start a consumer that creates a subscription on the topic and waits for messages:

```shell

$ bin/pulsar-perf consume persistent://test-tenant/ns1/my-topic

```

Start a producer that publishes messages at a fixed rate and reports stats every 10 seconds:

```shell

$ bin/pulsar-perf produce persistent://test-tenant/ns1/my-topic

```

To report the topic stats:

```shell

$ bin/pulsar-admin topics stats persistent://test-tenant/ns1/my-topic

```

diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/deploy-bare-metal.md b/site2/website/versioned_docs/version-2.8.3-deprecated/deploy-bare-metal.md
deleted file mode 100644
index bdd05f24f2566d..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/deploy-bare-metal.md
+++ /dev/null
@@ -1,541 +0,0 @@
---
id: deploy-bare-metal
title: Deploy a cluster on bare metal
sidebar_label: "Bare metal"
original_id: deploy-bare-metal
---

:::tip

1. Single-cluster Pulsar installations should be sufficient for all but the most ambitious use cases. If you are interested in experimenting with
Pulsar or using Pulsar in a startup or on a single team, it is simplest to opt for a single cluster. If you do need to run a multi-cluster Pulsar instance,
see the guide [here](deploy-bare-metal-multi-cluster.md).
2. If you want to use all builtin [Pulsar IO](io-overview.md) connectors in your Pulsar deployment, you need to download the `apache-pulsar-io-connectors`
package and install `apache-pulsar-io-connectors` under the `connectors` directory in the pulsar directory on every broker node, or on every function-worker node if you
run a separate cluster of function workers for [Pulsar Functions](functions-overview.md).
3. If you want to use the [Tiered Storage](concepts-tiered-storage.md) feature in your Pulsar deployment, you need to download the `apache-pulsar-offloaders`
package and install `apache-pulsar-offloaders` under the `offloaders` directory in the pulsar directory on every broker node. For more details on how to configure
this feature, you can refer to the [Tiered storage cookbook](cookbooks-tiered-storage.md).

:::

Deploying a Pulsar cluster involves doing the following (in order):

* Deploy a [ZooKeeper](#deploy-a-zookeeper-cluster) cluster (optional)
* Initialize [cluster metadata](#initialize-cluster-metadata)
* Deploy a [BookKeeper](#deploy-a-bookkeeper-cluster) cluster
* Deploy one or more Pulsar [brokers](#deploy-pulsar-brokers)

## Preparation

### Requirements

Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install 64-bit JRE/JDK 8 or later versions; JRE/JDK 11 is recommended.
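A quick way to confirm the Java runtime on each machine:

```bash

# Should report 1.8 (Java 8) or a later version such as 11
$ java -version

```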
> If you already have an existing ZooKeeper cluster and want to reuse it, you do not need to prepare the machines
> for running ZooKeeper.

To run Pulsar on bare metal, the following configuration is recommended:

* At least 6 Linux machines or VMs
  * 3 for running [ZooKeeper](https://zookeeper.apache.org)
  * 3 for running a Pulsar broker and a [BookKeeper](https://bookkeeper.apache.org) bookie
* A single [DNS](https://en.wikipedia.org/wiki/Domain_Name_System) name covering all of the Pulsar broker hosts

:::note

Broker is only supported on 64-bit JVM.

:::

> If you do not have enough machines, or want to try out Pulsar in cluster mode (and expand the cluster later),
> you can deploy a full Pulsar configuration on one node, where ZooKeeper, the bookie, and the broker run on the same machine.

> If you do not have a DNS server, you can use the multi-host format in the service URL instead.

Each machine in your cluster needs to have [Java 8](https://adoptium.net/?variant=openjdk8) or [Java 11](https://adoptium.net/?variant=openjdk11) installed.

The following is a diagram showing the basic setup:

![alt-text](/assets/pulsar-basic-setup.png)

In this diagram, connecting clients need to be able to communicate with the Pulsar cluster using a single URL. In this case, `pulsar-cluster.acme.com` abstracts over all of the message-handling brokers. Pulsar message brokers run on machines alongside BookKeeper bookies; brokers and bookies, in turn, rely on ZooKeeper.

### Hardware considerations

When you deploy a Pulsar cluster, keep the following recommendations in mind during capacity planning.

#### ZooKeeper

For machines running ZooKeeper, it is recommended to use less powerful machines or VMs. Pulsar uses ZooKeeper only for periodic coordination-related and configuration-related tasks, *not* for basic operations. If you run Pulsar on [Amazon Web Services](https://aws.amazon.com/) (AWS), for example, a [t2.small](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html) instance will likely suffice.

#### Bookies and Brokers

For machines running a bookie and a Pulsar broker, more powerful machines are required. For an AWS deployment, for example, [i3.4xlarge](https://aws.amazon.com/blogs/aws/now-available-i3-instances-for-demanding-io-intensive-applications/) instances may be appropriate. On those machines you can use the following:

* Fast CPUs and 10Gbps [NIC](https://en.wikipedia.org/wiki/Network_interface_controller) (for Pulsar brokers)
* Small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache (for BookKeeper bookies)

## Install the Pulsar binary package

> You need to install the Pulsar binary package on *each machine in the cluster*, including machines running [ZooKeeper](#deploy-a-zookeeper-cluster) and [BookKeeper](#deploy-a-bookkeeper-cluster).
To get started deploying a Pulsar cluster on bare metal, you need to download a binary tarball release in one of the following ways:

* By clicking on the link below directly, which automatically triggers a download:
  * Pulsar @pulsar:version@ binary release
* From the Pulsar [downloads page](pulsar:download_page_url)
* From the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) on [GitHub](https://github.com)
* Using [wget](https://www.gnu.org/software/wget):

```bash

$ wget pulsar:binary_release_url

```

Once you download the tarball, untar it and `cd` into the resulting directory:

```bash

$ tar xvzf apache-pulsar-@pulsar:version@-bin.tar.gz
$ cd apache-pulsar-@pulsar:version@

```

The extracted directory contains the following subdirectories:

Directory | Contains
:---------|:--------
`bin` | [Command-line tools](reference-cli-tools.md) of Pulsar, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/)
`conf` | Configuration files for Pulsar, including for [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more
`data` | The data storage directory that ZooKeeper and BookKeeper use
`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files that Pulsar uses
`logs` | Logs that the installation creates

## [Install Builtin Connectors (optional)]( https://pulsar.apache.org/docs/en/next/standalone/#install-builtin-connectors-optional)

> Since Pulsar release `2.1.0-incubating`, Pulsar provides a separate binary distribution containing all the `builtin` connectors.
> If you want to enable those `builtin` connectors, you can follow the instructions below; otherwise you can
> skip this section for now.

To get started using builtin connectors, you need to download the connectors tarball release on every broker node in one of the following ways:

* by clicking the link below and downloading the release from an Apache mirror:
  * Pulsar IO Connectors @pulsar:version@ release
* from the Pulsar [downloads page](pulsar:download_page_url)
* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
* using [wget](https://www.gnu.org/software/wget):

  ```shell

  $ wget pulsar:connector_release_url/{connector}-@pulsar:version@.nar

  ```

Once you download the .nar file, copy it to the `connectors` directory in the pulsar directory.
For example, if you download the connector file `pulsar-io-aerospike-@pulsar:version@.nar`:

```bash

$ mkdir connectors
$ mv pulsar-io-aerospike-@pulsar:version@.nar connectors

$ ls connectors
pulsar-io-aerospike-@pulsar:version@.nar
...

```

## [Install Tiered Storage Offloaders (optional)](https://pulsar.apache.org/docs/en/next/standalone/#install-tiered-storage-offloaders-optional)

> Since Pulsar release `2.2.0`, Pulsar releases a separate binary distribution containing the tiered storage offloaders.
> If you want to enable the tiered storage feature, you can follow the instructions below; otherwise you can
> skip this section for now.
To get started using tiered storage offloaders, you need to download the offloaders tarball release on every broker node in one of the following ways:

* by clicking the link below and downloading the release from an Apache mirror:
  * Pulsar Tiered Storage Offloaders @pulsar:version@ release
* from the Pulsar [downloads page](pulsar:download_page_url)
* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
* using [wget](https://www.gnu.org/software/wget):

  ```shell

  $ wget pulsar:offloader_release_url

  ```

Once you download the tarball, untar the offloaders package in the pulsar directory and copy the offloaders as `offloaders` in the pulsar directory:

```bash

$ tar xvfz apache-pulsar-offloaders-@pulsar:version@-bin.tar.gz

// you can find a directory named `apache-pulsar-offloaders-@pulsar:version@` in the pulsar directory
// then copy the offloaders

$ mv apache-pulsar-offloaders-@pulsar:version@/offloaders offloaders

$ ls offloaders
tiered-storage-jcloud-@pulsar:version@.nar

```

For more details on how to configure the tiered storage feature, you can refer to the [Tiered storage cookbook](cookbooks-tiered-storage.md).

## Deploy a ZooKeeper cluster

> If you already have an existing ZooKeeper cluster and want to use it, you can skip this section.

[ZooKeeper](https://zookeeper.apache.org) manages a variety of essential coordination- and configuration-related tasks for Pulsar. To deploy a Pulsar cluster, you need to deploy ZooKeeper first (before all other components). A 3-node ZooKeeper cluster is the recommended configuration. Pulsar does not make heavy use of ZooKeeper, so more lightweight machines or VMs should suffice for running ZooKeeper.

To begin, add all ZooKeeper servers to the configuration specified in [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) (in the Pulsar directory that you created [above](#install-the-pulsar-binary-package)). The following is an example:

```properties

server.1=zk1.us-west.example.com:2888:3888
server.2=zk2.us-west.example.com:2888:3888
server.3=zk3.us-west.example.com:2888:3888

```

> If you only have one machine on which to deploy Pulsar, you only need to add one server entry in the configuration file.

On each host, you need to specify the ID of the node in the `myid` file, which is in the `data/zookeeper` folder of each server by default (you can change the file location via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter).

> See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed information on `myid` and more.

For example, on a ZooKeeper server like `zk1.us-west.example.com`, you can set the `myid` value as follows:

```bash

$ mkdir -p data/zookeeper
$ echo 1 > data/zookeeper/myid

```

On `zk2.us-west.example.com`, the command is `echo 2 > data/zookeeper/myid` and so on.

Once you add each server to the `zookeeper.conf` configuration and have the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:

```bash

$ bin/pulsar-daemon start zookeeper

```

> If you plan to deploy ZooKeeper with the bookie on the same node, you need to start ZooKeeper with a different stats
> port by configuring `metricsProvider.httpPort` in zookeeper.conf.
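For example, assuming the Prometheus metrics provider that ships with ZooKeeper 3.6 and later, the relevant lines in `conf/zookeeper.conf` might look like the sketch below; the port is a placeholder, chosen so it does not clash with the port the bookie uses:

```properties

metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
metricsProvider.httpPort=7000

```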
## Initialize cluster metadata

Once you deploy ZooKeeper for your cluster, you need to write some metadata to ZooKeeper. You only need to write this data **once**.

You can initialize this metadata using the [`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata) command of the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool. This command can be run on any machine in your Pulsar cluster, so the metadata can be initialized from a ZooKeeper, broker, or bookie machine. The following is an example:

```shell

$ bin/pulsar initialize-cluster-metadata \
  --cluster pulsar-cluster-1 \
  --zookeeper zk1.us-west.example.com:2181 \
  --configuration-store zk1.us-west.example.com:2181 \
  --web-service-url http://pulsar.us-west.example.com:8080 \
  --web-service-url-tls https://pulsar.us-west.example.com:8443 \
  --broker-service-url pulsar://pulsar.us-west.example.com:6650 \
  --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651

```

As you can see from the example above, you will need to specify the following:

Flag | Description
:----|:-----------
`--cluster` | A name for the cluster
`--zookeeper` | A "local" ZooKeeper connection string for the cluster. This connection string only needs to include *one* machine in the ZooKeeper cluster.
`--configuration-store` | The configuration store connection string for the entire instance. As with the `--zookeeper` flag, this connection string only needs to include *one* machine in the ZooKeeper cluster.
`--web-service-url` | The web service URL for the cluster, plus a port. This URL should be a standard DNS name. The default port is 8080 (it is best not to use a different port).
`--web-service-url-tls` | If you use [TLS](security-tls-transport.md), you also need to specify a TLS web service URL for the cluster. The default port is 8443 (it is best not to use a different port).
`--broker-service-url` | A broker service URL enabling interaction with the brokers in the cluster. This URL should not use the same DNS name as the web service URL but should use the `pulsar` scheme instead. The default port is 6650 (it is best not to use a different port).
`--broker-service-url-tls` | If you use [TLS](security-tls-transport.md), you also need to specify a TLS web service URL for the cluster as well as a TLS broker service URL for the brokers in the cluster. The default port is 6651 (it is best not to use a different port).
> If you do not have a DNS server, you can use the multi-host format in the service URL with the following settings:
>
> ```properties
>
> --web-service-url http://host1:8080,host2:8080,host3:8080 \
> --web-service-url-tls https://host1:8443,host2:8443,host3:8443 \
> --broker-service-url pulsar://host1:6650,host2:6650,host3:6650 \
> --broker-service-url-tls pulsar+ssl://host1:6651,host2:6651,host3:6651
>
> ```
>
> If you want to use an existing BookKeeper cluster, you can add the `--existing-bk-metadata-service-uri` flag as follows:
>
> ```properties
>
> --existing-bk-metadata-service-uri "zk+null://zk1:2181;zk2:2181/ledgers" \
> --web-service-url http://host1:8080,host2:8080,host3:8080 \
> --web-service-url-tls https://host1:8443,host2:8443,host3:8443 \
> --broker-service-url pulsar://host1:6650,host2:6650,host3:6650 \
> --broker-service-url-tls pulsar+ssl://host1:6651,host2:6651,host3:6651
>
> ```
>
> You can obtain the metadata service URI of the existing BookKeeper cluster by using the `bin/bookkeeper shell whatisinstanceid` command. You must enclose the value in double quotes since multiple metadata service URIs are separated with semicolons.

## Deploy a BookKeeper cluster

[BookKeeper](https://bookkeeper.apache.org) handles all persistent data storage in Pulsar. You need to deploy a cluster of BookKeeper bookies to use Pulsar. You can choose to run a **3-bookie BookKeeper cluster**.

You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. The most important step in configuring bookies for our purposes here is ensuring that [`zkServers`](reference-configuration.md#bookkeeper-zkServers) is set to the connection string for the ZooKeeper cluster. The following is an example:

```properties

zkServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181

```

Once you appropriately modify the `zkServers` parameter, you can make any other configuration changes that you require. You can find a full listing of the available BookKeeper configuration parameters [here](reference-configuration.md#bookkeeper). However, consulting the [BookKeeper documentation](http://bookkeeper.apache.org/docs/latest/reference/config/) for a more in-depth guide might be a better choice.

Once you apply the desired configuration in `conf/bookkeeper.conf`, you can start up a bookie on each of your BookKeeper hosts. You can start up each bookie either in the background, using [nohup](https://en.wikipedia.org/wiki/Nohup), or in the foreground.

To start the bookie in the background, use the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:

```bash

$ bin/pulsar-daemon start bookie

```

To start the bookie in the foreground:

```bash

$ bin/pulsar bookie

```

You can verify that a bookie works properly by running the `bookiesanity` command in the [BookKeeper shell](reference-cli-tools.md#shell):

```bash

$ bin/bookkeeper shell bookiesanity

```

This command creates an ephemeral BookKeeper ledger on the local bookie, writes a few entries, reads them back, and finally deletes the ledger.
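As an additional sanity check, you can list the bookies that have registered themselves as writable; the exact flags can vary slightly between BookKeeper versions, so treat this as a sketch:

```bash

$ bin/bookkeeper shell listbookies -rw

```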

```bash

$ bin/bookkeeper shell simpletest --ensemble <num-bookies> --writeQuorum <num-bookies> --ackQuorum <num-bookies> --numEntries <num-entries>

```

This command creates a `num-bookies` sized ledger on the cluster, writes a few entries, and finally deletes the ledger.


## Deploy Pulsar brokers

Pulsar brokers are the last thing you need to deploy in your Pulsar cluster. Brokers handle Pulsar messages and provide the administrative interface of Pulsar. A good choice is to run **3 brokers**, one for each machine that already runs a BookKeeper bookie.

### Configure Brokers

The most important element of broker configuration is ensuring that each broker is aware of the ZooKeeper cluster that you have deployed. Ensure that the [`zookeeperServers`](reference-configuration.md#broker-zookeeperServers) and [`configurationStoreServers`](reference-configuration.md#broker-configurationStoreServers) parameters are correct. In this case, since you only have 1 cluster and no separate configuration store, the `configurationStoreServers` parameter points to the same `zookeeperServers`.

```properties

zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
configurationStoreServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181

```

You also need to specify the cluster name (matching the name that you provided when you [initialized the metadata of the cluster](#initialize-cluster-metadata)):

```properties

clusterName=pulsar-cluster-1

```

In addition, you need to match the broker and web service ports provided when you initialized the metadata of the cluster (especially when you use a different port than the default):

```properties

brokerServicePort=6650
brokerServicePortTls=6651
webServicePort=8080
webServicePortTls=8443

```

> If you deploy Pulsar in a one-node cluster, you should update the replication settings in `conf/broker.conf` to `1`.
>
> ```properties
>
> # Number of bookies to use when creating a ledger
> managedLedgerDefaultEnsembleSize=1
>
> # Number of copies to store for each message
> managedLedgerDefaultWriteQuorum=1
>
> # Number of guaranteed copies (acks to wait before write is complete)
> managedLedgerDefaultAckQuorum=1
>
> ```


### Enable Pulsar Functions (optional)

If you want to enable [Pulsar Functions](functions-overview.md), follow the instructions below:

1. Edit `conf/broker.conf` to enable the functions worker by setting `functionsWorkerEnabled` to `true`.

   ```conf
   
   functionsWorkerEnabled=true
   
   ```

2. Edit `conf/functions_worker.yml` and set `pulsarFunctionsCluster` to the cluster name that you provided when you [initialized the metadata of the cluster](#initialize-cluster-metadata).

   ```conf
   
   pulsarFunctionsCluster: pulsar-cluster-1
   
   ```

To learn more about deploying the functions worker, see [Deploy and manage functions worker](functions-worker.md).

### Start Brokers

You can then provide any other configuration changes that you want in the [`conf/broker.conf`](reference-configuration.md#broker) file. Once you decide on a configuration, you can start up the brokers for your Pulsar cluster. Like ZooKeeper and BookKeeper, you can start brokers either in the foreground or in the background, using nohup.

You can start a broker in the foreground using the [`pulsar broker`](reference-cli-tools.md#pulsar-broker) command:

```bash

$ bin/pulsar broker

```

You can start a broker in the background using the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:

```bash

$ bin/pulsar-daemon start broker

```

Once you successfully start up all the brokers that you intend to use, your Pulsar cluster should be ready to go!

## Connect to the running cluster

Once your Pulsar cluster is up and running, you should be able to connect to it using Pulsar clients. One such client is the [`pulsar-client`](reference-cli-tools.md#pulsar-client) tool, which is included with the Pulsar binary package. The `pulsar-client` tool can publish messages to and consume messages from Pulsar topics and thus provides a simple way to make sure that your cluster runs properly.

To use the `pulsar-client` tool, first modify the client configuration file in [`conf/client.conf`](reference-configuration.md#client) in your binary package. You need to change the values for `webServiceUrl` and `brokerServiceUrl`, substituting `localhost` (the default) with the DNS name that you assigned to your broker/bookie hosts. The following is an example:

```properties

webServiceUrl=http://us-west.example.com:8080
brokerServiceUrl=pulsar://us-west.example.com:6650

```

> If you do not have a DNS server, you can specify multiple hosts in the service URL as follows:
>
> ```properties
>
> webServiceUrl=http://host1:8080,host2:8080,host3:8080
> brokerServiceUrl=pulsar://host1:6650,host2:6650,host3:6650
>
> ```


Once that is complete, you can publish a message to the Pulsar topic:

```bash

$ bin/pulsar-client produce \
  persistent://public/default/test \
  -n 1 \
  -m "Hello Pulsar"

```

> You may need to use a different cluster name in the topic if you specified a cluster name other than `pulsar-cluster-1`.

This command publishes a single message to the Pulsar topic. In addition, you can subscribe to the Pulsar topic in a different terminal before publishing messages, as below:

```bash

$ bin/pulsar-client consume \
  persistent://public/default/test \
  -n 100 \
  -s "consumer-test" \
  -t "Exclusive"

```

Once you successfully publish the above message to the topic, you should see it in the standard output:

```bash

----- got message -----
Hello Pulsar

```

## Run Functions

> If you have [enabled](#enable-pulsar-functions-optional) Pulsar Functions, you can try out Pulsar Functions now.

Create an `ExclamationFunction` named `exclamation`:

```bash

bin/pulsar-admin functions create \
  --jar examples/api-examples.jar \
  --classname org.apache.pulsar.functions.api.examples.ExclamationFunction \
  --inputs persistent://public/default/exclamation-input \
  --output persistent://public/default/exclamation-output \
  --tenant public \
  --namespace default \
  --name exclamation

```

Check whether the function runs as expected by [triggering](functions-deploying.md#triggering-pulsar-functions) the function:

```bash

bin/pulsar-admin functions trigger --name exclamation --trigger-value "hello world"

```

You should see the following output:

```shell

hello world!

```

diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/deploy-dcos.md b/site2/website/versioned_docs/version-2.8.3-deprecated/deploy-dcos.md
deleted file mode 100644
index 952d5f47e30fa7..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/deploy-dcos.md
+++ /dev/null
@@ -1,200 +0,0 @@
---
id: deploy-dcos
title: Deploy Pulsar on DC/OS
sidebar_label: "DC/OS"
original_id: deploy-dcos
---

:::tip

If you want to enable all builtin [Pulsar IO](io-overview.md) connectors in your Pulsar deployment, you can choose to use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. The `apachepulsar/pulsar-all` image has already bundled [all builtin connectors](io-overview.md#working-with-connectors).

:::

[DC/OS](https://dcos.io/) (the DataCenter Operating System) is a distributed operating system used for deploying and managing applications and systems on [Apache Mesos](http://mesos.apache.org/). DC/OS is an open-source tool created and maintained by [Mesosphere](https://mesosphere.com/).

Apache Pulsar is available as a [Marathon Application Group](https://mesosphere.github.io/marathon/docs/application-groups.html), which runs multiple applications as manageable sets.

## Prerequisites

To run Pulsar on DC/OS, you need the following:

* DC/OS version [1.9](https://docs.mesosphere.com/1.9/) or higher
* A [DC/OS cluster](https://docs.mesosphere.com/1.9/installing/) with at least three agent nodes
* The [DC/OS CLI tool](https://docs.mesosphere.com/1.9/cli/install/) installed
* The [`PulsarGroups.json`](https://github.com/apache/pulsar/blob/master/deployment/dcos/PulsarGroups.json) configuration file from the Pulsar GitHub repo.

  ```bash
  
  $ curl -O https://raw.githubusercontent.com/apache/pulsar/master/deployment/dcos/PulsarGroups.json
  
  ```

Each node in the DC/OS-managed Mesos cluster must have at least:

* 4 CPU
* 4 GB of memory
* 60 GB of total persistent disk

Alternatively, you can change the configuration in `PulsarGroups.json` to match the resources of your DC/OS cluster.

## Deploy Pulsar using the DC/OS command interface

You can deploy Pulsar on DC/OS using this command:

```bash

$ dcos marathon group add PulsarGroups.json

```

This command deploys Docker container instances in three groups, which together comprise a Pulsar cluster:

* 3 bookies (1 [bookie](reference-terminology.md#bookie) on each agent node and 1 [bookie recovery](http://bookkeeper.apache.org/docs/latest/admin/autorecovery/) instance)
* 3 Pulsar [brokers](reference-terminology.md#broker) (1 broker on each node and 1 admin instance)
* 1 [Prometheus](http://prometheus.io/) instance and 1 [Grafana](https://grafana.com/) instance


> When you run DC/OS, a ZooKeeper cluster already runs at `master.mesos:2181`, so you do not have to install or start up ZooKeeper separately.

After executing the `dcos` command above, click on the **Services** tab in the DC/OS [GUI interface](https://docs.mesosphere.com/latest/gui/), which you can access at [http://m1.dcos](http://m1.dcos) in this example. You should see several applications in the process of deploying.

![DC/OS command executed](/assets/dcos_command_execute.png)

![DC/OS command executed2](/assets/dcos_command_execute2.png)

## The BookKeeper group

To monitor the status of the BookKeeper cluster deployment, click on the **bookkeeper** group in the parent **pulsar** group.

![DC/OS bookkeeper status](/assets/dcos_bookkeeper_status.png)

At this point, 3 [bookies](reference-terminology.md#bookie) should be shown as green, which means that the bookies have been deployed successfully and are now running.

![DC/OS bookkeeper running](/assets/dcos_bookkeeper_run.png)

You can also click into each bookie instance to get more detailed information, such as the bookie running log.

![DC/OS bookie log](/assets/dcos_bookie_log.png)

To display information about BookKeeper in ZooKeeper, you can visit [http://m1.dcos/exhibitor](http://m1.dcos/exhibitor). In this example, 3 bookies are under the `available` directory.

![DC/OS bookkeeper in zk](/assets/dcos_bookkeeper_in_zookeeper.png)

## The Pulsar broker group

Similar to the BookKeeper group above, click into the **brokers** group to check the status of the Pulsar brokers.

![DC/OS broker status](/assets/dcos_broker_status.png)

![DC/OS broker running](/assets/dcos_broker_run.png)

You can also click into each broker instance to get more detailed information, such as the broker running log.

![DC/OS broker log](/assets/dcos_broker_log.png)

Broker cluster information in ZooKeeper is also available through the web UI. In this example, you can see that the `loadbalance` and `managed-ledgers` directories have been created.

![DC/OS broker in zk](/assets/dcos_broker_in_zookeeper.png)

## Monitor group

The **monitor** group consists of Prometheus and Grafana.

![DC/OS monitor status](/assets/dcos_monitor_status.png)

### Prometheus

Click into the `prom` instance to get the endpoint of Prometheus, which is `192.168.65.121:9090` in this example.

![DC/OS prom endpoint](/assets/dcos_prom_endpoint.png)

If you click that endpoint, you can see the Prometheus dashboard. The [http://192.168.65.121:9090/targets](http://192.168.65.121:9090/targets) URL displays all the bookies and brokers.

![DC/OS prom targets](/assets/dcos_prom_targets.png)

### Grafana

Click into `grafana` to get the endpoint for Grafana, which is `192.168.65.121:3000` in this example.

![DC/OS grafana endpoint](/assets/dcos_grafana_endpoint.png)

If you click that endpoint, you can access the Grafana dashboard.

![DC/OS grafana targets](/assets/dcos_grafana_dashboard.png)

## Run a simple Pulsar consumer and producer on DC/OS

Now that you have a fully deployed Pulsar cluster, you can run a simple consumer and producer to show Pulsar on DC/OS in action.

### Download and prepare the Pulsar Java tutorial

You can clone the [Pulsar Java tutorial](https://github.com/streamlio/pulsar-java-tutorial) repo. This repo contains a simple Pulsar consumer and producer (you can find more information in the `README` file of the repo).

```bash

$ git clone https://github.com/streamlio/pulsar-java-tutorial

```

Change the `SERVICE_URL` from `pulsar://localhost:6650` to `pulsar://a1.dcos:6650` in both [`ConsumerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ConsumerTutorial.java) and [`ProducerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ProducerTutorial.java).
The `pulsar://a1.dcos:6650` endpoint is for the broker service. You can fetch the endpoint details for each broker instance from the DC/OS GUI. `a1.dcos` is a DC/OS client agent that runs a broker. You can also use the client agent's IP address instead.
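
For reference, here is a minimal sketch of what the client bootstrap looks like after that change, using the standard Pulsar Java client API. The class name and the placement of the constant are illustrative, not the tutorial's actual code:

```java

import org.apache.pulsar.client.api.PulsarClient;

public class ServiceUrlExample {
    // Point the client at a DC/OS client agent that runs a broker,
    // instead of the default pulsar://localhost:6650.
    private static final String SERVICE_URL = "pulsar://a1.dcos:6650";

    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl(SERVICE_URL)
                .build();
        // ... create producers/consumers here, as the tutorial classes do ...
        client.close();
    }
}

```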

Now, change the message number from 10 to 10000000 in the main method of [`ProducerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ProducerTutorial.java) so that it produces more messages.

Now compile the project code using the command below:

```bash

$ mvn clean package

```

### Run the consumer and producer

Execute this command to run the consumer:

```bash

$ mvn exec:java -Dexec.mainClass="tutorial.ConsumerTutorial"

```

Execute this command to run the producer:

```bash

$ mvn exec:java -Dexec.mainClass="tutorial.ProducerTutorial"

```

You can see the producer producing messages and the consumer consuming messages through the DC/OS GUI.

![DC/OS pulsar producer](/assets/dcos_producer.png)

![DC/OS pulsar consumer](/assets/dcos_consumer.png)

### View Grafana metric output

While the producer and consumer run, you can access running metrics from Grafana.

![DC/OS pulsar dashboard](/assets/dcos_metrics.png)


## Uninstall Pulsar

You can shut down and uninstall the `pulsar` application from DC/OS at any time in one of the following two ways:

1. Using the DC/OS GUI, you can choose **Delete** at the right end of the Pulsar group.

   ![DC/OS pulsar uninstall](/assets/dcos_uninstall.png)

2. You can use the following command:

   ```bash
   
   $ dcos marathon group remove /pulsar
   
   ```

diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/deploy-docker.md b/site2/website/versioned_docs/version-2.8.3-deprecated/deploy-docker.md
deleted file mode 100644
index 8348d78deb2378..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/deploy-docker.md
+++ /dev/null
@@ -1,60 +0,0 @@
---
id: deploy-docker
title: Deploy a cluster on Docker
sidebar_label: "Docker"
original_id: deploy-docker
---

To deploy a Pulsar cluster on Docker, complete the following steps:
1. Deploy a ZooKeeper cluster (optional)
2. Initialize cluster metadata
3. Deploy a BookKeeper cluster
4. Deploy one or more Pulsar brokers

## Prepare

To run Pulsar on Docker, you need to create a container for each Pulsar component: ZooKeeper, BookKeeper, and the broker. You can pull the images of ZooKeeper and BookKeeper separately from [Docker Hub](https://hub.docker.com/), and pull a [Pulsar image](https://hub.docker.com/r/apachepulsar/pulsar-all/tags) for the broker. You can also pull only one [Pulsar image](https://hub.docker.com/r/apachepulsar/pulsar-all/tags) and create three containers with this image. This tutorial takes the second option as an example.

### Pull a Pulsar image
You can pull a Pulsar image from [Docker Hub](https://hub.docker.com/r/apachepulsar/pulsar-all/tags) with the following command.

```

docker pull apachepulsar/pulsar-all:latest

```

### Create three containers
Create containers for ZooKeeper, BookKeeper, and the broker. In this example, they are named `zookeeper`, `bookkeeper`, and `broker` respectively. You can name them however you want with the `--name` flag; by default, container names are generated randomly.

```

docker run -it --name bookkeeper apachepulsar/pulsar-all:latest /bin/bash
docker run -it --name zookeeper apachepulsar/pulsar-all:latest /bin/bash
docker run -it --name broker apachepulsar/pulsar-all:latest /bin/bash

```

### Create a network
To deploy a Pulsar cluster on Docker, you need to create a `network` and connect the containers of ZooKeeper, BookKeeper, and the broker to this network.
The following command creates the network `pulsar`:

```

docker network create pulsar

```

### Connect containers to network
Connect the containers of ZooKeeper, BookKeeper, and the broker to the `pulsar` network with the following commands.

```

docker network connect pulsar zookeeper
docker network connect pulsar bookkeeper
docker network connect pulsar broker

```

To check whether the containers are successfully connected to the network, enter the `docker network inspect pulsar` command.

For detailed information about how to deploy the ZooKeeper cluster, the BookKeeper cluster, and the brokers, see [deploy a cluster on bare metal](deploy-bare-metal.md).
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/deploy-kubernetes.md b/site2/website/versioned_docs/version-2.8.3-deprecated/deploy-kubernetes.md
deleted file mode 100644
index 1aefc6ad79f716..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/deploy-kubernetes.md
+++ /dev/null
@@ -1,11 +0,0 @@
---
id: deploy-kubernetes
title: Deploy Pulsar on Kubernetes
sidebar_label: "Kubernetes"
original_id: deploy-kubernetes
---

To get up and running with these charts as fast as possible, in a **non-production** use case, we provide a [quick start guide](getting-started-helm.md) for Proof of Concept (PoC) deployments.

To configure and install a Pulsar cluster on Kubernetes for production usage, follow the complete [Installation Guide](helm-install.md).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/deploy-monitoring.md b/site2/website/versioned_docs/version-2.8.3-deprecated/deploy-monitoring.md
deleted file mode 100644
index f9fe0e0bb97be7..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/deploy-monitoring.md
+++ /dev/null
@@ -1,148 +0,0 @@
---
id: deploy-monitoring
title: Monitor
sidebar_label: "Monitor"
original_id: deploy-monitoring
---

You can monitor a Pulsar cluster in different ways, exposing both metrics related to the usage of topics and the overall health of the individual components of the cluster.

## Collect metrics

You can collect broker stats, ZooKeeper stats, and BookKeeper stats.

### Broker stats

You can collect Pulsar broker metrics from brokers and export the metrics in JSON format. The Pulsar broker metrics mainly have two types:

* *Destination dumps*, which contain stats for each individual topic. You can fetch the destination dumps using the command below:

  ```shell
  
  bin/pulsar-admin broker-stats destinations
  
  ```

* Broker metrics, which contain the broker information and topic stats aggregated at the namespace level. You can fetch the broker metrics by using the following command:

  ```shell
  
  bin/pulsar-admin broker-stats monitoring-metrics
  
  ```

All the message rates are updated every minute.

The aggregated broker metrics are also exposed in the [Prometheus](https://prometheus.io) format at:

```shell

http://$BROKER_ADDRESS:8080/metrics/

```

### ZooKeeper stats

The local ZooKeeper, the configuration store server, and the clients that are shipped with Pulsar can expose detailed stats through Prometheus.

```shell

http://$LOCAL_ZK_SERVER:8000/metrics
http://$GLOBAL_ZK_SERVER:8001/metrics

```

The default port of the local ZooKeeper is `8000` and the default port of the configuration store is `8001`. You can use a different stats port by configuring `metricsProvider.httpPort` in the `conf/zookeeper.conf` file.
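
If you prefer to sanity-check one of these endpoints from code rather than a browser, the following is a minimal sketch using the JDK's built-in HTTP client (Java 11+). The host and port here are assumptions based on the defaults above; adjust them to your deployment:

```java

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MetricsEndpointCheck {
    public static void main(String[] args) throws Exception {
        // Assumed endpoint: local ZooKeeper stats on the default port 8000.
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost:8000/metrics")).GET().build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // The Prometheus exposition format is line-oriented: print a sample of it.
        response.body().lines().limit(10).forEach(System.out::println);
    }
}

```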

### BookKeeper stats

You can configure the stats frameworks for BookKeeper by modifying the `statsProviderClass` in the `conf/bookkeeper.conf` file.

The default BookKeeper configuration enables the Prometheus exporter. The configuration is included with the Pulsar distribution.

```shell

http://$BOOKIE_ADDRESS:8000/metrics

```

The default port for bookies is `8000`. You can change the port by configuring `prometheusStatsHttpPort` in the `conf/bookkeeper.conf` file.

### Managed cursor acknowledgment state
The acknowledgment state is persisted to the ledger first. When the acknowledgment state fails to be persisted to the ledger, it is persisted to ZooKeeper. To track the stats of acknowledgment, you can configure the metrics for the managed cursor.

```

brk_ml_cursor_persistLedgerSucceed(namespace="", ledger_name="", cursor_name:"")
brk_ml_cursor_persistLedgerErrors(namespace="", ledger_name="", cursor_name:"")
brk_ml_cursor_persistZookeeperSucceed(namespace="", ledger_name="", cursor_name:"")
brk_ml_cursor_persistZookeeperErrors(namespace="", ledger_name="", cursor_name:"")
brk_ml_cursor_nonContiguousDeletedMessagesRange(namespace="", ledger_name="", cursor_name:"")

```

These metrics are exposed through the Prometheus interface, and you can monitor and check them in Grafana.

### Function and connector stats

You can collect functions worker stats from `functions-worker` and export the metrics in JSON format, including functions worker JVM metrics.

```

pulsar-admin functions-worker monitoring-metrics

```

You can collect functions and connectors metrics from `functions-worker` and export the metrics in JSON format.

```

pulsar-admin functions-worker function-stats

```

The aggregated functions and connectors metrics can be exposed in Prometheus format as below. You can get [`FUNCTIONS_WORKER_ADDRESS`](http://pulsar.apache.org/docs/en/next/functions-worker/) and `WORKER_PORT` from the `functions_worker.yml` file.

```

http://$FUNCTIONS_WORKER_ADDRESS:$WORKER_PORT/metrics

```

## Configure Prometheus

You can use Prometheus to collect all the metrics exposed for Pulsar components and set up [Grafana](https://grafana.com/) dashboards to display the metrics and monitor your Pulsar cluster. For details, refer to the [Prometheus guide](https://prometheus.io/docs/introduction/getting_started/).

When you run Pulsar on bare metal, you can provide the list of nodes to be probed. When you deploy Pulsar in a Kubernetes cluster, the monitoring is set up automatically. For details, refer to the [Kubernetes instructions](helm-deploy.md).

## Dashboards

When you collect time series statistics, the major problem is to make sure the number of dimensions attached to the data does not explode. Thus, you only need to collect time series of metrics aggregated at the namespace level.

### Pulsar per-topic dashboard

The per-topic dashboard instructions are available at [Pulsar manager](administration-pulsar-manager.md).

### Grafana

You can use Grafana to create dashboards driven by the data that is stored in Prometheus.

When you deploy Pulsar on Kubernetes, a `pulsar-grafana` Docker image is enabled by default. You can use this Docker image with the principal dashboards.

Enter the command below to use the dashboard manually:

```shell

docker run -p3000:3000 \
  -e PROMETHEUS_URL=http://$PROMETHEUS_HOST:9090/ \
  apachepulsar/pulsar-grafana:latest

```

The following are some Grafana dashboard examples:

- [pulsar-grafana](http://pulsar.apache.org/docs/en/deploy-monitoring/#grafana): a Grafana dashboard that displays metrics collected in Prometheus for Pulsar clusters running on Kubernetes.
- [apache-pulsar-grafana-dashboard](https://github.com/streamnative/apache-pulsar-grafana-dashboard): a collection of Grafana dashboard templates for different Pulsar components running on both Kubernetes and on-premise machines.

## Alerting rules
You can set alerting rules according to your Pulsar environment. To configure alerting rules for Apache Pulsar, refer to [alerting rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/).
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/develop-load-manager.md b/site2/website/versioned_docs/version-2.8.3-deprecated/develop-load-manager.md
deleted file mode 100644
index 509209b6a852d8..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/develop-load-manager.md
+++ /dev/null
@@ -1,227 +0,0 @@
---
id: develop-load-manager
title: Modular load manager
sidebar_label: "Modular load manager"
original_id: develop-load-manager
---

The *modular load manager*, implemented in [`ModularLoadManagerImpl`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/impl/ModularLoadManagerImpl.java), is a flexible alternative to the previously implemented load manager, [`SimpleLoadManagerImpl`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/impl/SimpleLoadManagerImpl.java), which attempts to simplify how load is managed while also providing abstractions so that complex load management strategies may be implemented.

## Usage

There are two ways that you can enable the modular load manager:

1. Change the value of the `loadManagerClassName` parameter in `conf/broker.conf` from `org.apache.pulsar.broker.loadbalance.impl.SimpleLoadManagerImpl` to `org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl`.
2. Using the `pulsar-admin` tool. Here's an example:

   ```shell
   
   $ pulsar-admin brokers update-dynamic-config \
     --config loadManagerClassName \
     --value org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl
   
   ```

   You can use the same method to change back to the original value. In either case, any mistake in specifying the load manager will cause Pulsar to default to `SimpleLoadManagerImpl`.

## Verification

There are a few different ways to determine which load manager is being used:

1. Use `pulsar-admin` to examine the `loadManagerClassName` element:

   ```shell
   
   $ bin/pulsar-admin brokers get-all-dynamic-config
   {
     "loadManagerClassName" : "org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl"
   }
   
   ```

   If there is no `loadManagerClassName` element, then the default load manager is used.

2. Consult a ZooKeeper load report. With the modular load manager, the load report in `/loadbalance/brokers/...` has many differences. For example, the `systemResourceUsage` sub-elements (`bandwidthIn`, `bandwidthOut`, etc.) are now all at the top level.
Here is an example load report from the modular load manager:

   ```json
   
   {
     "bandwidthIn": {
       "limit": 10240000.0,
       "usage": 4.256510416666667
     },
     "bandwidthOut": {
       "limit": 10240000.0,
       "usage": 5.287239583333333
     },
     "bundles": [],
     "cpu": {
       "limit": 2400.0,
       "usage": 5.7353247655435915
     },
     "directMemory": {
       "limit": 16384.0,
       "usage": 1.0
     }
   }
   
   ```

   With the simple load manager, the load report in `/loadbalance/brokers/...` will look like this:

   ```json
   
   {
     "systemResourceUsage": {
       "bandwidthIn": {
         "limit": 10240000.0,
         "usage": 0.0
       },
       "bandwidthOut": {
         "limit": 10240000.0,
         "usage": 0.0
       },
       "cpu": {
         "limit": 2400.0,
         "usage": 0.0
       },
       "directMemory": {
         "limit": 16384.0,
         "usage": 1.0
       },
       "memory": {
         "limit": 8192.0,
         "usage": 3903.0
       }
     }
   }
   
   ```

3. The command-line [broker monitor](reference-cli-tools.md#monitor-brokers) will have a different output format depending on which load manager implementation is being used.

   Here is an example from the modular load manager:

   ```
   
   ===================================================================================================================
   ||SYSTEM |CPU % |MEMORY % |DIRECT % |BW IN % |BW OUT % |MAX % ||
   || |0.00 |48.33 |0.01 |0.00 |0.00 |48.33 ||
   ||COUNT |TOPIC |BUNDLE |PRODUCER |CONSUMER |BUNDLE + |BUNDLE - ||
   || |4 |4 |0 |2 |4 |0 ||
   ||LATEST |MSG/S IN |MSG/S OUT |TOTAL |KB/S IN |KB/S OUT |TOTAL ||
   || |0.00 |0.00 |0.00 |0.00 |0.00 |0.00 ||
   ||SHORT |MSG/S IN |MSG/S OUT |TOTAL |KB/S IN |KB/S OUT |TOTAL ||
   || |0.00 |0.00 |0.00 |0.00 |0.00 |0.00 ||
   ||LONG |MSG/S IN |MSG/S OUT |TOTAL |KB/S IN |KB/S OUT |TOTAL ||
   || |0.00 |0.00 |0.00 |0.00 |0.00 |0.00 ||
   ===================================================================================================================
   
   ```

   Here is an example from the simple load manager:

   ```
   
   ===================================================================================================================
   ||COUNT |TOPIC |BUNDLE |PRODUCER |CONSUMER |BUNDLE + |BUNDLE - ||
   || |4 |4 |0 |2 |0 |0 ||
   ||RAW SYSTEM |CPU % |MEMORY % |DIRECT % |BW IN % |BW OUT % |MAX % ||
   || |0.25 |47.94 |0.01 |0.00 |0.00 |47.94 ||
   ||ALLOC SYSTEM |CPU % |MEMORY % |DIRECT % |BW IN % |BW OUT % |MAX % ||
   || |0.20 |1.89 | |1.27 |3.21 |3.21 ||
   ||RAW MSG |MSG/S IN |MSG/S OUT |TOTAL |KB/S IN |KB/S OUT |TOTAL ||
   || |0.00 |0.00 |0.00 |0.01 |0.01 |0.01 ||
   ||ALLOC MSG |MSG/S IN |MSG/S OUT |TOTAL |KB/S IN |KB/S OUT |TOTAL ||
   || |54.84 |134.48 |189.31 |126.54 |320.96 |447.50 ||
   ===================================================================================================================
   
   ```

It is important to note that the modular load manager is _centralized_, meaning that all requests to assign a bundle (whether it has been seen before or whether this is the first time) only get handled by the _lead_ broker (which can change over time). To determine the current lead broker, examine the `/loadbalance/leader` node in ZooKeeper.

## Implementation

### Data

The data monitored by the modular load manager is contained in the [`LoadData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/LoadData.java) class. Here, the available data is subdivided into the bundle data and the broker data.

#### Broker

The broker data is contained in the [`BrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/BrokerData.java) class. It is further subdivided into two parts, one being the local data which every broker individually writes to ZooKeeper, and the other being the historical broker data which is written to ZooKeeper by the leader broker.

##### Local Broker Data
The local broker data is contained in the class [`LocalBrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/java/org/apache/pulsar/policies/data/loadbalancer/LocalBrokerData.java) and provides information about the following resources:

* CPU usage
* JVM heap memory usage
* Direct memory usage
* Bandwidth in/out usage
* Most recent total message rate in/out across all bundles
* Total number of topics, bundles, producers, and consumers
* Names of all bundles assigned to this broker
* Most recent changes in bundle assignments for this broker

The local broker data is updated periodically according to the service configuration `loadBalancerReportUpdateMaxIntervalMinutes`. After any broker updates its local broker data, the leader broker receives the update immediately via a ZooKeeper watch, where the local data is read from that broker's ZooKeeper node under `/loadbalance/brokers/`.

##### Historical Broker Data

The historical broker data is contained in the [`TimeAverageBrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/TimeAverageBrokerData.java) class.

In order to reconcile the need to make good decisions in a steady-state scenario and make reactive decisions in a critical scenario, the historical data is split into two parts: the short-term data for reactive decisions, and the long-term data for steady-state decisions. Both time frames maintain the following information:

* Message rate in/out for the entire broker
* Message throughput in/out for the entire broker

Unlike the bundle data, the broker data does not maintain samples for the global broker message rates and throughputs, which are not expected to remain steady as bundles are added or removed. Instead, this data is aggregated over the short-term and long-term data for the bundles. See the section on bundle data to understand how that data is collected and maintained.

The historical broker data is updated for each broker in memory by the leader broker whenever any broker writes its local data to ZooKeeper. Then, the historical data is written to ZooKeeper by the leader broker periodically according to the configuration `loadBalancerResourceQuotaUpdateIntervalMinutes`.

##### Bundle Data

The bundle data is contained in the [`BundleData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/BundleData.java) class. Like the historical broker data, the bundle data is split into a short-term and a long-term time frame. Each time frame maintains the following information:

* Message rate in/out for this bundle
* Message throughput in/out for this bundle
* Current number of samples for this bundle

The time frames are implemented by maintaining the average of these values over a set, limited number of samples, where the samples are obtained through the message rate and throughput values in the local data.
Thus, if the update interval for the local data is 2 minutes, the number of short samples is 10, and the number of long samples is 1000, the short-term data is maintained over a period of `10 samples * 2 minutes / sample = 20 minutes`, while the long-term data is similarly maintained over a period of 2000 minutes. Whenever there are not enough samples to satisfy a given time frame, the average is taken only over the existing samples. When no samples are available, default values are assumed until they are overwritten by the first sample. Currently, the default values are:

* Message rate in/out: 50 messages per second both ways
* Message throughput in/out: 50KB per second both ways

The bundle data is updated in memory on the leader broker whenever any broker writes its local data to ZooKeeper. Then, the bundle data is written to ZooKeeper by the leader broker periodically at the same time as the historical broker data, according to the configuration `loadBalancerResourceQuotaUpdateIntervalMinutes`.

### Traffic Distribution

The modular load manager uses the abstraction provided by [`ModularLoadManagerStrategy`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/ModularLoadManagerStrategy.java) to make decisions about bundle assignment. The strategy makes a decision by considering the service configuration, the entire load data, and the bundle data for the bundle to be assigned. Currently, the only supported strategy is [`LeastLongTermMessageRate`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/impl/LeastLongTermMessageRate.java), though soon users will have the ability to inject their own strategies if desired.

#### Least Long Term Message Rate Strategy

As its name suggests, the least long term message rate strategy attempts to distribute bundles across brokers so that the message rate in the long-term time window for each broker is roughly the same. However, simply balancing load based on message rate does not handle the issue of asymmetric resource burden per message on each broker. Thus, the system resource usages, which are CPU, memory, direct memory, bandwidth in, and bandwidth out, are also considered in the assignment process. This is done by weighting the final message rate according to `1 / (overload_threshold - max_usage)`, where `overload_threshold` corresponds to the configuration `loadBalancerBrokerOverloadedThresholdPercentage` and `max_usage` is the maximum proportion among the system resources that is being utilized by the candidate broker. This multiplier ensures that machines that are being more heavily taxed by the same message rates will receive less load. In particular, it tries to ensure that if one machine is overloaded, then all machines are approximately overloaded. In the case in which a broker's max usage exceeds the overload threshold, that broker is not considered for bundle assignment. If all brokers are overloaded, the bundle is randomly assigned.
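
As an illustration of the weighting described above, the score for a candidate broker can be computed roughly as follows. This is a sketch only, not the actual `LeastLongTermMessageRate` code; it assumes usages and the threshold are expressed as fractions between 0 and 1:

```java

public class LoadWeightingSketch {

    /**
     * Weight a broker's long-term message rate by its most constrained resource:
     * weightedRate = rate / (overloadThreshold - maxUsage).
     * A higher score means the broker looks more loaded for the same message rate.
     */
    static double weightedMessageRate(double longTermMsgRate, double overloadThreshold,
                                      double... resourceUsages) {
        double maxUsage = 0.0;
        for (double usage : resourceUsages) {
            maxUsage = Math.max(maxUsage, usage);
        }
        if (maxUsage >= overloadThreshold) {
            // An overloaded broker is not considered for bundle assignment.
            return Double.POSITIVE_INFINITY;
        }
        return longTermMsgRate / (overloadThreshold - maxUsage);
    }

    public static void main(String[] args) {
        // Two brokers with the same message rate: the more heavily used one scores worse.
        System.out.println(weightedMessageRate(1000.0, 0.85, 0.30, 0.20)); // ~1818
        System.out.println(weightedMessageRate(1000.0, 0.85, 0.80, 0.20)); // 20000
    }
}

```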

diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/develop-schema.md b/site2/website/versioned_docs/version-2.8.3-deprecated/develop-schema.md
deleted file mode 100644
index 2d4461a5ea2b55..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/develop-schema.md
+++ /dev/null
@@ -1,62 +0,0 @@
---
id: develop-schema
title: Custom schema storage
sidebar_label: "Custom schema storage"
original_id: develop-schema
---

By default, Pulsar stores data type [schemas](concepts-schema-registry.md) in [Apache BookKeeper](https://bookkeeper.apache.org) (which is deployed alongside Pulsar). You can, however, use another storage system if you wish. This doc walks you through creating your own schema storage implementation.

In order to use a non-default (i.e. non-BookKeeper) storage system for Pulsar schemas, you need to implement two Java interfaces: [`SchemaStorage`](#schemastorage-interface) and [`SchemaStorageFactory`](#schemastoragefactory-interface).

## SchemaStorage interface

The `SchemaStorage` interface has the following methods:

```java

public interface SchemaStorage {
    // How schemas are updated
    CompletableFuture<SchemaVersion> put(String key, byte[] value, byte[] hash);

    // How schemas are fetched from storage
    CompletableFuture<StoredSchema> get(String key, SchemaVersion version);

    // How schemas are deleted
    CompletableFuture<SchemaVersion> delete(String key);

    // Utility method for converting a schema version byte array to a SchemaVersion object
    SchemaVersion versionFromBytes(byte[] version);

    // Startup behavior for the schema storage client
    void start() throws Exception;

    // Shutdown behavior for the schema storage client
    void close() throws Exception;
}

```

> For a full-fledged example schema storage implementation, see the [`BookKeeperSchemaStorage`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/schema/BookkeeperSchemaStorage.java) class.

## SchemaStorageFactory interface

```java

public interface SchemaStorageFactory {
    @NotNull
    SchemaStorage create(PulsarService pulsar) throws Exception;
}

```

> For a full-fledged example schema storage factory implementation, see the [`BookKeeperSchemaStorageFactory`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/schema/BookkeeperSchemaStorageFactory.java) class.

## Deployment

In order to use your custom schema storage implementation, you need to:

1. Package the implementation in a [JAR](https://docs.oracle.com/javase/tutorial/deployment/jar/basicsindex.html) file.
1. Add that JAR to the `lib` folder in your Pulsar [binary or source distribution](getting-started-standalone.md#installing-pulsar).
1. Change the `schemaRegistryStorageClassName` configuration in [`broker.conf`](reference-configuration.md#broker) to your custom factory class (i.e. the `SchemaStorageFactory` implementation, not the `SchemaStorage` implementation).
1. Start up Pulsar.
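
To make the wiring concrete, here is a minimal skeleton of the two implementations. This is a sketch only: the method bodies are stubs to be filled in with calls to your storage system, and the imports for the Pulsar types (`SchemaStorage`, `SchemaStorageFactory`, `SchemaVersion`, `StoredSchema`, `PulsarService`) are omitted because their package locations vary across Pulsar versions:

```java

import java.util.concurrent.CompletableFuture;

public class MySchemaStorageFactory implements SchemaStorageFactory {
    @Override
    public SchemaStorage create(PulsarService pulsar) throws Exception {
        // Read any connection settings for your storage system from the broker
        // configuration (pulsar.getConfiguration()) before constructing the storage.
        return new MySchemaStorage();
    }
}

class MySchemaStorage implements SchemaStorage {
    @Override
    public CompletableFuture<SchemaVersion> put(String key, byte[] value, byte[] hash) {
        // Write the schema bytes to the backing store and complete with the new version.
        return CompletableFuture.failedFuture(new UnsupportedOperationException("TODO"));
    }

    @Override
    public CompletableFuture<StoredSchema> get(String key, SchemaVersion version) {
        // Read the schema bytes for the requested version from the backing store.
        return CompletableFuture.failedFuture(new UnsupportedOperationException("TODO"));
    }

    @Override
    public CompletableFuture<SchemaVersion> delete(String key) {
        // Remove all versions of the schema for this key.
        return CompletableFuture.failedFuture(new UnsupportedOperationException("TODO"));
    }

    @Override
    public SchemaVersion versionFromBytes(byte[] version) {
        // Decode whatever version encoding put() produced.
        throw new UnsupportedOperationException("TODO");
    }

    @Override
    public void start() throws Exception {
        // Open connections to the backing store.
    }

    @Override
    public void close() throws Exception {
        // Release resources.
    }
}

```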
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/develop-tools.md b/site2/website/versioned_docs/version-2.8.3-deprecated/develop-tools.md
deleted file mode 100644
index b5457790b80810..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/develop-tools.md
+++ /dev/null
@@ -1,111 +0,0 @@
---
id: develop-tools
title: Simulation tools
sidebar_label: "Simulation tools"
original_id: develop-tools
---

It is sometimes necessary to create a test environment and incur artificial load to observe how well load managers handle the load. The load simulation controller, the load simulation client, and the broker monitor were created to make creating this load and observing its effects on the load managers easier.

## Simulation Client
The simulation client is a machine that creates and subscribes to topics with configurable message rates and sizes. Because it is sometimes necessary to use multiple client machines to simulate large load, the user does not interact with the simulation client directly, but instead delegates their requests to the simulation controller, which then sends signals to clients to start incurring load. The client implementation is in the class `org.apache.pulsar.testclient.LoadSimulationClient`.

### Usage
To start a simulation client, use the `pulsar-perf` script with the command `simulation-client` as follows:

```

pulsar-perf simulation-client --port <listen-port> --service-url <pulsar-service-url>

```

The client will then be ready to receive controller commands.
## Simulation Controller
The simulation controller sends signals to the simulation clients, requesting them to create new topics, stop old topics, change the load incurred by topics, and perform several other tasks. It is implemented in the class `org.apache.pulsar.testclient.LoadSimulationController` and presents a shell to the user as an interface to send commands with.

### Usage
To start a simulation controller, use the `pulsar-perf` script with the command `simulation-controller` as follows:

```

pulsar-perf simulation-controller --cluster <cluster-to-simulate-on> --client-port <client-port> --clients <comma-separated-client-hostnames>

```

The clients should already be started before the controller is started. You will then be presented with a simple prompt, where you can issue commands to simulation clients. Arguments often refer to tenant names, namespace names, and topic names. In all cases, the BASE names of the tenants, namespaces, and topics are used. For example, for the topic `persistent://my_tenant/my_cluster/my_namespace/my_topic`, the tenant name is `my_tenant`, the namespace name is `my_namespace`, and the topic name is `my_topic`.
The controller can perform the following actions:

* Create a topic with a producer and a consumer
  * `trade <tenant> <namespace> <topic> [--rate <rate>] [--rand-rate <min>,<max>] [--size <size>]`
* Create a group of topics with a producer and a consumer
  * `trade_group <tenant> <group> <num-namespaces> [--rate <rate>] [--rand-rate <min>,<max>] [--separation <separation>] [--size <size>] [--topics-per-namespace <count>]`
* Change the configuration of an existing topic
  * `change <tenant> <namespace> <topic> [--rate <rate>] [--rand-rate <min>,<max>] [--size <size>]`
* Change the configuration of a group of topics
  * `change_group <tenant> <group> [--rate <rate>] [--rand-rate <min>,<max>] [--size <size>] [--topics-per-namespace <count>]`
* Shutdown a previously created topic
  * `stop <tenant> <namespace> <topic>`
* Shutdown a previously created group of topics
  * `stop_group <tenant> <group>`
* Copy the historical data from one ZooKeeper to another and simulate based on the message rates and sizes in that history
  * `copy <tenant> <source zookeeper> <target zookeeper> [--rate-multiplier value]`
* Simulate the load of the historical data on the current ZooKeeper (should be the same ZooKeeper being simulated on)
  * `simulate <zookeeper> [--rate-multiplier value]`
* Stream the latest data from the given active ZooKeeper to simulate the real-time load of that ZooKeeper.
  * `stream <zookeeper> [--rate-multiplier value]`

The "group" arguments in these commands allow the user to create or affect multiple topics at once. Groups are created when calling the `trade_group` command, and all topics from these groups may be subsequently modified or stopped with the `change_group` and `stop_group` commands respectively. All ZooKeeper arguments are of the form `zookeeper_host:port`.

### Difference Between Copy, Simulate, and Stream
The commands `copy`, `simulate`, and `stream` are very similar but have significant differences. `copy` is used when you want to simulate the load of a static, external ZooKeeper on the ZooKeeper you are simulating on. Thus, `source zookeeper` should be the ZooKeeper you want to copy and `target zookeeper` should be the ZooKeeper you are simulating on, and then it will get the full benefit of the historical data of the source in both load manager implementations. `simulate`, on the other hand, takes in only one ZooKeeper, the one you are simulating on. It assumes that you are simulating on a ZooKeeper that has historical data for `SimpleLoadManagerImpl` and creates equivalent historical data for `ModularLoadManagerImpl`. Then, the load according to the historical data is simulated by the clients. Finally, `stream` takes in an active ZooKeeper different from the ZooKeeper being simulated on, streams load data from it, and simulates the real-time load. In all cases, the optional `rate-multiplier` argument allows the user to simulate some proportion of the load. For instance, using `--rate-multiplier 0.05` will cause messages to be sent at only `5%` of the rate of the load that is being simulated.

## Broker Monitor
To observe the behavior of the load manager in these simulations, one may utilize the broker monitor, which is implemented in `org.apache.pulsar.testclient.BrokerMonitor`. The broker monitor prints tabular load data to the console as the data is updated, using ZooKeeper watchers.

### Usage
To start a broker monitor, use the `monitor-brokers` command in the `pulsar-perf` script:

```

pulsar-perf monitor-brokers --connect-string <zookeeper-host:port>

```

The console will then continuously print load data until it is interrupted.
- diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/developing-binary-protocol.md b/site2/website/versioned_docs/version-2.8.3-deprecated/developing-binary-protocol.md deleted file mode 100644 index 177ffebd6e5ded..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/developing-binary-protocol.md +++ /dev/null @@ -1,606 +0,0 @@ ---- -id: developing-binary-protocol -title: Pulsar binary protocol specification -sidebar_label: "Binary protocol" -original_id: developing-binary-protocol ---- - -Pulsar uses a custom binary protocol for communications between producers/consumers and brokers. This protocol is designed to support required features, such as acknowledgements and flow control, while ensuring maximum transport and implementation efficiency. - -Clients and brokers exchange *commands* with each other. Commands are formatted as binary [protocol buffer](https://developers.google.com/protocol-buffers/) (aka *protobuf*) messages. The format of protobuf commands is specified in the [`PulsarApi.proto`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/proto/PulsarApi.proto) file and also documented in the [Protobuf interface](#protobuf-interface) section below. - -> ### Connection sharing -> Commands for different producers and consumers can be interleaved and sent through the same connection without restriction. - -All commands associated with Pulsar's protocol are contained in a [`BaseCommand`](#pulsar.proto.BaseCommand) protobuf message that includes a [`Type`](#pulsar.proto.Type) [enum](https://developers.google.com/protocol-buffers/docs/proto#enum) with all possible subcommands as optional fields. `BaseCommand` messages can specify only one subcommand. - -## Framing - -Since protobuf doesn't provide any sort of message frame, all messages in the Pulsar protocol are prepended with a 4-byte field that specifies the size of the frame. The maximum allowable size of a single frame is 5 MB. - -The Pulsar protocol allows for two types of commands: - -1. **Simple commands** that do not carry a message payload. -2. **Payload commands** that bear a payload that is used when publishing or delivering messages. In payload commands, the protobuf command data is followed by protobuf [metadata](#message-metadata) and then the payload, which is passed in raw format outside of protobuf. All sizes are passed as 4-byte unsigned big endian integers. - -> Message payloads are passed in raw format rather than protobuf format for efficiency reasons. 
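
To make the framing concrete, here is a minimal sketch (not the client's actual decoder) of reading one simple-command frame; the fields are described in the tables below. It relies on the fact that `DataInputStream` reads integers in big-endian order:

```java

import java.io.DataInputStream;
import java.io.IOException;

public class FrameReaderSketch {
    // Maximum allowable size of a single frame, as stated above.
    private static final int MAX_FRAME_SIZE = 5 * 1024 * 1024;

    /** Reads one simple-command frame and returns the serialized BaseCommand bytes. */
    static byte[] readCommand(DataInputStream in) throws IOException {
        int totalSize = in.readInt();   // 4 bytes: size of everything after this field
        if (totalSize < 4 || totalSize > MAX_FRAME_SIZE) {
            throw new IOException("Invalid frame size: " + totalSize);
        }
        int commandSize = in.readInt(); // 4 bytes: size of the protobuf command
        byte[] command = new byte[commandSize];
        in.readFully(command);
        // For payload commands, totalSize - 4 - commandSize bytes of magic number,
        // checksum, metadata, and payload would follow here.
        return command;
    }
}

```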

### Simple commands

Simple (payload-free) commands have this basic structure:

| Component | Description | Size (in bytes) |
|:------------|:----------------------------------------------------------------------------------------|:----------------|
| totalSize | The size of the frame, counting everything that comes after it (in bytes) | 4 |
| commandSize | The size of the protobuf-serialized command | 4 |
| message | The protobuf message serialized in a raw binary format (rather than in protobuf format) | |

### Payload commands

Payload commands have this basic structure:

| Component | Description | Size (in bytes) |
|:-------------|:--------------------------------------------------------------------------------------------|:----------------|
| totalSize | The size of the frame, counting everything that comes after it (in bytes) | 4 |
| commandSize | The size of the protobuf-serialized command | 4 |
| message | The protobuf message serialized in a raw binary format (rather than in protobuf format) | |
| magicNumber | A 2-byte byte array (`0x0e01`) identifying the current format | 2 |
| checksum | A [CRC32-C checksum](http://www.evanjones.ca/crc32c.html) of everything that comes after it | 4 |
| metadataSize | The size of the message [metadata](#message-metadata) | 4 |
| metadata | The message [metadata](#message-metadata) stored as a binary protobuf message | |
| payload | Anything left in the frame is considered the payload and can include any sequence of bytes | |

## Message metadata

Message metadata is stored alongside the application-specified payload as a serialized protobuf message. Metadata is created by the producer and passed on unchanged to the consumer.

| Field | Description |
|:------|:------------|
| `producer_name` | The name of the producer that published the message |
| `sequence_id` | The sequence ID of the message, assigned by the producer |
| `publish_time` | The publish timestamp in Unix time (i.e. as the number of milliseconds since January 1st, 1970 in UTC) |
| `properties` | A sequence of key/value pairs (using the [`KeyValue`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/proto/PulsarApi.proto#L32) message). These are application-defined keys and values with no special meaning to Pulsar. |
| `replicated_from` *(optional)* | Indicates that the message has been replicated and specifies the name of the [cluster](reference-terminology.md#cluster) where the message was originally published |
| `partition_key` *(optional)* | While publishing on a partitioned topic, if the key is present, the hash of the key is used to determine which partition to choose |
| `compression` *(optional)* | Signals that the payload has been compressed and with which compression library |
| `uncompressed_size` *(optional)* | If compression is used, the producer must fill the uncompressed size field with the original payload size |
| `num_messages_in_batch` *(optional)* | If this message is really a [batch](#batch-messages) of multiple entries, this field must be set to the number of messages in the batch |

### Batch messages

When using batch messages, the payload contains a list of entries, each with its individual metadata, defined by the `SingleMessageMetadata` object.


For a single batch, the payload format looks like this:


| Field | Description |
|:--------------|:------------------------------------------------------------|
| metadataSizeN | The size of the single message metadata serialized Protobuf |
| metadataN | Single message metadata |
| payloadN | Message payload passed by application |

Each metadata entry looks like this:

| Field | Description |
|:---------------------------|:--------------------------------------------------------|
| properties | Application-defined properties |
| partition key *(optional)* | Key to indicate the hashing to a particular partition |
| payload_size | Size of the payload for the single message in the batch |

When compression is enabled, the whole batch will be compressed at once.

## Interactions

### Connection establishment

After opening a TCP connection to a broker, typically on port 6650, the client is responsible for initiating the session.

![Connect interaction](/assets/binary-protocol-connect.png)

After receiving a `Connected` response from the broker, the client can consider the connection ready to use. Alternatively, if the broker fails to validate the client's authentication, it replies with an `Error` command and closes the TCP connection.

Example:

```protobuf

message CommandConnect {
  "client_version" : "Pulsar-Client-Java-v1.15.2",
  "auth_method_name" : "my-authentication-plugin",
  "auth_data" : "my-auth-data",
  "protocol_version" : 6
}

```

Fields:
 * `client_version` → String based identifier. Format is not enforced
 * `auth_method_name` → *(optional)* Name of the authentication plugin if auth is enabled
 * `auth_data` → *(optional)* Plugin specific authentication data
 * `protocol_version` → Indicates the protocol version supported by the client. The broker will not send commands introduced in newer revisions of the protocol. The broker might enforce a minimum version

```protobuf

message CommandConnected {
  "server_version" : "Pulsar-Broker-v1.15.2",
  "protocol_version" : 6
}

```

Fields:
 * `server_version` → String identifier of broker version
 * `protocol_version` → Protocol version supported by the broker.
The client must not attempt to send commands introduced in newer revisions of the protocol.

### Keep Alive

To identify prolonged network partitions between clients and brokers, or cases in which a machine crashes without interrupting the TCP connection on the remote end (e.g. power outage, kernel panic, hard reboot...), we have introduced a mechanism to probe for the availability status of the remote peer.

Both clients and brokers send `Ping` commands periodically and close the socket if a `Pong` response is not received within a timeout (the default used by the broker is 60s).

A valid implementation of a Pulsar client is not required to send the `Ping` probe, though it is required to promptly reply after receiving one from the broker in order to prevent the remote side from forcibly closing the TCP connection.


### Producer

In order to send messages, a client needs to establish a producer. When creating a producer, the broker first verifies that this particular client is authorized to publish on the topic.

Once the client gets confirmation of the producer creation, it can publish messages to the broker, referring to the producer id negotiated before.

![Producer interaction](/assets/binary-protocol-producer.png)

##### Command Producer

```protobuf

message CommandProducer {
  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
  "producer_id" : 1,
  "request_id" : 1
}

```

Parameters:
 * `topic` → Complete name of the topic on which you want to create the producer
 * `producer_id` → Client generated producer identifier. Needs to be unique within the same connection
 * `request_id` → Identifier for this request. Used to match the response with the originating request. Needs to be unique within the same connection
 * `producer_name` → *(optional)* If a producer name is specified, the name is used, otherwise the broker generates a unique name. The generated producer name is guaranteed to be globally unique. Implementations are expected to let the broker generate a new producer name when the producer is initially created, then reuse it when recreating the producer after reconnections.

The broker will reply with either `ProducerSuccess` or `Error` commands.

##### Command ProducerSuccess

```protobuf

message CommandProducerSuccess {
  "request_id" : 1,
  "producer_name" : "generated-unique-producer-name"
}

```

Parameters:
 * `request_id` → Original id of the `CreateProducer` request
 * `producer_name` → Generated globally unique producer name or the name specified by the client, if any.

##### Command Send

Command `Send` is used to publish a new message within the context of an already existing producer. This command is used in a frame that includes the command as well as the message payload, for which the complete format is specified in the [payload commands](#payload-commands) section.

```protobuf

message CommandSend {
  "producer_id" : 1,
  "sequence_id" : 0,
  "num_messages" : 1
}

```

Parameters:
 * `producer_id` → id of an existing producer
 * `sequence_id` → each message has an associated sequence id which is expected to be implemented with a counter starting at 0. The `SendReceipt` that acknowledges the effective publishing of a message will refer to it by its sequence id.
 * `num_messages` → *(optional)* Used when publishing a batch of messages at once.
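
To illustrate how sequence ids tie `Send` commands to their receipts (see `SendReceipt` below), here is a sketch of the bookkeeping a producer implementation might do. The helper class is hypothetical, not the Java client's actual code:

```java

import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

public class PendingSendsSketch {
    private final Map<Long, CompletableFuture<long[]>> pending = new ConcurrentHashMap<>();
    private long nextSequenceId = 0; // counter starting at 0, as required

    /** Called when a message is sent: register a future to be completed by the receipt. */
    public synchronized long registerSend(CompletableFuture<long[]> receiptFuture) {
        long sequenceId = nextSequenceId++;
        pending.put(sequenceId, receiptFuture);
        return sequenceId;
    }

    /** Called when a SendReceipt arrives for this producer. */
    public void onSendReceipt(long sequenceId, long ledgerId, long entryId) {
        CompletableFuture<long[]> future = pending.remove(sequenceId);
        if (future != null) {
            // The message id assigned by the broker: {ledgerId, entryId}.
            future.complete(new long[] {ledgerId, entryId});
        }
    }
}

```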
-
-##### Command SendReceipt
-
-After a message has been persisted on the configured number of replicas, the
-broker will send the acknowledgment receipt to the producer.
-
-```protobuf
-
-message CommandSendReceipt {
-  "producer_id" : 1,
-  "sequence_id" : 0,
-  "message_id" : {
-    "ledgerId" : 123,
-    "entryId" : 456
-  }
-}
-
-```
-
-Parameters:
- * `producer_id` → id of the producer originating the send request
- * `sequence_id` → sequence id of the published message
- * `message_id` → message id assigned by the system to the published message.
-   Unique within a single cluster. The message id is composed of 2 longs,
-   `ledgerId` and `entryId`, which reflect that this unique id is assigned
-   when appending to a BookKeeper ledger
-
-
-##### Command CloseProducer
-
-**Note**: *This command can be sent by either producer or broker*.
-
-When receiving a `CloseProducer` command, the broker will stop accepting any
-more messages for the producer, wait until all pending messages are persisted
-and then reply `Success` to the client.
-
-The broker can send a `CloseProducer` command to the client when it's performing
-a graceful failover (e.g., the broker is being restarted, or the topic is being
-unloaded by the load balancer to be transferred to a different broker).
-
-When receiving the `CloseProducer`, the client is expected to go through the
-service discovery lookup again and recreate the producer. The TCP
-connection is not affected.
-
-### Consumer
-
-A consumer is used to attach to a subscription and consume messages from it.
-After every reconnection, a client needs to subscribe to the topic. If a
-subscription is not already there, a new one will be created.
-
-![Consumer](/assets/binary-protocol-consumer.png)
-
-#### Flow control
-
-After the consumer is ready, the client needs to *give permission* to the
-broker to push messages. This is done with the `Flow` command.
-
-A `Flow` command gives additional *permits* to send messages to the consumer.
-A typical consumer implementation will use a queue to accumulate these messages
-before the application is ready to consume them.
-
-After the application has dequeued half of the messages in the queue, the
-consumer sends permits to the broker to ask for more messages (equal to half
-the size of the queue).
-
-For example, if the queue size is 1000 and the consumer has consumed 500
-messages from the queue, the consumer sends permits to the broker to ask for
-500 more messages.
-
-##### Command Subscribe
-
-```protobuf
-
-message CommandSubscribe {
-  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
-  "subscription" : "my-subscription-name",
-  "subType" : "Exclusive",
-  "consumer_id" : 1,
-  "request_id" : 1
-}
-
-```
-
-Parameters:
- * `topic` → Complete name of the topic on which to create the consumer
- * `subscription` → Subscription name
- * `subType` → Subscription type: Exclusive, Shared, Failover, Key_Shared
- * `consumer_id` → Client-generated consumer identifier. Needs to be unique
-   within the same connection
- * `request_id` → Identifier for this request. Used to match the response with
-   the originating request. Needs to be unique within the same connection
- * `consumer_name` → *(optional)* Clients can specify a consumer name. This
-   name can be used to track a particular consumer in the stats. Also, in
-   the Failover subscription type, the name is used to decide which consumer is
-   elected as *master* (the one receiving messages): consumers are sorted by
-   their consumer name and the first one is elected master.
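-
-As a sketch of how these subscribe parameters map onto the Java client API
-(the subscription name and type below correspond directly to the
-`subscription` and `subType` fields above):
-
-```java
-
-import org.apache.pulsar.client.api.Consumer;
-import org.apache.pulsar.client.api.PulsarClient;
-import org.apache.pulsar.client.api.Schema;
-import org.apache.pulsar.client.api.SubscriptionType;
-
-public class SubscribeExample {
-    public static void main(String[] args) throws Exception {
-        PulsarClient client = PulsarClient.builder()
-                .serviceUrl("pulsar://localhost:6650")
-                .build();
-        // subscribe() sends CommandSubscribe with the subscription name and
-        // type, creating the subscription if it does not exist yet.
-        Consumer<String> consumer = client.newConsumer(Schema.STRING)
-                .topic("persistent://my-property/my-cluster/my-namespace/my-topic")
-                .subscriptionName("my-subscription-name")
-                .subscriptionType(SubscriptionType.Exclusive)
-                .subscribe();
-        consumer.close();
-        client.close();
-    }
-}
-
-```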
-
-##### Command Flow
-
-```protobuf
-
-message CommandFlow {
-  "consumer_id" : 1,
-  "messagePermits" : 1000
-}
-
-```
-
-Parameters:
-* `consumer_id` → Id of an already established consumer
-* `messagePermits` → Number of additional permits to grant to the broker for
-  pushing more messages
-
-##### Command Message
-
-Command `Message` is used by the broker to push messages to an existing consumer,
-within the limits of the given permits.
-
-
-This command is used in a frame that includes the message payload as well, for
-which the complete format is specified in the [payload commands](#payload-commands)
-section.
-
-```protobuf
-
-message CommandMessage {
-  "consumer_id" : 1,
-  "message_id" : {
-    "ledgerId" : 123,
-    "entryId" : 456
-  }
-}
-
-```
-
-##### Command Ack
-
-An `Ack` is used to signal to the broker that a given message has been
-successfully processed by the application and can be discarded by the broker.
-
-In addition, the broker will also maintain the consumer position based on the
-acknowledged messages.
-
-```protobuf
-
-message CommandAck {
-  "consumer_id" : 1,
-  "ack_type" : "Individual",
-  "message_id" : {
-    "ledgerId" : 123,
-    "entryId" : 456
-  }
-}
-
-```
-
-Parameters:
- * `consumer_id` → Id of an already established consumer
- * `ack_type` → Type of acknowledgment: `Individual` or `Cumulative`
- * `message_id` → Id of the message to acknowledge
- * `validation_error` → *(optional)* Indicates that the consumer has discarded
-   the messages due to: `UncompressedSizeCorruption`,
-   `DecompressionError`, `ChecksumMismatch`, `BatchDeSerializeError`
- * `properties` → *(optional)* Reserved configuration items
- * `txnid_most_bits` → *(optional)* The Transaction Coordinator ID; together,
-   `txnid_most_bits` and `txnid_least_bits` uniquely identify a transaction.
- * `txnid_least_bits` → *(optional)* The ID of the transaction opened in a
-   transaction coordinator; together, `txnid_most_bits` and `txnid_least_bits`
-   uniquely identify a transaction.
- * `request_id` → *(optional)* ID for handling the response and the timeout.
-
-##### Command AckResponse
-
-An `AckResponse` is the broker's response to an acknowledgment request sent by the client. It contains the `consumer_id` sent in the request.
-If a transaction is used, it contains both the Transaction ID and the Request ID that were sent in the request. The client finishes the specific request according to the Request ID. If the `error` field is set, it indicates that the request has failed.
-
-An example of an `AckResponse` for an acknowledgment within a transaction:
-
-```protobuf
-
-message CommandAckResponse {
-  "consumer_id" : 1,
-  "txnid_least_bits" : 0,
-  "txnid_most_bits" : 1,
-  "request_id" : 5
-}
-
-```
-
-##### Command CloseConsumer
-
-**Note**: *This command can be sent by either consumer or broker*.
-
-This command behaves the same as [`CloseProducer`](#command-closeproducer).
-
-##### Command RedeliverUnacknowledgedMessages
-
-A consumer can ask the broker to redeliver some or all of the pending messages
-that were pushed to that particular consumer and not yet acknowledged.
-
-The protobuf object accepts a list of message ids that the consumer wants to
-be redelivered. If the list is empty, the broker will redeliver all the
-pending messages.
-
-On redelivery, messages can be sent to the same consumer or, in the case of a
-shared subscription, spread across all available consumers.
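-
-In the Java client, per-message redelivery is typically requested through
-negative acknowledgment, which the client implements on top of the
-`RedeliverUnacknowledgedMessages` command. A sketch, assuming an already
-subscribed consumer as in the previous example:
-
-```java
-
-import org.apache.pulsar.client.api.Consumer;
-import org.apache.pulsar.client.api.Message;
-
-public class RedeliverExample {
-    static void handle(Consumer<String> consumer) throws Exception {
-        Message<String> msg = consumer.receive();
-        try {
-            // ... process the message ...
-            consumer.acknowledge(msg); // sends an Individual Ack
-        } catch (Exception e) {
-            // Ask the broker to redeliver this message later.
-            consumer.negativeAcknowledge(msg);
-        }
-    }
-}
-
-```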
-
-
-##### Command ReachedEndOfTopic
-
-This is sent by a broker to a particular consumer whenever the topic
-has been "terminated" and all the messages on the subscription have been
-acknowledged.
-
-The client should use this command to notify the application that no more
-messages are coming from that consumer.
-
-##### Command ConsumerStats
-
-This command is sent by the client to retrieve subscriber- and consumer-level
-stats from the broker.
-Parameters:
- * `request_id` → Id of the request, used to correlate the request
-   and the response.
- * `consumer_id` → Id of an already established consumer.
-
-##### Command ConsumerStatsResponse
-
-This is the broker's response to a ConsumerStats request by the client.
-It contains the subscriber- and consumer-level stats of the `consumer_id` sent in the request.
-If the `error_code` or the `error_message` field is set, it indicates that the request has failed.
-
-##### Command Unsubscribe
-
-This command is sent by the client to unsubscribe the `consumer_id` from the associated topic.
-Parameters:
- * `request_id` → Id of the request.
- * `consumer_id` → Id of an already established consumer which needs to unsubscribe.
-
-
-## Service discovery
-
-### Topic lookup
-
-Topic lookup needs to be performed each time a client needs to create or
-reconnect a producer or a consumer. Lookup is used to discover which particular
-broker is serving the topic we are about to use.
-
-Lookup can be done with a REST call as described in the [admin API](admin-api-topics.md#look-up-topics-owner-broker)
-docs.
-
-Since Pulsar 1.16, it is also possible to perform the lookup within the binary
-protocol.
-
-For the sake of example, let's assume we have a service discovery component
-running at `pulsar://broker.example.com:6650`.
-
-Individual brokers will be running at `pulsar://broker-1.example.com:6650`,
-`pulsar://broker-2.example.com:6650`, ...
-
-A client can use a connection to the discovery service host to issue a
-`LookupTopic` command. The response can either be a broker hostname to
-connect to, or a broker hostname against which to retry the lookup.
-
-The `LookupTopic` command has to be used in a connection that has already
-gone through the `Connect` / `Connected` initial handshake.
-
-![Topic lookup](/assets/binary-protocol-topic-lookup.png)
-
-```protobuf
-
-message CommandLookupTopic {
-  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
-  "request_id" : 1,
-  "authoritative" : false
-}
-
-```
-
-Fields:
- * `topic` → Topic name to look up
- * `request_id` → Id of the request that will be passed with its response
- * `authoritative` → The initial lookup request should use false. 
When following a - redirect response, client should pass the same value contained in the - response - -##### LookupTopicResponse - -Example of response with successful lookup: - -```protobuf - -message CommandLookupTopicResponse { - "request_id" : 1, - "response" : "Connect", - "brokerServiceUrl" : "pulsar://broker-1.example.com:6650", - "brokerServiceUrlTls" : "pulsar+ssl://broker-1.example.com:6651", - "authoritative" : true -} - -``` - -Example of lookup response with redirection: - -```protobuf - -message CommandLookupTopicResponse { - "request_id" : 1, - "response" : "Redirect", - "brokerServiceUrl" : "pulsar://broker-2.example.com:6650", - "brokerServiceUrlTls" : "pulsar+ssl://broker-2.example.com:6651", - "authoritative" : true -} - -``` - -In this second case, we need to reissue the `LookupTopic` command request -to `broker-2.example.com` and this broker will be able to give a definitive -answer to the lookup request. - -### Partitioned topics discovery - -Partitioned topics metadata discovery is used to find out if a topic is a -"partitioned topic" and how many partitions were set up. - -If the topic is marked as "partitioned", the client is expected to create -multiple producers or consumers, one for each partition, using the `partition-X` -suffix. - -This information only needs to be retrieved the first time a producer or -consumer is created. There is no need to do this after reconnections. - -The discovery of partitioned topics metadata works very similar to the topic -lookup. The client send a request to the service discovery address and the -response will contain actual metadata. - -##### Command PartitionedTopicMetadata - -```protobuf - -message CommandPartitionedTopicMetadata { - "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic", - "request_id" : 1 -} - -``` - -Fields: - * `topic` → the topic for which to check the partitions metadata - * `request_id` → Id of the request that will be passed with its response - - -##### Command PartitionedTopicMetadataResponse - -Example of response with metadata: - -```protobuf - -message CommandPartitionedTopicMetadataResponse { - "request_id" : 1, - "response" : "Success", - "partitions" : 32 -} - -``` - -## Protobuf interface - -All Pulsar's Protobuf definitions can be found {@inject: github:here:/pulsar-common/src/main/proto/PulsarApi.proto}. diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/functions-cli.md b/site2/website/versioned_docs/version-2.8.3-deprecated/functions-cli.md deleted file mode 100644 index c9fcfa201525f0..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/functions-cli.md +++ /dev/null @@ -1,198 +0,0 @@ ---- -id: functions-cli -title: Pulsar Functions command line tool -sidebar_label: "Reference: CLI" -original_id: functions-cli ---- - -The following tables list Pulsar Functions command-line tools. You can learn Pulsar Functions modes, commands, and parameters. - -## localrun - -Run Pulsar Functions locally, rather than deploying it to the Pulsar cluster. - -Name | Description | Default ----|---|--- -auto-ack | Whether or not the framework acknowledges messages automatically. | true | -broker-service-url | The URL for the Pulsar broker. | | -classname | The class name of a Pulsar Function.| | -client-auth-params | Client authentication parameter. | | -client-auth-plugin | Client authentication plugin using which function-process can connect to broker. 
| | -CPU | The CPU in cores that need to be allocated per function instance (applicable only to docker runtime).| | -custom-schema-inputs | The map of input topics to Schema class names (as a JSON string). | | -custom-serde-inputs | The map of input topics to SerDe class names (as a JSON string). | | -dead-letter-topic | The topic where all messages that were not processed successfully are sent. This parameter is not supported in Python Functions. | | -disk | The disk in bytes that need to be allocated per function instance (applicable only to docker runtime). | | -fqfn | The Fully Qualified Function Name (FQFN) for the function. | | -function-config-file | The path to a YAML config file specifying the configuration of a Pulsar Function. | | -go | Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -hostname-verification-enabled | Enable hostname verification. | false -inputs | The input topic or topics of a Pulsar Function (multiple topics can be specified as a comma-separated list). | | -jar | Path to the jar file for the function (if the function is written in Java). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -instance-id-offset | Start the instanceIds from this offset. | 0 -log-topic | The topic to which the logs a Pulsar Function are produced. | | -max-message-retries | How many times should we try to process a message before giving up. | | -name | The name of a Pulsar Function. | | -namespace | The namespace of a Pulsar Function. | | -output | The output topic of a Pulsar Function (If none is specified, no output is written). | | -output-serde-classname | The SerDe class to be used for messages output by the function. | | -parallelism | The parallelism factor of a Pulsar Function (i.e. the number of function instances to run). | | -processing-guarantees | The processing guarantees (delivery semantics) applied to the function. Available values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]. | ATLEAST_ONCE -py | Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -ram | The ram in bytes that need to be allocated per function instance (applicable only to process/docker runtime). | | -retain-ordering | Function consumes and processes messages in order. | | -schema-type | The builtin schema type or custom schema class name to be used for messages output by the function. | | -sliding-interval-count | The number of messages after which the window slides. | | -sliding-interval-duration-ms | The time duration after which the window slides. | | -subs-name | Pulsar source subscription name if user wants a specific subscription-name for the input-topic consumer. | | -tenant | The tenant of a Pulsar Function. | | -timeout-ms | The message timeout in milliseconds. | | -tls-allow-insecure | Allow insecure tls connection. | false -tls-trust-cert-path | tls trust cert file path. 
| | -topics-pattern | The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (only supported in Java Function). | | -use-tls | Use tls connection. | false -user-config | User-defined config key/values. | | -window-length-count | The number of messages per window. | | -window-length-duration-ms | The time duration of the window in milliseconds. | | - - -## create - -Create and deploy a Pulsar Function in cluster mode. - -Name | Description | Default ----|---|--- -auto-ack | Whether or not the framework acknowledges messages automatically. | true | -classname | The class name of a Pulsar Function. | | -CPU | The CPU in cores that need to be allocated per function instance (applicable only to docker runtime).| | -custom-runtime-options | A string that encodes options to customize the runtime, see docs for configured runtime for details | | -custom-schema-inputs | The map of input topics to Schema class names (as a JSON string). | | -custom-serde-inputs | The map of input topics to SerDe class names (as a JSON string). | | -dead-letter-topic | The topic where all messages that were not processed successfully are sent. This parameter is not supported in Python Functions. | | -disk | The disk in bytes that need to be allocated per function instance (applicable only to docker runtime). | | -fqfn | The Fully Qualified Function Name (FQFN) for the function. | | -function-config-file | The path to a YAML config file specifying the configuration of a Pulsar Function. | | -go | Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -inputs | The input topic or topics of a Pulsar Function (multiple topics can be specified as a comma-separated list). | | -jar | Path to the jar file for the function (if the function is written in Java). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -log-topic | The topic to which the logs of a Pulsar Function are produced. | | -max-message-retries | How many times should we try to process a message before giving up. | | -name | The name of a Pulsar Function. | | -namespace | The namespace of a Pulsar Function. | | -output | The output topic of a Pulsar Function (If none is specified, no output is written). | | -output-serde-classname | The SerDe class to be used for messages output by the function. | | -parallelism | The parallelism factor of a Pulsar Function (i.e. the number of function instances to run). | | -processing-guarantees | The processing guarantees (delivery semantics) applied to the function. Available values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]. | ATLEAST_ONCE -py | Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -ram | The ram in bytes that need to be allocated per function instance (applicable only to process/docker runtime). 
| | -retain-ordering | Function consumes and processes messages in order. | | -schema-type | The builtin schema type or custom schema class name to be used for messages output by the function. | | -sliding-interval-count | The number of messages after which the window slides. | | -sliding-interval-duration-ms | The time duration after which the window slides. | | -subs-name | Pulsar source subscription name if user wants a specific subscription-name for the input-topic consumer. | | -tenant | The tenant of a Pulsar Function. | | -timeout-ms | The message timeout in milliseconds. | | -topics-pattern | The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (only supported in Java Function). | | -user-config | User-defined config key/values. | | -window-length-count | The number of messages per window. | | -window-length-duration-ms | The time duration of the window in milliseconds. | | - -## delete - -Delete a Pulsar Function that is running on a Pulsar cluster. - -Name | Description | Default ----|---|--- -fqfn | The Fully Qualified Function Name (FQFN) for the function. | | -name | The name of a Pulsar Function. | | -namespace | The namespace of a Pulsar Function. | | -tenant | The tenant of a Pulsar Function. | | - -## update - -Update a Pulsar Function that has been deployed to a Pulsar cluster. - -Name | Description | Default ----|---|--- -auto-ack | Whether or not the framework acknowledges messages automatically. | true | -classname | The class name of a Pulsar Function. | | -CPU | The CPU in cores that need to be allocated per function instance (applicable only to docker runtime). | | -custom-runtime-options | A string that encodes options to customize the runtime, see docs for configured runtime for details | | -custom-schema-inputs | The map of input topics to Schema class names (as a JSON string). | | -custom-serde-inputs | The map of input topics to SerDe class names (as a JSON string). | | -dead-letter-topic | The topic where all messages that were not processed successfully are sent. This parameter is not supported in Python Functions. | | -disk | The disk in bytes that need to be allocated per function instance (applicable only to docker runtime). | | -fqfn | The Fully Qualified Function Name (FQFN) for the function. | | -function-config-file | The path to a YAML config file specifying the configuration of a Pulsar Function. | | -go | Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -inputs | The input topic or topics of a Pulsar Function (multiple topics can be specified as a comma-separated list). | | -jar | Path to the jar file for the function (if the function is written in Java). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -log-topic | The topic to which the logs of a Pulsar Function are produced. | | -max-message-retries | How many times should we try to process a message before giving up. | | -name | The name of a Pulsar Function. | | -namespace | The namespace of a Pulsar Function. 
| | -output | The output topic of a Pulsar Function (If none is specified, no output is written). | | -output-serde-classname | The SerDe class to be used for messages output by the function. | | -parallelism | The parallelism factor of a Pulsar Function (i.e. the number of function instances to run). | | -processing-guarantees | The processing guarantees (delivery semantics) applied to the function. Available values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]. | ATLEAST_ONCE -py | Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -ram | The ram in bytes that need to be allocated per function instance (applicable only to process/docker runtime). | | -retain-ordering | Function consumes and processes messages in order. | | -schema-type | The builtin schema type or custom schema class name to be used for messages output by the function. | | -sliding-interval-count | The number of messages after which the window slides. | | -sliding-interval-duration-ms | The time duration after which the window slides. | | -subs-name | Pulsar source subscription name if user wants a specific subscription-name for the input-topic consumer. | | -tenant | The tenant of a Pulsar Function. | | -timeout-ms | The message timeout in milliseconds. | | -topics-pattern | The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (only supported in Java Function). | | -update-auth-data | Whether or not to update the auth data. | false -user-config | User-defined config key/values. | | -window-length-count | The number of messages per window. | | -window-length-duration-ms | The time duration of the window in milliseconds. | | - -## get - -Fetch information about a Pulsar Function. - -Name | Description | Default ----|---|--- -fqfn | The Fully Qualified Function Name (FQFN) for the function. | | -name | The name of a Pulsar Function. | | -namespace | The namespace of a Pulsar Function. | | -tenant | The tenant of a Pulsar Function. | | - -## restart - -Restart function instance. - -Name | Description | Default ----|---|--- -fqfn | The Fully Qualified Function Name (FQFN) for the function. | | -instance-id | The function instanceId (restart all instances if instance-id is not provided. | | -name | The name of a Pulsar Function. | | -namespace | The namespace of a Pulsar Function. | | -tenant | The tenant of a Pulsar Function. | | - -## stop - -Stops function instance. - -Name | Description | Default ----|---|--- -fqfn | The Fully Qualified Function Name (FQFN) for the function. | | -instance-id | The function instanceId (restart all instances if instance-id is not provided. | | -name | The name of a Pulsar Function. | | -namespace | The namespace of a Pulsar Function. | | -tenant | The tenant of a Pulsar Function. | | - -## start - -Starts a stopped function instance. - -Name | Description | Default ----|---|--- -fqfn | The Fully Qualified Function Name (FQFN) for the function. | | -instance-id | The function instanceId (restart all instances if instance-id is not provided. | | -name | The name of a Pulsar Function. | | -namespace | The namespace of a Pulsar Function. 
| | -tenant | The tenant of a Pulsar Function. | | diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/functions-debug.md b/site2/website/versioned_docs/version-2.8.3-deprecated/functions-debug.md deleted file mode 100644 index e1d55ae0897aa5..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/functions-debug.md +++ /dev/null @@ -1,533 +0,0 @@ ---- -id: functions-debug -title: Debug Pulsar Functions -sidebar_label: "How-to: Debug" -original_id: functions-debug ---- - -You can use the following methods to debug Pulsar Functions: - -* [Captured stderr](functions-debug.md#captured-stderr) -* [Use unit test](functions-debug.md#use-unit-test) -* [Debug with localrun mode](functions-debug.md#debug-with-localrun-mode) -* [Use log topic](functions-debug.md#use-log-topic) -* [Use Functions CLI](functions-debug.md#use-functions-cli) - -## Captured stderr - -Function startup information and captured stderr output is written to `logs/functions////-.log` - -This is useful for debugging why a function fails to start. - -## Use unit test - -A Pulsar Function is a function with inputs and outputs, you can test a Pulsar Function in a similar way as you test any function. - -For example, if you have the following Pulsar Function: - -```java - -import java.util.function.Function; - -public class JavaNativeExclamationFunction implements Function { - @Override - public String apply(String input) { - return String.format("%s!", input); - } -} - -``` - -You can write a simple unit test to test Pulsar Function. - -:::tip - -Pulsar uses testng for testing. - -::: - -```java - -@Test -public void testJavaNativeExclamationFunction() { - JavaNativeExclamationFunction exclamation = new JavaNativeExclamationFunction(); - String output = exclamation.apply("foo"); - Assert.assertEquals(output, "foo!"); -} - -``` - -The following Pulsar Function implements the `org.apache.pulsar.functions.api.Function` interface. - -```java - -import org.apache.pulsar.functions.api.Context; -import org.apache.pulsar.functions.api.Function; - -public class ExclamationFunction implements Function { - @Override - public String process(String input, Context context) { - return String.format("%s!", input); - } -} - -``` - -In this situation, you can write a unit test for this function as well. Remember to mock the `Context` parameter. The following is an example. - -:::tip - -Pulsar uses testng for testing. - -::: - -```java - -@Test -public void testExclamationFunction() { - ExclamationFunction exclamation = new ExclamationFunction(); - String output = exclamation.process("foo", mock(Context.class)); - Assert.assertEquals(output, "foo!"); -} - -``` - -## Debug with localrun mode -When you run a Pulsar Function in localrun mode, it launches an instance of the Function on your local machine as a thread. - -In this mode, a Pulsar Function consumes and produces actual data to a Pulsar cluster, and mirrors how the function actually runs in a Pulsar cluster. - -:::note - -Currently, debugging with localrun mode is only supported by Pulsar Functions written in Java. You need Pulsar version 2.4.0 or later to do the following. Even though localrun is available in versions earlier than Pulsar 2.4.0, you cannot debug with localrun mode programmatically or run Functions as threads. - -::: - -You can launch your function in the following manner. 
-
-```java
-
-FunctionConfig functionConfig = new FunctionConfig();
-functionConfig.setName(functionName);
-functionConfig.setInputs(Collections.singleton(sourceTopic));
-functionConfig.setClassName(ExclamationFunction.class.getName());
-functionConfig.setRuntime(FunctionConfig.Runtime.JAVA);
-functionConfig.setOutput(sinkTopic);
-
-LocalRunner localRunner = LocalRunner.builder().functionConfig(functionConfig).build();
-localRunner.start(true);
-
-```
-
-This way, you can easily debug functions using an IDE. Set breakpoints and manually step through a function to debug with real data.
-
-The following example illustrates how to programmatically launch a function in localrun mode.
-
-```java
-
-public class ExclamationFunction implements Function<String, String> {
-
-    @Override
-    public String process(String s, Context context) throws Exception {
-        return s + "!";
-    }
-
-    public static void main(String[] args) throws Exception {
-        FunctionConfig functionConfig = new FunctionConfig();
-        functionConfig.setName("exclamation");
-        functionConfig.setInputs(Collections.singleton("input"));
-        functionConfig.setClassName(ExclamationFunction.class.getName());
-        functionConfig.setRuntime(FunctionConfig.Runtime.JAVA);
-        functionConfig.setOutput("output");
-
-        LocalRunner localRunner = LocalRunner.builder().functionConfig(functionConfig).build();
-        localRunner.start(false);
-    }
-}
-
-```
-
-To use localrun mode programmatically, add the following dependency.
-
-```xml
-
-<dependency>
-    <groupId>org.apache.pulsar</groupId>
-    <artifactId>pulsar-functions-local-runner</artifactId>
-    <version>${pulsar.version}</version>
-</dependency>
-
-```
-
-For complete code samples, see [here](https://github.com/jerrypeng/pulsar-functions-demos/tree/master/debugging).
-
-:::note
-
-Debugging with localrun mode for Pulsar Functions written in other languages will be supported soon.
-
-:::
-
-## Use log topic
-
-In Pulsar Functions, you can produce the log information generated in functions to a specified log topic. You can configure consumers to consume messages from that log topic to check the log information.
-
-![Pulsar Functions core programming model](/assets/pulsar-functions-overview.png)
-
-**Example**
-
-```java
-
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-import org.slf4j.Logger;
-
-public class LoggingFunction implements Function<String, Void> {
-    @Override
-    public Void process(String input, Context context) {
-        Logger LOG = context.getLogger();
-        String messageId = new String(context.getMessageId());
-
-        if (input.contains("danger")) {
-            LOG.warn("A warning was received in message {}", messageId);
-        } else {
-            LOG.info("Message {} received\nContent: {}", messageId, input);
-        }
-
-        return null;
-    }
-}
-
-```
-
-As shown in the example above, you can get the logger via `context.getLogger()` and assign the logger to the `LOG` variable of `slf4j`, so you can define your desired log information in a function using the `LOG` variable. Meanwhile, you need to specify the topic to which the log information is produced.
-
-**Example**
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --log-topic persistent://public/default/logging-function-logs \
-  # Other function configs
-
-```
-
-## Use Functions CLI
-
-With [Pulsar Functions CLI](reference-pulsar-admin.md#functions), you can debug Pulsar Functions with the following subcommands:
-
-* `get`
-* `status`
-* `stats`
-* `list`
-* `trigger`
-
-:::tip
-
-For complete commands of **Pulsar Functions CLI**, see [here](reference-pulsar-admin.md#functions).
-
-:::
-
-### `get`
-
-Get information about a Pulsar Function.
- -**Usage** - -```bash - -$ pulsar-admin functions get options - -``` - -**Options** - -|Flag|Description -|---|--- -|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function. -|`--name`|The name of a Pulsar Function. -|`--namespace`|The namespace of a Pulsar Function. -|`--tenant`|The tenant of a Pulsar Function. - -:::tip - -`--fqfn` consists of `--name`, `--namespace` and `--tenant`, so you can specify either `--fqfn` or `--name`, `--namespace` and `--tenant`. - -::: - -**Example** - -You can specify `--fqfn` to get information about a Pulsar Function. - -```bash - -$ ./bin/pulsar-admin functions get public/default/ExclamationFunctio6 - -``` - -Optionally, you can specify `--name`, `--namespace` and `--tenant` to get information about a Pulsar Function. - -```bash - -$ ./bin/pulsar-admin functions get \ - --tenant public \ - --namespace default \ - --name ExclamationFunctio6 - -``` - -As shown below, the `get` command shows input, output, runtime, and other information about the _ExclamationFunctio6_ function. - -```json - -{ - "tenant": "public", - "namespace": "default", - "name": "ExclamationFunctio6", - "className": "org.example.test.ExclamationFunction", - "inputSpecs": { - "persistent://public/default/my-topic-1": { - "isRegexPattern": false - } - }, - "output": "persistent://public/default/test-1", - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "userConfig": {}, - "runtime": "JAVA", - "autoAck": true, - "parallelism": 1 -} - -``` - -### `status` - -Check the current status of a Pulsar Function. - -**Usage** - -```bash - -$ pulsar-admin functions status options - -``` - -**Options** - -|Flag|Description -|---|--- -|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function. -|`--instance-id`|The instance ID of a Pulsar Function
    If the `--instance-id` is not specified, it gets the IDs of all instances.
    -|`--name`|The name of a Pulsar Function. -|`--namespace`|The namespace of a Pulsar Function. -|`--tenant`|The tenant of a Pulsar Function. - -**Example** - -```bash - -$ ./bin/pulsar-admin functions status \ - --tenant public \ - --namespace default \ - --name ExclamationFunctio6 \ - -``` - -As shown below, the `status` command shows the number of instances, running instances, the instance running under the _ExclamationFunctio6_ function, received messages, successfully processed messages, system exceptions, the average latency and so on. - -```json - -{ - "numInstances" : 1, - "numRunning" : 1, - "instances" : [ { - "instanceId" : 0, - "status" : { - "running" : true, - "error" : "", - "numRestarts" : 0, - "numReceived" : 1, - "numSuccessfullyProcessed" : 1, - "numUserExceptions" : 0, - "latestUserExceptions" : [ ], - "numSystemExceptions" : 0, - "latestSystemExceptions" : [ ], - "averageLatency" : 0.8385, - "lastInvocationTime" : 1557734137987, - "workerId" : "c-standalone-fw-23ccc88ef29b-8080" - } - } ] -} - -``` - -### `stats` - -Get the current stats of a Pulsar Function. - -**Usage** - -```bash - -$ pulsar-admin functions stats options - -``` - -**Options** - -|Flag|Description -|---|--- -|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function. -|`--instance-id`|The instance ID of a Pulsar Function.
    If the `--instance-id` is not specified, it gets the IDs of all instances.
    -|`--name`|The name of a Pulsar Function. -|`--namespace`|The namespace of a Pulsar Function. -|`--tenant`|The tenant of a Pulsar Function. - -**Example** - -```bash - -$ ./bin/pulsar-admin functions stats \ - --tenant public \ - --namespace default \ - --name ExclamationFunctio6 \ - -``` - -The output is shown as follows: - -```json - -{ - "receivedTotal" : 1, - "processedSuccessfullyTotal" : 1, - "systemExceptionsTotal" : 0, - "userExceptionsTotal" : 0, - "avgProcessLatency" : 0.8385, - "1min" : { - "receivedTotal" : 0, - "processedSuccessfullyTotal" : 0, - "systemExceptionsTotal" : 0, - "userExceptionsTotal" : 0, - "avgProcessLatency" : null - }, - "lastInvocation" : 1557734137987, - "instances" : [ { - "instanceId" : 0, - "metrics" : { - "receivedTotal" : 1, - "processedSuccessfullyTotal" : 1, - "systemExceptionsTotal" : 0, - "userExceptionsTotal" : 0, - "avgProcessLatency" : 0.8385, - "1min" : { - "receivedTotal" : 0, - "processedSuccessfullyTotal" : 0, - "systemExceptionsTotal" : 0, - "userExceptionsTotal" : 0, - "avgProcessLatency" : null - }, - "lastInvocation" : 1557734137987, - "userMetrics" : { } - } - } ] -} - -``` - -### `list` - -List all Pulsar Functions running under a specific tenant and namespace. - -**Usage** - -```bash - -$ pulsar-admin functions list options - -``` - -**Options** - -|Flag|Description -|---|--- -|`--namespace`|The namespace of a Pulsar Function. -|`--tenant`|The tenant of a Pulsar Function. - -**Example** - -```bash - -$ ./bin/pulsar-admin functions list \ - --tenant public \ - --namespace default - -``` - -As shown below, the `list` command returns three functions running under the _public_ tenant and the _default_ namespace. - -```text - -ExclamationFunctio1 -ExclamationFunctio2 -ExclamationFunctio3 - -``` - -### `trigger` - -Trigger a specified Pulsar Function with a supplied value. This command simulates the execution process of a Pulsar Function and verifies it. - -**Usage** - -```bash - -$ pulsar-admin functions trigger options - -``` - -**Options** - -|Flag|Description -|---|--- -|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function. -|`--name`|The name of a Pulsar Function. -|`--namespace`|The namespace of a Pulsar Function. -|`--tenant`|The tenant of a Pulsar Function. -|`--topic`|The topic name that a Pulsar Function consumes from. -|`--trigger-file`|The path to a file that contains the data to trigger a Pulsar Function. -|`--trigger-value`|The value to trigger a Pulsar Function. - -**Example** - -```bash - -$ ./bin/pulsar-admin functions trigger \ - --tenant public \ - --namespace default \ - --name ExclamationFunctio6 \ - --topic persistent://public/default/my-topic-1 \ - --trigger-value "hello pulsar functions" - -``` - -As shown below, the `trigger` command returns the following result: - -```text - -This is my function! - -``` - -:::note - -You must specify the [entire topic name](getting-started-pulsar.md#topic-names) when using the `--topic` option. Otherwise, the following error occurs. 
- -```text - -Function in trigger function has unidentified topic -Reason: Function in trigger function has unidentified topic - -``` - -::: - diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/functions-deploy.md b/site2/website/versioned_docs/version-2.8.3-deprecated/functions-deploy.md deleted file mode 100644 index 9c47208f68fa0c..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/functions-deploy.md +++ /dev/null @@ -1,262 +0,0 @@ ---- -id: functions-deploy -title: Deploy Pulsar Functions -sidebar_label: "How-to: Deploy" -original_id: functions-deploy ---- - -## Requirements - -To deploy and manage Pulsar Functions, you need to have a Pulsar cluster running. There are several options for this: - -* You can run a [standalone cluster](getting-started-standalone.md) locally on your own machine. -* You can deploy a Pulsar cluster on [Kubernetes](deploy-kubernetes.md), [Amazon Web Services](deploy-aws.md), [bare metal](deploy-bare-metal.md), [DC/OS](https://dcos.io/), and more. - -If you run a non-[standalone](reference-terminology.md#standalone) cluster, you need to obtain the service URL for the cluster. How you obtain the service URL depends on how you deploy your Pulsar cluster. - -If you want to deploy and trigger Python user-defined functions, you need to install [the pulsar python client](http://pulsar.apache.org/docs/en/client-libraries-python/) on all the machines running [functions workers](functions-worker.md). - -## Command-line interface - -Pulsar Functions are deployed and managed using the [`pulsar-admin functions`](reference-pulsar-admin.md#functions) interface, which contains commands such as [`create`](reference-pulsar-admin.md#functions-create) for deploying functions in [cluster mode](#cluster-mode), [`trigger`](reference-pulsar-admin.md#trigger) for [triggering](#triggering-pulsar-functions) functions, [`list`](reference-pulsar-admin.md#list-2) for listing deployed functions. - -To learn more commands, refer to [`pulsar-admin functions`](reference-pulsar-admin.md#functions). - -### Default arguments - -When managing Pulsar Functions, you need to specify a variety of information about functions, including tenant, namespace, input and output topics, and so on. However, some parameters have default values if you do not specify values for them. The following table lists the default values. - -Parameter | Default -:---------|:------- -Function name | You can specify any value for the class name (except org, library, or similar class names). For example, when you specify the flag `--classname org.example.MyFunction`, the function name is `MyFunction`. -Tenant | Derived from names of the input topics. If the input topics are under the `marketing` tenant, which means the topic names have the form `persistent://marketing/{namespace}/{topicName}`, the tenant is `marketing`. -Namespace | Derived from names of the input topics. If the input topics are under the `asia` namespace under the `marketing` tenant, which means the topic names have the form `persistent://marketing/asia/{topicName}`, then the namespace is `asia`. -Output topic | `{input topic}-{function name}-output`. For example, if an input topic name of a function is `incoming`, and the function name is `exclamation`, then the name of the output topic is `incoming-exclamation-output`. 
-Subscription type | For `at-least-once` and `at-most-once` [processing guarantees](functions-overview.md#processing-guarantees), the [`SHARED`](concepts-messaging.md#shared) mode is applied by default; for `effectively-once` guarantees, the [`FAILOVER`](concepts-messaging.md#failover) mode is applied. -Processing guarantees | [`ATLEAST_ONCE`](functions-overview.md#processing-guarantees) -Pulsar service URL | `pulsar://localhost:6650` - -### Example of default arguments - -Take the `create` command as an example. - -```bash - -$ bin/pulsar-admin functions create \ - --jar my-pulsar-functions.jar \ - --classname org.example.MyFunction \ - --inputs my-function-input-topic1,my-function-input-topic2 - -``` - -The function has default values for the function name (`MyFunction`), tenant (`public`), namespace (`default`), subscription type (`SHARED`), processing guarantees (`ATLEAST_ONCE`), and Pulsar service URL (`pulsar://localhost:6650`). - -## Local run mode - -If you run a Pulsar Function in **local run** mode, it runs on the machine from which you enter the commands (on your laptop, an [AWS EC2](https://aws.amazon.com/ec2/) instance, and so on). The following is a [`localrun`](reference-pulsar-admin.md#localrun) command example. - -```bash - -$ bin/pulsar-admin functions localrun \ - --py myfunc.py \ - --classname myfunc.SomeFunction \ - --inputs persistent://public/default/input-1 \ - --output persistent://public/default/output-1 - -``` - -By default, the function connects to a Pulsar cluster running on the same machine, via a local [broker](reference-terminology.md#broker) service URL of `pulsar://localhost:6650`. If you use local run mode to run a function but connect it to a non-local Pulsar cluster, you can specify a different broker URL using the `--brokerServiceUrl` flag. The following is an example. - -```bash - -$ bin/pulsar-admin functions localrun \ - --broker-service-url pulsar://my-cluster-host:6650 \ - # Other function parameters - -``` - -## Cluster mode - -When you run a Pulsar Function in **cluster** mode, the function code is uploaded to a Pulsar broker and runs *alongside the broker* rather than in your [local environment](#local-run-mode). You can run a function in cluster mode using the [`create`](reference-pulsar-admin.md#create-1) command. - -```bash - -$ bin/pulsar-admin functions create \ - --py myfunc.py \ - --classname myfunc.SomeFunction \ - --inputs persistent://public/default/input-1 \ - --output persistent://public/default/output-1 - -``` - -### Update functions in cluster mode - -You can use the [`update`](reference-pulsar-admin.md#update-1) command to update a Pulsar Function running in cluster mode. The following command updates the function created in the [cluster mode](#cluster-mode) section. - -```bash - -$ bin/pulsar-admin functions update \ - --py myfunc.py \ - --classname myfunc.SomeFunction \ - --inputs persistent://public/default/new-input-topic \ - --output persistent://public/default/new-output-topic - -``` - -### Parallelism - -Pulsar Functions run as processes or threads, which are called **instances**. When you run a Pulsar Function, it runs as a single instance by default. With one localrun command, you can only run a single instance of a function. If you want to run multiple instances, you can use localrun command multiple times. - -When you create a function, you can specify the *parallelism* of a function (the number of instances to run). 
You can set the parallelism factor using the `--parallelism` flag of the [`create`](reference-pulsar-admin.md#functions-create) command. - -```bash - -$ bin/pulsar-admin functions create \ - --parallelism 3 \ - # Other function info - -``` - -You can adjust the parallelism of an already created function using the [`update`](reference-pulsar-admin.md#update-1) interface. - -```bash - -$ bin/pulsar-admin functions update \ - --parallelism 5 \ - # Other function - -``` - -If you specify a function configuration via YAML, use the `parallelism` parameter. The following is a config file example. - -```yaml - -# function-config.yaml -parallelism: 3 -inputs: -- persistent://public/default/input-1 -output: persistent://public/default/output-1 -# other parameters - -``` - -The following is corresponding update command. - -```bash - -$ bin/pulsar-admin functions update \ - --function-config-file function-config.yaml - -``` - -### Function instance resources - -When you run Pulsar Functions in [cluster mode](#cluster-mode), you can specify the resources that are assigned to each function [instance](#parallelism). - -Resource | Specified as | Runtimes -:--------|:----------------|:-------- -CPU | The number of cores | Kubernetes -RAM | The number of bytes | Process, Docker -Disk space | The number of bytes | Docker - -The following function creation command allocates 8 cores, 8 GB of RAM, and 10 GB of disk space to a function. - -```bash - -$ bin/pulsar-admin functions create \ - --jar target/my-functions.jar \ - --classname org.example.functions.MyFunction \ - --cpu 8 \ - --ram 8589934592 \ - --disk 10737418240 - -``` - -> #### Resources are *per instance* -> The resources that you apply to a given Pulsar Function are applied to each instance of the function. For example, if you apply 8 GB of RAM to a function with a parallelism of 5, you are applying 40 GB of RAM for the function in total. Make sure that you take the parallelism (the number of instances) factor into your resource calculations. - -### Use Package management service - -Package management enables version management and simplifies the upgrade and rollback processes for Functions, Sinks, and Sources. When you use the same function, sink and source in different namespaces, you can upload them to a common package management system. - -To use [Package management service](admin-api-packages.md), ensure that the package management service has been enabled in your cluster by setting the following properties in `broker.conf`. - -> Note: Package management service is not enabled by default. - -```yaml - -enablePackagesManagement=true -packagesManagementStorageProvider=org.apache.pulsar.packages.management.storage.bookkeeper.BookKeeperPackagesStorageProvider -packagesReplicas=1 -packagesManagementLedgerRootPath=/ledgers - -``` - -With Package management service enabled, you can upload your function packages by [upload a package](admin-api-packages.md#upload-a-package) to the service and get the [package URL](admin-api-packages.md#package-url). - -When you have a ready to use package URL, you can create the function with package URL by setting `--jar`, `--py`, or `--go` to the package URL with `pulsar-admin functions create`. - -## Trigger Pulsar Functions - -If a Pulsar Function is running in [cluster mode](#cluster-mode), you can **trigger** it at any time using the command line. Triggering a function means that you send a message with a specific value to the function and get the function output (if any) via the command line. 
- -> Triggering a function is to invoke a function by producing a message on one of the input topics. With the [`pulsar-admin functions trigger`](reference-pulsar-admin.md#trigger) command, you can send messages to functions without using the [`pulsar-client`](reference-cli-tools.md#pulsar-client) tool or a language-specific client library. - -To learn how to trigger a function, you can start with Python function that returns a simple string based on the input. - -```python - -# myfunc.py -def process(input): - return "This function has been triggered with a value of {0}".format(input) - -``` - -You can run the function in [local run mode](functions-deploy.md#local-run-mode). - -```bash - -$ bin/pulsar-admin functions create \ - --tenant public \ - --namespace default \ - --name myfunc \ - --py myfunc.py \ - --classname myfunc \ - --inputs persistent://public/default/in \ - --output persistent://public/default/out - -``` - -Then assign a consumer to listen on the output topic for messages from the `myfunc` function with the [`pulsar-client consume`](reference-cli-tools.md#consume) command. - -```bash - -$ bin/pulsar-client consume persistent://public/default/out \ - --subscription-name my-subscription - --num-messages 0 # Listen indefinitely - -``` - -And then you can trigger the function. - -```bash - -$ bin/pulsar-admin functions trigger \ - --tenant public \ - --namespace default \ - --name myfunc \ - --trigger-value "hello world" - -``` - -The consumer listening on the output topic produces something as follows in the log. - -``` - ------ got message ----- -This function has been triggered with a value of hello world - -``` - -> #### Topic info is not required -> In the `trigger` command, you only need to specify basic information about the function (tenant, namespace, and name). To trigger the function, you do not need to know the function input topics. diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/functions-develop.md b/site2/website/versioned_docs/version-2.8.3-deprecated/functions-develop.md deleted file mode 100644 index 2e29aa1c474005..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/functions-develop.md +++ /dev/null @@ -1,1600 +0,0 @@ ---- -id: functions-develop -title: Develop Pulsar Functions -sidebar_label: "How-to: Develop" -original_id: functions-develop ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -You learn how to develop Pulsar Functions with different APIs for Java, Python and Go. - -## Available APIs -In Java and Python, you have two options to write Pulsar Functions. In Go, you can use Pulsar Functions SDK for Go. - -Interface | Description | Use cases -:---------|:------------|:--------- -Language-native interface | No Pulsar-specific libraries or special dependencies required (only core libraries from Java/Python). | Functions that do not require access to the function [context](#context). -Pulsar Function SDK for Java/Python/Go | Pulsar-specific libraries that provide a range of functionality not provided by "native" interfaces. | Functions that require access to the function [context](#context). - -The language-native function, which adds an exclamation point to all incoming strings and publishes the resulting string to a topic, has no external dependencies. The following example is language-native function. 
- -````mdx-code-block - - - -```Java - -import java.util.function.Function; - -public class JavaNativeExclamationFunction implements Function { - @Override - public String apply(String input) { - return String.format("%s!", input); - } -} - -``` - -For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/JavaNativeExclamationFunction.java). - - - - -```python - -def process(input): - return "{}!".format(input) - -``` - -For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/python-examples/native_exclamation_function.py). - -:::note - -You can write Pulsar Functions in python2 or python3. However, Pulsar only looks for `python` as the interpreter. -If you're running Pulsar Functions on an Ubuntu system that only supports python3, you might fail to -start the functions. In this case, you can create a symlink. Your system will fail if -you subsequently install any other package that depends on Python 2.x. A solution is under development in [Issue 5518](https://github.com/apache/pulsar/issues/5518). - -```bash - -sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 10 - -``` - -::: - - - - -```` - -The following example uses Pulsar Functions SDK. -````mdx-code-block - - - -```Java - -import org.apache.pulsar.functions.api.Context; -import org.apache.pulsar.functions.api.Function; - -public class ExclamationFunction implements Function { - @Override - public String process(String input, Context context) { - return String.format("%s!", input); - } -} - -``` - -For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/ExclamationFunction.java). - - - - -```python - -from pulsar import Function - -class ExclamationFunction(Function): - def __init__(self): - pass - - def process(self, input, context): - return input + '!' - -``` - -For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/python-examples/exclamation_function.py). - - - - -```Go - -package main - -import ( - "context" - "fmt" - - "github.com/apache/pulsar/pulsar-function-go/pf" -) - -func HandleRequest(ctx context.Context, in []byte) error{ - fmt.Println(string(in) + "!") - return nil -} - -func main() { - pf.Start(HandleRequest) -} - -``` - -For complete code, see [here](https://github.com/apache/pulsar/blob/77cf09eafa4f1626a53a1fe2e65dd25f377c1127/pulsar-function-go/examples/inputFunc/inputFunc.go#L20-L36). - - - - -```` - -## Schema registry -Pulsar has a built-in schema registry and is bundled with popular schema types, such as Avro, JSON and Protobuf. Pulsar Functions can leverage the existing schema information from input topics and derive the input type. The schema registry applies for output topic as well. - -## SerDe -SerDe stands for **Ser**ialization and **De**serialization. Pulsar Functions uses SerDe when publishing data to and consuming data from Pulsar topics. How SerDe works by default depends on the language you use for a particular function. - -````mdx-code-block - - - -When you write Pulsar Functions in Java, the following basic Java types are built in and supported by default: `String`, `Double`, `Integer`, `Float`, `Long`, `Short`, and `Byte`. - -To customize Java types, you need to implement the following interface. 
- -```java - -public interface SerDe { - T deserialize(byte[] input); - byte[] serialize(T input); -} - -``` - -SerDe works in the following ways in Java Functions. -- If the input and output topics have schema, Pulsar Functions use schema for SerDe. -- If the input or output topics do not exist, Pulsar Functions adopt the following rules to determine SerDe: - - If the schema type is specified, Pulsar Functions use the specified schema type. - - If SerDe is specified, Pulsar Functions use the specified SerDe, and the schema type for input and output topics is `Byte`. - - If neither the schema type nor SerDe is specified, Pulsar Functions use the built-in SerDe. For non-primitive schema type, the built-in SerDe serializes and deserializes objects in the `JSON` format. - - - - -In Python, the default SerDe is identity, meaning that the type is serialized as whatever type the producer function returns. - -You can specify the SerDe when [creating](functions-deploy.md#cluster-mode) or [running](functions-deploy.md#local-run-mode) functions. - -```bash - -$ bin/pulsar-admin functions create \ - --tenant public \ - --namespace default \ - --name my_function \ - --py my_function.py \ - --classname my_function.MyFunction \ - --custom-serde-inputs '{"input-topic-1":"Serde1","input-topic-2":"Serde2"}' \ - --output-serde-classname Serde3 \ - --output output-topic-1 - -``` - -This case contains two input topics: `input-topic-1` and `input-topic-2`, each of which is mapped to a different SerDe class (the map must be specified as a JSON string). The output topic, `output-topic-1`, uses the `Serde3` class for SerDe. At the moment, all Pulsar Functions logic, include processing function and SerDe classes, must be contained within a single Python file. - -When using Pulsar Functions for Python, you have three SerDe options: - -1. You can use the [`IdentitySerde`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L70), which leaves the data unchanged. The `IdentitySerDe` is the **default**. Creating or running a function without explicitly specifying SerDe means that this option is used. -2. You can use the [`PickleSerDe`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L62), which uses Python [`pickle`](https://docs.python.org/3/library/pickle.html) for SerDe. -3. You can create a custom SerDe class by implementing the baseline [`SerDe`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L50) class, which has just two methods: [`serialize`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L53) for converting the object into bytes, and [`deserialize`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L58) for converting bytes into an object of the required application-specific type. - -The table below shows when you should use each SerDe. - -SerDe option | When to use -:------------|:----------- -`IdentitySerde` | When you work with simple types like strings, Booleans, integers. -`PickleSerDe` | When you work with complex, application-specific types and are comfortable with the "best effort" approach of `pickle`. -Custom SerDe | When you require explicit control over SerDe, potentially for performance or data compatibility purposes. - - - - -Currently, the feature is not available in Go. 
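-
-Because custom SerDe classes cannot be plugged in, a Go function that needs a custom wire format typically decodes and encodes the payload by hand inside the handler, since Go functions receive raw `[]byte` input. The following is a hedged sketch: the `Tweet` struct and the `username|content` wire format mirror the Java and Python examples below and are not part of the Pulsar API.
-
-```go
-
-package main
-
-import (
-    "context"
-    "errors"
-    "strings"
-
-    "github.com/apache/pulsar/pulsar-function-go/pf"
-)
-
-// Tweet mirrors the example type used in the Java and Python tabs.
-type Tweet struct {
-    Username string
-    Content  string
-}
-
-// decode parses the "username|content" wire format by hand.
-func decode(in []byte) (Tweet, error) {
-    fields := strings.SplitN(string(in), "|", 2)
-    if len(fields) != 2 {
-        return Tweet{}, errors.New("malformed tweet payload")
-    }
-    return Tweet{Username: fields[0], Content: fields[1]}, nil
-}
-
-// encode writes the Tweet back into the same wire format.
-func encode(t Tweet) []byte {
-    return []byte(t.Username + "|" + t.Content)
-}
-
-func HandleRequest(ctx context.Context, in []byte) ([]byte, error) {
-    t, err := decode(in)
-    if err != nil {
-        return nil, err
-    }
-    // ... apply any processing to t here ...
-    return encode(t), nil
-}
-
-func main() {
-    pf.Start(HandleRequest)
-}
-
-```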
- - - - -```` - -### Example -Imagine that you're writing Pulsar Functions that are processing tweet objects, you can refer to the following example of `Tweet` class. - -````mdx-code-block - - - -```java - -public class Tweet { - private String username; - private String tweetContent; - - public Tweet(String username, String tweetContent) { - this.username = username; - this.tweetContent = tweetContent; - } - - // Standard setters and getters -} - -``` - -To pass `Tweet` objects directly between Pulsar Functions, you need to provide a custom SerDe class. In the example below, `Tweet` objects are basically strings in which the username and tweet content are separated by a `|`. - -```java - -package com.example.serde; - -import org.apache.pulsar.functions.api.SerDe; - -import java.util.regex.Pattern; - -public class TweetSerde implements SerDe { - public Tweet deserialize(byte[] input) { - String s = new String(input); - String[] fields = s.split(Pattern.quote("|")); - return new Tweet(fields[0], fields[1]); - } - - public byte[] serialize(Tweet input) { - return "%s|%s".format(input.getUsername(), input.getTweetContent()).getBytes(); - } -} - -``` - -To apply this customized SerDe to a particular Pulsar Function, you need to: - -* Package the `Tweet` and `TweetSerde` classes into a JAR. -* Specify a path to the JAR and SerDe class name when deploying the function. - -The following is an example of [`create`](reference-pulsar-admin.md#create-1) operation. - -```bash - -$ bin/pulsar-admin functions create \ - --jar /path/to/your.jar \ - --output-serde-classname com.example.serde.TweetSerde \ - # Other function attributes - -``` - -> #### Custom SerDe classes must be packaged with your function JARs -> Pulsar does not store your custom SerDe classes separately from your Pulsar Functions. So you need to include your SerDe classes in your function JARs. If not, Pulsar returns an error. - - - - -```python - -class Tweet(object): - def __init__(self, username, tweet_content): - self.username = username - self.tweet_content = tweet_content - -``` - -In order to use this class in Pulsar Functions, you have two options: - -1. You can specify `PickleSerDe`, which applies the [`pickle`](https://docs.python.org/3/library/pickle.html) library SerDe. -2. You can create your own SerDe class. The following is an example. - - ```python - - from pulsar import SerDe - - class TweetSerDe(SerDe): - - def serialize(self, input): - return bytes("{0}|{1}".format(input.username, input.tweet_content)) - - def deserialize(self, input_bytes): - tweet_components = str(input_bytes).split('|') - return Tweet(tweet_components[0], tweet_componentsp[1]) - - ``` - -For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/python-examples/custom_object_function.py). - - - - -```` - -In both languages, however, you can write custom SerDe logic for more complex, application-specific types. - -## Context -Java, Python and Go SDKs provide access to a **context object** that can be used by a function. This context object provides a wide variety of information and functionality to the function. - -* The name and ID of a Pulsar Function. -* The message ID of each message. Each Pulsar message is automatically assigned with an ID. -* The key, event time, properties and partition key of each message. -* The name of the topic to which the message is sent. -* The names of all input topics as well as the output topic associated with the function. -* The name of the class used for [SerDe](#serde). 
-* The [tenant](reference-terminology.md#tenant) and namespace associated with the function. -* The ID of the Pulsar Functions instance running the function. -* The version of the function. -* The [logger object](functions-develop.md#logger) used by the function, which can be used to create function log messages. -* Access to arbitrary [user configuration](#user-config) values supplied via the CLI. -* An interface for recording [metrics](#metrics). -* An interface for storing and retrieving state in [state storage](#state-storage). -* A function to publish new messages onto arbitrary topics. -* A function to ack the message being processed (if auto-ack is disabled). -* (Java) get Pulsar admin client. - -````mdx-code-block - - - -The [Context](https://github.com/apache/pulsar/blob/master/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/Context.java) interface provides a number of methods that you can use to access the function [context](#context). The various method signatures for the `Context` interface are listed as follows. - -```java - -public interface Context { - Record getCurrentRecord(); - Collection getInputTopics(); - String getOutputTopic(); - String getOutputSchemaType(); - String getTenant(); - String getNamespace(); - String getFunctionName(); - String getFunctionId(); - String getInstanceId(); - String getFunctionVersion(); - Logger getLogger(); - void incrCounter(String key, long amount); - void incrCounterAsync(String key, long amount); - long getCounter(String key); - long getCounterAsync(String key); - void putState(String key, ByteBuffer value); - void putStateAsync(String key, ByteBuffer value); - void deleteState(String key); - ByteBuffer getState(String key); - ByteBuffer getStateAsync(String key); - Map getUserConfigMap(); - Optional getUserConfigValue(String key); - Object getUserConfigValueOrDefault(String key, Object defaultValue); - void recordMetric(String metricName, double value); - CompletableFuture publish(String topicName, O object, String schemaOrSerdeClassName); - CompletableFuture publish(String topicName, O object); - TypedMessageBuilder newOutputMessage(String topicName, Schema schema) throws PulsarClientException; - ConsumerBuilder newConsumerBuilder(Schema schema) throws PulsarClientException; - PulsarAdmin getPulsarAdmin(); - PulsarAdmin getPulsarAdmin(String clusterName); -} - -``` - -The following example uses several methods available via the `Context` object. - -```java - -import org.apache.pulsar.functions.api.Context; -import org.apache.pulsar.functions.api.Function; -import org.slf4j.Logger; - -import java.util.stream.Collectors; - -public class ContextFunction implements Function { - public Void process(String input, Context context) { - Logger LOG = context.getLogger(); - String inputTopics = context.getInputTopics().stream().collect(Collectors.joining(", ")); - String functionName = context.getFunctionName(); - - String logMessage = String.format("A message with a value of \"%s\" has arrived on one of the following topics: %s\n", - input, - inputTopics); - - LOG.info(logMessage); - - String metricName = String.format("function-%s-messages-received", functionName); - context.recordMetric(metricName, 1); - - return null; - } -} - -``` - - - - -``` - -class ContextImpl(pulsar.Context): - def get_message_id(self): - ... - def get_message_key(self): - ... - def get_message_eventtime(self): - ... - def get_message_properties(self): - ... - def get_current_message_topic_name(self): - ... - def get_partition_key(self): - ... 
- def get_function_name(self): - ... - def get_function_tenant(self): - ... - def get_function_namespace(self): - ... - def get_function_id(self): - ... - def get_instance_id(self): - ... - def get_function_version(self): - ... - def get_logger(self): - ... - def get_user_config_value(self, key): - ... - def get_user_config_map(self): - ... - def record_metric(self, metric_name, metric_value): - ... - def get_input_topics(self): - ... - def get_output_topic(self): - ... - def get_output_serde_class_name(self): - ... - def publish(self, topic_name, message, serde_class_name="serde.IdentitySerDe", - properties=None, compression_type=None, callback=None, message_conf=None): - ... - def ack(self, msgid, topic): - ... - def get_and_reset_metrics(self): - ... - def reset_metrics(self): - ... - def get_metrics(self): - ... - def incr_counter(self, key, amount): - ... - def get_counter(self, key): - ... - def del_counter(self, key): - ... - def put_state(self, key, value): - ... - def get_state(self, key): - ... - -``` - - - - -``` - -func (c *FunctionContext) GetInstanceID() int { - return c.instanceConf.instanceID -} - -func (c *FunctionContext) GetInputTopics() []string { - return c.inputTopics -} - -func (c *FunctionContext) GetOutputTopic() string { - return c.instanceConf.funcDetails.GetSink().Topic -} - -func (c *FunctionContext) GetFuncTenant() string { - return c.instanceConf.funcDetails.Tenant -} - -func (c *FunctionContext) GetFuncName() string { - return c.instanceConf.funcDetails.Name -} - -func (c *FunctionContext) GetFuncNamespace() string { - return c.instanceConf.funcDetails.Namespace -} - -func (c *FunctionContext) GetFuncID() string { - return c.instanceConf.funcID -} - -func (c *FunctionContext) GetFuncVersion() string { - return c.instanceConf.funcVersion -} - -func (c *FunctionContext) GetUserConfValue(key string) interface{} { - return c.userConfigs[key] -} - -func (c *FunctionContext) GetUserConfMap() map[string]interface{} { - return c.userConfigs -} - -func (c *FunctionContext) SetCurrentRecord(record pulsar.Message) { - c.record = record -} - -func (c *FunctionContext) GetCurrentRecord() pulsar.Message { - return c.record -} - -func (c *FunctionContext) NewOutputMessage(topic string) pulsar.Producer { - return c.outputMessage(topic) -} - -``` - -The following example uses several methods available via the `Context` object. - -``` - -import ( - "context" - "fmt" - - "github.com/apache/pulsar/pulsar-function-go/pf" -) - -func contextFunc(ctx context.Context) { - if fc, ok := pf.FromContext(ctx); ok { - fmt.Printf("function ID is:%s, ", fc.GetFuncID()) - fmt.Printf("function version is:%s\n", fc.GetFuncVersion()) - } -} - -``` - -For complete code, see [here](https://github.com/apache/pulsar/blob/77cf09eafa4f1626a53a1fe2e65dd25f377c1127/pulsar-function-go/examples/contextFunc/contextFunc.go#L29-L34). - - - - -```` - -### User config -When you run or update Pulsar Functions created using SDK, you can pass arbitrary key/values to them with the command line with the `--user-config` flag. Key/values must be specified as JSON. The following function creation command passes a user configured key/value to a function. - -```bash - -$ bin/pulsar-admin functions create \ - --name word-filter \ - # Other function configs - --user-config '{"forbidden-word":"rosebud"}' - -``` - -````mdx-code-block - - - -The Java SDK [`Context`](#context) object enables you to access key/value pairs provided to Pulsar Functions via the command line (as JSON). 
The following example passes a key/value pair. - -```bash - -$ bin/pulsar-admin functions create \ - # Other function configs - --user-config '{"word-of-the-day":"verdure"}' - -``` - -To access that value in a Java function: - -```java - -import org.apache.pulsar.functions.api.Context; -import org.apache.pulsar.functions.api.Function; -import org.slf4j.Logger; - -import java.util.Optional; - -public class UserConfigFunction implements Function { - @Override - public void apply(String input, Context context) { - Logger LOG = context.getLogger(); - Optional wotd = context.getUserConfigValue("word-of-the-day"); - if (wotd.isPresent()) { - LOG.info("The word of the day is {}", wotd); - } else { - LOG.warn("No word of the day provided"); - } - return null; - } -} - -``` - -The `UserConfigFunction` function will log the string `"The word of the day is verdure"` every time the function is invoked (which means every time a message arrives). The `word-of-the-day` user config will be changed only when the function is updated with a new config value via the command line. - -You can also access the entire user config map or set a default value in case no value is present: - -```java - -// Get the whole config map -Map allConfigs = context.getUserConfigMap(); - -// Get value or resort to default -String wotd = context.getUserConfigValueOrDefault("word-of-the-day", "perspicacious"); - -``` - -> For all key/value pairs passed to Java functions, both the key *and* the value are `String`. To set the value to be a different type, you need to deserialize from the `String` type. - - - - -In Python function, you can access the configuration value like this. - -```python - -from pulsar import Function - -class WordFilter(Function): - def process(self, context, input): - forbidden_word = context.user_config()["forbidden-word"] - - # Don't publish the message if it contains the user-supplied - # forbidden word - if forbidden_word in input: - pass - # Otherwise publish the message - else: - return input - -``` - -The Python SDK [`Context`](#context) object enables you to access key/value pairs provided to Pulsar Functions via the command line (as JSON). The following example passes a key/value pair. - -```bash - -$ bin/pulsar-admin functions create \ - # Other function configs \ - --user-config '{"word-of-the-day":"verdure"}' - -``` - -To access that value in a Python function: - -```python - -from pulsar import Function - -class UserConfigFunction(Function): - def process(self, input, context): - logger = context.get_logger() - wotd = context.get_user_config_value('word-of-the-day') - if wotd is None: - logger.warn('No word of the day provided') - else: - logger.info("The word of the day is {0}".format(wotd)) - -``` - - - - -The Go SDK [`Context`](#context) object enables you to access key/value pairs provided to Pulsar Functions via the command line (as JSON). The following example passes a key/value pair. 
- -```bash - -$ bin/pulsar-admin functions create \ - --go path/to/go/binary - --user-config '{"word-of-the-day":"lackadaisical"}' - -``` - -To access that value in a Go function: - -```go - -func contextFunc(ctx context.Context) { - fc, ok := pf.FromContext(ctx) - if !ok { - logutil.Fatal("Function context is not defined") - } - - wotd := fc.GetUserConfValue("word-of-the-day") - - if wotd == nil { - logutil.Warn("The word of the day is empty") - } else { - logutil.Infof("The word of the day is %s", wotd.(string)) - } -} - -``` - - - - -```` - -### Logger - -````mdx-code-block - - - -Pulsar Functions that use the Java SDK have access to an [SLF4j](https://www.slf4j.org/) [`Logger`](https://www.slf4j.org/api/org/apache/log4j/Logger.html) object that can be used to produce logs at the chosen log level. The following example logs either a `WARNING`- or `INFO`-level log based on whether the incoming string contains the word `danger`. - -```java - -import org.apache.pulsar.functions.api.Context; -import org.apache.pulsar.functions.api.Function; -import org.slf4j.Logger; - -public class LoggingFunction implements Function { - @Override - public void apply(String input, Context context) { - Logger LOG = context.getLogger(); - String messageId = new String(context.getMessageId()); - - if (input.contains("danger")) { - LOG.warn("A warning was received in message {}", messageId); - } else { - LOG.info("Message {} received\nContent: {}", messageId, input); - } - - return null; - } -} - -``` - -If you want your function to produce logs, you need to specify a log topic when creating or running the function. The following is an example. - -```bash - -$ bin/pulsar-admin functions create \ - --jar my-functions.jar \ - --classname my.package.LoggingFunction \ - --log-topic persistent://public/default/logging-function-logs \ - # Other function configs - -``` - -All logs produced by `LoggingFunction` above can be accessed via the `persistent://public/default/logging-function-logs` topic. - -#### Customize Function log level -Additionally, you can use the XML file, `functions_log4j2.xml`, to customize the function log level. -To customize the function log level, create or update `functions_log4j2.xml` in your Pulsar conf directory (for example, `/etc/pulsar/` on bare-metal, or `/pulsar/conf` on Kubernetes) to contain contents such as: - -```xml - - - pulsar-functions-instance - 30 - - - pulsar.log.appender - RollingFile - - - pulsar.log.level - debug - - - bk.log.level - debug - - - - - Console - SYSTEM_OUT - - %d{ISO8601_OFFSET_DATE_TIME_HHMM} [%t] %-5level %logger{36} - %msg%n - - - - RollingFile - ${sys:pulsar.function.log.dir}/${sys:pulsar.function.log.file}.log - ${sys:pulsar.function.log.dir}/${sys:pulsar.function.log.file}-%d{MM-dd-yyyy}-%i.log.gz - true - - %d{ISO8601_OFFSET_DATE_TIME_HHMM} [%t] %-5level %logger{36} - %msg%n - - - - 1 - true - - - 1 GB - - - 0 0 0 * * ? - - - - - ${sys:pulsar.function.log.dir} - 2 - - */${sys:pulsar.function.log.file}*log.gz - - - 30d - - - - - - BkRollingFile - ${sys:pulsar.function.log.dir}/${sys:pulsar.function.log.file}.bk - ${sys:pulsar.function.log.dir}/${sys:pulsar.function.log.file}.bk-%d{MM-dd-yyyy}-%i.log.gz - true - - %d{ISO8601_OFFSET_DATE_TIME_HHMM} [%t] %-5level %logger{36} - %msg%n - - - - 1 - true - - - 1 GB - - - 0 0 0 * * ? 
- - - - - ${sys:pulsar.function.log.dir} - 2 - - */${sys:pulsar.function.log.file}.bk*log.gz - - - 30d - - - - - - - - org.apache.pulsar.functions.runtime.shaded.org.apache.bookkeeper - ${sys:bk.log.level} - false - - BkRollingFile - - - - ${sys:pulsar.log.level} - - ${sys:pulsar.log.appender} - ${sys:pulsar.log.level} - - - - - -``` - -The properties set like: - -```xml - - - pulsar.log.level - debug - - -``` - -propagate to places where they are referenced, such as: - -```xml - - - ${sys:pulsar.log.level} - - ${sys:pulsar.log.appender} - ${sys:pulsar.log.level} - - - -``` - -In the above example, debug level logging would be applied to ALL function logs. -This may be more verbose than you desire. To be more selective, you can apply different log levels to different classes or modules. For example: - -```xml - - - com.example.module - info - false - - ${sys:pulsar.log.appender} - - - -``` - -You can be more specific as well, such as applying a more verbose log level to a class in the module, such as: - -```xml - - - com.example.module.className - debug - false - - Console - - - -``` - -Each `` entry allows you to output the log to a target specified in the definition of the Appender. - -Additivity pertains to whether log messages will be duplicated if multiple Logger entries overlap. -To disable additivity, specify - -```xml - -false - -``` - -as shown in examples above. Disabling additivity prevents duplication of log messages when one or more `` entries contain classes or modules that overlap. - -The `` is defined in the `` section, such as: - -```xml - - - Console - SYSTEM_OUT - - %d{ISO8601_OFFSET_DATE_TIME_HHMM} [%t] %-5level %logger{36} - %msg%n - - - -``` - - - - -Pulsar Functions that use the Python SDK have access to a logging object that can be used to produce logs at the chosen log level. The following example function that logs either a `WARNING`- or `INFO`-level log based on whether the incoming string contains the word `danger`. - -```python - -from pulsar import Function - -class LoggingFunction(Function): - def process(self, input, context): - logger = context.get_logger() - msg_id = context.get_message_id() - if 'danger' in input: - logger.warn("A warning was received in message {0}".format(context.get_message_id())) - else: - logger.info("Message {0} received\nContent: {1}".format(msg_id, input)) - -``` - -If you want your function to produce logs on a Pulsar topic, you need to specify a **log topic** when creating or running the function. The following is an example. - -```bash - -$ bin/pulsar-admin functions create \ - --py logging_function.py \ - --classname logging_function.LoggingFunction \ - --log-topic logging-function-logs \ - # Other function configs - -``` - -All logs produced by `LoggingFunction` above can be accessed via the `logging-function-logs` topic. -Additionally, you can specify the function log level through the broker XML file as described in [Customize Function log level](#customize-function-log-level). - - - - -The following Go Function example shows different log levels based on the function input. - -``` - -import ( - "context" - - "github.com/apache/pulsar/pulsar-function-go/pf" - - log "github.com/apache/pulsar/pulsar-function-go/logutil" -) - -func loggerFunc(ctx context.Context, input []byte) { - if len(input) <= 100 { - log.Infof("This input has a length of: %d", len(input)) - } else { - log.Warnf("This input is getting too long! 
It has {%d} characters", len(input)) - } -} - -func main() { - pf.Start(loggerFunc) -} - -``` - -When you use `logTopic` related functionalities in Go Function, import `github.com/apache/pulsar/pulsar-function-go/logutil`, and you do not have to use the `getLogger()` context object. - -Additionally, you can specify the function log level through the broker XML file, as described here: [Customize Function log level](#customize-function-log-level) - - - - -```` - -### Pulsar admin - -Pulsar Functions using the Java SDK has access to the Pulsar admin client, which allows the Pulsar admin client to manage API calls to current Pulsar clusters or external clusters (if `external-pulsars` is provided). - -````mdx-code-block - - - -Below is an example of how to use the Pulsar admin client exposed from the Function `context`. - -``` - -import org.apache.pulsar.client.admin.PulsarAdmin; -import org.apache.pulsar.functions.api.Context; -import org.apache.pulsar.functions.api.Function; - -/** - * In this particular example, for every input message, - * the function resets the cursor of the current function's subscription to a - * specified timestamp. - */ -public class CursorManagementFunction implements Function { - - @Override - public String process(String input, Context context) throws Exception { - PulsarAdmin adminClient = context.getPulsarAdmin(); - if (adminClient != null) { - String topic = context.getCurrentRecord().getTopicName().isPresent() ? - context.getCurrentRecord().getTopicName().get() : null; - String subName = context.getTenant() + "/" + context.getNamespace() + "/" + context.getFunctionName(); - if (topic != null) { - // 1578188166 below is a random-pick timestamp - adminClient.topics().resetCursor(topic, subName, 1578188166); - return "reset cursor successfully"; - } - } - return null; - } -} - -``` - -If you want your function to get access to the Pulsar admin client, you need to enable this feature by setting `exposeAdminClientEnabled=true` in the `functions_worker.yml` file. You can test whether this feature is enabled or not using the command `pulsar-admin functions localrun` with the flag `--web-service-url`. - -``` - -$ bin/pulsar-admin functions localrun \ - --jar my-functions.jar \ - --classname my.package.CursorManagementFunction \ - --web-service-url http://pulsar-web-service:8080 \ - # Other function configs - -``` - - - - -```` - -## Metrics - -Pulsar Functions allows you to deploy and manage processing functions that consume messages from and publish messages to Pulsar topics easily. It is important to ensure that the running functions are healthy at any time. Pulsar Functions can publish arbitrary metrics to the metrics interface which can be queried. - -:::note - -If a Pulsar Function uses the language-native interface for Java or Python, that function is not able to publish metrics and stats to Pulsar. - -::: - -You can monitor Pulsar Functions that have been deployed with the following methods: - -- Check the metrics provided by Pulsar. - - Pulsar Functions expose the metrics that can be collected and used for monitoring the health of **Java, Python, and Go** functions. You can check the metrics by following the [monitoring](deploy-monitoring.md) guide. - - For the complete list of the function metrics, see [here](reference-metrics.md#pulsar-functions). - -- Set and check your customized metrics. - - In addition to the metrics provided by Pulsar, Pulsar allows you to customize metrics for **Java and Python** functions. 
Function workers collect user-defined metrics to Prometheus automatically and you can check them in Grafana. - -Here are examples of how to customize metrics for Java and Python functions. - -````mdx-code-block - - - -You can record metrics using the [`Context`](#context) object on a per-key basis. For example, you can set a metric for the `process-count` key and a different metric for the `elevens-count` key every time the function processes a message. - -```java - -import org.apache.pulsar.functions.api.Context; -import org.apache.pulsar.functions.api.Function; - -public class MetricRecorderFunction implements Function { - @Override - public void apply(Integer input, Context context) { - // Records the metric 1 every time a message arrives - context.recordMetric("hit-count", 1); - - // Records the metric only if the arriving number equals 11 - if (input == 11) { - context.recordMetric("elevens-count", 1); - } - - return null; - } -} - -``` - - - - -You can record metrics using the [`Context`](#context) object on a per-key basis. For example, you can set a metric for the `process-count` key and a different metric for the `elevens-count` key every time the function processes a message. The following is an example. - -```python - -from pulsar import Function - -class MetricRecorderFunction(Function): - def process(self, input, context): - context.record_metric('hit-count', 1) - - if input == 11: - context.record_metric('elevens-count', 1) - -``` - - - - -Currently, the feature is not available in Go. - - - - -```` - -## Security - -If you want to enable security on Pulsar Functions, first you should enable security on [Functions Workers](functions-worker.md). For more details, refer to [Security settings](functions-worker.md#security-settings). - -Pulsar Functions can support the following providers: - -- ClearTextSecretsProvider -- EnvironmentBasedSecretsProvider - -> Pulsar Function supports ClearTextSecretsProvider by default. - -At the same time, Pulsar Functions provides two interfaces, **SecretsProvider** and **SecretsProviderConfigurator**, allowing users to customize secret provider. - -````mdx-code-block - - - -You can get secret provider using the [`Context`](#context) object. The following is an example: - -```java - -import org.apache.pulsar.functions.api.Context; -import org.apache.pulsar.functions.api.Function; -import org.slf4j.Logger; - -public class GetSecretProviderFunction implements Function { - - @Override - public Void process(String input, Context context) throws Exception { - Logger LOG = context.getLogger(); - String secretProvider = context.getSecret(input); - - if (!secretProvider.isEmpty()) { - LOG.info("The secret provider is {}", secretProvider); - } else { - LOG.warn("No secret provider"); - } - - return null; - } -} - -``` - - - - -You can get secret provider using the [`Context`](#context) object. The following is an example: - -```python - -from pulsar import Function - -class GetSecretProviderFunction(Function): - def process(self, input, context): - logger = context.get_logger() - secret_provider = context.get_secret(input) - if secret_provider is None: - logger.warn('No secret provider') - else: - logger.info("The secret provider is {0}".format(secret_provider)) - -``` - - - - -Currently, the feature is not available in Go. - - - - -```` - -## State storage -Pulsar Functions use [Apache BookKeeper](https://bookkeeper.apache.org) as a state storage interface. 
Pulsar installation, including the local standalone installation, includes deployment of BookKeeper bookies. - -Since Pulsar 2.1.0 release, Pulsar integrates with Apache BookKeeper [table service](https://docs.google.com/document/d/155xAwWv5IdOitHh1NVMEwCMGgB28M3FyMiQSxEpjE-Y/edit#heading=h.56rbh52koe3f) to store the `State` for functions. For example, a `WordCount` function can store its `counters` state into BookKeeper table service via Pulsar Functions State API. - -States are key-value pairs, where the key is a string and the value is arbitrary binary data - counters are stored as 64-bit big-endian binary values. Keys are scoped to an individual Pulsar Function, and shared between instances of that function. - -You can access states within Pulsar Java Functions using the `putState`, `putStateAsync`, `getState`, `getStateAsync`, `incrCounter`, `incrCounterAsync`, `getCounter`, `getCounterAsync` and `deleteState` calls on the context object. You can access states within Pulsar Python Functions using the `putState`, `getState`, `incrCounter`, `getCounter` and `deleteState` calls on the context object. You can also manage states using the [querystate](#query-state) and [putstate](#putstate) options to `pulsar-admin functions`. - -:::note - -State storage is not available in Go. - -::: - -### API - -````mdx-code-block - - - -Currently Pulsar Functions expose the following APIs for mutating and accessing State. These APIs are available in the [Context](functions-develop.md#context) object when you are using Java SDK functions. - -#### incrCounter - -```java - - /** - * Increment the builtin distributed counter referred by key - * @param key The name of the key - * @param amount The amount to be incremented - */ - void incrCounter(String key, long amount); - -``` - -The application can use `incrCounter` to change the counter of a given `key` by the given `amount`. - -#### incrCounterAsync - -```java - - /** - * Increment the builtin distributed counter referred by key - * but dont wait for the completion of the increment operation - * - * @param key The name of the key - * @param amount The amount to be incremented - */ - CompletableFuture incrCounterAsync(String key, long amount); - -``` - -The application can use `incrCounterAsync` to asynchronously change the counter of a given `key` by the given `amount`. - -#### getCounter - -```java - - /** - * Retrieve the counter value for the key. - * - * @param key name of the key - * @return the amount of the counter value for this key - */ - long getCounter(String key); - -``` - -The application can use `getCounter` to retrieve the counter of a given `key` mutated by `incrCounter`. - -Except the `counter` API, Pulsar also exposes a general key/value API for functions to store -general key/value state. - -#### getCounterAsync - -```java - - /** - * Retrieve the counter value for the key, but don't wait - * for the operation to be completed - * - * @param key name of the key - * @return the amount of the counter value for this key - */ - CompletableFuture getCounterAsync(String key); - -``` - -The application can use `getCounterAsync` to asynchronously retrieve the counter of a given `key` mutated by `incrCounterAsync`. - -#### putState - -```java - - /** - * Update the state value for the key. 
- * - * @param key name of the key - * @param value state value of the key - */ - void putState(String key, ByteBuffer value); - -``` - -#### putStateAsync - -```java - - /** - * Update the state value for the key, but don't wait for the operation to be completed - * - * @param key name of the key - * @param value state value of the key - */ - CompletableFuture putStateAsync(String key, ByteBuffer value); - -``` - -The application can use `putStateAsync` to asynchronously update the state of a given `key`. - -#### getState - -```java - - /** - * Retrieve the state value for the key. - * - * @param key name of the key - * @return the state value for the key. - */ - ByteBuffer getState(String key); - -``` - -#### getStateAsync - -```java - - /** - * Retrieve the state value for the key, but don't wait for the operation to be completed - * - * @param key name of the key - * @return the state value for the key. - */ - CompletableFuture getStateAsync(String key); - -``` - -The application can use `getStateAsync` to asynchronously retrieve the state of a given `key`. - -#### deleteState - -```java - - /** - * Delete the state value for the key. - * - * @param key name of the key - */ - -``` - -Counters and binary values share the same keyspace, so this deletes either type. - - - - -Currently Pulsar Functions expose the following APIs for mutating and accessing State. These APIs are available in the [Context](#context) object when you are using Python SDK functions. - -#### incr_counter - -```python - - def incr_counter(self, key, amount): - ""incr the counter of a given key in the managed state"" - -``` - -Application can use `incr_counter` to change the counter of a given `key` by the given `amount`. -If the `key` does not exist, a new key is created. - -#### get_counter - -```python - - def get_counter(self, key): - """get the counter of a given key in the managed state""" - -``` - -Application can use `get_counter` to retrieve the counter of a given `key` mutated by `incrCounter`. - -Except the `counter` API, Pulsar also exposes a general key/value API for functions to store -general key/value state. - -#### put_state - -```python - - def put_state(self, key, value): - """update the value of a given key in the managed state""" - -``` - -The key is a string, and the value is arbitrary binary data. - -#### get_state - -```python - - def get_state(self, key): - """get the value of a given key in the managed state""" - -``` - -#### del_counter - -```python - - def del_counter(self, key): - """delete the counter of a given key in the managed state""" - -``` - -Counters and binary values share the same keyspace, so this deletes either type. - - - - -```` - -### Query State - -A Pulsar Function can use the [State API](#api) for storing state into Pulsar's state storage -and retrieving state back from Pulsar's state storage. Additionally Pulsar also provides -CLI commands for querying its state. - -```shell - -$ bin/pulsar-admin functions querystate \ - --tenant \ - --namespace \ - --name \ - --state-storage-url \ - --key \ - [---watch] - -``` - -If `--watch` is specified, the CLI will watch the value of the provided `state-key`. - -### Example - -````mdx-code-block - - - -{@inject: github:WordCountFunction:/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/WordCountFunction.java} is a very good example -demonstrating on how Application can easily store `state` in Pulsar Functions. 
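-
-Once an instance of this function is running, any counter can be read back with the `querystate` command described above. The following is a hedged invocation: it assumes the function was created as `public/default/word-count` and that `hello` is one of the stored keys; add `--state-storage-url` if your deployment requires it.
-
-```shell
-
-$ bin/pulsar-admin functions querystate \
-  --tenant public \
-  --namespace default \
-  --name word-count \
-  --key hello # a hypothetical word key
-
-```
-
-The function itself looks as follows.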
- -```java - -import org.apache.pulsar.functions.api.Context; -import org.apache.pulsar.functions.api.Function; - -import java.util.Arrays; - -public class WordCountFunction implements Function { - @Override - public Void process(String input, Context context) throws Exception { - Arrays.asList(input.split("\\.")).forEach(word -> context.incrCounter(word, 1)); - return null; - } -} - -``` - -The logic of this `WordCount` function is pretty simple and straightforward: - -1. The function first splits the received `String` into multiple words using regex `\\.`. -2. For each `word`, the function increments the corresponding `counter` by 1 (via `incrCounter(key, amount)`). - - - - -```python - -from pulsar import Function - -class WordCount(Function): - def process(self, item, context): - for word in item.split(): - context.incr_counter(word, 1) - -``` - -The logic of this `WordCount` function is pretty simple and straightforward: - -1. The function first splits the received string into multiple words on space. -2. For each `word`, the function increments the corresponding `counter` by 1 (via `incr_counter(key, amount)`). - - - - -```` diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/functions-metrics.md b/site2/website/versioned_docs/version-2.8.3-deprecated/functions-metrics.md deleted file mode 100644 index 8add6693160929..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/functions-metrics.md +++ /dev/null @@ -1,7 +0,0 @@ ---- -id: functions-metrics -title: Metrics for Pulsar Functions -sidebar_label: "Metrics" -original_id: functions-metrics ---- - diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/functions-overview.md b/site2/website/versioned_docs/version-2.8.3-deprecated/functions-overview.md deleted file mode 100644 index 816d301e0fd0e7..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/functions-overview.md +++ /dev/null @@ -1,209 +0,0 @@ ---- -id: functions-overview -title: Pulsar Functions overview -sidebar_label: "Overview" -original_id: functions-overview ---- - -**Pulsar Functions** are lightweight compute processes that - -* consume messages from one or more Pulsar topics, -* apply a user-supplied processing logic to each message, -* publish the results of the computation to another topic. - - -## Goals -With Pulsar Functions, you can create complex processing logic without deploying a separate neighboring system (such as [Apache Storm](http://storm.apache.org/), [Apache Heron](https://heron.incubator.apache.org/), [Apache Flink](https://flink.apache.org/)). Pulsar Functions are computing infrastructure of Pulsar messaging system. 
The core goal is tied to a series of other goals:
-
-* Developer productivity (language-native vs Pulsar Functions SDK functions)
-* Easy troubleshooting
-* Operational simplicity (no need for an external processing system)
-
-## Inspirations
-Pulsar Functions are inspired by (and take cues from) several systems and paradigms:
-
-* Stream processing engines such as [Apache Storm](http://storm.apache.org/), [Apache Heron](https://apache.github.io/incubator-heron), and [Apache Flink](https://flink.apache.org)
-* "Serverless" and "Function as a Service" (FaaS) cloud platforms like [Amazon Web Services Lambda](https://aws.amazon.com/lambda/), [Google Cloud Functions](https://cloud.google.com/functions/), and [Azure Cloud Functions](https://azure.microsoft.com/en-us/services/functions/)
-
-Pulsar Functions can be described as
-
-* [Lambda](https://aws.amazon.com/lambda/)-style functions that are
-* specifically designed to use Pulsar as a message bus.
-
-## Programming model
-Pulsar Functions provide a wide range of functionality, but the core programming model is simple. Functions receive messages from one or more **input [topics](reference-terminology.md#topic)**. Each time a message is received, the function completes the following tasks:
-
- * Apply some processing logic to the input and write output to:
-   * An **output topic** in Pulsar
-   * [Apache BookKeeper](functions-develop.md#state-storage)
- * Write logs to a **log topic** (potentially for debugging purposes)
- * Increment a [counter](#word-count-example)
-
-![Pulsar Functions core programming model](/assets/pulsar-functions-overview.png)
-
-You can use Pulsar Functions to set up the following processing chain:
-
-* A Python function listens on the `raw-sentences` topic, "sanitizes" incoming strings (removing extraneous whitespace and converting all characters to lowercase), and then publishes the results to a `sanitized-sentences` topic.
-* A Java function listens on the `sanitized-sentences` topic, counts the number of times each word appears within a specified time window, and publishes the results to a `results` topic.
-* Finally, a Python function listens on the `results` topic and writes the results to a MySQL table.
-
-
-### Word count example
-
-If you implement the classic word count example using Pulsar Functions, it looks something like this:
-
-![Pulsar Functions word count example](/assets/pulsar-functions-word-count.png)
-
-To write the function in Java with the [Pulsar Functions SDK for Java](functions-develop.md#available-apis), you can write it as follows.
-
-```java
-
-package org.example.functions;
-
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-
-import java.util.Arrays;
-
-public class WordCountFunction implements Function<String, Void> {
-    // This function is invoked every time a message is published to the input topic
-    @Override
-    public Void process(String input, Context context) throws Exception {
-        Arrays.asList(input.split(" ")).forEach(word -> {
-            String counterKey = word.toLowerCase();
-            context.incrCounter(counterKey, 1);
-        });
-        return null;
-    }
-}
-
-```
-
-Bundle and build the JAR file to be deployed, and then deploy it in your Pulsar cluster using the [command line](functions-deploy.md#command-line-interface) as follows.
- -```bash - -$ bin/pulsar-admin functions create \ - --jar target/my-jar-with-dependencies.jar \ - --classname org.example.functions.WordCountFunction \ - --tenant public \ - --namespace default \ - --name word-count \ - --inputs persistent://public/default/sentences \ - --output persistent://public/default/count - -``` - -### Content-based routing example - -Pulsar Functions are used in many cases. The following is a sophisticated example that involves content-based routing. - -For example, a function takes items (strings) as input and publishes them to either a `fruits` or `vegetables` topic, depending on the item. Or, if an item is neither fruit nor vegetable, a warning is logged to a [log topic](functions-develop.md#logger). The following is a visual representation. - -![Pulsar Functions routing example](/assets/pulsar-functions-routing-example.png) - -If you implement this routing functionality in Python, it looks something like this: - -```python - -from pulsar import Function - -class RoutingFunction(Function): - def __init__(self): - self.fruits_topic = "persistent://public/default/fruits" - self.vegetables_topic = "persistent://public/default/vegetables" - - @staticmethod - def is_fruit(item): - return item in [b"apple", b"orange", b"pear", b"other fruits..."] - - @staticmethod - def is_vegetable(item): - return item in [b"carrot", b"lettuce", b"radish", b"other vegetables..."] - - def process(self, item, context): - if self.is_fruit(item): - context.publish(self.fruits_topic, item) - elif self.is_vegetable(item): - context.publish(self.vegetables_topic, item) - else: - warning = "The item {0} is neither a fruit nor a vegetable".format(item) - context.get_logger().warn(warning) - -``` - -If this code is stored in `~/router.py`, then you can deploy it in your Pulsar cluster using the [command line](functions-deploy.md#command-line-interface) as follows. - -```bash - -$ bin/pulsar-admin functions create \ - --py ~/router.py \ - --classname router.RoutingFunction \ - --tenant public \ - --namespace default \ - --name route-fruit-veg \ - --inputs persistent://public/default/basket-items - -``` - -### Functions, messages and message types -Pulsar Functions take byte arrays as inputs and spit out byte arrays as output. However in languages that support typed interfaces(Java), you can write typed Functions, and bind messages to types in the following ways. -* [Schema Registry](functions-develop.md#schema-registry) -* [SerDe](functions-develop.md#serde) - - -## Fully Qualified Function Name (FQFN) -Each Pulsar Function has a **Fully Qualified Function Name** (FQFN) that consists of three elements: the function tenant, namespace, and function name. FQFN looks like this: - -```http - -tenant/namespace/name - -``` - -FQFNs enable you to create multiple functions with the same name provided that they are in different namespaces. - -## Supported languages -Currently, you can write Pulsar Functions in Java, Python, and Go. For details, refer to [Develop Pulsar Functions](functions-develop.md). - -## Processing guarantees -Pulsar Functions provide three different messaging semantics that you can apply to any function. - -Delivery semantics | Description -:------------------|:------- -**At-most-once** delivery | Each message sent to the function is likely to be processed, or not to be processed (hence "at most"). -**At-least-once** delivery | Each message sent to the function can be processed more than once (hence the "at least"). 
-**Effectively-once** delivery | Each message sent to the function will have one output associated with it. - - -### Apply processing guarantees to a function -You can set the processing guarantees for a Pulsar Function when you create the Function. The following [`pulsar-function create`](reference-pulsar-admin.md#create-1) command creates a function with effectively-once guarantees applied. - -```bash - -$ bin/pulsar-admin functions create \ - --name my-effectively-once-function \ - --processing-guarantees EFFECTIVELY_ONCE \ - # Other function configs - -``` - -The available options for `--processing-guarantees` are: - -* `ATMOST_ONCE` -* `ATLEAST_ONCE` -* `EFFECTIVELY_ONCE` - -> By default, Pulsar Functions provide at-least-once delivery guarantees. So if you create a function without supplying a value for the `--processingGuarantees` flag, the function provides at-least-once guarantees. - -### Update the processing guarantees of a function -You can change the processing guarantees applied to a function using the [`update`](reference-pulsar-admin.md#update-1) command. The following is an example. - -```bash - -$ bin/pulsar-admin functions update \ - --processing-guarantees ATMOST_ONCE \ - # Other function configs - -``` - diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/functions-package.md b/site2/website/versioned_docs/version-2.8.3-deprecated/functions-package.md deleted file mode 100644 index a995d5c1588771..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/functions-package.md +++ /dev/null @@ -1,493 +0,0 @@ ---- -id: functions-package -title: Package Pulsar Functions -sidebar_label: "How-to: Package" -original_id: functions-package ---- - -You can package Pulsar functions in Java, Python, and Go. Packaging the window function in Java is the same as [packaging a function in Java](#java). - -:::note - -Currently, the window function is not available in Python and Go. - -::: - -## Prerequisite - -Before running a Pulsar function, you need to start Pulsar. You can [run a standalone Pulsar in Docker](getting-started-docker.md), or [run Pulsar in Kubernetes](getting-started-helm.md). - -To check whether the Docker image starts, you can use the `docker ps` command. - -## Java - -To package a function in Java, complete the following steps. - -1. Create a new maven project with a pom file. In the following code sample, the value of `mainClass` is your package name. - - ```Java - - - - 4.0.0 - - java-function - java-function - 1.0-SNAPSHOT - - - - org.apache.pulsar - pulsar-functions-api - 2.6.0 - - - - - - - maven-assembly-plugin - - false - - jar-with-dependencies - - - - org.example.test.ExclamationFunction - - - - - - make-assembly - package - - assembly - - - - - - org.apache.maven.plugins - maven-compiler-plugin - - 8 - 8 - - - - - - - - ``` - -2. Write a Java function. - - ``` - - package org.example.test; - - import java.util.function.Function; - - public class ExclamationFunction implements Function { - @Override - public String apply(String s) { - return "This is my function!"; - } - } - - ``` - - For the imported package, you can use one of the following interfaces: - - Function interface provided by Java 8: `java.util.function.Function` - - Pulsar Function interface: `org.apache.pulsar.functions.api.Function` - - The main difference between the two interfaces is that the `org.apache.pulsar.functions.api.Function` interface provides the context interface. 
When you write a function and want to interact with it, you can use context to obtain a wide variety of information and functionality for Pulsar Functions. - - The following example uses `org.apache.pulsar.functions.api.Function` interface with context. - - ``` - - package org.example.functions; - import org.apache.pulsar.functions.api.Context; - import org.apache.pulsar.functions.api.Function; - - import java.util.Arrays; - public class WordCountFunction implements Function { - // This function is invoked every time a message is published to the input topic - @Override - public Void process(String input, Context context) throws Exception { - Arrays.asList(input.split(" ")).forEach(word -> { - String counterKey = word.toLowerCase(); - context.incrCounter(counterKey, 1); - }); - return null; - } - } - - ``` - -3. Package the Java function. - - ```bash - - mvn package - - ``` - - After the Java function is packaged, a `target` directory is created automatically. Open the `target` directory to check if there is a JAR package similar to `java-function-1.0-SNAPSHOT.jar`. - - -4. Run the Java function. - - (1) Copy the packaged jar file to the Pulsar image. - - ```bash - - docker exec -it [CONTAINER ID] /bin/bash - docker cp CONTAINER ID:/pulsar - - ``` - - (2) Run the Java function using the following command. - - ```bash - - ./bin/pulsar-admin functions localrun \ - --classname org.example.test.ExclamationFunction \ - --jar java-function-1.0-SNAPSHOT.jar \ - --inputs persistent://public/default/my-topic-1 \ - --output persistent://public/default/test-1 \ - --tenant public \ - --namespace default \ - --name JavaFunction - - ``` - - The following log indicates that the Java function starts successfully. - - ```text - - ... - 07:55:03.724 [main] INFO org.apache.pulsar.functions.runtime.ProcessRuntime - Started process successfully - ... - - ``` - -## Python - -Python Function supports the following three formats: - -- One python file -- ZIP file -- PIP - -### One python file - -To package a function with **one python file** in Python, complete the following steps. - -1. Write a Python function. - - ``` - - from pulsar import Function // import the Function module from Pulsar - - # The classic ExclamationFunction that appends an exclamation at the end - # of the input - class ExclamationFunction(Function): - def __init__(self): - pass - - def process(self, input, context): - return input + '!' - - ``` - - In this example, when you write a Python function, you need to inherit the Function class and implement the `process()` method. - - `process()` mainly has two parameters: - - - `input` represents your input. - - - `context` represents an interface exposed by the Pulsar Function. You can get the attributes in the Python function based on the provided context object. - -2. Install a Python client. - - The implementation of a Python function depends on the Python client, so before deploying a Python function, you need to install the corresponding version of the Python client. - - ```bash - - pip install pulsar-client==2.6.0 - - ``` - -3. Run the Python Function. - - (1) Copy the Python function file to the Pulsar image. - - ```bash - - docker exec -it [CONTAINER ID] /bin/bash - docker cp CONTAINER ID:/pulsar - - ``` - - (2) Run the Python function using the following command. - - ```bash - - ./bin/pulsar-admin functions localrun \ - --classname . 
\ - --py \ - --inputs persistent://public/default/my-topic-1 \ - --output persistent://public/default/test-1 \ - --tenant public \ - --namespace default \ - --name PythonFunction - - ``` - - The following log indicates that the Python function starts successfully. - - ```text - - ... - 07:55:03.724 [main] INFO org.apache.pulsar.functions.runtime.ProcessRuntime - Started process successfully - ... - - ``` - -### ZIP file - -To package a function with the **ZIP file** in Python, complete the following steps. - -1. Prepare the ZIP file. - - The following is required when packaging the ZIP file of the Python Function. - - ```text - - Assuming the zip file is named as `func.zip`, unzip the `func.zip` folder: - "func/src" - "func/requirements.txt" - "func/deps" - - ``` - - Take [exclamation.zip](https://github.com/apache/pulsar/tree/master/tests/docker-images/latest-version-image/python-examples) as an example. The internal structure of the example is as follows. - - ```text - - . - ├── deps - │   └── sh-1.12.14-py2.py3-none-any.whl - └── src - └── exclamation.py - - ``` - -2. Run the Python Function. - - (1) Copy the ZIP file to the Pulsar image. - - ```bash - - docker exec -it [CONTAINER ID] /bin/bash - docker cp CONTAINER ID:/pulsar - - ``` - - (2) Run the Python function using the following command. - - ```bash - - ./bin/pulsar-admin functions localrun \ - --classname exclamation \ - --py \ - --inputs persistent://public/default/in-topic \ - --output persistent://public/default/out-topic \ - --tenant public \ - --namespace default \ - --name PythonFunction - - ``` - - The following log indicates that the Python function starts successfully. - - ```text - - ... - 07:55:03.724 [main] INFO org.apache.pulsar.functions.runtime.ProcessRuntime - Started process successfully - ... - - ``` - -### PIP - -The PIP method is only supported in Kubernetes runtime. To package a function with **PIP** in Python, complete the following steps. - -1. Configure the `functions_worker.yml` file. - - ```text - - #### Kubernetes Runtime #### - installUserCodeDependencies: true - - ``` - -2. Write your Python Function. - - ``` - - from pulsar import Function - import js2xml - - # The classic ExclamationFunction that appends an exclamation at the end - # of the input - class ExclamationFunction(Function): - def __init__(self): - pass - - def process(self, input, context): - // add your logic - return input + '!' - - ``` - - You can introduce additional dependencies. When Python Function detects that the file currently used is `whl` and the `installUserCodeDependencies` parameter is specified, the system uses the `pip install` command to install the dependencies required in Python Function. - -3. Generate the `whl` file. - - ```shell script - - $ cd $PULSAR_HOME/pulsar-functions/scripts/python - $ chmod +x generate.sh - $ ./generate.sh - # e.g: ./generate.sh /path/to/python /path/to/python/output 1.0.0 - - ``` - - The output is written in `/path/to/python/output`: - - ```text - - -rw-r--r-- 1 root staff 1.8K 8 27 14:29 pulsarfunction-1.0.0-py2-none-any.whl - -rw-r--r-- 1 root staff 1.4K 8 27 14:29 pulsarfunction-1.0.0.tar.gz - -rw-r--r-- 1 root staff 0B 8 27 14:29 pulsarfunction.whl - - ``` - -## Go - -To package a function in Go, complete the following steps. - -1. Write a Go function. - - Currently, Go function can be **only** implemented using SDK and the interface of the function is exposed in the form of SDK. Before using the Go function, you need to import "github.com/apache/pulsar/pulsar-function-go/pf". 
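-
-   If your project uses Go modules, the dependency can be fetched first. This is a hedged example; the module lives in the main Pulsar repository.
-
-   ```bash
-
-   go get github.com/apache/pulsar/pulsar-function-go
-
-   ```
-
-   A minimal function looks like the following.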
## Go

To package a function in Go, complete the following steps.

1. Write a Go function.

   Currently, Go functions can **only** be implemented using the SDK, and the interface of the function is exposed in the form of the SDK. Before using a Go function, you need to import "github.com/apache/pulsar/pulsar-function-go/pf".

   ```go
   package main

   import (
       "context"
       "fmt"

       "github.com/apache/pulsar/pulsar-function-go/pf"
   )

   func HandleRequest(ctx context.Context, input []byte) error {
       fmt.Println(string(input) + "!")
       return nil
   }

   func main() {
       pf.Start(HandleRequest)
   }
   ```

   You can use the context to access runtime information about the Go function.

   ```go
   if fc, ok := pf.FromContext(ctx); ok {
       fmt.Printf("function ID is:%s, ", fc.GetFuncID())
       fmt.Printf("function version is:%s\n", fc.GetFuncVersion())
   }
   ```

   When writing a Go function, remember the following:

   - In `main()`, you **only** need to register the function name to `Start()`. **Only** one function name is received in `Start()`.
   - Go functions use Go reflection, based on the received function name, to verify whether the parameter list and returned value list are correct. The parameter list and returned value list **must be** one of the following sample functions:

   ```go
   func ()
   func () error
   func (input) error
   func () (output, error)
   func (input) (output, error)
   func (context.Context) error
   func (context.Context, input) error
   func (context.Context) (output, error)
   func (context.Context, input) (output, error)
   ```

2. Build the Go function.

   ```bash
   go build <your Go function file name>.go
   ```

3. Run the Go Function.

   (1) Copy the Go function file to the Pulsar image.

   ```bash
   docker exec -it [CONTAINER ID] /bin/bash
   docker cp <path of your Go function> [CONTAINER ID]:/pulsar
   ```

   (2) Run the Go function with the following command.

   ```bash
   ./bin/pulsar-admin functions localrun \
     --go [your go function path] \
     --inputs [input topics] \
     --output [output topic] \
     --tenant [default:public] \
     --namespace [default:default] \
     --name [custom unique go function name]
   ```

   The following log indicates that the Go function starts successfully.

   ```text
   ...
   07:55:03.724 [main] INFO  org.apache.pulsar.functions.runtime.ProcessRuntime - Started process successfully
   ...
   ```

## Start Functions in cluster mode

If you want to start a function in cluster mode, replace `localrun` with `create` in the commands above, as shown in the example below. The following log indicates that your function starts successfully.

```text
"Created successfully"
```

For information about parameters on `--classname`, `--jar`, `--py`, `--go`, `--inputs`, run the command `./bin/pulsar-admin functions` or see [here](reference-pulsar-admin.md#functions).
\ No newline at end of file
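For example, the Java function from the beginning of this page could be submitted in cluster mode as follows; this sketch reuses the jar, class, and topic names from the Java example above:

```bash
./bin/pulsar-admin functions create \
  --classname org.example.functions.WordCountFunction \
  --jar java-function-1.0-SNAPSHOT.jar \
  --inputs persistent://public/default/my-topic-1 \
  --output persistent://public/default/test-1 \
  --tenant public \
  --namespace default \
  --name JavaFunction
```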
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/functions-runtime.md b/site2/website/versioned_docs/version-2.8.3-deprecated/functions-runtime.md
deleted file mode 100644
index ab7d1c05db421e..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/functions-runtime.md
+++ /dev/null
@@ -1,399 +0,0 @@
---
id: functions-runtime
title: Configure Functions runtime
sidebar_label: "Setup: Configure Functions runtime"
original_id: functions-runtime
---

You can use the following methods to run functions.

- *Thread*: Invoke functions in threads in the functions worker.
- *Process*: Invoke functions in processes forked by the functions worker.
- *Kubernetes*: Submit functions as Kubernetes StatefulSets by the functions worker.

:::note

Pulsar supports adding labels to the Kubernetes StatefulSets and services while launching functions, which facilitates selecting the target Kubernetes objects.

:::

The differences between the thread and process modes are as follows:

- Thread mode: when a function runs in thread mode, it runs in the same Java virtual machine (JVM) as the functions worker.
- Process mode: when a function runs in process mode, it runs on the same machine that the functions worker runs on.

## Configure thread runtime

It is easy to configure the *Thread* runtime. In most cases, you do not need to configure anything. You can customize the thread group name with the following settings:

```yaml
functionRuntimeFactoryClassName: org.apache.pulsar.functions.runtime.thread.ThreadRuntimeFactory
functionRuntimeFactoryConfigs:
  threadGroupName: "Your Function Container Group"
```

The *Thread* runtime is only supported in Java functions.

## Configure process runtime

When you enable the *Process* runtime, you do not need to configure anything.

```yaml
functionRuntimeFactoryClassName: org.apache.pulsar.functions.runtime.process.ProcessRuntimeFactory
functionRuntimeFactoryConfigs:
  # the directory for storing the function logs
  logDirectory:
  # change the jar location only when you put the java instance jar in a different location
  javaInstanceJarLocation:
  # change the python instance location only when you put the python instance jar in a different location
  pythonInstanceLocation:
  # change the extra dependencies location:
  extraFunctionDependenciesDir:
```

The *Process* runtime is supported in Java, Python, and Go functions.

## Configure Kubernetes runtime

The Kubernetes runtime works by having the functions worker generate Kubernetes manifests and apply them. If you run the functions worker on Kubernetes, it can use the `serviceAccount` associated with the pod that the functions worker is running in. Otherwise, you can configure it to communicate with a Kubernetes cluster.

The manifests, generated by the functions worker, include a `StatefulSet`, a `Service` (used to communicate with the pods), and a `Secret` for auth credentials (when applicable). The `StatefulSet` manifest (by default) has a single pod, with the number of replicas determined by the "parallelism" of the function. On pod boot, the pod downloads the function payload (via the functions worker REST API). The pod's container image is configurable, but must have the functions runtime.

The Kubernetes runtime supports secrets, so you can create a Kubernetes secret and expose it as an environment variable in the pod. The Kubernetes runtime is extensible: you can implement classes to customize how Kubernetes manifests are generated, how auth data is passed to pods, and how secrets are integrated.

:::tip

For the rules of translating Pulsar object names into Kubernetes resource labels, see [here](admin-api-overview.md#how-to-define-pulsar-resource-names-when-running-pulsar-in-kubernetes).

:::

### Basic configuration

It is easy to configure the Kubernetes runtime. You can just uncomment the settings of `kubernetesContainerFactory` in the `functions_worker.yml` file. The following is an example.

```yaml
functionRuntimeFactoryClassName: org.apache.pulsar.functions.runtime.kubernetes.KubernetesRuntimeFactory
functionRuntimeFactoryConfigs:
  # uri to kubernetes cluster, leave it empty to use the kubernetes settings of the functions worker
  k8Uri:
  # the kubernetes namespace to run the function instances. it is `default`, if this setting is left empty
  jobNamespace:
  # The Kubernetes pod name to run the function instances. It is set to
  # `pf-<tenant>-<namespace>-<function name>-<random uuid>` if this setting is left empty
  jobName:
  # the docker image to run function instance. by default it is `apachepulsar/pulsar`
  pulsarDockerImageName:
  # the docker image to run function instance according to different configurations provided by users.
  # By default it is `apachepulsar/pulsar`.
  # e.g:
  # functionDockerImages:
  #   JAVA: JAVA_IMAGE_NAME
  #   PYTHON: PYTHON_IMAGE_NAME
  #   GO: GO_IMAGE_NAME
  functionDockerImages:
  # The image pull policy for the image used to run function instances. By default it is `IfNotPresent`
  imagePullPolicy: IfNotPresent
  # the root directory of pulsar home directory in `pulsarDockerImageName`. by default it is `/pulsar`.
  # if you are using your own built image in `pulsarDockerImageName`, you need to set this setting accordingly
  pulsarRootDir:
  # The config admin CLI allows users to customize the configuration of the admin cli tool, such as:
  # `/bin/pulsar-admin` and `/bin/pulsarctl`. By default it is `/bin/pulsar-admin`. If you want to use `pulsarctl`,
  # you need to set this setting accordingly
  configAdminCLI:
  # this setting only takes effect if `k8Uri` is set to null. if your functions worker is running as a K8s pod,
  # setting this to true lets the functions worker submit functions to the same K8s cluster it is
  # running in. set this to false if your functions worker is not running as a K8s pod.
  submittingInsidePod: false
  # setting the pulsar service url that pulsar functions should use to connect to pulsar
  # if it is not set, it will use the pulsar service url configured in the worker service
  pulsarServiceUrl:
  # setting the pulsar admin url that pulsar functions should use to connect to pulsar
  # if it is not set, it will use the pulsar admin url configured in the worker service
  pulsarAdminUrl:
  # The flag indicates to install user code dependencies. (applied to python package)
  installUserCodeDependencies:
  # The repository that pulsar functions use to download python dependencies
  pythonDependencyRepository:
  # The repository that pulsar functions use to download extra python dependencies
  pythonExtraDependencyRepository:
  # the custom labels that the functions worker uses to select the nodes for pods
  customLabels:
  # The expected metrics collection interval, in seconds
  expectedMetricsCollectionInterval: 30
  # if defined, the Kubernetes runtime periodically checks this configMap
  # and applies any changes to the Kubernetes-specific settings
  changeConfigMap:
  # The namespace for storing the change config map
  changeConfigMapNamespace:
  # The ratio of CPU request to CPU limit to be set for a function/source/sink.
  # The formula for the cpu request is cpuRequest = userRequestCpu / cpuOverCommitRatio
  cpuOverCommitRatio: 1.0
  # The ratio of memory request to memory limit to be set for a function/source/sink.
  # The formula for the memory request is memoryRequest = userRequestMemory / memoryOverCommitRatio
  memoryOverCommitRatio: 1.0
  # The port inside the function pod which is used by the worker to communicate with the pod
  grpcPort: 9093
  # The port inside the function pod on which prometheus metrics are exposed
  metricsPort: 9094
  # The directory inside the function pod where nar packages will be extracted
  narExtractionDirectory:
  # The classpath where function instance files are stored
  functionInstanceClassPath:
  # the directory for dropping extra function dependencies
  # if it is not an absolute path, it is relative to `pulsarRootDir`
  extraFunctionDependenciesDir:
  # Additional memory padding added on top of the memory requested by the function, on a per-instance basis
  percentMemoryPadding: 10
```

If you run the functions worker embedded in a broker on Kubernetes, you can use the default settings.

### Run standalone functions worker on Kubernetes

If you run the functions worker standalone (that is, not embedded) on Kubernetes, you need to configure `pulsarServiceUrl` to be the URL of the broker and `pulsarAdminUrl` to be the URL of the functions worker.

For example, both Pulsar brokers and functions workers run in the `pulsar` K8S namespace. The brokers have a service called `broker` and the functions worker has a service called `func-worker`. The settings are as follows:

```yaml
pulsarServiceUrl: pulsar://broker.pulsar:6650 # or pulsar+ssl://broker.pulsar:6651 if using TLS
pulsarAdminUrl: http://func-worker.pulsar:8080 # or https://func-worker:8443 if using TLS
```

### Run RBAC in Kubernetes clusters

If you run RBAC in your Kubernetes cluster, make sure that the service account you use for running functions workers (or brokers, if functions workers run along with brokers) has permissions on the following Kubernetes APIs.

- services
- configmaps
- pods
- apps.statefulsets

The following is sufficient:

```yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: functions-worker
rules:
- apiGroups: [""]
  resources:
  - services
  - configmaps
  - pods
  verbs:
  - '*'
- apiGroups:
  - apps
  resources:
  - statefulsets
  verbs:
  - '*'
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: functions-worker
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: functions-worker
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: functions-worker
subjects:
- kind: ServiceAccount
  name: functions-worker
```
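Before starting the worker, you can dry-run the permission checks with `kubectl`. This is a sketch, assuming the `pulsar` namespace and the `functions-worker` service account from the example above:

```bash
# Can the functions-worker service account create StatefulSets?
kubectl auth can-i create statefulsets.apps \
  --as=system:serviceaccount:pulsar:functions-worker -n pulsar

# Can it read configmaps, which the worker fetches at runtime?
kubectl auth can-i get configmaps \
  --as=system:serviceaccount:pulsar:functions-worker -n pulsar
```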
If the service account is not properly configured, an error message similar to this is displayed:

```bash
22:04:27.696 [Timer-0] ERROR org.apache.pulsar.functions.runtime.KubernetesRuntimeFactory - Error while trying to fetch configmap example-pulsar-4qvmb5gur3c6fc9dih0x1xn8b-function-worker-config at namespace pulsar
io.kubernetes.client.ApiException: Forbidden
 at io.kubernetes.client.ApiClient.handleResponse(ApiClient.java:882) ~[io.kubernetes-client-java-2.0.0.jar:?]
 at io.kubernetes.client.ApiClient.execute(ApiClient.java:798) ~[io.kubernetes-client-java-2.0.0.jar:?]
 at io.kubernetes.client.apis.CoreV1Api.readNamespacedConfigMapWithHttpInfo(CoreV1Api.java:23673) ~[io.kubernetes-client-java-api-2.0.0.jar:?]
 at io.kubernetes.client.apis.CoreV1Api.readNamespacedConfigMap(CoreV1Api.java:23655) ~[io.kubernetes-client-java-api-2.0.0.jar:?]
 at org.apache.pulsar.functions.runtime.KubernetesRuntimeFactory.fetchConfigMap(KubernetesRuntimeFactory.java:284) [org.apache.pulsar-pulsar-functions-runtime-2.4.0-42c3bf949.jar:2.4.0-42c3bf949]
 at org.apache.pulsar.functions.runtime.KubernetesRuntimeFactory$1.run(KubernetesRuntimeFactory.java:275) [org.apache.pulsar-pulsar-functions-runtime-2.4.0-42c3bf949.jar:2.4.0-42c3bf949]
 at java.util.TimerThread.mainLoop(Timer.java:555) [?:1.8.0_212]
 at java.util.TimerThread.run(Timer.java:505) [?:1.8.0_212]
```

### Integrate Kubernetes secrets

In order to safely distribute secrets, Pulsar Functions can reference Kubernetes secrets. To enable this, set the `secretsProviderConfiguratorClassName` to `org.apache.pulsar.functions.secretsproviderconfigurator.KubernetesSecretsProviderConfigurator`.

You can create a secret in the namespace where your functions are deployed. For example, you deploy functions to the `pulsar-func` Kubernetes namespace, and you have a secret named `database-creds` with a field name `password`, which you want to mount in the pod as an environment variable called `DATABASE_PASSWORD`. The following functions configuration enables you to reference that secret and mount the value as an environment variable in the pod.

```yaml
tenant: "mytenant"
namespace: "mynamespace"
name: "myfunction"
topicName: "persistent://mytenant/mynamespace/myfuncinput"
className: "com.company.pulsar.myfunction"

secrets:
  # the secret will be mounted from the `password` field in the `database-creds` secret as an env var called `DATABASE_PASSWORD`
  DATABASE_PASSWORD:
    path: "database-creds"
    key: "password"
```
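Inside the function, the mounted secret can then be read through the context. The following is a minimal Java sketch under the configuration above; `getSecret` returns the secret's value as a string:

```java
import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;

public class MyFunction implements Function<String, String> {
    @Override
    public String process(String input, Context context) {
        // Reads the value mounted from the `database-creds` secret.
        String password = context.getSecret("DATABASE_PASSWORD");
        // ... use the password to talk to the database ...
        return input;
    }
}
```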
### Enable token authentication

When you enable authentication for your Pulsar cluster, you need a mechanism for the pod running your function to authenticate with the broker.

The `org.apache.pulsar.functions.auth.KubernetesFunctionAuthProvider` interface provides support for any authentication mechanism. The `functionAuthProviderClassName` in `functions_worker.yml` is used to specify your path to this implementation.

Pulsar includes an implementation of this interface for token authentication, and distributes the certificate authority via the same implementation. The configuration looks like the following:

```yaml
functionAuthProviderClassName: org.apache.pulsar.functions.auth.KubernetesSecretsTokenAuthProvider
```

For token authentication, the functions worker captures the token that is used to deploy (or update) the function. The token is saved as a secret and mounted into the pod.

For custom authentication or TLS, you need to implement this interface or use an alternative mechanism to provide authentication. If you use token authentication and TLS encryption to secure the communication with the cluster, Pulsar passes your certificate authority (CA) to the client, so the client obtains what it needs to authenticate the cluster, and trusts the cluster with your signed certificate.

:::note

If the tokens you use to deploy functions have an expiry time, the copies saved for those functions expire as well, after which the functions can no longer authenticate with the broker.

:::

### Run clusters with authentication

When you run a functions worker in a standalone process (that is, not embedded in the broker) in a cluster with authentication, you must configure your functions worker to interact with the broker and to authenticate incoming requests. So you need to configure the properties that the broker requires for authentication or authorization.

For example, if you use token authentication, you need to configure the following properties in the `functions_worker.yml` file.

```yaml
clientAuthenticationPlugin: org.apache.pulsar.client.impl.auth.AuthenticationToken
clientAuthenticationParameters: file:///etc/pulsar/token/admin-token.txt
configurationStoreServers: zookeeper-cluster:2181 # auth requires a connection to zookeeper
authenticationProviders:
 - "org.apache.pulsar.broker.authentication.AuthenticationProviderToken"
authorizationEnabled: true
authenticationEnabled: true
superUserRoles:
  - superuser
  - proxy
properties:
  tokenSecretKey: file:///etc/pulsar/jwt/secret # if using a secret token, the key file must be DER-encoded
  tokenPublicKey: file:///etc/pulsar/jwt/public.key # if using public/private key tokens, the key file must be DER-encoded
```

:::note

You must configure both server-side authentication and authorization (so that the functions worker can authenticate incoming requests) and client-side authentication (so that the functions worker can authenticate itself to the broker).

:::

### Customize Kubernetes runtime

The Kubernetes integration enables you to implement a class and customize how manifests are generated. You can configure it by setting `runtimeCustomizerClassName` in the `functions_worker.yml` file to the fully qualified class name. You must implement the `org.apache.pulsar.functions.runtime.kubernetes.KubernetesManifestCustomizer` interface.

The functions (and sinks/sources) API provides a flag, `customRuntimeOptions`, which is passed to this interface.

To initialize the `KubernetesManifestCustomizer`, you can provide `runtimeCustomizerConfig` in the `functions_worker.yml` file. `runtimeCustomizerConfig` is passed to the `public void initialize(Map<String, Object> config)` function of the interface. `runtimeCustomizerConfig` is different from `customRuntimeOptions` in that `runtimeCustomizerConfig` is the same across all functions. If you provide both `runtimeCustomizerConfig` and `customRuntimeOptions`, you need to decide how to manage these two configurations in your implementation of `KubernetesManifestCustomizer`.

Pulsar includes a built-in implementation. To use the basic implementation, set `runtimeCustomizerClassName` to `org.apache.pulsar.functions.runtime.kubernetes.BasicKubernetesManifestCustomizer`. The built-in implementation, initialized with `runtimeCustomizerConfig`, enables you to pass a JSON document as `customRuntimeOptions` with certain properties to augment, which decides how the manifests are generated. If both `runtimeCustomizerConfig` and `customRuntimeOptions` are provided, `BasicKubernetesManifestCustomizer` uses `customRuntimeOptions` to override the configuration if there are conflicts in these two configurations.
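At deploy time, such a JSON document is supplied on the command line. The following sketch assumes the `--custom-runtime-options` flag of `pulsar-admin functions create`; the label key and namespace name are illustrative:

```bash
./bin/pulsar-admin functions create \
  --jar java-function-1.0-SNAPSHOT.jar \
  --classname org.example.functions.WordCountFunction \
  --inputs persistent://public/default/my-topic-1 \
  --custom-runtime-options '{"jobNamespace": "pulsar-func", "extraLabels": {"team": "data"}}'
```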
Below is an example of `customRuntimeOptions`.

```json
{
  "jobName": "jobname", // the k8s pod name to run this function instance
  "jobNamespace": "namespace", // the k8s namespace to run this function in
  "extraLabels": { // extra labels to attach to the statefulSet, service, and pods
    "extraLabel": "value"
  },
  "extraAnnotations": { // extra annotations to attach to the statefulSet, service, and pods
    "extraAnnotation": "value"
  },
  "nodeSelectorLabels": { // node selector labels to add on to the pod spec
    "customLabel": "value"
  },
  "tolerations": [ // tolerations to add to the pod spec
    {
      "key": "custom-key",
      "value": "value",
      "effect": "NoSchedule"
    }
  ],
  "resourceRequirements": { // values for cpu and memory should be defined as described here: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container
    "requests": {
      "cpu": 1,
      "memory": "4G"
    },
    "limits": {
      "cpu": 2,
      "memory": "8G"
    }
  }
}
```

## Run clusters with geo-replication

If you run multiple clusters tied together with geo-replication, it is important to use a different function namespace for each cluster. Otherwise, the functions share a namespace and may be scheduled across clusters.

For example, if you have two clusters, `east-1` and `west-1`, you can configure the functions workers for `east-1` and `west-1` respectively as follows.

```yaml
pulsarFunctionsCluster: east-1
pulsarFunctionsNamespace: public/functions-east-1
```

```yaml
pulsarFunctionsCluster: west-1
pulsarFunctionsNamespace: public/functions-west-1
```

This ensures the two different functions workers use distinct sets of topics for their internal coordination.

## Configure standalone functions worker

When configuring a standalone functions worker, you need to configure the properties that the broker requires, especially if you use TLS, so that the functions worker can communicate with the broker.

You need to configure the following required properties.

```yaml
workerPort: 8080
workerPortTls: 8443 # when using TLS
tlsCertificateFilePath: /etc/pulsar/tls/tls.crt # when using TLS
tlsKeyFilePath: /etc/pulsar/tls/tls.key # when using TLS
tlsTrustCertsFilePath: /etc/pulsar/tls/ca.crt # when using TLS
pulsarServiceUrl: pulsar://broker.pulsar:6650/ # or pulsar+ssl://pulsar-prod-broker.pulsar:6651/ when using TLS
pulsarWebServiceUrl: http://broker.pulsar:8080/ # or https://pulsar-prod-broker.pulsar:8443/ when using TLS
useTls: true # when using TLS, critical!
```

diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/functions-worker.md b/site2/website/versioned_docs/version-2.8.3-deprecated/functions-worker.md
deleted file mode 100644
index 49fc76b30bdaa5..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/functions-worker.md
+++ /dev/null
@@ -1,386 +0,0 @@
---
id: functions-worker
title: Deploy and manage functions worker
sidebar_label: "Setup: Pulsar Functions Worker"
original_id: functions-worker
---

Before using Pulsar Functions, you need to learn how to set up the Pulsar Functions worker and how to [configure Functions runtime](functions-runtime.md).

Pulsar `functions-worker` is a logical component that runs Pulsar Functions in cluster mode. Two options are available, and you can select either based on your requirements.
- [run with brokers](#run-functions-worker-with-brokers)
- [run it separately](#run-functions-worker-separately) on separate machines

:::note

The `--- Service Urls---` lines in the following diagrams represent Pulsar service URLs that Pulsar client and admin use to connect to a Pulsar cluster.

:::

## Run Functions-worker with brokers

The following diagram illustrates the deployment of functions-workers running along with brokers.

![assets/functions-worker-corun.png](/assets/functions-worker-corun.png)

To enable the functions-worker to run as part of a broker, you need to set `functionsWorkerEnabled` to `true` in the `broker.conf` file.

```conf
functionsWorkerEnabled=true
```

If `functionsWorkerEnabled` is set to `true`, the functions-worker is started as part of a broker. You need to configure the `conf/functions_worker.yml` file to customize your functions-worker.

Before you run the functions-worker with a broker, you have to configure the functions-worker, and then start it with the broker.

### Configure Functions-Worker to run with brokers

In this mode, most of the settings are already inherited from your broker configuration (for example, configurationStore settings, authentication settings, and so on) since the `functions-worker` is running as part of the broker.

Pay attention to the following required settings when configuring the functions-worker in this mode.

- `numFunctionPackageReplicas`: The number of replicas to store function packages. The default value is `1`, which is good for standalone deployment. For production deployment, to ensure high availability, set it to be larger than `2`.
- `initializedDlogMetadata`: Whether the distributed log metadata is initialized at runtime. If it is set to `true`, you must ensure that the metadata has been initialized by the `bin/pulsar initialize-cluster-metadata` command.

If authentication is enabled on the BookKeeper cluster, configure the following BookKeeper authentication settings.

- `bookkeeperClientAuthenticationPlugin`: the BookKeeper client authentication plugin name.
- `bookkeeperClientAuthenticationParametersName`: the BookKeeper client authentication plugin parameters name.
- `bookkeeperClientAuthenticationParameters`: the BookKeeper client authentication plugin parameters.

### Configure Stateful-Functions to run with broker

If you want to use Stateful-Functions related functions (for example, the `putState()` and `queryState()` related interfaces), follow the steps below.

1. Enable the **streamStorage** service in BookKeeper.

   Currently, the service uses the NAR package, so you need to set the configuration in `bookkeeper.conf`.

   ```text
   extraServerComponents=org.apache.bookkeeper.stream.server.StreamStorageLifecycleComponent
   ```

   After starting the bookie, use the following methods to check whether the streamStorage service is started correctly.

   Input:

   ```shell
   telnet localhost 4181
   ```

   Output:

   ```text
   Trying 127.0.0.1...
   Connected to localhost.
   Escape character is '^]'.
   ```

2. Turn on this function in `functions_worker.yml`.

   ```text
   stateStorageServiceUrl: bk://<bk-service-url>:4181
   ```

   `<bk-service-url>` is the service URL pointing to the BookKeeper table service.
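Once state storage is enabled, you can inspect a function's state from the CLI. The following is a sketch using `pulsar-admin functions querystate`; the function name and key follow the earlier word-count example and are illustrative:

```bash
./bin/pulsar-admin functions querystate \
  --tenant public \
  --namespace default \
  --name WordCountFunction \
  --key hello
```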
### Start Functions-worker with broker

Once you have configured the `functions_worker.yml` file, you can start or restart your broker.

And then you can use the following command to verify if the `functions-worker` is running well.

```bash
curl <worker-hostname>:8080/admin/v2/worker/cluster
```

After entering the command above, a list of active function workers in the cluster is returned. The output is similar to the following.

```json
[{"workerId":"<worker-id>","workerHostname":"<worker-hostname>","port":8080}]
```

## Run Functions-worker separately

This section illustrates how to run `functions-worker` as a separate process on separate machines.

![assets/functions-worker-separated.png](/assets/functions-worker-separated.png)

:::note

In this mode, make sure `functionsWorkerEnabled` is set to `false`, so you won't start `functions-worker` with brokers by mistake. Also, while accessing the `functions-worker` to manage any of the functions, the `pulsar-admin` CLI tool or any of the clients should use the `workerHostname` and `workerPort` that you set in [Worker parameters](#worker-parameters) to generate an `--admin-url`.

:::

### Configure Functions-worker to run separately

To run the functions-worker separately, you have to configure the following parameters.

#### Worker parameters

- `workerId`: The type is string. It is unique across clusters and is used to identify a worker machine.
- `workerHostname`: The hostname of the worker machine.
- `workerPort`: The port that the worker server listens on. Keep it as default if you don't customize it.
- `workerPortTls`: The TLS port that the worker server listens on. Keep it as default if you don't customize it.

#### Function package parameter

- `numFunctionPackageReplicas`: The number of replicas to store function packages. The default value is `1`.

#### Function metadata parameter

- `pulsarServiceUrl`: The Pulsar service URL for your broker cluster.
- `pulsarWebServiceUrl`: The Pulsar web service URL for your broker cluster.
- `pulsarFunctionsCluster`: Set the value to your Pulsar cluster name (same as the `clusterName` setting in the broker configuration).

If authentication is enabled for your broker cluster, you *should* configure the authentication plugin and parameters for the functions worker to communicate with the brokers.

- `brokerClientAuthenticationEnabled`: Whether to enable the broker client authentication used by function workers to talk to brokers.
- `clientAuthenticationPlugin`: The authentication plugin to be used by the Pulsar client used in the worker service.
- `clientAuthenticationParameters`: The authentication parameters to be used by the Pulsar client used in the worker service.

#### Security settings

If you want to enable security on functions workers, you *should*:
- [Enable TLS transport encryption](#enable-tls-transport-encryption)
- [Enable Authentication Provider](#enable-authentication-provider)
- [Enable Authorization Provider](#enable-authorization-provider)
- [Enable End-to-End Encryption](#enable-end-to-end-encryption)

##### Enable TLS transport encryption

To enable TLS transport encryption, configure the following settings.

```
useTLS: true
pulsarServiceUrl: pulsar+ssl://localhost:6651/
pulsarWebServiceUrl: https://localhost:8443

tlsEnabled: true
tlsCertificateFilePath: /path/to/functions-worker.cert.pem
tlsKeyFilePath: /path/to/functions-worker.key-pk8.pem
tlsTrustCertsFilePath: /path/to/ca.cert.pem

# The path to trusted certificates used by the Pulsar client to authenticate with Pulsar brokers
brokerClientTrustCertsFilePath: /path/to/ca.cert.pem
```

For details on TLS encryption, refer to [Transport Encryption using TLS](security-tls-transport.md).
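Before starting the worker, it can be worth sanity-checking the certificate files with standard `openssl` tooling; this sketch reuses the paths from the example above:

```bash
# Confirm the worker certificate chains back to the trusted CA.
openssl verify -CAfile /path/to/ca.cert.pem /path/to/functions-worker.cert.pem

# Inspect the certificate's subject and validity window.
openssl x509 -in /path/to/functions-worker.cert.pem -noout -subject -dates
```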
##### Enable Authentication Provider

To enable authentication on the functions worker, you need to configure the following settings.

:::note

Substitute the *providers list* with the providers you want to enable.

:::

```
authenticationEnabled: true
authenticationProviders: [ provider1, provider2 ]
```

For the *TLS Authentication* provider, follow the example below to add the necessary settings.
See [TLS Authentication](security-tls-authentication.md) for more details.

```
brokerClientAuthenticationPlugin: org.apache.pulsar.client.impl.auth.AuthenticationTls
brokerClientAuthenticationParameters: tlsCertFile:/path/to/admin.cert.pem,tlsKeyFile:/path/to/admin.key-pk8.pem

authenticationEnabled: true
authenticationProviders: ['org.apache.pulsar.broker.authentication.AuthenticationProviderTls']
```

For the *SASL Authentication* provider, add `saslJaasClientAllowedIds` and `saslJaasBrokerSectionName`
under `properties` if needed.

```
properties:
  saslJaasClientAllowedIds: .*pulsar.*
  saslJaasBrokerSectionName: Broker
```

For the *Token Authentication* provider, add the necessary settings under `properties` if needed.
See [Token Authentication](security-jwt.md) for more details.
Note: key files must be DER-encoded.

```
properties:
  tokenSecretKey: file://my/secret.key
  # If using public/private keys
  # tokenPublicKey: file:///path/to/public.key
```

##### Enable Authorization Provider

To enable authorization on the functions worker, you need to configure `authorizationEnabled`, `authorizationProvider` and `configurationStoreServers`. The authorization provider connects to `configurationStoreServers` to receive namespace policies.

```yaml
authorizationEnabled: true
authorizationProvider: org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider
configurationStoreServers: <configuration-store-servers>
```

You should also configure a list of superuser roles. The superuser roles are able to access any admin API. The following is a configuration example.

```yaml
superUserRoles:
  - role1
  - role2
  - role3
```

##### Enable End-to-End Encryption

You can use the public and private key pair that the application configures to perform encryption. Only the consumers with a valid key can decrypt the encrypted messages.

To enable end-to-end encryption on the functions worker, you can set it by specifying `--producer-config` on the command line. For more information, refer to [here](security-encryption.md).

The relevant configuration of `CryptoConfig` is included in `ProducerConfig`. The configurable fields of `CryptoConfig` are as follows:

```java
public class CryptoConfig {
    private String cryptoKeyReaderClassName;
    private Map<String, Object> cryptoKeyReaderConfig;

    private String[] encryptionKeys;
    private ProducerCryptoFailureAction producerCryptoFailureAction;

    private ConsumerCryptoFailureAction consumerCryptoFailureAction;
}
```

- `producerCryptoFailureAction`: defines the action to take if the producer fails to encrypt data. One of `FAIL` or `SEND`.
- `consumerCryptoFailureAction`: defines the action to take if the consumer fails to decrypt data. One of `FAIL`, `DISCARD`, or `CONSUME`.
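For example, the crypto settings could be supplied as JSON through `--producer-config`. This is a sketch only; the field names mirror the `CryptoConfig` class above, while the key reader class and key name are illustrative:

```bash
./bin/pulsar-admin functions create \
  --name myfunction \
  --inputs persistent://public/default/in-topic \
  --output persistent://public/default/out-topic \
  --producer-config '{"cryptoConfig": {
      "cryptoKeyReaderClassName": "com.example.MyCryptoKeyReader",
      "encryptionKeys": ["myapp-key"],
      "producerCryptoFailureAction": "FAIL"}}'
```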
#### BookKeeper Authentication

If authentication is enabled on the BookKeeper cluster, you need to configure the BookKeeper authentication settings as follows:

- `bookkeeperClientAuthenticationPlugin`: the plugin name of BookKeeper client authentication.
- `bookkeeperClientAuthenticationParametersName`: the plugin parameters name of BookKeeper client authentication.
- `bookkeeperClientAuthenticationParameters`: the plugin parameters of BookKeeper client authentication.

### Start Functions-worker

Once you have finished configuring the `functions_worker.yml` configuration file, you can start a `functions-worker` in the background by using [nohup](https://en.wikipedia.org/wiki/Nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:

```bash
bin/pulsar-daemon start functions-worker
```

You can also start the `functions-worker` in the foreground by using the `pulsar` CLI tool:

```bash
bin/pulsar functions-worker
```

### Configure Proxies for Functions-workers

When you are running `functions-worker` in a separate cluster, the admin REST endpoints are split across two clusters: the `functions`, `function-worker`, `source` and `sink` endpoints are served
by the `functions-worker` cluster, while all the other remaining endpoints are served by the broker cluster.
Hence you need to configure your `pulsar-admin` to use the right service URL accordingly.

In order to address this inconvenience, you can start a proxy cluster for routing the admin REST requests accordingly. Hence you will have one central entry point for your admin service.

If you already have a proxy cluster, continue reading. If you haven't set up a proxy cluster before, you can follow the [instructions](http://pulsar.apache.org/docs/en/administration-proxy/) to
start proxies.

![assets/functions-worker-separated.png](/assets/functions-worker-separated-proxy.png)

To enable routing functions-related admin requests to the `functions-worker` in a proxy, you can edit the `proxy.conf` file to modify the following settings:

```conf
functionWorkerWebServiceURL=<pulsar-functions-worker-web-service-url>
functionWorkerWebServiceURLTLS=<pulsar-functions-worker-web-service-url-tls>
```

## Compare the Run-with-Broker and Run-separately modes

As described above, you can run the functions-worker with brokers, or run it separately. It is more convenient to run functions-workers along with brokers. However, running functions-workers in a separate cluster provides better resource isolation for running functions in `Process` or `Thread` mode.

To determine which mode suits your case, refer to the following guidelines.

Use the `Run-with-Broker` mode in the following cases:
- a) if resource isolation is not required when running functions in `Process` or `Thread` mode;
- b) if you configure the functions-worker to run functions on Kubernetes (where the resource isolation problem is addressed by Kubernetes).

Use the `Run-separately` mode in the following cases:
- a) you don't have a Kubernetes cluster;
- b) if you want to run functions and brokers separately.

## Troubleshooting

**Error message: Namespace missing local cluster name in clusters list**

```
Failed to get partitioned topic metadata: org.apache.pulsar.client.api.PulsarClientException$BrokerMetadataException: Namespace missing local cluster name in clusters list: local_cluster=xyz ns=public/functions clusters=[standalone]
```

This error message appears when either of the following occurs:
- a) a broker is started with `functionsWorkerEnabled=true`, but `pulsarFunctionsCluster` is not set to the correct cluster in the `conf/functions_worker.yml` file;
- b) setting up a geo-replicated Pulsar cluster with `functionsWorkerEnabled=true`, where the brokers in one cluster run well but the brokers in the other cluster do not.
**Workaround**

If any of these cases happens, follow the instructions below to fix the problem:

1. Disable the Functions Worker by setting `functionsWorkerEnabled=false`, and restart the brokers.

2. Get the current clusters list of the `public/functions` namespace.

   ```bash
   bin/pulsar-admin namespaces get-clusters public/functions
   ```

3. Check if the cluster is in the clusters list. If the cluster is not in the list, add it and update the clusters list.

   ```bash
   bin/pulsar-admin namespaces set-clusters --clusters <existing-clusters>,<local-cluster-name> public/functions
   ```

4. After setting the cluster successfully, enable the functions worker by setting `functionsWorkerEnabled=true`.

5. Set the correct cluster name in `pulsarFunctionsCluster` in the `conf/functions_worker.yml` file, and restart the brokers.

diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/getting-started-concepts-and-architecture.md b/site2/website/versioned_docs/version-2.8.3-deprecated/getting-started-concepts-and-architecture.md
deleted file mode 100644
index fe9c3fbc553b2c..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/getting-started-concepts-and-architecture.md
+++ /dev/null
@@ -1,16 +0,0 @@
---
id: concepts-architecture
title: Pulsar concepts and architecture
sidebar_label: "Concepts and architecture"
original_id: concepts-architecture
---

diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/getting-started-docker.md b/site2/website/versioned_docs/version-2.8.3-deprecated/getting-started-docker.md
deleted file mode 100644
index 4f20971d75330c..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/getting-started-docker.md
+++ /dev/null
@@ -1,176 +0,0 @@
---
id: getting-started-docker
title: Set up a standalone Pulsar in Docker
sidebar_label: "Run Pulsar in Docker"
original_id: getting-started-docker
---

For local development and testing, you can run Pulsar in standalone
mode on your own machine within a Docker container.

If you have not installed Docker, download the [Community edition](https://www.docker.com/community-edition)
and follow the instructions for your OS.

## Start Pulsar in Docker

* For macOS, Linux, and Windows:

  ```shell
  $ docker run -it \
    -p 6650:6650 \
    -p 8080:8080 \
    --mount source=pulsardata,target=/pulsar/data \
    --mount source=pulsarconf,target=/pulsar/conf \
    apachepulsar/pulsar:@pulsar:version@ \
    bin/pulsar standalone
  ```

A few things to note about this command:
 * The data, metadata, and configuration are persisted on Docker volumes in order to not start "fresh" every
time the container is restarted. For details on the volumes, you can use `docker volume inspect <volume-name>`.
 * For Docker on Windows, make sure to configure it to use Linux containers.

If you start Pulsar successfully, you will see `INFO`-level log messages like this:

```
2017-08-09 22:34:04,030 - INFO  - [main:WebService@213] - Web Service started at http://127.0.0.1:8080
2017-08-09 22:34:04,038 - INFO  - [main:PulsarService@335] - messaging service is ready, bootstrap service on port=8080, broker url=pulsar://127.0.0.1:6650, cluster=standalone, configs=org.apache.pulsar.broker.ServiceConfiguration@4db60246
...
```
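As a quick health check once the service reports ready, you can query the admin REST API from the host. A standalone instance registers a cluster named `standalone` by default:

```bash
# List the clusters known to the broker; expect ["standalone"].
curl http://localhost:8080/admin/v2/clusters
```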
:::tip

When you start a local standalone cluster, a `public/default` namespace is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics).

:::

## Use Pulsar in Docker

Pulsar offers client libraries for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python.md)
and [C++](client-libraries-cpp.md). If you're running a local standalone cluster, you can
use one of these root URLs to interact with your cluster:

* `pulsar://localhost:6650`
* `http://localhost:8080`

The following example guides you through getting started with Pulsar quickly by using the [Python](client-libraries-python.md)
client API.

Install the Pulsar Python client library directly from [PyPI](https://pypi.org/project/pulsar-client/):

```shell
$ pip install pulsar-client
```

### Consume a message

Create a consumer and subscribe to the topic:

```python
import pulsar

client = pulsar.Client('pulsar://localhost:6650')
consumer = client.subscribe('my-topic',
                            subscription_name='my-sub')

while True:
    msg = consumer.receive()
    print("Received message: '%s'" % msg.data())
    consumer.acknowledge(msg)

client.close()
```

### Produce a message

Now start a producer to send some test messages:

```python
import pulsar

client = pulsar.Client('pulsar://localhost:6650')
producer = client.create_producer('my-topic')

for i in range(10):
    producer.send(('hello-pulsar-%d' % i).encode('utf-8'))

client.close()
```

## Get the topic statistics

In Pulsar, you can use REST, Java, or command-line tools to control every aspect of the system.
For details on APIs, refer to [Admin API Overview](admin-api-overview.md).

In the simplest example, you can use curl to probe the stats for a particular topic:

```shell
$ curl http://localhost:8080/admin/v2/persistent/public/default/my-topic/stats | python -m json.tool
```

The output is something like this:

```json
{
  "averageMsgSize": 0.0,
  "msgRateIn": 0.0,
  "msgRateOut": 0.0,
  "msgThroughputIn": 0.0,
  "msgThroughputOut": 0.0,
  "publishers": [
    {
      "address": "/172.17.0.1:35048",
      "averageMsgSize": 0.0,
      "clientVersion": "1.19.0-incubating",
      "connectedSince": "2017-08-09 20:59:34.621+0000",
      "msgRateIn": 0.0,
      "msgThroughputIn": 0.0,
      "producerId": 0,
      "producerName": "standalone-0-1"
    }
  ],
  "replication": {},
  "storageSize": 16,
  "subscriptions": {
    "my-sub": {
      "blockedSubscriptionOnUnackedMsgs": false,
      "consumers": [
        {
          "address": "/172.17.0.1:35064",
          "availablePermits": 996,
          "blockedConsumerOnUnackedMsgs": false,
          "clientVersion": "1.19.0-incubating",
          "connectedSince": "2017-08-09 21:05:39.222+0000",
          "consumerName": "166111",
          "msgRateOut": 0.0,
          "msgRateRedeliver": 0.0,
          "msgThroughputOut": 0.0,
          "unackedMessages": 0
        }
      ],
      "msgBacklog": 0,
      "msgRateExpired": 0.0,
      "msgRateOut": 0.0,
      "msgRateRedeliver": 0.0,
      "msgThroughputOut": 0.0,
      "type": "Exclusive",
      "unackedMessages": 0
    }
  }
}
```

diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/getting-started-helm.md b/site2/website/versioned_docs/version-2.8.3-deprecated/getting-started-helm.md
deleted file mode 100644
index c76b1d8ff17568..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/getting-started-helm.md
+++ /dev/null
@@ -1,444 +0,0 @@
---
id: getting-started-helm
title: Get started in Kubernetes
sidebar_label: "Run Pulsar in Kubernetes"
original_id: getting-started-helm
---

This section guides you through every step of installing and running Apache Pulsar with Helm on Kubernetes quickly, including the following sections:

- Install Apache Pulsar on Kubernetes using Helm
- Start and stop Apache Pulsar
- Create topics using `pulsar-admin`
- Produce and consume messages using Pulsar clients
- Monitor Apache Pulsar status with Prometheus and Grafana

For deploying a Pulsar cluster for production usage, read the documentation on [how to configure and install a Pulsar Helm chart](helm-deploy.md).

## Prerequisite

- Kubernetes server 1.14.0+
- kubectl 1.14.0+
- Helm 3.0+

:::tip

For the following steps, step 2 and step 3 are for **developers** and step 4 and step 5 are for **administrators**.

:::

## Step 0: Prepare a Kubernetes cluster

Before installing a Pulsar Helm chart, you have to create a Kubernetes cluster. You can follow [the instructions](helm-prepare.md) to prepare a Kubernetes cluster.

We use [Minikube](https://minikube.sigs.k8s.io/docs/start/) in this quick start guide. To prepare a Kubernetes cluster, follow these steps:

1. Create a Kubernetes cluster on Minikube.

   ```bash
   minikube start --memory=8192 --cpus=4 --kubernetes-version=<k8s-version>
   ```

   The `<k8s-version>` can be any [Kubernetes version supported by your Minikube installation](https://minikube.sigs.k8s.io/docs/reference/configuration/kubernetes/), such as `v1.16.1`.

2. Set `kubectl` to use Minikube.

   ```bash
   kubectl config use-context minikube
   ```

3. To use the [Kubernetes Dashboard](https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/) with the local Kubernetes cluster on Minikube, enter the command below:

   ```bash
   minikube dashboard
   ```

   The command automatically triggers opening a webpage in your browser.

## Step 1: Install Pulsar Helm chart

0. Add the Pulsar charts repo.

   ```bash
   helm repo add apache https://pulsar.apache.org/charts
   ```

   ```bash
   helm repo update
   ```

1. Clone the Pulsar Helm chart repository.

   ```bash
   git clone https://github.com/apache/pulsar-helm-chart
   cd pulsar-helm-chart
   ```

2. Run the script `prepare_helm_release.sh` to create the secrets required for installing the Apache Pulsar Helm chart. The username `pulsar` and password `pulsar` are used for logging into the Grafana dashboard and Pulsar Manager.

   :::note

   When running the script, you can use `-n` to specify the Kubernetes namespace where the Pulsar Helm chart is installed, `-k` to define the Pulsar Helm release name, and `-c` to create the Kubernetes namespace. For more information about the script, run `./scripts/pulsar/prepare_helm_release.sh --help`.

   :::

   ```bash
   ./scripts/pulsar/prepare_helm_release.sh \
     -n pulsar \
     -k pulsar-mini \
     -c
   ```

3. Use the Pulsar Helm chart to install a Pulsar cluster to Kubernetes.

   > **NOTE**
   > You need to specify `--set initialize=true` when installing Pulsar for the first time. This command installs and starts Apache Pulsar.

   ```bash
   helm install \
     --values examples/values-minikube.yaml \
     --set initialize=true \
     --namespace pulsar \
     pulsar-mini apache/pulsar
   ```

4. Check the status of all pods.

   ```bash
   kubectl get pods -n pulsar
   ```

   If all pods start up successfully, you can see that the `STATUS` is changed to `Running` or `Completed`.
   **Output**

   ```bash
   NAME                                         READY   STATUS      RESTARTS   AGE
   pulsar-mini-bookie-0                         1/1     Running     0          9m27s
   pulsar-mini-bookie-init-5gphs                0/1     Completed   0          9m27s
   pulsar-mini-broker-0                         1/1     Running     0          9m27s
   pulsar-mini-grafana-6b7bcc64c7-4tkxd         1/1     Running     0          9m27s
   pulsar-mini-prometheus-5fcf5dd84c-w8mgz      1/1     Running     0          9m27s
   pulsar-mini-proxy-0                          1/1     Running     0          9m27s
   pulsar-mini-pulsar-init-t7cqt                0/1     Completed   0          9m27s
   pulsar-mini-pulsar-manager-9bcbb4d9f-htpcs   1/1     Running     0          9m27s
   pulsar-mini-toolset-0                        1/1     Running     0          9m27s
   pulsar-mini-zookeeper-0                      1/1     Running     0          9m27s
   ```

5. Check the status of all services in the namespace `pulsar`.

   ```bash
   kubectl get services -n pulsar
   ```

   **Output**

   ```bash
   NAME                         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                       AGE
   pulsar-mini-bookie           ClusterIP      None             <none>        3181/TCP,8000/TCP             11m
   pulsar-mini-broker           ClusterIP      None             <none>        8080/TCP,6650/TCP             11m
   pulsar-mini-grafana          LoadBalancer   10.106.141.246   <pending>     3000:31905/TCP                11m
   pulsar-mini-prometheus       ClusterIP      None             <none>        9090/TCP                      11m
   pulsar-mini-proxy            LoadBalancer   10.97.240.109    <pending>     80:32305/TCP,6650:31816/TCP   11m
   pulsar-mini-pulsar-manager   LoadBalancer   10.103.192.175   <pending>     9527:30190/TCP                11m
   pulsar-mini-toolset          ClusterIP      None             <none>        <none>                        11m
   pulsar-mini-zookeeper        ClusterIP      None             <none>        2888/TCP,3888/TCP,2181/TCP    11m
   ```

## Step 2: Use pulsar-admin to create Pulsar tenants/namespaces/topics

`pulsar-admin` is the CLI (Command-Line Interface) tool for Pulsar. In this step, you can use `pulsar-admin` to create resources, including tenants, namespaces, and topics.

1. Enter the `toolset` container.

   ```bash
   kubectl exec -it -n pulsar pulsar-mini-toolset-0 -- /bin/bash
   ```

2. In the `toolset` container, create a tenant named `apache`.

   ```bash
   bin/pulsar-admin tenants create apache
   ```

   Then you can list the tenants to see if the tenant is created successfully.

   ```bash
   bin/pulsar-admin tenants list
   ```

   You should see a similar output as below. The tenant `apache` has been successfully created.

   ```bash
   "apache"
   "public"
   "pulsar"
   ```

3. In the `toolset` container, create a namespace named `pulsar` in the tenant `apache`.

   ```bash
   bin/pulsar-admin namespaces create apache/pulsar
   ```

   Then you can list the namespaces of tenant `apache` to see if the namespace is created successfully.

   ```bash
   bin/pulsar-admin namespaces list apache
   ```

   You should see a similar output as below. The namespace `apache/pulsar` has been successfully created.

   ```bash
   "apache/pulsar"
   ```

4. In the `toolset` container, create a topic `test-topic` with `4` partitions in the namespace `apache/pulsar`.

   ```bash
   bin/pulsar-admin topics create-partitioned-topic apache/pulsar/test-topic -p 4
   ```

5. In the `toolset` container, list all the partitioned topics in the namespace `apache/pulsar`.

   ```bash
   bin/pulsar-admin topics list-partitioned-topics apache/pulsar
   ```

   Then you can see all the partitioned topics in the namespace `apache/pulsar`.

   ```bash
   "persistent://apache/pulsar/test-topic"
   ```
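You can also confirm the partition count from inside the `toolset` container. This sketch assumes the `partitioned-stats` subcommand of `pulsar-admin topics`, whose output includes a `metadata.partitions` field:

```bash
# Expect metadata.partitions to report 4 for the topic created above.
bin/pulsar-admin topics partitioned-stats apache/pulsar/test-topic
```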
## Step 3: Use Pulsar client to produce and consume messages

You can use the Pulsar client to create producers and consumers to produce and consume messages.

By default, the Pulsar Helm chart exposes the Pulsar cluster through a Kubernetes `LoadBalancer`. In Minikube, you can use the following command to check the proxy service.

```bash
kubectl get services -n pulsar | grep pulsar-mini-proxy
```

You will see a similar output as below.

```bash
pulsar-mini-proxy            LoadBalancer   10.97.240.109    <pending>     80:32305/TCP,6650:31816/TCP   28m
```

This output shows the node ports to which the Pulsar cluster's HTTP port and binary port are mapped: the port after `80:` is the HTTP port, while the port after `6650:` is the binary port.

Then you can find the IP address and exposed ports of your Minikube server by running the following command.

```bash
minikube service pulsar-mini-proxy -n pulsar
```

**Output**

```bash
|-----------|-------------------|-------------|-------------------------|
| NAMESPACE |       NAME        | TARGET PORT |           URL           |
|-----------|-------------------|-------------|-------------------------|
| pulsar    | pulsar-mini-proxy | http/80     | http://172.17.0.4:32305 |
|           |                   | pulsar/6650 | http://172.17.0.4:31816 |
|-----------|-------------------|-------------|-------------------------|
🏃  Starting tunnel for service pulsar-mini-proxy.
|-----------|-------------------|-------------|------------------------|
| NAMESPACE |       NAME        | TARGET PORT |          URL           |
|-----------|-------------------|-------------|------------------------|
| pulsar    | pulsar-mini-proxy |             | http://127.0.0.1:61853 |
|           |                   |             | http://127.0.0.1:61854 |
|-----------|-------------------|-------------|------------------------|
```

At this point, you can get the service URLs to connect to your Pulsar client. Here are URL examples:

```
webServiceUrl=http://127.0.0.1:61853/
brokerServiceUrl=pulsar://127.0.0.1:61854/
```

Then you can proceed with the following steps:

1. Download the Apache Pulsar tarball from the [downloads page](https://pulsar.apache.org/download/).

2. Decompress the tarball based on your download file.

   ```bash
   tar -xf <file-name>.tar.gz
   ```

3. Expose `PULSAR_HOME`.

   (1) Enter the directory of the decompressed download file.

   (2) Expose `PULSAR_HOME` as the environment variable.

   ```bash
   export PULSAR_HOME=$(pwd)
   ```

4. Configure the Pulsar client.

   In the `${PULSAR_HOME}/conf/client.conf` file, replace `webServiceUrl` and `brokerServiceUrl` with the service URLs you get from the above steps.

5. Create a subscription to consume messages from `apache/pulsar/test-topic`.

   ```bash
   bin/pulsar-client consume -s sub apache/pulsar/test-topic -n 0
   ```

6. Open a new terminal. In the new terminal, create a producer and send 10 messages to the `test-topic` topic.

   ```bash
   bin/pulsar-client produce apache/pulsar/test-topic -m "---------hello apache pulsar-------" -n 10
   ```

7. Verify the results.

   - From the producer side

     **Output**

     The messages have been produced successfully.

     ```bash
     18:15:15.489 [main] INFO  org.apache.pulsar.client.cli.PulsarClientTool - 10 messages successfully produced
     ```

   - From the consumer side

     **Output**

     At the same time, you can receive the messages as below.
     ```bash
     ----- got message -----
     ---------hello apache pulsar-------
     ----- got message -----
     ---------hello apache pulsar-------
     ----- got message -----
     ---------hello apache pulsar-------
     ----- got message -----
     ---------hello apache pulsar-------
     ----- got message -----
     ---------hello apache pulsar-------
     ----- got message -----
     ---------hello apache pulsar-------
     ----- got message -----
     ---------hello apache pulsar-------
     ----- got message -----
     ---------hello apache pulsar-------
     ----- got message -----
     ---------hello apache pulsar-------
     ----- got message -----
     ---------hello apache pulsar-------
     ```

## Step 4: Use Pulsar Manager to manage the cluster

[Pulsar Manager](administration-pulsar-manager.md) is a web-based GUI management tool for managing and monitoring Pulsar.

1. By default, the `Pulsar Manager` is exposed as a separate `LoadBalancer`. You can open the Pulsar Manager UI using the following command:

   ```bash
   minikube service -n pulsar pulsar-mini-pulsar-manager
   ```

2. The Pulsar Manager UI opens in your browser. You can use the username `pulsar` and password `pulsar` to log into Pulsar Manager.

3. In the Pulsar Manager UI, you can create an environment.

   - Click the `New Environment` button in the top-left corner.
   - Type `pulsar-mini` for the field `Environment Name` in the popup window.
   - Type `http://pulsar-mini-broker:8080` for the field `Service URL` in the popup window.
   - Click the `Confirm` button in the popup window.

4. After successfully creating an environment, you are redirected to the `tenants` page of that environment. Then you can create `tenants`, `namespaces` and `topics` using the Pulsar Manager.

## Step 5: Use Prometheus and Grafana to monitor cluster

Grafana is an open-source visualization tool, which can be used for visualizing time series data in dashboards.

1. By default, Grafana is exposed as a separate `LoadBalancer`. You can open the Grafana UI using the following command:

   ```bash
   minikube service pulsar-mini-grafana -n pulsar
   ```

2. The Grafana UI opens in your browser. You can use the username `pulsar` and password `pulsar` to log into the Grafana Dashboard.

3. You can view dashboards for different components of a Pulsar cluster.

diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/getting-started-pulsar.md b/site2/website/versioned_docs/version-2.8.3-deprecated/getting-started-pulsar.md
deleted file mode 100644
index 752590f57b5585..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/getting-started-pulsar.md
+++ /dev/null
@@ -1,72 +0,0 @@
---
id: pulsar-2.0
title: Pulsar 2.0
sidebar_label: "Pulsar 2.0"
original_id: pulsar-2.0
---

Pulsar 2.0 is a major new release for Pulsar that brings some bold changes to the platform, including [simplified topic names](#topic-names), the addition of the [Pulsar Functions](functions-overview.md) feature, some terminology changes, and more.

## New features in Pulsar 2.0

Feature | Description
:-------|:-----------
[Pulsar Functions](functions-overview.md) | A lightweight compute option for Pulsar

## Major changes

There are a few major changes that you should be aware of, as they may significantly impact your day-to-day usage.

### Properties versus tenants

Previously, Pulsar had a concept of properties. A property is essentially the exact same thing as a tenant, so the "property" terminology has been removed in version 2.0.
The [`pulsar-admin properties`](reference-pulsar-admin.md#pulsar-admin) command-line interface, for example, has been replaced with the [`pulsar-admin tenants`](reference-pulsar-admin.md#pulsar-admin-tenants) interface. In some cases, the properties terminology is still used, but it is now considered deprecated and will be removed entirely in a future release.

### Topic names

Prior to version 2.0, *all* Pulsar topics had the following form:

```http

{persistent|non-persistent}://property/cluster/namespace/topic

```

Several important changes have been made in Pulsar 2.0:

* There is no longer a [cluster component](#no-cluster)
* Properties have been [renamed to tenants](#tenants)
* You can use a [flexible](#flexible-topic-naming) naming system to shorten many topic names
* `/` is not allowed in topic names

#### No cluster component

The cluster component has been removed from topic names. Thus, all topic names now have the following form:

```http

{persistent|non-persistent}://tenant/namespace/topic

```

> Existing topics that use the legacy name format will continue to work without any change, and there are no plans to change that.


#### Flexible topic naming

All topic names in Pulsar 2.0 internally have the form shown [above](#no-cluster-component), but you can now use shorthand names in many cases (for the sake of simplicity). The flexible naming system stems from the fact that there is now a default topic type, tenant, and namespace:

Topic aspect | Default
:------------|:-------
topic type | `persistent`
tenant | `public`
namespace | `default`

The table below shows some example topic name translations that use implicit defaults:

Input topic name | Translated topic name
:----------------|:---------------------
`my-topic` | `persistent://public/default/my-topic`
`my-tenant/my-namespace/my-topic` | `persistent://my-tenant/my-namespace/my-topic`

> For [non-persistent topics](concepts-messaging.md#non-persistent-topics), you'll need to continue to specify the entire topic name, as the default-based rules for persistent topic names don't apply. Thus you cannot use a shorthand name like `non-persistent://my-topic` and would need to use `non-persistent://public/default/my-topic` instead.

diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/getting-started-standalone.md b/site2/website/versioned_docs/version-2.8.3-deprecated/getting-started-standalone.md
deleted file mode 100644
index cea47efd08d4b3..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/getting-started-standalone.md
+++ /dev/null
@@ -1,271 +0,0 @@
---
id: getting-started-standalone
title: Set up a standalone Pulsar locally
sidebar_label: "Run Pulsar locally"
original_id: getting-started-standalone
---

For local development and testing, you can run Pulsar in standalone mode on your machine. The standalone mode includes a Pulsar broker, the necessary ZooKeeper and BookKeeper components running inside of a single Java Virtual Machine (JVM) process.

> #### Pulsar in production?
> If you're looking to run a full production Pulsar installation, see the [Deploying a Pulsar instance](deploy-bare-metal.md) guide.

## Install Pulsar standalone

This tutorial guides you through every step of the installation process.

### System requirements

Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install a 64-bit JRE/JDK 8 or later version; JRE/JDK 11 is recommended.
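If you are not sure which Java runtime is active on your machine, you can check it before proceeding. This is a generic sanity check using the standard `java` launcher, not a Pulsar-specific tool:

```shell

# Confirm that a 64-bit JRE/JDK 8 or later is installed; on a 64-bit build
# the output mentions a 64-bit VM, for example "OpenJDK 64-Bit Server VM".
$ java -version

```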

:::tip

By default, Pulsar allocates 2G JVM heap memory to start. You can change it in the `conf/pulsar_env.sh` file under `PULSAR_MEM`. These are extra options passed to the JVM.

:::

:::note

The broker is only supported on a 64-bit JVM.

:::

### Install Pulsar using binary release

To get started with Pulsar, download a binary tarball release in one of the following ways:

* download from the Apache mirror (Pulsar @pulsar:version@ binary release)

* download from the Pulsar [downloads page](pulsar:download_page_url)

* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)

* use [wget](https://www.gnu.org/software/wget):

  ```shell

  $ wget pulsar:binary_release_url

  ```

After you download the tarball, untar it and use the `cd` command to navigate to the resulting directory:

```bash

$ tar xvfz apache-pulsar-@pulsar:version@-bin.tar.gz
$ cd apache-pulsar-@pulsar:version@

```

#### What your package contains

The Pulsar binary package initially contains the following directories:

Directory | Contains
:---------|:--------
`bin` | Pulsar's command-line tools, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/).
`conf` | Configuration files for Pulsar, including [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more.
`examples` | A Java JAR file containing a [Pulsar Functions](functions-overview.md) example.
`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files used by Pulsar.
`licenses` | License files, in the `.txt` form, for various components of the Pulsar [codebase](https://github.com/apache/pulsar).

These directories are created once you begin running Pulsar.

Directory | Contains
:---------|:--------
`data` | The data storage directory used by ZooKeeper and BookKeeper.
`instances` | Artifacts created for [Pulsar Functions](functions-overview.md).
`logs` | Logs created by the installation.

:::tip

If you want to use builtin connectors and tiered storage offloaders, you can install them according to the following instructions:
* [Install builtin connectors (optional)](#install-builtin-connectors-optional)
* [Install tiered storage offloaders (optional)](#install-tiered-storage-offloaders-optional)
Otherwise, skip this step and perform the next step [Start Pulsar standalone](#start-pulsar-standalone). Pulsar can be successfully installed without installing builtin connectors and tiered storage offloaders.

:::

### Install builtin connectors (optional)

Since the `2.1.0-incubating` release, Pulsar releases a separate binary distribution, containing all the `builtin` connectors.
To enable those `builtin` connectors, you can download the connectors tarball release in one of the following ways:

* download from the Apache mirror Pulsar IO Connectors @pulsar:version@ release

* download from the Pulsar [downloads page](pulsar:download_page_url)

* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)

* use [wget](https://www.gnu.org/software/wget):

  ```shell

  $ wget pulsar:connector_release_url/{connector}-@pulsar:version@.nar

  ```

After you download the NAR file, copy the file to the `connectors` directory in the pulsar directory.
For example, if you download the `pulsar-io-aerospike-@pulsar:version@.nar` connector file, enter the following commands:

```bash

$ mkdir connectors
$ mv pulsar-io-aerospike-@pulsar:version@.nar connectors

$ ls connectors
pulsar-io-aerospike-@pulsar:version@.nar
...

```

:::note

* If you are running Pulsar in a bare metal cluster, make sure the `connectors` tarball is unzipped in every pulsar directory of the broker
(or in every pulsar directory of function-worker if you are running a separate worker cluster for Pulsar Functions).
* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DC/OS](https://dcos.io/)),
you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. The `apachepulsar/pulsar-all` image already bundles [all builtin connectors](io-overview.md#working-with-connectors).

:::

### Install tiered storage offloaders (optional)

:::tip

Since the `2.2.0` release, Pulsar releases a separate binary distribution, containing the tiered storage offloaders.
To enable the tiered storage feature, follow the instructions below; otherwise skip this section.

:::

To get started with [tiered storage offloaders](concepts-tiered-storage.md), you need to download the offloaders tarball release on every broker node in one of the following ways:

* download from the Apache mirror Pulsar Tiered Storage Offloaders @pulsar:version@ release

* download from the Pulsar [downloads page](pulsar:download_page_url)

* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)

* use [wget](https://www.gnu.org/software/wget):

  ```shell

  $ wget pulsar:offloader_release_url

  ```

After you download the tarball, untar the offloaders package and copy the offloaders as `offloaders`
in the pulsar directory:

```bash

$ tar xvfz apache-pulsar-offloaders-@pulsar:version@-bin.tar.gz

# you will find a directory named `apache-pulsar-offloaders-@pulsar:version@` in the pulsar directory
# then copy the offloaders

$ mv apache-pulsar-offloaders-@pulsar:version@/offloaders offloaders

$ ls offloaders
tiered-storage-jcloud-@pulsar:version@.nar

```

For more information on how to configure tiered storage, see [Tiered storage cookbook](cookbooks-tiered-storage.md).

:::note

* If you are running Pulsar in a bare metal cluster, make sure that the `offloaders` tarball is unzipped in every broker's pulsar directory.
* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DC/OS](https://dcos.io/)),
you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. The `apachepulsar/pulsar-all` image already bundles tiered storage offloaders.

:::

## Start Pulsar standalone

Once you have an up-to-date local copy of the release, you can start a local cluster using the [`pulsar`](reference-cli-tools.md#pulsar) command, which is stored in the `bin` directory, specifying that you want to start Pulsar in standalone mode.

```bash

$ bin/pulsar standalone

```

If you have started Pulsar successfully, you will see `INFO`-level log messages like this:

```bash

2017-06-01 14:46:29,192 - INFO - [main:WebSocketService@95] - Configuration Store cache started
2017-06-01 14:46:29,192 - INFO - [main:AuthenticationService@61] - Authentication is disabled
2017-06-01 14:46:29,192 - INFO - [main:WebSocketService@108] - Pulsar WebSocket Service started

```

:::tip

* The service runs in your terminal, which is under your direct control. If you need to run other commands, open a new terminal window.

:::

You can also run the service as a background process using the `pulsar-daemon start standalone` command. For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon).

> * By default, there is no encryption, authentication, or authorization configured. Apache Pulsar can be accessed from a remote server without any authorization. See the [Security Overview](security-overview.md) document to secure your deployment.
>
> * When you start a local standalone cluster, a `public/default` [namespace](concepts-messaging.md#namespaces) is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics).

## Use Pulsar standalone

Pulsar provides a CLI tool called [`pulsar-client`](reference-cli-tools.md#pulsar-client). The pulsar-client tool enables you to produce and consume messages on a Pulsar topic in a running cluster.

### Consume a message

The following command consumes a message with the subscription name `first-subscription` from the `my-topic` topic:

```bash

$ bin/pulsar-client consume my-topic -s "first-subscription"

```

If the message has been successfully consumed, you will see a confirmation like the following in the `pulsar-client` logs:

```

09:56:55.566 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.MultiTopicsConsumerImpl - [TopicsConsumerFakeTopicNamee2df9] [first-subscription] Success subscribe new topic my-topic in topics consumer, partitions: 4, allTopicPartitionsNumber: 4

```

:::tip

As you may have noticed, we did not explicitly create the `my-topic` topic from which we consume the message. When you consume a message from a topic that does not yet exist, Pulsar creates that topic for you automatically. Producing a message to a topic that does not exist automatically creates that topic for you as well.

:::

### Produce a message

The following command produces a message saying `hello-pulsar` to the `my-topic` topic:

```bash

$ bin/pulsar-client produce my-topic --messages "hello-pulsar"

```

If the message has been successfully published to the topic, you will see a confirmation like the following in the `pulsar-client` logs:

```

13:09:39.356 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully produced

```

## Stop Pulsar standalone

Press `Ctrl+C` to stop a local standalone Pulsar.

:::tip

If the service runs as a background process using the `pulsar-daemon start standalone` command, then use the `pulsar-daemon stop standalone` command to stop the service.
For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon).

:::

diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/helm-deploy.md b/site2/website/versioned_docs/version-2.8.3-deprecated/helm-deploy.md
deleted file mode 100644
index 93709f7091c1ea..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/helm-deploy.md
+++ /dev/null
@@ -1,434 +0,0 @@
---
id: helm-deploy
title: Deploy Pulsar cluster using Helm
sidebar_label: "Deployment"
original_id: helm-deploy
---

Before running `helm install`, you need to decide how to run Pulsar.
Options can be specified using Helm's `--set option.name=value` command line option.

## Select configuration options

In each section, collect the options that you want to combine and use with the `helm install` command.

### Kubernetes namespace

By default, the Pulsar Helm chart is installed to a namespace called `pulsar`.

```yaml

namespace: pulsar

```

To install the Pulsar Helm chart into a different Kubernetes namespace, you can include this option in the `helm install` command.

```bash

--set namespace=<different-k8s-namespace>

```

By default, the Pulsar Helm chart doesn't create the namespace.

```yaml

namespaceCreate: false

```

To use the Pulsar Helm chart to create the Kubernetes namespace automatically, you can include this option in the `helm install` command.

```bash

--set namespaceCreate=true

```

### Persistence

By default, the Pulsar Helm chart creates Volume Claims with the expectation that a dynamic provisioner creates the underlying Persistent Volumes.

```yaml

volumes:
  persistence: true
  # configure the components to use local persistent volumes
  # the local provisioner should be installed before enabling local persistent volumes
  local_storage: false

```

To use local persistent volumes as the persistent storage for the Helm release, you can install the [local storage provisioner](#install-local-storage-provisioner) and include the following option in the `helm install` command.

```bash

--set volumes.local_storage=true

```

:::note

Before installing a production instance of Pulsar, be sure to plan the storage settings to avoid extra storage migration work, because after the initial installation you must edit Kubernetes objects manually if you want to change the storage settings.

:::

The Pulsar Helm chart is designed for production use. To use the Pulsar Helm chart in a development environment (such as Minikube), you can disable persistence by including this option in your `helm install` command.

```bash

--set volumes.persistence=false

```

### Affinity

By default, `anti-affinity` is enabled to ensure pods of the same component can run on different nodes.

```yaml

affinity:
  anti_affinity: true

```

To use the Pulsar Helm chart in a development environment (such as Minikube), you can disable `anti-affinity` by including this option in your `helm install` command.

```bash

--set affinity.anti_affinity=false

```

### Components

The Pulsar Helm chart is designed for production usage. It deploys a production-ready Pulsar cluster, including Pulsar core components and monitoring components.

You can customize the components to be deployed by turning individual components on or off.

```yaml

## Components
##
## Control what components of Apache Pulsar to deploy for the cluster
components:
  # zookeeper
  zookeeper: true
  # bookkeeper
  bookkeeper: true
  # bookkeeper - autorecovery
  autorecovery: true
  # broker
  broker: true
  # functions
  functions: true
  # proxy
  proxy: true
  # toolset
  toolset: true
  # pulsar manager
  pulsar_manager: true

## Monitoring Components
##
## Control what components of the monitoring stack to deploy for the cluster
monitoring:
  # monitoring - prometheus
  prometheus: true
  # monitoring - grafana
  grafana: true

```

### Docker images

The Pulsar Helm chart is designed to enable controlled upgrades, so it supports independent image versions for components. You can customize the image for each component individually.

```yaml

## Images
##
## Control what images to use for each component
images:
  zookeeper:
    repository: apachepulsar/pulsar-all
    tag: 2.5.0
    pullPolicy: IfNotPresent
  bookie:
    repository: apachepulsar/pulsar-all
    tag: 2.5.0
    pullPolicy: IfNotPresent
  autorecovery:
    repository: apachepulsar/pulsar-all
    tag: 2.5.0
    pullPolicy: IfNotPresent
  broker:
    repository: apachepulsar/pulsar-all
    tag: 2.5.0
    pullPolicy: IfNotPresent
  proxy:
    repository: apachepulsar/pulsar-all
    tag: 2.5.0
    pullPolicy: IfNotPresent
  functions:
    repository: apachepulsar/pulsar-all
    tag: 2.5.0
  prometheus:
    repository: prom/prometheus
    tag: v1.6.3
    pullPolicy: IfNotPresent
  grafana:
    repository: streamnative/apache-pulsar-grafana-dashboard-k8s
    tag: 0.0.4
    pullPolicy: IfNotPresent
  pulsar_manager:
    repository: apachepulsar/pulsar-manager
    tag: v0.1.0
    pullPolicy: IfNotPresent
    hasCommand: false

```

### TLS

The Pulsar Helm chart can be configured to enable TLS (Transport Layer Security) to protect all the traffic between components. Before enabling TLS, you have to provision TLS certificates for the required components.

#### Provision TLS certificates using cert-manager

To use the `cert-manager` to provision the TLS certificates, you have to install the [cert-manager](#install-cert-manager) before installing the Pulsar Helm chart. After successfully installing the cert-manager, set `certs.internal_issuer.enabled` to `true` so that the Pulsar Helm chart uses the `cert-manager` to generate `selfsigning` TLS certificates for the configured components.

```yaml

certs:
  internal_issuer:
    enabled: false
    component: internal-cert-issuer
    type: selfsigning

```

You can also customize the generated TLS certificates by configuring the following fields.

```yaml

tls:
  # common settings for generating certs
  common:
    # 90d
    duration: 2160h
    # 15d
    renewBefore: 360h
    organization:
      - pulsar
    keySize: 4096
    keyAlgorithm: rsa
    keyEncoding: pkcs8

```

#### Enable TLS

After installing the `cert-manager`, you can set `tls.enabled` to `true` to enable TLS encryption for the entire cluster.

```yaml

tls:
  enabled: false

```

You can also configure whether to enable TLS encryption for each component individually.

```yaml

tls:
  # settings for generating certs for proxy
  proxy:
    enabled: false
    cert_name: tls-proxy
  # settings for generating certs for broker
  broker:
    enabled: false
    cert_name: tls-broker
  # settings for generating certs for bookies
  bookie:
    enabled: false
    cert_name: tls-bookie
  # settings for generating certs for zookeeper
  zookeeper:
    enabled: false
    cert_name: tls-zookeeper
  # settings for generating certs for recovery
  autorecovery:
    cert_name: tls-recovery
  # settings for generating certs for toolset
  toolset:
    cert_name: tls-toolset

```

### Authentication

By default, authentication is disabled. You can set `auth.authentication.enabled` to `true` to enable authentication.
Currently, the Pulsar Helm chart only supports the JWT authentication provider. You can set `auth.authentication.provider` to `jwt` to use the JWT authentication provider.

```yaml

# Enable or disable broker authentication and authorization.
auth:
  authentication:
    enabled: false
    provider: "jwt"
    jwt:
      # Enable JWT authentication
      # If the token is generated by a secret key, set usingSecretKey to true.
      # If the token is generated by a private key, set usingSecretKey to false.
      usingSecretKey: false
  superUsers:
    # broker to broker communication
    broker: "broker-admin"
    # proxy to broker communication
    proxy: "proxy-admin"
    # pulsar-admin client to broker/proxy communication
    client: "admin"

```

To enable authentication, you can run [prepare helm release](#prepare-the-helm-release) to generate token secret keys and tokens for the three super users specified in the `auth.superUsers` field. The generated token keys and super user tokens are uploaded and stored as Kubernetes secrets prefixed with `<helm-release-name>-token-`. You can use the following command to find those secrets.

```bash

kubectl get secrets -n <k8s-namespace>

```

### Authorization

By default, authorization is disabled. Authorization can be enabled only when authentication is enabled.

```yaml

auth:
  authorization:
    enabled: false

```

To enable authorization, you can include this option in the `helm install` command.

```bash

--set auth.authorization.enabled=true

```

### CPU and RAM resource requirements

By default, the resource requests and the number of replicas for the Pulsar components in the Pulsar Helm chart are adequate for a small production deployment. If you deploy a non-production instance, you can reduce the defaults to fit into a smaller cluster.

Once you have all of your configuration options collected, you can install dependent charts before installing the Pulsar Helm chart.

## Install dependent charts

### Install local storage provisioner

To use local persistent volumes as the persistent storage, you need to install a storage provisioner for [local persistent volumes](https://kubernetes.io/blog/2019/04/04/kubernetes-1.14-local-persistent-volumes-ga/).

One of the easiest ways to get started is to use the local storage provisioner provided along with the Pulsar Helm chart.

```

helm repo add streamnative https://charts.streamnative.io
helm repo update
helm install pulsar-storage-provisioner streamnative/local-storage-provisioner

```

### Install cert-manager

The Pulsar Helm chart uses the [cert-manager](https://github.com/jetstack/cert-manager) to provision and manage TLS certificates automatically. To enable TLS encryption for brokers or proxies, you need to install the cert-manager in advance.

For details about how to install the cert-manager, follow the [official instructions](https://cert-manager.io/docs/installation/kubernetes/#installing-with-helm).

Alternatively, we provide a bash script [install-cert-manager.sh](https://github.com/apache/pulsar-helm-chart/blob/master/scripts/cert-manager/install-cert-manager.sh) to install a cert-manager release to the namespace `cert-manager`.

```bash

git clone https://github.com/apache/pulsar-helm-chart
cd pulsar-helm-chart
./scripts/cert-manager/install-cert-manager.sh

```

## Prepare Helm release

Once you have installed all the dependent charts and collected all of your configuration options, you can run [prepare_helm_release.sh](https://github.com/apache/pulsar-helm-chart/blob/master/scripts/pulsar/prepare_helm_release.sh) to prepare the Helm release.

```bash

git clone https://github.com/apache/pulsar-helm-chart
cd pulsar-helm-chart
./scripts/pulsar/prepare_helm_release.sh -n <k8s-namespace> -k <helm-release-name>

```

The `prepare_helm_release.sh` script creates the following resources:

- A Kubernetes namespace for installing the Pulsar release
- JWT secret keys and tokens for three super users: `broker-admin`, `proxy-admin`, and `admin`. By default, it generates an asymmetric public/private key pair. You can choose to generate a symmetric secret key by specifying `--symmetric`.
  - The `proxy-admin` role is used for proxies to communicate to brokers.
  - The `broker-admin` role is used for inter-broker communications.
  - The `admin` role is used by the admin tools.

## Deploy Pulsar cluster using Helm

Once you have finished the following three things, you can install a Helm release.

- Collect all of your configuration options.
- Install dependent charts.
- Prepare the Helm release.

In this example, we name our Helm release `pulsar`.

```bash

helm repo add apache https://pulsar.apache.org/charts
helm repo update
helm install pulsar apache/pulsar \
    --timeout 10m \
    --set initialize=true \
    --set [your configuration options]

```

:::note

For the first deployment, add the `--set initialize=true` option to initialize the bookie and Pulsar cluster metadata.

:::

You can also use the `--version <chart-version>` option if you want to install a specific version of the Pulsar Helm chart.

## Monitor deployment

A list of installed resources is output once the Pulsar cluster is deployed. This may take 5-10 minutes.

The status of the deployment can be checked by running the `helm status pulsar` command; you can also run the command in another terminal while the deployment is in progress.

## Access Pulsar cluster

The default values will create a `ClusterIP` for the following resources, which you can use to interact with the cluster.

- Proxy: You can use the IP address to produce and consume messages to the installed Pulsar cluster.
- Pulsar Manager: You can access the Pulsar Manager UI at `http://<pulsar-manager-IP>:9527`.
- Grafana Dashboard: You can access the Grafana dashboard at `http://<grafana-dashboard-IP>:3000`.

To find the IP addresses of those components, run the following command:

```bash

kubectl get service -n <k8s-namespace>

```

diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/helm-install.md b/site2/website/versioned_docs/version-2.8.3-deprecated/helm-install.md
deleted file mode 100644
index 1f4d5eb69d5ddd..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/helm-install.md
+++ /dev/null
@@ -1,44 +0,0 @@
---
id: helm-install
title: Install Apache Pulsar using Helm
sidebar_label: "Install"
original_id: helm-install
---

Install Apache Pulsar on Kubernetes with the official Pulsar Helm chart.

## Requirements

To deploy Apache Pulsar on Kubernetes, the following are required.

- kubectl 1.14 or higher, compatible with your cluster ([+/- 1 minor release from your cluster](https://kubernetes.io/docs/tasks/tools/install-kubectl/#before-you-begin))
- Helm v3 (3.0.2 or higher)
- A Kubernetes cluster, version 1.14 or higher

## Environment setup

Before deploying Pulsar, you need to prepare your environment.

### Tools

Install [`helm`](helm-tools.md) and [`kubectl`](helm-tools.md) on your computer.

## Cloud cluster preparation

:::note

Kubernetes 1.14 or higher is required.

:::

To create and connect to the Kubernetes cluster, follow the instructions:

- [Google Kubernetes Engine](helm-prepare.md#google-kubernetes-engine)

## Pulsar deployment

Once the environment is set up and configuration is generated, you can now proceed to the [deployment of Pulsar](helm-deploy.md).

## Pulsar upgrade

To upgrade an existing Kubernetes installation, follow the [upgrade documentation](helm-upgrade.md).
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/helm-overview.md b/site2/website/versioned_docs/version-2.8.3-deprecated/helm-overview.md
deleted file mode 100644
index 385d535e319b65..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/helm-overview.md
+++ /dev/null
@@ -1,104 +0,0 @@
---
id: helm-overview
title: Apache Pulsar Helm Chart
sidebar_label: "Overview"
original_id: helm-overview
---

This is the officially supported Helm chart to install Apache Pulsar in a cloud-native environment. It was enhanced based on StreamNative's [Helm Chart](https://github.com/streamnative/charts).

## Introduction

The Apache Pulsar Helm chart is one of the most convenient ways to operate Pulsar on Kubernetes. This Pulsar Helm chart contains all the required components to get started and can scale to large deployments.

This chart includes all the components for a complete experience, but each part can be configured to be installed separately.

- Pulsar core components:
  - ZooKeeper
  - Bookies
  - Brokers
  - Function workers
  - Proxies
- Control Center:
  - Pulsar Manager
  - Prometheus
  - Grafana

It includes support for:

- Security
  - Automatically provisioned TLS certificates, using [Jetstack](https://www.jetstack.io/)'s [cert-manager](https://cert-manager.io/docs/)
    - self-signed
    - [Let's Encrypt](https://letsencrypt.org/)
  - TLS Encryption
    - Proxy
    - Broker
    - Toolset
    - Bookie
    - ZooKeeper
  - Authentication
    - JWT
  - Authorization
- Storage
  - Non-persistence storage
  - Persistence volume
  - Local persistent volumes
- Functions
  - Kubernetes Runtime
  - Process Runtime
  - Thread Runtime
- Operations
  - Independent image versions for all components, enabling controlled upgrades

## Pulsar Helm chart quick start

To get up and running with these charts as fast as possible, in a **non-production** use case, we provide a [quick start guide](getting-started-helm.md) for Proof of Concept (PoC) deployments.

This guide walks the user through deploying these charts with default values and features, but *does not* meet production-ready requirements. To deploy these charts into production under sustained load, follow the complete [Installation Guide](helm-install.md).

## Troubleshooting

We have done our best to make these charts as seamless as possible. Occasionally, issues arise that are outside of our control. We have collected tips and tricks for troubleshooting common issues. Please check them first before raising an [issue](https://github.com/apache/pulsar/issues/new/choose), and feel free to add to them by raising a [Pull Request](https://github.com/apache/pulsar/compare).

## Installation

The Apache Pulsar Helm chart contains all required dependencies.

If you deploy a PoC for testing, we strongly suggest you follow our [Quick Start Guide](getting-started-helm.md) for your first iteration.

1. [Preparation](helm-prepare.md)
2. [Deployment](helm-deploy.md)

## Upgrading

Once the Pulsar Helm chart is installed, use `helm upgrade` to apply configuration changes and chart updates.

```bash

helm repo add apache https://pulsar.apache.org/charts
helm repo update
helm get values <pulsar-release-name> > pulsar.yaml
helm upgrade <pulsar-release-name> apache/pulsar -f pulsar.yaml

```

For more detailed information, see [Upgrading](helm-upgrade.md).

## Uninstallation

To uninstall the Pulsar Helm chart, run the following command:

```bash

helm delete <pulsar-release-name>

```

For the purposes of continuity, these charts have some Kubernetes objects that cannot be removed when performing `helm delete`.
It is recommended to *consciously* remove these items, as they affect re-deployment.

* PVCs for stateful data: *consciously* remove these items.
  - ZooKeeper: This is your metadata.
  - BookKeeper: This is your data.
  - Prometheus: This is your metrics data, which can be safely removed.
* Secrets: if the secrets are generated by the [prepare release script](https://github.com/apache/pulsar-helm-chart/blob/master/scripts/pulsar/prepare_helm_release.sh), they contain secret keys and tokens. You can use the [cleanup release script](https://github.com/apache/pulsar-helm-chart/blob/master/scripts/pulsar/cleanup_helm_release.sh) to remove these secrets and tokens as needed.
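As a sketch of what that conscious cleanup can look like (assuming the release was installed into the `pulsar` namespace; `<pvc-name>` and `<secret-name>` are placeholders for whatever `kubectl` lists in your deployment):

```bash

# Inspect the PVCs and secrets the release left behind
kubectl get pvc -n pulsar
kubectl get secrets -n pulsar

# Consciously delete a stateful volume claim once its data is no longer needed
kubectl delete pvc <pvc-name> -n pulsar

# Remove a generated token secret as needed
kubectl delete secret <secret-name> -n pulsar

```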
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/helm-prepare.md b/site2/website/versioned_docs/version-2.8.3-deprecated/helm-prepare.md
deleted file mode 100644
index 5e9f2f9ef4f680..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/helm-prepare.md
+++ /dev/null
@@ -1,92 +0,0 @@
---
id: helm-prepare
title: Prepare Kubernetes resources
sidebar_label: "Prepare"
original_id: helm-prepare
---

For a fully functional Pulsar cluster, you need a few resources before deploying the Apache Pulsar Helm chart. The following provides instructions to prepare the Kubernetes cluster before deploying the Pulsar Helm chart.

- [Google Kubernetes Engine](#google-kubernetes-engine)
  - [Manual cluster creation](#manual-cluster-creation)
  - [Scripted cluster creation](#scripted-cluster-creation)
  - [Create cluster with local SSDs](#create-cluster-with-local-ssds)
- [Next Steps](#next-steps)

## Google Kubernetes Engine

To make getting started easier, a script is provided to create the cluster automatically. Alternatively, a cluster can be created manually as well.

### Manual cluster creation

To provision a Kubernetes cluster manually, follow the [GKE instructions](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-cluster).

Alternatively, you can use the [instructions](#scripted-cluster-creation) below to provision a GKE cluster as needed.

### Scripted cluster creation

A [bootstrap script](https://github.com/streamnative/charts/tree/master/scripts/pulsar/gke_bootstrap_script.sh) has been created to automate much of the setup process for users on GCP/GKE.

The script can:

1. Create a new GKE cluster.
2. Allow the cluster to modify DNS (Domain Name Server) records.
3. Set up `kubectl` and connect it to the cluster.

Google Cloud SDK is a dependency of this script, so ensure it is [set up correctly](helm-tools.md#connect-to-a-gke-cluster) for the script to work.

The script reads various parameters from environment variables and an argument `up` or `down` for bootstrap and clean-up, respectively.

The following table describes all variables.

| **Variable** | **Description** | **Default value** |
| ------------ | --------------- | ----------------- |
| PROJECT | ID of your GCP project | No default value; it must be set. |
| CLUSTER_NAME | Name of the GKE cluster | `pulsar-dev` |
| CONFDIR | Configuration directory to store Kubernetes configuration | ${HOME}/.config/streamnative |
| INT_NETWORK | IP space to use within this cluster | `default` |
| LOCAL_SSD_COUNT | Number of local SSDs | 4 |
| MACHINE_TYPE | Type of machine to use for nodes | `n1-standard-4` |
| NUM_NODES | Number of nodes to be created in each of the cluster's zones | 4 |
| PREEMPTIBLE | Create nodes using preemptible VM instances in the new cluster. | false |
| REGION | Compute region for the cluster | `us-east1` |
| USE_LOCAL_SSD | Flag to create a cluster with local SSDs | false |
| ZONE | Compute zone for the cluster | `us-east1-b` |
| ZONE_EXTENSION | The extension (`a`, `b`, `c`) of the zone name of the cluster | `b` |
| EXTRA_CREATE_ARGS | Extra arguments passed to the create command | |

Run the script by passing in your desired parameters.
It can work with the default parameters, except for `PROJECT`, which is required:

```bash

PROJECT=<gcloud-project-id> scripts/pulsar/gke_bootstrap_script.sh up

```

The script can also be used to clean up the created GKE resources.

```bash

PROJECT=<gcloud-project-id> scripts/pulsar/gke_bootstrap_script.sh down

```

#### Create cluster with local SSDs

To install the Pulsar Helm chart using local persistent volumes, you need to create a GKE cluster with local SSDs. You can do so by setting `USE_LOCAL_SSD` to `true` in the following command to create a Pulsar cluster with local SSDs.

```

PROJECT=<gcloud-project-id> USE_LOCAL_SSD=true LOCAL_SSD_COUNT=<local-ssd-count> scripts/pulsar/gke_bootstrap_script.sh up

```

## Next Steps

Continue with the [installation of the chart](helm-deploy.md) once you have the cluster up and running.
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/helm-tools.md b/site2/website/versioned_docs/version-2.8.3-deprecated/helm-tools.md
deleted file mode 100644
index 6ba89006913b64..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/helm-tools.md
+++ /dev/null
@@ -1,43 +0,0 @@
---
id: helm-tools
title: Required tools for deploying Pulsar Helm Chart
sidebar_label: "Required Tools"
original_id: helm-tools
---

Before deploying Pulsar to your Kubernetes cluster, there are some tools you must have installed locally.

## kubectl

kubectl is the tool that talks to the Kubernetes API. kubectl 1.14 or higher is required, and it needs to be compatible with your cluster ([+/- 1 minor release from your cluster](https://kubernetes.io/docs/tasks/tools/install-kubectl/#before-you-begin)).

To install kubectl locally, follow the [Kubernetes documentation](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl).

The server version of kubectl cannot be obtained until we connect to a cluster.

## Helm

Helm is the package manager for Kubernetes. The Apache Pulsar Helm Chart is tested and supported with Helm v3.

### Get Helm

You can get Helm from the project's [releases page](https://github.com/helm/helm/releases), or follow other options under the official documentation of [installing Helm](https://helm.sh/docs/intro/install/).

### Next steps

Once kubectl and Helm are configured, you can configure your [Kubernetes cluster](helm-prepare.md).

## Additional information

### Templates

Templating in Helm is done through Golang's [text/template](https://golang.org/pkg/text/template/) and [sprig](https://godoc.org/github.com/Masterminds/sprig).

For more information about how all the inner workings behave, check these documents:

- [Functions and Pipelines](https://helm.sh/docs/chart_template_guide/functions_and_pipelines/)
- [Subcharts and Globals](https://helm.sh/docs/chart_template_guide/subcharts_and_globals/)

### Tips and tricks

For additional information on developing with Helm, check the [tips and tricks section](https://helm.sh/docs/howto/charts_tips_and_tricks/) in the Helm repository.
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/helm-upgrade.md b/site2/website/versioned_docs/version-2.8.3-deprecated/helm-upgrade.md
deleted file mode 100644
index 7d671e6bfb3c10..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/helm-upgrade.md
+++ /dev/null
@@ -1,43 +0,0 @@
---
id: helm-upgrade
title: Upgrade Pulsar Helm release
sidebar_label: "Upgrade"
original_id: helm-upgrade
---

Before upgrading your Pulsar installation, you need to check the change log corresponding to the specific release you want to upgrade to and look for any release notes that might pertain to the new Pulsar Helm chart version.

We also recommend that you provide all values using the `helm upgrade --set key=value` syntax or a `-f values.yaml` file instead of using `--reuse-values`, because some of the current values might be deprecated.

:::note

You can retrieve your previous `--set` arguments cleanly with `helm get values <release-name>`. If you direct this into a file (`helm get values <release-name> > pulsar.yaml`), you can safely pass this file through `-f`, namely `helm upgrade <release-name> apache/pulsar -f pulsar.yaml`. This safely replaces the behavior of `--reuse-values`.

:::

## Steps

To upgrade Apache Pulsar to a newer version, follow these steps:

1. Check the change log for the specific version you would like to upgrade to.
2. Go through the [deployment documentation](helm-deploy.md) step by step.
3. Extract your previous `--set` arguments with the following command.

   ```bash

   helm get values <release-name> > pulsar.yaml

   ```

4. Decide all the values you need to set.
5. Perform the upgrade, with all `--set` arguments extracted in step 3.

   ```bash

   helm upgrade <release-name> apache/pulsar \
       --version <new-version> \
       -f pulsar.yaml \
       --set ...

   ```

diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/io-aerospike-sink.md b/site2/website/versioned_docs/version-2.8.3-deprecated/io-aerospike-sink.md
deleted file mode 100644
index 63d7338a3ba91c..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/io-aerospike-sink.md
+++ /dev/null
@@ -1,26 +0,0 @@
---
id: io-aerospike-sink
title: Aerospike sink connector
sidebar_label: "Aerospike sink connector"
original_id: io-aerospike-sink
---

The Aerospike sink connector pulls messages from Pulsar topics and persists them to Aerospike clusters.

## Configuration

The configuration of the Aerospike sink connector has the following properties.

### Property

| Name | Type | Required | Default | Description |
|------|------|----------|---------|-------------|
| `seedHosts` | String | true | No default value | A comma-separated list of one or more Aerospike cluster hosts. Each host can be specified as a valid IP address or hostname, followed by an optional port number. |
| `keyspace` | String | true | No default value | The Aerospike namespace. |
| `columnName` | String | true | No default value | The Aerospike column name. |
| `userName` | String | false | NULL | The Aerospike username. |
| `password` | String | false | NULL | The Aerospike password. |
| `keySet` | String | false | NULL | The Aerospike set name. |
| `maxConcurrentRequests` | int | false | 100 | The maximum number of concurrent Aerospike transactions that a sink can open. |
| `timeoutMs` | int | false | 100 | This property controls `socketTimeout` and `totalTimeout` for Aerospike transactions. |
| `retries` | int | false | 1 | The maximum number of retries before aborting a write transaction to Aerospike. |
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/io-canal-source.md b/site2/website/versioned_docs/version-2.8.3-deprecated/io-canal-source.md
deleted file mode 100644
index d1fd43bb0f74e4..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/io-canal-source.md
+++ /dev/null
@@ -1,235 +0,0 @@
---
id: io-canal-source
title: Canal source connector
sidebar_label: "Canal source connector"
original_id: io-canal-source
---

The Canal source connector pulls messages from MySQL to Pulsar topics.

## Configuration

The configuration of the Canal source connector has the following properties.

### Property

| Name | Required | Default | Description |
|------|----------|---------|-------------|
| `username` | true | None | Canal server account (not MySQL). |
| `password` | true | None | Canal server password (not MySQL). |
| `destination` | true | None | Source destination that the Canal source connector connects to. |
| `singleHostname` | false | None | Canal server address. |
| `singlePort` | false | None | Canal server port. |
| `cluster` | true | false | Whether to enable cluster mode based on the Canal server configuration.<br /><br />true: **cluster** mode. If set to true, it talks to `zkServers` to figure out the actual database host.<br /><br />false: **standalone** mode. If set to false, it connects to the database specified by `singleHostname` and `singlePort`. |
| `zkServers` | true | None | Address and port of the ZooKeeper server that the Canal source connector talks to in order to figure out the actual database host. |
| `batchSize` | false | 1000 | Batch size to fetch from Canal. |

### Example

Before using the Canal connector, you can create a configuration file through one of the following methods.

* JSON

  ```json

  {
      "zkServers": "127.0.0.1:2181",
      "batchSize": "5120",
      "destination": "example",
      "username": "",
      "password": "",
      "cluster": false,
      "singleHostname": "127.0.0.1",
      "singlePort": "11111"
  }

  ```

* YAML

  You can create a YAML file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/canal/src/main/resources/canal-mysql-source-config.yaml) below to your YAML file.

  ```yaml

  configs:
      zkServers: "127.0.0.1:2181"
      batchSize: 5120
      destination: "example"
      username: ""
      password: ""
      cluster: false
      singleHostname: "127.0.0.1"
      singlePort: 11111

  ```

## Usage

Here is an example of storing MySQL data using the configuration file above.

1. Start a MySQL server.

   ```bash

   $ docker pull mysql:5.7
   $ docker run -d -it --rm --name pulsar-mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=canal -e MYSQL_USER=mysqluser -e MYSQL_PASSWORD=mysqlpw mysql:5.7

   ```

2. Create a configuration file `mysqld.cnf`.

   ```bash

   [mysqld]
   pid-file = /var/run/mysqld/mysqld.pid
   socket = /var/run/mysqld/mysqld.sock
   datadir = /var/lib/mysql
   #log-error = /var/log/mysql/error.log
   # By default we only accept connections from localhost
   #bind-address = 127.0.0.1
   # Disabling symbolic-links is recommended to prevent assorted security risks
   symbolic-links=0
   log-bin=mysql-bin
   binlog-format=ROW
   server_id=1

   ```

3. Copy the configuration file `mysqld.cnf` to the MySQL server.

   ```bash

   $ docker cp mysqld.cnf pulsar-mysql:/etc/mysql/mysql.conf.d/

   ```

4. Restart the MySQL server.

   ```bash

   $ docker restart pulsar-mysql

   ```

5. Create a test database in the MySQL server.

   ```bash

   $ docker exec -it pulsar-mysql /bin/bash
   $ mysql -h 127.0.0.1 -uroot -pcanal -e 'create database test;'

   ```

6. Start a Canal server and connect it to the MySQL server.

   ```

   $ docker pull canal/canal-server:v1.1.2
   $ docker run -d -it --link pulsar-mysql -e canal.auto.scan=false -e canal.destinations=test -e canal.instance.master.address=pulsar-mysql:3306 -e canal.instance.dbUsername=root -e canal.instance.dbPassword=canal -e canal.instance.connectionCharset=UTF-8 -e canal.instance.tsdb.enable=true -e canal.instance.gtidon=false --name=pulsar-canal-server -p 8000:8000 -p 2222:2222 -p 11111:11111 -p 11112:11112 -m 4096m canal/canal-server:v1.1.2

   ```

7. Start Pulsar standalone.

   ```bash

   $ docker pull apachepulsar/pulsar:2.3.0
   $ docker run -d -it --link pulsar-canal-server -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-standalone apachepulsar/pulsar:2.3.0 bin/pulsar standalone

   ```

8. Modify the configuration file `canal-mysql-source-config.yaml`.

   ```yaml

   configs:
       zkServers: ""
       batchSize: "5120"
       destination: "test"
       username: ""
       password: ""
       cluster: false
       singleHostname: "pulsar-canal-server"
       singlePort: "11111"

   ```

9. Create a consumer file `pulsar-client.py`.

   ```python

   import pulsar

   # Connect to the local standalone Pulsar cluster.
   client = pulsar.Client('pulsar://localhost:6650')
   consumer = client.subscribe('my-topic',
                               subscription_name='my-sub')

   try:
       # Receive and acknowledge messages until the process is interrupted.
       while True:
           msg = consumer.receive()
           print("Received message: '%s'" % msg.data())
           consumer.acknowledge(msg)
   finally:
       # Release the client resources when the loop exits (e.g. on Ctrl+C).
       client.close()

   ```

10. Copy the configuration file `canal-mysql-source-config.yaml` and the consumer file `pulsar-client.py` to the Pulsar server.

    ```bash

    $ docker cp canal-mysql-source-config.yaml pulsar-standalone:/pulsar/conf/
    $ docker cp pulsar-client.py pulsar-standalone:/pulsar/

    ```

11. Download a Canal connector and start it.

    ```bash

    $ docker exec -it pulsar-standalone /bin/bash
    $ wget https://archive.apache.org/dist/pulsar/pulsar-2.3.0/connectors/pulsar-io-canal-2.3.0.nar -P connectors
    $ ./bin/pulsar-admin source localrun \
    --archive ./connectors/pulsar-io-canal-2.3.0.nar \
    --classname org.apache.pulsar.io.canal.CanalStringSource \
    --tenant public \
    --namespace default \
    --name canal \
    --destination-topic-name my-topic \
    --source-config-file /pulsar/conf/canal-mysql-source-config.yaml \
    --parallelism 1

    ```

12. Consume data from MySQL.

    ```bash

    $ docker exec -it pulsar-standalone /bin/bash
    $ python pulsar-client.py

    ```

13. Open another window and log in to the MySQL server.

    ```bash

    $ docker exec -it pulsar-mysql /bin/bash
    $ mysql -h 127.0.0.1 -uroot -pcanal

    ```

14. Create a table, and insert, delete, and update data in the MySQL server.

    ```bash

    mysql> use test;
    mysql> show tables;
    mysql> CREATE TABLE IF NOT EXISTS `test_table`(`test_id` INT UNSIGNED AUTO_INCREMENT,`test_title` VARCHAR(100) NOT NULL,
    `test_author` VARCHAR(40) NOT NULL,
    `test_date` DATE,PRIMARY KEY ( `test_id` ))ENGINE=InnoDB DEFAULT CHARSET=utf8;
    mysql> INSERT INTO test_table (test_title, test_author, test_date) VALUES("a", "b", NOW());
    mysql> UPDATE test_table SET test_title='c' WHERE test_title='a';
    mysql> DELETE FROM test_table WHERE test_title='c';

    ```

diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/io-cassandra-sink.md b/site2/website/versioned_docs/version-2.8.3-deprecated/io-cassandra-sink.md
deleted file mode 100644
index b27a754f49e182..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/io-cassandra-sink.md
+++ /dev/null
@@ -1,57 +0,0 @@
---
id: io-cassandra-sink
title: Cassandra sink connector
sidebar_label: "Cassandra sink connector"
original_id: io-cassandra-sink
---

The Cassandra sink connector pulls messages from Pulsar topics and persists them to Cassandra clusters.

## Configuration

The configuration of the Cassandra sink connector has the following properties.

### Property

| Name | Type | Required | Default | Description |
|------|------|----------|---------|-------------|
| `roots` | String | true | " " (empty string) | A comma-separated list of Cassandra hosts to connect to. |
| `keyspace` | String | true | " " (empty string) | The key space used for writing Pulsar messages.<br /><br />**Note: `keyspace` should be created prior to a Cassandra sink.** |
| `keyname` | String | true | " " (empty string) | The key name of the Cassandra column family.<br /><br />The column is used for storing Pulsar message keys.<br /><br />If a Pulsar message doesn't have any key associated, the message value is used as the key. |
| `columnFamily` | String | true | " " (empty string) | The Cassandra column family name.<br /><br />**Note: `columnFamily` should be created prior to a Cassandra sink.** |
| `columnName` | String | true | " " (empty string) | The column name of the Cassandra column family.<br /><br />The column is used for storing Pulsar message values. |

### Example

Before using the Cassandra sink connector, you need to create a configuration file through one of the following methods.

* JSON

  ```json

  {
      "roots": "localhost:9042",
      "keyspace": "pulsar_test_keyspace",
      "columnFamily": "pulsar_test_table",
      "keyname": "key",
      "columnName": "col"
  }

  ```

* YAML

  ```yaml

  configs:
      roots: "localhost:9042"
      keyspace: "pulsar_test_keyspace"
      columnFamily: "pulsar_test_table"
      keyname: "key"
      columnName: "col"

  ```

## Usage

For more information about **how to connect Pulsar with Cassandra**, see [here](io-quickstart.md#connect-pulsar-to-apache-cassandra).
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/io-cdc-debezium.md b/site2/website/versioned_docs/version-2.8.3-deprecated/io-cdc-debezium.md
deleted file mode 100644
index 293ccf2b35e8aa..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/io-cdc-debezium.md
+++ /dev/null
@@ -1,543 +0,0 @@
---
id: io-cdc-debezium
title: Debezium source connector
sidebar_label: "Debezium source connector"
original_id: io-cdc-debezium
---

The Debezium source connector pulls messages from MySQL or PostgreSQL and persists the messages to Pulsar topics.

## Configuration

The configuration of the Debezium source connector has the following properties.

| Name | Required | Default | Description |
|------|----------|---------|-------------|
| `task.class` | true | null | A source task class that is implemented in Debezium. |
| `database.hostname` | true | null | The address of a database server. |
| `database.port` | true | null | The port number of a database server. |
| `database.user` | true | null | The name of a database user that has the required privileges. |
| `database.password` | true | null | The password for a database user that has the required privileges. |
| `database.server.id` | true | null | The connector’s identifier that must be unique within a database cluster and similar to the database’s server-id configuration property. |
| `database.server.name` | true | null | The logical name of a database server/cluster, which forms a namespace and is used in all the names of the Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the Avro converter is used. |
| `database.whitelist` | false | null | A list of all databases hosted by this server that the connector monitors.<br /><br />This is optional, and there are other properties for listing databases and tables to include or exclude from monitoring. |
| `key.converter` | true | null | The converter provided by Kafka Connect to convert record key. |
| `value.converter` | true | null | The converter provided by Kafka Connect to convert record value. |
| `database.history` | true | null | The name of the database history class. |
| `database.history.pulsar.topic` | true | null | The name of the database history topic where the connector writes and recovers DDL statements.<br /><br />**Note: this topic is for internal use only and should not be used by consumers.** |
| `database.history.pulsar.service.url` | true | null | Pulsar cluster service URL for the history topic. |
| `pulsar.service.url` | true | null | Pulsar cluster service URL for the offset topic used in Debezium. You can use the `bin/pulsar-admin --admin-url http://pulsar:8080 sources localrun --source-config-file configs/pg-pulsar-config.yaml` command to point to the target Pulsar cluster. |
| `offset.storage.topic` | true | null | Records the last committed offsets that the connector successfully completes. |
| `mongodb.hosts` | true | null | The comma-separated list of hostname and port pairs (in the form 'host' or 'host:port') of the MongoDB servers in the replica set. The list contains a single hostname and port pair. If mongodb.members.auto.discover is set to false, the host and port pair are prefixed with the replica set name (e.g., rs0/localhost:27017). |
| `mongodb.name` | true | null | A unique name that identifies the connector and/or MongoDB replica set or sharded cluster that this connector monitors. Each server should be monitored by at most one Debezium connector, since this server name prefixes all persisted Kafka topics emanating from the MongoDB replica set or cluster. |
| `mongodb.user` | true | null | Name of the database user to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. |
| `mongodb.password` | true | null | Password to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. |
| `mongodb.task.id` | true | null | The taskId of the MongoDB connector that attempts to use a separate task for each replica set. |

## Example of MySQL

You need to create a configuration file before using the Pulsar Debezium connector.

### Configuration

You can use one of the following methods to create a configuration file.

* JSON

  ```json

  {
      "database.hostname": "localhost",
      "database.port": "3306",
      "database.user": "debezium",
      "database.password": "dbz",
      "database.server.id": "184054",
      "database.server.name": "dbserver1",
      "database.whitelist": "inventory",
      "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory",
      "database.history.pulsar.topic": "history-topic",
      "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650",
      "key.converter": "org.apache.kafka.connect.json.JsonConverter",
      "value.converter": "org.apache.kafka.connect.json.JsonConverter",
      "pulsar.service.url": "pulsar://127.0.0.1:6650",
      "offset.storage.topic": "offset-topic"
  }

  ```

* YAML

  You can create a `debezium-mysql-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/resources/debezium-mysql-source-config.yaml) below to the `debezium-mysql-source-config.yaml` file.
-
-  ```yaml
-
-  tenant: "public"
-  namespace: "default"
-  name: "debezium-mysql-source"
-  topicName: "debezium-mysql-topic"
-  archive: "connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar"
-  parallelism: 1
-
-  configs:
-
-     ## config for mysql, docker image: debezium/example-mysql:0.8
-     database.hostname: "localhost"
-     database.port: "3306"
-     database.user: "debezium"
-     database.password: "dbz"
-     database.server.id: "184054"
-     database.server.name: "dbserver1"
-     database.whitelist: "inventory"
-     database.history: "org.apache.pulsar.io.debezium.PulsarDatabaseHistory"
-     database.history.pulsar.topic: "history-topic"
-     database.history.pulsar.service.url: "pulsar://127.0.0.1:6650"
-
-     ## KEY_CONVERTER_CLASS_CONFIG, VALUE_CONVERTER_CLASS_CONFIG
-     key.converter: "org.apache.kafka.connect.json.JsonConverter"
-     value.converter: "org.apache.kafka.connect.json.JsonConverter"
-
-     ## PULSAR_SERVICE_URL_CONFIG
-     pulsar.service.url: "pulsar://127.0.0.1:6650"
-
-     ## OFFSET_STORAGE_TOPIC_CONFIG
-     offset.storage.topic: "offset-topic"
-
-  ```
-
-### Usage
-
-This example shows how to change the data of a MySQL table using the Pulsar Debezium connector.
-
-1. Start a MySQL server with a database from which Debezium can capture changes.
-
-   ```bash
-
-   $ docker run -it --rm \
-   --name mysql \
-   -p 3306:3306 \
-   -e MYSQL_ROOT_PASSWORD=debezium \
-   -e MYSQL_USER=mysqluser \
-   -e MYSQL_PASSWORD=mysqlpw debezium/example-mysql:0.8
-
-   ```
-
-2. Start a Pulsar service locally in standalone mode.
-
-   ```bash
-
-   $ bin/pulsar standalone
-
-   ```
-
-3. Start the Pulsar Debezium connector in local run mode using one of the following methods.
-
-   * Use the **JSON** configuration file as shown previously.
-
-     Make sure the nar file is available at `connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar`.
-
-     ```bash
-
-     $ bin/pulsar-admin source localrun \
-     --archive connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar \
-     --name debezium-mysql-source --destination-topic-name debezium-mysql-topic \
-     --tenant public \
-     --namespace default \
-     --source-config '{"database.hostname": "localhost","database.port": "3306","database.user": "debezium","database.password": "dbz","database.server.id": "184054","database.server.name": "dbserver1","database.whitelist": "inventory","database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory","database.history.pulsar.topic": "history-topic","database.history.pulsar.service.url": "pulsar://127.0.0.1:6650","key.converter": "org.apache.kafka.connect.json.JsonConverter","value.converter": "org.apache.kafka.connect.json.JsonConverter","pulsar.service.url": "pulsar://127.0.0.1:6650","offset.storage.topic": "offset-topic"}'
-
-     ```
-
-   * Use the **YAML** configuration file as shown previously.
-
-     ```bash
-
-     $ bin/pulsar-admin source localrun \
-     --source-config-file debezium-mysql-source-config.yaml
-
-     ```
-
-4. Subscribe to the topic _sub-products_ for the table _inventory.products_.
-
-   ```bash
-
-   $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0
-
-   ```
-
-5. Start a MySQL client in docker.
-
-   ```bash
-
-   $ docker run -it --rm \
-   --name mysqlterm \
-   --link mysql \
-   mysql:5.7 sh \
-   -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
-
-   ```
-
-6. A MySQL client prompt appears.
-
-   Use the following commands to change the data of the table _products_.
-
-   ```
-
-   mysql> use inventory;
-   mysql> show tables;
-   mysql> SELECT * FROM products;
-   mysql> UPDATE products SET name='1111111111' WHERE id=101;
-   mysql> UPDATE products SET name='1111111111' WHERE id=107;
-
-   ```
-
-   In the terminal window of the subscribing topic, you can see that the data changes have been recorded in the _sub-products_ topic.
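-
-If you prefer to consume these change events from an application instead of the `pulsar-client` CLI, the following is a minimal Java sketch. It is an illustrative example rather than part of the connector: it reads the raw event bytes, so it makes no assumption about the schema attached to the topic, and the class and subscription names (`ChangeEventPrinter`, `sub-products-app`) are hypothetical.
-
-```java
-
-import org.apache.pulsar.client.api.Consumer;
-import org.apache.pulsar.client.api.Message;
-import org.apache.pulsar.client.api.PulsarClient;
-
-public class ChangeEventPrinter {
-    public static void main(String[] args) throws Exception {
-        PulsarClient client = PulsarClient.builder()
-                .serviceUrl("pulsar://127.0.0.1:6650")
-                .build();
-
-        // Consume the per-table change-event topic as raw bytes.
-        Consumer<byte[]> consumer = client.newConsumer()
-                .topic("public/default/dbserver1.inventory.products")
-                .subscriptionName("sub-products-app") // hypothetical subscription
-                .subscribe();
-
-        while (true) {
-            Message<byte[]> msg = consumer.receive();
-            // With the JsonConverter configured above, the value is a JSON string.
-            System.out.println(new String(msg.getData()));
-            consumer.acknowledge(msg);
-        }
-    }
-}
-
-```
-
-The PostgreSQL and MongoDB examples below can be consumed in the same way.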
-
-## Example of PostgreSQL
-
-You need to create a configuration file before using the Pulsar Debezium connector.
-
-### Configuration
-
-You can use one of the following methods to create a configuration file.
-
-* JSON
-
-  ```json
-
-  {
-     "database.hostname": "localhost",
-     "database.port": "5432",
-     "database.user": "postgres",
-     "database.password": "postgres",
-     "database.dbname": "postgres",
-     "database.server.name": "dbserver1",
-     "schema.whitelist": "inventory",
-     "pulsar.service.url": "pulsar://127.0.0.1:6650"
-  }
-
-  ```
-
-* YAML
-
-  You can create a `debezium-postgres-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/resources/debezium-postgres-source-config.yaml) below to the `debezium-postgres-source-config.yaml` file.
-
-  ```yaml
-
-  tenant: "public"
-  namespace: "default"
-  name: "debezium-postgres-source"
-  topicName: "debezium-postgres-topic"
-  archive: "connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar"
-  parallelism: 1
-
-  configs:
-
-     ## config for pg, docker image: debezium/example-postgres:0.8
-     database.hostname: "localhost"
-     database.port: "5432"
-     database.user: "postgres"
-     database.password: "postgres"
-     database.dbname: "postgres"
-     database.server.name: "dbserver1"
-     schema.whitelist: "inventory"
-
-     ## PULSAR_SERVICE_URL_CONFIG
-     pulsar.service.url: "pulsar://127.0.0.1:6650"
-
-  ```
-
-### Usage
-
-This example shows how to change the data of a PostgreSQL table using the Pulsar Debezium connector.
-
-
-1. Start a PostgreSQL server with a database from which Debezium can capture changes.
-
-   ```bash
-
-   $ docker pull debezium/example-postgres:0.8
-   $ docker run -d -it --rm --name pulsar-postgresql -p 5432:5432 debezium/example-postgres:0.8
-
-   ```
-
-2. Start a Pulsar service locally in standalone mode.
-
-   ```bash
-
-   $ bin/pulsar standalone
-
-   ```
-
-3. Start the Pulsar Debezium connector in local run mode using one of the following methods.
-
-   * Use the **JSON** configuration file as shown previously.
-
-     Make sure the nar file is available at `connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar`.
-
-     ```bash
-
-     $ bin/pulsar-admin source localrun \
-     --archive connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar \
-     --name debezium-postgres-source \
-     --destination-topic-name debezium-postgres-topic \
-     --tenant public \
-     --namespace default \
-     --source-config '{"database.hostname": "localhost","database.port": "5432","database.user": "postgres","database.password": "postgres","database.dbname": "postgres","database.server.name": "dbserver1","schema.whitelist": "inventory","pulsar.service.url": "pulsar://127.0.0.1:6650"}'
-
-     ```
-
-   * Use the **YAML** configuration file as shown previously.
-
-     ```bash
-
-     $ bin/pulsar-admin source localrun \
-     --source-config-file debezium-postgres-source-config.yaml
-
-     ```
-
-4. Subscribe to the topic _sub-products_ for the _inventory.products_ table.
-
-   ```
-
-   $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0
-
-   ```
-
-5. Start a PostgreSQL client in docker.
-
-   ```bash
-
-   $ docker exec -it pulsar-postgresql /bin/bash
-
-   ```
-
-6. A PostgreSQL client prompt appears.
-
-   Use the following commands to change the data of the table _products_.
-
-   ```
-
-   psql -U postgres postgres
-   postgres=# \c postgres;
-   You are now connected to database "postgres" as user "postgres".
-   postgres=# SET search_path TO inventory;
-   SET
-   postgres=# select * from products;
-   id | name | description | weight
-   -----+--------------------+---------------------------------------------------------+--------
-   102 | car battery | 12V car battery | 8.1
-   103 | 12-pack drill bits | 12-pack of drill bits with sizes ranging from #40 to #3 | 0.8
-   104 | hammer | 12oz carpenter's hammer | 0.75
-   105 | hammer | 14oz carpenter's hammer | 0.875
-   106 | hammer | 16oz carpenter's hammer | 1
-   107 | rocks | box of assorted rocks | 5.3
-   108 | jacket | water resistent black wind breaker | 0.1
-   109 | spare tire | 24 inch spare tire | 22.2
-   101 | 1111111111 | Small 2-wheel scooter | 3.14
-   (9 rows)
-
-   postgres=# UPDATE products SET name='1111111111' WHERE id=107;
-   UPDATE 1
-
-   ```
-
-   In the terminal window of the subscribing topic, you can receive the following messages.
-
-   ```bash
-
-   ----- got message -----
-   {"schema":{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":107}} {"schema":{"type":"struct","fields":[{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":false,"field":"name"},{"type":"string","optional":true,"field":"description"},{"type":"double","optional":true,"field":"weight"}],"optional":true,"name":"dbserver1.inventory.products.Value","field":"before"},{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":false,"field":"name"},{"type":"string","optional":true,"field":"description"},{"type":"double","optional":true,"field":"weight"}],"optional":true,"name":"dbserver1.inventory.products.Value","field":"after"},{"type":"struct","fields":[{"type":"string","optional":true,"field":"version"},{"type":"string","optional":true,"field":"connector"},{"type":"string","optional":false,"field":"name"},{"type":"string","optional":false,"field":"db"},{"type":"int64","optional":true,"field":"ts_usec"},{"type":"int64","optional":true,"field":"txId"},{"type":"int64","optional":true,"field":"lsn"},{"type":"string","optional":true,"field":"schema"},{"type":"string","optional":true,"field":"table"},{"type":"boolean","optional":true,"default":false,"field":"snapshot"},{"type":"boolean","optional":true,"field":"last_snapshot_record"}],"optional":false,"name":"io.debezium.connector.postgresql.Source","field":"source"},{"type":"string","optional":false,"field":"op"},{"type":"int64","optional":true,"field":"ts_ms"}],"optional":false,"name":"dbserver1.inventory.products.Envelope"},"payload":{"before":{"id":107,"name":"rocks","description":"box of assorted rocks","weight":5.3},"after":{"id":107,"name":"1111111111","description":"box of assorted rocks","weight":5.3},"source":{"version":"0.9.2.Final","connector":"postgresql","name":"dbserver1","db":"postgres","ts_usec":1559208957661080,"txId":577,"lsn":23862872,"schema":"inventory","table":"products","snapshot":false,"last_snapshot_record":null},"op":"u","ts_ms":1559208957692}}
-
-   ```
-
-## Example of MongoDB
-
-You need to create a configuration file before using the Pulsar Debezium connector. You can use one of the following methods to create it.
-
-* JSON
-
-  ```json
-
-  {
-     "mongodb.hosts": "rs0/mongodb:27017",
-     "mongodb.name": "dbserver1",
-     "mongodb.user": "debezium",
-     "mongodb.password": "dbz",
-     "mongodb.task.id": "1",
-     "database.whitelist": "inventory",
-     "pulsar.service.url": "pulsar://127.0.0.1:6650"
-  }
-
-  ```
-
-* YAML
-
-  You can create a `debezium-mongodb-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mongodb/src/main/resources/debezium-mongodb-source-config.yaml) below to the `debezium-mongodb-source-config.yaml` file.
-
-  ```yaml
-
-  tenant: "public"
-  namespace: "default"
-  name: "debezium-mongodb-source"
-  topicName: "debezium-mongodb-topic"
-  archive: "connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar"
-  parallelism: 1
-
-  configs:
-
-     ## config for mongodb, docker image: debezium/example-mongodb:0.10
-     mongodb.hosts: "rs0/mongodb:27017"
-     mongodb.name: "dbserver1"
-     mongodb.user: "debezium"
-     mongodb.password: "dbz"
-     mongodb.task.id: "1"
-     database.whitelist: "inventory"
-
-     ## PULSAR_SERVICE_URL_CONFIG
-     pulsar.service.url: "pulsar://127.0.0.1:6650"
-
-  ```
-
-### Usage
-
-This example shows how to change the data of a MongoDB table using the Pulsar Debezium connector.
-
-
-1. Start a MongoDB server with a database from which Debezium can capture changes.
-
-   ```bash
-
-   $ docker pull debezium/example-mongodb:0.10
-   $ docker run -d -it --rm --name pulsar-mongodb -e MONGODB_USER=mongodb -e MONGODB_PASSWORD=mongodb -p 27017:27017 debezium/example-mongodb:0.10
-
-   ```
-
-   Use the following commands to initialize the data.
-
-   ```bash
-
-   ./usr/local/bin/init-inventory.sh
-
-   ```
-
-   If the local host cannot access the container network, you can update the `/etc/hosts` file and add a rule such as `127.0.0.1 6f114527a95f`, where `6f114527a95f` is the container ID. You can get the container ID with `docker ps -a`.
-
-
-2. Start a Pulsar service locally in standalone mode.
-
-   ```bash
-
-   $ bin/pulsar standalone
-
-   ```
-
-3. Start the Pulsar Debezium connector in local run mode using one of the following methods.
-
-   * Use the **JSON** configuration file as shown previously.
-
-     Make sure the nar file is available at `connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar`.
-
-     ```bash
-
-     $ bin/pulsar-admin source localrun \
-     --archive connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar \
-     --name debezium-mongodb-source \
-     --destination-topic-name debezium-mongodb-topic \
-     --tenant public \
-     --namespace default \
-     --source-config '{"mongodb.hosts": "rs0/mongodb:27017","mongodb.name": "dbserver1","mongodb.user": "debezium","mongodb.password": "dbz","mongodb.task.id": "1","database.whitelist": "inventory","pulsar.service.url": "pulsar://127.0.0.1:6650"}'
-
-     ```
-
-   * Use the **YAML** configuration file as shown previously.
-
-     ```bash
-
-     $ bin/pulsar-admin source localrun \
-     --source-config-file debezium-mongodb-source-config.yaml
-
-     ```
-
-4. Subscribe to the topic _sub-products_ for the _inventory.products_ table.
-
-   ```
-
-   $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0
-
-   ```
-
-5. Start a MongoDB client in docker.
-
-   ```bash
-
-   $ docker exec -it pulsar-mongodb /bin/bash
-
-   ```
-
-6. A MongoDB client prompt appears.
-
-   ```bash
-
-   mongo -u debezium -p dbz --authenticationDatabase admin localhost:27017/inventory
-   db.products.update({"_id":NumberLong(104)},{$set:{weight:1.25}})
-
-   ```
-
-   In the terminal window of the subscribing topic, you can receive the following messages.
-
-   ```bash
-
-   ----- got message -----
-   {"schema":{"type":"struct","fields":[{"type":"string","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":"104"}}, value = {"schema":{"type":"struct","fields":[{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"after"},{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"patch"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"version"},{"type":"string","optional":false,"field":"connector"},{"type":"string","optional":false,"field":"name"},{"type":"int64","optional":false,"field":"ts_ms"},{"type":"string","optional":true,"name":"io.debezium.data.Enum","version":1,"parameters":{"allowed":"true,last,false"},"default":"false","field":"snapshot"},{"type":"string","optional":false,"field":"db"},{"type":"string","optional":false,"field":"rs"},{"type":"string","optional":false,"field":"collection"},{"type":"int32","optional":false,"field":"ord"},{"type":"int64","optional":true,"field":"h"}],"optional":false,"name":"io.debezium.connector.mongo.Source","field":"source"},{"type":"string","optional":true,"field":"op"},{"type":"int64","optional":true,"field":"ts_ms"}],"optional":false,"name":"dbserver1.inventory.products.Envelope"},"payload":{"after":"{\"_id\": {\"$numberLong\": \"104\"},\"name\": \"hammer\",\"description\": \"12oz carpenter's hammer\",\"weight\": 1.25,\"quantity\": 4}","patch":null,"source":{"version":"0.10.0.Final","connector":"mongodb","name":"dbserver1","ts_ms":1573541905000,"snapshot":"true","db":"inventory","rs":"rs0","collection":"products","ord":1,"h":4983083486544392763},"op":"r","ts_ms":1573541909761}}.
-
-   ```
-
-## FAQ
-
-### Debezium PostgreSQL connector hangs when creating a snapshot
-
-```
-
-#18 prio=5 os_prio=31 tid=0x00007fd83096f800 nid=0xa403 waiting on condition [0x000070000f534000]
-   java.lang.Thread.State: WAITING (parking)
-    at sun.misc.Unsafe.park(Native Method)
-    - parking to wait for  <0x00000007ab025a58> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
-    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
-    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
-    at java.util.concurrent.LinkedBlockingDeque.putLast(LinkedBlockingDeque.java:396)
-    at java.util.concurrent.LinkedBlockingDeque.put(LinkedBlockingDeque.java:649)
-    at io.debezium.connector.base.ChangeEventQueue.enqueue(ChangeEventQueue.java:132)
-    at io.debezium.connector.postgresql.PostgresConnectorTask$Lambda$203/385424085.accept(Unknown Source)
-    at io.debezium.connector.postgresql.RecordsSnapshotProducer.sendCurrentRecord(RecordsSnapshotProducer.java:402)
-    at io.debezium.connector.postgresql.RecordsSnapshotProducer.readTable(RecordsSnapshotProducer.java:321)
-    at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$takeSnapshot$6(RecordsSnapshotProducer.java:226)
-    at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$240/1347039967.accept(Unknown Source)
-    at io.debezium.jdbc.JdbcConnection.queryWithBlockingConsumer(JdbcConnection.java:535)
-    at io.debezium.connector.postgresql.RecordsSnapshotProducer.takeSnapshot(RecordsSnapshotProducer.java:224)
-    at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$start$0(RecordsSnapshotProducer.java:87)
-    at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$206/589332928.run(Unknown Source)
-    at java.util.concurrent.CompletableFuture.uniRun(CompletableFuture.java:705)
-    at java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:717)
-    at java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2010)
-    at io.debezium.connector.postgresql.RecordsSnapshotProducer.start(RecordsSnapshotProducer.java:87)
-    at io.debezium.connector.postgresql.PostgresConnectorTask.start(PostgresConnectorTask.java:126)
-    at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:47)
-    at org.apache.pulsar.io.kafka.connect.KafkaConnectSource.open(KafkaConnectSource.java:127)
-    at org.apache.pulsar.io.debezium.DebeziumSource.open(DebeziumSource.java:100)
-    at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupInput(JavaInstanceRunnable.java:690)
-    at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupJavaInstance(JavaInstanceRunnable.java:200)
-    at org.apache.pulsar.functions.instance.JavaInstanceRunnable.run(JavaInstanceRunnable.java:230)
-    at java.lang.Thread.run(Thread.java:748)
-
-```
-
-If you encounter this problem when synchronizing data, refer to [this issue](https://github.com/apache/pulsar/issues/4075) and add the following configuration to the configuration file:
-
-```
-
-max.queue.size=
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/io-cdc.md b/site2/website/versioned_docs/version-2.8.3-deprecated/io-cdc.md
deleted file mode 100644
index e6e662884826de..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/io-cdc.md
+++ /dev/null
@@ -1,26 +0,0 @@
----
-id: io-cdc
-title: CDC connector
-sidebar_label: "CDC connector"
-original_id: io-cdc
----
-
-CDC source connectors capture log changes of databases (such as MySQL, MongoDB, and PostgreSQL) into Pulsar.
-
-> CDC source connectors are built on top of [Canal](https://github.com/alibaba/canal) and [Debezium](https://debezium.io/) and store all data in a Pulsar cluster in a persistent, replicated, and partitioned way.
-
-Currently, Pulsar has the following CDC connectors.
-
-Name|Java Class
-|---|---
-[Canal source connector](io-canal-source.md)|[org.apache.pulsar.io.canal.CanalStringSource.java](https://github.com/apache/pulsar/blob/master/pulsar-io/canal/src/main/java/org/apache/pulsar/io/canal/CanalStringSource.java)
-[Debezium source connector](io-cdc-debezium.md)|[org.apache.pulsar.io.debezium.DebeziumSource.java](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/core/src/main/java/org/apache/pulsar/io/debezium/DebeziumSource.java)<br/>[org.apache.pulsar.io.debezium.mysql.DebeziumMysqlSource.java](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/java/org/apache/pulsar/io/debezium/mysql/DebeziumMysqlSource.java)<br/>[org.apache.pulsar.io.debezium.postgres.DebeziumPostgresSource.java](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/java/org/apache/pulsar/io/debezium/postgres/DebeziumPostgresSource.java)
-
-For more information about Canal and Debezium, see the information below.
-
-Subject | Reference
-|---|---
-How to use Canal source connector with MySQL|[Canal guide](https://github.com/alibaba/canal/wiki)
-How does Canal work | [Canal tutorial](https://github.com/alibaba/canal/wiki)
-How to use Debezium source connector with MySQL | [Debezium guide](https://debezium.io/docs/connectors/mysql/)
-How does Debezium work | [Debezium tutorial](https://debezium.io/docs/tutorial/)
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/io-cli.md b/site2/website/versioned_docs/version-2.8.3-deprecated/io-cli.md
deleted file mode 100644
index 3d54bb61875e25..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/io-cli.md
+++ /dev/null
@@ -1,658 +0,0 @@
----
-id: io-cli
-title: Connector Admin CLI
-sidebar_label: "CLI"
-original_id: io-cli
----
-
-The `pulsar-admin` tool helps you manage Pulsar connectors.
-
-## `sources`
-
-An interface for managing Pulsar IO sources (ingress data into Pulsar).
-
-```bash
-
-$ pulsar-admin sources subcommands
-
-```
-
-Subcommands are:
-
-* `create`
-
-* `update`
-
-* `delete`
-
-* `get`
-
-* `status`
-
-* `list`
-
-* `stop`
-
-* `start`
-
-* `restart`
-
-* `localrun`
-
-* `available-sources`
-
-* `reload`
-
-
-### `create`
-
-Submit a Pulsar IO source connector to run in a Pulsar cluster.
-
-#### Usage
-
-```bash
-
-$ pulsar-admin sources create options
-
-```
-
-#### Options
-
-|Flag|Description|
-|----|---|
-| `-a`, `--archive` | The path to the NAR archive for the source.<br/>
    It also supports url-path (http/https/file [file protocol assumes that file already exists on worker host]) from which worker can download the package. -| `--classname` | The source's class name if `archive` is file-url-path (file://). -| `--cpu` | The CPU (in cores) that needs to be allocated per source instance (applicable only to Docker runtime). -| `--deserialization-classname` | The SerDe classname for the source. -| `--destination-topic-name` | The Pulsar topic to which data is sent. -| `--disk` | The disk (in bytes) that needs to be allocated per source instance (applicable only to Docker runtime). -|`--name` | The source's name. -| `--namespace` | The source's namespace. -| ` --parallelism` | The source's parallelism factor, that is, the number of source instances to run. -| `--processing-guarantees` | The processing guarantees (also named as delivery semantics) applied to the source. A source connector receives messages from external system and writes messages to a Pulsar topic. The `--processing-guarantees` is used to ensure the processing guarantees for writing messages to the Pulsar topic.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE. -| `--ram` | The RAM (in bytes) that needs to be allocated per source instance (applicable only to the process and Docker runtimes). -| `-st`, `--schema-type` | The schema type.
Either a builtin schema (for example, AVRO and JSON) or a custom schema class name to be used to encode messages emitted from the source.
-| `--source-config` | Source config key/values.
-| `--source-config-file` | The path to a YAML config file specifying the source's configuration.
-| `-t`, `--source-type` | The source's connector provider.
-| `--tenant` | The source's tenant.
-|`--producer-config`| The custom producer configuration (as a JSON string).
-
-### `update`
-
-Update an already submitted Pulsar IO source connector.
-
-#### Usage
-
-```bash
-
-$ pulsar-admin sources update options
-
-```
-
-#### Options
-
-|Flag|Description|
-|----|---|
-| `-a`, `--archive` | The path to the NAR archive for the source.<br/>
    It also supports url-path (http/https/file [file protocol assumes that file already exists on worker host]) from which worker can download the package. -| `--classname` | The source's class name if `archive` is file-url-path (file://). -| `--cpu` | The CPU (in cores) that needs to be allocated per source instance (applicable only to Docker runtime). -| `--deserialization-classname` | The SerDe classname for the source. -| `--destination-topic-name` | The Pulsar topic to which data is sent. -| `--disk` | The disk (in bytes) that needs to be allocated per source instance (applicable only to Docker runtime). -|`--name` | The source's name. -| `--namespace` | The source's namespace. -| ` --parallelism` | The source's parallelism factor, that is, the number of source instances to run. -| `--processing-guarantees` | The processing guarantees (also named as delivery semantics) applied to the source. A source connector receives messages from external system and writes messages to a Pulsar topic. The `--processing-guarantees` is used to ensure the processing guarantees for writing messages to the Pulsar topic.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE. -| `--ram` | The RAM (in bytes) that needs to be allocated per source instance (applicable only to the process and Docker runtimes). -| `-st`, `--schema-type` | The schema type.
    Either a builtin schema (for example, AVRO and JSON) or custom schema class name to be used to encode messages emitted from source. -| `--source-config` | Source config key/values. -| `--source-config-file` | The path to a YAML config file specifying the source's configuration. -| `-t`, `--source-type` | The source's connector provider. The `source-type` parameter of the currently built-in connectors is determined by the setting of the `name` parameter specified in the pulsar-io.yaml file. -| `--tenant` | The source's tenant. -| `--update-auth-data` | Whether or not to update the auth data.
    **Default value: false.** - - -### `delete` - -Delete a Pulsar IO source connector. - -#### Usage - -```bash - -$ pulsar-admin sources delete options - -``` - -#### Option - -|Flag|Description| -|---|---| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - -### `get` - -Get the information about a Pulsar IO source connector. - -#### Usage - -```bash - -$ pulsar-admin sources get options - -``` - -#### Options -|Flag|Description| -|---|---| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - - -### `status` - -Check the current status of a Pulsar Source. - -#### Usage - -```bash - -$ pulsar-admin sources status options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The source ID.
    If `instance-id` is not provided, Pulsar gets status of all instances.| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - -### `list` - -List all running Pulsar IO source connectors. - -#### Usage - -```bash - -$ pulsar-admin sources list options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - - -### `stop` - -Stop a source instance. - -#### Usage - -```bash - -$ pulsar-admin sources stop options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The source instanceID.
    If `instance-id` is not provided, Pulsar stops all instances.| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - -### `start` - -Start a source instance. - -#### Usage - -```bash - -$ pulsar-admin sources start options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The source instanceID.
    If `instance-id` is not provided, Pulsar starts all instances.| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - - -### `restart` - -Restart a source instance. - -#### Usage - -```bash - -$ pulsar-admin sources restart options - -``` - -#### Options -|Flag|Description| -|---|---| -|`--instance-id`|The source instanceID.
    If `instance-id` is not provided, Pulsar restarts all instances. -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - - -### `localrun` - -Run a Pulsar IO source connector locally rather than deploying it to the Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sources localrun options - -``` - -#### Options - -|Flag|Description| -|----|---| -| `-a`, `--archive` | The path to the NAR archive for the Source.
It also supports url-path (http/https/file [file protocol assumes that file already exists on worker host]) from which worker can download the package.
-| `--broker-service-url` | The URL for the Pulsar broker.
-|`--classname`|The source's class name if `archive` is file-url-path (file://).
-| `--client-auth-params` | Client authentication parameter.
-| `--client-auth-plugin` | The client authentication plugin that the function process uses to connect to the broker.
-|`--cpu`|The CPU (in cores) that needs to be allocated per source instance (applicable only to the Docker runtime).|
-|`--deserialization-classname`|The SerDe classname for the source.
-|`--destination-topic-name`|The Pulsar topic to which data is sent.
-|`--disk`|The disk (in bytes) that needs to be allocated per source instance (applicable only to the Docker runtime).|
-|`--hostname-verification-enabled`|Enable hostname verification.<br/>**Default value: false**.
-|`--name`|The source’s name.|
-|`--namespace`|The source’s namespace.|
-|`--parallelism`|The source’s parallelism factor, that is, the number of source instances to run.|
-|`--processing-guarantees` | The processing guarantees (also named as delivery semantics) applied to the source. A source connector receives messages from external system and writes messages to a Pulsar topic. The `--processing-guarantees` is used to ensure the processing guarantees for writing messages to the Pulsar topic.<br/>
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE. -|`--ram`|The RAM (in bytes) that needs to be allocated per source instance (applicable only to the Docker runtime).| -| `-st`, `--schema-type` | The schema type.
    Either a builtin schema (for example, AVRO and JSON) or custom schema class name to be used to encode messages emitted from source. -|`--source-config`|Source config key/values. -|`--source-config-file`|The path to a YAML config file specifying the source’s configuration. -|`--source-type`|The source's connector provider. -|`--tenant`|The source’s tenant. -|`--tls-allow-insecure`|Allow insecure tls connection.
    **Default value: false**. -|`--tls-trust-cert-path`|The tls trust cert file path. -|`--use-tls`|Use tls connection.
    **Default value: false**. -|`--producer-config`| The custom producer configuration (as a JSON string). - -### `available-sources` - -Get the list of Pulsar IO connector sources supported by Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sources available-sources - -``` - -### `reload` - -Reload the available built-in connectors. - -#### Usage - -```bash - -$ pulsar-admin sources reload - -``` - -## `sinks` - -An interface for managing Pulsar IO sinks (egress data from Pulsar). - -```bash - -$ pulsar-admin sinks subcommands - -``` - -Subcommands are: - -* `create` - -* `update` - -* `delete` - -* `get` - -* `status` - -* `list` - -* `stop` - -* `start` - -* `restart` - -* `localrun` - -* `available-sinks` - -* `reload` - - -### `create` - -Submit a Pulsar IO sink connector to run in a Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sinks create options - -``` - -#### Options - -|Flag|Description| -|----|---| -| `-a`, `--archive` | The path to the archive file for the sink.
    It also supports url-path (http/https/file [file protocol assumes that file already exists on worker host]) from which worker can download the package. -| `--auto-ack` | Whether or not the framework will automatically acknowledge messages. -| `--classname` | The sink's class name if `archive` is file-url-path (file://). -| `--cpu` | The CPU (in cores) that needs to be allocated per sink instance (applicable only to Docker runtime). -| `--custom-schema-inputs` | The map of input topics to schema types or class names (as a JSON string). -| `--custom-serde-inputs` | The map of input topics to SerDe class names (as a JSON string). -| `--disk` | The disk (in bytes) that needs to be allocated per sink instance (applicable only to Docker runtime). -|`-i, --inputs` | The sink's input topic or topics (multiple topics can be specified as a comma-separated list). -|`--name` | The sink's name. -| `--namespace` | The sink's namespace. -| ` --parallelism` | The sink's parallelism factor, that is, the number of sink instances to run. -| `--processing-guarantees` | The processing guarantees (also known as delivery semantics) applied to the sink. The `--processing-guarantees` implementation in Pulsar also relies on sink implementation.
The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE.
-| `--ram` | The RAM (in bytes) that needs to be allocated per sink instance (applicable only to the process and Docker runtimes).
-| `--retain-ordering` | Sink consumes and sinks messages in order.
-| `--sink-config` | sink config key/values.
-| `--sink-config-file` | The path to a YAML config file specifying the sink's configuration.
-| `-t`, `--sink-type` | The sink's connector provider. The `sink-type` parameter of the currently built-in connectors is determined by the setting of the `name` parameter specified in the pulsar-io.yaml file.
-| `--subs-name` | Pulsar source subscription name if user wants a specific subscription-name for input-topic consumer.
-| `--tenant` | The sink's tenant.
-| `--timeout-ms` | The message timeout in milliseconds.
-| `--topics-pattern` | The topics pattern to consume from a list of topics under a namespace that match the pattern.<br/>`--inputs` and `--topics-pattern` are mutually exclusive.<br/>Add the SerDe class name for a pattern in `--customSerdeInputs` (supported for Java functions only).
-
-### `update`
-
-Update a Pulsar IO sink connector.
-
-#### Usage
-
-```bash
-
-$ pulsar-admin sinks update options
-
-```
-
-#### Options
-
-|Flag|Description|
-|----|---|
-| `-a`, `--archive` | The path to the archive file for the sink.<br/>
    It also supports url-path (http/https/file [file protocol assumes that file already exists on worker host]) from which worker can download the package. -| `--auto-ack` | Whether or not the framework will automatically acknowledge messages. -| `--classname` | The sink's class name if `archive` is file-url-path (file://). -| `--cpu` | The CPU (in cores) that needs to be allocated per sink instance (applicable only to Docker runtime). -| `--custom-schema-inputs` | The map of input topics to schema types or class names (as a JSON string). -| `--custom-serde-inputs` | The map of input topics to SerDe class names (as a JSON string). -| `--disk` | The disk (in bytes) that needs to be allocated per sink instance (applicable only to Docker runtime). -|`-i, --inputs` | The sink's input topic or topics (multiple topics can be specified as a comma-separated list). -|`--name` | The sink's name. -| `--namespace` | The sink's namespace. -| ` --parallelism` | The sink's parallelism factor, that is, the number of sink instances to run. -| `--processing-guarantees` | The processing guarantees (also known as delivery semantics) applied to the sink. The `--processing-guarantees` implementation in Pulsar also relies on sink implementation.
The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE.
-| `--ram` | The RAM (in bytes) that needs to be allocated per sink instance (applicable only to the process and Docker runtimes).
-| `--retain-ordering` | Sink consumes and sinks messages in order.
-| `--sink-config` | sink config key/values.
-| `--sink-config-file` | The path to a YAML config file specifying the sink's configuration.
-| `-t`, `--sink-type` | The sink's connector provider.
-| `--subs-name` | Pulsar source subscription name if user wants a specific subscription-name for input-topic consumer.
-| `--tenant` | The sink's tenant.
-| `--timeout-ms` | The message timeout in milliseconds.
-| `--topics-pattern` | The topics pattern to consume from a list of topics under a namespace that match the pattern.<br/>`--inputs` and `--topics-pattern` are mutually exclusive.<br/>Add the SerDe class name for a pattern in `--customSerdeInputs` (supported for Java functions only).
-| `--update-auth-data` | Whether or not to update the auth data.<br/>
    **Default value: false.** - -### `delete` - -Delete a Pulsar IO sink connector. - -#### Usage - -```bash - -$ pulsar-admin sinks delete options - -``` - -#### Option - -|Flag|Description| -|---|---| -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - -### `get` - -Get the information about a Pulsar IO sink connector. - -#### Usage - -```bash - -$ pulsar-admin sinks get options - -``` - -#### Options -|Flag|Description| -|---|---| -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - - -### `status` - -Check the current status of a Pulsar sink. - -#### Usage - -```bash - -$ pulsar-admin sinks status options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The sink ID.
    If `instance-id` is not provided, Pulsar gets status of all instances.| -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - - -### `list` - -List all running Pulsar IO sink connectors. - -#### Usage - -```bash - -$ pulsar-admin sinks list options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - - -### `stop` - -Stop a sink instance. - -#### Usage - -```bash - -$ pulsar-admin sinks stop options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The sink instanceID.
    If `instance-id` is not provided, Pulsar stops all instances.| -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - -### `start` - -Start a sink instance. - -#### Usage - -```bash - -$ pulsar-admin sinks start options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The sink instanceID.
    If `instance-id` is not provided, Pulsar starts all instances.| -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - - -### `restart` - -Restart a sink instance. - -#### Usage - -```bash - -$ pulsar-admin sinks restart options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The sink instanceID.
    If `instance-id` is not provided, Pulsar restarts all instances. -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - - -### `localrun` - -Run a Pulsar IO sink connector locally rather than deploying it to the Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sinks localrun options - -``` - -#### Options - -|Flag|Description| -|----|---| -| `-a`, `--archive` | The path to the archive file for the sink.
It also supports url-path (http/https/file [file protocol assumes that file already exists on worker host]) from which worker can download the package.
-| `--auto-ack` | Whether or not the framework will automatically acknowledge messages.
-| `--broker-service-url` | The URL for the Pulsar broker.
-|`--classname`|The sink's class name if `archive` is file-url-path (file://).
-| `--client-auth-params` | Client authentication parameter.
-| `--client-auth-plugin` | The client authentication plugin that the function process uses to connect to the broker.
-|`--cpu`|The CPU (in cores) that needs to be allocated per sink instance (applicable only to the Docker runtime).
-| `--custom-schema-inputs` | The map of input topics to Schema types or class names (as a JSON string).
-| `--max-redeliver-count` | Maximum number of times that a message is redelivered before being sent to the dead letter queue.
-| `--dead-letter-topic` | Name of the dead letter topic where the failing messages are sent.
-| `--custom-serde-inputs` | The map of input topics to SerDe class names (as a JSON string).
-|`--disk`|The disk (in bytes) that needs to be allocated per sink instance (applicable only to the Docker runtime).|
-|`--hostname-verification-enabled`|Enable hostname verification.<br/>**Default value: false**.
-| `-i`, `--inputs` | The sink's input topic or topics (multiple topics can be specified as a comma-separated list).
-|`--name`|The sink’s name.|
-|`--namespace`|The sink’s namespace.|
-|`--parallelism`|The sink’s parallelism factor, that is, the number of sink instances to run.|
-|`--processing-guarantees`|The processing guarantees (also known as delivery semantics) applied to the sink. The `--processing-guarantees` implementation in Pulsar also relies on sink implementation.<br/>
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE. -|`--ram`|The RAM (in bytes) that needs to be allocated per sink instance (applicable only to the Docker runtime).| -|`--retain-ordering` | Sink consumes and sinks messages in order. -|`--sink-config`|sink config key/values. -|`--sink-config-file`|The path to a YAML config file specifying the sink’s configuration. -|`--sink-type`|The sink's connector provider. -|`--subs-name` | Pulsar source subscription name if user wants a specific subscription-name for input-topic consumer. -|`--tenant`|The sink’s tenant. -| `--timeout-ms` | The message timeout in milliseconds. -| `--negative-ack-redelivery-delay-ms` | The negatively-acknowledged message redelivery delay in milliseconds. | -|`--tls-allow-insecure`|Allow insecure tls connection.
**Default value: false**.
-|`--tls-trust-cert-path`|The tls trust cert file path.
-| `--topics-pattern` | The topics pattern to consume from a list of topics under a namespace that match the pattern.<br/>`--inputs` and `--topics-pattern` are mutually exclusive.<br/>Add the SerDe class name for a pattern in `--customSerdeInputs` (supported for Java functions only).
-|`--use-tls`|Use tls connection.<br/>
    **Default value: false**. - -### `available-sinks` - -Get the list of Pulsar IO connector sinks supported by Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sinks available-sinks - -``` - -### `reload` - -Reload the available built-in connectors. - -#### Usage - -```bash - -$ pulsar-admin sinks reload - -``` - diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/io-connectors.md b/site2/website/versioned_docs/version-2.8.3-deprecated/io-connectors.md deleted file mode 100644 index 8db368e0e70637..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/io-connectors.md +++ /dev/null @@ -1,232 +0,0 @@ ---- -id: io-connectors -title: Built-in connector -sidebar_label: "Built-in connector" -original_id: io-connectors ---- - -Pulsar distribution includes a set of common connectors that have been packaged and tested with the rest of Apache Pulsar. These connectors import and export data from some of the most commonly used data systems. - -Using any of these connectors is as easy as writing a simple connector and running the connector locally or submitting the connector to a Pulsar Functions cluster. - -## Source connector - -Pulsar has various source connectors, which are sorted alphabetically as below. - -### Canal - -* [Configuration](io-canal-source.md#configuration) - -* [Example](io-canal-source.md#usage) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/canal/src/main/java/org/apache/pulsar/io/canal/CanalStringSource.java) - - -### Debezium MySQL - -* [Configuration](io-debezium-source.md#configuration) - -* [Example](io-debezium-source.md#example-of-mysql) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/java/org/apache/pulsar/io/debezium/mysql/DebeziumMysqlSource.java) - -### Debezium PostgreSQL - -* [Configuration](io-debezium-source.md#configuration) - -* [Example](io-debezium-source.md#example-of-postgresql) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/java/org/apache/pulsar/io/debezium/postgres/DebeziumPostgresSource.java) - -### Debezium MongoDB - -* [Configuration](io-debezium-source.md#configuration) - -* [Example](io-debezium-source.md#example-of-mongodb) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mongodb/src/main/java/org/apache/pulsar/io/debezium/mongodb/DebeziumMongoDbSource.java) - -### DynamoDB - -* [Configuration](io-dynamodb-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/dynamodb/src/main/java/org/apache/pulsar/io/dynamodb/DynamoDBSource.java) - -### File - -* [Configuration](io-file-source.md#configuration) - -* [Example](io-file-source.md#usage) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/file/src/main/java/org/apache/pulsar/io/file/FileSource.java) - -### Flume - -* [Configuration](io-flume-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/java/org/apache/pulsar/io/flume/FlumeConnector.java) - -### Twitter firehose - -* [Configuration](io-twitter-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/twitter/src/main/java/org/apache/pulsar/io/twitter/TwitterFireHose.java) - -### Kafka - -* [Configuration](io-kafka-source.md#configuration) - -* [Example](io-kafka-source.md#usage) - -* [Java 
class](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSource.java) - -### Kinesis - -* [Configuration](io-kinesis-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/kinesis/src/main/java/org/apache/pulsar/io/kinesis/KinesisSource.java) - -### Netty - -* [Configuration](io-netty-source.md#configuration) - -* [Example of TCP](io-netty-source.md#tcp) - -* [Example of HTTP](io-netty-source.md#http) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/netty/src/main/java/org/apache/pulsar/io/netty/NettySource.java) - -### NSQ - -* [Configuration](io-nsq-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/nsq/src/main/java/org/apache/pulsar/io/nsq/NSQSource.java) - -### RabbitMQ - -* [Configuration](io-rabbitmq-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/rabbitmq/src/main/java/org/apache/pulsar/io/rabbitmq/RabbitMQSource.java) - -## Sink connector - -Pulsar has various sink connectors, which are sorted alphabetically as below. - -### Aerospike - -* [Configuration](io-aerospike-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/aerospike/src/main/java/org/apache/pulsar/io/aerospike/AerospikeStringSink.java) - -### Cassandra - -* [Configuration](io-cassandra-sink.md#configuration) - -* [Example](io-cassandra-sink.md#usage) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/cassandra/src/main/java/org/apache/pulsar/io/cassandra/CassandraStringSink.java) - -### ElasticSearch - -* [Configuration](io-elasticsearch-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/elastic-search/src/main/java/org/apache/pulsar/io/elasticsearch/ElasticSearchSink.java) - -### Flume - -* [Configuration](io-flume-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/java/org/apache/pulsar/io/flume/sink/StringSink.java) - -### HBase - -* [Configuration](io-hbase-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/hbase/src/main/java/org/apache/pulsar/io/hbase/HbaseAbstractConfig.java) - -### HDFS2 - -* [Configuration](io-hdfs2-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/hdfs2/src/main/java/org/apache/pulsar/io/hdfs2/AbstractHdfsConnector.java) - -### HDFS3 - -* [Configuration](io-hdfs3-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/hdfs3/src/main/java/org/apache/pulsar/io/hdfs3/AbstractHdfsConnector.java) - -### InfluxDB - -* [Configuration](io-influxdb-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/influxdb/src/main/java/org/apache/pulsar/io/influxdb/InfluxDBGenericRecordSink.java) - -### JDBC ClickHouse - -* [Configuration](io-jdbc-sink.md#configuration) - -* [Example](io-jdbc-sink.md#example-for-clickhouse) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/jdbc/clickhouse/src/main/java/org/apache/pulsar/io/jdbc/ClickHouseJdbcAutoSchemaSink.java) - -### JDBC MariaDB - -* [Configuration](io-jdbc-sink.md#configuration) - -* [Example](io-jdbc-sink.md#example-for-mariadb) - -* [Java 
class](https://github.com/apache/pulsar/blob/master/pulsar-io/jdbc/mariadb/src/main/java/org/apache/pulsar/io/jdbc/MariadbJdbcAutoSchemaSink.java)
-
-### JDBC PostgreSQL
-
-* [Configuration](io-jdbc-sink.md#configuration)
-
-* [Example](io-jdbc-sink.md#example-for-postgresql)
-
-* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/jdbc/postgres/src/main/java/org/apache/pulsar/io/jdbc/PostgresJdbcAutoSchemaSink.java)
-
-### JDBC SQLite
-
-* [Configuration](io-jdbc-sink.md#configuration)
-
-* [Example](io-jdbc-sink.md#example-for-sqlite)
-
-* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/jdbc/sqlite/src/main/java/org/apache/pulsar/io/jdbc/SqliteJdbcAutoSchemaSink.java)
-
-### Kafka
-
-* [Configuration](io-kafka-sink.md#configuration)
-
-* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSink.java)
-
-### Kinesis
-
-* [Configuration](io-kinesis-sink.md#configuration)
-
-* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/kinesis/src/main/java/org/apache/pulsar/io/kinesis/KinesisSink.java)
-
-### MongoDB
-
-* [Configuration](io-mongo-sink.md#configuration)
-
-* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/mongo/src/main/java/org/apache/pulsar/io/mongodb/MongoSink.java)
-
-### RabbitMQ
-
-* [Configuration](io-rabbitmq-sink.md#configuration)
-
-* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/rabbitmq/src/main/java/org/apache/pulsar/io/rabbitmq/RabbitMQSink.java)
-
-### Redis
-
-* [Configuration](io-redis-sink.md#configuration)
-
-* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/redis/src/main/java/org/apache/pulsar/io/redis/RedisAbstractConfig.java)
-
-### Solr
-
-* [Configuration](io-solr-sink.md#configuration)
-
-* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/solr/src/main/java/org/apache/pulsar/io/solr/SolrSinkConfig.java)
-
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/io-debezium-source.md b/site2/website/versioned_docs/version-2.8.3-deprecated/io-debezium-source.md
deleted file mode 100644
index 8c3ba0cb20f252..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/io-debezium-source.md
+++ /dev/null
@@ -1,621 +0,0 @@
----
-id: io-debezium-source
-title: Debezium source connector
-sidebar_label: "Debezium source connector"
-original_id: io-debezium-source
----
-
-The Debezium source connector pulls messages from MySQL, PostgreSQL, or MongoDB
-and persists the messages to Pulsar topics.
-
-## Configuration
-
-The configuration of the Debezium source connector has the following properties.
-
-| Name | Required | Default | Description |
-|------|----------|---------|-------------|
-| `task.class` | true | null | A source task class that is implemented in Debezium. |
-| `database.hostname` | true | null | The address of a database server. |
-| `database.port` | true | null | The port number of a database server.|
-| `database.user` | true | null | The name of a database user that has the required privileges. |
-| `database.password` | true | null | The password for a database user that has the required privileges. |
-| `database.server.id` | true | null | The connector's identifier that must be unique within a database cluster and is similar to the database's server-id configuration property. |
-| `database.server.name` | true | null | The logical name of a database server/cluster, which forms a namespace and is used in all the names of Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the Avro Connector is used. |
-| `database.whitelist` | false | null | A list of all databases hosted by this server that are monitored by the connector.<br/><br/>This is optional, and there are other properties for listing databases and tables to include or exclude from monitoring. |
-| `key.converter` | true | null | The converter provided by Kafka Connect to convert the record key. |
-| `value.converter` | true | null | The converter provided by Kafka Connect to convert the record value. |
-| `database.history` | true | null | The name of the database history class. |
-| `database.history.pulsar.topic` | true | null | The name of the database history topic where the connector writes and recovers DDL statements.<br/><br/>**Note: this topic is for internal use only and should not be used by consumers.** |
-| `database.history.pulsar.service.url` | true | null | Pulsar cluster service URL for the history topic. |
-| `pulsar.service.url` | true | null | Pulsar cluster service URL. |
-| `offset.storage.topic` | true | null | The topic on which the connector records the last successfully committed offsets. |
-| `json-with-envelope` | false | false | Whether the consumed message includes the schema envelope; when false, the message consists of the payload only. |
-
-### Converter Options
-
-1. org.apache.kafka.connect.json.JsonConverter
-
-The `json-with-envelope` config is valid only for the JsonConverter. Its default value is false; in that case, the consumer uses the schema
-`Schema.KeyValue(Schema.AUTO_CONSUME(), Schema.AUTO_CONSUME(), KeyValueEncodingType.SEPARATED)`,
-and the message consists of the payload only.
-
-If the `json-with-envelope` value is true, the consumer uses the schema
-`Schema.KeyValue(Schema.BYTES, Schema.BYTES)`, and the message consists of both schema and payload. A consumer sketch that uses the default setting is shown at the end of the MySQL example below.
-
-2. org.apache.pulsar.kafka.shade.io.confluent.connect.avro.AvroConverter
-
-If users select the AvroConverter, the Pulsar consumer should use the schema `Schema.KeyValue(Schema.AUTO_CONSUME(),
-Schema.AUTO_CONSUME(), KeyValueEncodingType.SEPARATED)`, and the message consists of the payload.
-
-### MongoDB Configuration
-| Name | Required | Default | Description |
-|------|----------|---------|-------------|
-| `mongodb.hosts` | true | null | The comma-separated list of hostname and port pairs (in the form 'host' or 'host:port') of the MongoDB servers in the replica set. The list can contain a single hostname and port pair. If mongodb.members.auto.discover is set to false, the host and port pair are prefixed with the replica set name (e.g., rs0/localhost:27017). |
-| `mongodb.name` | true | null | A unique name that identifies the connector and/or the MongoDB replica set or sharded cluster that this connector monitors. Each server should be monitored by at most one Debezium connector, since this server name prefixes all persisted Kafka topics emanating from the MongoDB replica set or cluster. |
-| `mongodb.user` | true | null | Name of the database user to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. |
-| `mongodb.password` | true | null | Password to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. |
-| `mongodb.task.id` | true | null | The taskId of the MongoDB connector that attempts to use a separate task for each replica set. |
-
-
-
-## Example of MySQL
-
-You need to create a configuration file before using the Pulsar Debezium connector.
-
-### Configuration
-
-You can use one of the following methods to create a configuration file.
- -* JSON - - ```json - - { - "database.hostname": "localhost", - "database.port": "3306", - "database.user": "debezium", - "database.password": "dbz", - "database.server.id": "184054", - "database.server.name": "dbserver1", - "database.whitelist": "inventory", - "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory", - "database.history.pulsar.topic": "history-topic", - "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650", - "key.converter": "org.apache.kafka.connect.json.JsonConverter", - "value.converter": "org.apache.kafka.connect.json.JsonConverter", - "pulsar.service.url": "pulsar://127.0.0.1:6650", - "offset.storage.topic": "offset-topic" - } - - ``` - -* YAML - - You can create a `debezium-mysql-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/resources/debezium-mysql-source-config.yaml) below to the `debezium-mysql-source-config.yaml` file. - - ```yaml - - tenant: "public" - namespace: "default" - name: "debezium-mysql-source" - topicName: "debezium-mysql-topic" - archive: "connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar" - parallelism: 1 - - configs: - - ## config for mysql, docker image: debezium/example-mysql:0.8 - database.hostname: "localhost" - database.port: "3306" - database.user: "debezium" - database.password: "dbz" - database.server.id: "184054" - database.server.name: "dbserver1" - database.whitelist: "inventory" - database.history: "org.apache.pulsar.io.debezium.PulsarDatabaseHistory" - database.history.pulsar.topic: "history-topic" - database.history.pulsar.service.url: "pulsar://127.0.0.1:6650" - - ## KEY_CONVERTER_CLASS_CONFIG, VALUE_CONVERTER_CLASS_CONFIG - key.converter: "org.apache.kafka.connect.json.JsonConverter" - value.converter: "org.apache.kafka.connect.json.JsonConverter" - - ## PULSAR_SERVICE_URL_CONFIG - pulsar.service.url: "pulsar://127.0.0.1:6650" - - ## OFFSET_STORAGE_TOPIC_CONFIG - offset.storage.topic: "offset-topic" - - ``` - -### Usage - -This example shows how to change the data of a MySQL table using the Pulsar Debezium connector. - -1. Start a MySQL server with a database from which Debezium can capture changes. - - ```bash - - $ docker run -it --rm \ - --name mysql \ - -p 3306:3306 \ - -e MYSQL_ROOT_PASSWORD=debezium \ - -e MYSQL_USER=mysqluser \ - -e MYSQL_PASSWORD=mysqlpw debezium/example-mysql:0.8 - - ``` - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - -3. Start the Pulsar Debezium connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - Make sure the nar file is available at `connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar`. 
-
-   ```bash
-
-   $ bin/pulsar-admin source localrun \
-   --archive connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar \
-   --name debezium-mysql-source --destination-topic-name debezium-mysql-topic \
-   --tenant public \
-   --namespace default \
-   --source-config '{"database.hostname": "localhost","database.port": "3306","database.user": "debezium","database.password": "dbz","database.server.id": "184054","database.server.name": "dbserver1","database.whitelist": "inventory","database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory","database.history.pulsar.topic": "history-topic","database.history.pulsar.service.url": "pulsar://127.0.0.1:6650","key.converter": "org.apache.kafka.connect.json.JsonConverter","value.converter": "org.apache.kafka.connect.json.JsonConverter","pulsar.service.url": "pulsar://127.0.0.1:6650","offset.storage.topic": "offset-topic"}'
-
-   ```
-
-   :::note
-
-   Currently, the destination topic (specified by the `destination-topic-name` option) is a required configuration but it is not used for the Debezium connector to save data. The Debezium connector saves data in the following 4 types of topics:
-
-   - One topic named with the database server name (`database.server.name`) for storing the database metadata messages, such as `public/default/database.server.name`.
-   - One topic (`database.history.pulsar.topic`) for storing the database history information. The connector writes and recovers DDL statements on this topic.
-   - One topic (`offset.storage.topic`) for storing the offset metadata messages. The connector saves the last successfully-committed offsets on this topic.
-   - One per-table topic. The connector writes change events for all operations that occur in a table to a single Pulsar topic that is specific to that table.
-
-   If the automatic topic creation is disabled on your broker, you need to manually create the above 4 types of topics and the destination topic.
-
-   :::
-
-   * Use the **YAML** configuration file as shown previously.
-
-     ```bash
-
-     $ bin/pulsar-admin source localrun \
-     --source-config-file debezium-mysql-source-config.yaml
-
-     ```
-
-4. Subscribe to the topic _sub-products_ for the table _inventory.products_.
-
-   ```bash
-
-   $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0
-
-   ```
-
-5. Start a MySQL client in docker.
-
-   ```bash
-
-   $ docker run -it --rm \
-   --name mysqlterm \
-   --link mysql \
-   mysql:5.7 sh \
-   -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
-
-   ```
-
-6. A MySQL client prompt appears.
-
-   Use the following commands to change the data of the table _products_.
-
-   ```
-
-   mysql> use inventory;
-   mysql> show tables;
-   mysql> SELECT * FROM products;
-   mysql> UPDATE products SET name='1111111111' WHERE id=101;
-   mysql> UPDATE products SET name='1111111111' WHERE id=107;
-
-   ```
-
-   In the terminal window where you subscribed to the topic, you can see that the data changes have been captured in the _sub-products_ topic.
-
-## Example of PostgreSQL
-
-You need to create a configuration file before using the Pulsar Debezium connector.
-
-### Configuration
-
-You can use one of the following methods to create a configuration file.
- -* JSON - - ```json - - { - "database.hostname": "localhost", - "database.port": "5432", - "database.user": "postgres", - "database.password": "changeme", - "database.dbname": "postgres", - "database.server.name": "dbserver1", - "plugin.name": "pgoutput", - "schema.whitelist": "public", - "table.whitelist": "public.users", - "pulsar.service.url": "pulsar://127.0.0.1:6650" - } - - ``` - -* YAML - - You can create a `debezium-postgres-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/resources/debezium-postgres-source-config.yaml) below to the `debezium-postgres-source-config.yaml` file. - - ```yaml - - tenant: "public" - namespace: "default" - name: "debezium-postgres-source" - topicName: "debezium-postgres-topic" - archive: "connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar" - parallelism: 1 - - configs: - - ## config for postgres version 10+, official docker image: postgres:<10+> - database.hostname: "localhost" - database.port: "5432" - database.user: "postgres" - database.password: "changeme" - database.dbname: "postgres" - database.server.name: "dbserver1" - plugin.name: "pgoutput" - schema.whitelist: "public" - table.whitelist: "public.users" - - ## PULSAR_SERVICE_URL_CONFIG - pulsar.service.url: "pulsar://127.0.0.1:6650" - - ``` - -Notice that `pgoutput` is a standard plugin of Postgres introduced in version 10 - [see Postgres architecture docu](https://www.postgresql.org/docs/10/logical-replication-architecture.html). You don't need to install anything, just make sure the WAL level is set to `logical` (see docker command below and [Postgres docu](https://www.postgresql.org/docs/current/runtime-config-wal.html)). - -### Usage - -This example shows how to change the data of a PostgreSQL table using the Pulsar Debezium connector. - - -1. Start a PostgreSQL server with a database from which Debezium can capture changes. - - ```bash - - $ docker run -d -it --rm \ - --name pulsar-postgres \ - -p 5432:5432 \ - -e POSTGRES_PASSWORD=changeme \ - postgres:13.3 -c wal_level=logical - - ``` - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - -3. Start the Pulsar Debezium connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - Make sure the nar file is available at `connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar`. - - ```bash - - $ bin/pulsar-admin source localrun \ - --archive connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar \ - --name debezium-postgres-source \ - --destination-topic-name debezium-postgres-topic \ - --tenant public \ - --namespace default \ - --source-config '{"database.hostname": "localhost","database.port": "5432","database.user": "postgres","database.password": "changeme","database.dbname": "postgres","database.server.name": "dbserver1","schema.whitelist": "public","table.whitelist": "public.users","pulsar.service.url": "pulsar://127.0.0.1:6650"}' - - ``` - - :::note - - Currently, the destination topic (specified by the `destination-topic-name` option ) is a required configuration but it is not used for the Debezium connector to save data. The Debezium connector saves data in the following 4 types of topics: - - - One topic named with the database server name ( `database.server.name`) for storing the database metadata messages, such as `public/default/database.server.name`. 
-  - One topic (`database.history.pulsar.topic`) for storing the database history information. The connector writes and recovers DDL statements on this topic.
-  - One topic (`offset.storage.topic`) for storing the offset metadata messages. The connector saves the last successfully-committed offsets on this topic.
-  - One per-table topic. The connector writes change events for all operations that occur in a table to a single Pulsar topic that is specific to that table.
-
-  If the automatic topic creation is disabled on your broker, you need to manually create the above 4 types of topics and the destination topic.
-
-  :::
-
-  * Use the **YAML** configuration file as shown previously.
-
-    ```bash
-
-    $ bin/pulsar-admin source localrun \
-    --source-config-file debezium-postgres-source-config.yaml
-
-    ```
-
-4. Subscribe to the topic _sub-users_ for the _public.users_ table.
-
-   ```
-
-   $ bin/pulsar-client consume -s "sub-users" public/default/dbserver1.public.users -n 0
-
-   ```
-
-5. Start a PostgreSQL client in docker.
-
-   ```bash
-
-   $ docker exec -it pulsar-postgres /bin/bash
-
-   ```
-
-6. A PostgreSQL client prompt appears.
-
-   Use the following commands to create sample data in the table _users_.
-
-   ```
-
-   psql -U postgres -h localhost -p 5432
-   Password for user postgres:
-
-   CREATE TABLE users(
-     id BIGINT GENERATED ALWAYS AS IDENTITY, PRIMARY KEY(id),
-     hash_firstname TEXT NOT NULL,
-     hash_lastname TEXT NOT NULL,
-     gender VARCHAR(6) NOT NULL CHECK (gender IN ('male', 'female'))
-   );
-
-   INSERT INTO users(hash_firstname, hash_lastname, gender)
-     SELECT md5(RANDOM()::TEXT), md5(RANDOM()::TEXT), CASE WHEN RANDOM() < 0.5 THEN 'male' ELSE 'female' END FROM generate_series(1, 100);
-
-   postgres=# select * from users;
-
-    id |          hash_firstname          |          hash_lastname           | gender
-   ----+----------------------------------+----------------------------------+--------
-     1 | 02bf7880eb489edc624ba637f5ab42bd | 3e742c2cc4217d8e3382cc251415b2fb | female
-     2 | dd07064326bb9119189032316158f064 | 9c0e938f9eddbd5200ba348965afbc61 | male
-     3 | 2c5316fdd9d6595c1cceb70eed12e80c | 8a93d7d8f9d76acfaaa625c82a03ea8b | female
-     4 | 3dfa3b4f70d8cd2155567210e5043d2b | 32c156bc28f7f03ab5d28e2588a3dc19 | female
-
-
-   postgres=# UPDATE users SET hash_firstname='maxim' WHERE id=1;
-   UPDATE 1
-
-   ```
-
-   In the terminal window where you subscribed to the topic, you can see the following messages.
-
-   ```bash
-
-   ----- got message -----
-   {"before":null,"after":{"id":1,"hash_firstname":"maxim","hash_lastname":"292113d30a3ccee0e19733dd7f88b258","gender":"male"},"source":{"version":"1.0.0.Final","connector":"postgresql","name":"foobar","ts_ms":1624045862644,"snapshot":"false","db":"postgres","schema":"public","table":"users","txId":595,"lsn":24419784,"xmin":null},"op":"u","ts_ms":1624045862648}
-   ...many more
-
-   ```
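-The CLI consumer above prints the raw change events. To consume the same change events from Java, you can use the `KeyValue` schema described in the Converter Options section above. The following is a minimal, illustrative sketch rather than part of the original example: it assumes the topic `public/default/dbserver1.public.users` and the standalone service URL from this example, and the class and subscription names (`DebeziumChangeEventConsumer`, `sub-users-java`) are made up for illustration.
-
-```java
-
-import org.apache.pulsar.client.api.Consumer;
-import org.apache.pulsar.client.api.Message;
-import org.apache.pulsar.client.api.PulsarClient;
-import org.apache.pulsar.client.api.Schema;
-import org.apache.pulsar.client.api.schema.GenericRecord;
-import org.apache.pulsar.common.schema.KeyValue;
-import org.apache.pulsar.common.schema.KeyValueEncodingType;
-
-public class DebeziumChangeEventConsumer {
-    public static void main(String[] args) throws Exception {
-        PulsarClient client = PulsarClient.builder()
-                .serviceUrl("pulsar://127.0.0.1:6650")
-                .build();
-
-        // With the default json-with-envelope=false, the key and the value are
-        // decoded with AUTO_CONSUME and the SEPARATED encoding type.
-        Consumer<KeyValue<GenericRecord, GenericRecord>> consumer = client
-                .newConsumer(Schema.KeyValue(Schema.AUTO_CONSUME(), Schema.AUTO_CONSUME(),
-                        KeyValueEncodingType.SEPARATED))
-                .topic("public/default/dbserver1.public.users")
-                .subscriptionName("sub-users-java")
-                .subscribe();
-
-        while (true) {
-            Message<KeyValue<GenericRecord, GenericRecord>> msg = consumer.receive();
-            KeyValue<GenericRecord, GenericRecord> kv = msg.getValue();
-            // The key identifies the changed row; the value carries the Debezium envelope.
-            System.out.println("key = " + kv.getKey() + ", value = " + kv.getValue());
-            consumer.acknowledge(msg);
-        }
-    }
-}
-
-```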
-## Example of MongoDB
-
-You need to create a configuration file before using the Pulsar Debezium connector.
-
-### Configuration
-
-You can use one of the following methods to create a configuration file.
-
-* JSON
-
-  ```json
-
-  {
-     "mongodb.hosts": "rs0/mongodb:27017",
-     "mongodb.name": "dbserver1",
-     "mongodb.user": "debezium",
-     "mongodb.password": "dbz",
-     "mongodb.task.id": "1",
-     "database.whitelist": "inventory",
-     "pulsar.service.url": "pulsar://127.0.0.1:6650"
-  }
-
-  ```
-
-* YAML
-
-  You can create a `debezium-mongodb-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mongodb/src/main/resources/debezium-mongodb-source-config.yaml) below to the `debezium-mongodb-source-config.yaml` file.
-
-  ```yaml
-
-  tenant: "public"
-  namespace: "default"
-  name: "debezium-mongodb-source"
-  topicName: "debezium-mongodb-topic"
-  archive: "connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar"
-  parallelism: 1
-
-  configs:
-
-    ## config for mongodb, docker image: debezium/example-mongodb:0.10
-    mongodb.hosts: "rs0/mongodb:27017"
-    mongodb.name: "dbserver1"
-    mongodb.user: "debezium"
-    mongodb.password: "dbz"
-    mongodb.task.id: "1"
-    database.whitelist: "inventory"
-
-    ## PULSAR_SERVICE_URL_CONFIG
-    pulsar.service.url: "pulsar://127.0.0.1:6650"
-
-  ```
-
-### Usage
-
-This example shows how to change the data of a MongoDB table using the Pulsar Debezium connector.
-
-
-1. Start a MongoDB server with a database from which Debezium can capture changes.
-
-   ```bash
-
-   $ docker pull debezium/example-mongodb:0.10
-   $ docker run -d -it --rm --name pulsar-mongodb -e MONGODB_USER=mongodb -e MONGODB_PASSWORD=mongodb -p 27017:27017 debezium/example-mongodb:0.10
-
-   ```
-
-   Use the following commands to initialize the data.
-
-   ``` bash
-
-   ./usr/local/bin/init-inventory.sh
-
-   ```
-
-   If the local host cannot access the container network, you can update the `/etc/hosts` file and add a rule such as `127.0.0.1 f114527a95f`, where `f114527a95f` is the container id that you can get with `docker ps -a`.
-
-
-2. Start a Pulsar service locally in standalone mode.
-
-   ```bash
-
-   $ bin/pulsar standalone
-
-   ```
-
-3. Start the Pulsar Debezium connector in local run mode using one of the following methods.
-
-   * Use the **JSON** configuration file as shown previously.
-
-     Make sure the nar file is available at `connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar`.
-
-     ```bash
-
-     $ bin/pulsar-admin source localrun \
-     --archive connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar \
-     --name debezium-mongodb-source \
-     --destination-topic-name debezium-mongodb-topic \
-     --tenant public \
-     --namespace default \
-     --source-config '{"mongodb.hosts": "rs0/mongodb:27017","mongodb.name": "dbserver1","mongodb.user": "debezium","mongodb.password": "dbz","mongodb.task.id": "1","database.whitelist": "inventory","pulsar.service.url": "pulsar://127.0.0.1:6650"}'
-
-     ```
-
-     :::note
-
-     Currently, the destination topic (specified by the `destination-topic-name` option) is a required configuration but it is not used for the Debezium connector to save data. The Debezium connector saves data in the following 4 types of topics:
-
-     - One topic named with the database server name (`database.server.name`) for storing the database metadata messages, such as `public/default/database.server.name`.
-     - One topic (`database.history.pulsar.topic`) for storing the database history information. The connector writes and recovers DDL statements on this topic.
-     - One topic (`offset.storage.topic`) for storing the offset metadata messages. The connector saves the last successfully-committed offsets on this topic.
-     - One per-table topic.
The connector writes change events for all operations that occur in a table to a single Pulsar topic that is specific to that table. - - If the automatic topic creation is disabled on your broker, you need to manually create the above 4 types of topics and the destination topic. - - ::: - - * Use the **YAML** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin source localrun \ - --source-config-file debezium-mongodb-source-config.yaml - - ``` - -4. Subscribe the topic _sub-products_ for the _inventory.products_ table. - - ``` - - $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0 - - ``` - -5. Start a MongoDB client in docker. - - ```bash - - $ docker exec -it pulsar-mongodb /bin/bash - - ``` - -6. A MongoDB client pops out. - - ```bash - - mongo -u debezium -p dbz --authenticationDatabase admin localhost:27017/inventory - db.products.update({"_id":NumberLong(104)},{$set:{weight:1.25}}) - - ``` - - In the terminal window of subscribing topic, you can receive the following messages. - - ```bash - - ----- got message ----- - {"schema":{"type":"struct","fields":[{"type":"string","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":"104"}}, value = {"schema":{"type":"struct","fields":[{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"after"},{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"patch"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"version"},{"type":"string","optional":false,"field":"connector"},{"type":"string","optional":false,"field":"name"},{"type":"int64","optional":false,"field":"ts_ms"},{"type":"string","optional":true,"name":"io.debezium.data.Enum","version":1,"parameters":{"allowed":"true,last,false"},"default":"false","field":"snapshot"},{"type":"string","optional":false,"field":"db"},{"type":"string","optional":false,"field":"rs"},{"type":"string","optional":false,"field":"collection"},{"type":"int32","optional":false,"field":"ord"},{"type":"int64","optional":true,"field":"h"}],"optional":false,"name":"io.debezium.connector.mongo.Source","field":"source"},{"type":"string","optional":true,"field":"op"},{"type":"int64","optional":true,"field":"ts_ms"}],"optional":false,"name":"dbserver1.inventory.products.Envelope"},"payload":{"after":"{\"_id\": {\"$numberLong\": \"104\"},\"name\": \"hammer\",\"description\": \"12oz carpenter's hammer\",\"weight\": 1.25,\"quantity\": 4}","patch":null,"source":{"version":"0.10.0.Final","connector":"mongodb","name":"dbserver1","ts_ms":1573541905000,"snapshot":"true","db":"inventory","rs":"rs0","collection":"products","ord":1,"h":4983083486544392763},"op":"r","ts_ms":1573541909761}}. 
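-
-   # Note: the two lines below are editorial annotation, not consumer output.
-   # In the Debezium envelope above, "op":"r" marks a snapshot read, "ts_ms" is the event
-   # timestamp, and the "after" field carries the full MongoDB document state as escaped JSON.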
-
-   ```
-
-## FAQ
-
-### Debezium postgres connector hangs when creating a snapshot
-
-```
-
-#18 prio=5 os_prio=31 tid=0x00007fd83096f800 nid=0xa403 waiting on condition [0x000070000f534000]
-    java.lang.Thread.State: WAITING (parking)
-     at sun.misc.Unsafe.park(Native Method)
-     - parking to wait for <0x00000007ab025a58> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
-     at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
-     at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
-     at java.util.concurrent.LinkedBlockingDeque.putLast(LinkedBlockingDeque.java:396)
-     at java.util.concurrent.LinkedBlockingDeque.put(LinkedBlockingDeque.java:649)
-     at io.debezium.connector.base.ChangeEventQueue.enqueue(ChangeEventQueue.java:132)
-     at io.debezium.connector.postgresql.PostgresConnectorTask$Lambda$203/385424085.accept(Unknown Source)
-     at io.debezium.connector.postgresql.RecordsSnapshotProducer.sendCurrentRecord(RecordsSnapshotProducer.java:402)
-     at io.debezium.connector.postgresql.RecordsSnapshotProducer.readTable(RecordsSnapshotProducer.java:321)
-     at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$takeSnapshot$6(RecordsSnapshotProducer.java:226)
-     at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$240/1347039967.accept(Unknown Source)
-     at io.debezium.jdbc.JdbcConnection.queryWithBlockingConsumer(JdbcConnection.java:535)
-     at io.debezium.connector.postgresql.RecordsSnapshotProducer.takeSnapshot(RecordsSnapshotProducer.java:224)
-     at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$start$0(RecordsSnapshotProducer.java:87)
-     at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$206/589332928.run(Unknown Source)
-     at java.util.concurrent.CompletableFuture.uniRun(CompletableFuture.java:705)
-     at java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:717)
-     at java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2010)
-     at io.debezium.connector.postgresql.RecordsSnapshotProducer.start(RecordsSnapshotProducer.java:87)
-     at io.debezium.connector.postgresql.PostgresConnectorTask.start(PostgresConnectorTask.java:126)
-     at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:47)
-     at org.apache.pulsar.io.kafka.connect.KafkaConnectSource.open(KafkaConnectSource.java:127)
-     at org.apache.pulsar.io.debezium.DebeziumSource.open(DebeziumSource.java:100)
-     at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupInput(JavaInstanceRunnable.java:690)
-     at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupJavaInstance(JavaInstanceRunnable.java:200)
-     at org.apache.pulsar.functions.instance.JavaInstanceRunnable.run(JavaInstanceRunnable.java:230)
-     at java.lang.Thread.run(Thread.java:748)
-
-```
-
-If you encounter the above problem when synchronizing data, refer to [this issue](https://github.com/apache/pulsar/issues/4075) and add the following configuration to the configuration file to enlarge the change-event queue:
-
-```
-
-max.queue.size=<a value larger than the default 8192>
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/io-debug.md b/site2/website/versioned_docs/version-2.8.3-deprecated/io-debug.md
deleted file mode 100644
index 844e101d00d2a7..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/io-debug.md
+++ /dev/null
@@ -1,407 +0,0 @@
----
-id: io-debug
-title: How to debug Pulsar connectors
-sidebar_label: "Debug"
-original_id: io-debug
----
-This guide explains how to debug
connectors in localrun or cluster mode and gives a debugging checklist.
-To better demonstrate how to debug Pulsar connectors, this guide takes the Mongo sink connector as an example.
-
-**Deploy a Mongo sink environment**
-1. Start a Mongo service.
-
-   ```bash
-
-   docker pull mongo:4
-   docker run -d -p 27017:27017 --name pulsar-mongo -v $PWD/data:/data/db mongo:4
-
-   ```
-
-2. Create a DB and a collection.
-
-   ```bash
-
-   docker exec -it pulsar-mongo /bin/bash
-   mongo
-   > use pulsar
-   > db.createCollection('messages')
-   > exit
-
-   ```
-
-3. Start Pulsar standalone.
-
-   ```bash
-
-   docker pull apachepulsar/pulsar:2.4.0
-   docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --link pulsar-mongo --name pulsar-mongo-standalone apachepulsar/pulsar:2.4.0 bin/pulsar standalone
-
-   ```
-
-4. Configure the Mongo sink with the `mongo-sink-config.yaml` file.
-
-   ```yaml
-
-   configs:
-   mongoUri: "mongodb://pulsar-mongo:27017"
-   database: "pulsar"
-   collection: "messages"
-   batchSize: 2
-   batchTimeMs: 500
-
-   ```
-
-   ```bash
-
-   docker cp mongo-sink-config.yaml pulsar-mongo-standalone:/pulsar/
-
-   ```
-
-5. Download the Mongo sink nar package.
-
-   ```bash
-
-   docker exec -it pulsar-mongo-standalone /bin/bash
-   curl -O http://apache.01link.hk/pulsar/pulsar-2.4.0/connectors/pulsar-io-mongo-2.4.0.nar
-
-   ```
-
-## Debug in localrun mode
-Start the Mongo sink in localrun mode using the `localrun` command.
-:::tip
-
-For more information about the `localrun` command, see [`localrun`](reference-connector-admin.md/#localrun-1).
-
-:::
-
-```bash
-
-./bin/pulsar-admin sinks localrun \
---archive pulsar-io-mongo-2.4.0.nar \
---tenant public --namespace default \
---inputs test-mongo \
---name pulsar-mongo-sink \
---sink-config-file mongo-sink-config.yaml \
---parallelism 1
-
-```
-
-### Use connector log
-Use one of the following methods to get a connector log in localrun mode:
-* After executing the `localrun` command, the **log is automatically printed on the console**.
-* The log is located at:
-
-  ```bash
-
-  logs/functions/tenant/namespace/function-name/function-name-instance-id.log
-
-  ```
-
-  **Example**
-
-  The path of the Mongo sink connector is:
-
-  ```bash
-
-  logs/functions/public/default/pulsar-mongo-sink/pulsar-mongo-sink-0.log
-
-  ```
-
-To clearly explain the log information, this guide breaks the large block of information into small blocks and adds a description for each block.
-* This piece of log information shows the storage path of the nar package after decompression.
-
-  ```
-
-  08:21:54.132 [main] INFO org.apache.pulsar.common.nar.NarClassLoader - Created class loader with paths: [file:/tmp/pulsar-nar/pulsar-io-mongo-2.4.0.nar-unpacked/, file:/tmp/pulsar-nar/pulsar-io-mongo-2.4.0.nar-unpacked/META-INF/bundled-dependencies/,
-
-  ```
-
-  :::tip
-
-  If a `class cannot be found` exception is thrown, check whether the nar file is decompressed in the folder `file:/tmp/pulsar-nar/pulsar-io-mongo-2.4.0.nar-unpacked/META-INF/bundled-dependencies/` or not.
-
-  :::
-
-* This piece of log information illustrates the basic information about the Mongo sink connector, such as tenant, namespace, name, parallelism, resources, and so on, which can be used to **check whether the Mongo sink connector is configured correctly or not**.
- - ```bash - - 08:21:55.390 [main] INFO org.apache.pulsar.functions.runtime.ThreadRuntime - ThreadContainer starting function with instance config InstanceConfig(instanceId=0, functionId=853d60a1-0c48-44d5-9a5c-6917386476b2, functionVersion=c2ce1458-b69e-4175-88c0-a0a856a2be8c, functionDetails=tenant: "public" - namespace: "default" - name: "pulsar-mongo-sink" - className: "org.apache.pulsar.functions.api.utils.IdentityFunction" - autoAck: true - parallelism: 1 - source { - typeClassName: "[B" - inputSpecs { - key: "test-mongo" - value { - } - } - cleanupSubscription: true - } - sink { - className: "org.apache.pulsar.io.mongodb.MongoSink" - configs: "{\"mongoUri\":\"mongodb://pulsar-mongo:27017\",\"database\":\"pulsar\",\"collection\":\"messages\",\"batchSize\":2,\"batchTimeMs\":500}" - typeClassName: "[B" - } - resources { - cpu: 1.0 - ram: 1073741824 - disk: 10737418240 - } - componentType: SINK - , maxBufferedTuples=1024, functionAuthenticationSpec=null, port=38459, clusterName=local) - - ``` - -* This piece of log information demonstrates the status of the connections to Mongo and configuration information. - - ```bash - - 08:21:56.231 [cluster-ClusterId{value='5d6396a3c9e77c0569ff00eb', description='null'}-pulsar-mongo:27017] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:1, serverValue:8}] to pulsar-mongo:27017 - 08:21:56.326 [cluster-ClusterId{value='5d6396a3c9e77c0569ff00eb', description='null'}-pulsar-mongo:27017] INFO org.mongodb.driver.cluster - Monitor thread successfully connected to server with description ServerDescription{address=pulsar-mongo:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[4, 2, 0]}, minWireVersion=0, maxWireVersion=8, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=89058800} - - ``` - -* This piece of log information explains the configuration of consumers and clients, including the topic name, subscription name, subscription type, and so on. 
- - ```bash - - 08:21:56.719 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl - Starting Pulsar consumer status recorder with config: { - "topicNames" : [ "test-mongo" ], - "topicsPattern" : null, - "subscriptionName" : "public/default/pulsar-mongo-sink", - "subscriptionType" : "Shared", - "receiverQueueSize" : 1000, - "acknowledgementsGroupTimeMicros" : 100000, - "negativeAckRedeliveryDelayMicros" : 60000000, - "maxTotalReceiverQueueSizeAcrossPartitions" : 50000, - "consumerName" : null, - "ackTimeoutMillis" : 0, - "tickDurationMillis" : 1000, - "priorityLevel" : 0, - "cryptoFailureAction" : "CONSUME", - "properties" : { - "application" : "pulsar-sink", - "id" : "public/default/pulsar-mongo-sink", - "instance_id" : "0" - }, - "readCompacted" : false, - "subscriptionInitialPosition" : "Latest", - "patternAutoDiscoveryPeriod" : 1, - "regexSubscriptionMode" : "PersistentOnly", - "deadLetterPolicy" : null, - "autoUpdatePartitions" : true, - "replicateSubscriptionState" : false, - "resetIncludeHead" : false - } - 08:21:56.726 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl - Pulsar client config: { - "serviceUrl" : "pulsar://localhost:6650", - "authPluginClassName" : null, - "authParams" : null, - "operationTimeoutMs" : 30000, - "statsIntervalSeconds" : 60, - "numIoThreads" : 1, - "numListenerThreads" : 1, - "connectionsPerBroker" : 1, - "useTcpNoDelay" : true, - "useTls" : false, - "tlsTrustCertsFilePath" : null, - "tlsAllowInsecureConnection" : false, - "tlsHostnameVerificationEnable" : false, - "concurrentLookupRequest" : 5000, - "maxLookupRequest" : 50000, - "maxNumberOfRejectedRequestPerConnection" : 50, - "keepAliveIntervalSeconds" : 30, - "connectionTimeoutMs" : 10000, - "requestTimeoutMs" : 60000, - "defaultBackoffIntervalNanos" : 100000000, - "maxBackoffIntervalNanos" : 30000000000 - } - - ``` - -## Debug in cluster mode -You can use the following methods to debug a connector in cluster mode: -* [Use connector log](#use-connector-log) -* [Use admin CLI](#use-admin-cli) -### Use connector log -In cluster mode, multiple connectors can run on a worker. To find the log path of a specified connector, use the `workerId` to locate the connector log. -### Use admin CLI -Pulsar admin CLI helps you debug Pulsar connectors with the following subcommands: -* [`get`](#get) - -* [`status`](#status) -* [`topics stats`](#topics-stats) - -**Create a Mongo sink** - -```bash - -./bin/pulsar-admin sinks create \ ---archive pulsar-io-mongo-2.4.0.nar \ ---tenant public \ ---namespace default \ ---inputs test-mongo \ ---name pulsar-mongo-sink \ ---sink-config-file mongo-sink-config.yaml \ ---parallelism 1 - -``` - -### `get` -Use the `get` command to get the basic information about the Mongo sink connector, such as tenant, namespace, name, parallelism, and so on. 
- -```bash - -./bin/pulsar-admin sinks get --tenant public --namespace default --name pulsar-mongo-sink -{ - "tenant": "public", - "namespace": "default", - "name": "pulsar-mongo-sink", - "className": "org.apache.pulsar.io.mongodb.MongoSink", - "inputSpecs": { - "test-mongo": { - "isRegexPattern": false - } - }, - "configs": { - "mongoUri": "mongodb://pulsar-mongo:27017", - "database": "pulsar", - "collection": "messages", - "batchSize": 2.0, - "batchTimeMs": 500.0 - }, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "autoAck": true -} - -``` - -:::tip - -For more information about the `get` command, see [`get`](reference-connector-admin.md/#get-1). - -::: - -### `status` -Use the `status` command to get the current status about the Mongo sink connector, such as the number of instance, the number of running instance, instanceId, workerId and so on. - -```bash - -./bin/pulsar-admin sinks status ---tenant public \ ---namespace default \ ---name pulsar-mongo-sink -{ -"numInstances" : 1, -"numRunning" : 1, -"instances" : [ { - "instanceId" : 0, - "status" : { - "running" : true, - "error" : "", - "numRestarts" : 0, - "numReadFromPulsar" : 0, - "numSystemExceptions" : 0, - "latestSystemExceptions" : [ ], - "numSinkExceptions" : 0, - "latestSinkExceptions" : [ ], - "numWrittenToSink" : 0, - "lastReceivedTime" : 0, - "workerId" : "c-standalone-fw-5d202832fd18-8080" - } -} ] -} - -``` - -:::tip - -For more information about the `status` command, see [`status`](reference-connector-admin.md/#stauts-1). -If there are multiple connectors running on a worker, `workerId` can locate the worker on which the specified connector is running. - -::: - -### `topics stats` -Use the `topics stats` command to get the stats for a topic and its connected producer and consumer, such as whether the topic has received messages or not, whether there is a backlog of messages or not, the available permits and other key information. All rates are computed over a 1-minute window and are relative to the last completed 1-minute period. - -```bash - -./bin/pulsar-admin topics stats test-mongo -{ - "msgRateIn" : 0.0, - "msgThroughputIn" : 0.0, - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "averageMsgSize" : 0.0, - "storageSize" : 1, - "publishers" : [ ], - "subscriptions" : { - "public/default/pulsar-mongo-sink" : { - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "msgRateRedeliver" : 0.0, - "msgBacklog" : 0, - "blockedSubscriptionOnUnackedMsgs" : false, - "msgDelayed" : 0, - "unackedMessages" : 0, - "type" : "Shared", - "msgRateExpired" : 0.0, - "consumers" : [ { - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "msgRateRedeliver" : 0.0, - "consumerName" : "dffdd", - "availablePermits" : 999, - "unackedMessages" : 0, - "blockedConsumerOnUnackedMsgs" : false, - "metadata" : { - "instance_id" : "0", - "application" : "pulsar-sink", - "id" : "public/default/pulsar-mongo-sink" - }, - "connectedSince" : "2019-08-26T08:48:07.582Z", - "clientVersion" : "2.4.0", - "address" : "/172.17.0.3:57790" - } ], - "isReplicated" : false - } - }, - "replication" : { }, - "deduplicationStatus" : "Disabled" -} - -``` - -:::tip - -For more information about the `topic stats` command, see [`topic stats`](http://pulsar.apache.org/docs/en/pulsar-admin/#stats-1). - -::: - -## Checklist -This checklist indicates the major areas to check when you debug connectors. It is a reminder of what to look for to ensure a thorough review and an evaluation tool to get the status of connectors. 
-* Does Pulsar start successfully? - -* Does the external service run normally? - -* Is the nar package complete? - -* Is the connector configuration file correct? - -* In localrun mode, run a connector and check the printed information (connector log) on the console. - -* In cluster mode: - - * Use the `get` command to get the basic information. - - * Use the `status` command to get the current status. - * Use the `topics stats` command to get the stats for a specified topic and its connected producers and consumers. - - * Check the connector log. -* Enter into the external system and verify the result. diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/io-develop.md b/site2/website/versioned_docs/version-2.8.3-deprecated/io-develop.md deleted file mode 100644 index d6f4f8261ac820..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/io-develop.md +++ /dev/null @@ -1,421 +0,0 @@ ---- -id: io-develop -title: How to develop Pulsar connectors -sidebar_label: "Develop" -original_id: io-develop ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -This guide describes how to develop Pulsar connectors to move data -between Pulsar and other systems. - -Pulsar connectors are special [Pulsar Functions](functions-overview.md), so creating -a Pulsar connector is similar to creating a Pulsar function. - -Pulsar connectors come in two types: - -| Type | Description | Example -|---|---|--- -{@inject: github:Source:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java}|Import data from another system to Pulsar.|[RabbitMQ source connector](io-rabbitmq.md) imports the messages of a RabbitMQ queue to a Pulsar topic. -{@inject: github:Sink:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java}|Export data from Pulsar to another system.|[Kinesis sink connector](io-kinesis.md) exports the messages of a Pulsar topic to a Kinesis stream. - -## Develop - -You can develop Pulsar source connectors and sink connectors. - -### Source - -Developing a source connector is to implement the {@inject: github:Source:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} -interface, which means you need to implement the {@inject: github:open:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} method and the {@inject: github:read:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} method. - -1. Implement the {@inject: github:open:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} method. - - ```java - - /** - * Open connector with configuration - * - * @param config initialization config - * @param sourceContext - * @throws Exception IO type exceptions when opening a connector - */ - void open(final Map config, SourceContext sourceContext) throws Exception; - - ``` - - This method is called when the source connector is initialized. - - In this method, you can retrieve all connector specific settings through the passed-in `config` parameter and initialize all necessary resources. - - For example, a Kafka connector can create a Kafka client in this `open` method. - - Besides, Pulsar runtime also provides a `SourceContext` for the - connector to access runtime resources for tasks like collecting metrics. The implementation can save the `SourceContext` for future use. - -2. Implement the {@inject: github:read:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} method. - - ```java - - /** - * Reads the next message from source. 
- * If source does not have any new messages, this call should block.
- * @return next message from source. The return result should never be null
- * @throws Exception
- */
- Record<T> read() throws Exception;
-
- ```
-
- If there is nothing to return, the implementation should block rather than return `null`.
-
- The returned {@inject: github:Record:/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/Record.java} should encapsulate the following information, which is needed by Pulsar IO runtime.
-
- * {@inject: github:Record:/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/Record.java} should provide the following variables:
-
-  |Variable|Required|Description
-  |---|---|---
-  `TopicName`|No|Pulsar topic name from which the record originated.
-  `Key`|No| Messages can optionally be tagged with keys.&#10;For more information, see [Routing modes](concepts-messaging.md#routing-modes).|
-  `Value`|Yes|Actual data of the record.
-  `EventTime`|No|Event time of the record from the source.
-  `PartitionId`|No| If the record is originated from a partitioned source, it returns its `PartitionId`.&#10;`PartitionId` is used as a part of the unique identifier by Pulsar IO runtime to deduplicate messages and achieve exactly-once processing guarantee.
-  `RecordSequence`|No|If the record is originated from a sequential source, it returns its `RecordSequence`.&#10;`RecordSequence` is used as a part of the unique identifier by Pulsar IO runtime to deduplicate messages and achieve exactly-once processing guarantee.
-  `Properties` |No| If the record carries user-defined properties, it returns those properties.
-  `DestinationTopic`|No|Topic to which message should be written.
-  `Message`|No|A class which carries data sent by users.&#10;
    For more information, see [Message.java](https://github.com/apache/pulsar/blob/master/pulsar-client-api/src/main/java/org/apache/pulsar/client/api/Message.java).| - - * {@inject: github:Record:/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/Record.java} should provide the following methods: - - Method|Description - |---|--- - `ack` |Acknowledge that the record is fully processed. - `fail`|Indicate that the record fails to be processed. - -## Handle schema information - -Pulsar IO automatically handles the schema and provides a strongly typed API based on Java generics. -If you know the schema type that you are producing, you can declare the Java class relative to that type in your sink declaration. - -``` - -public class MySource implements Source { - public Record read() {} -} - -``` - -If you want to implement a source that works with any schema, you can go with `byte[]` (of `ByteBuffer`) and use Schema.AUTO_PRODUCE_BYTES(). - -``` - -public class MySource implements Source { - public Record read() { - - Schema wantedSchema = .... - Record myRecord = new MyRecordImplementation(); - .... - } - class MyRecordImplementation implements Record { - public byte[] getValue() { - return ....encoded byte[]...that represents the value - } - public Schema getSchema() { - return Schema.AUTO_PRODUCE_BYTES(wantedSchema); - } - } -} - -``` - -To handle the `KeyValue` type properly, follow the guidelines for your record implementation: -- It must implement {@inject: github:Record:/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/KVRecord.java} interface and implement `getKeySchema`,`getValueSchema`, and `getKeyValueEncodingType` -- It must return a `KeyValue` object as `Record.getValue()` -- It may return null in `Record.getSchema()` - -When Pulsar IO runtime encounters a `KVRecord`, it brings the following changes automatically: -- Set properly the `KeyValueSchema` -- Encode the Message Key and the Message Value according to the `KeyValueEncoding` (SEPARATED or INLINE) - -:::tip - -For more information about **how to create a source connector**, see {@inject: github:KafkaSource:/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSource.java}. - -::: - -### Sink - -Developing a sink connector **is similar to** developing a source connector, that is, you need to implement the {@inject: github:Sink:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} interface, which means implementing the {@inject: github:open:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} method and the {@inject: github:write:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} method. - -1. Implement the {@inject: github:open:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} method. - - ```java - - /** - * Open connector with configuration - * - * @param config initialization config - * @param sinkContext - * @throws Exception IO type exceptions when opening a connector - */ - void open(final Map config, SinkContext sinkContext) throws Exception; - - ``` - -2. Implement the {@inject: github:write:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} method. 
- - ```java - - /** - * Write a message to Sink - * @param record record to write to sink - * @throws Exception - */ - void write(Record record) throws Exception; - - ``` - - During the implementation, you can decide how to write the `Value` and - the `Key` to the actual source, and leverage all the provided information such as - `PartitionId` and `RecordSequence` to achieve different processing guarantees. - - You also need to ack records (if messages are sent successfully) or fail records (if messages fail to send). - -## Handling Schema information - -Pulsar IO handles automatically the Schema and provides a strongly typed API based on Java generics. -If you know the Schema type that you are consuming from you can declare the Java class relative to that type in your Sink declaration. - -``` - -public class MySink implements Sink { - public void write(Record record) {} -} - -``` - -If you want to implement a sink that works with any schema, you can you go with the special GenericObject interface. - -``` - -public class MySink implements Sink { - public void write(Record record) { - Schema schema = record.getSchema(); - GenericObject genericObject = record.getValue(); - if (genericObject != null) { - SchemaType type = genericObject.getSchemaType(); - Object nativeObject = genericObject.getNativeObject(); - ... - } - .... - } -} - -``` - -In the case of AVRO, JSON, and Protobuf records (schemaType=AVRO,JSON,PROTOBUF_NATIVE), you can cast the -`genericObject` variable to `GenericRecord` and use `getFields()` and `getField()` API. -You are able to access the native AVRO record using `genericObject.getNativeObject()`. - -In the case of KeyValue type, you can access both the schema for the key and the schema for the value using this code. - -``` - -public class MySink implements Sink { - public void write(Record record) { - Schema schema = record.getSchema(); - GenericObject genericObject = record.getValue(); - SchemaType type = genericObject.getSchemaType(); - Object nativeObject = genericObject.getNativeObject(); - if (type == SchemaType.KEY_VALUE) { - KeyValue keyValue = (KeyValue) nativeObject; - Object key = keyValue.getKey(); - Object value = keyValue.getValue(); - - KeyValueSchema keyValueSchema = (KeyValueSchema) schema; - Schema keySchema = keyValueSchema.getKeySchema(); - Schema valueSchema = keyValueSchema.getValueSchema(); - } - .... - } -} - -``` - -## Test - -Testing connectors can be challenging because Pulsar IO connectors interact with two systems -that may be difficult to mock—Pulsar and the system to which the connector is connecting. - -It is -recommended writing special tests to test the connector functionalities as below -while mocking the external service. - -### Unit test - -You can create unit tests for your connector. - -### Integration test - -Once you have written sufficient unit tests, you can add -separate integration tests to verify end-to-end functionality. - -Pulsar uses [testcontainers](https://www.testcontainers.org/) **for all integration tests**. - -:::tip - -For more information about **how to create integration tests for Pulsar connectors**, see {@inject: github:IntegrationTests:/tests/integration/src/test/java/org/apache/pulsar/tests/integration/io}. - -::: - -## Package - -Once you've developed and tested your connector, you need to package it so that it can be submitted -to a [Pulsar Functions](functions-overview.md) cluster. - -There are two methods to -work with Pulsar Functions' runtime, that is, [NAR](#nar) and [uber JAR](#uber-jar). 
- -:::note - -If you plan to package and distribute your connector for others to use, you are obligated to - -::: - -license and copyright your own code properly. Remember to add the license and copyright to -all libraries your code uses and to your distribution. -> -> If you use the [NAR](#nar) method, the NAR plugin -automatically creates a `DEPENDENCIES` file in the generated NAR package, including the proper -licensing and copyrights of all libraries of your connector. - -### NAR - -**NAR** stands for NiFi Archive, which is a custom packaging mechanism used by Apache NiFi, to provide -a bit of Java ClassLoader isolation. - -:::tip - -For more information about **how NAR works**, see [here](https://medium.com/hashmapinc/nifi-nar-files-explained-14113f7796fd). - -::: - -Pulsar uses the same mechanism for packaging **all** [built-in connectors](io-connectors.md). - -The easiest approach to package a Pulsar connector is to create a NAR package using [nifi-nar-maven-plugin](https://mvnrepository.com/artifact/org.apache.nifi/nifi-nar-maven-plugin). - -Include this [nifi-nar-maven-plugin](https://mvnrepository.com/artifact/org.apache.nifi/nifi-nar-maven-plugin) in your maven project for your connector as below. - -```xml - - - - org.apache.nifi - nifi-nar-maven-plugin - 1.2.0 - - - -``` - -You must also create a `resources/META-INF/services/pulsar-io.yaml` file with the following contents: - -```yaml - -name: connector name -description: connector description -sourceClass: fully qualified class name (only if source connector) -sinkClass: fully qualified class name (only if sink connector) - -``` - -For Gradle users, there is a [Gradle Nar plugin available on the Gradle Plugin Portal](https://plugins.gradle.org/plugin/io.github.lhotari.gradle-nar-plugin). - -:::tip - -For more information about an **how to use NAR for Pulsar connectors**, see {@inject: github:TwitterFirehose:/pulsar-io/twitter/pom.xml}. - -::: - -### Uber JAR - -An alternative approach is to create an **uber JAR** that contains all of the connector's JAR files -and other resource files. No directory internal structure is necessary. - -You can use [maven-shade-plugin](https://maven.apache.org/plugins/maven-shade-plugin/examples/includes-excludes.html) to create a uber JAR as below: - -```xml - - - org.apache.maven.plugins - maven-shade-plugin - 3.1.1 - - - package - - shade - - - - - *:* - - - - - - - -``` - -## Monitor - -Pulsar connectors enable you to move data in and out of Pulsar easily. It is important to ensure that the running connectors are healthy at any time. You can monitor Pulsar connectors that have been deployed with the following methods: - -- Check the metrics provided by Pulsar. - - Pulsar connectors expose the metrics that can be collected and used for monitoring the health of **Java** connectors. You can check the metrics by following the [monitoring](deploy-monitoring.md) guide. - -- Set and check your customized metrics. - - In addition to the metrics provided by Pulsar, Pulsar allows you to customize metrics for **Java** connectors. Function workers collect user-defined metrics to Prometheus automatically and you can check them in Grafana. - -Here is an example of how to customize metrics for a Java connector. 
- -````mdx-code-block - - - -``` - -public class TestMetricSink implements Sink { - - @Override - public void open(Map config, SinkContext sinkContext) throws Exception { - sinkContext.recordMetric("foo", 1); - } - - @Override - public void write(Record record) throws Exception { - - } - - @Override - public void close() throws Exception { - - } - } - -``` - - - - -```` diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/io-dynamodb-source.md b/site2/website/versioned_docs/version-2.8.3-deprecated/io-dynamodb-source.md deleted file mode 100644 index ce585786eb0428..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/io-dynamodb-source.md +++ /dev/null @@ -1,80 +0,0 @@ ---- -id: io-dynamodb-source -title: AWS DynamoDB source connector -sidebar_label: "AWS DynamoDB source connector" -original_id: io-dynamodb-source ---- - -The DynamoDB source connector pulls data from DynamoDB table streams and persists data into Pulsar. - -This connector uses the [DynamoDB Streams Kinesis Adapter](https://github.com/awslabs/dynamodb-streams-kinesis-adapter), -which uses the [Kinesis Consumer Library](https://github.com/awslabs/amazon-kinesis-client) (KCL) to do the actual -consuming of messages. The KCL uses DynamoDB to track state for consumers and requires cloudwatch access to log metrics. - - -## Configuration - -The configuration of the DynamoDB source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -`initialPositionInStream`|InitialPositionInStream|false|LATEST|The position where the connector starts from.

    Below are the available options:

  1278. `AT_TIMESTAMP`: start from the record at or after the specified timestamp.

  1279. `LATEST`: start after the most recent data record.

  1280. `TRIM_HORIZON`: start from the oldest available data record.
  1281. -`startAtTime`|Date|false|" " (empty string)|If set to `AT_TIMESTAMP`, it specifies the point in time to start consumption. -`applicationName`|String|false|Pulsar IO connector|The name of the KCL application. Must be unique, as it is used to define the table name for the dynamo table used for state tracking.

    By default, the application name is included in the user agent string used to make AWS requests. This can assist with troubleshooting, for example, distinguish requests made by separate connector instances. -`checkpointInterval`|long|false|60000|The frequency of the KCL checkpoint in milliseconds. -`backoffTime`|long|false|3000|The amount of time to delay between requests when the connector encounters a throttling exception from AWS Kinesis in milliseconds. -`numRetries`|int|false|3|The number of re-attempts when the connector encounters an exception while trying to set a checkpoint. -`receiveQueueSize`|int|false|1000|The maximum number of AWS records that can be buffered inside the connector.

    Once the `receiveQueueSize` is reached, the connector does not consume any messages from Kinesis until some messages in the queue are successfully consumed. -`dynamoEndpoint`|String|false|" " (empty string)|The Dynamo end-point URL, which can be found at [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). -`cloudwatchEndpoint`|String|false|" " (empty string)|The Cloudwatch end-point URL, which can be found at [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). -`awsEndpoint`|String|false|" " (empty string)|The DynamoDB Streams end-point URL, which can be found at [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). -`awsRegion`|String|false|" " (empty string)|The AWS region.

    **Example**
    us-west-1, us-west-2 -`awsDynamodbStreamArn`|String|true|" " (empty string)|The DynamoDB stream arn. -`awsCredentialPluginName`|String|false|" " (empty string)|The fully-qualified class name of implementation of {@inject: github:AwsCredentialProviderPlugin:/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java}.

    `awsCredentialProviderPlugin` has the following built-in plugs:

  1282. `org.apache.pulsar.io.kinesis.AwsDefaultProviderChainPlugin`:
    this plugin uses the default AWS provider chain.
    For more information, see [using the default credential provider chain](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html#credentials-default).

  1283. `org.apache.pulsar.io.kinesis.STSAssumeRoleProviderPlugin`:
    this plugin takes a configuration via the `awsCredentialPluginParam` that describes a role to assume when running the KCL.
    **JSON configuration example**
    `{"roleArn": "arn...", "roleSessionName": "name"}`

    `awsCredentialPluginName` is a factory class which creates an AWSCredentialsProvider that is used by Kinesis sink.

    If `awsCredentialPluginName` set to empty, the Kinesis sink creates a default AWSCredentialsProvider which accepts json-map of credentials in `awsCredentialPluginParam`.
-`awsCredentialPluginParam`|String |false|" " (empty string)|The JSON parameter to initialize `awsCredentialsProviderPlugin`.
-
-### Example
-
-Before using the DynamoDB source connector, you need to create a configuration file through one of the following methods.
-
-* JSON
-
-  ```json
-
-  {
-     "awsEndpoint": "https://some.endpoint.aws",
-     "awsRegion": "us-east-1",
-     "awsDynamodbStreamArn": "arn:aws:dynamodb:us-west-2:111122223333:table/TestTable/stream/2015-05-11T21:21:33.291",
-     "awsCredentialPluginParam": "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}",
-     "applicationName": "My test application",
-     "checkpointInterval": "30000",
-     "backoffTime": "4000",
-     "numRetries": "3",
-     "receiveQueueSize": 2000,
-     "initialPositionInStream": "TRIM_HORIZON",
-     "startAtTime": "2019-03-05T19:28:58.000Z"
-  }
-
-  ```
-
-* YAML
-
-  ```yaml
-
-  configs:
-     awsEndpoint: "https://some.endpoint.aws"
-     awsRegion: "us-east-1"
-     awsDynamodbStreamArn: "arn:aws:dynamodb:us-west-2:111122223333:table/TestTable/stream/2015-05-11T21:21:33.291"
-     awsCredentialPluginParam: "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}"
-     applicationName: "My test application"
-     checkpointInterval: 30000
-     backoffTime: 4000
-     numRetries: 3
-     receiveQueueSize: 2000
-     initialPositionInStream: "TRIM_HORIZON"
-     startAtTime: "2019-03-05T19:28:58.000Z"
-
-  ```
-
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/io-elasticsearch-sink.md b/site2/website/versioned_docs/version-2.8.3-deprecated/io-elasticsearch-sink.md
deleted file mode 100644
index 4acedd3dd0788d..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/io-elasticsearch-sink.md
+++ /dev/null
@@ -1,173 +0,0 @@
----
-id: io-elasticsearch-sink
-title: ElasticSearch sink connector
-sidebar_label: "ElasticSearch sink connector"
-original_id: io-elasticsearch-sink
----
-
-The ElasticSearch sink connector pulls messages from Pulsar topics and persists the messages to indexes.
-
-## Configuration
-
-The configuration of the ElasticSearch sink connector has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `elasticSearchUrl` | String| true |" " (empty string)| The URL of elastic search cluster to which the connector connects. |
-| `indexName` | String| true |" " (empty string)| The index name to which the connector writes messages. |
-| `typeName` | String | false | "_doc" | The type name to which the connector writes messages.&#10;The value should be set explicitly to a valid type name other than "_doc" for Elasticsearch version before 6.2, and left to default otherwise. |

    The value should be set explicitly to a valid type name other than "_doc" for Elasticsearch version before 6.2, and left to default otherwise. | -| `indexNumberOfShards` | int| false |1| The number of shards of the index. | -| `indexNumberOfReplicas` | int| false |1 | The number of replicas of the index. | -| `username` | String| false |" " (empty string)| The username used by the connector to connect to the elastic search cluster.

    If `username` is set, then `password` should also be provided. | -| `password` | String| false | " " (empty string)|The password used by the connector to connect to the elastic search cluster.

    If `username` is set, then `password` should also be provided. | - -## Example - -Before using the ElasticSearch sink connector, you need to create a configuration file through one of the following methods. - -### Configuration - -#### For Elasticsearch After 6.2 - -* JSON - - ```json - - { - "elasticSearchUrl": "http://localhost:9200", - "indexName": "my_index", - "username": "scooby", - "password": "doobie" - } - - ``` - -* YAML - - ```yaml - - configs: - elasticSearchUrl: "http://localhost:9200" - indexName: "my_index" - username: "scooby" - password: "doobie" - - ``` - -#### For Elasticsearch Before 6.2 - -* JSON - - ```json - - { - "elasticSearchUrl": "http://localhost:9200", - "indexName": "my_index", - "typeName": "doc", - "username": "scooby", - "password": "doobie" - } - - ``` - -* YAML - - ```yaml - - configs: - elasticSearchUrl: "http://localhost:9200" - indexName: "my_index" - typeName: "doc" - username: "scooby" - password: "doobie" - - ``` - -### Usage - -1. Start a single node Elasticsearch cluster. - - ```bash - - $ docker run -p 9200:9200 -p 9300:9300 \ - -e "discovery.type=single-node" \ - docker.elastic.co/elasticsearch/elasticsearch:7.5.1 - - ``` - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - - Make sure the NAR file is available at `connectors/pulsar-io-elastic-search-@pulsar:version@.nar`. - -3. Start the Pulsar Elasticsearch connector in local run mode using one of the following methods. - * Use the **JSON** configuration as shown previously. - - ```bash - - $ bin/pulsar-admin sinks localrun \ - --archive connectors/pulsar-io-elastic-search-@pulsar:version@.nar \ - --tenant public \ - --namespace default \ - --name elasticsearch-test-sink \ - --sink-config '{"elasticSearchUrl":"http://localhost:9200","indexName": "my_index","username": "scooby","password": "doobie"}' \ - --inputs elasticsearch_test - - ``` - - * Use the **YAML** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin sinks localrun \ - --archive connectors/pulsar-io-elastic-search-@pulsar:version@.nar \ - --tenant public \ - --namespace default \ - --name elasticsearch-test-sink \ - --sink-config-file elasticsearch-sink.yml \ - --inputs elasticsearch_test - - ``` - -4. Publish records to the topic. - - ```bash - - $ bin/pulsar-client produce elasticsearch_test --messages "{\"a\":1}" - - ``` - -5. Check documents in Elasticsearch. - - * refresh the index - - ```bash - - $ curl -s http://localhost:9200/my_index/_refresh - - ``` - - - * search documents - - ```bash - - $ curl -s http://localhost:9200/my_index/_search - - ``` - - You can see the record that published earlier has been successfully written into Elasticsearch. 
- - ```json - - {"took":2,"timed_out":false,"_shards":{"total":1,"successful":1,"skipped":0,"failed":0},"hits":{"total":{"value":1,"relation":"eq"},"max_score":1.0,"hits":[{"_index":"my_index","_type":"_doc","_id":"FSxemm8BLjG_iC0EeTYJ","_score":1.0,"_source":{"a":1}}]}} - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/io-file-source.md b/site2/website/versioned_docs/version-2.8.3-deprecated/io-file-source.md deleted file mode 100644 index e9d710cce65e83..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/io-file-source.md +++ /dev/null @@ -1,160 +0,0 @@ ---- -id: io-file-source -title: File source connector -sidebar_label: "File source connector" -original_id: io-file-source ---- - -The File source connector pulls messages from files in directories and persists the messages to Pulsar topics. - -## Configuration - -The configuration of the File source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `inputDirectory` | String|true | No default value|The input directory to pull files. | -| `recurse` | Boolean|false | true | Whether to pull files from subdirectory or not.| -| `keepFile` |Boolean|false | false | If set to true, the file is not deleted after it is processed, which means the file can be picked up continually. | -| `fileFilter` | String|false| [^\\.].* | The file whose name matches the given regular expression is picked up. | -| `pathFilter` | String |false | NULL | If `recurse` is set to true, the subdirectory whose path matches the given regular expression is scanned. | -| `minimumFileAge` | Integer|false | 0 | The minimum age that a file can be processed.

    Any file younger than `minimumFileAge` (according to the last modification date) is ignored. | -| `maximumFileAge` | Long|false |Long.MAX_VALUE | The maximum age that a file can be processed.

    Any file older than `maximumFileAge` (according to last modification date) is ignored. | -| `minimumSize` |Integer| false |1 | The minimum size (in bytes) that a file can be processed. | -| `maximumSize` | Double|false |Double.MAX_VALUE| The maximum size (in bytes) that a file can be processed. | -| `ignoreHiddenFiles` |Boolean| false | true| Whether the hidden files should be ignored or not. | -| `pollingInterval`|Long | false | 10000L | Indicates how long to wait before performing a directory listing. | -| `numWorkers` | Integer | false | 1 | The number of worker threads that process files.

    This allows you to process a larger number of files concurrently.

    However, setting this to a value greater than 1 makes the data from multiple files mixed in the target topic. | - -### Example - -Before using the File source connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "inputDirectory": "/Users/david", - "recurse": true, - "keepFile": true, - "fileFilter": "[^\\.].*", - "pathFilter": "*", - "minimumFileAge": 0, - "maximumFileAge": 9999999999, - "minimumSize": 1, - "maximumSize": 5000000, - "ignoreHiddenFiles": true, - "pollingInterval": 5000, - "numWorkers": 1 - } - - ``` - -* YAML - - ```yaml - - configs: - inputDirectory: "/Users/david" - recurse: true - keepFile: true - fileFilter: "[^\\.].*" - pathFilter: "*" - minimumFileAge: 0 - maximumFileAge: 9999999999 - minimumSize: 1 - maximumSize: 5000000 - ignoreHiddenFiles: true - pollingInterval: 5000 - numWorkers: 1 - - ``` - -## Usage - -Here is an example of using the File source connecter. - -1. Pull a Pulsar image. - - ```bash - - $ docker pull apachepulsar/pulsar:{version} - - ``` - -2. Start Pulsar standalone. - - ```bash - - $ docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-standalone apachepulsar/pulsar:{version} bin/pulsar standalone - - ``` - -3. Create a configuration file _file-connector.yaml_. - - ```yaml - - configs: - inputDirectory: "/opt" - - ``` - -4. Copy the configuration file _file-connector.yaml_ to the container. - - ```bash - - $ docker cp connectors/file-connector.yaml pulsar-standalone:/pulsar/ - - ``` - -5. Download the File source connector. - - ```bash - - $ curl -O https://mirrors.tuna.tsinghua.edu.cn/apache/pulsar/pulsar-{version}/connectors/pulsar-io-file-{version}.nar - - ``` - -6. Start the File source connector. - - ```bash - - $ docker exec -it pulsar-standalone /bin/bash - - $ ./bin/pulsar-admin sources localrun \ - --archive /pulsar/pulsar-io-file-{version}.nar \ - --name file-test \ - --destination-topic-name pulsar-file-test \ - --source-config-file /pulsar/file-connector.yaml - - ``` - -7. Start a consumer. - - ```bash - - ./bin/pulsar-client consume -s file-test -n 0 pulsar-file-test - - ``` - -8. Write the message to the file _test.txt_. - - ```bash - - echo "hello world!" > /opt/test.txt - - ``` - - The following information appears on the consumer terminal window. - - ```bash - - ----- got message ----- - hello world! - - ``` - - \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/io-flume-sink.md b/site2/website/versioned_docs/version-2.8.3-deprecated/io-flume-sink.md deleted file mode 100644 index b2ace53702f8ca..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/io-flume-sink.md +++ /dev/null @@ -1,56 +0,0 @@ ---- -id: io-flume-sink -title: Flume sink connector -sidebar_label: "Flume sink connector" -original_id: io-flume-sink ---- - -The Flume sink connector pulls messages from Pulsar topics to logs. - -## Configuration - -The configuration of the Flume sink connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -`name`|String|true|"" (empty string)|The name of the agent. -`confFile`|String|true|"" (empty string)|The configuration file. -`noReloadConf`|Boolean|false|false|Whether to reload configuration file if changed. -`zkConnString`|String|true|"" (empty string)|The ZooKeeper connection. 
-`zkBasePath`|String|true|"" (empty string)|The base path in ZooKeeper for agent configuration. - -### Example - -Before using the Flume sink connector, you need to create a configuration file through one of the following methods. - -> For more information about the `sink.conf` in the example below, see [here](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/resources/flume/sink.conf). - -* JSON - - ```json - - { - "name": "a1", - "confFile": "sink.conf", - "noReloadConf": "false", - "zkConnString": "", - "zkBasePath": "" - } - - ``` - -* YAML - - ```yaml - - configs: - name: a1 - confFile: sink.conf - noReloadConf: false - zkConnString: "" - zkBasePath: "" - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/io-flume-source.md b/site2/website/versioned_docs/version-2.8.3-deprecated/io-flume-source.md deleted file mode 100644 index b7fd7edad88111..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/io-flume-source.md +++ /dev/null @@ -1,56 +0,0 @@ ---- -id: io-flume-source -title: Flume source connector -sidebar_label: "Flume source connector" -original_id: io-flume-source ---- - -The Flume source connector pulls messages from logs to Pulsar topics. - -## Configuration - -The configuration of the Flume source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -`name`|String|true|"" (empty string)|The name of the agent. -`confFile`|String|true|"" (empty string)|The configuration file. -`noReloadConf`|Boolean|false|false|Whether to reload configuration file if changed. -`zkConnString`|String|true|"" (empty string)|The ZooKeeper connection. -`zkBasePath`|String|true|"" (empty string)|The base path in ZooKeeper for agent configuration. - -### Example - -Before using the Flume source connector, you need to create a configuration file through one of the following methods. - -> For more information about the `source.conf` in the example below, see [here](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/resources/flume/source.conf). - -* JSON - - ```json - - { - "name": "a1", - "confFile": "source.conf", - "noReloadConf": "false", - "zkConnString": "", - "zkBasePath": "" - } - - ``` - -* YAML - - ```yaml - - configs: - name: a1 - confFile: source.conf - noReloadConf: false - zkConnString: "" - zkBasePath: "" - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/io-hbase-sink.md b/site2/website/versioned_docs/version-2.8.3-deprecated/io-hbase-sink.md deleted file mode 100644 index 1737b00fa26805..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/io-hbase-sink.md +++ /dev/null @@ -1,67 +0,0 @@ ---- -id: io-hbase-sink -title: HBase sink connector -sidebar_label: "HBase sink connector" -original_id: io-hbase-sink ---- - -The HBase sink connector pulls the messages from Pulsar topics -and persists the messages to HBase tables - -## Configuration - -The configuration of the HBase sink connector has the following properties. - -### Property - -| Name | Type|Default | Required | Description | -|------|---------|----------|-------------|--- -| `hbaseConfigResources` | String|None | false | HBase system configuration `hbase-site.xml` file. | -| `zookeeperQuorum` | String|None | true | HBase system configuration about `hbase.zookeeper.quorum` value. 
| -| `zookeeperClientPort` | String|2181 | false | HBase system configuration about `hbase.zookeeper.property.clientPort` value. |
-| `zookeeperZnodeParent` | String|/hbase | false | HBase system configuration about `zookeeper.znode.parent` value. |
-| `tableName` | String|None | true | HBase table, the value is `namespace:tableName`. |
-| `rowKeyName` | String|None | true | HBase table rowkey name. |
-| `familyName` | String|None | true | HBase table column family name. |
-| `qualifierNames` |String| None | true | HBase table column qualifier names. |
-| `batchTimeMs` | Long|1000L| false | HBase table operation timeout in milliseconds. |
-| `batchSize` | int|200| false | Batch size of updates made to the HBase table. |
-
-### Example
-
-Before using the HBase sink connector, you need to create a configuration file through one of the following methods.
-
-* JSON
-
-  ```json
-
-  {
-      "hbaseConfigResources": "hbase-site.xml",
-      "zookeeperQuorum": "localhost",
-      "zookeeperClientPort": "2181",
-      "zookeeperZnodeParent": "/hbase",
-      "tableName": "pulsar_hbase",
-      "rowKeyName": "rowKey",
-      "familyName": "info",
-      "qualifierNames": ["name", "address", "age"]
-  }
-
-  ```
-
-* YAML
-
-  ```yaml
-
-  configs:
-      hbaseConfigResources: "hbase-site.xml"
-      zookeeperQuorum: "localhost"
-      zookeeperClientPort: "2181"
-      zookeeperZnodeParent: "/hbase"
-      tableName: "pulsar_hbase"
-      rowKeyName: "rowKey"
-      familyName: "info"
-      qualifierNames: ["name", "address", "age"]
-
-  ```
-
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/io-hdfs2-sink.md b/site2/website/versioned_docs/version-2.8.3-deprecated/io-hdfs2-sink.md
deleted file mode 100644
index 4a8527154430d0..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/io-hdfs2-sink.md
+++ /dev/null
@@ -1,64 +0,0 @@
----
-id: io-hdfs2-sink
-title: HDFS2 sink connector
-sidebar_label: "HDFS2 sink connector"
-original_id: io-hdfs2-sink
----
-
-The HDFS2 sink connector pulls the messages from Pulsar topics
-and persists the messages to HDFS files.
-
-## Configuration
-
-The configuration of the HDFS2 sink connector has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `hdfsConfigResources` | String|true| None | A file or a comma-separated list containing the Hadoop file system configuration.

    **Example**
    'core-site.xml'
    'hdfs-site.xml' | -| `directory` | String | true | None|The HDFS directory where files read from or written to. | -| `encoding` | String |false |None |The character encoding for the files.

    **Example**
    UTF-8
    ASCII | -| `compression` | Compression |false |None |The compression code used to compress or de-compress the files on HDFS.

    Below are the available options:
  1285. BZIP2
  1286. DEFLATE
  1287. GZIP
  1288. LZ4
  1289. SNAPPY
  1290. | -| `kerberosUserPrincipal` |String| false| None|The principal account of Kerberos user used for authentication. | -| `keytab` | String|false|None| The full pathname of the Kerberos keytab file used for authentication. | -| `filenamePrefix` |String| true, if `compression` is set to `None`. | None |The prefix of the files created inside the HDFS directory.

    **Example**
    The value of topicA result in files named topicA-. | -| `fileExtension` | String| true | None | The extension added to the files written to HDFS.

    **Example**
    '.txt'
    '.seq' | -| `separator` | char|false |None |The character used to separate records in a text file.

    If no value is provided, the contents from all records are concatenated together in one continuous byte array. | -| `syncInterval` | long| false |0| The interval between calls to flush data to HDFS disk in milliseconds. | -| `maxPendingRecords` |int| false|Integer.MAX_VALUE | The maximum number of records that hold in memory before acking.

    Setting this property to 1 makes every record send to disk before the record is acked.

    Setting this property to a higher value allows buffering records before flushing them to disk. -| `subdirectoryPattern` | String | false | None | A subdirectory associated with the created time of the sink.
    The pattern is the formatted pattern of `directory`'s subdirectory.

    See [DateTimeFormatter](https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html) for pattern's syntax. | - -### Example - -Before using the HDFS2 sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "hdfsConfigResources": "core-site.xml", - "directory": "/foo/bar", - "filenamePrefix": "prefix", - "fileExtension": ".log", - "compression": "SNAPPY", - "subdirectoryPattern": "yyyy-MM-dd" - } - - ``` - -* YAML - - ```yaml - - configs: - hdfsConfigResources: "core-site.xml" - directory: "/foo/bar" - filenamePrefix: "prefix" - fileExtension: ".log" - compression: "SNAPPY" - subdirectoryPattern: "yyyy-MM-dd" - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/io-hdfs3-sink.md b/site2/website/versioned_docs/version-2.8.3-deprecated/io-hdfs3-sink.md deleted file mode 100644 index aec065a25db7f4..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/io-hdfs3-sink.md +++ /dev/null @@ -1,59 +0,0 @@ ---- -id: io-hdfs3-sink -title: HDFS3 sink connector -sidebar_label: "HDFS3 sink connector" -original_id: io-hdfs3-sink ---- - -The HDFS3 sink connector pulls the messages from Pulsar topics -and persists the messages to HDFS files. - -## Configuration - -The configuration of the HDFS3 sink connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `hdfsConfigResources` | String|true| None | A file or a comma-separated list containing the Hadoop file system configuration.

    **Example**
    'core-site.xml'
    'hdfs-site.xml' | -| `directory` | String | true | None|The HDFS directory where files read from or written to. | -| `encoding` | String |false |None |The character encoding for the files.

    **Example**
    UTF-8
    ASCII | -| `compression` | Compression |false |None |The compression code used to compress or de-compress the files on HDFS.

    Below are the available options:
  1291. BZIP2
  1292. DEFLATE
  1293. GZIP
  1294. LZ4
  1295. SNAPPY
  1296. | -| `kerberosUserPrincipal` |String| false| None|The principal account of Kerberos user used for authentication. | -| `keytab` | String|false|None| The full pathname of the Kerberos keytab file used for authentication. | -| `filenamePrefix` |String| false |None |The prefix of the files created inside the HDFS directory.

    **Example**
    The value of topicA result in files named topicA-. | -| `fileExtension` | String| false | None| The extension added to the files written to HDFS.

    **Example**
    '.txt'
    '.seq' | -| `separator` | char|false |None |The character used to separate records in a text file.

    If no value is provided, the contents from all records are concatenated together in one continuous byte array. | -| `syncInterval` | long| false |0| The interval between calls to flush data to HDFS disk in milliseconds. | -| `maxPendingRecords` |int| false|Integer.MAX_VALUE | The maximum number of records that hold in memory before acking.

    Setting this property to 1 makes every record send to disk before the record is acked.

    Setting this property to a higher value allows buffering records before flushing them to disk. - -### Example - -Before using the HDFS3 sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "hdfsConfigResources": "core-site.xml", - "directory": "/foo/bar", - "filenamePrefix": "prefix", - "compression": "SNAPPY" - } - - ``` - -* YAML - - ```yaml - - configs: - hdfsConfigResources: "core-site.xml" - directory: "/foo/bar" - filenamePrefix: "prefix" - compression: "SNAPPY" - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/io-influxdb-sink.md b/site2/website/versioned_docs/version-2.8.3-deprecated/io-influxdb-sink.md deleted file mode 100644 index 9382f8c03121cc..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/io-influxdb-sink.md +++ /dev/null @@ -1,119 +0,0 @@ ---- -id: io-influxdb-sink -title: InfluxDB sink connector -sidebar_label: "InfluxDB sink connector" -original_id: io-influxdb-sink ---- - -The InfluxDB sink connector pulls messages from Pulsar topics -and persists the messages to InfluxDB. - -The InfluxDB sink provides different configurations for InfluxDBv1 and v2 respectively. - -## Configuration - -The configuration of the InfluxDB sink connector has the following properties. - -### Property -#### InfluxDBv2 -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `influxdbUrl` |String| true|" " (empty string) | The URL of the InfluxDB instance. | -| `token` | String|true| " " (empty string) |The authentication token used to authenticate to InfluxDB. | -| `organization` | String| true|" " (empty string) | The InfluxDB organization to write to. | -| `bucket` |String| true | " " (empty string)| The InfluxDB bucket to write to. | -| `precision` | String|false| ns | The timestamp precision for writing data to InfluxDB.

    Below are the available options:
  1297. ns
  1298. us
  1299. ms
  1300. s
  1301. | -| `logLevel` | String|false| NONE|The log level for InfluxDB request and response.

    Below are the available options:
  1302. NONE
  1303. BASIC
  1304. HEADERS
  1305. FULL
-| `gzipEnable` | boolean|false | false | Whether to enable gzip or not. |
-| `batchTimeMs` |long|false| 1000L | The InfluxDB operation time in milliseconds. |
-| `batchSize` | int|false|200| The batch size of writing to InfluxDB. |
-
-#### InfluxDBv1
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `influxdbUrl` |String| true|" " (empty string) | The URL of the InfluxDB instance. |
-| `username` | String|false| " " (empty string) |The username used to authenticate to InfluxDB. |
-| `password` | String| false|" " (empty string) | The password used to authenticate to InfluxDB. |
-| `database` |String| true | " " (empty string)| The InfluxDB database to which messages are written. |
-| `consistencyLevel` | String|false|ONE | The consistency level for writing data to InfluxDB.

    Below are the available options:
  1307. ALL
  1308. ANY
  1309. ONE
  1310. QUORUM
  1311. | -| `logLevel` | String|false| NONE|The log level for InfluxDB request and response.

    Below are the available options:
  1312. NONE
  1313. BASIC
  1314. HEADERS
  1315. FULL
  1316. | -| `retentionPolicy` | String|false| autogen| The retention policy for InfluxDB. | -| `gzipEnable` | boolean|false | false | Whether to enable gzip or not. | -| `batchTimeMs` |long|false| 1000L | The InfluxDB operation time in milliseconds. | -| `batchSize` | int|false|200| The batch size of writing to InfluxDB. | - -### Example -Before using the InfluxDB sink connector, you need to create a configuration file through one of the following methods. -#### InfluxDBv2 -* JSON - - ```json - - { - "influxdbUrl": "http://localhost:9999", - "organization": "example-org", - "bucket": "example-bucket", - "token": "xxxx", - "precision": "ns", - "logLevel": "NONE", - "gzipEnable": false, - "batchTimeMs": 1000, - "batchSize": 100 - } - - ``` - - -* YAML - - ```yaml - - configs: - influxdbUrl: "http://localhost:9999" - organization: "example-org" - bucket: "example-bucket" - token: "xxxx" - precision: "ns" - logLevel: "NONE" - gzipEnable: false - batchTimeMs: 1000 - batchSize: 100 - - ``` - - -#### InfluxDBv1 - -* JSON - - ```json - - { - "influxdbUrl": "http://localhost:8086", - "database": "test_db", - "consistencyLevel": "ONE", - "logLevel": "NONE", - "retentionPolicy": "autogen", - "gzipEnable": false, - "batchTimeMs": 1000, - "batchSize": 100 - } - - ``` - -* YAML - - ```yaml - - configs: - influxdbUrl: "http://localhost:8086" - database: "test_db" - consistencyLevel: "ONE" - logLevel: "NONE" - retentionPolicy: "autogen" - gzipEnable: false - batchTimeMs: 1000 - batchSize: 100 - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/io-jdbc-sink.md b/site2/website/versioned_docs/version-2.8.3-deprecated/io-jdbc-sink.md deleted file mode 100644 index 77dbb61fccd7ed..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/io-jdbc-sink.md +++ /dev/null @@ -1,157 +0,0 @@ ---- -id: io-jdbc-sink -title: JDBC sink connector -sidebar_label: "JDBC sink connector" -original_id: io-jdbc-sink ---- - -The JDBC sink connectors allow pulling messages from Pulsar topics -and persists the messages to ClickHouse, MariaDB, PostgreSQL, and SQLite. - -> Currently, INSERT, DELETE and UPDATE operations are supported. - -## Configuration - -The configuration of all JDBC sink connectors has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `userName` | String|false | " " (empty string) | The username used to connect to the database specified by `jdbcUrl`.

    **Note: `userName` is case-sensitive.**| -| `password` | String|false | " " (empty string)| The password used to connect to the database specified by `jdbcUrl`.

    **Note: `password` is case-sensitive.**| -| `jdbcUrl` | String|true | " " (empty string) | The JDBC URL of the database to which the connector connects. | -| `tableName` | String|true | " " (empty string) | The name of the table to which the connector writes. | -| `nonKey` | String|false | " " (empty string) | A comma-separated list contains the fields used in updating events. | -| `key` | String|false | " " (empty string) | A comma-separated list contains the fields used in `where` condition of updating and deleting events. | -| `timeoutMs` | int| false|500 | The JDBC operation timeout in milliseconds. | -| `batchSize` | int|false | 200 | The batch size of updates made to the database. | - -### Example for ClickHouse - -* JSON - - ```json - - { - "userName": "clickhouse", - "password": "password", - "jdbcUrl": "jdbc:clickhouse://localhost:8123/pulsar_clickhouse_jdbc_sink", - "tableName": "pulsar_clickhouse_jdbc_sink" - } - - ``` - -* YAML - - ```yaml - - tenant: "public" - namespace: "default" - name: "jdbc-clickhouse-sink" - topicName: "persistent://public/default/jdbc-clickhouse-topic" - sinkType: "jdbc-clickhouse" - configs: - userName: "clickhouse" - password: "password" - jdbcUrl: "jdbc:clickhouse://localhost:8123/pulsar_clickhouse_jdbc_sink" - tableName: "pulsar_clickhouse_jdbc_sink" - - ``` - -### Example for MariaDB - -* JSON - - ```json - - { - "userName": "mariadb", - "password": "password", - "jdbcUrl": "jdbc:mariadb://localhost:3306/pulsar_mariadb_jdbc_sink", - "tableName": "pulsar_mariadb_jdbc_sink" - } - - ``` - -* YAML - - ```yaml - - tenant: "public" - namespace: "default" - name: "jdbc-mariadb-sink" - topicName: "persistent://public/default/jdbc-mariadb-topic" - sinkType: "jdbc-mariadb" - configs: - userName: "mariadb" - password: "password" - jdbcUrl: "jdbc:mariadb://localhost:3306/pulsar_mariadb_jdbc_sink" - tableName: "pulsar_mariadb_jdbc_sink" - - ``` - -### Example for PostgreSQL - -Before using the JDBC PostgreSQL sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "userName": "postgres", - "password": "password", - "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink", - "tableName": "pulsar_postgres_jdbc_sink" - } - - ``` - -* YAML - - ```yaml - - tenant: "public" - namespace: "default" - name: "jdbc-postgres-sink" - topicName: "persistent://public/default/jdbc-postgres-topic" - sinkType: "jdbc-postgres" - configs: - userName: "postgres" - password: "password" - jdbcUrl: "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink" - tableName: "pulsar_postgres_jdbc_sink" - - ``` - -For more information on **how to use this JDBC sink connector**, see [connect Pulsar to PostgreSQL](io-quickstart.md#connect-pulsar-to-postgresql). 
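Assuming a local PostgreSQL instance that matches the configuration above, one quick way to sanity-check the sink is to create the target table before starting it and then query the table after publishing a few messages. This is only a sketch: the `id`/`name` column layout is an illustrative assumption, not something the connector mandates.

```bash
# Hypothetical table layout; adjust the columns to match your message schema.
psql -h localhost -U postgres -d pulsar_postgres_jdbc_sink \
  -c "CREATE TABLE IF NOT EXISTS pulsar_postgres_jdbc_sink (id serial PRIMARY KEY, name text);"

# After publishing messages to the sink's input topic, verify that rows arrived.
psql -h localhost -U postgres -d pulsar_postgres_jdbc_sink \
  -c "SELECT * FROM pulsar_postgres_jdbc_sink;"
```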
- -### Example for SQLite - -* JSON - - ```json - - { - "jdbcUrl": "jdbc:sqlite:db.sqlite", - "tableName": "pulsar_sqlite_jdbc_sink" - } - - ``` - -* YAML - - ```yaml - - tenant: "public" - namespace: "default" - name: "jdbc-sqlite-sink" - topicName: "persistent://public/default/jdbc-sqlite-topic" - sinkType: "jdbc-sqlite" - configs: - jdbcUrl: "jdbc:sqlite:db.sqlite" - tableName: "pulsar_sqlite_jdbc_sink" - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/io-kafka-sink.md b/site2/website/versioned_docs/version-2.8.3-deprecated/io-kafka-sink.md deleted file mode 100644 index 09dad4ce70bac9..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/io-kafka-sink.md +++ /dev/null @@ -1,72 +0,0 @@ ---- -id: io-kafka-sink -title: Kafka sink connector -sidebar_label: "Kafka sink connector" -original_id: io-kafka-sink ---- - -The Kafka sink connector pulls messages from Pulsar topics and persists the messages -to Kafka topics. - -This guide explains how to configure and use the Kafka sink connector. - -## Configuration - -The configuration of the Kafka sink connector has the following parameters. - -### Property - -| Name | Type| Required | Default | Description -|------|----------|---------|-------------|-------------| -| `bootstrapServers` |String| true | " " (empty string) | A comma-separated list of host and port pairs for establishing the initial connection to the Kafka cluster. | -|`acks`|String|true|" " (empty string) |The number of acknowledgments that the producer requires the leader to receive before a request completes.
    This controls the durability of the sent records.
-|`batchsize`|long|false|16384L|The batch size (in bytes) that the Kafka producer attempts to collect before sending records to brokers.
-|`maxRequestSize`|long|false|1048576L|The maximum size of a Kafka request in bytes.
-|`topic`|String|true|" " (empty string) |The Kafka topic which receives messages from Pulsar.
-| `keyDeserializationClass` | String|false | org.apache.kafka.common.serialization.StringSerializer | The serializer class for Kafka producers to serialize keys.
-| `valueDeserializationClass` | String|false | org.apache.kafka.common.serialization.ByteArraySerializer | The serializer class for Kafka producers to serialize values.

    The serializer is set by a specific implementation of [`KafkaAbstractSink`](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSink.java). -|`producerConfigProperties`|Map|false|" " (empty string)|The producer configuration properties to be passed to producers.

    **Note: other properties specified in the connector configuration file take precedence over this configuration**. - - -### Example - -Before using the Kafka sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "bootstrapServers": "localhost:6667", - "topic": "test", - "acks": "1", - "batchSize": "16384", - "maxRequestSize": "1048576", - "producerConfigProperties": - { - "client.id": "test-pulsar-producer", - "security.protocol": "SASL_PLAINTEXT", - "sasl.mechanism": "GSSAPI", - "sasl.kerberos.service.name": "kafka", - "acks": "all" - } - } - -* YAML - - ``` - -yaml - configs: - bootstrapServers: "localhost:6667" - topic: "test" - acks: "1" - batchSize: "16384" - maxRequestSize: "1048576" - producerConfigProperties: - client.id: "test-pulsar-producer" - security.protocol: "SASL_PLAINTEXT" - sasl.mechanism: "GSSAPI" - sasl.kerberos.service.name: "kafka" - acks: "all" - ``` diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/io-kafka-source.md b/site2/website/versioned_docs/version-2.8.3-deprecated/io-kafka-source.md deleted file mode 100644 index 53448699e21b4a..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/io-kafka-source.md +++ /dev/null @@ -1,226 +0,0 @@ ---- -id: io-kafka-source -title: Kafka source connector -sidebar_label: "Kafka source connector" -original_id: io-kafka-source ---- - -The Kafka source connector pulls messages from Kafka topics and persists the messages -to Pulsar topics. - -This guide explains how to configure and use the Kafka source connector. - -## Configuration - -The configuration of the Kafka source connector has the following properties. - -### Property - -| Name | Type| Required | Default | Description -|------|----------|---------|-------------|-------------| -| `bootstrapServers` |String| true | " " (empty string) | A comma-separated list of host and port pairs for establishing the initial connection to the Kafka cluster. | -| `groupId` |String| true | " " (empty string) | A unique string that identifies the group of consumer processes to which this consumer belongs. | -| `fetchMinBytes` | long|false | 1 | The minimum byte expected for each fetch response. | -| `autoCommitEnabled` | boolean |false | true | If set to true, the consumer's offset is periodically committed in the background.

    This committed offset is used when the process fails as the position from which a new consumer begins. | -| `autoCommitIntervalMs` | long|false | 5000 | The frequency in milliseconds that the consumer offsets are auto-committed to Kafka if `autoCommitEnabled` is set to true. | -| `heartbeatIntervalMs` | long| false | 3000 | The interval between heartbeats to the consumer when using Kafka's group management facilities.

    **Note: `heartbeatIntervalMs` must be smaller than `sessionTimeoutMs`**.| -| `sessionTimeoutMs` | long|false | 30000 | The timeout used to detect consumer failures when using Kafka's group management facility. | -| `topic` | String|true | " " (empty string)| The Kafka topic which sends messages to Pulsar. | -| `consumerConfigProperties` | Map| false | " " (empty string) | The consumer configuration properties to be passed to consumers.

    **Note: other properties specified in the connector configuration file take precedence over this configuration**. | -| `keyDeserializationClass` | String|false | org.apache.kafka.common.serialization.StringDeserializer | The deserializer class for Kafka consumers to deserialize keys.
    The deserializer is set by a specific implementation of [`KafkaAbstractSource`](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSource.java). -| `valueDeserializationClass` | String|false | org.apache.kafka.common.serialization.ByteArrayDeserializer | The deserializer class for Kafka consumers to deserialize values. -| `autoOffsetReset` | String | false | "earliest" | The default offset reset policy. | - -### Schema Management - -This Kafka source connector applies the schema to the topic depending on the data type that is present on the Kafka topic. -You can detect the data type from the `keyDeserializationClass` and `valueDeserializationClass` configuration parameters. - -If the `valueDeserializationClass` is `org.apache.kafka.common.serialization.StringDeserializer`, you can set Schema.STRING() as schema type on the Pulsar topic. - -If `valueDeserializationClass` is `io.confluent.kafka.serializers.KafkaAvroDeserializer`, Pulsar downloads the AVRO schema from the Confluent Schema Registry® -and sets it properly on the Pulsar topic. - -In this case, you need to set `schema.registry.url` inside of the `consumerConfigProperties` configuration entry -of the source. - -If `keyDeserializationClass` is not `org.apache.kafka.common.serialization.StringDeserializer`, it means -that you do not have a String as key and the Kafka Source uses the KeyValue schema type with the SEPARATED encoding. - -Pulsar supports AVRO format for keys. - -In this case, you can have a Pulsar topic with the following properties: -- Schema: KeyValue schema with SEPARATED encoding -- Key: the content of key of the Kafka message (base64 encoded) -- Value: the content of value of the Kafka message -- KeySchema: the schema detected from `keyDeserializationClass` -- ValueSchema: the schema detected from `valueDeserializationClass` - -Topic compaction and partition routing use the Pulsar key, that contains the Kafka key, and so they are driven by the same value that you have on Kafka. - -When you consume data from Pulsar topics, you can use the `KeyValue` schema. In this way, you can decode the data properly. -If you want to access the raw key, you can use the `Message#getKeyBytes()` API. - -### Example - -Before using the Kafka source connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "bootstrapServers": "pulsar-kafka:9092", - "groupId": "test-pulsar-io", - "topic": "my-topic", - "sessionTimeoutMs": "10000", - "autoCommitEnabled": false - } - - ``` - -* YAML - - ```yaml - - configs: - bootstrapServers: "pulsar-kafka:9092" - groupId: "test-pulsar-io" - topic: "my-topic" - sessionTimeoutMs: "10000" - autoCommitEnabled: false - - ``` - -## Usage - -Here is an example of using the Kafka source connector with the configuration file as shown previously. - -1. Download a Kafka client and a Kafka connector. - - ```bash - - $ wget https://repo1.maven.org/maven2/org/apache/kafka/kafka-clients/0.10.2.1/kafka-clients-0.10.2.1.jar - - $ wget https://archive.apache.org/dist/pulsar/pulsar-2.4.0/connectors/pulsar-io-kafka-2.4.0.nar - - ``` - -2. Create a network. - - ```bash - - $ docker network create kafka-pulsar - - ``` - -3. Pull a ZooKeeper image and start ZooKeeper. - - ```bash - - $ docker pull wurstmeister/zookeeper - - $ docker run -d -it -p 2181:2181 --name pulsar-kafka-zookeeper --network kafka-pulsar wurstmeister/zookeeper - - ``` - -4. Pull a Kafka image and start Kafka. 
- - ```bash - - $ docker pull wurstmeister/kafka:2.11-1.0.2 - - $ docker run -d -it --network kafka-pulsar -p 6667:6667 -p 9092:9092 -e KAFKA_ADVERTISED_HOST_NAME=pulsar-kafka -e KAFKA_ZOOKEEPER_CONNECT=pulsar-kafka-zookeeper:2181 --name pulsar-kafka wurstmeister/kafka:2.11-1.0.2 - - ``` - -5. Pull a Pulsar image and start Pulsar standalone. - - ```bash - - $ docker pull apachepulsar/pulsar:@pulsar:version@ - - $ docker run -d -it --network kafka-pulsar -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-kafka-standalone apachepulsar/pulsar:2.4.0 bin/pulsar standalone - - ``` - -6. Create a producer file _kafka-producer.py_. - - ```python - - from kafka import KafkaProducer - producer = KafkaProducer(bootstrap_servers='pulsar-kafka:9092') - future = producer.send('my-topic', b'hello world') - future.get() - - ``` - -7. Create a consumer file _pulsar-client.py_. - - ```python - - import pulsar - - client = pulsar.Client('pulsar://localhost:6650') - consumer = client.subscribe('my-topic', subscription_name='my-aa') - - while True: - msg = consumer.receive() - print msg - print dir(msg) - print("Received message: '%s'" % msg.data()) - consumer.acknowledge(msg) - - client.close() - - ``` - -8. Copy the following files to Pulsar. - - ```bash - - $ docker cp pulsar-io-kafka-@pulsar:version@.nar pulsar-kafka-standalone:/pulsar - $ docker cp kafkaSourceConfig.yaml pulsar-kafka-standalone:/pulsar/conf - $ docker cp pulsar-client.py pulsar-kafka-standalone:/pulsar/ - $ docker cp kafka-producer.py pulsar-kafka-standalone:/pulsar/ - - ``` - -9. Open a new terminal window and start the Kafka source connector in local run mode. - - ```bash - - $ docker exec -it pulsar-kafka-standalone /bin/bash - - $ ./bin/pulsar-admin source localrun \ - --archive ./pulsar-io-kafka-@pulsar:version@.nar \ - --classname org.apache.pulsar.io.kafka.KafkaBytesSource \ - --tenant public \ - --namespace default \ - --name kafka \ - --destination-topic-name my-topic \ - --source-config-file ./conf/kafkaSourceConfig.yaml \ - --parallelism 1 - - ``` - -10. Open a new terminal window and run the consumer. - - ```bash - - $ docker exec -it pulsar-kafka-standalone /bin/bash - - $ pip install kafka-python - - $ python3 kafka-producer.py - - ``` - - The following information appears on the consumer terminal window. - - ```bash - - Received message: 'hello world' - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/io-kinesis-sink.md b/site2/website/versioned_docs/version-2.8.3-deprecated/io-kinesis-sink.md deleted file mode 100644 index 153587dcfc783e..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/io-kinesis-sink.md +++ /dev/null @@ -1,80 +0,0 @@ ---- -id: io-kinesis-sink -title: Kinesis sink connector -sidebar_label: "Kinesis sink connector" -original_id: io-kinesis-sink ---- - -The Kinesis sink connector pulls data from Pulsar and persists data into Amazon Kinesis. - -## Configuration - -The configuration of the Kinesis sink connector has the following property. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -`messageFormat`|MessageFormat|true|ONLY_RAW_PAYLOAD|Message format in which Kinesis sink converts Pulsar messages and publishes to Kinesis streams.

    Below are the available options:

  1317. `ONLY_RAW_PAYLOAD`: Kinesis sink directly publishes Pulsar message payload as a message into the configured Kinesis stream.

  1318. `FULL_MESSAGE_IN_JSON`: Kinesis sink creates a JSON payload with Pulsar message payload, properties and encryptionCtx, and publishes JSON payload into the configured Kinesis stream.

  1319. `FULL_MESSAGE_IN_FB`: Kinesis sink creates a flatbuffer serialized payload with Pulsar message payload, properties and encryptionCtx, and publishes flatbuffer payload into the configured Kinesis stream.
-`retainOrdering`|boolean|false|false|Whether the Pulsar connector retains ordering when moving messages from Pulsar to Kinesis.
-`awsEndpoint`|String|false|" " (empty string)|The Kinesis end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
-`awsRegion`|String|false|" " (empty string)|The AWS region.

    **Example**
    us-west-1, us-west-2 -`awsKinesisStreamName`|String|true|" " (empty string)|The Kinesis stream name. -`awsCredentialPluginName`|String|false|" " (empty string)|The fully-qualified class name of implementation of {@inject: github:AwsCredentialProviderPlugin:/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java}.

    It is a factory class which creates an AWSCredentialsProvider that is used by Kinesis sink.

    If it is empty, the Kinesis sink creates a default AWSCredentialsProvider which accepts json-map of credentials in `awsCredentialPluginParam`. -`awsCredentialPluginParam`|String |false|" " (empty string)|The JSON parameter to initialize `awsCredentialsProviderPlugin`. - -### Built-in plugins - -The following are built-in `AwsCredentialProviderPlugin` plugins: - -* `org.apache.pulsar.io.aws.AwsDefaultProviderChainPlugin` - - This plugin takes no configuration, it uses the default AWS provider chain. - - For more information, see [AWS documentation](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html#credentials-default). - -* `org.apache.pulsar.io.aws.STSAssumeRoleProviderPlugin` - - This plugin takes a configuration (via the `awsCredentialPluginParam`) that describes a role to assume when running the KCL. - - This configuration takes the form of a small json document like: - - ```json - - {"roleArn": "arn...", "roleSessionName": "name"} - - ``` - -### Example - -Before using the Kinesis sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "awsEndpoint": "some.endpoint.aws", - "awsRegion": "us-east-1", - "awsKinesisStreamName": "my-stream", - "awsCredentialPluginParam": "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}", - "messageFormat": "ONLY_RAW_PAYLOAD", - "retainOrdering": "true" - } - - ``` - -* YAML - - ```yaml - - configs: - awsEndpoint: "some.endpoint.aws" - awsRegion: "us-east-1" - awsKinesisStreamName: "my-stream" - awsCredentialPluginParam: "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}" - messageFormat: "ONLY_RAW_PAYLOAD" - retainOrdering: "true" - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/io-kinesis-source.md b/site2/website/versioned_docs/version-2.8.3-deprecated/io-kinesis-source.md deleted file mode 100644 index 0d07eefc3703b3..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/io-kinesis-source.md +++ /dev/null @@ -1,81 +0,0 @@ ---- -id: io-kinesis-source -title: Kinesis source connector -sidebar_label: "Kinesis source connector" -original_id: io-kinesis-source ---- - -The Kinesis source connector pulls data from Amazon Kinesis and persists data into Pulsar. - -This connector uses the [Kinesis Consumer Library](https://github.com/awslabs/amazon-kinesis-client) (KCL) to do the actual consuming of messages. The KCL uses DynamoDB to track state for consumers. - -> Note: currently, the Kinesis source connector only supports raw messages. If you use KMS encrypted messages, the encrypted messages are sent to downstream. This connector will support decrypting messages in the future release. - - -## Configuration - -The configuration of the Kinesis source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -`initialPositionInStream`|InitialPositionInStream|false|LATEST|The position where the connector starts from.

    Below are the available options:

  1321. `AT_TIMESTAMP`: start from the record at or after the specified timestamp.

  1322. `LATEST`: start after the most recent data record.

  1323. `TRIM_HORIZON`: start from the oldest available data record.
-`startAtTime`|Date|false|" " (empty string)|If `initialPositionInStream` is set to `AT_TIMESTAMP`, this specifies the point in time to start consumption.
-`applicationName`|String|false|Pulsar IO connector|The name of the Amazon Kinesis application.

    By default, the application name is included in the user agent string used to make AWS requests. This can assist with troubleshooting, for example, distinguish requests made by separate connector instances. -`checkpointInterval`|long|false|60000|The frequency of the Kinesis stream checkpoint in milliseconds. -`backoffTime`|long|false|3000|The amount of time to delay between requests when the connector encounters a throttling exception from AWS Kinesis in milliseconds. -`numRetries`|int|false|3|The number of re-attempts when the connector encounters an exception while trying to set a checkpoint. -`receiveQueueSize`|int|false|1000|The maximum number of AWS records that can be buffered inside the connector.

    Once the `receiveQueueSize` is reached, the connector does not consume any messages from Kinesis until some messages in the queue are successfully consumed. -`dynamoEndpoint`|String|false|" " (empty string)|The Dynamo end-point URL, which can be found at [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). -`cloudwatchEndpoint`|String|false|" " (empty string)|The Cloudwatch end-point URL, which can be found at [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). -`useEnhancedFanOut`|boolean|false|true|If set to true, it uses Kinesis enhanced fan-out.

    If set to false, it uses polling. -`awsEndpoint`|String|false|" " (empty string)|The Kinesis end-point URL, which can be found at [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). -`awsRegion`|String|false|" " (empty string)|The AWS region.

    **Example**
    us-west-1, us-west-2 -`awsKinesisStreamName`|String|true|" " (empty string)|The Kinesis stream name. -`awsCredentialPluginName`|String|false|" " (empty string)|The fully-qualified class name of implementation of {@inject: github:AwsCredentialProviderPlugin:/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java}.

    `awsCredentialProviderPlugin` has the following built-in plugs:

  1325. `org.apache.pulsar.io.kinesis.AwsDefaultProviderChainPlugin`:
    this plugin uses the default AWS provider chain.
    For more information, see [using the default credential provider chain](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html#credentials-default).

  1326. `org.apache.pulsar.io.kinesis.STSAssumeRoleProviderPlugin`:
    this plugin takes a configuration via the `awsCredentialPluginParam` that describes a role to assume when running the KCL.
    **JSON configuration example**
    `{"roleArn": "arn...", "roleSessionName": "name"}`

    `awsCredentialPluginName` is a factory class which creates an AWSCredentialsProvider that is used by Kinesis sink.

    If `awsCredentialPluginName` set to empty, the Kinesis sink creates a default AWSCredentialsProvider which accepts json-map of credentials in `awsCredentialPluginParam`.
  1327. -`awsCredentialPluginParam`|String |false|" " (empty string)|The JSON parameter to initialize `awsCredentialsProviderPlugin`. - -### Example - -Before using the Kinesis source connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "awsEndpoint": "https://some.endpoint.aws", - "awsRegion": "us-east-1", - "awsKinesisStreamName": "my-stream", - "awsCredentialPluginParam": "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}", - "applicationName": "My test application", - "checkpointInterval": "30000", - "backoffTime": "4000", - "numRetries": "3", - "receiveQueueSize": 2000, - "initialPositionInStream": "TRIM_HORIZON", - "startAtTime": "2019-03-05T19:28:58.000Z" - } - - ``` - -* YAML - - ```yaml - - configs: - awsEndpoint: "https://some.endpoint.aws" - awsRegion: "us-east-1" - awsKinesisStreamName: "my-stream" - awsCredentialPluginParam: "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}" - applicationName: "My test application" - checkpointInterval: 30000 - backoffTime: 4000 - numRetries: 3 - receiveQueueSize: 2000 - initialPositionInStream: "TRIM_HORIZON" - startAtTime: "2019-03-05T19:28:58.000Z" - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/io-mongo-sink.md b/site2/website/versioned_docs/version-2.8.3-deprecated/io-mongo-sink.md deleted file mode 100644 index 30c15a6c280938..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/io-mongo-sink.md +++ /dev/null @@ -1,56 +0,0 @@ ---- -id: io-mongo-sink -title: MongoDB sink connector -sidebar_label: "MongoDB sink connector" -original_id: io-mongo-sink ---- - -The MongoDB sink connector pulls messages from Pulsar topics -and persists the messages to collections. - -## Configuration - -The configuration of the MongoDB sink connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `mongoUri` | String| true| " " (empty string) | The MongoDB URI to which the connector connects.

    For more information, see [connection string URI format](https://docs.mongodb.com/manual/reference/connection-string/). | -| `database` | String| true| " " (empty string)| The database name to which the collection belongs. | -| `collection` | String| true| " " (empty string)| The collection name to which the connector writes messages. | -| `batchSize` | int|false|100 | The batch size of writing messages to collections. | -| `batchTimeMs` |long|false|1000| The batch operation interval in milliseconds. | - - -### Example - -Before using the Mongo sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "mongoUri": "mongodb://localhost:27017", - "database": "pulsar", - "collection": "messages", - "batchSize": "2", - "batchTimeMs": "500" - } - - ``` - -* YAML - - ```yaml - - configs: - mongoUri: "mongodb://localhost:27017" - database: "pulsar" - collection: "messages" - batchSize: 2 - batchTimeMs: 500 - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/io-netty-source.md b/site2/website/versioned_docs/version-2.8.3-deprecated/io-netty-source.md deleted file mode 100644 index e1ec8d863115b3..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/io-netty-source.md +++ /dev/null @@ -1,241 +0,0 @@ ---- -id: io-netty-source -title: Netty source connector -sidebar_label: "Netty source connector" -original_id: io-netty-source ---- - -The Netty source connector opens a port that accepts incoming data via the configured network protocol -and publish it to user-defined Pulsar topics. - -This connector can be used in a containerized (for example, k8s) deployment. Otherwise, if the connector is running in process or thread mode, the instance may be conflicting on listening to ports. - -## Configuration - -The configuration of the Netty source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `type` |String| true |tcp | The network protocol over which data is transmitted to netty.

    Below are the available options:
  1328. tcp
  1329. http
  1330. udp
  1331. | -| `host` | String|true | 127.0.0.1 | The host name or address on which the source instance listen. | -| `port` | int|true | 10999 | The port on which the source instance listen. | -| `numberOfThreads` |int| true |1 | The number of threads of Netty TCP server to accept incoming connections and handle the traffic of accepted connections. | - - -### Example - -Before using the Netty source connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "type": "tcp", - "host": "127.0.0.1", - "port": "10911", - "numberOfThreads": "1" - } - - ``` - -* YAML - - ```yaml - - configs: - type: "tcp" - host: "127.0.0.1" - port: 10999 - numberOfThreads: 1 - - ``` - -## Usage - -The following examples show how to use the Netty source connector with TCP and HTTP. - -### TCP - -1. Start Pulsar standalone. - - ```bash - - $ docker pull apachepulsar/pulsar:{version} - - $ docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-netty-standalone apachepulsar/pulsar:{version} bin/pulsar standalone - - ``` - -2. Create a configuration file _netty-source-config.yaml_. - - ```yaml - - configs: - type: "tcp" - host: "127.0.0.1" - port: 10999 - numberOfThreads: 1 - - ``` - -3. Copy the configuration file _netty-source-config.yaml_ to Pulsar server. - - ```bash - - $ docker cp netty-source-config.yaml pulsar-netty-standalone:/pulsar/conf/ - - ``` - -4. Download the Netty source connector. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - curl -O http://mirror-hk.koddos.net/apache/pulsar/pulsar-{version}/connectors/pulsar-io-netty-{version}.nar - - ``` - -5. Start the Netty source connector. - - ```bash - - $ ./bin/pulsar-admin sources localrun \ - --archive pulsar-io-@pulsar:version@.nar \ - --tenant public \ - --namespace default \ - --name netty \ - --destination-topic-name netty-topic \ - --source-config-file netty-source-config.yaml \ - --parallelism 1 - - ``` - -6. Consume data. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - - $ ./bin/pulsar-client consume -t Exclusive -s netty-sub netty-topic -n 0 - - ``` - -7. Open another terminal window to send data to the Netty source. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - - $ apt-get update - - $ apt-get -y install telnet - - $ root@1d19327b2c67:/pulsar# telnet 127.0.0.1 10999 - Trying 127.0.0.1... - Connected to 127.0.0.1. - Escape character is '^]'. - hello - world - - ``` - -8. The following information appears on the consumer terminal window. - - ```bash - - ----- got message ----- - hello - - ----- got message ----- - world - - ``` - -### HTTP - -1. Start Pulsar standalone. - - ```bash - - $ docker pull apachepulsar/pulsar:{version} - - $ docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-netty-standalone apachepulsar/pulsar:{version} bin/pulsar standalone - - ``` - -2. Create a configuration file _netty-source-config.yaml_. - - ```yaml - - configs: - type: "http" - host: "127.0.0.1" - port: 10999 - numberOfThreads: 1 - - ``` - -3. Copy the configuration file _netty-source-config.yaml_ to Pulsar server. - - ```bash - - $ docker cp netty-source-config.yaml pulsar-netty-standalone:/pulsar/conf/ - - ``` - -4. Download the Netty source connector. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - curl -O http://mirror-hk.koddos.net/apache/pulsar/pulsar-{version}/connectors/pulsar-io-netty-{version}.nar - - ``` - -5. Start the Netty source connector. 
- - ```bash - - $ ./bin/pulsar-admin sources localrun \ - --archive pulsar-io-@pulsar:version@.nar \ - --tenant public \ - --namespace default \ - --name netty \ - --destination-topic-name netty-topic \ - --source-config-file netty-source-config.yaml \ - --parallelism 1 - - ``` - -6. Consume data. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - - $ ./bin/pulsar-client consume -t Exclusive -s netty-sub netty-topic -n 0 - - ``` - -7. Open another terminal window to send data to the Netty source. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - - $ curl -X POST --data 'hello, world!' http://127.0.0.1:10999/ - - ``` - -8. The following information appears on the consumer terminal window. - - ```bash - - ----- got message ----- - hello, world! - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/io-nsq-source.md b/site2/website/versioned_docs/version-2.8.3-deprecated/io-nsq-source.md deleted file mode 100644 index b61e7e100c22e1..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/io-nsq-source.md +++ /dev/null @@ -1,21 +0,0 @@ ---- -id: io-nsq-source -title: NSQ source connector -sidebar_label: "NSQ source connector" -original_id: io-nsq-source ---- - -The NSQ source connector receives messages from NSQ topics -and writes messages to Pulsar topics. - -## Configuration - -The configuration of the NSQ source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `lookupds` |String| true | " " (empty string) | A comma-separated list of nsqlookupds to connect to. | -| `topic` | String|true | " " (empty string) | The NSQ topic to transport. | -| `channel` | String |false | pulsar-transport-{$topic} | The channel to consume from on the provided NSQ topic. | \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/io-overview.md b/site2/website/versioned_docs/version-2.8.3-deprecated/io-overview.md deleted file mode 100644 index 3db5ee34042d3f..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/io-overview.md +++ /dev/null @@ -1,164 +0,0 @@ ---- -id: io-overview -title: Pulsar connector overview -sidebar_label: "Overview" -original_id: io-overview ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -Messaging systems are most powerful when you can easily use them with external systems like databases and other messaging systems. - -**Pulsar IO connectors** enable you to easily create, deploy, and manage connectors that interact with external systems, such as [Apache Cassandra](https://cassandra.apache.org), [Aerospike](https://www.aerospike.com), and many others. - - -## Concept - -Pulsar IO connectors come in two types: **source** and **sink**. - -This diagram illustrates the relationship between source, Pulsar, and sink: - -![Pulsar IO diagram](/assets/pulsar-io.png "Pulsar IO connectors (sources and sinks)") - - -### Source - -> Sources **feed data from external systems into Pulsar**. - -Common sources include other messaging systems and firehose-style data pipeline APIs. - -For the complete list of Pulsar built-in source connectors, see [source connector](io-connectors.md#source-connector). - -### Sink - -> Sinks **feed data from Pulsar into external systems**. - -Common sinks include other messaging systems and SQL and NoSQL databases. 
- -For the complete list of Pulsar built-in sink connectors, see [sink connector](io-connectors.md#sink-connector). - -## Processing guarantee - -Processing guarantees are used to handle errors when writing messages to Pulsar topics. - -> Pulsar connectors and Functions use the **same** processing guarantees as below. - -Delivery semantic | Description -:------------------|:------- -`at-most-once` | Each message sent to a connector is to be **processed once** or **not to be processed**. -`at-least-once` | Each message sent to a connector is to be **processed once** or **more than once**. -`effectively-once` | Each message sent to a connector has **one output associated** with it. - -> Processing guarantees for connectors not just rely on Pulsar guarantee but also **relate to external systems**, that is, **the implementation of source and sink**. - -* Source: Pulsar ensures that writing messages to Pulsar topics respects to the processing guarantees. It is within Pulsar's control. - -* Sink: the processing guarantees rely on the sink implementation. If the sink implementation does not handle retries in an idempotent way, the sink does not respect to the processing guarantees. - -### Set - -When creating a connector, you can set the processing guarantee with the following semantics: - -* ATLEAST_ONCE - -* ATMOST_ONCE - -* EFFECTIVELY_ONCE - -> If `--processing-guarantees` is not specified when creating a connector, the default semantic is `ATLEAST_ONCE`. - -Here takes **Admin CLI** as an example. For more information about **REST API** or **JAVA Admin API**, see [here](io-use.md#create). - -````mdx-code-block - - - - -```bash - -$ bin/pulsar-admin sources create \ - --processing-guarantees ATMOST_ONCE \ - # Other source configs - -``` - -For more information about the options of `pulsar-admin sources create`, see [here](reference-connector-admin.md#create). - - - - -```bash - -$ bin/pulsar-admin sinks create \ - --processing-guarantees EFFECTIVELY_ONCE \ - # Other sink configs - -``` - -For more information about the options of `pulsar-admin sinks create`, see [here](reference-connector-admin.md#create-1). - - - - -```` - -### Update - -After creating a connector, you can update the processing guarantee with the following semantics: - -* ATLEAST_ONCE - -* ATMOST_ONCE - -* EFFECTIVELY_ONCE - -Here takes **Admin CLI** as an example. For more information about **REST API** or **JAVA Admin API**, see [here](io-use.md#create). - -````mdx-code-block - - - - -```bash - -$ bin/pulsar-admin sources update \ - --processing-guarantees EFFECTIVELY_ONCE \ - # Other source configs - -``` - -For more information about the options of `pulsar-admin sources update`, see [here](reference-connector-admin.md#update). - - - - -```bash - -$ bin/pulsar-admin sinks update \ - --processing-guarantees ATMOST_ONCE \ - # Other sink configs - -``` - -For more information about the options of `pulsar-admin sinks update`, see [here](reference-connector-admin.md#update-1). - - - - -```` - - -## Work with connector - -You can manage Pulsar connectors (for example, create, update, start, stop, restart, reload, delete and perform other operations on connectors) via the [Connector Admin CLI](reference-connector-admin.md) with [sources](io-cli.md#sources) and [sinks](io-cli.md#sinks) subcommands. - -Connectors (sources and sinks) and Functions are components of instances, and they all run on Functions workers. 
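-
-On a standalone deployment you can see that worker directly (a sketch, assuming the default admin port 8080):
-
-```bash
-
-# Ask the broker which functions workers form the cluster; on
-# standalone this returns the single embedded worker.
-$ curl -s http://localhost:8080/admin/v2/worker/cluster
-
-```
-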
When managing a source, sink or function via [Connector Admin CLI](reference-connector-admin.md) or [Functions Admin CLI](functions-cli.md), an instance is started on a worker. For more information, see [Functions worker](functions-worker.md#run-functions-worker-separately). - diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/io-quickstart.md b/site2/website/versioned_docs/version-2.8.3-deprecated/io-quickstart.md deleted file mode 100644 index 8474c93f51336d..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/io-quickstart.md +++ /dev/null @@ -1,963 +0,0 @@ ---- -id: io-quickstart -title: How to connect Pulsar to database -sidebar_label: "Get started" -original_id: io-quickstart ---- - -This tutorial provides a hands-on look at how you can move data out of Pulsar without writing a single line of code. - -It is helpful to review the [concepts](io-overview.md) for Pulsar I/O with running the steps in this guide to gain a deeper understanding. - -At the end of this tutorial, you are able to: - -- [Connect Pulsar to Cassandra](#Connect-Pulsar-to-Cassandra) - -- [Connect Pulsar to PostgreSQL](#Connect-Pulsar-to-PostgreSQL) - -:::tip - -* These instructions assume you are running Pulsar in [standalone mode](getting-started-standalone.md). However, all -the commands used in this tutorial can be used in a multi-nodes Pulsar cluster without any changes. -* All the instructions are assumed to run at the root directory of a Pulsar binary distribution. - -::: - -## Install Pulsar and built-in connector - -Before connecting Pulsar to a database, you need to install Pulsar and the desired built-in connector. - -For more information about **how to install a standalone Pulsar and built-in connectors**, see [here](getting-started-standalone.md/#installing-pulsar). - -## Start Pulsar standalone - -1. Start Pulsar locally. - - ```bash - - bin/pulsar standalone - - ``` - - All the components of a Pulsar service are start in order. - - You can curl those pulsar service endpoints to make sure Pulsar service is up running correctly. - -2. Check Pulsar binary protocol port. - - ```bash - - telnet localhost 6650 - - ``` - -3. Check Pulsar Function cluster. - - ```bash - - curl -s http://localhost:8080/admin/v2/worker/cluster - - ``` - - **Example output** - - ```json - - [{"workerId":"c-standalone-fw-localhost-6750","workerHostname":"localhost","port":6750}] - - ``` - -4. Make sure a public tenant and a default namespace exist. - - ```bash - - curl -s http://localhost:8080/admin/v2/namespaces/public - - ``` - - **Example output** - - ```json - - ["public/default","public/functions"] - - ``` - -5. All built-in connectors should be listed as available. 
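-
-   Besides the REST call below, the same list is exposed through the admin CLI (a sketch, run from the Pulsar directory in a separate terminal):
-
-   ```bash
-   
-   # Equivalent CLI checks for the available builtin connectors
-   bin/pulsar-admin sources available-sources
-   bin/pulsar-admin sinks available-sinks
-   
-   ```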
- - ```bash - - curl -s http://localhost:8080/admin/v2/functions/connectors - - ``` - - **Example output** - - ```json - - [{"name":"aerospike","description":"Aerospike database sink","sinkClass":"org.apache.pulsar.io.aerospike.AerospikeStringSink"},{"name":"cassandra","description":"Writes data into Cassandra","sinkClass":"org.apache.pulsar.io.cassandra.CassandraStringSink"},{"name":"kafka","description":"Kafka source and sink connector","sourceClass":"org.apache.pulsar.io.kafka.KafkaStringSource","sinkClass":"org.apache.pulsar.io.kafka.KafkaBytesSink"},{"name":"kinesis","description":"Kinesis sink connector","sinkClass":"org.apache.pulsar.io.kinesis.KinesisSink"},{"name":"rabbitmq","description":"RabbitMQ source connector","sourceClass":"org.apache.pulsar.io.rabbitmq.RabbitMQSource"},{"name":"twitter","description":"Ingest data from Twitter firehose","sourceClass":"org.apache.pulsar.io.twitter.TwitterFireHose"}] - - ``` - - If an error occurs when starting Pulsar service, you may see an exception at the terminal running `pulsar/standalone`, - or you can navigate to the `logs` directory under the Pulsar directory to view the logs. - -## Connect Pulsar to Cassandra - -This section demonstrates how to connect Pulsar to Cassandra. - -:::tip - -* Make sure you have Docker installed. If you do not have one, see [install Docker](https://docs.docker.com/docker-for-mac/install/). -* The Cassandra sink connector reads messages from Pulsar topics and writes the messages into Cassandra tables. For more information, see [Cassandra sink connector](io-cassandra-sink.md). - -::: - -### Setup a Cassandra cluster - -This example uses `cassandra` Docker image to start a single-node Cassandra cluster in Docker. - -1. Start a Cassandra cluster. - - ```bash - - docker run -d --rm --name=cassandra -p 9042:9042 cassandra - - ``` - - :::note - - Before moving to the next steps, make sure the Cassandra cluster is running. - - ::: - -2. Make sure the Docker process is running. - - ```bash - - docker ps - - ``` - -3. Check the Cassandra logs to make sure the Cassandra process is running as expected. - - ```bash - - docker logs cassandra - - ``` - -4. Check the status of the Cassandra cluster. - - ```bash - - docker exec cassandra nodetool status - - ``` - - **Example output** - - ``` - - Datacenter: datacenter1 - ======================= - Status=Up/Down - |/ State=Normal/Leaving/Joining/Moving - -- Address Load Tokens Owns (effective) Host ID Rack - UN 172.17.0.2 103.67 KiB 256 100.0% af0e4b2f-84e0-4f0b-bb14-bd5f9070ff26 rack1 - - ``` - -5. Use `cqlsh` to connect to the Cassandra cluster. - - ```bash - - $ docker exec -ti cassandra cqlsh localhost - Connected to Test Cluster at localhost:9042. - [cqlsh 5.0.1 | Cassandra 3.11.2 | CQL spec 3.4.4 | Native protocol v4] - Use HELP for help. - cqlsh> - - ``` - -6. Create a keyspace `pulsar_test_keyspace`. - - ```bash - - cqlsh> CREATE KEYSPACE pulsar_test_keyspace WITH replication = {'class':'SimpleStrategy', 'replication_factor':1}; - - ``` - -7. Create a table `pulsar_test_table`. - - ```bash - - cqlsh> USE pulsar_test_keyspace; - cqlsh:pulsar_test_keyspace> CREATE TABLE pulsar_test_table (key text PRIMARY KEY, col text); - - ``` - -### Configure a Cassandra sink - -Now that we have a Cassandra cluster running locally. - -In this section, you need to configure a Cassandra sink connector. - -To run a Cassandra sink connector, you need to prepare a configuration file including the information that Pulsar connector runtime needs to know. 
- -For example, how Pulsar connector can find the Cassandra cluster, what is the keyspace and the table that Pulsar connector uses for writing Pulsar messages to, and so on. - -You can create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "roots": "localhost:9042", - "keyspace": "pulsar_test_keyspace", - "columnFamily": "pulsar_test_table", - "keyname": "key", - "columnName": "col" - } - - ``` - -* YAML - - ```yaml - - configs: - roots: "localhost:9042" - keyspace: "pulsar_test_keyspace" - columnFamily: "pulsar_test_table" - keyname: "key" - columnName: "col" - - ``` - -For more information, see [Cassandra sink connector](io-cassandra-sink.md). - -### Create a Cassandra sink - -You can use the [Connector Admin CLI](io-cli.md) -to create a sink connector and perform other operations on them. - -Run the following command to create a Cassandra sink connector with sink type _cassandra_ and the config file _examples/cassandra-sink.yml_ created previously. - -#### Note -> The `sink-type` parameter of the currently built-in connectors is determined by the setting of the `name` parameter specified in the pulsar-io.yaml file. - -```bash - -bin/pulsar-admin sinks create \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink \ - --sink-type cassandra \ - --sink-config-file examples/cassandra-sink.yml \ - --inputs test_cassandra - -``` - -Once the command is executed, Pulsar creates the sink connector _cassandra-test-sink_. - -This sink connector runs -as a Pulsar Function and writes the messages produced in the topic _test_cassandra_ to the Cassandra table _pulsar_test_table_. - -### Inspect a Cassandra sink - -You can use the [Connector Admin CLI](io-cli.md) -to monitor a connector and perform other operations on it. - -* Get the information of a Cassandra sink. - - ```bash - - bin/pulsar-admin sinks get \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink - - ``` - - **Example output** - - ```json - - { - "tenant": "public", - "namespace": "default", - "name": "cassandra-test-sink", - "className": "org.apache.pulsar.io.cassandra.CassandraStringSink", - "inputSpecs": { - "test_cassandra": { - "isRegexPattern": false - } - }, - "configs": { - "roots": "localhost:9042", - "keyspace": "pulsar_test_keyspace", - "columnFamily": "pulsar_test_table", - "keyname": "key", - "columnName": "col" - }, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "autoAck": true, - "archive": "builtin://cassandra" - } - - ``` - -* Check the status of a Cassandra sink. - - ```bash - - bin/pulsar-admin sinks status \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink - - ``` - - **Example output** - - ```json - - { - "numInstances" : 1, - "numRunning" : 1, - "instances" : [ { - "instanceId" : 0, - "status" : { - "running" : true, - "error" : "", - "numRestarts" : 0, - "numReadFromPulsar" : 0, - "numSystemExceptions" : 0, - "latestSystemExceptions" : [ ], - "numSinkExceptions" : 0, - "latestSinkExceptions" : [ ], - "numWrittenToSink" : 0, - "lastReceivedTime" : 0, - "workerId" : "c-standalone-fw-localhost-8080" - } - } ] - } - - ``` - -### Verify a Cassandra sink - -1. Produce some messages to the input topic of the Cassandra sink _test_cassandra_. - - ```bash - - for i in {0..9}; do bin/pulsar-client produce -m "key-$i" -n 1 test_cassandra; done - - ``` - -2. Inspect the status of the Cassandra sink _test_cassandra_. 
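-
-   You can use the CLI command below, or query the equivalent REST endpoint (a sketch, assuming the standalone web service listens on the default port 8080):
-
-   ```bash
-   
-   # The same status counters over the REST API
-   curl -s http://localhost:8080/admin/v3/sinks/public/default/cassandra-test-sink/status
-   
-   ```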
- - ```bash - - bin/pulsar-admin sinks status \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink - - ``` - - You can see 10 messages are processed by the Cassandra sink _test_cassandra_. - - **Example output** - - ```json - - { - "numInstances" : 1, - "numRunning" : 1, - "instances" : [ { - "instanceId" : 0, - "status" : { - "running" : true, - "error" : "", - "numRestarts" : 0, - "numReadFromPulsar" : 10, - "numSystemExceptions" : 0, - "latestSystemExceptions" : [ ], - "numSinkExceptions" : 0, - "latestSinkExceptions" : [ ], - "numWrittenToSink" : 10, - "lastReceivedTime" : 1551685489136, - "workerId" : "c-standalone-fw-localhost-8080" - } - } ] - } - - ``` - -3. Use `cqlsh` to connect to the Cassandra cluster. - - ```bash - - docker exec -ti cassandra cqlsh localhost - - ``` - -4. Check the data of the Cassandra table _pulsar_test_table_. - - ```bash - - cqlsh> use pulsar_test_keyspace; - cqlsh:pulsar_test_keyspace> select * from pulsar_test_table; - - key | col - --------+-------- - key-5 | key-5 - key-0 | key-0 - key-9 | key-9 - key-2 | key-2 - key-1 | key-1 - key-3 | key-3 - key-6 | key-6 - key-7 | key-7 - key-4 | key-4 - key-8 | key-8 - - ``` - -### Delete a Cassandra Sink - -You can use the [Connector Admin CLI](io-cli.md) -to delete a connector and perform other operations on it. - -```bash - -bin/pulsar-admin sinks delete \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink - -``` - -## Connect Pulsar to PostgreSQL - -This section demonstrates how to connect Pulsar to PostgreSQL. - -:::tip - -* Make sure you have Docker installed. If you do not have one, see [install Docker](https://docs.docker.com/docker-for-mac/install/). -* The JDBC sink connector pulls messages from Pulsar topics and persists the messages to ClickHouse, MariaDB, PostgreSQL, or SQlite. - -::: - ->For more information, see [JDBC sink connector](io-jdbc-sink.md). - - -### Setup a PostgreSQL cluster - -This example uses the PostgreSQL 12 docker image to start a single-node PostgreSQL cluster in Docker. - -1. Pull the PostgreSQL 12 image from Docker. - - ```bash - - $ docker pull postgres:12 - - ``` - -2. Start PostgreSQL. - - ```bash - - $ docker run -d -it --rm \ - --name pulsar-postgres \ - -p 5432:5432 \ - -e POSTGRES_PASSWORD=password \ - -e POSTGRES_USER=postgres \ - postgres:12 - - ``` - - #### Tip - - Flag | Description | This example - ---|---|---| - `-d` | To start a container in detached mode. | / - `-it` | Keep STDIN open even if not attached and allocate a terminal. | / - `--rm` | Remove the container automatically when it exits. | / - `-name` | Assign a name to the container. | This example specifies _pulsar-postgres_ for the container. - `-p` | Publish the port of the container to the host. | This example publishes the port _5432_ of the container to the host. - `-e` | Set environment variables. | This example sets the following variables:
    - The password for the user is _password_.
    - The name for the user is _postgres_. - - :::tip - - For more information about Docker commands, see [Docker CLI](https://docs.docker.com/engine/reference/commandline/run/). - - ::: - -3. Check if PostgreSQL has been started successfully. - - ```bash - - $ docker logs -f pulsar-postgres - - ``` - - PostgreSQL has been started successfully if the following message appears. - - ```text - - 2020-05-11 20:09:24.492 UTC [1] LOG: starting PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit - 2020-05-11 20:09:24.492 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 - 2020-05-11 20:09:24.492 UTC [1] LOG: listening on IPv6 address "::", port 5432 - 2020-05-11 20:09:24.499 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" - 2020-05-11 20:09:24.523 UTC [55] LOG: database system was shut down at 2020-05-11 20:09:24 UTC - 2020-05-11 20:09:24.533 UTC [1] LOG: database system is ready to accept connections - - ``` - -4. Access to PostgreSQL. - - ```bash - - $ docker exec -it pulsar-postgres /bin/bash - - ``` - -5. Create a PostgreSQL table _pulsar_postgres_jdbc_sink_. - - ```bash - - $ psql -U postgres postgres - - postgres=# create table if not exists pulsar_postgres_jdbc_sink - ( - id serial PRIMARY KEY, - name VARCHAR(255) NOT NULL - ); - - ``` - -### Configure a JDBC sink - -Now we have a PostgreSQL running locally. - -In this section, you need to configure a JDBC sink connector. - -1. Add a configuration file. - - To run a JDBC sink connector, you need to prepare a YAML configuration file including the information that Pulsar connector runtime needs to know. - - For example, how Pulsar connector can find the PostgreSQL cluster, what is the JDBC URL and the table that Pulsar connector uses for writing messages to. - - Create a _pulsar-postgres-jdbc-sink.yaml_ file, copy the following contents to this file, and place the file in the `pulsar/connectors` folder. - - ```yaml - - configs: - userName: "postgres" - password: "password" - jdbcUrl: "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink" - tableName: "pulsar_postgres_jdbc_sink" - - ``` - -2. Create a schema. - - Create a _avro-schema_ file, copy the following contents to this file, and place the file in the `pulsar/connectors` folder. - - ```json - - { - "type": "AVRO", - "schema": "{\"type\":\"record\",\"name\":\"Test\",\"fields\":[{\"name\":\"id\",\"type\":[\"null\",\"int\"]},{\"name\":\"name\",\"type\":[\"null\",\"string\"]}]}", - "properties": {} - } - - ``` - - :::tip - - For more information about AVRO, see [Apache Avro](https://avro.apache.org/docs/1.9.1/). - - ::: - -3. Upload a schema to a topic. - - This example uploads the _avro-schema_ schema to the _pulsar-postgres-jdbc-sink-topic_ topic. - - ```bash - - $ bin/pulsar-admin schemas upload pulsar-postgres-jdbc-sink-topic -f ./connectors/avro-schema - - ``` - -4. Check if the schema has been uploaded successfully. - - ```bash - - $ bin/pulsar-admin schemas get pulsar-postgres-jdbc-sink-topic - - ``` - - The schema has been uploaded successfully if the following message appears. - - ```json - - {"name":"pulsar-postgres-jdbc-sink-topic","schema":"{\"type\":\"record\",\"name\":\"Test\",\"fields\":[{\"name\":\"id\",\"type\":[\"null\",\"int\"]},{\"name\":\"name\",\"type\":[\"null\",\"string\"]}]}","type":"AVRO","properties":{}} - - ``` - -### Create a JDBC sink - -You can use the [Connector Admin CLI](io-cli.md) -to create a sink connector and perform other operations on it. 
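-
-The command below points `--archive` at the JDBC NAR under `./connectors`; it is worth confirming that the archive is actually there first (a sketch, assuming the builtin connectors were installed as described earlier):
-
-```bash
-
-# The file listed here must match the --archive value in the create command
-$ ls ./connectors/pulsar-io-jdbc-postgres-*.nar
-
-```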
- -This example creates a sink connector and specifies the desired information. - -```bash - -$ bin/pulsar-admin sinks create \ ---archive ./connectors/pulsar-io-jdbc-postgres-@pulsar:version@.nar \ ---inputs pulsar-postgres-jdbc-sink-topic \ ---name pulsar-postgres-jdbc-sink \ ---sink-config-file ./connectors/pulsar-postgres-jdbc-sink.yaml \ ---parallelism 1 - -``` - -Once the command is executed, Pulsar creates a sink connector _pulsar-postgres-jdbc-sink_. - -This sink connector runs as a Pulsar Function and writes the messages produced in the topic _pulsar-postgres-jdbc-sink-topic_ to the PostgreSQL table _pulsar_postgres_jdbc_sink_. - - #### Tip - - Flag | Description | This example - ---|---|---| - `--archive` | The path to the archive file for the sink. | _pulsar-io-jdbc-postgres-@pulsar:version@.nar_ | - `--inputs` | The input topic(s) of the sink.

    Multiple topics can be specified as a comma-separated list.|| - `--name` | The name of the sink. | _pulsar-postgres-jdbc-sink_ | - `--sink-config-file` | The path to a YAML config file specifying the configuration of the sink. | _pulsar-postgres-jdbc-sink.yaml_ | - `--parallelism` | The parallelism factor of the sink.

    For example, the number of sink instances to run. | _1_ | - -:::tip - -For more information about `pulsar-admin sinks create options`, see [here](io-cli.md#sinks). - -::: - -The sink has been created successfully if the following message appears. - -```bash - -"Created successfully" - -``` - -### Inspect a JDBC sink - -You can use the [Connector Admin CLI](io-cli.md) -to monitor a connector and perform other operations on it. - -* List all running JDBC sink(s). - - ```bash - - $ bin/pulsar-admin sinks list \ - --tenant public \ - --namespace default - - ``` - - :::tip - - For more information about `pulsar-admin sinks list options`, see [here](io-cli.md/#list-1). - - ::: - - The result shows that only the _postgres-jdbc-sink_ sink is running. - - ```json - - [ - "pulsar-postgres-jdbc-sink" - ] - - ``` - -* Get the information of a JDBC sink. - - ```bash - - $ bin/pulsar-admin sinks get \ - --tenant public \ - --namespace default \ - --name pulsar-postgres-jdbc-sink - - ``` - - :::tip - - For more information about `pulsar-admin sinks get options`, see [here](io-cli.md/#get-1). - - ::: - - The result shows the information of the sink connector, including tenant, namespace, topic and so on. - - ```json - - { - "tenant": "public", - "namespace": "default", - "name": "pulsar-postgres-jdbc-sink", - "className": "org.apache.pulsar.io.jdbc.PostgresJdbcAutoSchemaSink", - "inputSpecs": { - "pulsar-postgres-jdbc-sink-topic": { - "isRegexPattern": false - } - }, - "configs": { - "password": "password", - "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink", - "userName": "postgres", - "tableName": "pulsar_postgres_jdbc_sink" - }, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "autoAck": true - } - - ``` - -* Get the status of a JDBC sink - - ```bash - - $ bin/pulsar-admin sinks status \ - --tenant public \ - --namespace default \ - --name pulsar-postgres-jdbc-sink - - ``` - - :::tip - - For more information about `pulsar-admin sinks status options`, see [here](io-cli.md/#status-1). - - ::: - - The result shows the current status of sink connector, including the number of instance, running status, worker ID and so on. - - ```json - - { - "numInstances" : 1, - "numRunning" : 1, - "instances" : [ { - "instanceId" : 0, - "status" : { - "running" : true, - "error" : "", - "numRestarts" : 0, - "numReadFromPulsar" : 0, - "numSystemExceptions" : 0, - "latestSystemExceptions" : [ ], - "numSinkExceptions" : 0, - "latestSinkExceptions" : [ ], - "numWrittenToSink" : 0, - "lastReceivedTime" : 0, - "workerId" : "c-standalone-fw-192.168.2.52-8080" - } - } ] - } - - ``` - -### Stop a JDBC sink - -You can use the [Connector Admin CLI](io-cli.md) -to stop a connector and perform other operations on it. - -```bash - -$ bin/pulsar-admin sinks stop \ ---tenant public \ ---namespace default \ ---name pulsar-postgres-jdbc-sink - -``` - -:::tip - -For more information about `pulsar-admin sinks stop options`, see [here](io-cli.md/#stop-1). - -::: - -The sink instance has been stopped successfully if the following message disappears. - -```bash - -"Stopped successfully" - -``` - -### Restart a JDBC sink - -You can use the [Connector Admin CLI](io-cli.md) -to restart a connector and perform other operations on it. - -```bash - -$ bin/pulsar-admin sinks restart \ ---tenant public \ ---namespace default \ ---name pulsar-postgres-jdbc-sink - -``` - -:::tip - -For more information about `pulsar-admin sinks restart options`, see [here](io-cli.md/#restart-1). 
- -::: - -The sink instance has been started successfully if the following message disappears. - -```bash - -"Started successfully" - -``` - -:::tip - -* Optionally, you can run a standalone sink connector using `pulsar-admin sinks localrun options`. -Note that `pulsar-admin sinks localrun options` **runs a sink connector locally**, while `pulsar-admin sinks start options` **starts a sink connector in a cluster**. -* For more information about `pulsar-admin sinks localrun options`, see [here](io-cli.md#localrun-1). - -::: - -### Update a JDBC sink - -You can use the [Connector Admin CLI](io-cli.md) -to update a connector and perform other operations on it. - -This example updates the parallelism of the _pulsar-postgres-jdbc-sink_ sink connector to 2. - -```bash - -$ bin/pulsar-admin sinks update \ ---name pulsar-postgres-jdbc-sink \ ---parallelism 2 - -``` - -:::tip - -For more information about `pulsar-admin sinks update options`, see [here](io-cli.md/#update-1). - -::: - -The sink connector has been updated successfully if the following message disappears. - -```bash - -"Updated successfully" - -``` - -This example double-checks the information. - -```bash - -$ bin/pulsar-admin sinks get \ ---tenant public \ ---namespace default \ ---name pulsar-postgres-jdbc-sink - -``` - -The result shows that the parallelism is 2. - -```json - -{ - "tenant": "public", - "namespace": "default", - "name": "pulsar-postgres-jdbc-sink", - "className": "org.apache.pulsar.io.jdbc.PostgresJdbcAutoSchemaSink", - "inputSpecs": { - "pulsar-postgres-jdbc-sink-topic": { - "isRegexPattern": false - } - }, - "configs": { - "password": "password", - "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink", - "userName": "postgres", - "tableName": "pulsar_postgres_jdbc_sink" - }, - "parallelism": 2, - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "autoAck": true -} - -``` - -### Delete a JDBC sink - -You can use the [Connector Admin CLI](io-cli.md) -to delete a connector and perform other operations on it. - -This example deletes the _pulsar-postgres-jdbc-sink_ sink connector. - -```bash - -$ bin/pulsar-admin sinks delete \ ---tenant public \ ---namespace default \ ---name pulsar-postgres-jdbc-sink - -``` - -:::tip - -For more information about `pulsar-admin sinks delete options`, see [here](io-cli.md/#delete-1). - -::: - -The sink connector has been deleted successfully if the following message appears. - -```text - -"Deleted successfully" - -``` - -This example double-checks the status of the sink connector. - -```bash - -$ bin/pulsar-admin sinks get \ ---tenant public \ ---namespace default \ ---name pulsar-postgres-jdbc-sink - -``` - -The result shows that the sink connector does not exist. - -```text - -HTTP 404 Not Found - -Reason: Sink pulsar-postgres-jdbc-sink doesn't exist - -``` - diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/io-rabbitmq-sink.md b/site2/website/versioned_docs/version-2.8.3-deprecated/io-rabbitmq-sink.md deleted file mode 100644 index d7fda99460dc97..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/io-rabbitmq-sink.md +++ /dev/null @@ -1,85 +0,0 @@ ---- -id: io-rabbitmq-sink -title: RabbitMQ sink connector -sidebar_label: "RabbitMQ sink connector" -original_id: io-rabbitmq-sink ---- - -The RabbitMQ sink connector pulls messages from Pulsar topics -and persist the messages to RabbitMQ queues. - - -## Configuration - -The configuration of the RabbitMQ sink connector has the following properties. 
- - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `connectionName` |String| true | " " (empty string) | The connection name. | -| `host` | String| true | " " (empty string) | The RabbitMQ host. | -| `port` | int |true | 5672 | The RabbitMQ port. | -| `virtualHost` |String|true | / | The virtual host used to connect to RabbitMQ. | -| `username` | String|false | guest | The username used to authenticate to RabbitMQ. | -| `password` | String|false | guest | The password used to authenticate to RabbitMQ. | -| `queueName` | String|true | " " (empty string) | The RabbitMQ queue name that messages should be read from or written to. | -| `requestedChannelMax` | int|false | 0 | The initially requested maximum channel number.

    0 means unlimited. | -| `requestedFrameMax` | int|false |0 | The initially requested maximum frame size in octets.

    0 means unlimited. | -| `connectionTimeout` | int|false | 60000 | The timeout of TCP connection establishment in milliseconds.

    0 means infinite. | 
-| `handshakeTimeout` | int|false | 10000 | The timeout of AMQP0-9-1 protocol handshake in milliseconds. | 
-| `requestedHeartbeat` | int|false | 60 | The requested heartbeat timeout in seconds. | 
-| `exchangeName` | String|true | " " (empty string) | The exchange to publish messages to. | 
-| `routingKey` | String|true | " " (empty string) | The routing key used to publish messages. | 
-
-
-### Example
-
-Before using the RabbitMQ sink connector, you need to create a configuration file through one of the following methods.
-
-* JSON
-
-  ```json
-  
-  {
-    "host": "localhost",
-    "port": "5672",
-    "virtualHost": "/",
-    "username": "guest",
-    "password": "guest",
-    "queueName": "test-queue",
-    "connectionName": "test-connection",
-    "requestedChannelMax": "0",
-    "requestedFrameMax": "0",
-    "connectionTimeout": "60000",
-    "handshakeTimeout": "10000",
-    "requestedHeartbeat": "60",
-    "exchangeName": "test-exchange",
-    "routingKey": "test-key"
-  }
-
-  ```
-
-* YAML
-
-  ```yaml
-  
-  configs:
-    host: "localhost"
-    port: 5672
-    virtualHost: "/"
-    username: "guest"
-    password: "guest"
-    queueName: "test-queue"
-    connectionName: "test-connection"
-    requestedChannelMax: 0
-    requestedFrameMax: 0
-    connectionTimeout: 60000
-    handshakeTimeout: 10000
-    requestedHeartbeat: 60
-    exchangeName: "test-exchange"
-    routingKey: "test-key"
-
-  ```
-
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/io-rabbitmq-source.md b/site2/website/versioned_docs/version-2.8.3-deprecated/io-rabbitmq-source.md
deleted file mode 100644
index c2c31cc97d10d9..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/io-rabbitmq-source.md
+++ /dev/null
@@ -1,85 +0,0 @@
----
-id: io-rabbitmq-source
-title: RabbitMQ source connector
-sidebar_label: "RabbitMQ source connector"
-original_id: io-rabbitmq-source
----
-
-The RabbitMQ source connector receives messages from RabbitMQ clusters
-and writes messages to Pulsar topics.
-
-## Configuration
-
-The configuration of the RabbitMQ source connector has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `connectionName` |String| true | " " (empty string) | The connection name. |
-| `host` | String| true | " " (empty string) | The RabbitMQ host. |
-| `port` | int |true | 5672 | The RabbitMQ port. |
-| `virtualHost` |String|true | / | The virtual host used to connect to RabbitMQ. |
-| `username` | String|false | guest | The username used to authenticate to RabbitMQ. |
-| `password` | String|false | guest | The password used to authenticate to RabbitMQ. |
-| `queueName` | String|true | " " (empty string) | The RabbitMQ queue name that messages should be read from or written to. |
-| `requestedChannelMax` | int|false | 0 | The initially requested maximum channel number.

    0 means unlimited. | -| `requestedFrameMax` | int|false |0 | The initially requested maximum frame size in octets.

    0 means unlimited. | -| `connectionTimeout` | int|false | 60000 | The timeout of TCP connection establishment in milliseconds.

    0 means infinite. | -| `handshakeTimeout` | int|false | 10000 | The timeout of AMQP0-9-1 protocol handshake in milliseconds. | -| `requestedHeartbeat` | int|false | 60 | The requested heartbeat timeout in seconds. | -| `prefetchCount` | int|false | 0 | The maximum number of messages that the server delivers.

    0 means unlimited. | -| `prefetchGlobal` | boolean|false | false |Whether the setting should be applied to the entire channel rather than each consumer. | -| `passive` | boolean|false | false | Whether the rabbitmq consumer should create its own queue or bind to an existing one. | - -### Example - -Before using the RabbitMQ source connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "host": "localhost", - "port": "5672", - "virtualHost": "/", - "username": "guest", - "password": "guest", - "queueName": "test-queue", - "connectionName": "test-connection", - "requestedChannelMax": "0", - "requestedFrameMax": "0", - "connectionTimeout": "60000", - "handshakeTimeout": "10000", - "requestedHeartbeat": "60", - "prefetchCount": "0", - "prefetchGlobal": "false", - "passive": "false" - } - - ``` - -* YAML - - ```yaml - - configs: - host: "localhost" - port: 5672 - virtualHost: "/" - username: "guest" - password: "guest" - queueName: "test-queue" - connectionName: "test-connection" - requestedChannelMax: 0 - requestedFrameMax: 0 - connectionTimeout: 60000 - handshakeTimeout: 10000 - requestedHeartbeat: 60 - prefetchCount: 0 - prefetchGlobal: "false" - passive: "false" - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/io-redis-sink.md b/site2/website/versioned_docs/version-2.8.3-deprecated/io-redis-sink.md deleted file mode 100644 index 793d74a5f2cb38..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/io-redis-sink.md +++ /dev/null @@ -1,74 +0,0 @@ ---- -id: io-redis-sink -title: Redis sink connector -sidebar_label: "Redis sink connector" -original_id: io-redis-sink ---- - -The Redis sink connector pulls messages from Pulsar topics -and persists the messages to a Redis database. - - - -## Configuration - -The configuration of the Redis sink connector has the following properties. - - - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `redisHosts` |String|true|" " (empty string) | A comma-separated list of Redis hosts to connect to. | -| `redisPassword` |String|false|" " (empty string) | The password used to connect to Redis. | -| `redisDatabase` | int|true|0 | The Redis database to connect to. | -| `clientMode` |String| false|Standalone | The client mode when interacting with Redis cluster.

    Below are the available options:
  1332. Standalone
  1333. Cluster
  1334. | -| `autoReconnect` | boolean|false|true | Whether the Redis client automatically reconnect or not. | -| `requestQueue` | int|false|2147483647 | The maximum number of queued requests to Redis. | -| `tcpNoDelay` |boolean| false| false | Whether to enable TCP with no delay or not. | -| `keepAlive` | boolean|false | false |Whether to enable a keepalive to Redis or not. | -| `connectTimeout` |long| false|10000 | The time to wait before timing out when connecting in milliseconds. | -| `operationTimeout` | long|false|10000 | The time before an operation is marked as timed out in milliseconds . | -| `batchTimeMs` | int|false|1000 | The Redis operation time in milliseconds. | -| `batchSize` | int|false|200 | The batch size of writing to Redis database. | - - -### Example - -Before using the Redis sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "redisHosts": "localhost:6379", - "redisPassword": "fake@123", - "redisDatabase": "1", - "clientMode": "Standalone", - "operationTimeout": "2000", - "batchSize": "100", - "batchTimeMs": "1000", - "connectTimeout": "3000" - } - - ``` - -* YAML - - ```yaml - - { - redisHosts: "localhost:6379" - redisPassword: "fake@123" - redisDatabase: 1 - clientMode: "Standalone" - operationTimeout: 2000 - batchSize: 100 - batchTimeMs: 1000 - connectTimeout: 3000 - } - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/io-solr-sink.md b/site2/website/versioned_docs/version-2.8.3-deprecated/io-solr-sink.md deleted file mode 100644 index df2c3612c38eb6..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/io-solr-sink.md +++ /dev/null @@ -1,65 +0,0 @@ ---- -id: io-solr-sink -title: Solr sink connector -sidebar_label: "Solr sink connector" -original_id: io-solr-sink ---- - -The Solr sink connector pulls messages from Pulsar topics -and persists the messages to Solr collections. - - - -## Configuration - -The configuration of the Solr sink connector has the following properties. - - - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `solrUrl` | String|true|" " (empty string) |
  1335. Comma-separated zookeeper hosts with chroot used in the SolrCloud mode.
    **Example**
    `localhost:2181,localhost:2182/chroot`

  1336. URL to connect to Solr used in standalone mode.
    **Example**
    `localhost:8983/solr`
  1337. | -| `solrMode` | String|true|SolrCloud| The client mode when interacting with the Solr cluster.

    Below are the available options:
  1338. Standalone
  1339. SolrCloud
  1340. | -| `solrCollection` |String|true| " " (empty string) | Solr collection name to which records need to be written. | -| `solrCommitWithinMs` |int| false|10 | The time within million seconds for Solr updating commits.| -| `username` |String|false| " " (empty string) | The username for basic authentication.

    **Note: `username` is case-sensitive.** | 
-| `password` | String|false| " " (empty string) | The password for basic authentication.

    **Note: `password` is case-sensitive.** | - - - -### Example - -Before using the Solr sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "solrUrl": "localhost:2181,localhost:2182/chroot", - "solrMode": "SolrCloud", - "solrCollection": "techproducts", - "solrCommitWithinMs": 100, - "username": "fakeuser", - "password": "fake@123" - } - - ``` - -* YAML - - ```yaml - - { - solrUrl: "localhost:2181,localhost:2182/chroot" - solrMode: "SolrCloud" - solrCollection: "techproducts" - solrCommitWithinMs: 100 - username: "fakeuser" - password: "fake@123" - } - - ``` - diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/io-twitter-source.md b/site2/website/versioned_docs/version-2.8.3-deprecated/io-twitter-source.md deleted file mode 100644 index 8de3504dd0fef2..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/io-twitter-source.md +++ /dev/null @@ -1,28 +0,0 @@ ---- -id: io-twitter-source -title: Twitter Firehose source connector -sidebar_label: "Twitter Firehose source connector" -original_id: io-twitter-source ---- - -The Twitter Firehose source connector receives tweets from Twitter Firehose and -writes the tweets to Pulsar topics. - -## Configuration - -The configuration of the Twitter Firehose source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `consumerKey` | String|true | " " (empty string) | The twitter OAuth consumer key.

    For more information, see [Access tokens](https://developer.twitter.com/en/docs/basics/authentication/guides/access-tokens). | 
-| `consumerSecret` | String |true | " " (empty string) | The twitter OAuth consumer secret. | 
-| `token` | String|true | " " (empty string) | The twitter OAuth token. | 
-| `tokenSecret` | String|true | " " (empty string) | The twitter OAuth token secret. | 
-| `guestimateTweetTime`|Boolean|false|false|Most firehose events have a null createdAt time.

    If `guestimateTweetTime` set to true, the connector estimates the createdTime of each firehose event to be current time. -| `clientName` | String |false | openconnector-twitter-source| The twitter firehose client name. | -| `clientHosts` |String| false | Constants.STREAM_HOST | The twitter firehose hosts to which client connects. | -| `clientBufferSize` | int|false | 50000 | The buffer size for buffering tweets fetched from twitter firehose. | - -> For more information about OAuth credentials, see [Twitter developers portal](https://developer.twitter.com/en.html). diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/io-twitter.md b/site2/website/versioned_docs/version-2.8.3-deprecated/io-twitter.md deleted file mode 100644 index 3b2f6325453c3c..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/io-twitter.md +++ /dev/null @@ -1,7 +0,0 @@ ---- -id: io-twitter -title: Twitter Firehose Connector -sidebar_label: "Twitter Firehose Connector" -original_id: io-twitter ---- - diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/io-use.md b/site2/website/versioned_docs/version-2.8.3-deprecated/io-use.md deleted file mode 100644 index da9ed746c4d372..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/io-use.md +++ /dev/null @@ -1,1787 +0,0 @@ ---- -id: io-use -title: How to use Pulsar connectors -sidebar_label: "Use" -original_id: io-use ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -This guide describes how to use Pulsar connectors. - -## Install a connector - -Pulsar bundles several [builtin connectors](io-connectors.md) used to move data in and out of commonly used systems (such as database and messaging system). Optionally, you can create and use your desired non-builtin connectors. - -:::note - -When using a non-builtin connector, you need to specify the path of a archive file for the connector. - -::: - -To set up a builtin connector, follow -the instructions [here](getting-started-standalone.md#installing-builtin-connectors). - -After the setup, the builtin connector is automatically discovered by Pulsar brokers (or function-workers), so no additional installation steps are required. - -## Configure a connector - -You can configure the following information: - -* [Configure a default storage location for a connector](#configure-a-default-storage-location-for-a-connector) - -* [Configure a connector with a YAML file](#configure-a-connector-with-yaml-file) - -### Configure a default storage location for a connector - -To configure a default folder for builtin connectors, set the `connectorsDirectory` parameter in the `./conf/functions_worker.yml` configuration file. - -**Example** - -Set the `./connectors` folder as the default storage location for builtin connectors. - -``` - -######################## -# Connectors -######################## - -connectorsDirectory: ./connectors - -``` - -### Configure a connector with a YAML file - -To configure a connector, you need to provide a YAML configuration file when creating a connector. - -The YAML configuration file tells Pulsar where to locate connectors and how to connect connectors with Pulsar topics. 
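-
-A minimal skeleton is sketched below; `tenant`, `namespace`, `name`, and the connector-specific `configs` map are the parts the examples that follow all share (the field values here are placeholders, not defaults):
-
-```bash
-
-# Scaffold a minimal connector config (illustrative values only)
-cat > my-connector.yaml <<'EOF'
-tenant: public
-namespace: default
-name: my-connector
-configs:
-  someKey: "someValue"
-EOF
-
-```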
- -**Example 1** - -Below is a YAML configuration file of a Cassandra sink, which tells Pulsar: - -* Which Cassandra cluster to connect - -* What is the `keyspace` and `columnFamily` to be used in Cassandra for collecting data - -* How to map Pulsar messages into Cassandra table key and columns - -```shell - -tenant: public -namespace: default -name: cassandra-test-sink -... -# cassandra specific config -configs: - roots: "localhost:9042" - keyspace: "pulsar_test_keyspace" - columnFamily: "pulsar_test_table" - keyname: "key" - columnName: "col" - -``` - -**Example 2** - -Below is a YAML configuration file of a Kafka source. - -```shell - -configs: - bootstrapServers: "pulsar-kafka:9092" - groupId: "test-pulsar-io" - topic: "my-topic" - sessionTimeoutMs: "10000" - autoCommitEnabled: "false" - -``` - -**Example 3** - -Below is a YAML configuration file of a PostgreSQL JDBC sink. - -```shell - -configs: - userName: "postgres" - password: "password" - jdbcUrl: "jdbc:postgresql://localhost:5432/test_jdbc" - tableName: "test_jdbc" - -``` - -## Get available connectors - -Before starting using connectors, you can perform the following operations: - -* [Reload connectors](#reload) - -* [Get a list of available connectors](#get-available-connectors) - -### `reload` - -If you add or delete a nar file in a connector folder, reload the available builtin connector before using it. - -#### Source - -Use the `reload` subcommand. - -```shell - -$ pulsar-admin sources reload - -``` - -For more information, see [`here`](io-cli.md#reload). - -#### Sink - -Use the `reload` subcommand. - -```shell - -$ pulsar-admin sinks reload - -``` - -For more information, see [`here`](io-cli.md#reload-1). - -### `available` - -After reloading connectors (optional), you can get a list of available connectors. - -#### Source - -Use the `available-sources` subcommand. - -```shell - -$ pulsar-admin sources available-sources - -``` - -#### Sink - -Use the `available-sinks` subcommand. - -```shell - -$ pulsar-admin sinks available-sinks - -``` - -## Run a connector - -To run a connector, you can perform the following operations: - -* [Create a connector](#create) - -* [Start a connector](#start) - -* [Run a connector locally](#localrun) - -### `create` - -You can create a connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Create a source connector. - -````mdx-code-block - - - - -Use the `create` subcommand. - -``` - -$ pulsar-admin sources create options - -``` - -For more information, see [here](io-cli.md#create). - - - - -Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/registerSource?version=@pulsar:version_number@} - - - - -* Create a source connector with a **local file**. - - ```java - - void createSource(SourceConfig sourceConfig, - String fileName) - throws PulsarAdminException - - ``` - - **Parameter** - - |Name|Description - |---|--- - `sourceConfig` | The source configuration object - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`createSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#createSource-SourceConfig-java.lang.String-). - -* Create a source connector using a **remote file** with a URL from which fun-pkg can be downloaded. - - ```java - - void createSourceWithUrl(SourceConfig sourceConfig, - String pkgUrl) - throws PulsarAdminException - - ``` - - Supported URLs are `http` and `file`. 
- - **Example** - - * HTTP: http://www.repo.com/fileName.jar - - * File: file:///dir/fileName.jar - - **Parameter** - - Parameter| Description - |---|--- - `sourceConfig` | The source configuration object - `pkgUrl` | URL from which pkg can be downloaded - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`createSourceWithUrl`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#createSourceWithUrl-SourceConfig-java.lang.String-). - - - - -```` - -#### Sink - -Create a sink connector. - -````mdx-code-block - - - - -Use the `create` subcommand. - -``` - -$ pulsar-admin sinks create options - -``` - -For more information, see [here](io-cli.md#create-1). - - - - -Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sinkName|operation/registerSink?version=@pulsar:version_number@} - - - - -* Create a sink connector with a **local file**. - - ```java - - void createSink(SinkConfig sinkConfig, - String fileName) - throws PulsarAdminException - - ``` - - **Parameter** - - |Name|Description - |---|--- - `sinkConfig` | The sink configuration object - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`createSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#createSink-SinkConfig-java.lang.String-). - -* Create a sink connector using a **remote file** with a URL from which fun-pkg can be downloaded. - - ```java - - void createSinkWithUrl(SinkConfig sinkConfig, - String pkgUrl) - throws PulsarAdminException - - ``` - - Supported URLs are `http` and `file`. - - **Example** - - * HTTP: http://www.repo.com/fileName.jar - - * File: file:///dir/fileName.jar - - **Parameter** - - Parameter| Description - |---|--- - `sinkConfig` | The sink configuration object - `pkgUrl` | URL from which pkg can be downloaded - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`createSinkWithUrl`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#createSinkWithUrl-SinkConfig-java.lang.String-). - - - - -```` - -### `start` - -You can start a connector using **Admin CLI** or **REST API**. - -#### Source - -Start a source connector. - -````mdx-code-block - - - - -Use the `start` subcommand. - -``` - -$ pulsar-admin sources start options - -``` - -For more information, see [here](io-cli.md#start). - - - - -* Start **all** source connectors. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/start|operation/startSource?version=@pulsar:version_number@} - -* Start a **specified** source connector. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/:instanceId/start|operation/startSource?version=@pulsar:version_number@} - - - - -```` - -#### Sink - -Start a sink connector. - -````mdx-code-block - - - - -Use the `start` subcommand. - -``` - -$ pulsar-admin sinks start options - -``` - -For more information, see [here](io-cli.md#start-1). - - - - -* Start **all** sink connectors. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sinkName/start|operation/startSink?version=@pulsar:version_number@} - -* Start a **specified** sink connector. 
- - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sourceName/:instanceId/start|operation/startSink?version=@pulsar:version_number@} - - - - -```` - -### `localrun` - -You can run a connector locally rather than deploying it on a Pulsar cluster using **Admin CLI**. - -#### Source - -Run a source connector locally. - -````mdx-code-block - - - - -Use the `localrun` subcommand. - -``` - -$ pulsar-admin sources localrun options - -``` - -For more information, see [here](io-cli.md#localrun). - - - - -```` - -#### Sink - -Run a sink connector locally. - -````mdx-code-block - - - - -Use the `localrun` subcommand. - -``` - -$ pulsar-admin sinks localrun options - -``` - -For more information, see [here](io-cli.md#localrun-1). - - - - -```` - -## Monitor a connector - -To monitor a connector, you can perform the following operations: - -* [Get the information of a connector](#get) - -* [Get the list of all running connectors](#list) - -* [Get the current status of a connector](#status) - -### `get` - -You can get the information of a connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Get the information of a source connector. - -````mdx-code-block - - - - -Use the `get` subcommand. - -``` - -$ pulsar-admin sources get options - -``` - -For more information, see [here](io-cli.md#get). - - - - -Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/getSourceInfo?version=@pulsar:version_number@} - - - - -```java - -SourceConfig getSource(String tenant, - String namespace, - String source) - throws PulsarAdminException - -``` - -**Example** - -This is a sourceConfig. - -```java - -{ - "tenant": "tenantName", - "namespace": "namespaceName", - "name": "sourceName", - "className": "className", - "topicName": "topicName", - "configs": {}, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "resources": { - "cpu": 1.0, - "ram": 1073741824, - "disk": 10737418240 - } -} - -``` - -This is a sourceConfig example. - -``` - -{ - "tenant": "public", - "namespace": "default", - "name": "debezium-mysql-source", - "className": "org.apache.pulsar.io.debezium.mysql.DebeziumMysqlSource", - "topicName": "debezium-mysql-topic", - "configs": { - "database.user": "debezium", - "database.server.id": "184054", - "database.server.name": "dbserver1", - "database.port": "3306", - "database.hostname": "localhost", - "database.password": "dbz", - "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650", - "value.converter": "org.apache.kafka.connect.json.JsonConverter", - "database.whitelist": "inventory", - "key.converter": "org.apache.kafka.connect.json.JsonConverter", - "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory", - "pulsar.service.url": "pulsar://127.0.0.1:6650", - "database.history.pulsar.topic": "history-topic2" - }, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "resources": { - "cpu": 1.0, - "ram": 1073741824, - "disk": 10737418240 - } -} - -``` - -**Exception** - -Exception name | Description -|---|--- -`PulsarAdminException.NotAuthorizedException` | You don't have the admin permission -`PulsarAdminException.NotFoundException` | Cluster doesn't exist -`PulsarAdminException` | Unexpected error - -For more information, see [`getSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#getSource-java.lang.String-java.lang.String-java.lang.String-). 
- - - - -```` - -#### Sink - -Get the information of a sink connector. - -````mdx-code-block - - - - -Use the `get` subcommand. - -``` - -$ pulsar-admin sinks get options - -``` - -For more information, see [here](io-cli.md#get-1). - - - - -Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sinks/:tenant/:namespace/:sinkName|operation/getSinkInfo?version=@pulsar:version_number@} - - - - -```java - -SinkConfig getSink(String tenant, - String namespace, - String sink) - throws PulsarAdminException - -``` - -**Example** - -This is a sinkConfig. - -```json - -{ -"tenant": "tenantName", -"namespace": "namespaceName", -"name": "sinkName", -"className": "className", -"inputSpecs": { -"topicName": { - "isRegexPattern": false -} -}, -"configs": {}, -"parallelism": 1, -"processingGuarantees": "ATLEAST_ONCE", -"retainOrdering": false, -"autoAck": true -} - -``` - -This is a sinkConfig example. - -```json - -{ - "tenant": "public", - "namespace": "default", - "name": "pulsar-postgres-jdbc-sink", - "className": "org.apache.pulsar.io.jdbc.PostgresJdbcAutoSchemaSink", - "inputSpecs": { - "pulsar-postgres-jdbc-sink-topic": { - "isRegexPattern": false - } - }, - "configs": { - "password": "password", - "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink", - "userName": "postgres", - "tableName": "pulsar_postgres_jdbc_sink" - }, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "autoAck": true -} - -``` - -**Parameter description** - -Name| Description -|---|--- -`tenant` | Tenant name -`namespace` | Namespace name -`sink` | Sink name - -For more information, see [`getSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#getSink-java.lang.String-java.lang.String-java.lang.String-). - - - - -```` - -### `list` - -You can get the list of all running connectors using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Get the list of all running source connectors. - -````mdx-code-block - - - - -Use the `list` subcommand. - -``` - -$ pulsar-admin sources list options - -``` - -For more information, see [here](io-cli.md#list). - - - - -Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sources/:tenant/:namespace|operation/listSources?version=@pulsar:version_number@} - - - - -```java - -List listSources(String tenant, - String namespace) - throws PulsarAdminException - -``` - -**Response example** - -```java - -["f1", "f2", "f3"] - -``` - -**Exception** - -Exception name | Description -|---|--- -`PulsarAdminException.NotAuthorizedException` | You don't have the admin permission -`PulsarAdminException` | Unexpected error - -For more information, see [`listSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#listSources-java.lang.String-java.lang.String-). - - - - -```` - -#### Sink - -Get the list of all running sink connectors. - -````mdx-code-block - - - - -Use the `list` subcommand. - -``` - -$ pulsar-admin sinks list options - -``` - -For more information, see [here](io-cli.md#list-1). 
-
-
-
-Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sinks/:tenant/:namespace|operation/listSinks?version=@pulsar:version_number@}
-
-
-
-```java
-
-List<String> listSinks(String tenant,
-                       String namespace)
-         throws PulsarAdminException
-
-```
-
-**Response example**
-
-```java
-
-["f1", "f2", "f3"]
-
-```
-
-**Exception**
-
-Exception name | Description
-|---|---
-`PulsarAdminException.NotAuthorizedException` | You don't have the admin permission
-`PulsarAdminException` | Unexpected error
-
-For more information, see [`listSinks`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#listSinks-java.lang.String-java.lang.String-).
-
-
-
-````
-
-### `status`
-
-You can get the current status of a connector using the **Admin CLI**, **REST API**, or **Java admin API**.
-
-#### Source
-
-Get the current status of a source connector.
-
-````mdx-code-block
-
-
-
-Use the `status` subcommand.
-
-```
-
-$ pulsar-admin sources status options
-
-```
-
-For more information, see [here](io-cli.md#status).
-
-
-
-* Get the current status of **all** source connectors.
-
-  Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sources/:tenant/:namespace/:sourceName/status|operation/getSourceStatus?version=@pulsar:version_number@}
-
-* Get the current status of a **specified** source connector.
-
-  Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sources/:tenant/:namespace/:sourceName/:instanceId/status|operation/getSourceStatus?version=@pulsar:version_number@}
-
-
-
-* Get the current status of **all** source connectors.
-
-  ```java
-
-  SourceStatus getSourceStatus(String tenant,
-                               String namespace,
-                               String source)
-              throws PulsarAdminException
-
-  ```
-
-  **Parameter**
-
-  Parameter| Description
-  |---|---
-  `tenant` | Tenant name
-  `namespace` | Namespace name
-  `source` | Source name
-
-  **Exception**
-
-  Name | Description
-  |---|---
-  `PulsarAdminException` | Unexpected error
-
-  For more information, see [`getSourceStatus`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#getSourceStatus-java.lang.String-java.lang.String-java.lang.String-).
-
-* Get the current status of a **specified** source connector.
-
-  ```java
-
-  SourceStatus.SourceInstanceStatus.SourceInstanceStatusData getSourceStatus(String tenant,
-                                                                             String namespace,
-                                                                             String source,
-                                                                             int id)
-              throws PulsarAdminException
-
-  ```
-
-  **Parameter**
-
-  Parameter| Description
-  |---|---
-  `tenant` | Tenant name
-  `namespace` | Namespace name
-  `source` | Source name
-  `id` | Source instance ID
-
-  **Exception**
-
-  Exception name | Description
-  |---|---
-  `PulsarAdminException` | Unexpected error
-
-  For more information, see [`getSourceStatus`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#getSourceStatus-java.lang.String-java.lang.String-java.lang.String-int-).
-
-
-
-````
-
-#### Sink
-
-Get the current status of a sink connector.
-
-````mdx-code-block
-
-
-
-Use the `status` subcommand.
-
-```
-
-$ pulsar-admin sinks status options
-
-```
-
-For more information, see [here](io-cli.md#status-1).
-
-
-
-* Get the current status of **all** sink connectors.
-
-  Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sinks/:tenant/:namespace/:sinkName/status|operation/getSinkStatus?version=@pulsar:version_number@}
-
-* Get the current status of a **specified** sink connector.
-
-  Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sinks/:tenant/:namespace/:sinkName/:instanceId/status|operation/getSinkInstanceStatus?version=@pulsar:version_number@}
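-
-As with sources, you can query these endpoints directly with `curl`; the address and names below are placeholders for your own setup:
-
-```shell
-
-# Status of all instances of a sink, assuming the admin service on localhost:8080.
-$ curl -s http://localhost:8080/admin/v3/sinks/public/default/pulsar-postgres-jdbc-sink/status
-
-```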
-
-
-
-* Get the current status of **all** sink connectors.
-
-  ```java
-
-  SinkStatus getSinkStatus(String tenant,
-                           String namespace,
-                           String sink)
-            throws PulsarAdminException
-
-  ```
-
-  **Parameter**
-
-  Parameter| Description
-  |---|---
-  `tenant` | Tenant name
-  `namespace` | Namespace name
-  `sink` | Sink name
-
-  **Exception**
-
-  Exception name | Description
-  |---|---
-  `PulsarAdminException` | Unexpected error
-
-  For more information, see [`getSinkStatus`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#getSinkStatus-java.lang.String-java.lang.String-java.lang.String-).
-
-* Get the current status of a **specified** sink connector.
-
-  ```java
-
-  SinkStatus.SinkInstanceStatus.SinkInstanceStatusData getSinkStatus(String tenant,
-                                                                     String namespace,
-                                                                     String sink,
-                                                                     int id)
-            throws PulsarAdminException
-
-  ```
-
-  **Parameter**
-
-  Parameter| Description
-  |---|---
-  `tenant` | Tenant name
-  `namespace` | Namespace name
-  `sink` | Sink name
-  `id` | Sink instance ID
-
-  **Exception**
-
-  Exception name | Description
-  |---|---
-  `PulsarAdminException` | Unexpected error
-
-  For more information, see [`getSinkStatus`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#getSinkStatus-java.lang.String-java.lang.String-java.lang.String-int-).
-
-
-
-````
-
-## Update a connector
-
-### `update`
-
-You can update a running connector using the **Admin CLI**, **REST API**, or **Java admin API**.
-
-#### Source
-
-Update a running Pulsar source connector.
-
-````mdx-code-block
-
-
-
-Use the `update` subcommand.
-
-```
-
-$ pulsar-admin sources update options
-
-```
-
-For more information, see [here](io-cli.md#update).
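-
-For example, to raise the parallelism of a running source (the names and values here are illustrative):
-
-```shell
-
-$ pulsar-admin sources update \
-  --tenant public \
-  --namespace default \
-  --name debezium-mysql-source \
-  --parallelism 2
-
-```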
-
-
-
-Send a `PUT` request to this endpoint: {@inject: endpoint|PUT|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/updateSource?version=@pulsar:version_number@}
-
-
-
-* Update a running source connector with a **local file**.
-
-  ```java
-
-  void updateSource(SourceConfig sourceConfig,
-                    String fileName)
-       throws PulsarAdminException
-
-  ```
-
-  **Parameter**
-
-  | Name | Description
-  |---|---
-  |`sourceConfig` | The source configuration object
-  |`fileName` | The path of the local package file
-
-  **Exception**
-
-  |Name|Description|
-  |---|---
-  |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission
-  | `PulsarAdminException.NotFoundException` | Source doesn't exist
-  | `PulsarAdminException` | Unexpected error
-
-  For more information, see [`updateSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#updateSource-SourceConfig-java.lang.String-).
-
-* Update a source connector using a **remote file** with a URL from which the connector package can be downloaded.
-
-  ```java
-
-  void updateSourceWithUrl(SourceConfig sourceConfig,
-                           String pkgUrl)
-       throws PulsarAdminException
-
-  ```
-
-  Supported URLs are `http` and `file`.
-
-  **Example**
-
-  * HTTP: http://www.repo.com/fileName.jar
-
-  * File: file:///dir/fileName.jar
-
-  **Parameter**
-
-  | Name | Description
-  |---|---
-  | `sourceConfig` | The source configuration object
-  | `pkgUrl` | The URL from which the package can be downloaded
-
-  **Exception**
-
-  |Name|Description|
-  |---|---
-  |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission
-  | `PulsarAdminException.NotFoundException` | Source doesn't exist
-  | `PulsarAdminException` | Unexpected error
-
-For more information, see [`updateSourceWithUrl`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#updateSourceWithUrl-SourceConfig-java.lang.String-).
-
-
-
-````
-
-#### Sink
-
-Update a running Pulsar sink connector.
-
-````mdx-code-block
-
-
-
-Use the `update` subcommand.
-
-```
-
-$ pulsar-admin sinks update options
-
-```
-
-For more information, see [here](io-cli.md#update-1).
-
-
-
-Send a `PUT` request to this endpoint: {@inject: endpoint|PUT|/admin/v3/sinks/:tenant/:namespace/:sinkName|operation/updateSink?version=@pulsar:version_number@}
-
-
-
-* Update a running sink connector with a **local file**.
-
-  ```java
-
-  void updateSink(SinkConfig sinkConfig,
-                  String fileName)
-       throws PulsarAdminException
-
-  ```
-
-  **Parameter**
-
-  | Name | Description
-  |---|---
-  |`sinkConfig` | The sink configuration object
-  |`fileName` | The path of the local package file
-
-  **Exception**
-
-  |Name|Description|
-  |---|---
-  |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission
-  | `PulsarAdminException.NotFoundException` | Sink doesn't exist
-  | `PulsarAdminException` | Unexpected error
-
-  For more information, see [`updateSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#updateSink-SinkConfig-java.lang.String-).
-
-* Update a sink connector using a **remote file** with a URL from which the connector package can be downloaded.
-
-  ```java
-
-  void updateSinkWithUrl(SinkConfig sinkConfig,
-                         String pkgUrl)
-       throws PulsarAdminException
-
-  ```
-
-  Supported URLs are `http` and `file`.
-
-  **Example**
-
-  * HTTP: http://www.repo.com/fileName.jar
-
-  * File: file:///dir/fileName.jar
-
-  **Parameter**
-
-  | Name | Description
-  |---|---
-  | `sinkConfig` | The sink configuration object
-  | `pkgUrl` | The URL from which the package can be downloaded
-
-  **Exception**
-
-  |Name|Description|
-  |---|---
-  |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission
-  |`PulsarAdminException.NotFoundException` | Sink doesn't exist
-  |`PulsarAdminException` | Unexpected error
-
-For more information, see [`updateSinkWithUrl`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#updateSinkWithUrl-SinkConfig-java.lang.String-).
-
-
-
-````
-
-## Stop a connector
-
-### `stop`
-
-You can stop a connector using the **Admin CLI**, **REST API**, or **Java admin API**.
-
-#### Source
-
-Stop a source connector.
-
-````mdx-code-block
-
-
-
-Use the `stop` subcommand.
-
-```
-
-$ pulsar-admin sources stop options
-
-```
-
-For more information, see [here](io-cli.md#stop).
-
-
-
-* Stop **all** source connectors.
-
-  Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/stop|operation/stopSource?version=@pulsar:version_number@}
-
-* Stop a **specified** source connector.
-
-  Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/:instanceId/stop|operation/stopSource?version=@pulsar:version_number@}
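-
-With `curl`, stopping a source looks like the following sketch (placeholder address and names):
-
-```shell
-
-# Stop every instance of the source, then stop only instance 0.
-$ curl -X POST http://localhost:8080/admin/v3/sources/public/default/debezium-mysql-source/stop
-$ curl -X POST http://localhost:8080/admin/v3/sources/public/default/debezium-mysql-source/0/stop
-
-```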
-
-
-
-* Stop **all** source connectors.
-
-  ```java
-
-  void stopSource(String tenant,
-                  String namespace,
-                  String source)
-       throws PulsarAdminException
-
-  ```
-
-  **Parameter**
-
-  | Name | Description
-  |---|---
-  `tenant` | Tenant name
-  `namespace` | Namespace name
-  `source` | Source name
-
-  **Exception**
-
-  |Name|Description|
-  |---|---
-  | `PulsarAdminException` | Unexpected error
-
-  For more information, see [`stopSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#stopSource-java.lang.String-java.lang.String-java.lang.String-).
-
-* Stop a **specified** source connector.
-
-  ```java
-
-  void stopSource(String tenant,
-                  String namespace,
-                  String source,
-                  int instanceId)
-       throws PulsarAdminException
-
-  ```
-
-  **Parameter**
-
-  | Name | Description
-  |---|---
-  `tenant` | Tenant name
-  `namespace` | Namespace name
-  `source` | Source name
-  `instanceId` | Source instance ID
-
-  **Exception**
-
-  |Name|Description|
-  |---|---
-  | `PulsarAdminException` | Unexpected error
-
-  For more information, see [`stopSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#stopSource-java.lang.String-java.lang.String-java.lang.String-int-).
-
-
-
-````
-
-#### Sink
-
-Stop a sink connector.
-
-````mdx-code-block
-
-
-
-Use the `stop` subcommand.
-
-```
-
-$ pulsar-admin sinks stop options
-
-```
-
-For more information, see [here](io-cli.md#stop-1).
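-
-For example, to stop a single instance of a running sink (illustrative names):
-
-```shell
-
-$ pulsar-admin sinks stop \
-  --tenant public \
-  --namespace default \
-  --name pulsar-postgres-jdbc-sink \
-  --instance-id 0
-
-```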
-
-
-
-* Stop **all** sink connectors.
-
-  Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sinkName/stop|operation/stopSink?version=@pulsar:version_number@}
-
-* Stop a **specified** sink connector.
-
-  Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sinkName/:instanceId/stop|operation/stopSink?version=@pulsar:version_number@}
-
-
-
-* Stop **all** sink connectors.
-
-  ```java
-
-  void stopSink(String tenant,
-                String namespace,
-                String sink)
-       throws PulsarAdminException
-
-  ```
-
-  **Parameter**
-
-  | Name | Description
-  |---|---
-  `tenant` | Tenant name
-  `namespace` | Namespace name
-  `sink` | Sink name
-
-  **Exception**
-
-  |Name|Description|
-  |---|---
-  | `PulsarAdminException` | Unexpected error
-
-  For more information, see [`stopSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#stopSink-java.lang.String-java.lang.String-java.lang.String-).
-
-* Stop a **specified** sink connector.
-
-  ```java
-
-  void stopSink(String tenant,
-                String namespace,
-                String sink,
-                int instanceId)
-       throws PulsarAdminException
-
-  ```
-
-  **Parameter**
-
-  | Name | Description
-  |---|---
-  `tenant` | Tenant name
-  `namespace` | Namespace name
-  `sink` | Sink name
-  `instanceId` | Sink instance ID
-
-  **Exception**
-
-  |Name|Description|
-  |---|---
-  | `PulsarAdminException` | Unexpected error
-
-  For more information, see [`stopSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#stopSink-java.lang.String-java.lang.String-java.lang.String-int-).
-
-
-
-````
-
-## Restart a connector
-
-### `restart`
-
-You can restart a connector using the **Admin CLI**, **REST API**, or **Java admin API**.
-
-#### Source
-
-Restart a source connector.
-
-````mdx-code-block
-
-
-
-Use the `restart` subcommand.
-
-```
-
-$ pulsar-admin sources restart options
-
-```
-
-For more information, see [here](io-cli.md#restart).
-
-
-
-* Restart **all** source connectors.
-
-  Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/restart|operation/restartSource?version=@pulsar:version_number@}
-
-* Restart a **specified** source connector.
-
-  Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/:instanceId/restart|operation/restartSource?version=@pulsar:version_number@}
-
-
-
-* Restart **all** source connectors.
-
-  ```java
-
-  void restartSource(String tenant,
-                     String namespace,
-                     String source)
-       throws PulsarAdminException
-
-  ```
-
-  **Parameter**
-
-  | Name | Description
-  |---|---
-  `tenant` | Tenant name
-  `namespace` | Namespace name
-  `source` | Source name
-
-  **Exception**
-
-  |Name|Description|
-  |---|---
-  | `PulsarAdminException` | Unexpected error
-
-  For more information, see [`restartSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#restartSource-java.lang.String-java.lang.String-java.lang.String-).
-
-* Restart a **specified** source connector.
-
-  ```java
-
-  void restartSource(String tenant,
-                     String namespace,
-                     String source,
-                     int instanceId)
-       throws PulsarAdminException
-
-  ```
-
-  **Parameter**
-
-  | Name | Description
-  |---|---
-  `tenant` | Tenant name
-  `namespace` | Namespace name
-  `source` | Source name
-  `instanceId` | Source instance ID
-
-  **Exception**
-
-  |Name|Description|
-  |---|---
-  | `PulsarAdminException` | Unexpected error
-
-  For more information, see [`restartSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#restartSource-java.lang.String-java.lang.String-java.lang.String-int-).
-
-
-
-````
-
-#### Sink
-
-Restart a sink connector.
-
-````mdx-code-block
-
-
-
-Use the `restart` subcommand.
-
-```
-
-$ pulsar-admin sinks restart options
-
-```
-
-For more information, see [here](io-cli.md#restart-1).
-
-
-
-* Restart **all** sink connectors.
-
-  Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sinkName/restart|operation/restartSink?version=@pulsar:version_number@}
-
-* Restart a **specified** sink connector.
-
-  Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sinkName/:instanceId/restart|operation/restartSink?version=@pulsar:version_number@}
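-
-Via `curl`, a restart is a bare `POST` (placeholder address and names):
-
-```shell
-
-# Restart all instances of a sink.
-$ curl -X POST http://localhost:8080/admin/v3/sinks/public/default/pulsar-postgres-jdbc-sink/restart
-
-```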
-
-
-
-* Restart **all** sink connectors.
-
-  ```java
-
-  void restartSink(String tenant,
-                   String namespace,
-                   String sink)
-       throws PulsarAdminException
-
-  ```
-
-  **Parameter**
-
-  | Name | Description
-  |---|---
-  `tenant` | Tenant name
-  `namespace` | Namespace name
-  `sink` | Sink name
-
-  **Exception**
-
-  |Name|Description|
-  |---|---
-  | `PulsarAdminException` | Unexpected error
-
-  For more information, see [`restartSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#restartSink-java.lang.String-java.lang.String-java.lang.String-).
-
-* Restart a **specified** sink connector.
-
-  ```java
-
-  void restartSink(String tenant,
-                   String namespace,
-                   String sink,
-                   int instanceId)
-       throws PulsarAdminException
-
-  ```
-
-  **Parameter**
-
-  | Name | Description
-  |---|---
-  `tenant` | Tenant name
-  `namespace` | Namespace name
-  `sink` | Sink name
-  `instanceId` | Sink instance ID
-
-  **Exception**
-
-  |Name|Description|
-  |---|---
-  | `PulsarAdminException` | Unexpected error
-
-  For more information, see [`restartSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#restartSink-java.lang.String-java.lang.String-java.lang.String-int-).
-
-
-
-````
-
-## Delete a connector
-
-### `delete`
-
-You can delete a connector using the **Admin CLI**, **REST API**, or **Java admin API**.
-
-#### Source
-
-Delete a source connector.
-
-````mdx-code-block
-
-
-
-Use the `delete` subcommand.
-
-```
-
-$ pulsar-admin sources delete options
-
-```
-
-For more information, see [here](io-cli.md#delete).
-
-
-
-Delete a source connector.
-
-Send a `DELETE` request to this endpoint: {@inject: endpoint|DELETE|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/deregisterSource?version=@pulsar:version_number@}
-
-
-
-Delete a source connector.
-
-```java
-
-void deleteSource(String tenant,
-                  String namespace,
-                  String source)
-     throws PulsarAdminException
-
-```
-
-**Parameter**
-
-| Name | Description
-|---|---
-`tenant` | Tenant name
-`namespace` | Namespace name
-`source` | Source name
-
-**Exception**
-
-|Name|Description|
-|---|---
-|`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission
-| `PulsarAdminException.NotFoundException` | Source doesn't exist
-| `PulsarAdminException.PreconditionFailedException` | The request failed a precondition check
-| `PulsarAdminException` | Unexpected error
-
-For more information, see [`deleteSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#deleteSource-java.lang.String-java.lang.String-java.lang.String-).
-
-
-
-````
-
-#### Sink
-
-Delete a sink connector.
-
-````mdx-code-block
-
-
-
-Use the `delete` subcommand.
-
-```
-
-$ pulsar-admin sinks delete options
-
-```
-
-For more information, see [here](io-cli.md#delete-1).
-
-
-
-Delete a sink connector.
-
-Send a `DELETE` request to this endpoint: {@inject: endpoint|DELETE|/admin/v3/sinks/:tenant/:namespace/:sinkName|operation/deregisterSink?version=@pulsar:version_number@}
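-
-The equivalent `curl` call is a bare `DELETE` (placeholder address and names; this sketch assumes no authentication is configured):
-
-```shell
-
-$ curl -X DELETE http://localhost:8080/admin/v3/sinks/public/default/pulsar-postgres-jdbc-sink
-
-```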
-
-
-
-Delete a sink connector.
-
-```java
-
-void deleteSink(String tenant,
-                String namespace,
-                String sink)
-     throws PulsarAdminException
-
-```
-
-**Parameter**
-
-| Name | Description
-|---|---
-`tenant` | Tenant name
-`namespace` | Namespace name
-`sink` | Sink name
-
-**Exception**
-
-|Name|Description|
-|---|---
-|`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission
-| `PulsarAdminException.NotFoundException` | Sink doesn't exist
-| `PulsarAdminException.PreconditionFailedException` | The request failed a precondition check
-| `PulsarAdminException` | Unexpected error
-
-For more information, see [`deleteSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#deleteSink-java.lang.String-java.lang.String-java.lang.String-).
-
-
-
-````
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/performance-pulsar-perf.md b/site2/website/versioned_docs/version-2.8.3-deprecated/performance-pulsar-perf.md
deleted file mode 100644
index ed00975dc119fc..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/performance-pulsar-perf.md
+++ /dev/null
@@ -1,227 +0,0 @@
----
-id: performance-pulsar-perf
-title: Pulsar Perf
-sidebar_label: "Pulsar Perf"
-original_id: performance-pulsar-perf
----
-
-Pulsar Perf is a built-in performance test tool for Apache Pulsar. You can use it to test message writing or reading performance. For detailed information about performance tuning, see [here](https://streamnative.io/en/blog/tech/2021-01-14-pulsar-architecture-performance-tuning).
-
-## Produce messages
-
-This example shows how the Pulsar Perf produces messages with default options. For all configuration options available for the `pulsar-perf produce` command, see [configuration options](#configuration-options-for-pulsar-perf-produce).
-
-```
-
-bin/pulsar-perf produce my-topic
-
-```
-
-After the command is executed, the test data is continuously printed to the console.
-
-**Output**
-
-```
-
-19:53:31.459 [pulsar-perf-producer-exec-1-1] INFO org.apache.pulsar.testclient.PerformanceProducer - Created 1 producers
-19:53:31.482 [pulsar-timer-5-1] WARN com.scurrilous.circe.checksum.Crc32cIntChecksum - Failed to load Circe JNI library. Falling back to Java based CRC32c provider
-19:53:40.861 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 93.7 msg/s --- 0.7 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.575 ms - med: 3.460 - 95pct: 4.790 - 99pct: 5.308 - 99.9pct: 5.834 - 99.99pct: 6.609 - Max: 6.609
-19:53:50.909 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.437 ms - med: 3.328 - 95pct: 4.656 - 99pct: 5.071 - 99.9pct: 5.519 - 99.99pct: 5.588 - Max: 5.588
-19:54:00.926 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.376 ms - med: 3.276 - 95pct: 4.520 - 99pct: 4.939 - 99.9pct: 5.440 - 99.99pct: 5.490 - Max: 5.490
-19:54:10.940 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.298 ms - med: 3.220 - 95pct: 4.474 - 99pct: 4.926 - 99.9pct: 5.645 - 99.99pct: 5.654 - Max: 5.654
-19:54:20.956 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.1 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.308 ms - med: 3.199 - 95pct: 4.532 - 99pct: 4.871 - 99.9pct: 5.291 - 99.99pct: 5.323 - Max: 5.323
-19:54:30.972 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.249 ms - med: 3.144 - 95pct: 4.437 - 99pct: 4.970 - 99.9pct: 5.329 - 99.99pct: 5.414 - Max: 5.414
-19:54:40.987 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.435 ms - med: 3.361 - 95pct: 4.772 - 99pct: 5.150 - 99.9pct: 5.373 - 99.99pct: 5.837 - Max: 5.837
-^C19:54:44.325 [Thread-1] INFO org.apache.pulsar.testclient.PerformanceProducer - Aggregated throughput stats --- 7286 records sent --- 99.140 msg/s --- 0.775 Mbit/s
-19:54:44.336 [Thread-1] INFO org.apache.pulsar.testclient.PerformanceProducer - Aggregated latency stats --- Latency: mean: 3.383 ms - med: 3.293 - 95pct: 4.610 - 99pct: 5.059 - 99.9pct: 5.588 - 99.99pct: 5.837 - 99.999pct: 6.609 - Max: 6.609
-
-```
-
-From the above test data, you can get the throughput and write latency statistics. The aggregated statistics are printed when the Pulsar Perf is stopped; you can press **Ctrl**+**C** to stop it. After the Pulsar Perf is stopped, an [HdrHistogram](http://hdrhistogram.github.io/HdrHistogram/) formatted result file, named like `perf-producer-1589370810837.hgrm`, appears in your current directory. You can inspect this file with the [HdrHistogram Plotter](https://hdrhistogram.github.io/HdrHistogram/plotFiles.html); for details, see [HdrHistogram Plotter](#hdrhistogram-plotter).
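-
-Beyond the defaults, you can combine the options listed below. For example, the following run (an illustrative sketch) publishes 512-byte messages at a target rate of 1000 msg/s for two minutes:
-
-```
-
-bin/pulsar-perf produce my-topic -r 1000 -s 512 -time 120
-
-```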
-
-### Configuration options for `pulsar-perf produce`
-
-You can list all options by executing the `bin/pulsar-perf produce -h` command and modify them as required.
-
-The following table lists configuration options available for the `pulsar-perf produce` command.
-
-| Option | Description | Default value|
-|----|----|----|
-| access-mode | Set the producer access mode. Valid values are `Shared`, `Exclusive` and `WaitForExclusive`. | Shared |
-| admin-url | Set the Pulsar admin URL. | N/A |
-| auth-params | Set the authentication parameters, whose format is determined by the implementation of the `configure` method in the authentication plugin class, such as "key1:val1,key2:val2" or "{"key1":"val1","key2":"val2"}". | N/A |
-| auth_plugin | Set the authentication plugin class name. | N/A |
-| listener-name | Set the listener name for the broker. | N/A |
-| batch-max-bytes | Set the maximum number of bytes for each batch. | 4194304 |
-| batch-max-messages | Set the maximum number of messages for each batch. | 1000 |
-| batch-time-window | Set a window for a batch of messages. | 1 ms |
-| busy-wait | Enable or disable Busy-Wait on the Pulsar client. | false |
-| chunking | Configure whether to split the message and publish in chunks if message size is larger than allowed max size. | false |
-| compression | Compress the message payload. | N/A |
-| conf-file | Set the configuration file. | N/A |
-| delay | Mark messages with a given delay. | 0s |
-| encryption-key-name | Set the name of the public key used to encrypt the payload. | N/A |
-| encryption-key-value-file | Set the file which contains the public key used to encrypt the payload. | N/A |
-| exit-on-failure | Configure whether to exit from the process on publish failure. | false |
-| format-class | Set the custom formatter class name. | org.apache.pulsar.testclient.DefaultMessageFormatter |
-| format-payload | Configure whether to format %i as a message index in the stream from producer and/or %t as the timestamp nanoseconds. | false |
-| help | Show the help message. | false |
-| max-connections | Set the maximum number of TCP connections to a single broker. | 100 |
-| max-outstanding | Set the maximum number of outstanding messages. | 1000 |
-| max-outstanding-across-partitions | Set the maximum number of outstanding messages across partitions. | 50000 |
-| message-key-generation-mode | Set the generation mode of the message key. Valid options are `autoIncrement`, `random`. | N/A |
-| num-io-threads | Set the number of threads to be used for handling connections to brokers. | 1 |
-| num-messages | Set the number of messages to be published in total. If it is set to 0, it keeps publishing messages. | 0 |
-| num-producers | Set the number of producers for each topic. | 1 |
-| num-test-threads | Set the number of test threads. | 1 |
-| num-topic | Set the number of topics. | 1 |
-| partitions | Configure whether to create partitioned topics with the given number of partitions. | N/A |
-| payload-delimiter | Set the delimiter used to split lines when using payload from a file. | \n |
-| payload-file | Use the payload from a UTF-8 encoded text file; a payload is randomly selected when messages are published. | N/A |
-| producer-name | Set the producer name. | N/A |
-| rate | Set the publish rate of messages across topics. | 100 |
-| send-timeout | Set the send timeout. | 0 |
-| separator | Set the separator between the topic and topic number. | - |
-| service-url | Set the Pulsar service URL. | |
-| size | Set the message size. | 1024 bytes |
-| stats-interval-seconds | Set the statistics interval. If it is set to 0, statistics are disabled. | 0 |
-| test-duration | Set the test duration. If it is set to 0, the tool keeps publishing messages. | 0s |
-| trust-cert-file | Set the path for the trusted TLS certificate file. | |
-| warmup-time | Set the warm-up time. | 1s |
-| tls-allow-insecure | Set the allowed insecure TLS connection. | N/A |
-
-## Consume messages
-
-This example shows how the Pulsar Perf consumes messages with default options.
-
-```
-
-bin/pulsar-perf consume my-topic
-
-```
-
-After the command is executed, the test data is continuously printed to the console.
-
-**Output**
-
-```
-
-20:35:37.071 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Start receiving from 1 consumers on 1 topics
-20:35:41.150 [pulsar-client-io-1-9] WARN com.scurrilous.circe.checksum.Crc32cIntChecksum - Failed to load Circe JNI library. Falling back to Java based CRC32c provider
-20:35:47.092 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 59.572 msg/s -- 0.465 Mbit/s --- Latency: mean: 11.298 ms - med: 10 - 95pct: 15 - 99pct: 98 - 99.9pct: 137 - 99.99pct: 152 - Max: 152
-20:35:57.104 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 99.958 msg/s -- 0.781 Mbit/s --- Latency: mean: 9.176 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 18 - Max: 18
-20:36:07.115 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 100.006 msg/s -- 0.781 Mbit/s --- Latency: mean: 9.316 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 17 - Max: 17
-20:36:17.125 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 100.085 msg/s -- 0.782 Mbit/s --- Latency: mean: 9.327 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 17 - Max: 17
-20:36:27.136 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 99.900 msg/s -- 0.780 Mbit/s --- Latency: mean: 9.404 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 17 - Max: 17
-20:36:37.147 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 99.985 msg/s -- 0.781 Mbit/s --- Latency: mean: 8.998 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 17 - Max: 17
-^C20:36:42.755 [Thread-1] INFO org.apache.pulsar.testclient.PerformanceConsumer - Aggregated throughput stats --- 6051 records received --- 92.125 msg/s --- 0.720 Mbit/s
-20:36:42.759 [Thread-1] INFO org.apache.pulsar.testclient.PerformanceConsumer - Aggregated latency stats --- Latency: mean: 9.422 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 98 - 99.99pct: 137 - 99.999pct: 152 - Max: 152
-
-```
-
-From the output test data, you can get the throughput and end-to-end latency statistics. The aggregated statistics are printed after the Pulsar Perf is stopped; you can press **Ctrl**+**C** to stop it.
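-
-You can also exercise more realistic consumption patterns. For example, the following run (an illustrative sketch using the options listed below) attaches two consumers on a shared subscription:
-
-```
-
-bin/pulsar-perf consume my-topic -n 2 -ss my-sub -st Shared
-
-```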
-
-### Configuration options for `pulsar-perf consume`
-
-You can list all options by executing the `bin/pulsar-perf consume -h` command and modify them as required.
-
-The following table lists configuration options available for the `pulsar-perf consume` command.
-
-| Option | Description | Default value |
-|----|----|----|
-| acks-delay-millis | Set the acknowledgment grouping delay in milliseconds. | 100 ms |
-| auth-params | Set the authentication parameters, whose format is determined by the implementation of the `configure` method in the authentication plugin class, such as "key1:val1,key2:val2" or "{"key1":"val1","key2":"val2"}". | N/A |
-| auth_plugin | Set the authentication plugin class name. | N/A |
-| auto_ack_chunk_q_full | Configure whether to automatically ack for the oldest message in receiver queue if the queue is full. | false |
-| listener-name | Set the listener name for the broker. | N/A |
-| batch-index-ack | Enable or disable the batch index acknowledgment. | false |
-| busy-wait | Enable or disable Busy-Wait on the Pulsar client. | false |
-| conf-file | Set the configuration file. | N/A |
-| encryption-key-name | Set the name of the public key used to encrypt the payload. | N/A |
-| encryption-key-value-file | Set the file which contains the public key used to encrypt the payload. | N/A |
-| help | Show the help message. | false |
-| expire_time_incomplete_chunked_messages | Set the expiration time for incomplete chunk messages (in milliseconds). | 0 |
-| max-connections | Set the maximum number of TCP connections to a single broker. | 100 |
-| max_chunked_msg | Set the max pending chunk messages. | 0 |
-| num-consumers | Set the number of consumers for each topic. | 1 |
-| num-io-threads | Set the number of threads to be used for handling connections to brokers. | 1 |
-| num-subscriptions | Set the number of subscriptions (per topic). | 1 |
-| num-topic | Set the number of topics. | 1 |
-| pool-messages | Configure whether to use the pooled message. | true |
-| rate | Simulate a slow message consumer (rate in msg/s). | 0.0 |
-| receiver-queue-size | Set the size of the receiver queue. | 1000 |
-| receiver-queue-size-across-partitions | Set the max total size of the receiver queue across partitions. | 50000 |
-| replicated | Configure whether the subscription status should be replicated. | false |
-| service-url | Set the Pulsar service URL. | |
-| stats-interval-seconds | Set the statistics interval. If it is set to 0, statistics are disabled. | 0 |
-| subscriber-name | Set the subscriber name prefix. | |
-| subscription-position | Set the subscription position. Valid values are `Latest`, `Earliest`. | Latest |
-| subscription-type | Set the subscription type. Valid values are `Exclusive`, `Shared`, `Failover`, and `Key_Shared`. | Exclusive |
-| test-duration | Set the test duration (in seconds). If the value is 0 or smaller than 0, it keeps consuming messages. | 0 |
-| tls-allow-insecure | Set the allowed insecure TLS connection. | N/A |
-| trust-cert-file | Set the path for the trusted TLS certificate file. | |
-
-## Configurations
-
-By default, the Pulsar Perf uses `conf/client.conf` as its configuration file and `conf/log4j2.yaml` as its Log4j configuration file. If you want to connect to other Pulsar clusters, you can update the `brokerServiceUrl` in the client configuration.
-
-You can use the following commands to change the configuration file and the Log4j configuration file.
-
-```
-
-export PULSAR_CLIENT_CONF=<path-to-client-configuration-file>
-export PULSAR_LOG_CONF=<path-to-log4j-configuration-file>
-
-```
-
-In addition, you can configure JVM options through the following environment variable:
-
-```
-
-export PULSAR_EXTRA_OPTS='-Xms4g -Xmx4g -XX:MaxDirectMemorySize=4g'
-
-```
-
-## HdrHistogram Plotter
-
-The [HdrHistogram Plotter](https://hdrhistogram.github.io/HdrHistogram/plotFiles.html) is a visualization tool that makes it easier to inspect Pulsar Perf test results.
-
-To check test results through the HdrHistogram Plotter, follow these steps:
-
-1. Clone the HdrHistogram repository from GitHub to your local machine.
-
-   ```
-
-   git clone https://github.com/HdrHistogram/HdrHistogram.git
-
-   ```
-
-2. Switch to the HdrHistogram folder.
-
-   ```
-
-   cd HdrHistogram
-
-   ```
-
-3. Install the HdrHistogram Plotter.
-
-   ```
-
-   mvn clean install -DskipTests
-
-   ```
-
-4. Transform the file generated by the Pulsar Perf.
-
-   ```
-
-   ./HistogramLogProcessor -i <pulsar-perf-result-file> -o <output-file>
-
-   ```
-
-5. You will get two output files. Upload the output file with the `.hgrm` filename extension to the [HdrHistogram Plotter](https://hdrhistogram.github.io/HdrHistogram/plotFiles.html).
-
-6. Check the test result through the Graphical User Interface of the HdrHistogram Plotter, as shown below.
-
-   ![](/assets/perf-produce.png)
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/reference-cli-tools.md b/site2/website/versioned_docs/version-2.8.3-deprecated/reference-cli-tools.md
deleted file mode 100644
index 2abd4a47f86ef6..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/reference-cli-tools.md
+++ /dev/null
@@ -1,958 +0,0 @@
----
-id: reference-cli-tools
-title: Pulsar command-line tools
-sidebar_label: "Pulsar CLI tools"
-original_id: reference-cli-tools
----
-
-Pulsar offers several command-line tools that you can use for managing Pulsar installations, performance testing, using command-line producers and consumers, and more.
-
-All Pulsar command-line tools can be run from the `bin` directory of your [installed Pulsar package](getting-started-standalone.md). The following tools are currently documented:
-
-* [`pulsar`](#pulsar)
-* [`pulsar-client`](#pulsar-client)
-* [`pulsar-daemon`](#pulsar-daemon)
-* [`pulsar-perf`](#pulsar-perf)
-* [`bookkeeper`](#bookkeeper)
-* [`broker-tool`](#broker-tool)
-
-> ### Getting help
-> You can get help for any CLI tool, command, or subcommand using the `--help` flag, or `-h` for short. Here's an example:
-
-> ```shell
->
-> $ bin/pulsar broker --help
->
-> ```
-
-
-## `pulsar`
-
-The pulsar tool is used to start Pulsar components, such as bookies and ZooKeeper, in the foreground.
-
-These processes can also be started in the background with nohup by using the pulsar-daemon tool, which has the same command interface as pulsar.
- -Usage: - -```bash - -$ pulsar command - -``` - -Commands: -* `bookie` -* `broker` -* `compact-topic` -* `discovery` -* `configuration-store` -* `initialize-cluster-metadata` -* `proxy` -* `standalone` -* `websocket` -* `zookeeper` -* `zookeeper-shell` - -Example: - -```bash - -$ PULSAR_BROKER_CONF=/path/to/broker.conf pulsar broker - -``` - -The table below lists the environment variables that you can use to configure the `pulsar` tool. - -|Variable|Description|Default| -|---|---|---| -|`PULSAR_LOG_CONF`|Log4j configuration file|`conf/log4j2.yaml`| -|`PULSAR_BROKER_CONF`|Configuration file for broker|`conf/broker.conf`| -|`PULSAR_BOOKKEEPER_CONF`|description: Configuration file for bookie|`conf/bookkeeper.conf`| -|`PULSAR_ZK_CONF`|Configuration file for zookeeper|`conf/zookeeper.conf`| -|`PULSAR_CONFIGURATION_STORE_CONF`|Configuration file for the configuration store|`conf/global_zookeeper.conf`| -|`PULSAR_DISCOVERY_CONF`|Configuration file for discovery service|`conf/discovery.conf`| -|`PULSAR_WEBSOCKET_CONF`|Configuration file for websocket proxy|`conf/websocket.conf`| -|`PULSAR_STANDALONE_CONF`|Configuration file for standalone|`conf/standalone.conf`| -|`PULSAR_EXTRA_OPTS`|Extra options to be passed to the jvm|| -|`PULSAR_EXTRA_CLASSPATH`|Extra paths for Pulsar's classpath|| -|`PULSAR_PID_DIR`|Folder where the pulsar server PID file should be stored|| -|`PULSAR_STOP_TIMEOUT`|Wait time before forcefully killing the Bookie server instance if attempts to stop it are not successful|| - - - -### `bookie` - -Starts up a bookie server - -Usage: - -```bash - -$ pulsar bookie options - -``` - -Options - -|Option|Description|Default| -|---|---|---| -|`-readOnly`|Force start a read-only bookie server|false| -|`-withAutoRecovery`|Start auto-recover service bookie server|false| - - -Example - -```bash - -$ PULSAR_BOOKKEEPER_CONF=/path/to/bookkeeper.conf pulsar bookie \ - -readOnly \ - -withAutoRecovery - -``` - -### `broker` - -Starts up a Pulsar broker - -Usage - -```bash - -$ pulsar broker options - -``` - -Options - -|Option|Description|Default| -|---|---|---| -|`-bc` , `--bookie-conf`|Configuration file for BookKeeper|| -|`-rb` , `--run-bookie`|Run a BookKeeper bookie on the same host as the Pulsar broker|false| -|`-ra` , `--run-bookie-autorecovery`|Run a BookKeeper autorecovery daemon on the same host as the Pulsar broker|false| - -Example - -```bash - -$ PULSAR_BROKER_CONF=/path/to/broker.conf pulsar broker - -``` - -### `compact-topic` - -Run compaction against a Pulsar topic (in a new process) - -Usage - -```bash - -$ pulsar compact-topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-t` , `--topic`|The Pulsar topic that you would like to compact|| - -Example - -```bash - -$ pulsar compact-topic --topic topic-to-compact - -``` - -### `discovery` - -Run a discovery server - -Usage - -```bash - -$ pulsar discovery - -``` - -Example - -```bash - -$ PULSAR_DISCOVERY_CONF=/path/to/discovery.conf pulsar discovery - -``` - -### `configuration-store` - -Starts up the Pulsar configuration store - -Usage - -```bash - -$ pulsar configuration-store - -``` - -Example - -```bash - -$ PULSAR_CONFIGURATION_STORE_CONF=/path/to/configuration_store.conf pulsar configuration-store - -``` - -### `initialize-cluster-metadata` - -One-time cluster metadata initialization - -Usage - -```bash - -$ pulsar initialize-cluster-metadata options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-ub` , `--broker-service-url`|The broker service URL for the new cluster|| -|`-tb` 
, `--broker-service-url-tls`|The broker service URL for the new cluster with TLS encryption|| -|`-c` , `--cluster`|Cluster name|| -|`-cs` , `--configuration-store`|The configuration store quorum connection string|| -|`--existing-bk-metadata-service-uri`|The metadata service URI of the existing BookKeeper cluster that you want to use|| -|`-h` , `--help`|Cluster name|false| -|`--initial-num-stream-storage-containers`|The number of storage containers of BookKeeper stream storage|16| -|`--initial-num-transaction-coordinators`|The number of transaction coordinators assigned in a cluster|16| -|`-uw` , `--web-service-url`|The web service URL for the new cluster|| -|`-tw` , `--web-service-url-tls`|The web service URL for the new cluster with TLS encryption|| -|`-zk` , `--zookeeper`|The local ZooKeeper quorum connection string|| -|`--zookeeper-session-timeout-ms`|The local ZooKeeper session timeout. The time unit is in millisecond(ms)|30000| - - -### `proxy` - -Manages the Pulsar proxy - -Usage - -```bash - -$ pulsar proxy options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--configuration-store`|Configuration store connection string|| -|`-zk` , `--zookeeper-servers`|Local ZooKeeper connection string|| - -Example - -```bash - -$ PULSAR_PROXY_CONF=/path/to/proxy.conf pulsar proxy \ - --zookeeper-servers zk-0,zk-1,zk2 \ - --configuration-store zk-0,zk-1,zk-2 - -``` - -### `standalone` - -Run a broker service with local bookies and local ZooKeeper - -Usage - -```bash - -$ pulsar standalone options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-a` , `--advertised-address`|The standalone broker advertised address|| -|`--bookkeeper-dir`|Local bookies’ base data directory|data/standalone/bookkeeper| -|`--bookkeeper-port`|Local bookies’ base port|3181| -|`--no-broker`|Only start ZooKeeper and BookKeeper services, not the broker|false| -|`--num-bookies`|The number of local bookies|1| -|`--only-broker`|Only start the Pulsar broker service (not ZooKeeper or BookKeeper)|| -|`--wipe-data`|Clean up previous ZooKeeper/BookKeeper data|| -|`--zookeeper-dir`|Local ZooKeeper’s data directory|data/standalone/zookeeper| -|`--zookeeper-port` |Local ZooKeeper’s port|2181| - -Example - -```bash - -$ PULSAR_STANDALONE_CONF=/path/to/standalone.conf pulsar standalone - -``` - -### `websocket` - -Usage - -```bash - -$ pulsar websocket - -``` - -Example - -```bash - -$ PULSAR_WEBSOCKET_CONF=/path/to/websocket.conf pulsar websocket - -``` - -### `zookeeper` - -Starts up a ZooKeeper cluster - -Usage - -```bash - -$ pulsar zookeeper - -``` - -Example - -```bash - -$ PULSAR_ZK_CONF=/path/to/zookeeper.conf pulsar zookeeper - -``` - -### `zookeeper-shell` - -Connects to a running ZooKeeper cluster using the ZooKeeper shell - -Usage - -```bash - -$ pulsar zookeeper-shell options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-c`, `--conf`|Configuration file for ZooKeeper|| -|`-server`|Configuration zk address, eg: `127.0.0.1:2181`|| - - - -## `pulsar-client` - -The pulsar-client tool - -Usage - -```bash - -$ pulsar-client command - -``` - -Commands -* `produce` -* `consume` - - -Options - -|Flag|Description|Default| -|---|---|---| -|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class, for example "key1:val1,key2:val2" or "{\"key1\":\"val1\",\"key2\":\"val2\"}"|{"saslJaasClientSectionName":"PulsarClient", "serverType":"broker"}| -|`--auth-plugin`|Authentication plugin class 
name|org.apache.pulsar.client.impl.auth.AuthenticationSasl| -|`--listener-name`|Listener name for the broker|| -|`--proxy-protocol`|Proxy protocol to select type of routing at proxy|| -|`--proxy-url`|Proxy-server URL to which to connect|| -|`--url`|Broker URL to which to connect|pulsar://localhost:6650/
    ws://localhost:8080 | -| `-v`, `--version` | Get the version of the Pulsar client -|`-h`, `--help`|Show this help - - -### `produce` -Send a message or messages to a specific broker and topic - -Usage - -```bash - -$ pulsar-client produce topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-f`, `--files`|Comma-separated file paths to send; either -m or -f must be specified|[]| -|`-m`, `--messages`|Comma-separated string of messages to send; either -m or -f must be specified|[]| -|`-n`, `--num-produce`|The number of times to send the message(s); the count of messages/files * num-produce should be below 1000|1| -|`-r`, `--rate`|Rate (in messages per second) at which to produce; a value 0 means to produce messages as fast as possible|0.0| -|`-c`, `--chunking`|Split the message and publish in chunks if the message size is larger than the allowed max size|false| -|`-s`, `--separator`|Character to split messages string with.|","| -|`-k`, `--key`|Message key to add|key=value string, like k1=v1,k2=v2.| -|`-p`, `--properties`|Properties to add. If you want to add multiple properties, use the comma as the separator, e.g. `k1=v1,k2=v2`.| | -|`-ekn`, `--encryption-key-name`|The public key name to encrypt payload.| | -|`-ekv`, `--encryption-key-value`|The URI of public key to encrypt payload. For example, `file:///path/to/public.key` or `data:application/x-pem-file;base64,*****`.| | - - -### `consume` -Consume messages from a specific broker and topic - -Usage - -```bash - -$ pulsar-client consume topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--hex`|Display binary messages in hexadecimal format.|false| -|`-n`, `--num-messages`|Number of messages to consume, 0 means to consume forever.|1| -|`-r`, `--rate`|Rate (in messages per second) at which to consume; a value 0 means to consume messages as fast as possible|0.0| -|`--regex`|Indicate the topic name is a regex pattern|false| -|`-s`, `--subscription-name`|Subscription name|| -|`-t`, `--subscription-type`|The type of the subscription. Possible values: Exclusive, Shared, Failover, Key_Shared.|Exclusive| -|`-p`, `--subscription-position`|The position of the subscription. Possible values: Latest, Earliest.|Latest| -|`-m`, `--subscription-mode`|Subscription mode. Possible values: Durable, NonDurable.|Durable| -|`-q`, `--queue-size`|The size of consumer's receiver queue.|0| -|`-mc`, `--max_chunked_msg`|Max pending chunk messages.|0| -|`-ac`, `--auto_ack_chunk_q_full`|Auto ack for the oldest message in consumer's receiver queue if the queue full.|false| -|`--hide-content`|Do not print the message to the console.|false| -|`-st`, `--schema-type`|Set the schema type. Use `auto_consume` to dump AVRO and other structured data types. Possible values: bytes, auto_consume.|bytes| -|`-ekv`, `--encryption-key-value`|The URI of public key to encrypt payload. For example, `file:///path/to/public.key` or `data:application/x-pem-file;base64,*****`.| | -|`-pm`, `--pool-messages`|Use the pooled message.|true| - -## `pulsar-daemon` -A wrapper around the pulsar tool that’s used to start and stop processes, such as ZooKeeper, bookies, and Pulsar brokers, in the background using nohup. - -pulsar-daemon has a similar interface to the pulsar command but adds start and stop commands for various services. For a listing of those services, run pulsar-daemon to see the help output or see the documentation for the pulsar command. 
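-
-For instance, a typical standalone test cycle might look like the following sketch; the `standalone` service name matches the services listed by the `pulsar` command, and log files are written under the `logs` directory by default:
-
-```bash
-
-$ pulsar-daemon start standalone
-
-$ pulsar-daemon stop standalone
-
-```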
- -Usage - -```bash - -$ pulsar-daemon command - -``` - -Commands -* `start` -* `stop` - - -### `start` -Start a service in the background using nohup. - -Usage - -```bash - -$ pulsar-daemon start service - -``` - -### `stop` -Stop a service that’s already been started using start. - -Usage - -```bash - -$ pulsar-daemon stop service options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|-force|Stop the service forcefully if not stopped by normal shutdown.|false| - - - -## `pulsar-perf` -A tool for performance testing a Pulsar broker. - -Usage - -```bash - -$ pulsar-perf command - -``` - -Commands -* `consume` -* `produce` -* `read` -* `websocket-producer` -* `managed-ledger` -* `monitor-brokers` -* `simulation-client` -* `simulation-controller` -* `help` - -Environment variables - -The table below lists the environment variables that you can use to configure the pulsar-perf tool. - -|Variable|Description|Default| -|---|---|---| -|`PULSAR_LOG_CONF`|Log4j configuration file|conf/log4j2.yaml| -|`PULSAR_CLIENT_CONF`|Configuration file for the client|conf/client.conf| -|`PULSAR_EXTRA_OPTS`|Extra options to be passed to the JVM|| -|`PULSAR_EXTRA_CLASSPATH`|Extra paths for Pulsar's classpath|| - - -### `consume` -Run a consumer - -Usage - -``` - -$ pulsar-perf consume options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.|| -|`--auth_plugin`|Authentication plugin class name|| -|`-ac`, `--auto_ack_chunk_q_full`|Auto ack for the oldest message in consumer's receiver queue if the queue full|false| -|`--listener-name`|Listener name for the broker|| -|`--acks-delay-millis`|Acknowledgements grouping delay in millis|100| -|`--batch-index-ack`|Enable or disable the batch index acknowledgment|false| -|`-bw`, `--busy-wait`|Enable or disable Busy-Wait on the Pulsar client|false| -|`-v`, `--encryption-key-value-file`|The file which contains the private key to decrypt payload|| -|`-h`, `--help`|Help message|false| -|`--conf-file`|Configuration file|| -|`-e`, `--expire_time_incomplete_chunked_messages`|The expiration time for incomplete chunk messages (in milliseconds)|0| -|`-c`, `--max-connections`|Max number of TCP connections to a single broker|100| -|`-mc`, `--max_chunked_msg`|Max pending chunk messages|0| -|`-n`, `--num-consumers`|Number of consumers (per topic)|1| -|`-ioThreads`, `--num-io-threads`|Set the number of threads to be used for handling connections to brokers|1| -|`-ns`, `--num-subscriptions`|Number of subscriptions (per topic)|1| -|`-t`, `--num-topics`|The number of topics|1| -|`-pm`, `--pool-messages`|Use the pooled message|true| -|`-r`, `--rate`|Simulate a slow message consumer (rate in msg/s)|0| -|`-q`, `--receiver-queue-size`|Size of the receiver queue|1000| -|`-p`, `--receiver-queue-size-across-partitions`|Max total size of the receiver queue across partitions|50000| -|`--replicated`|Whether the subscription status should be replicated|false| -|`-u`, `--service-url`|Pulsar service URL|| -|`-i`, `--stats-interval-seconds`|Statistics interval seconds. If 0, statistics will be disabled|0| -|`-s`, `--subscriber-name`|Subscriber name prefix|| -|`-ss`, `--subscriptions`|A list of subscriptions to consume on (e.g. sub1,sub2)|sub| -|`-st`, `--subscription-type`|Subscriber type. 
Possible values are Exclusive, Shared, Failover, Key_Shared.|Exclusive| -|`-sp`, `--subscription-position`|Subscriber position. Possible values are Latest, Earliest.|Latest| -|`-time`, `--test-duration`|Test duration (in seconds). If the value is 0 or smaller than 0, it keeps consuming messages|0| -|`--trust-cert-file`|Path for the trusted TLS certificate file|| -|`--tls-allow-insecure`|Allow insecure TLS connection|| - - -### `produce` -Run a producer - -Usage - -```bash - -$ pulsar-perf produce options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-am`, `--access-mode`|Producer access mode. Valid values are `Shared`, `Exclusive` and `WaitForExclusive`|Shared| -|`-au`, `--admin-url`|Pulsar admin URL|| -|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.|| -|`--auth_plugin`|Authentication plugin class name|| -|`--listener-name`|Listener name for the broker|| -|`-b`, `--batch-time-window`|Batch messages in a window of the specified number of milliseconds|1| -|`-bb`, `--batch-max-bytes`|Maximum number of bytes per batch|4194304| -|`-bm`, `--batch-max-messages`|Maximum number of messages per batch|1000| -|`-bw`, `--busy-wait`|Enable or disable Busy-Wait on the Pulsar client|false| -|`-ch`, `--chunking`|Split the message and publish in chunks if the message size is larger than allowed max size|false| -|`-d`, `--delay`|Mark messages with a given delay in seconds|0s| -|`-z`, `--compression`|Compress messages’ payload. Possible values are NONE, LZ4, ZLIB, ZSTD or SNAPPY.|| -|`--conf-file`|Configuration file|| -|`-k`, `--encryption-key-name`|The public key name to encrypt payload|| -|`-v`, `--encryption-key-value-file`|The file which contains the public key to encrypt payload|| -|`-ef`, `--exit-on-failure`|Exit from the process on publish failure|false| -|`-fc`, `--format-class`|Custom Formatter class name|org.apache.pulsar.testclient.DefaultMessageFormatter| -|`-fp`, `--format-payload`|Format %i as a message index in the stream from producer and/or %t as the timestamp nanoseconds|false| -|`-h`, `--help`|Help message|false| -|`-c`, `--max-connections`|Max number of TCP connections to a single broker|100| -|`-o`, `--max-outstanding`|Max number of outstanding messages|1000| -|`-p`, `--max-outstanding-across-partitions`|Max number of outstanding messages across partitions|50000| -|`-mk`, `--message-key-generation-mode`|The generation mode of message key. Valid options are `autoIncrement`, `random`|| -|`-ioThreads`, `--num-io-threads`|Set the number of threads to be used for handling connections to brokers|1| -|`-m`, `--num-messages`|Number of messages to publish in total. If the value is 0 or smaller than 0, it keeps publishing messages.|0| -|`-n`, `--num-producers`|The number of producers (per topic)|1| -|`-threads`, `--num-test-threads`|Number of test threads|1| -|`-t`, `--num-topic`|The number of topics|1| -|`-np`, `--partitions`|Create partitioned topics with the given number of partitions. 
Setting this value to 0 means not trying to create a topic|| -|`-f`, `--payload-file`|Use payload from an UTF-8 encoded text file and a payload will be randomly selected when publishing messages|| -|`-e`, `--payload-delimiter`|The delimiter used to split lines when using payload from a file|\n| -|`-pn`, `--producer-name`|Producer Name|| -|`-r`, `--rate`|Publish rate msg/s across topics|100| -|`--send-timeout`|Set the sendTimeout|0| -|`--separator`|Separator between the topic and topic number|-| -|`-u`, `--service-url`|Pulsar service URL|| -|`-s`, `--size`|Message size (in bytes)|1024| -|`-i`, `--stats-interval-seconds`|Statistics interval seconds. If 0, statistics will be disabled.|0| -|`-time`, `--test-duration`|Test duration (in seconds). If the value is 0 or smaller than 0, it keeps publishing messages.|0| -|`--trust-cert-file`|Path for the trusted TLS certificate file|| -|`--warmup-time`|Warm-up time in seconds|1| -|`--tls-allow-insecure`|Allow insecure TLS connection|| - - -### `read` -Run a topic reader - -Usage - -```bash - -$ pulsar-perf read options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.|| -|`--auth_plugin`|Authentication plugin class name|| -|`--listener-name`|Listener name for the broker|| -|`--conf-file`|Configuration file|| -|`-h`, `--help`|Help message|false| -|`-c`, `--max-connections`|Max number of TCP connections to a single broker|100| -|`-ioThreads`, `--num-io-threads`|Set the number of threads to be used for handling connections to brokers|1| -|`-t`, `--num-topics`|The number of topics|1| -|`-r`, `--rate`|Simulate a slow message reader (rate in msg/s)|0| -|`-q`, `--receiver-queue-size`|Size of the receiver queue|1000| -|`-u`, `--service-url`|Pulsar service URL|| -|`-m`, `--start-message-id`|Start message id. This can be either 'earliest', 'latest' or a specific message id by using 'lid:eid'|earliest| -|`-i`, `--stats-interval-seconds`|Statistics interval seconds. If 0, statistics will be disabled.|0| -|`-time`, `--test-duration`|Test duration (in seconds). If the value is 0 or smaller than 0, it keeps consuming messages.|0| -|`--trust-cert-file`|Path for the trusted TLS certificate file|| -|`--use-tls`|Use TLS encryption on the connection|false| -|`--tls-allow-insecure`|Allow insecure TLS connection|| - -### `websocket-producer` -Run a websocket producer - -Usage - -```bash - -$ pulsar-perf websocket-producer options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.|| -|`--auth_plugin`|Authentication plugin class name|| -|`--conf-file`|Configuration file|| -|`-h`, `--help`|Help message|false| -|`-m`, `--num-messages`|Number of messages to publish in total. If the value is 0 or smaller than 0, it keeps publishing messages|0| -|`-t`, `--num-topic`|The number of topics|1| -|`-f`, `--payload-file`|Use payload from a file instead of empty buffer|| -|`-u`, `--proxy-url`|Pulsar Proxy URL, e.g., "ws://localhost:8080/"|| -|`-r`, `--rate`|Publish rate msg/s across topics|100| -|`-s`, `--size`|Message size in byte|1024| -|`-time`, `--test-duration`|Test duration (in seconds). 
If the value is 0 or negative, it keeps publishing messages|0|
-
-
-### `managed-ledger`
-Write directly to managed ledgers
-
-Usage
-
-```bash
-
-$ pulsar-perf managed-ledger options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-a`, `--ack-quorum`|Ledger ack quorum|1|
-|`-dt`, `--digest-type`|BookKeeper digest type. Possible Values: [CRC32, MAC, CRC32C, DUMMY]|CRC32C|
-|`-e`, `--ensemble-size`|Ledger ensemble size|1|
-|`-h`, `--help`|Help message|false|
-|`-c`, `--max-connections`|Max number of TCP connections to a single bookie|1|
-|`-o`, `--max-outstanding`|Max number of outstanding requests|1000|
-|`-m`, `--num-messages`|Number of messages to publish in total. If the value is 0 or negative, it keeps publishing messages|0|
-|`-t`, `--num-topic`|Number of managed ledgers|1|
-|`-r`, `--rate`|Write rate msg/s across managed ledgers|100|
-|`-s`, `--size`|Message size in bytes|1024|
-|`-time`, `--test-duration`|Test duration (in seconds). If the value is 0 or negative, it keeps publishing messages|0|
-|`--threads`|Number of writer threads|1|
-|`-w`, `--write-quorum`|Ledger write quorum|1|
-|`-zk`, `--zookeeperServers`|ZooKeeper connection string||
-
-
-### `monitor-brokers`
-Continuously receive broker data and/or load reports
-
-Usage
-
-```bash
-
-$ pulsar-perf monitor-brokers options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--connect-string`|A connection string for one or more ZooKeeper servers||
-|`-h`, `--help`|Help message|false|
-
-
-### `simulation-client`
-Run a simulation server acting as a Pulsar client. Uses the client configuration specified in `conf/client.conf`.
-
-Usage
-
-```bash
-
-$ pulsar-perf simulation-client options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--port`|Port to listen on for controller|0|
-|`--service-url`|Pulsar Service URL||
-|`-h`, `--help`|Help message|false|
-
-### `simulation-controller`
-Run a simulation controller to give commands to servers
-
-Usage
-
-```bash
-
-$ pulsar-perf simulation-controller options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--client-port`|The port that the clients are listening on|0|
-|`--clients`|Comma-separated list of client hostnames||
-|`--cluster`|The cluster to test on||
-|`-h`, `--help`|Help message|false|
-
-
-### `help`
-This help message
-
-Usage
-
-```bash
-
-$ pulsar-perf help
-
-```
-
-## `bookkeeper`
-A tool for managing BookKeeper.
-
-Usage
-
-```bash
-
-$ bookkeeper command
-
-```
-
-Commands
-* `autorecovery`
-* `bookie`
-* `localbookie`
-* `upgrade`
-* `shell`
-
-
-Environment variables
-
-The table below lists the environment variables that you can use to configure the bookkeeper tool. 
-
-|Variable|Description|Default|
-|---|---|---|
-|BOOKIE_LOG_CONF|Log4j configuration file|conf/log4j2.yaml|
-|BOOKIE_CONF|BookKeeper configuration file|conf/bk_server.conf|
-|BOOKIE_EXTRA_OPTS|Extra options to be passed to the JVM||
-|BOOKIE_EXTRA_CLASSPATH|Extra paths for BookKeeper's classpath||
-|ENTRY_FORMATTER_CLASS|The Java class used to format entries||
-|BOOKIE_PID_DIR|Folder where the BookKeeper server PID file should be stored||
-|BOOKIE_STOP_TIMEOUT|Wait time before forcefully killing the Bookie server instance if attempts to stop it are not successful||
-
-
-### `autorecovery`
-Runs an auto-recovery service
-
-Usage
-
-```bash
-
-$ bookkeeper autorecovery options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-c`, `--conf`|Configuration for the auto-recovery||
-
-
-### `bookie`
-Starts up a BookKeeper server (aka bookie)
-
-Usage
-
-```bash
-
-$ bookkeeper bookie options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-c`, `--conf`|Configuration for the bookie server||
-|`-readOnly`|Force start a read-only bookie server|false|
-|`-withAutoRecovery`|Start the bookie server with the auto-recovery service|false|
-
-
-### `localbookie`
-Runs a test ensemble of N bookies locally
-
-Usage
-
-```bash
-
-$ bookkeeper localbookie N
-
-```
-
-### `upgrade`
-Upgrade the bookie’s filesystem
-
-Usage
-
-```bash
-
-$ bookkeeper upgrade options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-c`, `--conf`|Configuration for the upgrade||
-|`-u`, `--upgrade`|Upgrade the bookie’s directories||
-
-
-### `shell`
-Run shell for admin commands. To see a full listing of those commands, run `bookkeeper shell` without an argument.
-
-Usage
-
-```bash
-
-$ bookkeeper shell
-
-```
-
-Example
-
-```bash
-
-$ bookkeeper shell bookiesanity
-
-```
-
-## `broker-tool`
-
-The `broker-tool` is used for operations on a specific broker.
-
-Usage
-
-```bash
-
-$ broker-tool command
-
-```
-
-Commands
-* `load-report`
-* `help`
-
-Example
-There are two ways to get more information about a command:
-
-```bash
-
-$ broker-tool help command
-$ broker-tool command --help
-
-```
-
-### `load-report`
-
-Collect the load report of a specific broker.
-The command runs on a broker and is used to troubleshoot why a broker cannot collect the right load report.
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-i`, `--interval`| Interval to collect load report, in milliseconds ||
-|`-h`, `--help`| Display help information ||
-
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/reference-configuration.md b/site2/website/versioned_docs/version-2.8.3-deprecated/reference-configuration.md
deleted file mode 100644
index c53a2fb746652d..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/reference-configuration.md
+++ /dev/null
@@ -1,792 +0,0 @@
----
-id: reference-configuration
-title: Pulsar configuration
-sidebar_label: "Pulsar configuration"
-original_id: reference-configuration
----
-
-
-
-You can manage Pulsar configuration by configuration files in the [`conf`](https://github.com/apache/pulsar/tree/master/conf) directory of a Pulsar [installation](getting-started-standalone.md). 
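-
-For example, a quick way to locate and inspect a single setting before editing it; `brokerServicePort` and `conf/broker.conf` are just illustrative picks from the tables below:
-
-```bash
-
-# List the configuration files that ship with Pulsar
-$ ls conf
-
-# Inspect the current value of one broker setting
-$ grep '^brokerServicePort' conf/broker.conf
-
-```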
-
-- [BookKeeper](#bookkeeper)
-- [Broker](#broker)
-- [Client](#client)
-- [Service discovery](#service-discovery)
-- [Log4j](#log4j)
-- [Log4j shell](#log4j-shell)
-- [Standalone](#standalone)
-- [WebSocket](#websocket)
-- [Pulsar proxy](#pulsar-proxy)
-- [ZooKeeper](#zookeeper)
-
-## BookKeeper
-
-BookKeeper is a replicated log storage system that Pulsar uses for durable storage of all messages.
-
-
-|Name|Description|Default|
-|---|---|---|
-|bookiePort|The port on which the bookie server listens.|3181|
-|allowLoopback|Whether the bookie is allowed to use a loopback interface as its primary interface (that is, the interface used to establish its identity). By default, loopback interfaces are not allowed to work as the primary interface. Using a loopback interface as the primary interface usually indicates a configuration error. For example, it’s fairly common in some VPS setups to not configure a hostname or to have the hostname resolve to `127.0.0.1`. If this is the case, then all bookies in the cluster will establish their identities as `127.0.0.1:3181` and only one will be able to join the cluster. For VPSs configured like this, you should explicitly set the listening interface.|false|
-|listeningInterface|The network interface on which the bookie listens. By default, the bookie listens on all interfaces.|eth0|
-|advertisedAddress|Configure a specific hostname or IP address that the bookie should use to advertise itself to clients. By default, the bookie advertises either its own IP address or hostname according to the `listeningInterface` and `useHostNameAsBookieID` settings.|N/A|
-|allowMultipleDirsUnderSameDiskPartition|Configure the bookie to enable/disable multiple ledger/index/journal directories in the same filesystem disk partition.|false|
-|minUsableSizeForIndexFileCreation|The minimum safe usable size (in bytes) that must be available in the index directory for the bookie to create index files while replaying the journal when the bookie starts in read-only mode.|1073741824|
-|journalDirectory|The directory where BookKeeper outputs its write-ahead log (WAL).|data/bookkeeper/journal|
-|journalDirectories|Directories to which BookKeeper outputs its write-ahead log. Multiple directories are available, separated by `,`. For example: `journalDirectories=/tmp/bk-journal1,/tmp/bk-journal2`. If `journalDirectories` is set, the bookies skip `journalDirectory` and use the directories in this setting.|/tmp/bk-journal|
-|ledgerDirectories|The directory where BookKeeper outputs ledger snapshots. This could define multiple directories to store snapshots, separated by `,`, for example `ledgerDirectories=/tmp/bk1-data,/tmp/bk2-data`. Ideally, ledger dirs and the journal dir are each on a different device, which reduces the contention between random I/O and sequential writes. It is possible to run with a single disk, but performance will be significantly lower.|data/bookkeeper/ledgers|
-|ledgerManagerType|The type of ledger manager used to manage how ledgers are stored, managed, and garbage collected. See [BookKeeper Internals](http://bookkeeper.apache.org/docs/latest/getting-started/concepts) for more info.|hierarchical|
-|zkLedgersRootPath|The root ZooKeeper path used to store ledger metadata. 
This parameter is used by the ZooKeeper-based ledger manager as a root znode to store all ledgers.|/ledgers|
-|ledgerStorageClass|Ledger storage implementation class|org.apache.bookkeeper.bookie.storage.ldb.DbLedgerStorage|
-|entryLogFilePreallocationEnabled|Enable or disable entry logger preallocation|true|
-|logSizeLimit|Max file size of the entry logger, in bytes. A new entry log file will be created when the old one reaches the file size limitation.|1073741824|
-|minorCompactionThreshold|Threshold of minor compaction. Entry log files whose remaining size percentage reaches below this threshold will be compacted in a minor compaction. If set to less than zero, the minor compaction is disabled.|0.2|
-|minorCompactionInterval|Time interval to run minor compaction, in seconds. If set to less than zero, the minor compaction is disabled. Note: should be greater than gcWaitTime. |3600|
-|majorCompactionThreshold|The threshold of major compaction. Entry log files whose remaining size percentage reaches below this threshold will be compacted in a major compaction. Those entry log files whose remaining size percentage is still higher than the threshold will never be compacted. If set to less than zero, the major compaction is disabled.|0.5|
-|majorCompactionInterval|The time interval to run major compaction, in seconds. If set to less than zero, the major compaction is disabled. Note: should be greater than gcWaitTime. |86400|
-|readOnlyModeEnabled|If `readOnlyModeEnabled=true`, then when all ledger disks are full, the bookie is converted to read-only mode and serves only read requests. Otherwise the bookie is shut down.|true|
-|forceReadOnlyBookie|Whether the bookie is force started in read-only mode.|false|
-|persistBookieStatusEnabled|Persist the bookie status locally on the disks, so that bookies can keep their status upon restarts.|false|
-|compactionMaxOutstandingRequests|Sets the maximum number of entries that can be compacted without flushing. When compacting, the entries are written to the entrylog and the new offsets are cached in memory. Once the entrylog is flushed the index is updated with the new offsets. This parameter controls the number of entries added to the entrylog before a flush is forced. A higher value for this parameter means more memory will be used for offsets. Each offset consists of 3 longs. This parameter should not be modified unless you’re fully aware of the consequences.|100000|
-|compactionRate|The rate at which compaction will read entries, in adds per second.|1000|
-|isThrottleByBytes|Throttle compaction by bytes or by entries.|false|
-|compactionRateByEntries|The rate at which compaction will read entries, in adds per second.|1000|
-|compactionRateByBytes|Set the rate at which compaction reads entries. The unit is bytes added per second.|1000000|
-|journalMaxSizeMB|Max file size of journal file, in megabytes. A new journal file will be created when the old one reaches the file size limitation.|2048|
-|journalMaxBackups|The max number of old journal files to keep. 
Keeping a number of old journal files would help data recovery in special cases.|5|
-|journalPreAllocSizeMB|How much space to pre-allocate at a time in the journal.|16|
-|journalWriteBufferSizeKB|The size of the write buffers used for the journal.|64|
-|journalRemoveFromPageCache|Whether pages should be removed from the page cache after force write.|true|
-|journalAdaptiveGroupWrites|Whether to group journal force writes, which optimizes group commit for higher throughput.|true|
-|journalMaxGroupWaitMSec|The maximum latency to impose on a journal write to achieve grouping.|1|
-|journalAlignmentSize|All the journal writes and commits should be aligned to the given size|4096|
-|journalBufferedWritesThreshold|Maximum writes to buffer to achieve grouping|524288|
-|journalFlushWhenQueueEmpty|Whether to flush the journal when the journal queue is empty|false|
-|numJournalCallbackThreads|The number of threads that should handle journal callbacks|8|
-|openLedgerRereplicationGracePeriod | The grace period, in milliseconds, that the replication worker waits before fencing and replicating a ledger fragment that's still being written to upon bookie failure. | 30000 |
-|rereplicationEntryBatchSize|The max number of entries to keep in a fragment for re-replication|100|
-|autoRecoveryDaemonEnabled|Whether the bookie itself can start the auto-recovery service.|true|
-|lostBookieRecoveryDelay|How long to wait, in seconds, before starting auto recovery of a lost bookie.|0|
-|gcWaitTime|The interval to trigger the next garbage collection, in milliseconds. Since garbage collection runs in the background, too-frequent GC hurts performance. It is better to use a longer GC interval if there is enough disk capacity.|900000|
-|gcOverreplicatedLedgerWaitTime|The interval to trigger the next garbage collection of overreplicated ledgers, in milliseconds. This should not be run very frequently since the metadata for all the ledgers on the bookie is read from ZooKeeper.|86400000|
-|flushInterval|The interval at which ledger index pages are flushed to disk, in milliseconds. Flushing index files introduces much random disk I/O. If the journal dir and ledger dirs are on different devices, flushing does not affect performance. But if they are on the same device, performance degrades significantly with too-frequent flushing. You can consider increasing the flush interval for better performance, but bookie server restarts after a failure will take longer.|60000|
-|bookieDeathWatchInterval|Interval to watch whether a bookie is dead or not, in milliseconds|1000|
-|allowStorageExpansion|Allow the bookie storage to expand. Newly added ledger and index dirs must be empty.|false|
-|zkServers|A list of one or more servers on which ZooKeeper is running. The server list can be comma-separated values, for example: zkServers=zk1:2181,zk2:2181,zk3:2181.|localhost:2181|
-|zkTimeout|ZooKeeper client session timeout in milliseconds. The bookie server exits if it receives SESSION_EXPIRED because it was partitioned off from ZooKeeper for longer than the session timeout; JVM garbage collection pauses and slow disk I/O can cause SESSION_EXPIRED. 
Increasing this value can help avoid the issue|30000|
-|zkRetryBackoffStartMs|The initial backoff time for ZooKeeper client retries, in milliseconds.|1000|
-|zkRetryBackoffMaxMs|The maximum backoff time for ZooKeeper client retries, in milliseconds.|10000|
-|zkEnableSecurity|Set ACLs on every node written on ZooKeeper, allowing users to read and write BookKeeper metadata stored on ZooKeeper. In order to make ACLs work you need to set up ZooKeeper JAAS authentication. All the bookies and clients need to share the same user, and this is usually done using Kerberos authentication. See the ZooKeeper documentation.|false|
-|httpServerEnabled|The flag enables/disables starting the admin http server.|false|
-|httpServerPort|The HTTP server port to listen on. By default, the value is `8080`. If you want to keep it consistent with the Prometheus stats provider, you can set it to `8000`.|8080
-|httpServerClass|The http server class.|org.apache.bookkeeper.http.vertx.VertxHttpServer|
-|serverTcpNoDelay|This setting enables/disables Nagle’s algorithm, which is a means of improving the efficiency of TCP/IP networks by reducing the number of packets that need to be sent over the network. If you are sending many small messages, such that more than one can fit in a single IP packet, setting this to false to enable the Nagle algorithm can provide better performance.|true|
-|serverSockKeepalive|This setting is used to send keep-alive messages on connection-oriented sockets.|true|
-|serverTcpLinger|The socket linger timeout on close. When enabled, a close or shutdown will not return until all queued messages for the socket have been successfully sent or the linger timeout has been reached. Otherwise, the call returns immediately and the closing is done in the background.|0|
-|byteBufAllocatorSizeMax|The maximum buf size of the received ByteBuf allocator.|1048576|
-|nettyMaxFrameSizeBytes|The maximum netty frame size in bytes. Any message received larger than this will be rejected.|5253120|
-|openFileLimit|Max number of ledger index files that can be opened in the bookie server. If the number of ledger index files reaches this limit, the bookie server starts to swap some ledgers from memory to disk. Too-frequent swapping affects performance. You can tune this number to gain performance according to your requirements.|0|
-|pageSize|Size of an index page in the ledger cache, in bytes. A larger index page can improve the performance of writing pages to disk, which is efficient when you have a small number of ledgers and these ledgers have a similar number of entries. If you have a large number of ledgers and each ledger has fewer entries, a smaller index page improves memory usage.|8192|
-|pageLimit|How many index pages are provided in the ledger cache. If the number of index pages reaches this limit, the bookie server starts to swap some ledgers from memory to disk. You can increase this value when you find that swapping becomes more frequent. But make sure pageLimit*pageSize is not more than the JVM max memory limit, otherwise you will get an OutOfMemoryException. In general, increasing pageLimit and using a smaller index page gives better performance in the case of a large number of ledgers with fewer entries. If pageLimit is -1, the bookie server uses 1/3 of the JVM memory to compute the limit on the number of index pages.|0|
-|readOnlyModeEnabled|If all configured ledger directories are full, then support only read requests for clients. 
If "readOnlyModeEnabled=true" then on all ledger disks full, bookie will be converted to read-only mode and serve only read requests. Otherwise the bookie will be shutdown. By default this will be disabled.|true| -|diskUsageThreshold|For each ledger dir, maximum disk space which can be used. Default is 0.95f. i.e. 95% of disk can be used at most after which nothing will be written to that partition. If all ledger dir partitions are full, then bookie will turn to readonly mode if ‘readOnlyModeEnabled=true’ is set, else it will shutdown. Valid values should be in between 0 and 1 (exclusive).|0.95| -|diskCheckInterval|Disk check interval in milli seconds, interval to check the ledger dirs usage.|10000| -|auditorPeriodicCheckInterval|Interval at which the auditor will do a check of all ledgers in the cluster. By default this runs once a week. The interval is set in seconds. To disable the periodic check completely, set this to 0. Note that periodic checking will put extra load on the cluster, so it should not be run more frequently than once a day.|604800| -|sortedLedgerStorageEnabled|Whether sorted-ledger storage is enabled.|true| -|auditorPeriodicBookieCheckInterval|The interval between auditor bookie checks. The auditor bookie check, checks ledger metadata to see which bookies should contain entries for each ledger. If a bookie which should contain entries is unavailable, thea the ledger containing that entry is marked for recovery. Setting this to 0 disabled the periodic check. Bookie checks will still run when a bookie fails. The interval is specified in seconds.|86400| -|numAddWorkerThreads|The number of threads that should handle write requests. if zero, the writes would be handled by netty threads directly.|0| -|numReadWorkerThreads|The number of threads that should handle read requests. if zero, the reads would be handled by netty threads directly.|8| -|numHighPriorityWorkerThreads|The umber of threads that should be used for high priority requests (i.e. recovery reads and adds, and fencing).|8| -|maxPendingReadRequestsPerThread|If read workers threads are enabled, limit the number of pending requests, to avoid the executor queue to grow indefinitely.|2500| -|maxPendingAddRequestsPerThread|The limited number of pending requests, which is used to avoid the executor queue to grow indefinitely when add workers threads are enabled.|10000| -|isForceGCAllowWhenNoSpace|Whether force compaction is allowed when the disk is full or almost full. Forcing GC could get some space back, but could also fill up the disk space more quickly. This is because new log files are created before GC, while old garbage log files are deleted after GC.|false| -|verifyMetadataOnGC|True if the bookie should double check `readMetadata` prior to GC.|false| -|flushEntrylogBytes|Entry log flush interval in bytes. Flushing in smaller chunks but more frequently reduces spikes in disk I/O. Flushing too frequently may also affect performance negatively.|268435456| -|readBufferSizeBytes|The number of bytes we should use as capacity for BufferedReadChannel.|4096| -|writeBufferSizeBytes|The number of bytes used as capacity for the write buffer|65536| -|useHostNameAsBookieID|Whether the bookie should use its hostname to register with the coordination service (e.g.: zookeeper service). When false, bookie will use its ip address for the registration.|false| -|bookieId | If you want to custom a bookie ID or use a dynamic network address for the bookie, you can set the `bookieId`.

    Bookie advertises itself using the `bookieId` rather than the `BookieSocketAddress` (`hostname:port` or `IP:port`). If you set the `bookieId`, then the `useHostNameAsBookieID` does not take effect.

    The `bookieId` is a non-empty string that can contain ASCII digits and letters ([a-zA-Z0-9]), colons, dashes, and dots.

    For more information about `bookieId`, see [here](http://bookkeeper.apache.org/bps/BP-41-bookieid/).|N/A| -|allowEphemeralPorts|Whether the bookie is allowed to use an ephemeral port (port 0) as its server port. By default, an ephemeral port is not allowed. Using an ephemeral port as the service port usually indicates a configuration error. However, in unit tests, using an ephemeral port will address port conflict problems and allow running tests in parallel.|false| -|enableLocalTransport|Whether the bookie is allowed to listen for the BookKeeper clients executed on the local JVM.|false| -|disableServerSocketBind|Whether the bookie is allowed to disable bind on network interfaces. This bookie will be available only to BookKeeper clients executed on the local JVM.|false| -|skipListArenaChunkSize|The number of bytes that we should use as chunk allocation for `org.apache.bookkeeper.bookie.SkipListArena`.|4194304| -|skipListArenaMaxAllocSize|The maximum size that we should allocate from the skiplist arena. Allocations larger than this should be allocated directly by the VM to avoid fragmentation.|131072| -|bookieAuthProviderFactoryClass|The factory class name of the bookie authentication provider. If this is null, then there is no authentication.|null| -|statsProviderClass||org.apache.bookkeeper.stats.prometheus.PrometheusMetricsProvider| -|prometheusStatsHttpPort||8000| -|dbStorage_writeCacheMaxSizeMb|Size of Write Cache. Memory is allocated from JVM direct memory. Write cache is used to buffer entries before flushing into the entry log. For good performance, it should be big enough to hold a substantial amount of entries in the flush interval.|25% of direct memory| -|dbStorage_readAheadCacheMaxSizeMb|Size of Read cache. Memory is allocated from JVM direct memory. This read cache is pre-filled doing read-ahead whenever a cache miss happens. By default, it is allocated to 25% of the available direct memory.|N/A| -|dbStorage_readAheadCacheBatchSize|How many entries to pre-fill in cache after a read cache miss|1000| -|dbStorage_rocksDB_blockCacheSize|Size of RocksDB block-cache. For best performance, this cache should be big enough to hold a significant portion of the index database which can reach ~2GB in some cases. By default, it uses 10% of direct memory.|N/A| -|dbStorage_rocksDB_writeBufferSizeMB||64| -|dbStorage_rocksDB_sstSizeInMB||64| -|dbStorage_rocksDB_blockSize||65536| -|dbStorage_rocksDB_bloomFilterBitsPerKey||10| -|dbStorage_rocksDB_numLevels||-1| -|dbStorage_rocksDB_numFilesInLevel0||4| -|dbStorage_rocksDB_maxSizeInLevel1MB||256| - -## Broker - -Pulsar brokers are responsible for handling incoming messages from producers, dispatching messages to consumers, replicating data between clusters, and more. - -|Name|Description|Default| -|---|---|---| -|advertisedListeners|Specify multiple advertised listeners for the broker.

    The format is `<listener_name>:pulsar://<host>:<port>`.

    If there are multiple listeners, separate them with commas.

    **Note**: do not use this configuration with `advertisedAddress` and `brokerServicePort`. If the value of this configuration is empty, the broker uses `advertisedAddress` and `brokerServicePort`.|/|
-|internalListenerName|Specify the internal listener name for the broker.

    **Note**: the listener name must be contained in `advertisedListeners`.

    If the value of this configuration is empty, the broker uses the first listener as the internal listener.|/|
-|authenticateOriginalAuthData| If this flag is set to `true`, the broker authenticates the original Auth data; else it just accepts the originalPrincipal and authorizes it (if required). |false|
-|enablePersistentTopics| Whether persistent topics are enabled on the broker |true|
-|enableNonPersistentTopics| Whether non-persistent topics are enabled on the broker |true|
-|functionsWorkerEnabled| Whether the Pulsar Functions worker service is enabled in the broker |false|
-|exposePublisherStats|Whether to enable topic level metrics.|true|
-|statsUpdateFrequencyInSecs||60|
-|statsUpdateInitialDelayInSecs||60|
-|zookeeperServers| Zookeeper quorum connection string ||
-|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300|
-|configurationStoreServers| Configuration store connection string (as a comma-separated list) ||
-|brokerServicePort| Broker data port |6650|
-|brokerServicePortTls| Broker data port for TLS |6651|
-|webServicePort| Port to use to serve HTTP requests |8080|
-|webServicePortTls| Port to use to serve HTTPS requests |8443|
-|webSocketServiceEnabled| Enable the WebSocket API service in broker |false|
-|webSocketNumIoThreads|The number of IO threads in Pulsar Client used in WebSocket proxy.|8|
-|webSocketConnectionsPerBroker|The number of connections per Broker in Pulsar Client used in WebSocket proxy.|8|
-|webSocketSessionIdleTimeoutMillis|Time in milliseconds after which an idle WebSocket session times out.|300000|
-|webSocketMaxTextFrameSize|The maximum size of a text message during parsing in WebSocket proxy.|1048576|
-|exposeTopicLevelMetricsInPrometheus|Whether to enable topic level metrics.|true|
-|exposeConsumerLevelMetricsInPrometheus|Whether to enable consumer level metrics.|false|
-|jvmGCMetricsLoggerClassName|Classname of pluggable JVM GC metrics logger that can log GC specific metrics.|N/A|
-|bindAddress| Hostname or IP address the service binds on, default is 0.0.0.0. |0.0.0.0|
-|advertisedAddress| Hostname or IP address the service advertises to the outside world. If not set, the value of `InetAddress.getLocalHost().getHostName()` is used. ||
-|clusterName| Name of the cluster to which this broker belongs ||
-|maxTenants|The maximum number of tenants that can be created in each Pulsar cluster. When the number of tenants reaches the threshold, the broker rejects the request of creating a new tenant. The default value 0 disables the check. |0|
-| maxNamespacesPerTenant | The maximum number of namespaces that can be created in each tenant. When the number of namespaces reaches this threshold, the broker rejects the request of creating a new namespace. The default value 0 disables the check. |0|
-|brokerDeduplicationEnabled| Sets the default behavior for message deduplication in the broker. If enabled, the broker will reject messages that were already stored in the topic. This setting can be overridden on a per-namespace basis. |false|
-|brokerDeduplicationMaxNumberOfProducers| The maximum number of producers for which information will be stored for deduplication purposes. |10000|
-|brokerDeduplicationEntriesInterval| The number of entries after which a deduplication informational snapshot is taken. A larger interval will lead to fewer snapshots being taken, though this would also lengthen the topic recovery time (the time required for entries published after the snapshot to be replayed). 
|1000|
-|brokerDeduplicationSnapshotIntervalSeconds| The time period after which a deduplication informational snapshot is taken. It runs simultaneously with `brokerDeduplicationEntriesInterval`. |120|
-|brokerDeduplicationProducerInactivityTimeoutMinutes| The time of inactivity (in minutes) after which the broker will discard deduplication information related to a disconnected producer. |360|
-|dispatchThrottlingRatePerReplicatorInMsg| The default messages-per-second dispatch throttling limit for every replicator in replication. The value of `0` disables replication message dispatch throttling.| 0 |
-|dispatchThrottlingRatePerReplicatorInByte| The default bytes-per-second dispatch throttling limit for every replicator in replication. The value of `0` disables replication message-byte dispatch throttling.| 0 |
-|zooKeeperSessionTimeoutMillis| Zookeeper session timeout in milliseconds |30000|
-|brokerShutdownTimeoutMs| Time to wait for broker graceful shutdown. After this time elapses, the process will be killed |60000|
-|skipBrokerShutdownOnOOM| Flag to skip broker shutdown when the broker handles an out-of-memory error. |false|
-|backlogQuotaCheckEnabled| Enable backlog quota check. Enforces action on the topic when the quota is reached |true|
-|backlogQuotaCheckIntervalInSeconds| How often to check for topics that have reached the quota |60|
-|backlogQuotaDefaultLimitGB| The default per-topic backlog quota limit. A value less than 0 means no limit. By default, it is -1. | -1 |
-|backlogQuotaDefaultRetentionPolicy|The default backlog quota retention policy. By default, it is `producer_request_hold`.
  • 'producer_request_hold' Policy which holds producer's send request until the resource becomes available (or holding times out)
  • 'producer_exception' Policy which throws `javax.jms.ResourceAllocationException` to the producer
  • 'consumer_backlog_eviction' Policy which evicts the oldest message from the slowest consumer's backlog
  |producer_request_hold|
  • `delete_when_no_subscriptions`: delete the topic which has no subscriptions or active producers.
  • `delete_when_subscriptions_caught_up`: delete the topic whose subscriptions have no backlogs and which has no active producers or consumers.
  | `delete_when_no_subscriptions` |
-| brokerDeleteInactiveTopicsMaxInactiveDurationSeconds | Set the maximum duration for inactive topics. If it is not specified, the `brokerDeleteInactiveTopicsFrequencySeconds` parameter is adopted. | N/A |
-|forceDeleteTenantAllowed| Allow you to delete a tenant forcefully. |false|
-|forceDeleteNamespaceAllowed| Allow you to delete a namespace forcefully. |false|
-|messageExpiryCheckIntervalInMinutes| The frequency of proactively checking and purging expired messages. |5|
-|brokerServiceCompactionMonitorIntervalInSeconds| Interval between checks to determine whether topics with compaction policies need compaction. |60|
-|brokerServiceCompactionThresholdInBytes|If the estimated backlog size is greater than this threshold, compaction is triggered.

    Setting this threshold to 0 disables the compaction check.|N/A|
-|delayedDeliveryEnabled| Whether to enable delayed delivery for messages. If disabled, messages will be immediately delivered and there will be no tracking overhead.|true|
-|delayedDeliveryTickTimeMillis|Control the tick time for retrying on delayed delivery, which affects the accuracy of the delivery time compared to the scheduled time. By default, it is 1 second.|1000|
-|activeConsumerFailoverDelayTimeMillis| How long to delay rewinding the cursor and dispatching messages when the active consumer is changed. |1000|
-|clientLibraryVersionCheckEnabled| Enable check for minimum allowed client library version |false|
-|clientLibraryVersionCheckAllowUnversioned| Allow client libraries with no version information |true|
-|statusFilePath| Path for the file used to determine the rotation status for the broker when responding to service discovery health checks ||
-|preferLaterVersions| If true (and ModularLoadManagerImpl is being used), the load manager will attempt to use only brokers running the latest software version (to minimize impact to bundles) |false|
-|maxNumPartitionsPerPartitionedTopic|Max number of partitions per partitioned topic. Use 0 or a negative number to disable the check|0|
-| maxSubscriptionsPerTopic | Maximum number of subscriptions allowed to subscribe to a topic. Once this limit is reached, the broker rejects new subscriptions until the number of subscriptions decreases. When the value is set to 0, the limit check is disabled. | 0 |
-| maxProducersPerTopic | Maximum number of producers allowed to connect to a topic. Once this limit is reached, the broker rejects new producers until the number of connected producers decreases. When the value is set to 0, the limit check is disabled. | 0 |
-| maxConsumersPerTopic | Maximum number of consumers allowed to connect to a topic. Once this limit is reached, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 |
-| maxConsumersPerSubscription | Maximum number of consumers allowed to connect to a subscription. Once this limit is reached, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 |
-|tlsEnabled|Deprecated - Use `webServicePortTls` and `brokerServicePortTls` instead. |false|
-|tlsCertificateFilePath| Path for the TLS certificate file ||
-|tlsKeyFilePath| Path for the TLS private key file ||
-|tlsTrustCertsFilePath| Path for the trusted TLS certificate file. This cert is used to verify that any certs presented by connecting clients are signed by a certificate authority. If this verification fails, then the certs are untrusted and the connections are dropped. ||
-|tlsAllowInsecureConnection| Accept untrusted TLS certificate from client. If it is set to `true`, a client with a cert which cannot be verified with the 'tlsTrustCertsFilePath' cert will be allowed to connect to the server, though the cert will not be used for client authentication. |false|
-|tlsProtocols|Specify the TLS protocols the broker will use to negotiate during the TLS handshake. Multiple values can be specified, separated by commas. Example:- ```TLSv1.3```, ```TLSv1.2``` ||
-|tlsCiphers|Specify the TLS ciphers the broker will use to negotiate during the TLS handshake. Multiple values can be specified, separated by commas. 
Example:- ```TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256```||
-|tlsEnabledWithKeyStore| Enable TLS with KeyStore type configuration in broker |false|
-|tlsProvider| TLS Provider for KeyStore type ||
-|tlsKeyStoreType| TLS KeyStore type configuration in broker: JKS, PKCS12 |JKS|
-|tlsKeyStore| TLS KeyStore path in broker ||
-|tlsKeyStorePassword| TLS KeyStore password for broker ||
-|brokerClientTlsEnabledWithKeyStore| Whether the internal client uses KeyStore type to authenticate with Pulsar brokers |false|
-|brokerClientSslProvider| The TLS Provider used by the internal client to authenticate with other Pulsar brokers ||
-|brokerClientTlsTrustStoreType| TLS TrustStore type configuration for the internal client: JKS, PKCS12, used by the internal client to authenticate with Pulsar brokers |JKS|
-|brokerClientTlsTrustStore| TLS TrustStore path for the internal client, used by the internal client to authenticate with Pulsar brokers ||
-|brokerClientTlsTrustStorePassword| TLS TrustStore password for the internal client, used by the internal client to authenticate with Pulsar brokers ||
-|brokerClientTlsCiphers| Specify the TLS ciphers the internal client will use to negotiate during the TLS handshake (a comma-separated list of ciphers), e.g. [TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256]||
-|brokerClientTlsProtocols|Specify the TLS protocols the broker will use to negotiate during the TLS handshake (a comma-separated list of protocol names), e.g. `TLSv1.3`, `TLSv1.2` ||
-|ttlDurationDefaultInSeconds|The default Time to Live (TTL) for namespaces if the TTL is not configured in namespace policies. When the value is set to `0`, TTL is disabled. By default, TTL is disabled. |0|
-|tokenSettingPrefix| Configure the prefix of the token-related settings like `tokenSecretKey`, `tokenPublicKey`, `tokenAuthClaim`, `tokenPublicAlg`, `tokenAudienceClaim`, and `tokenAudience`. ||
-|tokenSecretKey| Configure the secret key to be used to validate auth tokens. The key can be specified like: `tokenSecretKey=data:;base64,xxxxxxxxx` or `tokenSecretKey=file:///my/secret.key`. Note: key file must be DER-encoded.||
-|tokenPublicKey| Configure the public key to be used to validate auth tokens. The key can be specified like: `tokenPublicKey=data:;base64,xxxxxxxxx` or `tokenPublicKey=file:///my/secret.key`. Note: key file must be DER-encoded.||
-|tokenPublicAlg| Configure the algorithm to be used to validate auth tokens. This can be any of the asymmetric algorithms supported by Java JWT (https://github.com/jwtk/jjwt#signature-algorithms-keys) |RS256|
-|tokenAuthClaim| Specify which of the token's claims will be used as the authentication "principal" or "role". The default "sub" claim will be used if this is left blank ||
-|tokenAudienceClaim| The token audience "claim" name, e.g. "aud", that will be used to get the audience from the token. If not set, the audience will not be verified. ||
-|tokenAudience| The token audience stands for this broker. The field `tokenAudienceClaim` of a valid token must contain this. ||
-|maxUnackedMessagesPerConsumer| Max number of unacknowledged messages allowed for a consumer on a shared subscription. The broker stops sending messages to a consumer once this limit is reached, until the consumer starts acknowledging messages back. A value of 0 disables the unacked-message limit check and the consumer can receive messages without any restriction |50000|
-|maxUnackedMessagesPerSubscription| Max number of unacknowledged messages allowed per shared subscription. 
The broker stops dispatching messages to all consumers of the subscription once this limit is reached, until consumers start acknowledging messages back and the unacked count drops to limit/2. A value of 0 disables the unacked-message limit check and the dispatcher can dispatch messages without any restriction |200000|
-|subscriptionRedeliveryTrackerEnabled| Enable the subscription message redelivery tracker |true|
-|subscriptionExpirationTimeMinutes | How long (in minutes) after the last consumption before an inactive subscription is deleted.

    Setting this configuration to a value **greater than 0** deletes inactive subscriptions automatically.
    Setting this configuration to **0** does not delete inactive subscriptions automatically.

    Since this configuration takes effect on all topics, if there is even one topic whose subscriptions should not be deleted automatically, you need to set it to 0.
    Instead, you can set a subscription expiration time for each **namespace** using the [`pulsar-admin namespaces set-subscription-expiration-time options` command](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-subscription-expiration-time-em-). | 0 |
-|maxConcurrentLookupRequest| Max number of concurrent lookup requests that the broker allows, to throttle heavy incoming lookup traffic |50000|
-|maxConcurrentTopicLoadRequest| Max number of concurrent topic loading requests that the broker allows, to control the number of ZooKeeper operations |5000|
-|authenticationEnabled| Enable authentication |false|
-|authenticationProviders| Authentication provider name list, which is a comma-separated list of class names ||
-| authenticationRefreshCheckSeconds | Interval of time for checking for expired authentication credentials | 60 |
-|authorizationEnabled| Enforce authorization |false|
-|superUserRoles| Role names that are treated as "super-user", meaning they will be able to do all admin operations and publish/consume from all topics ||
-|brokerClientAuthenticationPlugin| Authentication settings of the broker itself. Used when the broker connects to other brokers, either in the same or other clusters ||
-|brokerClientAuthenticationParameters|||
-|athenzDomainNames| Supported Athenz provider domain names (comma separated) for authentication ||
-|exposePreciseBacklogInPrometheus| Enable exposing the precise backlog stats. Set to false to calculate from the published counter and consumed counter, which is more efficient but may be inaccurate. |false|
-|schemaRegistryStorageClassName|The schema storage implementation used by this broker.|org.apache.pulsar.broker.service.schema.BookkeeperSchemaStorageFactory|
-|isSchemaValidationEnforced|Enforce schema validation in the following case: if a producer without a schema attempts to produce to a topic with a schema, the producer fails to connect. Please be careful when using this, since non-Java clients don't support schema: if this setting is enabled, non-Java clients fail to produce.|false|
-| topicFencingTimeoutSeconds | If a topic remains fenced for a certain time period (in seconds), it is closed forcefully. If set to 0 or a negative number, the fenced topic is not closed. | 0 |
-|offloadersDirectory|The directory for all the offloader implementations.|./offloaders|
-|bookkeeperMetadataServiceUri| Metadata service URI that BookKeeper uses to load the corresponding metadata driver and resolve its metadata service location. This value can be fetched using the `bookkeeper shell whatisinstanceid` command in a BookKeeper cluster. For example: zk+hierarchical://localhost:2181/ledgers. The metadata service URI list can also be semicolon-separated values like below: zk+hierarchical://zk1:2181;zk2:2181;zk3:2181/ledgers ||
-|bookkeeperClientAuthenticationPlugin| Authentication plugin to use when connecting to bookies ||
-|bookkeeperClientAuthenticationParametersName| BookKeeper auth plugin implementation-specific parameters name and values ||
-|bookkeeperClientAuthenticationParameters|||
-|bookkeeperClientNumWorkerThreads| Number of BookKeeper client worker threads. 
Default is Runtime.getRuntime().availableProcessors() ||
-|bookkeeperClientTimeoutInSeconds| Timeout for BK add / read operations |30|
-|bookkeeperClientSpeculativeReadTimeoutInMillis| Speculative reads are initiated if a read request doesn’t complete within a certain time. A value of 0 disables speculative reads |0|
-|bookkeeperNumberOfChannelsPerBookie| Number of channels per bookie |16|
-|bookkeeperClientHealthCheckEnabled| Enable bookies health check. Bookies that have more than the configured number of failures within the interval will be quarantined for some time. During this period, new ledgers won’t be created on these bookies |true|
-|bookkeeperClientHealthCheckIntervalSeconds||60|
-|bookkeeperClientHealthCheckErrorThresholdPerInterval||5|
-|bookkeeperClientHealthCheckQuarantineTimeInSeconds ||1800|
-|bookkeeperClientRackawarePolicyEnabled| Enable rack-aware bookie selection policy. BK will choose bookies from different racks when forming a new bookie ensemble |true|
-|bookkeeperClientRegionawarePolicyEnabled| Enable region-aware bookie selection policy. BK will choose bookies from different regions and racks when forming a new bookie ensemble. If enabled, the value of bookkeeperClientRackawarePolicyEnabled is ignored |false|
-|bookkeeperClientMinNumRacksPerWriteQuorum| Minimum number of racks per write quorum. The BK rack-aware bookie selection policy will try to get bookies from at least 'bookkeeperClientMinNumRacksPerWriteQuorum' racks for a write quorum. |2|
-|bookkeeperClientEnforceMinNumRacksPerWriteQuorum| Enforces the rack-aware bookie selection policy to pick bookies from 'bookkeeperClientMinNumRacksPerWriteQuorum' racks for a write quorum. If BK can't find such bookies, it throws BKNotEnoughBookiesException instead of picking a random one. |false|
-|bookkeeperClientReorderReadSequenceEnabled| Enable/disable reordering the read sequence on reading entries. |false|
-|bookkeeperClientIsolationGroups| Enable bookie isolation by specifying a list of bookie groups to choose from. Any bookie outside the specified groups will not be used by the broker ||
-|bookkeeperClientSecondaryIsolationGroups| Enable the bookie secondary-isolation group if bookkeeperClientIsolationGroups doesn't have enough bookies available. ||
-|bookkeeperClientMinAvailableBookiesInIsolationGroups| Minimum bookies that should be available as part of bookkeeperClientIsolationGroups, else the broker will include bookkeeperClientSecondaryIsolationGroups bookies in the isolated list. ||
-|bookkeeperClientGetBookieInfoIntervalSeconds| Set the interval to periodically check bookie info |86400|
-|bookkeeperClientGetBookieInfoRetryIntervalSeconds| Set the interval to retry a failed bookie info lookup |60|
-|bookkeeperEnableStickyReads | Enable/disable having read operations for a ledger be sticky to a single bookie. If this flag is enabled, the client will use one single bookie (by preference) to read all entries for a ledger. | true |
-|managedLedgerDefaultEnsembleSize| Number of bookies to use when creating a ledger |2|
-|managedLedgerDefaultWriteQuorum| Number of copies to store for each message |2|
-|managedLedgerDefaultAckQuorum| Number of guaranteed copies (acks to wait for before a write is complete) |2|
-|managedLedgerCacheSizeMB| Amount of memory to use for caching data payloads in managed ledgers. This memory is allocated from JVM direct memory and it’s shared across all the topics running in the same broker. 
By default, uses 1/5th of available direct memory ||
-|managedLedgerCacheCopyEntries| Whether we should make a copy of the entry payloads when inserting into the cache| false|
-|managedLedgerCacheEvictionWatermark| Threshold to which the cache level is brought down when eviction is triggered |0.9|
-|managedLedgerCacheEvictionFrequency| Configure the cache eviction frequency for the managed ledger cache (evictions/sec) | 100.0 |
-|managedLedgerCacheEvictionTimeThresholdMillis| All entries that have stayed in the cache for more than the configured time will be evicted | 1000 |
-|managedLedgerCursorBackloggedThreshold| Configure the threshold (in number of entries) from which a cursor should be considered 'backlogged' and thus should be set as inactive. | 1000|
-|managedLedgerDefaultMarkDeleteRateLimit| Rate limit the amount of writes per second generated by consumers acknowledging messages |1.0|
-|managedLedgerMaxEntriesPerLedger| The max number of entries to append to a ledger before triggering a rollover. A ledger rollover is triggered after the min rollover time has passed and one of the following conditions is true:
    • The max rollover time has been reached
    • The max entries have been written to the ledger
    • The max ledger size has been written to the ledger
    |50000|
-|managedLedgerMinLedgerRolloverTimeMinutes| Minimum time between ledger rollovers for a topic |10|
-|managedLedgerMaxLedgerRolloverTimeMinutes| Maximum time before forcing a ledger rollover for a topic |240|
-|managedLedgerCursorMaxEntriesPerLedger| Max number of entries to append to a cursor ledger |50000|
-|managedLedgerCursorRolloverTimeInSeconds| Max time before triggering a rollover on a cursor ledger |14400|
-|managedLedgerMaxUnackedRangesToPersist| Max number of "acknowledgment holes" that are going to be persistently stored. When acknowledging out of order, a consumer will leave holes that are supposed to be quickly filled by acking all the messages. The information of which messages are acknowledged is persisted by compressing it into "ranges" of messages that were acknowledged. After the max number of ranges is reached, the information will only be tracked in memory and messages will be redelivered in case of crashes. |1000|
-|autoSkipNonRecoverableData| Skip reading non-recoverable/unreadable data ledgers under the managed ledger's list. It helps when data ledgers get corrupted in BookKeeper and the managed cursor is stuck at that ledger. |false|
-|loadBalancerEnabled| Enable load balancer |true|
-|loadBalancerPlacementStrategy| Strategy to assign a new bundle |weightedRandomSelection|
-|loadBalancerReportUpdateThresholdPercentage| Percentage of change to trigger a load report update |10|
-|loadBalancerReportUpdateMaxIntervalMinutes| Maximum interval to update the load report |15|
-|loadBalancerHostUsageCheckIntervalMinutes| Frequency of report to collect |1|
-|loadBalancerSheddingIntervalMinutes| Load shedding interval. The broker periodically checks whether some traffic should be offloaded from over-loaded brokers to under-loaded brokers |30|
-|loadBalancerSheddingGracePeriodMinutes| Prevent the same topics from being shed and moved to another broker more than once within this timeframe |30|
-|loadBalancerBrokerMaxTopics| Usage threshold to allocate the max number of topics to a broker |50000|
-|loadBalancerBrokerUnderloadedThresholdPercentage| Usage threshold to determine a broker as under-loaded |1|
-|loadBalancerBrokerOverloadedThresholdPercentage| Usage threshold to determine a broker as over-loaded |85|
-|loadBalancerResourceQuotaUpdateIntervalMinutes| Interval to update the namespace bundle resource quota |15|
-|loadBalancerBrokerComfortLoadLevelPercentage| Usage threshold to determine a broker is having just the right level of load |65|
-|loadBalancerAutoBundleSplitEnabled| Enable/disable namespace bundle auto split |false|
-|loadBalancerNamespaceBundleMaxTopics| Maximum topics in a bundle, otherwise bundle split will be triggered |1000|
-|loadBalancerNamespaceBundleMaxSessions| Maximum sessions (producers + consumers) in a bundle, otherwise bundle split will be triggered |1000|
-|loadBalancerNamespaceBundleMaxMsgRate| Maximum msgRate (in + out) in a bundle, otherwise bundle split will be triggered |1000|
-|loadBalancerNamespaceBundleMaxBandwidthMbytes| Maximum bandwidth (in + out) in a bundle, otherwise bundle split will be triggered |100|
-|loadBalancerNamespaceMaximumBundles| Maximum number of bundles in a namespace |128|
-|replicationMetricsEnabled| Enable replication metrics |true|
-|replicationConnectionsPerBroker| Max number of connections to open for each broker in a remote cluster. More connections host-to-host lead to better throughput over high-latency links. 
|16|
-|replicationProducerQueueSize| Replicator producer queue size |1000|
-|replicatorPrefix| Replicator prefix used for the replicator producer name and cursor name |pulsar.repl|
-|replicationTlsEnabled| Enable TLS when talking with other clusters to replicate messages |false|
-|brokerServicePurgeInactiveFrequencyInSeconds|Deprecated. Use `brokerDeleteInactiveTopicsFrequencySeconds`.|60|
-|transactionCoordinatorEnabled|Whether to enable the transaction coordinator in the broker.|true|
-|transactionMetadataStoreProviderClassName| |org.apache.pulsar.transaction.coordinator.impl.InMemTransactionMetadataStoreProvider|
-|defaultRetentionTimeInMinutes| Default message retention time. 0 means retention is disabled. -1 means data is not removed by time quota |0|
-|defaultRetentionSizeInMB| Default retention size. 0 means retention is disabled. -1 means data is not removed by size quota |0|
-|keepAliveIntervalSeconds| How often to check whether the connections are still alive |30|
-|bootstrapNamespaces| The bootstrap name. | N/A |
-|loadManagerClassName| Name of the load manager to use |org.apache.pulsar.broker.loadbalance.impl.SimpleLoadManagerImpl|
-|supportedNamespaceBundleSplitAlgorithms| Supported algorithm names for namespace bundle split |[range_equally_divide,topic_count_equally_divide]|
-|defaultNamespaceBundleSplitAlgorithm| Default algorithm name for namespace bundle split |range_equally_divide|
-|managedLedgerOffloadDriver| Driver to use to offload old data to long term storage (Possible values: S3, aws-s3, google-cloud-storage). When using google-cloud-storage, make sure both Google Cloud Storage and Google Cloud Storage JSON API are enabled for the project (check from Developers Console -> Api&auth -> APIs). ||
-|managedLedgerOffloadMaxThreads| Maximum number of thread pool threads for ledger offloading |2|
-|managedLedgerOffloadPrefetchRounds|The maximum prefetch rounds for ledger reading for offloading.|1|
-|managedLedgerUnackedRangesOpenCacheSetEnabled| Use Open Range-Set to cache unacknowledged messages |true|
-|managedLedgerOffloadDeletionLagMs|Delay between a ledger being successfully offloaded to long term storage and the ledger being deleted from BookKeeper | 14400000|
-|managedLedgerOffloadAutoTriggerSizeThresholdBytes|The number of bytes before triggering automatic offload to long term storage |-1 (disabled)|
-|s3ManagedLedgerOffloadRegion| For Amazon S3 ledger offload, AWS region ||
-|s3ManagedLedgerOffloadBucket| For Amazon S3 ledger offload, bucket to place offloaded ledgers into ||
-|s3ManagedLedgerOffloadServiceEndpoint| For Amazon S3 ledger offload, alternative endpoint to connect to (useful for testing) ||
-|s3ManagedLedgerOffloadMaxBlockSizeInBytes| For Amazon S3 ledger offload, max block size in bytes. (64MB by default, 5MB minimum) |67108864|
-|s3ManagedLedgerOffloadReadBufferSizeInBytes| For Amazon S3 ledger offload, read buffer size in bytes (1MB by default) |1048576|
-|gcsManagedLedgerOffloadRegion|For Google Cloud Storage ledger offload, region where the offload bucket is located. Go to this page for more details: https://cloud.google.com/storage/docs/bucket-locations .|N/A|
-|gcsManagedLedgerOffloadBucket|For Google Cloud Storage ledger offload, bucket to place offloaded ledgers into.|N/A|
-|gcsManagedLedgerOffloadMaxBlockSizeInBytes|For Google Cloud Storage ledger offload, the maximum block size in bytes. 
(64MB by default, 5MB minimum)|67108864|
-|gcsManagedLedgerOffloadReadBufferSizeInBytes|For Google Cloud Storage ledger offload, read buffer size in bytes. (1MB by default)|1048576|
-|gcsManagedLedgerOffloadServiceAccountKeyFile|For Google Cloud Storage, path to the JSON file containing service account credentials. For more details, see the "Service Accounts" section of https://support.google.com/googleapi/answer/6158849 .|N/A|
-|fileSystemProfilePath|For File System Storage, file system profile path.|../conf/filesystem_offload_core_site.xml|
-|fileSystemURI|For File System Storage, file system URI.|N/A|
-|s3ManagedLedgerOffloadRole| For Amazon S3 ledger offload, provide a role to assume before writing to S3 ||
-|s3ManagedLedgerOffloadRoleSessionName| For Amazon S3 ledger offload, provide a role session name when using a role |pulsar-s3-offload|
-| acknowledgmentAtBatchIndexLevelEnabled | Enable or disable the batch index acknowledgement. | false |
-|enableReplicatedSubscriptions|Whether to enable tracking of replicated subscriptions state across clusters.|true|
-|replicatedSubscriptionsSnapshotFrequencyMillis|The frequency of snapshots for replicated subscriptions tracking.|1000|
-|replicatedSubscriptionsSnapshotTimeoutSeconds|The timeout for building a consistent snapshot for tracking replicated subscriptions state.|30|
-|replicatedSubscriptionsSnapshotMaxCachedPerSubscription|The maximum number of snapshots to be cached per subscription.|10|
-|maxMessagePublishBufferSizeInMB|The maximum memory size for a broker to handle messages that are sent by producers. If the processing message size exceeds this value, the broker stops reading data from the connection. The processing messages refer to the messages that are sent to the broker but the broker has not yet sent a response to the client. Usually the messages are waiting to be written to bookies. It is shared across all the topics running in the same broker. The value `-1` disables the memory limitation. By default, it is 50% of direct memory.|N/A|
-|messagePublishBufferCheckIntervalInMillis|Interval between checks to see if the message publish buffer size exceeds the maximum. Use `0` or a negative number to disable the max publish buffer limiting.|100|
-|retentionCheckIntervalInSeconds|Interval between checks to see if consumed ledgers need to be trimmed. Use 0 or a negative number to disable the check.|120|
-| maxMessageSize | Set the maximum size of a message. | 5242880 |
-| preciseTopicPublishRateLimiterEnable | Enable precise topic publish rate limiting. | false |
-| lazyCursorRecovery | Whether to recover cursors lazily when trying to recover a managed ledger backing a persistent topic. It can improve the write availability of topics. The caveat is that when the recovered ledger is ready for writes, it is not guaranteed that all old consumers' last mark-delete positions (ack positions) can be recovered. Users can make this trade-off, or add custom logic in the application to checkpoint consumer state.| false |
-|haProxyProtocolEnabled | Enable or disable the [HAProxy](http://www.haproxy.org/) protocol. |false|
-| maxTopicsPerNamespace | The maximum number of persistent topics that can be created in the namespace. When the number of topics reaches this threshold, the broker rejects the request of creating a new topic, including the auto-created topics by a producer or consumer, until the number of topics in the namespace decreases. The default value 0 disables the check. 
| 0 | -|subscriptionTypesEnabled| The subscription types enabled on the broker: exclusive, shared, failover, and key_shared. | Exclusive, Shared, Failover, Key_Shared | -| managedLedgerInfoCompressionType | ManagedLedgerInfo compression type; option values are NONE, LZ4, ZLIB, ZSTD, and SNAPPY. If the value is NONE or invalid, the ManagedLedgerInfo is not compressed. Note that after enabling this configuration, if you want to downgrade the broker, you should first change the configuration to `NONE` and make sure all ledger metadata is saved without compression. | None | -|narExtractionDirectory | The extraction directory of the NAR package.&#13;
    Available for Protocol Handler, Additional Servlets, Offloaders, and Broker Interceptor. | System.getProperty("java.io.tmpdir") | - -## Client - -You can use the [`pulsar-client`](reference-cli-tools.md#pulsar-client) CLI tool to publish messages to and consume messages from Pulsar topics. You can use this tool in place of a client library. - -|Name|Description|Default| -|---|---|---| -|webServiceUrl| The web URL for the cluster. |http://localhost:8080/| -|brokerServiceUrl| The Pulsar protocol URL for the cluster. |pulsar://localhost:6650/| -|authPlugin| The authentication plugin. || -|authParams| The authentication parameters for the cluster, as a comma-separated string. || -|useTls| Whether to enforce TLS authentication in the cluster. |false| -| tlsAllowInsecureConnection | Allow TLS connections to servers whose certificate cannot be verified to have been signed by a trusted certificate authority. | false | -| tlsEnableHostnameVerification | Whether the server hostname must match the common name of the certificate that is used by the server. | false | -|tlsTrustCertsFilePath||| -| useKeyStoreTls | Enable TLS with KeyStore type configuration in the broker. | false | -| tlsTrustStoreType | TLS TrustStore type configuration.&#13;
    • JKS&#13;
    • PKCS12&#13;
|JKS| -| tlsTrustStore | TLS TrustStore path. | | -| tlsTrustStorePassword | TLS TrustStore password. | | -| webserviceTlsProvider | The TLS provider for the web service.&#13;
    When TLS authentication with CACert is used, the valid value is either `OPENSSL` or `JDK`.
    When TLS authentication with KeyStore is used, available options can be `SunJSSE`, `Conscrypt` and so on. | N/A | - -## Service discovery - -|Name|Description|Default| -|---|---|---| -|zookeeperServers| Zookeeper quorum connection string (comma-separated) || -|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300| -|configurationStoreServers| Configuration store connection string (as a comma-separated list) || -|zookeeperSessionTimeoutMs| ZooKeeper session timeout |30000| -|servicePort| Port used to serve binary-proto requests |6650| -|servicePortTls| Port used to serve binary-proto TLS requests |6651| -|webServicePort| Port that the discovery service listens on |8080| -|webServicePortTls| Port used to serve HTTPS requests |8443| -|bindOnLocalhost| Control whether to bind directly on localhost rather than on the normal hostname |false| -|authenticationEnabled| Enable authentication |false| -|authenticationProviders| Authentication provider name list (a comma-separated list of class names) || -|authorizationEnabled| Enforce authorization |false| -|superUserRoles| Role names that are treated as "super-users", meaning they will be able to do all admin operations and publish/consume from all topics (comma-separated) || -|tlsEnabled| Enable TLS |false| -|tlsCertificateFilePath| Path for the TLS certificate file || -|tlsKeyFilePath| Path for the TLS private key file || - - - -## Log4j - -You can set the log level and configuration in the [log4j2.yaml](https://github.com/apache/pulsar/blob/d557e0aa286866363bc6261dec87790c055db1b0/conf/log4j2.yaml#L155) file. The following logging configuration parameters are available. - -|Name|Default| -|---|---| -|pulsar.root.logger| WARN,CONSOLE| -|pulsar.log.dir| logs| -|pulsar.log.file| pulsar.log| -|log4j.rootLogger| ${pulsar.root.logger}| -|log4j.appender.CONSOLE| org.apache.log4j.ConsoleAppender| -|log4j.appender.CONSOLE.Threshold| DEBUG| -|log4j.appender.CONSOLE.layout| org.apache.log4j.PatternLayout| -|log4j.appender.CONSOLE.layout.ConversionPattern| %d{ISO8601} - %-5p - [%t:%C{1}@%L] - %m%n| -|log4j.appender.ROLLINGFILE| org.apache.log4j.DailyRollingFileAppender| -|log4j.appender.ROLLINGFILE.Threshold| DEBUG| -|log4j.appender.ROLLINGFILE.File| ${pulsar.log.dir}/${pulsar.log.file}| -|log4j.appender.ROLLINGFILE.layout| org.apache.log4j.PatternLayout| -|log4j.appender.ROLLINGFILE.layout.ConversionPattern| %d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n| -|log4j.appender.TRACEFILE| org.apache.log4j.FileAppender| -|log4j.appender.TRACEFILE.Threshold| TRACE| -|log4j.appender.TRACEFILE.File| pulsar-trace.log| -|log4j.appender.TRACEFILE.layout| org.apache.log4j.PatternLayout| -|log4j.appender.TRACEFILE.layout.ConversionPattern| %d{ISO8601} - %-5p [%t:%C{1}@%L][%x] - %m%n| - -> Note: 'topic' in log4j2.appender is configurable. -> - If you want to append all logs to a single topic, set the same topic name. -> - If you want to append logs to different topics, you can set different topic names. &#13;
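For example, to raise the root log level and write logs to a custom directory, you can override the first three parameters in the table. This is a minimal sketch: the property names come from the table above, while the `INFO` level and the `/var/log/pulsar` path are illustrative values, and where you set them (for example, the properties section of `log4j2.yaml`) depends on your deployment:

```properties

# Illustrative overrides; the names are from the table above,
# the values are examples rather than shipped defaults.
pulsar.root.logger=INFO,CONSOLE
pulsar.log.dir=/var/log/pulsar
pulsar.log.file=pulsar.log

```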
 - -## Log4j shell - -|Name|Default| -|---|---| -|bookkeeper.root.logger| ERROR,CONSOLE| -|log4j.rootLogger| ${bookkeeper.root.logger}| -|log4j.appender.CONSOLE| org.apache.log4j.ConsoleAppender| -|log4j.appender.CONSOLE.Threshold| DEBUG| -|log4j.appender.CONSOLE.layout| org.apache.log4j.PatternLayout| -|log4j.appender.CONSOLE.layout.ConversionPattern| %d{ABSOLUTE} %-5p %m%n| -|log4j.logger.org.apache.zookeeper| ERROR| -|log4j.logger.org.apache.bookkeeper| ERROR| -|log4j.logger.org.apache.bookkeeper.bookie.BookieShell| INFO| - - -## Standalone - -|Name|Description|Default| -|---|---|---| -|authenticateOriginalAuthData| If this flag is set to `true`, the broker authenticates the original Auth data; otherwise, it just accepts the originalPrincipal and authorizes it (if required). |false| -|zookeeperServers| The quorum connection string for local ZooKeeper || -|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300| -|configurationStoreServers| Configuration store connection string (as a comma-separated list) || -|brokerServicePort| The port on which the standalone broker listens for connections |6650| -|webServicePort| The port used by the standalone broker for HTTP requests |8080| -|bindAddress| The hostname or IP address on which the standalone service binds |0.0.0.0| -|advertisedAddress| The hostname or IP address that the standalone service advertises to the outside world. If not set, the value of `InetAddress.getLocalHost().getHostName()` is used. || -| numAcceptorThreads | Number of threads to use for the Netty acceptor | 1 | -| numIOThreads | Number of threads to use for Netty IO | 2 * Runtime.getRuntime().availableProcessors() | -| numHttpServerThreads | Number of threads to use for HTTP request processing | 2 * Runtime.getRuntime().availableProcessors()| -|isRunningStandalone|This flag controls features that are meant to be used when running in standalone mode.|N/A| -|clusterName| The name of the cluster that this broker belongs to. |standalone| -| failureDomainsEnabled | Enable the cluster's failure domain, which can distribute brokers into logical regions. | false | -|zooKeeperSessionTimeoutMillis| The ZooKeeper session timeout, in milliseconds. |30000| -|zooKeeperOperationTimeoutSeconds|ZooKeeper operation timeout in seconds.|30| -|brokerShutdownTimeoutMs| The time to wait for graceful broker shutdown. After this time elapses, the process will be killed. |60000| -|skipBrokerShutdownOnOOM| Flag to skip broker shutdown when the broker handles an out-of-memory error. |false| -|backlogQuotaCheckEnabled| Enable the backlog quota check, which enforces a specified action when the quota is reached. |true| -|backlogQuotaCheckIntervalInSeconds| How often to check for topics that have reached the backlog quota. |60| -|backlogQuotaDefaultLimitGB| The default per-topic backlog quota limit. A value less than 0 means no limitation. By default, it is -1. |-1| -|ttlDurationDefaultInSeconds|The default Time to Live (TTL) for namespaces if the TTL is not configured in namespace policies. When the value is set to `0`, TTL is disabled. By default, TTL is disabled. |0| -|brokerDeleteInactiveTopicsEnabled| Enable the deletion of inactive topics. |true| -|brokerDeleteInactiveTopicsFrequencySeconds| How often to check for inactive topics, in seconds. |60| -| maxPendingPublishRequestsPerConnection | Maximum pending publish requests per connection, to avoid keeping a large number of pending requests in memory | 1000| -|messageExpiryCheckIntervalInMinutes| How often to proactively check and purge expired messages. &#13;
|5| -|activeConsumerFailoverDelayTimeMillis| How long to delay rewinding the cursor and dispatching messages when the active consumer changes. |1000| -| subscriptionExpirationTimeMinutes | How long, since the last consumption, before an inactive subscription is deleted. When it is set to 0, inactive subscriptions are not deleted automatically | 0 | -| subscriptionRedeliveryTrackerEnabled | Enable the subscription message redelivery tracker to send the redelivery count to the consumer. | true | -|subscriptionKeySharedEnable|Whether to enable the Key_Shared subscription.|true| -| subscriptionKeySharedUseConsistentHashing | In the Key_Shared subscription type, with the default AUTO_SPLIT mode, use splitting ranges or consistent hashing to reassign keys to new consumers. | false | -| subscriptionKeySharedConsistentHashingReplicaPoints | In the Key_Shared subscription type, the number of points in the consistent-hashing ring. The greater the number, the more equal the assignment of keys to consumers. | 100 | -| subscriptionExpiryCheckIntervalInMinutes | How frequently to proactively check and purge expired subscriptions |5 | -| brokerDeduplicationEnabled | Set the default behavior for message deduplication in the broker. This can be overridden per-namespace. If it is enabled, the broker rejects messages that are already stored in the topic. | false | -| brokerDeduplicationMaxNumberOfProducers | Maximum number of producers whose information is persisted for deduplication purposes | 10000 | -| brokerDeduplicationEntriesInterval | Number of entries after which a deduplication information snapshot is taken. A greater interval leads to fewer snapshots being taken, though it increases the topic recovery time when the entries published after the snapshot need to be replayed. | 1000 | -| brokerDeduplicationProducerInactivityTimeoutMinutes | The time of inactivity (in minutes) after which the broker discards deduplication information related to a disconnected producer. | 360 | -| defaultNumberOfNamespaceBundles | When a namespace is created without specifying the number of bundles, this value is used as the default setting.| 4 | -|clientLibraryVersionCheckEnabled| Enable checks for the minimum allowed client library version. |false| -|clientLibraryVersionCheckAllowUnversioned| Allow client libraries with no version information |true| -|statusFilePath| The path for the file used to determine the rotation status for the broker when responding to service discovery health checks |/usr/local/apache/htdocs| -|maxUnackedMessagesPerConsumer| The maximum number of unacknowledged messages allowed to be received by consumers on a shared subscription. The broker will stop sending messages to a consumer once this limit is reached or until the consumer begins acknowledging messages. A value of 0 disables the unacked message limit check and thus allows consumers to receive messages without any restrictions. |50000| -|maxUnackedMessagesPerSubscription| The same as above, except per subscription rather than per consumer. |200000| -| maxUnackedMessagesPerBroker | Maximum number of unacknowledged messages allowed per broker. Once this limit is reached, the broker stops dispatching messages to all shared subscriptions that have a higher number of unacknowledged messages, until subscriptions start acknowledging messages back and the unacknowledged message count falls to limit/2. When the value is set to 0, the unacknowledged message limit check is disabled and the broker does not block dispatchers. &#13;
| 0 | -| maxUnackedMessagesPerSubscriptionOnBrokerBlocked | Once the broker reaches the maxUnackedMessagesPerBroker limit, it blocks subscriptions whose unacknowledged message count is higher than this percentage limit, and those subscriptions do not receive any new messages until they acknowledge messages back. | 0.16 | -| unblockStuckSubscriptionEnabled|The broker periodically checks whether a subscription is stuck and unblocks it if this flag is enabled.|false| -|zookeeperSessionExpiredPolicy|There are two policies for when a ZooKeeper session expires: "shutdown" and "reconnect". With the "shutdown" policy, the broker shuts down when the ZooKeeper session expires. With the "reconnect" policy, the broker tries to reconnect to the ZooKeeper server and re-register metadata with ZooKeeper. Note: the "reconnect" policy is an experimental feature.|shutdown| -| topicPublisherThrottlingTickTimeMillis | Tick time to schedule the task that checks topic publish rate limiting across all topics. A lower value can improve accuracy while throttling publishes, but it uses more CPU to perform frequent checks. (Disable publish throttling with value 0) | 10| -| brokerPublisherThrottlingTickTimeMillis | Tick time to schedule the task that checks broker publish rate limiting across all topics. A lower value can improve accuracy while throttling publishes, but it uses more CPU to perform frequent checks. When the value is set to 0, publish throttling is disabled. |50 | -| brokerPublisherThrottlingMaxMessageRate | Maximum rate (in 1 second) of messages allowed to publish for a broker if the message rate limiting is enabled. When the value is set to 0, message rate limiting is disabled. | 0| -| brokerPublisherThrottlingMaxByteRate | Maximum rate (in 1 second) of bytes allowed to publish for a broker if the byte rate limiting is enabled. When the value is set to 0, the byte rate limiting is disabled. | 0 | -|subscribeThrottlingRatePerConsumer|Too many subscribe requests from a consumer can cause the broker to rewind consumer cursors and load data from bookies, causing high network bandwidth usage. When a positive value is set, the broker throttles the subscribe requests for one consumer. Otherwise, the throttling is disabled. By default, throttling is disabled.|0| -|subscribeRatePeriodPerConsumerInSecond|Rate period for {subscribeThrottlingRatePerConsumer}. By default, it is 30s.|30| -| dispatchThrottlingRatePerTopicInMsg | Default message (per second) dispatch throttling-limit for every topic. When the value is set to 0, the default message dispatch throttling-limit is disabled. |0 | -| dispatchThrottlingRatePerTopicInByte | Default byte (per second) dispatch throttling-limit for every topic. When the value is set to 0, the default byte dispatch throttling-limit is disabled. | 0| -| dispatchThrottlingRateRelativeToPublishRate | Enable dispatch rate-limiting relative to publish rate. | false | -|dispatchThrottlingRatePerSubscriptionInMsg|The default number of messages for the dispatch throttling-limit of a subscription. The value of 0 disables message dispatch-throttling.|0| -|dispatchThrottlingRatePerSubscriptionInByte|The default number of message bytes for the dispatch throttling-limit of a subscription. The value of 0 disables message-byte dispatch-throttling.|0| -| dispatchThrottlingOnNonBacklogConsumerEnabled | Enable dispatch-throttling for both caught-up consumers and consumers with backlogs. | true | -|dispatcherMaxReadBatchSize|The maximum number of entries to read from BookKeeper. &#13;
By default, it is 100 entries.|100| -|dispatcherMaxReadSizeBytes|The maximum size in bytes of entries to read from BookKeeper. By default, it is 5MB.|5242880| -|dispatcherMinReadBatchSize|The minimum number of entries to read from BookKeeper. By default, it is 1 entry. When an error occurs while reading entries from BookKeeper, the broker backs off the batch size to this minimum number.|1| -|dispatcherMaxRoundRobinBatchSize|The maximum number of entries to dispatch for a shared subscription. By default, it is 20 entries.|20| -| preciseDispatcherFlowControl | Precise dispatcher flow control according to the historical message count of each entry. | false | -| streamingDispatch | Whether to use the streaming read dispatcher. It can be useful when there is a huge backlog to drain: instead of reading in micro batches, the read from BookKeeper is streamlined to make the most of consumer capacity until the BookKeeper read limit or the consumer processing limit is hit, after which consumer flow control can be used to tune the speed. This feature is currently in preview and can change in subsequent releases. | false | -| maxConcurrentLookupRequest | Maximum number of concurrent lookup requests that the broker allows, used to throttle heavy incoming lookup traffic. | 50000 | -| maxConcurrentTopicLoadRequest | Maximum number of concurrent topic loading requests that the broker allows, used to control the number of ZooKeeper operations. | 5000 | -| maxConcurrentNonPersistentMessagePerConnection | Maximum number of concurrent non-persistent messages that can be processed per connection. | 1000 | -| numWorkerThreadsForNonPersistentTopic | Number of worker threads to serve non-persistent topics. | 8 | -| enablePersistentTopics | Enable the broker to load persistent topics. | true | -| enableNonPersistentTopics | Enable the broker to load non-persistent topics. | true | -| maxSubscriptionsPerTopic | Maximum number of subscriptions allowed to subscribe to a topic. Once this limit is reached, the broker rejects new subscriptions until the number of subscriptions decreases. When the value is set to 0, the limit check is disabled. | 0 | -| maxProducersPerTopic | Maximum number of producers allowed to connect to a topic. Once this limit is reached, the broker rejects new producers until the number of connected producers decreases. When the value is set to 0, the limit check is disabled. | 0 | -| maxConsumersPerTopic | Maximum number of consumers allowed to connect to a topic. Once this limit is reached, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 | -| maxConsumersPerSubscription | Maximum number of consumers allowed to connect to a subscription. Once this limit is reached, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 | -| maxNumPartitionsPerPartitionedTopic | Maximum number of partitions per partitioned topic. When the value is set to a negative number or is set to 0, the check is disabled. | 0 | -| tlsCertRefreshCheckDurationSec | TLS certificate refresh duration in seconds. When the value is set to 0, the TLS certificate is checked on every new connection. | 300 | -| tlsCertificateFilePath | Path for the TLS certificate file. | | -| tlsKeyFilePath | Path for the TLS private key file. | | -| tlsTrustCertsFilePath | Path for the trusted TLS certificate file.| | -| tlsAllowInsecureConnection | Accept untrusted TLS certificates from the client. &#13;
If it is set to true, a client with a certificate which cannot be verified with the 'tlsTrustCertsFilePath' certificate is allowed to connect to the server, though the certificate is not used for client authentication. | false | -| tlsProtocols | Specify the TLS protocols the broker uses to negotiate during TLS handshake. | | -| tlsCiphers | Specify the TLS cipher the broker uses to negotiate during TLS handshake. | | -| tlsRequireTrustedClientCertOnConnect | Trusted client certificates are required to connect over TLS. The connection is rejected if the client certificate is not trusted. In effect, this requires that all connecting clients perform TLS client authentication. | false | -| tlsEnabledWithKeyStore | Enable TLS with KeyStore type configuration in the broker. | false | -| tlsProvider | TLS Provider for KeyStore type. | | -| tlsKeyStoreType | TLS KeyStore type configuration in the broker.&#13;
    • JKS&#13;
    • PKCS12&#13;
|JKS| -| tlsKeyStore | TLS KeyStore path in the broker. | | -| tlsKeyStorePassword | TLS KeyStore password for the broker. | | -| tlsTrustStoreType | TLS TrustStore type configuration in the broker.&#13;
    • JKS&#13;
    • PKCS12&#13;
|JKS| -| tlsTrustStore | TLS TrustStore path in the broker. | | -| tlsTrustStorePassword | TLS TrustStore password for the broker. | | -| brokerClientTlsEnabledWithKeyStore | Configure whether the internal client uses the KeyStore type to authenticate with Pulsar brokers. | false | -| brokerClientSslProvider | The TLS Provider used by the internal client to authenticate with other Pulsar brokers. | | -| brokerClientTlsTrustStoreType | TLS TrustStore type configuration for the internal client to authenticate with Pulsar brokers.&#13;
    • JKS&#13;
    • PKCS12&#13;
| JKS | -| brokerClientTlsTrustStore | TLS TrustStore path for the internal client to authenticate with Pulsar brokers. | | -| brokerClientTlsTrustStorePassword | TLS TrustStore password for the internal client to authenticate with Pulsar brokers. | | -| brokerClientTlsCiphers | Specify the TLS cipher that the internal client uses to negotiate during TLS handshake. | | -| brokerClientTlsProtocols | Specify the TLS protocols that the broker uses to negotiate during TLS handshake. | | -| systemTopicEnabled | Enable/Disable system topics. | false | -| topicLevelPoliciesEnabled | Enable or disable topic-level policies. Topic-level policies depend on the system topic, so enable the system topic first. | false | -| topicFencingTimeoutSeconds | If a topic remains fenced for a certain time period (in seconds), it is closed forcefully. If set to 0 or a negative number, the fenced topic is not closed. | 0 | -| proxyRoles | Role names that are treated as "proxy roles". If the broker sees a request whose role is a proxy role, it requires a valid original principal. | | -|authenticationEnabled| Enable authentication for the broker. |false| -|authenticationProviders| A comma-separated list of class names for authentication providers. |false| -|authorizationEnabled| Enforce authorization in brokers. |false| -| authorizationProvider | Authorization provider fully qualified class-name. | org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider | -| authorizationAllowWildcardsMatching | Allow wildcard matching in authorization. Wildcard matching is applicable only when the wildcard character (*) is present at the **first** or **last** position. | false | -|superUserRoles| Role names that are treated as "superusers." Superusers are authorized to perform all admin tasks. | | -|brokerClientAuthenticationPlugin| The authentication settings of the broker itself. Used when the broker connects to other brokers either in the same cluster or from other clusters. | | -|brokerClientAuthenticationParameters| The parameters that go along with the plugin specified using brokerClientAuthenticationPlugin. | | -|athenzDomainNames| Supported Athenz authentication provider domain names as a comma-separated list. | | -| anonymousUserRole | When this parameter is not empty, unauthenticated users perform as anonymousUserRole. | | -|tokenSettingPrefix| Configure the prefix of token-related settings like `tokenSecretKey`, `tokenPublicKey`, `tokenAuthClaim`, `tokenPublicAlg`, `tokenAudienceClaim`, and `tokenAudience`. || -|tokenSecretKey| Configure the secret key to be used to validate auth tokens. The key can be specified like: `tokenSecretKey=data:;base64,xxxxxxxxx` or `tokenSecretKey=file:///my/secret.key`. Note: the key file must be DER-encoded.|| -|tokenPublicKey| Configure the public key to be used to validate auth tokens. The key can be specified like: `tokenPublicKey=data:;base64,xxxxxxxxx` or `tokenPublicKey=file:///my/secret.key`. Note: the key file must be DER-encoded.|| -|tokenAuthClaim| Specify the token claim that will be used as the authentication "principal" or "role". The "subject" field will be used if this is left blank || -|tokenAudienceClaim| The token audience "claim" name, e.g. "aud". It is used to get the audience from the token. If it is not set, the audience is not verified. || -| tokenAudience | The token audience stands for this broker. &#13;
The `tokenAudienceClaim` field of a valid token needs to contain this parameter.| | -|saslJaasClientAllowedIds|This is a regexp that limits the range of possible IDs that can connect to the broker using SASL. By default, it is set to `SaslConstants.JAAS_CLIENT_ALLOWED_IDS_DEFAULT`, which is ".*pulsar.*", so only clients whose ID contains 'pulsar' are allowed to connect.|N/A| -|saslJaasBrokerSectionName|Service Principal, for login context name. By default, it is set to `SaslConstants.JAAS_DEFAULT_BROKER_SECTION_NAME`, which is "Broker".|N/A| -|httpMaxRequestSize|If the value is larger than 0, the broker rejects all HTTP requests with bodies larger than the configured limit.|-1| -|exposePreciseBacklogInPrometheus| Enable exposing precise backlog stats. Set to false to calculate the backlog from the published counter and consumed counter instead, which is more efficient but may be inaccurate. |false| -|bookkeeperMetadataServiceUri|The metadata service URI that BookKeeper uses for loading the corresponding metadata driver and resolving its metadata service location. This value can be fetched using the `bookkeeper shell whatisinstanceid` command in a BookKeeper cluster. For example: `zk+hierarchical://localhost:2181/ledgers`. The metadata service URI list can also be semicolon-separated values like: `zk+hierarchical://zk1:2181;zk2:2181;zk3:2181/ledgers`.|N/A| -|bookkeeperClientAuthenticationPlugin| Authentication plugin to be used when connecting to bookies (BookKeeper servers). || -|bookkeeperClientAuthenticationParametersName| BookKeeper authentication plugin implementation parameters and values. || -|bookkeeperClientAuthenticationParameters| Parameters associated with the bookkeeperClientAuthenticationParametersName || -|bookkeeperClientNumWorkerThreads| Number of BookKeeper client worker threads. Default is Runtime.getRuntime().availableProcessors() || -|bookkeeperClientTimeoutInSeconds| Timeout for BookKeeper add and read operations. |30| -|bookkeeperClientSpeculativeReadTimeoutInMillis| Speculative reads are initiated if a read request doesn't complete within a certain time. A value of 0 disables speculative reads. |0| -|bookkeeperUseV2WireProtocol|Use the older BookKeeper wire protocol with bookies.|true| -|bookkeeperClientHealthCheckEnabled| Enable bookie health checks. |true| -|bookkeeperClientHealthCheckIntervalSeconds| The time interval, in seconds, at which health checks are performed. New ledgers are not created during health checks. |60| -|bookkeeperClientHealthCheckErrorThresholdPerInterval| Error threshold for health checks. |5| -|bookkeeperClientHealthCheckQuarantineTimeInSeconds| The quarantine time for bookies that have more than the allowed number of failures within the time interval specified by bookkeeperClientHealthCheckIntervalSeconds |1800| -|bookkeeperClientGetBookieInfoIntervalSeconds|Specify options for the GetBookieInfo check. This setting helps ensure that the list of bookies on the brokers is up to date.|86400| -|bookkeeperClientGetBookieInfoRetryIntervalSeconds|Specify options for the GetBookieInfo check. &#13;
This setting helps ensure that the list of bookies on the brokers is up to date.|60| -|bookkeeperClientRackawarePolicyEnabled| |true| -|bookkeeperClientRegionawarePolicyEnabled| |false| -|bookkeeperClientMinNumRacksPerWriteQuorum| |2| -|bookkeeperClientEnforceMinNumRacksPerWriteQuorum| |false| -|bookkeeperClientReorderReadSequenceEnabled| |false| -|bookkeeperClientIsolationGroups||| -|bookkeeperClientSecondaryIsolationGroups| Enable bookie secondary-isolation group if bookkeeperClientIsolationGroups doesn't have enough bookies available. || -|bookkeeperClientMinAvailableBookiesInIsolationGroups| Minimum number of bookies that should be available as part of bookkeeperClientIsolationGroups; otherwise, the broker includes bookkeeperClientSecondaryIsolationGroups bookies in the isolated list. || -| bookkeeperTLSProviderFactoryClass | Set the client security provider factory class name. | org.apache.bookkeeper.tls.TLSContextFactory | -| bookkeeperTLSClientAuthentication | Enable TLS authentication with bookies. | false | -| bookkeeperTLSKeyFileType | Supported types: PEM, JKS, PKCS12. | PEM | -| bookkeeperTLSTrustCertTypes | Supported types: PEM, JKS, PKCS12. | PEM | -| bookkeeperTLSKeyStorePasswordPath | Path to the file containing the keystore password, if the client keystore is password protected. | | -| bookkeeperTLSTrustStorePasswordPath | Path to the file containing the truststore password, if the client truststore is password protected. | | -| bookkeeperTLSKeyFilePath | Path for the TLS private key file. | | -| bookkeeperTLSCertificateFilePath | Path for the TLS certificate file. | | -| bookkeeperTLSTrustCertsFilePath | Path for the trusted TLS certificate file. | | -| bookkeeperDiskWeightBasedPlacementEnabled | Enable/Disable disk-weight-based placement. | false | -| bookkeeperExplicitLacIntervalInMills | Set the interval to check the need for sending an explicit LAC. When the value is set to 0, no explicit LAC is sent. | 0 | -| bookkeeperClientExposeStatsToPrometheus | Expose BookKeeper client managed ledger stats to Prometheus. | false | -|managedLedgerDefaultEnsembleSize| |1| -|managedLedgerDefaultWriteQuorum| |1| -|managedLedgerDefaultAckQuorum| |1| -| managedLedgerDigestType | Default type of checksum to use when writing to BookKeeper. | CRC32C | -| managedLedgerNumSchedulerThreads | Number of threads to be used for managed ledger scheduled tasks. | 8 | -|managedLedgerCacheSizeMB| |N/A| -|managedLedgerCacheCopyEntries| Whether to copy the entry payloads when inserting in cache.| false| -|managedLedgerCacheEvictionWatermark| |0.9| -|managedLedgerCacheEvictionFrequency| Configure the cache eviction frequency for the managed ledger cache (evictions/sec) | 100.0 | -|managedLedgerCacheEvictionTimeThresholdMillis| All entries that have stayed in the cache for more than the configured time will be evicted | 1000 | -|managedLedgerCursorBackloggedThreshold| Configure the threshold (in number of entries) from which a cursor should be considered 'backlogged' and thus set as inactive. | 1000| -|managedLedgerUnackedRangesOpenCacheSetEnabled| Use Open Range-Set to cache unacknowledged messages |true| -|managedLedgerDefaultMarkDeleteRateLimit| |0.1| -|managedLedgerMaxEntriesPerLedger| |50000| -|managedLedgerMinLedgerRolloverTimeMinutes| |10| -|managedLedgerMaxLedgerRolloverTimeMinutes| |240| -|managedLedgerCursorMaxEntriesPerLedger| |50000| -|managedLedgerCursorRolloverTimeInSeconds| |14400| -| managedLedgerMaxSizePerLedgerMbytes | Maximum ledger size before triggering a rollover for a topic. &#13;
| 2048 | -| managedLedgerMaxUnackedRangesToPersist | Maximum number of "acknowledgment holes" that are going to be persistently stored. When acknowledging out of order, a consumer leaves holes that are supposed to be quickly filled by acknowledging all the messages. The information of which messages are acknowledged is persisted by compressing it into "ranges" of acknowledged messages. After the max number of ranges is reached, the information is only tracked in memory and messages are redelivered in case of crashes. | 10000 | -| managedLedgerMaxUnackedRangesToPersistInZooKeeper | Maximum number of "acknowledgment holes" that can be stored in ZooKeeper. If the number of unacknowledged message ranges is higher than this limit, the broker persists unacknowledged ranges into BookKeeper to avoid additional data overhead in ZooKeeper. | 1000 | -|autoSkipNonRecoverableData| |false| -| managedLedgerMetadataOperationsTimeoutSeconds | Operation timeout while updating managed-ledger metadata. | 60 | -| managedLedgerReadEntryTimeoutSeconds | Read entry timeout when the broker tries to read messages from BookKeeper. | 0 | -| managedLedgerAddEntryTimeoutSeconds | Add entry timeout when the broker tries to publish a message to BookKeeper. | 0 | -| managedLedgerNewEntriesCheckDelayInMillis | New entries check delay for the cursor under the managed ledger. If there are no new messages in the topic, the cursor tries to check again after the delay time. For consumption-latency-sensitive scenarios, you can set the value to a smaller value or 0; of course, a smaller value may degrade consumption throughput. | 10 | -| managedLedgerPrometheusStatsLatencyRolloverSeconds | Managed ledger Prometheus stats latency rollover seconds. | 60 | -| managedLedgerTraceTaskExecution | Whether to trace managed ledger task execution time. | true | -|loadBalancerEnabled| |false| -|loadBalancerPlacementStrategy| |weightedRandomSelection| -|loadBalancerReportUpdateThresholdPercentage| |10| -|loadBalancerReportUpdateMaxIntervalMinutes| |15| -|loadBalancerHostUsageCheckIntervalMinutes| |1| -|loadBalancerSheddingIntervalMinutes| |30| -|loadBalancerSheddingGracePeriodMinutes| |30| -|loadBalancerBrokerMaxTopics| |50000| -|loadBalancerBrokerUnderloadedThresholdPercentage| |1| -|loadBalancerBrokerOverloadedThresholdPercentage| |85| -|loadBalancerResourceQuotaUpdateIntervalMinutes| |15| -|loadBalancerBrokerComfortLoadLevelPercentage| |65| -|loadBalancerAutoBundleSplitEnabled| |false| -| loadBalancerAutoUnloadSplitBundlesEnabled | Enable/Disable automatic unloading of split bundles. | true | -|loadBalancerNamespaceBundleMaxTopics| |1000| -|loadBalancerNamespaceBundleMaxSessions| |1000| -|loadBalancerNamespaceBundleMaxMsgRate| |1000| -|loadBalancerNamespaceBundleMaxBandwidthMbytes| |100| -|loadBalancerNamespaceMaximumBundles| |128| -| loadBalancerBrokerThresholdShedderPercentage | The broker resource usage threshold. When the broker resource usage is greater than the Pulsar cluster average resource usage, the threshold shedder is triggered to offload bundles from the broker. It only takes effect in the ThresholdShedder strategy. &#13;
| 10 | -| loadBalancerHistoryResourcePercentage | The percentage of historical usage used when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 0.9 | -| loadBalancerBandwithInResourceWeight | The inbound bandwidth usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 | -| loadBalancerBandwithOutResourceWeight | The outbound bandwidth usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 | -| loadBalancerCPUResourceWeight | The CPU usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 | -| loadBalancerMemoryResourceWeight | The heap memory usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 | -| loadBalancerDirectMemoryResourceWeight | The direct memory usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 | -| loadBalancerBundleUnloadMinThroughputThreshold | Bundle unload minimum throughput threshold, which avoids unloading bundles too frequently. It only takes effect in the ThresholdShedder strategy. | 10 | -|replicationMetricsEnabled| |true| -|replicationConnectionsPerBroker| |16| -|replicationProducerQueueSize| |1000| -| replicationPolicyCheckDurationSeconds | Duration to check the replication policy, to avoid replicator inconsistency due to a missing ZooKeeper watch. When the value is set to 0, checking the replication policy is disabled. | 600 | -|defaultRetentionTimeInMinutes| |0| -|defaultRetentionSizeInMB| |0| -|keepAliveIntervalSeconds| |30| -|haProxyProtocolEnabled | Enable or disable the [HAProxy](http://www.haproxy.org/) protocol. |false| -|bookieId | If you want to customize a bookie ID or use a dynamic network address for a bookie, you can set the `bookieId`.&#13;

    The bookie advertises itself using the `bookieId` rather than the `BookieSocketAddress` (`hostname:port` or `IP:port`).&#13;

    The `bookieId` is a non-empty string that can contain ASCII digits and letters ([a-zA-Z0-9]), colons, dashes, and dots.&#13;

    For more information about `bookieId`, see [here](http://bookkeeper.apache.org/bps/BP-41-bookieid/).|/| -| maxTopicsPerNamespace | The maximum number of persistent topics that can be created in the namespace. When the number of topics reaches this threshold, the broker rejects requests to create new topics, including topics auto-created by producers or consumers, until the number of topics decreases. The default value 0 disables the check. | 0 | - -## WebSocket - -|Name|Description|Default| -|---|---|---| -|configurationStoreServers ||| -|zooKeeperSessionTimeoutMillis| |30000| -|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300| -|serviceUrl||| -|serviceUrlTls||| -|brokerServiceUrl||| -|brokerServiceUrlTls||| -|webServicePort||8080| -|webServicePortTls||8443| -|bindAddress||0.0.0.0| -|clusterName ||| -|authenticationEnabled||false| -|authenticationProviders||| -|authorizationEnabled||false| -|superUserRoles ||| -|brokerClientAuthenticationPlugin||| -|brokerClientAuthenticationParameters||| -|tlsEnabled||false| -|tlsAllowInsecureConnection||false| -|tlsCertificateFilePath||| -|tlsKeyFilePath ||| -|tlsTrustCertsFilePath||| - -## Pulsar proxy - -The [Pulsar proxy](concepts-architecture-overview.md#pulsar-proxy) can be configured in the `conf/proxy.conf` file. - - -|Name|Description|Default| -|---|---|---| -|forwardAuthorizationCredentials| Forward client authorization credentials to the broker for re-authorization; make sure authentication is enabled for this to take effect. |false| -|zookeeperServers| The ZooKeeper quorum connection string (as a comma-separated list) || -|configurationStoreServers| Configuration store connection string (as a comma-separated list) || -| brokerServiceURL | The service URL pointing to the broker cluster. Must begin with `pulsar://`. | | -| brokerServiceURLTLS | The TLS service URL pointing to the broker cluster. Must begin with `pulsar+ssl://`. | | -| brokerWebServiceURL | The Web service URL pointing to the broker cluster | | -| brokerWebServiceURLTLS | The TLS Web service URL pointing to the broker cluster | | -| functionWorkerWebServiceURL | The Web service URL pointing to the function worker cluster. It is only configured when you set up function workers in a separate cluster. | | -| functionWorkerWebServiceURLTLS | The TLS Web service URL pointing to the function worker cluster. It is only configured when you set up function workers in a separate cluster. | | -|zookeeperSessionTimeoutMs| ZooKeeper session timeout (in milliseconds) |30000| -|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300| -|advertisedAddress|Hostname or IP address the service advertises to the outside world. If not set, the value of `InetAddress.getLocalHost().getHostName()` is used.|N/A| -|servicePort| The port used to serve binary protobuf requests |6650| -|servicePortTls| The port used to serve binary protobuf TLS requests |6651| -|statusFilePath| Path for the file used to determine the rotation status for the proxy instance when responding to service discovery health checks || -| proxyLogLevel | Proxy log level&#13;
    • 0: Do not log any TCP channel information.&#13;
    • 1: Parse and log any TCP channel information and command information without message body.&#13;
    • 2: Parse and log channel information, command information and message body.&#13;
| 0 | -|authenticationEnabled| Whether authentication is enabled for the Pulsar proxy |false| -|authenticateMetricsEndpoint| Whether the '/metrics' endpoint requires authentication. Defaults to true. 'authenticationEnabled' must also be set for this to take effect. |true| -|authenticationProviders| Authentication provider name list (a comma-separated list of class names) || -|authorizationEnabled| Whether authorization is enforced by the Pulsar proxy |false| -|authorizationProvider| Authorization provider as a fully qualified class name |org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider| -| anonymousUserRole | When this parameter is not empty, unauthenticated users perform as anonymousUserRole. | | -|brokerClientAuthenticationPlugin| The authentication plugin used by the Pulsar proxy to authenticate with Pulsar brokers || -|brokerClientAuthenticationParameters| The authentication parameters used by the Pulsar proxy to authenticate with Pulsar brokers || -|brokerClientTrustCertsFilePath| The path to trusted certificates used by the Pulsar proxy to authenticate with Pulsar brokers || -|superUserRoles| Role names that are treated as "super-users," meaning that they will be able to perform all admin operations || -|maxConcurrentInboundConnections| Max concurrent inbound connections. The proxy rejects requests beyond that. |10000| -|maxConcurrentLookupRequests| Max concurrent outbound connections. The proxy errors out requests beyond that. |50000| -|tlsEnabledInProxy| Deprecated - use `servicePortTls` and `webServicePortTls` instead. |false| -|tlsEnabledWithBroker| Whether TLS is enabled when communicating with Pulsar brokers. |false| -| tlsCertRefreshCheckDurationSec | TLS certificate refresh duration in seconds. If the value is set to 0, the TLS certificate is checked on every new connection. | 300 | -|tlsCertificateFilePath| Path for the TLS certificate file || -|tlsKeyFilePath| Path for the TLS private key file || -|tlsTrustCertsFilePath| Path for the trusted TLS certificate pem file || -|tlsHostnameVerificationEnabled| Whether the hostname is validated when the proxy creates a TLS connection with brokers |false| -|tlsRequireTrustedClientCertOnConnect| Whether client certificates are required for TLS. Connections are rejected if the client certificate isn't trusted. |false| -|tlsProtocols|Specify the TLS protocols the proxy will use to negotiate during the TLS handshake. Multiple values can be specified, separated by commas. Example: ```TLSv1.3```, ```TLSv1.2``` || -|tlsCiphers|Specify the TLS cipher the proxy will use to negotiate during the TLS handshake. Multiple values can be specified, separated by commas. Example: ```TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256```|| -| httpReverseProxyConfigs | HTTP redirects for directing requests to non-Pulsar services | | -| httpOutputBufferSize | HTTP output buffer size. The amount of data that will be buffered for HTTP requests before it is flushed to the channel. A larger buffer size may result in higher HTTP throughput though it may take longer for the client to see data. If using HTTP streaming via the reverse proxy, this should be set to the minimum value (1) so that clients see the data as soon as possible. | 32768 | -| httpNumThreads | Number of threads to use for HTTP requests processing| 2 * Runtime.getRuntime().availableProcessors() | -|tokenSettingPrefix| Configure the prefix of token-related settings like `tokenSecretKey`, `tokenPublicKey`, `tokenAuthClaim`, `tokenPublicAlg`, `tokenAudienceClaim`, and `tokenAudience`. &#13;
|| -|tokenSecretKey| Configure the secret key to be used to validate auth tokens. The key can be specified like: `tokenSecretKey=data:;base64,xxxxxxxxx` or `tokenSecretKey=file:///my/secret.key`. Note: the key file must be DER-encoded.|| -|tokenPublicKey| Configure the public key to be used to validate auth tokens. The key can be specified like: `tokenPublicKey=data:;base64,xxxxxxxxx` or `tokenPublicKey=file:///my/secret.key`. Note: the key file must be DER-encoded.|| -|tokenAuthClaim| Specify the token claim that will be used as the authentication "principal" or "role". The "subject" field will be used if this is left blank || -|tokenAudienceClaim| The token audience "claim" name, e.g. "aud". It is used to get the audience from the token. If it is not set, the audience is not verified. || -| tokenAudience | The token audience stands for this broker. The `tokenAudienceClaim` field of a valid token needs to contain this parameter.| | -|haProxyProtocolEnabled | Enable or disable the [HAProxy](http://www.haproxy.org/) protocol. |false| - -## ZooKeeper - -ZooKeeper handles a broad range of essential configuration- and coordination-related tasks for Pulsar. The default configuration file for ZooKeeper is in the `conf/zookeeper.conf` file in your Pulsar installation. The following parameters are available: - - -|Name|Description|Default| -|---|---|---| -|tickTime| The tick is the basic unit of time in ZooKeeper, measured in milliseconds and used to regulate things like heartbeats and timeouts. tickTime is the length of a single tick. |2000| -|initLimit| The maximum time, in ticks, that the leader ZooKeeper server allows follower ZooKeeper servers to successfully connect and sync. The tick time is set in milliseconds using the tickTime parameter. |10| -|syncLimit| The maximum time, in ticks, that a follower ZooKeeper server is allowed to sync with other ZooKeeper servers. The tick time is set in milliseconds using the tickTime parameter. |5| -|dataDir| The location where ZooKeeper will store in-memory database snapshots as well as the transaction log of updates to the database. |data/zookeeper| -|clientPort| The port on which the ZooKeeper server will listen for connections. |2181| -|admin.enableServer|Whether the admin server is enabled.|true| -|admin.serverPort|The port at which the admin server listens.|9990| -|autopurge.snapRetainCount| In ZooKeeper, auto purge determines how many recent snapshots of the database stored in dataDir to retain within the time interval specified by autopurge.purgeInterval (while deleting the rest). |3| -|autopurge.purgeInterval| The time interval, in hours, by which the ZooKeeper database purge task is triggered. Setting to a non-zero number will enable auto purge; setting to 0 will disable. See the ZooKeeper Administrator's Guide before enabling auto purge. |1| -|forceSync|Requires updates to be synced to the media of the transaction log before finishing processing the update. If this option is set to 'no', ZooKeeper will not require updates to be synced to the media. WARNING: it's not recommended to run a production ZK cluster with `forceSync` disabled.|yes| -|maxClientCnxns| The maximum number of client connections. Increase this if you need to handle more ZooKeeper clients. |60| - - - - -In addition to the parameters in the table above, configuring ZooKeeper for Pulsar involves adding -a `server.N` line to the `conf/zookeeper.conf` file for each node in the ZooKeeper cluster, where `N` is the number of the ZooKeeper node. &#13;
Here's an example for a three-node ZooKeeper cluster: - -```properties - -server.1=zk1.us-west.example.com:2888:3888 -server.2=zk2.us-west.example.com:2888:3888 -server.3=zk3.us-west.example.com:2888:3888 - -``` - -> We strongly recommend consulting the [ZooKeeper Administrator's Guide](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html) for a more thorough and comprehensive introduction to ZooKeeper configuration diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/reference-connector-admin.md b/site2/website/versioned_docs/version-2.8.3-deprecated/reference-connector-admin.md deleted file mode 100644 index 7b73ae80750cd4..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/reference-connector-admin.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -id: reference-connector-admin -title: Connector Admin CLI -sidebar_label: "Connector Admin CLI" -original_id: reference-connector-admin ---- - -> **Important** -> -> For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/). -> \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/reference-metrics.md b/site2/website/versioned_docs/version-2.8.3-deprecated/reference-metrics.md deleted file mode 100644 index 61c734f96a23d6..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/reference-metrics.md +++ /dev/null @@ -1,564 +0,0 @@ ---- -id: reference-metrics -title: Pulsar Metrics -sidebar_label: "Pulsar Metrics" -original_id: reference-metrics ---- - - - -Pulsar exposes the following metrics in Prometheus format. You can monitor your clusters with those metrics. - -* [ZooKeeper](#zookeeper) -* [BookKeeper](#bookkeeper) -* [Broker](#broker) -* [Pulsar Functions](#pulsar-functions) -* [Proxy](#proxy) -* [Pulsar SQL Worker](#pulsar-sql-worker) -* [Pulsar transaction](#pulsar-transaction) - -The following types of metrics are available: - -- [Counter](https://prometheus.io/docs/concepts/metric_types/#counter): a cumulative metric that represents a single monotonically increasing counter; the value can only increase, and is reset to zero when the process restarts. -- [Gauge](https://prometheus.io/docs/concepts/metric_types/#gauge): a metric that represents a single numerical value that can arbitrarily go up and down. -- [Histogram](https://prometheus.io/docs/concepts/metric_types/#histogram): a histogram samples observations (usually things like request durations or response sizes) and counts them in configurable buckets. The `_bucket` suffix is the number of observations within a histogram bucket, configured with parameter `{le=""}`. The `_count` suffix is the number of observations, shown as a time series and behaves like a counter. The `_sum` suffix is the sum of observed values, also shown as a time series and behaves like a counter. These suffixes are together denoted by `_*` in this doc. -- [Summary](https://prometheus.io/docs/concepts/metric_types/#summary): similar to a histogram, a summary samples observations (usually things like request durations and response sizes). While it also provides a total count of observations and a sum of all observed values, it calculates configurable quantiles over a sliding time window. - -## ZooKeeper - -The ZooKeeper metrics are exposed under "/metrics" at port `8000`. You can use a different port by configuring `metricsProvider.httpPort` in conf/zookeeper.conf. &#13;
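As a minimal sketch, enabling ZooKeeper's built-in Prometheus provider and moving it to a custom port in `conf/zookeeper.conf` could look like the following; the class name is ZooKeeper's standard Prometheus metrics provider, and the port shown here is an arbitrary example value:

```properties

# Enable the Prometheus metrics provider shipped with ZooKeeper 3.6.0+
metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
# Serve metrics on a custom port instead of the default 8000 (example value)
metricsProvider.httpPort=8001

```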
 - -ZooKeeper has provided a new metrics system since version 3.6.0. For more detailed metrics, refer to the [ZooKeeper Monitor Guide](https://zookeeper.apache.org/doc/r3.7.0/zookeeperMonitor.html). - -## BookKeeper - -The BookKeeper metrics are exposed under "/metrics" at port `8000`. You can change the port by updating `prometheusStatsHttpPort` -in the `bookkeeper.conf` configuration file. - -### Server metrics - -| Name | Type | Description | -|---|---|---| -| bookie_SERVER_STATUS | Gauge | The server status of the bookie server.&#13;
    • 1: the bookie is running in writable mode.
    • 0: the bookie is running in readonly mode.
    | -| bookkeeper_server_ADD_ENTRY_count | Counter | The total number of ADD_ENTRY requests received at the bookie. The `success` label is used to distinguish successes and failures. | -| bookkeeper_server_READ_ENTRY_count | Counter | The total number of READ_ENTRY requests received at the bookie. The `success` label is used to distinguish successes and failures. | -| bookie_WRITE_BYTES | Counter | The total number of bytes written to the bookie. | -| bookie_READ_BYTES | Counter | The total number of bytes read from the bookie. | -| bookkeeper_server_ADD_ENTRY_REQUEST | Summary | The summary of request latency of ADD_ENTRY requests at the bookie. The `success` label is used to distinguish successes and failures. | -| bookkeeper_server_READ_ENTRY_REQUEST | Summary | The summary of request latency of READ_ENTRY requests at the bookie. The `success` label is used to distinguish successes and failures. | - -### Journal metrics - -| Name | Type | Description | -|---|---|---| -| bookie_journal_JOURNAL_SYNC_count | Counter | The total number of journal fsync operations happening at the bookie. The `success` label is used to distinguish successes and failures. | -| bookie_journal_JOURNAL_QUEUE_SIZE | Gauge | The total number of requests pending in the journal queue. | -| bookie_journal_JOURNAL_FORCE_WRITE_QUEUE_SIZE | Gauge | The total number of force write (fsync) requests pending in the force-write queue. | -| bookie_journal_JOURNAL_CB_QUEUE_SIZE | Gauge | The total number of callbacks pending in the callback queue. | -| bookie_journal_JOURNAL_ADD_ENTRY | Summary | The summary of request latency of adding entries to the journal. | -| bookie_journal_JOURNAL_SYNC | Summary | The summary of fsync latency of syncing data to the journal disk. | - -### Storage metrics - -| Name | Type | Description | -|---|---|---| -| bookie_ledgers_count | Gauge | The total number of ledgers stored in the bookie. | -| bookie_entries_count | Gauge | The total number of entries stored in the bookie. | -| bookie_write_cache_size | Gauge | The bookie write cache size (in bytes). | -| bookie_read_cache_size | Gauge | The bookie read cache size (in bytes). | -| bookie_DELETED_LEDGER_COUNT | Counter | The total number of ledgers deleted since the bookie started. | -| bookie_ledger_writable_dirs | Gauge | The number of writable directories in the bookie. | - -## Broker - -The broker metrics are exposed under "/metrics" at port `8080`. You can change the port by updating `webServicePort` to a different port -in the `broker.conf` configuration file. - -All the metrics exposed by a broker are labelled with `cluster=${pulsar_cluster}`. The name of the Pulsar cluster is the value of `${pulsar_cluster}`, which you have configured in the `broker.conf` file. &#13;
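To verify what a broker exposes, you can scrape the endpoint directly. A minimal check, assuming a broker running locally with the default `webServicePort` of 8080:

```shell

# Fetch the broker's Prometheus metrics and keep only lines carrying the cluster label
curl -s http://localhost:8080/metrics | grep 'cluster='

```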
 - -The following metrics are available for the broker: - -- [ZooKeeper](#zookeeper) - - [Server metrics](#server-metrics) - - [Request metrics](#request-metrics) -- [BookKeeper](#bookkeeper) - - [Server metrics](#server-metrics-1) - - [Journal metrics](#journal-metrics) - - [Storage metrics](#storage-metrics) -- [Broker](#broker) - - [Namespace metrics](#namespace-metrics) - - [Replication metrics](#replication-metrics) - - [Topic metrics](#topic-metrics) - - [Replication metrics](#replication-metrics-1) - - [ManagedLedgerCache metrics](#managedledgercache-metrics) - - [ManagedLedger metrics](#managedledger-metrics) - - [LoadBalancing metrics](#loadbalancing-metrics) - - [BundleUnloading metrics](#bundleunloading-metrics) - - [BundleSplit metrics](#bundlesplit-metrics) - - [Subscription metrics](#subscription-metrics) - - [Consumer metrics](#consumer-metrics) - - [Managed ledger bookie client metrics](#managed-ledger-bookie-client-metrics) - - [Token metrics](#token-metrics) - - [Authentication metrics](#authentication-metrics) - - [Connection metrics](#connection-metrics) - - [Jetty metrics](#jetty-metrics) -- [Pulsar Functions](#pulsar-functions) -- [Proxy](#proxy) -- [Pulsar SQL Worker](#pulsar-sql-worker) -- [Pulsar transaction](#pulsar-transaction) - -### Namespace metrics - -> Namespace metrics are only exposed when `exposeTopicLevelMetricsInPrometheus` is set to `false`. - -All the namespace metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you configured in `broker.conf`. -- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name. - -| Name | Type | Description | -|---|---|---| -| pulsar_topics_count | Gauge | The number of Pulsar topics of the namespace owned by this broker. | -| pulsar_subscriptions_count | Gauge | The number of Pulsar subscriptions of the namespace served by this broker. | -| pulsar_producers_count | Gauge | The number of active producers of the namespace connected to this broker. | -| pulsar_consumers_count | Gauge | The number of active consumers of the namespace connected to this broker. | -| pulsar_rate_in | Gauge | The total message rate of the namespace coming into this broker (messages/second). | -| pulsar_rate_out | Gauge | The total message rate of the namespace going out from this broker (messages/second). | -| pulsar_throughput_in | Gauge | The total throughput of the namespace coming into this broker (bytes/second). | -| pulsar_throughput_out | Gauge | The total throughput of the namespace going out from this broker (bytes/second). | -| pulsar_storage_size | Gauge | The total storage size of the topics in this namespace owned by this broker (bytes). | -| pulsar_storage_backlog_size | Gauge | The total backlog size of the topics of this namespace owned by this broker (messages). | -| pulsar_storage_offloaded_size | Gauge | The total amount of the data in this namespace offloaded to the tiered storage (bytes). | -| pulsar_storage_write_rate | Gauge | The total message batches (entries) written to the storage for this namespace (message batches / second). | -| pulsar_storage_read_rate | Gauge | The total message batches (entries) read from the storage for this namespace (message batches / second). | -| pulsar_subscription_delayed | Gauge | The total number of message batches (entries) that are delayed for dispatching. &#13;
-| pulsar_storage_write_latency_le_* | Histogram | The entry rate of a namespace where the storage write latency is smaller than a given threshold.
    Available thresholds:
    • pulsar_storage_write_latency_le_0_5: <= 0.5ms
    • pulsar_storage_write_latency_le_1: <= 1ms
    • pulsar_storage_write_latency_le_5: <= 5ms
    • pulsar_storage_write_latency_le_10: <= 10ms
    • pulsar_storage_write_latency_le_20: <= 20ms
    • pulsar_storage_write_latency_le_50: <= 50ms
    • pulsar_storage_write_latency_le_100: <= 100ms
    • pulsar_storage_write_latency_le_200: <= 200ms
    • pulsar_storage_write_latency_le_1000: <= 1s
    • pulsar_storage_write_latency_le_overflow: > 1s
    |
-| pulsar_entry_size_le_* | Histogram | The entry rate of a namespace where the entry size is smaller than a given threshold.
    Available thresholds:
    • pulsar_entry_size_le_128: <= 128 bytes
    • pulsar_entry_size_le_512: <= 512 bytes
    • pulsar_entry_size_le_1_kb: <= 1 KB
    • pulsar_entry_size_le_2_kb: <= 2 KB
    • pulsar_entry_size_le_4_kb: <= 4 KB
    • pulsar_entry_size_le_16_kb: <= 16 KB
    • pulsar_entry_size_le_100_kb: <= 100 KB
    • pulsar_entry_size_le_1_mb: <= 1 MB
    • pulsar_entry_size_le_overflow: > 1 MB
    |
-
-#### Replication metrics
-
-If a namespace is configured to be replicated among multiple Pulsar clusters, the corresponding replication metrics are also exposed when `replicationMetricsEnabled` is enabled.
-
-All the replication metrics are also labelled with `remoteCluster=${pulsar_remote_cluster}`.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_replication_rate_in | Gauge | The total message rate of the namespace replicating from the remote cluster (messages/second). |
-| pulsar_replication_rate_out | Gauge | The total message rate of the namespace replicating to the remote cluster (messages/second). |
-| pulsar_replication_throughput_in | Gauge | The total throughput of the namespace replicating from the remote cluster (bytes/second). |
-| pulsar_replication_throughput_out | Gauge | The total throughput of the namespace replicating to the remote cluster (bytes/second). |
-| pulsar_replication_backlog | Gauge | The total backlog of the namespace replicating to the remote cluster (messages). |
-| pulsar_replication_rate_expired | Gauge | The total rate of expired messages (messages/second). |
-| pulsar_replication_connected_count | Gauge | The number of replication subscribers that are up and running to replicate to the remote cluster. |
-| pulsar_replication_delay_in_seconds | Gauge | The time in seconds from when a message was produced to when it is about to be replicated. |
-
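As a quick way to inspect the metrics above, you can filter a scrape by the namespace label. A minimal sketch, assuming a broker at `localhost:8080` and a namespace named `public/default` (both example values):

```bash
# Show per-namespace rate and throughput gauges for one namespace.
curl -s http://localhost:8080/metrics \
  | grep 'namespace="public/default"' \
  | grep -E 'pulsar_(rate|throughput)_(in|out)'
```
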
-### Topic metrics
-
-> Topic metrics are only exposed when `exposeTopicLevelMetricsInPrometheus` is set to `true`.
-
-All the topic metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you configured in `broker.conf`.
-- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.
-- *topic*: `topic=${pulsar_topic}`. `${pulsar_topic}` is the topic name.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_subscriptions_count | Gauge | The number of Pulsar subscriptions of the topic served by this broker. |
-| pulsar_producers_count | Gauge | The number of active producers of the topic connected to this broker. |
-| pulsar_consumers_count | Gauge | The number of active consumers of the topic connected to this broker. |
-| pulsar_rate_in | Gauge | The total message rate of the topic coming into this broker (messages/second). |
-| pulsar_rate_out | Gauge | The total message rate of the topic going out from this broker (messages/second). |
-| pulsar_throughput_in | Gauge | The total throughput of the topic coming into this broker (bytes/second). |
-| pulsar_throughput_out | Gauge | The total throughput of the topic going out from this broker (bytes/second). |
-| pulsar_storage_size | Gauge | The total storage size of this topic owned by this broker (bytes). |
-| pulsar_storage_backlog_size | Gauge | The total backlog size of this topic owned by this broker (messages). |
-| pulsar_storage_offloaded_size | Gauge | The total amount of the data in this topic offloaded to the tiered storage (bytes). |
-| pulsar_storage_backlog_quota_limit | Gauge | The backlog quota limit of this topic (bytes). |
-| pulsar_storage_write_rate | Gauge | The total message batches (entries) written to the storage for this topic (message batches / second). |
-| pulsar_storage_read_rate | Gauge | The total message batches (entries) read from the storage for this topic (message batches / second). |
-| pulsar_subscription_delayed | Gauge | The total number of message batches (entries) delayed for dispatching. |
-| pulsar_storage_write_latency_le_* | Histogram | The entry rate of a topic where the storage write latency is smaller than a given threshold.
    Available thresholds:
    • pulsar_storage_write_latency_le_0_5: <= 0.5ms
    • pulsar_storage_write_latency_le_1: <= 1ms
    • pulsar_storage_write_latency_le_5: <= 5ms
    • pulsar_storage_write_latency_le_10: <= 10ms
    • pulsar_storage_write_latency_le_20: <= 20ms
    • pulsar_storage_write_latency_le_50: <= 50ms
    • pulsar_storage_write_latency_le_100: <= 100ms
    • pulsar_storage_write_latency_le_200: <= 200ms
    • pulsar_storage_write_latency_le_1000: <= 1s
    • pulsar_storage_write_latency_le_overflow: > 1s
    |
-| pulsar_entry_size_le_* | Histogram | The entry rate of a topic where the entry size is smaller than a given threshold.
    Available thresholds:
    • pulsar_entry_size_le_128: <= 128 bytes
    • pulsar_entry_size_le_512: <= 512 bytes
    • pulsar_entry_size_le_1_kb: <= 1 KB
    • pulsar_entry_size_le_2_kb: <= 2 KB
    • pulsar_entry_size_le_4_kb: <= 4 KB
    • pulsar_entry_size_le_16_kb: <= 16 KB
    • pulsar_entry_size_le_100_kb: <= 100 KB
    • pulsar_entry_size_le_1_mb: <= 1 MB
    • pulsar_entry_size_le_overflow: > 1 MB
    |
-| pulsar_in_bytes_total | Counter | The total size in bytes of messages received for this topic. |
-| pulsar_in_messages_total | Counter | The total number of messages received for this topic. |
-| pulsar_out_bytes_total | Counter | The total size in bytes of messages read from this topic. |
-| pulsar_out_messages_total | Counter | The total number of messages read from this topic. |
-| pulsar_compaction_removed_event_count | Gauge | The total number of removed events of the compaction. |
-| pulsar_compaction_succeed_count | Gauge | The total number of successes of the compaction. |
-| pulsar_compaction_failed_count | Gauge | The total number of failures of the compaction. |
-| pulsar_compaction_duration_time_in_mills | Gauge | The duration of the compaction, in milliseconds. |
-| pulsar_compaction_read_throughput | Gauge | The read throughput of the compaction. |
-| pulsar_compaction_write_throughput | Gauge | The write throughput of the compaction. |
-| pulsar_compaction_latency_le_* | Histogram | The compaction latency with a given quantile.
    Available thresholds:
    • pulsar_compaction_latency_le_0_5: <= 0.5ms
    • pulsar_compaction_latency_le_1: <= 1ms
    • pulsar_compaction_latency_le_5: <= 5ms
    • pulsar_compaction_latency_le_10: <= 10ms
    • pulsar_compaction_latency_le_20: <= 20ms
    • pulsar_compaction_latency_le_50: <= 50ms
    • pulsar_compaction_latency_le_100: <= 100ms
    • pulsar_compaction_latency_le_200: <= 200ms
    • pulsar_compaction_latency_le_1000: <= 1s
    • pulsar_compaction_latency_le_overflow: > 1s
    |
-| pulsar_compaction_compacted_entries_count | Gauge | The total number of the compacted entries. |
-| pulsar_compaction_compacted_entries_size | Gauge | The total size of the compacted entries. |
-
-#### Replication metrics
-
-If the namespace that a topic belongs to is configured to be replicated among multiple Pulsar clusters, the corresponding replication metrics are also exposed when `replicationMetricsEnabled` is enabled.
-
-All the replication metrics are labelled with `remoteCluster=${pulsar_remote_cluster}`.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_replication_rate_in | Gauge | The total message rate of the topic replicating from the remote cluster (messages/second). |
-| pulsar_replication_rate_out | Gauge | The total message rate of the topic replicating to the remote cluster (messages/second). |
-| pulsar_replication_throughput_in | Gauge | The total throughput of the topic replicating from the remote cluster (bytes/second). |
-| pulsar_replication_throughput_out | Gauge | The total throughput of the topic replicating to the remote cluster (bytes/second). |
-| pulsar_replication_backlog | Gauge | The total backlog of the topic replicating to the remote cluster (messages). |
-
-### ManagedLedgerCache metrics
-All the ManagedLedgerCache metrics are labelled with the following labels:
-- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file.
-
-| Name | Type | Description |
-| --- | --- | --- |
-| pulsar_ml_cache_evictions | Gauge | The number of cache evictions during the last minute. |
-| pulsar_ml_cache_hits_rate | Gauge | The number of cache hits per second. |
-| pulsar_ml_cache_hits_throughput | Gauge | The amount of data retrieved from the cache (bytes/s). |
-| pulsar_ml_cache_misses_rate | Gauge | The number of cache misses per second. |
-| pulsar_ml_cache_misses_throughput | Gauge | The amount of data read on cache misses (bytes/s). |
-| pulsar_ml_cache_pool_active_allocations | Gauge | The number of currently active allocations in the direct arena. |
-| pulsar_ml_cache_pool_active_allocations_huge | Gauge | The number of currently active huge allocations in the direct arena. |
-| pulsar_ml_cache_pool_active_allocations_normal | Gauge | The number of currently active normal allocations in the direct arena. |
-| pulsar_ml_cache_pool_active_allocations_small | Gauge | The number of currently active small allocations in the direct arena. |
-| pulsar_ml_cache_pool_active_allocations_tiny | Gauge | The number of currently active tiny allocations in the direct arena. |
-| pulsar_ml_cache_pool_allocated | Gauge | The total allocated memory of chunk lists in the direct arena. |
-| pulsar_ml_cache_pool_used | Gauge | The total used memory of chunk lists in the direct arena. |
-| pulsar_ml_cache_used_size | Gauge | The size in bytes used to store the entry payloads. |
-| pulsar_ml_count | Gauge | The number of currently opened managed ledgers. |
-
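The hit and miss rates above can be combined into a rough cache hit ratio for the broker. A minimal sketch, assuming a broker at `localhost:8080`; exposition lines have the form `name{labels} value`, so the value is the second whitespace-separated field:

```bash
# Compute an approximate managed-ledger cache hit ratio from one scrape.
curl -s http://localhost:8080/metrics | awk '
  /^pulsar_ml_cache_hits_rate/   { hits = $2 }
  /^pulsar_ml_cache_misses_rate/ { miss = $2 }
  END { if (hits + miss > 0) printf "cache hit ratio: %.2f\n", hits / (hits + miss) }'
```
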
-### ManagedLedger metrics
-All the managedLedger metrics are labelled with the following labels:
-- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file.
-- namespace: namespace=${pulsar_namespace}. ${pulsar_namespace} is the namespace name.
-- quantile: quantile=${quantile}. The quantile label applies only to `Histogram` metrics and represents the bucket threshold.
-
-| Name | Type | Description |
-| --- | --- | --- |
-| pulsar_ml_AddEntryBytesRate | Gauge | The bytes/s rate of messages added |
-| pulsar_ml_AddEntryErrors | Gauge | The number of addEntry requests that failed |
-| pulsar_ml_AddEntryLatencyBuckets | Histogram | The latency of adding a ledger entry with a given quantile (threshold), including time spent waiting in the queue on the broker side
    Available quantiles:
    • quantile="0.0_0.5" is AddEntryLatency between (0.0ms, 0.5ms]
    • quantile="0.5_1.0" is AddEntryLatency between (0.5ms, 1.0ms]
    • quantile="1.0_5.0" is AddEntryLatency between (1ms, 5ms]
    • quantile="5.0_10.0" is AddEntryLatency between (5ms, 10ms]
    • quantile="10.0_20.0" is AddEntryLatency between (10ms, 20ms]
    • quantile="20.0_50.0" is AddEntryLatency between (20ms, 50ms]
    • quantile="50.0_100.0" is AddEntryLatency between (50ms, 100ms]
    • quantile="100.0_200.0" is AddEntryLatency between (100ms, 200ms]
    • quantile="200.0_1000.0" is AddEntryLatency between (200ms, 1s]
    | -| pulsar_ml_AddEntryLatencyBuckets_OVERFLOW | Gauge | The number of times the AddEntryLatency is longer than 1 second | -| pulsar_ml_AddEntryMessagesRate | Gauge | The msg/s rate of messages added | -| pulsar_ml_AddEntrySucceed | Gauge | The number of addEntry requests that succeeded | -| pulsar_ml_EntrySizeBuckets | Histogram | The added entry size of a ledger with a given quantile.
    Available quantiles:
    • quantile="0.0_128.0" is EntrySize between (0byte, 128byte]
    • quantile="128.0_512.0" is EntrySize between (128byte, 512byte]
    • quantile="512.0_1024.0" is EntrySize between (512byte, 1KB]
    • quantile="1024.0_2048.0" is EntrySize between (1KB, 2KB]
    • quantile="2048.0_4096.0" is EntrySize between (2KB, 4KB]
    • quantile="4096.0_16384.0" is EntrySize between (4KB, 16KB]
    • quantile="16384.0_102400.0" is EntrySize between (16KB, 100KB]
    • quantile="102400.0_1232896.0" is EntrySize between (100KB, 1MB]
    |
-| pulsar_ml_EntrySizeBuckets_OVERFLOW | Gauge | The number of times the EntrySize is larger than 1 MB |
-| pulsar_ml_LedgerSwitchLatencyBuckets | Histogram | The ledger switch latency with a given quantile.
    Available quantiles:
    • quantile="0.0_0.5" is LedgerSwitchLatency between (0ms, 0.5ms]
    • quantile="0.5_1.0" is LedgerSwitchLatency between (0.5ms, 1ms]
    • quantile="1.0_5.0" is LedgerSwitchLatency between (1ms, 5ms]
    • quantile="5.0_10.0" is LedgerSwitchLatency between (5ms, 10ms]
    • quantile="10.0_20.0" is LedgerSwitchLatency between (10ms, 20ms]
    • quantile="20.0_50.0" is LedgerSwitchLatency between (20ms, 50ms]
    • quantile="50.0_100.0" is LedgerSwitchLatency between (50ms, 100ms]
    • quantile="100.0_200.0" is LedgerSwitchLatency between (100ms, 200ms]
    • quantile="200.0_1000.0" is LedgerSwitchLatency between (200ms, 1000ms]
    |
-| pulsar_ml_LedgerSwitchLatencyBuckets_OVERFLOW | Gauge | The number of times the ledger switch latency is longer than 1 second |
-| pulsar_ml_LedgerAddEntryLatencyBuckets | Histogram | The latency for the bookie client to persist a ledger entry from the broker to the BookKeeper service, with a given quantile (threshold).
    Available quantiles:
    • quantile="0.0_0.5" is LedgerAddEntryLatency between (0.0ms, 0.5ms]
    • quantile="0.5_1.0" is LedgerAddEntryLatency between (0.5ms, 1.0ms]
    • quantile="1.0_5.0" is LedgerAddEntryLatency between (1ms, 5ms]
    • quantile="5.0_10.0" is LedgerAddEntryLatency between (5ms, 10ms]
    • quantile="10.0_20.0" is LedgerAddEntryLatency between (10ms, 20ms]
    • quantile="20.0_50.0" is LedgerAddEntryLatency between (20ms, 50ms]
    • quantile="50.0_100.0" is LedgerAddEntryLatency between (50ms, 100ms]
    • quantile="100.0_200.0" is LedgerAddEntryLatency between (100ms, 200ms]
    • quantile="200.0_1000.0" is LedgerAddEntryLatency between (200ms, 1s]
    |
-| pulsar_ml_LedgerAddEntryLatencyBuckets_OVERFLOW | Gauge | The number of times the LedgerAddEntryLatency is longer than 1 second |
-| pulsar_ml_MarkDeleteRate | Gauge | The rate of mark-delete ops/s |
-| pulsar_ml_NumberOfMessagesInBacklog | Gauge | The number of backlog messages for all the consumers |
-| pulsar_ml_ReadEntriesBytesRate | Gauge | The bytes/s rate of messages read |
-| pulsar_ml_ReadEntriesErrors | Gauge | The number of readEntries requests that failed |
-| pulsar_ml_ReadEntriesRate | Gauge | The msg/s rate of messages read |
-| pulsar_ml_ReadEntriesSucceeded | Gauge | The number of readEntries requests that succeeded |
-| pulsar_ml_StoredMessagesSize | Gauge | The total size of the messages in active ledgers (accounting for the multiple copies stored) |
-
-### Managed cursor acknowledgment state
-
-The acknowledgment state is first persisted to the ledger. When persisting to the ledger fails, the acknowledgment state is persisted to ZooKeeper. To track the stats of acknowledgment, you can configure the metrics for the managed cursor.
-
-All the cursor acknowledgment state metrics are labelled with the following labels:
-
-- namespace: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.
-
-- ledger_name: `ledger_name=${pulsar_ledger_name}`. `${pulsar_ledger_name}` is the ledger name.
-
-- cursor_name: `cursor_name=${pulsar_cursor_name}`. `${pulsar_cursor_name}` is the cursor name.
-
-Name |Type |Description
-|---|---|---
-brk_ml_cursor_persistLedgerSucceed(namespace="", ledger_name="", cursor_name:"")|Gauge|The number of acknowledgment states that are persisted to a ledger.|
-brk_ml_cursor_persistLedgerErrors(namespace="", ledger_name="", cursor_name:"")|Gauge|The number of errors that occurred when acknowledgment states failed to be persisted to the ledger.|
-brk_ml_cursor_persistZookeeperSucceed(namespace="", ledger_name="", cursor_name:"")|Gauge|The number of acknowledgment states that are persisted to ZooKeeper.
-brk_ml_cursor_persistZookeeperErrors(namespace="", ledger_name="", cursor_name:"")|Gauge|The number of errors that occurred when acknowledgment states failed to be persisted to ZooKeeper.
-brk_ml_cursor_nonContiguousDeletedMessagesRange(namespace="", ledger_name="", cursor_name:"")|Gauge|The number of non-contiguous deleted message ranges.
-
-### LoadBalancing metrics
-All the loadbalancing metrics are labelled with the following labels:
-- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file.
-- broker: broker=${broker}. ${broker} is the IP address of the broker.
-- metric: metric="loadBalancing".
-
-| Name | Type | Description |
-| --- | --- | --- |
-| pulsar_lb_bandwidth_in_usage | Gauge | The broker inbound bandwidth usage (in percent). |
-| pulsar_lb_bandwidth_out_usage | Gauge | The broker outbound bandwidth usage (in percent). |
-| pulsar_lb_cpu_usage | Gauge | The broker CPU usage (in percent). |
-| pulsar_lb_directMemory_usage | Gauge | The broker process direct memory usage (in percent). |
-| pulsar_lb_memory_usage | Gauge | The broker process memory usage (in percent). |
-
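These gauges make it easy to spot an overloaded broker at a glance. A minimal sketch, assuming a broker at `localhost:8080`:

```bash
# Show the resource-usage gauges that the load manager reports for this broker.
curl -s http://localhost:8080/metrics | grep -E '^pulsar_lb_(bandwidth_in|bandwidth_out|cpu|directMemory|memory)_usage'
```
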
-#### BundleUnloading metrics
-All the bundleUnloading metrics are labelled with the following labels:
-- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file.
-- metric: metric="bundleUnloading".
-
-| Name | Type | Description |
-| --- | --- | --- |
-| pulsar_lb_unload_broker_count | Counter | The number of brokers unloaded in this bundle unloading |
-| pulsar_lb_unload_bundle_count | Counter | The number of bundles unloaded in this bundle unloading |
-
-#### BundleSplit metrics
-All the bundleSplit metrics are labelled with the following labels:
-- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file.
-- metric: metric="bundlesSplit".
-
-| Name | Type | Description |
-| --- | --- | --- |
-| pulsar_lb_bundles_split_count | Counter | The total count of bundle splits on this leader broker |
-
-### Subscription metrics
-
-> Subscription metrics are only exposed when `exposeTopicLevelMetricsInPrometheus` is set to `true`.
-
-All the subscription metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.
-- *topic*: `topic=${pulsar_topic}`. `${pulsar_topic}` is the topic name.
-- *subscription*: `subscription=${subscription}`. `${subscription}` is the topic subscription name.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_subscription_back_log | Gauge | The total backlog of a subscription (messages). |
-| pulsar_subscription_delayed | Gauge | The total number of messages delayed to be dispatched for a subscription (messages). |
-| pulsar_subscription_msg_rate_redeliver | Gauge | The total message rate for messages being redelivered (messages/second). |
-| pulsar_subscription_unacked_messages | Gauge | The total number of unacknowledged messages of a subscription (messages). |
-| pulsar_subscription_blocked_on_unacked_messages | Gauge | Indicates whether a subscription is blocked on unacknowledged messages.
    • 1 means the subscription is blocked waiting for unacknowledged messages to be acked.
    • 0 means the subscription is not blocked waiting for unacknowledged messages to be acked.
    |
-| pulsar_subscription_msg_rate_out | Gauge | The total message dispatch rate for a subscription (messages/second). |
-| pulsar_subscription_msg_throughput_out | Gauge | The total message dispatch throughput for a subscription (bytes/second). |
-
-### Consumer metrics
-
-> Consumer metrics are only exposed when both `exposeTopicLevelMetricsInPrometheus` and `exposeConsumerLevelMetricsInPrometheus` are set to `true`.
-
-All the consumer metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.
-- *topic*: `topic=${pulsar_topic}`. `${pulsar_topic}` is the topic name.
-- *subscription*: `subscription=${subscription}`. `${subscription}` is the topic subscription name.
-- *consumer_name*: `consumer_name=${consumer_name}`. `${consumer_name}` is the topic consumer name.
-- *consumer_id*: `consumer_id=${consumer_id}`. `${consumer_id}` is the topic consumer ID.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_consumer_msg_rate_redeliver | Gauge | The total message rate for messages being redelivered (messages/second). |
-| pulsar_consumer_unacked_messages | Gauge | The total number of unacknowledged messages of a consumer (messages). |
-| pulsar_consumer_blocked_on_unacked_messages | Gauge | Indicates whether a consumer is blocked on unacknowledged messages.
    • 1 means the consumer is blocked waiting for unacknowledged messages to be acked.
    • 0 means the consumer is not blocked waiting for unacknowledged messages to be acked.
    |
-| pulsar_consumer_msg_rate_out | Gauge | The total message dispatch rate for a consumer (messages/second). |
-| pulsar_consumer_msg_throughput_out | Gauge | The total message dispatch throughput for a consumer (bytes/second). |
-| pulsar_consumer_available_permits | Gauge | The available permits for a consumer. |
-
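One practical use of the consumer gauges is spotting consumers that are stuck on unacknowledged messages. A minimal sketch, assuming topic-level and consumer-level metrics are enabled as described above and a broker at `localhost:8080`:

```bash
# List consumers whose blocked-on-unacked gauge is non-zero (i.e. blocked).
curl -s http://localhost:8080/metrics | awk '/^pulsar_consumer_blocked_on_unacked_messages/ && $2 != 0'
```
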
-### Managed ledger bookie client metrics
-
-All the managed ledger bookie client metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-
-| Name | Type | Description |
-| --- | --- | --- |
-| pulsar_managedLedger_client_bookkeeper_ml_scheduler_completed_tasks_* | Gauge | The number of tasks that the scheduler executor has completed.
    The number of these metrics is determined by the number of scheduler executor threads, which is configured by `managedLedgerNumSchedulerThreads` in `broker.conf`.
    | -| pulsar_managedLedger_client_bookkeeper_ml_scheduler_queue_* | Gauge | The number of tasks queued in the scheduler executor's queue.
    The number of these metrics is determined by the number of scheduler executor threads, which is configured by `managedLedgerNumSchedulerThreads` in `broker.conf`.
    | -| pulsar_managedLedger_client_bookkeeper_ml_scheduler_total_tasks_* | Gauge | The total number of tasks the scheduler executor received.
    The number of these metrics is determined by the number of scheduler executor threads, which is configured by `managedLedgerNumSchedulerThreads` in `broker.conf`.
    |
-| pulsar_managedLedger_client_bookkeeper_ml_scheduler_task_execution | Summary | The scheduler task execution latency calculated in milliseconds. |
-| pulsar_managedLedger_client_bookkeeper_ml_scheduler_task_queued | Summary | The scheduler task queued latency calculated in milliseconds. |
-
-### Token metrics
-
-All the token metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_expired_token_count | Counter | The number of expired tokens in Pulsar. |
-| pulsar_expiring_token_minutes | Histogram | The remaining time of expiring tokens in minutes. |
-
-### Authentication metrics
-
-All the authentication metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-- *provider_name*: `provider_name=${provider_name}`. `${provider_name}` is the class name of the authentication provider.
-- *auth_method*: `auth_method=${auth_method}`. `${auth_method}` is the authentication method of the authentication provider.
-- *reason*: `reason=${reason}`. `${reason}` is the reason for the failed authentication operation. (This label is only for `pulsar_authentication_failures_count`.)
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_authentication_success_count | Counter | The number of successful authentication operations. |
-| pulsar_authentication_failures_count | Counter | The number of failed authentication operations. |
-
-### Connection metrics
-
-All the connection metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-- *broker*: `broker=${advertised_address}`. `${advertised_address}` is the advertised address of the broker.
-- *metric*: `metric=${metric}`. `${metric}` is the connection metric collective name.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_active_connections | Gauge | The number of active connections. |
-| pulsar_connection_created_total_count | Gauge | The total number of connections. |
-| pulsar_connection_create_success_count | Gauge | The number of successfully created connections. |
-| pulsar_connection_create_fail_count | Gauge | The number of failed connections. |
-| pulsar_connection_closed_total_count | Gauge | The total number of closed connections. |
-| pulsar_broker_throttled_connections | Gauge | The number of throttled connections. |
-| pulsar_broker_throttled_connections_global_limit | Gauge | The number of connections throttled because of the global connection limit. |
-
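The token, authentication, and connection counters are natural inputs for security alerting. A minimal sketch that surfaces authentication failures (the `reason` label explains each failure class), assuming a broker at `localhost:8080`:

```bash
# Show authentication failure counters, broken down by provider, method, and reason.
curl -s http://localhost:8080/metrics | grep '^pulsar_authentication_failures_count'
```
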
-### Jetty metrics
-
-> For a functions-worker running separately from brokers, its Jetty metrics are only exposed when `includeStandardPrometheusMetrics` is set to `true`.
-
-All the jetty metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-
-| Name | Type | Description |
-|---|---|---|
-| jetty_requests_total | Counter | Number of requests. |
-| jetty_requests_active | Gauge | Number of requests currently active. |
-| jetty_requests_active_max | Gauge | Maximum number of requests that have been active at once. |
-| jetty_request_time_max_seconds | Gauge | Maximum time spent handling requests. |
-| jetty_request_time_seconds_total | Counter | Total time spent in all request handling. |
-| jetty_dispatched_total | Counter | Number of dispatches. |
-| jetty_dispatched_active | Gauge | Number of dispatches currently active. |
-| jetty_dispatched_active_max | Gauge | Maximum number of active dispatches being handled. |
-| jetty_dispatched_time_max | Gauge | Maximum time spent in dispatch handling. |
-| jetty_dispatched_time_seconds_total | Counter | Total time spent in dispatch handling. |
-| jetty_async_requests_total | Counter | Total number of async requests. |
-| jetty_async_requests_waiting | Gauge | Currently waiting async requests. |
-| jetty_async_requests_waiting_max | Gauge | Maximum number of waiting async requests. |
-| jetty_async_dispatches_total | Counter | Number of requests that have been asynchronously dispatched. |
-| jetty_expires_total | Counter | Number of async requests that have expired. |
-| jetty_responses_total | Counter | Number of responses, labeled by status code. The `code` label can be "1xx", "2xx", "3xx", "4xx", or "5xx". |
-| jetty_stats_seconds | Gauge | Time in seconds that stats have been collected for. |
-| jetty_responses_bytes_total | Counter | Total number of bytes across all responses. |
-
-## Pulsar Functions
-
-All the Pulsar Functions metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_function_processed_successfully_total | Counter | The total number of messages processed successfully. |
-| pulsar_function_processed_successfully_total_1min | Counter | The total number of messages processed successfully in the last 1 minute. |
-| pulsar_function_system_exceptions_total | Counter | The total number of system exceptions. |
-| pulsar_function_system_exceptions_total_1min | Counter | The total number of system exceptions in the last 1 minute. |
-| pulsar_function_user_exceptions_total | Counter | The total number of user exceptions. |
-| pulsar_function_user_exceptions_total_1min | Counter | The total number of user exceptions in the last 1 minute. |
-| pulsar_function_process_latency_ms | Summary | The process latency in milliseconds. |
-| pulsar_function_process_latency_ms_1min | Summary | The process latency in milliseconds in the last 1 minute. |
-| pulsar_function_last_invocation | Gauge | The timestamp of the last invocation of the function. |
-| pulsar_function_received_total | Counter | The total number of messages received from the source. |
-| pulsar_function_received_total_1min | Counter | The total number of messages received from the source in the last 1 minute. |
-| pulsar_function_user_metric_ | Summary | The user-defined metrics. |
-
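A quick health check for a deployed function is to compare its processed and exception counters. As a cross-check, you can also pull per-function stats through the `pulsar-admin functions stats` subcommand (covered in the CLI reference below); the tenant, namespace, and function name here are example values:

```bash
# Fetch current stats for one function via pulsar-admin.
pulsar-admin functions stats --tenant public --namespace default --name my-function
```
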
-## Connectors
-
-All the Pulsar connector metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.
-
-Connector metrics contain **source** metrics and **sink** metrics.
-
-- **Source** metrics
-
-  | Name | Type | Description |
-  |---|---|---|
-  pulsar_source_written_total|Counter|The total number of records written to a Pulsar topic.
-  pulsar_source_written_total_1min|Counter|The total number of records written to a Pulsar topic in the last 1 minute.
-  pulsar_source_received_total|Counter|The total number of records received from the source.
-  pulsar_source_received_total_1min|Counter|The total number of records received from the source in the last 1 minute.
-  pulsar_source_last_invocation|Gauge|The timestamp of the last invocation of the source.
-  pulsar_source_source_exception|Gauge|The exception from a source.
-  pulsar_source_source_exceptions_total|Counter|The total number of source exceptions.
-  pulsar_source_source_exceptions_total_1min|Counter|The total number of source exceptions in the last 1 minute.
-  pulsar_source_system_exception|Gauge|The exception from system code.
-  pulsar_source_system_exceptions_total|Counter|The total number of system exceptions.
-  pulsar_source_system_exceptions_total_1min|Counter|The total number of system exceptions in the last 1 minute.
-  pulsar_source_user_metric_|Summary|The user-defined metrics.
-
-- **Sink** metrics
-
-  | Name | Type | Description |
-  |---|---|---|
-  pulsar_sink_written_total|Counter|The total number of records processed by a sink.
-  pulsar_sink_written_total_1min|Counter|The total number of records processed by a sink in the last 1 minute.
-  pulsar_sink_received_total_1min|Counter|The total number of messages that a sink has received from Pulsar topics in the last 1 minute.
-  pulsar_sink_received_total|Counter|The total number of records that a sink has received from Pulsar topics.
-  pulsar_sink_last_invocation|Gauge|The timestamp of the last invocation of the sink.
-  pulsar_sink_sink_exception|Gauge|The exception from a sink.
-  pulsar_sink_sink_exceptions_total|Counter|The total number of sink exceptions.
-  pulsar_sink_sink_exceptions_total_1min|Counter|The total number of sink exceptions in the last 1 minute.
-  pulsar_sink_system_exception|Gauge|The exception from system code.
-  pulsar_sink_system_exceptions_total|Counter|The total number of system exceptions.
-  pulsar_sink_system_exceptions_total_1min|Counter|The total number of system exceptions in the last 1 minute.
-  pulsar_sink_user_metric_|Summary|The user-defined metrics.
-
-## Proxy
-
-All the proxy metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-- *kubernetes_pod_name*: `kubernetes_pod_name=${kubernetes_pod_name}`. `${kubernetes_pod_name}` is the Kubernetes pod name.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_proxy_active_connections | Gauge | Number of connections currently active in the proxy. |
-| pulsar_proxy_new_connections | Counter | Counter of connections being opened in the proxy. |
-| pulsar_proxy_rejected_connections | Counter | Counter for connections rejected due to throttling. |
-| pulsar_proxy_binary_ops | Counter | Counter of proxy operations. |
-| pulsar_proxy_binary_bytes | Counter | Counter of proxy bytes. |
-
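The proxy serves its own metrics endpoint, separate from the brokers'. A minimal sketch, assuming a proxy whose `webServicePort` is `8080` on `localhost` (adjust to your deployment):

```bash
# Scrape the proxy's metrics endpoint and keep only the proxy-specific series.
curl -s http://localhost:8080/metrics | grep '^pulsar_proxy_'
```
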
-## Pulsar SQL Worker
-
-| Name | Type | Description |
-|---|---|---|
-| split_bytes_read | Counter | Number of bytes read from BookKeeper. |
-| split_num_messages_deserialized | Counter | Number of messages deserialized. |
-| split_num_record_deserialized | Counter | Number of records deserialized. |
-| split_bytes_read_per_query | Summary | Total number of bytes read per query. |
-| split_entry_deserialize_time | Summary | Time spent on deserializing entries. |
-| split_entry_deserialize_time_per_query | Summary | Time spent on deserializing entries per query. |
-| split_entry_queue_dequeue_wait_time | Summary | Time spent waiting to get an entry from the entry queue because it is empty. |
-| split_entry_queue_dequeue_wait_time_per_query | Summary | Total time spent waiting to get an entry from the entry queue per query. |
-| split_message_queue_dequeue_wait_time_per_query | Summary | Time spent waiting to dequeue from the message queue because it is empty, per query. |
-| split_message_queue_enqueue_wait_time | Summary | Time spent waiting for the message queue enqueue because the message queue is full. |
-| split_message_queue_enqueue_wait_time_per_query | Summary | Time spent waiting for the message queue enqueue because the message queue is full, per query. |
-| split_num_entries_per_batch | Summary | Number of entries per batch. |
-| split_num_entries_per_query | Summary | Number of entries per query. |
-| split_num_messages_deserialized_per_entry | Summary | Number of messages deserialized per entry. |
-| split_num_messages_deserialized_per_query | Summary | Number of messages deserialized per query. |
-| split_read_attempts | Summary | Number of read attempts (fail if queues are full). |
-| split_read_attempts_per_query | Summary | Number of read attempts per query. |
-| split_read_latency_per_batch | Summary | Latency of reads per batch. |
-| split_read_latency_per_query | Summary | Total read latency per query. |
-| split_record_deserialize_time | Summary | Time spent on deserializing messages to records. For example, Avro, JSON, and so on. |
-| split_record_deserialize_time_per_query | Summary | Time spent on deserializing messages to records per query. |
-| split_total_execution_time | Summary | The total execution time. |
-
-## Pulsar transaction
-
-All the transaction metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-- *coordinator_id*: `coordinator_id=${coordinator_id}`. `${coordinator_id}` is the coordinator ID.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_txn_active_count | Gauge | Number of active transactions. |
-| pulsar_txn_created_count | Counter | Number of created transactions. |
-| pulsar_txn_committed_count | Counter | Number of committed transactions. |
-| pulsar_txn_aborted_count | Counter | Number of aborted transactions of this coordinator. |
-| pulsar_txn_timeout_count | Counter | Number of timed-out transactions. |
-| pulsar_txn_append_log_count | Counter | Number of transaction logs appended. |
-| pulsar_txn_execution_latency_le_* | Histogram | Transaction execution latency.
    Available latencies:
    • latency="10" is TransactionExecutionLatency between (0ms, 10ms]
    • latency="20" is TransactionExecutionLatency between (10ms, 20ms]
    • latency="50" is TransactionExecutionLatency between (20ms, 50ms]
    • latency="100" is TransactionExecutionLatency between (50ms, 100ms]
    • latency="500" is TransactionExecutionLatency between (100ms, 500ms]
    • latency="1000" is TransactionExecutionLatency between (500ms, 1000ms]
    • latency="5000" is TransactionExecutionLatency between (1s, 5s]
    • latency="15000" is TransactionExecutionLatency between (5s, 15s]
    • latency="30000" is TransactionExecutionLatency between (15s, 30s]
    • latency="60000" is TransactionExecutionLatency between (30s, 60s]
    • latency="300000" is TransactionExecutionLatency between (1m, 5m]
    • latency="1500000" is TransactionExecutionLatency between (5m, 15m]
    • latency="3000000" is TransactionExecutionLatency between (15m, 30m]
    • latency="overflow" is TransactionExecutionLatency between (30m, ∞]
    | diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/reference-pulsar-admin.md b/site2/website/versioned_docs/version-2.8.3-deprecated/reference-pulsar-admin.md deleted file mode 100644 index bba1d6379dd972..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/reference-pulsar-admin.md +++ /dev/null @@ -1,3338 +0,0 @@ ---- -id: reference-pulsar-admin -title: Pulsar admin CLI -sidebar_label: "Pulsar Admin CLI" -original_id: reference-pulsar-admin ---- - -> **Important** -> -> This page is deprecated and not updated anymore. For the latest and complete information about `pulsar-admin`, including commands, flags, descriptions, and more, see [pulsar-admin doc](https://pulsar.apache.org/tools/pulsar-admin/). - -The `pulsar-admin` tool enables you to manage Pulsar installations, including clusters, brokers, namespaces, tenants, and more. - -Usage - -```bash - -$ pulsar-admin command - -``` - -Commands -* `broker-stats` -* `brokers` -* `clusters` -* `functions` -* `functions-worker` -* `namespaces` -* `ns-isolation-policy` -* `sources` - - For more information, see [here](io-cli.md#sources) -* `sinks` - - For more information, see [here](io-cli.md#sinks) -* `topics` -* `tenants` -* `resource-quotas` -* `schemas` - -## `broker-stats` - -Operations to collect broker statistics - -```bash - -$ pulsar-admin broker-stats subcommand - -``` - -Subcommands -* `allocator-stats` -* `topics(destinations)` -* `mbeans` -* `monitoring-metrics` -* `load-report` - - -### `allocator-stats` - -Dump allocator stats - -Usage - -```bash - -$ pulsar-admin broker-stats allocator-stats allocator-name - -``` - -### `topics(destinations)` - -Dump topic stats - -Usage - -```bash - -$ pulsar-admin broker-stats topics options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-i`, `--indent`|Indent JSON output|false| - -### `mbeans` - -Dump Mbean stats - -Usage - -```bash - -$ pulsar-admin broker-stats mbeans options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-i`, `--indent`|Indent JSON output|false| - - -### `monitoring-metrics` - -Dump metrics for monitoring - -Usage - -```bash - -$ pulsar-admin broker-stats monitoring-metrics options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-i`, `--indent`|Indent JSON output|false| - - -### `load-report` - -Dump broker load-report - -Usage - -```bash - -$ pulsar-admin broker-stats load-report - -``` - -## `brokers` - -Operations about brokers - -```bash - -$ pulsar-admin brokers subcommand - -``` - -Subcommands -* `list` -* `namespaces` -* `update-dynamic-config` -* `list-dynamic-config` -* `get-all-dynamic-config` -* `get-internal-config` -* `get-runtime-config` -* `healthcheck` - -### `list` -List active brokers of the cluster - -Usage - -```bash - -$ pulsar-admin brokers list cluster-name - -``` - -### `leader-broker` -Get the information of the leader broker - -Usage - -```bash - -$ pulsar-admin brokers leader-broker - -``` - -### `namespaces` -List namespaces owned by the broker - -Usage - -```bash - -$ pulsar-admin brokers namespaces cluster-name options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--url`|The URL for the broker|| - - -### `update-dynamic-config` -Update a broker's dynamic service configuration - -Usage - -```bash - -$ pulsar-admin brokers update-dynamic-config options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--config`|Service configuration parameter name|| -|`--value`|Value for the configuration parameter value 
specified using the `--config` flag|| - - -### `list-dynamic-config` -Get list of updatable configuration name - -Usage - -```bash - -$ pulsar-admin brokers list-dynamic-config - -``` - -### `delete-dynamic-config` -Delete dynamic-serviceConfiguration of broker - -Usage - -```bash - -$ pulsar-admin brokers delete-dynamic-config options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--config`|Service configuration parameter name|| - - -### `get-all-dynamic-config` -Get all overridden dynamic-configuration values - -Usage - -```bash - -$ pulsar-admin brokers get-all-dynamic-config - -``` - -### `get-internal-config` -Get internal configuration information - -Usage - -```bash - -$ pulsar-admin brokers get-internal-config - -``` - -### `get-runtime-config` -Get runtime configuration values - -Usage - -```bash - -$ pulsar-admin brokers get-runtime-config - -``` - -### `healthcheck` -Run a health check against the broker - -Usage - -```bash - -$ pulsar-admin brokers healthcheck - -``` - -## `clusters` -Operations about clusters - -Usage - -```bash - -$ pulsar-admin clusters subcommand - -``` - -Subcommands -* `get` -* `create` -* `update` -* `delete` -* `list` -* `update-peer-clusters` -* `get-peer-clusters` -* `get-failure-domain` -* `create-failure-domain` -* `update-failure-domain` -* `delete-failure-domain` -* `list-failure-domains` - - -### `get` -Get the configuration data for the specified cluster - -Usage - -```bash - -$ pulsar-admin clusters get cluster-name - -``` - -### `create` -Provisions a new cluster. This operation requires Pulsar super-user privileges. - -Usage - -```bash - -$ pulsar-admin clusters create cluster-name options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--broker-url`|The URL for the broker service.|| -|`--broker-url-secure`|The broker service URL for a secure connection|| -|`--url`|service-url|| -|`--url-secure`|service-url for secure connection|| - - -### `update` -Update the configuration for a cluster - -Usage - -```bash - -$ pulsar-admin clusters update cluster-name options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--broker-url`|The URL for the broker service.|| -|`--broker-url-secure`|The broker service URL for a secure connection|| -|`--url`|service-url|| -|`--url-secure`|service-url for secure connection|| - - -### `delete` -Deletes an existing cluster - -Usage - -```bash - -$ pulsar-admin clusters delete cluster-name - -``` - -### `list` -List the existing clusters - -Usage - -```bash - -$ pulsar-admin clusters list - -``` - -### `update-peer-clusters` -Update peer cluster names - -Usage - -```bash - -$ pulsar-admin clusters update-peer-clusters cluster-name options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--peer-clusters`|Comma separated peer cluster names (Pass empty string "" to delete list)|| - -### `get-peer-clusters` -Get list of peer clusters - -Usage - -```bash - -$ pulsar-admin clusters get-peer-clusters - -``` - -### `get-failure-domain` -Get the configuration brokers of a failure domain - -Usage - -```bash - -$ pulsar-admin clusters get-failure-domain cluster-name options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster|| - -### `create-failure-domain` -Create a new failure domain for a cluster (updates it if already created) - -Usage - -```bash - -$ pulsar-admin clusters create-failure-domain cluster-name options - -``` - -Options - 
-|Flag|Description|Default| -|---|---|---| -|`--broker-list`|Comma separated broker list|| -|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster|| - -### `update-failure-domain` -Update failure domain for a cluster (creates a new one if not exist) - -Usage - -```bash - -$ pulsar-admin clusters update-failure-domain cluster-name options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--broker-list`|Comma separated broker list|| -|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster|| - -### `delete-failure-domain` -Delete an existing failure domain - -Usage - -```bash - -$ pulsar-admin clusters delete-failure-domain cluster-name options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster|| - -### `list-failure-domains` -List the existing failure domains for a cluster - -Usage - -```bash - -$ pulsar-admin clusters list-failure-domains cluster-name - -``` - -## `functions` - -A command-line interface for Pulsar Functions - -Usage - -```bash - -$ pulsar-admin functions subcommand - -``` - -Subcommands -* `localrun` -* `create` -* `delete` -* `update` -* `get` -* `restart` -* `stop` -* `start` -* `status` -* `stats` -* `list` -* `querystate` -* `putstate` -* `trigger` - - -### `localrun` -Run the Pulsar Function locally (rather than deploying it to the Pulsar cluster) - - -Usage - -```bash - -$ pulsar-admin functions localrun options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--cpu`|The cpu in cores that need to be allocated per function instance(applicable only to docker runtime)|| -|`--ram`|The ram in bytes that need to be allocated per function instance(applicable only to process/docker runtime)|| -|`--disk`|The disk in bytes that need to be allocated per function instance(applicable only to docker runtime)|| -|`--auto-ack`|Whether or not the framework will automatically acknowledge messages|| -|`--subs-name`|Pulsar source subscription name if user wants a specific subscription-name for input-topic consumer|| -|`--broker-service-url `|The URL of the Pulsar broker|| -|`--classname`|The function's class name|| -|`--custom-serde-inputs`|The map of input topics to SerDe class names (as a JSON string)|| -|`--custom-schema-inputs`|The map of input topics to Schema class names (as a JSON string)|| -|`--client-auth-params`|Client authentication param|| -|`--client-auth-plugin`|Client authentication plugin using which function-process can connect to broker|| -|`--function-config-file`|The path to a YAML config file specifying the function's configuration|| -|`--hostname-verification-enabled`|Enable hostname verification|false| -|`--instance-id-offset`|Start the instanceIds from this offset|0| -|`--inputs`|The function's input topic or topics (multiple topics can be specified as a comma-separated list)|| -|`--log-topic`|The topic to which the function's logs are produced|| -|`--jar`|Path to the jar file for the function (if the function is written in Java). 
It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.||
-|`--name`|The function's name||
-|`--namespace`|The function's namespace||
-|`--output`|The function's output topic (If none is specified, no output is written)||
-|`--output-serde-classname`|The SerDe class to be used for messages output by the function||
-|`--parallelism`|The function’s parallelism factor, i.e. the number of instances of the function to run|1|
-|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the function. Possible Values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]|ATLEAST_ONCE|
-|`--py`|Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.||
-|`--go`|Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.||
-|`--schema-type`|The builtin schema type or custom schema class name to be used for messages output by the function||
-|`--sliding-interval-count`|The number of messages after which the window slides||
-|`--sliding-interval-duration-ms`|The time duration after which the window slides||
-|`--state-storage-service-url`|The URL for the state storage service. By default, it is set to the service URL of the Apache BookKeeper. This service URL must be added manually when the Pulsar Function runs locally. ||
-|`--tenant`|The function’s tenant||
-|`--topics-pattern`|The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (supported for Java functions only)||
-|`--user-config`|User-defined config key/values||
-|`--window-length-count`|The number of messages per window||
-|`--window-length-duration-ms`|The time duration of the window in milliseconds||
-|`--dead-letter-topic`|The topic where all messages which could not be processed successfully are sent||
-|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function||
-|`--max-message-retries`|How many times should we try to process a message before giving up||
-|`--retain-ordering`|Function consumes and processes messages in order||
-|`--retain-key-ordering`|Function consumes and processes messages in key order||
-|`--timeout-ms`|The message timeout in milliseconds||
-|`--tls-allow-insecure`|Allow insecure tls connection|false|
-|`--tls-trust-cert-path`|The tls trust cert file path||
-|`--use-tls`|Use tls connection|false|
-|`--producer-config`| The custom producer configuration (as a JSON string) | |
-
-
-### `create`
-Create a Pulsar Function in cluster mode (i.e.
deploy it on a Pulsar cluster) - -Usage - -``` - -$ pulsar-admin functions create options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--cpu`|The cpu in cores that need to be allocated per function instance(applicable only to docker runtime)|| -|`--ram`|The ram in bytes that need to be allocated per function instance(applicable only to process/docker runtime)|| -|`--disk`|The disk in bytes that need to be allocated per function instance(applicable only to docker runtime)|| -|`--auto-ack`|Whether or not the framework will automatically acknowledge messages|| -|`--subs-name`|Pulsar source subscription name if user wants a specific subscription-name for input-topic consumer|| -|`--classname`|The function's class name|| -|`--custom-serde-inputs`|The map of input topics to SerDe class names (as a JSON string)|| -|`--custom-schema-inputs`|The map of input topics to Schema class names (as a JSON string)|| -|`--function-config-file`|The path to a YAML config file specifying the function's configuration|| -|`--inputs`|The function's input topic or topics (multiple topics can be specified as a comma-separated list)|| -|`--log-topic`|The topic to which the function's logs are produced|| -|`--jar`|Path to the jar file for the function (if the function is written in Java). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--name`|The function's name|| -|`--namespace`|The function’s namespace|| -|`--output`|The function's output topic (If none is specified, no output is written)|| -|`--output-serde-classname`|The SerDe class to be used for messages output by the function|| -|`--parallelism`|The function’s parallelism factor, i.e. the number of instances of the function to run|1| -|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the function. Possible Values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]|ATLEAST_ONCE| -|`--py`|Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--go`|Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--schema-type`|The builtin schema type or custom schema class name to be used for messages output by the function|| -|`--sliding-interval-count`|The number of messages after which the window slides|| -|`--sliding-interval-duration-ms`|The time duration after which the window slides|| -|`--tenant`|The function’s tenant|| -|`--topics-pattern`|The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. 
-
-
-### `delete`
-Delete a Pulsar Function that's running on a Pulsar cluster
-
-Usage
-
-```bash
-
-$ pulsar-admin functions delete options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function||
-|`--name`|The function's name||
-|`--namespace`|The function's namespace||
-|`--tenant`|The function's tenant||
-
-
-### `update`
-Update a Pulsar Function that's been deployed to a Pulsar cluster
-
-Usage
-
-```bash
-
-$ pulsar-admin functions update options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--cpu`|The CPU in cores that needs to be allocated per function instance (applicable only to the docker runtime)||
-|`--ram`|The RAM in bytes that needs to be allocated per function instance (applicable only to the process/docker runtime)||
-|`--disk`|The disk in bytes that needs to be allocated per function instance (applicable only to the docker runtime)||
-|`--auto-ack`|Whether or not the framework will automatically acknowledge messages||
-|`--subs-name`|The Pulsar source subscription name, if the user wants a specific subscription name for the input-topic consumer||
-|`--classname`|The function's class name||
-|`--custom-serde-inputs`|The map of input topics to SerDe class names (as a JSON string)||
-|`--custom-schema-inputs`|The map of input topics to Schema class names (as a JSON string)||
-|`--function-config-file`|The path to a YAML config file specifying the function's configuration||
-|`--inputs`|The function's input topic or topics (multiple topics can be specified as a comma-separated list)||
-|`--log-topic`|The topic to which the function's logs are produced||
-|`--jar`|Path to the JAR file for the function (if the function is written in Java). It also supports URL paths [http/https/file (the file protocol assumes that the file already exists on the worker host)/function (package URL from the packages management service)] from which the worker can download the package.||
-|`--name`|The function's name||
-|`--namespace`|The function's namespace||
-|`--output`|The function's output topic (if none is specified, no output is written)||
-|`--output-serde-classname`|The SerDe class to be used for messages output by the function||
-|`--parallelism`|The function's parallelism factor, i.e. the number of instances of the function to run|1|
-|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the function. Possible values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]|ATLEAST_ONCE|
-|`--py`|Path to the main Python file/Python wheel file for the function (if the function is written in Python). It also supports URL paths [http/https/file (the file protocol assumes that the file already exists on the worker host)/function (package URL from the packages management service)] from which the worker can download the package.||
-|`--go`|Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL paths [http/https/file (the file protocol assumes that the file already exists on the worker host)/function (package URL from the packages management service)] from which the worker can download the package.||
-|`--schema-type`|The built-in schema type or custom schema class name to be used for messages output by the function||
-|`--sliding-interval-count`|The number of messages after which the window slides||
-|`--sliding-interval-duration-ms`|The time duration after which the window slides||
-|`--tenant`|The function's tenant||
-|`--topics-pattern`|The topic pattern to consume from a list of topics under a namespace that match the pattern. `--inputs` and `--topics-pattern` are mutually exclusive. Add a SerDe class name for a pattern in `--custom-serde-inputs` (supported for Java functions only)||
-|`--user-config`|User-defined config key/values||
-|`--window-length-count`|The number of messages per window||
-|`--window-length-duration-ms`|The time duration of the window in milliseconds||
-|`--dead-letter-topic`|The topic to which all messages that could not be processed are sent||
-|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function||
-|`--max-message-retries`|The number of times to attempt processing a message before giving up||
-|`--retain-ordering`|Function consumes and processes messages in order||
-|`--retain-key-ordering`|Function consumes and processes messages in key order||
-|`--timeout-ms`|The message timeout in milliseconds||
-|`--producer-config`|The custom producer configuration (as a JSON string)||
-
-
-### `get`
-Fetch information about a Pulsar Function
-
-Usage
-
-```bash
-
-$ pulsar-admin functions get options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function||
-|`--name`|The function's name||
-|`--namespace`|The function's namespace||
-|`--tenant`|The function's tenant||
-
-
-### `restart`
-Restart a function instance
-
-Usage
-
-```bash
-
-$ pulsar-admin functions restart options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function||
-|`--instance-id`|The function instanceId (restart all instances if instance-id is not provided)||
-|`--name`|The function's name||
-|`--namespace`|The function's namespace||
-|`--tenant`|The function's tenant||
-
-
-### `stop`
-Stop a function instance
-
-Usage
-
-```bash
-
-$ pulsar-admin functions stop options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function||
-|`--instance-id`|The function instanceId (stop all instances if instance-id is not provided)||
-|`--name`|The function's name||
-|`--namespace`|The function's namespace||
-|`--tenant`|The function's tenant||
-
-
-### `start`
-Start a stopped function instance
-
-Usage
-
-```bash
-
-$ pulsar-admin functions start options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function||
-|`--instance-id`|The function instanceId (start all instances if instance-id is not provided)||
-|`--name`|The function's name||
-|`--namespace`|The function's namespace||
-|`--tenant`|The function's tenant||
-
-
-### `status`
-Check the current status of a Pulsar Function
-
-Usage
-
-```bash
-
-$ pulsar-admin functions status options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function||
-|`--instance-id`|The function instanceId (get the status of all instances if instance-id is not provided)||
-|`--name`|The function's name||
-|`--namespace`|The function's namespace||
-|`--tenant`|The function's tenant||
-
-
-### `stats`
-Get the current stats of a Pulsar Function
-
-Usage
-
-```bash
-
-$ pulsar-admin functions stats options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function||
-|`--instance-id`|The function instanceId (get stats of all instances if instance-id is not provided)||
-|`--name`|The function's name||
-|`--namespace`|The function's namespace||
-|`--tenant`|The function's tenant||
-
-### `list`
-List all of the Pulsar Functions running under a specific tenant and namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin functions list options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--namespace`|The function's namespace||
-|`--tenant`|The function's tenant||
-
-
-### `querystate`
-Fetch the current state associated with a Pulsar Function running in cluster mode
-
-Usage
-
-```bash
-
-$ pulsar-admin functions querystate options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function||
-|`-k`, `--key`|The key for the state you want to fetch||
-|`--name`|The function's name||
-|`--namespace`|The function's namespace||
-|`--tenant`|The function's tenant||
-|`-w`, `--watch`|Watch for changes in the value associated with a key for a Pulsar Function|false|
-
-### `putstate`
-Put a key/value pair to the state associated with a Pulsar Function
-
-Usage
-
-```bash
-
-$ pulsar-admin functions putstate options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--fqfn`|The Fully Qualified Function Name (FQFN) for the Pulsar Function||
-|`--name`|The name of a Pulsar Function||
-|`--namespace`|The namespace of a Pulsar Function||
-|`--tenant`|The tenant of a Pulsar Function||
-|`-s`, `--state`|The FunctionState that needs to be put||
-
-### `trigger`
-Trigger the specified Pulsar Function with a supplied value
-
-Usage
-
-```bash
-
-$ pulsar-admin functions trigger options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function||
-|`--name`|The function's name||
-|`--namespace`|The function's namespace||
-|`--tenant`|The function's tenant||
-|`--topic`|The specific topic name that the function consumes from into which you want to inject the data||
-|`--trigger-file`|The path to the file that contains the data with which you'd like to trigger the function||
-|`--trigger-value`|The value with which you want to trigger the function||
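-
-Example
-
-As a sketch (the function name and trigger value below are hypothetical), a function can be triggered with an inline string value:
-
-```bash
-
-$ pulsar-admin functions trigger \
---tenant public \
---namespace default \
---name word-count \
---trigger-value "hello pulsar"
-
-```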
-
-
-## `functions-worker`
-Operations to collect function-worker statistics
-
-```bash
-
-$ pulsar-admin functions-worker subcommand
-
-```
-
-Subcommands
-
-* `function-stats`
-* `get-cluster`
-* `get-cluster-leader`
-* `get-function-assignments`
-* `monitoring-metrics`
-
-### `function-stats`
-
-Dump all functions stats running on this broker
-
-Usage
-
-```bash
-
-$ pulsar-admin functions-worker function-stats
-
-```
-
-### `get-cluster`
-
-Get all workers belonging to this cluster
-
-Usage
-
-```bash
-
-$ pulsar-admin functions-worker get-cluster
-
-```
-
-### `get-cluster-leader`
-
-Get the leader of the worker cluster
-
-Usage
-
-```bash
-
-$ pulsar-admin functions-worker get-cluster-leader
-
-```
-
-### `get-function-assignments`
-
-Get the assignments of the functions across the worker cluster
-
-Usage
-
-```bash
-
-$ pulsar-admin functions-worker get-function-assignments
-
-```
-
-### `monitoring-metrics`
-
-Dump metrics for monitoring
-
-Usage
-
-```bash
-
-$ pulsar-admin functions-worker monitoring-metrics
-
-```
-
-## `namespaces`
-
-Operations for managing namespaces
-
-```bash
-
-$ pulsar-admin namespaces subcommand
-
-```
-
-Subcommands
-* `list`
-* `topics`
-* `policies`
-* `create`
-* `delete`
-* `set-deduplication`
-* `set-auto-topic-creation`
-* `remove-auto-topic-creation`
-* `set-auto-subscription-creation`
-* `remove-auto-subscription-creation`
-* `permissions`
-* `grant-permission`
-* `revoke-permission`
-* `grant-subscription-permission`
-* `revoke-subscription-permission`
-* `set-clusters`
-* `get-clusters`
-* `get-backlog-quotas`
-* `set-backlog-quota`
-* `remove-backlog-quota`
-* `get-persistence`
-* `set-persistence`
-* `get-message-ttl`
-* `set-message-ttl`
-* `remove-message-ttl`
-* `get-anti-affinity-group`
-* `set-anti-affinity-group`
-* `get-anti-affinity-namespaces`
-* `delete-anti-affinity-group`
-* `get-retention`
-* `set-retention`
-* `unload`
-* `split-bundle`
-* `set-dispatch-rate`
-* `get-dispatch-rate`
-* `set-replicator-dispatch-rate`
-* `get-replicator-dispatch-rate`
-* `set-subscribe-rate`
-* `get-subscribe-rate`
-* `set-subscription-dispatch-rate`
-* `get-subscription-dispatch-rate`
-* `clear-backlog`
-* `unsubscribe`
-* `set-encryption-required`
-* `set-delayed-delivery`
-* `get-delayed-delivery`
-* `set-subscription-auth-mode`
-* `get-max-producers-per-topic`
-* `set-max-producers-per-topic`
-* `get-max-consumers-per-topic`
-* `set-max-consumers-per-topic`
-* `get-max-consumers-per-subscription`
-* `set-max-consumers-per-subscription`
-* `get-max-unacked-messages-per-subscription`
-* `set-max-unacked-messages-per-subscription`
-* `get-max-unacked-messages-per-consumer`
-* `set-max-unacked-messages-per-consumer`
-* `get-compaction-threshold`
-* `set-compaction-threshold`
-* `get-offload-threshold`
-* `set-offload-threshold`
-* `get-offload-deletion-lag`
-* `set-offload-deletion-lag`
-* `clear-offload-deletion-lag`
-* `get-schema-autoupdate-strategy`
-* `set-schema-autoupdate-strategy`
-* `set-offload-policies`
-* `get-offload-policies`
-* `set-max-subscriptions-per-topic`
-* `get-max-subscriptions-per-topic`
-* `remove-max-subscriptions-per-topic`
-
-
-### `list`
-Get the namespaces for a tenant
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces list tenant-name
-
-```
-
-### `topics`
-Get the list of topics for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces topics tenant/namespace
-
-```
-
-### `policies`
-Get the configuration policies of a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces policies tenant/namespace
-
-```
-
-### `create`
-Create a new namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces create tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-b`, `--bundles`|The number of bundles to activate|0|
-|`-c`, `--clusters`|List of clusters this namespace will be assigned||
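-
-Example
-
-A minimal sketch follows; the tenant/namespace and cluster names are hypothetical placeholders, and the cluster passed to `--clusters` must already exist in your deployment:
-
-```bash
-
-$ pulsar-admin namespaces create my-tenant/my-ns \
---clusters us-west \
---bundles 4
-
-```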
-
-### `delete`
-Delete a namespace. The namespace needs to be empty.
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces delete tenant/namespace
-
-```
-
-### `set-deduplication`
-Enable or disable message deduplication on a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-deduplication tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--enable`, `-e`|Enable message deduplication on the specified namespace|false|
-|`--disable`, `-d`|Disable message deduplication on the specified namespace|false|
-
-### `set-auto-topic-creation`
-Enable or disable autoTopicCreation for a namespace, overriding broker settings
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-auto-topic-creation tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--enable`, `-e`|Enable allowAutoTopicCreation on the namespace|false|
-|`--disable`, `-d`|Disable allowAutoTopicCreation on the namespace|false|
-|`--type`, `-t`|Type of topic to be auto-created. Possible values: (partitioned, non-partitioned)|non-partitioned|
-|`--num-partitions`, `-n`|Default number of partitions of a topic to be auto-created, applicable to partitioned topics only||
-
-### `remove-auto-topic-creation`
-Remove the override of autoTopicCreation for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces remove-auto-topic-creation tenant/namespace
-
-```
-
-### `set-auto-subscription-creation`
-Enable autoSubscriptionCreation for a namespace, overriding broker settings
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-auto-subscription-creation tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--enable`, `-e`|Enable allowAutoSubscriptionCreation on the namespace|false|
-
-### `remove-auto-subscription-creation`
-Remove the override of autoSubscriptionCreation for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces remove-auto-subscription-creation tenant/namespace
-
-```
-
-### `permissions`
-Get the permissions on a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces permissions tenant/namespace
-
-```
-
-### `grant-permission`
-Grant permissions on a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces grant-permission tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--actions`|Actions to be granted (`produce` or `consume`)||
-|`--role`|The client role to which to grant the permissions||
-
-
-### `revoke-permission`
-Revoke permissions on a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces revoke-permission tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--role`|The client role from which to revoke the permissions||
-
-### `grant-subscription-permission`
-Grant permissions to access the subscription admin API
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces grant-subscription-permission tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--roles`|The client roles to which to grant the permissions (comma-separated roles)||
-|`--subscription`|The subscription name for which permission will be granted to roles||
-
-### `revoke-subscription-permission`
-Revoke permissions to access the subscription admin API
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces revoke-subscription-permission tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--role`|The client role from which to revoke the permissions||
-|`--subscription`|The subscription name for which permission will be revoked from roles||
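-
-Example
-
-The following sketch grants both actions to a single role; the namespace and role names are hypothetical, and it assumes `--actions` accepts a comma-separated list:
-
-```bash
-
-$ pulsar-admin namespaces grant-permission my-tenant/my-ns \
---role my-client-role \
---actions produce,consume
-
-```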
-
-### `set-clusters`
-Set replication clusters for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-clusters tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-c`, `--clusters`|The list of replication cluster IDs (comma-separated values)||
-
-
-### `get-clusters`
-Get replication clusters for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-clusters tenant/namespace
-
-```
-
-### `get-backlog-quotas`
-Get the backlog quota policies for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-backlog-quotas tenant/namespace
-
-```
-
-### `set-backlog-quota`
-Set a backlog quota policy for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-backlog-quota tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-l`, `--limit`|The backlog size limit (for example `10M` or `16G`)||
-|`-p`, `--policy`|The retention policy to enforce when the limit is reached. The valid options are: `producer_request_hold`, `producer_exception` or `consumer_backlog_eviction`||
-
-Example
-
-```bash
-
-$ pulsar-admin namespaces set-backlog-quota my-tenant/my-ns \
---limit 2G \
---policy producer_request_hold
-
-```
-
-### `remove-backlog-quota`
-Remove a backlog quota policy from a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces remove-backlog-quota tenant/namespace
-
-```
-
-### `get-persistence`
-Get the persistence policies for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-persistence tenant/namespace
-
-```
-
-### `set-persistence`
-Set the persistence policies for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-persistence tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-a`, `--bookkeeper-ack-quorum`|The number of acks (guaranteed copies) to wait for each entry|0|
-|`-e`, `--bookkeeper-ensemble`|The number of bookies to use for a topic|0|
-|`-w`, `--bookkeeper-write-quorum`|How many writes to make of each entry|0|
-|`-r`, `--ml-mark-delete-max-rate`|Throttling rate of the mark-delete operation (0 means no throttle)||
-
-
-### `get-message-ttl`
-Get the message TTL for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-message-ttl tenant/namespace
-
-```
-
-### `set-message-ttl`
-Set the message TTL for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-message-ttl tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-ttl`, `--messageTTL`|Message TTL in seconds. When the value is set to `0`, TTL is disabled. TTL is disabled by default.|0|
-
-### `remove-message-ttl`
-Remove the message TTL for a namespace.
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces remove-message-ttl tenant/namespace
-
-```
-
-### `get-anti-affinity-group`
-Get the anti-affinity group name for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-anti-affinity-group tenant/namespace
-
-```
-
-### `set-anti-affinity-group`
-Set the anti-affinity group name for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-anti-affinity-group tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-g`, `--group`|Anti-affinity group name||
-
-### `get-anti-affinity-namespaces`
-Get the anti-affinity namespaces grouped under the given anti-affinity group name
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-anti-affinity-namespaces options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-c`, `--cluster`|Cluster name||
-|`-g`, `--group`|Anti-affinity group name||
-|`-p`, `--tenant`|The tenant is only used for authorization. The client has to be an admin of one of the tenants to access this API||
-
-### `delete-anti-affinity-group`
-Remove the anti-affinity group name for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces delete-anti-affinity-group tenant/namespace
-
-```
-
-### `get-retention`
-Get the retention policy that is applied to each topic within the specified namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-retention tenant/namespace
-
-```
-
-### `set-retention`
-Set the retention policy for each topic within the specified namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-retention tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-s`, `--size`|The retention size limits (for example 10M, 16G or 3T) for each topic in the namespace. 0 means no retention and -1 means infinite size retention||
-|`-t`, `--time`|The retention time in minutes, hours, days, or weeks. Examples: 100m, 13h, 2d, 5w. 0 means no retention and -1 means infinite time retention||
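-
-Example
-
-For instance, the following sketch (hypothetical namespace name) retains up to 10G or 3 hours of acknowledged messages per topic, whichever limit is hit first:
-
-```bash
-
-$ pulsar-admin namespaces set-retention my-tenant/my-ns \
---size 10G \
---time 3h
-
-```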
-
-
-### `unload`
-Unload a namespace or namespace bundle from the current serving broker.
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces unload tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-b`, `--bundle`|{start-boundary}_{end-boundary} (e.g. 0x00000000_0xffffffff)||
-
-### `split-bundle`
-Split a namespace bundle from the current serving broker
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces split-bundle tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-b`, `--bundle`|{start-boundary}_{end-boundary} (e.g. 0x00000000_0xffffffff)||
-|`-u`, `--unload`|Unload newly split bundles after splitting the old bundle|false|
-
-### `set-dispatch-rate`
-Set the message dispatch rate for all topics of the namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-dispatch-rate tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-bd`, `--byte-dispatch-rate`|The byte dispatch rate (the default -1 is applied if not passed)|-1|
-|`-dt`, `--dispatch-rate-period`|The dispatch rate period, in seconds (the default 1 second is applied if not passed)|1|
-|`-md`, `--msg-dispatch-rate`|The message dispatch rate (the default -1 is applied if not passed)|-1|
-
-### `get-dispatch-rate`
-Get the configured message dispatch rate for all topics of the namespace (disabled if the value is < 0)
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-dispatch-rate tenant/namespace
-
-```
-
-### `set-replicator-dispatch-rate`
-Set the replicator message dispatch rate for all topics of the namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-replicator-dispatch-rate tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-bd`, `--byte-dispatch-rate`|The byte dispatch rate (the default -1 is applied if not passed)|-1|
-|`-dt`, `--dispatch-rate-period`|The dispatch rate period, in seconds (the default 1 second is applied if not passed)|1|
-|`-md`, `--msg-dispatch-rate`|The message dispatch rate (the default -1 is applied if not passed)|-1|
-
-### `get-replicator-dispatch-rate`
-Get the configured replicator message dispatch rate for all topics of the namespace (disabled if the value is < 0)
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-replicator-dispatch-rate tenant/namespace
-
-```
-
-### `set-subscribe-rate`
-Set the subscribe rate per consumer for all topics of the namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-subscribe-rate tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-sr`, `--subscribe-rate`|The subscribe rate (the default -1 is applied if not passed)|-1|
-|`-st`, `--subscribe-rate-period`|The subscribe rate period, in seconds (the default 30 seconds is applied if not passed)|30|
-
-### `get-subscribe-rate`
-Get the configured subscribe rate per consumer for all topics of the namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-subscribe-rate tenant/namespace
-
-```
-
-### `set-subscription-dispatch-rate`
-Set the subscription message dispatch rate for all subscriptions of the namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-subscription-dispatch-rate tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-bd`, `--byte-dispatch-rate`|The byte dispatch rate (the default -1 is applied if not passed)|-1|
-|`-dt`, `--dispatch-rate-period`|The dispatch rate period, in seconds (the default 1 second is applied if not passed)|1|
-|`-md`, `--sub-msg-dispatch-rate`|The message dispatch rate (the default -1 is applied if not passed)|-1|
-
-### `get-subscription-dispatch-rate`
-Get the configured subscription message dispatch rate for all topics of the namespace (disabled if the value is < 0)
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-subscription-dispatch-rate tenant/namespace
-
-```
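-
-Example
-
-As a sketch (hypothetical namespace and rate values), the following caps dispatch at 1000 messages or 1 MiB per second:
-
-```bash
-
-$ pulsar-admin namespaces set-dispatch-rate my-tenant/my-ns \
---msg-dispatch-rate 1000 \
---byte-dispatch-rate 1048576 \
---dispatch-rate-period 1
-
-```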
-
-### `clear-backlog`
-Clear the backlog for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces clear-backlog tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-b`, `--bundle`|{start-boundary}_{end-boundary} (e.g. 0x00000000_0xffffffff)||
-|`-force`, `--force`|Whether to force-clear the backlog without prompting|false|
-|`-s`, `--sub`|The subscription name||
-
-
-### `unsubscribe`
-Unsubscribe the given subscription on all destinations on a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces unsubscribe tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-b`, `--bundle`|{start-boundary}_{end-boundary} (e.g. 0x00000000_0xffffffff)||
-|`-s`, `--sub`|The subscription name||
-
-### `set-encryption-required`
-Enable or disable required message encryption for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-encryption-required tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-d`, `--disable`|Disable required message encryption|false|
-|`-e`, `--enable`|Enable required message encryption|false|
-
-### `set-delayed-delivery`
-Set the delayed delivery policy on a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-delayed-delivery tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-d`, `--disable`|Disable delayed delivery messages|false|
-|`-e`, `--enable`|Enable delayed delivery messages|false|
-|`-t`, `--time`|The tick time for retrying on delayed delivery messages|1s|
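-
-Example
-
-For instance (hypothetical namespace name), delayed delivery can be enabled with the default 1-second tick time:
-
-```bash
-
-$ pulsar-admin namespaces set-delayed-delivery my-tenant/my-ns \
---enable \
---time 1s
-
-```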
-
-
-### `get-delayed-delivery`
-Get the delayed delivery policy on a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-delayed-delivery tenant/namespace
-
-```
-
-
-### `set-subscription-auth-mode`
-Set the subscription auth mode on a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-subscription-auth-mode tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-m`, `--subscription-auth-mode`|Subscription authorization mode for Pulsar policies. Valid options are: [None, Prefix]||
-
-### `get-max-producers-per-topic`
-Get maxProducersPerTopic for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-max-producers-per-topic tenant/namespace
-
-```
-
-### `set-max-producers-per-topic`
-Set maxProducersPerTopic for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-max-producers-per-topic tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-p`, `--max-producers-per-topic`|maxProducersPerTopic for a namespace|0|
-
-### `get-max-consumers-per-topic`
-Get maxConsumersPerTopic for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-max-consumers-per-topic tenant/namespace
-
-```
-
-### `set-max-consumers-per-topic`
-Set maxConsumersPerTopic for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-max-consumers-per-topic tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-c`, `--max-consumers-per-topic`|maxConsumersPerTopic for a namespace|0|
-
-### `get-max-consumers-per-subscription`
-Get maxConsumersPerSubscription for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-max-consumers-per-subscription tenant/namespace
-
-```
-
-### `set-max-consumers-per-subscription`
-Set maxConsumersPerSubscription for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-max-consumers-per-subscription tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-c`, `--max-consumers-per-subscription`|maxConsumersPerSubscription for a namespace|0|
-
-### `get-max-unacked-messages-per-subscription`
-Get maxUnackedMessagesPerSubscription for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-max-unacked-messages-per-subscription tenant/namespace
-
-```
-
-### `set-max-unacked-messages-per-subscription`
-Set maxUnackedMessagesPerSubscription for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-max-unacked-messages-per-subscription tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-c`, `--max-unacked-messages-per-subscription`|maxUnackedMessagesPerSubscription for a namespace|-1|
-
-### `get-max-unacked-messages-per-consumer`
-Get maxUnackedMessagesPerConsumer for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-max-unacked-messages-per-consumer tenant/namespace
-
-```
-
-### `set-max-unacked-messages-per-consumer`
-Set maxUnackedMessagesPerConsumer for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-max-unacked-messages-per-consumer tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-c`, `--max-unacked-messages-per-consumer`|maxUnackedMessagesPerConsumer for a namespace|-1|
-
-
-### `get-compaction-threshold`
-Get compactionThreshold for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-compaction-threshold tenant/namespace
-
-```
-
-### `set-compaction-threshold`
-Set compactionThreshold for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-compaction-threshold tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-t`, `--threshold`|Maximum number of bytes in a topic backlog before compaction is triggered (eg: 10M, 16G, 3T). 0 disables automatic compaction|0|
-
-
-### `get-offload-threshold`
-Get offloadThreshold for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-offload-threshold tenant/namespace
-
-```
-
-### `set-offload-threshold`
-Set offloadThreshold for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-offload-threshold tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-s`, `--size`|Maximum number of bytes stored in the Pulsar cluster for a topic before data starts being automatically offloaded to long-term storage (eg: 10M, 16G, 3T, 100). Negative values disable automatic offload. 0 triggers offloading as soon as possible.|-1|
-
-### `get-offload-deletion-lag`
-Get offloadDeletionLag, in minutes, for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-offload-deletion-lag tenant/namespace
-
-```
-
-### `set-offload-deletion-lag`
-Set offloadDeletionLag for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-offload-deletion-lag tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-l`, `--lag`|Duration to wait after offloading a ledger segment before deleting the copy of that segment from cluster-local storage (eg: 10m, 5h, 3d, 2w).|-1|
-
-### `clear-offload-deletion-lag`
-Clear offloadDeletionLag for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces clear-offload-deletion-lag tenant/namespace
-
-```
-
-### `get-schema-autoupdate-strategy`
-Get the schema auto-update strategy for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-schema-autoupdate-strategy tenant/namespace
-
-```
-
-### `set-schema-autoupdate-strategy`
-Set the schema auto-update strategy for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-schema-autoupdate-strategy tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-c`, `--compatibility`|Compatibility level required for new schemas created via a Producer. Possible values (Full, Backward, Forward, None).|Full|
-|`-d`, `--disabled`|Disable automatic schema updates.|false|
-
-### `get-publish-rate`
-Get the message publish rate for each topic in a namespace, in bytes as well as messages per second
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-publish-rate tenant/namespace
-
-```
-
-### `set-publish-rate`
-Set the message publish rate for each topic in a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-publish-rate tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-m`, `--msg-publish-rate`|Threshold for the number of messages per second per topic in the namespace (-1 implies not set, 0 for no limit).|-1|
-|`-b`, `--byte-publish-rate`|Threshold for the number of bytes per second per topic in the namespace (-1 implies not set, 0 for no limit).|-1|
-
-### `set-offload-policies`
-Set the offload policy for a namespace.
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-offload-policies tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-d`, `--driver`|Driver to use to offload old data to long-term storage (possible values: S3, aws-s3, google-cloud-storage)||
-|`-r`, `--region`|The long-term storage region||
-|`-b`, `--bucket`|Bucket to place offloaded ledgers into||
-|`-e`, `--endpoint`|Alternative endpoint to connect to||
-|`-i`, `--aws-id`|AWS credential ID to use with the S3 or aws-s3 driver||
-|`-s`, `--aws-secret`|AWS credential secret to use with the S3 or aws-s3 driver||
-|`-ro`, `--s3-role`|S3 role used for STSAssumeRoleSessionCredentialsProvider with the S3 or aws-s3 driver||
-|`-rsn`, `--s3-role-session-name`|S3 role session name used for STSAssumeRoleSessionCredentialsProvider with the S3 or aws-s3 driver||
-|`-mbs`, `--maxBlockSize`|Max block size|64MB|
-|`-rbs`, `--readBufferSize`|Read buffer size|1MB|
-|`-oat`, `--offloadAfterThreshold`|Offload after threshold size (eg: 1M, 5M)||
-|`-oae`, `--offloadAfterElapsed`|Offload after elapsed time in millis (or minutes, hours, days, weeks, eg: 100m, 3h, 2d, 5w).||
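-
-Example
-
-A sketch of configuring S3-based offload follows; the namespace, region, and bucket names are hypothetical placeholders, and the actual values must match buckets provisioned in your cloud account:
-
-```bash
-
-$ pulsar-admin namespaces set-offload-policies my-tenant/my-ns \
---driver aws-s3 \
---region us-west-2 \
---bucket my-offload-bucket \
---offloadAfterThreshold 10M \
---offloadAfterElapsed 100m
-
-```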
-
-### `get-offload-policies`
-Get the offload policy for a namespace.
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-offload-policies tenant/namespace
-
-```
-
-### `set-max-subscriptions-per-topic`
-Set the maximum number of subscriptions per topic for a namespace.
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-max-subscriptions-per-topic tenant/namespace
-
-```
-
-### `get-max-subscriptions-per-topic`
-Get the maximum number of subscriptions per topic for a namespace.
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-max-subscriptions-per-topic tenant/namespace
-
-```
-
-### `remove-max-subscriptions-per-topic`
-Remove the maximum number of subscriptions per topic for a namespace.
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces remove-max-subscriptions-per-topic tenant/namespace
-
-```
-
-## `ns-isolation-policy`
-Operations for managing namespace isolation policies.
-
-Usage
-
-```bash
-
-$ pulsar-admin ns-isolation-policy subcommand
-
-```
-
-Subcommands
-* `set`
-* `get`
-* `list`
-* `delete`
-* `brokers`
-* `broker`
-
-### `set`
-Create/update a namespace isolation policy for a cluster. This operation requires Pulsar superuser privileges.
-
-Usage
-
-```bash
-
-$ pulsar-admin ns-isolation-policy set cluster-name policy-name options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`--auto-failover-policy-params`|Comma-separated name=value auto failover policy parameters|[]|
-|`--auto-failover-policy-type`|Auto failover policy type name. Currently available option: min_available.|[]|
-|`--namespaces`|Comma-separated namespaces regex list|[]|
-|`--primary`|Comma-separated primary broker regex list|[]|
-|`--secondary`|Comma-separated secondary broker regex list|[]|
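-
-Example
-
-The following is a sketch only; the cluster, policy, namespace, and broker regexes are hypothetical, and the `min_limit`/`usage_threshold` parameter names for the min_available policy are an assumption to verify against your Pulsar version:
-
-```bash
-
-$ pulsar-admin ns-isolation-policy set my-cluster my-policy \
---auto-failover-policy-type min_available \
---auto-failover-policy-params min_limit=3,usage_threshold=100 \
---namespaces "my-tenant/.*" \
---primary "broker1.*" \
---secondary "broker2.*"
-
-```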
-
-
-### `get`
-Get the namespace isolation policy of a cluster. This operation requires Pulsar superuser privileges.
-
-Usage
-
-```bash
-
-$ pulsar-admin ns-isolation-policy get cluster-name policy-name
-
-```
-
-### `list`
-List all namespace isolation policies of a cluster. This operation requires Pulsar superuser privileges.
-
-Usage
-
-```bash
-
-$ pulsar-admin ns-isolation-policy list cluster-name
-
-```
-
-### `delete`
-Delete the namespace isolation policy of a cluster. This operation requires superuser privileges.
-
-Usage
-
-```bash
-
-$ pulsar-admin ns-isolation-policy delete cluster-name policy-name
-
-```
-
-### `brokers`
-List all brokers with namespace-isolation policies attached to them. This operation requires Pulsar superuser privileges.
-
-Usage
-
-```bash
-
-$ pulsar-admin ns-isolation-policy brokers cluster-name
-
-```
-
-### `broker`
-Get a broker with the namespace-isolation policies attached to it. This operation requires Pulsar superuser privileges.
-
-Usage
-
-```bash
-
-$ pulsar-admin ns-isolation-policy broker cluster-name options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`--broker`|Broker name to get the namespace-isolation policies attached to it||
-
-## `topics`
-Operations for managing Pulsar topics (both persistent and non-persistent).
-
-Usage
-
-```bash
-
-$ pulsar-admin topics subcommand
-
-```
-
-From Pulsar 2.7.0, some namespace-level policies are available at the topic level. To enable topic-level policies in Pulsar, you need to configure the following parameters in the `broker.conf` file.
-
-```shell
-
-systemTopicEnabled=true
-topicLevelPoliciesEnabled=true
-
-```
-
-Subcommands
-* `compact`
-* `compaction-status`
-* `offload`
-* `offload-status`
-* `create-partitioned-topic`
-* `create-missed-partitions`
-* `delete-partitioned-topic`
-* `create`
-* `get-partitioned-topic-metadata`
-* `update-partitioned-topic`
-* `list-partitioned-topics`
-* `list`
-* `terminate`
-* `permissions`
-* `grant-permission`
-* `revoke-permission`
-* `lookup`
-* `bundle-range`
-* `delete`
-* `unload`
-* `create-subscription`
-* `subscriptions`
-* `unsubscribe`
-* `stats`
-* `stats-internal`
-* `info-internal`
-* `partitioned-stats`
-* `partitioned-stats-internal`
-* `skip`
-* `clear-backlog`
-* `expire-messages`
-* `expire-messages-all-subscriptions`
-* `peek-messages`
-* `reset-cursor`
-* `get-message-by-id`
-* `last-message-id`
-* `get-backlog-quotas`
-* `set-backlog-quota`
-* `remove-backlog-quota`
-* `get-persistence`
-* `set-persistence`
-* `remove-persistence`
-* `get-message-ttl`
-* `set-message-ttl`
-* `remove-message-ttl`
-* `get-deduplication`
-* `set-deduplication`
-* `remove-deduplication`
-* `get-retention`
-* `set-retention`
-* `remove-retention`
-* `get-dispatch-rate`
-* `set-dispatch-rate`
-* `remove-dispatch-rate`
-* `get-max-unacked-messages-per-subscription`
-* `set-max-unacked-messages-per-subscription`
-* `remove-max-unacked-messages-per-subscription`
-* `get-max-unacked-messages-per-consumer`
-* `set-max-unacked-messages-per-consumer`
-* `remove-max-unacked-messages-per-consumer`
-* `get-delayed-delivery`
-* `set-delayed-delivery`
-* `remove-delayed-delivery`
-* `get-max-producers`
-* `set-max-producers`
-* `remove-max-producers`
-* `get-max-consumers`
-* `set-max-consumers`
-* `remove-max-consumers`
-* `get-compaction-threshold`
-* `set-compaction-threshold`
-* `remove-compaction-threshold`
-* `get-offload-policies`
-* `set-offload-policies`
-* `remove-offload-policies`
-* `get-inactive-topic-policies`
-* `set-inactive-topic-policies`
-* `remove-inactive-topic-policies`
-* `set-max-subscriptions`
-* `get-max-subscriptions`
-* `remove-max-subscriptions`
-
-### `compact`
-Run compaction on the specified topic (persistent topics only)
-
-Usage
-
-```bash
-
-$ pulsar-admin topics compact persistent://tenant/namespace/topic
-
-```
-
-### `compaction-status`
-Check the status of a topic compaction (persistent topics only)
-
-Usage
-
-```bash
-
-$ pulsar-admin topics compaction-status persistent://tenant/namespace/topic
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-w`, `--wait-complete`|Wait for compaction to complete|false|
-
-
-### `offload`
-Trigger offload of data from a topic to long-term storage (e.g. Amazon S3)
-
-Usage
-
-```bash
-
-$ pulsar-admin topics offload persistent://tenant/namespace/topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-s`, `--size-threshold`|The maximum amount of data to keep in BookKeeper for the specific topic||
-
-
-### `offload-status`
-Check the status of data offloading from a topic to long-term storage
-
-Usage
-
-```bash
-
-$ pulsar-admin topics offload-status persistent://tenant/namespace/topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-w`, `--wait-complete`|Wait for offloading to complete|false|
-
-
-### `create-partitioned-topic`
-Create a partitioned topic. A partitioned topic must be created before producers can publish to it.
-
-:::note
-
-By default, after 60 seconds of creation, topics are considered inactive and deleted automatically to prevent generating trash data.
-To disable this feature, set `brokerDeleteInactiveTopicsEnabled` to `false`.
-To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to your desired value.
-For more information about these two parameters, see [here](reference-configuration.md#broker).
-
-:::
-
-Usage
-
-```bash
-
-$ pulsar-admin topics create-partitioned-topic {persistent|non-persistent}://tenant/namespace/topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-p`, `--partitions`|The number of partitions for the topic|0|
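-
-Example
-
-For instance, the following sketch (hypothetical topic name and partition count) creates a four-partition persistent topic:
-
-```bash
-
-$ pulsar-admin topics create-partitioned-topic persistent://my-tenant/my-ns/my-topic \
---partitions 4
-
-```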
-
-### `create-missed-partitions`
-Try to create partitions for a partitioned topic. You can use this command to repair a partitioned topic whose partitions were not created, for example when topic auto-creation is disabled.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics create-missed-partitions persistent://tenant/namespace/topic
-
-```
-
-### `delete-partitioned-topic`
-Delete a partitioned topic. This will also delete all the partitions of the topic if they exist.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics delete-partitioned-topic {persistent|non-persistent}://tenant/namespace/topic
-
-```
-
-### `create`
-Create a non-partitioned topic. A non-partitioned topic must be explicitly created by the user if allowAutoTopicCreation or createIfMissing is disabled.
-
-:::note
-
-By default, after 60 seconds of creation, topics are considered inactive and deleted automatically to prevent generating trash data.
-To disable this feature, set `brokerDeleteInactiveTopicsEnabled` to `false`.
-To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to your desired value.
-For more information about these two parameters, see [here](reference-configuration.md#broker).
-
-:::
-
-Usage
-
-```bash
-
-$ pulsar-admin topics create {persistent|non-persistent}://tenant/namespace/topic
-
-```
-
-### `get-partitioned-topic-metadata`
-Get the partitioned topic metadata. If the topic is not created or is a non-partitioned topic, this returns an empty topic with zero partitions.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics get-partitioned-topic-metadata {persistent|non-persistent}://tenant/namespace/topic
-
-```
-
-### `update-partitioned-topic`
-Update an existing non-global partitioned topic. The new number of partitions must be greater than the existing number of partitions.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics update-partitioned-topic {persistent|non-persistent}://tenant/namespace/topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-p`, `--partitions`|The number of partitions for the topic|0|
-
-### `list-partitioned-topics`
-Get the list of partitioned topics under a namespace.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics list-partitioned-topics tenant/namespace
-
-```
-
-### `list`
-Get the list of topics under a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin topics list tenant/namespace
-
-```
-
-### `terminate`
-Terminate a persistent topic (disallow further messages from being published on the topic)
-
-Usage
-
-```bash
-
-$ pulsar-admin topics terminate persistent://tenant/namespace/topic
-
-```
-
-### `permissions`
-Get the permissions on a topic. Retrieve the effective permissions for a destination. These permissions are defined by the permissions set at the namespace level combined (union) with any eventual specific permissions set on the topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics permissions topic
-
-```
-
-### `grant-permission`
-Grant a new permission to a client role on a single topic
-
-Usage
-
-```bash
-
-$ pulsar-admin topics grant-permission {persistent|non-persistent}://tenant/namespace/topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--actions`|Actions to be granted (`produce` or `consume`)||
-|`--role`|The client role to which to grant the permissions||
-
-
-### `revoke-permission`
-Revoke permissions from a client role on a single topic. If the permission was not set at the topic level, but rather at the namespace level, this operation returns an error (HTTP status code 412).
-
-Usage
-
-```bash
-
-$ pulsar-admin topics revoke-permission topic
-
-```
-
-### `lookup`
-Look up a topic from the current serving broker
-
-Usage
-
-```bash
-
-$ pulsar-admin topics lookup topic
-
-```
-
-### `bundle-range`
-Get the namespace bundle which contains the given topic
-
-Usage
-
-```bash
-
-$ pulsar-admin topics bundle-range topic
-
-```
-
-### `delete`
-Delete a topic. The topic cannot be deleted if there are any active subscriptions or producers connected to the topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics delete topic
-
-```
-
-### `unload`
-Unload a topic
-
-Usage
-
-```bash
-
-$ pulsar-admin topics unload topic
-
-```
-
-### `create-subscription`
-Create a new subscription on a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics create-subscription [options] persistent://tenant/namespace/topic
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-m`, `--messageId`|The messageId where to create the subscription. It can be either 'latest', 'earliest' or (ledgerId:entryId)|latest|
-|`-s`, `--subscription`|The subscription to create||
-
-### `subscriptions`
-Get the list of subscriptions on the topic
-
-Usage
-
-```bash
-
-$ pulsar-admin topics subscriptions topic
-
-```
-
-### `unsubscribe`
-Delete a durable subscriber from a topic
-
-Usage
-
-```bash
-
-$ pulsar-admin topics unsubscribe topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-s`, `--subscription`|The subscription to delete||
-|`-f`, `--force`|Disconnect and close all consumers and delete the subscription forcefully|false|
-
-
-### `stats`
-Get the stats for the topic and its connected producers and consumers. All rates are computed over a 1-minute window and are relative to the last completed 1-minute period.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics stats topic
-
-```
-
-:::note
-
-The unit of `storageSize` and `averageMsgSize` is bytes.
-
-:::
-
-### `stats-internal`
-Get the internal stats for the topic
-
-Usage
-
-```bash
-
-$ pulsar-admin topics stats-internal topic
-
-```
-
-### `info-internal`
-Get the internal metadata info for the topic
-
-Usage
-
-```bash
-
-$ pulsar-admin topics info-internal topic
-
-```
-
-### `partitioned-stats`
-Get the stats for the partitioned topic and its connected producers and consumers. All rates are computed over a 1-minute window and are relative to the last completed 1-minute period.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics partitioned-stats topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--per-partition`|Get per-partition stats|false|
-
-### `partitioned-stats-internal`
-Get the internal stats for the partitioned topic and its connected producers and consumers. All rates are computed over a 1-minute window and are relative to the last completed 1-minute period.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics partitioned-stats-internal topic
-
-```
-
-### `skip`
-Skip some messages for the subscription
-
-Usage
-
-```bash
-
-$ pulsar-admin topics skip topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-n`, `--count`|The number of messages to skip|0|
-|`-s`, `--subscription`|The subscription on which to skip messages||
-
-
-### `clear-backlog`
-Clear the backlog (skip all the messages) for the subscription
-
-Usage
-
-```bash
-
-$ pulsar-admin topics clear-backlog topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-s`, `--subscription`|The subscription to clear||
-
-
-### `expire-messages`
-Expire messages that are older than the given expiry time (in seconds) for the subscription.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics expire-messages topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-t`, `--expireTime`|Expire messages older than the time (in seconds)|0|
-|`-s`, `--subscription`|The subscription to expire messages on||
-
-
-### `expire-messages-all-subscriptions`
-Expire messages older than the given expiry time (in seconds) for all subscriptions
-
-Usage
-
-```bash
-
-$ pulsar-admin topics expire-messages-all-subscriptions topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-t`, `--expireTime`|Expire messages older than the time (in seconds)|0|
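-
-Example
-
-As a sketch (hypothetical topic and subscription names), messages older than two minutes can be expired on one subscription:
-
-```bash
-
-$ pulsar-admin topics expire-messages persistent://my-tenant/my-ns/my-topic \
---subscription my-sub \
---expireTime 120
-
-```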
-
-
-### `peek-messages`
-Peek some messages for the subscription.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics peek-messages topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-n`, `--count`|The number of messages|0|
-|`-s`, `--subscription`|The subscription to get messages from||
-
-
-### `reset-cursor`
-Reset the position for a subscription to the position that is closest to a timestamp or messageId.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics reset-cursor topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-s`, `--subscription`|The subscription to reset the position on||
-|`-t`, `--time`|The time to reset back to, in minutes, hours, days, or weeks. Examples: `100m`, `3h`, `2d`, `5w`.||
-|`-m`, `--messageId`|The messageId to reset back to (ledgerId:entryId).||
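-
-Example
-
-For instance (hypothetical topic and subscription names), a subscription can be rewound by two hours:
-
-```bash
-
-$ pulsar-admin topics reset-cursor persistent://my-tenant/my-ns/my-topic \
---subscription my-sub \
---time 2h
-
-```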
-
-### `get-message-by-id`
-Get a message by ledger id and entry id
-
-Usage
-
-```bash
-
-$ pulsar-admin topics get-message-by-id topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-l`, `--ledgerId`|The ledger id|0|
-|`-e`, `--entryId`|The entry id|0|
-
-### `last-message-id`
-Get the last committed message ID of the topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics last-message-id persistent://tenant/namespace/topic
-
-```
-
-### `get-backlog-quotas`
-Get the backlog quota policies for a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics get-backlog-quotas tenant/namespace/topic
-
-```
-
-### `set-backlog-quota`
-Set a backlog quota policy for a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics set-backlog-quota tenant/namespace/topic options
-
-```
-
-### `remove-backlog-quota`
-Remove a backlog quota policy from a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics remove-backlog-quota tenant/namespace/topic
-
-```
-
-### `get-persistence`
-Get the persistence policies for a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics get-persistence tenant/namespace/topic
-
-```
-
-### `set-persistence`
-Set the persistence policies for a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics set-persistence tenant/namespace/topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-e`, `--bookkeeper-ensemble`|Number of bookies to use for a topic|0|
-|`-w`, `--bookkeeper-write-quorum`|How many writes to make of each entry|0|
-|`-a`, `--bookkeeper-ack-quorum`|Number of acks (guaranteed copies) to wait for each entry|0|
-|`-r`, `--ml-mark-delete-max-rate`|Throttling rate of the mark-delete operation (0 means no throttle)||
-
-### `remove-persistence`
-Remove the persistence policy for a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics remove-persistence tenant/namespace/topic
-
-```
-
-### `get-message-ttl`
-Get the message TTL for a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics get-message-ttl tenant/namespace/topic
-
-```
-
-### `set-message-ttl`
-Set the message TTL for a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics set-message-ttl tenant/namespace/topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-ttl`, `--messageTTL`|Message TTL for a topic in seconds; the allowed range is from 1 to `Integer.MAX_VALUE`|0|
-
-### `remove-message-ttl`
-Remove the message TTL for a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics remove-message-ttl tenant/namespace/topic
-
-```
-
-### `get-deduplication`
-Get the deduplication policy for a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics get-deduplication tenant/namespace/topic
-
-```
-
-### `set-deduplication`
-Set a deduplication policy for a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics set-deduplication tenant/namespace/topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--enable`, `-e`|Enable message deduplication on the specified topic.|false|
-|`--disable`, `-d`|Disable message deduplication on the specified topic.|false|
-
-### `remove-deduplication`
-Remove the deduplication policy for a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics remove-deduplication tenant/namespace/topic
-
-```
-
-## `tenants`
-Operations for managing tenants
-
-Usage
-
-```bash
-
-$ pulsar-admin tenants subcommand
-
-```
-
-Subcommands
-* `list`
-* `get`
-* `create`
-* `update`
-* `delete`
-
-### `list`
-List the existing tenants
-
-Usage
-
-```bash
-
-$ pulsar-admin tenants list
-
-```
-
-### `get`
-Get the configuration of a tenant
-
-Usage
-
-```bash
-
-$ pulsar-admin tenants get tenant-name
-
-```
-
-### `create`
-Create a new tenant
-
-Usage
-
-```bash
-
-$ pulsar-admin tenants create tenant-name options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-r`, `--admin-roles`|Comma-separated admin roles||
-|`-c`, `--allowed-clusters`|Comma-separated allowed clusters||
-
-### `update`
-Update a tenant
-
-Usage
-
-```bash
-
-$ pulsar-admin tenants update tenant-name options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-r`, `--admin-roles`|Comma-separated admin roles||
-|`-c`, `--allowed-clusters`|Comma-separated allowed clusters||
-
-
-### `delete`
-Delete an existing tenant
-
-Usage
-
-```bash
-
-$ pulsar-admin tenants delete tenant-name
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-f`, `--force`|Delete a tenant forcefully by deleting all namespaces under it.|false|
-
-
-## `resource-quotas`
-Operations for managing resource quotas
-
-Usage
-
-```bash
-
-$ pulsar-admin resource-quotas subcommand
-
-```
-
-Subcommands
-* `get`
-* `set`
-* `reset-namespace-bundle-quota`
-
-
-### `get`
-Get the resource quota for a specified namespace bundle, or the default quota if no namespace/bundle is specified.
-
-Usage
-
-```bash
-
-$ pulsar-admin resource-quotas get options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-b`, `--bundle`|A bundle of the form {start-boundary}_{end_boundary}. This must be specified together with -n/--namespace.||
-|`-n`, `--namespace`|The namespace||
-
-
-### `set`
-Set the resource quota for the specified namespace bundle, or the default quota if no namespace/bundle is specified.
-
-Usage
-
-```bash
-
-$ pulsar-admin resource-quotas set options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-bi`, `--bandwidthIn`|The expected inbound bandwidth (in bytes/second)|0|
-|`-bo`, `--bandwidthOut`|Expected outbound bandwidth (in bytes/second)|0|
-|`-b`, `--bundle`|A bundle of the form {start-boundary}_{end_boundary}. This must be specified together with -n/--namespace.||
-|`-d`, `--dynamic`|Whether the quota is allowed to be dynamically re-calculated|false|
-|`-mem`, `--memory`|Expected memory usage (in megabytes)|0|
-|`-mi`, `--msgRateIn`|Expected incoming messages per second|0|
-|`-mo`, `--msgRateOut`|Expected outgoing messages per second|0|
-|`-n`, `--namespace`|The namespace as tenant/namespace, for example my-tenant/my-ns. Must be specified together with -b/--bundle.||
-
-
-### `reset-namespace-bundle-quota`
-Reset the specified namespace bundle's resource quota to the default value.
-
-Usage
-
-```bash
-
-$ pulsar-admin resource-quotas reset-namespace-bundle-quota options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-b`, `--bundle`|A bundle of the form {start-boundary}_{end_boundary}. This must be specified together with -n/--namespace.||
-|`-n`, `--namespace`|The namespace||
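-
-Example
-
-A sketch of setting a quota on a single bundle follows; the namespace, bundle range, and quota values are hypothetical placeholders chosen only to illustrate the documented flags:
-
-```bash
-
-$ pulsar-admin resource-quotas set \
---namespace my-tenant/my-ns \
---bundle 0x00000000_0xffffffff \
---msgRateIn 1000 \
---msgRateOut 2000 \
---bandwidthIn 1048576 \
---bandwidthOut 2097152 \
---memory 64
-
-```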
-
-
-## `schemas`
-Operations related to Schemas associated with Pulsar topics.
-
-Usage
-
-```bash
-
-$ pulsar-admin schemas subcommand
-
-```
-
-Subcommands
-* `upload`
-* `delete`
-* `get`
-* `extract`
-
-
-### `upload`
-Upload the schema definition for a topic
-
-Usage
-
-```bash
-
-$ pulsar-admin schemas upload persistent://tenant/namespace/topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`--filename`|The path to the schema definition file. An example schema file is available under the conf directory.||
-
-
-### `delete`
-Delete the schema definition associated with a topic
-
-Usage
-
-```bash
-
-$ pulsar-admin schemas delete persistent://tenant/namespace/topic
-
-```
-
-### `get`
-Retrieve the schema definition associated with a topic (at a given version if a version is supplied).
-
-Usage
-
-```bash
-
-$ pulsar-admin schemas get persistent://tenant/namespace/topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`--version`|The version of the schema definition to retrieve for a topic.||
-
-### `extract`
-Provide the schema definition for a topic via a Java class name contained in a JAR file
-
-Usage
-
-```bash
-
-$ pulsar-admin schemas extract persistent://tenant/namespace/topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-c`, `--classname`|The Java class name||
-|`-j`, `--jar`|A path to the JAR file which contains the above Java class||
-|`-t`, `--type`|The type of the schema (avro or json)||
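-
-Example
-
-As a sketch (the topic, JAR path, and class name below are hypothetical), an Avro schema can be extracted from a compiled class:
-
-```bash
-
-$ pulsar-admin schemas extract persistent://my-tenant/my-ns/my-topic \
---jar /path/to/my-schema.jar \
---classname org.example.MyRecord \
---type avro
-
-```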
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/reference-terminology.md b/site2/website/versioned_docs/version-2.8.3-deprecated/reference-terminology.md
deleted file mode 100644
index e5099141c3231e..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/reference-terminology.md
+++ /dev/null
@@ -1,176 +0,0 @@
---
id: reference-terminology
title: Pulsar Terminology
sidebar_label: "Terminology"
original_id: reference-terminology
---

Here is a glossary of terms related to Apache Pulsar:

### Concepts

#### Pulsar

Pulsar is a distributed messaging system originally created by Yahoo but now under the stewardship of the Apache Software Foundation.

#### Message

Messages are the basic unit of Pulsar. They're what [producers](#producer) publish to [topics](#topic)
and what [consumers](#consumer) then consume from topics.

#### Topic

A named channel used to pass messages published by [producers](#producer) to [consumers](#consumer) who
process those [messages](#message).

#### Partitioned Topic

A topic that is served by multiple Pulsar [brokers](#broker), which enables higher throughput.

#### Namespace

A grouping mechanism for related [topics](#topic).

#### Namespace Bundle

A virtual group of [topics](#topic) that belong to the same [namespace](#namespace). A namespace bundle
is defined as a range between two 32-bit hashes, such as 0x00000000 and 0xffffffff.

#### Tenant

An administrative unit for allocating capacity and enforcing an authentication/authorization scheme.

#### Subscription

A lease on a [topic](#topic) established by a group of [consumers](#consumer). Pulsar has four subscription
modes (exclusive, shared, failover, and key_shared).

#### Pub-Sub

A messaging pattern in which [producer](#producer) processes publish messages on [topics](#topic) that
are then consumed (processed) by [consumer](#consumer) processes.

#### Producer

A process that publishes [messages](#message) to a Pulsar [topic](#topic).

#### Consumer

A process that establishes a subscription to a Pulsar [topic](#topic) and processes messages published
to that topic by [producers](#producer).

#### Reader

Pulsar readers are message processors much like Pulsar [consumers](#consumer) but with two crucial differences:

- you can specify *where* on a topic readers begin processing messages (consumers always begin with the latest
  available unacked message);
- readers don't retain data or acknowledge messages.

#### Cursor

The subscription position for a [consumer](#consumer).

#### Acknowledgment (ack)

A signal sent to a Pulsar broker by a [consumer](#consumer) to indicate that a message has been successfully processed.
An acknowledgment (ack) is Pulsar's way of knowing that the message can be deleted from the system;
if no acknowledgment is received, the message is retained until it is processed.

#### Negative Acknowledgment (nack)

When an application fails to process a particular message, it can send a "negative ack" to Pulsar
to signal that the message should be replayed at a later time. (By default, failed messages are
replayed after a 1 minute delay.) Be aware that negative acknowledgment on ordered subscription types,
such as Exclusive, Failover and Key_Shared, can cause failed messages to arrive at consumers out of the original order.
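As a hedged Java sketch of the ack/nack flow described above (the topic, subscription, and processing logic are hypothetical, and the redelivery delay is set explicitly only to make the documented one-minute default visible):

```java
Consumer<String> consumer = client.newConsumer(Schema.STRING)
        .topic("my-topic")                                 // hypothetical topic
        .subscriptionName("my-sub")                        // hypothetical subscription
        .negativeAckRedeliveryDelay(1, TimeUnit.MINUTES)   // matches the documented default
        .subscribe();

Message<String> msg = consumer.receive();
try {
    process(msg.getValue());           // hypothetical application logic
    consumer.acknowledge(msg);         // ack: the message can be deleted
} catch (Exception e) {
    consumer.negativeAcknowledge(msg); // nack: ask the broker to replay the message later
}
```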
#### Unacknowledged

A message that has been delivered to a consumer for processing but not yet confirmed as processed by the consumer.

#### Retention Policy

Size and time limits that you can set on a [namespace](#namespace) to configure retention of [messages](#message)
that have already been [acknowledged](#acknowledgment-ack).

#### Multi-Tenancy

The ability to isolate [namespaces](#namespace), specify quotas, and configure authentication and authorization
on a per-[tenant](#tenant) basis.

#### Failure Domain

A logical domain under a Pulsar cluster. Each logical domain contains a pre-configured list of brokers.

#### Anti-affinity Namespaces

A group of namespaces that have anti-affinity to each other.

### Architecture

#### Standalone

A lightweight Pulsar broker in which all components run in a single Java Virtual Machine (JVM) process. Standalone
clusters can be run on a single machine and are useful for development purposes.

#### Cluster

A set of Pulsar [brokers](#broker) and [BookKeeper](#bookkeeper) servers (aka [bookies](#bookie)).
Clusters can reside in different geographical regions and replicate messages to one another
in a process called [geo-replication](#geo-replication).

#### Instance

A group of Pulsar [clusters](#cluster) that act together as a single unit.

#### Geo-Replication

Replication of messages across Pulsar [clusters](#cluster), potentially in different datacenters
or geographical regions.

#### Configuration Store

Pulsar's configuration store (previously known as the global ZooKeeper) is a ZooKeeper quorum that
is used for configuration-specific tasks. A multi-cluster Pulsar installation requires just one
configuration store across all [clusters](#cluster).

#### Topic Lookup

A service provided by Pulsar [brokers](#broker) that enables connecting clients to automatically determine
which Pulsar [cluster](#cluster) is responsible for a [topic](#topic) (and thus where message traffic for
the topic needs to be routed).

#### Service Discovery

A mechanism provided by Pulsar that enables connecting clients to use just a single URL to interact
with all the [brokers](#broker) in a [cluster](#cluster).

#### Broker

A stateless component of Pulsar [clusters](#cluster) that runs two other components: an HTTP server
exposing a REST interface for administration and topic lookup, and a [dispatcher](#dispatcher) that
handles all message transfers. Pulsar clusters typically consist of multiple brokers.

#### Dispatcher

An asynchronous TCP server used for all data transfers in and out of a Pulsar [broker](#broker). The Pulsar
dispatcher uses a custom binary protocol for all communications.

### Storage

#### BookKeeper

[Apache BookKeeper](http://bookkeeper.apache.org/) is a scalable, low-latency persistent log storage
service that Pulsar uses to store data.

#### Bookie

Bookie is the name of an individual BookKeeper server. It is effectively the storage server of Pulsar.

#### Ledger

An append-only data structure in [BookKeeper](#bookkeeper) that is used to persistently store messages in Pulsar [topics](#topic).

### Functions

Pulsar Functions are lightweight functions that can consume messages from Pulsar topics, apply custom processing logic, and, if desired, publish results to topics.
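To make the Functions entry concrete, here is a minimal, hedged sketch of a stateless function; Pulsar accepts plain `java.util.function.Function` implementations for simple cases like this one, and the class name is hypothetical:

```java
import java.util.function.Function;

// A minimal stateless Pulsar Function: it consumes a string from an input topic,
// applies trivial processing logic, and publishes the result to an output topic.
public class EchoUppercaseFunction implements Function<String, String> {
    @Override
    public String apply(String input) {
        return input.toUpperCase();
    }
}
```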
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/schema-evolution-compatibility.md b/site2/website/versioned_docs/version-2.8.3-deprecated/schema-evolution-compatibility.md
deleted file mode 100644
index d36eb7ad8a6131..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/schema-evolution-compatibility.md
+++ /dev/null
@@ -1,207 +0,0 @@
---
id: schema-evolution-compatibility
title: Schema evolution and compatibility
sidebar_label: "Schema evolution and compatibility"
original_id: schema-evolution-compatibility
---

Normally, schemas do not stay the same over a long period of time. Instead, they undergo evolutions to satisfy new needs.

This chapter examines how Pulsar schema evolves and what Pulsar schema compatibility check strategies are.

## Schema evolution

Pulsar schema is defined in a data structure called `SchemaInfo`.

Each `SchemaInfo` stored with a topic has a version. The version is used to manage the schema changes happening within a topic.

A message produced with a given `SchemaInfo` is tagged with the schema version. When a message is consumed by a Pulsar client, the client can use the schema version to retrieve the corresponding `SchemaInfo` and use the correct schema information to deserialize data.

### What is schema evolution?

Schemas store the details of attributes and types. To satisfy new business requirements, you inevitably need to update schemas over time, which is called **schema evolution**.

Any schema change affects downstream consumers. Schema evolution ensures that downstream consumers can seamlessly handle data encoded with both old schemas and new schemas.

### How should Pulsar schema evolve?

The answer is the Pulsar schema compatibility check strategy. It determines how a broker compares new schemas with the existing schemas of a topic.

For more information, see [Schema compatibility check strategy](#schema-compatibility-check-strategy).

### How does Pulsar support schema evolution?

1. When a producer/consumer/reader connects to a broker, the broker deploys the schema compatibility checker configured by `schemaRegistryCompatibilityCheckers` to enforce the schema compatibility check.

   The schema compatibility checker is one instance per schema type.

   Currently, Avro and JSON have their own compatibility checkers, while all the other schema types share the default compatibility checker, which disables schema evolution.

2. The producer/consumer/reader sends its client `SchemaInfo` to the broker.

3. The broker knows the schema type and locates the schema compatibility checker for that type.

4. The broker uses the checker to check if the `SchemaInfo` is compatible with the latest schema of the topic by applying its compatibility check strategy.

   Currently, the compatibility check strategy is configured at the namespace level and applied to all the topics within that namespace.

## Schema compatibility check strategy

Pulsar has 8 schema compatibility check strategies, which are summarized in the following table.

Suppose that you have a topic containing three schemas (V1, V2, and V3). V1 is the oldest and V3 is the latest:

| Compatibility check strategy | Definition | Changes allowed | Check against which schema | Upgrade first |
| --- | --- | --- | --- | --- |
| `ALWAYS_COMPATIBLE` | Disable schema compatibility check. | All changes are allowed | All previous versions | Any order |
| `ALWAYS_INCOMPATIBLE` | Disable schema evolution. | All changes are disabled | None | None |
| `BACKWARD` | Consumers using the schema V3 can process data written by producers using the schema V3 or V2. | Add optional fields, delete fields | Latest version | Consumers |
| `BACKWARD_TRANSITIVE` | Consumers using the schema V3 can process data written by producers using the schema V3, V2 or V1. | Add optional fields, delete fields | All previous versions | Consumers |
| `FORWARD` | Consumers using the schema V3 or V2 can process data written by producers using the schema V3. | Add fields, delete optional fields | Latest version | Producers |
| `FORWARD_TRANSITIVE` | Consumers using the schema V3, V2 or V1 can process data written by producers using the schema V3. | Add fields, delete optional fields | All previous versions | Producers |
| `FULL` | Backward and forward compatible between the schema V3 and V2. | Modify optional fields | Latest version | Any order |
| `FULL_TRANSITIVE` | Backward and forward compatible among the schema V3, V2, and V1. | Modify optional fields | All previous versions | Any order |

### ALWAYS_COMPATIBLE and ALWAYS_INCOMPATIBLE

| Compatibility check strategy | Definition | Note |
| --- | --- | --- |
| `ALWAYS_COMPATIBLE` | Disable schema compatibility check. | None |
| `ALWAYS_INCOMPATIBLE` | Disable schema evolution, that is, any schema change is rejected. | For all schema types except Avro and JSON, the default schema compatibility check strategy is `ALWAYS_INCOMPATIBLE`. For Avro and JSON, the default schema compatibility check strategy is `FULL`. |

#### Example

* Example 1

  In some situations, an application needs to store events of several different types in the same Pulsar topic.

  In particular, when developing a data model in an `Event Sourcing` style, you might have several kinds of events that affect the state of an entity.

  For example, for a user entity, there are `userCreated`, `userAddressChanged` and `userEnquiryReceived` events. The application requires that those events are always read in the same order.

  Consequently, those events need to go in the same Pulsar partition to maintain order. This application can use `ALWAYS_COMPATIBLE` to allow different kinds of events to co-exist in the same topic.

* Example 2

  Sometimes we also make incompatible changes.

  For example, you are modifying a field type from `string` to `int`.

  In this case, you need to:

  * Upgrade all producers and consumers to the new schema versions at the same time.

  * Optionally, create a new topic and start migrating applications to use the new topic and the new schema, avoiding the need to handle two incompatible versions in the same topic.

### BACKWARD and BACKWARD_TRANSITIVE

Suppose that you have a topic containing three schemas (V1, V2, and V3). V1 is the oldest and V3 is the latest:

| Compatibility check strategy | Definition | Description |
|---|---|---|
`BACKWARD` | Consumers using the new schema can process data written by producers using the **last schema**. | The consumers using the schema V3 can process data written by producers using the schema V3 or V2. |
`BACKWARD_TRANSITIVE` | Consumers using the new schema can process data written by producers using **all previous schemas**. | The consumers using the schema V3 can process data written by producers using the schema V3, V2, or V1. |

#### Example

* Example 1

  Remove a field.

  A consumer constructed to process events without one field can process events written with the old schema containing the field; the consumer simply ignores that field.

* Example 2

  You want to load all Pulsar data into a Hive data warehouse and run SQL queries against the data.

  The same SQL queries must continue to work even if the data is changed. To support this, you can evolve the schemas using the `BACKWARD` strategy.

### FORWARD and FORWARD_TRANSITIVE

Suppose that you have a topic containing three schemas (V1, V2, and V3). V1 is the oldest and V3 is the latest:

| Compatibility check strategy | Definition | Description |
|---|---|---|
`FORWARD` | Consumers using the **last schema** can process data written by producers using a new schema, even though they may not be able to use the full capabilities of the new schema. | The consumers using the schema V3 or V2 can process data written by producers using the schema V3. |
`FORWARD_TRANSITIVE` | Consumers using **all previous schemas** can process data written by producers using a new schema. | The consumers using the schema V3, V2, or V1 can process data written by producers using the schema V3. |

#### Example

* Example 1

  Add a field.

  In most data formats, consumers written to process events without new fields can continue doing so even when they receive new events containing new fields.

* Example 2

  If a consumer has application logic tied to a full version of a schema, the application logic may not be updated instantly when the schema evolves.

  In this case, you need to project data with a new schema onto an old schema that the application understands. Consequently, you can evolve the schemas using the `FORWARD` strategy to ensure that the old schema can process data encoded with the new schema.
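As a hedged, concrete illustration of such a forward-compatible change: suppose a hypothetical V1 Avro record `SensorReading` has only `id` and `value` fields, and V2 (shown below) adds a `unit` field. A consumer still reading with V1 simply ignores the new field in data written with V2. The record and field names are invented for this example:

```json
{
  "type": "record",
  "name": "SensorReading",
  "fields": [
    {"name": "id", "type": "string"},
    {"name": "value", "type": "double"},
    {"name": "unit", "type": "string", "default": "celsius"}
  ]
}
```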
### FULL and FULL_TRANSITIVE

Suppose that you have a topic containing three schemas (V1, V2, and V3). V1 is the oldest and V3 is the latest:

| Compatibility check strategy | Definition | Description | Note |
| --- | --- | --- | --- |
| `FULL` | Schemas are both backward and forward compatible, which means: consumers using the last schema can process data written by producers using the new schema, AND consumers using the new schema can process data written by producers using the last schema. | Consumers using the schema V3 can process data written by producers using the schema V3 or V2, AND consumers using the schema V3 or V2 can process data written by producers using the schema V3. | For Avro and JSON, the default schema compatibility check strategy is `FULL`. For all schema types except Avro and JSON, the default schema compatibility check strategy is `ALWAYS_INCOMPATIBLE`. |
| `FULL_TRANSITIVE` | The new schema is backward and forward compatible with all previously registered schemas. | Consumers using the schema V3 can process data written by producers using the schema V3, V2 or V1, AND consumers using the schema V3, V2 or V1 can process data written by producers using the schema V3. | None |

#### Example

In some data formats, for example, Avro, you can define fields with default values. Consequently, adding or removing a field with a default value is a fully compatible change.

:::tip

You can set the schema compatibility check strategy at the namespace or broker level. For how to set the strategy, see [here](schema-manage.md/#set-schema-compatibility-check-strategy).

:::

## Schema verification

When a producer or a consumer tries to connect to a topic, a broker performs some checks to verify the schema.

### Producer

When a producer tries to connect to a topic (assuming that schema auto-creation is not triggered), the broker does the following checks:

* Check if the schema carried by the producer exists in the schema registry or not.

  * If the schema is already registered, the producer is connected to the broker and produces messages with that schema.

  * If the schema is not registered, Pulsar verifies whether the schema is allowed to be registered based on the configured compatibility check strategy.

### Consumer
When a consumer tries to connect to a topic, a broker checks if the carried schema is compatible with a registered schema based on the configured schema compatibility check strategy.

| Compatibility check strategy | Check logic |
| --- | --- |
| `ALWAYS_COMPATIBLE` | All pass |
| `ALWAYS_INCOMPATIBLE` | No pass |
| `BACKWARD` | Can read the last schema |
| `BACKWARD_TRANSITIVE` | Can read all schemas |
| `FORWARD` | Can read the last schema |
| `FORWARD_TRANSITIVE` | Can read the last schema |
| `FULL` | Can read the last schema |
| `FULL_TRANSITIVE` | Can read all schemas |

## Order of upgrading clients

The order of upgrading client applications is determined by the compatibility check strategy.

For example, the producers use schemas to write data to Pulsar and the consumers use schemas to read data from Pulsar.

| Compatibility check strategy | Upgrade first | Description |
| --- | --- | --- |
| `ALWAYS_COMPATIBLE` | Any order | The compatibility check is disabled. Consequently, you can upgrade the producers and consumers in **any order**. |
| `ALWAYS_INCOMPATIBLE` | None | The schema evolution is disabled. |
| `BACKWARD`, `BACKWARD_TRANSITIVE` | Consumers | There is no guarantee that consumers using the old schema can read data produced using the new schema. Consequently, **upgrade all consumers first**, and then start producing new data. |
| `FORWARD`, `FORWARD_TRANSITIVE` | Producers | There is no guarantee that consumers using the new schema can read data produced using the old schema. Consequently, **upgrade all producers first** to use the new schema, ensure that the data already produced using the old schemas is not available to consumers, and then upgrade the consumers. |
| `FULL`, `FULL_TRANSITIVE` | Any order | It is guaranteed that consumers using the old schema can read data produced using the new schema, and that consumers using the new schema can read data produced using the old schema. Consequently, you can upgrade the producers and consumers in **any order**. |




diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/schema-get-started.md b/site2/website/versioned_docs/version-2.8.3-deprecated/schema-get-started.md
deleted file mode 100644
index afacb0fa51f2ef..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/schema-get-started.md
+++ /dev/null
@@ -1,102 +0,0 @@
---
id: schema-get-started
title: Get started
sidebar_label: "Get started"
original_id: schema-get-started
---

This chapter introduces Pulsar schemas and explains why they are important.

## Schema Registry

Type safety is extremely important in any application built around a message bus like Pulsar.

Producers and consumers need some kind of mechanism for coordinating types at the topic level to avoid various potential problems, such as serialization and deserialization issues.

Applications typically adopt one of the following approaches to guarantee type safety in messaging. Both approaches are available in Pulsar, and you're free to adopt one or the other or to mix and match on a per-topic basis.

#### Note
>
> Currently, the Pulsar schema registry is only available for the [Java client](client-libraries-java.md), [CGo client](client-libraries-cgo.md), [Python client](client-libraries-python.md), and [C++ client](client-libraries-cpp.md).

### Client-side approach

Producers and consumers are responsible for not only serializing and deserializing messages (which consist of raw bytes) but also "knowing" which types are being transmitted via which topics.

If a producer is sending temperature sensor data on the topic `topic-1`, consumers of that topic will run into trouble if they attempt to parse that data as moisture sensor readings.

Producers and consumers can send and receive messages consisting of raw byte arrays and leave all type safety enforcement to the application on an "out-of-band" basis.

### Server-side approach

Producers and consumers inform the system which data types can be transmitted via the topic.

With this approach, the messaging system enforces type safety and ensures that producers and consumers remain synced.

Pulsar has a built-in **schema registry** that enables clients to upload data schemas on a per-topic basis. Those schemas dictate which data types are recognized as valid for that topic.

## Why use schema

Without a schema, Pulsar does not parse data: it takes bytes as inputs and sends bytes as outputs. While data has meaning beyond bytes, you need to parse the data yourself and might encounter parse exceptions, which mainly occur in the following situations:

* The field does not exist

* The field type has changed (for example, `string` is changed to `int`)

There are a few ways to prevent and overcome these exceptions. For example, you can catch exceptions when parsing errors occur, which makes code hard to maintain; or you can adopt a schema management system that performs schema evolution without breaking downstream applications and enforces type safety to the maximum extent in the language you are using. That solution is Pulsar schema; a sketch after this paragraph illustrates the kind of defensive parsing code that a schema makes unnecessary.
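The following is a minimal, hedged sketch of that anti-pattern: hand-rolled parsing with defensive catch blocks. It assumes an existing `PulsarClient client` and the usual `java.nio.charset.StandardCharsets` import; the topic, subscription, and CSV layout are all hypothetical:

```java
// Without a schema, every consumer re-implements parsing and error handling.
// Assumes messages were produced as "name,age" CSV strings (hypothetical format).
Consumer<byte[]> consumer = client.newConsumer()
        .topic("user-topic")            // hypothetical topic
        .subscriptionName("raw-sub")    // hypothetical subscription
        .subscribe();

Message<byte[]> msg = consumer.receive();
try {
    String[] fields = new String(msg.getData(), StandardCharsets.UTF_8).split(",");
    String name = fields[0];
    int age = Integer.parseInt(fields[1]); // fails if the field type changed upstream
    consumer.acknowledge(msg);
} catch (ArrayIndexOutOfBoundsException | NumberFormatException e) {
    consumer.negativeAcknowledge(msg);     // replay and hope a newer consumer copes
}
```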
Pulsar schema enables you to use language-specific types of data when constructing and handling messages, from simple types like `string` to more complex application-specific types.

**Example**

You can use the _User_ class to define the messages sent to Pulsar topics.

```java

public class User {
    String name;
    int age;
}

```

When constructing a producer with the _User_ class, you can specify a schema or not as below.

### Without schema

If you construct a producer without specifying a schema, then the producer can only produce messages of type `byte[]`. If you have a POJO class, you need to serialize the POJO into bytes before sending messages.

**Example**

```java

Producer<byte[]> producer = client.newProducer()
        .topic(topic)
        .create();
User user = new User("Tom", 28);
byte[] message = … // serialize the `user` by yourself;
producer.send(message);

```

### With schema

If you construct a producer by specifying a schema, then you can send a class to a topic directly without worrying about how to serialize POJOs into bytes.

**Example**

This example constructs a producer with the _JSONSchema_, and you can send the _User_ class to topics directly without worrying about how to serialize it into bytes.

```java

Producer<User> producer = client.newProducer(JSONSchema.of(User.class))
        .topic(topic)
        .create();
User user = new User("Tom", 28);
producer.send(user);

```

### Summary

When constructing a producer with a schema, you do not need to serialize messages into bytes; instead, Pulsar schema does this job in the background.
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/schema-manage.md b/site2/website/versioned_docs/version-2.8.3-deprecated/schema-manage.md
deleted file mode 100644
index 4607c3b5dcae1c..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/schema-manage.md
+++ /dev/null
@@ -1,704 +0,0 @@
---
id: schema-manage
title: Manage schema
sidebar_label: "Manage schema"
original_id: schema-manage
---

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````


This guide demonstrates the ways to manage schemas:

* Automatically

  * [Schema AutoUpdate](#schema-autoupdate)

* Manually

  * [Schema manual management](#schema-manual-management)

  * [Custom schema storage](#custom-schema-storage)

## Schema AutoUpdate

If a schema passes the schema compatibility check, a Pulsar producer automatically registers this schema to the topic it produces to by default.

### AutoUpdate for producer

For a producer, the `AutoUpdate` happens in the following cases:

* If a **topic doesn't have a schema**, Pulsar registers a schema automatically.

* If a **topic has a schema**:

  * If a **producer doesn't carry a schema**:

    * If `isSchemaValidationEnforced` or `schemaValidationEnforced` is **disabled** in the namespace to which the topic belongs, the producer is allowed to connect to the topic and produce data.

    * If `isSchemaValidationEnforced` or `schemaValidationEnforced` is **enabled** in the namespace to which the topic belongs, the producer is rejected and disconnected.

  * If a **producer carries a schema**:

    A broker performs the compatibility check based on the configured compatibility check strategy of the namespace to which the topic belongs.

    * If the schema is registered, the producer is connected to the broker.
    * If the schema is not registered:

      * If `isAllowAutoUpdateSchema` is set to **false**, the producer is rejected from connecting to the broker.

      * If `isAllowAutoUpdateSchema` is set to **true**:

        * If the schema passes the compatibility check, the broker registers a new schema automatically for the topic and the producer is connected.

        * If the schema does not pass the compatibility check, the broker does not register a schema and the producer is rejected from connecting to the broker.

![AutoUpdate Producer](/assets/schema-producer.png)

### AutoUpdate for consumer

For a consumer, the `AutoUpdate` happens in the following cases:

* If a **consumer connects to a topic without a schema** (which means the consumer receives raw bytes), the consumer can connect to the topic successfully without doing any compatibility check.

* If a **consumer connects to a topic with a schema**:

  * If the topic has none of the following (a schema, data, a local consumer, or a local producer):

    * If `isAllowAutoUpdateSchema` is set to **true**, the consumer registers a schema and is connected to the broker.

    * If `isAllowAutoUpdateSchema` is set to **false**, the consumer is rejected from connecting to the broker.

  * If the topic has at least one of them (a schema, data, a local consumer, or a local producer), the schema compatibility check is performed.

    * If the schema passes the compatibility check, the consumer is connected to the broker.

    * If the schema does not pass the compatibility check, the consumer is rejected from connecting to the broker.

![AutoUpdate Consumer](/assets/schema-consumer.png)


### Manage AutoUpdate strategy

You can use the `pulsar-admin` command to manage the `AutoUpdate` strategy as below:

* [Enable AutoUpdate](#enable-autoupdate)

* [Disable AutoUpdate](#disable-autoupdate)

* [Adjust compatibility](#adjust-compatibility)

#### Enable AutoUpdate

To enable `AutoUpdate` on a namespace, you can use the `pulsar-admin` command.

```bash

bin/pulsar-admin namespaces set-is-allow-auto-update-schema --enable tenant/namespace

```

#### Disable AutoUpdate

To disable `AutoUpdate` on a namespace, you can use the `pulsar-admin` command.

```bash

bin/pulsar-admin namespaces set-is-allow-auto-update-schema --disable tenant/namespace

```

Once `AutoUpdate` is disabled, you can only register a new schema using the `pulsar-admin` command.

#### Adjust compatibility

To adjust the schema compatibility level on a namespace, you can use the `pulsar-admin` command.

```bash

bin/pulsar-admin namespaces set-schema-compatibility-strategy --compatibility tenant/namespace

```

### Schema validation

By default, `schemaValidationEnforced` is **disabled** for producers:

* This means a producer without a schema can produce any kind of messages to a topic with schemas, which may result in producing trash data to the topic.

* This allows clients in non-Java languages that don't support schema to produce messages to a topic with schemas.

However, if you want a stronger guarantee on the topics with schemas, you can enable `schemaValidationEnforced` across the whole cluster or on a per-namespace basis.

#### Enable schema validation

To enable `schemaValidationEnforced` on a namespace, you can use the `pulsar-admin` command.
```bash

bin/pulsar-admin namespaces set-schema-validation-enforce --enable tenant/namespace

```

#### Disable schema validation

To disable `schemaValidationEnforced` on a namespace, you can use the `pulsar-admin` command.

```bash

bin/pulsar-admin namespaces set-schema-validation-enforce --disable tenant/namespace

```

## Schema manual management

To manage schemas, you can use one of the following methods.

| Method | Description |
| --- | --- |
| **Admin CLI** | You can use the `pulsar-admin` tool to manage Pulsar schemas, brokers, clusters, sources, sinks, topics, tenants and so on. For more information about how to use the `pulsar-admin` tool, see [here](reference-pulsar-admin.md). |
| **REST API** | Pulsar exposes schema-related management APIs in Pulsar's admin RESTful API. You can access the admin RESTful endpoint directly to manage schemas. For more information about how to use the Pulsar REST API, see [here](http://pulsar.apache.org/admin-rest-api/). |
| **Java Admin API** | Pulsar provides a Java admin library. |

### Upload a schema

To upload (register) a new schema for a topic, you can use one of the following methods.

````mdx-code-block


Use the `upload` subcommand.

```bash

$ pulsar-admin schemas upload <topic-name> --filename <schema-definition-file>

```

The `schema-definition-file` is in JSON format.

```json

{
    "type": "<schema-type>",
    "schema": "<an-utf8-encoded-string-of-schema-definition-data>",
    "properties": {} // the properties associated with the schema
}

```

The `schema-definition-file` includes the following fields:

| Field | Description |
| --- | --- |
| `type` | The schema type. |
| `schema` | The schema definition data, which is encoded in the UTF-8 charset. If the schema is a **primitive** schema, this field should be blank. If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition. |
| `properties` | The additional properties associated with the schema. |
Here are examples of the `schema-definition-file` for a JSON schema.

**Example 1**

```json

{
    "type": "JSON",
    "schema": "{\"type\":\"record\",\"name\":\"User\",\"namespace\":\"com.foo\",\"fields\":[{\"name\":\"file1\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"file2\",\"type\":\"string\",\"default\":null},{\"name\":\"file3\",\"type\":[\"null\",\"string\"],\"default\":\"dfdf\"}]}",
    "properties": {}
}

```

**Example 2**

```json

{
    "type": "STRING",
    "schema": "",
    "properties": {
        "key1": "value1"
    }
}

```
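For instance, assuming Example 1 above is saved as `schema.json`, an upload for a hypothetical topic could look like this:

```bash
# Upload the JSON schema definition for a (hypothetical) topic
$ pulsar-admin schemas upload persistent://my-tenant/my-ns/my-topic --filename schema.json
```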
    - - -Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v2/schemas/:tenant/:namespace/:topic/schema|operation/uploadSchema?version=@pulsar:version_number@} - -The post payload is in JSON format. - -```json - -{ - "type": "", - "schema": "", - "properties": {} // the properties associated with the schema -} - -``` - -The post payload includes the following fields: - -| Field | Description | -| --- | --- | -| `type` | The schema type. | -| `schema` | The schema definition data, which is encoded in UTF 8 charset.
  1412. If the schema is a
  1413. **primitive**
  1414. schema, this field should be blank.
  1415. If the schema is a
  1416. **struct**
  1417. schema, this field should be a JSON string of the Avro schema definition.
  1418. | -| `properties` | The additional properties associated with the schema. | - -
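As a hedged sketch of calling this endpoint directly, with hypothetical host, port, topic, and payload values (add an `Authorization` header if your cluster requires authentication):

```bash
# POST a STRING schema to a (hypothetical) topic via the admin REST API
curl -X POST http://localhost:8080/admin/v2/schemas/my-tenant/my-ns/my-topic/schema \
  -H 'Content-Type: application/json' \
  -d '{"type": "STRING", "schema": "", "properties": {}}'
```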
    - - -```java - -void createSchema(String topic, PostSchemaPayload schemaPayload) - -``` - -The `PostSchemaPayload` includes the following fields: - -| Field | Description | -| --- | --- | -| `type` | The schema type. | -| `schema` | The schema definition data, which is encoded in UTF 8 charset.
  1419. If the schema is a
  1420. **primitive**
  1421. schema, this field should be blank.
  1422. If the schema is a
  1423. **struct**
  1424. schema, this field should be a JSON string of the Avro schema definition.
  1425. | -| `properties` | The additional properties associated with the schema. | - -Here is an example of `PostSchemaPayload`: - -```java - -PulsarAdmin admin = …; - -PostSchemaPayload payload = new PostSchemaPayload(); -payload.setType("INT8"); -payload.setSchema(""); - -admin.createSchema("my-tenant/my-ns/my-topic", payload); - -``` - -
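As a hedged follow-up, you can read the schema back to confirm the upload. This sketch uses the `schemas()` accessor of `PulsarAdmin` and assumes the same admin client and topic name as above:

```java
// Verify the upload by fetching the schema info back from the registry
SchemaInfo info = admin.schemas().getSchemaInfo("my-tenant/my-ns/my-topic");
System.out.println(info.getType()); // prints the registered schema type
```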
    - -
    -```` - -### Get a schema (latest) - -To get the latest schema for a topic, you can use one of the following methods. - -````mdx-code-block - - - - -Use the `get` subcommand. - -```bash - -$ pulsar-admin schemas get - -{ - "version": 0, - "type": "String", - "timestamp": 0, - "data": "string", - "properties": { - "property1": "string", - "property2": "string" - } -} - -``` - - - - -Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v2/schemas/:tenant/:namespace/:topic/schema|operation/getSchema?version=@pulsar:version_number@} - -Here is an example of a response, which is returned in JSON format. - -```json - -{ - "version": "", - "type": "", - "timestamp": "", - "data": "", - "properties": {} // the properties associated with the schema -} - -``` - -The response includes the following fields: - -| Field | Description | -| --- | --- | -| `version` | The schema version, which is a long number. | -| `type` | The schema type. | -| `timestamp` | The timestamp of creating this version of schema. | -| `data` | The schema definition data, which is encoded in UTF 8 charset.
  1426. If the schema is a
  1427. **primitive**
  1428. schema, this field should be blank.
  1429. If the schema is a
  1430. **struct**
  1431. schema, this field should be a JSON string of the Avro schema definition.
  1432. | -| `properties` | The additional properties associated with the schema. | - -
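A hedged example of fetching the latest schema over REST, again with hypothetical host and topic values:

```bash
# GET the latest schema version of a (hypothetical) topic
curl http://localhost:8080/admin/v2/schemas/my-tenant/my-ns/my-topic/schema
```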
    - - -```java - -SchemaInfo createSchema(String topic) - -``` - -The `SchemaInfo` includes the following fields: - -| Field | Description | -| --- | --- | -| `name` | The schema name. | -| `type` | The schema type. | -| `schema` | A byte array of the schema definition data, which is encoded in UTF 8 charset.
  1433. If the schema is a
  1434. **primitive**
  1435. schema, this byte array should be empty.
  1436. If the schema is a
  1437. **struct**
  1438. schema, this field should be a JSON string of the Avro schema definition converted to a byte array.
  1439. | -| `properties` | The additional properties associated with the schema. | - -Here is an example of `SchemaInfo`: - -```java - -PulsarAdmin admin = …; - -SchemaInfo si = admin.getSchema("my-tenant/my-ns/my-topic"); - -``` - -
    - -
    -```` - -### Get a schema (specific) - -To get a specific version of a schema, you can use one of the following methods. - -````mdx-code-block - - - - -Use the `get` subcommand. - -```bash - -$ pulsar-admin schemas get --version= - -``` - - - - -Send a `GET` request to a schema endpoint: {@inject: endpoint|GET|/admin/v2/schemas/:tenant/:namespace/:topic/schema/:version|operation/getSchema?version=@pulsar:version_number@} - -Here is an example of a response, which is returned in JSON format. - -```json - -{ - "version": "", - "type": "", - "timestamp": "", - "data": "", - "properties": {} // the properties associated with the schema -} - -``` - -The response includes the following fields: - -| Field | Description | -| --- | --- | -| `version` | The schema version, which is a long number. | -| `type` | The schema type. | -| `timestamp` | The timestamp of creating this version of schema. | -| `data` | The schema definition data, which is encoded in UTF 8 charset.
  1440. If the schema is a
  1441. **primitive**
  1442. schema, this field should be blank.
  1443. If the schema is a
  1444. **struct**
  1445. schema, this field should be a JSON string of the Avro schema definition.
  1446. | -| `properties` | The additional properties associated with the schema. | - -
    - - -```java - -SchemaInfo createSchema(String topic, long version) - -``` - -The `SchemaInfo` includes the following fields: - -| Field | Description | -| --- | --- | -| `name` | The schema name. | -| `type` | The schema type. | -| `schema` | A byte array of the schema definition data, which is encoded in UTF 8.
  1447. If the schema is a
  1448. **primitive**
  1449. schema, this byte array should be empty.
  1450. If the schema is a
  1451. **struct**
  1452. schema, this field should be a JSON string of the Avro schema definition converted to a byte array.
  1453. | -| `properties` | The additional properties associated with the schema. | - -Here is an example of `SchemaInfo`: - -```java - -PulsarAdmin admin = …; - -SchemaInfo si = admin.getSchema("my-tenant/my-ns/my-topic", 1L); - -``` - -
    - -
    -```` - -### Extract a schema - -To provide a schema via a topic, you can use the following method. - -````mdx-code-block - - - - -Use the `extract` subcommand. - -```bash - -$ pulsar-admin schemas extract --classname --jar --type - -``` - - - - -```` - -### Delete a schema - -To delete a schema for a topic, you can use one of the following methods. - -:::note - -In any case, the **delete** action deletes **all versions** of a schema registered for a topic. - -::: - -````mdx-code-block - - - - -Use the `delete` subcommand. - -```bash - -$ pulsar-admin schemas delete - -``` - - - - -Send a `DELETE` request to a schema endpoint: {@inject: endpoint|DELETE|/admin/v2/schemas/:tenant/:namespace/:topic/schema|operation/deleteSchema?version=@pulsar:version_number@} - -Here is an example of a response, which is returned in JSON format. - -```json - -{ - "version": "", -} - -``` - -The response includes the following field: - -Field | Description | ----|---| -`version` | The schema version, which is a long number. | - - - - -```java - -void deleteSchema(String topic) - -``` - -Here is an example of deleting a schema. - -```java - -PulsarAdmin admin = …; - -admin.deleteSchema("my-tenant/my-ns/my-topic"); - -``` - - - - -```` - -## Custom schema storage - -By default, Pulsar stores various data types of schemas in [Apache BookKeeper](https://bookkeeper.apache.org) deployed alongside Pulsar. - -However, you can use another storage system if needed. - -### Implement - -To use a non-default (non-BookKeeper) storage system for Pulsar schemas, you need to implement the following Java interfaces: - -* [SchemaStorage interface](#schemastorage-interface) - -* [SchemaStorageFactory interface](#schemastoragefactory-interface) - -#### SchemaStorage interface - -The `SchemaStorage` interface has the following methods: - -```java - -public interface SchemaStorage { - // How schemas are updated - CompletableFuture put(String key, byte[] value, byte[] hash); - - // How schemas are fetched from storage - CompletableFuture get(String key, SchemaVersion version); - - // How schemas are deleted - CompletableFuture delete(String key); - - // Utility method for converting a schema version byte array to a SchemaVersion object - SchemaVersion versionFromBytes(byte[] version); - - // Startup behavior for the schema storage client - void start() throws Exception; - - // Shutdown behavior for the schema storage client - void close() throws Exception; -} - -``` - -:::tip - -For a complete example of **schema storage** implementation, see [BookKeeperSchemaStorage](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/schema/BookkeeperSchemaStorage.java) class. - -::: - -#### SchemaStorageFactory interface - -The `SchemaStorageFactory` interface has the following method: - -```java - -public interface SchemaStorageFactory { - @NotNull - SchemaStorage create(PulsarService pulsar) throws Exception; -} - -``` - -:::tip - -For a complete example of **schema storage factory** implementation, see [BookKeeperSchemaStorageFactory](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/schema/BookkeeperSchemaStorageFactory.java) class. - -::: - -### Deploy - -To use your custom schema storage implementation, perform the following steps. - -1. Package the implementation in a [JAR](https://docs.oracle.com/javase/tutorial/deployment/jar/basicsindex.html) file. - -2. 
Add the JAR file to the `lib` folder in your Pulsar binary or source distribution. - -3. Change the `schemaRegistryStorageClassName` configuration in `broker.conf` to your custom factory class. - -4. Start Pulsar. - -## Set schema compatibility check strategy - -You can set [schema compatibility check strategy](schema-evolution-compatibility.md#schema-compatibility-check-strategy) at namespace or broker level. - -- If you set schema compatibility check strategy at both namespace or broker level, it uses the strategy set for the namespace level. - -- If you do not set schema compatibility check strategy at both namespace or broker level, it uses the `FULL` strategy. - -- If you set schema compatibility check strategy at broker level rather than namespace level, it uses the strategy set for the broker level. - -- If you set schema compatibility check strategy at namespace level rather than broker level, it uses the strategy set for the namespace level. - -### Namespace - -You can set schema compatibility check strategy at namespace level using one of the following methods. - -````mdx-code-block - - - - -Use the [`pulsar-admin namespaces set-schema-compatibility-strategy`](https://pulsar.apache.org/tools/pulsar-admin/) command. - -```shell - -pulsar-admin namespaces set-schema-compatibility-strategy options - -``` - - - - -Send a `PUT` request to this endpoint: {@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace|operation/schemaCompatibilityStrategy?version=@pulsar:version_number@} - - - - -Use the [`setSchemaCompatibilityStrategy`](https://pulsar.apache.org/api/admin/)method. - -```java - -admin.namespaces().setSchemaCompatibilityStrategy("test", SchemaCompatibilityStrategy.FULL); - -``` - - - - -```` - -### Broker - -You can set schema compatibility check strategy at broker level by setting `schemaCompatibilityStrategy` in [`broker.conf`](https://github.com/apache/pulsar/blob/f24b4890c278f72a67fe30e7bf22dc36d71aac6a/conf/broker.conf#L1240) or [`standalone.conf`](https://github.com/apache/pulsar/blob/master/conf/standalone.conf) file. - -**Example** - -``` - -schemaCompatibilityStrategy=ALWAYS_INCOMPATIBLE - -``` - diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/schema-understand.md b/site2/website/versioned_docs/version-2.8.3-deprecated/schema-understand.md deleted file mode 100644 index a86b02add435e1..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/schema-understand.md +++ /dev/null @@ -1,556 +0,0 @@ ---- -id: schema-understand -title: Understand schema -sidebar_label: "Understand schema" -original_id: schema-understand ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -This chapter explains the basic concepts of Pulsar schema, focuses on the topics of particular importance, and provides additional background. - -## SchemaInfo - -Pulsar schema is defined in a data structure called `SchemaInfo`. - -The `SchemaInfo` is stored and enforced on a per-topic basis and cannot be stored at the namespace or tenant level. - -A `SchemaInfo` consists of the following fields: - -| Field | Description | -| --- | --- | -| `name` | Schema name (a string). | -| `type` | Schema type, which determines how to interpret the schema data.
  1454. Predefined schema: see [here](schema-understand.md#schema-type).
  1455. Customized schema: it is left as an empty string.
  1456. | -| `schema`(`payload`) | Schema data, which is a sequence of 8-bit unsigned bytes and schema-type specific. | -| `properties` | It is a user defined properties as a string/string map. Applications can use this bag for carrying any application specific logics. Possible properties might be the Git hash associated with the schema, an environment string like `dev` or `prod`. | - -**Example** - -This is the `SchemaInfo` of a string. - -```json - -{ - "name": "test-string-schema", - "type": "STRING", - "schema": "", - "properties": {} -} - -``` - -## Schema type - -Pulsar supports various schema types, which are mainly divided into two categories: - -* Primitive type - -* Complex type - -### Primitive type - -Currently, Pulsar supports the following primitive types: - -| Primitive Type | Description | -|---|---| -| `BOOLEAN` | A binary value | -| `INT8` | A 8-bit signed integer | -| `INT16` | A 16-bit signed integer | -| `INT32` | A 32-bit signed integer | -| `INT64` | A 64-bit signed integer | -| `FLOAT` | A single precision (32-bit) IEEE 754 floating-point number | -| `DOUBLE` | A double-precision (64-bit) IEEE 754 floating-point number | -| `BYTES` | A sequence of 8-bit unsigned bytes | -| `STRING` | A Unicode character sequence | -| `TIMESTAMP` (`DATE`, `TIME`) | A logic type represents a specific instant in time with millisecond precision.
    It stores the number of milliseconds since `January 1, 1970, 00:00:00 GMT` as an `INT64` value | -| INSTANT | A single instantaneous point on the time-line with nanoseconds precision| -| LOCAL_DATE | An immutable date-time object that represents a date, often viewed as year-month-day| -| LOCAL_TIME | An immutable date-time object that represents a time, often viewed as hour-minute-second. Time is represented to nanosecond precision.| -| LOCAL_DATE_TIME | An immutable date-time object that represents a date-time, often viewed as year-month-day-hour-minute-second | - -For primitive types, Pulsar does not store any schema data in `SchemaInfo`. The `type` in `SchemaInfo` is used to determine how to serialize and deserialize the data. - -Some of the primitive schema implementations can use `properties` to store implementation-specific tunable settings. For example, a `string` schema can use `properties` to store the encoding charset to serialize and deserialize strings. - -The conversions between **Pulsar schema types** and **language-specific primitive types** are as below. - -| Schema Type | Java Type| Python Type | Go Type | -|---|---|---|---| -| BOOLEAN | boolean | bool | bool | -| INT8 | byte | | int8 | -| INT16 | short | | int16 | -| INT32 | int | | int32 | -| INT64 | long | | int64 | -| FLOAT | float | float | float32 | -| DOUBLE | double | float | float64| -| BYTES | byte[], ByteBuffer, ByteBuf | bytes | []byte | -| STRING | string | str | string| -| TIMESTAMP | java.sql.Timestamp | | | -| TIME | java.sql.Time | | | -| DATE | java.util.Date | | | -| INSTANT | java.time.Instant | | | -| LOCAL_DATE | java.time.LocalDate | | | -| LOCAL_TIME | java.time.LocalDateTime | | -| LOCAL_DATE_TIME | java.time.LocalTime | | - -**Example** - -This example demonstrates how to use a string schema. - -1. Create a producer with a string schema and send messages. - - ```java - - Producer producer = client.newProducer(Schema.STRING).create(); - producer.newMessage().value("Hello Pulsar!").send(); - - ``` - -2. Create a consumer with a string schema and receive messages. - - ```java - - Consumer consumer = client.newConsumer(Schema.STRING).subscribe(); - consumer.receive(); - - ``` - -### Complex type - -Currently, Pulsar supports the following complex types: - -| Complex Type | Description | -|---|---| -| `keyvalue` | Represents a complex type of a key/value pair. | -| `struct` | Handles structured data. It supports `AvroBaseStructSchema` and `ProtobufNativeSchema`. | - -#### keyvalue - -`Keyvalue` schema helps applications define schemas for both key and value. - -For `SchemaInfo` of `keyvalue` schema, Pulsar stores the `SchemaInfo` of key schema and the `SchemaInfo` of value schema together. - -Pulsar provides the following methods to encode a key/value pair in messages: - -* `INLINE` - -* `SEPARATED` - -You can choose the encoding type when constructing the key/value schema. - -````mdx-code-block - - - - -Key/value pairs are encoded together in the message payload. - - - - -Key is encoded in the message key and the value is encoded in the message payload. - -**Example** - -This example shows how to construct a key/value schema and then use it to produce and consume messages. - -1. Construct a key/value schema with `INLINE` encoding type. - - ```java - - Schema> kvSchema = Schema.KeyValue( - Schema.INT32, - Schema.STRING, - KeyValueEncodingType.INLINE - ); - - ``` - -2. Optionally, construct a key/value schema with `SEPARATED` encoding type. 
   ```java

   Schema<KeyValue<Integer, String>> kvSchema = Schema.KeyValue(
           Schema.INT32,
           Schema.STRING,
           KeyValueEncodingType.SEPARATED
   );

   ```

3. Produce messages using a key/value schema.

   ```java

   Schema<KeyValue<Integer, String>> kvSchema = Schema.KeyValue(
           Schema.INT32,
           Schema.STRING,
           KeyValueEncodingType.SEPARATED
   );

   Producer<KeyValue<Integer, String>> producer = client.newProducer(kvSchema)
           .topic(TOPIC)
           .create();

   final int key = 100;
   final String value = "value-100";

   // send the key/value message
   producer.newMessage()
           .value(new KeyValue<>(key, value))
           .send();

   ```

4. Consume messages using a key/value schema.

   ```java

   Schema<KeyValue<Integer, String>> kvSchema = Schema.KeyValue(
           Schema.INT32,
           Schema.STRING,
           KeyValueEncodingType.SEPARATED
   );

   Consumer<KeyValue<Integer, String>> consumer = client.newConsumer(kvSchema)
           ...
           .topic(TOPIC)
           .subscriptionName(SubscriptionName).subscribe();

   // receive key/value pair
   Message<KeyValue<Integer, String>> msg = consumer.receive();
   KeyValue<Integer, String> kv = msg.getValue();

   ```


````

#### struct

This section describes the details of type and usage of the `struct` schema.

##### Type

`struct` schema supports `AvroBaseStructSchema` and `ProtobufNativeSchema`.

|Type|Description|
---|---|
`AvroBaseStructSchema`|Pulsar uses the [Avro Specification](http://avro.apache.org/docs/current/spec.html) to declare the schema definition for `AvroBaseStructSchema`, which supports `AvroSchema`, `JsonSchema`, and `ProtobufSchema`. This allows Pulsar to use the same tools to manage schema definitions, and to use different serialization or deserialization methods to handle data.|
`ProtobufNativeSchema`|`ProtobufNativeSchema` is based on the protobuf native Descriptor. This allows Pulsar to use native protobuf-v3 to serialize or deserialize data, and to use `AutoConsume` to deserialize data.|

##### Usage

Pulsar provides the following methods to use the `struct` schema:

* `static`

* `generic`

* `SchemaDefinition`

````mdx-code-block


You can predefine the `struct` schema, which can be a POJO in Java, a `struct` in Go, or classes generated by Avro or Protobuf tools.

**Example**

Pulsar gets the schema definition from the predefined `struct` using an Avro library. The schema definition is the schema data stored as a part of the `SchemaInfo`.

1. Create the _User_ class to define the messages sent to Pulsar topics.

   ```java

   @Builder
   @AllArgsConstructor
   @NoArgsConstructor
   public static class User {
       String name;
       int age;
   }

   ```

2. Create a producer with a `struct` schema and send messages.

   ```java

   Producer<User> producer = client.newProducer(Schema.AVRO(User.class)).create();
   producer.newMessage().value(User.builder().name("pulsar-user").age(1).build()).send();

   ```

3. Create a consumer with a `struct` schema and receive messages.

   ```java

   Consumer<User> consumer = client.newConsumer(Schema.AVRO(User.class)).subscribe();
   User user = consumer.receive().getValue();

   ```


Sometimes applications do not have pre-defined structs, and you can use this method to define the schema and access data.

You can define the `struct` schema using `GenericSchemaBuilder`, generate a generic struct using `GenericRecordBuilder`, and consume messages into `GenericRecord`.

**Example**

1. Use `RecordSchemaBuilder` to build a schema.

   ```java

   RecordSchemaBuilder recordSchemaBuilder = SchemaBuilder.record("schemaName");
   recordSchemaBuilder.field("intField").type(SchemaType.INT32);
   SchemaInfo schemaInfo = recordSchemaBuilder.build(SchemaType.AVRO);

   Producer<GenericRecord> producer = client.newProducer(Schema.generic(schemaInfo)).create();

   ```

2. Use `RecordBuilder` to build the struct records.

   ```java

   producer.newMessage().value(schema.newRecordBuilder()
           .set("intField", 32)
           .build()).send();

   ```


You can define the `schemaDefinition` to generate a `struct` schema.

**Example**

1. Create the _User_ class to define the messages sent to Pulsar topics.

   ```java

   @Builder
   @AllArgsConstructor
   @NoArgsConstructor
   public static class User {
       String name;
       int age;
   }

   ```

2. Create a producer with a `SchemaDefinition` and send messages.

   ```java

   SchemaDefinition<User> schemaDefinition = SchemaDefinition.<User>builder().withPojo(User.class).build();
   Producer<User> producer = client.newProducer(Schema.AVRO(schemaDefinition)).create();
   producer.newMessage().value(User.builder().name("pulsar-user").age(1).build()).send();

   ```

3. Create a consumer with a `SchemaDefinition` schema and receive messages.

   ```java

   SchemaDefinition<User> schemaDefinition = SchemaDefinition.<User>builder().withPojo(User.class).build();
   Consumer<User> consumer = client.newConsumer(Schema.AVRO(schemaDefinition)).subscribe();
   User user = consumer.receive().getValue();

   ```


````

### Auto Schema

If you don't know the schema type of a Pulsar topic in advance, you can use AUTO schema to produce or consume generic records to or from brokers.

| Auto Schema Type | Description |
|---|---|
| `AUTO_PRODUCE` | This is useful for transferring data **from a producer to a Pulsar topic that has a schema**. |
| `AUTO_CONSUME` | This is useful for transferring data **from a Pulsar topic that has a schema to a consumer**. |
| - -#### AUTO_PRODUCE - -`AUTO_PRODUCE` schema helps a producer validate whether the bytes sent by the producer is compatible with the schema of a topic. - -**Example** - -Suppose that: - -* You have a producer processing messages from a Kafka topic _K_. - -* You have a Pulsar topic _P_, and you do not know its schema type. - -* Your application reads the messages from _K_ and writes the messages to _P_. - -In this case, you can use `AUTO_PRODUCE` to verify whether the bytes produced by _K_ can be sent to _P_ or not. - -```java - -Produce pulsarProducer = client.newProducer(Schema.AUTO_PRODUCE()) - … - .create(); - -byte[] kafkaMessageBytes = … ; - -pulsarProducer.produce(kafkaMessageBytes); - -``` - -#### AUTO_CONSUME - -`AUTO_CONSUME` schema helps a Pulsar topic validate whether the bytes sent by a Pulsar topic is compatible with a consumer, that is, the Pulsar topic deserializes messages into language-specific objects using the `SchemaInfo` retrieved from broker-side. - -Currently, `AUTO_CONSUME` supports AVRO, JSON and ProtobufNativeSchema schemas. It deserializes messages into `GenericRecord`. - -**Example** - -Suppose that: - -* You have a Pulsar topic _P_. - -* You have a consumer (for example, MySQL) receiving messages from the topic _P_. - -* Your application reads the messages from _P_ and writes the messages to MySQL. - -In this case, you can use `AUTO_CONSUME` to verify whether the bytes produced by _P_ can be sent to MySQL or not. - -```java - -Consumer pulsarConsumer = client.newConsumer(Schema.AUTO_CONSUME()) - … - .subscribe(); - -Message msg = consumer.receive() ; -GenericRecord record = msg.getValue(); - -``` - -## Schema version - -Each `SchemaInfo` stored with a topic has a version. Schema version manages schema changes happening within a topic. - -Messages produced with a given `SchemaInfo` is tagged with a schema version, so when a message is consumed by a Pulsar client, the Pulsar client can use the schema version to retrieve the corresponding `SchemaInfo` and then use the `SchemaInfo` to deserialize data. - -Schemas are versioned in succession. Schema storage happens in a broker that handles the associated topics so that version assignments can be made. - -Once a version is assigned/fetched to/for a schema, all subsequent messages produced by that producer are tagged with the appropriate version. - -**Example** - -The following example illustrates how the schema version works. - -Suppose that a Pulsar [Java client](client-libraries-java.md) created using the code below attempts to connect to Pulsar and begins to send messages: - -```java - -PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://localhost:6650") - .build(); - -Producer producer = client.newProducer(JSONSchema.of(SensorReading.class)) - .topic("sensor-data") - .sendTimeout(3, TimeUnit.SECONDS) - .create(); - -``` - -The table below lists the possible scenarios when this connection attempt occurs and what happens in each scenario: - -| Scenario | What happens | -| --- | --- | -|
| <li>No schema exists for the topic.</li> | (1) The producer is created using the given schema. (2) Since no existing schema is compatible with the `SensorReading` schema, the schema is transmitted to the broker and stored. (3) Any consumer created using the same schema or topic can consume messages from the `sensor-data` topic. |
| <li>A schema already exists.</li><li>The producer connects using the same schema that is already stored.</li> | (1) The schema is transmitted to the broker. (2) The broker determines that the schema is compatible. (3) The broker attempts to store the schema in [BookKeeper](concepts-architecture-overview.md#persistent-storage) but then determines that it's already stored, so it is used to tag produced messages. |
| <li>A schema already exists.</li><li>The producer connects using a new schema that is compatible.</li> | (1) The schema is transmitted to the broker. (2) The broker determines that the schema is compatible and stores the new schema as the current version (with a new version number). |

## How does schema work

Pulsar schemas are applied and enforced at the **topic** level (schemas cannot be applied at the namespace or tenant level).

Producers and consumers upload schemas to brokers, so Pulsar schemas work on both the producer side and the consumer side.

### Producer side

This diagram illustrates how schemas work on the producer side.

![Schema works at the producer side](/assets/schema-producer.png)

1. The application uses a schema instance to construct a producer instance.

   The schema instance defines the schema for the data being produced using the producer instance.

   Take AVRO as an example: Pulsar extracts the schema definition from the POJO class and constructs the `SchemaInfo` that the producer needs to pass to a broker when it connects.

2. The producer connects to the broker with the `SchemaInfo` extracted from the passed-in schema instance.

3. The broker looks up the schema in the schema storage to check whether it is already a registered schema.

4. If yes, the broker skips the schema validation since it is a known schema, and returns the schema version to the producer.

5. If no, the broker verifies whether a schema can be automatically created in this namespace:

   * If `isAllowAutoUpdateSchema` is set to **true**, a schema can be created, and the broker validates it based on the schema compatibility check strategy defined for the topic.

   * If `isAllowAutoUpdateSchema` is set to **false**, a schema cannot be created, and the producer is rejected and cannot connect to the broker.

**Tip**:

`isAllowAutoUpdateSchema` can be set via the **Pulsar admin API** or the **REST API**.

For how to set `isAllowAutoUpdateSchema` via the Pulsar admin API, see [Manage AutoUpdate Strategy](schema-manage.md/#manage-autoupdate-strategy).

6. If the schema is allowed to be updated, the schema compatibility check is performed.

   * If the schema is compatible, the broker stores it and returns the schema version to the producer.

     All the messages produced by this producer are tagged with the schema version.

   * If the schema is incompatible, the broker rejects it.

### Consumer side

This diagram illustrates how schemas work on the consumer side.

![Schema works at the consumer side](/assets/schema-consumer.png)

1. The application uses a schema instance to construct a consumer instance.

   The schema instance defines the schema that the consumer uses for decoding messages received from a broker.

2. The consumer connects to the broker with the `SchemaInfo` extracted from the passed-in schema instance.

3. The broker determines whether the topic has any of the following: an existing schema, data, a local consumer, or a local producer.

4. If the topic has none of them:

   * If `isAllowAutoUpdateSchema` is set to **true**, the consumer registers a schema and is connected to the broker.

   * If `isAllowAutoUpdateSchema` is set to **false**, the consumer is rejected and cannot connect to the broker.

5. If the topic has any of them, the schema compatibility check is performed.

   * If the schema passes the compatibility check, the consumer is connected to the broker.
   * If the schema does not pass the compatibility check, the consumer is rejected and cannot connect to the broker.

6. The consumer receives messages from the broker.

   If the schema used by the consumer supports schema versioning (for example, AVRO schema), the consumer fetches the `SchemaInfo` of the version tagged in the messages and uses both the passed-in schema and the tagged schema to decode the messages.
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/security-athenz.md b/site2/website/versioned_docs/version-2.8.3-deprecated/security-athenz.md
deleted file mode 100644
index 8a39fe25316d07..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/security-athenz.md
+++ /dev/null
@@ -1,98 +0,0 @@
---
id: security-athenz
title: Authentication using Athenz
sidebar_label: "Authentication using Athenz"
original_id: security-athenz
---

[Athenz](https://github.com/AthenZ/athenz) is a role-based authentication/authorization system. In Pulsar, you can use Athenz role tokens (also known as *z-tokens*) to establish the identity of the client.

## Athenz authentication settings

A [decentralized Athenz system](https://github.com/AthenZ/athenz/blob/master/docs/decent_authz_flow.md) contains an [authori**Z**ation **M**anagement **S**ystem](https://github.com/AthenZ/athenz/blob/master/docs/setup_zms.md) (ZMS) server and an [authori**Z**ation **T**oken **S**ystem](https://github.com/AthenZ/athenz/blob/master/docs/setup_zts) (ZTS) server.

To begin, you need to set up Athenz service access control. You need to create domains for the *provider* (which provides some resources to other services with some authentication/authorization policies) and the *tenant* (which is provisioned to access some resources in a provider). In this case, the provider corresponds to the Pulsar service itself and the tenant corresponds to each application using Pulsar (typically, a [tenant](reference-terminology.md#tenant) in Pulsar).

### Create the tenant domain and service

On the [tenant](reference-terminology.md#tenant) side, you need to do the following:

1. Create a domain, such as `shopping`.
2. Generate a private/public key pair.
3. Create a service, such as `some_app`, on the domain with the public key.

Note that you need to specify the private key generated in step 2 when the Pulsar client connects to the [broker](reference-terminology.md#broker) (see client configuration examples for [Java](client-libraries-java.md#tls-authentication) and [C++](client-libraries-cpp.md#tls-authentication)).

For more specific steps involving the Athenz UI, refer to [Example Service Access Control Setup](https://github.com/AthenZ/athenz/blob/master/docs/example_service_athenz_setup.md#client-tenant-domain).

### Create the provider domain and add the tenant service to some role members

On the provider side, you need to do the following:

1. Create a domain, such as `pulsar`.
2. Create a role.
3. Add the tenant service to the members of the role.

Note that you can specify any action and resource in step 2 since they are not used by Pulsar. In other words, Pulsar uses the Athenz role token only for authentication, *not* for authorization.

For more specific steps involving the Athenz UI, refer to [Example Service Access Control Setup](https://github.com/AthenZ/athenz/blob/master/docs/example_service_athenz_setup.md#server-provider-domain).
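Before moving on to the broker configuration, here is a minimal sketch of what the client side might look like once the two domains above exist. The domain and service names (`shopping`, `some_app`, `pulsar`), the key path, the key ID `v1`, and the broker URL are the illustrative values used on this page, not required values:

```java

import org.apache.pulsar.client.api.AuthenticationFactory;
import org.apache.pulsar.client.api.PulsarClient;

// Connect over TLS (recommended with Athenz) and authenticate with the
// tenant service credentials created in the steps above.
PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar+ssl://broker.example.com:6651")
        .authentication(AuthenticationFactory.create(
                "org.apache.pulsar.client.impl.auth.AuthenticationAthenz",
                "{\"tenantDomain\":\"shopping\",\"tenantService\":\"some_app\","
                        + "\"providerDomain\":\"pulsar\",\"privateKey\":\"file:///path/to/private.pem\",\"keyId\":\"v1\"}"))
        .build();

```

The same plugin class and parameter JSON appear again in the broker and CLI configurations below.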
- -## Configure the broker for Athenz - -> ### TLS encryption -> -> Note that when you are using Athenz as an authentication provider, you had better use TLS encryption -> as it can protect role tokens from being intercepted and reused. (for more details involving TLS encryption see [Architecture - Data Model](https://github.com/AthenZ/athenz/blob/master/docs/data_model)). - -In the `conf/broker.conf` configuration file in your Pulsar installation, you need to provide the class name of the Athenz authentication provider as well as a comma-separated list of provider domain names. - -```properties - -# Add the Athenz auth provider -authenticationEnabled=true -authorizationEnabled=true -authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderAthenz -athenzDomainNames=pulsar - -# Enable TLS -tlsEnabled=true -tlsCertificateFilePath=/path/to/broker-cert.pem -tlsKeyFilePath=/path/to/broker-key.pem - -# Authentication settings of the broker itself. Used when the broker connects to other brokers, either in same or other clusters -brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationAthenz -brokerClientAuthenticationParameters={"tenantDomain":"shopping","tenantService":"some_app","providerDomain":"pulsar","privateKey":"file:///path/to/private.pem","keyId":"v1"} - -``` - -> A full listing of parameters is available in the `conf/broker.conf` file, you can also find the default -> values for those parameters in [Broker Configuration](reference-configuration.md#broker). - -## Configure clients for Athenz - -For more information on Pulsar client authentication using Athenz, see the following language-specific docs: - -* [Java client](client-libraries-java.md#athenz) - -## Configure CLI tools for Athenz - -[Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-pulsar-admin.md), [`pulsar-perf`](reference-cli-tools.md#pulsar-perf), and [`pulsar-client`](reference-cli-tools.md#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation. - -You need to add the following authentication parameters to the `conf/client.conf` config file to use Athenz with CLI tools of Pulsar: - -```properties - -# URL for the broker -serviceUrl=https://broker.example.com:8443/ - -# Set Athenz auth plugin and its parameters -authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationAthenz -authParams={"tenantDomain":"shopping","tenantService":"some_app","providerDomain":"pulsar","privateKey":"file:///path/to/private.pem","keyId":"v1"} - -# Enable TLS -useTls=true -tlsAllowInsecureConnection=false -tlsTrustCertsFilePath=/path/to/cacert.pem - -``` - diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/security-authorization.md b/site2/website/versioned_docs/version-2.8.3-deprecated/security-authorization.md deleted file mode 100644 index 7ac09b6f439eae..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/security-authorization.md +++ /dev/null @@ -1,114 +0,0 @@ ---- -id: security-authorization -title: Authentication and authorization in Pulsar -sidebar_label: "Authorization and ACLs" -original_id: security-authorization ---- - - -In Pulsar, the [authentication provider](security-overview.md#authentication-providers) is responsible for properly identifying clients and associating the clients with [role tokens](security-overview.md#role-tokens). If you only enable authentication, an authenticated role token has the ability to access all resources in the cluster. 
*Authorization* is the process that determines *what* clients are able to do. - -The role tokens with the most privileges are the *superusers*. The *superusers* can create and destroy tenants, along with having full access to all tenant resources. - -When a superuser creates a [tenant](reference-terminology.md#tenant), that tenant is assigned an admin role. A client with the admin role token can then create, modify and destroy namespaces, and grant and revoke permissions to *other role tokens* on those namespaces. - -## Broker and Proxy Setup - -### Enable authorization and assign superusers -You can enable the authorization and assign the superusers in the broker ([`conf/broker.conf`](reference-configuration.md#broker)) configuration files. - -```properties - -authorizationEnabled=true -superUserRoles=my-super-user-1,my-super-user-2 - -``` - -> A full list of parameters is available in the `conf/broker.conf` file. -> You can also find the default values for those parameters in [Broker Configuration](reference-configuration.md#broker). - -Typically, you use superuser roles for administrators, clients as well as broker-to-broker authorization. When you use [geo-replication](concepts-replication.md), every broker needs to be able to publish to all the other topics of clusters. - -You can also enable the authorization for the proxy in the proxy configuration file (`conf/proxy.conf`). Once you enable the authorization on the proxy, the proxy does an additional authorization check before forwarding the request to a broker. -If you enable authorization on the broker, the broker checks the authorization of the request when the broker receives the forwarded request. - -### Proxy Roles - -By default, the broker treats the connection between a proxy and the broker as a normal user connection. The broker authenticates the user as the role configured in `proxy.conf`(see ["Enable TLS Authentication on Proxies"](security-tls-authentication.md#enable-tls-authentication-on-proxies)). However, when the user connects to the cluster through a proxy, the user rarely requires the authentication. The user expects to be able to interact with the cluster as the role for which they have authenticated with the proxy. - -Pulsar uses *Proxy roles* to enable the authentication. Proxy roles are specified in the broker configuration file, [`conf/broker.conf`](reference-configuration.md#broker). If a client that is authenticated with a broker is one of its ```proxyRoles```, all requests from that client must also carry information about the role of the client that is authenticated with the proxy. This information is called the *original principal*. If the *original principal* is absent, the client is not able to access anything. - -You must authorize both the *proxy role* and the *original principal* to access a resource to ensure that the resource is accessible via the proxy. Administrators can take two approaches to authorize the *proxy role* and the *original principal*. - -The more secure approach is to grant access to the proxy roles each time you grant access to a resource. For example, if you have a proxy role named `proxy1`, when the superuser creates a tenant, you should specify `proxy1` as one of the admin roles. When a role is granted permissions to produce or consume from a namespace, if that client wants to produce or consume through a proxy, you should also grant `proxy1` the same permissions. - -Another approach is to make the proxy role a superuser. This allows the proxy to access all resources. 
The client still needs to authenticate with the proxy, and all requests made through the proxy have their role downgraded to the *original principal* of the authenticated client. However, if the proxy is compromised, a bad actor could get full access to your cluster. - -You can specify the roles as proxy roles in [`conf/broker.conf`](reference-configuration.md#broker). - -```properties - -proxyRoles=my-proxy-role - -# if you want to allow superusers to use the proxy (see above) -superUserRoles=my-super-user-1,my-super-user-2,my-proxy-role - -``` - -## Administer tenants - -Pulsar [instance](reference-terminology.md#instance) administrators or some kind of self-service portal typically provisions a Pulsar [tenant](reference-terminology.md#tenant). - -You can manage tenants using the [`pulsar-admin`](reference-pulsar-admin.md) tool. - -### Create a new tenant - -The following is an example tenant creation command: - -```shell - -$ bin/pulsar-admin tenants create my-tenant \ - --admin-roles my-admin-role \ - --allowed-clusters us-west,us-east - -``` - -This command creates a new tenant `my-tenant` that is allowed to use the clusters `us-west` and `us-east`. - -A client that successfully identifies itself as having the role `my-admin-role` is allowed to perform all administrative tasks on this tenant. - -The structure of topic names in Pulsar reflects the hierarchy between tenants, clusters, and namespaces: - -```shell - -persistent://tenant/namespace/topic - -``` - -### Manage permissions - -You can use [Pulsar Admin Tools](admin-api-permissions.md) for managing permission in Pulsar. - -### Pulsar admin authentication - -```java - -PulsarAdmin admin = PulsarAdmin.builder() - .serviceHttpUrl("http://broker:8080") - .authentication("com.org.MyAuthPluginClass", "param1:value1") - .build(); - -``` - -To use TLS: - -```java - -PulsarAdmin admin = PulsarAdmin.builder() - .serviceHttpUrl("https://broker:8080") - .authentication("com.org.MyAuthPluginClass", "param1:value1") - .tlsTrustCertsFilePath("/path/to/trust/cert") - .build(); - -``` - diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/security-basic-auth.md b/site2/website/versioned_docs/version-2.8.3-deprecated/security-basic-auth.md deleted file mode 100644 index 2585526bb478af..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/security-basic-auth.md +++ /dev/null @@ -1,127 +0,0 @@ ---- -id: security-basic-auth -title: Authentication using HTTP basic -sidebar_label: "Authentication using HTTP basic" ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - -[Basic authentication](https://en.wikipedia.org/wiki/Basic_access_authentication) is a simple authentication scheme built into the HTTP protocol, which uses base64-encoded username and password pairs as credentials. - -## Prerequisites - -Install [`htpasswd`](https://httpd.apache.org/docs/2.4/programs/htpasswd.html) in your environment to create a password file for storing username-password pairs. - -* For Ubuntu/Debian, run the following command to install `htpasswd`. - - ``` - apt install apache2-utils - ``` - -* For CentOS/RHEL, run the following command to install `htpasswd`. - - ``` - yum install httpd-tools - ``` - -## Create your authentication file - -:::note -Currently, you can use MD5 (recommended) and CRYPT encryption to authenticate your password. 
-::: - -Create a password file named `.htpasswd` with a user account `superuser/admin`: -* Use MD5 encryption (recommended): - - ``` - htpasswd -cmb /path/to/.htpasswd superuser admin - ``` - -* Use CRYPT encryption: - - ``` - htpasswd -cdb /path/to/.htpasswd superuser admin - ``` - -You can preview the content of your password file by running the following command: - -``` -cat path/to/.htpasswd -superuser:$apr1$GBIYZYFZ$MzLcPrvoUky16mLcK6UtX/ -``` - -## Enable basic authentication on brokers - -To configure brokers to authenticate clients, complete the following steps. - -1. Add the following parameters to the `conf/broker.conf` file. If you use a standalone Pulsar, you need to add these parameters to the `conf/standalone.conf` file. - - ``` - # Configuration to enable Basic authentication - authenticationEnabled=true - authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderBasic - - # Authentication settings of the broker itself. Used when the broker connects to other brokers, either in same or other clusters - brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationBasic - brokerClientAuthenticationParameters={"userId":"superuser","password":"admin"} - - # If this flag is set then the broker authenticates the original Auth data - # else it just accepts the originalPrincipal and authorizes it (if required). - authenticateOriginalAuthData=true - ``` - -2. Set an environment variable named `PULSAR_EXTRA_OPTS` and the value is `-Dpulsar.auth.basic.conf=/path/to/.htpasswd`. Pulsar reads this environment variable to implement HTTP basic authentication. - -## Enable basic authentication on proxies - -To configure proxies to authenticate clients, complete the following steps. - -1. Add the following parameters to the `conf/proxy.conf` file: - - ``` - # For clients connecting to the proxy - authenticationEnabled=true - authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderBasic - - # For the proxy to connect to brokers - brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationBasic - brokerClientAuthenticationParameters={"userId":"superuser","password":"admin"} - - # Whether client authorization credentials are forwarded to the broker for re-authorization. - # Authentication must be enabled via authenticationEnabled=true for this to take effect. - forwardAuthorizationCredentials=true - ``` - -2. Set an environment variable named `PULSAR_EXTRA_OPTS` and the value is `-Dpulsar.auth.basic.conf=/path/to/.htpasswd`. Pulsar reads this environment variable to implement HTTP basic authentication. - -## Configure basic authentication in CLI tools - -[Command-line tools](/docs/next/reference-cli-tools), such as [Pulsar-admin](/tools/pulsar-admin/), [Pulsar-perf](/tools/pulsar-perf/) and [Pulsar-client](/tools/pulsar-client/), use the `conf/client.conf` file in your Pulsar installation. To configure basic authentication in Pulsar CLI tools, you need to add the following parameters to the `conf/client.conf` file. - -``` -authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationBasic -authParams={"userId":"superuser","password":"admin"} -``` - - -## Configure basic authentication in Pulsar clients - -The following example shows how to configure basic authentication when using Pulsar clients. 
- - - - - ```java - AuthenticationBasic auth = new AuthenticationBasic(); - auth.configure("{\"userId\":\"superuser\",\"password\":\"admin\"}"); - PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://broker.example.com:6650") - .authentication(auth) - .build(); - ``` - - - diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/security-bouncy-castle.md b/site2/website/versioned_docs/version-2.8.3-deprecated/security-bouncy-castle.md deleted file mode 100644 index be937055d8e311..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/security-bouncy-castle.md +++ /dev/null @@ -1,157 +0,0 @@ ---- -id: security-bouncy-castle -title: Bouncy Castle Providers -sidebar_label: "Bouncy Castle Providers" -original_id: security-bouncy-castle ---- - -## BouncyCastle Introduce - -`Bouncy Castle` is a Java library that complements the default Java Cryptographic Extension (JCE), -and it provides more cipher suites and algorithms than the default JCE provided by Sun. - -In addition to that, `Bouncy Castle` has lots of utilities for reading arcane formats like PEM and ASN.1 that no sane person would want to rewrite themselves. - -In Pulsar, security and crypto have dependencies on BouncyCastle Jars. For the detailed installing and configuring Bouncy Castle FIPS, see [BC FIPS Documentation](https://www.bouncycastle.org/documentation.html), especially the **User Guides** and **Security Policy** PDFs. - -`Bouncy Castle` provides both [FIPS](https://www.bouncycastle.org/fips_faq.html) and non-FIPS version. But in a JVM, you can not include both of the 2 versions, and you need to exclude the current version before include the other. - -In Pulsar, the security and crypto methods also depends on `Bouncy Castle`, especially in [TLS Authentication](security-tls-authentication.md) and [Transport Encryption](security-encryption.md). This document contains the configuration between BouncyCastle FIPS(BC-FIPS) and non-FIPS(BC-non-FIPS) version while using Pulsar. - -## How BouncyCastle modules packaged in Pulsar - -In Pulsar's `bouncy-castle` module, We provide 2 sub modules: `bouncy-castle-bc`(for non-FIPS version) and `bouncy-castle-bcfips`(for FIPS version), to package BC jars together to make the include and exclude of `Bouncy Castle` easier. - -To achieve this goal, we will need to package several `bouncy-castle` jars together into `bouncy-castle-bc` or `bouncy-castle-bcfips` jar. -Each of the original bouncy-castle jar is related with security, so BouncyCastle dutifully supplies signed of each JAR. -But when we do the re-package, Maven shade explodes the BouncyCastle jar file which puts the signatures into META-INF, -these signatures aren't valid for this new, uber-jar (signatures are only for the original BC jar). -Usually, You will meet error like `java.lang.SecurityException: Invalid signature file digest for Manifest main attributes`. - -You could exclude these signatures in mvn pom file to avoid above error, by - -```access transformers - -META-INF/*.SF -META-INF/*.DSA -META-INF/*.RSA - -``` - -But it can also lead to new, cryptic errors, e.g. 
`java.security.NoSuchAlgorithmException: PBEWithSHA256And256BitAES-CBC-BC SecretKeyFactory not available` -By explicitly specifying where to find the algorithm like this: `SecretKeyFactory.getInstance("PBEWithSHA256And256BitAES-CBC-BC","BC")` -It will get the real error: `java.security.NoSuchProviderException: JCE cannot authenticate the provider BC` - -So, we used a [executable packer plugin](https://github.com/nthuemmel/executable-packer-maven-plugin) that uses a jar-in-jar approach to preserve the BouncyCastle signature in a single, executable jar. - -### Include dependencies of BC-non-FIPS - -Pulsar module `bouncy-castle-bc`, which defined by `bouncy-castle/bc/pom.xml` contains the needed non-FIPS jars for Pulsar, and packaged as a jar-in-jar(need to provide `pkg`). - -```xml - - - org.bouncycastle - bcpkix-jdk15on - ${bouncycastle.version} - - - - org.bouncycastle - bcprov-ext-jdk15on - ${bouncycastle.version} - - -``` - -By using this `bouncy-castle-bc` module, you can easily include and exclude BouncyCastle non-FIPS jars. - -### Modules that include BC-non-FIPS module (`bouncy-castle-bc`) - -For Pulsar client, user need the bouncy-castle module, so `pulsar-client-original` will include the `bouncy-castle-bc` module, and have `pkg` set to reference the `jar-in-jar` package. -It is included as following example: - -```xml - - - org.apache.pulsar - bouncy-castle-bc - ${pulsar.version} - pkg - - -``` - -By default `bouncy-castle-bc` already included in `pulsar-client-original`, And `pulsar-client-original` has been included in a lot of other modules like `pulsar-client-admin`, `pulsar-broker`. -But for the above shaded jar and signatures reason, we should not package Pulsar's `bouncy-castle` module into `pulsar-client-all` other shaded modules directly, such as `pulsar-client-shaded`, `pulsar-client-admin-shaded` and `pulsar-broker-shaded`. -So in the shaded modules, we will exclude the `bouncy-castle` modules. - -```xml - - - - org.apache.pulsar:pulsar-client-original - - ** - - - org/bouncycastle/** - - - - -``` - -That means, `bouncy-castle` related jars are not shaded in these fat jars. - -### Module BC-FIPS (`bouncy-castle-bcfips`) - -Pulsar module `bouncy-castle-bcfips`, which defined by `bouncy-castle/bcfips/pom.xml` contains the needed FIPS jars for Pulsar. -Similar to `bouncy-castle-bc`, `bouncy-castle-bcfips` also packaged as a `jar-in-jar` package for easy include/exclude. - -```xml - - - org.bouncycastle - bc-fips - ${bouncycastlefips.version} - - - - org.bouncycastle - bcpkix-fips - ${bouncycastlefips.version} - - -``` - -### Exclude BC-non-FIPS and include BC-FIPS - -If you want to switch from BC-non-FIPS to BC-FIPS version, Here is an example for `pulsar-broker` module: - -```xml - - - org.apache.pulsar - pulsar-broker - ${pulsar.version} - - - org.apache.pulsar - bouncy-castle-bc - - - - - - org.apache.pulsar - bouncy-castle-bcfips - ${pulsar.version} - pkg - - -``` - - -For more example, you can reference module `bcfips-include-test`. 
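If you are unsure which variant ends up active after swapping dependencies, one quick runtime check (a sketch, not part of the Pulsar codebase, and assuming the provider has already been registered with the JVM) is to list the installed JCE providers and look for `BC` (non-FIPS) or `BCFIPS` (FIPS):

```java

import java.security.Provider;
import java.security.Security;

public class ListJceProviders {
    public static void main(String[] args) {
        // Prints every registered JCE provider with its description;
        // the BouncyCastle non-FIPS provider registers as "BC",
        // the FIPS provider registers as "BCFIPS".
        for (Provider provider : Security.getProviders()) {
            System.out.println(provider.getName() + ": " + provider.getInfo());
        }
    }
}

```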
- diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/security-encryption.md b/site2/website/versioned_docs/version-2.8.3-deprecated/security-encryption.md deleted file mode 100644 index 8c7b298055db91..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/security-encryption.md +++ /dev/null @@ -1,335 +0,0 @@ ---- -id: security-encryption -title: Pulsar Encryption -sidebar_label: "End-to-End Encryption" -original_id: security-encryption ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -Applications can use Pulsar encryption to encrypt messages on the producer side and decrypt messages on the consumer side. You can use the public and private key pair that the application configures to perform encryption. Only the consumers with a valid key can decrypt the encrypted messages. - -## Asymmetric and symmetric encryption - -Pulsar uses a dynamically generated symmetric AES key to encrypt messages(data). You can use the application-provided ECDSA (Elliptic Curve Digital Signature Algorithm) or RSA (Rivest–Shamir–Adleman) key pair to encrypt the AES key(data key), so you do not have to share the secret with everyone. - -Key is a public and private key pair used for encryption or decryption. The producer key is the public key of the key pair, and the consumer key is the private key of the key pair. - -The application configures the producer with the public key. You can use this key to encrypt the AES data key. The encrypted data key is sent as part of message header. Only entities with the private key (in this case the consumer) are able to decrypt the data key which is used to decrypt the message. - -You can encrypt a message with more than one key. Any one of the keys used for encrypting the message is sufficient to decrypt the message. - -Pulsar does not store the encryption key anywhere in the Pulsar service. If you lose or delete the private key, your message is irretrievably lost, and is unrecoverable. - -## Producer -![alt text](/assets/pulsar-encryption-producer.jpg "Pulsar Encryption Producer") - -## Consumer -![alt text](/assets/pulsar-encryption-consumer.jpg "Pulsar Encryption Consumer") - -## Get started - -1. Create your ECDSA or RSA public and private key pair by using the following commands. - * ECDSA(for Java clients only) - - ```shell - - openssl ecparam -name secp521r1 -genkey -param_enc explicit -out test_ecdsa_privkey.pem - openssl ec -in test_ecdsa_privkey.pem -pubout -outform pem -out test_ecdsa_pubkey.pem - - ``` - - * RSA (for C++, Python and Node.js clients) - - ```shell - - openssl genrsa -out test_rsa_privkey.pem 2048 - openssl rsa -in test_rsa_privkey.pem -pubout -outform pkcs8 -out test_rsa_pubkey.pem - - ``` - -2. Add the public and private key to the key management and configure your producers to retrieve public keys and consumers clients to retrieve private keys. - -3. Implement the `CryptoKeyReader` interface, specifically `CryptoKeyReader.getPublicKey()` for producer and `CryptoKeyReader.getPrivateKey()` for consumer, which Pulsar client invokes to load the key. - -4. Add the encryption key name to the producer builder: PulsarClient.newProducer().addEncryptionKey("myapp.key"). - -5. Configure a `CryptoKeyReader` to a producer, consumer or reader. 
- -````mdx-code-block - - - -```java - -PulsarClient pulsarClient = PulsarClient.builder().serviceUrl("pulsar://localhost:6650").build(); -String topic = "persistent://my-tenant/my-ns/my-topic"; -// RawFileKeyReader is just an example implementation that's not provided by Pulsar -CryptoKeyReader keyReader = new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem"); - -Producer producer = pulsarClient.newProducer() - .topic(topic) - .cryptoKeyReader(keyReader) - .addEncryptionKey("myappkey") - .create(); - -Consumer consumer = pulsarClient.newConsumer() - .topic(topic) - .subscriptionName("my-subscriber-name") - .cryptoKeyReader(keyReader) - .subscribe(); - -Reader reader = pulsarClient.newReader() - .topic(topic) - .startMessageId(MessageId.earliest) - .cryptoKeyReader(keyReader) - .create(); - -``` - - - - -```c++ - -Client client("pulsar://localhost:6650"); -std::string topic = "persistent://my-tenant/my-ns/my-topic"; -// DefaultCryptoKeyReader is a built-in implementation that reads public key and private key from files -auto keyReader = std::make_shared("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem"); - -Producer producer; -ProducerConfiguration producerConf; -producerConf.setCryptoKeyReader(keyReader); -producerConf.addEncryptionKey("myappkey"); -client.createProducer(topic, producerConf, producer); - -Consumer consumer; -ConsumerConfiguration consumerConf; -consumerConf.setCryptoKeyReader(keyReader); -client.subscribe(topic, "my-subscriber-name", consumerConf, consumer); - -Reader reader; -ReaderConfiguration readerConf; -readerConf.setCryptoKeyReader(keyReader); -client.createReader(topic, MessageId::earliest(), readerConf, reader); - -``` - - - - -```python - -from pulsar import Client, CryptoKeyReader - -client = Client('pulsar://localhost:6650') -topic = 'persistent://my-tenant/my-ns/my-topic' -# CryptoKeyReader is a built-in implementation that reads public key and private key from files -key_reader = CryptoKeyReader('test_ecdsa_pubkey.pem', 'test_ecdsa_privkey.pem') - -producer = client.create_producer( - topic=topic, - encryption_key='myappkey', - crypto_key_reader=key_reader -) - -consumer = client.subscribe( - topic=topic, - subscription_name='my-subscriber-name', - crypto_key_reader=key_reader -) - -reader = client.create_reader( - topic=topic, - start_message_id=MessageId.earliest, - crypto_key_reader=key_reader -) - -client.close() - -``` - - - - -```nodejs - -const Pulsar = require('pulsar-client'); - -(async () => { -// Create a client -const client = new Pulsar.Client({ - serviceUrl: 'pulsar://localhost:6650', - operationTimeoutSeconds: 30, -}); - -// Create a producer -const producer = await client.createProducer({ - topic: 'persistent://public/default/my-topic', - sendTimeoutMs: 30000, - batchingEnabled: true, - publicKeyPath: "public-key.client-rsa.pem", - encryptionKey: "encryption-key" -}); - -// Create a consumer -const consumer = await client.subscribe({ - topic: 'persistent://public/default/my-topic', - subscription: 'sub1', - subscriptionType: 'Shared', - ackTimeoutMs: 10000, - privateKeyPath: "private-key.client-rsa.pem" -}); - -// Send messages -for (let i = 0; i < 10; i += 1) { - const msg = `my-message-${i}`; - producer.send({ - data: Buffer.from(msg), - }); - console.log(`Sent message: ${msg}`); -} -await producer.flush(); - -// Receive messages -for (let i = 0; i < 10; i += 1) { - const msg = await consumer.receive(); - console.log(msg.getData().toString()); - consumer.acknowledge(msg); -} - -await consumer.close(); -await 
producer.close(); -await client.close(); -})(); - -``` - - - - -```` - -6. Below is an example of a **customized** `CryptoKeyReader` implementation. - -````mdx-code-block - - - -```java - -class RawFileKeyReader implements CryptoKeyReader { - - String publicKeyFile = ""; - String privateKeyFile = ""; - - RawFileKeyReader(String pubKeyFile, String privKeyFile) { - publicKeyFile = pubKeyFile; - privateKeyFile = privKeyFile; - } - - @Override - public EncryptionKeyInfo getPublicKey(String keyName, Map keyMeta) { - EncryptionKeyInfo keyInfo = new EncryptionKeyInfo(); - try { - keyInfo.setKey(Files.readAllBytes(Paths.get(publicKeyFile))); - } catch (IOException e) { - System.out.println("ERROR: Failed to read public key from file " + publicKeyFile); - e.printStackTrace(); - } - return keyInfo; - } - - @Override - public EncryptionKeyInfo getPrivateKey(String keyName, Map keyMeta) { - EncryptionKeyInfo keyInfo = new EncryptionKeyInfo(); - try { - keyInfo.setKey(Files.readAllBytes(Paths.get(privateKeyFile))); - } catch (IOException e) { - System.out.println("ERROR: Failed to read private key from file " + privateKeyFile); - e.printStackTrace(); - } - return keyInfo; - } -} - -``` - - - - -```c++ - -class CustomCryptoKeyReader : public CryptoKeyReader { - public: - Result getPublicKey(const std::string& keyName, std::map& metadata, - EncryptionKeyInfo& encKeyInfo) const override { - // TODO: - return ResultOk; - } - - Result getPrivateKey(const std::string& keyName, std::map& metadata, - EncryptionKeyInfo& encKeyInfo) const override { - // TODO: - return ResultOk; - } -}; - -auto keyReader = std::make_shared(/* ... */); -// TODO: create producer, consumer or reader based on keyReader here - -``` - -Besides, you can use the **default** implementation of `CryptoKeyReader` by specifying the paths of `private key` and `public key`. - - - - -Currently, **customized** `CryptoKeyReader` implementation is not supported in Python. However, you can use the **default** implementation by specifying the path of `private key` and `public key`. - - - - -Currently, **customized** `CryptoKeyReader` implementation is not supported in Node.JS. However, you can use the **default** implementation by specifying the path of `private key` and `public key`. - - - - -```` - -## Key rotation -Pulsar generates a new AES data key every 4 hours or after publishing a certain number of messages. A producer fetches the asymmetric public key every 4 hours by calling CryptoKeyReader.getPublicKey() to retrieve the latest version. - -## Enable encryption at the producer application -If you produce messages that are consumed across application boundaries, you need to ensure that consumers in other applications have access to one of the private keys that can decrypt the messages. You can do this in two ways: -1. The consumer application provides you access to their public key, which you add to your producer keys. -2. You grant access to one of the private keys from the pairs that producer uses. - -When producers want to encrypt the messages with multiple keys, producers add all such keys to the config. Consumer can decrypt the message as long as the consumer has access to at least one of the keys. - -If you need to encrypt the messages using 2 keys (`myapp.messagekey1` and `myapp.messagekey2`), refer to the following example. 
- -```java - -PulsarClient.newProducer().addEncryptionKey("myapp.messagekey1").addEncryptionKey("myapp.messagekey2"); - -``` - -## Decrypt encrypted messages at the consumer application -Consumers require to access one of the private keys to decrypt messages that the producer produces. If you want to receive encrypted messages, create a public or private key and give your public key to the producer application to encrypt messages using your public key. - -## Handle failures -* Producer/Consumer loses access to the key - * Producer action fails to indicate the cause of the failure. Application has the option to proceed with sending unencrypted messages in such cases. Call `PulsarClient.newProducer().cryptoFailureAction(ProducerCryptoFailureAction)` to control the producer behavior. The default behavior is to fail the request. - * If consumption fails due to decryption failure or missing keys in consumer, the application has the option to consume the encrypted message or discard it. Call `PulsarClient.newConsumer().cryptoFailureAction(ConsumerCryptoFailureAction)` to control the consumer behavior. The default behavior is to fail the request. Application is never able to decrypt the messages if the private key is permanently lost. -* Batch messaging - * If decryption fails and the message contains batch messages, client is not able to retrieve individual messages in the batch, hence message consumption fails even if cryptoFailureAction() is set to `ConsumerCryptoFailureAction.CONSUME`. -* If decryption fails, the message consumption stops and the application notices backlog growth in addition to decryption failure messages in the client log. If the application does not have access to the private key to decrypt the message, the only option is to skip or discard backlogged messages. diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/security-extending.md b/site2/website/versioned_docs/version-2.8.3-deprecated/security-extending.md deleted file mode 100644 index e7484453b8beb8..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/security-extending.md +++ /dev/null @@ -1,207 +0,0 @@ ---- -id: security-extending -title: Extending Authentication and Authorization in Pulsar -sidebar_label: "Extending" -original_id: security-extending ---- - -Pulsar provides a way to use custom authentication and authorization mechanisms. - -## Authentication - -Pulsar supports mutual TLS and Athenz authentication plugins. For how to use these authentication plugins, you can refer to the description in [Security](security-overview.md). - -You can use a custom authentication mechanism by providing the implementation in the form of two plugins. One plugin is for the Client library and the other plugin is for the Pulsar Proxy and/or Pulsar Broker to validate the credentials. - -### Client authentication plugin - -For the client library, you need to implement `org.apache.pulsar.client.api.Authentication`. 
By entering the command below you can pass this class when you create a Pulsar client: - -```java - -PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://localhost:6650") - .authentication(new MyAuthentication()) - .build(); - -``` - -You can use 2 interfaces to implement on the client side: - * `Authentication` -> http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/Authentication.html - * `AuthenticationDataProvider` -> http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/AuthenticationDataProvider.html - - -This in turn needs to provide the client credentials in the form of `org.apache.pulsar.client.api.AuthenticationDataProvider`. This leaves the chance to return different kinds of authentication token for different types of connection or by passing a certificate chain to use for TLS. - - -You can find examples for client authentication providers at: - - * Mutual TLS Auth -- https://github.com/apache/pulsar/tree/master/pulsar-client/src/main/java/org/apache/pulsar/client/impl/auth - * Athenz -- https://github.com/apache/pulsar/tree/master/pulsar-client-auth-athenz/src/main/java/org/apache/pulsar/client/impl/auth - -### Proxy/Broker authentication plugin - -On the proxy/broker side, you need to configure the corresponding plugin to validate the credentials that the client sends. The Proxy and Broker can support multiple authentication providers at the same time. - -In `conf/broker.conf` you can choose to specify a list of valid providers: - -```properties - -# Authentication provider name list, which is comma separated list of class names -authenticationProviders= - -``` - -To implement `org.apache.pulsar.broker.authentication.AuthenticationProvider` on one single interface: - -```java - -/** - * Provider of authentication mechanism - */ -public interface AuthenticationProvider extends Closeable { - - /** - * Perform initialization for the authentication provider - * - * @param config - * broker config object - * @throws IOException - * if the initialization fails - */ - void initialize(ServiceConfiguration config) throws IOException; - - /** - * @return the authentication method name supported by this provider - */ - String getAuthMethodName(); - - /** - * Validate the authentication for the given credentials with the specified authentication data - * - * @param authData - * provider specific authentication data - * @return the "role" string for the authenticated connection, if the authentication was successful - * @throws AuthenticationException - * if the credentials are not valid - */ - String authenticate(AuthenticationDataSource authData) throws AuthenticationException; - -} - -``` - -The following is the example for Broker authentication plugins: - - * Mutual TLS -- https://github.com/apache/pulsar/blob/master/pulsar-broker-common/src/main/java/org/apache/pulsar/broker/authentication/AuthenticationProviderTls.java - * Athenz -- https://github.com/apache/pulsar/blob/master/pulsar-broker-auth-athenz/src/main/java/org/apache/pulsar/broker/authentication/AuthenticationProviderAthenz.java - -## Authorization - -Authorization is the operation that checks whether a particular "role" or "principal" has permission to perform a certain operation. - -By default, you can use the embedded authorization provider provided by Pulsar. You can also configure a different authorization provider through a plugin. 
-Note that although the Authentication plugin is designed for use in both the Proxy and Broker, -the Authorization plugin is designed only for use on the Broker however the Proxy does perform some simple Authorization checks of Roles if authorization is enabled. - -To provide a custom provider, you need to implement the `org.apache.pulsar.broker.authorization.AuthorizationProvider` interface, put this class in the Pulsar broker classpath and configure the class in `conf/broker.conf`: - - ```properties - - # Authorization provider fully qualified class-name - authorizationProvider=org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider - - ``` - -```java - -/** - * Provider of authorization mechanism - */ -public interface AuthorizationProvider extends Closeable { - - /** - * Perform initialization for the authorization provider - * - * @param conf - * broker config object - * @param configCache - * pulsar zk configuration cache service - * @throws IOException - * if the initialization fails - */ - void initialize(ServiceConfiguration conf, ConfigurationCacheService configCache) throws IOException; - - /** - * Check if the specified role has permission to send messages to the specified fully qualified topic name. - * - * @param topicName - * the fully qualified topic name associated with the topic. - * @param role - * the app id used to send messages to the topic. - */ - CompletableFuture canProduceAsync(TopicName topicName, String role, - AuthenticationDataSource authenticationData); - - /** - * Check if the specified role has permission to receive messages from the specified fully qualified topic name. - * - * @param topicName - * the fully qualified topic name associated with the topic. - * @param role - * the app id used to receive messages from the topic. - * @param subscription - * the subscription name defined by the client - */ - CompletableFuture canConsumeAsync(TopicName topicName, String role, - AuthenticationDataSource authenticationData, String subscription); - - /** - * Check whether the specified role can perform a lookup for the specified topic. - * - * For that the caller needs to have producer or consumer permission. - * - * @param topicName - * @param role - * @return - * @throws Exception - */ - CompletableFuture canLookupAsync(TopicName topicName, String role, - AuthenticationDataSource authenticationData); - - /** - * - * Grant authorization-action permission on a namespace to the given client - * - * @param namespace - * @param actions - * @param role - * @param authDataJson - * additional authdata in json format - * @return CompletableFuture - * @completesWith
    - * IllegalArgumentException when namespace not found
    - * IllegalStateException when failed to grant permission - */ - CompletableFuture grantPermissionAsync(NamespaceName namespace, Set actions, String role, - String authDataJson); - - /** - * Grant authorization-action permission on a topic to the given client - * - * @param topicName - * @param role - * @param authDataJson - * additional authdata in json format - * @return CompletableFuture - * @completesWith
    - * IllegalArgumentException when namespace not found
    - * IllegalStateException when failed to grant permission - */ - CompletableFuture grantPermissionAsync(TopicName topicName, Set actions, String role, - String authDataJson); - -} - -``` - diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/security-jwt.md b/site2/website/versioned_docs/version-2.8.3-deprecated/security-jwt.md deleted file mode 100644 index 1fa65b7c27f60c..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/security-jwt.md +++ /dev/null @@ -1,331 +0,0 @@ ---- -id: security-jwt -title: Client authentication using tokens based on JSON Web Tokens -sidebar_label: "Authentication using JWT" -original_id: security-jwt ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -## Token authentication overview - -Pulsar supports authenticating clients using security tokens that are based on [JSON Web Tokens](https://jwt.io/introduction/) ([RFC-7519](https://tools.ietf.org/html/rfc7519)). - -You can use tokens to identify a Pulsar client and associate with some "principal" (or "role") that -is permitted to do some actions (eg: publish to a topic or consume from a topic). - -A user typically gets a token string from the administrator (or some automated service). - -The compact representation of a signed JWT is a string that looks like as the following: - -``` - -eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY - -``` - -Application specifies the token when you create the client instance. An alternative is to pass a "token supplier" (a function that returns the token when the client library needs one). - -> #### Always use TLS transport encryption -> Sending a token is equivalent to sending a password over the wire. You had better use TLS encryption all the time when you connect to the Pulsar service. See -> [Transport Encryption using TLS](security-tls-transport.md) for more details. - -### CLI Tools - -[Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-pulsar-admin.md), [`pulsar-perf`](reference-cli-tools.md#pulsar-perf), and [`pulsar-client`](reference-cli-tools.md#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation. - -You need to add the following parameters to that file to use the token authentication with CLI tools of Pulsar: - -```properties - -webServiceUrl=http://broker.example.com:8080/ -brokerServiceUrl=pulsar://broker.example.com:6650/ -authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken -authParams=token:eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY - -``` - -The token string can also be read from a file, for example: - -``` - -authParams=file:///path/to/token/file - -``` - -### Pulsar client - -You can use tokens to authenticate the following Pulsar clients. 
- -````mdx-code-block - - - -```java - -PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://broker.example.com:6650/") - .authentication( - AuthenticationFactory.token("eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY")) - .build(); - -``` - -Similarly, you can also pass a `Supplier`: - -```java - -PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://broker.example.com:6650/") - .authentication( - AuthenticationFactory.token(() -> { - // Read token from custom source - return readToken(); - })) - .build(); - -``` - - - - -```python - -from pulsar import Client, AuthenticationToken - -client = Client('pulsar://broker.example.com:6650/' - authentication=AuthenticationToken('eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY')) - -``` - -Alternatively, you can also pass a `Supplier`: - -```python - -def read_token(): - with open('/path/to/token.txt') as tf: - return tf.read().strip() - -client = Client('pulsar://broker.example.com:6650/' - authentication=AuthenticationToken(read_token)) - -``` - - - - -```go - -client, err := NewClient(ClientOptions{ - URL: "pulsar://localhost:6650", - Authentication: NewAuthenticationToken("eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY"), -}) - -``` - -Similarly, you can also pass a `Supplier`: - -```go - -client, err := NewClient(ClientOptions{ - URL: "pulsar://localhost:6650", - Authentication: NewAuthenticationTokenSupplier(func () string { - // Read token from custom source - return readToken() - }), -}) - -``` - - - - -```c++ - -#include - -pulsar::ClientConfiguration config; -config.setAuth(pulsar::AuthToken::createWithToken("eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY")); - -pulsar::Client client("pulsar://broker.example.com:6650/", config); - -``` - - - - -```c# - -var client = PulsarClient.Builder() - .AuthenticateUsingToken("eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY") - .Build(); - -``` - - - - -```` - -## Enable token authentication - -On how to enable token authentication on a Pulsar cluster, you can refer to the guide below. - -JWT supports two different kinds of keys in order to generate and validate the tokens: - - * Symmetric : - - You can use a single ***Secret*** key to generate and validate tokens. - * Asymmetric: A pair of keys consists of the Private key and the Public key. - - You can use ***Private*** key to generate tokens. - - You can use ***Public*** key to validate tokens. - -### Create a secret key - -When you use a secret key, the administrator creates the key and uses the key to generate the client tokens. You can also configure this key to brokers in order to validate the clients. - -Output file is generated in the root of your Pulsar installation directory. You can also provide absolute path for the output file using the command below. - -```shell - -$ bin/pulsar tokens create-secret-key --output my-secret.key - -``` - -Enter this command to generate base64 encoded private key. - -```shell - -$ bin/pulsar tokens create-secret-key --output /opt/my-secret.key --base64 - -``` - -### Create a key pair - -With Public and Private keys, you need to create a pair of keys. Pulsar supports all algorithms that the Java JWT library (shown [here](https://github.com/jwtk/jjwt#signature-algorithms-keys)) supports. - -Output file is generated in the root of your Pulsar installation directory. 
You can also provide absolute path for the output file using the command below. - -```shell - -$ bin/pulsar tokens create-key-pair --output-private-key my-private.key --output-public-key my-public.key - -``` - - * Store `my-private.key` in a safe location and only administrator can use `my-private.key` to generate new tokens. - * `my-public.key` is distributed to all Pulsar brokers. You can publicly share this file without any security concern. - -### Generate tokens - -A token is the credential associated with a user. The association is done through the "principal" or "role". In the case of JWT tokens, this field is typically referred as **subject**, though they are exactly the same concept. - -Then, you need to use this command to require the generated token to have a **subject** field set. - -```shell - -$ bin/pulsar tokens create --secret-key file:///path/to/my-secret.key \ - --subject test-user - -``` - -This command prints the token string on stdout. - -Similarly, you can create a token by passing the "private" key using the command below: - -```shell - -$ bin/pulsar tokens create --private-key file:///path/to/my-private.key \ - --subject test-user - -``` - -Finally, you can enter the following command to create a token with a pre-defined TTL. And then the token is automatically invalidated. - -```shell - -$ bin/pulsar tokens create --secret-key file:///path/to/my-secret.key \ - --subject test-user \ - --expiry-time 1y - -``` - -### Authorization - -The token itself does not have any permission associated. The authorization engine determines whether the token should have permissions or not. Once you have created the token, you can grant permission for this token to do certain actions. The following is an example. - -```shell - -$ bin/pulsar-admin namespaces grant-permission my-tenant/my-namespace \ - --role test-user \ - --actions produce,consume - -``` - -### Enable token authentication on Brokers - -To configure brokers to authenticate clients, add the following parameters to `broker.conf`: - -```properties - -# Configuration to enable authentication and authorization -authenticationEnabled=true -authorizationEnabled=true -authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken - -# Authentication settings of the broker itself. Used when the broker connects to other brokers, either in same or other clusters -brokerClientTlsEnabled=true -brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken -brokerClientAuthenticationParameters={"token":"eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0LXVzZXIifQ.9OHgE9ZUDeBTZs7nSMEFIuGNEX18FLR3qvy8mqxSxXw"} -# Either configure the token string or specify to read it from a file. The following three available formats are all valid: -# brokerClientAuthenticationParameters={"token":"your-token-string"} -# brokerClientAuthenticationParameters=token:your-token-string -# brokerClientAuthenticationParameters=file:///path/to/token -brokerClientTrustCertsFilePath=/path/my-ca/certs/ca.cert.pem - -# If this flag is set then the broker authenticates the original Auth data -# else it just accepts the originalPrincipal and authorizes it (if required). 
authenticateOriginalAuthData=true

# If using a secret key (Note: key files must be DER-encoded)
tokenSecretKey=file:///path/to/secret.key
# The key can also be passed inline:
# tokenSecretKey=data:;base64,FLFyW0oLJ2Fi22KKCm21J18mbAdztfSHN/lAT5ucEKU=

# If using public/private keys (Note: key files must be DER-encoded)
# tokenPublicKey=file:///path/to/public.key

```

### Enable token authentication on Proxies

The proxy uses its own token when connecting to brokers. You need to configure the role token for this key pair in the `proxyRoles` of the brokers. For more details, see the [authorization guide](security-authorization.md).

To configure proxies to authenticate clients, add the following parameters to `proxy.conf`:

```properties

# For clients connecting to the proxy
authenticationEnabled=true
authorizationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken
tokenSecretKey=file:///path/to/secret.key

# For the proxy to connect to brokers
brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken
brokerClientAuthenticationParameters={"token":"eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0LXVzZXIifQ.9OHgE9ZUDeBTZs7nSMEFIuGNEX18FLR3qvy8mqxSxXw"}
# Either configure the token string or specify to read it from a file. The following three formats are all valid:
# brokerClientAuthenticationParameters={"token":"your-token-string"}
# brokerClientAuthenticationParameters=token:your-token-string
# brokerClientAuthenticationParameters=file:///path/to/token

# Whether client authorization credentials are forwarded to the broker for re-authorization.
# Authentication must be enabled via authenticationEnabled=true for this to take effect.
forwardAuthorizationCredentials=true

```

diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/security-kerberos.md b/site2/website/versioned_docs/version-2.8.3-deprecated/security-kerberos.md
deleted file mode 100644
index c49fa3bea1fce0..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/security-kerberos.md
+++ /dev/null
@@ -1,443 +0,0 @@
----
-id: security-kerberos
-title: Authentication using Kerberos
-sidebar_label: "Authentication using Kerberos"
-original_id: security-kerberos
----

[Kerberos](https://web.mit.edu/kerberos/) is a network authentication protocol. By using secret-key cryptography, [Kerberos](https://web.mit.edu/kerberos/) is designed to provide strong authentication for client applications and server applications.

In Pulsar, you can use Kerberos with [SASL](https://en.wikipedia.org/wiki/Simple_Authentication_and_Security_Layer) as a choice for authentication. Pulsar uses the [Java Authentication and Authorization Service (JAAS)](https://en.wikipedia.org/wiki/Java_Authentication_and_Authorization_Service) for SASL configuration, so you need to provide JAAS configurations for Kerberos authentication.

This document introduces how to configure Kerberos with SASL between Pulsar clients and brokers, and how to configure Kerberos for the Pulsar proxy, in detail.

## Configuration for Kerberos between Client and Broker

### Prerequisites

To begin, you need to set up (or already have) a [Key Distribution Center (KDC)](https://en.wikipedia.org/wiki/Key_distribution_center). You also need to configure and run the KDC in advance.
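Once the KDC is running and you have created the principals and keytabs described in the steps below, you can sanity-check them before wiring anything into Pulsar. The following is a sketch that assumes the standard MIT Kerberos client tools and the example keytab paths used in this guide:

```shell

### list the principals stored in a keytab
klist -kt /etc/security/keytabs/{broker-keytabname}.keytab

### obtain a ticket with the keytab; success confirms that the KDC is
### reachable and that the principal/keytab pair is valid
kinit -kt /etc/security/keytabs/{broker-keytabname}.keytab broker/{hostname}@{REALM}
klist

```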
If your organization already uses a Kerberos server (for example, by using `Active Directory`), you do not have to install a new server for Pulsar. If your organization does not use a Kerberos server, you need to install one. Your Linux vendor might have packages for Kerberos. For how to install and configure Kerberos, refer to [Ubuntu](https://help.ubuntu.com/community/Kerberos) and
[Redhat](https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Managing_Smart_Cards/installing-kerberos.html).

Note that if you use Oracle Java, you need to download JCE policy files for your Java version and copy them to the `$JAVA_HOME/jre/lib/security` directory.

#### Kerberos principals

If you use an existing Kerberos system, ask your Kerberos administrator for a principal for each broker in your cluster and for every operating system user that accesses Pulsar with Kerberos authentication (via clients and tools).

If you have installed your own Kerberos system, you can create these principals with the following commands:

```shell

### add Principals for broker
sudo /usr/sbin/kadmin.local -q 'addprinc -randkey broker/{hostname}@{REALM}'
sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{broker-keytabname}.keytab broker/{hostname}@{REALM}"
### add Principals for client
sudo /usr/sbin/kadmin.local -q 'addprinc -randkey client/{hostname}@{REALM}'
sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{client-keytabname}.keytab client/{hostname}@{REALM}"

```

Note that *Kerberos* requires that all your hosts can be resolved by their FQDNs.

The first part of the broker principal (for example, `broker` in `broker/{hostname}@{REALM}`) is the `serverType` of each host. The suggested values of `serverType` are `broker` (the host machine runs the Pulsar broker service) and `proxy` (the host machine runs the Pulsar proxy service).

#### Configure how to connect to KDC

You need to set the JVM option below to specify the path to the `krb5.conf` file for both the client side and the broker side. The content of the `krb5.conf` file indicates the default realm and KDC information. See [JDK's Kerberos Requirements](https://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/KerberosReq.html) for more details.

```shell

-Djava.security.krb5.conf=/etc/pulsar/krb5.conf

```

Here is an example of the `krb5.conf` file.

In this configuration file, `EXAMPLE.COM` is the default realm, and `kdc = localhost:62037` is the KDC server URL for realm `EXAMPLE.COM`:

```

[libdefaults]
 default_realm = EXAMPLE.COM

[realms]
 EXAMPLE.COM = {
  kdc = localhost:62037
 }

```

Machines configured with Kerberos usually already have a system-wide configuration, in which case this file is optional.

#### JAAS configuration file

You need a JAAS configuration file for both the client side and the broker side. The JAAS configuration file provides the information used to connect to the KDC.
Here is an example named `pulsar_jaas.conf`:

```

 PulsarBroker {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  useTicketCache=false
  keyTab="/etc/security/keytabs/pulsarbroker.keytab"
  principal="broker/localhost@EXAMPLE.COM";
};

 PulsarClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  useTicketCache=false
  keyTab="/etc/security/keytabs/pulsarclient.keytab"
  principal="client/localhost@EXAMPLE.COM";
};

```

You need to set the JAAS configuration file path as a JVM parameter for the client and the broker. For example:

```shell

 -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf

```

In the `pulsar_jaas.conf` file above:

1. `PulsarBroker` is a section name in the JAAS file that each broker uses. This section tells the broker which principal to use inside Kerberos and the location of the keytab where the principal is stored. `PulsarBroker` allows the broker to use the keytab specified in this section.
2. `PulsarClient` is a section name in the JAAS file that each client uses. This section tells the client which principal to use inside Kerberos and the location of the keytab where the principal is stored. `PulsarClient` allows the client to use the keytab specified in this section.
   The following examples also reuse this `PulsarClient` section in both the Pulsar internal admin configuration and in the CLI commands `bin/pulsar-client`, `bin/pulsar-perf` and `bin/pulsar-admin`. You can also add different sections for different use cases.

You can have two separate JAAS configuration files:
* the file for a broker that has sections for both `PulsarBroker` and `PulsarClient`;
* the file for a client that only has a `PulsarClient` section.


### Kerberos configuration for Brokers

#### Configure the `broker.conf` file

 In the `broker.conf` file, set the Kerberos-related configurations.

 - Set `authenticationEnabled` to `true`;
 - Set `authenticationProviders` to choose `AuthenticationProviderSasl`;
 - Set `saslJaasClientAllowedIds` to a regex of the principals that are allowed to connect to the broker;
 - Set `saslJaasBrokerSectionName` to the section in the JAAS configuration file for the broker;

 To make the Pulsar internal admin client work properly, you need to set the configuration in the `broker.conf` file as below:
 - Set `brokerClientAuthenticationPlugin` to the client plugin `AuthenticationSasl`;
 - Set `brokerClientAuthenticationParameters` to the JSON string `{"saslJaasClientSectionName":"PulsarClient", "serverType":"broker"}`, in which `PulsarClient` is the section name in the `pulsar_jaas.conf` file, and `"serverType":"broker"` indicates that the internal admin client connects to a Pulsar broker;

 Here is an example:

```

authenticationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderSasl
saslJaasClientAllowedIds=.*client.*
saslJaasBrokerSectionName=PulsarBroker

## Authentication settings of the broker itself. Used when the broker connects to other brokers
brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationSasl
brokerClientAuthenticationParameters={"saslJaasClientSectionName":"PulsarClient", "serverType":"broker"}

```

#### Set Broker JVM parameter

 Set the JVM parameters for the JAAS configuration file and the krb5 configuration file with additional options.
```shell

 -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf

```

You can add this at the end of `PULSAR_EXTRA_OPTS` in the file [`pulsar_env.sh`](https://github.com/apache/pulsar/blob/master/conf/pulsar_env.sh).

You must ensure that the operating system user who starts the broker can reach the keytabs configured in the `pulsar_jaas.conf` file and the KDC server configured in the `krb5.conf` file.

### Kerberos configuration for clients

#### Java Client and Java Admin Client

In your client application, include `pulsar-client-auth-sasl` in your project dependencies.

```

<dependency>
  <groupId>org.apache.pulsar</groupId>
  <artifactId>pulsar-client-auth-sasl</artifactId>
  <version>${pulsar.version}</version>
</dependency>

```

Configure the authentication type to use `AuthenticationSasl`, and also provide the authentication parameters to it.

You need two parameters:
- `saslJaasClientSectionName`. This parameter corresponds to the section in the JAAS configuration file for the client;
- `serverType`. This parameter indicates whether this client connects to a broker or a proxy. The client uses this parameter to know which server-side principal should be used.

When you authenticate between client and broker with the settings in the above JAAS configuration file, you need to set `saslJaasClientSectionName` to `PulsarClient` and set `serverType` to `broker`.

The following is an example of creating a Java client:

 ```java

 System.setProperty("java.security.auth.login.config", "/etc/pulsar/pulsar_jaas.conf");
 System.setProperty("java.security.krb5.conf", "/etc/pulsar/krb5.conf");

 Map<String, String> authParams = Maps.newHashMap();
 authParams.put("saslJaasClientSectionName", "PulsarClient");
 authParams.put("serverType", "broker");

 Authentication saslAuth = AuthenticationFactory
         .create(org.apache.pulsar.client.impl.auth.AuthenticationSasl.class.getName(), authParams);

 PulsarClient client = PulsarClient.builder()
         .serviceUrl("pulsar://my-broker.com:6650")
         .authentication(saslAuth)
         .build();

 ```

> The first two lines in the example above are hard-coded. Alternatively, you can set additional JVM parameters for the JAAS and krb5 configuration files when you run the application, like below:

```

java -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf -cp $APP-jar-with-dependencies.jar $CLASSNAME

```

You must ensure that the operating system user who starts the Pulsar client can reach the keytabs configured in the `pulsar_jaas.conf` file and the KDC server configured in the `krb5.conf` file.

#### Configure CLI tools

If you use a command-line tool (such as `bin/pulsar-client`, `bin/pulsar-perf` and `bin/pulsar-admin`), you need to perform the following steps:

Step 1. Configure your `client.conf` as below.

```shell

authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationSasl
authParams={"saslJaasClientSectionName":"PulsarClient", "serverType":"broker"}

```

Step 2. Set the JVM parameters for the JAAS configuration file and the krb5 configuration file with additional options.
```shell

 -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf

```

You can add this at the end of `PULSAR_EXTRA_OPTS` in the file [`pulsar_tools_env.sh`](https://github.com/apache/pulsar/blob/master/conf/pulsar_tools_env.sh),
or add the line `OPTS="$OPTS -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf "` directly to the CLI tool script.

These configurations have the same meaning as those in the Java client section above.

## Kerberos configuration for working with Pulsar Proxy

With the above configuration, clients and brokers can authenticate each other using Kerberos.

A client that connects through the Pulsar proxy is a little different. The Pulsar proxy (as a SASL server in Kerberos) authenticates the client (as a SASL client in Kerberos) first, and then the Pulsar broker authenticates the Pulsar proxy.

In comparison with the above configuration between client and broker, the following shows how to configure the Pulsar proxy.

### Create principal for Pulsar Proxy in Kerberos

You need to add new principals for the Pulsar proxy compared with the above configuration. If you already have principals for the client and the broker, you only need to add the proxy principal here.

```shell

### add Principals for Pulsar Proxy
sudo /usr/sbin/kadmin.local -q 'addprinc -randkey proxy/{hostname}@{REALM}'
sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{proxy-keytabname}.keytab proxy/{hostname}@{REALM}"
### add Principals for broker
sudo /usr/sbin/kadmin.local -q 'addprinc -randkey broker/{hostname}@{REALM}'
sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{broker-keytabname}.keytab broker/{hostname}@{REALM}"
### add Principals for client
sudo /usr/sbin/kadmin.local -q 'addprinc -randkey client/{hostname}@{REALM}'
sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{client-keytabname}.keytab client/{hostname}@{REALM}"

```

### Add a section in JAAS configuration file for Pulsar Proxy

In comparison with the above configuration, add a new section for the Pulsar proxy in the JAAS configuration file.

Here is an example named `pulsar_jaas.conf`:

```

 PulsarBroker {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  useTicketCache=false
  keyTab="/etc/security/keytabs/pulsarbroker.keytab"
  principal="broker/localhost@EXAMPLE.COM";
};

 PulsarProxy {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  useTicketCache=false
  keyTab="/etc/security/keytabs/pulsarproxy.keytab"
  principal="proxy/localhost@EXAMPLE.COM";
};

 PulsarClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  useTicketCache=false
  keyTab="/etc/security/keytabs/pulsarclient.keytab"
  principal="client/localhost@EXAMPLE.COM";
};

```

### Proxy client configuration

The Pulsar client configuration is similar to the client and broker configuration, except that you need to set `serverType` to `proxy` instead of `broker`, because the client needs to perform Kerberos authentication with the proxy.
 ```java

 System.setProperty("java.security.auth.login.config", "/etc/pulsar/pulsar_jaas.conf");
 System.setProperty("java.security.krb5.conf", "/etc/pulsar/krb5.conf");

 Map<String, String> authParams = Maps.newHashMap();
 authParams.put("saslJaasClientSectionName", "PulsarClient");
 authParams.put("serverType", "proxy");        // ** this is the difference **

 Authentication saslAuth = AuthenticationFactory
         .create(org.apache.pulsar.client.impl.auth.AuthenticationSasl.class.getName(), authParams);

 PulsarClient client = PulsarClient.builder()
         .serviceUrl("pulsar://my-broker.com:6650")
         .authentication(saslAuth)
         .build();

 ```

> The first two lines in the example above are hard-coded. Alternatively, you can set additional JVM parameters for the JAAS and krb5 configuration files when you run the application, like below:

```

java -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf -cp $APP-jar-with-dependencies.jar $CLASSNAME

```

### Kerberos configuration for Pulsar proxy service

In the `proxy.conf` file, set the Kerberos-related configuration. Here is an example:

```shell

## related to authenticating clients
authenticationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderSasl
saslJaasClientAllowedIds=.*client.*
saslJaasBrokerSectionName=PulsarProxy

## related to being authenticated by brokers
brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationSasl
brokerClientAuthenticationParameters={"saslJaasClientSectionName":"PulsarProxy", "serverType":"broker"}
forwardAuthorizationCredentials=true

```

The first part relates to authentication between the client and the Pulsar proxy. In this phase, the client works as a SASL client, while the Pulsar proxy works as a SASL server.

The second part relates to authentication between the Pulsar proxy and the Pulsar broker. In this phase, the Pulsar proxy works as a SASL client, while the Pulsar broker works as a SASL server.

### Broker side configuration

The broker-side configuration file is the same as the above `broker.conf`; you do not need any special configuration for the Pulsar proxy.

```

authenticationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderSasl
saslJaasClientAllowedIds=.*client.*
saslJaasBrokerSectionName=PulsarBroker

```

## Regarding authorization and role token

For Kerberos authentication, we usually use the authenticated principal as the role token for Pulsar authorization. For more information about authorization in Pulsar, see [security authorization](security-authorization.md).

If you enable `authorizationEnabled`, you need to set `superUserRoles` in `broker.conf` to the name registered in the KDC.

For example:

```bash

superUserRoles=client/{clientIp}@EXAMPLE.COM

```

## Regarding authentication between ZooKeeper and Broker

The Pulsar broker acts as a Kerberos client when it authenticates with ZooKeeper.
According to the [ZooKeeper document](https://cwiki.apache.org/confluence/display/ZOOKEEPER/Client-Server+mutual+authentication), you need these settings in `conf/zookeeper.conf`:

```

authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl

```

Add a `Client` section in the `pulsar_jaas.conf` file that the Pulsar broker uses:

```

 Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  useTicketCache=false
  keyTab="/etc/security/keytabs/pulsarbroker.keytab"
  principal="broker/localhost@EXAMPLE.COM";
};

```

In this setting, the Pulsar broker's principal and keytab file indicate the broker's role when it authenticates with ZooKeeper.

## Regarding authentication between BookKeeper and Broker

The Pulsar broker acts as a Kerberos client when it authenticates with bookies. According to the [BookKeeper document](http://bookkeeper.apache.org/docs/latest/security/sasl/), you need to add the `bookkeeperClientAuthenticationPlugin` parameter in `broker.conf`:

```

bookkeeperClientAuthenticationPlugin=org.apache.bookkeeper.sasl.SASLClientProviderFactory

```

In this setting, `SASLClientProviderFactory` creates a BookKeeper SASL client in the broker, and the broker uses the created SASL client to authenticate with a bookie node.

Add a `BookKeeper` section in the `pulsar_jaas.conf` file that the Pulsar broker uses:

```

 BookKeeper {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  useTicketCache=false
  keyTab="/etc/security/keytabs/pulsarbroker.keytab"
  principal="broker/localhost@EXAMPLE.COM";
};

```

In this setting, the Pulsar broker's principal and keytab file indicate the broker's role when it authenticates with bookies.
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/security-oauth2.md b/site2/website/versioned_docs/version-2.8.3-deprecated/security-oauth2.md
deleted file mode 100644
index 24b1530cc848ae..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/security-oauth2.md
+++ /dev/null
@@ -1,232 +0,0 @@
----
-id: security-oauth2
-title: Client authentication using OAuth 2.0 access tokens
-sidebar_label: "Authentication using OAuth 2.0 access tokens"
-original_id: security-oauth2
----

Pulsar supports authenticating clients using OAuth 2.0 access tokens. You can use OAuth 2.0 access tokens to identify a Pulsar client and associate the Pulsar client with some "principal" (or "role"), which is permitted to do some actions, such as publishing messages to a topic or consuming messages from a topic.

This module is used to support the Pulsar client authentication plugin for OAuth 2.0. After communicating with the OAuth 2.0 server, the Pulsar client gets an `access token` from the OAuth 2.0 server and passes this `access token` to the Pulsar broker to do the authentication. The broker can use `org.apache.pulsar.broker.authentication.AuthenticationProviderToken`, or you can add your own `AuthenticationProvider` to work with this module.

## Authentication provider configuration

This library allows you to authenticate the Pulsar client by using an access token that is obtained from an OAuth 2.0 authorization service, which acts as a _token issuer_.

### Authentication types

The authentication type determines how to obtain an access token through an OAuth 2.0 authorization flow.
:::note

Currently, the Pulsar Java client only supports the `client_credentials` authentication type.

:::

#### Client credentials

The following table lists parameters supported for the `client_credentials` authentication type.

| Parameter | Description | Example | Required or not |
| --- | --- | --- | --- |
| `type` | OAuth 2.0 authentication type. | `client_credentials` (default) | Optional |
| `issuerUrl` | URL of the authentication provider which allows the Pulsar client to obtain an access token. | `https://accounts.google.com` | Required |
| `privateKey` | URL to a JSON credentials file. The following pattern formats are supported:<br />1. `file:///path/to/file`<br />2. `file:/path/to/file`<br />3. `data:application/json;base64,` followed by the base64-encoded JSON | `file:///path/to/key/file.json` | Required |
| `audience` | An OAuth 2.0 "resource server" identifier for the Pulsar cluster. | `https://broker.example.com` | Required |

The credentials file contains the service account credentials used with the client authentication type. The following shows an example of a credentials file, `credentials_file.json`.

```json

{
  "type": "client_credentials",
  "client_id": "d9ZyX97q1ef8Cr81WHVC4hFQ64vSlDK3",
  "client_secret": "on1uJ...k6F6R",
  "client_email": "1234567890-abcdefghijklmnopqrstuvwxyz@developer.gserviceaccount.com",
  "issuer_url": "https://accounts.google.com"
}

```

In the above example, the authentication type is set to `client_credentials` by default, and the fields `client_id` and `client_secret` are required.

### Typical original OAuth2 request mapping

The following shows a typical original OAuth2 request, which is used to obtain an access token from the OAuth2 server.

```bash

curl --request POST \
  --url https://dev-kt-aa9ne.us.auth0.com/oauth/token \
  --header 'content-type: application/json' \
  --data '{
  "client_id":"Xd23RHsUnvUlP7wchjNYOaIfazgeHd9x",
  "client_secret":"rT7ps7WY8uhdVuBTKWZkttwLdQotmdEliaM5rLfmgNibvqziZ-g07ZH52N_poGAb",
  "audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/",
  "grant_type":"client_credentials"}'

```

In the above example, the mapping relationship is as follows.

- The `issuerUrl` parameter in this plugin is mapped to `--url https://dev-kt-aa9ne.us.auth0.com`.
- The `privateKey` file parameter in this plugin should contain at least the `client_id` and `client_secret` fields.
- The `audience` parameter in this plugin is mapped to `"audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/"`.

## Client Configuration

You can use the OAuth2 authentication provider with the following Pulsar clients.

### Java

You can use the factory method to configure authentication for the Pulsar Java client.

```java

import java.net.URL;
import org.apache.pulsar.client.impl.auth.oauth2.AuthenticationFactoryOAuth2;

URL issuerUrl = new URL("https://dev-kt-aa9ne.us.auth0.com");
URL credentialsUrl = new URL("file:///path/to/KeyFile.json");
String audience = "https://dev-kt-aa9ne.us.auth0.com/api/v2/";

PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar://broker.example.com:6650/")
    .authentication(
        AuthenticationFactoryOAuth2.clientCredentials(issuerUrl, credentialsUrl, audience))
    .build();

```

In addition, you can also use encoded parameters to configure authentication for the Pulsar Java client.

```java

Authentication auth = AuthenticationFactory
    .create(AuthenticationOAuth2.class.getName(),
        "{\"type\":\"client_credentials\",\"privateKey\":\"./key/path/..\",\"issuerUrl\":\"...\",\"audience\":\"...\"}");
PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar://broker.example.com:6650/")
    .authentication(auth)
    .build();

```

### C++ client

The C++ client is similar to the Java client. You need to provide the `issuer_url`, `private_key` (the credentials file path), and `audience` parameters.
```c++

#include <pulsar/Client.h>

pulsar::ClientConfiguration config;
std::string params = R"({
    "issuer_url": "https://dev-kt-aa9ne.us.auth0.com",
    "private_key": "../../pulsar-broker/src/test/resources/authentication/token/cpp_credentials_file.json",
    "audience": "https://dev-kt-aa9ne.us.auth0.com/api/v2/"})";

config.setAuth(pulsar::AuthOauth2::create(params));

pulsar::Client client("pulsar://broker.example.com:6650/", config);

```

### Go client

To enable OAuth2 authentication in the Go client, you need to configure OAuth2 authentication.
This example shows how to configure OAuth2 authentication in the Go client.

```go

oauth := pulsar.NewAuthenticationOAuth2(map[string]string{
		"type":       "client_credentials",
		"issuerUrl":  "https://dev-kt-aa9ne.us.auth0.com",
		"audience":   "https://dev-kt-aa9ne.us.auth0.com/api/v2/",
		"privateKey": "/path/to/privateKey",
		"clientId":   "0Xx...Yyxeny",
	})
client, err := pulsar.NewClient(pulsar.ClientOptions{
		URL:            "pulsar://my-cluster:6650",
		Authentication: oauth,
})

```

### Python client

To enable OAuth2 authentication in the Python client, you need to configure OAuth2 authentication.
This example shows how to configure OAuth2 authentication in the Python client.

```python

from pulsar import Client, AuthenticationOauth2

params = '''
{
    "issuer_url": "https://dev-kt-aa9ne.us.auth0.com",
    "private_key": "/path/to/privateKey",
    "audience": "https://dev-kt-aa9ne.us.auth0.com/api/v2/"
}
'''

client = Client("pulsar://my-cluster:6650", authentication=AuthenticationOauth2(params))

```

## CLI configuration

This section describes how to use Pulsar CLI tools to connect to a cluster through the OAuth2 authentication plugin.

### pulsar-admin

This example shows how to use pulsar-admin to connect to a cluster through the OAuth2 authentication plugin.

```shell script

bin/pulsar-admin --admin-url https://streamnative.cloud:443 \
--auth-plugin org.apache.pulsar.client.impl.auth.oauth2.AuthenticationOAuth2 \
--auth-params '{"privateKey":"file:///path/to/key/file.json",
    "issuerUrl":"https://dev-kt-aa9ne.us.auth0.com",
    "audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/"}' \
tenants list

```

Set the `admin-url` parameter to the web service URL, which is a combination of the protocol, hostname, and port, such as `https://localhost:8443`.
Set the `privateKey`, `issuerUrl`, and `audience` parameters to the values based on the configuration in the key file. For details, see [authentication types](#authentication-types).

### pulsar-client

This example shows how to use pulsar-client to connect to a cluster through the OAuth2 authentication plugin.

```shell script

bin/pulsar-client \
--url SERVICE_URL \
--auth-plugin org.apache.pulsar.client.impl.auth.oauth2.AuthenticationOAuth2 \
--auth-params '{"privateKey":"file:///path/to/key/file.json",
    "issuerUrl":"https://dev-kt-aa9ne.us.auth0.com",
    "audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/"}' \
produce test-topic -m "test-message" -n 10

```

Set the `url` parameter to the broker service URL, which is a combination of the protocol, hostname, and port, such as `pulsar://localhost:6650`.
Set the `privateKey`, `issuerUrl`, and `audience` parameters to the values based on the configuration in the key file. For details, see [authentication types](#authentication-types).

### pulsar-perf

This example shows how to use pulsar-perf to connect to a cluster through the OAuth2 authentication plugin.
```shell script

bin/pulsar-perf produce --service-url pulsar+ssl://streamnative.cloud:6651 \
--auth_plugin org.apache.pulsar.client.impl.auth.oauth2.AuthenticationOAuth2 \
--auth-params '{"privateKey":"file:///path/to/key/file.json",
    "issuerUrl":"https://dev-kt-aa9ne.us.auth0.com",
    "audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/"}' \
-r 1000 -s 1024 test-topic

```

Set the `service-url` parameter to the broker service URL, which is a combination of the protocol, hostname, and port, such as `pulsar+ssl://localhost:6651`.
Set the `privateKey`, `issuerUrl`, and `audience` parameters to the values based on the configuration in the key file. For details, see [authentication types](#authentication-types).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/security-overview.md b/site2/website/versioned_docs/version-2.8.3-deprecated/security-overview.md
deleted file mode 100644
index c6bd9b64e4f766..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/security-overview.md
+++ /dev/null
@@ -1,36 +0,0 @@
----
-id: security-overview
-title: Pulsar security overview
-sidebar_label: "Overview"
-original_id: security-overview
----

As the central message bus for a business, Apache Pulsar is frequently used for storing mission-critical data. Therefore, enabling security features in Pulsar is crucial.

By default, Pulsar configures no encryption, authentication, or authorization. Any client can communicate with Apache Pulsar via plain-text service URLs, so you must ensure that access via these plain-text service URLs is restricted to trusted clients only. In such cases, you can use network segmentation and/or authorization ACLs to restrict access to trusted IPs. If you use neither, the cluster is wide open and anyone can access it.

Pulsar supports a pluggable authentication mechanism, and Pulsar clients use this mechanism to authenticate with brokers and proxies. You can also configure Pulsar to support multiple authentication sources.

The Pulsar broker validates the authentication credentials when a connection is established. After the initial connection is authenticated, the "principal" token is stored for authorization, and the connection is not re-authenticated on every operation. The broker periodically checks the expiration status of every `ServerCnx` object. You can set `authenticationRefreshCheckSeconds` on the broker to control how frequently the expiration status is checked; by default, it is set to 60s. When the authentication has expired, the broker forces the connection to re-authenticate. If the re-authentication fails, the broker disconnects the client.

The broker supports learning whether a particular client supports authentication refreshing. If a client supports authentication refreshing and its credential has expired, the authentication provider calls the `refreshAuthentication` method to initiate the refreshing process. If a client does not support authentication refreshing and its credential has expired, the broker disconnects the client.

You should secure the service components in your Apache Pulsar deployment.

## Role tokens

In Pulsar, a *role* is a string, like `admin` or `app1`, which can represent a single client or multiple clients. You can use roles to control permission for clients to produce or consume from certain topics, administer the configuration for tenants, and so on.
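For example, roles are what you reference when granting permissions. The following is a sketch with hypothetical tenant, namespace, and role names (it assumes `pulsar-admin` is already configured to authenticate to your cluster):

```shell

# Let clients authenticated as role "app1" produce and consume in a namespace
bin/pulsar-admin namespaces grant-permission my-tenant/my-namespace \
  --role app1 \
  --actions produce,consume

# Let clients authenticated as role "app1-admin" administer the tenant
bin/pulsar-admin tenants update my-tenant \
  --admin-roles app1-admin \
  --allowed-clusters us-west

```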
Apache Pulsar uses an [Authentication Provider](#authentication-providers) to establish the identity of a client and then assign a *role token* to that client. This role token is then used for [Authorization and ACLs](security-authorization.md) to determine what the client is authorized to do.

## Authentication providers

Currently Pulsar supports the following authentication providers:

- [TLS Authentication](security-tls-authentication.md)
- [Athenz](security-athenz.md)
- [Kerberos](security-kerberos.md)
- [JSON Web Token Authentication](security-jwt.md)
- [OAuth 2.0 authentication](security-oauth2.md)
- [HTTP basic authentication](security-basic-auth.md)

diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/security-tls-authentication.md b/site2/website/versioned_docs/version-2.8.3-deprecated/security-tls-authentication.md
deleted file mode 100644
index 85d2240f413060..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/security-tls-authentication.md
+++ /dev/null
@@ -1,222 +0,0 @@
----
-id: security-tls-authentication
-title: Authentication using TLS
-sidebar_label: "Authentication using TLS"
-original_id: security-tls-authentication
----

## TLS authentication overview

TLS authentication is an extension of [TLS transport encryption](security-tls-transport.md). Not only do servers have keys and certs that the client uses to verify their identity, but clients also have keys and certs that the server uses to verify their identity. You must have TLS transport encryption configured on your cluster before you can use TLS authentication. This guide assumes you already have TLS transport encryption configured.

`Bouncy Castle Provider` provides TLS-related cipher suites and algorithms in Pulsar. If you need the [FIPS](https://www.bouncycastle.org/fips_faq.html) version of `Bouncy Castle Provider`, refer to the [Bouncy Castle page](security-bouncy-castle.md).

### Create client certificates

Client certificates are generated using the certificate authority. Server certificates are also generated with the same certificate authority.

The biggest difference between client certs and server certs is that the **common name** for the client certificate is the **role token** which that client is authenticated as.

To use client certificates, you need to set `tlsRequireTrustedClientCertOnConnect=true` on the broker side. For details, refer to [TLS broker configuration](security-tls-transport.md#configure-broker).

First, enter the following command to generate the key:

```bash

$ openssl genrsa -out admin.key.pem 2048

```

Similar to the broker, the client expects the key to be in [PKCS 8](https://en.wikipedia.org/wiki/PKCS_8) format, so you need to convert it by entering the following command:

```bash

$ openssl pkcs8 -topk8 -inform PEM -outform PEM \
      -in admin.key.pem -out admin.key-pk8.pem -nocrypt

```

Next, enter the command below to generate the certificate request. When you are asked for a **common name**, enter the **role token** that you want this key pair to authenticate a client as.

```bash

$ openssl req -config openssl.cnf \
      -key admin.key.pem -new -sha256 -out admin.csr.pem

```

:::note

If openssl.cnf is not specified, read [Certificate authority](http://pulsar.apache.org/docs/en/security-tls-transport/#certificate-authority) to get the openssl.cnf.

:::

Then, enter the command below to sign the request with the certificate authority.
Note that the client cert uses the **usr_cert** extension, which allows the cert to be used for client authentication.

```bash

$ openssl ca -config openssl.cnf -extensions usr_cert \
      -days 1000 -notext -md sha256 \
      -in admin.csr.pem -out admin.cert.pem

```

From this command, you get the cert `admin.cert.pem`. Together with the key `admin.key-pk8.pem` and `ca.cert.pem`, clients can use this cert and this key to authenticate themselves to brokers and proxies as the role token ``admin``.

:::note

If an "unable to load CA private key" error occurs in this step with the reason "No such file or directory: /etc/pki/CA/private/cakey.pem", try the commands below:

```bash

$ cd /etc/pki/tls/misc/CA
$ ./CA -newca

```

to generate `cakey.pem`.

:::

## Enable TLS authentication on brokers

To configure brokers to authenticate clients, add the following parameters to `broker.conf`, alongside [the configuration to enable TLS transport](security-tls-transport.md#broker-configuration):

```properties

# Configuration to enable authentication
authenticationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderTls

# operations and publish/consume from all topics
superUserRoles=admin

# Authentication settings of the broker itself. Used when the broker connects to other brokers, either in the same or other clusters
brokerClientTlsEnabled=true
brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationTls
brokerClientAuthenticationParameters={"tlsCertFile":"/path/my-ca/admin.cert.pem","tlsKeyFile":"/path/my-ca/admin.key-pk8.pem"}
brokerClientTrustCertsFilePath=/path/my-ca/certs/ca.cert.pem

```

## Enable TLS authentication on proxies

To configure proxies to authenticate clients, add the following parameters to `proxy.conf`, alongside [the configuration to enable TLS transport](security-tls-transport.md#proxy-configuration).

The proxy should have its own client key pair for connecting to brokers. You need to configure the role token for this key pair in the ``proxyRoles`` of the brokers. See the [authorization guide](security-authorization.md) for more details.

```properties

# For clients connecting to the proxy
authenticationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderTls

# For the proxy to connect to brokers
brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationTls
brokerClientAuthenticationParameters=tlsCertFile:/path/to/proxy.cert.pem,tlsKeyFile:/path/to/proxy.key-pk8.pem

```

## Client configuration

When you use TLS authentication, the client connects via TLS transport. You need to configure the client to use ```https://``` and port 8443 for the web service URL, and ```pulsar+ssl://``` and port 6651 for the broker service URL.

### CLI tools

[Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-pulsar-admin.md), [`pulsar-perf`](reference-cli-tools.md#pulsar-perf), and [`pulsar-client`](reference-cli-tools.md#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation.
You need to add the following parameters to that file to use TLS authentication with the CLI tools of Pulsar:

```properties

webServiceUrl=https://broker.example.com:8443/
brokerServiceUrl=pulsar+ssl://broker.example.com:6651/
useTls=true
tlsAllowInsecureConnection=false
tlsTrustCertsFilePath=/path/to/ca.cert.pem
authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationTls
authParams=tlsCertFile:/path/to/my-role.cert.pem,tlsKeyFile:/path/to/my-role.key-pk8.pem

```

### Java client

```java

import org.apache.pulsar.client.api.PulsarClient;

PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar+ssl://broker.example.com:6651/")
    .enableTls(true)
    .tlsTrustCertsFilePath("/path/to/ca.cert.pem")
    .authentication("org.apache.pulsar.client.impl.auth.AuthenticationTls",
                    "tlsCertFile:/path/to/my-role.cert.pem,tlsKeyFile:/path/to/my-role.key-pk8.pem")
    .build();

```

### Python client

```python

from pulsar import Client, AuthenticationTLS

auth = AuthenticationTLS("/path/to/my-role.cert.pem", "/path/to/my-role.key-pk8.pem")
client = Client("pulsar+ssl://broker.example.com:6651/",
                tls_trust_certs_file_path="/path/to/ca.cert.pem",
                tls_allow_insecure_connection=False,
                authentication=auth)

```

### C++ client

```c++

#include <pulsar/Client.h>

pulsar::ClientConfiguration config;
config.setUseTls(true);
config.setTlsTrustCertsFilePath("/path/to/ca.cert.pem");
config.setTlsAllowInsecureConnection(false);

pulsar::AuthenticationPtr auth = pulsar::AuthTls::create("/path/to/my-role.cert.pem",
                                                         "/path/to/my-role.key-pk8.pem");
config.setAuth(auth);

pulsar::Client client("pulsar+ssl://broker.example.com:6651/", config);

```

### Node.js client

```JavaScript

const Pulsar = require('pulsar-client');

(async () => {
  const auth = new Pulsar.AuthenticationTls({
    certificatePath: '/path/to/my-role.cert.pem',
    privateKeyPath: '/path/to/my-role.key-pk8.pem',
  });

  const client = new Pulsar.Client({
    serviceUrl: 'pulsar+ssl://broker.example.com:6651/',
    authentication: auth,
    tlsTrustCertsFilePath: '/path/to/ca.cert.pem',
  });
})();

```

### C# client

```c#

var clientCertificate = new X509Certificate2("admin.pfx");
var client = PulsarClient.Builder()
                         .AuthenticateUsingClientCertificate(clientCertificate)
                         .Build();

```

diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/security-tls-keystore.md b/site2/website/versioned_docs/version-2.8.3-deprecated/security-tls-keystore.md
deleted file mode 100644
index 2d80782ba88d35..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/security-tls-keystore.md
+++ /dev/null
@@ -1,342 +0,0 @@
----
-id: security-tls-keystore
-title: Using TLS with KeyStore configure
-sidebar_label: "Using TLS with KeyStore configure"
-original_id: security-tls-keystore
----

## Overview

Apache Pulsar supports [TLS encryption](security-tls-transport.md) and [TLS authentication](security-tls-authentication.md) between clients and the Apache Pulsar service.
By default, Pulsar uses PEM-format file configuration. This page describes how to use [KeyStore](https://en.wikipedia.org/wiki/Java_KeyStore) type configuration for TLS.


## TLS encryption with KeyStore configuration

### Generate TLS key and certificate

The first step of deploying TLS is to generate the key and the certificate for each machine in the cluster.
You can use Java's `keytool` utility to accomplish this task.
We will generate the key into a temporary keystore initially for the broker, so that we can export and sign it later with the CA.

```shell

keytool -keystore broker.keystore.jks -alias localhost -validity {validity} -genkeypair -keyalg RSA

```

You need to specify two parameters in the above command:

1. `keystore`: the keystore file that stores the certificate. The *keystore* file contains the private key of
   the certificate; hence, it needs to be kept safely.
2. `validity`: the valid time of the certificate in days.

> Ensure that the common name (CN) matches exactly the fully qualified domain name (FQDN) of the server.
The client compares the CN with the DNS domain name to ensure that it is indeed connecting to the desired server, not a malicious one.

### Creating your own CA

After the first step, each broker in the cluster has a public-private key pair and a certificate to identify the machine.
The certificate, however, is unsigned, which means that an attacker can create such a certificate to pretend to be any machine.

Therefore, it is important to prevent forged certificates by signing them for each machine in the cluster.
A `certificate authority (CA)` is responsible for signing certificates. A CA works like a government that issues passports —
the government stamps (signs) each passport so that the passport becomes difficult to forge. Other governments verify the stamps
to ensure the passport is authentic. Similarly, the CA signs the certificates, and the cryptography guarantees that a signed
certificate is computationally difficult to forge. Thus, as long as the CA is a genuine and trusted authority, the clients have
high assurance that they are connecting to the authentic machines.

```shell

openssl req -new -x509 -keyout ca-key -out ca-cert -days 365

```

The generated CA is simply a *public-private* key pair and certificate, and it is intended to sign other certificates.

The next step is to add the generated CA to the clients' truststore so that the clients can trust this CA:

```shell

keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert

```

NOTE: If you configure the brokers to require client authentication by setting `tlsRequireTrustedClientCertOnConnect` to `true` in the
broker configuration, then you must also provide a truststore for the brokers, and it should contain all the CA certificates that client keys were signed by.

```shell

keytool -keystore broker.truststore.jks -alias CARoot -import -file ca-cert

```

In contrast to the keystore, which stores each machine's own identity, the truststore of a client stores all the certificates
that the client should trust. Importing a certificate into one's truststore also means trusting all certificates that are signed
by that certificate. As in the analogy above, trusting the government (CA) also means trusting all passports (certificates) that
it has issued. This attribute is called the chain of trust, and it is particularly useful when deploying TLS on a large BookKeeper cluster.
You can sign all certificates in the cluster with a single CA, and have all machines share the same truststore that trusts the CA.
That way all machines can authenticate all other machines.


### Signing the certificate

The next step is to sign all certificates in the keystore with the CA we generated.
First, you need to export the certificate from the keystore:

```shell

keytool -keystore broker.keystore.jks -alias localhost -certreq -file cert-file

```

Then sign it with the CA:

```shell

openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days {validity} -CAcreateserial -passin pass:{ca-password}

```

Finally, you need to import both the certificate of the CA and the signed certificate into the keystore:

```shell

keytool -keystore broker.keystore.jks -alias CARoot -import -file ca-cert
keytool -keystore broker.keystore.jks -alias localhost -import -file cert-signed

```

The definitions of the parameters are the following:

1. `keystore`: the location of the keystore
2. `ca-cert`: the certificate of the CA
3. `ca-key`: the private key of the CA
4. `ca-password`: the passphrase of the CA
5. `cert-file`: the exported, unsigned certificate of the broker
6. `cert-signed`: the signed certificate of the broker

### Configuring brokers

Brokers enable TLS by providing valid `brokerServicePortTls` and `webServicePortTls` values, and also need to set `tlsEnabledWithKeyStore` to `true` to use KeyStore type configuration.
Besides this, the KeyStore path, KeyStore password, TrustStore path, and TrustStore password need to be provided.
Since the broker creates an internal client/admin client to communicate with other brokers, you also need to provide configuration for it, similar to how you configure the outside client/admin client.
If `tlsRequireTrustedClientCertOnConnect` is `true`, the broker rejects the connection if the client certificate is not trusted.

The following TLS configs are needed on the broker side:

```properties

tlsEnabledWithKeyStore=true
# key store
tlsKeyStoreType=JKS
tlsKeyStore=/var/private/tls/broker.keystore.jks
tlsKeyStorePassword=brokerpw

# trust store
tlsTrustStoreType=JKS
tlsTrustStore=/var/private/tls/broker.truststore.jks
tlsTrustStorePassword=brokerpw

# internal client/admin-client config
brokerClientTlsEnabled=true
brokerClientTlsEnabledWithKeyStore=true
brokerClientTlsTrustStoreType=JKS
brokerClientTlsTrustStore=/var/private/tls/client.truststore.jks
brokerClientTlsTrustStorePassword=clientpw

```

NOTE: it is important to restrict access to the store files via filesystem permissions.

If you have configured TLS on the broker, you can disable the non-TLS ports by setting the values of the following configurations to empty, as below.

```

brokerServicePort=
webServicePort=

```

In this case, you need to set the following configurations.

```conf

brokerClientTlsEnabled=true // Set this to true
brokerClientTlsEnabledWithKeyStore=true  // Set this to true
brokerClientTlsTrustStore= // Set this to your desired value
brokerClientTlsTrustStorePassword= // Set this to your desired value

```

Optional settings worth considering:

1. tlsClientAuthentication=false: Enable/Disable using TLS for authentication. This config, when enabled, authenticates the other end
   of the communication channel. It should be enabled on both brokers and clients for mutual TLS.
2. tlsCiphers=[TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256]: A cipher suite is a named combination of authentication, encryption, MAC and key exchange
   algorithms used to negotiate the security settings for a network connection using the TLS network protocol. By default,
   it is null. See
[OpenSSL Ciphers](https://www.openssl.org/docs/man1.0.2/apps/ciphers.html)
   and [JDK Ciphers](http://docs.oracle.com/javase/8/docs/technotes/guides/security/StandardNames.html#ciphersuites).
3. tlsProtocols=[TLSv1.3,TLSv1.2]: list out the TLS protocols that you are going to accept from clients.
   By default, it is not set.

### Configuring Clients

This is similar to [TLS encryption configuration for clients with PEM type](security-tls-transport.md#client-configuration).
For a minimal configuration, you need to provide the TrustStore information.

For example:
1. for [Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-cli-tools#pulsar-admin), [`pulsar-perf`](reference-cli-tools#pulsar-perf), and [`pulsar-client`](reference-cli-tools#pulsar-client), use the `conf/client.conf` config file in a Pulsar installation.

   ```properties

   webServiceUrl=https://broker.example.com:8443/
   brokerServiceUrl=pulsar+ssl://broker.example.com:6651/
   useKeyStoreTls=true
   tlsTrustStoreType=JKS
   tlsTrustStorePath=/var/private/tls/client.truststore.jks
   tlsTrustStorePassword=clientpw

   ```

1. for the Java client

   ```java

   import org.apache.pulsar.client.api.PulsarClient;

   PulsarClient client = PulsarClient.builder()
       .serviceUrl("pulsar+ssl://broker.example.com:6651/")
       .enableTls(true)
       .useKeyStoreTls(true)
       .tlsTrustStorePath("/var/private/tls/client.truststore.jks")
       .tlsTrustStorePassword("clientpw")
       .allowTlsInsecureConnection(false)
       .build();

   ```

1. for the Java admin client

   ```java

   PulsarAdmin admin = PulsarAdmin.builder().serviceHttpUrl("https://broker.example.com:8443")
       .useKeyStoreTls(true)
       .tlsTrustStorePath("/var/private/tls/client.truststore.jks")
       .tlsTrustStorePassword("clientpw")
       .allowTlsInsecureConnection(false)
       .build();

   ```

## TLS authentication with KeyStore configuration

This is similar to [TLS authentication with PEM type](security-tls-authentication.md).

### Broker authentication config

`broker.conf`

```properties

# Configuration to enable authentication
authenticationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderTls

# this should be the CN for one of the client keystores.
superUserRoles=admin

# Enable KeyStore type
tlsEnabledWithKeyStore=true
requireTrustedClientCertOnConnect=true

# key store
tlsKeyStoreType=JKS
tlsKeyStore=/var/private/tls/broker.keystore.jks
tlsKeyStorePassword=brokerpw

# trust store
tlsTrustStoreType=JKS
tlsTrustStore=/var/private/tls/broker.truststore.jks
tlsTrustStorePassword=brokerpw

# internal client/admin-client config
brokerClientTlsEnabled=true
brokerClientTlsEnabledWithKeyStore=true
brokerClientTlsTrustStoreType=JKS
brokerClientTlsTrustStore=/var/private/tls/client.truststore.jks
brokerClientTlsTrustStorePassword=clientpw
# internal auth config
brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationKeyStoreTls
brokerClientAuthenticationParameters={"keyStoreType":"JKS","keyStorePath":"/var/private/tls/client.keystore.jks","keyStorePassword":"clientpw"}
# currently websocket does not support keystore type
webSocketServiceEnabled=false

```

### Client authentication configuration

Besides the TLS encryption configuration, the main work is configuring a KeyStore for the client that contains a valid CN as the client role.

For example:
1.
for [Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-cli-tools#pulsar-admin), [`pulsar-perf`](reference-cli-tools#pulsar-perf), and [`pulsar-client`](reference-cli-tools#pulsar-client), use the `conf/client.conf` config file in a Pulsar installation.

   ```properties

   webServiceUrl=https://broker.example.com:8443/
   brokerServiceUrl=pulsar+ssl://broker.example.com:6651/
   useKeyStoreTls=true
   tlsTrustStoreType=JKS
   tlsTrustStorePath=/var/private/tls/client.truststore.jks
   tlsTrustStorePassword=clientpw
   authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationKeyStoreTls
   authParams={"keyStoreType":"JKS","keyStorePath":"/path/to/keystorefile","keyStorePassword":"keystorepw"}

   ```

1. for the Java client

   ```java

   import org.apache.pulsar.client.api.PulsarClient;

   PulsarClient client = PulsarClient.builder()
       .serviceUrl("pulsar+ssl://broker.example.com:6651/")
       .enableTls(true)
       .useKeyStoreTls(true)
       .tlsTrustStorePath("/var/private/tls/client.truststore.jks")
       .tlsTrustStorePassword("clientpw")
       .allowTlsInsecureConnection(false)
       .authentication(
               "org.apache.pulsar.client.impl.auth.AuthenticationKeyStoreTls",
               "keyStoreType:JKS,keyStorePath:/var/private/tls/client.keystore.jks,keyStorePassword:clientpw")
       .build();

   ```

1. for the Java admin client

   ```java

   PulsarAdmin admin = PulsarAdmin.builder().serviceHttpUrl("https://broker.example.com:8443")
       .useKeyStoreTls(true)
       .tlsTrustStorePath("/var/private/tls/client.truststore.jks")
       .tlsTrustStorePassword("clientpw")
       .allowTlsInsecureConnection(false)
       .authentication(
               "org.apache.pulsar.client.impl.auth.AuthenticationKeyStoreTls",
               "keyStoreType:JKS,keyStorePath:/var/private/tls/client.keystore.jks,keyStorePassword:clientpw")
       .build();

   ```

## Enabling TLS Logging

You can enable TLS debug logging at the JVM level by starting the brokers and/or clients with the `javax.net.debug` system property. For example:

```shell

-Djavax.net.debug=all

```

You can find more details in the Oracle documentation on [debugging SSL/TLS connections](http://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/ReadDebug.html).
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/security-tls-transport.md b/site2/website/versioned_docs/version-2.8.3-deprecated/security-tls-transport.md
deleted file mode 100644
index 2cad17a78c3507..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/security-tls-transport.md
+++ /dev/null
@@ -1,295 +0,0 @@
----
-id: security-tls-transport
-title: Transport Encryption using TLS
-sidebar_label: "Transport Encryption using TLS"
-original_id: security-tls-transport
----

## TLS overview

By default, Apache Pulsar clients communicate with the Apache Pulsar service in plain text. This means that all data is sent in the clear. You can use TLS to encrypt this traffic and protect it from snooping by a man-in-the-middle attacker.

You can also configure TLS for both encryption and authentication. Use this guide to configure just TLS transport encryption and refer to [here](security-tls-authentication.md) for TLS authentication configuration. Alternatively, you can use [another authentication mechanism](security-athenz.md) on top of TLS transport encryption.

> Note that enabling TLS may impact performance due to encryption overhead.

## TLS concepts

TLS is a form of [public key cryptography](https://en.wikipedia.org/wiki/Public-key_cryptography). Encryption is performed using key pairs consisting of a public key and a private key. The public key encrypts the messages and the private key decrypts the messages.

To use TLS transport encryption, you need two kinds of key pairs, **server key pairs** and a **certificate authority**.

You can use a third kind of key pair, **client key pairs**, for [client authentication](security-tls-authentication.md).

You should store the **certificate authority** private key in a very secure location (a fully encrypted, disconnected, air gapped computer). As for the certificate authority public key, the **trust cert**, you can share it freely.

For both client and server key pairs, the administrator first generates a private key and a certificate request, then uses the certificate authority private key to sign the certificate request, and finally generates a certificate. This certificate is the public key for the server/client key pair.

For TLS transport encryption, the clients can use the **trust cert** to verify that the server has a key pair that the certificate authority signed when the clients are talking to the server. A man-in-the-middle attacker does not have access to the certificate authority, so they cannot create a server with such a key pair.

For TLS authentication, the server uses the **trust cert** to verify that the client has a key pair that the certificate authority signed. The common name of the **client cert** is then used as the client's role token (see [Overview](security-overview.md)).

`Bouncy Castle Provider` provides cipher suites and algorithms in Pulsar. If you need the [FIPS](https://www.bouncycastle.org/fips_faq.html) version of `Bouncy Castle Provider`, refer to the [Bouncy Castle page](security-bouncy-castle.md).

## Create TLS certificates

Creating TLS certificates for Pulsar involves creating a [certificate authority](#certificate-authority) (CA), [server certificate](#server-certificate), and [client certificate](#client-certificate).

Follow the guide below to set up a certificate authority. You can also refer to plenty of resources on the internet for more details. We recommend [this guide](https://jamielinux.com/docs/openssl-certificate-authority/index.html) for your detailed reference.

### Certificate authority

1. Create the certificate for the CA. You can use the CA to sign both the broker and client certificates. This ensures that each party will trust the others. You should store the CA in a very secure location (ideally completely disconnected from networks, air gapped, and fully encrypted).

2. Enter the following commands to create a directory for your CA, and place [this openssl configuration file](https://github.com/apache/pulsar/tree/master/site2/website/static/examples/openssl.cnf) in the directory. You may want to modify the default answers for company name and department in the configuration file. Export the location of the CA directory to the environment variable, CA_HOME. The configuration file uses this environment variable to find the rest of the files and directories that the CA needs.

```bash

mkdir my-ca
cd my-ca
wget https://raw.githubusercontent.com/apache/pulsar-site/main/site2/website/static/examples/openssl.cnf
export CA_HOME=$(pwd)

```

3. Enter the commands below to create the necessary directories, keys, and certs.

```bash

mkdir certs crl newcerts private
chmod 700 private/
touch index.txt
echo 1000 > serial
openssl genrsa -aes256 -out private/ca.key.pem 4096
chmod 400 private/ca.key.pem
openssl req -config openssl.cnf -key private/ca.key.pem \
    -new -x509 -days 7300 -sha256 -extensions v3_ca \
    -out certs/ca.cert.pem
chmod 444 certs/ca.cert.pem

```

4. After you answer the question prompts, CA-related files are stored in the `./my-ca` directory. Within that directory:

* `certs/ca.cert.pem` is the public certificate. This public certificate is meant to be distributed to all parties involved.
* `private/ca.key.pem` is the private key. You only need it when you are signing a new certificate for either brokers or clients, and you must safely guard this private key.

### Server certificate

Once you have created a CA certificate, you can create certificate requests and sign them with the CA.

The following commands ask you a few questions and then create the certificates. When you are asked for the common name, you should match the hostname of the broker. You can also use a wildcard to match a group of broker hostnames, for example, `*.broker.usw.example.com`. This ensures that multiple machines can reuse the same certificate.

:::tip

Sometimes matching the hostname is not possible or makes no sense,
such as when you create the brokers with random hostnames, or you
plan to connect to the hosts via their IP. In these cases, you
should configure the client to disable TLS hostname verification. For more
details, you can see [the host verification section in client configuration](#hostname-verification).

:::

1. Enter the command below to generate the key.

```bash

openssl genrsa -out broker.key.pem 2048

```

The broker expects the key to be in [PKCS 8](https://en.wikipedia.org/wiki/PKCS_8) format, so enter the following command to convert it.

```bash

openssl pkcs8 -topk8 -inform PEM -outform PEM \
      -in broker.key.pem -out broker.key-pk8.pem -nocrypt

```

2. Enter the following command to generate the certificate request.

```bash

openssl req -config openssl.cnf \
    -key broker.key.pem -new -sha256 -out broker.csr.pem

```

3. Sign it with the certificate authority by entering the command below.

```bash

openssl ca -config openssl.cnf -extensions server_cert \
    -days 1000 -notext -md sha256 \
    -in broker.csr.pem -out broker.cert.pem

```

At this point, you have a cert, `broker.cert.pem`, and a key, `broker.key-pk8.pem`, which you can use along with `ca.cert.pem` to configure TLS transport encryption for your broker and proxy nodes.
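
As a quick sanity check (an optional step, not part of the original procedure), you can verify that the signed certificate chains back to your CA before distributing it:

```bash

openssl verify -CAfile certs/ca.cert.pem broker.cert.pem
# Expected output: broker.cert.pem: OK

```
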
## Configure broker

To configure a Pulsar [broker](reference-terminology.md#broker) to use TLS transport encryption, you need to make some changes to `broker.conf`, which is located in the `conf` directory of your [Pulsar installation](getting-started-standalone.md).

Add these values to the configuration file (substituting the appropriate certificate paths where necessary):

```properties

tlsEnabled=true
tlsRequireTrustedClientCertOnConnect=true
tlsCertificateFilePath=/path/to/broker.cert.pem
tlsKeyFilePath=/path/to/broker.key-pk8.pem
tlsTrustCertsFilePath=/path/to/ca.cert.pem

```

> You can find a full list of parameters available in the `conf/broker.conf` file,
> as well as the default values for those parameters, in [Broker Configuration](reference-configuration.md#broker).

### TLS Protocol Version and Cipher

You can configure the broker (and proxy) to require specific TLS protocol versions and ciphers for TLS negotiation. You can use the TLS protocol versions and ciphers to stop clients from requesting downgraded TLS protocol versions or ciphers that may have weaknesses.

Both the TLS protocol versions and cipher properties can take multiple values, separated by commas. The possible values for protocol version and ciphers depend on the TLS provider that you are using. Pulsar uses OpenSSL if it is available, and otherwise falls back to the JDK implementation.

```properties

tlsProtocols=TLSv1.3,TLSv1.2
tlsCiphers=TLS_DH_RSA_WITH_AES_256_GCM_SHA384,TLS_DH_RSA_WITH_AES_256_CBC_SHA

```

OpenSSL currently supports ```TLSv1.1```, ```TLSv1.2``` and ```TLSv1.3``` for the protocol version. You can acquire a list of supported ciphers from the `openssl ciphers` command, for example, ```openssl ciphers -tls1_3```.

For JDK 11, you can obtain a list of supported values from the documentation:
- [TLS protocol](https://docs.oracle.com/en/java/javase/11/security/oracle-providers.html#GUID-7093246A-31A3-4304-AC5F-5FB6400405E2__SUNJSSEPROVIDERPROTOCOLPARAMETERS-BBF75009)
- [Ciphers](https://docs.oracle.com/en/java/javase/11/security/oracle-providers.html#GUID-7093246A-31A3-4304-AC5F-5FB6400405E2__SUNJSSE_CIPHER_SUITES)

## Proxy Configuration

Proxies need to configure TLS in two directions, for clients connecting to the proxy, and for the proxy connecting to brokers.

```properties

# For clients connecting to the proxy
tlsEnabledInProxy=true
tlsCertificateFilePath=/path/to/broker.cert.pem
tlsKeyFilePath=/path/to/broker.key-pk8.pem
tlsTrustCertsFilePath=/path/to/ca.cert.pem

# For the proxy to connect to brokers
tlsEnabledWithBroker=true
brokerClientTrustCertsFilePath=/path/to/ca.cert.pem

```

## Client configuration

When you enable TLS transport encryption, you need to configure the client to use ```https://``` and port 8443 for the web service URL, and ```pulsar+ssl://``` and port 6651 for the broker service URL.

As the server certificate that you generated above does not belong to any of the default trust chains, you also need to either specify the path to the **trust cert** (recommended), or tell the client to allow untrusted server certs.

### Hostname verification

Hostname verification is a TLS security feature whereby a client can refuse to connect to a server if the "CommonName" does not match the hostname to which the client is connecting. By default, Pulsar clients disable hostname verification, as it requires that each broker has a DNS record and a unique cert.

Moreover, as the administrator has full control of the certificate authority, a bad actor is unlikely to be able to pull off a man-in-the-middle attack. "allowInsecureConnection" allows the client to connect to servers whose cert has not been signed by an approved CA.
The client disables "allowInsecureConnection" by default, and you should always disable it in production environments. As long as "allowInsecureConnection" is disabled, a man-in-the-middle attack requires that the attacker has access to the CA.

One scenario where you may want to enable hostname verification is where you have multiple proxy nodes behind a VIP, and the VIP has a DNS record, for example, pulsar.mycompany.com. In this case, you can generate a TLS cert with pulsar.mycompany.com as the "CommonName," and then enable hostname verification on the client.

The examples below show how to configure the CLI tools and the Java, Python, C++, Node.js, and C# clients, with hostname verification disabled (the default).

### CLI tools

[Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-cli-tools.md#pulsar-admin), [`pulsar-perf`](reference-cli-tools.md#pulsar-perf), and [`pulsar-client`](reference-cli-tools.md#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation.

You need to add the following parameters to that file to use TLS transport with Pulsar's CLI tools:

```properties

webServiceUrl=https://broker.example.com:8443/
brokerServiceUrl=pulsar+ssl://broker.example.com:6651/
useTls=true
tlsAllowInsecureConnection=false
tlsTrustCertsFilePath=/path/to/ca.cert.pem
tlsEnableHostnameVerification=false

```

#### Java client

```java

import org.apache.pulsar.client.api.PulsarClient;

PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar+ssl://broker.example.com:6651/")
    .enableTls(true)
    .tlsTrustCertsFilePath("/path/to/ca.cert.pem")
    .enableTlsHostnameVerification(false) // false by default, in any case
    .allowTlsInsecureConnection(false) // false by default, in any case
    .build();

```

#### Python client

```python

from pulsar import Client

client = Client("pulsar+ssl://broker.example.com:6651/",
                tls_hostname_verification=False,
                tls_trust_certs_file_path="/path/to/ca.cert.pem",
                tls_allow_insecure_connection=False) # defaults to false from v2.2.0 onwards

```

#### C++ client

```c++

#include <pulsar/Client.h>

ClientConfiguration config = ClientConfiguration();
config.setUseTls(true);  // shouldn't be needed soon
config.setTlsTrustCertsFilePath(caPath);
config.setTlsAllowInsecureConnection(false);
config.setAuth(pulsar::AuthTls::create(clientPublicKeyPath, clientPrivateKeyPath));
config.setValidateHostName(false);

```

#### Node.js client

```JavaScript

const Pulsar = require('pulsar-client');

(async () => {
  const client = new Pulsar.Client({
    serviceUrl: 'pulsar+ssl://broker.example.com:6651/',
    tlsTrustCertsFilePath: '/path/to/ca.cert.pem',
    useTls: true,
    tlsValidateHostname: false,
    tlsAllowInsecureConnection: false,
  });
})();

```

#### C# client

```c#

var certificate = new X509Certificate2("ca.cert.pem");
var client = PulsarClient.Builder()
                         .TrustedCertificateAuthority(certificate) //If the CA is not trusted on the host, you can add it explicitly.
                         .VerifyCertificateAuthority(true) //Default is 'true'
                         .VerifyCertificateName(false)     //Default is 'false'
                         .Build();

```

> Note that `VerifyCertificateName` refers to the configuration of hostname verification in the C# client.
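
To confirm the TLS setup end to end (a quick check that is not part of the original guide), you can inspect the certificate chain the broker presents with `openssl s_client`, using the example hostname, port, and CA path from above:

```shell

openssl s_client -connect broker.example.com:6651 -CAfile /path/to/ca.cert.pem </dev/null
# A correctly configured listener reports "Verify return code: 0 (ok)"

```
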
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/security-token-admin.md b/site2/website/versioned_docs/version-2.8.3-deprecated/security-token-admin.md
deleted file mode 100644
index a265f6320d28fe..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/security-token-admin.md
+++ /dev/null
@@ -1,183 +0,0 @@
---
id: security-token-admin
title: Token authentication admin
sidebar_label: "Token authentication admin"
original_id: security-token-admin
---

## Token Authentication Overview

Pulsar supports authenticating clients using security tokens that are based on [JSON Web Tokens](https://jwt.io/introduction/) ([RFC-7519](https://tools.ietf.org/html/rfc7519)).

Tokens are used to identify a Pulsar client and associate it with some "principal" (or "role"), which
is then granted permissions to perform some actions (for example, publish to or consume from a topic).

A user is typically given a token string by an administrator (or some automated service).

The compact representation of a signed JWT is a string that looks like:

```

 eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY

```

The application specifies the token when creating the client instance. An alternative is to pass
a "token supplier", that is, a function that returns the token whenever the client library
needs one.
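
For illustration, here is a minimal sketch of both options with the Java client; the service URL and token value are placeholders, and `fetchToken()` is a hypothetical stand-in for whatever mechanism retrieves a fresh token in your environment:

```java

import org.apache.pulsar.client.api.AuthenticationFactory;
import org.apache.pulsar.client.api.PulsarClient;

// Pass the token string directly (the token below is a placeholder)
PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar+ssl://broker.example.com:6651/")
        .authentication(AuthenticationFactory.token("eyJhbGciOiJIUzI1NiJ9..."))
        .build();

// Or pass a supplier, so the client library can obtain a fresh token whenever it needs one
PulsarClient client2 = PulsarClient.builder()
        .serviceUrl("pulsar+ssl://broker.example.com:6651/")
        .authentication(AuthenticationFactory.token(() -> fetchToken()))
        .build();

```
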
> #### Always use TLS transport encryption
> Sending a token is equivalent to sending a password over the wire. It is strongly recommended to
> always use TLS encryption when talking to the Pulsar service. See
> [Transport Encryption using TLS](security-tls-transport.md)

## Secret vs Public/Private keys

JWT supports two different kinds of keys in order to generate and validate the tokens:

 * Symmetric: there is a single ***Secret*** key that is used both to generate and validate tokens.
 * Asymmetric: there is a pair of keys.
     - The ***Private*** key is used to generate tokens.
     - The ***Public*** key is used to validate tokens.

### Secret key

When using a secret key, the administrator creates the key and uses it to generate the client tokens. This key is also configured on the brokers to allow them to validate the clients.

#### Creating a secret key

> The output file is generated in the root of your Pulsar installation directory. You can also provide an absolute path for the output file.

```shell

$ bin/pulsar tokens create-secret-key --output my-secret.key

```

To generate a base64-encoded secret key:

```shell

$ bin/pulsar tokens create-secret-key --output /opt/my-secret.key --base64

```

### Public/Private keys

With public/private keys, you need to create a pair of keys. Pulsar supports all algorithms supported by the Java JWT library shown [here](https://github.com/jwtk/jjwt#signature-algorithms-keys).

#### Creating a key pair

> The output files are generated in the root of your Pulsar installation directory. You can also provide an absolute path for the output files.

```shell

$ bin/pulsar tokens create-key-pair --output-private-key my-private.key --output-public-key my-public.key

```

 * `my-private.key` should be stored in a safe location and is only used by the administrator to generate
   new tokens.
 * `my-public.key` is distributed to all Pulsar brokers. This file can be publicly shared without
   any security concern.

## Generating tokens

A token is the credential associated with a user. The association is done through the "principal",
or "role". In the case of JWT tokens, this field is typically referred to as **subject**, though
it is exactly the same concept.

The generated token is then required to have a **subject** field set.

```shell

$ bin/pulsar tokens create --secret-key file:///path/to/my-secret.key \
            --subject test-user

```

This will print the token string on stdout.

Similarly, you can create a token by passing the "private" key:

```shell

$ bin/pulsar tokens create --private-key file:///path/to/my-private.key \
            --subject test-user

```

Finally, a token can also be created with a pre-defined TTL. After that time,
the token will be automatically invalidated.

```shell

$ bin/pulsar tokens create --secret-key file:///path/to/my-secret.key \
            --subject test-user \
            --expiry-time 1y

```

## Authorization

The token itself does not carry any permissions; those are determined by the
authorization engine. Once the token is created, you can grant permissions for this token to perform certain
actions. For example:

```shell

$ bin/pulsar-admin namespaces grant-permission my-tenant/my-namespace \
            --role test-user \
            --actions produce,consume

```

## Enabling Token Authentication ...

### ... on Brokers

To configure brokers to authenticate clients, put the following in `broker.conf`:

```properties

# Configuration to enable authentication and authorization
authenticationEnabled=true
authorizationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken

# If using secret key (Note: key files must be DER-encoded)
tokenSecretKey=file:///path/to/secret.key
# The key can also be passed inline:
# tokenSecretKey=data:;base64,FLFyW0oLJ2Fi22KKCm21J18mbAdztfSHN/lAT5ucEKU=

# If using public/private (Note: key files must be DER-encoded)
# tokenPublicKey=file:///path/to/public.key

```

### ... on Proxies

To configure proxies to authenticate clients, put the following in `proxy.conf`.

The proxy has its own token that it uses when talking to brokers. The role token for this
key pair should be configured in the ``proxyRoles`` of the brokers. See the [authorization guide](security-authorization.md) for more details.

```properties

# For clients connecting to the proxy
authenticationEnabled=true
authorizationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken
tokenSecretKey=file:///path/to/secret.key

# For the proxy to connect to brokers
brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken
brokerClientAuthenticationParameters={"token":"eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0LXVzZXIifQ.9OHgE9ZUDeBTZs7nSMEFIuGNEX18FLR3qvy8mqxSxXw"}
# Or, alternatively, read token from file
# brokerClientAuthenticationParameters=file:///path/to/proxy-token.txt

```

diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/sql-deployment-configurations.md b/site2/website/versioned_docs/version-2.8.3-deprecated/sql-deployment-configurations.md
deleted file mode 100644
index 02d8bc78f6cb9d..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/sql-deployment-configurations.md
+++ /dev/null
@@ -1,199 +0,0 @@
---
id: sql-deployment-configurations
title: Pulsar SQL configuration and deployment
sidebar_label: "Configuration and deployment"
original_id: sql-deployment-configurations
---

You can configure the Presto Pulsar connector and deploy a cluster with the following instructions.

## Configure Presto Pulsar Connector
You can configure the Presto Pulsar connector in the `${project.root}/conf/presto/catalog/pulsar.properties` properties file. The configurations for the connector and their default values are as follows.

```properties

# name of the connector to be displayed in the catalog
connector.name=pulsar

# the url of Pulsar broker service
pulsar.web-service-url=http://localhost:8080

# URI of Zookeeper cluster
pulsar.zookeeper-uri=localhost:2181

# minimum number of entries to read at a single time
pulsar.entry-read-batch-size=100

# default number of splits to use per query
pulsar.target-num-splits=4

```

You can connect Presto to a Pulsar cluster with multiple hosts. To configure multiple hosts for brokers, add multiple URLs to `pulsar.web-service-url`. To configure multiple hosts for ZooKeeper, add multiple URIs to `pulsar.zookeeper-uri`. The following is an example.

```

pulsar.web-service-url=http://localhost:8080,localhost:8081,localhost:8082
pulsar.zookeeper-uri=localhost1,localhost2:2181

```

**Note: by default, Pulsar SQL does not get the last message in a topic**. This is by design and controlled by settings. By default, the BookKeeper LAC (LastAddConfirmed) only advances when subsequent entries are added. If no subsequent entry is added, the last written entry is not visible to readers until the ledger is closed. This is not a problem for Pulsar, which uses the managed ledger, but Pulsar SQL reads directly from BookKeeper ledgers.

If you want to get the last message in a topic, set the following configurations (see the sketch after this list):

1. For the broker configuration, set `bookkeeperExplicitLacIntervalInMills` > 0 in `broker.conf` or `standalone.conf`.

2. For the Presto configuration, set `pulsar.bookkeeper-explicit-interval` > 0 and `pulsar.bookkeeper-use-v2-protocol=false`.

However, using the BookKeeper V3 protocol introduces additional GC overhead to BookKeeper, as it uses Protobuf.
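
As a concrete illustration of the two settings above, a minimal sketch follows; the interval value of `50` is an arbitrary assumption for this example, not a recommended default:

```properties

# broker.conf (or standalone.conf): make bookies persist the LAC periodically
bookkeeperExplicitLacIntervalInMills=50

# conf/presto/catalog/pulsar.properties: poll the explicit LAC and use the v3 protocol
pulsar.bookkeeper-explicit-interval=50
pulsar.bookkeeper-use-v2-protocol=false

```
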

## Query data from existing Presto clusters

If you already have a Presto cluster, you can copy the Presto Pulsar connector plugin to your existing cluster. Download the archived plugin package with the following command.

```bash

$ wget pulsar:binary_release_url

```

## Deploy a new cluster

Since Pulsar SQL is powered by [Trino (formerly Presto SQL)](https://trino.io), the deployment configuration for a Pulsar SQL worker is the same as for Presto.

:::note

For how to set up a standalone single node environment, refer to [Query data](sql-getting-started.md).

:::

You can use the same CLI arguments as the Presto launcher.

```bash

$ ./bin/pulsar sql-worker --help
Usage: launcher [options] command

Commands: run, start, stop, restart, kill, status

Options:
  -h, --help            show this help message and exit
  -v, --verbose         Run verbosely
  --etc-dir=DIR         Defaults to INSTALL_PATH/etc
  --launcher-config=FILE
                        Defaults to INSTALL_PATH/bin/launcher.properties
  --node-config=FILE    Defaults to ETC_DIR/node.properties
  --jvm-config=FILE     Defaults to ETC_DIR/jvm.config
  --config=FILE         Defaults to ETC_DIR/config.properties
  --log-levels-file=FILE
                        Defaults to ETC_DIR/log.properties
  --data-dir=DIR        Defaults to INSTALL_PATH
  --pid-file=FILE       Defaults to DATA_DIR/var/run/launcher.pid
  --launcher-log-file=FILE
                        Defaults to DATA_DIR/var/log/launcher.log (only in
                        daemon mode)
  --server-log-file=FILE
                        Defaults to DATA_DIR/var/log/server.log (only in
                        daemon mode)
  -D NAME=VALUE         Set a Java system property

```

The default configuration for the cluster is located in `${project.root}/conf/presto`. You can customize your deployment by modifying the default configuration.

You can set the worker to read from a different configuration directory, or set a different directory to write data.

```bash

$ ./bin/pulsar sql-worker run --etc-dir /tmp/incubator-pulsar/conf/presto --data-dir /tmp/presto-1

```

You can start the worker as a daemon process.

```bash

$ ./bin/pulsar sql-worker start

```

### Deploy a cluster on multiple nodes

You can deploy a Pulsar SQL cluster or Presto cluster on multiple nodes. The following example shows how to deploy a cluster on a three-node cluster.

1. Copy the Pulsar binary distribution to three nodes.

The first node runs as the Presto coordinator. The minimal configuration requirement in the `${project.root}/conf/presto/config.properties` file is as follows.

```properties

coordinator=true
node-scheduler.include-coordinator=true
http-server.http.port=8080
query.max-memory=50GB
query.max-memory-per-node=1GB
discovery-server.enabled=true
discovery.uri=

```

The other two nodes serve as worker nodes; you can use the following configuration for worker nodes.

```properties

coordinator=false
http-server.http.port=8080
query.max-memory=50GB
query.max-memory-per-node=1GB
discovery.uri=

```

2. Modify the `pulsar.web-service-url` and `pulsar.zookeeper-uri` configuration in the `${project.root}/conf/presto/catalog/pulsar.properties` file accordingly for the three nodes.

3. Start the coordinator node.

```

$ ./bin/pulsar sql-worker run

```

4. Start worker nodes.

```

$ ./bin/pulsar sql-worker run

```

5. Start the SQL CLI and check the status of your cluster.

```bash

$ ./bin/pulsar sql --server 

```

6. Check the status of your nodes.

```bash

presto> SELECT * FROM system.runtime.nodes;
 node_id |        http_uri         | node_version | coordinator | state
---------+-------------------------+--------------+-------------+--------
 1       | http://192.168.2.1:8081 | testversion  | true        | active
 3       | http://192.168.2.2:8081 | testversion  | false       | active
 2       | http://192.168.2.3:8081 | testversion  | false       | active

```

For more information about deployment in Presto, refer to [Presto deployment](https://trino.io/docs/current/installation/deployment.html).

:::note

The broker does not advance the LAC, so when Pulsar SQL bypasses the broker to query data, it can only read entries up to the LAC that all the bookies have learned.
You can enable periodic LAC writes on the broker by setting `bookkeeperExplicitLacIntervalInMills` in `broker.conf`.

:::

diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/sql-getting-started.md b/site2/website/versioned_docs/version-2.8.3-deprecated/sql-getting-started.md
deleted file mode 100644
index 8a5cd7199b365a..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/sql-getting-started.md
+++ /dev/null
@@ -1,187 +0,0 @@
---
id: sql-getting-started
title: Query data with Pulsar SQL
sidebar_label: "Query data"
original_id: sql-getting-started
---

Before querying data in Pulsar, you need to install Pulsar and the built-in connectors.

## Requirements
1. Install [Pulsar](getting-started-standalone.md#install-pulsar-standalone).
2. Install Pulsar [built-in connectors](getting-started-standalone.md#install-builtin-connectors-optional).

## Query data in Pulsar
To query data in Pulsar with Pulsar SQL, complete the following steps.

1. Start a Pulsar standalone cluster.

```bash

./bin/pulsar standalone

```

2. Start a Pulsar SQL worker.

```bash

./bin/pulsar sql-worker run

```

3. After initializing the Pulsar standalone cluster and the SQL worker, run the SQL CLI.

```bash

./bin/pulsar sql

```

4. Test with SQL commands.

```bash

presto> show catalogs;
 Catalog
---------
 pulsar
 system
(2 rows)

Query 20180829_211752_00004_7qpwh, FINISHED, 1 node
Splits: 19 total, 19 done (100.00%)
0:00 [0 rows, 0B] [0 rows/s, 0B/s]


presto> show schemas in pulsar;
        Schema
-----------------------
 information_schema
 public/default
 public/functions
 sample/standalone/ns1
(4 rows)

Query 20180829_211818_00005_7qpwh, FINISHED, 1 node
Splits: 19 total, 19 done (100.00%)
0:00 [4 rows, 89B] [21 rows/s, 471B/s]


presto> show tables in pulsar."public/default";
 Table
-------
(0 rows)

Query 20180829_211839_00006_7qpwh, FINISHED, 1 node
Splits: 19 total, 19 done (100.00%)
0:00 [0 rows, 0B] [0 rows/s, 0B/s]

```

Since there is no data in Pulsar yet, no records are returned.

5. Start the built-in connector _DataGeneratorSource_ and ingest some mock data.

```bash

./bin/pulsar-admin sources create --name generator --destinationTopicName generator_test --source-type data-generator

```

Then you can query a topic in the namespace "public/default".

```bash

presto> show tables in pulsar."public/default";
     Table
----------------
 generator_test
(1 row)

Query 20180829_213202_00000_csyeu, FINISHED, 1 node
Splits: 19 total, 19 done (100.00%)
0:02 [1 rows, 38B] [0 rows/s, 17B/s]

```

You can now query the data within the topic "generator_test".

```bash

presto> select * from pulsar."public/default".generator_test;

  firstname  | middlename  |  lastname   |              email               |   username   | password | telephonenumber | age |                 companyemail                  | nationalidentitycardnumber |
-------------+-------------+-------------+----------------------------------+--------------+----------+-----------------+-----+-----------------------------------------------+----------------------------+
 Genesis     | Katherine   | Wiley       | genesis.wiley@gmail.com          | genesisw     | y9D2dtU3 | 959-197-1860    |  71 | genesis.wiley@interdemconsulting.eu           | 880-58-9247                |
 Brayden     |             | Stanton     | brayden.stanton@yahoo.com        | braydens     | ZnjmhXik | 220-027-867     |  81 | brayden.stanton@supermemo.eu                  | 604-60-7069                |
 Benjamin    | Julian      | Velasquez   | benjamin.velasquez@yahoo.com     | benjaminv    | 8Bc7m3eb | 298-377-0062    |  21 | benjamin.velasquez@hostesltd.biz              | 213-32-5882                |
 Michael     | Thomas      | Donovan     | donovan@mail.com                 | michaeld     | OqBm9MLs | 078-134-4685    |  55 | michael.donovan@memortech.eu                  | 443-30-3442                |
 Brooklyn    | Avery       | Roach       | brooklynroach@yahoo.com          | broach       | IxtBLafO | 387-786-2998    |  68 | brooklyn.roach@warst.biz                      | 085-88-3973                |
 Skylar      |             | Bradshaw    | skylarbradshaw@yahoo.com         | skylarb      | p6eC6cKy | 210-872-608     |  96 | skylar.bradshaw@flyhigh.eu                    | 453-46-0334                |
.
.
.

```

You can query the mock data.

## Query your own data
If you want to query your own data, you need to ingest it first. You can write a simple producer that writes custom-defined data to Pulsar. The following is an example.

```java

import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.impl.schema.AvroSchema;

public class TestProducer {

    public static class Foo {
        private int field1 = 1;
        private String field2;
        private long field3;

        public Foo() {
        }

        public int getField1() {
            return field1;
        }

        public void setField1(int field1) {
            this.field1 = field1;
        }

        public String getField2() {
            return field2;
        }

        public void setField2(String field2) {
            this.field2 = field2;
        }

        public long getField3() {
            return field3;
        }

        public void setField3(long field3) {
            this.field3 = field3;
        }
    }

    public static void main(String[] args) throws Exception {
        PulsarClient pulsarClient = PulsarClient.builder().serviceUrl("pulsar://localhost:6650").build();
        // Produce Foo records with an Avro schema, so Pulsar SQL can map the fields to columns
        Producer<Foo> producer = pulsarClient.newProducer(AvroSchema.of(Foo.class)).topic("test_topic").create();

        for (int i = 0; i < 1000; i++) {
            Foo foo = new Foo();
            foo.setField1(i);
            foo.setField2("foo" + i);
            foo.setField3(System.currentTimeMillis());
            producer.newMessage().value(foo).send();
        }
        producer.close();
        pulsarClient.close();
    }
}

```

diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/sql-overview.md b/site2/website/versioned_docs/version-2.8.3-deprecated/sql-overview.md
deleted file mode 100644
index 0235d0256c5f19..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/sql-overview.md
+++ /dev/null
@@ -1,18 +0,0 @@
---
id: sql-overview
title: Pulsar SQL Overview
sidebar_label: "Overview"
original_id: sql-overview
---

Apache Pulsar is used to store streams of event data, and the event data is structured with predefined fields. With the implementation of the [Schema Registry](schema-get-started.md), you can store structured data in Pulsar and query the data by using [Trino (formerly Presto SQL)](https://trino.io/).

As the core of Pulsar SQL, the Presto Pulsar connector enables Presto workers within a Presto cluster to query data from Pulsar.

![The Pulsar consumer and reader interfaces](/assets/pulsar-sql-arch-2.png)

The query performance is efficient and highly scalable, because Pulsar adopts a [two-level, segment-based architecture](concepts-architecture-overview.md#apache-bookkeeper).

Topics in Pulsar are stored as segments in [Apache BookKeeper](https://bookkeeper.apache.org/). Each topic segment is replicated to some BookKeeper nodes, which enables concurrent reads and high read throughput. You can configure the number of BookKeeper nodes, and the default number is `3`. In the Presto Pulsar connector, data is read directly from BookKeeper, so Presto workers can read concurrently from a horizontally scalable number of BookKeeper nodes.

![The Pulsar consumer and reader interfaces](/assets/pulsar-sql-arch-1.png)
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/sql-rest-api.md b/site2/website/versioned_docs/version-2.8.3-deprecated/sql-rest-api.md
deleted file mode 100644
index c92fd62f7d8703..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/sql-rest-api.md
+++ /dev/null
@@ -1,192 +0,0 @@
---
id: sql-rest-api
title: Pulsar SQL REST APIs
sidebar_label: "REST APIs"
original_id: sql-rest-api
---

This section lists resources that make up the Presto REST API v1.

## Request for Presto services

All requests for Presto services should use version v1 of the Presto REST API.

To request services, use the explicit URL `http://presto.service:8081/v1`, replacing `presto.service:8081` with your actual Presto address before sending requests.

`POST` requests require the `X-Presto-User` header. If you use authentication, you must use the same `username` that is specified in the authentication configuration. If you do not use authentication, you can specify anything for `username`.

```properties

X-Presto-User: username

```

For more information about headers, refer to [PrestoHeaders](https://github.com/trinodb/trino).

## Schema

You send the SQL statement in the HTTP body. All data is received as a JSON document that might contain a `nextUri` link. If the received JSON document contains a `nextUri` link, the request continues with that link until the received data no longer contains one. If no error is returned, the query has completed successfully. If an `error` field appears in `stats`, the query has failed.

The following is an example of `show catalogs`. The query continues until the received JSON document does not contain a `nextUri` link. Since no `error` is displayed in `stats`, it means that the query completes successfully.
- -```powershell - -➜ ~ curl --header "X-Presto-User: test-user" --request POST --data 'show catalogs' http://localhost:8081/v1/statement -{ - "infoUri" : "http://localhost:8081/ui/query.html?20191113_033653_00006_dg6hb", - "stats" : { - "queued" : true, - "nodes" : 0, - "userTimeMillis" : 0, - "cpuTimeMillis" : 0, - "wallTimeMillis" : 0, - "processedBytes" : 0, - "processedRows" : 0, - "runningSplits" : 0, - "queuedTimeMillis" : 0, - "queuedSplits" : 0, - "completedSplits" : 0, - "totalSplits" : 0, - "scheduled" : false, - "peakMemoryBytes" : 0, - "state" : "QUEUED", - "elapsedTimeMillis" : 0 - }, - "id" : "20191113_033653_00006_dg6hb", - "nextUri" : "http://localhost:8081/v1/statement/20191113_033653_00006_dg6hb/1" -} - -➜ ~ curl http://localhost:8081/v1/statement/20191113_033653_00006_dg6hb/1 -{ - "infoUri" : "http://localhost:8081/ui/query.html?20191113_033653_00006_dg6hb", - "nextUri" : "http://localhost:8081/v1/statement/20191113_033653_00006_dg6hb/2", - "id" : "20191113_033653_00006_dg6hb", - "stats" : { - "state" : "PLANNING", - "totalSplits" : 0, - "queued" : false, - "userTimeMillis" : 0, - "completedSplits" : 0, - "scheduled" : false, - "wallTimeMillis" : 0, - "runningSplits" : 0, - "queuedSplits" : 0, - "cpuTimeMillis" : 0, - "processedRows" : 0, - "processedBytes" : 0, - "nodes" : 0, - "queuedTimeMillis" : 1, - "elapsedTimeMillis" : 2, - "peakMemoryBytes" : 0 - } -} - -➜ ~ curl http://localhost:8081/v1/statement/20191113_033653_00006_dg6hb/2 -{ - "id" : "20191113_033653_00006_dg6hb", - "data" : [ - [ - "pulsar" - ], - [ - "system" - ] - ], - "infoUri" : "http://localhost:8081/ui/query.html?20191113_033653_00006_dg6hb", - "columns" : [ - { - "typeSignature" : { - "rawType" : "varchar", - "arguments" : [ - { - "kind" : "LONG_LITERAL", - "value" : 6 - } - ], - "literalArguments" : [], - "typeArguments" : [] - }, - "name" : "Catalog", - "type" : "varchar(6)" - } - ], - "stats" : { - "wallTimeMillis" : 104, - "scheduled" : true, - "userTimeMillis" : 14, - "progressPercentage" : 100, - "totalSplits" : 19, - "nodes" : 1, - "cpuTimeMillis" : 16, - "queued" : false, - "queuedTimeMillis" : 1, - "state" : "FINISHED", - "peakMemoryBytes" : 0, - "elapsedTimeMillis" : 111, - "processedBytes" : 0, - "processedRows" : 0, - "queuedSplits" : 0, - "rootStage" : { - "cpuTimeMillis" : 1, - "runningSplits" : 0, - "state" : "FINISHED", - "completedSplits" : 1, - "subStages" : [ - { - "cpuTimeMillis" : 14, - "runningSplits" : 0, - "state" : "FINISHED", - "completedSplits" : 17, - "subStages" : [ - { - "wallTimeMillis" : 7, - "subStages" : [], - "stageId" : "2", - "done" : true, - "nodes" : 1, - "totalSplits" : 1, - "processedBytes" : 22, - "processedRows" : 2, - "queuedSplits" : 0, - "userTimeMillis" : 1, - "cpuTimeMillis" : 1, - "runningSplits" : 0, - "state" : "FINISHED", - "completedSplits" : 1 - } - ], - "wallTimeMillis" : 92, - "nodes" : 1, - "done" : true, - "stageId" : "1", - "userTimeMillis" : 12, - "processedRows" : 2, - "processedBytes" : 51, - "queuedSplits" : 0, - "totalSplits" : 17 - } - ], - "wallTimeMillis" : 5, - "done" : true, - "nodes" : 1, - "stageId" : "0", - "userTimeMillis" : 1, - "processedRows" : 2, - "processedBytes" : 22, - "totalSplits" : 1, - "queuedSplits" : 0 - }, - "runningSplits" : 0, - "completedSplits" : 19 - } -} - -``` - -:::note - -Since the response data is not in sync with the query state from the perspective of clients, you cannot rely on the response data to determine whether the query completes. 
- -::: - -For more information about Presto REST API, refer to [Presto HTTP Protocol](https://github.com/prestosql/presto/wiki/HTTP-Protocol). diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/standalone.md b/site2/website/versioned_docs/version-2.8.3-deprecated/standalone.md deleted file mode 100644 index 25afa11a91b117..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/standalone.md +++ /dev/null @@ -1,271 +0,0 @@ ---- -id: standalone -title: Set up a standalone Pulsar locally -sidebar_label: "Run Pulsar locally" -original_id: standalone ---- - -For local development and testing, you can run Pulsar in standalone mode on your machine. The standalone mode includes a Pulsar broker, the necessary ZooKeeper and BookKeeper components running inside of a single Java Virtual Machine (JVM) process. - -> #### Pulsar in production? -> If you're looking to run a full production Pulsar installation, see the [Deploying a Pulsar instance](deploy-bare-metal.md) guide. - -## Install Pulsar standalone - -This tutorial guides you through every step of the installation process. - -### System requirements - -Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install 64-bit JRE/JDK 8 or later versions, JRE/JDK 11 is recommended. - -:::tip - -By default, Pulsar allocates 2G JVM heap memory to start. It can be changed in `conf/pulsar_env.sh` file under `PULSAR_MEM`. This is extra options passed into JVM. - -::: - -:::note - -Broker is only supported on 64-bit JVM. - -::: - -### Install Pulsar using binary release - -To get started with Pulsar, download a binary tarball release in one of the following ways: - -* download from the Apache mirror (Pulsar @pulsar:version@ binary release) - -* download from the Pulsar [downloads page](pulsar:download_page_url) - -* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) - -* use [wget](https://www.gnu.org/software/wget): - - ```shell - - $ wget pulsar:binary_release_url - - ``` - -After you download the tarball, untar it and use the `cd` command to navigate to the resulting directory: - -```bash - -$ tar xvfz apache-pulsar-@pulsar:version@-bin.tar.gz -$ cd apache-pulsar-@pulsar:version@ - -``` - -#### What your package contains - -The Pulsar binary package initially contains the following directories: - -Directory | Contains -:---------|:-------- -`bin` | Pulsar's command-line tools, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/). -`conf` | Configuration files for Pulsar, including [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more. -`examples` | A Java JAR file containing [Pulsar Functions](functions-overview.md) example. -`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files used by Pulsar. -`licenses` | License files, in the`.txt` form, for various components of the Pulsar [codebase](https://github.com/apache/pulsar). - -These directories are created once you begin running Pulsar. - -Directory | Contains -:---------|:-------- -`data` | The data storage directory used by ZooKeeper and BookKeeper. -`instances` | Artifacts created for [Pulsar Functions](functions-overview.md). -`logs` | Logs created by the installation. 
- -:::tip - -If you want to use builtin connectors and tiered storage offloaders, you can install them according to the following instructions: -* [Install builtin connectors (optional)](#install-builtin-connectors-optional) -* [Install tiered storage offloaders (optional)](#install-tiered-storage-offloaders-optional) -Otherwise, skip this step and perform the next step [Start Pulsar standalone](#start-pulsar-standalone). Pulsar can be successfully installed without installing bulitin connectors and tiered storage offloaders. - -::: - -### Install builtin connectors (optional) - -Since `2.1.0-incubating` release, Pulsar releases a separate binary distribution, containing all the `builtin` connectors. -To enable those `builtin` connectors, you can download the connectors tarball release in one of the following ways: - -* download from the Apache mirror Pulsar IO Connectors @pulsar:version@ release - -* download from the Pulsar [downloads page](pulsar:download_page_url) - -* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) - -* use [wget](https://www.gnu.org/software/wget): - - ```shell - - $ wget pulsar:connector_release_url/{connector}-@pulsar:version@.nar - - ``` - -After you download the nar file, copy the file to the `connectors` directory in the pulsar directory. -For example, if you download the `pulsar-io-aerospike-@pulsar:version@.nar` connector file, enter the following commands: - -```bash - -$ mkdir connectors -$ mv pulsar-io-aerospike-@pulsar:version@.nar connectors - -$ ls connectors -pulsar-io-aerospike-@pulsar:version@.nar -... - -``` - -:::note - -* If you are running Pulsar in a bare metal cluster, make sure `connectors` tarball is unzipped in every pulsar directory of the broker -(or in every pulsar directory of function-worker if you are running a separate worker cluster for Pulsar Functions). -* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DC/OS](https://dcos.io/)), -you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled [all builtin connectors](io-overview.md#working-with-connectors). - -::: - -### Install tiered storage offloaders (optional) - -:::tip - -Since `2.2.0` release, Pulsar releases a separate binary distribution, containing the tiered storage offloaders. -To enable tiered storage feature, follow the instructions below; otherwise skip this section. 
- -::: - -To get started with [tiered storage offloaders](concepts-tiered-storage.md), you need to download the offloaders tarball release on every broker node in one of the following ways: - -* download from the Apache mirror Pulsar Tiered Storage Offloaders @pulsar:version@ release - -* download from the Pulsar [downloads page](pulsar:download_page_url) - -* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) - -* use [wget](https://www.gnu.org/software/wget): - - ```shell - - $ wget pulsar:offloader_release_url - - ``` - -After you download the tarball, untar the offloaders package and copy the offloaders as `offloaders` -in the pulsar directory: - -```bash - -$ tar xvfz apache-pulsar-offloaders-@pulsar:version@-bin.tar.gz - -// you will find a directory named `apache-pulsar-offloaders-@pulsar:version@` in the pulsar directory -// then copy the offloaders - -$ mv apache-pulsar-offloaders-@pulsar:version@/offloaders offloaders - -$ ls offloaders -tiered-storage-jcloud-@pulsar:version@.nar - -``` - -For more information on how to configure tiered storage, see [Tiered storage cookbook](cookbooks-tiered-storage.md). - -:::note - -* If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's pulsar directory. -* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DC/OS](https://dcos.io/)), -you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - -::: - -## Start Pulsar standalone - -Once you have an up-to-date local copy of the release, you can start a local cluster using the [`pulsar`](reference-cli-tools.md#pulsar) command, which is stored in the `bin` directory, and specifying that you want to start Pulsar in standalone mode. - -```bash - -$ bin/pulsar standalone - -``` - -If you have started Pulsar successfully, you will see `INFO`-level log messages like this: - -```bash - -2017-06-01 14:46:29,192 - INFO - [main:WebSocketService@95] - Configuration Store cache started -2017-06-01 14:46:29,192 - INFO - [main:AuthenticationService@61] - Authentication is disabled -2017-06-01 14:46:29,192 - INFO - [main:WebSocketService@108] - Pulsar WebSocket Service started - -``` - -:::tip - -* The service is running on your terminal, which is under your direct control. If you need to run other commands, open a new terminal window. - -::: - -You can also run the service as a background process using the `pulsar-daemon start standalone` command. For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon). -> -> * By default, there is no encryption, authentication, or authorization configured. Apache Pulsar can be accessed from remote server without any authorization. Please do check [Security Overview](security-overview.md) document to secure your deployment. -> -> * When you start a local standalone cluster, a `public/default` [namespace](concepts-messaging.md#namespaces) is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics). - -## Use Pulsar standalone - -Pulsar provides a CLI tool called [`pulsar-client`](reference-cli-tools.md#pulsar-client). 
The pulsar-client tool enables you to consume and produce messages to a Pulsar topic in a running cluster. - -### Consume a message - -The following command consumes a message with the subscription name `first-subscription` to the `my-topic` topic: - -```bash - -$ bin/pulsar-client consume my-topic -s "first-subscription" - -``` - -If the message has been successfully consumed, you will see a confirmation like the following in the `pulsar-client` logs: - -``` - -09:56:55.566 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.MultiTopicsConsumerImpl - [TopicsConsumerFakeTopicNamee2df9] [first-subscription] Success subscribe new topic my-topic in topics consumer, partitions: 4, allTopicPartitionsNumber: 4 - -``` - -:::tip - -As you have noticed that we do not explicitly create the `my-topic` topic, from which we consume the message. When you consume a message from a topic that does not yet exist, Pulsar creates that topic for you automatically. Producing a message to a topic that does not exist will automatically create that topic for you as well. - -::: - -### Produce a message - -The following command produces a message saying `hello-pulsar` to the `my-topic` topic: - -```bash - -$ bin/pulsar-client produce my-topic --messages "hello-pulsar" - -``` - -If the message has been successfully published to the topic, you will see a confirmation like the following in the `pulsar-client` logs: - -``` - -13:09:39.356 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully produced - -``` - -## Stop Pulsar standalone - -Press `Ctrl+C` to stop a local standalone Pulsar. - -:::tip - -If the service runs as a background process using the `pulsar-daemon start standalone` command, then use the `pulsar-daemon stop standalone` command to stop the service. -For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon). - -::: - diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/tiered-storage-aliyun.md b/site2/website/versioned_docs/version-2.8.3-deprecated/tiered-storage-aliyun.md deleted file mode 100644 index 5772f162b5e26d..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/tiered-storage-aliyun.md +++ /dev/null @@ -1,257 +0,0 @@ ---- -id: tiered-storage-aliyun -title: Use Aliyun OSS offloader with Pulsar -sidebar_label: "Aliyun OSS offloader" -original_id: tiered-storage-aliyun ---- - -This chapter guides you through every step of installing and configuring the Aliyun Object Storage Service (OSS) offloader and using it with Pulsar. - -## Installation - -Follow the steps below to install the Aliyun OSS offloader. - -### Prerequisite - -- Pulsar: 2.8.0 or later versions - -### Step - -This example uses Pulsar 2.8.0. - -1. Download the Pulsar tarball, see [here](https://pulsar.apache.org/docs/en/standalone/#install-pulsar-using-binary-release). - -2. Download and untar the Pulsar offloaders package, then copy the Pulsar offloaders as `offloaders` in the Pulsar directory, see [here](https://pulsar.apache.org/docs/en/standalone/#install-tiered-storage-offloaders-optional). - - **Output** - - As shown from the output, Pulsar uses [Apache jclouds](https://jclouds.apache.org) to support [AWS S3](https://aws.amazon.com/s3/), [GCS](https://cloud.google.com/storage/), [Azure](https://portal.azure.com/#home), and [Aliyun OSS](https://www.aliyun.com/product/oss) for long-term storage. 
- - ``` - - tiered-storage-file-system-2.8.0.nar - tiered-storage-jcloud-2.8.0.nar - - ``` - - :::note - - * If you are running Pulsar in a bare-metal cluster, make sure that `offloaders` tarball is unzipped in every broker's Pulsar directory. - * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8s and DCOS), you can use the `apachepulsar/pulsar-all` image. The `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - - ::: - -## Configuration - -:::note - -Before offloading data from BookKeeper to Aliyun OSS, you need to configure some properties of the Aliyun OSS offload driver. - -::: - -Besides, you can also configure the Aliyun OSS offloader to run it automatically or trigger it manually. - -### Configure Aliyun OSS offloader driver - -You can configure the Aliyun OSS offloader driver in the configuration file `broker.conf` or `standalone.conf`. - -- **Required** configurations are as below. - - | Required configuration | Description | Example value | - | --- | --- |--- | - | `managedLedgerOffloadDriver` | Offloader driver name, which is case-insensitive. | aliyun-oss | - | `offloadersDirectory` | Offloader directory | offloaders | - | `managedLedgerOffloadBucket` | Bucket | pulsar-topic-offload | - | `managedLedgerOffloadServiceEndpoint` | Endpoint | http://oss-cn-hongkong.aliyuncs.com | - -- **Optional** configurations are as below. - - | Optional | Description | Example value | - | --- | --- | --- | - | `managedLedgerOffloadReadBufferSizeInBytes` | Size of block read | 1 MB | - | `managedLedgerOffloadMaxBlockSizeInBytes` | Size of block write | 64 MB | - | `managedLedgerMinLedgerRolloverTimeMinutes` | Minimum time between ledger rollover for a topic

    **Note**: it is not recommended that you set this configuration in the production environment. | 2 | - | `managedLedgerMaxEntriesPerLedger` | Maximum number of entries to append to a ledger before triggering a rollover.

    **Note**: it is not recommended that you set this configuration in the production environment. | 5000 | - -#### Bucket (required) - -A bucket is a basic container that holds your data. Everything you store in Aliyun OSS must be contained in a bucket. You can use a bucket to organize your data and control access to your data, but unlike directory and folder, you cannot nest a bucket. - -##### Example - -This example names the bucket as _pulsar-topic-offload_. - -```conf - -managedLedgerOffloadBucket=pulsar-topic-offload - -``` - -#### Endpoint (required) - -The endpoint is the region where a bucket is located. - -:::tip - -For more information about Aliyun OSS regions and endpoints, see [International website](https://www.alibabacloud.com/help/doc-detail/31837.htm) or [Chinese website](https://help.aliyun.com/document_detail/31837.html). - -::: - - -##### Example - -This example sets the endpoint as _oss-us-west-1-internal_. - -``` - -managedLedgerOffloadServiceEndpoint=http://oss-us-west-1-internal.aliyuncs.com - -``` - -#### Authentication (required) - -To be able to access Aliyun OSS, you need to authenticate with Aliyun OSS. - -Set the environment variables `ALIYUN_OSS_ACCESS_KEY_ID` and `ALIYUN_OSS_ACCESS_KEY_SECRET` in `conf/pulsar_env.sh`. - -"export" is important so that the variables are made available in the environment of spawned processes. - -```bash - -export ALIYUN_OSS_ACCESS_KEY_ID=ABC123456789 -export ALIYUN_OSS_ACCESS_KEY_SECRET=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c - -``` - -#### Size of block read/write - -You can configure the size of a request sent to or read from Aliyun OSS in the configuration file `broker.conf` or `standalone.conf`. - -| Configuration | Description | Default value | -| --- | --- | --- | -| `managedLedgerOffloadReadBufferSizeInBytes` | Block size for each individual read when reading back data from Aliyun OSS. | 1 MB | -| `managedLedgerOffloadMaxBlockSizeInBytes` | Maximum size of a "part" sent during a multipart upload to Aliyun OSS. It **cannot** be smaller than 5 MB. | 64 MB | - -### Run Aliyun OSS offloader automatically - -Namespace policy can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic reaches the threshold, an offloading operation is triggered automatically. - -| Threshold value | Action | -| --- | --- | -| > 0 | It triggers the offloading operation if the topic storage reaches its threshold. | -| = 0 | It causes a broker to offload data as soon as possible. | -| < 0 | It disables automatic offloading operation. | - -Automatic offloading runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, the offloader does not work until the current segment is full. - -You can configure the threshold size using CLI tools, such as pulsar-admin. - -The offload configurations in `broker.conf` and `standalone.conf` are used for the namespaces that do not have namespace level offload policies. Each namespace can have its own offload policy. If you want to set offload policy for each namespace, use the command [`pulsar-admin namespaces set-offload-policies options`](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-policies-em-) command. - -#### Example - -This example sets the Aliyun OSS offloader threshold size to 10 MB using pulsar-admin. 
- -```bash - -bin/pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace - -``` - -:::tip - -For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-threshold-em-). - -::: - -### Run Aliyun OSS offloader manually - -For individual topics, you can trigger the Aliyun OSS offloader manually using one of the following methods: - -- Use REST endpoint. - -- Use CLI tools (such as pulsar-admin). - - To trigger it via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are moved to Aliyun OSS until the threshold is no longer exceeded. Older segments are moved first. - -#### Example - -- This example triggers the Aliyun OSS offloader to run manually using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload --size-threshold 10M my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1 - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-em-). - - ::: - -- This example checks the Aliyun OSS offloader status using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload is currently running - - ``` - - To wait for the Aliyun OSS offloader to complete the job, add the `-w` flag. - - ```bash - - bin/pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Offload was a success - - ``` - - If there is an error in offloading, the error is propagated to the `pulsar-admin topics offload-status` command. - - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Error in offload - null - - Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g= - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-status-em-). 
- - ::: - diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/tiered-storage-aws.md b/site2/website/versioned_docs/version-2.8.3-deprecated/tiered-storage-aws.md deleted file mode 100644 index 5d3076f49cf5a2..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/tiered-storage-aws.md +++ /dev/null @@ -1,329 +0,0 @@ ---- -id: tiered-storage-aws -title: Use AWS S3 offloader with Pulsar -sidebar_label: "AWS S3 offloader" -original_id: tiered-storage-aws ---- - -This chapter guides you through every step of installing and configuring the AWS S3 offloader and using it with Pulsar. - -## Installation - -Follow the steps below to install the AWS S3 offloader. - -### Prerequisite - -- Pulsar: 2.4.2 or later versions - -### Step - -This example uses Pulsar 2.5.1. - -1. Download the Pulsar tarball using one of the following ways: - - * Download from the [Apache mirror](https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz) - - * Download from the Pulsar [downloads page](https://pulsar.apache.org/download) - - * Use [wget](https://www.gnu.org/software/wget): - - ```shell - - wget https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz - - ``` - -2. Download and untar the Pulsar offloaders package. - - ```bash - - wget https://downloads.apache.org/pulsar/pulsar-2.5.1/apache-pulsar-offloaders-2.5.1-bin.tar.gz - tar xvfz apache-pulsar-offloaders-2.5.1-bin.tar.gz - - ``` - -3. Copy the Pulsar offloaders as `offloaders` in the Pulsar directory. - - ``` - - mv apache-pulsar-offloaders-2.5.1/offloaders apache-pulsar-2.5.1/offloaders - - ls offloaders - - ``` - - **Output** - - As shown from the output, Pulsar uses [Apache jclouds](https://jclouds.apache.org) to support [AWS S3](https://aws.amazon.com/s3/) and [GCS](https://cloud.google.com/storage/) for long term storage. - - ``` - - tiered-storage-file-system-2.5.1.nar - tiered-storage-jcloud-2.5.1.nar - - ``` - - :::note - - * If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's Pulsar directory. - * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8s and DCOS), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - - ::: - -## Configuration - -:::note - -Before offloading data from BookKeeper to AWS S3, you need to configure some properties of the AWS S3 offload driver. - -::: - -Besides, you can also configure the AWS S3 offloader to run it automatically or trigger it manually. - -### Configure AWS S3 offloader driver - -You can configure the AWS S3 offloader driver in the configuration file `broker.conf` or `standalone.conf`. - -- **Required** configurations are as below. - - Required configuration | Description | Example value - |---|---|--- - `managedLedgerOffloadDriver` | Offloader driver name, which is case-insensitive.

    **Note**: there is a third driver type, S3, which is identical to AWS S3, though S3 requires that you specify an endpoint URL using `s3ManagedLedgerOffloadServiceEndpoint`. This is useful when you use an S3-compatible data store other than AWS S3. | aws-s3 - `offloadersDirectory` | Offloader directory | offloaders - `s3ManagedLedgerOffloadBucket` | Bucket | pulsar-topic-offload - -- **Optional** configurations are as below. - - Optional | Description | Example value - |---|---|--- - `s3ManagedLedgerOffloadRegion` | Bucket region

    **Note**: before specifying a value for this parameter, you need to set the following configurations. Otherwise, you might get an error.

    - Set [`s3ManagedLedgerOffloadServiceEndpoint`](https://docs.aws.amazon.com/general/latest/gr/s3.html).

    Example
    `s3ManagedLedgerOffloadServiceEndpoint=https://s3.YOUR_REGION.amazonaws.com`

    - Grant `GetBucketLocation` permission to a user.

    For instructions on how to grant the `GetBucketLocation` permission to a user, see [here](https://docs.aws.amazon.com/AmazonS3/latest/dev/using-with-s3-actions.html#using-with-s3-actions-related-to-buckets).| eu-west-3 - `s3ManagedLedgerOffloadReadBufferSizeInBytes`|Size of block read|1 MB - `s3ManagedLedgerOffloadMaxBlockSizeInBytes`|Size of block write|64 MB - `managedLedgerMinLedgerRolloverTimeMinutes`|Minimum time between ledger rollover for a topic

    **Note**: it is not recommended that you set this configuration in the production environment.|2 - `managedLedgerMaxEntriesPerLedger`|Maximum number of entries to append to a ledger before triggering a rollover.

    **Note**: it is not recommended that you set this configuration in the production environment.|5000 - -#### Bucket (required) - -A bucket is a basic container that holds your data. Everything you store in AWS S3 must be contained in a bucket. You can use a bucket to organize your data and control access to your data, but unlike directory and folder, you cannot nest a bucket. - -##### Example - -This example names the bucket as _pulsar-topic-offload_. - -```conf - -s3ManagedLedgerOffloadBucket=pulsar-topic-offload - -``` - -#### Bucket region - -A bucket region is a region where a bucket is located. If a bucket region is not specified, the **default** region (`US East (N. Virginia)`) is used. - -:::tip - -For more information about AWS regions and endpoints, see [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). - -::: - - -##### Example - -This example sets the bucket region as _europe-west-3_. - -``` - -s3ManagedLedgerOffloadRegion=eu-west-3 - -``` - -#### Authentication (required) - -To be able to access AWS S3, you need to authenticate with AWS S3. - -Pulsar does not provide any direct methods of configuring authentication for AWS S3, -but relies on the mechanisms supported by the [DefaultAWSCredentialsProviderChain](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html). - -Once you have created a set of credentials in the AWS IAM console, you can configure credentials using one of the following methods. - -* Use EC2 instance metadata credentials. - - If you are on AWS instance with an instance profile that provides credentials, Pulsar uses these credentials if no other mechanism is provided. - -* Set the environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` in `conf/pulsar_env.sh`. - - "export" is important so that the variables are made available in the environment of spawned processes. - - ```bash - - export AWS_ACCESS_KEY_ID=ABC123456789 - export AWS_SECRET_ACCESS_KEY=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c - - ``` - -* Add the Java system properties `aws.accessKeyId` and `aws.secretKey` to `PULSAR_EXTRA_OPTS` in `conf/pulsar_env.sh`. - - ```bash - - PULSAR_EXTRA_OPTS="${PULSAR_EXTRA_OPTS} ${PULSAR_MEM} ${PULSAR_GC} -Daws.accessKeyId=ABC123456789 -Daws.secretKey=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c -Dio.netty.leakDetectionLevel=disabled -Dio.netty.recycler.maxCapacity.default=1000 -Dio.netty.recycler.linkCapacity=1024" - - ``` - -* Set the access credentials in `~/.aws/credentials`. - - ```conf - - [default] - aws_access_key_id=ABC123456789 - aws_secret_access_key=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c - - ``` - -* Assume an IAM role. - - This example uses the `DefaultAWSCredentialsProviderChain` for assuming this role. - - The broker must be rebooted for credentials specified in `pulsar_env` to take effect. - - ```conf - - s3ManagedLedgerOffloadRole= - s3ManagedLedgerOffloadRoleSessionName=pulsar-s3-offload - - ``` - -#### Size of block read/write - -You can configure the size of a request sent to or read from AWS S3 in the configuration file `broker.conf` or `standalone.conf`. - -Configuration|Description|Default value -|---|---|--- -`s3ManagedLedgerOffloadReadBufferSizeInBytes`|Block size for each individual read when reading back data from AWS S3.|1 MB -`s3ManagedLedgerOffloadMaxBlockSizeInBytes`|Maximum size of a "part" sent during a multipart upload to AWS S3. It **cannot** be smaller than 5 MB. 
|64 MB - -### Configure AWS S3 offloader to run automatically - -Namespace policy can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic reaches the threshold, an offloading operation is triggered automatically. - -Threshold value|Action -|---|--- -> 0 | It triggers the offloading operation if the topic storage reaches its threshold. -= 0|It causes a broker to offload data as soon as possible. -< 0 |It disables automatic offloading operation. - -Automatic offloading runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, offloader does not work until the current segment is full. - -You can configure the threshold size using CLI tools, such as pulsar-admin. - -The offload configurations in `broker.conf` and `standalone.conf` are used for the namespaces that do not have namespace level offload policies. Each namespace can have its own offload policy. If you want to set offload policy for each namespace, use the command [`pulsar-admin namespaces set-offload-policies options`](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-policies-em-) command. - -#### Example - -This example sets the AWS S3 offloader threshold size to 10 MB using pulsar-admin. - -```bash - -bin/pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace - -``` - -:::tip - -For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-threshold-em-). - -::: - -### Configure AWS S3 offloader to run manually - -For individual topics, you can trigger AWS S3 offloader manually using one of the following methods: - -- Use REST endpoint. - -- Use CLI tools (such as pulsar-admin). - - To trigger it via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are moved to AWS S3 until the threshold is no longer exceeded. Older segments are moved first. - -#### Example - -- This example triggers the AWS S3 offloader to run manually using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload --size-threshold 10M my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1 - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-em-). - - ::: - -- This example checks the AWS S3 offloader status using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload is currently running - - ``` - - To wait for the AWS S3 offloader to complete the job, add the `-w` flag. - - ```bash - - bin/pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Offload was a success - - ``` - - If there is an error in offloading, the error is propagated to the `pulsar-admin topics offload-status` command. 
- - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Error in offload - null - - Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g= - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-status-em-). - - ::: - -## Tutorial - -For the complete and step-by-step instructions on how to use the AWS S3 offloader with Pulsar, see [here](https://hub.streamnative.io/offloaders/aws-s3/2.5.1#usage). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/tiered-storage-azure.md b/site2/website/versioned_docs/version-2.8.3-deprecated/tiered-storage-azure.md deleted file mode 100644 index e1485af3984e31..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/tiered-storage-azure.md +++ /dev/null @@ -1,264 +0,0 @@ ---- -id: tiered-storage-azure -title: Use Azure BlobStore offloader with Pulsar -sidebar_label: "Azure BlobStore offloader" -original_id: tiered-storage-azure ---- - -This chapter guides you through every step of installing and configuring the Azure BlobStore offloader and using it with Pulsar. - -## Installation - -Follow the steps below to install the Azure BlobStore offloader. - -### Prerequisite - -- Pulsar: 2.6.2 or later versions - -### Step - -This example uses Pulsar 2.6.2. - -1. Download the Pulsar tarball using one of the following ways: - - * Download from the [Apache mirror](https://archive.apache.org/dist/pulsar/pulsar-2.6.2/apache-pulsar-2.6.2-bin.tar.gz) - - * Download from the Pulsar [downloads page](https://pulsar.apache.org/download) - - * Use [wget](https://www.gnu.org/software/wget): - - ```shell - - wget https://archive.apache.org/dist/pulsar/pulsar-2.6.2/apache-pulsar-2.6.2-bin.tar.gz - - ``` - -2. Download and untar the Pulsar offloaders package. - - ```bash - - wget https://downloads.apache.org/pulsar/pulsar-2.6.2/apache-pulsar-offloaders-2.6.2-bin.tar.gz - tar xvfz apache-pulsar-offloaders-2.6.2-bin.tar.gz - - ``` - -3. Copy the Pulsar offloaders as `offloaders` in the Pulsar directory. - - ``` - - mv apache-pulsar-offloaders-2.6.2/offloaders apache-pulsar-2.6.2/offloaders - - ls offloaders - - ``` - - **Output** - - As shown from the output, Pulsar uses [Apache jclouds](https://jclouds.apache.org) to support [AWS S3](https://aws.amazon.com/s3/), [GCS](https://cloud.google.com/storage/) and [Azure](https://portal.azure.com/#home) for long term storage. - - ``` - - tiered-storage-file-system-2.6.2.nar - tiered-storage-jcloud-2.6.2.nar - - ``` - - :::note - - * If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's Pulsar directory. 
    - * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8s and DCOS), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. The `apachepulsar/pulsar-all` image already bundles tiered storage offloaders. - - ::: - -## Configuration - -:::note - -Before offloading data from BookKeeper to Azure BlobStore, you need to configure some properties of the Azure BlobStore offload driver. - -::: - -In addition, you can configure the Azure BlobStore offloader to run automatically or trigger it manually. - -### Configure Azure BlobStore offloader driver - -You can configure the Azure BlobStore offloader driver in the configuration file `broker.conf` or `standalone.conf`. - -- **Required** configurations are as below. - - Required configuration | Description | Example value - |---|---|--- - `managedLedgerOffloadDriver` | Offloader driver name | azureblob - `offloadersDirectory` | Offloader directory | offloaders - `managedLedgerOffloadBucket` | Bucket | pulsar-topic-offload - -- **Optional** configurations are as below. - - Optional | Description | Example value - |---|---|--- - `managedLedgerOffloadReadBufferSizeInBytes`|Size of block read|1 MB - `managedLedgerOffloadMaxBlockSizeInBytes`|Size of block write|64 MB - `managedLedgerMinLedgerRolloverTimeMinutes`|Minimum time between ledger rollover for a topic

    **Note**: it is not recommended that you set this configuration in the production environment.|2 - `managedLedgerMaxEntriesPerLedger`|Maximum number of entries to append to a ledger before triggering a rollover.

    **Note**: it is not recommended that you set this configuration in the production environment.|5000 - -#### Bucket (required) - -A bucket is a basic container that holds your data. Everything you store in Azure BlobStore must be contained in a bucket. You can use a bucket to organize your data and control access to your data, but unlike directory and folder, you cannot nest a bucket. - -##### Example - -This example names the bucket as _pulsar-topic-offload_. - -```conf - -managedLedgerOffloadBucket=pulsar-topic-offload - -``` - -#### Authentication (required) - -To be able to access Azure BlobStore, you need to authenticate with Azure BlobStore. - -* Set the environment variables `AZURE_STORAGE_ACCOUNT` and `AZURE_STORAGE_ACCESS_KEY` in `conf/pulsar_env.sh`. - - "export" is important so that the variables are made available in the environment of spawned processes. - - ```bash - - export AZURE_STORAGE_ACCOUNT=ABC123456789 - export AZURE_STORAGE_ACCESS_KEY=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c - - ``` - -#### Size of block read/write - -You can configure the size of a request sent to or read from Azure BlobStore in the configuration file `broker.conf` or `standalone.conf`. - -Configuration|Description|Default value -|---|---|--- -`managedLedgerOffloadReadBufferSizeInBytes`|Block size for each individual read when reading back data from Azure BlobStore store.|1 MB -`managedLedgerOffloadMaxBlockSizeInBytes`|Maximum size of a "part" sent during a multipart upload to Azure BlobStore store. It **cannot** be smaller than 5 MB. |64 MB - -### Configure Azure BlobStore offloader to run automatically - -Namespace policy can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic reaches the threshold, an offloading operation is triggered automatically. - -Threshold value|Action -|---|--- -> 0 | It triggers the offloading operation if the topic storage reaches its threshold. -= 0|It causes a broker to offload data as soon as possible. -< 0 |It disables automatic offloading operation. - -Automatic offloading runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, offloader does not work until the current segment is full. - -You can configure the threshold size using CLI tools, such as pulsar-admin. - -The offload configurations in `broker.conf` and `standalone.conf` are used for the namespaces that do not have namespace level offload policies. Each namespace can have its own offload policy. If you want to set offload policy for each namespace, use the command [`pulsar-admin namespaces set-offload-policies options`](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-policies-em-) command. - -#### Example - -This example sets the Azure BlobStore offloader threshold size to 10 MB using pulsar-admin. - -```bash - -bin/pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace - -``` - -:::tip - -For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-threshold-em-). - -::: - -### Configure Azure BlobStore offloader to run manually - -For individual topics, you can trigger Azure BlobStore offloader manually using one of the following methods: - -- Use REST endpoint. 
- -- Use CLI tools (such as pulsar-admin). - - To trigger it via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are moved to Azure BlobStore until the threshold is no longer exceeded. Older segments are moved first. - -#### Example - -- This example triggers the Azure BlobStore offloader to run manually using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload --size-threshold 10M my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1 - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-em-). - - ::: - -- This example checks the Azure BlobStore offloader status using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload is currently running - - ``` - - To wait for the Azure BlobStore offloader to complete the job, add the `-w` flag. - - ```bash - - bin/pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Offload was a success - - ``` - - If there is an error in offloading, the error is propagated to the `pulsar-admin topics offload-status` command. - - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Error in offload - null - - Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-status-em-). - - ::: - diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/tiered-storage-filesystem.md b/site2/website/versioned_docs/version-2.8.3-deprecated/tiered-storage-filesystem.md deleted file mode 100644 index 85a1644120fc63..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/tiered-storage-filesystem.md +++ /dev/null @@ -1,630 +0,0 @@ ---- -id: tiered-storage-filesystem -title: Use filesystem offloader with Pulsar -sidebar_label: "Filesystem offloader" -original_id: tiered-storage-filesystem ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -This chapter guides you through every step of installing and configuring the filesystem offloader and using it with Pulsar. - -## Installation - -This section describes how to install the filesystem offloader. - -### Prerequisite - -- Pulsar: 2.4.2 or higher versions - -### Step - -This example uses Pulsar 2.5.1. - -1. Download the Pulsar tarball using one of the following ways: - - * Download the Pulsar tarball from the [Apache mirror](https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz) - - * Download the Pulsar tarball from the Pulsar [download page](https://pulsar.apache.org/download) - - * Use the [wget](https://www.gnu.org/software/wget) command to dowload the Pulsar tarball. 
- - ```shell - - wget https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz - - ``` - -2. Download and untar the Pulsar offloaders package. - - ```bash - - wget https://downloads.apache.org/pulsar/pulsar-2.5.1/apache-pulsar-offloaders-2.5.1-bin.tar.gz - - tar xvfz apache-pulsar-offloaders-2.5.1-bin.tar.gz - - ``` - - :::note - - * If you run Pulsar in a bare metal cluster, ensure that the `offloaders` tarball is unzipped in every broker's Pulsar directory. - * If you run Pulsar in Docker or deploying Pulsar using a Docker image (such as K8S and DCOS), you can use the `apachepulsar/pulsar-all` image. The `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - - ::: - -3. Copy the Pulsar offloaders as `offloaders` in the Pulsar directory. - - ``` - - mv apache-pulsar-offloaders-2.5.1/offloaders apache-pulsar-2.5.1/offloaders - - ls offloaders - - ``` - - **Output** - - ``` - - tiered-storage-file-system-2.5.1.nar - tiered-storage-jcloud-2.5.1.nar - - ``` - - :::note - - * If you run Pulsar in a bare metal cluster, ensure that `offloaders` tarball is unzipped in every broker's Pulsar directory. - * If you run Pulsar in Docker or deploying Pulsar using a Docker image (such as K8s and DCOS), you can use the `apachepulsar/pulsar-all` image. The `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - - ::: - -## Configuration - -:::note - -Before offloading data from BookKeeper to filesystem, you need to configure some properties of the filesystem offloader driver. - -::: - -Besides, you can also configure the filesystem offloader to run it automatically or trigger it manually. - -### Configure filesystem offloader driver - -You can configure the filesystem offloader driver in the `broker.conf` or `standalone.conf` configuration file. - -````mdx-code-block - - - -- **Required** configurations are as below. - - Parameter | Description | Example value - |---|---|--- - `managedLedgerOffloadDriver` | Offloader driver name, which is case-insensitive. | filesystem - `fileSystemURI` | Connection address, which is the URI to access the default Hadoop distributed file system. | hdfs://127.0.0.1:9000 - `offloadersDirectory` | Offloader directory | offloaders - `fileSystemProfilePath` | Hadoop profile path. The configuration file is stored in the Hadoop profile path. It contains various settings for Hadoop performance tuning. | conf/filesystem_offload_core_site.xml - -- **Optional** configurations are as below. - - Parameter| Description | Example value - |---|---|--- - `managedLedgerMinLedgerRolloverTimeMinutes`|Minimum time between ledger rollover for a topic.

    **Note**: it is not recommended to set this parameter in the production environment.|2 - `managedLedgerMaxEntriesPerLedger`|Maximum number of entries to append to a ledger before triggering a rollover.

    **Note**: it is not recommended to set this parameter in the production environment.|5000 - -
    - - -- **Required** configurations are as below. - - Parameter | Description | Example value - |---|---|--- - `managedLedgerOffloadDriver` | Offloader driver name, which is case-insensitive. | filesystem - `offloadersDirectory` | Offloader directory | offloaders - `fileSystemProfilePath` | NFS profile path. The configuration file is stored in the NFS profile path. It contains various settings for performance tuning. | conf/filesystem_offload_core_site.xml - -- **Optional** configurations are as below. - - Parameter| Description | Example value - |---|---|--- - `managedLedgerMinLedgerRolloverTimeMinutes`|Minimum time between ledger rollover for a topic.

    **Note**: it is not recommended to set this parameter in the production environment.|2 - `managedLedgerMaxEntriesPerLedger`|Maximum number of entries to append to a ledger before triggering a rollover.

    **Note**: it is not recommended to set this parameter in the production environment.|5000 - -
    - -
    -```` - -### Run filesystem offloader automatically - -You can configure the namespace policy to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic storage reaches the threshold, an offload operation is triggered automatically. - -Threshold value|Action -|---|--- -| > 0 | It triggers the offloading operation if the topic storage reaches its threshold. -= 0|It causes a broker to offload data as soon as possible. -< 0 |It disables automatic offloading operation. - -Automatic offload runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, the filesystem offloader does not work until the current segment is full. - -You can configure the threshold using CLI tools, such as pulsar-admin. - -#### Example - -This example sets the filesystem offloader threshold to 10 MB using pulsar-admin. - -```bash - -pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace - -``` - -:::tip - -For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#set-offload-threshold). - -::: - -### Run filesystem offloader manually - -For individual topics, you can trigger the filesystem offloader manually using one of the following methods: - -- Use the REST endpoint. - -- Use CLI tools (such as pulsar-admin). - -To manually trigger the filesystem offloader via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are offloaded to the filesystem until the threshold is no longer exceeded. Older segments are offloaded first. - -#### Example - -- This example manually run the filesystem offloader using pulsar-admin. - - ```bash - - pulsar-admin topics offload --size-threshold 10M persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1 - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#offload). - - ::: - -- This example checks filesystem offloader status using pulsar-admin. - - ```bash - - pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload is currently running - - ``` - - To wait for the filesystem to complete the job, add the `-w` flag. - - ```bash - - pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Offload was a success - - ``` - - If there is an error in the offloading operation, the error is propagated to the `pulsar-admin topics offload-status` command. - - ```bash - - pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Error in offload - null - - Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. 
(Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g= - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#offload-status). - - ::: - -## Tutorial - -This section provides step-by-step instructions on how to use the filesystem offloader to move data from Pulsar to Hadoop Distributed File System (HDFS) or Network File system (NFS). - -````mdx-code-block - - - -To move data from Pulsar to HDFS, follow these steps. - -### Step 1: Prepare the HDFS environment - -This tutorial sets up a Hadoop single node cluster and uses Hadoop 3.2.1. - -:::tip - -For details about how to set up a Hadoop single node cluster, see [here](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html). - -::: - -1. Download and uncompress Hadoop 3.2.1. - - ``` - - wget https://mirrors.bfsu.edu.cn/apache/hadoop/common/hadoop-3.2.1/hadoop-3.2.1.tar.gz - - tar -zxvf hadoop-3.2.1.tar.gz -C $HADOOP_HOME - - ``` - -2. Configure Hadoop. - - ``` - - # $HADOOP_HOME/etc/hadoop/core-site.xml - - - fs.defaultFS - hdfs://localhost:9000 - - - - # $HADOOP_HOME/etc/hadoop/hdfs-site.xml - - - dfs.replication - 1 - - - - ``` - -3. Set passphraseless ssh. - - ``` - - # Now check that you can ssh to the localhost without a passphrase: - $ ssh localhost - # If you cannot ssh to localhost without a passphrase, execute the following commands - $ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa - $ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys - $ chmod 0600 ~/.ssh/authorized_keys - - ``` - -4. Start HDFS. - - ``` - - # don't execute this command repeatedly, repeat execute will cauld the clusterId of the datanode is not consistent with namenode - $HADOOP_HOME/bin/hadoop namenode -format - $HADOOP_HOME/sbin/start-dfs.sh - - ``` - -5. Navigate to the [HDFS website](http://localhost:9870/). - - You can see the **Overview** page. - - ![](/assets/FileSystem-1.png) - - 1. At the top navigation bar, click **Datanodes** to check DataNode information. - - ![](/assets/FileSystem-2.png) - - 2. Click **HTTP Address** to get more detailed information about localhost:9866. - - As can be seen below, the size of **Capacity Used** is 4 KB, which is the initial value. - - ![](/assets/FileSystem-3.png) - -### Step 2: Install the filesystem offloader - -For details, see [installation](#installation). - -### Step 3: Configure the filesystem offloader - -As indicated in the [configuration](#configuration) section, you need to configure some properties for the filesystem offloader driver before using it. This tutorial assumes that you have configured the filesystem offloader driver as below and run Pulsar in **standalone** mode. - -Set the following configurations in the `conf/standalone.conf` file. - -```conf - -managedLedgerOffloadDriver=filesystem -fileSystemURI=hdfs://127.0.0.1:9000 -fileSystemProfilePath=conf/filesystem_offload_core_site.xml - -``` - -:::note - -For testing purposes, you can set the following two configurations to speed up ledger rollover, but it is not recommended that you set them in the production environment. 
- -::: - -``` - -managedLedgerMinLedgerRolloverTimeMinutes=1 -managedLedgerMaxEntriesPerLedger=100 - -``` - - - - -:::note - -In this section, it is assumed that you have enabled NFS service and set the shared path of your NFS service. In this section, `/Users/test` is used as the shared path of NFS service. - -::: - -To offload data to NFS, follow these steps. - -### Step 1: Install the filesystem offloader - -For details, see [installation](#installation). - -### Step 2: Mont your NFS to your local filesystem - -This example mounts mounts */Users/pulsar_nfs* to */Users/test*. - -``` - -mount -e 192.168.0.103:/Users/test/Users/pulsar_nfs - -``` - -### Step 3: Configure the filesystem offloader driver - -As indicated in the [configuration](#configuration) section, you need to configure some properties for the filesystem offloader driver before using it. This tutorial assumes that you have configured the filesystem offloader driver as below and run Pulsar in **standalone** mode. - -1. Set the following configurations in the `conf/standalone.conf` file. - - ```conf - - managedLedgerOffloadDriver=filesystem - fileSystemProfilePath=conf/filesystem_offload_core_site.xml - - ``` - -2. Modify the *filesystem_offload_core_site.xml* as follows. - - ``` - - - fs.defaultFS - file:/// - - - - hadoop.tmp.dir - file:///Users/pulsar_nfs - - - - io.file.buffer.size - 4096 - - - - io.seqfile.compress.blocksize - 1000000 - - - - io.seqfile.compression.type - BLOCK - - - - io.map.index.interval - 128 - - - ``` - - - - -```` - -### Step 4: Offload data from BookKeeper to filesystem - -Execute the following commands in the repository where you download Pulsar tarball. For example, `~/path/to/apache-pulsar-2.5.1`. - -1. Start Pulsar standalone. - - ``` - - bin/pulsar standalone -a 127.0.0.1 - - ``` - -2. To ensure the data generated is not deleted immediately, it is recommended to set the [retention policy](https://pulsar.apache.org/docs/en/next/cookbooks-retention-expiry/#retention-policies), which can be either a **size** limit or a **time** limit. The larger value you set for the retention policy, the longer the data can be retained. - - ``` - - bin/pulsar-admin namespaces set-retention public/default --size 100M --time 2d - - ``` - - :::tip - - For more information about the `pulsarctl namespaces set-retention options` command, including flags, descriptions, default values, and shorthands, see [here](https://docs.streamnative.io/pulsarctl/v2.7.0.6/#-em-set-retention-em-). - - ::: - -3. Produce data using pulsar-client. - - ``` - - bin/pulsar-client produce -m "Hello FileSystem Offloader" -n 1000 public/default/fs-test - - ``` - -4. The offloading operation starts after a ledger rollover is triggered. To ensure offload data successfully, it is recommended that you wait until several ledger rollovers are triggered. In this case, you might need to wait for a second. You can check the ledger status using pulsarctl. - - ``` - - bin/pulsar-admin topics stats-internal public/default/fs-test - - ``` - - **Output** - - The data of the ledger 696 is not offloaded. - - ``` - - { - "version": 1, - "creationDate": "2020-06-16T21:46:25.807+08:00", - "modificationDate": "2020-06-16T21:46:25.821+08:00", - "ledgers": [ - { - "ledgerId": 696, - "isOffloaded": false - } - ], - "cursors": {} - } - - ``` - -5. Wait a second and send more messages to the topic. - - ``` - - bin/pulsar-client produce -m "Hello FileSystem Offloader" -n 1000 public/default/fs-test - - ``` - -6. Check the ledger status using pulsarctl. 
- - ``` - - bin/pulsar-admin topics stats-internal public/default/fs-test - - ``` - - **Output** - - The ledger 696 is rolled over. - - ``` - - { - "version": 2, - "creationDate": "2020-06-16T21:46:25.807+08:00", - "modificationDate": "2020-06-16T21:48:52.288+08:00", - "ledgers": [ - { - "ledgerId": 696, - "entries": 1001, - "size": 81695, - "isOffloaded": false - }, - { - "ledgerId": 697, - "isOffloaded": false - } - ], - "cursors": {} - } - - ``` - -7. Trigger the offloading operation manually using pulsarctl. - - ``` - - bin/pulsar-admin topics offload -s 0 public/default/fs-test - - ``` - - **Output** - - Data in ledgers before the ledge 697 is offloaded. - - ``` - - # offload info, the ledgers before 697 will be offloaded - Offload triggered for persistent://public/default/fs-test3 for messages before 697:0:-1 - - ``` - -8. Check the ledger status using pulsarctl. - - ``` - - bin/pulsar-admin topics stats-internal public/default/fs-test - - ``` - - **Output** - - The data of the ledger 696 is offloaded. - - ``` - - { - "version": 4, - "creationDate": "2020-06-16T21:46:25.807+08:00", - "modificationDate": "2020-06-16T21:52:13.25+08:00", - "ledgers": [ - { - "ledgerId": 696, - "entries": 1001, - "size": 81695, - "isOffloaded": true - }, - { - "ledgerId": 697, - "isOffloaded": false - } - ], - "cursors": {} - } - - ``` - - And the **Capacity Used** is changed from 4 KB to 116.46 KB. - - ![](/assets/FileSystem-8.png) \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/tiered-storage-gcs.md b/site2/website/versioned_docs/version-2.8.3-deprecated/tiered-storage-gcs.md deleted file mode 100644 index 81e7c5c6e6a44b..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/tiered-storage-gcs.md +++ /dev/null @@ -1,319 +0,0 @@ ---- -id: tiered-storage-gcs -title: Use GCS offloader with Pulsar -sidebar_label: "GCS offloader" -original_id: tiered-storage-gcs ---- - -This chapter guides you through every step of installing and configuring the GCS offloader and using it with Pulsar. - -## Installation - -Follow the steps below to install the GCS offloader. - -### Prerequisite - -- Pulsar: 2.4.2 or later versions - -### Step - -This example uses Pulsar 2.5.1. - -1. Download the Pulsar tarball using one of the following ways: - - * Download from the [Apache mirror](https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz) - - * Download from the Pulsar [download page](https://pulsar.apache.org/download) - - * Use [wget](https://www.gnu.org/software/wget) - - ```shell - - wget https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz - - ``` - -2. Download and untar the Pulsar offloaders package. - - ```bash - - wget https://downloads.apache.org/pulsar/pulsar-2.5.1/apache-pulsar-offloaders-2.5.1-bin.tar.gz - - tar xvfz apache-pulsar-offloaders-2.5.1-bin.tar.gz - - ``` - - :::note - - * If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's Pulsar directory. - * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8S and DCOS), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - - ::: - -3. Copy the Pulsar offloaders as `offloaders` in the Pulsar directory. 
- - ``` - - mv apache-pulsar-offloaders-2.5.1/offloaders apache-pulsar-2.5.1/offloaders - - ls offloaders - - ``` - - **Output** - - As shown in the output, Pulsar uses [Apache jclouds](https://jclouds.apache.org) to support GCS and AWS S3 for long term storage. - - ``` - - tiered-storage-file-system-2.5.1.nar - tiered-storage-jcloud-2.5.1.nar - - ``` - -## Configuration - -:::note - -Before offloading data from BookKeeper to GCS, you need to configure some properties of the GCS offloader driver. - -::: - -Besides, you can also configure the GCS offloader to run it automatically or trigger it manually. - -### Configure GCS offloader driver - -You can configure GCS offloader driver in the configuration file `broker.conf` or `standalone.conf`. - -- **Required** configurations are as below. - - **Required** configuration | Description | Example value - |---|---|--- - `managedLedgerOffloadDriver`|Offloader driver name, which is case-insensitive.|google-cloud-storage - `offloadersDirectory`|Offloader directory|offloaders - `gcsManagedLedgerOffloadBucket`|Bucket|pulsar-topic-offload - `gcsManagedLedgerOffloadRegion`|Bucket region|europe-west3 - `gcsManagedLedgerOffloadServiceAccountKeyFile`|Authentication |/Users/user-name/Downloads/project-804d5e6a6f33.json - -- **Optional** configurations are as below. - - Optional configuration|Description|Example value - |---|---|--- - `gcsManagedLedgerOffloadReadBufferSizeInBytes`|Size of block read|1 MB - `gcsManagedLedgerOffloadMaxBlockSizeInBytes`|Size of block write|64 MB - `managedLedgerMinLedgerRolloverTimeMinutes`|Minimum time between ledger rollover for a topic.|2 - `managedLedgerMaxEntriesPerLedger`|The max number of entries to append to a ledger before triggering a rollover.|5000 - -#### Bucket (required) - -A bucket is a basic container that holds your data. Everything you store in GCS **must** be contained in a bucket. You can use a bucket to organize your data and control access to your data, but unlike directory and folder, you can not nest a bucket. - -##### Example - -This example names the bucket as _pulsar-topic-offload_. - -```conf - -gcsManagedLedgerOffloadBucket=pulsar-topic-offload - -``` - -#### Bucket region (required) - -Bucket region is the region where a bucket is located. If a bucket region is not specified, the **default** region (`us multi-regional location`) is used. - -:::tip - -For more information about bucket location, see [here](https://cloud.google.com/storage/docs/bucket-locations). - -::: - -##### Example - -This example sets the bucket region as _europe-west3_. - -``` - -gcsManagedLedgerOffloadRegion=europe-west3 - -``` - -#### Authentication (required) - -To enable a broker access GCS, you need to configure `gcsManagedLedgerOffloadServiceAccountKeyFile` in the configuration file `broker.conf`. - -`gcsManagedLedgerOffloadServiceAccountKeyFile` is -a JSON file, containing GCS credentials of a service account. - -##### Example - -To generate service account credentials or view the public credentials that you've already generated, follow the following steps. - -1. Navigate to the [Service accounts page](https://console.developers.google.com/iam-admin/serviceaccounts). - -2. Select a project or create a new one. - -3. Click **Create service account**. - -4. In the **Create service account** window, type a name for the service account and select **Furnish a new private key**. 
    - - If you want to [grant G Suite domain-wide authority](https://developers.google.com/identity/protocols/OAuth2ServiceAccount#delegatingauthority) to the service account, select **Enable G Suite Domain-wide Delegation**. - -5. Click **Create**. - - :::note - - Make sure that the service account you create has permission to operate GCS. To grant that permission, assign the **Storage Admin** role to your service account as described [here](https://cloud.google.com/storage/docs/access-control/iam). - - ::: - -6. Set the path of the downloaded JSON key file in `broker.conf`. - - ```conf - - gcsManagedLedgerOffloadServiceAccountKeyFile="/Users/user-name/Downloads/project-804d5e6a6f33.json" - - ``` - - :::tip - - - For more information about how to create `gcsManagedLedgerOffloadServiceAccountKeyFile`, see [here](https://support.google.com/googleapi/answer/6158849). - - For more information about Google Cloud IAM, see [here](https://cloud.google.com/storage/docs/access-control/iam). - - ::: - -#### Size of block read/write - -You can configure the size of a request sent to or read from GCS in the configuration file `broker.conf`. - -Configuration|Description -|---|--- -`gcsManagedLedgerOffloadReadBufferSizeInBytes`|Block size for each individual read when reading back data from GCS.

    The **default** value is 1 MB. -`gcsManagedLedgerOffloadMaxBlockSizeInBytes`|Maximum size of a "part" sent during a multipart upload to GCS.

    It **cannot** be smaller than 5 MB.

    The **default** value is 64 MB. - -### Configure GCS offloader to run automatically - -Namespace policy can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic reaches the threshold, an offload operation is triggered automatically. - -Threshold value|Action -|---|--- -> 0 | It triggers the offloading operation if the topic storage reaches its threshold. -= 0|It causes a broker to offload data as soon as possible. -< 0 |It disables automatic offloading operation. - -Automatic offloading runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, offloader does not work until the current segment is full. - -You can configure the threshold size using CLI tools, such as pulsar-admin. - -The offload configurations in `broker.conf` and `standalone.conf` are used for the namespaces that do not have namespace level offload policies. Each namespace can have its own offload policy. If you want to set offload policy for each namespace, use the command [`pulsar-admin namespaces set-offload-policies options`](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-policies-em-) command. - -#### Example - -This example sets the GCS offloader threshold size to 10 MB using pulsar-admin. - -```bash - -pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace - -``` - -:::tip - -For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#set-offload-threshold). - -::: - -### Configure GCS offloader to run manually - -For individual topics, you can trigger GCS offloader manually using one of the following methods: - -- Use REST endpoint. - -- Use CLI tools (such as pulsar-admin). - - To trigger the GCS via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are moved to GCS until the threshold is no longer exceeded. Older segments are moved first. - -#### Example - -- This example triggers the GCS offloader to run manually using pulsar-admin with the command `pulsar-admin topics offload (topic-name) (threshold)`. - - ```bash - - pulsar-admin topics offload persistent://my-tenant/my-namespace/topic1 10M - - ``` - - **Output** - - ```bash - - Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1 - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#offload). - - ::: - -- This example checks the GCS offloader status using pulsar-admin with the command `pulsar-admin topics offload-status options`. - - ```bash - - pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload is currently running - - ``` - - To wait for GCS to complete the job, add the `-w` flag. 
- - ```bash - - pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Offload was a success - - ``` - - If there is an error in offloading, the error is propagated to the `pulsar-admin topics offload-status` command. - - ```bash - - pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Error in offload - null - - Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g= - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#offload-status). - - ::: - -## Tutorial - -For the complete and step-by-step instructions on how to use the GCS offloader with Pulsar, see [here](https://hub.streamnative.io/offloaders/gcs/2.5.1#usage). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/tiered-storage-overview.md b/site2/website/versioned_docs/version-2.8.3-deprecated/tiered-storage-overview.md deleted file mode 100644 index c635034f463b46..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/tiered-storage-overview.md +++ /dev/null @@ -1,52 +0,0 @@ ---- -id: tiered-storage-overview -title: Overview of tiered storage -sidebar_label: "Overview" -original_id: tiered-storage-overview ---- - -Pulsar's **Tiered Storage** feature allows older backlog data to be moved from BookKeeper to long term and cheaper storage, while still allowing clients to access the backlog as if nothing has changed. - -* Tiered storage uses [Apache jclouds](https://jclouds.apache.org) to support [Amazon S3](https://aws.amazon.com/s3/) and [GCS (Google Cloud Storage)](https://cloud.google.com/storage/) for long term storage. - - With jclouds, it is easy to add support for more [cloud storage providers](https://jclouds.apache.org/reference/providers/#blobstore-providers) in the future. - - :::tip - - - For more information about how to use the AWS S3 offloader with Pulsar, see [here](tiered-storage-aws.md). - - - For more information about how to use the GCS offloader with Pulsar, see [here](tiered-storage-gcs.md). - - ::: - -* Tiered storage uses [Apache Hadoop](http://hadoop.apache.org/) to support filesystems for long term storage. - - With Hadoop, it is easy to add support for more filesystems in the future. - - :::tip - - For more information about how to use the filesystem offloader with Pulsar, see [here](tiered-storage-filesystem.md). - - ::: - -## When to use tiered storage? - -Tiered storage should be used when you have a topic for which you want to keep a very long backlog for a long time. - -For example, if you have a topic containing user actions which you use to train your recommendation systems, you may want to keep that data for a long time, so that if you change your recommendation algorithm, you can rerun it against your full user history. - -## How does tiered storage work? 
- -A topic in Pulsar is backed by a **log**, known as a **managed ledger**. This log is composed of an ordered list of segments. Pulsar only writes to the final segment of the log. All previous segments are sealed. The data within the segment is immutable. This is known as a **segment oriented architecture**. - -![Tiered storage](/assets/pulsar-tiered-storage.png "Tiered Storage") - -The tiered storage offloading mechanism takes advantage of segment oriented architecture. When offloading is requested, the segments of the log are copied one-by-one to tiered storage. All segments of the log (apart from the current segment) written to tiered storage can be offloaded. - -Data written to BookKeeper is replicated to 3 physical machines by default. However, once a segment is sealed in BookKeeper, it becomes immutable and can be copied to long term storage. Long term storage can achieve cost savings by using mechanisms such as [Reed-Solomon error correction](https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction) to require fewer physical copies of data. - -Before offloading ledgers to long term storage, you need to configure buckets, credentials, and other properties for the cloud storage service. Additionally, Pulsar uses multi-part objects to upload the segment data and brokers may crash while uploading the data. It is recommended that you add a life cycle rule for your bucket to expire incomplete multi-part upload after a day or two days to avoid getting charged for incomplete uploads. Moreover, you can trigger the offloading operation manually (via REST API or CLI) or automatically (via CLI). - -After offloading ledgers to long term storage, you can still query data in the offloaded ledgers with Pulsar SQL. - -For more information about tiered storage for Pulsar topics, see [here](https://github.com/apache/pulsar/wiki/PIP-17:-Tiered-storage-for-Pulsar-topics). diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/transaction-api.md b/site2/website/versioned_docs/version-2.8.3-deprecated/transaction-api.md deleted file mode 100644 index fedc314646c938..00000000000000 --- a/site2/website/versioned_docs/version-2.8.3-deprecated/transaction-api.md +++ /dev/null @@ -1,172 +0,0 @@ ---- -id: transactions-api -title: Transactions API -sidebar_label: "Transactions API" -original_id: transactions-api ---- - -All messages in a transaction are available only to consumers after the transaction has been committed. If a transaction has been aborted, all the writes and acknowledgments in this transaction roll back. - -## Prerequisites - -1. To enable transactions in Pulsar, you need to configure the parameter in `broker.conf` file or `standalone.conf` file. - -``` - -transactionCoordinatorEnabled=true - -``` - -2. Initialize transaction coordinator metadata, so the transaction coordinators can leverage advantages of the partitioned topic, such as load balance. - -``` - -bin/pulsar initialize-transaction-coordinator-metadata -cs 127.0.0.1:2181 -c standalone - -``` - -After initializing transaction coordinator metadata, you can use the transactions API. The following APIs are available. - -## Initialize Pulsar client - -You can enable transaction for transaction client and initialize transaction coordinator client. - -``` - -PulsarClient pulsarClient = PulsarClient.builder() - .serviceUrl("pulsar://localhost:6650") - .enableTransaction(true) - .build(); - -``` - -## Start transactions -You can start transaction in the following way. 
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/transaction-api.md b/site2/website/versioned_docs/version-2.8.3-deprecated/transaction-api.md
deleted file mode 100644
index fedc314646c938..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/transaction-api.md
+++ /dev/null
@@ -1,172 +0,0 @@
----
-id: transactions-api
-title: Transactions API
-sidebar_label: "Transactions API"
-original_id: transactions-api
----
-
-All messages in a transaction are available only to consumers after the transaction has been committed. If a transaction has been aborted, all the writes and acknowledgments in this transaction roll back.
-
-## Prerequisites
-
-1. To enable transactions in Pulsar, you need to set the following parameter in the `broker.conf` or `standalone.conf` file.
-
-```
-
-transactionCoordinatorEnabled=true
-
-```
-
-2. Initialize transaction coordinator metadata, so the transaction coordinators can leverage the advantages of partitioned topics, such as load balancing.
-
-```
-
-bin/pulsar initialize-transaction-coordinator-metadata -cs 127.0.0.1:2181 -c standalone
-
-```
-
-After initializing transaction coordinator metadata, you can use the transactions API. The following APIs are available.
-
-## Initialize Pulsar client
-
-You can enable transactions for the Pulsar client when you initialize it; this also initializes the transaction coordinator client.
-
-```
-
-PulsarClient pulsarClient = PulsarClient.builder()
-        .serviceUrl("pulsar://localhost:6650")
-        .enableTransaction(true)
-        .build();
-
-```
-
-## Start transactions
-You can start a transaction in the following way.
-
-```
-
-Transaction txn = pulsarClient
-        .newTransaction()
-        .withTransactionTimeout(5, TimeUnit.MINUTES)
-        .build()
-        .get();
-
-```
-
-## Produce transaction messages
-
-A transaction parameter is required when producing new transaction messages. The semantics of transaction messages in Pulsar are `read-committed`, so the consumer cannot receive the ongoing transaction messages before the transaction is committed.
-
-```
-
-producer.newMessage(txn).value("Hello Pulsar Transaction".getBytes()).sendAsync();
-
-```
-
-## Acknowledge the messages with the transaction
-
-The transaction acknowledgement requires a transaction parameter. The transaction acknowledgement marks the message state as pending-ack. When the transaction is committed, the pending-ack state becomes the ack state. If the transaction is aborted, the pending-ack state becomes the unack state.
-
-```
-
-Message<byte[]> message = consumer.receive();
-consumer.acknowledgeAsync(message.getMessageId(), txn);
-
-```
-
-## Commit transactions
-
-When the transaction is committed, consumers receive the transaction messages and the pending-ack state becomes the ack state.
-
-```
-
-txn.commit().get();
-
-```
-
-## Abort transactions
-
-When the transaction is aborted, the transaction acknowledgement is canceled and the pending-ack messages are redelivered.
-
-```
-
-txn.abort().get();
-
-```
-
-### Example
-The following example shows how messages are processed in a transaction.
-
-```
-
-PulsarClient pulsarClient = PulsarClient.builder()
-        .serviceUrl(getPulsarServiceList().get(0).getBrokerServiceUrl())
-        .statsInterval(0, TimeUnit.SECONDS)
-        .enableTransaction(true)
-        .build();
-
-String sourceTopic = "public/default/source-topic";
-String sinkTopic = "public/default/sink-topic";
-
-Producer<String> sourceProducer = pulsarClient
-        .newProducer(Schema.STRING)
-        .topic(sourceTopic)
-        .create();
-sourceProducer.newMessage().value("hello pulsar transaction").sendAsync();
-
-Consumer<String> sourceConsumer = pulsarClient
-        .newConsumer(Schema.STRING)
-        .topic(sourceTopic)
-        .subscriptionName("test")
-        .subscriptionType(SubscriptionType.Shared)
-        .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest)
-        .subscribe();
-
-Producer<String> sinkProducer = pulsarClient
-        .newProducer(Schema.STRING)
-        .topic(sinkTopic)
-        .create();
-
-Transaction txn = pulsarClient
-        .newTransaction()
-        .withTransactionTimeout(5, TimeUnit.MINUTES)
-        .build()
-        .get();
-
-// The source message acknowledgement and the sink message produce belong to one transaction;
-// they are combined into an atomic operation.
-Message<String> message = sourceConsumer.receive();
-sourceConsumer.acknowledgeAsync(message.getMessageId(), txn);
-sinkProducer.newMessage(txn).value("sink data").sendAsync();
-
-txn.commit().get();
-
-```
-
-## Enable batch messages in transactions
-
-To enable batch messages in transactions, you need to enable the batch index acknowledgement feature. Transaction acks check whether the batch index acknowledgement conflicts.
-
-To enable batch index acknowledgement, you need to set `acknowledgmentAtBatchIndexLevelEnabled` to `true` in the `broker.conf` or `standalone.conf` file.
-
-```
-
-acknowledgmentAtBatchIndexLevelEnabled=true
-
-```
-
-And then you need to call the `enableBatchIndexAcknowledgment(true)` method in the consumer builder.
-
-```
-
-Consumer<byte[]> sinkConsumer = pulsarClient
-        .newConsumer()
-        .topic(transferTopic)
-        .subscriptionName("sink-topic")
-        .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest)
-        .subscriptionType(SubscriptionType.Shared)
-        .enableBatchIndexAcknowledgment(true) // enable batch index acknowledgement
-        .subscribe();
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/transaction-guarantee.md b/site2/website/versioned_docs/version-2.8.3-deprecated/transaction-guarantee.md
deleted file mode 100644
index 9db2d254e159f6..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/transaction-guarantee.md
+++ /dev/null
@@ -1,17 +0,0 @@
----
-id: transactions-guarantee
-title: Transactions Guarantee
-sidebar_label: "Transactions Guarantee"
-original_id: transactions-guarantee
----
-
-Pulsar transactions support the following guarantees.
-
-## Atomic multi-partition writes and multi-subscription acknowledges
-Transactions enable atomic writes to multiple topics and partitions. A batch of messages in a transaction can be received from, produced to, and acknowledged by many partitions. All the operations involved in a transaction succeed or fail as a single unit.
-
-## Read transactional message
-All the messages in a transaction are available to consumers only after the transaction is committed.
-
-## Acknowledge transactional message
-A message is acknowledged successfully only once by a consumer under the subscription when acknowledging the message with the transaction ID.
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/txn-how.md b/site2/website/versioned_docs/version-2.8.3-deprecated/txn-how.md
deleted file mode 100644
index add072448aeb34..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/txn-how.md
+++ /dev/null
@@ -1,151 +0,0 @@
----
-id: txn-how
-title: How transactions work?
-sidebar_label: "How transactions work?"
-original_id: txn-how
----
-
-This section describes transaction components and how the components work together. For the complete design details, see [PIP-31: Transactional Streaming](https://docs.google.com/document/d/145VYp09JKTw9jAT-7yNyFU255FptB2_B2Fye100ZXDI/edit#heading=h.bm5ainqxosrx).
-
-## Key concepts
-
-It is important to know the following key concepts, which are prerequisites for understanding how transactions work.
-
-### Transaction coordinator
-
-The transaction coordinator (TC) is a module running inside a Pulsar broker.
-
-* It maintains the entire life cycle of transactions and prevents a transaction from getting into an incorrect status.
-
-* It handles transaction timeout, and ensures that the transaction is aborted after a transaction timeout.
-
-### Transaction log
-
-All the transaction metadata persists in the transaction log. The transaction log is backed by a Pulsar topic. If the transaction coordinator crashes, it can restore the transaction metadata from the transaction log.
-
-The transaction log stores the transaction status rather than the actual messages in the transaction (the actual messages are stored in the actual topic partitions).
-
-### Transaction buffer
-
-Messages produced to a topic partition within a transaction are stored in the transaction buffer (TB) of that topic partition. The messages in the transaction buffer are not visible to consumers until the transactions are committed. The messages in the transaction buffer are discarded when the transactions are aborted.
-
-The transaction buffer stores all ongoing and aborted transactions in memory. All messages are sent to the actual partitioned Pulsar topics. After transactions are committed, the messages in the transaction buffer are materialized (made visible) to consumers. When transactions are aborted, the messages in the transaction buffer are discarded.
-
-### Transaction ID
-
-Transaction ID (TxnID) identifies a unique transaction in Pulsar. The transaction ID is 128-bit. The highest 16 bits are reserved for the ID of the transaction coordinator, and the remaining bits are used for monotonically increasing numbers in each transaction coordinator. The TxnID makes it easy to locate a crashed transaction.
-
-### Pending acknowledge state
-
-The pending acknowledge state maintains message acknowledgments within a transaction before the transaction completes. If a message is in the pending acknowledge state, the message cannot be acknowledged by other transactions until the message is removed from the pending acknowledge state.
-
-The pending acknowledge state is persisted to the pending acknowledge log (cursor ledger). A new broker can restore the state from the pending acknowledge log to ensure the acknowledgement is not lost.
-
-## Data flow
-
-At a high level, the data flow can be split into several steps:
-
-1. Begin a transaction.
-
-2. Publish messages with a transaction.
-
-3. Acknowledge messages with a transaction.
-
-4. End a transaction.
-
-To help you debug or tune the transaction for better performance, review the following diagrams and descriptions.
-
-### 1. Begin a transaction
-
-Before the transaction feature was introduced in Pulsar, a producer was created and messages were then sent to brokers and stored in data logs.
-
-![](/assets/txn-3.png)
-
-Let’s walk through the steps for _beginning a transaction_.
-
-| Step | Description |
-| --- | --- |
-| 1.1 | The first step is that the Pulsar client finds the transaction coordinator. |
-| 1.2 | The transaction coordinator allocates a transaction ID for the transaction. In the transaction log, the transaction is logged with its transaction ID and status (OPEN), which ensures the transaction status is persisted regardless of transaction coordinator crashes. |
-| 1.3 | The transaction log sends the result of persisting the transaction ID to the transaction coordinator. |
-| 1.4 | After the transaction status entry is logged, the transaction coordinator brings the transaction ID back to the Pulsar client. |
-
-### 2. Publish messages with a transaction
-
-In this stage, the Pulsar client enters a transaction loop, repeating the `consume-process-produce` operation for all the messages that comprise the transaction. This is a long phase and is potentially composed of multiple produce and acknowledgement requests.
-
-![](/assets/txn-4.png)
-
-Let’s walk through the steps for _publishing messages with a transaction_.
-
-| Step | Description |
-| --- | --- |
-| 2.1.1 | Before the Pulsar client produces messages to a new topic partition, it sends a request to the transaction coordinator to add the partition to the transaction. |
-| 2.1.2 | The transaction coordinator logs the partition changes of the transaction into the transaction log for durability, which ensures the transaction coordinator knows all the partitions that a transaction is handling. The transaction coordinator can commit or abort changes on each partition at the end-partition phase. |
-| 2.1.3 | The transaction log sends the result of logging the new partition (used for producing messages) to the transaction coordinator. |
-| 2.1.4 | The transaction coordinator sends the result of adding a new produced partition to the transaction. |
-| 2.2.1 | The Pulsar client starts producing messages to partitions. The flow of this part is the same as the normal flow of producing messages except that the batch of messages produced by a transaction contains transaction IDs. |
-| 2.2.2 | The broker writes messages to a partition. |
-
-### 3. Acknowledge messages with a transaction
-
-In this phase, the Pulsar client sends a request to the transaction coordinator and a new subscription is acknowledged as a part of a transaction.
-
-![](/assets/txn-5.png)
-
-Let’s walk through the steps for _acknowledging messages with a transaction_.
-
-| Step | Description |
-| --- | --- |
-| 3.1.1 | The Pulsar client sends a request to add an acknowledged subscription to the transaction coordinator. |
-| 3.1.2 | The transaction coordinator logs the addition of the subscription, which ensures that it knows all subscriptions handled by a transaction and can commit or abort changes on each subscription at the end phase. |
-| 3.1.3 | The transaction log sends the result of logging the new partition (used for acknowledging messages) to the transaction coordinator. |
-| 3.1.4 | The transaction coordinator sends the result of adding the new acknowledged partition to the transaction. |
-| 3.2 | The Pulsar client acknowledges messages on the subscription. The flow of this part is the same as the normal flow of acknowledging messages except that the acknowledgement request carries a transaction ID. |
-| 3.3 | The broker receiving the acknowledgement request checks if the acknowledgment belongs to a transaction or not. |
-
-### 4. End a transaction
-
-At the end of a transaction, the Pulsar client decides to commit or abort the transaction. The transaction can be aborted when a conflict is detected on acknowledging messages.
-
-#### 4.1 End transaction request
-
-When the Pulsar client finishes a transaction, it issues an end transaction request.
-
-![](/assets/txn-6.png)
-
-Let’s walk through the steps for _ending the transaction_.
-
-| Step | Description |
-| --- | --- |
-| 4.1.1 | The Pulsar client issues an end transaction request (with a field indicating whether the transaction is to be committed or aborted) to the transaction coordinator. |
-| 4.1.2 | The transaction coordinator writes a COMMITTING or ABORTING message to its transaction log. |
-| 4.1.3 | The transaction log sends the result of logging the committing or aborting status. |
-
-#### 4.2 Finalize a transaction
-
-The transaction coordinator starts the process of committing or aborting messages to all the partitions involved in this transaction.
-
-![](/assets/txn-7.png)
-
-Let’s walk through the steps for _finalizing a transaction_.
-
-| Step | Description |
-| --- | --- |
-| 4.2.1 | The transaction coordinator commits transactions on subscriptions and commits transactions on partitions at the same time. |
-| 4.2.2 | The broker (produce) writes produced committed markers to the actual partitions. At the same time, the broker (ack) writes acked committed markers to the subscription pending ack partitions. |
-| 4.2.3 | The data log sends the result of writing produced committed markers to the broker. At the same time, the pending ack data log sends the result of writing acked committed markers to the broker. The cursor moves to the next position. |
-
-#### 4.3 Mark a transaction as COMMITTED or ABORTED
-
-The transaction coordinator writes the final transaction status to the transaction log to complete the transaction.
-
-![](/assets/txn-8.png)
-
-Let’s walk through the steps for _marking a transaction as COMMITTED or ABORTED_.
-
-| Step | Description |
-| --- | --- |
-| 4.3.1 | After all produced messages and acknowledgements to all partitions involved in this transaction have been successfully committed or aborted, the transaction coordinator writes the final COMMITTED or ABORTED transaction status messages to its transaction log, indicating that the transaction is complete. All the messages associated with the transaction in its transaction log can be safely removed. |
-| 4.3.2 | The transaction log sends the result of the committed transaction to the transaction coordinator. |
-| 4.3.3 | The transaction coordinator sends the result of the committed transaction to the Pulsar client. |
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/txn-monitor.md b/site2/website/versioned_docs/version-2.8.3-deprecated/txn-monitor.md
deleted file mode 100644
index 5b50953772d092..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/txn-monitor.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-id: txn-monitor
-title: How to monitor transactions?
-sidebar_label: "How to monitor transactions?"
-original_id: txn-monitor
----
-
-You can monitor the status of the transactions in Prometheus and Grafana using the [transaction metrics](https://pulsar.apache.org/docs/en/next/reference-metrics/#pulsar-transaction).
-
-To learn how to configure Prometheus and Grafana, see [here](https://pulsar.apache.org/docs/en/next/deploy-monitoring).
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/txn-use.md b/site2/website/versioned_docs/version-2.8.3-deprecated/txn-use.md
deleted file mode 100644
index de0e4a92f1b27e..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/txn-use.md
+++ /dev/null
@@ -1,105 +0,0 @@
----
-id: txn-use
-title: How to use transactions?
-sidebar_label: "How to use transactions?"
-original_id: txn-use
----
-
-## Transaction API
-
-The transaction feature is primarily a server-side and protocol-level feature. You can use the transaction feature via the [transaction API](https://pulsar.apache.org/api/admin/), which is available in **Pulsar 2.8.0 or later**.
-
-To use the transaction API, you do not need any additional settings in the Pulsar client. **By default**, transactions are **disabled**.
-
-Currently, the transaction API is only available for **Java** clients. Support for other language clients will be added in future releases.
-
-## Quick start
-
-This section provides an example of how to use the transaction API to send and receive messages in a Java client.
-
-1. Start Pulsar 2.8.0 or later.
-
-2. Enable transactions.
-
-   Change the configuration in the `broker.conf` file.
-
-   ```
-   
-   transactionCoordinatorEnabled=true
-   
-   ```
-
-   If you want to enable batch messages in transactions, follow the steps below.
-
-   Set `acknowledgmentAtBatchIndexLevelEnabled` to `true` in the `broker.conf` or `standalone.conf` file.
-
-   ```
-   
-   acknowledgmentAtBatchIndexLevelEnabled=true
-   
-   ```
-
-3. Initialize transaction coordinator metadata.
-
-   The transaction coordinator can leverage the advantages of partitioned topics (such as load balancing).
-
-   **Input**
-
-   ```
-   
-   bin/pulsar initialize-transaction-coordinator-metadata -cs 127.0.0.1:2181 -c standalone
-   
-   ```
-
-   **Output**
-
-   ```
-   
-   Transaction coordinator metadata setup success
-   
-   ```
-
-4. Initialize a Pulsar client.
-
-   ```
-   
-   PulsarClient client = PulsarClient.builder()
-           .serviceUrl("pulsar://localhost:6650")
-           .enableTransaction(true)
-           .build();
-   
-   ```
-
-Now you can start using the transaction API to send and receive messages. Below is an example of a `consume-process-produce` application written in Java.
-
-![](/assets/txn-9.png)
-
-Let’s walk through this example step by step.
-
-| Step | Description |
-| --- | --- |
-| 1. Start a transaction. | The application opens a new transaction by calling `PulsarClient.newTransaction`. It specifies a transaction timeout of 1 minute. If the transaction is not committed within 1 minute, the transaction is automatically aborted. |
-| 2. Receive messages from topics. | The application creates two normal consumers to receive messages from topics _input-topic-1_ and _input-topic-2_, respectively. |
-| 3. Publish messages to topics with the transaction. | The application creates two producers to produce the resulting messages to the output topics _output-topic-1_ and _output-topic-2_, respectively. The application applies the processing logic and generates two output messages. The application sends those two output messages as part of the transaction opened in the first step via `Producer.newMessage(Transaction)`. |
-| 4. Acknowledge the messages with the transaction. | In the same transaction, the application acknowledges the two input messages. |
-| 5. Commit the transaction. | The application commits the transaction by calling `Transaction.commit()` on the open transaction. The commit operation ensures the two input messages are marked as acknowledged and the two output messages are written successfully to the output topics. |
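-
-The steps in this table map to a handful of client API calls. The following is a minimal sketch of the five steps; the variable names (`client`, `consumer1`, `producer1`, and so on) are placeholders, `process` stands in for your own processing logic, and it assumes the consumers and producers for the input and output topics have already been created:
-
-```java
-
-// 1. Start a transaction with a 1-minute timeout.
-Transaction txn = client.newTransaction()
-        .withTransactionTimeout(1, TimeUnit.MINUTES)
-        .build()
-        .get();
-
-// 2. Receive messages from the input topics with normal consumers.
-Message<String> input1 = consumer1.receive();
-Message<String> input2 = consumer2.receive();
-
-// 3. Publish the resulting messages to the output topics within the transaction.
-producer1.newMessage(txn).value(process(input1.getValue())).sendAsync();
-producer2.newMessage(txn).value(process(input2.getValue())).sendAsync();
-
-// 4. Acknowledge the input messages within the same transaction.
-consumer1.acknowledgeAsync(input1.getMessageId(), txn);
-consumer2.acknowledgeAsync(input2.getMessageId(), txn);
-
-// 5. Commit: the outputs become visible and the inputs are marked as acknowledged atomically.
-txn.commit().get();
-
-```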
-
-[1] Example of enabling batch messages ack in transactions in the consumer builder.
-
-```
-
-Consumer<byte[]> sinkConsumer = pulsarClient
-        .newConsumer()
-        .topic(transferTopic)
-        .subscriptionName("sink-topic")
-        .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest)
-        .subscriptionType(SubscriptionType.Shared)
-        .enableBatchIndexAcknowledgment(true) // enable batch index acknowledgement
-        .subscribe();
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/txn-what.md b/site2/website/versioned_docs/version-2.8.3-deprecated/txn-what.md
deleted file mode 100644
index 844f19a700f8f0..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/txn-what.md
+++ /dev/null
@@ -1,60 +0,0 @@
----
-id: txn-what
-title: What are transactions?
-sidebar_label: "What are transactions?"
-original_id: txn-what
----
-
-Transactions strengthen the message delivery semantics of Apache Pulsar and [processing guarantees of Pulsar Functions](https://pulsar.apache.org/docs/en/next/functions-overview/#processing-guarantees). The Pulsar Transaction API supports atomic writes and acknowledgments across multiple topics.
-
-Transactions allow:
-
-- A producer to send a batch of messages to multiple topics where all messages in the batch are eventually visible to any consumer, or none are ever visible to consumers.
-
-- End-to-end exactly-once semantics (execute a `consume-process-produce` operation exactly once).
-
-## Transaction semantics
-
-Pulsar transactions have the following semantics:
-
-* All operations within a transaction are committed as a single unit.
-
-  * Either all messages are committed, or none of them are.
-
-  * Each message is written or processed exactly once, without data loss or duplicates (even in the event of failures).
-
-  * If a transaction is aborted, all the writes and acknowledgments in this transaction roll back.
-
-* A group of messages in a transaction can be received from, produced to, and acknowledged by multiple partitions.
-
-  * Consumers are only allowed to read committed (acked) messages. In other words, the broker does not deliver transactional messages which are part of an open transaction or messages which are part of an aborted transaction.
-
-  * Message writes across multiple partitions are atomic.
-
-  * Message acks across multiple subscriptions are atomic. A message is acked successfully only once by a consumer under the subscription when acknowledging the message with the transaction ID.
-
-## Transactions and stream processing
-
-Stream processing on Pulsar is a `consume-process-produce` operation on Pulsar topics:
-
-* `Consume`: a source operator that runs a Pulsar consumer reads messages from one or multiple Pulsar topics.
-
-* `Process`: a processing operator transforms the messages.
-
-* `Produce`: a sink operator that runs a Pulsar producer writes the resulting messages to one or multiple Pulsar topics.
-
-![](/assets/txn-2.png)
-
-Pulsar transactions support end-to-end exactly-once stream processing, which means messages are not lost from a source operator and messages are not duplicated to a sink operator.
-
-## Use case
-
-Prior to Pulsar 2.8.0, there was no easy way to build stream processing applications with Pulsar to achieve exactly-once processing guarantees. With the transaction feature introduced in Pulsar 2.8.0, the following services support exactly-once semantics:
-
-* [Pulsar Flink connector](https://flink.apache.org/2021/01/07/pulsar-flink-connector-270.html)
-
-  Prior to Pulsar 2.8.0, if you wanted to build stream applications using Pulsar and Flink, the Pulsar Flink connector only supported an exactly-once source connector and an at-least-once sink connector. This means the highest end-to-end processing guarantee was at-least-once, so streaming applications could produce duplicated messages to the resulting topics in Pulsar.
-
-  With the transaction feature introduced in Pulsar 2.8.0, the Pulsar Flink sink connector can support exactly-once semantics by implementing the designated `TwoPhaseCommitSinkFunction` and hooking up the Flink sink message lifecycle with the Pulsar transaction API.
-
-* Support for Pulsar Functions and other connectors will be added in future releases.
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/txn-why.md b/site2/website/versioned_docs/version-2.8.3-deprecated/txn-why.md
deleted file mode 100644
index 1ed8769977654e..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/txn-why.md
+++ /dev/null
@@ -1,45 +0,0 @@
----
-id: txn-why
-title: Why transactions?
-sidebar_label: "Why transactions?"
-original_id: txn-why
----
-
-Pulsar transactions (txn) enable event streaming applications to consume, process, and produce messages in one atomic operation. The reasons for developing this feature are summarized below.
-
-## Demand of stream processing
-
-The demand for stream processing applications with stronger processing guarantees has grown along with the rise of stream processing.
For example, in the financial industry, financial institutions use stream processing engines to process debits and credits for users. This type of use case requires that every message is processed exactly once, without exception.
-
-In other words, if a stream processing application consumes message A and
-produces the result as a message B (B = f(A)), then the exactly-once processing
-guarantee means that A can be marked as consumed if and only if B is
-successfully produced, and vice versa.
-
-![](/assets/txn-1.png)
-
-The Pulsar transactions API strengthens the message delivery semantics and the processing guarantees for stream processing. It enables stream processing applications to consume, process, and produce messages in one atomic operation. That means a batch of messages in a transaction can be received from, produced to, and acknowledged by many topic partitions. All the operations involved in a transaction succeed or fail as one single unit.
-
-## Limitation of idempotent producer
-
-Avoiding data loss or duplication can be achieved by using the Pulsar idempotent producer, but it does not provide guarantees for writes across multiple partitions.
-
-In Pulsar, the highest level of message delivery guarantee is using an [idempotent producer](https://pulsar.apache.org/docs/en/next/concepts-messaging/#producer-idempotency) with exactly-once semantics on one single partition, that is, each message is persisted exactly once without data loss or duplication. However, there are some limitations in this solution:
-
-- Due to the monotonically increasing sequence ID, this solution only works on a single partition and within a single producer session (that is, for producing one message), so there is no atomicity when producing multiple messages to one or multiple partitions.
-
-  In this case, if there are some failures (for example, client / broker / bookie crashes, network failure, and more) in the process of producing and receiving messages, messages are re-processed and re-delivered, which may cause data loss or data duplication:
-
-  - For the producer: if the producer retries sending messages, some messages are persisted multiple times; if the producer does not retry sending messages, some messages are persisted once and other messages are lost.
-
-  - For the consumer: since the consumer does not know whether the broker has received messages or not, the consumer may not retry sending acks, which causes it to receive duplicate messages.
-
-- Similarly, Pulsar Functions only guarantee exactly-once semantics for an idempotent function on a single event; they cannot guarantee that processing multiple events or producing multiple results happens exactly once.
-
-  For example, if a function accepts multiple events and produces one result (for example, a window function), the function may fail between producing the result and acknowledging the incoming messages, or even between acknowledging individual events, which causes all (or some) incoming messages to be re-delivered and reprocessed, and a new result to be generated.
-
-  However, many scenarios need atomic guarantees across multiple partitions and sessions.
-
-- Consumers need to rely on more mechanisms to acknowledge (ack) messages once.
-
-  For example, consumers are required to store the MessageID along with its acked state. After the topic is unloaded, the subscription can recover the acked state of this MessageID in memory when the topic is loaded again.
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.8.3-deprecated/window-functions-context.md b/site2/website/versioned_docs/version-2.8.3-deprecated/window-functions-context.md
deleted file mode 100644
index f80fea57989ef0..00000000000000
--- a/site2/website/versioned_docs/version-2.8.3-deprecated/window-functions-context.md
+++ /dev/null
@@ -1,581 +0,0 @@
----
-id: window-functions-context
-title: Window Functions Context
-sidebar_label: "Window Functions: Context"
-original_id: window-functions-context
----
-
-The Java SDK provides access to a **window context object** that can be used by a window function. This context object provides a wide variety of information and functionality for Pulsar window functions, as described below.
-
-- [Spec](#spec)
-
-  * Names of all input topics and the output topic associated with the function.
-  * Tenant and namespace associated with the function.
-  * Pulsar window function name, ID, and version.
-  * ID of the Pulsar function instance running the window function.
-  * Number of instances that invoke the window function.
-  * Built-in type or custom class name of the output schema.
-
-- [Logger](#logger)
-
-  * Logger object used by the window function, which can be used to create window function log messages.
-
-- [User config](#user-config)
-
-  * Access to arbitrary user configuration values.
-
-- [Routing](#routing)
-
-  * Routing is supported in Pulsar window functions. Pulsar window functions send messages to arbitrary topics as per the `publish` interface.
-
-- [Metrics](#metrics)
-
-  * Interface for recording metrics.
-
-- [State storage](#state-storage)
-
-  * Interface for storing and retrieving state in [state storage](#state-storage).
-
-## Spec
-
-Spec contains the basic information of a function.
-
-### Get input topics
-
-The `getInputTopics` method gets the **name list** of all input topics.
-
-This example demonstrates how to get the name list of all input topics in a Java window function.
-
-```java
-
-public class GetInputTopicsWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        Collection<String> inputTopics = context.getInputTopics();
-        System.out.println(inputTopics);
-
-        return null;
-    }
-
-}
-
-```
-
-### Get output topic
-
-The `getOutputTopic` method gets the **name of a topic** to which the message is sent.
-
-This example demonstrates how to get the name of an output topic in a Java window function.
-
-```java
-
-public class GetOutputTopicWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        String outputTopic = context.getOutputTopic();
-        System.out.println(outputTopic);
-
-        return null;
-    }
-}
-
-```
-
-### Get tenant
-
-The `getTenant` method gets the tenant name associated with the window function.
-
-This example demonstrates how to get the tenant name in a Java window function.
-
-```java
-
-public class GetTenantWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        String tenant = context.getTenant();
-        System.out.println(tenant);
-
-        return null;
-    }
-
-}
-
-```
-
-### Get namespace
-
-The `getNamespace` method gets the namespace associated with the window function.
-
-This example demonstrates how to get the namespace in a Java window function.
-
-```java
-
-public class GetNamespaceWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        String ns = context.getNamespace();
-        System.out.println(ns);
-
-        return null;
-    }
-
-}
-
-```
-
-### Get function name
-
-The `getFunctionName` method gets the window function name.
-
-This example demonstrates how to get the function name in a Java window function.
-
-```java
-
-public class GetNameOfWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        String functionName = context.getFunctionName();
-        System.out.println(functionName);
-
-        return null;
-    }
-
-}
-
-```
-
-### Get function ID
-
-The `getFunctionId` method gets the window function ID.
-
-This example demonstrates how to get the function ID in a Java window function.
-
-```java
-
-public class GetFunctionIDWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        String functionID = context.getFunctionId();
-        System.out.println(functionID);
-
-        return null;
-    }
-
-}
-
-```
-
-### Get function version
-
-The `getFunctionVersion` method gets the window function version.
-
-This example demonstrates how to get the function version of a Java window function.
-
-```java
-
-public class GetVersionOfWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        String functionVersion = context.getFunctionVersion();
-        System.out.println(functionVersion);
-
-        return null;
-    }
-
-}
-
-```
-
-### Get instance ID
-
-The `getInstanceId` method gets the instance ID of a window function.
-
-This example demonstrates how to get the instance ID in a Java window function.
-
-```java
-
-public class GetInstanceIDWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        int instanceId = context.getInstanceId();
-        System.out.println(instanceId);
-
-        return null;
-    }
-
-}
-
-```
-
-### Get num instances
-
-The `getNumInstances` method gets the number of instances that invoke the window function.
-
-This example demonstrates how to get the number of instances in a Java window function.
-
-```java
-
-public class GetNumInstancesWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        int numInstances = context.getNumInstances();
-        System.out.println(numInstances);
-
-        return null;
-    }
-
-}
-
-```
-
-### Get output schema type
-
-The `getOutputSchemaType` method gets the built-in type or custom class name of the output schema.
-
-This example demonstrates how to get the output schema type of a Java window function.
-
-```java
-
-public class GetOutputSchemaTypeWindowFunction implements WindowFunction<String, Void> {
-
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        String schemaType = context.getOutputSchemaType();
-        System.out.println(schemaType);
-
-        return null;
-    }
-}
-
-```
-
-## Logger
-
-A Pulsar window function that uses the Java SDK has access to an [SLF4j](https://www.slf4j.org/) [`Logger`](https://www.slf4j.org/api/org/apache/log4j/Logger.html) object that can be used to produce logs at the chosen log level.
-
-This example logs each incoming record at the `INFO` level in a Java window function.
-
-```java
-
-import java.util.Collection;
-import org.apache.pulsar.functions.api.Record;
-import org.apache.pulsar.functions.api.WindowContext;
-import org.apache.pulsar.functions.api.WindowFunction;
-import org.slf4j.Logger;
-
-public class LoggingWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        Logger log = context.getLogger();
-        for (Record<String> record : inputs) {
-            log.info(record + "-window-log");
-        }
-        return null;
-    }
-
-}
-
-```
-
-If you need your function to produce logs, specify a log topic when creating or running the function.
-
-```bash
-
-bin/pulsar-admin functions create \
-  --jar my-functions.jar \
-  --classname my.package.LoggingWindowFunction \
-  --log-topic persistent://public/default/logging-function-logs \
-  # Other function configs
-
-```
-
-You can access all logs produced by `LoggingWindowFunction` via the `persistent://public/default/logging-function-logs` topic.
-
-## Metrics
-
-Pulsar window functions can publish arbitrary metrics to the metrics interface, which can be queried.
-
-:::note
-
-If a Pulsar window function uses the language-native interface for Java, that function is not able to publish metrics and stats to Pulsar.
-
-:::
-
-You can record metrics using the context object on a per-key basis.
-
-This example records the event time of each message under the `MessageEventTime` metric key every time the function processes a message in a Java window function.
-
-```java
-
-import java.util.Collection;
-import org.apache.pulsar.functions.api.Record;
-import org.apache.pulsar.functions.api.WindowContext;
-import org.apache.pulsar.functions.api.WindowFunction;
-
-
-/**
- * Example function that wants to keep track of
- * the event time of each message sent.
- */
-public class UserMetricWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-
-        for (Record<String> record : inputs) {
-            if (record.getEventTime().isPresent()) {
-                context.recordMetric("MessageEventTime", record.getEventTime().get().doubleValue());
-            }
-        }
-
-        return null;
-    }
-}
-
-```
-
-## User config
-
-When you run or update Pulsar Functions that are created using the SDK, you can pass arbitrary key/value pairs to them with the `--user-config` flag. Key/value pairs **must** be specified as JSON.
-
-This example passes a user-configured key/value to a function.
-
-```bash
-
-bin/pulsar-admin functions create \
-  --name word-filter \
-  --user-config '{"forbidden-word":"rosebud"}' \
-  # Other function configs
-
-```
-
-### API
-You can use the following APIs to get user-defined information for window functions.
-#### getUserConfigMap
-
-The `getUserConfigMap` API gets a map of all user-defined key/value configurations for the window function.
-
-```java
-
-/**
- * Get a map of all user-defined key/value configs for the function.
- *
- * @return The full map of user-defined config values
- */
- Map<String, Object> getUserConfigMap();
-
-```
-
-#### getUserConfigValue
-
-The `getUserConfigValue` API gets a user-defined key/value.
-
-```java
-
-/**
- * Get any user-defined key/value.
- *
- * @param key The key
- * @return The Optional value specified by the user for that key.
- */
- Optional<Object> getUserConfigValue(String key);
-
-```
-
-#### getUserConfigValueOrDefault
-
-The `getUserConfigValueOrDefault` API gets a user-defined key/value or a default value if none is present.
-
-```java
-
-/**
- * Get any user-defined key/value or a default value if none is present.
- *
- * @param key
- * @param defaultValue
- * @return Either the user config value associated with a given key or a supplied default value
- */
- Object getUserConfigValueOrDefault(String key, Object defaultValue);
-
-```
-
-This example demonstrates how to access key/value pairs provided to Pulsar window functions.
-
-The Java SDK context object enables you to access key/value pairs provided to Pulsar window functions via the command line (as JSON).
-
-:::tip
-
-For all key/value pairs passed to Java window functions, both the `key` and the `value` are `String`. To set the value to be a different type, you need to deserialize it from the `String` type.
-
-:::
-
-This example passes a key/value pair in a Java window function.
-
-```bash
-
-bin/pulsar-admin functions create \
-  --user-config '{"word-of-the-day":"verdure"}' \
-  # Other function configs
-
-```
-
-This example accesses values in a Java window function.
-
-The `UserConfigWindowFunction` function returns the value of the `WhatToWrite` user config every time the function is invoked (which means every time a message arrives). The user config value is changed **only** when the function is updated with a new config value via multiple ways, such as the command line tool or REST API.
-
-```java
-
-import java.util.Collection;
-import java.util.Optional;
-
-import org.apache.pulsar.functions.api.Record;
-import org.apache.pulsar.functions.api.WindowContext;
-import org.apache.pulsar.functions.api.WindowFunction;
-
-public class UserConfigWindowFunction implements WindowFunction<String, String> {
-    @Override
-    public String process(Collection<Record<String>> input, WindowContext context) throws Exception {
-        Optional<Object> whatToWrite = context.getUserConfigValue("WhatToWrite");
-        if (whatToWrite.isPresent()) {
-            return (String) whatToWrite.get();
-        } else {
-            return "Not a nice way";
-        }
-    }
-
-}
-
-```
-
-If no value is provided, you can access the entire user config map or set a default value.
-
-```java
-
-// Get the whole config map
-Map<String, Object> allConfigs = context.getUserConfigMap();
-
-// Get value or resort to default
-String wotd = (String) context.getUserConfigValueOrDefault("word-of-the-day", "perspicacious");
-
-```
-
-## Routing
-
-You can use the `context.publish()` interface to publish as many results as you want.
-
-This example shows that the `PublishWindowFunction` class uses the built-in function in the context to publish messages to the `publishTopic` in a Java function.
-
-```java
-
-public class PublishWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> input, WindowContext context) throws Exception {
-        String publishTopic = (String) context.getUserConfigValueOrDefault("publish-topic", "publishtopic");
-        String output = String.format("%s!", input);
-        context.publish(publishTopic, output);
-
-        return null;
-    }
-
-}
-
-```
-
-## State storage
-
-Pulsar window functions use [Apache BookKeeper](https://bookkeeper.apache.org) as a state storage interface. The Apache Pulsar installation (including the standalone installation) includes the deployment of BookKeeper bookies.
-
-Apache Pulsar integrates with the Apache BookKeeper `table service` to store the `state` for functions. For example, the `WordCount` function can store its `counters` state into the BookKeeper table service via the Pulsar Functions state APIs.
-
-States are key-value pairs, where the key is a string and the value is arbitrary binary data; counters are stored as 64-bit big-endian binary values. Keys are scoped to an individual Pulsar Function and shared between instances of that function.
-
-Currently, Pulsar window functions expose a Java API to access, update, and manage states. These APIs are available in the context object when you use the Java SDK.
-
-| Java API| Description
-|---|---
-|`incrCounter`|Increases a built-in distributed counter referred to by the key.
-|`getCounter`|Gets the counter value for the key.
-|`putState`|Updates the state value for the key.
-
-You can use the following APIs to access, update, and manage states in Java window functions.
-
-#### incrCounter
-
-The `incrCounter` API increases a built-in distributed counter referred to by the key.
-
-Applications use the `incrCounter` API to change the counter of a given `key` by the given `amount`. If the `key` does not exist, a new key is created.
-
-```java
-
-    /**
-     * Increment the builtin distributed counter referred by key
-     * @param key The name of the key
-     * @param amount The amount to be incremented
-     */
-    void incrCounter(String key, long amount);
-
-```
-
-#### getCounter
-
-The `getCounter` API gets the counter value for the key.
-
-Applications use the `getCounter` API to retrieve the counter of a given `key` changed by the `incrCounter` API.
-
-```java
-
-    /**
-     * Retrieve the counter value for the key.
-     *
-     * @param key name of the key
-     * @return the amount of the counter value for this key
-     */
-    long getCounter(String key);
-
-```
-
-Besides the `getCounter` API, Pulsar also exposes a general key/value API (`putState`) for functions to store arbitrary key/value state.
-
-#### putState
-
-The `putState` API updates the state value for the key.
-
-```java
-
-    /**
-     * Update the state value for the key.
-     *
-     * @param key name of the key
-     * @param value state value of the key
-     */
-    void putState(String key, ByteBuffer value);
-
-```
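-
-The following is a minimal usage sketch for `putState`; the class name and state key are hypothetical. It overwrites the same key for every record, so after the window is processed, the state holds the last value seen.
-
-```java
-
-import java.nio.ByteBuffer;
-import java.nio.charset.StandardCharsets;
-import java.util.Collection;
-
-import org.apache.pulsar.functions.api.Record;
-import org.apache.pulsar.functions.api.WindowContext;
-import org.apache.pulsar.functions.api.WindowFunction;
-
-public class LastValueWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        // Keep overwriting the same key; the state ends up holding the last value processed.
-        for (Record<String> input : inputs) {
-            context.putState("last-value",
-                    ByteBuffer.wrap(input.getValue().getBytes(StandardCharsets.UTF_8)));
-        }
-        return null;
-    }
-}
-
-```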
If you’re a beginner, there are tutorials and explainers to help you understand Pulsar and how it works. - -If you’re an experienced coder, review this page to learn the easiest way to access the specific content you’re looking for. - -## Get Started Now - - - - - - - - - -## Navigation -*** - -There are several ways to get around in the doc portal. The index navigation pane is a table of contents for the entire archive. The archive is divided into sections, like chapters in a book. Click the title of the topic to view it. - -In-context links provide an easy way to immediately reference related topics. Click the underlined term to view the topic. - -Links to related topics can be found at the bottom of each topic page. Click the link to view the topic. - -![Page Linking](/assets/page-linking.png) - -## Continuous Improvement -*** -As you probably know, we are working on a new user experience for our documentation portal that will make learning about and building on top of Apache Pulsar a much better experience. Whether you need overview concepts, how-to procedures, curated guides or quick references, we’re building content to support it. This welcome page is just the first step. We will be providing updates every month. - -## Help Improve These Documents -*** - -You’ll notice an Edit button at the bottom and top of each page. Click it to open a landing page with instructions for requesting changes to posted documents. These are your resources. Participation is not only welcomed – it’s essential! - -## Join the Community! -*** - -The Pulsar community on github is active, passionate, and knowledgeable. Join discussions, voice opinions, suggest features, and dive into the code itself. Find your Pulsar family here at [apache/pulsar](https://github.com/apache/pulsar). - -An equally passionate community can be found in the [Pulsar Slack channel](https://apache-pulsar.slack.com/). You’ll need an invitation to join, but many Github Pulsar community members are Slack members too. Join, hang out, learn, and make some new friends. - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/adaptors-kafka.md b/site2/website/versioned_docs/version-2.9.0-deprecated/adaptors-kafka.md deleted file mode 100644 index ea256049710fd1..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/adaptors-kafka.md +++ /dev/null @@ -1,276 +0,0 @@ ---- -id: adaptors-kafka -title: Pulsar adaptor for Apache Kafka -sidebar_label: "Kafka client wrapper" -original_id: adaptors-kafka ---- - - -Pulsar provides an easy option for applications that are currently written using the [Apache Kafka](http://kafka.apache.org) Java client API. - -## Using the Pulsar Kafka compatibility wrapper - -In an existing application, change the regular Kafka client dependency and replace it with the Pulsar Kafka wrapper. Remove the following dependency in `pom.xml`: - -```xml - - - org.apache.kafka - kafka-clients - 0.10.2.1 - - -``` - -Then include this dependency for the Pulsar Kafka wrapper: - -```xml - - - org.apache.pulsar - pulsar-client-kafka - @pulsar:version@ - - -``` - -With the new dependency, the existing code works without any changes. You need to adjust the configuration, and make sure it points the -producers and consumers to Pulsar service rather than Kafka, and uses a particular -Pulsar topic. 
-
-## Using the Pulsar Kafka compatibility wrapper together with the existing Kafka client
-
-When migrating from Kafka to Pulsar, the application might need to use the original Kafka client and the Pulsar Kafka wrapper together. In that case, consider using the unshaded Pulsar Kafka client wrapper.
-
-```xml
-
-<dependency>
-  <groupId>org.apache.pulsar</groupId>
-  <artifactId>pulsar-client-kafka-original</artifactId>
-  <version>@pulsar:version@</version>
-</dependency>
-
-```
-
-When using this dependency, construct producers using `org.apache.kafka.clients.producer.PulsarKafkaProducer` instead of `org.apache.kafka.clients.producer.KafkaProducer`, and consumers using `org.apache.kafka.clients.consumer.PulsarKafkaConsumer`.
-
-## Producer example
-
-```java
-
-// Topic needs to be a regular Pulsar topic
-String topic = "persistent://public/default/my-topic";
-
-Properties props = new Properties();
-// Point to a Pulsar service
-props.put("bootstrap.servers", "pulsar://localhost:6650");
-
-props.put("key.serializer", IntegerSerializer.class.getName());
-props.put("value.serializer", StringSerializer.class.getName());
-
-Producer<Integer, String> producer = new KafkaProducer<>(props);
-
-for (int i = 0; i < 10; i++) {
-    producer.send(new ProducerRecord<>(topic, i, "hello-" + i));
-    log.info("Message {} sent successfully", i);
-}
-
-producer.close();
-
-```
-
-## Consumer example
-
-```java
-
-String topic = "persistent://public/default/my-topic";
-
-Properties props = new Properties();
-// Point to a Pulsar service
-props.put("bootstrap.servers", "pulsar://localhost:6650");
-props.put("group.id", "my-subscription-name");
-props.put("enable.auto.commit", "false");
-props.put("key.deserializer", IntegerDeserializer.class.getName());
-props.put("value.deserializer", StringDeserializer.class.getName());
-
-Consumer<Integer, String> consumer = new KafkaConsumer<>(props);
-consumer.subscribe(Arrays.asList(topic));
-
-while (true) {
-    ConsumerRecords<Integer, String> records = consumer.poll(100);
-    records.forEach(record -> {
-        log.info("Received record: {}", record);
-    });
-
-    // Commit last offset
-    consumer.commitSync();
-}
-
-```
-
-## Complete Examples
-
-You can find the complete producer and consumer examples [here](https://github.com/apache/pulsar-adapters/tree/master/pulsar-client-kafka-compat/pulsar-client-kafka-tests/src/test/java/org/apache/pulsar/client/kafka/compat/examples).
-
-## Compatibility matrix
-
-Currently the Pulsar Kafka wrapper supports most of the operations offered by the Kafka API.
-
-### Producer
-
-APIs:
-
-| Producer Method | Supported | Notes |
-|:------------------------------------------------------------------------------|:----------|:-------------------------------------------------------------------------|
-| `Future<RecordMetadata> send(ProducerRecord<K, V> record)` | Yes | |
-| `Future<RecordMetadata> send(ProducerRecord<K, V> record, Callback callback)` | Yes | |
-| `void flush()` | Yes | |
-| `List<PartitionInfo> partitionsFor(String topic)` | No | |
-| `Map<MetricName, ? extends Metric> metrics()` | No | |
-| `void close()` | Yes | |
-| `void close(long timeout, TimeUnit unit)` | Yes | |
-
-Properties:
-
-| Config property | Supported | Notes |
-|:----------------------------------------|:----------|:------------------------------------------------------------------------------|
-| `acks` | Ignored | Durability and quorum writes are configured at the namespace level |
-| `auto.offset.reset` | Yes | It uses a default value of `earliest` if you do not give a specific setting. |
-| `batch.size` | Ignored | |
-| `bootstrap.servers` | Yes | |
-| `buffer.memory` | Ignored | |
-| `client.id` | Ignored | |
-| `compression.type` | Yes | Allows `gzip` and `lz4`. No `snappy`. |
-| `connections.max.idle.ms` | Yes | Only supports up to 2,147,483,647,000 (Integer.MAX_VALUE * 1000) ms of idle time |
-| `interceptor.classes` | Yes | |
-| `key.serializer` | Yes | |
-| `linger.ms` | Yes | Controls the group commit time when batching messages |
-| `max.block.ms` | Ignored | |
-| `max.in.flight.requests.per.connection` | Ignored | In Pulsar ordering is maintained even with multiple requests in flight |
-| `max.request.size` | Ignored | |
-| `metric.reporters` | Ignored | |
-| `metrics.num.samples` | Ignored | |
-| `metrics.sample.window.ms` | Ignored | |
-| `partitioner.class` | Yes | |
-| `receive.buffer.bytes` | Ignored | |
-| `reconnect.backoff.ms` | Ignored | |
-| `request.timeout.ms` | Ignored | |
-| `retries` | Ignored | Pulsar client retries with exponential backoff until the send timeout expires. |
-| `send.buffer.bytes` | Ignored | |
-| `timeout.ms` | Yes | |
-| `value.serializer` | Yes | |
-
-
-### Consumer
-
-The following table lists consumer APIs.
-
-| Consumer Method | Supported | Notes |
-|:--------------------------------------------------------------------------------------------------------|:----------|:------|
-| `Set<TopicPartition> assignment()` | No | |
-| `Set<String> subscription()` | Yes | |
-| `void subscribe(Collection<String> topics)` | Yes | |
-| `void subscribe(Collection<String> topics, ConsumerRebalanceListener callback)` | No | |
-| `void assign(Collection<TopicPartition> partitions)` | No | |
-| `void subscribe(Pattern pattern, ConsumerRebalanceListener callback)` | No | |
-| `void unsubscribe()` | Yes | |
-| `ConsumerRecords<K, V> poll(long timeoutMillis)` | Yes | |
-| `void commitSync()` | Yes | |
-| `void commitSync(Map<TopicPartition, OffsetAndMetadata> offsets)` | Yes | |
-| `void commitAsync()` | Yes | |
-| `void commitAsync(OffsetCommitCallback callback)` | Yes | |
-| `void commitAsync(Map<TopicPartition, OffsetAndMetadata> offsets, OffsetCommitCallback callback)` | Yes | |
-| `void seek(TopicPartition partition, long offset)` | Yes | |
-| `void seekToBeginning(Collection<TopicPartition> partitions)` | Yes | |
-| `void seekToEnd(Collection<TopicPartition> partitions)` | Yes | |
-| `long position(TopicPartition partition)` | Yes | |
-| `OffsetAndMetadata committed(TopicPartition partition)` | Yes | |
-| `Map<MetricName, ? extends Metric> metrics()` | No | |
-| `List<PartitionInfo> partitionsFor(String topic)` | No | |
-| `Map<String, List<PartitionInfo>> listTopics()` | No | |
-| `Set<TopicPartition> paused()` | No | |
-| `void pause(Collection<TopicPartition> partitions)` | No | |
-| `void resume(Collection<TopicPartition> partitions)` | No | |
-| `Map<TopicPartition, OffsetAndTimestamp> offsetsForTimes(Map<TopicPartition, Long> timestampsToSearch)` | No | |
-| `Map<TopicPartition, Long> beginningOffsets(Collection<TopicPartition> partitions)` | No | |
-| `Map<TopicPartition, Long> endOffsets(Collection<TopicPartition> partitions)` | No | |
-| `void close()` | Yes | |
-| `void close(long timeout, TimeUnit unit)` | Yes | |
-| `void wakeup()` | No | |
-
-Properties:
-
-| Config property | Supported | Notes |
-|:--------------------------------|:----------|:------------------------------------------------------|
-| `group.id` | Yes | Maps to a Pulsar subscription name |
-| `max.poll.records` | Yes | |
-| `max.poll.interval.ms` | Ignored | Messages are "pushed" from broker |
-| `session.timeout.ms` | Ignored | |
-| `heartbeat.interval.ms` | Ignored | |
-| `bootstrap.servers` | Yes | Needs to point to a single Pulsar service URL |
-| `enable.auto.commit` | Yes | |
-| `auto.commit.interval.ms` | Ignored | With auto-commit, acks are sent immediately to broker |
-| `partition.assignment.strategy` | Ignored | |
-| `auto.offset.reset` | Yes | Only supports `earliest` and `latest`.
| -| `fetch.min.bytes` | Ignored | | -| `fetch.max.bytes` | Ignored | | -| `fetch.max.wait.ms` | Ignored | | -| `interceptor.classes` | Yes | | -| `metadata.max.age.ms` | Ignored | | -| `max.partition.fetch.bytes` | Ignored | | -| `send.buffer.bytes` | Ignored | | -| `receive.buffer.bytes` | Ignored | | -| `client.id` | Ignored | | - - -## Customize Pulsar configurations - -You can configure Pulsar authentication provider directly from the Kafka properties. - -### Pulsar client properties - -| Config property | Default | Notes | -|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------| -| [`pulsar.authentication.class`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setAuthentication-org.apache.pulsar.client.api.Authentication-) | | Configure to auth provider. For example, `org.apache.pulsar.client.impl.auth.AuthenticationTls`.| -| [`pulsar.authentication.params.map`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setAuthentication-java.lang.String-java.util.Map-) | | Map which represents parameters for the Authentication-Plugin. | -| [`pulsar.authentication.params.string`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setAuthentication-java.lang.String-java.lang.String-) | | String which represents parameters for the Authentication-Plugin, for example, `key1:val1,key2:val2`. | -| [`pulsar.use.tls`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setUseTls-boolean-) | `false` | Enable TLS transport encryption. | -| [`pulsar.tls.trust.certs.file.path`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setTlsTrustCertsFilePath-java.lang.String-) | | Path for the TLS trust certificate store. | -| [`pulsar.tls.allow.insecure.connection`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setTlsAllowInsecureConnection-boolean-) | `false` | Accept self-signed certificates from brokers. | -| [`pulsar.operation.timeout.ms`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setOperationTimeout-int-java.util.concurrent.TimeUnit-) | `30000` | General operations timeout. | -| [`pulsar.stats.interval.seconds`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setStatsInterval-long-java.util.concurrent.TimeUnit-) | `60` | Pulsar client lib stats printing interval. | -| [`pulsar.num.io.threads`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setIoThreads-int-) | `1` | The number of Netty IO threads to use. | -| [`pulsar.connections.per.broker`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setConnectionsPerBroker-int-) | `1` | The maximum number of connection to each broker. | -| [`pulsar.use.tcp.nodelay`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setUseTcpNoDelay-boolean-) | `true` | TCP no-delay. | -| [`pulsar.concurrent.lookup.requests`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setConcurrentLookupRequest-int-) | `50000` | The maximum number of concurrent topic lookups. 
-
-
-### Pulsar producer properties
-
-| Config property | Default | Notes |
-|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------|
-| [`pulsar.producer.name`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setProducerName-java.lang.String-) | | Specify the producer name. |
-| [`pulsar.producer.initial.sequence.id`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setInitialSequenceId-long-) | | Specify the baseline for the sequence ID of this producer. |
-| [`pulsar.producer.max.pending.messages`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setMaxPendingMessages-int-) | `1000` | Set the maximum size of the queue of messages pending an acknowledgment from the broker. |
-| [`pulsar.producer.max.pending.messages.across.partitions`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setMaxPendingMessagesAcrossPartitions-int-) | `50000` | Set the maximum number of pending messages across all the partitions. |
-| [`pulsar.producer.batching.enabled`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setBatchingEnabled-boolean-) | `true` | Control whether automatic batching of messages is enabled for the producer. |
-| [`pulsar.producer.batching.max.messages`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setBatchingMaxMessages-int-) | `1000` | The maximum number of messages in a batch. |
-| [`pulsar.block.if.producer.queue.full`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setBlockIfQueueFull-boolean-) | | Specify whether the producer blocks when its message queue is full. |
-| [`pulsar.crypto.reader.factory.class.name`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setCryptoKeyReader-org.apache.pulsar.client.api.CryptoKeyReader-) | | Specify the `CryptoKeyReaderFactory` class name, which allows the producer to create a `CryptoKeyReader`. |
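-
-The producer-side knobs combine the standard Kafka serializer settings with the `pulsar.producer.*` keys above. A minimal, hedged sketch follows; the broker URL and topic name are placeholders.
-
-```java
-
-import java.util.Properties;
-
-import org.apache.kafka.clients.producer.KafkaProducer;
-import org.apache.kafka.clients.producer.Producer;
-import org.apache.kafka.clients.producer.ProducerRecord;
-import org.apache.kafka.common.serialization.StringSerializer;
-
-Properties props = new Properties();
-props.put("bootstrap.servers", "pulsar://localhost:6650");
-props.put("key.serializer", StringSerializer.class.getName());
-props.put("value.serializer", StringSerializer.class.getName());
-// Pulsar-specific producer tuning is passed through the same Properties object.
-props.put("pulsar.producer.batching.enabled", "true");
-props.put("pulsar.producer.batching.max.messages", "500");
-
-Producer<String, String> producer = new KafkaProducer<>(props);
-producer.send(new ProducerRecord<>("persistent://public/default/my-topic", "my-key", "hello"));
-producer.close();
-
-```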
-### Pulsar consumer Properties
-
-| Config property | Default | Notes |
-|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------|
-| [`pulsar.consumer.name`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setConsumerName-java.lang.String-) | | Specify the consumer name. |
-| [`pulsar.consumer.receiver.queue.size`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setReceiverQueueSize-int-) | 1000 | Set the size of the consumer receiver queue. |
-| [`pulsar.consumer.acknowledgments.group.time.millis`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#acknowledgmentGroupTime-long-java.util.concurrent.TimeUnit-) | 100 | Set the maximum time that the consumer groups acknowledgments before sending them to the broker. |
-| [`pulsar.consumer.total.receiver.queue.size.across.partitions`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setMaxTotalReceiverQueueSizeAcrossPartitions-int-) | 50000 | Set the maximum size of the total receiver queue across partitions. |
-| [`pulsar.consumer.subscription.topics.mode`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#subscriptionTopicsMode-Mode-) | PersistentOnly | Set the subscription topic mode for consumers. |
-| [`pulsar.crypto.reader.factory.class.name`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setCryptoKeyReader-org.apache.pulsar.client.api.CryptoKeyReader-) | | Specify the `CryptoKeyReaderFactory` class name, which allows the consumer to create a `CryptoKeyReader`. |
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/adaptors-spark.md b/site2/website/versioned_docs/version-2.9.0-deprecated/adaptors-spark.md
deleted file mode 100644
index e14f13b5d4b079..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/adaptors-spark.md
+++ /dev/null
@@ -1,91 +0,0 @@
----
-id: adaptors-spark
-title: Pulsar adaptor for Apache Spark
-sidebar_label: "Apache Spark"
-original_id: adaptors-spark
----
-
-## Spark Streaming receiver
-The Spark Streaming receiver for Pulsar is a custom receiver that enables Apache [Spark Streaming](https://spark.apache.org/streaming/) to receive raw data from Pulsar.
-
-An application can receive data in [Resilient Distributed Dataset](https://spark.apache.org/docs/latest/programming-guide.html#resilient-distributed-datasets-rdds) (RDD) format via the Spark Streaming receiver and can process it in a variety of ways.
-
-### Prerequisites
-
-To use the receiver, include a dependency for the `pulsar-spark` library in your Java configuration.
-
-#### Maven
-
-If you're using Maven, add this to your `pom.xml`:
-
-```xml
-
-<!-- in your <properties> block -->
-<pulsar.version>@pulsar:version@</pulsar.version>
-
-<!-- in your <dependencies> block -->
-<dependency>
-  <groupId>org.apache.pulsar</groupId>
-  <artifactId>pulsar-spark</artifactId>
-  <version>${pulsar.version}</version>
-</dependency>
-
-```
-
-#### Gradle
-
-If you're using Gradle, add this to your `build.gradle` file:
-
-```groovy
-
-def pulsarVersion = "@pulsar:version@"
-
-dependencies {
-    compile group: 'org.apache.pulsar', name: 'pulsar-spark', version: pulsarVersion
-}
-
-```
-
-### Usage
-
-Pass an instance of `SparkStreamingPulsarReceiver` to the `receiverStream` method in `JavaStreamingContext`:
-
-```java
-
-String serviceUrl = "pulsar://localhost:6650/";
-String topic = "persistent://public/default/test_src";
-String subs = "test_sub";
-
-SparkConf sparkConf = new SparkConf().setMaster("local[*]").setAppName("Pulsar Spark Example");
-
-JavaStreamingContext jsc = new JavaStreamingContext(sparkConf, Durations.seconds(60));
-
-ConsumerConfigurationData<byte[]> pulsarConf = new ConsumerConfigurationData();
-
-Set<String> set = new HashSet<>();
-set.add(topic);
-pulsarConf.setTopicNames(set);
-pulsarConf.setSubscriptionName(subs);
-
-SparkStreamingPulsarReceiver pulsarReceiver = new SparkStreamingPulsarReceiver(
-    serviceUrl,
-    pulsarConf,
-    new AuthenticationDisabled());
-
-JavaReceiverInputDStream<byte[]> lineDStream = jsc.receiverStream(pulsarReceiver);
-
-```
-
-For a complete example, click [here](https://github.com/apache/pulsar-adapters/blob/master/examples/spark/src/main/java/org/apache/spark/streaming/receiver/example/SparkStreamingPulsarReceiverExample.java). In this example, the number of received messages that contain the string "Pulsar" is counted.
-
-Note that if needed, other Pulsar authentication classes can be used. For example, in order to use a token during authentication the following parameters for the `SparkStreamingPulsarReceiver` constructor can be set:
-
-```java
-
-SparkStreamingPulsarReceiver pulsarReceiver = new SparkStreamingPulsarReceiver(
-    serviceUrl,
-    pulsarConf,
-    new AuthenticationToken("token:<auth-token>"));
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/adaptors-storm.md b/site2/website/versioned_docs/version-2.9.0-deprecated/adaptors-storm.md
deleted file mode 100644
index 76d507164777db..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/adaptors-storm.md
+++ /dev/null
@@ -1,96 +0,0 @@
----
-id: adaptors-storm
-title: Pulsar adaptor for Apache Storm
-sidebar_label: "Apache Storm"
-original_id: adaptors-storm
----
-
-Pulsar Storm is an adaptor for integrating with [Apache Storm](http://storm.apache.org/) topologies. It provides core Storm implementations for sending and receiving data.
-
-An application can inject data into a Storm topology via a generic Pulsar spout, as well as consume data from a Storm topology via a generic Pulsar bolt.
-
-## Using the Pulsar Storm Adaptor
-
-Include the dependency for the Pulsar Storm adaptor:
-
-```xml
-
-<dependency>
-  <groupId>org.apache.pulsar</groupId>
-  <artifactId>pulsar-storm</artifactId>
-  <version>${pulsar.version}</version>
-</dependency>
-
-```
-
-## Pulsar Spout
-
-The Pulsar Spout allows for the data published on a topic to be consumed by a Storm topology. It emits a Storm tuple based on the message received and the `MessageToValuesMapper` provided by the client.
-
-The tuples that fail to be processed by the downstream bolts will be re-injected by the spout with an exponential backoff, within a configurable timeout (the default is 60 seconds) or a configurable number of retries, whichever comes first, after which it is acknowledged by the consumer.
Here's an example construction of a spout:
-
-```java
-
-MessageToValuesMapper messageToValuesMapper = new MessageToValuesMapper() {
-
-    @Override
-    public Values toValues(Message<byte[]> msg) {
-        return new Values(new String(msg.getData()));
-    }
-
-    @Override
-    public void declareOutputFields(OutputFieldsDeclarer declarer) {
-        // declare the output fields
-        declarer.declare(new Fields("string"));
-    }
-};
-
-// Configure a Pulsar Spout
-PulsarSpoutConfiguration spoutConf = new PulsarSpoutConfiguration();
-spoutConf.setServiceUrl("pulsar://broker.messaging.usw.example.com:6650");
-spoutConf.setTopic("persistent://my-property/usw/my-ns/my-topic1");
-spoutConf.setSubscriptionName("my-subscriber-name1");
-spoutConf.setMessageToValuesMapper(messageToValuesMapper);
-
-// Create a Pulsar Spout
-PulsarSpout spout = new PulsarSpout(spoutConf);
-
-```
-
-For a complete example, click [here](https://github.com/apache/pulsar-adapters/blob/master/pulsar-storm/src/test/java/org/apache/pulsar/storm/PulsarSpoutTest.java).
-
-## Pulsar Bolt
-
-The Pulsar bolt allows data in a Storm topology to be published on a topic. It publishes messages based on the Storm tuple received and the `TupleToMessageMapper` provided by the client.
-
-A partitioned topic can also be used to publish messages on different topics. In the implementation of the `TupleToMessageMapper`, a "key" needs to be provided in the message, which routes messages with the same key to the same partition. Here's an example bolt:
-
-```java
-
-TupleToMessageMapper tupleToMessageMapper = new TupleToMessageMapper() {
-
-    @Override
-    public TypedMessageBuilder<byte[]> toMessage(TypedMessageBuilder<byte[]> msgBuilder, Tuple tuple) {
-        String receivedMessage = tuple.getString(0);
-        // message processing
-        String processedMsg = receivedMessage + "-processed";
-        return msgBuilder.value(processedMsg.getBytes());
-    }
-
-    @Override
-    public void declareOutputFields(OutputFieldsDeclarer declarer) {
-        // declare the output fields
-    }
-};
-
-// Configure a Pulsar Bolt
-PulsarBoltConfiguration boltConf = new PulsarBoltConfiguration();
-boltConf.setServiceUrl("pulsar://broker.messaging.usw.example.com:6650");
-boltConf.setTopic("persistent://my-property/usw/my-ns/my-topic2");
-boltConf.setTupleToMessageMapper(tupleToMessageMapper);
-
-// Create a Pulsar Bolt
-PulsarBolt bolt = new PulsarBolt(boltConf);
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-brokers.md b/site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-brokers.md
deleted file mode 100644
index 930fe69ecfb0ee..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-brokers.md
+++ /dev/null
@@ -1,286 +0,0 @@
----
-id: admin-api-brokers
-title: Managing Brokers
-sidebar_label: "Brokers"
-original_id: admin-api-brokers
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-> **Important**
->
-> This page only shows **some frequently used operations**.
->
-> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/).
->
-> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc.
->
-> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/).
-
-Pulsar brokers consist of two components:
-
-1. An HTTP server exposing a {@inject: rest:REST:/} interface for administration and [topic](reference-terminology.md#topic) lookup.
-2. A dispatcher that handles all Pulsar [message](reference-terminology.md#message) transfers.
-
-[Brokers](reference-terminology.md#broker) can be managed via:
-
-* The `brokers` command of the [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/) tool
-* The `/admin/v2/brokers` endpoint of the admin {@inject: rest:REST:/} API
-* The `brokers` method of the `PulsarAdmin` object in the [Java API](client-libraries-java.md)
-
-In addition to being configurable when you start them up, brokers can also be [dynamically configured](#dynamic-broker-configuration).
-
-> See the [Configuration](reference-configuration.md#broker) page for a full listing of broker-specific configuration parameters.
-
-## Brokers resources
-
-### List active brokers
-
-Fetch all available active brokers that are serving traffic in the given cluster.
-
-````mdx-code-block
-
-
-```shell
-
-$ pulsar-admin brokers list use
-
-```
-
-```
-
-broker1.use.org.com:8080
-
-```
-
-
-
-
-{@inject: endpoint|GET|/admin/v2/brokers/:cluster|operation/getActiveBrokers?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.brokers().getActiveBrokers(clusterName)
-
-```
-
-
-
-
-````
-
-### Get the information of the leader broker
-
-Fetch the information of the leader broker, for example, the service URL.
-
-````mdx-code-block
-
-
-```shell
-
-$ pulsar-admin brokers leader-broker
-
-```
-
-```
-
-BrokerInfo(serviceUrl=broker1.use.org.com:8080)
-
-```
-
-
-
-
-{@inject: endpoint|GET|/admin/v2/brokers/leaderBroker|operation/getLeaderBroker?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.brokers().getLeaderBroker()
-
-```
-
-For the details of the code above, see [here](https://github.com/apache/pulsar/blob/master/pulsar-client-admin/src/main/java/org/apache/pulsar/client/admin/internal/BrokersImpl.java#L80)
-
-
-
-
-````
-
-#### List namespaces owned by a given broker
-
-This command finds all namespaces that are owned and served by a given broker.
-
-````mdx-code-block
-
-
-```shell
-
-$ pulsar-admin brokers namespaces use \
-  --url broker1.use.org.com:8080
-
-```
-
-```json
-
-{
-  "my-property/use/my-ns/0x00000000_0xffffffff": {
-    "broker_assignment": "shared",
-    "is_controlled": false,
-    "is_active": true
-  }
-}
-
-```
-
-
-
-
-{@inject: endpoint|GET|/admin/v2/brokers/:cluster/:broker/ownedNamespaces|operation/getOwnedNamespaes?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.brokers().getOwnedNamespaces(cluster,brokerUrl);
-
-```
-
-
-
-
-````
-
-### Dynamic broker configuration
-
-One way to configure a Pulsar [broker](reference-terminology.md#broker) is to supply a [configuration](reference-configuration.md#broker) when the broker is [started up](reference-cli-tools.md#pulsar-broker).
-
-But since all broker configuration in Pulsar is stored in ZooKeeper, configuration values can also be dynamically updated *while the broker is running*. When you update broker configuration dynamically, ZooKeeper will notify the broker of the change and the broker will then override any existing configuration values.
-
-* The [`brokers`](reference-pulsar-admin.md#brokers) command for the [`pulsar-admin`](reference-pulsar-admin.md) tool has a variety of subcommands that enable you to manipulate a broker's configuration dynamically, for example to [update config values](#update-dynamic-configuration).
-* In the Pulsar admin {@inject: rest:REST:/} API, dynamic configuration is managed through the `/admin/v2/brokers/configuration` endpoint. - -### Update dynamic configuration - -````mdx-code-block - - - -The [`update-dynamic-config`](reference-pulsar-admin.md#brokers-update-dynamic-config) subcommand will update existing configuration. It takes two arguments: the name of the parameter and the new value using the `config` and `value` flag respectively. Here's an example for the [`brokerShutdownTimeoutMs`](reference-configuration.md#broker-brokerShutdownTimeoutMs) parameter: - -```shell - -$ pulsar-admin brokers update-dynamic-config --config brokerShutdownTimeoutMs --value 100 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/brokers/configuration/:configName/:configValue|operation/updateDynamicConfiguration?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().updateDynamicConfiguration(configName, configValue); - -``` - - - - -```` - -### List updated values - -Fetch a list of all potentially updatable configuration parameters. -````mdx-code-block - - - -```shell - -$ pulsar-admin brokers list-dynamic-config -brokerShutdownTimeoutMs - -``` - - - - -{@inject: endpoint|GET|/admin/v2/brokers/configuration|operation/getDynamicConfigurationName?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().getDynamicConfigurationNames(); - -``` - - - - -```` - -### List all - -Fetch a list of all parameters that have been dynamically updated. - -````mdx-code-block - - - -```shell - -$ pulsar-admin brokers get-all-dynamic-config -brokerShutdownTimeoutMs:100 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/brokers/configuration/values|operation/getAllDynamicConfigurations?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().getAllDynamicConfigurations(); - -``` - - - - -```` diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-clusters.md b/site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-clusters.md deleted file mode 100644 index 1d0c5dc9786f5a..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-clusters.md +++ /dev/null @@ -1,318 +0,0 @@ ---- -id: admin-api-clusters -title: Managing Clusters -sidebar_label: "Clusters" -original_id: admin-api-clusters ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -> **Important** -> -> This page only shows **some frequently used operations**. -> -> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/) -> -> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. -> -> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/). - -Pulsar clusters consist of one or more Pulsar [brokers](reference-terminology.md#broker), one or more [BookKeeper](reference-terminology.md#bookkeeper) -servers (aka [bookies](reference-terminology.md#bookie)), and a [ZooKeeper](https://zookeeper.apache.org) cluster that provides configuration and coordination management. 
-
-Clusters can be managed via:
-
-* The `clusters` command of the [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/) tool
-* The `/admin/v2/clusters` endpoint of the admin {@inject: rest:REST:/} API
-* The `clusters` method of the `PulsarAdmin` object in the [Java API](client-libraries-java.md)
-
-## Clusters resources
-
-### Provision
-
-New clusters can be provisioned using the admin interface.
-
-> Please note that this operation requires superuser privileges.
-
-````mdx-code-block
-
-
-You can provision a new cluster using the [`create`](reference-pulsar-admin.md#clusters-create) subcommand. Here's an example:
-
-```shell
-
-$ pulsar-admin clusters create cluster-1 \
-  --url http://my-cluster.org.com:8080 \
-  --broker-url pulsar://my-cluster.org.com:6650
-
-```
-
-
-
-
-{@inject: endpoint|PUT|/admin/v2/clusters/:cluster|operation/createCluster?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-ClusterData clusterData = new ClusterData(
-    serviceUrl,
-    serviceUrlTls,
-    brokerServiceUrl,
-    brokerServiceUrlTls
-);
-admin.clusters().createCluster(clusterName, clusterData);
-
-```
-
-
-
-
-````
-
-### Initialize cluster metadata
-
-When provisioning a new cluster, you need to initialize that cluster's [metadata](concepts-architecture-overview.md#metadata-store). When initializing cluster metadata, you need to specify all of the following:
-
-* The name of the cluster
-* The local ZooKeeper connection string for the cluster
-* The configuration store connection string for the entire instance
-* The web service URL for the cluster
-* A broker service URL enabling interaction with the [brokers](reference-terminology.md#broker) in the cluster
-
-You must initialize cluster metadata *before* starting up any [brokers](admin-api-brokers.md) that will belong to the cluster.
-
-> **No cluster metadata initialization through the REST API or the Java admin API**
->
-> Unlike most other admin functions in Pulsar, cluster metadata initialization cannot be performed via the admin REST API
-> or the admin Java client, as metadata initialization involves communicating with ZooKeeper directly.
-> Instead, you can use the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool, in particular
-> the [`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata) command.
-
-Here's an example cluster metadata initialization command:
-
-```shell
-
-bin/pulsar initialize-cluster-metadata \
-  --cluster us-west \
-  --zookeeper zk1.us-west.example.com:2181 \
-  --configuration-store zk1.us-west.example.com:2184 \
-  --web-service-url http://pulsar.us-west.example.com:8080/ \
-  --web-service-url-tls https://pulsar.us-west.example.com:8443/ \
-  --broker-service-url pulsar://pulsar.us-west.example.com:6650/ \
-  --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651/
-
-```
-
-You'll need to use `--*-tls` flags only if you're using [TLS authentication](security-tls-authentication.md) in your instance.
-
-### Get configuration
-
-You can fetch the [configuration](reference-configuration.md) for an existing cluster at any time.
-
-````mdx-code-block
-
-
-Use the [`get`](reference-pulsar-admin.md#clusters-get) subcommand and specify the name of the cluster. Here's an example:
-
-```shell
-
-$ pulsar-admin clusters get cluster-1
-{
-  "serviceUrl": "http://my-cluster.org.com:8080/",
-  "serviceUrlTls": null,
-  "brokerServiceUrl": "pulsar://my-cluster.org.com:6650/",
-  "brokerServiceUrlTls": null,
-  "peerClusterNames": null
-}
-
-```
-
-
-
-
-{@inject: endpoint|GET|/admin/v2/clusters/:cluster|operation/getCluster?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.clusters().getCluster(clusterName);
-
-```
-
-
-
-
-````
-
-### Update
-
-You can update the configuration for an existing cluster at any time.
-
-````mdx-code-block
-
-
-Use the [`update`](reference-pulsar-admin.md#clusters-update) subcommand and specify new configuration values using flags.
-
-```shell
-
-$ pulsar-admin clusters update cluster-1 \
-  --url http://my-cluster.org.com:4081 \
-  --broker-url pulsar://my-cluster.org.com:3350
-
-```
-
-
-
-
-{@inject: endpoint|POST|/admin/v2/clusters/:cluster|operation/updateCluster?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-ClusterData clusterData = new ClusterData(
-    serviceUrl,
-    serviceUrlTls,
-    brokerServiceUrl,
-    brokerServiceUrlTls
-);
-admin.clusters().updateCluster(clusterName, clusterData);
-
-```
-
-
-
-
-````
-
-### Delete
-
-Clusters can be deleted from a Pulsar [instance](reference-terminology.md#instance).
-
-````mdx-code-block
-
-
-Use the [`delete`](reference-pulsar-admin.md#clusters-delete) subcommand and specify the name of the cluster.
-
-```
-
-$ pulsar-admin clusters delete cluster-1
-
-```
-
-
-
-
-{@inject: endpoint|DELETE|/admin/v2/clusters/:cluster|operation/deleteCluster?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.clusters().deleteCluster(clusterName);
-
-```
-
-
-
-
-````
-
-### List
-
-You can fetch a list of all clusters in a Pulsar [instance](reference-terminology.md#instance).
-
-````mdx-code-block
-
-
-Use the [`list`](reference-pulsar-admin.md#clusters-list) subcommand.
-
-```shell
-
-$ pulsar-admin clusters list
-cluster-1
-cluster-2
-
-```
-
-
-
-
-{@inject: endpoint|GET|/admin/v2/clusters|operation/getClusters?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.clusters().getClusters();
-
-```
-
-
-
-
-````
-
-### Update peer-cluster data
-
-Peer clusters can be configured for a given cluster in a Pulsar [instance](reference-terminology.md#instance).
-
-````mdx-code-block
-
-
-Use the [`update-peer-clusters`](reference-pulsar-admin.md#clusters-update-peer-clusters) subcommand and specify the list of peer-cluster names.
-
-```
-
-$ pulsar-admin clusters update-peer-clusters cluster-1 --peer-clusters cluster-2
-
-```
-
-
-
-
-{@inject: endpoint|POST|/admin/v2/clusters/:cluster/peers|operation/setPeerClusterNames?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.clusters().updatePeerClusterNames(clusterName, peerClusterList);
-
-```
-
-
-
-
-````
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-functions.md b/site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-functions.md
deleted file mode 100644
index d73386caf9b418..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-functions.md
+++ /dev/null
@@ -1,830 +0,0 @@
----
-id: admin-api-functions
-title: Manage Functions
-sidebar_label: "Functions"
-original_id: admin-api-functions
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-> **Important**
->
-> This page only shows **some frequently used operations**.
-> -> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/) -> -> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. -> -> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/). - -**Pulsar Functions** are lightweight compute processes that - -* consume messages from one or more Pulsar topics -* apply a user-supplied processing logic to each message -* publish the results of the computation to another topic - -Functions can be managed via the following methods. - -Method | Description ----|--- -**Admin CLI** | The `functions` command of the [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/) tool. -**REST API** |The `/admin/v3/functions` endpoint of the admin {@inject: rest:REST:/} API. -**Java Admin API**| The `functions` method of the `PulsarAdmin` object in the [Java API](client-libraries-java.md). - -## Function resources - -You can perform the following operations on functions. - -### Create a function - -You can create a Pulsar function in cluster mode (deploy it on a Pulsar cluster) using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`create`](reference-pulsar-admin.md#functions-create) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions create \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --inputs test-input-topic \ - --output persistent://public/default/test-output-topic \ - --classname org.apache.pulsar.functions.api.examples.ExclamationFunction \ - --jar /examples/api-examples.jar - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName|operation/registerFunction?version=@pulsar:version_number@} - - - - -```java - -FunctionConfig functionConfig = new FunctionConfig(); -functionConfig.setTenant(tenant); -functionConfig.setNamespace(namespace); -functionConfig.setName(functionName); -functionConfig.setRuntime(FunctionConfig.Runtime.JAVA); -functionConfig.setParallelism(1); -functionConfig.setClassName("org.apache.pulsar.functions.api.examples.ExclamationFunction"); -functionConfig.setProcessingGuarantees(FunctionConfig.ProcessingGuarantees.ATLEAST_ONCE); -functionConfig.setTopicsPattern(sourceTopicPattern); -functionConfig.setSubName(subscriptionName); -functionConfig.setAutoAck(true); -functionConfig.setOutput(sinkTopic); -admin.functions().createFunction(functionConfig, fileName); - -``` - - - - -```` - -### Update a function - -You can update a Pulsar function that has been deployed to a Pulsar cluster using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`update`](reference-pulsar-admin.md#functions-update) subcommand. 
- -**Example** - -```shell - -$ pulsar-admin functions update \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --output persistent://public/default/update-output-topic \ - # other options - -``` - - - - -{@inject: endpoint|PUT|/admin/v3/functions/:tenant/:namespace/:functionName|operation/updateFunction?version=@pulsar:version_number@} - - - - -```java - -FunctionConfig functionConfig = new FunctionConfig(); -functionConfig.setTenant(tenant); -functionConfig.setNamespace(namespace); -functionConfig.setName(functionName); -functionConfig.setRuntime(FunctionConfig.Runtime.JAVA); -functionConfig.setParallelism(1); -functionConfig.setClassName("org.apache.pulsar.functions.api.examples.ExclamationFunction"); -UpdateOptions updateOptions = new UpdateOptions(); -updateOptions.setUpdateAuthData(updateAuthData); -admin.functions().updateFunction(functionConfig, userCodeFile, updateOptions); - -``` - - - - -```` - -### Start an instance of a function - -You can start a stopped function instance with `instance-id` using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`start`](reference-pulsar-admin.md#functions-start) subcommand. - -```shell - -$ pulsar-admin functions start \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/start|operation/startFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().startFunction(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Start all instances of a function - -You can start all stopped function instances using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`start`](reference-pulsar-admin.md#functions-start) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions start \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/start|operation/startFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().startFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### Stop an instance of a function - -You can stop a function instance with `instance-id` using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`stop`](reference-pulsar-admin.md#functions-stop) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions stop \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/stop|operation/stopFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().stopFunction(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Stop all instances of a function - -You can stop all function instances using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`stop`](reference-pulsar-admin.md#functions-stop) subcommand. 
- -**Example** - -```shell - -$ pulsar-admin functions stop \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/stop|operation/stopFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().stopFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### Restart an instance of a function - -Restart a function instance with `instance-id` using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`restart`](reference-pulsar-admin.md#functions-restart) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions restart \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/restart|operation/restartFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().restartFunction(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Restart all instances of a function - -You can restart all function instances using Admin CLI, REST API or Java admin API. - -````mdx-code-block - - - -Use the [`restart`](reference-pulsar-admin.md#functions-restart) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions restart \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/restart|operation/restartFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().restartFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### List all functions - -You can list all Pulsar functions running under a specific tenant and namespace using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`list`](reference-pulsar-admin.md#functions-list) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions list \ - --tenant public \ - --namespace default - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace|operation/listFunctions?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctions(tenant, namespace); - -``` - - - - -```` - -### Delete a function - -You can delete a Pulsar function that is running on a Pulsar cluster using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`delete`](reference-pulsar-admin.md#functions-delete) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions delete \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) - -``` - - - - -{@inject: endpoint|DELETE|/admin/v3/functions/:tenant/:namespace/:functionName|operation/deregisterFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().deleteFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### Get info about a function - -You can get information about a Pulsar function currently running in cluster mode using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`get`](reference-pulsar-admin.md#functions-get) subcommand. 
- -**Example** - -```shell - -$ pulsar-admin functions get \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName|operation/getFunctionInfo?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### Get status of an instance of a function - -You can get the current status of a Pulsar function instance with `instance-id` using Admin CLI, REST API or Java Admin API. -````mdx-code-block - - - -Use the [`status`](reference-pulsar-admin.md#functions-status) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions status \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/status|operation/getFunctionInstanceStatus?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionStatus(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Get status of all instances of a function - -You can get the current status of a Pulsar function instance using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`status`](reference-pulsar-admin.md#functions-status) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions status \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/status|operation/getFunctionStatus?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionStatus(tenant, namespace, functionName); - -``` - - - - -```` - -### Get stats of an instance of a function - -You can get the current stats of a Pulsar Function instance with `instance-id` using Admin CLI, REST API or Java admin API. -````mdx-code-block - - - -Use the [`stats`](reference-pulsar-admin.md#functions-stats) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions stats \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/stats|operation/getFunctionInstanceStats?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionStats(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Get stats of all instances of a function - -You can get the current stats of a Pulsar function using Admin CLI, REST API or Java admin API. - -````mdx-code-block - - - -Use the [`stats`](reference-pulsar-admin.md#functions-stats) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions stats \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/stats|operation/getFunctionStats?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionStats(tenant, namespace, functionName); - -``` - - - - -```` - -### Trigger a function - -You can trigger a specified Pulsar function with a supplied value using Admin CLI, REST API or Java admin API. - -````mdx-code-block - - - -Use the [`trigger`](reference-pulsar-admin.md#functions-trigger) subcommand. 
-
-**Example**
-
-```shell
-
-$ pulsar-admin functions trigger \
-  --tenant public \
-  --namespace default \
-  --name (the name of Pulsar Functions) \
-  --topic (the name of input topic) \
-  --trigger-value \"hello pulsar\"
-  # or --trigger-file (the path of trigger file)
-
-```
-
-
-
-
-{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/trigger|operation/triggerFunction?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.functions().triggerFunction(tenant, namespace, functionName, topic, triggerValue, triggerFile);
-
-```
-
-
-
-
-````
-
-### Put state associated with a function
-
-You can put the state associated with a Pulsar function using Admin CLI, REST API or Java admin API.
-
-````mdx-code-block
-
-
-Use the [`putstate`](reference-pulsar-admin.md#functions-putstate) subcommand.
-
-**Example**
-
-```shell
-
-$ pulsar-admin functions putstate \
-  --tenant public \
-  --namespace default \
-  --name (the name of Pulsar Functions) \
-  --state "{\"key\":\"pulsar\", \"stringValue\":\"hello pulsar\"}"
-
-```
-
-
-
-
-{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/state/:key|operation/putFunctionState?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-TypeReference<FunctionState> typeRef = new TypeReference<FunctionState>() {};
-FunctionState stateRepr = ObjectMapperFactory.getThreadLocal().readValue(state, typeRef);
-admin.functions().putFunctionState(tenant, namespace, functionName, stateRepr);
-
-```
-
-
-
-
-````
-
-### Fetch state associated with a function
-
-You can fetch the current state associated with a Pulsar function using Admin CLI, REST API or Java admin API.
-
-````mdx-code-block
-
-
-Use the [`querystate`](reference-pulsar-admin.md#functions-querystate) subcommand.
-
-**Example**
-
-```shell
-
-$ pulsar-admin functions querystate \
-  --tenant public \
-  --namespace default \
-  --name (the name of Pulsar Functions) \
-  --key (the key of state)
-
-```
-
-
-
-
-{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/state/:key|operation/getFunctionState?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.functions().getFunctionState(tenant, namespace, functionName, key);
-
-```
-
-
-
-
-````
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-namespaces.md b/site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-namespaces.md
deleted file mode 100644
index fa6d9efe251ab0..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-namespaces.md
+++ /dev/null
@@ -1,1267 +0,0 @@
----
-id: admin-api-namespaces
-title: Managing Namespaces
-sidebar_label: "Namespaces"
-original_id: admin-api-namespaces
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-> **Important**
->
-> This page only shows **some frequently used operations**.
->
-> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/).
->
-> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc.
->
-> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/).
- -Pulsar [namespaces](reference-terminology.md#namespace) are logical groupings of [topics](reference-terminology.md#topic). - -Namespaces can be managed via: - -* The `namespaces` command of the [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/) tool -* The `/admin/v2/namespaces` endpoint of the admin {@inject: rest:REST:/} API -* The `namespaces` method of the `PulsarAdmin` object in the [Java API](client-libraries-java.md) - -## Namespaces resources - -### Create namespaces - -You can create new namespaces under a given [tenant](reference-terminology.md#tenant). - -````mdx-code-block - - - -Use the [`create`](reference-pulsar-admin.md#namespaces-create) subcommand and specify the namespace by name: - -```shell - -$ pulsar-admin namespaces create test-tenant/test-namespace - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace|operation/createNamespace?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().createNamespace(namespace); - -``` - - - - -```` - -### Get policies - -You can fetch the current policies associated with a namespace at any time. - -````mdx-code-block - - - -Use the [`policies`](reference-pulsar-admin.md#namespaces-policies) subcommand and specify the namespace: - -```shell - -$ pulsar-admin namespaces policies test-tenant/test-namespace -{ - "auth_policies": { - "namespace_auth": {}, - "destination_auth": {} - }, - "replication_clusters": [], - "bundles_activated": true, - "bundles": { - "boundaries": [ - "0x00000000", - "0xffffffff" - ], - "numBundles": 1 - }, - "backlog_quota_map": {}, - "persistence": null, - "latency_stats_sample_rate": {}, - "message_ttl_in_seconds": 0, - "retention_policies": null, - "deleted": false -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace|operation/getPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getPolicies(namespace); - -``` - - - - -```` - -### List namespaces - -You can list all namespaces within a given Pulsar [tenant](reference-terminology.md#tenant). - -````mdx-code-block - - - -Use the [`list`](reference-pulsar-admin.md#namespaces-list) subcommand and specify the tenant: - -```shell - -$ pulsar-admin namespaces list test-tenant -test-tenant/ns1 -test-tenant/ns2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant|operation/getTenantNamespaces?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getNamespaces(tenant); - -``` - - - - -```` - -### Delete namespaces - -You can delete existing namespaces from a tenant. - -````mdx-code-block - - - -Use the [`delete`](reference-pulsar-admin.md#namespaces-delete) subcommand and specify the namespace: - -```shell - -$ pulsar-admin namespaces delete test-tenant/ns1 - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace|operation/deleteNamespace?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().deleteNamespace(namespace); - -``` - - - - -```` - -### Configure replication clusters - -#### Set replication cluster - -You can set replication clusters for a namespace to enable Pulsar to internally replicate the published messages from one colocation facility to another. 
-
-````mdx-code-block
-
-
-```
-
-$ pulsar-admin namespaces set-clusters test-tenant/ns1 \
-  --clusters cl1
-
-```
-
-
-
-
-{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/replication|operation/setNamespaceReplicationClusters?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.namespaces().setNamespaceReplicationClusters(namespace, clusters);
-
-```
-
-
-
-
-````
-
-#### Get replication cluster
-
-You can get the list of replication clusters for a given namespace.
-
-````mdx-code-block
-
-
-```
-
-$ pulsar-admin namespaces get-clusters test-tenant/cl1/ns1
-
-```
-
-```
-
-cl2
-
-```
-
-
-
-
-{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/replication|operation/getNamespaceReplicationClusters?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.namespaces().getNamespaceReplicationClusters(namespace)
-
-```
-
-
-
-
-````
-
-### Configure backlog quota policies
-
-#### Set backlog quota policies
-
-Backlog quota helps the broker restrict the bandwidth/storage of a namespace once it reaches a certain threshold. The admin can set the limit and the corresponding action to take after the limit is reached:
-
-  1. producer_request_hold: the broker holds but does not persist the produce request payload
-
-  2. producer_exception: the broker disconnects from the client by throwing an exception
-
-  3. consumer_backlog_eviction: the broker starts discarding backlog messages
-
-Backlog quota restriction is applied by setting the backlog-quota-type to `destination_storage`.
-
-````mdx-code-block
-
-
-```
-
-$ pulsar-admin namespaces set-backlog-quota --limit 10G --limitTime 36000 --policy producer_request_hold test-tenant/ns1
-
-```
-
-
-
-
-{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/setBacklogQuota?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.namespaces().setBacklogQuota(namespace, new BacklogQuota(limit, limitTime, policy))
-
-```
-
-
-
-
-````
-
-#### Get backlog quota policies
-
-You can get a configured backlog quota for a given namespace.
-
-````mdx-code-block
-
-
-```
-
-$ pulsar-admin namespaces get-backlog-quotas test-tenant/ns1
-
-```
-
-```json
-
-{
-  "destination_storage": {
-    "limit": 10,
-    "policy": "producer_request_hold"
-  }
-}
-
-```
-
-
-
-
-{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/backlogQuotaMap|operation/getBacklogQuotaMap?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.namespaces().getBacklogQuotaMap(namespace);
-
-```
-
-
-
-
-````
-
-#### Remove backlog quota policies
-
-You can remove backlog quota policies for a given namespace.
-
-````mdx-code-block
-
-
-```
-
-$ pulsar-admin namespaces remove-backlog-quota test-tenant/ns1
-
-```
-
-
-
-
-{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/removeBacklogQuota?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.namespaces().removeBacklogQuota(namespace, backlogQuotaType)
-
-```
-
-
-
-
-````
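-
-To make the backlog-quota calls above concrete end to end, here is a self-contained, hedged sketch that connects an admin client and applies a quota. The admin URL and namespace are placeholders, and the `BacklogQuota` constructor mirrors the form used on this page; newer client versions may expose a `BacklogQuota.builder()` instead.
-
-```java
-
-import org.apache.pulsar.client.admin.PulsarAdmin;
-import org.apache.pulsar.common.policies.data.BacklogQuota;
-
-PulsarAdmin admin = PulsarAdmin.builder()
-        .serviceHttpUrl("http://localhost:8080") // assumed admin endpoint
-        .build();
-
-// 10 GiB size limit, 36000 s time limit, hold produce requests at the limit.
-admin.namespaces().setBacklogQuota("test-tenant/ns1",
-        new BacklogQuota(10L * 1024 * 1024 * 1024, 36000,
-                BacklogQuota.RetentionPolicy.producer_request_hold));
-
-System.out.println(admin.namespaces().getBacklogQuotaMap("test-tenant/ns1"));
-admin.close();
-
-```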
-
-### Configure persistence policies
-
-#### Set persistence policies
-
-Persistence policies allow users to configure the persistence level for all topic messages under a given namespace.
-
-  - Bookkeeper-ack-quorum: Number of acks (guaranteed copies) to wait for each entry, default: 0
-
-  - Bookkeeper-ensemble: Number of bookies to use for a topic, default: 0
-
-  - Bookkeeper-write-quorum: How many writes to make of each entry, default: 0
-
-  - Ml-mark-delete-max-rate: Throttling rate of the mark-delete operation (0 means no throttle), default: 0.0
-
-````mdx-code-block
-
-
-```
-
-$ pulsar-admin namespaces set-persistence --bookkeeper-ack-quorum 2 --bookkeeper-ensemble 3 --bookkeeper-write-quorum 2 --ml-mark-delete-max-rate 0 test-tenant/ns1
-
-```
-
-
-
-
-{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/setPersistence?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.namespaces().setPersistence(namespace, new PersistencePolicies(bookkeeperEnsemble, bookkeeperWriteQuorum, bookkeeperAckQuorum, managedLedgerMaxMarkDeleteRate))
-
-```
-
-
-
-
-````
-
-#### Get persistence policies
-
-You can get the configured persistence policies of a given namespace.
-
-````mdx-code-block
-
-
-```
-
-$ pulsar-admin namespaces get-persistence test-tenant/ns1
-
-```
-
-```json
-
-{
-  "bookkeeperEnsemble": 3,
-  "bookkeeperWriteQuorum": 2,
-  "bookkeeperAckQuorum": 2,
-  "managedLedgerMaxMarkDeleteRate": 0
-}
-
-```
-
-
-
-
-{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/getPersistence?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.namespaces().getPersistence(namespace)
-
-```
-
-
-
-
-````
-
-### Configure namespace bundles
-
-#### Unload namespace bundles
-
-The namespace bundle is a virtual group of topics which belong to the same namespace. If the broker gets overloaded with the number of bundles, this command can help unload a bundle from that broker, so it can be served by another, less-loaded broker. The namespace bundle ID ranges from 0x00000000 to 0xffffffff.
-
-````mdx-code-block
-
-
-```
-
-$ pulsar-admin namespaces unload --bundle 0x00000000_0xffffffff test-tenant/ns1
-
-```
-
-
-
-
-{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace/:bundle/unload|operation/unloadNamespaceBundle?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.namespaces().unloadNamespaceBundle(namespace, bundle)
-
-```
-
-
-
-
-````
-
-#### Split namespace bundles
-
-One namespace bundle can contain multiple topics but can be served by only one broker. If a single bundle is creating an excessive load on a broker, an admin can split the bundle using the command below, permitting one or more of the new bundles to be unloaded, thus balancing the load across the brokers.
-
-````mdx-code-block
-
-
-```
-
-$ pulsar-admin namespaces split-bundle --bundle 0x00000000_0xffffffff test-tenant/ns1
-
-```
-
-
-
-
-{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace/:bundle/split|operation/splitNamespaceBundle?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.namespaces().splitNamespaceBundle(namespace, bundle)
-
-```
-
-
-
-
-````
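-
-A short, hedged Java sketch ties the two bundle operations together. The admin URL, namespace, and bundle range are placeholders, and the calls mirror the Java tabs above.
-
-```java
-
-import org.apache.pulsar.client.admin.PulsarAdmin;
-
-PulsarAdmin admin = PulsarAdmin.builder()
-        .serviceHttpUrl("http://localhost:8080") // assumed admin endpoint
-        .build();
-
-String namespace = "test-tenant/ns1";
-String bundle = "0x00000000_0xffffffff";
-
-// Split an oversized bundle into smaller ones (mirrors split-bundle above).
-admin.namespaces().splitNamespaceBundle(namespace, bundle);
-
-// Alternatively, unload a bundle so another, less-loaded broker picks it up.
-// Note: unload targets a bundle range that currently exists, so after a
-// split, list the new ranges before unloading one of them.
-// admin.namespaces().unloadNamespaceBundle(namespace, bundle);
-
-admin.close();
-
-```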
-
-### Configure message TTL
-
-#### Set message-ttl
-
-You can configure the time to live (in seconds) duration for messages. In the example below, the message-ttl is set to 100s.
-
-````mdx-code-block
-
-
-```
-
-$ pulsar-admin namespaces set-message-ttl --messageTTL 100 test-tenant/ns1
-
-```
-
-
-
-
-{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/setNamespaceMessageTTL?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.namespaces().setNamespaceMessageTTL(namespace, messageTTL)
-
-```
-
-
-
-
-````
-
-#### Get message-ttl
-
-When the message-ttl for a namespace is set, you can use the command below to get the configured value. This example continues the example of the command `set message-ttl`, so the returned value is 100 (seconds).
-
-````mdx-code-block
-
-
-```
-
-$ pulsar-admin namespaces get-message-ttl test-tenant/ns1
-
-```
-
-```
-
-100
-
-```
-
-
-
-
-{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/getNamespaceMessageTTL?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.namespaces().getNamespaceMessageTTL(namespace)
-
-```
-
-```
-
-100
-
-```
-
-
-
-
-````
-
-#### Remove message-ttl
-
-Remove the message TTL of the configured namespace.
-
-````mdx-code-block
-
-
-```
-
-$ pulsar-admin namespaces remove-message-ttl test-tenant/ns1
-
-```
-
-
-
-
-{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/removeNamespaceMessageTTL?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.namespaces().removeNamespaceMessageTTL(namespace)
-
-```
-
-
-
-
-````
-
-
-### Clear backlog
-
-#### Clear namespace backlog
-
-It clears all message backlog for all the topics that belong to a specific namespace. You can also clear the backlog for a specific subscription.
-
-````mdx-code-block
-
-
-```
-
-$ pulsar-admin namespaces clear-backlog --sub my-subscription test-tenant/ns1
-
-```
-
-
-
-
-{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/clearBacklog|operation/clearNamespaceBacklogForSubscription?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.namespaces().clearNamespaceBacklogForSubscription(namespace, subscription)
-
-```
-
-
-
-
-````
-
-#### Clear bundle backlog
-
-It clears all message backlog for all the topics that belong to a specific NamespaceBundle. You can also clear the backlog for a specific subscription.
-
-````mdx-code-block
-
-
-```
-
-$ pulsar-admin namespaces clear-backlog --bundle 0x00000000_0xffffffff --sub my-subscription test-tenant/ns1
-
-```
-
-
-
-
-{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/:bundle/clearBacklog|operation/clearNamespaceBundleBacklogForSubscription?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.namespaces().clearNamespaceBundleBacklogForSubscription(namespace, bundle, subscription)
-
-```
-
-
-
-
-````
-
-### Configure retention
-
-#### Set retention
-
-Each namespace contains multiple topics, and the retention size (storage size) of each topic should not exceed a specific threshold, or it should be stored for a certain period. This command helps configure the retention size and time of topics in a given namespace.
-
-````mdx-code-block
-
-
-```
-
-$ pulsar-admin namespaces set-retention --size 100 --time 10 test-tenant/ns1
-
-```
-
-
-
-
-{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/retention|operation/setRetention?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.namespaces().setRetention(namespace, new RetentionPolicies(retentionTimeInMin, retentionSizeInMB))
-
-```
-
-
-
-
-````
-
-#### Get retention
-
-It shows retention information of a given namespace.
-
-````mdx-code-block
-
-
-```
-
-$ pulsar-admin namespaces get-retention test-tenant/ns1
-
-```
-
-```json
-
-{
-  "retentionTimeInMinutes": 10,
-  "retentionSizeInMB": 100
-}
-
-```
-
-
-
-
-{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/retention|operation/getRetention?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.namespaces().getRetention(namespace)
-
-```
-
-
-
-
-````
-
-### Configure dispatch throttling for topics
-
-#### Set dispatch throttling for topics
-
-It sets the message dispatch rate for all the topics under a given namespace.
-The dispatch rate can be restricted by the number of messages per X seconds (`msg-dispatch-rate`) or by the number of message-bytes per X seconds (`byte-dispatch-rate`).
-The dispatch rate period is in seconds and can be configured with `dispatch-rate-period`. The default value of `msg-dispatch-rate` and `byte-dispatch-rate` is -1, which disables the throttling.
-
-:::note
-
-- If neither `clusterDispatchRate` nor `topicDispatchRate` is configured, dispatch throttling is disabled.
-- If `topicDispatchRate` is not configured, `clusterDispatchRate` takes effect.
-- If `topicDispatchRate` is configured, `topicDispatchRate` takes effect.
-
-:::
-
-````mdx-code-block
-
-
-```
-
-$ pulsar-admin namespaces set-dispatch-rate test-tenant/ns1 \
-  --msg-dispatch-rate 1000 \
-  --byte-dispatch-rate 1048576 \
-  --dispatch-rate-period 1
-
-```
-
-
-
-
-{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/dispatchRate|operation/setDispatchRate?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.namespaces().setDispatchRate(namespace, new DispatchRate(1000, 1048576, 1))
-
-```
-
-
-
-
-````
-
-#### Get configured message-rate for topics
-
-It shows the configured message rate for the namespace (topics under this namespace can dispatch this many messages per second).
-
-````mdx-code-block
-
-
-```
-
-$ pulsar-admin namespaces get-dispatch-rate test-tenant/ns1
-
-```
-
-```json
-
-{
-  "dispatchThrottlingRatePerTopicInMsg" : 1000,
-  "dispatchThrottlingRatePerTopicInByte" : 1048576,
-  "ratePeriodInSecond" : 1
-}
-
-```
-
-
-
-
-{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/dispatchRate|operation/getDispatchRate?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.namespaces().getDispatchRate(namespace)
-
-```
-
-
-
-
-````
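-
-For reference, the same topic-level throttle can be applied and read back from Java in one pass. This is a hedged sketch with placeholder names, mirroring the `DispatchRate` constructor used in the tabs above.
-
-```java
-
-import org.apache.pulsar.client.admin.PulsarAdmin;
-import org.apache.pulsar.common.policies.data.DispatchRate;
-
-PulsarAdmin admin = PulsarAdmin.builder()
-        .serviceHttpUrl("http://localhost:8080") // assumed admin endpoint
-        .build();
-
-// 1000 msg/s and 1 MiB/s per topic, evaluated over a 1-second period.
-admin.namespaces().setDispatchRate("test-tenant/ns1", new DispatchRate(1000, 1048576, 1));
-
-System.out.println(admin.namespaces().getDispatchRate("test-tenant/ns1"));
-admin.close();
-
-```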
-
-````mdx-code-block
-
-
-```
-
-$ pulsar-admin namespaces set-subscription-dispatch-rate test-tenant/ns1 \
-  --msg-dispatch-rate 1000 \
-  --byte-dispatch-rate 1048576 \
-  --dispatch-rate-period 1
-
-```
-
-
-
-
-{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/subscriptionDispatchRate|operation/setDispatchRate?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.namespaces().setSubscriptionDispatchRate(namespace, new DispatchRate(1000, 1048576, 1))
-
-```
-
-
-
-
-````
-
-#### Get configured message-rate for subscription
-
-This command shows the configured message rate for the namespace (subscriptions under this namespace can dispatch at most this many messages per second).
-
-````mdx-code-block
-
-
-```
-
-$ pulsar-admin namespaces get-subscription-dispatch-rate test-tenant/ns1
-
-```
-
-```json
-
-{
-  "dispatchThrottlingRatePerTopicInMsg" : 1000,
-  "dispatchThrottlingRatePerTopicInByte" : 1048576,
-  "ratePeriodInSecond" : 1
-}
-
-```
-
-
-
-
-{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/subscriptionDispatchRate|operation/getDispatchRate?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.namespaces().getSubscriptionDispatchRate(namespace)
-
-```
-
-
-
-
-````
-
-### Configure dispatch throttling for replicator
-
-#### Set dispatch throttling for replicator
-
-This command sets the message dispatch rate for all the replicators between replication clusters under a given namespace.
-The dispatch rate can be restricted by the number of messages per period (`msg-dispatch-rate`) or by the number of message bytes per period (`byte-dispatch-rate`).
-The period is expressed in seconds and is configured with `dispatch-rate-period`. The default value of both `msg-dispatch-rate` and `byte-dispatch-rate` is -1, which
-disables the throttling.
-
-````mdx-code-block
-
-
-```
-
-$ pulsar-admin namespaces set-replicator-dispatch-rate test-tenant/ns1 \
-  --msg-dispatch-rate 1000 \
-  --byte-dispatch-rate 1048576 \
-  --dispatch-rate-period 1
-
-```
-
-
-
-
-{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/replicatorDispatchRate|operation/setDispatchRate?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.namespaces().setReplicatorDispatchRate(namespace, new DispatchRate(1000, 1048576, 1))
-
-```
-
-
-
-
-````
-
-#### Get configured message-rate for replicator
-
-This command shows the configured message rate for the namespace (replicators under this namespace can dispatch at most this many messages per second).
-
-````mdx-code-block
-
-
-```
-
-$ pulsar-admin namespaces get-replicator-dispatch-rate test-tenant/ns1
-
-```
-
-```json
-
-{
-  "dispatchThrottlingRatePerTopicInMsg" : 1000,
-  "dispatchThrottlingRatePerTopicInByte" : 1048576,
-  "ratePeriodInSecond" : 1
-}
-
-```
-
-
-
-
-{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/replicatorDispatchRate|operation/getDispatchRate?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.namespaces().getReplicatorDispatchRate(namespace)
-
-```
-
-
-
-
-````
-
-### Configure deduplication snapshot interval
-
-#### Get deduplication snapshot interval
-
-This command shows the configured `deduplicationSnapshotInterval` for a namespace (each topic under the namespace takes a deduplication snapshot according to this interval).
-
-````mdx-code-block
-
-
-```
-
-$ pulsar-admin namespaces get-deduplication-snapshot-interval test-tenant/ns1
-
-```
-
-
-
-
-{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/deduplicationSnapshotInterval|operation/getDeduplicationSnapshotInterval?version=@pulsar:version_number@}
-
-
-
-
-```java
-
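-// Returns the configured snapshot interval for this namespace, in seconds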
-admin.namespaces().getDeduplicationSnapshotInterval(namespace) - -``` - - - - -```` - -#### Set deduplication snapshot interval - -Set configured `deduplicationSnapshotInterval` for a namespace. Each topic under the namespace will take a deduplication snapshot according to this interval. -`brokerDeduplicationEnabled` must be set to `true` for this property to take effect. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-deduplication-snapshot-interval test-tenant/ns1 --interval 1000 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/deduplicationSnapshotInterval|operation/setDeduplicationSnapshotInterval?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setDeduplicationSnapshotInterval(namespace, 1000) - -``` - - - - -```` - -#### Remove deduplication snapshot interval - -Remove configured `deduplicationSnapshotInterval` of a namespace (Each topic under the namespace will take a deduplication snapshot according to this interval) - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces remove-deduplication-snapshot-interval test-tenant/ns1 - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/deduplicationSnapshotInterval|operation/deleteDeduplicationSnapshotInterval?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().removeDeduplicationSnapshotInterval(namespace) - -``` - - - - -```` - -### Namespace isolation - -You can use the [Pulsar isolation policy](administration-isolation.md) to allocate resources (broker and bookie) for a namespace. - -### Unload namespaces from a broker - -You can unload a namespace, or a [namespace bundle](reference-terminology.md#namespace-bundle), from the Pulsar [broker](reference-terminology.md#broker) that is currently responsible for it. - -#### pulsar-admin - -Use the [`unload`](reference-pulsar-admin.md#unload) subcommand of the [`namespaces`](reference-pulsar-admin.md#namespaces) command. - -````mdx-code-block - - - -```shell - -$ pulsar-admin namespaces unload my-tenant/my-ns - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace/unload|operation/unloadNamespace?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().unload(namespace) - -``` - - - - -```` diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-non-partitioned-topics.md b/site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-non-partitioned-topics.md deleted file mode 100644 index e6347bb8c363a1..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-non-partitioned-topics.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -id: admin-api-non-partitioned-topics -title: Managing non-partitioned topics -sidebar_label: "Non-partitioned topics" -original_id: admin-api-non-partitioned-topics ---- - -For details of the content, refer to [manage topics](admin-api-topics.md). 
\ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-non-persistent-topics.md b/site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-non-persistent-topics.md deleted file mode 100644 index 3126a6494c7153..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-non-persistent-topics.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -id: admin-api-non-persistent-topics -title: Managing non-persistent topics -sidebar_label: "Non-Persistent topics" -original_id: admin-api-non-persistent-topics ---- - -For details of the content, refer to [manage topics](admin-api-topics.md). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-overview.md b/site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-overview.md deleted file mode 100644 index 1154c625aff7b5..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-overview.md +++ /dev/null @@ -1,144 +0,0 @@ ---- -id: admin-api-overview -title: Pulsar admin interface -sidebar_label: "Overview" -original_id: admin-api-overview ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -The Pulsar admin interface enables you to manage all important entities in a Pulsar instance, such as tenants, topics, and namespaces. - -You can interact with the admin interface via: - -- The `pulsar-admin` CLI tool, which is available in the `bin` folder of your Pulsar installation: - - ```shell - - bin/pulsar-admin - - ``` - - > **Important** - > - > For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/). - -- HTTP calls, which are made against the admin {@inject: rest:REST:/} API provided by Pulsar brokers. For some RESTful APIs, they might be redirected to the owner brokers for serving with [`307 Temporary Redirect`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/307), hence the HTTP callers should handle `307 Temporary Redirect`. If you use `curl` commands, you should specify `-L` to handle redirections. - - > **Important** - > - > For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. - -- A Java client interface. - - > **Important** - > - > For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/). - -> **The REST API is the admin interface**. Both the `pulsar-admin` CLI tool and the Java client use the REST API. If you implement your own admin interface client, you should use the REST API. - -## Admin setup - -Each of the three admin interfaces (the `pulsar-admin` CLI tool, the {@inject: rest:REST:/} API, and the [Java admin API](/api/admin)) requires some special setup if you have enabled authentication in your Pulsar instance. - -````mdx-code-block - - - -If you have enabled authentication, you need to provide an auth configuration to use the `pulsar-admin` tool. By default, the configuration for the `pulsar-admin` tool is in the [`conf/client.conf`](reference-configuration.md#client) file. 
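-For example, a `client.conf` for a TLS-enabled cluster that uses token authentication might look like the following. All values here are illustrative assumptions, not defaults.
-
-```properties
-
-# Illustrative values only; adjust for your cluster
-webServiceUrl=https://broker.example.com:8443/
-brokerServiceUrl=pulsar+ssl://broker.example.com:6651/
-authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken
-authParams=token:eyJhbGciOi...
-useTls=true
-tlsAllowInsecureConnection=false
-tlsTrustCertsFilePath=/path/to/ca.cert.pem
-
-```
-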
-The following are the available parameters:
-
-|Name|Description|Default|
-|----|-----------|-------|
-|webServiceUrl|The web URL for the cluster.|http://localhost:8080/|
-|brokerServiceUrl|The Pulsar protocol URL for the cluster.|pulsar://localhost:6650/|
-|authPlugin|The authentication plugin.| |
-|authParams|The authentication parameters for the cluster, as a comma-separated string.| |
-|useTls|Whether or not TLS authentication is enforced in the cluster.|false|
-|tlsAllowInsecureConnection|Whether to accept an untrusted TLS certificate from the client.|false|
-|tlsTrustCertsFilePath|Path to the trusted TLS certificate file.| |
-
-
-
-
-You can find details for the REST API exposed by Pulsar brokers in this {@inject: rest:document:/}.
-
-
-
-
-To use the Java admin API, instantiate a {@inject: javadoc:PulsarAdmin:/admin/org/apache/pulsar/client/admin/PulsarAdmin} object, and specify a URL for a Pulsar broker and a {@inject: javadoc:PulsarAdminBuilder:/admin/org/apache/pulsar/client/admin/PulsarAdminBuilder}. The following is a minimal example using `localhost`:
-
-```java
-
-String url = "http://localhost:8080";
-// Pass auth-plugin class fully-qualified name if Pulsar-security enabled
-String authPluginClassName = "com.org.MyAuthPluginClass";
-// Pass auth-param if auth-plugin class requires it
-String authParams = "param1=value1";
-boolean useTls = false;
-boolean tlsAllowInsecureConnection = false;
-String tlsTrustCertsFilePath = null;
-PulsarAdmin admin = PulsarAdmin.builder()
-        .authentication(authPluginClassName, authParams)
-        .serviceHttpUrl(url)
-        .tlsTrustCertsFilePath(tlsTrustCertsFilePath)
-        .allowTlsInsecureConnection(tlsAllowInsecureConnection)
-        .build();
-
-```
-
-If you use multiple brokers, you can specify a multi-host service URL, just as you can for the Pulsar service URL. For example:
-
-```java
-
-String url = "http://localhost:8080,localhost:8081,localhost:8082";
-// Pass auth-plugin class fully-qualified name if Pulsar-security enabled
-String authPluginClassName = "com.org.MyAuthPluginClass";
-// Pass auth-param if auth-plugin class requires it
-String authParams = "param1=value1";
-boolean useTls = false;
-boolean tlsAllowInsecureConnection = false;
-String tlsTrustCertsFilePath = null;
-PulsarAdmin admin = PulsarAdmin.builder()
-        .authentication(authPluginClassName, authParams)
-        .serviceHttpUrl(url)
-        .tlsTrustCertsFilePath(tlsTrustCertsFilePath)
-        .allowTlsInsecureConnection(tlsAllowInsecureConnection)
-        .build();
-
-```
-
-
-
-
-````
-
-## How to define Pulsar resource names when running Pulsar in Kubernetes
-If you run Pulsar Functions or connectors on Kubernetes, you need to follow the Kubernetes naming convention when defining the names of your Pulsar resources, whichever admin interface you use.
-
-Kubernetes requires a name that can be used as a DNS subdomain name, as defined in [RFC 1123](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names). Pulsar allows more legal characters than the Kubernetes naming convention does. If you create a Pulsar resource name with special characters that are not supported by Kubernetes (for example, including colons in a Pulsar namespace name), the Kubernetes runtime translates the Pulsar object names into Kubernetes resource labels in RFC 1123-compliant form. Consequently, you can run functions or connectors using the Kubernetes runtime.
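-
-As a rough illustration of what this translation amounts to, consider the hedged Java sketch below. It is an approximation for intuition only, not the runtime's exact implementation; the precise rules follow it.
-
-```java
-
-public class K8sLabelExample {
-    // Approximate translation of a Pulsar object name into an RFC 1123-style label
-    static String toKubernetesLabel(String pulsarName) {
-        // Replace non-alphanumeric characters (which include '_' and '.') with dashes
-        String label = pulsarName.replaceAll("[^a-zA-Z0-9]", "-");
-        // Truncate to 63 characters
-        if (label.length() > 63) {
-            label = label.substring(0, 63);
-        }
-        // Replace beginning and ending non-alphanumeric characters with '0'
-        return label.replaceAll("^[^a-zA-Z0-9]", "0").replaceAll("[^a-zA-Z0-9]$", "0");
-    }
-
-    public static void main(String[] args) {
-        // A namespace name containing a colon becomes a label-safe string
-        System.out.println(toKubernetesLabel("my-tenant/my:namespace")); // my-tenant-my-namespace
-    }
-}
-
-```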
-The rules for translating Pulsar object names into Kubernetes resource labels are as below:
-
-- Truncate to 63 characters
-
-- Replace the following characters with dashes (-):
-
-  - Non-alphanumeric characters
-
-  - Underscores (_)
-
-  - Dots (.)
-
-- Replace beginning and ending non-alphanumeric characters with 0
-
-:::tip
-
-- If you get an error in translating Pulsar object names into Kubernetes resource labels (for example, you may have a naming collision if your Pulsar object name is too long) or want to customize the translation rules, see [customize Kubernetes runtime](https://pulsar.apache.org/docs/en/next/functions-runtime/#customize-kubernetes-runtime).
-- For how to configure Kubernetes runtime, see [here](https://pulsar.apache.org/docs/en/next/functions-runtime/#configure-kubernetes-runtime).
-
-:::
-
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-packages.md b/site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-packages.md
deleted file mode 100644
index 2852fb74a02be3..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-packages.md
+++ /dev/null
@@ -1,391 +0,0 @@
----
-id: admin-api-packages
-title: Manage packages
-sidebar_label: "Packages"
-original_id: admin-api-packages
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-> **Important**
->
-> This page only shows **some frequently used operations**.
->
-> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/)
->
-> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc.
->
-> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/).
-
-Package management enables version management and simplifies the upgrade and rollback processes for Functions, Sinks, and Sources. When you use the same function, sink, or source in different namespaces, you can upload it once to a common package management system.
-
-## Package name
-
-A `package` is identified by five parts: `type`, `tenant`, `namespace`, `package name`, and `version`.
-
-| Part | Description |
-|-------|-------------|
-|`type` |The type of the package. The following types are supported: `function`, `sink` and `source`. |
-| `name`|The fully qualified name of the package: `<tenant>/<namespace>/<package name>`.|
-|`version`|The version of the package.|
-
-The following is a code sample.
-
-```java
-
-class PackageName {
-    private final PackageType type;
-    private final String namespace;
-    private final String tenant;
-    private final String name;
-    private final String version;
-}
-
-enum PackageType {
-    FUNCTION("function"), SINK("sink"), SOURCE("source");
-}
-
-```
-
-## Package URL
-A package is located using a URL. The package URL is written in the following format:

-```shell
-
-<type>://<tenant>/<namespace>/<package name>@<version>
-
-```
-
-The following are package URL examples:
-
-`sink://public/default/mysql-sink@1.0`
-`function://my-tenant/my-ns/my-function@0.1`
-`source://my-tenant/my-ns/mysql-cdc-source@2.3`
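-
-As an illustration of this format, the hedged sketch below splits a package URL into its five parts with a regular expression. It is a hypothetical helper for illustration only, not Pulsar's actual `PackageName` parser.
-
-```java
-
-import java.util.regex.Matcher;
-import java.util.regex.Pattern;
-
-public class PackageUrlExample {
-    // <type>://<tenant>/<namespace>/<package name>@<version>
-    private static final Pattern PACKAGE_URL =
-            Pattern.compile("(function|sink|source)://([^/]+)/([^/]+)/([^@]+)@(.+)");
-
-    public static void main(String[] args) {
-        Matcher m = PACKAGE_URL.matcher("sink://public/default/mysql-sink@1.0");
-        if (m.matches()) {
-            System.out.printf("type=%s tenant=%s namespace=%s name=%s version=%s%n",
-                    m.group(1), m.group(2), m.group(3), m.group(4), m.group(5));
-        }
-    }
-}
-
-```
-
-The package management system stores the data, versions, and metadata of each package. The metadata is shown in the following table.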
-
-| metadata | Description |
-|----------|-------------|
-|description|The description of the package.|
-|contact |The contact information of a package. For example, team email.|
-|create_time| The time when the package is created.|
-|modification_time| The time when the package is modified.|
-|properties |A key/value map that stores your own information.|
-
-## Permissions
-
-The packages are organized by tenant and namespace, so you can apply the tenant and namespace permissions to packages directly.
-
-## Package resources
-You can manage packages with the command-line tool, the REST API, or the Java client.
-
-### Upload a package
-You can upload a package to the package management service in the following ways.
-
-````mdx-code-block
-
-
-```shell
-
-bin/pulsar-admin packages upload functions://public/default/example@v0.1 --path package-file --description package-description
-
-```
-
-
-
-
-{@inject: endpoint|POST|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version|operation/upload?version=@pulsar:version_number@}
-
-
-
-
-Upload a package to the package management service synchronously.
-
-```java
-
-  void upload(PackageMetadata metadata, String packageName, String path) throws PulsarAdminException;
-
-```
-
-Upload a package to the package management service asynchronously.
-
-```java
-
-  CompletableFuture<Void> uploadAsync(PackageMetadata metadata, String packageName, String path);
-
-```
-
-
-
-
-````
-
-### Download a package
-You can download a package from the package management service in the following ways.
-
-````mdx-code-block
-
-
-```shell
-
-bin/pulsar-admin packages download functions://public/default/example@v0.1 --path package-file
-
-```
-
-
-
-
-{@inject: endpoint|GET|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version|operation/download?version=@pulsar:version_number@}
-
-
-
-
-Download a package from the package management service synchronously.
-
-```java
-
-  void download(String packageName, String path) throws PulsarAdminException;
-
-```
-
-Download a package from the package management service asynchronously.
-
-```java
-
-  CompletableFuture<Void> downloadAsync(String packageName, String path);
-
-```
-
-
-
-
-````
-
-### List all versions of a package
-You can get a list of all versions of a package in the following ways.
-````mdx-code-block
-
-
-```shell
-
-bin/pulsar-admin packages list-versions functions://public/default/example
-
-```
-
-
-
-
-{@inject: endpoint|GET|/admin/v3/packages/:type/:tenant/:namespace/:packageName|operation/listPackageVersion?version=@pulsar:version_number@}
-
-
-
-
-List all versions of a package synchronously.
-
-```java
-
-  List<String> listPackageVersions(String packageName) throws PulsarAdminException;
-
-```
-
-List all versions of a package asynchronously.
-
-```java
-
-  CompletableFuture<List<String>> listPackageVersionsAsync(String packageName);
-
-```
-
-
-
-
-````
-
-### List all the specified type packages under a namespace
-You can get a list of all the packages with the given type in a namespace in the following ways.
-````mdx-code-block
-
-
-```shell
-
-bin/pulsar-admin packages list --type function public/default
-
-```
-
-
-
-
-{@inject: endpoint|GET|/admin/v3/packages/:type/:tenant/:namespace|operation/listPackages?version=@pulsar:version_number@}
-
-
-
-
-List all the packages with the given type in a namespace synchronously.
-
-```java
-
-  List<String> listPackages(String type, String namespace) throws PulsarAdminException;
-
-```
-
-List all the packages with the given type in a namespace asynchronously.
-
-```java
-
-  CompletableFuture<List<String>> listPackagesAsync(String type, String namespace);
-
-```
-
-
-
-
-````
-
-### Get the metadata of a package
-You can get the metadata of a package in the following ways.
-
-````mdx-code-block
-
-
-```shell
-
-bin/pulsar-admin packages get-metadata function://public/default/test@v1
-
-```
-
-
-
-
-{@inject: endpoint|GET|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version/metadata|operation/getMeta?version=@pulsar:version_number@}
-
-
-
-
-Get the metadata of a package synchronously.
-
-```java
-
-  PackageMetadata getMetadata(String packageName) throws PulsarAdminException;
-
-```
-
-Get the metadata of a package asynchronously.
-
-```java
-
-  CompletableFuture<PackageMetadata> getMetadataAsync(String packageName);
-
-```
-
-
-
-
-````
-
-### Update the metadata of a package
-You can update the metadata of a package in the following ways.
-````mdx-code-block
-
-
-```shell
-
-bin/pulsar-admin packages update-metadata function://public/default/example@v0.1 --description update-description
-
-```
-
-
-
-
-{@inject: endpoint|PUT|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version/metadata|operation/updateMeta?version=@pulsar:version_number@}
-
-
-
-
-Update the metadata of a package synchronously.
-
-```java
-
-  void updateMetadata(String packageName, PackageMetadata metadata) throws PulsarAdminException;
-
-```
-
-Update the metadata of a package asynchronously.
-
-```java
-
-  CompletableFuture<Void> updateMetadataAsync(String packageName, PackageMetadata metadata);
-
-```
-
-
-
-
-````
-
-### Delete a specified package
-You can delete a specified package with its package name in the following ways.
-
-````mdx-code-block
-
-
-The following command example deletes a package of version v0.1.
-
-```shell
-
-bin/pulsar-admin packages delete functions://public/default/example@v0.1
-
-```
-
-
-
-
-{@inject: endpoint|DELETE|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version|operation/delete?version=@pulsar:version_number@}
-
-
-
-
-Delete a specified package synchronously.
-
-```java
-
-  void delete(String packageName) throws PulsarAdminException;
-
-```
-
-Delete a specified package asynchronously.
-
-```java
-
-  CompletableFuture<Void> deleteAsync(String packageName);
-
-```
-
-
-
-
-````
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-partitioned-topics.md b/site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-partitioned-topics.md
deleted file mode 100644
index 5ce182282e0324..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-partitioned-topics.md
+++ /dev/null
@@ -1,8 +0,0 @@
----
-id: admin-api-partitioned-topics
-title: Managing partitioned topics
-sidebar_label: "Partitioned topics"
-original_id: admin-api-partitioned-topics
----
-
-For details of the content, refer to [manage topics](admin-api-topics.md).
\ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-permissions.md b/site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-permissions.md deleted file mode 100644 index 2496c9be54eb26..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-permissions.md +++ /dev/null @@ -1,184 +0,0 @@ ---- -id: admin-api-permissions -title: Managing permissions -sidebar_label: "Permissions" -original_id: admin-api-permissions ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -> **Important** -> -> This page only shows **some frequently used operations**. -> -> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/) -> -> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. -> -> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/). - -Permissions in Pulsar are managed at the [namespace](reference-terminology.md#namespace) level -(that is, within [tenants](reference-terminology.md#tenant) and [clusters](reference-terminology.md#cluster)). - -## Grant permissions - -You can grant permissions to specific roles for lists of operations such as `produce` and `consume`. - -````mdx-code-block - - - -Use the [`grant-permission`](reference-pulsar-admin.md#grant-permission) subcommand and specify a namespace, actions using the `--actions` flag, and a role using the `--role` flag: - -```shell - -$ pulsar-admin namespaces grant-permission test-tenant/ns1 \ - --actions produce,consume \ - --role admin10 - -``` - -Wildcard authorization can be performed when `authorizationAllowWildcardsMatching` is set to `true` in `broker.conf`. - -e.g. - -```shell - -$ pulsar-admin namespaces grant-permission test-tenant/ns1 \ - --actions produce,consume \ - --role 'my.role.*' - -``` - -Then, roles `my.role.1`, `my.role.2`, `my.role.foo`, `my.role.bar`, etc. can produce and consume. - -```shell - -$ pulsar-admin namespaces grant-permission test-tenant/ns1 \ - --actions produce,consume \ - --role '*.role.my' - -``` - -Then, roles `1.role.my`, `2.role.my`, `foo.role.my`, `bar.role.my`, etc. can produce and consume. - -**Note**: A wildcard matching works at **the beginning or end of the role name only**. - -e.g. - -```shell - -$ pulsar-admin namespaces grant-permission test-tenant/ns1 \ - --actions produce,consume \ - --role 'my.*.role' - -``` - -In this case, only the role `my.*.role` has permissions. -Roles `my.1.role`, `my.2.role`, `my.foo.role`, `my.bar.role`, etc. **cannot** produce and consume. - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/permissions/:role|operation/grantPermissionOnNamespace?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().grantPermissionOnNamespace(namespace, role, getAuthActions(actions)); - -``` - - - - -```` - -## Get permissions - -You can see which permissions have been granted to which roles in a namespace. 
- -````mdx-code-block - - - -Use the [`permissions`](reference-pulsar-admin#permissions) subcommand and specify a namespace: - -```shell - -$ pulsar-admin namespaces permissions test-tenant/ns1 -{ - "admin10": [ - "produce", - "consume" - ] -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/permissions|operation/getPermissions?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getPermissions(namespace); - -``` - - - - -```` - -## Revoke permissions - -You can revoke permissions from specific roles, which means that those roles will no longer have access to the specified namespace. - -````mdx-code-block - - - -Use the [`revoke-permission`](reference-pulsar-admin.md#revoke-permission) subcommand and specify a namespace and a role using the `--role` flag: - -```shell - -$ pulsar-admin namespaces revoke-permission test-tenant/ns1 \ - --role admin10 - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/permissions/:role|operation/revokePermissionsOnNamespace?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().revokePermissionsOnNamespace(namespace, role); - -``` - - - - -```` \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-persistent-topics.md b/site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-persistent-topics.md deleted file mode 100644 index 50d135b72f5424..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-persistent-topics.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -id: admin-api-persistent-topics -title: Managing persistent topics -sidebar_label: "Persistent topics" -original_id: admin-api-persistent-topics ---- - -For details of the content, refer to [manage topics](admin-api-topics.md). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-schemas.md b/site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-schemas.md deleted file mode 100644 index 9ffe21f5b0f750..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-schemas.md +++ /dev/null @@ -1,7 +0,0 @@ ---- -id: admin-api-schemas -title: Managing Schemas -sidebar_label: "Schemas" -original_id: admin-api-schemas ---- - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-tenants.md b/site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-tenants.md deleted file mode 100644 index 3e13e54a68b2cd..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-tenants.md +++ /dev/null @@ -1,238 +0,0 @@ ---- -id: admin-api-tenants -title: Managing Tenants -sidebar_label: "Tenants" -original_id: admin-api-tenants ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -> **Important** -> -> This page only shows **some frequently used operations**. -> -> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/) -> -> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. -> -> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/). 
-
-Tenants, like namespaces, can be managed using the [admin API](admin-api-overview.md). There are currently two configurable aspects of tenants:
-
-* Admin roles
-* Allowed clusters
-
-## Tenant resources
-
-### List
-
-You can list all of the tenants associated with an [instance](reference-terminology.md#instance).
-
-````mdx-code-block
-
-
-Use the [`list`](reference-pulsar-admin.md#tenants-list) subcommand.
-
-```shell
-
-$ pulsar-admin tenants list
-my-tenant-1
-my-tenant-2
-
-```
-
-
-
-
-{@inject: endpoint|GET|/admin/v2/tenants|operation/getTenants?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.tenants().getTenants();
-
-```
-
-
-
-
-````
-
-### Create
-
-You can create a new tenant.
-
-````mdx-code-block
-
-
-Use the [`create`](reference-pulsar-admin.md#tenants-create) subcommand:
-
-```shell
-
-$ pulsar-admin tenants create my-tenant
-
-```
-
-When creating a tenant, you can assign admin roles using the `-r`/`--admin-roles` flag. You can specify multiple roles as a comma-separated list. Here are some examples:
-
-```shell
-
-$ pulsar-admin tenants create my-tenant \
-  --admin-roles role1,role2,role3
-
-$ pulsar-admin tenants create my-tenant \
-  -r role1
-
-```
-
-
-
-
-{@inject: endpoint|POST|/admin/v2/tenants/:tenant|operation/createTenant?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.tenants().createTenant(tenantName, tenantInfo);
-
-```
-
-
-
-
-````
-
-### Get configuration
-
-You can fetch the [configuration](reference-configuration.md) for an existing tenant at any time.
-
-````mdx-code-block
-
-
-Use the [`get`](reference-pulsar-admin.md#tenants-get) subcommand and specify the name of the tenant. Here's an example:
-
-```shell
-
-$ pulsar-admin tenants get my-tenant
-{
-  "adminRoles": [
-    "admin1",
-    "admin2"
-  ],
-  "allowedClusters": [
-    "cl1",
-    "cl2"
-  ]
-}
-
-```
-
-
-
-
-{@inject: endpoint|GET|/admin/v2/tenants/:tenant|operation/getTenant?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.tenants().getTenantInfo(tenantName);
-
-```
-
-
-
-
-````
-
-### Delete
-
-Tenants can be deleted from a Pulsar [instance](reference-terminology.md#instance).
-
-````mdx-code-block
-
-
-Use the [`delete`](reference-pulsar-admin.md#tenants-delete) subcommand and specify the name of the tenant.
-
-```shell
-
-$ pulsar-admin tenants delete my-tenant
-
-```
-
-
-
-
-{@inject: endpoint|DELETE|/admin/v2/tenants/:tenant|operation/deleteTenant?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.tenants().deleteTenant(tenantName);
-
-```
-
-
-
-
-````
-
-### Update
-
-You can update a tenant's configuration.
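-
-The `tenantInfo` argument used by `createTenant` and `updateTenant` can be built as shown below. This is a hedged sketch: it assumes an existing `PulsarAdmin` handle named `admin` and the `TenantInfo` builder available in recent client versions.
-
-```java
-
-import java.util.Set;
-import org.apache.pulsar.common.policies.data.TenantInfo;
-
-// Grant two admin roles and allow the tenant on two clusters
-TenantInfo tenantInfo = TenantInfo.builder()
-        .adminRoles(Set.of("role1", "role2"))
-        .allowedClusters(Set.of("cl1", "cl2"))
-        .build();
-admin.tenants().updateTenant("my-tenant", tenantInfo);
-
-```
-
-````mdx-code-block
-
-
-Use the [`update`](reference-pulsar-admin.md#tenants-update) subcommand.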
- -```shell - -$ pulsar-admin tenants update my-tenant - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/tenants/:cluster|operation/updateTenant?version=@pulsar:version_number@} - - - - -```java - -admin.tenants().updateTenant(tenantName, tenantInfo); - -``` - - - - -```` diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-topics.md b/site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-topics.md deleted file mode 100644 index 8a4d940bc4a0c8..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/admin-api-topics.md +++ /dev/null @@ -1,2334 +0,0 @@ ---- -id: admin-api-topics -title: Manage topics -sidebar_label: "Topics" -original_id: admin-api-topics ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -> **Important** -> -> This page only shows **some frequently used operations**. -> -> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/) -> -> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. -> -> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/). - -Pulsar has persistent and non-persistent topics. Persistent topic is a logical endpoint for publishing and consuming messages. The topic name structure for persistent topics is: - -```shell - -persistent://tenant/namespace/topic - -``` - -Non-persistent topics are used in applications that only consume real-time published messages and do not need persistent guarantee. In this way, it reduces message-publish latency by removing overhead of persisting messages. The topic name structure for non-persistent topics is: - -```shell - -non-persistent://tenant/namespace/topic - -``` - -## Manage topic resources -Whether it is persistent or non-persistent topic, you can obtain the topic resources through `pulsar-admin` tool, REST API and Java. - -:::note - -In REST API, `:schema` stands for persistent or non-persistent. `:tenant`, `:namespace`, `:x` are variables, replace them with the real tenant, namespace, and `x` names when using them. -Take {@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getList?version=@pulsar:version_number@} as an example, to get the list of persistent topics in REST API, use `https://pulsar.apache.org/admin/v2/persistent/my-tenant/my-namespace`. To get the list of non-persistent topics in REST API, use `https://pulsar.apache.org/admin/v2/non-persistent/my-tenant/my-namespace`. - -::: - -### List of topics - -You can get the list of topics under a given namespace in the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics list \ - my-tenant/my-namespace - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getList?version=@pulsar:version_number@} - - - - -```java - -String namespace = "my-tenant/my-namespace"; -admin.topics().getList(namespace); - -``` - - - - -```` - -### Grant permission - -You can grant permissions on a client role to perform specific actions on a given topic in the following ways. 
- -````mdx-code-block - - - -```shell - -$ pulsar-admin topics grant-permission \ - --actions produce,consume --role application1 \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/permissions/:role|operation/grantPermissionsOnTopic?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String role = "test-role"; -Set actions = Sets.newHashSet(AuthAction.produce, AuthAction.consume); -admin.topics().grantPermission(topic, role, actions); - -``` - - - - -```` - -### Get permission - -You can fetch permission in the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics permissions \ - persistent://test-tenant/ns1/tp1 \ - -{ - "application1": [ - "consume", - "produce" - ] -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/permissions|operation/getPermissionsOnTopic?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().getPermissions(topic); - -``` - - - - -```` - -### Revoke permission - -You can revoke a permission granted on a client role in the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics revoke-permission \ - --role application1 \ - persistent://test-tenant/ns1/tp1 \ - -{ - "application1": [ - "consume", - "produce" - ] -} - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/:schema/:tenant/:namespace/:topic/permissions/:role|operation/revokePermissionsOnTopic?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String role = "test-role"; -admin.topics().revokePermissions(topic, role); - -``` - - - - -```` - -### Delete topic - -You can delete a topic in the following ways. You cannot delete a topic if any active subscription or producers is connected to the topic. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics delete \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/:schema/:tenant/:namespace/:topic|operation/deleteTopic?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().delete(topic); - -``` - - - - -```` - -### Unload topic - -You can unload a topic in the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics unload \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/:schema/:tenant/:namespace/:topic/unload|operation/unloadTopic?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().unload(topic); - -``` - - - - -```` - -### Get stats - -You can check the following statistics of a given non-partitioned topic. - - - **msgRateIn**: The sum of all local and replication publishers' publish rates (msg/s). - - - **msgThroughputIn**: The sum of all local and replication publishers' publish rates (bytes/s). - - - **msgRateOut**: The sum of all local and replication consumers' dispatch rates(msg/s). - - - **msgThroughputOut**: The sum of all local and replication consumers' dispatch rates (bytes/s). - - - **averageMsgSize**: The average size (in bytes) of messages published within the last interval. - - - **storageSize**: The sum of the ledgers' storage size for this topic. The space used to store the messages for the topic. 
- - - **bytesInCounter**: Total bytes published to the topic. - - - **msgInCounter**: Total messages published to the topic. - - - **bytesOutCounter**: Total bytes delivered to consumers. - - - **msgOutCounter**: Total messages delivered to consumers. - - - **msgChunkPublished**: Topic has chunked message published on it. - - - **backlogSize**: Estimated total unconsumed or backlog size (in bytes). - - - **offloadedStorageSize**: Space used to store the offloaded messages for the topic (in bytes). - - - **waitingPublishers**: The number of publishers waiting in a queue in exclusive access mode. - - - **deduplicationStatus**: The status of message deduplication for the topic. - - - **topicEpoch**: The topic epoch or empty if not set. - - - **nonContiguousDeletedMessagesRanges**: The number of non-contiguous deleted messages ranges. - - - **nonContiguousDeletedMessagesRangesSerializedSize**: The serialized size of non-contiguous deleted messages ranges. - - - **publishers**: The list of all local publishers into the topic. The list ranges from zero to thousands. - - - **accessMode**: The type of access to the topic that the producer requires. - - - **msgRateIn**: The total rate of messages (msg/s) published by this publisher. - - - **msgThroughputIn**: The total throughput (bytes/s) of the messages published by this publisher. - - - **averageMsgSize**: The average message size in bytes from this publisher within the last interval. - - - **chunkedMessageRate**: The total rate of chunked messages published by this publisher. - - - **producerId**: The internal identifier for this producer on this topic. - - - **producerName**: The internal identifier for this producer, generated by the client library. - - - **address**: The IP address and source port for the connection of this producer. - - - **connectedSince**: The timestamp when this producer is created or reconnected last time. - - - **clientVersion**: The client library version of this producer. - - - **metadata**: Metadata (key/value strings) associated with this publisher. - - - **subscriptions**: The list of all local subscriptions to the topic. - - - **my-subscription**: The name of this subscription. It is defined by the client. - - - **msgRateOut**: The total rate of messages (msg/s) delivered on this subscription. - - - **msgThroughputOut**: The total throughput (bytes/s) delivered on this subscription. - - - **msgBacklog**: The number of messages in the subscription backlog. - - - **type**: The subscription type. - - - **msgRateExpired**: The rate at which messages were discarded instead of dispatched from this subscription due to TTL. - - - **lastExpireTimestamp**: The timestamp of the last message expire execution. - - - **lastConsumedFlowTimestamp**: The timestamp of the last flow command received. - - - **lastConsumedTimestamp**: The latest timestamp of all the consumed timestamp of the consumers. - - - **lastAckedTimestamp**: The latest timestamp of all the acked timestamp of the consumers. - - - **bytesOutCounter**: Total bytes delivered to consumer. - - - **msgOutCounter**: Total messages delivered to consumer. - - - **msgRateRedeliver**: Total rate of messages redelivered on this subscription (msg/s). - - - **chunkedMessageRate**: Chunked message dispatch rate. - - - **backlogSize**: Size of backlog for this subscription (in bytes). - - - **msgBacklogNoDelayed**: Number of messages in the subscription backlog that do not contain the delay messages. 
- - - **blockedSubscriptionOnUnackedMsgs**: Flag to verify if a subscription is blocked due to reaching threshold of unacked messages. - - - **msgDelayed**: Number of delayed messages currently being tracked. - - - **unackedMessages**: Number of unacknowledged messages for the subscription, where an unacknowledged message is one that has been sent to a consumer but not yet acknowledged. This field is only meaningful when using a subscription that tracks individual message acknowledgement. - - - **activeConsumerName**: The name of the consumer that is active for single active consumer subscriptions. For example, failover or exclusive. - - - **totalMsgExpired**: Total messages expired on this subscription. - - - **lastMarkDeleteAdvancedTimestamp**: Last MarkDelete position advanced timestamp. - - - **durable**: Whether the subscription is durable or ephemeral (for example, from a reader). - - - **replicated**: Mark that the subscription state is kept in sync across different regions. - - - **allowOutOfOrderDelivery**: Whether out of order delivery is allowed on the Key_Shared subscription. - - - **keySharedMode**: Whether the Key_Shared subscription mode is AUTO_SPLIT or STICKY. - - - **consumersAfterMarkDeletePosition**: This is for Key_Shared subscription to get the recentJoinedConsumers in the Key_Shared subscription. - - - **nonContiguousDeletedMessagesRanges**: The number of non-contiguous deleted messages ranges. - - - **nonContiguousDeletedMessagesRangesSerializedSize**: The serialized size of non-contiguous deleted messages ranges. - - - **consumers**: The list of connected consumers for this subscription. - - - **msgRateOut**: The total rate of messages (msg/s) delivered to the consumer. - - - **msgThroughputOut**: The total throughput (bytes/s) delivered to the consumer. - - - **consumerName**: The internal identifier for this consumer, generated by the client library. - - - **availablePermits**: The number of messages that the consumer has space for in the client library's listen queue. `0` means the client library's queue is full and `receive()` isn't being called. A non-zero value means this consumer is ready for dispatched messages. - - - **unackedMessages**: The number of unacknowledged messages for the consumer, where an unacknowledged message is one that has been sent to the consumer but not yet acknowledged. This field is only meaningful when using a subscription that tracks individual message acknowledgement. - - - **blockedConsumerOnUnackedMsgs**: The flag used to verify if the consumer is blocked due to reaching threshold of the unacknowledged messages. - - - **lastConsumedTimestamp**: The timestamp when the consumer reads a message the last time. - - - **lastAckedTimestamp**: The timestamp when the consumer acknowledges a message the last time. - - - **address**: The IP address and source port for the connection of this consumer. - - - **connectedSince**: The timestamp when this consumer is created or reconnected last time. - - - **clientVersion**: The client library version of this consumer. - - - **bytesOutCounter**: Total bytes delivered to consumer. - - - **msgOutCounter**: Total messages delivered to consumer. - - - **msgRateRedeliver**: Total rate of messages redelivered by this consumer (msg/s). - - - **chunkedMessageRate**: The total rate of chunked messages delivered to this consumer. - - - **avgMessagesPerEntry**: Number of average messages per entry for the consumer consumed. 
- - - **readPositionWhenJoining**: The read position of the cursor when the consumer joining. - - - **keyHashRanges**: Hash ranges assigned to this consumer if is Key_Shared sub mode. - - - **metadata**: Metadata (key/value strings) associated with this consumer. - - - **replication**: This section gives the stats for cross-colo replication of this topic - - - **msgRateIn**: The total rate (msg/s) of messages received from the remote cluster. - - - **msgThroughputIn**: The total throughput (bytes/s) received from the remote cluster. - - - **msgRateOut**: The total rate of messages (msg/s) delivered to the replication-subscriber. - - - **msgThroughputOut**: The total throughput (bytes/s) delivered to the replication-subscriber. - - - **msgRateExpired**: The total rate of messages (msg/s) expired. - - - **replicationBacklog**: The number of messages pending to be replicated to remote cluster. - - - **connected**: Whether the outbound replicator is connected. - - - **replicationDelayInSeconds**: How long the oldest message has been waiting to be sent through the connection, if connected is `true`. - - - **inboundConnection**: The IP and port of the broker in the remote cluster's publisher connection to this broker. - - - **inboundConnectedSince**: The TCP connection being used to publish messages to the remote cluster. If there are no local publishers connected, this connection is automatically closed after a minute. - - - **outboundConnection**: The address of the outbound replication connection. - - - **outboundConnectedSince**: The timestamp of establishing outbound connection. - -The following is an example of a topic status. - -```json - -{ - "msgRateIn" : 0.0, - "msgThroughputIn" : 0.0, - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "bytesInCounter" : 504, - "msgInCounter" : 9, - "bytesOutCounter" : 2296, - "msgOutCounter" : 41, - "averageMsgSize" : 0.0, - "msgChunkPublished" : false, - "storageSize" : 504, - "backlogSize" : 0, - "offloadedStorageSize" : 0, - "publishers" : [ { - "accessMode" : "Shared", - "msgRateIn" : 0.0, - "msgThroughputIn" : 0.0, - "averageMsgSize" : 0.0, - "chunkedMessageRate" : 0.0, - "producerId" : 0, - "metadata" : { }, - "address" : "/127.0.0.1:65402", - "connectedSince" : "2021-06-09T17:22:55.913+08:00", - "clientVersion" : "2.9.0-SNAPSHOT", - "producerName" : "standalone-1-0" - } ], - "waitingPublishers" : 0, - "subscriptions" : { - "sub-demo" : { - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "bytesOutCounter" : 2296, - "msgOutCounter" : 41, - "msgRateRedeliver" : 0.0, - "chunkedMessageRate" : 0, - "msgBacklog" : 0, - "backlogSize" : 0, - "msgBacklogNoDelayed" : 0, - "blockedSubscriptionOnUnackedMsgs" : false, - "msgDelayed" : 0, - "unackedMessages" : 0, - "type" : "Exclusive", - "activeConsumerName" : "20b81", - "msgRateExpired" : 0.0, - "totalMsgExpired" : 0, - "lastExpireTimestamp" : 0, - "lastConsumedFlowTimestamp" : 1623230565356, - "lastConsumedTimestamp" : 1623230583946, - "lastAckedTimestamp" : 1623230584033, - "lastMarkDeleteAdvancedTimestamp" : 1623230584033, - "consumers" : [ { - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "bytesOutCounter" : 2296, - "msgOutCounter" : 41, - "msgRateRedeliver" : 0.0, - "chunkedMessageRate" : 0.0, - "consumerName" : "20b81", - "availablePermits" : 959, - "unackedMessages" : 0, - "avgMessagesPerEntry" : 314, - "blockedConsumerOnUnackedMsgs" : false, - "lastAckedTimestamp" : 1623230584033, - "lastConsumedTimestamp" : 1623230583946, - "metadata" : { }, - "address" : "/127.0.0.1:65172", - 
"connectedSince" : "2021-06-09T17:22:45.353+08:00", - "clientVersion" : "2.9.0-SNAPSHOT" - } ], - "allowOutOfOrderDelivery": false, - "consumersAfterMarkDeletePosition" : { }, - "nonContiguousDeletedMessagesRanges" : 0, - "nonContiguousDeletedMessagesRangesSerializedSize" : 0, - "durable" : true, - "replicated" : false - } - }, - "replication" : { }, - "deduplicationStatus" : "Disabled", - "nonContiguousDeletedMessagesRanges" : 0, - "nonContiguousDeletedMessagesRangesSerializedSize" : 0 -} - -``` - -To get the status of a topic, you can use the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics stats \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/stats|operation/getStats?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().getStats(topic); - -``` - - - - -```` - -### Get internal stats - -You can get the detailed statistics of a topic. - - - **entriesAddedCounter**: Messages published since this broker loaded this topic. - - - **numberOfEntries**: The total number of messages being tracked. - - - **totalSize**: The total storage size in bytes of all messages. - - - **currentLedgerEntries**: The count of messages written to the ledger that is currently open for writing. - - - **currentLedgerSize**: The size in bytes of messages written to the ledger that is currently open for writing. - - - **lastLedgerCreatedTimestamp**: The time when the last ledger is created. - - - **lastLedgerCreationFailureTimestamp:** The time when the last ledger failed. - - - **waitingCursorsCount**: The number of cursors that are "caught up" and waiting for a new message to be published. - - - **pendingAddEntriesCount**: The number of messages that complete (asynchronous) write requests. - - - **lastConfirmedEntry**: The ledgerid:entryid of the last message that is written successfully. If the entryid is `-1`, then the ledger is open, yet no entries are written. - - - **state**: The state of this ledger for writing. The state `LedgerOpened` means that a ledger is open for saving published messages. - - - **ledgers**: The ordered list of all ledgers for this topic holding messages. - - - **ledgerId**: The ID of this ledger. - - - **entries**: The total number of entries that belong to this ledger. - - - **size**: The size of messages written to this ledger (in bytes). - - - **offloaded**: Whether this ledger is offloaded. - - - **metadata**: The ledger metadata. - - - **schemaLedgers**: The ordered list of all ledgers for this topic schema. - - - **ledgerId**: The ID of this ledger. - - - **entries**: The total number of entries that belong to this ledger. - - - **size**: The size of messages written to this ledger (in bytes). - - - **offloaded**: Whether this ledger is offloaded. - - - **metadata**: The ledger metadata. - - - **compactedLedger**: The ledgers holding un-acked messages after topic compaction. - - - **ledgerId**: The ID of this ledger. - - - **entries**: The total number of entries that belong to this ledger. - - - **size**: The size of messages written to this ledger (in bytes). - - - **offloaded**: Whether this ledger is offloaded. The value is `false` for the compacted topic ledger. - - - **cursors**: The list of all cursors on this topic. Each subscription in the topic stats has a cursor. - - - **markDeletePosition**: All messages before the markDeletePosition are acknowledged by the subscriber. 
- - - **readPosition**: The latest position of subscriber for reading message. - - - **waitingReadOp**: This is true when the subscription has read the latest message published to the topic and is waiting for new messages to be published. - - - **pendingReadOps**: The counter for how many outstanding read requests to the BookKeepers in progress. - - - **messagesConsumedCounter**: The number of messages this cursor has acked since this broker loaded this topic. - - - **cursorLedger**: The ledger being used to persistently store the current markDeletePosition. - - - **cursorLedgerLastEntry**: The last entryid used to persistently store the current markDeletePosition. - - - **individuallyDeletedMessages**: If acknowledges are being done out of order, the ranges of messages acknowledged between the markDeletePosition and the read-position shows. - - - **lastLedgerSwitchTimestamp**: The last time the cursor ledger is rolled over. - - - **state**: The state of the cursor ledger: `Open` means you have a cursor ledger for saving updates of the markDeletePosition. - -The following is an example of the detailed statistics of a topic. - -```json - -{ - "entriesAddedCounter":0, - "numberOfEntries":0, - "totalSize":0, - "currentLedgerEntries":0, - "currentLedgerSize":0, - "lastLedgerCreatedTimestamp":"2021-01-22T21:12:14.868+08:00", - "lastLedgerCreationFailureTimestamp":null, - "waitingCursorsCount":0, - "pendingAddEntriesCount":0, - "lastConfirmedEntry":"3:-1", - "state":"LedgerOpened", - "ledgers":[ - { - "ledgerId":3, - "entries":0, - "size":0, - "offloaded":false, - "metadata":null - } - ], - "cursors":{ - "test":{ - "markDeletePosition":"3:-1", - "readPosition":"3:-1", - "waitingReadOp":false, - "pendingReadOps":0, - "messagesConsumedCounter":0, - "cursorLedger":4, - "cursorLedgerLastEntry":1, - "individuallyDeletedMessages":"[]", - "lastLedgerSwitchTimestamp":"2021-01-22T21:12:14.966+08:00", - "state":"Open", - "numberOfEntriesSinceFirstNotAckedMessage":0, - "totalNonContiguousDeletedMessagesRange":0, - "properties":{ - - } - } - }, - "schemaLedgers":[ - { - "ledgerId":1, - "entries":11, - "size":10, - "offloaded":false, - "metadata":null - } - ], - "compactedLedger":{ - "ledgerId":-1, - "entries":-1, - "size":-1, - "offloaded":false, - "metadata":null - } -} - -``` - -To get the internal status of a topic, you can use the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics stats-internal \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/internalStats|operation/getInternalStats?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().getInternalStats(topic); - -``` - - - - -```` - -### Peek messages - -You can peek a number of messages for a specific subscription of a given topic in the following ways. 
-````mdx-code-block - - - -```shell - -$ pulsar-admin topics peek-messages \ - --count 10 --subscription my-subscription \ - persistent://test-tenant/ns1/tp1 \ - -Message ID: 315674752:0 -Properties: { "X-Pulsar-publish-time" : "2015-07-13 17:40:28.451" } -msg-payload - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/position/:messagePosition|operation/peekNthMessage?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String subName = "my-subscription"; -int numMessages = 1; -admin.topics().peekMessages(topic, subName, numMessages); - -``` - - - - -```` - -### Get message by ID - -You can fetch the message with the given ledger ID and entry ID in the following ways. - -````mdx-code-block - - - -```shell - -$ ./bin/pulsar-admin topics get-message-by-id \ - persistent://public/default/my-topic \ - -l 10 -e 0 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/ledger/:ledgerId/entry/:entryId|operation/getMessageById?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -long ledgerId = 10; -long entryId = 10; -admin.topics().getMessageById(topic, ledgerId, entryId); - -``` - - - - -```` - -### Examine messages - -You can examine a specific message on a topic by position relative to the earliest or the latest message. - -````mdx-code-block - - - -```shell - -./bin/pulsar-admin topics examine-messages \ - persistent://public/default/my-topic \ - -i latest -m 1 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/examinemessage?initialPosition=:initialPosition&messagePosition=:messagePosition|operation/examineMessage?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().examineMessage(topic, "latest", 1); - -``` - - - - -```` - -### Get message ID - -You can get message ID published at or just after the given datetime. - -````mdx-code-block - - - -```shell - -./bin/pulsar-admin topics get-message-id \ - persistent://public/default/my-topic \ - -d 2021-06-28T19:01:17Z - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/messageid/:timestamp|operation/getMessageIdByTimestamp?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -long timestamp = System.currentTimeMillis() -admin.topics().getMessageIdByTimestamp(topic, timestamp); - -``` - - - - -```` - - -### Skip messages - -You can skip a number of messages for a specific subscription of a given topic in the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics skip \ - --count 10 --subscription my-subscription \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/skip/:numMessages|operation/skipMessages?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String subName = "my-subscription"; -int numMessages = 1; -admin.topics().skipMessages(topic, subName, numMessages); - -``` - - - - -```` - -### Skip all messages - -You can skip all the old messages for a specific subscription of a given topic. 
````mdx-code-block

```shell

$ pulsar-admin topics skip-all \
  --subscription my-subscription \
  persistent://test-tenant/ns1/tp1

```

{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/skip_all|operation/skipAllMessages?version=@pulsar:version_number@}

```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
String subName = "my-subscription";
admin.topics().skipAllMessages(topic, subName);

```

````

### Reset cursor

You can reset a subscription cursor to the position recorded X minutes earlier. The broker calculates the time and cursor position of X minutes earlier and resets the cursor to that position. You can reset the cursor in the following ways.

````mdx-code-block

```shell

$ pulsar-admin topics reset-cursor \
  --subscription my-subscription --time 10 \
  persistent://test-tenant/ns1/tp1

```

{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/resetcursor/:timestamp|operation/resetCursor?version=@pulsar:version_number@}

```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
String subName = "my-subscription";
long timestamp = 2342343L;
admin.topics().resetCursor(topic, subName, timestamp);

```

````

### Look up topic's owner broker

You can locate the owner broker of a given topic in the following ways.

````mdx-code-block

```shell

$ pulsar-admin topics lookup \
  persistent://test-tenant/ns1/tp1

"pulsar://broker1.org.com:4480"

```

{@inject: endpoint|GET|/lookup/v2/topic/:schema/:tenant/:namespace/:topic|/?version=@pulsar:version_number@}

```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
admin.lookups().lookupTopic(topic);

```

````

### Get bundle

You can get the range of the bundle that a given topic belongs to in the following ways.

````mdx-code-block

```shell

$ pulsar-admin topics bundle-range \
  persistent://test-tenant/ns1/tp1

"0x00000000_0xffffffff"

```

{@inject: endpoint|GET|/lookup/v2/topic/:topic_domain/:tenant/:namespace/:topic/bundle|/?version=@pulsar:version_number@}

```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
admin.lookups().getBundleRange(topic);

```

````

### Get subscriptions

You can list all subscription names for a given topic in the following ways.

````mdx-code-block

```shell

$ pulsar-admin topics subscriptions \
  persistent://test-tenant/ns1/tp1

my-subscription

```

{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/subscriptions|operation/getSubscriptions?version=@pulsar:version_number@}

```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
admin.topics().getSubscriptions(topic);

```

````

### Unsubscribe

When a subscription no longer processes messages, you can unsubscribe it in the following ways.
````mdx-code-block

```shell

$ pulsar-admin topics unsubscribe \
  --subscription my-subscription \
  persistent://test-tenant/ns1/tp1

```

{@inject: endpoint|DELETE|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName|operation/deleteSubscription?version=@pulsar:version_number@}

```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
String subscriptionName = "my-subscription";
admin.topics().deleteSubscription(topic, subscriptionName);

```

````

### Last Message Id

You can get the last committed message ID for a persistent topic. This feature is available since the 2.3.0 release.

````mdx-code-block

```shell

pulsar-admin topics last-message-id topic-name

```

{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/lastMessageId|operation/getLastMessageId?version=@pulsar:version_number@}

```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
admin.topics().getLastMessageId(topic);

```

````

### Configure deduplication snapshot interval

#### Get deduplication snapshot interval

To get the topic-level deduplication snapshot interval, use one of the following methods.

````mdx-code-block

```shell

pulsar-admin topics get-deduplication-snapshot-interval options

```

{@inject: endpoint|GET|/admin/v2/topics/:tenant/:namespace/:topic/deduplicationSnapshotInterval|operation/getDeduplicationSnapshotInterval?version=@pulsar:version_number@}

```java

admin.topics().getDeduplicationSnapshotInterval(topic);

```

````

#### Set deduplication snapshot interval

To set the topic-level deduplication snapshot interval, use one of the following methods.

> **Prerequisite** `brokerDeduplicationEnabled` must be set to `true`.

````mdx-code-block

```shell

pulsar-admin topics set-deduplication-snapshot-interval options

```

{@inject: endpoint|POST|/admin/v2/topics/:tenant/:namespace/:topic/deduplicationSnapshotInterval|operation/setDeduplicationSnapshotInterval?version=@pulsar:version_number@}

```java

admin.topics().setDeduplicationSnapshotInterval(topic, 1000);

```

````

#### Remove deduplication snapshot interval

To remove the topic-level deduplication snapshot interval, use one of the following methods.

````mdx-code-block

```shell

pulsar-admin topics remove-deduplication-snapshot-interval options

```

{@inject: endpoint|DELETE|/admin/v2/topics/:tenant/:namespace/:topic/deduplicationSnapshotInterval|operation/deleteDeduplicationSnapshotInterval?version=@pulsar:version_number@}

```java

admin.topics().removeDeduplicationSnapshotInterval(topic);

```

````

### Configure inactive topic policies

#### Get inactive topic policies

To get the topic-level inactive topic policies, use one of the following methods.

````mdx-code-block

```shell

pulsar-admin topics get-inactive-topic-policies options

```

{@inject: endpoint|GET|/admin/v2/topics/:tenant/:namespace/:topic/inactiveTopicPolicies|operation/getInactiveTopicPolicies?version=@pulsar:version_number@}

```java

admin.topics().getInactiveTopicPolicies(topic);

```

````

#### Set inactive topic policies

To set the topic-level inactive topic policies, use one of the following methods.
- -````mdx-code-block - - - -``` - -pulsar-admin topics set-inactive-topic-policies options - -``` - - - - -{@inject: endpoint|POST|/admin/v2/topics/:tenant/:namespace/:topic/inactiveTopicPolicies|operation/setInactiveTopicPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().setInactiveTopicPolicies(topic, inactiveTopicPolicies) - -``` - - - - -```` - -#### Remove inactive topic policies - -To remove the topic-level inactive topic policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics remove-inactive-topic-policies options - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/topics/:tenant/:namespace/:topic/inactiveTopicPolicies|operation/removeInactiveTopicPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().removeInactiveTopicPolicies(topic) - -``` - - - - -```` - - -### Configure offload policies - -#### Get offload policies - -To get the topic-level offload policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics get-offload-policies options - -``` - - - - -{@inject: endpoint|GET|/admin/v2/topics/:tenant/:namespace/:topic/offloadPolicies|operation/getOffloadPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getOffloadPolicies(topic) - -``` - - - - -```` - -#### Set offload policies - -To set the topic-level offload policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics set-offload-policies options - -``` - - - - -{@inject: endpoint|POST|/admin/v2/topics/:tenant/:namespace/:topic/offloadPolicies|operation/setOffloadPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().setOffloadPolicies(topic, offloadPolicies) - -``` - - - - -```` - -#### Remove offload policies - -To remove the topic-level offload policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics remove-offload-policies options - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/topics/:tenant/:namespace/:topic/offloadPolicies|operation/removeOffloadPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().removeOffloadPolicies(topic) - -``` - - - - -```` - - -## Manage non-partitioned topics -You can use Pulsar [admin API](admin-api-overview.md) to create, delete and check status of non-partitioned topics. - -### Create -Non-partitioned topics must be explicitly created. When creating a new non-partitioned topic, you need to provide a name for the topic. - -By default, 60 seconds after creation, topics are considered inactive and deleted automatically to avoid generating trash data. To disable this feature, set `brokerDeleteInactiveTopicsEnabled` to `false`. To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to a specific value. - -For more information about the two parameters, see [here](reference-configuration.md#broker). - -You can create non-partitioned topics in the following ways. -````mdx-code-block - - - -When you create non-partitioned topics with the [`create`](reference-pulsar-admin.md#create-3) command, you need to specify the topic name as an argument. 
- -```shell - -$ bin/pulsar-admin topics create \ - persistent://my-tenant/my-namespace/my-topic - -``` - -:::note - -When you create a non-partitioned topic with the suffix '-partition-' followed by numeric value like 'xyz-topic-partition-x' for the topic name, if a partitioned topic with same suffix 'xyz-topic-partition-y' exists, then the numeric value(x) for the non-partitioned topic must be larger than the number of partitions(y) of the partitioned topic. Otherwise, you cannot create such a non-partitioned topic. - -::: - - - - -{@inject: endpoint|PUT|/admin/v2/:schema/:tenant/:namespace/:topic|operation/createNonPartitionedTopic?version=@pulsar:version_number@} - - - - -```java - -String topicName = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().createNonPartitionedTopic(topicName); - -``` - - - - -```` - -### Delete -You can delete non-partitioned topics in the following ways. -````mdx-code-block - - - -```shell - -$ bin/pulsar-admin topics delete \ - persistent://my-tenant/my-namespace/my-topic - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/:schema/:tenant/:namespace/:topic|operation/deleteTopic?version=@pulsar:version_number@} - - - - -```java - -admin.topics().delete(topic); - -``` - - - - -```` - -### List - -You can get the list of topics under a given namespace in the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics list tenant/namespace -persistent://tenant/namespace/topic1 -persistent://tenant/namespace/topic2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getList?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getList(namespace); - -``` - - - - -```` - -### Stats - -You can check the current statistics of a given topic. The following is an example. For description of each stats, refer to [get stats](#get-stats). - -```json - -{ - "msgRateIn": 4641.528542257553, - "msgThroughputIn": 44663039.74947473, - "msgRateOut": 0, - "msgThroughputOut": 0, - "averageMsgSize": 1232439.816728665, - "storageSize": 135532389160, - "publishers": [ - { - "msgRateIn": 57.855383881403576, - "msgThroughputIn": 558994.7078932219, - "averageMsgSize": 613135, - "producerId": 0, - "producerName": null, - "address": null, - "connectedSince": null - } - ], - "subscriptions": { - "my-topic_subscription": { - "msgRateOut": 0, - "msgThroughputOut": 0, - "msgBacklog": 116632, - "type": null, - "msgRateExpired": 36.98245516804671, - "consumers": [] - } - }, - "replication": {} -} - -``` - -You can check the current statistics of a given topic and its connected producers and consumers in the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics stats \ - persistent://test-tenant/namespace/topic \ - --get-precise-backlog - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/stats|operation/getStats?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getStats(topic, false /* is precise backlog */); - -``` - - - - -```` - -## Manage partitioned topics -You can use Pulsar [admin API](admin-api-overview.md) to create, update, delete and check status of partitioned topics. - -### Create - -Partitioned topics must be explicitly created. When creating a new partitioned topic, you need to provide a name and the number of partitions for the topic. - -By default, 60 seconds after creation, topics are considered inactive and deleted automatically to avoid generating trash data. 
To disable this feature, set `brokerDeleteInactiveTopicsEnabled` to `false`. To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to a specific value.

For more information about the two parameters, see [here](reference-configuration.md#broker).

You can create partitioned topics in the following ways.
````mdx-code-block

When you create partitioned topics with the [`create-partitioned-topic`](reference-pulsar-admin.md#create-partitioned-topic)
command, you need to specify the topic name as an argument and the number of partitions using the `-p` or `--partitions` flag.

```shell

$ bin/pulsar-admin topics create-partitioned-topic \
  persistent://my-tenant/my-namespace/my-topic \
  --partitions 4

```

:::note

If a non-partitioned topic with the suffix '-partition-' followed by a numeric value (such as 'xyz-topic-partition-10') already exists, you cannot create a partitioned topic named 'xyz-topic', because the partitions of the partitioned topic would conflict with the existing non-partitioned topic. To create such a partitioned topic, delete the non-partitioned topic first.

:::

{@inject: endpoint|PUT|/admin/v2/:schema/:tenant/:namespace/:topic/partitions|operation/createPartitionedTopic?version=@pulsar:version_number@}

```java

String topicName = "persistent://my-tenant/my-namespace/my-topic";
int numPartitions = 4;
admin.topics().createPartitionedTopic(topicName, numPartitions);

```

````

### Create missed partitions

When topic auto-creation is disabled and you have a partitioned topic without any partitions, you can use the [`create-missed-partitions`](reference-pulsar-admin.md#create-missed-partitions) command to create partitions for the topic.

````mdx-code-block

You can create missed partitions with the [`create-missed-partitions`](reference-pulsar-admin.md#create-missed-partitions) command and specify the topic name as an argument.

```shell

$ bin/pulsar-admin topics create-missed-partitions \
  persistent://my-tenant/my-namespace/my-topic

```

{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic|operation/createMissedPartitions?version=@pulsar:version_number@}

```java

String topicName = "persistent://my-tenant/my-namespace/my-topic";
admin.topics().createMissedPartitions(topicName);

```

````

### Get metadata

Partitioned topics are associated with metadata that you can view as a JSON object. The following metadata field is available.

Field | Description
:-----|:-------
`partitions` | The number of partitions into which the topic is divided.

````mdx-code-block

You can check the number of partitions in a partitioned topic with the [`get-partitioned-topic-metadata`](reference-pulsar-admin.md#get-partitioned-topic-metadata) subcommand.

```shell

$ pulsar-admin topics get-partitioned-topic-metadata \
  persistent://my-tenant/my-namespace/my-topic
{
  "partitions": 4
}

```

{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/partitions|operation/getPartitionedMetadata?version=@pulsar:version_number@}

```java

String topicName = "persistent://my-tenant/my-namespace/my-topic";
admin.topics().getPartitionedTopicMetadata(topicName);

```

````

### Update

You can update the number of partitions for an existing partitioned topic *if* the topic is non-global. However, you can only increase the number of partitions.
Decrementing the number of partitions would delete the topic, which is not supported in Pulsar. - -Producers and consumers can find the newly created partitions automatically. - -````mdx-code-block - - - -You can update partitioned topics with the [`update-partitioned-topic`](reference-pulsar-admin.md#update-partitioned-topic) command. - -```shell - -$ pulsar-admin topics update-partitioned-topic \ - persistent://my-tenant/my-namespace/my-topic \ - --partitions 8 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:cluster/:namespace/:destination/partitions|operation/updatePartitionedTopic?version=@pulsar:version_number@} - - - - -```java - -admin.topics().updatePartitionedTopic(topic, numPartitions); - -``` - - - - -```` - -### Delete -You can delete partitioned topics with the [`delete-partitioned-topic`](reference-pulsar-admin.md#delete-partitioned-topic) command, REST API and Java. - -````mdx-code-block - - - -```shell - -$ bin/pulsar-admin topics delete-partitioned-topic \ - persistent://my-tenant/my-namespace/my-topic - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/:schema/:topic/:namespace/:destination/partitions|operation/deletePartitionedTopic?version=@pulsar:version_number@} - - - - -```java - -admin.topics().delete(topic); - -``` - - - - -```` - -### List -You can get the list of topics under a given namespace in the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics list tenant/namespace -persistent://tenant/namespace/topic1 -persistent://tenant/namespace/topic2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getPartitionedTopicList?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getList(namespace); - -``` - - - - -```` - -### Stats - -You can check the current statistics of a given partitioned topic. The following is an example. For description of each stats, refer to [get stats](#get-stats). - -Note that in the subscription JSON object, `chuckedMessageRate` is deprecated. Please use `chunkedMessageRate`. Both will be sent in the JSON for now. - -```json - -{ - "msgRateIn" : 999.992947159793, - "msgThroughputIn" : 1070918.4635439808, - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "bytesInCounter" : 270318763, - "msgInCounter" : 252489, - "bytesOutCounter" : 0, - "msgOutCounter" : 0, - "averageMsgSize" : 1070.926056966454, - "msgChunkPublished" : false, - "storageSize" : 270316646, - "backlogSize" : 200921133, - "publishers" : [ { - "msgRateIn" : 999.992947159793, - "msgThroughputIn" : 1070918.4635439808, - "averageMsgSize" : 1070.3333333333333, - "chunkedMessageRate" : 0.0, - "producerId" : 0 - } ], - "subscriptions" : { - "test" : { - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "bytesOutCounter" : 0, - "msgOutCounter" : 0, - "msgRateRedeliver" : 0.0, - "chuckedMessageRate" : 0, - "chunkedMessageRate" : 0, - "msgBacklog" : 144318, - "msgBacklogNoDelayed" : 144318, - "blockedSubscriptionOnUnackedMsgs" : false, - "msgDelayed" : 0, - "unackedMessages" : 0, - "msgRateExpired" : 0.0, - "lastExpireTimestamp" : 0, - "lastConsumedFlowTimestamp" : 0, - "lastConsumedTimestamp" : 0, - "lastAckedTimestamp" : 0, - "consumers" : [ ], - "isDurable" : true, - "isReplicated" : false - } - }, - "replication" : { }, - "metadata" : { - "partitions" : 3 - }, - "partitions" : { } -} - -``` - -You can check the current statistics of a given partitioned topic and its connected producers and consumers in the following ways. 
- -````mdx-code-block - - - -```shell - -$ pulsar-admin topics partitioned-stats \ - persistent://test-tenant/namespace/topic \ - --per-partition - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/partitioned-stats|operation/getPartitionedStats?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getPartitionedStats(topic, true /* per partition */, false /* is precise backlog */); - -``` - - - - -```` - -### Internal stats - -You can check the detailed statistics of a topic. The following is an example. For description of each stats, refer to [get internal stats](#get-internal-stats). - -```json - -{ - "entriesAddedCounter": 20449518, - "numberOfEntries": 3233, - "totalSize": 331482, - "currentLedgerEntries": 3233, - "currentLedgerSize": 331482, - "lastLedgerCreatedTimestamp": "2016-06-29 03:00:23.825", - "lastLedgerCreationFailureTimestamp": null, - "waitingCursorsCount": 1, - "pendingAddEntriesCount": 0, - "lastConfirmedEntry": "324711539:3232", - "state": "LedgerOpened", - "ledgers": [ - { - "ledgerId": 324711539, - "entries": 0, - "size": 0 - } - ], - "cursors": { - "my-subscription": { - "markDeletePosition": "324711539:3133", - "readPosition": "324711539:3233", - "waitingReadOp": true, - "pendingReadOps": 0, - "messagesConsumedCounter": 20449501, - "cursorLedger": 324702104, - "cursorLedgerLastEntry": 21, - "individuallyDeletedMessages": "[(324711539:3134‥324711539:3136], (324711539:3137‥324711539:3140], ]", - "lastLedgerSwitchTimestamp": "2016-06-29 01:30:19.313", - "state": "Open" - } - } -} - -``` - -You can get the internal stats for the partitioned topic in the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics stats-internal \ - persistent://test-tenant/namespace/topic - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/internalStats|operation/getInternalStats?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getInternalStats(topic); - -``` - - - - -```` - -### Get backlog size - -You can get backlog size of a single topic partition or a nonpartitioned topic given a message ID (in bytes). - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics get-backlog-size \ - -m 1:1 \ - persistent://test-tenant/ns1/tp1-partition-0 \ - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/:schema/:tenant/:namespace/:topic/backlogSize|operation/getBacklogSizeByMessageId?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -MessageId messageId = MessageId.earliest; -admin.topics().getBacklogSizeByMessageId(topic, messageId); - -``` - - - - -```` - -## Publish to partitioned topics - -By default, Pulsar topics are served by a single broker, which limits the maximum throughput of a topic. *Partitioned topics* can span multiple brokers and thus allow for higher throughput. - -You can publish to partitioned topics using Pulsar client libraries. When publishing to partitioned topics, you must specify a routing mode. If you do not specify any routing mode when you create a new producer, the round robin routing mode is used. - -### Routing mode - -You can specify the routing mode in the ProducerConfiguration object that you use to configure your producer. The routing mode determines which partition(internal topic) that each message should be published to. - -The following {@inject: javadoc:MessageRoutingMode:/client/org/apache/pulsar/client/api/MessageRoutingMode} options are available. 
- -Mode | Description -:--------|:------------ -`RoundRobinPartition` | If no key is provided, the producer publishes messages across all partitions in round-robin policy to achieve the maximum throughput. Round-robin is not done per individual message, round-robin is set to the same boundary of batching delay to ensure that batching is effective. If a key is specified on the message, the partitioned producer hashes the key and assigns message to a particular partition. This is the default mode. -`SinglePartition` | If no key is provided, the producer picks a single partition randomly and publishes all messages into that partition. If a key is specified on the message, the partitioned producer hashes the key and assigns message to a particular partition. -`CustomPartition` | Use custom message router implementation that is called to determine the partition for a particular message. You can create a custom routing mode by using the Java client and implementing the {@inject: javadoc:MessageRouter:/client/org/apache/pulsar/client/api/MessageRouter} interface. - -The following is an example: - -```java - -String pulsarBrokerRootUrl = "pulsar://localhost:6650"; -String topic = "persistent://my-tenant/my-namespace/my-topic"; - -PulsarClient pulsarClient = PulsarClient.builder().serviceUrl(pulsarBrokerRootUrl).build(); -Producer producer = pulsarClient.newProducer() - .topic(topic) - .messageRoutingMode(MessageRoutingMode.SinglePartition) - .create(); -producer.send("Partitioned topic message".getBytes()); - -``` - -### Custom message router - -To use a custom message router, you need to provide an implementation of the {@inject: javadoc:MessageRouter:/client/org/apache/pulsar/client/api/MessageRouter} interface, which has just one `choosePartition` method: - -```java - -public interface MessageRouter extends Serializable { - int choosePartition(Message msg); -} - -``` - -The following router routes every message to partition 10: - -```java - -public class AlwaysTenRouter implements MessageRouter { - public int choosePartition(Message msg) { - return 10; - } -} - -``` - -With that implementation, you can send - -```java - -String pulsarBrokerRootUrl = "pulsar://localhost:6650"; -String topic = "persistent://my-tenant/my-cluster-my-namespace/my-topic"; - -PulsarClient pulsarClient = PulsarClient.builder().serviceUrl(pulsarBrokerRootUrl).build(); -Producer producer = pulsarClient.newProducer() - .topic(topic) - .messageRouter(new AlwaysTenRouter()) - .create(); -producer.send("Partitioned topic message".getBytes()); - -``` - -### How to choose partitions when using a key -If a message has a key, it supersedes the round robin routing policy. The following example illustrates how to choose the partition when using a key. - -```java - -// If the message has a key, it supersedes the round robin routing policy - if (msg.hasKey()) { - return signSafeMod(hash.makeHash(msg.getKey()), topicMetadata.numPartitions()); - } - - if (isBatchingEnabled) { // if batching is enabled, choose partition on `partitionSwitchMs` boundary. 
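    // Within one `partitionSwitchMs` window, currentMs / partitionSwitchMs yields
    // the same value, so every message in that window goes to the same partition,
    // letting batches fill before the producer rotates to the next partition.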
- long currentMs = clock.millis(); - return signSafeMod(currentMs / partitionSwitchMs + startPtnIdx, topicMetadata.numPartitions()); - } else { - return signSafeMod(PARTITION_INDEX_UPDATER.getAndIncrement(this), topicMetadata.numPartitions()); - } - -``` - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/administration-dashboard.md b/site2/website/versioned_docs/version-2.9.0-deprecated/administration-dashboard.md deleted file mode 100644 index 92bd7e17869d7b..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/administration-dashboard.md +++ /dev/null @@ -1,76 +0,0 @@ ---- -id: administration-dashboard -title: Pulsar dashboard -sidebar_label: "Dashboard" -original_id: administration-dashboard ---- - -:::note - -Pulsar dashboard is deprecated. We recommend you use [Pulsar Manager](administration-pulsar-manager.md) to manage and monitor the stats of your topics. - -::: - -Pulsar dashboard is a web application that enables users to monitor current stats for all [topics](reference-terminology.md#topic) in tabular form. - -The dashboard is a data collector that polls stats from all the brokers in a Pulsar instance (across multiple clusters) and stores all the information in a [PostgreSQL](https://www.postgresql.org/) database. - -You can use the [Django](https://www.djangoproject.com) web app to render the collected data. - -## Install - -The easiest way to use the dashboard is to run it inside a [Docker](https://www.docker.com/products/docker) container. - -```shell - -$ SERVICE_URL=http://broker.example.com:8080/ -$ docker run -p 80:80 \ - -e SERVICE_URL=$SERVICE_URL \ - apachepulsar/pulsar-dashboard:@pulsar:version@ - -``` - -You can find the {@inject: github:Dockerfile:/dashboard/Dockerfile} in the `dashboard` directory and build an image from scratch as well: - -```shell - -$ docker build -t apachepulsar/pulsar-dashboard dashboard - -``` - -If token authentication is enabled: -> Provided token should have super-user access. - -```shell - -$ SERVICE_URL=http://broker.example.com:8080/ -$ JWT_TOKEN=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c -$ docker run -p 80:80 \ - -e SERVICE_URL=$SERVICE_URL \ - -e JWT_TOKEN=$JWT_TOKEN \ - apachepulsar/pulsar-dashboard - -``` - - -You need to specify only one service URL for a Pulsar cluster. Internally, the collector figures out all the existing clusters and the brokers from where it needs to pull the metrics. If you connect the dashboard to Pulsar running in standalone mode, the URL is `http://:8080` by default. `` is the IP address or hostname of the machine that runs Pulsar standalone. The IP address or hostname should be accessible from the running dashboard in the docker instance. - -Once the Docker container starts, the web dashboard is accessible via `localhost` or whichever host that Docker uses. - -> The `SERVICE_URL` that the dashboard uses needs to be reachable from inside the Docker container. - -If the Pulsar service runs in standalone mode in `localhost`, the `SERVICE_URL` has to -be the IP address of the machine. - -Similarly, given the Pulsar standalone advertises itself with localhost by default, you need to -explicitly set the advertise address to the host IP address. For example: - -```shell - -$ bin/pulsar standalone --advertised-address 1.2.3.4 - -``` - -### Known issues - -Currently, only Pulsar Token [authentication](security-overview.md#authentication-providers) is supported. 
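Under the hood, the collector's polling amounts to topic-stats calls against the brokers' admin API. The following Java sketch shows the equivalent of a single poll with token authentication; the service URL, token, and topic name are placeholder assumptions, not values from this guide.

```java

import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.client.api.AuthenticationFactory;

public class DashboardStylePoller {
    public static void main(String[] args) throws Exception {
        // Placeholder service URL and token; as noted above, the token
        // needs super-user access.
        PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://broker.example.com:8080")
                .authentication(AuthenticationFactory.token("<your-jwt-token>"))
                .build();

        // One stats poll for one topic; the dashboard repeats this for
        // every topic it discovers across the instance.
        System.out.println(admin.topics().getStats("persistent://public/default/my-topic"));
        admin.close();
    }
}

```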
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/administration-geo.md b/site2/website/versioned_docs/version-2.9.0-deprecated/administration-geo.md deleted file mode 100644 index f1b988dd5f13ba..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/administration-geo.md +++ /dev/null @@ -1,214 +0,0 @@ ---- -id: administration-geo -title: Pulsar geo-replication -sidebar_label: "Geo-replication" -original_id: administration-geo ---- - -*Geo-replication* is the replication of persistently stored message data across multiple clusters of a Pulsar instance. - -## How geo-replication works - -The diagram below illustrates the process of geo-replication across Pulsar clusters: - -![Replication Diagram](/assets/geo-replication.png) - -In this diagram, whenever **P1**, **P2**, and **P3** producers publish messages to the **T1** topic on **Cluster-A**, **Cluster-B**, and **Cluster-C** clusters respectively, those messages are instantly replicated across clusters. Once the messages are replicated, **C1** and **C2** consumers can consume those messages from their respective clusters. - -Without geo-replication, **C1** and **C2** consumers are not able to consume messages that **P3** producer publishes. - -## Geo-replication and Pulsar properties - -You must enable geo-replication on a per-tenant basis in Pulsar. You can enable geo-replication between clusters only when a tenant is created that allows access to both clusters. - -Although geo-replication must be enabled between two clusters, actually geo-replication is managed at the namespace level. You must complete the following tasks to enable geo-replication for a namespace: - -* [Enable geo-replication namespaces](#enable-geo-replication-namespaces) -* Configure that namespace to replicate across two or more provisioned clusters - -Any message published on *any* topic in that namespace is replicated to all clusters in the specified set. - -## Local persistence and forwarding - -When messages are produced on a Pulsar topic, messages are first persisted in the local cluster, and then forwarded asynchronously to the remote clusters. - -In normal cases, when connectivity issues are none, messages are replicated immediately, at the same time as they are dispatched to local consumers. Typically, the network [round-trip time](https://en.wikipedia.org/wiki/Round-trip_delay_time) (RTT) between the remote regions defines end-to-end delivery latency. - -Applications can create producers and consumers in any of the clusters, even when the remote clusters are not reachable (like during a network partition). - -Producers and consumers can publish messages to and consume messages from any cluster in a Pulsar instance. However, subscriptions cannot only be local to the cluster where the subscriptions are created but also can be transferred between clusters after replicated subscription is enabled. Once replicated subscription is enabled, you can keep subscription state in synchronization. Therefore, a topic can be asynchronously replicated across multiple geographical regions. In case of failover, a consumer can restart consuming messages from the failure point in a different cluster. - -In the aforementioned example, the **T1** topic is replicated among three clusters, **Cluster-A**, **Cluster-B**, and **Cluster-C**. - -All messages produced in any of the three clusters are delivered to all subscriptions in other clusters. 
In this case, **C1** and **C2** consumers receive all messages that **P1**, **P2**, and **P3** producers publish. Ordering is still guaranteed on a per-producer basis. - -## Configure replication - -As stated in [Geo-replication and Pulsar properties](#geo-replication-and-pulsar-properties) section, geo-replication in Pulsar is managed at the [tenant](reference-terminology.md#tenant) level. - -The following example connects three clusters: **us-east**, **us-west**, and **us-cent**. - -### Connect replication clusters - -To replicate data among clusters, you need to configure each cluster to connect to the other. You can use the [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/) tool to create a connection. - -**Example** - -Suppose that you have 3 replication clusters: `us-west`, `us-cent`, and `us-east`. - -1. Configure the connection from `us-west` to `us-east`. - - Run the following command on `us-west`. - -```shell - -$ bin/pulsar-admin clusters create \ - --broker-url pulsar://: \ - --url http://: \ - us-east - -``` - - :::tip - - - If you want to use a secure connection for a cluster, you can use the flags `--broker-url-secure` and `--url-secure`. For more information, see [pulsar-admin clusters create](https://pulsar.apache.org/tools/pulsar-admin/). - - Different clusters may have different authentications. You can use the authentication flag `--auth-plugin` and `--auth-parameters` together to set cluster authentication, which overrides `brokerClientAuthenticationPlugin` and `brokerClientAuthenticationParameters` if `authenticationEnabled` sets to `true` in `broker.conf` and `standalone.conf`. For more information, see [authentication and authorization](concepts-authentication.md). - - ::: - -2. Configure the connection from `us-west` to `us-cent`. - - Run the following command on `us-west`. - -```shell - -$ bin/pulsar-admin clusters create \ - --broker-url pulsar://: \ - --url http://: \ - us-cent - -``` - -3. Run similar commands on `us-east` and `us-cent` to create connections among clusters. - -### Grant permissions to properties - -To replicate to a cluster, the tenant needs permission to use that cluster. You can grant permission to the tenant when you create the tenant or grant later. - -Specify all the intended clusters when you create a tenant: - -```shell - -$ bin/pulsar-admin tenants create my-tenant \ - --admin-roles my-admin-role \ - --allowed-clusters us-west,us-east,us-cent - -``` - -To update permissions of an existing tenant, use `update` instead of `create`. - -### Enable geo-replication namespaces - -You can create a namespace with the following command sample. - -```shell - -$ bin/pulsar-admin namespaces create my-tenant/my-namespace - -``` - -Initially, the namespace is not assigned to any cluster. You can assign the namespace to clusters using the `set-clusters` subcommand: - -```shell - -$ bin/pulsar-admin namespaces set-clusters my-tenant/my-namespace \ - --clusters us-west,us-east,us-cent - -``` - -You can change the replication clusters for a namespace at any time, without disruption to ongoing traffic. Replication channels are immediately set up or stopped in all clusters as soon as the configuration changes. - -### Use topics with geo-replication - -Once you create a geo-replication namespace, any topics that producers or consumers create within that namespace is replicated across clusters. Typically, each application uses the `serviceUrl` for the local cluster. 
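For example, once a namespace is replicated between `us-west` and `us-east`, a producer and a consumer each connect only to their local cluster, and the brokers handle cross-region delivery. The following Java sketch assumes placeholder host names for the two clusters.

```java

// Producer in us-west: connects to the local service URL only.
PulsarClient westClient = PulsarClient.builder()
        .serviceUrl("pulsar://pulsar.us-west.example.com:6650")  // placeholder
        .build();
Producer<byte[]> producer = westClient.newProducer()
        .topic("persistent://my-tenant/my-namespace/my-topic")
        .create();
producer.send("replicated to all configured clusters".getBytes());

// Consumer in us-east: reads the replicated copy from its own cluster.
PulsarClient eastClient = PulsarClient.builder()
        .serviceUrl("pulsar://pulsar.us-east.example.com:6650")  // placeholder
        .build();
Consumer<byte[]> consumer = eastClient.newConsumer()
        .topic("persistent://my-tenant/my-namespace/my-topic")
        .subscriptionName("my-subscription")
        .subscribe();

```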
- -#### Selective replication - -By default, messages are replicated to all clusters configured for the namespace. You can restrict replication selectively by specifying a replication list for a message, and then that message is replicated only to the subset in the replication list. - -The following is an example for the [Java API](client-libraries-java.md). Note the use of the `setReplicationClusters` method when you construct the {@inject: javadoc:Message:/client/org/apache/pulsar/client/api/Message} object: - -```java - -List restrictReplicationTo = Arrays.asList( - "us-west", - "us-east" -); - -Producer producer = client.newProducer() - .topic("some-topic") - .create(); - -producer.newMessage() - .value("my-payload".getBytes()) - .setReplicationClusters(restrictReplicationTo) - .send(); - -``` - -#### Topic stats - -Topic-specific statistics for geo-replication topics are available via the [`pulsar-admin`](reference-pulsar-admin.md) tool and {@inject: rest:REST:/} API: - -```shell - -$ bin/pulsar-admin persistent stats persistent://my-tenant/my-namespace/my-topic - -``` - -Each cluster reports its own local stats, including the incoming and outgoing replication rates and backlogs. - -#### Delete a geo-replication topic - -Given that geo-replication topics exist in multiple regions, directly deleting a geo-replication topic is not possible. Instead, you should rely on automatic topic garbage collection. - -In Pulsar, a topic is automatically deleted when the topic meets the following three conditions: -- no producers or consumers are connected to it; -- no subscriptions to it; -- no more messages are kept for retention. -For geo-replication topics, each region uses a fault-tolerant mechanism to decide when deleting the topic locally is safe. - -You can explicitly disable topic garbage collection by setting `brokerDeleteInactiveTopicsEnabled` to `false` in your [broker configuration](reference-configuration.md#broker). - -To delete a geo-replication topic, close all producers and consumers on the topic, and delete all of its local subscriptions in every replication cluster. When Pulsar determines that no valid subscription for the topic remains across the system, it will garbage collect the topic. - -## Replicated subscriptions - -Pulsar supports replicated subscriptions, so you can keep subscription state in sync, within a sub-second timeframe, in the context of a topic that is being asynchronously replicated across multiple geographical regions. - -In case of failover, a consumer can restart consuming from the failure point in a different cluster. - -### Enable replicated subscription - -Replicated subscription is disabled by default. You can enable replicated subscription when creating a consumer. - -```java - -Consumer consumer = client.newConsumer(Schema.STRING) - .topic("my-topic") - .subscriptionName("my-subscription") - .replicateSubscriptionState(true) - .subscribe(); - -``` - -### Advantages - - * It is easy to implement the logic. - * You can choose to enable or disable replicated subscription. - * When you enable it, the overhead is low, and it is easy to configure. - * When you disable it, the overhead is zero. - -### Limitations - -When you enable replicated subscription, you're creating a consistent distributed snapshot to establish an association between message ids from different clusters. The snapshots are taken periodically. The default value is `1 second`. It means that a consumer failing over to a different cluster can potentially receive 1 second of duplicates. 
You can also configure the frequency of the snapshot in the `broker.conf` file. diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/administration-isolation.md b/site2/website/versioned_docs/version-2.9.0-deprecated/administration-isolation.md deleted file mode 100644 index d2de042a2e7415..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/administration-isolation.md +++ /dev/null @@ -1,115 +0,0 @@ ---- -id: administration-isolation -title: Pulsar isolation -sidebar_label: "Pulsar isolation" -original_id: administration-isolation ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -In an organization, a Pulsar instance provides services to multiple teams. When organizing the resources across multiple teams, you want to make a suitable isolation plan to avoid the resource competition between different teams and applications and provide high-quality messaging service. In this case, you need to take resource isolation into consideration and weigh your intended actions against expected and unexpected consequences. - -To enforce resource isolation, you can use the Pulsar isolation policy, which allows you to allocate resources (**broker** and **bookie**) for the namespace. - -## Broker isolation - -In Pulsar, when namespaces (more specifically, namespace bundles) are assigned dynamically to brokers, the namespace isolation policy limits the set of brokers that can be used for assignment. Before topics are assigned to brokers, you can set the namespace isolation policy with a primary or a secondary regex to select desired brokers. - -You can set a namespace isolation policy for a cluster using one of the following methods. - -````mdx-code-block - - - - -``` - -pulsar-admin ns-isolation-policy set options - -``` - -For more information about the command `pulsar-admin ns-isolation-policy set options`, see [here](https://pulsar.apache.org/tools/pulsar-admin/). - -**Example** - -```shell - -bin/pulsar-admin ns-isolation-policy set \ ---auto-failover-policy-type min_available \ ---auto-failover-policy-params min_limit=1,usage_threshold=80 \ ---namespaces my-tenant/my-namespace \ ---primary 10.193.216.* my-cluster policy-name - -``` - - - - -[PUT /admin/v2/namespaces/{tenant}/{namespace}](https://pulsar.apache.org/admin-rest-api/?version=master&apiversion=v2#operation/createNamespace) - - - - -For how to set namespace isolation policy using Java admin API, see [here](https://github.com/apache/pulsar/blob/master/pulsar-client-admin/src/main/java/org/apache/pulsar/client/admin/internal/NamespacesImpl.java#L251). - - - - -```` - -## Bookie isolation - -A namespace can be isolated into user-defined groups of bookies, which guarantees all the data that belongs to the namespace is stored in desired bookies. The bookie affinity group uses the BookKeeper [rack-aware placement policy](https://bookkeeper.apache.org/docs/latest/api/javadoc/org/apache/bookkeeper/client/EnsemblePlacementPolicy.html) and it is a way to feed rack information which is stored as JSON format in znode. - -You can set a bookie affinity group using one of the following methods. - -````mdx-code-block - - - - -``` - -pulsar-admin namespaces set-bookie-affinity-group options - -``` - -For more information about the command `pulsar-admin namespaces set-bookie-affinity-group options`, see [here](https://pulsar.apache.org/tools/pulsar-admin/). 
- -**Example** - -```shell - -bin/pulsar-admin bookies set-bookie-rack \ ---bookie 127.0.0.1:3181 \ ---hostname 127.0.0.1:3181 \ ---group group-bookie1 \ ---rack rack1 - -bin/pulsar-admin namespaces set-bookie-affinity-group public/default \ ---primary-group group-bookie1 - -``` - - - - -[POST /admin/v2/namespaces/{tenant}/{namespace}/persistence/bookieAffinity](https://pulsar.apache.org/admin-rest-api/?version=master&apiversion=v2#operation/setBookieAffinityGroup) - - - - -For how to set bookie affinity group for a namespace using Java admin API, see [here](https://github.com/apache/pulsar/blob/master/pulsar-client-admin/src/main/java/org/apache/pulsar/client/admin/internal/NamespacesImpl.java#L1164). - - - - -```` diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/administration-load-balance.md b/site2/website/versioned_docs/version-2.9.0-deprecated/administration-load-balance.md deleted file mode 100644 index 788c84a59317b0..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/administration-load-balance.md +++ /dev/null @@ -1,250 +0,0 @@ ---- -id: administration-load-balance -title: Pulsar load balance -sidebar_label: "Load balance" -original_id: administration-load-balance ---- - -## Load balance across Pulsar brokers - -Pulsar is an horizontally scalable messaging system, so the traffic in a logical cluster must be balanced across all the available Pulsar brokers as evenly as possible, which is a core requirement. - -You can use multiple settings and tools to control the traffic distribution which require a bit of context to understand how the traffic is managed in Pulsar. Though, in most cases, the core requirement mentioned above is true out of the box and you should not worry about it. - -## Pulsar load manager architecture - -The following part introduces the basic architecture of the Pulsar load manager. - -### Assign topics to brokers dynamically - -Topics are dynamically assigned to brokers based on the load conditions of all brokers in the cluster. - -When a client starts using new topics that are not assigned to any broker, a process is triggered to choose the best suited broker to acquire ownership of these topics according to the load conditions. - -In case of partitioned topics, different partitions are assigned to different brokers. Here "topic" means either a non-partitioned topic or one partition of a topic. - -The assignment is "dynamic" because the assignment changes quickly. For example, if the broker owning the topic crashes, the topic is reassigned immediately to another broker. Another scenario is that the broker owning the topic becomes overloaded. In this case, the topic is reassigned to a less loaded broker. - -The stateless nature of brokers makes the dynamic assignment possible, so you can quickly expand or shrink the cluster based on usage. - -#### Assignment granularity - -The assignment of topics or partitions to brokers is not done at the topics or partitions level, but done at the Bundle level (a higher level). The reason is to amortize the amount of information that you need to keep track. Based on CPU, memory, traffic load and other indexes, topics are assigned to a particular broker dynamically. - -Instead of individual topic or partition assignment, each broker takes ownership of a subset of the topics for a namespace. This subset is called a "*bundle*" and effectively this subset is a sharding mechanism. 
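As a rough illustration of that sharding, the following sketch hashes a topic name into one of the namespace's bundle ranges. It is illustrative only: Pulsar's own implementation uses a Murmur3 32-bit hash over the bundle boundaries described next, and `String.hashCode()` here is merely a stand-in.

```java

// Illustrative only: map a topic name into one of N equal-width bundles
// covering the 32-bit hash range. Pulsar uses a Murmur3 32-bit hash;
// String.hashCode() stands in for it here.
long hashRange = 1L << 32;           // full namespace hash range
int numBundles = 4;                  // e.g. defaultNumberOfNamespaceBundles
String topic = "persistent://my-tenant/my-namespace/my-topic";

long hash = Integer.toUnsignedLong(topic.hashCode());
long bundleWidth = hashRange / numBundles;
int bundleIndex = (int) (hash / bundleWidth);
System.out.printf("Topic maps to bundle %d of %d%n", bundleIndex, numBundles);

```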
- -The namespace is the "administrative" unit: many config knobs or operations are done at the namespace level. - -For assignment, a namespaces is sharded into a list of "bundles", with each bundle comprising a portion of overall hash range of the namespace. - -Topics are assigned to a particular bundle by taking the hash of the topic name and checking in which bundle the hash falls into. - -Each bundle is independent of the others and thus is independently assigned to different brokers. - -### Create namespaces and bundles - -When you create a new namespace, the new namespace sets to use the default number of bundles. You can set this in `conf/broker.conf`: - -```properties - -# When a namespace is created without specifying the number of bundle, this -# value will be used as the default -defaultNumberOfNamespaceBundles=4 - -``` - -You can either change the system default, or override it when you create a new namespace: - -```shell - -$ bin/pulsar-admin namespaces create my-tenant/my-namespace --clusters us-west --bundles 16 - -``` - -With this command, you create a namespace with 16 initial bundles. Therefore the topics for this namespaces can immediately be spread across up to 16 brokers. - -In general, if you know the expected traffic and number of topics in advance, you had better start with a reasonable number of bundles instead of waiting for the system to auto-correct the distribution. - -On the same note, it is beneficial to start with more bundles than the number of brokers, because of the hashing nature of the distribution of topics into bundles. For example, for a namespace with 1000 topics, using something like 64 bundles achieves a good distribution of traffic across 16 brokers. - -### Unload topics and bundles - -You can "unload" a topic in Pulsar with admin operation. Unloading means to close the topics, release ownership and reassign the topics to a new broker, based on current load. - -When unloading happens, the client experiences a small latency blip, typically in the order of tens of milliseconds, while the topic is reassigned. - -Unloading is the mechanism that the load-manager uses to perform the load shedding, but you can also trigger the unloading manually, for example to correct the assignments and redistribute traffic even before having any broker overloaded. - -Unloading a topic has no effect on the assignment, but just closes and reopens the particular topic: - -```shell - -pulsar-admin topics unload persistent://tenant/namespace/topic - -``` - -To unload all topics for a namespace and trigger reassignments: - -```shell - -pulsar-admin namespaces unload tenant/namespace - -``` - -### Split namespace bundles - -Since the load for the topics in a bundle might change over time and predicting the load might be hard, bundle split is designed to deal with these issues. The broker splits a bundle into two and the new smaller bundles can be reassigned to different brokers. - -The splitting is based on some tunable thresholds. Any existing bundle that exceeds any of the threshold is a candidate to be split. By default the newly split bundles are also immediately offloaded to other brokers, to facilitate the traffic distribution. - -You can split namespace bundles in two ways, by setting `supportedNamespaceBundleSplitAlgorithms` to `range_equally_divide` or `topic_count_equally_divide` in `broker.conf` file. The former splits the bundle into two parts with the same hash range size; the latter splits the bundle into two parts with the same number of topics. 
You can also configure other parameters for namespace bundles. - -```properties - -# enable/disable namespace bundle auto split -loadBalancerAutoBundleSplitEnabled=true - -# enable/disable automatic unloading of split bundles -loadBalancerAutoUnloadSplitBundlesEnabled=true - -# maximum topics in a bundle, otherwise bundle split will be triggered -loadBalancerNamespaceBundleMaxTopics=1000 - -# maximum sessions (producers + consumers) in a bundle, otherwise bundle split will be triggered -loadBalancerNamespaceBundleMaxSessions=1000 - -# maximum msgRate (in + out) in a bundle, otherwise bundle split will be triggered -loadBalancerNamespaceBundleMaxMsgRate=30000 - -# maximum bandwidth (in + out) in a bundle, otherwise bundle split will be triggered -loadBalancerNamespaceBundleMaxBandwidthMbytes=100 - -# maximum number of bundles in a namespace (for auto-split) -loadBalancerNamespaceMaximumBundles=128 - -``` - -### Shed load automatically - -The support for automatic load shedding is available in the load manager of Pulsar. This means that whenever the system recognizes a particular broker is overloaded, the system forces some traffic to be reassigned to less loaded brokers. - -When a broker is identified as overloaded, the broker forces to "unload" a subset of the bundles, the ones with higher traffic, that make up for the overload percentage. - -For example, the default threshold is 85% and if a broker is over quota at 95% CPU usage, then the broker unloads the percent difference plus a 5% margin: `(95% - 85%) + 5% = 15%`. - -Given the selection of bundles to offload is based on traffic (as a proxy measure for cpu, network and memory), broker unloads bundles for at least 15% of traffic. - -The automatic load shedding is enabled by default and you can disable the automatic load shedding with this setting: - -```properties - -# Enable/disable automatic bundle unloading for load-shedding -loadBalancerSheddingEnabled=true - -``` - -Additional settings that apply to shedding: - -```properties - -# Load shedding interval. Broker periodically checks whether some traffic should be offload from -# some over-loaded broker to other under-loaded brokers -loadBalancerSheddingIntervalMinutes=1 - -# Prevent the same topics to be shed and moved to other brokers more that once within this timeframe -loadBalancerSheddingGracePeriodMinutes=30 - -``` - -#### Broker overload thresholds - -The determinations of when a broker is overloaded is based on threshold of CPU, network and memory usage. Whenever either of those metrics reaches the threshold, the system triggers the shedding (if enabled). - -By default, overload threshold is set at 85%: - -```properties - -# Usage threshold to determine a broker as over-loaded -loadBalancerBrokerOverloadedThresholdPercentage=85 - -``` - -Pulsar gathers the usage stats from the system metrics. - -In case of network utilization, in some cases the network interface speed that Linux reports is not correct and needs to be manually overridden. This is the case in AWS EC2 instances with 1Gbps NIC speed for which the OS reports 10Gbps speed. - -Because of the incorrect max speed, the Pulsar load manager might think the broker has not reached the NIC capacity, while in fact the broker already uses all the bandwidth and the traffic is slowed down. - -You can use the following setting to correct the max NIC speed: - -```properties - -# Override the auto-detection of the network interfaces max speed. 
-# This option is useful in some environments (eg: EC2 VMs) where the max speed -# reported by Linux is not reflecting the real bandwidth available to the broker. -# Since the network usage is employed by the load manager to decide when a broker -# is overloaded, it is important to make sure the info is correct or override it -# with the right value here. The configured value can be a double (eg: 0.8) and that -# can be used to trigger load-shedding even before hitting on NIC limits. -loadBalancerOverrideBrokerNicSpeedGbps= - -``` - -When the value is empty, Pulsar uses the value that the OS reports. - -### Distribute anti-affinity namespaces across failure domains - -When your application has multiple namespaces and you want one of them available all the time to avoid any downtime, you can group these namespaces and distribute them across different [failure domains](reference-terminology.md#failure-domain) and different brokers. Thus, if one of the failure domains is down (due to release rollout or brokers restart), it only disrupts namespaces owned by that specific failure domain and the rest of the namespaces owned by other domains remain available without any impact. - -Such a group of namespaces has anti-affinity to each other, that is, all the namespaces in this group are [anti-affinity namespaces](reference-terminology.md#anti-affinity-namespaces) and are distributed to different failure domains in a load-balanced manner. - -As illustrated in the following figure, Pulsar has 2 failure domains (Domain1 and Domain2) and each domain has 2 brokers in it. You can create an anti-affinity namespace group that has 4 namespaces in it, and all the 4 namespaces have anti-affinity to each other. The load manager tries to distribute namespaces evenly across all the brokers in the same domain. Since each domain has 2 brokers, every broker owns one namespace from this anti-affinity namespace group, and you can see each domain owns 2 namespaces, and each broker owns 1 namespace. - -![Distribute anti-affinity namespaces across failure domains](/assets/anti-affinity-namespaces-across-failure-domains.svg) - -The load manager follows an even distribution policy across failure domains to assign anti-affinity namespaces. The following table outlines the even-distributed assignment sequence illustrated in the above figure. - -| Assignment sequence | Namespace | Failure domain candidates | Broker candidates | Selected broker | -|:---|:------------|:------------------|:------------------------------------|:-----------------| -| 1 | Namespace1 | Domain1, Domain2 | Broker1, Broker2, Broker3, Broker4 | Domain1:Broker1 | -| 2 | Namespace2 | Domain2 | Broker3, Broker4 | Domain2:Broker3 | -| 3 | Namespace3 | Domain1, Domain2 | Broker2, Broker4 | Domain1:Broker2 | -| 4 | Namespace4 | Domain2 | Broker4 | Domain2:Broker4 | - -:::tip - -* Each namespace belongs to only one anti-affinity group. If a namespace with an existing anti-affinity assignment is assigned to another anti-affinity group, the original assignment is dropped. - -* If there are more anti-affinity namespaces than failure domains, the load manager distributes namespaces evenly across all the domains, and also every domain distributes namespaces evenly across all the brokers under that domain. - -::: - -#### Create a failure domain and register brokers - -:::note - -One broker can only be registered to a single failure domain. 
-
-:::
-
-To create a domain under a specific cluster and register brokers, run the following command:
-
-```bash
-
-pulsar-admin clusters create-failure-domain <cluster-name> --domain-name <domain-name> --broker-list <broker-list>
-
-```
-
-You can also view, update, and delete domains under a specific cluster. For more information, refer to [Pulsar admin doc](/tools/pulsar-admin/).
-
-#### Create an anti-affinity namespace group
-
-An anti-affinity group is created automatically when the first namespace is assigned to the group. To assign a namespace to an anti-affinity group, run the following command. It sets an anti-affinity group name for a namespace.
-
-```bash
-
-pulsar-admin namespaces set-anti-affinity-group <namespace> --group <group-name>
-
-```
-
-For more information about `anti-affinity-group` related commands, refer to [Pulsar admin doc](/tools/pulsar-admin/).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/administration-proxy.md b/site2/website/versioned_docs/version-2.9.0-deprecated/administration-proxy.md
deleted file mode 100644
index 1657e4f88ce825..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/administration-proxy.md
+++ /dev/null
@@ -1,86 +0,0 @@
----
-id: administration-proxy
-title: Pulsar proxy
-sidebar_label: "Pulsar proxy"
-original_id: administration-proxy
----
-
-Pulsar proxy is an optional gateway that is used when direct connections between clients and Pulsar brokers are either infeasible or undesirable. For example, when you run Pulsar in a cloud environment or on [Kubernetes](https://kubernetes.io) or an analogous platform, you can run Pulsar proxy.
-
-## Configure the proxy
-
-Before using the proxy, you need to configure it with the broker addresses in the cluster. You can configure the proxy to connect directly to service discovery, or specify a broker URL in the configuration.
-
-### Use service discovery
-
-Pulsar uses [ZooKeeper](https://zookeeper.apache.org) for service discovery. To connect the proxy to ZooKeeper, specify the following in `conf/proxy.conf`.
-
-```properties
-
-zookeeperServers=zk-0,zk-1,zk-2
-configurationStoreServers=zk-0:2184,zk-remote:2184
-
-```
-
-> To use service discovery, you need to open the network ACLs so the proxy can connect to the ZooKeeper nodes through the ZooKeeper client port (port `2181`) and the configuration store client port (port `2184`).
-
-> However, using service discovery is not secure: if the network ACL is open and someone compromises a proxy, they gain full access to ZooKeeper.
-
-### Use broker URLs
-
-It is more secure to specify a URL to connect to the brokers.
-
-Proxy authorization requires access to ZooKeeper, so if you use these broker URLs to connect to the brokers, you need to disable authorization at the proxy level. Brokers still authorize requests after the proxy forwards them.
-
-You can configure the broker URLs in `conf/proxy.conf` as follows.
- -```properties - -brokerServiceURL=pulsar://brokers.example.com:6650 -brokerWebServiceURL=http://brokers.example.com:8080 -functionWorkerWebServiceURL=http://function-workers.example.com:8080 - -``` - -If you use TLS, configure the broker URLs in the following way: - -```properties - -brokerServiceURLTLS=pulsar+ssl://brokers.example.com:6651 -brokerWebServiceURLTLS=https://brokers.example.com:8443 -functionWorkerWebServiceURL=https://function-workers.example.com:8443 - -``` - -The hostname in the URLs provided should be a DNS entry which points to multiple brokers or a virtual IP address, which is backed by multiple broker IP addresses, so that the proxy does not lose connectivity to Pulsar cluster if a single broker becomes unavailable. - -The ports to connect to the brokers (6650 and 8080, or in the case of TLS, 6651 and 8443) should be open in the network ACLs. - -Note that if you do not use functions, you do not need to configure `functionWorkerWebServiceURL`. - -## Start the proxy - -To start the proxy: - -```bash - -$ cd /path/to/pulsar/directory -$ bin/pulsar proxy - -``` - -> You can run multiple instances of the Pulsar proxy in a cluster. - -## Stop the proxy - -Pulsar proxy runs in the foreground by default. To stop the proxy, simply stop the process in which the proxy is running. - -## Proxy frontends - -You can run Pulsar proxy behind some kind of load-distributing frontend, such as an [HAProxy](https://www.digitalocean.com/community/tutorials/an-introduction-to-haproxy-and-load-balancing-concepts) load balancer. - -## Use Pulsar clients with the proxy - -Once your Pulsar proxy is up and running, preferably behind a load-distributing [frontend](#proxy-frontends), clients can connect to the proxy via whichever address that the frontend uses. If the address is the DNS address `pulsar.cluster.default`, for example, the connection URL for clients is `pulsar://pulsar.cluster.default:6650`. - -For more information on Proxy configuration, refer to [Pulsar proxy](reference-configuration.md#pulsar-proxy). diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/administration-pulsar-manager.md b/site2/website/versioned_docs/version-2.9.0-deprecated/administration-pulsar-manager.md deleted file mode 100644 index d877cce723e6ab..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/administration-pulsar-manager.md +++ /dev/null @@ -1,205 +0,0 @@ ---- -id: administration-pulsar-manager -title: Pulsar Manager -sidebar_label: "Pulsar Manager" -original_id: administration-pulsar-manager ---- - -Pulsar Manager is a web-based GUI management and monitoring tool that helps administrators and users manage and monitor tenants, namespaces, topics, subscriptions, brokers, clusters, and so on, and supports dynamic configuration of multiple environments. - -:::note - -If you are monitoring your current stats with Pulsar dashboard, we recommend you use Pulsar Manager instead. Pulsar dashboard is deprecated. - -::: - -## Install - -The easiest way to use the Pulsar Manager is to run it inside a [Docker](https://www.docker.com/products/docker) container. - -```shell - -docker pull apachepulsar/pulsar-manager:v0.2.0 -docker run -it \ - -p 9527:9527 -p 7750:7750 \ - -e SPRING_CONFIGURATION_FILE=/pulsar-manager/pulsar-manager/application.properties \ - apachepulsar/pulsar-manager:v0.2.0 - -``` - -* `SPRING_CONFIGURATION_FILE`: Default configuration file for spring. 
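-
-If you need to customize the backend settings, one option (a sketch rather than an official recipe) is to bind-mount your own copy of the configuration file over the path that `SPRING_CONFIGURATION_FILE` points to. The local file name below is an assumption:
-
-```shell
-
-# Assumes ./application.properties is a locally customized copy of the
-# Pulsar Manager configuration file.
-docker run -it \
-    -p 9527:9527 -p 7750:7750 \
-    -v $PWD/application.properties:/pulsar-manager/pulsar-manager/application.properties \
-    -e SPRING_CONFIGURATION_FILE=/pulsar-manager/pulsar-manager/application.properties \
-    apachepulsar/pulsar-manager:v0.2.0
-
-```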
-
-### Set administrator account and password
-
- ```shell
-
- CSRF_TOKEN=$(curl http://localhost:7750/pulsar-manager/csrf-token)
- curl \
-   -H "X-XSRF-TOKEN: $CSRF_TOKEN" \
-   -H "Cookie: XSRF-TOKEN=$CSRF_TOKEN;" \
-   -H "Content-Type: application/json" \
-   -X PUT http://localhost:7750/pulsar-manager/users/superuser \
-   -d '{"name": "admin", "password": "apachepulsar", "description": "test", "email": "username@test.org"}'
-
- ```
-
-Note that the `X-XSRF-TOKEN` and `Cookie` headers use double quotes so that the shell expands `$CSRF_TOKEN`; with single quotes, the literal string `$CSRF_TOKEN` would be sent instead.
-
-You can find the Dockerfile in the [docker](https://github.com/apache/pulsar-manager/tree/master/docker) directory and build an image from the source code as well:
-
-```
-
-git clone https://github.com/apache/pulsar-manager
-cd pulsar-manager/front-end
-npm install --save
-npm run build:prod
-cd ..
-./gradlew build -x test
-cd ..
-docker build -f docker/Dockerfile --build-arg BUILD_DATE=`date -u +"%Y-%m-%dT%H:%M:%SZ"` --build-arg VCS_REF=`latest` --build-arg VERSION=`latest` -t apachepulsar/pulsar-manager .
-
-```
-
-### Use custom databases
-
-If you have a large amount of data, you can use a custom database. The following is an example of PostgreSQL.
-
-1. Initialize the database and table structures using the [SQL schema file](https://github.com/apache/pulsar-manager/tree/master/src/main/resources/META-INF/sql/postgresql-schema.sql).
-
-2. Modify the [configuration file](https://github.com/apache/pulsar-manager/blob/master/src/main/resources/application.properties) and add the PostgreSQL configuration.
-
-```
-
-spring.datasource.driver-class-name=org.postgresql.Driver
-spring.datasource.url=jdbc:postgresql://127.0.0.1:5432/pulsar_manager
-spring.datasource.username=postgres
-spring.datasource.password=postgres
-
-```
-
-3. Compile to generate a new executable jar package.
-
-```
-
-./gradlew build -x test
-
-```
-
-### Enable JWT authentication
-
-If you want to turn on JWT authentication, configure the following parameters:
-
-* `backend.jwt.token`: token for the superuser. You need to configure this parameter during cluster initialization.
-* `jwt.broker.token.mode`: the mode for generating tokens, one of PUBLIC, PRIVATE, and SECRET.
-* `jwt.broker.public.key`: configure this option if you use the PUBLIC mode.
-* `jwt.broker.private.key`: configure this option if you use the PRIVATE mode.
-* `jwt.broker.secret.key`: configure this option if you use the SECRET mode.
-
-For more information, see [Token Authentication Admin of Pulsar](http://pulsar.apache.org/docs/en/security-token-admin/).
-
-
-If you want to enable JWT authentication, use one of the following methods.
-
-
-* Method 1: use the command-line tool
-
-```
-
-wget https://dist.apache.org/repos/dist/release/pulsar/pulsar-manager/pulsar-manager-0.2.0/apache-pulsar-manager-0.2.0-bin.tar.gz
-tar -zxvf apache-pulsar-manager-0.2.0-bin.tar.gz
-cd pulsar-manager
-tar -zxvf pulsar-manager.tar
-cd pulsar-manager
-cp -r ../dist ui
-./bin/pulsar-manager --redirect.host=http://localhost --redirect.port=9527 insert.stats.interval=600000 --backend.jwt.token=token --jwt.broker.token.mode=PRIVATE --jwt.broker.private.key=file:///path/broker-private.key --jwt.broker.public.key=file:///path/broker-public.key
-
-```
-
-Firstly, [set the administrator account and password](#set-administrator-account-and-password).
-
-Secondly, log in to Pulsar manager through http://localhost:7750/ui/index.html.
-
-* Method 2: configure the application.properties file
-
-```
-
-backend.jwt.token=token
-
-jwt.broker.token.mode=PRIVATE
-jwt.broker.public.key=file:///path/broker-public.key
-jwt.broker.private.key=file:///path/broker-private.key
-
-# or
-jwt.broker.token.mode=SECRET
-jwt.broker.secret.key=file:///path/broker-secret.key
-
-```
-
-* Method 3: use Docker and enable token authentication.
-
-```
-
-export JWT_TOKEN="your-token"
-docker run -it -p 9527:9527 -p 7750:7750 -e REDIRECT_HOST=http://localhost -e REDIRECT_PORT=9527 -e DRIVER_CLASS_NAME=org.postgresql.Driver -e URL='jdbc:postgresql://127.0.0.1:5432/pulsar_manager' -e USERNAME=pulsar -e PASSWORD=pulsar -e LOG_LEVEL=DEBUG -e JWT_TOKEN=$JWT_TOKEN -v $PWD:/data apachepulsar/pulsar-manager:v0.2.0 /bin/sh
-
-```
-
-* `JWT_TOKEN`: the token of the superuser configured for the broker. It is generated by the `bin/pulsar tokens create --secret-key` or `bin/pulsar tokens create --private-key` command.
-* `REDIRECT_HOST`: the IP address of the front-end server.
-* `REDIRECT_PORT`: the port of the front-end server.
-* `DRIVER_CLASS_NAME`: the driver class name of the PostgreSQL database.
-* `URL`: the JDBC URL of your PostgreSQL database, such as jdbc:postgresql://127.0.0.1:5432/pulsar_manager. The docker image automatically starts a local instance of the PostgreSQL database.
-* `USERNAME`: the username of PostgreSQL.
-* `PASSWORD`: the password of PostgreSQL.
-* `LOG_LEVEL`: the log level.
-
-* Method 4: use Docker and turn on **token authentication** and **token management** by private key and public key.
-
-```
-
-export JWT_TOKEN="your-token"
-export PRIVATE_KEY="file:///pulsar-manager/secret/my-private.key"
-export PUBLIC_KEY="file:///pulsar-manager/secret/my-public.key"
-docker run -it -p 9527:9527 -p 7750:7750 -e REDIRECT_HOST=http://localhost -e REDIRECT_PORT=9527 -e DRIVER_CLASS_NAME=org.postgresql.Driver -e URL='jdbc:postgresql://127.0.0.1:5432/pulsar_manager' -e USERNAME=pulsar -e PASSWORD=pulsar -e LOG_LEVEL=DEBUG -e JWT_TOKEN=$JWT_TOKEN -e PRIVATE_KEY=$PRIVATE_KEY -e PUBLIC_KEY=$PUBLIC_KEY -v $PWD:/data -v $PWD/secret:/pulsar-manager/secret apachepulsar/pulsar-manager:v0.2.0 /bin/sh
-
-```
-
-* `JWT_TOKEN`: the token of the superuser configured for the broker. It is generated by the `bin/pulsar tokens create --private-key` command.
-* `PRIVATE_KEY`: the private key path mounted in the container, generated by the `bin/pulsar tokens create-key-pair` command.
-* `PUBLIC_KEY`: the public key path mounted in the container, generated by the `bin/pulsar tokens create-key-pair` command.
-* `$PWD/secret`: the local folder where the private key and public key generated by the `bin/pulsar tokens create-key-pair` command are placed.
-* `REDIRECT_HOST`: the IP address of the front-end server.
-* `REDIRECT_PORT`: the port of the front-end server.
-* `DRIVER_CLASS_NAME`: the driver class name of the PostgreSQL database.
-* `URL`: the JDBC URL of your PostgreSQL database, such as jdbc:postgresql://127.0.0.1:5432/pulsar_manager. The docker image automatically starts a local instance of the PostgreSQL database.
-* `USERNAME`: the username of PostgreSQL.
-* `PASSWORD`: the password of PostgreSQL.
-* `LOG_LEVEL`: the log level.
-
-* Method 5: use Docker and turn on **token authentication** and **token management** by secret key.
-
-```
-
-export JWT_TOKEN="your-token"
-export SECRET_KEY="file:///pulsar-manager/secret/my-secret.key"
-docker run -it -p 9527:9527 -p 7750:7750 -e REDIRECT_HOST=http://localhost -e REDIRECT_PORT=9527 -e DRIVER_CLASS_NAME=org.postgresql.Driver -e URL='jdbc:postgresql://127.0.0.1:5432/pulsar_manager' -e USERNAME=pulsar -e PASSWORD=pulsar -e LOG_LEVEL=DEBUG -e JWT_TOKEN=$JWT_TOKEN -e SECRET_KEY=$SECRET_KEY -v $PWD:/data -v $PWD/secret:/pulsar-manager/secret apachepulsar/pulsar-manager:v0.2.0 /bin/sh
-
-```
-
-* `JWT_TOKEN`: the token of the superuser configured for the broker. It is generated by the `bin/pulsar tokens create --secret-key` command.
-* `SECRET_KEY`: the secret key path mounted in the container, generated by the `bin/pulsar tokens create-secret-key` command.
-* `$PWD/secret`: the local folder where the secret key generated by the `bin/pulsar tokens create-secret-key` command is placed.
-* `REDIRECT_HOST`: the IP address of the front-end server.
-* `REDIRECT_PORT`: the port of the front-end server.
-* `DRIVER_CLASS_NAME`: the driver class name of the PostgreSQL database.
-* `URL`: the JDBC URL of your PostgreSQL database, such as jdbc:postgresql://127.0.0.1:5432/pulsar_manager. The docker image automatically starts a local instance of the PostgreSQL database.
-* `USERNAME`: the username of PostgreSQL.
-* `PASSWORD`: the password of PostgreSQL.
-* `LOG_LEVEL`: the log level.
-
-* For more information about backend configurations, see [here](https://github.com/apache/pulsar-manager/blob/master/src/README).
-* For more information about frontend configurations, see [here](https://github.com/apache/pulsar-manager/tree/master/front-end).
-
-## Log in
-
-[Set the administrator account and password](#set-administrator-account-and-password).
-
-Visit http://localhost:9527 to log in.
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/administration-stats.md b/site2/website/versioned_docs/version-2.9.0-deprecated/administration-stats.md
deleted file mode 100644
index ac0c03602f36d5..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/administration-stats.md
+++ /dev/null
@@ -1,64 +0,0 @@
----
-id: administration-stats
-title: Pulsar stats
-sidebar_label: "Pulsar statistics"
-original_id: administration-stats
----
-
-## Partitioned topics
-
-|Stat|Description|
-|---|---|
-|msgRateIn| The sum of publish rates of all local and replication publishers in messages per second.|
-|msgThroughputIn| Same as msgRateIn but in bytes per second instead of messages per second.|
-|msgRateOut| The sum of dispatch rates of all local and replication consumers in messages per second.|
-|msgThroughputOut| Same as msgRateOut but in bytes per second instead of messages per second.|
-|averageMsgSize| Average message size, in bytes, from this publisher within the last interval.|
-|storageSize| The sum of the storage size of the ledgers for this topic.|
-|publishers| The list of all local publishers into the topic, which can range from zero to thousands.|
-|producerId| Internal identifier for this producer on this topic.|
-|producerName| Internal identifier for this producer, generated by the client library.|
-|address| IP address and source port for the connection of this producer.|
-|connectedSince| The timestamp when this producer was created or last reconnected.|
-|subscriptions| The list of all local subscriptions to the topic.|
-|my-subscription| The name of this subscription (client defined).|
-|msgBacklog| The count of messages in backlog for this subscription.|
-|type| The type of this subscription.|
-|msgRateExpired| The rate at which messages are discarded instead of dispatched from this subscription due to TTL.|
-|consumers| The list of connected consumers for this subscription.|
-|consumerName| Internal identifier for this consumer, generated by the client library.|
-|availablePermits| The number of messages this consumer has space for in the client library's listen queue. A value of 0 means the client library's queue is full and receive() is not being called. A nonzero value means this consumer is ready to be dispatched messages.|
-|replication| This section gives the stats for cross-colo replication of this topic.|
-|replicationBacklog| The outbound replication backlog in messages.|
-|connected| Whether the outbound replicator is connected.|
-|replicationDelayInSeconds| How long the oldest message has been waiting to be sent through the connection, if connected is true.|
-|inboundConnection| The IP and port of the remote cluster's broker in the publisher connection to this broker.|
-|inboundConnectedSince| The TCP connection being used to publish messages to the remote cluster. If no local publishers are connected, this connection is automatically closed after a minute.|
-
-
-## Topics
-
-|Stat|Description|
-|---|---|
-|entriesAddedCounter| Messages published since this broker loaded this topic.|
-|numberOfEntries| Total number of messages being tracked.|
-|totalSize| Total storage size in bytes of all messages.|
-|currentLedgerEntries| Count of messages written to the ledger currently open for writing.|
-|currentLedgerSize| Size in bytes of messages written to the ledger currently open for writing.|
-|lastLedgerCreatedTimestamp| Time when the last ledger was created.|
-|lastLedgerCreationFailureTimestamp| Time when the last ledger creation failed.|
-|waitingCursorsCount| How many cursors are caught up and waiting for a new message to be published.|
-|pendingAddEntriesCount| How many (asynchronous) write requests are pending completion.|
-|lastConfirmedEntry| The ledgerid:entryid of the last message successfully written. If the entryid is -1, the ledger has been opened or is currently being opened but has no entries written yet.|
-|state| The state of the cursor ledger. Open means you have a cursor ledger for saving updates of the markDeletePosition.|
-|ledgers| The ordered list of all ledgers for this topic holding its messages.|
-|cursors| The list of all cursors on this topic. Every subscription you saw in the topic stats has one.|
-|markDeletePosition| The ack position: the last message the subscriber acknowledges receiving.|
-|readPosition| The latest position of the subscriber for reading messages.|
-|waitingReadOp| This is true when the subscription has read the latest message published to the topic and is waiting for new messages to be published.|
-|pendingReadOps| The counter of outstanding read requests to the BookKeepers currently in progress.|
-|messagesConsumedCounter| Number of messages this cursor has acked since this broker loaded this topic.|
-|cursorLedger| The ledger used to persistently store the current markDeletePosition.|
-|cursorLedgerLastEntry| The last entryid used to persistently store the current markDeletePosition.|
-|individuallyDeletedMessages| If acks are done out of order, shows the ranges of messages acked between the markDeletePosition and the read position.|
-|lastLedgerSwitchTimestamp| The last time the cursor ledger was rolled over.|
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/administration-upgrade.md b/site2/website/versioned_docs/version-2.9.0-deprecated/administration-upgrade.md
deleted file mode 100644
index 72d136b6460f62..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/administration-upgrade.md
+++ /dev/null
@@ -1,168 +0,0 @@
----
-id: administration-upgrade
-title: Upgrade Guide
-sidebar_label: "Upgrade"
-original_id: administration-upgrade
----
-
-## Upgrade guidelines
-
-Apache Pulsar comprises multiple components: ZooKeeper, bookies, and brokers. These components are either stateful or stateless. You do not have to upgrade ZooKeeper nodes unless you have special requirements. While you upgrade, you need to pay attention to bookies (stateful), brokers, and proxies (stateless).
-
-The following are some guidelines on upgrading a Pulsar cluster. Read the guidelines before upgrading.
-
-- Back up all your configuration files before upgrading.
-- Read the guide entirely, make a plan, and then execute the plan. When you make an upgrade plan, you need to take your specific requirements and environment into consideration.
-- Pay attention to the upgrading order of components. In general, you do not need to upgrade your ZooKeeper or configuration store cluster (the global ZooKeeper cluster). You need to upgrade bookies first, and then upgrade brokers, proxies, and your clients.
-- If `autorecovery` is enabled, you need to disable `autorecovery` in the upgrade process, and re-enable it after completing the process.
-- Read the release notes carefully for each release. Release notes contain features and configuration changes that might impact your upgrade.
-- Upgrade a small subset of nodes of each type to canary test the new version before upgrading all nodes of that type in the cluster. When you have upgraded the canary nodes, let them run for a while to ensure that they work correctly.
-- Upgrade one data center to verify the new version before upgrading all data centers if your cluster runs in multi-cluster replicated mode.
-
-> Note: Currently, Apache Pulsar is compatible between versions.
-
-## Upgrade sequence
-
-To upgrade an Apache Pulsar cluster, follow the upgrade sequence.
-
-1. Upgrade ZooKeeper (optional)
-- Canary test: test an upgraded version in one or a small set of ZooKeeper nodes.
-- Rolling upgrade: roll out the upgraded version to all ZooKeeper servers incrementally, one at a time. Monitor your dashboard during the whole rolling upgrade process.
-2. Upgrade bookies
-- Canary test: test an upgraded version in one or a small set of bookies.
-- Rolling upgrade:
-  
-  a. Disable `autorecovery` with the following command.
-
-   ```shell
-   
-   bin/bookkeeper shell autorecovery -disable
-   
-   ```
-
-  
-  b. Roll out the upgraded version to all bookies in the cluster after you determine that a version is safe after canary.
-  
-  c. After you upgrade all bookies, re-enable `autorecovery` with the following command.
-
-   ```shell
-   
-   bin/bookkeeper shell autorecovery -enable
-   
-   ```
-
-3. Upgrade brokers
-- Canary test: test an upgraded version in one or a small set of brokers.
-- Rolling upgrade: roll out the upgraded version to all brokers in the cluster after you determine that a version is safe after canary.
-4. Upgrade proxies
-- Canary test: test an upgraded version in one or a small set of proxies.
-- Rolling upgrade: roll out the upgraded version to all proxies in the cluster after you determine that a version is safe after canary.
-
-## Upgrade ZooKeeper (optional)
-While you upgrade ZooKeeper servers, you can do a canary test first, and then upgrade all ZooKeeper servers in the cluster.
-
-### Canary test
-
-You can test an upgraded version in one of the ZooKeeper servers before upgrading all ZooKeeper servers in your cluster.
-
-To upgrade a ZooKeeper server to a new version, complete the following steps:
-
-1. Stop a ZooKeeper server.
-2. Upgrade the binary and configuration files.
-3. Start the ZooKeeper server with the new binary files.
-4. Use `pulsar zookeeper-shell` to connect to the newly upgraded ZooKeeper server and run a few commands to verify if it works as expected.
-5. Run the ZooKeeper server for a few days, observe and make sure the ZooKeeper cluster runs well.
-
-#### Canary rollback
-
-If issues occur during the canary test, you can shut down the problematic ZooKeeper node, revert the binary and configuration, and restart ZooKeeper with the reverted binary.
-
-### Upgrade all ZooKeeper servers
-
-After a canary test upgrading one ZooKeeper server in your cluster, you can upgrade all ZooKeeper servers in your cluster.
-
-You can upgrade all ZooKeeper servers one by one by following the steps in the canary test.
-
-## Upgrade bookies
-
-While you upgrade bookies, you can do a canary test first, and then upgrade all bookies in the cluster.
-For more details, you can read the Apache BookKeeper [Upgrade guide](http://bookkeeper.apache.org/docs/latest/admin/upgrade).
-
-### Canary test
-
-You can test an upgraded version in one or a small set of bookies before upgrading all bookies in your cluster.
-
-To upgrade a bookie to a new version, complete the following steps:
-
-1. Stop a bookie.
-2. Upgrade the binary and configuration files.
-3. Start the bookie in `ReadOnly` mode to verify if the bookie of this new version runs well for the read workload.
-
-   ```shell
-   
-   bin/pulsar bookie --readOnly
-   
-   ```
-
-4. When the bookie runs successfully in `ReadOnly` mode, stop the bookie and restart it in `Write/Read` mode.
-
-   ```shell
-   
-   bin/pulsar bookie
-   
-   ```
-
-5. Observe and make sure the cluster serves both write and read traffic.
-
-#### Canary rollback
-
-If issues occur during the canary test, you can shut down the problematic bookie node. Other bookies in the cluster replace this problematic bookie node through autorecovery.
-
-### Upgrade all bookies
-
-After a canary test upgrading some bookies in your cluster, you can upgrade all bookies in your cluster.
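-
-For the rolling scenario described below, the per-bookie steps can be scripted. The following is a minimal sketch only; the hostnames, SSH access, and the package-replacement step are assumptions for illustration:
-
-```shell
-
-# Roll through the bookies one at a time, verifying each before moving on.
-for bookie in bookie1.example.com bookie2.example.com bookie3.example.com; do
-  ssh "$bookie" "cd /path/to/pulsar && bin/pulsar-daemon stop bookie"
-  # ... replace the binaries and configuration files on $bookie here ...
-  ssh "$bookie" "cd /path/to/pulsar && bin/pulsar-daemon start bookie"
-  # Sanity-check the upgraded bookie before upgrading the next one.
-  ssh "$bookie" "cd /path/to/pulsar && bin/bookkeeper shell bookiesanity"
-done
-
-```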
-
-Before upgrading, you have to decide whether to upgrade the whole cluster at once (a downtime upgrade) or one node at a time (a rolling upgrade).
-
-In a rolling upgrade scenario, upgrade one bookie at a time. In a downtime upgrade scenario, shut down the entire cluster, upgrade each bookie, and then start the cluster.
-
-In both scenarios, the procedure is the same for each bookie.
-
-1. Stop the bookie.
-2. Upgrade the software (either new binary or new configuration files).
-3. Start the bookie.
-
-> **Advanced operations**
-> When you upgrade a large BookKeeper cluster in a rolling upgrade scenario, upgrading one bookie at a time is slow. If you configure a rack-aware or region-aware placement policy, you can upgrade bookies rack by rack or region by region, which speeds up the whole upgrade process.
-
-## Upgrade brokers and proxies
-
-The upgrade procedure for brokers and proxies is the same. Brokers and proxies are `stateless`, so upgrading the two services is easy.
-
-### Canary test
-
-You can test an upgraded version in one or a small set of nodes before upgrading all nodes in your cluster.
-
-To upgrade to a new version, complete the following steps:
-
-1. Stop a broker (or proxy).
-2. Upgrade the binary and configuration file.
-3. Start a broker (or proxy).
-
-#### Canary rollback
-
-If issues occur during the canary test, you can shut down the problematic broker (or proxy) node. Revert to the old version and restart the broker (or proxy).
-
-### Upgrade all brokers or proxies
-
-After a canary test upgrading some brokers or proxies in your cluster, you can upgrade all brokers or proxies in your cluster.
-
-Before upgrading, you have to decide whether to upgrade the whole cluster at once (a downtime upgrade) or in a rolling fashion.
-
-In a rolling upgrade scenario, you can upgrade one broker or one proxy at a time if the size of the cluster is small. If your cluster is large, you can upgrade brokers or proxies in batches. When you upgrade a batch of brokers or proxies, make sure the remaining brokers and proxies in the cluster have enough capacity to handle the traffic during the upgrade.
-
-In a downtime upgrade scenario, shut down the entire cluster, upgrade each broker or proxy, and then start the cluster.
-
-In both scenarios, the procedure is the same for each broker or proxy.
-
-1. Stop the broker or proxy.
-2. Upgrade the software (either new binary or new configuration files).
-3. Start the broker or proxy.
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/administration-zk-bk.md b/site2/website/versioned_docs/version-2.9.0-deprecated/administration-zk-bk.md
deleted file mode 100644
index 8f66fd23a678f4..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/administration-zk-bk.md
+++ /dev/null
@@ -1,386 +0,0 @@
----
-id: administration-zk-bk
-title: ZooKeeper and BookKeeper administration
-sidebar_label: "ZooKeeper and BookKeeper"
-original_id: administration-zk-bk
----
-
-Pulsar relies on two external systems for essential tasks:
-
-* [ZooKeeper](https://zookeeper.apache.org/) is responsible for a wide variety of configuration-related and coordination-related tasks.
-* [BookKeeper](http://bookkeeper.apache.org/) is responsible for [persistent storage](concepts-architecture-overview.md#persistent-storage) of message data.
-
-ZooKeeper and BookKeeper are both open-source [Apache](https://www.apache.org/) projects.
- -> Skip to the [How Pulsar uses ZooKeeper and BookKeeper](#how-pulsar-uses-zookeeper-and-bookkeeper) section below for a more schematic explanation of the role of these two systems in Pulsar. - - -## ZooKeeper - -Each Pulsar instance relies on two separate ZooKeeper quorums. - -* [Local ZooKeeper](#deploy-local-zookeeper) operates at the cluster level and provides cluster-specific configuration management and coordination. Each Pulsar cluster needs to have a dedicated ZooKeeper cluster. -* [Configuration Store](#deploy-configuration-store) operates at the instance level and provides configuration management for the entire system (and thus across clusters). An independent cluster of machines or the same machines that local ZooKeeper uses can provide the configuration store quorum. - -### Deploy local ZooKeeper - -ZooKeeper manages a variety of essential coordination-related and configuration-related tasks for Pulsar. - -To deploy a Pulsar instance, you need to stand up one local ZooKeeper cluster *per Pulsar cluster*. - -To begin, add all ZooKeeper servers to the quorum configuration specified in the [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file. Add a `server.N` line for each node in the cluster to the configuration, where `N` is the number of the ZooKeeper node. The following is an example for a three-node cluster: - -```properties - -server.1=zk1.us-west.example.com:2888:3888 -server.2=zk2.us-west.example.com:2888:3888 -server.3=zk3.us-west.example.com:2888:3888 - -``` - -On each host, you need to specify the node ID in `myid` file of each node, which is in `data/zookeeper` folder of each server by default (you can change the file location via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter). - -> See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed information on `myid` and more. - - -On a ZooKeeper server at `zk1.us-west.example.com`, for example, you can set the `myid` value like this: - -```shell - -$ mkdir -p data/zookeeper -$ echo 1 > data/zookeeper/myid - -``` - -On `zk2.us-west.example.com` the command is `echo 2 > data/zookeeper/myid` and so on. - -Once you add each server to the `zookeeper.conf` configuration and each server has the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool: - -```shell - -$ bin/pulsar-daemon start zookeeper - -``` - -### Deploy configuration store - -The ZooKeeper cluster configured and started up in the section above is a *local* ZooKeeper cluster that you can use to manage a single Pulsar cluster. In addition to a local cluster, however, a full Pulsar instance also requires a configuration store for handling some instance-level configuration and coordination tasks. - -If you deploy a [single-cluster](#single-cluster-pulsar-instance) instance, you do not need a separate cluster for the configuration store. If, however, you deploy a [multi-cluster](#multi-cluster-pulsar-instance) instance, you need to stand up a separate ZooKeeper cluster for configuration tasks. - -#### Single-cluster Pulsar instance - -If your Pulsar instance consists of just one cluster, then you can deploy a configuration store on the same machines as the local ZooKeeper quorum but run on different TCP ports. 
-
-To deploy a ZooKeeper configuration store in a single-cluster instance, add the same ZooKeeper servers that the local quorum uses to the configuration file in [`conf/global_zookeeper.conf`](reference-configuration.md#configuration-store) using the same method for [local ZooKeeper](#local-zookeeper), but make sure to use a different port (2181 is the default for ZooKeeper). The following is an example that uses port 2184 for a three-node ZooKeeper cluster:
-
-```properties
-
-clientPort=2184
-server.1=zk1.us-west.example.com:2185:2186
-server.2=zk2.us-west.example.com:2185:2186
-server.3=zk3.us-west.example.com:2185:2186
-
-```
-
-As before, create the `myid` files for each server in `data/global-zookeeper/myid`.
-
-#### Multi-cluster Pulsar instance
-
-When you deploy a global Pulsar instance, with clusters distributed across different geographical regions, the configuration store serves as a highly available and strongly consistent metadata store that can tolerate failures and partitions spanning whole regions.
-
-The key here is to make sure the ZK quorum members are spread across at least 3 regions and that the other regions run as observers.
-
-Again, given the very low expected load on the configuration store servers, you can share the same hosts used for the local ZooKeeper quorum.
-
-For example, assume a Pulsar instance with the following clusters: `us-west`, `us-east`, `us-central`, `eu-central`, and `ap-south`. Also assume that each cluster has its own local ZK servers named as follows:
-
-```
-
-zk[1-3].${CLUSTER}.example.com
-
-```
-
-In this scenario, you want to pick the quorum participants from a few clusters and let all the others be ZK observers. For example, to form a 7-server quorum, you can pick 3 servers from `us-west`, 2 from `us-central`, and 2 from `us-east`.
-
-This guarantees that writes to the configuration store are possible even if one of these regions is unreachable.
-
-The ZK configuration in all the servers looks like:
-
-```properties
-
-clientPort=2184
-server.1=zk1.us-west.example.com:2185:2186
-server.2=zk2.us-west.example.com:2185:2186
-server.3=zk3.us-west.example.com:2185:2186
-server.4=zk1.us-central.example.com:2185:2186
-server.5=zk2.us-central.example.com:2185:2186
-server.6=zk3.us-central.example.com:2185:2186:observer
-server.7=zk1.us-east.example.com:2185:2186
-server.8=zk2.us-east.example.com:2185:2186
-server.9=zk3.us-east.example.com:2185:2186:observer
-server.10=zk1.eu-central.example.com:2185:2186:observer
-server.11=zk2.eu-central.example.com:2185:2186:observer
-server.12=zk3.eu-central.example.com:2185:2186:observer
-server.13=zk1.ap-south.example.com:2185:2186:observer
-server.14=zk2.ap-south.example.com:2185:2186:observer
-server.15=zk3.ap-south.example.com:2185:2186:observer
-
-```
-
-Additionally, ZK observers need to have:
-
-```properties
-
-peerType=observer
-
-```
-
-##### Start the service
-
-Once your configuration store configuration is in place, you can start up the service using [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon):
-
-```shell
-
-$ bin/pulsar-daemon start configuration-store
-
-```
-
-### ZooKeeper configuration
-
-In Pulsar, ZooKeeper configuration is handled by two separate configuration files in the `conf` directory of your Pulsar installation: `conf/zookeeper.conf` for [local ZooKeeper](#local-zookeeper) and `conf/global_zookeeper.conf` for [configuration store](#configuration-store).
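-
-Putting the local-ZooKeeper pieces together, a minimal sketch of `conf/zookeeper.conf` for the example three-node quorum could look like the following; the values mirror the defaults documented in the table below, and the hostnames are the example ones used earlier:
-
-```properties
-
-# Defaults from the parameter table below
-tickTime=2000
-initLimit=10
-syncLimit=5
-dataDir=data/zookeeper
-clientPort=2181
-
-# The example three-node quorum shown earlier
-server.1=zk1.us-west.example.com:2888:3888
-server.2=zk2.us-west.example.com:2888:3888
-server.3=zk3.us-west.example.com:2888:3888
-
-```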
-
-#### Local ZooKeeper
-
-The [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file handles the configuration for local ZooKeeper. The table below shows the available parameters:
-
-|Name|Description|Default|
-|---|---|---|
-|tickTime| The tick is the basic unit of time in ZooKeeper, measured in milliseconds and used to regulate things like heartbeats and timeouts. tickTime is the length of a single tick. |2000|
-|initLimit| The maximum time, in ticks, that the leader ZooKeeper server allows follower ZooKeeper servers to successfully connect and sync. The tick time is set in milliseconds using the tickTime parameter. |10|
-|syncLimit| The maximum time, in ticks, that a follower ZooKeeper server is allowed to sync with other ZooKeeper servers. The tick time is set in milliseconds using the tickTime parameter. |5|
-|dataDir| The location where ZooKeeper stores in-memory database snapshots as well as the transaction log of updates to the database. |data/zookeeper|
-|clientPort| The port on which the ZooKeeper server listens for connections. |2181|
-|autopurge.snapRetainCount| In ZooKeeper, auto purge determines how many recent snapshots of the database stored in dataDir to retain within the time interval specified by autopurge.purgeInterval (while deleting the rest). |3|
-|autopurge.purgeInterval| The time interval, in hours, which triggers the ZooKeeper database purge task. Setting to a non-zero number enables auto purge; setting to 0 disables it. Read this guide before enabling auto purge. |1|
-|maxClientCnxns| The maximum number of client connections. Increase this if you need to handle more ZooKeeper clients. |60|
-
-
-#### Configuration Store
-
-The [`conf/global_zookeeper.conf`](reference-configuration.md#configuration-store) file handles the configuration for the configuration store. The table below shows the available parameters:
-
-
-## BookKeeper
-
-BookKeeper stores all durable messages in Pulsar. BookKeeper is a distributed [write-ahead log](https://en.wikipedia.org/wiki/Write-ahead_logging) (WAL) system that guarantees read consistency of independent message logs called ledgers. Individual BookKeeper servers are also called *bookies*.
-
-> To manage message persistence, retention, and expiry in Pulsar, refer to the [cookbook](cookbooks-retention-expiry.md).
-
-### Hardware requirements
-
-Bookie hosts store message data on disk. To provide optimal performance, ensure that the bookies have a suitable hardware configuration. The following are two key dimensions of bookie hardware capacity:
-
-- Read/write disk I/O capacity
-- Storage capacity
-
-By default, message entries written to bookies are always synced to disk before an acknowledgement is returned to the Pulsar broker. To ensure low write latency, BookKeeper is designed to use multiple devices:
-
-- A **journal** to ensure durability. For sequential writes, it is critical to have fast [fsync](https://linux.die.net/man/2/fsync) operations on bookie hosts. Typically, small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) should suffice, or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache. Both solutions can reach fsync latency of ~0.4 ms.
-- A **ledger storage device** that stores the data. Writes happen in the background, so write I/O is not a big concern. Reads happen sequentially most of the time, and the backlog is read back only when a consumer drains it. To store large amounts of data, a typical configuration involves multiple HDDs with a RAID controller.
-
-### Configure BookKeeper
-
-You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. When you configure each bookie, ensure that the [`zkServers`](reference-configuration.md#bookkeeper-zkServers) parameter is set to the connection string for the local ZooKeeper quorum of the Pulsar cluster.
-
-The minimum configuration changes required in `conf/bookkeeper.conf` are as follows:
-
-:::note
-
-Set `journalDirectory` and `ledgerDirectories` carefully. It is difficult to change them later.
-
-:::
-
-```properties
-
-# Change to point to journal disk mount point
-journalDirectory=data/bookkeeper/journal
-
-# Point to ledger storage disk mount point
-ledgerDirectories=data/bookkeeper/ledgers
-
-# Point to local ZK quorum
-zkServers=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
-
-# It is recommended to set this parameter. Otherwise, BookKeeper can't start normally in certain environments (for example, Huawei Cloud).
-advertisedAddress=
-
-```
-
-To change the ZooKeeper root path that BookKeeper uses, use `zkLedgersRootPath=/MY-PREFIX/ledgers` instead of `zkServers=localhost:2181/MY-PREFIX`.
-
-> For more information about BookKeeper, refer to the official [BookKeeper docs](http://bookkeeper.apache.org).
-
-### Deploy BookKeeper
-
-BookKeeper provides [persistent message storage](concepts-architecture-overview.md#persistent-storage) for Pulsar. Each Pulsar broker has its own cluster of bookies. The BookKeeper cluster shares a local ZooKeeper quorum with the Pulsar cluster.
-
-### Start bookies manually
-
-You can start a bookie in the foreground or as a background daemon.
-
-To start a bookie in the foreground, use the [`bookkeeper`](reference-cli-tools.md#bookkeeper) CLI tool:
-
-```bash
-
-$ bin/bookkeeper bookie
-
-```
-
-To start a bookie in the background, use the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
-
-```bash
-
-$ bin/pulsar-daemon start bookie
-
-```
-
-You can verify whether a bookie works properly with the `bookiesanity` command for the [BookKeeper shell](reference-cli-tools.md#bookkeeper-shell):
-
-```shell
-
-$ bin/bookkeeper shell bookiesanity
-
-```
-
-This command creates a new ledger on the local bookie, writes a few entries, reads them back, and finally deletes the ledger.
-
-### Decommission bookies cleanly
-
-Before you decommission a bookie, you need to check your environment and meet the following requirements.
-
-1. Ensure the state of your cluster supports decommissioning the target bookie. Check if `EnsembleSize >= Write Quorum >= Ack Quorum` still holds with one less bookie.
-
-2. Ensure the target bookie is listed when you run the `listbookies` command.
-
-3. Ensure that no other process is ongoing (such as an upgrade).
-
-Then you can decommission bookies safely. To decommission bookies, complete the following steps.
-
-1. Log in to the bookie node and check if there are underreplicated ledgers. The decommission command forces the underreplicated ledgers to be replicated.
-`$ bin/bookkeeper shell listunderreplicated`
-
-2. Stop the bookie by killing the bookie process. If you deploy the bookie in a Kubernetes environment, make sure that no liveness/readiness probes are set up that would spin it back up.
-
-3. Run the decommission command.
-   - If you have logged in to the node to be decommissioned, you do not need to provide `-bookieid`.
-
-   - If you are running the decommission command for the target bookie node from another bookie node, you should mention the target bookie ID in the arguments for `-bookieid`.
-   `$ bin/bookkeeper shell decommissionbookie`
-   or
-   `$ bin/bookkeeper shell decommissionbookie -bookieid <bookie-id>`
-
-4. Validate that no ledgers remain on the decommissioned bookie.
-`$ bin/bookkeeper shell listledgers -bookieid <bookie-id>`
-
-You can run the following command to check if the bookie you have decommissioned is still listed in the bookies list:
-
-```bash
-
-./bookkeeper shell listbookies -rw -h
-./bookkeeper shell listbookies -ro -h
-
-```
-
-## BookKeeper persistence policies
-
-In Pulsar, you can set *persistence policies* at the namespace level, which determine how BookKeeper handles persistent storage of messages. Policies determine four things:
-
-* The number of acks (guaranteed copies) to wait for each ledger entry.
-* The number of bookies to use for a topic.
-* The number of writes to make for each ledger entry.
-* The throttling rate for mark-delete operations.
-
-### Set persistence policies
-
-You can set persistence policies for BookKeeper at the [namespace](reference-terminology.md#namespace) level.
-
-#### Pulsar-admin
-
-Use the [`set-persistence`](reference-pulsar-admin.md#namespaces-set-persistence) subcommand and specify a namespace as well as any policies that you want to apply. The available flags are:
-
-Flag | Description | Default
-:----|:------------|:-------
-`-a`, `--bookkeeper-ack-quorum` | The number of acks (guaranteed copies) to wait on for each entry | 0
-`-e`, `--bookkeeper-ensemble` | The number of [bookies](reference-terminology.md#bookie) to use for topics in the namespace | 0
-`-w`, `--bookkeeper-write-quorum` | The number of writes to make for each entry | 0
-`-r`, `--ml-mark-delete-max-rate` | Throttling rate for mark-delete operations (0 means no throttle) | 0
-
-The following is an example. Note that the ensemble must be at least as large as the write quorum, which in turn must be at least as large as the ack quorum:
-
-```shell
-
-$ pulsar-admin namespaces set-persistence my-tenant/my-ns \
-  --bookkeeper-ensemble 3 \
-  --bookkeeper-write-quorum 2 \
-  --bookkeeper-ack-quorum 2
-
-```
-
-#### REST API
-
-{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/setPersistence?version=@pulsar:version_number@}
-
-#### Java
-
-```java
-
-int bkEnsemble = 3;
-int bkWriteQuorum = 2;
-int bkAckQuorum = 2;
-double markDeleteRate = 0.7;
-PersistencePolicies policies =
-  new PersistencePolicies(bkEnsemble, bkWriteQuorum, bkAckQuorum, markDeleteRate);
-admin.namespaces().setPersistence(namespace, policies);
-
-```
-
-### List persistence policies
-
-You can see which persistence policy currently applies to a namespace.
-
-#### Pulsar-admin
-
-Use the [`get-persistence`](reference-pulsar-admin.md#namespaces-get-persistence) subcommand and specify the namespace.
-
-The following is an example:
-
-```shell
-
-$ pulsar-admin namespaces get-persistence my-tenant/my-ns
-{
-  "bookkeeperEnsemble": 1,
-  "bookkeeperWriteQuorum": 1,
-  "bookkeeperAckQuorum": 1,
-  "managedLedgerMaxMarkDeleteRate": 0
-}
-
-```
-
-#### REST API
-
-{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/getPersistence?version=@pulsar:version_number@}
-
-#### Java
-
-```java
-
-PersistencePolicies policies = admin.namespaces().getPersistence(namespace);
-
-```
-
-## How Pulsar uses ZooKeeper and BookKeeper
-
-This diagram illustrates the role of ZooKeeper and BookKeeper in a Pulsar cluster:
-
-![ZooKeeper and BookKeeper](/assets/pulsar-system-architecture.png)
-
-Each Pulsar cluster consists of one or more message brokers.
Each broker relies on an ensemble of bookies. diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/client-libraries-cgo.md b/site2/website/versioned_docs/version-2.9.0-deprecated/client-libraries-cgo.md deleted file mode 100644 index f352f942b77144..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/client-libraries-cgo.md +++ /dev/null @@ -1,579 +0,0 @@ ---- -id: client-libraries-cgo -title: Pulsar CGo client -sidebar_label: "CGo(deprecated)" -original_id: client-libraries-cgo ---- - -You can use Pulsar Go client to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Go (aka Golang). - -All the methods in [producers](#producers), [consumers](#consumers), and [readers](#readers) of a Go client are thread-safe. - -Currently, the following Go clients are maintained in two repositories. - -| Language | Project | Maintainer | License | Description | -|----------|---------|------------|---------|-------------| -| CGo | [pulsar-client-go](https://github.com/apache/pulsar/tree/master/pulsar-client-go) | [Apache Pulsar](https://github.com/apache/pulsar) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | CGo client that depends on C++ client library | -| Go | [pulsar-client-go](https://github.com/apache/pulsar-client-go) | [Apache Pulsar](https://github.com/apache/pulsar) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | A native golang client | - -> **API docs available as well** -> For standard API docs, consult the [Godoc](https://godoc.org/github.com/apache/pulsar/pulsar-client-go/pulsar). - -## Installation - -### Requirements - -Pulsar Go client library is based on the C++ client library. Follow -the instructions for [C++ library](client-libraries-cpp.md) for installing the binaries through [RPM](client-libraries-cpp.md#rpm), [Deb](client-libraries-cpp.md#deb) or [Homebrew packages](client-libraries-cpp.md#macos). - -### Install go package - -> **Compatibility Warning** -> The version number of the Go client **must match** the version number of the Pulsar C++ client library. - -You can install the `pulsar` library locally using `go get`. Note that `go get` doesn't support fetching a specific tag - it will always pull in master's version of the Go client. You'll need a C++ client library that matches master. - -```bash - -$ go get -u github.com/apache/pulsar/pulsar-client-go/pulsar - -``` - -Or you can use [dep](https://github.com/golang/dep) for managing the dependencies. - -```bash - -$ dep ensure -add github.com/apache/pulsar/pulsar-client-go/pulsar@v@pulsar:version@ - -``` - -Once installed locally, you can import it into your project: - -```go - -import "github.com/apache/pulsar/pulsar-client-go/pulsar" - -``` - -## Connection URLs - -To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL. - -Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme and have a default port of 6650. 
Here's an example for `localhost`: - -```http - -pulsar://localhost:6650 - -``` - -A URL for a production Pulsar cluster may look something like this: - -```http - -pulsar://pulsar.us-west.example.com:6650 - -``` - -If you're using [TLS](security-tls-authentication.md) authentication, the URL will look like something like this: - -```http - -pulsar+ssl://pulsar.us-west.example.com:6651 - -``` - -## Create a client - -In order to interact with Pulsar, you'll first need a `Client` object. You can create a client object using the `NewClient` function, passing in a `ClientOptions` object (more on configuration [below](#client-configuration)). Here's an example: - -```go - -import ( - "log" - "runtime" - - "github.com/apache/pulsar/pulsar-client-go/pulsar" -) - -func main() { - client, err := pulsar.NewClient(pulsar.ClientOptions{ - URL: "pulsar://localhost:6650", - OperationTimeoutSeconds: 5, - MessageListenerThreads: runtime.NumCPU(), - }) - - if err != nil { - log.Fatalf("Could not instantiate Pulsar client: %v", err) - } -} - -``` - -The following configurable parameters are available for Pulsar clients: - -Parameter | Description | Default -:---------|:------------|:------- -`URL` | The connection URL for the Pulsar cluster. See [above](#urls) for more info | -`IOThreads` | The number of threads to use for handling connections to Pulsar [brokers](reference-terminology.md#broker) | 1 -`OperationTimeoutSeconds` | The timeout for some Go client operations (creating producers, subscribing to and unsubscribing from [topics](reference-terminology.md#topic)). Retries will occur until this threshold is reached, at which point the operation will fail. | 30 -`MessageListenerThreads` | The number of threads used by message listeners ([consumers](#consumers) and [readers](#readers)) | 1 -`ConcurrentLookupRequests` | The number of concurrent lookup requests that can be sent on each broker connection. Setting a maximum helps to keep from overloading brokers. You should set values over the default of 5000 only if the client needs to produce and/or subscribe to thousands of Pulsar topics. | 5000 -`Logger` | A custom logger implementation for the client (as a function that takes a log level, file path, line number, and message). All info, warn, and error messages will be routed to this function. | `nil` -`TLSTrustCertsFilePath` | The file path for the trusted TLS certificate | -`TLSAllowInsecureConnection` | Whether the client accepts untrusted TLS certificates from the broker | `false` -`Authentication` | Configure the authentication provider. (default: no authentication). Example: `Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem")` | `nil` -`StatsIntervalInSeconds` | The interval (in seconds) at which client stats are published | 60 - -## Producers - -Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Go producers using a `ProducerOptions` object. 
Here's an example: - -```go - -producer, err := client.CreateProducer(pulsar.ProducerOptions{ - Topic: "my-topic", -}) - -if err != nil { - log.Fatalf("Could not instantiate Pulsar producer: %v", err) -} - -defer producer.Close() - -msg := pulsar.ProducerMessage{ - Payload: []byte("Hello, Pulsar"), -} - -if err := producer.Send(context.Background(), msg); err != nil { - log.Fatalf("Producer could not send message: %v", err) -} - -``` - -> **Blocking operation** -> When you create a new Pulsar producer, the operation will block (waiting on a go channel) until either a producer is successfully created or an error is thrown. - - -### Producer operations - -Pulsar Go producers have the following methods available: - -Method | Description | Return type -:------|:------------|:----------- -`Topic()` | Fetches the producer's [topic](reference-terminology.md#topic)| `string` -`Name()` | Fetches the producer's name | `string` -`Send(context.Context, ProducerMessage)` | Publishes a [message](#messages) to the producer's topic. This call will block until the message is successfully acknowledged by the Pulsar broker, or an error will be thrown if the timeout set using the `SendTimeout` in the producer's [configuration](#producer-configuration) is exceeded. | `error` -`SendAndGetMsgID(context.Context, ProducerMessage)`| Send a message, this call will be blocking until is successfully acknowledged by the Pulsar broker. | (MessageID, error) -`SendAsync(context.Context, ProducerMessage, func(ProducerMessage, error))` | Publishes a [message](#messages) to the producer's topic asynchronously. The third argument is a callback function that specifies what happens either when the message is acknowledged or an error is thrown. | -`SendAndGetMsgIDAsync(context.Context, ProducerMessage, func(MessageID, error))`| Send a message in asynchronous mode. The callback will report back the message being published and the eventual error in publishing | -`LastSequenceID()` | Get the last sequence id that was published by this producer. his represent either the automatically assigned or custom sequence id (set on the ProducerMessage) that was published and acknowledged by the broker. | int64 -`Flush()`| Flush all the messages buffered in the client and wait until all messages have been successfully persisted. | error -`Close()` | Closes the producer and releases all resources allocated to it. If `Close()` is called then no more messages will be accepted from the publisher. This method will block until all pending publish requests have been persisted by Pulsar. If an error is thrown, no pending writes will be retried. 
| `error` -`Schema()` | | Schema - -Here's a more involved example usage of a producer: - -```go - -import ( - "context" - "fmt" - "log" - - "github.com/apache/pulsar/pulsar-client-go/pulsar" -) - -func main() { - // Instantiate a Pulsar client - client, err := pulsar.NewClient(pulsar.ClientOptions{ - URL: "pulsar://localhost:6650", - }) - - if err != nil { log.Fatal(err) } - - // Use the client to instantiate a producer - producer, err := client.CreateProducer(pulsar.ProducerOptions{ - Topic: "my-topic", - }) - - if err != nil { log.Fatal(err) } - - ctx := context.Background() - - // Send 10 messages synchronously and 10 messages asynchronously - for i := 0; i < 10; i++ { - // Create a message - msg := pulsar.ProducerMessage{ - Payload: []byte(fmt.Sprintf("message-%d", i)), - } - - // Attempt to send the message - if err := producer.Send(ctx, msg); err != nil { - log.Fatal(err) - } - - // Create a different message to send asynchronously - asyncMsg := pulsar.ProducerMessage{ - Payload: []byte(fmt.Sprintf("async-message-%d", i)), - } - - // Attempt to send the message asynchronously and handle the response - producer.SendAsync(ctx, asyncMsg, func(msg pulsar.ProducerMessage, err error) { - if err != nil { log.Fatal(err) } - - fmt.Printf("the %s successfully published", string(msg.Payload)) - }) - } -} - -``` - -### Producer configuration - -Parameter | Description | Default -:---------|:------------|:------- -`Topic` | The Pulsar [topic](reference-terminology.md#topic) to which the producer will publish messages | -`Name` | A name for the producer. If you don't explicitly assign a name, Pulsar will automatically generate a globally unique name that you can access later using the `Name()` method. If you choose to explicitly assign a name, it will need to be unique across *all* Pulsar clusters, otherwise the creation operation will throw an error. | -`Properties`| Attach a set of application defined properties to the producer. This properties will be visible in the topic stats | -`SendTimeout` | When publishing a message to a topic, the producer will wait for an acknowledgment from the responsible Pulsar [broker](reference-terminology.md#broker). If a message is not acknowledged within the threshold set by this parameter, an error will be thrown. If you set `SendTimeout` to -1, the timeout will be set to infinity (and thus removed). Removing the send timeout is recommended when using Pulsar's [message de-duplication](cookbooks-deduplication.md) feature. | 30 seconds -`MaxPendingMessages` | The maximum size of the queue holding pending messages (i.e. messages waiting to receive an acknowledgment from the [broker](reference-terminology.md#broker)). By default, when the queue is full all calls to the `Send` and `SendAsync` methods will fail *unless* `BlockIfQueueFull` is set to `true`. | -`MaxPendingMessagesAcrossPartitions` | Set the number of max pending messages across all the partitions. This setting will be used to lower the max pending messages for each partition `MaxPendingMessages(int)`, if the total exceeds the configured value.| -`BlockIfQueueFull` | If set to `true`, the producer's `Send` and `SendAsync` methods will block when the outgoing message queue is full rather than failing and throwing an error (the size of that queue is dictated by the `MaxPendingMessages` parameter); if set to `false` (the default), `Send` and `SendAsync` operations will fail and throw a `ProducerQueueIsFullError` when the queue is full. 
## Consumers

Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on those topics. You can [configure](#consumer-configuration) Go consumers using a `ConsumerOptions` object. Here's a basic example that uses channels:

```go

msgChannel := make(chan pulsar.ConsumerMessage)

consumerOpts := pulsar.ConsumerOptions{
    Topic:            "my-topic",
    SubscriptionName: "my-subscription-1",
    Type:             pulsar.Exclusive,
    MessageChannel:   msgChannel,
}

consumer, err := client.Subscribe(consumerOpts)

if err != nil {
    log.Fatalf("Could not establish subscription: %v", err)
}

defer consumer.Close()

for cm := range msgChannel {
    msg := cm.Message

    fmt.Printf("Message ID: %s\n", msg.ID())
    fmt.Printf("Message value: %s\n", string(msg.Payload()))

    consumer.Ack(msg)
}

```

> **Blocking operation**
> When you create a new Pulsar consumer, the operation will block (on a go channel) until either a consumer is successfully created or an error is thrown.
### Consumer operations

Pulsar Go consumers have the following methods available:

Method | Description | Return type
:------|:------------|:-----------
`Topic()` | Returns the consumer's [topic](reference-terminology.md#topic) | `string`
`Subscription()` | Returns the consumer's subscription name | `string`
`Unsubscribe()` | Unsubscribes the consumer from the assigned topic. Throws an error if the unsubscribe operation fails. | `error`
`Receive(context.Context)` | Receives a single message from the topic. This method blocks until a message is available. | `(Message, error)`
`Ack(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) | `error`
`AckID(MessageID)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID | `error`
`AckCumulative(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message. The `AckCumulative` method will block until the ack has been sent to the broker. After that, the messages will *not* be redelivered to the consumer. Cumulative acking cannot be used with a [shared](concepts-messaging.md#shared) subscription type. | `error`
`AckCumulativeID(MessageID)` | Acknowledges the reception of all the messages in the stream up to (and including) the message with the provided message ID. This method will block until the ack has been sent to the broker. After that, the messages will not be redelivered to this consumer. | `error`
`Nack(Message)` | Acknowledges the failure to process a single message. | `error`
`NackID(MessageID)` | Acknowledges the failure to process a single message, by message ID. | `error`
`Close()` | Closes the consumer, disabling its ability to receive messages from the broker | `error`
`RedeliverUnackedMessages()` | Redelivers *all* unacknowledged messages on the topic. In [failover](concepts-messaging.md#failover) mode, this request is ignored if the consumer isn't active on the specified topic; in [shared](concepts-messaging.md#shared) mode, redelivered messages are distributed across all consumers connected to the topic. **Note**: this is a *non-blocking* operation that doesn't throw an error. |
`Seek(msgID MessageID)` | Resets the subscription associated with this consumer to a specific message ID. The message ID can either be a specific message or represent the first or last messages in the topic. | `error`
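For example, `AckCumulative` can replace per-message acks when messages are processed in order. The sketch below is based only on the signatures in the table above and reuses the `consumer` from the earlier example; remember that cumulative acking cannot be used with a shared subscription:

```go

ctx := context.Background()

// Receive a batch of messages, remembering only the last one
var lastMsg pulsar.Message
for i := 0; i < 10; i++ {
    msg, err := consumer.Receive(ctx)
    if err != nil {
        log.Fatalf("Could not receive message: %v", err)
    }
    lastMsg = msg
}

// Acknowledge everything up to and including the last message in one call
if err := consumer.AckCumulative(lastMsg); err != nil {
    log.Fatalf("Could not ack cumulatively: %v", err)
}

```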
#### Receive example

Here's an example usage of a Go consumer that uses the `Receive()` method to process incoming messages:

```go

import (
    "context"
    "log"

    "github.com/apache/pulsar/pulsar-client-go/pulsar"
)

func main() {
    // Instantiate a Pulsar client
    client, err := pulsar.NewClient(pulsar.ClientOptions{
        URL: "pulsar://localhost:6650",
    })

    if err != nil { log.Fatal(err) }

    // Use the client object to instantiate a consumer
    consumer, err := client.Subscribe(pulsar.ConsumerOptions{
        Topic:            "my-golang-topic",
        SubscriptionName: "sub-1",
        Type:             pulsar.Exclusive,
    })

    if err != nil { log.Fatal(err) }

    defer consumer.Close()

    ctx := context.Background()

    // Listen indefinitely on the topic
    for {
        msg, err := consumer.Receive(ctx)
        if err != nil { log.Fatal(err) }

        // Do something with the message
        err = processMessage(msg)

        if err == nil {
            // Message processed successfully
            consumer.Ack(msg)
        } else {
            // Failed to process the message
            consumer.Nack(msg)
        }
    }
}

```

### Consumer configuration

Parameter | Description | Default
:---------|:------------|:-------
`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the consumer will establish a subscription and listen for messages |
`Topics` | Specifies a list of topics this consumer will subscribe to. Either a topic, a list of topics, or a topics pattern is required when subscribing |
`TopicsPattern` | Specifies a regular expression to subscribe to multiple topics under the same namespace. Either a topic, a list of topics, or a topics pattern is required when subscribing |
`SubscriptionName` | The subscription name for this consumer |
`Properties` | Attaches a set of application-defined properties to the consumer. These properties will be visible in the topic stats |
`Name` | The name of the consumer |
`AckTimeout` | Sets the timeout for unacked messages (a value of `0` disables the timeout) | 0
`NackRedeliveryDelay` | The delay after which to redeliver the messages that failed to be processed. (See `Consumer.Nack()`.) | 1 minute
`Type` | Available options are `Exclusive`, `Shared`, and `Failover` | `Exclusive`
`SubscriptionInitPos` | The initial position at which the cursor will be set when subscribing | `Latest`
`MessageChannel` | The Go channel used by the consumer. Messages that arrive from the Pulsar topic(s) will be passed to this channel. |
`ReceiverQueueSize` | Sets the size of the consumer's receiver queue, i.e. the number of messages that can be accumulated by the consumer before the application calls `Receive`. A value higher than the default of 1000 could increase consumer throughput, though at the expense of more memory utilization. | 1000
`MaxTotalReceiverQueueSizeAcrossPartitions` | Sets the max total receiver queue size across partitions. This setting will be used to reduce the receiver queue size for individual partitions if the total exceeds this value | 50000
`ReadCompacted` | If enabled, the consumer will read messages from the compacted topic rather than reading the full message backlog of the topic. This means that, if the topic has been compacted, the consumer will only see the latest value for each key in the topic, up until the point in the topic message backlog that has been compacted. Beyond that point, the messages will be sent as normal. |
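As a sketch of how these options fit together, a shared subscription with custom ack-timeout and redelivery behavior might be configured as below. Field names come from the table above; the values are illustrative, and duration-valued fields are assumed to take a `time.Duration`:

```go

consumer, err := client.Subscribe(pulsar.ConsumerOptions{
    Topic:               "my-topic",
    SubscriptionName:    "my-shared-subscription",
    Type:                pulsar.Shared,
    AckTimeout:          30 * time.Second, // redeliver messages left unacked for 30s
    NackRedeliveryDelay: 10 * time.Second, // wait 10s before redelivering nacked messages
    ReceiverQueueSize:   2000,             // prefetch more messages than the default 1000
})
if err != nil {
    log.Fatal(err)
}

defer consumer.Close()

```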
## Readers

Pulsar readers process messages from Pulsar topics. Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recent unacked message). You can [configure](#reader-configuration) Go readers using a `ReaderOptions` object. Here's an example:

```go

reader, err := client.CreateReader(pulsar.ReaderOptions{
    Topic:          "my-golang-topic",
    StartMessageID: pulsar.LatestMessage,
})

```

> **Blocking operation**
> When you create a new Pulsar reader, the operation will block (on a go channel) until either a reader is successfully created or an error is thrown.


### Reader operations

Pulsar Go readers have the following methods available:

Method | Description | Return type
:------|:------------|:-----------
`Topic()` | Returns the reader's [topic](reference-terminology.md#topic) | `string`
`Next(context.Context)` | Receives the next message on the topic (analogous to the `Receive` method for [consumers](#consumer-operations)). This method blocks until a message is available. | `(Message, error)`
`HasNext()` | Checks if there is any message available to read from the current position | `(bool, error)`
`Close()` | Closes the reader, disabling its ability to receive messages from the broker | `error`

#### "Next" example

Here's an example usage of a Go reader that uses the `Next()` method to process incoming messages:

```go

import (
    "context"
    "log"

    "github.com/apache/pulsar/pulsar-client-go/pulsar"
)

func main() {
    // Instantiate a Pulsar client
    client, err := pulsar.NewClient(pulsar.ClientOptions{
        URL: "pulsar://localhost:6650",
    })

    if err != nil { log.Fatalf("Could not create client: %v", err) }

    // Use the client to instantiate a reader
    reader, err := client.CreateReader(pulsar.ReaderOptions{
        Topic:          "my-golang-topic",
        StartMessageID: pulsar.EarliestMessage,
    })

    if err != nil { log.Fatalf("Could not create reader: %v", err) }

    defer reader.Close()

    ctx := context.Background()

    // Listen on the topic for incoming messages
    for {
        msg, err := reader.Next(ctx)
        if err != nil { log.Fatalf("Error reading from topic: %v", err) }

        // Process the message
    }
}

```

In the example above, the reader begins reading from the earliest available message (specified by `pulsar.EarliestMessage`). The reader can also begin reading from the latest message (`pulsar.LatestMessage`) or some other message ID specified by bytes using the `DeserializeMessageID` function, which takes a byte array and returns a `MessageID` object. Here's an example:

```go

// Read the last saved message ID from an external store as a byte slice.
// readLastSavedMessageID is a placeholder for your own lookup logic.
lastSavedID := readLastSavedMessageID()

reader, err := client.CreateReader(pulsar.ReaderOptions{
    Topic:          "my-golang-topic",
    StartMessageID: pulsar.DeserializeMessageID(lastSavedID),
})

```

### Reader configuration

Parameter | Description | Default
:---------|:------------|:-------
`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the reader will establish a subscription and listen for messages |
`Name` | The name of the reader |
`StartMessageID` | The initial reader position, i.e. the message at which the reader begins processing messages. The options are `pulsar.EarliestMessage` (the earliest available message on the topic), `pulsar.LatestMessage` (the latest available message on the topic), or a `MessageID` object for a position that isn't earliest or latest. |
`MessageChannel` | The Go channel used by the reader. Messages that arrive from the Pulsar topic(s) will be passed to this channel. |
`ReceiverQueueSize` | Sets the size of the reader's receiver queue, i.e. the number of messages that can be accumulated by the reader before the application calls `Next`. A value higher than the default of 1000 could increase reader throughput, though at the expense of more memory utilization. | 1000
`SubscriptionRolePrefix` | The subscription role prefix. | `reader`
`ReadCompacted` | If enabled, the reader will read messages from the compacted topic rather than reading the full message backlog of the topic. This means that, if the topic has been compacted, the reader will only see the latest value for each key in the topic, up until the point in the topic message backlog that has been compacted. Beyond that point, the messages will be sent as normal. |
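One pattern the reader examples above don't show is a bounded read: `HasNext()` from the reader operations table can be paired with `Next()` to drain a topic up to its current end. A minimal sketch, reusing the `reader` and `ctx` from the "Next" example:

```go

// Read until the reader reaches the current end of the topic
for {
    hasNext, err := reader.HasNext()
    if err != nil {
        log.Fatalf("Could not check for messages: %v", err)
    }
    if !hasNext {
        break // caught up with the latest available message
    }

    msg, err := reader.Next(ctx)
    if err != nil {
        log.Fatalf("Error reading from topic: %v", err)
    }

    log.Printf("Read message with ID: %v", msg.ID())
}

```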
## Messages

The Pulsar Go client provides a `ProducerMessage` interface that you can use to construct messages to produce on Pulsar topics. Here's an example message:

```go

msg := pulsar.ProducerMessage{
    Payload: []byte("Here is some message data"),
    Key:     "message-key",
    Properties: map[string]string{
        "foo": "bar",
    },
    EventTime:           time.Now(),
    ReplicationClusters: []string{"cluster1", "cluster3"},
}

if err := producer.Send(context.Background(), msg); err != nil {
    log.Fatalf("Could not publish message due to: %v", err)
}

```

The following parameters are available for `ProducerMessage` objects:

Parameter | Description
:---------|:-----------
`Payload` | The actual data payload of the message
`Value` | `Value` and `Payload` are mutually exclusive; `Value interface{}` is used for schema-based messages.
`Key` | The optional key associated with the message (particularly useful for things like topic compaction)
`Properties` | A key-value map (both keys and values must be strings) for any application-specific metadata attached to the message
`EventTime` | The timestamp associated with the message
`ReplicationClusters` | The clusters to which this message will be replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default.
`SequenceID` | Sets the sequence ID to assign to the current message

## TLS encryption and authentication

In order to use [TLS encryption](security-tls-transport.md), you'll need to configure your client to do so:

 * Use the `pulsar+ssl` URL type
 * Set `TLSTrustCertsFilePath` to the path to the TLS certs used by your client and the Pulsar broker
 * Configure the `Authentication` option

Here's an example:

```go

opts := pulsar.ClientOptions{
    URL:                   "pulsar+ssl://my-cluster.com:6651",
    TLSTrustCertsFilePath: "/path/to/certs/my-cert.csr",
    Authentication:        pulsar.NewAuthenticationTLS("my-cert.pem", "my-key.pem"),
}

```

## Schema

This example shows how to create a producer and consumer with schema.
- -```go - -var exampleSchemaDef = "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," + - "\"fields\":[{\"name\":\"ID\",\"type\":\"int\"},{\"name\":\"Name\",\"type\":\"string\"}]}" -jsonSchema := NewJsonSchema(exampleSchemaDef, nil) -// create producer -producer, err := client.CreateProducerWithSchema(ProducerOptions{ - Topic: "jsonTopic", -}, jsonSchema) -err = producer.Send(context.Background(), ProducerMessage{ - Value: &testJson{ - ID: 100, - Name: "pulsar", - }, -}) -if err != nil { - log.Fatal(err) -} -defer producer.Close() -//create consumer -var s testJson -consumerJS := NewJsonSchema(exampleSchemaDef, nil) -consumer, err := client.SubscribeWithSchema(ConsumerOptions{ - Topic: "jsonTopic", - SubscriptionName: "sub-2", -}, consumerJS) -if err != nil { - log.Fatal(err) -} -msg, err := consumer.Receive(context.Background()) -if err != nil { - log.Fatal(err) -} -err = msg.GetValue(&s) -if err != nil { - log.Fatal(err) -} -fmt.Println(s.ID) // output: 100 -fmt.Println(s.Name) // output: pulsar -defer consumer.Close() - -``` - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/client-libraries-cpp.md b/site2/website/versioned_docs/version-2.9.0-deprecated/client-libraries-cpp.md deleted file mode 100644 index 455cf02116d502..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/client-libraries-cpp.md +++ /dev/null @@ -1,708 +0,0 @@ ---- -id: client-libraries-cpp -title: Pulsar C++ client -sidebar_label: "C++" -original_id: client-libraries-cpp ---- - -You can use Pulsar C++ client to create Pulsar producers and consumers in C++. - -All the methods in producer, consumer, and reader of a C++ client are thread-safe. - -## Supported platforms - -Pulsar C++ client is supported on **Linux** ,**MacOS** and **Windows** platforms. - -[Doxygen](http://www.doxygen.nl/)-generated API docs for the C++ client are available [here](/api/cpp). - -## System requirements - -You need to install the following components before using the C++ client: - -* [CMake](https://cmake.org/) -* [Boost](http://www.boost.org/) -* [Protocol Buffers](https://developers.google.com/protocol-buffers/) >= 3 -* [libcurl](https://curl.se/libcurl/) -* [Google Test](https://github.com/google/googletest) - -## Linux - -### Compilation - -1. Clone the Pulsar repository. - -```shell - -$ git clone https://github.com/apache/pulsar - -``` - -2. Install all necessary dependencies. - -```shell - -$ apt-get install cmake libssl-dev libcurl4-openssl-dev liblog4cxx-dev \ - libprotobuf-dev protobuf-compiler libboost-all-dev google-mock libgtest-dev libjsoncpp-dev - -``` - -3. Compile and install [Google Test](https://github.com/google/googletest). - -```shell - -# libgtest-dev version is 1.18.0 or above -$ cd /usr/src/googletest -$ sudo cmake . -$ sudo make -$ sudo cp ./googlemock/libgmock.a ./googlemock/gtest/libgtest.a /usr/lib/ - -# less than 1.18.0 -$ cd /usr/src/gtest -$ sudo cmake . -$ sudo make -$ sudo cp libgtest.a /usr/lib - -$ cd /usr/src/gmock -$ sudo cmake . -$ sudo make -$ sudo cp libgmock.a /usr/lib - -``` - -4. Compile the Pulsar client library for C++ inside the Pulsar repository. - -```shell - -$ cd pulsar-client-cpp -$ cmake . -$ make - -``` - -After you install the components successfully, the files `libpulsar.so` and `libpulsar.a` are in the `lib` folder of the repository. The tools `perfProducer` and `perfConsumer` are in the `perf` directory. - -### Install Dependencies - -> Since 2.1.0 release, Pulsar ships pre-built RPM and Debian packages. 
You can download and install those packages directly. - -After you download and install RPM or DEB, the `libpulsar.so`, `libpulsarnossl.so`, `libpulsar.a`, and `libpulsarwithdeps.a` libraries are in your `/usr/lib` directory. - -By default, they are built in code path `${PULSAR_HOME}/pulsar-client-cpp`. You can build with the command below. - - `cmake . -DBUILD_TESTS=OFF -DLINK_STATIC=ON && make pulsarShared pulsarSharedNossl pulsarStatic pulsarStaticWithDeps -j 3`. - -These libraries rely on some other libraries. If you want to get detailed version of dependencies, see [RPM](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/pkg/rpm/Dockerfile) or [DEB](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/pkg/deb/Dockerfile) files. - -1. `libpulsar.so` is a shared library, containing statically linked `boost` and `openssl`. It also dynamically links all other necessary libraries. You can use this Pulsar library with the command below. - -```bash - - g++ --std=c++11 PulsarTest.cpp -o test /usr/lib/libpulsar.so -I/usr/local/ssl/include - -``` - -2. `libpulsarnossl.so` is a shared library, similar to `libpulsar.so` except that the libraries `openssl` and `crypto` are dynamically linked. You can use this Pulsar library with the command below. - -```bash - - g++ --std=c++11 PulsarTest.cpp -o test /usr/lib/libpulsarnossl.so -lssl -lcrypto -I/usr/local/ssl/include -L/usr/local/ssl/lib - -``` - -3. `libpulsar.a` is a static library. You need to load dependencies before using this library. You can use this Pulsar library with the command below. - -```bash - - g++ --std=c++11 PulsarTest.cpp -o test /usr/lib/libpulsar.a -lssl -lcrypto -ldl -lpthread -I/usr/local/ssl/include -L/usr/local/ssl/lib -lboost_system -lboost_regex -lcurl -lprotobuf -lzstd -lz - -``` - -4. `libpulsarwithdeps.a` is a static library, based on `libpulsar.a`. It is archived in the dependencies of `libboost_regex`, `libboost_system`, `libcurl`, `libprotobuf`, `libzstd` and `libz`. You can use this Pulsar library with the command below. - -```bash - - g++ --std=c++11 PulsarTest.cpp -o test /usr/lib/libpulsarwithdeps.a -lssl -lcrypto -ldl -lpthread -I/usr/local/ssl/include -L/usr/local/ssl/lib - -``` - -The `libpulsarwithdeps.a` does not include library openssl related libraries `libssl` and `libcrypto`, because these two libraries are related to security. It is more reasonable and easier to use the versions provided by the local system to handle security issues and upgrade libraries. - -### Install RPM - -1. Download a RPM package from the links in the table. - -| Link | Crypto files | -|------|--------------| -| [client](@pulsar:dist_rpm:client@) | [asc](@pulsar:dist_rpm:client@.asc), [sha512](@pulsar:dist_rpm:client@.sha512) | -| [client-debuginfo](@pulsar:dist_rpm:client-debuginfo@) | [asc](@pulsar:dist_rpm:client-debuginfo@.asc), [sha512](@pulsar:dist_rpm:client-debuginfo@.sha512) | -| [client-devel](@pulsar:dist_rpm:client-devel@) | [asc](@pulsar:dist_rpm:client-devel@.asc), [sha512](@pulsar:dist_rpm:client-devel@.sha512) | - -2. Install the package using the following command. - -```bash - -$ rpm -ivh apache-pulsar-client*.rpm - -``` - -After you install RPM successfully, Pulsar libraries are in the `/usr/lib` directory. - -:::note - -If you get the error that `libpulsar.so: cannot open shared object file: No such file or directory` when starting Pulsar client, you may need to run `ldconfig` first. - -::: - -### Install Debian - -1. Download a Debian package from the links in the table. 
- -| Link | Crypto files | -|------|--------------| -| [client](@pulsar:deb:client@) | [asc](@pulsar:dist_deb:client@.asc), [sha512](@pulsar:dist_deb:client@.sha512) | -| [client-devel](@pulsar:deb:client-devel@) | [asc](@pulsar:dist_deb:client-devel@.asc), [sha512](@pulsar:dist_deb:client-devel@.sha512) | - -2. Install the package using the following command. - -```bash - -$ apt install ./apache-pulsar-client*.deb - -``` - -After you install DEB successfully, Pulsar libraries are in the `/usr/lib` directory. - -### Build - -> If you want to build RPM and Debian packages from the latest master, follow the instructions below. You should run all the instructions at the root directory of your cloned Pulsar repository. - -There are recipes that build RPM and Debian packages containing a -statically linked `libpulsar.so` / `libpulsarnossl.so` / `libpulsar.a` / `libpulsarwithdeps.a` with all required dependencies. - -To build the C++ library packages, you need to build the Java packages first. - -```shell - -mvn install -DskipTests - -``` - -#### RPM - -To build the RPM inside a Docker container, use the command below. The RPMs are in the `pulsar-client-cpp/pkg/rpm/RPMS/x86_64/` path. - -```shell - -pulsar-client-cpp/pkg/rpm/docker-build-rpm.sh - -``` - -| Package name | Content | -|-----|-----| -| pulsar-client | Shared library `libpulsar.so` and `libpulsarnossl.so` | -| pulsar-client-devel | Static library `libpulsar.a`, `libpulsarwithdeps.a`and C++ and C headers | -| pulsar-client-debuginfo | Debug symbols for `libpulsar.so` | - -#### Debian - -To build Debian packages, enter the following command. - -```shell - -pulsar-client-cpp/pkg/deb/docker-build-deb.sh - -``` - -Debian packages are created in the `pulsar-client-cpp/pkg/deb/BUILD/DEB/` path. - -| Package name | Content | -|-----|-----| -| pulsar-client | Shared library `libpulsar.so` and `libpulsarnossl.so` | -| pulsar-client-dev | Static library `libpulsar.a`, `libpulsarwithdeps.a` and C++ and C headers | - -## MacOS - -### Compilation - -1. Clone the Pulsar repository. - -```shell - -$ git clone https://github.com/apache/pulsar - -``` - -2. Install all necessary dependencies. - -```shell - -# OpenSSL installation -$ brew install openssl -$ export OPENSSL_INCLUDE_DIR=/usr/local/opt/openssl/include/ -$ export OPENSSL_ROOT_DIR=/usr/local/opt/openssl/ - -# Protocol Buffers installation -$ brew install protobuf boost boost-python log4cxx -# If you are using python3, you need to install boost-python3 - -# Google Test installation -$ git clone https://github.com/google/googletest.git -$ cd googletest -$ git checkout release-1.12.1 -$ cmake . -$ make install - -``` - -3. Compile the Pulsar client library in the repository that you cloned. - -```shell - -$ cd pulsar-client-cpp -$ cmake . -$ make - -``` - -### Install `libpulsar` - -Pulsar releases are available in the [Homebrew](https://brew.sh/) core repository. You can install the C++ client library with the following command. The package is installed with the library and headers. - -```shell - -brew install libpulsar - -``` - -## Windows (64-bit) - -### Compilation - -1. Clone the Pulsar repository. - -```shell - -$ git clone https://github.com/apache/pulsar - -``` - -2. Install all necessary dependencies. - -```shell - -cd ${PULSAR_HOME}/pulsar-client-cpp -vcpkg install --feature-flags=manifests --triplet x64-windows - -``` - -3. Build C++ libraries. 
```shell

cmake -B ./build -A x64 -DBUILD_PYTHON_WRAPPER=OFF -DBUILD_TESTS=OFF -DVCPKG_TRIPLET=x64-windows -DCMAKE_BUILD_TYPE=Release -S .
cmake --build ./build --config Release

```

> **NOTE**
>
> 1. For Windows 32-bit, you need to use `-A Win32` and `-DVCPKG_TRIPLET=x86-windows`.
> 2. For MSVC Debug mode, you need to replace `Release` with `Debug` for both the `CMAKE_BUILD_TYPE` variable and the `--config` option.

4. Client libraries are available in the following places.

```

${PULSAR_HOME}/pulsar-client-cpp/build/lib/Release/pulsar.lib
${PULSAR_HOME}/pulsar-client-cpp/build/lib/Release/pulsar.dll

```

## Connection URLs

To connect to Pulsar using client libraries, you need to specify a Pulsar protocol URL.

Pulsar protocol URLs are assigned to specific clusters and use the `pulsar` URI scheme. The default port is `6650`. The following is an example for localhost.

```http

pulsar://localhost:6650

```

In a Pulsar cluster in production, the URL looks as follows.

```http

pulsar://pulsar.us-west.example.com:6650

```

If you use TLS authentication, you need to add `ssl`, and the default port is `6651`. The following is an example.

```http

pulsar+ssl://pulsar.us-west.example.com:6651

```

## Create a consumer

To use Pulsar as a consumer, you need to create a consumer on the C++ client. There are two main ways of using the consumer:
- [Blocking style](#blocking-example): synchronously calling `receive(msg)`.
- [Non-blocking](#consumer-with-a-message-listener) (event based) style: using a message listener.

### Blocking example

The benefit of this approach is that it is the simplest code. It simply keeps calling `receive(msg)`, which blocks until a message is received.

This example starts a subscription at the earliest offset and consumes 100 messages.

```c++

#include <pulsar/Client.h>

#include <iostream>

using namespace pulsar;

int main() {
    Client client("pulsar://localhost:6650");

    Consumer consumer;
    ConsumerConfiguration config;
    config.setSubscriptionInitialPosition(InitialPositionEarliest);
    Result result = client.subscribe("persistent://public/default/my-topic", "consumer-1", config, consumer);
    if (result != ResultOk) {
        std::cout << "Failed to subscribe: " << result << std::endl;
        return -1;
    }

    Message msg;
    int ctr = 0;
    // consume 100 messages
    while (ctr < 100) {
        consumer.receive(msg);
        std::cout << "Received: " << msg
                  << " with payload '" << msg.getDataAsString() << "'" << std::endl;

        consumer.acknowledge(msg);
        ctr++;
    }

    std::cout << "Finished consuming synchronously!" << std::endl;

    client.close();
    return 0;
}

```

### Consumer with a message listener

You can avoid running a blocking-call loop by using an event-based style with a message listener, which is invoked for each message that is received.

This example starts a subscription at the earliest offset and consumes 100 messages.
```c++

#include <pulsar/Client.h>

#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

using namespace pulsar;

std::atomic<int> messagesReceived{0};

void handleAckComplete(Result res) {
    std::cout << "Ack res: " << res << std::endl;
}

void listener(Consumer consumer, const Message& msg) {
    std::cout << "Got message " << msg << " with content '" << msg.getDataAsString() << "'" << std::endl;
    messagesReceived++;
    consumer.acknowledgeAsync(msg.getMessageId(), handleAckComplete);
}

int main() {
    Client client("pulsar://localhost:6650");

    Consumer consumer;
    ConsumerConfiguration config;
    config.setMessageListener(listener);
    config.setSubscriptionInitialPosition(InitialPositionEarliest);
    Result result = client.subscribe("persistent://public/default/my-topic", "consumer-1", config, consumer);
    if (result != ResultOk) {
        std::cout << "Failed to subscribe: " << result << std::endl;
        return -1;
    }

    // wait for 100 messages to be consumed
    while (messagesReceived < 100) {
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }

    std::cout << "Finished consuming asynchronously!" << std::endl;

    client.close();
    return 0;
}

```

## Create a producer

To use Pulsar as a producer, you need to create a producer on the C++ client. There are two main ways of using a producer:
- [Blocking style](#simple-blocking-example): each call to `send` waits for an ack from the broker.
- [Non-blocking asynchronous style](#non-blocking-example): `sendAsync` is called instead of `send` and a callback is supplied for when the ack is received from the broker.

### Simple blocking example

This example sends 100 messages using the blocking style. While simple, it does not produce high throughput as it waits for each ack to come back before sending the next message.

```c++

#include <pulsar/Client.h>

#include <chrono>
#include <iostream>
#include <string>
#include <thread>

using namespace pulsar;

int main() {
    Client client("pulsar://localhost:6650");

    Producer producer;
    Result result = client.createProducer("persistent://public/default/my-topic", producer);
    if (result != ResultOk) {
        std::cout << "Error creating producer: " << result << std::endl;
        return -1;
    }

    // Send 100 messages synchronously
    int ctr = 0;
    while (ctr < 100) {
        std::string content = "msg" + std::to_string(ctr);
        Message msg = MessageBuilder().setContent(content).setProperty("x", "1").build();
        Result result = producer.send(msg);
        if (result != ResultOk) {
            std::cout << "The message " << content << " could not be sent, received code: " << result << std::endl;
        } else {
            std::cout << "The message " << content << " sent successfully" << std::endl;
        }

        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        ctr++;
    }

    std::cout << "Finished producing synchronously!" << std::endl;

    client.close();
    return 0;
}

```

### Non-blocking example

This example sends 100 messages using the non-blocking style, calling `sendAsync` instead of `send`. This allows the producer to have multiple messages in flight at a time, which increases throughput.

The producer configuration `blockIfQueueFull` is useful here to avoid `ResultProducerQueueIsFull` errors when the internal queue for outgoing send requests becomes full. Once the internal queue is full, `sendAsync` becomes blocking, which can make your code simpler.

Without this configuration, the result code `ResultProducerQueueIsFull` is passed to the callback. You must decide how to deal with that (retry, discard, etc.).
```c++

#include <pulsar/Client.h>

#include <atomic>
#include <chrono>
#include <functional>
#include <iostream>
#include <string>
#include <thread>

using namespace pulsar;

std::atomic<int> acksReceived{0};

void callback(Result code, const MessageId& msgId, std::string msgContent) {
    // message processing logic here
    std::cout << "Received ack for msg: " << msgContent << " with code: "
              << code << " -- MsgID: " << msgId << std::endl;
    acksReceived++;
}

int main() {
    Client client("pulsar://localhost:6650");

    ProducerConfiguration producerConf;
    producerConf.setBlockIfQueueFull(true);
    Producer producer;
    Result result = client.createProducer("persistent://public/default/my-topic",
                                          producerConf, producer);
    if (result != ResultOk) {
        std::cout << "Error creating producer: " << result << std::endl;
        return -1;
    }

    // Send 100 messages asynchronously
    int ctr = 0;
    while (ctr < 100) {
        std::string content = "msg" + std::to_string(ctr);
        Message msg = MessageBuilder().setContent(content).setProperty("x", "1").build();
        producer.sendAsync(msg, std::bind(callback,
                                          std::placeholders::_1, std::placeholders::_2, content));

        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        ctr++;
    }

    // wait for 100 messages to be acked
    while (acksReceived < 100) {
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }

    std::cout << "Finished producing asynchronously!" << std::endl;

    client.close();
    return 0;
}

```

### Partitioned topics and lazy producers

When scaling out a Pulsar topic, you may configure a topic to have hundreds of partitions. Likewise, you may have also scaled out your producers so there are hundreds or even thousands of producers. This can put some strain on the Pulsar brokers: when you create a producer on a partitioned topic, the client internally creates one internal producer per partition, which involves communication with the brokers for each one. So for a topic with 1000 partitions and 1000 producers, it ends up creating 1,000,000 internal producers across the producer applications, each of which has to communicate with a broker to find out which broker it should connect to and then perform the connection handshake.

You can reduce the load caused by this combination of a large number of partitions and many producers by doing the following:
- use the SinglePartition partition routing mode (this ensures that all messages are only sent to a single, randomly selected partition)
- use non-keyed messages (when messages are keyed, routing is based on the hash of the key and so messages will end up being sent to multiple partitions)
- use lazy producers (this ensures that an internal producer is only created on demand when a message needs to be routed to a partition)

With our example above, that reduces the number of internal producers spread out over the 1000 producer apps from 1,000,000 to just 1000.

Note that there can be extra latency for the first message sent. If you set a low send timeout, this timeout could be reached if the initial connection handshake is slow to complete.

```c++

ProducerConfiguration producerConf;
producerConf.setPartitionsRoutingMode(ProducerConfiguration::UseSinglePartition);
producerConf.setLazyStartPartitionedProducers(true);

```
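Creating the producer with this configuration works like any other producer creation in the examples above; a minimal sketch, assuming the `client` from the earlier examples (the topic name is illustrative):

```c++

// Create a producer on a partitioned topic using the lazy, single-partition configuration
Producer producer;
Result result = client.createProducer("persistent://public/default/my-partitioned-topic",
                                      producerConf, producer);
if (result != ResultOk) {
    std::cout << "Error creating producer: " << result << std::endl;
}

```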
## Enable authentication in connection URLs

If you use TLS authentication when connecting to Pulsar, you need to add `ssl` in the connection URLs, and the default port is `6651`. The following is an example.

```cpp

ClientConfiguration config = ClientConfiguration();
config.setUseTls(true);
config.setTlsTrustCertsFilePath("/path/to/cacert.pem");
config.setTlsAllowInsecureConnection(false);
config.setAuth(pulsar::AuthTls::create(
    "/path/to/client-cert.pem", "/path/to/client-key.pem"));

Client client("pulsar+ssl://my-broker.com:6651", config);

```

For complete examples, refer to [C++ client examples](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp/examples).

## Schema

This section describes some examples of schema usage. For more information about
schema, see [Pulsar schema](schema-get-started.md).

### Avro schema

- The following example shows how to create a producer with an Avro schema.

  ```cpp

  static const std::string exampleSchema =
      "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\","
      "\"fields\":[{\"name\":\"a\",\"type\":\"int\"},{\"name\":\"b\",\"type\":\"int\"}]}";
  Producer producer;
  ProducerConfiguration producerConf;
  producerConf.setSchema(SchemaInfo(AVRO, "Avro", exampleSchema));
  client.createProducer("topic-avro", producerConf, producer);

  ```

- The following example shows how to create a consumer with an Avro schema.

  ```cpp

  static const std::string exampleSchema =
      "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\","
      "\"fields\":[{\"name\":\"a\",\"type\":\"int\"},{\"name\":\"b\",\"type\":\"int\"}]}";
  ConsumerConfiguration consumerConf;
  Consumer consumer;
  consumerConf.setSchema(SchemaInfo(AVRO, "Avro", exampleSchema));
  client.subscribe("topic-avro", "sub-2", consumerConf, consumer);

  ```

### ProtobufNative schema

The following example shows how to create a producer and a consumer with a ProtobufNative schema.

1. Generate the `User` class using Protobuf3.

   :::note

   You need to use Protobuf3 or later versions.

   :::

   ```protobuf

   syntax = "proto3";

   message User {
       string name = 1;
       int32 age = 2;
   }

   ```

2. Include `ProtobufNativeSchema.h` in your source code. Ensure the Protobuf dependency has been added to your project.

   ```c++

   #include <pulsar/ProtobufNativeSchema.h>

   ```

3. Create a producer to send a `User` instance.

   ```c++

   ProducerConfiguration producerConf;
   producerConf.setSchema(createProtobufNativeSchema(User::GetDescriptor()));
   Producer producer;
   client.createProducer("topic-protobuf", producerConf, producer);
   User user;
   user.set_name("my-name");
   user.set_age(10);
   std::string content;
   user.SerializeToString(&content);
   producer.send(MessageBuilder().setContent(content).build());

   ```

4. Create a consumer to receive a `User` instance.
-​ - - ```c++ - - ConsumerConfiguration consumerConf; - consumerConf.setSchema(createProtobufNativeSchema(User::GetDescriptor())); - consumerConf.setSubscriptionInitialPosition(InitialPositionEarliest); - Consumer consumer; - client.subscribe("topic-protobuf", "my-sub", consumerConf, consumer); - Message msg; - consumer.receive(msg); - User user2; - user2.ParseFromArray(msg.getData(), msg.getLength()); - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/client-libraries-dotnet.md b/site2/website/versioned_docs/version-2.9.0-deprecated/client-libraries-dotnet.md deleted file mode 100644 index fbec5e473be69c..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/client-libraries-dotnet.md +++ /dev/null @@ -1,434 +0,0 @@ ---- -id: client-libraries-dotnet -title: Pulsar C# client -sidebar_label: "C#" -original_id: client-libraries-dotnet ---- - -You can use the Pulsar C# client (DotPulsar) to create Pulsar producers and consumers in C#. All the methods in the producer, consumer, and reader of a C# client are thread-safe. The official documentation for DotPulsar is available [here](https://github.com/apache/pulsar-dotpulsar/wiki). - -## Installation - -You can install the Pulsar C# client library either through the dotnet CLI or through the Visual Studio. This section describes how to install the Pulsar C# client library through the dotnet CLI. For information about how to install the Pulsar C# client library through the Visual Studio, see [here](https://docs.microsoft.com/en-us/visualstudio/mac/nuget-walkthrough?view=vsmac-2019). - -### Prerequisites - -Install the [.NET Core SDK](https://dotnet.microsoft.com/download/), which provides the dotnet command-line tool. Starting in Visual Studio 2017, the dotnet CLI is automatically installed with any .NET Core related workloads. - -### Procedures - -To install the Pulsar C# client library, following these steps: - -1. Create a project. - - 1. Create a folder for the project. - - 2. Open a terminal window and switch to the new folder. - - 3. Create the project using the following command. - - ``` - - dotnet new console - - ``` - - 4. Use `dotnet run` to test that the app has been created properly. - -2. Add the DotPulsar NuGet package. - - 1. Use the following command to install the `DotPulsar` package. - - ``` - - dotnet add package DotPulsar - - ``` - - 2. After the command completes, open the `.csproj` file to see the added reference. - - ```xml - - - - - - ``` - -## Client - -This section describes some configuration examples for the Pulsar C# client. - -### Create client - -This example shows how to create a Pulsar C# client connected to localhost. - -```c# - -var client = PulsarClient.Builder().Build(); - -``` - -To create a Pulsar C# client by using the builder, you can specify the following options. - -| Option | Description | Default | -| ---- | ---- | ---- | -| ServiceUrl | Set the service URL for the Pulsar cluster. | pulsar://localhost:6650 | -| RetryInterval | Set the time to wait before retrying an operation or a reconnection. | 3s | - -### Create producer - -This section describes how to create a producer. - -- Create a producer by using the builder. - - ```c# - - var producer = client.NewProducer() - .Topic("persistent://public/default/mytopic") - .Create(); - - ``` - -- Create a producer without using the builder. 
- - ```c# - - var options = new ProducerOptions("persistent://public/default/mytopic"); - var producer = client.CreateProducer(options); - - ``` - -### Create consumer - -This section describes how to create a consumer. - -- Create a consumer by using the builder. - - ```c# - - var consumer = client.NewConsumer() - .SubscriptionName("MySubscription") - .Topic("persistent://public/default/mytopic") - .Create(); - - ``` - -- Create a consumer without using the builder. - - ```c# - - var options = new ConsumerOptions("MySubscription", "persistent://public/default/mytopic"); - var consumer = client.CreateConsumer(options); - - ``` - -### Create reader - -This section describes how to create a reader. - -- Create a reader by using the builder. - - ```c# - - var reader = client.NewReader() - .StartMessageId(MessageId.Earliest) - .Topic("persistent://public/default/mytopic") - .Create(); - - ``` - -- Create a reader without using the builder. - - ```c# - - var options = new ReaderOptions(MessageId.Earliest, "persistent://public/default/mytopic"); - var reader = client.CreateReader(options); - - ``` - -### Configure encryption policies - -The Pulsar C# client supports four kinds of encryption policies: - -- `EnforceUnencrypted`: always use unencrypted connections. -- `EnforceEncrypted`: always use encrypted connections) -- `PreferUnencrypted`: use unencrypted connections, if possible. -- `PreferEncrypted`: use encrypted connections, if possible. - -This example shows how to set the `EnforceUnencrypted` encryption policy. - -```c# - -var client = PulsarClient.Builder() - .ConnectionSecurity(EncryptionPolicy.EnforceEncrypted) - .Build(); - -``` - -### Configure authentication - -Currently, the Pulsar C# client supports the TLS (Transport Layer Security) and JWT (JSON Web Token) authentication. - -If you have followed [Authentication using TLS](security-tls-authentication.md), you get a certificate and a key. To use them from the Pulsar C# client, follow these steps: - -1. Create an unencrypted and password-less pfx file. - - ```c# - - openssl pkcs12 -export -keypbe NONE -certpbe NONE -out admin.pfx -inkey admin.key.pem -in admin.cert.pem -passout pass: - - ``` - -2. Use the admin.pfx file to create an X509Certificate2 and pass it to the Pulsar C# client. - - ```c# - - var clientCertificate = new X509Certificate2("admin.pfx"); - var client = PulsarClient.Builder() - .AuthenticateUsingClientCertificate(clientCertificate) - .Build(); - - ``` - -## Producer - -A producer is a process that attaches to a topic and publishes messages to a Pulsar broker for processing. This section describes some configuration examples about the producer. - -## Send data - -This example shows how to send data. - -```c# - -var data = Encoding.UTF8.GetBytes("Hello World"); -await producer.Send(data); - -``` - -### Send messages with customized metadata - -- Send messages with customized metadata by using the builder. - - ```c# - - var data = Encoding.UTF8.GetBytes("Hello World"); - var messageId = await producer.NewMessage() - .Property("SomeKey", "SomeValue") - .Send(data); - - ``` - -- Send messages with customized metadata without using the builder. - - ```c# - - var data = Encoding.UTF8.GetBytes("Hello World"); - var metadata = new MessageMetadata(); - metadata["SomeKey"] = "SomeValue"; - var messageId = await producer.Send(metadata, data)); - - ``` - -## Consumer - -A consumer is a process that attaches to a topic through a subscription and then receives messages. 
This section describes some configuration examples for the consumer.

### Receive messages

This example shows how a consumer receives messages from a topic.

```c#

await foreach (var message in consumer.Messages())
{
    Console.WriteLine("Received: " + Encoding.UTF8.GetString(message.Data.ToArray()));
}

```

### Acknowledge messages

Messages can be acknowledged individually or cumulatively. For details about message acknowledgement, see [acknowledgement](concepts-messaging.md#acknowledgement).

- Acknowledge messages individually.

  ```c#

  await foreach (var message in consumer.Messages())
  {
      Console.WriteLine("Received: " + Encoding.UTF8.GetString(message.Data.ToArray()));
      await consumer.Acknowledge(message);
  }

  ```

- Acknowledge messages cumulatively.

  ```c#

  await consumer.AcknowledgeCumulative(message);

  ```

### Unsubscribe from topics

This example shows how a consumer unsubscribes from a topic.

```c#

await consumer.Unsubscribe();

```

#### Note

> A consumer cannot be used and is disposed once the consumer unsubscribes from a topic.

## Reader

A reader is actually just a consumer without a cursor. This means that Pulsar does not keep track of your progress and there is no need to acknowledge messages.

This example shows how a reader receives messages.

```c#

await foreach (var message in reader.Messages())
{
    Console.WriteLine("Received: " + Encoding.UTF8.GetString(message.Data.ToArray()));
}

```

## Monitoring

This section describes how to monitor the producer, consumer, and reader state.

### Monitor producer

The following table lists states available for the producer.

| State | Description |
| ---- | ----|
| Closed | The producer or the Pulsar client has been disposed. |
| Connected | All is well. |
| Disconnected | The connection is lost and attempts are being made to reconnect. |
| Faulted | An unrecoverable error has occurred. |

This example shows how to monitor the producer state.

```c#

private static async ValueTask Monitor(IProducer producer, CancellationToken cancellationToken)
{
    var state = ProducerState.Disconnected;

    while (!cancellationToken.IsCancellationRequested)
    {
        state = await producer.StateChangedFrom(state, cancellationToken);

        var stateMessage = state switch
        {
            ProducerState.Connected => "The producer is connected",
            ProducerState.Disconnected => "The producer is disconnected",
            ProducerState.Closed => "The producer has closed",
            ProducerState.Faulted => "The producer has faulted",
            _ => $"The producer has an unknown state '{state}'"
        };

        Console.WriteLine(stateMessage);

        if (producer.IsFinalState(state))
            return;
    }
}

```

### Monitor consumer state

The following table lists states available for the consumer.

| State | Description |
| ---- | ----|
| Active | All is well. |
| Inactive | All is well. The subscription type is `Failover` and you are not the active consumer. |
| Closed | The consumer or the Pulsar client has been disposed. |
| Disconnected | The connection is lost and attempts are being made to reconnect. |
| Faulted | An unrecoverable error has occurred. |
| ReachedEndOfTopic | No more messages are delivered. |

This example shows how to monitor the consumer state.
- -```c# - -private static async ValueTask Monitor(IConsumer consumer, CancellationToken cancellationToken) -{ - var state = ConsumerState.Disconnected; - - while (!cancellationToken.IsCancellationRequested) - { - state = await consumer.StateChangedFrom(state, cancellationToken); - - var stateMessage = state switch - { - ConsumerState.Active => "The consumer is active", - ConsumerState.Inactive => "The consumer is inactive", - ConsumerState.Disconnected => "The consumer is disconnected", - ConsumerState.Closed => "The consumer has closed", - ConsumerState.ReachedEndOfTopic => "The consumer has reached end of topic", - ConsumerState.Faulted => "The consumer has faulted", - _ => $"The consumer has an unknown state '{state}'" - }; - - Console.WriteLine(stateMessage); - - if (consumer.IsFinalState(state)) - return; - } -} - -``` - -### Monitor reader state - -The following table lists states available for the reader. - -| State | Description | -| ---- | ----| -| Closed | The reader or the Pulsar client has been disposed. | -| Connected | All is well. | -| Disconnected | The connection is lost and attempts are being made to reconnect. -| Faulted | An unrecoverable error has occurred. | -| ReachedEndOfTopic | No more messages are delivered. | - -This example shows how to monitor the reader state. - -```c# - -private static async ValueTask Monitor(IReader reader, CancellationToken cancellationToken) -{ - var state = ReaderState.Disconnected; - - while (!cancellationToken.IsCancellationRequested) - { - state = await reader.StateChangedFrom(state, cancellationToken); - - var stateMessage = state switch - { - ReaderState.Connected => "The reader is connected", - ReaderState.Disconnected => "The reader is disconnected", - ReaderState.Closed => "The reader has closed", - ReaderState.ReachedEndOfTopic => "The reader has reached end of topic", - ReaderState.Faulted => "The reader has faulted", - _ => $"The reader has an unknown state '{state}'" - }; - - Console.WriteLine(stateMessage); - - if (reader.IsFinalState(state)) - return; - } -} - -``` - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/client-libraries-go.md b/site2/website/versioned_docs/version-2.9.0-deprecated/client-libraries-go.md deleted file mode 100644 index 6281b03dd8c805..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/client-libraries-go.md +++ /dev/null @@ -1,885 +0,0 @@ ---- -id: client-libraries-go -title: Pulsar Go client -sidebar_label: "Go" -original_id: client-libraries-go ---- - -> Tips: Currently, the CGo client will be deprecated, if you want to know more about the CGo client, please refer to [CGo client docs](client-libraries-cgo.md) - -You can use Pulsar [Go client](https://github.com/apache/pulsar-client-go) to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Go (aka Golang). - -> **API docs available as well** -> For standard API docs, consult the [Godoc](https://godoc.org/github.com/apache/pulsar-client-go/pulsar). - - -## Installation - -### Install go package - -You can install the `pulsar` library locally using `go get`. - -```bash - -$ go get -u "github.com/apache/pulsar-client-go/pulsar" - -``` - -Once installed locally, you can import it into your project: - -```go - -import "github.com/apache/pulsar-client-go/pulsar" - -``` - -## Connection URLs - -To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL. 
Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme and have a default port of 6650. Here's an example for `localhost`:

```http

pulsar://localhost:6650

```

If you have multiple brokers, you can set the URL as below.

```

pulsar://localhost:6650,localhost:6651,localhost:6652

```

A URL for a production Pulsar cluster may look something like this:

```http

pulsar://pulsar.us-west.example.com:6650

```

If you're using [TLS](security-tls-authentication.md) authentication, the URL will look something like this:

```http

pulsar+ssl://pulsar.us-west.example.com:6651

```

## Create a client

In order to interact with Pulsar, you'll first need a `Client` object. You can create a client object using the `NewClient` function, passing in a `ClientOptions` object (more on configuration [below](#client-configuration)). Here's an example:

```go

import (
    "log"
    "time"

    "github.com/apache/pulsar-client-go/pulsar"
)

func main() {
    client, err := pulsar.NewClient(pulsar.ClientOptions{
        URL:               "pulsar://localhost:6650",
        OperationTimeout:  30 * time.Second,
        ConnectionTimeout: 30 * time.Second,
    })
    if err != nil {
        log.Fatalf("Could not instantiate Pulsar client: %v", err)
    }

    defer client.Close()
}

```

If you have multiple brokers, you can initiate a client object as below.

```go

import (
    "log"
    "time"

    "github.com/apache/pulsar-client-go/pulsar"
)

func main() {
    client, err := pulsar.NewClient(pulsar.ClientOptions{
        URL:               "pulsar://localhost:6650,localhost:6651,localhost:6652",
        OperationTimeout:  30 * time.Second,
        ConnectionTimeout: 30 * time.Second,
    })
    if err != nil {
        log.Fatalf("Could not instantiate Pulsar client: %v", err)
    }

    defer client.Close()
}

```

The following configurable parameters are available for Pulsar clients:

| Name | Description | Default |
| :-------- | :---------- | :---------- |
| URL | Configure the service URL for the Pulsar service.<br/><br/>If you have multiple brokers, you can set multiple Pulsar cluster addresses for a client.<br/><br/>This parameter is **required**. | None |
| ConnectionTimeout | Timeout for the establishment of a TCP connection | 30s |
| OperationTimeout| Set the operation timeout. Producer-create, subscribe and unsubscribe operations will be retried until this interval, after which the operation will be marked as failed| 30s|
| Authentication | Configure the authentication provider. Example: `Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem")` | no authentication |
| TLSTrustCertsFilePath | Set the path to the trusted TLS certificate file | |
| TLSAllowInsecureConnection | Configure whether the Pulsar client accepts untrusted TLS certificates from the broker | false |
| TLSValidateHostname | Configure whether the Pulsar client verifies the validity of the host name from the broker | false |
| ListenerName | Configure the net model for VPC users to connect to the Pulsar broker | |
| MaxConnectionsPerBroker | Max number of connections to a single broker that is kept in the pool | 1 |
| CustomMetricsLabels | Add custom labels to all the metrics reported by this client instance | |
| Logger | Configure the logger used by the client | logrus.StandardLogger |

## Producers

Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Go producers using a `ProducerOptions` object. Here's an example:

```go

producer, err := client.CreateProducer(pulsar.ProducerOptions{
    Topic: "my-topic",
})

if err != nil {
    log.Fatal(err)
}

_, err = producer.Send(context.Background(), &pulsar.ProducerMessage{
    Payload: []byte("hello"),
})

defer producer.Close()

if err != nil {
    fmt.Println("Failed to publish message", err)
}
fmt.Println("Published message")

```

### Producer operations

Pulsar Go producers have the following methods available:

Method | Description | Return type
:------|:------------|:-----------
`Topic()` | Fetches the producer's [topic](reference-terminology.md#topic) | `string`
`Name()` | Fetches the producer's name | `string`
`Send(context.Context, *ProducerMessage)` | Publishes a [message](#messages) to the producer's topic. This call will block until the message is successfully acknowledged by the Pulsar broker, or an error will be thrown if the timeout set using the `SendTimeout` in the producer's [configuration](#producer-configuration) is exceeded. | `(MessageID, error)`
`SendAsync(context.Context, *ProducerMessage, func(MessageID, *ProducerMessage, error))`| Publishes a message to the producer's topic asynchronously. The callback is invoked once the message is successfully acknowledged by the Pulsar broker, or with an error if publishing fails. |
`LastSequenceID()` | Gets the last sequence ID that was published by this producer. This represents either the automatically assigned or custom sequence ID (set on the `ProducerMessage`) that was published and acknowledged by the broker. | `int64`
`Flush()`| Flushes all the messages buffered in the client and waits until all messages have been successfully persisted. | `error`
`Close()` | Closes the producer and releases all resources allocated to it. If `Close()` is called, no more messages will be accepted from the publisher. This method will block until all pending publish requests have been persisted by Pulsar. If an error is thrown, no pending writes will be retried.
### Producer Example

#### How to use message router in producer

```go

client, err := pulsar.NewClient(pulsar.ClientOptions{
    URL: serviceURL,
})
if err != nil {
    log.Fatal(err)
}
defer client.Close()

// Only subscribe on the specific partition
consumer, err := client.Subscribe(pulsar.ConsumerOptions{
    Topic:            "my-partitioned-topic-partition-2",
    SubscriptionName: "my-sub",
})
if err != nil {
    log.Fatal(err)
}
defer consumer.Close()

producer, err := client.CreateProducer(pulsar.ProducerOptions{
    Topic: "my-partitioned-topic",
    MessageRouter: func(msg *pulsar.ProducerMessage, tm pulsar.TopicMetadata) int {
        fmt.Println("Routing message ", msg, " -- Partitions: ", tm.NumPartitions())
        return 2
    },
})
if err != nil {
    log.Fatal(err)
}
defer producer.Close()

```

#### How to use schema interface in producer

```go

type testJSON struct {
    ID   int    `json:"id"`
    Name string `json:"name"`
}

```

```go

var (
    exampleSchemaDef = "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," +
        "\"fields\":[{\"name\":\"ID\",\"type\":\"int\"},{\"name\":\"Name\",\"type\":\"string\"}]}"
)

```

```go

client, err := pulsar.NewClient(pulsar.ClientOptions{
    URL: "pulsar://localhost:6650",
})
if err != nil {
    log.Fatal(err)
}
defer client.Close()

properties := make(map[string]string)
properties["pulsar"] = "hello"
jsonSchemaWithProperties := pulsar.NewJSONSchema(exampleSchemaDef, properties)
producer, err := client.CreateProducer(pulsar.ProducerOptions{
    Topic:  "jsonTopic",
    Schema: jsonSchemaWithProperties,
})
if err != nil {
    log.Fatal(err)
}

_, err = producer.Send(context.Background(), &pulsar.ProducerMessage{
    Value: &testJSON{
        ID:   100,
        Name: "pulsar",
    },
})
if err != nil {
    log.Fatal(err)
}
producer.Close()

```

#### How to use delay relative in producer

```go

client, err := pulsar.NewClient(pulsar.ClientOptions{
    URL: "pulsar://localhost:6650",
})
if err != nil {
    log.Fatal(err)
}
defer client.Close()

topicName := newTopicName()
producer, err := client.CreateProducer(pulsar.ProducerOptions{
    Topic:           topicName,
    DisableBatching: true,
})
if err != nil {
    log.Fatal(err)
}
defer producer.Close()

consumer, err := client.Subscribe(pulsar.ConsumerOptions{
    Topic:            topicName,
    SubscriptionName: "subName",
    Type:             pulsar.Shared,
})
if err != nil {
    log.Fatal(err)
}
defer consumer.Close()

ID, err := producer.Send(context.Background(), &pulsar.ProducerMessage{
    Payload:      []byte("test"),
    DeliverAfter: 3 * time.Second,
})
if err != nil {
    log.Fatal(err)
}
fmt.Println(ID)

// The message is delayed by 3 seconds, so this 1-second receive is expected to time out.
ctx, canc := context.WithTimeout(context.Background(), 1*time.Second)
if msg, err := consumer.Receive(ctx); err != nil {
    fmt.Println("no message within 1 second, as expected:", err)
} else {
    fmt.Println("unexpectedly received:", string(msg.Payload()))
}
canc()

// Within 5 seconds, the delayed message becomes available.
ctx, canc = context.WithTimeout(context.Background(), 5*time.Second)
msg, err := consumer.Receive(ctx)
if err != nil {
    log.Fatal(err)
}
fmt.Println(string(msg.Payload()))
canc()

```
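Where `DeliverAfter` delays delivery relative to publish time, the `ProducerMessage` field `DeliverAt` targets an absolute timestamp instead. Here is a minimal sketch, reusing the producer from the example above (as with `DeliverAfter`, delayed delivery takes effect on shared subscriptions):

```go

// Deliver the message at (or after) a fixed wall-clock time.
_, err = producer.Send(context.Background(), &pulsar.ProducerMessage{
    Payload:   []byte("scheduled"),
    DeliverAt: time.Now().Add(10 * time.Second),
})
if err != nil {
    log.Fatal(err)
}

```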
### Producer configuration

 Name | Description | Default
| :-------- | :---------- |:---------- |
| Topic | Topic specifies the topic this producer will publish on. This argument is required when constructing the producer. | |
| Name | Name specifies a name for the producer. If not assigned, the system generates a globally unique name which can be accessed with `Producer.ProducerName()`. | |
| Properties | Properties attaches a set of application-defined properties to the producer. These properties will be visible in the topic stats | |
| SendTimeout | SendTimeout sets the timeout for a message that is not acknowledged by the server | 30s |
| DisableBlockIfQueueFull | DisableBlockIfQueueFull controls whether Send and SendAsync block if the producer's message queue is full | false |
| MaxPendingMessages | MaxPendingMessages sets the max size of the queue holding the messages pending to receive an acknowledgment from the broker. | |
| HashingScheme | HashingScheme changes the `HashingScheme` used to choose the partition on which to publish a particular message. | JavaStringHash |
| CompressionType | CompressionType sets the compression type for the producer. | not compressed |
| CompressionLevel | Define the desired compression level. Options: Default, Faster and Better | Default |
| MessageRouter | MessageRouter sets a custom message routing policy by passing an implementation of MessageRouter | |
| DisableBatching | DisableBatching controls whether automatic batching of messages is enabled for the producer. | false |
| BatchingMaxPublishDelay | BatchingMaxPublishDelay sets the time period within which the messages sent will be batched | 1ms |
| BatchingMaxMessages | BatchingMaxMessages sets the maximum number of messages permitted in a batch. | 1000 |
| BatchingMaxSize | BatchingMaxSize sets the maximum number of bytes permitted in a batch. | 128KB |
| Schema | Schema sets a custom schema type by passing an implementation of `Schema` | bytes[] |
| Interceptors | A chain of interceptors. These interceptors are called at some points defined in the `ProducerInterceptor` interface. | None |
| MaxReconnectToBroker | MaxReconnectToBroker sets the maximum number of reconnection attempts to the broker | unlimited |
| BatcherBuilderType | BatcherBuilderType sets the batch builder type. This is used to create a batch container when batching is enabled. Options: DefaultBatchBuilder and KeyBasedBatchBuilder | DefaultBatchBuilder |
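As an illustration of these options, the sketch below configures a producer for higher throughput by enabling compression and widening the batching window; the particular values are illustrative, not recommendations:

```go

producer, err := client.CreateProducer(pulsar.ProducerOptions{
    Topic:                   "my-topic",
    SendTimeout:             10 * time.Second,
    CompressionType:         pulsar.LZ4,
    BatchingMaxPublishDelay: 10 * time.Millisecond,
    BatchingMaxMessages:     500,
})
if err != nil {
    log.Fatal(err)
}
defer producer.Close()

```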
## Consumers

Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on that topic/those topics. You can [configure](#consumer-configuration) Go consumers using a `ConsumerOptions` object. Here's a basic example:

```go

consumer, err := client.Subscribe(pulsar.ConsumerOptions{
    Topic:            "topic-1",
    SubscriptionName: "my-sub",
    Type:             pulsar.Shared,
})
if err != nil {
    log.Fatal(err)
}
defer consumer.Close()

for i := 0; i < 10; i++ {
    msg, err := consumer.Receive(context.Background())
    if err != nil {
        log.Fatal(err)
    }

    fmt.Printf("Received message msgId: %#v -- content: '%s'\n",
        msg.ID(), string(msg.Payload()))

    consumer.Ack(msg)
}

if err := consumer.Unsubscribe(); err != nil {
    log.Fatal(err)
}

```

### Consumer operations

Pulsar Go consumers have the following methods available:

Method | Description | Return type
:------|:------------|:-----------
`Subscription()` | Returns the consumer's subscription name | `string`
`Unsubscribe()` | Unsubscribes the consumer from the assigned topic. Returns an error if the unsubscribe operation fails. | `error`
`Receive(context.Context)` | Receives a single message from the topic. This method blocks until a message is available. | `(Message, error)`
`Chan()` | Returns a channel from which to consume messages. | `<-chan ConsumerMessage`
`Ack(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) |
`AckID(MessageID)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID |
`ReconsumeLater(msg Message, delay time.Duration)` | Marks a message for redelivery after a custom delay |
`Nack(Message)` | Acknowledges the failure to process a single message. |
`NackID(MessageID)` | Acknowledges the failure to process a single message by message ID. |
`Seek(msgID MessageID)` | Resets the subscription associated with this consumer to a specific message ID. The message ID can either be a specific message or represent the first or last messages in the topic. | `error`
`SeekByTime(time time.Time)` | Resets the subscription associated with this consumer to a specific message publish time. | `error`
`Close()` | Closes the consumer, disabling its ability to receive messages from the broker |
`Name()` | Returns the name of the consumer | `string`
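Tying a few of these operations together, here is a minimal sketch of a receive loop that acks on success and nacks on failure, so that failed messages are redelivered after the configured `NackRedeliveryDelay` (`process` is a hypothetical handler, not part of the client API):

```go

for {
    msg, err := consumer.Receive(context.Background())
    if err != nil {
        log.Fatal(err)
    }

    // process is a hypothetical application-defined handler.
    if err := process(msg.Payload()); err != nil {
        // Processing failed: request redelivery after the NackRedeliveryDelay.
        consumer.Nack(msg)
        continue
    }
    consumer.Ack(msg)
}

```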
### Receive example

#### How to use regex consumer

```go

client, err := pulsar.NewClient(pulsar.ClientOptions{
    URL: "pulsar://localhost:6650",
})
if err != nil {
    log.Fatal(err)
}
defer client.Close()

p, err := client.CreateProducer(pulsar.ProducerOptions{
    Topic:           topicInRegex,
    DisableBatching: true,
})
if err != nil {
    log.Fatal(err)
}
defer p.Close()

topicsPattern := fmt.Sprintf("persistent://%s/foo.*", namespace)
opts := pulsar.ConsumerOptions{
    TopicsPattern:    topicsPattern,
    SubscriptionName: "regex-sub",
}
consumer, err := client.Subscribe(opts)
if err != nil {
    log.Fatal(err)
}
defer consumer.Close()

```

#### How to use multi topics Consumer

```go

func newTopicName() string {
    return fmt.Sprintf("my-topic-%v", time.Now().Nanosecond())
}

topic1 := "topic-1"
topic2 := "topic-2"

client, err := pulsar.NewClient(pulsar.ClientOptions{
    URL: "pulsar://localhost:6650",
})
if err != nil {
    log.Fatal(err)
}
topics := []string{topic1, topic2}
consumer, err := client.Subscribe(pulsar.ConsumerOptions{
    Topics:           topics,
    SubscriptionName: "multi-topic-sub",
})
if err != nil {
    log.Fatal(err)
}
defer consumer.Close()

```

#### How to use consumer listener

```go

import (
    "fmt"
    "log"

    "github.com/apache/pulsar-client-go/pulsar"
)

func main() {
    client, err := pulsar.NewClient(pulsar.ClientOptions{URL: "pulsar://localhost:6650"})
    if err != nil {
        log.Fatal(err)
    }

    defer client.Close()

    channel := make(chan pulsar.ConsumerMessage, 100)

    options := pulsar.ConsumerOptions{
        Topic:            "topic-1",
        SubscriptionName: "my-subscription",
        Type:             pulsar.Shared,
    }

    options.MessageChannel = channel

    consumer, err := client.Subscribe(options)
    if err != nil {
        log.Fatal(err)
    }

    defer consumer.Close()

    // Receive messages from the channel. The channel returns a struct that contains the message
    // and the consumer from which the message was received. That's not necessary here, since we
    // have a single consumer, but the channel could be shared across multiple consumers as well.
    for cm := range channel {
        msg := cm.Message
        fmt.Printf("Received message msgId: %v -- content: '%s'\n",
            msg.ID(), string(msg.Payload()))

        consumer.Ack(msg)
    }
}

```

#### How to use consumer receive timeout

```go

client, err := pulsar.NewClient(pulsar.ClientOptions{
    URL: "pulsar://localhost:6650",
})
if err != nil {
    log.Fatal(err)
}
defer client.Close()

topic := "test-topic-with-no-messages"
ctx, cancel := context.WithTimeout(context.Background(), 500*time.Millisecond)
defer cancel()

// create consumer
consumer, err := client.Subscribe(pulsar.ConsumerOptions{
    Topic:            topic,
    SubscriptionName: "my-sub1",
    Type:             pulsar.Shared,
})
if err != nil {
    log.Fatal(err)
}
defer consumer.Close()

// Receive returns an error once the context times out.
msg, err := consumer.Receive(ctx)
if err != nil {
    log.Fatal(err)
}
fmt.Println(msg.Payload())

```

#### How to use schema in consumer

```go

type testJSON struct {
    ID   int    `json:"id"`
    Name string `json:"name"`
}

```

```go

var (
    exampleSchemaDef = "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," +
        "\"fields\":[{\"name\":\"ID\",\"type\":\"int\"},{\"name\":\"Name\",\"type\":\"string\"}]}"
)

```

```go

client, err := pulsar.NewClient(pulsar.ClientOptions{
    URL: "pulsar://localhost:6650",
})
if err != nil {
    log.Fatal(err)
}
defer client.Close()

var s testJSON

consumerJS := pulsar.NewJSONSchema(exampleSchemaDef, nil)
consumer, err := client.Subscribe(pulsar.ConsumerOptions{
    Topic:                       "jsonTopic",
    SubscriptionName:            "sub-1",
    Schema:                      consumerJS,
    SubscriptionInitialPosition: pulsar.SubscriptionPositionEarliest,
})
if err != nil {
    log.Fatal(err)
}
defer consumer.Close()

msg, err := consumer.Receive(context.Background())
if err != nil {
    log.Fatal(err)
}
err = msg.GetSchemaValue(&s)
if err != nil {
    log.Fatal(err)
}

```

### Consumer configuration

 Name | Description | Default
| :-------- | :---------- |:---------- |
| Topic | Topic specifies the topic this consumer will subscribe to. Either a topic, a list of topics, or a topics pattern is required when subscribing. | |
| Topics | Specify a list of topics this consumer will subscribe on. Either a topic, a list of topics, or a topics pattern is required when subscribing. | |
| TopicsPattern | Specify a regular expression to subscribe to multiple topics under the same namespace. Either a topic, a list of topics, or a topics pattern is required when subscribing. | |
| AutoDiscoveryPeriod | Specify the interval in which to poll for new partitions or new topics if using a TopicsPattern. | |
| SubscriptionName | Specify the subscription name for this consumer. This argument is required when subscribing. | |
| Name | Set the consumer name | |
| Properties | Properties attaches a set of application-defined properties to the consumer. These properties will be visible in the topic stats | |
| Type | Select the subscription type to be used when subscribing to the topic. | Exclusive |
| SubscriptionInitialPosition | Initial position at which the cursor will be set when subscribing | Latest |
| DLQ | Configuration for the Dead Letter Queue consumer policy. | no DLQ |
| MessageChannel | Sets a `MessageChannel` for the consumer. When a message is received, it will be pushed to the channel for consumption | |
| ReceiverQueueSize | Sets the size of the consumer receive queue. | 1000 |
| NackRedeliveryDelay | The delay after which to redeliver the messages that failed to be processed | 1min |
| ReadCompacted | If enabled, the consumer will read messages from the compacted topic rather than reading the full message backlog of the topic | false |
| ReplicateSubscriptionState | Mark the subscription as replicated to keep it in sync across clusters | false |
| KeySharedPolicy | Configuration for the Key Shared consumer policy. | |
| RetryEnable | Automatically retry failed messages by sending them to the retry topic defined by the default-filled DLQ policy | false |
| Interceptors | A chain of interceptors. These interceptors are called at some points defined in the `ConsumerInterceptor` interface. | |
| MaxReconnectToBroker | MaxReconnectToBroker sets the maximum number of reconnection attempts to the broker. | unlimited |
| Schema | Schema sets a custom schema type by passing an implementation of `Schema` | bytes[] |

## Readers

Pulsar readers process messages from Pulsar topics. Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recent unacked message). You can [configure](#reader-configuration) Go readers using a `ReaderOptions` object. Here's an example:

```go

reader, err := client.CreateReader(pulsar.ReaderOptions{
    Topic:          "topic-1",
    StartMessageID: pulsar.EarliestMessageID(),
})
if err != nil {
    log.Fatal(err)
}
defer reader.Close()

```

### Reader operations

Pulsar Go readers have the following methods available:

Method | Description | Return type
:------|:------------|:-----------
`Topic()` | Returns the reader's [topic](reference-terminology.md#topic) | `string`
`Next(context.Context)` | Receives the next message on the topic (analogous to the `Receive` method for [consumers](#consumer-operations)). This method blocks until a message is available. | `(Message, error)`
`HasNext()` | Checks if there is any message available to read from the current position | `(bool, error)`
`Close()` | Closes the reader, disabling its ability to receive messages from the broker | `error`
`Seek(MessageID)` | Resets the subscription associated with this reader to a specific message ID | `error`
`SeekByTime(time time.Time)` | Resets the subscription associated with this reader to a specific message publish time | `error`

### Reader example

#### How to use reader to read 'next' message

Here's an example usage of a Go reader that uses the `Next()` method to process incoming messages:

```go

import (
    "context"
    "fmt"
    "log"

    "github.com/apache/pulsar-client-go/pulsar"
)

func main() {
    client, err := pulsar.NewClient(pulsar.ClientOptions{URL: "pulsar://localhost:6650"})
    if err != nil {
        log.Fatal(err)
    }

    defer client.Close()

    reader, err := client.CreateReader(pulsar.ReaderOptions{
        Topic:          "topic-1",
        StartMessageID: pulsar.EarliestMessageID(),
    })
    if err != nil {
        log.Fatal(err)
    }
    defer reader.Close()

    for reader.HasNext() {
        msg, err := reader.Next(context.Background())
        if err != nil {
            log.Fatal(err)
        }

        fmt.Printf("Received message msgId: %#v -- content: '%s'\n",
            msg.ID(), string(msg.Payload()))
    }
}

```

In the example above, the reader begins reading from the earliest available message (specified by `pulsar.EarliestMessageID()`). The reader can also begin reading from the latest message (`pulsar.LatestMessageID()`) or some other message ID specified by bytes using the `DeserializeMessageID` function, which takes a byte array and returns a `MessageID` object. Here's an example:

```go

// Read the last saved message ID from an external store as a byte slice.
var lastSavedID []byte

msgID, err := pulsar.DeserializeMessageID(lastSavedID)
if err != nil {
    log.Fatal(err)
}

reader, err := client.CreateReader(pulsar.ReaderOptions{
    Topic:          "my-golang-topic",
    StartMessageID: msgID,
})

```

#### How to use reader to read specific message

```go

client, err := pulsar.NewClient(pulsar.ClientOptions{
    URL: lookupURL,
})
if err != nil {
    log.Fatal(err)
}
defer client.Close()

topic := "topic-1"
ctx := context.Background()

// create producer
producer, err := client.CreateProducer(pulsar.ProducerOptions{
    Topic:           topic,
    DisableBatching: true,
})
if err != nil {
    log.Fatal(err)
}
defer producer.Close()

// send 10 messages
msgIDs := [10]pulsar.MessageID{}
for i := 0; i < 10; i++ {
    msgID, err := producer.Send(ctx, &pulsar.ProducerMessage{
        Payload: []byte(fmt.Sprintf("hello-%d", i)),
    })
    if err != nil {
        log.Fatal(err)
    }
    msgIDs[i] = msgID
}

// create reader on 5th message (not included)
reader, err := client.CreateReader(pulsar.ReaderOptions{
    Topic:          topic,
    StartMessageID: msgIDs[4],
})
if err != nil {
    log.Fatal(err)
}
defer reader.Close()

// receive the remaining 5 messages
for i := 5; i < 10; i++ {
    msg, err := reader.Next(context.Background())
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(string(msg.Payload()))
}

// create reader on 5th message (included)
readerInclusive, err := client.CreateReader(pulsar.ReaderOptions{
    Topic:                   topic,
    StartMessageID:          msgIDs[4],
    StartMessageIDInclusive: true,
})
if err != nil {
    log.Fatal(err)
}
defer readerInclusive.Close()

```

### Reader configuration

 Name | Description | Default
| :-------- | :---------- |:---------- |
| Topic | Topic specifies the topic this reader will read from. This argument is required when constructing the reader. | |
| Name | Name sets the reader name. | |
| Properties | Attach a set of application-defined properties to the reader. These properties will be visible in the topic stats | |
| StartMessageID | StartMessageID sets the initial reader position, specified by a message ID. | |
| StartMessageIDInclusive | If true, the reader will start at the `StartMessageID`, included. Default is `false`, and the reader will start from the "next" message | false |
| MessageChannel | MessageChannel sets a `MessageChannel` for the consumer. When a message is received, it will be pushed to the channel for consumption | |
| ReceiverQueueSize | ReceiverQueueSize sets the size of the consumer receive queue. | 1000 |
| SubscriptionRolePrefix | SubscriptionRolePrefix sets the subscription role prefix. | "reader" |
| ReadCompacted | If enabled, the reader will read messages from the compacted topic rather than reading the full message backlog of the topic. ReadCompacted can only be enabled when reading from a persistent topic. | false |
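Combining a few of these options, here is a minimal sketch of a reader that tails a compacted persistent topic from the latest position (the topic name is illustrative):

```go

reader, err := client.CreateReader(pulsar.ReaderOptions{
    Topic:             "my-compacted-topic",
    StartMessageID:    pulsar.LatestMessageID(),
    ReceiverQueueSize: 1000,
    ReadCompacted:     true, // only valid when reading from a persistent topic
})
if err != nil {
    log.Fatal(err)
}
defer reader.Close()

```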
## Messages

The Pulsar Go client provides a `ProducerMessage` interface that you can use to construct messages to produce on Pulsar topics. Here's an example message:

```go

msg := pulsar.ProducerMessage{
    Payload: []byte("Here is some message data"),
    Key:     "message-key",
    Properties: map[string]string{
        "foo": "bar",
    },
    EventTime:           time.Now(),
    ReplicationClusters: []string{"cluster1", "cluster3"},
}

if _, err := producer.Send(context.Background(), &msg); err != nil {
    log.Fatalf("Could not publish message due to: %v", err)
}

```

The following parameters are available for `ProducerMessage` objects:

Parameter | Description
:---------|:-----------
`Payload` | The actual data payload of the message
`Value` | Value and payload are mutually exclusive; `Value interface{}` is used for schema messages.
`Key` | The optional key associated with the message (particularly useful for things like topic compaction)
`OrderingKey` | OrderingKey sets the ordering key of the message.
`Properties` | A key-value map (both keys and values must be strings) for any application-specific metadata attached to the message
`EventTime` | The timestamp associated with the message
`ReplicationClusters` | The clusters to which this message will be replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default.
`SequenceID` | Set the sequence ID to assign to the current message
`DeliverAfter` | Request to deliver the message only after the specified relative delay
`DeliverAt` | Deliver the message only at or after the specified absolute timestamp

## TLS encryption and authentication

In order to use [TLS encryption](security-tls-transport.md), you'll need to configure your client to do so:

 * Use `pulsar+ssl` URL type
 * Set `TLSTrustCertsFilePath` to the path to the TLS certs used by your client and the Pulsar broker
 * Configure `Authentication` option

Here's an example:

```go

opts := pulsar.ClientOptions{
    URL:                   "pulsar+ssl://my-cluster.com:6651",
    TLSTrustCertsFilePath: "/path/to/certs/my-cert.csr",
    Authentication:        pulsar.NewAuthenticationTLS("my-cert.pem", "my-key.pem"),
}

```

## OAuth2 authentication

To use [OAuth2 authentication](security-oauth2.md), you'll need to configure your client with the issuer, audience, and credentials. The following example shows how to configure OAuth2 authentication.

```go

oauth := pulsar.NewAuthenticationOAuth2(map[string]string{
    "type":       "client_credentials",
    "issuerUrl":  "https://dev-kt-aa9ne.us.auth0.com",
    "audience":   "https://dev-kt-aa9ne.us.auth0.com/api/v2/",
    "privateKey": "/path/to/privateKey",
    "clientId":   "0Xx...Yyxeny",
})
client, err := pulsar.NewClient(pulsar.ClientOptions{
    URL:            "pulsar://my-cluster:6650",
    Authentication: oauth,
})

```

diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/client-libraries-java.md b/site2/website/versioned_docs/version-2.9.0-deprecated/client-libraries-java.md
deleted file mode 100644
index c3d41a3f13da2e..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/client-libraries-java.md
+++ /dev/null
@@ -1,1038 +0,0 @@
----
-id: client-libraries-java
-title: Pulsar Java client
-sidebar_label: "Java"
-original_id: client-libraries-java
----

You can use a Pulsar Java client to create the Java [producer](#producer), [consumer](#consumer), and [reader](#reader) of messages and to perform [administrative tasks](admin-api-overview.md). The current Java client version is **@pulsar:version@**.

All the methods in [producer](#producer), [consumer](#consumer), and [reader](#reader) of a Java client are thread-safe.
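Because these objects are thread-safe, a common pattern, sketched below under that assumption, is to build a single `PulsarClient` per process and share it across producers and consumers (topic and subscription names are illustrative):

```java

PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650")
        .build();

// One shared client can safely back many producers and consumers.
Producer<byte[]> producer = client.newProducer()
        .topic("my-topic")
        .create();
Consumer<byte[]> consumer = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .subscribe();

```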
- -Javadoc for the Pulsar client is divided into two domains by package as follows. - -Package | Description | Maven Artifact -:-------|:------------|:-------------- -[`org.apache.pulsar.client.api`](/api/client) | The producer and consumer API | [org.apache.pulsar:pulsar-client:@pulsar:version@](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client%7C@pulsar:version@%7Cjar) -[`org.apache.pulsar.client.admin`](/api/admin) | The Java [admin API](admin-api-overview.md) | [org.apache.pulsar:pulsar-client-admin:@pulsar:version@](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client-admin%7C@pulsar:version@%7Cjar) -`org.apache.pulsar.client.all` |Include both `pulsar-client` and `pulsar-client-admin`
    Both `pulsar-client` and `pulsar-client-admin` are shaded packages and they shade dependencies independently. Consequently, applications that use both `pulsar-client` and `pulsar-client-admin` end up with redundant shaded classes. It would be troublesome if you introduce new dependencies but forget to update the shading rules. In this case, you can use `pulsar-client-all`, which shades dependencies only once and reduces the size of the dependencies. |[org.apache.pulsar:pulsar-client-all:@pulsar:version@](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client-all%7C@pulsar:version@%7Cjar)

This document focuses only on the client API for producing and consuming messages on Pulsar topics. For how to use the Java admin client, see [Pulsar admin interface](admin-api-overview.md).

## Installation

The latest version of the Pulsar Java client library is available via [Maven Central](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client%7C@pulsar:version@%7Cjar). To use the latest version, add the `pulsar-client` library to your build configuration.

### Maven

If you use Maven, add the following information to the `pom.xml` file.

```xml

<properties>
  <pulsar.version>@pulsar:version@</pulsar.version>
</properties>

<dependency>
  <groupId>org.apache.pulsar</groupId>
  <artifactId>pulsar-client</artifactId>
  <version>${pulsar.version}</version>
</dependency>

```

### Gradle

If you use Gradle, add the following information to the `build.gradle` file.

```groovy

def pulsarVersion = '@pulsar:version@'

dependencies {
    compile group: 'org.apache.pulsar', name: 'pulsar-client', version: pulsarVersion
}

```

## Connection URLs

To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL.

You can assign Pulsar protocol URLs to specific clusters and use the `pulsar` scheme. The default port is `6650`. The following is an example of `localhost`.

```http

pulsar://localhost:6650

```

If you have multiple brokers, the URL is as follows.

```http

pulsar://localhost:6650,localhost:6651,localhost:6652

```

A URL for a production Pulsar cluster is as follows.

```http

pulsar://pulsar.us-west.example.com:6650

```

If you use [TLS](security-tls-authentication.md) authentication, the URL is as follows.

```http

pulsar+ssl://pulsar.us-west.example.com:6651

```

## Client

You can instantiate a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object using just a URL for the target Pulsar [cluster](reference-terminology.md#cluster) like this:

```java

PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650")
        .build();

```

If you have multiple brokers, you can initiate a PulsarClient like this:

```java

PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650,localhost:6651,localhost:6652")
        .build();

```

> ### Default broker URLs for standalone clusters
> If you run a cluster in [standalone mode](getting-started-standalone.md), the broker is available at the `pulsar://localhost:6650` URL by default.

If you create a client, you can use the `loadConf` configuration. The following parameters are available in `loadConf`.
| Name | Type | Description | Default
|---|---|---|---
`serviceUrl` | String | Service URL provider for Pulsar service | None
`authPluginClassName` | String | Name of the authentication plugin | None
`authParams` | String | Parameters for the authentication plugin.
**Example**: key1:val1,key2:val2 | None
`operationTimeoutMs` | long | Operation timeout | 30000
`statsIntervalSeconds` | long | Interval between each stats update.
Stats are activated with a positive `statsInterval`.
Set `statsIntervalSeconds` to at least 1 second. | 60
`numIoThreads` | int | The number of threads used for handling connections to brokers | 1
`numListenerThreads` | int | The number of threads used for handling message listeners. The listener thread pool is shared across all the consumers and readers using the "listener" model to get messages. For a given consumer, the listener is always invoked from the same thread to ensure ordering. If you want multiple threads to process a single topic, you need to create a [`shared`](https://pulsar.apache.org/docs/en/next/concepts-messaging/#shared) subscription and multiple consumers for this subscription. This does not ensure ordering. | 1
`useTcpNoDelay` | boolean | Whether to use the TCP no-delay flag on the connection to disable the Nagle algorithm | true
`enableTls` | boolean | Whether to use TLS encryption on the connection. Note that this parameter is **deprecated**. If you want to enable TLS, use `pulsar+ssl://` in `serviceUrl` instead. | false
`tlsTrustCertsFilePath` | string | Path to the trusted TLS certificate file | None
`tlsAllowInsecureConnection` | boolean | Whether the Pulsar client accepts an untrusted TLS certificate from the broker | false
`tlsHostnameVerificationEnable` | boolean | Whether to enable TLS hostname verification | false
`concurrentLookupRequest` | int | The number of concurrent lookup requests allowed to send on each broker connection to prevent overload on the broker | 5000
`maxLookupRequest` | int | The maximum number of lookup requests allowed on each broker connection to prevent overload on the broker | 50000
`maxNumberOfRejectedRequestPerConnection` | int | The maximum number of rejected requests of a broker in a certain time frame (30 seconds) after which the current connection is closed and the client creates a new connection to connect to a different broker | 50
`keepAliveIntervalSeconds` | int | Keep-alive interval, in seconds, for each client-broker connection | 30
`connectionTimeoutMs` | int | Duration of waiting for a connection to a broker to be established.
If the duration passes without a response from a broker, the connection attempt is dropped | 10000
`requestTimeoutMs` | int | Maximum duration for completing a request | 60000
`defaultBackoffIntervalNanos` | int | Default duration for a backoff interval | TimeUnit.MILLISECONDS.toNanos(100)
`maxBackoffIntervalNanos` | long | Maximum duration for a backoff interval | TimeUnit.SECONDS.toNanos(30)
`socks5ProxyAddress` | SocketAddress | SOCKS5 proxy address | None
`socks5ProxyUsername` | string | SOCKS5 proxy username | None
`socks5ProxyPassword` | string | SOCKS5 proxy password | None

Check out the Javadoc for the {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} class for a full list of configurable parameters.

> In addition to client-level configuration, you can also apply [producer](#configure-producer) and [consumer](#configure-consumer) specific configuration as described in sections below.

## Producer

In Pulsar, producers write messages to topics. Once you've instantiated a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object (as in the section [above](#client-configuration)), you can create a {@inject: javadoc:Producer:/client/org/apache/pulsar/client/api/Producer} for a specific Pulsar [topic](reference-terminology.md#topic).

```java

Producer<byte[]> producer = client.newProducer()
        .topic("my-topic")
        .create();

// You can then send messages to the broker and topic you specified:
producer.send("My message".getBytes());

```

By default, producers produce messages that consist of byte arrays. You can produce different types by specifying a message [schema](#schema).

```java

Producer<String> stringProducer = client.newProducer(Schema.STRING)
        .topic("my-topic")
        .create();
stringProducer.send("My message");

```

> Make sure that you close your producers, consumers, and clients when you do not need them.

> ```java
>
> producer.close();
> consumer.close();
> client.close();
>
> ```

> Close operations can also be asynchronous:

> ```java
>
> producer.closeAsync()
>    .thenRun(() -> System.out.println("Producer closed"))
>    .exceptionally((ex) -> {
>        System.err.println("Failed to close producer: " + ex);
>        return null;
>    });
>
> ```

### Configure producer

If you instantiate a `Producer` object by specifying only a topic name as in the example above, the producer uses the default configuration.

If you create a producer, you can use the `loadConf` configuration. The following parameters are available in `loadConf`.

| Name | Type | Description | Default
|---|---|---|---
`topicName` | string | Topic name | null
`producerName` | string | Producer name | null
`sendTimeoutMs` | long | Message send timeout in ms.
If a message is not acknowledged by a server before the `sendTimeout` expires, an error occurs. | 30000
`blockIfQueueFull` | boolean | If it is set to `true`, when the outgoing message queue is full, the `Send` and `SendAsync` methods of the producer block, rather than failing and throwing errors.
If it is set to `false`, when the outgoing message queue is full, the `Send` and `SendAsync` methods of the producer fail and `ProducerQueueIsFullError` exceptions occur.
The `MaxPendingMessages` parameter determines the size of the outgoing message queue. | false
`maxPendingMessages` | int | The maximum size of a queue holding pending messages, that is, messages waiting to receive an acknowledgment from a [broker](reference-terminology.md#broker).
By default, when the queue is full, all calls to the `Send` and `SendAsync` methods fail **unless** you set `BlockIfQueueFull` to `true`. | 1000
`maxPendingMessagesAcrossPartitions` | int | The maximum number of pending messages across partitions.
Use the setting to lower the max pending messages for each partition ({@link #setMaxPendingMessages(int)}) if the total number exceeds the configured value. | 50000
`messageRoutingMode` | MessageRoutingMode | Message routing logic for producers on [partitioned topics](concepts-architecture-overview.md#partitioned-topics). This logic is applied only when no key is set on messages. Available options are as follows:
<li>`pulsar.RoundRobinDistribution`: round robin</li>
<li>`pulsar.UseSinglePartition`: publish all messages to a single partition</li>
<li>`pulsar.CustomPartition`: a custom partitioning scheme</li> | `pulsar.RoundRobinDistribution`
`hashingScheme` | HashingScheme | Hashing function determining the partition where you publish a particular message (**partitioned topics only**). Available options are as follows:
<li>`pulsar.JavastringHash`: the equivalent of `string.hashCode()` in Java</li>
<li>`pulsar.Murmur3_32Hash`: applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function</li>
<li>`pulsar.BoostHash`: applies the hashing function from C++'s [Boost](https://www.boost.org/doc/libs/1_62_0/doc/html/hash.html) library</li> | `HashingScheme.JavastringHash`
`cryptoFailureAction` | ProducerCryptoFailureAction | The action the producer takes when encryption fails.
<li>**FAIL**: if encryption fails, unencrypted messages fail to send.</li>
<li>**SEND**: if encryption fails, unencrypted messages are sent.</li> | `ProducerCryptoFailureAction.FAIL`
`batchingMaxPublishDelayMicros` | long | Batching time period of sending messages. | TimeUnit.MILLISECONDS.toMicros(1)
`batchingMaxMessages` | int | The maximum number of messages permitted in a batch. | 1000
`batchingEnabled` | boolean | Enable batching of messages. | true
`compressionType` | CompressionType | Message data compression type used by a producer. Available options:
<li>[`LZ4`](https://github.com/lz4/lz4)</li>
<li>[`ZLIB`](https://zlib.net/)</li>
<li>[`ZSTD`](https://facebook.github.io/zstd/)</li>
<li>[`SNAPPY`](https://google.github.io/snappy/)</li> | No compression

You can configure parameters if you do not want to use the default configuration.

For a full list, see the Javadoc for the {@inject: javadoc:ProducerBuilder:/client/org/apache/pulsar/client/api/ProducerBuilder} class. The following is an example.

```java

Producer<byte[]> producer = client.newProducer()
        .topic("my-topic")
        .batchingMaxPublishDelay(10, TimeUnit.MILLISECONDS)
        .sendTimeout(10, TimeUnit.SECONDS)
        .blockIfQueueFull(true)
        .create();

```

### Message routing

When using partitioned topics, you can specify the routing mode whenever you publish messages using a producer. For more information on specifying a routing mode using the Java client, see the [Partitioned Topics cookbook](cookbooks-partitioned.md).

### Async send

You can publish messages [asynchronously](concepts-messaging.md#send-modes) using the Java client. With async send, the producer puts the message in a blocking queue and returns immediately. Then the client library sends the message to the broker in the background. If the queue is full (max size configurable), the producer is blocked or fails immediately when calling the API, depending on arguments passed to the producer.

The following is an example.

```java

producer.sendAsync("my-async-message".getBytes()).thenAccept(msgId -> {
    System.out.println("Message with ID " + msgId + " successfully sent");
});

```

As you can see from the example above, async send operations return a {@inject: javadoc:MessageId:/client/org/apache/pulsar/client/api/MessageId} wrapped in a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture).

### Configure messages

In addition to a value, you can set additional items on a given message:

```java

producer.newMessage()
    .key("my-message-key")
    .value("my-async-message".getBytes())
    .property("my-key", "my-value")
    .property("my-other-key", "my-other-value")
    .send();

```

You can terminate the builder chain with `sendAsync()` and get a future return.
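For instance, the same builder chain can publish asynchronously; the following is a minimal sketch of handling the returned `CompletableFuture<MessageId>` (the key and payload are illustrative):

```java

producer.newMessage()
    .key("my-message-key")
    .value("my-async-message".getBytes())
    .sendAsync()
    .thenAccept(msgId -> System.out.println("Message with ID " + msgId + " successfully sent"))
    .exceptionally(ex -> {
        System.err.println("Failed to send message: " + ex);
        return null;
    });

```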
## Consumer

In Pulsar, consumers subscribe to topics and handle messages that producers publish to those topics. You can instantiate a new [consumer](reference-terminology.md#consumer) by first instantiating a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object and passing it a URL for a Pulsar broker (as [above](#client-configuration)).

Once you've instantiated a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object, you can create a {@inject: javadoc:Consumer:/client/org/apache/pulsar/client/api/Consumer} by specifying a [topic](reference-terminology.md#topic) and a [subscription](concepts-messaging.md#subscription-modes).

```java

Consumer<byte[]> consumer = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .subscribe();

```

The `subscribe` method will auto subscribe the consumer to the specified topic and subscription. One way to make the consumer listen on the topic is to set up a `while` loop. In this example loop, the consumer listens for messages, prints the contents of any received message, and then [acknowledges](reference-terminology.md#acknowledgment-ack) that the message has been processed. If the processing logic fails, you can use [negative acknowledgement](reference-terminology.md#acknowledgment-ack) to redeliver the message later.

```java

while (true) {
  // Wait for a message
  Message<byte[]> msg = consumer.receive();

  try {
      // Do something with the message
      System.out.println("Message received: " + new String(msg.getData()));

      // Acknowledge the message so that it can be deleted by the message broker
      consumer.acknowledge(msg);
  } catch (Exception e) {
      // Message failed to process, redeliver later
      consumer.negativeAcknowledge(msg);
  }
}

```

If you don't want to block your main thread and rather listen constantly for new messages, consider using a `MessageListener`.

```java

MessageListener<byte[]> myMessageListener = (consumer, msg) -> {
  try {
      System.out.println("Message received: " + new String(msg.getData()));
      consumer.acknowledge(msg);
  } catch (Exception e) {
      consumer.negativeAcknowledge(msg);
  }
};

Consumer<byte[]> consumer = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .messageListener(myMessageListener)
        .subscribe();

```

### Configure consumer

If you instantiate a `Consumer` object by specifying only a topic and subscription name as in the example above, the consumer uses the default configuration.

When you create a consumer, you can use the `loadConf` configuration. The following parameters are available in `loadConf`.

| Name | Type | Description | Default
|---|---|---|---
`topicNames` | Set<String> | Topic name | Sets.newTreeSet()
`topicsPattern` | Pattern | Topic pattern | None
`subscriptionName` | String | Subscription name | None
`subscriptionType` | SubscriptionType | Subscription type. Four subscription types are available:
<li>Exclusive</li>
<li>Failover</li>
<li>Shared</li>
<li>Key_Shared</li> | SubscriptionType.Exclusive
`receiverQueueSize` | int | Size of a consumer's receiver queue, that is, the number of messages accumulated by a consumer before an application calls `Receive`.
A value higher than the default value increases consumer throughput, though at the expense of more memory utilization. | 1000
`acknowledgementsGroupTimeMicros` | long | Group a consumer acknowledgment for a specified time.
By default, a consumer uses a 100ms grouping time to send out acknowledgments to a broker.
Setting a group time of 0 sends out acknowledgments immediately.
A longer ack group time is more efficient, at the expense of a slight increase in message re-deliveries after a failure. | TimeUnit.MILLISECONDS.toMicros(100)
`negativeAckRedeliveryDelayMicros` | long | Delay to wait before redelivering messages that failed to be processed.
When an application uses {@link Consumer#negativeAcknowledge(Message)}, failed messages are redelivered after a fixed timeout. | TimeUnit.MINUTES.toMicros(1)
`maxTotalReceiverQueueSizeAcrossPartitions` | int | The max total receiver queue size across partitions.
This setting reduces the receiver queue size for individual partitions if the total receiver queue size exceeds this value. | 50000
`consumerName` | String | Consumer name | null
`ackTimeoutMillis` | long | Timeout of unacked messages | 0
`tickDurationMillis` | long | Granularity of the ack-timeout redelivery.
Using a higher `tickDurationMillis` reduces the memory overhead of tracking messages when setting the ack-timeout to a larger value (for example, 1 hour). | 1000
`priorityLevel` | int | Priority level for a consumer to which a broker gives more priority while dispatching messages in the shared subscription mode.
The broker follows descending priorities. For example, 0=max-priority, 1, 2,...
In shared subscription mode, the broker **first dispatches messages to the max priority level consumers if they have permits**. Otherwise, the broker considers the next priority level consumers.
**Example 1**
If a subscription has consumerA with `priorityLevel` 0 and consumerB with `priorityLevel` 1, then the broker **only dispatches messages to consumerA until it runs out of permits** and then starts dispatching messages to consumerB.
**Example 2**
Consumer Priority, Level, Permits
C1, 0, 2
C2, 0, 1
C3, 0, 1
C4, 1, 2
C5, 1, 1
The order in which a broker dispatches messages to consumers is: C1, C2, C3, C1, C4, C5, C4. | 0
`cryptoFailureAction` | ConsumerCryptoFailureAction | The action the consumer takes when it receives a message that cannot be decrypted.
<li>**FAIL**: this is the default option, to fail messages until crypto succeeds.</li>
<li>**DISCARD**: silently acknowledge and do not deliver the message to an application.</li>
<li>**CONSUME**: deliver encrypted messages to applications. It is the application's responsibility to decrypt the message.</li>
If the decompression of a message fails, or if messages contain batch messages, a client is not able to retrieve individual messages in the batch.
A delivered encrypted message contains an {@link EncryptionContext} which carries the encryption and compression information, using which the application can decrypt the consumed message payload. | ConsumerCryptoFailureAction.FAIL
`properties` | SortedMap | A name or value property of this consumer.
`properties` is application-defined metadata attached to a consumer.
When getting topic stats, this metadata is associated with the consumer stats for easier identification. | new TreeMap()
`readCompacted` | boolean | If `readCompacted` is enabled, a consumer reads messages from a compacted topic rather than reading a full message backlog of a topic.
A consumer only sees the latest value for each key in the compacted topic, up until reaching the point in the topic message backlog where compaction has occurred. Beyond that point, messages are sent as normal.
`readCompacted` can only be enabled on subscriptions to persistent topics that have a single active consumer (like failover or exclusive subscriptions).
Attempting to enable it on subscriptions to non-persistent topics or on shared subscriptions leads to the subscription call throwing a `PulsarClientException`. | false
`subscriptionInitialPosition` | SubscriptionInitialPosition | Initial position at which to set the cursor when subscribing to a topic for the first time. | SubscriptionInitialPosition.Latest
`patternAutoDiscoveryPeriod` | int | Topic auto discovery period when using a pattern for the topic's consumer.
The default and minimum value is 1 minute. | 1
`regexSubscriptionMode` | RegexSubscriptionMode | When subscribing to a topic using a regular expression, you can pick a certain type of topics.
<li>**PersistentOnly**: only subscribe to persistent topics.</li>
<li>**NonPersistentOnly**: only subscribe to non-persistent topics.</li>
<li>**AllTopics**: subscribe to both persistent and non-persistent topics.</li> | RegexSubscriptionMode.PersistentOnly
`deadLetterPolicy` | DeadLetterPolicy | Dead letter policy for consumers.
By default, some messages are probably redelivered many times, possibly without end.
By using the dead letter mechanism, messages have a max redelivery count. **When the maximum number of redeliveries is exceeded, messages are sent to the Dead Letter Topic and acknowledged automatically**.
You can enable the dead letter mechanism by setting `deadLetterPolicy`.
**Example**
client.newConsumer()
.deadLetterPolicy(DeadLetterPolicy.builder().maxRedeliverCount(10).build())
.subscribe();
The default dead letter topic name is `{TopicName}-{Subscription}-DLQ`.
To set a custom dead letter topic name:
client.newConsumer()
.deadLetterPolicy(DeadLetterPolicy.builder().maxRedeliverCount(10)
.deadLetterTopic("your-topic-name").build())
.subscribe();
When you specify the dead letter policy but do not specify `ackTimeoutMillis`, the ack timeout is set to 30000 milliseconds. | None
`autoUpdatePartitions` | boolean | If `autoUpdatePartitions` is enabled, a consumer automatically subscribes to newly created partitions.
**Note**: this is only for partitioned consumers. | true
`replicateSubscriptionState` | boolean | If `replicateSubscriptionState` is enabled, the subscription state is replicated to geo-replicated clusters. | false

You can configure parameters if you do not want to use the default configuration. For a full list, see the Javadoc for the {@inject: javadoc:ConsumerBuilder:/client/org/apache/pulsar/client/api/ConsumerBuilder} class.

The following is an example.

```java

Consumer<byte[]> consumer = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .ackTimeout(10, TimeUnit.SECONDS)
        .subscriptionType(SubscriptionType.Exclusive)
        .subscribe();

```

### Async receive

The `receive` method receives messages synchronously (the consumer process is blocked until a message is available). You can also use [async receive](concepts-messaging.md#receive-modes), which returns a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture) object immediately; the future completes once a new message is available.

The following is an example.

```java

CompletableFuture<Message<byte[]>> asyncMessage = consumer.receiveAsync();

```

Async receive operations return a {@inject: javadoc:Message:/client/org/apache/pulsar/client/api/Message} wrapped inside of a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture).
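You might, for example, attach a completion handler instead of blocking on the future; the following is a minimal sketch (the handler logic is illustrative):

```java

consumer.receiveAsync().thenAccept(msg -> {
    try {
        System.out.println("Message received: " + new String(msg.getData()));
        consumer.acknowledge(msg);
    } catch (PulsarClientException e) {
        consumer.negativeAcknowledge(msg);
    }
});

```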
### Batch receive

Use `batchReceive` to receive multiple messages for each call.

The following is an example.

```java

Messages<byte[]> messages = consumer.batchReceive();
for (Message<byte[]> message : messages) {
    // do something
}
consumer.acknowledge(messages);

```

:::note

The batch receive policy limits the number and bytes of messages in a single batch. You can specify a timeout to wait for enough messages.
The batch receive is completed if any of the following conditions is met: enough messages, enough bytes of messages, wait timeout.

```java

Consumer<byte[]> consumer = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .batchReceivePolicy(BatchReceivePolicy.builder()
                .maxNumMessages(100)
                .maxNumBytes(1024 * 1024)
                .timeout(200, TimeUnit.MILLISECONDS)
                .build())
        .subscribe();

```

The default batch receive policy is:

```java

BatchReceivePolicy.builder()
        .maxNumMessages(-1)
        .maxNumBytes(10 * 1024 * 1024)
        .timeout(100, TimeUnit.MILLISECONDS)
        .build();

```

:::

### Multi-topic subscriptions

In addition to subscribing a consumer to a single Pulsar topic, you can also subscribe to multiple topics simultaneously using [multi-topic subscriptions](concepts-messaging.md#multi-topic-subscriptions). To use multi-topic subscriptions, you can supply either a regular expression (regex) or a `List` of topics. If you select topics via regex, all topics must be within the same Pulsar namespace.

The following are some examples.

```java

import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.PulsarClient;

import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

ConsumerBuilder<byte[]> consumerBuilder = pulsarClient.newConsumer()
        .subscriptionName(subscription);

// Subscribe to all topics in a namespace
Pattern allTopicsInNamespace = Pattern.compile("public/default/.*");
Consumer<byte[]> allTopicsConsumer = consumerBuilder
        .topicsPattern(allTopicsInNamespace)
        .subscribe();

// Subscribe to a subset of topics in a namespace, based on regex
Pattern someTopicsInNamespace = Pattern.compile("public/default/foo.*");
Consumer<byte[]> someTopicsConsumer = consumerBuilder
        .topicsPattern(someTopicsInNamespace)
        .subscribe();

```

In the above example, the consumer subscribes to the `persistent` topics that can match the topic name pattern. If you want the consumer to subscribe to all `persistent` and `non-persistent` topics that can match the topic name pattern, set `subscriptionTopicsMode` to `RegexSubscriptionMode.AllTopics`.

```java

Pattern pattern = Pattern.compile("public/default/.*");
pulsarClient.newConsumer()
        .subscriptionName("my-sub")
        .topicsPattern(pattern)
        .subscriptionTopicsMode(RegexSubscriptionMode.AllTopics)
        .subscribe();

```

:::note

By default, the `subscriptionTopicsMode` of the consumer is `PersistentOnly`. Available options of `subscriptionTopicsMode` are `PersistentOnly`, `NonPersistentOnly`, and `AllTopics`.

:::

You can also subscribe to an explicit list of topics (across namespaces if you wish):

```java

List<String> topics = Arrays.asList(
        "topic-1",
        "topic-2",
        "topic-3"
);

Consumer<byte[]> multiTopicConsumer = consumerBuilder
        .topics(topics)
        .subscribe();

// Alternatively:
Consumer<byte[]> multiTopicConsumer = consumerBuilder
        .topic(
            "topic-1",
            "topic-2",
            "topic-3"
        )
        .subscribe();

```

You can also subscribe to multiple topics asynchronously using the `subscribeAsync` method rather than the synchronous `subscribe` method. The following is an example.

```java

Pattern allTopicsInNamespace = Pattern.compile("persistent://public/default.*");
consumerBuilder
        .topics(topics)
        .subscribeAsync()
        .thenAccept(this::receiveMessageFromConsumer);

private void receiveMessageFromConsumer(Object consumer) {
    ((Consumer)consumer).receiveAsync().thenAccept(message -> {
        // Do something with the received message
        receiveMessageFromConsumer(consumer);
    });
}

```

### Subscription modes

Pulsar has various [subscription modes](concepts-messaging#subscription-modes) to match different scenarios. A topic can have multiple subscriptions with different subscription modes. However, a subscription can only have one subscription mode at a time.

A subscription is identified by its subscription name, and it can specify only one subscription mode at a time. You cannot change the subscription mode unless all existing consumers of this subscription are offline.

Different subscription modes have different message distribution modes. This section describes the differences between the subscription modes and how to use them.

In order to better describe their differences, assume you have a topic named "my-topic" and the producer has published 10 messages.
```java

Producer<String> producer = client.newProducer(Schema.STRING)
        .topic("my-topic")
        .enableBatching(false)
        .create();
// 3 messages with "key-1", 3 messages with "key-2", 2 messages with "key-3" and 2 messages with "key-4"
producer.newMessage().key("key-1").value("message-1-1").send();
producer.newMessage().key("key-1").value("message-1-2").send();
producer.newMessage().key("key-1").value("message-1-3").send();
producer.newMessage().key("key-2").value("message-2-1").send();
producer.newMessage().key("key-2").value("message-2-2").send();
producer.newMessage().key("key-2").value("message-2-3").send();
producer.newMessage().key("key-3").value("message-3-1").send();
producer.newMessage().key("key-3").value("message-3-2").send();
producer.newMessage().key("key-4").value("message-4-1").send();
producer.newMessage().key("key-4").value("message-4-2").send();

```

#### Exclusive

Create a new consumer and subscribe with the `Exclusive` subscription mode.

```java

Consumer<byte[]> consumer = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .subscriptionType(SubscriptionType.Exclusive)
        .subscribe();

```

Only the first consumer is allowed to attach to the subscription; other consumers receive an error. The first consumer receives all 10 messages, and the consuming order is the same as the producing order.

:::note

If the topic is a partitioned topic, the first consumer subscribes to all partitions of the topic; other consumers are not assigned partitions and receive an error.

:::

#### Failover

Create new consumers and subscribe with the `Failover` subscription mode.

```java

Consumer<byte[]> consumer1 = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .subscriptionType(SubscriptionType.Failover)
        .subscribe();
Consumer<byte[]> consumer2 = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .subscriptionType(SubscriptionType.Failover)
        .subscribe();
// consumer1 is the active consumer, consumer2 is the standby consumer.
// consumer1 receives 5 messages and then crashes, consumer2 takes over as the active consumer.

```

Multiple consumers can attach to the same subscription, yet only the first consumer is active, and the others are standby. When the active consumer is disconnected, messages will be dispatched to one of the standby consumers, and that standby consumer then becomes the active consumer.

If the first active consumer is disconnected after receiving 5 messages, the standby consumer takes over. consumer1 will receive:

```

("key-1", "message-1-1")
("key-1", "message-1-2")
("key-1", "message-1-3")
("key-2", "message-2-1")
("key-2", "message-2-2")

```

consumer2 will receive:

```

("key-2", "message-2-3")
("key-3", "message-3-1")
("key-3", "message-3-2")
("key-4", "message-4-1")
("key-4", "message-4-2")

```

:::note

If a topic is a partitioned topic, each partition has only one active consumer: messages of one partition are distributed to only one consumer, and messages of multiple partitions are distributed to multiple consumers.

:::
#### Shared

Create new consumers and subscribe with the `Shared` subscription mode:

```java

Consumer<byte[]> consumer1 = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .subscriptionType(SubscriptionType.Shared)
        .subscribe();

Consumer<byte[]> consumer2 = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .subscriptionType(SubscriptionType.Shared)
        .subscribe();
// Both consumer1 and consumer2 are active consumers.

```

In shared subscription mode, multiple consumers can attach to the same subscription and messages are delivered in a round-robin distribution across consumers.

If a broker dispatches only one message at a time, consumer1 receives the following information.

```

("key-1", "message-1-1")
("key-1", "message-1-3")
("key-2", "message-2-2")
("key-3", "message-3-1")
("key-4", "message-4-1")

```

consumer2 receives the following information.

```

("key-1", "message-1-2")
("key-2", "message-2-1")
("key-2", "message-2-3")
("key-3", "message-3-2")
("key-4", "message-4-2")

```

The `Shared` subscription differs from the `Exclusive` and `Failover` subscription modes: it offers better flexibility, but cannot provide an ordering guarantee.

#### Key_shared

`Key_Shared` is a subscription mode introduced in the 2.4.0 release. Create new consumers and subscribe with the `Key_Shared` subscription mode.

```java

Consumer<byte[]> consumer1 = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .subscriptionType(SubscriptionType.Key_Shared)
        .subscribe();

Consumer<byte[]> consumer2 = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .subscriptionType(SubscriptionType.Key_Shared)
        .subscribe();
// Both consumer1 and consumer2 are active consumers.

```

Just like in the `Shared` subscription, all consumers in the `Key_Shared` subscription mode can attach to the same subscription. But the `Key_Shared` subscription mode is different: messages with the same key are delivered to only one consumer, in order. By default, you do not know in advance which keys will be assigned to which consumer, but a given key is only assigned to one consumer at a time.

consumer1 receives the following information.

```

("key-1", "message-1-1")
("key-1", "message-1-2")
("key-1", "message-1-3")
("key-3", "message-3-1")
("key-3", "message-3-2")

```

consumer2 receives the following information.

```

("key-2", "message-2-1")
("key-2", "message-2-2")
("key-2", "message-2-3")
("key-4", "message-4-1")
("key-4", "message-4-2")

```

If batching is enabled at the producer side, messages with different keys are added to a batch by default. The broker will dispatch the batch to the consumer, so the default batch mechanism may break the message distribution semantics that the Key_Shared subscription guarantees. The producer needs to use the `KeyBasedBatcher`.

```java

Producer<byte[]> producer = client.newProducer()
        .topic("my-topic")
        .batcherBuilder(BatcherBuilder.KEY_BASED)
        .create();

```

Or the producer can disable batching.

```java

Producer<byte[]> producer = client.newProducer()
        .topic("my-topic")
        .enableBatching(false)
        .create();

```

:::note

If the message key is not specified, messages without a key are dispatched to one consumer in order by default.

:::
-
-## Reader
-
-With the [reader interface](concepts-clients.md#reader-interface), Pulsar clients can "manually position" themselves within a topic and read all messages from a specified message onward. The Pulsar API for Java enables you to create {@inject: javadoc:Reader:/client/org/apache/pulsar/client/api/Reader} objects by specifying a topic and a {@inject: javadoc:MessageId:/client/org/apache/pulsar/client/api/MessageId}.
-
-The following is an example.
-
-```java
-
-byte[] msgIdBytes = // Some message ID byte array
-MessageId id = MessageId.fromByteArray(msgIdBytes);
-Reader reader = pulsarClient.newReader()
-        .topic(topic)
-        .startMessageId(id)
-        .create();
-
-while (true) {
-    Message message = reader.readNext();
-    // Process message
-}
-
-```
-
-In the example above, a `Reader` object is instantiated for a specific topic and message (by ID); the reader iterates over each message in the topic after the message identified by `msgIdBytes` (how that value is obtained depends on the application).
-
-The code sample above shows pointing the `Reader` object to a specific message (by ID), but you can also use `MessageId.earliest` to point to the earliest available message on the topic or `MessageId.latest` to point to the most recent available message.
-
-### Configure reader
-When you create a reader, you can use the `loadConf` configuration. The following parameters are available in `loadConf`.
-
-| Name | Type | Description | Default |
-|---|---|---|---|
-`topicName`|String|Topic name.|None
-`receiverQueueSize`|int|Size of a consumer's receiver queue, that is, the number of messages that can be accumulated by the consumer before the application calls `receive`.<br/><br/>A value higher than the default value increases consumer throughput, though at the expense of more memory utilization.|1000
-`readerListener`|ReaderListener<T>|A listener that is called for a message received.|None
-`readerName`|String|Reader name.|null
-`subscriptionName`|String|Subscription name.|When there is a single topic, the default subscription name is `"reader-" + 10-digit UUID`.<br/><br/>When there are multiple topics, the default subscription name is `"multiTopicsReader-" + 10-digit UUID`.
-`subscriptionRolePrefix`|String|Prefix of the subscription role.|null
-`cryptoKeyReader`|CryptoKeyReader|Interface that abstracts the access to a key store.|null
-`cryptoFailureAction`|ConsumerCryptoFailureAction|The action the consumer takes when it receives a message that cannot be decrypted.<br/><li>**FAIL**: this is the default option; it fails messages until crypto succeeds.</li><li>**DISCARD**: silently acknowledge the message and do not deliver it to the application.</li><li>**CONSUME**: deliver encrypted messages to the application. It is the application's responsibility to decrypt the message.</li><br/>Note that if **CONSUME** is chosen, message decompression fails, and if messages contain batch messages, the client is not able to retrieve the individual messages in the batch.<br/><br/>A delivered encrypted message contains an {@link EncryptionContext}, which carries the encryption and compression information the application needs to decrypt the consumed message payload.|ConsumerCryptoFailureAction.FAIL
-`readCompacted`|boolean|If `readCompacted` is enabled, a consumer reads messages from a compacted topic rather than the full message backlog of the topic.<br/><br/>A consumer only sees the latest value for each key in the compacted topic, up until the point in the topic message stream at which the backlog was compacted. Beyond that point, messages are sent as normal.<br/><br/>`readCompacted` can only be enabled on subscriptions to persistent topics that have a single active consumer (for example, failover or exclusive subscriptions).<br/><br/>Attempting to enable it on subscriptions to non-persistent topics or on shared subscriptions leads to the subscription call throwing a `PulsarClientException`.|false
-`resetIncludeHead`|boolean|If set to true, the first message to be returned is the one specified by `messageId`.<br/><br/>If set to false, the first message to be returned is the one next to the message specified by `messageId`.|false
-
-### Sticky key range reader
-
-With a sticky key range reader, the broker only dispatches messages whose message-key hash falls within the specified key hash ranges. Multiple key hash ranges can be specified on a reader.
-
-The following is an example that creates a sticky key range reader.
-
-```java
-
-pulsarClient.newReader()
-        .topic(topic)
-        .startMessageId(MessageId.earliest)
-        .keyHashRange(Range.of(0, 10000), Range.of(20001, 30000))
-        .create();
-
-```
-
-The total hash range size is 65536, so the maximum end of a range should be less than or equal to 65535.
-
-## Schema
-
-In Pulsar, all message data consists of byte arrays "under the hood." [Message schemas](schema-get-started.md) enable you to use other types of data when constructing and handling messages (from simple types like strings to more complex, application-specific types). If you construct, say, a [producer](#producer) without specifying a schema, then the producer can only produce messages of type `byte[]`. The following is an example.
-
-```java
-
-Producer<byte[]> producer = client.newProducer()
-        .topic(topic)
-        .create();
-
-```
-
-The producer above is equivalent to a `Producer<byte[]>` (in fact, you should *always* explicitly specify the type). If you'd like to use a producer for a different type of data, you'll need to specify a **schema** that informs Pulsar which data type will be transmitted over the [topic](reference-terminology.md#topic).
-
-### AvroBaseStructSchema example
-
-Let's say that you have a `SensorReading` class that you'd like to transmit over a Pulsar topic:
-
-```java
-
-public class SensorReading {
-    public float temperature;
-
-    public SensorReading(float temperature) {
-        this.temperature = temperature;
-    }
-
-    // A no-arg constructor is required
-    public SensorReading() {
-    }
-
-    public float getTemperature() {
-        return temperature;
-    }
-
-    public void setTemperature(float temperature) {
-        this.temperature = temperature;
-    }
-}
-
-```
-
-You could then create a `Producer<SensorReading>` (or `Consumer<SensorReading>`) like this:
-
-```java
-
-Producer<SensorReading> producer = client.newProducer(JSONSchema.of(SensorReading.class))
-        .topic("sensor-readings")
-        .create();
-
-```
-
-The following schema formats are currently available for Java:
-
-* No schema or the byte array schema (which can be applied using `Schema.BYTES`):
-
-  ```java
-
-  Producer<byte[]> bytesProducer = client.newProducer(Schema.BYTES)
-      .topic("some-raw-bytes-topic")
-      .create();
-
-  ```
-
-  Or, equivalently:
-
-  ```java
-
-  Producer<byte[]> bytesProducer = client.newProducer()
-      .topic("some-raw-bytes-topic")
-      .create();
-
-  ```
-
-* `String` for normal UTF-8-encoded string data. Apply the schema using `Schema.STRING`:
-
-  ```java
-
-  Producer<String> stringProducer = client.newProducer(Schema.STRING)
-      .topic("some-string-topic")
-      .create();
-
-  ```
-
-* Create JSON schemas for POJOs using `Schema.JSON`. The following is an example.
-
-  ```java
-
-  Producer<MyPojo> pojoProducer = client.newProducer(Schema.JSON(MyPojo.class))
-      .topic("some-pojo-topic")
-      .create();
-
-  ```
-
-* Generate Protobuf schemas using `Schema.PROTOBUF`. The following example shows how to create the Protobuf schema and use it to instantiate a new producer:
-
-  ```java
-
-  Producer<MyProtobuf> protobufProducer = client.newProducer(Schema.PROTOBUF(MyProtobuf.class))
-      .topic("some-protobuf-topic")
-      .create();
-
-  ```
-
-* Define Avro schemas with `Schema.AVRO`. The following code snippet demonstrates how to create and use an Avro schema.
-
-  ```java
-
-  Producer<MyAvro> avroProducer = client.newProducer(Schema.AVRO(MyAvro.class))
-      .topic("some-avro-topic")
-      .create();
-
-  ```
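-
-If you consume from a topic whose type you do not want to hard-code, the client can also deserialize struct messages into a `GenericRecord` via `Schema.AUTO_CONSUME()`. The following is a minimal sketch; it assumes the topic already carries a struct schema (for example, the JSON schema registered above), and the topic and subscription names are placeholders.
-
-```java
-
-Consumer<GenericRecord> consumer = client.newConsumer(Schema.AUTO_CONSUME())
-        .topic("some-pojo-topic")
-        .subscriptionName("generic-sub")
-        .subscribe();
-
-Message<GenericRecord> msg = consumer.receive();
-GenericRecord record = msg.getValue();
-// Iterate over the fields carried by the record.
-for (Field field : record.getFields()) {
-    System.out.println(field.getName() + " = " + record.getField(field));
-}
-consumer.acknowledge(msg);
-
-```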
-### ProtobufNativeSchema example
-
-For an example of `ProtobufNativeSchema`, see [`SchemaDefinition` in `Complex type`](schema-understand.md#complex-type).
-
-## Authentication
-
-Pulsar currently supports three authentication schemes: [TLS](security-tls-authentication.md), [Athenz](security-athenz.md), and [Oauth2](security-oauth2.md). You can use the Pulsar Java client with all of them.
-
-### TLS Authentication
-
-To use [TLS](security-tls-authentication.md), note that the `enableTls` method is deprecated; instead, enable TLS by using the `pulsar+ssl://` scheme in the service URL. Then point your Pulsar client at a trusted CA certificate and provide paths to your client certificate and key files.
-
-The following is an example.
-
-```java
-
-Map<String, String> authParams = new HashMap<>();
-authParams.put("tlsCertFile", "/path/to/client-cert.pem");
-authParams.put("tlsKeyFile", "/path/to/client-key.pem");
-
-Authentication tlsAuth = AuthenticationFactory
-        .create(AuthenticationTls.class.getName(), authParams);
-
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar+ssl://my-broker.com:6651")
-        .tlsTrustCertsFilePath("/path/to/cacert.pem")
-        .authentication(tlsAuth)
-        .build();
-
-```
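-
-If you prefer not to build the parameter map by hand, `AuthenticationFactory` also offers a TLS shorthand. The following sketch is intended to be equivalent to the example above; the file paths are placeholders.
-
-```java
-
-Authentication tlsAuth = AuthenticationFactory.TLS(
-        "/path/to/client-cert.pem", "/path/to/client-key.pem");
-
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar+ssl://my-broker.com:6651")
-        .tlsTrustCertsFilePath("/path/to/cacert.pem")
-        .authentication(tlsAuth)
-        .build();
-
-```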
-
-### Athenz
-
-To use [Athenz](security-athenz.md) as an authentication provider, you need to [use TLS](#tls-authentication) and provide values for four parameters in a hash:
-
-* `tenantDomain`
-* `tenantService`
-* `providerDomain`
-* `privateKey`
-
-You can also set an optional `keyId`. The following is an example.
-
-```java
-
-Map<String, String> authParams = new HashMap<>();
-authParams.put("tenantDomain", "shopping"); // Tenant domain name
-authParams.put("tenantService", "some_app"); // Tenant service name
-authParams.put("providerDomain", "pulsar"); // Provider domain name
-authParams.put("privateKey", "file:///path/to/private.pem"); // Tenant private key path
-authParams.put("keyId", "v1"); // Key id for the tenant private key (optional, default: "0")
-
-Authentication athenzAuth = AuthenticationFactory
-        .create(AuthenticationAthenz.class.getName(), authParams);
-
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar+ssl://my-broker.com:6651")
-        .tlsTrustCertsFilePath("/path/to/cacert.pem")
-        .authentication(athenzAuth)
-        .build();
-
-```
-
-> #### Supported pattern formats
-> The `privateKey` parameter supports the following three pattern formats:
-> * `file:///path/to/file`
-> * `file:/path/to/file`
-> * `data:application/x-pem-file;base64,`
-
-### Oauth2
-
-The following example shows how to use [Oauth2](security-oauth2.md) as an authentication provider for the Pulsar Java client.
-
-You can use the factory method to configure authentication for the Pulsar Java client.
-
-```java
-
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar://broker.example.com:6650/")
-        .authentication(
-            AuthenticationFactoryOAuth2.clientCredentials(this.issuerUrl, this.credentialsUrl, this.audience))
-        .build();
-
-```
-
-In addition, you can also use the encoded parameters to configure authentication for the Pulsar Java client.
-
-```java
-
-Authentication auth = AuthenticationFactory
-        .create(AuthenticationOAuth2.class.getName(),
-                "{\"type\":\"client_credentials\",\"privateKey\":\"...\",\"issuerUrl\":\"...\",\"audience\":\"...\"}");
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar://broker.example.com:6650/")
-        .authentication(auth)
-        .build();
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/client-libraries-node.md b/site2/website/versioned_docs/version-2.9.0-deprecated/client-libraries-node.md
deleted file mode 100644
index 1ff37b26294666..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/client-libraries-node.md
+++ /dev/null
@@ -1,643 +0,0 @@
----
-id: client-libraries-node
-title: The Pulsar Node.js client
-sidebar_label: "Node.js"
-original_id: client-libraries-node
----
-
-The Pulsar Node.js client can be used to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Node.js.
-
-All the methods in [producers](#producers), [consumers](#consumers), and [readers](#readers) of a Node.js client are thread-safe.
-
-For version 1.3.0 or later, [type definitions](https://github.com/apache/pulsar-client-node/blob/master/index.d.ts) used in TypeScript are available.
-
-## Installation
-
-You can install the [`pulsar-client`](https://www.npmjs.com/package/pulsar-client) library via [npm](https://www.npmjs.com/).
-
-### Requirements
-The Pulsar Node.js client library is based on the C++ client library.
-Follow [these instructions](client-libraries-cpp.md#compilation) to install the Pulsar C++ client library.
-
-### Compatibility
-
-Compatibility between each version of the Node.js client and the C++ client is as follows:
-
-| Node.js client | C++ client     |
-| :------------- | :------------- |
-| 1.0.0          | 2.3.0 or later |
-| 1.1.0          | 2.4.0 or later |
-| 1.2.0          | 2.5.0 or later |
-
-If an incompatible version of the C++ client is installed, you may fail to build or run this library.
-
-### Installation using npm
-
-Install the `pulsar-client` library via [npm](https://www.npmjs.com/):
-
-```shell
-
-$ npm install pulsar-client
-
-```
-
-:::note
-
-This library works only in Node.js 10.x or later because it uses the [`node-addon-api`](https://github.com/nodejs/node-addon-api) module to wrap the C++ library.
-
-:::
-
-## Connection URLs
-To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL.
-
-Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme, and have a default port of 6650. Here is an example for `localhost`:
-
-```http
-
-pulsar://localhost:6650
-
-```
-
-A URL for a production Pulsar cluster may look something like this:
-
-```http
-
-pulsar://pulsar.us-west.example.com:6650
-
-```
-
-If you are using [TLS encryption](security-tls-transport.md) or [TLS Authentication](security-tls-authentication.md), the URL looks like this:
-
-```http
-
-pulsar+ssl://pulsar.us-west.example.com:6651
-
-```
-
-## Create a client
-
-To interact with Pulsar, you first need a client object. You can create a client instance using the `new` operator and the `Client` constructor, passing in a client options object (more on configuration [below](#client-configuration)).
-
-Here is an example:
-
-```JavaScript
-
-const Pulsar = require('pulsar-client');
-
-(async () => {
-  const client = new Pulsar.Client({
-    serviceUrl: 'pulsar://localhost:6650',
-  });
-
-  await client.close();
-})();
-
-```
-
-### Client configuration
-
-The following configurable parameters are available for Pulsar clients:
-
-| Parameter | Description | Default |
-| :-------- | :---------- | :------ |
-| `serviceUrl` | The connection URL for the Pulsar cluster. See [above](#connection-urls) for more info. | |
-| `authentication` | Configure the authentication provider (default: no authentication). See [TLS Authentication](security-tls-authentication.md) for more info. | |
-| `operationTimeoutSeconds` | The timeout for Node.js client operations (creating producers, subscribing to and unsubscribing from [topics](reference-terminology.md#topic)). Retries occur until this threshold is reached, at which point the operation fails. | 30 |
-| `ioThreads` | The number of threads to use for handling connections to Pulsar [brokers](reference-terminology.md#broker). | 1 |
-| `messageListenerThreads` | The number of threads used by message listeners ([consumers](#consumers) and [readers](#readers)). | 1 |
-| `concurrentLookupRequest` | The number of concurrent lookup requests that can be sent on each broker connection. Setting a maximum helps to keep brokers from being overloaded. You should set values over the default of 50000 only if the client needs to produce and/or subscribe to thousands of Pulsar topics. | 50000 |
-| `tlsTrustCertsFilePath` | The file path for the trusted TLS certificate. | |
-| `tlsValidateHostname` | Whether to enable TLS hostname verification. | `false` |
-| `tlsAllowInsecureConnection` | Whether the Pulsar client accepts an untrusted TLS certificate from the broker. | `false` |
-| `statsIntervalInSeconds` | The interval between stats emissions. Stats are activated with a positive `statsIntervalInSeconds`; the value should be at least 1 second. | 600 |
-| `log` | A function that is used for logging. | `console.log` |
-
-## Producers
-
-Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Node.js producers using a producer configuration object.
-
-Here is an example:
-
-```JavaScript
-
-const producer = await client.createProducer({
-  topic: 'my-topic',
-});
-
-await producer.send({
-  data: Buffer.from("Hello, Pulsar"),
-});
-
-await producer.close();
-
-```
-
-> #### Promise operation
-> Creating a new Pulsar producer returns a `Promise` that resolves to the producer instance or rejects with an error.
-> This example uses the `await` operator instead of handler functions.
-
-### Producer operations
-
-Pulsar Node.js producers have the following methods available:
-
-| Method | Description | Return type |
-| :----- | :---------- | :---------- |
-| `send(Object)` | Publishes a [message](#messages) to the producer's topic. Returns a `Promise` that resolves to the message ID once the message is successfully acknowledged by the Pulsar broker, or rejects if an error occurs. | `Promise` |
-| `flush()` | Sends messages from the send queue to the Pulsar broker. Returns a `Promise` that resolves once the messages are successfully acknowledged by the Pulsar broker, or rejects if an error occurs. | `Promise` |
-| `close()` | Closes the producer and releases all resources allocated to it. Once `close()` is called, no more messages are accepted from the publisher. Returns a `Promise` that resolves when all pending publish requests are persisted by Pulsar; if an error is thrown, no pending writes are retried. | `Promise` |
-| `getProducerName()` | Getter method of the producer name. | `string` |
-| `getTopic()` | Getter method of the name of the topic. | `string` |
-### Producer configuration
-
-| Parameter | Description | Default |
-| :-------- | :---------- | :------ |
-| `topic` | The Pulsar [topic](reference-terminology.md#topic) to which the producer publishes messages. | |
-| `producerName` | A name for the producer. If you do not explicitly assign a name, Pulsar automatically generates a globally unique name. If you choose to explicitly assign a name, it needs to be unique across *all* Pulsar clusters, otherwise the creation operation throws an error. | |
-| `sendTimeoutMs` | When publishing a message to a topic, the producer waits for an acknowledgment from the responsible Pulsar [broker](reference-terminology.md#broker). If a message is not acknowledged within the threshold set by this parameter, an error is thrown. If you set `sendTimeoutMs` to -1, the timeout is set to infinity (and thus removed). Removing the send timeout is recommended when using Pulsar's [message de-duplication](cookbooks-deduplication.md) feature. | 30000 |
-| `initialSequenceId` | The initial sequence ID for messages. The producer attaches a sequence ID to each message it sends, and the ID is incremented for every message sent. | |
-| `maxPendingMessages` | The maximum size of the queue holding pending messages (i.e. messages waiting to receive an acknowledgment from the [broker](reference-terminology.md#broker)). By default, when the queue is full, all calls to the `send` method fail *unless* `blockIfQueueFull` is set to `true`. | 1000 |
-| `maxPendingMessagesAcrossPartitions` | The maximum size of the sum of the partitions' pending queues. | 50000 |
-| `blockIfQueueFull` | If set to `true`, the producer's `send` method waits when the outgoing message queue is full rather than failing and throwing an error (the size of that queue is dictated by the `maxPendingMessages` parameter); if set to `false` (the default), `send` operations fail and throw an error when the queue is full. | `false` |
-| `messageRoutingMode` | The message routing logic (for producers on [partitioned topics](concepts-messaging.md#partitioned-topics)). This logic is applied only when no key is set on messages. The available options are: round robin (`RoundRobinDistribution`), or publishing all messages to a single partition (`UseSinglePartition`, the default). | `UseSinglePartition` |
-| `hashingScheme` | The hashing function that determines the partition on which a particular message is published (partitioned topics only). The available options are: `JavaStringHash` (the equivalent of `String.hashCode()` in Java), `Murmur3_32Hash` (applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function), or `BoostHash` (applies the hashing function from C++'s [Boost](https://www.boost.org/doc/libs/1_62_0/doc/html/hash.html) library). | `BoostHash` |
-| `compressionType` | The message data compression type used by the producer. The available options are [`LZ4`](https://github.com/lz4/lz4), [`Zlib`](https://zlib.net/), [`ZSTD`](https://github.com/facebook/zstd/), and [`SNAPPY`](https://github.com/google/snappy/). | Compression None |
-| `batchingEnabled` | If set to `true`, the producer sends messages in batches. | `true` |
-| `batchingMaxPublishDelayMs` | The maximum delay before a batch of messages is sent. | 10 |
-| `batchingMaxMessages` | The maximum number of messages in a batch. | 1000 |
-| `properties` | The metadata of the producer. | |
-
-### Producer example
-
-This example creates a Node.js producer for the `my-topic` topic and sends 10 messages to that topic:
-
-```JavaScript
-
-const Pulsar = require('pulsar-client');
-
-(async () => {
-  // Create a client
-  const client = new Pulsar.Client({
-    serviceUrl: 'pulsar://localhost:6650',
-  });
-
-  // Create a producer
-  const producer = await client.createProducer({
-    topic: 'my-topic',
-  });
-
-  // Send messages
-  for (let i = 0; i < 10; i += 1) {
-    const msg = `my-message-${i}`;
-    producer.send({
-      data: Buffer.from(msg),
-    });
-    console.log(`Sent message: ${msg}`);
-  }
-  await producer.flush();
-
-  await producer.close();
-  await client.close();
-})();
-
-```
-
-## Consumers
-
-Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on those topics. You can [configure](#consumer-configuration) Node.js consumers using a consumer configuration object.
-
-Here is an example:
-
-```JavaScript
-
-const consumer = await client.subscribe({
-  topic: 'my-topic',
-  subscription: 'my-subscription',
-});
-
-const msg = await consumer.receive();
-console.log(msg.getData().toString());
-consumer.acknowledge(msg);
-
-await consumer.close();
-
-```
-
-> #### Promise operation
-> Creating a new Pulsar consumer returns a `Promise` that resolves to the consumer instance or rejects with an error.
-> This example uses the `await` operator instead of handler functions.
-
-### Consumer operations
-
-Pulsar Node.js consumers have the following methods available:
-
-| Method | Description | Return type |
-| :----- | :---------- | :---------- |
-| `receive()` | Receives a single message from the topic. Returns a `Promise` that resolves to the message object when a message is available. | `Promise` |
-| `receive(Number)` | Receives a single message from the topic, with a specific timeout in milliseconds. | `Promise` |
-| `acknowledge(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message object. | `void` |
-| `acknowledgeId(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID object. | `void` |
-| `acknowledgeCumulative(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message. The `acknowledgeCumulative` method returns void and sends the ack to the broker asynchronously. After that, the messages are *not* redelivered to the consumer. Cumulative acking cannot be used with a [shared](concepts-messaging.md#shared) subscription type. | `void` |
-| `acknowledgeCumulativeId(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message ID. | `void` |
-| `negativeAcknowledge(Message)` | [Negatively acknowledges](reference-terminology.md#negative-acknowledgment-nack) a message to the Pulsar broker by message object. | `void` |
-| `negativeAcknowledgeId(MessageId)` | [Negatively acknowledges](reference-terminology.md#negative-acknowledgment-nack) a message to the Pulsar broker by message ID object. | `void` |
-| `close()` | Closes the consumer, disabling its ability to receive messages from the broker. | `Promise` |
-| `unsubscribe()` | Unsubscribes the subscription. | `Promise` |
-
-### Consumer configuration
-
-| Parameter | Description | Default |
-| :-------- | :---------- | :------ |
-| `topic` | The Pulsar topic on which the consumer establishes a subscription and listens for messages. | |
-| `topics` | The array of topics. | |
-| `topicsPattern` | The regular expression for topics. | |
-| `subscription` | The subscription name for this consumer. | |
-| `subscriptionType` | Available options are `Exclusive`, `Shared`, `Key_Shared`, and `Failover`. | `Exclusive` |
-| `subscriptionInitialPosition` | The initial position of the cursor when subscribing to a topic for the first time. | `SubscriptionInitialPosition.Latest` |
-| `ackTimeoutMs` | Acknowledge timeout in milliseconds. | 0 |
-| `nAckRedeliverTimeoutMs` | Delay to wait before redelivering messages that failed to be processed. | 60000 |
-| `receiverQueueSize` | Sets the size of the consumer's receiver queue, i.e. the number of messages that can be accumulated by the consumer before the application calls `receive`. A value higher than the default of 1000 could increase consumer throughput, though at the expense of more memory utilization. | 1000 |
-| `receiverQueueSizeAcrossPartitions` | Sets the maximum total receiver queue size across partitions. This setting is used to reduce the receiver queue size for individual partitions if the total exceeds this value. | 50000 |
-| `consumerName` | The name of the consumer. Currently (v2.4.1), [failover](concepts-messaging.md#failover) mode uses consumer names for ordering. | |
-| `properties` | The metadata of the consumer. | |
-| `listener` | A listener that is called for a message received. | |
-| `readCompacted` | If `readCompacted` is enabled, a consumer reads messages from a compacted topic rather than reading the full message backlog of a topic.<br/><br/>A consumer only sees the latest value for each key in the compacted topic, up until the point in the topic message stream at which the backlog was compacted. Beyond that point, messages are sent as normal.<br/><br/>`readCompacted` can only be enabled on subscriptions to persistent topics that have a single active consumer (such as failover or exclusive subscriptions).<br/><br/>Attempting to enable it on subscriptions to non-persistent topics or on shared subscriptions leads to the subscription call throwing a `PulsarClientException`. | false |
-
-### Consumer example
-
-This example creates a Node.js consumer with the `my-subscription` subscription on the `my-topic` topic, receives 10 messages, prints the content of each message as it arrives, and acknowledges it to the Pulsar broker:
-
-```JavaScript
-
-const Pulsar = require('pulsar-client');
-
-(async () => {
-  // Create a client
-  const client = new Pulsar.Client({
-    serviceUrl: 'pulsar://localhost:6650',
-  });
-
-  // Create a consumer
-  const consumer = await client.subscribe({
-    topic: 'my-topic',
-    subscription: 'my-subscription',
-    subscriptionType: 'Exclusive',
-  });
-
-  // Receive messages
-  for (let i = 0; i < 10; i += 1) {
-    const msg = await consumer.receive();
-    console.log(msg.getData().toString());
-    consumer.acknowledge(msg);
-  }
-
-  await consumer.close();
-  await client.close();
-})();
-
-```
-
-Alternatively, a consumer can be created with a `listener` to process messages.
-
-```JavaScript
-
-// Create a consumer
-const consumer = await client.subscribe({
-  topic: 'my-topic',
-  subscription: 'my-subscription',
-  subscriptionType: 'Exclusive',
-  listener: (msg, msgConsumer) => {
-    console.log(msg.getData().toString());
-    msgConsumer.acknowledge(msg);
-  },
-});
-
-```
-
-## Readers
-
-Pulsar readers process messages from Pulsar topics. Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recently unacked message). You can [configure](#reader-configuration) Node.js readers using a reader configuration object.
-
-Here is an example:
-
-```JavaScript
-
-const reader = await client.createReader({
-  topic: 'my-topic',
-  startMessageId: Pulsar.MessageId.earliest(),
-});
-
-const msg = await reader.readNext();
-console.log(msg.getData().toString());
-
-await reader.close();
-
-```
-
-### Reader operations
-
-Pulsar Node.js readers have the following methods available:
-
-| Method | Description | Return type |
-| :----- | :---------- | :---------- |
-| `readNext()` | Receives the next message on the topic (analogous to the `receive` method for [consumers](#consumer-operations)). Returns a `Promise` that resolves to the message object when a message is available. | `Promise` |
-| `readNext(Number)` | Receives a single message from the topic, with a specific timeout in milliseconds. | `Promise` |
-| `hasNext()` | Returns whether the broker has a next message in the target topic. | `Boolean` |
-| `close()` | Closes the reader, disabling its ability to receive messages from the broker. | `Promise` |
-
-### Reader configuration
-
-| Parameter | Description | Default |
-| :-------- | :---------- | :------ |
-| `topic` | The Pulsar [topic](reference-terminology.md#topic) on which the reader establishes a subscription and listens for messages. | |
-| `startMessageId` | The initial reader position, i.e. the message at which the reader begins processing messages. The options are `Pulsar.MessageId.earliest` (the earliest available message on the topic), `Pulsar.MessageId.latest` (the latest available message on the topic), or a message ID object for a position that is neither earliest nor latest. | |
-| `receiverQueueSize` | Sets the size of the reader's receiver queue, i.e. the number of messages that can be accumulated by the reader before the application calls `readNext`. A value higher than the default of 1000 could increase reader throughput, though at the expense of more memory utilization. | 1000 |
-| `readerName` | The name of the reader. | |
-| `subscriptionRolePrefix` | The subscription role prefix. | |
-| `readCompacted` | If `readCompacted` is enabled, a reader reads messages from a compacted topic rather than reading the full message backlog of a topic.<br/><br/>A reader only sees the latest value for each key in the compacted topic, up until the point in the topic message stream at which the backlog was compacted. Beyond that point, messages are sent as normal.<br/><br/>`readCompacted` can only be enabled on subscriptions to persistent topics that have a single active consumer (such as failover or exclusive subscriptions).<br/><br/>Attempting to enable it on subscriptions to non-persistent topics or on shared subscriptions leads to the subscription call throwing a `PulsarClientException`. | `false` |
-
-
-### Reader example
-
-This example creates a Node.js reader for the `my-topic` topic, reads 10 messages, and prints the content of each message as it arrives:
-
-```JavaScript
-
-const Pulsar = require('pulsar-client');
-
-(async () => {
-  // Create a client
-  const client = new Pulsar.Client({
-    serviceUrl: 'pulsar://localhost:6650',
-    operationTimeoutSeconds: 30,
-  });
-
-  // Create a reader
-  const reader = await client.createReader({
-    topic: 'my-topic',
-    startMessageId: Pulsar.MessageId.earliest(),
-  });
-
-  // read messages
-  for (let i = 0; i < 10; i += 1) {
-    const msg = await reader.readNext();
-    console.log(msg.getData().toString());
-  }
-
-  await reader.close();
-  await client.close();
-})();
-
-```
-
-## Messages
-
-In the Pulsar Node.js client, you construct a producer message object for the producer to send.
-
-Here is an example message:
-
-```JavaScript
-
-const msg = {
-  data: Buffer.from('Hello, Pulsar'),
-  partitionKey: 'key1',
-  properties: {
-    'foo': 'bar',
-  },
-  eventTimestamp: Date.now(),
-  replicationClusters: [
-    'cluster1',
-    'cluster2',
-  ],
-}
-
-await producer.send(msg);
-
-```
-
-The following keys are available for producer message objects:
-
-| Parameter | Description |
-| :-------- | :---------- |
-| `data` | The actual data payload of the message. |
-| `properties` | An object for any application-specific metadata attached to the message. |
-| `eventTimestamp` | The timestamp associated with the message. |
-| `sequenceId` | The sequence ID of the message. |
-| `partitionKey` | The optional key associated with the message (particularly useful for things like topic compaction). |
-| `replicationClusters` | The clusters to which this message is replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default. |
-| `deliverAt` | The absolute timestamp at or after which the message is delivered. |
-| `deliverAfter` | The relative delay after which the message is delivered. |
-
-### Message object operations
-
-In the Pulsar Node.js client, you receive (or read) message objects as a consumer (or reader).
-
-The message object has the following methods available:
-
-| Method | Description | Return type |
-| :----- | :---------- | :---------- |
-| `getTopicName()` | Getter method of the topic name. | `String` |
-| `getProperties()` | Getter method of the properties. | `Array` |
-| `getData()` | Getter method of the message data. | `Buffer` |
-| `getMessageId()` | Getter method of the [message ID object](#message-id-object-operations). | `Object` |
-| `getPublishTimestamp()` | Getter method of the publish timestamp. | `Number` |
-| `getEventTimestamp()` | Getter method of the event timestamp. | `Number` |
-| `getRedeliveryCount()` | Getter method of the redelivery count. | `Number` |
-| `getPartitionKey()` | Getter method of the partition key. | `String` |
-
-### Message ID object operations
-
-In the Pulsar Node.js client, you can get a message ID object from a message object.
-
-The message ID object has the following methods available:
-
-| Method | Description | Return type |
-| :----- | :---------- | :---------- |
-| `serialize()` | Serialize the message ID into a Buffer for storing. | `Buffer` |
-| `toString()` | Get the message ID as a String. | `String` |
-
-The client also provides a static interface for the message ID object.
You can access it as `Pulsar.MessageId.someStaticMethod` too. - -The following static methods are available for the message id object: - -| Method | Description | Return type | -| :----- | :---------- | :---------- | -| `earliest()` | MessageId representing the earliest, or oldest available message stored in the topic. | `Object` | -| `latest()` | MessageId representing the latest, or last published message in the topic. | `Object` | -| `deserialize(Buffer)` | Deserialize a message id object from a Buffer. | `Object` | - -## End-to-end encryption - -[End-to-end encryption](https://pulsar.apache.org/docs/en/next/cookbooks-encryption/#docsNav) allows applications to encrypt messages at producers and decrypt at consumers. - -### Configuration - -If you want to use the end-to-end encryption feature in the Node.js client, you need to configure `publicKeyPath` and `privateKeyPath` for both producer and consumer. - -``` - -publicKeyPath: "./public.pem" -privateKeyPath: "./private.pem" - -``` - -### Tutorial - -This section provides step-by-step instructions on how to use the end-to-end encryption feature in the Node.js client. - -**Prerequisite** - -- Pulsar C++ client 2.7.1 or later - -**Step** - -1. Create both public and private key pairs. - - **Input** - - ```shell - - openssl genrsa -out private.pem 2048 - openssl rsa -in private.pem -pubout -out public.pem - - ``` - -2. Create a producer to send encrypted messages. - - **Input** - - ```nodejs - - const Pulsar = require('pulsar-client'); - - (async () => { - // Create a client - const client = new Pulsar.Client({ - serviceUrl: 'pulsar://localhost:6650', - operationTimeoutSeconds: 30, - }); - - // Create a producer - const producer = await client.createProducer({ - topic: 'persistent://public/default/my-topic', - sendTimeoutMs: 30000, - batchingEnabled: true, - publicKeyPath: "./public.pem", - privateKeyPath: "./private.pem", - encryptionKey: "encryption-key" - }); - - console.log(producer.ProducerConfig) - // Send messages - for (let i = 0; i < 10; i += 1) { - const msg = `my-message-${i}`; - producer.send({ - data: Buffer.from(msg), - }); - console.log(`Sent message: ${msg}`); - } - await producer.flush(); - - await producer.close(); - await client.close(); - })(); - - ``` - -3. Create a consumer to receive encrypted messages. - - **Input** - - ```nodejs - - const Pulsar = require('pulsar-client'); - - (async () => { - // Create a client - const client = new Pulsar.Client({ - serviceUrl: 'pulsar://172.25.0.3:6650', - operationTimeoutSeconds: 30 - }); - - // Create a consumer - const consumer = await client.subscribe({ - topic: 'persistent://public/default/my-topic', - subscription: 'sub1', - subscriptionType: 'Shared', - ackTimeoutMs: 10000, - publicKeyPath: "./public.pem", - privateKeyPath: "./private.pem" - }); - - console.log(consumer) - // Receive messages - for (let i = 0; i < 10; i += 1) { - const msg = await consumer.receive(); - console.log(msg.getData().toString()); - consumer.acknowledge(msg); - } - - await consumer.close(); - await client.close(); - })(); - - ``` - -4. Run the consumer to receive encrypted messages. - - **Input** - - ```shell - - node consumer.js - - ``` - -5. In a new terminal tab, run the producer to produce encrypted messages. - - **Input** - - ```shell - - node producer.js - - ``` - - Now you can see the producer sends messages and the consumer receives messages successfully. - - **Output** - - This is from the producer side. 
- - ``` - - Sent message: my-message-0 - Sent message: my-message-1 - Sent message: my-message-2 - Sent message: my-message-3 - Sent message: my-message-4 - Sent message: my-message-5 - Sent message: my-message-6 - Sent message: my-message-7 - Sent message: my-message-8 - Sent message: my-message-9 - - ``` - - This is from the consumer side. - - ``` - - my-message-0 - my-message-1 - my-message-2 - my-message-3 - my-message-4 - my-message-5 - my-message-6 - my-message-7 - my-message-8 - my-message-9 - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/client-libraries-python.md b/site2/website/versioned_docs/version-2.9.0-deprecated/client-libraries-python.md deleted file mode 100644 index 90cc840daa0a81..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/client-libraries-python.md +++ /dev/null @@ -1,481 +0,0 @@ ---- -id: client-libraries-python -title: Pulsar Python client -sidebar_label: "Python" -original_id: client-libraries-python ---- - -Pulsar Python client library is a wrapper over the existing [C++ client library](client-libraries-cpp.md) and exposes all of the [same features](/api/cpp). You can find the code in the [Python directory](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp/python) of the C++ client code. - -All the methods in producer, consumer, and reader of a Python client are thread-safe. - -[pdoc](https://github.com/BurntSushi/pdoc)-generated API docs for the Python client are available [here](/api/python). - -## Install - -You can install the [`pulsar-client`](https://pypi.python.org/pypi/pulsar-client) library either via [PyPi](https://pypi.python.org/pypi), using [pip](#installation-using-pip), or by building the library from [source](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp). - -### Install using pip - -To install the `pulsar-client` library as a pre-built package using the [pip](https://pip.pypa.io/en/stable/) package manager: - -```shell - -$ pip install pulsar-client==@pulsar:version_number@ - -``` - -### Optional dependencies -If you install the client libraries on Linux to support services like Pulsar functions or Avro serialization, you can install optional components alongside the `pulsar-client` library. - -```shell - -# avro serialization -$ pip install pulsar-client=='@pulsar:version_number@[avro]' - -# functions runtime -$ pip install pulsar-client=='@pulsar:version_number@[functions]' - -# all optional components -$ pip install pulsar-client=='@pulsar:version_number@[all]' - -``` - -Installation via PyPi is available for the following Python versions: - -Platform | Supported Python versions -:--------|:------------------------- -MacOS
<br/>10.13 (High Sierra), 10.14 (Mojave)<br/>
    | 2.7, 3.7 -Linux | 2.7, 3.4, 3.5, 3.6, 3.7, 3.8 - -### Install from source - -To install the `pulsar-client` library by building from source, follow [instructions](client-libraries-cpp.md#compilation) and compile the Pulsar C++ client library. That builds the Python binding for the library. - -To install the built Python bindings: - -```shell - -$ git clone https://github.com/apache/pulsar -$ cd pulsar/pulsar-client-cpp/python -$ sudo python setup.py install - -``` - -## API Reference - -The complete Python API reference is available at [api/python](/api/python). - -## Examples - -You can find a variety of Python code examples for the [pulsar-client](/pulsar-client-cpp/python) library. - -### Producer example - -The following example creates a Python producer for the `my-topic` topic and sends 10 messages on that topic: - -```python - -import pulsar - -client = pulsar.Client('pulsar://localhost:6650') - -producer = client.create_producer('my-topic') - -for i in range(10): - producer.send(('Hello-%d' % i).encode('utf-8')) - -client.close() - -``` - -### Consumer example - -The following example creates a consumer with the `my-subscription` subscription name on the `my-topic` topic, receives incoming messages, prints the content and ID of messages that arrive, and acknowledges each message to the Pulsar broker. - -```python - -import pulsar - -client = pulsar.Client('pulsar://localhost:6650') - -consumer = client.subscribe('my-topic', 'my-subscription') - -while True: - msg = consumer.receive() - try: - print("Received message '{}' id='{}'".format(msg.data(), msg.message_id())) - # Acknowledge successful processing of the message - consumer.acknowledge(msg) - except: - # Message failed to be processed - consumer.negative_acknowledge(msg) - -client.close() - -``` - -This example shows how to configure negative acknowledgement. - -```python - -from pulsar import Client, schema -client = Client('pulsar://localhost:6650') -consumer = client.subscribe('negative_acks','test',schema=schema.StringSchema()) -producer = client.create_producer('negative_acks',schema=schema.StringSchema()) -for i in range(10): - print('send msg "hello-%d"' % i) - producer.send_async('hello-%d' % i, callback=None) -producer.flush() -for i in range(10): - msg = consumer.receive() - consumer.negative_acknowledge(msg) - print('receive and nack msg "%s"' % msg.data()) -for i in range(10): - msg = consumer.receive() - consumer.acknowledge(msg) - print('receive and ack msg "%s"' % msg.data()) -try: - # No more messages expected - msg = consumer.receive(100) -except: - print("no more msg") - pass - -``` - -### Reader interface example - -You can use the Pulsar Python API to use the Pulsar [reader interface](concepts-clients.md#reader-interface). Here's an example: - -```python - -# MessageId taken from a previously fetched message -msg_id = msg.message_id() - -reader = client.create_reader('my-topic', msg_id) - -while True: - msg = reader.read_next() - print("Received message '{}' id='{}'".format(msg.data(), msg.message_id())) - # No acknowledgment - -``` - -### Multi-topic subscriptions - -In addition to subscribing a consumer to a single Pulsar topic, you can also subscribe to multiple topics simultaneously. To use multi-topic subscriptions, you can supply a regular expression (regex) or a `List` of topics. If you select topics via regex, all topics must be within the same Pulsar namespace. 
- -The following is an example: - -```python - -import re -consumer = client.subscribe(re.compile('persistent://public/default/topic-*'), 'my-subscription') -while True: - msg = consumer.receive() - try: - print("Received message '{}' id='{}'".format(msg.data(), msg.message_id())) - # Acknowledge successful processing of the message - consumer.acknowledge(msg) - except: - # Message failed to be processed - consumer.negative_acknowledge(msg) -client.close() - -``` - -## Schema - -### Declare and validate schema - -You can declare a schema by passing a class that inherits -from `pulsar.schema.Record` and defines the fields as -class variables. For example: - -```python - -from pulsar.schema import * - -class Example(Record): - a = String() - b = Integer() - c = Boolean() - -``` - -With this simple schema definition, you can create producers, consumers and readers instances that refer to that. - -```python - -producer = client.create_producer( - topic='my-topic', - schema=AvroSchema(Example) ) - -producer.send(Example(a='Hello', b=1)) - -``` - -After creating the producer, the Pulsar broker validates that the existing topic schema is indeed of "Avro" type and that the format is compatible with the schema definition of the `Example` class. - -If there is a mismatch, an exception occurs in the producer creation. - -Once a producer is created with a certain schema definition, -it will only accept objects that are instances of the declared -schema class. - -Similarly, for a consumer/reader, the consumer will return an -object, instance of the schema record class, rather than the raw -bytes: - -```python - -consumer = client.subscribe( - topic='my-topic', - subscription_name='my-subscription', - schema=AvroSchema(Example) ) - -while True: - msg = consumer.receive() - ex = msg.value() - try: - print("Received message a={} b={} c={}".format(ex.a, ex.b, ex.c)) - # Acknowledge successful processing of the message - consumer.acknowledge(msg) - except: - # Message failed to be processed - consumer.negative_acknowledge(msg) - -``` - -### Supported schema types - -You can use different builtin schema types in Pulsar. All the definitions are in the `pulsar.schema` package. - -| Schema | Notes | -| ------ | ----- | -| `BytesSchema` | Get the raw payload as a `bytes` object. No serialization/deserialization are performed. This is the default schema mode | -| `StringSchema` | Encode/decode payload as a UTF-8 string. Uses `str` objects | -| `JsonSchema` | Require record definition. Serializes the record into standard JSON payload | -| `AvroSchema` | Require record definition. Serializes in AVRO format | - -### Schema definition reference - -The schema definition is done through a class that inherits from `pulsar.schema.Record`. - -This class has a number of fields which can be of either -`pulsar.schema.Field` type or another nested `Record`. All the -fields are specified in the `pulsar.schema` package. The fields -are matching the AVRO fields types. - -| Field Type | Python Type | Notes | -| ---------- | ----------- | ----- | -| `Boolean` | `bool` | | -| `Integer` | `int` | | -| `Long` | `int` | | -| `Float` | `float` | | -| `Double` | `float` | | -| `Bytes` | `bytes` | | -| `String` | `str` | | -| `Array` | `list` | Need to specify record type for items. | -| `Map` | `dict` | Key is always `String`. Need to specify value type. | - -Additionally, any Python `Enum` type can be used as a valid field type. - -#### Fields parameters - -When adding a field, you can use these parameters in the constructor. 
- -| Argument | Default | Notes | -| ---------- | --------| ----- | -| `default` | `None` | Set a default value for the field. Eg: `a = Integer(default=5)` | -| `required` | `False` | Mark the field as "required". It is set in the schema accordingly. | - -#### Schema definition examples - -##### Simple definition - -```python - -class Example(Record): - a = String() - b = Integer() - c = Array(String()) - i = Map(String()) - -``` - -##### Using enums - -```python - -from enum import Enum - -class Color(Enum): - red = 1 - green = 2 - blue = 3 - -class Example(Record): - name = String() - color = Color - -``` - -##### Complex types - -```python - -class MySubRecord(Record): - x = Integer() - y = Long() - z = String() - -class Example(Record): - a = String() - sub = MySubRecord() - -``` - -##### Set namespace for Avro schema - -Set the namespace for Avro Record schema using the special field `_avro_namespace`. - -```python - -class NamespaceDemo(Record): - _avro_namespace = 'xxx.xxx.xxx' - x = String() - y = Integer() - -``` - -The schema definition is like this. - -``` - -{ - 'name': 'NamespaceDemo', 'namespace': 'xxx.xxx.xxx', 'type': 'record', 'fields': [ - {'name': 'x', 'type': ['null', 'string']}, - {'name': 'y', 'type': ['null', 'int']} - ] -} - -``` - -## End-to-end encryption - -[End-to-end encryption](https://pulsar.apache.org/docs/en/next/cookbooks-encryption/#docsNav) allows applications to encrypt messages at producers and decrypt messages at consumers. - -### Configuration - -To use the end-to-end encryption feature in the Python client, you need to configure `publicKeyPath` and `privateKeyPath` for both producer and consumer. - -``` - -publicKeyPath: "./public.pem" -privateKeyPath: "./private.pem" - -``` - -### Tutorial - -This section provides step-by-step instructions on how to use the end-to-end encryption feature in the Python client. - -**Prerequisite** - -- Pulsar Python client 2.7.1 or later - -**Step** - -1. Create both public and private key pairs. - - **Input** - - ```shell - - openssl genrsa -out private.pem 2048 - openssl rsa -in private.pem -pubout -out public.pem - - ``` - -2. Create a producer to send encrypted messages. - - **Input** - - ```python - - import pulsar - - publicKeyPath = "./public.pem" - privateKeyPath = "./private.pem" - crypto_key_reader = pulsar.CryptoKeyReader(publicKeyPath, privateKeyPath) - client = pulsar.Client('pulsar://localhost:6650') - producer = client.create_producer(topic='encryption', encryption_key='encryption', crypto_key_reader=crypto_key_reader) - producer.send('encryption message'.encode('utf8')) - print('sent message') - producer.close() - client.close() - - ``` - -3. Create a consumer to receive encrypted messages. - - **Input** - - ```python - - import pulsar - - publicKeyPath = "./public.pem" - privateKeyPath = "./private.pem" - crypto_key_reader = pulsar.CryptoKeyReader(publicKeyPath, privateKeyPath) - client = pulsar.Client('pulsar://localhost:6650') - consumer = client.subscribe(topic='encryption', subscription_name='encryption-sub', crypto_key_reader=crypto_key_reader) - msg = consumer.receive() - print("Received msg '{}' id = '{}'".format(msg.data(), msg.message_id())) - consumer.close() - client.close() - - ``` - -4. Run the consumer to receive encrypted messages. - - **Input** - - ```shell - - python consumer.py - - ``` - -5. In a new terminal tab, run the producer to produce encrypted messages. 
- - **Input** - - ```shell - - python producer.py - - ``` - - Now you can see the producer sends messages and the consumer receives messages successfully. - - **Output** - - This is from the producer side. - - ``` - - sent message - - ``` - - This is from the consumer side. - - ``` - - Received msg 'encryption message' id = '(0,0,-1,-1)' - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/client-libraries-websocket.md b/site2/website/versioned_docs/version-2.9.0-deprecated/client-libraries-websocket.md deleted file mode 100644 index 60970c7ea4df2b..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/client-libraries-websocket.md +++ /dev/null @@ -1,664 +0,0 @@ ---- -id: client-libraries-websocket -title: Pulsar WebSocket API -sidebar_label: "WebSocket" -original_id: client-libraries-websocket ---- - -Pulsar [WebSocket](https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API) API provides a simple way to interact with Pulsar using languages that do not have an official [client library](getting-started-clients.md). Through WebSocket, you can publish and consume messages and use features available on the [Client Features Matrix](https://github.com/apache/pulsar/wiki/Client-Features-Matrix) page. - - -> You can use Pulsar WebSocket API with any WebSocket client library. See examples for Python and Node.js [below](#client-examples). - -## Running the WebSocket service - -The standalone variant of Pulsar that we recommend using for [local development](getting-started-standalone.md) already has the WebSocket service enabled. - -In non-standalone mode, there are two ways to deploy the WebSocket service: - -* [embedded](#embedded-with-a-pulsar-broker) with a Pulsar broker -* as a [separate component](#as-a-separate-component) - -### Embedded with a Pulsar broker - -In this mode, the WebSocket service will run within the same HTTP service that's already running in the broker. To enable this mode, set the [`webSocketServiceEnabled`](reference-configuration.md#broker-webSocketServiceEnabled) parameter in the [`conf/broker.conf`](reference-configuration.md#broker) configuration file in your installation. - -```properties - -webSocketServiceEnabled=true - -``` - -### As a separate component - -In this mode, the WebSocket service will be run from a Pulsar [broker](reference-terminology.md#broker) as a separate service. Configuration for this mode is handled in the [`conf/websocket.conf`](reference-configuration.md#websocket) configuration file. 
You'll need to set *at least* the following parameters:
-
-* [`configurationStoreServers`](reference-configuration.md#websocket-configurationStoreServers)
-* [`webServicePort`](reference-configuration.md#websocket-webServicePort)
-* [`clusterName`](reference-configuration.md#websocket-clusterName)
-
-Here's an example:
-
-```properties
-
-configurationStoreServers=zk1:2181,zk2:2181,zk3:2181
-webServicePort=8080
-clusterName=my-cluster
-
-```
-
-### Security settings
-
-To enable TLS encryption on the WebSocket service:
-
-```properties
-
-tlsEnabled=true
-tlsAllowInsecureConnection=false
-tlsCertificateFilePath=/path/to/client-websocket.cert.pem
-tlsKeyFilePath=/path/to/client-websocket.key-pk8.pem
-tlsTrustCertsFilePath=/path/to/ca.cert.pem
-
-```
-
-### Starting the broker
-
-When the configuration is set, you can start the service using the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) tool:
-
-```shell
-
-$ bin/pulsar-daemon start websocket
-
-```
-
-## API Reference
-
-Pulsar's WebSocket API offers three endpoints for [producing](#producer-endpoint) messages, [consuming](#consumer-endpoint) messages and [reading](#reader-endpoint) messages.
-
-All exchanges via the WebSocket API use JSON.
-
-### Authentication
-
-#### Browser JavaScript WebSocket client
-
-Use the query param `token` to transport the authentication token.
-
-```http
-
-ws://broker-service-url:8080/path?token=token
-
-```
-
-### Producer endpoint
-
-The producer endpoint requires you to specify a tenant, namespace, and topic in the URL:
-
-```http
-
-ws://broker-service-url:8080/ws/v2/producer/persistent/:tenant/:namespace/:topic
-
-```
-
-##### Query param
-
-Key | Type | Required? | Explanation
-:---|:-----|:----------|:-----------
-`sendTimeoutMillis` | long | no | Send timeout (default: 30 secs)
-`batchingEnabled` | boolean | no | Enable batching of messages (default: false)
-`batchingMaxMessages` | int | no | Maximum number of messages permitted in a batch (default: 1000)
-`maxPendingMessages` | int | no | Set the max size of the internal queue holding the messages (default: 1000)
-`batchingMaxPublishDelay` | long | no | Time period within which the messages will be batched (default: 10ms)
-`messageRoutingMode` | string | no | Message [routing mode](https://pulsar.apache.org/api/client/index.html?org/apache/pulsar/client/api/ProducerConfiguration.MessageRoutingMode.html) for the partitioned producer: `SinglePartition`, `RoundRobinPartition`
-`compressionType` | string | no | Compression [type](https://pulsar.apache.org/api/client/index.html?org/apache/pulsar/client/api/CompressionType.html): `LZ4`, `ZLIB`
-`producerName` | string | no | Specify the name for the producer. Pulsar enforces that only one producer with the same name can publish on a topic
-`initialSequenceId` | long | no | Set the baseline for the sequence ids for messages published by the producer.
-`hashingScheme` | string | no | [Hashing function](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.HashingScheme.html) to use when publishing on a partitioned topic: `JavaStringHash`, `Murmur3_32Hash`
-`token` | string | no | Authentication token, this is used for the browser JavaScript client
-
-
-#### Publishing a message
-
-```json
-
-{
-  "payload": "SGVsbG8gV29ybGQ=",
-  "properties": {"key1": "value1", "key2": "value2"},
-  "context": "1"
-}
-
-```
-
-Key | Type | Required? | Explanation
-:---|:-----|:----------|:-----------
-`payload` | string | yes | Base-64 encoded payload
-`properties` | key-value pairs | no | Application-defined properties
-`context` | string | no | Application-defined request identifier
-`key` | string | no | For partitioned topics, decides which partition to use
-`replicationClusters` | array | no | Restrict replication to this list of [clusters](reference-terminology.md#cluster), specified by name
-
-
-##### Example success response
-
-```json
-
-{
-  "result": "ok",
-  "messageId": "CAAQAw==",
-  "context": "1"
-}
-
-```
-
-##### Example failure response
-
-```json
-
-{
-  "result": "send-error:3",
-  "errorMsg": "Failed to de-serialize from JSON",
-  "context": "1"
-}
-
-```
-
-Key | Type | Required? | Explanation
-:---|:-----|:----------|:-----------
-`result` | string | yes | `ok` if successful or an error message if unsuccessful
-`messageId` | string | yes | Message ID assigned to the published message
-`context` | string | no | Application-defined request identifier
-
-
-### Consumer endpoint
-
-The consumer endpoint requires you to specify a tenant, namespace, and topic, as well as a subscription, in the URL:
-
-```http
-
-ws://broker-service-url:8080/ws/v2/consumer/persistent/:tenant/:namespace/:topic/:subscription
-
-```
-
-##### Query param
-
-Key | Type | Required? | Explanation
-:---|:-----|:----------|:-----------
-`ackTimeoutMillis` | long | no | Set the timeout for unacked messages (default: 0)
-`subscriptionType` | string | no | [Subscription type](https://pulsar.apache.org/api/client/index.html?org/apache/pulsar/client/api/SubscriptionType.html): `Exclusive`, `Failover`, `Shared`, `Key_Shared`
-`receiverQueueSize` | int | no | Size of the consumer receive queue (default: 1000)
-`consumerName` | string | no | Consumer name
-`priorityLevel` | int | no | Define a [priority](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setPriorityLevel-int-) for the consumer
-`maxRedeliverCount` | int | no | Define a [maxRedeliverCount](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#deadLetterPolicy-org.apache.pulsar.client.api.DeadLetterPolicy-) for the consumer (default: 0). Activates the [Dead Letter Topic](https://github.com/apache/pulsar/wiki/PIP-22%3A-Pulsar-Dead-Letter-Topic) feature.
-`deadLetterTopic` | string | no | Define a [deadLetterTopic](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#deadLetterPolicy-org.apache.pulsar.client.api.DeadLetterPolicy-) for the consumer (default: {topic}-{subscription}-DLQ). Activates the [Dead Letter Topic](https://github.com/apache/pulsar/wiki/PIP-22%3A-Pulsar-Dead-Letter-Topic) feature.
-`pullMode` | boolean | no | Enable pull mode (default: false). See "Flow Control" below.
-`negativeAckRedeliveryDelay` | int | no | When a message is negatively acknowledged, the delay time before the message is redelivered (in milliseconds). The default value is 60000.
-`token` | string | no | Authentication token, this is used for the browser JavaScript client
-
-NB: these parameters (except `pullMode`) apply to the internal consumer of the WebSocket service.
-So messages will be subject to the redelivery settings as soon as they get into the receive queue,
-even if the client doesn't consume on the WebSocket.
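For example, the query parameters above are appended directly to the consumer endpoint URL. Here is a minimal sketch using the [`websocket-client`](https://pypi.python.org/pypi/websocket-client) Python package demonstrated in the examples further below; the host, topic, subscription, and parameter values are placeholders for your own setup:

```python

import base64
import json

import websocket  # pip install websocket-client

# Placeholder host/topic/subscription; adjust to your deployment.
url = ('ws://localhost:8080/ws/v2/consumer/persistent/public/default/my-topic/my-sub'
       '?subscriptionType=Shared&receiverQueueSize=500')

ws = websocket.create_connection(url)
msg = json.loads(ws.recv())                           # one message pushed by the service
print(base64.b64decode(msg['payload']))               # payloads are Base64-encoded
ws.send(json.dumps({'messageId': msg['messageId']}))  # acknowledge it
ws.close()

```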
- -##### Receiving messages - -Server will push messages on the WebSocket session: - -```json - -{ -"messageId": "CAMQADAA", - "payload": "hvXcJvHW7kOSrUn17P2q71RA5SdiXwZBqw==", - "properties": {}, - "publishTime": "2021-10-29T16:01:38.967-07:00", - "redeliveryCount": 0, - "encryptionContext": { - "keys": { - "client-rsa.pem": { - "keyValue": "jEuwS+PeUzmCo7IfLNxqoj4h7txbLjCQjkwpaw5AWJfZ2xoIdMkOuWDkOsqgFmWwxiecakS6GOZHs94x3sxzKHQx9Oe1jpwBg2e7L4fd26pp+WmAiLm/ArZJo6JotTeFSvKO3u/yQtGTZojDDQxiqFOQ1ZbMdtMZA8DpSMuq+Zx7PqLo43UdW1+krjQfE5WD+y+qE3LJQfwyVDnXxoRtqWLpVsAROlN2LxaMbaftv5HckoejJoB4xpf/dPOUqhnRstwQHf6klKT5iNhjsY4usACt78uILT0pEPd14h8wEBidBz/vAlC/zVMEqiDVzgNS7dqEYS4iHbf7cnWVCn3Hxw==", - "metadata": {} - } - }, - "param": "Tfu1PxVm6S9D3+Hk", - "compressionType": "NONE", - "uncompressedMessageSize": 0, - "batchSize": { - "empty": false, - "present": true - } - } -} - -``` - -Below are the parameters in the WebSocket consumer response. - -- General parameters - - Key | Type | Required? | Explanation - :---|:-----|:----------|:----------- - `messageId` | string | yes | Message ID - `payload` | string | yes | Base-64 encoded payload - `publishTime` | string | yes | Publish timestamp - `redeliveryCount` | number | yes | Number of times this message was already delivered - `properties` | key-value pairs | no | Application-defined properties - `key` | string | no | Original routing key set by producer - `encryptionContext` | EncryptionContext | no | Encryption context that consumers can use to decrypt received messages - `param` | string | no | Initialization vector for cipher (Base64 encoding) - `batchSize` | string | no | Number of entries in a message (if it is a batch message) - `uncompressedMessageSize` | string | no | Message size before compression - `compressionType` | string | no | Algorithm used to compress the message payload - -- `encryptionContext` related parameter - - Key | Type | Required? | Explanation - :---|:-----|:----------|:----------- - `keys` |key-EncryptionKey pairs | yes | Key in `key-EncryptionKey` pairs is an encryption key name. Value in `key-EncryptionKey` pairs is an encryption key object. - -- `encryptionKey` related parameters - - Key | Type | Required? | Explanation - :---|:-----|:----------|:----------- - `keyValue` | string | yes | Encryption key (Base64 encoding) - `metadata` | key-value pairs | no | Application-defined metadata - -#### Acknowledging the message - -Consumer needs to acknowledge the successful processing of the message to -have the Pulsar broker delete it. - -```json - -{ - "messageId": "CAAQAw==" -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`messageId`| string | yes | Message ID of the processed message - -#### Negatively acknowledging messages - -```json - -{ - "type": "negativeAcknowledge", - "messageId": "CAAQAw==" -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`messageId`| string | yes | Message ID of the processed message - -#### Flow control - -##### Push Mode - -By default (`pullMode=false`), the consumer endpoint will use the `receiverQueueSize` parameter both to size its -internal receive queue and to limit the number of unacknowledged messages that are passed to the WebSocket client. -In this mode, if you don't send acknowledgements, the Pulsar WebSocket service will stop sending messages after reaching -`receiverQueueSize` unacked messages sent to the WebSocket client. 
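In other words, in push mode the acknowledgements themselves act as the flow-control signal. A minimal sketch of a consume loop that keeps the window open by acknowledging every message (the URL and parameter values are placeholders):

```python

import json

import websocket  # pip install websocket-client

ws = websocket.create_connection(
    'ws://localhost:8080/ws/v2/consumer/persistent/public/default/my-topic/my-sub'
    '?receiverQueueSize=10')

while True:
    msg = json.loads(ws.recv())
    # ... process msg['payload'] here ...
    # Without this acknowledgement, the service stops pushing after
    # 10 unacknowledged messages (the receiverQueueSize set above).
    ws.send(json.dumps({'messageId': msg['messageId']}))

```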
- -##### Pull Mode - -If you set `pullMode` to `true`, the WebSocket client will need to send `permit` commands to permit the -Pulsar WebSocket service to send more messages. - -```json - -{ - "type": "permit", - "permitMessages": 100 -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`type`| string | yes | Type of command. Must be `permit` -`permitMessages`| int | yes | Number of messages to permit - -NB: in this mode it's possible to acknowledge messages in a different connection. - -#### Check if reach end of topic - -Consumer can check if it has reached end of topic by sending `isEndOfTopic` request. - -**Request** - -```json - -{ - "type": "isEndOfTopic" -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`type`| string | yes | Type of command. Must be `isEndOfTopic` - -**Response** - -```json - -{ - "endOfTopic": "true/false" - } - -``` - -### Reader endpoint - -The reader endpoint requires you to specify a tenant, namespace, and topic in the URL: - -```http - -ws://broker-service-url:8080/ws/v2/reader/persistent/:tenant/:namespace/:topic - -``` - -##### Query param - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`readerName` | string | no | Reader name -`receiverQueueSize` | int | no | Size of the consumer receive queue (default: 1000) -`messageId` | int or enum | no | Message ID to start from, `earliest` or `latest` (default: `latest`) -`token` | string | no | Authentication token, this is used for the browser javascript client - -##### Receiving messages - -Server will push messages on the WebSocket session: - -```json - -{ - "messageId": "CAAQAw==", - "payload": "SGVsbG8gV29ybGQ=", - "properties": {"key1": "value1", "key2": "value2"}, - "publishTime": "2016-08-30 16:45:57.785", - "redeliveryCount": 4 -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`messageId` | string | yes | Message ID -`payload` | string | yes | Base-64 encoded payload -`publishTime` | string | yes | Publish timestamp -`redeliveryCount` | number | yes | Number of times this message was already delivered -`properties` | key-value pairs | no | Application-defined properties -`key` | string | no | Original routing key set by producer - -#### Acknowledging the message - -**In WebSocket**, Reader needs to acknowledge the successful processing of the message to -have the Pulsar WebSocket service update the number of pending messages. -If you don't send acknowledgements, Pulsar WebSocket service will stop sending messages after reaching the pendingMessages limit. - -```json - -{ - "messageId": "CAAQAw==" -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`messageId`| string | yes | Message ID of the processed message - -#### Check if reach end of topic - -Consumer can check if it has reached end of topic by sending `isEndOfTopic` request. - -**Request** - -```json - -{ - "type": "isEndOfTopic" -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`type`| string | yes | Type of command. 
Must be `isEndOfTopic` - -**Response** - -```json - -{ - "endOfTopic": "true/false" - } - -``` - -### Error codes - -In case of error the server will close the WebSocket session using the -following error codes: - -Error Code | Error Message -:----------|:------------- -1 | Failed to create producer -2 | Failed to subscribe -3 | Failed to deserialize from JSON -4 | Failed to serialize to JSON -5 | Failed to authenticate client -6 | Client is not authorized -7 | Invalid payload encoding -8 | Unknown error - -> The application is responsible for re-establishing a new WebSocket session after a backoff period. - -## Client examples - -Below you'll find code examples for the Pulsar WebSocket API in [Python](#python) and [Node.js](#nodejs). - -### Python - -This example uses the [`websocket-client`](https://pypi.python.org/pypi/websocket-client) package. You can install it using [pip](https://pypi.python.org/pypi/pip): - -```shell - -$ pip install websocket-client - -``` - -You can also download it from [PyPI](https://pypi.python.org/pypi/websocket-client). - -#### Python producer - -Here's an example Python producer that sends a simple message to a Pulsar [topic](reference-terminology.md#topic): - -```python - -import websocket, base64, json - -# If set enableTLS to true, your have to set tlsEnabled to true in conf/websocket.conf. -enable_TLS = False -scheme = 'ws' -if enable_TLS: - scheme = 'wss' - -TOPIC = scheme + '://localhost:8080/ws/v2/producer/persistent/public/default/my-topic' - -ws = websocket.create_connection(TOPIC) - -# encode message -s = "Hello World" -firstEncoded = s.encode("UTF-8") -binaryEncoded = base64.b64encode(firstEncoded) -payloadString = binaryEncoded.decode('UTF-8') - -# Send one message as JSON -ws.send(json.dumps({ - 'payload' : payloadString, - 'properties': { - 'key1' : 'value1', - 'key2' : 'value2' - }, - 'context' : 5 -})) - -response = json.loads(ws.recv()) -if response['result'] == 'ok': - print( 'Message published successfully') -else: - print('Failed to publish message:', response) -ws.close() - -``` - -#### Python consumer - -Here's an example Python consumer that listens on a Pulsar topic and prints the message ID whenever a message arrives: - -```python - -import websocket, base64, json - -# If set enableTLS to true, your have to set tlsEnabled to true in conf/websocket.conf. -enable_TLS = False -scheme = 'ws' -if enable_TLS: - scheme = 'wss' - -TOPIC = scheme + '://localhost:8080/ws/v2/consumer/persistent/public/default/my-topic/my-sub' - -ws = websocket.create_connection(TOPIC) - -while True: - msg = json.loads(ws.recv()) - if not msg: break - - print( "Received: {} - payload: {}".format(msg, base64.b64decode(msg['payload']))) - - # Acknowledge successful processing - ws.send(json.dumps({'messageId' : msg['messageId']})) - -ws.close() - -``` - -#### Python reader - -Here's an example Python reader that listens on a Pulsar topic and prints the message ID whenever a message arrives: - -```python - -import websocket, base64, json - -# If set enableTLS to true, your have to set tlsEnabled to true in conf/websocket.conf. 
-enable_TLS = False -scheme = 'ws' -if enable_TLS: - scheme = 'wss' - -TOPIC = scheme + '://localhost:8080/ws/v2/reader/persistent/public/default/my-topic' -ws = websocket.create_connection(TOPIC) - -while True: - msg = json.loads(ws.recv()) - if not msg: break - - print ( "Received: {} - payload: {}".format(msg, base64.b64decode(msg['payload']))) - - # Acknowledge successful processing - ws.send(json.dumps({'messageId' : msg['messageId']})) - -ws.close() - -``` - -### Node.js - -This example uses the [`ws`](https://websockets.github.io/ws/) package. You can install it using [npm](https://www.npmjs.com/): - -```shell - -$ npm install ws - -``` - -#### Node.js producer - -Here's an example Node.js producer that sends a simple message to a Pulsar topic: - -```javascript - -const WebSocket = require('ws'); - -// If set enableTLS to true, your have to set tlsEnabled to true in conf/websocket.conf. -const enableTLS = false; -const topic = `${enableTLS ? 'wss' : 'ws'}://localhost:8080/ws/v2/producer/persistent/public/default/my-topic`; -const ws = new WebSocket(topic); - -var message = { - "payload" : new Buffer("Hello World").toString('base64'), - "properties": { - "key1" : "value1", - "key2" : "value2" - }, - "context" : "1" -}; - -ws.on('open', function() { - // Send one message - ws.send(JSON.stringify(message)); -}); - -ws.on('message', function(message) { - console.log('received ack: %s', message); -}); - -``` - -#### Node.js consumer - -Here's an example Node.js consumer that listens on the same topic used by the producer above: - -```javascript - -const WebSocket = require('ws'); - -// If set enableTLS to true, your have to set tlsEnabled to true in conf/websocket.conf. -const enableTLS = false; -const topic = `${enableTLS ? 'wss' : 'ws'}://localhost:8080/ws/v2/consumer/persistent/public/default/my-topic/my-sub`; -const ws = new WebSocket(topic); - -ws.on('message', function(message) { - var receiveMsg = JSON.parse(message); - console.log('Received: %s - payload: %s', message, new Buffer(receiveMsg.payload, 'base64').toString()); - var ackMsg = {"messageId" : receiveMsg.messageId}; - ws.send(JSON.stringify(ackMsg)); -}); - -``` - -#### NodeJS reader - -```javascript - -const WebSocket = require('ws'); - -// If set enableTLS to true, your have to set tlsEnabled to true in conf/websocket.conf. -const enableTLS = false; -const topic = `${enableTLS ? 
'wss' : 'ws'}://localhost:8080/ws/v2/reader/persistent/public/default/my-topic`; -const ws = new WebSocket(topic); - -ws.on('message', function(message) { - var receiveMsg = JSON.parse(message); - console.log('Received: %s - payload: %s', message, new Buffer(receiveMsg.payload, 'base64').toString()); - var ackMsg = {"messageId" : receiveMsg.messageId}; - ws.send(JSON.stringify(ackMsg)); -}); - -``` - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/client-libraries.md b/site2/website/versioned_docs/version-2.9.0-deprecated/client-libraries.md deleted file mode 100644 index 607c9317e4b7fb..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/client-libraries.md +++ /dev/null @@ -1,36 +0,0 @@ ---- -id: client-libraries -title: Pulsar client libraries -sidebar_label: "Overview" -original_id: client-libraries ---- - -Pulsar supports the following client libraries: - -- [Java client](client-libraries-java.md) -- [Go client](client-libraries-go.md) -- [Python client](client-libraries-python.md) -- [C++ client](client-libraries-cpp.md) -- [Node.js client](client-libraries-node.md) -- [WebSocket client](client-libraries-websocket.md) -- [C# client](client-libraries-dotnet.md) - -## Feature matrix -Pulsar client feature matrix for different languages is listed on [Client Features Matrix](https://github.com/apache/pulsar/wiki/Client-Features-Matrix) page. - -## Third-party clients - -Besides the official released clients, multiple projects on developing Pulsar clients are available in different languages. - -> If you have developed a new Pulsar client, feel free to submit a pull request and add your client to the list below. - -| Language | Project | Maintainer | License | Description | -|----------|---------|------------|---------|-------------| -| Go | [pulsar-client-go](https://github.com/Comcast/pulsar-client-go) | [Comcast](https://github.com/Comcast) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | A native golang client | -| Go | [go-pulsar](https://github.com/t2y/go-pulsar) | [t2y](https://github.com/t2y) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | -| Haskell | [supernova](https://github.com/cr-org/supernova) | [Chatroulette](https://github.com/cr-org) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Native Pulsar client for Haskell | -| Scala | [neutron](https://github.com/cr-org/neutron) | [Chatroulette](https://github.com/cr-org) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Purely functional Apache Pulsar client for Scala built on top of Fs2 | -| Scala | [pulsar4s](https://github.com/sksamuel/pulsar4s) | [sksamuel](https://github.com/sksamuel) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Idomatic, typesafe, and reactive Scala client for Apache Pulsar | -| Rust | [pulsar-rs](https://github.com/wyyerd/pulsar-rs) | [Wyyerd Group](https://github.com/wyyerd) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Future-based Rust bindings for Apache Pulsar | -| .NET | [pulsar-client-dotnet](https://github.com/fsharplang-ru/pulsar-client-dotnet) | [Lanayx](https://github.com/Lanayx) | 
[![GitHub](https://img.shields.io/badge/license-MIT-green.svg)](https://opensource.org/licenses/MIT) | Native .NET client for C#/F#/VB | -| Node.js | [pulsar-flex](https://github.com/ayeo-flex-org/pulsar-flex) | [Daniel Sinai](https://github.com/danielsinai), [Ron Farkash](https://github.com/ronfarkash), [Gal Rosenberg](https://github.com/galrose)| [![GitHub](https://img.shields.io/badge/license-MIT-green.svg)](https://opensource.org/licenses/MIT) | Native Nodejs client | diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/concepts-architecture-overview.md b/site2/website/versioned_docs/version-2.9.0-deprecated/concepts-architecture-overview.md deleted file mode 100644 index 4baa8c30a0d009..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/concepts-architecture-overview.md +++ /dev/null @@ -1,172 +0,0 @@ ---- -id: concepts-architecture-overview -title: Architecture Overview -sidebar_label: "Architecture" -original_id: concepts-architecture-overview ---- - -At the highest level, a Pulsar instance is composed of one or more Pulsar clusters. Clusters within an instance can [replicate](concepts-replication.md) data amongst themselves. - -In a Pulsar cluster: - -* One or more brokers handles and load balances incoming messages from producers, dispatches messages to consumers, communicates with the Pulsar configuration store to handle various coordination tasks, stores messages in BookKeeper instances (aka bookies), relies on a cluster-specific ZooKeeper cluster for certain tasks, and more. -* A BookKeeper cluster consisting of one or more bookies handles [persistent storage](#persistent-storage) of messages. -* A ZooKeeper cluster specific to that cluster handles coordination tasks between Pulsar clusters. - -The diagram below provides an illustration of a Pulsar cluster: - -![Pulsar architecture diagram](/assets/pulsar-system-architecture.png) - -At the broader instance level, an instance-wide ZooKeeper cluster called the configuration store handles coordination tasks involving multiple clusters, for example [geo-replication](concepts-replication.md). - -## Brokers - -The Pulsar message broker is a stateless component that's primarily responsible for running two other components: - -* An HTTP server that exposes a {@inject: rest:REST:/} API for both administrative tasks and [topic lookup](concepts-clients.md#client-setup-phase) for producers and consumers. The producers connect to the brokers to publish messages and the consumers connect to the brokers to consume the messages. -* A dispatcher, which is an asynchronous TCP server over a custom [binary protocol](developing-binary-protocol.md) used for all data transfers - -Messages are typically dispatched out of a [managed ledger](#managed-ledgers) cache for the sake of performance, *unless* the backlog exceeds the cache size. If the backlog grows too large for the cache, the broker will start reading entries from BookKeeper. - -Finally, to support geo-replication on global topics, the broker manages replicators that tail the entries published in the local region and republish them to the remote region using the Pulsar [Java client library](client-libraries-java.md). - -> For a guide to managing Pulsar brokers, see the [brokers](admin-api-brokers.md) guide. - -## Clusters - -A Pulsar instance consists of one or more Pulsar *clusters*. 
Clusters, in turn, consist of: - -* One or more Pulsar [brokers](#brokers) -* A ZooKeeper quorum used for cluster-level configuration and coordination -* An ensemble of bookies used for [persistent storage](#persistent-storage) of messages - -Clusters can replicate amongst themselves using [geo-replication](concepts-replication.md). - -> For a guide to managing Pulsar clusters, see the [clusters](admin-api-clusters.md) guide. - -## Metadata store - -The Pulsar metadata store maintains all the metadata of a Pulsar cluster, such as topic metadata, schema, broker load data, and so on. Pulsar uses [Apache ZooKeeper](https://zookeeper.apache.org/) for metadata storage, cluster configuration, and coordination. The Pulsar metadata store can be deployed on a separate ZooKeeper cluster or deployed on an existing ZooKeeper cluster. You can use one ZooKeeper cluster for both Pulsar metadata store and BookKeeper metadata store. If you want to deploy Pulsar brokers connected to an existing BookKeeper cluster, you need to deploy separate ZooKeeper clusters for Pulsar metadata store and BookKeeper metadata store respectively. - -In a Pulsar instance: - -* A configuration store quorum stores configuration for tenants, namespaces, and other entities that need to be globally consistent. -* Each cluster has its own local ZooKeeper ensemble that stores cluster-specific configuration and coordination such as which brokers are responsible for which topics as well as ownership metadata, broker load reports, BookKeeper ledger metadata, and more. - -## Configuration store - -The configuration store maintains all the configurations of a Pulsar instance, such as clusters, tenants, namespaces, partitioned topic related configurations, and so on. A Pulsar instance can have a single local cluster, multiple local clusters, or multiple cross-region clusters. Consequently, the configuration store can share the configurations across multiple clusters under a Pulsar instance. The configuration store can be deployed on a separate ZooKeeper cluster or deployed on an existing ZooKeeper cluster. - -## Persistent storage - -Pulsar provides guaranteed message delivery for applications. If a message successfully reaches a Pulsar broker, it will be delivered to its intended target. - -This guarantee requires that non-acknowledged messages are stored in a durable manner until they can be delivered to and acknowledged by consumers. This mode of messaging is commonly called *persistent messaging*. In Pulsar, N copies of all messages are stored and synced on disk, for example 4 copies across two servers with mirrored [RAID](https://en.wikipedia.org/wiki/RAID) volumes on each server. - -### Apache BookKeeper - -Pulsar uses a system called [Apache BookKeeper](http://bookkeeper.apache.org/) for persistent message storage. BookKeeper is a distributed [write-ahead log](https://en.wikipedia.org/wiki/Write-ahead_logging) (WAL) system that provides a number of crucial advantages for Pulsar: - -* It enables Pulsar to utilize many independent logs, called [ledgers](#ledgers). Multiple ledgers can be created for topics over time. -* It offers very efficient storage for sequential data that handles entry replication. -* It guarantees read consistency of ledgers in the presence of various system failures. -* It offers even distribution of I/O across bookies. -* It's horizontally scalable in both capacity and throughput. Capacity can be immediately increased by adding more bookies to a cluster. 
-* Bookies are designed to handle thousands of ledgers with concurrent reads and writes. By using multiple disk devices---one for journal and another for general storage--bookies are able to isolate the effects of read operations from the latency of ongoing write operations. - -In addition to message data, *cursors* are also persistently stored in BookKeeper. Cursors are [subscription](reference-terminology.md#subscription) positions for [consumers](reference-terminology.md#consumer). BookKeeper enables Pulsar to store consumer position in a scalable fashion. - -At the moment, Pulsar supports persistent message storage. This accounts for the `persistent` in all topic names. Here's an example: - -```http - -persistent://my-tenant/my-namespace/my-topic - -``` - -> Pulsar also supports ephemeral ([non-persistent](concepts-messaging.md#non-persistent-topics)) message storage. - - -You can see an illustration of how brokers and bookies interact in the diagram below: - -![Brokers and bookies](/assets/broker-bookie.png) - - -### Ledgers - -A ledger is an append-only data structure with a single writer that is assigned to multiple BookKeeper storage nodes, or bookies. Ledger entries are replicated to multiple bookies. Ledgers themselves have very simple semantics: - -* A Pulsar broker can create a ledger, append entries to the ledger, and close the ledger. -* After the ledger has been closed---either explicitly or because the writer process crashed---it can then be opened only in read-only mode. -* Finally, when entries in the ledger are no longer needed, the whole ledger can be deleted from the system (across all bookies). - -#### Ledger read consistency - -The main strength of Bookkeeper is that it guarantees read consistency in ledgers in the presence of failures. Since the ledger can only be written to by a single process, that process is free to append entries very efficiently, without need to obtain consensus. After a failure, the ledger will go through a recovery process that will finalize the state of the ledger and establish which entry was last committed to the log. After that point, all readers of the ledger are guaranteed to see the exact same content. - -#### Managed ledgers - -Given that Bookkeeper ledgers provide a single log abstraction, a library was developed on top of the ledger called the *managed ledger* that represents the storage layer for a single topic. A managed ledger represents the abstraction of a stream of messages with a single writer that keeps appending at the end of the stream and multiple cursors that are consuming the stream, each with its own associated position. - -Internally, a single managed ledger uses multiple BookKeeper ledgers to store the data. There are two reasons to have multiple ledgers: - -1. After a failure, a ledger is no longer writable and a new one needs to be created. -2. A ledger can be deleted when all cursors have consumed the messages it contains. This allows for periodic rollover of ledgers. - -### Journal storage - -In BookKeeper, *journal* files contain BookKeeper transaction logs. Before making an update to a [ledger](#ledgers), a bookie needs to ensure that a transaction describing the update is written to persistent (non-volatile) storage. A new journal file is created once the bookie starts or the older journal file reaches the journal file size threshold (configured using the [`journalMaxSizeMB`](reference-configuration.md#bookkeeper-journalMaxSizeMB) parameter). 
- -## Pulsar proxy - -One way for Pulsar clients to interact with a Pulsar [cluster](#clusters) is by connecting to Pulsar message [brokers](#brokers) directly. In some cases, however, this kind of direct connection is either infeasible or undesirable because the client doesn't have direct access to broker addresses. If you're running Pulsar in a cloud environment or on [Kubernetes](https://kubernetes.io) or an analogous platform, for example, then direct client connections to brokers are likely not possible. - -The **Pulsar proxy** provides a solution to this problem by acting as a single gateway for all of the brokers in a cluster. If you run the Pulsar proxy (which, again, is optional), all client connections with the Pulsar cluster will flow through the proxy rather than communicating with brokers. - -> For the sake of performance and fault tolerance, you can run as many instances of the Pulsar proxy as you'd like. - -Architecturally, the Pulsar proxy gets all the information it requires from ZooKeeper. When starting the proxy on a machine, you only need to provide ZooKeeper connection strings for the cluster-specific and instance-wide configuration store clusters. Here's an example: - -```bash - -$ bin/pulsar proxy \ - --zookeeper-servers zk-0,zk-1,zk-2 \ - --configuration-store-servers zk-0,zk-1,zk-2 - -``` - -> #### Pulsar proxy docs -> For documentation on using the Pulsar proxy, see the [Pulsar proxy admin documentation](administration-proxy.md). - - -Some important things to know about the Pulsar proxy: - -* Connecting clients don't need to provide *any* specific configuration to use the Pulsar proxy. You won't need to update the client configuration for existing applications beyond updating the IP used for the service URL (for example if you're running a load balancer over the Pulsar proxy). -* [TLS encryption](security-tls-transport.md) and [authentication](security-tls-authentication.md) is supported by the Pulsar proxy - -## Service discovery - -[Clients](getting-started-clients.md) connecting to Pulsar brokers need to be able to communicate with an entire Pulsar instance using a single URL. - -You can use your own service discovery system if you'd like. If you use your own system, there is just one requirement: when a client performs an HTTP request to an endpoint, such as `http://pulsar.us-west.example.com:8080`, the client needs to be redirected to *some* active broker in the desired cluster, whether via DNS, an HTTP or IP redirect, or some other means. - -The diagram below illustrates Pulsar service discovery: - -![alt-text](/assets/pulsar-service-discovery.png) - -In this diagram, the Pulsar cluster is addressable via a single DNS name: `pulsar-cluster.acme.com`. A [Python client](client-libraries-python.md), for example, could access this Pulsar cluster like this: - -```python - -from pulsar import Client - -client = Client('pulsar://pulsar-cluster.acme.com:6650') - -``` - -:::note - -In Pulsar, each topic is handled by only one broker. Initial requests from a client to read, update or delete a topic are sent to a broker that may not be the topic owner. If the broker cannot handle the request for this topic, it redirects the request to the appropriate broker. 
- -::: - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/concepts-authentication.md b/site2/website/versioned_docs/version-2.9.0-deprecated/concepts-authentication.md deleted file mode 100644 index f6307890c904a7..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/concepts-authentication.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -id: concepts-authentication -title: Authentication and Authorization -sidebar_label: "Authentication and Authorization" -original_id: concepts-authentication ---- - -Pulsar supports a pluggable [authentication](security-overview.md) mechanism which can be configured at the proxy and/or the broker. Pulsar also supports a pluggable [authorization](security-authorization.md) mechanism. These mechanisms work together to identify the client and its access rights on topics, namespaces and tenants. - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/concepts-clients.md b/site2/website/versioned_docs/version-2.9.0-deprecated/concepts-clients.md deleted file mode 100644 index 4040624f7d6366..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/concepts-clients.md +++ /dev/null @@ -1,92 +0,0 @@ ---- -id: concepts-clients -title: Pulsar Clients -sidebar_label: "Clients" -original_id: concepts-clients ---- - -Pulsar exposes a client API with language bindings for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python.md), [C++](client-libraries-cpp.md) and [C#](client-libraries-dotnet.md). The client API optimizes and encapsulates Pulsar's client-broker communication protocol and exposes a simple and intuitive API for use by applications. - -Under the hood, the current official Pulsar client libraries support transparent reconnection and/or connection failover to brokers, queuing of messages until acknowledged by the broker, and heuristics such as connection retries with backoff. - -> **Custom client libraries** -> If you'd like to create your own client library, we recommend consulting the documentation on Pulsar's custom [binary protocol](developing-binary-protocol.md). - - -## Client setup phase - -Before an application creates a producer/consumer, the Pulsar client library needs to initiate a setup phase including two steps: - -1. The client attempts to determine the owner of the topic by sending an HTTP lookup request to the broker. The request could reach one of the active brokers which, by looking at the (cached) zookeeper metadata knows who is serving the topic or, in case nobody is serving it, tries to assign it to the least loaded broker. -1. Once the client library has the broker address, it creates a TCP connection (or reuse an existing connection from the pool) and authenticates it. Within this connection, client and broker exchange binary commands from a custom protocol. At this point the client sends a command to create producer/consumer to the broker, which will comply after having validated the authorization policy. - -Whenever the TCP connection breaks, the client immediately re-initiates this setup phase and keeps trying with exponential backoff to re-establish the producer or consumer until the operation succeeds. - -## Reader interface - -In Pulsar, the "standard" [consumer interface](concepts-messaging.md#consumers) involves using consumers to listen on [topics](reference-terminology.md#topic), process incoming messages, and finally acknowledge those messages when they are processed. 
Whenever a new subscription is created, it is initially positioned at the end of the topic (by default), and consumers associated with that subscription begin reading with the first message created afterwards. Whenever a consumer connects to a topic using a pre-existing subscription, it begins reading from the earliest unacked message within that subscription. In summary, with the consumer interface, subscription cursors are automatically managed by Pulsar in response to [message acknowledgements](concepts-messaging.md#acknowledgement).
-
-The **reader interface** for Pulsar enables applications to manually manage cursors. When you use a reader to connect to a topic---rather than a consumer---you need to specify *which* message the reader begins reading from when it connects to a topic. When connecting to a topic, the reader interface enables you to begin with:
-
-* The **earliest** available message in the topic
-* The **latest** available message in the topic
-* Some other message between the earliest and the latest. If you select this option, you'll need to explicitly provide a message ID. Your application will be responsible for "knowing" this message ID in advance, perhaps fetching it from a persistent data store or cache.
-
-The reader interface is helpful for use cases like using Pulsar to provide effectively-once processing semantics for a stream processing system. For this use case, it's essential that the stream processing system be able to "rewind" topics to a specific message and begin reading there. The reader interface provides Pulsar clients with the low-level abstraction necessary to "manually position" themselves within a topic.
-
-Internally, the reader interface is implemented as a consumer using an exclusive, non-durable subscription to the topic with a randomly-allocated name.
-
-[ **IMPORTANT** ]
-
-Unlike subscriptions/consumers, readers are non-durable in nature and do not prevent data in a topic from being deleted, thus it is ***strongly*** advised that [data retention](cookbooks-retention-expiry.md) be configured. If data retention for a topic is not configured for an adequate amount of time, messages that the reader has not yet read might be deleted. This causes the reader to essentially skip messages. Configuring data retention for a topic guarantees that the reader has a certain window of time in which to read each message.
-
-Please also note that a reader can have a "backlog", but the metric is only used for users to know how far behind the reader is. The metric is not considered for any backlog quota calculations.
- -![The Pulsar consumer and reader interfaces](/assets/pulsar-reader-consumer-interfaces.png) - -Here's a Java example that begins reading from the earliest available message on a topic: - -```java - -import org.apache.pulsar.client.api.Message; -import org.apache.pulsar.client.api.MessageId; -import org.apache.pulsar.client.api.Reader; - -// Create a reader on a topic and for a specific message (and onward) -Reader reader = pulsarClient.newReader() - .topic("reader-api-test") - .startMessageId(MessageId.earliest) - .create(); - -while (true) { - Message message = reader.readNext(); - - // Process the message -} - -``` - -To create a reader that reads from the latest available message: - -```java - -Reader reader = pulsarClient.newReader() - .topic(topic) - .startMessageId(MessageId.latest) - .create(); - -``` - -To create a reader that reads from some message between the earliest and the latest: - -```java - -byte[] msgIdBytes = // Some byte array -MessageId id = MessageId.fromByteArray(msgIdBytes); -Reader reader = pulsarClient.newReader() - .topic(topic) - .startMessageId(id) - .create(); - -``` - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/concepts-messaging.md b/site2/website/versioned_docs/version-2.9.0-deprecated/concepts-messaging.md deleted file mode 100644 index dbe540c5df8e24..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/concepts-messaging.md +++ /dev/null @@ -1,714 +0,0 @@ ---- -id: concepts-messaging -title: Messaging -sidebar_label: "Messaging" -original_id: concepts-messaging ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -Pulsar is built on the [publish-subscribe](https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern) pattern (often abbreviated to pub-sub). In this pattern, [producers](#producers) publish messages to [topics](#topics); [consumers](#consumers) [subscribe](#subscription-modes) to those topics, process incoming messages, and send [acknowledgements](#acknowledgement) to the broker when processing is finished. - -When a subscription is created, Pulsar [retains](concepts-architecture-overview.md#persistent-storage) all messages, even if the consumer is disconnected. The retained messages are discarded only when a consumer acknowledges that all these messages are processed successfully. - -If the consumption of a message fails and you want this message to be consumed again, then you can enable the automatic redelivery of this message by sending a [negative acknowledgement](#negative-acknowledgement) to the broker or enabling the [acknowledgement timeout](#acknowledgement-timeout) for unacknowledged messages. - -## Messages - -Messages are the basic "unit" of Pulsar. The following table lists the components of messages. - -Component | Description -:---------|:------- -Value / data payload | The data carried by the message. All Pulsar messages contain raw bytes, although message data can also conform to data [schemas](schema-get-started.md). -Key | Messages are optionally tagged with keys, which is useful for things like [topic compaction](concepts-topic-compaction.md). -Properties | An optional key/value map of user-defined properties. -Producer name | The name of the producer who produces the message. If you do not specify a producer name, the default name is used. -Sequence ID | Each Pulsar message belongs to an ordered sequence on its topic. The sequence ID of the message is its order in that sequence. 
-Publish time | The timestamp of when the message is published. The timestamp is automatically applied by the producer.
-Event time | An optional timestamp attached to a message by applications. For example, applications attach a timestamp indicating when the message is processed. If nothing is set to event time, the value is `0`.
-TypedMessageBuilder | It is used to construct a message. You can set message properties such as the message key and message value with `TypedMessageBuilder`. <br />
    When you set `TypedMessageBuilder`, set the key as a string. If you set the key as other types, for example, an AVRO object, the key is sent as bytes, and it is difficult to get the AVRO object back on the consumer. - -The default size of a message is 5 MB. You can configure the max size of a message with the following configurations. - -- In the `broker.conf` file. - - ```bash - - # The max size of a message (in bytes). - maxMessageSize=5242880 - - ``` - -- In the `bookkeeper.conf` file. - - ```bash - - # The max size of the netty frame (in bytes). Any messages received larger than this value are rejected. The default value is 5 MB. - nettyMaxFrameSizeBytes=5253120 - - ``` - -> For more information on Pulsar messages, see Pulsar [binary protocol](developing-binary-protocol.md). - -## Producers - -A producer is a process that attaches to a topic and publishes messages to a Pulsar [broker](reference-terminology.md#broker). The Pulsar broker processes the messages. - -### Send modes - -Producers send messages to brokers synchronously (sync) or asynchronously (async). - -| Mode | Description | -|:-----------|-----------| -| Sync send | The producer waits for an acknowledgement from the broker after sending every message. If the acknowledgment is not received, the producer treats the sending operation as a failure. | -| Async send | The producer puts a message in a blocking queue and returns immediately. The client library sends the message to the broker in the background. If the queue is full (you can [configure](reference-configuration.md#broker) the maximum size), the producer is blocked or fails immediately when calling the API, depending on arguments passed to the producer. | - -### Access mode - -You can have different types of access modes on topics for producers. - -|Access mode | Description -|---|--- -`Shared`|Multiple producers can publish on a topic.

    This is the **default** setting. -`Exclusive`|Only one producer can publish on a topic.

    If there is already a producer connected, other producers trying to publish on this topic get errors immediately.

    The "old" producer is evicted and a "new" producer is selected to be the next exclusive producer if the "old" producer experiences a network partition with the broker. -`WaitForExclusive`|If there is already a producer connected, the producer creation is pending (rather than timing out) until the producer gets the `Exclusive` access.

    The producer that succeeds in becoming the exclusive one is treated as the leader. Consequently, if you want to implement the leader election scheme for your application, you can use this access mode. - -:::note - -Once an application creates a producer with `Exclusive` or `WaitForExclusive` access mode successfully, the instance of this application is guaranteed to be the **only writer** to the topic. Any other producers trying to produce messages on this topic will either get errors immediately or have to wait until they get the `Exclusive` access. -For more information, see [PIP 68: Exclusive Producer](https://github.com/apache/pulsar/wiki/PIP-68:-Exclusive-Producer). - -::: - -You can set producer access mode through Java Client API. For more information, see `ProducerAccessMode` in [ProducerBuilder.java](https://github.com/apache/pulsar/blob/fc5768ca3bbf92815d142fe30e6bfad70a1b4fc6/pulsar-client-api/src/main/java/org/apache/pulsar/client/api/ProducerBuilder.java) file. - - -### Compression - -You can compress messages published by producers during transportation. Pulsar currently supports the following types of compression: - -* [LZ4](https://github.com/lz4/lz4) -* [ZLIB](https://zlib.net/) -* [ZSTD](https://facebook.github.io/zstd/) -* [SNAPPY](https://google.github.io/snappy/) - -### Batching - -When batching is enabled, the producer accumulates and sends a batch of messages in a single request. The batch size is defined by the maximum number of messages and the maximum publish latency. Therefore, the backlog size represents the total number of batches instead of the total number of messages. - -In Pulsar, batches are tracked and stored as single units rather than as individual messages. Consumer unbundles a batch into individual messages. However, scheduled messages (configured through the `deliverAt` or the `deliverAfter` parameter) are always sent as individual messages even batching is enabled. - -In general, a batch is acknowledged when all of its messages are acknowledged by a consumer. It means that when **not all** batch messages are acknowledged, then unexpected failures, negative acknowledgements, or acknowledgement timeouts can result in a redelivery of all messages in this batch. - -To avoid redelivering acknowledged messages in a batch to the consumer, Pulsar introduces batch index acknowledgement since Pulsar 2.6.0. When batch index acknowledgement is enabled, the consumer filters out the batch index that has been acknowledged and sends the batch index acknowledgement request to the broker. The broker maintains the batch index acknowledgement status and tracks the acknowledgement status of each batch index to avoid dispatching acknowledged messages to the consumer. The batch is deleted when all indices of the messages in it are acknowledged. - -By default, batch index acknowledgement is disabled (`acknowledgmentAtBatchIndexLevelEnabled=false`). You can enable batch index acknowledgement by setting the `acknowledgmentAtBatchIndexLevelEnabled` parameter to `true` at the broker side. Enabling batch index acknowledgement results in more memory overheads. - -### Chunking -Before you enable chunking, read the following instructions. -- Batching and chunking cannot be enabled simultaneously. To enable chunking, you must disable batching in advance. -- Chunking is only supported for persisted topics. -- Chunking is only supported for the exclusive and failover subscription modes. 
-
-When chunking is enabled (`chunkingEnabled=true`), if the message size is greater than the allowed maximum publish-payload size, the producer splits the original message into chunked messages and publishes them with chunked metadata to the broker separately and in order. At the broker side, the chunked messages are stored in the managed-ledger in the same way as ordinary messages. The only difference is that the consumer needs to buffer the chunked messages and combine them into the real message when all chunked messages have been collected. The chunked messages in the managed-ledger can be interwoven with ordinary messages. If the producer fails to publish all the chunks of a message, the consumer can expire the incomplete chunks if it fails to receive all the chunks within the expiration time. By default, the expiration time is set to one minute.
-
-The consumer consumes the chunked messages and buffers them until the consumer receives all the chunks of a message. Then the consumer stitches the chunked messages together and places them into the receiver-queue. Clients consume messages from the receiver-queue. Once the consumer consumes the entire large message and acknowledges it, the consumer internally sends acknowledgement of all the chunk messages associated with that large message. You can set the `maxPendingChunkedMessage` parameter on the consumer. When the threshold is reached, the consumer drops the unchunked messages by silently acknowledging them or asking the broker to redeliver them later by marking them unacknowledged.
-
-The broker does not require any changes to support chunking for non-shared subscriptions. The broker only uses `chunkedMessageRate` to record the chunked message rate on the topic.
-
-#### Handle chunked messages with one producer and one ordered consumer
-
-The following figure shows a topic with one producer which publishes large message payloads in chunked messages along with regular non-chunked messages. The producer publishes message M1 in three chunks M1-C1, M1-C2 and M1-C3. The broker stores all the three chunked messages in the managed-ledger and dispatches them to the ordered (exclusive/failover) consumer in the same order. The consumer buffers all the chunked messages in memory until it receives all the chunked messages, combines them into one message and then hands over the original message M1 to the client.
-
-![](/assets/chunking-01.png)
-
-#### Handle chunked messages with multiple producers and one ordered consumer
-
-When multiple publishers publish chunked messages into a single topic, the broker stores all the chunked messages coming from different publishers in the same managed-ledger. As shown below, Producer 1 publishes message M1 in three chunks M1-C1, M1-C2 and M1-C3. Producer 2 publishes message M2 in three chunks M2-C1, M2-C2 and M2-C3. All chunked messages of a specific message are still in order but might not be consecutive in the managed-ledger. This brings some memory pressure to the consumer because the consumer keeps a separate buffer for each large message to aggregate all the chunks of that large message and combine them into one message.
-
-![](/assets/chunking-02.png)
-
-## Consumers
-
-A consumer is a process that attaches to a topic via a subscription and then receives messages.
-
-A consumer sends a [flow permit request](developing-binary-protocol.md#flow-control) to a broker to get messages. There is a queue at the consumer side to receive messages pushed from the broker.
You can configure the queue size with the [`receiverQueueSize`](client-libraries-java.md#configure-consumer) parameter. The default size is `1000`. Each time `consumer.receive()` is called, a message is dequeued from the buffer.
-
-### Receive modes
-
-Messages are received from [brokers](reference-terminology.md#broker) either synchronously (sync) or asynchronously (async).
-
-| Mode          | Description |
-|:--------------|:------------|
-| Sync receive  | A sync receive is blocked until a message is available. |
-| Async receive | An async receive returns immediately with a future value—for example, a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture) in Java—that completes once a new message is available. |
-
-### Listeners
-
-Client libraries provide listener implementations for consumers. For example, the [Java client](client-libraries-java.md) provides a {@inject: javadoc:MessageListener:/client/org/apache/pulsar/client/api/MessageListener} interface. In this interface, the `received` method is called whenever a new message is received.
-
-### Acknowledgement
-
-The consumer sends an acknowledgement request to the broker after it consumes a message successfully. Then, this consumed message will be permanently stored, and be deleted only after all the subscriptions have acknowledged it. If you want to store the messages that have been acknowledged by a consumer, you need to configure the [message retention policy](concepts-messaging.md#message-retention-and-expiry).
-
-For batch messages, you can enable batch index acknowledgement to avoid dispatching acknowledged messages to the consumer. For details about batch index acknowledgement, see [batching](#batching).
-
-Messages can be acknowledged in one of the following two ways:
-
-- Being acknowledged individually. With individual acknowledgement, the consumer acknowledges each message and sends an acknowledgement request to the broker.
-- Being acknowledged cumulatively. With cumulative acknowledgement, the consumer **only** acknowledges the last message it received. All messages in the stream up to (and including) the provided message are not redelivered to that consumer.
-
-If you want to acknowledge messages individually, you can use the following API.
-
-```java
-
-consumer.acknowledge(msg);
-
-```
-
-If you want to acknowledge messages cumulatively, you can use the following API.
-
-```java
-
-consumer.acknowledgeCumulative(msg);
-
-```
-
-:::note
-
-Cumulative acknowledgement cannot be used in the [shared subscription mode](#subscription-modes), because the shared subscription mode involves multiple consumers who have access to the same subscription. In the shared subscription mode, messages are acknowledged individually.
-
-:::
-
-### Negative acknowledgement
-
-When a consumer fails to consume a message and intends to consume it again, this consumer should send a negative acknowledgement to the broker. Then, the broker redelivers this message to the consumer.
-
-Messages are negatively acknowledged individually or cumulatively, depending on the consumption subscription mode.
-
-In the exclusive and failover subscription modes, consumers only negatively acknowledge the last message they receive.
-
-In the shared and Key_Shared subscription modes, consumers can negatively acknowledge messages individually.
- -Be aware that negative acknowledgments on ordered subscription types, such as Exclusive, Failover and Key_Shared, might cause failed messages being sent to consumers out of the original order. - -If you want to acknowledge messages negatively, you can use the following API. - -```java - -//With calling this api, messages are negatively acknowledged -consumer.negativeAcknowledge(msg); - -``` - -:::note - -If batching is enabled, all messages in one batch are redelivered to the consumer. - -::: - -### Acknowledgement timeout - -If a message is not consumed successfully, and you want the broker to redeliver this message automatically, then you can enable automatic redelivery mechanism for unacknowledged messages. With automatic redelivery enabled, the client tracks the unacknowledged messages within the entire `acktimeout` time range, and sends a `redeliver unacknowledged messages` request to the broker automatically when the acknowledgement timeout is specified. - -:::note - -- If batching is enabled, all messages in one batch are redelivered to the consumer. -- The negative acknowledgement is preferable over the acknowledgement timeout, since negative acknowledgement controls the redelivery of individual messages more precisely and avoids invalid redeliveries when the message processing time exceeds the acknowledgement timeout. - -::: - -### Dead letter topic - -Dead letter topic enables you to consume new messages when some messages cannot be consumed successfully by a consumer. In this mechanism, messages that are failed to be consumed are stored in a separate topic, which is called dead letter topic. You can decide how to handle messages in the dead letter topic. - -The following example shows how to enable dead letter topic in a Java client using the default dead letter topic: - -```java - -Consumer consumer = pulsarClient.newConsumer(Schema.BYTES) - .topic(topic) - .subscriptionName("my-subscription") - .subscriptionType(SubscriptionType.Shared) - .deadLetterPolicy(DeadLetterPolicy.builder() - .maxRedeliverCount(maxRedeliveryCount) - .build()) - .subscribe(); - -``` - -The default dead letter topic uses this format: - -``` - ---DLQ - -``` - -If you want to specify the name of the dead letter topic, use this Java client example: - -```java - -Consumer consumer = pulsarClient.newConsumer(Schema.BYTES) - .topic(topic) - .subscriptionName("my-subscription") - .subscriptionType(SubscriptionType.Shared) - .deadLetterPolicy(DeadLetterPolicy.builder() - .maxRedeliverCount(maxRedeliveryCount) - .deadLetterTopic("your-topic-name") - .build()) - .subscribe(); - -``` - -Dead letter topic depends on message redelivery. Messages are redelivered either due to [acknowledgement timeout](#acknowledgement-timeout) or [negative acknowledgement](#negative-acknowledgement). If you are going to use negative acknowledgement on a message, make sure it is negatively acknowledged before the acknowledgement timeout. - -:::note - -Currently, dead letter topic is enabled in the Shared and Key_Shared subscription modes. - -::: - -### Retry letter topic - -For many online business systems, a message is re-consumed due to exception occurs in the business logic processing. To configure the delay time for re-consuming the failed messages, you can configure the producer to send messages to both the business topic and the retry letter topic, and enable automatic retry on the consumer. 
When automatic retry is enabled on the consumer, a message that fails to be consumed is stored in the retry letter topic, and the consumer automatically consumes the failed messages from the retry letter topic after a specified delay time.

By default, automatic retry is disabled. You can set `enableRetry` to `true` to enable automatic retry on the consumer.

This example shows how to consume messages from a retry letter topic.

```java

Consumer<byte[]> consumer = pulsarClient.newConsumer(Schema.BYTES)
        .topic(topic)
        .subscriptionName("my-subscription")
        .subscriptionType(SubscriptionType.Shared)
        .enableRetry(true)
        .receiverQueueSize(100)
        .deadLetterPolicy(DeadLetterPolicy.builder()
                .maxRedeliverCount(maxRedeliveryCount)
                .retryLetterTopic("persistent://my-property/my-ns/my-subscription-custom-Retry")
                .build())
        .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest)
        .subscribe();

```

If you want to put messages into a retry queue, you can use the following API.

```java

consumer.reconsumeLater(msg, 3, TimeUnit.SECONDS);

```

## Topics

As in other pub-sub systems, topics in Pulsar are named channels for transmitting messages from producers to consumers. Topic names are URLs that have a well-defined structure:

```http

{persistent|non-persistent}://tenant/namespace/topic

```

Topic name component | Description
:--------------------|:-----------
`persistent` / `non-persistent` | This identifies the type of topic. Pulsar supports two kinds of topics: [persistent](concepts-architecture-overview.md#persistent-storage) and [non-persistent](#non-persistent-topics). The default is persistent, so if you do not specify a type, the topic is persistent. With persistent topics, all messages are durably persisted on disks (if the broker is not standalone, messages are durably persisted on multiple disks), whereas data for non-persistent topics is not persisted to storage disks.
`tenant` | The topic tenant within the instance. Tenants are essential to multi-tenancy in Pulsar, and can be spread across clusters.
`namespace` | The administrative unit of the topic, which acts as a grouping mechanism for related topics. Most topic configuration is performed at the [namespace](#namespaces) level. Each tenant has one or multiple namespaces.
`topic` | The final part of the name. Topic names have no special meaning in a Pulsar instance.

> **No need to explicitly create new topics**
> You do not need to explicitly create topics in Pulsar. If a client attempts to write or receive messages to/from a topic that does not yet exist, Pulsar creates that topic under the namespace provided in the [topic name](#topics) automatically.
> If no tenant or namespace is specified when a client creates a topic, the topic is created in the default tenant and namespace. You can also create a topic in a specified tenant and namespace, such as `persistent://my-tenant/my-namespace/my-topic`. `persistent://my-tenant/my-namespace/my-topic` means the `my-topic` topic is created in the `my-namespace` namespace of the `my-tenant` tenant.

## Namespaces

A namespace is a logical nomenclature within a tenant. A tenant creates multiple namespaces via the [admin API](admin-api-namespaces.md#create). For instance, a tenant with different applications can create a separate namespace for each application. A namespace allows the application to create and manage a hierarchy of topics. For example, `my-tenant/app1` is a namespace for the application `app1` of the tenant `my-tenant`. You can create any number of [topics](#topics) under the namespace; a CLI sketch for creating the namespace itself follows.
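For illustration, such a namespace can be created with the `pulsar-admin` CLI; the tenant and namespace names below are placeholders.

```bash

# Create the app1 namespace under the my-tenant tenant (names are examples)
$ bin/pulsar-admin namespaces create my-tenant/app1

```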
## Subscriptions

A subscription is a named configuration rule that determines how messages are delivered to consumers. Four subscription modes are available in Pulsar: [exclusive](#exclusive), [shared](#shared), [failover](#failover), and [key_shared](#key_shared). These modes are illustrated in the figure below.

![Subscription modes](/assets/pulsar-subscription-types.png)

> **Pub-Sub or Queuing**
> In Pulsar, you can use different subscriptions flexibly.
> * If you want to achieve traditional "fan-out pub-sub messaging" among consumers, specify a unique subscription name for each consumer. This is the exclusive subscription mode.
> * If you want to achieve "message queuing" among consumers, share the same subscription name among multiple consumers (shared, failover, or key_shared mode).
> * If you want to achieve both effects simultaneously, combine the exclusive subscription mode with other subscription modes for consumers.

### Consumerless subscriptions and their corresponding modes

When a subscription has no consumers, its subscription mode is undefined. A subscription's mode is defined when a consumer connects to the subscription, and the mode can be changed by restarting all consumers with a different configuration.

### Exclusive

In *exclusive* mode, only a single consumer is allowed to attach to the subscription. If multiple consumers subscribe to a topic using the same subscription, an error occurs.

In the diagram below, only **Consumer A-0** is allowed to consume messages.

> Exclusive mode is the default subscription mode.

![Exclusive subscriptions](/assets/pulsar-exclusive-subscriptions.png)

### Failover

In *failover* mode, multiple consumers can attach to the same subscription. A master consumer is picked for a non-partitioned topic or for each partition of a partitioned topic, and receives messages. When the master consumer disconnects, all (non-acknowledged and subsequent) messages are delivered to the next consumer in line.

For partitioned topics, the broker sorts consumers by priority level and the lexicographical order of consumer names, and then tries to evenly assign partitions to the consumers with the highest priority level.

For a non-partitioned topic, the broker picks consumers in the order in which they subscribe to the topic.

In the diagram below, **Consumer-B-0** is the master consumer, while **Consumer-B-1** would be the next consumer in line to receive messages if **Consumer-B-0** is disconnected.

![Failover subscriptions](/assets/pulsar-failover-subscriptions.png)

### Shared

In *shared* or *round robin* mode, multiple consumers can attach to the same subscription. Messages are delivered in a round-robin distribution across consumers, and any given message is delivered to only one consumer. When a consumer disconnects, all the messages that were sent to it and not acknowledged are rescheduled for delivery to the remaining consumers. A minimal example follows the diagram below.

In the diagram below, **Consumer-C-1** and **Consumer-C-2** are able to subscribe to the topic, but **Consumer-C-3** and others could as well.

> **Limitations of shared mode**
> When using shared mode, be aware that:
> * Message ordering is not guaranteed.
> * You cannot use cumulative acknowledgment with shared mode.

![Shared subscriptions](/assets/pulsar-shared-subscriptions.png)
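For illustration, here is a minimal sketch of a shared-subscription consumer; the topic and subscription names are placeholders, and every consumer created this way shares the work.

```java

// Each instance of this consumer receives a disjoint subset of the messages
Consumer<byte[]> sharedConsumer = pulsarClient.newConsumer()
        .topic("my-topic")                      // placeholder topic
        .subscriptionName("my-shared-sub")      // same name on every instance
        .subscriptionType(SubscriptionType.Shared)
        .subscribe();

```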
### Key_Shared

In *Key_Shared* mode, multiple consumers can attach to the same subscription. Messages are distributed across consumers, and messages with the same key or the same ordering key are delivered to only one consumer. No matter how many times a message is redelivered, it is delivered to the same consumer. When a consumer connects or disconnects, the consumer serving some message keys changes.

![Key_Shared subscriptions](/assets/pulsar-key-shared-subscriptions.png)

Note that when the consumers are using the Key_Shared subscription mode, you need to **disable batching** or **use key-based batching** for the producers. There are two reasons why key-based batching is necessary for the Key_Shared subscription mode:
1. The broker dispatches messages according to the keys of the messages, but the default batching approach might fail to pack the messages with the same key into the same batch.
2. Since it is the consumers instead of the broker who dispatch the messages from the batches, the key of the first message in a batch is considered the key of all messages in that batch, which leads to context errors.

Key-based batching aims at resolving the above-mentioned issues. This batching method ensures that the producers pack the messages with the same key into the same batch. The messages without a key are packed into one batch, and this batch has no key. When the broker dispatches messages from this batch, it uses `NON_KEY` as the key. In addition, each consumer is associated with **only one** key and should receive **only one message batch** for the connected key. By default, you can limit batching by configuring the number of messages that producers are allowed to send.

Below are examples of enabling key-based batching under the Key_Shared subscription mode, with `client` being the Pulsar client that you created.

````mdx-code-block
<Tabs
  defaultValue="Java"
  values={[{"label":"Java","value":"Java"},{"label":"C++","value":"C++"},{"label":"Python","value":"Python"}]}>
<TabItem value="Java">

```java

Producer<byte[]> producer = client.newProducer()
        .topic("my-topic")
        .batcherBuilder(BatcherBuilder.KEY_BASED)
        .create();

```

</TabItem>
<TabItem value="C++">

```cpp

ProducerConfiguration producerConfig;
producerConfig.setBatchingType(ProducerConfiguration::BatchingType::KeyBasedBatching);
Producer producer;
client.createProducer("my-topic", producerConfig, producer);

```

</TabItem>
<TabItem value="Python">

```python

producer = client.create_producer(topic='my-topic', batching_type=pulsar.BatchingType.KeyBased)

```

</TabItem>
</Tabs>
````

> **Limitations of Key_Shared mode**
> When you use Key_Shared mode, be aware that:
> * You need to specify a key or orderingKey for messages.
> * You cannot use cumulative acknowledgment with Key_Shared mode.
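For completeness, here is a minimal sketch of the matching consumer side; the topic and subscription names are placeholders, and the producer is assumed to use key-based batching as described above.

```java

// Messages with the same key are always dispatched to the same consumer instance
Consumer<byte[]> keySharedConsumer = pulsarClient.newConsumer()
        .topic("my-topic")                       // placeholder topic
        .subscriptionName("my-key-shared-sub")   // placeholder subscription
        .subscriptionType(SubscriptionType.Key_Shared)
        .subscribe();

```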
## Multi-topic subscriptions

When a consumer subscribes to a Pulsar topic, by default it subscribes to one specific topic, such as `persistent://public/default/my-topic`. As of Pulsar version 1.23.0-incubating, however, Pulsar consumers can simultaneously subscribe to multiple topics. You can define a list of topics in two ways:

* On the basis of a [**reg**ular **ex**pression](https://en.wikipedia.org/wiki/Regular_expression) (regex), for example `persistent://public/default/finance-.*`
* By explicitly defining a list of topics

> When subscribing to multiple topics by regex, all topics must be in the same [namespace](#namespaces).

When subscribing to multiple topics, the Pulsar client automatically makes a call to the Pulsar API to discover the topics that match the regex pattern/list, and then subscribes to all of them. If any of the topics do not exist, the consumer auto-subscribes to them once the topics are created.

> **No ordering guarantees across multiple topics**
> When a producer sends messages to a single topic, all messages are guaranteed to be read from that topic in the same order. However, these guarantees do not hold across multiple topics. So when a producer sends messages to multiple topics, the order in which messages are read from those topics is not guaranteed to be the same.

The following are multi-topic subscription examples for Java.

```java

import java.util.regex.Pattern;

import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.PulsarClient;

PulsarClient pulsarClient = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650") // placeholder service URL
        .build();

// Subscribe to all topics in a namespace
Pattern allTopicsInNamespace = Pattern.compile("persistent://public/default/.*");
Consumer<byte[]> allTopicsConsumer = pulsarClient.newConsumer()
        .topicsPattern(allTopicsInNamespace)
        .subscriptionName("subscription-1")
        .subscribe();

// Subscribe to a subset of topics in a namespace, based on regex
Pattern someTopicsInNamespace = Pattern.compile("persistent://public/default/foo.*");
Consumer<byte[]> someTopicsConsumer = pulsarClient.newConsumer()
        .topicsPattern(someTopicsInNamespace)
        .subscriptionName("subscription-1")
        .subscribe();

```

For code examples, see [Java](client-libraries-java.md#multi-topic-subscriptions).

## Partitioned topics

Normal topics are served only by a single broker, which limits the maximum throughput of the topic. *Partitioned topics* are a special type of topic that are handled by multiple brokers, thus allowing for higher throughput.

A partitioned topic is actually implemented as N internal topics, where N is the number of partitions. When publishing messages to a partitioned topic, each message is routed to one of several brokers. The distribution of partitions across brokers is handled automatically by Pulsar.

The diagram below illustrates this:

![](/assets/partitioning.png)

The **Topic1** topic has five partitions (**P0** through **P4**) split across three brokers. Because there are more partitions than brokers, two brokers handle two partitions apiece, while the third handles only one (again, Pulsar handles this distribution of partitions automatically).

Messages for this topic are broadcast to two consumers. The [routing mode](#routing-modes) determines which partition each message is published to, while the [subscription mode](#subscription-modes) determines which messages go to which consumers.

Decisions about routing and subscription modes can be made separately in most cases. In general, throughput concerns should guide partitioning/routing decisions, while subscription decisions should be guided by application semantics.

There is no difference between partitioned topics and normal topics in terms of how subscription modes work, as partitioning only determines what happens between when a message is published by a producer and when it is processed and acknowledged by a consumer.

Partitioned topics need to be explicitly created via the [admin API](admin-api-overview.md). The number of partitions can be specified when creating the topic, as shown below.
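For instance, a partitioned topic can be created with the `pulsar-admin` CLI; the topic name and partition count below are placeholders.

```bash

# Create a topic with 4 partitions (name and count are examples)
$ bin/pulsar-admin topics create-partitioned-topic \
  persistent://my-tenant/my-namespace/my-topic \
  --partitions 4

```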
### Routing modes

When publishing to partitioned topics, you must specify a *routing mode*. The routing mode determines which partition---that is, which internal topic---each message should be published to.

There are three {@inject: javadoc:MessageRoutingMode:/client/org/apache/pulsar/client/api/MessageRoutingMode} options available:

Mode | Description
:--------|:------------
`RoundRobinPartition` | If no key is provided, the producer publishes messages across all partitions in round-robin fashion to achieve maximum throughput. Note that round-robin is not done per individual message; rather, it is aligned to the batching delay boundary, to ensure batching is effective. If a key is specified on the message, the partitioned producer hashes the key and assigns the message to a particular partition. This is the default mode.
`SinglePartition` | If no key is provided, the producer randomly picks one single partition and publishes all the messages into that partition. If a key is specified on the message, the partitioned producer hashes the key and assigns the message to a particular partition.
`CustomPartition` | Use a custom message router implementation that is called to determine the partition for a particular message. Users can create a custom routing mode by using the [Java client](client-libraries-java.md) and implementing the {@inject: javadoc:MessageRouter:/client/org/apache/pulsar/client/api/MessageRouter} interface.

### Ordering guarantee

The ordering of messages is related to the MessageRoutingMode and the message key. Usually, users want a per-key-partition ordering guarantee.

If there is a key attached to a message, the messages are routed to corresponding partitions based on the hashing scheme specified by {@inject: javadoc:HashingScheme:/client/org/apache/pulsar/client/api/HashingScheme} in {@inject: javadoc:ProducerBuilder:/client/org/apache/pulsar/client/api/ProducerBuilder}, when using either `SinglePartition` or `RoundRobinPartition` mode.

Ordering guarantee | Description | Routing Mode and Key
:------------------|:------------|:------------
Per-key-partition | All the messages with the same key will be in order and be placed in the same partition. | Use either `SinglePartition` or `RoundRobinPartition` mode, and provide a key with each message.
Per-producer | All the messages from the same producer will be in order. | Use `SinglePartition` mode, and provide no key with the messages.

### Hashing scheme

{@inject: javadoc:HashingScheme:/client/org/apache/pulsar/client/api/HashingScheme} is an enum that represents the set of standard hashing functions available when choosing the partition to use for a particular message.

Two standard hashing functions are available: `JavaStringHash` and `Murmur3_32Hash`. The default hashing function for a producer is `JavaStringHash`. Note that `JavaStringHash` is not useful when producers can come from multiple different language clients; in that case, it is recommended to use `Murmur3_32Hash`, as sketched below.
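For illustration, the hashing scheme is set on the producer builder; the topic name below is a placeholder.

```java

// Producer that routes keyed messages with Murmur3, which behaves consistently across client languages
Producer<byte[]> producer = pulsarClient.newProducer()
        .topic("my-topic")                                   // placeholder topic
        .hashingScheme(HashingScheme.Murmur3_32Hash)
        .messageRoutingMode(MessageRoutingMode.RoundRobinPartition)
        .create();

```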
## Non-persistent topics

By default, Pulsar persistently stores *all* unacknowledged messages on multiple [BookKeeper](concepts-architecture-overview.md#persistent-storage) bookies (storage nodes). Data for messages on persistent topics can thus survive broker restarts and subscriber failover.

Pulsar also, however, supports **non-persistent topics**, which are topics on which messages are *never* persisted to disk and live only in memory. When using non-persistent delivery, killing a Pulsar broker or disconnecting a subscriber from a topic means that all in-transit messages are lost on that (non-persistent) topic, so clients may see message loss.

Non-persistent topics have names of this form (note the `non-persistent` in the name):

```http

non-persistent://tenant/namespace/topic

```

> For more info on using non-persistent topics, see the [Non-persistent messaging cookbook](cookbooks-non-persistent.md).

In non-persistent topics, brokers immediately deliver messages to all connected subscribers *without persisting them* in [BookKeeper](concepts-architecture-overview.md#persistent-storage). If a subscriber is disconnected, the broker will not be able to deliver those in-transit messages, and subscribers will never be able to receive those messages again. Eliminating the persistent storage step makes messaging on non-persistent topics slightly faster than on persistent topics in some cases, but with the caveat that some of the core benefits of Pulsar are lost.

> With non-persistent topics, message data lives only in memory. If a message broker fails or message data can otherwise not be retrieved from memory, your message data may be lost. Use non-persistent topics only if you're *certain* that your use case requires it and can sustain it.

By default, non-persistent topics are enabled on Pulsar brokers. You can disable them in the broker's [configuration](reference-configuration.md#broker-enableNonPersistentTopics). You can manage non-persistent topics using the `pulsar-admin topics` command. For more information, see [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/).

### Performance

Non-persistent messaging is usually faster than persistent messaging because brokers don't persist messages and immediately send acks back to the producer as soon as the message is delivered to connected consumers. Producers thus see comparatively low publish latency with non-persistent topics.

### Client API

Producers and consumers can connect to non-persistent topics in the same way as persistent topics, with the crucial difference that the topic name must start with `non-persistent`. All three subscription modes---[exclusive](#exclusive), [shared](#shared), and [failover](#failover)---are supported for non-persistent topics.

Here's an example [Java consumer](client-libraries-java.md#consumers) for a non-persistent topic:

```java

PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650")
        .build();
String npTopic = "non-persistent://public/default/my-topic";
String subscriptionName = "my-subscription-name";

Consumer<byte[]> consumer = client.newConsumer()
        .topic(npTopic)
        .subscriptionName(subscriptionName)
        .subscribe();

```

Here's an example [Java producer](client-libraries-java.md#producer) for the same non-persistent topic:

```java

Producer<byte[]> producer = client.newProducer()
        .topic(npTopic)
        .create();

```

## System topic

A system topic is a predefined topic for internal use within Pulsar. It can be either a persistent or a non-persistent topic.

System topics serve to implement certain features (such as transactions, heartbeat detection, topic-level policies, and resource group services) and eliminate dependencies on third-party components. System topics keep the implementation of these features simplified, independent, and flexible.
Take heartbeat detection as an example: you can leverage the heartbeat system topic to internally enable a producer/reader to produce/consume messages under the heartbeat namespace, which detects whether the current service is still alive.

There are diverse system topics depending on the namespace. The following table outlines the available system topics for each specific namespace.

| Namespace | TopicName | Domain | Count | Usage |
|-----------|-----------|--------|-------|-------|
| pulsar/system | `transaction_coordinator_assign_${id}` | Persistent | Default 16 | Transaction coordinator |
| pulsar/system | `_transaction_log${tc_id}` | Persistent | Default 16 | Transaction log |
| pulsar/system | `resource-usage` | Non-persistent | Default 4 | Resource group service |
| host/port | `heartbeat` | Persistent | 1 | Heartbeat detection |
| User-defined-ns | [`__change_events`](concepts-multi-tenancy.md#namespace-change-events-and-topic-level-policies) | Persistent | Default 4 | Topic events |
| User-defined-ns | `__transaction_buffer_snapshot` | Persistent | One per namespace | Transaction buffer snapshots |
| User-defined-ns | `${topicName}__transaction_pending_ack` | Persistent | One per every topic subscription acknowledged with transactions | Acknowledgements with transactions |

:::note

* You cannot create any system topics.
* By default, system topics are disabled. To enable system topics, you need to change the following configurations in the `conf/broker.conf` or `conf/standalone.conf` file.

  ```conf
  systemTopicEnabled=true
  topicLevelPoliciesEnabled=true
  ```

:::

## Message retention and expiry

By default, Pulsar message brokers:

* immediately delete *all* messages that have been acknowledged by a consumer, and
* [persistently store](concepts-architecture-overview.md#persistent-storage) all unacknowledged messages in a message backlog.

Pulsar has two features, however, that enable you to override this default behavior:

* Message **retention** enables you to store messages that have been acknowledged by a consumer
* Message **expiry** enables you to set a time to live (TTL) for messages that have not yet been acknowledged

> All message retention and expiry is managed at the [namespace](#namespaces) level. For a how-to, see the [Message retention and expiry](cookbooks-retention-expiry.md) cookbook. A short CLI sketch follows at the end of this section.

The diagram below illustrates both concepts:

![Message retention and expiry](/assets/retention-expiry.png)

With message retention, shown at the top, a retention policy applied to all topics in a namespace dictates that some messages are durably stored in Pulsar even though they've already been acknowledged. Acknowledged messages that are not covered by the retention policy are deleted. Without a retention policy, *all* of the acknowledged messages would be deleted.

With message expiry, shown at the bottom, some messages are deleted, even though they haven't been acknowledged, because they've expired according to the TTL applied to the namespace (for example, because a TTL of 5 minutes has been applied and the messages haven't been acknowledged but are 10 minutes old).
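For illustration, retention and TTL can be set at the namespace level with the `pulsar-admin` CLI; the namespace and the values below are placeholders.

```bash

# Keep acknowledged messages for up to 3 days or 10 GB, whichever limit is hit first (example values)
$ bin/pulsar-admin namespaces set-retention my-tenant/my-namespace \
  --time 3d --size 10G

# Expire unacknowledged messages after 300 seconds (example value)
$ bin/pulsar-admin namespaces set-message-ttl my-tenant/my-namespace \
  --messageTTL 300

```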
## Message deduplication

Message duplication occurs when a message is [persisted](concepts-architecture-overview.md#persistent-storage) by Pulsar more than once. Message deduplication is an optional Pulsar feature that prevents unnecessary message duplication by processing each message only once, even if the message is received more than once.

The following diagram illustrates what happens when message deduplication is disabled vs. enabled:

![Pulsar message deduplication](/assets/message-deduplication.png)

Message deduplication is disabled in the scenario shown at the top. Here, a producer publishes message 1 on a topic; the message reaches a Pulsar broker and is [persisted](concepts-architecture-overview.md#persistent-storage) to BookKeeper. The producer then sends message 1 again (in this case due to some retry logic), and the message is received by the broker and stored in BookKeeper again, which means that duplication has occurred.

In the second scenario at the bottom, the producer publishes message 1, which is received by the broker and persisted, as in the first scenario. When the producer attempts to publish the message again, however, the broker knows that it has already seen message 1 and thus does not persist the message.

> Message deduplication is handled at the namespace level or the topic level. For more instructions, see the [message deduplication cookbook](cookbooks-deduplication.md).

### Producer idempotency

The other available approach to message deduplication is to ensure that each message is *only produced once*. This approach is typically called **producer idempotency**. The drawback of this approach is that it defers the work of message deduplication to the application. In Pulsar, deduplication is handled at the [broker](reference-terminology.md#broker) level, so you do not need to modify your Pulsar client code. Instead, you only need to make administrative changes. For details, see [Managing message deduplication](cookbooks-deduplication.md).

### Deduplication and effectively-once semantics

Message deduplication makes Pulsar an ideal messaging system to be used in conjunction with stream processing engines (SPEs) and other systems seeking to provide effectively-once processing semantics. Messaging systems that do not offer automatic message deduplication require the SPE or other system to guarantee deduplication, which means that strict message ordering comes at the cost of burdening the application with the responsibility of deduplication. With Pulsar, strict ordering guarantees come at no application-level cost.

> You can find more in-depth information in [this post](https://www.splunk.com/en_us/blog/it/exactly-once-is-not-exactly-the-same.html).

## Delayed message delivery

Delayed message delivery enables you to consume a message later rather than immediately. In this mechanism, a message is stored in BookKeeper after it is published to a broker, the `DelayedDeliveryTracker` maintains a time index (time -> messageId) in memory, and the message is delivered to a consumer once the specified delay has passed.

Delayed message delivery only works in the Shared subscription mode. In the Exclusive and Failover subscription modes, the delayed message is dispatched immediately.

The diagram below illustrates the concept of delayed message delivery:

![Delayed Message Delivery](/assets/message_delay.png)

A broker saves a message without any check. When a consumer consumes a message, if the message is set to be delayed, the message is added to the `DelayedDeliveryTracker`. A subscription checks the `DelayedDeliveryTracker` and retrieves the messages whose delay has expired.

### Broker
Delayed message delivery is enabled by default. You can change it in the broker configuration file as below:

```

# Whether to enable the delayed delivery for messages.
# If disabled, messages are immediately delivered and there is no tracking overhead.
delayedDeliveryEnabled=true

# Control the ticking time for the retry of delayed message delivery,
# affecting the accuracy of the delivery time compared to the scheduled time.
# Default is 1 second.
delayedDeliveryTickTimeMillis=1000

```

### Producer
The following is an example of delayed message delivery for a producer in Java:

```java

// message to be delivered at the configured delay interval
producer.newMessage().deliverAfter(3L, TimeUnit.MINUTES).value("Hello Pulsar!").send();

```

diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/concepts-multi-tenancy.md b/site2/website/versioned_docs/version-2.9.0-deprecated/concepts-multi-tenancy.md
deleted file mode 100644
index 93a59557b2efca..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/concepts-multi-tenancy.md
+++ /dev/null
@@ -1,67 +0,0 @@
---
id: concepts-multi-tenancy
title: Multi Tenancy
sidebar_label: "Multi Tenancy"
original_id: concepts-multi-tenancy
---

Pulsar was created from the ground up as a multi-tenant system. To support multi-tenancy, Pulsar has a concept of tenants. Tenants can be spread across clusters and can each have their own [authentication and authorization](security-overview.md) scheme applied to them. They are also the administrative unit at which storage quotas, [message TTL](cookbooks-retention-expiry.md#time-to-live-ttl), and isolation policies can be managed.

The multi-tenant nature of Pulsar is reflected most visibly in topic URLs, which have this structure:

```http

persistent://tenant/namespace/topic

```

As you can see, the tenant is the most basic unit of categorization for topics (more fundamental than the namespace and topic name).

## Tenants

To each tenant in a Pulsar instance you can assign:

* An [authorization](security-authorization.md) scheme
* The set of [clusters](reference-terminology.md#cluster) to which the tenant's configuration applies

## Namespaces

Tenants and namespaces are two key concepts of Pulsar to support multi-tenancy.

* Pulsar is provisioned for specified tenants with appropriate capacity allocated to the tenant.
* A namespace is the administrative unit nomenclature within a tenant. The configuration policies set on a namespace apply to all the topics created in that namespace. A tenant may create multiple namespaces via self-administration using the REST API and the [`pulsar-admin`](reference-pulsar-admin.md) CLI tool. For instance, a tenant with different applications can create a separate namespace for each application.

Names for topics in the same namespace will look like this:

```http

persistent://tenant/app1/topic-1

persistent://tenant/app1/topic-2

persistent://tenant/app1/topic-3

```
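For illustration, a tenant with an authorization scheme and an allowed-cluster set like those described above is typically created with the `pulsar-admin` CLI; the role and cluster names below are placeholders.

```bash

# Create a tenant whose configuration applies to one cluster (names are examples)
$ bin/pulsar-admin tenants create tenant \
  --admin-roles my-admin-role \
  --allowed-clusters us-west

```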
### Namespace change events and topic-level policies

Pulsar is a multi-tenant event streaming system. Administrators can manage the tenants and namespaces by setting policies at different levels. However, the policies, such as the retention policy and the storage quota policy, are only available at the namespace level. In many use cases, users need to set a policy at the topic level. The namespace change events approach is proposed for supporting topic-level policies in an efficient way. In this approach, Pulsar is used as an event log to store namespace change events (such as topic policy changes). This approach has a few benefits:
- It avoids using ZooKeeper and introducing more load on ZooKeeper.
- It uses Pulsar as an event log for propagating the policy cache, which scales efficiently.
- It allows Pulsar SQL to query the namespace changes and audit the system.

Each namespace has a [system topic](concepts-messaging.md#system-topic) named `__change_events`. This system topic stores change events for a given namespace. The following figure illustrates how to leverage it to update topic-level policies.

![Leverage the system topic to update topic-level policies](/assets/system-topic-for-topic-level-policies.svg)

1. Pulsar Admin clients communicate with the Admin Restful API to update topic-level policies.
2. Any broker that receives the Admin HTTP request publishes a topic policy change event to the corresponding system topic (`__change_events`) of the namespace.
3. Each broker that owns one or more namespace bundles subscribes to the system topic (`__change_events`) to receive the change events of the namespace.
4. Each broker applies the change events to its policy cache.
5. Once the policy cache is updated, the broker sends the response back to the Pulsar Admin clients.

:::note

By default, the system topic is disabled. To enable topic-level policies (`topicLevelPoliciesEnabled`=`true`), you need to enable the system topic by setting `systemTopicEnabled` to `true` in the `conf/broker.conf` or `conf/standalone.conf` file.

:::
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/concepts-multiple-advertised-listeners.md b/site2/website/versioned_docs/version-2.9.0-deprecated/concepts-multiple-advertised-listeners.md
deleted file mode 100644
index f2e1ae0aadc7ca..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/concepts-multiple-advertised-listeners.md
+++ /dev/null
@@ -1,44 +0,0 @@
---
id: concepts-multiple-advertised-listeners
title: Multiple advertised listeners
sidebar_label: "Multiple advertised listeners"
original_id: concepts-multiple-advertised-listeners
---

When a Pulsar cluster is deployed in a production environment, you may need to expose multiple advertised addresses for the broker. For example, when you deploy a Pulsar cluster in Kubernetes and want other clients, which are not in the same Kubernetes cluster, to connect to the Pulsar cluster, you need to assign a broker URL to external clients. But clients in the same Kubernetes cluster can still connect to the Pulsar cluster through the internal network of Kubernetes.

## Advertised listeners

To ensure clients in both internal and external networks can connect to a Pulsar cluster, Pulsar introduces the `advertisedListeners` and `internalListenerName` configuration options in the [broker configuration file](reference-configuration.md#broker) to ensure that the broker supports exposing multiple advertised listeners and the separation of internal and external network traffic.

- The `advertisedListeners` option is used to specify multiple advertised listeners. The broker uses the listener as the broker identifier in the load manager and the bundle owner data. The `advertisedListeners` option is formatted as `<listener_name>:pulsar://<host>:<port>,<listener_name>:pulsar+ssl://<host>:<port>`. You can set up the `advertisedListeners` like `advertisedListeners=internal:pulsar://192.168.1.11:6660,internal:pulsar+ssl://192.168.1.11:6651`.

- The `internalListenerName` option is used to specify the internal service URL that the broker uses.
You can specify the `internalListenerName` by choosing one of the `advertisedListeners`. The broker uses the listener name of the first advertised listener as the `internalListenerName` if the `internalListenerName` is absent.

After setting up the `advertisedListeners`, clients can choose one of the listeners as the service URL to create a connection to the broker, as long as the network is accessible. However, if the client creates producers or consumers on a topic, the client must send a lookup request to the broker to get the owner broker, and then connect to the owner broker to publish or consume messages. Therefore, you must allow the client to get the corresponding service URL with the same advertised listener name as the one used by the client. This keeps the client side simple and secure.

## Use multiple advertised listeners

This example shows how a Pulsar client uses multiple advertised listeners.

1. Configure multiple advertised listeners in the broker configuration file.

```shell

advertisedListeners={listenerName}:pulsar://xxxx:6650,
{listenerName}:pulsar+ssl://xxxx:6651

```

2. Specify the listener name for the client.

```java

PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar://xxxx:6650")
    .listenerName("external")
    .build();

```

diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/concepts-overview.md b/site2/website/versioned_docs/version-2.9.0-deprecated/concepts-overview.md
deleted file mode 100644
index c643aa0ce7bbce..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/concepts-overview.md
+++ /dev/null
@@ -1,31 +0,0 @@
---
id: concepts-overview
title: Pulsar Overview
sidebar_label: "Overview"
original_id: concepts-overview
---

Pulsar is a multi-tenant, high-performance solution for server-to-server messaging. Originally developed by Yahoo, Pulsar is under the stewardship of the [Apache Software Foundation](https://www.apache.org/).

Key features of Pulsar are listed below:

* Native support for multiple clusters in a Pulsar instance, with seamless [geo-replication](administration-geo.md) of messages across clusters.
* Very low publish and end-to-end latency.
* Seamless scalability to over a million topics.
* A simple [client API](concepts-clients.md) with bindings for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python.md) and [C++](client-libraries-cpp.md).
* Multiple [subscription modes](concepts-messaging.md#subscription-modes) ([exclusive](concepts-messaging.md#exclusive), [shared](concepts-messaging.md#shared), and [failover](concepts-messaging.md#failover)) for topics.
* Guaranteed message delivery with [persistent message storage](concepts-architecture-overview.md#persistent-storage) provided by [Apache BookKeeper](http://bookkeeper.apache.org/).
* A serverless lightweight computing framework, [Pulsar Functions](functions-overview.md), which offers the capability for stream-native data processing.
* A serverless connector framework, [Pulsar IO](io-overview.md), which is built on Pulsar Functions and makes it easier to move data in and out of Apache Pulsar.
* [Tiered Storage](concepts-tiered-storage.md), which offloads data from hot/warm storage to cold/long-term storage (such as S3 and GCS) as the data ages out.
## Contents

- [Messaging Concepts](concepts-messaging.md)
- [Architecture Overview](concepts-architecture-overview.md)
- [Pulsar Clients](concepts-clients.md)
- [Geo Replication](concepts-replication.md)
- [Multi Tenancy](concepts-multi-tenancy.md)
- [Authentication and Authorization](concepts-authentication.md)
- [Topic Compaction](concepts-topic-compaction.md)
- [Tiered Storage](concepts-tiered-storage.md)
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/concepts-proxy-sni-routing.md b/site2/website/versioned_docs/version-2.9.0-deprecated/concepts-proxy-sni-routing.md
deleted file mode 100644
index 51419a66cefe6e..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/concepts-proxy-sni-routing.md
+++ /dev/null
@@ -1,180 +0,0 @@
---
id: concepts-proxy-sni-routing
title: Proxy support with SNI routing
sidebar_label: "Proxy support with SNI routing"
original_id: concepts-proxy-sni-routing
---

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````

A proxy server is an intermediary server that forwards requests from multiple clients to different servers across the Internet. The proxy server acts as a "traffic cop" in both forward and reverse proxy scenarios, and provides benefits for your system such as load balancing, performance, security, and auto-scaling.

The proxy in Pulsar acts as a reverse proxy and creates a gateway in front of brokers. Third-party proxies such as Apache Traffic Server (ATS), HAProxy, Nginx, and Envoy are not Pulsar-aware; however, these proxy servers support **SNI routing**, which can route traffic to a destination without terminating the SSL connection. Layer-4 routing provides greater transparency because the outbound connection is determined by examining the destination address in the client TCP packets.

Pulsar clients (Java, C++, Python) support the [SNI routing protocol](https://github.com/apache/pulsar/wiki/PIP-60:-Support-Proxy-server-with-SNI-routing), so you can connect to brokers through the proxy. This document walks you through how to set up the ATS proxy, enable SNI routing, and connect a Pulsar client to the broker through the ATS proxy.

## ATS-SNI Routing in Pulsar
To support [layer-4 SNI routing](https://docs.trafficserver.apache.org/en/latest/admin-guide/layer-4-routing.en.html) with ATS, the inbound connection must be a TLS connection. The Pulsar client supports the SNI routing protocol on TLS connections, so when Pulsar clients connect to a broker through the ATS proxy, Pulsar uses ATS as a reverse proxy.

Pulsar supports SNI routing for geo-replication, so brokers can connect to brokers in other clusters through the ATS proxy.

This section explains how to set up and use ATS as a reverse proxy, so Pulsar clients can connect to brokers through the ATS proxy using the SNI routing protocol on a TLS connection.

### Set up ATS Proxy for layer-4 SNI routing
To support layer-4 SNI routing, you need to configure the `records.conf` and `ssl_server_name.conf` files.

![Pulsar client SNI](/assets/pulsar-sni-client.png)

The [records.config](https://docs.trafficserver.apache.org/en/latest/admin-guide/files/records.config.en.html) file is located in the `/usr/local/etc/trafficserver/` directory by default. The file lists configurable variables used by the ATS.

To configure the `records.config` file, complete the following steps.
1. Update the TLS port (`http.server_ports`) on which the proxy listens, and update the proxy certs (`ssl.client.cert.path` and `ssl.client.cert.filename`) to secure TLS tunneling.
2. Configure the server ports (`http.connect_ports`) used for tunneling to the broker. If Pulsar brokers are listening on the `4443` and `6651` ports, add the brokers' service ports to the `http.connect_ports` configuration.

The following is an example.

```

# PROXY TLS PORT
CONFIG proxy.config.http.server_ports STRING 4443:ssl 4080
# PROXY CERTS FILE PATH
CONFIG proxy.config.ssl.client.cert.path STRING /proxy-cert.pem
# PROXY KEY FILE PATH
CONFIG proxy.config.ssl.client.cert.filename STRING /proxy-key.pem


# The range of origin server ports that can be used for tunneling via CONNECT.
# Traffic Server allows tunnels only to the specified ports.
# Supports both wildcards (*) and ranges (e.g. 0-1023).
CONFIG proxy.config.http.connect_ports STRING 4443 6651

```

The `ssl_server_name` file is used to configure TLS connection handling for inbound and outbound connections. The configuration is determined by the SNI values provided by the inbound connection. The file consists of a set of configuration items, each identified by an SNI value (`fqdn`). When an inbound TLS connection is made, the SNI value from the TLS negotiation is matched against the items specified in this file. If the values match, the values specified in that item override the default values.

The following example shows the mapping between the inbound SNI hostname sent by the client and the broker service URL to which the request is redirected. For example, if the client sends the SNI header `pulsar-broker1`, the proxy creates a TLS tunnel by redirecting the request to the `pulsar-broker1:6651` service URL.

```

server_config = {
  {
     fqdn = 'pulsar-broker-vip',
     # Forward to Pulsar broker which is listening on 6651
     tunnel_route = 'pulsar-broker-vip:6651'
  },
  {
     fqdn = 'pulsar-broker1',
     # Forward to Pulsar broker-1 which is listening on 6651
     tunnel_route = 'pulsar-broker1:6651'
  },
  {
     fqdn = 'pulsar-broker2',
     # Forward to Pulsar broker-2 which is listening on 6651
     tunnel_route = 'pulsar-broker2:6651'
  },
}

```

After you configure the `ssl_server_name.config` and `records.config` files, the ATS proxy server handles SNI routing and creates a TCP tunnel between the client and the broker.

### Configure Pulsar-client with SNI routing
ATS SNI routing works only with TLS. You need to enable TLS for the ATS proxy and brokers first, configure the SNI routing protocol, and then connect Pulsar clients to brokers through the ATS proxy. Pulsar clients support SNI routing by connecting to the proxy and sending the target broker URL in the SNI header. This is handled internally. You only need to configure the following proxy settings when you initially create a Pulsar client to use the SNI routing protocol.
````mdx-code-block
<Tabs
  defaultValue="Java"
  values={[{"label":"Java","value":"Java"},{"label":"C++","value":"C++"},{"label":"Python","value":"Python"}]}>
<TabItem value="Java">

```java

String brokerServiceUrl = "pulsar+ssl://pulsar-broker-vip:6651/";
String proxyUrl = "pulsar+ssl://ats-proxy:443";
ClientBuilder clientBuilder = PulsarClient.builder()
        .serviceUrl(brokerServiceUrl)
        .tlsTrustCertsFilePath(TLS_TRUST_CERT_FILE_PATH)
        .enableTls(true)
        .allowTlsInsecureConnection(false)
        .proxyServiceUrl(proxyUrl, ProxyProtocol.SNI)
        .operationTimeout(1000, TimeUnit.MILLISECONDS);

Map<String, String> authParams = new HashMap<>();
authParams.put("tlsCertFile", TLS_CLIENT_CERT_FILE_PATH);
authParams.put("tlsKeyFile", TLS_CLIENT_KEY_FILE_PATH);
clientBuilder.authentication(AuthenticationTls.class.getName(), authParams);

PulsarClient pulsarClient = clientBuilder.build();

```

</TabItem>
<TabItem value="C++">

```cpp

ClientConfiguration config = ClientConfiguration();
config.setUseTls(true);
config.setTlsTrustCertsFilePath("/path/to/cacert.pem");
config.setTlsAllowInsecureConnection(false);
config.setAuth(pulsar::AuthTls::create(
            "/path/to/client-cert.pem", "/path/to/client-key.pem"));

Client client("pulsar+ssl://ats-proxy:443", config);

```

</TabItem>
<TabItem value="Python">

```python

from pulsar import Client, AuthenticationTLS

auth = AuthenticationTLS("/path/to/my-role.cert.pem", "/path/to/my-role.key-pk8.pem")
client = Client("pulsar+ssl://ats-proxy:443",
                tls_trust_certs_file_path="/path/to/ca.cert.pem",
                tls_allow_insecure_connection=False,
                authentication=auth)

```

</TabItem>
</Tabs>
````

### Pulsar geo-replication with SNI routing
You can use the ATS proxy for geo-replication. Pulsar brokers can connect to brokers in geo-replication by using SNI routing. To enable SNI routing for broker connections across clusters, you need to configure the SNI proxy URL in the cluster metadata. If you have configured the SNI proxy URL in the cluster metadata, you can connect to brokers across clusters through the proxy over SNI routing.

![Pulsar client SNI](/assets/pulsar-sni-geo.png)

In this example, a Pulsar cluster is deployed into two separate regions, `us-west` and `us-east`. Both regions are configured with an ATS proxy, and brokers in each region run behind the ATS proxy. We configure the cluster metadata for both clusters, so brokers in one cluster can use SNI routing and connect to brokers in the other cluster through the ATS proxy.

(a) Configure the cluster metadata for `us-east` with the `us-east` broker service URL and the `us-east` ATS proxy URL with the SNI proxy-protocol.

```

./pulsar-admin clusters update \
--broker-url-secure pulsar+ssl://east-broker-vip:6651 \
--url http://east-broker-vip:8080 \
--proxy-protocol SNI \
--proxy-url pulsar+ssl://east-ats-proxy:443

```

(b) Configure the cluster metadata for `us-west` with the `us-west` broker service URL and the `us-west` ATS proxy URL with the SNI proxy-protocol.

```

./pulsar-admin clusters update \
--broker-url-secure pulsar+ssl://west-broker-vip:6651 \
--url http://west-broker-vip:8080 \
--proxy-protocol SNI \
--proxy-url pulsar+ssl://west-ats-proxy:443

```

diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/concepts-replication.md b/site2/website/versioned_docs/version-2.9.0-deprecated/concepts-replication.md
deleted file mode 100644
index 799f0eb4d92c6b..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/concepts-replication.md
+++ /dev/null
@@ -1,9 +0,0 @@
---
id: concepts-replication
title: Geo Replication
sidebar_label: "Geo Replication"
original_id: concepts-replication
---

Pulsar enables messages to be produced and consumed in different geo-locations.
For instance, your application may be publishing data in one region or market and you would like to process it for consumption in other regions or markets. [Geo-replication](administration-geo.md) in Pulsar enables you to do that.

diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/concepts-tiered-storage.md b/site2/website/versioned_docs/version-2.9.0-deprecated/concepts-tiered-storage.md
deleted file mode 100644
index f6988e53a8cd4e..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/concepts-tiered-storage.md
+++ /dev/null
@@ -1,18 +0,0 @@
---
id: concepts-tiered-storage
title: Tiered Storage
sidebar_label: "Tiered Storage"
original_id: concepts-tiered-storage
---

Pulsar's segment-oriented architecture allows topic backlogs to grow very large, effectively without limit. However, this can become expensive over time.

One way to alleviate this cost is to use tiered storage. With tiered storage, older messages in the backlog can be moved from BookKeeper to a cheaper storage mechanism, while still allowing clients to access the backlog as if nothing had changed.

![Tiered Storage](/assets/pulsar-tiered-storage.png)

> Data written to BookKeeper is replicated to 3 physical machines by default. However, once a segment is sealed in BookKeeper it becomes immutable and can be copied to long-term storage. Long-term storage can achieve cost savings by using mechanisms such as [Reed-Solomon error correction](https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction) to require fewer physical copies of data.

Pulsar currently supports S3, Google Cloud Storage (GCS), and filesystem for [long-term storage](https://pulsar.apache.org/docs/en/cookbooks-tiered-storage/). Offloading to long-term storage is triggered via a REST API or the command-line interface. The user passes in the amount of topic data they wish to retain on BookKeeper, and the broker copies the backlog data to long-term storage. The original data is then deleted from BookKeeper after a configured delay (4 hours by default).

> For a guide for setting up tiered storage, see the [Tiered storage cookbook](cookbooks-tiered-storage.md).
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/concepts-topic-compaction.md b/site2/website/versioned_docs/version-2.9.0-deprecated/concepts-topic-compaction.md
deleted file mode 100644
index 34b7ed7fbbd31e..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/concepts-topic-compaction.md
+++ /dev/null
@@ -1,37 +0,0 @@
---
id: concepts-topic-compaction
title: Topic Compaction
sidebar_label: "Topic Compaction"
original_id: concepts-topic-compaction
---

Pulsar was built with highly scalable [persistent storage](concepts-architecture-overview.md#persistent-storage) of message data as a primary objective. Pulsar topics enable you to persistently store as many unacknowledged messages as you need while preserving message ordering. By default, Pulsar stores *all* unacknowledged/unprocessed messages produced on a topic. Accumulating many unacknowledged messages on a topic is necessary for many Pulsar use cases, but it can also be very time intensive for Pulsar consumers to "rewind" through the entire log of messages.

> For a more practical guide to topic compaction, see the [Topic compaction cookbook](cookbooks-compaction.md).

For some use cases consumers don't need a complete "image" of the topic log.
They may only need a few values to construct a more "shallow" image of the log, perhaps even just the most recent value. For these kinds of use cases Pulsar offers **topic compaction**. When you run compaction on a topic, Pulsar goes through a topic's backlog and removes messages that are *obscured* by later messages, i.e. it goes through the topic on a per-key basis and leaves only the most recent message associated with that key.

Pulsar's topic compaction feature:

* Allows for faster "rewind" through topic logs
* Applies only to [persistent topics](concepts-architecture-overview.md#persistent-storage)
* Is triggered automatically when the backlog reaches a certain size, or can be triggered manually via the command line. See the [Topic compaction cookbook](cookbooks-compaction.md)
* Is conceptually and operationally distinct from [retention and expiry](concepts-messaging.md#message-retention-and-expiry). Topic compaction *does*, however, respect retention. If retention has removed a message from the message backlog of a topic, the message will also not be readable from the compacted topic ledger.

> #### Topic compaction example: the stock ticker
> An example use case for a compacted Pulsar topic would be a stock ticker topic. On a stock ticker topic, each message bears a timestamped dollar value for stocks for purchase (with the message key holding the stock symbol, e.g. `AAPL` or `GOOG`). With a stock ticker you may care only about the most recent value(s) of the stock and have no interest in historical data (i.e. you don't need to construct a complete image of the topic's sequence of messages per key). Compaction would be highly beneficial in this case because it would keep consumers from needing to rewind through obscured messages.

## How topic compaction works

When topic compaction is triggered [via the CLI](cookbooks-compaction.md), Pulsar iterates over the entire topic from beginning to end. For each key that it encounters, the compaction routine keeps a record of the latest occurrence of that key.

After that, the broker creates a new [BookKeeper ledger](concepts-architecture-overview.md#ledgers) and makes a second iteration through each message on the topic. For each message, if the key matches the latest occurrence of that key, then the key's data payload, message ID, and metadata are written to the newly created ledger. If the key doesn't match the latest occurrence, the message is skipped and left alone. If any given message has an empty payload, it is skipped and considered deleted (akin to the concept of [tombstones](https://en.wikipedia.org/wiki/Tombstone_(data_store)) in key-value databases). At the end of this second iteration through the topic, the newly created BookKeeper ledger is closed and two things are written to the topic's metadata: the ID of the BookKeeper ledger and the message ID of the last compacted message (this is known as the **compaction horizon** of the topic). Once this metadata is written, compaction is complete.

After the initial compaction operation, the Pulsar [broker](reference-terminology.md#broker) that owns the topic is notified whenever any future changes are made to the compaction horizon and compacted backlog.
When such changes occur:

* Clients (consumers and readers) that have `readCompacted` enabled will attempt to read messages from the topic and either:
  * Read from the topic as normal (if the message ID is greater than or equal to the compaction horizon), or
  * Read beginning at the compaction horizon (if the message ID is lower than the compaction horizon)

diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/concepts-transactions.md b/site2/website/versioned_docs/version-2.9.0-deprecated/concepts-transactions.md
deleted file mode 100644
index 08490ba06b5d7e..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/concepts-transactions.md
+++ /dev/null
@@ -1,30 +0,0 @@
---
id: transactions
title: Transactions
sidebar_label: "Overview"
original_id: transactions
---

Transactional semantics enable event streaming applications to consume, process, and produce messages in one atomic operation. In Pulsar, a producer or consumer can work with messages across multiple topics and partitions and ensure those messages are processed as a single unit.

The following concepts help you understand Pulsar transactions.

## Transaction coordinator and transaction log
The transaction coordinator maintains the topics and subscriptions that interact in a transaction. When a transaction is committed, the transaction coordinator interacts with the topic owner broker to complete the transaction.

The transaction coordinator maintains the entire life cycle of transactions and prevents a transaction from entering an incorrect status.

The transaction coordinator handles transaction timeouts and ensures that a transaction is aborted after it times out.

All the transaction metadata is persisted in the transaction log. The transaction log is backed by a Pulsar topic. After the transaction coordinator crashes, it can restore the transaction metadata from the transaction log.

## Transaction ID
The transaction ID (TxnID) identifies a unique transaction in Pulsar. The transaction ID is 128 bits. The highest 16 bits are reserved for the ID of the transaction coordinator, and the remaining bits are used for monotonically increasing numbers in each transaction coordinator. The TxnID makes it easy to locate where a transaction crashed.

## Transaction buffer
Messages produced within a transaction are stored in the transaction buffer. The messages in the transaction buffer are not materialized (visible) to consumers until the transaction is committed. The messages in the transaction buffer are discarded when the transaction is aborted.

## Pending acknowledge state
Message acknowledgements within a transaction are maintained by the pending acknowledge state before the transaction completes. If a message is in the pending acknowledge state, the message cannot be acknowledged by other transactions until the message is removed from the pending acknowledge state.

The pending acknowledge state is persisted to the pending acknowledge log. The pending acknowledge log is backed by a Pulsar topic. A new broker can restore the state from the pending acknowledge log to ensure the acknowledgement is not lost.
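To make these concepts concrete, here is a minimal sketch of the Java transaction API; it assumes transactions are enabled on the broker (for example, `transactionCoordinatorEnabled=true` in `broker.conf`), and the service URL, producer, and consumer are placeholders.

```java

// Build a client with transaction support (assumes broker-side transactions are enabled)
PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650")   // placeholder service URL
        .enableTransaction(true)
        .build();

// Open a transaction; the transaction coordinator enforces the timeout
Transaction txn = client.newTransaction()
        .withTransactionTimeout(5, TimeUnit.MINUTES)
        .build()
        .get();

// Consume-process-produce atomically: the send and the ack commit or abort together
Message<byte[]> msg = consumer.receive();
producer.newMessage(txn).value(msg.getData()).send();
consumer.acknowledgeAsync(msg.getMessageId(), txn);

txn.commit().get();   // or txn.abort().get() on failure

```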
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/cookbooks-bookkeepermetadata.md b/site2/website/versioned_docs/version-2.9.0-deprecated/cookbooks-bookkeepermetadata.md deleted file mode 100644 index b0fa98dc3b65d5..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/cookbooks-bookkeepermetadata.md +++ /dev/null @@ -1,21 +0,0 @@ ---- -id: cookbooks-bookkeepermetadata -title: BookKeeper Ledger Metadata -original_id: cookbooks-bookkeepermetadata ---- - -Pulsar stores data on BookKeeper ledgers, you can understand the contents of a ledger by inspecting the metadata attached to the ledger. -Such metadata are stored on ZooKeeper and they are readable using BookKeeper APIs. - -Description of current metadata: - -| Scope | Metadata name | Metadata value | -| ------------- | ------------- | ------------- | -| All ledgers | application | 'pulsar' | -| All ledgers | component | 'managed-ledger', 'schema', 'compacted-topic' | -| Managed ledgers | pulsar/managed-ledger | name of the ledger | -| Cursor | pulsar/cursor | name of the cursor | -| Compacted topic | pulsar/compactedTopic | name of the original topic | -| Compacted topic | pulsar/compactedTo | id of the last compacted message | - - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/cookbooks-compaction.md b/site2/website/versioned_docs/version-2.9.0-deprecated/cookbooks-compaction.md deleted file mode 100644 index dfa314727241a8..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/cookbooks-compaction.md +++ /dev/null @@ -1,142 +0,0 @@ ---- -id: cookbooks-compaction -title: Topic compaction -sidebar_label: "Topic compaction" -original_id: cookbooks-compaction ---- - -Pulsar's [topic compaction](concepts-topic-compaction.md#compaction) feature enables you to create **compacted** topics in which older, "obscured" entries are pruned from the topic, allowing for faster reads through the topic's history (which messages are deemed obscured/outdated/irrelevant will depend on your use case). - -To use compaction: - -* You need to give messages keys, as topic compaction in Pulsar takes place on a *per-key basis* (i.e. messages are compacted based on their key). For a stock ticker use case, the stock symbol---e.g. `AAPL` or `GOOG`---could serve as the key (more on this [below](#when-should-i-use-compacted-topics)). Messages without keys will be left alone by the compaction process. -* Compaction can be configured to run [automatically](#configuring-compaction-to-run-automatically), or you can manually [trigger](#triggering-compaction-manually) compaction using the Pulsar administrative API. -* Your consumers must be [configured](#consumer-configuration) to read from compacted topics ([Java consumers](#java), for example, have a `readCompacted` setting that must be set to `true`). If this configuration is not set, consumers will still be able to read from the non-compacted topic. - - -> Compaction only works on messages that have keys (as in the stock ticker example the stock symbol serves as the key for each message). Keys can thus be thought of as the axis along which compaction is applied. Messages that don't have keys are simply ignored by compaction. - -## When should I use compacted topics? - -The classic example of a topic that could benefit from compaction would be a stock ticker topic through which consumers can access up-to-date values for specific stocks. 
Imagine a scenario in which messages carrying stock value data use the stock symbol as the key (`GOOG`, `AAPL`, `TWTR`, etc.). Compacting this topic would give consumers on the topic two options:
-
-* They can read from the "original," non-compacted topic in case they need access to "historical" values, i.e. the entirety of the topic's messages.
-* They can read from the compacted topic if they only want to see the most up-to-date messages.
-
-Thus, if you're using a Pulsar topic called `stock-values`, some consumers could have access to all messages in the topic (perhaps because they're performing some kind of number crunching of all values in the last hour) while the consumers used to power the real-time stock ticker only see the compacted topic (and thus aren't forced to process outdated messages). Which variant of the topic any given consumer pulls messages from is determined by the consumer's [configuration](#consumer-configuration).
-
-> One of the benefits of compaction in Pulsar is that you aren't forced to choose between compacted and non-compacted topics, as the compaction process leaves the original topic as-is and essentially adds an alternate topic. In other words, you can run compaction on a topic and consumers that need access to the non-compacted version of the topic will not be adversely affected.
-
-
-## Configuring compaction to run automatically
-
-Tenant administrators can configure a policy for compaction at the namespace level. The policy specifies how large the topic backlog can grow before compaction is triggered.
-
-For example, to trigger compaction when the backlog reaches 100MB:
-
-```bash
-
-$ bin/pulsar-admin namespaces set-compaction-threshold \
-  --threshold 100M my-tenant/my-namespace
-
-```
-
-Configuring the compaction threshold on a namespace will apply to all topics within that namespace.
-
-## Triggering compaction manually
-
-In order to run compaction on a topic, you need to use the [`topics compact`](reference-pulsar-admin.md#topics-compact) command for the [`pulsar-admin`](reference-pulsar-admin.md) CLI tool. Here's an example:
-
-```bash
-
-$ bin/pulsar-admin topics compact \
-  persistent://my-tenant/my-namespace/my-topic
-
-```
-
-The `pulsar-admin` tool runs compaction via the Pulsar {@inject: rest:REST:/} API. To run compaction in its own dedicated process, i.e. *not* through the REST API, you can use the [`pulsar compact-topic`](reference-cli-tools.md#pulsar-compact-topic) command. Here's an example:
-
-```bash
-
-$ bin/pulsar compact-topic \
-  --topic persistent://my-tenant/my-namespace/my-topic
-
-```
-
-> Running compaction in its own process is recommended when you want to avoid interfering with the broker's performance. Broker performance should only be affected, however, when running compaction on topics with a large keyspace (i.e. when there are many keys on the topic). The first phase of the compaction process keeps a copy of each key in the topic, which can create memory pressure as the number of keys grows. Using the `pulsar-admin topics compact` command to run compaction through the REST API should present no issues in the overwhelming majority of cases; using `pulsar compact-topic` should correspondingly be considered an edge case.
-
-The `pulsar compact-topic` command communicates with [ZooKeeper](https://zookeeper.apache.org) directly. In order to establish communication with ZooKeeper, though, the `pulsar` CLI tool will need to have a valid [broker configuration](reference-configuration.md#broker).
You can either supply a proper configuration in `conf/broker.conf` or specify a non-default location for the configuration: - -```bash - -$ bin/pulsar compact-topic \ - --broker-conf /path/to/broker.conf \ - --topic persistent://my-tenant/my-namespace/my-topic - -# If the configuration is in conf/broker.conf -$ bin/pulsar compact-topic \ - --topic persistent://my-tenant/my-namespace/my-topic - -``` - -#### When should I trigger compaction? - -How often you [trigger compaction](#triggering-compaction-manually) will vary widely based on the use case. If you want a compacted topic to be extremely speedy on read, then you should run compaction fairly frequently. - -## Consumer configuration - -Pulsar consumers and readers need to be configured to read from compacted topics. The sections below show you how to enable compacted topic reads for Pulsar's language clients. - -### Java - -In order to read from a compacted topic using a Java consumer, the `readCompacted` parameter must be set to `true`. Here's an example consumer for a compacted topic: - -```java - -Consumer compactedTopicConsumer = client.newConsumer() - .topic("some-compacted-topic") - .readCompacted(true) - .subscribe(); - -``` - -As mentioned above, topic compaction in Pulsar works on a *per-key basis*. That means that messages that you produce on compacted topics need to have keys (the content of the key will depend on your use case). Messages that don't have keys will be ignored by the compaction process. Here's an example Pulsar message with a key: - -```java - -import org.apache.pulsar.client.api.Message; -import org.apache.pulsar.client.api.MessageBuilder; - -Message msg = MessageBuilder.create() - .setContent(someByteArray) - .setKey("some-key") - .build(); - -``` - -The example below shows a message with a key being produced on a compacted Pulsar topic: - -```java - -import org.apache.pulsar.client.api.Message; -import org.apache.pulsar.client.api.MessageBuilder; -import org.apache.pulsar.client.api.Producer; -import org.apache.pulsar.client.api.PulsarClient; - -PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://localhost:6650") - .build(); - -Producer compactedTopicProducer = client.newProducer() - .topic("some-compacted-topic") - .create(); - -Message msg = MessageBuilder.create() - .setContent(someByteArray) - .setKey("some-key") - .build(); - -compactedTopicProducer.send(msg); - -``` - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/cookbooks-deduplication.md b/site2/website/versioned_docs/version-2.9.0-deprecated/cookbooks-deduplication.md deleted file mode 100644 index f7f9e3d7bb425b..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/cookbooks-deduplication.md +++ /dev/null @@ -1,151 +0,0 @@ ---- -id: cookbooks-deduplication -title: Message deduplication -sidebar_label: "Message deduplication" -original_id: cookbooks-deduplication ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -When **Message deduplication** is enabled, it ensures that each message produced on Pulsar topics is persisted to disk *only once*, even if the message is produced more than once. Message deduplication is handled automatically on the server side. - -To use message deduplication in Pulsar, you need to configure your Pulsar brokers and clients. - -## How it works - -You can enable or disable message deduplication at the namespace level or the topic level. By default, it is disabled on all namespaces or topics. 
You can enable it in the following ways: - -* Enable deduplication for all namespaces/topics at the broker-level. -* Enable deduplication for a specific namespace with the `pulsar-admin namespaces` interface. -* Enable deduplication for a specific topic with the `pulsar-admin topics` interface. - -## Configure message deduplication - -You can configure message deduplication in Pulsar using the [`broker.conf`](reference-configuration.md#broker) configuration file. The following deduplication-related parameters are available. - -Parameter | Description | Default -:---------|:------------|:------- -`brokerDeduplicationEnabled` | Sets the default behavior for message deduplication in the Pulsar broker. If it is set to `true`, message deduplication is enabled on all namespaces/topics. If it is set to `false`, you have to enable or disable deduplication at the namespace level or the topic level. | `false` -`brokerDeduplicationMaxNumberOfProducers` | The maximum number of producers for which information is stored for deduplication purposes. | `10000` -`brokerDeduplicationEntriesInterval` | The number of entries after which a deduplication informational snapshot is taken. A larger interval leads to fewer snapshots being taken, though this lengthens the topic recovery time (the time required for entries published after the snapshot to be replayed). | `1000` -`brokerDeduplicationSnapshotIntervalSeconds`| The time period after which a deduplication informational snapshot is taken. It runs simultaneously with `brokerDeduplicationEntriesInterval`. |`120` -`brokerDeduplicationProducerInactivityTimeoutMinutes` | The time of inactivity (in minutes) after which the broker discards deduplication information related to a disconnected producer. | `360` (6 hours) - -### Set default value at the broker-level - -By default, message deduplication is *disabled* on all Pulsar namespaces/topics. To enable it on all namespaces/topics, set the `brokerDeduplicationEnabled` parameter to `true` and re-start the broker. - -Even if you set the value for `brokerDeduplicationEnabled`, enabling or disabling via Pulsar admin CLI overrides the default settings at the broker-level. - -### Enable message deduplication - -Though message deduplication is disabled by default at the broker level, you can enable message deduplication for a specific namespace or topic using the [`pulsar-admin namespaces set-deduplication`](reference-pulsar-admin.md#namespace-set-deduplication) or the [`pulsar-admin topics set-deduplication`](reference-pulsar-admin.md#topic-set-deduplication) command. You can use the `--enable`/`-e` flag and specify the namespace/topic. - -The following example shows how to enable message deduplication at the namespace level. - -```bash - -$ bin/pulsar-admin namespaces set-deduplication \ - public/default \ - --enable # or just -e - -``` - -### Disable message deduplication - -Even if you enable message deduplication at the broker level, you can disable message deduplication for a specific namespace or topic using the [`pulsar-admin namespace set-deduplication`](reference-pulsar-admin.md#namespace-set-deduplication) or the [`pulsar-admin topics set-deduplication`](reference-pulsar-admin.md#topic-set-deduplication) command. Use the `--disable`/`-d` flag and specify the namespace/topic. - -The following example shows how to disable message deduplication at the namespace level. 
-
-```bash
-
-$ bin/pulsar-admin namespaces set-deduplication \
-  public/default \
-  --disable # or just -d
-
-```
-
-## Pulsar clients
-
-If you enable message deduplication in Pulsar brokers, you need to complete the following tasks for your client producers:
-
-1. Specify a name for the producer.
-1. Set the message timeout to `0` (namely, no timeout).
-
-The instructions for Java, Python, and C++ clients are different.
-
-````mdx-code-block
-<Tabs>
-<TabItem value="Java">
-
-To enable message deduplication on a [Java producer](client-libraries-java.md#producers), set the producer name using the `producerName` setter, and set the timeout to `0` using the `sendTimeout` setter.
-
-```java
-
-import org.apache.pulsar.client.api.Producer;
-import org.apache.pulsar.client.api.PulsarClient;
-import java.util.concurrent.TimeUnit;
-
-PulsarClient pulsarClient = PulsarClient.builder()
-        .serviceUrl("pulsar://localhost:6650")
-        .build();
-Producer producer = pulsarClient.newProducer()
-        .producerName("producer-1")
-        .topic("persistent://public/default/topic-1")
-        .sendTimeout(0, TimeUnit.SECONDS)
-        .create();
-
-```
-
-</TabItem>
-<TabItem value="Python">
-
-To enable message deduplication on a [Python producer](client-libraries-python.md#producers), set the producer name using `producer_name`, and set the timeout to `0` using `send_timeout_millis`.
-
-```python
-
-import pulsar
-
-client = pulsar.Client("pulsar://localhost:6650")
-producer = client.create_producer(
-    "persistent://public/default/topic-1",
-    producer_name="producer-1",
-    send_timeout_millis=0)
-
-```
-
-</TabItem>
-<TabItem value="C++">
-
-To enable message deduplication on a [C++ producer](client-libraries-cpp.md#producer), set the producer name using `setProducerName`, and set the timeout to `0` using `setSendTimeout`.
-
-```cpp
-
-#include <pulsar/Client.h>
-
-std::string serviceUrl = "pulsar://localhost:6650";
-std::string topic = "persistent://some-tenant/ns1/topic-1";
-std::string producerName = "producer-1";
-
-Client client(serviceUrl);
-
-ProducerConfiguration producerConfig;
-producerConfig.setSendTimeout(0);
-producerConfig.setProducerName(producerName);
-
-Producer producer;
-
-Result result = client.createProducer(topic, producerConfig, producer);
-
-```
-
-</TabItem>
-</Tabs>
-````
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/cookbooks-encryption.md b/site2/website/versioned_docs/version-2.9.0-deprecated/cookbooks-encryption.md
deleted file mode 100644
index f0d8fb8735eb63..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/cookbooks-encryption.md
+++ /dev/null
@@ -1,184 +0,0 @@
----
-id: cookbooks-encryption
-title: Pulsar Encryption
-sidebar_label: "Encryption"
-original_id: cookbooks-encryption
----
-
-Pulsar encryption allows applications to encrypt messages at the producer and decrypt them at the consumer. Encryption is performed using the public/private key pair configured by the application. Encrypted messages can only be decrypted by consumers with a valid key.
-
-## Asymmetric and symmetric encryption
-
-Pulsar uses a dynamically generated symmetric AES key to encrypt messages (data). The AES key (data key) is encrypted using the application-provided ECDSA/RSA key pair; as a result, there is no need to share the secret with everyone.
-
-A key is a public/private key pair used for encryption/decryption. The producer key is the public key, and the consumer key is the private key of the key pair.
-
-The application configures the producer with the public key. This key is used to encrypt the AES data key. The encrypted data key is sent as part of the message header.
Only entities with the private key(in this case the consumer) will be able to decrypt the data key which is used to decrypt the message. - -A message can be encrypted with more than one key. Any one of the keys used for encrypting the message is sufficient to decrypt the message - -Pulsar does not store the encryption key anywhere in the pulsar service. If you lose/delete the private key, your message is irretrievably lost, and is unrecoverable - -## Producer -![alt text](/assets/pulsar-encryption-producer.jpg "Pulsar Encryption Producer") - -## Consumer -![alt text](/assets/pulsar-encryption-consumer.jpg "Pulsar Encryption Consumer") - -## Here are the steps to get started: - -1. Create your ECDSA or RSA public/private key pair. - -```shell - -openssl ecparam -name secp521r1 -genkey -param_enc explicit -out test_ecdsa_privkey.pem -openssl ec -in test_ecdsa_privkey.pem -pubout -outform pkcs8 -out test_ecdsa_pubkey.pem - -``` - -2. Add the public and private key to the key management and configure your producers to retrieve public keys and consumers clients to retrieve private keys. -3. Implement CryptoKeyReader::getPublicKey() interface from producer and CryptoKeyReader::getPrivateKey() interface from consumer, which will be invoked by Pulsar client to load the key. -4. Add encryption key to producer configuration: conf.addEncryptionKey("myapp.key") -5. Add CryptoKeyReader implementation to producer/consumer config: conf.setCryptoKeyReader(keyReader) -6. Sample producer application: - -```java - -class RawFileKeyReader implements CryptoKeyReader { - - String publicKeyFile = ""; - String privateKeyFile = ""; - - RawFileKeyReader(String pubKeyFile, String privKeyFile) { - publicKeyFile = pubKeyFile; - privateKeyFile = privKeyFile; - } - - @Override - public EncryptionKeyInfo getPublicKey(String keyName, Map keyMeta) { - EncryptionKeyInfo keyInfo = new EncryptionKeyInfo(); - try { - keyInfo.setKey(Files.readAllBytes(Paths.get(publicKeyFile))); - } catch (IOException e) { - System.out.println("ERROR: Failed to read public key from file " + publicKeyFile); - e.printStackTrace(); - } - return keyInfo; - } - - @Override - public EncryptionKeyInfo getPrivateKey(String keyName, Map keyMeta) { - EncryptionKeyInfo keyInfo = new EncryptionKeyInfo(); - try { - keyInfo.setKey(Files.readAllBytes(Paths.get(privateKeyFile))); - } catch (IOException e) { - System.out.println("ERROR: Failed to read private key from file " + privateKeyFile); - e.printStackTrace(); - } - return keyInfo; - } -} -PulsarClient pulsarClient = PulsarClient.create("http://localhost:8080"); - -ProducerConfiguration prodConf = new ProducerConfiguration(); -prodConf.setCryptoKeyReader(new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem")); -prodConf.addEncryptionKey("myappkey"); - -Producer producer = pulsarClient.createProducer("persistent://my-tenant/my-ns/my-topic", prodConf); - -for (int i = 0; i < 10; i++) { - producer.send("my-message".getBytes()); -} - -pulsarClient.close(); - -``` - -7. 
Sample Consumer Application: - -```java - -class RawFileKeyReader implements CryptoKeyReader { - - String publicKeyFile = ""; - String privateKeyFile = ""; - - RawFileKeyReader(String pubKeyFile, String privKeyFile) { - publicKeyFile = pubKeyFile; - privateKeyFile = privKeyFile; - } - - @Override - public EncryptionKeyInfo getPublicKey(String keyName, Map keyMeta) { - EncryptionKeyInfo keyInfo = new EncryptionKeyInfo(); - try { - keyInfo.setKey(Files.readAllBytes(Paths.get(publicKeyFile))); - } catch (IOException e) { - System.out.println("ERROR: Failed to read public key from file " + publicKeyFile); - e.printStackTrace(); - } - return keyInfo; - } - - @Override - public EncryptionKeyInfo getPrivateKey(String keyName, Map keyMeta) { - EncryptionKeyInfo keyInfo = new EncryptionKeyInfo(); - try { - keyInfo.setKey(Files.readAllBytes(Paths.get(privateKeyFile))); - } catch (IOException e) { - System.out.println("ERROR: Failed to read private key from file " + privateKeyFile); - e.printStackTrace(); - } - return keyInfo; - } -} - -ConsumerConfiguration consConf = new ConsumerConfiguration(); -consConf.setCryptoKeyReader(new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem")); -PulsarClient pulsarClient = PulsarClient.create("http://localhost:8080"); -Consumer consumer = pulsarClient.subscribe("persistent://my-tenant//my-ns/my-topic", "my-subscriber-name", consConf); -Message msg = null; - -for (int i = 0; i < 10; i++) { - msg = consumer.receive(); - // do something - System.out.println("Received: " + new String(msg.getData())); -} - -// Acknowledge the consumption of all messages at once -consumer.acknowledgeCumulative(msg); -pulsarClient.close(); - -``` - -## Key rotation -Pulsar generates new AES data key every 4 hours or after a certain number of messages are published. The asymmetric public key is automatically fetched by producer every 4 hours by calling CryptoKeyReader::getPublicKey() to retrieve the latest version. - -## Enabling encryption at the producer application: -If you produce messages that are consumed across application boundaries, you need to ensure that consumers in other applications have access to one of the private keys that can decrypt the messages. This can be done in two ways: -1. The consumer application provides you access to their public key, which you add to your producer keys -1. You grant access to one of the private keys from the pairs used by producer - -In some cases, the producer may want to encrypt the messages with multiple keys. For this, add all such keys to the config. Consumer will be able to decrypt the message, as long as it has access to at least one of the keys. - -E.g: If messages needs to be encrypted using 2 keys myapp.messagekey1 and myapp.messagekey2, - -```java - -conf.addEncryptionKey("myapp.messagekey1"); -conf.addEncryptionKey("myapp.messagekey2"); - -``` - -## Decrypting encrypted messages at the consumer application: -Consumers require access one of the private keys to decrypt messages produced by the producer. If you would like to receive encrypted messages, create a public/private key and give your public key to the producer application to encrypt messages using your public key. - -## Handling Failures: -* Producer/ Consumer loses access to the key - * Producer action will fail indicating the cause of the failure. Application has the option to proceed with sending unencrypted message in such cases. Call conf.setCryptoFailureAction(ProducerCryptoFailureAction) to control the producer behavior. 
The default behavior is to fail the request.
-  * If consumption fails due to a decryption failure or missing keys on the consumer, the application has the option to consume the encrypted message or discard it. Call conf.setCryptoFailureAction(ConsumerCryptoFailureAction) to control the consumer behavior. The default behavior is to fail the request.
-The application will never be able to decrypt the messages if the private key is permanently lost.
-* Batch messaging
-  * If decryption fails and the message contains batched messages, the client will not be able to retrieve individual messages in the batch, so message consumption fails even if conf.setCryptoFailureAction() is set to CONSUME.
-* If decryption fails, message consumption stops and the application will notice backlog growth in addition to decryption failure messages in the client log. If the application does not have access to the private key to decrypt the message, the only option is to skip/discard backlogged messages.
-
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/cookbooks-message-queue.md b/site2/website/versioned_docs/version-2.9.0-deprecated/cookbooks-message-queue.md
deleted file mode 100644
index eb43cbde5fb818..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/cookbooks-message-queue.md
+++ /dev/null
@@ -1,127 +0,0 @@
----
-id: cookbooks-message-queue
-title: Using Pulsar as a message queue
-sidebar_label: "Message queue"
-original_id: cookbooks-message-queue
----
-
-Message queues are essential components of many large-scale data architectures. If every single work object that passes through your system absolutely *must* be processed in spite of the slowness or downright failure of this or that system component, there's a good chance that you'll need a message queue to step in and ensure that unprocessed data is retained---with correct ordering---until the required actions are taken.
-
-Pulsar is a great choice for a message queue because:
-
-* it was built with [persistent message storage](concepts-architecture-overview.md#persistent-storage) in mind
-* it offers automatic load balancing across [consumers](reference-terminology.md#consumer) for messages on a topic (or custom load balancing if you wish)
-
-> You can use the same Pulsar installation to act as a real-time message bus and as a message queue if you wish (or just one or the other). You can set aside some topics for real-time purposes and other topics for message queue purposes (or use specific namespaces for either purpose if you wish).
-
-
-# Client configuration changes
-
-To use a Pulsar [topic](reference-terminology.md#topic) as a message queue, you should distribute the receiver load on that topic across several consumers (the optimal number of consumers will depend on the load). Each consumer must:
-
-* Establish a [shared subscription](concepts-messaging.md#shared) and use the same subscription name as the other consumers (otherwise the subscription is not shared and the consumers can't act as a processing ensemble)
-* If you'd like to have tight control over message dispatching across consumers, set the consumers' **receiver queue** size very low (potentially even to 0 if necessary). Each Pulsar [consumer](reference-terminology.md#consumer) has a receiver queue that determines how many messages the consumer will attempt to fetch at a time. A receiver queue of 1000 (the default), for example, means that the consumer will attempt to process 1000 messages from the topic's backlog upon connection.
Setting the receiver queue to zero essentially means ensuring that each consumer is only doing one thing at a time.
-
-  The downside to restricting the receiver queue size of consumers is that it limits the potential throughput of those consumers and cannot be used with [partitioned topics](reference-terminology.md#partitioned-topic). Whether the performance/control trade-off is worthwhile will depend on your use case.
-
-## Java clients
-
-Here's an example Java consumer configuration that uses a shared subscription:
-
-```java
-
-import org.apache.pulsar.client.api.Consumer;
-import org.apache.pulsar.client.api.PulsarClient;
-import org.apache.pulsar.client.api.SubscriptionType;
-
-String SERVICE_URL = "pulsar://localhost:6650";
-String TOPIC = "persistent://public/default/mq-topic-1";
-String subscription = "sub-1";
-
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl(SERVICE_URL)
-        .build();
-
-Consumer consumer = client.newConsumer()
-        .topic(TOPIC)
-        .subscriptionName(subscription)
-        .subscriptionType(SubscriptionType.Shared)
-        // If you'd like to restrict the receiver queue size
-        .receiverQueueSize(10)
-        .subscribe();
-
-```
-
-## Python clients
-
-Here's an example Python consumer configuration that uses a shared subscription:
-
-```python
-
-from pulsar import Client, ConsumerType
-
-SERVICE_URL = "pulsar://localhost:6650"
-TOPIC = "persistent://public/default/mq-topic-1"
-SUBSCRIPTION = "sub-1"
-
-client = Client(SERVICE_URL)
-consumer = client.subscribe(
-    TOPIC,
-    SUBSCRIPTION,
-    # If you'd like to restrict the receiver queue size
-    receiver_queue_size=10,
-    consumer_type=ConsumerType.Shared)
-
-```
-
-## C++ clients
-
-Here's an example C++ consumer configuration that uses a shared subscription:
-
-```cpp
-
-#include <pulsar/Client.h>
-
-std::string serviceUrl = "pulsar://localhost:6650";
-std::string topic = "persistent://public/default/mq-topic-1";
-std::string subscription = "sub-1";
-
-Client client(serviceUrl);
-
-ConsumerConfiguration consumerConfig;
-consumerConfig.setConsumerType(ConsumerShared);
-// If you'd like to restrict the receiver queue size
-consumerConfig.setReceiverQueueSize(10);
-
-Consumer consumer;
-
-Result result = client.subscribe(topic, subscription, consumerConfig, consumer);
-
-```
-
-## Go clients
-
-Here is an example of a Go consumer configuration that uses a shared subscription:
-
-```go
-
-import (
-	"log"
-
-	"github.com/apache/pulsar-client-go/pulsar"
-)
-
-client, err := pulsar.NewClient(pulsar.ClientOptions{
-	URL: "pulsar://localhost:6650",
-})
-if err != nil {
-	log.Fatal(err)
-}
-consumer, err := client.Subscribe(pulsar.ConsumerOptions{
-	Topic:             "persistent://public/default/mq-topic-1",
-	SubscriptionName:  "sub-1",
-	Type:              pulsar.Shared,
-	ReceiverQueueSize: 10, // If you'd like to restrict the receiver queue size
-})
-if err != nil {
-	log.Fatal(err)
-}
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/cookbooks-non-persistent.md b/site2/website/versioned_docs/version-2.9.0-deprecated/cookbooks-non-persistent.md
deleted file mode 100644
index 178301e86eb8df..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/cookbooks-non-persistent.md
+++ /dev/null
@@ -1,63 +0,0 @@
----
-id: cookbooks-non-persistent
-title: Non-persistent messaging
-sidebar_label: "Non-persistent messaging"
-original_id: cookbooks-non-persistent
----
-
-**Non-persistent topics** are Pulsar topics in which message data is *never* [persistently stored](concepts-architecture-overview.md#persistent-storage) and kept only in memory.
This cookbook provides: - -* A basic [conceptual overview](#overview) of non-persistent topics -* Information about [configurable parameters](#configuration) related to non-persistent topics -* A guide to the [CLI interface](#cli) for managing non-persistent topics - -## Overview - -By default, Pulsar persistently stores *all* unacknowledged messages on multiple [BookKeeper](#persistent-storage) bookies (storage nodes). Data for messages on persistent topics can thus survive broker restarts and subscriber failover. - -Pulsar also, however, supports **non-persistent topics**, which are topics on which messages are *never* persisted to disk and live only in memory. When using non-persistent delivery, killing a Pulsar [broker](reference-terminology.md#broker) or disconnecting a subscriber to a topic means that all in-transit messages are lost on that (non-persistent) topic, meaning that clients may see message loss. - -Non-persistent topics have names of this form (note the `non-persistent` in the name): - -```http - -non-persistent://tenant/namespace/topic - -``` - -> For more high-level information about non-persistent topics, see the [Concepts and Architecture](concepts-messaging.md#non-persistent-topics) documentation. - -## Using - -> In order to use non-persistent topics, they must be [enabled](#enabling) in your Pulsar broker configuration. - -In order to use non-persistent topics, you only need to differentiate them by name when interacting with them. This [`pulsar-client produce`](reference-cli-tools.md#pulsar-client-produce) command, for example, would produce one message on a non-persistent topic in a standalone cluster: - -```bash - -$ bin/pulsar-client produce non-persistent://public/default/example-np-topic \ - --num-produce 1 \ - --messages "This message will be stored only in memory" - -``` - -> For a more thorough guide to non-persistent topics from an administrative perspective, see the [Non-persistent topics](admin-api-topics.md) guide. - -## Enabling - -In order to enable non-persistent topics in a Pulsar broker, the [`enableNonPersistentTopics`](reference-configuration.md#broker-enableNonPersistentTopics) must be set to `true`. This is the default, and so you won't need to take any action to enable non-persistent messaging. - - -> #### Configuration for standalone mode -> If you're running Pulsar in standalone mode, the same configurable parameters are available but in the [`standalone.conf`](reference-configuration.md#standalone) configuration file. - -If you'd like to enable *only* non-persistent topics in a broker, you can set the [`enablePersistentTopics`](reference-configuration.md#broker-enablePersistentTopics) parameter to `false` and the `enableNonPersistentTopics` parameter to `true`. - -## Managing with cli - -Non-persistent topics can be managed using the [`pulsar-admin non-persistent`](reference-pulsar-admin.md#non-persistent) command-line interface. With that interface you can perform actions like [create a partitioned non-persistent topic](reference-pulsar-admin.md#non-persistent-create-partitioned-topic), get [stats](reference-pulsar-admin.md#non-persistent-stats) for a non-persistent topic, [list](reference-pulsar-admin.md) non-persistent topics under a namespace, and more. - -## Using with Pulsar clients - -You shouldn't need to make any changes to your Pulsar clients to use non-persistent messaging beyond making sure that you use proper [topic names](#using) with `non-persistent` as the topic type. 
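For instance, here is a minimal Java sketch; the topic name mirrors the CLI example above, and everything apart from the `non-persistent://` scheme is the same as for a persistent topic.

```java

import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;

PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650")
        .build();

// The only non-persistent-specific detail is the topic name's scheme.
Producer<byte[]> producer = client.newProducer()
        .topic("non-persistent://public/default/example-np-topic")
        .create();

producer.send("This message will be stored only in memory".getBytes());

client.close();

```
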
- diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/cookbooks-partitioned.md b/site2/website/versioned_docs/version-2.9.0-deprecated/cookbooks-partitioned.md deleted file mode 100644 index fb9ac354cc6d60..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/cookbooks-partitioned.md +++ /dev/null @@ -1,7 +0,0 @@ ---- -id: cookbooks-partitioned -title: Partitioned topics -sidebar_label: "Partitioned Topics" -original_id: cookbooks-partitioned ---- -For details of the content, refer to [manage topics](admin-api-topics.md). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/cookbooks-retention-expiry.md b/site2/website/versioned_docs/version-2.9.0-deprecated/cookbooks-retention-expiry.md deleted file mode 100644 index c8c46b3caa1bee..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/cookbooks-retention-expiry.md +++ /dev/null @@ -1,498 +0,0 @@ ---- -id: cookbooks-retention-expiry -title: Message retention and expiry -sidebar_label: "Message retention and expiry" -original_id: cookbooks-retention-expiry ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -Pulsar brokers are responsible for handling messages that pass through Pulsar, including [persistent storage](concepts-architecture-overview.md#persistent-storage) of messages. By default, for each topic, brokers only retain messages that are in at least one backlog. A backlog is the set of unacknowledged messages for a particular subscription. As a topic can have multiple subscriptions, a topic can have multiple backlogs. - -As a consequence, no messages are retained (by default) on a topic that has not had any subscriptions created for it. - -(Note that messages that are no longer being stored are not necessarily immediately deleted, and may in fact still be accessible until the next ledger rollover. Because clients cannot predict when rollovers may happen, it is not wise to rely on a rollover not happening at an inconvenient point in time.) - -In Pulsar, you can modify this behavior, with namespace granularity, in two ways: - -* You can persistently store messages that are not within a backlog (because they've been acknowledged by on every existing subscription, or because there are no subscriptions) by setting [retention policies](#retention-policies). -* Messages that are not acknowledged within a specified timeframe can be automatically acknowledged, by specifying the [time to live](#time-to-live-ttl) (TTL). - -Pulsar's [admin interface](admin-api-overview.md) enables you to manage both retention policies and TTL with namespace granularity (and thus within a specific tenant and either on a specific cluster or in the [`global`](concepts-architecture-overview.md#global-cluster) cluster). - - -> #### Retention and TTL solve two different problems -> * Message retention: Keep the data for at least X hours (even if acknowledged) -> * Time-to-live: Discard data after some time (by automatically acknowledging) -> -> Most applications will want to use at most one of these. - - -## Retention policies - -By default, when a Pulsar message arrives at a broker, the message is stored until it has been acknowledged on all subscriptions, at which point it is marked for deletion. You can override this behavior and retain messages that have already been acknowledged on all subscriptions by setting a *retention policy* for all topics in a given namespace. 
Retention is based on both a *size limit* and a *time limit*. - -Retention policies are useful when you use the Reader interface. The Reader interface does not use acknowledgements, and messages do not exist within backlogs. It is required to configure retention for Reader-only use cases. - -When you set a retention policy on topics in a namespace, you must set **both** a *size limit* and a *time limit*. You can refer to the following table to set retention policies in `pulsar-admin` and Java. - -|Time limit|Size limit| Message retention | -|----------|----------|------------------------| -| -1 | -1 | Infinite retention | -| -1 | >0 | Based on the size limit | -| >0 | -1 | Based on the time limit | -| 0 | 0 | Disable message retention (by default) | -| 0 | >0 | Invalid | -| >0 | 0 | Invalid | -| >0 | >0 | Acknowledged messages or messages with no active subscription will not be retained when either time or size reaches the limit. | - -The retention settings apply to all messages on topics that do not have any subscriptions, or to messages that have been acknowledged by all subscriptions. The retention policy settings do not affect unacknowledged messages on topics with subscriptions. The unacknowledged messages are controlled by the backlog quota. - -When a retention limit on a topic is exceeded, the oldest message is marked for deletion until the set of retained messages falls within the specified limits again. - -### Defaults - -You can set message retention at instance level with the following two parameters: `defaultRetentionTimeInMinutes` and `defaultRetentionSizeInMB`. Both parameters are set to `0` by default. - -For more information of the two parameters, refer to the [`broker.conf`](reference-configuration.md#broker) configuration file. - -### Set retention policy - -You can set a retention policy for a namespace by specifying the namespace, a size limit and a time limit in `pulsar-admin`, REST API and Java. - -````mdx-code-block - - - -You can use the [`set-retention`](reference-pulsar-admin.md#namespaces-set-retention) subcommand and specify a namespace, a size limit using the `-s`/`--size` flag, and a time limit using the `-t`/`--time` flag. - -In the following example, the size limit is set to 10 GB and the time limit is set to 3 hours for each topic within the `my-tenant/my-ns` namespace. -- When the size of messages reaches 10 GB on a topic within 3 hours, the acknowledged messages will not be retained. -- After 3 hours, even if the message size is less than 10 GB, the acknowledged messages will not be retained. - -```shell - -$ pulsar-admin namespaces set-retention my-tenant/my-ns \ - --size 10G \ - --time 3h - -``` - -In the following example, the time is not limited and the size limit is set to 1 TB. The size limit determines the retention. - -```shell - -$ pulsar-admin namespaces set-retention my-tenant/my-ns \ - --size 1T \ - --time -1 - -``` - -In the following example, the size is not limited and the time limit is set to 3 hours. The time limit determines the retention. - -```shell - -$ pulsar-admin namespaces set-retention my-tenant/my-ns \ - --size -1 \ - --time 3h - -``` - -To achieve infinite retention, set both values to `-1`. - -```shell - -$ pulsar-admin namespaces set-retention my-tenant/my-ns \ - --size -1 \ - --time -1 - -``` - -To disable the retention policy, set both values to `0`. 
-
-```shell
-
-$ pulsar-admin namespaces set-retention my-tenant/my-ns \
-  --size 0 \
-  --time 0
-
-```
-
-
-
-{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/retention|operation/setRetention?version=@pulsar:version_number@}
-
-:::note
-
-To disable the retention policy, you need to set both the size and time limits to `0`. Setting only one of the size or time limits to `0` is invalid.
-
-:::
-
-
-
-```java
-
-int retentionTime = 10; // 10 minutes
-int retentionSize = 500; // 500 megabytes
-RetentionPolicies policies = new RetentionPolicies(retentionTime, retentionSize);
-admin.namespaces().setRetention(namespace, policies);
-
-```
-
-
-
-````
-
-### Get retention policy
-
-You can fetch the retention policy for a namespace by specifying the namespace. The output will be a JSON object with two keys: `retentionTimeInMinutes` and `retentionSizeInMB`.
-
-````mdx-code-block
-
-
-Use the [`get-retention`](reference-pulsar-admin.md#namespaces) subcommand and specify the namespace.
-
-##### Example
-
-```shell
-
-$ pulsar-admin namespaces get-retention my-tenant/my-ns
-{
-  "retentionTimeInMinutes": 10,
-  "retentionSizeInMB": 500
-}
-
-```
-
-
-
-{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/retention|operation/getRetention?version=@pulsar:version_number@}
-
-
-
-```java
-
-admin.namespaces().getRetention(namespace);
-
-```
-
-
-
-````
-
-## Backlog quotas
-
-*Backlogs* are sets of unacknowledged messages for a topic that have been stored by bookies. Pulsar stores all unacknowledged messages in backlogs until they are processed and acknowledged.
-
-You can control the allowable size of backlogs, at the namespace level, using *backlog quotas*. Setting a backlog quota involves setting:
-
-* an allowable *size threshold* for each topic in the namespace
-* a *retention policy* that determines which action the [broker](reference-terminology.md#broker) takes if the threshold is exceeded.
-
-The following retention policies are available:
-
-Policy | Action
-:------|:------
-`producer_request_hold` | The broker will hold and not persist produce request payload
-`producer_exception` | The broker will disconnect from the client by throwing an exception
-`consumer_backlog_eviction` | The broker will begin discarding backlog messages
-
-
-> #### Beware the distinction between retention policy types
-> As you may have noticed, there are two definitions of the term "retention policy" in Pulsar, one that applies to persistent storage of messages not in backlogs, and one that applies to messages within backlogs.
-
-
-Backlog quotas are handled at the namespace level. They can be managed via the following operations:
-
-### Set size/time thresholds and backlog retention policies
-
-You can set a size and/or time threshold and backlog retention policy for all of the topics in a [namespace](reference-terminology.md#namespace) by specifying the namespace, a size limit and/or a time limit in seconds, and a policy by name.
-
-````mdx-code-block
-
-
-Use the [`set-backlog-quota`](reference-pulsar-admin.md#namespaces) subcommand and specify a namespace, a size limit using the `-l`/`--limit` flag, a time limit using the `-lt`/`--limitTime` flag, a retention policy using the `-p`/`--policy` flag, and a policy type using `-t`/`--type` (default is destination_storage).
- -##### Example - -```shell - -$ pulsar-admin namespaces set-backlog-quota my-tenant/my-ns \ - --limit 2G \ - --policy producer_request_hold - -``` - -```shell - -$ pulsar-admin namespaces set-backlog-quota my-tenant/my-ns/my-topic \ ---limitTime 3600 \ ---policy producer_request_hold \ ---type message_age - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/getBacklogQuotaMap?version=@pulsar:version_number@} - - - - -```java - -long sizeLimit = 2147483648L; -BacklogQuota.RetentionPolicy policy = BacklogQuota.RetentionPolicy.producer_request_hold; -BacklogQuota quota = new BacklogQuota(sizeLimit, policy); -admin.namespaces().setBacklogQuota(namespace, quota); - -``` - - - - -```` - -### Get backlog threshold and backlog retention policy - -You can see which size threshold and backlog retention policy has been applied to a namespace. - -````mdx-code-block - - - -Use the [`get-backlog-quotas`](reference-pulsar-admin.md#pulsar-admin-namespaces-get-backlog-quotas) subcommand and specify a namespace. Here's an example: - -```shell - -$ pulsar-admin namespaces get-backlog-quotas my-tenant/my-ns -{ - "destination_storage": { - "limit" : 2147483648, - "policy" : "producer_request_hold" - } -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/backlogQuotaMap|operation/getBacklogQuotaMap?version=@pulsar:version_number@} - - - - -```java - -Map quotas = - admin.namespaces().getBacklogQuotas(namespace); - -``` - - - - -```` - -### Remove backlog quotas - -````mdx-code-block - - - -Use the [`remove-backlog-quota`](reference-pulsar-admin.md#pulsar-admin-namespaces-remove-backlog-quota) subcommand and specify a namespace, use `t`/`--type` to specify backlog type to remove(default is destination_storage). Here's an example: - -```shell - -$ pulsar-admin namespaces remove-backlog-quota my-tenant/my-ns - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/removeBacklogQuota?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().removeBacklogQuota(namespace); - -``` - - - - -```` - -### Clear backlog - -#### pulsar-admin - -Use the [`clear-backlog`](reference-pulsar-admin.md#pulsar-admin-namespaces-clear-backlog) subcommand. - -##### Example - -```shell - -$ pulsar-admin namespaces clear-backlog my-tenant/my-ns - -``` - -By default, you will be prompted to ensure that you really want to clear the backlog for the namespace. You can override the prompt using the `-f`/`--force` flag. - -## Time to live (TTL) - -By default, Pulsar stores all unacknowledged messages forever. This can lead to heavy disk space usage in cases where a lot of messages are going unacknowledged. If disk space is a concern, you can set a time to live (TTL) that determines how long unacknowledged messages will be retained. - -### Set the TTL for a namespace - -````mdx-code-block - - - -Use the [`set-message-ttl`](reference-pulsar-admin.md#pulsar-admin-namespaces-set-message-ttl) subcommand and specify a namespace and a TTL (in seconds) using the `-ttl`/`--messageTTL` flag. 
- -##### Example - -```shell - -$ pulsar-admin namespaces set-message-ttl my-tenant/my-ns \ - --messageTTL 120 # TTL of 2 minutes - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/setNamespaceMessageTTL?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setNamespaceMessageTTL(namespace, ttlInSeconds); - -``` - - - - -```` - -### Get the TTL configuration for a namespace - -````mdx-code-block - - - -Use the [`get-message-ttl`](reference-pulsar-admin.md#pulsar-admin-namespaces-get-message-ttl) subcommand and specify a namespace. - -##### Example - -```shell - -$ pulsar-admin namespaces get-message-ttl my-tenant/my-ns -60 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/getNamespaceMessageTTL?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getNamespaceMessageTTL(namespace) - -``` - - - - -```` - -### Remove the TTL configuration for a namespace - -````mdx-code-block - - - -Use the [`remove-message-ttl`](reference-pulsar-admin.md#pulsar-admin-namespaces-remove-message-ttl) subcommand and specify a namespace. - -##### Example - -```shell - -$ pulsar-admin namespaces remove-message-ttl my-tenant/my-ns - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/removeNamespaceMessageTTL?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().removeNamespaceMessageTTL(namespace) - -``` - - - - -```` - -## Delete messages from namespaces - -If you do not have any retention period and that you never have much of a backlog, the upper limit for retaining messages, which are acknowledged, equals to the Pulsar segment rollover period + entry log rollover period + (garbage collection interval * garbage collection ratios). - -- **Segment rollover period**: basically, the segment rollover period is how often a new segment is created. Once a new segment is created, the old segment will be deleted. By default, this happens either when you have written 50,000 entries (messages) or have waited 240 minutes. You can tune this in your broker. - -- **Entry log rollover period**: multiple ledgers in BookKeeper are interleaved into an [entry log](https://bookkeeper.apache.org/docs/4.11.1/getting-started/concepts/#entry-logs). In order for a ledger that has been deleted, the entry log must all be rolled over. -The entry log rollover period is configurable, but is purely based on the entry log size. For details, see [here](https://bookkeeper.apache.org/docs/4.11.1/reference/config/#entry-log-settings). Once the entry log is rolled over, the entry log can be garbage collected. - -- **Garbage collection interval**: because entry logs have interleaved ledgers, to free up space, the entry logs need to be rewritten. The garbage collection interval is how often BookKeeper performs garbage collection. which is related to minor compaction and major compaction of entry logs. For details, see [here](https://bookkeeper.apache.org/docs/4.11.1/reference/config/#entry-log-compaction-settings). 
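To make the arithmetic concrete, here is a purely illustrative calculation. Apart from the 240-minute segment rollover default quoted above, every number below is an assumption made up for the example, not a default:

```

upper limit ≈ segment rollover period
            + entry log rollover period
            + (garbage collection interval × garbage collection ratios)
            ≈ 240 min + 60 min (assumed, size-based) + 15 min × 1 (assumed)
            ≈ roughly 5.25 hours

```

In other words, even with no retention policy configured, an acknowledged message may remain on disk for several hours before its space is actually reclaimed.
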
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/cookbooks-tiered-storage.md b/site2/website/versioned_docs/version-2.9.0-deprecated/cookbooks-tiered-storage.md deleted file mode 100644 index 3f87de62ca8a14..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/cookbooks-tiered-storage.md +++ /dev/null @@ -1,346 +0,0 @@ ---- -id: cookbooks-tiered-storage -title: Tiered Storage -sidebar_label: "Tiered Storage" -original_id: cookbooks-tiered-storage ---- - -Pulsar's **Tiered Storage** feature allows older backlog data to be offloaded to long term storage, thereby freeing up space in BookKeeper and reducing storage costs. This cookbook walks you through using tiered storage in your Pulsar cluster. - -* Tiered storage uses [Apache jclouds](https://jclouds.apache.org) to support [Amazon S3](https://aws.amazon.com/s3/) and [Google Cloud Storage](https://cloud.google.com/storage/)(GCS for short) -for long term storage. With Jclouds, it is easy to add support for more [cloud storage providers](https://jclouds.apache.org/reference/providers/#blobstore-providers) in the future. - -* Tiered storage uses [Apache Hadoop](http://hadoop.apache.org/) to support filesystem for long term storage. -With Hadoop, it is easy to add support for more filesystem in the future. - -## When should I use Tiered Storage? - -Tiered storage should be used when you have a topic for which you want to keep a very long backlog for a long time. For example, if you have a topic containing user actions which you use to train your recommendation systems, you may want to keep that data for a long time, so that if you change your recommendation algorithm you can rerun it against your full user history. - -## The offloading mechanism - -A topic in Pulsar is backed by a log, known as a managed ledger. This log is composed of an ordered list of segments. Pulsar only every writes to the final segment of the log. All previous segments are sealed. The data within the segment is immutable. This is known as a segment oriented architecture. - -![Tiered storage](/assets/pulsar-tiered-storage.png "Tiered Storage") - -The Tiered Storage offloading mechanism takes advantage of this segment oriented architecture. When offloading is requested, the segments of the log are copied, one-by-one, to tiered storage. All segments of the log, apart from the segment currently being written to can be offloaded. - -On the broker, the administrator must configure the bucket and credentials for the cloud storage service. -The configured bucket must exist before attempting to offload. If it does not exist, the offload operation will fail. - -Pulsar uses multi-part objects to upload the segment data. It is possible that a broker could crash while uploading the data. -We recommend you add a life cycle rule your bucket to expire incomplete multi-part upload after a day or two to avoid -getting charged for incomplete uploads. - -When ledgers are offloaded to long term storage, you can still query data in the offloaded ledgers with Pulsar SQL. - -## Configuring the offload driver - -Offloading is configured in ```broker.conf```. - -At a minimum, the administrator must configure the driver, the bucket and the authenticating credentials. -There is also some other knobs to configure, like the bucket region, the max block size in backed storage, etc. 
- -Currently we support driver of types: - -- `aws-s3`: [Simple Cloud Storage Service](https://aws.amazon.com/s3/) -- `google-cloud-storage`: [Google Cloud Storage](https://cloud.google.com/storage/) -- `filesystem`: [Filesystem Storage](http://hadoop.apache.org/) - -> Driver names are case-insensitive for driver's name. There is a third driver type, `s3`, which is identical to `aws-s3`, -> though it requires that you specify an endpoint url using `s3ManagedLedgerOffloadServiceEndpoint`. This is useful if -> using a S3 compatible data store, other than AWS. - -```conf - -managedLedgerOffloadDriver=aws-s3 - -``` - -### "aws-s3" Driver configuration - -#### Bucket and Region - -Buckets are the basic containers that hold your data. -Everything that you store in Cloud Storage must be contained in a bucket. -You can use buckets to organize your data and control access to your data, -but unlike directories and folders, you cannot nest buckets. - -```conf - -s3ManagedLedgerOffloadBucket=pulsar-topic-offload - -``` - -Bucket Region is the region where bucket located. Bucket Region is not a required -but a recommended configuration. If it is not configured, It will use the default region. - -With AWS S3, the default region is `US East (N. Virginia)`. Page [AWS Regions and Endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html) contains more information. - -```conf - -s3ManagedLedgerOffloadRegion=eu-west-3 - -``` - -#### Authentication with AWS - -To be able to access AWS S3, you need to authenticate with AWS S3. -Pulsar does not provide any direct means of configuring authentication for AWS S3, -but relies on the mechanisms supported by the [DefaultAWSCredentialsProviderChain](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html). - -Once you have created a set of credentials in the AWS IAM console, they can be configured in a number of ways. - -1. Using ec2 instance metadata credentials - -If you are on AWS instance with an instance profile that provides credentials, Pulsar will use these credentials -if no other mechanism is provided - -2. Set the environment variables **AWS_ACCESS_KEY_ID** and **AWS_SECRET_ACCESS_KEY** in ```conf/pulsar_env.sh```. - -```bash - -export AWS_ACCESS_KEY_ID=ABC123456789 -export AWS_SECRET_ACCESS_KEY=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c - -``` - -> \"export\" is important so that the variables are made available in the environment of spawned processes. - - -3. Add the Java system properties *aws.accessKeyId* and *aws.secretKey* to **PULSAR_EXTRA_OPTS** in `conf/pulsar_env.sh`. - -```bash - -PULSAR_EXTRA_OPTS="${PULSAR_EXTRA_OPTS} ${PULSAR_MEM} ${PULSAR_GC} -Daws.accessKeyId=ABC123456789 -Daws.secretKey=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c -Dio.netty.leakDetectionLevel=disabled -Dio.netty.recycler.maxCapacityPerThread=4096" - -``` - -4. Set the access credentials in ```~/.aws/credentials```. - -```conf - -[default] -aws_access_key_id=ABC123456789 -aws_secret_access_key=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c - -``` - -5. Assuming an IAM role - -If you want to assume an IAM role, this can be done via specifying the following: - -```conf - -s3ManagedLedgerOffloadRole= -s3ManagedLedgerOffloadRoleSessionName=pulsar-s3-offload - -``` - -This will use the `DefaultAWSCredentialsProviderChain` for assuming this role. - -> The broker must be rebooted for credentials specified in pulsar_env to take effect. 
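Putting the `aws-s3` settings above together, a minimal `broker.conf` sketch might look like the following; the bucket, region, and role values are placeholders to be replaced with your own.

```conf

managedLedgerOffloadDriver=aws-s3
s3ManagedLedgerOffloadBucket=pulsar-topic-offload
s3ManagedLedgerOffloadRegion=eu-west-3
# Only needed when assuming an IAM role (placeholder values):
s3ManagedLedgerOffloadRole=arn:aws:iam::123456789012:role/pulsar-offload
s3ManagedLedgerOffloadRoleSessionName=pulsar-s3-offload

```
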
-
-#### Configuring the size of block read/write
-
-Pulsar also provides some knobs to configure the size of requests sent to AWS S3.
-
-- ```s3ManagedLedgerOffloadMaxBlockSizeInBytes``` configures the maximum size of
-  a "part" sent during a multipart upload. This cannot be smaller than 5MB. Default is 64MB.
-- ```s3ManagedLedgerOffloadReadBufferSizeInBytes``` configures the block size for
-  each individual read when reading back data from AWS S3. Default is 1MB.
-
-In both cases, these should not be touched unless you know what you are doing.
-
-### "google-cloud-storage" Driver configuration
-
-Buckets are the basic containers that hold your data. Everything that you store in
-Cloud Storage must be contained in a bucket. You can use buckets to organize your data and
-control access to your data, but unlike directories and folders, you cannot nest buckets.
-
-```conf
-
-gcsManagedLedgerOffloadBucket=pulsar-topic-offload
-
-```
-
-The bucket region is the region where the bucket is located. It is not a required but
-a recommended configuration. If it is not configured, the default region is used.
-
-With GCS, buckets are created in the `us` multi-regional location by default; the page
-[Bucket Locations](https://cloud.google.com/storage/docs/bucket-locations) contains more information.
-
-```conf
-
-gcsManagedLedgerOffloadRegion=europe-west3
-
-```
-
-#### Authentication with GCS
-
-The administrator needs to configure `gcsManagedLedgerOffloadServiceAccountKeyFile` in `broker.conf`
-for the broker to be able to access the GCS service. `gcsManagedLedgerOffloadServiceAccountKeyFile` is
-a JSON file, containing the GCS credentials of a service account.
-The [Service Accounts section of this page](https://support.google.com/googleapi/answer/6158849) contains
-more information about how to create this key file for authentication. More information about Google Cloud IAM
-is available [here](https://cloud.google.com/storage/docs/access-control/iam).
-
-To generate service account credentials or view the public credentials that you've already generated, follow these steps:
-
-1. Open the [Service accounts page](https://console.developers.google.com/iam-admin/serviceaccounts).
-2. Select a project or create a new one.
-3. Click **Create service account**.
-4. In the **Create service account** window, type a name for the service account, and select **Furnish a new private key**. If you want to [grant G Suite domain-wide authority](https://developers.google.com/identity/protocols/OAuth2ServiceAccount#delegatingauthority) to the service account, also select **Enable G Suite Domain-wide Delegation**.
-5. Click **Create**.
-
-> Note: make sure that the service account you create has permission to operate on GCS; you need to assign the **Storage Admin** permission to your service account, as described [here](https://cloud.google.com/storage/docs/access-control/iam).
-
-```conf
-
-gcsManagedLedgerOffloadServiceAccountKeyFile="/Users/hello/Downloads/project-804d5e6a6f33.json"
-
-```
-
-#### Configuring the size of block read/write
-
-Pulsar also provides some knobs to configure the size of requests sent to GCS.
-
-- ```gcsManagedLedgerOffloadMaxBlockSizeInBytes``` configures the maximum size of a "part" sent
-  during a multipart upload. This cannot be smaller than 5MB. Default is 64MB.
-- ```gcsManagedLedgerOffloadReadBufferSizeInBytes``` configures the block size for each individual
-  read when reading back data from GCS. Default is 1MB.
-
-In both cases, these should not be touched unless you know what you are doing.
-
-### "filesystem" Driver configuration
-
-#### Configure connection address
-
-You can configure the connection address in the `broker.conf` file.
-
-```conf
-
-fileSystemURI="hdfs://127.0.0.1:9000"
-
-```
-
-#### Configure Hadoop profile path
-
-The configuration file is stored in the Hadoop profile path. It contains various settings, such as base path, authentication, and so on.
-
-```conf
-
-fileSystemProfilePath="../conf/filesystem_offload_core_site.xml"
-
-```
-
-The model for storing topic data uses `org.apache.hadoop.io.MapFile`. You can use all of the configurations in `org.apache.hadoop.io.MapFile` for Hadoop.
-
-**Example**
-
-```conf
-
-    <property>
-        <name>fs.defaultFS</name>
-        <value></value>
-    </property>
-
-    <property>
-        <name>hadoop.tmp.dir</name>
-        <value>pulsar</value>
-    </property>
-
-    <property>
-        <name>io.file.buffer.size</name>
-        <value>4096</value>
-    </property>
-
-    <property>
-        <name>io.seqfile.compress.blocksize</name>
-        <value>1000000</value>
-    </property>
-
-    <property>
-        <name>io.seqfile.compression.type</name>
-        <value>BLOCK</value>
-    </property>
-
-    <property>
-        <name>io.map.index.interval</name>
-        <value>128</value>
-    </property>
-
-```
-
-For more information about the configurations in `org.apache.hadoop.io.MapFile`, see [Filesystem Storage](http://hadoop.apache.org/).
-
-## Configuring offload to run automatically
-
-Namespace policies can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that the topic has stored on the Pulsar cluster. Once the topic reaches the threshold, an offload operation is triggered. Setting a negative value for the threshold disables automatic offloading. Setting the threshold to 0 causes the broker to offload data as soon as it possibly can.
-
-```bash
-
-$ bin/pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace
-
-```
-
-> Automatic offload runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, offload will not run until the current segment is full.
-
-## Configuring read priority for offloaded messages
-
-By default, once messages are offloaded to long term storage, brokers read them from long term storage, even though the messages still exist in BookKeeper for a period that depends on the administrator's configuration. For
-messages that exist in both BookKeeper and long term storage, if you prefer to read them from BookKeeper, you can use the following commands to change this configuration.
-
-```bash
-
-# default value for -orp is tiered-storage-first
-$ bin/pulsar-admin namespaces set-offload-policies my-tenant/my-namespace -orp bookkeeper-first
-$ bin/pulsar-admin topics set-offload-policies my-tenant/my-namespace/topic1 -orp bookkeeper-first
-
-```
-
-## Triggering offload manually
-
-Offloading can be triggered manually through a REST endpoint on the Pulsar broker. We provide a CLI which will call this REST endpoint for you.
-
-When triggering offload, you must specify the maximum size, in bytes, of backlog which will be retained locally in BookKeeper. The offload mechanism will offload segments from the start of the topic backlog until this condition is met.
-
-```bash
-
-$ bin/pulsar-admin topics offload --size-threshold 10M my-tenant/my-namespace/topic1
-Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1
-
-```
-
-The command that triggers an offload does not wait until the offload operation has completed. To check the status of the offload, use `offload-status`.
-
-```bash
-
-$ bin/pulsar-admin topics offload-status my-tenant/my-namespace/topic1
-Offload is currently running
-
-```
-
-To wait for offload to complete, add the `-w` flag.
-
-```bash
-
-$ bin/pulsar-admin topics offload-status -w my-tenant/my-namespace/topic1
-Offload was a success
-
-```
-
-If there is an error offloading, the error will be propagated to the `offload-status` command.
-
-```bash
-
-$ bin/pulsar-admin topics offload-status persistent://public/default/topic1
-Error in offload
-null
-
-Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/deploy-aws.md b/site2/website/versioned_docs/version-2.9.0-deprecated/deploy-aws.md
deleted file mode 100644
index 93c389b56e2cf1..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/deploy-aws.md
+++ /dev/null
@@ -1,271 +0,0 @@
----
-id: deploy-aws
-title: Deploying a Pulsar cluster on AWS using Terraform and Ansible
-sidebar_label: "Amazon Web Services"
-original_id: deploy-aws
----
-
-> For instructions on deploying a single Pulsar cluster manually rather than using Terraform and Ansible, see [Deploying a Pulsar cluster on bare metal](deploy-bare-metal.md). For instructions on manually deploying a multi-cluster Pulsar instance, see [Deploying a Pulsar instance on bare metal](deploy-bare-metal-multi-cluster.md).
-
-One of the easiest ways to get a Pulsar [cluster](reference-terminology.md#cluster) running on [Amazon Web Services](https://aws.amazon.com/) (AWS) is to use the [Terraform](https://terraform.io) infrastructure provisioning tool and the [Ansible](https://www.ansible.com) server automation tool. Terraform can create the resources necessary for running the Pulsar cluster---[EC2](https://aws.amazon.com/ec2/) instances, networking and security infrastructure, etc.---while Ansible can install and run Pulsar on the provisioned resources.
-
-## Requirements and setup
-
-In order to install a Pulsar cluster on AWS using Terraform and Ansible, you need to prepare the following things:
-
-* An [AWS account](https://aws.amazon.com/account/) and the [`aws`](https://aws.amazon.com/cli/) command-line tool
-* Python and [pip](https://pip.pypa.io/en/stable/)
-* The [`terraform-inventory`](https://github.com/adammck/terraform-inventory) tool, which enables Ansible to use Terraform artifacts
-
-You also need to make sure that you are currently logged into your AWS account via the `aws` tool:
-
-```bash
-
-$ aws configure
-
-```
-
-## Installation
-
-You can install Ansible on Linux or macOS using pip.
-
-```bash
-
-$ pip install ansible
-
-```
-
-You can install Terraform using the instructions [here](https://learn.hashicorp.com/tutorials/terraform/install-cli).
-
-You also need to have the Terraform and Ansible configuration for Pulsar locally on your machine.
You can find them in the [GitHub repository](https://github.com/apache/pulsar) of Pulsar, which you can fetch using Git commands: - -```bash - -$ git clone https://github.com/apache/pulsar -$ cd pulsar/deployment/terraform-ansible/aws - -``` - -## SSH setup - -> If you already have an SSH key and want to use it, you can skip the step of generating an SSH key and update `private_key_file` setting -> in `ansible.cfg` file and `public_key_path` setting in `terraform.tfvars` file. -> -> For example, if you already have a private SSH key in `~/.ssh/pulsar_aws` and a public key in `~/.ssh/pulsar_aws.pub`, -> follow the steps below: -> -> 1. update `ansible.cfg` with following values: -> - -> ```shell -> -> private_key_file=~/.ssh/pulsar_aws -> -> -> ``` - -> -> 2. update `terraform.tfvars` with following values: -> - -> ```shell -> -> public_key_path=~/.ssh/pulsar_aws.pub -> -> -> ``` - - -In order to create the necessary AWS resources using Terraform, you need to create an SSH key. Enter the following commands to create a private SSH key in `~/.ssh/id_rsa` and a public key in `~/.ssh/id_rsa.pub`: - -```bash - -$ ssh-keygen -t rsa - -``` - -Do *not* enter a passphrase (hit **Enter** instead when the prompt comes out). Enter the following command to verify that a key has been created: - -```bash - -$ ls ~/.ssh -id_rsa id_rsa.pub - -``` - -## Create AWS resources using Terraform - -To start building AWS resources with Terraform, you need to install all Terraform dependencies. Enter the following command: - -```bash - -$ terraform init -# This will create a .terraform folder - -``` - -After that, you can apply the default Terraform configuration by entering this command: - -```bash - -$ terraform apply - -``` - -Then you see this prompt below: - -```bash - -Do you want to perform these actions? - Terraform will perform the actions described above. - Only 'yes' will be accepted to approve. - - Enter a value: - -``` - -Type `yes` and hit **Enter**. Applying the configuration could take several minutes. When the configuration applying finishes, you can see `Apply complete!` along with some other information, including the number of resources created. - -### Apply a non-default configuration - -You can apply a non-default Terraform configuration by changing the values in the `terraform.tfvars` file. The following variables are available: - -Variable name | Description | Default -:-------------|:------------|:------- -`public_key_path` | The path of the public key that you have generated. | `~/.ssh/id_rsa.pub` -`region` | The AWS region in which the Pulsar cluster runs | `us-west-2` -`availability_zone` | The AWS availability zone in which the Pulsar cluster runs | `us-west-2a` -`aws_ami` | The [Amazon Machine Image](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html) (AMI) that the cluster uses | `ami-9fa343e7` -`num_zookeeper_nodes` | The number of [ZooKeeper](https://zookeeper.apache.org) nodes in the ZooKeeper cluster | 3 -`num_bookie_nodes` | The number of bookies that runs in the cluster | 3 -`num_broker_nodes` | The number of Pulsar brokers that runs in the cluster | 2 -`num_proxy_nodes` | The number of Pulsar proxies that runs in the cluster | 1 -`base_cidr_block` | The root [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) that network assets uses for the cluster | `10.0.0.0/16` -`instance_types` | The EC2 instance types to be used. 
This variable is a map with four keys: `zookeeper` for the ZooKeeper instances, `bookie` for the BookKeeper bookies, and `broker` and `proxy` for Pulsar brokers and proxies | `t2.small` (ZooKeeper), `i3.xlarge` (BookKeeper) and `c5.2xlarge` (Brokers/Proxies)
-
-### What is installed
-
-When you run the Ansible playbook, the following AWS resources are used:
-
-* 9 total [Elastic Compute Cloud](https://aws.amazon.com/ec2) (EC2) instances running the [ami-9fa343e7](https://access.redhat.com/articles/3135091) Amazon Machine Image (AMI), which runs [Red Hat Enterprise Linux (RHEL) 7.4](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/7.4_release_notes/index). By default, that includes:
-  * 3 small VMs for ZooKeeper ([t2.small](https://www.ec2instances.info/?selected=t2.small) instances)
-  * 3 larger VMs for BookKeeper [bookies](reference-terminology.md#bookie) ([i3.xlarge](https://www.ec2instances.info/?selected=i3.xlarge) instances)
-  * 2 larger VMs for Pulsar [brokers](reference-terminology.md#broker) ([c5.2xlarge](https://www.ec2instances.info/?selected=c5.2xlarge) instances)
-  * 1 larger VM for the Pulsar [proxy](reference-terminology.md#proxy) ([c5.2xlarge](https://www.ec2instances.info/?selected=c5.2xlarge) instance)
-* An EC2 [security group](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html)
-* A [virtual private cloud](https://aws.amazon.com/vpc/) (VPC) for security
-* An [API Gateway](https://aws.amazon.com/api-gateway/) for connections from the outside world
-* A [route table](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Route_Tables.html) for the Pulsar cluster's VPC
-* A [subnet](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html) for the VPC
-
-All EC2 instances for the cluster run in the [us-west-2](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html) region.
-
-### Fetch your Pulsar connection URL
-
-When you apply the Terraform configuration by entering the command `terraform apply`, Terraform outputs a value for the `pulsar_service_url`. The value should look something like this:
-
-```
-
-pulsar://pulsar-elb-1800761694.us-west-2.elb.amazonaws.com:6650
-
-```
-
-You can fetch that value at any time by entering the command `terraform output pulsar_service_url` or parsing the `terraform.tfstate` file (which is JSON, even though the filename does not reflect that):
-
-```bash
-
-$ cat terraform.tfstate | jq .modules[0].outputs.pulsar_service_url.value
-
-```
-
-### Destroy your cluster
-
-At any point, you can destroy all AWS resources associated with your cluster using Terraform's `destroy` command:
-
-```bash
-
-$ terraform destroy
-
-```
-
-## Setup Disks
-
-Before you run the Pulsar playbook, you need to mount the disks to the correct directories on those bookie nodes. Since different types of machines have different disk layouts, you need to update the task defined in the `setup-disk.yaml` file after changing the `instance_types` in your Terraform config.
-
-To set up disks on bookie nodes, enter this command:
-
-```bash
-
-$ ansible-playbook \
-  --user='ec2-user' \
-  --inventory=`which terraform-inventory` \
-  setup-disk.yaml
-
-```
-
-After that, the disks are mounted under `/mnt/journal` as the journal disk and `/mnt/storage` as the ledger disk.
-Remember to enter this command only once. If you attempt to enter this command again after you have run the Pulsar playbook, your disks might potentially be erased again, causing the bookies to fail to start up.
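-
-Before moving on, it may be worth a quick sanity check on a bookie node that both mount points exist and are backed by separate devices; this is only a suggestion, not part of the playbooks, and device names vary by instance type:
-
-```bash
-
-# Both paths should appear, each mounted from its own block device.
-$ df -h /mnt/journal /mnt/storage
-
-```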
- -## Run the Pulsar playbook - -Once you have created the necessary AWS resources using Terraform, you can install and run Pulsar on the Terraform-created EC2 instances using Ansible. - -(Optional) If you want to use any [built-in IO connectors](io-connectors.md) , edit the `Download Pulsar IO packages` task in the `deploy-pulsar.yaml` file and uncomment the connectors you want to use. - -To run the playbook, enter this command: - -```bash - -$ ansible-playbook \ - --user='ec2-user' \ - --inventory=`which terraform-inventory` \ - ../deploy-pulsar.yaml - -``` - -If you have created a private SSH key at a location different from `~/.ssh/id_rsa`, you can specify the different location using the `--private-key` flag in the following command: - -```bash - -$ ansible-playbook \ - --user='ec2-user' \ - --inventory=`which terraform-inventory` \ - --private-key="~/.ssh/some-non-default-key" \ - ../deploy-pulsar.yaml - -``` - -## Access the cluster - -You can now access your running Pulsar using the unique Pulsar connection URL for your cluster, which you can obtain following the instructions [above](#fetching-your-pulsar-connection-url). - -For a quick demonstration of accessing the cluster, we can use the Python client for Pulsar and the Python shell. First, install the Pulsar Python module using pip: - -```bash - -$ pip install pulsar-client - -``` - -Now, open up the Python shell using the `python` command: - -```bash - -$ python - -``` - -Once you are in the shell, enter the following command: - -```python - ->>> import pulsar ->>> client = pulsar.Client('pulsar://pulsar-elb-1800761694.us-west-2.elb.amazonaws.com:6650') -# Make sure to use your connection URL ->>> producer = client.create_producer('persistent://public/default/test-topic') ->>> producer.send('Hello world') ->>> client.close() - -``` - -If all of these commands are successful, Pulsar clients can now use your cluster! diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/deploy-bare-metal-multi-cluster.md b/site2/website/versioned_docs/version-2.9.0-deprecated/deploy-bare-metal-multi-cluster.md deleted file mode 100644 index f25b11041c5e34..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/deploy-bare-metal-multi-cluster.md +++ /dev/null @@ -1,452 +0,0 @@ ---- -id: deploy-bare-metal-multi-cluster -title: Deploying a multi-cluster on bare metal -sidebar_label: "Bare metal multi-cluster" -original_id: deploy-bare-metal-multi-cluster ---- - -:::tip - -1. You can use single-cluster Pulsar installation in most use cases, such as experimenting with Pulsar or using Pulsar in a startup or in a single team. If you need to run a multi-cluster Pulsar instance, see the [guide](deploy-bare-metal-multi-cluster.md). -2. If you want to use all built-in [Pulsar IO](io-overview.md) connectors, you need to download `apache-pulsar-io-connectors`package and install `apache-pulsar-io-connectors` under `connectors` directory in the pulsar directory on every broker node or on every function-worker node if you have run a separate cluster of function workers for [Pulsar Functions](functions-overview.md). -3. If you want to use [Tiered Storage](concepts-tiered-storage.md) feature in your Pulsar deployment, you need to download `apache-pulsar-offloaders`package and install `apache-pulsar-offloaders` under `offloaders` directory in the Pulsar directory on every broker node. For more details of how to configure this feature, you can refer to the [Tiered storage cookbook](cookbooks-tiered-storage.md). 
- -::: - -A Pulsar instance consists of multiple Pulsar clusters working in unison. You can distribute clusters across data centers or geographical regions and replicate the clusters amongst themselves using [geo-replication](administration-geo.md).Deploying a multi-cluster Pulsar instance consists of the following steps: - -1. Deploying two separate ZooKeeper quorums: a local quorum for each cluster in the instance and a configuration store quorum for instance-wide tasks -2. Initializing cluster metadata for each cluster -3. Deploying a BookKeeper cluster of bookies in each Pulsar cluster -4. Deploying brokers in each Pulsar cluster - - -> #### Run Pulsar locally or on Kubernetes? -> This guide shows you how to deploy Pulsar in production in a non-Kubernetes environment. If you want to run a standalone Pulsar cluster on a single machine for development purposes, see the [Setting up a local cluster](getting-started-standalone.md) guide. If you want to run Pulsar on [Kubernetes](https://kubernetes.io), see the [Pulsar on Kubernetes](deploy-kubernetes.md) guide, which includes sections on running Pulsar on Kubernetes, on Google Kubernetes Engine and on Amazon Web Services. - -## System requirement - -Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. You need to install 64-bit JRE/JDK 8 or later versions, JRE/JDK 11 is recommended. - -:::note - -Broker is only supported on 64-bit JVM. - -::: - -## Install Pulsar - -To get started running Pulsar, download a binary tarball release in one of the following ways: - -* by clicking the link below and downloading the release from an Apache mirror: - - * Pulsar @pulsar:version@ binary release - -* from the Pulsar [downloads page](pulsar:download_page_url) -* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) -* using [wget](https://www.gnu.org/software/wget): - - ```shell - - $ wget 'https://www.apache.org/dyn/mirrors/mirrors.cgi?action=download&filename=pulsar/pulsar-@pulsar:version@/apache-pulsar-@pulsar:version@-bin.tar.gz' -O apache-pulsar-@pulsar:version@-bin.tar.gz - - ``` - -Once you download the tarball, untar it and `cd` into the resulting directory: - -```bash - -$ tar xvfz apache-pulsar-@pulsar:version@-bin.tar.gz -$ cd apache-pulsar-@pulsar:version@ - -``` - -The Pulsar binary package initially contains the following directories: - -Directory | Contains -:---------|:-------- -`bin` | [Command-line tools](reference-cli-tools.md) of Pulsar, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/) -`conf` | Configuration files for Pulsar, including for [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more -`examples` | A Java JAR file containing example [Pulsar Functions](functions-overview.md) -`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files that Pulsar uses -`licenses` | License files, in `.txt` form, for various components of the Pulsar codebase - -The following directories are created once you begin running Pulsar: - -Directory | Contains -:---------|:-------- -`data` | The data storage directory that ZooKeeper and BookKeeper use -`instances` | Artifacts created for [Pulsar Functions](functions-overview.md) -`logs` | Logs that the installation creates - - -## Deploy ZooKeeper - -Each Pulsar instance relies on two separate ZooKeeper quorums. 
- -* Local ZooKeeper operates at the cluster level and provides cluster-specific configuration management and coordination. Each Pulsar cluster needs a dedicated ZooKeeper cluster. -* Configuration Store operates at the instance level and provides configuration management for the entire system (and thus across clusters). An independent cluster of machines or the same machines that local ZooKeeper uses can provide the configuration store quorum. - -You can use an independent cluster of machines or the same machines used by local ZooKeeper to provide the configuration store quorum. - - -### Deploy local ZooKeeper - -ZooKeeper manages a variety of essential coordination-related and configuration-related tasks for Pulsar. - -You need to stand up one local ZooKeeper cluster per Pulsar cluster for deploying a Pulsar instance. - -To begin, add all ZooKeeper servers to the quorum configuration specified in the [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file. Add a `server.N` line for each node in the cluster to the configuration, where `N` is the number of the ZooKeeper node. The following is an example for a three-node cluster: - -```properties - -server.1=zk1.us-west.example.com:2888:3888 -server.2=zk2.us-west.example.com:2888:3888 -server.3=zk3.us-west.example.com:2888:3888 - -``` - -On each host, you need to specify the ID of the node in the `myid` file of each node, which is in `data/zookeeper` folder of each server by default (you can change the file location via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter). - -:::tip - -See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed information on `myid` and more. - -::: - -On a ZooKeeper server at `zk1.us-west.example.com`, for example, you could set the `myid` value like this: - -```shell - -$ mkdir -p data/zookeeper -$ echo 1 > data/zookeeper/myid - -``` - -On `zk2.us-west.example.com` the command looks like `echo 2 > data/zookeeper/myid` and so on. - -Once you add each server to the `zookeeper.conf` configuration and each server has the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool: - -```shell - -$ bin/pulsar-daemon start zookeeper - -``` - -### Deploy the configuration store - -The ZooKeeper cluster configured and started up in the section above is a local ZooKeeper cluster that you can use to manage a single Pulsar cluster. In addition to a local cluster, however, a full Pulsar instance also requires a configuration store for handling some instance-level configuration and coordination tasks. - -If you deploy a single-cluster instance, you do not need a separate cluster for the configuration store. If, however, you deploy a multi-cluster instance, you should stand up a separate ZooKeeper cluster for configuration tasks. - -#### Single-cluster Pulsar instance - -If your Pulsar instance consists of just one cluster, then you can deploy a configuration store on the same machines as the local ZooKeeper quorum but run on different TCP ports. - -To deploy a ZooKeeper configuration store in a single-cluster instance, add the same ZooKeeper servers that the local quorum. 
You need to use the configuration file in [`conf/global_zookeeper.conf`](reference-configuration.md#configuration-store) using the same method for [local ZooKeeper](#local-zookeeper), but make sure to use a different port (2181 is the default for ZooKeeper). The following is an example that uses port 2184 for a three-node ZooKeeper cluster: - -```properties - -clientPort=2184 -server.1=zk1.us-west.example.com:2185:2186 -server.2=zk2.us-west.example.com:2185:2186 -server.3=zk3.us-west.example.com:2185:2186 - -``` - -As before, create the `myid` files for each server on `data/global-zookeeper/myid`. - -#### Multi-cluster Pulsar instance - -When you deploy a global Pulsar instance, with clusters distributed across different geographical regions, the configuration store serves as a highly available and strongly consistent metadata store that can tolerate failures and partitions spanning whole regions. - -The key here is to make sure the ZK quorum members are spread across at least 3 regions, and other regions run as observers. - -Again, given the very low expected load on the configuration store servers, you can share the same hosts used for the local ZooKeeper quorum. - -For example, assume a Pulsar instance with the following clusters `us-west`, `us-east`, `us-central`, `eu-central`, `ap-south`. Also assume, each cluster has its own local ZK servers named such as the following: - -``` - -zk[1-3].${CLUSTER}.example.com - -``` - -In this scenario if you want to pick the quorum participants from few clusters and let all the others be ZK observers. For example, to form a 7 servers quorum, you can pick 3 servers from `us-west`, 2 from `us-central` and 2 from `us-east`. - -This method guarantees that writes to configuration store is possible even if one of these regions is unreachable. - -The ZK configuration in all the servers looks like: - -```properties - -clientPort=2184 -server.1=zk1.us-west.example.com:2185:2186 -server.2=zk2.us-west.example.com:2185:2186 -server.3=zk3.us-west.example.com:2185:2186 -server.4=zk1.us-central.example.com:2185:2186 -server.5=zk2.us-central.example.com:2185:2186 -server.6=zk3.us-central.example.com:2185:2186:observer -server.7=zk1.us-east.example.com:2185:2186 -server.8=zk2.us-east.example.com:2185:2186 -server.9=zk3.us-east.example.com:2185:2186:observer -server.10=zk1.eu-central.example.com:2185:2186:observer -server.11=zk2.eu-central.example.com:2185:2186:observer -server.12=zk3.eu-central.example.com:2185:2186:observer -server.13=zk1.ap-south.example.com:2185:2186:observer -server.14=zk2.ap-south.example.com:2185:2186:observer -server.15=zk3.ap-south.example.com:2185:2186:observer - -``` - -Additionally, ZK observers need to have the following parameters: - -```properties - -peerType=observer - -``` - -##### Start the service - -Once your configuration store configuration is in place, you can start up the service using [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) - -```shell - -$ bin/pulsar-daemon start configuration-store - -``` - -## Cluster metadata initialization - -Once you set up the cluster-specific ZooKeeper and configuration store quorums for your instance, you need to write some metadata to ZooKeeper for each cluster in your instance. **you only need to write these metadata once**. - -You can initialize this metadata using the [`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata) command of the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool. 
The following is an example: - -```shell - -$ bin/pulsar initialize-cluster-metadata \ - --cluster us-west \ - --zookeeper zk1.us-west.example.com:2181 \ - --configuration-store zk1.us-west.example.com:2184 \ - --web-service-url http://pulsar.us-west.example.com:8080/ \ - --web-service-url-tls https://pulsar.us-west.example.com:8443/ \ - --broker-service-url pulsar://pulsar.us-west.example.com:6650/ \ - --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651/ - -``` - -As you can see from the example above, you need to specify the following: - -* The name of the cluster -* The local ZooKeeper connection string for the cluster -* The configuration store connection string for the entire instance -* The web service URL for the cluster -* A broker service URL enabling interaction with the [brokers](reference-terminology.md#broker) in the cluster - -If you use [TLS](security-tls-transport.md), you also need to specify a TLS web service URL for the cluster as well as a TLS broker service URL for the brokers in the cluster. - -Make sure to run `initialize-cluster-metadata` for each cluster in your instance. - -## Deploy BookKeeper - -BookKeeper provides [persistent message storage](concepts-architecture-overview.md#persistent-storage) for Pulsar. - -Each Pulsar broker needs its own cluster of bookies. The BookKeeper cluster shares a local ZooKeeper quorum with the Pulsar cluster. - -### Configure bookies - -You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. The most important aspect of configuring each bookie is ensuring that the [`zkServers`](reference-configuration.md#bookkeeper-zkServers) parameter is set to the connection string for the local ZooKeeper of Pulsar cluster. - -### Start bookies - -You can start a bookie in two ways: in the foreground or as a background daemon. - -To start a bookie in the background, use the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool: - -```bash - -$ bin/pulsar-daemon start bookie - -``` - -You can verify that the bookie works properly using the `bookiesanity` command for the [BookKeeper shell](reference-cli-tools.md#bookkeeper-shell): - -```bash - -$ bin/bookkeeper shell bookiesanity - -``` - -This command creates a new ledger on the local bookie, writes a few entries, reads them back and finally deletes the ledger. - -After you have started all bookies, you can use the `simpletest` command for [BookKeeper shell](reference-cli-tools.md#shell) on any bookie node, to verify that all bookies in the cluster are running. - -```bash - -$ bin/bookkeeper shell simpletest --ensemble --writeQuorum --ackQuorum --numEntries - -``` - -Bookie hosts are responsible for storing message data on disk. In order for bookies to provide optimal performance, having a suitable hardware configuration is essential for the bookies. The following are key dimensions for bookie hardware capacity. - -* Disk I/O capacity read/write -* Storage capacity - -Message entries written to bookies are always synced to disk before returning an acknowledgement to the Pulsar broker. To ensure low write latency, BookKeeper is -designed to use multiple devices: - -* A **journal** to ensure durability. For sequential writes, having fast [fsync](https://linux.die.net/man/2/fsync) operations on bookie hosts is critical. 
Typically, small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) should suffice, or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache. Both solutions can reach fsync latency of ~0.4 ms. -* A **ledger storage device** is where data is stored until all consumers acknowledge the message. Writes happen in the background, so write I/O is not a big concern. Reads happen sequentially most of the time and the backlog is drained only in case of consumer drain. To store large amounts of data, a typical configuration involves multiple HDDs with a RAID controller. - - - -## Deploy brokers - -Once you set up ZooKeeper, initialize cluster metadata, and spin up BookKeeper bookies, you can deploy brokers. - -### Broker configuration - -You can configure brokers using the [`conf/broker.conf`](reference-configuration.md#broker) configuration file. - -The most important element of broker configuration is ensuring that each broker is aware of its local ZooKeeper quorum as well as the configuration store quorum. Make sure that you set the [`zookeeperServers`](reference-configuration.md#broker-zookeeperServers) parameter to reflect the local quorum and the [`configurationStoreServers`](reference-configuration.md#broker-configurationStoreServers) parameter to reflect the configuration store quorum (although you need to specify only those ZooKeeper servers located in the same cluster). - -You also need to specify the name of the [cluster](reference-terminology.md#cluster) to which the broker belongs using the [`clusterName`](reference-configuration.md#broker-clusterName) parameter. In addition, you need to match the broker and web service ports provided when you initialize the metadata (especially when you use a different port from default) of the cluster. - -The following is an example configuration: - -```properties - -# Local ZooKeeper servers -zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181 - -# Configuration store quorum connection string. -configurationStoreServers=zk1.us-west.example.com:2184,zk2.us-west.example.com:2184,zk3.us-west.example.com:2184 - -clusterName=us-west - -# Broker data port -brokerServicePort=6650 - -# Broker data port for TLS -brokerServicePortTls=6651 - -# Port to use to server HTTP request -webServicePort=8080 - -# Port to use to server HTTPS request -webServicePortTls=8443 - -``` - -### Broker hardware - -Pulsar brokers do not require any special hardware since they do not use the local disk. You had better choose fast CPUs and 10Gbps [NIC](https://en.wikipedia.org/wiki/Network_interface_controller) so that the software can take full advantage of that. - -### Start the broker service - -You can start a broker in the background by using [nohup](https://en.wikipedia.org/wiki/Nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool: - -```shell - -$ bin/pulsar-daemon start broker - -``` - -You can also start brokers in the foreground by using [`pulsar broker`](reference-cli-tools.md#broker): - -```shell - -$ bin/pulsar broker - -``` - -## Service discovery - -[Clients](getting-started-clients.md) connecting to Pulsar brokers need to communicate with an entire Pulsar instance using a single URL. - -You can use your own service discovery system. 
If you use your own system, you only need to satisfy just one requirement: when a client performs an HTTP request to an [endpoint](reference-configuration.md) for a Pulsar cluster, such as `http://pulsar.us-west.example.com:8080`, the client needs to be redirected to some active brokers in the desired cluster, whether via DNS, an HTTP or IP redirect, or some other means. - -> **Service discovery already provided by many scheduling systems** -> Many large-scale deployment systems, such as [Kubernetes](deploy-kubernetes.md), have service discovery systems built in. If you run Pulsar on such a system, you may not need to provide your own service discovery mechanism. - -## Admin client and verification - -At this point your Pulsar instance should be ready to use. You can now configure client machines that can serve as [administrative clients](admin-api-overview.md) for each cluster. You can use the [`conf/client.conf`](reference-configuration.md#client) configuration file to configure admin clients. - -The most important thing is that you point the [`serviceUrl`](reference-configuration.md#client-serviceUrl) parameter to the correct service URL for the cluster: - -```properties - -serviceUrl=http://pulsar.us-west.example.com:8080/ - -``` - -## Provision new tenants - -Pulsar is built as a fundamentally multi-tenant system. - - -If a new tenant wants to use the system, you need to create a new one. You can create a new tenant by using the [`pulsar-admin`](reference-pulsar-admin.md#tenants) CLI tool: - -```shell - -$ bin/pulsar-admin tenants create test-tenant \ - --allowed-clusters us-west \ - --admin-roles test-admin-role - -``` - -In this command, users who identify with `test-admin-role` role can administer the configuration for the `test-tenant` tenant. The `test-tenant` tenant can only use the `us-west` cluster. From now on, this tenant can manage its resources. - -Once you create a tenant, you need to create [namespaces](reference-terminology.md#namespace) for topics within that tenant. - - -The first step is to create a namespace. A namespace is an administrative unit that can contain many topics. A common practice is to create a namespace for each different use case from a single tenant. - -```shell - -$ bin/pulsar-admin namespaces create test-tenant/ns1 - -``` - -##### Test producer and consumer - - -Everything is now ready to send and receive messages. The quickest way to test the system is through the [`pulsar-perf`](reference-cli-tools.md#pulsar-perf) client tool. - - -You can use a topic in the namespace that you have just created. Topics are automatically created the first time when a producer or a consumer tries to use them. 
- -The topic name in this case could be: - -```http - -persistent://test-tenant/ns1/my-topic - -``` - -Start a consumer that creates a subscription on the topic and waits for messages: - -```shell - -$ bin/pulsar-perf consume persistent://test-tenant/ns1/my-topic - -``` - -Start a producer that publishes messages at a fixed rate and reports stats every 10 seconds: - -```shell - -$ bin/pulsar-perf produce persistent://test-tenant/ns1/my-topic - -``` - -To report the topic stats: - -```shell - -$ bin/pulsar-admin topics stats persistent://test-tenant/ns1/my-topic - -``` - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/deploy-bare-metal.md b/site2/website/versioned_docs/version-2.9.0-deprecated/deploy-bare-metal.md deleted file mode 100644 index 9bb4235cece5da..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/deploy-bare-metal.md +++ /dev/null @@ -1,559 +0,0 @@ ---- -id: deploy-bare-metal -title: Deploy a cluster on bare metal -sidebar_label: "Bare metal" -original_id: deploy-bare-metal ---- - -:::tip - -1. You can use single-cluster Pulsar installation in most use cases, such as experimenting with Pulsar or using Pulsar in a startup or in a single team. If you need to run a multi-cluster Pulsar instance, see the [guide](deploy-bare-metal-multi-cluster.md). -2. If you want to use all built-in [Pulsar IO](io-overview.md) connectors, you need to download `apache-pulsar-io-connectors`package and install `apache-pulsar-io-connectors` under `connectors` directory in the pulsar directory on every broker node or on every function-worker node if you have run a separate cluster of function workers for [Pulsar Functions](functions-overview.md). -3. If you want to use [Tiered Storage](concepts-tiered-storage.md) feature in your Pulsar deployment, you need to download `apache-pulsar-offloaders`package and install `apache-pulsar-offloaders` under `offloaders` directory in the Pulsar directory on every broker node. For more details of how to configure this feature, you can refer to the [Tiered storage cookbook](cookbooks-tiered-storage.md). - -::: - -Deploying a Pulsar cluster consists of the following steps: - -1. Deploy a [ZooKeeper](#deploy-a-zookeeper-cluster) cluster (optional) -2. Initialize [cluster metadata](#initialize-cluster-metadata) -3. Deploy a [BookKeeper](#deploy-a-bookkeeper-cluster) cluster -4. Deploy one or more Pulsar [brokers](#deploy-pulsar-brokers) - -## Preparation - -### Requirements - -Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install 64-bit JRE/JDK 8 or later versions, JRE/JDK 11 is recommended. - -:::tip - -You can reuse existing Zookeeper clusters. - -::: - -To run Pulsar on bare metal, the following configuration is recommended: - -* At least 6 Linux machines or VMs - * 3 for running [ZooKeeper](https://zookeeper.apache.org) - * 3 for running a Pulsar broker, and a [BookKeeper](https://bookkeeper.apache.org) bookie -* A single [DNS](https://en.wikipedia.org/wiki/Domain_Name_System) name covering all of the Pulsar broker hosts - -:::note - -* Broker is only supported on 64-bit JVM. -* If you do not have enough machines, or you want to test Pulsar in cluster mode (and expand the cluster later), You can fully deploy Pulsar on a node on which ZooKeeper, bookie and broker run. -* If you do not have a DNS server, you can use the multi-host format in the service URL instead. 
- -::: - -Each machine in your cluster needs to have [Java 8](https://adoptium.net/?variant=openjdk8) or [Java 11](https://adoptium.net/?variant=openjdk11) installed. - -The following is a diagram showing the basic setup: - -![alt-text](/assets/pulsar-basic-setup.png) - -In this diagram, connecting clients need to communicate with the Pulsar cluster using a single URL. In this case, `pulsar-cluster.acme.com` abstracts over all of the message-handling brokers. Pulsar message brokers run on machines alongside BookKeeper bookies; brokers and bookies, in turn, rely on ZooKeeper. - -### Hardware considerations - -If you deploy a Pulsar cluster, keep in mind the following basic better choices when you do the capacity planning. - -#### ZooKeeper - -For machines running ZooKeeper, it is recommended to use less powerful machines or VMs. Pulsar uses ZooKeeper only for periodic coordination-related and configuration-related tasks, not for basic operations. If you run Pulsar on [Amazon Web Services](https://aws.amazon.com/) (AWS), for example, a [t2.small](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html) instance might likely suffice. - -#### Bookies and Brokers - -For machines running a bookie and a Pulsar broker, more powerful machines are required. For an AWS deployment, for example, [i3.4xlarge](https://aws.amazon.com/blogs/aws/now-available-i3-instances-for-demanding-io-intensive-applications/) instances may be appropriate. On those machines you can use the following: - -* Fast CPUs and 10Gbps [NIC](https://en.wikipedia.org/wiki/Network_interface_controller) (for Pulsar brokers) -* Small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache (for BookKeeper bookies) - -To start a Pulsar instance, below are the minimum and the recommended hardware settings. - -1. The minimum hardware settings (250 Pulsar topics) - - Broker - - CPU: 0.2 - - Memory: 256MB - - Bookie - - CPU: 0.2 - - Memory: 256MB - - Storage: - - Journal: 8GB, PD-SSD - - Ledger: 16GB, PD-STANDARD - -2. The recommended hardware settings (1000 Pulsar topics) - - - Broker - - CPU: 8 - - Memory: 8GB - - Bookie - - CPU: 4 - - Memory: 8GB - - Storage: - - Journal: 256GB, PD-SSD - - Ledger: 2TB, PD-STANDARD - -## Install the Pulsar binary package - -> You need to install the Pulsar binary package on each machine in the cluster, including machines running ZooKeeper and BookKeeper. 
- -To get started deploying a Pulsar cluster on bare metal, you need to download a binary tarball release in one of the following ways: - -* By clicking on the link below directly, which automatically triggers a download: - * Pulsar @pulsar:version@ binary release -* From the Pulsar [downloads page](pulsar:download_page_url) -* From the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) on GitHub -* Using [wget](https://www.gnu.org/software/wget): - -```bash - -$ wget pulsar:binary_release_url - -``` - -Once you download the tarball, untar it and `cd` into the resulting directory: - -```bash - -$ tar xvzf apache-pulsar-@pulsar:version@-bin.tar.gz -$ cd apache-pulsar-@pulsar:version@ - -``` - -The extracted directory contains the following subdirectories: - -Directory | Contains -:---------|:-------- -`bin` |[command-line tools](reference-cli-tools.md) of Pulsar, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/) -`conf` | Configuration files for Pulsar, including for [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more -`data` | The data storage directory that ZooKeeper and BookKeeper use -`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files that Pulsar uses -`logs` | Logs that the installation creates - -## [Install Builtin Connectors (optional)]( https://pulsar.apache.org/docs/en/next/standalone/#install-builtin-connectors-optional) - -> Since Pulsar release `2.1.0-incubating`, Pulsar provides a separate binary distribution, containing all the `builtin` connectors. -> To enable the `builtin` connectors (optional), you can follow the instructions below. - -To use `builtin` connectors, you need to download the connectors tarball release on every broker node in one of the following ways : - -* by clicking the link below and downloading the release from an Apache mirror: - - * Pulsar IO Connectors @pulsar:version@ release - -* from the Pulsar [downloads page](pulsar:download_page_url) -* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) -* using [wget](https://www.gnu.org/software/wget): - - ```shell - - $ wget pulsar:connector_release_url/{connector}-@pulsar:version@.nar - - ``` - -Once you download the .nar file, copy the file to directory `connectors` in the pulsar directory. -For example, if you download the connector file `pulsar-io-aerospike-@pulsar:version@.nar`: - -```bash - -$ mkdir connectors -$ mv pulsar-io-aerospike-@pulsar:version@.nar connectors - -$ ls connectors -pulsar-io-aerospike-@pulsar:version@.nar -... - -``` - -## [Install Tiered Storage Offloaders (optional)](https://pulsar.apache.org/docs/en/next/standalone/#install-tiered-storage-offloaders-optional) - -> Since Pulsar release `2.2.0`, Pulsar releases a separate binary distribution, containing the tiered storage offloaders. -> If you want to enable tiered storage feature, you can follow the instructions as below; otherwise you can -> skip this section for now. 
- -To use tiered storage offloaders, you need to download the offloaders tarball release on every broker node in one of the following ways: - -* by clicking the link below and downloading the release from an Apache mirror: - - * Pulsar Tiered Storage Offloaders @pulsar:version@ release - -* from the Pulsar [downloads page](pulsar:download_page_url) -* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) -* using [wget](https://www.gnu.org/software/wget): - - ```shell - - $ wget pulsar:offloader_release_url - - ``` - -Once you download the tarball, in the Pulsar directory, untar the offloaders package and copy the offloaders as `offloaders` in the Pulsar directory: - -```bash - -$ tar xvfz apache-pulsar-offloaders-@pulsar:version@-bin.tar.gz - -// you can find a directory named `apache-pulsar-offloaders-@pulsar:version@` in the pulsar directory -// then copy the offloaders - -$ mv apache-pulsar-offloaders-@pulsar:version@/offloaders offloaders - -$ ls offloaders -tiered-storage-jcloud-@pulsar:version@.nar - -``` - -For more details of how to configure tiered storage feature, you can refer to the [Tiered storage cookbook](cookbooks-tiered-storage.md) - - -## Deploy a ZooKeeper cluster - -> If you already have an existing zookeeper cluster and want to use it, you can skip this section. - -[ZooKeeper](https://zookeeper.apache.org) manages a variety of essential coordination-related and configuration-related tasks for Pulsar. To deploy a Pulsar cluster, you need to deploy ZooKeeper first. A 3-node ZooKeeper cluster is the recommended configuration. Pulsar does not make heavy use of ZooKeeper, so the lightweight machines or VMs should suffice for running ZooKeeper. - -To begin, add all ZooKeeper servers to the configuration specified in [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) (in the Pulsar directory that you create [above](#install-the-pulsar-binary-package)). The following is an example: - -```properties - -server.1=zk1.us-west.example.com:2888:3888 -server.2=zk2.us-west.example.com:2888:3888 -server.3=zk3.us-west.example.com:2888:3888 - -``` - -> If you only have one machine on which to deploy Pulsar, you only need to add one server entry in the configuration file. - -On each host, you need to specify the ID of the node in the `myid` file, which is in the `data/zookeeper` folder of each server by default (you can change the file location via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter). - -> See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed information on `myid` and more. - -For example, on a ZooKeeper server like `zk1.us-west.example.com`, you can set the `myid` value as follows: - -```bash - -$ mkdir -p data/zookeeper -$ echo 1 > data/zookeeper/myid - -``` - -On `zk2.us-west.example.com`, the command is `echo 2 > data/zookeeper/myid` and so on. - -Once you add each server to the `zookeeper.conf` configuration and have the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool: - -```bash - -$ bin/pulsar-daemon start zookeeper - -``` - -> If you plan to deploy Zookeeper with the Bookie on the same node, you need to start zookeeper by using different stats -> port by configuring the `metricsProvider.httpPort` in zookeeper.conf. 
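-
-For example, a `zookeeper.conf` sketch for that case might look like the following. This assumes ZooKeeper's bundled Prometheus metrics provider, and the port value is just an arbitrary free port chosen for illustration to avoid clashing with the bookie's stats endpoint on the shared node:
-
-```conf
-
-# Hypothetical example: expose ZooKeeper metrics on a port the bookie does not use.
-metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
-metricsProvider.httpPort=8001
-
-```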
-
-## Initialize cluster metadata
-
-Once you deploy ZooKeeper for your cluster, you need to write some metadata to ZooKeeper. You only need to write this data **once**.
-
-You can initialize this metadata using the [`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata) command of the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool. This command can be run on any machine in your Pulsar cluster, so the metadata can be initialized from a ZooKeeper, broker, or bookie machine. The following is an example:
-
-```shell
-
-$ bin/pulsar initialize-cluster-metadata \
-  --cluster pulsar-cluster-1 \
-  --zookeeper zk1.us-west.example.com:2181 \
-  --configuration-store zk1.us-west.example.com:2181 \
-  --web-service-url http://pulsar.us-west.example.com:8080 \
-  --web-service-url-tls https://pulsar.us-west.example.com:8443 \
-  --broker-service-url pulsar://pulsar.us-west.example.com:6650 \
-  --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651
-
-```
-
-As you can see from the example above, you will need to specify the following:
-
-Flag | Description
-:----|:-----------
-`--cluster` | A name for the cluster
-`--zookeeper` | A "local" ZooKeeper connection string for the cluster. This connection string only needs to include *one* machine in the ZooKeeper cluster.
-`--configuration-store` | The configuration store connection string for the entire instance. As with the `--zookeeper` flag, this connection string only needs to include *one* machine in the ZooKeeper cluster.
-`--web-service-url` | The web service URL for the cluster, plus a port. This URL should be a standard DNS name. The default port is 8080 (using a different port is not recommended).
-`--web-service-url-tls` | If you use [TLS](security-tls-transport.md), you also need to specify a TLS web service URL for the cluster. The default port is 8443 (using a different port is not recommended).
-`--broker-service-url` | A broker service URL enabling interaction with the brokers in the cluster. This URL should use the same DNS name as the web service URL, but with the `pulsar` scheme instead. The default port is 6650 (using a different port is not recommended).
-`--broker-service-url-tls` | If you use [TLS](security-tls-transport.md), you also need to specify a TLS broker service URL for the brokers in the cluster. The default port is 6651 (using a different port is not recommended).
-
-
-> If you do not have a DNS server, you can use the multi-host format in the service URL with the following settings:
->
-
-> ```shell
->
-> --web-service-url http://host1:8080,host2:8080,host3:8080 \
-> --web-service-url-tls https://host1:8443,host2:8443,host3:8443 \
-> --broker-service-url pulsar://host1:6650,host2:6650,host3:6650 \
-> --broker-service-url-tls pulsar+ssl://host1:6651,host2:6651,host3:6651
->
->
-> ```

->
-> If you want to use an existing BookKeeper cluster, you can add the `--existing-bk-metadata-service-uri` flag as follows:
->
-
-> ```shell
->
-> --existing-bk-metadata-service-uri "zk+null://zk1:2181;zk2:2181/ledgers" \
-> --web-service-url http://host1:8080,host2:8080,host3:8080 \
-> --web-service-url-tls https://host1:8443,host2:8443,host3:8443 \
-> --broker-service-url pulsar://host1:6650,host2:6650,host3:6650 \
-> --broker-service-url-tls pulsar+ssl://host1:6651,host2:6651,host3:6651
->
->
-> ```

-> You can obtain the metadata service URI of the existing BookKeeper cluster by using the `bin/bookkeeper shell whatisinstanceid` command. You must enclose the value in double quotes since the multiple metadata service URIs are separated with semicolons.
-
-## Deploy a BookKeeper cluster
-
-[BookKeeper](https://bookkeeper.apache.org) handles all persistent data storage in Pulsar. You need to deploy a cluster of BookKeeper bookies to use Pulsar. A common choice is to run a **3-bookie BookKeeper cluster**.
-
-You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. The most important step in configuring bookies for our purposes here is ensuring that [`zkServers`](reference-configuration.md#bookkeeper-zkServers) is set to the connection string for the ZooKeeper cluster. The following is an example:
-
-```properties
-
-zkServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
-
-```
-
-Once you appropriately modify the `zkServers` parameter, you can make any other configuration changes that you require. You can find a full listing of the available BookKeeper configuration parameters [here](reference-configuration.md#bookkeeper). For a more in-depth guide, consult the [BookKeeper documentation](http://bookkeeper.apache.org/docs/latest/reference/config/).
-
-Once you apply the desired configuration in `conf/bookkeeper.conf`, you can start up a bookie on each of your BookKeeper hosts. You can start up each bookie either in the background, using [nohup](https://en.wikipedia.org/wiki/Nohup), or in the foreground.
-
-To start the bookie in the background, use the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
-
-```bash
-
-$ bin/pulsar-daemon start bookie
-
-```
-
-To start the bookie in the foreground:
-
-```bash
-
-$ bin/pulsar bookie
-
-```
-
-You can verify that a bookie works properly by running the `bookiesanity` command on the [BookKeeper shell](reference-cli-tools.md#shell):
-
-```bash
-
-$ bin/bookkeeper shell bookiesanity
-
-```
-
-This command creates an ephemeral BookKeeper ledger on the local bookie, writes a few entries, reads them back, and finally deletes the ledger.
-
-After you start all the bookies, you can run the `simpletest` command of the [BookKeeper shell](reference-cli-tools.md#shell) on any bookie node to verify that all the bookies in the cluster are up and running.
-
-```bash
-
-$ bin/bookkeeper shell simpletest --ensemble <num-bookies> --writeQuorum <num-bookies> --ackQuorum <num-bookies> --numEntries <num-entries>
-
-```
-
-This command creates a `num-bookies` sized ledger on the cluster, writes a few entries, and finally deletes the ledger.
-
-
-## Deploy Pulsar brokers
-
-Pulsar brokers are the last thing you need to deploy in your Pulsar cluster. Brokers handle Pulsar messages and provide the administrative interface of Pulsar. A good choice is to run **3 brokers**, one for each machine that already runs a BookKeeper bookie.
-
-### Configure Brokers
-
-The most important element of broker configuration is ensuring that each broker is aware of the ZooKeeper cluster that you have deployed. Ensure that the [`zookeeperServers`](reference-configuration.md#broker-zookeeperServers) and [`configurationStoreServers`](reference-configuration.md#broker-configurationStoreServers) parameters are correct. In this case, since you only have 1 cluster and no separate configuration store, `configurationStoreServers` points to the same `zookeeperServers`.
-
-```properties
-
-zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
-configurationStoreServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
-
-```
-
-You also need to specify the cluster name (matching the name that you provided when you [initialize the metadata of the cluster](#initialize-cluster-metadata)):
-
-```properties
-
-clusterName=pulsar-cluster-1
-
-```
-
-In addition, you need to match the broker and web service ports provided when you initialize the metadata of the cluster (especially when you use a different port than the default):
-
-```properties
-
-brokerServicePort=6650
-brokerServicePortTls=6651
-webServicePort=8080
-webServicePortTls=8443
-
-```
-
-> If you deploy Pulsar in a one-node cluster, you should update the replication settings in `conf/broker.conf` to `1`.
->
-
-> ```properties
->
-> # Number of bookies to use when creating a ledger
-> managedLedgerDefaultEnsembleSize=1
->
-> # Number of copies to store for each message
-> managedLedgerDefaultWriteQuorum=1
->
-> # Number of guaranteed copies (acks to wait before write is complete)
-> managedLedgerDefaultAckQuorum=1
->
->
-> ```


-### Enable Pulsar Functions (optional)
-
-If you want to enable [Pulsar Functions](functions-overview.md), follow the instructions below:
-
-1. Edit `conf/broker.conf` to enable the functions worker, by setting `functionsWorkerEnabled` to `true`.
-
-   ```conf
-   
-   functionsWorkerEnabled=true
-   
-   ```
-
-2. Edit `conf/functions_worker.yml` and set `pulsarFunctionsCluster` to the cluster name that you provide when you [initialize the metadata of the cluster](#initialize-cluster-metadata).
-
-   ```conf
-   
-   pulsarFunctionsCluster: pulsar-cluster-1
-   
-   ```
-
-If you want to learn more options about deploying the functions worker, check out [Deploy and manage functions worker](functions-worker.md).
-
-### Start Brokers
-
-You can then provide any other configuration changes that you want in the [`conf/broker.conf`](reference-configuration.md#broker) file. Once you decide on a configuration, you can start up the brokers for your Pulsar cluster. Like ZooKeeper and BookKeeper, you can start brokers either in the foreground or in the background, using nohup.
-
-You can start a broker in the foreground using the [`pulsar broker`](reference-cli-tools.md#pulsar-broker) command:
-
-```bash
-
-$ bin/pulsar broker
-
-```
-
-You can start a broker in the background using the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
-
-```bash
-
-$ bin/pulsar-daemon start broker
-
-```
-
-Once you successfully start up all the brokers that you intend to use, your Pulsar cluster should be ready to go!
-
-## Connect to the running cluster
-
-Once your Pulsar cluster is up and running, you should be able to connect to it using Pulsar clients. One such client is the [`pulsar-client`](reference-cli-tools.md#pulsar-client) tool, which is included with the Pulsar binary package. The `pulsar-client` tool can publish messages to and consume messages from Pulsar topics and thus provides a simple way to make sure that your cluster runs properly.
-
-To use the `pulsar-client` tool, first modify the client configuration file in [`conf/client.conf`](reference-configuration.md#client) in your binary package. You need to change the values for `webServiceUrl` and `brokerServiceUrl`, substituting `localhost` (which is the default) with the DNS name that you assign to your broker/bookie hosts. The following is an example:
-
-```properties
-
-webServiceUrl=http://us-west.example.com:8080
-brokerServiceUrl=pulsar://us-west.example.com:6650
-
-```
-
-> If you do not have a DNS server, you can use the multi-host format in the service URL as follows:
->
-
-> ```properties
->
-> webServiceUrl=http://host1:8080,host2:8080,host3:8080
-> brokerServiceUrl=pulsar://host1:6650,host2:6650,host3:6650
->
->
-> ```


-Once that is complete, you can publish a message to the Pulsar topic:
-
-```bash
-
-$ bin/pulsar-client produce \
-  persistent://public/default/test \
-  -n 1 \
-  -m "Hello Pulsar"
-
-```
-
-> You may need to use a different cluster name in the topic if you specify a cluster name other than `pulsar-cluster-1`.
-
-This command publishes a single message to the Pulsar topic. In addition, you can subscribe to the Pulsar topic in a different terminal before publishing messages, as shown below:
-
-```bash
-
-$ bin/pulsar-client consume \
-  persistent://public/default/test \
-  -n 100 \
-  -s "consumer-test" \
-  -t "Exclusive"
-
-```
-
-Once you successfully publish the above message to the topic, you should see it in the standard output:
-
-```bash
-
------ got message -----
-Hello Pulsar
-
-```
-
-## Run Functions
-
-> If you have [enabled](#enable-pulsar-functions-optional) Pulsar Functions, you can try them out now.
-
-Create an ExclamationFunction `exclamation`.
-
-```bash
-
-bin/pulsar-admin functions create \
-  --jar examples/api-examples.jar \
-  --classname org.apache.pulsar.functions.api.examples.ExclamationFunction \
-  --inputs persistent://public/default/exclamation-input \
-  --output persistent://public/default/exclamation-output \
-  --tenant public \
-  --namespace default \
-  --name exclamation
-
-```
-
-Check whether the function runs as expected by [triggering](functions-deploying.md#triggering-pulsar-functions) the function.
-
-```bash
-
-bin/pulsar-admin functions trigger --name exclamation --trigger-value "hello world"
-
-```
-
-You should see the following output:
-
-```shell
-
-hello world!
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/deploy-dcos.md b/site2/website/versioned_docs/version-2.9.0-deprecated/deploy-dcos.md
deleted file mode 100644
index 35a0a83d716ade..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/deploy-dcos.md
+++ /dev/null
@@ -1,200 +0,0 @@
----
-id: deploy-dcos
-title: Deploy Pulsar on DC/OS
-sidebar_label: "DC/OS"
-original_id: deploy-dcos
----
-
-:::tip
-
-To enable all built-in [Pulsar IO](io-overview.md) connectors in your Pulsar deployment, we recommend you use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image; the former has already bundled [all built-in connectors](io-overview.md#working-with-connectors).
-
-:::
-
-[DC/OS](https://dcos.io/) (the DataCenter Operating System) is a distributed operating system for deploying and managing applications and systems on [Apache Mesos](http://mesos.apache.org/). DC/OS is an open-source tool created and maintained by [Mesosphere](https://mesosphere.com/).
-
-Apache Pulsar is available as a [Marathon Application Group](https://mesosphere.github.io/marathon/docs/application-groups.html), which runs multiple applications as manageable sets.
-
-## Prerequisites
-
-You need to prepare your environment before running Pulsar on DC/OS.
-
-* DC/OS version [1.9](https://docs.mesosphere.com/1.9/) or higher
-* A [DC/OS cluster](https://docs.mesosphere.com/1.9/installing/) with at least three agent nodes
-* The [DC/OS CLI tool](https://docs.mesosphere.com/1.9/cli/install/) installed
-* The [`PulsarGroups.json`](https://github.com/apache/pulsar/blob/master/deployment/dcos/PulsarGroups.json) configuration file from the Pulsar GitHub repo.
-
-  ```bash
-  
-  $ curl -O https://raw.githubusercontent.com/apache/pulsar/master/deployment/dcos/PulsarGroups.json
-  
-  ```
-
-Each node in the DC/OS-managed Mesos cluster must have at least:
-
-* 4 CPU
-* 4 GB of memory
-* 60 GB of total persistent disk
-
-Alternatively, you can adjust the configuration in `PulsarGroups.json` to match the resources of your DC/OS cluster.
-
-## Deploy Pulsar using the DC/OS command interface
-
-You can deploy Pulsar on DC/OS using this command:
-
-```bash
-
-$ dcos marathon group add PulsarGroups.json
-
-```
-
-This command deploys Docker container instances in three groups, which together comprise a Pulsar cluster:
-
-* 3 bookies (1 [bookie](reference-terminology.md#bookie) on each agent node and 1 [bookie recovery](http://bookkeeper.apache.org/docs/latest/admin/autorecovery/) instance)
-* 3 Pulsar [brokers](reference-terminology.md#broker) (1 broker on each node and 1 admin instance)
-* 1 [Prometheus](http://prometheus.io/) instance and 1 [Grafana](https://grafana.com/) instance
-
-
-> When you run DC/OS, a ZooKeeper cluster will be running at `master.mesos:2181`, so you do not have to install or start up ZooKeeper separately.
-
-After executing the `dcos` command above, click the **Services** tab in the DC/OS [GUI interface](https://docs.mesosphere.com/latest/gui/), which you can access at [http://m1.dcos](http://m1.dcos) in this example. You should see several applications during the deployment.
-
-![DC/OS command executed](/assets/dcos_command_execute.png)
-
-![DC/OS command executed2](/assets/dcos_command_execute2.png)
-
-## The BookKeeper group
-
-To monitor the status of the BookKeeper cluster deployment, click the **bookkeeper** group in the parent **pulsar** group.
-
-![DC/OS bookkeeper status](/assets/dcos_bookkeeper_status.png)
-
-At this point, the status of each of the 3 [bookies](reference-terminology.md#bookie) is green, which means that the bookies have been deployed successfully and are running.
-
-![DC/OS bookkeeper running](/assets/dcos_bookkeeper_run.png)
-
-You can also click each bookie instance to get more detailed information, such as the bookie running log.
-
-![DC/OS bookie log](/assets/dcos_bookie_log.png)
-
-To display information about BookKeeper in ZooKeeper, you can visit [http://m1.dcos/exhibitor](http://m1.dcos/exhibitor). In this example, 3 bookies are under the `available` directory.
-
-![DC/OS bookkeeper in zk](/assets/dcos_bookkeeper_in_zookeeper.png)
-
-## The Pulsar broker group
-
-Similar to the BookKeeper group above, click **brokers** to check the status of the Pulsar brokers.
-
-![DC/OS broker status](/assets/dcos_broker_status.png)
-
-![DC/OS broker running](/assets/dcos_broker_run.png)
-
-You can also click each broker instance to get more detailed information, such as the broker running log.
-
-![DC/OS broker log](/assets/dcos_broker_log.png)
-
-Broker cluster information in ZooKeeper is also available through the web UI. In this example, you can see that the `loadbalance` and `managed-ledgers` directories have been created.
-
-![DC/OS broker in zk](/assets/dcos_broker_in_zookeeper.png)
-
-## Monitor group
-
-The **monitor** group consists of Prometheus and Grafana.
-
-![DC/OS monitor status](/assets/dcos_monitor_status.png)
-
-### Prometheus
-
-Click the instance of `prom` to get the endpoint of Prometheus, which is `192.168.65.121:9090` in this example.
-
-![DC/OS prom endpoint](/assets/dcos_prom_endpoint.png)
-
-If you click that endpoint, you can see the Prometheus dashboard. All the bookies and brokers are listed on [http://192.168.65.121:9090/targets](http://192.168.65.121:9090/targets).
-
-![DC/OS prom targets](/assets/dcos_prom_targets.png)
-
-### Grafana
-
-Click `grafana` to get the endpoint for Grafana, which is `192.168.65.121:3000` in this example.
-
-![DC/OS grafana endpoint](/assets/dcos_grafana_endpoint.png)
-
-If you click that endpoint, you can access the Grafana dashboard.
-
-![DC/OS grafana targets](/assets/dcos_grafana_dashboard.png)
-
-## Run a simple Pulsar consumer and producer on DC/OS
-
-Now that you have a fully deployed Pulsar cluster, you can run a simple consumer and producer to show Pulsar on DC/OS in action.
-
-### Download and prepare the Pulsar Java tutorial
-
-You can clone the [Pulsar Java tutorial](https://github.com/streamlio/pulsar-java-tutorial) repo. This repo contains a simple Pulsar consumer and producer (you can find more information in the `README` file in this repo).
-
-```bash
-
-$ git clone https://github.com/streamlio/pulsar-java-tutorial
-
-```
-
-Change the `SERVICE_URL` from `pulsar://localhost:6650` to `pulsar://a1.dcos:6650` in both the [`ConsumerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ConsumerTutorial.java) file and the [`ProducerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ProducerTutorial.java) file.
-
-The `pulsar://a1.dcos:6650` endpoint is for the broker service. You can fetch the endpoint details for each broker instance from the DC/OS GUI. `a1.dcos` is a DC/OS client agent that runs a broker, and you can replace it with the client agent IP address.
-
-Now, you can change the message number from 10 to 10000000 in the main method of the [`ProducerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ProducerTutorial.java) file to produce more messages.
-
-Then, you can compile the project code using the command below:
-
-```bash
-
-$ mvn clean package
-
-```
-
-### Run the consumer and producer
-
-Execute this command to run the consumer:
-
-```bash
-
-$ mvn exec:java -Dexec.mainClass="tutorial.ConsumerTutorial"
-
-```
-
-Execute this command to run the producer:
-
-```bash
-
-$ mvn exec:java -Dexec.mainClass="tutorial.ProducerTutorial"
-
-```
-
-You can see in the DC/OS GUI that the producer is producing messages and the consumer is consuming them.
-
-![DC/OS pulsar producer](/assets/dcos_producer.png)
-
-![DC/OS pulsar consumer](/assets/dcos_consumer.png)
-
-### View Grafana metric output
-
-While the producer and consumer are running, you can access the running metrics from Grafana.
-
-![DC/OS pulsar dashboard](/assets/dcos_metrics.png)
-
-
-## Uninstall Pulsar
-
-You can shut down and uninstall the `pulsar` application from DC/OS at any time in one of the following two ways:
-
-1. Click the three dots at the right end of the Pulsar group and choose **Delete** on the DC/OS GUI.
-
-   ![DC/OS pulsar uninstall](/assets/dcos_uninstall.png)
-
-2. Use the command below.
-
-   ```bash
-   
-   $ dcos marathon group remove /pulsar
-   
-   ```
-
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/deploy-docker.md b/site2/website/versioned_docs/version-2.9.0-deprecated/deploy-docker.md
deleted file mode 100644
index 8348d78deb2378..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/deploy-docker.md
+++ /dev/null
@@ -1,60 +0,0 @@
----
-id: deploy-docker
-title: Deploy a cluster on Docker
-sidebar_label: "Docker"
-original_id: deploy-docker
----
-
-To deploy a Pulsar cluster on Docker, complete the following steps:
-1. Deploy a ZooKeeper cluster (optional)
-2. Initialize cluster metadata
-3. Deploy a BookKeeper cluster
-4. Deploy one or more Pulsar brokers
-
-## Prepare
-
-To run Pulsar on Docker, you need to create a container for each Pulsar component: ZooKeeper, BookKeeper and broker. You can pull the images of ZooKeeper and BookKeeper separately on [Docker Hub](https://hub.docker.com/), and pull a [Pulsar image](https://hub.docker.com/r/apachepulsar/pulsar-all/tags) for the broker. You can also pull only one [Pulsar image](https://hub.docker.com/r/apachepulsar/pulsar-all/tags) and create three containers with this image. This tutorial takes the second option as an example.
-
-### Pull a Pulsar image
-You can pull a Pulsar image from [Docker Hub](https://hub.docker.com/r/apachepulsar/pulsar-all/tags) with the following command.
-
-```
-
-docker pull apachepulsar/pulsar-all:latest
-
-```
-
-### Create three containers
-Create containers for ZooKeeper, BookKeeper and the broker. In this example, they are named `zookeeper`, `bookkeeper` and `broker` respectively. You can name them as you want with the `--name` flag. By default, container names are generated randomly.
-
-```
-
-docker run -it --name bookkeeper apachepulsar/pulsar-all:latest /bin/bash
-docker run -it --name zookeeper apachepulsar/pulsar-all:latest /bin/bash
-docker run -it --name broker apachepulsar/pulsar-all:latest /bin/bash
-
-```
-
-### Create a network
-To deploy a Pulsar cluster on Docker, you need to create a `network` and connect the containers of ZooKeeper, BookKeeper and the broker to this network. The following command creates the network `pulsar`:
-
-```
-
-docker network create pulsar
-
-```
-
-### Connect containers to network
-Connect the containers of ZooKeeper, BookKeeper and the broker to the `pulsar` network with the following commands.
-
-```
-
-docker network connect pulsar zookeeper
-docker network connect pulsar bookkeeper
-docker network connect pulsar broker
-
-```
-
-To check whether the containers are successfully connected to the network, enter the `docker network inspect pulsar` command.
-
-For detailed information about how to deploy a ZooKeeper cluster, a BookKeeper cluster, and brokers, see [deploy a cluster on bare metal](deploy-bare-metal.md).
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/deploy-kubernetes.md b/site2/website/versioned_docs/version-2.9.0-deprecated/deploy-kubernetes.md
deleted file mode 100644
index 1aefc6ad79f716..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/deploy-kubernetes.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-id: deploy-kubernetes
-title: Deploy Pulsar on Kubernetes
-sidebar_label: "Kubernetes"
-original_id: deploy-kubernetes
----
-
-To get up and running with these charts as fast as possible, in a **non-production** use case, we provide
-a [quick start guide](getting-started-helm.md) for Proof of Concept (PoC) deployments.
-
-To configure and install a Pulsar cluster on Kubernetes for production usage, follow the complete [Installation Guide](helm-install.md).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/deploy-monitoring.md b/site2/website/versioned_docs/version-2.9.0-deprecated/deploy-monitoring.md
deleted file mode 100644
index 2b5c19344dc8c3..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/deploy-monitoring.md
+++ /dev/null
@@ -1,148 +0,0 @@
----
-id: deploy-monitoring
-title: Monitor
-sidebar_label: "Monitor"
-original_id: deploy-monitoring
----
-
-You can monitor a Pulsar cluster in different ways, exposing both metrics related to the usage of topics and metrics about the overall health of the individual components of the cluster.
-
-## Collect metrics
-
-You can collect broker stats, ZooKeeper stats, and BookKeeper stats.
-
-### Broker stats
-
-You can collect Pulsar broker metrics from brokers and export the metrics in JSON format. Pulsar broker metrics are mainly of two types:
-
-* *Destination dumps*, which contain stats for each individual topic. You can fetch the destination dumps using the command below:
-
-  ```shell
-  
-  bin/pulsar-admin broker-stats destinations
-  
-  ```
-
-* Broker metrics, which contain the broker information and topic stats aggregated at the namespace level. You can fetch the broker metrics by using the following command:
-
-  ```shell
-  
-  bin/pulsar-admin broker-stats monitoring-metrics
-  
-  ```
-
-All the message rates are updated every minute.
-
-The aggregated broker metrics are also exposed in the [Prometheus](https://prometheus.io) format at:
-
-```shell
-
-http://$BROKER_ADDRESS:8080/metrics/
-
-```
-
-### ZooKeeper stats
-
-The local ZooKeeper, configuration store server and clients that are shipped with Pulsar can expose detailed stats through Prometheus.
-
-```shell
-
-http://$LOCAL_ZK_SERVER:8000/metrics
-http://$GLOBAL_ZK_SERVER:8001/metrics
-
-```
-
-The default port of local ZooKeeper is `8000` and the default port of the configuration store is `8001`. You can use a different stats port by configuring `metricsProvider.httpPort` in the `conf/zookeeper.conf` file.
-
-### BookKeeper stats
-
-You can configure the stats frameworks for BookKeeper by modifying the `statsProviderClass` in the `conf/bookkeeper.conf` file.
-
-The default BookKeeper configuration enables the Prometheus exporter. The configuration is included with the Pulsar distribution.
-
-```shell
-
-http://$BOOKIE_ADDRESS:8000/metrics
-
-```
-
-The default port for a bookie is `8000`. You can change the port by configuring `prometheusStatsHttpPort` in the `conf/bookkeeper.conf` file.
-
-### Managed cursor acknowledgment state
-The acknowledgment state is first persisted to the ledger. When persisting the acknowledgment state to the ledger fails, it is persisted to ZooKeeper. To track the stats of acknowledgment, you can configure the metrics for the managed cursor.
-
-```
-
-brk_ml_cursor_persistLedgerSucceed(namespace="", ledger_name="", cursor_name:"")
-brk_ml_cursor_persistLedgerErrors(namespace="", ledger_name="", cursor_name:"")
-brk_ml_cursor_persistZookeeperSucceed(namespace="", ledger_name="", cursor_name:"")
-brk_ml_cursor_persistZookeeperErrors(namespace="", ledger_name="", cursor_name:"")
-brk_ml_cursor_nonContiguousDeletedMessagesRange(namespace="", ledger_name="", cursor_name:"")
-
-```
-
-These metrics are exposed through the Prometheus interface, so you can monitor and check them in Grafana.
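-
-Outside of a full Prometheus setup, you can sanity-check any of the exporter endpoints above with a few lines of Java. The sketch below is illustrative only: the class name is made up, and the default URL is an assumption based on the broker endpoint described earlier (`$BROKER_ADDRESS:8080`).
-
-```java
-
-import java.net.URI;
-import java.net.http.HttpClient;
-import java.net.http.HttpRequest;
-import java.net.http.HttpResponse;
-
-public class MetricsProbe {
-    public static void main(String[] args) throws Exception {
-        // Pass the metrics endpoint as the first argument, e.g. http://broker-host:8080/metrics/
-        String url = args.length > 0 ? args[0] : "http://localhost:8080/metrics/";
-        HttpClient client = HttpClient.newHttpClient();
-        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
-        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
-        // Print only the managed cursor metrics listed above, as a quick sanity check.
-        response.body().lines()
-                .filter(line -> line.startsWith("brk_ml_cursor_"))
-                .forEach(System.out::println);
-    }
-}
-
-```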
-
-### Function and connector stats
-
-You can collect functions worker stats from `functions-worker` and export the metrics in JSON format, which contains functions worker JVM metrics.
-
-```
-
-pulsar-admin functions-worker monitoring-metrics
-
-```
-
-You can collect functions and connectors metrics from `functions-worker` and export the metrics in JSON format.
-
-```
-
-pulsar-admin functions-worker function-stats
-
-```
-
-The aggregated functions and connectors metrics can be exposed in the Prometheus format as below. You can get [`FUNCTIONS_WORKER_ADDRESS`](http://pulsar.apache.org/docs/en/next/functions-worker/) and `WORKER_PORT` from the `functions_worker.yml` file.
-
-```
-
-http://$FUNCTIONS_WORKER_ADDRESS:$WORKER_PORT/metrics
-
-```
-
-## Configure Prometheus
-
-You can use Prometheus to collect all the metrics exposed for Pulsar components and set up [Grafana](https://grafana.com/) dashboards to display the metrics and monitor your Pulsar cluster. For details, refer to the [Prometheus guide](https://prometheus.io/docs/introduction/getting_started/).
-
-When you run Pulsar on bare metal, you can provide the list of nodes to be probed. When you deploy Pulsar in a Kubernetes cluster, the monitoring is set up automatically. For details, refer to the [Kubernetes instructions](helm-deploy.md).
-
-## Dashboards
-
-When you collect time series statistics, the major problem is to make sure the number of dimensions attached to the data does not explode. Thus you only need to collect time series of metrics aggregated at the namespace level.
-
-### Pulsar per-topic dashboard
-
-The per-topic dashboard instructions are available at [Pulsar manager](administration-pulsar-manager.md).
-
-### Grafana
-
-You can use Grafana to create dashboards driven by the data that is stored in Prometheus.
-
-When you deploy Pulsar on Kubernetes, a `pulsar-grafana` Docker image is enabled by default. You can use the Docker image with the principal dashboards.
-
-Enter the command below to use the dashboard manually:
-
-```shell
-
-docker run -p3000:3000 \
-  -e PROMETHEUS_URL=http://$PROMETHEUS_HOST:9090/ \
-  apachepulsar/pulsar-grafana:latest
-
-```
-
-The following are some Grafana dashboard examples:
-
-- [pulsar-grafana](http://pulsar.apache.org/docs/en/deploy-monitoring/#grafana): a Grafana dashboard that displays metrics collected in Prometheus for Pulsar clusters running on Kubernetes.
-- [apache-pulsar-grafana-dashboard](https://github.com/streamnative/apache-pulsar-grafana-dashboard): a collection of Grafana dashboard templates for different Pulsar components running on both Kubernetes and on-premise machines.
-
-## Alerting rules
-
-You can set alerting rules according to your Pulsar environment. To configure alerting rules for Apache Pulsar, refer to [alerting rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/develop-load-manager.md b/site2/website/versioned_docs/version-2.9.0-deprecated/develop-load-manager.md
deleted file mode 100644
index 509209b6a852d8..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/develop-load-manager.md
+++ /dev/null
@@ -1,227 +0,0 @@
----
-id: develop-load-manager
-title: Modular load manager
-sidebar_label: "Modular load manager"
-original_id: develop-load-manager
----
-
-The *modular load manager*, implemented in [`ModularLoadManagerImpl`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/impl/ModularLoadManagerImpl.java), is a flexible alternative to the previously implemented load manager, [`SimpleLoadManagerImpl`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/impl/SimpleLoadManagerImpl.java), which attempts to simplify how load is managed while also providing abstractions so that complex load management strategies may be implemented.
-
-## Usage
-
-There are two ways that you can enable the modular load manager:
-
-1. Change the value of the `loadManagerClassName` parameter in `conf/broker.conf` from `org.apache.pulsar.broker.loadbalance.impl.SimpleLoadManagerImpl` to `org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl`.
-2. Using the `pulsar-admin` tool. Here's an example:
-
-   ```shell
-   
-   $ pulsar-admin brokers update-dynamic-config \
-     --config loadManagerClassName \
-     --value org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl
-   
-   ```
-
-   You can use the same method to change back to the original value. In either case, any mistake in specifying the load manager will cause Pulsar to default to `SimpleLoadManagerImpl`.
-
-## Verification
-
-There are a few different ways to determine which load manager is being used:
-
-1. Use `pulsar-admin` to examine the `loadManagerClassName` element:
-
-   ```shell
-   
-   $ bin/pulsar-admin brokers get-all-dynamic-config
-   {
-     "loadManagerClassName" : "org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl"
-   }
-   
-   ```
-
-   If there is no `loadManagerClassName` element, then the default load manager is used.
-
-2. Consult a ZooKeeper load report. With the modular load manager, the load report in `/loadbalance/brokers/...` will have many differences. For example, the `systemResourceUsage` sub-elements (`bandwidthIn`, `bandwidthOut`, etc.) are now all at the top level. Here is an example load report from the modular load manager:
-
-   ```json
-   
-   {
-     "bandwidthIn": {
-       "limit": 10240000.0,
-       "usage": 4.256510416666667
-     },
-     "bandwidthOut": {
-       "limit": 10240000.0,
-       "usage": 5.287239583333333
-     },
-     "bundles": [],
-     "cpu": {
-       "limit": 2400.0,
-       "usage": 5.7353247655435915
-     },
-     "directMemory": {
-       "limit": 16384.0,
-       "usage": 1.0
-     }
-   }
-   
-   ```
-
-   With the simple load manager, the load report in `/loadbalance/brokers/...` will look like this:
-
-   ```json
-   
-   {
-     "systemResourceUsage": {
-       "bandwidthIn": {
-         "limit": 10240000.0,
-         "usage": 0.0
-       },
-       "bandwidthOut": {
-         "limit": 10240000.0,
-         "usage": 0.0
-       },
-       "cpu": {
-         "limit": 2400.0,
-         "usage": 0.0
-       },
-       "directMemory": {
-         "limit": 16384.0,
-         "usage": 1.0
-       },
-       "memory": {
-         "limit": 8192.0,
-         "usage": 3903.0
-       }
-     }
-   }
-   
-   ```
-
-3. The command-line [broker monitor](reference-cli-tools.md#monitor-brokers) will have a different output format depending on which load manager implementation is being used.
-
-   Here is an example from the modular load manager:
-
-   ```
-   
-   ===================================================================================================================
-   ||SYSTEM         |CPU %          |MEMORY %       |DIRECT %       |BW IN %        |BW OUT %       |MAX %          ||
-   ||               |0.00           |48.33          |0.01           |0.00           |0.00           |48.33          ||
-   ||COUNT          |TOPIC          |BUNDLE         |PRODUCER       |CONSUMER       |BUNDLE +       |BUNDLE -       ||
-   ||               |4              |4              |0              |2              |4              |0              ||
-   ||LATEST         |MSG/S IN       |MSG/S OUT      |TOTAL          |KB/S IN        |KB/S OUT       |TOTAL          ||
-   ||               |0.00           |0.00           |0.00           |0.00           |0.00           |0.00           ||
-   ||SHORT          |MSG/S IN       |MSG/S OUT      |TOTAL          |KB/S IN        |KB/S OUT       |TOTAL          ||
-   ||               |0.00           |0.00           |0.00           |0.00           |0.00           |0.00           ||
-   ||LONG           |MSG/S IN       |MSG/S OUT      |TOTAL          |KB/S IN        |KB/S OUT       |TOTAL          ||
-   ||               |0.00           |0.00           |0.00           |0.00           |0.00           |0.00           ||
-   ===================================================================================================================
-   
-   ```
-
-   Here is an example from the simple load manager:
-
-   ```
-   
-   ===================================================================================================================
-   ||COUNT          |TOPIC          |BUNDLE         |PRODUCER       |CONSUMER       |BUNDLE +       |BUNDLE -       ||
-   ||               |4              |4              |0              |2              |0              |0              ||
-   ||RAW SYSTEM     |CPU %          |MEMORY %       |DIRECT %       |BW IN %        |BW OUT %       |MAX %          ||
-   ||               |0.25           |47.94          |0.01           |0.00           |0.00           |47.94          ||
-   ||ALLOC SYSTEM   |CPU %          |MEMORY %       |DIRECT %       |BW IN %        |BW OUT %       |MAX %          ||
-   ||               |0.20           |1.89           |               |1.27           |3.21           |3.21           ||
-   ||RAW MSG        |MSG/S IN       |MSG/S OUT      |TOTAL          |KB/S IN        |KB/S OUT       |TOTAL          ||
-   ||               |0.00           |0.00           |0.00           |0.01           |0.01           |0.01           ||
-   ||ALLOC MSG      |MSG/S IN       |MSG/S OUT      |TOTAL          |KB/S IN        |KB/S OUT       |TOTAL          ||
-   ||               |54.84          |134.48         |189.31         |126.54         |320.96         |447.50         ||
-   ===================================================================================================================
-   
-   ```
-
-It is important to note that the modular load manager is _centralized_, meaning that all requests to assign a bundle---whether it's been seen before or whether this is the first time---only get handled by the _lead_ broker (which can change over time). To determine the current lead broker, examine the `/loadbalance/leader` node in ZooKeeper.
-
-## Implementation
-
-### Data
-
-The data monitored by the modular load manager is contained in the [`LoadData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/LoadData.java) class.
-Here, the available data is subdivided into the bundle data and the broker data.
-
-#### Broker
-
-The broker data is contained in the [`BrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/BrokerData.java) class. It is further subdivided into two parts,
-one being the local data which every broker individually writes to ZooKeeper, and the other being the historical broker
-data which is written to ZooKeeper by the leader broker.
-
-##### Local Broker Data
-The local broker data is contained in the class [`LocalBrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/java/org/apache/pulsar/policies/data/loadbalancer/LocalBrokerData.java) and provides information about the following resources:
-
-* CPU usage
-* JVM heap memory usage
-* Direct memory usage
-* Bandwidth in/out usage
-* Most recent total message rate in/out across all bundles
-* Total number of topics, bundles, producers, and consumers
-* Names of all bundles assigned to this broker
-* Most recent changes in bundle assignments for this broker
-
-The local broker data is updated periodically according to the service configuration
-"loadBalancerReportUpdateMaxIntervalMinutes". After any broker updates its local broker data, the leader broker
-receives the update immediately via a ZooKeeper watch, where the local data is read from the ZooKeeper node
-`/loadbalance/brokers/`.
-
-##### Historical Broker Data
-
-The historical broker data is contained in the [`TimeAverageBrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/TimeAverageBrokerData.java) class.
-
-In order to reconcile the need to make good decisions in a steady-state scenario and make reactive decisions in a critical scenario, the historical data is split into two parts: the short-term data for reactive decisions, and the long-term data for steady-state decisions. Both time frames maintain the following information:
-
-* Message rate in/out for the entire broker
-* Message throughput in/out for the entire broker
-
-Unlike the bundle data, the broker data does not maintain samples for the global broker message rates and throughputs, which are not expected to remain steady as new bundles are removed or added. Instead, this data is aggregated over the short-term and long-term data for the bundles. See the section on bundle data to understand how that data is collected and maintained.
-
-The historical broker data is updated for each broker in memory by the leader broker whenever any broker writes its local data to ZooKeeper. Then, the historical data is written to ZooKeeper by the leader broker periodically according to the configuration `loadBalancerResourceQuotaUpdateIntervalMinutes`.
-
-##### Bundle Data
-
-The bundle data is contained in the [`BundleData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/BundleData.java) class. Like the historical broker data, the bundle data is split into a short-term and a long-term time frame. The information maintained in each time frame:
-
-* Message rate in/out for this bundle
-* Message throughput in/out for this bundle
-* Current number of samples for this bundle
-
-The time frames are implemented by maintaining the average of these values over a set, limited number of samples, where
-the samples are obtained through the message rate and throughput values in the local data. Thus, if the update interval
-for the local data is 2 minutes, the number of short samples is 10 and the number of long samples is 1000, the
-short-term data is maintained over a period of `10 samples * 2 minutes / sample = 20 minutes`, while the long-term
-data is similarly maintained over a period of `1000 samples * 2 minutes / sample = 2000 minutes`. Whenever there are not enough samples to satisfy a given time frame,
-the average is taken only over the existing samples. When no samples are available, default values are assumed until
-they are overwritten by the first sample. Currently, the default values are:
-
-* Message rate in/out: 50 messages per second both ways
-* Message throughput in/out: 50KB per second both ways
-
-The bundle data is updated in memory on the leader broker whenever any broker writes its local data to ZooKeeper.
-Then, the bundle data is written to ZooKeeper by the leader broker periodically at the same time as the historical
-broker data, according to the configuration `loadBalancerResourceQuotaUpdateIntervalMinutes`.
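-
-The sample-window behavior described above can be sketched in a few lines of Java. The class below is purely illustrative (the real logic lives in `BundleData` and its related classes); the class name, the fixed default of 50, and the use of a deque are assumptions made for the example.
-
-```java
-
-import java.util.ArrayDeque;
-import java.util.Deque;
-
-/** Illustrative moving average over a limited number of samples. */
-public class SampleWindow {
-    // Default assumed before the first sample arrives, e.g. 50 msg/s or 50 KB/s.
-    private static final double DEFAULT_VALUE = 50.0;
-    private final int maxSamples;
-    private final Deque<Double> samples = new ArrayDeque<>();
-
-    public SampleWindow(int maxSamples) {
-        this.maxSamples = maxSamples;
-    }
-
-    /** Record one observation taken from a periodic local broker data update. */
-    public void addSample(double value) {
-        if (samples.size() == maxSamples) {
-            samples.removeFirst(); // discard the oldest sample once the window is full
-        }
-        samples.addLast(value);
-    }
-
-    /** Average over the samples seen so far; the default applies when there are none. */
-    public double average() {
-        return samples.stream()
-                .mapToDouble(Double::doubleValue)
-                .average()
-                .orElse(DEFAULT_VALUE);
-    }
-}
-
-```
-
-With a 2-minute update interval, `new SampleWindow(10)` corresponds to the 20-minute short-term view and `new SampleWindow(1000)` to the 2000-minute long-term view described above.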
-
-### Traffic Distribution
-
-The modular load manager uses the abstraction provided by [`ModularLoadManagerStrategy`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/ModularLoadManagerStrategy.java) to make decisions about bundle assignment. The strategy makes a decision by considering the service configuration, the entire load data, and the bundle data for the bundle to be assigned. Currently, the only supported strategy is [`LeastLongTermMessageRate`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/impl/LeastLongTermMessageRate.java), though soon users will have the ability to inject their own strategies if desired.
-
-#### Least Long Term Message Rate Strategy
-
-As its name suggests, the least long term message rate strategy attempts to distribute bundles across brokers so that
-the message rate in the long-term time window for each broker is roughly the same. However, simply balancing load based
-on message rate does not handle the issue of asymmetric resource burden per message on each broker. Thus, the system
-resource usages, which are CPU, memory, direct memory, bandwidth in, and bandwidth out, are also considered in the
-assignment process. This is done by weighting the final message rate according to
-`1 / (overload_threshold - max_usage)`, where `overload_threshold` corresponds to the configuration
-`loadBalancerBrokerOverloadedThresholdPercentage` and `max_usage` is the maximum proportion among the system resources
-that is being utilized by the candidate broker. This multiplier ensures that machines that are more heavily taxed
-by the same message rates will receive less load. In particular, it tries to ensure that if one machine is overloaded,
-then all machines are approximately overloaded. In the case in which a broker's max usage exceeds the overload
-threshold, that broker is not considered for bundle assignment. If all brokers are overloaded, the bundle is randomly
-assigned.
-
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/develop-schema.md b/site2/website/versioned_docs/version-2.9.0-deprecated/develop-schema.md
deleted file mode 100644
index 2d4461a5ea2b55..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/develop-schema.md
+++ /dev/null
@@ -1,62 +0,0 @@
----
-id: develop-schema
-title: Custom schema storage
-sidebar_label: "Custom schema storage"
-original_id: develop-schema
----
-
-By default, Pulsar stores data type [schemas](concepts-schema-registry.md) in [Apache BookKeeper](https://bookkeeper.apache.org) (which is deployed alongside Pulsar). You can, however, use another storage system if you wish. This doc walks you through creating your own schema storage implementation.
-
-In order to use a non-default (i.e. non-BookKeeper) storage system for Pulsar schemas, you need to implement two Java interfaces: [`SchemaStorage`](#schemastorage-interface) and [`SchemaStorageFactory`](#schemastoragefactory-interface).
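-
-Before looking at the two interfaces in detail, it may help to see the shape of the problem: a schema store maps a key to a growing, versioned sequence of byte arrays, with asynchronous access. The self-contained sketch below illustrates only those semantics; it does not implement Pulsar's actual interfaces (those are documented in the sections below), and every name in it is illustrative.
-
-```java
-
-import java.util.List;
-import java.util.Map;
-import java.util.concurrent.CompletableFuture;
-import java.util.concurrent.ConcurrentHashMap;
-import java.util.concurrent.CopyOnWriteArrayList;
-
-/** Illustrative versioned key/value store with the async shape of a schema store. */
-public class InMemoryVersionedStore {
-    private final Map<String, List<byte[]>> schemas = new ConcurrentHashMap<>();
-
-    /** Append a new version for the key and return its version number. */
-    public CompletableFuture<Long> put(String key, byte[] value) {
-        List<byte[]> versions = schemas.computeIfAbsent(key, k -> new CopyOnWriteArrayList<>());
-        // Note: a real store must serialize version assignment; concurrent puts to the
-        // same key could race here. This is good enough for illustration.
-        versions.add(value);
-        return CompletableFuture.completedFuture((long) versions.size() - 1);
-    }
-
-    /** Fetch one version of the schema, or null if it is absent. */
-    public CompletableFuture<byte[]> get(String key, long version) {
-        List<byte[]> versions = schemas.get(key);
-        byte[] result = (versions != null && version >= 0 && version < versions.size())
-                ? versions.get((int) version) : null;
-        return CompletableFuture.completedFuture(result);
-    }
-
-    /** Drop all versions stored for the key. */
-    public CompletableFuture<Void> delete(String key) {
-        schemas.remove(key);
-        return CompletableFuture.completedFuture(null);
-    }
-}
-
-```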
-
-## SchemaStorage interface
-
-The `SchemaStorage` interface has the following methods:
-
-```java
-
-public interface SchemaStorage {
-    // How schemas are updated
-    CompletableFuture<SchemaVersion> put(String key, byte[] value, byte[] hash);
-
-    // How schemas are fetched from storage
-    CompletableFuture<StoredSchema> get(String key, SchemaVersion version);
-
-    // How schemas are deleted
-    CompletableFuture<SchemaVersion> delete(String key);
-
-    // Utility method for converting a schema version byte array to a SchemaVersion object
-    SchemaVersion versionFromBytes(byte[] version);
-
-    // Startup behavior for the schema storage client
-    void start() throws Exception;
-
-    // Shutdown behavior for the schema storage client
-    void close() throws Exception;
-}
-
-```
-
-> For a full-fledged example schema storage implementation, see the [`BookKeeperSchemaStorage`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/schema/BookkeeperSchemaStorage.java) class.
-
-## SchemaStorageFactory interface
-
-```java
-
-public interface SchemaStorageFactory {
-    @NotNull
-    SchemaStorage create(PulsarService pulsar) throws Exception;
-}
-
-```
-
-> For a full-fledged example schema storage factory implementation, see the [`BookKeeperSchemaStorageFactory`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/schema/BookkeeperSchemaStorageFactory.java) class.
-
-## Deployment
-
-In order to use your custom schema storage implementation, you'll need to:
-
-1. Package the implementation in a [JAR](https://docs.oracle.com/javase/tutorial/deployment/jar/basicsindex.html) file.
-1. Add that jar to the `lib` folder in your Pulsar [binary or source distribution](getting-started-standalone.md#installing-pulsar).
-1. Change the `schemaRegistryStorageClassName` configuration in [`broker.conf`](reference-configuration.md#broker) to your custom factory class (i.e. the `SchemaStorageFactory` implementation, not the `SchemaStorage` implementation).
-1. Start up Pulsar.
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/develop-tools.md b/site2/website/versioned_docs/version-2.9.0-deprecated/develop-tools.md
deleted file mode 100644
index bc7c29e836e6ac..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/develop-tools.md
+++ /dev/null
@@ -1,112 +0,0 @@
----
-id: develop-tools
-title: Simulation tools
-sidebar_label: "Simulation tools"
-original_id: develop-tools
----
-
-It is sometimes necessary to create a test environment and incur artificial load to observe how well load managers
-handle the load. The load simulation controller, the load simulation client, and the broker monitor were created to
-make creating this load and observing its effects on the managers easier.
-
-## Simulation Client
-The simulation client is a machine which will create and subscribe to topics with configurable message rates and sizes.
-Because simulating a large load sometimes requires multiple client machines, the user does not interact
-with the simulation client directly, but instead delegates requests to the simulation controller, which then
-sends signals to clients to start incurring load. The client implementation is in the class
-`org.apache.pulsar.testclient.LoadSimulationClient`.
-
-### Usage
-To start a simulation client, use the `pulsar-perf` script with the command `simulation-client` as follows:
-
-```
-
-pulsar-perf simulation-client --port <listen port> --service-url <pulsar service url>
-
-```
-
-The client will then be ready to receive controller commands.
-## Simulation Controller
-The simulation controller sends signals to the simulation clients, requesting them to create new topics, stop old
-topics, change the load incurred by topics, and perform several other tasks. It is implemented in the class
-`org.apache.pulsar.testclient.LoadSimulationController` and presents a shell to the user as an interface to send
-commands with.
-
-### Usage
-To start a simulation controller, use the `pulsar-perf` script with the command `simulation-controller` as follows:
-
-```
-
-pulsar-perf simulation-controller --cluster <cluster to simulate on> --client-port <listen port for the clients>
---clients <comma-separated list of client hostnames>
-
-```
-
-The clients should already be started before the controller is started. You will then be presented with a simple prompt,
-where you can issue commands to simulation clients. Arguments often refer to tenant names, namespace names, and topic
-names. In all cases, the BASE name of the tenants, namespaces, and topics are used. For example, for the topic
-`persistent://my_tenant/my_cluster/my_namespace/my_topic`, the tenant name is `my_tenant`, the namespace name is
-`my_namespace`, and the topic name is `my_topic`. The controller can perform the following actions:
-
-* Create a topic with a producer and a consumer
-    * `trade [--rate <message rate>]
-    [--rand-rate <min rate>,<max rate>]
-    [--size <message size>]
-    <tenant> <namespace> <topic>`
-* Create a group of topics with a producer and a consumer
-    * `trade_group [--rate <message rate>]
-    [--rand-rate <min rate>,<max rate>]
-    [--separation <separation between topic creations in ms>] [--size <message size>]
-    [--topics-per-namespace <number of topics per namespace>]
-    <tenant> <group> <num namespaces>`
-* Change the configuration of an existing topic
-    * `change [--rate <message rate>]
-    [--rand-rate <min rate>,<max rate>]
-    [--size <message size>]
-    <tenant> <namespace> <topic>`
-* Change the configuration of a group of topics
-    * `change_group [--rate <message rate>] [--rand-rate <min rate>,<max rate>]
-    [--size <message size>] [--topics-per-namespace <number of topics per namespace>]
-    <tenant> <group>`
-* Shutdown a previously created topic
-    * `stop <tenant> <namespace> <topic>`
-* Shutdown a previously created group of topics
-    * `stop_group <tenant> <group>`
-* Copy the historical data from one ZooKeeper to another and simulate based on the message rates and sizes in that
-history
-    * `copy <source zookeeper> <target zookeeper> [--rate-multiplier value]`
-* Simulate the load of the historical data on the current ZooKeeper (should be same ZooKeeper being simulated on)
-    * `simulate <zookeeper> [--rate-multiplier value]`
-* Stream the latest data from the given active ZooKeeper to simulate the real-time load of that ZooKeeper.
-    * `stream <zookeeper to stream from> [--rate-multiplier value]`
-
-The "group" arguments in these commands allow the user to create or affect multiple topics at once. Groups are created
-when calling the `trade_group` command, and all topics from these groups may be subsequently modified or stopped
-with the `change_group` and `stop_group` commands respectively. All ZooKeeper arguments are of the form
-`zookeeper_host:port`.
-
-### Difference Between Copy, Simulate, and Stream
-The commands `copy`, `simulate`, and `stream` are very similar but have significant differences. `copy` is used when
-you want to simulate the load of a static, external ZooKeeper on the ZooKeeper you are simulating on. Thus,
-`source zookeeper` should be the ZooKeeper you want to copy and `target zookeeper` should be the ZooKeeper you are
-simulating on, and then it will get the full benefit of the historical data of the source in both load manager
-implementations. `simulate` on the other hand takes in only one ZooKeeper, the one you are simulating on. It assumes
-that you are simulating on a ZooKeeper that has historical data for `SimpleLoadManagerImpl` and creates equivalent
-historical data for `ModularLoadManagerImpl`. Then, the load according to the historical data is simulated by the
-clients. Finally, `stream` takes in an active ZooKeeper different from the ZooKeeper being simulated on, streams
-load data from it, and simulates the real-time load. In all cases, the optional `rate-multiplier` argument allows the
-user to simulate some proportion of the load. For instance, using `--rate-multiplier 0.05` will cause messages to
-be sent at only `5%` of the rate of the load that is being simulated.
-
-## Broker Monitor
-To observe the behavior of the load manager in these simulations, one may utilize the broker monitor, which is
-implemented in `org.apache.pulsar.testclient.BrokerMonitor`. The broker monitor will print tabular load data to the
-console as it is updated using watchers.
-
-### Usage
-To start a broker monitor, use the `monitor-brokers` command in the `pulsar-perf` script:
-
-```
-
-pulsar-perf monitor-brokers --connect-string <zookeeper host:port>
-
-```
-
-The console will then continuously print load data until it is interrupted.
-
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/developing-binary-protocol.md b/site2/website/versioned_docs/version-2.9.0-deprecated/developing-binary-protocol.md
deleted file mode 100644
index a18a8b8d56172e..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/developing-binary-protocol.md
+++ /dev/null
@@ -1,606 +0,0 @@
----
-id: developing-binary-protocol
-title: Pulsar binary protocol specification
-sidebar_label: "Binary protocol"
-original_id: developing-binary-protocol
----
-
-Pulsar uses a custom binary protocol for communications between producers/consumers and brokers. This protocol is designed to support required features, such as acknowledgements and flow control, while ensuring maximum transport and implementation efficiency.
-
-Clients and brokers exchange *commands* with each other. Commands are formatted as binary [protocol buffer](https://developers.google.com/protocol-buffers/) (aka *protobuf*) messages. The format of protobuf commands is specified in the [`PulsarApi.proto`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/proto/PulsarApi.proto) file and also documented in the [Protobuf interface](#protobuf-interface) section below.
-
-> ### Connection sharing
-> Commands for different producers and consumers can be interleaved and sent through the same connection without restriction.
-
-All commands associated with Pulsar's protocol are contained in a [`BaseCommand`](#pulsar.proto.BaseCommand) protobuf message that includes a [`Type`](#pulsar.proto.Type) [enum](https://developers.google.com/protocol-buffers/docs/proto#enum) with all possible subcommands as optional fields. `BaseCommand` messages can specify only one subcommand.
-
-## Framing
-
-Since protobuf doesn't provide any sort of message frame, all messages in the Pulsar protocol are prepended with a 4-byte field that specifies the size of the frame. The maximum allowable size of a single frame is 5 MB.
-
-The Pulsar protocol allows for two types of commands:
-
-1. **Simple commands** that do not carry a message payload.
-2. **Payload commands** that bear a payload that is used when publishing or delivering messages. In payload commands, the protobuf command data is followed by protobuf [metadata](#message-metadata) and then the payload, which is passed in raw format outside of protobuf. All sizes are passed as 4-byte unsigned big endian integers.
-
-> Message payloads are passed in raw format rather than protobuf format for efficiency reasons.
-
-### Simple commands
-
-Simple (payload-free) commands have this basic structure:
-
-| Component   | Description                                                                              | Size (in bytes) |
-|:------------|:-----------------------------------------------------------------------------------------|:----------------|
-| totalSize   | The size of the frame, counting everything that comes after it (in bytes)                 | 4               |
-| commandSize | The size of the protobuf-serialized command                                               | 4               |
-| message     | The protobuf message serialized in a raw binary format (rather than in protobuf format)   |                 |
-
-### Payload commands
-
-Payload commands have this basic structure:
-
-| Component    | Description                                                                                   | Size (in bytes) |
-|:-------------|:------------------------------------------------------------------------------------------------|:----------------|
-| totalSize    | The size of the frame, counting everything that comes after it (in bytes)                        | 4               |
-| commandSize  | The size of the protobuf-serialized command                                                      | 4               |
-| message      | The protobuf message serialized in a raw binary format (rather than in protobuf format)          |                 |
-| magicNumber  | A 2-byte byte array (`0x0e01`) identifying the current format                                    | 2               |
-| checksum     | A [CRC32-C checksum](http://www.evanjones.ca/crc32c.html) of everything that comes after it      | 4               |
-| metadataSize | The size of the message [metadata](#message-metadata)                                            | 4               |
-| metadata     | The message [metadata](#message-metadata) stored as a binary protobuf message                    |                 |
-| payload      | Anything left in the frame is considered the payload and can include any sequence of bytes       |                 |
-
-## Message metadata
-
-Message metadata is stored alongside the application-specified payload as a serialized protobuf message. Metadata is created by the producer and passed on unchanged to the consumer.
-
-| Field | Description |
-|:------|:------------|
-| `producer_name` | The name of the producer that published the message |
-| `sequence_id` | The sequence ID of the message, assigned by producer |
-| `publish_time` | The publish timestamp in Unix time (i.e. as the number of milliseconds since January 1st, 1970 in UTC) |
-| `properties` | A sequence of key/value pairs (using the [`KeyValue`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/proto/PulsarApi.proto#L32) message). These are application-defined keys and values with no special meaning to Pulsar. |
-| `replicated_from` *(optional)* | Indicates that the message has been replicated and specifies the name of the [cluster](reference-terminology.md#cluster) where the message was originally published |
-| `partition_key` *(optional)* | While publishing on a partition topic, if the key is present, the hash of the key is used to determine which partition to choose. Partition key is used as the message key. |
-| `compression` *(optional)* | Signals that payload has been compressed and with which compression library |
-| `uncompressed_size` *(optional)* | If compression is used, the producer must fill the uncompressed size field with the original payload size |
-| `num_messages_in_batch` *(optional)* | If this message is really a [batch](#batch-messages) of multiple entries, this field must be set to the number of messages in the batch |
-
-### Batch messages
-
-When using batch messages, the payload contains a list of entries,
-each with its individual metadata, defined by the `SingleMessageMetadata`
-object.
-
-
-For a single batch, the payload format will look like this:
-
-
-| Field         | Description                                                   |
-|:--------------|:--------------------------------------------------------------|
-| metadataSizeN | The size of the serialized single message metadata (protobuf) |
-| metadataN     | Single message metadata                                        |
-| payloadN      | Message payload passed by application                          |
-
-Each metadata field looks like this:
-
-| Field                      | Description                                              |
-|:---------------------------|:----------------------------------------------------------|
-| properties                 | Application-defined properties                            |
-| partition key *(optional)* | Key to indicate the hashing to a particular partition     |
-| payload_size               | Size of the payload for the single message in the batch   |
-
-When compression is enabled, the whole batch will be compressed at once.
-
-## Interactions
-
-### Connection establishment
-
-After opening a TCP connection to a broker, typically on port 6650, the client
-is responsible for initiating the session.
-
-![Connect interaction](/assets/binary-protocol-connect.png)
-
-After receiving a `Connected` response from the broker, the client can
-consider the connection ready to use. Alternatively, if the broker fails to
-validate the client's authentication, it will reply with an `Error` command and
-close the TCP connection.
-
-Example:
-
-```protobuf
-
-message CommandConnect {
-  "client_version" : "Pulsar-Client-Java-v1.15.2",
-  "auth_method_name" : "my-authentication-plugin",
-  "auth_data" : "my-auth-data",
-  "protocol_version" : 6
-}
-
-```
-
-Fields:
- * `client_version` → String based identifier. Format is not enforced
- * `auth_method_name` → *(optional)* Name of the authentication plugin if auth
-   enabled
- * `auth_data` → *(optional)* Plugin specific authentication data
- * `protocol_version` → Indicates the protocol version supported by the
-   client. Broker will not send commands introduced in newer revisions of the
-   protocol. Broker might be enforcing a minimum version
-
-```protobuf
-
-message CommandConnected {
-  "server_version" : "Pulsar-Broker-v1.15.2",
-  "protocol_version" : 6
-}
-
-```
-
-Fields:
- * `server_version` → String identifier of broker version
- * `protocol_version` → Protocol version supported by the broker. Client
-   must not attempt to send commands introduced in newer revisions of the
-   protocol
-
-### Keep Alive
-
-To identify prolonged network partitions between clients and brokers or cases
-in which a machine crashes without interrupting the TCP connection on the remote
-end (eg: power outage, kernel panic, hard reboot...), we have introduced a
-mechanism to probe for the availability status of the remote peer.
-
-Both clients and brokers send `Ping` commands periodically and close the
-socket if a `Pong` response is not received within a timeout (the default
-used by the broker is 60s).
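-
-A minimal sketch of the timeout half of this mechanism is shown below. It is conceptual only: the class name and the `Runnable` stand-ins are made up for the example, the real wire format of `Ping`/`Pong` follows the framing rules above, and the 60-second figure is simply the broker's default mentioned earlier.
-
-```java
-
-import java.util.concurrent.Executors;
-import java.util.concurrent.ScheduledExecutorService;
-import java.util.concurrent.TimeUnit;
-
-/** Conceptual keep-alive timer: probe periodically, close the socket on a missed reply. */
-public class KeepAliveTimer {
-    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
-    private volatile boolean awaitingPong = false;
-
-    /** sendPing and closeSocket stand in for the real transport actions. */
-    public void start(Runnable sendPing, Runnable closeSocket, long intervalSeconds) {
-        scheduler.scheduleAtFixedRate(() -> {
-            if (awaitingPong) {
-                // No Pong arrived within a full interval: treat the peer as unreachable.
-                closeSocket.run();
-                scheduler.shutdown();
-            } else {
-                awaitingPong = true;
-                sendPing.run();
-            }
-        }, intervalSeconds, intervalSeconds, TimeUnit.SECONDS);
-    }
-
-    /** Call this when a Pong command is received from the peer. */
-    public void onPongReceived() {
-        awaitingPong = false;
-    }
-}
-
-```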
- -A valid implementation of a Pulsar client is not required to send the `Ping` -probe, though it is required to promptly reply after receiving one from the -broker in order to prevent the remote side from forcibly closing the TCP connection. - - -### Producer - -In order to send messages, a client needs to establish a producer. When creating -a producer, the broker will first verify that this particular client is -authorized to publish on the topic. - -Once the client gets confirmation of the producer creation, it can publish -messages to the broker, referring to the producer id negotiated before. - -![Producer interaction](/assets/binary-protocol-producer.png) - -##### Command Producer - -```protobuf - -message CommandProducer { - "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic", - "producer_id" : 1, - "request_id" : 1 -} - -``` - -Parameters: - * `topic` → Complete topic name to where you want to create the producer on - * `producer_id` → Client generated producer identifier. Needs to be unique - within the same connection - * `request_id` → Identifier for this request. Used to match the response with - the originating request. Needs to be unique within the same connection - * `producer_name` → *(optional)* If a producer name is specified, the name will - be used, otherwise the broker will generate a unique name. Generated - producer name is guaranteed to be globally unique. Implementations are - expected to let the broker generate a new producer name when the producer - is initially created, then reuse it when recreating the producer after - reconnections. - -The broker will reply with either `ProducerSuccess` or `Error` commands. - -##### Command ProducerSuccess - -```protobuf - -message CommandProducerSuccess { - "request_id" : 1, - "producer_name" : "generated-unique-producer-name" -} - -``` - -Parameters: - * `request_id` → Original id of the `CreateProducer` request - * `producer_name` → Generated globally unique producer name or the name - specified by the client, if any. - -##### Command Send - -Command `Send` is used to publish a new message within the context of an -already existing producer. This command is used in a frame that includes command -as well as message payload, for which the complete format is specified in the [payload commands](#payload-commands) section. - -```protobuf - -message CommandSend { - "producer_id" : 1, - "sequence_id" : 0, - "num_messages" : 1 -} - -``` - -Parameters: - * `producer_id` → id of an existing producer - * `sequence_id` → each message has an associated sequence id which is expected - to be implemented with a counter starting at 0. The `SendReceipt` that - acknowledges the effective publishing of messages will refer to it by - its sequence id. - * `num_messages` → *(optional)* Used when publishing a batch of messages at - once. - -##### Command SendReceipt - -After a message has been persisted on the configured number of replicas, the -broker will send the acknowledgment receipt to the producer. - -```protobuf - -message CommandSendReceipt { - "producer_id" : 1, - "sequence_id" : 0, - "message_id" : { - "ledgerId" : 123, - "entryId" : 456 - } -} - -``` - -Parameters: - * `producer_id` → id of producer originating the send request - * `sequence_id` → sequence id of the published message - * `message_id` → message id assigned by the system to the published message - Unique within a single cluster. 
Message id is composed of 2 longs, `ledgerId`
-   and `entryId`, reflecting that this unique id is assigned when the message
-   is appended to a BookKeeper ledger
-
-
-##### Command CloseProducer
-
-**Note**: *This command can be sent by either producer or broker*.
-
-When receiving a `CloseProducer` command, the broker will stop accepting any
-more messages for the producer, wait until all pending messages are persisted
-and then reply `Success` to the client.
-
-The broker can send a `CloseProducer` command to the client when it's
-performing a graceful failover (e.g. the broker is being restarted, or the
-topic is being unloaded by the load balancer to be transferred to a different
-broker).
-
-When receiving the `CloseProducer`, the client is expected to go through the
-service discovery lookup again and recreate the producer. The TCP connection
-is not affected.
-
-### Consumer
-
-A consumer is used to attach to a subscription and consume messages from it.
-After every reconnection, a client needs to subscribe to the topic. If a
-subscription is not already there, a new one will be created.
-
-![Consumer](/assets/binary-protocol-consumer.png)
-
-#### Flow control
-
-After the consumer is ready, the client needs to *give permission* to the
-broker to push messages. This is done with the `Flow` command.
-
-A `Flow` command gives additional *permits* to send messages to the consumer.
-A typical consumer implementation will use a queue to accumulate these
-messages before the application is ready to consume them.
-
-After the application has dequeued half of the messages in the queue, the
-consumer sends permits to the broker to ask for more messages (equal to half
-of the queue size).
-
-For example, if the queue size is 1000 and the consumer has consumed 500
-messages from the queue, the consumer sends the broker permits for 500 more
-messages.
-
-##### Command Subscribe
-
-```protobuf
-
-message CommandSubscribe {
-  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
-  "subscription" : "my-subscription-name",
-  "subType" : "Exclusive",
-  "consumer_id" : 1,
-  "request_id" : 1
-}
-
-```
-
-Parameters:
- * `topic` → Complete topic name on which to create the consumer
- * `subscription` → Subscription name
- * `subType` → Subscription type: Exclusive, Shared, Failover, Key_Shared
- * `consumer_id` → Client-generated consumer identifier. Needs to be unique
-   within the same connection
- * `request_id` → Identifier for this request. Used to match the response with
-   the originating request. Needs to be unique within the same connection
- * `consumer_name` → *(optional)* Clients can specify a consumer name. This
-   name can be used to track a particular consumer in the stats. Also, in the
-   Failover subscription type, the name is used to decide which consumer is
-   elected as *master* (the one receiving messages): consumers are sorted by
-   their consumer name and the first one is elected master.
-
-##### Command Flow
-
-```protobuf
-
-message CommandFlow {
-  "consumer_id" : 1,
-  "messagePermits" : 1000
-}
-
-```
-
-Parameters:
-* `consumer_id` → Id of an already established consumer
-* `messagePermits` → Number of additional permits to grant to the broker for
-  pushing more messages
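-
-The half-queue flow-control strategy described above can be sketched as
-follows. This is a minimal illustration, not code from any real Pulsar client;
-the `sendFlowCommand()` helper is hypothetical.
-
-```java
-
-// Tracks consumed messages and re-issues permits once half the queue is used.
-class PermitTracker {
-    private final int queueSize; // e.g. 1000
-    private int consumedSinceLastFlow = 0;
-
-    PermitTracker(int queueSize) {
-        this.queueSize = queueSize;
-        sendFlowCommand(queueSize); // initial permits for a full queue
-    }
-
-    // called each time the application dequeues one message
-    void onMessageDequeued() {
-        consumedSinceLastFlow++;
-        if (consumedSinceLastFlow >= queueSize / 2) {
-            sendFlowCommand(consumedSinceLastFlow); // ask for that many more
-            consumedSinceLastFlow = 0;
-        }
-    }
-
-    private void sendFlowCommand(int permits) { /* send a Flow command */ }
-}
-
-```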
-
-##### Command Message
-
-Command `Message` is used by the broker to push messages to an existing
-consumer, within the limits of the given permits.
-
-This command is used in a frame that includes the message payload as well, for
-which the complete format is specified in the [payload commands](#payload-commands)
-section.
-
-```protobuf
-
-message CommandMessage {
-  "consumer_id" : 1,
-  "message_id" : {
-    "ledgerId" : 123,
-    "entryId" : 456
-  }
-}
-
-```
-
-##### Command Ack
-
-An `Ack` is used to signal to the broker that a given message has been
-successfully processed by the application and can be discarded by the broker.
-
-In addition, the broker will also maintain the consumer position based on the
-acknowledged messages.
-
-```protobuf
-
-message CommandAck {
-  "consumer_id" : 1,
-  "ack_type" : "Individual",
-  "message_id" : {
-    "ledgerId" : 123,
-    "entryId" : 456
-  }
-}
-
-```
-
-Parameters:
- * `consumer_id` → Id of an already established consumer
- * `ack_type` → Type of acknowledgment: `Individual` or `Cumulative`
- * `message_id` → Id of the message to acknowledge
- * `validation_error` → *(optional)* Indicates that the consumer has discarded
-   the messages due to: `UncompressedSizeCorruption`,
-   `DecompressionError`, `ChecksumMismatch`, `BatchDeSerializeError`
- * `properties` → *(optional)* Reserved configuration items
- * `txnid_most_bits` → *(optional)* The most significant bits of the
-   transaction ID (the Transaction Coordinator ID). Together, `txnid_most_bits`
-   and `txnid_least_bits` uniquely identify a transaction.
- * `txnid_least_bits` → *(optional)* The ID of the transaction opened in a
-   transaction coordinator. Together, `txnid_most_bits` and `txnid_least_bits`
-   uniquely identify a transaction.
- * `request_id` → *(optional)* ID for handling response and timeout.
-
-##### Command AckResponse
-
-An `AckResponse` is the broker's response to an `Ack` request sent by the
-client. It contains the `consumer_id` sent in the request.
-If a transaction is used, it contains both the Transaction ID and the Request
-ID that were sent in the request. The client finishes the specific request
-according to the Request ID. If the `error` field is set, it indicates that
-the request has failed.
-
-An example of an `AckResponse` for a transactional acknowledgment:
-
-```protobuf
-
-message CommandAckResponse {
-  "consumer_id" : 1,
-  "txnid_least_bits" : 0,
-  "txnid_most_bits" : 1,
-  "request_id" : 5
-}
-
-```
-
-##### Command CloseConsumer
-
-**Note**: *This command can be sent by either consumer or broker*.
-
-This command behaves the same as [`CloseProducer`](#command-closeproducer).
-
-##### Command RedeliverUnacknowledgedMessages
-
-A consumer can ask the broker to redeliver some or all of the pending messages
-that were pushed to that particular consumer and not yet acknowledged.
-
-The protobuf object accepts a list of message ids that the consumer wants to
-be redelivered. If the list is empty, the broker will redeliver all the
-pending messages.
-
-On redelivery, messages can be sent to the same consumer or, in the case of a
-shared subscription, spread across all available consumers.
-
-
-##### Command ReachedEndOfTopic
-
-This is sent by a broker to a particular consumer whenever the topic has been
-"terminated" and all the messages on the subscription were acknowledged.
-
-The client should use this command to notify the application that no more
-messages are coming from the topic.
-
-##### Command ConsumerStats
-
-This command is sent by the client to retrieve Subscriber and Consumer level
-stats from the broker.
-
-Parameters:
- * `request_id` → Id of the request, used to correlate the request
-   and the response.
- * `consumer_id` → Id of an already established consumer.
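-
-The following is an illustrative `ConsumerStats` request, in the same
-pseudo-JSON convention used for the other command examples in this document
-(the field values are placeholders):
-
-```protobuf
-
-message CommandConsumerStats {
-  "request_id" : 1,
-  "consumer_id" : 1
-}
-
-```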
- -##### Command ConsumerStatsResponse - -This is the broker's response to ConsumerStats request by the client. -It contains the Subscriber and Consumer level stats of the `consumer_id` sent in the request. -If the `error_code` or the `error_message` field is set it indicates that the request has failed. - -##### Command Unsubscribe - -This command is sent by the client to unsubscribe the `consumer_id` from the associated topic. -Parameters: - * `request_id` → Id of the request. - * `consumer_id` → Id of an already established consumer which needs to unsubscribe. - - -## Service discovery - -### Topic lookup - -Topic lookup needs to be performed each time a client needs to create or -reconnect a producer or a consumer. Lookup is used to discover which particular -broker is serving the topic we are about to use. - -Lookup can be done with a REST call as described in the [admin API](admin-api-topics.md#lookup-of-topic) -docs. - -Since Pulsar-1.16 it is also possible to perform the lookup within the binary -protocol. - -For the sake of example, let's assume we have a service discovery component -running at `pulsar://broker.example.com:6650` - -Individual brokers will be running at `pulsar://broker-1.example.com:6650`, -`pulsar://broker-2.example.com:6650`, ... - -A client can use a connection to the discovery service host to issue a -`LookupTopic` command. The response can either be a broker hostname to -connect to, or a broker hostname to which retry the lookup. - -The `LookupTopic` command has to be used in a connection that has already -gone through the `Connect` / `Connected` initial handshake. - -![Topic lookup](/assets/binary-protocol-topic-lookup.png) - -```protobuf - -message CommandLookupTopic { - "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic", - "request_id" : 1, - "authoritative" : false -} - -``` - -Fields: - * `topic` → Topic name to lookup - * `request_id` → Id of the request that will be passed with its response - * `authoritative` → Initial lookup request should use false. When following a - redirect response, client should pass the same value contained in the - response - -##### LookupTopicResponse - -Example of response with successful lookup: - -```protobuf - -message CommandLookupTopicResponse { - "request_id" : 1, - "response" : "Connect", - "brokerServiceUrl" : "pulsar://broker-1.example.com:6650", - "brokerServiceUrlTls" : "pulsar+ssl://broker-1.example.com:6651", - "authoritative" : true -} - -``` - -Example of lookup response with redirection: - -```protobuf - -message CommandLookupTopicResponse { - "request_id" : 1, - "response" : "Redirect", - "brokerServiceUrl" : "pulsar://broker-2.example.com:6650", - "brokerServiceUrlTls" : "pulsar+ssl://broker-2.example.com:6651", - "authoritative" : true -} - -``` - -In this second case, we need to reissue the `LookupTopic` command request -to `broker-2.example.com` and this broker will be able to give a definitive -answer to the lookup request. - -### Partitioned topics discovery - -Partitioned topics metadata discovery is used to find out if a topic is a -"partitioned topic" and how many partitions were set up. - -If the topic is marked as "partitioned", the client is expected to create -multiple producers or consumers, one for each partition, using the `partition-X` -suffix. - -This information only needs to be retrieved the first time a producer or -consumer is created. There is no need to do this after reconnections. 
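-
-As a concrete illustration of the `partition-X` suffix described above (the
-topic name and partition count below are made up), a client derives the
-per-partition topic names like this:
-
-```java
-
-String topic = "persistent://my-property/my-cluster/my-namespace/my-topic";
-int partitions = 4; // as returned in PartitionedTopicMetadataResponse
-for (int i = 0; i < partitions; i++) {
-    // e.g. persistent://my-property/my-cluster/my-namespace/my-topic-partition-0
-    String partitionTopic = topic + "-partition-" + i;
-    // create one producer or consumer for each partitionTopic
-}
-
-```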
- -The discovery of partitioned topics metadata works very similar to the topic -lookup. The client send a request to the service discovery address and the -response will contain actual metadata. - -##### Command PartitionedTopicMetadata - -```protobuf - -message CommandPartitionedTopicMetadata { - "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic", - "request_id" : 1 -} - -``` - -Fields: - * `topic` → the topic for which to check the partitions metadata - * `request_id` → Id of the request that will be passed with its response - - -##### Command PartitionedTopicMetadataResponse - -Example of response with metadata: - -```protobuf - -message CommandPartitionedTopicMetadataResponse { - "request_id" : 1, - "response" : "Success", - "partitions" : 32 -} - -``` - -## Protobuf interface - -All Pulsar's Protobuf definitions can be found {@inject: github:here:/pulsar-common/src/main/proto/PulsarApi.proto}. diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/functions-cli.md b/site2/website/versioned_docs/version-2.9.0-deprecated/functions-cli.md deleted file mode 100644 index c9fcfa201525f0..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/functions-cli.md +++ /dev/null @@ -1,198 +0,0 @@ ---- -id: functions-cli -title: Pulsar Functions command line tool -sidebar_label: "Reference: CLI" -original_id: functions-cli ---- - -The following tables list Pulsar Functions command-line tools. You can learn Pulsar Functions modes, commands, and parameters. - -## localrun - -Run Pulsar Functions locally, rather than deploying it to the Pulsar cluster. - -Name | Description | Default ----|---|--- -auto-ack | Whether or not the framework acknowledges messages automatically. | true | -broker-service-url | The URL for the Pulsar broker. | | -classname | The class name of a Pulsar Function.| | -client-auth-params | Client authentication parameter. | | -client-auth-plugin | Client authentication plugin using which function-process can connect to broker. | | -CPU | The CPU in cores that need to be allocated per function instance (applicable only to docker runtime).| | -custom-schema-inputs | The map of input topics to Schema class names (as a JSON string). | | -custom-serde-inputs | The map of input topics to SerDe class names (as a JSON string). | | -dead-letter-topic | The topic where all messages that were not processed successfully are sent. This parameter is not supported in Python Functions. | | -disk | The disk in bytes that need to be allocated per function instance (applicable only to docker runtime). | | -fqfn | The Fully Qualified Function Name (FQFN) for the function. | | -function-config-file | The path to a YAML config file specifying the configuration of a Pulsar Function. | | -go | Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -hostname-verification-enabled | Enable hostname verification. | false -inputs | The input topic or topics of a Pulsar Function (multiple topics can be specified as a comma-separated list). | | -jar | Path to the jar file for the function (if the function is written in Java). 
It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -instance-id-offset | Start the instanceIds from this offset. | 0 -log-topic | The topic to which the logs a Pulsar Function are produced. | | -max-message-retries | How many times should we try to process a message before giving up. | | -name | The name of a Pulsar Function. | | -namespace | The namespace of a Pulsar Function. | | -output | The output topic of a Pulsar Function (If none is specified, no output is written). | | -output-serde-classname | The SerDe class to be used for messages output by the function. | | -parallelism | The parallelism factor of a Pulsar Function (i.e. the number of function instances to run). | | -processing-guarantees | The processing guarantees (delivery semantics) applied to the function. Available values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]. | ATLEAST_ONCE -py | Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -ram | The ram in bytes that need to be allocated per function instance (applicable only to process/docker runtime). | | -retain-ordering | Function consumes and processes messages in order. | | -schema-type | The builtin schema type or custom schema class name to be used for messages output by the function. | | -sliding-interval-count | The number of messages after which the window slides. | | -sliding-interval-duration-ms | The time duration after which the window slides. | | -subs-name | Pulsar source subscription name if user wants a specific subscription-name for the input-topic consumer. | | -tenant | The tenant of a Pulsar Function. | | -timeout-ms | The message timeout in milliseconds. | | -tls-allow-insecure | Allow insecure tls connection. | false -tls-trust-cert-path | tls trust cert file path. | | -topics-pattern | The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (only supported in Java Function). | | -use-tls | Use tls connection. | false -user-config | User-defined config key/values. | | -window-length-count | The number of messages per window. | | -window-length-duration-ms | The time duration of the window in milliseconds. | | - - -## create - -Create and deploy a Pulsar Function in cluster mode. - -Name | Description | Default ----|---|--- -auto-ack | Whether or not the framework acknowledges messages automatically. | true | -classname | The class name of a Pulsar Function. | | -CPU | The CPU in cores that need to be allocated per function instance (applicable only to docker runtime).| | -custom-runtime-options | A string that encodes options to customize the runtime, see docs for configured runtime for details | | -custom-schema-inputs | The map of input topics to Schema class names (as a JSON string). | | -custom-serde-inputs | The map of input topics to SerDe class names (as a JSON string). | | -dead-letter-topic | The topic where all messages that were not processed successfully are sent. This parameter is not supported in Python Functions. 
| | -disk | The disk in bytes that need to be allocated per function instance (applicable only to docker runtime). | | -fqfn | The Fully Qualified Function Name (FQFN) for the function. | | -function-config-file | The path to a YAML config file specifying the configuration of a Pulsar Function. | | -go | Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -inputs | The input topic or topics of a Pulsar Function (multiple topics can be specified as a comma-separated list). | | -jar | Path to the jar file for the function (if the function is written in Java). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -log-topic | The topic to which the logs of a Pulsar Function are produced. | | -max-message-retries | How many times should we try to process a message before giving up. | | -name | The name of a Pulsar Function. | | -namespace | The namespace of a Pulsar Function. | | -output | The output topic of a Pulsar Function (If none is specified, no output is written). | | -output-serde-classname | The SerDe class to be used for messages output by the function. | | -parallelism | The parallelism factor of a Pulsar Function (i.e. the number of function instances to run). | | -processing-guarantees | The processing guarantees (delivery semantics) applied to the function. Available values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]. | ATLEAST_ONCE -py | Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -ram | The ram in bytes that need to be allocated per function instance (applicable only to process/docker runtime). | | -retain-ordering | Function consumes and processes messages in order. | | -schema-type | The builtin schema type or custom schema class name to be used for messages output by the function. | | -sliding-interval-count | The number of messages after which the window slides. | | -sliding-interval-duration-ms | The time duration after which the window slides. | | -subs-name | Pulsar source subscription name if user wants a specific subscription-name for the input-topic consumer. | | -tenant | The tenant of a Pulsar Function. | | -timeout-ms | The message timeout in milliseconds. | | -topics-pattern | The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (only supported in Java Function). | | -user-config | User-defined config key/values. | | -window-length-count | The number of messages per window. | | -window-length-duration-ms | The time duration of the window in milliseconds. | | - -## delete - -Delete a Pulsar Function that is running on a Pulsar cluster. - -Name | Description | Default ----|---|--- -fqfn | The Fully Qualified Function Name (FQFN) for the function. | | -name | The name of a Pulsar Function. | | -namespace | The namespace of a Pulsar Function. 
| | -tenant | The tenant of a Pulsar Function. | | - -## update - -Update a Pulsar Function that has been deployed to a Pulsar cluster. - -Name | Description | Default ----|---|--- -auto-ack | Whether or not the framework acknowledges messages automatically. | true | -classname | The class name of a Pulsar Function. | | -CPU | The CPU in cores that need to be allocated per function instance (applicable only to docker runtime). | | -custom-runtime-options | A string that encodes options to customize the runtime, see docs for configured runtime for details | | -custom-schema-inputs | The map of input topics to Schema class names (as a JSON string). | | -custom-serde-inputs | The map of input topics to SerDe class names (as a JSON string). | | -dead-letter-topic | The topic where all messages that were not processed successfully are sent. This parameter is not supported in Python Functions. | | -disk | The disk in bytes that need to be allocated per function instance (applicable only to docker runtime). | | -fqfn | The Fully Qualified Function Name (FQFN) for the function. | | -function-config-file | The path to a YAML config file specifying the configuration of a Pulsar Function. | | -go | Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -inputs | The input topic or topics of a Pulsar Function (multiple topics can be specified as a comma-separated list). | | -jar | Path to the jar file for the function (if the function is written in Java). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -log-topic | The topic to which the logs of a Pulsar Function are produced. | | -max-message-retries | How many times should we try to process a message before giving up. | | -name | The name of a Pulsar Function. | | -namespace | The namespace of a Pulsar Function. | | -output | The output topic of a Pulsar Function (If none is specified, no output is written). | | -output-serde-classname | The SerDe class to be used for messages output by the function. | | -parallelism | The parallelism factor of a Pulsar Function (i.e. the number of function instances to run). | | -processing-guarantees | The processing guarantees (delivery semantics) applied to the function. Available values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]. | ATLEAST_ONCE -py | Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -ram | The ram in bytes that need to be allocated per function instance (applicable only to process/docker runtime). | | -retain-ordering | Function consumes and processes messages in order. | | -schema-type | The builtin schema type or custom schema class name to be used for messages output by the function. | | -sliding-interval-count | The number of messages after which the window slides. | | -sliding-interval-duration-ms | The time duration after which the window slides. 
| |
-subs-name | Pulsar source subscription name if user wants a specific subscription-name for the input-topic consumer. | |
-tenant | The tenant of a Pulsar Function. | |
-timeout-ms | The message timeout in milliseconds. | |
-topics-pattern | The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (only supported in Java Function). | |
-update-auth-data | Whether or not to update the auth data. | false
-user-config | User-defined config key/values. | |
-window-length-count | The number of messages per window. | |
-window-length-duration-ms | The time duration of the window in milliseconds. | |
-
-## get
-
-Fetch information about a Pulsar Function.
-
-Name | Description | Default
----|---|---
-fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
-name | The name of a Pulsar Function. | |
-namespace | The namespace of a Pulsar Function. | |
-tenant | The tenant of a Pulsar Function. | |
-
-## restart
-
-Restart a function instance.
-
-Name | Description | Default
----|---|---
-fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
-instance-id | The function instanceId (restart all instances if instance-id is not provided). | |
-name | The name of a Pulsar Function. | |
-namespace | The namespace of a Pulsar Function. | |
-tenant | The tenant of a Pulsar Function. | |
-
-## stop
-
-Stop a function instance.
-
-Name | Description | Default
----|---|---
-fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
-instance-id | The function instanceId (stop all instances if instance-id is not provided). | |
-name | The name of a Pulsar Function. | |
-namespace | The namespace of a Pulsar Function. | |
-tenant | The tenant of a Pulsar Function. | |
-
-## start
-
-Start a stopped function instance.
-
-Name | Description | Default
----|---|---
-fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
-instance-id | The function instanceId (start all instances if instance-id is not provided). | |
-name | The name of a Pulsar Function. | |
-namespace | The namespace of a Pulsar Function. | |
-tenant | The tenant of a Pulsar Function. | |
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/functions-debug.md b/site2/website/versioned_docs/version-2.9.0-deprecated/functions-debug.md
deleted file mode 100644
index c1f19abda64657..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/functions-debug.md
+++ /dev/null
@@ -1,538 +0,0 @@
----
-id: functions-debug
-title: Debug Pulsar Functions
-sidebar_label: "How-to: Debug"
-original_id: functions-debug
----
-
-You can use the following methods to debug Pulsar Functions:
-
-* [Captured stderr](functions-debug.md#captured-stderr)
-* [Use unit test](functions-debug.md#use-unit-test)
-* [Debug with localrun mode](functions-debug.md#debug-with-localrun-mode)
-* [Use log topic](functions-debug.md#use-log-topic)
-* [Use Functions CLI](functions-debug.md#use-functions-cli)
-
-## Captured stderr
-
-Function startup information and captured stderr output are written to `logs/functions/<tenant>/<namespace>/<name>/<name>-<instance>.log`.
-
-This is useful for debugging why a function fails to start.
-
-## Use unit test
-
-A Pulsar Function is a function with inputs and outputs, so you can test a Pulsar Function in a similar way as you test any other function.
-
-For example, if you have the following Pulsar Function:
-
-```java
-
-import java.util.function.Function;
-
-public class JavaNativeExclamationFunction implements Function<String, String> {
-    @Override
-    public String apply(String input) {
-        return String.format("%s!", input);
-    }
-}
-
-```
-
-You can write a simple unit test to test the Pulsar Function.
-
-:::tip

-Pulsar uses TestNG for testing.
-
-:::
-
-```java
-
-@Test
-public void testJavaNativeExclamationFunction() {
-    JavaNativeExclamationFunction exclamation = new JavaNativeExclamationFunction();
-    String output = exclamation.apply("foo");
-    Assert.assertEquals(output, "foo!");
-}
-
-```
-
-The following Pulsar Function implements the `org.apache.pulsar.functions.api.Function` interface.
-
-```java
-
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-
-public class ExclamationFunction implements Function<String, String> {
-    @Override
-    public String process(String input, Context context) {
-        return String.format("%s!", input);
-    }
-}
-
-```
-
-In this situation, you can write a unit test for this function as well. Remember to mock the `Context` parameter. The following is an example.
-
-:::tip
-
-Pulsar uses TestNG for testing.
-
-:::
-
-```java
-
-@Test
-public void testExclamationFunction() {
-    ExclamationFunction exclamation = new ExclamationFunction();
-    String output = exclamation.process("foo", mock(Context.class));
-    Assert.assertEquals(output, "foo!");
-}
-
-```
-
-## Debug with localrun mode
-When you run a Pulsar Function in localrun mode, it launches an instance of the Function on your local machine as a thread.
-
-In this mode, a Pulsar Function consumes and produces actual data to a Pulsar cluster, and mirrors how the function actually runs in a Pulsar cluster.
-
-:::note
-
-Currently, debugging with localrun mode is only supported by Pulsar Functions written in Java. You need Pulsar version 2.4.0 or later to do the following. Even though localrun is available in versions earlier than Pulsar 2.4.0, you cannot debug with localrun mode programmatically or run Functions as threads.
-
-:::
-
-You can launch your function in the following manner.
-
-```java
-
-FunctionConfig functionConfig = new FunctionConfig();
-functionConfig.setName(functionName);
-functionConfig.setInputs(Collections.singleton(sourceTopic));
-functionConfig.setClassName(ExclamationFunction.class.getName());
-functionConfig.setRuntime(FunctionConfig.Runtime.JAVA);
-functionConfig.setOutput(sinkTopic);
-
-LocalRunner localRunner = LocalRunner.builder().functionConfig(functionConfig).build();
-localRunner.start(true);
-
-```
-
-This way, you can easily debug functions using an IDE. Set breakpoints and manually step through a function to debug with real data.
-
-The following example illustrates how to programmatically launch a function in localrun mode.
-
-```java
-
-public class ExclamationFunction implements Function<String, String> {
-
-    @Override
-    public String process(String s, Context context) throws Exception {
-        return s + "!";
-    }
-
-    public static void main(String[] args) throws Exception {
-        FunctionConfig functionConfig = new FunctionConfig();
-        functionConfig.setName("exclamation");
-        functionConfig.setInputs(Collections.singleton("input"));
-        functionConfig.setClassName(ExclamationFunction.class.getName());
-        functionConfig.setRuntime(FunctionConfig.Runtime.JAVA);
-        functionConfig.setOutput("output");
-
-        LocalRunner localRunner = LocalRunner.builder().functionConfig(functionConfig).build();
-        localRunner.start(false);
-    }
-}
-
-```
-
-To use localrun mode programmatically, add the following dependency.
-
-```xml
-
-<dependency>
-  <groupId>org.apache.pulsar</groupId>
-  <artifactId>pulsar-functions-local-runner</artifactId>
-  <version>${pulsar.version}</version>
-</dependency>
-
-```
-
-For complete code samples, see [here](https://github.com/jerrypeng/pulsar-functions-demos/tree/master/debugging).
-
-:::note
-
-Debugging with localrun mode for Pulsar Functions written in other languages will be supported soon.
-
-:::
-
-## Use log topic
-
-In Pulsar Functions, you can generate log information defined in functions to a specified log topic. You can configure consumers to consume messages from a specified log topic to check the log information.
-
-![Pulsar Functions core programming model](/assets/pulsar-functions-overview.png)
-
-**Example**
-
-```java
-
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-import org.slf4j.Logger;
-
-public class LoggingFunction implements Function<String, Void> {
-    @Override
-    public Void process(String input, Context context) {
-        Logger LOG = context.getLogger();
-        String messageId = new String(context.getMessageId());
-
-        if (input.contains("danger")) {
-            LOG.warn("A warning was received in message {}", messageId);
-        } else {
-            LOG.info("Message {} received\nContent: {}", messageId, input);
-        }
-
-        return null;
-    }
-}
-
-```
-
-As shown in the example above, you can get the logger via `context.getLogger()` and assign it to an `slf4j` `Logger` (the `LOG` variable), which you can then use to log the information you need from within the function. Meanwhile, you need to specify the topic to which the log information is produced.
-
-**Example**
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --log-topic persistent://public/default/logging-function-logs \
-  # Other function configs
-
-```
-
-The messages published to the log topic contain several properties for better reasoning:
-- `loglevel` -- the level of the log message.
-- `fqn` -- the fully qualified name of the function that pushes this log message.
-- `instance` -- the ID of the function instance that pushes this log message.
-
-## Use Functions CLI
-
-With [Pulsar Functions CLI](reference-pulsar-admin.md#functions), you can debug Pulsar Functions with the following subcommands:
-
-* `get`
-* `status`
-* `stats`
-* `list`
-* `trigger`
-
-:::tip
-
-For complete commands of the **Pulsar Functions CLI**, see [here](reference-pulsar-admin.md#functions).
-
-:::
-
-### `get`
-
-Get information about a Pulsar Function.
-
-**Usage**
-
-```bash
-
-$ pulsar-admin functions get options
-
-```
-
-**Options**
-
-|Flag|Description
-|---|---
-|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function.
-|`--name`|The name of a Pulsar Function.
-|`--namespace`|The namespace of a Pulsar Function.
-|`--tenant`|The tenant of a Pulsar Function.
- -:::tip - -`--fqfn` consists of `--name`, `--namespace` and `--tenant`, so you can specify either `--fqfn` or `--name`, `--namespace` and `--tenant`. - -::: - -**Example** - -You can specify `--fqfn` to get information about a Pulsar Function. - -```bash - -$ ./bin/pulsar-admin functions get public/default/ExclamationFunctio6 - -``` - -Optionally, you can specify `--name`, `--namespace` and `--tenant` to get information about a Pulsar Function. - -```bash - -$ ./bin/pulsar-admin functions get \ - --tenant public \ - --namespace default \ - --name ExclamationFunctio6 - -``` - -As shown below, the `get` command shows input, output, runtime, and other information about the _ExclamationFunctio6_ function. - -```json - -{ - "tenant": "public", - "namespace": "default", - "name": "ExclamationFunctio6", - "className": "org.example.test.ExclamationFunction", - "inputSpecs": { - "persistent://public/default/my-topic-1": { - "isRegexPattern": false - } - }, - "output": "persistent://public/default/test-1", - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "userConfig": {}, - "runtime": "JAVA", - "autoAck": true, - "parallelism": 1 -} - -``` - -### `status` - -Check the current status of a Pulsar Function. - -**Usage** - -```bash - -$ pulsar-admin functions status options - -``` - -**Options** - -|Flag|Description -|---|--- -|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function. -|`--instance-id`|The instance ID of a Pulsar Function
    If the `--instance-id` is not specified, it gets the IDs of all instances.
    -|`--name`|The name of a Pulsar Function. -|`--namespace`|The namespace of a Pulsar Function. -|`--tenant`|The tenant of a Pulsar Function. - -**Example** - -```bash - -$ ./bin/pulsar-admin functions status \ - --tenant public \ - --namespace default \ - --name ExclamationFunctio6 \ - -``` - -As shown below, the `status` command shows the number of instances, running instances, the instance running under the _ExclamationFunctio6_ function, received messages, successfully processed messages, system exceptions, the average latency and so on. - -```json - -{ - "numInstances" : 1, - "numRunning" : 1, - "instances" : [ { - "instanceId" : 0, - "status" : { - "running" : true, - "error" : "", - "numRestarts" : 0, - "numReceived" : 1, - "numSuccessfullyProcessed" : 1, - "numUserExceptions" : 0, - "latestUserExceptions" : [ ], - "numSystemExceptions" : 0, - "latestSystemExceptions" : [ ], - "averageLatency" : 0.8385, - "lastInvocationTime" : 1557734137987, - "workerId" : "c-standalone-fw-23ccc88ef29b-8080" - } - } ] -} - -``` - -### `stats` - -Get the current stats of a Pulsar Function. - -**Usage** - -```bash - -$ pulsar-admin functions stats options - -``` - -**Options** - -|Flag|Description -|---|--- -|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function. -|`--instance-id`|The instance ID of a Pulsar Function.
    If the `--instance-id` is not specified, it gets the IDs of all instances.
    -|`--name`|The name of a Pulsar Function. -|`--namespace`|The namespace of a Pulsar Function. -|`--tenant`|The tenant of a Pulsar Function. - -**Example** - -```bash - -$ ./bin/pulsar-admin functions stats \ - --tenant public \ - --namespace default \ - --name ExclamationFunctio6 \ - -``` - -The output is shown as follows: - -```json - -{ - "receivedTotal" : 1, - "processedSuccessfullyTotal" : 1, - "systemExceptionsTotal" : 0, - "userExceptionsTotal" : 0, - "avgProcessLatency" : 0.8385, - "1min" : { - "receivedTotal" : 0, - "processedSuccessfullyTotal" : 0, - "systemExceptionsTotal" : 0, - "userExceptionsTotal" : 0, - "avgProcessLatency" : null - }, - "lastInvocation" : 1557734137987, - "instances" : [ { - "instanceId" : 0, - "metrics" : { - "receivedTotal" : 1, - "processedSuccessfullyTotal" : 1, - "systemExceptionsTotal" : 0, - "userExceptionsTotal" : 0, - "avgProcessLatency" : 0.8385, - "1min" : { - "receivedTotal" : 0, - "processedSuccessfullyTotal" : 0, - "systemExceptionsTotal" : 0, - "userExceptionsTotal" : 0, - "avgProcessLatency" : null - }, - "lastInvocation" : 1557734137987, - "userMetrics" : { } - } - } ] -} - -``` - -### `list` - -List all Pulsar Functions running under a specific tenant and namespace. - -**Usage** - -```bash - -$ pulsar-admin functions list options - -``` - -**Options** - -|Flag|Description -|---|--- -|`--namespace`|The namespace of a Pulsar Function. -|`--tenant`|The tenant of a Pulsar Function. - -**Example** - -```bash - -$ ./bin/pulsar-admin functions list \ - --tenant public \ - --namespace default - -``` - -As shown below, the `list` command returns three functions running under the _public_ tenant and the _default_ namespace. - -```text - -ExclamationFunctio1 -ExclamationFunctio2 -ExclamationFunctio3 - -``` - -### `trigger` - -Trigger a specified Pulsar Function with a supplied value. This command simulates the execution process of a Pulsar Function and verifies it. - -**Usage** - -```bash - -$ pulsar-admin functions trigger options - -``` - -**Options** - -|Flag|Description -|---|--- -|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function. -|`--name`|The name of a Pulsar Function. -|`--namespace`|The namespace of a Pulsar Function. -|`--tenant`|The tenant of a Pulsar Function. -|`--topic`|The topic name that a Pulsar Function consumes from. -|`--trigger-file`|The path to a file that contains the data to trigger a Pulsar Function. -|`--trigger-value`|The value to trigger a Pulsar Function. - -**Example** - -```bash - -$ ./bin/pulsar-admin functions trigger \ - --tenant public \ - --namespace default \ - --name ExclamationFunctio6 \ - --topic persistent://public/default/my-topic-1 \ - --trigger-value "hello pulsar functions" - -``` - -As shown below, the `trigger` command returns the following result: - -```text - -This is my function! - -``` - -:::note - -You must specify the [entire topic name](getting-started-pulsar.md#topic-names) when using the `--topic` option. Otherwise, the following error occurs. 
- -```text - -Function in trigger function has unidentified topic -Reason: Function in trigger function has unidentified topic - -``` - -::: - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/functions-deploy.md b/site2/website/versioned_docs/version-2.9.0-deprecated/functions-deploy.md deleted file mode 100644 index 2a0d68d6c623c7..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/functions-deploy.md +++ /dev/null @@ -1,262 +0,0 @@ ---- -id: functions-deploy -title: Deploy Pulsar Functions -sidebar_label: "How-to: Deploy" -original_id: functions-deploy ---- - -## Requirements - -To deploy and manage Pulsar Functions, you need to have a Pulsar cluster running. There are several options for this: - -* You can run a [standalone cluster](getting-started-standalone.md) locally on your own machine. -* You can deploy a Pulsar cluster on [Kubernetes](deploy-kubernetes.md), [Amazon Web Services](deploy-aws.md), [bare metal](deploy-bare-metal.md), [DC/OS](https://dcos.io/), and more. - -If you run a non-[standalone](reference-terminology.md#standalone) cluster, you need to obtain the service URL for the cluster. How you obtain the service URL depends on how you deploy your Pulsar cluster. - -If you want to deploy and trigger Python user-defined functions, you need to install [the pulsar python client](http://pulsar.apache.org/docs/en/client-libraries-python/) on all the machines running [functions workers](functions-worker.md). - -## Command-line interface - -Pulsar Functions are deployed and managed using the [`pulsar-admin functions`](reference-pulsar-admin.md#functions) interface, which contains commands such as [`create`](reference-pulsar-admin.md#functions-create) for deploying functions in [cluster mode](#cluster-mode), [`trigger`](reference-pulsar-admin.md#trigger) for [triggering](#triggering-pulsar-functions) functions, [`list`](reference-pulsar-admin.md#list-2) for listing deployed functions. - -To learn more commands, refer to [`pulsar-admin functions`](reference-pulsar-admin.md#functions). - -### Default arguments - -When managing Pulsar Functions, you need to specify a variety of information about functions, including tenant, namespace, input and output topics, and so on. However, some parameters have default values if you do not specify values for them. The following table lists the default values. - -Parameter | Default -:---------|:------- -Function name | You can specify any value for the class name (except org, library, or similar class names). For example, when you specify the flag `--classname org.example.MyFunction`, the function name is `MyFunction`. -Tenant | Derived from names of the input topics. If the input topics are under the `marketing` tenant, which means the topic names have the form `persistent://marketing/{namespace}/{topicName}`, the tenant is `marketing`. -Namespace | Derived from names of the input topics. If the input topics are under the `asia` namespace under the `marketing` tenant, which means the topic names have the form `persistent://marketing/asia/{topicName}`, then the namespace is `asia`. -Output topic | `{input topic}-{function name}-output`. For example, if an input topic name of a function is `incoming`, and the function name is `exclamation`, then the name of the output topic is `incoming-exclamation-output`. 
-Subscription type | For `at-least-once` and `at-most-once` [processing guarantees](functions-overview.md#processing-guarantees), the [`SHARED`](concepts-messaging.md#shared) mode is applied by default; for `effectively-once` guarantees, the [`FAILOVER`](concepts-messaging.md#failover) mode is applied. -Processing guarantees | [`ATLEAST_ONCE`](functions-overview.md#processing-guarantees) -Pulsar service URL | `pulsar://localhost:6650` - -### Example of default arguments - -Take the `create` command as an example. - -```bash - -$ bin/pulsar-admin functions create \ - --jar my-pulsar-functions.jar \ - --classname org.example.MyFunction \ - --inputs my-function-input-topic1,my-function-input-topic2 - -``` - -The function has default values for the function name (`MyFunction`), tenant (`public`), namespace (`default`), subscription type (`SHARED`), processing guarantees (`ATLEAST_ONCE`), and Pulsar service URL (`pulsar://localhost:6650`). - -## Local run mode - -If you run a Pulsar Function in **local run** mode, it runs on the machine from which you enter the commands (on your laptop, an [AWS EC2](https://aws.amazon.com/ec2/) instance, and so on). The following is a [`localrun`](reference-pulsar-admin.md#localrun) command example. - -```bash - -$ bin/pulsar-admin functions localrun \ - --py myfunc.py \ - --classname myfunc.SomeFunction \ - --inputs persistent://public/default/input-1 \ - --output persistent://public/default/output-1 - -``` - -By default, the function connects to a Pulsar cluster running on the same machine, via a local [broker](reference-terminology.md#broker) service URL of `pulsar://localhost:6650`. If you use local run mode to run a function but connect it to a non-local Pulsar cluster, you can specify a different broker URL using the `--brokerServiceUrl` flag. The following is an example. - -```bash - -$ bin/pulsar-admin functions localrun \ - --broker-service-url pulsar://my-cluster-host:6650 \ - # Other function parameters - -``` - -## Cluster mode - -When you run a Pulsar Function in **cluster** mode, the function code is uploaded to a Pulsar broker and runs *alongside the broker* rather than in your [local environment](#local-run-mode). You can run a function in cluster mode using the [`create`](reference-pulsar-admin.md#create-1) command. - -```bash - -$ bin/pulsar-admin functions create \ - --py myfunc.py \ - --classname myfunc.SomeFunction \ - --inputs persistent://public/default/input-1 \ - --output persistent://public/default/output-1 - -``` - -### Update functions in cluster mode - -You can use the [`update`](reference-pulsar-admin.md#update-1) command to update a Pulsar Function running in cluster mode. The following command updates the function created in the [cluster mode](#cluster-mode) section. - -```bash - -$ bin/pulsar-admin functions update \ - --py myfunc.py \ - --classname myfunc.SomeFunction \ - --inputs persistent://public/default/new-input-topic \ - --output persistent://public/default/new-output-topic - -``` - -### Parallelism - -Pulsar Functions run as processes or threads, which are called **instances**. When you run a Pulsar Function, it runs as a single instance by default. With one localrun command, you can only run a single instance of a function. If you want to run multiple instances, you can use localrun command multiple times. - -When you create a function, you can specify the *parallelism* of a function (the number of instances to run). 
You can set the parallelism factor using the `--parallelism` flag of the [`create`](reference-pulsar-admin.md#functions-create) command. - -```bash - -$ bin/pulsar-admin functions create \ - --parallelism 3 \ - # Other function info - -``` - -You can adjust the parallelism of an already created function using the [`update`](reference-pulsar-admin.md#update-1) interface. - -```bash - -$ bin/pulsar-admin functions update \ - --parallelism 5 \ - # Other function - -``` - -If you specify a function configuration via YAML, use the `parallelism` parameter. The following is a config file example. - -```yaml - -# function-config.yaml -parallelism: 3 -inputs: -- persistent://public/default/input-1 -output: persistent://public/default/output-1 -# other parameters - -``` - -The following is corresponding update command. - -```bash - -$ bin/pulsar-admin functions update \ - --function-config-file function-config.yaml - -``` - -### Function instance resources - -When you run Pulsar Functions in [cluster mode](#cluster-mode), you can specify the resources that are assigned to each function [instance](#parallelism). - -Resource | Specified as | Runtimes -:--------|:----------------|:-------- -CPU | The number of cores | Kubernetes -RAM | The number of bytes | Process, Docker -Disk space | The number of bytes | Docker - -The following function creation command allocates 8 cores, 8 GB of RAM, and 10 GB of disk space to a function. - -```bash - -$ bin/pulsar-admin functions create \ - --jar target/my-functions.jar \ - --classname org.example.functions.MyFunction \ - --cpu 8 \ - --ram 8589934592 \ - --disk 10737418240 - -``` - -> #### Resources are *per instance* -> The resources that you apply to a given Pulsar Function are applied to each instance of the function. For example, if you apply 8 GB of RAM to a function with a parallelism of 5, you are applying 40 GB of RAM for the function in total. Make sure that you take the parallelism (the number of instances) factor into your resource calculations. - -### Use Package management service - -Package management enables version management and simplifies the upgrade and rollback processes for Functions, Sinks, and Sources. When you use the same function, sink and source in different namespaces, you can upload them to a common package management system. - -To use [Package management service](admin-api-packages.md), ensure that the package management service has been enabled in your cluster by setting the following properties in `broker.conf`. - -> Note: Package management service is not enabled by default. - -```yaml - -enablePackagesManagement=true -packagesManagementStorageProvider=org.apache.pulsar.packages.management.storage.bookkeeper.BookKeeperPackagesStorageProvider -packagesReplicas=1 -packagesManagementLedgerRootPath=/ledgers - -``` - -With Package management service enabled, you can upload your function packages by [upload a package](admin-api-packages.md#upload-a-package) to the service and get the [package URL](admin-api-packages.md#package-url). - -When you have a ready to use package URL, you can create the function with package URL by setting `--jar`, `--py`, or `--go` to the package URL with `pulsar-admin functions create`. - -## Trigger Pulsar Functions - -If a Pulsar Function is running in [cluster mode](#cluster-mode), you can **trigger** it at any time using the command line. Triggering a function means that you send a message with a specific value to the function and get the function output (if any) via the command line. 
- -> Triggering a function is to invoke a function by producing a message on one of the input topics. With the [`pulsar-admin functions trigger`](reference-pulsar-admin.md#trigger) command, you can send messages to functions without using the [`pulsar-client`](reference-cli-tools.md#pulsar-client) tool or a language-specific client library. - -To learn how to trigger a function, you can start with Python function that returns a simple string based on the input. - -```python - -# myfunc.py -def process(input): - return "This function has been triggered with a value of {0}".format(input) - -``` - -You can run the function in [local run mode](functions-deploy.md#local-run-mode). - -```bash - -$ bin/pulsar-admin functions create \ - --tenant public \ - --namespace default \ - --name myfunc \ - --py myfunc.py \ - --classname myfunc \ - --inputs persistent://public/default/in \ - --output persistent://public/default/out - -``` - -Then assign a consumer to listen on the output topic for messages from the `myfunc` function with the [`pulsar-client consume`](reference-cli-tools.md#consume) command. - -```bash - -$ bin/pulsar-client consume persistent://public/default/out \ - --subscription-name my-subscription - --num-messages 0 # Listen indefinitely - -``` - -And then you can trigger the function. - -```bash - -$ bin/pulsar-admin functions trigger \ - --tenant public \ - --namespace default \ - --name myfunc \ - --trigger-value "hello world" - -``` - -The consumer listening on the output topic produces something as follows in the log. - -``` - ------ got message ----- -This function has been triggered with a value of hello world - -``` - -> #### Topic info is not required -> In the `trigger` command, you only need to specify basic information about the function (tenant, namespace, and name). To trigger the function, you do not need to know the function input topics. diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/functions-develop.md b/site2/website/versioned_docs/version-2.9.0-deprecated/functions-develop.md deleted file mode 100644 index 2e29aa1c474005..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/functions-develop.md +++ /dev/null @@ -1,1600 +0,0 @@ ---- -id: functions-develop -title: Develop Pulsar Functions -sidebar_label: "How-to: Develop" -original_id: functions-develop ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -You learn how to develop Pulsar Functions with different APIs for Java, Python and Go. - -## Available APIs -In Java and Python, you have two options to write Pulsar Functions. In Go, you can use Pulsar Functions SDK for Go. - -Interface | Description | Use cases -:---------|:------------|:--------- -Language-native interface | No Pulsar-specific libraries or special dependencies required (only core libraries from Java/Python). | Functions that do not require access to the function [context](#context). -Pulsar Function SDK for Java/Python/Go | Pulsar-specific libraries that provide a range of functionality not provided by "native" interfaces. | Functions that require access to the function [context](#context). - -The language-native function, which adds an exclamation point to all incoming strings and publishes the resulting string to a topic, has no external dependencies. The following example is language-native function. 
- -````mdx-code-block - - - -```Java - -import java.util.function.Function; - -public class JavaNativeExclamationFunction implements Function { - @Override - public String apply(String input) { - return String.format("%s!", input); - } -} - -``` - -For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/JavaNativeExclamationFunction.java). - - - - -```python - -def process(input): - return "{}!".format(input) - -``` - -For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/python-examples/native_exclamation_function.py). - -:::note - -You can write Pulsar Functions in python2 or python3. However, Pulsar only looks for `python` as the interpreter. -If you're running Pulsar Functions on an Ubuntu system that only supports python3, you might fail to -start the functions. In this case, you can create a symlink. Your system will fail if -you subsequently install any other package that depends on Python 2.x. A solution is under development in [Issue 5518](https://github.com/apache/pulsar/issues/5518). - -```bash - -sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 10 - -``` - -::: - - - - -```` - -The following example uses Pulsar Functions SDK. -````mdx-code-block - - - -```Java - -import org.apache.pulsar.functions.api.Context; -import org.apache.pulsar.functions.api.Function; - -public class ExclamationFunction implements Function { - @Override - public String process(String input, Context context) { - return String.format("%s!", input); - } -} - -``` - -For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/ExclamationFunction.java). - - - - -```python - -from pulsar import Function - -class ExclamationFunction(Function): - def __init__(self): - pass - - def process(self, input, context): - return input + '!' - -``` - -For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/python-examples/exclamation_function.py). - - - - -```Go - -package main - -import ( - "context" - "fmt" - - "github.com/apache/pulsar/pulsar-function-go/pf" -) - -func HandleRequest(ctx context.Context, in []byte) error{ - fmt.Println(string(in) + "!") - return nil -} - -func main() { - pf.Start(HandleRequest) -} - -``` - -For complete code, see [here](https://github.com/apache/pulsar/blob/77cf09eafa4f1626a53a1fe2e65dd25f377c1127/pulsar-function-go/examples/inputFunc/inputFunc.go#L20-L36). - - - - -```` - -## Schema registry -Pulsar has a built-in schema registry and is bundled with popular schema types, such as Avro, JSON and Protobuf. Pulsar Functions can leverage the existing schema information from input topics and derive the input type. The schema registry applies for output topic as well. - -## SerDe -SerDe stands for **Ser**ialization and **De**serialization. Pulsar Functions uses SerDe when publishing data to and consuming data from Pulsar topics. How SerDe works by default depends on the language you use for a particular function. - -````mdx-code-block - - - -When you write Pulsar Functions in Java, the following basic Java types are built in and supported by default: `String`, `Double`, `Integer`, `Float`, `Long`, `Short`, and `Byte`. - -To customize Java types, you need to implement the following interface. 

```java

public interface SerDe<T> {
    T deserialize(byte[] input);
    byte[] serialize(T input);
}

```

SerDe works in the following ways in Java Functions.
- If the input and output topics have schema, Pulsar Functions use schema for SerDe.
- If the input or output topics do not exist, Pulsar Functions adopt the following rules to determine SerDe:
  - If the schema type is specified, Pulsar Functions use the specified schema type.
  - If SerDe is specified, Pulsar Functions use the specified SerDe, and the schema type for input and output topics is `Byte`.
  - If neither the schema type nor SerDe is specified, Pulsar Functions use the built-in SerDe. For non-primitive schema types, the built-in SerDe serializes and deserializes objects in the `JSON` format.



In Python, the default SerDe is identity, meaning that the type is serialized as whatever type the producer function returns.

You can specify the SerDe when [creating](functions-deploy.md#cluster-mode) or [running](functions-deploy.md#local-run-mode) functions.

```bash

$ bin/pulsar-admin functions create \
  --tenant public \
  --namespace default \
  --name my_function \
  --py my_function.py \
  --classname my_function.MyFunction \
  --custom-serde-inputs '{"input-topic-1":"Serde1","input-topic-2":"Serde2"}' \
  --output-serde-classname Serde3 \
  --output output-topic-1

```

This case contains two input topics: `input-topic-1` and `input-topic-2`, each of which is mapped to a different SerDe class (the map must be specified as a JSON string). The output topic, `output-topic-1`, uses the `Serde3` class for SerDe. At the moment, all Pulsar Functions logic, including the processing function and SerDe classes, must be contained within a single Python file.

When using Pulsar Functions for Python, you have three SerDe options:

1. You can use the [`IdentitySerDe`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L70), which leaves the data unchanged. The `IdentitySerDe` is the **default**. Creating or running a function without explicitly specifying SerDe means that this option is used.
2. You can use the [`PickleSerDe`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L62), which uses Python [`pickle`](https://docs.python.org/3/library/pickle.html) for SerDe.
3. You can create a custom SerDe class by implementing the baseline [`SerDe`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L50) class, which has just two methods: [`serialize`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L53) for converting the object into bytes, and [`deserialize`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L58) for converting bytes into an object of the required application-specific type.

The table below shows when you should use each SerDe.

SerDe option | When to use
:------------|:-----------
`IdentitySerDe` | When you work with simple types like strings, Booleans, integers.
`PickleSerDe` | When you work with complex, application-specific types and are comfortable with the "best effort" approach of `pickle`.
Custom SerDe | When you require explicit control over SerDe, potentially for performance or data compatibility purposes.



Currently, the feature is not available in Go.



````

### Example
Imagine that you're writing Pulsar Functions to process tweet objects. You can refer to the following example of a `Tweet` class.

````mdx-code-block



```java

public class Tweet {
    private String username;
    private String tweetContent;

    public Tweet(String username, String tweetContent) {
        this.username = username;
        this.tweetContent = tweetContent;
    }

    // Standard setters and getters
}

```

To pass `Tweet` objects directly between Pulsar Functions, you need to provide a custom SerDe class. In the example below, `Tweet` objects are basically strings in which the username and tweet content are separated by a `|`.

```java

package com.example.serde;

import org.apache.pulsar.functions.api.SerDe;

import java.util.regex.Pattern;

public class TweetSerde implements SerDe<Tweet> {
    public Tweet deserialize(byte[] input) {
        String s = new String(input);
        String[] fields = s.split(Pattern.quote("|"));
        return new Tweet(fields[0], fields[1]);
    }

    public byte[] serialize(Tweet input) {
        return String.format("%s|%s", input.getUsername(), input.getTweetContent()).getBytes();
    }
}

```

To apply this customized SerDe to a particular Pulsar Function, you need to:

* Package the `Tweet` and `TweetSerde` classes into a JAR.
* Specify a path to the JAR and SerDe class name when deploying the function.

The following is an example of [`create`](reference-pulsar-admin.md#create-1) operation.

```bash

$ bin/pulsar-admin functions create \
  --jar /path/to/your.jar \
  --output-serde-classname com.example.serde.TweetSerde \
  # Other function attributes

```

> #### Custom SerDe classes must be packaged with your function JARs
> Pulsar does not store your custom SerDe classes separately from your Pulsar Functions. So you need to include your SerDe classes in your function JARs. If not, Pulsar returns an error.



```python

class Tweet(object):
    def __init__(self, username, tweet_content):
        self.username = username
        self.tweet_content = tweet_content

```

In order to use this class in Pulsar Functions, you have two options:

1. You can specify `PickleSerDe`, which applies the [`pickle`](https://docs.python.org/3/library/pickle.html) library SerDe.
2. You can create your own SerDe class. The following is an example.

   ```python

   from pulsar import SerDe

   class TweetSerDe(SerDe):

       def serialize(self, input):
           return bytes("{0}|{1}".format(input.username, input.tweet_content))

       def deserialize(self, input_bytes):
           tweet_components = str(input_bytes).split('|')
           return Tweet(tweet_components[0], tweet_components[1])

   ```

For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/python-examples/custom_object_function.py).



````

In both languages, however, you can write custom SerDe logic for more complex, application-specific types.

## Context
The Java, Python and Go SDKs provide access to a **context object** that can be used by a function. This context object provides a wide variety of information and functionality to the function:

* The name and ID of a Pulsar Function.
* The message ID of each message. Each Pulsar message is automatically assigned with an ID.
* The key, event time, properties and partition key of each message.
* The name of the topic to which the message is sent.
* The names of all input topics as well as the output topic associated with the function.
* The name of the class used for [SerDe](#serde).
* The [tenant](reference-terminology.md#tenant) and namespace associated with the function.
* The ID of the Pulsar Functions instance running the function.
* The version of the function.
* The [logger object](functions-develop.md#logger) used by the function, which can be used to create function log messages.
* Access to arbitrary [user configuration](#user-config) values supplied via the CLI.
* An interface for recording [metrics](#metrics).
* An interface for storing and retrieving state in [state storage](#state-storage).
* A function to publish new messages onto arbitrary topics.
* A function to ack the message being processed (if auto-ack is disabled).
* (Java) get Pulsar admin client.

````mdx-code-block



The [Context](https://github.com/apache/pulsar/blob/master/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/Context.java) interface provides a number of methods that you can use to access the function [context](#context). The various method signatures for the `Context` interface are listed as follows.

```java

public interface Context {
    Record<?> getCurrentRecord();
    Collection<String> getInputTopics();
    String getOutputTopic();
    String getOutputSchemaType();
    String getTenant();
    String getNamespace();
    String getFunctionName();
    String getFunctionId();
    String getInstanceId();
    String getFunctionVersion();
    Logger getLogger();
    void incrCounter(String key, long amount);
    CompletableFuture<Void> incrCounterAsync(String key, long amount);
    long getCounter(String key);
    CompletableFuture<Long> getCounterAsync(String key);
    void putState(String key, ByteBuffer value);
    CompletableFuture<Void> putStateAsync(String key, ByteBuffer value);
    void deleteState(String key);
    ByteBuffer getState(String key);
    CompletableFuture<ByteBuffer> getStateAsync(String key);
    Map<String, Object> getUserConfigMap();
    Optional<Object> getUserConfigValue(String key);
    Object getUserConfigValueOrDefault(String key, Object defaultValue);
    void recordMetric(String metricName, double value);
    <O> CompletableFuture<Void> publish(String topicName, O object, String schemaOrSerdeClassName);
    <O> CompletableFuture<Void> publish(String topicName, O object);
    <O> TypedMessageBuilder<O> newOutputMessage(String topicName, Schema<O> schema) throws PulsarClientException;
    <X> ConsumerBuilder<X> newConsumerBuilder(Schema<X> schema) throws PulsarClientException;
    PulsarAdmin getPulsarAdmin();
    PulsarAdmin getPulsarAdmin(String clusterName);
}

```

The following example uses several methods available via the `Context` object.

```java

import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;
import org.slf4j.Logger;

import java.util.stream.Collectors;

public class ContextFunction implements Function<String, Void> {
    public Void process(String input, Context context) {
        Logger LOG = context.getLogger();
        String inputTopics = context.getInputTopics().stream().collect(Collectors.joining(", "));
        String functionName = context.getFunctionName();

        String logMessage = String.format("A message with a value of \"%s\" has arrived on one of the following topics: %s\n",
                input,
                inputTopics);

        LOG.info(logMessage);

        String metricName = String.format("function-%s-messages-received", functionName);
        context.recordMetric(metricName, 1);

        return null;
    }
}

```



```

class ContextImpl(pulsar.Context):
  def get_message_id(self):
    ...
  def get_message_key(self):
    ...
  def get_message_eventtime(self):
    ...
  def get_message_properties(self):
    ...
  def get_current_message_topic_name(self):
    ...
  def get_partition_key(self):
    ...
- def get_function_name(self): - ... - def get_function_tenant(self): - ... - def get_function_namespace(self): - ... - def get_function_id(self): - ... - def get_instance_id(self): - ... - def get_function_version(self): - ... - def get_logger(self): - ... - def get_user_config_value(self, key): - ... - def get_user_config_map(self): - ... - def record_metric(self, metric_name, metric_value): - ... - def get_input_topics(self): - ... - def get_output_topic(self): - ... - def get_output_serde_class_name(self): - ... - def publish(self, topic_name, message, serde_class_name="serde.IdentitySerDe", - properties=None, compression_type=None, callback=None, message_conf=None): - ... - def ack(self, msgid, topic): - ... - def get_and_reset_metrics(self): - ... - def reset_metrics(self): - ... - def get_metrics(self): - ... - def incr_counter(self, key, amount): - ... - def get_counter(self, key): - ... - def del_counter(self, key): - ... - def put_state(self, key, value): - ... - def get_state(self, key): - ... - -``` - - - - -``` - -func (c *FunctionContext) GetInstanceID() int { - return c.instanceConf.instanceID -} - -func (c *FunctionContext) GetInputTopics() []string { - return c.inputTopics -} - -func (c *FunctionContext) GetOutputTopic() string { - return c.instanceConf.funcDetails.GetSink().Topic -} - -func (c *FunctionContext) GetFuncTenant() string { - return c.instanceConf.funcDetails.Tenant -} - -func (c *FunctionContext) GetFuncName() string { - return c.instanceConf.funcDetails.Name -} - -func (c *FunctionContext) GetFuncNamespace() string { - return c.instanceConf.funcDetails.Namespace -} - -func (c *FunctionContext) GetFuncID() string { - return c.instanceConf.funcID -} - -func (c *FunctionContext) GetFuncVersion() string { - return c.instanceConf.funcVersion -} - -func (c *FunctionContext) GetUserConfValue(key string) interface{} { - return c.userConfigs[key] -} - -func (c *FunctionContext) GetUserConfMap() map[string]interface{} { - return c.userConfigs -} - -func (c *FunctionContext) SetCurrentRecord(record pulsar.Message) { - c.record = record -} - -func (c *FunctionContext) GetCurrentRecord() pulsar.Message { - return c.record -} - -func (c *FunctionContext) NewOutputMessage(topic string) pulsar.Producer { - return c.outputMessage(topic) -} - -``` - -The following example uses several methods available via the `Context` object. - -``` - -import ( - "context" - "fmt" - - "github.com/apache/pulsar/pulsar-function-go/pf" -) - -func contextFunc(ctx context.Context) { - if fc, ok := pf.FromContext(ctx); ok { - fmt.Printf("function ID is:%s, ", fc.GetFuncID()) - fmt.Printf("function version is:%s\n", fc.GetFuncVersion()) - } -} - -``` - -For complete code, see [here](https://github.com/apache/pulsar/blob/77cf09eafa4f1626a53a1fe2e65dd25f377c1127/pulsar-function-go/examples/contextFunc/contextFunc.go#L29-L34). - - - - -```` - -### User config -When you run or update Pulsar Functions created using SDK, you can pass arbitrary key/values to them with the command line with the `--user-config` flag. Key/values must be specified as JSON. The following function creation command passes a user configured key/value to a function. - -```bash - -$ bin/pulsar-admin functions create \ - --name word-filter \ - # Other function configs - --user-config '{"forbidden-word":"rosebud"}' - -``` - -````mdx-code-block - - - -The Java SDK [`Context`](#context) object enables you to access key/value pairs provided to Pulsar Functions via the command line (as JSON). 
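Note that every value arrives in a Java function as a `String` (see the note at the end of this Java example), so a function that needs a non-string setting must convert the value itself. Here is a minimal sketch of such a conversion; the `max-length` key, the class name, and the filtering behavior are illustrative assumptions only, not part of the original examples.

```java

import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;

public class MaxLengthFilter implements Function<String, String> {
    @Override
    public String process(String input, Context context) {
        // "max-length" is a hypothetical user-config key; its value arrives as a String.
        String raw = (String) context.getUserConfigValueOrDefault("max-length", "120");
        int maxLength = Integer.parseInt(raw);
        // Pass short messages through unchanged; returning null produces no output message.
        return input.length() <= maxLength ? input : null;
    }
}

```
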
The following example passes a key/value pair.

```bash

$ bin/pulsar-admin functions create \
  # Other function configs
  --user-config '{"word-of-the-day":"verdure"}'

```

To access that value in a Java function:

```java

import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;
import org.slf4j.Logger;

import java.util.Optional;

public class UserConfigFunction implements Function<String, Void> {
    @Override
    public Void process(String input, Context context) {
        Logger LOG = context.getLogger();
        Optional<Object> wotd = context.getUserConfigValue("word-of-the-day");
        if (wotd.isPresent()) {
            LOG.info("The word of the day is {}", wotd.get());
        } else {
            LOG.warn("No word of the day provided");
        }
        return null;
    }
}

```

The `UserConfigFunction` function will log the string `"The word of the day is verdure"` every time the function is invoked (which means every time a message arrives). The `word-of-the-day` user config changes only when the function is updated with a new config value via the command line.

You can also access the entire user config map or set a default value in case no value is present:

```java

// Get the whole config map
Map<String, Object> allConfigs = context.getUserConfigMap();

// Get value or resort to default
String wotd = (String) context.getUserConfigValueOrDefault("word-of-the-day", "perspicacious");

```

> For all key/value pairs passed to Java functions, both the key *and* the value are `String`. To set the value to be a different type, you need to deserialize from the `String` type.



In a Python function, you can access the configuration value like this.

```python

from pulsar import Function

class WordFilter(Function):
    def process(self, input, context):
        forbidden_word = context.user_config()["forbidden-word"]

        # Don't publish the message if it contains the user-supplied
        # forbidden word
        if forbidden_word in input:
            pass
        # Otherwise publish the message
        else:
            return input

```

The Python SDK [`Context`](#context) object enables you to access key/value pairs provided to Pulsar Functions via the command line (as JSON). The following example passes a key/value pair.

```bash

$ bin/pulsar-admin functions create \
  # Other function configs \
  --user-config '{"word-of-the-day":"verdure"}'

```

To access that value in a Python function:

```python

from pulsar import Function

class UserConfigFunction(Function):
    def process(self, input, context):
        logger = context.get_logger()
        wotd = context.get_user_config_value('word-of-the-day')
        if wotd is None:
            logger.warn('No word of the day provided')
        else:
            logger.info("The word of the day is {0}".format(wotd))

```



The Go SDK [`Context`](#context) object enables you to access key/value pairs provided to Pulsar Functions via the command line (as JSON). The following example passes a key/value pair.
- -```bash - -$ bin/pulsar-admin functions create \ - --go path/to/go/binary - --user-config '{"word-of-the-day":"lackadaisical"}' - -``` - -To access that value in a Go function: - -```go - -func contextFunc(ctx context.Context) { - fc, ok := pf.FromContext(ctx) - if !ok { - logutil.Fatal("Function context is not defined") - } - - wotd := fc.GetUserConfValue("word-of-the-day") - - if wotd == nil { - logutil.Warn("The word of the day is empty") - } else { - logutil.Infof("The word of the day is %s", wotd.(string)) - } -} - -``` - - - - -```` - -### Logger - -````mdx-code-block - - - -Pulsar Functions that use the Java SDK have access to an [SLF4j](https://www.slf4j.org/) [`Logger`](https://www.slf4j.org/api/org/apache/log4j/Logger.html) object that can be used to produce logs at the chosen log level. The following example logs either a `WARNING`- or `INFO`-level log based on whether the incoming string contains the word `danger`. - -```java - -import org.apache.pulsar.functions.api.Context; -import org.apache.pulsar.functions.api.Function; -import org.slf4j.Logger; - -public class LoggingFunction implements Function { - @Override - public void apply(String input, Context context) { - Logger LOG = context.getLogger(); - String messageId = new String(context.getMessageId()); - - if (input.contains("danger")) { - LOG.warn("A warning was received in message {}", messageId); - } else { - LOG.info("Message {} received\nContent: {}", messageId, input); - } - - return null; - } -} - -``` - -If you want your function to produce logs, you need to specify a log topic when creating or running the function. The following is an example. - -```bash - -$ bin/pulsar-admin functions create \ - --jar my-functions.jar \ - --classname my.package.LoggingFunction \ - --log-topic persistent://public/default/logging-function-logs \ - # Other function configs - -``` - -All logs produced by `LoggingFunction` above can be accessed via the `persistent://public/default/logging-function-logs` topic. - -#### Customize Function log level -Additionally, you can use the XML file, `functions_log4j2.xml`, to customize the function log level. -To customize the function log level, create or update `functions_log4j2.xml` in your Pulsar conf directory (for example, `/etc/pulsar/` on bare-metal, or `/pulsar/conf` on Kubernetes) to contain contents such as: - -```xml - - - pulsar-functions-instance - 30 - - - pulsar.log.appender - RollingFile - - - pulsar.log.level - debug - - - bk.log.level - debug - - - - - Console - SYSTEM_OUT - - %d{ISO8601_OFFSET_DATE_TIME_HHMM} [%t] %-5level %logger{36} - %msg%n - - - - RollingFile - ${sys:pulsar.function.log.dir}/${sys:pulsar.function.log.file}.log - ${sys:pulsar.function.log.dir}/${sys:pulsar.function.log.file}-%d{MM-dd-yyyy}-%i.log.gz - true - - %d{ISO8601_OFFSET_DATE_TIME_HHMM} [%t] %-5level %logger{36} - %msg%n - - - - 1 - true - - - 1 GB - - - 0 0 0 * * ? - - - - - ${sys:pulsar.function.log.dir} - 2 - - */${sys:pulsar.function.log.file}*log.gz - - - 30d - - - - - - BkRollingFile - ${sys:pulsar.function.log.dir}/${sys:pulsar.function.log.file}.bk - ${sys:pulsar.function.log.dir}/${sys:pulsar.function.log.file}.bk-%d{MM-dd-yyyy}-%i.log.gz - true - - %d{ISO8601_OFFSET_DATE_TIME_HHMM} [%t] %-5level %logger{36} - %msg%n - - - - 1 - true - - - 1 GB - - - 0 0 0 * * ? 
- - - - - ${sys:pulsar.function.log.dir} - 2 - - */${sys:pulsar.function.log.file}.bk*log.gz - - - 30d - - - - - - - - org.apache.pulsar.functions.runtime.shaded.org.apache.bookkeeper - ${sys:bk.log.level} - false - - BkRollingFile - - - - ${sys:pulsar.log.level} - - ${sys:pulsar.log.appender} - ${sys:pulsar.log.level} - - - - - -``` - -The properties set like: - -```xml - - - pulsar.log.level - debug - - -``` - -propagate to places where they are referenced, such as: - -```xml - - - ${sys:pulsar.log.level} - - ${sys:pulsar.log.appender} - ${sys:pulsar.log.level} - - - -``` - -In the above example, debug level logging would be applied to ALL function logs. -This may be more verbose than you desire. To be more selective, you can apply different log levels to different classes or modules. For example: - -```xml - - - com.example.module - info - false - - ${sys:pulsar.log.appender} - - - -``` - -You can be more specific as well, such as applying a more verbose log level to a class in the module, such as: - -```xml - - - com.example.module.className - debug - false - - Console - - - -``` - -Each `` entry allows you to output the log to a target specified in the definition of the Appender. - -Additivity pertains to whether log messages will be duplicated if multiple Logger entries overlap. -To disable additivity, specify - -```xml - -false - -``` - -as shown in examples above. Disabling additivity prevents duplication of log messages when one or more `` entries contain classes or modules that overlap. - -The `` is defined in the `` section, such as: - -```xml - - - Console - SYSTEM_OUT - - %d{ISO8601_OFFSET_DATE_TIME_HHMM} [%t] %-5level %logger{36} - %msg%n - - - -``` - - - - -Pulsar Functions that use the Python SDK have access to a logging object that can be used to produce logs at the chosen log level. The following example function that logs either a `WARNING`- or `INFO`-level log based on whether the incoming string contains the word `danger`. - -```python - -from pulsar import Function - -class LoggingFunction(Function): - def process(self, input, context): - logger = context.get_logger() - msg_id = context.get_message_id() - if 'danger' in input: - logger.warn("A warning was received in message {0}".format(context.get_message_id())) - else: - logger.info("Message {0} received\nContent: {1}".format(msg_id, input)) - -``` - -If you want your function to produce logs on a Pulsar topic, you need to specify a **log topic** when creating or running the function. The following is an example. - -```bash - -$ bin/pulsar-admin functions create \ - --py logging_function.py \ - --classname logging_function.LoggingFunction \ - --log-topic logging-function-logs \ - # Other function configs - -``` - -All logs produced by `LoggingFunction` above can be accessed via the `logging-function-logs` topic. -Additionally, you can specify the function log level through the broker XML file as described in [Customize Function log level](#customize-function-log-level). - - - - -The following Go Function example shows different log levels based on the function input. - -``` - -import ( - "context" - - "github.com/apache/pulsar/pulsar-function-go/pf" - - log "github.com/apache/pulsar/pulsar-function-go/logutil" -) - -func loggerFunc(ctx context.Context, input []byte) { - if len(input) <= 100 { - log.Infof("This input has a length of: %d", len(input)) - } else { - log.Warnf("This input is getting too long! 
It has {%d} characters", len(input)) - } -} - -func main() { - pf.Start(loggerFunc) -} - -``` - -When you use `logTopic` related functionalities in Go Function, import `github.com/apache/pulsar/pulsar-function-go/logutil`, and you do not have to use the `getLogger()` context object. - -Additionally, you can specify the function log level through the broker XML file, as described here: [Customize Function log level](#customize-function-log-level) - - - - -```` - -### Pulsar admin - -Pulsar Functions using the Java SDK has access to the Pulsar admin client, which allows the Pulsar admin client to manage API calls to current Pulsar clusters or external clusters (if `external-pulsars` is provided). - -````mdx-code-block - - - -Below is an example of how to use the Pulsar admin client exposed from the Function `context`. - -``` - -import org.apache.pulsar.client.admin.PulsarAdmin; -import org.apache.pulsar.functions.api.Context; -import org.apache.pulsar.functions.api.Function; - -/** - * In this particular example, for every input message, - * the function resets the cursor of the current function's subscription to a - * specified timestamp. - */ -public class CursorManagementFunction implements Function { - - @Override - public String process(String input, Context context) throws Exception { - PulsarAdmin adminClient = context.getPulsarAdmin(); - if (adminClient != null) { - String topic = context.getCurrentRecord().getTopicName().isPresent() ? - context.getCurrentRecord().getTopicName().get() : null; - String subName = context.getTenant() + "/" + context.getNamespace() + "/" + context.getFunctionName(); - if (topic != null) { - // 1578188166 below is a random-pick timestamp - adminClient.topics().resetCursor(topic, subName, 1578188166); - return "reset cursor successfully"; - } - } - return null; - } -} - -``` - -If you want your function to get access to the Pulsar admin client, you need to enable this feature by setting `exposeAdminClientEnabled=true` in the `functions_worker.yml` file. You can test whether this feature is enabled or not using the command `pulsar-admin functions localrun` with the flag `--web-service-url`. - -``` - -$ bin/pulsar-admin functions localrun \ - --jar my-functions.jar \ - --classname my.package.CursorManagementFunction \ - --web-service-url http://pulsar-web-service:8080 \ - # Other function configs - -``` - - - - -```` - -## Metrics - -Pulsar Functions allows you to deploy and manage processing functions that consume messages from and publish messages to Pulsar topics easily. It is important to ensure that the running functions are healthy at any time. Pulsar Functions can publish arbitrary metrics to the metrics interface which can be queried. - -:::note - -If a Pulsar Function uses the language-native interface for Java or Python, that function is not able to publish metrics and stats to Pulsar. - -::: - -You can monitor Pulsar Functions that have been deployed with the following methods: - -- Check the metrics provided by Pulsar. - - Pulsar Functions expose the metrics that can be collected and used for monitoring the health of **Java, Python, and Go** functions. You can check the metrics by following the [monitoring](deploy-monitoring.md) guide. - - For the complete list of the function metrics, see [here](reference-metrics.md#pulsar-functions). - -- Set and check your customized metrics. - - In addition to the metrics provided by Pulsar, Pulsar allows you to customize metrics for **Java and Python** functions. 
Function workers collect user-defined metrics to Prometheus automatically and you can check them in Grafana.

Here are examples of how to customize metrics for Java and Python functions.

````mdx-code-block



You can record metrics using the [`Context`](#context) object on a per-key basis. For example, you can set a metric for the `process-count` key and a different metric for the `elevens-count` key every time the function processes a message.

```java

import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;

public class MetricRecorderFunction implements Function<Integer, Void> {
    @Override
    public Void process(Integer input, Context context) {
        // Records a value of 1 for the hit-count metric every time a message arrives
        context.recordMetric("hit-count", 1);

        // Records the metric only if the arriving number equals 11
        if (input == 11) {
            context.recordMetric("elevens-count", 1);
        }

        return null;
    }
}

```



You can record metrics using the [`Context`](#context) object on a per-key basis. For example, you can set a metric for the `process-count` key and a different metric for the `elevens-count` key every time the function processes a message. The following is an example.

```python

from pulsar import Function

class MetricRecorderFunction(Function):
    def process(self, input, context):
        context.record_metric('hit-count', 1)

        if input == 11:
            context.record_metric('elevens-count', 1)

```



Currently, the feature is not available in Go.



````

## Security

If you want to enable security on Pulsar Functions, first you should enable security on [Functions Workers](functions-worker.md). For more details, refer to [Security settings](functions-worker.md#security-settings).

Pulsar Functions can support the following providers:

- ClearTextSecretsProvider
- EnvironmentBasedSecretsProvider

> Pulsar Function supports ClearTextSecretsProvider by default.

At the same time, Pulsar Functions provides two interfaces, **SecretsProvider** and **SecretsProviderConfigurator**, allowing users to customize the secret provider.

````mdx-code-block



You can get the secret provider using the [`Context`](#context) object. The following is an example:

```java

import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;
import org.slf4j.Logger;

public class GetSecretProviderFunction implements Function<String, Void> {

    @Override
    public Void process(String input, Context context) throws Exception {
        Logger LOG = context.getLogger();
        String secretProvider = context.getSecret(input);

        if (!secretProvider.isEmpty()) {
            LOG.info("The secret provider is {}", secretProvider);
        } else {
            LOG.warn("No secret provider");
        }

        return null;
    }
}

```



You can get the secret provider using the [`Context`](#context) object. The following is an example:

```python

from pulsar import Function

class GetSecretProviderFunction(Function):
    def process(self, input, context):
        logger = context.get_logger()
        secret_provider = context.get_secret(input)
        if secret_provider is None:
            logger.warn('No secret provider')
        else:
            logger.info("The secret provider is {0}".format(secret_provider))

```



Currently, the feature is not available in Go.



````

## State storage
Pulsar Functions use [Apache BookKeeper](https://bookkeeper.apache.org) as a state storage interface.
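Before diving into the deployment details and the full API, here is a minimal Java sketch of what using state looks like in a function; the class name, the state keys, and the logic are illustrative assumptions, and the individual calls are documented under [API](#api) below.

```java

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;

public class LastSeenFunction implements Function<String, Void> {
    @Override
    public Void process(String input, Context context) {
        // Counter state: 64-bit values keyed by string.
        context.incrCounter("messages-seen", 1);

        // General key/value state: arbitrary binary values keyed by string.
        context.putState("last-message",
                ByteBuffer.wrap(input.getBytes(StandardCharsets.UTF_8)));
        return null;
    }
}

```
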
Pulsar installation, including the local standalone installation, includes deployment of BookKeeper bookies.

Since the Pulsar 2.1.0 release, Pulsar integrates with the Apache BookKeeper [table service](https://docs.google.com/document/d/155xAwWv5IdOitHh1NVMEwCMGgB28M3FyMiQSxEpjE-Y/edit#heading=h.56rbh52koe3f) to store the `State` for functions. For example, a `WordCount` function can store its `counters` state into BookKeeper table service via Pulsar Functions State API.

States are key-value pairs, where the key is a string and the value is arbitrary binary data; counters are stored as 64-bit big-endian binary values. Keys are scoped to an individual Pulsar Function, and shared between instances of that function.

You can access states within Pulsar Java Functions using the `putState`, `putStateAsync`, `getState`, `getStateAsync`, `incrCounter`, `incrCounterAsync`, `getCounter`, `getCounterAsync` and `deleteState` calls on the context object. You can access states within Pulsar Python Functions using the `put_state`, `get_state`, `incr_counter`, `get_counter` and `del_counter` calls on the context object. You can also manage states using the [querystate](#query-state) and [putstate](#putstate) options to `pulsar-admin functions`.

:::note

State storage is not available in Go.

:::

### API

````mdx-code-block



Currently, Pulsar Functions expose the following APIs for mutating and accessing State. These APIs are available in the [Context](functions-develop.md#context) object when you are using Java SDK functions.

#### incrCounter

```java

    /**
     * Increment the builtin distributed counter referred by key
     * @param key The name of the key
     * @param amount The amount to be incremented
     */
    void incrCounter(String key, long amount);

```

The application can use `incrCounter` to change the counter of a given `key` by the given `amount`.

#### incrCounterAsync

```java

    /**
     * Increment the builtin distributed counter referred by key
     * but don't wait for the completion of the increment operation
     *
     * @param key The name of the key
     * @param amount The amount to be incremented
     */
    CompletableFuture<Void> incrCounterAsync(String key, long amount);

```

The application can use `incrCounterAsync` to asynchronously change the counter of a given `key` by the given `amount`.

#### getCounter

```java

    /**
     * Retrieve the counter value for the key.
     *
     * @param key name of the key
     * @return the amount of the counter value for this key
     */
    long getCounter(String key);

```

The application can use `getCounter` to retrieve the counter of a given `key` mutated by `incrCounter`.

Besides the `counter` API, Pulsar also exposes a general key/value API for functions to store
general key/value state.

#### getCounterAsync

```java

    /**
     * Retrieve the counter value for the key, but don't wait
     * for the operation to be completed
     *
     * @param key name of the key
     * @return the amount of the counter value for this key
     */
    CompletableFuture<Long> getCounterAsync(String key);

```

The application can use `getCounterAsync` to asynchronously retrieve the counter of a given `key` mutated by `incrCounterAsync`.

#### putState

```java

    /**
     * Update the state value for the key.
     *
     * @param key name of the key
     * @param value state value of the key
     */
    void putState(String key, ByteBuffer value);

```

#### putStateAsync

```java

    /**
     * Update the state value for the key, but don't wait for the operation to be completed
     *
     * @param key name of the key
     * @param value state value of the key
     */
    CompletableFuture<Void> putStateAsync(String key, ByteBuffer value);

```

The application can use `putStateAsync` to asynchronously update the state of a given `key`.

#### getState

```java

    /**
     * Retrieve the state value for the key.
     *
     * @param key name of the key
     * @return the state value for the key.
     */
    ByteBuffer getState(String key);

```

#### getStateAsync

```java

    /**
     * Retrieve the state value for the key, but don't wait for the operation to be completed
     *
     * @param key name of the key
     * @return the state value for the key.
     */
    CompletableFuture<ByteBuffer> getStateAsync(String key);

```

The application can use `getStateAsync` to asynchronously retrieve the state of a given `key`.

#### deleteState

```java

    /**
     * Delete the state value for the key.
     *
     * @param key name of the key
     */
    void deleteState(String key);

```

Counters and binary values share the same keyspace, so this deletes either type.



Currently, Pulsar Functions expose the following APIs for mutating and accessing State. These APIs are available in the [Context](#context) object when you are using Python SDK functions.

#### incr_counter

```python

  def incr_counter(self, key, amount):
    """incr the counter of a given key in the managed state"""

```

The application can use `incr_counter` to change the counter of a given `key` by the given `amount`.
If the `key` does not exist, a new key is created.

#### get_counter

```python

  def get_counter(self, key):
    """get the counter of a given key in the managed state"""

```

The application can use `get_counter` to retrieve the counter of a given `key` mutated by `incr_counter`.

Besides the `counter` API, Pulsar also exposes a general key/value API for functions to store
general key/value state.

#### put_state

```python

  def put_state(self, key, value):
    """update the value of a given key in the managed state"""

```

The key is a string, and the value is arbitrary binary data.

#### get_state

```python

  def get_state(self, key):
    """get the value of a given key in the managed state"""

```

#### del_counter

```python

  def del_counter(self, key):
    """delete the counter of a given key in the managed state"""

```

Counters and binary values share the same keyspace, so this deletes either type.



````

### Query State

A Pulsar Function can use the [State API](#api) to store state in Pulsar's state storage
and retrieve it back later. Additionally, Pulsar provides
CLI commands for querying its state.

```shell

$ bin/pulsar-admin functions querystate \
    --tenant <tenant> \
    --namespace <namespace> \
    --name <function-name> \
    --state-storage-url <bk-service-url> \
    --key <state-key> \
    [--watch]

```

If `--watch` is specified, the CLI will watch the value of the provided `state-key`.

### Example

````mdx-code-block



{@inject: github:WordCountFunction:/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/WordCountFunction.java} is a good example
demonstrating how an application can easily store `state` in Pulsar Functions.
- -```java - -import org.apache.pulsar.functions.api.Context; -import org.apache.pulsar.functions.api.Function; - -import java.util.Arrays; - -public class WordCountFunction implements Function { - @Override - public Void process(String input, Context context) throws Exception { - Arrays.asList(input.split("\\.")).forEach(word -> context.incrCounter(word, 1)); - return null; - } -} - -``` - -The logic of this `WordCount` function is pretty simple and straightforward: - -1. The function first splits the received `String` into multiple words using regex `\\.`. -2. For each `word`, the function increments the corresponding `counter` by 1 (via `incrCounter(key, amount)`). - - - - -```python - -from pulsar import Function - -class WordCount(Function): - def process(self, item, context): - for word in item.split(): - context.incr_counter(word, 1) - -``` - -The logic of this `WordCount` function is pretty simple and straightforward: - -1. The function first splits the received string into multiple words on space. -2. For each `word`, the function increments the corresponding `counter` by 1 (via `incr_counter(key, amount)`). - - - - -```` diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/functions-metrics.md b/site2/website/versioned_docs/version-2.9.0-deprecated/functions-metrics.md deleted file mode 100644 index 8add6693160929..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/functions-metrics.md +++ /dev/null @@ -1,7 +0,0 @@ ---- -id: functions-metrics -title: Metrics for Pulsar Functions -sidebar_label: "Metrics" -original_id: functions-metrics ---- - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/functions-overview.md b/site2/website/versioned_docs/version-2.9.0-deprecated/functions-overview.md deleted file mode 100644 index 816d301e0fd0e7..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/functions-overview.md +++ /dev/null @@ -1,209 +0,0 @@ ---- -id: functions-overview -title: Pulsar Functions overview -sidebar_label: "Overview" -original_id: functions-overview ---- - -**Pulsar Functions** are lightweight compute processes that - -* consume messages from one or more Pulsar topics, -* apply a user-supplied processing logic to each message, -* publish the results of the computation to another topic. - - -## Goals -With Pulsar Functions, you can create complex processing logic without deploying a separate neighboring system (such as [Apache Storm](http://storm.apache.org/), [Apache Heron](https://heron.incubator.apache.org/), [Apache Flink](https://flink.apache.org/)). Pulsar Functions are computing infrastructure of Pulsar messaging system. 
The core goal is tied to a series of other goals: - -* Developer productivity (language-native vs Pulsar Functions SDK functions) -* Easy troubleshooting -* Operational simplicity (no need for an external processing system) - -## Inspirations -Pulsar Functions are inspired by (and take cues from) several systems and paradigms: - -* Stream processing engines such as [Apache Storm](http://storm.apache.org/), [Apache Heron](https://apache.github.io/incubator-heron), and [Apache Flink](https://flink.apache.org) -* "Serverless" and "Function as a Service" (FaaS) cloud platforms like [Amazon Web Services Lambda](https://aws.amazon.com/lambda/), [Google Cloud Functions](https://cloud.google.com/functions/), and [Azure Cloud Functions](https://azure.microsoft.com/en-us/services/functions/) - -Pulsar Functions can be described as - -* [Lambda](https://aws.amazon.com/lambda/)-style functions that are -* specifically designed to use Pulsar as a message bus. - -## Programming model -Pulsar Functions provide a wide range of functionality, and the core programming model is simple. Functions receive messages from one or more **input [topics](reference-terminology.md#topic)**. Each time a message is received, the function will complete the following tasks. - - * Apply some processing logic to the input and write output to: - * An **output topic** in Pulsar - * [Apache BookKeeper](functions-develop.md#state-storage) - * Write logs to a **log topic** (potentially for debugging purposes) - * Increment a [counter](#word-count-example) - -![Pulsar Functions core programming model](/assets/pulsar-functions-overview.png) - -You can use Pulsar Functions to set up the following processing chain: - -* A Python function listens for the `raw-sentences` topic and "sanitizes" incoming strings (removing extraneous whitespace and converting all characters to lowercase) and then publishes the results to a `sanitized-sentences` topic. -* A Java function listens for the `sanitized-sentences` topic, counts the number of times each word appears within a specified time window, and publishes the results to a `results` topic -* Finally, a Python function listens for the `results` topic and writes the results to a MySQL table. - - -### Word count example - -If you implement the classic word count example using Pulsar Functions, it looks something like this: - -![Pulsar Functions word count example](/assets/pulsar-functions-word-count.png) - -To write the function in Java with [Pulsar Functions SDK for Java](functions-develop.md#available-apis), you can write the function as follows. - -```java - -package org.example.functions; - -import org.apache.pulsar.functions.api.Context; -import org.apache.pulsar.functions.api.Function; - -import java.util.Arrays; - -public class WordCountFunction implements Function { - // This function is invoked every time a message is published to the input topic - @Override - public Void process(String input, Context context) throws Exception { - Arrays.asList(input.split(" ")).forEach(word -> { - String counterKey = word.toLowerCase(); - context.incrCounter(counterKey, 1); - }); - return null; - } -} - -``` - -Bundle and build the JAR file to be deployed, and then deploy it in your Pulsar cluster using the [command line](functions-deploy.md#command-line-interface) as follows. 
- -```bash - -$ bin/pulsar-admin functions create \ - --jar target/my-jar-with-dependencies.jar \ - --classname org.example.functions.WordCountFunction \ - --tenant public \ - --namespace default \ - --name word-count \ - --inputs persistent://public/default/sentences \ - --output persistent://public/default/count - -``` - -### Content-based routing example - -Pulsar Functions are used in many cases. The following is a sophisticated example that involves content-based routing. - -For example, a function takes items (strings) as input and publishes them to either a `fruits` or `vegetables` topic, depending on the item. Or, if an item is neither fruit nor vegetable, a warning is logged to a [log topic](functions-develop.md#logger). The following is a visual representation. - -![Pulsar Functions routing example](/assets/pulsar-functions-routing-example.png) - -If you implement this routing functionality in Python, it looks something like this: - -```python - -from pulsar import Function - -class RoutingFunction(Function): - def __init__(self): - self.fruits_topic = "persistent://public/default/fruits" - self.vegetables_topic = "persistent://public/default/vegetables" - - @staticmethod - def is_fruit(item): - return item in [b"apple", b"orange", b"pear", b"other fruits..."] - - @staticmethod - def is_vegetable(item): - return item in [b"carrot", b"lettuce", b"radish", b"other vegetables..."] - - def process(self, item, context): - if self.is_fruit(item): - context.publish(self.fruits_topic, item) - elif self.is_vegetable(item): - context.publish(self.vegetables_topic, item) - else: - warning = "The item {0} is neither a fruit nor a vegetable".format(item) - context.get_logger().warn(warning) - -``` - -If this code is stored in `~/router.py`, then you can deploy it in your Pulsar cluster using the [command line](functions-deploy.md#command-line-interface) as follows. - -```bash - -$ bin/pulsar-admin functions create \ - --py ~/router.py \ - --classname router.RoutingFunction \ - --tenant public \ - --namespace default \ - --name route-fruit-veg \ - --inputs persistent://public/default/basket-items - -``` - -### Functions, messages and message types -Pulsar Functions take byte arrays as inputs and spit out byte arrays as output. However in languages that support typed interfaces(Java), you can write typed Functions, and bind messages to types in the following ways. -* [Schema Registry](functions-develop.md#schema-registry) -* [SerDe](functions-develop.md#serde) - - -## Fully Qualified Function Name (FQFN) -Each Pulsar Function has a **Fully Qualified Function Name** (FQFN) that consists of three elements: the function tenant, namespace, and function name. FQFN looks like this: - -```http - -tenant/namespace/name - -``` - -FQFNs enable you to create multiple functions with the same name provided that they are in different namespaces. - -## Supported languages -Currently, you can write Pulsar Functions in Java, Python, and Go. For details, refer to [Develop Pulsar Functions](functions-develop.md). - -## Processing guarantees -Pulsar Functions provide three different messaging semantics that you can apply to any function. - -Delivery semantics | Description -:------------------|:------- -**At-most-once** delivery | Each message sent to the function is likely to be processed, or not to be processed (hence "at most"). -**At-least-once** delivery | Each message sent to the function can be processed more than once (hence the "at least"). 
-**Effectively-once** delivery | Each message sent to the function will have one output associated with it. - - -### Apply processing guarantees to a function -You can set the processing guarantees for a Pulsar Function when you create the Function. The following [`pulsar-function create`](reference-pulsar-admin.md#create-1) command creates a function with effectively-once guarantees applied. - -```bash - -$ bin/pulsar-admin functions create \ - --name my-effectively-once-function \ - --processing-guarantees EFFECTIVELY_ONCE \ - # Other function configs - -``` - -The available options for `--processing-guarantees` are: - -* `ATMOST_ONCE` -* `ATLEAST_ONCE` -* `EFFECTIVELY_ONCE` - -> By default, Pulsar Functions provide at-least-once delivery guarantees. So if you create a function without supplying a value for the `--processingGuarantees` flag, the function provides at-least-once guarantees. - -### Update the processing guarantees of a function -You can change the processing guarantees applied to a function using the [`update`](reference-pulsar-admin.md#update-1) command. The following is an example. - -```bash - -$ bin/pulsar-admin functions update \ - --processing-guarantees ATMOST_ONCE \ - # Other function configs - -``` - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/functions-package.md b/site2/website/versioned_docs/version-2.9.0-deprecated/functions-package.md deleted file mode 100644 index db2c4e987dc7be..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/functions-package.md +++ /dev/null @@ -1,493 +0,0 @@ ---- -id: functions-package -title: Package Pulsar Functions -sidebar_label: "How-to: Package" -original_id: functions-package ---- - -You can package Pulsar functions in Java, Python, and Go. Packaging the window function in Java is the same as [packaging a function in Java](#java). - -:::note - -Currently, the window function is not available in Python and Go. - -::: - -## Prerequisite - -Before running a Pulsar function, you need to start Pulsar. You can [run a standalone Pulsar in Docker](getting-started-docker.md), or [run Pulsar in Kubernetes](getting-started-helm.md). - -To check whether the Docker image starts, you can use the `docker ps` command. - -## Java - -To package a function in Java, complete the following steps. - -1. Create a new maven project with a pom file. In the following code sample, the value of `mainClass` is your package name. - - ```Java - - - - 4.0.0 - - java-function - java-function - 1.0-SNAPSHOT - - - - org.apache.pulsar - pulsar-functions-api - 2.6.0 - - - - - - - maven-assembly-plugin - - false - - jar-with-dependencies - - - - org.example.test.ExclamationFunction - - - - - - make-assembly - package - - assembly - - - - - - org.apache.maven.plugins - maven-compiler-plugin - - 8 - 8 - - - - - - - - ``` - -2. Write a Java function. - - ``` - - package org.example.test; - - import java.util.function.Function; - - public class ExclamationFunction implements Function { - @Override - public String apply(String s) { - return "This is my function!"; - } - } - - ``` - - For the imported package, you can use one of the following interfaces: - - Function interface provided by Java 8: `java.util.function.Function` - - Pulsar Function interface: `org.apache.pulsar.functions.api.Function` - - The main difference between the two interfaces is that the `org.apache.pulsar.functions.api.Function` interface provides the context interface. 
When you write a function and want to interact with it, you can use context to obtain a wide variety of information and functionality for Pulsar Functions. - - The following example uses `org.apache.pulsar.functions.api.Function` interface with context. - - ``` - - package org.example.functions; - import org.apache.pulsar.functions.api.Context; - import org.apache.pulsar.functions.api.Function; - - import java.util.Arrays; - public class WordCountFunction implements Function { - // This function is invoked every time a message is published to the input topic - @Override - public Void process(String input, Context context) throws Exception { - Arrays.asList(input.split(" ")).forEach(word -> { - String counterKey = word.toLowerCase(); - context.incrCounter(counterKey, 1); - }); - return null; - } - } - - ``` - -3. Package the Java function. - - ```bash - - mvn package - - ``` - - After the Java function is packaged, a `target` directory is created automatically. Open the `target` directory to check if there is a JAR package similar to `java-function-1.0-SNAPSHOT.jar`. - - -4. Run the Java function. - - (1) Copy the packaged jar file to the Pulsar image. - - ```bash - - docker exec -it [CONTAINER ID] /bin/bash - docker cp CONTAINER ID:/pulsar - - ``` - - (2) Run the Java function using the following command. - - ```bash - - ./bin/pulsar-admin functions localrun \ - --classname org.example.test.ExclamationFunction \ - --jar java-function-1.0-SNAPSHOT.jar \ - --inputs persistent://public/default/my-topic-1 \ - --output persistent://public/default/test-1 \ - --tenant public \ - --namespace default \ - --name JavaFunction - - ``` - - The following log indicates that the Java function starts successfully. - - ```text - - ... - 07:55:03.724 [main] INFO org.apache.pulsar.functions.runtime.ProcessRuntime - Started process successfully - ... - - ``` - -## Python - -Python Function supports the following three formats: - -- One python file -- ZIP file -- PIP - -### One python file - -To package a function with **one python file** in Python, complete the following steps. - -1. Write a Python function. - - ``` - - from pulsar import Function // import the Function module from Pulsar - - # The classic ExclamationFunction that appends an exclamation at the end - # of the input - class ExclamationFunction(Function): - def __init__(self): - pass - - def process(self, input, context): - return input + '!' - - ``` - - In this example, when you write a Python function, you need to inherit the Function class and implement the `process()` method. - - `process()` mainly has two parameters: - - - `input` represents your input. - - - `context` represents an interface exposed by the Pulsar Function. You can get the attributes in the Python function based on the provided context object. - -2. Install a Python client. - - The implementation of a Python function depends on the Python client, so before deploying a Python function, you need to install the corresponding version of the Python client. - - ```bash - - pip install python-client==2.6.0 - - ``` - -3. Run the Python Function. - - (1) Copy the Python function file to the Pulsar image. - - ```bash - - docker exec -it [CONTAINER ID] /bin/bash - docker cp CONTAINER ID:/pulsar - - ``` - - (2) Run the Python function using the following command. 
- - ```bash - - ./bin/pulsar-admin functions localrun \ - --classname org.example.test.ExclamationFunction \ - --py \ - --inputs persistent://public/default/my-topic-1 \ - --output persistent://public/default/test-1 \ - --tenant public \ - --namespace default \ - --name PythonFunction - - ``` - - The following log indicates that the Python function starts successfully. - - ```text - - ... - 07:55:03.724 [main] INFO org.apache.pulsar.functions.runtime.ProcessRuntime - Started process successfully - ... - - ``` - -### ZIP file - -To package a function with the **ZIP file** in Python, complete the following steps. - -1. Prepare the ZIP file. - - The following is required when packaging the ZIP file of the Python Function. - - ```text - - Assuming the zip file is named as `func.zip`, unzip the `func.zip` folder: - "func/src" - "func/requirements.txt" - "func/deps" - - ``` - - Take [exclamation.zip](https://github.com/apache/pulsar/tree/master/tests/docker-images/latest-version-image/python-examples) as an example. The internal structure of the example is as follows. - - ```text - - . - ├── deps - │   └── sh-1.12.14-py2.py3-none-any.whl - └── src - └── exclamation.py - - ``` - -2. Run the Python Function. - - (1) Copy the ZIP file to the Pulsar image. - - ```bash - - docker exec -it [CONTAINER ID] /bin/bash - docker cp CONTAINER ID:/pulsar - - ``` - - (2) Run the Python function using the following command. - - ```bash - - ./bin/pulsar-admin functions localrun \ - --classname exclamation \ - --py \ - --inputs persistent://public/default/in-topic \ - --output persistent://public/default/out-topic \ - --tenant public \ - --namespace default \ - --name PythonFunction - - ``` - - The following log indicates that the Python function starts successfully. - - ```text - - ... - 07:55:03.724 [main] INFO org.apache.pulsar.functions.runtime.ProcessRuntime - Started process successfully - ... - - ``` - -### PIP - -The PIP method is only supported in Kubernetes runtime. To package a function with **PIP** in Python, complete the following steps. - -1. Configure the `functions_worker.yml` file. - - ```text - - #### Kubernetes Runtime #### - installUserCodeDependencies: true - - ``` - -2. Write your Python Function. - - ``` - - from pulsar import Function - import js2xml - - # The classic ExclamationFunction that appends an exclamation at the end - # of the input - class ExclamationFunction(Function): - def __init__(self): - pass - - def process(self, input, context): - // add your logic - return input + '!' - - ``` - - You can introduce additional dependencies. When Python Function detects that the file currently used is `whl` and the `installUserCodeDependencies` parameter is specified, the system uses the `pip install` command to install the dependencies required in Python Function. - -3. Generate the `whl` file. - - ```shell script - - $ cd $PULSAR_HOME/pulsar-functions/scripts/python - $ chmod +x generate.sh - $ ./generate.sh - # e.g: ./generate.sh /path/to/python /path/to/python/output 1.0.0 - - ``` - - The output is written in `/path/to/python/output`: - - ```text - - -rw-r--r-- 1 root staff 1.8K 8 27 14:29 pulsarfunction-1.0.0-py2-none-any.whl - -rw-r--r-- 1 root staff 1.4K 8 27 14:29 pulsarfunction-1.0.0.tar.gz - -rw-r--r-- 1 root staff 0B 8 27 14:29 pulsarfunction.whl - - ``` - -## Go - -To package a function in Go, complete the following steps. - -1. Write a Go function. - - Currently, Go function can be **only** implemented using SDK and the interface of the function is exposed in the form of SDK. 
-
-## Go
-
-To package a function in Go, complete the following steps.
-
-1. Write a Go function.
-
-   Currently, Go functions can **only** be implemented using the SDK, and the interface of the function is exposed in the form of the SDK. Before using a Go function, you need to import "github.com/apache/pulsar/pulsar-function-go/pf".
-
-   ```go
-
-   package main
-
-   import (
-       "context"
-       "fmt"
-
-       "github.com/apache/pulsar/pulsar-function-go/pf"
-   )
-
-   func HandleRequest(ctx context.Context, input []byte) error {
-       fmt.Println(string(input) + "!")
-       return nil
-   }
-
-   func main() {
-       pf.Start(HandleRequest)
-   }
-
-   ```
-
-   You can use the context to retrieve information about the running function.
-
-   ```go
-
-   if fc, ok := pf.FromContext(ctx); ok {
-       fmt.Printf("function ID is:%s, ", fc.GetFuncID())
-       fmt.Printf("function version is:%s\n", fc.GetFuncVersion())
-   }
-
-   ```
-
-   When writing a Go function, remember that
-   - In `main()`, you **only** need to register the function with `Start()`. **Only** one function is accepted by `Start()`.
-   - Go functions use Go reflection, based on the received function, to verify whether the parameter list and return value list are correct. The parameter list and return value list **must** match one of the following sample signatures:
-
-   ```text
-
-   func ()
-   func () error
-   func (input) error
-   func () (output, error)
-   func (input) (output, error)
-   func (context.Context) error
-   func (context.Context, input) error
-   func (context.Context) (output, error)
-   func (context.Context, input) (output, error)
-
-   ```
-
-2. Build the Go function.
-
-   ```bash
-
-   go build [your Go function filename].go
-
-   ```
-
-3. Run the Go function.
-
-   (1) Copy the Go function file into the Pulsar container.
-
-   ```bash
-
-   docker cp [path of Go function file] [CONTAINER ID]:/pulsar
-   docker exec -it [CONTAINER ID] /bin/bash
-
-   ```
-
-   (2) Run the Go function with the following command.
-
-   ```bash
-
-   ./bin/pulsar-admin functions localrun \
-   --go [your Go function path] \
-   --inputs [input topics] \
-   --output [output topic] \
-   --tenant [default:public] \
-   --namespace [default:default] \
-   --name [custom unique Go function name]
-
-   ```
-
-   The following log indicates that the Go function starts successfully.
-
-   ```text
-
-   ...
-   07:55:03.724 [main] INFO  org.apache.pulsar.functions.runtime.ProcessRuntime - Started process successfully
-   ...
-
-   ```
-
-## Start Functions in cluster mode
-If you want to start a function in cluster mode, replace `localrun` with `create` in the commands above. The following log indicates that your function starts successfully.
-
-  ```text
-
-  "Created successfully"
-
-  ```
-
-For information about the parameters `--classname`, `--jar`, `--py`, `--go`, and `--inputs`, run the command `./bin/pulsar-admin functions` or see [here](reference-pulsar-admin.md#functions).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/functions-runtime.md b/site2/website/versioned_docs/version-2.9.0-deprecated/functions-runtime.md
deleted file mode 100644
index 7164bd13668aff..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/functions-runtime.md
+++ /dev/null
@@ -1,403 +0,0 @@
----
-id: functions-runtime
-title: Configure Functions runtime
-sidebar_label: "Setup: Configure Functions runtime"
-original_id: functions-runtime
----
-
-You can use the following methods to run functions.
-
-- *Thread*: Invoke functions in threads running inside the functions worker.
-- *Process*: Invoke functions in processes forked by the functions worker.
-- *Kubernetes*: Submit functions as Kubernetes StatefulSets from the functions worker.
-
-:::note
-
-Pulsar supports adding labels to the Kubernetes StatefulSets and services while launching functions, which facilitates selecting the target Kubernetes objects.
-
-:::
-
-The differences between the thread and process modes are:
-- Thread mode: when a function runs in thread mode, it runs in the same Java virtual machine (JVM) as the functions worker.
-- Process mode: when a function runs in process mode, it runs in a separate process forked by the functions worker on the same machine.
-
-## Configure thread runtime
-It is easy to configure the *Thread* runtime. In most cases, you do not need to configure anything. You can customize the thread group name with the following settings:
-
-```yaml

-functionRuntimeFactoryClassName: org.apache.pulsar.functions.runtime.thread.ThreadRuntimeFactory
-functionRuntimeFactoryConfigs:
-  threadGroupName: "Your Function Container Group"
-
-```
-
-The *Thread* runtime is only supported for Java functions.
-
-## Configure process runtime
-When you enable the *Process* runtime, in most cases you do not need to configure anything. The available settings are as follows:
-
-```yaml
-
-functionRuntimeFactoryClassName: org.apache.pulsar.functions.runtime.process.ProcessRuntimeFactory
-functionRuntimeFactoryConfigs:
-  # the directory for storing the function logs
-  logDirectory:
-  # change the jar location only when you put the java instance jar in a different location
-  javaInstanceJarLocation:
-  # change the python instance location only when you put the python instance files in a different location
-  pythonInstanceLocation:
-  # change the extra dependencies location:
-  extraFunctionDependenciesDir:
-
-```
-
-The *Process* runtime is supported for Java, Python, and Go functions.
-
-## Configure Kubernetes runtime
-
-The Kubernetes runtime works by having the functions worker generate Kubernetes manifests and apply them. If the functions worker runs on Kubernetes, it can use the `serviceAccount` associated with the pod it is running in. Otherwise, you can configure it to communicate with a Kubernetes cluster.
-
-The manifests generated by the functions worker include a `StatefulSet`, a `Service` (used to communicate with the pods), and a `Secret` for auth credentials (when applicable). The `StatefulSet` manifest (by default) has a single pod, with the number of replicas determined by the "parallelism" of the function. On pod boot, the pod downloads the function payload (via the functions worker REST API). The pod's container image is configurable, but it must have the functions runtime.
-
-The Kubernetes runtime supports secrets, so you can create a Kubernetes secret and expose it as an environment variable in the pod. The Kubernetes runtime is also extensible: you can implement classes to customize how Kubernetes manifests are generated, how auth data is passed to pods, and how secrets are integrated.
-
-:::tip
-
-For the rules of translating Pulsar object names into Kubernetes resource labels, see [here](admin-api-overview.md#how-to-define-pulsar-resource-names-when-running-pulsar-in-kubernetes).
-
-:::
-
-### Basic configuration
-
-It is easy to configure the Kubernetes runtime. You can just uncomment the settings of `kubernetesContainerFactory` in the `functions_worker.yml` file. The following is an example.
-
-```yaml
-
-functionRuntimeFactoryClassName: org.apache.pulsar.functions.runtime.kubernetes.KubernetesRuntimeFactory
-functionRuntimeFactoryConfigs:
-  # URI of the Kubernetes cluster; leave it empty to use the Kubernetes settings of the functions worker
-  k8Uri:
-  # the kubernetes namespace to run the function instances; defaults to `default` if this setting is left empty
-  jobNamespace:
-  # The Kubernetes pod name to run the function instances.
It is set to - # `pf----` if this setting is left to be empty - jobName: - # the docker image to run function instance. by default it is `apachepulsar/pulsar` - pulsarDockerImageName: - # the docker image to run function instance according to different configurations provided by users. - # By default it is `apachepulsar/pulsar`. - # e.g: - # functionDockerImages: - # JAVA: JAVA_IMAGE_NAME - # PYTHON: PYTHON_IMAGE_NAME - # GO: GO_IMAGE_NAME - functionDockerImages: - # "The image pull policy for image used to run function instance. By default it is `IfNotPresent` - imagePullPolicy: IfNotPresent - # the root directory of pulsar home directory in `pulsarDockerImageName`. by default it is `/pulsar`. - # if you are using your own built image in `pulsarDockerImageName`, you need to set this setting accordingly - pulsarRootDir: - # The config admin CLI allows users to customize the configuration of the admin cli tool, such as: - # `/bin/pulsar-admin and /bin/pulsarctl`. By default it is `/bin/pulsar-admin`. If you want to use `pulsarctl` - # you need to set this setting accordingly - configAdminCLI: - # this setting only takes effects if `k8Uri` is set to null. if your function worker is running as a k8 pod, - # setting this to true is let function worker to submit functions to the same k8s cluster as function worker - # is running. setting this to false if your function worker is not running as a k8 pod. - submittingInsidePod: false - # setting the pulsar service url that pulsar function should use to connect to pulsar - # if it is not set, it will use the pulsar service url configured in worker service - pulsarServiceUrl: - # setting the pulsar admin url that pulsar function should use to connect to pulsar - # if it is not set, it will use the pulsar admin url configured in worker service - pulsarAdminUrl: - # The flag indicates to install user code dependencies. (applied to python package) - installUserCodeDependencies: - # The repository that pulsar functions use to download python dependencies - pythonDependencyRepository: - # The repository that pulsar functions use to download extra python dependencies - pythonExtraDependencyRepository: - # the custom labels that function worker uses to select the nodes for pods - customLabels: - # The expected metrics collection interval, in seconds - expectedMetricsCollectionInterval: 30 - # Kubernetes Runtime will periodically checkback on - # this configMap if defined and if there are any changes - # to the kubernetes specific stuff, we apply those changes - changeConfigMap: - # The namespace for storing change config map - changeConfigMapNamespace: - # The ratio cpu request and cpu limit to be set for a function/source/sink. - # The formula for cpu request is cpuRequest = userRequestCpu / cpuOverCommitRatio - cpuOverCommitRatio: 1.0 - # The ratio memory request and memory limit to be set for a function/source/sink. 
-  # The formula for memory request is memoryRequest = userRequestMemory / memoryOverCommitRatio
-  memoryOverCommitRatio: 1.0
-  # The port inside the function pod which is used by the worker to communicate with the pod
-  grpcPort: 9093
-  # The port inside the function pod on which prometheus metrics are exposed
-  metricsPort: 9094
-  # The directory inside the function pod where nar packages will be extracted
-  narExtractionDirectory:
-  # The classpath where function instance files are stored
-  functionInstanceClassPath:
-  # the directory for dropping extra function dependencies
-  # if it is not an absolute path, it is relative to `pulsarRootDir`
-  extraFunctionDependenciesDir:
-  # Additional memory padding added on top of the memory requested by the function, on a per-instance basis
-  percentMemoryPadding: 10
-  # The duration (in seconds) before the StatefulSet is deleted after a function stops or restarts.
-  # Value must be a non-negative integer. 0 indicates the StatefulSet is deleted immediately.
-  # Default is 5 seconds.
-  gracePeriodSeconds: 5
-
-```
-
-If you run the functions worker embedded in a broker on Kubernetes, you can use the default settings.
-
-### Run standalone functions worker on Kubernetes
-
-If you run the functions worker standalone (that is, not embedded) on Kubernetes, you need to configure `pulsarServiceUrl` to be the URL of the broker and `pulsarAdminUrl` to be the URL of the functions worker.
-
-For example, suppose both Pulsar brokers and functions workers run in the `pulsar` K8S namespace, the brokers have a service called `brokers`, and the functions worker has a service called `func-worker`. The settings are as follows:
-
-```yaml
-
-pulsarServiceUrl: pulsar://broker.pulsar:6650 # or pulsar+ssl://broker.pulsar:6651 if using TLS
-pulsarAdminUrl: http://func-worker.pulsar:8080 # or https://func-worker:8443 if using TLS
-
-```
-
-### Run RBAC in Kubernetes clusters
-
-If you run RBAC in your Kubernetes cluster, make sure that the service account you use for running functions workers (or brokers, if functions workers run along with brokers) has permissions on the following Kubernetes APIs.
-
-- services
-- configmaps
-- pods
-- apps.statefulsets
-
-The following is sufficient:
-
-```yaml
-
-apiVersion: rbac.authorization.k8s.io/v1beta1
-kind: ClusterRole
-metadata:
-  name: functions-worker
-rules:
-- apiGroups: [""]
-  resources:
-  - services
-  - configmaps
-  - pods
-  verbs:
-  - '*'
-- apiGroups:
-  - apps
-  resources:
-  - statefulsets
-  verbs:
-  - '*'
----
-apiVersion: v1
-kind: ServiceAccount
-metadata:
-  name: functions-worker
----
-apiVersion: rbac.authorization.k8s.io/v1beta1
-kind: ClusterRoleBinding
-metadata:
-  name: functions-worker
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: ClusterRole
-  name: functions-worker
-subjects:
-- kind: ServiceAccount
-  name: functions-worker
-
-```
-
-If the service account is not properly configured, an error message similar to this is displayed:
-
-```bash
-
-22:04:27.696 [Timer-0] ERROR org.apache.pulsar.functions.runtime.KubernetesRuntimeFactory - Error while trying to fetch configmap example-pulsar-4qvmb5gur3c6fc9dih0x1xn8b-function-worker-config at namespace pulsar
-io.kubernetes.client.ApiException: Forbidden
- at io.kubernetes.client.ApiClient.handleResponse(ApiClient.java:882) ~[io.kubernetes-client-java-2.0.0.jar:?]
- at io.kubernetes.client.ApiClient.execute(ApiClient.java:798) ~[io.kubernetes-client-java-2.0.0.jar:?]
- at io.kubernetes.client.apis.CoreV1Api.readNamespacedConfigMapWithHttpInfo(CoreV1Api.java:23673) ~[io.kubernetes-client-java-api-2.0.0.jar:?]
- at io.kubernetes.client.apis.CoreV1Api.readNamespacedConfigMap(CoreV1Api.java:23655) ~[io.kubernetes-client-java-api-2.0.0.jar:?]
- at org.apache.pulsar.functions.runtime.KubernetesRuntimeFactory.fetchConfigMap(KubernetesRuntimeFactory.java:284) [org.apache.pulsar-pulsar-functions-runtime-2.4.0-42c3bf949.jar:2.4.0-42c3bf949]
- at org.apache.pulsar.functions.runtime.KubernetesRuntimeFactory$1.run(KubernetesRuntimeFactory.java:275) [org.apache.pulsar-pulsar-functions-runtime-2.4.0-42c3bf949.jar:2.4.0-42c3bf949]
- at java.util.TimerThread.mainLoop(Timer.java:555) [?:1.8.0_212]
- at java.util.TimerThread.run(Timer.java:505) [?:1.8.0_212]
-
-```
-
-### Integrate Kubernetes secrets
-
-In order to safely distribute secrets, Pulsar Functions can reference Kubernetes secrets. To enable this, set the `secretsProviderConfiguratorClassName` to `org.apache.pulsar.functions.secretsproviderconfigurator.KubernetesSecretsProviderConfigurator`.
-
-You can create a secret in the namespace where your functions are deployed. For example, you deploy functions to the `pulsar-func` Kubernetes namespace, and you have a secret named `database-creds` with a field named `password`, which you want to mount in the pod as an environment variable called `DATABASE_PASSWORD`. The following functions configuration enables you to reference that secret and mount the value as an environment variable in the pod.
-
-```Yaml
-
-tenant: "mytenant"
-namespace: "mynamespace"
-name: "myfunction"
-topicName: "persistent://mytenant/mynamespace/myfuncinput"
-className: "com.company.pulsar.myfunction"
-
-secrets:
-  # the secret will be mounted from the `password` field in the `database-creds` secret as an env var called `DATABASE_PASSWORD`
-  DATABASE_PASSWORD:
-    path: "database-creds"
-    key: "password"
-
-```
-
-### Enable token authentication
-
-When you enable authentication for your Pulsar cluster, you need a mechanism for the pod running your function to authenticate with the broker.
-
-The `org.apache.pulsar.functions.auth.KubernetesFunctionAuthProvider` interface provides support for any authentication mechanism. The `functionAuthProviderClassName` in `functions_worker.yml` is used to specify the path to your implementation.
-
-Pulsar includes an implementation of this interface for token authentication, and distributes the certificate authority via the same implementation. The configuration is as follows:
-
-```Yaml
-
-functionAuthProviderClassName: org.apache.pulsar.functions.auth.KubernetesSecretsTokenAuthProvider
-
-```
-
-For token authentication, the functions worker captures the token that is used to deploy (or update) the function. The token is saved as a secret and mounted into the pod.
-
-For custom authentication or TLS, you need to implement this interface or use an alternative mechanism to provide authentication. If you use token authentication and TLS encryption to secure the communication with the cluster, Pulsar passes your certificate authority (CA) to the client, so the client obtains what it needs to authenticate the cluster, and trusts the cluster with your signed certificate.
-
-:::note
-
-If the token you use when deploying a function has an expiry time, the copy of the token saved for the function expires with it, and the function then loses the ability to authenticate.
-
-:::
-
-### Run clusters with authentication
-
-When you run a functions worker in a standalone process (that is, not embedded in the broker) in a cluster with authentication, you must configure your functions worker to interact with the broker and to authenticate incoming requests. So you need to configure the properties that the broker requires for authentication or authorization.
-
-For example, if you use token authentication, you need to configure the following properties in the `functions_worker.yml` file.
-
-```Yaml
-
-clientAuthenticationPlugin: org.apache.pulsar.client.impl.auth.AuthenticationToken
-clientAuthenticationParameters: file:///etc/pulsar/token/admin-token.txt
-configurationStoreServers: zookeeper-cluster:2181 # auth requires a connection to zookeeper
-authenticationProviders:
- - "org.apache.pulsar.broker.authentication.AuthenticationProviderToken"
-authorizationEnabled: true
-authenticationEnabled: true
-superUserRoles:
-  - superuser
-  - proxy
-properties:
-  tokenSecretKey: file:///etc/pulsar/jwt/secret # if using a secret token, key file must be DER-encoded
-  tokenPublicKey: file:///etc/pulsar/jwt/public.key # if using public/private key tokens, key file must be DER-encoded
-
-```
-
-:::note
-
-You must configure both authentication on the functions worker, so the server can authenticate incoming requests, and client credentials, so the functions worker can authenticate itself when communicating with the broker.
-
-:::
-
-### Customize Kubernetes runtime
-
-The Kubernetes integration enables you to implement a class that customizes how manifests are generated. You can configure it by setting `runtimeCustomizerClassName` in the `functions_worker.yml` file to the fully qualified class name. You must implement the `org.apache.pulsar.functions.runtime.kubernetes.KubernetesManifestCustomizer` interface.
-
-The functions (and sinks/sources) API provides a flag, `customRuntimeOptions`, which is passed to this interface.
-
-To initialize the `KubernetesManifestCustomizer`, you can provide `runtimeCustomizerConfig` in the `functions_worker.yml` file. `runtimeCustomizerConfig` is passed to the `public void initialize(Map<String, Object> config)` function of the interface. `runtimeCustomizerConfig` is different from `customRuntimeOptions` in that `runtimeCustomizerConfig` is the same across all functions. If you provide both `runtimeCustomizerConfig` and `customRuntimeOptions`, you need to decide how to manage these two configurations in your implementation of `KubernetesManifestCustomizer`.
-
-Pulsar includes a built-in implementation. To use the basic implementation, set `runtimeCustomizerClassName` to `org.apache.pulsar.functions.runtime.kubernetes.BasicKubernetesManifestCustomizer`. The built-in implementation, initialized with `runtimeCustomizerConfig`, enables you to pass a JSON document as `customRuntimeOptions` with certain properties that augment how the manifests are generated. If both `runtimeCustomizerConfig` and `customRuntimeOptions` are provided, `BasicKubernetesManifestCustomizer` uses `customRuntimeOptions` to override the configuration if there are conflicts between the two.
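-
-On the deploy side, the `customRuntimeOptions` JSON (an example of which follows below) is supplied as a string when the function is created. A minimal sketch using `pulsar-admin`; the jar, class, and topic names are placeholders:
-
-```bash
-
-./bin/pulsar-admin functions create \
-  --jar my-function.jar \
-  --classname com.example.MyFunction \
-  --inputs persistent://public/default/in-topic \
-  --custom-runtime-options '{"jobNamespace": "pulsar-functions"}' \
-  --tenant public \
-  --namespace default \
-  --name CustomizedFunction
-
-```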
-
-Below is an example of `customRuntimeOptions`.
-
-```json
-
-{
-  "jobName": "jobname", // the k8s pod name to run this function instance
-  "jobNamespace": "namespace", // the k8s namespace to run this function in
-  "extraLabels": { // extra labels to attach to the statefulSet, service, and pods
-    "extraLabel": "value"
-  },
-  "extraAnnotations": { // extra annotations to attach to the statefulSet, service, and pods
-    "extraAnnotation": "value"
-  },
-  "nodeSelectorLabels": { // node selector labels to add on to the pod spec
-    "customLabel": "value"
-  },
-  "tolerations": [ // tolerations to add to the pod spec
-    {
-      "key": "custom-key",
-      "value": "value",
-      "effect": "NoSchedule"
-    }
-  ],
-  "resourceRequirements": { // values for cpu and memory should be defined as described here: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container
-    "requests": {
-      "cpu": 1,
-      "memory": "4G"
-    },
-    "limits": {
-      "cpu": 2,
-      "memory": "8G"
-    }
-  }
-}
-
-```
-
-## Run clusters with geo-replication
-
-If you run multiple clusters tied together with geo-replication, it is important to use a different function namespace for each cluster. Otherwise, the functions share a namespace and may be scheduled across clusters.
-
-For example, if you have two clusters, `east-1` and `west-1`, you can configure the functions workers for `east-1` and `west-1` respectively as follows.
-
-```Yaml
-
-pulsarFunctionsCluster: east-1
-pulsarFunctionsNamespace: public/functions-east-1
-
-```
-
-```Yaml
-
-pulsarFunctionsCluster: west-1
-pulsarFunctionsNamespace: public/functions-west-1
-
-```
-
-This ensures that the two different functions workers use distinct sets of topics for their internal coordination.
-
-## Configure standalone functions worker
-
-When configuring a standalone functions worker, you need to configure the properties that the broker requires, especially if you use TLS, so that the functions worker can communicate with the broker.
-
-You need to configure the following required properties.
-
-```Yaml
-
-workerPort: 8080
-workerPortTls: 8443 # when using TLS
-tlsCertificateFilePath: /etc/pulsar/tls/tls.crt # when using TLS
-tlsKeyFilePath: /etc/pulsar/tls/tls.key # when using TLS
-tlsTrustCertsFilePath: /etc/pulsar/tls/ca.crt # when using TLS
-pulsarServiceUrl: pulsar://broker.pulsar:6650/ # or pulsar+ssl://pulsar-prod-broker.pulsar:6651/ when using TLS
-pulsarWebServiceUrl: http://broker.pulsar:8080/ # or https://pulsar-prod-broker.pulsar:8443/ when using TLS
-useTls: true # when using TLS, critical!
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/functions-worker.md b/site2/website/versioned_docs/version-2.9.0-deprecated/functions-worker.md
deleted file mode 100644
index 49fc76b30bdaa5..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/functions-worker.md
+++ /dev/null
@@ -1,386 +0,0 @@
----
-id: functions-worker
-title: Deploy and manage functions worker
-sidebar_label: "Setup: Pulsar Functions Worker"
-original_id: functions-worker
----
-Before using Pulsar Functions, you need to learn how to set up the Pulsar Functions worker and how to [configure Functions runtime](functions-runtime.md).
-
-Pulsar `functions-worker` is a logic component that runs Pulsar Functions in cluster mode. Two options are available, and you can select either based on your requirements.
-
-- [run with brokers](#run-functions-worker-with-brokers)
-- [run it separately](#run-functions-worker-separately), on separate machines
-
-:::note
-
-The `--- Service Urls---` lines in the following diagrams represent Pulsar service URLs that Pulsar client and admin use to connect to a Pulsar cluster.
-
-:::
-
-## Run Functions-worker with brokers
-
-The following diagram illustrates the deployment of functions-workers running along with brokers.
-
-![assets/functions-worker-corun.png](/assets/functions-worker-corun.png)
-
-To enable the functions-worker to run as part of a broker, you need to set `functionsWorkerEnabled` to `true` in the `broker.conf` file.
-
-```conf
-
-functionsWorkerEnabled=true
-
-```
-
-If `functionsWorkerEnabled` is set to `true`, the functions-worker is started as part of a broker. You need to configure the `conf/functions_worker.yml` file to customize your functions-worker.
-
-Before you run the functions-worker with brokers, you have to configure the functions-worker, and then start it with the brokers.
-
-### Configure Functions-Worker to run with brokers
-In this mode, most of the settings are already inherited from your broker configuration (for example, configurationStore settings, authentication settings, and so on) since `functions-worker` is running as part of the broker.
-
-Pay attention to the following required settings when configuring the functions-worker in this mode.
-
-- `numFunctionPackageReplicas`: The number of replicas to store function packages. The default value is `1`, which is good for standalone deployment. For production deployment, to ensure high availability, set it to be larger than `2`.
-- `initializedDlogMetadata`: Whether the distributed log metadata is initialized at runtime. If it is set to `true`, you must ensure that the metadata has been initialized by the `bin/pulsar initialize-cluster-metadata` command.
-
-If authentication is enabled on the BookKeeper cluster, configure the following BookKeeper authentication settings.
-
-- `bookkeeperClientAuthenticationPlugin`: the BookKeeper client authentication plugin name.
-- `bookkeeperClientAuthenticationParametersName`: the BookKeeper client authentication plugin parameters name.
-- `bookkeeperClientAuthenticationParameters`: the BookKeeper client authentication plugin parameters.
-
-### Configure Stateful-Functions to run with broker
-
-If you want to use Stateful-Functions related features (for example, the `putState()` and `queryState()` related interfaces), follow the steps below.
-
-1. Enable the **streamStorage** service in BookKeeper.
-
-   Currently, the service uses the NAR package, so you need to set the configuration in `bookkeeper.conf`.
-
-   ```text
-
-   extraServerComponents=org.apache.bookkeeper.stream.server.StreamStorageLifecycleComponent
-
-   ```
-
-   After starting the bookie, use the following method to check whether the streamStorage service is started correctly.
-
-   Input:
-
-   ```shell
-
-   telnet localhost 4181
-
-   ```
-
-   Output:
-
-   ```text
-
-   Trying 127.0.0.1...
-   Connected to localhost.
-   Escape character is '^]'.
-
-   ```
-
-2. Turn on this feature in `functions_worker.yml`.
-
-   ```text
-
-   stateStorageServiceUrl: bk://[bk-service-url]:4181
-
-   ```
-
-   `[bk-service-url]` is the service URL pointing to the BookKeeper table service.
-
-### Start Functions-worker with broker
-
-Once you have configured the `functions_worker.yml` file, you can start or restart your broker.
-
-And then you can use the following command to verify if `functions-worker` is running well.
-
-```bash
-
-curl [worker-hostname]:8080/admin/v2/worker/cluster
-
-```
-
-After entering the command above, a list of active function workers in the cluster is returned. The output is similar to the following.
-
-```json
-
-[{"workerId":"","workerHostname":"","port":8080}]
-
-```
-
-## Run Functions-worker separately
-
-This section illustrates how to run `functions-worker` as a separate process on separate machines.
-
-![assets/functions-worker-separated.png](/assets/functions-worker-separated.png)
-
-:::note
-
-In this mode, make sure `functionsWorkerEnabled` is set to `false`, so you won't start `functions-worker` with brokers by mistake. Also, while accessing the `functions-worker` to manage any of the functions, the `pulsar-admin` CLI tool or any of the clients should use the `workerHostname` and `workerPort` that you set in [Worker parameters](#worker-parameters) to generate an `--admin-url`.
-
-:::
-
-### Configure Functions-worker to run separately
-
-To run the functions-worker separately, you have to configure the following parameters.
-
-#### Worker parameters
-
-- `workerId`: A string, unique across clusters, that identifies a worker machine.
-- `workerHostname`: The hostname of the worker machine.
-- `workerPort`: The port that the worker server listens on. Keep it as the default if you don't need to customize it.
-- `workerPortTls`: The TLS port that the worker server listens on. Keep it as the default if you don't need to customize it.
-
-#### Function package parameter
-
-- `numFunctionPackageReplicas`: The number of replicas to store function packages. The default value is `1`.
-
-#### Function metadata parameter
-
-- `pulsarServiceUrl`: The Pulsar service URL for your broker cluster.
-- `pulsarWebServiceUrl`: The Pulsar web service URL for your broker cluster.
-- `pulsarFunctionsCluster`: Set the value to your Pulsar cluster name (same as the `clusterName` setting in the broker configuration).
-
-If authentication is enabled for your broker cluster, you *should* configure the authentication plugin and parameters for the functions worker to communicate with the brokers.
-
-- `brokerClientAuthenticationEnabled`: Whether to enable the broker client authentication used by function workers to talk to brokers.
-- `clientAuthenticationPlugin`: The authentication plugin to be used by the Pulsar client in the worker service.
-- `clientAuthenticationParameters`: The authentication parameters to be used by the Pulsar client in the worker service.
-
-#### Security settings
-
-If you want to enable security on functions workers, you *should*:
-- [Enable TLS transport encryption](#enable-tls-transport-encryption)
-- [Enable Authentication Provider](#enable-authentication-provider)
-- [Enable Authorization Provider](#enable-authorization-provider)
-- [Enable End-to-End Encryption](#enable-end-to-end-encryption)
-
-##### Enable TLS transport encryption
-
-To enable TLS transport encryption, configure the following settings.
-
-```
-
-useTLS: true
-pulsarServiceUrl: pulsar+ssl://localhost:6651/
-pulsarWebServiceUrl: https://localhost:8443
-
-tlsEnabled: true
-tlsCertificateFilePath: /path/to/functions-worker.cert.pem
-tlsKeyFilePath: /path/to/functions-worker.key-pk8.pem
-tlsTrustCertsFilePath: /path/to/ca.cert.pem
-
-# The path to trusted certificates used by the Pulsar client to authenticate with Pulsar brokers
-brokerClientTrustCertsFilePath: /path/to/ca.cert.pem
-
-```
-
-For details on TLS encryption, refer to [Transport Encryption using TLS](security-tls-transport.md).
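-
-With TLS enabled, any client that talks to the functions worker (including `pulsar-admin`) must target the TLS port and trust the CA. A minimal sketch; the worker hostname is a placeholder and the CA path assumes the configuration above:
-
-```bash
-
-# Hypothetical worker address; substitute your own host and CA certificate path.
-bin/pulsar-admin \
-  --admin-url https://[worker-hostname]:8443 \
-  --tls-trust-cert-path /path/to/ca.cert.pem \
-  functions list --tenant public --namespace default
-
-```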
- -##### Enable Authentication Provider - -To enable authentication on Functions Worker, you need to configure the following settings. - -:::note - -Substitute the *providers list* with the providers you want to enable. - -::: - -``` - -authenticationEnabled: true -authenticationProviders: [ provider1, provider2 ] - -``` - -For *TLS Authentication* provider, follow the example below to add the necessary settings. -See [TLS Authentication](security-tls-authentication.md) for more details. - -``` - -brokerClientAuthenticationPlugin: org.apache.pulsar.client.impl.auth.AuthenticationTls -brokerClientAuthenticationParameters: tlsCertFile:/path/to/admin.cert.pem,tlsKeyFile:/path/to/admin.key-pk8.pem - -authenticationEnabled: true -authenticationProviders: ['org.apache.pulsar.broker.authentication.AuthenticationProviderTls'] - -``` - -For *SASL Authentication* provider, add `saslJaasClientAllowedIds` and `saslJaasBrokerSectionName` -under `properties` if needed. - -``` - -properties: - saslJaasClientAllowedIds: .*pulsar.* - saslJaasBrokerSectionName: Broker - -``` - -For *Token Authentication* provider, add necessary settings for `properties` if needed. -See [Token Authentication](security-jwt.md) for more details. -Note: key files must be DER-encoded - -``` - -properties: - tokenSecretKey: file://my/secret.key - # If using public/private - # tokenPublicKey: file:///path/to/public.key - -``` - -##### Enable Authorization Provider - -To enable authorization on Functions Worker, you need to configure `authorizationEnabled`, `authorizationProvider` and `configurationStoreServers`. The authentication provider connects to `configurationStoreServers` to receive namespace policies. - -```yaml - -authorizationEnabled: true -authorizationProvider: org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider -configurationStoreServers: - -``` - -You should also configure a list of superuser roles. The superuser roles are able to access any admin API. The following is a configuration example. - -```yaml - -superUserRoles: - - role1 - - role2 - - role3 - -``` - -##### Enable End-to-End Encryption - -You can use the public and private key pair that the application configures to perform encryption. Only the consumers with a valid key can decrypt the encrypted messages. - -To enable End-to-End encryption on Functions Worker, you can set it by specifying `--producer-config` in the command line terminal, for more information, please refer to [here](security-encryption.md). - -We include the relevant configuration information of `CryptoConfig` into `ProducerConfig`. The specific configurable field information about `CryptoConfig` is as follows: - -```text - -public class CryptoConfig { - private String cryptoKeyReaderClassName; - private Map cryptoKeyReaderConfig; - - private String[] encryptionKeys; - private ProducerCryptoFailureAction producerCryptoFailureAction; - - private ConsumerCryptoFailureAction consumerCryptoFailureAction; -} - -``` - -- `producerCryptoFailureAction`: define the action if producer fail to encrypt data one of `FAIL`, `SEND`. -- `consumerCryptoFailureAction`: define the action if consumer fail to decrypt data one of `FAIL`, `DISCARD`, `CONSUME`. - -#### BookKeeper Authentication - -If authentication is enabled on the BookKeeper cluster, you need configure the BookKeeper authentication settings as follows: - -- `bookkeeperClientAuthenticationPlugin`: the plugin name of BookKeeper client authentication. 
-- `bookkeeperClientAuthenticationParametersName`: the plugin parameters name of BookKeeper client authentication.
-- `bookkeeperClientAuthenticationParameters`: the plugin parameters of BookKeeper client authentication.
-
-### Start Functions-worker
-
-Once you have finished configuring the `functions_worker.yml` configuration file, you can start a `functions-worker` in the background by using [nohup](https://en.wikipedia.org/wiki/Nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
-
-```bash
-
-bin/pulsar-daemon start functions-worker
-
-```
-
-You can also start `functions-worker` in the foreground by using the `pulsar` CLI tool:
-
-```bash
-
-bin/pulsar functions-worker
-
-```
-
-### Configure Proxies for Functions-workers
-
-When you run `functions-worker` in a separate cluster, the admin rest endpoints are split between two clusters: the `functions`, `function-worker`, `source` and `sink` endpoints are served
-by the `functions-worker` cluster, while all the remaining endpoints are served by the broker cluster.
-Hence you need to configure your `pulsar-admin` to use the right service URL accordingly.
-
-To address this inconvenience, you can start a proxy cluster that routes the admin rest requests accordingly, which gives you one central entry point for your admin service.
-
-If you already have a proxy cluster, continue reading. If you haven't set up a proxy cluster before, you can follow the [instructions](http://pulsar.apache.org/docs/en/administration-proxy/) to
-start proxies.
-
-![assets/functions-worker-separated.png](/assets/functions-worker-separated-proxy.png)
-
-To enable routing of functions-related admin requests to the `functions-worker` in a proxy, edit the `proxy.conf` file to modify the following settings:
-
-```conf
-
-functionWorkerWebServiceURL=[web service URL of the functions-worker cluster]
-functionWorkerWebServiceURLTLS=[TLS web service URL of the functions-worker cluster]
-
-```
-
-## Compare the Run-with-Broker and Run-separately modes
-
-As described above, you can run the functions-worker with brokers, or run it separately. It is more convenient to run functions-workers along with brokers. However, running functions-workers in a separate cluster provides better resource isolation for running functions in `Process` or `Thread` mode.
-
-To decide which mode fits your case, refer to the following guidelines.
-
-Use the `Run-with-Broker` mode in the following cases:
-- a) resource isolation is not required when running functions in `Process` or `Thread` mode;
-- b) you configure the functions-worker to run functions on Kubernetes (where the resource isolation problem is addressed by Kubernetes).
-
-Use the `Run-separately` mode in the following cases:
-- a) you don't have a Kubernetes cluster;
-- b) you want to run functions and brokers separately.
-
-## Troubleshooting
-
-**Error message: Namespace missing local cluster name in clusters list**
-
-```
-
-Failed to get partitioned topic metadata: org.apache.pulsar.client.api.PulsarClientException$BrokerMetadataException: Namespace missing local cluster name in clusters list: local_cluster=xyz ns=public/functions clusters=[standalone]
-
-```
-
-This error message appears when either of the following occurs:
-- a) a broker is started with `functionsWorkerEnabled=true`, but `pulsarFunctionsCluster` is not set to the correct cluster in the `conf/functions_worker.yml` file;
-- b) a geo-replicated Pulsar cluster is set up with `functionsWorkerEnabled=true`, and while brokers in one cluster run well, brokers in the other cluster do not work well.
- -**Workaround** - -If any of these cases happens, follow the instructions below to fix the problem: - -1. Disable Functions Worker by setting `functionsWorkerEnabled=false`, and restart brokers. - -2. Get the current clusters list of `public/functions` namespace. - -```bash - -bin/pulsar-admin namespaces get-clusters public/functions - -``` - -3. Check if the cluster is in the clusters list. If the cluster is not in the list, add it to the list and update the clusters list. - -```bash - -bin/pulsar-admin namespaces set-clusters --clusters , public/functions - -``` - -4. After setting the cluster successfully, enable functions worker by setting `functionsWorkerEnabled=true`. - -5. Set the correct cluster name in `pulsarFunctionsCluster` in the `conf/functions_worker.yml` file, and restart brokers. diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/getting-started-concepts-and-architecture.md b/site2/website/versioned_docs/version-2.9.0-deprecated/getting-started-concepts-and-architecture.md deleted file mode 100644 index fe9c3fbc553b2c..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/getting-started-concepts-and-architecture.md +++ /dev/null @@ -1,16 +0,0 @@ ---- -id: concepts-architecture -title: Pulsar concepts and architecture -sidebar_label: "Concepts and architecture" -original_id: concepts-architecture ---- - - - - - - - - - - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/getting-started-docker.md b/site2/website/versioned_docs/version-2.9.0-deprecated/getting-started-docker.md deleted file mode 100644 index de5ead69e164b0..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/getting-started-docker.md +++ /dev/null @@ -1,211 +0,0 @@ ---- -id: getting-started-docker -title: Set up a standalone Pulsar in Docker -sidebar_label: "Run Pulsar in Docker" -original_id: getting-started-docker ---- - -For local development and testing, you can run Pulsar in standalone mode on your own machine within a Docker container. - -If you have not installed Docker, download the [Community edition](https://www.docker.com/community-edition) and follow the instructions for your OS. - -## Start Pulsar in Docker - -* For MacOS, Linux, and Windows: - - ```shell - - $ docker run -it -p 6650:6650 -p 8080:8080 --mount source=pulsardata,target=/pulsar/data --mount source=pulsarconf,target=/pulsar/conf apachepulsar/pulsar:@pulsar:version@ bin/pulsar standalone - - ``` - -A few things to note about this command: - * The data, metadata, and configuration are persisted on Docker volumes in order to not start "fresh" every -time the container is restarted. For details on the volumes you can use `docker volume inspect ` - * For Docker on Windows make sure to configure it to use Linux containers - -If you start Pulsar successfully, you will see `INFO`-level log messages like this: - -``` - -08:18:30.970 [main] INFO org.apache.pulsar.broker.web.WebService - HTTP Service started at http://0.0.0.0:8080 -... -07:53:37.322 [main] INFO org.apache.pulsar.broker.PulsarService - messaging service is ready, bootstrap service port = 8080, broker url= pulsar://localhost:6650, cluster=standalone, configs=org.apache.pulsar.broker.ServiceConfiguration@98b63c1 -... - -``` - -:::tip - -When you start a local standalone cluster, a `public/default` namespace is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics). 
- -::: - -## Use Pulsar in Docker - -Pulsar offers client libraries for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python.md) -and [C++](client-libraries-cpp.md). If you're running a local standalone cluster, you can -use one of these root URLs to interact with your cluster: - -* `pulsar://localhost:6650` -* `http://localhost:8080` - -The following example will guide you get started with Pulsar quickly by using the [Python client API](client-libraries-python.md) -client API. - -Install the Pulsar Python client library directly from [PyPI](https://pypi.org/project/pulsar-client/): - -```shell - -$ pip install pulsar-client - -``` - -### Consume a message - -Create a consumer and subscribe to the topic: - -```python - -import pulsar - -client = pulsar.Client('pulsar://localhost:6650') -consumer = client.subscribe('my-topic', - subscription_name='my-sub') - -while True: - msg = consumer.receive() - print("Received message: '%s'" % msg.data()) - consumer.acknowledge(msg) - -client.close() - -``` - -### Produce a message - -Now start a producer to send some test messages: - -```python - -import pulsar - -client = pulsar.Client('pulsar://localhost:6650') -producer = client.create_producer('my-topic') - -for i in range(10): - producer.send(('hello-pulsar-%d' % i).encode('utf-8')) - -client.close() - -``` - -## Get the topic statistics - -In Pulsar, you can use REST, Java, or command-line tools to control every aspect of the system. -For details on APIs, refer to [Admin API Overview](admin-api-overview.md). - -In the simplest example, you can use curl to probe the stats for a particular topic: - -```shell - -$ curl http://localhost:8080/admin/v2/persistent/public/default/my-topic/stats | python -m json.tool - -``` - -The output is something like this: - -```json - -{ - "msgRateIn": 0.0, - "msgThroughputIn": 0.0, - "msgRateOut": 1.8332950480217471, - "msgThroughputOut": 91.33142602871978, - "bytesInCounter": 7097, - "msgInCounter": 143, - "bytesOutCounter": 6607, - "msgOutCounter": 133, - "averageMsgSize": 0.0, - "msgChunkPublished": false, - "storageSize": 7097, - "backlogSize": 0, - "offloadedStorageSize": 0, - "publishers": [ - { - "accessMode": "Shared", - "msgRateIn": 0.0, - "msgThroughputIn": 0.0, - "averageMsgSize": 0.0, - "chunkedMessageRate": 0.0, - "producerId": 0, - "metadata": {}, - "address": "/127.0.0.1:35604", - "connectedSince": "2021-07-04T09:05:43.04788Z", - "clientVersion": "2.8.0", - "producerName": "standalone-2-5" - } - ], - "waitingPublishers": 0, - "subscriptions": { - "my-sub": { - "msgRateOut": 1.8332950480217471, - "msgThroughputOut": 91.33142602871978, - "bytesOutCounter": 6607, - "msgOutCounter": 133, - "msgRateRedeliver": 0.0, - "chunkedMessageRate": 0, - "msgBacklog": 0, - "backlogSize": 0, - "msgBacklogNoDelayed": 0, - "blockedSubscriptionOnUnackedMsgs": false, - "msgDelayed": 0, - "unackedMessages": 0, - "type": "Exclusive", - "activeConsumerName": "3c544f1daa", - "msgRateExpired": 0.0, - "totalMsgExpired": 0, - "lastExpireTimestamp": 0, - "lastConsumedFlowTimestamp": 1625389101290, - "lastConsumedTimestamp": 1625389546070, - "lastAckedTimestamp": 1625389546162, - "lastMarkDeleteAdvancedTimestamp": 1625389546163, - "consumers": [ - { - "msgRateOut": 1.8332950480217471, - "msgThroughputOut": 91.33142602871978, - "bytesOutCounter": 6607, - "msgOutCounter": 133, - "msgRateRedeliver": 0.0, - "chunkedMessageRate": 0.0, - "consumerName": "3c544f1daa", - "availablePermits": 867, - "unackedMessages": 0, - "avgMessagesPerEntry": 
6, - "blockedConsumerOnUnackedMsgs": false, - "lastAckedTimestamp": 1625389546162, - "lastConsumedTimestamp": 1625389546070, - "metadata": {}, - "address": "/127.0.0.1:35472", - "connectedSince": "2021-07-04T08:58:21.287682Z", - "clientVersion": "2.8.0" - } - ], - "isDurable": true, - "isReplicated": false, - "allowOutOfOrderDelivery": false, - "consumersAfterMarkDeletePosition": {}, - "nonContiguousDeletedMessagesRanges": 0, - "nonContiguousDeletedMessagesRangesSerializedSize": 0, - "durable": true, - "replicated": false - } - }, - "replication": {}, - "deduplicationStatus": "Disabled", - "nonContiguousDeletedMessagesRanges": 0, - "nonContiguousDeletedMessagesRangesSerializedSize": 0 -} - -``` - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/getting-started-helm.md b/site2/website/versioned_docs/version-2.9.0-deprecated/getting-started-helm.md deleted file mode 100644 index 5e9f7044a6d74b..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/getting-started-helm.md +++ /dev/null @@ -1,447 +0,0 @@ ---- -id: getting-started-helm -title: Get started in Kubernetes -sidebar_label: "Run Pulsar in Kubernetes" -original_id: getting-started-helm ---- - -This section guides you through every step of installing and running Apache Pulsar with Helm on Kubernetes quickly, including the following sections: - -- Install the Apache Pulsar on Kubernetes using Helm -- Start and stop Apache Pulsar -- Create topics using `pulsar-admin` -- Produce and consume messages using Pulsar clients -- Monitor Apache Pulsar status with Prometheus and Grafana - -For deploying a Pulsar cluster for production usage, read the documentation on [how to configure and install a Pulsar Helm chart](helm-deploy.md). - -## Prerequisite - -- Kubernetes server 1.14.0+ -- kubectl 1.14.0+ -- Helm 3.0+ - -:::tip - -For the following steps, step 2 and step 3 are for **developers** and step 4 and step 5 are for **administrators**. - -::: - -## Step 0: Prepare a Kubernetes cluster - -Before installing a Pulsar Helm chart, you have to create a Kubernetes cluster. You can follow [the instructions](helm-prepare.md) to prepare a Kubernetes cluster. - -We use [Minikube](https://minikube.sigs.k8s.io/docs/start/) in this quick start guide. To prepare a Kubernetes cluster, follow these steps: - -1. Create a Kubernetes cluster on Minikube. - - ```bash - - minikube start --memory=8192 --cpus=4 --kubernetes-version= - - ``` - - The `` can be any [Kubernetes version supported by your Minikube installation](https://minikube.sigs.k8s.io/docs/reference/configuration/kubernetes/), such as `v1.16.1`. - -2. Set `kubectl` to use Minikube. - - ```bash - - kubectl config use-context minikube - - ``` - -3. To use the [Kubernetes Dashboard](https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/) with the local Kubernetes cluster on Minikube, enter the command below: - - ```bash - - minikube dashboard - - ``` - - The command automatically triggers opening a webpage in your browser. - -## Step 1: Install Pulsar Helm chart - -1. Add Pulsar charts repo. - - ```bash - - helm repo add apache https://pulsar.apache.org/charts - - ``` - - ```bash - - helm repo update - - ``` - -2. Clone the Pulsar Helm chart repository. - - ```bash - - git clone https://github.com/apache/pulsar-helm-chart - cd pulsar-helm-chart - - ``` - -3. Run the script `prepare_helm_release.sh` to create secrets required for installing the Apache Pulsar Helm chart. 
The username `pulsar` and password `pulsar` are used for logging into the Grafana dashboard and Pulsar Manager. - - :::note - - When running the script, you can use `-n` to specify the Kubernetes namespace where the Pulsar Helm chart is installed, `-k` to define the Pulsar Helm release name, and `-c` to create the Kubernetes namespace. For more information about the script, run `./scripts/pulsar/prepare_helm_release.sh --help`. - - ::: - - ```bash - - ./scripts/pulsar/prepare_helm_release.sh \ - -n pulsar \ - -k pulsar-mini \ - -c - - ``` - -4. Use the Pulsar Helm chart to install a Pulsar cluster to Kubernetes. - - :::note - - You need to specify `--set initialize=true` when installing Pulsar the first time. This command installs and starts Apache Pulsar. - - ::: - - ```bash - - helm install \ - --values examples/values-minikube.yaml \ - --set initialize=true \ - --namespace pulsar \ - pulsar-mini apache/pulsar - - ``` - -5. Check the status of all pods. - - ```bash - - kubectl get pods -n pulsar - - ``` - - If all pods start up successfully, you can see that the `STATUS` is changed to `Running` or `Completed`. - - **Output** - - ```bash - - NAME READY STATUS RESTARTS AGE - pulsar-mini-bookie-0 1/1 Running 0 9m27s - pulsar-mini-bookie-init-5gphs 0/1 Completed 0 9m27s - pulsar-mini-broker-0 1/1 Running 0 9m27s - pulsar-mini-grafana-6b7bcc64c7-4tkxd 1/1 Running 0 9m27s - pulsar-mini-prometheus-5fcf5dd84c-w8mgz 1/1 Running 0 9m27s - pulsar-mini-proxy-0 1/1 Running 0 9m27s - pulsar-mini-pulsar-init-t7cqt 0/1 Completed 0 9m27s - pulsar-mini-pulsar-manager-9bcbb4d9f-htpcs 1/1 Running 0 9m27s - pulsar-mini-toolset-0 1/1 Running 0 9m27s - pulsar-mini-zookeeper-0 1/1 Running 0 9m27s - - ``` - -6. Check the status of all services in the namespace `pulsar`. - - ```bash - - kubectl get services -n pulsar - - ``` - - **Output** - - ```bash - - NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - pulsar-mini-bookie ClusterIP None 3181/TCP,8000/TCP 11m - pulsar-mini-broker ClusterIP None 8080/TCP,6650/TCP 11m - pulsar-mini-grafana LoadBalancer 10.106.141.246 3000:31905/TCP 11m - pulsar-mini-prometheus ClusterIP None 9090/TCP 11m - pulsar-mini-proxy LoadBalancer 10.97.240.109 80:32305/TCP,6650:31816/TCP 11m - pulsar-mini-pulsar-manager LoadBalancer 10.103.192.175 9527:30190/TCP 11m - pulsar-mini-toolset ClusterIP None 11m - pulsar-mini-zookeeper ClusterIP None 2888/TCP,3888/TCP,2181/TCP 11m - - ``` - -## Step 2: Use pulsar-admin to create Pulsar tenants/namespaces/topics - -`pulsar-admin` is the CLI (command-Line Interface) tool for Pulsar. In this step, you can use `pulsar-admin` to create resources, including tenants, namespaces, and topics. - -1. Enter the `toolset` container. - - ```bash - - kubectl exec -it -n pulsar pulsar-mini-toolset-0 -- /bin/bash - - ``` - -2. In the `toolset` container, create a tenant named `apache`. - - ```bash - - bin/pulsar-admin tenants create apache - - ``` - - Then you can list the tenants to see if the tenant is created successfully. - - ```bash - - bin/pulsar-admin tenants list - - ``` - - You should see a similar output as below. The tenant `apache` has been successfully created. - - ```bash - - "apache" - "public" - "pulsar" - - ``` - -3. In the `toolset` container, create a namespace named `pulsar` in the tenant `apache`. - - ```bash - - bin/pulsar-admin namespaces create apache/pulsar - - ``` - - Then you can list the namespaces of tenant `apache` to see if the namespace is created successfully. 
- - ```bash - - bin/pulsar-admin namespaces list apache - - ``` - - You should see a similar output as below. The namespace `apache/pulsar` has been successfully created. - - ```bash - - "apache/pulsar" - - ``` - -4. In the `toolset` container, create a topic `test-topic` with `4` partitions in the namespace `apache/pulsar`. - - ```bash - - bin/pulsar-admin topics create-partitioned-topic apache/pulsar/test-topic -p 4 - - ``` - -5. In the `toolset` container, list all the partitioned topics in the namespace `apache/pulsar`. - - ```bash - - bin/pulsar-admin topics list-partitioned-topics apache/pulsar - - ``` - - Then you can see all the partitioned topics in the namespace `apache/pulsar`. - - ```bash - - "persistent://apache/pulsar/test-topic" - - ``` - -## Step 3: Use Pulsar client to produce and consume messages - -You can use the Pulsar client to create producers and consumers to produce and consume messages. - -By default, the Pulsar Helm chart exposes the Pulsar cluster through a Kubernetes `LoadBalancer`. In Minikube, you can use the following command to check the proxy service. - -```bash - -kubectl get services -n pulsar | grep pulsar-mini-proxy - -``` - -You will see a similar output as below. - -```bash - -pulsar-mini-proxy LoadBalancer 10.97.240.109 80:32305/TCP,6650:31816/TCP 28m - -``` - -This output tells what are the node ports that Pulsar cluster's binary port and HTTP port are mapped to. The port after `80:` is the HTTP port while the port after `6650:` is the binary port. - -Then you can find the IP address and exposed ports of your Minikube server by running the following command. - -```bash - -minikube service pulsar-mini-proxy -n pulsar - -``` - -**Output** - -```bash - -|-----------|-------------------|-------------|-------------------------| -| NAMESPACE | NAME | TARGET PORT | URL | -|-----------|-------------------|-------------|-------------------------| -| pulsar | pulsar-mini-proxy | http/80 | http://172.17.0.4:32305 | -| | | pulsar/6650 | http://172.17.0.4:31816 | -|-----------|-------------------|-------------|-------------------------| -🏃 Starting tunnel for service pulsar-mini-proxy. -|-----------|-------------------|-------------|------------------------| -| NAMESPACE | NAME | TARGET PORT | URL | -|-----------|-------------------|-------------|------------------------| -| pulsar | pulsar-mini-proxy | | http://127.0.0.1:61853 | -| | | | http://127.0.0.1:61854 | -|-----------|-------------------|-------------|------------------------| - -``` - -At this point, you can get the service URLs to connect to your Pulsar client. Here are URL examples: - -``` - -webServiceUrl=http://127.0.0.1:61853/ -brokerServiceUrl=pulsar://127.0.0.1:61854/ - -``` - -Then you can proceed with the following steps: - -1. Download the Apache Pulsar tarball from the [downloads page](https://pulsar.apache.org/download/). - -2. Decompress the tarball based on your download file. - - ```bash - - tar -xf .tar.gz - - ``` - -3. Expose `PULSAR_HOME`. - - (1) Enter the directory of the decompressed download file. - - (2) Expose `PULSAR_HOME` as the environment variable. - - ```bash - - export PULSAR_HOME=$(pwd) - - ``` - -4. Configure the Pulsar client. - - In the `${PULSAR_HOME}/conf/client.conf` file, replace `webServiceUrl` and `brokerServiceUrl` with the service URLs you get from the above steps. - -5. Create a subscription to consume messages from `apache/pulsar/test-topic`. - - ```bash - - bin/pulsar-client consume -s sub apache/pulsar/test-topic -n 0 - - ``` - -6. Open a new terminal. 
In the new terminal, create a producer and send 10 messages to the `test-topic` topic. - - ```bash - - bin/pulsar-client produce apache/pulsar/test-topic -m "---------hello apache pulsar-------" -n 10 - - ``` - -7. Verify the results. - - - From the producer side - - **Output** - - The messages have been produced successfully. - - ```bash - - 18:15:15.489 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 10 messages successfully produced - - ``` - - - From the consumer side - - **Output** - - At the same time, you can receive the messages as below. - - ```bash - - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - - ``` - -## Step 4: Use Pulsar Manager to manage the cluster - -[Pulsar Manager](administration-pulsar-manager.md) is a web-based GUI management tool for managing and monitoring Pulsar. - -1. By default, the `Pulsar Manager` is exposed as a separate `LoadBalancer`. You can open the Pulsar Manager UI using the following command: - - ```bash - - minikube service -n pulsar pulsar-mini-pulsar-manager - - ``` - -2. The Pulsar Manager UI will be open in your browser. You can use the username `pulsar` and password `pulsar` to log into Pulsar Manager. - -3. In Pulsar Manager UI, you can create an environment. - - - Click `New Environment` button in the top-left corner. - - Type `pulsar-mini` for the field `Environment Name` in the popup window. - - Type `http://pulsar-mini-broker:8080` for the field `Service URL` in the popup window. - - Click `Confirm` button in the popup window. - -4. After successfully creating an environment, you are redirected to the `tenants` page of that environment. Then you can create `tenants`, `namespaces` and `topics` using the Pulsar Manager. - -## Step 5: Use Prometheus and Grafana to monitor cluster - -Grafana is an open-source visualization tool, which can be used for visualizing time series data into dashboards. - -1. By default, the Grafana is exposed as a separate `LoadBalancer`. You can open the Grafana UI using the following command: - - ```bash - - minikube service pulsar-mini-grafana -n pulsar - - ``` - -2. The Grafana UI is open in your browser. You can use the username `pulsar` and password `pulsar` to log into the Grafana Dashboard. - -3. You can view dashboards for different components of a Pulsar cluster. 
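-
-If the `minikube service` tunnel is not convenient (for example, when working against a remote cluster), you can reach Grafana with a plain port-forward instead. A minimal sketch, assuming the `pulsar-mini` release in the `pulsar` namespace used throughout this guide:
-
-```bash
-
-# Forward local port 3000 to the Grafana service, then open http://localhost:3000
-# and log in with the username `pulsar` and password `pulsar`.
-kubectl port-forward -n pulsar svc/pulsar-mini-grafana 3000:3000
-
-```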
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/getting-started-pulsar.md b/site2/website/versioned_docs/version-2.9.0-deprecated/getting-started-pulsar.md
deleted file mode 100644
index 752590f57b5585..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/getting-started-pulsar.md
+++ /dev/null
@@ -1,72 +0,0 @@
---
id: pulsar-2.0
title: Pulsar 2.0
sidebar_label: "Pulsar 2.0"
original_id: pulsar-2.0
---

Pulsar 2.0 is a major new release for Pulsar that brings some bold changes to the platform, including [simplified topic names](#topic-names), the addition of the [Pulsar Functions](functions-overview.md) feature, some terminology changes, and more.

## New features in Pulsar 2.0

Feature | Description
:-------|:-----------
[Pulsar Functions](functions-overview.md) | A lightweight compute option for Pulsar

## Major changes

There are a few major changes that you should be aware of, as they may significantly impact your day-to-day usage.

### Properties versus tenants

Previously, Pulsar had a concept of properties. A property is essentially the same thing as a tenant, so the "property" terminology has been removed in version 2.0. The [`pulsar-admin properties`](reference-pulsar-admin.md#pulsar-admin) command-line interface, for example, has been replaced with the [`pulsar-admin tenants`](reference-pulsar-admin.md#pulsar-admin-tenants) interface. In some cases the properties terminology is still used, but it is now considered deprecated and will be removed entirely in a future release.

### Topic names

Prior to version 2.0, *all* Pulsar topics had the following form:

```http

{persistent|non-persistent}://property/cluster/namespace/topic

```

Several important changes have been made in Pulsar 2.0:

* There is no longer a [cluster component](#no-cluster-component)
* Properties have been [renamed to tenants](#properties-versus-tenants)
* You can use a [flexible](#flexible-topic-naming) naming system to shorten many topic names
* `/` is not allowed in topic names

#### No cluster component

The cluster component has been removed from topic names. Thus, all topic names now have the following form:

```http

{persistent|non-persistent}://tenant/namespace/topic

```

> Existing topics that use the legacy name format will continue to work without any change, and there are no plans to change that.


#### Flexible topic naming

All topic names in Pulsar 2.0 internally have the form shown [above](#no-cluster-component), but you can now use shorthand names in many cases (for the sake of simplicity). The flexible naming system stems from the fact that there is now a default topic type, tenant, and namespace:

Topic aspect | Default
:------------|:-------
topic type | `persistent`
tenant | `public`
namespace | `default`

The table below shows some example topic name translations that use implicit defaults:

Input topic name | Translated topic name
:----------------|:---------------------
`my-topic` | `persistent://public/default/my-topic`
`my-tenant/my-namespace/my-topic` | `persistent://my-tenant/my-namespace/my-topic`

> For [non-persistent topics](concepts-messaging.md#non-persistent-topics) you'll need to continue to specify the entire topic name, as the default-based rules for persistent topic names don't apply.
Thus you cannot use a shorthand name like `non-persistent://my-topic` and would need to use `non-persistent://public/default/my-topic` instead - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/getting-started-standalone.md b/site2/website/versioned_docs/version-2.9.0-deprecated/getting-started-standalone.md deleted file mode 100644 index 9137ba291421a9..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/getting-started-standalone.md +++ /dev/null @@ -1,268 +0,0 @@ ---- -id: getting-started-standalone -title: Set up a standalone Pulsar locally -sidebar_label: "Run Pulsar locally" -original_id: getting-started-standalone ---- - -For local development and testing, you can run Pulsar in standalone mode on your machine. The standalone mode includes a Pulsar broker, the necessary ZooKeeper and BookKeeper components running inside of a single Java Virtual Machine (JVM) process. - -> **Pulsar in production?** -> If you're looking to run a full production Pulsar installation, see the [Deploying a Pulsar instance](deploy-bare-metal.md) guide. - -## Install Pulsar standalone - -This tutorial guides you through every step of installing Pulsar locally. - -### System requirements - -Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install 64-bit JRE/JDK 8 or later versions, JRE/JDK 11 is recommended. - -:::tip - -By default, Pulsar allocates 2G JVM heap memory to start. It can be changed in `conf/pulsar_env.sh` file under `PULSAR_MEM`. This is extra options passed into JVM. - -::: - -:::note - -Broker is only supported on 64-bit JVM. - -::: - -### Install Pulsar using binary release - -To get started with Pulsar, download a binary tarball release in one of the following ways: - -* download from the Apache mirror (Pulsar @pulsar:version@ binary release) - -* download from the Pulsar [downloads page](pulsar:download_page_url) - -* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) - -* use [wget](https://www.gnu.org/software/wget): - - ```shell - - $ wget pulsar:binary_release_url - - ``` - -After you download the tarball, untar it and use the `cd` command to navigate to the resulting directory: - -```bash - -$ tar xvfz apache-pulsar-@pulsar:version@-bin.tar.gz -$ cd apache-pulsar-@pulsar:version@ - -``` - -#### What your package contains - -The Pulsar binary package initially contains the following directories: - -Directory | Contains -:---------|:-------- -`bin` | Pulsar's command-line tools, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/). -`conf` | Configuration files for Pulsar, including [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more. -`examples` | A Java JAR file containing [Pulsar Functions](functions-overview.md) example. -`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files used by Pulsar. -`licenses` | License files, in the`.txt` form, for various components of the Pulsar [codebase](https://github.com/apache/pulsar). - -These directories are created once you begin running Pulsar. - -Directory | Contains -:---------|:-------- -`data` | The data storage directory used by ZooKeeper and BookKeeper. -`instances` | Artifacts created for [Pulsar Functions](functions-overview.md). -`logs` | Logs created by the installation. 
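For example, listing the package directory right after untarring should show only the initial layout, with `data`, `instances`, and `logs` appearing after the first run. A sketch; the exact top-level files vary by release:

```bash

$ ls apache-pulsar-@pulsar:version@
LICENSE  NOTICE  README  bin  conf  examples  lib  licenses

```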
:::tip

If you want to use builtin connectors and tiered storage offloaders, you can install them according to the following instructions:
* [Install builtin connectors (optional)](#install-builtin-connectors-optional)
* [Install tiered storage offloaders (optional)](#install-tiered-storage-offloaders-optional)

Otherwise, skip this step and perform the next step [Start Pulsar standalone](#start-pulsar-standalone). Pulsar can be successfully installed without installing builtin connectors and tiered storage offloaders.

:::

### Install builtin connectors (optional)

Since the `2.1.0-incubating` release, Pulsar releases a separate binary distribution containing all the `builtin` connectors.
To enable those `builtin` connectors, you can download the connectors tarball release in one of the following ways:

* download from the Apache mirror Pulsar IO Connectors @pulsar:version@ release

* download from the Pulsar [downloads page](pulsar:download_page_url)

* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)

* use [wget](https://www.gnu.org/software/wget):

  ```shell

  $ wget pulsar:connector_release_url/{connector}-@pulsar:version@.nar

  ```

After you download the nar file, copy the file to the `connectors` directory in the pulsar directory.
For example, if you download the `pulsar-io-aerospike-@pulsar:version@.nar` connector file, enter the following commands:

```bash

$ mkdir connectors
$ mv pulsar-io-aerospike-@pulsar:version@.nar connectors

$ ls connectors
pulsar-io-aerospike-@pulsar:version@.nar
...

```

:::note

* If you are running Pulsar in a bare metal cluster, make sure the `connectors` tarball is unzipped in every pulsar directory of the broker (or in every pulsar directory of function-worker if you are running a separate worker cluster for Pulsar Functions).
* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DC/OS](https://dcos.io/)), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. The `apachepulsar/pulsar-all` image has already bundled [all builtin connectors](io-overview.md#working-with-connectors).

:::

### Install tiered storage offloaders (optional)

:::tip

- Since the `2.2.0` release, Pulsar releases a separate binary distribution containing the tiered storage offloaders.
- To enable the tiered storage feature, follow the instructions below; otherwise skip this section.
:::

To get started with [tiered storage offloaders](concepts-tiered-storage.md), you need to download the offloaders tarball release on every broker node in one of the following ways:

* download from the Apache mirror Pulsar Tiered Storage Offloaders @pulsar:version@ release

* download from the Pulsar [downloads page](pulsar:download_page_url)

* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)

* use [wget](https://www.gnu.org/software/wget):

  ```shell

  $ wget pulsar:offloader_release_url

  ```

After you download the tarball, untar the offloaders package and copy the offloaders as `offloaders`
in the pulsar directory:

```bash

$ tar xvfz apache-pulsar-offloaders-@pulsar:version@-bin.tar.gz

// you will find a directory named `apache-pulsar-offloaders-@pulsar:version@` in the pulsar directory
// then copy the offloaders

$ mv apache-pulsar-offloaders-@pulsar:version@/offloaders offloaders

$ ls offloaders
tiered-storage-jcloud-@pulsar:version@.nar

```

For more information on how to configure tiered storage, see the [Tiered storage cookbook](cookbooks-tiered-storage.md).

:::note

* If you are running Pulsar in a bare metal cluster, make sure that the `offloaders` tarball is unzipped in every broker's pulsar directory.
* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DC/OS](https://dcos.io/)), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. The `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders.

:::

## Start Pulsar standalone

Once you have an up-to-date local copy of the release, you can start a local cluster using the [`pulsar`](reference-cli-tools.md#pulsar) command stored in the `bin` directory, specifying that you want to start Pulsar in standalone mode.

```bash

$ bin/pulsar standalone

```

If you have started Pulsar successfully, you will see `INFO`-level log messages like this:

```bash

21:59:29.327 [DLM-/stream/storage-OrderedScheduler-3-0] INFO org.apache.bookkeeper.stream.storage.impl.sc.StorageContainerImpl - Successfully started storage container (0).
21:59:34.576 [main] INFO org.apache.pulsar.broker.authentication.AuthenticationService - Authentication is disabled
21:59:34.576 [main] INFO org.apache.pulsar.websocket.WebSocketService - Pulsar WebSocket Service started

```

:::tip

* The service is running on your terminal, which is under your direct control. If you need to run other commands, open a new terminal window.

:::

You can also run the service as a background process using the `pulsar-daemon start standalone` command. For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon).

> * By default, there is no encryption, authentication, or authorization configured. Apache Pulsar can be accessed from a remote server without any authorization. Please check the [Security Overview](security-overview.md) document to secure your deployment.
>
> * When you start a local standalone cluster, a `public/default` [namespace](concepts-messaging.md#namespaces) is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics).
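Before producing any messages, you can verify that the standalone broker is healthy with the bundled admin CLI. A quick sketch (a healthy broker returns `ok`):

```bash

$ bin/pulsar-admin brokers healthcheck
ok

```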
- -## Use Pulsar standalone - -Pulsar provides a CLI tool called [`pulsar-client`](reference-cli-tools.md#pulsar-client). The pulsar-client tool enables you to consume and produce messages to a Pulsar topic in a running cluster. - -### Consume a message - -The following command consumes a message with the subscription name `first-subscription` to the `my-topic` topic: - -```bash - -$ bin/pulsar-client consume my-topic -s "first-subscription" - -``` - -If the message has been successfully consumed, you will see a confirmation like the following in the `pulsar-client` logs: - -``` - -22:17:16.781 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully consumed - -``` - -:::tip - -As you have noticed that we do not explicitly create the `my-topic` topic, from which we consume the message. When you consume a message from a topic that does not yet exist, Pulsar creates that topic for you automatically. Producing a message to a topic that does not exist will automatically create that topic for you as well. - -::: - -### Produce a message - -The following command produces a message saying `hello-pulsar` to the `my-topic` topic: - -```bash - -$ bin/pulsar-client produce my-topic --messages "hello-pulsar" - -``` - -If the message has been successfully published to the topic, you will see a confirmation like the following in the `pulsar-client` logs: - -``` - -22:21:08.693 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully produced - -``` - -## Stop Pulsar standalone - -Press `Ctrl+C` to stop a local standalone Pulsar. - -:::tip - -If the service runs as a background process using the `pulsar-daemon start standalone` command, then use the `pulsar-daemon stop standalone` command to stop the service. -For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon). - -::: - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/helm-deploy.md b/site2/website/versioned_docs/version-2.9.0-deprecated/helm-deploy.md deleted file mode 100644 index 0e7815e4f4d90b..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/helm-deploy.md +++ /dev/null @@ -1,434 +0,0 @@ ---- -id: helm-deploy -title: Deploy Pulsar cluster using Helm -sidebar_label: "Deployment" -original_id: helm-deploy ---- - -Before running `helm install`, you need to decide how to run Pulsar. -Options can be specified using Helm's `--set option.name=value` command line option. - -## Select configuration options - -In each section, collect the options that are combined to use with the `helm install` command. - -### Kubernetes namespace - -By default, the Pulsar Helm chart is installed to a namespace called `pulsar`. - -```yaml - -namespace: pulsar - -``` - -To install the Pulsar Helm chart into a different Kubernetes namespace, you can include this option in the `helm install` command. - -```bash - ---set namespace= - -``` - -By default, the Pulsar Helm chart doesn't create the namespace. - -```yaml - -namespaceCreate: false - -``` - -To use the Pulsar Helm chart to create the Kubernetes namespace automatically, you can include this option in the `helm install` command. - -```bash - ---set namespaceCreate=true - -``` - -### Persistence - -By default, the Pulsar Helm chart creates Volume Claims with the expectation that a dynamic provisioner creates the underlying Persistent Volumes. 
- -```yaml - -volumes: - persistence: true - # configure the components to use local persistent volume - # the local provisioner should be installed prior to enable local persistent volume - local_storage: false - -``` - -To use local persistent volumes as the persistent storage for Helm release, you can install the [local storage provisioner](#install-local-storage-provisioner) and include the following option in the `helm install` command. - -```bash - ---set volumes.local_storage=true - -``` - -:::note - -Before installing the production instance of Pulsar, ensure to plan the storage settings to avoid extra storage migration work. Because after initial installation, you must edit Kubernetes objects manually if you want to change storage settings. - -::: - -The Pulsar Helm chart is designed for production use. To use the Pulsar Helm chart in a development environment (such as Minikube), you can disable persistence by including this option in your `helm install` command. - -```bash - ---set volumes.persistence=false - -``` - -### Affinity - -By default, `anti-affinity` is enabled to ensure pods of the same component can run on different nodes. - -```yaml - -affinity: - anti_affinity: true - -``` - -To use the Pulsar Helm chart in a development environment (such as Minikube), you can disable `anti-affinity` by including this option in your `helm install` command. - -```bash - ---set affinity.anti_affinity=false - -``` - -### Components - -The Pulsar Helm chart is designed for production usage. It deploys a production-ready Pulsar cluster, including Pulsar core components and monitoring components. - -You can customize the components to be deployed by turning on/off individual components. - -```yaml - -## Components -## -## Control what components of Apache Pulsar to deploy for the cluster -components: - # zookeeper - zookeeper: true - # bookkeeper - bookkeeper: true - # bookkeeper - autorecovery - autorecovery: true - # broker - broker: true - # functions - functions: true - # proxy - proxy: true - # toolset - toolset: true - # pulsar manager - pulsar_manager: true - -## Monitoring Components -## -## Control what components of the monitoring stack to deploy for the cluster -monitoring: - # monitoring - prometheus - prometheus: true - # monitoring - grafana - grafana: true - -``` - -### Docker images - -The Pulsar Helm chart is designed to enable controlled upgrades. So it can configure independent image versions for components. You can customize the images by setting individual component. 
- -```yaml - -## Images -## -## Control what images to use for each component -images: - zookeeper: - repository: apachepulsar/pulsar-all - tag: 2.5.0 - pullPolicy: IfNotPresent - bookie: - repository: apachepulsar/pulsar-all - tag: 2.5.0 - pullPolicy: IfNotPresent - autorecovery: - repository: apachepulsar/pulsar-all - tag: 2.5.0 - pullPolicy: IfNotPresent - broker: - repository: apachepulsar/pulsar-all - tag: 2.5.0 - pullPolicy: IfNotPresent - proxy: - repository: apachepulsar/pulsar-all - tag: 2.5.0 - pullPolicy: IfNotPresent - functions: - repository: apachepulsar/pulsar-all - tag: 2.5.0 - prometheus: - repository: prom/prometheus - tag: v1.6.3 - pullPolicy: IfNotPresent - grafana: - repository: streamnative/apache-pulsar-grafana-dashboard-k8s - tag: 0.0.4 - pullPolicy: IfNotPresent - pulsar_manager: - repository: apachepulsar/pulsar-manager - tag: v0.1.0 - pullPolicy: IfNotPresent - hasCommand: false - -``` - -### TLS - -The Pulsar Helm chart can be configured to enable TLS (Transport Layer Security) to protect all the traffic between components. Before enabling TLS, you have to provision TLS certificates for the required components. - -#### Provision TLS certificates using cert-manager - -To use the `cert-manager` to provision the TLS certificates, you have to install the [cert-manager](#install-cert-manager) before installing the Pulsar Helm chart. After successfully installing the cert-manager, you can set `certs.internal_issuer.enabled` to `true`. Therefore, the Pulsar Helm chart can use the `cert-manager` to generate `selfsigning` TLS certificates for the configured components. - -```yaml - -certs: - internal_issuer: - enabled: false - component: internal-cert-issuer - type: selfsigning - -``` - -You can also customize the generated TLS certificates by configuring the fields as the following. - -```yaml - -tls: - # common settings for generating certs - common: - # 90d - duration: 2160h - # 15d - renewBefore: 360h - organization: - - pulsar - keySize: 4096 - keyAlgorithm: rsa - keyEncoding: pkcs8 - -``` - -#### Enable TLS - -After installing the `cert-manager`, you can set `tls.enabled` to `true` to enable TLS encryption for the entire cluster. - -```yaml - -tls: - enabled: false - -``` - -You can also configure whether to enable TLS encryption for individual component. - -```yaml - -tls: - # settings for generating certs for proxy - proxy: - enabled: false - cert_name: tls-proxy - # settings for generating certs for broker - broker: - enabled: false - cert_name: tls-broker - # settings for generating certs for bookies - bookie: - enabled: false - cert_name: tls-bookie - # settings for generating certs for zookeeper - zookeeper: - enabled: false - cert_name: tls-zookeeper - # settings for generating certs for recovery - autorecovery: - cert_name: tls-recovery - # settings for generating certs for toolset - toolset: - cert_name: tls-toolset - -``` - -### Authentication - -By default, authentication is disabled. You can set `auth.authentication.enabled` to `true` to enable authentication. -Currently, the Pulsar Helm chart only supports JWT authentication provider. You can set `auth.authentication.provider` to `jwt` to use the JWT authentication provider. - -```yaml - -# Enable or disable broker authentication and authorization. -auth: - authentication: - enabled: false - provider: "jwt" - jwt: - # Enable JWT authentication - # If the token is generated by a secret key, set the usingSecretKey as true. - # If the token is generated by a private key, set the usingSecretKey as false. 
      usingSecretKey: false
  superUsers:
    # broker to broker communication
    broker: "broker-admin"
    # proxy to broker communication
    proxy: "proxy-admin"
    # pulsar-admin client to broker/proxy communication
    client: "admin"

```

To enable authentication, you can run [prepare helm release](#prepare-helm-release) to generate token secret keys and tokens for the three super users specified in the `auth.superUsers` field. The generated token keys and super user tokens are uploaded and stored as Kubernetes secrets prefixed with `<pulsar-release-name>-token-`. You can use the following command to find those secrets.

```bash

kubectl get secrets -n <k8s-namespace>

```

### Authorization

By default, authorization is disabled. Authorization can be enabled only when authentication is enabled.

```yaml

auth:
  authorization:
    enabled: false

```

To enable authorization, you can include this option in the `helm install` command.

```bash

--set auth.authorization.enabled=true

```

### CPU and RAM resource requirements

By default, the resource requests and the number of replicas for the Pulsar components in the Pulsar Helm chart are adequate for a small production deployment. If you deploy a non-production instance, you can reduce the defaults to fit into a smaller cluster.

Once you have all of your configuration options collected, you can install dependent charts before installing the Pulsar Helm chart.

## Install dependent charts

### Install local storage provisioner

To use local persistent volumes as the persistent storage, you need to install a storage provisioner for [local persistent volumes](https://kubernetes.io/blog/2019/04/04/kubernetes-1.14-local-persistent-volumes-ga/).

One of the easiest ways to get started is to use the local storage provisioner provided along with the Pulsar Helm chart.

```

helm repo add streamnative https://charts.streamnative.io
helm repo update
helm install pulsar-storage-provisioner streamnative/local-storage-provisioner

```

### Install cert-manager

The Pulsar Helm chart uses [cert-manager](https://github.com/jetstack/cert-manager) to provision and manage TLS certificates automatically. To enable TLS encryption for brokers or proxies, you need to install cert-manager in advance.

For details about how to install cert-manager, follow the [official instructions](https://cert-manager.io/docs/installation/kubernetes/#installing-with-helm).

Alternatively, we provide a bash script [install-cert-manager.sh](https://github.com/apache/pulsar-helm-chart/blob/master/scripts/cert-manager/install-cert-manager.sh) to install a cert-manager release to the namespace `cert-manager`.

```bash

git clone https://github.com/apache/pulsar-helm-chart
cd pulsar-helm-chart
./scripts/cert-manager/install-cert-manager.sh

```

## Prepare Helm release

Once you have installed all the dependent charts and collected all of your configuration options, you can run [prepare_helm_release.sh](https://github.com/apache/pulsar-helm-chart/blob/master/scripts/pulsar/prepare_helm_release.sh) to prepare the Helm release.

```bash

git clone https://github.com/apache/pulsar-helm-chart
cd pulsar-helm-chart
./scripts/pulsar/prepare_helm_release.sh -n <k8s-namespace> -k <pulsar-release-name>

```

The `prepare_helm_release` script creates the following resources:

- A Kubernetes namespace for installing the Pulsar release
- JWT secret keys and tokens for three super users: `broker-admin`, `proxy-admin`, and `admin`. By default, it generates an asymmetric public/private key pair.
You can choose to generate a symmetric secret key by specifying `--symmetric`.
  - The `proxy-admin` role is used for proxies to communicate to brokers.
  - The `broker-admin` role is used for inter-broker communications.
  - The `admin` role is used by the admin tools.

## Deploy Pulsar cluster using Helm

Once you have finished the following three things, you can install a Helm release.

- Collect all of your configuration options.
- Install dependent charts.
- Prepare the Helm release.

In this example, the Helm release is named `pulsar`.

```bash

helm repo add apache https://pulsar.apache.org/charts
helm repo update
helm install pulsar apache/pulsar \
    --timeout 10m \
    --set initialize=true \
    --set [your configuration options]

```

:::note

For the first deployment, add the `--set initialize=true` option to initialize bookie and Pulsar cluster metadata.

:::

You can also use the `--version <installation version>` option if you want to install a specific version of the Pulsar Helm chart.

## Monitor deployment

A list of installed resources is output once the Pulsar cluster is deployed. This may take 5-10 minutes.

The status of the deployment can be checked by running the `helm status pulsar` command, which can also be done while the deployment is taking place if you run the command in another terminal.

## Access Pulsar cluster

The default values will create a `ClusterIP` for the following resources, which you can use to interact with the cluster.

- Proxy: You can use the IP address to produce and consume messages to the installed Pulsar cluster.
- Pulsar Manager: You can access the Pulsar Manager UI at `http://<pulsar-manager-ip>:9527`.
- Grafana Dashboard: You can access the Grafana dashboard at `http://<grafana-dashboard-ip>:3000`.

To find the IP addresses of those components, run the following command:

```bash

kubectl get service -n <k8s-namespace>

```

diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/helm-install.md b/site2/website/versioned_docs/version-2.9.0-deprecated/helm-install.md
deleted file mode 100644
index 9f81f52e0dab18..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/helm-install.md
+++ /dev/null
@@ -1,38 +0,0 @@
---
id: helm-install
title: Install Apache Pulsar using Helm
sidebar_label: "Install"
original_id: helm-install
---

Install Apache Pulsar on Kubernetes with the official Pulsar Helm chart.

## Requirements

To deploy Apache Pulsar on Kubernetes, the following are required.

- kubectl 1.14 or higher, compatible with your cluster ([+/- 1 minor release from your cluster](https://kubernetes.io/docs/tasks/tools/install-kubectl/#before-you-begin))
- Helm v3 (3.0.2 or higher)
- A Kubernetes cluster, version 1.14 or higher

## Environment setup

Before deploying Pulsar, you need to prepare your environment.

### Tools

Install [`helm`](helm-tools.md) and [`kubectl`](helm-tools.md) on your computer.

## Cloud cluster preparation

To create and connect to the Kubernetes cluster, follow the instructions:

- [Google Kubernetes Engine](helm-prepare.md#google-kubernetes-engine)

## Pulsar deployment

Once the environment is set up and configuration is generated, you can now proceed to the [deployment of Pulsar](helm-deploy.md).

## Pulsar upgrade

To upgrade an existing Kubernetes installation, follow the [upgrade documentation](helm-upgrade.md).
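Before proceeding, you can confirm that your locally installed tools meet the version requirements listed above:

```bash

# Both commands print the installed client versions
kubectl version --client
helm version

```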
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/helm-overview.md b/site2/website/versioned_docs/version-2.9.0-deprecated/helm-overview.md
deleted file mode 100644
index 125f595cbe68a3..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/helm-overview.md
+++ /dev/null
@@ -1,103 +0,0 @@
---
id: helm-overview
title: Apache Pulsar Helm Chart
sidebar_label: "Overview"
original_id: helm-overview
---

The [Helm chart](https://github.com/apache/pulsar-helm-chart) helps you install Apache Pulsar in a cloud-native environment.

## Introduction

The Apache Pulsar Helm chart provides one of the most convenient ways to operate Pulsar on Kubernetes. With all the required components, the Helm chart is scalable and thus suitable for large-scale deployments.

The Apache Pulsar Helm chart contains all components to support the features and functions that Pulsar delivers. You can install and configure these components separately.

- Pulsar core components:
  - ZooKeeper
  - Bookies
  - Brokers
  - Function workers
  - Proxies
- Control center:
  - Pulsar Manager
  - Prometheus
  - Grafana

Moreover, the Helm chart supports:

- Security
  - Automatically provisioned TLS certificates, using [Jetstack](https://www.jetstack.io/)'s [cert-manager](https://cert-manager.io/docs/)
    - self-signed
    - [Let's Encrypt](https://letsencrypt.org/)
  - TLS Encryption
    - Proxy
    - Broker
    - Toolset
    - Bookie
    - ZooKeeper
  - Authentication
    - JWT
  - Authorization
- Storage
  - Non-persistent storage
  - Persistent volume
  - Local persistent volumes
- Functions
  - Kubernetes Runtime
  - Process Runtime
  - Thread Runtime
- Operations
  - Independent image versions for all components, enabling controlled upgrades

## Quick start

To run with the Apache Pulsar Helm chart as fast as possible in a **non-production** use case, we provide a [quick start guide](getting-started-helm.md) for Proof of Concept (PoC) deployments.

This guide walks you through deploying the Apache Pulsar Helm chart with default values and features, but it is *not* suitable for deployments in production-ready environments. To deploy the charts in production under sustained load, you can follow the complete [Installation Guide](helm-install.md).

## Troubleshooting

Although we have done our best to make these charts as seamless as possible, issues occasionally arise that are out of our control. We have been collecting tips and tricks for troubleshooting common issues. Please check them first before raising an [issue](https://github.com/apache/pulsar/issues/new/choose), and feel free to add your solutions by creating a [Pull Request](https://github.com/apache/pulsar/compare).

## Installation

The Apache Pulsar Helm chart contains all required dependencies.

If you deploy a PoC for testing, we strongly suggest you follow this [Quick Start Guide](getting-started-helm.md) for your first iteration.

1. [Preparation](helm-prepare.md)
2. [Deployment](helm-deploy.md)

## Upgrading

Once the Apache Pulsar Helm chart is installed, you can use the `helm upgrade` command to configure and update it.

```bash

helm repo add apache https://pulsar.apache.org/charts
helm repo update
helm get values <pulsar-release-name> > pulsar.yaml
helm upgrade <pulsar-release-name> apache/pulsar -f pulsar.yaml

```

For more detailed information, see [Upgrading](helm-upgrade.md).
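Before upgrading, it can also help to check which release and chart version you are currently running, and which chart versions are available (assuming the `apache` repo has been added as above):

```bash

# Show the deployed release, its chart version, and app version
helm list -n <k8s-namespace>

# List the chart versions published in the apache repo
helm search repo apache/pulsar --versions

```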
## Uninstallation

To uninstall the Apache Pulsar Helm chart, run the following command:

```bash

helm delete <pulsar-release-name>

```

For the purposes of continuity, some Kubernetes objects in these charts cannot be removed by the `helm delete` command. It is recommended to *consciously* remove these items, as they affect re-deployment.

* PVCs for stateful data: remove these items.
  - ZooKeeper: This is your metadata.
  - BookKeeper: This is your data.
  - Prometheus: This is your metrics data, which can be safely removed.
* Secrets: if the secrets are generated by the [prepare release script](https://github.com/apache/pulsar-helm-chart/blob/master/scripts/pulsar/prepare_helm_release.sh), they contain secret keys and tokens. You can use the [cleanup release script](https://github.com/apache/pulsar-helm-chart/blob/master/scripts/pulsar/cleanup_helm_release.sh) to remove these secrets and tokens as needed.

diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/helm-prepare.md b/site2/website/versioned_docs/version-2.9.0-deprecated/helm-prepare.md
deleted file mode 100644
index 0e7815e4f4d90b..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/helm-prepare.md
+++ /dev/null
@@ -1,80 +0,0 @@
---
id: helm-prepare
title: Prepare Kubernetes resources
sidebar_label: "Prepare"
original_id: helm-prepare
---

For a fully functional Pulsar cluster, you need a few resources before deploying the Apache Pulsar Helm chart. The following provides instructions to prepare the Kubernetes cluster before deploying the Pulsar Helm chart.

- [Google Kubernetes Engine](#google-kubernetes-engine)
  - [Manual cluster creation](#manual-cluster-creation)
  - [Scripted cluster creation](#scripted-cluster-creation)
  - [Create cluster with local SSDs](#create-cluster-with-local-ssds)

## Google Kubernetes Engine

To make it easier to get started, a script is provided to create the cluster automatically. Alternatively, a cluster can be created manually as well.

### Manual cluster creation

To provision a Kubernetes cluster manually, follow the [GKE instructions](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-cluster).

### Scripted cluster creation

A [bootstrap script](https://github.com/streamnative/charts/tree/master/scripts/pulsar/gke_bootstrap_script.sh) has been created to automate much of the setup process for users on GCP/GKE.

The script can:

1. Create a new GKE cluster.
2. Allow the cluster to modify DNS (Domain Name Server) records.
3. Set up `kubectl` and connect it to the cluster.

Google Cloud SDK is a dependency of this script, so ensure it is [set up correctly](helm-tools.md#connect-to-a-gke-cluster) for the script to work.

The script reads various parameters from environment variables and an argument `up` or `down` for bootstrap and clean-up respectively.

The following table describes all variables.

| **Variable** | **Description** | **Default value** |
| ------------ | --------------- | ----------------- |
| PROJECT | ID of your GCP project | No default value; it must be set. |
| CLUSTER_NAME | Name of the GKE cluster | `pulsar-dev` |
| CONFDIR | Configuration directory to store Kubernetes configuration | ${HOME}/.config/streamnative |
| INT_NETWORK | IP space to use within this cluster | `default` |
| LOCAL_SSD_COUNT | Number of local SSD counts | 4 |
| MACHINE_TYPE | Type of machine to use for nodes | `n1-standard-4` |
| NUM_NODES | Number of nodes to be created in each of the cluster's zones | 4 |
| PREEMPTIBLE | Create nodes using preemptible VM instances in the new cluster. | false |
| REGION | Compute region for the cluster | `us-east1` |
| USE_LOCAL_SSD | Flag to create a cluster with local SSDs | false |
| ZONE | Compute zone for the cluster | `us-east1-b` |
| ZONE_EXTENSION | The extension (`a`, `b`, `c`) of the zone name of the cluster | `b` |
| EXTRA_CREATE_ARGS | Extra arguments passed to the create command | |

Run the script, passing in your desired parameters. It can work with the default parameters except for `PROJECT`, which is required:

```bash

PROJECT=<gcloud project id> scripts/pulsar/gke_bootstrap_script.sh up

```

The script can also be used to clean up the created GKE resources.

```bash

PROJECT=<gcloud project id> scripts/pulsar/gke_bootstrap_script.sh down

```

#### Create cluster with local SSDs

To install the Pulsar Helm chart using local persistent volumes, you need to create a GKE cluster with local SSDs. You can do so by specifying `USE_LOCAL_SSD` to be `true` in the following command to create a Pulsar cluster with local SSDs.

```

PROJECT=<gcloud project id> USE_LOCAL_SSD=true LOCAL_SSD_COUNT=<local-ssd-count> scripts/pulsar/gke_bootstrap_script.sh up

```

diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/helm-tools.md b/site2/website/versioned_docs/version-2.9.0-deprecated/helm-tools.md
deleted file mode 100644
index 6ba89006913b64..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/helm-tools.md
+++ /dev/null
@@ -1,43 +0,0 @@
---
id: helm-tools
title: Required tools for deploying Pulsar Helm Chart
sidebar_label: "Required Tools"
original_id: helm-tools
---

Before deploying Pulsar to your Kubernetes cluster, there are some tools you must have installed locally.

## kubectl

kubectl is the tool that talks to the Kubernetes API. kubectl 1.14 or higher is required, and it needs to be compatible with your cluster ([+/- 1 minor release from your cluster](https://kubernetes.io/docs/tasks/tools/install-kubectl/#before-you-begin)).

To install kubectl locally, follow the [Kubernetes documentation](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl).

The server version of kubectl cannot be obtained until we connect to a cluster.

## Helm

Helm is the package manager for Kubernetes. The Apache Pulsar Helm Chart is tested and supported with Helm v3.

### Get Helm

You can get Helm from the project's [releases page](https://github.com/helm/helm/releases), or follow other options under the official documentation of [installing Helm](https://helm.sh/docs/intro/install/).

### Next steps

Once kubectl and Helm are configured, you can configure your [Kubernetes cluster](helm-prepare.md).

## Additional information

### Templates

Templating in Helm is done through Golang's [text/template](https://golang.org/pkg/text/template/) and [sprig](https://godoc.org/github.com/Masterminds/sprig).
For more information about how all the inner workings behave, check these documents:

- [Functions and Pipelines](https://helm.sh/docs/chart_template_guide/functions_and_pipelines/)
- [Subcharts and Globals](https://helm.sh/docs/chart_template_guide/subcharts_and_globals/)

### Tips and tricks

For additional information on developing with Helm, check the [tips and tricks section](https://helm.sh/docs/howto/charts_tips_and_tricks/) in the Helm repository.
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/helm-upgrade.md b/site2/website/versioned_docs/version-2.9.0-deprecated/helm-upgrade.md
deleted file mode 100644
index 7d671e6bfb3c10..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/helm-upgrade.md
+++ /dev/null
@@ -1,43 +0,0 @@
---
id: helm-upgrade
title: Upgrade Pulsar Helm release
sidebar_label: "Upgrade"
original_id: helm-upgrade
---

Before upgrading your Pulsar installation, you need to check the change log corresponding to the specific release you want to upgrade to and look for any release notes that might pertain to the new Pulsar Helm chart version.

We also recommend that you provide all values using the `helm upgrade --set key=value` syntax or the `-f values.yaml` option instead of using `--reuse-values`, because some of the current values might be deprecated.

:::note

You can retrieve your previous `--set` arguments cleanly with `helm get values <release-name>`. If you direct this into a file (`helm get values <release-name> > pulsar.yaml`), you can safely pass this file through `-f`, namely `helm upgrade <release-name> apache/pulsar -f pulsar.yaml`. This safely replaces the behavior of `--reuse-values`.

:::

## Steps

To upgrade Apache Pulsar to a newer version, follow these steps:

1. Check the change log for the specific version you would like to upgrade to.
2. Go through the [deployment documentation](helm-deploy.md) step by step.
3. Extract your previous `--set` arguments with the following command.

   ```bash

   helm get values <release-name> > pulsar.yaml

   ```

4. Decide on all the values you need to set.
5. Perform the upgrade, with all `--set` arguments extracted in step 3.

   ```bash

   helm upgrade <release-name> apache/pulsar \
       --version <new version> \
       -f pulsar.yaml \
       --set ...

   ```

diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/io-aerospike-sink.md b/site2/website/versioned_docs/version-2.9.0-deprecated/io-aerospike-sink.md
deleted file mode 100644
index 63d7338a3ba91c..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/io-aerospike-sink.md
+++ /dev/null
@@ -1,26 +0,0 @@
---
id: io-aerospike-sink
title: Aerospike sink connector
sidebar_label: "Aerospike sink connector"
original_id: io-aerospike-sink
---

The Aerospike sink connector pulls messages from Pulsar topics to Aerospike clusters.

## Configuration

The configuration of the Aerospike sink connector has the following properties.

### Property

| Name | Type | Required | Default | Description |
|------|------|----------|---------|-------------|
| `seedHosts` | String | true | No default value | The comma-separated list of one or more Aerospike cluster hosts. Each host can be specified as a valid IP address or hostname followed by an optional port number. |
| `keyspace` | String | true | No default value | The Aerospike namespace. |
| `columnName` | String | true | No default value | The Aerospike column name. |
| `userName` | String | false | NULL | The Aerospike username. |
| `password` | String | false | NULL | The Aerospike password. |
| `keySet` | String | false | NULL | The Aerospike set name. |
| `maxConcurrentRequests` | int | false | 100 | The maximum number of concurrent Aerospike transactions that a sink can open. |
| `timeoutMs` | int | false | 100 | This property controls `socketTimeout` and `totalTimeout` for Aerospike transactions. |
| `retries` | int | false | 1 | The maximum number of retries before aborting a write transaction to Aerospike. |

diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/io-canal-source.md b/site2/website/versioned_docs/version-2.9.0-deprecated/io-canal-source.md
deleted file mode 100644
index d1fd43bb0f74e4..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/io-canal-source.md
+++ /dev/null
@@ -1,235 +0,0 @@
---
id: io-canal-source
title: Canal source connector
sidebar_label: "Canal source connector"
original_id: io-canal-source
---

The Canal source connector pulls messages from MySQL to Pulsar topics.

## Configuration

The configuration of the Canal source connector has the following properties.

### Property

| Name | Required | Default | Description |
|------|----------|---------|-------------|
| `username` | true | None | Canal server account (not MySQL). |
| `password` | true | None | Canal server password (not MySQL). |
| `destination` | true | None | Source destination that the Canal source connector connects to. |
| `singleHostname` | false | None | Canal server address. |
| `singlePort` | false | None | Canal server port. |
| `cluster` | true | false | Whether to enable cluster mode based on the Canal server configuration. If set to `true` (**cluster** mode), the connector talks to `zkServers` to figure out the actual database host. If set to `false` (**standalone** mode), it connects to the database specified by `singleHostname` and `singlePort`. |
| `zkServers` | true | None | Address and port of the Zookeeper that the Canal source connector talks to in order to figure out the actual database host. |
| `batchSize` | false | 1000 | Batch size to fetch from Canal. |

### Example

Before using the Canal connector, you can create a configuration file through one of the following methods.

* JSON

  ```json

  {
      "zkServers": "127.0.0.1:2181",
      "batchSize": "5120",
      "destination": "example",
      "username": "",
      "password": "",
      "cluster": false,
      "singleHostname": "127.0.0.1",
      "singlePort": "11111"
  }

  ```

* YAML

  You can create a YAML file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/canal/src/main/resources/canal-mysql-source-config.yaml) below to your YAML file.

  ```yaml

  configs:
      zkServers: "127.0.0.1:2181"
      batchSize: 5120
      destination: "example"
      username: ""
      password: ""
      cluster: false
      singleHostname: "127.0.0.1"
      singlePort: 11111

  ```

## Usage

Here is an example of storing MySQL data using the configuration file as above.

1. Start a MySQL server.

   ```bash

   $ docker pull mysql:5.7
   $ docker run -d -it --rm --name pulsar-mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=canal -e MYSQL_USER=mysqluser -e MYSQL_PASSWORD=mysqlpw mysql:5.7

   ```

2. Create a configuration file `mysqld.cnf`.

   ```bash

   [mysqld]
   pid-file = /var/run/mysqld/mysqld.pid
   socket = /var/run/mysqld/mysqld.sock
   datadir = /var/lib/mysql
   #log-error = /var/log/mysql/error.log
   # By default we only accept connections from localhost
   #bind-address = 127.0.0.1
   # Disabling symbolic-links is recommended to prevent assorted security risks
   symbolic-links=0
   log-bin=mysql-bin
   binlog-format=ROW
   server_id=1

   ```

3. Copy the configuration file `mysqld.cnf` to the MySQL server.

   ```bash

   $ docker cp mysqld.cnf pulsar-mysql:/etc/mysql/mysql.conf.d/

   ```

4. Restart the MySQL server.

   ```bash

   $ docker restart pulsar-mysql

   ```

5. Create a test database in the MySQL server.

   ```bash

   $ docker exec -it pulsar-mysql /bin/bash
   $ mysql -h 127.0.0.1 -uroot -pcanal -e 'create database test;'

   ```

6. Start a Canal server and connect it to the MySQL server.

   ```

   $ docker pull canal/canal-server:v1.1.2
   $ docker run -d -it --link pulsar-mysql -e canal.auto.scan=false -e canal.destinations=test -e canal.instance.master.address=pulsar-mysql:3306 -e canal.instance.dbUsername=root -e canal.instance.dbPassword=canal -e canal.instance.connectionCharset=UTF-8 -e canal.instance.tsdb.enable=true -e canal.instance.gtidon=false --name=pulsar-canal-server -p 8000:8000 -p 2222:2222 -p 11111:11111 -p 11112:11112 -m 4096m canal/canal-server:v1.1.2

   ```

7. Start Pulsar standalone.

   ```bash

   $ docker pull apachepulsar/pulsar:2.3.0
   $ docker run -d -it --link pulsar-canal-server -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-standalone apachepulsar/pulsar:2.3.0 bin/pulsar standalone

   ```

8. Modify the configuration file `canal-mysql-source-config.yaml`.

   ```yaml

   configs:
       zkServers: ""
       batchSize: "5120"
       destination: "test"
       username: ""
       password: ""
       cluster: false
       singleHostname: "pulsar-canal-server"
       singlePort: "11111"

   ```

9. Create a consumer file `pulsar-client.py`.
- - ```python - - import pulsar - - client = pulsar.Client('pulsar://localhost:6650') - consumer = client.subscribe('my-topic', - subscription_name='my-sub') - - while True: - msg = consumer.receive() - print("Received message: '%s'" % msg.data()) - consumer.acknowledge(msg) - - client.close() - - ``` - -10. Copy the configuration file `canal-mysql-source-config.yaml` and the consumer file `pulsar-client.py` to Pulsar server. - - ```bash - - $ docker cp canal-mysql-source-config.yaml pulsar-standalone:/pulsar/conf/ - $ docker cp pulsar-client.py pulsar-standalone:/pulsar/ - - ``` - -11. Download a Canal connector and start it. - - ```bash - - $ docker exec -it pulsar-standalone /bin/bash - $ wget https://archive.apache.org/dist/pulsar/pulsar-2.3.0/connectors/pulsar-io-canal-2.3.0.nar -P connectors - $ ./bin/pulsar-admin source localrun \ - --archive ./connectors/pulsar-io-canal-2.3.0.nar \ - --classname org.apache.pulsar.io.canal.CanalStringSource \ - --tenant public \ - --namespace default \ - --name canal \ - --destination-topic-name my-topic \ - --source-config-file /pulsar/conf/canal-mysql-source-config.yaml \ - --parallelism 1 - - ``` - -12. Consume data from MySQL. - - ```bash - - $ docker exec -it pulsar-standalone /bin/bash - $ python pulsar-client.py - - ``` - -13. Open another window to log in MySQL server. - - ```bash - - $ docker exec -it pulsar-mysql /bin/bash - $ mysql -h 127.0.0.1 -uroot -pcanal - - ``` - -14. Create a table, and insert, delete, and update data in MySQL server. - - ```bash - - mysql> use test; - mysql> show tables; - mysql> CREATE TABLE IF NOT EXISTS `test_table`(`test_id` INT UNSIGNED AUTO_INCREMENT,`test_title` VARCHAR(100) NOT NULL, - `test_author` VARCHAR(40) NOT NULL, - `test_date` DATE,PRIMARY KEY ( `test_id` ))ENGINE=InnoDB DEFAULT CHARSET=utf8; - mysql> INSERT INTO test_table (test_title, test_author, test_date) VALUES("a", "b", NOW()); - mysql> UPDATE test_table SET test_title='c' WHERE test_title='a'; - mysql> DELETE FROM test_table WHERE test_title='c'; - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/io-cassandra-sink.md b/site2/website/versioned_docs/version-2.9.0-deprecated/io-cassandra-sink.md deleted file mode 100644 index b27a754f49e182..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/io-cassandra-sink.md +++ /dev/null @@ -1,57 +0,0 @@ ---- -id: io-cassandra-sink -title: Cassandra sink connector -sidebar_label: "Cassandra sink connector" -original_id: io-cassandra-sink ---- - -The Cassandra sink connector pulls messages from Pulsar topics to Cassandra clusters. - -## Configuration - -The configuration of the Cassandra sink connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `roots` | String|true | " " (empty string) | A comma-separated list of Cassandra hosts to connect to.| -| `keyspace` | String|true| " " (empty string)| The key space used for writing pulsar messages.
**Note: `keyspace` should be created prior to a Cassandra sink.** |
| `keyname` | String | true | " " (empty string) | The key name of the Cassandra column family. The column is used for storing Pulsar message keys. If a Pulsar message doesn't have any key associated, the message value is used as the key. |
| `columnFamily` | String | true | " " (empty string) | The Cassandra column family name. **Note: `columnFamily` should be created prior to a Cassandra sink.** |
| `columnName` | String | true | " " (empty string) | The column name of the Cassandra column family. The column is used for storing Pulsar message values. |

### Example

Before using the Cassandra sink connector, you need to create a configuration file through one of the following methods.

* JSON

  ```json

  {
      "roots": "localhost:9042",
      "keyspace": "pulsar_test_keyspace",
      "columnFamily": "pulsar_test_table",
      "keyname": "key",
      "columnName": "col"
  }

  ```

* YAML

  ```

  configs:
      roots: "localhost:9042"
      keyspace: "pulsar_test_keyspace"
      columnFamily: "pulsar_test_table"
      keyname: "key"
      columnName: "col"

  ```

## Usage

For more information about **how to connect Pulsar with Cassandra**, see [here](io-quickstart.md#connect-pulsar-to-apache-cassandra).

diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/io-cdc-debezium.md b/site2/website/versioned_docs/version-2.9.0-deprecated/io-cdc-debezium.md
deleted file mode 100644
index 293ccf2b35e8aa..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/io-cdc-debezium.md
+++ /dev/null
@@ -1,543 +0,0 @@
---
id: io-cdc-debezium
title: Debezium source connector
sidebar_label: "Debezium source connector"
original_id: io-cdc-debezium
---

The Debezium source connector pulls messages from MySQL or PostgreSQL and persists the messages to Pulsar topics.

## Configuration

The configuration of the Debezium source connector has the following properties.

| Name | Required | Default | Description |
|------|----------|---------|-------------|
| `task.class` | true | null | A source task class that is implemented in Debezium. |
| `database.hostname` | true | null | The address of a database server. |
| `database.port` | true | null | The port number of a database server.|
| `database.user` | true | null | The name of a database user that has the required privileges. |
| `database.password` | true | null | The password for a database user that has the required privileges. |
| `database.server.id` | true | null | The connector’s identifier that must be unique within a database cluster and similar to the database’s server-id configuration property. |
| `database.server.name` | true | null | The logical name of a database server/cluster, which forms a namespace and is used in all the names of Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the Avro Connector is used. |
| `database.whitelist` | false | null | A list of all databases hosted by this server which is monitored by the connector.
    This is optional, and there are other properties for listing databases and tables to include or exclude from monitoring. | -| `key.converter` | true | null | The converter provided by Kafka Connect to convert record key. | -| `value.converter` | true | null | The converter provided by Kafka Connect to convert record value. | -| `database.history` | true | null | The name of the database history class. | -| `database.history.pulsar.topic` | true | null | The name of the database history topic where the connector writes and recovers DDL statements.
    **Note: this topic is for internal use only and should not be used by consumers.** | -| `database.history.pulsar.service.url` | true | null | Pulsar cluster service URL for history topic. | -| `pulsar.service.url` | true | null | Pulsar cluster service URL for the offset topic used in Debezium. You can use the `bin/pulsar-admin --admin-url http://pulsar:8080 sources localrun --source-config-file configs/pg-pulsar-config.yaml` command to point to the target Pulsar cluster.| -| `offset.storage.topic` | true | null | Record the last committed offsets that the connector successfully completes. | -| `mongodb.hosts` | true | null | The comma-separated list of hostname and port pairs (in the form 'host' or 'host:port') of the MongoDB servers in the replica set. The list contains a single hostname and a port pair. If mongodb.members.auto.discover is set to false, the host and port pair are prefixed with the replica set name (e.g., rs0/localhost:27017). | -| `mongodb.name` | true | null | A unique name that identifies the connector and/or MongoDB replica set or shared cluster that this connector monitors. Each server should be monitored by at most one Debezium connector, since this server name prefixes all persisted Kafka topics emanating from the MongoDB replica set or cluster. | -| `mongodb.user` | true | null | Name of the database user to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. | -| `mongodb.password` | true | null | Password to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. | -| `mongodb.task.id` | true | null | The taskId of the MongoDB connector that attempts to use a separate task for each replica set. | - - - -## Example of MySQL - -You need to create a configuration file before using the Pulsar Debezium connector. - -### Configuration - -You can use one of the following methods to create a configuration file. - -* JSON - - ```json - - { - "database.hostname": "localhost", - "database.port": "3306", - "database.user": "debezium", - "database.password": "dbz", - "database.server.id": "184054", - "database.server.name": "dbserver1", - "database.whitelist": "inventory", - "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory", - "database.history.pulsar.topic": "history-topic", - "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650", - "key.converter": "org.apache.kafka.connect.json.JsonConverter", - "value.converter": "org.apache.kafka.connect.json.JsonConverter", - "pulsar.service.url": "pulsar://127.0.0.1:6650", - "offset.storage.topic": "offset-topic" - } - - ``` - -* YAML - - You can create a `debezium-mysql-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/resources/debezium-mysql-source-config.yaml) below to the `debezium-mysql-source-config.yaml` file. 
- - ```yaml - - tenant: "public" - namespace: "default" - name: "debezium-mysql-source" - topicName: "debezium-mysql-topic" - archive: "connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar" - parallelism: 1 - - configs: - - ## config for mysql, docker image: debezium/example-mysql:0.8 - database.hostname: "localhost" - database.port: "3306" - database.user: "debezium" - database.password: "dbz" - database.server.id: "184054" - database.server.name: "dbserver1" - database.whitelist: "inventory" - database.history: "org.apache.pulsar.io.debezium.PulsarDatabaseHistory" - database.history.pulsar.topic: "history-topic" - database.history.pulsar.service.url: "pulsar://127.0.0.1:6650" - - ## KEY_CONVERTER_CLASS_CONFIG, VALUE_CONVERTER_CLASS_CONFIG - key.converter: "org.apache.kafka.connect.json.JsonConverter" - value.converter: "org.apache.kafka.connect.json.JsonConverter" - - ## PULSAR_SERVICE_URL_CONFIG - pulsar.service.url: "pulsar://127.0.0.1:6650" - - ## OFFSET_STORAGE_TOPIC_CONFIG - offset.storage.topic: "offset-topic" - - ``` - -### Usage - -This example shows how to change the data of a MySQL table using the Pulsar Debezium connector. - -1. Start a MySQL server with a database from which Debezium can capture changes. - - ```bash - - $ docker run -it --rm \ - --name mysql \ - -p 3306:3306 \ - -e MYSQL_ROOT_PASSWORD=debezium \ - -e MYSQL_USER=mysqluser \ - -e MYSQL_PASSWORD=mysqlpw debezium/example-mysql:0.8 - - ``` - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - -3. Start the Pulsar Debezium connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - Make sure the nar file is available at `connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar`. - - ```bash - - $ bin/pulsar-admin source localrun \ - --archive connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar \ - --name debezium-mysql-source --destination-topic-name debezium-mysql-topic \ - --tenant public \ - --namespace default \ - --source-config '{"database.hostname": "localhost","database.port": "3306","database.user": "debezium","database.password": "dbz","database.server.id": "184054","database.server.name": "dbserver1","database.whitelist": "inventory","database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory","database.history.pulsar.topic": "history-topic","database.history.pulsar.service.url": "pulsar://127.0.0.1:6650","key.converter": "org.apache.kafka.connect.json.JsonConverter","value.converter": "org.apache.kafka.connect.json.JsonConverter","pulsar.service.url": "pulsar://127.0.0.1:6650","offset.storage.topic": "offset-topic"}' - - ``` - - * Use the **YAML** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin source localrun \ - --source-config-file debezium-mysql-source-config.yaml - - ``` - -4. Subscribe the topic _sub-products_ for the table _inventory.products_. - - ```bash - - $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0 - - ``` - -5. Start a MySQL client in docker. - - ```bash - - $ docker run -it --rm \ - --name mysqlterm \ - --link mysql \ - --rm mysql:5.7 sh \ - -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"' - - ``` - -6. A MySQL client pops out. - - Use the following commands to change the data of the table _products_. 
- - ``` - - mysql> use inventory; - mysql> show tables; - mysql> SELECT * FROM products; - mysql> UPDATE products SET name='1111111111' WHERE id=101; - mysql> UPDATE products SET name='1111111111' WHERE id=107; - - ``` - - In the terminal window of subscribing topic, you can find the data changes have been kept in the _sub-products_ topic. - -## Example of PostgreSQL - -You need to create a configuration file before using the Pulsar Debezium connector. - -### Configuration - -You can use one of the following methods to create a configuration file. - -* JSON - - ```json - - { - "database.hostname": "localhost", - "database.port": "5432", - "database.user": "postgres", - "database.password": "postgres", - "database.dbname": "postgres", - "database.server.name": "dbserver1", - "schema.whitelist": "inventory", - "pulsar.service.url": "pulsar://127.0.0.1:6650" - } - - ``` - -* YAML - - You can create a `debezium-postgres-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/resources/debezium-postgres-source-config.yaml) below to the `debezium-postgres-source-config.yaml` file. - - ```yaml - - tenant: "public" - namespace: "default" - name: "debezium-postgres-source" - topicName: "debezium-postgres-topic" - archive: "connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar" - parallelism: 1 - - configs: - - ## config for pg, docker image: debezium/example-postgress:0.8 - database.hostname: "localhost" - database.port: "5432" - database.user: "postgres" - database.password: "postgres" - database.dbname: "postgres" - database.server.name: "dbserver1" - schema.whitelist: "inventory" - - ## PULSAR_SERVICE_URL_CONFIG - pulsar.service.url: "pulsar://127.0.0.1:6650" - - ``` - -### Usage - -This example shows how to change the data of a PostgreSQL table using the Pulsar Debezium connector. - - -1. Start a PostgreSQL server with a database from which Debezium can capture changes. - - ```bash - - $ docker pull debezium/example-postgres:0.8 - $ docker run -d -it --rm --name pulsar-postgresql -p 5432:5432 debezium/example-postgres:0.8 - - ``` - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - -3. Start the Pulsar Debezium connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - Make sure the nar file is available at `connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar`. - - ```bash - - $ bin/pulsar-admin source localrun \ - --archive connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar \ - --name debezium-postgres-source \ - --destination-topic-name debezium-postgres-topic \ - --tenant public \ - --namespace default \ - --source-config '{"database.hostname": "localhost","database.port": "5432","database.user": "postgres","database.password": "postgres","database.dbname": "postgres","database.server.name": "dbserver1","schema.whitelist": "inventory","pulsar.service.url": "pulsar://127.0.0.1:6650"}' - - ``` - - * Use the **YAML** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin source localrun \ - --source-config-file debezium-postgres-source-config.yaml - - ``` - -4. Subscribe the topic _sub-products_ for the _inventory.products_ table. - - ``` - - $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0 - - ``` - -5. Start a PostgreSQL client in docker. 
- - ```bash - - $ docker exec -it pulsar-postgresql /bin/bash - - ``` - -6. A PostgreSQL client pops out. - - Use the following commands to change the data of the table _products_. - - ``` - - psql -U postgres postgres - postgres=# \c postgres; - You are now connected to database "postgres" as user "postgres". - postgres=# SET search_path TO inventory; - SET - postgres=# select * from products; - id | name | description | weight - -----+--------------------+---------------------------------------------------------+-------- - 102 | car battery | 12V car battery | 8.1 - 103 | 12-pack drill bits | 12-pack of drill bits with sizes ranging from #40 to #3 | 0.8 - 104 | hammer | 12oz carpenter's hammer | 0.75 - 105 | hammer | 14oz carpenter's hammer | 0.875 - 106 | hammer | 16oz carpenter's hammer | 1 - 107 | rocks | box of assorted rocks | 5.3 - 108 | jacket | water resistent black wind breaker | 0.1 - 109 | spare tire | 24 inch spare tire | 22.2 - 101 | 1111111111 | Small 2-wheel scooter | 3.14 - (9 rows) - - postgres=# UPDATE products SET name='1111111111' WHERE id=107; - UPDATE 1 - - ``` - - In the terminal window of subscribing topic, you can receive the following messages. - - ```bash - - ----- got message ----- - {"schema":{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":107}}�{"schema":{"type":"struct","fields":[{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":false,"field":"name"},{"type":"string","optional":true,"field":"description"},{"type":"double","optional":true,"field":"weight"}],"optional":true,"name":"dbserver1.inventory.products.Value","field":"before"},{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":false,"field":"name"},{"type":"string","optional":true,"field":"description"},{"type":"double","optional":true,"field":"weight"}],"optional":true,"name":"dbserver1.inventory.products.Value","field":"after"},{"type":"struct","fields":[{"type":"string","optional":true,"field":"version"},{"type":"string","optional":true,"field":"connector"},{"type":"string","optional":false,"field":"name"},{"type":"string","optional":false,"field":"db"},{"type":"int64","optional":true,"field":"ts_usec"},{"type":"int64","optional":true,"field":"txId"},{"type":"int64","optional":true,"field":"lsn"},{"type":"string","optional":true,"field":"schema"},{"type":"string","optional":true,"field":"table"},{"type":"boolean","optional":true,"default":false,"field":"snapshot"},{"type":"boolean","optional":true,"field":"last_snapshot_record"}],"optional":false,"name":"io.debezium.connector.postgresql.Source","field":"source"},{"type":"string","optional":false,"field":"op"},{"type":"int64","optional":true,"field":"ts_ms"}],"optional":false,"name":"dbserver1.inventory.products.Envelope"},"payload":{"before":{"id":107,"name":"rocks","description":"box of assorted rocks","weight":5.3},"after":{"id":107,"name":"1111111111","description":"box of assorted rocks","weight":5.3},"source":{"version":"0.9.2.Final","connector":"postgresql","name":"dbserver1","db":"postgres","ts_usec":1559208957661080,"txId":577,"lsn":23862872,"schema":"inventory","table":"products","snapshot":false,"last_snapshot_record":null},"op":"u","ts_ms":1559208957692}} - - ``` - -## Example of MongoDB - -You need to create a configuration file before using the Pulsar Debezium connector. 
- -* JSON - - ```json - - { - "mongodb.hosts": "rs0/mongodb:27017", - "mongodb.name": "dbserver1", - "mongodb.user": "debezium", - "mongodb.password": "dbz", - "mongodb.task.id": "1", - "database.whitelist": "inventory", - "pulsar.service.url": "pulsar://127.0.0.1:6650" - } - - ``` - -* YAML - - You can create a `debezium-mongodb-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mongodb/src/main/resources/debezium-mongodb-source-config.yaml) below to the `debezium-mongodb-source-config.yaml` file. - - ```yaml - - tenant: "public" - namespace: "default" - name: "debezium-mongodb-source" - topicName: "debezium-mongodb-topic" - archive: "connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar" - parallelism: 1 - - configs: - - ## config for pg, docker image: debezium/example-postgress:0.10 - mongodb.hosts: "rs0/mongodb:27017", - mongodb.name: "dbserver1", - mongodb.user: "debezium", - mongodb.password: "dbz", - mongodb.task.id: "1", - database.whitelist: "inventory", - - ## PULSAR_SERVICE_URL_CONFIG - pulsar.service.url: "pulsar://127.0.0.1:6650" - - ``` - -### Usage - -This example shows how to change the data of a MongoDB table using the Pulsar Debezium connector. - - -1. Start a MongoDB server with a database from which Debezium can capture changes. - - ```bash - - $ docker pull debezium/example-mongodb:0.10 - $ docker run -d -it --rm --name pulsar-mongodb -e MONGODB_USER=mongodb -e MONGODB_PASSWORD=mongodb -p 27017:27017 debezium/example-mongodb:0.10 - - ``` - - Use the following commands to initialize the data. - - ``` bash - - ./usr/local/bin/init-inventory.sh - - ``` - - If the local host cannot access the container network, you can update the file ```/etc/hosts``` and add a rule ```127.0.0.1 6 f114527a95f```. f114527a95f is container id, you can try to get by ```docker ps -a``` - - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - -3. Start the Pulsar Debezium connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - Make sure the nar file is available at `connectors/pulsar-io-mongodb-@pulsar:version@.nar`. - - ```bash - - $ bin/pulsar-admin source localrun \ - --archive connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar \ - --name debezium-mongodb-source \ - --destination-topic-name debezium-mongodb-topic \ - --tenant public \ - --namespace default \ - --source-config '{"mongodb.hosts": "rs0/mongodb:27017","mongodb.name": "dbserver1","mongodb.user": "debezium","mongodb.password": "dbz","mongodb.task.id": "1","database.whitelist": "inventory","pulsar.service.url": "pulsar://127.0.0.1:6650"}' - - ``` - - * Use the **YAML** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin source localrun \ - --source-config-file debezium-mongodb-source-config.yaml - - ``` - -4. Subscribe the topic _sub-products_ for the _inventory.products_ table. - - ``` - - $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0 - - ``` - -5. Start a MongoDB client in docker. - - ```bash - - $ docker exec -it pulsar-mongodb /bin/bash - - ``` - -6. A MongoDB client pops out. - - ```bash - - mongo -u debezium -p dbz --authenticationDatabase admin localhost:27017/inventory - db.products.update({"_id":NumberLong(104)},{$set:{weight:1.25}}) - - ``` - - In the terminal window of subscribing topic, you can receive the following messages. 
- - ```bash - - ----- got message ----- - {"schema":{"type":"struct","fields":[{"type":"string","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":"104"}}, value = {"schema":{"type":"struct","fields":[{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"after"},{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"patch"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"version"},{"type":"string","optional":false,"field":"connector"},{"type":"string","optional":false,"field":"name"},{"type":"int64","optional":false,"field":"ts_ms"},{"type":"string","optional":true,"name":"io.debezium.data.Enum","version":1,"parameters":{"allowed":"true,last,false"},"default":"false","field":"snapshot"},{"type":"string","optional":false,"field":"db"},{"type":"string","optional":false,"field":"rs"},{"type":"string","optional":false,"field":"collection"},{"type":"int32","optional":false,"field":"ord"},{"type":"int64","optional":true,"field":"h"}],"optional":false,"name":"io.debezium.connector.mongo.Source","field":"source"},{"type":"string","optional":true,"field":"op"},{"type":"int64","optional":true,"field":"ts_ms"}],"optional":false,"name":"dbserver1.inventory.products.Envelope"},"payload":{"after":"{\"_id\": {\"$numberLong\": \"104\"},\"name\": \"hammer\",\"description\": \"12oz carpenter's hammer\",\"weight\": 1.25,\"quantity\": 4}","patch":null,"source":{"version":"0.10.0.Final","connector":"mongodb","name":"dbserver1","ts_ms":1573541905000,"snapshot":"true","db":"inventory","rs":"rs0","collection":"products","ord":1,"h":4983083486544392763},"op":"r","ts_ms":1573541909761}}. - - ``` - -## FAQ - -### Debezium postgres connector will hang when create snap - -```$xslt - -#18 prio=5 os_prio=31 tid=0x00007fd83096f800 nid=0xa403 waiting on condition [0x000070000f534000] - java.lang.Thread.State: WAITING (parking) - at sun.misc.Unsafe.park(Native Method) - - parking to wait for <0x00000007ab025a58> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject) - at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) - at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) - at java.util.concurrent.LinkedBlockingDeque.putLast(LinkedBlockingDeque.java:396) - at java.util.concurrent.LinkedBlockingDeque.put(LinkedBlockingDeque.java:649) - at io.debezium.connector.base.ChangeEventQueue.enqueue(ChangeEventQueue.java:132) - at io.debezium.connector.postgresql.PostgresConnectorTask$Lambda$203/385424085.accept(Unknown Source) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.sendCurrentRecord(RecordsSnapshotProducer.java:402) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.readTable(RecordsSnapshotProducer.java:321) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$takeSnapshot$6(RecordsSnapshotProducer.java:226) - at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$240/1347039967.accept(Unknown Source) - at io.debezium.jdbc.JdbcConnection.queryWithBlockingConsumer(JdbcConnection.java:535) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.takeSnapshot(RecordsSnapshotProducer.java:224) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$start$0(RecordsSnapshotProducer.java:87) - at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$206/589332928.run(Unknown Source) - at 
java.util.concurrent.CompletableFuture.uniRun(CompletableFuture.java:705) - at java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:717) - at java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2010) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.start(RecordsSnapshotProducer.java:87) - at io.debezium.connector.postgresql.PostgresConnectorTask.start(PostgresConnectorTask.java:126) - at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:47) - at org.apache.pulsar.io.kafka.connect.KafkaConnectSource.open(KafkaConnectSource.java:127) - at org.apache.pulsar.io.debezium.DebeziumSource.open(DebeziumSource.java:100) - at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupInput(JavaInstanceRunnable.java:690) - at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupJavaInstance(JavaInstanceRunnable.java:200) - at org.apache.pulsar.functions.instance.JavaInstanceRunnable.run(JavaInstanceRunnable.java:230) - at java.lang.Thread.run(Thread.java:748) - -``` - -If you encounter the above problems in synchronizing data, please refer to [this](https://github.com/apache/pulsar/issues/4075) and add the following configuration to the configuration file: - -```$xslt - -max.queue.size= - -``` - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/io-cdc.md b/site2/website/versioned_docs/version-2.9.0-deprecated/io-cdc.md deleted file mode 100644 index e6e662884826de..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/io-cdc.md +++ /dev/null @@ -1,26 +0,0 @@ ---- -id: io-cdc -title: CDC connector -sidebar_label: "CDC connector" -original_id: io-cdc ---- - -CDC source connectors capture log changes of databases (such as MySQL, MongoDB, and PostgreSQL) into Pulsar. - -> CDC source connectors are built on top of [Canal](https://github.com/alibaba/canal) and [Debezium](https://debezium.io/) and store all data into Pulsar cluster in a persistent, replicated, and partitioned way. - -Currently, Pulsar has the following CDC connectors. - -Name|Java Class -|---|--- -[Canal source connector](io-canal-source.md)|[org.apache.pulsar.io.canal.CanalStringSource.java](https://github.com/apache/pulsar/blob/master/pulsar-io/canal/src/main/java/org/apache/pulsar/io/canal/CanalStringSource.java) -[Debezium source connector](io-cdc-debezium.md)|
[org.apache.pulsar.io.debezium.DebeziumSource.java](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/core/src/main/java/org/apache/pulsar/io/debezium/DebeziumSource.java)
[org.apache.pulsar.io.debezium.mysql.DebeziumMysqlSource.java](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/java/org/apache/pulsar/io/debezium/mysql/DebeziumMysqlSource.java)
[org.apache.pulsar.io.debezium.postgres.DebeziumPostgresSource.java](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/java/org/apache/pulsar/io/debezium/postgres/DebeziumPostgresSource.java)
    It also supports a URL path (http/https/file; the file protocol assumes that the file already exists on the worker host) from which the worker can download the package.
    The available values are `ATLEAST_ONCE`, `ATMOST_ONCE`, and `EFFECTIVELY_ONCE`.
    Either a built-in schema (for example, AVRO or JSON) or a custom schema class name used to encode messages emitted from the source.
    It also supports a URL path (http/https/file; the file protocol assumes that the file already exists on the worker host) from which the worker can download the package.
    The available values are `ATLEAST_ONCE`, `ATMOST_ONCE`, and `EFFECTIVELY_ONCE`.
    Either a built-in schema (for example, AVRO or JSON) or a custom schema class name used to encode messages emitted from the source.
    **Default value: false.** - - -### `delete` - -Delete a Pulsar IO source connector. - -#### Usage - -```bash - -$ pulsar-admin sources delete options - -``` - -#### Option - -|Flag|Description| -|---|---| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - -### `get` - -Get the information about a Pulsar IO source connector. - -#### Usage - -```bash - -$ pulsar-admin sources get options - -``` - -#### Options -|Flag|Description| -|---|---| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - - -### `status` - -Check the current status of a Pulsar Source. - -#### Usage - -```bash - -$ pulsar-admin sources status options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The source ID.
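
For example, changing the parallelism of an already-submitted source could look like the following sketch (the tenant, namespace, and source name are illustrative placeholders):

```bash

$ pulsar-admin sources update \
  --tenant public \
  --namespace default \
  --name debezium-mysql-source \
  --parallelism 2

```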
    If `instance-id` is not provided, Pulsar gets the status of all instances.|
    If `instance-id` is not provided, Pulsar stops all instances.| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - -### `start` - -Start a source instance. - -#### Usage - -```bash - -$ pulsar-admin sources start options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The source instanceID.
    If `instance-id` is not provided, Pulsar starts all instances.| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - - -### `restart` - -Restart a source instance. - -#### Usage - -```bash - -$ pulsar-admin sources restart options - -``` - -#### Options -|Flag|Description| -|---|---| -|`--instance-id`|The source instanceID.
    If `instance-id` is not provided, Pulsar restarts all instances.|
    It also supports a URL path (http/https/file; the file protocol assumes that the file already exists on the worker host) from which the worker can download the package.
    **Default value: false**. -|`--name`|The source’s name.| -|`--namespace`|The source’s namespace.| -|`--parallelism`|The source’s parallelism factor, that is, the number of source instances to run).| -|`--processing-guarantees` | The processing guarantees (also named as delivery semantics) applied to the source. A source connector receives messages from external system and writes messages to a Pulsar topic. The `--processing-guarantees` is used to ensure the processing guarantees for writing messages to the Pulsar topic.
    The available values are `ATLEAST_ONCE`, `ATMOST_ONCE`, and `EFFECTIVELY_ONCE`.
    Either a built-in schema (for example, AVRO or JSON) or a custom schema class name used to encode messages emitted from the source.
    **Default value: false**. -|`--tls-trust-cert-path`|The tls trust cert file path. -|`--use-tls`|Use tls connection.
    **Default value: false**. -|`--producer-config`| The custom producer configuration (as a JSON string). - -### `available-sources` - -Get the list of Pulsar IO connector sources supported by Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sources available-sources - -``` - -### `reload` - -Reload the available built-in connectors. - -#### Usage - -```bash - -$ pulsar-admin sources reload - -``` - -## `sinks` - -An interface for managing Pulsar IO sinks (egress data from Pulsar). - -```bash - -$ pulsar-admin sinks subcommands - -``` - -Subcommands are: - -* `create` - -* `update` - -* `delete` - -* `get` - -* `status` - -* `list` - -* `stop` - -* `start` - -* `restart` - -* `localrun` - -* `available-sinks` - -* `reload` - - -### `create` - -Submit a Pulsar IO sink connector to run in a Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sinks create options - -``` - -#### Options - -|Flag|Description| -|----|---| -| `-a`, `--archive` | The path to the archive file for the sink.
    It also supports a URL path (http/https/file; the file protocol assumes that the file already exists on the worker host) from which the worker can download the package.
    The available values are `ATLEAST_ONCE`, `ATMOST_ONCE`, and `EFFECTIVELY_ONCE`.
    `--inputs` and `--topics-pattern` are mutually exclusive.
    Add the SerDe class name for a pattern in `--custom-serde-inputs` (supported for Java functions only).
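
For example, submitting a sink packaged as a NAR archive with a YAML config file could look like the following sketch (the archive path, names, input topic, and config file are illustrative placeholders):

```bash

$ pulsar-admin sinks create \
  --archive connectors/pulsar-io-cassandra-@pulsar:version@.nar \
  --tenant public \
  --namespace default \
  --name cassandra-test-sink \
  --inputs test-topic \
  --sink-config-file cassandra-sink.yml

```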
    It also supports a URL path (http/https/file; the file protocol assumes that the file already exists on the worker host) from which the worker can download the package.
    The available values are `ATLEAST_ONCE`, `ATMOST_ONCE`, and `EFFECTIVELY_ONCE`.
    `--inputs` and `--topics-pattern` are mutually exclusive.
    Add the SerDe class name for a pattern in `--custom-serde-inputs` (supported for Java functions only).
    **Default value: false.** - -### `delete` - -Delete a Pulsar IO sink connector. - -#### Usage - -```bash - -$ pulsar-admin sinks delete options - -``` - -#### Option - -|Flag|Description| -|---|---| -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - -### `get` - -Get the information about a Pulsar IO sink connector. - -#### Usage - -```bash - -$ pulsar-admin sinks get options - -``` - -#### Options -|Flag|Description| -|---|---| -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - - -### `status` - -Check the current status of a Pulsar sink. - -#### Usage - -```bash - -$ pulsar-admin sinks status options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The sink ID.
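
For example, changing the parallelism of an already-submitted sink could look like the following sketch (the tenant, namespace, and sink name are illustrative placeholders):

```bash

$ pulsar-admin sinks update \
  --tenant public \
  --namespace default \
  --name cassandra-test-sink \
  --parallelism 2

```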
    If `instance-id` is not provided, Pulsar gets the status of all instances.|
    If `instance-id` is not provided, Pulsar stops all instances.| -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - -### `start` - -Start a sink instance. - -#### Usage - -```bash - -$ pulsar-admin sinks start options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The sink instanceID.
    If `instance-id` is not provided, Pulsar starts all instances.| -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - - -### `restart` - -Restart a sink instance. - -#### Usage - -```bash - -$ pulsar-admin sinks restart options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The sink instanceID.
    If `instance-id` is not provided, Pulsar restarts all instances.|
    It also supports a URL path (http/https/file; the file protocol assumes that the file already exists on the worker host) from which the worker can download the package.
    **Default value: false**. -| `-i`, `--inputs` | The sink's input topic or topics (multiple topics can be specified as a comma-separated list). -|`--name`|The sink’s name.| -|`--namespace`|The sink’s namespace.| -|`--parallelism`|The sink’s parallelism factor, that is, the number of sink instances to run).| -|`--processing-guarantees`|The processing guarantees (also known as delivery semantics) applied to the sink. The `--processing-guarantees` implementation in Pulsar also relies on sink implementation.
    The available values are `ATLEAST_ONCE`, `ATMOST_ONCE`, and `EFFECTIVELY_ONCE`.
    **Default value: false**. -|`--tls-trust-cert-path`|The tls trust cert file path. -| `--topics-pattern` | TopicsPattern to consume from list of topics under a namespace that match the pattern.
    `--inputs` and `--topics-pattern` are mutually exclusive.
    Add the SerDe class name for a pattern in `--custom-serde-inputs` (supported for Java functions only).
    **Default value: false**. - -### `available-sinks` - -Get the list of Pulsar IO connector sinks supported by Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sinks available-sinks - -``` - -### `reload` - -Reload the available built-in connectors. - -#### Usage - -```bash - -$ pulsar-admin sinks reload - -``` - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/io-connectors.md b/site2/website/versioned_docs/version-2.9.0-deprecated/io-connectors.md deleted file mode 100644 index 957a02a5a1964a..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/io-connectors.md +++ /dev/null @@ -1,249 +0,0 @@ ---- -id: io-connectors -title: Built-in connector -sidebar_label: "Built-in connector" -original_id: io-connectors ---- - -Pulsar distribution includes a set of common connectors that have been packaged and tested with the rest of Apache Pulsar. These connectors import and export data from some of the most commonly used data systems. - -Using any of these connectors is as easy as writing a simple connector and running the connector locally or submitting the connector to a Pulsar Functions cluster. - -## Source connector - -Pulsar has various source connectors, which are sorted alphabetically as below. - -### Canal - -* [Configuration](io-canal-source.md#configuration) - -* [Example](io-canal-source.md#usage) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/canal/src/main/java/org/apache/pulsar/io/canal/CanalStringSource.java) - - -### Debezium MySQL - -* [Configuration](io-debezium-source.md#configuration) - -* [Example](io-debezium-source.md#example-of-mysql) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/java/org/apache/pulsar/io/debezium/mysql/DebeziumMysqlSource.java) - -### Debezium PostgreSQL - -* [Configuration](io-debezium-source.md#configuration) - -* [Example](io-debezium-source.md#example-of-postgresql) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/java/org/apache/pulsar/io/debezium/postgres/DebeziumPostgresSource.java) - -### Debezium MongoDB - -* [Configuration](io-debezium-source.md#configuration) - -* [Example](io-debezium-source.md#example-of-mongodb) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mongodb/src/main/java/org/apache/pulsar/io/debezium/mongodb/DebeziumMongoDbSource.java) - -### Debezium Oracle - -* [Configuration](io-debezium-source.md#configuration) - -* [Example](io-debezium-source.md#example-of-oracle) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/oracle/src/main/java/org/apache/pulsar/io/debezium/oracle/DebeziumOracleSource.java) - -### Debezium Microsoft SQL Server - -* [Configuration](io-debezium-source.md#configuration) - -* [Example](io-debezium-source.md#example-of-microsoft-sql) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mssql/src/main/java/org/apache/pulsar/io/debezium/mssql/DebeziumMsSqlSource.java) - - -### DynamoDB - -* [Configuration](io-dynamodb-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/dynamodb/src/main/java/org/apache/pulsar/io/dynamodb/DynamoDBSource.java) - -### File - -* [Configuration](io-file-source.md#configuration) - -* [Example](io-file-source.md#usage) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/file/src/main/java/org/apache/pulsar/io/file/FileSource.java) - -### Flume - -* 
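
For example, running a sink locally against a standalone broker could look like the following sketch (the broker URL, archive path, and names are illustrative placeholders):

```bash

$ pulsar-admin sinks localrun \
  --broker-service-url pulsar://localhost:6650 \
  --archive connectors/pulsar-io-cassandra-@pulsar:version@.nar \
  --tenant public \
  --namespace default \
  --name cassandra-test-sink \
  --inputs test-topic \
  --sink-config-file cassandra-sink.yml

```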
[Configuration](io-flume-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/java/org/apache/pulsar/io/flume/FlumeConnector.java) - -### Twitter firehose - -* [Configuration](io-twitter-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/twitter/src/main/java/org/apache/pulsar/io/twitter/TwitterFireHose.java) - -### Kafka - -* [Configuration](io-kafka-source.md#configuration) - -* [Example](io-kafka-source.md#usage) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSource.java) - -### Kinesis - -* [Configuration](io-kinesis-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/kinesis/src/main/java/org/apache/pulsar/io/kinesis/KinesisSource.java) - -### Netty - -* [Configuration](io-netty-source.md#configuration) - -* [Example of TCP](io-netty-source.md#tcp) - -* [Example of HTTP](io-netty-source.md#http) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/netty/src/main/java/org/apache/pulsar/io/netty/NettySource.java) - -### NSQ - -* [Configuration](io-nsq-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/nsq/src/main/java/org/apache/pulsar/io/nsq/NSQSource.java) - -### RabbitMQ - -* [Configuration](io-rabbitmq-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/rabbitmq/src/main/java/org/apache/pulsar/io/rabbitmq/RabbitMQSource.java) - -## Sink connector - -Pulsar has various sink connectors, which are sorted alphabetically as below. - -### Aerospike - -* [Configuration](io-aerospike-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/aerospike/src/main/java/org/apache/pulsar/io/aerospike/AerospikeStringSink.java) - -### Cassandra - -* [Configuration](io-cassandra-sink.md#configuration) - -* [Example](io-cassandra-sink.md#usage) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/cassandra/src/main/java/org/apache/pulsar/io/cassandra/CassandraStringSink.java) - -### ElasticSearch - -* [Configuration](io-elasticsearch-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/elastic-search/src/main/java/org/apache/pulsar/io/elasticsearch/ElasticSearchSink.java) - -### Flume - -* [Configuration](io-flume-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/java/org/apache/pulsar/io/flume/sink/StringSink.java) - -### HBase - -* [Configuration](io-hbase-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/hbase/src/main/java/org/apache/pulsar/io/hbase/HbaseAbstractConfig.java) - -### HDFS2 - -* [Configuration](io-hdfs2-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/hdfs2/src/main/java/org/apache/pulsar/io/hdfs2/AbstractHdfsConnector.java) - -### HDFS3 - -* [Configuration](io-hdfs3-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/hdfs3/src/main/java/org/apache/pulsar/io/hdfs3/AbstractHdfsConnector.java) - -### InfluxDB - -* [Configuration](io-influxdb-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/influxdb/src/main/java/org/apache/pulsar/io/influxdb/InfluxDBGenericRecordSink.java) - -### JDBC ClickHouse - 
-* [Configuration](io-jdbc-sink.md#configuration) - -* [Example](io-jdbc-sink.md#example-for-clickhouse) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/jdbc/clickhouse/src/main/java/org/apache/pulsar/io/jdbc/ClickHouseJdbcAutoSchemaSink.java) - -### JDBC MariaDB - -* [Configuration](io-jdbc-sink.md#configuration) - -* [Example](io-jdbc-sink.md#example-for-mariadb) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/jdbc/mariadb/src/main/java/org/apache/pulsar/io/jdbc/MariadbJdbcAutoSchemaSink.java) - -### JDBC PostgreSQL - -* [Configuration](io-jdbc-sink.md#configuration) - -* [Example](io-jdbc-sink.md#example-for-postgresql) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/jdbc/postgres/src/main/java/org/apache/pulsar/io/jdbc/PostgresJdbcAutoSchemaSink.java) - -### JDBC SQLite - -* [Configuration](io-jdbc-sink.md#configuration) - -* [Example](io-jdbc-sink.md#example-for-sqlite) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/jdbc/sqlite/src/main/java/org/apache/pulsar/io/jdbc/SqliteJdbcAutoSchemaSink.java) - -### Kafka - -* [Configuration](io-kafka-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSink.java) - -### Kinesis - -* [Configuration](io-kinesis-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/kinesis/src/main/java/org/apache/pulsar/io/kinesis/KinesisSink.java) - -### MongoDB - -* [Configuration](io-mongo-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/mongo/src/main/java/org/apache/pulsar/io/mongodb/MongoSink.java) - -### RabbitMQ - -* [Configuration](io-rabbitmq-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/rabbitmq/src/main/java/org/apache/pulsar/io/rabbitmq/RabbitMQSink.java) - -### Redis - -* [Configuration](io-redis-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/redis/src/main/java/org/apache/pulsar/io/redis/RedisAbstractConfig.java) - -### Solr - -* [Configuration](io-solr-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/solr/src/main/java/org/apache/pulsar/io/solr/SolrSinkConfig.java) - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/io-debezium-source.md b/site2/website/versioned_docs/version-2.9.0-deprecated/io-debezium-source.md deleted file mode 100644 index e7dc56cf54cdd5..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/io-debezium-source.md +++ /dev/null @@ -1,768 +0,0 @@ ---- -id: io-debezium-source -title: Debezium source connector -sidebar_label: "Debezium source connector" -original_id: io-debezium-source ---- - -The Debezium source connector pulls messages from MySQL or PostgreSQL -and persists the messages to Pulsar topics. - -## Configuration - -The configuration of Debezium source connector has the following properties. - -| Name | Required | Default | Description | -|------|----------|---------|-------------| -| `task.class` | true | null | A source task class that implemented in Debezium. | -| `database.hostname` | true | null | The address of a database server. | -| `database.port` | true | null | The port number of a database server.| -| `database.user` | true | null | The name of a database user that has the required privileges. 
| -| `database.password` | true | null | The password for a database user that has the required privileges. | -| `database.server.id` | true | null | The connector’s identifier that must be unique within a database cluster and similar to the database’s server-id configuration property. | -| `database.server.name` | true | null | The logical name of a database server/cluster, which forms a namespace and it is used in all the names of Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the Avro Connector is used. | -| `database.whitelist` | false | null | A list of all databases hosted by this server which is monitored by the connector.

    This is optional, and there are other properties for listing databases and tables to include or exclude from monitoring. | -| `key.converter` | true | null | The converter provided by Kafka Connect to convert record key. | -| `value.converter` | true | null | The converter provided by Kafka Connect to convert record value. | -| `database.history` | true | null | The name of the database history class. | -| `database.history.pulsar.topic` | true | null | The name of the database history topic where the connector writes and recovers DDL statements.

    **Note: this topic is for internal use only and should not be used by consumers.** | -| `database.history.pulsar.service.url` | true | null | Pulsar cluster service URL for history topic. | -| `offset.storage.topic` | true | null | Record the last committed offsets that the connector successfully completes. | -| `json-with-envelope` | false | false | Present the message only consist of payload. - -### Converter Options - -1. org.apache.kafka.connect.json.JsonConverter - -This config `json-with-envelope` is valid only for the JsonConverter. It's default value is false, the consumer use the schema ` -Schema.KeyValue(Schema.AUTO_CONSUME(), Schema.AUTO_CONSUME(), KeyValueEncodingType.SEPARATED)`, -and the message only consist of payload. - -If the config `json-with-envelope` value is true, the consumer use the schema -`Schema.KeyValue(Schema.BYTES, Schema.BYTES`, the message consist of schema and payload. - -2. org.apache.pulsar.kafka.shade.io.confluent.connect.avro.AvroConverter - -If users select the AvroConverter, then the pulsar consumer should use the schema `Schema.KeyValue(Schema.AUTO_CONSUME(), -Schema.AUTO_CONSUME(), KeyValueEncodingType.SEPARATED)`, and the message consist of payload. - -### MongoDB Configuration -| Name | Required | Default | Description | -|------|----------|---------|-------------| -| `mongodb.hosts` | true | null | The comma-separated list of hostname and port pairs (in the form 'host' or 'host:port') of the MongoDB servers in the replica set. The list contains a single hostname and a port pair. If mongodb.members.auto.discover is set to false, the host and port pair are prefixed with the replica set name (e.g., rs0/localhost:27017). | -| `mongodb.name` | true | null | A unique name that identifies the connector and/or MongoDB replica set or shared cluster that this connector monitors. Each server should be monitored by at most one Debezium connector, since this server name prefixes all persisted Kafka topics emanating from the MongoDB replica set or cluster. | -| `mongodb.user` | true | null | Name of the database user to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. | -| `mongodb.password` | true | null | Password to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. | -| `mongodb.task.id` | true | null | The taskId of the MongoDB connector that attempts to use a separate task for each replica set. | - - - -## Example of MySQL - -You need to create a configuration file before using the Pulsar Debezium connector. - -### Configuration - -You can use one of the following methods to create a configuration file. 
- -* JSON - - ```json - - { - "database.hostname": "localhost", - "database.port": "3306", - "database.user": "debezium", - "database.password": "dbz", - "database.server.id": "184054", - "database.server.name": "dbserver1", - "database.whitelist": "inventory", - "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory", - "database.history.pulsar.topic": "history-topic", - "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650", - "key.converter": "org.apache.kafka.connect.json.JsonConverter", - "value.converter": "org.apache.kafka.connect.json.JsonConverter", - "offset.storage.topic": "offset-topic" - } - - ``` - -* YAML - - You can create a `debezium-mysql-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/resources/debezium-mysql-source-config.yaml) below to the `debezium-mysql-source-config.yaml` file. - - ```yaml - - tenant: "public" - namespace: "default" - name: "debezium-mysql-source" - topicName: "debezium-mysql-topic" - archive: "connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar" - parallelism: 1 - - configs: - - ## config for mysql, docker image: debezium/example-mysql:0.8 - database.hostname: "localhost" - database.port: "3306" - database.user: "debezium" - database.password: "dbz" - database.server.id: "184054" - database.server.name: "dbserver1" - database.whitelist: "inventory" - database.history: "org.apache.pulsar.io.debezium.PulsarDatabaseHistory" - database.history.pulsar.topic: "history-topic" - database.history.pulsar.service.url: "pulsar://127.0.0.1:6650" - - ## KEY_CONVERTER_CLASS_CONFIG, VALUE_CONVERTER_CLASS_CONFIG - key.converter: "org.apache.kafka.connect.json.JsonConverter" - value.converter: "org.apache.kafka.connect.json.JsonConverter" - - ## OFFSET_STORAGE_TOPIC_CONFIG - offset.storage.topic: "offset-topic" - - ``` - -### Usage - -This example shows how to change the data of a MySQL table using the Pulsar Debezium connector. - -1. Start a MySQL server with a database from which Debezium can capture changes. - - ```bash - - $ docker run -it --rm \ - --name mysql \ - -p 3306:3306 \ - -e MYSQL_ROOT_PASSWORD=debezium \ - -e MYSQL_USER=mysqluser \ - -e MYSQL_PASSWORD=mysqlpw debezium/example-mysql:0.8 - - ``` - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - -3. Start the Pulsar Debezium connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - Make sure the nar file is available at `connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar`. 
-
-   ```bash
-
-   $ bin/pulsar-admin source localrun \
-   --archive connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar \
-   --name debezium-mysql-source --destination-topic-name debezium-mysql-topic \
-   --tenant public \
-   --namespace default \
-   --source-config '{"database.hostname": "localhost","database.port": "3306","database.user": "debezium","database.password": "dbz","database.server.id": "184054","database.server.name": "dbserver1","database.whitelist": "inventory","database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory","database.history.pulsar.topic": "history-topic","database.history.pulsar.service.url": "pulsar://127.0.0.1:6650","key.converter": "org.apache.kafka.connect.json.JsonConverter","value.converter": "org.apache.kafka.connect.json.JsonConverter","pulsar.service.url": "pulsar://127.0.0.1:6650","offset.storage.topic": "offset-topic"}'
-
-   ```
-
-   :::note
-
-   Currently, the destination topic (specified by the `destination-topic-name` option) is a required configuration, but it is not used by the Debezium connector to save data. The Debezium connector saves data in the following 4 types of topics:
-
-   - One topic named with the database server name (`database.server.name`) for storing the database metadata messages, such as `public/default/database.server.name`.
-   - One topic (`database.history.pulsar.topic`) for storing the database history information. The connector writes and recovers DDL statements on this topic.
-   - One topic (`offset.storage.topic`) for storing the offset metadata messages. The connector saves the last successfully-committed offsets on this topic.
-   - One per-table topic. The connector writes change events for all operations that occur in a table to a single Pulsar topic that is specific to that table.
-
-   If automatic topic creation is disabled on your broker, you need to manually create the above 4 types of topics and the destination topic.
-
-   :::
-
-   * Use the **YAML** configuration file as shown previously.
-
-   ```bash
-
-   $ bin/pulsar-admin source localrun \
-   --source-config-file debezium-mysql-source-config.yaml
-
-   ```
-
-4. Subscribe to the topic _sub-products_ for the table _inventory.products_.
-
-   ```bash
-
-   $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0
-
-   ```
-
-5. Start a MySQL client in docker.
-
-   ```bash
-
-   $ docker run -it --rm \
-   --name mysqlterm \
-   --link mysql \
-   mysql:5.7 sh \
-   -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
-
-   ```
-
-6. The MySQL client prompt appears.
-
-   Use the following commands to change the data of the table _products_.
-
-   ```
-
-   mysql> use inventory;
-   mysql> show tables;
-   mysql> SELECT * FROM products;
-   mysql> UPDATE products SET name='1111111111' WHERE id=101;
-   mysql> UPDATE products SET name='1111111111' WHERE id=107;
-
-   ```
-
-   In the terminal window where you subscribed to the topic, you can see that the data changes have been recorded in the _sub-products_ topic.
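-
-If you want to verify the stream programmatically instead of using `pulsar-client`, the following minimal Java sketch consumes the same per-table topic with the `Schema.KeyValue(Schema.AUTO_CONSUME(), Schema.AUTO_CONSUME(), KeyValueEncodingType.SEPARATED)` schema described in the Converter Options section. The class name and the subscription name `sub-products-java` are illustrative and not part of the connector.
-
-```java
-
-import org.apache.pulsar.client.api.Consumer;
-import org.apache.pulsar.client.api.Message;
-import org.apache.pulsar.client.api.PulsarClient;
-import org.apache.pulsar.client.api.Schema;
-import org.apache.pulsar.client.api.schema.GenericRecord;
-import org.apache.pulsar.common.schema.KeyValue;
-import org.apache.pulsar.common.schema.KeyValueEncodingType;
-
-public class DebeziumChangeEventReader {
-    public static void main(String[] args) throws Exception {
-        PulsarClient client = PulsarClient.builder()
-                .serviceUrl("pulsar://127.0.0.1:6650")
-                .build();
-
-        // With the JsonConverter and json-with-envelope=false (the default), the key and
-        // the value arrive as a SEPARATED KeyValue, so AUTO_CONSUME can decode both parts.
-        Consumer<KeyValue<GenericRecord, GenericRecord>> consumer = client
-                .newConsumer(Schema.KeyValue(Schema.AUTO_CONSUME(), Schema.AUTO_CONSUME(),
-                        KeyValueEncodingType.SEPARATED))
-                .topic("public/default/dbserver1.inventory.products")
-                .subscriptionName("sub-products-java")
-                .subscribe();
-
-        while (true) {
-            Message<KeyValue<GenericRecord, GenericRecord>> msg = consumer.receive();
-            KeyValue<GenericRecord, GenericRecord> kv = msg.getValue();
-            // The key carries the primary key of the changed row; the value carries the envelope.
-            System.out.println("key=" + kv.getKey().getNativeObject()
-                    + " value=" + kv.getValue().getNativeObject());
-            consumer.acknowledge(msg);
-        }
-    }
-}
-
-```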
-
-## Example of PostgreSQL
-
-You need to create a configuration file before using the Pulsar Debezium connector.
-
-### Configuration
-
-You can use one of the following methods to create a configuration file.
-
-* JSON
-
-   ```json
-
-   {
-   "database.hostname": "localhost",
-   "database.port": "5432",
-   "database.user": "postgres",
-   "database.password": "changeme",
-   "database.dbname": "postgres",
-   "database.server.name": "dbserver1",
-   "plugin.name": "pgoutput",
-   "schema.whitelist": "public",
-   "table.whitelist": "public.users",
-   "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650"
-   }
-
-   ```
-
-* YAML
-
-   You can create a `debezium-postgres-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/resources/debezium-postgres-source-config.yaml) below to the `debezium-postgres-source-config.yaml` file.
-
-   ```yaml
-
-   tenant: "public"
-   namespace: "default"
-   name: "debezium-postgres-source"
-   topicName: "debezium-postgres-topic"
-   archive: "connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar"
-   parallelism: 1
-
-   configs:
-
-      ## config for postgres version 10+, official docker image: postgres:<10+>
-      database.hostname: "localhost"
-      database.port: "5432"
-      database.user: "postgres"
-      database.password: "changeme"
-      database.dbname: "postgres"
-      database.server.name: "dbserver1"
-      plugin.name: "pgoutput"
-      schema.whitelist: "public"
-      table.whitelist: "public.users"
-
-      ## PULSAR_SERVICE_URL_CONFIG
-      database.history.pulsar.service.url: "pulsar://127.0.0.1:6650"
-
-   ```
-
-Notice that `pgoutput` is a standard plugin of PostgreSQL 10 and later; see the [PostgreSQL logical replication architecture documentation](https://www.postgresql.org/docs/10/logical-replication-architecture.html). You don't need to install anything; just make sure the WAL level is set to `logical` (see the docker command below and the [PostgreSQL documentation](https://www.postgresql.org/docs/current/runtime-config-wal.html)).
-
-### Usage
-
-This example shows how to change the data of a PostgreSQL table using the Pulsar Debezium connector.
-
-1. Start a PostgreSQL server with a database from which Debezium can capture changes.
-
-   ```bash
-
-   $ docker run -d -it --rm \
-   --name pulsar-postgres \
-   -p 5432:5432 \
-   -e POSTGRES_PASSWORD=changeme \
-   postgres:13.3 -c wal_level=logical
-
-   ```
-
-2. Start a Pulsar service locally in standalone mode.
-
-   ```bash
-
-   $ bin/pulsar standalone
-
-   ```
-
-3. Start the Pulsar Debezium connector in local run mode using one of the following methods.
-
-   * Use the **JSON** configuration file as shown previously.
-
-     Make sure the nar file is available at `connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar`.
-
-   ```bash
-
-   $ bin/pulsar-admin source localrun \
-   --archive connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar \
-   --name debezium-postgres-source \
-   --destination-topic-name debezium-postgres-topic \
-   --tenant public \
-   --namespace default \
-   --source-config '{"database.hostname": "localhost","database.port": "5432","database.user": "postgres","database.password": "changeme","database.dbname": "postgres","database.server.name": "dbserver1","schema.whitelist": "public","table.whitelist": "public.users","pulsar.service.url": "pulsar://127.0.0.1:6650"}'
-
-   ```
-
-   :::note
-
-   Currently, the destination topic (specified by the `destination-topic-name` option) is a required configuration, but it is not used by the Debezium connector to save data. The Debezium connector saves data in the following 4 types of topics:
-
-   - One topic named with the database server name (`database.server.name`) for storing the database metadata messages, such as `public/default/database.server.name`.
-   - One topic (`database.history.pulsar.topic`) for storing the database history information. The connector writes and recovers DDL statements on this topic.
-   - One topic (`offset.storage.topic`) for storing the offset metadata messages. The connector saves the last successfully-committed offsets on this topic.
-   - One per-table topic. The connector writes change events for all operations that occur in a table to a single Pulsar topic that is specific to that table.
-
-   If automatic topic creation is disabled on your broker, you need to manually create the above 4 types of topics and the destination topic.
-
-   :::
-
-   * Use the **YAML** configuration file as shown previously.
-
-   ```bash
-
-   $ bin/pulsar-admin source localrun \
-   --source-config-file debezium-postgres-source-config.yaml
-
-   ```
-
-4. Subscribe to the topic _sub-users_ for the _public.users_ table.
-
-   ```
-
-   $ bin/pulsar-client consume -s "sub-users" public/default/dbserver1.public.users -n 0
-
-   ```
-
-5. Start a PostgreSQL client in docker.
-
-   ```bash
-
-   $ docker exec -it pulsar-postgres /bin/bash
-
-   ```
-
-6. Connect with the `psql` client inside the container and use the following commands to create sample data in the table _users_.
-
-   ```
-
-   psql -U postgres -h localhost -p 5432
-   Password for user postgres:
-
-   CREATE TABLE users(
-     id BIGINT GENERATED ALWAYS AS IDENTITY, PRIMARY KEY(id),
-     hash_firstname TEXT NOT NULL,
-     hash_lastname TEXT NOT NULL,
-     gender VARCHAR(6) NOT NULL CHECK (gender IN ('male', 'female'))
-   );
-
-   INSERT INTO users(hash_firstname, hash_lastname, gender)
-     SELECT md5(RANDOM()::TEXT), md5(RANDOM()::TEXT), CASE WHEN RANDOM() < 0.5 THEN 'male' ELSE 'female' END FROM generate_series(1, 100);
-
-   postgres=# select * from users;
-
-    id |          hash_firstname          |          hash_lastname           | gender
-   ----+----------------------------------+----------------------------------+--------
-     1 | 02bf7880eb489edc624ba637f5ab42bd | 3e742c2cc4217d8e3382cc251415b2fb | female
-     2 | dd07064326bb9119189032316158f064 | 9c0e938f9eddbd5200ba348965afbc61 | male
-     3 | 2c5316fdd9d6595c1cceb70eed12e80c | 8a93d7d8f9d76acfaaa625c82a03ea8b | female
-     4 | 3dfa3b4f70d8cd2155567210e5043d2b | 32c156bc28f7f03ab5d28e2588a3dc19 | female
-
-   postgres=# UPDATE users SET hash_firstname='maxim' WHERE id=1;
-   UPDATE 1
-
-   ```
-
-   In the terminal window where you subscribed to the topic, you receive messages similar to the following.
-
-   ```bash
-
-   ----- got message -----
-   {"before":null,"after":{"id":1,"hash_firstname":"maxim","hash_lastname":"292113d30a3ccee0e19733dd7f88b258","gender":"male"},"source":{"version":"1.0.0.Final","connector":"postgresql","name":"foobar","ts_ms":1624045862644,"snapshot":"false","db":"postgres","schema":"public","table":"users","txId":595,"lsn":24419784,"xmin":null},"op":"u","ts_ms":1624045862648}
-   ...many more
-
-   ```
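-
-If your application needs the row state rather than the raw envelope, the `op`, `before`, and `after` fields can be extracted with any JSON library. The following is a small illustrative Java sketch; it assumes Jackson is on the classpath and is not part of the connector.
-
-```java
-
-import com.fasterxml.jackson.databind.JsonNode;
-import com.fasterxml.jackson.databind.ObjectMapper;
-
-public class DebeziumEnvelope {
-    private static final ObjectMapper MAPPER = new ObjectMapper();
-
-    /** Returns the row state carried by a Debezium change event value. */
-    public static JsonNode rowState(String valueJson) throws Exception {
-        JsonNode envelope = MAPPER.readTree(valueJson);
-        // "op" is "c" (create), "u" (update), "d" (delete), or "r" (snapshot read).
-        String op = envelope.path("op").asText();
-        // For deletes, "after" is null and "before" holds the last known state.
-        return "d".equals(op) ? envelope.path("before") : envelope.path("after");
-    }
-}
-
-```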
-
-## Example of MongoDB
-
-You need to create a configuration file before using the Pulsar Debezium connector.
-
-### Configuration
-
-You can use one of the following methods to create a configuration file.
-
-* JSON
-
-   ```json
-
-   {
-   "mongodb.hosts": "rs0/mongodb:27017",
-   "mongodb.name": "dbserver1",
-   "mongodb.user": "debezium",
-   "mongodb.password": "dbz",
-   "mongodb.task.id": "1",
-   "database.whitelist": "inventory",
-   "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650"
-   }
-
-   ```
-
-* YAML
-
-   You can create a `debezium-mongodb-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mongodb/src/main/resources/debezium-mongodb-source-config.yaml) below to the `debezium-mongodb-source-config.yaml` file.
-
-   ```yaml
-
-   tenant: "public"
-   namespace: "default"
-   name: "debezium-mongodb-source"
-   topicName: "debezium-mongodb-topic"
-   archive: "connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar"
-   parallelism: 1
-
-   configs:
-
-      ## config for mongodb, docker image: debezium/example-mongodb:0.10
-      mongodb.hosts: "rs0/mongodb:27017"
-      mongodb.name: "dbserver1"
-      mongodb.user: "debezium"
-      mongodb.password: "dbz"
-      mongodb.task.id: "1"
-      database.whitelist: "inventory"
-      database.history.pulsar.service.url: "pulsar://127.0.0.1:6650"
-
-   ```
-
-### Usage
-
-This example shows how to change the data of a MongoDB collection using the Pulsar Debezium connector.
-
-1. Start a MongoDB server with a database from which Debezium can capture changes.
-
-   ```bash
-
-   $ docker pull debezium/example-mongodb:0.10
-   $ docker run -d -it --rm --name pulsar-mongodb -e MONGODB_USER=mongodb -e MONGODB_PASSWORD=mongodb -p 27017:27017 debezium/example-mongodb:0.10
-
-   ```
-
-   Use the following commands to initialize the data.
-
-   ```bash
-
-   ./usr/local/bin/init-inventory.sh
-
-   ```
-
-   If the local host cannot access the container network, you can update the ```/etc/hosts``` file and add a rule such as ```127.0.0.1 f114527a95f```, where `f114527a95f` is the container ID. You can get the container ID with ```docker ps -a```.
-
-2. Start a Pulsar service locally in standalone mode.
-
-   ```bash
-
-   $ bin/pulsar standalone
-
-   ```
-
-3. Start the Pulsar Debezium connector in local run mode using one of the following methods.
-
-   * Use the **JSON** configuration file as shown previously.
-
-     Make sure the nar file is available at `connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar`.
-
-   ```bash
-
-   $ bin/pulsar-admin source localrun \
-   --archive connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar \
-   --name debezium-mongodb-source \
-   --destination-topic-name debezium-mongodb-topic \
-   --tenant public \
-   --namespace default \
-   --source-config '{"mongodb.hosts": "rs0/mongodb:27017","mongodb.name": "dbserver1","mongodb.user": "debezium","mongodb.password": "dbz","mongodb.task.id": "1","database.whitelist": "inventory","database.history.pulsar.service.url": "pulsar://127.0.0.1:6650"}'
-
-   ```
-
-   :::note
-
-   Currently, the destination topic (specified by the `destination-topic-name` option) is a required configuration, but it is not used by the Debezium connector to save data. The Debezium connector saves data in the following 4 types of topics:
-
-   - One topic named with the database server name (`database.server.name`) for storing the database metadata messages, such as `public/default/database.server.name`.
-   - One topic (`database.history.pulsar.topic`) for storing the database history information. The connector writes and recovers DDL statements on this topic.
-   - One topic (`offset.storage.topic`) for storing the offset metadata messages. The connector saves the last successfully-committed offsets on this topic.
-   - One per-table topic. The connector writes change events for all operations that occur in a table to a single Pulsar topic that is specific to that table.
-
-   If automatic topic creation is disabled on your broker, you need to manually create the above 4 types of topics and the destination topic.
-
-   :::
-
-   * Use the **YAML** configuration file as shown previously.
-
-   ```bash
-
-   $ bin/pulsar-admin source localrun \
-   --source-config-file debezium-mongodb-source-config.yaml
-
-   ```
-
-4. Subscribe to the topic _sub-products_ for the _inventory.products_ collection.
-
-   ```
-
-   $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0
-
-   ```
-
-5. Start a MongoDB client in docker.
-
-   ```bash
-
-   $ docker exec -it pulsar-mongodb /bin/bash
-
-   ```
-
-6. A shell starts inside the container. Use the following commands to connect with the MongoDB client and update a document.
-
-   ```bash
-
-   mongo -u debezium -p dbz --authenticationDatabase admin localhost:27017/inventory
-   db.products.update({"_id":NumberLong(104)},{$set:{weight:1.25}})
-
-   ```
-
-   In the terminal window where you subscribed to the topic, you receive messages similar to the following.
-
-   ```bash
-
-   ----- got message -----
-   {"schema":{"type":"struct","fields":[{"type":"string","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":"104"}}, value = {"schema":{"type":"struct","fields":[{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"after"},{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"patch"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"version"},{"type":"string","optional":false,"field":"connector"},{"type":"string","optional":false,"field":"name"},{"type":"int64","optional":false,"field":"ts_ms"},{"type":"string","optional":true,"name":"io.debezium.data.Enum","version":1,"parameters":{"allowed":"true,last,false"},"default":"false","field":"snapshot"},{"type":"string","optional":false,"field":"db"},{"type":"string","optional":false,"field":"rs"},{"type":"string","optional":false,"field":"collection"},{"type":"int32","optional":false,"field":"ord"},{"type":"int64","optional":true,"field":"h"}],"optional":false,"name":"io.debezium.connector.mongo.Source","field":"source"},{"type":"string","optional":true,"field":"op"},{"type":"int64","optional":true,"field":"ts_ms"}],"optional":false,"name":"dbserver1.inventory.products.Envelope"},"payload":{"after":"{\"_id\": {\"$numberLong\": \"104\"},\"name\": \"hammer\",\"description\": \"12oz carpenter's hammer\",\"weight\": 1.25,\"quantity\": 4}","patch":null,"source":{"version":"0.10.0.Final","connector":"mongodb","name":"dbserver1","ts_ms":1573541905000,"snapshot":"true","db":"inventory","rs":"rs0","collection":"products","ord":1,"h":4983083486544392763},"op":"r","ts_ms":1573541909761}}.
-
-   ```
-
-## Example of Oracle
-
-### Packaging
-
-The Oracle connector does not include the Oracle JDBC driver, so you need to package the driver with the connector.
-The major reasons for not including the driver are the variety of driver versions and Oracle licensing. It is recommended to use the driver provided with your Oracle DB installation, or you can [download](https://www.oracle.com/database/technologies/appdev/jdbc.html) one.
-The integration tests have an [example](https://github.com/apache/pulsar/blob/e2bc52d40450fa00af258c4432a5b71d50a5c6e0/tests/docker-images/latest-version-image/Dockerfile#L110-L122) of packaging the driver into the connector nar file.
-
-### Configuration
-
-Debezium [requires](https://debezium.io/documentation/reference/1.5/connectors/oracle.html#oracle-overview) Oracle DB with LogMiner or XStream API enabled.
-The supported options and the steps for enabling them vary from version to version of Oracle DB.
-The steps outlined in the [documentation](https://debezium.io/documentation/reference/1.5/connectors/oracle.html#oracle-overview) and used in the [integration test](https://github.com/apache/pulsar/blob/master/tests/integration/src/test/java/org/apache/pulsar/tests/integration/io/sources/debezium/DebeziumOracleDbSourceTester.java) may or may not work for the version and edition of Oracle DB you are using.
-Please refer to the [documentation for Oracle DB](https://docs.oracle.com/en/database/oracle/oracle-database/) as needed.
-
-Similarly to other connectors, you can use JSON or YAML to configure the connector.
-For example, you can create a `debezium-oracle-source-config.yaml` file like the following:
-
-* JSON
-
-```json
-
-{
-  "database.hostname": "localhost",
-  "database.port": "1521",
-  "database.user": "dbzuser",
-  "database.password": "dbz",
-  "database.dbname": "XE",
-  "database.server.name": "XE",
-  "schema.exclude.list": "system,dbzuser",
-  "snapshot.mode": "initial",
-  "topic.namespace": "public/default",
-  "task.class": "io.debezium.connector.oracle.OracleConnectorTask",
-  "value.converter": "org.apache.kafka.connect.json.JsonConverter",
-  "key.converter": "org.apache.kafka.connect.json.JsonConverter",
-  "typeClassName": "org.apache.pulsar.common.schema.KeyValue",
-  "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory",
-  "database.tcpKeepAlive": "true",
-  "decimal.handling.mode": "double",
-  "database.history.pulsar.topic": "debezium-oracle-source-history-topic",
-  "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650"
-}
-
-```
-
-* YAML
-
-```yaml
-
-tenant: "public"
-namespace: "default"
-name: "debezium-oracle-source"
-topicName: "debezium-oracle-topic"
-parallelism: 1
-
-className: "org.apache.pulsar.io.debezium.oracle.DebeziumOracleSource"
-database.dbname: "XE"
-
-configs:
-  database.hostname: "localhost"
-  database.port: "1521"
-  database.user: "dbzuser"
-  database.password: "dbz"
-  database.dbname: "XE"
-  database.server.name: "XE"
-  schema.exclude.list: "system,dbzuser"
-  snapshot.mode: "initial"
-  topic.namespace: "public/default"
-  task.class: "io.debezium.connector.oracle.OracleConnectorTask"
-  value.converter: "org.apache.kafka.connect.json.JsonConverter"
-  key.converter: "org.apache.kafka.connect.json.JsonConverter"
-  typeClassName: "org.apache.pulsar.common.schema.KeyValue"
-  database.history: "org.apache.pulsar.io.debezium.PulsarDatabaseHistory"
-  database.tcpKeepAlive: "true"
-  decimal.handling.mode: "double"
-  database.history.pulsar.topic: "debezium-oracle-source-history-topic"
-  database.history.pulsar.service.url: "pulsar://127.0.0.1:6650"
-
-```
-
-For the full list of configuration properties supported by Debezium, see [Debezium Connector for Oracle](https://debezium.io/documentation/reference/1.5/connectors/oracle.html#oracle-connector-properties).
-
-## Example of Microsoft SQL
-
-### Configuration
-
-Debezium [requires](https://debezium.io/documentation/reference/1.5/connectors/sqlserver.html#sqlserver-overview) SQL Server with CDC enabled.
-Follow the steps outlined in the [documentation](https://debezium.io/documentation/reference/1.5/connectors/sqlserver.html#setting-up-sqlserver) and used in the [integration test](https://github.com/apache/pulsar/blob/master/tests/integration/src/test/java/org/apache/pulsar/tests/integration/io/sources/debezium/DebeziumMsSqlSourceTester.java).
-For more information, see [Enable and disable change data capture in Microsoft SQL Server](https://docs.microsoft.com/en-us/sql/relational-databases/track-changes/enable-and-disable-change-data-capture-sql-server).
-
-Similarly to other connectors, you can use JSON or YAML to configure the connector.
-
-* JSON
-
-```json
-
-{
-  "database.hostname": "localhost",
-  "database.port": "1433",
-  "database.user": "sa",
-  "database.password": "MyP@ssw0rd!",
-  "database.dbname": "MyTestDB",
-  "database.server.name": "mssql",
-  "snapshot.mode": "schema_only",
-  "topic.namespace": "public/default",
-  "task.class": "io.debezium.connector.sqlserver.SqlServerConnectorTask",
-  "value.converter": "org.apache.kafka.connect.json.JsonConverter",
-  "key.converter": "org.apache.kafka.connect.json.JsonConverter",
-  "typeClassName": "org.apache.pulsar.common.schema.KeyValue",
-  "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory",
-  "database.tcpKeepAlive": "true",
-  "decimal.handling.mode": "double",
-  "database.history.pulsar.topic": "debezium-mssql-source-history-topic",
-  "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650"
-}
-
-```
-
-* YAML
-
-```yaml
-
-tenant: "public"
-namespace: "default"
-name: "debezium-mssql-source"
-topicName: "debezium-mssql-topic"
-parallelism: 1
-
-className: "org.apache.pulsar.io.debezium.mssql.DebeziumMsSqlSource"
-database.dbname: "mssql"
-
-configs:
-  database.hostname: "localhost"
-  database.port: "1433"
-  database.user: "sa"
-  database.password: "MyP@ssw0rd!"
-  database.dbname: "MyTestDB"
-  database.server.name: "mssql"
-  snapshot.mode: "schema_only"
-  topic.namespace: "public/default"
-  task.class: "io.debezium.connector.sqlserver.SqlServerConnectorTask"
-  value.converter: "org.apache.kafka.connect.json.JsonConverter"
-  key.converter: "org.apache.kafka.connect.json.JsonConverter"
-  typeClassName: "org.apache.pulsar.common.schema.KeyValue"
-  database.history: "org.apache.pulsar.io.debezium.PulsarDatabaseHistory"
-  database.tcpKeepAlive: "true"
-  decimal.handling.mode: "double"
-  database.history.pulsar.topic: "debezium-mssql-source-history-topic"
-  database.history.pulsar.service.url: "pulsar://127.0.0.1:6650"
-
-```
-
-For the full list of configuration properties supported by Debezium, see [Debezium Connector for MS SQL](https://debezium.io/documentation/reference/1.5/connectors/sqlserver.html#sqlserver-connector-properties).
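-
-As the notes in the examples above point out, if automatic topic creation is disabled on your broker, the connector's topics must exist before you start it. The following Java sketch pre-creates them with the Pulsar admin client; the topic names are assumptions derived from the Microsoft SQL configuration above, and the offset topic name is only an example.
-
-```java
-
-import org.apache.pulsar.client.admin.PulsarAdmin;
-
-public class CreateDebeziumTopics {
-    public static void main(String[] args) throws Exception {
-        PulsarAdmin admin = PulsarAdmin.builder()
-                .serviceHttpUrl("http://127.0.0.1:8080")
-                .build();
-
-        // Destination topic, server-name topic, history topic, and offset topic.
-        admin.topics().createNonPartitionedTopic("persistent://public/default/debezium-mssql-topic");
-        admin.topics().createNonPartitionedTopic("persistent://public/default/mssql");
-        admin.topics().createNonPartitionedTopic("persistent://public/default/debezium-mssql-source-history-topic");
-        admin.topics().createNonPartitionedTopic("persistent://public/default/offset-topic");
-        // Per-table topics such as public/default/mssql.dbo.<table> are created the same way.
-
-        admin.close();
-    }
-}
-
-```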
-
-## FAQ
-
-### Debezium PostgreSQL connector hangs when creating a snapshot
-
-```
-
-#18 prio=5 os_prio=31 tid=0x00007fd83096f800 nid=0xa403 waiting on condition [0x000070000f534000]
-    java.lang.Thread.State: WAITING (parking)
-    at sun.misc.Unsafe.park(Native Method)
-    - parking to wait for <0x00000007ab025a58> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
-    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
-    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
-    at java.util.concurrent.LinkedBlockingDeque.putLast(LinkedBlockingDeque.java:396)
-    at java.util.concurrent.LinkedBlockingDeque.put(LinkedBlockingDeque.java:649)
-    at io.debezium.connector.base.ChangeEventQueue.enqueue(ChangeEventQueue.java:132)
-    at io.debezium.connector.postgresql.PostgresConnectorTask$Lambda$203/385424085.accept(Unknown Source)
-    at io.debezium.connector.postgresql.RecordsSnapshotProducer.sendCurrentRecord(RecordsSnapshotProducer.java:402)
-    at io.debezium.connector.postgresql.RecordsSnapshotProducer.readTable(RecordsSnapshotProducer.java:321)
-    at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$takeSnapshot$6(RecordsSnapshotProducer.java:226)
-    at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$240/1347039967.accept(Unknown Source)
-    at io.debezium.jdbc.JdbcConnection.queryWithBlockingConsumer(JdbcConnection.java:535)
-    at io.debezium.connector.postgresql.RecordsSnapshotProducer.takeSnapshot(RecordsSnapshotProducer.java:224)
-    at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$start$0(RecordsSnapshotProducer.java:87)
-    at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$206/589332928.run(Unknown Source)
-    at java.util.concurrent.CompletableFuture.uniRun(CompletableFuture.java:705)
-    at java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:717)
-    at java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2010)
-    at io.debezium.connector.postgresql.RecordsSnapshotProducer.start(RecordsSnapshotProducer.java:87)
-    at io.debezium.connector.postgresql.PostgresConnectorTask.start(PostgresConnectorTask.java:126)
-    at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:47)
-    at org.apache.pulsar.io.kafka.connect.KafkaConnectSource.open(KafkaConnectSource.java:127)
-    at org.apache.pulsar.io.debezium.DebeziumSource.open(DebeziumSource.java:100)
-    at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupInput(JavaInstanceRunnable.java:690)
-    at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupJavaInstance(JavaInstanceRunnable.java:200)
-    at org.apache.pulsar.functions.instance.JavaInstanceRunnable.run(JavaInstanceRunnable.java:230)
-    at java.lang.Thread.run(Thread.java:748)
-
-```
-
-If you encounter the above problem while synchronizing data, refer to [this issue](https://github.com/apache/pulsar/issues/4075) and add the following configuration to the configuration file:
-
-```
-
-max.queue.size=
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/io-debug.md b/site2/website/versioned_docs/version-2.9.0-deprecated/io-debug.md
deleted file mode 100644
index 844e101d00d2a7..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/io-debug.md
+++ /dev/null
@@ -1,407 +0,0 @@
----
-id: io-debug
-title: How to debug Pulsar connectors
-sidebar_label: "Debug"
-original_id: io-debug
----
-This guide explains how to debug connectors in localrun or cluster mode and gives a debugging checklist.
-To better demonstrate how to debug Pulsar connectors, this guide takes a Mongo sink connector as an example.
-
-**Deploy a Mongo sink environment**
-1. Start a Mongo service.
-
-   ```bash
-
-   docker pull mongo:4
-   docker run -d -p 27017:27017 --name pulsar-mongo -v $PWD/data:/data/db mongo:4
-
-   ```
-
-2. Create a DB and a collection.
-
-   ```bash
-
-   docker exec -it pulsar-mongo /bin/bash
-   mongo
-   > use pulsar
-   > db.createCollection('messages')
-   > exit
-
-   ```
-
-3. Start Pulsar standalone.
-
-   ```bash
-
-   docker pull apachepulsar/pulsar:2.4.0
-   docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --link pulsar-mongo --name pulsar-mongo-standalone apachepulsar/pulsar:2.4.0 bin/pulsar standalone
-
-   ```
-
-4. Configure the Mongo sink with the `mongo-sink-config.yaml` file.
-
-   ```yaml
-
-   configs:
-      mongoUri: "mongodb://pulsar-mongo:27017"
-      database: "pulsar"
-      collection: "messages"
-      batchSize: 2
-      batchTimeMs: 500
-
-   ```
-
-   ```bash
-
-   docker cp mongo-sink-config.yaml pulsar-mongo-standalone:/pulsar/
-
-   ```
-
-5. Download the Mongo sink nar package.
-
-   ```bash
-
-   docker exec -it pulsar-mongo-standalone /bin/bash
-   curl -O http://apache.01link.hk/pulsar/pulsar-2.4.0/connectors/pulsar-io-mongo-2.4.0.nar
-
-   ```
-
-## Debug in localrun mode
-Start the Mongo sink in localrun mode using the `localrun` command.
-:::tip
-
-For more information about the `localrun` command, see [`localrun`](reference-connector-admin.md/#localrun-1).
-
-:::
-
-```bash
-
-./bin/pulsar-admin sinks localrun \
---archive pulsar-io-mongo-2.4.0.nar \
---tenant public --namespace default \
---inputs test-mongo \
---name pulsar-mongo-sink \
---sink-config-file mongo-sink-config.yaml \
---parallelism 1
-
-```
-
-### Use connector log
-Use one of the following methods to get a connector log in localrun mode:
-* After executing the `localrun` command, the **log is automatically printed on the console**.
-* The log is located at:
-
-  ```bash
-
-  logs/functions/tenant/namespace/function-name/function-name-instance-id.log
-
-  ```
-
-  **Example**
-
-  The path of the Mongo sink connector is:
-
-  ```bash
-
-  logs/functions/public/default/pulsar-mongo-sink/pulsar-mongo-sink-0.log
-
-  ```
-
-To clearly explain the log information, this section breaks the large block of information into small blocks and adds a description for each block.
-* This piece of log information shows the storage path of the nar package after decompression.
-
-  ```
-
-  08:21:54.132 [main] INFO org.apache.pulsar.common.nar.NarClassLoader - Created class loader with paths: [file:/tmp/pulsar-nar/pulsar-io-mongo-2.4.0.nar-unpacked/, file:/tmp/pulsar-nar/pulsar-io-mongo-2.4.0.nar-unpacked/META-INF/bundled-dependencies/,
-
-  ```
-
-  :::tip
-
-  If a `class cannot be found` exception is thrown, check whether the nar file has been decompressed into the folder `file:/tmp/pulsar-nar/pulsar-io-mongo-2.4.0.nar-unpacked/META-INF/bundled-dependencies/` or not.
-
-  :::
-
-* This piece of log information illustrates the basic information about the Mongo sink connector, such as tenant, namespace, name, parallelism, and resources, which can be used to **check whether the Mongo sink connector is configured correctly or not**.
- - ```bash - - 08:21:55.390 [main] INFO org.apache.pulsar.functions.runtime.ThreadRuntime - ThreadContainer starting function with instance config InstanceConfig(instanceId=0, functionId=853d60a1-0c48-44d5-9a5c-6917386476b2, functionVersion=c2ce1458-b69e-4175-88c0-a0a856a2be8c, functionDetails=tenant: "public" - namespace: "default" - name: "pulsar-mongo-sink" - className: "org.apache.pulsar.functions.api.utils.IdentityFunction" - autoAck: true - parallelism: 1 - source { - typeClassName: "[B" - inputSpecs { - key: "test-mongo" - value { - } - } - cleanupSubscription: true - } - sink { - className: "org.apache.pulsar.io.mongodb.MongoSink" - configs: "{\"mongoUri\":\"mongodb://pulsar-mongo:27017\",\"database\":\"pulsar\",\"collection\":\"messages\",\"batchSize\":2,\"batchTimeMs\":500}" - typeClassName: "[B" - } - resources { - cpu: 1.0 - ram: 1073741824 - disk: 10737418240 - } - componentType: SINK - , maxBufferedTuples=1024, functionAuthenticationSpec=null, port=38459, clusterName=local) - - ``` - -* This piece of log information demonstrates the status of the connections to Mongo and configuration information. - - ```bash - - 08:21:56.231 [cluster-ClusterId{value='5d6396a3c9e77c0569ff00eb', description='null'}-pulsar-mongo:27017] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:1, serverValue:8}] to pulsar-mongo:27017 - 08:21:56.326 [cluster-ClusterId{value='5d6396a3c9e77c0569ff00eb', description='null'}-pulsar-mongo:27017] INFO org.mongodb.driver.cluster - Monitor thread successfully connected to server with description ServerDescription{address=pulsar-mongo:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[4, 2, 0]}, minWireVersion=0, maxWireVersion=8, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=89058800} - - ``` - -* This piece of log information explains the configuration of consumers and clients, including the topic name, subscription name, subscription type, and so on. 
- - ```bash - - 08:21:56.719 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl - Starting Pulsar consumer status recorder with config: { - "topicNames" : [ "test-mongo" ], - "topicsPattern" : null, - "subscriptionName" : "public/default/pulsar-mongo-sink", - "subscriptionType" : "Shared", - "receiverQueueSize" : 1000, - "acknowledgementsGroupTimeMicros" : 100000, - "negativeAckRedeliveryDelayMicros" : 60000000, - "maxTotalReceiverQueueSizeAcrossPartitions" : 50000, - "consumerName" : null, - "ackTimeoutMillis" : 0, - "tickDurationMillis" : 1000, - "priorityLevel" : 0, - "cryptoFailureAction" : "CONSUME", - "properties" : { - "application" : "pulsar-sink", - "id" : "public/default/pulsar-mongo-sink", - "instance_id" : "0" - }, - "readCompacted" : false, - "subscriptionInitialPosition" : "Latest", - "patternAutoDiscoveryPeriod" : 1, - "regexSubscriptionMode" : "PersistentOnly", - "deadLetterPolicy" : null, - "autoUpdatePartitions" : true, - "replicateSubscriptionState" : false, - "resetIncludeHead" : false - } - 08:21:56.726 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl - Pulsar client config: { - "serviceUrl" : "pulsar://localhost:6650", - "authPluginClassName" : null, - "authParams" : null, - "operationTimeoutMs" : 30000, - "statsIntervalSeconds" : 60, - "numIoThreads" : 1, - "numListenerThreads" : 1, - "connectionsPerBroker" : 1, - "useTcpNoDelay" : true, - "useTls" : false, - "tlsTrustCertsFilePath" : null, - "tlsAllowInsecureConnection" : false, - "tlsHostnameVerificationEnable" : false, - "concurrentLookupRequest" : 5000, - "maxLookupRequest" : 50000, - "maxNumberOfRejectedRequestPerConnection" : 50, - "keepAliveIntervalSeconds" : 30, - "connectionTimeoutMs" : 10000, - "requestTimeoutMs" : 60000, - "defaultBackoffIntervalNanos" : 100000000, - "maxBackoffIntervalNanos" : 30000000000 - } - - ``` - -## Debug in cluster mode -You can use the following methods to debug a connector in cluster mode: -* [Use connector log](#use-connector-log) -* [Use admin CLI](#use-admin-cli) -### Use connector log -In cluster mode, multiple connectors can run on a worker. To find the log path of a specified connector, use the `workerId` to locate the connector log. -### Use admin CLI -Pulsar admin CLI helps you debug Pulsar connectors with the following subcommands: -* [`get`](#get) - -* [`status`](#status) -* [`topics stats`](#topics-stats) - -**Create a Mongo sink** - -```bash - -./bin/pulsar-admin sinks create \ ---archive pulsar-io-mongo-2.4.0.nar \ ---tenant public \ ---namespace default \ ---inputs test-mongo \ ---name pulsar-mongo-sink \ ---sink-config-file mongo-sink-config.yaml \ ---parallelism 1 - -``` - -### `get` -Use the `get` command to get the basic information about the Mongo sink connector, such as tenant, namespace, name, parallelism, and so on. 
-
-```bash
-
-./bin/pulsar-admin sinks get --tenant public --namespace default --name pulsar-mongo-sink
-{
-  "tenant": "public",
-  "namespace": "default",
-  "name": "pulsar-mongo-sink",
-  "className": "org.apache.pulsar.io.mongodb.MongoSink",
-  "inputSpecs": {
-    "test-mongo": {
-      "isRegexPattern": false
-    }
-  },
-  "configs": {
-    "mongoUri": "mongodb://pulsar-mongo:27017",
-    "database": "pulsar",
-    "collection": "messages",
-    "batchSize": 2.0,
-    "batchTimeMs": 500.0
-  },
-  "parallelism": 1,
-  "processingGuarantees": "ATLEAST_ONCE",
-  "retainOrdering": false,
-  "autoAck": true
-}
-
-```
-
-:::tip
-
-For more information about the `get` command, see [`get`](reference-connector-admin.md/#get-1).
-
-:::
-
-### `status`
-Use the `status` command to get the current status of the Mongo sink connector, such as the number of instances, the number of running instances, `instanceId`, `workerId`, and so on.
-
-```bash
-
-./bin/pulsar-admin sinks status \
---tenant public \
---namespace default \
---name pulsar-mongo-sink
-{
-"numInstances" : 1,
-"numRunning" : 1,
-"instances" : [ {
-  "instanceId" : 0,
-  "status" : {
-    "running" : true,
-    "error" : "",
-    "numRestarts" : 0,
-    "numReadFromPulsar" : 0,
-    "numSystemExceptions" : 0,
-    "latestSystemExceptions" : [ ],
-    "numSinkExceptions" : 0,
-    "latestSinkExceptions" : [ ],
-    "numWrittenToSink" : 0,
-    "lastReceivedTime" : 0,
-    "workerId" : "c-standalone-fw-5d202832fd18-8080"
-  }
-} ]
-}
-
-```
-
-:::tip
-
-For more information about the `status` command, see [`status`](reference-connector-admin.md/#status-1).
-If there are multiple connectors running on a worker, `workerId` can locate the worker on which the specified connector is running.
-
-:::
-
-### `topics stats`
-Use the `topics stats` command to get the stats for a topic and its connected producers and consumers, such as whether the topic has received messages or not, whether there is a backlog of messages or not, the available permits, and other key information. All rates are computed over a 1-minute window and are relative to the last completed 1-minute period.
-
-```bash
-
-./bin/pulsar-admin topics stats test-mongo
-{
-  "msgRateIn" : 0.0,
-  "msgThroughputIn" : 0.0,
-  "msgRateOut" : 0.0,
-  "msgThroughputOut" : 0.0,
-  "averageMsgSize" : 0.0,
-  "storageSize" : 1,
-  "publishers" : [ ],
-  "subscriptions" : {
-    "public/default/pulsar-mongo-sink" : {
-      "msgRateOut" : 0.0,
-      "msgThroughputOut" : 0.0,
-      "msgRateRedeliver" : 0.0,
-      "msgBacklog" : 0,
-      "blockedSubscriptionOnUnackedMsgs" : false,
-      "msgDelayed" : 0,
-      "unackedMessages" : 0,
-      "type" : "Shared",
-      "msgRateExpired" : 0.0,
-      "consumers" : [ {
-        "msgRateOut" : 0.0,
-        "msgThroughputOut" : 0.0,
-        "msgRateRedeliver" : 0.0,
-        "consumerName" : "dffdd",
-        "availablePermits" : 999,
-        "unackedMessages" : 0,
-        "blockedConsumerOnUnackedMsgs" : false,
-        "metadata" : {
-          "instance_id" : "0",
-          "application" : "pulsar-sink",
-          "id" : "public/default/pulsar-mongo-sink"
-        },
-        "connectedSince" : "2019-08-26T08:48:07.582Z",
-        "clientVersion" : "2.4.0",
-        "address" : "/172.17.0.3:57790"
-      } ],
-      "isReplicated" : false
-    }
-  },
-  "replication" : { },
-  "deduplicationStatus" : "Disabled"
-}
-
-```
-
-:::tip
-
-For more information about the `topics stats` command, see [`topic stats`](http://pulsar.apache.org/docs/en/pulsar-admin/#stats-1).
-
-:::
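-
-If you prefer to script these checks, the same information is available through the Pulsar Java admin client. The following is a minimal sketch; the names mirror the Mongo sink in this guide, and error handling is omitted.
-
-```java
-
-import org.apache.pulsar.client.admin.PulsarAdmin;
-import org.apache.pulsar.common.policies.data.SinkStatus;
-import org.apache.pulsar.common.policies.data.TopicStats;
-
-public class ConnectorHealthCheck {
-    public static void main(String[] args) throws Exception {
-        PulsarAdmin admin = PulsarAdmin.builder()
-                .serviceHttpUrl("http://localhost:8080")
-                .build();
-
-        // Equivalent of `pulsar-admin sinks status`.
-        SinkStatus status = admin.sinks().getSinkStatus("public", "default", "pulsar-mongo-sink");
-        System.out.println("running instances: " + status.getNumRunning());
-
-        // Equivalent of `pulsar-admin topics stats`.
-        TopicStats stats = admin.topics().getStats("test-mongo");
-        System.out.println("msgRateIn: " + stats.getMsgRateIn());
-
-        admin.close();
-    }
-}
-
-```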
-
-## Checklist
-This checklist indicates the major areas to check when you debug connectors. It is a reminder of what to look for to ensure a thorough review and an evaluation tool to get the status of connectors.
-* Does Pulsar start successfully?
-
-* Does the external service run normally?
-
-* Is the nar package complete?
-
-* Is the connector configuration file correct?
-
-* In localrun mode, run a connector and check the printed information (connector log) on the console.
-
-* In cluster mode:
-
-   * Use the `get` command to get the basic information.
-
-   * Use the `status` command to get the current status.
-   * Use the `topics stats` command to get the stats for a specified topic and its connected producers and consumers.
-
-   * Check the connector log.
-* Enter the external system and verify the result.
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/io-develop.md b/site2/website/versioned_docs/version-2.9.0-deprecated/io-develop.md
deleted file mode 100644
index d6f4f8261ac820..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/io-develop.md
+++ /dev/null
@@ -1,421 +0,0 @@
----
-id: io-develop
-title: How to develop Pulsar connectors
-sidebar_label: "Develop"
-original_id: io-develop
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-This guide describes how to develop Pulsar connectors to move data
-between Pulsar and other systems.
-
-Pulsar connectors are special [Pulsar Functions](functions-overview.md), so creating
-a Pulsar connector is similar to creating a Pulsar function.
-
-Pulsar connectors come in two types:
-
-| Type | Description | Example
-|---|---|---
-{@inject: github:Source:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java}|Import data from another system to Pulsar.|[RabbitMQ source connector](io-rabbitmq.md) imports the messages of a RabbitMQ queue to a Pulsar topic.
-{@inject: github:Sink:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java}|Export data from Pulsar to another system.|[Kinesis sink connector](io-kinesis.md) exports the messages of a Pulsar topic to a Kinesis stream.
-
-## Develop
-
-You can develop Pulsar source connectors and sink connectors.
-
-### Source
-
-To develop a source connector, you need to implement the {@inject: github:Source:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java}
-interface, which means implementing the {@inject: github:open:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} method and the {@inject: github:read:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} method.
-
-1. Implement the {@inject: github:open:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} method.
-
-   ```java
-
-   /**
-   * Open connector with configuration
-   *
-   * @param config initialization config
-   * @param sourceContext
-   * @throws Exception IO type exceptions when opening a connector
-   */
-   void open(final Map<String, Object> config, SourceContext sourceContext) throws Exception;
-
-   ```
-
-   This method is called when the source connector is initialized.
-
-   In this method, you can retrieve all connector-specific settings through the passed-in `config` parameter and initialize all necessary resources.
-
-   For example, a Kafka connector can create a Kafka client in this `open` method.
-
-   Besides, the Pulsar runtime also provides a `SourceContext` for the
-   connector to access runtime resources for tasks like collecting metrics. The implementation can save the `SourceContext` for future use.
-
-2. Implement the {@inject: github:read:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} method.
-
-   ```java
-
-   /**
-   * Reads the next message from source.
-   * If source does not have any new messages, this call should block.
-   * @return next message from source. The return result should never be null
-   * @throws Exception
-   */
-   Record<T> read() throws Exception;
-
-   ```
-
-   If there is nothing to return, the implementation should block rather than return `null`.
-
-   The returned {@inject: github:Record:/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/Record.java} should encapsulate the following information, which is needed by the Pulsar IO runtime.
-
-   * {@inject: github:Record:/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/Record.java} should provide the following variables:
-
-     |Variable|Required|Description
-     |---|---|---
-     `TopicName`|No|The Pulsar topic from which the record originates.
-     `Key`|No| Messages can optionally be tagged with keys.

    For more information, see [Routing modes](concepts-messaging.md#routing-modes).|
-     `Value`|Yes|Actual data of the record.
-     `EventTime`|No|Event time of the record from the source.
-     `PartitionId`|No| If the record originates from a partitioned source, it returns its `PartitionId`.

    `PartitionId` is used as a part of the unique identifier by the Pulsar IO runtime to deduplicate messages and achieve the exactly-once processing guarantee.
-     `RecordSequence`|No|If the record originates from a sequential source, it returns its `RecordSequence`.

    `RecordSequence` is used as a part of the unique identifier by the Pulsar IO runtime to deduplicate messages and achieve the exactly-once processing guarantee.
-     `Properties` |No| If the record carries user-defined properties, it returns those properties.
-     `DestinationTopic`|No|The topic to which the message should be written.
-     `Message`|No|A class which carries data sent by users.

    For more information, see [Message.java](https://github.com/apache/pulsar/blob/master/pulsar-client-api/src/main/java/org/apache/pulsar/client/api/Message.java).|
-
-   * {@inject: github:Record:/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/Record.java} should provide the following methods:
-
-     Method|Description
-     |---|---
-     `ack` |Acknowledge that the record is fully processed.
-     `fail`|Indicate that the record fails to be processed.
-
-## Handle schema information
-
-Pulsar IO automatically handles the schema and provides a strongly typed API based on Java generics.
-If you know the schema type that you are producing, you can declare the Java class relative to that type in your source declaration.
-
-```
-
-public class MySource implements Source<String> {
-    public Record<String> read() {}
-}
-
-```
-
-If you want to implement a source that works with any schema, you can go with `byte[]` (or `ByteBuffer`) and use Schema.AUTO_PRODUCE_BYTES().
-
-```
-
-public class MySource implements Source<byte[]> {
-    public Record<byte[]> read() {
-
-        Schema wantedSchema = ....
-        Record<byte[]> myRecord = new MyRecordImplementation();
-        ....
-    }
-    class MyRecordImplementation implements Record<byte[]> {
-        public byte[] getValue() {
-            return ....encoded byte[]...that represents the value
-        }
-        public Schema<byte[]> getSchema() {
-            return Schema.AUTO_PRODUCE_BYTES(wantedSchema);
-        }
-    }
-}
-
-```
-
-To handle the `KeyValue` type properly, follow these guidelines for your record implementation:
-- It must implement the {@inject: github:KVRecord:/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/KVRecord.java} interface and implement `getKeySchema`, `getValueSchema`, and `getKeyValueEncodingType`
-- It must return a `KeyValue` object as `Record.getValue()`
-- It may return null in `Record.getSchema()`
-
-When the Pulsar IO runtime encounters a `KVRecord`, it automatically applies the following changes:
-- Set the `KeyValueSchema` properly
-- Encode the Message Key and the Message Value according to the `KeyValueEncodingType` (SEPARATED or INLINE)
-
-:::tip
-
-For more information about **how to create a source connector**, see {@inject: github:KafkaSource:/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSource.java}.
-
-:::
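-
-To see how `open` and `read` fit together, the following is a compact, illustrative source that emits an incrementing counter. It is not taken from the Pulsar codebase, and the `start` config key is an assumption made for this example.
-
-```java
-
-import java.util.Map;
-import org.apache.pulsar.functions.api.Record;
-import org.apache.pulsar.io.core.Source;
-import org.apache.pulsar.io.core.SourceContext;
-
-public class CounterSource implements Source<Long> {
-    private long counter;
-
-    @Override
-    public void open(Map<String, Object> config, SourceContext sourceContext) {
-        // Retrieve connector-specific settings from the passed-in config.
-        counter = Long.parseLong(String.valueOf(config.getOrDefault("start", "0")));
-    }
-
-    @Override
-    public Record<Long> read() throws Exception {
-        Thread.sleep(50); // block instead of returning null when there is no new data
-        long value = counter++;
-        return new Record<Long>() {
-            @Override
-            public Long getValue() {
-                return value;
-            }
-        };
-    }
-
-    @Override
-    public void close() {
-        // Release any resources acquired in open().
-    }
-}
-
-```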
-
-### Sink
-
-Developing a sink connector **is similar to** developing a source connector, that is, you need to implement the {@inject: github:Sink:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} interface, which means implementing the {@inject: github:open:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} method and the {@inject: github:write:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} method.
-
-1. Implement the {@inject: github:open:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} method.
-
-   ```java
-
-   /**
-   * Open connector with configuration
-   *
-   * @param config initialization config
-   * @param sinkContext
-   * @throws Exception IO type exceptions when opening a connector
-   */
-   void open(final Map<String, Object> config, SinkContext sinkContext) throws Exception;
-
-   ```
-
-2. Implement the {@inject: github:write:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} method.
-
-   ```java
-
-   /**
-   * Write a message to Sink
-   * @param record record to write to sink
-   * @throws Exception
-   */
-   void write(Record<T> record) throws Exception;
-
-   ```
-
-   During the implementation, you can decide how to write the `Value` and
-   the `Key` to the actual sink, and leverage all the provided information such as
-   `PartitionId` and `RecordSequence` to achieve different processing guarantees.
-
-   You also need to ack records (if messages are sent successfully) or fail records (if messages fail to send).
-
-## Handling Schema information
-
-Pulsar IO automatically handles the schema and provides a strongly typed API based on Java generics.
-If you know the schema type that you are consuming from, you can declare the Java class relative to that type in your sink declaration.
-
-```
-
-public class MySink implements Sink<String> {
-    public void write(Record<String> record) {}
-}
-
-```
-
-If you want to implement a sink that works with any schema, you can go with the special GenericObject interface.
-
-```
-
-public class MySink implements Sink<GenericObject> {
-    public void write(Record<GenericObject> record) {
-        Schema<GenericObject> schema = record.getSchema();
-        GenericObject genericObject = record.getValue();
-        if (genericObject != null) {
-            SchemaType type = genericObject.getSchemaType();
-            Object nativeObject = genericObject.getNativeObject();
-            ...
-        }
-        ....
-    }
-}
-
-```
-
-In the case of AVRO, JSON, and Protobuf records (schemaType=AVRO,JSON,PROTOBUF_NATIVE), you can cast the
-`genericObject` variable to `GenericRecord` and use the `getFields()` and `getField()` APIs.
-You can access the native AVRO record using `genericObject.getNativeObject()`.
-
-In the case of the KeyValue type, you can access both the schema for the key and the schema for the value using this code.
-
-```
-
-public class MySink implements Sink<GenericObject> {
-    public void write(Record<GenericObject> record) {
-        Schema<GenericObject> schema = record.getSchema();
-        GenericObject genericObject = record.getValue();
-        SchemaType type = genericObject.getSchemaType();
-        Object nativeObject = genericObject.getNativeObject();
-        if (type == SchemaType.KEY_VALUE) {
-            KeyValue keyValue = (KeyValue) nativeObject;
-            Object key = keyValue.getKey();
-            Object value = keyValue.getValue();
-
-            KeyValueSchema keyValueSchema = (KeyValueSchema) schema;
-            Schema keySchema = keyValueSchema.getKeySchema();
-            Schema valueSchema = keyValueSchema.getValueSchema();
-        }
-        ....
-    }
-}
-
-```
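-
-Putting `open` and `write` together, the following is a compact, illustrative sink that only logs the records it receives. It is not taken from the Pulsar codebase; a real sink would replace the `System.out` call with a client for the external system.
-
-```java
-
-import java.util.Map;
-import org.apache.pulsar.functions.api.Record;
-import org.apache.pulsar.io.core.Sink;
-import org.apache.pulsar.io.core.SinkContext;
-
-public class LoggingSink implements Sink<String> {
-
-    @Override
-    public void open(Map<String, Object> config, SinkContext sinkContext) {
-        // Initialize a client for the external system here.
-    }
-
-    @Override
-    public void write(Record<String> record) {
-        try {
-            System.out.println("value=" + record.getValue());
-            record.ack();  // the record was fully processed
-        } catch (Exception e) {
-            record.fail(); // let the runtime redeliver the record
-        }
-    }
-
-    @Override
-    public void close() {
-        // Release any resources acquired in open().
-    }
-}
-
-```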
-
-## Test
-
-Testing connectors can be challenging because Pulsar IO connectors interact with two systems
-that may be difficult to mock: Pulsar and the system to which the connector is connecting.
-
-It is recommended to write dedicated tests that verify the connector functionality, as described below,
-while mocking the external service.
-
-### Unit test
-
-You can create unit tests for your connector.
-
-### Integration test
-
-Once you have written sufficient unit tests, you can add
-separate integration tests to verify end-to-end functionality.
-
-Pulsar uses [testcontainers](https://www.testcontainers.org/) **for all integration tests**.
-
-:::tip
-
-For more information about **how to create integration tests for Pulsar connectors**, see {@inject: github:IntegrationTests:/tests/integration/src/test/java/org/apache/pulsar/tests/integration/io}.
-
-:::
-
-## Package
-
-Once you've developed and tested your connector, you need to package it so that it can be submitted
-to a [Pulsar Functions](functions-overview.md) cluster.
-
-There are two methods to work with Pulsar Functions' runtime: [NAR](#nar) and [uber JAR](#uber-jar).
-
-:::note
-
-If you plan to package and distribute your connector for others to use, you are obligated to
-license and copyright your own code properly. Remember to add the license and copyright to
-all libraries your code uses and to your distribution.
-
-If you use the [NAR](#nar) method, the NAR plugin
-automatically creates a `DEPENDENCIES` file in the generated NAR package, including the proper
-licensing and copyrights of all libraries of your connector.
-
-:::
-
-### NAR
-
-**NAR** stands for NiFi Archive, which is a custom packaging mechanism used by Apache NiFi to provide
-a bit of Java ClassLoader isolation.
-
-:::tip
-
-For more information about **how NAR works**, see [here](https://medium.com/hashmapinc/nifi-nar-files-explained-14113f7796fd).
-
-:::
-
-Pulsar uses the same mechanism for packaging **all** [built-in connectors](io-connectors.md).
-
-The easiest approach to packaging a Pulsar connector is to create a NAR package using [nifi-nar-maven-plugin](https://mvnrepository.com/artifact/org.apache.nifi/nifi-nar-maven-plugin).
-
-Include the [nifi-nar-maven-plugin](https://mvnrepository.com/artifact/org.apache.nifi/nifi-nar-maven-plugin) in the maven project for your connector as below.
-
-```xml
-
-<plugins>
-  <plugin>
-    <groupId>org.apache.nifi</groupId>
-    <artifactId>nifi-nar-maven-plugin</artifactId>
-    <version>1.2.0</version>
-  </plugin>
-</plugins>
-
-```
-
-You must also create a `resources/META-INF/services/pulsar-io.yaml` file with the following contents:
-
-```yaml
-
-name: connector name
-description: connector description
-sourceClass: fully qualified class name (only if source connector)
-sinkClass: fully qualified class name (only if sink connector)
-
-```
-
-For Gradle users, there is a [Gradle Nar plugin available on the Gradle Plugin Portal](https://plugins.gradle.org/plugin/io.github.lhotari.gradle-nar-plugin).
-
-:::tip
-
-For more information about **how to use NAR for Pulsar connectors**, see {@inject: github:TwitterFirehose:/pulsar-io/twitter/pom.xml}.
-
-:::
-
-### Uber JAR
-
-An alternative approach is to create an **uber JAR** that contains all of the connector's JAR files
-and other resource files. No internal directory structure is necessary.
-
-You can use the [maven-shade-plugin](https://maven.apache.org/plugins/maven-shade-plugin/examples/includes-excludes.html) to create an uber JAR as below:
-
-```xml
-
-<plugin>
-  <groupId>org.apache.maven.plugins</groupId>
-  <artifactId>maven-shade-plugin</artifactId>
-  <version>3.1.1</version>
-  <executions>
-    <execution>
-      <phase>package</phase>
-      <goals>
-        <goal>shade</goal>
-      </goals>
-      <configuration>
-        <filters>
-          <filter>
-            <artifact>*:*</artifact>
-          </filter>
-        </filters>
-      </configuration>
-    </execution>
-  </executions>
-</plugin>
-
-```
-
-## Monitor
-
-Pulsar connectors enable you to move data in and out of Pulsar easily. It is important to ensure that the running connectors are healthy at any time. You can monitor Pulsar connectors that have been deployed with the following methods:
-
-- Check the metrics provided by Pulsar.
-
-  Pulsar connectors expose the metrics that can be collected and used for monitoring the health of **Java** connectors. You can check the metrics by following the [monitoring](deploy-monitoring.md) guide.
-
-- Set and check your customized metrics.
-
-  In addition to the metrics provided by Pulsar, Pulsar allows you to customize metrics for **Java** connectors. Function workers collect user-defined metrics to Prometheus automatically and you can check them in Grafana.
-
-Here is an example of how to customize metrics for a Java connector.
-
-````mdx-code-block
-<Tabs defaultValue="Java" values={[{"label":"Java","value":"Java"}]}>
-<TabItem value="Java">
-
-```
-
-public class TestMetricSink implements Sink<String> {
-
-    @Override
-    public void open(Map<String, Object> config, SinkContext sinkContext) throws Exception {
-        sinkContext.recordMetric("foo", 1);
-    }
-
-    @Override
-    public void write(Record<String> record) throws Exception {
-
-    }
-
-    @Override
-    public void close() throws Exception {
-
-    }
-}
-
-```
-
-</TabItem>
-</Tabs>
-````
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/io-dynamodb-source.md b/site2/website/versioned_docs/version-2.9.0-deprecated/io-dynamodb-source.md
deleted file mode 100644
index ce585786eb0428..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/io-dynamodb-source.md
+++ /dev/null
@@ -1,80 +0,0 @@
----
-id: io-dynamodb-source
-title: AWS DynamoDB source connector
-sidebar_label: "AWS DynamoDB source connector"
-original_id: io-dynamodb-source
----
-
-The DynamoDB source connector pulls data from DynamoDB table streams and persists data into Pulsar.
-
-This connector uses the [DynamoDB Streams Kinesis Adapter](https://github.com/awslabs/dynamodb-streams-kinesis-adapter),
-which uses the [Kinesis Client Library](https://github.com/awslabs/amazon-kinesis-client) (KCL) to do the actual
-consuming of messages. The KCL uses DynamoDB to track state for consumers and requires CloudWatch access to log metrics.
-
-
-## Configuration
-
-The configuration of the DynamoDB source connector has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-`initialPositionInStream`|InitialPositionInStream|false|LATEST|The position where the connector starts from.

    Below are the available options:

  1515. `AT_TIMESTAMP`: start from the record at or after the specified timestamp.

  1516. `LATEST`: start after the most recent data record.

  1517. `TRIM_HORIZON`: start from the oldest available data record.
  1518. -`startAtTime`|Date|false|" " (empty string)|If set to `AT_TIMESTAMP`, it specifies the point in time to start consumption. -`applicationName`|String|false|Pulsar IO connector|The name of the KCL application. Must be unique, as it is used to define the table name for the dynamo table used for state tracking.

    By default, the application name is included in the user agent string used to make AWS requests. This can assist with troubleshooting, for example, distinguish requests made by separate connector instances. -`checkpointInterval`|long|false|60000|The frequency of the KCL checkpoint in milliseconds. -`backoffTime`|long|false|3000|The amount of time to delay between requests when the connector encounters a throttling exception from AWS Kinesis in milliseconds. -`numRetries`|int|false|3|The number of re-attempts when the connector encounters an exception while trying to set a checkpoint. -`receiveQueueSize`|int|false|1000|The maximum number of AWS records that can be buffered inside the connector.

    Once the `receiveQueueSize` is reached, the connector does not consume any messages from Kinesis until some messages in the queue are successfully consumed. -`dynamoEndpoint`|String|false|" " (empty string)|The Dynamo end-point URL, which can be found at [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). -`cloudwatchEndpoint`|String|false|" " (empty string)|The Cloudwatch end-point URL, which can be found at [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). -`awsEndpoint`|String|false|" " (empty string)|The DynamoDB Streams end-point URL, which can be found at [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). -`awsRegion`|String|false|" " (empty string)|The AWS region.

    **Example**
    us-west-1, us-west-2 -`awsDynamodbStreamArn`|String|true|" " (empty string)|The DynamoDB stream arn. -`awsCredentialPluginName`|String|false|" " (empty string)|The fully-qualified class name of implementation of {@inject: github:AwsCredentialProviderPlugin:/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java}.

    `awsCredentialProviderPlugin` has the following built-in plugs:

  1519. `org.apache.pulsar.io.kinesis.AwsDefaultProviderChainPlugin`:
    this plugin uses the default AWS provider chain.
    For more information, see [using the default credential provider chain](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html#credentials-default).

  1520. `org.apache.pulsar.io.kinesis.STSAssumeRoleProviderPlugin`:
    this plugin takes a configuration via the `awsCredentialPluginParam` that describes a role to assume when running the KCL.
    **JSON configuration example**
    `{"roleArn": "arn...", "roleSessionName": "name"}`

    `awsCredentialPluginName` is a factory class which creates an AWSCredentialsProvider that is used by Kinesis sink.

    If `awsCredentialPluginName` set to empty, the Kinesis sink creates a default AWSCredentialsProvider which accepts json-map of credentials in `awsCredentialPluginParam`.
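-
-For illustration, a sketch of the credential-plugin settings described in the table above, configured to assume an IAM role through the STS plugin instead of using static credentials; the role ARN and session name are placeholders:
-
-```json
-
-{
-   "awsCredentialPluginName": "org.apache.pulsar.io.kinesis.STSAssumeRoleProviderPlugin",
-   "awsCredentialPluginParam": "{\"roleArn\": \"arn:aws:iam::123456789012:role/example-role\", \"roleSessionName\": \"example-session\"}"
-}
-
-```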
-
-### Example
-
-Before using the DynamoDB source connector, you need to create a configuration file through one of the following methods.
-
-* JSON
-
-  ```json
-  
-  {
-     "awsEndpoint": "https://some.endpoint.aws",
-     "awsRegion": "us-east-1",
-     "awsDynamodbStreamArn": "arn:aws:dynamodb:us-west-2:111122223333:table/TestTable/stream/2015-05-11T21:21:33.291",
-     "awsCredentialPluginParam": "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}",
-     "applicationName": "My test application",
-     "checkpointInterval": "30000",
-     "backoffTime": "4000",
-     "numRetries": "3",
-     "receiveQueueSize": 2000,
-     "initialPositionInStream": "TRIM_HORIZON",
-     "startAtTime": "2019-03-05T19:28:58.000Z"
-  }
-  
-  ```
-
-* YAML
-
-  ```yaml
-  
-  configs:
-     awsEndpoint: "https://some.endpoint.aws"
-     awsRegion: "us-east-1"
-     awsDynamodbStreamArn: "arn:aws:dynamodb:us-west-2:111122223333:table/TestTable/stream/2015-05-11T21:21:33.291"
-     awsCredentialPluginParam: "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}"
-     applicationName: "My test application"
-     checkpointInterval: 30000
-     backoffTime: 4000
-     numRetries: 3
-     receiveQueueSize: 2000
-     initialPositionInStream: "TRIM_HORIZON"
-     startAtTime: "2019-03-05T19:28:58.000Z"
-  
-  ```
-
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/io-elasticsearch-sink.md b/site2/website/versioned_docs/version-2.9.0-deprecated/io-elasticsearch-sink.md
deleted file mode 100644
index b5757b3094a9ac..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/io-elasticsearch-sink.md
+++ /dev/null
@@ -1,242 +0,0 @@
----
-id: io-elasticsearch-sink
-title: Elasticsearch sink connector
-sidebar_label: "Elasticsearch sink connector"
-original_id: io-elasticsearch-sink
----
-
-The Elasticsearch sink connector pulls messages from Pulsar topics and persists the messages to indexes.
-
-
-## Feature
-
-### Handle data
-
-Since Pulsar 2.9.0, the Elasticsearch sink connector has the following ways of working. You can choose one of them.
-
-Name | Description
----|---|

    This is the **default** behavior.

    Raw processing was already available **in Pulsar 2.8.x**. -Schema aware | The sink uses the schema and handles AVRO, JSON, and KeyValue schema types while mapping the content to the Elasticsearch document.

    If you set `schemaEnable` to `true`, the sink interprets the contents of the message and you can define a **primary key** that in turn used as the special `_id` field on Elasticsearch. -

    This allows you to perform `UPDATE`, `INSERT`, and `DELETE` operations -to Elasticsearch driven by the logical primary key of the message.

    This -is very useful in a typical Change Data Capture scenario in which you follow the -changes on your database, write them to Pulsar (using the Debezium adapter for -instance), and then you write to Elasticsearch.

    You configure the -mapping of the primary key using the `primaryFields` configuration -entry.

-
-### Map multiple indexes
-
-Since Pulsar 2.9.0, the `indexName` property is no longer required. If you omit it, the sink writes to an index named after the Pulsar topic name.
-
-### Enable bulk writes
-
-Since Pulsar 2.9.0, you can use bulk writes by setting the `bulkEnabled` property to `true`.
-
-### Enable secure connections via TLS
-
-Since Pulsar 2.9.0, you can enable secure connections with TLS.
-
-## Configuration
-
-The configuration of the Elasticsearch sink connector has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `elasticSearchUrl` | String| true |" " (empty string)| The URL of the Elasticsearch cluster to which the connector connects. |
-| `indexName` | String| true |" " (empty string)| The index name to which the connector writes messages. |
-| `schemaEnable` | Boolean | false | false | Turn on the Schema Aware mode. |
-| `createIndexIfNeeded` | Boolean | false | false | Manage index if missing. |
-| `maxRetries` | Integer | false | 1 | The maximum number of retries for Elasticsearch requests. Use -1 to disable it. |
-| `retryBackoffInMs` | Integer | false | 100 | The base time to wait when retrying an Elasticsearch request (in milliseconds). |
-| `maxRetryTimeInSec` | Integer| false | 86400 | The maximum retry time interval in seconds for retrying an Elasticsearch request. |
-| `bulkEnabled` | Boolean | false | false | Enable the Elasticsearch bulk processor to flush write requests based on the number or size of requests, or after a given period. |
-| `bulkActions` | Integer | false | 1000 | The maximum number of actions per Elasticsearch bulk request. Use -1 to disable it. |
-| `bulkSizeInMb` | Integer | false |5 | The maximum size in megabytes of Elasticsearch bulk requests. Use -1 to disable it. |
-| `bulkConcurrentRequests` | Integer | false | 0 | The maximum number of in-flight Elasticsearch bulk requests. The default 0 allows the execution of a single request. A value of 1 means 1 concurrent request is allowed to be executed while accumulating new bulk requests. |
-| `bulkFlushIntervalInMs` | Integer | false | -1 | The maximum period of time to wait for flushing pending writes when bulk writes are enabled. The default is -1, meaning not set. |
-| `compressionEnabled` | Boolean | false |false | Enable Elasticsearch request compression. |
-| `connectTimeoutInMs` | Integer | false |5000 | The Elasticsearch client connection timeout in milliseconds. |
-| `connectionRequestTimeoutInMs` | Integer | false |1000 | The time in milliseconds for getting a connection from the Elasticsearch connection pool. |
-| `connectionIdleTimeoutInMs` | Integer | false |5 | Idle connection timeout to prevent a read timeout. |
-| `keyIgnore` | Boolean | false |true | Whether to ignore the record key to build the Elasticsearch document `_id`. If `primaryFields` is defined, the connector extracts the primary fields from the payload to build the document `_id`. If no `primaryFields` are provided, Elasticsearch auto-generates a random document `_id`. |
-| `primaryFields` | String | false | "id" | The comma-separated ordered list of field names used to build the Elasticsearch document `_id` from the record value. If this list is a singleton, the field is converted as a string. If this list has 2 or more fields, the generated `_id` is a string representation of a JSON array of the field values. |
-| `nullValueAction` | enum (IGNORE,DELETE,FAIL) | false | IGNORE | How to handle records with null values. Possible options are IGNORE, DELETE, or FAIL. The default is IGNORE. |
-| `malformedDocAction` | enum (IGNORE,WARN,FAIL) | false | FAIL | How to handle documents that Elasticsearch rejects due to some malformation. Possible options are IGNORE, WARN, or FAIL. The default is FAIL. |
-| `stripNulls` | Boolean | false |true | If `stripNulls` is false, the Elasticsearch `_source` includes 'null' for empty fields (for example {"foo": null}); otherwise, null fields are stripped. |
-| `socketTimeoutInMs` | Integer | false |60000 | The socket timeout in milliseconds waiting to read the Elasticsearch response. |
-| `typeName` | String | false | "_doc" | The type name to which the connector writes messages.<br /><br />The value should be set explicitly to a valid type name other than "_doc" for Elasticsearch versions before 6.2, and left at the default otherwise. |
-| `indexNumberOfShards` | int| false |1| The number of shards of the index. |
-| `indexNumberOfReplicas` | int| false |1 | The number of replicas of the index. |
-| `username` | String| false |" " (empty string)| The username used by the connector to connect to the Elasticsearch cluster.<br /><br />If `username` is set, then `password` should also be provided. |
-| `password` | String| false | " " (empty string)|The password used by the connector to connect to the Elasticsearch cluster.<br /><br />If `username` is set, then `password` should also be provided. |
-| `ssl` | ElasticSearchSslConfig | false | | Configuration for TLS encrypted communication. |
-
-### Definition of ElasticSearchSslConfig structure:
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `enabled` | Boolean| false | false | Enable SSL/TLS. |
-| `hostnameVerification` | Boolean| false | true | Whether or not to validate node hostnames when using SSL. |
-| `truststorePath` | String| false |" " (empty string)| The path to the truststore file. |
-| `truststorePassword` | String| false |" " (empty string)| Truststore password. |
-| `keystorePath` | String| false |" " (empty string)| The path to the keystore file. |
-| `keystorePassword` | String| false |" " (empty string)| Keystore password. |
-| `cipherSuites` | String| false |" " (empty string)| SSL/TLS cipher suites. |
-| `protocols` | String| false |"TLSv1.2" | Comma-separated list of enabled SSL/TLS protocols. |
-
-## Example
-
-Before using the Elasticsearch sink connector, you need to create a configuration file through one of the following methods.
-
-### Configuration
-
-#### For Elasticsearch After 6.2
-
-* JSON
-
-  ```json
-  
-  {
-     "elasticSearchUrl": "http://localhost:9200",
-     "indexName": "my_index",
-     "username": "scooby",
-     "password": "doobie"
-  }
-  
-  ```
-
-* YAML
-
-  ```yaml
-  
-  configs:
-     elasticSearchUrl: "http://localhost:9200"
-     indexName: "my_index"
-     username: "scooby"
-     password: "doobie"
-  
-  ```
-
-#### For Elasticsearch Before 6.2
-
-* JSON
-
-  ```json
-  
-  {
-     "elasticSearchUrl": "http://localhost:9200",
-     "indexName": "my_index",
-     "typeName": "doc",
-     "username": "scooby",
-     "password": "doobie"
-  }
-  
-  ```
-
-* YAML
-
-  ```yaml
-  
-  configs:
-     elasticSearchUrl: "http://localhost:9200"
-     indexName: "my_index"
-     typeName: "doc"
-     username: "scooby"
-     password: "doobie"
-  
-  ```
-
-### Usage
-
-1. Start a single node Elasticsearch cluster.
-
-   ```bash
-   
-   $ docker run -p 9200:9200 -p 9300:9300 \
-       -e "discovery.type=single-node" \
-       docker.elastic.co/elasticsearch/elasticsearch:7.13.3
-   
-   ```
-
-2. Start a Pulsar service locally in standalone mode.
-
-   ```bash
-   
-   $ bin/pulsar standalone
-   
-   ```
-
-   Make sure the NAR file is available at `connectors/pulsar-io-elastic-search-@pulsar:version@.nar`.
-
-3. Start the Pulsar Elasticsearch connector in local run mode using one of the following methods.
-   * Use the **JSON** configuration as shown previously.
-
-     ```bash
-     
-     $ bin/pulsar-admin sinks localrun \
-         --archive connectors/pulsar-io-elastic-search-@pulsar:version@.nar \
-         --tenant public \
-         --namespace default \
-         --name elasticsearch-test-sink \
-         --sink-config '{"elasticSearchUrl":"http://localhost:9200","indexName": "my_index","username": "scooby","password": "doobie"}' \
-         --inputs elasticsearch_test
-     
-     ```
-
-   * Use the **YAML** configuration file as shown previously.
-
-     ```bash
-     
-     $ bin/pulsar-admin sinks localrun \
-         --archive connectors/pulsar-io-elastic-search-@pulsar:version@.nar \
-         --tenant public \
-         --namespace default \
-         --name elasticsearch-test-sink \
-         --sink-config-file elasticsearch-sink.yml \
-         --inputs elasticsearch_test
-     
-     ```
-
-4. Publish records to the topic.
-
-   ```bash
-   
-   $ bin/pulsar-client produce elasticsearch_test --messages "{\"a\":1}"
-   
-   ```
-
-5. Check documents in Elasticsearch.
-
-   * refresh the index
-
-     ```bash
-     
-     $ curl -s http://localhost:9200/my_index/_refresh
-     
-     ```
-
-   * search documents
-
-     ```bash
-     
-     $ curl -s http://localhost:9200/my_index/_search
-     
-     ```
-
-   You can see that the record published earlier has been successfully written into Elasticsearch.
-
-   ```json
-   
-   {"took":2,"timed_out":false,"_shards":{"total":1,"successful":1,"skipped":0,"failed":0},"hits":{"total":{"value":1,"relation":"eq"},"max_score":1.0,"hits":[{"_index":"my_index","_type":"_doc","_id":"FSxemm8BLjG_iC0EeTYJ","_score":1.0,"_source":{"a":1}}]}}
-   
-   ```
-
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/io-file-source.md b/site2/website/versioned_docs/version-2.9.0-deprecated/io-file-source.md
deleted file mode 100644
index e9d710cce65e83..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/io-file-source.md
+++ /dev/null
@@ -1,160 +0,0 @@
----
-id: io-file-source
-title: File source connector
-sidebar_label: "File source connector"
-original_id: io-file-source
----
-
-The File source connector pulls messages from files in directories and persists the messages to Pulsar topics.
-
-## Configuration
-
-The configuration of the File source connector has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `inputDirectory` | String|true | No default value|The input directory from which to pull files. |
-| `recurse` | Boolean|false | true | Whether to pull files from subdirectories or not.|
-| `keepFile` |Boolean|false | false | If set to true, the file is not deleted after it is processed, which means the file can be picked up continually. |
-| `fileFilter` | String|false| [^\\.].* | The file whose name matches the given regular expression is picked up. |
-| `pathFilter` | String |false | NULL | If `recurse` is set to true, the subdirectory whose path matches the given regular expression is scanned. |

    Any file younger than `minimumFileAge` (according to the last modification date) is ignored. | -| `maximumFileAge` | Long|false |Long.MAX_VALUE | The maximum age that a file can be processed.

    Any file older than `maximumFileAge` (according to last modification date) is ignored. | -| `minimumSize` |Integer| false |1 | The minimum size (in bytes) that a file can be processed. | -| `maximumSize` | Double|false |Double.MAX_VALUE| The maximum size (in bytes) that a file can be processed. | -| `ignoreHiddenFiles` |Boolean| false | true| Whether the hidden files should be ignored or not. | -| `pollingInterval`|Long | false | 10000L | Indicates how long to wait before performing a directory listing. | -| `numWorkers` | Integer | false | 1 | The number of worker threads that process files.

    This allows you to process a larger number of files concurrently.

-
-### Example
-
-Before using the File source connector, you need to create a configuration file through one of the following methods.
-
-* JSON
-
-  ```json
-  
-  {
-     "inputDirectory": "/Users/david",
-     "recurse": true,
-     "keepFile": true,
-     "fileFilter": "[^\\.].*",
-     "pathFilter": "*",
-     "minimumFileAge": 0,
-     "maximumFileAge": 9999999999,
-     "minimumSize": 1,
-     "maximumSize": 5000000,
-     "ignoreHiddenFiles": true,
-     "pollingInterval": 5000,
-     "numWorkers": 1
-  }
-  
-  ```
-
-* YAML
-
-  ```yaml
-  
-  configs:
-     inputDirectory: "/Users/david"
-     recurse: true
-     keepFile: true
-     fileFilter: "[^\\.].*"
-     pathFilter: "*"
-     minimumFileAge: 0
-     maximumFileAge: 9999999999
-     minimumSize: 1
-     maximumSize: 5000000
-     ignoreHiddenFiles: true
-     pollingInterval: 5000
-     numWorkers: 1
-  
-  ```
-
-## Usage
-
-Here is an example of using the File source connector.
-
-1. Pull a Pulsar image.
-
-   ```bash
-   
-   $ docker pull apachepulsar/pulsar:{version}
-   
-   ```
-
-2. Start Pulsar standalone.
-
-   ```bash
-   
-   $ docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-standalone apachepulsar/pulsar:{version} bin/pulsar standalone
-   
-   ```
-
-3. Create a configuration file _file-connector.yaml_.
-
-   ```yaml
-   
-   configs:
-      inputDirectory: "/opt"
-   
-   ```
-
-4. Copy the configuration file _file-connector.yaml_ to the container.
-
-   ```bash
-   
-   $ docker cp connectors/file-connector.yaml pulsar-standalone:/pulsar/
-   
-   ```
-
-5. Download the File source connector.
-
-   ```bash
-   
-   $ curl -O https://mirrors.tuna.tsinghua.edu.cn/apache/pulsar/pulsar-{version}/connectors/pulsar-io-file-{version}.nar
-   
-   ```
-
-6. Start the File source connector.
-
-   ```bash
-   
-   $ docker exec -it pulsar-standalone /bin/bash
-   
-   $ ./bin/pulsar-admin sources localrun \
-       --archive /pulsar/pulsar-io-file-{version}.nar \
-       --name file-test \
-       --destination-topic-name pulsar-file-test \
-       --source-config-file /pulsar/file-connector.yaml
-   
-   ```
-
-7. Start a consumer.
-
-   ```bash
-   
-   ./bin/pulsar-client consume -s file-test -n 0 pulsar-file-test
-   
-   ```
-
-8. Write the message to the file _test.txt_.
-
-   ```bash
-   
-   echo "hello world!" > /opt/test.txt
-   
-   ```
-
-   The following information appears on the consumer terminal window.
-
-   ```bash
-   
-   ----- got message -----
-   hello world!
-   
-   ```
-
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/io-flume-sink.md b/site2/website/versioned_docs/version-2.9.0-deprecated/io-flume-sink.md
deleted file mode 100644
index b2ace53702f8ca..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/io-flume-sink.md
+++ /dev/null
@@ -1,56 +0,0 @@
----
-id: io-flume-sink
-title: Flume sink connector
-sidebar_label: "Flume sink connector"
-original_id: io-flume-sink
----
-
-The Flume sink connector pulls messages from Pulsar topics to logs.
-
-## Configuration
-
-The configuration of the Flume sink connector has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-`name`|String|true|"" (empty string)|The name of the agent.
-`confFile`|String|true|"" (empty string)|The configuration file.
-`noReloadConf`|Boolean|false|false|Whether to reload the configuration file if it is changed.
-`zkConnString`|String|true|"" (empty string)|The ZooKeeper connection.
-`zkBasePath`|String|true|"" (empty string)|The base path in ZooKeeper for agent configuration.
-
-### Example
-
-Before using the Flume sink connector, you need to create a configuration file through one of the following methods.
-
-> For more information about the `sink.conf` in the example below, see [here](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/resources/flume/sink.conf).
-
-* JSON
-
-  ```json
-  
-  {
-     "name": "a1",
-     "confFile": "sink.conf",
-     "noReloadConf": "false",
-     "zkConnString": "",
-     "zkBasePath": ""
-  }
-  
-  ```
-
-* YAML
-
-  ```yaml
-  
-  configs:
-     name: a1
-     confFile: sink.conf
-     noReloadConf: false
-     zkConnString: ""
-     zkBasePath: ""
-  
-  ```
-
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/io-flume-source.md b/site2/website/versioned_docs/version-2.9.0-deprecated/io-flume-source.md
deleted file mode 100644
index b7fd7edad88111..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/io-flume-source.md
+++ /dev/null
@@ -1,56 +0,0 @@
----
-id: io-flume-source
-title: Flume source connector
-sidebar_label: "Flume source connector"
-original_id: io-flume-source
----
-
-The Flume source connector pulls messages from logs to Pulsar topics.
-
-## Configuration
-
-The configuration of the Flume source connector has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-`name`|String|true|"" (empty string)|The name of the agent.
-`confFile`|String|true|"" (empty string)|The configuration file.
-`noReloadConf`|Boolean|false|false|Whether to reload the configuration file if it is changed.
-`zkConnString`|String|true|"" (empty string)|The ZooKeeper connection.
-`zkBasePath`|String|true|"" (empty string)|The base path in ZooKeeper for agent configuration.
-
-### Example
-
-Before using the Flume source connector, you need to create a configuration file through one of the following methods.
-
-> For more information about the `source.conf` in the example below, see [here](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/resources/flume/source.conf).
-
-* JSON
-
-  ```json
-  
-  {
-     "name": "a1",
-     "confFile": "source.conf",
-     "noReloadConf": "false",
-     "zkConnString": "",
-     "zkBasePath": ""
-  }
-  
-  ```
-
-* YAML
-
-  ```yaml
-  
-  configs:
-     name: a1
-     confFile: source.conf
-     noReloadConf: false
-     zkConnString: ""
-     zkBasePath: ""
-  
-  ```
-
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/io-hbase-sink.md b/site2/website/versioned_docs/version-2.9.0-deprecated/io-hbase-sink.md
deleted file mode 100644
index 1737b00fa26805..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/io-hbase-sink.md
+++ /dev/null
@@ -1,67 +0,0 @@
----
-id: io-hbase-sink
-title: HBase sink connector
-sidebar_label: "HBase sink connector"
-original_id: io-hbase-sink
----
-
-The HBase sink connector pulls the messages from Pulsar topics and persists the messages to HBase tables.
-
-## Configuration
-
-The configuration of the HBase sink connector has the following properties.
-
-### Property
-
-| Name | Type|Default | Required | Description |
-|------|---------|----------|-------------|---
-| `hbaseConfigResources` | String|None | false | HBase system configuration `hbase-site.xml` file. |
-| `zookeeperQuorum` | String|None | true | HBase system configuration about the `hbase.zookeeper.quorum` value. |
-| `zookeeperClientPort` | String|2181 | false | HBase system configuration about the `hbase.zookeeper.property.clientPort` value. |
-| `zookeeperZnodeParent` | String|/hbase | false | HBase system configuration about the `zookeeper.znode.parent` value. |
-| `tableName` | String | None | true | HBase table, the value is `namespace:tableName`. |
-| `rowKeyName` | String|None | true | HBase table rowkey name. |
-| `familyName` | String|None | true | HBase table column family name. |
-| `qualifierNames` |String| None | true | HBase table column qualifier names. |
-| `batchTimeMs` | Long|1000L| false | HBase table operation timeout in milliseconds. |
-| `batchSize` | int|200| false | Batch size of updates made to the HBase table. |
-
-### Example
-
-Before using the HBase sink connector, you need to create a configuration file through one of the following methods.
-
-* JSON
-
-  ```json
-  
-  {
-     "hbaseConfigResources": "hbase-site.xml",
-     "zookeeperQuorum": "localhost",
-     "zookeeperClientPort": "2181",
-     "zookeeperZnodeParent": "/hbase",
-     "tableName": "pulsar_hbase",
-     "rowKeyName": "rowKey",
-     "familyName": "info",
-     "qualifierNames": ["name", "address", "age"]
-  }
-  
-  ```
-
-* YAML
-
-  ```yaml
-  
-  configs:
-     hbaseConfigResources: "hbase-site.xml"
-     zookeeperQuorum: "localhost"
-     zookeeperClientPort: "2181"
-     zookeeperZnodeParent: "/hbase"
-     tableName: "pulsar_hbase"
-     rowKeyName: "rowKey"
-     familyName: "info"
-     qualifierNames: [ 'name', 'address', 'age']
-  
-  ```
-
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/io-hdfs2-sink.md b/site2/website/versioned_docs/version-2.9.0-deprecated/io-hdfs2-sink.md
deleted file mode 100644
index 4a8527154430d0..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/io-hdfs2-sink.md
+++ /dev/null
@@ -1,64 +0,0 @@
----
-id: io-hdfs2-sink
-title: HDFS2 sink connector
-sidebar_label: "HDFS2 sink connector"
-original_id: io-hdfs2-sink
----
-
-The HDFS2 sink connector pulls the messages from Pulsar topics and persists the messages to HDFS files.
-
-## Configuration
-
-The configuration of the HDFS2 sink connector has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|

    **Example**
    'core-site.xml'
    'hdfs-site.xml' | -| `directory` | String | true | None|The HDFS directory where files read from or written to. | -| `encoding` | String |false |None |The character encoding for the files.

    **Example**
    UTF-8
    ASCII | -| `compression` | Compression |false |None |The compression code used to compress or de-compress the files on HDFS.

    Below are the available options:
  1522. BZIP2
  1523. DEFLATE
  1524. GZIP
  1525. LZ4
  1526. SNAPPY
  1527. | -| `kerberosUserPrincipal` |String| false| None|The principal account of Kerberos user used for authentication. | -| `keytab` | String|false|None| The full pathname of the Kerberos keytab file used for authentication. | -| `filenamePrefix` |String| true, if `compression` is set to `None`. | None |The prefix of the files created inside the HDFS directory.

    **Example**
    The value of topicA result in files named topicA-. | -| `fileExtension` | String| true | None | The extension added to the files written to HDFS.

    **Example**
    '.txt'
    '.seq' | -| `separator` | char|false |None |The character used to separate records in a text file.

    If no value is provided, the contents from all records are concatenated together in one continuous byte array. | -| `syncInterval` | long| false |0| The interval between calls to flush data to HDFS disk in milliseconds. | -| `maxPendingRecords` |int| false|Integer.MAX_VALUE | The maximum number of records that hold in memory before acking.

    Setting this property to 1 makes every record send to disk before the record is acked.

    Setting this property to a higher value allows buffering records before flushing them to disk. -| `subdirectoryPattern` | String | false | None | A subdirectory associated with the created time of the sink.
    The pattern is the formatted pattern of `directory`'s subdirectory.

-
-### Example
-
-Before using the HDFS2 sink connector, you need to create a configuration file through one of the following methods.
-
-* JSON
-
-  ```json
-  
-  {
-     "hdfsConfigResources": "core-site.xml",
-     "directory": "/foo/bar",
-     "filenamePrefix": "prefix",
-     "fileExtension": ".log",
-     "compression": "SNAPPY",
-     "subdirectoryPattern": "yyyy-MM-dd"
-  }
-  
-  ```
-
-* YAML
-
-  ```yaml
-  
-  configs:
-     hdfsConfigResources: "core-site.xml"
-     directory: "/foo/bar"
-     filenamePrefix: "prefix"
-     fileExtension: ".log"
-     compression: "SNAPPY"
-     subdirectoryPattern: "yyyy-MM-dd"
-  
-  ```
-
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/io-hdfs3-sink.md b/site2/website/versioned_docs/version-2.9.0-deprecated/io-hdfs3-sink.md
deleted file mode 100644
index aec065a25db7f4..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/io-hdfs3-sink.md
+++ /dev/null
@@ -1,59 +0,0 @@
----
-id: io-hdfs3-sink
-title: HDFS3 sink connector
-sidebar_label: "HDFS3 sink connector"
-original_id: io-hdfs3-sink
----
-
-The HDFS3 sink connector pulls the messages from Pulsar topics and persists the messages to HDFS files.
-
-## Configuration
-
-The configuration of the HDFS3 sink connector has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|

    **Example**
    'core-site.xml'
    'hdfs-site.xml' | -| `directory` | String | true | None|The HDFS directory where files read from or written to. | -| `encoding` | String |false |None |The character encoding for the files.

    **Example**
    UTF-8
    ASCII | -| `compression` | Compression |false |None |The compression code used to compress or de-compress the files on HDFS.

    Below are the available options:
  1528. BZIP2
  1529. DEFLATE
  1530. GZIP
  1531. LZ4
  1532. SNAPPY
  1533. | -| `kerberosUserPrincipal` |String| false| None|The principal account of Kerberos user used for authentication. | -| `keytab` | String|false|None| The full pathname of the Kerberos keytab file used for authentication. | -| `filenamePrefix` |String| false |None |The prefix of the files created inside the HDFS directory.

    **Example**
    The value of topicA result in files named topicA-. | -| `fileExtension` | String| false | None| The extension added to the files written to HDFS.

    **Example**
    '.txt'
    '.seq' | -| `separator` | char|false |None |The character used to separate records in a text file.

    If no value is provided, the contents from all records are concatenated together in one continuous byte array. | -| `syncInterval` | long| false |0| The interval between calls to flush data to HDFS disk in milliseconds. | -| `maxPendingRecords` |int| false|Integer.MAX_VALUE | The maximum number of records that hold in memory before acking.

    Setting this property to 1 makes every record send to disk before the record is acked.

-
-### Example
-
-Before using the HDFS3 sink connector, you need to create a configuration file through one of the following methods.
-
-* JSON
-
-  ```json
-  
-  {
-     "hdfsConfigResources": "core-site.xml",
-     "directory": "/foo/bar",
-     "filenamePrefix": "prefix",
-     "compression": "SNAPPY"
-  }
-  
-  ```
-
-* YAML
-
-  ```yaml
-  
-  configs:
-     hdfsConfigResources: "core-site.xml"
-     directory: "/foo/bar"
-     filenamePrefix: "prefix"
-     compression: "SNAPPY"
-  
-  ```
-
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/io-influxdb-sink.md b/site2/website/versioned_docs/version-2.9.0-deprecated/io-influxdb-sink.md
deleted file mode 100644
index 9382f8c03121cc..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/io-influxdb-sink.md
+++ /dev/null
@@ -1,119 +0,0 @@
----
-id: io-influxdb-sink
-title: InfluxDB sink connector
-sidebar_label: "InfluxDB sink connector"
-original_id: io-influxdb-sink
----
-
-The InfluxDB sink connector pulls messages from Pulsar topics and persists the messages to InfluxDB.
-
-The InfluxDB sink provides different configurations for InfluxDB v1 and v2 respectively.
-
-## Configuration
-
-The configuration of the InfluxDB sink connector has the following properties.
-
-### Property
-#### InfluxDBv2
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `influxdbUrl` |String| true|" " (empty string) | The URL of the InfluxDB instance. |
-| `token` | String|true| " " (empty string) |The authentication token used to authenticate to InfluxDB. |
-| `organization` | String| true|" " (empty string) | The InfluxDB organization to write to. |
-| `bucket` |String| true | " " (empty string)| The InfluxDB bucket to write to. |

    Below are the available options:
  1534. ns
  1535. us
  1536. ms
  1537. s
  1538. | -| `logLevel` | String|false| NONE|The log level for InfluxDB request and response.

    Below are the available options:
  1539. NONE
  1540. BASIC
  1541. HEADERS
  1542. FULL
  1543. | -| `gzipEnable` | boolean|false | false | Whether to enable gzip or not. | -| `batchTimeMs` |long|false| 1000L | The InfluxDB operation time in milliseconds. | -| `batchSize` | int|false|200| The batch size of writing to InfluxDB. | - -#### InfluxDBv1 -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `influxdbUrl` |String| true|" " (empty string) | The URL of the InfluxDB instance. | -| `username` | String|false| " " (empty string) |The username used to authenticate to InfluxDB. | -| `password` | String| false|" " (empty string) | The password used to authenticate to InfluxDB. | -| `database` |String| true | " " (empty string)| The InfluxDB to which write messages. | -| `consistencyLevel` | String|false|ONE | The consistency level for writing data to InfluxDB.

    Below are the available options:
  1544. ALL
  1545. ANY
  1546. ONE
  1547. QUORUM
  1548. | -| `logLevel` | String|false| NONE|The log level for InfluxDB request and response.

    Below are the available options:
  1549. NONE
  1550. BASIC
  1551. HEADERS
  1552. FULL
-| `retentionPolicy` | String|false| autogen| The retention policy for InfluxDB. |
-| `gzipEnable` | boolean|false | false | Whether to enable gzip or not. |
-| `batchTimeMs` |long|false| 1000L | The InfluxDB operation time in milliseconds. |
-| `batchSize` | int|false|200| The batch size of writing to InfluxDB. |
-
-### Example
-Before using the InfluxDB sink connector, you need to create a configuration file through one of the following methods.
-#### InfluxDBv2
-* JSON
-
-  ```json
-  
-  {
-     "influxdbUrl": "http://localhost:9999",
-     "organization": "example-org",
-     "bucket": "example-bucket",
-     "token": "xxxx",
-     "precision": "ns",
-     "logLevel": "NONE",
-     "gzipEnable": false,
-     "batchTimeMs": 1000,
-     "batchSize": 100
-  }
-  
-  ```
-
-* YAML
-
-  ```yaml
-  
-  configs:
-     influxdbUrl: "http://localhost:9999"
-     organization: "example-org"
-     bucket: "example-bucket"
-     token: "xxxx"
-     precision: "ns"
-     logLevel: "NONE"
-     gzipEnable: false
-     batchTimeMs: 1000
-     batchSize: 100
-  
-  ```
-
-#### InfluxDBv1
-
-* JSON
-
-  ```json
-  
-  {
-     "influxdbUrl": "http://localhost:8086",
-     "database": "test_db",
-     "consistencyLevel": "ONE",
-     "logLevel": "NONE",
-     "retentionPolicy": "autogen",
-     "gzipEnable": false,
-     "batchTimeMs": 1000,
-     "batchSize": 100
-  }
-  
-  ```
-
-* YAML
-
-  ```yaml
-  
-  configs:
-     influxdbUrl: "http://localhost:8086"
-     database: "test_db"
-     consistencyLevel: "ONE"
-     logLevel: "NONE"
-     retentionPolicy: "autogen"
-     gzipEnable: false
-     batchTimeMs: 1000
-     batchSize: 100
-  
-  ```
-
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/io-jdbc-sink.md b/site2/website/versioned_docs/version-2.9.0-deprecated/io-jdbc-sink.md
deleted file mode 100644
index 77dbb61fccd7ed..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/io-jdbc-sink.md
+++ /dev/null
@@ -1,157 +0,0 @@
----
-id: io-jdbc-sink
-title: JDBC sink connector
-sidebar_label: "JDBC sink connector"
-original_id: io-jdbc-sink
----
-
-The JDBC sink connectors allow pulling messages from Pulsar topics and persisting the messages to ClickHouse, MariaDB, PostgreSQL, and SQLite.
-
-> Currently, INSERT, DELETE and UPDATE operations are supported.
-
-## Configuration
-
-The configuration of all JDBC sink connectors has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|

    **Note: `userName` is case-sensitive.**| -| `password` | String|false | " " (empty string)| The password used to connect to the database specified by `jdbcUrl`.

    **Note: `password` is case-sensitive.**| -| `jdbcUrl` | String|true | " " (empty string) | The JDBC URL of the database to which the connector connects. | -| `tableName` | String|true | " " (empty string) | The name of the table to which the connector writes. | -| `nonKey` | String|false | " " (empty string) | A comma-separated list contains the fields used in updating events. | -| `key` | String|false | " " (empty string) | A comma-separated list contains the fields used in `where` condition of updating and deleting events. | -| `timeoutMs` | int| false|500 | The JDBC operation timeout in milliseconds. | -| `batchSize` | int|false | 200 | The batch size of updates made to the database. | - -### Example for ClickHouse - -* JSON - - ```json - - { - "userName": "clickhouse", - "password": "password", - "jdbcUrl": "jdbc:clickhouse://localhost:8123/pulsar_clickhouse_jdbc_sink", - "tableName": "pulsar_clickhouse_jdbc_sink" - } - - ``` - -* YAML - - ```yaml - - tenant: "public" - namespace: "default" - name: "jdbc-clickhouse-sink" - topicName: "persistent://public/default/jdbc-clickhouse-topic" - sinkType: "jdbc-clickhouse" - configs: - userName: "clickhouse" - password: "password" - jdbcUrl: "jdbc:clickhouse://localhost:8123/pulsar_clickhouse_jdbc_sink" - tableName: "pulsar_clickhouse_jdbc_sink" - - ``` - -### Example for MariaDB - -* JSON - - ```json - - { - "userName": "mariadb", - "password": "password", - "jdbcUrl": "jdbc:mariadb://localhost:3306/pulsar_mariadb_jdbc_sink", - "tableName": "pulsar_mariadb_jdbc_sink" - } - - ``` - -* YAML - - ```yaml - - tenant: "public" - namespace: "default" - name: "jdbc-mariadb-sink" - topicName: "persistent://public/default/jdbc-mariadb-topic" - sinkType: "jdbc-mariadb" - configs: - userName: "mariadb" - password: "password" - jdbcUrl: "jdbc:mariadb://localhost:3306/pulsar_mariadb_jdbc_sink" - tableName: "pulsar_mariadb_jdbc_sink" - - ``` - -### Example for PostgreSQL - -Before using the JDBC PostgreSQL sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "userName": "postgres", - "password": "password", - "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink", - "tableName": "pulsar_postgres_jdbc_sink" - } - - ``` - -* YAML - - ```yaml - - tenant: "public" - namespace: "default" - name: "jdbc-postgres-sink" - topicName: "persistent://public/default/jdbc-postgres-topic" - sinkType: "jdbc-postgres" - configs: - userName: "postgres" - password: "password" - jdbcUrl: "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink" - tableName: "pulsar_postgres_jdbc_sink" - - ``` - -For more information on **how to use this JDBC sink connector**, see [connect Pulsar to PostgreSQL](io-quickstart.md#connect-pulsar-to-postgresql). 
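-
-For illustration, a sketch of how the `key` and `nonKey` options from the property table combine with the PostgreSQL example above; the column names are placeholders. With such a configuration, `UPDATE` and `DELETE` events are matched on the `id` column, while `name` and `address` are the columns being updated:
-
-```json
-
-{
-   "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink",
-   "tableName": "pulsar_postgres_jdbc_sink",
-   "key": "id",
-   "nonKey": "name,address"
-}
-
-```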
-
-### Example for SQLite
-
-* JSON
-
-  ```json
-  
-  {
-     "jdbcUrl": "jdbc:sqlite:db.sqlite",
-     "tableName": "pulsar_sqlite_jdbc_sink"
-  }
-  
-  ```
-
-* YAML
-
-  ```yaml
-  
-  tenant: "public"
-  namespace: "default"
-  name: "jdbc-sqlite-sink"
-  topicName: "persistent://public/default/jdbc-sqlite-topic"
-  sinkType: "jdbc-sqlite"
-  configs:
-     jdbcUrl: "jdbc:sqlite:db.sqlite"
-     tableName: "pulsar_sqlite_jdbc_sink"
-  
-  ```
-
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/io-kafka-sink.md b/site2/website/versioned_docs/version-2.9.0-deprecated/io-kafka-sink.md
deleted file mode 100644
index 09dad4ce70bac9..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/io-kafka-sink.md
+++ /dev/null
@@ -1,72 +0,0 @@
----
-id: io-kafka-sink
-title: Kafka sink connector
-sidebar_label: "Kafka sink connector"
-original_id: io-kafka-sink
----
-
-The Kafka sink connector pulls messages from Pulsar topics and persists the messages to Kafka topics.
-
-This guide explains how to configure and use the Kafka sink connector.
-
-## Configuration
-
-The configuration of the Kafka sink connector has the following parameters.
-
-### Property
-
-| Name | Type| Required | Default | Description
-|------|----------|---------|-------------|-------------|
-| `bootstrapServers` |String| true | " " (empty string) | A comma-separated list of host and port pairs for establishing the initial connection to the Kafka cluster. |
-|`acks`|String|true|" " (empty string) |The number of acknowledgments that the producer requires the leader to receive before a request completes.<br />This controls the durability of the sent records.
-|`batchSize`|long|false|16384L|The batch size that a Kafka producer attempts to batch records together before sending them to brokers.
-|`maxRequestSize`|long|false|1048576L|The maximum size of a Kafka request in bytes.
-|`topic`|String|true|" " (empty string) |The Kafka topic which receives messages from Pulsar.
-| `keyDeserializationClass` | String|false | org.apache.kafka.common.serialization.StringSerializer | The serializer class for Kafka producers to serialize keys.
-| `valueDeserializationClass` | String|false | org.apache.kafka.common.serialization.ByteArraySerializer | The serializer class for Kafka producers to serialize values.<br /><br />The serializer is set by a specific implementation of [`KafkaAbstractSink`](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSink.java).
-|`producerConfigProperties`|Map|false|" " (empty string)|The producer configuration properties to be passed to producers.<br /><br />**Note: other properties specified in the connector configuration file take precedence over this configuration**.
-
-
-### Example
-
-Before using the Kafka sink connector, you need to create a configuration file through one of the following methods.
-
-* JSON
-
-  ```json
-  
-  {
-     "bootstrapServers": "localhost:6667",
-     "topic": "test",
-     "acks": "1",
-     "batchSize": "16384",
-     "maxRequestSize": "1048576",
-     "producerConfigProperties": {
-        "client.id": "test-pulsar-producer",
-        "security.protocol": "SASL_PLAINTEXT",
-        "sasl.mechanism": "GSSAPI",
-        "sasl.kerberos.service.name": "kafka",
-        "acks": "all"
-     }
-  }
-  
-  ```
-
-* YAML
-
-  ```yaml
-  
-  configs:
-     bootstrapServers: "localhost:6667"
-     topic: "test"
-     acks: "1"
-     batchSize: "16384"
-     maxRequestSize: "1048576"
-     producerConfigProperties:
-        client.id: "test-pulsar-producer"
-        security.protocol: "SASL_PLAINTEXT"
-        sasl.mechanism: "GSSAPI"
-        sasl.kerberos.service.name: "kafka"
-        acks: "all"
-  
-  ```
-
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/io-kafka-source.md b/site2/website/versioned_docs/version-2.9.0-deprecated/io-kafka-source.md
deleted file mode 100644
index 53448699e21b4a..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/io-kafka-source.md
+++ /dev/null
@@ -1,226 +0,0 @@
----
-id: io-kafka-source
-title: Kafka source connector
-sidebar_label: "Kafka source connector"
-original_id: io-kafka-source
----
-
-The Kafka source connector pulls messages from Kafka topics and persists the messages to Pulsar topics.
-
-This guide explains how to configure and use the Kafka source connector.
-
-## Configuration
-
-The configuration of the Kafka source connector has the following properties.
-
-### Property
-
-| Name | Type| Required | Default | Description
-|------|----------|---------|-------------|-------------|
-| `bootstrapServers` |String| true | " " (empty string) | A comma-separated list of host and port pairs for establishing the initial connection to the Kafka cluster. |
-| `groupId` |String| true | " " (empty string) | A unique string that identifies the group of consumer processes to which this consumer belongs. |
-| `fetchMinBytes` | long|false | 1 | The minimum number of bytes expected for each fetch response. |

    This committed offset is used when the process fails as the position from which a new consumer begins. | -| `autoCommitIntervalMs` | long|false | 5000 | The frequency in milliseconds that the consumer offsets are auto-committed to Kafka if `autoCommitEnabled` is set to true. | -| `heartbeatIntervalMs` | long| false | 3000 | The interval between heartbeats to the consumer when using Kafka's group management facilities.

    **Note: `heartbeatIntervalMs` must be smaller than `sessionTimeoutMs`**.| -| `sessionTimeoutMs` | long|false | 30000 | The timeout used to detect consumer failures when using Kafka's group management facility. | -| `topic` | String|true | " " (empty string)| The Kafka topic which sends messages to Pulsar. | -| `consumerConfigProperties` | Map| false | " " (empty string) | The consumer configuration properties to be passed to consumers.

    **Note: other properties specified in the connector configuration file take precedence over this configuration**. | -| `keyDeserializationClass` | String|false | org.apache.kafka.common.serialization.StringDeserializer | The deserializer class for Kafka consumers to deserialize keys.
-| `keyDeserializationClass` | String|false | org.apache.kafka.common.serialization.StringDeserializer | The deserializer class for Kafka consumers to deserialize keys.<br />The deserializer is set by a specific implementation of [`KafkaAbstractSource`](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSource.java).
-| `valueDeserializationClass` | String|false | org.apache.kafka.common.serialization.ByteArrayDeserializer | The deserializer class for Kafka consumers to deserialize values.
-| `autoOffsetReset` | String | false | "earliest" | The default offset reset policy. |
-
-### Schema Management
-
-This Kafka source connector applies the schema to the topic depending on the data type that is present on the Kafka topic. You can detect the data type from the `keyDeserializationClass` and `valueDeserializationClass` configuration parameters.
-
-If the `valueDeserializationClass` is `org.apache.kafka.common.serialization.StringDeserializer`, you can set Schema.STRING() as schema type on the Pulsar topic.
-
-If `valueDeserializationClass` is `io.confluent.kafka.serializers.KafkaAvroDeserializer`, Pulsar downloads the AVRO schema from the Confluent Schema Registry® and sets it properly on the Pulsar topic.
-
-In this case, you need to set `schema.registry.url` inside of the `consumerConfigProperties` configuration entry of the source.
-
-If `keyDeserializationClass` is not `org.apache.kafka.common.serialization.StringDeserializer`, it means that you do not have a String as key and the Kafka source uses the KeyValue schema type with the SEPARATED encoding.
-
-Pulsar supports AVRO format for keys.
-
-In this case, you can have a Pulsar topic with the following properties:
-- Schema: KeyValue schema with SEPARATED encoding
-- Key: the content of the key of the Kafka message (base64 encoded)
-- Value: the content of the value of the Kafka message
-- KeySchema: the schema detected from `keyDeserializationClass`
-- ValueSchema: the schema detected from `valueDeserializationClass`
-
-Topic compaction and partition routing use the Pulsar key, which contains the Kafka key, and so they are driven by the same value that you have on Kafka.
-
-When you consume data from Pulsar topics, you can use the `KeyValue` schema. In this way, you can decode the data properly. If you want to access the raw key, you can use the `Message#getKeyBytes()` API.
-
-### Example
-
-Before using the Kafka source connector, you need to create a configuration file through one of the following methods.
-
-* JSON
-
-  ```json
-  
-  {
-     "bootstrapServers": "pulsar-kafka:9092",
-     "groupId": "test-pulsar-io",
-     "topic": "my-topic",
-     "sessionTimeoutMs": "10000",
-     "autoCommitEnabled": false
-  }
-  
-  ```
-
-* YAML
-
-  ```yaml
-  
-  configs:
-     bootstrapServers: "pulsar-kafka:9092"
-     groupId: "test-pulsar-io"
-     topic: "my-topic"
-     sessionTimeoutMs: "10000"
-     autoCommitEnabled: false
-  
-  ```
-
-## Usage
-
-Here is an example of using the Kafka source connector with the configuration file as shown previously.
-
-1. Download a Kafka client and a Kafka connector.
-
-   ```bash
-   
-   $ wget https://repo1.maven.org/maven2/org/apache/kafka/kafka-clients/0.10.2.1/kafka-clients-0.10.2.1.jar
-   
-   $ wget https://archive.apache.org/dist/pulsar/pulsar-@pulsar:version@/connectors/pulsar-io-kafka-@pulsar:version@.nar
-   
-   ```
-
-2. Create a network.
-
-   ```bash
-   
-   $ docker network create kafka-pulsar
-   
-   ```
-
-3. Pull a ZooKeeper image and start ZooKeeper.
-
-   ```bash
-   
-   $ docker pull wurstmeister/zookeeper
-   
-   $ docker run -d -it -p 2181:2181 --name pulsar-kafka-zookeeper --network kafka-pulsar wurstmeister/zookeeper
-   
-   ```
-
-4. Pull a Kafka image and start Kafka.
-
-   ```bash
-   
-   $ docker pull wurstmeister/kafka:2.11-1.0.2
-   
-   $ docker run -d -it --network kafka-pulsar -p 6667:6667 -p 9092:9092 -e KAFKA_ADVERTISED_HOST_NAME=pulsar-kafka -e KAFKA_ZOOKEEPER_CONNECT=pulsar-kafka-zookeeper:2181 --name pulsar-kafka wurstmeister/kafka:2.11-1.0.2
-   
-   ```
-
-5. Pull a Pulsar image and start Pulsar standalone.
-
-   ```bash
-   
-   $ docker pull apachepulsar/pulsar:@pulsar:version@
-   
-   $ docker run -d -it --network kafka-pulsar -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-kafka-standalone apachepulsar/pulsar:@pulsar:version@ bin/pulsar standalone
-   
-   ```
-
-6. Create a producer file _kafka-producer.py_.
-
-   ```python
-   
-   from kafka import KafkaProducer
-   producer = KafkaProducer(bootstrap_servers='pulsar-kafka:9092')
-   future = producer.send('my-topic', b'hello world')
-   future.get()
-   
-   ```
-
-7. Create a consumer file _pulsar-client.py_.
-
-   ```python
-   
-   import pulsar
-   
-   client = pulsar.Client('pulsar://localhost:6650')
-   consumer = client.subscribe('my-topic', subscription_name='my-aa')
-   
-   while True:
-       msg = consumer.receive()
-       print(msg)
-       print(dir(msg))
-       print("Received message: '%s'" % msg.data())
-       consumer.acknowledge(msg)
-   
-   client.close()
-   
-   ```
-
-8. Copy the following files to Pulsar.
-
-   ```bash
-   
-   $ docker cp pulsar-io-kafka-@pulsar:version@.nar pulsar-kafka-standalone:/pulsar
-   $ docker cp kafkaSourceConfig.yaml pulsar-kafka-standalone:/pulsar/conf
-   $ docker cp pulsar-client.py pulsar-kafka-standalone:/pulsar/
-   $ docker cp kafka-producer.py pulsar-kafka-standalone:/pulsar/
-   
-   ```
-
-9. Open a new terminal window and start the Kafka source connector in local run mode.
-
-   ```bash
-   
-   $ docker exec -it pulsar-kafka-standalone /bin/bash
-   
-   $ ./bin/pulsar-admin source localrun \
-       --archive ./pulsar-io-kafka-@pulsar:version@.nar \
-       --classname org.apache.pulsar.io.kafka.KafkaBytesSource \
-       --tenant public \
-       --namespace default \
-       --name kafka \
-       --destination-topic-name my-topic \
-       --source-config-file ./conf/kafkaSourceConfig.yaml \
-       --parallelism 1
-   
-   ```
-
-10. Open a new terminal window and run the consumer.
-
-    ```bash
-    
-    $ docker exec -it pulsar-kafka-standalone /bin/bash
-    
-    $ python3 pulsar-client.py
-    
-    ```
-
-11. Open another terminal window and publish a message from Kafka.
-
-    ```bash
-    
-    $ docker exec -it pulsar-kafka-standalone /bin/bash
-    
-    $ pip install kafka-python
-    
-    $ python3 kafka-producer.py
-    
-    ```
-
-    The following information appears on the consumer terminal window.
-
-    ```bash
-    
-    Received message: 'hello world'
-    
-    ```
-
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/io-kinesis-sink.md b/site2/website/versioned_docs/version-2.9.0-deprecated/io-kinesis-sink.md
deleted file mode 100644
index 153587dcfc783e..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/io-kinesis-sink.md
+++ /dev/null
@@ -1,80 +0,0 @@
----
-id: io-kinesis-sink
-title: Kinesis sink connector
-sidebar_label: "Kinesis sink connector"
-original_id: io-kinesis-sink
----
-
-The Kinesis sink connector pulls data from Pulsar and persists data into Amazon Kinesis.
-
-## Configuration
-
-The configuration of the Kinesis sink connector has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|

    Below are the available options:

  1554. `ONLY_RAW_PAYLOAD`: Kinesis sink directly publishes Pulsar message payload as a message into the configured Kinesis stream.

  1555. `FULL_MESSAGE_IN_JSON`: Kinesis sink creates a JSON payload with Pulsar message payload, properties and encryptionCtx, and publishes JSON payload into the configured Kinesis stream.

  1556. `FULL_MESSAGE_IN_FB`: Kinesis sink creates a flatbuffer serialized payload with Pulsar message payload, properties and encryptionCtx, and publishes flatbuffer payload into the configured Kinesis stream.
  1557. -`retainOrdering`|boolean|false|false|Whether Pulsar connectors to retain ordering when moving messages from Pulsar to Kinesis or not. -`awsEndpoint`|String|false|" " (empty string)|The Kinesis end-point URL, which can be found at [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). -`awsRegion`|String|false|" " (empty string)|The AWS region.

    **Example**
    us-west-1, us-west-2 -`awsKinesisStreamName`|String|true|" " (empty string)|The Kinesis stream name. -`awsCredentialPluginName`|String|false|" " (empty string)|The fully-qualified class name of implementation of {@inject: github:AwsCredentialProviderPlugin:/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java}.

    It is a factory class which creates an AWSCredentialsProvider that is used by Kinesis sink.

-`awsCredentialPluginParam`|String|false|" " (empty string)|The JSON parameter to initialize `awsCredentialsProviderPlugin`.
-
-### Built-in plugins
-
-The following are built-in `AwsCredentialProviderPlugin` plugins:
-
-* `org.apache.pulsar.io.aws.AwsDefaultProviderChainPlugin`
-
-  This plugin takes no configuration; it uses the default AWS provider chain.
-
-  For more information, see [AWS documentation](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html#credentials-default).
-
-* `org.apache.pulsar.io.aws.STSAssumeRoleProviderPlugin`
-
-  This plugin takes a configuration (via the `awsCredentialPluginParam`) that describes a role to assume when running the KCL.
-
-  This configuration takes the form of a small JSON document like:
-
-  ```json
-  
-  {"roleArn": "arn...", "roleSessionName": "name"}
-  
-  ```
-
-### Example
-
-Before using the Kinesis sink connector, you need to create a configuration file through one of the following methods.
-
-* JSON
-
-  ```json
-  
-  {
-     "awsEndpoint": "some.endpoint.aws",
-     "awsRegion": "us-east-1",
-     "awsKinesisStreamName": "my-stream",
-     "awsCredentialPluginParam": "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}",
-     "messageFormat": "ONLY_RAW_PAYLOAD",
-     "retainOrdering": "true"
-  }
-  
-  ```
-
-* YAML
-
-  ```yaml
-  
-  configs:
-     awsEndpoint: "some.endpoint.aws"
-     awsRegion: "us-east-1"
-     awsKinesisStreamName: "my-stream"
-     awsCredentialPluginParam: "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}"
-     messageFormat: "ONLY_RAW_PAYLOAD"
-     retainOrdering: "true"
-  
-  ```
-
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/io-kinesis-source.md b/site2/website/versioned_docs/version-2.9.0-deprecated/io-kinesis-source.md
deleted file mode 100644
index 0d07eefc3703b3..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/io-kinesis-source.md
+++ /dev/null
@@ -1,81 +0,0 @@
----
-id: io-kinesis-source
-title: Kinesis source connector
-sidebar_label: "Kinesis source connector"
-original_id: io-kinesis-source
----
-
-The Kinesis source connector pulls data from Amazon Kinesis and persists data into Pulsar.
-
-This connector uses the [Kinesis Client Library](https://github.com/awslabs/amazon-kinesis-client) (KCL) to do the actual consuming of messages. The KCL uses DynamoDB to track state for consumers.
-
-> Note: currently, the Kinesis source connector only supports raw messages. If you use KMS encrypted messages, the encrypted messages are sent downstream. This connector will support decrypting messages in a future release.
-
-
-## Configuration
-
-The configuration of the Kinesis source connector has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|

    Below are the available options:

  - `AT_TIMESTAMP`: start from the record at or after the specified timestamp.<br/><br/>

  - `LATEST`: start after the most recent data record.<br/><br/>

  - `TRIM_HORIZON`: start from the oldest available data record.<br/>
-`startAtTime`|Date|false|" " (empty string)|If `initialPositionInStream` is set to `AT_TIMESTAMP`, this specifies the point in time to start consumption. -`applicationName`|String|false|Pulsar IO connector|The name of the Amazon Kinesis application.<br/><br/>

    By default, the application name is included in the user agent string used to make AWS requests. This can assist with troubleshooting, for example, by distinguishing requests made by separate connector instances. -`checkpointInterval`|long|false|60000|The frequency of the Kinesis stream checkpoint in milliseconds. -`backoffTime`|long|false|3000|The amount of time in milliseconds to delay between requests when the connector encounters a throttling exception from AWS Kinesis. -`numRetries`|int|false|3|The number of re-attempts when the connector encounters an exception while trying to set a checkpoint. -`receiveQueueSize`|int|false|1000|The maximum number of AWS records that can be buffered inside the connector.<br/><br/>

    Once the `receiveQueueSize` is reached, the connector does not consume any messages from Kinesis until some messages in the queue are successfully consumed. -`dynamoEndpoint`|String|false|" " (empty string)|The Dynamo end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). -`cloudwatchEndpoint`|String|false|" " (empty string)|The Cloudwatch end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). -`useEnhancedFanOut`|boolean|false|true|If set to true, it uses Kinesis enhanced fan-out.<br/><br/>

    If set to false, it uses polling. -`awsEndpoint`|String|false|" " (empty string)|The Kinesis end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). -`awsRegion`|String|false|" " (empty string)|The AWS region.<br/><br/>

    **Example**
    us-west-1, us-west-2 -`awsKinesisStreamName`|String|true|" " (empty string)|The Kinesis stream name. -`awsCredentialPluginName`|String|false|" " (empty string)|The fully-qualified class name of implementation of {@inject: github:AwsCredentialProviderPlugin:/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java}.

    `awsCredentialProviderPlugin` has the following built-in plugins:<br/><br/>

  - `org.apache.pulsar.io.kinesis.AwsDefaultProviderChainPlugin`:<br/>
    this plugin uses the default AWS provider chain.
    For more information, see [using the default credential provider chain](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html#credentials-default).

  - `org.apache.pulsar.io.kinesis.STSAssumeRoleProviderPlugin`:<br/>
    this plugin takes a configuration via the `awsCredentialPluginParam` that describes a role to assume when running the KCL.
    **JSON configuration example**
    `{"roleArn": "arn...", "roleSessionName": "name"}`

    `awsCredentialPluginName` is a factory class which creates an AWSCredentialsProvider that is used by the Kinesis source.<br/><br/>

    If `awsCredentialPluginName` is set to empty, the Kinesis source creates a default AWSCredentialsProvider which accepts a JSON map of credentials in `awsCredentialPluginParam`.<br/>
-`awsCredentialPluginParam`|String |false|" " (empty string)|The JSON parameter to initialize `awsCredentialsProviderPlugin`. - -### Example - -Before using the Kinesis source connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "awsEndpoint": "https://some.endpoint.aws", - "awsRegion": "us-east-1", - "awsKinesisStreamName": "my-stream", - "awsCredentialPluginParam": "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}", - "applicationName": "My test application", - "checkpointInterval": "30000", - "backoffTime": "4000", - "numRetries": "3", - "receiveQueueSize": 2000, - "initialPositionInStream": "TRIM_HORIZON", - "startAtTime": "2019-03-05T19:28:58.000Z" - } - - ``` - -* YAML - - ```yaml - - configs: - awsEndpoint: "https://some.endpoint.aws" - awsRegion: "us-east-1" - awsKinesisStreamName: "my-stream" - awsCredentialPluginParam: "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}" - applicationName: "My test application" - checkpointInterval: 30000 - backoffTime: 4000 - numRetries: 3 - receiveQueueSize: 2000 - initialPositionInStream: "TRIM_HORIZON" - startAtTime: "2019-03-05T19:28:58.000Z" - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/io-mongo-sink.md b/site2/website/versioned_docs/version-2.9.0-deprecated/io-mongo-sink.md deleted file mode 100644 index 30c15a6c280938..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/io-mongo-sink.md +++ /dev/null @@ -1,56 +0,0 @@ ---- -id: io-mongo-sink -title: MongoDB sink connector -sidebar_label: "MongoDB sink connector" -original_id: io-mongo-sink ---- - -The MongoDB sink connector pulls messages from Pulsar topics -and persists the messages to collections. - -## Configuration - -The configuration of the MongoDB sink connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `mongoUri` | String| true| " " (empty string) | The MongoDB URI to which the connector connects.<br/><br/>

    For more information, see [connection string URI format](https://docs.mongodb.com/manual/reference/connection-string/). | -| `database` | String| true| " " (empty string)| The database name to which the collection belongs. | -| `collection` | String| true| " " (empty string)| The collection name to which the connector writes messages. | -| `batchSize` | int|false|100 | The batch size of writing messages to collections. | -| `batchTimeMs` |long|false|1000| The batch operation interval in milliseconds. | - - -### Example - -Before using the Mongo sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "mongoUri": "mongodb://localhost:27017", - "database": "pulsar", - "collection": "messages", - "batchSize": "2", - "batchTimeMs": "500" - } - - ``` - -* YAML - - ```yaml - - configs: - mongoUri: "mongodb://localhost:27017" - database: "pulsar" - collection: "messages" - batchSize: 2 - batchTimeMs: 500 - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/io-netty-source.md b/site2/website/versioned_docs/version-2.9.0-deprecated/io-netty-source.md deleted file mode 100644 index e1ec8d863115b3..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/io-netty-source.md +++ /dev/null @@ -1,241 +0,0 @@ ---- -id: io-netty-source -title: Netty source connector -sidebar_label: "Netty source connector" -original_id: io-netty-source ---- - -The Netty source connector opens a port that accepts incoming data via the configured network protocol -and publishes it to user-defined Pulsar topics. - -This connector can be used in a containerized (for example, k8s) deployment. Otherwise, if the connector runs in process or thread mode, the instances may conflict when listening on ports. - -## Configuration - -The configuration of the Netty source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `type` |String| true |tcp | The network protocol over which data is transmitted to Netty.<br/><br/>

    Below are the available options:
  - tcp<br/>
  - http<br/>
  - udp<br/>
  | -| `host` | String|true | 127.0.0.1 | The host name or address on which the source instance listens. | -| `port` | int|true | 10999 | The port on which the source instance listens. | -| `numberOfThreads` |int| true |1 | The number of threads the Netty TCP server uses to accept incoming connections and handle the traffic of accepted connections. | - - -### Example - -Before using the Netty source connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "type": "tcp", - "host": "127.0.0.1", - "port": "10911", - "numberOfThreads": "1" - } - - ``` - -* YAML - - ```yaml - - configs: - type: "tcp" - host: "127.0.0.1" - port: 10999 - numberOfThreads: 1 - - ``` - -## Usage - -The following examples show how to use the Netty source connector with TCP and HTTP. - -### TCP - -1. Start Pulsar standalone. - - ```bash - - $ docker pull apachepulsar/pulsar:{version} - - $ docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-netty-standalone apachepulsar/pulsar:{version} bin/pulsar standalone - - ``` - -2. Create a configuration file _netty-source-config.yaml_. - - ```yaml - - configs: - type: "tcp" - host: "127.0.0.1" - port: 10999 - numberOfThreads: 1 - - ``` - -3. Copy the configuration file _netty-source-config.yaml_ to the Pulsar server. - - ```bash - - $ docker cp netty-source-config.yaml pulsar-netty-standalone:/pulsar/conf/ - - ``` - -4. Download the Netty source connector. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - curl -O http://mirror-hk.koddos.net/apache/pulsar/pulsar-{version}/connectors/pulsar-io-netty-{version}.nar - - ``` - -5. Start the Netty source connector. - - ```bash - - $ ./bin/pulsar-admin sources localrun \ - --archive pulsar-io-netty-@pulsar:version@.nar \ - --tenant public \ - --namespace default \ - --name netty \ - --destination-topic-name netty-topic \ - --source-config-file netty-source-config.yaml \ - --parallelism 1 - - ``` - -6. Consume data. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - - $ ./bin/pulsar-client consume -t Exclusive -s netty-sub netty-topic -n 0 - - ``` - -7. Open another terminal window to send data to the Netty source. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - - $ apt-get update - - $ apt-get -y install telnet - - $ root@1d19327b2c67:/pulsar# telnet 127.0.0.1 10999 - Trying 127.0.0.1... - Connected to 127.0.0.1. - Escape character is '^]'. - hello - world - - ``` - -8. The following information appears on the consumer terminal window. - - ```bash - - ----- got message ----- - hello - - ----- got message ----- - world - - ``` - -### HTTP - -1. Start Pulsar standalone. - - ```bash - - $ docker pull apachepulsar/pulsar:{version} - - $ docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-netty-standalone apachepulsar/pulsar:{version} bin/pulsar standalone - - ``` - -2. Create a configuration file _netty-source-config.yaml_. - - ```yaml - - configs: - type: "http" - host: "127.0.0.1" - port: 10999 - numberOfThreads: 1 - - ``` - -3. Copy the configuration file _netty-source-config.yaml_ to the Pulsar server. - - ```bash - - $ docker cp netty-source-config.yaml pulsar-netty-standalone:/pulsar/conf/ - - ``` - -4. Download the Netty source connector. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - curl -O http://mirror-hk.koddos.net/apache/pulsar/pulsar-{version}/connectors/pulsar-io-netty-{version}.nar - - ``` - -5. Start the Netty source connector. <br/>
 - - ```bash - - $ ./bin/pulsar-admin sources localrun \ - --archive pulsar-io-netty-@pulsar:version@.nar \ - --tenant public \ - --namespace default \ - --name netty \ - --destination-topic-name netty-topic \ - --source-config-file netty-source-config.yaml \ - --parallelism 1 - - ``` - -6. Consume data. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - - $ ./bin/pulsar-client consume -t Exclusive -s netty-sub netty-topic -n 0 - - ``` - -7. Open another terminal window to send data to the Netty source. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - - $ curl -X POST --data 'hello, world!' http://127.0.0.1:10999/ - - ``` - -8. The following information appears on the consumer terminal window. - - ```bash - - ----- got message ----- - hello, world! - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/io-nsq-source.md b/site2/website/versioned_docs/version-2.9.0-deprecated/io-nsq-source.md deleted file mode 100644 index b61e7e100c22e1..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/io-nsq-source.md +++ /dev/null @@ -1,21 +0,0 @@ ---- -id: io-nsq-source -title: NSQ source connector -sidebar_label: "NSQ source connector" -original_id: io-nsq-source ---- - -The NSQ source connector receives messages from NSQ topics -and writes messages to Pulsar topics. - -## Configuration - -The configuration of the NSQ source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `lookupds` |String| true | " " (empty string) | A comma-separated list of nsqlookupds to connect to. | -| `topic` | String|true | " " (empty string) | The NSQ topic to consume from. | -| `channel` | String |false | pulsar-transport-{$topic} | The channel to consume from on the provided NSQ topic. | \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/io-overview.md b/site2/website/versioned_docs/version-2.9.0-deprecated/io-overview.md deleted file mode 100644 index 3db5ee34042d3f..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/io-overview.md +++ /dev/null @@ -1,164 +0,0 @@ ---- -id: io-overview -title: Pulsar connector overview -sidebar_label: "Overview" -original_id: io-overview ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -Messaging systems are most powerful when you can easily use them with external systems like databases and other messaging systems. - -**Pulsar IO connectors** enable you to easily create, deploy, and manage connectors that interact with external systems, such as [Apache Cassandra](https://cassandra.apache.org), [Aerospike](https://www.aerospike.com), and many others. - - -## Concept - -Pulsar IO connectors come in two types: **source** and **sink**. - -This diagram illustrates the relationship between source, Pulsar, and sink: - -![Pulsar IO diagram](/assets/pulsar-io.png "Pulsar IO connectors (sources and sinks)") - - -### Source - -> Sources **feed data from external systems into Pulsar**. - -Common sources include other messaging systems and firehose-style data pipeline APIs. - -For the complete list of Pulsar built-in source connectors, see [source connector](io-connectors.md#source-connector). - -### Sink - -> Sinks **feed data from Pulsar into external systems**. - -Common sinks include other messaging systems and SQL and NoSQL databases. <br/>
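To make the sink side of this contract concrete, below is a minimal sketch of a custom sink written against the Pulsar IO `Sink` interface. This sketch is illustrative only: the class name and the log-and-ack behavior are made up, and a real connector would write each record to its external system instead.

```java

import java.util.Map;
import org.apache.pulsar.functions.api.Record;
import org.apache.pulsar.io.core.Sink;
import org.apache.pulsar.io.core.SinkContext;

// A hypothetical sink that only logs each record and acknowledges it.
public class LoggingSink implements Sink<byte[]> {

    @Override
    public void open(Map<String, Object> config, SinkContext context) {
        // Read connector-specific settings from `config` here.
    }

    @Override
    public void write(Record<byte[]> record) {
        System.out.println("got " + record.getValue().length + " bytes");
        record.ack(); // acknowledge so the subscription can advance
        // on a write failure, record.fail() would trigger redelivery instead
    }

    @Override
    public void close() {
        // Release any external-system resources here.
    }
}

```

Whether the sink calls `ack()` idempotently on retries is exactly what determines how well it honors the processing guarantees described below.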
 - -For the complete list of Pulsar built-in sink connectors, see [sink connector](io-connectors.md#sink-connector). - -## Processing guarantee - -Processing guarantees are used to handle errors when writing messages to Pulsar topics. - -> Pulsar connectors and Functions use the **same** processing guarantees as below. - -Delivery semantic | Description -:------------------|:------- -`at-most-once` | Each message sent to a connector is **processed once** or **not processed at all**. -`at-least-once` | Each message sent to a connector is **processed once** or **more than once**. -`effectively-once` | Each message sent to a connector has **one output associated** with it. - -> Processing guarantees for connectors do not just rely on the Pulsar guarantee but also **relate to external systems**, that is, **the implementation of source and sink**. - -* Source: Pulsar ensures that writing messages to Pulsar topics respects the processing guarantees. It is within Pulsar's control. - -* Sink: the processing guarantees rely on the sink implementation. If the sink implementation does not handle retries in an idempotent way, the sink does not respect the processing guarantees. - -### Set - -When creating a connector, you can set the processing guarantee with the following semantics: - -* ATLEAST_ONCE - -* ATMOST_ONCE - -* EFFECTIVELY_ONCE - -> If `--processing-guarantees` is not specified when creating a connector, the default semantic is `ATLEAST_ONCE`. - -The following takes the **Admin CLI** as an example. For more information about **REST API** or **JAVA Admin API**, see [here](io-use.md#create). - -````mdx-code-block - - - - -```bash - -$ bin/pulsar-admin sources create \ - --processing-guarantees ATMOST_ONCE \ - # Other source configs - -``` - -For more information about the options of `pulsar-admin sources create`, see [here](reference-connector-admin.md#create). - - - - -```bash - -$ bin/pulsar-admin sinks create \ - --processing-guarantees EFFECTIVELY_ONCE \ - # Other sink configs - -``` - -For more information about the options of `pulsar-admin sinks create`, see [here](reference-connector-admin.md#create-1). - - - - -```` - -### Update - -After creating a connector, you can update the processing guarantee with the following semantics: - -* ATLEAST_ONCE - -* ATMOST_ONCE - -* EFFECTIVELY_ONCE - -The following takes the **Admin CLI** as an example. For more information about **REST API** or **JAVA Admin API**, see [here](io-use.md#create). - -````mdx-code-block - - - - -```bash - -$ bin/pulsar-admin sources update \ - --processing-guarantees EFFECTIVELY_ONCE \ - # Other source configs - -``` - -For more information about the options of `pulsar-admin sources update`, see [here](reference-connector-admin.md#update). - - - - -```bash - -$ bin/pulsar-admin sinks update \ - --processing-guarantees ATMOST_ONCE \ - # Other sink configs - -``` - -For more information about the options of `pulsar-admin sinks update`, see [here](reference-connector-admin.md#update-1). - - - - -```` - - -## Work with connector - -You can manage Pulsar connectors (for example, create, update, start, stop, restart, reload, delete and perform other operations on connectors) via the [Connector Admin CLI](reference-connector-admin.md) with [sources](io-cli.md#sources) and [sinks](io-cli.md#sinks) subcommands. - -Connectors (sources and sinks) and Functions are components of instances, and they all run on Functions workers. <br/>
When managing a source, sink or function via [Connector Admin CLI](reference-connector-admin.md) or [Functions Admin CLI](functions-cli.md), an instance is started on a worker. For more information, see [Functions worker](functions-worker.md#run-functions-worker-separately). - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/io-quickstart.md b/site2/website/versioned_docs/version-2.9.0-deprecated/io-quickstart.md deleted file mode 100644 index 40eaf5c1de2214..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/io-quickstart.md +++ /dev/null @@ -1,963 +0,0 @@ ---- -id: io-quickstart -title: How to connect Pulsar to database -sidebar_label: "Get started" -original_id: io-quickstart ---- - -This tutorial provides a hands-on look at how you can move data out of Pulsar without writing a single line of code. - -It is helpful to review the [concepts](io-overview.md) for Pulsar I/O while running the steps in this guide to gain a deeper understanding. - -At the end of this tutorial, you are able to: - -- [Connect Pulsar to Cassandra](#Connect-Pulsar-to-Cassandra) - -- [Connect Pulsar to PostgreSQL](#Connect-Pulsar-to-PostgreSQL) - -:::tip - -* These instructions assume you are running Pulsar in [standalone mode](getting-started-standalone.md). However, all -the commands used in this tutorial can be used in a multi-node Pulsar cluster without any changes. -* All the instructions are assumed to run at the root directory of a Pulsar binary distribution. - -::: - -## Install Pulsar and built-in connector - -Before connecting Pulsar to a database, you need to install Pulsar and the desired built-in connector. - -For more information about **how to install a standalone Pulsar and built-in connectors**, see [here](getting-started-standalone.md/#installing-pulsar). - -## Start Pulsar standalone - -1. Start Pulsar locally. - - ```bash - - bin/pulsar standalone - - ``` - - All the components of a Pulsar service are started in order. - - You can curl the Pulsar service endpoints to make sure the Pulsar service is up and running correctly. - -2. Check Pulsar binary protocol port. - - ```bash - - telnet localhost 6650 - - ``` - -3. Check Pulsar Function cluster. - - ```bash - - curl -s http://localhost:8080/admin/v2/worker/cluster - - ``` - - **Example output** - - ```json - - [{"workerId":"c-standalone-fw-localhost-6750","workerHostname":"localhost","port":6750}] - - ``` - -4. Make sure a public tenant and a default namespace exist. - - ```bash - - curl -s http://localhost:8080/admin/v2/namespaces/public - - ``` - - **Example output** - - ```json - - ["public/default","public/functions"] - - ``` - -5. All built-in connectors should be listed as available. <br/>
- - ```bash - - curl -s http://localhost:8080/admin/v2/functions/connectors - - ``` - - **Example output** - - ```json - - [{"name":"aerospike","description":"Aerospike database sink","sinkClass":"org.apache.pulsar.io.aerospike.AerospikeStringSink"},{"name":"cassandra","description":"Writes data into Cassandra","sinkClass":"org.apache.pulsar.io.cassandra.CassandraStringSink"},{"name":"kafka","description":"Kafka source and sink connector","sourceClass":"org.apache.pulsar.io.kafka.KafkaStringSource","sinkClass":"org.apache.pulsar.io.kafka.KafkaBytesSink"},{"name":"kinesis","description":"Kinesis sink connector","sinkClass":"org.apache.pulsar.io.kinesis.KinesisSink"},{"name":"rabbitmq","description":"RabbitMQ source connector","sourceClass":"org.apache.pulsar.io.rabbitmq.RabbitMQSource"},{"name":"twitter","description":"Ingest data from Twitter firehose","sourceClass":"org.apache.pulsar.io.twitter.TwitterFireHose"}] - - ``` - - If an error occurs when starting Pulsar service, you may see an exception at the terminal running `pulsar/standalone`, - or you can navigate to the `logs` directory under the Pulsar directory to view the logs. - -## Connect Pulsar to Cassandra - -This section demonstrates how to connect Pulsar to Cassandra. - -:::tip - -* Make sure you have Docker installed. If you do not have one, see [install Docker](https://docs.docker.com/docker-for-mac/install/). -* The Cassandra sink connector reads messages from Pulsar topics and writes the messages into Cassandra tables. For more information, see [Cassandra sink connector](io-cassandra-sink.md). - -::: - -### Setup a Cassandra cluster - -This example uses `cassandra` Docker image to start a single-node Cassandra cluster in Docker. - -1. Start a Cassandra cluster. - - ```bash - - docker run -d --rm --name=cassandra -p 9042:9042 cassandra - - ``` - - :::note - - Before moving to the next steps, make sure the Cassandra cluster is running. - - ::: - -2. Make sure the Docker process is running. - - ```bash - - docker ps - - ``` - -3. Check the Cassandra logs to make sure the Cassandra process is running as expected. - - ```bash - - docker logs cassandra - - ``` - -4. Check the status of the Cassandra cluster. - - ```bash - - docker exec cassandra nodetool status - - ``` - - **Example output** - - ``` - - Datacenter: datacenter1 - ======================= - Status=Up/Down - |/ State=Normal/Leaving/Joining/Moving - -- Address Load Tokens Owns (effective) Host ID Rack - UN 172.17.0.2 103.67 KiB 256 100.0% af0e4b2f-84e0-4f0b-bb14-bd5f9070ff26 rack1 - - ``` - -5. Use `cqlsh` to connect to the Cassandra cluster. - - ```bash - - $ docker exec -ti cassandra cqlsh localhost - Connected to Test Cluster at localhost:9042. - [cqlsh 5.0.1 | Cassandra 3.11.2 | CQL spec 3.4.4 | Native protocol v4] - Use HELP for help. - cqlsh> - - ``` - -6. Create a keyspace `pulsar_test_keyspace`. - - ```bash - - cqlsh> CREATE KEYSPACE pulsar_test_keyspace WITH replication = {'class':'SimpleStrategy', 'replication_factor':1}; - - ``` - -7. Create a table `pulsar_test_table`. - - ```bash - - cqlsh> USE pulsar_test_keyspace; - cqlsh:pulsar_test_keyspace> CREATE TABLE pulsar_test_table (key text PRIMARY KEY, col text); - - ``` - -### Configure a Cassandra sink - -Now that we have a Cassandra cluster running locally. - -In this section, you need to configure a Cassandra sink connector. - -To run a Cassandra sink connector, you need to prepare a configuration file including the information that Pulsar connector runtime needs to know. 
- -For example, how Pulsar connector can find the Cassandra cluster, what is the keyspace and the table that Pulsar connector uses for writing Pulsar messages to, and so on. - -You can create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "roots": "localhost:9042", - "keyspace": "pulsar_test_keyspace", - "columnFamily": "pulsar_test_table", - "keyname": "key", - "columnName": "col" - } - - ``` - -* YAML - - ```yaml - - configs: - roots: "localhost:9042" - keyspace: "pulsar_test_keyspace" - columnFamily: "pulsar_test_table" - keyname: "key" - columnName: "col" - - ``` - -For more information, see [Cassandra sink connector](io-cassandra-sink.md). - -### Create a Cassandra sink - -You can use the [Connector Admin CLI](io-cli.md) -to create a sink connector and perform other operations on them. - -Run the following command to create a Cassandra sink connector with sink type _cassandra_ and the config file _examples/cassandra-sink.yml_ created previously. - -#### Note -> The `sink-type` parameter of the currently built-in connectors is determined by the setting of the `name` parameter specified in the pulsar-io.yaml file. - -```bash - -bin/pulsar-admin sinks create \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink \ - --sink-type cassandra \ - --sink-config-file examples/cassandra-sink.yml \ - --inputs test_cassandra - -``` - -Once the command is executed, Pulsar creates the sink connector _cassandra-test-sink_. - -This sink connector runs -as a Pulsar Function and writes the messages produced in the topic _test_cassandra_ to the Cassandra table _pulsar_test_table_. - -### Inspect a Cassandra sink - -You can use the [Connector Admin CLI](io-cli.md) -to monitor a connector and perform other operations on it. - -* Get the information of a Cassandra sink. - - ```bash - - bin/pulsar-admin sinks get \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink - - ``` - - **Example output** - - ```json - - { - "tenant": "public", - "namespace": "default", - "name": "cassandra-test-sink", - "className": "org.apache.pulsar.io.cassandra.CassandraStringSink", - "inputSpecs": { - "test_cassandra": { - "isRegexPattern": false - } - }, - "configs": { - "roots": "localhost:9042", - "keyspace": "pulsar_test_keyspace", - "columnFamily": "pulsar_test_table", - "keyname": "key", - "columnName": "col" - }, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "autoAck": true, - "archive": "builtin://cassandra" - } - - ``` - -* Check the status of a Cassandra sink. - - ```bash - - bin/pulsar-admin sinks status \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink - - ``` - - **Example output** - - ```json - - { - "numInstances" : 1, - "numRunning" : 1, - "instances" : [ { - "instanceId" : 0, - "status" : { - "running" : true, - "error" : "", - "numRestarts" : 0, - "numReadFromPulsar" : 0, - "numSystemExceptions" : 0, - "latestSystemExceptions" : [ ], - "numSinkExceptions" : 0, - "latestSinkExceptions" : [ ], - "numWrittenToSink" : 0, - "lastReceivedTime" : 0, - "workerId" : "c-standalone-fw-localhost-8080" - } - } ] - } - - ``` - -### Verify a Cassandra sink - -1. Produce some messages to the input topic of the Cassandra sink _test_cassandra_. - - ```bash - - for i in {0..9}; do bin/pulsar-client produce -m "key-$i" -n 1 test_cassandra; done - - ``` - -2. Inspect the status of the Cassandra sink _test_cassandra_. 
- - ```bash - - bin/pulsar-admin sinks status \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink - - ``` - - You can see 10 messages are processed by the Cassandra sink _test_cassandra_. - - **Example output** - - ```json - - { - "numInstances" : 1, - "numRunning" : 1, - "instances" : [ { - "instanceId" : 0, - "status" : { - "running" : true, - "error" : "", - "numRestarts" : 0, - "numReadFromPulsar" : 10, - "numSystemExceptions" : 0, - "latestSystemExceptions" : [ ], - "numSinkExceptions" : 0, - "latestSinkExceptions" : [ ], - "numWrittenToSink" : 10, - "lastReceivedTime" : 1551685489136, - "workerId" : "c-standalone-fw-localhost-8080" - } - } ] - } - - ``` - -3. Use `cqlsh` to connect to the Cassandra cluster. - - ```bash - - docker exec -ti cassandra cqlsh localhost - - ``` - -4. Check the data of the Cassandra table _pulsar_test_table_. - - ```bash - - cqlsh> use pulsar_test_keyspace; - cqlsh:pulsar_test_keyspace> select * from pulsar_test_table; - - key | col - --------+-------- - key-5 | key-5 - key-0 | key-0 - key-9 | key-9 - key-2 | key-2 - key-1 | key-1 - key-3 | key-3 - key-6 | key-6 - key-7 | key-7 - key-4 | key-4 - key-8 | key-8 - - ``` - -### Delete a Cassandra Sink - -You can use the [Connector Admin CLI](io-cli.md) -to delete a connector and perform other operations on it. - -```bash - -bin/pulsar-admin sinks delete \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink - -``` - -## Connect Pulsar to PostgreSQL - -This section demonstrates how to connect Pulsar to PostgreSQL. - -:::tip - -* Make sure you have Docker installed. If you do not have one, see [install Docker](https://docs.docker.com/docker-for-mac/install/). -* The JDBC sink connector pulls messages from Pulsar topics and persists the messages to ClickHouse, MariaDB, PostgreSQL, or SQlite. - -::: - ->For more information, see [JDBC sink connector](io-jdbc-sink.md). - - -### Setup a PostgreSQL cluster - -This example uses the PostgreSQL 12 docker image to start a single-node PostgreSQL cluster in Docker. - -1. Pull the PostgreSQL 12 image from Docker. - - ```bash - - $ docker pull postgres:12 - - ``` - -2. Start PostgreSQL. - - ```bash - - $ docker run -d -it --rm \ - --name pulsar-postgres \ - -p 5432:5432 \ - -e POSTGRES_PASSWORD=password \ - -e POSTGRES_USER=postgres \ - postgres:12 - - ``` - - #### Tip - - Flag | Description | This example - ---|---|---| - `-d` | To start a container in detached mode. | / - `-it` | Keep STDIN open even if not attached and allocate a terminal. | / - `--rm` | Remove the container automatically when it exits. | / - `-name` | Assign a name to the container. | This example specifies _pulsar-postgres_ for the container. - `-p` | Publish the port of the container to the host. | This example publishes the port _5432_ of the container to the host. - `-e` | Set environment variables. | This example sets the following variables:
    - The password for the user is _password_.
    - The name for the user is _postgres_. - - :::tip - - For more information about Docker commands, see [Docker CLI](https://docs.docker.com/engine/reference/commandline/run/). - - ::: - -3. Check if PostgreSQL has been started successfully. - - ```bash - - $ docker logs -f pulsar-postgres - - ``` - - PostgreSQL has been started successfully if the following message appears. - - ```text - - 2020-05-11 20:09:24.492 UTC [1] LOG: starting PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit - 2020-05-11 20:09:24.492 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 - 2020-05-11 20:09:24.492 UTC [1] LOG: listening on IPv6 address "::", port 5432 - 2020-05-11 20:09:24.499 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" - 2020-05-11 20:09:24.523 UTC [55] LOG: database system was shut down at 2020-05-11 20:09:24 UTC - 2020-05-11 20:09:24.533 UTC [1] LOG: database system is ready to accept connections - - ``` - -4. Access to PostgreSQL. - - ```bash - - $ docker exec -it pulsar-postgres /bin/bash - - ``` - -5. Create a PostgreSQL table _pulsar_postgres_jdbc_sink_. - - ```bash - - $ psql -U postgres postgres - - postgres=# create table if not exists pulsar_postgres_jdbc_sink - ( - id serial PRIMARY KEY, - name VARCHAR(255) NOT NULL - ); - - ``` - -### Configure a JDBC sink - -Now we have a PostgreSQL running locally. - -In this section, you need to configure a JDBC sink connector. - -1. Add a configuration file. - - To run a JDBC sink connector, you need to prepare a YAML configuration file including the information that Pulsar connector runtime needs to know. - - For example, how Pulsar connector can find the PostgreSQL cluster, what is the JDBC URL and the table that Pulsar connector uses for writing messages to. - - Create a _pulsar-postgres-jdbc-sink.yaml_ file, copy the following contents to this file, and place the file in the `pulsar/connectors` folder. - - ```yaml - - configs: - userName: "postgres" - password: "password" - jdbcUrl: "jdbc:postgresql://localhost:5432/postgres" - tableName: "pulsar_postgres_jdbc_sink" - - ``` - -2. Create a schema. - - Create a _avro-schema_ file, copy the following contents to this file, and place the file in the `pulsar/connectors` folder. - - ```json - - { - "type": "AVRO", - "schema": "{\"type\":\"record\",\"name\":\"Test\",\"fields\":[{\"name\":\"id\",\"type\":[\"null\",\"int\"]},{\"name\":\"name\",\"type\":[\"null\",\"string\"]}]}", - "properties": {} - } - - ``` - - :::tip - - For more information about AVRO, see [Apache Avro](https://avro.apache.org/docs/1.9.1/). - - ::: - -3. Upload a schema to a topic. - - This example uploads the _avro-schema_ schema to the _pulsar-postgres-jdbc-sink-topic_ topic. - - ```bash - - $ bin/pulsar-admin schemas upload pulsar-postgres-jdbc-sink-topic -f ./connectors/avro-schema - - ``` - -4. Check if the schema has been uploaded successfully. - - ```bash - - $ bin/pulsar-admin schemas get pulsar-postgres-jdbc-sink-topic - - ``` - - The schema has been uploaded successfully if the following message appears. - - ```json - - {"name":"pulsar-postgres-jdbc-sink-topic","schema":"{\"type\":\"record\",\"name\":\"Test\",\"fields\":[{\"name\":\"id\",\"type\":[\"null\",\"int\"]},{\"name\":\"name\",\"type\":[\"null\",\"string\"]}]}","type":"AVRO","properties":{}} - - ``` - -### Create a JDBC sink - -You can use the [Connector Admin CLI](io-cli.md) -to create a sink connector and perform other operations on it. 
- -This example creates a sink connector and specifies the desired information. - -```bash - -$ bin/pulsar-admin sinks create \ ---archive ./connectors/pulsar-io-jdbc-postgres-@pulsar:version@.nar \ ---inputs pulsar-postgres-jdbc-sink-topic \ ---name pulsar-postgres-jdbc-sink \ ---sink-config-file ./connectors/pulsar-postgres-jdbc-sink.yaml \ ---parallelism 1 - -``` - -Once the command is executed, Pulsar creates a sink connector _pulsar-postgres-jdbc-sink_. - -This sink connector runs as a Pulsar Function and writes the messages produced in the topic _pulsar-postgres-jdbc-sink-topic_ to the PostgreSQL table _pulsar_postgres_jdbc_sink_. - - #### Tip - - Flag | Description | This example - ---|---|---| - `--archive` | The path to the archive file for the sink. | _pulsar-io-jdbc-postgres-@pulsar:version@.nar_ | - `--inputs` | The input topic(s) of the sink.

    Multiple topics can be specified as a comma-separated list.|| - `--name` | The name of the sink. | _pulsar-postgres-jdbc-sink_ | - `--sink-config-file` | The path to a YAML config file specifying the configuration of the sink. | _pulsar-postgres-jdbc-sink.yaml_ | - `--parallelism` | The parallelism factor of the sink.

    For example, the number of sink instances to run. | _1_ | - -:::tip - -For more information about `pulsar-admin sinks create options`, see [here](io-cli.md#sinks). - -::: - -The sink has been created successfully if the following message appears. - -```bash - -"Created successfully" - -``` - -### Inspect a JDBC sink - -You can use the [Connector Admin CLI](io-cli.md) -to monitor a connector and perform other operations on it. - -* List all running JDBC sink(s). - - ```bash - - $ bin/pulsar-admin sinks list \ - --tenant public \ - --namespace default - - ``` - - :::tip - - For more information about `pulsar-admin sinks list options`, see [here](io-cli.md/#list-1). - - ::: - - The result shows that only the _postgres-jdbc-sink_ sink is running. - - ```json - - [ - "pulsar-postgres-jdbc-sink" - ] - - ``` - -* Get the information of a JDBC sink. - - ```bash - - $ bin/pulsar-admin sinks get \ - --tenant public \ - --namespace default \ - --name pulsar-postgres-jdbc-sink - - ``` - - :::tip - - For more information about `pulsar-admin sinks get options`, see [here](io-cli.md/#get-1). - - ::: - - The result shows the information of the sink connector, including tenant, namespace, topic and so on. - - ```json - - { - "tenant": "public", - "namespace": "default", - "name": "pulsar-postgres-jdbc-sink", - "className": "org.apache.pulsar.io.jdbc.PostgresJdbcAutoSchemaSink", - "inputSpecs": { - "pulsar-postgres-jdbc-sink-topic": { - "isRegexPattern": false - } - }, - "configs": { - "password": "password", - "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink", - "userName": "postgres", - "tableName": "pulsar_postgres_jdbc_sink" - }, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "autoAck": true - } - - ``` - -* Get the status of a JDBC sink - - ```bash - - $ bin/pulsar-admin sinks status \ - --tenant public \ - --namespace default \ - --name pulsar-postgres-jdbc-sink - - ``` - - :::tip - - For more information about `pulsar-admin sinks status options`, see [here](io-cli.md/#status-1). - - ::: - - The result shows the current status of the sink connector, including the number of instances, running status, worker ID and so on. - - ```json - - { - "numInstances" : 1, - "numRunning" : 1, - "instances" : [ { - "instanceId" : 0, - "status" : { - "running" : true, - "error" : "", - "numRestarts" : 0, - "numReadFromPulsar" : 0, - "numSystemExceptions" : 0, - "latestSystemExceptions" : [ ], - "numSinkExceptions" : 0, - "latestSinkExceptions" : [ ], - "numWrittenToSink" : 0, - "lastReceivedTime" : 0, - "workerId" : "c-standalone-fw-192.168.2.52-8080" - } - } ] - } - - ``` - -### Stop a JDBC sink - -You can use the [Connector Admin CLI](io-cli.md) -to stop a connector and perform other operations on it. - -```bash - -$ bin/pulsar-admin sinks stop \ ---tenant public \ ---namespace default \ ---name pulsar-postgres-jdbc-sink - -``` - -:::tip - -For more information about `pulsar-admin sinks stop options`, see [here](io-cli.md/#stop-1). - -::: - -The sink instance has been stopped successfully if the following message appears. - -```bash - -"Stopped successfully" - -``` - -### Restart a JDBC sink - -You can use the [Connector Admin CLI](io-cli.md) -to restart a connector and perform other operations on it. - -```bash - -$ bin/pulsar-admin sinks restart \ ---tenant public \ ---namespace default \ ---name pulsar-postgres-jdbc-sink - -``` - -:::tip - -For more information about `pulsar-admin sinks restart options`, see [here](io-cli.md/#restart-1). <br/>
 - -::: - -The sink instance has been started successfully if the following message appears. - -```bash - -"Started successfully" - -``` - -:::tip - -* Optionally, you can run a standalone sink connector using `pulsar-admin sinks localrun options`. -Note that `pulsar-admin sinks localrun options` **runs a sink connector locally**, while `pulsar-admin sinks start options` **starts a sink connector in a cluster**. -* For more information about `pulsar-admin sinks localrun options`, see [here](io-cli.md#localrun-1). - -::: - -### Update a JDBC sink - -You can use the [Connector Admin CLI](io-cli.md) -to update a connector and perform other operations on it. - -This example updates the parallelism of the _pulsar-postgres-jdbc-sink_ sink connector to 2. - -```bash - -$ bin/pulsar-admin sinks update \ ---name pulsar-postgres-jdbc-sink \ ---parallelism 2 - -``` - -:::tip - -For more information about `pulsar-admin sinks update options`, see [here](io-cli.md/#update-1). - -::: - -The sink connector has been updated successfully if the following message appears. - -```bash - -"Updated successfully" - -``` - -This example double-checks the information. - -```bash - -$ bin/pulsar-admin sinks get \ ---tenant public \ ---namespace default \ ---name pulsar-postgres-jdbc-sink - -``` - -The result shows that the parallelism is 2. - -```json - -{ - "tenant": "public", - "namespace": "default", - "name": "pulsar-postgres-jdbc-sink", - "className": "org.apache.pulsar.io.jdbc.PostgresJdbcAutoSchemaSink", - "inputSpecs": { - "pulsar-postgres-jdbc-sink-topic": { - "isRegexPattern": false - } - }, - "configs": { - "password": "password", - "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink", - "userName": "postgres", - "tableName": "pulsar_postgres_jdbc_sink" - }, - "parallelism": 2, - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "autoAck": true -} - -``` - -### Delete a JDBC sink - -You can use the [Connector Admin CLI](io-cli.md) -to delete a connector and perform other operations on it. - -This example deletes the _pulsar-postgres-jdbc-sink_ sink connector. - -```bash - -$ bin/pulsar-admin sinks delete \ ---tenant public \ ---namespace default \ ---name pulsar-postgres-jdbc-sink - -``` - -:::tip - -For more information about `pulsar-admin sinks delete options`, see [here](io-cli.md/#delete-1). - -::: - -The sink connector has been deleted successfully if the following message appears. - -```text - -"Deleted successfully" - -``` - -This example double-checks the status of the sink connector. - -```bash - -$ bin/pulsar-admin sinks get \ ---tenant public \ ---namespace default \ ---name pulsar-postgres-jdbc-sink - -``` - -The result shows that the sink connector does not exist. - -```text - -HTTP 404 Not Found - -Reason: Sink pulsar-postgres-jdbc-sink doesn't exist - -``` - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/io-rabbitmq-sink.md b/site2/website/versioned_docs/version-2.9.0-deprecated/io-rabbitmq-sink.md deleted file mode 100644 index d7fda99460dc97..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/io-rabbitmq-sink.md +++ /dev/null @@ -1,85 +0,0 @@ ---- -id: io-rabbitmq-sink -title: RabbitMQ sink connector -sidebar_label: "RabbitMQ sink connector" -original_id: io-rabbitmq-sink ---- - -The RabbitMQ sink connector pulls messages from Pulsar topics -and persists the messages to RabbitMQ queues. - - -## Configuration - -The configuration of the RabbitMQ sink connector has the following properties. <br/>
- - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `connectionName` |String| true | " " (empty string) | The connection name. | -| `host` | String| true | " " (empty string) | The RabbitMQ host. | -| `port` | int |true | 5672 | The RabbitMQ port. | -| `virtualHost` |String|true | / | The virtual host used to connect to RabbitMQ. | -| `username` | String|false | guest | The username used to authenticate to RabbitMQ. | -| `password` | String|false | guest | The password used to authenticate to RabbitMQ. | -| `queueName` | String|true | " " (empty string) | The RabbitMQ queue name that messages should be read from or written to. | -| `requestedChannelMax` | int|false | 0 | The initially requested maximum channel number.

    0 means unlimited. | -| `requestedFrameMax` | int|false |0 | The initially requested maximum frame size in octets.

    0 means unlimited. | -| `connectionTimeout` | int|false | 60000 | The timeout of TCP connection establishment in milliseconds.

    0 means infinite. | -| `handshakeTimeout` | int|false | 10000 | The timeout of AMQP 0-9-1 protocol handshake in milliseconds. | -| `requestedHeartbeat` | int|false | 60 | The requested heartbeat timeout in seconds. | -| `exchangeName` | String|true | " " (empty string) | The exchange to publish messages to. | -| `routingKey` |String|true | " " (empty string) |The routing key used to publish messages. | - - -### Example - -Before using the RabbitMQ sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "host": "localhost", - "port": "5672", - "virtualHost": "/", - "username": "guest", - "password": "guest", - "queueName": "test-queue", - "connectionName": "test-connection", - "requestedChannelMax": "0", - "requestedFrameMax": "0", - "connectionTimeout": "60000", - "handshakeTimeout": "10000", - "requestedHeartbeat": "60", - "exchangeName": "test-exchange", - "routingKey": "test-key" - } - - ``` - -* YAML - - ```yaml - - configs: - host: "localhost" - port: 5672 - virtualHost: "/" - username: "guest" - password: "guest" - queueName: "test-queue" - connectionName: "test-connection" - requestedChannelMax: 0 - requestedFrameMax: 0 - connectionTimeout: 60000 - handshakeTimeout: 10000 - requestedHeartbeat: 60 - exchangeName: "test-exchange" - routingKey: "test-key" - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/io-rabbitmq-source.md b/site2/website/versioned_docs/version-2.9.0-deprecated/io-rabbitmq-source.md deleted file mode 100644 index c2c31cc97d10d9..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/io-rabbitmq-source.md +++ /dev/null @@ -1,85 +0,0 @@ ---- -id: io-rabbitmq-source -title: RabbitMQ source connector -sidebar_label: "RabbitMQ source connector" -original_id: io-rabbitmq-source ---- - -The RabbitMQ source connector receives messages from RabbitMQ clusters -and writes messages to Pulsar topics. - -## Configuration - -The configuration of the RabbitMQ source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `connectionName` |String| true | " " (empty string) | The connection name. | -| `host` | String| true | " " (empty string) | The RabbitMQ host. | -| `port` | int |true | 5672 | The RabbitMQ port. | -| `virtualHost` |String|true | / | The virtual host used to connect to RabbitMQ. | -| `username` | String|false | guest | The username used to authenticate to RabbitMQ. | -| `password` | String|false | guest | The password used to authenticate to RabbitMQ. | -| `queueName` | String|true | " " (empty string) | The RabbitMQ queue name that messages should be read from or written to. | -| `requestedChannelMax` | int|false | 0 | The initially requested maximum channel number.<br/><br/>

    0 means unlimited. | -| `requestedFrameMax` | int|false |0 | The initially requested maximum frame size in octets.

    0 means unlimited. | -| `connectionTimeout` | int|false | 60000 | The timeout of TCP connection establishment in milliseconds.

    0 means infinite. | -| `handshakeTimeout` | int|false | 10000 | The timeout of AMQP0-9-1 protocol handshake in milliseconds. | -| `requestedHeartbeat` | int|false | 60 | The requested heartbeat timeout in seconds. | -| `prefetchCount` | int|false | 0 | The maximum number of messages that the server delivers.

    0 means unlimited. | -| `prefetchGlobal` | boolean|false | false |Whether the setting should be applied to the entire channel rather than each consumer. | -| `passive` | boolean|false | false | Whether the rabbitmq consumer should create its own queue or bind to an existing one. | - -### Example - -Before using the RabbitMQ source connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "host": "localhost", - "port": "5672", - "virtualHost": "/", - "username": "guest", - "password": "guest", - "queueName": "test-queue", - "connectionName": "test-connection", - "requestedChannelMax": "0", - "requestedFrameMax": "0", - "connectionTimeout": "60000", - "handshakeTimeout": "10000", - "requestedHeartbeat": "60", - "prefetchCount": "0", - "prefetchGlobal": "false", - "passive": "false" - } - - ``` - -* YAML - - ```yaml - - configs: - host: "localhost" - port: 5672 - virtualHost: "/" - username: "guest" - password: "guest" - queueName: "test-queue" - connectionName: "test-connection" - requestedChannelMax: 0 - requestedFrameMax: 0 - connectionTimeout: 60000 - handshakeTimeout: 10000 - requestedHeartbeat: 60 - prefetchCount: 0 - prefetchGlobal: "false" - passive: "false" - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/io-redis-sink.md b/site2/website/versioned_docs/version-2.9.0-deprecated/io-redis-sink.md deleted file mode 100644 index 0caf21bcf62e88..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/io-redis-sink.md +++ /dev/null @@ -1,156 +0,0 @@ ---- -id: io-redis-sink -title: Redis sink connector -sidebar_label: "Redis sink connector" -original_id: io-redis-sink ---- - -The Redis sink connector pulls messages from Pulsar topics -and persists the messages to a Redis database. - - - -## Configuration - -The configuration of the Redis sink connector has the following properties. - - - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `redisHosts` |String|true|" " (empty string) | A comma-separated list of Redis hosts to connect to. | -| `redisPassword` |String|false|" " (empty string) | The password used to connect to Redis. | -| `redisDatabase` | int|true|0 | The Redis database to connect to. | -| `clientMode` |String| false|Standalone | The client mode when interacting with Redis cluster.

    Below are the available options:
  - Standalone<br/>
  - Cluster<br/>
  | -| `autoReconnect` | boolean|false|true | Whether the Redis client automatically reconnects or not. | -| `requestQueue` | int|false|2147483647 | The maximum number of queued requests to Redis. | -| `tcpNoDelay` |boolean| false| false | Whether to enable TCP with no delay or not. | -| `keepAlive` | boolean|false | false |Whether to enable a keepalive to Redis or not. | -| `connectTimeout` |long| false|10000 | The time to wait before timing out when connecting in milliseconds. | -| `operationTimeout` | long|false|10000 | The time before an operation is marked as timed out in milliseconds. | -| `batchTimeMs` | int|false|1000 | The Redis operation time in milliseconds. | -| `batchSize` | int|false|200 | The batch size of writing to the Redis database. | - - -### Example - -Before using the Redis sink connector, you need to create a configuration file in the path where you will start the Pulsar service (i.e. `PULSAR_HOME`) through one of the following methods. - -* JSON - - ```json - - { - "redisHosts": "localhost:6379", - "redisPassword": "mypassword", - "redisDatabase": "0", - "clientMode": "Standalone", - "operationTimeout": "2000", - "batchSize": "1", - "batchTimeMs": "1000", - "connectTimeout": "3000" - } - - ``` - -* YAML - - ```yaml - - configs: - redisHosts: "localhost:6379" - redisPassword: "mypassword" - redisDatabase: 0 - clientMode: "Standalone" - operationTimeout: 2000 - batchSize: 1 - batchTimeMs: 1000 - connectTimeout: 3000 - - ``` - -### Usage - -This example shows how to write records to a Redis database using the Pulsar Redis connector. - -1. Start a Redis server. - - ```bash - - $ docker pull redis:5.0.5 - $ docker run -d -p 6379:6379 --name my-redis redis:5.0.5 --requirepass "mypassword" - - ``` - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - - Make sure the NAR file is available at `connectors/pulsar-io-redis-@pulsar:version@.nar`. - -3. Start the Pulsar Redis connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin sinks localrun \ - --archive connectors/pulsar-io-redis-@pulsar:version@.nar \ - --tenant public \ - --namespace default \ - --name my-redis-sink \ - --sink-config '{"redisHosts": "localhost:6379","redisPassword": "mypassword","redisDatabase": "0","clientMode": "Standalone","operationTimeout": "3000","batchSize": "1"}' \ - --inputs my-redis-topic - - ``` - - * Use the **YAML** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin sinks localrun \ - --archive connectors/pulsar-io-redis-@pulsar:version@.nar \ - --tenant public \ - --namespace default \ - --name my-redis-sink \ - --sink-config-file redis-sink-config.yaml \ - --inputs my-redis-topic - - ``` - -4. Publish records to the topic. - - ```bash - - $ bin/pulsar-client produce \ - persistent://public/default/my-redis-topic \ - -k "streaming" \ - -m "Pulsar" - - ``` - -5. Start a Redis client in Docker. - - ```bash - - $ docker exec -it my-redis redis-cli -a "mypassword" - - ``` - -6. Check the key/value in Redis. <br/>
- - ``` - - 127.0.0.1:6379> keys * - 1) "streaming" - 127.0.0.1:6379> get "streaming" - "Pulsar" - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/io-solr-sink.md b/site2/website/versioned_docs/version-2.9.0-deprecated/io-solr-sink.md deleted file mode 100644 index df2c3612c38eb6..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/io-solr-sink.md +++ /dev/null @@ -1,65 +0,0 @@ ---- -id: io-solr-sink -title: Solr sink connector -sidebar_label: "Solr sink connector" -original_id: io-solr-sink ---- - -The Solr sink connector pulls messages from Pulsar topics -and persists the messages to Solr collections. - - - -## Configuration - -The configuration of the Solr sink connector has the following properties. - - - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `solrUrl` | String|true|" " (empty string) |
  - Comma-separated zookeeper hosts with chroot used in the SolrCloud mode.<br/>
    **Example**
    `localhost:2181,localhost:2182/chroot`

  - URL to connect to Solr used in standalone mode.<br/>
    **Example**
    `localhost:8983/solr`
  | -| `solrMode` | String|true|SolrCloud| The client mode when interacting with the Solr cluster.<br/><br/>

    Below are the available options:
  - Standalone<br/>
  - SolrCloud<br/>
  | -| `solrCollection` |String|true| " " (empty string) | Solr collection name to which records need to be written. | -| `solrCommitWithinMs` |int| false|10 | The time in milliseconds within which Solr commits updates.| -| `username` |String|false| " " (empty string) | The username for basic authentication.<br/><br/>

    **Note: `username` is case-sensitive.** | -| `password` | String|false| " " (empty string) | The password for basic authentication.<br/><br/>

    **Note: `password` is case-sensitive.** | - - - -### Example - -Before using the Solr sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "solrUrl": "localhost:2181,localhost:2182/chroot", - "solrMode": "SolrCloud", - "solrCollection": "techproducts", - "solrCommitWithinMs": 100, - "username": "fakeuser", - "password": "fake@123" - } - - ``` - -* YAML - - ```yaml - - configs: - solrUrl: "localhost:2181,localhost:2182/chroot" - solrMode: "SolrCloud" - solrCollection: "techproducts" - solrCommitWithinMs: 100 - username: "fakeuser" - password: "fake@123" - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/io-twitter-source.md b/site2/website/versioned_docs/version-2.9.0-deprecated/io-twitter-source.md deleted file mode 100644 index 8de3504dd0fef2..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/io-twitter-source.md +++ /dev/null @@ -1,28 +0,0 @@ ---- -id: io-twitter-source -title: Twitter Firehose source connector -sidebar_label: "Twitter Firehose source connector" -original_id: io-twitter-source ---- - -The Twitter Firehose source connector receives tweets from Twitter Firehose and -writes the tweets to Pulsar topics. - -## Configuration - -The configuration of the Twitter Firehose source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `consumerKey` | String|true | " " (empty string) | The twitter OAuth consumer key.<br/><br/>

    For more information, see [Access tokens](https://developer.twitter.com/en/docs/basics/authentication/guides/access-tokens). | -| `consumerSecret` | String |true | " " (empty string) | The twitter OAuth consumer secret. | -| `token` | String|true | " " (empty string) | The twitter OAuth token. | -| `tokenSecret` | String|true | " " (empty string) | The twitter OAuth token secret. | -`guestimateTweetTime`|Boolean|false|false|Most firehose events have null createdAt time.<br/><br/>

    If `guestimateTweetTime` is set to true, the connector estimates the createdTime of each firehose event to be the current time. -| `clientName` | String |false | openconnector-twitter-source| The twitter firehose client name. | -| `clientHosts` |String| false | Constants.STREAM_HOST | The twitter firehose hosts to which the client connects. | -| `clientBufferSize` | int|false | 50000 | The buffer size for buffering tweets fetched from twitter firehose. | - -> For more information about OAuth credentials, see [Twitter developers portal](https://developer.twitter.com/en.html). diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/io-twitter.md b/site2/website/versioned_docs/version-2.9.0-deprecated/io-twitter.md deleted file mode 100644 index 3b2f6325453c3c..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/io-twitter.md +++ /dev/null @@ -1,7 +0,0 @@ ---- -id: io-twitter -title: Twitter Firehose Connector -sidebar_label: "Twitter Firehose Connector" -original_id: io-twitter ---- - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/io-use.md b/site2/website/versioned_docs/version-2.9.0-deprecated/io-use.md deleted file mode 100644 index da9ed746c4d372..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/io-use.md +++ /dev/null @@ -1,1787 +0,0 @@ ---- -id: io-use -title: How to use Pulsar connectors -sidebar_label: "Use" -original_id: io-use ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -This guide describes how to use Pulsar connectors. - -## Install a connector - -Pulsar bundles several [builtin connectors](io-connectors.md) used to move data in and out of commonly used systems (such as databases and messaging systems). Optionally, you can create and use your desired non-builtin connectors. - -:::note - -When using a non-builtin connector, you need to specify the path of an archive file for the connector. - -::: - -To set up a builtin connector, follow -the instructions [here](getting-started-standalone.md#installing-builtin-connectors). - -After the setup, the builtin connector is automatically discovered by Pulsar brokers (or function-workers), so no additional installation steps are required. - -## Configure a connector - -You can configure the following information: - -* [Configure a default storage location for a connector](#configure-a-default-storage-location-for-a-connector) - -* [Configure a connector with a YAML file](#configure-a-connector-with-yaml-file) - -### Configure a default storage location for a connector - -To configure a default folder for builtin connectors, set the `connectorsDirectory` parameter in the `./conf/functions_worker.yml` configuration file. - -**Example** - -Set the `./connectors` folder as the default storage location for builtin connectors. - -``` - -######################## -# Connectors -######################## - -connectorsDirectory: ./connectors - -``` - -### Configure a connector with a YAML file - -To configure a connector, you need to provide a YAML configuration file when creating a connector. - -The YAML configuration file tells Pulsar where to locate connectors and how to connect connectors with Pulsar topics. <br/>
- -**Example 1** - -Below is a YAML configuration file of a Cassandra sink, which tells Pulsar: - -* Which Cassandra cluster to connect - -* What is the `keyspace` and `columnFamily` to be used in Cassandra for collecting data - -* How to map Pulsar messages into Cassandra table key and columns - -```shell - -tenant: public -namespace: default -name: cassandra-test-sink -... -# cassandra specific config -configs: - roots: "localhost:9042" - keyspace: "pulsar_test_keyspace" - columnFamily: "pulsar_test_table" - keyname: "key" - columnName: "col" - -``` - -**Example 2** - -Below is a YAML configuration file of a Kafka source. - -```shell - -configs: - bootstrapServers: "pulsar-kafka:9092" - groupId: "test-pulsar-io" - topic: "my-topic" - sessionTimeoutMs: "10000" - autoCommitEnabled: "false" - -``` - -**Example 3** - -Below is a YAML configuration file of a PostgreSQL JDBC sink. - -```shell - -configs: - userName: "postgres" - password: "password" - jdbcUrl: "jdbc:postgresql://localhost:5432/test_jdbc" - tableName: "test_jdbc" - -``` - -## Get available connectors - -Before starting using connectors, you can perform the following operations: - -* [Reload connectors](#reload) - -* [Get a list of available connectors](#get-available-connectors) - -### `reload` - -If you add or delete a nar file in a connector folder, reload the available builtin connector before using it. - -#### Source - -Use the `reload` subcommand. - -```shell - -$ pulsar-admin sources reload - -``` - -For more information, see [`here`](io-cli.md#reload). - -#### Sink - -Use the `reload` subcommand. - -```shell - -$ pulsar-admin sinks reload - -``` - -For more information, see [`here`](io-cli.md#reload-1). - -### `available` - -After reloading connectors (optional), you can get a list of available connectors. - -#### Source - -Use the `available-sources` subcommand. - -```shell - -$ pulsar-admin sources available-sources - -``` - -#### Sink - -Use the `available-sinks` subcommand. - -```shell - -$ pulsar-admin sinks available-sinks - -``` - -## Run a connector - -To run a connector, you can perform the following operations: - -* [Create a connector](#create) - -* [Start a connector](#start) - -* [Run a connector locally](#localrun) - -### `create` - -You can create a connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Create a source connector. - -````mdx-code-block - - - - -Use the `create` subcommand. - -``` - -$ pulsar-admin sources create options - -``` - -For more information, see [here](io-cli.md#create). - - - - -Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/registerSource?version=@pulsar:version_number@} - - - - -* Create a source connector with a **local file**. - - ```java - - void createSource(SourceConfig sourceConfig, - String fileName) - throws PulsarAdminException - - ``` - - **Parameter** - - |Name|Description - |---|--- - `sourceConfig` | The source configuration object - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`createSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#createSource-SourceConfig-java.lang.String-). - -* Create a source connector using a **remote file** with a URL from which fun-pkg can be downloaded. - - ```java - - void createSourceWithUrl(SourceConfig sourceConfig, - String pkgUrl) - throws PulsarAdminException - - ``` - - Supported URLs are `http` and `file`. 
- - **Example** - - * HTTP: http://www.repo.com/fileName.jar - - * File: file:///dir/fileName.jar - - **Parameter** - - Parameter| Description - |---|--- - `sourceConfig` | The source configuration object - `pkgUrl` | URL from which pkg can be downloaded - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`createSourceWithUrl`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#createSourceWithUrl-SourceConfig-java.lang.String-). - - - - -```` - -#### Sink - -Create a sink connector. - -````mdx-code-block - - - - -Use the `create` subcommand. - -``` - -$ pulsar-admin sinks create options - -``` - -For more information, see [here](io-cli.md#create-1). - - - - -Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sinkName|operation/registerSink?version=@pulsar:version_number@} - - - - -* Create a sink connector with a **local file**. - - ```java - - void createSink(SinkConfig sinkConfig, - String fileName) - throws PulsarAdminException - - ``` - - **Parameter** - - |Name|Description - |---|--- - `sinkConfig` | The sink configuration object - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`createSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#createSink-SinkConfig-java.lang.String-). - -* Create a sink connector using a **remote file** with a URL from which fun-pkg can be downloaded. - - ```java - - void createSinkWithUrl(SinkConfig sinkConfig, - String pkgUrl) - throws PulsarAdminException - - ``` - - Supported URLs are `http` and `file`. - - **Example** - - * HTTP: http://www.repo.com/fileName.jar - - * File: file:///dir/fileName.jar - - **Parameter** - - Parameter| Description - |---|--- - `sinkConfig` | The sink configuration object - `pkgUrl` | URL from which pkg can be downloaded - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`createSinkWithUrl`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#createSinkWithUrl-SinkConfig-java.lang.String-). - - - - -```` - -### `start` - -You can start a connector using **Admin CLI** or **REST API**. - -#### Source - -Start a source connector. - -````mdx-code-block - - - - -Use the `start` subcommand. - -``` - -$ pulsar-admin sources start options - -``` - -For more information, see [here](io-cli.md#start). - - - - -* Start **all** source connectors. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/start|operation/startSource?version=@pulsar:version_number@} - -* Start a **specified** source connector. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/:instanceId/start|operation/startSource?version=@pulsar:version_number@} - - - - -```` - -#### Sink - -Start a sink connector. - -````mdx-code-block - - - - -Use the `start` subcommand. - -``` - -$ pulsar-admin sinks start options - -``` - -For more information, see [here](io-cli.md#start-1). - - - - -* Start **all** sink connectors. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sinkName/start|operation/startSink?version=@pulsar:version_number@} - -* Start a **specified** sink connector. 
- - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sourceName/:instanceId/start|operation/startSink?version=@pulsar:version_number@} - - - - -```` - -### `localrun` - -You can run a connector locally rather than deploying it on a Pulsar cluster using **Admin CLI**. - -#### Source - -Run a source connector locally. - -````mdx-code-block - - - - -Use the `localrun` subcommand. - -``` - -$ pulsar-admin sources localrun options - -``` - -For more information, see [here](io-cli.md#localrun). - - - - -```` - -#### Sink - -Run a sink connector locally. - -````mdx-code-block - - - - -Use the `localrun` subcommand. - -``` - -$ pulsar-admin sinks localrun options - -``` - -For more information, see [here](io-cli.md#localrun-1). - - - - -```` - -## Monitor a connector - -To monitor a connector, you can perform the following operations: - -* [Get the information of a connector](#get) - -* [Get the list of all running connectors](#list) - -* [Get the current status of a connector](#status) - -### `get` - -You can get the information of a connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Get the information of a source connector. - -````mdx-code-block - - - - -Use the `get` subcommand. - -``` - -$ pulsar-admin sources get options - -``` - -For more information, see [here](io-cli.md#get). - - - - -Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/getSourceInfo?version=@pulsar:version_number@} - - - - -```java - -SourceConfig getSource(String tenant, - String namespace, - String source) - throws PulsarAdminException - -``` - -**Example** - -This is a sourceConfig. - -```java - -{ - "tenant": "tenantName", - "namespace": "namespaceName", - "name": "sourceName", - "className": "className", - "topicName": "topicName", - "configs": {}, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "resources": { - "cpu": 1.0, - "ram": 1073741824, - "disk": 10737418240 - } -} - -``` - -This is a sourceConfig example. - -``` - -{ - "tenant": "public", - "namespace": "default", - "name": "debezium-mysql-source", - "className": "org.apache.pulsar.io.debezium.mysql.DebeziumMysqlSource", - "topicName": "debezium-mysql-topic", - "configs": { - "database.user": "debezium", - "database.server.id": "184054", - "database.server.name": "dbserver1", - "database.port": "3306", - "database.hostname": "localhost", - "database.password": "dbz", - "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650", - "value.converter": "org.apache.kafka.connect.json.JsonConverter", - "database.whitelist": "inventory", - "key.converter": "org.apache.kafka.connect.json.JsonConverter", - "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory", - "pulsar.service.url": "pulsar://127.0.0.1:6650", - "database.history.pulsar.topic": "history-topic2" - }, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "resources": { - "cpu": 1.0, - "ram": 1073741824, - "disk": 10737418240 - } -} - -``` - -**Exception** - -Exception name | Description -|---|--- -`PulsarAdminException.NotAuthorizedException` | You don't have the admin permission -`PulsarAdminException.NotFoundException` | Cluster doesn't exist -`PulsarAdminException` | Unexpected error - -For more information, see [`getSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#getSource-java.lang.String-java.lang.String-java.lang.String-). 
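-As a quick sketch of how this call is typically wired up (the admin URL is an assumed local endpoint, and the connector name reuses the Debezium example above):
-
-```java
-
-import org.apache.pulsar.client.admin.PulsarAdmin;
-import org.apache.pulsar.common.io.SourceConfig;
-
-PulsarAdmin admin = PulsarAdmin.builder()
-        .serviceHttpUrl("http://localhost:8080") // assumed admin endpoint
-        .build();
-
-// Fetch the configuration of a running source and inspect a few fields
-SourceConfig conf = admin.sources().getSource("public", "default", "debezium-mysql-source");
-System.out.println(conf.getClassName() + " -> " + conf.getTopicName());
-
-admin.close();
-
-```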
- - - - -```` - -#### Sink - -Get the information of a sink connector. - -````mdx-code-block - - - - -Use the `get` subcommand. - -``` - -$ pulsar-admin sinks get options - -``` - -For more information, see [here](io-cli.md#get-1). - - - - -Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sinks/:tenant/:namespace/:sinkName|operation/getSinkInfo?version=@pulsar:version_number@} - - - - -```java - -SinkConfig getSink(String tenant, - String namespace, - String sink) - throws PulsarAdminException - -``` - -**Example** - -This is a sinkConfig. - -```json - -{ -"tenant": "tenantName", -"namespace": "namespaceName", -"name": "sinkName", -"className": "className", -"inputSpecs": { -"topicName": { - "isRegexPattern": false -} -}, -"configs": {}, -"parallelism": 1, -"processingGuarantees": "ATLEAST_ONCE", -"retainOrdering": false, -"autoAck": true -} - -``` - -This is a sinkConfig example. - -```json - -{ - "tenant": "public", - "namespace": "default", - "name": "pulsar-postgres-jdbc-sink", - "className": "org.apache.pulsar.io.jdbc.PostgresJdbcAutoSchemaSink", - "inputSpecs": { - "pulsar-postgres-jdbc-sink-topic": { - "isRegexPattern": false - } - }, - "configs": { - "password": "password", - "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink", - "userName": "postgres", - "tableName": "pulsar_postgres_jdbc_sink" - }, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "autoAck": true -} - -``` - -**Parameter description** - -Name| Description -|---|--- -`tenant` | Tenant name -`namespace` | Namespace name -`sink` | Sink name - -For more information, see [`getSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#getSink-java.lang.String-java.lang.String-java.lang.String-). - - - - -```` - -### `list` - -You can get the list of all running connectors using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Get the list of all running source connectors. - -````mdx-code-block - - - - -Use the `list` subcommand. - -``` - -$ pulsar-admin sources list options - -``` - -For more information, see [here](io-cli.md#list). - - - - -Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sources/:tenant/:namespace|operation/listSources?version=@pulsar:version_number@} - - - - -```java - -List listSources(String tenant, - String namespace) - throws PulsarAdminException - -``` - -**Response example** - -```java - -["f1", "f2", "f3"] - -``` - -**Exception** - -Exception name | Description -|---|--- -`PulsarAdminException.NotAuthorizedException` | You don't have the admin permission -`PulsarAdminException` | Unexpected error - -For more information, see [`listSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#listSources-java.lang.String-java.lang.String-). - - - - -```` - -#### Sink - -Get the list of all running sink connectors. - -````mdx-code-block - - - - -Use the `list` subcommand. - -``` - -$ pulsar-admin sinks list options - -``` - -For more information, see [here](io-cli.md#list-1). 
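-For example, to list the sinks running under the `public` tenant and `default` namespace used throughout this guide:
-
-```shell
-
-$ pulsar-admin sinks list \
-  --tenant public \
-  --namespace default
-
-```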
- - - - -Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sinks/:tenant/:namespace|operation/listSinks?version=@pulsar:version_number@} - - - - -```java - -List listSinks(String tenant, - String namespace) - throws PulsarAdminException - -``` - -**Response example** - -```java - -["f1", "f2", "f3"] - -``` - -**Exception** - -Exception name | Description -|---|--- -`PulsarAdminException.NotAuthorizedException` | You don't have the admin permission -`PulsarAdminException` | Unexpected error - -For more information, see [`listSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#listSinks-java.lang.String-java.lang.String-). - - - - -```` - -### `status` - -You can get the current status of a connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Get the current status of a source connector. - -````mdx-code-block - - - - -Use the `status` subcommand. - -``` - -$ pulsar-admin sources status options - -``` - -For more information, see [here](io-cli.md#status). - - - - -* Get the current status of **all** source connectors. - - Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sources/:tenant/:namespace/:sourceName/status|operation/getSourceStatus?version=@pulsar:version_number@} - -* Gets the current status of a **specified** source connector. - - Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sources/:tenant/:namespace/:sourceName/:instanceId/status|operation/getSourceStatus?version=@pulsar:version_number@} - - - - -* Get the current status of **all** source connectors. - - ```java - - SourceStatus getSourceStatus(String tenant, - String namespace, - String source) - throws PulsarAdminException - - ``` - - **Parameter** - - Parameter| Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `sink` | Source name - - **Exception** - - Name | Description - |---|--- - `PulsarAdminException` | Unexpected error - - For more information, see [`getSourceStatus`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#getSource-java.lang.String-java.lang.String-java.lang.String-). - -* Gets the current status of a **specified** source connector. - - ```java - - SourceStatus.SourceInstanceStatus.SourceInstanceStatusData getSourceStatus(String tenant, - String namespace, - String source, - int id) - throws PulsarAdminException - - ``` - - **Parameter** - - Parameter| Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `sink` | Source name - `id` | Source instanceID - - **Exception** - - Exception name | Description - |---|--- - `PulsarAdminException` | Unexpected error - - For more information, see [`getSourceStatus`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#getSourceStatus-java.lang.String-java.lang.String-java.lang.String-int-). - - - - -```` - -#### Sink - -Get the current status of a Pulsar sink connector. - -````mdx-code-block - - - - -Use the `status` subcommand. - -``` - -$ pulsar-admin sinks status options - -``` - -For more information, see [here](io-cli.md#status-1). - - - - -* Get the current status of **all** sink connectors. - - Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sinks/:tenant/:namespace/:sinkName/status|operation/getSinkStatus?version=@pulsar:version_number@} - -* Gets the current status of a **specified** sink connector. 
- - Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sinks/:tenant/:namespace/:sourceName/:instanceId/status|operation/getSinkInstanceStatus?version=@pulsar:version_number@} - - - - -* Get the current status of **all** sink connectors. - - ```java - - SinkStatus getSinkStatus(String tenant, - String namespace, - String sink) - throws PulsarAdminException - - ``` - - **Parameter** - - Parameter| Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `sink` | Source name - - **Exception** - - Exception name | Description - |---|--- - `PulsarAdminException` | Unexpected error - - For more information, see [`getSinkStatus`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#getSinkStatus-java.lang.String-java.lang.String-java.lang.String-). - -* Gets the current status of a **specified** source connector. - - ```java - - SinkStatus.SinkInstanceStatus.SinkInstanceStatusData getSinkStatus(String tenant, - String namespace, - String sink, - int id) - throws PulsarAdminException - - ``` - - **Parameter** - - Parameter| Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `sink` | Source name - `id` | Sink instanceID - - **Exception** - - Exception name | Description - |---|--- - `PulsarAdminException` | Unexpected error - - For more information, see [`getSinkStatusWithInstanceID`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#getSinkStatus-java.lang.String-java.lang.String-java.lang.String-int-). - - - - -```` - -## Update a connector - -### `update` - -You can update a running connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Update a running Pulsar source connector. - -````mdx-code-block - - - - -Use the `update` subcommand. - -``` - -$ pulsar-admin sources update options - -``` - -For more information, see [here](io-cli.md#update). - - - - -Send a `PUT` request to this endpoint: {@inject: endpoint|PUT|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/updateSource?version=@pulsar:version_number@} - - - - -* Update a running source connector with a **local file**. - - ```java - - void updateSource(SourceConfig sourceConfig, - String fileName) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - |`sourceConfig` | The source configuration object - - **Exception** - - |Name|Description| - |---|--- - |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission - | `PulsarAdminException.NotFoundException` | Cluster doesn't exist - | `PulsarAdminException` | Unexpected error - - For more information, see [`updateSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#updateSource-SourceConfig-java.lang.String-). - -* Update a source connector using a **remote file** with a URL from which fun-pkg can be downloaded. - - ```java - - void updateSourceWithUrl(SourceConfig sourceConfig, - String pkgUrl) - throws PulsarAdminException - - ``` - - Supported URLs are `http` and `file`. 
- - **Example** - - * HTTP: http://www.repo.com/fileName.jar - - * File: file:///dir/fileName.jar - - **Parameter** - - | Name | Description - |---|--- - | `sourceConfig` | The source configuration object - | `pkgUrl` | URL from which pkg can be downloaded - - **Exception** - - |Name|Description| - |---|--- - |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission - | `PulsarAdminException.NotFoundException` | Cluster doesn't exist - | `PulsarAdminException` | Unexpected error - -For more information, see [`createSourceWithUrl`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#updateSourceWithUrl-SourceConfig-java.lang.String-). - - - - -```` - -#### Sink - -Update a running Pulsar sink connector. - -````mdx-code-block - - - - -Use the `update` subcommand. - -``` - -$ pulsar-admin sinks update options - -``` - -For more information, see [here](io-cli.md#update-1). - - - - -Send a `PUT` request to this endpoint: {@inject: endpoint|PUT|/admin/v3/sinks/:tenant/:namespace/:sinkName|operation/updateSink?version=@pulsar:version_number@} - - - - -* Update a running sink connector with a **local file**. - - ```java - - void updateSink(SinkConfig sinkConfig, - String fileName) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - |`sinkConfig` | The sink configuration object - - **Exception** - - |Name|Description| - |---|--- - |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission - | `PulsarAdminException.NotFoundException` | Cluster doesn't exist - | `PulsarAdminException` | Unexpected error - - For more information, see [`updateSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#updateSink-SinkConfig-java.lang.String-). - -* Update a sink connector using a **remote file** with a URL from which fun-pkg can be downloaded. - - ```java - - void updateSinkWithUrl(SinkConfig sinkConfig, - String pkgUrl) - throws PulsarAdminException - - ``` - - Supported URLs are `http` and `file`. - - **Example** - - * HTTP: http://www.repo.com/fileName.jar - - * File: file:///dir/fileName.jar - - **Parameter** - - | Name | Description - |---|--- - | `sinkConfig` | The sink configuration object - | `pkgUrl` | URL from which pkg can be downloaded - - **Exception** - - |Name|Description| - |---|--- - |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission - |`PulsarAdminException.NotFoundException` | Cluster doesn't exist - |`PulsarAdminException` | Unexpected error - -For more information, see [`updateSinkWithUrl`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#updateSinkWithUrl-SinkConfig-java.lang.String-). - - - - -```` - -## Stop a connector - -### `stop` - -You can stop a connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Stop a source connector. - -````mdx-code-block - - - - -Use the `stop` subcommand. - -``` - -$ pulsar-admin sources stop options - -``` - -For more information, see [here](io-cli.md#stop). - - - - -* Stop **all** source connectors. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/stopSource?version=@pulsar:version_number@} - -* Stop a **specified** source connector. 
- - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/:instanceId|operation/stopSource?version=@pulsar:version_number@} - - - - -* Stop **all** source connectors. - - ```java - - void stopSource(String tenant, - String namespace, - String source) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `source` | Source name - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`stopSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#stopSource-java.lang.String-java.lang.String-java.lang.String-). - -* Stop a **specified** source connector. - - ```java - - void stopSource(String tenant, - String namespace, - String source, - int instanceId) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `source` | Source name - `instanceId` | Source instanceID - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`stopSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#stopSource-java.lang.String-java.lang.String-java.lang.String-int-). - - - - -```` - -#### Sink - -Stop a sink connector. - -````mdx-code-block - - - - -Use the `stop` subcommand. - -``` - -$ pulsar-admin sinks stop options - -``` - -For more information, see [here](io-cli.md#stop-1). - - - - -* Stop **all** sink connectors. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sinkName/stop|operation/stopSink?version=@pulsar:version_number@} - -* Stop a **specified** sink connector. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sinkeName/:instanceId/stop|operation/stopSink?version=@pulsar:version_number@} - - - - -* Stop **all** sink connectors. - - ```java - - void stopSink(String tenant, - String namespace, - String sink) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `source` | Source name - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`stopSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#stopSink-java.lang.String-java.lang.String-java.lang.String-). - -* Stop a **specified** sink connector. - - ```java - - void stopSink(String tenant, - String namespace, - String sink, - int instanceId) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `source` | Source name - `instanceId` | Source instanceID - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`stopSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#stopSink-java.lang.String-java.lang.String-java.lang.String-int-). - - - - -```` - -## Restart a connector - -### `restart` - -You can restart a connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Restart a source connector. - -````mdx-code-block - - - - -Use the `restart` subcommand. 
- -``` - -$ pulsar-admin sources restart options - -``` - -For more information, see [here](io-cli.md#restart). - - - - -* Restart **all** source connectors. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/restart|operation/restartSource?version=@pulsar:version_number@} - -* Restart a **specified** source connector. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/:instanceId/restart|operation/restartSource?version=@pulsar:version_number@} - - - - -* Restart **all** source connectors. - - ```java - - void restartSource(String tenant, - String namespace, - String source) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `source` | Source name - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`restartSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#restartSource-java.lang.String-java.lang.String-java.lang.String-). - -* Restart a **specified** source connector. - - ```java - - void restartSource(String tenant, - String namespace, - String source, - int instanceId) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `source` | Source name - `instanceId` | Source instanceID - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`restartSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#restartSource-java.lang.String-java.lang.String-java.lang.String-int-). - - - - -```` - -#### Sink - -Restart a sink connector. - -````mdx-code-block - - - - -Use the `restart` subcommand. - -``` - -$ pulsar-admin sinks restart options - -``` - -For more information, see [here](io-cli.md#restart-1). - - - - -* Restart **all** sink connectors. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sinkName/restart|operation/restartSource?version=@pulsar:version_number@} - -* Restart a **specified** sink connector. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sinkName/:instanceId/restart|operation/restartSource?version=@pulsar:version_number@} - - - - -* Restart all Pulsar sink connectors. - - ```java - - void restartSink(String tenant, - String namespace, - String sink) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `sink` | Sink name - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`restartSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#restartSink-java.lang.String-java.lang.String-java.lang.String-). - -* Restart a **specified** sink connector. 
- - ```java - - void restartSink(String tenant, - String namespace, - String sink, - int instanceId) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `source` | Source name - `instanceId` | Sink instanceID - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`restartSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#restartSink-java.lang.String-java.lang.String-java.lang.String-int-). - - - - -```` - -## Delete a connector - -### `delete` - -You can delete a connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Delete a source connector. - -````mdx-code-block - - - - -Use the `delete` subcommand. - -``` - -$ pulsar-admin sources delete options - -``` - -For more information, see [here](io-cli.md#delete). - - - - -Delete al Pulsar source connector. - -Send a `DELETE` request to this endpoint: {@inject: endpoint|DELETE|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/deregisterSource?version=@pulsar:version_number@} - - - - -Delete a source connector. - -```java - -void deleteSource(String tenant, - String namespace, - String source) - throws PulsarAdminException - -``` - -**Parameter** - -| Name | Description -|---|--- -`tenant` | Tenant name -`namespace` | Namespace name -`source` | Source name - -**Exception** - -|Name|Description| -|---|--- -|`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission -| `PulsarAdminException.NotFoundException` | Cluster doesn't exist -| `PulsarAdminException.PreconditionFailedException` | Cluster is not empty -| `PulsarAdminException` | Unexpected error - -For more information, see [`deleteSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#deleteSource-java.lang.String-java.lang.String-java.lang.String-). - - - - -```` - -#### Sink - -Delete a sink connector. - -````mdx-code-block - - - - -Use the `delete` subcommand. - -``` - -$ pulsar-admin sinks delete options - -``` - -For more information, see [here](io-cli.md#delete-1). - - - - -Delete a sink connector. - -Send a `DELETE` request to this endpoint: {@inject: endpoint|DELETE|/admin/v3/sinks/:tenant/:namespace/:sinkName|operation/deregisterSink?version=@pulsar:version_number@} - - - - -Delete a Pulsar sink connector. - -```java - -void deleteSink(String tenant, - String namespace, - String source) - throws PulsarAdminException - -``` - -**Parameter** - -| Name | Description -|---|--- -`tenant` | Tenant name -`namespace` | Namespace name -`sink` | Sink name - -**Exception** - -|Name|Description| -|---|--- -|`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission -| `PulsarAdminException.NotFoundException` | Cluster doesn't exist -| `PulsarAdminException.PreconditionFailedException` | Cluster is not empty -| `PulsarAdminException` | Unexpected error - -For more information, see [`deleteSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#deleteSink-java.lang.String-java.lang.String-java.lang.String-). 
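-As an illustrative sketch (assuming a `PulsarAdmin` client has been built as in the earlier examples), deleting the JDBC sink from the `get` example looks like:
-
-```java
-
-// Removes the sink "pulsar-postgres-jdbc-sink" from the public/default namespace
-admin.sinks().deleteSink("public", "default", "pulsar-postgres-jdbc-sink");
-
-```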
- - - - -```` diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/kubernetes-helm.md b/site2/website/versioned_docs/version-2.9.0-deprecated/kubernetes-helm.md deleted file mode 100644 index ea92a0968cd7d0..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/kubernetes-helm.md +++ /dev/null @@ -1,441 +0,0 @@ ---- -id: kubernetes-helm -title: Get started in Kubernetes -sidebar_label: "Run Pulsar in Kubernetes" -original_id: kubernetes-helm ---- - -This section guides you through every step of installing and running Apache Pulsar with Helm on Kubernetes quickly, including the following sections: - -- Install the Apache Pulsar on Kubernetes using Helm -- Start and stop Apache Pulsar -- Create topics using `pulsar-admin` -- Produce and consume messages using Pulsar clients -- Monitor Apache Pulsar status with Prometheus and Grafana - -For deploying a Pulsar cluster for production usage, read the documentation on [how to configure and install a Pulsar Helm chart](helm-deploy.md). - -## Prerequisite - -- Kubernetes server 1.14.0+ -- kubectl 1.14.0+ -- Helm 3.0+ - -:::tip - -For the following steps, step 2 and step 3 are for **developers** and step 4 and step 5 are for **administrators**. - -::: - -## Step 0: Prepare a Kubernetes cluster - -Before installing a Pulsar Helm chart, you have to create a Kubernetes cluster. You can follow [the instructions](helm-prepare.md) to prepare a Kubernetes cluster. - -We use [Minikube](https://minikube.sigs.k8s.io/docs/start/) in this quick start guide. To prepare a Kubernetes cluster, follow these steps: - -1. Create a Kubernetes cluster on Minikube. - - ```bash - - minikube start --memory=8192 --cpus=4 --kubernetes-version= - - ``` - - The `` can be any [Kubernetes version supported by your Minikube installation](https://minikube.sigs.k8s.io/docs/reference/configuration/kubernetes/), such as `v1.16.1`. - -2. Set `kubectl` to use Minikube. - - ```bash - - kubectl config use-context minikube - - ``` - -3. To use the [Kubernetes Dashboard](https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/) with the local Kubernetes cluster on Minikube, enter the command below: - - ```bash - - minikube dashboard - - ``` - - The command automatically triggers opening a webpage in your browser. - -## Step 1: Install Pulsar Helm chart - -1. Add Pulsar charts repo. - - ```bash - - helm repo add apache https://pulsar.apache.org/charts - - ``` - - ```bash - - helm repo update - - ``` - -2. Clone the Pulsar Helm chart repository. - - ```bash - - git clone https://github.com/apache/pulsar-helm-chart - cd pulsar-helm-chart - - ``` - -3. Run the script `prepare_helm_release.sh` to create secrets required for installing the Apache Pulsar Helm chart. The username `pulsar` and password `pulsar` are used for logging into the Grafana dashboard and Pulsar Manager. - - ```bash - - ./scripts/pulsar/prepare_helm_release.sh \ - -n pulsar \ - -k pulsar-mini \ - -c - - ``` - -4. Use the Pulsar Helm chart to install a Pulsar cluster to Kubernetes. - - :::note - - You need to specify `--set initialize=true` when installing Pulsar the first time. This command installs and starts Apache Pulsar. - - ::: - - ```bash - - helm install \ - --values examples/values-minikube.yaml \ - --set initialize=true \ - --namespace pulsar \ - pulsar-mini apache/pulsar - - ``` - -5. Check the status of all pods. 
- - ```bash - - kubectl get pods -n pulsar - - ``` - - If all pods start up successfully, you can see that the `STATUS` is changed to `Running` or `Completed`. - - **Output** - - ```bash - - NAME READY STATUS RESTARTS AGE - pulsar-mini-bookie-0 1/1 Running 0 9m27s - pulsar-mini-bookie-init-5gphs 0/1 Completed 0 9m27s - pulsar-mini-broker-0 1/1 Running 0 9m27s - pulsar-mini-grafana-6b7bcc64c7-4tkxd 1/1 Running 0 9m27s - pulsar-mini-prometheus-5fcf5dd84c-w8mgz 1/1 Running 0 9m27s - pulsar-mini-proxy-0 1/1 Running 0 9m27s - pulsar-mini-pulsar-init-t7cqt 0/1 Completed 0 9m27s - pulsar-mini-pulsar-manager-9bcbb4d9f-htpcs 1/1 Running 0 9m27s - pulsar-mini-toolset-0 1/1 Running 0 9m27s - pulsar-mini-zookeeper-0 1/1 Running 0 9m27s - - ``` - -6. Check the status of all services in the namespace `pulsar`. - - ```bash - - kubectl get services -n pulsar - - ``` - - **Output** - - ```bash - - NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - pulsar-mini-bookie ClusterIP None 3181/TCP,8000/TCP 11m - pulsar-mini-broker ClusterIP None 8080/TCP,6650/TCP 11m - pulsar-mini-grafana LoadBalancer 10.106.141.246 3000:31905/TCP 11m - pulsar-mini-prometheus ClusterIP None 9090/TCP 11m - pulsar-mini-proxy LoadBalancer 10.97.240.109 80:32305/TCP,6650:31816/TCP 11m - pulsar-mini-pulsar-manager LoadBalancer 10.103.192.175 9527:30190/TCP 11m - pulsar-mini-toolset ClusterIP None 11m - pulsar-mini-zookeeper ClusterIP None 2888/TCP,3888/TCP,2181/TCP 11m - - ``` - -## Step 2: Use pulsar-admin to create Pulsar tenants/namespaces/topics - -`pulsar-admin` is the CLI (command-Line Interface) tool for Pulsar. In this step, you can use `pulsar-admin` to create resources, including tenants, namespaces, and topics. - -1. Enter the `toolset` container. - - ```bash - - kubectl exec -it -n pulsar pulsar-mini-toolset-0 -- /bin/bash - - ``` - -2. In the `toolset` container, create a tenant named `apache`. - - ```bash - - bin/pulsar-admin tenants create apache - - ``` - - Then you can list the tenants to see if the tenant is created successfully. - - ```bash - - bin/pulsar-admin tenants list - - ``` - - You should see a similar output as below. The tenant `apache` has been successfully created. - - ```bash - - "apache" - "public" - "pulsar" - - ``` - -3. In the `toolset` container, create a namespace named `pulsar` in the tenant `apache`. - - ```bash - - bin/pulsar-admin namespaces create apache/pulsar - - ``` - - Then you can list the namespaces of tenant `apache` to see if the namespace is created successfully. - - ```bash - - bin/pulsar-admin namespaces list apache - - ``` - - You should see a similar output as below. The namespace `apache/pulsar` has been successfully created. - - ```bash - - "apache/pulsar" - - ``` - -4. In the `toolset` container, create a topic `test-topic` with `4` partitions in the namespace `apache/pulsar`. - - ```bash - - bin/pulsar-admin topics create-partitioned-topic apache/pulsar/test-topic -p 4 - - ``` - -5. In the `toolset` container, list all the partitioned topics in the namespace `apache/pulsar`. - - ```bash - - bin/pulsar-admin topics list-partitioned-topics apache/pulsar - - ``` - - Then you can see all the partitioned topics in the namespace `apache/pulsar`. - - ```bash - - "persistent://apache/pulsar/test-topic" - - ``` - -## Step 3: Use Pulsar client to produce and consume messages - -You can use the Pulsar client to create producers and consumers to produce and consume messages. - -By default, the Pulsar Helm chart exposes the Pulsar cluster through a Kubernetes `LoadBalancer`. 
In Minikube, you can use the following command to check the proxy service. - -```bash - -kubectl get services -n pulsar | grep pulsar-mini-proxy - -``` - -You will see a similar output as below. - -```bash - -pulsar-mini-proxy LoadBalancer 10.97.240.109 80:32305/TCP,6650:31816/TCP 28m - -``` - -This output tells what are the node ports that Pulsar cluster's binary port and HTTP port are mapped to. The port after `80:` is the HTTP port while the port after `6650:` is the binary port. - -Then you can find the IP address and exposed ports of your Minikube server by running the following command. - -```bash - -minikube service pulsar-mini-proxy -n pulsar - -``` - -**Output** - -```bash - -|-----------|-------------------|-------------|-------------------------| -| NAMESPACE | NAME | TARGET PORT | URL | -|-----------|-------------------|-------------|-------------------------| -| pulsar | pulsar-mini-proxy | http/80 | http://172.17.0.4:32305 | -| | | pulsar/6650 | http://172.17.0.4:31816 | -|-----------|-------------------|-------------|-------------------------| -🏃 Starting tunnel for service pulsar-mini-proxy. -|-----------|-------------------|-------------|------------------------| -| NAMESPACE | NAME | TARGET PORT | URL | -|-----------|-------------------|-------------|------------------------| -| pulsar | pulsar-mini-proxy | | http://127.0.0.1:61853 | -| | | | http://127.0.0.1:61854 | -|-----------|-------------------|-------------|------------------------| - -``` - -At this point, you can get the service URLs to connect to your Pulsar client. Here are URL examples: - -``` - -webServiceUrl=http://127.0.0.1:61853/ -brokerServiceUrl=pulsar://127.0.0.1:61854/ - -``` - -Then you can proceed with the following steps: - -1. Download the Apache Pulsar tarball from the [downloads page](https://pulsar.apache.org/download/). - -2. Decompress the tarball based on your download file. - - ```bash - - tar -xf .tar.gz - - ``` - -3. Expose `PULSAR_HOME`. - - (1) Enter the directory of the decompressed download file. - - (2) Expose `PULSAR_HOME` as the environment variable. - - ```bash - - export PULSAR_HOME=$(pwd) - - ``` - -4. Configure the Pulsar client. - - In the `${PULSAR_HOME}/conf/client.conf` file, replace `webServiceUrl` and `brokerServiceUrl` with the service URLs you get from the above steps. - -5. Create a subscription to consume messages from `apache/pulsar/test-topic`. - - ```bash - - bin/pulsar-client consume -s sub apache/pulsar/test-topic -n 0 - - ``` - -6. Open a new terminal. In the new terminal, create a producer and send 10 messages to the `test-topic` topic. - - ```bash - - bin/pulsar-client produce apache/pulsar/test-topic -m "---------hello apache pulsar-------" -n 10 - - ``` - -7. Verify the results. - - - From the producer side - - **Output** - - The messages have been produced successfully. - - ```bash - - 18:15:15.489 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 10 messages successfully produced - - ``` - - - From the consumer side - - **Output** - - At the same time, you can receive the messages as below. 
- - ```bash - - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - - ``` - -## Step 4: Use Pulsar Manager to manage the cluster - -[Pulsar Manager](administration-pulsar-manager.md) is a web-based GUI management tool for managing and monitoring Pulsar. - -1. By default, the `Pulsar Manager` is exposed as a separate `LoadBalancer`. You can open the Pulsar Manager UI using the following command: - - ```bash - - minikube service -n pulsar pulsar-mini-pulsar-manager - - ``` - -2. The Pulsar Manager UI will be open in your browser. You can use the username `pulsar` and password `pulsar` to log into Pulsar Manager. - -3. In Pulsar Manager UI, you can create an environment. - - - Click `New Environment` button in the top-left corner. - - Type `pulsar-mini` for the field `Environment Name` in the popup window. - - Type `http://pulsar-mini-broker:8080` for the field `Service URL` in the popup window. - - Click `Confirm` button in the popup window. - -4. After successfully creating an environment, you are redirected to the `tenants` page of that environment. Then you can create `tenants`, `namespaces` and `topics` using the Pulsar Manager. - -## Step 5: Use Prometheus and Grafana to monitor cluster - -Grafana is an open-source visualization tool, which can be used for visualizing time series data into dashboards. - -1. By default, the Grafana is exposed as a separate `LoadBalancer`. You can open the Grafana UI using the following command: - - ```bash - - minikube service pulsar-mini-grafana -n pulsar - - ``` - -2. The Grafana UI is open in your browser. You can use the username `pulsar` and password `pulsar` to log into the Grafana Dashboard. - -3. You can view dashboards for different components of a Pulsar cluster. diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/performance-pulsar-perf.md b/site2/website/versioned_docs/version-2.9.0-deprecated/performance-pulsar-perf.md deleted file mode 100644 index 7f45498604536c..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/performance-pulsar-perf.md +++ /dev/null @@ -1,229 +0,0 @@ ---- -id: performance-pulsar-perf -title: Pulsar Perf -sidebar_label: "Pulsar Perf" -original_id: performance-pulsar-perf ---- - -The Pulsar Perf is a built-in performance test tool for Apache Pulsar. You can use the Pulsar Perf to test message writing or reading performance. For detailed information about performance tuning, see [here](https://streamnative.io/en/blog/tech/2021-01-14-pulsar-architecture-performance-tuning). - -## Produce messages - -This example shows how the Pulsar Perf produces messages with default options. For all configuration options available for the `pulsar-perf produce` command, see [configuration options](#configuration-options-for-pulsar-perf-produce). - -``` - -bin/pulsar-perf produce my-topic - -``` - -After the command is executed, the test data is continuously output on the Console. 
- -**Output** - -``` - -19:53:31.459 [pulsar-perf-producer-exec-1-1] INFO org.apache.pulsar.testclient.PerformanceProducer - Created 1 producers -19:53:31.482 [pulsar-timer-5-1] WARN com.scurrilous.circe.checksum.Crc32cIntChecksum - Failed to load Circe JNI library. Falling back to Java based CRC32c provider -19:53:40.861 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 93.7 msg/s --- 0.7 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.575 ms - med: 3.460 - 95pct: 4.790 - 99pct: 5.308 - 99.9pct: 5.834 - 99.99pct: 6.609 - Max: 6.609 -19:53:50.909 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.437 ms - med: 3.328 - 95pct: 4.656 - 99pct: 5.071 - 99.9pct: 5.519 - 99.99pct: 5.588 - Max: 5.588 -19:54:00.926 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.376 ms - med: 3.276 - 95pct: 4.520 - 99pct: 4.939 - 99.9pct: 5.440 - 99.99pct: 5.490 - Max: 5.490 -19:54:10.940 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.298 ms - med: 3.220 - 95pct: 4.474 - 99pct: 4.926 - 99.9pct: 5.645 - 99.99pct: 5.654 - Max: 5.654 -19:54:20.956 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.1 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.308 ms - med: 3.199 - 95pct: 4.532 - 99pct: 4.871 - 99.9pct: 5.291 - 99.99pct: 5.323 - Max: 5.323 -19:54:30.972 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.249 ms - med: 3.144 - 95pct: 4.437 - 99pct: 4.970 - 99.9pct: 5.329 - 99.99pct: 5.414 - Max: 5.414 -19:54:40.987 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.435 ms - med: 3.361 - 95pct: 4.772 - 99pct: 5.150 - 99.9pct: 5.373 - 99.99pct: 5.837 - Max: 5.837 -^C19:54:44.325 [Thread-1] INFO org.apache.pulsar.testclient.PerformanceProducer - Aggregated throughput stats --- 7286 records sent --- 99.140 msg/s --- 0.775 Mbit/s -19:54:44.336 [Thread-1] INFO org.apache.pulsar.testclient.PerformanceProducer - Aggregated latency stats --- Latency: mean: 3.383 ms - med: 3.293 - 95pct: 4.610 - 99pct: 5.059 - 99.9pct: 5.588 - 99.99pct: 5.837 - 99.999pct: 6.609 - Max: 6.609 - -``` - -From the above test data, you can get the throughput statistics and the write latency statistics. The aggregated statistics is printed when the Pulsar Perf is stopped. You can press **Ctrl**+**C** to stop the Pulsar Perf. If you specify a filename with the `--histogram-file` parameter, a file with the [HdrHistogram](http://hdrhistogram.github.io/HdrHistogram/) formatted test result appears under your directory after Pulsar Perf is stopped. You can also check the test result through [HdrHistogram Plotter](https://hdrhistogram.github.io/HdrHistogram/plotFiles.html). For details about how to check the test result through [HdrHistogram Plotter](https://hdrhistogram.github.io/HdrHistogram/plotFiles.html), see [HdrHistogram Plotter](#hdrhistogram-plotter). - -### Configuration options for `pulsar-perf produce` - -You can get all options by executing the `bin/pulsar-perf produce -h` command. Therefore, you can modify these options as required. 
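-For example, a run that publishes 512-byte messages at a target rate of 10000 msg/s from two producers might look like this (the topic name and values are illustrative):
-
-```
-
-bin/pulsar-perf produce my-topic \
-    --rate 10000 \
-    --size 512 \
-    --num-producers 2
-
-```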
- -The following table lists configuration options available for the `pulsar-perf produce` command. - -| Option | Description | Default value| -|----|----|----| -| access-mode | Set the producer access mode. Valid values are `Shared`, `Exclusive` and `WaitForExclusive`. | Shared | -| admin-url | Set the Pulsar admin URL. | N/A | -| auth-params | Set the authentication parameters, whose format is determined by the implementation of the `configure` method in the authentication plugin class, such as "key1:val1,key2:val2" or "{"key1":"val1","key2":"val2"}". | N/A | -| auth-plugin | Set the authentication plugin class name. | N/A | -| listener-name | Set the listener name for the broker. | N/A | -| batch-max-bytes | Set the maximum number of bytes for each batch. | 4194304 | -| batch-max-messages | Set the maximum number of messages for each batch. | 1000 | -| batch-time-window | Set a window for a batch of messages. | 1 ms | -| busy-wait | Enable or disable Busy-Wait on the Pulsar client. | false | -| chunking | Configure whether to split the message and publish in chunks if message size is larger than allowed max size. | false | -| compression | Compress the message payload. | N/A | -| conf-file | Set the configuration file. | N/A | -| delay | Mark messages with a given delay. | 0s | -| encryption-key-name | Set the name of the public key used to encrypt the payload. | N/A | -| encryption-key-value-file | Set the file which contains the public key used to encrypt the payload. | N/A | -| exit-on-failure | Configure whether to exit from the process on publish failure. | false | -| format-class | Set the custom formatter class name. | org.apache.pulsar.testclient.DefaultMessageFormatter | -| format-payload | Configure whether to format %i as a message index in the stream from producer and/or %t as the timestamp nanoseconds. | false | -| help | Configure the help message. | false | -| histogram-file | HdrHistogram output file | N/A | -| max-connections | Set the maximum number of TCP connections to a single broker. | 100 | -| max-outstanding | Set the maximum number of outstanding messages. | 1000 | -| max-outstanding-across-partitions | Set the maximum number of outstanding messages across partitions. | 50000 | -| message-key-generation-mode | Set the generation mode of message key. Valid options are `autoIncrement`, `random`. | N/A | -| num-io-threads | Set the number of threads to be used for handling connections to brokers. | 1 | -| num-messages | Set the number of messages to be published in total. If it is set to 0, it keeps publishing messages. | 0 | -| num-producers | Set the number of producers for each topic. | 1 | -| num-test-threads | Set the number of test threads. | 1 | -| num-topic | Set the number of topics. | 1 | -| partitions | Configure whether to create partitioned topics with the given number of partitions. | N/A | -| payload-delimiter | Set the delimiter used to split lines when using payload from a file. | \n | -| payload-file | Use the payload from an UTF-8 encoded text file and a payload is randomly selected when messages are published. | N/A | -| producer-name | Set the producer name. | N/A | -| rate | Set the publish rate of messages across topics. | 100 | -| send-timeout | Set the sendTimeout. | 0 | -| separator | Set the separator between the topic and topic number. | - | -| service-url | Set the Pulsar service URL. | | -| size | Set the message size. | 1024 bytes | -| stats-interval-seconds | Set the statistics interval. If it is set to 0, statistics is disabled. 
| 0 | -| test-duration | Set the test duration. If it is set to 0, it keeps publishing tests. | 0s | -| trust-cert-file | Set the path for the trusted TLS certificate file. | | | -| warmup-time | Set the warm-up time. | 1s | -| tls-allow-insecure | Set the allowed insecure TLS connection. | N/A | - -## Consume messages - -This example shows how the Pulsar Perf consumes messages with default options. - -``` - -bin/pulsar-perf consume my-topic - -``` - -After the command is executed, the test data is continuously output on the Console. - -**Output** - -``` - -20:35:37.071 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Start receiving from 1 consumers on 1 topics -20:35:41.150 [pulsar-client-io-1-9] WARN com.scurrilous.circe.checksum.Crc32cIntChecksum - Failed to load Circe JNI library. Falling back to Java based CRC32c provider -20:35:47.092 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 59.572 msg/s -- 0.465 Mbit/s --- Latency: mean: 11.298 ms - med: 10 - 95pct: 15 - 99pct: 98 - 99.9pct: 137 - 99.99pct: 152 - Max: 152 -20:35:57.104 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 99.958 msg/s -- 0.781 Mbit/s --- Latency: mean: 9.176 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 18 - Max: 18 -20:36:07.115 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 100.006 msg/s -- 0.781 Mbit/s --- Latency: mean: 9.316 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 17 - Max: 17 -20:36:17.125 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 100.085 msg/s -- 0.782 Mbit/s --- Latency: mean: 9.327 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 17 - Max: 17 -20:36:27.136 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 99.900 msg/s -- 0.780 Mbit/s --- Latency: mean: 9.404 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 17 - Max: 17 -20:36:37.147 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 99.985 msg/s -- 0.781 Mbit/s --- Latency: mean: 8.998 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 17 - Max: 17 -^C20:36:42.755 [Thread-1] INFO org.apache.pulsar.testclient.PerformanceConsumer - Aggregated throughput stats --- 6051 records received --- 92.125 msg/s --- 0.720 Mbit/s -20:36:42.759 [Thread-1] INFO org.apache.pulsar.testclient.PerformanceConsumer - Aggregated latency stats --- Latency: mean: 9.422 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 98 - 99.99pct: 137 - 99.999pct: 152 - Max: 152 - -``` - -From the output test data, you can get the throughput statistics and the end-to-end latency statistics. The aggregated statistics is printed after the Pulsar Perf is stopped. You can press **Ctrl**+**C** to stop the Pulsar Perf. - -### Configuration options for `pulsar-perf consume` - -You can get all options by executing the `bin/pulsar-perf consume -h` command. Therefore, you can modify these options as required. - -The following table lists configuration options available for the `pulsar-perf consume` command. - -| Option | Description | Default value | -|----|----|----| -| acks-delay-millis | Set the acknowledgment grouping delay in milliseconds. | 100 ms | -| auth-params | Set the authentication parameters, whose format is determined by the implementation of the `configure` method in the authentication plugin class, such as "key1:val1,key2:val2" or "{"key1":"val1","key2":"val2"}". 
| N/A | -| auth-plugin | Set the authentication plugin class name. | N/A | -| auto_ack_chunk_q_full | Configure whether to automatically ack for the oldest message in receiver queue if the queue is full. | false | -| listener-name | Set the listener name for the broker. | N/A | -| batch-index-ack | Enable or disable the batch index acknowledgment. | false | -| busy-wait | Enable or disable Busy-Wait on the Pulsar client. | false | -| conf-file | Set the configuration file. | N/A | -| encryption-key-name | Set the name of the public key used to encrypt the payload. | N/A | -| encryption-key-value-file | Set the file which contains the public key used to encrypt the payload. | N/A | -| help | Configure the help message. | false | -| histogram-file | HdrHistogram output file | N/A | -| expire_time_incomplete_chunked_messages | Set the expiration time for incomplete chunk messages (in milliseconds). | 0 | -| max-connections | Set the maximum number of TCP connections to a single broker. | 100 | -| max_chunked_msg | Set the max pending chunk messages. | 0 | -| num-consumers | Set the number of consumers for each topic. | 1 | -| num-io-threads |Set the number of threads to be used for handling connections to brokers. | 1 | -| num-subscriptions | Set the number of subscriptions (per topic). | 1 | -| num-topic | Set the number of topics. | 1 | -| pool-messages | Configure whether to use the pooled message. | true | -| rate | Simulate a slow message consumer (rate in msg/s). | 0.0 | -| receiver-queue-size | Set the size of the receiver queue. | 1000 | -| receiver-queue-size-across-partitions | Set the max total size of the receiver queue across partitions. | 50000 | -| replicated | Configure whether the subscription status should be replicated. | false | -| service-url | Set the Pulsar service URL. | | -| stats-interval-seconds | Set the statistics interval. If it is set to 0, statistics is disabled. | 0 | -| subscriber-name | Set the subscriber name prefix. | | -| subscription-position | Set the subscription position. Valid values are `Latest`, `Earliest`.| Latest | -| subscription-type | Set the subscription type.
  * Exclusive
  * Shared
  * Failover
  * Key_Shared
  1582. | Exclusive | -| test-duration | Set the test duration (in seconds). If the value is 0 or smaller than 0, it keeps consuming messages. | 0 | -| tls-allow-insecure | Set the allowed insecure TLS connection. | N/A | -| trust-cert-file | Set the path for the trusted TLS certificate file. | | | - -## Configurations - -By default, the Pulsar Perf uses `conf/client.conf` as the default configuration and uses `conf/log4j2.yaml` as the default Log4j configuration. If you want to connect to other Pulsar clusters, you can update the `brokerServiceUrl` in the client configuration. - -You can use the following commands to change the configuration file and the Log4j configuration file. - -``` - -export PULSAR_CLIENT_CONF= -export PULSAR_LOG_CONF= - -``` - -In addition, you can use the following command to configure the JVM configuration through environment variables: - -``` - -export PULSAR_EXTRA_OPTS='-Xms4g -Xmx4g -XX:MaxDirectMemorySize=4g' - -``` - -## HdrHistogram Plotter - -The [HdrHistogram Plotter](https://hdrhistogram.github.io/HdrHistogram/plotFiles.html) is a visualization tool for checking Pulsar Perf test results, which makes it easier to observe the test results. - -To check test results through the HdrHistogram Plotter, follow these steps: - -1. Clone the HdrHistogram repository from GitHub to the local. - - ``` - - git clone https://github.com/HdrHistogram/HdrHistogram.git - - ``` - -2. Switch to the HdrHistogram folder. - - ``` - - cd HdrHistogram - - ``` - -3. Install the HdrHistogram Plotter. - - ``` - - mvn clean install -DskipTests - - ``` - -4. Transform the file generated by the Pulsar Perf. - - ``` - - ./HistogramLogProcessor -i -o - - ``` - -5. You will get two output files. Upload the output file with the filename extension of .hgrm to the [HdrHistogram Plotter](https://hdrhistogram.github.io/HdrHistogram/plotFiles.html). - -6. Check the test result through the Graphical User Interface of the HdrHistogram Plotter, as shown blow. - - ![](/assets/perf-produce.png) diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/reference-cli-tools.md b/site2/website/versioned_docs/version-2.9.0-deprecated/reference-cli-tools.md deleted file mode 100644 index 6893da3ec6394b..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/reference-cli-tools.md +++ /dev/null @@ -1,941 +0,0 @@ ---- -id: reference-cli-tools -title: Pulsar command-line tools -sidebar_label: "Pulsar CLI tools" -original_id: reference-cli-tools ---- - -Pulsar offers several command-line tools that you can use for managing Pulsar installations, performance testing, using command-line producers and consumers, and more. - -All Pulsar command-line tools can be run from the `bin` directory of your [installed Pulsar package](getting-started-standalone.md). The following tools are currently documented: - -* [`pulsar`](#pulsar) -* [`pulsar-client`](#pulsar-client) -* [`pulsar-daemon`](#pulsar-daemon) -* [`pulsar-perf`](#pulsar-perf) -* [`bookkeeper`](#bookkeeper) -* [`broker-tool`](#broker-tool) - -> ### Getting help -> You can get help for any CLI tool, command, or subcommand using the `--help` flag, or `-h` for short. Here's an example: - -> ```shell -> -> $ bin/pulsar broker --help -> -> -> ``` - - -## `pulsar` - -The pulsar tool is used to start Pulsar components, such as bookies and ZooKeeper, in the foreground. - -These processes can also be started in the background, using nohup, using the pulsar-daemon tool, which has the same command interface as pulsar. 
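
For example, the same service can run in the foreground under `pulsar` or in the background under `pulsar-daemon` (a minimal sketch, assuming the commands are run from the root of a Pulsar installation):

```bash

# Foreground: logs go to the console, and Ctrl+C stops the process
$ bin/pulsar standalone

# Background: the same service via nohup, stopped explicitly later
$ bin/pulsar-daemon start standalone
$ bin/pulsar-daemon stop standalone

```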
- -Usage: - -```bash - -$ pulsar command - -``` - -Commands: -* `bookie` -* `broker` -* `compact-topic` -* `configuration-store` -* `initialize-cluster-metadata` -* `proxy` -* `standalone` -* `websocket` -* `zookeeper` -* `zookeeper-shell` - -Example: - -```bash - -$ PULSAR_BROKER_CONF=/path/to/broker.conf pulsar broker - -``` - -The table below lists the environment variables that you can use to configure the `pulsar` tool. - -|Variable|Description|Default| -|---|---|---| -|`PULSAR_LOG_CONF`|Log4j configuration file|`conf/log4j2.yaml`| -|`PULSAR_BROKER_CONF`|Configuration file for broker|`conf/broker.conf`| -|`PULSAR_BOOKKEEPER_CONF`|description: Configuration file for bookie|`conf/bookkeeper.conf`| -|`PULSAR_ZK_CONF`|Configuration file for zookeeper|`conf/zookeeper.conf`| -|`PULSAR_CONFIGURATION_STORE_CONF`|Configuration file for the configuration store|`conf/global_zookeeper.conf`| -|`PULSAR_WEBSOCKET_CONF`|Configuration file for websocket proxy|`conf/websocket.conf`| -|`PULSAR_STANDALONE_CONF`|Configuration file for standalone|`conf/standalone.conf`| -|`PULSAR_EXTRA_OPTS`|Extra options to be passed to the jvm|| -|`PULSAR_EXTRA_CLASSPATH`|Extra paths for Pulsar's classpath|| -|`PULSAR_PID_DIR`|Folder where the pulsar server PID file should be stored|| -|`PULSAR_STOP_TIMEOUT`|Wait time before forcefully killing the Bookie server instance if attempts to stop it are not successful|| - - - -### `bookie` - -Starts up a bookie server - -Usage: - -```bash - -$ pulsar bookie options - -``` - -Options - -|Option|Description|Default| -|---|---|---| -|`-readOnly`|Force start a read-only bookie server|false| -|`-withAutoRecovery`|Start auto-recover service bookie server|false| - - -Example - -```bash - -$ PULSAR_BOOKKEEPER_CONF=/path/to/bookkeeper.conf pulsar bookie \ - -readOnly \ - -withAutoRecovery - -``` - -### `broker` - -Starts up a Pulsar broker - -Usage - -```bash - -$ pulsar broker options - -``` - -Options - -|Option|Description|Default| -|---|---|---| -|`-bc` , `--bookie-conf`|Configuration file for BookKeeper|| -|`-rb` , `--run-bookie`|Run a BookKeeper bookie on the same host as the Pulsar broker|false| -|`-ra` , `--run-bookie-autorecovery`|Run a BookKeeper autorecovery daemon on the same host as the Pulsar broker|false| - -Example - -```bash - -$ PULSAR_BROKER_CONF=/path/to/broker.conf pulsar broker - -``` - -### `compact-topic` - -Run compaction against a Pulsar topic (in a new process) - -Usage - -```bash - -$ pulsar compact-topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-t` , `--topic`|The Pulsar topic that you would like to compact|| - -Example - -```bash - -$ pulsar compact-topic --topic topic-to-compact - -``` - -### `configuration-store` - -Starts up the Pulsar configuration store - -Usage - -```bash - -$ pulsar configuration-store - -``` - -Example - -```bash - -$ PULSAR_CONFIGURATION_STORE_CONF=/path/to/configuration_store.conf pulsar configuration-store - -``` - -### `initialize-cluster-metadata` - -One-time cluster metadata initialization - -Usage - -```bash - -$ pulsar initialize-cluster-metadata options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-ub` , `--broker-service-url`|The broker service URL for the new cluster|| -|`-tb` , `--broker-service-url-tls`|The broker service URL for the new cluster with TLS encryption|| -|`-c` , `--cluster`|Cluster name|| -|`-cs` , `--configuration-store`|The configuration store quorum connection string|| -|`--existing-bk-metadata-service-uri`|The metadata service URI of the existing 
BookKeeper cluster that you want to use|| -|`-h` , `--help`|Cluster name|false| -|`--initial-num-stream-storage-containers`|The number of storage containers of BookKeeper stream storage|16| -|`--initial-num-transaction-coordinators`|The number of transaction coordinators assigned in a cluster|16| -|`-uw` , `--web-service-url`|The web service URL for the new cluster|| -|`-tw` , `--web-service-url-tls`|The web service URL for the new cluster with TLS encryption|| -|`-zk` , `--zookeeper`|The local ZooKeeper quorum connection string|| -|`--zookeeper-session-timeout-ms`|The local ZooKeeper session timeout. The time unit is in millisecond(ms)|30000| - - -### `proxy` - -Manages the Pulsar proxy - -Usage - -```bash - -$ pulsar proxy options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--configuration-store`|Configuration store connection string|| -|`-zk` , `--zookeeper-servers`|Local ZooKeeper connection string|| - -Example - -```bash - -$ PULSAR_PROXY_CONF=/path/to/proxy.conf pulsar proxy \ - --zookeeper-servers zk-0,zk-1,zk2 \ - --configuration-store zk-0,zk-1,zk-2 - -``` - -### `standalone` - -Run a broker service with local bookies and local ZooKeeper - -Usage - -```bash - -$ pulsar standalone options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-a` , `--advertised-address`|The standalone broker advertised address|| -|`--bookkeeper-dir`|Local bookies’ base data directory|data/standalone/bookeeper| -|`--bookkeeper-port`|Local bookies’ base port|3181| -|`--no-broker`|Only start ZooKeeper and BookKeeper services, not the broker|false| -|`--num-bookies`|The number of local bookies|1| -|`--only-broker`|Only start the Pulsar broker service (not ZooKeeper or BookKeeper)|| -|`--wipe-data`|Clean up previous ZooKeeper/BookKeeper data|| -|`--zookeeper-dir`|Local ZooKeeper’s data directory|data/standalone/zookeeper| -|`--zookeeper-port` |Local ZooKeeper’s port|2181| - -Example - -```bash - -$ PULSAR_STANDALONE_CONF=/path/to/standalone.conf pulsar standalone - -``` - -### `websocket` - -Usage - -```bash - -$ pulsar websocket - -``` - -Example - -```bash - -$ PULSAR_WEBSOCKET_CONF=/path/to/websocket.conf pulsar websocket - -``` - -### `zookeeper` - -Starts up a ZooKeeper cluster - -Usage - -```bash - -$ pulsar zookeeper - -``` - -Example - -```bash - -$ PULSAR_ZK_CONF=/path/to/zookeeper.conf pulsar zookeeper - -``` - -### `zookeeper-shell` - -Connects to a running ZooKeeper cluster using the ZooKeeper shell - -Usage - -```bash - -$ pulsar zookeeper-shell options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-c`, `--conf`|Configuration file for ZooKeeper|| -|`-server`|Configuration zk address, eg: `127.0.0.1:2181`|| - - - -## `pulsar-client` - -The pulsar-client tool - -Usage - -```bash - -$ pulsar-client command - -``` - -Commands -* `produce` -* `consume` - - -Options - -|Flag|Description|Default| -|---|---|---| -|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class, for example "key1:val1,key2:val2" or "{\"key1\":\"val1\",\"key2\":\"val2\"}"|{"saslJaasClientSectionName":"PulsarClient", "serverType":"broker"}| -|`--auth-plugin`|Authentication plugin class name|org.apache.pulsar.client.impl.auth.AuthenticationSasl| -|`--listener-name`|Listener name for the broker|| -|`--proxy-protocol`|Proxy protocol to select type of routing at proxy|| -|`--proxy-url`|Proxy-server URL to which to connect|| -|`--url`|Broker URL to which to connect|pulsar://localhost:6650/
    ws://localhost:8080 | -| `-v`, `--version` | Get the version of the Pulsar client -|`-h`, `--help`|Show this help - - -### `produce` -Send a message or messages to a specific broker and topic - -Usage - -```bash - -$ pulsar-client produce topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-f`, `--files`|Comma-separated file paths to send; either -m or -f must be specified|[]| -|`-m`, `--messages`|Comma-separated string of messages to send; either -m or -f must be specified|[]| -|`-n`, `--num-produce`|The number of times to send the message(s); the count of messages/files * num-produce should be below 1000|1| -|`-r`, `--rate`|Rate (in messages per second) at which to produce; a value 0 means to produce messages as fast as possible|0.0| -|`-c`, `--chunking`|Split the message and publish in chunks if the message size is larger than the allowed max size|false| -|`-s`, `--separator`|Character to split messages string with.|","| -|`-k`, `--key`|Message key to add|key=value string, like k1=v1,k2=v2.| -|`-p`, `--properties`|Properties to add. If you want to add multiple properties, use the comma as the separator, e.g. `k1=v1,k2=v2`.| | -|`-ekn`, `--encryption-key-name`|The public key name to encrypt payload.| | -|`-ekv`, `--encryption-key-value`|The URI of public key to encrypt payload. For example, `file:///path/to/public.key` or `data:application/x-pem-file;base64,*****`.| | - - -### `consume` -Consume messages from a specific broker and topic - -Usage - -```bash - -$ pulsar-client consume topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--hex`|Display binary messages in hexadecimal format.|false| -|`-n`, `--num-messages`|Number of messages to consume, 0 means to consume forever.|1| -|`-r`, `--rate`|Rate (in messages per second) at which to consume; a value 0 means to consume messages as fast as possible|0.0| -|`--regex`|Indicate the topic name is a regex pattern|false| -|`-s`, `--subscription-name`|Subscription name|| -|`-t`, `--subscription-type`|The type of the subscription. Possible values: Exclusive, Shared, Failover, Key_Shared.|Exclusive| -|`-p`, `--subscription-position`|The position of the subscription. Possible values: Latest, Earliest.|Latest| -|`-m`, `--subscription-mode`|Subscription mode.|Durable| -|`-q`, `--queue-size`|The size of consumer's receiver queue.|0| -|`-mc`, `--max_chunked_msg`|Max pending chunk messages.|0| -|`-ac`, `--auto_ack_chunk_q_full`|Auto ack for the oldest message in consumer's receiver queue if the queue full.|false| -|`--hide-content`|Do not print the message to the console.|false| -|`-st`, `--schema-type`|Set the schema type. Use `auto_consume` to dump AVRO and other structured data types. Possible values: bytes, auto_consume.|bytes| -|`-ekv`, `--encryption-key-value`|The URI of public key to encrypt payload. For example, `file:///path/to/public.key` or `data:application/x-pem-file;base64,*****`.| | -|`-pm`, `--pool-messages`|Use the pooled message.|true| - -## `pulsar-daemon` -A wrapper around the pulsar tool that’s used to start and stop processes, such as ZooKeeper, bookies, and Pulsar brokers, in the background using nohup. - -pulsar-daemon has a similar interface to the pulsar command but adds start and stop commands for various services. For a listing of those services, run pulsar-daemon to see the help output or see the documentation for the pulsar command. 
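
A typical lifecycle pairs the two commands (a sketch; `broker` stands in for any service that the `pulsar` command can start):

```bash

# Start a broker in the background using nohup
$ bin/pulsar-daemon start broker

# Stop the same service later
$ bin/pulsar-daemon stop broker

```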
- -Usage - -```bash - -$ pulsar-daemon command - -``` - -Commands -* `start` -* `stop` - - -### `start` -Start a service in the background using nohup. - -Usage - -```bash - -$ pulsar-daemon start service - -``` - -### `stop` -Stop a service that’s already been started using start. - -Usage - -```bash - -$ pulsar-daemon stop service options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|-force|Stop the service forcefully if not stopped by normal shutdown.|false| - - - -## `pulsar-perf` -A tool for performance testing a Pulsar broker. - -Usage - -```bash - -$ pulsar-perf command - -``` - -Commands -* `consume` -* `produce` -* `read` -* `websocket-producer` -* `managed-ledger` -* `monitor-brokers` -* `simulation-client` -* `simulation-controller` -* `help` - -Environment variables - -The table below lists the environment variables that you can use to configure the pulsar-perf tool. - -|Variable|Description|Default| -|---|---|---| -|`PULSAR_LOG_CONF`|Log4j configuration file|conf/log4j2.yaml| -|`PULSAR_CLIENT_CONF`|Configuration file for the client|conf/client.conf| -|`PULSAR_EXTRA_OPTS`|Extra options to be passed to the JVM|| -|`PULSAR_EXTRA_CLASSPATH`|Extra paths for Pulsar's classpath|| - - -### `consume` -Run a consumer - -Usage - -``` - -$ pulsar-perf consume options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.|| -|`--auth-plugin`|Authentication plugin class name|| -|`-ac`, `--auto_ack_chunk_q_full`|Auto ack for the oldest message in consumer's receiver queue if the queue full|false| -|`--listener-name`|Listener name for the broker|| -|`--acks-delay-millis`|Acknowledgements grouping delay in millis|100| -|`--batch-index-ack`|Enable or disable the batch index acknowledgment|false| -|`-bw`, `--busy-wait`|Enable or disable Busy-Wait on the Pulsar client|false| -|`-v`, `--encryption-key-value-file`|The file which contains the private key to decrypt payload|| -|`-h`, `--help`|Help message|false| -|`--conf-file`|Configuration file|| -|`-m`, `--num-messages`|Number of messages to consume in total. If the value is equal to or smaller than 0, it keeps consuming messages.|0| -|`-e`, `--expire_time_incomplete_chunked_messages`|The expiration time for incomplete chunk messages (in milliseconds)|0| -|`-c`, `--max-connections`|Max number of TCP connections to a single broker|100| -|`-mc`, `--max_chunked_msg`|Max pending chunk messages|0| -|`-n`, `--num-consumers`|Number of consumers (per topic)|1| -|`-ioThreads`, `--num-io-threads`|Set the number of threads to be used for handling connections to brokers|1| -|`-ns`, `--num-subscriptions`|Number of subscriptions (per topic)|1| -|`-t`, `--num-topics`|The number of topics|1| -|`-pm`, `--pool-messages`|Use the pooled message|true| -|`-r`, `--rate`|Simulate a slow message consumer (rate in msg/s)|0| -|`-q`, `--receiver-queue-size`|Size of the receiver queue|1000| -|`-p`, `--receiver-queue-size-across-partitions`|Max total size of the receiver queue across partitions|50000| -|`--replicated`|Whether the subscription status should be replicated|false| -|`-u`, `--service-url`|Pulsar service URL|| -|`-i`, `--stats-interval-seconds`|Statistics interval seconds. 
If 0, statistics will be disabled|0| -|`-s`, `--subscriber-name`|Subscriber name prefix|| -|`-ss`, `--subscriptions`|A list of subscriptions to consume on (e.g. sub1,sub2)|sub| -|`-st`, `--subscription-type`|Subscriber type. Possible values are Exclusive, Shared, Failover, Key_Shared.|Exclusive| -|`-sp`, `--subscription-position`|Subscriber position. Possible values are Latest, Earliest.|Latest| -|`-time`, `--test-duration`|Test duration (in seconds). If this value is less than or equal to 0, it keeps consuming messages.|0| -|`--trust-cert-file`|Path for the trusted TLS certificate file|| -|`--tls-allow-insecure`|Allow insecure TLS connection|| - - -### `produce` -Run a producer - -Usage - -```bash - -$ pulsar-perf produce options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-am`, `--access-mode`|Producer access mode. Valid values are `Shared`, `Exclusive` and `WaitForExclusive`|Shared| -|`-au`, `--admin-url`|Pulsar admin URL|| -|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.|| -|`--auth-plugin`|Authentication plugin class name|| -|`--listener-name`|Listener name for the broker|| -|`-b`, `--batch-time-window`|Batch messages in a window of the specified number of milliseconds|1| -|`-bb`, `--batch-max-bytes`|Maximum number of bytes per batch|4194304| -|`-bm`, `--batch-max-messages`|Maximum number of messages per batch|1000| -|`-bw`, `--busy-wait`|Enable or disable Busy-Wait on the Pulsar client|false| -|`-ch`, `--chunking`|Split the message and publish in chunks if the message size is larger than allowed max size|false| -|`-d`, `--delay`|Mark messages with a given delay in seconds|0s| -|`-z`, `--compression`|Compress messages’ payload. Possible values are NONE, LZ4, ZLIB, ZSTD or SNAPPY.|| -|`--conf-file`|Configuration file|| -|`-k`, `--encryption-key-name`|The public key name to encrypt payload|| -|`-v`, `--encryption-key-value-file`|The file which contains the public key to encrypt payload|| -|`-ef`, `--exit-on-failure`|Exit from the process on publish failure|false| -|`-fc`, `--format-class`|Custom Formatter class name|org.apache.pulsar.testclient.DefaultMessageFormatter| -|`-fp`, `--format-payload`|Format %i as a message index in the stream from producer and/or %t as the timestamp nanoseconds|false| -|`-h`, `--help`|Help message|false| -|`-c`, `--max-connections`|Max number of TCP connections to a single broker|100| -|`-o`, `--max-outstanding`|Max number of outstanding messages|1000| -|`-p`, `--max-outstanding-across-partitions`|Max number of outstanding messages across partitions|50000| -|`-m`, `--num-messages`|Number of messages to publish in total. If this value is less than or equal to 0, it keeps publishing messages.|0| -|`-mk`, `--message-key-generation-mode`|The generation mode of message key. Valid options are `autoIncrement`, `random`|| -|`-ioThreads`, `--num-io-threads`|Set the number of threads to be used for handling connections to brokers|1| -|`-n`, `--num-producers`|The number of producers (per topic)|1| -|`-threads`, `--num-test-threads`|Number of test threads|1| -|`-t`, `--num-topic`|The number of topics|1| -|`-np`, `--partitions`|Create partitioned topics with the given number of partitions. 
Setting this value to 0 means not trying to create a topic|| -|`-f`, `--payload-file`|Use payload from an UTF-8 encoded text file and a payload will be randomly selected when publishing messages|| -|`-e`, `--payload-delimiter`|The delimiter used to split lines when using payload from a file|\n| -|`-pn`, `--producer-name`|Producer Name|| -|`-r`, `--rate`|Publish rate msg/s across topics|100| -|`--send-timeout`|Set the sendTimeout|0| -|`--separator`|Separator between the topic and topic number|-| -|`-u`, `--service-url`|Pulsar service URL|| -|`-s`, `--size`|Message size (in bytes)|1024| -|`-i`, `--stats-interval-seconds`|Statistics interval seconds. If 0, statistics will be disabled.|0| -|`-time`, `--test-duration`|Test duration (in seconds). If this value is less than or equal to 0, it keeps publishing messages.|0| -|`--trust-cert-file`|Path for the trusted TLS certificate file|| -|`--warmup-time`|Warm-up time in seconds|1| -|`--tls-allow-insecure`|Allow insecure TLS connection|| - - -### `read` -Run a topic reader - -Usage - -```bash - -$ pulsar-perf read options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.|| -|`--auth-plugin`|Authentication plugin class name|| -|`--listener-name`|Listener name for the broker|| -|`--conf-file`|Configuration file|| -|`-h`, `--help`|Help message|false| -|`-n`, `--num-messages`|Number of messages to consume in total. If the value is equal to or smaller than 0, it keeps consuming messages.|0| -|`-c`, `--max-connections`|Max number of TCP connections to a single broker|100| -|`-ioThreads`, `--num-io-threads`|Set the number of threads to be used for handling connections to brokers|1| -|`-t`, `--num-topics`|The number of topics|1| -|`-r`, `--rate`|Simulate a slow message reader (rate in msg/s)|0| -|`-q`, `--receiver-queue-size`|Size of the receiver queue|1000| -|`-u`, `--service-url`|Pulsar service URL|| -|`-m`, `--start-message-id`|Start message id. This can be either 'earliest', 'latest' or a specific message id by using 'lid:eid'|earliest| -|`-i`, `--stats-interval-seconds`|Statistics interval seconds. If 0, statistics will be disabled.|0| -|`-time`, `--test-duration`|Test duration (in seconds). If this value is less than or equal to 0, it keeps consuming messages.|0| -|`--trust-cert-file`|Path for the trusted TLS certificate file|| -|`--use-tls`|Use TLS encryption on the connection|false| -|`--tls-allow-insecure`|Allow insecure TLS connection|| - -### `websocket-producer` -Run a websocket producer - -Usage - -```bash - -$ pulsar-perf websocket-producer options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.|| -|`--auth-plugin`|Authentication plugin class name|| -|`--conf-file`|Configuration file|| -|`-h`, `--help`|Help message|false| -|`-m`, `--num-messages`|Number of messages to publish in total. 
If this value is less than or equal to 0, it keeps publishing messages.|0| -|`-t`, `--num-topic`|The number of topics|1| -|`-f`, `--payload-file`|Use payload from a file instead of empty buffer|| -|`-e`, `--payload-delimiter`|The delimiter used to split lines when using payload from a file|\n| -|`-fp`, `--format-payload`|Format %i as a message index in the stream from producer and/or %t as the timestamp nanoseconds|false| -|`-fc`, `--format-class`|Custom formatter class name|`org.apache.pulsar.testclient.DefaultMessageFormatter`| -|`-u`, `--proxy-url`|Pulsar Proxy URL, e.g., "ws://localhost:8080/"|| -|`-r`, `--rate`|Publish rate msg/s across topics|100| -|`-s`, `--size`|Message size in byte|1024| -|`-time`, `--test-duration`|Test duration (in seconds). If this value is less than or equal to 0, it keeps publishing messages.|0| - - -### `managed-ledger` -Write directly on managed-ledgers - -Usage - -```bash - -$ pulsar-perf managed-ledger options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-a`, `--ack-quorum`|Ledger ack quorum|1| -|`-dt`, `--digest-type`|BookKeeper digest type. Possible Values: [CRC32, MAC, CRC32C, DUMMY]|CRC32C| -|`-e`, `--ensemble-size`|Ledger ensemble size|1| -|`-h`, `--help`|Help message|false| -|`-c`, `--max-connections`|Max number of TCP connections to a single bookie|1| -|`-o`, `--max-outstanding`|Max number of outstanding requests|1000| -|`-m`, `--num-messages`|Number of messages to publish in total. If this value is less than or equal to 0, it keeps publishing messages.|0| -|`-t`, `--num-topic`|Number of managed ledgers|1| -|`-r`, `--rate`|Write rate msg/s across managed ledgers|100| -|`-s`, `--size`|Message size in byte|1024| -|`-time`, `--test-duration`|Test duration (in seconds). If this value is less than or equal to 0, it keeps publishing messages.|0| -|`--threads`|Number of threads writing|1| -|`-w`, `--write-quorum`|Ledger write quorum|1| -|`-zk`, `--zookeeperServers`|ZooKeeper connection string|| - - -### `monitor-brokers` -Continuously receive broker data and/or load reports - -Usage - -```bash - -$ pulsar-perf monitor-brokers options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--connect-string`|A connection string for one or more ZooKeeper servers|| -|`-h`, `--help`|Help message|false| - - -### `simulation-client` -Run a simulation server acting as a Pulsar client. Uses the client configuration specified in `conf/client.conf`. - -Usage - -```bash - -$ pulsar-perf simulation-client options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--port`|Port to listen on for controller|0| -|`--service-url`|Pulsar Service URL|| -|`-h`, `--help`|Help message|false| - -### `simulation-controller` -Run a simulation controller to give commands to servers - -Usage - -```bash - -$ pulsar-perf simulation-controller options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--client-port`|The port that the clients are listening on|0| -|`--clients`|Comma-separated list of client hostnames|| -|`--cluster`|The cluster to test on|| -|`-h`, `--help`|Help message|false| - - -### `help` -This help message - -Usage - -```bash - -$ pulsar-perf help - -``` - -## `bookkeeper` -A tool for managing BookKeeper. - -Usage - -```bash - -$ bookkeeper command - -``` - -Commands -* `auto-recovery` -* `bookie` -* `localbookie` -* `upgrade` -* `shell` - - -Environment variables - -The table below lists the environment variables that you can use to configure the bookkeeper tool. 
- -|Variable|Description|Default| -|---|---|---| -|BOOKIE_LOG_CONF|Log4j configuration file|conf/log4j2.yaml| -|BOOKIE_CONF|BookKeeper configuration file|conf/bk_server.conf| -|BOOKIE_EXTRA_OPTS|Extra options to be passed to the JVM|| -|BOOKIE_EXTRA_CLASSPATH|Extra paths for BookKeeper's classpath|| -|ENTRY_FORMATTER_CLASS|The Java class used to format entries|| -|BOOKIE_PID_DIR|Folder where the BookKeeper server PID file should be stored|| -|BOOKIE_STOP_TIMEOUT|Wait time before forcefully killing the Bookie server instance if attempts to stop it are not successful|| - - -### `auto-recovery` -Runs an auto-recovery service daemon - -Usage - -```bash - -$ bookkeeper auto-recovery options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-c`, `--conf`|Configuration for the auto-recovery daemon|| - - -### `bookie` -Starts up a BookKeeper server (aka bookie) - -Usage - -```bash - -$ bookkeeper bookie options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-c`, `--conf`|Configuration for the auto-recovery daemon|| -|-readOnly|Force start a read-only bookie server|false| -|-withAutoRecovery|Start auto-recovery service bookie server|false| - - -### `localbookie` -Runs a test ensemble of N bookies locally - -Usage - -```bash - -$ bookkeeper localbookie N - -``` - -### `upgrade` -Upgrade the bookie’s filesystem - -Usage - -```bash - -$ bookkeeper upgrade options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-c`, `--conf`|Configuration for the auto-recovery daemon|| -|`-u`, `--upgrade`|Upgrade the bookie’s directories|| - - -### `shell` -Run shell for admin commands. To see a full listing of those commands, run bookkeeper shell without an argument. - -Usage - -```bash - -$ bookkeeper shell - -``` - -Example - -```bash - -$ bookkeeper shell bookiesanity - -``` - -## `broker-tool` - -The `broker- tool` is used for operations on a specific broker. - -Usage - -```bash - -$ broker-tool command - -``` - -Commands -* `load-report` -* `help` - -Example -Two ways to get more information about a command as below: - -```bash - -$ broker-tool help command -$ broker-tool command --help - -``` - -### `load-report` - -Collect the load report of a specific broker. -The command is run on a broker, and used for troubleshooting why broker can’t collect right load report. - -Options - -|Flag|Description|Default| -|---|---|---| -|`-i`, `--interval`| Interval to collect load report, in milliseconds || -|`-h`, `--help`| Display help information || - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/reference-configuration.md b/site2/website/versioned_docs/version-2.9.0-deprecated/reference-configuration.md deleted file mode 100644 index 2cf706608298ec..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/reference-configuration.md +++ /dev/null @@ -1,774 +0,0 @@ ---- -id: reference-configuration -title: Pulsar configuration -sidebar_label: "Pulsar configuration" -original_id: reference-configuration ---- - - - - -You can manage Pulsar configuration by configuration files in the [`conf`](https://github.com/apache/pulsar/tree/master/conf) directory of a Pulsar [installation](getting-started-standalone.md). 
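
For example, each component reads one file from that directory by default, and the path can be overridden with the environment variables described above (a sketch, assuming a standard binary installation; `brokerServicePort` is just one of the broker settings listed below):

```bash

# Inspect a setting in the default broker configuration
$ grep brokerServicePort conf/broker.conf

# Run a broker against an alternative configuration file
$ PULSAR_BROKER_CONF=/path/to/broker.conf bin/pulsar broker

```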
- -- [BookKeeper](#bookkeeper) -- [Broker](#broker) -- [Client](#client) -- [Log4j](#log4j) -- [Log4j shell](#log4j-shell) -- [Standalone](#standalone) -- [WebSocket](#websocket) -- [Pulsar proxy](#pulsar-proxy) -- [ZooKeeper](#zookeeper) - -## BookKeeper - -BookKeeper is a replicated log storage system that Pulsar uses for durable storage of all messages. - - -|Name|Description|Default| -|---|---|---| -|bookiePort|The port on which the bookie server listens.|3181| -|allowLoopback|Whether the bookie is allowed to use a loopback interface as its primary interface (that is the interface used to establish its identity). By default, loopback interfaces are not allowed to work as the primary interface. Using a loopback interface as the primary interface usually indicates a configuration error. For example, it’s fairly common in some VPS setups to not configure a hostname or to have the hostname resolve to `127.0.0.1`. If this is the case, then all bookies in the cluster will establish their identities as `127.0.0.1:3181` and only one will be able to join the cluster. For VPSs configured like this, you should explicitly set the listening interface.|false| -|listeningInterface|The network interface on which the bookie listens. By default, the bookie listens on all interfaces.|eth0| -|advertisedAddress|Configure a specific hostname or IP address that the bookie should use to advertise itself to clients. By default, the bookie advertises either its own IP address or hostname according to the `listeningInterface` and `useHostNameAsBookieID` settings.|N/A| -|allowMultipleDirsUnderSameDiskPartition|Configure the bookie to enable/disable multiple ledger/index/journal directories in the same filesystem disk partition.|false| -|minUsableSizeForIndexFileCreation|The minimum safe usable size available in index directory for bookie to create index files while replaying journal at the time of bookie starts in Readonly Mode (in bytes).|1073741824| -|journalDirectory|The directory where BookKeeper outputs its write-ahead log (WAL).|data/bookkeeper/journal| -|journalDirectories|Directories that BookKeeper outputs its write ahead log. Multiple directories are available, being separated by `,`. For example: `journalDirectories=/tmp/bk-journal1,/tmp/bk-journal2`. If `journalDirectories` is set, the bookies skip `journalDirectory` and use this setting directory.|/tmp/bk-journal| -|ledgerDirectories|The directory where BookKeeper outputs ledger snapshots. This could define multiple directories to store snapshots separated by `,`, for example `ledgerDirectories=/tmp/bk1-data,/tmp/bk2-data`. Ideally, ledger dirs and the journal dir are each in a different device, which reduces the contention between random I/O and sequential write. It is possible to run with a single disk, but performance will be significantly lower.|data/bookkeeper/ledgers| -|ledgerManagerType|The type of ledger manager used to manage how ledgers are stored, managed, and garbage collected. See [BookKeeper Internals](http://bookkeeper.apache.org/docs/latest/getting-started/concepts) for more info.|hierarchical| -|zkLedgersRootPath|The root ZooKeeper path used to store ledger metadata. This parameter is used by the ZooKeeper-based ledger manager as a root znode to store all ledgers.|/ledgers| -|ledgerStorageClass|Ledger storage implementation class|org.apache.bookkeeper.bookie.storage.ldb.DbLedgerStorage| -|entryLogFilePreallocationEnabled|Enable or disable entry logger preallocation|true| -|logSizeLimit|Max file size of the entry logger, in bytes. 
A new entry log file will be created when the old one reaches the file size limitation.|1073741824|
|minorCompactionThreshold|Threshold of minor compaction. Entry log files whose remaining size percentage reaches below this threshold will be compacted in a minor compaction. If set to less than zero, the minor compaction is disabled.|0.2|
|minorCompactionInterval|Time interval to run minor compaction, in seconds. If set to less than zero, the minor compaction is disabled. Note: should be greater than gcWaitTime. |3600|
|majorCompactionThreshold|The threshold of major compaction. Entry log files whose remaining size percentage reaches below this threshold will be compacted in a major compaction. Those entry log files whose remaining size percentage is still higher than the threshold will never be compacted. If set to less than zero, the major compaction is disabled.|0.5|
|majorCompactionInterval|The time interval to run major compaction, in seconds. If set to less than zero, the major compaction is disabled. Note: should be greater than gcWaitTime. |86400|
|readOnlyModeEnabled|If `readOnlyModeEnabled=true`, then on all full ledger disks, the bookie will be converted to read-only mode and serve only read requests. Otherwise the bookie will be shut down.|true|
|forceReadOnlyBookie|Whether the bookie is force started in read-only mode.|false|
|persistBookieStatusEnabled|Persist the bookie status locally on the disks, so the bookies can keep their status upon restarts.|false|
|compactionMaxOutstandingRequests|Sets the maximum number of entries that can be compacted without flushing. When compacting, the entries are written to the entrylog and the new offsets are cached in memory. Once the entrylog is flushed, the index is updated with the new offsets. This parameter controls the number of entries added to the entrylog before a flush is forced. A higher value for this parameter means more memory will be used for offsets. Each offset consists of 3 longs. This parameter should not be modified unless you're fully aware of the consequences.|100000|
|compactionRate|The rate at which compaction will read entries, in adds per second.|1000|
|isThrottleByBytes|Throttle compaction by bytes or by entries.|false|
|compactionRateByEntries|The rate at which compaction will read entries, in adds per second.|1000|
|compactionRateByBytes|Set the rate at which compaction reads entries. The unit is bytes added per second.|1000000|
|journalMaxSizeMB|Max file size of the journal file, in megabytes. A new journal file will be created when the old one reaches the file size limitation.|2048|
|journalMaxBackups|The max number of old journal files to keep.
Keeping a number of old journal files would help data recovery in special cases.|5|
|journalPreAllocSizeMB|How much space to pre-allocate at a time in the journal.|16|
|journalWriteBufferSizeKB|The size of the write buffers used for the journal.|64|
|journalRemoveFromPageCache|Whether pages should be removed from the page cache after force write.|true|
|journalAdaptiveGroupWrites|Whether to group journal force writes, which optimizes group commit for higher throughput.|true|
|journalMaxGroupWaitMSec|The maximum latency to impose on a journal write to achieve grouping.|1|
|journalAlignmentSize|All journal writes and commits should be aligned to the given size.|4096|
|journalBufferedWritesThreshold|Maximum writes to buffer to achieve grouping.|524288|
|journalFlushWhenQueueEmpty|Whether to flush the journal when the journal queue is empty.|false|
|numJournalCallbackThreads|The number of threads that should handle journal callbacks.|8|
|openLedgerRereplicationGracePeriod | The grace period, in milliseconds, that the replication worker waits before fencing and replicating a ledger fragment that's still being written to upon bookie failure. | 30000 |
|rereplicationEntryBatchSize|The maximum number of entries to keep in a fragment for re-replication.|100|
|autoRecoveryDaemonEnabled|Whether the bookie itself can start the auto-recovery service.|true|
|lostBookieRecoveryDelay|How long to wait, in seconds, before starting auto recovery of a lost bookie.|0|
|gcWaitTime|The interval to trigger the next garbage collection, in milliseconds. Since garbage collection runs in the background, too frequent a gc will hurt performance. It is better to set a higher gc interval if there is enough disk capacity.|900000|
|gcOverreplicatedLedgerWaitTime|The interval to trigger the next garbage collection of overreplicated ledgers, in milliseconds. This should not be run very frequently since the metadata for all the ledgers on the bookie is read from ZooKeeper.|86400000|
|flushInterval|The interval to flush ledger index pages to disk, in milliseconds. Flushing index files introduces much random disk I/O. If the journal dir and ledger dirs are each on different devices, flushing does not affect performance. But if the journal dir and ledger dirs are on the same device, performance degrades significantly with too frequent flushing. You can consider incrementing the flush interval to get better performance, but you will pay more time on bookie server restart after a failure.|60000|
|bookieDeathWatchInterval|Interval to watch whether the bookie is dead or not, in milliseconds.|1000|
|allowStorageExpansion|Allow the bookie storage to expand. Newly added ledger and index dirs must be empty.|false|
|zkServers|A list of one or more servers on which ZooKeeper is running. The server list can be comma-separated values, for example: zkServers=zk1:2181,zk2:2181,zk3:2181.|localhost:2181|
|zkTimeout|ZooKeeper client session timeout in milliseconds. The bookie server will exit if it receives SESSION_EXPIRED because it was partitioned off from ZooKeeper for more than the session timeout; JVM garbage collection or disk I/O can cause SESSION_EXPIRED.
Increment this value could help avoiding this issue|30000| -|zkRetryBackoffStartMs|The start time that the Zookeeper client backoff retries in milliseconds.|1000| -|zkRetryBackoffMaxMs|The maximum time that the Zookeeper client backoff retries in milliseconds.|10000| -|zkEnableSecurity|Set ACLs on every node written on ZooKeeper, allowing users to read and write BookKeeper metadata stored on ZooKeeper. In order to make ACLs work you need to setup ZooKeeper JAAS authentication. All the bookies and Client need to share the same user, and this is usually done using Kerberos authentication. See ZooKeeper documentation.|false| -|httpServerEnabled|The flag enables/disables starting the admin http server.|false| -|httpServerPort|The HTTP server port to listen on. By default, the value is `8080`. If you want to keep it consistent with the Prometheus stats provider, you can set it to `8000`.|8080 -|httpServerClass|The http server class.|org.apache.bookkeeper.http.vertx.VertxHttpServer| -|serverTcpNoDelay|This settings is used to enabled/disabled Nagle’s algorithm, which is a means of improving the efficiency of TCP/IP networks by reducing the number of packets that need to be sent over the network. If you are sending many small messages, such that more than one can fit in a single IP packet, setting server.tcpnodelay to false to enable Nagle algorithm can provide better performance.|true| -|serverSockKeepalive|This setting is used to send keep-alive messages on connection-oriented sockets.|true| -|serverTcpLinger|The socket linger timeout on close. When enabled, a close or shutdown will not return until all queued messages for the socket have been successfully sent or the linger timeout has been reached. Otherwise, the call returns immediately and the closing is done in the background.|0| -|byteBufAllocatorSizeMax|The maximum buf size of the received ByteBuf allocator.|1048576| -|nettyMaxFrameSizeBytes|The maximum netty frame size in bytes. Any message received larger than this will be rejected.|5253120| -|openFileLimit|Max number of ledger index files could be opened in bookie server If number of ledger index files reaches this limitation, bookie server started to swap some ledgers from memory to disk. Too frequent swap will affect performance. You can tune this number to gain performance according your requirements.|0| -|pageSize|Size of a index page in ledger cache, in bytes A larger index page can improve performance writing page to disk, which is efficient when you have small number of ledgers and these ledgers have similar number of entries. If you have large number of ledgers and each ledger has fewer entries, smaller index page would improve memory usage.|8192| -|pageLimit|How many index pages provided in ledger cache If number of index pages reaches this limitation, bookie server starts to swap some ledgers from memory to disk. You can increment this value when you found swap became more frequent. But make sure pageLimit*pageSize should not more than JVM max memory limitation, otherwise you would got OutOfMemoryException. In general, incrementing pageLimit, using smaller index page would gain better performance in lager number of ledgers with fewer entries case If pageLimit is -1, bookie server will use 1/3 of JVM memory to compute the limitation of number of index pages.|0| -|readOnlyModeEnabled|If all ledger directories configured are full, then support only read requests for clients. 
If "readOnlyModeEnabled=true" then on all ledger disks full, bookie will be converted to read-only mode and serve only read requests. Otherwise the bookie will be shutdown. By default this will be disabled.|true| -|diskUsageThreshold|For each ledger dir, maximum disk space which can be used. Default is 0.95f. i.e. 95% of disk can be used at most after which nothing will be written to that partition. If all ledger dir partitions are full, then bookie will turn to readonly mode if ‘readOnlyModeEnabled=true’ is set, else it will shutdown. Valid values should be in between 0 and 1 (exclusive).|0.95| -|diskCheckInterval|Disk check interval in milli seconds, interval to check the ledger dirs usage.|10000| -|auditorPeriodicCheckInterval|Interval at which the auditor will do a check of all ledgers in the cluster. By default this runs once a week. The interval is set in seconds. To disable the periodic check completely, set this to 0. Note that periodic checking will put extra load on the cluster, so it should not be run more frequently than once a day.|604800| -|sortedLedgerStorageEnabled|Whether sorted-ledger storage is enabled.|true| -|auditorPeriodicBookieCheckInterval|The interval between auditor bookie checks. The auditor bookie check, checks ledger metadata to see which bookies should contain entries for each ledger. If a bookie which should contain entries is unavailable, thea the ledger containing that entry is marked for recovery. Setting this to 0 disabled the periodic check. Bookie checks will still run when a bookie fails. The interval is specified in seconds.|86400| -|numAddWorkerThreads|The number of threads that should handle write requests. if zero, the writes would be handled by netty threads directly.|0| -|numReadWorkerThreads|The number of threads that should handle read requests. if zero, the reads would be handled by netty threads directly.|8| -|numHighPriorityWorkerThreads|The umber of threads that should be used for high priority requests (i.e. recovery reads and adds, and fencing).|8| -|maxPendingReadRequestsPerThread|If read workers threads are enabled, limit the number of pending requests, to avoid the executor queue to grow indefinitely.|2500| -|maxPendingAddRequestsPerThread|The limited number of pending requests, which is used to avoid the executor queue to grow indefinitely when add workers threads are enabled.|10000| -|isForceGCAllowWhenNoSpace|Whether force compaction is allowed when the disk is full or almost full. Forcing GC could get some space back, but could also fill up the disk space more quickly. This is because new log files are created before GC, while old garbage log files are deleted after GC.|false| -|verifyMetadataOnGC|True if the bookie should double check `readMetadata` prior to GC.|false| -|flushEntrylogBytes|Entry log flush interval in bytes. Flushing in smaller chunks but more frequently reduces spikes in disk I/O. Flushing too frequently may also affect performance negatively.|268435456| -|readBufferSizeBytes|The number of bytes we should use as capacity for BufferedReadChannel.|4096| -|writeBufferSizeBytes|The number of bytes used as capacity for the write buffer|65536| -|useHostNameAsBookieID|Whether the bookie should use its hostname to register with the coordination service (e.g.: zookeeper service). When false, bookie will use its ip address for the registration.|false| -|bookieId | If you want to custom a bookie ID or use a dynamic network address for the bookie, you can set the `bookieId`.

    Bookie advertises itself using the `bookieId` rather than the `BookieSocketAddress` (`hostname:port` or `IP:port`). If you set the `bookieId`, the `useHostNameAsBookieID` setting does not take effect.

    The `bookieId` is a non-empty string that can contain ASCII digits and letters ([a-zA-Z0-9]), colons, dashes, and dots.

    For more information about `bookieId`, see [here](http://bookkeeper.apache.org/bps/BP-41-bookieid/).|N/A| -|allowEphemeralPorts|Whether the bookie is allowed to use an ephemeral port (port 0) as its server port. By default, an ephemeral port is not allowed. Using an ephemeral port as the service port usually indicates a configuration error. However, in unit tests, using an ephemeral port will address port conflict problems and allow running tests in parallel.|false| -|enableLocalTransport|Whether the bookie is allowed to listen for the BookKeeper clients executed on the local JVM.|false| -|disableServerSocketBind|Whether the bookie is allowed to disable bind on network interfaces. This bookie will be available only to BookKeeper clients executed on the local JVM.|false| -|skipListArenaChunkSize|The number of bytes that we should use as chunk allocation for `org.apache.bookkeeper.bookie.SkipListArena`.|4194304| -|skipListArenaMaxAllocSize|The maximum size that we should allocate from the skiplist arena. Allocations larger than this should be allocated directly by the VM to avoid fragmentation.|131072| -|bookieAuthProviderFactoryClass|The factory class name of the bookie authentication provider. If this is null, then there is no authentication.|null| -|statsProviderClass||org.apache.bookkeeper.stats.prometheus.PrometheusMetricsProvider| -|prometheusStatsHttpPort||8000| -|dbStorage_writeCacheMaxSizeMb|Size of Write Cache. Memory is allocated from JVM direct memory. Write cache is used to buffer entries before flushing into the entry log. For good performance, it should be big enough to hold a substantial amount of entries in the flush interval.|25% of direct memory| -|dbStorage_readAheadCacheMaxSizeMb|Size of Read cache. Memory is allocated from JVM direct memory. This read cache is pre-filled doing read-ahead whenever a cache miss happens. By default, it is allocated to 25% of the available direct memory.|N/A| -|dbStorage_readAheadCacheBatchSize|How many entries to pre-fill in cache after a read cache miss|1000| -|dbStorage_rocksDB_blockCacheSize|Size of RocksDB block-cache. For best performance, this cache should be big enough to hold a significant portion of the index database which can reach ~2GB in some cases. By default, it uses 10% of direct memory.|N/A| -|dbStorage_rocksDB_writeBufferSizeMB||64| -|dbStorage_rocksDB_sstSizeInMB||64| -|dbStorage_rocksDB_blockSize||65536| -|dbStorage_rocksDB_bloomFilterBitsPerKey||10| -|dbStorage_rocksDB_numLevels||-1| -|dbStorage_rocksDB_numFilesInLevel0||4| -|dbStorage_rocksDB_maxSizeInLevel1MB||256| - -## Broker - -Pulsar brokers are responsible for handling incoming messages from producers, dispatching messages to consumers, replicating data between clusters, and more. - -|Name|Description|Default| -|---|---|---| -|advertisedListeners|Specify multiple advertised listeners for the broker.

    The format is `<listener_name>:pulsar://<host>:<port>`.

    If there are multiple listeners, separate them with commas.

    **Note**: do not use this configuration with `advertisedAddress` and `brokerServicePort`. If the value of this configuration is empty, the broker uses `advertisedAddress` and `brokerServicePort`.|/|
|internalListenerName|Specify the internal listener name for the broker.

    **Note**: the listener name must be contained in `advertisedListeners`.

    If the value of this configuration is empty, the broker uses the first listener as the internal listener.|/| -|authenticateOriginalAuthData| If this flag is set to `true`, the broker authenticates the original Auth data; else it just accepts the originalPrincipal and authorizes it (if required). |false| -|enablePersistentTopics| Whether persistent topics are enabled on the broker |true| -|enableNonPersistentTopics| Whether non-persistent topics are enabled on the broker |true| -|functionsWorkerEnabled| Whether the Pulsar Functions worker service is enabled in the broker |false| -|exposePublisherStats|Whether to enable topic level metrics.|true| -|statsUpdateFrequencyInSecs||60| -|statsUpdateInitialDelayInSecs||60| -|zookeeperServers| Zookeeper quorum connection string || -|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300| -|configurationStoreServers| Configuration store connection string (as a comma-separated list) || -|brokerServicePort| Broker data port |6650| -|brokerServicePortTls| Broker data port for TLS |6651| -|webServicePort| Port to use to server HTTP request |8080| -|webServicePortTls| Port to use to server HTTPS request |8443| -|webSocketServiceEnabled| Enable the WebSocket API service in broker |false| -|webSocketNumIoThreads|The number of IO threads in Pulsar Client used in WebSocket proxy.|8| -|webSocketConnectionsPerBroker|The number of connections per Broker in Pulsar Client used in WebSocket proxy.|8| -|webSocketSessionIdleTimeoutMillis|Time in milliseconds that idle WebSocket session times out.|300000| -|webSocketMaxTextFrameSize|The maximum size of a text message during parsing in WebSocket proxy.|1048576| -|exposeTopicLevelMetricsInPrometheus|Whether to enable topic level metrics.|true| -|exposeConsumerLevelMetricsInPrometheus|Whether to enable consumer level metrics.|false| -|jvmGCMetricsLoggerClassName|Classname of Pluggable JVM GC metrics logger that can log GC specific metrics.|N/A| -|bindAddress| Hostname or IP address the service binds on, default is 0.0.0.0. |0.0.0.0| -|bindAddresses| Additional Hostname or IP addresses the service binds on: `listener_name:scheme://host:port,...`. || -|advertisedAddress| Hostname or IP address the service advertises to the outside world. If not set, the value of `InetAddress.getLocalHost().getHostName()` is used. || -|clusterName| Name of the cluster to which this broker belongs to || -|maxTenants|The maximum number of tenants that can be created in each Pulsar cluster. When the number of tenants reaches the threshold, the broker rejects the request of creating a new tenant. The default value 0 disables the check. |0| -| maxNamespacesPerTenant | The maximum number of namespaces that can be created in each tenant. When the number of namespaces reaches this threshold, the broker rejects the request of creating a new tenant. The default value 0 disables the check. |0| -|brokerDeduplicationEnabled| Sets the default behavior for message deduplication in the broker. If enabled, the broker will reject messages that were already stored in the topic. This setting can be overridden on a per-namespace basis. |false| -|brokerDeduplicationMaxNumberOfProducers| The maximum number of producers for which information will be stored for deduplication purposes. |10000| -|brokerDeduplicationEntriesInterval| The number of entries after which a deduplication informational snapshot is taken. 
A larger interval will lead to fewer snapshots being taken, though this would also lengthen the topic recovery time (the time required for entries published after the snapshot to be replayed). |1000| -|brokerDeduplicationSnapshotIntervalSeconds| The time period after which a deduplication informational snapshot is taken. It runs simultaneously with `brokerDeduplicationEntriesInterval`. |120| -|brokerDeduplicationProducerInactivityTimeoutMinutes| The time of inactivity (in minutes) after which the broker will discard deduplication information related to a disconnected producer. |360| -|dispatchThrottlingRatePerReplicatorInMsg| The default messages per second dispatch throttling-limit for every replicator in replication. The value of `0` means disabling replication message dispatch-throttling| 0 | -|dispatchThrottlingRatePerReplicatorInByte| The default bytes per second dispatch throttling-limit for every replicator in replication. The value of `0` means disabling replication message-byte dispatch-throttling| 0 | -|zooKeeperSessionTimeoutMillis| Zookeeper session timeout in milliseconds |30000| -|brokerShutdownTimeoutMs| Time to wait for broker graceful shutdown. After this time elapses, the process will be killed |60000| -|skipBrokerShutdownOnOOM| Flag to skip broker shutdown when broker handles Out of memory error. |false| -|backlogQuotaCheckEnabled| Enable backlog quota check. Enforces action on topic when the quota is reached |true| -|backlogQuotaCheckIntervalInSeconds| How often to check for topics that have reached the quota |60| -|backlogQuotaDefaultLimitBytes| The default per-topic backlog quota limit. Being less than 0 means no limitation. By default, it is -1. | -1 | -|backlogQuotaDefaultRetentionPolicy|The defaulted backlog quota retention policy. By Default, it is `producer_request_hold`.
  - `producer_request_hold`: policy that holds the producer's send request until the resource becomes available (or holding times out)
  - `producer_exception`: policy that throws `javax.jms.ResourceAllocationException` to the producer
  - `consumer_backlog_eviction`: policy that evicts the oldest message from the slowest consumer's backlog
  |producer_request_hold| -|allowAutoTopicCreation| Enable automatic topic creation when a new producer or consumer connects |true| -|allowAutoTopicCreationType| The type of topic that is allowed to be automatically created (partitioned/non-partitioned) |non-partitioned| -|allowAutoSubscriptionCreation| Enable automatic subscription creation when a new consumer connects |true| -|defaultNumPartitions| The default number of partitions for a partitioned topic that is automatically created when `allowAutoTopicCreationType` is partitioned |1| -|brokerDeleteInactiveTopicsEnabled| Enable the deletion of inactive topics. If topics are not consumed for a while, these inactive topics might be cleaned up. Deleting inactive topics is enabled by default. The default period is 1 minute. |true| -|brokerDeleteInactiveTopicsFrequencySeconds| How often to check for inactive topics |60| -| brokerDeleteInactiveTopicsMode | Set the mode to delete inactive topics.
  - `delete_when_no_subscriptions`: delete the topic which has no subscriptions or active producers.
  - `delete_when_subscriptions_caught_up`: delete the topic whose subscriptions have no backlogs and which has no active producers or consumers.
  | `delete_when_no_subscriptions` | -| brokerDeleteInactiveTopicsMaxInactiveDurationSeconds | Set the maximum duration for inactive topics. If it is not specified, the `brokerDeleteInactiveTopicsFrequencySeconds` parameter is adopted. | N/A | -|forceDeleteTenantAllowed| Allow forceful deletion of a tenant. |false| -|forceDeleteNamespaceAllowed| Allow forceful deletion of a namespace. |false| -|messageExpiryCheckIntervalInMinutes| The frequency of proactively checking and purging expired messages. |5| -|brokerServiceCompactionMonitorIntervalInSeconds| Interval between checks to determine whether topics with compaction policies need compaction. |60| -|brokerServiceCompactionThresholdInBytes|If the estimated backlog size is greater than this threshold, compaction is triggered.

    Setting this threshold to 0 disables the compaction check.|N/A| -|delayedDeliveryEnabled| Whether to enable delayed delivery for messages. If disabled, messages will be immediately delivered and there will be no tracking overhead.|true| -|delayedDeliveryTickTimeMillis|Control the tick time for retrying on delayed delivery, which affects the accuracy of the delivery time compared to the scheduled time. By default, it is 1 second.|1000| -|activeConsumerFailoverDelayTimeMillis| How long to delay rewinding cursor and dispatching messages when active consumer is changed. |1000| -|clientLibraryVersionCheckEnabled| Enable check for minimum allowed client library version |false| -|clientLibraryVersionCheckAllowUnversioned| Allow client libraries with no version information |true| -|statusFilePath| Path for the file used to determine the rotation status for the broker when responding to service discovery health checks || -|preferLaterVersions| If true (and ModularLoadManagerImpl is being used), the load manager will attempt to use only brokers running the latest software version (to minimize impact to bundles) |false| -|maxNumPartitionsPerPartitionedTopic|Max number of partitions per partitioned topic. Use 0 or a negative number to disable the check|0| -| maxSubscriptionsPerTopic | Maximum number of subscriptions allowed to subscribe to a topic. Once this limit is reached, the broker rejects new subscriptions until the number of subscriptions decreases. When the value is set to 0, the limit check is disabled. | 0 | -| maxProducersPerTopic | Maximum number of producers allowed to connect to a topic. Once this limit is reached, the broker rejects new producers until the number of connected producers decreases. When the value is set to 0, the limit check is disabled. | 0 | -| maxConsumersPerTopic | Maximum number of consumers allowed to connect to a topic. Once this limit is reached, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 | -| maxConsumersPerSubscription | Maximum number of consumers allowed to connect to a subscription. Once this limit is reached, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 | -|tlsEnabled|Deprecated - Use `webServicePortTls` and `brokerServicePortTls` instead. |false| -|tlsCertificateFilePath| Path for the TLS certificate file || -|tlsKeyFilePath| Path for the TLS private key file || -|tlsTrustCertsFilePath| Path for the trusted TLS certificate file. This cert is used to verify that any certs presented by connecting clients are signed by a certificate authority. If this verification fails, then the certs are untrusted and the connections are dropped. || -|tlsAllowInsecureConnection| Accept untrusted TLS certificate from client. If it is set to `true`, a client with a cert which cannot be verified with the 'tlsTrustCertsFilePath' cert will be allowed to connect to the server, though the cert will not be used for client authentication. |false| -|tlsProtocols|Specify the TLS protocols the broker will use to negotiate during TLS handshake. Multiple values can be specified, separated by commas. Example: ```TLSv1.3```, ```TLSv1.2``` || -|tlsCiphers|Specify the TLS ciphers the broker will use to negotiate during TLS handshake. Multiple values can be specified, separated by commas. 
Example: ```TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256```|| -|tlsEnabledWithKeyStore| Enable TLS with KeyStore type configuration in broker |false| -|tlsProvider| TLS Provider for KeyStore type || -|tlsKeyStoreType| TLS KeyStore type configuration in broker: JKS, PKCS12 |JKS| -|tlsKeyStore| TLS KeyStore path in broker || -|tlsKeyStorePassword| TLS KeyStore password for broker || -|brokerClientTlsEnabledWithKeyStore| Whether the internal client uses KeyStore type to authenticate with Pulsar brokers |false| -|brokerClientSslProvider| The TLS Provider used by the internal client to authenticate with other Pulsar brokers || -|brokerClientTlsTrustStoreType| TLS TrustStore type configuration for internal client: JKS, PKCS12, used by the internal client to authenticate with Pulsar brokers |JKS| -|brokerClientTlsTrustStore| TLS TrustStore path for internal client, used by the internal client to authenticate with Pulsar brokers || -|brokerClientTlsTrustStorePassword| TLS TrustStore password for internal client, used by the internal client to authenticate with Pulsar brokers || -|brokerClientTlsCiphers| Specify the TLS ciphers the internal client will use to negotiate during TLS handshake (a comma-separated list of ciphers), e.g. [TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256]|| -|brokerClientTlsProtocols|Specify the TLS protocols the broker will use to negotiate during TLS handshake (a comma-separated list of protocol names), e.g. `TLSv1.3`, `TLSv1.2` || -|ttlDurationDefaultInSeconds|The default Time to Live (TTL) for namespaces if the TTL is not configured at namespace policies. When the value is set to `0`, TTL is disabled. By default, TTL is disabled. |0| -|tokenSecretKey| Configure the secret key to be used to validate auth tokens. The key can be specified like: `tokenSecretKey=data:;base64,xxxxxxxxx` or `tokenSecretKey=file:///my/secret.key`. Note: key file must be DER-encoded.|| -|tokenPublicKey| Configure the public key to be used to validate auth tokens. The key can be specified like: `tokenPublicKey=data:;base64,xxxxxxxxx` or `tokenPublicKey=file:///my/secret.key`. Note: key file must be DER-encoded.|| -|tokenPublicAlg| Configure the algorithm to be used to validate auth tokens. This can be any of the asymmetric algorithms supported by Java JWT (https://github.com/jwtk/jjwt#signature-algorithms-keys) |RS256| -|tokenAuthClaim| Specify which of the token's claims will be used as the authentication "principal" or "role". The default "sub" claim will be used if this is left blank || -|tokenAudienceClaim| The token audience "claim" name, e.g. "aud", that will be used to get the audience from token. If not set, audience will not be verified. || -|tokenAudience| The token audience stands for this broker. The `tokenAudienceClaim` field of a valid token must contain this value. || -|maxUnackedMessagesPerConsumer| Max number of unacknowledged messages allowed for a consumer on a shared subscription. The broker will stop sending messages to a consumer once this limit is reached, until the consumer starts acknowledging messages back. A value of 0 disables the unacked-message limit check and the consumer can receive messages without any restriction |50000| -|maxUnackedMessagesPerSubscription| Max number of unacknowledged messages allowed per shared subscription. The broker will stop dispatching messages to all consumers of the subscription once this limit is reached, until consumers start acknowledging messages back and the unacknowledged count drops to limit/2. 
A value of 0 disables the unacked-message limit check and the dispatcher can dispatch messages without any restriction |200000| -|subscriptionRedeliveryTrackerEnabled| Enable subscription message redelivery tracker |true| -|subscriptionExpirationTimeMinutes | How long after the last consumption before an inactive subscription is deleted.

    Setting this configuration to a value **greater than 0** deletes inactive subscriptions automatically.
    Setting this configuration to **0** does not delete inactive subscriptions automatically.

    Since this configuration takes effect on all topics, if there is even one topic whose subscriptions should not be deleted automatically, you need to set it to 0.
    Instead, you can set a subscription expiration time for each **namespace** using the [`pulsar-admin namespaces set-subscription-expiration-time options` command](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-subscription-expiration-time-em-). | 0 | -|maxConcurrentLookupRequest| Max number of concurrent lookup requests the broker allows, to throttle heavy incoming lookup traffic |50000| -|maxConcurrentTopicLoadRequest| Max number of concurrent topic loading requests the broker allows, to control the number of ZooKeeper operations |5000| -|authenticationEnabled| Enable authentication |false| -|authenticationProviders| Authentication provider name list, which is a comma-separated list of class names || -| authenticationRefreshCheckSeconds | Interval of time for checking for expired authentication credentials | 60 | -|authorizationEnabled| Enforce authorization |false| -|superUserRoles| Role names that are treated as "super-user", meaning they will be able to do all admin operations and publish/consume from all topics || -|brokerClientAuthenticationPlugin| Authentication settings of the broker itself. Used when the broker connects to other brokers, either in the same or other clusters || -|brokerClientAuthenticationParameters||| -|athenzDomainNames| Supported Athenz provider domain names (comma-separated) for authentication || -|exposePreciseBacklogInPrometheus| Enable exposing the precise backlog stats. Set to false to calculate from the published counter and consumed counter instead, which is more efficient but may be inaccurate. |false| -|schemaRegistryStorageClassName|The schema storage implementation used by this broker.|org.apache.pulsar.broker.service.schema.BookkeeperSchemaStorageFactory| -|isSchemaValidationEnforced|Enforce schema validation in the following case: if a producer without a schema attempts to produce to a topic with a schema, the producer fails to connect. Use this with care, since non-Java clients don't support schema; if this setting is enabled, non-Java clients fail to produce.|false| -| topicFencingTimeoutSeconds | If a topic remains fenced for a certain time period (in seconds), it is closed forcefully. If set to 0 or a negative number, the fenced topic is not closed. | 0 | -|offloadersDirectory|The directory for all the offloader implementations.|./offloaders| -|bookkeeperMetadataServiceUri| Metadata service URI that BookKeeper uses for loading the corresponding metadata driver and resolving its metadata service location. This value can be fetched using the `bookkeeper shell whatisinstanceid` command in a BookKeeper cluster. For example: zk+hierarchical://localhost:2181/ledgers. The metadata service uri list can also be semicolon separated values like below: zk+hierarchical://zk1:2181;zk2:2181;zk3:2181/ledgers || -|bookkeeperClientAuthenticationPlugin| Authentication plugin to use when connecting to bookies || -|bookkeeperClientAuthenticationParametersName| BookKeeper auth plugin implementation-specific parameter name and values || -|bookkeeperClientAuthenticationParameters||| -|bookkeeperClientNumWorkerThreads| Number of BookKeeper client worker threads. 
Default is Runtime.getRuntime().availableProcessors() || -|bookkeeperClientTimeoutInSeconds| Timeout for BK add / read operations |30| -|bookkeeperClientSpeculativeReadTimeoutInMillis| Speculative reads are initiated if a read request doesn't complete within a certain time. A value of 0 disables speculative reads |0| -|bookkeeperNumberOfChannelsPerBookie| Number of channels per bookie |16| -|bookkeeperClientHealthCheckEnabled| Enable bookies health check. Bookies that have more than the configured number of failures within the interval will be quarantined for some time. During this period, new ledgers won't be created on these bookies |true| -|bookkeeperClientHealthCheckIntervalSeconds||60| -|bookkeeperClientHealthCheckErrorThresholdPerInterval||5| -|bookkeeperClientHealthCheckQuarantineTimeInSeconds ||1800| -|bookkeeperClientRackawarePolicyEnabled| Enable rack-aware bookie selection policy. BK will choose bookies from different racks when forming a new bookie ensemble |true| -|bookkeeperClientRegionawarePolicyEnabled| Enable region-aware bookie selection policy. BK will choose bookies from different regions and racks when forming a new bookie ensemble. If enabled, the value of bookkeeperClientRackawarePolicyEnabled is ignored |false| -|bookkeeperClientMinNumRacksPerWriteQuorum| Minimum number of racks per write quorum. BK rack-aware bookie selection policy will try to get bookies from at least 'bookkeeperClientMinNumRacksPerWriteQuorum' racks for a write quorum. |2| -|bookkeeperClientEnforceMinNumRacksPerWriteQuorum| Enforces rack-aware bookie selection policy to pick bookies from 'bookkeeperClientMinNumRacksPerWriteQuorum' racks for a write quorum. If BK can't find enough bookies, it throws BKNotEnoughBookiesException instead of picking a random one. |false| -|bookkeeperClientReorderReadSequenceEnabled| Enable/disable reordering read sequence on reading entries. |false| -|bookkeeperClientIsolationGroups| Enable bookie isolation by specifying a list of bookie groups to choose from. Any bookie outside the specified groups will not be used by the broker || -|bookkeeperClientSecondaryIsolationGroups| Enable bookie secondary-isolation group if bookkeeperClientIsolationGroups doesn't have enough bookies available. || -|bookkeeperClientMinAvailableBookiesInIsolationGroups| Minimum bookies that should be available as part of bookkeeperClientIsolationGroups, else the broker will include bookkeeperClientSecondaryIsolationGroups bookies in the isolated list. || -|bookkeeperClientGetBookieInfoIntervalSeconds| Set the interval to periodically check bookie info |86400| -|bookkeeperClientGetBookieInfoRetryIntervalSeconds| Set the interval to retry a failed bookie info lookup |60| -|bookkeeperEnableStickyReads | Enable/disable having read operations for a ledger to be sticky to a single bookie. If this flag is enabled, the client will use one single bookie (by preference) to read all entries for a ledger. | true | -|managedLedgerDefaultEnsembleSize| Number of bookies to use when creating a ledger |2| -|managedLedgerDefaultWriteQuorum| Number of copies to store for each message |2| -|managedLedgerDefaultAckQuorum| Number of guaranteed copies (acks to wait before write is complete) |2| -|managedLedgerCacheSizeMB| Amount of memory to use for caching data payload in managed ledger. This memory is allocated from JVM direct memory and it's shared across all the topics running in the same broker. 
By default, uses 1/5th of available direct memory || -|managedLedgerCacheCopyEntries| Whether we should make a copy of the entry payloads when inserting in cache| false| -|managedLedgerCacheEvictionWatermark| Threshold to which the cache level is brought down when eviction is triggered |0.9| -|managedLedgerCacheEvictionFrequency| Configure the cache eviction frequency for the managed ledger cache (evictions/sec) | 100.0 | -|managedLedgerCacheEvictionTimeThresholdMillis| All entries that have stayed in cache for more than the configured time will be evicted | 1000 | -|managedLedgerCursorBackloggedThreshold| Configure the threshold (in number of entries) from where a cursor should be considered 'backlogged' and thus should be set as inactive. | 1000| -|managedLedgerDefaultMarkDeleteRateLimit| Rate limit the amount of writes per second generated by consumers acknowledging messages |1.0| -|managedLedgerMaxEntriesPerLedger| The max number of entries to append to a ledger before triggering a rollover. A ledger rollover is triggered after the min rollover time has passed and one of the following conditions is true:
    • The max rollover time has been reached
    • The max entries have been written to the ledger
    • The max ledger size has been written to the ledger
    |50000| -|managedLedgerMinLedgerRolloverTimeMinutes| Minimum time between ledger rollover for a topic |10| -|managedLedgerMaxLedgerRolloverTimeMinutes| Maximum time before forcing a ledger rollover for a topic |240| -|managedLedgerCursorMaxEntriesPerLedger| Max number of entries to append to a cursor ledger |50000| -|managedLedgerCursorRolloverTimeInSeconds| Max time before triggering a rollover on a cursor ledger |14400| -|managedLedgerMaxUnackedRangesToPersist| Max number of "acknowledgment holes" that are going to be persistently stored. When acknowledging out of order, a consumer will leave holes that are supposed to be quickly filled by acking all the messages. The information of which messages are acknowledged is persisted by compressing in "ranges" of messages that were acknowledged. After the max number of ranges is reached, the information will only be tracked in memory and messages will be redelivered in case of crashes. |1000| -|autoSkipNonRecoverableData| Skip reading non-recoverable/unreadable data ledgers under the managed ledger's list. It helps when data ledgers get corrupted in BookKeeper and the managed cursor is stuck at that ledger. |false| -|loadBalancerEnabled| Enable load balancer |true| -|loadBalancerPlacementStrategy| Strategy to assign a new bundle |weightedRandomSelection| -|loadBalancerReportUpdateThresholdPercentage| Percentage of change to trigger load report update |10| -|loadBalancerReportUpdateMaxIntervalMinutes| Maximum interval to update load report |15| -|loadBalancerHostUsageCheckIntervalMinutes| Frequency of report to collect |1| -|loadBalancerSheddingIntervalMinutes| Load shedding interval. Broker periodically checks whether some traffic should be offloaded from some over-loaded broker to other under-loaded brokers |30| -|loadBalancerSheddingGracePeriodMinutes| Prevent the same topics from being shed and moved to another broker more than once within this timeframe |30| -|loadBalancerBrokerMaxTopics| Usage threshold to allocate max number of topics to broker |50000| -|loadBalancerBrokerUnderloadedThresholdPercentage| Usage threshold to determine a broker as under-loaded |1| -|loadBalancerBrokerOverloadedThresholdPercentage| Usage threshold to determine a broker as over-loaded |85| -|loadBalancerResourceQuotaUpdateIntervalMinutes| Interval to update namespace bundle resource quota |15| -|loadBalancerBrokerComfortLoadLevelPercentage| Usage threshold to determine a broker is having just the right level of load |65| -|loadBalancerAutoBundleSplitEnabled| Enable/disable namespace bundle auto split |false| -|loadBalancerNamespaceBundleMaxTopics| Maximum topics in a bundle, otherwise bundle split will be triggered |1000| -|loadBalancerNamespaceBundleMaxSessions| Maximum sessions (producers + consumers) in a bundle, otherwise bundle split will be triggered |1000| -|loadBalancerNamespaceBundleMaxMsgRate| Maximum msgRate (in + out) in a bundle, otherwise bundle split will be triggered |1000| -|loadBalancerNamespaceBundleMaxBandwidthMbytes| Maximum bandwidth (in + out) in a bundle, otherwise bundle split will be triggered |100| -|loadBalancerNamespaceMaximumBundles| Maximum number of bundles in a namespace |128| -|replicationMetricsEnabled| Enable replication metrics |true| -|replicationConnectionsPerBroker| Max number of connections to open for each broker in a remote cluster. More connections host-to-host lead to better throughput over high-latency links. 
|16| -|replicationProducerQueueSize| Replicator producer queue size |1000| -|replicatorPrefix| Replicator prefix used for replicator producer name and cursor name |pulsar.repl| -|replicationTlsEnabled| Enable TLS when talking with other clusters to replicate messages |false| -|brokerServicePurgeInactiveFrequencyInSeconds|Deprecated. Use `brokerDeleteInactiveTopicsFrequencySeconds`.|60| -|transactionCoordinatorEnabled|Whether to enable transaction coordinator in broker.|true| -|transactionMetadataStoreProviderClassName| |org.apache.pulsar.transaction.coordinator.impl.InMemTransactionMetadataStoreProvider| -|defaultRetentionTimeInMinutes| Default message retention time. 0 means retention is disabled. -1 means data is not removed by time quota |0| -|defaultRetentionSizeInMB| Default retention size. 0 means retention is disabled. -1 means data is not removed by size quota |0| -|keepAliveIntervalSeconds| How often to check whether the connections are still alive |30| -|bootstrapNamespaces| The bootstrap namespaces. | N/A | -|loadManagerClassName| Name of load manager to use |org.apache.pulsar.broker.loadbalance.impl.SimpleLoadManagerImpl| -|supportedNamespaceBundleSplitAlgorithms| Supported algorithm names for namespace bundle split |[range_equally_divide,topic_count_equally_divide]| -|defaultNamespaceBundleSplitAlgorithm| Default algorithm name for namespace bundle split |range_equally_divide| -|managedLedgerOffloadDriver| Driver to use to offload old data to long-term storage (possible values: S3, aws-s3, google-cloud-storage). When using google-cloud-storage, make sure both Google Cloud Storage and Google Cloud Storage JSON API are enabled for the project (check from Developers Console -> Api&auth -> APIs). || -|managedLedgerOffloadMaxThreads| Maximum number of thread pool threads for ledger offloading |2| -|managedLedgerOffloadPrefetchRounds|The maximum prefetch rounds for ledger reading for offloading.|1| -|managedLedgerUnackedRangesOpenCacheSetEnabled| Use Open Range-Set to cache unacknowledged messages |true| -|managedLedgerOffloadDeletionLagMs|Delay between a ledger being successfully offloaded to long term storage and the ledger being deleted from bookkeeper | 14400000| -|managedLedgerOffloadAutoTriggerSizeThresholdBytes|The number of bytes before triggering automatic offload to long term storage |-1 (disabled)| -|s3ManagedLedgerOffloadRegion| For Amazon S3 ledger offload, AWS region || -|s3ManagedLedgerOffloadBucket| For Amazon S3 ledger offload, Bucket to place offloaded ledger into || -|s3ManagedLedgerOffloadServiceEndpoint| For Amazon S3 ledger offload, Alternative endpoint to connect to (useful for testing) || -|s3ManagedLedgerOffloadMaxBlockSizeInBytes| For Amazon S3 ledger offload, Max block size in bytes. (64MB by default, 5MB minimum) |67108864| -|s3ManagedLedgerOffloadReadBufferSizeInBytes| For Amazon S3 ledger offload, Read buffer size in bytes (1MB by default) |1048576| -|gcsManagedLedgerOffloadRegion|For Google Cloud Storage ledger offload, region where offload bucket is located. Go to this page for more details: https://cloud.google.com/storage/docs/bucket-locations .|N/A| -|gcsManagedLedgerOffloadBucket|For Google Cloud Storage ledger offload, Bucket to place offloaded ledger into.|N/A| -|gcsManagedLedgerOffloadMaxBlockSizeInBytes|For Google Cloud Storage ledger offload, the maximum block size in bytes. 
(64MB by default, 5MB minimum)|67108864| -|gcsManagedLedgerOffloadReadBufferSizeInBytes|For Google Cloud Storage ledger offload, Read buffer size in bytes. (1MB by default)|1048576| -|gcsManagedLedgerOffloadServiceAccountKeyFile|For Google Cloud Storage, path to json file containing service account credentials. For more details, see the "Service Accounts" section of https://support.google.com/googleapi/answer/6158849 .|N/A| -|fileSystemProfilePath|For File System Storage, file system profile path.|../conf/filesystem_offload_core_site.xml| -|fileSystemURI|For File System Storage, file system uri.|N/A| -|s3ManagedLedgerOffloadRole| For Amazon S3 ledger offload, provide a role to assume before writing to s3 || -|s3ManagedLedgerOffloadRoleSessionName| For Amazon S3 ledger offload, provide a role session name when using a role |pulsar-s3-offload| -| acknowledgmentAtBatchIndexLevelEnabled | Enable or disable the batch index acknowledgement. | false | -|enableReplicatedSubscriptions|Whether to enable tracking of replicated subscriptions state across clusters.|true| -|replicatedSubscriptionsSnapshotFrequencyMillis|The frequency of snapshots for replicated subscriptions tracking.|1000| -|replicatedSubscriptionsSnapshotTimeoutSeconds|The timeout for building a consistent snapshot for tracking replicated subscriptions state.|30| -|replicatedSubscriptionsSnapshotMaxCachedPerSubscription|The maximum number of snapshots to be cached per subscription.|10| -|maxMessagePublishBufferSizeInMB|The maximum memory size for a broker to handle messages that are sent by producers. If the processing message size exceeds this value, the broker stops reading data from the connection. The processing messages refer to the messages that are sent to the broker but the broker has not sent a response to the client. Usually the messages are waiting to be written to bookies. It is shared across all the topics running in the same broker. The value `-1` disables the memory limitation. By default, it is 50% of direct memory.|N/A| -|messagePublishBufferCheckIntervalInMillis|Interval between checks to see if message publish buffer size exceeds the maximum. Use `0` or a negative number to disable the max publish buffer limiting.|100| -|retentionCheckIntervalInSeconds|Interval between checks to see whether consumed ledgers need to be trimmed. Use 0 or a negative number to disable the check.|120| -| maxMessageSize | Set the maximum size of a message. | 5242880 | -| preciseTopicPublishRateLimiterEnable | Enable precise topic publish rate limiting. | false | -| lazyCursorRecovery | Whether to recover cursors lazily when trying to recover a managed ledger backing a persistent topic. It can improve write availability of topics. The caveat is that when the recovered ledger is ready for writes, it is not guaranteed that all old consumers' last mark-delete positions (ack positions) can be recovered. Users can make this trade-off, or add custom logic in the application to checkpoint consumer state.| false | -|haProxyProtocolEnabled | Enable or disable the [HAProxy](http://www.haproxy.org/) protocol. |false| -| maxTopicsPerNamespace | The maximum number of persistent topics that can be created in the namespace. When the number of topics reaches this threshold, the broker rejects the request of creating a new topic, including the auto-created topics by the producer or consumer, until the number of topics decreases. The default value 0 disables the check. 
| 0 | -|subscriptionTypesEnabled| The subscription types to enable, from exclusive, shared, failover, and key_shared. All types are enabled by default. | Exclusive, Shared, Failover, Key_Shared | -| managedLedgerInfoCompressionType | Compression type of managed ledger information.

    Available options are `NONE`, `LZ4`, `ZLIB`, `ZSTD`, and `SNAPPY`.

    If this value is `NONE` or invalid, the `managedLedgerInfo` is not compressed.

    **Note** that after enabling this configuration, if you want to downgrade a broker, you need to change the value to `NONE` and make sure all ledger metadata is saved without compression. | None | -| additionalServlets | Additional servlet name.

    If you have multiple additional servlets, separate them by commas.

    For example, additionalServlet_1, additionalServlet_2 | N/A | -| additionalServletDirectory | Location of the broker additional servlet NAR directory | ./brokerAdditionalServlet | -|narExtractionDirectory | The extraction directory of the NAR package.
    Available for Protocol Handler, Additional Servlets, Offloaders, Broker Interceptor. | System.getProperty("java.io.tmpdir") | -
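As a quick illustration of how several of the broker parameters above combine, the following is a minimal sketch of a `conf/broker.conf` fragment that enforces a backlog quota and cleans up inactive topics. The parameter names come from the table above; the values shown are illustrative examples, not recommendations:

```properties
# Enforce backlog quotas; evict from the slowest consumer's backlog when the quota is hit
backlogQuotaCheckEnabled=true
backlogQuotaCheckIntervalInSeconds=60
backlogQuotaDefaultLimitBytes=10737418240
backlogQuotaDefaultRetentionPolicy=consumer_backlog_eviction

# Delete topics that have had no subscriptions or active producers for 10 minutes
brokerDeleteInactiveTopicsEnabled=true
brokerDeleteInactiveTopicsFrequencySeconds=60
brokerDeleteInactiveTopicsMode=delete_when_no_subscriptions
brokerDeleteInactiveTopicsMaxInactiveDurationSeconds=600
```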
-## Client - -You can use the [`pulsar-client`](reference-cli-tools.md#pulsar-client) CLI tool to publish messages to and consume messages from Pulsar topics. You can use this tool in place of a client library. - -|Name|Description|Default| -|---|---|---| -|webServiceUrl| The web URL for the cluster. |http://localhost:8080/| -|brokerServiceUrl| The Pulsar protocol URL for the cluster. |pulsar://localhost:6650/| -|authPlugin| The authentication plugin. || -|authParams| The authentication parameters for the cluster, as a comma-separated string. || -|useTls| Whether to enforce TLS authentication in the cluster. |false| -| tlsAllowInsecureConnection | Allow TLS connections to servers whose certificate cannot be verified to have been signed by a trusted certificate authority. | false | -| tlsEnableHostnameVerification | Whether the server hostname must match the common name of the certificate that is used by the server. | false | -|tlsTrustCertsFilePath||| -| useKeyStoreTls | Enable TLS with KeyStore type configuration in the client. | false | -| tlsTrustStoreType | TLS TrustStore type configuration: JKS, PKCS12. |JKS| -| tlsTrustStore | TLS TrustStore path. | | -| tlsTrustStorePassword | TLS TrustStore password. | |
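For example, a `conf/client.conf` sketch that points the CLI tool at a TLS-enabled cluster might look as follows; the service URLs, trust store path, and password are placeholders to substitute for your own cluster, not defaults:

```properties
# Service URLs for the TLS listeners (placeholders; adjust to your cluster)
webServiceUrl=https://localhost:8443/
brokerServiceUrl=pulsar+ssl://localhost:6651/

# TLS with a KeyStore-type trust store
useTls=true
useKeyStoreTls=true
tlsTrustStoreType=JKS
tlsTrustStore=/path/to/client.truststore.jks
tlsTrustStorePassword=changeit
tlsEnableHostnameVerification=true
```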
-## Log4j - -You can set the log level and configuration in the [log4j2.yaml](https://github.com/apache/pulsar/blob/d557e0aa286866363bc6261dec87790c055db1b0/conf/log4j2.yaml#L155) file. The following logging configuration parameters are available. - -|Name|Default| -|---|---| -|pulsar.root.logger| WARN,CONSOLE| -|pulsar.log.dir| logs| -|pulsar.log.file| pulsar.log| -|log4j.rootLogger| ${pulsar.root.logger}| -|log4j.appender.CONSOLE| org.apache.log4j.ConsoleAppender| -|log4j.appender.CONSOLE.Threshold| DEBUG| -|log4j.appender.CONSOLE.layout| org.apache.log4j.PatternLayout| -|log4j.appender.CONSOLE.layout.ConversionPattern| %d{ISO8601} - %-5p - [%t:%C{1}@%L] - %m%n| -|log4j.appender.ROLLINGFILE| org.apache.log4j.DailyRollingFileAppender| -|log4j.appender.ROLLINGFILE.Threshold| DEBUG| -|log4j.appender.ROLLINGFILE.File| ${pulsar.log.dir}/${pulsar.log.file}| -|log4j.appender.ROLLINGFILE.layout| org.apache.log4j.PatternLayout| -|log4j.appender.ROLLINGFILE.layout.ConversionPattern| %d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n| -|log4j.appender.TRACEFILE| org.apache.log4j.FileAppender| -|log4j.appender.TRACEFILE.Threshold| TRACE| -|log4j.appender.TRACEFILE.File| pulsar-trace.log| -|log4j.appender.TRACEFILE.layout| org.apache.log4j.PatternLayout| -|log4j.appender.TRACEFILE.layout.ConversionPattern| %d{ISO8601} - %-5p [%t:%C{1}@%L][%x] - %m%n| - -> Note: 'topic' in log4j2.appender is configurable. -> - If you want to append all logs to a single topic, set the same topic name. -> - If you want to append logs to different topics, you can set different topic names.
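For instance, raising the root logger to INFO and writing to both the console and the daily rolling file only involves the parameters listed above. A sketch, with illustrative values:

```properties
# Route the root logger to both appenders at INFO level
pulsar.root.logger=INFO,CONSOLE,ROLLINGFILE
pulsar.log.dir=logs
pulsar.log.file=pulsar.log

# The rolling file appender picks up the directory and file name set above
log4j.appender.ROLLINGFILE.File=${pulsar.log.dir}/${pulsar.log.file}
log4j.appender.ROLLINGFILE.Threshold=INFO
```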
-## Log4j shell - -|Name|Default| -|---|---| -|bookkeeper.root.logger| ERROR,CONSOLE| -|log4j.rootLogger| ${bookkeeper.root.logger}| -|log4j.appender.CONSOLE| org.apache.log4j.ConsoleAppender| -|log4j.appender.CONSOLE.Threshold| DEBUG| -|log4j.appender.CONSOLE.layout| org.apache.log4j.PatternLayout| -|log4j.appender.CONSOLE.layout.ConversionPattern| %d{ABSOLUTE} %-5p %m%n| -|log4j.logger.org.apache.zookeeper| ERROR| -|log4j.logger.org.apache.bookkeeper| ERROR| -|log4j.logger.org.apache.bookkeeper.bookie.BookieShell| INFO| - -## Standalone - -|Name|Description|Default| -|---|---|---| -|authenticateOriginalAuthData| If this flag is set to `true`, the broker authenticates the original Auth data; else it just accepts the originalPrincipal and authorizes it (if required). |false| -|zookeeperServers| The quorum connection string for local ZooKeeper || -|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300| -|configurationStoreServers| Configuration store connection string (as a comma-separated list) || -|brokerServicePort| The port on which the standalone broker listens for connections |6650| -|webServicePort| The port used by the standalone broker for HTTP requests |8080| -|bindAddress| The hostname or IP address on which the standalone service binds |0.0.0.0| -|bindAddresses| Additional hostname or IP addresses the service binds on: `listener_name:scheme://host:port,...`. || -|advertisedAddress| The hostname or IP address that the standalone service advertises to the outside world. If not set, the value of `InetAddress.getLocalHost().getHostName()` is used. || -| numAcceptorThreads | Number of threads to use for Netty Acceptor | 1 | -| numIOThreads | Number of threads to use for Netty IO | 2 * Runtime.getRuntime().availableProcessors() | -| numHttpServerThreads | Number of threads to use for HTTP requests processing | 2 * Runtime.getRuntime().availableProcessors()| -|isRunningStandalone|This flag controls features that are meant to be used when running in standalone mode.|N/A| -|clusterName| The name of the cluster that this broker belongs to. |standalone| -| failureDomainsEnabled | Enable the cluster's failure domain, which can distribute brokers into logical regions. | false | -|zooKeeperSessionTimeoutMillis| The ZooKeeper session timeout, in milliseconds. |30000| -|zooKeeperOperationTimeoutSeconds|ZooKeeper operation timeout in seconds.|30| -|brokerShutdownTimeoutMs| The time to wait for graceful broker shutdown. After this time elapses, the process will be killed. |60000| -|skipBrokerShutdownOnOOM| Flag to skip broker shutdown when the broker encounters an out-of-memory error. |false| -|backlogQuotaCheckEnabled| Enable the backlog quota check, which enforces a specified action when the quota is reached. |true| -|backlogQuotaCheckIntervalInSeconds| How often to check for topics that have reached the backlog quota. |60| -|backlogQuotaDefaultLimitBytes| The default per-topic backlog quota limit. A value less than 0 means no limit. By default, it is -1. |-1| -|ttlDurationDefaultInSeconds|The default Time to Live (TTL) for namespaces if the TTL is not configured at namespace policies. When the value is set to `0`, TTL is disabled. By default, TTL is disabled. |0| -|brokerDeleteInactiveTopicsEnabled| Enable the deletion of inactive topics. If topics are not consumed for a while, these inactive topics might be cleaned up. Deleting inactive topics is enabled by default. The default period is 1 minute. |true| -|brokerDeleteInactiveTopicsFrequencySeconds| How often to check for inactive topics, in seconds. |60| -| maxPendingPublishRequestsPerConnection | Maximum pending publish requests per connection, to avoid keeping a large number of pending requests in memory | 1000| -|messageExpiryCheckIntervalInMinutes| How often to proactively check and purge expired messages. |5| -|activeConsumerFailoverDelayTimeMillis| How long to delay rewinding cursor and dispatching messages when active consumer is changed. |1000| -| subscriptionExpirationTimeMinutes | How long after the last consumption before an inactive subscription is deleted. When it is set to 0, inactive subscriptions are not deleted automatically | 0 | -| subscriptionRedeliveryTrackerEnabled | Enable subscription message redelivery tracker to send redelivery count to consumer. | true | -|subscriptionKeySharedEnable|Whether to enable the Key_Shared subscription.|true| -| subscriptionKeySharedUseConsistentHashing | In the Key_Shared subscription mode, with default AUTO_SPLIT mode, use splitting ranges or consistent hashing to reassign keys to new consumers. | false | -| subscriptionKeySharedConsistentHashingReplicaPoints | In the Key_Shared subscription mode, the number of points in the consistent-hashing ring. The greater the number, the more equal the assignment of keys to consumers. | 100 | -| subscriptionExpiryCheckIntervalInMinutes | How frequently to proactively check and purge expired subscriptions |5 | -| brokerDeduplicationEnabled | Set the default behavior for message deduplication in the broker. This can be overridden per-namespace. If it is enabled, the broker rejects messages that are already stored in the topic. 
| false | -| brokerDeduplicationMaxNumberOfProducers | Maximum number of producers for which information is persisted for deduplication purposes | 10000 | -| brokerDeduplicationEntriesInterval | Number of entries after which a deduplication information snapshot is taken. A greater interval leads to fewer snapshots being taken, though it would increase the topic recovery time, when the entries published after the snapshot need to be replayed. | 1000 | -| brokerDeduplicationProducerInactivityTimeoutMinutes | The time of inactivity (in minutes) after which the broker discards deduplication information related to a disconnected producer. | 360 | -| defaultNumberOfNamespaceBundles | When a namespace is created without specifying the number of bundles, this value is used as the default setting.| 4 | -|clientLibraryVersionCheckEnabled| Enable checks for minimum allowed client library version. |false| -|clientLibraryVersionCheckAllowUnversioned| Allow client libraries with no version information |true| -|statusFilePath| The path for the file used to determine the rotation status for the broker when responding to service discovery health checks |/usr/local/apache/htdocs| -|maxUnackedMessagesPerConsumer| The maximum number of unacknowledged messages allowed to be received by consumers on a shared subscription. The broker will stop sending messages to a consumer once this limit is reached, until the consumer begins acknowledging messages. A value of 0 disables the unacked message limit check and thus allows consumers to receive messages without any restrictions. |50000| -|maxUnackedMessagesPerSubscription| The same as above, except per subscription rather than per consumer. |200000| -| maxUnackedMessagesPerBroker | Maximum number of unacknowledged messages allowed per broker. Once this limit is reached, the broker stops dispatching messages to all shared subscriptions that have a higher number of unacknowledged messages, until subscriptions start acknowledging messages back and the unacknowledged message count drops to limit/2. When the value is set to 0, the unacknowledged message limit check is disabled and the broker does not block dispatchers. | 0 | -| maxUnackedMessagesPerSubscriptionOnBrokerBlocked | Once the broker reaches the maxUnackedMessagesPerBroker limit, it blocks subscriptions which have more unacknowledged messages than this percentage limit, and the subscription does not receive any new messages until that subscription acknowledges messages back. | 0.16 | -| unblockStuckSubscriptionEnabled|Broker periodically checks whether a subscription is stuck and unblocks it if this flag is enabled.|false| -|zookeeperSessionExpiredPolicy|There are two policies for when a ZooKeeper session expiry happens, "shutdown" and "reconnect". If it is set to the "shutdown" policy, the broker is shut down when the ZooKeeper session expires. If it is set to the "reconnect" policy, the broker tries to reconnect to the ZooKeeper server and re-register metadata to ZooKeeper. Note: the "reconnect" policy is an experimental feature.|shutdown| -| topicPublisherThrottlingTickTimeMillis | Tick time to schedule the task that checks topic publish rate limiting across all topics. A lower value can improve accuracy while throttling publish, but it uses more CPU to perform frequent checks. (Disable publish throttling with value 0) | 10| -| brokerPublisherThrottlingTickTimeMillis | Tick time to schedule the task that checks broker publish rate limiting across all topics. 
A lower value can improve accuracy while throttling publish, but it uses more CPU to perform frequent checks. When the value is set to 0, publish throttling is disabled. |50 | -| brokerPublisherThrottlingMaxMessageRate | Maximum rate (in 1 second) of messages allowed to publish for a broker if the message rate limiting is enabled. When the value is set to 0, message rate limiting is disabled. | 0| -| brokerPublisherThrottlingMaxByteRate | Maximum rate (in 1 second) of bytes allowed to publish for a broker if the byte rate limiting is enabled. When the value is set to 0, the byte rate limiting is disabled. | 0 | -|subscribeThrottlingRatePerConsumer|Too many subscribe requests from a consumer can cause the broker to rewind consumer cursors and load data from bookies, hence causing high network bandwidth usage. When a positive value is set, the broker throttles the subscribe requests for one consumer. Otherwise, the throttling is disabled. By default, throttling is disabled.|0| -|subscribeRatePeriodPerConsumerInSecond|Rate period for {subscribeThrottlingRatePerConsumer}. By default, it is 30s.|30| -| dispatchThrottlingRatePerTopicInMsg | Default messages (per second) dispatch throttling-limit for every topic. When the value is set to 0, default message dispatch throttling-limit is disabled. |0 | -| dispatchThrottlingRatePerTopicInByte | Default bytes (per second) dispatch throttling-limit for every topic. When the value is set to 0, default byte dispatch throttling-limit is disabled. | 0| -| dispatchThrottlingRateRelativeToPublishRate | Enable dispatch rate-limiting relative to publish rate. | false | -|dispatchThrottlingRatePerSubscriptionInMsg|The default number of message dispatching throttling-limit for a subscription. The value of 0 disables message dispatch-throttling.|0| -|dispatchThrottlingRatePerSubscriptionInByte|The default number of message-bytes dispatching throttling-limit for a subscription. The value of 0 disables message-byte dispatch-throttling.|0| -| dispatchThrottlingOnNonBacklogConsumerEnabled | Enable dispatch-throttling for both caught up consumers as well as consumers who have backlogs. | true | -|dispatcherMaxReadBatchSize|The maximum number of entries to read from BookKeeper. By default, it is 100 entries.|100| -|dispatcherMaxReadSizeBytes|The maximum size in bytes of entries to read from BookKeeper. By default, it is 5MB.|5242880| -|dispatcherMinReadBatchSize|The minimum number of entries to read from BookKeeper. By default, it is 1 entry. When an error occurs while reading entries from BookKeeper, the broker will back off the batch size to this minimum number.|1| -|dispatcherMaxRoundRobinBatchSize|The maximum number of entries to dispatch for a shared subscription. By default, it is 20 entries.|20| -| preciseDispatcherFlowControl | Precise dispatcher flow control according to the history message number of each entry. | false | -| streamingDispatch | Whether to use the streaming read dispatcher. It can be useful when there is a huge backlog to drain; instead of reading with micro batches, we can streamline the read from BookKeeper to make the most of consumer capacity until we hit the BookKeeper read limit or the consumer process limit, then use consumer flow control to tune the speed. This feature is currently in preview and can be changed in a subsequent release. | false | -| maxConcurrentLookupRequest | Maximum number of concurrent lookup requests that the broker allows, to throttle heavy incoming lookup traffic. 
| 50000 | -| maxConcurrentTopicLoadRequest | Maximum number of concurrent topic loading requests that the broker allows, to control the number of ZooKeeper operations. | 5000 | -| maxConcurrentNonPersistentMessagePerConnection | Maximum number of concurrent non-persistent messages that can be processed per connection. | 1000 | -| numWorkerThreadsForNonPersistentTopic | Number of worker threads to serve non-persistent topics. | 8 | -| enablePersistentTopics | Enable broker to load persistent topics. | true | -| enableNonPersistentTopics | Enable broker to load non-persistent topics. | true | -| maxSubscriptionsPerTopic | Maximum number of subscriptions allowed to subscribe to a topic. Once this limit is reached, the broker rejects new subscriptions until the number of subscriptions decreases. When the value is set to 0, the limit check is disabled. | 0 | -| maxProducersPerTopic | Maximum number of producers allowed to connect to a topic. Once this limit is reached, the broker rejects new producers until the number of connected producers decreases. When the value is set to 0, the limit check is disabled. | 0 | -| maxConsumersPerTopic | Maximum number of consumers allowed to connect to a topic. Once this limit is reached, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 | -| maxConsumersPerSubscription | Maximum number of consumers allowed to connect to a subscription. Once this limit is reached, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 | -| maxNumPartitionsPerPartitionedTopic | Maximum number of partitions per partitioned topic. When the value is set to a negative number or is set to 0, the check is disabled. | 0 | -| tlsCertRefreshCheckDurationSec | TLS certificate refresh duration in seconds. When the value is set to 0, check the TLS certificate on every new connection. | 300 | -| tlsCertificateFilePath | Path for the TLS certificate file. | | -| tlsKeyFilePath | Path for the TLS private key file. | | -| tlsTrustCertsFilePath | Path for the trusted TLS certificate file.| | -| tlsAllowInsecureConnection | Accept untrusted TLS certificate from the client. If it is set to true, a client with a certificate which cannot be verified with the 'tlsTrustCertsFilePath' certificate is allowed to connect to the server, though the certificate is not used for client authentication. | false | -| tlsProtocols | Specify the TLS protocols the broker uses to negotiate during TLS handshake. | | -| tlsCiphers | Specify the TLS ciphers the broker uses to negotiate during TLS handshake. | | -| tlsRequireTrustedClientCertOnConnect | Trusted client certificates are required to connect over TLS. Reject the connection if the client certificate is not trusted. In effect, this requires that all connecting clients perform TLS client authentication. | false | -| tlsEnabledWithKeyStore | Enable TLS with KeyStore type configuration in broker. | false | -| tlsProvider | TLS Provider for KeyStore type. | | -| tlsKeyStoreType | TLS KeyStore type configuration in the broker.
  - JKS
  - PKCS12
  |JKS| -| tlsKeyStore | TLS KeyStore path in the broker. | | -| tlsKeyStorePassword | TLS KeyStore password for the broker. | | -| tlsTrustStoreType | TLS TrustStore type configuration in the broker.
  - JKS
  - PKCS12
  |JKS| -| tlsTrustStore | TLS TrustStore path in the broker. | | -| tlsTrustStorePassword | TLS TrustStore password for the broker. | | -| brokerClientTlsEnabledWithKeyStore | Configure whether the internal client uses the KeyStore type to authenticate with Pulsar brokers. | false | -| brokerClientSslProvider | The TLS Provider used by the internal client to authenticate with other Pulsar brokers. | | -| brokerClientTlsTrustStoreType | TLS TrustStore type configuration for the internal client to authenticate with Pulsar brokers.
  - JKS
  - PKCS12
  | JKS | -| brokerClientTlsTrustStore | TLS TrustStore path for the internal client to authenticate with Pulsar brokers. | | -| brokerClientTlsTrustStorePassword | TLS TrustStore password for the internal client to authenticate with Pulsar brokers. | | -| brokerClientTlsCiphers | Specify the TLS ciphers that the internal client uses to negotiate during TLS handshake. | | -| brokerClientTlsProtocols | Specify the TLS protocols that the broker uses to negotiate during TLS handshake. | | -| systemTopicEnabled | Enable/Disable system topics. | false | -| topicLevelPoliciesEnabled | Enable or disable topic level policies. Topic level policies depend on the system topic. Please enable the system topic first. | false | -| topicFencingTimeoutSeconds | If a topic remains fenced for a certain time period (in seconds), it is closed forcefully. If set to 0 or a negative number, the fenced topic is not closed. | 0 | -| proxyRoles | Role names that are treated as "proxy roles". If the broker sees a request with role as proxyRoles, it demands to see a valid original principal. | | -|authenticationEnabled| Enable authentication for the broker. |false| -|authenticationProviders| A comma-separated list of class names for authentication providers. |false| -|authorizationEnabled| Enforce authorization in brokers. |false| -| authorizationProvider | Authorization provider fully qualified class-name. | org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider | -| authorizationAllowWildcardsMatching | Allow wildcard matching in authorization. Wildcard matching is applicable only when the wildcard-character (*) is present at the **first** or **last** position. | false | -|superUserRoles| Role names that are treated as "superusers." Superusers are authorized to perform all admin tasks. | | -|brokerClientAuthenticationPlugin| The authentication settings of the broker itself. Used when the broker connects to other brokers either in the same cluster or from other clusters. | | -|brokerClientAuthenticationParameters| The parameters that go along with the plugin specified using brokerClientAuthenticationPlugin. | | -|athenzDomainNames| Supported Athenz authentication provider domain names as a comma-separated list. | | -| anonymousUserRole | When this parameter is not empty, unauthenticated users act as the anonymousUserRole. | | -|tokenSecretKey| Configure the secret key to be used to validate auth tokens. The key can be specified like: `tokenSecretKey=data:;base64,xxxxxxxxx` or `tokenSecretKey=file:///my/secret.key`. Note: key file must be DER-encoded.|| -|tokenPublicKey| Configure the public key to be used to validate auth tokens. The key can be specified like: `tokenPublicKey=data:;base64,xxxxxxxxx` or `tokenPublicKey=file:///my/secret.key`. Note: key file must be DER-encoded.|| -|tokenAuthClaim| Specify the token claim that will be used as the authentication "principal" or "role". The "subject" field will be used if this is left blank || -|tokenAudienceClaim| The token audience "claim" name, e.g. "aud". It is used to get the audience from token. If it is not set, the audience is not verified. || -| tokenAudience | The token audience stands for this broker. The `tokenAudienceClaim` field of a valid token must contain this parameter.| | -|saslJaasClientAllowedIds|A regexp that limits the range of possible IDs that can connect to the broker using SASL. 
By default, it is set to `SaslConstants.JAAS_CLIENT_ALLOWED_IDS_DEFAULT`, which is ".*pulsar.*", so only clients whose id contains 'pulsar' are allowed to connect.|N/A| -|saslJaasBrokerSectionName|Service Principal, for login context name. By default, it is set to `SaslConstants.JAAS_DEFAULT_BROKER_SECTION_NAME`, which is "Broker".|N/A| -|httpMaxRequestSize|If the value is larger than 0, it rejects all HTTP requests with bodies larger than the configured limit.|-1| -|exposePreciseBacklogInPrometheus| Enable exposing the precise backlog stats. Set to false to calculate from the published counter and consumed counter instead, which is more efficient but may be inaccurate. |false| -|bookkeeperMetadataServiceUri|Metadata service URI that BookKeeper uses for loading the corresponding metadata driver and resolving its metadata service location. This value can be fetched using the `bookkeeper shell whatisinstanceid` command in a BookKeeper cluster. For example: `zk+hierarchical://localhost:2181/ledgers`. The metadata service uri list can also be semicolon separated values like: `zk+hierarchical://zk1:2181;zk2:2181;zk3:2181/ledgers`.|N/A| -|bookkeeperClientAuthenticationPlugin| Authentication plugin to be used when connecting to bookies (BookKeeper servers). || -|bookkeeperClientAuthenticationParametersName| BookKeeper authentication plugin implementation parameters and values. || -|bookkeeperClientAuthenticationParameters| Parameters associated with the bookkeeperClientAuthenticationParametersName || -|bookkeeperClientNumWorkerThreads| Number of BookKeeper client worker threads. Default is Runtime.getRuntime().availableProcessors() || -|bookkeeperClientTimeoutInSeconds| Timeout for BookKeeper add and read operations. |30| -|bookkeeperClientSpeculativeReadTimeoutInMillis| Speculative reads are initiated if a read request doesn't complete within a certain time. A value of 0 disables speculative reads. |0| -|bookkeeperUseV2WireProtocol|Use the older BookKeeper wire protocol with bookies.|true| -|bookkeeperClientHealthCheckEnabled| Enable bookie health checks. |true| -|bookkeeperClientHealthCheckIntervalSeconds| The time interval, in seconds, at which health checks are performed. New ledgers are not created during health checks. |60| -|bookkeeperClientHealthCheckErrorThresholdPerInterval| Error threshold for health checks. |5| -|bookkeeperClientHealthCheckQuarantineTimeInSeconds| How long to quarantine bookies that have more than the allowed number of failures within the time interval specified by bookkeeperClientHealthCheckIntervalSeconds |1800| -|bookkeeperClientGetBookieInfoIntervalSeconds|Specify options for the GetBookieInfo check. This setting helps ensure that the list of bookies on the brokers is up to date.|86400| -|bookkeeperClientGetBookieInfoRetryIntervalSeconds|Specify options for the GetBookieInfo check. This setting helps ensure that the list of bookies on the brokers is up to date.|60| -|bookkeeperClientRackawarePolicyEnabled| |true| -|bookkeeperClientRegionawarePolicyEnabled| |false| -|bookkeeperClientMinNumRacksPerWriteQuorum| |2| -|bookkeeperClientEnforceMinNumRacksPerWriteQuorum| |false| -|bookkeeperClientReorderReadSequenceEnabled| |false| -|bookkeeperClientIsolationGroups||| -|bookkeeperClientSecondaryIsolationGroups| Enable bookie secondary-isolation group if bookkeeperClientIsolationGroups doesn't have enough bookies available. 
|| -|bookkeeperClientMinAvailableBookiesInIsolationGroups| Minimum bookies that should be available as part of bookkeeperClientIsolationGroups, else the broker will include bookkeeperClientSecondaryIsolationGroups bookies in the isolated list. || -| bookkeeperTLSProviderFactoryClass | Set the client security provider factory class name. | org.apache.bookkeeper.tls.TLSContextFactory | -| bookkeeperTLSClientAuthentication | Enable TLS authentication with bookie. | false | -| bookkeeperTLSKeyFileType | Supported type: PEM, JKS, PKCS12. | PEM | -| bookkeeperTLSTrustCertTypes | Supported type: PEM, JKS, PKCS12. | PEM | -| bookkeeperTLSKeyStorePasswordPath | Path to file containing keystore password, if the client keystore is password protected. | | -| bookkeeperTLSTrustStorePasswordPath | Path to file containing truststore password, if the client truststore is password protected. | | -| bookkeeperTLSKeyFilePath | Path for the TLS private key file. | | -| bookkeeperTLSCertificateFilePath | Path for the TLS certificate file. | | -| bookkeeperTLSTrustCertsFilePath | Path for the trusted TLS certificate file. | | -| bookkeeperTlsCertFilesRefreshDurationSeconds | TLS certificate refresh duration for the BookKeeper client, in seconds (0 to disable the check). | | -| bookkeeperDiskWeightBasedPlacementEnabled | Enable/Disable disk weight based placement. | false | -| bookkeeperExplicitLacIntervalInMills | Set the interval to check the need for sending an explicit LAC. When the value is set to 0, no explicit LAC is sent. | 0 | -| bookkeeperClientExposeStatsToPrometheus | Expose BookKeeper client managed ledger stats to Prometheus. | false | -|managedLedgerDefaultEnsembleSize| |1| -|managedLedgerDefaultWriteQuorum| |1| -|managedLedgerDefaultAckQuorum| |1| -| managedLedgerDigestType | Default type of checksum to use when writing to BookKeeper. | CRC32C | -| managedLedgerNumSchedulerThreads | Number of threads to be used for managed ledger scheduled tasks. | 8 | -|managedLedgerCacheSizeMB| |N/A| -|managedLedgerCacheCopyEntries| Whether to copy the entry payloads when inserting in cache.| false| -|managedLedgerCacheEvictionWatermark| |0.9| -|managedLedgerCacheEvictionFrequency| Configure the cache eviction frequency for the managed ledger cache (evictions/sec) | 100.0 | -|managedLedgerCacheEvictionTimeThresholdMillis| All entries that have stayed in cache for more than the configured time will be evicted | 1000 | -|managedLedgerCursorBackloggedThreshold| Configure the threshold (in number of entries) from where a cursor should be considered 'backlogged' and thus should be set as inactive. | 1000| -|managedLedgerUnackedRangesOpenCacheSetEnabled| Use Open Range-Set to cache unacknowledged messages |true| -|managedLedgerDefaultMarkDeleteRateLimit| |0.1| -|managedLedgerMaxEntriesPerLedger| |50000| -|managedLedgerMinLedgerRolloverTimeMinutes| |10| -|managedLedgerMaxLedgerRolloverTimeMinutes| |240| -|managedLedgerCursorMaxEntriesPerLedger| |50000| -|managedLedgerCursorRolloverTimeInSeconds| |14400| -| managedLedgerMaxSizePerLedgerMbytes | Maximum ledger size before triggering a rollover for a topic. | 2048 | -| managedLedgerMaxUnackedRangesToPersist | Maximum number of "acknowledgment holes" that are going to be persistently stored. When acknowledging out of order, a consumer leaves holes that are supposed to be quickly filled by acknowledging all the messages. The information of which messages are acknowledged is persisted by compressing in "ranges" of messages that were acknowledged. 
After the max number of ranges is reached, the information is only tracked in memory and messages are redelivered in case of crashes. | 10000 | -| managedLedgerMaxUnackedRangesToPersistInZooKeeper | Maximum number of "acknowledgment holes" that can be stored in ZooKeeper. If the number of unacknowledged message ranges is higher than this limit, the broker persists unacknowledged ranges into BookKeeper to avoid additional data overhead in ZooKeeper. | 1000 | -|autoSkipNonRecoverableData| |false| -| managedLedgerMetadataOperationsTimeoutSeconds | Operation timeout while updating managed-ledger metadata. | 60 | -| managedLedgerReadEntryTimeoutSeconds | Read entries timeout when the broker tries to read messages from BookKeeper. | 0 | -| managedLedgerAddEntryTimeoutSeconds | Add entry timeout when the broker tries to publish a message to BookKeeper. | 0 | -| managedLedgerNewEntriesCheckDelayInMillis | New entries check delay for the cursor under the managed ledger. If there are no new messages in the topic, the cursor tries to check again after the delay time. For consumption-latency-sensitive scenarios, you can set the value to a smaller value or 0, although a smaller value may degrade consumption throughput.|10 ms| -| managedLedgerPrometheusStatsLatencyRolloverSeconds | Managed ledger Prometheus stats latency rollover, in seconds. | 60 | -| managedLedgerTraceTaskExecution | Whether to trace managed ledger task execution time. | true | -|loadBalancerEnabled| |false| -|loadBalancerPlacementStrategy| |weightedRandomSelection| -|loadBalancerReportUpdateThresholdPercentage| |10| -|loadBalancerReportUpdateMaxIntervalMinutes| |15| -|loadBalancerHostUsageCheckIntervalMinutes| |1| -|loadBalancerSheddingIntervalMinutes| |30| -|loadBalancerSheddingGracePeriodMinutes| |30| -|loadBalancerBrokerMaxTopics| |50000| -|loadBalancerBrokerUnderloadedThresholdPercentage| |1| -|loadBalancerBrokerOverloadedThresholdPercentage| |85| -|loadBalancerResourceQuotaUpdateIntervalMinutes| |15| -|loadBalancerBrokerComfortLoadLevelPercentage| |65| -|loadBalancerAutoBundleSplitEnabled| |false| -| loadBalancerAutoUnloadSplitBundlesEnabled | Enable/disable automatic unloading of split bundles. | true | -|loadBalancerNamespaceBundleMaxTopics| |1000| -|loadBalancerNamespaceBundleMaxSessions| |1000| -|loadBalancerNamespaceBundleMaxMsgRate| |1000| -|loadBalancerNamespaceBundleMaxBandwidthMbytes| |100| -|loadBalancerNamespaceMaximumBundles| |128| -| loadBalancerBrokerThresholdShedderPercentage | The broker resource usage threshold. When the broker resource usage exceeds the Pulsar cluster's average resource usage by this threshold, the shedder is triggered to offload bundles from the broker. It only takes effect in the ThresholdShedder strategy. | 10 | -| loadBalancerHistoryResourcePercentage | The weight of historical usage when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 0.9 | -| loadBalancerBandwithInResourceWeight | The inbound bandwidth usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 | -| loadBalancerBandwithOutResourceWeight | The outbound bandwidth usage weight when calculating new resource usage.
It only takes effect in the ThresholdShedder strategy. | 1.0 | -| loadBalancerCPUResourceWeight | The CPU usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 | -| loadBalancerMemoryResourceWeight | The heap memory usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 | -| loadBalancerDirectMemoryResourceWeight | The direct memory usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 | -| loadBalancerBundleUnloadMinThroughputThreshold | The minimum throughput threshold for bundle unloading, which prevents bundles from being unloaded too frequently. It only takes effect in the ThresholdShedder strategy. | 10 | -|replicationMetricsEnabled| |true| -|replicationConnectionsPerBroker| |16| -|replicationProducerQueueSize| |1000| -| replicationPolicyCheckDurationSeconds | Duration of the replication policy check, which avoids replicator inconsistency due to a missing ZooKeeper watch. When the value is set to 0, the replication policy check is disabled. | 600 | -|defaultRetentionTimeInMinutes| |0| -|defaultRetentionSizeInMB| |0| -|keepAliveIntervalSeconds| |30| -|haProxyProtocolEnabled | Enable or disable the [HAProxy](http://www.haproxy.org/) protocol. |false| -|bookieId | If you want to customize a bookie ID or use a dynamic network address for a bookie, you can set the `bookieId`.

    Bookie advertises itself using the `bookieId` rather than the `BookieSocketAddress` (`hostname:port` or `IP:port`).

    The `bookieId` is a non-empty string that can contain ASCII digits and letters ([a-zA-Z0-9]), colons, dashes, and dots.

For more information about `bookieId`, see [here](http://bookkeeper.apache.org/bps/BP-41-bookieid/).|/| -| maxTopicsPerNamespace | The maximum number of persistent topics that can be created in the namespace. When the number of topics reaches this threshold, the broker rejects requests to create new topics, including topics auto-created by producers or consumers, until the topic count drops below the threshold. The default value 0 disables the check. | 0 | - -## WebSocket - -|Name|Description|Default| -|---|---|---| -|configurationStoreServers ||| -|zooKeeperSessionTimeoutMillis| |30000| -|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300| -|serviceUrl||| -|serviceUrlTls||| -|brokerServiceUrl||| -|brokerServiceUrlTls||| -|webServicePort||8080| -|webServicePortTls||8443| -|bindAddress||0.0.0.0| -|clusterName ||| -|authenticationEnabled||false| -|authenticationProviders||| -|authorizationEnabled||false| -|superUserRoles ||| -|brokerClientAuthenticationPlugin||| -|brokerClientAuthenticationParameters||| -|tlsEnabled||false| -|tlsAllowInsecureConnection||false| -|tlsCertificateFilePath||| -|tlsKeyFilePath ||| -|tlsTrustCertsFilePath|||
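- -The WebSocket service reads these parameters from the `conf/websocket.conf` file. The following is a minimal, hypothetical sketch only; every value below is a placeholder to adapt, not a recommendation: - -```properties - -# Hypothetical conf/websocket.conf excerpt; values are placeholders. -clusterName=standalone -webServicePort=8080 -bindAddress=0.0.0.0 -# Point the WebSocket service at the broker cluster. -serviceUrl=http://localhost:8080 -brokerServiceUrl=pulsar://localhost:6650 - -```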
- -## Pulsar proxy - -The [Pulsar proxy](concepts-architecture-overview.md#pulsar-proxy) can be configured in the `conf/proxy.conf` file. - - -|Name|Description|Default| -|---|---|---| -|forwardAuthorizationCredentials| Forward client authorization credentials to the broker for re-authorization; make sure authentication is enabled for this to take effect. |false| -|zookeeperServers| The ZooKeeper quorum connection string (as a comma-separated list) || -|configurationStoreServers| Configuration store connection string (as a comma-separated list) || -| brokerServiceURL | The service URL pointing to the broker cluster. Must begin with `pulsar://`. | | -| brokerServiceURLTLS | The TLS service URL pointing to the broker cluster. Must begin with `pulsar+ssl://`. | | -| brokerWebServiceURL | The Web service URL pointing to the broker cluster | | -| brokerWebServiceURLTLS | The TLS Web service URL pointing to the broker cluster | | -| functionWorkerWebServiceURL | The Web service URL pointing to the function worker cluster. It is only configured when you set up function workers in a separate cluster. | | -| functionWorkerWebServiceURLTLS | The TLS Web service URL pointing to the function worker cluster. It is only configured when you set up function workers in a separate cluster. | | -|zookeeperSessionTimeoutMs| ZooKeeper session timeout (in milliseconds) |30000| -|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300| -|advertisedAddress|Hostname or IP address the service advertises to the outside world. If not set, the value of `InetAddress.getLocalHost().getHostname()` is used.|N/A| -|servicePort| The port used to serve binary Protobuf requests |6650| -|servicePortTls| The port used to serve binary Protobuf TLS requests |6651| -|statusFilePath| Path for the file used to determine the rotation status for the proxy instance when responding to service discovery health checks || -| proxyLogLevel | Proxy log level:
  • 0: Do not log any TCP channel information.
  • 1: Parse and log any TCP channel information and command information without message body.
  • 2: Parse and log channel information, command information and message body.
  | 0 | -|authenticationEnabled| Whether authentication is enabled for the Pulsar proxy |false| -|authenticateMetricsEndpoint| Whether the '/metrics' endpoint requires authentication. Defaults to true. 'authenticationEnabled' must also be set for this to take effect. |true| -|authenticationProviders| Authentication provider name list (a comma-separated list of class names) || -|authorizationEnabled| Whether authorization is enforced by the Pulsar proxy |false| -|authorizationProvider| Authorization provider as a fully qualified class name |org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider| -| anonymousUserRole | When this parameter is not empty, unauthenticated users act as the anonymousUserRole. | | -|brokerClientAuthenticationPlugin| The authentication plugin used by the Pulsar proxy to authenticate with Pulsar brokers || -|brokerClientAuthenticationParameters| The authentication parameters used by the Pulsar proxy to authenticate with Pulsar brokers || -|brokerClientTrustCertsFilePath| The path to trusted certificates used by the Pulsar proxy to authenticate with Pulsar brokers || -|superUserRoles| Role names that are treated as "super-users," meaning that they are able to perform all admin operations || -|maxConcurrentInboundConnections| Max concurrent inbound connections. The proxy will reject connections beyond that. |10000| -|maxConcurrentLookupRequests| Max concurrent lookup requests. The proxy will reject requests beyond that. |50000| -|tlsEnabledInProxy| Deprecated - use `servicePortTls` and `webServicePortTls` instead. |false| -|tlsEnabledWithBroker| Whether TLS is enabled when communicating with Pulsar brokers. |false| -| tlsCertRefreshCheckDurationSec | TLS certificate refresh duration in seconds. If the value is set to 0, the TLS certificate is checked on every new connection. | 300 | -|tlsCertificateFilePath| Path for the TLS certificate file || -|tlsKeyFilePath| Path for the TLS private key file || -|tlsTrustCertsFilePath| Path for the trusted TLS certificate pem file || -|tlsHostnameVerificationEnabled| Whether the hostname is validated when the proxy creates a TLS connection with brokers |false| -|tlsRequireTrustedClientCertOnConnect| Whether client certificates are required for TLS. Connections are rejected if the client certificate isn't trusted. |false| -|tlsProtocols|Specify the TLS protocols the proxy will use to negotiate during the TLS handshake. Multiple values can be specified, separated by commas. Example: `TLSv1.3`, `TLSv1.2` || -|tlsCiphers|Specify the TLS ciphers the proxy will use to negotiate during the TLS handshake. Multiple values can be specified, separated by commas. Example: `TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256`|| -| httpReverseProxyConfigs | HTTP redirect rules for proxying requests to non-Pulsar services | | -| httpOutputBufferSize | HTTP output buffer size. The amount of data that will be buffered for HTTP requests before it is flushed to the channel. A larger buffer size may result in higher HTTP throughput, though it may take longer for the client to see data. If using HTTP streaming via the reverse proxy, this should be set to the minimum value (1) so that clients see the data as soon as possible. | 32768 | -| httpNumThreads | Number of threads to use for HTTP request processing| 2 * Runtime.getRuntime().availableProcessors() | -|tokenSecretKey| Configure the secret key to be used to validate auth tokens. The key can be specified like: `tokenSecretKey=data:;base64,xxxxxxxxx` or `tokenSecretKey=file:///my/secret.key`. Note: the key file must be DER-encoded.|| -|tokenPublicKey| Configure the public key to be used to validate auth tokens. The key can be specified like: `tokenPublicKey=data:;base64,xxxxxxxxx` or `tokenPublicKey=file:///my/secret.key`. Note: the key file must be DER-encoded.|| -|tokenAuthClaim| Specify the token claim that will be used as the authentication "principal" or "role". The "subject" field will be used if this is left blank || -|tokenAudienceClaim| The token audience "claim" name, e.g. "aud". It is used to get the audience from the token. If it is not set, the audience is not verified. || -| tokenAudience | The token audience stands for this broker. The field `tokenAudienceClaim` of a valid token needs to contain this parameter.| | -|haProxyProtocolEnabled | Enable or disable the [HAProxy](http://www.haproxy.org/) protocol. |false|
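- -Several of the parameters above work together when the proxy validates client tokens itself. The following is a minimal, hypothetical sketch of `conf/proxy.conf`; the key path and values are placeholders: - -```properties - -# Hypothetical conf/proxy.conf excerpt: token authentication at the proxy. -authenticationEnabled=true -authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken -tokenSecretKey=file:///path/to/secret.key -# Forward the client's credentials to brokers for re-authorization. -forwardAuthorizationCredentials=true - -```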
- -## ZooKeeper - -ZooKeeper handles a broad range of essential configuration- and coordination-related tasks for Pulsar. The default ZooKeeper configuration file is `conf/zookeeper.conf` in your Pulsar installation. The following parameters are available: - - -|Name|Description|Default| -|---|---|---| -|tickTime| The tick is the basic unit of time in ZooKeeper, measured in milliseconds and used to regulate things like heartbeats and timeouts. tickTime is the length of a single tick. |2000| -|initLimit| The maximum time, in ticks, that the leader ZooKeeper server allows follower ZooKeeper servers to successfully connect and sync. The tick time is set in milliseconds using the tickTime parameter. |10| -|syncLimit| The maximum time, in ticks, that a follower ZooKeeper server is allowed to sync with other ZooKeeper servers. The tick time is set in milliseconds using the tickTime parameter. |5| -|dataDir| The location where ZooKeeper will store in-memory database snapshots as well as the transaction log of updates to the database. |data/zookeeper| -|clientPort| The port on which the ZooKeeper server will listen for connections. |2181| -|admin.enableServer|Whether the ZooKeeper admin server is enabled.|true| -|admin.serverPort|The port at which the admin server listens.|9990| -|autopurge.snapRetainCount| In ZooKeeper, auto purge determines how many recent snapshots of the database stored in dataDir to retain within the time interval specified by autopurge.purgeInterval (while deleting the rest). |3| -|autopurge.purgeInterval| The time interval, in hours, by which the ZooKeeper database purge task is triggered. Setting it to a non-zero number enables auto purge; setting it to 0 disables it. Consult the ZooKeeper Administrator's Guide before enabling auto purge. |1| -|forceSync|Requires updates to be synced to the media of the transaction log before finishing processing the update. If this option is set to 'no', ZooKeeper will not require updates to be synced to the media. WARNING: it's not recommended to run a production ZK cluster with `forceSync` disabled.|yes| -|maxClientCnxns| The maximum number of client connections. Increase this if you need to handle more ZooKeeper clients. |60| - - - - -In addition to the parameters in the table above, configuring ZooKeeper for Pulsar involves adding a `server.N` line to the `conf/zookeeper.conf` file for each node in the ZooKeeper cluster, where `N` is the number of the ZooKeeper node.
Here's an example for a three-node ZooKeeper cluster: - -```properties - -server.1=zk1.us-west.example.com:2888:3888 -server.2=zk2.us-west.example.com:2888:3888 -server.3=zk3.us-west.example.com:2888:3888 - -``` - -> We strongly recommend consulting the [ZooKeeper Administrator's Guide](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html) for a more thorough and comprehensive introduction to ZooKeeper configuration diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/reference-connector-admin.md b/site2/website/versioned_docs/version-2.9.0-deprecated/reference-connector-admin.md deleted file mode 100644 index f1240bf8db17de..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/reference-connector-admin.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -id: reference-connector-admin -title: Connector Admin CLI -sidebar_label: "Connector Admin CLI" -original_id: reference-connector-admin ---- - -> **Important** -> -> For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/). -> - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/reference-metrics.md b/site2/website/versioned_docs/version-2.9.0-deprecated/reference-metrics.md deleted file mode 100644 index e4e12d89ac5ae5..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/reference-metrics.md +++ /dev/null @@ -1,556 +0,0 @@ ---- -id: reference-metrics -title: Pulsar Metrics -sidebar_label: "Pulsar Metrics" -original_id: reference-metrics ---- - - - -Pulsar exposes the following metrics in Prometheus format. You can monitor your clusters with those metrics. - -* [ZooKeeper](#zookeeper) -* [BookKeeper](#bookkeeper) -* [Broker](#broker) -* [Pulsar Functions](#pulsar-functions) -* [Proxy](#proxy) -* [Pulsar SQL Worker](#pulsar-sql-worker) -* [Pulsar transaction](#pulsar-transaction) - -The following types of metrics are available: - -- [Counter](https://prometheus.io/docs/concepts/metric_types/#counter): a cumulative metric that represents a single monotonically increasing counter. The value can only increase; it is reset to zero when the process restarts. -- [Gauge](https://prometheus.io/docs/concepts/metric_types/#gauge): a metric that represents a single numerical value that can arbitrarily go up and down. -- [Histogram](https://prometheus.io/docs/concepts/metric_types/#histogram): a histogram samples observations (usually things like request durations or response sizes) and counts them in configurable buckets. -- [Summary](https://prometheus.io/docs/concepts/metric_types/#summary): similar to a histogram, a summary samples observations (usually things like request durations and response sizes). While it also provides a total count of observations and a sum of all observed values, it calculates configurable quantiles over a sliding time window. - -## ZooKeeper - -The ZooKeeper metrics are exposed under "/metrics" at port `8000`. You can use a different port by configuring `metricsProvider.httpPort` in conf/zookeeper.conf. - -### Server metrics - -| Name | Type | Description | -|---|---|---| -| znode_count | Gauge | The number of z-nodes stored. | -| approximate_data_size | Gauge | The approximate size of all z-nodes stored. | -| num_alive_connections | Gauge | The number of currently alive connections. | -| watch_count | Gauge | The number of watchers registered. | -| ephemerals_count | Gauge | The number of ephemeral z-nodes. |
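- -The exporter behind this endpoint is ZooKeeper's bundled Prometheus metrics provider (available since ZooKeeper 3.6). A minimal sketch of the relevant `conf/zookeeper.conf` lines, assuming that provider: - -```properties - -# Enable ZooKeeper's bundled Prometheus metrics provider. -metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider -# Port serving /metrics; 8000 matches the default described above. -metricsProvider.httpPort=8000 - -```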
- -### Request metrics - -| Name | Type | Description | -|---|---|---| -| request_commit_queued | Counter | The total number of requests already committed by a particular server. | -| updatelatency | Summary | The update-request latency, in milliseconds. | -| readlatency | Summary | The read-request latency, in milliseconds. | - -## BookKeeper - -The BookKeeper metrics are exposed under "/metrics" at port `8000`. You can change the port by updating `prometheusStatsHttpPort` in the `bookkeeper.conf` configuration file.
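- -To spot-check what a bookie is exporting, you can scrape the endpoint directly; for example, assuming the default port above and a bookie running on localhost: - -```shell -curl -s http://localhost:8000/metrics | grep '^bookie_' -```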
- -### Server metrics - -| Name | Type | Description | -|---|---|---| -| bookie_SERVER_STATUS | Gauge | The status of the bookie server.
    • 1: the bookie is running in writable mode.
    • 0: the bookie is running in readonly mode.
    | -| bookkeeper_server_ADD_ENTRY_count | Counter | The total number of ADD_ENTRY requests received at the bookie. The `success` label is used to distinguish successes and failures. | -| bookkeeper_server_READ_ENTRY_count | Counter | The total number of READ_ENTRY requests received at the bookie. The `success` label is used to distinguish successes and failures. | -| bookie_WRITE_BYTES | Counter | The total number of bytes written to the bookie. | -| bookie_READ_BYTES | Counter | The total number of bytes read from the bookie. | -| bookkeeper_server_ADD_ENTRY_REQUEST | Summary | The summary of request latency of ADD_ENTRY requests at the bookie. The `success` label is used to distinguish successes and failures. | -| bookkeeper_server_READ_ENTRY_REQUEST | Summary | The summary of request latency of READ_ENTRY requests at the bookie. The `success` label is used to distinguish successes and failures. | - -### Journal metrics - -| Name | Type | Description | -|---|---|---| -| bookie_journal_JOURNAL_SYNC_count | Counter | The total number of journal fsync operations happening at the bookie. The `success` label is used to distinguish successes and failures. | -| bookie_journal_JOURNAL_QUEUE_SIZE | Gauge | The total number of requests pending in the journal queue. | -| bookie_journal_JOURNAL_FORCE_WRITE_QUEUE_SIZE | Gauge | The total number of force write (fsync) requests pending in the force-write queue. | -| bookie_journal_JOURNAL_CB_QUEUE_SIZE | Gauge | The total number of callbacks pending in the callback queue. | -| bookie_journal_JOURNAL_ADD_ENTRY | Summary | The summary of request latency of adding entries to the journal. | -| bookie_journal_JOURNAL_SYNC | Summary | The summary of fsync latency of syncing data to the journal disk. | - -### Storage metrics - -| Name | Type | Description | -|---|---|---| -| bookie_ledgers_count | Gauge | The total number of ledgers stored in the bookie. | -| bookie_entries_count | Gauge | The total number of entries stored in the bookie. | -| bookie_write_cache_size | Gauge | The bookie write cache size (in bytes). | -| bookie_read_cache_size | Gauge | The bookie read cache size (in bytes). | -| bookie_DELETED_LEDGER_COUNT | Counter | The total number of ledgers deleted since the bookie has started. | -| bookie_ledger_writable_dirs | Gauge | The number of writable directories in the bookie. | - -## Broker - -The broker metrics are exposed under "/metrics" at port `8080`. You can change the port by updating `webServicePort` in the `broker.conf` configuration file. - -All the metrics exposed by a broker are labelled with `cluster=${pulsar_cluster}`. The Pulsar cluster name is the value of `${pulsar_cluster}`, which you have configured in the `broker.conf` file.
- -The following metrics are available for broker: - -- [ZooKeeper](#zookeeper) - - [Server metrics](#server-metrics) - - [Request metrics](#request-metrics) -- [BookKeeper](#bookkeeper) - - [Server metrics](#server-metrics-1) - - [Journal metrics](#journal-metrics) - - [Storage metrics](#storage-metrics) -- [Broker](#broker) - - [Namespace metrics](#namespace-metrics) - - [Replication metrics](#replication-metrics) - - [Topic metrics](#topic-metrics) - - [Replication metrics](#replication-metrics-1) - - [ManagedLedgerCache metrics](#managedledgercache-metrics) - - [ManagedLedger metrics](#managedledger-metrics) - - [LoadBalancing metrics](#loadbalancing-metrics) - - [BundleUnloading metrics](#bundleunloading-metrics) - - [BundleSplit metrics](#bundlesplit-metrics) - - [Subscription metrics](#subscription-metrics) - - [Consumer metrics](#consumer-metrics) - - [Managed ledger bookie client metrics](#managed-ledger-bookie-client-metrics) - - [Token metrics](#token-metrics) - - [Authentication metrics](#authentication-metrics) - - [Connection metrics](#connection-metrics) -- [Pulsar Functions](#pulsar-functions) -- [Proxy](#proxy) -- [Pulsar SQL Worker](#pulsar-sql-worker) -- [Pulsar transaction](#pulsar-transaction) - -### Namespace metrics - -> Namespace metrics are only exposed when `exposeTopicLevelMetricsInPrometheus` is set to `false`. - -All the namespace metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you configured in `broker.conf`. -- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name. - -| Name | Type | Description | -|---|---|---| -| pulsar_topics_count | Gauge | The number of Pulsar topics of the namespace owned by this broker. | -| pulsar_subscriptions_count | Gauge | The number of Pulsar subscriptions of the namespace served by this broker. | -| pulsar_producers_count | Gauge | The number of active producers of the namespace connected to this broker. | -| pulsar_consumers_count | Gauge | The number of active consumers of the namespace connected to this broker. | -| pulsar_rate_in | Gauge | The total message rate of the namespace coming into this broker (messages/second). | -| pulsar_rate_out | Gauge | The total message rate of the namespace going out from this broker (messages/second). | -| pulsar_throughput_in | Gauge | The total throughput of the namespace coming into this broker (bytes/second). | -| pulsar_throughput_out | Gauge | The total throughput of the namespace going out from this broker (bytes/second). | -| pulsar_storage_size | Gauge | The total storage size of the topics in this namespace owned by this broker (bytes). | -| pulsar_storage_logical_size | Gauge | The storage size of topics in the namespace owned by the broker, excluding replicas (in bytes). | -| pulsar_storage_backlog_size | Gauge | The total backlog size of the topics of this namespace owned by this broker (messages). | -| pulsar_storage_offloaded_size | Gauge | The total amount of data in this namespace offloaded to the tiered storage (bytes). | -| pulsar_storage_write_rate | Gauge | The total message batches (entries) written to the storage for this namespace (message batches / second). | -| pulsar_storage_read_rate | Gauge | The total message batches (entries) read from the storage for this namespace (message batches / second). | -| pulsar_subscription_delayed | Gauge | The total message batches (entries) that are delayed for dispatching.
| -| pulsar_storage_write_latency_le_* | Histogram | The entry rate of a namespace for which the storage write latency is smaller than a given threshold.
    Available thresholds:
    • pulsar_storage_write_latency_le_0_5: <= 0.5ms
    • pulsar_storage_write_latency_le_1: <= 1ms
    • pulsar_storage_write_latency_le_5: <= 5ms
    • pulsar_storage_write_latency_le_10: <= 10ms
    • pulsar_storage_write_latency_le_20: <= 20ms
    • pulsar_storage_write_latency_le_50: <= 50ms
    • pulsar_storage_write_latency_le_100: <= 100ms
    • pulsar_storage_write_latency_le_200: <= 200ms
    • pulsar_storage_write_latency_le_1000: <= 1s
    • pulsar_storage_write_latency_le_overflow: > 1s
    | -| pulsar_entry_size_le_* | Histogram | The entry rate of a namespace for which the entry size is smaller than a given threshold.
    Available thresholds:
    • pulsar_entry_size_le_128: <= 128 bytes
    • pulsar_entry_size_le_512: <= 512 bytes
    • pulsar_entry_size_le_1_kb: <= 1 KB
    • pulsar_entry_size_le_2_kb: <= 2 KB
    • pulsar_entry_size_le_4_kb: <= 4 KB
    • pulsar_entry_size_le_16_kb: <= 16 KB
    • pulsar_entry_size_le_100_kb: <= 100 KB
    • pulsar_entry_size_le_1_mb: <= 1 MB
    • pulsar_entry_size_le_overflow: > 1 MB
    | - -#### Replication metrics - -If a namespace is configured to be replicated among multiple Pulsar clusters, the corresponding replication metrics are also exposed when `replicationMetricsEnabled` is enabled. - -All the replication metrics are also labelled with `remoteCluster=${pulsar_remote_cluster}`. - -| Name | Type | Description | -|---|---|---| -| pulsar_replication_rate_in | Gauge | The total message rate of the namespace replicating from the remote cluster (messages/second). | -| pulsar_replication_rate_out | Gauge | The total message rate of the namespace replicating to the remote cluster (messages/second). | -| pulsar_replication_throughput_in | Gauge | The total throughput of the namespace replicating from the remote cluster (bytes/second). | -| pulsar_replication_throughput_out | Gauge | The total throughput of the namespace replicating to the remote cluster (bytes/second). | -| pulsar_replication_backlog | Gauge | The total backlog of the namespace replicating to the remote cluster (messages). | -| pulsar_replication_rate_expired | Gauge | Total rate of messages expired (messages/second). | -| pulsar_replication_connected_count | Gauge | The number of replication subscribers that are up and running to replicate to the remote cluster. | -| pulsar_replication_delay_in_seconds | Gauge | Time in seconds from the time a message was produced to the time when it is about to be replicated. | - - -### Topic metrics - -> Topic metrics are only exposed when `exposeTopicLevelMetricsInPrometheus` is set to `true`. - -All the topic metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you configured in `broker.conf`. -- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name. -- *topic*: `topic=${pulsar_topic}`. `${pulsar_topic}` is the topic name. - -| Name | Type | Description | -|---|---|---| -| pulsar_subscriptions_count | Gauge | The number of Pulsar subscriptions of the topic served by this broker. | -| pulsar_producers_count | Gauge | The number of active producers of the topic connected to this broker. | -| pulsar_consumers_count | Gauge | The number of active consumers of the topic connected to this broker. | -| pulsar_rate_in | Gauge | The total message rate of the topic coming into this broker (messages/second). | -| pulsar_rate_out | Gauge | The total message rate of the topic going out from this broker (messages/second). | -| pulsar_throughput_in | Gauge | The total throughput of the topic coming into this broker (bytes/second). | -| pulsar_throughput_out | Gauge | The total throughput of the topic going out from this broker (bytes/second). | -| pulsar_storage_size | Gauge | The total storage size of this topic owned by this broker (bytes). | -| pulsar_storage_logical_size | Gauge | The storage size of this topic owned by the broker, excluding replicas (in bytes). | -| pulsar_storage_backlog_size | Gauge | The total backlog size of this topic owned by this broker (messages). | -| pulsar_storage_offloaded_size | Gauge | The total amount of data in this topic offloaded to the tiered storage (bytes). | -| pulsar_storage_backlog_quota_limit | Gauge | The backlog quota limit of this topic (bytes). | -| pulsar_storage_write_rate | Gauge | The total message batches (entries) written to the storage for this topic (message batches / second).
| -| pulsar_storage_read_rate | Gauge | The total message batches (entries) read from the storage for this topic (message batches / second). | -| pulsar_subscription_delayed | Gauge | The total message batches (entries) that are delayed for dispatching. | -| pulsar_storage_write_latency_le_* | Histogram | The entry rate of a topic for which the storage write latency is smaller than a given threshold.
    Available thresholds:
    • pulsar_storage_write_latency_le_0_5: <= 0.5ms
    • pulsar_storage_write_latency_le_1: <= 1ms
    • pulsar_storage_write_latency_le_5: <= 5ms
    • pulsar_storage_write_latency_le_10: <= 10ms
    • pulsar_storage_write_latency_le_20: <= 20ms
    • pulsar_storage_write_latency_le_50: <= 50ms
    • pulsar_storage_write_latency_le_100: <= 100ms
    • pulsar_storage_write_latency_le_200: <= 200ms
    • pulsar_storage_write_latency_le_1000: <= 1s
    • pulsar_storage_write_latency_le_overflow: > 1s
    | -| pulsar_entry_size_le_* | Histogram | The entry rate of a topic for which the entry size is smaller than a given threshold.
    Available thresholds:
    • pulsar_entry_size_le_128: <= 128 bytes
    • pulsar_entry_size_le_512: <= 512 bytes
    • pulsar_entry_size_le_1_kb: <= 1 KB
    • pulsar_entry_size_le_2_kb: <= 2 KB
    • pulsar_entry_size_le_4_kb: <= 4 KB
    • pulsar_entry_size_le_16_kb: <= 16 KB
    • pulsar_entry_size_le_100_kb: <= 100 KB
    • pulsar_entry_size_le_1_mb: <= 1 MB
    • pulsar_entry_size_le_overflow: > 1 MB
    | -| pulsar_in_bytes_total | Counter | The total number of bytes received for this topic. | -| pulsar_in_messages_total | Counter | The total number of messages received for this topic. | -| pulsar_out_bytes_total | Counter | The total number of bytes read from this topic. | -| pulsar_out_messages_total | Counter | The total number of messages read from this topic. | -| pulsar_compaction_removed_event_count | Gauge | The total number of removed events of the compaction. | -| pulsar_compaction_succeed_count | Gauge | The total number of successes of the compaction. | -| pulsar_compaction_failed_count | Gauge | The total number of failures of the compaction. | -| pulsar_compaction_duration_time_in_mills | Gauge | The duration time of the compaction. | -| pulsar_compaction_read_throughput | Gauge | The read throughput of the compaction. | -| pulsar_compaction_write_throughput | Gauge | The write throughput of the compaction. | -| pulsar_compaction_latency_le_* | Histogram | The compaction latency within a given threshold.
    Available thresholds:
    • pulsar_compaction_latency_le_0_5: <= 0.5ms
    • pulsar_compaction_latency_le_1: <= 1ms
    • pulsar_compaction_latency_le_5: <= 5ms
    • pulsar_compaction_latency_le_10: <= 10ms
    • pulsar_compaction_latency_le_20: <= 20ms
    • pulsar_compaction_latency_le_50: <= 50ms
    • pulsar_compaction_latency_le_100: <= 100ms
    • pulsar_compaction_latency_le_200: <= 200ms
    • pulsar_compaction_latency_le_1000: <= 1s
    • pulsar_compaction_latency_le_overflow: > 1s
    | -| pulsar_compaction_compacted_entries_count | Gauge | The total number of compacted entries. | -| pulsar_compaction_compacted_entries_size |Gauge | The total size of the compacted entries. | - -#### Replication metrics - -If a namespace that a topic belongs to is configured to be replicated among multiple Pulsar clusters, the corresponding replication metrics are also exposed when `replicationMetricsEnabled` is enabled. - -All the replication metrics are labelled with `remoteCluster=${pulsar_remote_cluster}`. - -| Name | Type | Description | -|---|---|---| -| pulsar_replication_rate_in | Gauge | The total message rate of the topic replicating from the remote cluster (messages/second). | -| pulsar_replication_rate_out | Gauge | The total message rate of the topic replicating to the remote cluster (messages/second). | -| pulsar_replication_throughput_in | Gauge | The total throughput of the topic replicating from the remote cluster (bytes/second). | -| pulsar_replication_throughput_out | Gauge | The total throughput of the topic replicating to the remote cluster (bytes/second). | -| pulsar_replication_backlog | Gauge | The total backlog of the topic replicating to the remote cluster (messages). | - -### ManagedLedgerCache metrics -All the ManagedLedgerCache metrics are labelled with the following labels: -- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file. - -| Name | Type | Description | -| --- | --- | --- | -| pulsar_ml_cache_evictions | Gauge | The number of cache evictions during the last minute. | -| pulsar_ml_cache_hits_rate | Gauge | The number of cache hits per second. | -| pulsar_ml_cache_hits_throughput | Gauge | The amount of data retrieved from the cache, in bytes/s | -| pulsar_ml_cache_misses_rate | Gauge | The number of cache misses per second | -| pulsar_ml_cache_misses_throughput | Gauge | The amount of requested data that could not be served from the cache, in bytes/s | -| pulsar_ml_cache_pool_active_allocations | Gauge | The number of currently active allocations in the direct arena | -| pulsar_ml_cache_pool_active_allocations_huge | Gauge | The number of currently active huge allocations in the direct arena | -| pulsar_ml_cache_pool_active_allocations_normal | Gauge | The number of currently active normal allocations in the direct arena | -| pulsar_ml_cache_pool_active_allocations_small | Gauge | The number of currently active small allocations in the direct arena | -| pulsar_ml_cache_pool_allocated | Gauge | The total allocated memory of chunk lists in the direct arena | -| pulsar_ml_cache_pool_used | Gauge | The total used memory of chunk lists in the direct arena | -| pulsar_ml_cache_used_size | Gauge | The size in bytes used to store the entries' payloads | -| pulsar_ml_count | Gauge | The number of currently opened managed ledgers | - -### ManagedLedger metrics -All the managedLedger metrics are labelled with the following labels: -- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file. -- namespace: namespace=${pulsar_namespace}. ${pulsar_namespace} is the namespace name. -- quantile: quantile=${quantile}. Quantile is only for `Histogram` type metrics, and represents the threshold for the given buckets.
- -| Name | Type | Description | -| --- | --- | --- | -| pulsar_ml_AddEntryBytesRate | Gauge | The bytes/s rate of messages added | -| pulsar_ml_AddEntryWithReplicasBytesRate | Gauge | The bytes/s rate of messages added with replicas | -| pulsar_ml_AddEntryErrors | Gauge | The number of addEntry requests that failed | -| pulsar_ml_AddEntryLatencyBuckets | Histogram | The latency of adding a ledger entry with a given quantile (threshold), including time spent on waiting in queue on the broker side
    Available quantile:
    • quantile="0.0_0.5" is AddEntryLatency between (0.0ms, 0.5ms]
    • quantile="0.5_1.0" is AddEntryLatency between (0.5ms, 1.0ms]
    • quantile="1.0_5.0" is AddEntryLatency between (1ms, 5ms]
    • quantile="5.0_10.0" is AddEntryLatency between (5ms, 10ms]
    • quantile="10.0_20.0" is AddEntryLatency between (10ms, 20ms]
    • quantile="20.0_50.0" is AddEntryLatency between (20ms, 50ms]
    • quantile="50.0_100.0" is AddEntryLatency between (50ms, 100ms]
    • quantile="100.0_200.0" is AddEntryLatency between (100ms, 200ms]
    • quantile="200.0_1000.0" is AddEntryLatency between (200ms, 1s]
    | -| pulsar_ml_AddEntryLatencyBuckets_OVERFLOW | Gauge | The number of times the AddEntryLatency is longer than 1 second | -| pulsar_ml_AddEntryMessagesRate | Gauge | The msg/s rate of messages added | -| pulsar_ml_AddEntrySucceed | Gauge | The number of addEntry requests that succeeded | -| pulsar_ml_EntrySizeBuckets | Histogram | The added entry size of a ledger with a given quantile.
    Available quantile:
    • quantile="0.0_128.0" is EntrySize between (0byte, 128byte]
    • quantile="128.0_512.0" is EntrySize between (128byte, 512byte]
    • quantile="512.0_1024.0" is EntrySize between (512byte, 1KB]
    • quantile="1024.0_2048.0" is EntrySize between (1KB, 2KB]
    • quantile="2048.0_4096.0" is EntrySize between (2KB, 4KB]
    • quantile="4096.0_16384.0" is EntrySize between (4KB, 16KB]
    • quantile="16384.0_102400.0" is EntrySize between (16KB, 100KB]
    • quantile="102400.0_1232896.0" is EntrySize between (100KB, 1MB]
    | -| pulsar_ml_EntrySizeBuckets_OVERFLOW |Gauge | The number of times the EntrySize is larger than 1MB | -| pulsar_ml_LedgerSwitchLatencyBuckets | Histogram | The ledger switch latency with a given quantile.
    Available quantile:
    • quantile="0.0_0.5" is LedgerSwitchLatency between (0ms, 0.5ms]
    • quantile="0.5_1.0" is LedgerSwitchLatency between (0.5ms, 1ms]
    • quantile="1.0_5.0" is LedgerSwitchLatency between (1ms, 5ms]
    • quantile="5.0_10.0" is LedgerSwitchLatency between (5ms, 10ms]
    • quantile="10.0_20.0" is LedgerSwitchLatency between (10ms, 20ms]
    • quantile="20.0_50.0" is LedgerSwitchLatency between (20ms, 50ms]
    • quantile="50.0_100.0" is LedgerSwitchLatency between (50ms, 100ms]
    • quantile="100.0_200.0" is LedgerSwitchLatency between (100ms, 200ms]
    • quantile="200.0_1000.0" is LedgerSwitchLatency between (200ms, 1000ms]
    | -| pulsar_ml_LedgerSwitchLatencyBuckets_OVERFLOW | Gauge | The number of times the ledger switch latency is longer than 1 second | -| pulsar_ml_LedgerAddEntryLatencyBuckets | Histogram | The latency for bookie client to persist a ledger entry from broker to BookKeeper service with a given quantile (threshold).
    Available quantile:
    • quantile="0.0_0.5" is LedgerAddEntryLatency between (0.0ms, 0.5ms]
    • quantile="0.5_1.0" is LedgerAddEntryLatency between (0.5ms, 1.0ms]
    • quantile="1.0_5.0" is LedgerAddEntryLatency between (1ms, 5ms]
    • quantile="5.0_10.0" is LedgerAddEntryLatency between (5ms, 10ms]
    • quantile="10.0_20.0" is LedgerAddEntryLatency between (10ms, 20ms]
    • quantile="20.0_50.0" is LedgerAddEntryLatency between (20ms, 50ms]
    • quantile="50.0_100.0" is LedgerAddEntryLatency between (50ms, 100ms]
    • quantile="100.0_200.0" is LedgerAddEntryLatency between (100ms, 200ms]
    • quantile="200.0_1000.0" is LedgerAddEntryLatency between (200ms, 1s]
    | -| pulsar_ml_LedgerAddEntryLatencyBuckets_OVERFLOW | Gauge | The number of times the LedgerAddEntryLatency is longer than 1 second | -| pulsar_ml_MarkDeleteRate | Gauge | The rate of mark-delete ops/s | -| pulsar_ml_NumberOfMessagesInBacklog | Gauge | The number of backlog messages for all the consumers | -| pulsar_ml_ReadEntriesBytesRate | Gauge | The bytes/s rate of messages read | -| pulsar_ml_ReadEntriesErrors | Gauge | The number of readEntries requests that failed | -| pulsar_ml_ReadEntriesRate | Gauge | The msg/s rate of messages read | -| pulsar_ml_ReadEntriesSucceeded | Gauge | The number of readEntries requests that succeeded | -| pulsar_ml_StoredMessagesSize | Gauge | The total size of the messages in active ledgers (accounting for the multiple copies stored) | - -### Managed cursor acknowledgment state - -The acknowledgment state is persisted to the ledger first. When it fails to be persisted to the ledger, it is persisted to ZooKeeper. To track the stats of acknowledgment, you can configure the metrics for the managed cursor. - -All the cursor acknowledgment state metrics are labelled with the following labels: - -- namespace: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name. - -- ledger_name: `ledger_name=${pulsar_ledger_name}`. `${pulsar_ledger_name}` is the ledger name. - -- cursor_name: `cursor_name=${pulsar_cursor_name}`. `${pulsar_cursor_name}` is the cursor name. - -Name |Type |Description -|---|---|--- -brk_ml_cursor_persistLedgerSucceed(namespace="", ledger_name="", cursor_name="")|Gauge|The number of acknowledgment states that are persisted to a ledger.| -brk_ml_cursor_persistLedgerErrors(namespace="", ledger_name="", cursor_name="")|Gauge|The number of ledger errors that occurred when acknowledgment states failed to be persisted to the ledger.| -brk_ml_cursor_persistZookeeperSucceed(namespace="", ledger_name="", cursor_name="")|Gauge|The number of acknowledgment states that are persisted to ZooKeeper. -brk_ml_cursor_persistZookeeperErrors(namespace="", ledger_name="", cursor_name="")|Gauge|The number of ledger errors that occurred when acknowledgment states failed to be persisted to ZooKeeper. -brk_ml_cursor_nonContiguousDeletedMessagesRange(namespace="", ledger_name="", cursor_name="")|Gauge|The number of non-contiguous deleted message ranges. -brk_ml_cursor_writeLedgerSize(namespace="", ledger_name="", cursor_name="")|Gauge|The size of data written to the ledger. -brk_ml_cursor_writeLedgerLogicalSize(namespace="", ledger_name="", cursor_name="")|Gauge|The size of data written to the ledger (excluding replicas). -brk_ml_cursor_readLedgerSize(namespace="", ledger_name="", cursor_name="")|Gauge|The size of data read from the ledger. - -### LoadBalancing metrics -All the loadbalancing metrics are labelled with the following labels: -- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file. -- broker: broker=${broker}. ${broker} is the IP address of the broker -- metric: metric="loadBalancing".
- -| Name | Type | Description | -| --- | --- | --- | -| pulsar_lb_bandwidth_in_usage | Gauge | The broker bandwith in usage | -| pulsar_lb_bandwidth_out_usage | Gauge | The broker bandwith out usage | -| pulsar_lb_cpu_usage | Gauge | The broker cpu usage | -| pulsar_lb_directMemory_usage | Gauge | The broker process direct memory usage | -| pulsar_lb_memory_usage | Gauge | The broker process memory usage | - -#### BundleUnloading metrics -All the bundleUnloading metrics are labelled with the following labels: -- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file. -- metric: metric="bundleUnloading". - -| Name | Type | Description | -| --- | --- | --- | -| pulsar_lb_unload_broker_count | Counter | Unload broker count in this bundle unloading | -| pulsar_lb_unload_bundle_count | Counter | Bundle unload count in this bundle unloading | - -#### BundleSplit metrics -All the bundleUnloading metrics are labelled with the following labels: -- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file. -- metric: metric="bundlesSplit". - -| Name | Type | Description | -| --- | --- | --- | -| pulsar_lb_bundles_split_count | Counter | The total count of bundle split in this leader broker | - -### Subscription metrics - -> Subscription metrics are only exposed when `exposeTopicLevelMetricsInPrometheus` is set to `true`. - -All the subscription metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file. -- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name. -- *topic*: `topic=${pulsar_topic}`. `${pulsar_topic}` is the topic name. -- *subscription*: `subscription=${subscription}`. `${subscription}` is the topic subscription name. - -| Name | Type | Description | -|---|---|---| -| pulsar_subscription_back_log | Gauge | The total backlog of a subscription (messages). | -| pulsar_subscription_delayed | Gauge | The total number of messages are delayed to be dispatched for a subscription (messages). | -| pulsar_subscription_msg_rate_redeliver | Gauge | The total message rate for message being redelivered (messages/second). | -| pulsar_subscription_unacked_messages | Gauge | The total number of unacknowledged messages of a subscription (messages). | -| pulsar_subscription_blocked_on_unacked_messages | Gauge | Indicate whether a subscription is blocked on unacknowledged messages or not.
    • 1 means the subscription is blocked waiting for unacknowledged messages to be acked.
    • 0 means the subscription is not blocked waiting for unacknowledged messages to be acked.
    | -| pulsar_subscription_msg_rate_out | Gauge | The total message dispatch rate for a subscription (messages/second). | -| pulsar_subscription_msg_throughput_out | Gauge | The total message dispatch throughput for a subscription (bytes/second). | - -### Consumer metrics - -> Consumer metrics are only exposed when both `exposeTopicLevelMetricsInPrometheus` and `exposeConsumerLevelMetricsInPrometheus` are set to `true`. - -All the consumer metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file. -- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name. -- *topic*: `topic=${pulsar_topic}`. `${pulsar_topic}` is the topic name. -- *subscription*: `subscription=${subscription}`. `${subscription}` is the topic subscription name. -- *consumer_name*: `consumer_name=${consumer_name}`. `${consumer_name}` is the topic consumer name. -- *consumer_id*: `consumer_id=${consumer_id}`. `${consumer_id}` is the topic consumer id. - -| Name | Type | Description | -|---|---|---| -| pulsar_consumer_msg_rate_redeliver | Gauge | The total message rate for messages being redelivered (messages/second). | -| pulsar_consumer_unacked_messages | Gauge | The total number of unacknowledged messages of a consumer (messages). | -| pulsar_consumer_blocked_on_unacked_messages | Gauge | Indicates whether a consumer is blocked on unacknowledged messages or not.
    • 1 means the consumer is blocked waiting for unacknowledged messages to be acked.
    • 0 means the consumer is not blocked waiting for unacknowledged messages to be acked.
    | -| pulsar_consumer_msg_rate_out | Gauge | The total message dispatch rate for a consumer (messages/second). | -| pulsar_consumer_msg_throughput_out | Gauge | The total message dispatch throughput for a consumer (bytes/second). | -| pulsar_consumer_available_permits | Gauge | The available permits for a consumer. | - -### Managed ledger bookie client metrics - -All the managed ledger bookie client metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file. - -| Name | Type | Description | -| --- | --- | --- | -| pulsar_managedLedger_client_bookkeeper_ml_scheduler_completed_tasks_* | Gauge | The number of tasks completed by the scheduler executor.
    The number of such metrics is determined by the scheduler executor's thread count, configured by `managedLedgerNumSchedulerThreads` in `broker.conf`.
    | -| pulsar_managedLedger_client_bookkeeper_ml_scheduler_queue_* | Gauge | The number of tasks queued in the scheduler executor's queue.
    The number of such metrics is determined by the scheduler executor's thread count, configured by `managedLedgerNumSchedulerThreads` in `broker.conf`.
    | -| pulsar_managedLedger_client_bookkeeper_ml_scheduler_total_tasks_* | Gauge | The total number of tasks the scheduler executor received.
    The number of such metrics is determined by the scheduler executor's thread count, configured by `managedLedgerNumSchedulerThreads` in `broker.conf`.
    | -| pulsar_managedLedger_client_bookkeeper_ml_scheduler_task_execution | Summary | The scheduler task execution latency calculated in milliseconds. | -| pulsar_managedLedger_client_bookkeeper_ml_scheduler_task_queued | Summary | The scheduler task queued latency calculated in milliseconds. | - -### Token metrics - -All the token metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file. - -| Name | Type | Description | -|---|---|---| -| pulsar_expired_token_count | Counter | The number of expired tokens in Pulsar. | -| pulsar_expiring_token_minutes | Histogram | The remaining time of expiring tokens in minutes. | - -### Authentication metrics - -All the authentication metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file. -- *provider_name*: `provider_name=${provider_name}`. `${provider_name}` is the class name of the authentication provider. -- *auth_method*: `auth_method=${auth_method}`. `${auth_method}` is the authentication method of the authentication provider. -- *reason*: `reason=${reason}`. `${reason}` is the reason for the failed authentication operation. (This label is only for `pulsar_authentication_failures_count`.) - -| Name | Type | Description | -|---|---|---| -| pulsar_authentication_success_count| Counter | The number of successful authentication operations. | -| pulsar_authentication_failures_count | Counter | The number of failed authentication operations. | - -### Connection metrics - -All the connection metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file. -- *broker*: `broker=${advertised_address}`. `${advertised_address}` is the advertised address of the broker. -- *metric*: `metric=${metric}`. `${metric}` is the connection metric collective name. - -| Name | Type | Description | -|---|---|---| -| pulsar_active_connections| Gauge | The number of active connections. | -| pulsar_connection_created_total_count | Gauge | The total number of connections. | -| pulsar_connection_create_success_count | Gauge | The number of successfully created connections. | -| pulsar_connection_create_fail_count | Gauge | The number of failed connections. | -| pulsar_connection_closed_total_count | Gauge | The total number of closed connections. | -| pulsar_broker_throttled_connections | Gauge | The number of throttled connections. | -| pulsar_broker_throttled_connections_global_limit | Gauge | The number of connections throttled because the global connection limit was reached. |
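- -The token, authentication, and connection counters above can be spot-checked by filtering the broker's metrics endpoint; for example, assuming a broker on localhost with the default web service port: - -```shell -curl -s http://localhost:8080/metrics | grep -E '^pulsar_(authentication|expired_token|active_connections)' -```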
- -## Pulsar Functions - -All the Pulsar Functions metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file. -- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name. - -| Name | Type | Description | -|---|---|---| -| pulsar_function_processed_successfully_total | Counter | The total number of messages processed successfully. | -| pulsar_function_processed_successfully_total_1min | Counter | The total number of messages processed successfully in the last 1 minute. | -| pulsar_function_system_exceptions_total | Counter | The total number of system exceptions. | -| pulsar_function_system_exceptions_total_1min | Counter | The total number of system exceptions in the last 1 minute. | -| pulsar_function_user_exceptions_total | Counter | The total number of user exceptions. | -| pulsar_function_user_exceptions_total_1min | Counter | The total number of user exceptions in the last 1 minute. | -| pulsar_function_process_latency_ms | Summary | The process latency in milliseconds. | -| pulsar_function_process_latency_ms_1min | Summary | The process latency in milliseconds in the last 1 minute. | -| pulsar_function_last_invocation | Gauge | The timestamp of the last invocation of the function. | -| pulsar_function_received_total | Counter | The total number of messages received from the source. | -| pulsar_function_received_total_1min | Counter | The total number of messages received from the source in the last 1 minute. | -pulsar_function_user_metric_ | Summary|The user-defined metrics. - -## Connectors - -All the Pulsar connector metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file. -- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name. - -Connector metrics contain **source** metrics and **sink** metrics. - -- **Source** metrics - -  | Name | Type | Description | -  |---|---|---| -  pulsar_source_written_total|Counter|The total number of records written to a Pulsar topic. -  pulsar_source_written_total_1min|Counter|The total number of records written to a Pulsar topic in the last 1 minute. -  pulsar_source_received_total|Counter|The total number of records received from the source. -  pulsar_source_received_total_1min|Counter|The total number of records received from the source in the last 1 minute. -  pulsar_source_last_invocation|Gauge|The timestamp of the last invocation of the source. -  pulsar_source_source_exception|Gauge|The exception from a source. -  pulsar_source_source_exceptions_total|Counter|The total number of source exceptions. -  pulsar_source_source_exceptions_total_1min |Counter|The total number of source exceptions in the last 1 minute. -  pulsar_source_system_exception|Gauge|The exception from system code. -  pulsar_source_system_exceptions_total|Counter|The total number of system exceptions. -  pulsar_source_system_exceptions_total_1min|Counter|The total number of system exceptions in the last 1 minute. -  pulsar_source_user_metric_ | Summary|The user-defined metrics. - -- **Sink** metrics - -  | Name | Type | Description | -  |---|---|---| -  pulsar_sink_written_total|Counter| The total number of records processed by a sink. -  pulsar_sink_written_total_1min|Counter| The total number of records processed by a sink in the last 1 minute. -  pulsar_sink_received_total_1min|Counter| The total number of messages that a sink has received from Pulsar topics in the last 1 minute. -  pulsar_sink_received_total|Counter| The total number of records that a sink has received from Pulsar topics. -  pulsar_sink_last_invocation|Gauge|The timestamp of the last invocation of the sink. -  pulsar_sink_sink_exception|Gauge|The exception from a sink. -  pulsar_sink_sink_exceptions_total|Counter|The total number of sink exceptions. -  pulsar_sink_sink_exceptions_total_1min |Counter|The total number of sink exceptions in the last 1 minute. -  pulsar_sink_system_exception|Gauge|The exception from system code. -  pulsar_sink_system_exceptions_total|Counter|The total number of system exceptions.
  pulsar_sink_system_exceptions_total_1min|Counter|The total number of system exceptions in the last 1 minute.
  pulsar_sink_user_metric_ | Summary|The user-defined metrics.

## Proxy

All the proxy metrics are labelled with the following labels:

- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
- *kubernetes_pod_name*: `kubernetes_pod_name=${kubernetes_pod_name}`. `${kubernetes_pod_name}` is the Kubernetes pod name.

| Name | Type | Description |
|---|---|---|
| pulsar_proxy_active_connections | Gauge | Number of connections currently active in the proxy. |
| pulsar_proxy_new_connections | Counter | Counter of connections being opened in the proxy. |
| pulsar_proxy_rejected_connections | Counter | Counter for connections rejected due to throttling. |
| pulsar_proxy_binary_ops | Counter | Counter of proxy operations. |
| pulsar_proxy_binary_bytes | Counter | Counter of proxy bytes. |

## Pulsar SQL Worker

| Name | Type | Description |
|---|---|---|
| split_bytes_read | Counter | Number of bytes read from BookKeeper. |
| split_num_messages_deserialized | Counter | Number of messages deserialized. |
| split_num_record_deserialized | Counter | Number of records deserialized. |
| split_bytes_read_per_query | Summary | Total number of bytes read per query. |
| split_entry_deserialize_time | Summary | Time spent on deserializing entries. |
| split_entry_deserialize_time_per_query | Summary | Time spent on deserializing entries per query. |
| split_entry_queue_dequeue_wait_time | Summary | Time spent on waiting to get an entry from the entry queue because it is empty. |
| split_entry_queue_dequeue_wait_time_per_query | Summary | Total time spent on waiting to get an entry from the entry queue per query. |
| split_message_queue_dequeue_wait_time_per_query | Summary | Time spent on waiting to dequeue from the message queue because it is empty, per query. |
| split_message_queue_enqueue_wait_time | Summary | Time spent on waiting for message queue enqueue because the message queue is full. |
| split_message_queue_enqueue_wait_time_per_query | Summary | Time spent on waiting for message queue enqueue because the message queue is full, per query. |
| split_num_entries_per_batch | Summary | Number of entries per batch. |
| split_num_entries_per_query | Summary | Number of entries per query. |
| split_num_messages_deserialized_per_entry | Summary | Number of messages deserialized per entry. |
| split_num_messages_deserialized_per_query | Summary | Number of messages deserialized per query. |
| split_read_attempts | Summary | Number of read attempts (a read fails if the queues are full). |
| split_read_attempts_per_query | Summary | Number of read attempts per query. |
| split_read_latency_per_batch | Summary | Latency of reads per batch. |
| split_read_latency_per_query | Summary | Total read latency per query. |
| split_record_deserialize_time | Summary | Time spent on deserializing messages to records. For example, Avro, JSON, and so on. |
| split_record_deserialize_time_per_query | Summary | Time spent on deserializing messages to records per query. |
| split_total_execution_time | Summary | The total execution time. |

## Pulsar transaction

All the transaction metrics are labelled with the following labels:

- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
- *coordinator_id*: `coordinator_id=${coordinator_id}`. 
`${coordinator_id}` is the coordinator id. - -| Name | Type | Description | -|---|---|---| -| pulsar_txn_active_count | Gauge | Number of active transactions. | -| pulsar_txn_created_count | Counter | Number of created transactions. | -| pulsar_txn_committed_count | Counter | Number of committed transactions. | -| pulsar_txn_aborted_count | Counter | Number of aborted transactions of this coordinator. | -| pulsar_txn_timeout_count | Counter | Number of timeout transactions. | -| pulsar_txn_append_log_count | Counter | Number of append transaction logs. | -| pulsar_txn_execution_latency_le_* | Histogram | Transaction execution latency.
    Available latencies are as below:
    • latency="10" is TransactionExecutionLatency between (0ms, 10ms]
    • latency="20" is TransactionExecutionLatency between (10ms, 20ms]
    • latency="50" is TransactionExecutionLatency between (20ms, 50ms]
    • latency="100" is TransactionExecutionLatency between (50ms, 100ms]
    • latency="500" is TransactionExecutionLatency between (100ms, 500ms]
    • latency="1000" is TransactionExecutionLatency between (500ms, 1000ms]
    • latency="5000" is TransactionExecutionLatency between (1s, 5s]
    • latency="15000" is TransactionExecutionLatency between (5s, 15s]
    • latency="30000" is TransactionExecutionLatency between (15s, 30s]
    • latency="60000" is TransactionExecutionLatency between (30s, 60s]
    • latency="300000" is TransactionExecutionLatency between (1m,5m]
    • latency="1500000" is TransactionExecutionLatency between (5m,15m]
    • latency="3000000" is TransactionExecutionLatency between (15m,30m]
    • latency="overflow" is TransactionExecutionLatency between (30m,∞]
    | diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/reference-pulsar-admin.md b/site2/website/versioned_docs/version-2.9.0-deprecated/reference-pulsar-admin.md deleted file mode 100644 index e306289a8798a5..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/reference-pulsar-admin.md +++ /dev/null @@ -1,3297 +0,0 @@ ---- -id: reference-pulsar-admin -title: Pulsar admin CLI -sidebar_label: "Pulsar Admin CLI" -original_id: reference-pulsar-admin ---- - -> **Important** -> -> This page is deprecated and not updated anymore. For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/) - -The `pulsar-admin` tool enables you to manage Pulsar installations, including clusters, brokers, namespaces, tenants, and more. - -Usage - -```bash - -$ pulsar-admin command - -``` - -Commands -* `broker-stats` -* `brokers` -* `clusters` -* `functions` -* `functions-worker` -* `namespaces` -* `ns-isolation-policy` -* `sources` - - For more information, see [here](io-cli.md#sources) -* `sinks` - - For more information, see [here](io-cli.md#sinks) -* `topics` -* `tenants` -* `resource-quotas` -* `schemas` - -## `broker-stats` - -Operations to collect broker statistics - -```bash - -$ pulsar-admin broker-stats subcommand - -``` - -Subcommands -* `allocator-stats` -* `topics(destinations)` -* `mbeans` -* `monitoring-metrics` -* `load-report` - - -### `allocator-stats` - -Dump allocator stats - -Usage - -```bash - -$ pulsar-admin broker-stats allocator-stats allocator-name - -``` - -### `topics(destinations)` - -Dump topic stats - -Usage - -```bash - -$ pulsar-admin broker-stats topics options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`-i`, `--indent`|Indent JSON output|false| - -### `mbeans` - -Dump Mbean stats - -Usage - -```bash - -$ pulsar-admin broker-stats mbeans options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`-i`, `--indent`|Indent JSON output|false| - - -### `monitoring-metrics` - -Dump metrics for monitoring - -Usage - -```bash - -$ pulsar-admin broker-stats monitoring-metrics options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`-i`, `--indent`|Indent JSON output|false| - - -### `load-report` - -Dump broker load-report - -Usage - -```bash - -$ pulsar-admin broker-stats load-report - -``` - -## `brokers` - -Operations about brokers - -```bash - -$ pulsar-admin brokers subcommand - -``` - -Subcommands -* `list` -* `namespaces` -* `update-dynamic-config` -* `list-dynamic-config` -* `get-all-dynamic-config` -* `get-internal-config` -* `get-runtime-config` -* `healthcheck` - -### `list` -List active brokers of the cluster - -Usage - -```bash - -$ pulsar-admin brokers list cluster-name - -``` - -### `leader-broker` -Get the information of the leader broker - -Usage - -```bash - -$ pulsar-admin brokers leader-broker - -``` - -### `namespaces` -List namespaces owned by the broker - -Usage - -```bash - -$ pulsar-admin brokers namespaces cluster-name options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--url`|The URL for the broker|| - - -### `update-dynamic-config` -Update a broker's dynamic service configuration - -Usage - -```bash - -$ pulsar-admin brokers update-dynamic-config options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--config`|Service configuration parameter name|| -|`--value`|Value for the configuration parameter value specified using 
the `--config` flag|| - - -### `list-dynamic-config` -Get list of updatable configuration name - -Usage - -```bash - -$ pulsar-admin brokers list-dynamic-config - -``` - -### `delete-dynamic-config` -Delete dynamic-serviceConfiguration of broker - -Usage - -```bash - -$ pulsar-admin brokers delete-dynamic-config options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--config`|Service configuration parameter name|| - - -### `get-all-dynamic-config` -Get all overridden dynamic-configuration values - -Usage - -```bash - -$ pulsar-admin brokers get-all-dynamic-config - -``` - -### `get-internal-config` -Get internal configuration information - -Usage - -```bash - -$ pulsar-admin brokers get-internal-config - -``` - -### `get-runtime-config` -Get runtime configuration values - -Usage - -```bash - -$ pulsar-admin brokers get-runtime-config - -``` - -### `healthcheck` -Run a health check against the broker - -Usage - -```bash - -$ pulsar-admin brokers healthcheck - -``` - -## `clusters` -Operations about clusters - -Usage - -```bash - -$ pulsar-admin clusters subcommand - -``` - -Subcommands -* `get` -* `create` -* `update` -* `delete` -* `list` -* `update-peer-clusters` -* `get-peer-clusters` -* `get-failure-domain` -* `create-failure-domain` -* `update-failure-domain` -* `delete-failure-domain` -* `list-failure-domains` - - -### `get` -Get the configuration data for the specified cluster - -Usage - -```bash - -$ pulsar-admin clusters get cluster-name - -``` - -### `create` -Provisions a new cluster. This operation requires Pulsar super-user privileges. - -Usage - -```bash - -$ pulsar-admin clusters create cluster-name options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--broker-url`|The URL for the broker service.|| -|`--broker-url-secure`|The broker service URL for a secure connection|| -|`--url`|service-url|| -|`--url-secure`|service-url for secure connection|| - - -### `update` -Update the configuration for a cluster - -Usage - -```bash - -$ pulsar-admin clusters update cluster-name options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--broker-url`|The URL for the broker service.|| -|`--broker-url-secure`|The broker service URL for a secure connection|| -|`--url`|service-url|| -|`--url-secure`|service-url for secure connection|| - - -### `delete` -Deletes an existing cluster - -Usage - -```bash - -$ pulsar-admin clusters delete cluster-name - -``` - -### `list` -List the existing clusters - -Usage - -```bash - -$ pulsar-admin clusters list - -``` - -### `update-peer-clusters` -Update peer cluster names - -Usage - -```bash - -$ pulsar-admin clusters update-peer-clusters cluster-name options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--peer-clusters`|Comma separated peer cluster names (Pass empty string "" to delete list)|| - -### `get-peer-clusters` -Get list of peer clusters - -Usage - -```bash - -$ pulsar-admin clusters get-peer-clusters - -``` - -### `get-failure-domain` -Get the configuration brokers of a failure domain - -Usage - -```bash - -$ pulsar-admin clusters get-failure-domain cluster-name options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster|| - -### `create-failure-domain` -Create a new failure domain for a cluster (updates it if already created) - -Usage - -```bash - -$ pulsar-admin clusters create-failure-domain cluster-name options - -``` - -Options -|Flag|Description|Default| -|---|---|---| 
-|`--broker-list`|Comma separated broker list|| -|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster|| - -### `update-failure-domain` -Update failure domain for a cluster (creates a new one if not exist) - -Usage - -```bash - -$ pulsar-admin clusters update-failure-domain cluster-name options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--broker-list`|Comma separated broker list|| -|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster|| - -### `delete-failure-domain` -Delete an existing failure domain - -Usage - -```bash - -$ pulsar-admin clusters delete-failure-domain cluster-name options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster|| - -### `list-failure-domains` -List the existing failure domains for a cluster - -Usage - -```bash - -$ pulsar-admin clusters list-failure-domains cluster-name - -``` - -## `functions` - -A command-line interface for Pulsar Functions - -Usage - -```bash - -$ pulsar-admin functions subcommand - -``` - -Subcommands -* `localrun` -* `create` -* `delete` -* `update` -* `get` -* `restart` -* `stop` -* `start` -* `status` -* `stats` -* `list` -* `querystate` -* `putstate` -* `trigger` - - -### `localrun` -Run the Pulsar Function locally (rather than deploying it to the Pulsar cluster) - - -Usage - -```bash - -$ pulsar-admin functions localrun options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--cpu`|The cpu in cores that need to be allocated per function instance(applicable only to docker runtime)|| -|`--ram`|The ram in bytes that need to be allocated per function instance(applicable only to process/docker runtime)|| -|`--disk`|The disk in bytes that need to be allocated per function instance(applicable only to docker runtime)|| -|`--auto-ack`|Whether or not the framework will automatically acknowledge messages|| -|`--subs-name`|Pulsar source subscription name if user wants a specific subscription-name for input-topic consumer|| -|`--broker-service-url `|The URL of the Pulsar broker|| -|`--classname`|The function's class name|| -|`--custom-serde-inputs`|The map of input topics to SerDe class names (as a JSON string)|| -|`--custom-schema-inputs`|The map of input topics to Schema class names (as a JSON string)|| -|`--client-auth-params`|Client authentication param|| -|`--client-auth-plugin`|Client authentication plugin using which function-process can connect to broker|| -|`--function-config-file`|The path to a YAML config file specifying the function's configuration|| -|`--hostname-verification-enabled`|Enable hostname verification|false| -|`--instance-id-offset`|Start the instanceIds from this offset|0| -|`--inputs`|The function's input topic or topics (multiple topics can be specified as a comma-separated list)|| -|`--log-topic`|The topic to which the function's logs are produced|| -|`--jar`|Path to the jar file for the function (if the function is written in Java). 
It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--output`|The function's output topic (If none is specified, no output is written)|| -|`--output-serde-classname`|The SerDe class to be used for messages output by the function|| -|`--parallelism`|The function’s parallelism factor, i.e. the number of instances of the function to run|1| -|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the function. Possible Values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]|ATLEAST_ONCE| -|`--py`|Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--go`|Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--schema-type`|The builtin schema type or custom schema class name to be used for messages output by the function|| -|`--sliding-interval-count`|The number of messages after which the window slides|| -|`--sliding-interval-duration-ms`|The time duration after which the window slides|| -|`--state-storage-service-url`|The URL for the state storage service. By default, it is set to the service URL of Apache BookKeeper. This service URL must be added manually when the Pulsar Function runs locally. || -|`--tenant`|The function’s tenant|| -|`--topics-pattern`|The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (supported for Java functions only)|| -|`--user-config`|User-defined config key/values|| -|`--window-length-count`|The number of messages per window|| -|`--window-length-duration-ms`|The time duration of the window in milliseconds|| -|`--dead-letter-topic`|The topic where all messages which could not be processed successfully are sent|| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--max-message-retries`|How many times should we try to process a message before giving up|| -|`--retain-ordering`|Function consumes and processes messages in order|| -|`--retain-key-ordering`|Function consumes and processes messages in key order|| -|`--timeout-ms`|The message timeout in milliseconds|| -|`--tls-allow-insecure`|Allow insecure tls connection|false| -|`--tls-trust-cert-path`|The tls trust cert file path|| -|`--use-tls`|Use tls connection|false| -|`--producer-config`| The custom producer configuration (as a JSON string) | | - - -### `create` -Create a Pulsar Function in cluster mode (i.e. 
deploy it on a Pulsar cluster) - -Usage - -``` - -$ pulsar-admin functions create options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--cpu`|The cpu in cores that need to be allocated per function instance(applicable only to docker runtime)|| -|`--ram`|The ram in bytes that need to be allocated per function instance(applicable only to process/docker runtime)|| -|`--disk`|The disk in bytes that need to be allocated per function instance(applicable only to docker runtime)|| -|`--auto-ack`|Whether or not the framework will automatically acknowledge messages|| -|`--subs-name`|Pulsar source subscription name if user wants a specific subscription-name for input-topic consumer|| -|`--classname`|The function's class name|| -|`--custom-serde-inputs`|The map of input topics to SerDe class names (as a JSON string)|| -|`--custom-schema-inputs`|The map of input topics to Schema class names (as a JSON string)|| -|`--function-config-file`|The path to a YAML config file specifying the function's configuration|| -|`--inputs`|The function's input topic or topics (multiple topics can be specified as a comma-separated list)|| -|`--log-topic`|The topic to which the function's logs are produced|| -|`--jar`|Path to the jar file for the function (if the function is written in Java). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--name`|The function's name|| -|`--namespace`|The function’s namespace|| -|`--output`|The function's output topic (If none is specified, no output is written)|| -|`--output-serde-classname`|The SerDe class to be used for messages output by the function|| -|`--parallelism`|The function’s parallelism factor, i.e. the number of instances of the function to run|1| -|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the function. Possible Values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]|ATLEAST_ONCE| -|`--py`|Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--go`|Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--schema-type`|The builtin schema type or custom schema class name to be used for messages output by the function|| -|`--sliding-interval-count`|The number of messages after which the window slides|| -|`--sliding-interval-duration-ms`|The time duration after which the window slides|| -|`--tenant`|The function’s tenant|| -|`--topics-pattern`|The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. 
Add SerDe class name for a pattern in --custom-serde-inputs (supported for Java functions only)|| -|`--user-config`|User-defined config key/values|| -|`--window-length-count`|The number of messages per window|| -|`--window-length-duration-ms`|The time duration of the window in milliseconds|| -|`--dead-letter-topic`|The topic where all messages which could not be processed successfully are sent|| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--max-message-retries`|How many times should we try to process a message before giving up|| -|`--retain-ordering`|Function consumes and processes messages in order|| -|`--retain-key-ordering`|Function consumes and processes messages in key order|| -|`--timeout-ms`|The message timeout in milliseconds|| -|`--producer-config`| The custom producer configuration (as a JSON string) | | - - -### `delete` -Delete a Pulsar Function that's running on a Pulsar cluster - -Usage - -```bash - -$ pulsar-admin functions delete options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `update` -Update a Pulsar Function that's been deployed to a Pulsar cluster - -Usage - -```bash - -$ pulsar-admin functions update options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--cpu`|The cpu in cores that need to be allocated per function instance (applicable only to docker runtime)|| -|`--ram`|The ram in bytes that need to be allocated per function instance (applicable only to process/docker runtime)|| -|`--disk`|The disk in bytes that need to be allocated per function instance (applicable only to docker runtime)|| -|`--auto-ack`|Whether or not the framework will automatically acknowledge messages|| -|`--subs-name`|Pulsar source subscription name if the user wants a specific subscription name for the input-topic consumer|| -|`--classname`|The function's class name|| -|`--custom-serde-inputs`|The map of input topics to SerDe class names (as a JSON string)|| -|`--custom-schema-inputs`|The map of input topics to Schema class names (as a JSON string)|| -|`--function-config-file`|The path to a YAML config file specifying the function's configuration|| -|`--inputs`|The function's input topic or topics (multiple topics can be specified as a comma-separated list)|| -|`--log-topic`|The topic to which the function's logs are produced|| -|`--jar`|Path to the jar file for the function (if the function is written in Java). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--name`|The function's name|| -|`--namespace`|The function’s namespace|| -|`--output`|The function's output topic (If none is specified, no output is written)|| -|`--output-serde-classname`|The SerDe class to be used for messages output by the function|| -|`--parallelism`|The function’s parallelism factor, i.e. the number of instances of the function to run|1| -|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the function. Possible Values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]|ATLEAST_ONCE| -|`--py`|Path to the main Python file/Python Wheel file for the function (if the function is written in Python). 
It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--go`|Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--schema-type`|The builtin schema type or custom schema class name to be used for messages output by the function|| -|`--sliding-interval-count`|The number of messages after which the window slides|| -|`--sliding-interval-duration-ms`|The time duration after which the window slides|| -|`--tenant`|The function’s tenant|| -|`--topics-pattern`|The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (supported for Java functions only)|| -|`--user-config`|User-defined config key/values|| -|`--window-length-count`|The number of messages per window|| -|`--window-length-duration-ms`|The time duration of the window in milliseconds|| -|`--dead-letter-topic`|The topic where all messages which could not be processed successfully are sent|| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--max-message-retries`|How many times should we try to process a message before giving up|| -|`--retain-ordering`|Function consumes and processes messages in order|| -|`--retain-key-ordering`|Function consumes and processes messages in key order|| -|`--timeout-ms`|The message timeout in milliseconds|| -|`--producer-config`| The custom producer configuration (as a JSON string) | | - - -### `get` -Fetch information about a Pulsar Function - -Usage - -```bash - -$ pulsar-admin functions get options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `restart` -Restart a function instance - -Usage - -```bash - -$ pulsar-admin functions restart options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--instance-id`|The function instanceId (restart all instances if instance-id is not provided)|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `stop` -Stop a function instance - -Usage - -```bash - -$ pulsar-admin functions stop options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--instance-id`|The function instanceId (stop all instances if instance-id is not provided)|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `start` -Start a stopped function instance - -Usage - -```bash - -$ pulsar-admin functions start options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--instance-id`|The function instanceId (start all instances if instance-id is not provided)|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's 
tenant|| - - -### `status` -Check the current status of a Pulsar Function - -Usage - -```bash - -$ pulsar-admin functions status options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--instance-id`|The function instanceId (Get-status of all instances if instance-id is not provided)|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `stats` -Get the current stats of a Pulsar Function - -Usage - -```bash - -$ pulsar-admin functions stats options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--instance-id`|The function instanceId (Get-stats of all instances if instance-id is not provided)|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - -### `list` -List all of the Pulsar Functions running under a specific tenant and namespace - -Usage - -```bash - -$ pulsar-admin functions list options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `querystate` -Fetch the current state associated with a Pulsar Function running in cluster mode - -Usage - -```bash - -$ pulsar-admin functions querystate options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`-k`, `--key`|The key for the state you want to fetch|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| -|`-w`, `--watch`|Watch for changes in the value associated with a key for a Pulsar Function|false| - -### `putstate` -Put a key/value pair to the state associated with a Pulsar Function - -Usage - -```bash - -$ pulsar-admin functions putstate options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the Pulsar Function|| -|`--name`|The name of a Pulsar Function|| -|`--namespace`|The namespace of a Pulsar Function|| -|`--tenant`|The tenant of a Pulsar Function|| -|`-s`, `--state`|The FunctionState that needs to be put|| - -### `trigger` -Triggers the specified Pulsar Function with a supplied value - -Usage - -```bash - -$ pulsar-admin functions trigger options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| -|`--topic`|The specific topic name that the function consumes from that you want to inject the data to|| -|`--trigger-file`|The path to the file that contains the data with which you'd like to trigger the function|| -|`--trigger-value`|The value with which you want to trigger the function|| - - -## `functions-worker` -Operations to collect function-worker statistics - -```bash - -$ pulsar-admin functions-worker subcommand - -``` - -Subcommands - -* `function-stats` -* `get-cluster` -* `get-cluster-leader` -* `get-function-assignments` -* `monitoring-metrics` - -### `function-stats` - -Dump all functions stats running on this broker - -Usage - -```bash - -$ pulsar-admin functions-worker function-stats - -``` - -### `get-cluster` - -Get all workers belonging to this cluster - -Usage - -```bash - -$ pulsar-admin functions-worker 
get-cluster - -``` - -### `get-cluster-leader` - -Get the leader of the worker cluster - -Usage - -```bash - -$ pulsar-admin functions-worker get-cluster-leader - -``` - -### `get-function-assignments` - -Get the assignments of the functions across the worker cluster - -Usage - -```bash - -$ pulsar-admin functions-worker get-function-assignments - -``` - -### `monitoring-metrics` - -Dump metrics for Monitoring - -Usage - -```bash - -$ pulsar-admin functions-worker monitoring-metrics - -``` - -## `namespaces` - -Operations for managing namespaces - -```bash - -$ pulsar-admin namespaces subcommand - -``` - -Subcommands -* `list` -* `topics` -* `policies` -* `create` -* `delete` -* `set-deduplication` -* `set-auto-topic-creation` -* `remove-auto-topic-creation` -* `set-auto-subscription-creation` -* `remove-auto-subscription-creation` -* `permissions` -* `grant-permission` -* `revoke-permission` -* `grant-subscription-permission` -* `revoke-subscription-permission` -* `set-clusters` -* `get-clusters` -* `get-backlog-quotas` -* `set-backlog-quota` -* `remove-backlog-quota` -* `get-persistence` -* `set-persistence` -* `get-message-ttl` -* `set-message-ttl` -* `remove-message-ttl` -* `get-anti-affinity-group` -* `set-anti-affinity-group` -* `get-anti-affinity-namespaces` -* `delete-anti-affinity-group` -* `get-retention` -* `set-retention` -* `unload` -* `split-bundle` -* `set-dispatch-rate` -* `get-dispatch-rate` -* `set-replicator-dispatch-rate` -* `get-replicator-dispatch-rate` -* `set-subscribe-rate` -* `get-subscribe-rate` -* `set-subscription-dispatch-rate` -* `get-subscription-dispatch-rate` -* `clear-backlog` -* `unsubscribe` -* `set-encryption-required` -* `set-delayed-delivery` -* `get-delayed-delivery` -* `set-subscription-auth-mode` -* `get-max-producers-per-topic` -* `set-max-producers-per-topic` -* `get-max-consumers-per-topic` -* `set-max-consumers-per-topic` -* `get-max-consumers-per-subscription` -* `set-max-consumers-per-subscription` -* `get-max-unacked-messages-per-subscription` -* `set-max-unacked-messages-per-subscription` -* `get-max-unacked-messages-per-consumer` -* `set-max-unacked-messages-per-consumer` -* `get-compaction-threshold` -* `set-compaction-threshold` -* `get-offload-threshold` -* `set-offload-threshold` -* `get-offload-deletion-lag` -* `set-offload-deletion-lag` -* `clear-offload-deletion-lag` -* `get-schema-autoupdate-strategy` -* `set-schema-autoupdate-strategy` -* `set-offload-policies` -* `get-offload-policies` -* `set-max-subscriptions-per-topic` -* `get-max-subscriptions-per-topic` -* `remove-max-subscriptions-per-topic` - - -### `list` -Get the namespaces for a tenant - -Usage - -```bash - -$ pulsar-admin namespaces list tenant-name - -``` - -### `topics` -Get the list of topics for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces topics tenant/namespace - -``` - -### `policies` -Get the configuration policies of a namespace - -Usage - -```bash - -$ pulsar-admin namespaces policies tenant/namespace - -``` - -### `create` -Create a new namespace - -Usage - -```bash - -$ pulsar-admin namespaces create tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`-b`, `--bundles`|The number of bundles to activate|0| -|`-c`, `--clusters`|List of clusters this namespace will be assigned|| - - -### `delete` -Deletes a namespace. 
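As a quick sketch (the `my-tenant/my-ns` namespace and `my-topic` topic below are placeholders), you would typically remove any remaining topics before deleting the namespace itself:

```bash

# Drain the namespace first, then delete it
$ pulsar-admin topics delete persistent://my-tenant/my-ns/my-topic
$ pulsar-admin namespaces delete my-tenant/my-ns

```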
The namespace needs to be empty - -Usage - -```bash - -$ pulsar-admin namespaces delete tenant/namespace - -``` - -### `set-deduplication` -Enable or disable message deduplication on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-deduplication tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--enable`, `-e`|Enable message deduplication on the specified namespace|false| -|`--disable`, `-d`|Disable message deduplication on the specified namespace|false| - -### `set-auto-topic-creation` -Enable or disable autoTopicCreation for a namespace, overriding broker settings - -Usage - -```bash - -$ pulsar-admin namespaces set-auto-topic-creation tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--enable`, `-e`|Enable allowAutoTopicCreation on namespace|false| -|`--disable`, `-d`|Disable allowAutoTopicCreation on namespace|false| -|`--type`, `-t`|Type of topic to be auto-created. Possible values: (partitioned, non-partitioned)|non-partitioned| -|`--num-partitions`, `-n`|Default number of partitions of topic to be auto-created, applicable to partitioned topics only|| - -### `remove-auto-topic-creation` -Remove override of autoTopicCreation for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces remove-auto-topic-creation tenant/namespace - -``` - -### `set-auto-subscription-creation` -Enable autoSubscriptionCreation for a namespace, overriding broker settings - -Usage - -```bash - -$ pulsar-admin namespaces set-auto-subscription-creation tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--enable`, `-e`|Enable allowAutoSubscriptionCreation on namespace|false| - -### `remove-auto-subscription-creation` -Remove override of autoSubscriptionCreation for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces remove-auto-subscription-creation tenant/namespace - -``` - -### `permissions` -Get the permissions on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces permissions tenant/namespace - -``` - -### `grant-permission` -Grant permissions on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces grant-permission tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--actions`|Actions to be granted (`produce` or `consume`)|| -|`--role`|The client role to which to grant the permissions|| - - -### `revoke-permission` -Revoke permissions on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces revoke-permission tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--role`|The client role to which to revoke the permissions|| - -### `grant-subscription-permission` -Grant permissions to access subscription admin-api - -Usage - -```bash - -$ pulsar-admin namespaces grant-subscription-permission tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--roles`|The client roles to which to grant the permissions (comma separated roles)|| -|`--subscription`|The subscription name for which permission will be granted to roles|| - -### `revoke-subscription-permission` -Revoke permissions to access subscription admin-api - -Usage - -```bash - -$ pulsar-admin namespaces revoke-subscription-permission tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--role`|The client role to which to revoke the permissions|| -|`--subscription`|The subscription name for which permission will be revoked to roles|| - -### 
`set-clusters` -Set replication clusters for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-clusters tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`-c`, `--clusters`|Replication clusters ID list (comma-separated values)|| - - -### `get-clusters` -Get replication clusters for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-clusters tenant/namespace - -``` - -### `get-backlog-quotas` -Get the backlog quota policies for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-backlog-quotas tenant/namespace - -``` - -### `set-backlog-quota` -Set a backlog quota policy for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-backlog-quota tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-l`, `--limit`|The backlog size limit (for example `10M` or `16G`)|| -|`-lt`, `--limitTime`|Time limit in second, non-positive number for disabling time limit. (for example 3600 for 1 hour)|| -|`-p`, `--policy`|The retention policy to enforce when the limit is reached. The valid options are: `producer_request_hold`, `producer_exception` or `consumer_backlog_eviction`| -|`-t`, `--type`|Backlog quota type to set. The valid options are: `destination_storage`, `message_age` |destination_storage| - -Example - -```bash - -$ pulsar-admin namespaces set-backlog-quota my-tenant/my-ns \ ---limit 2G \ ---policy producer_request_hold - -``` - -```bash - -$ pulsar-admin namespaces set-backlog-quota my-tenant/my-ns \ ---limitTime 3600 \ ---policy producer_request_hold \ ---type message_age - -``` - -### `remove-backlog-quota` -Remove a backlog quota policy from a namespace - -|Flag|Description|Default| -|---|---|---| -|`-t`, `--type`|Backlog quota type to remove. The valid options are: `destination_storage`, `message_age` |destination_storage| - -Usage - -```bash - -$ pulsar-admin namespaces remove-backlog-quota tenant/namespace - -``` - -### `get-persistence` -Get the persistence policies for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-persistence tenant/namespace - -``` - -### `set-persistence` -Set the persistence policies for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-persistence tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-a`, `--bookkeeper-ack-quorum`|The number of acks (guaranteed copies) to wait for each entry|0| -|`-e`, `--bookkeeper-ensemble`|The number of bookies to use for a topic|0| -|`-w`, `--bookkeeper-write-quorum`|How many writes to make of each entry|0| -|`-r`, `--ml-mark-delete-max-rate`|Throttling rate of mark-delete operation (0 means no throttle)|| - - -### `get-message-ttl` -Get the message TTL for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-message-ttl tenant/namespace - -``` - -### `set-message-ttl` -Set the message TTL for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-message-ttl tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-ttl`, `--messageTTL`|Message TTL in seconds. When the value is set to `0`, TTL is disabled. TTL is disabled by default. |0| - -### `remove-message-ttl` -Remove the message TTL for a namespace. 
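For example, a sketch of setting a 2-minute TTL on a placeholder `my-tenant/my-ns` namespace and later removing it again, so that the broker-level default applies:

```bash

# Set a TTL of 120 seconds, then remove the namespace-level override
$ pulsar-admin namespaces set-message-ttl my-tenant/my-ns --messageTTL 120
$ pulsar-admin namespaces remove-message-ttl my-tenant/my-ns

```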
- -Usage - -```bash - -$ pulsar-admin namespaces remove-message-ttl tenant/namespace - -``` - -### `get-anti-affinity-group` -Get Anti-affinity group name for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-anti-affinity-group tenant/namespace - -``` - -### `set-anti-affinity-group` -Set Anti-affinity group name for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-anti-affinity-group tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-g`, `--group`|Anti-affinity group name|| - -### `get-anti-affinity-namespaces` -Get Anti-affinity namespaces grouped with the given anti-affinity group name - -Usage - -```bash - -$ pulsar-admin namespaces get-anti-affinity-namespaces options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-c`, `--cluster`|Cluster name|| -|`-g`, `--group`|Anti-affinity group name|| -|`-p`, `--tenant`|Tenant is only used for authorization. Client has to be admin of any of the tenant to access this api|| - -### `delete-anti-affinity-group` -Remove Anti-affinity group name for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces delete-anti-affinity-group tenant/namespace - -``` - -### `get-retention` -Get the retention policy that is applied to each topic within the specified namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-retention tenant/namespace - -``` - -### `set-retention` -Set the retention policy for each topic within the specified namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-retention tenant/namespace - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-s`, `--size`|The retention size limits (for example 10M, 16G or 3T) for each topic in the namespace. 0 means no retention and -1 means infinite size retention|| -|`-t`, `--time`|The retention time in minutes, hours, days, or weeks. Examples: 100m, 13h, 2d, 5w. 0 means no retention and -1 means infinite time retention|| - - -### `unload` -Unload a namespace or namespace bundle from the current serving broker. - -Usage - -```bash - -$ pulsar-admin namespaces unload tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-b`, `--bundle`|{start-boundary}_{end-boundary} (e.g. 0x00000000_0xffffffff)|| - -### `split-bundle` -Split a namespace-bundle from the current serving broker - -Usage - -```bash - -$ pulsar-admin namespaces split-bundle tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-b`, `--bundle`|{start-boundary}_{end-boundary} (e.g. 
0x00000000_0xffffffff)|| -|`-u`, `--unload`|Unload newly split bundles after splitting the old bundle|false| - -### `set-dispatch-rate` -Set message-dispatch-rate for all topics of the namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-dispatch-rate tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-bd`, `--byte-dispatch-rate`|The byte dispatch rate (the default -1 applies if not passed)|-1| -|`-dt`, `--dispatch-rate-period`|The dispatch rate period, in seconds (the default 1 second applies if not passed)|1| -|`-md`, `--msg-dispatch-rate`|The message dispatch rate (the default -1 applies if not passed)|-1| - -### `get-dispatch-rate` -Get configured message-dispatch-rate for all topics of the namespace (Disabled if value < 0) - -Usage - -```bash - -$ pulsar-admin namespaces get-dispatch-rate tenant/namespace - -``` - -### `set-replicator-dispatch-rate` -Set replicator message-dispatch-rate for all topics of the namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-replicator-dispatch-rate tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-bd`, `--byte-dispatch-rate`|The byte dispatch rate (the default -1 applies if not passed)|-1| -|`-dt`, `--dispatch-rate-period`|The dispatch rate period, in seconds (the default 1 second applies if not passed)|1| -|`-md`, `--msg-dispatch-rate`|The message dispatch rate (the default -1 applies if not passed)|-1| - -### `get-replicator-dispatch-rate` -Get replicator configured message-dispatch-rate for all topics of the namespace (Disabled if value < 0) - -Usage - -```bash - -$ pulsar-admin namespaces get-replicator-dispatch-rate tenant/namespace - -``` - -### `set-subscribe-rate` -Set subscribe-rate per consumer for all topics of the namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-subscribe-rate tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-sr`, `--subscribe-rate`|The subscribe rate (the default -1 applies if not passed)|-1| -|`-st`, `--subscribe-rate-period`|The subscribe rate period, in seconds (the default 30 seconds applies if not passed)|30| - -### `get-subscribe-rate` -Get configured subscribe-rate per consumer for all topics of the namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-subscribe-rate tenant/namespace - -``` - -### `set-subscription-dispatch-rate` -Set subscription message-dispatch-rate for all subscriptions of the namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-subscription-dispatch-rate tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-bd`, `--byte-dispatch-rate`|The byte dispatch rate (the default -1 applies if not passed)|-1| -|`-dt`, `--dispatch-rate-period`|The dispatch rate period, in seconds (the default 1 second applies if not passed)|1| -|`-md`, `--sub-msg-dispatch-rate`|The message dispatch rate (the default -1 applies if not passed)|-1| - -### `get-subscription-dispatch-rate` -Get subscription configured message-dispatch-rate for all topics of the namespace (Disabled if value < 0) - -Usage - -```bash - -$ pulsar-admin namespaces get-subscription-dispatch-rate tenant/namespace - -``` - -### `clear-backlog` -Clear the backlog for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces clear-backlog tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-b`, 
`--bundle`|{start-boundary}_{end-boundary} (e.g. 0x00000000_0xffffffff)|| -|`-force`, `--force`|Whether to force a clear backlog without prompt|false| -|`-s`, `--sub`|The subscription name|| - - -### `unsubscribe` -Unsubscribe the given subscription on all destinations on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces unsubscribe tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-b`, `--bundle`|{start-boundary}_{end-boundary} (e.g. 0x00000000_0xffffffff)|| -|`-s`, `--sub`|The subscription name|| - -### `set-encryption-required` -Enable or disable message encryption required for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-encryption-required tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-d`, `--disable`|Disable message encryption required|false| -|`-e`, `--enable`|Enable message encryption required|false| - -### `set-delayed-delivery` -Set the delayed delivery policy on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-delayed-delivery tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-d`, `--disable`|Disable delayed delivery messages|false| -|`-e`, `--enable`|Enable delayed delivery messages|false| -|`-t`, `--time`|The tick time for when retrying on delayed delivery messages|1s| - - -### `get-delayed-delivery` -Get the delayed delivery policy on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-delayed-delivery-time tenant/namespace - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-t`, `--time`|The tick time for when retrying on delayed delivery messages|1s| - - -### `set-subscription-auth-mode` -Set subscription auth mode on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-subscription-auth-mode tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-m`, `--subscription-auth-mode`|Subscription authorization mode for Pulsar policies. 
Valid options are: [None, Prefix]|| - -### `get-max-producers-per-topic` -Get maxProducersPerTopic for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-max-producers-per-topic tenant/namespace - -``` - -### `set-max-producers-per-topic` -Set maxProducersPerTopic for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-max-producers-per-topic tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-p`, `--max-producers-per-topic`|maxProducersPerTopic for a namespace|0| - -### `get-max-consumers-per-topic` -Get maxConsumersPerTopic for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-max-consumers-per-topic tenant/namespace - -``` - -### `set-max-consumers-per-topic` -Set maxConsumersPerTopic for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-max-consumers-per-topic tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-c`, `--max-consumers-per-topic`|maxConsumersPerTopic for a namespace|0| - -### `get-max-consumers-per-subscription` -Get maxConsumersPerSubscription for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-max-consumers-per-subscription tenant/namespace - -``` - -### `set-max-consumers-per-subscription` -Set maxConsumersPerSubscription for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-max-consumers-per-subscription tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-c`, `--max-consumers-per-subscription`|maxConsumersPerSubscription for a namespace|0| - -### `get-max-unacked-messages-per-subscription` -Get maxUnackedMessagesPerSubscription for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-max-unacked-messages-per-subscription tenant/namespace - -``` - -### `set-max-unacked-messages-per-subscription` -Set maxUnackedMessagesPerSubscription for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-max-unacked-messages-per-subscription tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-c`, `--max-unacked-messages-per-subscription`|maxUnackedMessagesPerSubscription for a namespace|-1| - -### `get-max-unacked-messages-per-consumer` -Get maxUnackedMessagesPerConsumer for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-max-unacked-messages-per-consumer tenant/namespace - -``` - -### `set-max-unacked-messages-per-consumer` -Set maxUnackedMessagesPerConsumer for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-max-unacked-messages-per-consumer tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-c`, `--max-unacked-messages-per-consumer`|maxUnackedMessagesPerConsumer for a namespace|-1| - - -### `get-compaction-threshold` -Get compactionThreshold for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-compaction-threshold tenant/namespace - -``` - -### `set-compaction-threshold` -Set compactionThreshold for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-compaction-threshold tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-t`, `--threshold`|Maximum number of bytes in a topic backlog before compaction is triggered (eg: 10M, 16G, 3T). 
0 disables automatic compaction|0| - - -### `get-offload-threshold` -Get offloadThreshold for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-offload-threshold tenant/namespace - -``` - -### `set-offload-threshold` -Set offloadThreshold for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-offload-threshold tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-s`, `--size`|Maximum number of bytes stored in the pulsar cluster for a topic before data will start being automatically offloaded to longterm storage (eg: 10M, 16G, 3T, 100). Negative values disable automatic offload. 0 triggers offloading as soon as possible.|-1| - -### `get-offload-deletion-lag` -Get offloadDeletionLag, in minutes, for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-offload-deletion-lag tenant/namespace - -``` - -### `set-offload-deletion-lag` -Set offloadDeletionLag for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-offload-deletion-lag tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-l`, `--lag`|Duration to wait after offloading a ledger segment, before deleting the copy of that segment from cluster local storage. (eg: 10m, 5h, 3d, 2w).|-1| - -### `clear-offload-deletion-lag` -Clear offloadDeletionLag for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces clear-offload-deletion-lag tenant/namespace - -``` - -### `get-schema-autoupdate-strategy` -Get the schema auto-update strategy for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-schema-autoupdate-strategy tenant/namespace - -``` - -### `set-schema-autoupdate-strategy` -Set the schema auto-update strategy for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-schema-autoupdate-strategy tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-c`, `--compatibility`|Compatibility level required for new schemas created via a Producer. Possible values (Full, Backward, Forward, None).|Full| -|`-d`, `--disabled`|Disable automatic schema updates.|false| - -### `get-publish-rate` -Get the message publish rate for each topic in a namespace, in bytes as well as messages per second - -Usage - -```bash - -$ pulsar-admin namespaces get-publish-rate tenant/namespace - -``` - -### `set-publish-rate` -Set the message publish rate for each topic in a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-publish-rate tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-m`, `--msg-publish-rate`|Threshold for number of messages per second per topic in the namespace (-1 implies not set, 0 for no limit).|-1| -|`-b`, `--byte-publish-rate`|Threshold for number of bytes per second per topic in the namespace (-1 implies not set, 0 for no limit).|-1| - -### `set-offload-policies` -Set the offload policy for a namespace. 
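As an illustrative sketch (the namespace, region, and bucket names here are placeholders), offloading to S3 once a topic's backlog exceeds 10M might be configured as follows, using the flags documented below:

```bash

# Offload data to an S3 bucket once 10M of backlog accumulates
$ pulsar-admin namespaces set-offload-policies my-tenant/my-ns \
--driver aws-s3 \
--region us-west-2 \
--bucket my-offload-bucket \
--offloadAfterThreshold 10M

```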
- -Usage - -```bash - -$ pulsar-admin namespaces set-offload-policies tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-d`, `--driver`|Driver to use to offload old data to long term storage,(Possible values: S3, aws-s3, google-cloud-storage)|| -|`-r`, `--region`|The long term storage region|| -|`-b`, `--bucket`|Bucket to place offloaded ledger into|| -|`-e`, `--endpoint`|Alternative endpoint to connect to|| -|`-i`, `--aws-id`|AWS Credential Id to use when using driver S3 or aws-s3|| -|`-s`, `--aws-secret`|AWS Credential Secret to use when using driver S3 or aws-s3|| -|`-ro`, `--s3-role`|S3 Role used for STSAssumeRoleSessionCredentialsProvider using driver S3 or aws-s3|| -|`-rsn`, `--s3-role-session-name`|S3 role session name used for STSAssumeRoleSessionCredentialsProvider using driver S3 or aws-s3|| -|`-mbs`, `--maxBlockSize`|Max block size|64MB| -|`-rbs`, `--readBufferSize`|Read buffer size|1MB| -|`-oat`, `--offloadAfterThreshold`|Offload after threshold size (eg: 1M, 5M)|| -|`-oae`, `--offloadAfterElapsed`|Offload after elapsed in millis (or minutes, hours,days,weeks eg: 100m, 3h, 2d, 5w).|| - -### `get-offload-policies` -Get the offload policy for a namespace. - -Usage - -```bash - -$ pulsar-admin namespaces get-offload-policies tenant/namespace - -``` - -### `set-max-subscriptions-per-topic` -Set the maximum subscription per topic for a namespace. - -Usage - -```bash - -$ pulsar-admin namespaces set-max-subscriptions-per-topic tenant/namespace - -``` - -### `get-max-subscriptions-per-topic` -Get the maximum subscription per topic for a namespace. - -Usage - -```bash - -$ pulsar-admin namespaces get-max-subscriptions-per-topic tenant/namespace - -``` - -### `remove-max-subscriptions-per-topic` -Remove the maximum subscription per topic for a namespace. - -Usage - -```bash - -$ pulsar-admin namespaces remove-max-subscriptions-per-topic tenant/namespace - -``` - -## `ns-isolation-policy` -Operations for managing namespace isolation policies. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy subcommand - -``` - -Subcommands -* `set` -* `get` -* `list` -* `delete` -* `brokers` -* `broker` - -### `set` -Create/update a namespace isolation policy for a cluster. This operation requires Pulsar superuser privileges. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy set cluster-name policy-name options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`--auto-failover-policy-params`|Comma-separated name=value auto failover policy parameters|[]| -|`--auto-failover-policy-type`|Auto failover policy type name. Currently available options: min_available.|[]| -|`--namespaces`|Comma-separated namespaces regex list|[]| -|`--primary`|Comma-separated primary broker regex list|[]| -|`--secondary`|Comma-separated secondary broker regex list|[]| - - -### `get` -Get the namespace isolation policy of a cluster. This operation requires Pulsar superuser privileges. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy get cluster-name policy-name - -``` - -### `list` -List all namespace isolation policies of a cluster. This operation requires Pulsar superuser privileges. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy list cluster-name - -``` - -### `delete` -Delete namespace isolation policy of a cluster. This operation requires superuser privileges. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy delete - -``` - -### `brokers` -List all brokers with namespace-isolation policies attached to it. 
This operation requires Pulsar super-user privileges. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy brokers cluster-name - -``` - -### `broker` -Get broker with namespace-isolation policies attached to it. This operation requires Pulsar super-user privileges. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy broker cluster-name options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`--broker`|Broker name to get namespace-isolation policies attached to it|| - -## `topics` -Operations for managing Pulsar topics (both persistent and non-persistent). - -Usage - -```bash - -$ pulsar-admin topics subcommand - -``` - -From Pulsar 2.7.0, some namespace-level policies are available on topic level. To enable topic-level policy in Pulsar, you need to configure the following parameters in the `broker.conf` file. - -```shell - -systemTopicEnabled=true -topicLevelPoliciesEnabled=true - -``` - -Subcommands -* `compact` -* `compaction-status` -* `offload` -* `offload-status` -* `create-partitioned-topic` -* `create-missed-partitions` -* `delete-partitioned-topic` -* `create` -* `get-partitioned-topic-metadata` -* `update-partitioned-topic` -* `list-partitioned-topics` -* `list` -* `terminate` -* `permissions` -* `grant-permission` -* `revoke-permission` -* `lookup` -* `bundle-range` -* `delete` -* `unload` -* `create-subscription` -* `subscriptions` -* `unsubscribe` -* `stats` -* `stats-internal` -* `info-internal` -* `partitioned-stats` -* `partitioned-stats-internal` -* `skip` -* `clear-backlog` -* `expire-messages` -* `expire-messages-all-subscriptions` -* `peek-messages` -* `reset-cursor` -* `get-message-by-id` -* `last-message-id` -* `get-backlog-quotas` -* `set-backlog-quota` -* `remove-backlog-quota` -* `get-persistence` -* `set-persistence` -* `remove-persistence` -* `get-message-ttl` -* `set-message-ttl` -* `remove-message-ttl` -* `get-deduplication` -* `set-deduplication` -* `remove-deduplication` -* `get-retention` -* `set-retention` -* `remove-retention` -* `get-dispatch-rate` -* `set-dispatch-rate` -* `remove-dispatch-rate` -* `get-max-unacked-messages-per-subscription` -* `set-max-unacked-messages-per-subscription` -* `remove-max-unacked-messages-per-subscription` -* `get-max-unacked-messages-per-consumer` -* `set-max-unacked-messages-per-consumer` -* `remove-max-unacked-messages-per-consumer` -* `get-delayed-delivery` -* `set-delayed-delivery` -* `remove-delayed-delivery` -* `get-max-producers` -* `set-max-producers` -* `remove-max-producers` -* `get-max-consumers` -* `set-max-consumers` -* `remove-max-consumers` -* `get-compaction-threshold` -* `set-compaction-threshold` -* `remove-compaction-threshold` -* `get-offload-policies` -* `set-offload-policies` -* `remove-offload-policies` -* `get-inactive-topic-policies` -* `set-inactive-topic-policies` -* `remove-inactive-topic-policies` -* `set-max-subscriptions` -* `get-max-subscriptions` -* `remove-max-subscriptions` - -### `compact` -Run compaction on the specified topic (persistent topics only) - -Usage - -``` - -$ pulsar-admin topics compact persistent://tenant/namespace/topic - -``` - -### `compaction-status` -Check the status of a topic compaction (persistent topics only) - -Usage - -```bash - -$ pulsar-admin topics compaction-status persistent://tenant/namespace/topic - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-w`, `--wait-complete`|Wait for compaction to complete|false| - - -### `offload` -Trigger offload of data from a topic to long-term storage (e.g. 
        Amazon S3)
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics offload persistent://tenant/namespace/topic options
        -
        -```
        -
        -Options
        -|Flag|Description|Default|
        -|---|---|---|
        -|`-s`, `--size-threshold`|The maximum amount of data to keep in BookKeeper for the specific topic||
        -
        -
        -### `offload-status`
        -Check the status of data offloading from a topic to long-term storage
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics offload-status persistent://tenant/namespace/topic options
        -
        -```
        -
        -Options
        -|Flag|Description|Default|
        -|---|---|---|
        -|`-w`, `--wait-complete`|Wait for offloading to complete|false|
        -
        -
        -### `create-partitioned-topic`
        -Create a partitioned topic. A partitioned topic must be created before producers can publish to it.
        -
        -:::note
        -
        -By default, 60 seconds after creation, topics are considered inactive and deleted automatically to prevent generating trash data.
        -To disable this feature, set `brokerDeleteInactiveTopicsEnabled` to `false`.
        -To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to your desired value.
        -For more information about these two parameters, see [here](reference-configuration.md#broker).
        -
        -:::
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics create-partitioned-topic {persistent|non-persistent}://tenant/namespace/topic options
        -
        -```
        -
        -Options
        -|Flag|Description|Default|
        -|---|---|---|
        -|`-p`, `--partitions`|The number of partitions for the topic|0|
        -
        -### `create-missed-partitions`
        -Try to create missing partitions for a partitioned topic. This can be used to repair a partitioned topic whose partitions were never created, for example when topic auto-creation is disabled.
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics create-missed-partitions persistent://tenant/namespace/topic
        -
        -```
        -
        -### `delete-partitioned-topic`
        -Delete a partitioned topic. This will also delete all the partitions of the topic if they exist.
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics delete-partitioned-topic {persistent|non-persistent}://tenant/namespace/topic
        -
        -```
        -
        -### `create`
        -Creates a non-partitioned topic. A non-partitioned topic must be explicitly created by the user if `allowAutoTopicCreation` or `createIfMissing` is disabled.
        -
        -:::note
        -
        -By default, 60 seconds after creation, topics are considered inactive and deleted automatically to prevent generating trash data.
        -To disable this feature, set `brokerDeleteInactiveTopicsEnabled` to `false`.
        -To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to your desired value.
        -For more information about these two parameters, see [here](reference-configuration.md#broker).
        -
        -:::
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics create {persistent|non-persistent}://tenant/namespace/topic
        -
        -```
        -
        -### `get-partitioned-topic-metadata`
        -Get the partitioned topic metadata. If the topic is not created or is a non-partitioned topic, this will return an empty topic with zero partitions.
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics get-partitioned-topic-metadata {persistent|non-persistent}://tenant/namespace/topic
        -
        -```
        
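        -
        -Example
        -
        -As a quick sanity check, the following sketch creates a partitioned topic and then reads its metadata back; the tenant, namespace, topic name, and partition count are illustrative placeholders.
        -
        -```bash
        -
        -# Create a topic with 4 partitions (names are illustrative).
        -$ pulsar-admin topics create-partitioned-topic \
        -persistent://my-tenant/my-ns/my-topic \
        ---partitions 4
        -
        -# Read the metadata back to confirm the partition count.
        -$ pulsar-admin topics get-partitioned-topic-metadata \
        -persistent://my-tenant/my-ns/my-topic
        -
        -```
        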
        -### `update-partitioned-topic`
        -Update an existing non-global partitioned topic. The new number of partitions must be greater than the existing number of partitions.
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics update-partitioned-topic {persistent|non-persistent}://tenant/namespace/topic options
        -
        -```
        -
        -Options
        -|Flag|Description|Default|
        -|---|---|---|
        -|`-p`, `--partitions`|The number of partitions for the topic|0|
        -
        -### `list-partitioned-topics`
        -Get the list of partitioned topics under a namespace.
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics list-partitioned-topics tenant/namespace
        -
        -```
        -
        -### `list`
        -Get the list of topics under a namespace
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics list tenant/namespace
        -
        -```
        -
        -### `terminate`
        -Terminate a persistent topic (disallow further messages from being published on the topic)
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics terminate persistent://tenant/namespace/topic
        -
        -```
        -
        -### `permissions`
        -Get the permissions on a topic. Retrieve the effective permissions for a destination. These permissions are defined by the permissions set at the namespace level combined (union) with any specific permissions set on the topic.
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics permissions topic
        -
        -```
        -
        -### `grant-permission`
        -Grant a new permission to a client role on a single topic
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics grant-permission {persistent|non-persistent}://tenant/namespace/topic options
        -
        -```
        -
        -Options
        -|Flag|Description|Default|
        -|---|---|---|
        -|`--actions`|Actions to be granted (`produce` or `consume`)||
        -|`--role`|The client role to which to grant the permissions||
        -
        -
        -### `revoke-permission`
        -Revoke permissions from a client role on a single topic. If the permission was not set at the topic level, but rather at the namespace level, this operation will return an error (HTTP status code 412).
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics revoke-permission topic
        -
        -```
        -
        -### `lookup`
        -Look up a topic from the current serving broker
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics lookup topic
        -
        -```
        -
        -### `bundle-range`
        -Get the namespace bundle which contains the given topic
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics bundle-range topic
        -
        -```
        -
        -### `delete`
        -Delete a topic. The topic cannot be deleted if there are any active subscriptions or producers connected to the topic.
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics delete topic
        -
        -```
        -
        -### `unload`
        -Unload a topic
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics unload topic
        -
        -```
        -
        -### `create-subscription`
        -Create a new subscription on a topic.
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics create-subscription [options] persistent://tenant/namespace/topic
        -
        -```
        -
        -Options
        -|Flag|Description|Default|
        -|---|---|---|
        -|`-m`, `--messageId`|The messageId at which to create the subscription. It can be either 'latest', 'earliest' or (ledgerId:entryId)|latest|
        -|`-s`, `--subscription`|The name of the subscription to create||
        -
        -### `subscriptions`
        -Get the list of subscriptions on the topic
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics subscriptions topic
        -
        -```
        -
        -### `unsubscribe`
        -Delete a durable subscriber from a topic
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics unsubscribe topic options
        -
        -```
        -
        -Options
        -|Flag|Description|Default|
        -|---|---|---|
        -|`-s`, `--subscription`|The subscription to delete||
        -|`-f`, `--force`|Disconnect and close all consumers and delete subscription forcefully|false|
        
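        -
        -Example
        -
        -For example, the following sketch force-deletes a subscription; the topic and subscription names are illustrative placeholders.
        -
        -```bash
        -
        -# Disconnect all consumers of "my-sub" and delete the subscription.
        -$ pulsar-admin topics unsubscribe persistent://my-tenant/my-ns/my-topic \
        ---subscription my-sub \
        ---force
        -
        -```
        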
        -
        -
        -### `stats`
        -Get the stats for the topic and its connected producers and consumers. All rates are computed over a 1-minute window and are relative to the last completed 1-minute period.
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics stats topic
        -
        -```
        -
        -:::note
        -
        -The unit of `storageSize` and `averageMsgSize` is bytes.
        -
        -:::
        -
        -### `stats-internal`
        -Get the internal stats for the topic
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics stats-internal topic
        -
        -```
        -
        -### `info-internal`
        -Get the internal metadata info for the topic
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics info-internal topic
        -
        -```
        -
        -### `partitioned-stats`
        -Get the stats for the partitioned topic and its connected producers and consumers. All rates are computed over a 1-minute window and are relative to the last completed 1-minute period.
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics partitioned-stats topic options
        -
        -```
        -
        -Options
        -|Flag|Description|Default|
        -|---|---|---|
        -|`--per-partition`|Get per-partition stats|false|
        -
        -### `partitioned-stats-internal`
        -Get the internal stats for the partitioned topic and its connected producers and consumers. All rates are computed over a 1-minute window and are relative to the last completed 1-minute period.
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics partitioned-stats-internal topic
        -
        -```
        -
        -### `skip`
        -Skip some messages for the subscription
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics skip topic options
        -
        -```
        -
        -Options
        -|Flag|Description|Default|
        -|---|---|---|
        -|`-n`, `--count`|The number of messages to skip|0|
        -|`-s`, `--subscription`|The subscription on which to skip messages||
        -
        -
        -### `clear-backlog`
        -Clear backlog (skip all the messages) for the subscription
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics clear-backlog topic options
        -
        -```
        -
        -Options
        -|Flag|Description|Default|
        -|---|---|---|
        -|`-s`, `--subscription`|The subscription to clear||
        -
        -
        -### `expire-messages`
        -Expire messages that are older than the given expiry time (in seconds) for the subscription.
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics expire-messages topic options
        -
        -```
        -
        -Options
        -|Flag|Description|Default|
        -|---|---|---|
        -|`-t`, `--expireTime`|Expire messages older than the time (in seconds)|0|
        -|`-s`, `--subscription`|The subscription to skip messages on||
        -
        -
        -### `expire-messages-all-subscriptions`
        -Expire messages older than the given expiry time (in seconds) for all subscriptions
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics expire-messages-all-subscriptions topic options
        -
        -```
        -
        -Options
        -|Flag|Description|Default|
        -|---|---|---|
        -|`-t`, `--expireTime`|Expire messages older than the time (in seconds)|0|
        -
        -
        -### `peek-messages`
        -Peek some messages for the subscription.
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics peek-messages topic options
        -
        -```
        -
        -Options
        -|Flag|Description|Default|
        -|---|---|---|
        -|`-n`, `--count`|The number of messages|0|
        -|`-s`, `--subscription`|Subscription to get messages from||
        -
        -
        -### `reset-cursor`
        -Reset position for subscription to a position that is closest to timestamp or messageId.
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics reset-cursor topic options
        -
        -```
        -
        -Options
        -
        -|Flag|Description|Default|
        -|---|---|---|
        -|`-s`, `--subscription`|Subscription to reset position on||
        -|`-t`, `--time`|The time to reset back to, expressed in minutes, hours, days, or weeks. Examples: `100m`, `3h`, `2d`, `5w`.||
        -|`-m`, `--messageId`|The messageId to reset back to (ledgerId:entryId).||
        
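        -
        -Example
        -
        -For example, the following sketch rewinds a subscription back one hour; the topic and subscription names are illustrative placeholders.
        -
        -```bash
        -
        -# Reset the cursor of "my-sub" to the position closest to one hour ago.
        -$ pulsar-admin topics reset-cursor persistent://my-tenant/my-ns/my-topic \
        ---subscription my-sub \
        ---time 1h
        -
        -```
        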
        -
        -### `get-message-by-id`
        -Get message by ledger id and entry id
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics get-message-by-id topic options
        -
        -```
        -
        -Options
        -
        -|Flag|Description|Default|
        -|---|---|---|
        -|`-l`, `--ledgerId`|The ledger id|0|
        -|`-e`, `--entryId`|The entry id|0|
        -
        -### `last-message-id`
        -Get the last committed message ID of the topic.
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics last-message-id persistent://tenant/namespace/topic
        -
        -```
        -
        -### `get-backlog-quotas`
        -Get the backlog quota policies for a topic.
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics get-backlog-quotas tenant/namespace/topic
        -
        -```
        -
        -### `set-backlog-quota`
        -Set a backlog quota policy for a topic.
        -
        -|Flag|Description|Default|
        -|----|---|---|
        -|`-l`, `--limit`|The backlog size limit (for example `10M` or `16G`)||
        -|`-lt`, `--limitTime`|Time limit in seconds; a non-positive number disables the time limit (for example, 3600 for 1 hour)||
        -|`-p`, `--policy`|The retention policy to enforce when the limit is reached. The valid options are: `producer_request_hold`, `producer_exception` or `consumer_backlog_eviction`||
        -|`-t`, `--type`|Backlog quota type to set. The valid options are: `destination_storage`, `message_age`|destination_storage|
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics set-backlog-quota tenant/namespace/topic options
        -
        -```
        -
        -Example
        -
        -```bash
        -
        -$ pulsar-admin topics set-backlog-quota my-tenant/my-ns/my-topic \
        ---limit 2G \
        ---policy producer_request_hold
        -
        -```
        -
        -```bash
        -
        -$ pulsar-admin topics set-backlog-quota my-tenant/my-ns/my-topic \
        ---limitTime 3600 \
        ---policy producer_request_hold \
        ---type message_age
        -
        -```
        -
        -### `remove-backlog-quota`
        -Remove a backlog quota policy from a topic.
        -
        -|Flag|Description|Default|
        -|---|---|---|
        -|`-t`, `--type`|Backlog quota type to remove. The valid options are: `destination_storage`, `message_age`|destination_storage|
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics remove-backlog-quota tenant/namespace/topic
        -
        -```
        -
        -### `get-persistence`
        -Get the persistence policies for a topic.
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics get-persistence tenant/namespace/topic
        -
        -```
        -
        -### `set-persistence`
        -Set the persistence policies for a topic.
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics set-persistence tenant/namespace/topic options
        -
        -```
        -
        -Options
        -|Flag|Description|Default|
        -|----|---|---|
        -|`-e`, `--bookkeeper-ensemble`|Number of bookies to use for a topic|0|
        -|`-w`, `--bookkeeper-write-quorum`|How many writes to make of each entry|0|
        -|`-a`, `--bookkeeper-ack-quorum`|Number of acks (guaranteed copies) to wait for each entry|0|
        -|`-r`, `--ml-mark-delete-max-rate`|Throttling rate of mark-delete operation (0 means no throttle)||
        -
        -### `remove-persistence`
        -Remove the persistence policy for a topic.
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics remove-persistence tenant/namespace/topic
        -
        -```
        -
        -### `get-message-ttl`
        -Get the message TTL for a topic.
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics get-message-ttl tenant/namespace/topic
        -
        -```
        -
        -### `set-message-ttl`
        -Set the message TTL for a topic.
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics set-message-ttl tenant/namespace/topic options
        -
        -```
        -
        -Options
        -|Flag|Description|Default|
        -|----|---|---|
        -|`-ttl`, `--messageTTL`|Message TTL for a topic in seconds; the allowed range is from 1 to `Integer.MAX_VALUE`|0|
        
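        -
        -Example
        -
        -For example, the following sketch sets a 2-hour TTL (7200 seconds); the topic name is an illustrative placeholder.
        -
        -```bash
        -
        -# Messages older than 7200 seconds are expired on this topic.
        -$ pulsar-admin topics set-message-ttl persistent://my-tenant/my-ns/my-topic \
        ---messageTTL 7200
        -
        -```
        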
        -
        -### `remove-message-ttl`
        -Remove the message TTL for a topic.
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics remove-message-ttl tenant/namespace/topic
        -
        -```
        -
        -### `get-deduplication`
        -Get a deduplication policy for a topic.
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics get-deduplication tenant/namespace/topic
        -
        -```
        -
        -### `set-deduplication`
        -Set a deduplication policy for a topic.
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics set-deduplication tenant/namespace/topic options
        -
        -```
        -
        -Options
        -|Flag|Description|Default|
        -|---|---|---|
        -|`--enable`, `-e`|Enable message deduplication on the specified topic.|false|
        -|`--disable`, `-d`|Disable message deduplication on the specified topic.|false|
        -
        -### `remove-deduplication`
        -Remove a deduplication policy for a topic.
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin topics remove-deduplication tenant/namespace/topic
        -
        -```
        -
        -## `tenants`
        -Operations for managing tenants
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin tenants subcommand
        -
        -```
        -
        -Subcommands
        -* `list`
        -* `get`
        -* `create`
        -* `update`
        -* `delete`
        -
        -### `list`
        -List the existing tenants
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin tenants list
        -
        -```
        -
        -### `get`
        -Gets the configuration of a tenant
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin tenants get tenant-name
        -
        -```
        -
        -### `create`
        -Creates a new tenant
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin tenants create tenant-name options
        -
        -```
        -
        -Options
        -|Flag|Description|Default|
        -|----|---|---|
        -|`-r`, `--admin-roles`|Comma-separated admin roles||
        -|`-c`, `--allowed-clusters`|Comma-separated allowed clusters||
        -
        -### `update`
        -Updates a tenant
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin tenants update tenant-name options
        -
        -```
        -
        -Options
        -|Flag|Description|Default|
        -|----|---|---|
        -|`-r`, `--admin-roles`|Comma-separated admin roles||
        -|`-c`, `--allowed-clusters`|Comma-separated allowed clusters||
        -
        -
        -### `delete`
        -Deletes an existing tenant
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin tenants delete tenant-name
        -
        -```
        -
        -Options
        -|Flag|Description|Default|
        -|----|---|---|
        -|`-f`, `--force`|Delete a tenant forcefully by deleting all namespaces under it.|false|
        -
        -
        -## `resource-quotas`
        -Operations for managing resource quotas
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin resource-quotas subcommand
        -
        -```
        -
        -Subcommands
        -* `get`
        -* `set`
        -* `reset-namespace-bundle-quota`
        -
        -
        -### `get`
        -Get the resource quota for a specified namespace bundle, or the default quota if no namespace/bundle is specified.
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin resource-quotas get options
        -
        -```
        -
        -Options
        -|Flag|Description|Default|
        -|----|---|---|
        -|`-b`, `--bundle`|A bundle of the form {start-boundary}_{end_boundary}. This must be specified together with -n/--namespace.||
        -|`-n`, `--namespace`|The namespace||
        -
        -
        -### `set`
        -Set the resource quota for the specified namespace bundle, or the default quota if no namespace/bundle is specified.
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin resource-quotas set options
        -
        -```
        -
        -Options
        -|Flag|Description|Default|
        -|----|---|---|
        -|`-bi`, `--bandwidthIn`|The expected inbound bandwidth (in bytes/second)|0|
        -|`-bo`, `--bandwidthOut`|The expected outbound bandwidth (in bytes/second)|0|
        -|`-b`, `--bundle`|A bundle of the form {start-boundary}_{end_boundary}. This must be specified together with -n/--namespace.||
        -|`-d`, `--dynamic`|Allow the quota to be dynamically re-calculated (or not)|false|
        -|`-mem`, `--memory`|Expected memory usage (in megabytes)|0|
        -|`-mi`, `--msgRateIn`|Expected incoming messages per second|0|
        -|`-mo`, `--msgRateOut`|Expected outgoing messages per second|0|
        -|`-n`, `--namespace`|The namespace as tenant/namespace, for example my-tenant/my-ns. Must be specified together with -b/--bundle.||
        
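        -
        -Example
        -
        -The following sketch sets an illustrative quota on one bundle; the namespace, bundle range, and rate values are placeholders, and the flags are those listed above.
        -
        -```bash
        -
        -$ pulsar-admin resource-quotas set \
        ---namespace my-tenant/my-ns \
        ---bundle 0x00000000_0x40000000 \
        ---msgRateIn 1000 \
        ---msgRateOut 2000
        -
        -```
        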
        -
        -
        -### `reset-namespace-bundle-quota`
        -Reset the specified namespace bundle's resource quota to a default value.
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin resource-quotas reset-namespace-bundle-quota options
        -
        -```
        -
        -Options
        -|Flag|Description|Default|
        -|----|---|---|
        -|`-b`, `--bundle`|A bundle of the form {start-boundary}_{end_boundary}. This must be specified together with -n/--namespace.||
        -|`-n`, `--namespace`|The namespace||
        -
        -
        -
        -## `schemas`
        -Operations related to schemas associated with Pulsar topics.
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin schemas subcommand
        -
        -```
        -
        -Subcommands
        -* `upload`
        -* `delete`
        -* `get`
        -* `extract`
        -
        -
        -### `upload`
        -Upload the schema definition for a topic
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin schemas upload persistent://tenant/namespace/topic options
        -
        -```
        -
        -Options
        -|Flag|Description|Default|
        -|----|---|---|
        -|`--filename`|The path to the schema definition file. An example schema file is available under the conf directory.||
        -
        -
        -### `delete`
        -Delete the schema definition associated with a topic
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin schemas delete persistent://tenant/namespace/topic
        -
        -```
        -
        -### `get`
        -Retrieve the schema definition associated with a topic (at a given version, if a version is supplied).
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin schemas get persistent://tenant/namespace/topic options
        -
        -```
        -
        -Options
        -|Flag|Description|Default|
        -|----|---|---|
        -|`--version`|The version of the schema definition to retrieve for a topic.||
        -
        -### `extract`
        -Provide the schema definition for a topic via a Java class name contained in a JAR file
        -
        -Usage
        -
        -```bash
        -
        -$ pulsar-admin schemas extract persistent://tenant/namespace/topic options
        -
        -```
        -
        -Options
        -|Flag|Description|Default|
        -|----|---|---|
        -|`-c`, `--classname`|The Java class name||
        -|`-j`, `--jar`|A path to the JAR file which contains the above Java class||
        -|`-t`, `--type`|The type of the schema (avro or json)||
        diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/reference-rest-api-overview.md b/site2/website/versioned_docs/version-2.9.0-deprecated/reference-rest-api-overview.md
        deleted file mode 100644
        index 4bdcf23483a2b5..00000000000000
        --- a/site2/website/versioned_docs/version-2.9.0-deprecated/reference-rest-api-overview.md
        +++ /dev/null
        @@ -1,18 +0,0 @@
        ----
        -id: reference-rest-api-overview
        -title: Pulsar REST APIs
        -sidebar_label: "Pulsar REST APIs"
        ----
        -
        -A REST API (also known as RESTful API, REpresentational State Transfer Application Programming Interface) is a set of definitions and protocols for building and integrating application software, using HTTP requests to GET, PUT, POST, and DELETE data following the REST standards. In essence, a REST API is a set of remote calls using standard methods to request and return data in a specific format between two systems.
        -
        -Pulsar provides a variety of REST APIs that enable you to interact with Pulsar to retrieve information or perform an action.
        
- -| REST API category | Description | -| --- | --- | -| [Admin](https://pulsar.apache.org/admin-rest-api/?version=master) | REST APIs for administrative operations.| -| [Functions](https://pulsar.apache.org/functions-rest-api/?version=master) | REST APIs for function-specific operations.| -| [Sources](https://pulsar.apache.org/source-rest-api/?version=master) | REST APIs for source-specific operations.| -| [Sinks](https://pulsar.apache.org/sink-rest-api/?version=master) | REST APIs for sink-specific operations.| -| [Packages](https://pulsar.apache.org/packages-rest-api/?version=master) | REST APIs for package-specific operations. A package can be a group of functions, sources, and sinks.| - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/reference-terminology.md b/site2/website/versioned_docs/version-2.9.0-deprecated/reference-terminology.md deleted file mode 100644 index e5099141c3231e..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/reference-terminology.md +++ /dev/null @@ -1,176 +0,0 @@ ---- -id: reference-terminology -title: Pulsar Terminology -sidebar_label: "Terminology" -original_id: reference-terminology ---- - -Here is a glossary of terms related to Apache Pulsar: - -### Concepts - -#### Pulsar - -Pulsar is a distributed messaging system originally created by Yahoo but now under the stewardship of the Apache Software Foundation. - -#### Message - -Messages are the basic unit of Pulsar. They're what [producers](#producer) publish to [topics](#topic) -and what [consumers](#consumer) then consume from topics. - -#### Topic - -A named channel used to pass messages published by [producers](#producer) to [consumers](#consumer) who -process those [messages](#message). - -#### Partitioned Topic - -A topic that is served by multiple Pulsar [brokers](#broker), which enables higher throughput. - -#### Namespace - -A grouping mechanism for related [topics](#topic). - -#### Namespace Bundle - -A virtual group of [topics](#topic) that belong to the same [namespace](#namespace). A namespace bundle -is defined as a range between two 32-bit hashes, such as 0x00000000 and 0xffffffff. - -#### Tenant - -An administrative unit for allocating capacity and enforcing an authentication/authorization scheme. - -#### Subscription - -A lease on a [topic](#topic) established by a group of [consumers](#consumer). Pulsar has four subscription -modes (exclusive, shared, failover and key_shared). - -#### Pub-Sub - -A messaging pattern in which [producer](#producer) processes publish messages on [topics](#topic) that -are then consumed (processed) by [consumer](#consumer) processes. - -#### Producer - -A process that publishes [messages](#message) to a Pulsar [topic](#topic). - -#### Consumer - -A process that establishes a subscription to a Pulsar [topic](#topic) and processes messages published -to that topic by [producers](#producer). - -#### Reader - -Pulsar readers are message processors much like Pulsar [consumers](#consumer) but with two crucial differences: - -- you can specify *where* on a topic readers begin processing messages (consumers always begin with the latest - available unacked message); -- readers don't retain data or acknowledge messages. - -#### Cursor - -The subscription position for a [consumer](#consumer). - -#### Acknowledgment (ack) - -A message sent to a Pulsar broker by a [consumer](#consumer) that a message has been successfully processed. 
        -An acknowledgment (ack) is Pulsar's way of knowing that the message can be deleted from the system;
        -if no acknowledgment is received, the message is retained until it is processed.
        -
        -#### Negative Acknowledgment (nack)
        -
        -When an application fails to process a particular message, it can send a "negative ack" to Pulsar
        -to signal that the message should be replayed at a later time. (By default, failed messages are
        -replayed after a 1 minute delay). Be aware that negative acknowledgment on ordered subscription types,
        -such as Exclusive, Failover and Key_Shared, can cause failed messages to arrive at consumers out of their original order.
        -
        -#### Unacknowledged
        -
        -A message that has been delivered to a consumer for processing but not yet confirmed as processed by the consumer.
        -
        -#### Retention Policy
        -
        -Size and time limits that you can set on a [namespace](#namespace) to configure retention of [messages](#message)
        -that have already been [acknowledged](#acknowledgement-ack).
        -
        -#### Multi-Tenancy
        -
        -The ability to isolate [namespaces](#namespace), specify quotas, and configure authentication and authorization
        -on a per-[tenant](#tenant) basis.
        -
        -#### Failure Domain
        -
        -A logical domain under a Pulsar cluster. Each logical domain contains a pre-configured list of brokers.
        -
        -#### Anti-affinity Namespaces
        -
        -A group of namespaces that have anti-affinity to each other.
        -
        -### Architecture
        -
        -#### Standalone
        -
        -A lightweight Pulsar broker in which all components run in a single Java Virtual Machine (JVM) process. Standalone
        -clusters can be run on a single machine and are useful for development purposes.
        -
        -#### Cluster
        -
        -A set of Pulsar [brokers](#broker) and [BookKeeper](#bookkeeper) servers (aka [bookies](#bookie)).
        -Clusters can reside in different geographical regions and replicate messages to one another
        -in a process called [geo-replication](#geo-replication).
        -
        -#### Instance
        -
        -A group of Pulsar [clusters](#cluster) that act together as a single unit.
        -
        -#### Geo-Replication
        -
        -Replication of messages across Pulsar [clusters](#cluster), potentially in different datacenters
        -or geographical regions.
        -
        -#### Configuration Store
        -
        -Pulsar's configuration store (previously known as global ZooKeeper) is a ZooKeeper quorum that
        -is used for configuration-specific tasks. A multi-cluster Pulsar installation requires just one
        -configuration store across all [clusters](#cluster).
        -
        -#### Topic Lookup
        -
        -A service provided by Pulsar [brokers](#broker) that enables connecting clients to automatically determine
        -which Pulsar [cluster](#cluster) is responsible for a [topic](#topic) (and thus where message traffic for
        -the topic needs to be routed).
        -
        -#### Service Discovery
        -
        -A mechanism provided by Pulsar that enables connecting clients to use just a single URL to interact
        -with all the [brokers](#broker) in a [cluster](#cluster).
        -
        -#### Broker
        -
        -A stateless component of Pulsar [clusters](#cluster) that runs two other components: an HTTP server
        -exposing a REST interface for administration and topic lookup and a [dispatcher](#dispatcher) that
        -handles all message transfers. Pulsar clusters typically consist of multiple brokers.
        -
        -#### Dispatcher
        -
        -An asynchronous TCP server used for all data transfers in and out of a Pulsar [broker](#broker). The Pulsar
        -dispatcher uses a custom binary protocol for all communications.
        -
        -### Storage
        -
        -#### BookKeeper
        -
        -[Apache BookKeeper](http://bookkeeper.apache.org/) is a scalable, low-latency persistent log storage
        -service that Pulsar uses to store data.
        
- -#### Bookie - -Bookie is the name of an individual BookKeeper server. It is effectively the storage server of Pulsar. - -#### Ledger - -An append-only data structure in [BookKeeper](#bookkeeper) that is used to persistently store messages in Pulsar [topics](#topic). - -### Functions - -Pulsar Functions are lightweight functions that can consume messages from Pulsar topics, apply custom processing logic, and, if desired, publish results to topics. diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/schema-evolution-compatibility.md b/site2/website/versioned_docs/version-2.9.0-deprecated/schema-evolution-compatibility.md deleted file mode 100644 index 3e78429df69da2..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/schema-evolution-compatibility.md +++ /dev/null @@ -1,201 +0,0 @@ ---- -id: schema-evolution-compatibility -title: Schema evolution and compatibility -sidebar_label: "Schema evolution and compatibility" -original_id: schema-evolution-compatibility ---- - -Normally, schemas do not stay the same over a long period of time. Instead, they undergo evolutions to satisfy new needs. - -This chapter examines how Pulsar schema evolves and what Pulsar schema compatibility check strategies are. - -## Schema evolution - -Pulsar schema is defined in a data structure called `SchemaInfo`. - -Each `SchemaInfo` stored with a topic has a version. The version is used to manage the schema changes happening within a topic. - -The message produced with `SchemaInfo` is tagged with a schema version. When a message is consumed by a Pulsar client, the Pulsar client can use the schema version to retrieve the corresponding `SchemaInfo` and use the correct schema information to deserialize data. - -### What is schema evolution? - -Schemas store the details of attributes and types. To satisfy new business requirements, you need to update schemas inevitably over time, which is called **schema evolution**. - -Any schema changes affect downstream consumers. Schema evolution ensures that the downstream consumers can seamlessly handle data encoded with both old schemas and new schemas. - -### How Pulsar schema should evolve? - -The answer is Pulsar schema compatibility check strategy. It determines how schema compares old schemas with new schemas in topics. - -For more information, see [Schema compatibility check strategy](#schema-compatibility-check-strategy). - -### How does Pulsar support schema evolution? - -1. When a producer/consumer/reader connects to a broker, the broker deploys the schema compatibility checker configured by `schemaRegistryCompatibilityCheckers` to enforce schema compatibility check. - - The schema compatibility checker is one instance per schema type. - - Currently, Avro and JSON have their own compatibility checkers, while all the other schema types share the default compatibility checker which disables schema evolution. - -2. The producer/consumer/reader sends its client `SchemaInfo` to the broker. - -3. The broker knows the schema type and locates the schema compatibility checker for that type. - -4. The broker uses the checker to check if the `SchemaInfo` is compatible with the latest schema of the topic by applying its compatibility check strategy. - - Currently, the compatibility check strategy is configured at the namespace level and applied to all the topics within that namespace. - -## Schema compatibility check strategy - -Pulsar has 8 schema compatibility check strategies, which are summarized in the following table. 
- -Suppose that you have a topic containing three schemas (V1, V2, and V3), V1 is the oldest and V3 is the latest: - -| Compatibility check strategy | Definition | Changes allowed | Check against which schema | Upgrade first | -| --- | --- | --- | --- | --- | -| `ALWAYS_COMPATIBLE` | Disable schema compatibility check. | All changes are allowed | All previous versions | Any order | -| `ALWAYS_INCOMPATIBLE` | Disable schema evolution. | All changes are disabled | None | None | -| `BACKWARD` | Consumers using the schema V3 can process data written by producers using the schema V3 or V2. |
        
      • Add optional fields
      • Delete fields
      • | Latest version | Consumers |
        -| `BACKWARD_TRANSITIVE` | Consumers using the schema V3 can process data written by producers using the schema V3, V2 or V1. |
      • Add optional fields
      • Delete fields
      • | All previous versions | Consumers |
        -| `FORWARD` | Consumers using the schema V3 or V2 can process data written by producers using the schema V3. |
      • Add fields
      • Delete optional fields
      • | Latest version | Producers |
        -| `FORWARD_TRANSITIVE` | Consumers using the schema V3, V2 or V1 can process data written by producers using the schema V3. |
      • Add fields
      • Delete optional fields
      • | All previous versions | Producers |
        -| `FULL` | Backward and forward compatible between the schema V3 and V2. |
      • Modify optional fields
      • | Latest version | Any order |
        -| `FULL_TRANSITIVE` | Backward and forward compatible among the schema V3, V2, and V1. |
      • Modify optional fields
      • | All previous versions | Any order |
        
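        -
        -The compatibility check strategy is configured at the namespace level and applies to all topics within that namespace. As a rough sketch (the strategy value and namespace name below are illustrative placeholders), it can be set with `pulsar-admin`:
        -
        -```bash
        -
        -$ pulsar-admin namespaces set-schema-compatibility-strategy \
        ---compatibility BACKWARD \
        -my-tenant/my-ns
        -
        -```
        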
        -
        -### ALWAYS_COMPATIBLE and ALWAYS_INCOMPATIBLE
        -
        -| Compatibility check strategy | Definition | Note |
        -| --- | --- | --- |
        -| `ALWAYS_COMPATIBLE` | Disable schema compatibility check. | None |
        -| `ALWAYS_INCOMPATIBLE` | Disable schema evolution, that is, any schema change is rejected. |
      • For all schema types except Avro and JSON, the default schema compatibility check strategy is `ALWAYS_INCOMPATIBLE`.
      • For Avro and JSON, the default schema compatibility check strategy is `FULL`.
      • |
        -
        -#### Example
        -
        -* Example 1
        -
        -  In some situations, an application needs to store events of several different types in the same Pulsar topic.
        -
        -  In particular, when developing a data model in an `Event Sourcing` style, you might have several kinds of events that affect the state of an entity.
        -
        -  For example, for a user entity, there are `userCreated`, `userAddressChanged` and `userEnquiryReceived` events. The application requires that those events are always read in the same order.
        -
        -  Consequently, those events need to go in the same Pulsar partition to maintain order. This application can use `ALWAYS_COMPATIBLE` to allow different kinds of events to co-exist in the same topic.
        -
        -* Example 2
        -
        -  Sometimes we also make incompatible changes.
        -
        -  For example, you are modifying a field type from `string` to `int`.
        -
        -  In this case, you need to:
        -
        -  * Upgrade all producers and consumers to the new schema versions at the same time.
        -
        -  * Optionally, create a new topic and start migrating applications to use the new topic and the new schema, avoiding the need to handle two incompatible versions in the same topic.
        -
        -### BACKWARD and BACKWARD_TRANSITIVE
        -
        -Suppose that you have a topic containing three schemas (V1, V2, and V3), where V1 is the oldest and V3 is the latest:
        -
        -| Compatibility check strategy | Definition | Description |
        -|---|---|---|
        -| `BACKWARD` | Consumers using the new schema can process data written by producers using the **last schema**. | The consumers using the schema V3 can process data written by producers using the schema V3 or V2. |
        -| `BACKWARD_TRANSITIVE` | Consumers using the new schema can process data written by producers using **all previous schemas**. | The consumers using the schema V3 can process data written by producers using the schema V3, V2, or V1. |
        -
        -#### Example
        -
        -* Example 1
        -
        -  Remove a field.
        -
        -  A consumer constructed to process events without one field can process events written with the old schema containing the field; the consumer will simply ignore that field.
        -
        -* Example 2
        -
        -  You want to load all Pulsar data into a Hive data warehouse and run SQL queries against the data.
        -
        -  The same SQL queries must continue to work even when the data changes. To support this, you can evolve the schemas using the `BACKWARD` strategy.
        -
        -### FORWARD and FORWARD_TRANSITIVE
        -
        -Suppose that you have a topic containing three schemas (V1, V2, and V3), where V1 is the oldest and V3 is the latest:
        -
        -| Compatibility check strategy | Definition | Description |
        -|---|---|---|
        -| `FORWARD` | Consumers using the **last schema** can process data written by producers using a new schema, even though they may not be able to use the full capabilities of the new schema. | The consumers using the schema V3 or V2 can process data written by producers using the schema V3. |
        -| `FORWARD_TRANSITIVE` | Consumers using **all previous schemas** can process data written by producers using a new schema. | The consumers using the schema V3, V2, or V1 can process data written by producers using the schema V3. |
        -
        -#### Example
        -
        -* Example 1
        -
        -  Add a field.
        -
        -  In most data formats, consumers written to process events without new fields can continue doing so even when they receive new events containing new fields.
        -
        -* Example 2
        -
        -  If a consumer has application logic tied to a full version of a schema, the application logic may not be updated instantly when the schema evolves.
        -
        -  In this case, you need to project data with a new schema onto an old schema that the application understands.
        
- - Consequently, you can evolve the schemas using the `FORWARD` strategy to ensure that the old schema can process data encoded with the new schema. - -### FULL and FULL_TRANSITIVE - -Suppose that you have a topic containing three schemas (V1, V2, and V3), V1 is the oldest and V3 is the latest: - -| Compatibility check strategy | Definition | Description | Note | -| --- | --- | --- | --- | -| `FULL` | Schemas are both backward and forward compatible, which means: Consumers using the last schema can process data written by producers using the new schema. AND Consumers using the new schema can process data written by producers using the last schema. | Consumers using the schema V3 can process data written by producers using the schema V3 or V2. AND Consumers using the schema V3 or V2 can process data written by producers using the schema V3. |
        
      • For Avro and JSON, the default schema compatibility check strategy is `FULL`.
      • For all schema types except Avro and JSON, the default schema compatibility check strategy is `ALWAYS_INCOMPATIBLE`.
      • |
        -| `FULL_TRANSITIVE` | The new schema is backward and forward compatible with all previously registered schemas. | Consumers using the schema V3 can process data written by producers using the schema V3, V2 or V1. AND Consumers using the schema V3, V2 or V1 can process data written by producers using the schema V3. | None |
        -
        -#### Example
        -
        -In some data formats, for example, Avro, you can define fields with default values. Consequently, adding or removing a field with a default value is a fully compatible change.
        -
        -## Schema verification
        -
        -When a producer or a consumer tries to connect to a topic, a broker performs some checks to verify a schema.
        -
        -### Producer
        -
        -When a producer tries to connect to a topic (leaving schema auto-creation aside), a broker does the following checks:
        -
        -* Check if the schema carried by the producer exists in the schema registry or not.
        -
        -  * If the schema is already registered, then the producer is connected to a broker and produces messages with that schema.
        -
        -  * If the schema is not registered, then Pulsar verifies if the schema is allowed to be registered based on the configured compatibility check strategy.
        -
        -### Consumer
        -When a consumer tries to connect to a topic, a broker checks if a carried schema is compatible with a registered schema based on the configured schema compatibility check strategy.
        -
        -| Compatibility check strategy | Check logic |
        -| --- | --- |
        -| `ALWAYS_COMPATIBLE` | All pass |
        -| `ALWAYS_INCOMPATIBLE` | No pass |
        -| `BACKWARD` | Can read the last schema |
        -| `BACKWARD_TRANSITIVE` | Can read all schemas |
        -| `FORWARD` | Can read the last schema |
        -| `FORWARD_TRANSITIVE` | Can read the last schema |
        -| `FULL` | Can read the last schema |
        -| `FULL_TRANSITIVE` | Can read all schemas |
        -
        -## Order of upgrading clients
        -
        -The order of upgrading client applications is determined by the compatibility check strategy.
        -
        -Suppose, for example, that producers use schemas to write data to Pulsar and consumers use schemas to read data from Pulsar.
        -
        -| Compatibility check strategy | Upgrade first | Description |
        -| --- | --- | --- |
        -| `ALWAYS_COMPATIBLE` | Any order | The compatibility check is disabled. Consequently, you can upgrade the producers and consumers in **any order**. |
        -| `ALWAYS_INCOMPATIBLE` | None | The schema evolution is disabled. |
        
        -|
      • `BACKWARD`
      • `BACKWARD_TRANSITIVE`
      • | Consumers | There is no guarantee that consumers using the old schema can read data produced using the new schema. Consequently, **upgrade all consumers first**, and then start producing new data. |
        -|
      • `FORWARD`
      • `FORWARD_TRANSITIVE`
      • | Producers | There is no guarantee that consumers using the new schema can read data produced using the old schema. Consequently, **upgrade all producers first** to use the new schema and ensure that the data already produced using the old schemas are not available to consumers, and then upgrade the consumers. |
        -|
      • `FULL`
      • `FULL_TRANSITIVE`
      • | Any order | There is no guarantee that consumers using the old schema can read data produced using the new schema and consumers using the new schema can read data produced using the old schema. Consequently, you can upgrade the producers and consumers in **any order**. |
        -
        -
        -
        -
        diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/schema-get-started.md b/site2/website/versioned_docs/version-2.9.0-deprecated/schema-get-started.md
        deleted file mode 100644
        index afacb0fa51f2ef..00000000000000
        --- a/site2/website/versioned_docs/version-2.9.0-deprecated/schema-get-started.md
        +++ /dev/null
        @@ -1,102 +0,0 @@
        ----
        -id: schema-get-started
        -title: Get started
        -sidebar_label: "Get started"
        -original_id: schema-get-started
        ----
        -
        -This chapter introduces Pulsar schemas and explains why they are important.
        -
        -## Schema Registry
        -
        -Type safety is extremely important in any application built around a message bus like Pulsar.
        -
        -Producers and consumers need some kind of mechanism for coordinating types at the topic level to prevent various potential problems from arising, for example, serialization and deserialization issues.
        -
        -Applications typically adopt one of the following approaches to guarantee type safety in messaging. Both approaches are available in Pulsar, and you're free to adopt one or the other or to mix and match on a per-topic basis.
        -
        -#### Note
        ->
        -> Currently, the Pulsar schema registry is only available for the [Java client](client-libraries-java.md), [CGo client](client-libraries-cgo.md), [Python client](client-libraries-python.md), and [C++ client](client-libraries-cpp.md).
        -
        -### Client-side approach
        -
        -Producers and consumers are responsible for not only serializing and deserializing messages (which consist of raw bytes) but also "knowing" which types are being transmitted via which topics.
        -
        -If a producer is sending temperature sensor data on the topic `topic-1`, consumers of that topic will run into trouble if they attempt to parse that data as moisture sensor readings.
        -
        -Producers and consumers can send and receive messages consisting of raw byte arrays and leave all type safety enforcement to the application on an "out-of-band" basis.
        -
        -### Server-side approach
        -
        -Producers and consumers inform the system which data types can be transmitted via the topic.
        -
        -With this approach, the messaging system enforces type safety and ensures that producers and consumers remain synced.
        -
        -Pulsar has a built-in **schema registry** that enables clients to upload data schemas on a per-topic basis. Those schemas dictate which data types are recognized as valid for that topic.
        -
        -## Why use schema
        -
        -When no schema is enforced, Pulsar does not parse data: it takes bytes as inputs and sends bytes as outputs. But data has meaning beyond bytes, so you need to parse it and might encounter parse exceptions, which mainly occur in the following situations:
        -
        -* The field does not exist
        -
        -* The field type has changed (for example, `string` is changed to `int`)
        -
        -There are a few methods to prevent and overcome these exceptions. For example, you can catch exceptions on parsing errors, which makes the code hard to maintain; or you can adopt a schema management system that performs schema evolution without breaking downstream applications and enforces type safety to the maximum extent in the language you are using. That solution is Pulsar Schema.
        
- -Pulsar schema enables you to use language-specific types of data when constructing and handling messages from simple types like `string` to more complex application-specific types. - -**Example** - -You can use the _User_ class to define the messages sent to Pulsar topics. - -``` - -public class User { - String name; - int age; -} - -``` - -When constructing a producer with the _User_ class, you can specify a schema or not as below. - -### Without schema - -If you construct a producer without specifying a schema, then the producer can only produce messages of type `byte[]`. If you have a POJO class, you need to serialize the POJO into bytes before sending messages. - -**Example** - -``` - -Producer producer = client.newProducer() - .topic(topic) - .create(); -User user = new User("Tom", 28); -byte[] message = … // serialize the `user` by yourself; -producer.send(message); - -``` - -### With schema - -If you construct a producer with specifying a schema, then you can send a class to a topic directly without worrying about how to serialize POJOs into bytes. - -**Example** - -This example constructs a producer with the _JSONSchema_, and you can send the _User_ class to topics directly without worrying about how to serialize it into bytes. - -``` - -Producer producer = client.newProducer(JSONSchema.of(User.class)) - .topic(topic) - .create(); -User user = new User("Tom", 28); -producer.send(user); - -``` - -### Summary - -When constructing a producer with a schema, you do not need to serialize messages into bytes, instead Pulsar schema does this job in the background. diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/schema-manage.md b/site2/website/versioned_docs/version-2.9.0-deprecated/schema-manage.md deleted file mode 100644 index c588aae619eee9..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/schema-manage.md +++ /dev/null @@ -1,639 +0,0 @@ ---- -id: schema-manage -title: Manage schema -sidebar_label: "Manage schema" -original_id: schema-manage ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -This guide demonstrates the ways to manage schemas: - -* Automatically - - * [Schema AutoUpdate](#schema-autoupdate) - -* Manually - - * [Schema manual management](#schema-manual-management) - - * [Custom schema storage](#custom-schema-storage) - -## Schema AutoUpdate - -If a schema passes the schema compatibility check, Pulsar producer automatically updates this schema to the topic it produces by default. - -### AutoUpdate for producer - -For a producer, the `AutoUpdate` happens in the following cases: - -* If a **topic doesn’t have a schema**, Pulsar registers a schema automatically. - -* If a **topic has a schema**: - - * If a **producer doesn’t carry a schema**: - - * If `isSchemaValidationEnforced` or `schemaValidationEnforced` is **disabled** in the namespace to which the topic belongs, the producer is allowed to connect to the topic and produce data. - - * If `isSchemaValidationEnforced` or `schemaValidationEnforced` is **enabled** in the namespace to which the topic belongs, the producer is rejected and disconnected. - - * If a **producer carries a schema**: - - A broker performs the compatibility check based on the configured compatibility check strategy of the namespace to which the topic belongs. - - * If the schema is registered, a producer is connected to a broker. 
- - * If the schema is not registered: - - * If `isAllowAutoUpdateSchema` sets to **false**, the producer is rejected to connect to a broker. - - * If `isAllowAutoUpdateSchema` sets to **true**: - - * If the schema passes the compatibility check, then the broker registers a new schema automatically for the topic and the producer is connected. - - * If the schema does not pass the compatibility check, then the broker does not register a schema and the producer is rejected to connect to a broker. - -![AutoUpdate Producer](/assets/schema-producer.png) - -### AutoUpdate for consumer - -For a consumer, the `AutoUpdate` happens in the following cases: - -* If a **consumer connects to a topic without a schema** (which means the consumer receiving raw bytes), the consumer can connect to the topic successfully without doing any compatibility check. - -* If a **consumer connects to a topic with a schema**. - - * If a topic does not have all of them (a schema/data/a local consumer and a local producer): - - * If `isAllowAutoUpdateSchema` sets to **true**, then the consumer registers a schema and it is connected to a broker. - - * If `isAllowAutoUpdateSchema` sets to **false**, then the consumer is rejected to connect to a broker. - - * If a topic has one of them (a schema/data/a local consumer and a local producer), then the schema compatibility check is performed. - - * If the schema passes the compatibility check, then the consumer is connected to the broker. - - * If the schema does not pass the compatibility check, then the consumer is rejected to connect to the broker. - -![AutoUpdate Consumer](/assets/schema-consumer.png) - - -### Manage AutoUpdate strategy - -You can use the `pulsar-admin` command to manage the `AutoUpdate` strategy as below: - -* [Enable AutoUpdate](#enable-autoupdate) - -* [Disable AutoUpdate](#disable-autoupdate) - -* [Adjust compatibility](#adjust-compatibility) - -#### Enable AutoUpdate - -To enable `AutoUpdate` on a namespace, you can use the `pulsar-admin` command. - -```bash - -bin/pulsar-admin namespaces set-is-allow-auto-update-schema --enable tenant/namespace - -``` - -#### Disable AutoUpdate - -To disable `AutoUpdate` on a namespace, you can use the `pulsar-admin` command. - -```bash - -bin/pulsar-admin namespaces set-is-allow-auto-update-schema --disable tenant/namespace - -``` - -Once the `AutoUpdate` is disabled, you can only register a new schema using the `pulsar-admin` command. - -#### Adjust compatibility - -To adjust the schema compatibility level on a namespace, you can use the `pulsar-admin` command. - -```bash - -bin/pulsar-admin namespaces set-schema-compatibility-strategy --compatibility tenant/namespace - -``` - -### Schema validation - -By default, `schemaValidationEnforced` is **disabled** for producers: - -* This means a producer without a schema can produce any kind of messages to a topic with schemas, which may result in producing trash data to the topic. - -* This allows non-java language clients that don’t support schema can produce messages to a topic with schemas. - -However, if you want a stronger guarantee on the topics with schemas, you can enable `schemaValidationEnforced` across the whole cluster or on a per-namespace basis. - -#### Enable schema validation - -To enable `schemaValidationEnforced` on a namespace, you can use the `pulsar-admin` command. 
- -```bash - -bin/pulsar-admin namespaces set-schema-validation-enforce --enable tenant/namespace - -``` - -#### Disable schema validation - -To disable `schemaValidationEnforced` on a namespace, you can use the `pulsar-admin` command. - -```bash - -bin/pulsar-admin namespaces set-schema-validation-enforce --disable tenant/namespace - -``` - -## Schema manual management - -To manage schemas, you can use one of the following methods. - -| Method | Description | -| --- | --- | -| **Admin CLI**
        | You can use the `pulsar-admin` tool to manage Pulsar schemas, brokers, clusters, sources, sinks, topics, tenants and so on. For more information about how to use the `pulsar-admin` tool, see [here](reference-pulsar-admin.md). |
        -| **REST API**
        | Pulsar exposes a schema-related management API in Pulsar's admin RESTful API. You can access the admin RESTful endpoint directly to manage schemas. For more information about how to use the Pulsar REST API, see [here](http://pulsar.apache.org/admin-rest-api/). |
        -| **Java Admin API**
        | Pulsar provides a Java admin library. |
        -
        -### Upload a schema
        -
        -To upload (register) a new schema for a topic, you can use one of the following methods.
        -
        -````mdx-code-block
        -
        -Use the `upload` subcommand.
        -
        -```bash
        -
        -$ pulsar-admin schemas upload  --filename 
        -
        -```
        -
        -The `schema-definition-file` is in JSON format.
        -
        -```json
        -
        -{
        -    "type": "",
        -    "schema": "",
        -    "properties": {} // the properties associated with the schema
        -}
        -
        -```
        -
        -The `schema-definition-file` includes the following fields:
        -
        -| Field | Description |
        -| --- | --- |
        -| `type` | The schema type. |
        -| `schema` | The schema definition data, which is encoded in UTF 8 charset.
      • If the schema is a **primitive** schema, this field should be blank.
      • If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition.
      • |
        -| `properties` | The additional properties associated with the schema. |
        -
        -Here are examples of the `schema-definition-file` for a JSON schema.
        -
        -**Example 1**
        -
        -```json
        -
        -{
        -    "type": "JSON",
        -    "schema": "{\"type\":\"record\",\"name\":\"User\",\"namespace\":\"com.foo\",\"fields\":[{\"name\":\"file1\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"file2\",\"type\":\"string\",\"default\":null},{\"name\":\"file3\",\"type\":[\"null\",\"string\"],\"default\":\"dfdf\"}]}",
        -    "properties": {}
        -}
        -
        -```
        -
        -**Example 2**
        -
        -```json
        -
        -{
        -    "type": "STRING",
        -    "schema": "",
        -    "properties": {
        -        "key1": "value1"
        -    }
        -}
        -
        -```
        
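        -
        -For instance, an invocation using a definition file like Example 1 might look like the following sketch; the topic name and file path are illustrative placeholders.
        -
        -```bash
        -
        -# Register the schema in ./my-schema.json on an illustrative topic.
        -$ pulsar-admin schemas upload persistent://my-tenant/my-ns/my-topic \
        ---filename ./my-schema.json
        -
        -```
        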
    - - -Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v2/schemas/:tenant/:namespace/:topic/schema|operation/uploadSchema?version=@pulsar:version_number@} - -The post payload is in JSON format. - -```json - -{ - "type": "", - "schema": "", - "properties": {} // the properties associated with the schema -} - -``` - -The post payload includes the following fields: - -| Field | Description | -| --- | --- | -| `type` | The schema type. | -| `schema` | The schema definition data, which is encoded in UTF 8 charset.
    If the schema is a **primitive** schema, this field should be blank.
    If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition. |
    -| `properties` | The additional properties associated with the schema. |
    -
    -
    
    - - -```java - -void createSchema(String topic, PostSchemaPayload schemaPayload) - -``` - -The `PostSchemaPayload` includes the following fields: - -| Field | Description | -| --- | --- | -| `type` | The schema type. | -| `schema` | The schema definition data, which is encoded in UTF 8 charset.
    If the schema is a **primitive** schema, this field should be blank.
    If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition. |
    -| `properties` | The additional properties associated with the schema. |
    -
    -Here is an example of `PostSchemaPayload`:
    -
    -```java
    -
    -PulsarAdmin admin = …;
    -
    -PostSchemaPayload payload = new PostSchemaPayload();
    -payload.setType("INT8");
    -payload.setSchema("");
    -
    -admin.createSchema("my-tenant/my-ns/my-topic", payload);
    -
    -```
    -
    -
    
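    The same call works for struct schemas; the following is a minimal sketch (reusing the `admin` instance above, with a hypothetical one-field Avro record definition) in which the schema data carries the Avro definition as a JSON string instead of being left blank:

    ```java

    // Hypothetical struct (AVRO) schema upload; the schema field carries the
    // Avro definition as a JSON string instead of being left blank.
    PostSchemaPayload avroPayload = new PostSchemaPayload();
    avroPayload.setType("AVRO");
    avroPayload.setSchema("{\"type\":\"record\",\"name\":\"User\",\"fields\":[{\"name\":\"name\",\"type\":\"string\"}]}");

    admin.createSchema("my-tenant/my-ns/my-topic", avroPayload);

    ```
    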
    - -
    -```` - -### Get a schema (latest) - -To get the latest schema for a topic, you can use one of the following methods. - -````mdx-code-block - - - - -Use the `get` subcommand. - -```bash - -$ pulsar-admin schemas get - -{ - "version": 0, - "type": "String", - "timestamp": 0, - "data": "string", - "properties": { - "property1": "string", - "property2": "string" - } -} - -``` - - - - -Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v2/schemas/:tenant/:namespace/:topic/schema|operation/getSchema?version=@pulsar:version_number@} - -Here is an example of a response, which is returned in JSON format. - -```json - -{ - "version": "", - "type": "", - "timestamp": "", - "data": "", - "properties": {} // the properties associated with the schema -} - -``` - -The response includes the following fields: - -| Field | Description | -| --- | --- | -| `version` | The schema version, which is a long number. | -| `type` | The schema type. | -| `timestamp` | The timestamp of creating this version of schema. | -| `data` | The schema definition data, which is encoded in UTF 8 charset.
    If the schema is a **primitive** schema, this field should be blank.
    If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition. |
    -| `properties` | The additional properties associated with the schema. |
    -
    -
    
    - - -```java - -SchemaInfo createSchema(String topic) - -``` - -The `SchemaInfo` includes the following fields: - -| Field | Description | -| --- | --- | -| `name` | The schema name. | -| `type` | The schema type. | -| `schema` | A byte array of the schema definition data, which is encoded in UTF 8 charset.
    If the schema is a **primitive** schema, this byte array should be empty.
    If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition converted to a byte array. |
    -| `properties` | The additional properties associated with the schema. |
    -
    -Here is an example of `SchemaInfo`:
    -
    -```java
    -
    -PulsarAdmin admin = …;
    -
    -SchemaInfo si = admin.getSchema("my-tenant/my-ns/my-topic");
    -
    -```
    -
    -
    
    - -
    -```` - -### Get a schema (specific) - -To get a specific version of a schema, you can use one of the following methods. - -````mdx-code-block - - - - -Use the `get` subcommand. - -```bash - -$ pulsar-admin schemas get --version= - -``` - - - - -Send a `GET` request to a schema endpoint: {@inject: endpoint|GET|/admin/v2/schemas/:tenant/:namespace/:topic/schema/:version|operation/getSchema?version=@pulsar:version_number@} - -Here is an example of a response, which is returned in JSON format. - -```json - -{ - "version": "", - "type": "", - "timestamp": "", - "data": "", - "properties": {} // the properties associated with the schema -} - -``` - -The response includes the following fields: - -| Field | Description | -| --- | --- | -| `version` | The schema version, which is a long number. | -| `type` | The schema type. | -| `timestamp` | The timestamp of creating this version of schema. | -| `data` | The schema definition data, which is encoded in UTF 8 charset.
    If the schema is a **primitive** schema, this field should be blank.
    If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition. |
    -| `properties` | The additional properties associated with the schema. |
    -
    -
    
    - - -```java - -SchemaInfo createSchema(String topic, long version) - -``` - -The `SchemaInfo` includes the following fields: - -| Field | Description | -| --- | --- | -| `name` | The schema name. | -| `type` | The schema type. | -| `schema` | A byte array of the schema definition data, which is encoded in UTF 8.
    If the schema is a **primitive** schema, this byte array should be empty.
    If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition converted to a byte array. |
    -| `properties` | The additional properties associated with the schema. |
    -
    -Here is an example of `SchemaInfo`:
    -
    -```java
    -
    -PulsarAdmin admin = …;
    -
    -SchemaInfo si = admin.getSchema("my-tenant/my-ns/my-topic", 1L);
    -
    -```
    -
    -
    
    - -
    -```` - -### Extract a schema - -To provide a schema via a topic, you can use the following method. - -````mdx-code-block - - - - -Use the `extract` subcommand. - -```bash - -$ pulsar-admin schemas extract --classname --jar --type - -``` - - - - -```` - -### Delete a schema - -To delete a schema for a topic, you can use one of the following methods. - -:::note - -In any case, the **delete** action deletes **all versions** of a schema registered for a topic. - -::: - -````mdx-code-block - - - - -Use the `delete` subcommand. - -```bash - -$ pulsar-admin schemas delete - -``` - - - - -Send a `DELETE` request to a schema endpoint: {@inject: endpoint|DELETE|/admin/v2/schemas/:tenant/:namespace/:topic/schema|operation/deleteSchema?version=@pulsar:version_number@} - -Here is an example of a response, which is returned in JSON format. - -```json - -{ - "version": "", -} - -``` - -The response includes the following field: - -Field | Description | ----|---| -`version` | The schema version, which is a long number. | - - - - -```java - -void deleteSchema(String topic) - -``` - -Here is an example of deleting a schema. - -```java - -PulsarAdmin admin = …; - -admin.deleteSchema("my-tenant/my-ns/my-topic"); - -``` - - - - -```` - -## Custom schema storage - -By default, Pulsar stores various data types of schemas in [Apache BookKeeper](https://bookkeeper.apache.org) deployed alongside Pulsar. - -However, you can use another storage system if needed. - -### Implement - -To use a non-default (non-BookKeeper) storage system for Pulsar schemas, you need to implement the following Java interfaces: - -* [SchemaStorage interface](#schemastorage-interface) - -* [SchemaStorageFactory interface](#schemastoragefactory-interface) - -#### SchemaStorage interface - -The `SchemaStorage` interface has the following methods: - -```java - -public interface SchemaStorage { - // How schemas are updated - CompletableFuture put(String key, byte[] value, byte[] hash); - - // How schemas are fetched from storage - CompletableFuture get(String key, SchemaVersion version); - - // How schemas are deleted - CompletableFuture delete(String key); - - // Utility method for converting a schema version byte array to a SchemaVersion object - SchemaVersion versionFromBytes(byte[] version); - - // Startup behavior for the schema storage client - void start() throws Exception; - - // Shutdown behavior for the schema storage client - void close() throws Exception; -} - -``` - -:::tip - -For a complete example of **schema storage** implementation, see [BookKeeperSchemaStorage](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/schema/BookkeeperSchemaStorage.java) class. - -::: - -#### SchemaStorageFactory interface - -The `SchemaStorageFactory` interface has the following method: - -```java - -public interface SchemaStorageFactory { - @NotNull - SchemaStorage create(PulsarService pulsar) throws Exception; -} - -``` - -:::tip - -For a complete example of **schema storage factory** implementation, see [BookKeeperSchemaStorageFactory](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/schema/BookkeeperSchemaStorageFactory.java) class. - -::: - -### Deploy - -To use your custom schema storage implementation, perform the following steps. - -1. Package the implementation in a [JAR](https://docs.oracle.com/javase/tutorial/deployment/jar/basicsindex.html) file. - -2. 
Add the JAR file to the `lib` folder in your Pulsar binary or source distribution. - -3. Change the `schemaRegistryStorageClassName` configuration in `broker.conf` to your custom factory class. - -4. Start Pulsar. diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/schema-understand.md b/site2/website/versioned_docs/version-2.9.0-deprecated/schema-understand.md deleted file mode 100644 index 55bc662c666338..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/schema-understand.md +++ /dev/null @@ -1,576 +0,0 @@ ---- -id: schema-understand -title: Understand schema -sidebar_label: "Understand schema" -original_id: schema-understand ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -This chapter explains the basic concepts of Pulsar schema, focuses on the topics of particular importance, and provides additional background. - -## SchemaInfo - -Pulsar schema is defined in a data structure called `SchemaInfo`. - -The `SchemaInfo` is stored and enforced on a per-topic basis and cannot be stored at the namespace or tenant level. - -A `SchemaInfo` consists of the following fields: - -| Field | Description | -| --- | --- | -| `name` | Schema name (a string). | -| `type` | Schema type, which determines how to interpret the schema data.
    Predefined schema: see [here](schema-understand.md#schema-type).
    Customized schema: it is left as an empty string. |
    -| `schema`(`payload`) | Schema data, which is a sequence of 8-bit unsigned bytes and schema-type specific. |
    -| `properties` | A user-defined property bag as a string/string map. Applications can use this bag to carry any application-specific logic. Possible properties might be the Git hash associated with the schema, or an environment string like `dev` or `prod`. |
    -
    -**Example**
    -
    -This is the `SchemaInfo` of a string.
    -
    -```json
    -
    -{
    -    "name": "test-string-schema",
    -    "type": "STRING",
    -    "schema": "",
    -    "properties": {}
    -}
    -
    -```
    -
    -## Schema type
    -
    -Pulsar supports various schema types, which are mainly divided into two categories:
    -
    -* Primitive type
    -
    -* Complex type
    -
    -### Primitive type
    -
    -Currently, Pulsar supports the following primitive types:
    -
    -| Primitive Type | Description |
    -|---|---|
    -| `BOOLEAN` | A binary value |
    -| `INT8` | An 8-bit signed integer |
    -| `INT16` | A 16-bit signed integer |
    -| `INT32` | A 32-bit signed integer |
    -| `INT64` | A 64-bit signed integer |
    -| `FLOAT` | A single-precision (32-bit) IEEE 754 floating-point number |
    -| `DOUBLE` | A double-precision (64-bit) IEEE 754 floating-point number |
    -| `BYTES` | A sequence of 8-bit unsigned bytes |
    -| `STRING` | A Unicode character sequence |
    -| `TIMESTAMP` (`DATE`, `TIME`) | A logical type that represents a specific instant in time with millisecond precision.
    
    It stores the number of milliseconds since `January 1, 1970, 00:00:00 GMT` as an `INT64` value | -| INSTANT | A single instantaneous point on the time-line with nanoseconds precision| -| LOCAL_DATE | An immutable date-time object that represents a date, often viewed as year-month-day| -| LOCAL_TIME | An immutable date-time object that represents a time, often viewed as hour-minute-second. Time is represented to nanosecond precision.| -| LOCAL_DATE_TIME | An immutable date-time object that represents a date-time, often viewed as year-month-day-hour-minute-second | - -For primitive types, Pulsar does not store any schema data in `SchemaInfo`. The `type` in `SchemaInfo` is used to determine how to serialize and deserialize the data. - -Some of the primitive schema implementations can use `properties` to store implementation-specific tunable settings. For example, a `string` schema can use `properties` to store the encoding charset to serialize and deserialize strings. - -The conversions between **Pulsar schema types** and **language-specific primitive types** are as below. - -| Schema Type | Java Type| Python Type | Go Type | -|---|---|---|---| -| BOOLEAN | boolean | bool | bool | -| INT8 | byte | | int8 | -| INT16 | short | | int16 | -| INT32 | int | | int32 | -| INT64 | long | | int64 | -| FLOAT | float | float | float32 | -| DOUBLE | double | float | float64| -| BYTES | byte[], ByteBuffer, ByteBuf | bytes | []byte | -| STRING | string | str | string| -| TIMESTAMP | java.sql.Timestamp | | | -| TIME | java.sql.Time | | | -| DATE | java.util.Date | | | -| INSTANT | java.time.Instant | | | -| LOCAL_DATE | java.time.LocalDate | | | -| LOCAL_TIME | java.time.LocalDateTime | | -| LOCAL_DATE_TIME | java.time.LocalTime | | - -**Example** - -This example demonstrates how to use a string schema. - -1. Create a producer with a string schema and send messages. - - ```java - - Producer producer = client.newProducer(Schema.STRING).create(); - producer.newMessage().value("Hello Pulsar!").send(); - - ``` - -2. Create a consumer with a string schema and receive messages. - - ```java - - Consumer consumer = client.newConsumer(Schema.STRING).subscribe(); - consumer.receive(); - - ``` - -### Complex type - -Currently, Pulsar supports the following complex types: - -| Complex Type | Description | -|---|---| -| `keyvalue` | Represents a complex type of a key/value pair. | -| `struct` | Handles structured data. It supports `AvroBaseStructSchema` and `ProtobufNativeSchema`. | - -#### keyvalue - -`Keyvalue` schema helps applications define schemas for both key and value. - -For `SchemaInfo` of `keyvalue` schema, Pulsar stores the `SchemaInfo` of key schema and the `SchemaInfo` of value schema together. - -Pulsar provides the following methods to encode a key/value pair in messages: - -* `INLINE` - -* `SEPARATED` - -You can choose the encoding type when constructing the key/value schema. - -````mdx-code-block - - - - -Key/value pairs are encoded together in the message payload. - - - - -Key is encoded in the message key and the value is encoded in the message payload. - -**Example** - -This example shows how to construct a key/value schema and then use it to produce and consume messages. - -1. Construct a key/value schema with `INLINE` encoding type. - - ```java - - Schema> kvSchema = Schema.KeyValue( - Schema.INT32, - Schema.STRING, - KeyValueEncodingType.INLINE - ); - - ``` - -2. Optionally, construct a key/value schema with `SEPARATED` encoding type. 
- - ```java - - Schema> kvSchema = Schema.KeyValue( - Schema.INT32, - Schema.STRING, - KeyValueEncodingType.SEPARATED - ); - - ``` - -3. Produce messages using a key/value schema. - - ```java - - Schema> kvSchema = Schema.KeyValue( - Schema.INT32, - Schema.STRING, - KeyValueEncodingType.SEPARATED - ); - - Producer> producer = client.newProducer(kvSchema) - .topic(TOPIC) - .create(); - - final int key = 100; - final String value = "value-100"; - - // send the key/value message - producer.newMessage() - .value(new KeyValue(key, value)) - .send(); - - ``` - -4. Consume messages using a key/value schema. - - ```java - - Schema> kvSchema = Schema.KeyValue( - Schema.INT32, - Schema.STRING, - KeyValueEncodingType.SEPARATED - ); - - Consumer> consumer = client.newConsumer(kvSchema) - ... - .topic(TOPIC) - .subscriptionName(SubscriptionName).subscribe(); - - // receive key/value pair - Message> msg = consumer.receive(); - KeyValue kv = msg.getValue(); - - ``` - - - - -```` - -#### struct - -This section describes the details of type and usage of the `struct` schema. - -##### Type - -`struct` schema supports `AvroBaseStructSchema` and `ProtobufNativeSchema`. - -|Type|Description| ----|---| -`AvroBaseStructSchema`|Pulsar uses [Avro Specification](http://avro.apache.org/docs/current/spec.html) to declare the schema definition for `AvroBaseStructSchema`, which supports `AvroSchema`, `JsonSchema`, and `ProtobufSchema`.

    This allows Pulsar:
    - to use the same tools to manage schema definitions
    - to use different serialization or deserialization methods to handle data| -`ProtobufNativeSchema`|`ProtobufNativeSchema` is based on protobuf native Descriptor.

    This allows Pulsar:
    - to use native protobuf-v3 to serialize or deserialize data
    - to use `AutoConsume` to deserialize data. - -##### Usage - -Pulsar provides the following methods to use the `struct` schema: - -* `static` - -* `generic` - -* `SchemaDefinition` - -````mdx-code-block - - - - -You can predefine the `struct` schema, which can be a POJO in Java, a `struct` in Go, or classes generated by Avro or Protobuf tools. - -**Example** - -Pulsar gets the schema definition from the predefined `struct` using an Avro library. The schema definition is the schema data stored as a part of the `SchemaInfo`. - -1. Create the _User_ class to define the messages sent to Pulsar topics. - - ```java - - @Builder - @AllArgsConstructor - @NoArgsConstructor - public static class User { - String name; - int age; - } - - ``` - -2. Create a producer with a `struct` schema and send messages. - - ```java - - Producer producer = client.newProducer(Schema.AVRO(User.class)).create(); - producer.newMessage().value(User.builder().name("pulsar-user").age(1).build()).send(); - - ``` - -3. Create a consumer with a `struct` schema and receive messages - - ```java - - Consumer consumer = client.newConsumer(Schema.AVRO(User.class)).subscribe(); - User user = consumer.receive(); - - ``` - - - - -Sometimes applications do not have pre-defined structs, and you can use this method to define schema and access data. - -You can define the `struct` schema using the `GenericSchemaBuilder`, generate a generic struct using `GenericRecordBuilder` and consume messages into `GenericRecord`. - -**Example** - -1. Use `RecordSchemaBuilder` to build a schema. - - ```java - - RecordSchemaBuilder recordSchemaBuilder = SchemaBuilder.record("schemaName"); - recordSchemaBuilder.field("intField").type(SchemaType.INT32); - SchemaInfo schemaInfo = recordSchemaBuilder.build(SchemaType.AVRO); - - Producer producer = client.newProducer(Schema.generic(schemaInfo)).create(); - - ``` - -2. Use `RecordBuilder` to build the struct records. - - ```java - - producer.newMessage().value(schema.newRecordBuilder() - .set("intField", 32) - .build()).send(); - - ``` - - - - -You can define the `schemaDefinition` to generate a `struct` schema. - -**Example** - -1. Create the _User_ class to define the messages sent to Pulsar topics. - - ```java - - @Builder - @AllArgsConstructor - @NoArgsConstructor - public static class User { - String name; - int age; - } - - ``` - -2. Create a producer with a `SchemaDefinition` and send messages. - - ```java - - SchemaDefinition schemaDefinition = SchemaDefinition.builder().withPojo(User.class).build(); - Producer producer = client.newProducer(Schema.AVRO(schemaDefinition)).create(); - producer.newMessage().value(User.builder().name("pulsar-user").age(1).build()).send(); - - ``` - -3. Create a consumer with a `SchemaDefinition` schema and receive messages - - ```java - - SchemaDefinition schemaDefinition = SchemaDefinition.builder().withPojo(User.class).build(); - Consumer consumer = client.newConsumer(Schema.AVRO(schemaDefinition)).subscribe(); - User user = consumer.receive().getValue(); - - ``` - - - - -```` - -### Auto Schema - -If you don't know the schema type of a Pulsar topic in advance, you can use AUTO schema to produce or consume generic records to or from brokers. - -| Auto Schema Type | Description | -|---|---| -| `AUTO_PRODUCE` | This is useful for transferring data **from a producer to a Pulsar topic that has a schema**. | -| `AUTO_CONSUME` | This is useful for transferring data **from a Pulsar topic that has a schema to a consumer**. 
| - -#### AUTO_PRODUCE - -`AUTO_PRODUCE` schema helps a producer validate whether the bytes sent by the producer is compatible with the schema of a topic. - -**Example** - -Suppose that: - -* You have a producer processing messages from a Kafka topic _K_. - -* You have a Pulsar topic _P_, and you do not know its schema type. - -* Your application reads the messages from _K_ and writes the messages to _P_. - -In this case, you can use `AUTO_PRODUCE` to verify whether the bytes produced by _K_ can be sent to _P_ or not. - -```java - -Produce pulsarProducer = client.newProducer(Schema.AUTO_PRODUCE()) - … - .create(); - -byte[] kafkaMessageBytes = … ; - -pulsarProducer.produce(kafkaMessageBytes); - -``` - -#### AUTO_CONSUME - -`AUTO_CONSUME` schema helps a Pulsar topic validate whether the bytes sent by a Pulsar topic is compatible with a consumer, that is, the Pulsar topic deserializes messages into language-specific objects using the `SchemaInfo` retrieved from broker-side. - -Currently, `AUTO_CONSUME` supports AVRO, JSON and ProtobufNativeSchema schemas. It deserializes messages into `GenericRecord`. - -**Example** - -Suppose that: - -* You have a Pulsar topic _P_. - -* You have a consumer (for example, MySQL) receiving messages from the topic _P_. - -* Your application reads the messages from _P_ and writes the messages to MySQL. - -In this case, you can use `AUTO_CONSUME` to verify whether the bytes produced by _P_ can be sent to MySQL or not. - -```java - -Consumer pulsarConsumer = client.newConsumer(Schema.AUTO_CONSUME()) - … - .subscribe(); - -Message msg = consumer.receive() ; -GenericRecord record = msg.getValue(); - -``` - -### Native Avro Schema - -When migrating or ingesting event or message data from external systems (such as Kafka and Cassandra), the events are often already serialized in Avro format. The applications producing the data typically have validated the data against their schemas (including compatibility checks) and stored them in a database or a dedicated service (such as a schema registry). The schema of each serialized data record is usually retrievable by some metadata attached to that record. In such cases, a Pulsar producer doesn't need to repeat the schema validation step when sending the ingested events to a topic. All it needs to do is passing each message or event with its schema to Pulsar. - -Hence, we provide `Schema.NATIVE_AVRO` to wrap a native Avro schema of type `org.apache.avro.Schema`. The result is a schema instance of Pulsar that accepts a serialized Avro payload without validating it against the wrapped Avro schema. - -**Example** - -```java - -org.apache.avro.Schema nativeAvroSchema = … ; - -Producer producer = pulsarClient.newProducer().topic("ingress").create(); - -byte[] content = … ; - -producer.newMessage(Schema.NATIVE_AVRO(nativeAvroSchema)).value(content).send(); - -``` - -## Schema version - -Each `SchemaInfo` stored with a topic has a version. Schema version manages schema changes happening within a topic. - -Messages produced with a given `SchemaInfo` is tagged with a schema version, so when a message is consumed by a Pulsar client, the Pulsar client can use the schema version to retrieve the corresponding `SchemaInfo` and then use the `SchemaInfo` to deserialize data. - -Schemas are versioned in succession. Schema storage happens in a broker that handles the associated topics so that version assignments can be made. 
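    For a concrete sense of what the tag looks like, here is a minimal sketch that reads the schema version the broker attached to a consumed message. It assumes the same `SensorReading` POJO and `sensor-data` topic used in the example below, plus a hypothetical subscription name `my-sub`; `Message#getSchemaVersion()` returns the version as an opaque byte array:

    ```java

    import java.nio.ByteBuffer;

    import org.apache.pulsar.client.api.Consumer;
    import org.apache.pulsar.client.api.Message;
    import org.apache.pulsar.client.api.PulsarClient;
    import org.apache.pulsar.client.api.Schema;

    PulsarClient client = PulsarClient.builder()
            .serviceUrl("pulsar://localhost:6650")
            .build();

    // Subscribe with the same schema type the topic uses; "my-sub" is a hypothetical name.
    Consumer<SensorReading> consumer = client.newConsumer(Schema.JSON(SensorReading.class))
            .topic("sensor-data")
            .subscriptionName("my-sub")
            .subscribe();

    Message<SensorReading> msg = consumer.receive();

    // The broker tags each message with the version of the schema it was produced with.
    // The tag is an opaque byte array; with the default BookKeeper-backed schema storage
    // it is typically an 8-byte big-endian long.
    byte[] version = msg.getSchemaVersion();
    System.out.println("Schema version tag: " + ByteBuffer.wrap(version).getLong());

    consumer.acknowledge(msg);

    ```
    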
- -Once a version is assigned/fetched to/for a schema, all subsequent messages produced by that producer are tagged with the appropriate version. - -**Example** - -The following example illustrates how the schema version works. - -Suppose that a Pulsar [Java client](client-libraries-java.md) created using the code below attempts to connect to Pulsar and begins to send messages: - -```java - -PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://localhost:6650") - .build(); - -Producer producer = client.newProducer(JSONSchema.of(SensorReading.class)) - .topic("sensor-data") - .sendTimeout(3, TimeUnit.SECONDS) - .create(); - -``` - -The table below lists the possible scenarios when this connection attempt occurs and what happens in each scenario: - -| Scenario | What happens | -| --- | --- | -|
    No schema exists for the topic. | (1) The producer is created using the given schema. (2) Since no existing schema is compatible with the `SensorReading` schema, the schema is transmitted to the broker and stored. (3) Any consumer created using the same schema or topic can consume messages from the `sensor-data` topic. |
    -| A schema already exists. The producer connects using the same schema that is already stored. | (1) The schema is transmitted to the broker. (2) The broker determines that the schema is compatible. (3) The broker attempts to store the schema in [BookKeeper](concepts-architecture-overview.md#persistent-storage) but then determines that it's already stored, so it is used to tag produced messages. |
    -| A schema already exists. The producer connects using a new schema that is compatible. | (1) The schema is transmitted to the broker. (2) The broker determines that the schema is compatible and stores the new schema as the current version (with a new version number). |
    -
    -## How schema works
    -
    -Pulsar schemas are applied and enforced at the **topic** level (schemas cannot be applied at the namespace or tenant level).
    -
    -Producers and consumers upload schemas to brokers, so Pulsar schemas work on both the producer side and the consumer side.
    -
    -### Producer side
    -
    -This diagram illustrates how schema works on the producer side.
    -
    -![Schema works at the producer side](/assets/schema-producer.png)
    -
    -1. The application uses a schema instance to construct a producer instance.
    -
    -   The schema instance defines the schema for the data being produced using the producer instance.
    -
    -   Take AVRO as an example: Pulsar extracts the schema definition from the POJO class and constructs the `SchemaInfo` that the producer needs to pass to the broker when it connects.
    -
    -2. The producer connects to the broker with the `SchemaInfo` extracted from the passed-in schema instance.
    -
    -3. The broker looks up the schema in the schema storage to check whether it is already a registered schema.
    -
    -4. If yes, the broker skips the schema validation since it is a known schema, and returns the schema version to the producer.
    -
    -5. If no, the broker verifies whether a schema can be automatically created in this namespace:
    -
    -  * If `isAllowAutoUpdateSchema` is set to **true**, then a schema can be created, and the broker validates the schema based on the schema compatibility check strategy defined for the topic.
    -
    -  * If `isAllowAutoUpdateSchema` is set to **false**, then a schema cannot be created, and the broker rejects the producer's connection.
    -
    -**Tip**:
    -
    -`isAllowAutoUpdateSchema` can be set via **Pulsar admin API** or **REST API.**
    -
    -For how to set `isAllowAutoUpdateSchema` via Pulsar admin API, see [Manage AutoUpdate Strategy](schema-manage.md#manage-autoupdate-strategy).
    -
    -6. If the schema is allowed to be updated, then the compatibility check is performed.
    -
    -  * If the schema is compatible, the broker stores it and returns the schema version to the producer.
    -
    -     All the messages produced by this producer are tagged with the schema version.
    -
    -  * If the schema is incompatible, the broker rejects it.
    -
    -### Consumer side
    -
    -This diagram illustrates how schema works on the consumer side.
    -
    -![Schema works at the consumer side](/assets/schema-consumer.png)
    -
    -1. The application uses a schema instance to construct a consumer instance.
    -
    -   The schema instance defines the schema that the consumer uses for decoding messages received from a broker.
    -
    -2. The consumer connects to the broker with the `SchemaInfo` extracted from the passed-in schema instance.
    -
    -3. The broker determines whether the topic has at least one of the following: a schema, data, a local consumer, or a local producer.
    -
    -4. If the topic has none of the above (no schema, no data, and no local consumer or producer):
    -
    -  * If `isAllowAutoUpdateSchema` is set to **true**, then the consumer registers a schema and is connected to the broker.
    -
    -  * If `isAllowAutoUpdateSchema` is set to **false**, then the broker rejects the consumer's connection.
    -
    -5. If the topic has at least one of the above, then the schema compatibility check is performed.
    -
    -  * If the schema passes the compatibility check, then the consumer is connected to the broker.
    
- - * If the schema does not pass the compatibility check, then the consumer is rejected to connect to the broker. - -6. The consumer receives messages from the broker. - - If the schema used by the consumer supports schema versioning (for example, AVRO schema), the consumer fetches the `SchemaInfo` of the version tagged in messages and uses the passed-in schema and the schema tagged in messages to decode the messages. diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/security-athenz.md b/site2/website/versioned_docs/version-2.9.0-deprecated/security-athenz.md deleted file mode 100644 index 8a39fe25316d07..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/security-athenz.md +++ /dev/null @@ -1,98 +0,0 @@ ---- -id: security-athenz -title: Authentication using Athenz -sidebar_label: "Authentication using Athenz" -original_id: security-athenz ---- - -[Athenz](https://github.com/AthenZ/athenz) is a role-based authentication/authorization system. In Pulsar, you can use Athenz role tokens (also known as *z-tokens*) to establish the identify of the client. - -## Athenz authentication settings - -A [decentralized Athenz system](https://github.com/AthenZ/athenz/blob/master/docs/decent_authz_flow.md) contains an [authori**Z**ation **M**anagement **S**ystem](https://github.com/AthenZ/athenz/blob/master/docs/setup_zms.md) (ZMS) server and an [authori**Z**ation **T**oken **S**ystem](https://github.com/AthenZ/athenz/blob/master/docs/setup_zts) (ZTS) server. - -To begin, you need to set up Athenz service access control. You need to create domains for the *provider* (which provides some resources to other services with some authentication/authorization policies) and the *tenant* (which is provisioned to access some resources in a provider). In this case, the provider corresponds to the Pulsar service itself and the tenant corresponds to each application using Pulsar (typically, a [tenant](reference-terminology.md#tenant) in Pulsar). - -### Create the tenant domain and service - -On the [tenant](reference-terminology.md#tenant) side, you need to do the following things: - -1. Create a domain, such as `shopping` -2. Generate a private/public key pair -3. Create a service, such as `some_app`, on the domain with the public key - -Note that you need to specify the private key generated in step 2 when the Pulsar client connects to the [broker](reference-terminology.md#broker) (see client configuration examples for [Java](client-libraries-java.md#tls-authentication) and [C++](client-libraries-cpp.md#tls-authentication)). - -For more specific steps involving the Athenz UI, refer to [Example Service Access Control Setup](https://github.com/AthenZ/athenz/blob/master/docs/example_service_athenz_setup.md#client-tenant-domain). - -### Create the provider domain and add the tenant service to some role members - -On the provider side, you need to do the following things: - -1. Create a domain, such as `pulsar` -2. Create a role -3. Add the tenant service to members of the role - -Note that you can specify any action and resource in step 2 since they are not used on Pulsar. In other words, Pulsar uses the Athenz role token only for authentication, *not* for authorization. - -For more specific steps involving UI, refer to [Example Service Access Control Setup](https://github.com/AthenZ/athenz/blob/master/docs/example_service_athenz_setup.md#server-provider-domain). 
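    Before wiring up the broker, it may help to see how the tenant-side pieces fit together from a client's perspective. The following is a minimal Java sketch, assuming the `shopping` tenant domain, the `some_app` service, and the private key file generated in step 2 above, plus a hypothetical broker URL; the parameter names match those used in the broker configuration below:

    ```java

    import org.apache.pulsar.client.api.AuthenticationFactory;
    import org.apache.pulsar.client.api.PulsarClient;

    // Parameters mirror the tenant domain/service created above;
    // the private key is the one generated in step 2.
    String authParams = "{\"tenantDomain\":\"shopping\","
            + "\"tenantService\":\"some_app\","
            + "\"providerDomain\":\"pulsar\","
            + "\"privateKey\":\"file:///path/to/private.pem\","
            + "\"keyId\":\"v1\"}";

    PulsarClient client = PulsarClient.builder()
            .serviceUrl("pulsar+ssl://broker.example.com:6651") // hypothetical URL; TLS is recommended with Athenz
            .authentication(AuthenticationFactory.create(
                    "org.apache.pulsar.client.impl.auth.AuthenticationAthenz", authParams))
            .build();

    // ... create producers/consumers as usual ...
    client.close();

    ```
    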
- -## Configure the broker for Athenz - -> ### TLS encryption -> -> Note that when you are using Athenz as an authentication provider, you had better use TLS encryption -> as it can protect role tokens from being intercepted and reused. (for more details involving TLS encryption see [Architecture - Data Model](https://github.com/AthenZ/athenz/blob/master/docs/data_model)). - -In the `conf/broker.conf` configuration file in your Pulsar installation, you need to provide the class name of the Athenz authentication provider as well as a comma-separated list of provider domain names. - -```properties - -# Add the Athenz auth provider -authenticationEnabled=true -authorizationEnabled=true -authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderAthenz -athenzDomainNames=pulsar - -# Enable TLS -tlsEnabled=true -tlsCertificateFilePath=/path/to/broker-cert.pem -tlsKeyFilePath=/path/to/broker-key.pem - -# Authentication settings of the broker itself. Used when the broker connects to other brokers, either in same or other clusters -brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationAthenz -brokerClientAuthenticationParameters={"tenantDomain":"shopping","tenantService":"some_app","providerDomain":"pulsar","privateKey":"file:///path/to/private.pem","keyId":"v1"} - -``` - -> A full listing of parameters is available in the `conf/broker.conf` file, you can also find the default -> values for those parameters in [Broker Configuration](reference-configuration.md#broker). - -## Configure clients for Athenz - -For more information on Pulsar client authentication using Athenz, see the following language-specific docs: - -* [Java client](client-libraries-java.md#athenz) - -## Configure CLI tools for Athenz - -[Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-pulsar-admin.md), [`pulsar-perf`](reference-cli-tools.md#pulsar-perf), and [`pulsar-client`](reference-cli-tools.md#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation. - -You need to add the following authentication parameters to the `conf/client.conf` config file to use Athenz with CLI tools of Pulsar: - -```properties - -# URL for the broker -serviceUrl=https://broker.example.com:8443/ - -# Set Athenz auth plugin and its parameters -authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationAthenz -authParams={"tenantDomain":"shopping","tenantService":"some_app","providerDomain":"pulsar","privateKey":"file:///path/to/private.pem","keyId":"v1"} - -# Enable TLS -useTls=true -tlsAllowInsecureConnection=false -tlsTrustCertsFilePath=/path/to/cacert.pem - -``` - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/security-authorization.md b/site2/website/versioned_docs/version-2.9.0-deprecated/security-authorization.md deleted file mode 100644 index 9cfd7c8c203f63..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/security-authorization.md +++ /dev/null @@ -1,130 +0,0 @@ ---- -id: security-authorization -title: Authentication and authorization in Pulsar -sidebar_label: "Authorization and ACLs" -original_id: security-authorization ---- - - -In Pulsar, the [authentication provider](security-overview.md#authentication-providers) is responsible for properly identifying clients and associating the clients with [role tokens](security-overview.md#role-tokens). If you only enable authentication, an authenticated role token has the ability to access all resources in the cluster. 
*Authorization* is the process that determines *what* clients are able to do. - -The role tokens with the most privileges are the *superusers*. The *superusers* can create and destroy tenants, along with having full access to all tenant resources. - -When a superuser creates a [tenant](reference-terminology.md#tenant), that tenant is assigned an admin role. A client with the admin role token can then create, modify and destroy namespaces, and grant and revoke permissions to *other role tokens* on those namespaces. - -## Broker and Proxy Setup - -### Enable authorization and assign superusers -You can enable the authorization and assign the superusers in the broker ([`conf/broker.conf`](reference-configuration.md#broker)) configuration files. - -```properties - -authorizationEnabled=true -superUserRoles=my-super-user-1,my-super-user-2 - -``` - -> A full list of parameters is available in the `conf/broker.conf` file. -> You can also find the default values for those parameters in [Broker Configuration](reference-configuration.md#broker). - -Typically, you use superuser roles for administrators, clients as well as broker-to-broker authorization. When you use [geo-replication](concepts-replication.md), every broker needs to be able to publish to all the other topics of clusters. - -You can also enable the authorization for the proxy in the proxy configuration file (`conf/proxy.conf`). Once you enable the authorization on the proxy, the proxy does an additional authorization check before forwarding the request to a broker. -If you enable authorization on the broker, the broker checks the authorization of the request when the broker receives the forwarded request. - -### Proxy Roles - -By default, the broker treats the connection between a proxy and the broker as a normal user connection. The broker authenticates the user as the role configured in `proxy.conf`(see ["Enable TLS Authentication on Proxies"](security-tls-authentication.md#enable-tls-authentication-on-proxies)). However, when the user connects to the cluster through a proxy, the user rarely requires the authentication. The user expects to be able to interact with the cluster as the role for which they have authenticated with the proxy. - -Pulsar uses *Proxy roles* to enable the authentication. Proxy roles are specified in the broker configuration file, [`conf/broker.conf`](reference-configuration.md#broker). If a client that is authenticated with a broker is one of its ```proxyRoles```, all requests from that client must also carry information about the role of the client that is authenticated with the proxy. This information is called the *original principal*. If the *original principal* is absent, the client is not able to access anything. - -You must authorize both the *proxy role* and the *original principal* to access a resource to ensure that the resource is accessible via the proxy. Administrators can take two approaches to authorize the *proxy role* and the *original principal*. - -The more secure approach is to grant access to the proxy roles each time you grant access to a resource. For example, if you have a proxy role named `proxy1`, when the superuser creates a tenant, you should specify `proxy1` as one of the admin roles. When a role is granted permissions to produce or consume from a namespace, if that client wants to produce or consume through a proxy, you should also grant `proxy1` the same permissions. - -Another approach is to make the proxy role a superuser. This allows the proxy to access all resources. 
The client still needs to authenticate with the proxy, and all requests made through the proxy have their role downgraded to the *original principal* of the authenticated client. However, if the proxy is compromised, a bad actor could get full access to your cluster. - -You can specify the roles as proxy roles in [`conf/broker.conf`](reference-configuration.md#broker). - -```properties - -proxyRoles=my-proxy-role - -# if you want to allow superusers to use the proxy (see above) -superUserRoles=my-super-user-1,my-super-user-2,my-proxy-role - -``` - -## Administer tenants - -Pulsar [instance](reference-terminology.md#instance) administrators or some kind of self-service portal typically provisions a Pulsar [tenant](reference-terminology.md#tenant). - -You can manage tenants using the [`pulsar-admin`](reference-pulsar-admin.md) tool. - -### Create a new tenant - -The following is an example tenant creation command: - -```shell - -$ bin/pulsar-admin tenants create my-tenant \ - --admin-roles my-admin-role \ - --allowed-clusters us-west,us-east - -``` - -This command creates a new tenant `my-tenant` that is allowed to use the clusters `us-west` and `us-east`. - -A client that successfully identifies itself as having the role `my-admin-role` is allowed to perform all administrative tasks on this tenant. - -The structure of topic names in Pulsar reflects the hierarchy between tenants, clusters, and namespaces: - -```shell - -persistent://tenant/namespace/topic - -``` - -### Manage permissions - -You can use [Pulsar Admin Tools](admin-api-permissions.md) for managing permission in Pulsar. - -### Pulsar admin authentication - -```java - -PulsarAdmin admin = PulsarAdmin.builder() - .serviceHttpUrl("http://broker:8080") - .authentication("com.org.MyAuthPluginClass", "param1:value1") - .build(); - -``` - -To use TLS: - -```java - -PulsarAdmin admin = PulsarAdmin.builder() - .serviceHttpUrl("https://broker:8080") - .authentication("com.org.MyAuthPluginClass", "param1:value1") - .tlsTrustCertsFilePath("/path/to/trust/cert") - .build(); - -``` - -## Authorize an authenticated client with multiple roles - -When a client is identified with multiple roles in a token (the type of role claim in the token is an array) during the authentication process, Pulsar supports to check the permissions of all the roles and further authorize the client as long as one of its roles has the required permissions. - -> **Note**
    -> This authorization method is only compatible with [JWT authentication](security-jwt.md). - -To enable this authorization method, configure the authorization provider as `MultiRolesTokenAuthorizationProvider` in the `conf/broker.conf` file. - - ```properties - - # Authorization provider fully qualified class-name - authorizationProvider=org.apache.pulsar.broker.authorization.MultiRolesTokenAuthorizationProvider - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/security-basic-auth.md b/site2/website/versioned_docs/version-2.9.0-deprecated/security-basic-auth.md deleted file mode 100644 index 2585526bb478af..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/security-basic-auth.md +++ /dev/null @@ -1,127 +0,0 @@ ---- -id: security-basic-auth -title: Authentication using HTTP basic -sidebar_label: "Authentication using HTTP basic" ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - -[Basic authentication](https://en.wikipedia.org/wiki/Basic_access_authentication) is a simple authentication scheme built into the HTTP protocol, which uses base64-encoded username and password pairs as credentials. - -## Prerequisites - -Install [`htpasswd`](https://httpd.apache.org/docs/2.4/programs/htpasswd.html) in your environment to create a password file for storing username-password pairs. - -* For Ubuntu/Debian, run the following command to install `htpasswd`. - - ``` - apt install apache2-utils - ``` - -* For CentOS/RHEL, run the following command to install `htpasswd`. - - ``` - yum install httpd-tools - ``` - -## Create your authentication file - -:::note -Currently, you can use MD5 (recommended) and CRYPT encryption to authenticate your password. -::: - -Create a password file named `.htpasswd` with a user account `superuser/admin`: -* Use MD5 encryption (recommended): - - ``` - htpasswd -cmb /path/to/.htpasswd superuser admin - ``` - -* Use CRYPT encryption: - - ``` - htpasswd -cdb /path/to/.htpasswd superuser admin - ``` - -You can preview the content of your password file by running the following command: - -``` -cat path/to/.htpasswd -superuser:$apr1$GBIYZYFZ$MzLcPrvoUky16mLcK6UtX/ -``` - -## Enable basic authentication on brokers - -To configure brokers to authenticate clients, complete the following steps. - -1. Add the following parameters to the `conf/broker.conf` file. If you use a standalone Pulsar, you need to add these parameters to the `conf/standalone.conf` file. - - ``` - # Configuration to enable Basic authentication - authenticationEnabled=true - authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderBasic - - # Authentication settings of the broker itself. Used when the broker connects to other brokers, either in same or other clusters - brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationBasic - brokerClientAuthenticationParameters={"userId":"superuser","password":"admin"} - - # If this flag is set then the broker authenticates the original Auth data - # else it just accepts the originalPrincipal and authorizes it (if required). - authenticateOriginalAuthData=true - ``` - -2. Set an environment variable named `PULSAR_EXTRA_OPTS` and the value is `-Dpulsar.auth.basic.conf=/path/to/.htpasswd`. Pulsar reads this environment variable to implement HTTP basic authentication. - -## Enable basic authentication on proxies - -To configure proxies to authenticate clients, complete the following steps. - -1. 
Add the following parameters to the `conf/proxy.conf` file: - - ``` - # For clients connecting to the proxy - authenticationEnabled=true - authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderBasic - - # For the proxy to connect to brokers - brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationBasic - brokerClientAuthenticationParameters={"userId":"superuser","password":"admin"} - - # Whether client authorization credentials are forwarded to the broker for re-authorization. - # Authentication must be enabled via authenticationEnabled=true for this to take effect. - forwardAuthorizationCredentials=true - ``` - -2. Set an environment variable named `PULSAR_EXTRA_OPTS` and the value is `-Dpulsar.auth.basic.conf=/path/to/.htpasswd`. Pulsar reads this environment variable to implement HTTP basic authentication. - -## Configure basic authentication in CLI tools - -[Command-line tools](/docs/next/reference-cli-tools), such as [Pulsar-admin](/tools/pulsar-admin/), [Pulsar-perf](/tools/pulsar-perf/) and [Pulsar-client](/tools/pulsar-client/), use the `conf/client.conf` file in your Pulsar installation. To configure basic authentication in Pulsar CLI tools, you need to add the following parameters to the `conf/client.conf` file. - -``` -authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationBasic -authParams={"userId":"superuser","password":"admin"} -``` - - -## Configure basic authentication in Pulsar clients - -The following example shows how to configure basic authentication when using Pulsar clients. - - - - - ```java - AuthenticationBasic auth = new AuthenticationBasic(); - auth.configure("{\"userId\":\"superuser\",\"password\":\"admin\"}"); - PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://broker.example.com:6650") - .authentication(auth) - .build(); - ``` - - - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/security-bouncy-castle.md b/site2/website/versioned_docs/version-2.9.0-deprecated/security-bouncy-castle.md deleted file mode 100644 index be937055d8e311..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/security-bouncy-castle.md +++ /dev/null @@ -1,157 +0,0 @@ ---- -id: security-bouncy-castle -title: Bouncy Castle Providers -sidebar_label: "Bouncy Castle Providers" -original_id: security-bouncy-castle ---- - -## BouncyCastle Introduce - -`Bouncy Castle` is a Java library that complements the default Java Cryptographic Extension (JCE), -and it provides more cipher suites and algorithms than the default JCE provided by Sun. - -In addition to that, `Bouncy Castle` has lots of utilities for reading arcane formats like PEM and ASN.1 that no sane person would want to rewrite themselves. - -In Pulsar, security and crypto have dependencies on BouncyCastle Jars. For the detailed installing and configuring Bouncy Castle FIPS, see [BC FIPS Documentation](https://www.bouncycastle.org/documentation.html), especially the **User Guides** and **Security Policy** PDFs. - -`Bouncy Castle` provides both [FIPS](https://www.bouncycastle.org/fips_faq.html) and non-FIPS version. But in a JVM, you can not include both of the 2 versions, and you need to exclude the current version before include the other. - -In Pulsar, the security and crypto methods also depends on `Bouncy Castle`, especially in [TLS Authentication](security-tls-authentication.md) and [Transport Encryption](security-encryption.md). 
This document contains the configuration between BouncyCastle FIPS(BC-FIPS) and non-FIPS(BC-non-FIPS) version while using Pulsar. - -## How BouncyCastle modules packaged in Pulsar - -In Pulsar's `bouncy-castle` module, We provide 2 sub modules: `bouncy-castle-bc`(for non-FIPS version) and `bouncy-castle-bcfips`(for FIPS version), to package BC jars together to make the include and exclude of `Bouncy Castle` easier. - -To achieve this goal, we will need to package several `bouncy-castle` jars together into `bouncy-castle-bc` or `bouncy-castle-bcfips` jar. -Each of the original bouncy-castle jar is related with security, so BouncyCastle dutifully supplies signed of each JAR. -But when we do the re-package, Maven shade explodes the BouncyCastle jar file which puts the signatures into META-INF, -these signatures aren't valid for this new, uber-jar (signatures are only for the original BC jar). -Usually, You will meet error like `java.lang.SecurityException: Invalid signature file digest for Manifest main attributes`. - -You could exclude these signatures in mvn pom file to avoid above error, by - -```access transformers - -META-INF/*.SF -META-INF/*.DSA -META-INF/*.RSA - -``` - -But it can also lead to new, cryptic errors, e.g. `java.security.NoSuchAlgorithmException: PBEWithSHA256And256BitAES-CBC-BC SecretKeyFactory not available` -By explicitly specifying where to find the algorithm like this: `SecretKeyFactory.getInstance("PBEWithSHA256And256BitAES-CBC-BC","BC")` -It will get the real error: `java.security.NoSuchProviderException: JCE cannot authenticate the provider BC` - -So, we used a [executable packer plugin](https://github.com/nthuemmel/executable-packer-maven-plugin) that uses a jar-in-jar approach to preserve the BouncyCastle signature in a single, executable jar. - -### Include dependencies of BC-non-FIPS - -Pulsar module `bouncy-castle-bc`, which defined by `bouncy-castle/bc/pom.xml` contains the needed non-FIPS jars for Pulsar, and packaged as a jar-in-jar(need to provide `pkg`). - -```xml - - - org.bouncycastle - bcpkix-jdk15on - ${bouncycastle.version} - - - - org.bouncycastle - bcprov-ext-jdk15on - ${bouncycastle.version} - - -``` - -By using this `bouncy-castle-bc` module, you can easily include and exclude BouncyCastle non-FIPS jars. - -### Modules that include BC-non-FIPS module (`bouncy-castle-bc`) - -For Pulsar client, user need the bouncy-castle module, so `pulsar-client-original` will include the `bouncy-castle-bc` module, and have `pkg` set to reference the `jar-in-jar` package. -It is included as following example: - -```xml - - - org.apache.pulsar - bouncy-castle-bc - ${pulsar.version} - pkg - - -``` - -By default `bouncy-castle-bc` already included in `pulsar-client-original`, And `pulsar-client-original` has been included in a lot of other modules like `pulsar-client-admin`, `pulsar-broker`. -But for the above shaded jar and signatures reason, we should not package Pulsar's `bouncy-castle` module into `pulsar-client-all` other shaded modules directly, such as `pulsar-client-shaded`, `pulsar-client-admin-shaded` and `pulsar-broker-shaded`. -So in the shaded modules, we will exclude the `bouncy-castle` modules. - -```xml - - - - org.apache.pulsar:pulsar-client-original - - ** - - - org/bouncycastle/** - - - - -``` - -That means, `bouncy-castle` related jars are not shaded in these fat jars. - -### Module BC-FIPS (`bouncy-castle-bcfips`) - -Pulsar module `bouncy-castle-bcfips`, which defined by `bouncy-castle/bcfips/pom.xml` contains the needed FIPS jars for Pulsar. 
-Similar to `bouncy-castle-bc`, `bouncy-castle-bcfips` also packaged as a `jar-in-jar` package for easy include/exclude. - -```xml - - - org.bouncycastle - bc-fips - ${bouncycastlefips.version} - - - - org.bouncycastle - bcpkix-fips - ${bouncycastlefips.version} - - -``` - -### Exclude BC-non-FIPS and include BC-FIPS - -If you want to switch from BC-non-FIPS to BC-FIPS version, Here is an example for `pulsar-broker` module: - -```xml - - - org.apache.pulsar - pulsar-broker - ${pulsar.version} - - - org.apache.pulsar - bouncy-castle-bc - - - - - - org.apache.pulsar - bouncy-castle-bcfips - ${pulsar.version} - pkg - - -``` - - -For more example, you can reference module `bcfips-include-test`. - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/security-encryption.md b/site2/website/versioned_docs/version-2.9.0-deprecated/security-encryption.md deleted file mode 100644 index c2f3530d94d9e4..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/security-encryption.md +++ /dev/null @@ -1,200 +0,0 @@ ---- -id: security-encryption -title: Pulsar Encryption -sidebar_label: "End-to-End Encryption" -original_id: security-encryption ---- - -Applications can use Pulsar encryption to encrypt messages on the producer side and decrypt messages on the consumer side. You can use the public and private key pair that the application configures to perform encryption. Only the consumers with a valid key can decrypt the encrypted messages. - -## Asymmetric and symmetric encryption - -Pulsar uses a dynamically generated symmetric AES key to encrypt messages(data). You can use the application-provided ECDSA (Elliptic Curve Digital Signature Algorithm) or RSA (Rivest–Shamir–Adleman) key pair to encrypt the AES key(data key), so you do not have to share the secret with everyone. - -Key is a public and private key pair used for encryption or decryption. The producer key is the public key of the key pair, and the consumer key is the private key of the key pair. - -The application configures the producer with the public key. You can use this key to encrypt the AES data key. The encrypted data key is sent as part of message header. Only entities with the private key (in this case the consumer) are able to decrypt the data key which is used to decrypt the message. - -You can encrypt a message with more than one key. Any one of the keys used for encrypting the message is sufficient to decrypt the message. - -Pulsar does not store the encryption key anywhere in the Pulsar service. If you lose or delete the private key, your message is irretrievably lost, and is unrecoverable. - -## Producer -![alt text](/assets/pulsar-encryption-producer.jpg "Pulsar Encryption Producer") - -## Consumer -![alt text](/assets/pulsar-encryption-consumer.jpg "Pulsar Encryption Consumer") - -## Get started - -1. Create your ECDSA or RSA public and private key pair by using the following commands. - * ECDSA(for Java clients only) - - ```shell - - openssl ecparam -name secp521r1 -genkey -param_enc explicit -out test_ecdsa_privkey.pem - openssl ec -in test_ecdsa_privkey.pem -pubout -outform pem -out test_ecdsa_pubkey.pem - - ``` - - * RSA (for C++, Python and Node.js clients) - - ```shell - - openssl genrsa -out test_rsa_privkey.pem 2048 - openssl rsa -in test_rsa_privkey.pem -pubout -outform pkcs8 -out test_rsa_pubkey.pem - - ``` - -2. Add the public and private key to the key management and configure your producers to retrieve public keys and consumers clients to retrieve private keys. - -3. 
Implement the `CryptoKeyReader` interface, specifically `CryptoKeyReader.getPublicKey()` for producer and `CryptoKeyReader.getPrivateKey()` for consumer, which Pulsar client invokes to load the key. - -4. Add the encryption key name to the producer builder: PulsarClient.newProducer().addEncryptionKey("myapp.key"). - -5. Add CryptoKeyReader implementation to producer or consumer builder: PulsarClient.newProducer().cryptoKeyReader(keyReader) / PulsarClient.newConsumer().cryptoKeyReader(keyReader). - -6. Sample producer application: - -```java - -class RawFileKeyReader implements CryptoKeyReader { - - String publicKeyFile = ""; - String privateKeyFile = ""; - - RawFileKeyReader(String pubKeyFile, String privKeyFile) { - publicKeyFile = pubKeyFile; - privateKeyFile = privKeyFile; - } - - @Override - public EncryptionKeyInfo getPublicKey(String keyName, Map keyMeta) { - EncryptionKeyInfo keyInfo = new EncryptionKeyInfo(); - try { - keyInfo.setKey(Files.readAllBytes(Paths.get(publicKeyFile))); - } catch (IOException e) { - System.out.println("ERROR: Failed to read public key from file " + publicKeyFile); - e.printStackTrace(); - } - return keyInfo; - } - - @Override - public EncryptionKeyInfo getPrivateKey(String keyName, Map keyMeta) { - EncryptionKeyInfo keyInfo = new EncryptionKeyInfo(); - try { - keyInfo.setKey(Files.readAllBytes(Paths.get(privateKeyFile))); - } catch (IOException e) { - System.out.println("ERROR: Failed to read private key from file " + privateKeyFile); - e.printStackTrace(); - } - return keyInfo; - } -} - -PulsarClient pulsarClient = PulsarClient.builder().serviceUrl("pulsar://localhost:6650").build(); - -Producer producer = pulsarClient.newProducer() - .topic("persistent://my-tenant/my-ns/my-topic") - .addEncryptionKey("myappkey") - .cryptoKeyReader(new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem")) - .create(); - -for (int i = 0; i < 10; i++) { - producer.send("my-message".getBytes()); -} - -producer.close(); -pulsarClient.close(); - -``` - -7. 
Sample Consumer Application: - -```java - -class RawFileKeyReader implements CryptoKeyReader { - - String publicKeyFile = ""; - String privateKeyFile = ""; - - RawFileKeyReader(String pubKeyFile, String privKeyFile) { - publicKeyFile = pubKeyFile; - privateKeyFile = privKeyFile; - } - - @Override - public EncryptionKeyInfo getPublicKey(String keyName, Map keyMeta) { - EncryptionKeyInfo keyInfo = new EncryptionKeyInfo(); - try { - keyInfo.setKey(Files.readAllBytes(Paths.get(publicKeyFile))); - } catch (IOException e) { - System.out.println("ERROR: Failed to read public key from file " + publicKeyFile); - e.printStackTrace(); - } - return keyInfo; - } - - @Override - public EncryptionKeyInfo getPrivateKey(String keyName, Map keyMeta) { - EncryptionKeyInfo keyInfo = new EncryptionKeyInfo(); - try { - keyInfo.setKey(Files.readAllBytes(Paths.get(privateKeyFile))); - } catch (IOException e) { - System.out.println("ERROR: Failed to read private key from file " + privateKeyFile); - e.printStackTrace(); - } - return keyInfo; - } -} - -PulsarClient pulsarClient = PulsarClient.builder().serviceUrl("pulsar://localhost:6650").build(); -Consumer consumer = pulsarClient.newConsumer() - .topic("persistent://my-tenant/my-ns/my-topic") - .subscriptionName("my-subscriber-name") - .cryptoKeyReader(new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem")) - .subscribe(); -Message msg = null; - -for (int i = 0; i < 10; i++) { - msg = consumer.receive(); - // do something - System.out.println("Received: " + new String(msg.getData())); -} - -// Acknowledge the consumption of all messages at once -consumer.acknowledgeCumulative(msg); -consumer.close(); -pulsarClient.close(); - -``` - -## Key rotation -Pulsar generates a new AES data key every 4 hours or after publishing a certain number of messages. A producer fetches the asymmetric public key every 4 hours by calling CryptoKeyReader.getPublicKey() to retrieve the latest version. - -## Enable encryption at the producer application -If you produce messages that are consumed across application boundaries, you need to ensure that consumers in other applications have access to one of the private keys that can decrypt the messages. You can do this in two ways: -1. The consumer application provides you access to their public key, which you add to your producer keys. -2. You grant access to one of the private keys from the pairs that producer uses. - -When producers want to encrypt the messages with multiple keys, producers add all such keys to the config. Consumer can decrypt the message as long as the consumer has access to at least one of the keys. - -If you need to encrypt the messages using 2 keys (`myapp.messagekey1` and `myapp.messagekey2`), refer to the following example. - -```java - -PulsarClient.newProducer().addEncryptionKey("myapp.messagekey1").addEncryptionKey("myapp.messagekey2"); - -``` - -## Decrypt encrypted messages at the consumer application -Consumers require to access one of the private keys to decrypt messages that the producer produces. If you want to receive encrypted messages, create a public or private key and give your public key to the producer application to encrypt messages using your public key. - -## Handle failures -* Producer/Consumer loses access to the key - * Producer action fails to indicate the cause of the failure. Application has the option to proceed with sending unencrypted messages in such cases. Call `PulsarClient.newProducer().cryptoFailureAction(ProducerCryptoFailureAction)` to control the producer behavior. 
The default behavior is to fail the request. - * If consumption fails due to decryption failure or missing keys in consumer, the application has the option to consume the encrypted message or discard it. Call `PulsarClient.newConsumer().cryptoFailureAction(ConsumerCryptoFailureAction)` to control the consumer behavior. The default behavior is to fail the request. Application is never able to decrypt the messages if the private key is permanently lost. -* Batch messaging - * If decryption fails and the message contains batch messages, client is not able to retrieve individual messages in the batch, hence message consumption fails even if cryptoFailureAction() is set to `ConsumerCryptoFailureAction.CONSUME`. -* If decryption fails, the message consumption stops and the application notices backlog growth in addition to decryption failure messages in the client log. If the application does not have access to the private key to decrypt the message, the only option is to skip or discard backlogged messages. diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/security-extending.md b/site2/website/versioned_docs/version-2.9.0-deprecated/security-extending.md deleted file mode 100644 index e7484453b8beb8..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/security-extending.md +++ /dev/null @@ -1,207 +0,0 @@ ---- -id: security-extending -title: Extending Authentication and Authorization in Pulsar -sidebar_label: "Extending" -original_id: security-extending ---- - -Pulsar provides a way to use custom authentication and authorization mechanisms. - -## Authentication - -Pulsar supports mutual TLS and Athenz authentication plugins. For how to use these authentication plugins, you can refer to the description in [Security](security-overview.md). - -You can use a custom authentication mechanism by providing the implementation in the form of two plugins. One plugin is for the Client library and the other plugin is for the Pulsar Proxy and/or Pulsar Broker to validate the credentials. - -### Client authentication plugin - -For the client library, you need to implement `org.apache.pulsar.client.api.Authentication`. By entering the command below you can pass this class when you create a Pulsar client: - -```java - -PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://localhost:6650") - .authentication(new MyAuthentication()) - .build(); - -``` - -You can use 2 interfaces to implement on the client side: - * `Authentication` -> http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/Authentication.html - * `AuthenticationDataProvider` -> http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/AuthenticationDataProvider.html - - -This in turn needs to provide the client credentials in the form of `org.apache.pulsar.client.api.AuthenticationDataProvider`. This leaves the chance to return different kinds of authentication token for different types of connection or by passing a certificate chain to use for TLS. - - -You can find examples for client authentication providers at: - - * Mutual TLS Auth -- https://github.com/apache/pulsar/tree/master/pulsar-client/src/main/java/org/apache/pulsar/client/impl/auth - * Athenz -- https://github.com/apache/pulsar/tree/master/pulsar-client-auth-athenz/src/main/java/org/apache/pulsar/client/impl/auth - -### Proxy/Broker authentication plugin - -On the proxy/broker side, you need to configure the corresponding plugin to validate the credentials that the client sends. 
The Proxy and Broker can support multiple authentication providers at the same time. - -In `conf/broker.conf` you can choose to specify a list of valid providers: - -```properties - -# Authentication provider name list, which is comma separated list of class names -authenticationProviders= - -``` - -To implement `org.apache.pulsar.broker.authentication.AuthenticationProvider` on one single interface: - -```java - -/** - * Provider of authentication mechanism - */ -public interface AuthenticationProvider extends Closeable { - - /** - * Perform initialization for the authentication provider - * - * @param config - * broker config object - * @throws IOException - * if the initialization fails - */ - void initialize(ServiceConfiguration config) throws IOException; - - /** - * @return the authentication method name supported by this provider - */ - String getAuthMethodName(); - - /** - * Validate the authentication for the given credentials with the specified authentication data - * - * @param authData - * provider specific authentication data - * @return the "role" string for the authenticated connection, if the authentication was successful - * @throws AuthenticationException - * if the credentials are not valid - */ - String authenticate(AuthenticationDataSource authData) throws AuthenticationException; - -} - -``` - -The following is the example for Broker authentication plugins: - - * Mutual TLS -- https://github.com/apache/pulsar/blob/master/pulsar-broker-common/src/main/java/org/apache/pulsar/broker/authentication/AuthenticationProviderTls.java - * Athenz -- https://github.com/apache/pulsar/blob/master/pulsar-broker-auth-athenz/src/main/java/org/apache/pulsar/broker/authentication/AuthenticationProviderAthenz.java - -## Authorization - -Authorization is the operation that checks whether a particular "role" or "principal" has permission to perform a certain operation. - -By default, you can use the embedded authorization provider provided by Pulsar. You can also configure a different authorization provider through a plugin. -Note that although the Authentication plugin is designed for use in both the Proxy and Broker, -the Authorization plugin is designed only for use on the Broker however the Proxy does perform some simple Authorization checks of Roles if authorization is enabled. - -To provide a custom provider, you need to implement the `org.apache.pulsar.broker.authorization.AuthorizationProvider` interface, put this class in the Pulsar broker classpath and configure the class in `conf/broker.conf`: - - ```properties - - # Authorization provider fully qualified class-name - authorizationProvider=org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider - - ``` - -```java - -/** - * Provider of authorization mechanism - */ -public interface AuthorizationProvider extends Closeable { - - /** - * Perform initialization for the authorization provider - * - * @param conf - * broker config object - * @param configCache - * pulsar zk configuration cache service - * @throws IOException - * if the initialization fails - */ - void initialize(ServiceConfiguration conf, ConfigurationCacheService configCache) throws IOException; - - /** - * Check if the specified role has permission to send messages to the specified fully qualified topic name. - * - * @param topicName - * the fully qualified topic name associated with the topic. - * @param role - * the app id used to send messages to the topic. 
- */
-    CompletableFuture<Boolean> canProduceAsync(TopicName topicName, String role,
-            AuthenticationDataSource authenticationData);
-
-    /**
-     * Check if the specified role has permission to receive messages from the specified fully qualified topic name.
-     *
-     * @param topicName
-     *            the fully qualified topic name associated with the topic.
-     * @param role
-     *            the app id used to receive messages from the topic.
-     * @param subscription
-     *            the subscription name defined by the client
-     */
-    CompletableFuture<Boolean> canConsumeAsync(TopicName topicName, String role,
-            AuthenticationDataSource authenticationData, String subscription);
-
-    /**
-     * Check whether the specified role can perform a lookup for the specified topic.
-     *
-     * For that the caller needs to have producer or consumer permission.
-     *
-     * @param topicName
-     * @param role
-     * @return
-     * @throws Exception
-     */
-    CompletableFuture<Boolean> canLookupAsync(TopicName topicName, String role,
-            AuthenticationDataSource authenticationData);
-
-    /**
-     * Grant authorization-action permission on a namespace to the given client.
-     *
-     * @param namespace
-     * @param actions
-     * @param role
-     * @param authDataJson
-     *            additional authdata in JSON format
-     * @return a CompletableFuture that completes exceptionally with an
-     *         IllegalArgumentException when the namespace is not found, or an
-     *         IllegalStateException when the permission grant fails
-     */
-    CompletableFuture<Void> grantPermissionAsync(NamespaceName namespace, Set<AuthAction> actions, String role,
-            String authDataJson);
-
-    /**
-     * Grant authorization-action permission on a topic to the given client.
-     *
-     * @param topicName
-     * @param role
-     * @param authDataJson
-     *            additional authdata in JSON format
-     * @return a CompletableFuture that completes exceptionally with an
-     *         IllegalArgumentException when the namespace is not found, or an
-     *         IllegalStateException when the permission grant fails
-     */
-    CompletableFuture<Void> grantPermissionAsync(TopicName topicName, Set<AuthAction> actions, String role,
-            String authDataJson);
-
-}
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/security-jwt.md b/site2/website/versioned_docs/version-2.9.0-deprecated/security-jwt.md
deleted file mode 100644
index 1fa65b7c27f60c..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/security-jwt.md
+++ /dev/null
@@ -1,331 +0,0 @@
----
-id: security-jwt
-title: Client authentication using tokens based on JSON Web Tokens
-sidebar_label: "Authentication using JWT"
-original_id: security-jwt
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-## Token authentication overview
-
-Pulsar supports authenticating clients using security tokens that are based on [JSON Web Tokens](https://jwt.io/introduction/) ([RFC-7519](https://tools.ietf.org/html/rfc7519)).
-
-You can use tokens to identify a Pulsar client and associate it with some "principal" (or "role") that
-is permitted to perform certain actions (e.g., publish to a topic or consume from a topic).
-
-A user typically gets a token string from the administrator (or some automated service).
-
-The compact representation of a signed JWT is a string that looks like the following:
-
-```
-
-eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY
-
-```
-
-The application specifies the token when it creates the client instance. Alternatively, you can pass a "token supplier" (a function that returns the token when the client library needs one).
-
-> #### Always use TLS transport encryption
-> Sending a token is equivalent to sending a password over the wire. Always use TLS encryption when you connect to the Pulsar service. See
-> [Transport Encryption using TLS](security-tls-transport.md) for more details.
-
-### CLI Tools
-
-[Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-pulsar-admin.md), [`pulsar-perf`](reference-cli-tools.md#pulsar-perf), and [`pulsar-client`](reference-cli-tools.md#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation.
-
-You need to add the following parameters to that file to use token authentication with Pulsar's CLI tools:
-
-```properties
-
-webServiceUrl=http://broker.example.com:8080/
-brokerServiceUrl=pulsar://broker.example.com:6650/
-authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken
-authParams=token:eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY
-
-```
-
-The token string can also be read from a file, for example:
-
-```
-
-authParams=file:///path/to/token/file
-
-```
-
-### Pulsar client
-
-You can use tokens to authenticate the following Pulsar clients.
-
-````mdx-code-block
-<Tabs
-  defaultValue="Java"
-  values={[{"label":"Java","value":"Java"},{"label":"Python","value":"Python"},{"label":"Go","value":"Go"},{"label":"C++","value":"C++"},{"label":"C#","value":"C#"}]}>
-<TabItem value="Java">
-
-```java
-
-PulsarClient client = PulsarClient.builder()
-    .serviceUrl("pulsar://broker.example.com:6650/")
-    .authentication(
-        AuthenticationFactory.token("eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY"))
-    .build();
-
-```
-
-Similarly, you can also pass a `Supplier<String>`:
-
-```java
-
-PulsarClient client = PulsarClient.builder()
-    .serviceUrl("pulsar://broker.example.com:6650/")
-    .authentication(
-        AuthenticationFactory.token(() -> {
-            // Read token from custom source
-            return readToken();
-        }))
-    .build();
-
-```
-
-</TabItem>
-<TabItem value="Python">
-
-```python
-
-from pulsar import Client, AuthenticationToken
-
-client = Client('pulsar://broker.example.com:6650/',
-                authentication=AuthenticationToken('eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY'))
-
-```
-
-Alternatively, you can also pass a `Supplier`:
-
-```python
-
-def read_token():
-    with open('/path/to/token.txt') as tf:
-        return tf.read().strip()
-
-client = Client('pulsar://broker.example.com:6650/',
-                authentication=AuthenticationToken(read_token))
-
-```
-
-</TabItem>
-<TabItem value="Go">
-
-```go
-
-client, err := NewClient(ClientOptions{
-    URL:            "pulsar://localhost:6650",
-    Authentication: NewAuthenticationToken("eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY"),
-})

-```
-
-Similarly, you can also pass a token supplier:
-
-```go
-
-client, err := NewClient(ClientOptions{
-    URL: "pulsar://localhost:6650",
-    Authentication: NewAuthenticationTokenSupplier(func () string {
-        // Read token from custom source
-        return readToken()
-    }),
-})
-
-```
-
-</TabItem>
-<TabItem value="C++">
-
-```c++
-
-#include <pulsar/Client.h>
-
-pulsar::ClientConfiguration config;
-config.setAuth(pulsar::AuthToken::createWithToken("eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY"));
-
-pulsar::Client client("pulsar://broker.example.com:6650/", config);
-
-```
-
-</TabItem>
-<TabItem value="C#">
-
-```c#
-
-var client = PulsarClient.Builder()
-                         .AuthenticateUsingToken("eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY")
-                         .Build();
-
-```
-
-</TabItem>
-</Tabs>
-````
-
-## Enable token authentication
-
-The guide below shows how to enable token authentication on a Pulsar cluster.
-
-JWT supports two different kinds of keys in order to generate and validate the tokens:
-
- * Symmetric: a single ***secret*** key is used both to generate and to validate tokens.
- * Asymmetric: a key pair consisting of a private key and a public key.
-   - The ***private*** key is used to generate tokens.
-   - The ***public*** key is used to validate tokens.
-
-### Create a secret key
-
-When you use a secret key, the administrator creates the key and uses it to generate the client tokens. You also configure this key on the brokers so that they can validate clients.
-
-The output file is generated in the root of your Pulsar installation directory. You can also provide an absolute path for the output file using the command below.
-
-```shell
-
-$ bin/pulsar tokens create-secret-key --output my-secret.key
-
-```
-
-Enter this command to generate a base64-encoded secret key.
-
-```shell
-
-$ bin/pulsar tokens create-secret-key --output /opt/my-secret.key --base64
-
-```
-
-### Create a key pair
-
-To use public and private keys, you need to create a key pair. Pulsar supports all algorithms that the Java JWT library (shown [here](https://github.com/jwtk/jjwt#signature-algorithms-keys)) supports.
-
-The output files are generated in the root of your Pulsar installation directory. 
You can also provide absolute path for the output file using the command below. - -```shell - -$ bin/pulsar tokens create-key-pair --output-private-key my-private.key --output-public-key my-public.key - -``` - - * Store `my-private.key` in a safe location and only administrator can use `my-private.key` to generate new tokens. - * `my-public.key` is distributed to all Pulsar brokers. You can publicly share this file without any security concern. - -### Generate tokens - -A token is the credential associated with a user. The association is done through the "principal" or "role". In the case of JWT tokens, this field is typically referred as **subject**, though they are exactly the same concept. - -Then, you need to use this command to require the generated token to have a **subject** field set. - -```shell - -$ bin/pulsar tokens create --secret-key file:///path/to/my-secret.key \ - --subject test-user - -``` - -This command prints the token string on stdout. - -Similarly, you can create a token by passing the "private" key using the command below: - -```shell - -$ bin/pulsar tokens create --private-key file:///path/to/my-private.key \ - --subject test-user - -``` - -Finally, you can enter the following command to create a token with a pre-defined TTL. And then the token is automatically invalidated. - -```shell - -$ bin/pulsar tokens create --secret-key file:///path/to/my-secret.key \ - --subject test-user \ - --expiry-time 1y - -``` - -### Authorization - -The token itself does not have any permission associated. The authorization engine determines whether the token should have permissions or not. Once you have created the token, you can grant permission for this token to do certain actions. The following is an example. - -```shell - -$ bin/pulsar-admin namespaces grant-permission my-tenant/my-namespace \ - --role test-user \ - --actions produce,consume - -``` - -### Enable token authentication on Brokers - -To configure brokers to authenticate clients, add the following parameters to `broker.conf`: - -```properties - -# Configuration to enable authentication and authorization -authenticationEnabled=true -authorizationEnabled=true -authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken - -# Authentication settings of the broker itself. Used when the broker connects to other brokers, either in same or other clusters -brokerClientTlsEnabled=true -brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken -brokerClientAuthenticationParameters={"token":"eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0LXVzZXIifQ.9OHgE9ZUDeBTZs7nSMEFIuGNEX18FLR3qvy8mqxSxXw"} -# Either configure the token string or specify to read it from a file. The following three available formats are all valid: -# brokerClientAuthenticationParameters={"token":"your-token-string"} -# brokerClientAuthenticationParameters=token:your-token-string -# brokerClientAuthenticationParameters=file:///path/to/token -brokerClientTrustCertsFilePath=/path/my-ca/certs/ca.cert.pem - -# If this flag is set then the broker authenticates the original Auth data -# else it just accepts the originalPrincipal and authorizes it (if required). 
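-# (Enabling this mainly matters when clients connect through a proxy: the broker then
-# re-validates the original client credentials forwarded by the proxy instead of only
-# trusting the proxy-supplied principal.)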
-authenticateOriginalAuthData=true - -# If using secret key (Note: key files must be DER-encoded) -tokenSecretKey=file:///path/to/secret.key -# The key can also be passed inline: -# tokenSecretKey=data:;base64,FLFyW0oLJ2Fi22KKCm21J18mbAdztfSHN/lAT5ucEKU= - -# If using public/private (Note: key files must be DER-encoded) -# tokenPublicKey=file:///path/to/public.key - -``` - -### Enable token authentication on Proxies - -To configure proxies to authenticate clients, add the following parameters to `proxy.conf`: - -The proxy uses its own token when connecting to brokers. You need to configure the role token for this key pair in the `proxyRoles` of the brokers. For more details, see the [authorization guide](security-authorization.md). - -```properties - -# For clients connecting to the proxy -authenticationEnabled=true -authorizationEnabled=true -authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken -tokenSecretKey=file:///path/to/secret.key - -# For the proxy to connect to brokers -brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken -brokerClientAuthenticationParameters={"token":"eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0LXVzZXIifQ.9OHgE9ZUDeBTZs7nSMEFIuGNEX18FLR3qvy8mqxSxXw"} -# Either configure the token string or specify to read it from a file. The following three available formats are all valid: -# brokerClientAuthenticationParameters={"token":"your-token-string"} -# brokerClientAuthenticationParameters=token:your-token-string -# brokerClientAuthenticationParameters=file:///path/to/token - -# Whether client authorization credentials are forwarded to the broker for re-authorization. -# Authentication must be enabled via authenticationEnabled=true for this to take effect. -forwardAuthorizationCredentials=true - -``` - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/security-kerberos.md b/site2/website/versioned_docs/version-2.9.0-deprecated/security-kerberos.md deleted file mode 100644 index c49fa3bea1fce0..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/security-kerberos.md +++ /dev/null @@ -1,443 +0,0 @@ ---- -id: security-kerberos -title: Authentication using Kerberos -sidebar_label: "Authentication using Kerberos" -original_id: security-kerberos ---- - -[Kerberos](https://web.mit.edu/kerberos/) is a network authentication protocol. By using secret-key cryptography, [Kerberos](https://web.mit.edu/kerberos/) is designed to provide strong authentication for client applications and server applications. - -In Pulsar, you can use Kerberos with [SASL](https://en.wikipedia.org/wiki/Simple_Authentication_and_Security_Layer) as a choice for authentication. And Pulsar uses the [Java Authentication and Authorization Service (JAAS)](https://en.wikipedia.org/wiki/Java_Authentication_and_Authorization_Service) for SASL configuration. You need to provide JAAS configurations for Kerberos authentication. - -This document introduces how to configure `Kerberos` with `SASL` between Pulsar clients and brokers and how to configure Kerberos for Pulsar proxy in detail. - -## Configuration for Kerberos between Client and Broker - -### Prerequisites - -To begin, you need to set up (or already have) a [Key Distribution Center(KDC)](https://en.wikipedia.org/wiki/Key_distribution_center). Also you need to configure and run the [Key Distribution Center(KDC)](https://en.wikipedia.org/wiki/Key_distribution_center)in advance. 
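-Once your KDC is running, you can sanity-check that it is reachable and that a principal can obtain a ticket by using the standard MIT Kerberos tools. The keytab path and principal below are only illustrative; substitute the values you create in the following sections.
-
-```shell
-
-# Obtain a ticket-granting ticket using a keytab (illustrative path and principal)
-kinit -kt /etc/security/keytabs/client.keytab client/localhost@EXAMPLE.COM
-
-# Verify that the ticket was issued
-klist
-
-```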
- -If your organization already uses a Kerberos server (for example, by using `Active Directory`), you do not have to install a new server for Pulsar. If your organization does not use a Kerberos server, you need to install one. Your Linux vendor might have packages for `Kerberos`. On how to install and configure Kerberos, refer to [Ubuntu](https://help.ubuntu.com/community/Kerberos), -[Redhat](https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Managing_Smart_Cards/installing-kerberos.html). - -Note that if you use Oracle Java, you need to download JCE policy files for your Java version and copy them to the `$JAVA_HOME/jre/lib/security` directory. - -#### Kerberos principals - -If you use the existing Kerberos system, ask your Kerberos administrator for a principal for each Brokers in your cluster and for every operating system user that accesses Pulsar with Kerberos authentication(via clients and tools). - -If you have installed your own Kerberos system, you can create these principals with the following commands: - -```shell - -### add Principals for broker -sudo /usr/sbin/kadmin.local -q 'addprinc -randkey broker/{hostname}@{REALM}' -sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{broker-keytabname}.keytab broker/{hostname}@{REALM}" -### add Principals for client -sudo /usr/sbin/kadmin.local -q 'addprinc -randkey client/{hostname}@{REALM}' -sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{client-keytabname}.keytab client/{hostname}@{REALM}" - -``` - -Note that *Kerberos* requires that all your hosts can be resolved with their FQDNs. - -The first part of Broker principal (for example, `broker` in `broker/{hostname}@{REALM}`) is the `serverType` of each host. The suggested values of `serverType` are `broker` (host machine runs service Pulsar Broker) and `proxy` (host machine runs service Pulsar Proxy). - -#### Configure how to connect to KDC - -You need to enter the command below to specify the path to the `krb5.conf` file for the client side and the broker side. The content of `krb5.conf` file indicates the default Realm and KDC information. See [JDK’s Kerberos Requirements](https://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/KerberosReq.html) for more details. - -```shell - --Djava.security.krb5.conf=/etc/pulsar/krb5.conf - -``` - -Here is an example of the krb5.conf file: - -In the configuration file, `EXAMPLE.COM` is the default realm; `kdc = localhost:62037` is the kdc server url for realm `EXAMPLE.COM `: - -``` - -[libdefaults] - default_realm = EXAMPLE.COM - -[realms] - EXAMPLE.COM = { - kdc = localhost:62037 - } - -``` - -Usually machines configured with kerberos already have a system wide configuration and this configuration is optional. - -#### JAAS configuration file - -You need JAAS configuration file for the client side and the broker side. JAAS configuration file provides the section of information that is used to connect KDC. 
Here is an example named `pulsar_jaas.conf`: - -``` - - PulsarBroker { - com.sun.security.auth.module.Krb5LoginModule required - useKeyTab=true - storeKey=true - useTicketCache=false - keyTab="/etc/security/keytabs/pulsarbroker.keytab" - principal="broker/localhost@EXAMPLE.COM"; -}; - - PulsarClient { - com.sun.security.auth.module.Krb5LoginModule required - useKeyTab=true - storeKey=true - useTicketCache=false - keyTab="/etc/security/keytabs/pulsarclient.keytab" - principal="client/localhost@EXAMPLE.COM"; -}; - -``` - -You need to set the `JAAS` configuration file path as JVM parameter for client and broker. For example: - -```shell - - -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf - -``` - -In the `pulsar_jaas.conf` file above - -1. `PulsarBroker` is a section name in the JAAS file that each broker uses. This section tells the broker to use which principal inside Kerberos and the location of the keytab where the principal is stored. `PulsarBroker` allows the broker to use the keytab specified in this section. -2. `PulsarClient` is a section name in the JASS file that each broker uses. This section tells the client to use which principal inside Kerberos and the location of the keytab where the principal is stored. `PulsarClient` allows the client to use the keytab specified in this section. - The following example also reuses this `PulsarClient` section in both the Pulsar internal admin configuration and in CLI command of `bin/pulsar-client`, `bin/pulsar-perf` and `bin/pulsar-admin`. You can also add different sections for different use cases. - -You can have 2 separate JAAS configuration files: -* the file for a broker that has sections of both `PulsarBroker` and `PulsarClient`; -* the file for a client that only has a `PulsarClient` section. - - -### Kerberos configuration for Brokers - -#### Configure the `broker.conf` file - - In the `broker.conf` file, set Kerberos related configurations. - - - Set `authenticationEnabled` to `true`; - - Set `authenticationProviders` to choose `AuthenticationProviderSasl`; - - Set `saslJaasClientAllowedIds` regex for principal that is allowed to connect to broker; - - Set `saslJaasBrokerSectionName` that corresponds to the section in JAAS configuration file for broker; - - To make Pulsar internal admin client work properly, you need to set the configuration in the `broker.conf` file as below: - - Set `brokerClientAuthenticationPlugin` to client plugin `AuthenticationSasl`; - - Set `brokerClientAuthenticationParameters` to value in JSON string `{"saslJaasClientSectionName":"PulsarClient", "serverType":"broker"}`, in which `PulsarClient` is the section name in the `pulsar_jaas.conf` file, and `"serverType":"broker"` indicates that the internal admin client connects to a Pulsar Broker; - - Here is an example: - -``` - -authenticationEnabled=true -authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderSasl -saslJaasClientAllowedIds=.*client.* -saslJaasBrokerSectionName=PulsarBroker - -## Authentication settings of the broker itself. Used when the broker connects to other brokers -brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationSasl -brokerClientAuthenticationParameters={"saslJaasClientSectionName":"PulsarClient", "serverType":"broker"} - -``` - -#### Set Broker JVM parameter - - Set JVM parameters for JAAS configuration file and krb5 configuration file with additional options. 
- -```shell - - -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf - -``` - -You can add this at the end of `PULSAR_EXTRA_OPTS` in the file [`pulsar_env.sh`](https://github.com/apache/pulsar/blob/master/conf/pulsar_env.sh) - -You must ensure that the operating system user who starts broker can reach the keytabs configured in the `pulsar_jaas.conf` file and kdc server in the `krb5.conf` file. - -### Kerberos configuration for clients - -#### Java Client and Java Admin Client - -In client application, include `pulsar-client-auth-sasl` in your project dependency. - -``` - - - org.apache.pulsar - pulsar-client-auth-sasl - ${pulsar.version} - - -``` - -Configure the authentication type to use `AuthenticationSasl`, and also provide the authentication parameters to it. - -You need 2 parameters: -- `saslJaasClientSectionName`. This parameter corresponds to the section in JAAS configuration file for client; -- `serverType`. This parameter stands for whether this client connects to broker or proxy. And client uses this parameter to know which server side principal should be used. - -When you authenticate between client and broker with the setting in above JAAS configuration file, we need to set `saslJaasClientSectionName` to `PulsarClient` and set `serverType` to `broker`. - -The following is an example of creating a Java client: - - ```java - - System.setProperty("java.security.auth.login.config", "/etc/pulsar/pulsar_jaas.conf"); - System.setProperty("java.security.krb5.conf", "/etc/pulsar/krb5.conf"); - - Map authParams = Maps.newHashMap(); - authParams.put("saslJaasClientSectionName", "PulsarClient"); - authParams.put("serverType", "broker"); - - Authentication saslAuth = AuthenticationFactory - .create(org.apache.pulsar.client.impl.auth.AuthenticationSasl.class.getName(), authParams); - - PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://my-broker.com:6650") - .authentication(saslAuth) - .build(); - - ``` - -> The first two lines in the example above are hard coded, alternatively, you can set additional JVM parameters for JAAS and krb5 configuration file when you run the application like below: - -``` - -java -cp -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf $APP-jar-with-dependencies.jar $CLASSNAME - -``` - -You must ensure that the operating system user who starts pulsar client can reach the keytabs configured in the `pulsar_jaas.conf` file and kdc server in the `krb5.conf` file. - -#### Configure CLI tools - -If you use a command-line tool (such as `bin/pulsar-client`, `bin/pulsar-perf` and `bin/pulsar-admin`), you need to perform the following steps: - -Step 1. Enter the command below to configure your `client.conf`. - -```shell - -authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationSasl -authParams={"saslJaasClientSectionName":"PulsarClient", "serverType":"broker"} - -``` - -Step 2. Enter the command below to set JVM parameters for JAAS configuration file and krb5 configuration file with additional options. 
- -```shell - - -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf - -``` - -You can add this at the end of `PULSAR_EXTRA_OPTS` in the file [`pulsar_tools_env.sh`](https://github.com/apache/pulsar/blob/master/conf/pulsar_tools_env.sh), -or add this line `OPTS="$OPTS -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf "` directly to the CLI tool script. - -The meaning of configurations is the same as the meaning of configurations in Java client section. - -## Kerberos configuration for working with Pulsar Proxy - -With the above configuration, client and broker can do authentication using Kerberos. - -A client that connects to Pulsar Proxy is a little different. Pulsar Proxy (as a SASL Server in Kerberos) authenticates Client (as a SASL client in Kerberos) first; and then Pulsar broker authenticates Pulsar Proxy. - -Now in comparison with the above configuration between client and broker, we show you how to configure Pulsar Proxy as follows. - -### Create principal for Pulsar Proxy in Kerberos - -You need to add new principals for Pulsar Proxy comparing with the above configuration. If you already have principals for client and broker, you only need to add the proxy principal here. - -```shell - -### add Principals for Pulsar Proxy -sudo /usr/sbin/kadmin.local -q 'addprinc -randkey proxy/{hostname}@{REALM}' -sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{proxy-keytabname}.keytab proxy/{hostname}@{REALM}" -### add Principals for broker -sudo /usr/sbin/kadmin.local -q 'addprinc -randkey broker/{hostname}@{REALM}' -sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{broker-keytabname}.keytab broker/{hostname}@{REALM}" -### add Principals for client -sudo /usr/sbin/kadmin.local -q 'addprinc -randkey client/{hostname}@{REALM}' -sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{client-keytabname}.keytab client/{hostname}@{REALM}" - -``` - -### Add a section in JAAS configuration file for Pulsar Proxy - -In comparison with the above configuration, add a new section for Pulsar Proxy in JAAS configuration file. - -Here is an example named `pulsar_jaas.conf`: - -``` - - PulsarBroker { - com.sun.security.auth.module.Krb5LoginModule required - useKeyTab=true - storeKey=true - useTicketCache=false - keyTab="/etc/security/keytabs/pulsarbroker.keytab" - principal="broker/localhost@EXAMPLE.COM"; -}; - - PulsarProxy { - com.sun.security.auth.module.Krb5LoginModule required - useKeyTab=true - storeKey=true - useTicketCache=false - keyTab="/etc/security/keytabs/pulsarproxy.keytab" - principal="proxy/localhost@EXAMPLE.COM"; -}; - - PulsarClient { - com.sun.security.auth.module.Krb5LoginModule required - useKeyTab=true - storeKey=true - useTicketCache=false - keyTab="/etc/security/keytabs/pulsarclient.keytab" - principal="client/localhost@EXAMPLE.COM"; -}; - -``` - -### Proxy client configuration - -Pulsar client configuration is similar with client and broker configuration, except that you need to set `serverType` to `proxy` instead of `broker`, for the reason that you need to do the Kerberos authentication between client and proxy. 
- - ```java - - System.setProperty("java.security.auth.login.config", "/etc/pulsar/pulsar_jaas.conf"); - System.setProperty("java.security.krb5.conf", "/etc/pulsar/krb5.conf"); - - Map authParams = Maps.newHashMap(); - authParams.put("saslJaasClientSectionName", "PulsarClient"); - authParams.put("serverType", "proxy"); // ** here is the different ** - - Authentication saslAuth = AuthenticationFactory - .create(org.apache.pulsar.client.impl.auth.AuthenticationSasl.class.getName(), authParams); - - PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://my-broker.com:6650") - .authentication(saslAuth) - .build(); - - ``` - -> The first two lines in the example above are hard coded, alternatively, you can set additional JVM parameters for JAAS and krb5 configuration file when you run the application like below: - -``` - -java -cp -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf $APP-jar-with-dependencies.jar $CLASSNAME - -``` - -### Kerberos configuration for Pulsar proxy service - -In the `proxy.conf` file, set Kerberos related configuration. Here is an example: - -```shell - -## related to authenticate client. -authenticationEnabled=true -authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderSasl -saslJaasClientAllowedIds=.*client.* -saslJaasBrokerSectionName=PulsarProxy - -## related to be authenticated by broker -brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationSasl -brokerClientAuthenticationParameters={"saslJaasClientSectionName":"PulsarProxy", "serverType":"broker"} -forwardAuthorizationCredentials=true - -``` - -The first part relates to authenticating between client and Pulsar Proxy. In this phase, client works as SASL client, while Pulsar Proxy works as SASL server. - -The second part relates to authenticating between Pulsar Proxy and Pulsar Broker. In this phase, Pulsar Proxy works as SASL client, while Pulsar Broker works as SASL server. - -### Broker side configuration. - -The broker side configuration file is the same with the above `broker.conf`, you do not need special configuration for Pulsar Proxy. - -``` - -authenticationEnabled=true -authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderSasl -saslJaasClientAllowedIds=.*client.* -saslJaasBrokerSectionName=PulsarBroker - -``` - -## Regarding authorization and role token - -For Kerberos authentication, we usually use the authenticated principal as the role token for Pulsar authorization. For more information of authorization in Pulsar, see [security authorization](security-authorization.md). - -If you enable 'authorizationEnabled', you need to set `superUserRoles` in `broker.conf` that corresponds to the name registered in kdc. - -For example: - -```bash - -superUserRoles=client/{clientIp}@EXAMPLE.COM - -``` - -## Regarding authentication between ZooKeeper and Broker - -Pulsar Broker acts as a Kerberos client when you authenticate with Zookeeper. 
According to [ZooKeeper document](https://cwiki.apache.org/confluence/display/ZOOKEEPER/Client-Server+mutual+authentication), you need these settings in `conf/zookeeper.conf`: - -``` - -authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider -requireClientAuthScheme=sasl - -``` - -Enter the following commands to add a section of `Client` configurations in the file `pulsar_jaas.conf`, which Pulsar Broker uses: - -``` - - Client { - com.sun.security.auth.module.Krb5LoginModule required - useKeyTab=true - storeKey=true - useTicketCache=false - keyTab="/etc/security/keytabs/pulsarbroker.keytab" - principal="broker/localhost@EXAMPLE.COM"; -}; - -``` - -In this setting, the principal of Pulsar Broker and keyTab file indicates the role of Broker when you authenticate with ZooKeeper. - -## Regarding authentication between BookKeeper and Broker - -Pulsar Broker acts as a Kerberos client when you authenticate with Bookie. According to [BookKeeper document](http://bookkeeper.apache.org/docs/latest/security/sasl/), you need to add `bookkeeperClientAuthenticationPlugin` parameter in `broker.conf`: - -``` - -bookkeeperClientAuthenticationPlugin=org.apache.bookkeeper.sasl.SASLClientProviderFactory - -``` - -In this setting, `SASLClientProviderFactory` creates a BookKeeper SASL client in a Broker, and the Broker uses the created SASL client to authenticate with a Bookie node. - -Enter the following commands to add a section of `BookKeeper` configurations in the `pulsar_jaas.conf` that Pulsar Broker uses: - -``` - - BookKeeper { - com.sun.security.auth.module.Krb5LoginModule required - useKeyTab=true - storeKey=true - useTicketCache=false - keyTab="/etc/security/keytabs/pulsarbroker.keytab" - principal="broker/localhost@EXAMPLE.COM"; -}; - -``` - -In this setting, the principal of Pulsar Broker and keyTab file indicates the role of Broker when you authenticate with Bookie. diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/security-oauth2.md b/site2/website/versioned_docs/version-2.9.0-deprecated/security-oauth2.md deleted file mode 100644 index a11d4c6cd0e4f3..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/security-oauth2.md +++ /dev/null @@ -1,232 +0,0 @@ ---- -id: security-oauth2 -title: Client authentication using OAuth 2.0 access tokens -sidebar_label: "Authentication using OAuth 2.0 access tokens" -original_id: security-oauth2 ---- - -Pulsar supports authenticating clients using OAuth 2.0 access tokens. You can use OAuth 2.0 access tokens to identify a Pulsar client and associate the Pulsar client with some "principal" (or "role"), which is permitted to do some actions, such as publishing messages to a topic or consume messages from a topic. - -This module is used to support the Pulsar client authentication plugin for OAuth 2.0. After communicating with the Oauth 2.0 server, the Pulsar client gets an `access token` from the Oauth 2.0 server, and passes this `access token` to the Pulsar broker to do the authentication. The broker can use the `org.apache.pulsar.broker.authentication.AuthenticationProviderToken`. Or, you can add your own `AuthenticationProvider` to make it with this module. - -## Authentication provider configuration - -This library allows you to authenticate the Pulsar client by using an access token that is obtained from an OAuth 2.0 authorization service, which acts as a _token issuer_. - -### Authentication types - -The authentication type determines how to obtain an access token through an OAuth 2.0 authorization flow. 
-
-:::note
-
-Currently, the Pulsar Java client only supports the `client_credentials` authentication type.
-
-:::
-
-#### Client credentials
-
-The following table lists parameters supported for the `client_credentials` authentication type.
-
-| Parameter | Description | Example | Required or not |
-| --- | --- | --- | --- |
-| `type` | OAuth 2.0 authentication type. | `client_credentials` (default) | Optional |
-| `issuerUrl` | URL of the authentication provider which allows the Pulsar client to obtain an access token | `https://accounts.google.com` | Required |
-| `privateKey` | URL to a JSON credentials file | Supports the following pattern formats: <br /> 1. `file:///path/to/file` <br /> 2. `file:/path/to/file` <br /> 3. `data:application/json;base64,<base64-encoded-value>` | Required |
-| `audience` | An OAuth 2.0 "resource server" identifier for the Pulsar cluster | `https://broker.example.com` | Required |
-
-The credentials file contains service account credentials used with the client authentication type. The following shows an example of a credentials file `credentials_file.json`.
-
-```json
-
-{
-  "type": "client_credentials",
-  "client_id": "d9ZyX97q1ef8Cr81WHVC4hFQ64vSlDK3",
-  "client_secret": "on1uJ...k6F6R",
-  "client_email": "1234567890-abcdefghijklmnopqrstuvwxyz@developer.gserviceaccount.com",
-  "issuer_url": "https://accounts.google.com"
-}
-
-```
-
-In the example above, the authentication type defaults to `client_credentials`, and the `client_id` and `client_secret` fields are required.
-
-### Typical original OAuth2 request mapping
-
-The following shows a typical original OAuth2 request, which is used to obtain the access token from the OAuth2 server.
-
-```bash
-
-curl --request POST \
-  --url https://dev-kt-aa9ne.us.auth0.com/oauth/token \
-  --header 'content-type: application/json' \
-  --data '{
-  "client_id":"Xd23RHsUnvUlP7wchjNYOaIfazgeHd9x",
-  "client_secret":"rT7ps7WY8uhdVuBTKWZkttwLdQotmdEliaM5rLfmgNibvqziZ-g07ZH52N_poGAb",
-  "audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/",
-  "grant_type":"client_credentials"}'
-
-```
-
-In the above example, the mapping relationship is as follows.
-
-- The `issuerUrl` parameter in this plugin is mapped to `--url https://dev-kt-aa9ne.us.auth0.com`.
-- The `privateKey` file parameter in this plugin must contain at least the `client_id` and `client_secret` fields.
-- The `audience` parameter in this plugin is mapped to `"audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/"`.
-
-## Client Configuration
-
-You can use the OAuth2 authentication provider with the following Pulsar clients.
-
-### Java
-
-You can use the factory method to configure authentication for the Pulsar Java client.
-
-```java
-
-import org.apache.pulsar.client.impl.auth.oauth2.AuthenticationFactoryOAuth2;
-
-URL issuerUrl = new URL("https://dev-kt-aa9ne.us.auth0.com");
-URL credentialsUrl = new URL("file:///path/to/KeyFile.json");
-String audience = "https://dev-kt-aa9ne.us.auth0.com/api/v2/";
-
-PulsarClient client = PulsarClient.builder()
-    .serviceUrl("pulsar://broker.example.com:6650/")
-    .authentication(
-        AuthenticationFactoryOAuth2.clientCredentials(issuerUrl, credentialsUrl, audience))
-    .build();
-
-```
-
-In addition, you can also use the encoded parameters to configure authentication for the Pulsar Java client.
-
-```java
-
-Authentication auth = AuthenticationFactory
-    .create(AuthenticationOAuth2.class.getName(),
-            "{\"type\":\"client_credentials\",\"privateKey\":\"./key/path/..\",\"issuerUrl\":\"...\",\"audience\":\"...\"}");
-PulsarClient client = PulsarClient.builder()
-    .serviceUrl("pulsar://broker.example.com:6650/")
-    .authentication(auth)
-    .build();
-
-```
-
-### C++ client
-
-The C++ client is similar to the Java client. You need to provide the `issuer_url`, `private_key` (the credentials file path), and `audience` parameters.
- -```c++ - -#include - -pulsar::ClientConfiguration config; -std::string params = R"({ - "issuer_url": "https://dev-kt-aa9ne.us.auth0.com", - "private_key": "../../pulsar-broker/src/test/resources/authentication/token/cpp_credentials_file.json", - "audience": "https://dev-kt-aa9ne.us.auth0.com/api/v2/"})"; - -config.setAuth(pulsar::AuthOauth2::create(params)); - -pulsar::Client client("pulsar://broker.example.com:6650/", config); - -``` - -### Go client - -To enable OAuth2 authentication in Go client, you need to configure OAuth2 authentication. -This example shows how to configure OAuth2 authentication in Go client. - -```go - -oauth := pulsar.NewAuthenticationOAuth2(map[string]string{ - "type": "client_credentials", - "issuerUrl": "https://dev-kt-aa9ne.us.auth0.com", - "audience": "https://dev-kt-aa9ne.us.auth0.com/api/v2/", - "privateKey": "/path/to/privateKey", - "clientId": "0Xx...Yyxeny", - }) -client, err := pulsar.NewClient(pulsar.ClientOptions{ - URL: "pulsar://my-cluster:6650", - Authentication: oauth, -}) - -``` - -### Python client - -To enable OAuth2 authentication in Python client, you need to configure OAuth2 authentication. -This example shows how to configure OAuth2 authentication in Python client. - -```python - -from pulsar import Client, AuthenticationOauth2 - -params = ''' -{ - "issuer_url": "https://dev-kt-aa9ne.us.auth0.com", - "private_key": "/path/to/privateKey", - "audience": "https://dev-kt-aa9ne.us.auth0.com/api/v2/" -} -''' - -client = Client("pulsar://my-cluster:6650", authentication=AuthenticationOauth2(params)) - -``` - -## CLI configuration - -This section describes how to use Pulsar CLI tools to connect a cluster through OAuth2 authentication plugin. - -### pulsar-admin - -This example shows how to use pulsar-admin to connect to a cluster through OAuth2 authentication plugin. - -```shell script - -bin/pulsar-admin --admin-url https://streamnative.cloud:443 \ ---auth-plugin org.apache.pulsar.client.impl.auth.oauth2.AuthenticationOAuth2 \ ---auth-params '{"privateKey":"file:///path/to/key/file.json", - "issuerUrl":"https://dev-kt-aa9ne.us.auth0.com", - "audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/"}' \ -tenants list - -``` - -Set the `admin-url` parameter to the Web service URL. A Web service URL is a combination of the protocol, hostname and port ID, such as `pulsar://localhost:6650`. -Set the `privateKey`, `issuerUrl`, and `audience` parameters to the values based on the configuration in the key file. For details, see [authentication types](#authentication-types). - -### pulsar-client - -This example shows how to use pulsar-client to connect to a cluster through OAuth2 authentication plugin. - -```shell script - -bin/pulsar-client \ ---url SERVICE_URL \ ---auth-plugin org.apache.pulsar.client.impl.auth.oauth2.AuthenticationOAuth2 \ ---auth-params '{"privateKey":"file:///path/to/key/file.json", - "issuerUrl":"https://dev-kt-aa9ne.us.auth0.com", - "audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/"}' \ -produce test-topic -m "test-message" -n 10 - -``` - -Set the `admin-url` parameter to the Web service URL. A Web service URL is a combination of the protocol, hostname and port ID, such as `pulsar://localhost:6650`. -Set the `privateKey`, `issuerUrl`, and `audience` parameters to the values based on the configuration in the key file. For details, see [authentication types](#authentication-types). - -### pulsar-perf - -This example shows how to use pulsar-perf to connect to a cluster through OAuth2 authentication plugin. 
- -```shell script - -bin/pulsar-perf produce --service-url pulsar+ssl://streamnative.cloud:6651 \ ---auth-plugin org.apache.pulsar.client.impl.auth.oauth2.AuthenticationOAuth2 \ ---auth-params '{"privateKey":"file:///path/to/key/file.json", - "issuerUrl":"https://dev-kt-aa9ne.us.auth0.com", - "audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/"}' \ --r 1000 -s 1024 test-topic - -``` - -Set the `admin-url` parameter to the Web service URL. A Web service URL is a combination of the protocol, hostname and port ID, such as `pulsar://localhost:6650`. -Set the `privateKey`, `issuerUrl`, and `audience` parameters to the values based on the configuration in the key file. For details, see [authentication types](#authentication-types). diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/security-overview.md b/site2/website/versioned_docs/version-2.9.0-deprecated/security-overview.md deleted file mode 100644 index c6bd9b64e4f766..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/security-overview.md +++ /dev/null @@ -1,36 +0,0 @@ ---- -id: security-overview -title: Pulsar security overview -sidebar_label: "Overview" -original_id: security-overview ---- - -As the central message bus for a business, Apache Pulsar is frequently used for storing mission-critical data. Therefore, enabling security features in Pulsar is crucial. - -By default, Pulsar configures no encryption, authentication, or authorization. Any client can communicate to Apache Pulsar via plain text service URLs. So we must ensure that Pulsar accessing via these plain text service URLs is restricted to trusted clients only. In such cases, you can use Network segmentation and/or authorization ACLs to restrict access to trusted IPs. If you use neither, the state of cluster is wide open and anyone can access the cluster. - -Pulsar supports a pluggable authentication mechanism. And Pulsar clients use this mechanism to authenticate with brokers and proxies. You can also configure Pulsar to support multiple authentication sources. - -The Pulsar broker validates the authentication credentials when a connection is established. After the initial connection is authenticated, the "principal" token is stored for authorization though the connection is not re-authenticated. The broker periodically checks the expiration status of every `ServerCnx` object. You can set the `authenticationRefreshCheckSeconds` on the broker to control the frequency to check the expiration status. By default, the `authenticationRefreshCheckSeconds` is set to 60s. When the authentication is expired, the broker forces to re-authenticate the connection. If the re-authentication fails, the broker disconnects the client. - -The broker supports learning whether a particular client supports authentication refreshing. If a client supports authentication refreshing and the credential is expired, the authentication provider calls the `refreshAuthentication` method to initiate the refreshing process. If a client does not support authentication refreshing and the credential is expired, the broker disconnects the client. - -You had better secure the service components in your Apache Pulsar deployment. - -## Role tokens - -In Pulsar, a *role* is a string, like `admin` or `app1`, which can represent a single client or multiple clients. You can use roles to control permission for clients to produce or consume from certain topics, administer the configuration for tenants, and so on. 
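-For example, with the built-in authorization provider, an administrator can grant a role permission to produce and consume on a namespace with `pulsar-admin` (a sketch; the tenant, namespace, and role names are placeholders):
-
-```shell
-
-$ bin/pulsar-admin namespaces grant-permission my-tenant/my-namespace \
-  --role app1 \
-  --actions produce,consume
-
-```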
- -Apache Pulsar uses an [Authentication Provider](#authentication-providers) to establish the identity of a client and then assign a *role token* to that client. This role token is then used for [Authorization and ACLs](security-authorization.md) to determine what the client is authorized to do. - -## Authentication providers - -Currently Pulsar supports the following authentication providers: - -- [TLS Authentication](security-tls-authentication.md) -- [Athenz](security-athenz.md) -- [Kerberos](security-kerberos.md) -- [JSON Web Token Authentication](security-jwt.md) -- [OAuth 2.0 authentication](security-oauth2.md) -- [HTTP basic authentication](security-basic-auth.md) - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/security-tls-authentication.md b/site2/website/versioned_docs/version-2.9.0-deprecated/security-tls-authentication.md deleted file mode 100644 index 85d2240f413060..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/security-tls-authentication.md +++ /dev/null @@ -1,222 +0,0 @@ ---- -id: security-tls-authentication -title: Authentication using TLS -sidebar_label: "Authentication using TLS" -original_id: security-tls-authentication ---- - -## TLS authentication overview - -TLS authentication is an extension of [TLS transport encryption](security-tls-transport.md). Not only servers have keys and certs that the client uses to verify the identity of servers, clients also have keys and certs that the server uses to verify the identity of clients. You must have TLS transport encryption configured on your cluster before you can use TLS authentication. This guide assumes you already have TLS transport encryption configured. - -`Bouncy Castle Provider` provides TLS related cipher suites and algorithms in Pulsar. If you need [FIPS](https://www.bouncycastle.org/fips_faq.html) version of `Bouncy Castle Provider`, please reference [Bouncy Castle page](security-bouncy-castle.md). - -### Create client certificates - -Client certificates are generated using the certificate authority. Server certificates are also generated with the same certificate authority. - -The biggest difference between client certs and server certs is that the **common name** for the client certificate is the **role token** which that client is authenticated as. - -To use client certificates, you need to set `tlsRequireTrustedClientCertOnConnect=true` at the broker side. For details, refer to [TLS broker configuration](security-tls-transport.md#configure-broker). - -First, you need to enter the following command to generate the key : - -```bash - -$ openssl genrsa -out admin.key.pem 2048 - -``` - -Similar to the broker, the client expects the key to be in [PKCS 8](https://en.wikipedia.org/wiki/PKCS_8) format, so you need to convert it by entering the following command: - -```bash - -$ openssl pkcs8 -topk8 -inform PEM -outform PEM \ - -in admin.key.pem -out admin.key-pk8.pem -nocrypt - -``` - -Next, enter the command below to generate the certificate request. When you are asked for a **common name**, enter the **role token** that you want this key pair to authenticate a client as. - -```bash - -$ openssl req -config openssl.cnf \ - -key admin.key.pem -new -sha256 -out admin.csr.pem - -``` - -:::note - -If openssl.cnf is not specified, read [Certificate authority](http://pulsar.apache.org/docs/en/security-tls-transport/#certificate-authority) to get the openssl.cnf. - -::: - -Then, enter the command below to sign with request with the certificate authority. 
-
-Then, enter the command below to sign the request with the certificate authority. Note that the client certs use the **usr_cert** extension, which allows the cert to be used for client authentication.
-
-```bash
-
-$ openssl ca -config openssl.cnf -extensions usr_cert \
-      -days 1000 -notext -md sha256 \
-      -in admin.csr.pem -out admin.cert.pem
-
-```
-
-You get a cert, `admin.cert.pem`, and a key, `admin.key-pk8.pem`, from this command. With `ca.cert.pem`, clients can use this cert and this key to authenticate themselves to brokers and proxies as the role token ``admin``.
-
-:::note
-
-If an "unable to load CA private key" error occurs at this step with the reason "No such file or directory: /etc/pki/CA/private/cakey.pem", try the command below to generate `cakey.pem`:
-
-```bash
-
-$ cd /etc/pki/tls/misc/CA
-$ ./CA -newca
-
-```
-
-:::
-
-## Enable TLS authentication on brokers
-
-To configure brokers to authenticate clients, add the following parameters to `broker.conf`, alongside [the configuration to enable TLS transport](security-tls-transport.md#broker-configuration):
-
-```properties
-
-# Configuration to enable authentication
-authenticationEnabled=true
-authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderTls
-
-# Roles that may perform all admin operations and publish/consume from all topics
-superUserRoles=admin
-
-# Authentication settings of the broker itself. Used when the broker connects to other brokers, either in the same or other clusters
-brokerClientTlsEnabled=true
-brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationTls
-brokerClientAuthenticationParameters={"tlsCertFile":"/path/my-ca/admin.cert.pem","tlsKeyFile":"/path/my-ca/admin.key-pk8.pem"}
-brokerClientTrustCertsFilePath=/path/my-ca/certs/ca.cert.pem
-
-```
-
-## Enable TLS authentication on proxies
-
-To configure proxies to authenticate clients, add the following parameters to `proxy.conf`, alongside [the configuration to enable TLS transport](security-tls-transport.md#proxy-configuration).
-
-The proxy should have its own client key pair for connecting to brokers. You need to configure the role token for this key pair in the ``proxyRoles`` of the brokers. See the [authorization guide](security-authorization.md) for more details.
-
-```properties
-
-# For clients connecting to the proxy
-authenticationEnabled=true
-authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderTls
-
-# For the proxy to connect to brokers
-brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationTls
-brokerClientAuthenticationParameters=tlsCertFile:/path/to/proxy.cert.pem,tlsKeyFile:/path/to/proxy.key-pk8.pem
-
-```
-
-## Client configuration
-
-When you use TLS authentication, the client connects via TLS transport. You need to configure the client to use ```https://``` and port 8443 for the web service URL, and ```pulsar+ssl://``` and port 6651 for the broker service URL.
-
-### CLI tools
-
-[Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-pulsar-admin.md), [`pulsar-perf`](reference-cli-tools.md#pulsar-perf), and [`pulsar-client`](reference-cli-tools.md#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation.
- 
-You need to add the following parameters to that file to use TLS authentication with the CLI tools of Pulsar:
-
-```properties
-
-webServiceUrl=https://broker.example.com:8443/
-brokerServiceUrl=pulsar+ssl://broker.example.com:6651/
-useTls=true
-tlsAllowInsecureConnection=false
-tlsTrustCertsFilePath=/path/to/ca.cert.pem
-authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationTls
-authParams=tlsCertFile:/path/to/my-role.cert.pem,tlsKeyFile:/path/to/my-role.key-pk8.pem
-
-```
-
-### Java client
-
-```java
-
-import org.apache.pulsar.client.api.PulsarClient;
-
-PulsarClient client = PulsarClient.builder()
-    .serviceUrl("pulsar+ssl://broker.example.com:6651/")
-    .enableTls(true)
-    .tlsTrustCertsFilePath("/path/to/ca.cert.pem")
-    .authentication("org.apache.pulsar.client.impl.auth.AuthenticationTls",
-                    "tlsCertFile:/path/to/my-role.cert.pem,tlsKeyFile:/path/to/my-role.key-pk8.pem")
-    .build();
-
-```
-
-### Python client
-
-```python
-
-from pulsar import Client, AuthenticationTLS
-
-auth = AuthenticationTLS("/path/to/my-role.cert.pem", "/path/to/my-role.key-pk8.pem")
-client = Client("pulsar+ssl://broker.example.com:6651/",
-                tls_trust_certs_file_path="/path/to/ca.cert.pem",
-                tls_allow_insecure_connection=False,
-                authentication=auth)
-
-```
-
-### C++ client
-
-```c++
-
-#include <pulsar/Client.h>
-
-pulsar::ClientConfiguration config;
-config.setUseTls(true);
-config.setTlsTrustCertsFilePath("/path/to/ca.cert.pem");
-config.setTlsAllowInsecureConnection(false);
-
-pulsar::AuthenticationPtr auth = pulsar::AuthTls::create("/path/to/my-role.cert.pem",
-                                                         "/path/to/my-role.key-pk8.pem");
-config.setAuth(auth);
-
-pulsar::Client client("pulsar+ssl://broker.example.com:6651/", config);
-
-```
-
-### Node.js client
-
-```JavaScript
-
-const Pulsar = require('pulsar-client');
-
-(async () => {
-  const auth = new Pulsar.AuthenticationTls({
-    certificatePath: '/path/to/my-role.cert.pem',
-    privateKeyPath: '/path/to/my-role.key-pk8.pem',
-  });
-
-  const client = new Pulsar.Client({
-    serviceUrl: 'pulsar+ssl://broker.example.com:6651/',
-    authentication: auth,
-    tlsTrustCertsFilePath: '/path/to/ca.cert.pem',
-  });
-})();
-
-```
-
-### C# client
-
-```c#
-
-var clientCertificate = new X509Certificate2("admin.pfx");
-var client = PulsarClient.Builder()
-                         .AuthenticateUsingClientCertificate(clientCertificate)
-                         .Build();
-
-```
-
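-You can confirm which role a client certificate authenticates as by printing its subject; the common name (CN) is the role token. This is a standard OpenSSL check, using the cert file generated above:
-
-```bash
-
-$ openssl x509 -in admin.cert.pem -noout -subject
-
-```
-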
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/security-tls-keystore.md b/site2/website/versioned_docs/version-2.9.0-deprecated/security-tls-keystore.md
deleted file mode 100644
index 0b3b50fcebb104..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/security-tls-keystore.md
+++ /dev/null
@@ -1,342 +0,0 @@
----
-id: security-tls-keystore
-title: Using TLS with KeyStore configure
-sidebar_label: "Using TLS with KeyStore configure"
-original_id: security-tls-keystore
----
-
-## Overview
-
-Apache Pulsar supports [TLS encryption](security-tls-transport.md) and [TLS authentication](security-tls-authentication.md) between clients and the Apache Pulsar service.
-By default it uses PEM format file configuration. This page describes how to use [KeyStore](https://en.wikipedia.org/wiki/Java_KeyStore) type configuration for TLS.
-
-
-## TLS encryption with KeyStore configure
-
-### Generate TLS key and certificate
-
-The first step of deploying TLS is to generate the key and the certificate for each machine in the cluster.
-You can use Java's `keytool` utility to accomplish this task. We will generate the key into a temporary keystore
-for the broker first, so that we can export and sign it later with the CA.
-
-```shell
-
-keytool -keystore broker.keystore.jks -alias localhost -validity {validity} -genkeypair -keyalg RSA
-
-```
-
-You need to specify two parameters in the above command:
-
-1. `keystore`: the keystore file that stores the certificate. The *keystore* file contains the private key of
-   the certificate; hence, it needs to be kept safely.
-2. `validity`: the valid time of the certificate in days.
-
-> Ensure that the common name (CN) matches exactly with the fully qualified domain name (FQDN) of the server.
-The client compares the CN with the DNS domain name to ensure that it is indeed connecting to the desired server, not a malicious one.
-
-### Creating your own CA
-
-After the first step, each broker in the cluster has a public-private key pair and a certificate to identify the machine.
-The certificate, however, is unsigned, which means that an attacker can create such a certificate to pretend to be any machine.
-
-Therefore, it is important to prevent forged certificates by signing them for each machine in the cluster.
-A `certificate authority (CA)` is responsible for signing certificates. A CA works like a government that issues passports —
-the government stamps (signs) each passport so that the passport becomes difficult to forge. Other governments verify the stamps
-to ensure the passport is authentic. Similarly, the CA signs the certificates, and the cryptography guarantees that a signed
-certificate is computationally difficult to forge. Thus, as long as the CA is a genuine and trusted authority, the clients have
-high assurance that they are connecting to the authentic machines.
-
-```shell
-
-openssl req -new -x509 -keyout ca-key -out ca-cert -days 365
-
-```
-
-The generated CA is simply a *public-private* key pair and certificate, and it is intended to sign other certificates.
-
-The next step is to add the generated CA to the clients' truststore so that the clients can trust this CA:
-
-```shell
-
-keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert
-
-```
-
-NOTE: If you configure the brokers to require client authentication by setting `tlsRequireTrustedClientCertOnConnect` to `true` in the
-broker configuration, then you must also provide a truststore for the brokers, and it should have all the CA certificates that client keys were signed by.
-
-```shell
-
-keytool -keystore broker.truststore.jks -alias CARoot -import -file ca-cert
-
-```
-
-In contrast to the keystore, which stores each machine's own identity, the truststore of a client stores all the certificates
-that the client should trust. Importing a certificate into one's truststore also means trusting all certificates that are signed
-by that certificate. As in the analogy above, trusting the government (CA) also means trusting all passports (certificates) that
-it has issued. This attribute is called the chain of trust, and it is particularly useful when deploying TLS on a large BookKeeper cluster.
-You can sign all certificates in the cluster with a single CA, and have all machines share the same truststore that trusts the CA.
-That way all machines can authenticate all other machines.
-
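-
-To confirm that the CA certificate was imported correctly, you can list the truststore contents. This is standard `keytool` usage; the store name follows the examples in this section, and the tool prompts for the store password:
-
-```shell
-
-keytool -keystore client.truststore.jks -list
-
-```
-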
-### Signing the certificate
-
-The next step is to sign all certificates in the keystore with the CA we generated. First, you need to export the certificate from the keystore:
-
-```shell
-
-keytool -keystore broker.keystore.jks -alias localhost -certreq -file cert-file
-
-```
-
-Then sign it with the CA:
-
-```shell
-
-openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days {validity} -CAcreateserial -passin pass:{ca-password}
-
-```
-
-Finally, you need to import both the certificate of the CA and the signed certificate into the keystore:
-
-```shell
-
-keytool -keystore broker.keystore.jks -alias CARoot -import -file ca-cert
-keytool -keystore broker.keystore.jks -alias localhost -import -file cert-signed
-
-```
-
-The definitions of the parameters are the following:
-
-1. `keystore`: the location of the keystore
-2. `ca-cert`: the certificate of the CA
-3. `ca-key`: the private key of the CA
-4. `ca-password`: the passphrase of the CA
-5. `cert-file`: the exported, unsigned certificate of the broker
-6. `cert-signed`: the signed certificate of the broker
-
-### Configuring brokers
-
-Brokers enable TLS by providing valid `brokerServicePortTls` and `webServicePortTls` values, and also need to set `tlsEnabledWithKeyStore` to `true` to use KeyStore type configuration.
-Besides this, the KeyStore path, KeyStore password, TrustStore path, and TrustStore password need to be provided.
-Since the broker creates an internal client/admin client to communicate with other brokers, you also need to provide configuration for it, similar to how you configure the outside client/admin client.
-If `tlsRequireTrustedClientCertOnConnect` is `true`, the broker rejects the connection if the client certificate is not trusted.
-
-The following TLS configs are needed on the broker side:
-
-```properties
-
-tlsEnabledWithKeyStore=true
-# key store
-tlsKeyStoreType=JKS
-tlsKeyStore=/var/private/tls/broker.keystore.jks
-tlsKeyStorePassword=brokerpw
-
-# trust store
-tlsTrustStoreType=JKS
-tlsTrustStore=/var/private/tls/broker.truststore.jks
-tlsTrustStorePassword=brokerpw
-
-# internal client/admin-client config
-brokerClientTlsEnabled=true
-brokerClientTlsEnabledWithKeyStore=true
-brokerClientTlsTrustStoreType=JKS
-brokerClientTlsTrustStore=/var/private/tls/client.truststore.jks
-brokerClientTlsTrustStorePassword=clientpw
-
-```
-
-NOTE: it is important to restrict access to the store files via filesystem permissions.
-
-If you have configured TLS on the broker, you can disable the non-TLS ports by setting the values of the following configurations to empty, as below.
-
-```
-
-brokerServicePort=
-webServicePort=
-
-```
-
-In this case, you need to set the following configurations.
-
-```conf
-
-brokerClientTlsEnabled=true // Set this to true
-brokerClientTlsEnabledWithKeyStore=true  // Set this to true
-brokerClientTlsTrustStore= // Set this to your desired value
-brokerClientTlsTrustStorePassword= // Set this to your desired value
-
-```
-
-Optional settings that may be worth considering:
-
-1. `tlsClientAuthentication=false`: Enable/disable using TLS for authentication. When enabled, this config authenticates the other end
-   of the communication channel. It should be enabled on both brokers and clients for mutual TLS.
-2. `tlsCiphers=[TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256]`: A cipher suite is a named combination of authentication, encryption, MAC and key exchange
-   algorithms used to negotiate the security settings for a network connection using the TLS protocol. By default, it is null. See
-   [OpenSSL Ciphers](https://www.openssl.org/docs/man1.0.2/apps/ciphers.html) and
-   [JDK Ciphers](http://docs.oracle.com/javase/8/docs/technotes/guides/security/StandardNames.html#ciphersuites).
-3. `tlsProtocols=[TLSv1.3,TLSv1.2]`: lists the TLS protocols that you are going to accept from clients.
-   By default, it is not set.
-
-### Configuring Clients
-
-This is similar to [TLS encryption configuring for client with PEM type](security-tls-transport.md#Client configuration).
-For a minimal configuration, you need to provide the TrustStore information.
-
-e.g.
-1. for [Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-cli-tools#pulsar-admin), [`pulsar-perf`](reference-cli-tools#pulsar-perf), and [`pulsar-client`](reference-cli-tools#pulsar-client), use the `conf/client.conf` config file in a Pulsar installation.
-
-   ```properties
-
-   webServiceUrl=https://broker.example.com:8443/
-   brokerServiceUrl=pulsar+ssl://broker.example.com:6651/
-   useKeyStoreTls=true
-   tlsTrustStoreType=JKS
-   tlsTrustStorePath=/var/private/tls/client.truststore.jks
-   tlsTrustStorePassword=clientpw
-
-   ```
-
-1. for the Java client
-
-   ```java
-
-   import org.apache.pulsar.client.api.PulsarClient;
-
-   PulsarClient client = PulsarClient.builder()
-       .serviceUrl("pulsar+ssl://broker.example.com:6651/")
-       .enableTls(true)
-       .useKeyStoreTls(true)
-       .tlsTrustStorePath("/var/private/tls/client.truststore.jks")
-       .tlsTrustStorePassword("clientpw")
-       .allowTlsInsecureConnection(false)
-       .build();
-
-   ```
-
-1. for the Java admin client
-
-   ```java
-
-   import org.apache.pulsar.client.admin.PulsarAdmin;
-
-   PulsarAdmin admin = PulsarAdmin.builder().serviceHttpUrl("https://broker.example.com:8443")
-       .useKeyStoreTls(true)
-       .tlsTrustStorePath("/var/private/tls/client.truststore.jks")
-       .tlsTrustStorePassword("clientpw")
-       .allowTlsInsecureConnection(false)
-       .build();
-
-   ```
-
-## TLS authentication with KeyStore configure
-
-This is similar to [TLS authentication with PEM type](security-tls-authentication.md).
-
-### broker authentication config
-
-`broker.conf`
-
-```properties
-
-# Configuration to enable authentication
-authenticationEnabled=true
-authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderTls
-
-# this should be the CN of one of the client keystores
-superUserRoles=admin
-
-# Enable KeyStore type
-tlsEnabledWithKeyStore=true
-tlsRequireTrustedClientCertOnConnect=true
-
-# key store
-tlsKeyStoreType=JKS
-tlsKeyStore=/var/private/tls/broker.keystore.jks
-tlsKeyStorePassword=brokerpw
-
-# trust store
-tlsTrustStoreType=JKS
-tlsTrustStore=/var/private/tls/broker.truststore.jks
-tlsTrustStorePassword=brokerpw
-
-# internal client/admin-client config
-brokerClientTlsEnabled=true
-brokerClientTlsEnabledWithKeyStore=true
-brokerClientTlsTrustStoreType=JKS
-brokerClientTlsTrustStore=/var/private/tls/client.truststore.jks
-brokerClientTlsTrustStorePassword=clientpw
-# internal auth config
-brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationKeyStoreTls
-brokerClientAuthenticationParameters={"keyStoreType":"JKS","keyStorePath":"/var/private/tls/client.keystore.jks","keyStorePassword":"clientpw"}
-# currently the websocket service does not support KeyStore type configuration
-webSocketServiceEnabled=false
-
-```
-
-### client authentication configuring
-
-Besides the TLS encryption configuration, the main work is configuring the KeyStore, which contains a valid CN as the client role, for the client.
-
-e.g.
-1. for [Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-cli-tools#pulsar-admin), [`pulsar-perf`](reference-cli-tools#pulsar-perf), and [`pulsar-client`](reference-cli-tools#pulsar-client), use the `conf/client.conf` config file in a Pulsar installation.
-
-   ```properties
-
-   webServiceUrl=https://broker.example.com:8443/
-   brokerServiceUrl=pulsar+ssl://broker.example.com:6651/
-   useKeyStoreTls=true
-   tlsTrustStoreType=JKS
-   tlsTrustStorePath=/var/private/tls/client.truststore.jks
-   tlsTrustStorePassword=clientpw
-   authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationKeyStoreTls
-   authParams={"keyStoreType":"JKS","keyStorePath":"/path/to/keystorefile","keyStorePassword":"keystorepw"}
-
-   ```
-
-1. for the Java client
-
-   ```java
-
-   import org.apache.pulsar.client.api.PulsarClient;
-
-   PulsarClient client = PulsarClient.builder()
-       .serviceUrl("pulsar+ssl://broker.example.com:6651/")
-       .enableTls(true)
-       .useKeyStoreTls(true)
-       .tlsTrustStorePath("/var/private/tls/client.truststore.jks")
-       .tlsTrustStorePassword("clientpw")
-       .allowTlsInsecureConnection(false)
-       .authentication(
-               "org.apache.pulsar.client.impl.auth.AuthenticationKeyStoreTls",
-               "keyStoreType:JKS,keyStorePath:/var/private/tls/client.keystore.jks,keyStorePassword:clientpw")
-       .build();
-
-   ```
-
-1. for the Java admin client
-
-   ```java
-
-   import org.apache.pulsar.client.admin.PulsarAdmin;
-
-   PulsarAdmin admin = PulsarAdmin.builder().serviceHttpUrl("https://broker.example.com:8443")
-       .useKeyStoreTls(true)
-       .tlsTrustStorePath("/var/private/tls/client.truststore.jks")
-       .tlsTrustStorePassword("clientpw")
-       .allowTlsInsecureConnection(false)
-       .authentication(
-               "org.apache.pulsar.client.impl.auth.AuthenticationKeyStoreTls",
-               "keyStoreType:JKS,keyStorePath:/var/private/tls/client.keystore.jks,keyStorePassword:clientpw")
-       .build();
-
-   ```
-
-## Enabling TLS Logging
-
-You can enable TLS debug logging at the JVM level by starting the brokers and/or clients with the `javax.net.debug` system property. For example:
-
-```shell
-
--Djavax.net.debug=all
-
-```
-
-You can find more details in the Oracle documentation on [debugging SSL/TLS connections](http://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/ReadDebug.html).
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/security-tls-transport.md b/site2/website/versioned_docs/version-2.9.0-deprecated/security-tls-transport.md
deleted file mode 100644
index 2cad17a78c3507..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/security-tls-transport.md
+++ /dev/null
@@ -1,295 +0,0 @@
----
-id: security-tls-transport
-title: Transport Encryption using TLS
-sidebar_label: "Transport Encryption using TLS"
-original_id: security-tls-transport
----
-
-## TLS overview
-
-By default, Apache Pulsar clients communicate with the Apache Pulsar service in plain text. This means that all data is sent in the clear. You can use TLS to encrypt this traffic and protect it from the snooping of a man-in-the-middle attacker.
-
-You can also configure TLS for both encryption and authentication. Use this guide to configure just TLS transport encryption and refer to [here](security-tls-authentication.md) for TLS authentication configuration. Alternatively, you can use [another authentication mechanism](security-athenz.md) on top of TLS transport encryption.
-
-> Note that enabling TLS may impact the performance due to encryption overhead.
- 
-## TLS concepts
-
-TLS is a form of [public key cryptography](https://en.wikipedia.org/wiki/Public-key_cryptography). Encryption is performed with key pairs consisting of a public key and a private key. The public key encrypts the messages and the private key decrypts the messages.
-
-To use TLS transport encryption, you need two kinds of key pairs, **server key pairs** and a **certificate authority**.
-
-You can use a third kind of key pair, **client key pairs**, for [client authentication](security-tls-authentication.md).
-
-You should store the **certificate authority** private key in a very secure location (a fully encrypted, disconnected, air gapped computer). As for the certificate authority public key, the **trust cert**, you can share it freely.
-
-For both client and server key pairs, the administrator first generates a private key and a certificate request, then uses the certificate authority private key to sign the certificate request, and finally generates a certificate. This certificate is the public key for the server/client key pair.
-
-For TLS transport encryption, the clients can use the **trust cert** to verify that the server has a key pair that the certificate authority signed when the clients are talking to the server. A man-in-the-middle attacker does not have access to the certificate authority, so they cannot create a server with such a key pair.
-
-For TLS authentication, the server uses the **trust cert** to verify that the client has a key pair that the certificate authority signed. The common name of the **client cert** is then used as the client's role token (see [Overview](security-overview.md)).
-
-`Bouncy Castle Provider` provides cipher suites and algorithms in Pulsar. If you need the [FIPS](https://www.bouncycastle.org/fips_faq.html) version of `Bouncy Castle Provider`, refer to the [Bouncy Castle page](security-bouncy-castle.md).
-
-## Create TLS certificates
-
-Creating TLS certificates for Pulsar involves creating a [certificate authority](#certificate-authority) (CA), [server certificate](#server-certificate), and [client certificate](#client-certificate).
-
-Follow the guide below to set up a certificate authority. You can also refer to plenty of resources on the internet for more details. We recommend [this guide](https://jamielinux.com/docs/openssl-certificate-authority/index.html) for your detailed reference.
-
-### Certificate authority
-
-1. Create the certificate for the CA. You can use the CA to sign both the broker and client certificates. This ensures that each party will trust the others. You should store the CA in a very secure location (ideally completely disconnected from networks, air gapped, and fully encrypted).
-
-2. Enter the following command to create a directory for your CA, and place [this openssl configuration file](https://github.com/apache/pulsar/tree/master/site2/website/static/examples/openssl.cnf) in the directory. You may want to modify the default answers for company name and department in the configuration file. Export the location of the CA directory to the environment variable CA_HOME. The configuration file uses this environment variable to find the rest of the files and directories that the CA needs.
-
-```bash
-
-mkdir my-ca
-cd my-ca
-wget https://raw.githubusercontent.com/apache/pulsar-site/main/site2/website/static/examples/openssl.cnf
-export CA_HOME=$(pwd)
-
-```
-
-3. Enter the commands below to create the necessary directories, keys and certs.
- 
-```bash
-
-mkdir certs crl newcerts private
-chmod 700 private/
-touch index.txt
-echo 1000 > serial
-openssl genrsa -aes256 -out private/ca.key.pem 4096
-chmod 400 private/ca.key.pem
-openssl req -config openssl.cnf -key private/ca.key.pem \
-    -new -x509 -days 7300 -sha256 -extensions v3_ca \
-    -out certs/ca.cert.pem
-chmod 444 certs/ca.cert.pem
-
-```
-
-4. After you answer the question prompts, CA-related files are stored in the `./my-ca` directory. Within that directory:
-
-* `certs/ca.cert.pem` is the public certificate. This public certificate is meant to be distributed to all parties involved.
-* `private/ca.key.pem` is the private key. You only need it when you are signing a new certificate for either brokers or clients, and you must guard this private key safely.
-
-### Server certificate
-
-Once you have created a CA certificate, you can create certificate requests and sign them with the CA.
-
-The following commands ask you a few questions and then create the certificates. When you are asked for the common name, you should match the hostname of the broker. You can also use a wildcard to match a group of broker hostnames, for example, `*.broker.usw.example.com`. This ensures that multiple machines can reuse the same certificate.
-
-:::tip
-
-Sometimes matching the hostname is not possible or makes no sense,
-such as when you create the brokers with random hostnames, or you
-plan to connect to the hosts via their IP. In these cases, you
-should configure the client to disable TLS hostname verification. For more
-details, you can see [the host verification section in client configuration](#hostname-verification).
-
-:::
-
-1. Enter the command below to generate the key.
-
-```bash
-
-openssl genrsa -out broker.key.pem 2048
-
-```
-
-The broker expects the key to be in [PKCS 8](https://en.wikipedia.org/wiki/PKCS_8) format, so enter the following command to convert it.
-
-```bash
-
-openssl pkcs8 -topk8 -inform PEM -outform PEM \
-      -in broker.key.pem -out broker.key-pk8.pem -nocrypt
-
-```
-
-2. Enter the following command to generate the certificate request.
-
-```bash
-
-openssl req -config openssl.cnf \
-    -key broker.key.pem -new -sha256 -out broker.csr.pem
-
-```
-
-3. Sign it with the certificate authority by entering the command below.
-
-```bash
-
-openssl ca -config openssl.cnf -extensions server_cert \
-    -days 1000 -notext -md sha256 \
-    -in broker.csr.pem -out broker.cert.pem
-
-```
-
-At this point, you have a cert, `broker.cert.pem`, and a key, `broker.key-pk8.pem`, which you can use along with `ca.cert.pem` to configure TLS transport encryption for your broker and proxy nodes.
-
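-Optionally, you can check that the signed broker certificate chains back to your CA before deploying it. This is a standard OpenSSL verification, run from the CA directory used above:
-
-```bash
-
-openssl verify -CAfile certs/ca.cert.pem broker.cert.pem
-
-```
-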
-## Configure broker
-
-To configure a Pulsar [broker](reference-terminology.md#broker) to use TLS transport encryption, you need to make some changes to `broker.conf`, which is located in the `conf` directory of your [Pulsar installation](getting-started-standalone.md).
-
-Add these values to the configuration file (substituting the appropriate certificate paths where necessary):
-
-```properties
-
-tlsEnabled=true
-tlsRequireTrustedClientCertOnConnect=true
-tlsCertificateFilePath=/path/to/broker.cert.pem
-tlsKeyFilePath=/path/to/broker.key-pk8.pem
-tlsTrustCertsFilePath=/path/to/ca.cert.pem
-
-```
-
-> You can find a full list of parameters available in the `conf/broker.conf` file,
-> as well as the default values for those parameters, in [Broker Configuration](reference-configuration.md#broker).
-
-### TLS Protocol Version and Cipher
-
-You can configure the broker (and proxy) to require specific TLS protocol versions and ciphers for TLS negotiation. You can use the TLS protocol versions and ciphers to stop clients from requesting downgraded TLS protocol versions or ciphers that may have weaknesses.
-
-Both the TLS protocol versions and cipher properties can take multiple values, separated by commas. The possible values for protocol version and ciphers depend on the TLS provider that you are using. Pulsar uses OpenSSL if it is available; otherwise, Pulsar falls back to the JDK implementation.
-
-```properties
-
-tlsProtocols=TLSv1.3,TLSv1.2
-tlsCiphers=TLS_DH_RSA_WITH_AES_256_GCM_SHA384,TLS_DH_RSA_WITH_AES_256_CBC_SHA
-
-```
-
-OpenSSL currently supports ```TLSv1.1```, ```TLSv1.2``` and ```TLSv1.3``` for the protocol version. You can acquire a list of supported ciphers from the openssl ciphers command, for example ```openssl ciphers -tls1_3```.
-
-For JDK 11, you can obtain a list of supported values from the documentation:
-- [TLS protocol](https://docs.oracle.com/en/java/javase/11/security/oracle-providers.html#GUID-7093246A-31A3-4304-AC5F-5FB6400405E2__SUNJSSEPROVIDERPROTOCOLPARAMETERS-BBF75009)
-- [Ciphers](https://docs.oracle.com/en/java/javase/11/security/oracle-providers.html#GUID-7093246A-31A3-4304-AC5F-5FB6400405E2__SUNJSSE_CIPHER_SUITES)
-
-## Proxy Configuration
-
-Proxies need to configure TLS in two directions: for clients connecting to the proxy, and for the proxy connecting to brokers.
-
-```properties
-
-# For clients connecting to the proxy
-tlsEnabledInProxy=true
-tlsCertificateFilePath=/path/to/broker.cert.pem
-tlsKeyFilePath=/path/to/broker.key-pk8.pem
-tlsTrustCertsFilePath=/path/to/ca.cert.pem
-
-# For the proxy to connect to brokers
-tlsEnabledWithBroker=true
-brokerClientTrustCertsFilePath=/path/to/ca.cert.pem
-
-```
-
-## Client configuration
-
-When you enable TLS transport encryption, you need to configure the client to use ```https://``` and port 8443 for the web service URL, and ```pulsar+ssl://``` and port 6651 for the broker service URL.
-
-As the server certificate that you generated above does not belong to any of the default trust chains, you also need to either specify the path to the **trust cert** (recommended), or tell the client to allow untrusted server certs.
-
-### Hostname verification
-
-Hostname verification is a TLS security feature whereby a client can refuse to connect to a server if the "CommonName" does not match the hostname to which the client is connecting. By default, Pulsar clients disable hostname verification, as it requires that each broker has a DNS record and a unique cert.
-
-Moreover, as the administrator has full control of the certificate authority, a bad actor is unlikely to be able to pull off a man-in-the-middle attack. "allowInsecureConnection" allows the client to connect to servers whose cert has not been signed by an approved CA.
-The client disables "allowInsecureConnection" by default, and you should always disable "allowInsecureConnection" in production environments. As long as you disable "allowInsecureConnection", a man-in-the-middle attack requires that the attacker has access to the CA.
-
-One scenario where you may want to enable hostname verification is where you have multiple proxy nodes behind a VIP, and the VIP has a DNS record, for example, pulsar.mycompany.com. In this case, you can generate a TLS cert with pulsar.mycompany.com as the "CommonName," and then enable hostname verification on the client.
-
-The examples below show that hostname verification is disabled for the CLI tools/Java/Python/C++/Node.js/C# clients by default.
-
-### CLI tools
-
-[Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-cli-tools.md#pulsar-admin), [`pulsar-perf`](reference-cli-tools.md#pulsar-perf), and [`pulsar-client`](reference-cli-tools.md#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation.
-
-You need to add the following parameters to that file to use TLS transport with the CLI tools of Pulsar:
-
-```properties
-
-webServiceUrl=https://broker.example.com:8443/
-brokerServiceUrl=pulsar+ssl://broker.example.com:6651/
-useTls=true
-tlsAllowInsecureConnection=false
-tlsTrustCertsFilePath=/path/to/ca.cert.pem
-tlsEnableHostnameVerification=false
-
-```
-
-#### Java client
-
-```java
-
-import org.apache.pulsar.client.api.PulsarClient;
-
-PulsarClient client = PulsarClient.builder()
-    .serviceUrl("pulsar+ssl://broker.example.com:6651/")
-    .enableTls(true)
-    .tlsTrustCertsFilePath("/path/to/ca.cert.pem")
-    .enableTlsHostnameVerification(false) // false by default, in any case
-    .allowTlsInsecureConnection(false) // false by default, in any case
-    .build();
-
-```
-
-#### Python client
-
-```python
-
-from pulsar import Client
-
-client = Client("pulsar+ssl://broker.example.com:6651/",
-                tls_hostname_verification=False,
-                tls_trust_certs_file_path="/path/to/ca.cert.pem",
-                tls_allow_insecure_connection=False)  # defaults to false from v2.2.0 onwards
-
-```
-
-#### C++ client
-
-```c++
-
-#include <pulsar/Client.h>
-
-ClientConfiguration config = ClientConfiguration();
-config.setUseTls(true);  // shouldn't be needed soon
-config.setTlsTrustCertsFilePath(caPath);
-config.setTlsAllowInsecureConnection(false);
-config.setAuth(pulsar::AuthTls::create(clientPublicKeyPath, clientPrivateKeyPath));
-config.setValidateHostName(false);
-
-```
-
-#### Node.js client
-
-```JavaScript
-
-const Pulsar = require('pulsar-client');
-
-(async () => {
-  const client = new Pulsar.Client({
-    serviceUrl: 'pulsar+ssl://broker.example.com:6651/',
-    tlsTrustCertsFilePath: '/path/to/ca.cert.pem',
-    useTls: true,
-    tlsValidateHostname: false,
-    tlsAllowInsecureConnection: false,
-  });
-})();
-
-```
-
-#### C# client
-
-```c#
-
-var certificate = new X509Certificate2("ca.cert.pem");
-var client = PulsarClient.Builder()
-                         .TrustedCertificateAuthority(certificate) //If the CA is not trusted on the host, you can add it explicitly.
-                         .VerifyCertificateAuthority(true) //Default is 'true'
-                         .VerifyCertificateName(false)     //Default is 'false'
-                         .Build();
-
-```
-
-> Note that `VerifyCertificateName` refers to the configuration of hostname verification in the C# client.
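-
-Once the broker (or proxy) is up with TLS enabled, you can probe the TLS endpoint from any machine that has the trust cert. This is a standard OpenSSL check; substitute your own broker address:
-
-```bash
-
-openssl s_client -connect broker.example.com:6651 -CAfile /path/to/ca.cert.pem < /dev/null
-
-```
-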
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/security-token-admin.md b/site2/website/versioned_docs/version-2.9.0-deprecated/security-token-admin.md
deleted file mode 100644
index a265f6320d28fe..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/security-token-admin.md
+++ /dev/null
@@ -1,183 +0,0 @@
----
-id: security-token-admin
-title: Token authentication admin
-sidebar_label: "Token authentication admin"
-original_id: security-token-admin
----
-
-## Token Authentication Overview
-
-Pulsar supports authenticating clients using security tokens that are based on [JSON Web Tokens](https://jwt.io/introduction/) ([RFC-7519](https://tools.ietf.org/html/rfc7519)).
-
-Tokens are used to identify a Pulsar client and associate it with some "principal" (or "role"), which
-is then granted permissions to do some actions (e.g. publish to or consume from a topic).
-
-A user is typically given a token string by an administrator (or some automated service).
-
-The compact representation of a signed JWT is a string that looks like:
-
-```
-
- eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY
-
-```
-
-Applications specify the token when creating the client instance. An alternative is to pass
-a "token supplier", that is, a function that returns the token when the client library
-needs one.
-
-> #### Always use TLS transport encryption
-> Sending a token is equivalent to sending a password over the wire. It is strongly recommended to
-> always use TLS encryption when talking to the Pulsar service. See
-> [Transport Encryption using TLS](security-tls-transport.md)
-
-## Secret vs Public/Private keys
-
-JWT supports two different kinds of keys in order to generate and validate the tokens:
-
- * Symmetric:
-    - there is a single ***secret*** key that is used both to generate and validate tokens
- * Asymmetric: there is a pair of keys.
-    - the ***private*** key is used to generate tokens
-    - the ***public*** key is used to validate tokens
-
-### Secret key
-
-When using a secret key, the administrator creates the key and uses it to generate the client tokens. The same key is also configured on the brokers to allow them to validate the clients.
-
-#### Creating a secret key
-
-> The output file is generated in the root of your Pulsar installation directory. You can also provide an absolute path for the output file.
-
-```shell
-
-$ bin/pulsar tokens create-secret-key --output my-secret.key
-
-```
-
-To generate a base64-encoded secret key:
-
-```shell
-
-$ bin/pulsar tokens create-secret-key --output /opt/my-secret.key --base64
-
-```
-
-### Public/Private keys
-
-With public/private keys, you need to create a pair of keys. Pulsar supports all algorithms supported by the Java JWT library shown [here](https://github.com/jwtk/jjwt#signature-algorithms-keys).
-
-#### Creating a key pair
-
-> The output files are generated in the root of your Pulsar installation directory. You can also provide absolute paths for the output files.
-
-```shell
-
-$ bin/pulsar tokens create-key-pair --output-private-key my-private.key --output-public-key my-public.key
-
-```
-
- * `my-private.key` should be stored in a safe location and only used by the administrator to generate
-   new tokens.
- * `my-public.key` is distributed to all Pulsar brokers. This file can be publicly shared without
-   any security concern.
-
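-If you want to sanity-check the generated public key, standard OpenSSL tooling can print it; this assumes the key files are DER-encoded, as the broker configuration section below notes for these files:
-
-```shell
-
-$ openssl pkey -pubin -inform DER -in my-public.key -text -noout
-
-```
-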
-## Generating tokens
-
-A token is the credential associated with a user. The association is done through the "principal",
-or "role". In the case of JWT tokens, this field is typically referred to as the **subject**, though
-it is exactly the same concept.
-
-The generated token is then required to have a **subject** field set.
-
-```shell
-
-$ bin/pulsar tokens create --secret-key file:///path/to/my-secret.key \
-            --subject test-user
-
-```
-
-This command prints the token string on stdout.
-
-Similarly, you can create a token by passing the "private" key:
-
-```shell
-
-$ bin/pulsar tokens create --private-key file:///path/to/my-private.key \
-            --subject test-user
-
-```
-
-Finally, a token can also be created with a pre-defined TTL. After that time,
-the token is automatically invalidated.
-
-```shell
-
-$ bin/pulsar tokens create --secret-key file:///path/to/my-secret.key \
-            --subject test-user \
-            --expiry-time 1y
-
-```
-
-## Authorization
-
-The token itself does not have any permissions associated with it; the authorization engine determines them. Once the token is created, you can grant permissions for this token to do certain
-actions. For example:
-
-```shell
-
-$ bin/pulsar-admin namespaces grant-permission my-tenant/my-namespace \
-            --role test-user \
-            --actions produce,consume
-
-```
-
-## Enabling Token Authentication ...
-
-### ... on Brokers
-
-To configure brokers to authenticate clients, put the following in `broker.conf`:
-
-```properties
-
-# Configuration to enable authentication and authorization
-authenticationEnabled=true
-authorizationEnabled=true
-authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken
-
-# If using a secret key (Note: key files must be DER-encoded)
-tokenSecretKey=file:///path/to/secret.key
-# The key can also be passed inline:
-# tokenSecretKey=data:;base64,FLFyW0oLJ2Fi22KKCm21J18mbAdztfSHN/lAT5ucEKU=
-
-# If using public/private keys (Note: key files must be DER-encoded)
-# tokenPublicKey=file:///path/to/public.key
-
-```
-
-### ... on Proxies
-
-To configure proxies to authenticate clients, put the following in `proxy.conf`.
-
-The proxy has its own token that it uses when talking to brokers. The role associated with this
-token should be configured in the ``proxyRoles`` of the brokers. See the [authorization guide](security-authorization.md) for more details.
-
-```properties
-
-# For clients connecting to the proxy
-authenticationEnabled=true
-authorizationEnabled=true
-authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken
-tokenSecretKey=file:///path/to/secret.key
-
-# For the proxy to connect to brokers
-brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken
-brokerClientAuthenticationParameters={"token":"eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0LXVzZXIifQ.9OHgE9ZUDeBTZs7nSMEFIuGNEX18FLR3qvy8mqxSxXw"}
-# Or, alternatively, read the token from a file
-# brokerClientAuthenticationParameters=file:///path/to/proxy-token.txt
-
-```
-
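-As a quick end-to-end check, you can point `pulsar-admin` at a broker that has token authentication enabled. This is a sketch: the admin URL is an example, and `TOKEN` is a placeholder for the string produced by `bin/pulsar tokens create` above:
-
-```shell
-
-$ bin/pulsar-admin --admin-url http://broker.example.com:8080/ \
-            --auth-plugin org.apache.pulsar.client.impl.auth.AuthenticationToken \
-            --auth-params token:TOKEN \
-            tenants list
-
-```
-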
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/sql-deployment-configurations.md b/site2/website/versioned_docs/version-2.9.0-deprecated/sql-deployment-configurations.md
deleted file mode 100644
index 02d8bc78f6cb9d..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/sql-deployment-configurations.md
+++ /dev/null
@@ -1,199 +0,0 @@
----
-id: sql-deployment-configurations
-title: Pulsar SQL configuration and deployment
-sidebar_label: "Configuration and deployment"
-original_id: sql-deployment-configurations
----
-
-You can configure the Presto Pulsar connector and deploy a cluster with the following instructions.
-
-## Configure Presto Pulsar Connector
-You can configure the Presto Pulsar connector in the `${project.root}/conf/presto/catalog/pulsar.properties` properties file. The configuration for the connector and the default values are as follows.
-
-```properties
-
-# name of the connector to be displayed in the catalog
-connector.name=pulsar
-
-# the url of Pulsar broker service
-pulsar.web-service-url=http://localhost:8080
-
-# URI of Zookeeper cluster
-pulsar.zookeeper-uri=localhost:2181
-
-# minimum number of entries to read at a single time
-pulsar.entry-read-batch-size=100
-
-# default number of splits to use per query
-pulsar.target-num-splits=4
-
-```
-
-You can connect Presto to a Pulsar cluster with multiple hosts. To configure multiple hosts for brokers, add multiple URLs to `pulsar.web-service-url`. To configure multiple hosts for ZooKeeper, add multiple URIs to `pulsar.zookeeper-uri`. The following is an example.
-
-```
-
-pulsar.web-service-url=http://localhost:8080,localhost:8081,localhost:8082
-pulsar.zookeeper-uri=localhost1,localhost2:2181
-
-```
-
-**Note: by default, Pulsar SQL does not get the last message in a topic**. This is by design and controlled by settings. By default, the BookKeeper LAC only advances when subsequent entries are added. If no subsequent entry is added, the last written entry is not visible to readers until the ledger is closed. This is not a problem for Pulsar, which uses the managed ledger, but Pulsar SQL reads directly from the BookKeeper ledger.
-
-If you want to get the last message in a topic, set the following configurations:
-
-1. For the broker configuration, set `bookkeeperExplicitLacIntervalInMills` > 0 in `broker.conf` or `standalone.conf`.
-
-2. For the Presto configuration, set `pulsar.bookkeeper-explicit-interval` > 0 and `pulsar.bookkeeper-use-v2-protocol=false`.
-
-However, using the BookKeeper V3 protocol introduces additional GC overhead to BookKeeper as it uses Protobuf.
-
-## Query data from existing Presto clusters
-
-If you already have a Presto cluster, you can copy the Presto Pulsar connector plugin to your existing cluster. Download the archived plugin package with the following command.
-
-```bash
-
-$ wget pulsar:binary_release_url
-
-```
-
-## Deploy a new cluster
-
-Since Pulsar SQL is powered by [Trino (formerly Presto SQL)](https://trino.io), the deployment configuration for the Pulsar SQL worker is the same as for Presto.
-
-:::note
-
-For how to set up a standalone single node environment, refer to [Query data](sql-getting-started.md).
-
-:::
-
-You can use the same CLI args as the Presto launcher.
- 
-```bash
-
-$ ./bin/pulsar sql-worker --help
-Usage: launcher [options] command
-
-Commands: run, start, stop, restart, kill, status
-
-Options:
-  -h, --help            show this help message and exit
-  -v, --verbose         Run verbosely
-  --etc-dir=DIR         Defaults to INSTALL_PATH/etc
-  --launcher-config=FILE
-                        Defaults to INSTALL_PATH/bin/launcher.properties
-  --node-config=FILE    Defaults to ETC_DIR/node.properties
-  --jvm-config=FILE     Defaults to ETC_DIR/jvm.config
-  --config=FILE         Defaults to ETC_DIR/config.properties
-  --log-levels-file=FILE
-                        Defaults to ETC_DIR/log.properties
-  --data-dir=DIR        Defaults to INSTALL_PATH
-  --pid-file=FILE       Defaults to DATA_DIR/var/run/launcher.pid
-  --launcher-log-file=FILE
-                        Defaults to DATA_DIR/var/log/launcher.log (only in
-                        daemon mode)
-  --server-log-file=FILE
-                        Defaults to DATA_DIR/var/log/server.log (only in
-                        daemon mode)
-  -D NAME=VALUE         Set a Java system property
-
-```
-
-The default configuration for the cluster is located in `${project.root}/conf/presto`. You can customize your deployment by modifying the default configuration.
-
-You can set the worker to read from a different configuration directory, or set a different directory to write data.
-
-```bash
-
-$ ./bin/pulsar sql-worker run --etc-dir /tmp/incubator-pulsar/conf/presto --data-dir /tmp/presto-1
-
-```
-
-You can start the worker as a daemon process.
-
-```bash
-
-$ ./bin/pulsar sql-worker start
-
-```
-
-### Deploy a cluster on multiple nodes
-
-You can deploy a Pulsar SQL cluster or Presto cluster on multiple nodes. The following example shows how to deploy a cluster on a three-node cluster.
-
-1. Copy the Pulsar binary distribution to three nodes.
-
-The first node runs as the Presto coordinator. The minimal configuration requirement in the `${project.root}/conf/presto/config.properties` file is as follows.
-
-```properties
-
-coordinator=true
-node-scheduler.include-coordinator=true
-http-server.http.port=8080
-query.max-memory=50GB
-query.max-memory-per-node=1GB
-discovery-server.enabled=true
-discovery.uri=<coordinator-url>
-
-```
-
-The other two nodes serve as worker nodes; you can use the following configuration for worker nodes.
-
-```properties
-
-coordinator=false
-http-server.http.port=8080
-query.max-memory=50GB
-query.max-memory-per-node=1GB
-discovery.uri=<coordinator-url>
-
-```
-
-2. Modify the `pulsar.web-service-url` and `pulsar.zookeeper-uri` configuration in the `${project.root}/conf/presto/catalog/pulsar.properties` file accordingly for the three nodes.
-
-3. Start the coordinator node.
-
-```
-
-$ ./bin/pulsar sql-worker run
-
-```
-
-4. Start the worker nodes.
-
-```
-
-$ ./bin/pulsar sql-worker run
-
-```
-
-5. Start the SQL CLI and check the status of your cluster.
-
-```bash
-
-$ ./bin/pulsar sql --server <coordinator-url>
-
-```
-
-6. Check the status of your nodes.
-
-```bash
-
-presto> SELECT * FROM system.runtime.nodes;
- node_id |        http_uri         | node_version | coordinator | state
----------+-------------------------+--------------+-------------+--------
- 1       | http://192.168.2.1:8081 | testversion  | true        | active
- 3       | http://192.168.2.2:8081 | testversion  | false       | active
- 2       | http://192.168.2.3:8081 | testversion  | false       | active
-
-```
-
-For more information about deployment in Presto, refer to [Presto deployment](https://trino.io/docs/current/installation/deployment.html).
-
-:::note
-
-The broker does not advance the LAC, so when Pulsar SQL bypasses the broker to query data, it can only read entries up to the LAC that all the bookies have learned.
-You can enable periodic writing of the LAC on the broker by setting `bookkeeperExplicitLacIntervalInMills` in `broker.conf`.
-
-:::
-
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/sql-getting-started.md b/site2/website/versioned_docs/version-2.9.0-deprecated/sql-getting-started.md
deleted file mode 100644
index 8a5cd7199b365a..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/sql-getting-started.md
+++ /dev/null
@@ -1,187 +0,0 @@
----
-id: sql-getting-started
-title: Query data with Pulsar SQL
-sidebar_label: "Query data"
-original_id: sql-getting-started
----
-
-Before querying data in Pulsar, you need to install Pulsar and the built-in connectors.
-
-## Requirements
-1. Install [Pulsar](getting-started-standalone.md#install-pulsar-standalone).
-2. Install Pulsar [built-in connectors](getting-started-standalone.md#install-builtin-connectors-optional).
-
-## Query data in Pulsar
-To query data in Pulsar with Pulsar SQL, complete the following steps.
-
-1. Start a Pulsar standalone cluster.
-
-```bash
-
-./bin/pulsar standalone
-
-```
-
-2. Start a Pulsar SQL worker.
-
-```bash
-
-./bin/pulsar sql-worker run
-
-```
-
-3. After initializing the Pulsar standalone cluster and the SQL worker, run the SQL CLI.
-
-```bash
-
-./bin/pulsar sql
-
-```
-
-4. Test with SQL commands.
-
-```bash
-
-presto> show catalogs;
- Catalog
----------
- pulsar
- system
-(2 rows)
-
-Query 20180829_211752_00004_7qpwh, FINISHED, 1 node
-Splits: 19 total, 19 done (100.00%)
-0:00 [0 rows, 0B] [0 rows/s, 0B/s]
-
-
-presto> show schemas in pulsar;
-        Schema
------------------------
- information_schema
- public/default
- public/functions
- sample/standalone/ns1
-(4 rows)
-
-Query 20180829_211818_00005_7qpwh, FINISHED, 1 node
-Splits: 19 total, 19 done (100.00%)
-0:00 [4 rows, 89B] [21 rows/s, 471B/s]
-
-
-presto> show tables in pulsar."public/default";
- Table
--------
-(0 rows)
-
-Query 20180829_211839_00006_7qpwh, FINISHED, 1 node
-Splits: 19 total, 19 done (100.00%)
-0:00 [0 rows, 0B] [0 rows/s, 0B/s]
-
-```
-
-Since there is no data in Pulsar, no records are returned.
-
-5. Start the built-in connector _DataGeneratorSource_ and ingest some mock data.
-
-```bash
-
-./bin/pulsar-admin sources create --name generator --destinationTopicName generator_test --source-type data-generator
-
-```
-
-Then you can query a topic in the namespace "public/default".
-
-```bash
-
-presto> show tables in pulsar."public/default";
-     Table
-----------------
- generator_test
-(1 row)
-
-Query 20180829_213202_00000_csyeu, FINISHED, 1 node
-Splits: 19 total, 19 done (100.00%)
-0:02 [1 rows, 38B] [0 rows/s, 17B/s]
-
-```
-
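-If the table does not show up, you can check that the source is running (a quick check with `pulsar-admin`; the source was created in the default `public/default` namespace above):
-
-```bash
-
-./bin/pulsar-admin sources status --name generator
-
-```
-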
-You can now query the data within the topic "generator_test".
-
-```bash
-
-presto> select * from pulsar."public/default".generator_test;
-
-  firstname  | middlename  |  lastname   |              email               |   username   | password | telephonenumber | age |                  companyemail                  | nationalidentitycardnumber |
--------------+-------------+-------------+----------------------------------+--------------+----------+-----------------+-----+------------------------------------------------+----------------------------+
- Genesis     | Katherine   | Wiley       | genesis.wiley@gmail.com          | genesisw     | y9D2dtU3 | 959-197-1860    |  71 | genesis.wiley@interdemconsulting.eu            | 880-58-9247                |
- Brayden     |             | Stanton     | brayden.stanton@yahoo.com        | braydens     | ZnjmhXik | 220-027-867     |  81 | brayden.stanton@supermemo.eu                   | 604-60-7069                |
- Benjamin    | Julian      | Velasquez   | benjamin.velasquez@yahoo.com     | benjaminv    | 8Bc7m3eb | 298-377-0062    |  21 | benjamin.velasquez@hostesltd.biz               | 213-32-5882                |
- Michael     | Thomas      | Donovan     | donovan@mail.com                 | michaeld     | OqBm9MLs | 078-134-4685    |  55 | michael.donovan@memortech.eu                   | 443-30-3442                |
- Brooklyn    | Avery       | Roach       | brooklynroach@yahoo.com          | broach       | IxtBLafO | 387-786-2998    |  68 | brooklyn.roach@warst.biz                       | 085-88-3973                |
- Skylar      |             | Bradshaw    | skylarbradshaw@yahoo.com         | skylarb      | p6eC6cKy | 210-872-608     |  96 | skylar.bradshaw@flyhigh.eu                     | 453-46-0334                |
-.
-.
-.
-
-```
-
-These records are the mock data produced by the data generator.
-
-## Query your own data
-If you want to query your own data, you need to ingest it first. You can write a simple producer that writes custom-defined data to Pulsar. The following is an example.
-
-```java
-
-import org.apache.pulsar.client.api.Producer;
-import org.apache.pulsar.client.api.PulsarClient;
-import org.apache.pulsar.client.impl.schema.AvroSchema;
-
-public class TestProducer {
-
-    public static class Foo {
-        private int field1 = 1;
-        private String field2;
-        private long field3;
-
-        public Foo() {
-        }
-
-        public int getField1() {
-            return field1;
-        }
-
-        public void setField1(int field1) {
-            this.field1 = field1;
-        }
-
-        public String getField2() {
-            return field2;
-        }
-
-        public void setField2(String field2) {
-            this.field2 = field2;
-        }
-
-        public long getField3() {
-            return field3;
-        }
-
-        public void setField3(long field3) {
-            this.field3 = field3;
-        }
-    }
-
-    public static void main(String[] args) throws Exception {
-        PulsarClient pulsarClient = PulsarClient.builder().serviceUrl("pulsar://localhost:6650").build();
-        Producer<Foo> producer = pulsarClient.newProducer(AvroSchema.of(Foo.class)).topic("test_topic").create();
-
-        for (int i = 0; i < 1000; i++) {
-            Foo foo = new Foo();
-            foo.setField1(i);
-            foo.setField2("foo" + i);
-            foo.setField3(System.currentTimeMillis());
-            producer.newMessage().value(foo).send();
-        }
-        producer.close();
-        pulsarClient.close();
-    }
-}
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/sql-overview.md b/site2/website/versioned_docs/version-2.9.0-deprecated/sql-overview.md
deleted file mode 100644
index 0235d0256c5f19..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/sql-overview.md
+++ /dev/null
@@ -1,18 +0,0 @@
----
-id: sql-overview
-title: Pulsar SQL Overview
-sidebar_label: "Overview"
-original_id: sql-overview
----
-
-Apache Pulsar is used to store streams of event data, and the event data is structured with predefined fields. With the implementation of the [Schema Registry](schema-get-started.md), you can store structured data in Pulsar and query the data by using [Trino (formerly Presto SQL)](https://trino.io/).
-
-As the core of Pulsar SQL, the Presto Pulsar connector enables Presto workers within a Presto cluster to query data from Pulsar.
- 
-![The Pulsar consumer and reader interfaces](/assets/pulsar-sql-arch-2.png)
-
-The query performance is efficient and highly scalable, because Pulsar adopts a [two-level segment-based architecture](concepts-architecture-overview.md#apache-bookkeeper).
-
-Topics in Pulsar are stored as segments in [Apache BookKeeper](https://bookkeeper.apache.org/). Each topic segment is replicated to some BookKeeper nodes, which enables concurrent reads and high read throughput. You can configure the number of BookKeeper nodes; the default number is `3`. In the Presto Pulsar connector, data is read directly from BookKeeper, so Presto workers can read concurrently from a horizontally scalable number of BookKeeper nodes.
-
-![The Pulsar consumer and reader interfaces](/assets/pulsar-sql-arch-1.png)
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/sql-rest-api.md b/site2/website/versioned_docs/version-2.9.0-deprecated/sql-rest-api.md
deleted file mode 100644
index c92fd62f7d8703..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/sql-rest-api.md
+++ /dev/null
@@ -1,192 +0,0 @@
----
-id: sql-rest-api
-title: Pulsar SQL REST APIs
-sidebar_label: "REST APIs"
-original_id: sql-rest-api
----
-
-This section lists resources that make up the Presto REST API v1.
-
-## Request for Presto services
-
-All requests for Presto services should use version v1 of the Presto REST API.
-
-To request services, use the explicit URL `http://presto.service:8081/v1`. You need to update `presto.service:8081` with your real Presto address before sending requests.
-
-`POST` requests require the `X-Presto-User` header. If you use authentication, you must use the same `username` that is specified in the authentication configuration. If you do not use authentication, you can specify anything for `username`.
-
-```properties
-
-X-Presto-User: username
-
-```
-
-For more information about headers, refer to [PrestoHeaders](https://github.com/trinodb/trino).
-
-## Schema
-
-You submit the SQL statement in the HTTP body. All data is received as a JSON document that might contain a `nextUri` link. If the received JSON document contains a `nextUri` link, the request continues with the `nextUri` link until the received data does not contain a `nextUri` link. If no error is returned, the query completes successfully. If an `error` field is displayed in `stats`, the query has failed.
-
-The following is an example of `show catalogs`. The query continues until the received JSON document does not contain a `nextUri` link. Since no `error` is displayed in `stats`, the query has completed successfully.
- -```powershell - -➜ ~ curl --header "X-Presto-User: test-user" --request POST --data 'show catalogs' http://localhost:8081/v1/statement -{ - "infoUri" : "http://localhost:8081/ui/query.html?20191113_033653_00006_dg6hb", - "stats" : { - "queued" : true, - "nodes" : 0, - "userTimeMillis" : 0, - "cpuTimeMillis" : 0, - "wallTimeMillis" : 0, - "processedBytes" : 0, - "processedRows" : 0, - "runningSplits" : 0, - "queuedTimeMillis" : 0, - "queuedSplits" : 0, - "completedSplits" : 0, - "totalSplits" : 0, - "scheduled" : false, - "peakMemoryBytes" : 0, - "state" : "QUEUED", - "elapsedTimeMillis" : 0 - }, - "id" : "20191113_033653_00006_dg6hb", - "nextUri" : "http://localhost:8081/v1/statement/20191113_033653_00006_dg6hb/1" -} - -➜ ~ curl http://localhost:8081/v1/statement/20191113_033653_00006_dg6hb/1 -{ - "infoUri" : "http://localhost:8081/ui/query.html?20191113_033653_00006_dg6hb", - "nextUri" : "http://localhost:8081/v1/statement/20191113_033653_00006_dg6hb/2", - "id" : "20191113_033653_00006_dg6hb", - "stats" : { - "state" : "PLANNING", - "totalSplits" : 0, - "queued" : false, - "userTimeMillis" : 0, - "completedSplits" : 0, - "scheduled" : false, - "wallTimeMillis" : 0, - "runningSplits" : 0, - "queuedSplits" : 0, - "cpuTimeMillis" : 0, - "processedRows" : 0, - "processedBytes" : 0, - "nodes" : 0, - "queuedTimeMillis" : 1, - "elapsedTimeMillis" : 2, - "peakMemoryBytes" : 0 - } -} - -➜ ~ curl http://localhost:8081/v1/statement/20191113_033653_00006_dg6hb/2 -{ - "id" : "20191113_033653_00006_dg6hb", - "data" : [ - [ - "pulsar" - ], - [ - "system" - ] - ], - "infoUri" : "http://localhost:8081/ui/query.html?20191113_033653_00006_dg6hb", - "columns" : [ - { - "typeSignature" : { - "rawType" : "varchar", - "arguments" : [ - { - "kind" : "LONG_LITERAL", - "value" : 6 - } - ], - "literalArguments" : [], - "typeArguments" : [] - }, - "name" : "Catalog", - "type" : "varchar(6)" - } - ], - "stats" : { - "wallTimeMillis" : 104, - "scheduled" : true, - "userTimeMillis" : 14, - "progressPercentage" : 100, - "totalSplits" : 19, - "nodes" : 1, - "cpuTimeMillis" : 16, - "queued" : false, - "queuedTimeMillis" : 1, - "state" : "FINISHED", - "peakMemoryBytes" : 0, - "elapsedTimeMillis" : 111, - "processedBytes" : 0, - "processedRows" : 0, - "queuedSplits" : 0, - "rootStage" : { - "cpuTimeMillis" : 1, - "runningSplits" : 0, - "state" : "FINISHED", - "completedSplits" : 1, - "subStages" : [ - { - "cpuTimeMillis" : 14, - "runningSplits" : 0, - "state" : "FINISHED", - "completedSplits" : 17, - "subStages" : [ - { - "wallTimeMillis" : 7, - "subStages" : [], - "stageId" : "2", - "done" : true, - "nodes" : 1, - "totalSplits" : 1, - "processedBytes" : 22, - "processedRows" : 2, - "queuedSplits" : 0, - "userTimeMillis" : 1, - "cpuTimeMillis" : 1, - "runningSplits" : 0, - "state" : "FINISHED", - "completedSplits" : 1 - } - ], - "wallTimeMillis" : 92, - "nodes" : 1, - "done" : true, - "stageId" : "1", - "userTimeMillis" : 12, - "processedRows" : 2, - "processedBytes" : 51, - "queuedSplits" : 0, - "totalSplits" : 17 - } - ], - "wallTimeMillis" : 5, - "done" : true, - "nodes" : 1, - "stageId" : "0", - "userTimeMillis" : 1, - "processedRows" : 2, - "processedBytes" : 22, - "totalSplits" : 1, - "queuedSplits" : 0 - }, - "runningSplits" : 0, - "completedSplits" : 19 - } -} - -``` - -:::note - -Since the response data is not in sync with the query state from the perspective of clients, you cannot rely on the response data to determine whether the query completes. 
-
-:::
-
-For more information about the Presto REST API, refer to [Presto HTTP Protocol](https://github.com/prestosql/presto/wiki/HTTP-Protocol).
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/standalone-docker.md b/site2/website/versioned_docs/version-2.9.0-deprecated/standalone-docker.md
deleted file mode 100644
index 1710ec819d7a4e..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/standalone-docker.md
+++ /dev/null
@@ -1,214 +0,0 @@
----
-id: standalone-docker
-title: Set up a standalone Pulsar in Docker
-sidebar_label: "Run Pulsar in Docker"
-original_id: standalone-docker
----
-
-For local development and testing, you can run Pulsar in standalone mode on your own machine within a Docker container.
-
-If you have not installed Docker, download the [Community edition](https://www.docker.com/community-edition) and follow the instructions for your OS.
-
-## Start Pulsar in Docker
-
-* For MacOS, Linux, and Windows:
-
-  ```shell
-  
-  $ docker run -it -p 6650:6650 -p 8080:8080 --mount source=pulsardata,target=/pulsar/data --mount source=pulsarconf,target=/pulsar/conf apachepulsar/pulsar:@pulsar:version@ bin/pulsar standalone
-  
-  ```
-
-A few things to note about this command:
- * The data, metadata, and configuration are persisted on Docker volumes so that the container does not start "fresh" every time it is restarted. For details on the volumes, you can use `docker volume inspect <sourcename>`.
- * For Docker on Windows, make sure to configure it to use Linux containers.
-
-If you start Pulsar successfully, you will see `INFO`-level log messages like this:
-
-```
-
-08:18:30.970 [main] INFO org.apache.pulsar.broker.web.WebService - HTTP Service started at http://0.0.0.0:8080
-...
-07:53:37.322 [main] INFO org.apache.pulsar.broker.PulsarService - messaging service is ready, bootstrap service port = 8080, broker url= pulsar://localhost:6650, cluster=standalone, configs=org.apache.pulsar.broker.ServiceConfiguration@98b63c1
-...
-
-```
-
-:::tip
-
-When you start a local standalone cluster, a `public/default` namespace is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces.
-For more information, see [Topics](concepts-messaging.md#topics).
-
-:::
-
-## Use Pulsar in Docker
-
-Pulsar offers client libraries for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python.md)
-and [C++](client-libraries-cpp.md). If you're running a local standalone cluster, you can
-use one of these root URLs to interact with your cluster:
-
-* `pulsar://localhost:6650`
-* `http://localhost:8080`
-
-The following example will guide you through getting started with Pulsar quickly by using the [Python client API](client-libraries-python.md).
- -Install the Pulsar Python client library directly from [PyPI](https://pypi.org/project/pulsar-client/): - -```shell - -$ pip install pulsar-client - -``` - -### Consume a message - -Create a consumer and subscribe to the topic: - -```python - -import pulsar - -client = pulsar.Client('pulsar://localhost:6650') -consumer = client.subscribe('my-topic', - subscription_name='my-sub') - -while True: - msg = consumer.receive() - print("Received message: '%s'" % msg.data()) - consumer.acknowledge(msg) - -client.close() - -``` - -### Produce a message - -Now start a producer to send some test messages: - -```python - -import pulsar - -client = pulsar.Client('pulsar://localhost:6650') -producer = client.create_producer('my-topic') - -for i in range(10): - producer.send(('hello-pulsar-%d' % i).encode('utf-8')) - -client.close() - -``` - -## Get the topic statistics - -In Pulsar, you can use REST, Java, or command-line tools to control every aspect of the system. -For details on APIs, refer to [Admin API Overview](admin-api-overview.md). - -In the simplest example, you can use curl to probe the stats for a particular topic: - -```shell - -$ curl http://localhost:8080/admin/v2/persistent/public/default/my-topic/stats | python -m json.tool - -``` - -The output is something like this: - -```json - -{ - "msgRateIn": 0.0, - "msgThroughputIn": 0.0, - "msgRateOut": 1.8332950480217471, - "msgThroughputOut": 91.33142602871978, - "bytesInCounter": 7097, - "msgInCounter": 143, - "bytesOutCounter": 6607, - "msgOutCounter": 133, - "averageMsgSize": 0.0, - "msgChunkPublished": false, - "storageSize": 7097, - "backlogSize": 0, - "offloadedStorageSize": 0, - "publishers": [ - { - "accessMode": "Shared", - "msgRateIn": 0.0, - "msgThroughputIn": 0.0, - "averageMsgSize": 0.0, - "chunkedMessageRate": 0.0, - "producerId": 0, - "metadata": {}, - "address": "/127.0.0.1:35604", - "connectedSince": "2021-07-04T09:05:43.04788Z", - "clientVersion": "2.8.0", - "producerName": "standalone-2-5" - } - ], - "waitingPublishers": 0, - "subscriptions": { - "my-sub": { - "msgRateOut": 1.8332950480217471, - "msgThroughputOut": 91.33142602871978, - "bytesOutCounter": 6607, - "msgOutCounter": 133, - "msgRateRedeliver": 0.0, - "chunkedMessageRate": 0, - "msgBacklog": 0, - "backlogSize": 0, - "msgBacklogNoDelayed": 0, - "blockedSubscriptionOnUnackedMsgs": false, - "msgDelayed": 0, - "unackedMessages": 0, - "type": "Exclusive", - "activeConsumerName": "3c544f1daa", - "msgRateExpired": 0.0, - "totalMsgExpired": 0, - "lastExpireTimestamp": 0, - "lastConsumedFlowTimestamp": 1625389101290, - "lastConsumedTimestamp": 1625389546070, - "lastAckedTimestamp": 1625389546162, - "lastMarkDeleteAdvancedTimestamp": 1625389546163, - "consumers": [ - { - "msgRateOut": 1.8332950480217471, - "msgThroughputOut": 91.33142602871978, - "bytesOutCounter": 6607, - "msgOutCounter": 133, - "msgRateRedeliver": 0.0, - "chunkedMessageRate": 0.0, - "consumerName": "3c544f1daa", - "availablePermits": 867, - "unackedMessages": 0, - "avgMessagesPerEntry": 6, - "blockedConsumerOnUnackedMsgs": false, - "lastAckedTimestamp": 1625389546162, - "lastConsumedTimestamp": 1625389546070, - "metadata": {}, - "address": "/127.0.0.1:35472", - "connectedSince": "2021-07-04T08:58:21.287682Z", - "clientVersion": "2.8.0" - } - ], - "isDurable": true, - "isReplicated": false, - "allowOutOfOrderDelivery": false, - "consumersAfterMarkDeletePosition": {}, - "nonContiguousDeletedMessagesRanges": 0, - "nonContiguousDeletedMessagesRangesSerializedSize": 0, - "durable": true, - "replicated": 
false
-    }
-  },
-  "replication": {},
-  "deduplicationStatus": "Disabled",
-  "nonContiguousDeletedMessagesRanges": 0,
-  "nonContiguousDeletedMessagesRangesSerializedSize": 0
-}
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/standalone.md b/site2/website/versioned_docs/version-2.9.0-deprecated/standalone.md
deleted file mode 100644
index 9bff74d92c10bc..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/standalone.md
+++ /dev/null
@@ -1,268 +0,0 @@
----
-id: standalone
-title: Set up a standalone Pulsar locally
-sidebar_label: "Run Pulsar locally"
-original_id: standalone
----
-
-For local development and testing, you can run Pulsar in standalone mode on your machine. The standalone mode includes a Pulsar broker, the necessary ZooKeeper and BookKeeper components running inside of a single Java Virtual Machine (JVM) process.
-
-> **Pulsar in production?**
-> If you're looking to run a full production Pulsar installation, see the [Deploying a Pulsar instance](deploy-bare-metal.md) guide.
-
-## Install Pulsar standalone
-
-This tutorial guides you through every step of installing Pulsar locally.
-
-### System requirements
-
-Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install 64-bit JRE/JDK 8 or later versions; JRE/JDK 11 is recommended.
-
-:::tip
-
-By default, Pulsar allocates 2G JVM heap memory to start. It can be changed in the `conf/pulsar_env.sh` file under `PULSAR_MEM`. These are extra options passed into the JVM.
-
-:::
-
-:::note
-
-The broker is only supported on a 64-bit JVM.
-
-:::
-
-### Install Pulsar using binary release
-
-To get started with Pulsar, download a binary tarball release in one of the following ways:
-
-* download from the Apache mirror (Pulsar @pulsar:version@ binary release)
-
-* download from the Pulsar [downloads page](pulsar:download_page_url)
-
-* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
-
-* use [wget](https://www.gnu.org/software/wget):
-
-  ```shell
-  
-  $ wget pulsar:binary_release_url
-  
-  ```
-
-After you download the tarball, untar it and use the `cd` command to navigate to the resulting directory:
-
-```bash
-
-$ tar xvfz apache-pulsar-@pulsar:version@-bin.tar.gz
-$ cd apache-pulsar-@pulsar:version@
-
-```
-
-#### What your package contains
-
-The Pulsar binary package initially contains the following directories:
-
-Directory | Contains
-:---------|:--------
-`bin` | Pulsar's command-line tools, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/).
-`conf` | Configuration files for Pulsar, including [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more.
-`examples` | A Java JAR file containing [Pulsar Functions](functions-overview.md) example.
-`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files used by Pulsar.
-`licenses` | License files, in the `.txt` form, for various components of the Pulsar [codebase](https://github.com/apache/pulsar).
-
-These directories are created once you begin running Pulsar.
-
-Directory | Contains
-:---------|:--------
-`data` | The data storage directory used by ZooKeeper and BookKeeper.
-`instances` | Artifacts created for [Pulsar Functions](functions-overview.md).
-`logs` | Logs created by the installation.
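-
-As a quick sanity check, you can list the package layout described above. This is a minimal sketch; it assumes the default directory name from the untarred tarball, and note that `data`, `instances`, and `logs` only appear after the first run:
-
-```bash
-
-$ ls apache-pulsar-@pulsar:version@
-bin  conf  examples  lib  licenses
-
-```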
-
-:::tip
-
-If you want to use builtin connectors and tiered storage offloaders, you can install them according to the following instructions:
-* [Install builtin connectors (optional)](#install-builtin-connectors-optional)
-* [Install tiered storage offloaders (optional)](#install-tiered-storage-offloaders-optional)
-Otherwise, skip this step and perform the next step [Start Pulsar standalone](#start-pulsar-standalone). Pulsar can be successfully installed without installing builtin connectors and tiered storage offloaders.
-
-:::
-
-### Install builtin connectors (optional)
-
-Since the `2.1.0-incubating` release, Pulsar releases a separate binary distribution, containing all the `builtin` connectors.
-To enable those `builtin` connectors, you can download the connectors tarball release in one of the following ways:
-
-* download from the Apache mirror Pulsar IO Connectors @pulsar:version@ release
-
-* download from the Pulsar [downloads page](pulsar:download_page_url)
-
-* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
-
-* use [wget](https://www.gnu.org/software/wget):
-
-  ```shell
-  
-  $ wget pulsar:connector_release_url/{connector}-@pulsar:version@.nar
-  
-  ```
-
-After you download the NAR file, copy the file to the `connectors` directory in the pulsar directory.
-For example, if you download the `pulsar-io-aerospike-@pulsar:version@.nar` connector file, enter the following commands:
-
-```bash
-
-$ mkdir connectors
-$ mv pulsar-io-aerospike-@pulsar:version@.nar connectors
-
-$ ls connectors
-pulsar-io-aerospike-@pulsar:version@.nar
-...
-
-```
-
-:::note
-
-* If you are running Pulsar in a bare metal cluster, make sure `connectors` tarball is unzipped in every pulsar directory of the broker (or in every pulsar directory of function-worker if you are running a separate worker cluster for Pulsar Functions).
-* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DC/OS](https://dcos.io/)), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled [all builtin connectors](io-overview.md#working-with-connectors).
-
-:::
-
-### Install tiered storage offloaders (optional)
-
-:::tip
-
-- Since the `2.2.0` release, Pulsar releases a separate binary distribution, containing the tiered storage offloaders.
-- To enable the tiered storage feature, follow the instructions below; otherwise skip this section.
-
-:::
-
-To get started with [tiered storage offloaders](concepts-tiered-storage.md), you need to download the offloaders tarball release on every broker node in one of the following ways:
-
-* download from the Apache mirror Pulsar Tiered Storage Offloaders @pulsar:version@ release
-
-* download from the Pulsar [downloads page](pulsar:download_page_url)
-
-* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
-
-* use [wget](https://www.gnu.org/software/wget):
-
-  ```shell
-  
-  $ wget pulsar:offloader_release_url
-  
-  ```
-
-After you download the tarball, untar the offloaders package and copy the offloaders as `offloaders`
-in the pulsar directory:
-
-```bash
-
-$ tar xvfz apache-pulsar-offloaders-@pulsar:version@-bin.tar.gz
-
-// you will find a directory named `apache-pulsar-offloaders-@pulsar:version@` in the pulsar directory
-// then copy the offloaders

-$ mv apache-pulsar-offloaders-@pulsar:version@/offloaders offloaders
-
-$ ls offloaders
-tiered-storage-jcloud-@pulsar:version@.nar
-
-```
-
-For more information on how to configure tiered storage, see [Tiered storage cookbook](cookbooks-tiered-storage.md).
-
-:::note
-
-* If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's pulsar directory.
-* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DC/OS](https://dcos.io/)), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders.
-
-:::
-
-## Start Pulsar standalone
-
-Once you have an up-to-date local copy of the release, you can start a local cluster using the [`pulsar`](reference-cli-tools.md#pulsar) command, which is stored in the `bin` directory, and specify that you want to start Pulsar in standalone mode.
-
-```bash
-
-$ bin/pulsar standalone
-
-```
-
-If you have started Pulsar successfully, you will see `INFO`-level log messages like this:
-
-```bash
-
-21:59:29.327 [DLM-/stream/storage-OrderedScheduler-3-0] INFO org.apache.bookkeeper.stream.storage.impl.sc.StorageContainerImpl - Successfully started storage container (0).
-21:59:34.576 [main] INFO org.apache.pulsar.broker.authentication.AuthenticationService - Authentication is disabled
-21:59:34.576 [main] INFO org.apache.pulsar.websocket.WebSocketService - Pulsar WebSocket Service started
-
-```
-
-:::tip
-
-* The service is running on your terminal, which is under your direct control. If you need to run other commands, open a new terminal window.
-
-* You can also run the service as a background process using the `pulsar-daemon start standalone` command. For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon).
-
-* By default, there is no encryption, authentication, or authorization configured. Apache Pulsar can be accessed from a remote server without any authorization. Please check the [Security Overview](security-overview.md) document to secure your deployment.
-
-* When you start a local standalone cluster, a `public/default` [namespace](concepts-messaging.md#namespaces) is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics).
-
-:::
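-
-To confirm that the standalone cluster is up, you can query the admin REST API. This is a hedged example: it assumes the default `webServicePort` of `8080` and an unmodified standalone configuration.
-
-```bash
-
-# The standalone cluster registers itself under the name "standalone".
-$ curl http://localhost:8080/admin/v2/clusters
-["standalone"]
-
-```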
-
-## Use Pulsar standalone
-
-Pulsar provides a CLI tool called [`pulsar-client`](reference-cli-tools.md#pulsar-client). The pulsar-client tool enables you to consume and produce messages to a Pulsar topic in a running cluster.
-
-### Consume a message
-
-The following command consumes a message from the `my-topic` topic with the subscription name `first-subscription`:
-
-```bash
-
-$ bin/pulsar-client consume my-topic -s "first-subscription"
-
-```
-
-If the message has been successfully consumed, you will see a confirmation like the following in the `pulsar-client` logs:
-
-```
-
-22:17:16.781 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully consumed
-
-```
-
-:::tip
-
-As you may have noticed, we did not explicitly create the `my-topic` topic from which we consume the message. When you consume a message from a topic that does not yet exist, Pulsar creates that topic for you automatically. Producing a message to a topic that does not exist will automatically create that topic for you as well.
-
-:::
-
-### Produce a message
-
-The following command produces a message saying `hello-pulsar` to the `my-topic` topic:
-
-```bash
-
-$ bin/pulsar-client produce my-topic --messages "hello-pulsar"
-
-```
-
-If the message has been successfully published to the topic, you will see a confirmation like the following in the `pulsar-client` logs:
-
-```
-
-22:21:08.693 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully produced
-
-```
-
-## Stop Pulsar standalone
-
-Press `Ctrl+C` to stop a local standalone Pulsar.
-
-:::tip
-
-If the service runs as a background process using the `pulsar-daemon start standalone` command, then use the `pulsar-daemon stop standalone` command to stop the service.
-For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon).
-
-:::
-
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/tiered-storage-aliyun.md b/site2/website/versioned_docs/version-2.9.0-deprecated/tiered-storage-aliyun.md
deleted file mode 100644
index 2486b92df485b3..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/tiered-storage-aliyun.md
+++ /dev/null
@@ -1,259 +0,0 @@
----
-id: tiered-storage-aliyun
-title: Use Aliyun OSS offloader with Pulsar
-sidebar_label: "Aliyun OSS offloader"
-original_id: tiered-storage-aliyun
----
-
-This chapter guides you through every step of installing and configuring the Aliyun Object Storage Service (OSS) offloader and using it with Pulsar.
-
-## Installation
-
-Follow the steps below to install the Aliyun OSS offloader.
-
-### Prerequisite
-
-- Pulsar: 2.8.0 or later versions
-
-### Step
-
-This example uses Pulsar 2.8.0.
-
-1. Download the Pulsar tarball, see [here](https://pulsar.apache.org/docs/en/standalone/#install-pulsar-using-binary-release).
-
-2. Download and untar the Pulsar offloaders package, then copy the Pulsar offloaders as `offloaders` in the Pulsar directory, see [here](https://pulsar.apache.org/docs/en/standalone/#install-tiered-storage-offloaders-optional).
-
-   **Output**
-
-   As shown from the output, Pulsar uses [Apache jclouds](https://jclouds.apache.org) to support [AWS S3](https://aws.amazon.com/s3/), [GCS](https://cloud.google.com/storage/), [Azure](https://portal.azure.com/#home), and [Aliyun OSS](https://www.aliyun.com/product/oss) for long-term storage.
-
-   ```
-   
-   tiered-storage-file-system-2.8.0.nar
-   tiered-storage-jcloud-2.8.0.nar
-   
-   ```
-
-   :::note
-
-   * If you are running Pulsar in a bare-metal cluster, make sure that `offloaders` tarball is unzipped in every broker's Pulsar directory.
-   * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8s and DCOS), you can use the `apachepulsar/pulsar-all` image. The `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders.
-
-   :::
-
-## Configuration
-
-:::note
-
-Before offloading data from BookKeeper to Aliyun OSS, you need to configure some properties of the Aliyun OSS offload driver.
-
-:::
-
-In addition, you can configure the Aliyun OSS offloader to run automatically or trigger it manually.
-
-### Configure Aliyun OSS offloader driver
-
-You can configure the Aliyun OSS offloader driver in the configuration file `broker.conf` or `standalone.conf`.
-
-- **Required** configurations are as below.
-
-  | Required configuration | Description | Example value |
-  | --- | --- |--- |
-  | `managedLedgerOffloadDriver` | Offloader driver name, which is case-insensitive. | aliyun-oss |
-  | `offloadersDirectory` | Offloader directory | offloaders |
-  | `managedLedgerOffloadBucket` | Bucket | pulsar-topic-offload |
-  | `managedLedgerOffloadServiceEndpoint` | Endpoint | http://oss-cn-hongkong.aliyuncs.com |
-
-- **Optional** configurations are as below.
-
-  | Optional | Description | Example value |
-  | --- | --- | --- |
-  | `managedLedgerOffloadReadBufferSizeInBytes` | Size of block read | 1 MB |
-  | `managedLedgerOffloadMaxBlockSizeInBytes` | Size of block write | 64 MB |
-  | `managedLedgerMinLedgerRolloverTimeMinutes` | Minimum time between ledger rollover for a topic.<br/><br/>**Note**: it is not recommended that you set this configuration in the production environment. | 2 |
-  | `managedLedgerMaxEntriesPerLedger` | Maximum number of entries to append to a ledger before triggering a rollover.<br/><br/>**Note**: it is not recommended that you set this configuration in the production environment. | 5000 |
-
-#### Bucket (required)
-
-A bucket is a basic container that holds your data. Everything you store in Aliyun OSS must be contained in a bucket. You can use a bucket to organize your data and control access to your data, but unlike a directory or folder, you cannot nest buckets.
-
-##### Example
-
-This example names the bucket as _pulsar-topic-offload_.
-
-```conf
-
-managedLedgerOffloadBucket=pulsar-topic-offload
-
-```
-
-#### Endpoint (required)
-
-The endpoint is the region where a bucket is located.
-
-:::tip
-
-For more information about Aliyun OSS regions and endpoints, see [International website](https://www.alibabacloud.com/help/doc-detail/31837.htm) or [Chinese website](https://help.aliyun.com/document_detail/31837.html).
-
-:::
-
-
-##### Example
-
-This example sets the endpoint as _oss-us-west-1-internal_.
-
-```
-
-managedLedgerOffloadServiceEndpoint=http://oss-us-west-1-internal.aliyuncs.com
-
-```
-
-#### Authentication (required)
-
-To be able to access Aliyun OSS, you need to authenticate with Aliyun OSS.
-
-Set the environment variables `ALIYUN_OSS_ACCESS_KEY_ID` and `ALIYUN_OSS_ACCESS_KEY_SECRET` in `conf/pulsar_env.sh`.
-
-"export" is important so that the variables are made available in the environment of spawned processes.
-
-```bash
-
-export ALIYUN_OSS_ACCESS_KEY_ID=ABC123456789
-export ALIYUN_OSS_ACCESS_KEY_SECRET=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c
-
-```
-
-#### Size of block read/write
-
-You can configure the size of a request sent to or read from Aliyun OSS in the configuration file `broker.conf` or `standalone.conf`.
-
-| Configuration | Description | Default value |
-| --- | --- | --- |
-| `managedLedgerOffloadReadBufferSizeInBytes` | Block size for each individual read when reading back data from Aliyun OSS. | 1 MB |
-| `managedLedgerOffloadMaxBlockSizeInBytes` | Maximum size of a "part" sent during a multipart upload to Aliyun OSS. It **cannot** be smaller than 5 MB. | 64 MB |
-
-### Run Aliyun OSS offloader automatically
-
-Namespace policy can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic reaches the threshold, an offloading operation is triggered automatically.
-
-| Threshold value | Action |
-| --- | --- |
-| > 0 | It triggers the offloading operation if the topic storage reaches its threshold. |
-| = 0 | It causes a broker to offload data as soon as possible. |
-| < 0 | It disables automatic offloading operation. |
-
-Automatic offloading runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, the offloader does not work until the current segment is full.
-
-You can configure the threshold size using CLI tools, such as pulsar-admin.
-
-The offload configurations in `broker.conf` and `standalone.conf` are used for the namespaces that do not have namespace level offload policies. Each namespace can have its own offload policy. If you want to set an offload policy for a namespace, use the [`pulsar-admin namespaces set-offload-policies options`](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-policies-em-) command.
-
-#### Example
-
-This example sets the Aliyun OSS offloader threshold size to 10 MB using pulsar-admin.
-
-```bash
-
-bin/pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace
-
-```
-
-:::tip
-
-For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-threshold-em-).
-
-:::
-
-### Run Aliyun OSS offloader manually
-
-For individual topics, you can trigger the Aliyun OSS offloader manually using one of the following methods:
-
-- Use REST endpoint.
-
-- Use CLI tools (such as pulsar-admin).
-
-  To trigger it via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are moved to Aliyun OSS until the threshold is no longer exceeded. Older segments are moved first.
-
-#### Example
-
-- This example triggers the Aliyun OSS offloader to run manually using pulsar-admin.
-
-  ```bash
-  
-  bin/pulsar-admin topics offload --size-threshold 10M my-tenant/my-namespace/topic1
-  
-  ```
-
-  **Output**
-
-  ```bash
-  
-  Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1
-  
-  ```
-
-  :::tip
-
-  For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-em-).
-
-  :::
-
-- This example checks the Aliyun OSS offloader status using pulsar-admin.
-
-  ```bash
-  
-  bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1
-  
-  ```
-
-  **Output**
-
-  ```bash
-  
-  Offload is currently running
-  
-  ```
-
-  To wait for the Aliyun OSS offloader to complete the job, add the `-w` flag.
-
-  ```bash
-  
-  bin/pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1
-  
-  ```
-
-  **Output**
-
-  ```
-  
-  Offload was a success
-  
-  ```
-
-  If there is an error in offloading, the error is propagated to the `pulsar-admin topics offload-status` command.
-
-  ```bash
-  
-  bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1
-  
-  ```
-
-  **Output**
-
-  ```
-  
-  Error in offload
-  null
-
-  Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=
-  
-  ```
-
-  :::tip
-
-  For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-status-em-).
-
-  :::
-
diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/tiered-storage-aws.md b/site2/website/versioned_docs/version-2.9.0-deprecated/tiered-storage-aws.md
deleted file mode 100644
index 20a6382e770cc6..00000000000000
--- a/site2/website/versioned_docs/version-2.9.0-deprecated/tiered-storage-aws.md
+++ /dev/null
@@ -1,331 +0,0 @@
----
-id: tiered-storage-aws
-title: Use AWS S3 offloader with Pulsar
-sidebar_label: "AWS S3 offloader"
-original_id: tiered-storage-aws
----
-
-This chapter guides you through every step of installing and configuring the AWS S3 offloader and using it with Pulsar.
-
-## Installation
-
-Follow the steps below to install the AWS S3 offloader.
-
-### Prerequisite
-
-- Pulsar: 2.4.2 or later versions
-
-### Step
-
-This example uses Pulsar 2.5.1.
-
-1. Download the Pulsar tarball using one of the following ways:
-
-   * Download from the [Apache mirror](https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz)
-
-   * Download from the Pulsar [downloads page](https://pulsar.apache.org/download)
-
-   * Use [wget](https://www.gnu.org/software/wget):
-
-     ```shell
-     
-     wget https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz
-     
-     ```
-
-2. Download and untar the Pulsar offloaders package.
-
-   ```bash
-   
-   wget https://downloads.apache.org/pulsar/pulsar-2.5.1/apache-pulsar-offloaders-2.5.1-bin.tar.gz
-   tar xvfz apache-pulsar-offloaders-2.5.1-bin.tar.gz
-   
-   ```
-
-3. Copy the Pulsar offloaders as `offloaders` in the Pulsar directory.
-
-   ```
-   
-   mv apache-pulsar-offloaders-2.5.1/offloaders apache-pulsar-2.5.1/offloaders
-
-   ls offloaders
-   
-   ```
-
-   **Output**
-
-   As shown from the output, Pulsar uses [Apache jclouds](https://jclouds.apache.org) to support [AWS S3](https://aws.amazon.com/s3/) and [GCS](https://cloud.google.com/storage/) for long term storage.
-
-   ```
-   
-   tiered-storage-file-system-2.5.1.nar
-   tiered-storage-jcloud-2.5.1.nar
-   
-   ```
-
-   :::note
-
-   * If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's Pulsar directory.
-   * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8s and DCOS), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders.
-
-   :::
-
-## Configuration
-
-:::note
-
-Before offloading data from BookKeeper to AWS S3, you need to configure some properties of the AWS S3 offload driver.
-
-:::
-
-In addition, you can configure the AWS S3 offloader to run automatically or trigger it manually.
-
-### Configure AWS S3 offloader driver
-
-You can configure the AWS S3 offloader driver in the configuration file `broker.conf` or `standalone.conf`.
-
-- **Required** configurations are as below.
-
-  Required configuration | Description | Example value
-  |---|---|---
-  `managedLedgerOffloadDriver` | Offloader driver name, which is case-insensitive.<br/><br/>**Note**: there is a third driver type, S3, which is identical to AWS S3, though S3 requires that you specify an endpoint URL using `s3ManagedLedgerOffloadServiceEndpoint`. This is useful if using an S3 compatible data store other than AWS S3. | aws-s3
-  `offloadersDirectory` | Offloader directory | offloaders
-  `s3ManagedLedgerOffloadBucket` | Bucket | pulsar-topic-offload
-
-- **Optional** configurations are as below.
-
-  Optional | Description | Example value
-  |---|---|---
-  `s3ManagedLedgerOffloadRegion` | Bucket region<br/><br/>**Note**: before specifying a value for this parameter, you need to set the following configurations. Otherwise, you might get an error.<br/><br/>- Set [`s3ManagedLedgerOffloadServiceEndpoint`](https://docs.aws.amazon.com/general/latest/gr/s3.html).<br/><br/>Example<br/>`s3ManagedLedgerOffloadServiceEndpoint=https://s3.YOUR_REGION.amazonaws.com`<br/><br/>- Grant `GetBucketLocation` permission to a user.<br/><br/>For how to grant `GetBucketLocation` permission to a user, see [here](https://docs.aws.amazon.com/AmazonS3/latest/dev/using-with-s3-actions.html#using-with-s3-actions-related-to-buckets).| eu-west-3
-  `s3ManagedLedgerOffloadReadBufferSizeInBytes`|Size of block read|1 MB
-  `s3ManagedLedgerOffloadMaxBlockSizeInBytes`|Size of block write|64 MB
-  `managedLedgerMinLedgerRolloverTimeMinutes`|Minimum time between ledger rollover for a topic<br/><br/>**Note**: it is not recommended that you set this configuration in the production environment.|2
-  `managedLedgerMaxEntriesPerLedger`|Maximum number of entries to append to a ledger before triggering a rollover.<br/><br/>**Note**: it is not recommended that you set this configuration in the production environment.|5000
-
-#### Bucket (required)
-
-A bucket is a basic container that holds your data. Everything you store in AWS S3 must be contained in a bucket. You can use a bucket to organize your data and control access to your data, but unlike a directory or folder, you cannot nest buckets.
-
-##### Example
-
-This example names the bucket as _pulsar-topic-offload_.
-
-```conf
-
-s3ManagedLedgerOffloadBucket=pulsar-topic-offload
-
-```
-
-#### Bucket region
-
-A bucket region is a region where a bucket is located. If a bucket region is not specified, the **default** region (`US East (N. Virginia)`) is used.
-
-:::tip
-
-For more information about AWS regions and endpoints, see [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
-
-:::
-
-
-##### Example
-
-This example sets the bucket region as _europe-west-3_.
-
-```
-
-s3ManagedLedgerOffloadRegion=eu-west-3
-
-```
-
-#### Authentication (required)
-
-To be able to access AWS S3, you need to authenticate with AWS S3.
-
-Pulsar does not provide any direct methods of configuring authentication for AWS S3,
-but relies on the mechanisms supported by the [DefaultAWSCredentialsProviderChain](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html).
-
-Once you have created a set of credentials in the AWS IAM console, you can configure credentials using one of the following methods.
-
-* Use EC2 instance metadata credentials.
-
-  If you are on an AWS instance with an instance profile that provides credentials, Pulsar uses these credentials if no other mechanism is provided.
-
-* Set the environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` in `conf/pulsar_env.sh`.
-
-  "export" is important so that the variables are made available in the environment of spawned processes.
-
-  ```bash
-  
-  export AWS_ACCESS_KEY_ID=ABC123456789
-  export AWS_SECRET_ACCESS_KEY=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c
-  
-  ```
-
-* Add the Java system properties `aws.accessKeyId` and `aws.secretKey` to `PULSAR_EXTRA_OPTS` in `conf/pulsar_env.sh`.
-
-  ```bash
-  
-  PULSAR_EXTRA_OPTS="${PULSAR_EXTRA_OPTS} ${PULSAR_MEM} ${PULSAR_GC} -Daws.accessKeyId=ABC123456789 -Daws.secretKey=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c -Dio.netty.leakDetectionLevel=disabled -Dio.netty.recycler.maxCapacityPerThread=4096"
-  
-  ```
-
-* Set the access credentials in `~/.aws/credentials`.
-
-  ```conf
-  
-  [default]
-  aws_access_key_id=ABC123456789
-  aws_secret_access_key=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c
-  
-  ```
-
-* Assume an IAM role.
-
-  This example uses the `DefaultAWSCredentialsProviderChain` for assuming this role.
-
-  The broker must be rebooted for credentials specified in `pulsar_env` to take effect.
-
-  ```conf
-  
-  s3ManagedLedgerOffloadRole=<aws role arn>
-  s3ManagedLedgerOffloadRoleSessionName=pulsar-s3-offload
-  
-  ```
-
-#### Size of block read/write
-
-You can configure the size of a request sent to or read from AWS S3 in the configuration file `broker.conf` or `standalone.conf`.
-
-Configuration|Description|Default value
-|---|---|---
-`s3ManagedLedgerOffloadReadBufferSizeInBytes`|Block size for each individual read when reading back data from AWS S3.|1 MB
-`s3ManagedLedgerOffloadMaxBlockSizeInBytes`|Maximum size of a "part" sent during a multipart upload to AWS S3. It **cannot** be smaller than 5 MB.|64 MB
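-
-For example, to experiment with larger blocks, you could append overrides to the broker configuration and restart the broker. This is a sketch only; the values below are illustrative, and both properties take a byte count:
-
-```bash
-
-# Illustrative values: 2 MB read blocks and 128 MB multipart upload parts.
-echo 's3ManagedLedgerOffloadReadBufferSizeInBytes=2097152' >> conf/broker.conf
-echo 's3ManagedLedgerOffloadMaxBlockSizeInBytes=134217728' >> conf/broker.conf
-
-```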
-
-### Configure AWS S3 offloader to run automatically
-
-Namespace policy can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic reaches the threshold, an offloading operation is triggered automatically.
-
-Threshold value|Action
-|---|---
-> 0 | It triggers the offloading operation if the topic storage reaches its threshold.
-= 0|It causes a broker to offload data as soon as possible.
-< 0 |It disables automatic offloading operation.
-
-Automatic offloading runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, the offloader does not work until the current segment is full.
-
-You can configure the threshold size using CLI tools, such as pulsar-admin.
-
-The offload configurations in `broker.conf` and `standalone.conf` are used for the namespaces that do not have namespace level offload policies. Each namespace can have its own offload policy. If you want to set an offload policy for a namespace, use the [`pulsar-admin namespaces set-offload-policies options`](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-policies-em-) command.
-
-#### Example
-
-This example sets the AWS S3 offloader threshold size to 10 MB using pulsar-admin.
-
-```bash
-
-bin/pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace
-
-```
-
-:::tip
-
-For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-threshold-em-).
-
-:::
-
-### Configure AWS S3 offloader to run manually
-
-For individual topics, you can trigger the AWS S3 offloader manually using one of the following methods:
-
-- Use REST endpoint.
-
-- Use CLI tools (such as pulsar-admin).
-
-  To trigger it via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are moved to AWS S3 until the threshold is no longer exceeded. Older segments are moved first.
-
-#### Example
-
-- This example triggers the AWS S3 offloader to run manually using pulsar-admin.
-
-  ```bash
-  
-  bin/pulsar-admin topics offload --size-threshold 10M my-tenant/my-namespace/topic1
-  
-  ```
-
-  **Output**
-
-  ```bash
-  
-  Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1
-  
-  ```
-
-  :::tip
-
-  For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-em-).
-
-  :::
-
-- This example checks the AWS S3 offloader status using pulsar-admin.
-
-  ```bash
-  
-  bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1
-  
-  ```
-
-  **Output**
-
-  ```bash
-  
-  Offload is currently running
-  
-  ```
-
-  To wait for the AWS S3 offloader to complete the job, add the `-w` flag.
-
-  ```bash
-  
-  bin/pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1
-  
-  ```
-
-  **Output**
-
-  ```
-  
-  Offload was a success
-  
-  ```
-
-  If there is an error in offloading, the error is propagated to the `pulsar-admin topics offload-status` command.
- - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Error in offload - null - - Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g= - - ``` - -` - - :::tip - - For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-status-em-). - - ::: - -## Tutorial - -For the complete and step-by-step instructions on how to use the AWS S3 offloader with Pulsar, see [here](https://hub.streamnative.io/offloaders/aws-s3/2.5.1#usage). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/tiered-storage-azure.md b/site2/website/versioned_docs/version-2.9.0-deprecated/tiered-storage-azure.md deleted file mode 100644 index 5923a33147135c..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/tiered-storage-azure.md +++ /dev/null @@ -1,266 +0,0 @@ ---- -id: tiered-storage-azure -title: Use Azure BlobStore offloader with Pulsar -sidebar_label: "Azure BlobStore offloader" -original_id: tiered-storage-azure ---- - -This chapter guides you through every step of installing and configuring the Azure BlobStore offloader and using it with Pulsar. - -## Installation - -Follow the steps below to install the Azure BlobStore offloader. - -### Prerequisite - -- Pulsar: 2.6.2 or later versions - -### Step - -This example uses Pulsar 2.6.2. - -1. Download the Pulsar tarball using one of the following ways: - - * Download from the [Apache mirror](https://archive.apache.org/dist/pulsar/pulsar-2.6.2/apache-pulsar-2.6.2-bin.tar.gz) - - * Download from the Pulsar [downloads page](https://pulsar.apache.org/download) - - * Use [wget](https://www.gnu.org/software/wget): - - ```shell - - wget https://archive.apache.org/dist/pulsar/pulsar-2.6.2/apache-pulsar-2.6.2-bin.tar.gz - - ``` - -2. Download and untar the Pulsar offloaders package. - - ```bash - - wget https://downloads.apache.org/pulsar/pulsar-2.6.2/apache-pulsar-offloaders-2.6.2-bin.tar.gz - tar xvfz apache-pulsar-offloaders-2.6.2-bin.tar.gz - - ``` - -3. Copy the Pulsar offloaders as `offloaders` in the Pulsar directory. - - ``` - - mv apache-pulsar-offloaders-2.6.2/offloaders apache-pulsar-2.6.2/offloaders - - ls offloaders - - ``` - - **Output** - - As shown from the output, Pulsar uses [Apache jclouds](https://jclouds.apache.org) to support [AWS S3](https://aws.amazon.com/s3/), [GCS](https://cloud.google.com/storage/) and [Azure](https://portal.azure.com/#home) for long term storage. - - ``` - - tiered-storage-file-system-2.6.2.nar - tiered-storage-jcloud-2.6.2.nar - - ``` - - :::note - - * If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's Pulsar directory. 
* If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8s and DCOS), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders.
-
-   :::
-
-## Configuration
-
-:::note
-
-Before offloading data from BookKeeper to Azure BlobStore, you need to configure some properties of the Azure BlobStore offload driver.
-
-:::
-
-In addition, you can configure the Azure BlobStore offloader to run automatically or trigger it manually.
-
-### Configure Azure BlobStore offloader driver
-
-You can configure the Azure BlobStore offloader driver in the configuration file `broker.conf` or `standalone.conf`.
-
-- **Required** configurations are as below.
-
-  Required configuration | Description | Example value
-  |---|---|---
-  `managedLedgerOffloadDriver` | Offloader driver name | azureblob
-  `offloadersDirectory` | Offloader directory | offloaders
-  `managedLedgerOffloadBucket` | Bucket | pulsar-topic-offload
-
-- **Optional** configurations are as below.
-
-  Optional | Description | Example value
-  |---|---|---
-  `managedLedgerOffloadReadBufferSizeInBytes`|Size of block read|1 MB
-  `managedLedgerOffloadMaxBlockSizeInBytes`|Size of block write|64 MB
-  `managedLedgerMinLedgerRolloverTimeMinutes`|Minimum time between ledger rollover for a topic<br/><br/>**Note**: it is not recommended that you set this configuration in the production environment.|2
-  `managedLedgerMaxEntriesPerLedger`|Maximum number of entries to append to a ledger before triggering a rollover.<br/><br/>**Note**: it is not recommended that you set this configuration in the production environment.|5000
-
-#### Bucket (required)
-
-A bucket is a basic container that holds your data. Everything you store in Azure BlobStore must be contained in a bucket. You can use a bucket to organize your data and control access to your data, but unlike a directory or folder, you cannot nest buckets.
-
-##### Example
-
-This example names the bucket as _pulsar-topic-offload_.
-
-```conf
-
-managedLedgerOffloadBucket=pulsar-topic-offload
-
-```
-
-#### Authentication (required)
-
-To be able to access Azure BlobStore, you need to authenticate with Azure BlobStore.
-
-* Set the environment variables `AZURE_STORAGE_ACCOUNT` and `AZURE_STORAGE_ACCESS_KEY` in `conf/pulsar_env.sh`.
-
-  "export" is important so that the variables are made available in the environment of spawned processes.
-
-  ```bash
-  
-  export AZURE_STORAGE_ACCOUNT=ABC123456789
-  export AZURE_STORAGE_ACCESS_KEY=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c
-  
-  ```
-
-#### Size of block read/write
-
-You can configure the size of a request sent to or read from Azure BlobStore in the configuration file `broker.conf` or `standalone.conf`.
-
-Configuration|Description|Default value
-|---|---|---
-`managedLedgerOffloadReadBufferSizeInBytes`|Block size for each individual read when reading back data from Azure BlobStore.|1 MB
-`managedLedgerOffloadMaxBlockSizeInBytes`|Maximum size of a "part" sent during a multipart upload to Azure BlobStore. It **cannot** be smaller than 5 MB. |64 MB
-
-### Configure Azure BlobStore offloader to run automatically
-
-Namespace policy can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic reaches the threshold, an offloading operation is triggered automatically.
-
-Threshold value|Action
-|---|---
-> 0 | It triggers the offloading operation if the topic storage reaches its threshold.
-= 0|It causes a broker to offload data as soon as possible.
-< 0 |It disables automatic offloading operation.
-
-Automatic offloading runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, the offloader does not work until the current segment is full.
-
-You can configure the threshold size using CLI tools, such as pulsar-admin.
-
-The offload configurations in `broker.conf` and `standalone.conf` are used for the namespaces that do not have namespace level offload policies. Each namespace can have its own offload policy. If you want to set an offload policy for a namespace, use the [`pulsar-admin namespaces set-offload-policies options`](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-policies-em-) command.
-
-#### Example
-
-This example sets the Azure BlobStore offloader threshold size to 10 MB using pulsar-admin.
-
-```bash
-
-bin/pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace
-
-```
-
-:::tip
-
-For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-threshold-em-).
-
-:::
-
-### Configure Azure BlobStore offloader to run manually
-
-For individual topics, you can trigger the Azure BlobStore offloader manually using one of the following methods:
-
-- Use REST endpoint.
- -- Use CLI tools (such as pulsar-admin). - - To trigger it via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are moved to Azure BlobStore until the threshold is no longer exceeded. Older segments are moved first. - -#### Example - -- This example triggers the Azure BlobStore offloader to run manually using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload --size-threshold 10M my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1 - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-em-). - - ::: - -- This example checks the Azure BlobStore offloader status using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload is currently running - - ``` - - To wait for the Azure BlobStore offloader to complete the job, add the `-w` flag. - - ```bash - - bin/pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Offload was a success - - ``` - - If there is an error in offloading, the error is propagated to the `pulsar-admin topics offload-status` command. - - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Error in offload - null - - Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: - - ``` - -` - - :::tip - - For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-status-em-). - - ::: - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/tiered-storage-filesystem.md b/site2/website/versioned_docs/version-2.9.0-deprecated/tiered-storage-filesystem.md deleted file mode 100644 index 8164e68208bd7d..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/tiered-storage-filesystem.md +++ /dev/null @@ -1,317 +0,0 @@ ---- -id: tiered-storage-filesystem -title: Use filesystem offloader with Pulsar -sidebar_label: "Filesystem offloader" -original_id: tiered-storage-filesystem ---- - -This chapter guides you through every step of installing and configuring the filesystem offloader and using it with Pulsar. - -## Installation - -Follow the steps below to install the filesystem offloader. - -### Prerequisite - -- Pulsar: 2.4.2 or later versions - -- Hadoop: 3.x.x - -### Step - -This example uses Pulsar 2.5.1. - -1. Download the Pulsar tarball using one of the following ways: - - * Download from the [Apache mirror](https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz) - - * Download from the Pulsar [download page](https://pulsar.apache.org/download) - - * Use [wget](https://www.gnu.org/software/wget) - - ```shell - - wget https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz - - ``` - -2. Download and untar the Pulsar offloaders package. 
-
-   ```bash
-   
-   wget https://downloads.apache.org/pulsar/pulsar-2.5.1/apache-pulsar-offloaders-2.5.1-bin.tar.gz
-
-   tar xvfz apache-pulsar-offloaders-2.5.1-bin.tar.gz
-   
-   ```
-
-   :::note
-
-   * If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's Pulsar directory.
-   * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8S and DCOS), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders.
-
-   :::
-
-3. Copy the Pulsar offloaders as `offloaders` in the Pulsar directory.
-
-   ```
-   
-   mv apache-pulsar-offloaders-2.5.1/offloaders apache-pulsar-2.5.1/offloaders
-
-   ls offloaders
-   
-   ```
-
-   **Output**
-
-   ```
-   
-   tiered-storage-file-system-2.5.1.nar
-   tiered-storage-jcloud-2.5.1.nar
-   
-   ```
-
-## Configuration
-
-:::note
-
-Before offloading data from BookKeeper to filesystem, you need to configure some properties of the filesystem offloader driver.
-
-:::
-
-In addition, you can configure the filesystem offloader to run automatically or trigger it manually.
-
-### Configure filesystem offloader driver
-
-You can configure the filesystem offloader driver in the configuration file `broker.conf` or `standalone.conf`.
-
-- **Required** configurations are as below.
-
-  Required configuration | Description | Example value
-  |---|---|---
-  `managedLedgerOffloadDriver` | Offloader driver name, which is case-insensitive. | filesystem
-  `fileSystemURI` | Connection address | hdfs://127.0.0.1:9000
-  `fileSystemProfilePath` | Hadoop profile path | conf/filesystem_offload_core_site.xml
-
-- **Optional** configurations are as below.
-
-  Optional configuration| Description | Example value
-  |---|---|---
-  `managedLedgerMinLedgerRolloverTimeMinutes`|Minimum time between ledger rollover for a topic<br/><br/>**Note**: it is not recommended that you set this configuration in the production environment.|2
-  `managedLedgerMaxEntriesPerLedger`|Maximum number of entries to append to a ledger before triggering a rollover.<br/><br/>**Note**: it is not recommended that you set this configuration in the production environment.|5000
-
-#### Offloader driver (required)
-
-Offloader driver name, which is case-insensitive.
-
-This example sets the offloader driver name as _filesystem_.
-
-```conf
-
-managedLedgerOffloadDriver=filesystem
-
-```
-
-#### Connection address (required)
-
-Connection address is the URI to access the default Hadoop distributed file system.
-
-##### Example
-
-This example sets the connection address as _hdfs://127.0.0.1:9000_.
-
-```conf
-
-fileSystemURI=hdfs://127.0.0.1:9000
-
-```
-
-#### Hadoop profile path (required)
-
-The configuration file is stored in the Hadoop profile path. It contains various settings for Hadoop performance tuning.
-
-##### Example
-
-This example sets the Hadoop profile path as _conf/filesystem_offload_core_site.xml_.
-
-```conf
-
-fileSystemProfilePath=conf/filesystem_offload_core_site.xml
-
-```
-
-You can set the following configurations in the _filesystem_offload_core_site.xml_ file.
-
-```
-
-<configuration>
-    <property>
-        <name>fs.defaultFS</name>
-        <value></value>
-    </property>
-
-    <property>
-        <name>hadoop.tmp.dir</name>
-        <value>pulsar</value>
-    </property>
-
-    <property>
-        <name>io.file.buffer.size</name>
-        <value>4096</value>
-    </property>
-
-    <property>
-        <name>io.seqfile.compress.blocksize</name>
-        <value>1000000</value>
-    </property>
-
-    <property>
-        <name>io.seqfile.compression.type</name>
-        <value>BLOCK</value>
-    </property>
-
-    <property>
-        <name>io.map.index.interval</name>
-        <value>128</value>
-    </property>
-</configuration>
-
-```
-
-:::tip
-
-For more information about the Hadoop HDFS, see [here](https://hadoop.apache.org/docs/current/).
-
-:::
-
-### Configure filesystem offloader to run automatically
-
-Namespace policy can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic reaches the threshold, an offload operation is triggered automatically.
-
-Threshold value|Action
-|---|---
-> 0 | It triggers the offloading operation if the topic storage reaches its threshold.
-= 0|It causes a broker to offload data as soon as possible.
-< 0 |It disables automatic offloading operation.
-
-Automatic offload runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, the offloader does not work until the current segment is full.
-
-You can configure the threshold size using CLI tools, such as pulsar-admin.
-
-#### Example
-
-This example sets the filesystem offloader threshold size to 10 MB using pulsar-admin.
-
-```bash
-
-pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace
-
-```
-
-:::tip
-
-For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#set-offload-threshold).
-
-:::
-
-### Configure filesystem offloader to run manually
-
-For individual topics, you can trigger the filesystem offloader manually using one of the following methods:
-
-- Use REST endpoint.
-
-- Use CLI tools (such as pulsar-admin).
-
-To trigger it via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are offloaded to the filesystem until the threshold is no longer exceeded. Older segments are offloaded first.
-
-#### Example
-
-- This example triggers the filesystem offloader to run manually using pulsar-admin.
- - ```bash - - pulsar-admin topics offload --size-threshold 10M persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1 - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#offload). - - ::: - -- This example checks the filesystem offloader status using pulsar-admin. - - ```bash - - pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload is currently running - - ``` - - To wait for the filesystem offloader to complete the job, add the `-w` flag. - - ```bash - - pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Offload was a success - - ``` - - If there is an error in the offloading operation, the error is propagated to the `pulsar-admin topics offload-status` command. - - ```bash - - pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Error in offload - null - - Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g= - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#offload-status). - - ::: - -## Tutorial - -For the complete and step-by-step instructions on how to use the filesystem offloader with Pulsar, see [here](https://hub.streamnative.io/offloaders/filesystem/2.5.1). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/tiered-storage-gcs.md b/site2/website/versioned_docs/version-2.9.0-deprecated/tiered-storage-gcs.md deleted file mode 100644 index afb1e9a10081ce..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/tiered-storage-gcs.md +++ /dev/null @@ -1,321 +0,0 @@ ---- -id: tiered-storage-gcs -title: Use GCS offloader with Pulsar -sidebar_label: "GCS offloader" -original_id: tiered-storage-gcs ---- - -This chapter guides you through every step of installing and configuring the GCS offloader and using it with Pulsar. - -## Installation - -Follow the steps below to install the GCS offloader. - -### Prerequisite - -- Pulsar: 2.4.2 or later versions - -### Steps - -This example uses Pulsar 2.5.1. - -1. Download the Pulsar tarball in one of the following ways: - - * Download from the [Apache mirror](https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz) - - * Download from the Pulsar [download page](https://pulsar.apache.org/download) - - * Use [wget](https://www.gnu.org/software/wget) - - ```shell - - wget https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz - - ``` - -2. Download and untar the Pulsar offloaders package.
- - ```bash - - wget https://downloads.apache.org/pulsar/pulsar-2.5.1/apache-pulsar-offloaders-2.5.1-bin.tar.gz - - tar xvfz apache-pulsar-offloaders-2.5.1-bin.tar.gz - - ``` - - :::note - - * If you are running Pulsar in a bare metal cluster, make sure that the `offloaders` tarball is unzipped in every broker's Pulsar directory. - * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8S and DCOS), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. The `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - - ::: - -3. Copy the Pulsar offloaders directory to `offloaders` in the Pulsar directory. - - ``` - - mv apache-pulsar-offloaders-2.5.1/offloaders apache-pulsar-2.5.1/offloaders - - ls offloaders - - ``` - - **Output** - - As shown in the output, Pulsar uses [Apache jclouds](https://jclouds.apache.org) to support GCS and AWS S3 for long term storage. - - ``` - - tiered-storage-file-system-2.5.1.nar - tiered-storage-jcloud-2.5.1.nar - - ``` - -## Configuration - -:::note - -Before offloading data from BookKeeper to GCS, you need to configure some properties of the GCS offloader driver. - -::: - -In addition, you can configure the GCS offloader to run automatically or trigger it manually. - -### Configure GCS offloader driver - -You can configure the GCS offloader driver in the configuration file `broker.conf` or `standalone.conf`. - -- **Required** configurations are as follows. - - **Required** configuration | Description | Example value - |---|---|--- - `managedLedgerOffloadDriver`|Offloader driver name, which is case-insensitive.|google-cloud-storage - `offloadersDirectory`|Offloader directory|offloaders - `gcsManagedLedgerOffloadBucket`|Bucket|pulsar-topic-offload - `gcsManagedLedgerOffloadRegion`|Bucket region|europe-west3 - `gcsManagedLedgerOffloadServiceAccountKeyFile`|Authentication |/Users/user-name/Downloads/project-804d5e6a6f33.json - -- **Optional** configurations are as follows. - - Optional configuration|Description|Example value - |---|---|--- - `gcsManagedLedgerOffloadReadBufferSizeInBytes`|Size of block read|1 MB - `gcsManagedLedgerOffloadMaxBlockSizeInBytes`|Size of block write|64 MB - `managedLedgerMinLedgerRolloverTimeMinutes`|Minimum time between ledger rollover for a topic.|2 - `managedLedgerMaxEntriesPerLedger`|The max number of entries to append to a ledger before triggering a rollover.|5000 - -#### Bucket (required) - -A bucket is a basic container that holds your data. Everything you store in GCS **must** be contained in a bucket. You can use a bucket to organize your data and control access to your data, but unlike directories and folders, you cannot nest buckets. - -##### Example - -This example names the bucket as _pulsar-topic-offload_. - -```conf - -gcsManagedLedgerOffloadBucket=pulsar-topic-offload - -``` - -#### Bucket region (required) - -Bucket region is the region where a bucket is located. If a bucket region is not specified, the **default** region (`us multi-regional location`) is used. - -:::tip - -For more information about bucket location, see [here](https://cloud.google.com/storage/docs/bucket-locations). - -::: - -##### Example - -This example sets the bucket region as _europe-west3_. - -``` - -gcsManagedLedgerOffloadRegion=europe-west3 - -``` - -#### Authentication (required) - -To enable a broker to access GCS, you need to configure `gcsManagedLedgerOffloadServiceAccountKeyFile` in the configuration file `broker.conf`.
- -`gcsManagedLedgerOffloadServiceAccountKeyFile` is a JSON file containing the GCS credentials of a service account. - -##### Example - -To generate service account credentials or view the public credentials that you've already generated, follow these steps. - -1. Navigate to the [Service accounts page](https://console.developers.google.com/iam-admin/serviceaccounts). - -2. Select a project or create a new one. - -3. Click **Create service account**. - -4. In the **Create service account** window, type a name for the service account and select **Furnish a new private key**. - - If you want to [grant G Suite domain-wide authority](https://developers.google.com/identity/protocols/OAuth2ServiceAccount#delegatingauthority) to the service account, select **Enable G Suite Domain-wide Delegation**. - -5. Click **Create**. - - :::note - - Make sure the service account you create has permission to operate GCS. To do so, assign the **Storage Admin** permission to your service account as described [here](https://cloud.google.com/storage/docs/access-control/iam). - - ::: - -6. Set the path of the generated key file in `broker.conf`. - - ```conf - - gcsManagedLedgerOffloadServiceAccountKeyFile="/Users/user-name/Downloads/project-804d5e6a6f33.json" - - ``` - - :::tip - - - For more information about how to create `gcsManagedLedgerOffloadServiceAccountKeyFile`, see [here](https://support.google.com/googleapi/answer/6158849). - - For more information about Google Cloud IAM, see [here](https://cloud.google.com/storage/docs/access-control/iam). - - ::: - -#### Size of block read/write - -You can configure the size of a request sent to or read from GCS in the configuration file `broker.conf`. - -Configuration|Description -|---|--- -`gcsManagedLedgerOffloadReadBufferSizeInBytes`|Block size for each individual read when reading back data from GCS.

    The **default** value is 1 MB. -`gcsManagedLedgerOffloadMaxBlockSizeInBytes`|Maximum size of a "part" sent during a multipart upload to GCS.

    It **cannot** be smaller than 5 MB.

    The **default** value is 64 MB. - -### Configure GCS offloader to run automatically - -A namespace policy can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic reaches the threshold, an offload operation is triggered automatically. - -Threshold value|Action -|---|--- -> 0 | It triggers the offloading operation if the topic storage reaches its threshold. -= 0|It causes a broker to offload data as soon as possible. -< 0 |It disables automatic offloading operation. - -Automatic offloading runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, the offloader does not work until the current segment is full. - -You can configure the threshold size using CLI tools, such as pulsar-admin. - -The offload configurations in `broker.conf` and `standalone.conf` are used for the namespaces that do not have namespace level offload policies. Each namespace can have its own offload policy. If you want to set the offload policy for a namespace, use the [`pulsar-admin namespaces set-offload-policies options`](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-policies-em-) command. - -#### Example - -This example sets the GCS offloader threshold size to 10 MB using pulsar-admin. - -```bash - -pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace - -``` - -:::tip - -For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#set-offload-threshold). - -::: - -### Configure GCS offloader to run manually - -For individual topics, you can trigger the GCS offloader manually using one of the following methods: - -- Use REST endpoint. - -- Use CLI tools (such as pulsar-admin). - - To trigger the GCS offloader via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are moved to GCS until the threshold is no longer exceeded. Older segments are moved first. - -#### Example - -- This example triggers the GCS offloader to run manually using pulsar-admin with the command `pulsar-admin topics offload (topic-name) (threshold)`. - - ```bash - - pulsar-admin topics offload persistent://my-tenant/my-namespace/topic1 10M - - ``` - - **Output** - - ```bash - - Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1 - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#offload). - - ::: - -- This example checks the GCS offloader status using pulsar-admin with the command `pulsar-admin topics offload-status options`. - - ```bash - - pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload is currently running - - ``` - - To wait for the GCS offloader to complete the job, add the `-w` flag.
- - ```bash - - pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Offload was a success - - ``` - - If there is an error in the offloading operation, the error is propagated to the `pulsar-admin topics offload-status` command. - - ```bash - - pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Error in offload - null - - Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g= - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#offload-status). - - ::: - -## Tutorial - -For the complete and step-by-step instructions on how to use the GCS offloader with Pulsar, see [here](https://hub.streamnative.io/offloaders/gcs/2.5.1#usage). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/tiered-storage-overview.md b/site2/website/versioned_docs/version-2.9.0-deprecated/tiered-storage-overview.md deleted file mode 100644 index c635034f463b46..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/tiered-storage-overview.md +++ /dev/null @@ -1,52 +0,0 @@ ---- -id: tiered-storage-overview -title: Overview of tiered storage -sidebar_label: "Overview" -original_id: tiered-storage-overview ---- - -Pulsar's **Tiered Storage** feature allows older backlog data to be moved from BookKeeper to cheaper, long-term storage, while still allowing clients to access the backlog as if nothing has changed. - -* Tiered storage uses [Apache jclouds](https://jclouds.apache.org) to support [Amazon S3](https://aws.amazon.com/s3/) and [GCS (Google Cloud Storage)](https://cloud.google.com/storage/) for long term storage. - - With jclouds, it is easy to add support for more [cloud storage providers](https://jclouds.apache.org/reference/providers/#blobstore-providers) in the future. - - :::tip - - - For more information about how to use the AWS S3 offloader with Pulsar, see [here](tiered-storage-aws.md). - - - For more information about how to use the GCS offloader with Pulsar, see [here](tiered-storage-gcs.md). - - ::: - -* Tiered storage uses [Apache Hadoop](http://hadoop.apache.org/) to support filesystems for long term storage. - - With Hadoop, it is easy to add support for more filesystems in the future. - - :::tip - - For more information about how to use the filesystem offloader with Pulsar, see [here](tiered-storage-filesystem.md). - - ::: - -## When to use tiered storage? - -Tiered storage should be used when you have a topic for which you want to keep a very long backlog. - -For example, if you have a topic containing user actions which you use to train your recommendation systems, you may want to keep that data for a long time, so that if you change your recommendation algorithm, you can rerun it against your full user history. - -## How does tiered storage work?
- -A topic in Pulsar is backed by a **log**, known as a **managed ledger**. This log is composed of an ordered list of segments. Pulsar only writes to the final segment of the log. All previous segments are sealed. The data within the segment is immutable. This is known as a **segment-oriented architecture**. - -![Tiered storage](/assets/pulsar-tiered-storage.png "Tiered Storage") - -The tiered storage offloading mechanism takes advantage of this segment-oriented architecture. When offloading is requested, the segments of the log are copied one-by-one to tiered storage. All segments of the log, apart from the current segment, can be offloaded to tiered storage. - -Data written to BookKeeper is replicated to 3 physical machines by default. However, once a segment is sealed in BookKeeper, it becomes immutable and can be copied to long term storage. Long term storage can achieve cost savings by using mechanisms such as [Reed-Solomon error correction](https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction) to require fewer physical copies of data. - -Before offloading ledgers to long term storage, you need to configure buckets, credentials, and other properties for the cloud storage service. Additionally, Pulsar uses multi-part objects to upload the segment data and brokers may crash while uploading the data. It is recommended that you add a life cycle rule for your bucket to expire incomplete multipart uploads after a day or two to avoid getting charged for incomplete uploads. Moreover, you can trigger the offloading operation manually (via REST API or CLI) or automatically (via CLI). - -After offloading ledgers to long term storage, you can still query data in the offloaded ledgers with Pulsar SQL. - -For more information about tiered storage for Pulsar topics, see [here](https://github.com/apache/pulsar/wiki/PIP-17:-Tiered-storage-for-Pulsar-topics). diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/transaction-api.md b/site2/website/versioned_docs/version-2.9.0-deprecated/transaction-api.md deleted file mode 100644 index ecbd0da12c786d..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/transaction-api.md +++ /dev/null @@ -1,172 +0,0 @@ ---- -id: transactions-api -title: Transactions API -sidebar_label: "Transactions API" -original_id: transactions-api ---- - -All messages in a transaction are available only to consumers after the transaction has been committed. If a transaction has been aborted, all the writes and acknowledgments in this transaction roll back. - -## Prerequisites -1. To enable transactions in Pulsar, you need to configure the following parameter in the `broker.conf` file. - -``` - -transactionCoordinatorEnabled=true - -``` - -2. Initialize transaction coordinator metadata, so that the transaction coordinators can leverage the advantages of partitioned topics, such as load balancing. - -``` - -bin/pulsar initialize-transaction-coordinator-metadata -cs 127.0.0.1:2181 -c standalone - -``` - -After initializing transaction coordinator metadata, you can use the transactions API. The following APIs are available. - -## Initialize Pulsar client - -You can enable transactions for the Pulsar client and initialize the transaction coordinator client. - -``` - -PulsarClient pulsarClient = PulsarClient.builder() - .serviceUrl("pulsar://localhost:6650") - .enableTransaction(true) - .build(); - -``` - -## Start transactions -You can start a transaction in the following way.
- -``` - -Transaction txn = pulsarClient - .newTransaction() - .withTransactionTimeout(5, TimeUnit.MINUTES) - .build() - .get(); - -``` - -## Produce transaction messages - -A transaction parameter is required when producing new transaction messages. The semantic of the transaction messages in Pulsar is `read-committed`, so the consumer cannot receive the ongoing transaction messages before the transaction is committed. - -``` - -producer.newMessage(txn).value("Hello Pulsar Transaction".getBytes()).sendAsync(); - -``` - -## Acknowledge the messages with the transaction - -The transaction acknowledgement requires a transaction parameter. The transaction acknowledgement marks the messages state to pending-ack state. When the transaction is committed, the pending-ack state becomes ack state. If the transaction is aborted, the pending-ack state becomes unack state. - -``` - -Message message = consumer.receive(); -consumer.acknowledgeAsync(message.getMessageId(), txn); - -``` - -## Commit transactions - -When the transaction is committed, consumers receive the transaction messages and the pending-ack state becomes ack state. - -``` - -txn.commit().get(); - -``` - -## Abort transaction - -When the transaction is aborted, the transaction acknowledgement is canceled and the pending-ack messages are redelivered. - -``` - -txn.abort().get(); - -``` - -### Example -The following example shows how messages are processed in transaction. - -``` - -PulsarClient pulsarClient = PulsarClient.builder() - .serviceUrl(getPulsarServiceList().get(0).getBrokerServiceUrl()) - .statsInterval(0, TimeUnit.SECONDS) - .enableTransaction(true) - .build(); - -String sourceTopic = "public/default/source-topic"; -String sinkTopic = "public/default/sink-topic"; - -Producer sourceProducer = pulsarClient - .newProducer(Schema.STRING) - .topic(sourceTopic) - .create(); -sourceProducer.newMessage().value("hello pulsar transaction").sendAsync(); - -Consumer sourceConsumer = pulsarClient - .newConsumer(Schema.STRING) - .topic(sourceTopic) - .subscriptionName("test") - .subscriptionType(SubscriptionType.Shared) - .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest) - .subscribe(); - -Producer sinkProducer = pulsarClient - .newProducer(Schema.STRING) - .topic(sinkTopic) - .sendTimeout(0, TimeUnit.MILLISECONDS) - .create(); - -Transaction txn = pulsarClient - .newTransaction() - .withTransactionTimeout(5, TimeUnit.MINUTES) - .build() - .get(); - -// source message acknowledgement and sink message produce belong to one transaction, -// they are combined into an atomic operation. -Message message = sourceConsumer.receive(); -sourceConsumer.acknowledgeAsync(message.getMessageId(), txn); -sinkProducer.newMessage(txn).value("sink data").sendAsync(); - -txn.commit().get(); - -``` - -## Enable batch messages in transactions - -To enable batch messages in transactions, you need to enable the batch index acknowledgement feature. The transaction acks check whether the batch index acknowledgement conflicts. - -To enable batch index acknowledgement, you need to set `acknowledgmentAtBatchIndexLevelEnabled` to `true` in the `broker.conf` or `standalone.conf` file. - -``` - -acknowledgmentAtBatchIndexLevelEnabled=true - -``` - -And then you need to call the `enableBatchIndexAcknowledgment(true)` method in the consumer builder. 
- -``` - -Consumer sinkConsumer = pulsarClient - .newConsumer() - .topic(transferTopic) - .subscriptionName("sink-topic") - .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest) - .subscriptionType(SubscriptionType.Shared) - .enableBatchIndexAcknowledgment(true) // enable batch index acknowledgement - .subscribe(); - -``` - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/transaction-guarantee.md b/site2/website/versioned_docs/version-2.9.0-deprecated/transaction-guarantee.md deleted file mode 100644 index 9db2d254e159f6..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/transaction-guarantee.md +++ /dev/null @@ -1,17 +0,0 @@ ---- -id: transactions-guarantee -title: Transactions Guarantee -sidebar_label: "Transactions Guarantee" -original_id: transactions-guarantee ---- - -Pulsar transactions support the following guarantee. - -## Atomic multi-partition writes and multi-subscription acknowledges -Transactions enable atomic writes to multiple topics and partitions. A batch of messages in a transaction can be received from, produced to, and acknowledged by many partitions. All the operations involved in a transaction succeed or fail as a single unit. - -## Read transactional message -All the messages in a transaction are available only for consumers until the transaction is committed. - -## Acknowledge transactional message -A message is acknowledged successfully only once by a consumer under the subscription when acknowledging the message with the transaction ID. \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/txn-how.md b/site2/website/versioned_docs/version-2.9.0-deprecated/txn-how.md deleted file mode 100644 index add072448aeb34..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/txn-how.md +++ /dev/null @@ -1,151 +0,0 @@ ---- -id: txn-how -title: How transactions work? -sidebar_label: "How transactions work?" -original_id: txn-how ---- - -This section describes transaction components and how the components work together. For the complete design details, see [PIP-31: Transactional Streaming](https://docs.google.com/document/d/145VYp09JKTw9jAT-7yNyFU255FptB2_B2Fye100ZXDI/edit#heading=h.bm5ainqxosrx). - -## Key concept - -It is important to know the following key concepts, which is a prerequisite for understanding how transactions work. - -### Transaction coordinator - -The transaction coordinator (TC) is a module running inside a Pulsar broker. - -* It maintains the entire life cycle of transactions and prevents a transaction from getting into an incorrect status. - -* It handles transaction timeout, and ensures that the transaction is aborted after a transaction timeout. - -### Transaction log - -All the transaction metadata persists in the transaction log. The transaction log is backed by a Pulsar topic. If the transaction coordinator crashes, it can restore the transaction metadata from the transaction log. - -The transaction log stores the transaction status rather than actual messages in the transaction (the actual messages are stored in the actual topic partitions). - -### Transaction buffer - -Messages produced to a topic partition within a transaction are stored in the transaction buffer (TB) of that topic partition. The messages in the transaction buffer are not visible to consumers until the transactions are committed. The messages in the transaction buffer are discarded when the transactions are aborted. 
- -The transaction buffer stores all ongoing and aborted transactions in memory. All messages are sent to the actual partitioned Pulsar topics. After transactions are committed, the messages in the transaction buffer are materialized (visible) to consumers. When the transactions are aborted, the messages in the transaction buffer are discarded. - -### Transaction ID - -Transaction ID (TxnID) identifies a unique transaction in Pulsar. The transaction ID is 128 bits long. The highest 16 bits are reserved for the ID of the transaction coordinator, and the remaining bits are used for monotonically increasing numbers in each transaction coordinator. Because the TxnID embeds the coordinator ID, it is easy to locate the responsible coordinator when a transaction crashes. - -### Pending acknowledge state - -The pending acknowledge state maintains message acknowledgments within a transaction before a transaction completes. If a message is in the pending acknowledge state, the message cannot be acknowledged by other transactions until the message is removed from the pending acknowledge state. - -The pending acknowledge state is persisted to the pending acknowledge log (cursor ledger). A new broker can restore the state from the pending acknowledge log to ensure the acknowledgement is not lost. - -## Data flow - -At a high level, the data flow can be split into several steps: - -1. Begin a transaction. - -2. Publish messages with a transaction. - -3. Acknowledge messages with a transaction. - -4. End a transaction. - -To help you debug or tune transactions for better performance, review the following diagrams and descriptions. - -### 1. Begin a transaction - -Before transactions were introduced in Pulsar, a producer simply sent messages to brokers, where they were stored in data logs. - -![](/assets/txn-3.png) - -Let’s walk through the steps for _beginning a transaction_. - -| Step | Description | -| --- | --- | -| 1.1 | The first step is that the Pulsar client finds the transaction coordinator. | -| 1.2 | The transaction coordinator allocates a transaction ID for the transaction. In the transaction log, the transaction is logged with its transaction ID and status (OPEN), which ensures the transaction status is persisted regardless of transaction coordinator crashes. | -| 1.3 | The transaction log sends the result of persisting the transaction ID to the transaction coordinator. | -| 1.4 | After the transaction status entry is logged, the transaction coordinator brings the transaction ID back to the Pulsar client. | - -### 2. Publish messages with a transaction - -In this stage, the Pulsar client enters a transaction loop, repeating the `consume-process-produce` operation for all the messages that comprise the transaction. This is a long phase and is potentially composed of multiple produce and acknowledgement requests. - -![](/assets/txn-4.png) - -Let’s walk through the steps for _publishing messages with a transaction_. - -| Step | Description | -| --- | --- | -| 2.1.1 | Before the Pulsar client produces messages to a new topic partition, it sends a request to the transaction coordinator to add the partition to the transaction. | -| 2.1.2 | The transaction coordinator logs the partition changes of the transaction into the transaction log for durability, which ensures the transaction coordinator knows all the partitions that a transaction is handling. The transaction coordinator can commit or abort changes on each partition at the end-partition phase.
| -| 2.1.3 | The transaction log sends the result of logging the new partition (used for producing messages) to the transaction coordinator. | -| 2.1.4 | The transaction coordinator sends the result of adding a new produced partition to the transaction. | -| 2.2.1 | The Pulsar client starts producing messages to partitions. The flow of this part is the same as the normal flow of producing messages except that the batch of messages produced by a transaction contains transaction IDs. | -| 2.2.2 | The broker writes messages to a partition. | - -### 3. Acknowledge messages with a transaction - -In this phase, the Pulsar client sends a request to the transaction coordinator and a new subscription is acknowledged as a part of a transaction. - -![](/assets/txn-5.png) - -Let’s walk through the steps for _acknowledging messages with a transaction_. - -| Step | Description | -| --- | --- | -| 3.1.1 | The Pulsar client sends a request to add an acknowledged subscription to the transaction coordinator. | -| 3.1.2 | The transaction coordinator logs the addition of subscription, which ensures that it knows all subscriptions handled by a transaction and can commit or abort changes on each subscription at the end phase. | -| 3.1.3 | The transaction log sends the result of logging the new partition (used for acknowledging messages) to the transaction coordinator. | -| 3.1.4 | The transaction coordinator sends the result of adding the new acknowledged partition to the transaction. | -| 3.2 | The Pulsar client acknowledges messages on the subscription. The flow of this part is the same as the normal flow of acknowledging messages except that the acknowledged request carries a transaction ID. | -| 3.3 | The broker receiving the acknowledgement request checks if the acknowledgment belongs to a transaction or not. | - -### 4. End a transaction - -At the end of a transaction, the Pulsar client decides to commit or abort the transaction. The transaction can be aborted when a conflict is detected on acknowledging messages. - -#### 4.1 End transaction request - -When the Pulsar client finishes a transaction, it issues an end transaction request. - -![](/assets/txn-6.png) - -Let’s walk through the steps for _ending the transaction_. - -| Step | Description | -| --- | --- | -| 4.1.1 | The Pulsar client issues an end transaction request (with a field indicating whether the transaction is to be committed or aborted) to the transaction coordinator. | -| 4.1.2 | The transaction coordinator writes a COMMITTING or ABORTING message to its transaction log. | -| 4.1.3 | The transaction log sends the result of logging the committing or aborting status. | - -#### 4.2 Finalize a transaction - -The transaction coordinator starts the process of committing or aborting messages to all the partitions involved in this transaction. - -![](/assets/txn-7.png) - -Let’s walk through the steps for _finalizing a transaction_. - -| Step | Description | -| --- | --- | -| 4.2.1 | The transaction coordinator commits transactions on subscriptions and commits transactions on partitions at the same time. | -| 4.2.2 | The broker (produce) writes produced committed markers to the actual partitions. At the same time, the broker (ack) writes acked committed marks to the subscription pending ack partitions. | -| 4.2.3 | The data log sends the result of writing produced committed marks to the broker. At the same time, pending ack data log sends the result of writing acked committed marks to the broker. The cursor moves to the next position. 
| - -#### 4.3 Mark a transaction as COMMITTED or ABORTED - -The transaction coordinator writes the final transaction status to the transaction log to complete the transaction. - -![](/assets/txn-8.png) - -Let’s walk through the steps for _marking a transaction as COMMITTED or ABORTED_. - -| Step | Description | -| --- | --- | -| 4.3.1 | After all produced messages and acknowledgements to all partitions involved in this transaction have been successfully committed or aborted, the transaction coordinator writes the final COMMITTED or ABORTED transaction status messages to its transaction log, indicating that the transaction is complete. All the messages associated with the transaction in its transaction log can be safely removed. | -| 4.3.2 | The transaction log sends the result of the committed transaction to the transaction coordinator. | -| 4.3.3 | The transaction coordinator sends the result of the committed transaction to the Pulsar client. | diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/txn-monitor.md b/site2/website/versioned_docs/version-2.9.0-deprecated/txn-monitor.md deleted file mode 100644 index 5b50953772d092..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/txn-monitor.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -id: txn-monitor -title: How to monitor transactions? -sidebar_label: "How to monitor transactions?" -original_id: txn-monitor ---- - -You can monitor the status of transactions in Prometheus and Grafana using the [transaction metrics](https://pulsar.apache.org/docs/en/next/reference-metrics/#pulsar-transaction). - -For how to configure Prometheus and Grafana, see [here](https://pulsar.apache.org/docs/en/next/deploy-monitoring). diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/txn-use.md b/site2/website/versioned_docs/version-2.9.0-deprecated/txn-use.md deleted file mode 100644 index de0e4a92f1b27e..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/txn-use.md +++ /dev/null @@ -1,105 +0,0 @@ ---- -id: txn-use -title: How to use transactions? -sidebar_label: "How to use transactions?" -original_id: txn-use ---- - -## Transaction API - -The transaction feature is primarily a server-side and protocol-level feature. You can use the transaction feature via the [transaction API](https://pulsar.apache.org/api/admin/), which is available in **Pulsar 2.8.0 or later**. - -To use the transaction API, you do not need any additional settings in the Pulsar client. **By default**, transactions are **disabled**. - -Currently, the transaction API is only available for **Java** clients. Support for other language clients will be added in future releases. - -## Quick start - -This section provides an example of how to use the transaction API to send and receive messages in a Java client. - -1. Start Pulsar 2.8.0 or later. - -2. Enable transactions. - - Change the configuration in the `broker.conf` file. - - ``` - - transactionCoordinatorEnabled=true - - ``` - - If you want to enable batch messages in transactions, follow the steps below. - - Set `acknowledgmentAtBatchIndexLevelEnabled` to `true` in the `broker.conf` or `standalone.conf` file. - - ``` - - acknowledgmentAtBatchIndexLevelEnabled=true - - ``` - -3. Initialize transaction coordinator metadata. - - The transaction coordinator can leverage the advantages of partitioned topics (such as load balancing).
- - **Input** - - ``` - - bin/pulsar initialize-transaction-coordinator-metadata -cs 127.0.0.1:2181 -c standalone - - ``` - - **Output** - - ``` - - Transaction coordinator metadata setup success - - ``` - -4. Initialize a Pulsar client. - - ``` - - PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://localhost:6650") - .enableTransaction(true) - .build(); - - ``` - -Now you can start using the transaction API to send and receive messages. Below is an example of a `consume-process-produce` application written in Java. - -![](/assets/txn-9.png) - -Let’s walk through this example step by step. - -| Step | Description | -| --- | --- | -| 1. Start a transaction. | The application opens a new transaction by calling PulsarClient.newTransaction. It specifies the transaction timeout as 1 minute. If the transaction is not committed within 1 minute, the transaction is automatically aborted. | -| 2. Receive messages from topics. | The application creates two normal consumers to receive messages from topics input-topic-1 and input-topic-2, respectively. | -| 3. Publish messages to topics with the transaction. | The application creates two producers to produce the resulting messages to the output topics _output-topic-1_ and _output-topic-2_, respectively. The application applies the processing logic and generates two output messages. The application sends those two output messages as part of the transaction opened in the first step via Producer.newMessage(Transaction). | -| 4. Acknowledge the messages with the transaction. | In the same transaction, the application acknowledges the two input messages. | -| 5. Commit the transaction. | The application commits the transaction by calling Transaction.commit() on the open transaction. The commit operation ensures the two input messages are marked as acknowledged and the two output messages are written successfully to the output topics. | - -[1] Example of enabling batch message acks in transactions in the consumer builder. - -``` - -Consumer sinkConsumer = pulsarClient - .newConsumer() - .topic(transferTopic) - .subscriptionName("sink-topic") - .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest) - .subscriptionType(SubscriptionType.Shared) - .enableBatchIndexAcknowledgment(true) // enable batch index acknowledgement - .subscribe(); - -``` - diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/txn-what.md b/site2/website/versioned_docs/version-2.9.0-deprecated/txn-what.md deleted file mode 100644 index 844f19a700f8f0..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/txn-what.md +++ /dev/null @@ -1,60 +0,0 @@ ---- -id: txn-what -title: What are transactions? -sidebar_label: "What are transactions?" -original_id: txn-what ---- - -Transactions strengthen the message delivery semantics of Apache Pulsar and [processing guarantees of Pulsar Functions](https://pulsar.apache.org/docs/en/next/functions-overview/#processing-guarantees). The Pulsar Transaction API supports atomic writes and acknowledgments across multiple topics. - -Transactions allow: - -- A producer to send a batch of messages to multiple topics where all messages in the batch are eventually visible to any consumer, or none are ever visible to consumers. - -- End-to-end exactly-once semantics (execute a `consume-process-produce` operation exactly once). - -## Transaction semantics - -Pulsar transactions have the following semantics: - -* All operations within a transaction are committed as a single unit.
- - * Either all messages are committed, or none of them are. - - * Each message is written or processed exactly once, without data loss or duplicates (even in the event of failures). - - * If a transaction is aborted, all the writes and acknowledgments in this transaction roll back. - -* A group of messages in a transaction can be received from, produced to, and acknowledged by multiple partitions. - - * Consumers are only allowed to read committed (acked) messages. In other words, the broker does not deliver transactional messages which are part of an open transaction or messages which are part of an aborted transaction. - - * Message writes across multiple partitions are atomic. - - * Message acks across multiple subscriptions are atomic. A message is acked successfully only once by a consumer under the subscription when acknowledging the message with the transaction ID. - -## Transactions and stream processing - -Stream processing on Pulsar is a `consume-process-produce` operation on Pulsar topics: - -* `Consume`: a source operator that runs a Pulsar consumer reads messages from one or multiple Pulsar topics. - -* `Process`: a processing operator transforms the messages. - -* `Produce`: a sink operator that runs a Pulsar producer writes the resulting messages to one or multiple Pulsar topics. - -![](/assets/txn-2.png) - -Pulsar transactions support end-to-end exactly-once stream processing, which means messages are not lost from a source operator and messages are not duplicated to a sink operator. - -## Use case - -Prior to Pulsar 2.8.0, there was no easy way to build stream processing applications with Pulsar to achieve exactly-once processing guarantees. With the transaction introduced in Pulsar 2.8.0, the following services support exactly-once semantics: - -* [Pulsar Flink connector](https://flink.apache.org/2021/01/07/pulsar-flink-connector-270.html) - - Prior to Pulsar 2.8.0, if you wanted to build stream applications using Pulsar and Flink, the Pulsar Flink connector only supported an exactly-once source connector and an at-least-once sink connector. This means the highest end-to-end processing guarantee was at-least-once, so streaming applications could produce duplicated messages to the resulting topics in Pulsar. - - With the transaction introduced in Pulsar 2.8.0, the Pulsar Flink sink connector can support exactly-once semantics by implementing the designated `TwoPhaseCommitSinkFunction` and hooking up the Flink sink message lifecycle with the Pulsar transaction API. - -* Support for Pulsar Functions and other connectors will be added in future releases. diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/txn-why.md b/site2/website/versioned_docs/version-2.9.0-deprecated/txn-why.md deleted file mode 100644 index 1ed8769977654e..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/txn-why.md +++ /dev/null @@ -1,45 +0,0 @@ ---- -id: txn-why -title: Why transactions? -sidebar_label: "Why transactions?" -original_id: txn-why ---- - -Pulsar transactions (txn) enable event streaming applications to consume, process, and produce messages in one atomic operation. The reasons for developing this feature are summarized below. - -## Demand for stream processing - -The demand for stream processing applications with stronger processing guarantees has grown along with the rise of stream processing.
For example, in the financial industry, financial institutions use stream processing engines to process debits and credits for users. This type of use case requires that every message is processed exactly once, without exception. - -In other words, if a stream processing application consumes message A and produces the result as a message B (B = f(A)), then the exactly-once processing guarantee means that A is marked as consumed if and only if B is successfully produced, and vice versa. - -![](/assets/txn-1.png) - -The Pulsar transactions API strengthens the message delivery semantics and the processing guarantees for stream processing. It enables stream processing applications to consume, process, and produce messages in one atomic operation. That means a batch of messages in a transaction can be received from, produced to, and acknowledged by many topic partitions. All the operations involved in a transaction succeed or fail as a single unit. - -## Limitation of idempotent producer - -Avoiding data loss or duplication can be achieved by using the Pulsar idempotent producer, but it does not provide guarantees for writes across multiple partitions. - -In Pulsar, the highest level of message delivery guarantee is using an [idempotent producer](https://pulsar.apache.org/docs/en/next/concepts-messaging/#producer-idempotency) with exactly-once semantics on a single partition, that is, each message is persisted exactly once without data loss or duplication. However, there are some limitations in this solution: - -- Due to the monotonic increasing sequence ID, this solution only works on a single partition and within a single producer session (that is, for producing one message), so there is no atomicity when producing multiple messages to one or multiple partitions. - - In this case, if there are some failures (for example, client / broker / bookie crashes, network failure, and more) in the process of producing and receiving messages, messages are re-processed and re-delivered, which may cause data loss or data duplication: - - - For the producer: if the producer retries sending messages, some messages are persisted multiple times; if the producer does not retry sending messages, some messages are persisted once and other messages are lost. - - - For the consumer: since the consumer does not know whether the broker has received messages or not, the consumer may not retry sending acks, which causes it to receive duplicate messages. - -- Similarly, Pulsar Functions only guarantee exactly-once semantics for an idempotent function on a single event; they cannot guarantee atomicity when processing multiple events or producing multiple results exactly once. - - For example, if a function accepts multiple events and produces one result (for example, a window function), the function may fail between producing the result and acknowledging the incoming messages, or even between acknowledging individual events, which causes all (or some) incoming messages to be re-delivered and reprocessed, and a new result is generated. - - However, many scenarios need atomic guarantees across multiple partitions and sessions. - -- Consumers need to rely on more mechanisms to acknowledge (ack) messages once. - - For example, consumers are required to store the MessageID along with its acked state. After the topic is unloaded, the subscription can recover the acked state of this MessageID in memory when the topic is loaded again.
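- -To make this failure window concrete, the following minimal sketch (plain Java client API without transactions; the `process` helper and variable names are hypothetical) shows where a crash causes duplicates: - - ``` - - Message message = consumer.receive(); - producer.send(process(message.getValue())); // produce the result - // A crash here, after the result is produced but before the ack below, - // causes the input message to be redelivered, so the result is produced twice. - consumer.acknowledge(message); - - ``` - -A transaction closes this window by making the produce and the acknowledgment succeed or fail as a single unit.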
\ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.0-deprecated/window-functions-context.md b/site2/website/versioned_docs/version-2.9.0-deprecated/window-functions-context.md deleted file mode 100644 index f80fea57989ef0..00000000000000 --- a/site2/website/versioned_docs/version-2.9.0-deprecated/window-functions-context.md +++ /dev/null @@ -1,581 +0,0 @@ ---- -id: window-functions-context -title: Window Functions Context -sidebar_label: "Window Functions: Context" -original_id: window-functions-context ---- - -Java SDK provides access to a **window context object** that can be used by a window function. This context object provides a wide variety of information and functionality for Pulsar window functions as below. - -- [Spec](#spec) - - * Names of all input topics and the output topic associated with the function. - * Tenant and namespace associated with the function. - * Pulsar window function name, ID, and version. - * ID of the Pulsar function instance running the window function. - * Number of instances that invoke the window function. - * Built-in type or custom class name of the output schema. - -- [Logger](#logger) - - * Logger object used by the window function, which can be used to create window function log messages. - -- [User config](#user-config) - - * Access to arbitrary user configuration values. - -- [Routing](#routing) - - * Routing is supported in Pulsar window functions. Pulsar window functions send messages to arbitrary topics as per the `publish` interface. - -- [Metrics](#metrics) - - * Interface for recording metrics. - -- [State storage](#state-storage) - - * Interface for storing and retrieving state in [state storage](#state-storage). - -## Spec - -Spec contains the basic information of a function. - -### Get input topics - -The `getInputTopics` method gets the **name list** of all input topics. - -This example demonstrates how to get the name list of all input topics in a Java window function. - -```java - -public class GetInputTopicsWindowFunction implements WindowFunction { - @Override - public Void process(Collection> inputs, WindowContext context) throws Exception { - Collection inputTopics = context.getInputTopics(); - System.out.println(inputTopics); - - return null; - } - -} - -``` - -### Get output topic - -The `getOutputTopic` method gets the **name of a topic** to which the message is sent. - -This example demonstrates how to get the name of an output topic in a Java window function. - -```java - -public class GetOutputTopicWindowFunction implements WindowFunction { - @Override - public Void process(Collection> inputs, WindowContext context) throws Exception { - String outputTopic = context.getOutputTopic(); - System.out.println(outputTopic); - - return null; - } -} - -``` - -### Get tenant - -The `getTenant` method gets the tenant name associated with the window function. - -This example demonstrates how to get the tenant name in a Java window function. - -```java - -public class GetTenantWindowFunction implements WindowFunction { - @Override - public Void process(Collection> inputs, WindowContext context) throws Exception { - String tenant = context.getTenant(); - System.out.println(tenant); - - return null; - } - -} - -``` - -### Get namespace - -The `getNamespace` method gets the namespace associated with the window function. - -This example demonstrates how to get the namespace in a Java window function. 
- - ```java - -public class GetNamespaceWindowFunction implements WindowFunction<String, Void> { - @Override - public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception { - String ns = context.getNamespace(); - System.out.println(ns); - - return null; - } - -} - -``` - -### Get function name - -The `getFunctionName` method gets the window function name. - -This example demonstrates how to get the function name in a Java window function. - -```java - -public class GetNameOfWindowFunction implements WindowFunction<String, Void> { - @Override - public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception { - String functionName = context.getFunctionName(); - System.out.println(functionName); - - return null; - } - -} - -``` - -### Get function ID - -The `getFunctionId` method gets the window function ID. - -This example demonstrates how to get the function ID in a Java window function. - -```java - -public class GetFunctionIDWindowFunction implements WindowFunction<String, Void> { - @Override - public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception { - String functionID = context.getFunctionId(); - System.out.println(functionID); - - return null; - } - -} - -``` - -### Get function version - -The `getFunctionVersion` method gets the window function version. - -This example demonstrates how to get the function version of a Java window function. - -```java - -public class GetVersionOfWindowFunction implements WindowFunction<String, Void> { - @Override - public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception { - String functionVersion = context.getFunctionVersion(); - System.out.println(functionVersion); - - return null; - } - -} - -``` - -### Get instance ID - -The `getInstanceId` method gets the instance ID of a window function. - -This example demonstrates how to get the instance ID in a Java window function. - -```java - -public class GetInstanceIDWindowFunction implements WindowFunction<String, Void> { - @Override - public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception { - int instanceId = context.getInstanceId(); - System.out.println(instanceId); - - return null; - } - -} - -``` - -### Get num instances - -The `getNumInstances` method gets the number of instances that invoke the window function. - -This example demonstrates how to get the number of instances in a Java window function. - -```java - -public class GetNumInstancesWindowFunction implements WindowFunction<String, Void> { - @Override - public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception { - int numInstances = context.getNumInstances(); - System.out.println(numInstances); - - return null; - } - -} - -``` - -### Get output schema type - -The `getOutputSchemaType` method gets the built-in type or custom class name of the output schema. - -This example demonstrates how to get the output schema type of a Java window function. - -```java - -public class GetOutputSchemaTypeWindowFunction implements WindowFunction<String, Void> { - - @Override - public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception { - String schemaType = context.getOutputSchemaType(); - System.out.println(schemaType); - - return null; - } -} - -``` - -## Logger - -Pulsar window functions using the Java SDK have access to an [SLF4j](https://www.slf4j.org/) [`Logger`](https://www.slf4j.org/api/org/apache/log4j/Logger.html) object that can be used to produce logs at the chosen log level.
- -This example logs the incoming records at the `INFO` level in a Java window function. - -```java - -import java.util.Collection; -import org.apache.pulsar.functions.api.Record; -import org.apache.pulsar.functions.api.WindowContext; -import org.apache.pulsar.functions.api.WindowFunction; -import org.slf4j.Logger; - -public class LoggingWindowFunction implements WindowFunction<String, Void> { - @Override - public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception { - Logger log = context.getLogger(); - for (Record<String> record : inputs) { - log.info(record + "-window-log"); - } - return null; - } - -} - -``` - -If you need your function to produce logs, specify a log topic when creating or running the function. - -```bash - -bin/pulsar-admin functions create \ - --jar my-functions.jar \ - --classname my.package.LoggingFunction \ - --log-topic persistent://public/default/logging-function-logs \ - # Other function configs - -``` - -You can access all logs produced by `LoggingFunction` via the `persistent://public/default/logging-function-logs` topic. - -## Metrics - -Pulsar window functions can publish arbitrary metrics to the metrics interface which can be queried. - -:::note - -If a Pulsar window function uses the language-native interface for Java, that function is not able to publish metrics and stats to Pulsar. - -::: - -You can record metrics using the context object on a per-key basis. - -This example records a `MessageEventTime` metric with the event time of each message the function processes in a Java window function. - -```java - -import java.util.Collection; -import org.apache.pulsar.functions.api.Record; -import org.apache.pulsar.functions.api.WindowContext; -import org.apache.pulsar.functions.api.WindowFunction; - - -/** - * Example function that wants to keep track of - * the event time of each message sent. - */ -public class UserMetricWindowFunction implements WindowFunction<String, Void> { - @Override - public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception { - - for (Record<String> record : inputs) { - if (record.getEventTime().isPresent()) { - context.recordMetric("MessageEventTime", record.getEventTime().get().doubleValue()); - } - } - - return null; - } -} - -``` - -## User config - -When you run or update Pulsar Functions that are created using the SDK, you can pass arbitrary key/value pairs to them with the `--user-config` flag. Key/value pairs **must** be specified as JSON. - -This example passes a user configured key/value to a function. - -```bash - -bin/pulsar-admin functions create \ - --name word-filter \ - --user-config '{"forbidden-word":"rosebud"}' \ - # Other function configs - -``` - -### API -You can use the following APIs to get user-defined information for window functions. -#### getUserConfigMap - -The `getUserConfigMap` API gets a map of all user-defined key/value configurations for the window function. - -```java - -/** - * Get a map of all user-defined key/value configs for the function. - * - * @return The full map of user-defined config values - */ - Map<String, Object> getUserConfigMap(); - -``` - -#### getUserConfigValue - -The `getUserConfigValue` API gets a user-defined key/value. - -```java - -/** - * Get any user-defined key/value. - * - * @param key The key - * @return The Optional value specified by the user for that key.
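- *         For example, with {@code --user-config '{"word-of-the-day":"verdure"}'} set on the - *         function (as in the example shown below), this returns an Optional containing "verdure".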
- */
-Optional<Object> getUserConfigValue(String key);
-
-```
-
-#### getUserConfigValueOrDefault
-
-The `getUserConfigValueOrDefault` API gets a user-defined key/value or a default value if none is present.
-
-```java
-
-/**
- * Get any user-defined key/value or a default value if none is present.
- *
- * @param key The key
- * @param defaultValue The default value returned when no value is set for the key
- * @return Either the user config value associated with a given key or a supplied default value
- */
-Object getUserConfigValueOrDefault(String key, Object defaultValue);
-
-```
-
-This example demonstrates how to access key/value pairs provided to Pulsar window functions.
-
-The Java SDK context object enables you to access key/value pairs provided to Pulsar window functions via the command line (as JSON).
-
-:::tip
-
-For all key/value pairs passed to Java window functions, both the `key` and the `value` are `String`. To set the value to a different type, you need to deserialize it from the `String` type.
-
-:::
-
-This example passes a key/value pair in a Java window function.
-
-```bash
-
-bin/pulsar-admin functions create \
-  --user-config '{"word-of-the-day":"verdure"}' \
-  # Other function configs
-
-```
-
-This example accesses values in a Java window function.
-
-The `UserConfigWindowFunction` below returns the value configured for the `WhatToWrite` key every time a window is processed. The user config value is changed **only** when the function is updated with a new config value, for example via the command-line tool or the REST API.
-
-```java
-
-import java.util.Collection;
-import java.util.Optional;
-
-import org.apache.pulsar.functions.api.Record;
-import org.apache.pulsar.functions.api.WindowContext;
-import org.apache.pulsar.functions.api.WindowFunction;
-
-public class UserConfigWindowFunction implements WindowFunction<String, String> {
-    @Override
-    public String process(Collection<Record<String>> input, WindowContext context) throws Exception {
-        Optional<Object> whatToWrite = context.getUserConfigValue("WhatToWrite");
-        if (whatToWrite.isPresent()) {
-            return (String) whatToWrite.get();
-        } else {
-            return "Not a nice way";
-        }
-    }
-
-}
-
-```
-
-If no value is provided, you can access the entire user config map or set a default value.
-
-```java
-
-// Get the whole config map
-Map<String, Object> allConfigs = context.getUserConfigMap();
-
-// Get value or resort to default
-String wotd = (String) context.getUserConfigValueOrDefault("word-of-the-day", "perspicacious");
-
-```
-
-## Routing
-
-You can use the `context.publish()` interface to publish as many results as you want.
-
-This example shows how the `PublishWindowFunction` class uses the built-in `publish` method in the context to publish messages to the `publishTopic` in a Java function.
-
-```java
-
-public class PublishWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> input, WindowContext context) throws Exception {
-        String publishTopic = (String) context.getUserConfigValueOrDefault("publish-topic", "publishtopic");
-        String output = String.format("%s!", input);
-        context.publish(publishTopic, output);
-
-        return null;
-    }
-
-}
-
-```
-
-## State storage
-
-Pulsar window functions use [Apache BookKeeper](https://bookkeeper.apache.org) as a state storage interface. An Apache Pulsar installation (including the standalone installation) includes the deployment of BookKeeper bookies.
-
-Apache Pulsar integrates with the Apache BookKeeper `table service` to store the `state` for functions. For example, the `WordCount` function can store its `counters` state into the BookKeeper table service via the Pulsar Functions state APIs. 
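-
-Once such a function is running, you can peek at a stored counter from the command line with the `querystate` subcommand. The following is a minimal sketch in which the function name `word-count` and the key `hello` are placeholders:
-
-```bash
-
-# `word-count` and `hello` below are placeholder names
-bin/pulsar-admin functions querystate \
-  --tenant public \
-  --namespace default \
-  --name word-count \
-  --key hello
-
-```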
-
-States are key-value pairs, where the key is a string and the value is arbitrary binary data; counters are stored as 64-bit big-endian binary values. Keys are scoped to an individual Pulsar Function and shared between instances of that function.
-
-Currently, Pulsar window functions expose a Java API to access, update, and manage states. These APIs are available in the context object when you use Java SDK functions.
-
-| Java API| Description
-|---|---
-|`incrCounter`|Increases a built-in distributed counter referred to by key.
-|`getCounter`|Gets the counter value for the key.
-|`putState`|Updates the state value for the key.
-
-You can use the following APIs to access, update, and manage states in Java window functions.
-
-#### incrCounter
-
-The `incrCounter` API increases a built-in distributed counter referred to by key.
-
-Applications use the `incrCounter` API to change the counter of a given `key` by the given `amount`. If the `key` does not exist, a new key is created.
-
-```java
-
-    /**
-     * Increment the built-in distributed counter referred to by key
-     * @param key The name of the key
-     * @param amount The amount to be incremented
-     */
-    void incrCounter(String key, long amount);
-
-```
-
-#### getCounter
-
-The `getCounter` API gets the counter value for the key.
-
-Applications use the `getCounter` API to retrieve the counter of a given `key` changed by the `incrCounter` API.
-
-```java
-
-    /**
-     * Retrieve the counter value for the key.
-     *
-     * @param key name of the key
-     * @return the amount of the counter value for this key
-     */
-    long getCounter(String key);
-
-```
-
-Besides the `getCounter` API, Pulsar also exposes a general key/value API (`putState`) for functions to store general key/value state.
-
-#### putState
-
-The `putState` API updates the state value for the key.
-
-```java
-
-    /**
-     * Update the state value for the key.
-     *
-     * @param key name of the key
-     * @param value state value of the key
-     */
-    void putState(String key, ByteBuffer value);
-
-```
-
-This example demonstrates how applications store states in Pulsar window functions.
-
-The logic of the `WordCountWindowFunction` is simple and straightforward.
-
-1. The function first splits the received string into multiple words using the regex `\\.`.
-
-2. For each `word`, the function increments the corresponding `counter` by 1 via `incrCounter(key, amount)`.
-
-```java
-
-import java.util.Arrays;
-import java.util.Collection;
-
-import org.apache.pulsar.functions.api.Record;
-import org.apache.pulsar.functions.api.WindowContext;
-import org.apache.pulsar.functions.api.WindowFunction;
-
-public class WordCountWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        for (Record<String> input : inputs) {
-            Arrays.asList(input.getValue().split("\\.")).forEach(word -> context.incrCounter(word, 1));
-        }
-        return null;
-
-    }
-}
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/about.md b/site2/website/versioned_docs/version-2.9.1-deprecated/about.md
deleted file mode 100644
index 12b01eef73821f..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/about.md
+++ /dev/null
@@ -1,56 +0,0 @@
----
-slug: /
-id: about
-title: Welcome to the doc portal!
-sidebar_label: "About"
----
-
-import BlockLinks from "@site/src/components/BlockLinks";
-import BlockLink from "@site/src/components/BlockLink";
-import { docUrl } from "@site/src/utils/index";
-
-
-# Welcome to the doc portal!
-***
-
-This portal holds a variety of support documents to help you work with Pulsar. 
If you’re a beginner, there are tutorials and explainers to help you understand Pulsar and how it works.
-
-If you’re an experienced coder, review this page to learn the easiest way to access the specific content you’re looking for.
-
-## Get Started Now
-
-
-
-
-
-
-
-
-
-## Navigation
-***
-
-There are several ways to get around in the doc portal. The index navigation pane is a table of contents for the entire archive. The archive is divided into sections, like chapters in a book. Click the title of the topic to view it.
-
-In-context links provide an easy way to immediately reference related topics. Click the underlined term to view the topic.
-
-Links to related topics can be found at the bottom of each topic page. Click the link to view the topic.
-
-![Page Linking](/assets/page-linking.png)
-
-## Continuous Improvement
-***
-As you probably know, we are working on a new user experience for our documentation portal that will make learning about and building on top of Apache Pulsar a much better experience. Whether you need overview concepts, how-to procedures, curated guides or quick references, we’re building content to support it. This welcome page is just the first step. We will be providing updates every month.
-
-## Help Improve These Documents
-***
-
-You’ll notice an Edit button at the bottom and top of each page. Click it to open a landing page with instructions for requesting changes to posted documents. These are your resources. Participation is not only welcomed – it’s essential!
-
-## Join the Community!
-***
-
-The Pulsar community on GitHub is active, passionate, and knowledgeable. Join discussions, voice opinions, suggest features, and dive into the code itself. Find your Pulsar family here at [apache/pulsar](https://github.com/apache/pulsar).
-
-An equally passionate community can be found in the [Pulsar Slack channel](https://apache-pulsar.slack.com/). You’ll need an invitation to join, but many GitHub Pulsar community members are Slack members too. Join, hang out, learn, and make some new friends.
-
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/adaptors-kafka.md b/site2/website/versioned_docs/version-2.9.1-deprecated/adaptors-kafka.md
deleted file mode 100644
index ea256049710fd1..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/adaptors-kafka.md
+++ /dev/null
@@ -1,276 +0,0 @@
----
-id: adaptors-kafka
-title: Pulsar adaptor for Apache Kafka
-sidebar_label: "Kafka client wrapper"
-original_id: adaptors-kafka
----
-
-
-Pulsar provides an easy option for applications that are currently written using the [Apache Kafka](http://kafka.apache.org) Java client API.
-
-## Using the Pulsar Kafka compatibility wrapper
-
-In an existing application, change the regular Kafka client dependency and replace it with the Pulsar Kafka wrapper. Remove the following dependency in `pom.xml`:
-
-```xml
-
-<dependency>
-  <groupId>org.apache.kafka</groupId>
-  <artifactId>kafka-clients</artifactId>
-  <version>0.10.2.1</version>
-</dependency>
-
-```
-
-Then include this dependency for the Pulsar Kafka wrapper:
-
-```xml
-
-<dependency>
-  <groupId>org.apache.pulsar</groupId>
-  <artifactId>pulsar-client-kafka</artifactId>
-  <version>@pulsar:version@</version>
-</dependency>
-
-```
-
-With the new dependency, the existing code works without any changes. You only need to adjust the
-configuration so that producers and consumers point to a Pulsar service rather than a Kafka cluster and use a particular
-Pulsar topic. 
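-
-For instance, the only pieces that usually change are the service URL and the topic name. The following minimal sketch mirrors the full producer and consumer examples below; the service URL and topic name are illustrative values:
-
-```java
-
-Properties props = new Properties();
-// Point to a Pulsar service instead of a Kafka cluster
-props.put("bootstrap.servers", "pulsar://localhost:6650");
-
-// Use a Pulsar topic name wherever a Kafka topic name was used
-String topic = "persistent://public/default/my-topic";
-
-```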
-
-## Using the Pulsar Kafka compatibility wrapper together with existing kafka client
-
-When migrating from Kafka to Pulsar, the application might use the original Kafka client
-and the Pulsar Kafka wrapper together during migration. In that case, you should consider using the
-unshaded Pulsar Kafka client wrapper.
-
-```xml
-
-<dependency>
-  <groupId>org.apache.pulsar</groupId>
-  <artifactId>pulsar-client-kafka-original</artifactId>
-  <version>@pulsar:version@</version>
-</dependency>
-
-```
-
-When using this dependency, construct producers using `org.apache.kafka.clients.producer.PulsarKafkaProducer`
-instead of `org.apache.kafka.clients.producer.KafkaProducer`, and construct consumers using `org.apache.kafka.clients.consumer.PulsarKafkaConsumer` instead of `org.apache.kafka.clients.consumer.KafkaConsumer`.
-
-## Producer example
-
-```java
-
-// Topic needs to be a regular Pulsar topic
-String topic = "persistent://public/default/my-topic";
-
-Properties props = new Properties();
-// Point to a Pulsar service
-props.put("bootstrap.servers", "pulsar://localhost:6650");
-
-props.put("key.serializer", IntegerSerializer.class.getName());
-props.put("value.serializer", StringSerializer.class.getName());
-
-Producer<Integer, String> producer = new KafkaProducer<>(props);
-
-for (int i = 0; i < 10; i++) {
-    producer.send(new ProducerRecord<>(topic, i, "hello-" + i));
-    log.info("Message {} sent successfully", i);
-}
-
-producer.close();
-
-```
-
-## Consumer example
-
-```java
-
-String topic = "persistent://public/default/my-topic";
-
-Properties props = new Properties();
-// Point to a Pulsar service
-props.put("bootstrap.servers", "pulsar://localhost:6650");
-props.put("group.id", "my-subscription-name");
-props.put("enable.auto.commit", "false");
-props.put("key.deserializer", IntegerDeserializer.class.getName());
-props.put("value.deserializer", StringDeserializer.class.getName());
-
-Consumer<Integer, String> consumer = new KafkaConsumer<>(props);
-consumer.subscribe(Arrays.asList(topic));
-
-while (true) {
-    ConsumerRecords<Integer, String> records = consumer.poll(100);
-    records.forEach(record -> {
-        log.info("Received record: {}", record);
-    });
-
-    // Commit last offset
-    consumer.commitSync();
-}
-
-```
-
-## Complete Examples
-
-You can find the complete producer and consumer examples [here](https://github.com/apache/pulsar-adapters/tree/master/pulsar-client-kafka-compat/pulsar-client-kafka-tests/src/test/java/org/apache/pulsar/client/kafka/compat/examples).
-
-## Compatibility matrix
-
-Currently, the Pulsar Kafka wrapper supports most of the operations offered by the Kafka API.
-
-### Producer
-
-APIs:
-
-| Producer Method | Supported | Notes |
-|:------------------------------------------------------------------------------|:----------|:-------------------------------------------------------------------------|
-| `Future<RecordMetadata> send(ProducerRecord<K, V> record)` | Yes | |
-| `Future<RecordMetadata> send(ProducerRecord<K, V> record, Callback callback)` | Yes | |
-| `void flush()` | Yes | |
-| `List<PartitionInfo> partitionsFor(String topic)` | No | |
-| `Map<MetricName, ? extends Metric> metrics()` | No | |
-| `void close()` | Yes | |
-| `void close(long timeout, TimeUnit unit)` | Yes | |
-
-Properties:
-
-| Config property | Supported | Notes |
-|:----------------------------------------|:----------|:------------------------------------------------------------------------------|
-| `acks` | Ignored | Durability and quorum writes are configured at the namespace level |
-| `auto.offset.reset` | Yes | It uses a default value of `earliest` if you do not give a specific setting. |
-| `batch.size` | Ignored | |
-| `bootstrap.servers` | Yes | |
-| `buffer.memory` | Ignored | |
-| `client.id` | Ignored | |
-| `compression.type` | Yes | Allows `gzip` and `lz4`. No `snappy`. 
|
-| `connections.max.idle.ms` | Yes | Supports up to 2,147,483,647,000 ms (Integer.MAX_VALUE * 1000) of idle time |
-| `interceptor.classes` | Yes | |
-| `key.serializer` | Yes | |
-| `linger.ms` | Yes | Controls the group commit time when batching messages |
-| `max.block.ms` | Ignored | |
-| `max.in.flight.requests.per.connection` | Ignored | In Pulsar, ordering is maintained even with multiple requests in flight |
-| `max.request.size` | Ignored | |
-| `metric.reporters` | Ignored | |
-| `metrics.num.samples` | Ignored | |
-| `metrics.sample.window.ms` | Ignored | |
-| `partitioner.class` | Yes | |
-| `receive.buffer.bytes` | Ignored | |
-| `reconnect.backoff.ms` | Ignored | |
-| `request.timeout.ms` | Ignored | |
-| `retries` | Ignored | Pulsar client retries with exponential backoff until the send timeout expires. |
-| `send.buffer.bytes` | Ignored | |
-| `timeout.ms` | Yes | |
-| `value.serializer` | Yes | |
-
-
-### Consumer
-
-The following table lists consumer APIs.
-
-| Consumer Method | Supported | Notes |
-|:--------------------------------------------------------------------------------------------------------|:----------|:------|
-| `Set<TopicPartition> assignment()` | No | |
-| `Set<String> subscription()` | Yes | |
-| `void subscribe(Collection<String> topics)` | Yes | |
-| `void subscribe(Collection<String> topics, ConsumerRebalanceListener callback)` | No | |
-| `void assign(Collection<TopicPartition> partitions)` | No | |
-| `void subscribe(Pattern pattern, ConsumerRebalanceListener callback)` | No | |
-| `void unsubscribe()` | Yes | |
-| `ConsumerRecords<K, V> poll(long timeoutMillis)` | Yes | |
-| `void commitSync()` | Yes | |
-| `void commitSync(Map<TopicPartition, OffsetAndMetadata> offsets)` | Yes | |
-| `void commitAsync()` | Yes | |
-| `void commitAsync(OffsetCommitCallback callback)` | Yes | |
-| `void commitAsync(Map<TopicPartition, OffsetAndMetadata> offsets, OffsetCommitCallback callback)` | Yes | |
-| `void seek(TopicPartition partition, long offset)` | Yes | |
-| `void seekToBeginning(Collection<TopicPartition> partitions)` | Yes | |
-| `void seekToEnd(Collection<TopicPartition> partitions)` | Yes | |
-| `long position(TopicPartition partition)` | Yes | |
-| `OffsetAndMetadata committed(TopicPartition partition)` | Yes | |
-| `Map<MetricName, ? extends Metric> metrics()` | No | |
-| `List<PartitionInfo> partitionsFor(String topic)` | No | |
-| `Map<String, List<PartitionInfo>> listTopics()` | No | |
-| `Set<TopicPartition> paused()` | No | |
-| `void pause(Collection<TopicPartition> partitions)` | No | |
-| `void resume(Collection<TopicPartition> partitions)` | No | |
-| `Map<TopicPartition, OffsetAndTimestamp> offsetsForTimes(Map<TopicPartition, Long> timestampsToSearch)` | No | |
-| `Map<TopicPartition, Long> beginningOffsets(Collection<TopicPartition> partitions)` | No | |
-| `Map<TopicPartition, Long> endOffsets(Collection<TopicPartition> partitions)` | No | |
-| `void close()` | Yes | |
-| `void close(long timeout, TimeUnit unit)` | Yes | |
-| `void wakeup()` | No | |
-
-Properties:
-
-| Config property | Supported | Notes |
-|:--------------------------------|:----------|:------------------------------------------------------|
-| `group.id` | Yes | Maps to a Pulsar subscription name |
-| `max.poll.records` | Yes | |
-| `max.poll.interval.ms` | Ignored | Messages are "pushed" from the broker |
-| `session.timeout.ms` | Ignored | |
-| `heartbeat.interval.ms` | Ignored | |
-| `bootstrap.servers` | Yes | Needs to point to a single Pulsar service URL |
-| `enable.auto.commit` | Yes | |
-| `auto.commit.interval.ms` | Ignored | With auto-commit, acks are sent immediately to the broker |
-| `partition.assignment.strategy` | Ignored | |
-| `auto.offset.reset` | Yes | Only `earliest` and `latest` are supported. 
| -| `fetch.min.bytes` | Ignored | | -| `fetch.max.bytes` | Ignored | | -| `fetch.max.wait.ms` | Ignored | | -| `interceptor.classes` | Yes | | -| `metadata.max.age.ms` | Ignored | | -| `max.partition.fetch.bytes` | Ignored | | -| `send.buffer.bytes` | Ignored | | -| `receive.buffer.bytes` | Ignored | | -| `client.id` | Ignored | | - - -## Customize Pulsar configurations - -You can configure Pulsar authentication provider directly from the Kafka properties. - -### Pulsar client properties - -| Config property | Default | Notes | -|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------| -| [`pulsar.authentication.class`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setAuthentication-org.apache.pulsar.client.api.Authentication-) | | Configure to auth provider. For example, `org.apache.pulsar.client.impl.auth.AuthenticationTls`.| -| [`pulsar.authentication.params.map`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setAuthentication-java.lang.String-java.util.Map-) | | Map which represents parameters for the Authentication-Plugin. | -| [`pulsar.authentication.params.string`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setAuthentication-java.lang.String-java.lang.String-) | | String which represents parameters for the Authentication-Plugin, for example, `key1:val1,key2:val2`. | -| [`pulsar.use.tls`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setUseTls-boolean-) | `false` | Enable TLS transport encryption. | -| [`pulsar.tls.trust.certs.file.path`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setTlsTrustCertsFilePath-java.lang.String-) | | Path for the TLS trust certificate store. | -| [`pulsar.tls.allow.insecure.connection`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setTlsAllowInsecureConnection-boolean-) | `false` | Accept self-signed certificates from brokers. | -| [`pulsar.operation.timeout.ms`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setOperationTimeout-int-java.util.concurrent.TimeUnit-) | `30000` | General operations timeout. | -| [`pulsar.stats.interval.seconds`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setStatsInterval-long-java.util.concurrent.TimeUnit-) | `60` | Pulsar client lib stats printing interval. | -| [`pulsar.num.io.threads`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setIoThreads-int-) | `1` | The number of Netty IO threads to use. | -| [`pulsar.connections.per.broker`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setConnectionsPerBroker-int-) | `1` | The maximum number of connection to each broker. | -| [`pulsar.use.tcp.nodelay`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setUseTcpNoDelay-boolean-) | `true` | TCP no-delay. | -| [`pulsar.concurrent.lookup.requests`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setConcurrentLookupRequest-int-) | `50000` | The maximum number of concurrent topic lookups. 
| -| [`pulsar.max.number.rejected.request.per.connection`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setMaxNumberOfRejectedRequestPerConnection-int-) | `50` | The threshold of errors to forcefully close a connection. | -| [`pulsar.keepalive.interval.ms`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientBuilder.html#keepAliveInterval-int-java.util.concurrent.TimeUnit-)| `30000` | Keep alive interval for each client-broker-connection. | - - -### Pulsar producer properties - -| Config property | Default | Notes | -|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------| -| [`pulsar.producer.name`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setProducerName-java.lang.String-) | | Specify the producer name. | -| [`pulsar.producer.initial.sequence.id`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setInitialSequenceId-long-) | | Specify baseline for sequence ID of this producer. | -| [`pulsar.producer.max.pending.messages`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setMaxPendingMessages-int-) | `1000` | Set the maximum size of the message queue pending to receive an acknowledgment from the broker. | -| [`pulsar.producer.max.pending.messages.across.partitions`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setMaxPendingMessagesAcrossPartitions-int-) | `50000` | Set the maximum number of pending messages across all the partitions. | -| [`pulsar.producer.batching.enabled`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setBatchingEnabled-boolean-) | `true` | Control whether automatic batching of messages is enabled for the producer. | -| [`pulsar.producer.batching.max.messages`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setBatchingMaxMessages-int-) | `1000` | The maximum number of messages in a batch. | -| [`pulsar.block.if.producer.queue.full`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setBlockIfQueueFull-boolean-) | | Specify the block producer if queue is full. | -| [`pulsar.crypto.reader.factory.class.name`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setCryptoKeyReader-org.apache.pulsar.client.api.CryptoKeyReader-) | | Specify the CryptoReader-Factory(`CryptoKeyReaderFactory`) classname which allows producer to create CryptoKeyReader. | - - -### Pulsar consumer Properties - -| Config property | Default | Notes | -|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------| -| [`pulsar.consumer.name`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setConsumerName-java.lang.String-) | | Specify the consumer name. | -| [`pulsar.consumer.receiver.queue.size`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setReceiverQueueSize-int-) | 1000 | Set the size of the consumer receiver queue. 
| -| [`pulsar.consumer.acknowledgments.group.time.millis`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#acknowledgmentGroupTime-long-java.util.concurrent.TimeUnit-) | 100 | Set the maximum amount of group time for consumers to send the acknowledgments to the broker. | -| [`pulsar.consumer.total.receiver.queue.size.across.partitions`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setMaxTotalReceiverQueueSizeAcrossPartitions-int-) | 50000 | Set the maximum size of the total receiver queue across partitions. | -| [`pulsar.consumer.subscription.topics.mode`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#subscriptionTopicsMode-Mode-) | PersistentOnly | Set the subscription topic mode for consumers. | -| [`pulsar.crypto.reader.factory.class.name`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setCryptoKeyReader-org.apache.pulsar.client.api.CryptoKeyReader-) | | Specify the CryptoReader-Factory(`CryptoKeyReaderFactory`) classname which allows consumer to create CryptoKeyReader. | diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/adaptors-spark.md b/site2/website/versioned_docs/version-2.9.1-deprecated/adaptors-spark.md deleted file mode 100644 index e14f13b5d4b079..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/adaptors-spark.md +++ /dev/null @@ -1,91 +0,0 @@ ---- -id: adaptors-spark -title: Pulsar adaptor for Apache Spark -sidebar_label: "Apache Spark" -original_id: adaptors-spark ---- - -## Spark Streaming receiver -The Spark Streaming receiver for Pulsar is a custom receiver that enables Apache [Spark Streaming](https://spark.apache.org/streaming/) to receive raw data from Pulsar. - -An application can receive data in [Resilient Distributed Dataset](https://spark.apache.org/docs/latest/programming-guide.html#resilient-distributed-datasets-rdds) (RDD) format via the Spark Streaming receiver and can process it in a variety of ways. - -### Prerequisites - -To use the receiver, include a dependency for the `pulsar-spark` library in your Java configuration. 
-
-#### Maven
-
-If you're using Maven, add this to your `pom.xml`:
-
-```xml
-
-<properties>
-  <pulsar.version>@pulsar:version@</pulsar.version>
-</properties>
-
-<dependency>
-  <groupId>org.apache.pulsar</groupId>
-  <artifactId>pulsar-spark</artifactId>
-  <version>${pulsar.version}</version>
-</dependency>
-
-```
-
-#### Gradle
-
-If you're using Gradle, add this to your `build.gradle` file:
-
-```groovy
-
-def pulsarVersion = "@pulsar:version@"
-
-dependencies {
-    compile group: 'org.apache.pulsar', name: 'pulsar-spark', version: pulsarVersion
-}
-
-```
-
-### Usage
-
-Pass an instance of `SparkStreamingPulsarReceiver` to the `receiverStream` method in `JavaStreamingContext`:
-
-```java
-
-    String serviceUrl = "pulsar://localhost:6650/";
-    String topic = "persistent://public/default/test_src";
-    String subs = "test_sub";
-
-    SparkConf sparkConf = new SparkConf().setMaster("local[*]").setAppName("Pulsar Spark Example");
-
-    JavaStreamingContext jsc = new JavaStreamingContext(sparkConf, Durations.seconds(60));
-
-    ConsumerConfigurationData<byte[]> pulsarConf = new ConsumerConfigurationData<>();
-
-    Set<String> set = new HashSet<>();
-    set.add(topic);
-    pulsarConf.setTopicNames(set);
-    pulsarConf.setSubscriptionName(subs);
-
-    SparkStreamingPulsarReceiver pulsarReceiver = new SparkStreamingPulsarReceiver(
-        serviceUrl,
-        pulsarConf,
-        new AuthenticationDisabled());
-
-    JavaReceiverInputDStream<byte[]> lineDStream = jsc.receiverStream(pulsarReceiver);
-
-```
-
-For a complete example, click [here](https://github.com/apache/pulsar-adapters/blob/master/examples/spark/src/main/java/org/apache/spark/streaming/receiver/example/SparkStreamingPulsarReceiverExample.java). In this example, the number of received messages that contain the string "Pulsar" is counted.
-
-Note that, if needed, other Pulsar authentication classes can be used. For example, in order to use a token during authentication, the following parameters for the `SparkStreamingPulsarReceiver` constructor can be set:
-
-```java
-
-SparkStreamingPulsarReceiver pulsarReceiver = new SparkStreamingPulsarReceiver(
-    serviceUrl,
-    pulsarConf,
-    new AuthenticationToken("token:<secret-JWT-token>"));
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/adaptors-storm.md b/site2/website/versioned_docs/version-2.9.1-deprecated/adaptors-storm.md
deleted file mode 100644
index 76d507164777db..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/adaptors-storm.md
+++ /dev/null
@@ -1,96 +0,0 @@
----
-id: adaptors-storm
-title: Pulsar adaptor for Apache Storm
-sidebar_label: "Apache Storm"
-original_id: adaptors-storm
----
-
-Pulsar Storm is an adaptor for integrating with [Apache Storm](http://storm.apache.org/) topologies. It provides core Storm implementations for sending and receiving data.
-
-An application can inject data into a Storm topology via a generic Pulsar spout, as well as consume data from a Storm topology via a generic Pulsar bolt.
-
-## Using the Pulsar Storm Adaptor
-
-Include a dependency for the Pulsar Storm adaptor:
-
-```xml
-
-<dependency>
-  <groupId>org.apache.pulsar</groupId>
-  <artifactId>pulsar-storm</artifactId>
-  <version>${pulsar.version}</version>
-</dependency>
-
-```
-
-## Pulsar Spout
-
-The Pulsar Spout allows for the data published on a topic to be consumed by a Storm topology. It emits a Storm tuple based on the message received and the `MessageToValuesMapper` provided by the client.
-
-The tuples that fail to be processed by the downstream bolts are re-injected by the spout with an exponential backoff, within a configurable timeout (the default is 60 seconds) or a configurable number of retries, whichever comes first, after which the message is acknowledged by the consumer. 
Here's an example construction of a spout:
-
-```java
-
-MessageToValuesMapper messageToValuesMapper = new MessageToValuesMapper() {
-
-    @Override
-    public Values toValues(Message<byte[]> msg) {
-        return new Values(new String(msg.getData()));
-    }
-
-    @Override
-    public void declareOutputFields(OutputFieldsDeclarer declarer) {
-        // declare the output fields
-        declarer.declare(new Fields("string"));
-    }
-};
-
-// Configure a Pulsar Spout
-PulsarSpoutConfiguration spoutConf = new PulsarSpoutConfiguration();
-spoutConf.setServiceUrl("pulsar://broker.messaging.usw.example.com:6650");
-spoutConf.setTopic("persistent://my-property/usw/my-ns/my-topic1");
-spoutConf.setSubscriptionName("my-subscriber-name1");
-spoutConf.setMessageToValuesMapper(messageToValuesMapper);
-
-// Create a Pulsar Spout
-PulsarSpout spout = new PulsarSpout(spoutConf);
-
-```
-
-For a complete example, click [here](https://github.com/apache/pulsar-adapters/blob/master/pulsar-storm/src/test/java/org/apache/pulsar/storm/PulsarSpoutTest.java).
-
-## Pulsar Bolt
-
-The Pulsar bolt allows data in a Storm topology to be published on a topic. It publishes messages based on the Storm tuple received and the `TupleToMessageMapper` provided by the client.
-
-A partitioned topic can also be used to publish messages on different topics. In the implementation of the `TupleToMessageMapper`, a "key" needs to be provided in the message, which sends the messages with the same key to the same topic. Here's an example bolt:
-
-```java
-
-TupleToMessageMapper tupleToMessageMapper = new TupleToMessageMapper() {
-
-    @Override
-    public TypedMessageBuilder<byte[]> toMessage(TypedMessageBuilder<byte[]> msgBuilder, Tuple tuple) {
-        String receivedMessage = tuple.getString(0);
-        // message processing
-        String processedMsg = receivedMessage + "-processed";
-        return msgBuilder.value(processedMsg.getBytes());
-    }
-
-    @Override
-    public void declareOutputFields(OutputFieldsDeclarer declarer) {
-        // declare the output fields
-    }
-};
-
-// Configure a Pulsar Bolt
-PulsarBoltConfiguration boltConf = new PulsarBoltConfiguration();
-boltConf.setServiceUrl("pulsar://broker.messaging.usw.example.com:6650");
-boltConf.setTopic("persistent://my-property/usw/my-ns/my-topic2");
-boltConf.setTupleToMessageMapper(tupleToMessageMapper);
-
-// Create a Pulsar Bolt
-PulsarBolt bolt = new PulsarBolt(boltConf);
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-brokers.md b/site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-brokers.md
deleted file mode 100644
index 930fe69ecfb0ee..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-brokers.md
+++ /dev/null
@@ -1,286 +0,0 @@
----
-id: admin-api-brokers
-title: Managing Brokers
-sidebar_label: "Brokers"
-original_id: admin-api-brokers
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-> **Important**
->
-> This page only shows **some frequently used operations**.
->
-> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/).
->
-> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc.
->
-> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/). 
- -Pulsar brokers consist of two components: - -1. An HTTP server exposing a {@inject: rest:REST:/} interface administration and [topic](reference-terminology.md#topic) lookup. -2. A dispatcher that handles all Pulsar [message](reference-terminology.md#message) transfers. - -[Brokers](reference-terminology.md#broker) can be managed via: - -* The `brokers` command of the [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/) tool -* The `/admin/v2/brokers` endpoint of the admin {@inject: rest:REST:/} API -* The `brokers` method of the `PulsarAdmin` object in the [Java API](client-libraries-java.md) - -In addition to being configurable when you start them up, brokers can also be [dynamically configured](#dynamic-broker-configuration). - -> See the [Configuration](reference-configuration.md#broker) page for a full listing of broker-specific configuration parameters. - -## Brokers resources - -### List active brokers - -Fetch all available active brokers that are serving traffic with cluster name. - -````mdx-code-block - - - -```shell - -$ pulsar-admin brokers list use - -``` - -``` - -broker1.use.org.com:8080 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/brokers/:cluster|operation/getActiveBrokers?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().getActiveBrokers(clusterName) - -``` - - - - -```` - -### Get the information of the leader broker - -Fetch the information of the leader broker, for example, the service url. - -````mdx-code-block - - - -```shell - -$ pulsar-admin brokers leader-broker - -``` - -``` - -BrokerInfo(serviceUrl=broker1.use.org.com:8080) - -``` - - - - -{@inject: endpoint|GET|/admin/v2/brokers/leaderBroker|operation/getLeaderBroker?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().getLeaderBroker() - -``` - -For the detail of the code above, see [here](https://github.com/apache/pulsar/blob/master/pulsar-client-admin/src/main/java/org/apache/pulsar/client/admin/internal/BrokersImpl.java#L80) - - - - -```` - -#### list of namespaces owned by a given broker - -It finds all namespaces which are owned and served by a given broker. - -````mdx-code-block - - - -```shell - -$ pulsar-admin brokers namespaces use \ - --url broker1.use.org.com:8080 - -``` - -```json - -{ - "my-property/use/my-ns/0x00000000_0xffffffff": { - "broker_assignment": "shared", - "is_controlled": false, - "is_active": true - } -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/brokers/:cluster/:broker/ownedNamespaces|operation/getOwnedNamespaes?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().getOwnedNamespaces(cluster,brokerUrl); - -``` - - - - -```` - -### Dynamic broker configuration - -One way to configure a Pulsar [broker](reference-terminology.md#broker) is to supply a [configuration](reference-configuration.md#broker) when the broker is [started up](reference-cli-tools.md#pulsar-broker). - -But since all broker configuration in Pulsar is stored in ZooKeeper, configuration values can also be dynamically updated *while the broker is running*. When you update broker configuration dynamically, ZooKeeper will notify the broker of the change and the broker will then override any existing configuration values. - -* The [`brokers`](reference-pulsar-admin.md#brokers) command for the [`pulsar-admin`](reference-pulsar-admin.md) tool has a variety of subcommands that enable you to manipulate a broker's configuration dynamically, enabling you to [update config values](#update-dynamic-configuration) and more. 
-* In the Pulsar admin {@inject: rest:REST:/} API, dynamic configuration is managed through the `/admin/v2/brokers/configuration` endpoint. - -### Update dynamic configuration - -````mdx-code-block - - - -The [`update-dynamic-config`](reference-pulsar-admin.md#brokers-update-dynamic-config) subcommand will update existing configuration. It takes two arguments: the name of the parameter and the new value using the `config` and `value` flag respectively. Here's an example for the [`brokerShutdownTimeoutMs`](reference-configuration.md#broker-brokerShutdownTimeoutMs) parameter: - -```shell - -$ pulsar-admin brokers update-dynamic-config --config brokerShutdownTimeoutMs --value 100 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/brokers/configuration/:configName/:configValue|operation/updateDynamicConfiguration?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().updateDynamicConfiguration(configName, configValue); - -``` - - - - -```` - -### List updated values - -Fetch a list of all potentially updatable configuration parameters. -````mdx-code-block - - - -```shell - -$ pulsar-admin brokers list-dynamic-config -brokerShutdownTimeoutMs - -``` - - - - -{@inject: endpoint|GET|/admin/v2/brokers/configuration|operation/getDynamicConfigurationName?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().getDynamicConfigurationNames(); - -``` - - - - -```` - -### List all - -Fetch a list of all parameters that have been dynamically updated. - -````mdx-code-block - - - -```shell - -$ pulsar-admin brokers get-all-dynamic-config -brokerShutdownTimeoutMs:100 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/brokers/configuration/values|operation/getAllDynamicConfigurations?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().getAllDynamicConfigurations(); - -``` - - - - -```` diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-clusters.md b/site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-clusters.md deleted file mode 100644 index 1d0c5dc9786f5a..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-clusters.md +++ /dev/null @@ -1,318 +0,0 @@ ---- -id: admin-api-clusters -title: Managing Clusters -sidebar_label: "Clusters" -original_id: admin-api-clusters ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -> **Important** -> -> This page only shows **some frequently used operations**. -> -> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/) -> -> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. -> -> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/). - -Pulsar clusters consist of one or more Pulsar [brokers](reference-terminology.md#broker), one or more [BookKeeper](reference-terminology.md#bookkeeper) -servers (aka [bookies](reference-terminology.md#bookie)), and a [ZooKeeper](https://zookeeper.apache.org) cluster that provides configuration and coordination management. 
- -Clusters can be managed via: - -* The `clusters` command of the [`pulsar-admin`]([reference-pulsar-admin.md](https://pulsar.apache.org/tools/pulsar-admin/)) tool -* The `/admin/v2/clusters` endpoint of the admin {@inject: rest:REST:/} API -* The `clusters` method of the `PulsarAdmin` object in the [Java API](client-libraries-java.md) - -## Clusters resources - -### Provision - -New clusters can be provisioned using the admin interface. - -> Please note that this operation requires superuser privileges. - -````mdx-code-block - - - -You can provision a new cluster using the [`create`](reference-pulsar-admin.md#clusters-create) subcommand. Here's an example: - -```shell - -$ pulsar-admin clusters create cluster-1 \ - --url http://my-cluster.org.com:8080 \ - --broker-url pulsar://my-cluster.org.com:6650 - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/clusters/:cluster|operation/createCluster?version=@pulsar:version_number@} - - - - -```java - -ClusterData clusterData = new ClusterData( - serviceUrl, - serviceUrlTls, - brokerServiceUrl, - brokerServiceUrlTls -); -admin.clusters().createCluster(clusterName, clusterData); - -``` - - - - -```` - -### Initialize cluster metadata - -When provision a new cluster, you need to initialize that cluster's [metadata](concepts-architecture-overview.md#metadata-store). When initializing cluster metadata, you need to specify all of the following: - -* The name of the cluster -* The local ZooKeeper connection string for the cluster -* The configuration store connection string for the entire instance -* The web service URL for the cluster -* A broker service URL enabling interaction with the [brokers](reference-terminology.md#broker) in the cluster - -You must initialize cluster metadata *before* starting up any [brokers](admin-api-brokers.md) that will belong to the cluster. - -> **No cluster metadata initialization through the REST API or the Java admin API** -> -> Unlike most other admin functions in Pulsar, cluster metadata initialization cannot be performed via the admin REST API -> or the admin Java client, as metadata initialization involves communicating with ZooKeeper directly. -> Instead, you can use the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool, in particular -> the [`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata) command. - -Here's an example cluster metadata initialization command: - -```shell - -bin/pulsar initialize-cluster-metadata \ - --cluster us-west \ - --zookeeper zk1.us-west.example.com:2181 \ - --configuration-store zk1.us-west.example.com:2184 \ - --web-service-url http://pulsar.us-west.example.com:8080/ \ - --web-service-url-tls https://pulsar.us-west.example.com:8443/ \ - --broker-service-url pulsar://pulsar.us-west.example.com:6650/ \ - --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651/ - -``` - -You'll need to use `--*-tls` flags only if you're using [TLS authentication](security-tls-authentication.md) in your instance. - -### Get configuration - -You can fetch the [configuration](reference-configuration.md) for an existing cluster at any time. - -````mdx-code-block - - - -Use the [`get`](reference-pulsar-admin.md#clusters-get) subcommand and specify the name of the cluster. 
Here's an example: - -```shell - -$ pulsar-admin clusters get cluster-1 -{ - "serviceUrl": "http://my-cluster.org.com:8080/", - "serviceUrlTls": null, - "brokerServiceUrl": "pulsar://my-cluster.org.com:6650/", - "brokerServiceUrlTls": null - "peerClusterNames": null -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/clusters/:cluster|operation/getCluster?version=@pulsar:version_number@} - - - - -```java - -admin.clusters().getCluster(clusterName); - -``` - - - - -```` - -### Update - -You can update the configuration for an existing cluster at any time. - -````mdx-code-block - - - -Use the [`update`](reference-pulsar-admin.md#clusters-update) subcommand and specify new configuration values using flags. - -```shell - -$ pulsar-admin clusters update cluster-1 \ - --url http://my-cluster.org.com:4081 \ - --broker-url pulsar://my-cluster.org.com:3350 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/clusters/:cluster|operation/updateCluster?version=@pulsar:version_number@} - - - - -```java - -ClusterData clusterData = new ClusterData( - serviceUrl, - serviceUrlTls, - brokerServiceUrl, - brokerServiceUrlTls -); -admin.clusters().updateCluster(clusterName, clusterData); - -``` - - - - -```` - -### Delete - -Clusters can be deleted from a Pulsar [instance](reference-terminology.md#instance). - -````mdx-code-block - - - -Use the [`delete`](reference-pulsar-admin.md#clusters-delete) subcommand and specify the name of the cluster. - -``` - -$ pulsar-admin clusters delete cluster-1 - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/clusters/:cluster|operation/deleteCluster?version=@pulsar:version_number@} - - - - -```java - -admin.clusters().deleteCluster(clusterName); - -``` - - - - -```` - -### List - -You can fetch a list of all clusters in a Pulsar [instance](reference-terminology.md#instance). - -````mdx-code-block - - - -Use the [`list`](reference-pulsar-admin.md#clusters-list) subcommand. - -```shell - -$ pulsar-admin clusters list -cluster-1 -cluster-2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/clusters|operation/getClusters?version=@pulsar:version_number@} - - - - -```java - -admin.clusters().getClusters(); - -``` - - - - -```` - -### Update peer-cluster data - -Peer clusters can be configured for a given cluster in a Pulsar [instance](reference-terminology.md#instance). - -````mdx-code-block - - - -Use the [`update-peer-clusters`](reference-pulsar-admin.md#clusters-update-peer-clusters) subcommand and specify the list of peer-cluster names. - -``` - -$ pulsar-admin update-peer-clusters cluster-1 --peer-clusters cluster-2 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/clusters/:cluster/peers|operation/setPeerClusterNames?version=@pulsar:version_number@} - - - - -```java - -admin.clusters().updatePeerClusterNames(clusterName, peerClusterList); - -``` - - - - -```` \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-functions.md b/site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-functions.md deleted file mode 100644 index d73386caf9b418..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-functions.md +++ /dev/null @@ -1,830 +0,0 @@ ---- -id: admin-api-functions -title: Manage Functions -sidebar_label: "Functions" -original_id: admin-api-functions ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -> **Important** -> -> This page only shows **some frequently used operations**. 
-> -> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/) -> -> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. -> -> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/). - -**Pulsar Functions** are lightweight compute processes that - -* consume messages from one or more Pulsar topics -* apply a user-supplied processing logic to each message -* publish the results of the computation to another topic - -Functions can be managed via the following methods. - -Method | Description ----|--- -**Admin CLI** | The `functions` command of the [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/) tool. -**REST API** |The `/admin/v3/functions` endpoint of the admin {@inject: rest:REST:/} API. -**Java Admin API**| The `functions` method of the `PulsarAdmin` object in the [Java API](client-libraries-java.md). - -## Function resources - -You can perform the following operations on functions. - -### Create a function - -You can create a Pulsar function in cluster mode (deploy it on a Pulsar cluster) using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`create`](reference-pulsar-admin.md#functions-create) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions create \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --inputs test-input-topic \ - --output persistent://public/default/test-output-topic \ - --classname org.apache.pulsar.functions.api.examples.ExclamationFunction \ - --jar /examples/api-examples.jar - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName|operation/registerFunction?version=@pulsar:version_number@} - - - - -```java - -FunctionConfig functionConfig = new FunctionConfig(); -functionConfig.setTenant(tenant); -functionConfig.setNamespace(namespace); -functionConfig.setName(functionName); -functionConfig.setRuntime(FunctionConfig.Runtime.JAVA); -functionConfig.setParallelism(1); -functionConfig.setClassName("org.apache.pulsar.functions.api.examples.ExclamationFunction"); -functionConfig.setProcessingGuarantees(FunctionConfig.ProcessingGuarantees.ATLEAST_ONCE); -functionConfig.setTopicsPattern(sourceTopicPattern); -functionConfig.setSubName(subscriptionName); -functionConfig.setAutoAck(true); -functionConfig.setOutput(sinkTopic); -admin.functions().createFunction(functionConfig, fileName); - -``` - - - - -```` - -### Update a function - -You can update a Pulsar function that has been deployed to a Pulsar cluster using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`update`](reference-pulsar-admin.md#functions-update) subcommand. 
- -**Example** - -```shell - -$ pulsar-admin functions update \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --output persistent://public/default/update-output-topic \ - # other options - -``` - - - - -{@inject: endpoint|PUT|/admin/v3/functions/:tenant/:namespace/:functionName|operation/updateFunction?version=@pulsar:version_number@} - - - - -```java - -FunctionConfig functionConfig = new FunctionConfig(); -functionConfig.setTenant(tenant); -functionConfig.setNamespace(namespace); -functionConfig.setName(functionName); -functionConfig.setRuntime(FunctionConfig.Runtime.JAVA); -functionConfig.setParallelism(1); -functionConfig.setClassName("org.apache.pulsar.functions.api.examples.ExclamationFunction"); -UpdateOptions updateOptions = new UpdateOptions(); -updateOptions.setUpdateAuthData(updateAuthData); -admin.functions().updateFunction(functionConfig, userCodeFile, updateOptions); - -``` - - - - -```` - -### Start an instance of a function - -You can start a stopped function instance with `instance-id` using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`start`](reference-pulsar-admin.md#functions-start) subcommand. - -```shell - -$ pulsar-admin functions start \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/start|operation/startFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().startFunction(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Start all instances of a function - -You can start all stopped function instances using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`start`](reference-pulsar-admin.md#functions-start) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions start \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/start|operation/startFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().startFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### Stop an instance of a function - -You can stop a function instance with `instance-id` using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`stop`](reference-pulsar-admin.md#functions-stop) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions stop \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/stop|operation/stopFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().stopFunction(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Stop all instances of a function - -You can stop all function instances using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`stop`](reference-pulsar-admin.md#functions-stop) subcommand. 
- -**Example** - -```shell - -$ pulsar-admin functions stop \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/stop|operation/stopFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().stopFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### Restart an instance of a function - -Restart a function instance with `instance-id` using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`restart`](reference-pulsar-admin.md#functions-restart) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions restart \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/restart|operation/restartFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().restartFunction(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Restart all instances of a function - -You can restart all function instances using Admin CLI, REST API or Java admin API. - -````mdx-code-block - - - -Use the [`restart`](reference-pulsar-admin.md#functions-restart) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions restart \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/restart|operation/restartFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().restartFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### List all functions - -You can list all Pulsar functions running under a specific tenant and namespace using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`list`](reference-pulsar-admin.md#functions-list) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions list \ - --tenant public \ - --namespace default - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace|operation/listFunctions?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctions(tenant, namespace); - -``` - - - - -```` - -### Delete a function - -You can delete a Pulsar function that is running on a Pulsar cluster using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`delete`](reference-pulsar-admin.md#functions-delete) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions delete \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) - -``` - - - - -{@inject: endpoint|DELETE|/admin/v3/functions/:tenant/:namespace/:functionName|operation/deregisterFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().deleteFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### Get info about a function - -You can get information about a Pulsar function currently running in cluster mode using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`get`](reference-pulsar-admin.md#functions-get) subcommand. 
- -**Example** - -```shell - -$ pulsar-admin functions get \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName|operation/getFunctionInfo?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### Get status of an instance of a function - -You can get the current status of a Pulsar function instance with `instance-id` using Admin CLI, REST API or Java Admin API. -````mdx-code-block - - - -Use the [`status`](reference-pulsar-admin.md#functions-status) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions status \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/status|operation/getFunctionInstanceStatus?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionStatus(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Get status of all instances of a function - -You can get the current status of a Pulsar function instance using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`status`](reference-pulsar-admin.md#functions-status) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions status \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/status|operation/getFunctionStatus?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionStatus(tenant, namespace, functionName); - -``` - - - - -```` - -### Get stats of an instance of a function - -You can get the current stats of a Pulsar Function instance with `instance-id` using Admin CLI, REST API or Java admin API. -````mdx-code-block - - - -Use the [`stats`](reference-pulsar-admin.md#functions-stats) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions stats \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/stats|operation/getFunctionInstanceStats?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionStats(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Get stats of all instances of a function - -You can get the current stats of a Pulsar function using Admin CLI, REST API or Java admin API. - -````mdx-code-block - - - -Use the [`stats`](reference-pulsar-admin.md#functions-stats) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions stats \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/stats|operation/getFunctionStats?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionStats(tenant, namespace, functionName); - -``` - - - - -```` - -### Trigger a function - -You can trigger a specified Pulsar function with a supplied value using Admin CLI, REST API or Java admin API. - -````mdx-code-block - - - -Use the [`trigger`](reference-pulsar-admin.md#functions-trigger) subcommand. 
**Example**

```shell

$ pulsar-admin functions trigger \
  --tenant public \
  --namespace default \
  --name (the name of Pulsar Functions) \
  --topic (the name of input topic) \
  --trigger-value "hello pulsar"
  # or --trigger-file (the path of trigger file)

```

{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/trigger|operation/triggerFunction?version=@pulsar:version_number@}

```java

admin.functions().triggerFunction(tenant, namespace, functionName, topic, triggerValue, triggerFile);

```

````

### Put state associated with a function

You can put the state associated with a Pulsar function using Admin CLI, REST API or Java admin API.

````mdx-code-block

Use the [`putstate`](reference-pulsar-admin.md#functions-putstate) subcommand.

**Example**

```shell

$ pulsar-admin functions putstate \
  --tenant public \
  --namespace default \
  --name (the name of Pulsar Functions) \
  --state "{\"key\":\"pulsar\", \"stringValue\":\"hello pulsar\"}"

```

{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/state/:key|operation/putFunctionState?version=@pulsar:version_number@}

```java

TypeReference<FunctionState> typeRef = new TypeReference<FunctionState>() {};
FunctionState stateRepr = ObjectMapperFactory.getThreadLocal().readValue(state, typeRef);
admin.functions().putFunctionState(tenant, namespace, functionName, stateRepr);

```

````

### Fetch state associated with a function

You can fetch the current state associated with a Pulsar function using Admin CLI, REST API or Java admin API.

````mdx-code-block

Use the [`querystate`](reference-pulsar-admin.md#functions-querystate) subcommand.

**Example**

```shell

$ pulsar-admin functions querystate \
  --tenant public \
  --namespace default \
  --name (the name of Pulsar Functions) \
  --key (the key of state)

```

{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/state/:key|operation/getFunctionState?version=@pulsar:version_number@}

```java

admin.functions().getFunctionState(tenant, namespace, functionName, key);

```

````
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-namespaces.md b/site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-namespaces.md
deleted file mode 100644
index fa6d9efe251ab0..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-namespaces.md
+++ /dev/null
@@ -1,1267 +0,0 @@
---
id: admin-api-namespaces
title: Managing Namespaces
sidebar_label: "Namespaces"
original_id: admin-api-namespaces
---

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````

> **Important**
>
> This page only shows **some frequently used operations**.
>
> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/).
>
> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc.
>
> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/).
Pulsar [namespaces](reference-terminology.md#namespace) are logical groupings of [topics](reference-terminology.md#topic).

Namespaces can be managed via:

* The `namespaces` command of the [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/) tool
* The `/admin/v2/namespaces` endpoint of the admin {@inject: rest:REST:/} API
* The `namespaces` method of the `PulsarAdmin` object in the [Java API](client-libraries-java.md)

## Namespaces resources

### Create namespaces

You can create new namespaces under a given [tenant](reference-terminology.md#tenant).

````mdx-code-block

Use the [`create`](reference-pulsar-admin.md#namespaces-create) subcommand and specify the namespace by name:

```shell

$ pulsar-admin namespaces create test-tenant/test-namespace

```

{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace|operation/createNamespace?version=@pulsar:version_number@}

```java

admin.namespaces().createNamespace(namespace);

```

````

### Get policies

You can fetch the current policies associated with a namespace at any time.

````mdx-code-block

Use the [`policies`](reference-pulsar-admin.md#namespaces-policies) subcommand and specify the namespace:

```shell

$ pulsar-admin namespaces policies test-tenant/test-namespace
{
  "auth_policies": {
    "namespace_auth": {},
    "destination_auth": {}
  },
  "replication_clusters": [],
  "bundles_activated": true,
  "bundles": {
    "boundaries": [
      "0x00000000",
      "0xffffffff"
    ],
    "numBundles": 1
  },
  "backlog_quota_map": {},
  "persistence": null,
  "latency_stats_sample_rate": {},
  "message_ttl_in_seconds": 0,
  "retention_policies": null,
  "deleted": false
}

```

{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace|operation/getPolicies?version=@pulsar:version_number@}

```java

admin.namespaces().getPolicies(namespace);

```

````

### List namespaces

You can list all namespaces within a given Pulsar [tenant](reference-terminology.md#tenant).

````mdx-code-block

Use the [`list`](reference-pulsar-admin.md#namespaces-list) subcommand and specify the tenant:

```shell

$ pulsar-admin namespaces list test-tenant
test-tenant/ns1
test-tenant/ns2

```

{@inject: endpoint|GET|/admin/v2/namespaces/:tenant|operation/getTenantNamespaces?version=@pulsar:version_number@}

```java

admin.namespaces().getNamespaces(tenant);

```

````

### Delete namespaces

You can delete existing namespaces from a tenant.

````mdx-code-block

Use the [`delete`](reference-pulsar-admin.md#namespaces-delete) subcommand and specify the namespace:

```shell

$ pulsar-admin namespaces delete test-tenant/ns1

```

{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace|operation/deleteNamespace?version=@pulsar:version_number@}

```java

admin.namespaces().deleteNamespace(namespace);

```

````

### Configure replication clusters

#### Set replication cluster

You can set replication clusters for a namespace to enable Pulsar to internally replicate the published messages from one colocation facility to another.
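The Java call in the tabbed example below is a one-liner against an existing admin client. As a minimal, self-contained sketch, assuming a broker admin endpoint at `http://localhost:8080` and a cluster named `cl1` (both placeholders; the clusters must already be created and allowed for the tenant):

```java

import java.util.Set;

import org.apache.pulsar.client.admin.PulsarAdmin;

public class SetReplicationClustersExample {
    public static void main(String[] args) throws Exception {
        // Connect to a broker's admin endpoint (placeholder URL).
        try (PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080")
                .build()) {
            // Messages published to test-tenant/ns1 will be replicated to cl1.
            admin.namespaces().setNamespaceReplicationClusters(
                    "test-tenant/ns1", Set.of("cl1"));
        }
    }
}

```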
````mdx-code-block

```

$ pulsar-admin namespaces set-clusters test-tenant/ns1 \
  --clusters cl1

```

{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/replication|operation/setNamespaceReplicationClusters?version=@pulsar:version_number@}

```java

admin.namespaces().setNamespaceReplicationClusters(namespace, clusters);

```

````

#### Get replication cluster

You can get the list of replication clusters for a given namespace.

````mdx-code-block

```

$ pulsar-admin namespaces get-clusters test-tenant/cl1/ns1

```

```

cl2

```

{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/replication|operation/getNamespaceReplicationClusters?version=@pulsar:version_number@}

```java

admin.namespaces().getNamespaceReplicationClusters(namespace)

```

````

### Configure backlog quota policies

#### Set backlog quota policies

A backlog quota helps the broker restrict the bandwidth/storage of a namespace once it reaches a certain threshold limit. The admin can set the limit and the corresponding action to take when the limit is reached:

  1. producer_request_hold: the broker holds the producer's send request and does not persist its payload
  2. producer_exception: the broker disconnects the client with an exception
  3. consumer_backlog_eviction: the broker starts discarding backlog messages

The backlog quota restriction is applied by setting the backlog-quota-type to destination_storage.

````mdx-code-block

```

$ pulsar-admin namespaces set-backlog-quota --limit 10G --limitTime 36000 --policy producer_request_hold test-tenant/ns1

```

{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/setBacklogQuota?version=@pulsar:version_number@}

```java

admin.namespaces().setBacklogQuota(namespace, new BacklogQuota(limit, limitTime, policy))

```

````

#### Get backlog quota policies

You can get a configured backlog quota for a given namespace.

````mdx-code-block

```

$ pulsar-admin namespaces get-backlog-quotas test-tenant/ns1

```

```json

{
  "destination_storage": {
    "limit": 10,
    "policy": "producer_request_hold"
  }
}

```

{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/backlogQuotaMap|operation/getBacklogQuotaMap?version=@pulsar:version_number@}

```java

admin.namespaces().getBacklogQuotaMap(namespace);

```

````

#### Remove backlog quota policies

You can remove backlog quota policies for a given namespace.

````mdx-code-block

```

$ pulsar-admin namespaces remove-backlog-quota test-tenant/ns1

```

{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/removeBacklogQuota?version=@pulsar:version_number@}

```java

admin.namespaces().removeBacklogQuota(namespace, backlogQuotaType)

```

````
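Putting the Java pieces above together, a minimal end-to-end sketch might look like the following. It mirrors the CLI example (a 10G size limit, a 36000-second time limit, and the `producer_request_hold` policy) and assumes the three-argument `BacklogQuota` constructor shown in the tabbed example; the service URL is a placeholder.

```java

import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.common.policies.data.BacklogQuota;

public class SetBacklogQuotaExample {
    public static void main(String[] args) throws Exception {
        try (PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080") // placeholder URL
                .build()) {
            long limitSize = 10L * 1024 * 1024 * 1024; // 10G storage limit, in bytes
            int limitTime = 36000;                     // time limit, in seconds
            // Hold (and do not persist) producer requests once the quota is reached.
            admin.namespaces().setBacklogQuota("test-tenant/ns1",
                    new BacklogQuota(limitSize, limitTime,
                            BacklogQuota.RetentionPolicy.producer_request_hold));
        }
    }
}

```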
### Configure persistence policies

#### Set persistence policies

Persistence policies allow users to configure the persistence level for all topic messages under a given namespace:

  - Bookkeeper-ack-quorum: Number of acks (guaranteed copies) to wait for each entry, default: 0

  - Bookkeeper-ensemble: Number of bookies to use for a topic, default: 0

  - Bookkeeper-write-quorum: How many writes to make for each entry, default: 0

  - Ml-mark-delete-max-rate: Throttling rate of the mark-delete operation (0 means no throttle), default: 0.0

````mdx-code-block

```

$ pulsar-admin namespaces set-persistence --bookkeeper-ack-quorum 2 --bookkeeper-ensemble 3 --bookkeeper-write-quorum 2 --ml-mark-delete-max-rate 0 test-tenant/ns1

```

{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/setPersistence?version=@pulsar:version_number@}

```java

admin.namespaces().setPersistence(namespace, new PersistencePolicies(bookkeeperEnsemble, bookkeeperWriteQuorum, bookkeeperAckQuorum, managedLedgerMaxMarkDeleteRate))

```

````

#### Get persistence policies

You can get the configured persistence policies of a given namespace.

````mdx-code-block

```

$ pulsar-admin namespaces get-persistence test-tenant/ns1

```

```json

{
  "bookkeeperEnsemble": 3,
  "bookkeeperWriteQuorum": 2,
  "bookkeeperAckQuorum": 2,
  "managedLedgerMaxMarkDeleteRate": 0
}

```

{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/getPersistence?version=@pulsar:version_number@}

```java

admin.namespaces().getPersistence(namespace)

```

````
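As a concrete sketch of the call above, the following mirrors the CLI example (`--bookkeeper-ensemble 3 --bookkeeper-write-quorum 2 --bookkeeper-ack-quorum 2 --ml-mark-delete-max-rate 0`); note that the constructor takes the ensemble first, then the write quorum, the ack quorum, and the mark-delete rate. The admin client setup is a placeholder.

```java

import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.common.policies.data.PersistencePolicies;

public class SetPersistenceExample {
    public static void main(String[] args) throws Exception {
        try (PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080") // placeholder URL
                .build()) {
            // Write each entry to 2 of 3 bookies, wait for 2 acks,
            // and do not throttle mark-delete operations.
            admin.namespaces().setPersistence("test-tenant/ns1",
                    new PersistencePolicies(3, 2, 2, 0));
            // Read the policies back to confirm the change.
            System.out.println(admin.namespaces().getPersistence("test-tenant/ns1"));
        }
    }
}

```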
### Configure namespace bundles

#### Unload namespace bundles

A namespace bundle is a virtual group of topics that belong to the same namespace. If a broker gets overloaded with the number of bundles, this command can help unload a bundle from that broker, so it can be served by some other less-loaded broker. The namespace bundle ID ranges from 0x00000000 to 0xffffffff.

````mdx-code-block

```

$ pulsar-admin namespaces unload --bundle 0x00000000_0xffffffff test-tenant/ns1

```

{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace/:bundle/unload|operation/unloadNamespaceBundle?version=@pulsar:version_number@}

```java

admin.namespaces().unloadNamespaceBundle(namespace, bundle)

```

````

#### Split namespace bundles

One namespace bundle can contain multiple topics but can be served by only one broker. If a single bundle is creating an excessive load on a broker, an admin can split the bundle using the command below, permitting one or more of the new bundles to be unloaded, thus balancing the load across the brokers.

````mdx-code-block

```

$ pulsar-admin namespaces split-bundle --bundle 0x00000000_0xffffffff test-tenant/ns1

```

{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace/:bundle/split|operation/splitNamespaceBundle?version=@pulsar:version_number@}

```java

admin.namespaces().splitNamespaceBundle(namespace, bundle)

```

````

### Configure message TTL

#### Set message-ttl

You can configure the time to live (in seconds) duration for messages. In the example below, the message-ttl is set to 100 seconds.

````mdx-code-block

```

$ pulsar-admin namespaces set-message-ttl --messageTTL 100 test-tenant/ns1

```

{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/setNamespaceMessageTTL?version=@pulsar:version_number@}

```java

admin.namespaces().setNamespaceMessageTTL(namespace, messageTTL)

```

````

#### Get message-ttl

When the message-ttl for a namespace is set, you can use the command below to get the configured value. This example continues the example of the command `set message-ttl`, so the returned value is 100 (seconds).

````mdx-code-block

```

$ pulsar-admin namespaces get-message-ttl test-tenant/ns1

```

```

100

```

{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/getNamespaceMessageTTL?version=@pulsar:version_number@}

```java

admin.namespaces().getNamespaceMessageTTL(namespace)

```

```

100

```

````

#### Remove message-ttl

Remove the message TTL of a configured namespace.

````mdx-code-block

```

$ pulsar-admin namespaces remove-message-ttl test-tenant/ns1

```

{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/removeNamespaceMessageTTL?version=@pulsar:version_number@}

```java

admin.namespaces().removeNamespaceMessageTTL(namespace)

```

````

### Clear backlog

#### Clear namespace backlog

This command clears the message backlog for all the topics that belong to a specific namespace. You can also clear the backlog for a specific subscription.

````mdx-code-block

```

$ pulsar-admin namespaces clear-backlog --sub my-subscription test-tenant/ns1

```

{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/clearBacklog|operation/clearNamespaceBacklogForSubscription?version=@pulsar:version_number@}

```java

admin.namespaces().clearNamespaceBacklogForSubscription(namespace, subscription)

```

````

#### Clear bundle backlog

This command clears the message backlog for all the topics that belong to a specific NamespaceBundle. You can also clear the backlog for a specific subscription.

````mdx-code-block

```

$ pulsar-admin namespaces clear-backlog --bundle 0x00000000_0xffffffff --sub my-subscription test-tenant/ns1

```

{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/:bundle/clearBacklog|operation/clearNamespaceBundleBacklogForSubscription?version=@pulsar:version_number@}

```java

admin.namespaces().clearNamespaceBundleBacklogForSubscription(namespace, bundle, subscription)

```

````

### Configure retention

#### Set retention

Each namespace contains multiple topics, and the retention size (storage size) of each topic should not exceed a specific threshold, or it should be stored for a certain period. This command helps configure the retention size and time of topics in a given namespace.

````mdx-code-block

```

$ pulsar-admin namespaces set-retention --size 100 --time 10 test-tenant/ns1

```

{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/retention|operation/setRetention?version=@pulsar:version_number@}

```java

admin.namespaces().setRetention(namespace, new RetentionPolicies(retentionTimeInMin, retentionSizeInMB))

```

````
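Since the CLI flags and the Java constructor take time and size in different orders, here is a short sketch mirroring `--size 100 --time 10` from the example above; the admin client setup is a placeholder.

```java

import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.common.policies.data.RetentionPolicies;

public class SetRetentionExample {
    public static void main(String[] args) throws Exception {
        try (PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080") // placeholder URL
                .build()) {
            // Constructor order: time first (in minutes), then size (in MB).
            admin.namespaces().setRetention("test-tenant/ns1",
                    new RetentionPolicies(10, 100));
        }
    }
}

```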
#### Get retention

It shows the retention information of a given namespace.

````mdx-code-block

```

$ pulsar-admin namespaces get-retention test-tenant/ns1

```

```json

{
  "retentionTimeInMinutes": 10,
  "retentionSizeInMB": 100
}

```

{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/retention|operation/getRetention?version=@pulsar:version_number@}

```java

admin.namespaces().getRetention(namespace)

```

````

### Configure dispatch throttling for topics

#### Set dispatch throttling for topics

It sets the message dispatch rate for all the topics under a given namespace.
The dispatch rate can be restricted by the number of messages per X seconds (`msg-dispatch-rate`) or by the number of message-bytes per X seconds (`byte-dispatch-rate`).
The rate period is expressed in seconds and can be configured with `dispatch-rate-period`. The default value of both `msg-dispatch-rate` and `byte-dispatch-rate` is -1, which disables the throttling.

:::note

- If neither `clusterDispatchRate` nor `topicDispatchRate` is configured, dispatch throttling is disabled.
- If `topicDispatchRate` is not configured, `clusterDispatchRate` takes effect.
- If `topicDispatchRate` is configured, `topicDispatchRate` takes effect.

:::

````mdx-code-block

```

$ pulsar-admin namespaces set-dispatch-rate test-tenant/ns1 \
  --msg-dispatch-rate 1000 \
  --byte-dispatch-rate 1048576 \
  --dispatch-rate-period 1

```

{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/dispatchRate|operation/setDispatchRate?version=@pulsar:version_number@}

```java

admin.namespaces().setDispatchRate(namespace, new DispatchRate(1000, 1048576, 1))

```

````

#### Get configured message-rate for topics

It shows the configured message-rate for the namespace (topics under this namespace can dispatch this many messages per second).

````mdx-code-block

```

$ pulsar-admin namespaces get-dispatch-rate test-tenant/ns1

```

```json

{
  "dispatchThrottlingRatePerTopicInMsg" : 1000,
  "dispatchThrottlingRatePerTopicInByte" : 1048576,
  "ratePeriodInSecond" : 1
}

```

{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/dispatchRate|operation/getDispatchRate?version=@pulsar:version_number@}

```java

admin.namespaces().getDispatchRate(namespace)

```

````
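To make the units in the example above concrete, the following sketch sets the same limits and spells out that `1048576` bytes is 1 MiB per rate period. It assumes the three-argument `DispatchRate` constructor used in the tabbed example; the admin client setup is a placeholder.

```java

import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.common.policies.data.DispatchRate;

public class SetDispatchRateExample {
    public static void main(String[] args) throws Exception {
        try (PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080") // placeholder URL
                .build()) {
            int msgRate = 1000;          // at most 1000 messages ...
            long byteRate = 1024 * 1024; // ... or 1 MiB (1048576 bytes) ...
            int ratePeriod = 1;          // ... per 1-second period
            admin.namespaces().setDispatchRate("test-tenant/ns1",
                    new DispatchRate(msgRate, byteRate, ratePeriod));
        }
    }
}

```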
### Configure dispatch throttling for subscription

#### Set dispatch throttling for subscription

It sets the message dispatch rate for all the subscriptions of topics under a given namespace.
The dispatch rate can be restricted by the number of messages per X seconds (`msg-dispatch-rate`) or by the number of message-bytes per X seconds (`byte-dispatch-rate`).
The rate period is expressed in seconds and can be configured with `dispatch-rate-period`. The default value of both `msg-dispatch-rate` and `byte-dispatch-rate` is -1, which disables the throttling.

````mdx-code-block

```

$ pulsar-admin namespaces set-subscription-dispatch-rate test-tenant/ns1 \
  --msg-dispatch-rate 1000 \
  --byte-dispatch-rate 1048576 \
  --dispatch-rate-period 1

```

{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/subscriptionDispatchRate|operation/setDispatchRate?version=@pulsar:version_number@}

```java

admin.namespaces().setSubscriptionDispatchRate(namespace, new DispatchRate(1000, 1048576, 1))

```

````

#### Get configured message-rate for subscription

It shows the configured message-rate for the namespace (topics under this namespace can dispatch this many messages per second).

````mdx-code-block

```

$ pulsar-admin namespaces get-subscription-dispatch-rate test-tenant/ns1

```

```json

{
  "dispatchThrottlingRatePerTopicInMsg" : 1000,
  "dispatchThrottlingRatePerTopicInByte" : 1048576,
  "ratePeriodInSecond" : 1
}

```

{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/subscriptionDispatchRate|operation/getDispatchRate?version=@pulsar:version_number@}

```java

admin.namespaces().getSubscriptionDispatchRate(namespace)

```

````

### Configure dispatch throttling for replicator

#### Set dispatch throttling for replicator

It sets the message dispatch rate for all the replicators between replication clusters under a given namespace.
The dispatch rate can be restricted by the number of messages per X seconds (`msg-dispatch-rate`) or by the number of message-bytes per X seconds (`byte-dispatch-rate`).
The rate period is expressed in seconds and can be configured with `dispatch-rate-period`. The default value of both `msg-dispatch-rate` and `byte-dispatch-rate` is -1, which disables the throttling.

````mdx-code-block

```

$ pulsar-admin namespaces set-replicator-dispatch-rate test-tenant/ns1 \
  --msg-dispatch-rate 1000 \
  --byte-dispatch-rate 1048576 \
  --dispatch-rate-period 1

```

{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/replicatorDispatchRate|operation/setDispatchRate?version=@pulsar:version_number@}

```java

admin.namespaces().setReplicatorDispatchRate(namespace, new DispatchRate(1000, 1048576, 1))

```

````

#### Get configured message-rate for replicator

It shows the configured message-rate for the namespace (topics under this namespace can dispatch this many messages per second).

````mdx-code-block

```

$ pulsar-admin namespaces get-replicator-dispatch-rate test-tenant/ns1

```

```json

{
  "dispatchThrottlingRatePerTopicInMsg" : 1000,
  "dispatchThrottlingRatePerTopicInByte" : 1048576,
  "ratePeriodInSecond" : 1
}

```

{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/replicatorDispatchRate|operation/getDispatchRate?version=@pulsar:version_number@}

```java

admin.namespaces().getReplicatorDispatchRate(namespace)

```

````

### Configure deduplication snapshot interval

#### Get deduplication snapshot interval

It shows the configured `deduplicationSnapshotInterval` for a namespace (each topic under the namespace will take a deduplication snapshot according to this interval).

````mdx-code-block

```

$ pulsar-admin namespaces get-deduplication-snapshot-interval test-tenant/ns1

```

{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/deduplicationSnapshotInterval|operation/getDeduplicationSnapshotInterval?version=@pulsar:version_number@}

```java

admin.namespaces().getDeduplicationSnapshotInterval(namespace)

```

````

#### Set deduplication snapshot interval

Set configured `deduplicationSnapshotInterval` for a namespace. Each topic under the namespace will take a deduplication snapshot according to this interval.
`brokerDeduplicationEnabled` must be set to `true` for this property to take effect.

````mdx-code-block

```

$ pulsar-admin namespaces set-deduplication-snapshot-interval test-tenant/ns1 --interval 1000

```

{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/deduplicationSnapshotInterval|operation/setDeduplicationSnapshotInterval?version=@pulsar:version_number@}

```java

admin.namespaces().setDeduplicationSnapshotInterval(namespace, 1000)

```

````

#### Remove deduplication snapshot interval

Remove configured `deduplicationSnapshotInterval` of a namespace (each topic under the namespace will take a deduplication snapshot according to this interval).

````mdx-code-block

```

$ pulsar-admin namespaces remove-deduplication-snapshot-interval test-tenant/ns1

```

{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/deduplicationSnapshotInterval|operation/deleteDeduplicationSnapshotInterval?version=@pulsar:version_number@}

```java

admin.namespaces().removeDeduplicationSnapshotInterval(namespace)

```

````

### Namespace isolation

You can use the [Pulsar isolation policy](administration-isolation.md) to allocate resources (broker and bookie) for a namespace.

### Unload namespaces from a broker

You can unload a namespace, or a [namespace bundle](reference-terminology.md#namespace-bundle), from the Pulsar [broker](reference-terminology.md#broker) that is currently responsible for it.

#### pulsar-admin

Use the [`unload`](reference-pulsar-admin.md#unload) subcommand of the [`namespaces`](reference-pulsar-admin.md#namespaces) command.

````mdx-code-block

```shell

$ pulsar-admin namespaces unload my-tenant/my-ns

```

{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace/unload|operation/unloadNamespace?version=@pulsar:version_number@}

```java

admin.namespaces().unload(namespace)

```

````

diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-non-partitioned-topics.md b/site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-non-partitioned-topics.md
deleted file mode 100644
index e6347bb8c363a1..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-non-partitioned-topics.md
+++ /dev/null
@@ -1,8 +0,0 @@
---
id: admin-api-non-partitioned-topics
title: Managing non-partitioned topics
sidebar_label: "Non-partitioned topics"
original_id: admin-api-non-partitioned-topics
---

For details of the content, refer to [manage topics](admin-api-topics.md).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-non-persistent-topics.md b/site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-non-persistent-topics.md
deleted file mode 100644
index 3126a6494c7153..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-non-persistent-topics.md
+++ /dev/null
@@ -1,8 +0,0 @@
---
id: admin-api-non-persistent-topics
title: Managing non-persistent topics
sidebar_label: "Non-Persistent topics"
original_id: admin-api-non-persistent-topics
---

For details of the content, refer to [manage topics](admin-api-topics.md).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-overview.md b/site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-overview.md
deleted file mode 100644
index 1154c625aff7b5..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-overview.md
+++ /dev/null
@@ -1,144 +0,0 @@
---
id: admin-api-overview
title: Pulsar admin interface
sidebar_label: "Overview"
original_id: admin-api-overview
---

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````

The Pulsar admin interface enables you to manage all important entities in a Pulsar instance, such as tenants, topics, and namespaces.

You can interact with the admin interface via:

- The `pulsar-admin` CLI tool, which is available in the `bin` folder of your Pulsar installation:

  ```shell

  bin/pulsar-admin

  ```

  > **Important**
  >
  > For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/).

- HTTP calls, which are made against the admin {@inject: rest:REST:/} API provided by Pulsar brokers. Some RESTful API calls might be redirected to the owner brokers for serving with [`307 Temporary Redirect`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/307), so HTTP callers should handle `307 Temporary Redirect`. If you use `curl` commands, you should specify `-L` to handle redirections.

  > **Important**
  >
  > For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc.

- A Java client interface.

  > **Important**
  >
  > For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/).

> **The REST API is the admin interface**. Both the `pulsar-admin` CLI tool and the Java client use the REST API. If you implement your own admin interface client, you should use the REST API.

## Admin setup

Each of the three admin interfaces (the `pulsar-admin` CLI tool, the {@inject: rest:REST:/} API, and the [Java admin API](/api/admin)) requires some special setup if you have enabled authentication in your Pulsar instance.

````mdx-code-block

If you have enabled authentication, you need to provide an auth configuration to use the `pulsar-admin` tool. By default, the configuration for the `pulsar-admin` tool is in the [`conf/client.conf`](reference-configuration.md#client) file.
The following are the available parameters:

|Name|Description|Default|
|----|-----------|-------|
|webServiceUrl|The web URL for the cluster.|http://localhost:8080/|
|brokerServiceUrl|The Pulsar protocol URL for the cluster.|pulsar://localhost:6650/|
|authPlugin|The authentication plugin.| |
|authParams|The authentication parameters for the cluster, as a comma-separated string.| |
|useTls|Whether or not TLS authentication will be enforced in the cluster.|false|
|tlsAllowInsecureConnection|Accept untrusted TLS certificate from client.|false|
|tlsTrustCertsFilePath|Path for the trusted TLS certificate file.| |

You can find details for the REST API exposed by Pulsar brokers in this {@inject: rest:document:/}.

To use the Java admin API, instantiate a {@inject: javadoc:PulsarAdmin:/admin/org/apache/pulsar/client/admin/PulsarAdmin} object, and specify a URL for a Pulsar broker and a {@inject: javadoc:PulsarAdminBuilder:/admin/org/apache/pulsar/client/admin/PulsarAdminBuilder}. The following is a minimal example using `localhost`:

```java

String url = "http://localhost:8080";
// Pass auth-plugin class fully-qualified name if Pulsar-security enabled
String authPluginClassName = "com.org.MyAuthPluginClass";
// Pass auth-param if auth-plugin class requires it
String authParams = "param1=value1";
boolean tlsAllowInsecureConnection = false;
String tlsTrustCertsFilePath = null;
PulsarAdmin admin = PulsarAdmin.builder()
    .authentication(authPluginClassName, authParams)
    .serviceHttpUrl(url)
    .tlsTrustCertsFilePath(tlsTrustCertsFilePath)
    .allowTlsInsecureConnection(tlsAllowInsecureConnection)
    .build();

```

If you use multiple brokers, you can specify a multi-host URL, just as you can for the Pulsar service URL. For example,

```java

String url = "http://localhost:8080,localhost:8081,localhost:8082";
// Pass auth-plugin class fully-qualified name if Pulsar-security enabled
String authPluginClassName = "com.org.MyAuthPluginClass";
// Pass auth-param if auth-plugin class requires it
String authParams = "param1=value1";
boolean tlsAllowInsecureConnection = false;
String tlsTrustCertsFilePath = null;
PulsarAdmin admin = PulsarAdmin.builder()
    .authentication(authPluginClassName, authParams)
    .serviceHttpUrl(url)
    .tlsTrustCertsFilePath(tlsTrustCertsFilePath)
    .allowTlsInsecureConnection(tlsAllowInsecureConnection)
    .build();

```

````

## How to define Pulsar resource names when running Pulsar in Kubernetes

If you run Pulsar Functions or connectors on Kubernetes, you need to follow the Kubernetes naming convention to define the names of your Pulsar resources, whichever admin interface you use.

Kubernetes requires a name that can be used as a DNS subdomain name as defined in [RFC 1123](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names). Pulsar supports more legal characters than the Kubernetes naming convention. If you create a Pulsar resource name with special characters that are not supported by Kubernetes (for example, including colons in a Pulsar namespace name), the Kubernetes runtime translates the Pulsar object names into Kubernetes resource labels which are in RFC 1123-compliant forms. Consequently, you can run functions or connectors using the Kubernetes runtime.
The rules for translating Pulsar object names into Kubernetes resource labels are as below:

- Truncate to 63 characters

- Replace the following characters with dashes (-):

  - Non-alphanumeric characters

  - Underscores (_)

  - Dots (.)

- Replace beginning and ending non-alphanumeric characters with 0

:::tip

- If you get an error in translating Pulsar object names into Kubernetes resource labels (for example, you may have a naming collision if your Pulsar object name is too long) or want to customize the translating rules, see [customize Kubernetes runtime](https://pulsar.apache.org/docs/en/next/functions-runtime/#customize-kubernetes-runtime).
- For how to configure Kubernetes runtime, see [here](https://pulsar.apache.org/docs/en/next/functions-runtime/#configure-kubernetes-runtime).

:::

diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-packages.md b/site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-packages.md
deleted file mode 100644
index 2852fb74a02be3..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-packages.md
+++ /dev/null
@@ -1,391 +0,0 @@
---
id: admin-api-packages
title: Manage packages
sidebar_label: "Packages"
original_id: admin-api-packages
---

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````

> **Important**
>
> This page only shows **some frequently used operations**.
>
> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/)
>
> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc.
>
> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/).

Package management enables version management and simplifies the upgrade and rollback processes for Functions, Sinks, and Sources. When you use the same function, sink, and source in different namespaces, you can upload them to a common package management system.

## Package name

A `package` is identified by five parts: `type`, `tenant`, `namespace`, `package name`, and `version`.

| Part | Description |
|-------|-------------|
|`type` |The type of the package. The following types are supported: `function`, `sink` and `source`. |
| `name`|The fully qualified name of the package: `<tenant>/<namespace>/<package name>`.|
|`version`|The version of the package.|

The following is a code sample.

```java

class PackageName {
    private final PackageType type;
    private final String namespace;
    private final String tenant;
    private final String name;
    private final String version;
}

enum PackageType {
    FUNCTION("function"), SINK("sink"), SOURCE("source");
}

```

## Package URL

A package is located using a URL. The package URL is written in the following format:

```shell

<type>://<tenant>/<namespace>/<package name>@<version>

```

The following are package URL examples:

`sink://public/default/mysql-sink@1.0`
`function://my-tenant/my-ns/my-function@0.1`
`source://my-tenant/my-ns/mysql-cdc-source@2.3`
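To make the URL format concrete, here is a small, purely illustrative parsing sketch (the `PackageUrlParts` helper below is hypothetical and not part of the Pulsar API):

```java

public class PackageUrlParts {
    public static void main(String[] args) {
        String url = "sink://public/default/mysql-sink@1.0";

        // <type>://<tenant>/<namespace>/<package name>@<version>
        String type = url.substring(0, url.indexOf("://"));
        String rest = url.substring(url.indexOf("://") + 3);
        String[] path = rest.split("/", 3); // tenant / namespace / name@version
        String name = path[2].substring(0, path[2].indexOf('@'));
        String version = path[2].substring(path[2].indexOf('@') + 1);

        // Prints: type=sink tenant=public namespace=default name=mysql-sink version=1.0
        System.out.printf("type=%s tenant=%s namespace=%s name=%s version=%s%n",
                type, path[0], path[1], name, version);
    }
}

```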
The package management system stores the data, versions and metadata of each package. The metadata is shown in the following table.

| Metadata | Description |
|----------|-------------|
|description|The description of the package.|
|contact |The contact information of a package. For example, team email.|
|create_time| The time when the package is created.|
|modification_time| The time when the package is modified.|
|properties |A key/value map that stores your own information.|

## Permissions

The packages are organized by the tenant and namespace, so you can apply the tenant and namespace permissions to packages directly.

## Package resources

You can use the package management system with command-line tools, the REST API, and the Java client.

### Upload a package

You can upload a package to the package management service in the following ways.

````mdx-code-block

```shell

bin/pulsar-admin packages upload functions://public/default/example@v0.1 --path package-file --description package-description

```

{@inject: endpoint|POST|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version|operation/upload?version=@pulsar:version_number@}

Upload a package to the package management service synchronously.

```java

void upload(PackageMetadata metadata, String packageName, String path) throws PulsarAdminException;

```

Upload a package to the package management service asynchronously.

```java

CompletableFuture<Void> uploadAsync(PackageMetadata metadata, String packageName, String path);

```

````

### Download a package

You can download a package from the package management service in the following ways.

````mdx-code-block

```shell

bin/pulsar-admin packages download functions://public/default/example@v0.1 --path package-file

```

{@inject: endpoint|GET|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version|operation/download?version=@pulsar:version_number@}

Download a package from the package management service synchronously.

```java

void download(String packageName, String path) throws PulsarAdminException;

```

Download a package from the package management service asynchronously.

```java

CompletableFuture<Void> downloadAsync(String packageName, String path);

```

````
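As a rough end-to-end sketch of the two operations above through `admin.packages()` (the `PackageMetadata` import path and builder are assumptions, and the service URL and file paths are placeholders):

```java

import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.packages.management.core.common.PackageMetadata;

public class UploadDownloadExample {
    public static void main(String[] args) throws Exception {
        try (PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080") // placeholder URL
                .build()) {
            // Describe the package before uploading it.
            PackageMetadata metadata = PackageMetadata.builder()
                    .description("example function package")
                    .build();

            // Upload a local file as version v0.1 of the package.
            admin.packages().upload(metadata,
                    "function://public/default/example@v0.1",
                    "/tmp/example-function.jar");

            // Download the same version to another location.
            admin.packages().download(
                    "function://public/default/example@v0.1",
                    "/tmp/example-function-copy.jar");
        }
    }
}

```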
### List all versions of a package

You can get a list of all versions of a package in the following ways.

````mdx-code-block

```shell

bin/pulsar-admin packages list-versions function://public/default/example

```

{@inject: endpoint|GET|/admin/v3/packages/:type/:tenant/:namespace/:packageName|operation/listPackageVersion?version=@pulsar:version_number@}

List all versions of a package synchronously.

```java

List<String> listPackageVersions(String packageName) throws PulsarAdminException;

```

List all versions of a package asynchronously.

```java

CompletableFuture<List<String>> listPackageVersionsAsync(String packageName);

```

````

### List all the specified type packages under a namespace

You can get a list of all the packages with the given type in a namespace in the following ways.

````mdx-code-block

```shell

bin/pulsar-admin packages list --type function public/default

```

{@inject: endpoint|GET|/admin/v3/packages/:type/:tenant/:namespace|operation/listPackages?version=@pulsar:version_number@}

List all the packages with the given type in a namespace synchronously.

```java

List<String> listPackages(String type, String namespace) throws PulsarAdminException;

```

List all the packages with the given type in a namespace asynchronously.

```java

CompletableFuture<List<String>> listPackagesAsync(String type, String namespace);

```

````

### Get the metadata of a package

You can get the metadata of a package in the following ways.

````mdx-code-block

```shell

bin/pulsar-admin packages get-metadata function://public/default/test@v1

```

{@inject: endpoint|GET|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version/metadata|operation/getMeta?version=@pulsar:version_number@}

Get the metadata of a package synchronously.

```java

PackageMetadata getMetadata(String packageName) throws PulsarAdminException;

```

Get the metadata of a package asynchronously.

```java

CompletableFuture<PackageMetadata> getMetadataAsync(String packageName);

```

````

### Update the metadata of a package

You can update the metadata of a package in the following ways.

````mdx-code-block

```shell

bin/pulsar-admin packages update-metadata function://public/default/example@v0.1 --description update-description

```

{@inject: endpoint|PUT|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version/metadata|operation/updateMeta?version=@pulsar:version_number@}

Update a package's metadata information synchronously.

```java

void updateMetadata(String packageName, PackageMetadata metadata) throws PulsarAdminException;

```

Update a package's metadata information asynchronously.

```java

CompletableFuture<Void> updateMetadataAsync(String packageName, PackageMetadata metadata);

```

````

### Delete a specified package

You can delete a specified package with its package name in the following ways.

````mdx-code-block

The following command example deletes a package of version 0.1.

```shell

bin/pulsar-admin packages delete functions://public/default/example@v0.1

```

{@inject: endpoint|DELETE|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version|operation/delete?version=@pulsar:version_number@}

Delete a specified package synchronously.

```java

void delete(String packageName) throws PulsarAdminException;

```

Delete a specified package asynchronously.

```java

CompletableFuture<Void> deleteAsync(String packageName);

```

````

diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-partitioned-topics.md b/site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-partitioned-topics.md
deleted file mode 100644
index 5ce182282e0324..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-partitioned-topics.md
+++ /dev/null
@@ -1,8 +0,0 @@
---
id: admin-api-partitioned-topics
title: Managing partitioned topics
sidebar_label: "Partitioned topics"
original_id: admin-api-partitioned-topics
---

For details of the content, refer to [manage topics](admin-api-topics.md).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-permissions.md b/site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-permissions.md
deleted file mode 100644
index 2496c9be54eb26..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-permissions.md
+++ /dev/null
@@ -1,184 +0,0 @@
---
id: admin-api-permissions
title: Managing permissions
sidebar_label: "Permissions"
original_id: admin-api-permissions
---

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````

> **Important**
>
> This page only shows **some frequently used operations**.
>
> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/)
>
> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc.
>
> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/).

Permissions in Pulsar are managed at the [namespace](reference-terminology.md#namespace) level (that is, within [tenants](reference-terminology.md#tenant) and [clusters](reference-terminology.md#cluster)).

## Grant permissions

You can grant permissions to specific roles for lists of operations such as `produce` and `consume`.

````mdx-code-block

Use the [`grant-permission`](reference-pulsar-admin.md#grant-permission) subcommand and specify a namespace, actions using the `--actions` flag, and a role using the `--role` flag:

```shell

$ pulsar-admin namespaces grant-permission test-tenant/ns1 \
  --actions produce,consume \
  --role admin10

```

Wildcard authorization can be performed when `authorizationAllowWildcardsMatching` is set to `true` in `broker.conf`.

e.g.

```shell

$ pulsar-admin namespaces grant-permission test-tenant/ns1 \
  --actions produce,consume \
  --role 'my.role.*'

```

Then, roles `my.role.1`, `my.role.2`, `my.role.foo`, `my.role.bar`, etc. can produce and consume.

```shell

$ pulsar-admin namespaces grant-permission test-tenant/ns1 \
  --actions produce,consume \
  --role '*.role.my'

```

Then, roles `1.role.my`, `2.role.my`, `foo.role.my`, `bar.role.my`, etc. can produce and consume.

**Note**: A wildcard matching works at **the beginning or end of the role name only**.

e.g.

```shell

$ pulsar-admin namespaces grant-permission test-tenant/ns1 \
  --actions produce,consume \
  --role 'my.*.role'

```

In this case, only the role `my.*.role` has permissions.
Roles `my.1.role`, `my.2.role`, `my.foo.role`, `my.bar.role`, etc. **cannot** produce and consume.

{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/permissions/:role|operation/grantPermissionOnNamespace?version=@pulsar:version_number@}

```java

admin.namespaces().grantPermissionOnNamespace(namespace, role, getAuthActions(actions));

```

````

## Get permissions

You can see which permissions have been granted to which roles in a namespace.
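In Java, the call returns a map from each role to the set of actions it may perform; the following is a minimal sketch of reading it back (the connection details are placeholders):

```java

import java.util.Map;
import java.util.Set;

import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.common.policies.data.AuthAction;

public class ListNamespacePermissionsExample {
    public static void main(String[] args) throws Exception {
        try (PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080") // placeholder URL
                .build()) {
            Map<String, Set<AuthAction>> permissions =
                    admin.namespaces().getPermissions("test-tenant/ns1");
            // For example: admin10 -> [produce, consume]
            permissions.forEach((role, actions) ->
                    System.out.println(role + " -> " + actions));
        }
    }
}

```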
````mdx-code-block

Use the [`permissions`](reference-pulsar-admin.md#permissions) subcommand and specify a namespace:

```shell

$ pulsar-admin namespaces permissions test-tenant/ns1
{
  "admin10": [
    "produce",
    "consume"
  ]
}

```

{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/permissions|operation/getPermissions?version=@pulsar:version_number@}

```java

admin.namespaces().getPermissions(namespace);

```

````

## Revoke permissions

You can revoke permissions from specific roles, which means that those roles will no longer have access to the specified namespace.

````mdx-code-block

Use the [`revoke-permission`](reference-pulsar-admin.md#revoke-permission) subcommand and specify a namespace and a role using the `--role` flag:

```shell

$ pulsar-admin namespaces revoke-permission test-tenant/ns1 \
  --role admin10

```

{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/permissions/:role|operation/revokePermissionsOnNamespace?version=@pulsar:version_number@}

```java

admin.namespaces().revokePermissionsOnNamespace(namespace, role);

```

````
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-persistent-topics.md b/site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-persistent-topics.md
deleted file mode 100644
index 50d135b72f5424..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-persistent-topics.md
+++ /dev/null
@@ -1,8 +0,0 @@
---
id: admin-api-persistent-topics
title: Managing persistent topics
sidebar_label: "Persistent topics"
original_id: admin-api-persistent-topics
---

For details of the content, refer to [manage topics](admin-api-topics.md).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-schemas.md b/site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-schemas.md
deleted file mode 100644
index 9ffe21f5b0f750..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-schemas.md
+++ /dev/null
@@ -1,7 +0,0 @@
---
id: admin-api-schemas
title: Managing Schemas
sidebar_label: "Schemas"
original_id: admin-api-schemas
---

diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-tenants.md b/site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-tenants.md
deleted file mode 100644
index 3e13e54a68b2cd..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-tenants.md
+++ /dev/null
@@ -1,238 +0,0 @@
---
id: admin-api-tenants
title: Managing Tenants
sidebar_label: "Tenants"
original_id: admin-api-tenants
---

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````

> **Important**
>
> This page only shows **some frequently used operations**.
>
> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/)
>
> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc.
>
> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/).
Tenants, like namespaces, can be managed using the [admin API](admin-api-overview.md). There are currently two configurable aspects of tenants:

* Admin roles
* Allowed clusters

## Tenant resources

### List

You can list all of the tenants associated with an [instance](reference-terminology.md#instance).

````mdx-code-block

Use the [`list`](reference-pulsar-admin.md#tenants-list) subcommand.

```shell

$ pulsar-admin tenants list
my-tenant-1
my-tenant-2

```

{@inject: endpoint|GET|/admin/v2/tenants|operation/getTenants?version=@pulsar:version_number@}

```java

admin.tenants().getTenants();

```

````

### Create

You can create a new tenant.

````mdx-code-block

Use the [`create`](reference-pulsar-admin.md#tenants-create) subcommand:

```shell

$ pulsar-admin tenants create my-tenant

```

When creating a tenant, you can assign admin roles using the `-r`/`--admin-roles` flag. You can specify multiple roles as a comma-separated list. Here are some examples:

```shell

$ pulsar-admin tenants create my-tenant \
  --admin-roles role1,role2,role3

$ pulsar-admin tenants create my-tenant \
  -r role1

```

{@inject: endpoint|POST|/admin/v2/tenants/:tenant|operation/createTenant?version=@pulsar:version_number@}

```java

admin.tenants().createTenant(tenantName, tenantInfo);

```

````

### Get configuration

You can fetch the [configuration](reference-configuration.md) for an existing tenant at any time.

````mdx-code-block

Use the [`get`](reference-pulsar-admin.md#tenants-get) subcommand and specify the name of the tenant. Here's an example:

```shell

$ pulsar-admin tenants get my-tenant
{
  "adminRoles": [
    "admin1",
    "admin2"
  ],
  "allowedClusters": [
    "cl1",
    "cl2"
  ]
}

```

{@inject: endpoint|GET|/admin/v2/tenants/:tenant|operation/getTenant?version=@pulsar:version_number@}

```java

admin.tenants().getTenantInfo(tenantName);

```

````

### Delete

Tenants can be deleted from a Pulsar [instance](reference-terminology.md#instance).

````mdx-code-block

Use the [`delete`](reference-pulsar-admin.md#tenants-delete) subcommand and specify the name of the tenant.

```shell

$ pulsar-admin tenants delete my-tenant

```

{@inject: endpoint|DELETE|/admin/v2/tenants/:tenant|operation/deleteTenant?version=@pulsar:version_number@}

```java

admin.tenants().deleteTenant(tenantName);

```

````

### Update

You can update a tenant's configuration.

````mdx-code-block

Use the [`update`](reference-pulsar-admin.md#tenants-update) subcommand.
```shell

$ pulsar-admin tenants update my-tenant

```

{@inject: endpoint|POST|/admin/v2/tenants/:tenant|operation/updateTenant?version=@pulsar:version_number@}

```java

admin.tenants().updateTenant(tenantName, tenantInfo);

```

````

diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-topics.md b/site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-topics.md
deleted file mode 100644
index 5d36b81e5cae79..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/admin-api-topics.md
+++ /dev/null
@@ -1,2334 +0,0 @@
---
id: admin-api-topics
title: Manage topics
sidebar_label: "Topics"
original_id: admin-api-topics
---

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````

> **Important**
>
> This page only shows **some frequently used operations**.
>
> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/)
>
> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc.
>
> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/).

Pulsar has persistent and non-persistent topics. A persistent topic is a logical endpoint for publishing and consuming messages. The topic name structure for persistent topics is:

```shell

persistent://tenant/namespace/topic

```

Non-persistent topics are used in applications that only consume real-time published messages and do not need a persistence guarantee. This reduces message-publish latency by removing the overhead of persisting messages. The topic name structure for non-persistent topics is:

```shell

non-persistent://tenant/namespace/topic

```

## Manage topic resources

Whether it is a persistent or non-persistent topic, you can obtain the topic resources through the `pulsar-admin` tool, REST API and Java.

:::note

In REST API, `:schema` stands for persistent or non-persistent. `:tenant`, `:namespace`, `:x` are variables; replace them with the real tenant, namespace, and `x` names when using them.
Take {@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getList?version=@pulsar:version_number@} as an example. To get the list of persistent topics in REST API, use `https://pulsar.apache.org/admin/v2/persistent/my-tenant/my-namespace`. To get the list of non-persistent topics in REST API, use `https://pulsar.apache.org/admin/v2/non-persistent/my-tenant/my-namespace`.

:::

### List of topics

You can get the list of topics under a given namespace in the following ways.

````mdx-code-block

```shell

$ pulsar-admin topics list \
  my-tenant/my-namespace

```

{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getList?version=@pulsar:version_number@}

```java

String namespace = "my-tenant/my-namespace";
admin.topics().getList(namespace);

```

````

### Grant permission

You can grant permissions on a client role to perform specific actions on a given topic in the following ways.
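A minimal Java sketch of granting and then verifying topic-level permissions (the topic, role, and connection details are placeholders):

```java

import java.util.EnumSet;

import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.common.policies.data.AuthAction;

public class GrantTopicPermissionExample {
    public static void main(String[] args) throws Exception {
        try (PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080") // placeholder URL
                .build()) {
            String topic = "persistent://test-tenant/ns1/tp1";
            // Allow the role to both produce to and consume from the topic.
            admin.topics().grantPermission(topic, "application1",
                    EnumSet.of(AuthAction.produce, AuthAction.consume));
            // Read the permissions back to verify the grant took effect.
            System.out.println(admin.topics().getPermissions(topic));
        }
    }
}

```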
````mdx-code-block

```shell

$ pulsar-admin topics grant-permission \
  --actions produce,consume --role application1 \
  persistent://test-tenant/ns1/tp1

```

{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/permissions/:role|operation/grantPermissionsOnTopic?version=@pulsar:version_number@}

```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
String role = "test-role";
Set<AuthAction> actions = Sets.newHashSet(AuthAction.produce, AuthAction.consume);
admin.topics().grantPermission(topic, role, actions);

```

````

### Get permission

You can fetch permission in the following ways.

````mdx-code-block

```shell

$ pulsar-admin topics permissions \
  persistent://test-tenant/ns1/tp1

{
  "application1": [
    "consume",
    "produce"
  ]
}

```

{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/permissions|operation/getPermissionsOnTopic?version=@pulsar:version_number@}

```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
admin.topics().getPermissions(topic);

```

````

### Revoke permission

You can revoke a permission granted on a client role in the following ways.

````mdx-code-block

```shell

$ pulsar-admin topics revoke-permission \
  --role application1 \
  persistent://test-tenant/ns1/tp1

{
  "application1": [
    "consume",
    "produce"
  ]
}

```

{@inject: endpoint|DELETE|/admin/v2/:schema/:tenant/:namespace/:topic/permissions/:role|operation/revokePermissionsOnTopic?version=@pulsar:version_number@}

```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
String role = "test-role";
admin.topics().revokePermissions(topic, role);

```

````

### Delete topic

You can delete a topic in the following ways. You cannot delete a topic if any active subscriptions or producers are connected to the topic.

````mdx-code-block

```shell

$ pulsar-admin topics delete \
  persistent://test-tenant/ns1/tp1

```

{@inject: endpoint|DELETE|/admin/v2/:schema/:tenant/:namespace/:topic|operation/deleteTopic?version=@pulsar:version_number@}

```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
admin.topics().delete(topic);

```

````

### Unload topic

You can unload a topic in the following ways.

````mdx-code-block

```shell

$ pulsar-admin topics unload \
  persistent://test-tenant/ns1/tp1

```

{@inject: endpoint|PUT|/admin/v2/:schema/:tenant/:namespace/:topic/unload|operation/unloadTopic?version=@pulsar:version_number@}

```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
admin.topics().unload(topic);

```

````

### Get stats

You can check the following statistics of a given non-partitioned topic.

  - **msgRateIn**: The sum of all local and replication publishers' publish rates (msg/s).

  - **msgThroughputIn**: The sum of all local and replication publishers' publish rates (bytes/s).

  - **msgRateOut**: The sum of all local and replication consumers' dispatch rates (msg/s).

  - **msgThroughputOut**: The sum of all local and replication consumers' dispatch rates (bytes/s).

  - **averageMsgSize**: The average size (in bytes) of messages published within the last interval.

  - **storageSize**: The sum of the ledgers' storage size for this topic. The space used to store the messages for the topic.
- - - **bytesInCounter**: Total bytes published to the topic. - - - **msgInCounter**: Total messages published to the topic. - - - **bytesOutCounter**: Total bytes delivered to consumers. - - - **msgOutCounter**: Total messages delivered to consumers. - - - **msgChunkPublished**: Topic has chunked message published on it. - - - **backlogSize**: Estimated total unconsumed or backlog size (in bytes). - - - **offloadedStorageSize**: Space used to store the offloaded messages for the topic (in bytes). - - - **waitingPublishers**: The number of publishers waiting in a queue in exclusive access mode. - - - **deduplicationStatus**: The status of message deduplication for the topic. - - - **topicEpoch**: The topic epoch or empty if not set. - - - **nonContiguousDeletedMessagesRanges**: The number of non-contiguous deleted messages ranges. - - - **nonContiguousDeletedMessagesRangesSerializedSize**: The serialized size of non-contiguous deleted messages ranges. - - - **publishers**: The list of all local publishers into the topic. The list ranges from zero to thousands. - - - **accessMode**: The type of access to the topic that the producer requires. - - - **msgRateIn**: The total rate of messages (msg/s) published by this publisher. - - - **msgThroughputIn**: The total throughput (bytes/s) of the messages published by this publisher. - - - **averageMsgSize**: The average message size in bytes from this publisher within the last interval. - - - **chunkedMessageRate**: The total rate of chunked messages published by this publisher. - - - **producerId**: The internal identifier for this producer on this topic. - - - **producerName**: The internal identifier for this producer, generated by the client library. - - - **address**: The IP address and source port for the connection of this producer. - - - **connectedSince**: The timestamp when this producer is created or reconnected last time. - - - **clientVersion**: The client library version of this producer. - - - **metadata**: Metadata (key/value strings) associated with this publisher. - - - **subscriptions**: The list of all local subscriptions to the topic. - - - **my-subscription**: The name of this subscription. It is defined by the client. - - - **msgRateOut**: The total rate of messages (msg/s) delivered on this subscription. - - - **msgThroughputOut**: The total throughput (bytes/s) delivered on this subscription. - - - **msgBacklog**: The number of messages in the subscription backlog. - - - **type**: The subscription type. - - - **msgRateExpired**: The rate at which messages were discarded instead of dispatched from this subscription due to TTL. - - - **lastExpireTimestamp**: The timestamp of the last message expire execution. - - - **lastConsumedFlowTimestamp**: The timestamp of the last flow command received. - - - **lastConsumedTimestamp**: The latest timestamp of all the consumed timestamp of the consumers. - - - **lastAckedTimestamp**: The latest timestamp of all the acked timestamp of the consumers. - - - **bytesOutCounter**: Total bytes delivered to consumer. - - - **msgOutCounter**: Total messages delivered to consumer. - - - **msgRateRedeliver**: Total rate of messages redelivered on this subscription (msg/s). - - - **chunkedMessageRate**: Chunked message dispatch rate. - - - **backlogSize**: Size of backlog for this subscription (in bytes). - - - **msgBacklogNoDelayed**: Number of messages in the subscription backlog that do not contain the delay messages. 
- - - **blockedSubscriptionOnUnackedMsgs**: Flag to verify if a subscription is blocked due to reaching threshold of unacked messages. - - - **msgDelayed**: Number of delayed messages currently being tracked. - - - **unackedMessages**: Number of unacknowledged messages for the subscription, where an unacknowledged message is one that has been sent to a consumer but not yet acknowledged. This field is only meaningful when using a subscription that tracks individual message acknowledgement. - - - **activeConsumerName**: The name of the consumer that is active for single active consumer subscriptions. For example, failover or exclusive. - - - **totalMsgExpired**: Total messages expired on this subscription. - - - **lastMarkDeleteAdvancedTimestamp**: Last MarkDelete position advanced timestamp. - - - **durable**: Whether the subscription is durable or ephemeral (for example, from a reader). - - - **replicated**: Mark that the subscription state is kept in sync across different regions. - - - **allowOutOfOrderDelivery**: Whether out of order delivery is allowed on the Key_Shared subscription. - - - **keySharedMode**: Whether the Key_Shared subscription mode is AUTO_SPLIT or STICKY. - - - **consumersAfterMarkDeletePosition**: This is for Key_Shared subscription to get the recentJoinedConsumers in the Key_Shared subscription. - - - **nonContiguousDeletedMessagesRanges**: The number of non-contiguous deleted messages ranges. - - - **nonContiguousDeletedMessagesRangesSerializedSize**: The serialized size of non-contiguous deleted messages ranges. - - - **consumers**: The list of connected consumers for this subscription. - - - **msgRateOut**: The total rate of messages (msg/s) delivered to the consumer. - - - **msgThroughputOut**: The total throughput (bytes/s) delivered to the consumer. - - - **consumerName**: The internal identifier for this consumer, generated by the client library. - - - **availablePermits**: The number of messages that the consumer has space for in the client library's listen queue. `0` means the client library's queue is full and `receive()` isn't being called. A non-zero value means this consumer is ready for dispatched messages. - - - **unackedMessages**: The number of unacknowledged messages for the consumer, where an unacknowledged message is one that has been sent to the consumer but not yet acknowledged. This field is only meaningful when using a subscription that tracks individual message acknowledgement. - - - **blockedConsumerOnUnackedMsgs**: The flag used to verify if the consumer is blocked due to reaching threshold of the unacknowledged messages. - - - **lastConsumedTimestamp**: The timestamp when the consumer reads a message the last time. - - - **lastAckedTimestamp**: The timestamp when the consumer acknowledges a message the last time. - - - **address**: The IP address and source port for the connection of this consumer. - - - **connectedSince**: The timestamp when this consumer is created or reconnected last time. - - - **clientVersion**: The client library version of this consumer. - - - **bytesOutCounter**: Total bytes delivered to consumer. - - - **msgOutCounter**: Total messages delivered to consumer. - - - **msgRateRedeliver**: Total rate of messages redelivered by this consumer (msg/s). - - - **chunkedMessageRate**: The total rate of chunked messages delivered to this consumer. - - - **avgMessagesPerEntry**: Number of average messages per entry for the consumer consumed. 
- - - **readPositionWhenJoining**: The read position of the cursor when the consumer joining. - - - **keyHashRanges**: Hash ranges assigned to this consumer if is Key_Shared sub mode. - - - **metadata**: Metadata (key/value strings) associated with this consumer. - - - **replication**: This section gives the stats for cross-colo replication of this topic - - - **msgRateIn**: The total rate (msg/s) of messages received from the remote cluster. - - - **msgThroughputIn**: The total throughput (bytes/s) received from the remote cluster. - - - **msgRateOut**: The total rate of messages (msg/s) delivered to the replication-subscriber. - - - **msgThroughputOut**: The total throughput (bytes/s) delivered to the replication-subscriber. - - - **msgRateExpired**: The total rate of messages (msg/s) expired. - - - **replicationBacklog**: The number of messages pending to be replicated to remote cluster. - - - **connected**: Whether the outbound replicator is connected. - - - **replicationDelayInSeconds**: How long the oldest message has been waiting to be sent through the connection, if connected is `true`. - - - **inboundConnection**: The IP and port of the broker in the remote cluster's publisher connection to this broker. - - - **inboundConnectedSince**: The TCP connection being used to publish messages to the remote cluster. If there are no local publishers connected, this connection is automatically closed after a minute. - - - **outboundConnection**: The address of the outbound replication connection. - - - **outboundConnectedSince**: The timestamp of establishing outbound connection. - -The following is an example of a topic status. - -```json - -{ - "msgRateIn" : 0.0, - "msgThroughputIn" : 0.0, - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "bytesInCounter" : 504, - "msgInCounter" : 9, - "bytesOutCounter" : 2296, - "msgOutCounter" : 41, - "averageMsgSize" : 0.0, - "msgChunkPublished" : false, - "storageSize" : 504, - "backlogSize" : 0, - "offloadedStorageSize" : 0, - "publishers" : [ { - "accessMode" : "Shared", - "msgRateIn" : 0.0, - "msgThroughputIn" : 0.0, - "averageMsgSize" : 0.0, - "chunkedMessageRate" : 0.0, - "producerId" : 0, - "metadata" : { }, - "address" : "/127.0.0.1:65402", - "connectedSince" : "2021-06-09T17:22:55.913+08:00", - "clientVersion" : "2.9.0-SNAPSHOT", - "producerName" : "standalone-1-0" - } ], - "waitingPublishers" : 0, - "subscriptions" : { - "sub-demo" : { - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "bytesOutCounter" : 2296, - "msgOutCounter" : 41, - "msgRateRedeliver" : 0.0, - "chunkedMessageRate" : 0, - "msgBacklog" : 0, - "backlogSize" : 0, - "msgBacklogNoDelayed" : 0, - "blockedSubscriptionOnUnackedMsgs" : false, - "msgDelayed" : 0, - "unackedMessages" : 0, - "type" : "Exclusive", - "activeConsumerName" : "20b81", - "msgRateExpired" : 0.0, - "totalMsgExpired" : 0, - "lastExpireTimestamp" : 0, - "lastConsumedFlowTimestamp" : 1623230565356, - "lastConsumedTimestamp" : 1623230583946, - "lastAckedTimestamp" : 1623230584033, - "lastMarkDeleteAdvancedTimestamp" : 1623230584033, - "consumers" : [ { - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "bytesOutCounter" : 2296, - "msgOutCounter" : 41, - "msgRateRedeliver" : 0.0, - "chunkedMessageRate" : 0.0, - "consumerName" : "20b81", - "availablePermits" : 959, - "unackedMessages" : 0, - "avgMessagesPerEntry" : 314, - "blockedConsumerOnUnackedMsgs" : false, - "lastAckedTimestamp" : 1623230584033, - "lastConsumedTimestamp" : 1623230583946, - "metadata" : { }, - "address" : "/127.0.0.1:65172", - 
"connectedSince" : "2021-06-09T17:22:45.353+08:00", - "clientVersion" : "2.9.0-SNAPSHOT" - } ], - "allowOutOfOrderDelivery": false, - "consumersAfterMarkDeletePosition" : { }, - "nonContiguousDeletedMessagesRanges" : 0, - "nonContiguousDeletedMessagesRangesSerializedSize" : 0, - "durable" : true, - "replicated" : false - } - }, - "replication" : { }, - "deduplicationStatus" : "Disabled", - "nonContiguousDeletedMessagesRanges" : 0, - "nonContiguousDeletedMessagesRangesSerializedSize" : 0 -} - -``` - -To get the status of a topic, you can use the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics stats \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/stats|operation/getStats?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().getStats(topic); - -``` - - - - -```` - -### Get internal stats - -You can get the detailed statistics of a topic. - - - **entriesAddedCounter**: Messages published since this broker loaded this topic. - - - **numberOfEntries**: The total number of messages being tracked. - - - **totalSize**: The total storage size in bytes of all messages. - - - **currentLedgerEntries**: The count of messages written to the ledger that is currently open for writing. - - - **currentLedgerSize**: The size in bytes of messages written to the ledger that is currently open for writing. - - - **lastLedgerCreatedTimestamp**: The time when the last ledger is created. - - - **lastLedgerCreationFailureTimestamp:** The time when the last ledger failed. - - - **waitingCursorsCount**: The number of cursors that are "caught up" and waiting for a new message to be published. - - - **pendingAddEntriesCount**: The number of messages that complete (asynchronous) write requests. - - - **lastConfirmedEntry**: The ledgerid:entryid of the last message that is written successfully. If the entryid is `-1`, then the ledger is open, yet no entries are written. - - - **state**: The state of this ledger for writing. The state `LedgerOpened` means that a ledger is open for saving published messages. - - - **ledgers**: The ordered list of all ledgers for this topic holding messages. - - - **ledgerId**: The ID of this ledger. - - - **entries**: The total number of entries that belong to this ledger. - - - **size**: The size of messages written to this ledger (in bytes). - - - **offloaded**: Whether this ledger is offloaded. - - - **metadata**: The ledger metadata. - - - **schemaLedgers**: The ordered list of all ledgers for this topic schema. - - - **ledgerId**: The ID of this ledger. - - - **entries**: The total number of entries that belong to this ledger. - - - **size**: The size of messages written to this ledger (in bytes). - - - **offloaded**: Whether this ledger is offloaded. - - - **metadata**: The ledger metadata. - - - **compactedLedger**: The ledgers holding un-acked messages after topic compaction. - - - **ledgerId**: The ID of this ledger. - - - **entries**: The total number of entries that belong to this ledger. - - - **size**: The size of messages written to this ledger (in bytes). - - - **offloaded**: Whether this ledger is offloaded. The value is `false` for the compacted topic ledger. - - - **cursors**: The list of all cursors on this topic. Each subscription in the topic stats has a cursor. - - - **markDeletePosition**: All messages before the markDeletePosition are acknowledged by the subscriber. 
- - - **readPosition**: The latest position of subscriber for reading message. - - - **waitingReadOp**: This is true when the subscription has read the latest message published to the topic and is waiting for new messages to be published. - - - **pendingReadOps**: The counter for how many outstanding read requests to the BookKeepers in progress. - - - **messagesConsumedCounter**: The number of messages this cursor has acked since this broker loaded this topic. - - - **cursorLedger**: The ledger being used to persistently store the current markDeletePosition. - - - **cursorLedgerLastEntry**: The last entryid used to persistently store the current markDeletePosition. - - - **individuallyDeletedMessages**: If acknowledges are being done out of order, the ranges of messages acknowledged between the markDeletePosition and the read-position shows. - - - **lastLedgerSwitchTimestamp**: The last time the cursor ledger is rolled over. - - - **state**: The state of the cursor ledger: `Open` means you have a cursor ledger for saving updates of the markDeletePosition. - -The following is an example of the detailed statistics of a topic. - -```json - -{ - "entriesAddedCounter":0, - "numberOfEntries":0, - "totalSize":0, - "currentLedgerEntries":0, - "currentLedgerSize":0, - "lastLedgerCreatedTimestamp":"2021-01-22T21:12:14.868+08:00", - "lastLedgerCreationFailureTimestamp":null, - "waitingCursorsCount":0, - "pendingAddEntriesCount":0, - "lastConfirmedEntry":"3:-1", - "state":"LedgerOpened", - "ledgers":[ - { - "ledgerId":3, - "entries":0, - "size":0, - "offloaded":false, - "metadata":null - } - ], - "cursors":{ - "test":{ - "markDeletePosition":"3:-1", - "readPosition":"3:-1", - "waitingReadOp":false, - "pendingReadOps":0, - "messagesConsumedCounter":0, - "cursorLedger":4, - "cursorLedgerLastEntry":1, - "individuallyDeletedMessages":"[]", - "lastLedgerSwitchTimestamp":"2021-01-22T21:12:14.966+08:00", - "state":"Open", - "numberOfEntriesSinceFirstNotAckedMessage":0, - "totalNonContiguousDeletedMessagesRange":0, - "properties":{ - - } - } - }, - "schemaLedgers":[ - { - "ledgerId":1, - "entries":11, - "size":10, - "offloaded":false, - "metadata":null - } - ], - "compactedLedger":{ - "ledgerId":-1, - "entries":-1, - "size":-1, - "offloaded":false, - "metadata":null - } -} - -``` - -To get the internal status of a topic, you can use the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics stats-internal \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/internalStats|operation/getInternalStats?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().getInternalStats(topic); - -``` - - - - -```` - -### Peek messages - -You can peek a number of messages for a specific subscription of a given topic in the following ways. 
-````mdx-code-block - - - -```shell - -$ pulsar-admin topics peek-messages \ - --count 10 --subscription my-subscription \ - persistent://test-tenant/ns1/tp1 \ - -Message ID: 315674752:0 -Properties: { "X-Pulsar-publish-time" : "2015-07-13 17:40:28.451" } -msg-payload - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/position/:messagePosition|operation/peekNthMessage?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String subName = "my-subscription"; -int numMessages = 1; -admin.topics().peekMessages(topic, subName, numMessages); - -``` - - - - -```` - -### Get message by ID - -You can fetch the message with the given ledger ID and entry ID in the following ways. - -````mdx-code-block - - - -```shell - -$ ./bin/pulsar-admin topics get-message-by-id \ - persistent://public/default/my-topic \ - -l 10 -e 0 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/ledger/:ledgerId/entry/:entryId|operation/getMessageById?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -long ledgerId = 10; -long entryId = 10; -admin.topics().getMessageById(topic, ledgerId, entryId); - -``` - - - - -```` - -### Examine messages - -You can examine a specific message on a topic by position relative to the earliest or the latest message. - -````mdx-code-block - - - -```shell - -./bin/pulsar-admin topics examine-messages \ - persistent://public/default/my-topic \ - -i latest -m 1 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/examinemessage?initialPosition=:initialPosition&messagePosition=:messagePosition|operation/examineMessage?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().examineMessage(topic, "latest", 1); - -``` - - - - -```` - -### Get message ID - -You can get message ID published at or just after the given datetime. - -````mdx-code-block - - - -```shell - -./bin/pulsar-admin topics get-message-id \ - persistent://public/default/my-topic \ - -d 2021-06-28T19:01:17Z - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/messageid/:timestamp|operation/getMessageIdByTimestamp?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -long timestamp = System.currentTimeMillis() -admin.topics().getMessageIdByTimestamp(topic, timestamp); - -``` - - - - -```` - - -### Skip messages - -You can skip a number of messages for a specific subscription of a given topic in the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics skip \ - --count 10 --subscription my-subscription \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/skip/:numMessages|operation/skipMessages?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String subName = "my-subscription"; -int numMessages = 1; -admin.topics().skipMessages(topic, subName, numMessages); - -``` - - - - -```` - -### Skip all messages - -You can skip all the old messages for a specific subscription of a given topic. 
-
-````mdx-code-block
-
-
-
-```shell
-
-$ pulsar-admin topics skip-all \
-  --subscription my-subscription \
-  persistent://test-tenant/ns1/tp1
-
-```
-
-
-
-
-{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/skip_all|operation/skipAllMessages?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-String subName = "my-subscription";
-admin.topics().skipAllMessages(topic, subName);
-
-```
-
-
-
-
-````
-
-### Reset cursor
-
-You can reset a subscription cursor to the position that was recorded X minutes earlier. Pulsar calculates the cursor position at that point in time and resets the cursor there. You can reset the cursor in the following ways.
-
-````mdx-code-block
-
-
-
-```shell
-
-$ pulsar-admin topics reset-cursor \
-  --subscription my-subscription --time 10 \
-  persistent://test-tenant/ns1/tp1
-
-```
-
-
-
-
-{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/resetcursor/:timestamp|operation/resetCursor?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-String subName = "my-subscription";
-long timestamp = 2342343L;
-admin.topics().resetCursor(topic, subName, timestamp);
-
-```
-
-
-
-
-````
-
-### Look up topic's owner broker
-
-You can locate the owner broker of the given topic in the following ways.
-
-````mdx-code-block
-
-
-
-```shell
-
-$ pulsar-admin topics lookup \
-  persistent://test-tenant/ns1/tp1
-
-"pulsar://broker1.org.com:4480"
-
-```
-
-
-
-
-{@inject: endpoint|GET|/lookup/v2/topic/:schema/:tenant/:namespace/:topic|/?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-admin.lookups().lookupTopic(topic);
-
-```
-
-
-
-
-````
-
-### Get bundle
-
-You can get the range of the bundle that the given topic belongs to in the following ways.
-
-````mdx-code-block
-
-
-
-```shell
-
-$ pulsar-admin topics bundle-range \
-  persistent://test-tenant/ns1/tp1
-
-"0x00000000_0xffffffff"
-
-```
-
-
-
-
-{@inject: endpoint|GET|/lookup/v2/topic/:topic_domain/:tenant/:namespace/:topic/bundle|/?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-admin.lookups().getBundleRange(topic);
-
-```
-
-
-
-
-````
-
-### Get subscriptions
-
-You can check all subscription names for a given topic in the following ways.
-
-````mdx-code-block
-
-
-
-```shell
-
-$ pulsar-admin topics subscriptions \
-  persistent://test-tenant/ns1/tp1
-
-my-subscription
-
-```
-
-
-
-
-{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/subscriptions|operation/getSubscriptions?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-admin.topics().getSubscriptions(topic);
-
-```
-
-
-
-
-````
-
-### Unsubscribe
-
-When a subscription no longer processes messages, you can unsubscribe it in the following ways. 
-
-````mdx-code-block
-
-
-
-```shell
-
-$ pulsar-admin topics unsubscribe \
-  --subscription my-subscription \
-  persistent://test-tenant/ns1/tp1
-
-```
-
-
-
-
-{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/:topic/subscription/:subscription|operation/deleteSubscription?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-String subscriptionName = "my-subscription";
-admin.topics().deleteSubscription(topic, subscriptionName);
-
-```
-
-
-
-
-````
-
-### Last message ID
-
-You can get the last committed message ID for a persistent topic. It is available since the 2.3.0 release.
-
-````mdx-code-block
-
-
-
-```shell
-
-pulsar-admin topics last-message-id topic-name
-
-```
-
-
-
-
-{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/lastMessageId|operation/getLastMessageId?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-admin.topics().getLastMessageId(topic);
-
-```
-
-
-
-
-````
-
-
-
-### Configure deduplication snapshot interval
-
-#### Get deduplication snapshot interval
-
-To get the topic-level deduplication snapshot interval, use one of the following methods.
-
-````mdx-code-block
-
-
-
-```
-
-pulsar-admin topics get-deduplication-snapshot-interval options
-
-```
-
-
-
-
-{@inject: endpoint|GET|/admin/v2/topics/:tenant/:namespace/:topic/deduplicationSnapshotInterval|operation/getDeduplicationSnapshotInterval?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.topics().getDeduplicationSnapshotInterval(topic);
-
-```
-
-
-
-
-````
-
-#### Set deduplication snapshot interval
-
-To set the topic-level deduplication snapshot interval, use one of the following methods.
-
-> **Prerequisite**: `brokerDeduplicationEnabled` must be set to `true`.
-
-````mdx-code-block
-
-
-
-```
-
-pulsar-admin topics set-deduplication-snapshot-interval options
-
-```
-
-
-
-
-{@inject: endpoint|POST|/admin/v2/topics/:tenant/:namespace/:topic/deduplicationSnapshotInterval|operation/setDeduplicationSnapshotInterval?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.topics().setDeduplicationSnapshotInterval(topic, 1000);
-
-```
-
-
-
-
-````
-
-#### Remove deduplication snapshot interval
-
-To remove the topic-level deduplication snapshot interval, use one of the following methods.
-
-````mdx-code-block
-
-
-
-```
-
-pulsar-admin topics remove-deduplication-snapshot-interval options
-
-```
-
-
-
-
-{@inject: endpoint|DELETE|/admin/v2/topics/:tenant/:namespace/:topic/deduplicationSnapshotInterval|operation/deleteDeduplicationSnapshotInterval?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.topics().removeDeduplicationSnapshotInterval(topic);
-
-```
-
-
-
-
-````
-
-
-### Configure inactive topic policies
-
-#### Get inactive topic policies
-
-To get the topic-level inactive topic policies, use one of the following methods.
-
-````mdx-code-block
-
-
-
-```
-
-pulsar-admin topics get-inactive-topic-policies options
-
-```
-
-
-
-
-{@inject: endpoint|GET|/admin/v2/topics/:tenant/:namespace/:topic/inactiveTopicPolicies|operation/getInactiveTopicPolicies?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.topics().getInactiveTopicPolicies(topic);
-
-```
-
-
-
-
-````
-
-#### Set inactive topic policies
-
-To set the topic-level inactive topic policies, use one of the following methods. 
- -````mdx-code-block - - - -``` - -pulsar-admin topics set-inactive-topic-policies options - -``` - - - - -{@inject: endpoint|POST|/admin/v2/topics/:tenant/:namespace/:topic/inactiveTopicPolicies|operation/setInactiveTopicPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().setInactiveTopicPolicies(topic, inactiveTopicPolicies) - -``` - - - - -```` - -#### Remove inactive topic policies - -To remove the topic-level inactive topic policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics remove-inactive-topic-policies options - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/topics/:tenant/:namespace/:topic/inactiveTopicPolicies|operation/removeInactiveTopicPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().removeInactiveTopicPolicies(topic) - -``` - - - - -```` - - -### Configure offload policies - -#### Get offload policies - -To get the topic-level offload policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics get-offload-policies options - -``` - - - - -{@inject: endpoint|GET|/admin/v2/topics/:tenant/:namespace/:topic/offloadPolicies|operation/getOffloadPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getOffloadPolicies(topic) - -``` - - - - -```` - -#### Set offload policies - -To set the topic-level offload policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics set-offload-policies options - -``` - - - - -{@inject: endpoint|POST|/admin/v2/topics/:tenant/:namespace/:topic/offloadPolicies|operation/setOffloadPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().setOffloadPolicies(topic, offloadPolicies) - -``` - - - - -```` - -#### Remove offload policies - -To remove the topic-level offload policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics remove-offload-policies options - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/topics/:tenant/:namespace/:topic/offloadPolicies|operation/removeOffloadPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().removeOffloadPolicies(topic) - -``` - - - - -```` - -## Manage non-partitioned topics -You can use Pulsar [admin API](admin-api-overview.md) to create, delete and check status of non-partitioned topics. - -### Create -Non-partitioned topics must be explicitly created. When creating a new non-partitioned topic, you need to provide a name for the topic. - -By default, 60 seconds after creation, topics are considered inactive and deleted automatically to avoid generating trash data. To disable this feature, set `brokerDeleteInactiveTopicsEnabled` to `false`. To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to a specific value. - -For more information about the two parameters, see [here](reference-configuration.md#broker). - -You can create non-partitioned topics in the following ways. -````mdx-code-block - - - -When you create non-partitioned topics with the [`create`](reference-pulsar-admin.md#create-3) command, you need to specify the topic name as an argument. 
- -```shell - -$ bin/pulsar-admin topics create \ - persistent://my-tenant/my-namespace/my-topic - -``` - -:::note - -When you create a non-partitioned topic with the suffix '-partition-' followed by numeric value like 'xyz-topic-partition-x' for the topic name, if a partitioned topic with same suffix 'xyz-topic-partition-y' exists, then the numeric value(x) for the non-partitioned topic must be larger than the number of partitions(y) of the partitioned topic. Otherwise, you cannot create such a non-partitioned topic. - -::: - - - - -{@inject: endpoint|PUT|/admin/v2/:schema/:tenant/:namespace/:topic|operation/createNonPartitionedTopic?version=@pulsar:version_number@} - - - - -```java - -String topicName = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().createNonPartitionedTopic(topicName); - -``` - - - - -```` - -### Delete -You can delete non-partitioned topics in the following ways. -````mdx-code-block - - - -```shell - -$ bin/pulsar-admin topics delete \ - persistent://my-tenant/my-namespace/my-topic - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/:schema/:tenant/:namespace/:topic|operation/deleteTopic?version=@pulsar:version_number@} - - - - -```java - -admin.topics().delete(topic); - -``` - - - - -```` - -### List - -You can get the list of topics under a given namespace in the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics list tenant/namespace -persistent://tenant/namespace/topic1 -persistent://tenant/namespace/topic2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getList?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getList(namespace); - -``` - - - - -```` - -### Stats - -You can check the current statistics of a given topic. The following is an example. For description of each stats, refer to [get stats](#get-stats). - -```json - -{ - "msgRateIn": 4641.528542257553, - "msgThroughputIn": 44663039.74947473, - "msgRateOut": 0, - "msgThroughputOut": 0, - "averageMsgSize": 1232439.816728665, - "storageSize": 135532389160, - "publishers": [ - { - "msgRateIn": 57.855383881403576, - "msgThroughputIn": 558994.7078932219, - "averageMsgSize": 613135, - "producerId": 0, - "producerName": null, - "address": null, - "connectedSince": null - } - ], - "subscriptions": { - "my-topic_subscription": { - "msgRateOut": 0, - "msgThroughputOut": 0, - "msgBacklog": 116632, - "type": null, - "msgRateExpired": 36.98245516804671, - "consumers": [] - } - }, - "replication": {} -} - -``` - -You can check the current statistics of a given topic and its connected producers and consumers in the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics stats \ - persistent://test-tenant/namespace/topic \ - --get-precise-backlog - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/stats|operation/getStats?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getStats(topic, false /* is precise backlog */); - -``` - - - - -```` - -## Manage partitioned topics -You can use Pulsar [admin API](admin-api-overview.md) to create, update, delete and check status of partitioned topics. - -### Create - -Partitioned topics must be explicitly created. When creating a new partitioned topic, you need to provide a name and the number of partitions for the topic. - -By default, 60 seconds after creation, topics are considered inactive and deleted automatically to avoid generating trash data. 
To disable this feature, set `brokerDeleteInactiveTopicsEnabled` to `false`. To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to a specific value.
-
-For more information about the two parameters, see [here](reference-configuration.md#broker).
-
-You can create partitioned topics in the following ways.
-````mdx-code-block
-
-
-
-When you create partitioned topics with the [`create-partitioned-topic`](reference-pulsar-admin.md#create-partitioned-topic)
-command, you need to specify the topic name as an argument and the number of partitions using the `-p` or `--partitions` flag.
-
-```shell
-
-$ bin/pulsar-admin topics create-partitioned-topic \
-  persistent://my-tenant/my-namespace/my-topic \
-  --partitions 4
-
-```
-
-:::note
-
-If a non-partitioned topic with the suffix '-partition-' followed by a numeric value, such as 'xyz-topic-partition-10', already exists, you cannot create a partitioned topic named 'xyz-topic', because the partitions of the partitioned topic could override the existing non-partitioned topic. To create such a partitioned topic, you have to delete the non-partitioned topic first.
-
-:::
-
-
-
-
-{@inject: endpoint|PUT|/admin/v2/:schema/:tenant/:namespace/:topic/partitions|operation/createPartitionedTopic?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-String topicName = "persistent://my-tenant/my-namespace/my-topic";
-int numPartitions = 4;
-admin.topics().createPartitionedTopic(topicName, numPartitions);
-
-```
-
-
-
-
-````
-
-### Create missed partitions
-
-When topic auto-creation is disabled and you have a partitioned topic without any partitions, you can use the [`create-missed-partitions`](reference-pulsar-admin.md#create-missed-partitions) command to create partitions for the topic.
-
-````mdx-code-block
-
-
-
-You can create missed partitions with the [`create-missed-partitions`](reference-pulsar-admin.md#create-missed-partitions) command and specify the topic name as an argument.
-
-```shell
-
-$ bin/pulsar-admin topics create-missed-partitions \
-  persistent://my-tenant/my-namespace/my-topic
-
-```
-
-
-
-
-{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic|operation/createMissedPartitions?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-String topicName = "persistent://my-tenant/my-namespace/my-topic";
-admin.topics().createMissedPartitions(topicName);
-
-```
-
-
-
-
-````
-
-### Get metadata
-
-Partitioned topics are associated with metadata, which you can view as a JSON object. The following metadata field is available.
-
-Field | Description
-:-----|:-------
-`partitions` | The number of partitions into which the topic is divided.
-
-````mdx-code-block
-
-
-
-You can check the number of partitions in a partitioned topic with the [`get-partitioned-topic-metadata`](reference-pulsar-admin.md#get-partitioned-topic-metadata) subcommand.
-
-```shell
-
-$ pulsar-admin topics get-partitioned-topic-metadata \
-  persistent://my-tenant/my-namespace/my-topic
-{
-  "partitions": 4
-}
-
-```
-
-
-
-
-{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/partitions|operation/getPartitionedMetadata?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-String topicName = "persistent://my-tenant/my-namespace/my-topic";
-admin.topics().getPartitionedTopicMetadata(topicName);
-
-```
-
-
-
-
-````
-
-### Update
-
-You can update the number of partitions for an existing partitioned topic *if* the topic is non-global. However, you can only increase the number of partitions. 
Decrementing the number of partitions would delete the topic, which is not supported in Pulsar. - -Producers and consumers can find the newly created partitions automatically. - -````mdx-code-block - - - -You can update partitioned topics with the [`update-partitioned-topic`](reference-pulsar-admin.md#update-partitioned-topic) command. - -```shell - -$ pulsar-admin topics update-partitioned-topic \ - persistent://my-tenant/my-namespace/my-topic \ - --partitions 8 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:cluster/:namespace/:destination/partitions|operation/updatePartitionedTopic?version=@pulsar:version_number@} - - - - -```java - -admin.topics().updatePartitionedTopic(topic, numPartitions); - -``` - - - - -```` - -### Delete -You can delete partitioned topics with the [`delete-partitioned-topic`](reference-pulsar-admin.md#delete-partitioned-topic) command, REST API and Java. - -````mdx-code-block - - - -```shell - -$ bin/pulsar-admin topics delete-partitioned-topic \ - persistent://my-tenant/my-namespace/my-topic - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/:schema/:topic/:namespace/:destination/partitions|operation/deletePartitionedTopic?version=@pulsar:version_number@} - - - - -```java - -admin.topics().delete(topic); - -``` - - - - -```` - -### List -You can get the list of topics under a given namespace in the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics list tenant/namespace -persistent://tenant/namespace/topic1 -persistent://tenant/namespace/topic2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getPartitionedTopicList?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getList(namespace); - -``` - - - - -```` - -### Stats - -You can check the current statistics of a given partitioned topic. The following is an example. For description of each stats, refer to [get stats](#get-stats). - -Note that in the subscription JSON object, `chuckedMessageRate` is deprecated. Please use `chunkedMessageRate`. Both will be sent in the JSON for now. - -```json - -{ - "msgRateIn" : 999.992947159793, - "msgThroughputIn" : 1070918.4635439808, - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "bytesInCounter" : 270318763, - "msgInCounter" : 252489, - "bytesOutCounter" : 0, - "msgOutCounter" : 0, - "averageMsgSize" : 1070.926056966454, - "msgChunkPublished" : false, - "storageSize" : 270316646, - "backlogSize" : 200921133, - "publishers" : [ { - "msgRateIn" : 999.992947159793, - "msgThroughputIn" : 1070918.4635439808, - "averageMsgSize" : 1070.3333333333333, - "chunkedMessageRate" : 0.0, - "producerId" : 0 - } ], - "subscriptions" : { - "test" : { - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "bytesOutCounter" : 0, - "msgOutCounter" : 0, - "msgRateRedeliver" : 0.0, - "chuckedMessageRate" : 0, - "chunkedMessageRate" : 0, - "msgBacklog" : 144318, - "msgBacklogNoDelayed" : 144318, - "blockedSubscriptionOnUnackedMsgs" : false, - "msgDelayed" : 0, - "unackedMessages" : 0, - "msgRateExpired" : 0.0, - "lastExpireTimestamp" : 0, - "lastConsumedFlowTimestamp" : 0, - "lastConsumedTimestamp" : 0, - "lastAckedTimestamp" : 0, - "consumers" : [ ], - "isDurable" : true, - "isReplicated" : false - } - }, - "replication" : { }, - "metadata" : { - "partitions" : 3 - }, - "partitions" : { } -} - -``` - -You can check the current statistics of a given partitioned topic and its connected producers and consumers in the following ways. 
- -````mdx-code-block - - - -```shell - -$ pulsar-admin topics partitioned-stats \ - persistent://test-tenant/namespace/topic \ - --per-partition - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/partitioned-stats|operation/getPartitionedStats?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getPartitionedStats(topic, true /* per partition */, false /* is precise backlog */); - -``` - - - - -```` - -### Internal stats - -You can check the detailed statistics of a topic. The following is an example. For description of each stats, refer to [get internal stats](#get-internal-stats). - -```json - -{ - "entriesAddedCounter": 20449518, - "numberOfEntries": 3233, - "totalSize": 331482, - "currentLedgerEntries": 3233, - "currentLedgerSize": 331482, - "lastLedgerCreatedTimestamp": "2016-06-29 03:00:23.825", - "lastLedgerCreationFailureTimestamp": null, - "waitingCursorsCount": 1, - "pendingAddEntriesCount": 0, - "lastConfirmedEntry": "324711539:3232", - "state": "LedgerOpened", - "ledgers": [ - { - "ledgerId": 324711539, - "entries": 0, - "size": 0 - } - ], - "cursors": { - "my-subscription": { - "markDeletePosition": "324711539:3133", - "readPosition": "324711539:3233", - "waitingReadOp": true, - "pendingReadOps": 0, - "messagesConsumedCounter": 20449501, - "cursorLedger": 324702104, - "cursorLedgerLastEntry": 21, - "individuallyDeletedMessages": "[(324711539:3134‥324711539:3136], (324711539:3137‥324711539:3140], ]", - "lastLedgerSwitchTimestamp": "2016-06-29 01:30:19.313", - "state": "Open" - } - } -} - -``` - -You can get the internal stats for the partitioned topic in the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics stats-internal \ - persistent://test-tenant/namespace/topic - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/internalStats|operation/getInternalStats?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getInternalStats(topic); - -``` - - - - -```` - -### Get backlog size - -You can get backlog size of a single topic partition or a nonpartitioned topic given a message ID (in bytes). - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics get-backlog-size \ - -m 1:1 \ - persistent://test-tenant/ns1/tp1-partition-0 \ - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/:schema/:tenant/:namespace/:topic/backlogSize|operation/getBacklogSizeByMessageId?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -MessageId messageId = MessageId.earliest; -admin.topics().getBacklogSizeByMessageId(topic, messageId); - -``` - - - - -```` - -## Publish to partitioned topics - -By default, Pulsar topics are served by a single broker, which limits the maximum throughput of a topic. *Partitioned topics* can span multiple brokers and thus allow for higher throughput. - -You can publish to partitioned topics using Pulsar client libraries. When publishing to partitioned topics, you must specify a routing mode. If you do not specify any routing mode when you create a new producer, the round robin routing mode is used. - -### Routing mode - -You can specify the routing mode in the ProducerConfiguration object that you use to configure your producer. The routing mode determines which partition(internal topic) that each message should be published to. - -The following {@inject: javadoc:MessageRoutingMode:/client/org/apache/pulsar/client/api/MessageRoutingMode} options are available. 
-
-Mode | Description
-:--------|:------------
-`RoundRobinPartition` | If no key is provided, the producer publishes messages across all partitions in a round-robin policy to achieve maximum throughput. Round-robin is not done per individual message; instead, the producer keeps the same partition for the duration of the batching delay so that batching remains effective. If a key is specified on the message, the partitioned producer hashes the key and assigns the message to a particular partition. This is the default mode.
-`SinglePartition` | If no key is provided, the producer picks a single partition randomly and publishes all messages into that partition. If a key is specified on the message, the partitioned producer hashes the key and assigns the message to a particular partition.
-`CustomPartition` | Use a custom message router implementation that is called to determine the partition for a particular message. You can create a custom routing mode by using the Java client and implementing the {@inject: javadoc:MessageRouter:/client/org/apache/pulsar/client/api/MessageRouter} interface.
-
-The following is an example:
-
-```java
-
-String pulsarBrokerRootUrl = "pulsar://localhost:6650";
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-
-PulsarClient pulsarClient = PulsarClient.builder().serviceUrl(pulsarBrokerRootUrl).build();
-Producer<byte[]> producer = pulsarClient.newProducer()
-        .topic(topic)
-        .messageRoutingMode(MessageRoutingMode.SinglePartition)
-        .create();
-producer.send("Partitioned topic message".getBytes());
-
-```
-
-### Custom message router
-
-To use a custom message router, you need to provide an implementation of the {@inject: javadoc:MessageRouter:/client/org/apache/pulsar/client/api/MessageRouter} interface, which has just one `choosePartition` method:
-
-```java
-
-public interface MessageRouter extends Serializable {
-    int choosePartition(Message msg);
-}
-
-```
-
-The following router routes every message to partition 10:
-
-```java
-
-public class AlwaysTenRouter implements MessageRouter {
-    public int choosePartition(Message msg) {
-        return 10;
-    }
-}
-
-```
-
-With that implementation in place, you can send messages as follows:
-
-```java
-
-String pulsarBrokerRootUrl = "pulsar://localhost:6650";
-String topic = "persistent://my-tenant/my-cluster-my-namespace/my-topic";
-
-PulsarClient pulsarClient = PulsarClient.builder().serviceUrl(pulsarBrokerRootUrl).build();
-Producer<byte[]> producer = pulsarClient.newProducer()
-        .topic(topic)
-        .messageRouter(new AlwaysTenRouter())
-        .create();
-producer.send("Partitioned topic message".getBytes());
-
-```
-
-### How to choose partitions when using a key
-
-If a message has a key, the key supersedes the round-robin routing policy. The following snippet from the Java client illustrates how the partition is chosen when a key is present.
-
-```java
-
-// If the message has a key, it supersedes the round robin routing policy
-    if (msg.hasKey()) {
-        return signSafeMod(hash.makeHash(msg.getKey()), topicMetadata.numPartitions());
-    }
-
-    if (isBatchingEnabled) { // if batching is enabled, choose partition on `partitionSwitchMs` boundary. 
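-        // Note (comment added for clarity): every message produced within the same
-        // `partitionSwitchMs` window maps to the same partition index, so a full
-        // batch accumulates for one partition before the router switches to the
-        // next partition in the round-robin sequence.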
-        long currentMs = clock.millis();
-        return signSafeMod(currentMs / partitionSwitchMs + startPtnIdx, topicMetadata.numPartitions());
-    } else {
-        return signSafeMod(PARTITION_INDEX_UPDATER.getAndIncrement(this), topicMetadata.numPartitions());
-    }
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/administration-dashboard.md b/site2/website/versioned_docs/version-2.9.1-deprecated/administration-dashboard.md
deleted file mode 100644
index 92bd7e17869d7b..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/administration-dashboard.md
+++ /dev/null
@@ -1,76 +0,0 @@
----
-id: administration-dashboard
-title: Pulsar dashboard
-sidebar_label: "Dashboard"
-original_id: administration-dashboard
----
-
-:::note
-
-Pulsar dashboard is deprecated. We recommend you use [Pulsar Manager](administration-pulsar-manager.md) to manage and monitor the stats of your topics.
-
-:::
-
-Pulsar dashboard is a web application that enables users to monitor current stats for all [topics](reference-terminology.md#topic) in tabular form.
-
-The dashboard is a data collector that polls stats from all the brokers in a Pulsar instance (across multiple clusters) and stores all the information in a [PostgreSQL](https://www.postgresql.org/) database.
-
-You can use the [Django](https://www.djangoproject.com) web app to render the collected data.
-
-## Install
-
-The easiest way to use the dashboard is to run it inside a [Docker](https://www.docker.com/products/docker) container.
-
-```shell
-
-$ SERVICE_URL=http://broker.example.com:8080/
-$ docker run -p 80:80 \
-  -e SERVICE_URL=$SERVICE_URL \
-  apachepulsar/pulsar-dashboard:@pulsar:version@
-
-```
-
-You can find the {@inject: github:Dockerfile:/dashboard/Dockerfile} in the `dashboard` directory and build an image from scratch as well:
-
-```shell
-
-$ docker build -t apachepulsar/pulsar-dashboard dashboard
-
-```
-
-If token authentication is enabled:
-> The provided token should have super-user access.
-
-```shell
-
-$ SERVICE_URL=http://broker.example.com:8080/
-$ JWT_TOKEN=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c
-$ docker run -p 80:80 \
-  -e SERVICE_URL=$SERVICE_URL \
-  -e JWT_TOKEN=$JWT_TOKEN \
-  apachepulsar/pulsar-dashboard
-
-```
-
-
-You need to specify only one service URL for a Pulsar cluster. Internally, the collector figures out all the existing clusters and the brokers from which it needs to pull the metrics. If you connect the dashboard to Pulsar running in standalone mode, the URL is `http://<standalone-host>:8080` by default, where `<standalone-host>` is the IP address or hostname of the machine that runs Pulsar standalone. The IP address or hostname should be accessible from the dashboard running in the Docker instance.
-
-Once the Docker container starts, the web dashboard is accessible via `localhost` or whichever host Docker uses.
-
-> The `SERVICE_URL` that the dashboard uses needs to be reachable from inside the Docker container.
-
-If the Pulsar service runs in standalone mode on `localhost`, the `SERVICE_URL` has to
-be the IP address of the machine.
-
-Similarly, given that Pulsar standalone advertises itself with localhost by default, you need to
-explicitly set the advertised address to the host IP address. For example:
-
-```shell
-
-$ bin/pulsar standalone --advertised-address 1.2.3.4
-
-```
-
-### Known issues
-
-Currently, only Pulsar Token [authentication](security-overview.md#authentication-providers) is supported. 
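-
-If the dashboard loads but shows no stats, a quick reachability check usually narrows the problem down. The following snippet is only a sketch: it assumes `curl` is available on the host and inside the container, uses `/admin/v2/clusters` merely as a cheap admin endpoint to probe, and `<dashboard-container>` is a placeholder for your container name or ID:
-
-```shell
-
-# From the host: the broker admin endpoint should answer with the list of clusters.
-$ curl -s http://broker.example.com:8080/admin/v2/clusters
-
-# From inside the dashboard container: the same SERVICE_URL must be reachable here too.
-$ docker exec <dashboard-container> curl -s http://broker.example.com:8080/admin/v2/clusters
-
-```
-
-If the first command succeeds but the second fails, the `SERVICE_URL` is not resolvable or routable from inside Docker, which is the misconfiguration described above.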
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/administration-geo.md b/site2/website/versioned_docs/version-2.9.1-deprecated/administration-geo.md
deleted file mode 100644
index 1d2a9620007f4b..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/administration-geo.md
+++ /dev/null
@@ -1,238 +0,0 @@
----
-id: administration-geo
-title: Pulsar geo-replication
-sidebar_label: "Geo-replication"
-original_id: administration-geo
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-*Geo-replication* is the replication of persistently stored message data across multiple clusters of a Pulsar instance.
-
-## How geo-replication works
-
-The diagram below illustrates the process of geo-replication across Pulsar clusters:
-
-![Replication Diagram](/assets/geo-replication.png)
-
-In this diagram, whenever **P1**, **P2**, and **P3** producers publish messages to the **T1** topic on **Cluster-A**, **Cluster-B**, and **Cluster-C** clusters respectively, those messages are instantly replicated across clusters. Once the messages are replicated, **C1** and **C2** consumers can consume those messages from their respective clusters.
-
-Without geo-replication, **C1** and **C2** consumers are not able to consume messages that the **P3** producer publishes.
-
-## Geo-replication and Pulsar properties
-
-You must enable geo-replication on a per-tenant basis in Pulsar. You can enable geo-replication between clusters only when a tenant is created that allows access to both clusters.
-
-Although geo-replication must be enabled between two clusters, it is actually managed at the namespace level. You must complete the following tasks to enable geo-replication for a namespace:
-
-* [Enable geo-replication namespaces](#enable-geo-replication-namespaces)
-* Configure that namespace to replicate across two or more provisioned clusters
-
-Any message published on *any* topic in that namespace is replicated to all clusters in the specified set.
-
-## Local persistence and forwarding
-
-When messages are produced on a Pulsar topic, messages are first persisted in the local cluster, and then forwarded asynchronously to the remote clusters.
-
-In normal conditions, when there are no connectivity issues, messages are replicated immediately, at the same time as they are dispatched to local consumers. Typically, the network [round-trip time](https://en.wikipedia.org/wiki/Round-trip_delay_time) (RTT) between the remote regions defines end-to-end delivery latency.
-
-Applications can create producers and consumers in any of the clusters, even when the remote clusters are not reachable (for example, during a network partition).
-
-Producers and consumers can publish messages to and consume messages from any cluster in a Pulsar instance. However, subscriptions are not necessarily local to the cluster where they are created; they can also be transferred between clusters after replicated subscriptions are enabled. Once replicated subscriptions are enabled, subscription state is kept in synchronization across clusters. Therefore, a topic can be asynchronously replicated across multiple geographical regions. In case of failover, a consumer can restart consuming messages from the failure point in a different cluster.
-
-In the aforementioned example, the **T1** topic is replicated among three clusters, **Cluster-A**, **Cluster-B**, and **Cluster-C**.
-
-All messages produced in any of the three clusters are delivered to all subscriptions in other clusters. 
In this case, **C1** and **C2** consumers receive all messages that **P1**, **P2**, and **P3** producers publish. Ordering is still guaranteed on a per-producer basis. - -## Configure replication - -The following example connects three clusters: **us-east**, **us-west**, and **us-cent**. - -### Connect replication clusters - -To replicate data among clusters, you need to configure each cluster to connect to the other. You can use the [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/) tool to create a connection. - -**Example** - -Suppose that you have 3 replication clusters: `us-west`, `us-cent`, and `us-east`. - -1. Configure the connection from `us-west` to `us-east`. - - Run the following command on `us-west`. - -```shell - -$ bin/pulsar-admin clusters create \ - --broker-url pulsar://: \ - --url http://: \ - us-east - -``` - - :::tip - - - If you want to use a secure connection for a cluster, you can use the flags `--broker-url-secure` and `--url-secure`. For more information, see [pulsar-admin clusters create](https://pulsar.apache.org/tools/pulsar-admin/). - - Different clusters may have different authentications. You can use the authentication flag `--auth-plugin` and `--auth-parameters` together to set cluster authentication, which overrides `brokerClientAuthenticationPlugin` and `brokerClientAuthenticationParameters` if `authenticationEnabled` sets to `true` in `broker.conf` and `standalone.conf`. For more information, see [authentication and authorization](concepts-authentication.md). - - ::: - -2. Configure the connection from `us-west` to `us-cent`. - - Run the following command on `us-west`. - -```shell - -$ bin/pulsar-admin clusters create \ - --broker-url pulsar://: \ - --url http://: \ - us-cent - -``` - -3. Run similar commands on `us-east` and `us-cent` to create connections among clusters. - -### Grant permissions to properties - -To replicate to a cluster, the tenant needs permission to use that cluster. You can grant permission to the tenant when you create the tenant or grant later. - -Specify all the intended clusters when you create a tenant: - -```shell - -$ bin/pulsar-admin tenants create my-tenant \ - --admin-roles my-admin-role \ - --allowed-clusters us-west,us-east,us-cent - -``` - -To update permissions of an existing tenant, use `update` instead of `create`. - -### Enable geo-replication - -You can enable geo-replication at **namespace** or **topic** level. - -#### Enable geo-replication at namespace level - -You can create a namespace with the following command sample. - -```shell - -$ bin/pulsar-admin namespaces create my-tenant/my-namespace - -``` - -Initially, the namespace is not assigned to any cluster. You can assign the namespace to clusters using the `set-clusters` subcommand: - -```shell - -$ bin/pulsar-admin namespaces set-clusters my-tenant/my-namespace \ - --clusters us-west,us-east,us-cent - -``` - -### Use topics with geo-replication - -Once you create a geo-replication namespace, any topics that producers or consumers create within that namespace is replicated across clusters. Typically, each application uses the `serviceUrl` for the local cluster. - -#### Selective replication - -By default, messages are replicated to all clusters configured for the namespace. You can restrict replication selectively by specifying a replication list for a message, and then that message is replicated only to the subset in the replication list. - -The following is an example for the [Java API](client-libraries-java.md). 
Note the use of the `setReplicationClusters` method when you construct the {@inject: javadoc:Message:/client/org/apache/pulsar/client/api/Message} object: - -```java - -List restrictReplicationTo = Arrays.asList( - "us-west", - "us-east" -); - -Producer producer = client.newProducer() - .topic("some-topic") - .create(); - -producer.newMessage() - .value("my-payload".getBytes()) - .setReplicationClusters(restrictReplicationTo) - .send(); - -``` - -#### Topic stats - -You can check topic-specific statistics for geo-replication topics using one of the following methods. - -````mdx-code-block - - - -Use the [`pulsar-admin topics stats`](https://pulsar.apache.org/tools/pulsar-admin/) command. - -```shell - -$ bin/pulsar-admin topics stats persistent://my-tenant/my-namespace/my-topic - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/stats|operation/getStats?version=@pulsar:version_number@} - - - - -```` - -Each cluster reports its own local stats, including the incoming and outgoing replication rates and backlogs. - -#### Delete a geo-replication topic - -Given that geo-replication topics exist in multiple regions, directly deleting a geo-replication topic is not possible. Instead, you should rely on automatic topic garbage collection. - -In Pulsar, a topic is automatically deleted when the topic meets the following three conditions: -- no producers or consumers are connected to it; -- no subscriptions to it; -- no more messages are kept for retention. -For geo-replication topics, each region uses a fault-tolerant mechanism to decide when deleting the topic locally is safe. - -You can explicitly disable topic garbage collection by setting `brokerDeleteInactiveTopicsEnabled` to `false` in your [broker configuration](reference-configuration.md#broker). - -To delete a geo-replication topic, close all producers and consumers on the topic, and delete all of its local subscriptions in every replication cluster. When Pulsar determines that no valid subscription for the topic remains across the system, it will garbage collect the topic. - -## Replicated subscriptions - -Pulsar supports replicated subscriptions, so you can keep subscription state in sync, within a sub-second timeframe, in the context of a topic that is being asynchronously replicated across multiple geographical regions. - -In case of failover, a consumer can restart consuming from the failure point in a different cluster. - -### Enable replicated subscription - -Replicated subscription is disabled by default. You can enable replicated subscription when creating a consumer. - -```java - -Consumer consumer = client.newConsumer(Schema.STRING) - .topic("my-topic") - .subscriptionName("my-subscription") - .replicateSubscriptionState(true) - .subscribe(); - -``` - -### Advantages - - * It is easy to implement the logic. - * You can choose to enable or disable replicated subscription. - * When you enable it, the overhead is low, and it is easy to configure. - * When you disable it, the overhead is zero. - -### Limitations - -When you enable replicated subscription, you're creating a consistent distributed snapshot to establish an association between message ids from different clusters. The snapshots are taken periodically. The default value is `1 second`. It means that a consumer failing over to a different cluster can potentially receive 1 second of duplicates. You can also configure the frequency of the snapshot in the `broker.conf` file. 
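-
-The following is a minimal sketch of tuning that frequency, assuming the `replicatedSubscriptionsSnapshotFrequencyMillis` key in `broker.conf` (the key name is an assumption; verify it against your broker's `broker.conf` before relying on it):
-
-```shell
-
-# Hedged sketch: lower the assumed replicated-subscription snapshot interval
-# from its default of 1000 ms to 500 ms in broker.conf.
-sed -i 's/^replicatedSubscriptionsSnapshotFrequencyMillis=.*/replicatedSubscriptionsSnapshotFrequencyMillis=500/' conf/broker.conf
-
-```
-
-A shorter interval reduces the window of potential duplicates after a failover, at the cost of slightly more snapshot traffic between clusters.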
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/administration-isolation.md b/site2/website/versioned_docs/version-2.9.1-deprecated/administration-isolation.md
deleted file mode 100644
index d2de042a2e7415..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/administration-isolation.md
+++ /dev/null
@@ -1,115 +0,0 @@
----
-id: administration-isolation
-title: Pulsar isolation
-sidebar_label: "Pulsar isolation"
-original_id: administration-isolation
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-In an organization, a Pulsar instance provides services to multiple teams. When organizing the resources across multiple teams, you want to make a suitable isolation plan to avoid resource competition between different teams and applications and to provide a high-quality messaging service. In this case, you need to take resource isolation into consideration and weigh your intended actions against expected and unexpected consequences.
-
-To enforce resource isolation, you can use the Pulsar isolation policy, which allows you to allocate resources (**brokers** and **bookies**) to a namespace.
-
-## Broker isolation
-
-In Pulsar, when namespaces (more specifically, namespace bundles) are assigned dynamically to brokers, the namespace isolation policy limits the set of brokers that can be used for assignment. Before topics are assigned to brokers, you can set the namespace isolation policy with a primary or a secondary regex to select desired brokers.
-
-You can set a namespace isolation policy for a cluster using one of the following methods.
-
-````mdx-code-block
-<Tabs
-  defaultValue="Admin CLI"
-  values={[{"label":"Admin CLI","value":"Admin CLI"},{"label":"REST API","value":"REST API"},{"label":"Java admin API","value":"Java admin API"}]}>
-
-<TabItem value="Admin CLI">
-
-```
-
-pulsar-admin ns-isolation-policy set options
-
-```
-
-For more information about the command `pulsar-admin ns-isolation-policy set options`, see [here](https://pulsar.apache.org/tools/pulsar-admin/).
-
-**Example**
-
-```shell
-
-bin/pulsar-admin ns-isolation-policy set \
---auto-failover-policy-type min_available \
---auto-failover-policy-params min_limit=1,usage_threshold=80 \
---namespaces my-tenant/my-namespace \
---primary 10.193.216.* my-cluster policy-name
-
-```
-
-</TabItem>
-<TabItem value="REST API">
-
-[PUT /admin/v2/namespaces/{tenant}/{namespace}](https://pulsar.apache.org/admin-rest-api/?version=master&apiversion=v2#operation/createNamespace)
-
-</TabItem>
-<TabItem value="Java admin API">
-
-For how to set namespace isolation policy using Java admin API, see [here](https://github.com/apache/pulsar/blob/master/pulsar-client-admin/src/main/java/org/apache/pulsar/client/admin/internal/NamespacesImpl.java#L251).
-
-</TabItem>
-
-</Tabs>
-````
-
-## Bookie isolation
-
-A namespace can be isolated into user-defined groups of bookies, which guarantees all the data that belongs to the namespace is stored in desired bookies. The bookie affinity group uses the BookKeeper [rack-aware placement policy](https://bookkeeper.apache.org/docs/latest/api/javadoc/org/apache/bookkeeper/client/EnsemblePlacementPolicy.html); it is a way to feed in rack information, which is stored in JSON format in a znode.
-
-You can set a bookie affinity group using one of the following methods.
-
-````mdx-code-block
-<Tabs
-  defaultValue="Admin CLI"
-  values={[{"label":"Admin CLI","value":"Admin CLI"},{"label":"REST API","value":"REST API"},{"label":"Java admin API","value":"Java admin API"}]}>
-
-<TabItem value="Admin CLI">
-
-```
-
-pulsar-admin namespaces set-bookie-affinity-group options
-
-```
-
-For more information about the command `pulsar-admin namespaces set-bookie-affinity-group options`, see [here](https://pulsar.apache.org/tools/pulsar-admin/).
-
-**Example**
-
-```shell
-
-bin/pulsar-admin bookies set-bookie-rack \
---bookie 127.0.0.1:3181 \
---hostname 127.0.0.1:3181 \
---group group-bookie1 \
---rack rack1
-
-bin/pulsar-admin namespaces set-bookie-affinity-group public/default \
---primary-group group-bookie1
-
-```
-
-</TabItem>
-<TabItem value="REST API">
-
-[POST /admin/v2/namespaces/{tenant}/{namespace}/persistence/bookieAffinity](https://pulsar.apache.org/admin-rest-api/?version=master&apiversion=v2#operation/setBookieAffinityGroup)
-
-</TabItem>
-<TabItem value="Java admin API">
-
-For how to set a bookie affinity group for a namespace using the Java admin API, see [here](https://github.com/apache/pulsar/blob/master/pulsar-client-admin/src/main/java/org/apache/pulsar/client/admin/internal/NamespacesImpl.java#L1164).
-
-</TabItem>
-
-</Tabs>
-````
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/administration-load-balance.md b/site2/website/versioned_docs/version-2.9.1-deprecated/administration-load-balance.md
deleted file mode 100644
index 788c84a59317b0..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/administration-load-balance.md
+++ /dev/null
@@ -1,250 +0,0 @@
----
-id: administration-load-balance
-title: Pulsar load balance
-sidebar_label: "Load balance"
-original_id: administration-load-balance
----
-
-## Load balance across Pulsar brokers
-
-Pulsar is a horizontally scalable messaging system, so the traffic in a logical cluster must be balanced across all the available Pulsar brokers as evenly as possible, which is a core requirement.
-
-You can use multiple settings and tools to control the traffic distribution, but understanding them requires a bit of context about how traffic is managed in Pulsar. In most cases, though, the core requirement mentioned above is met out of the box and you should not need to worry about it.
-
-## Pulsar load manager architecture
-
-The following section introduces the basic architecture of the Pulsar load manager.
-
-### Assign topics to brokers dynamically
-
-Topics are dynamically assigned to brokers based on the load conditions of all brokers in the cluster.
-
-When a client starts using new topics that are not assigned to any broker, a process is triggered to choose the best suited broker to acquire ownership of these topics according to the load conditions.
-
-In case of partitioned topics, different partitions are assigned to different brokers. Here "topic" means either a non-partitioned topic or one partition of a topic.
-
-The assignment is "dynamic" because it can change quickly. For example, if the broker owning a topic crashes, the topic is reassigned immediately to another broker. Another scenario is that the broker owning a topic becomes overloaded. In this case, the topic is reassigned to a less loaded broker.
-
-The stateless nature of brokers makes the dynamic assignment possible, so you can quickly expand or shrink the cluster based on usage.
-
-#### Assignment granularity
-
-The assignment of topics or partitions to brokers is not done at the topic or partition level, but at the bundle level (a higher level). The reason is to amortize the amount of information that you need to keep track of. Based on CPU, memory, traffic load, and other indicators, topics are assigned to a particular broker dynamically.
-
-Instead of individual topic or partition assignment, each broker takes ownership of a subset of the topics for a namespace. This subset is called a "*bundle*" and effectively this subset is a sharding mechanism.
-
-The namespace is the "administrative" unit: many config knobs or operations are done at the namespace level.
-
-For assignment, a namespace is sharded into a list of "bundles", with each bundle comprising a portion of the overall hash range of the namespace.
-
-Topics are assigned to a particular bundle by taking the hash of the topic name and checking which bundle the hash falls into.
-
-Each bundle is independent of the others and thus is independently assigned to different brokers.
-
-### Create namespaces and bundles
-
-When you create a new namespace, the new namespace is set to use the default number of bundles. You can set this in `conf/broker.conf`:
-
-```properties
-
-# When a namespace is created without specifying the number of bundles, this
-# value will be used as the default
-defaultNumberOfNamespaceBundles=4
-
-```
-
-You can either change the system default, or override it when you create a new namespace:
-
-```shell
-
-$ bin/pulsar-admin namespaces create my-tenant/my-namespace --clusters us-west --bundles 16
-
-```
-
-With this command, you create a namespace with 16 initial bundles. Therefore the topics for this namespace can immediately be spread across up to 16 brokers.
-
-In general, if you know the expected traffic and number of topics in advance, it is better to start with a reasonable number of bundles instead of waiting for the system to auto-correct the distribution.
-
-On the same note, it is beneficial to start with more bundles than the number of brokers, because of the hashing nature of the distribution of topics into bundles. For example, for a namespace with 1000 topics, using something like 64 bundles achieves a good distribution of traffic across 16 brokers.
-
-### Unload topics and bundles
-
-You can "unload" a topic in Pulsar with an admin operation. Unloading means closing the topic, releasing ownership, and reassigning the topic to a new broker, based on current load.
-
-When unloading happens, the client experiences a small latency blip, typically in the order of tens of milliseconds, while the topic is reassigned.
-
-Unloading is the mechanism that the load manager uses to perform load shedding, but you can also trigger the unloading manually, for example to correct the assignments and redistribute traffic even before having any broker overloaded.
-
-Unloading a single topic has no effect on the overall assignment; it just closes and reopens that particular topic:
-
-```shell
-
-pulsar-admin topics unload persistent://tenant/namespace/topic
-
-```
-
-To unload all topics for a namespace and trigger reassignments:
-
-```shell
-
-pulsar-admin namespaces unload tenant/namespace
-
-```
-
-### Split namespace bundles
-
-Since the load for the topics in a bundle might change over time and predicting the load might be hard, bundle split is designed to deal with these issues. The broker splits a bundle into two, and the new, smaller bundles can be reassigned to different brokers.
-
-The splitting is based on some tunable thresholds. Any existing bundle that exceeds any of the thresholds is a candidate to be split. By default the newly split bundles are also immediately offloaded to other brokers, to facilitate the traffic distribution.
-
-You can split namespace bundles in two ways, by setting `supportedNamespaceBundleSplitAlgorithms` to `range_equally_divide` or `topic_count_equally_divide` in the `broker.conf` file. The former splits the bundle into two parts with the same hash range size; the latter splits the bundle into two parts with the same number of topics.
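-
-To see where a topic lands and to trigger a split manually, the following sketch uses the `pulsar-admin topics bundle-range` and `pulsar-admin namespaces split-bundle` subcommands (both exist in recent `pulsar-admin` releases; the tenant, namespace, topic, and bundle range shown are illustrative):
-
-```shell
-
-# Show which bundle range the topic's name hashes into
-bin/pulsar-admin topics bundle-range persistent://my-tenant/my-namespace/my-topic
-
-# Manually split that bundle; the range must match an existing bundle
-bin/pulsar-admin namespaces split-bundle my-tenant/my-namespace \
-  --bundle 0x00000000_0x40000000
-
-```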
-You can also configure other parameters for namespace bundles.
-
-```properties
-
-# enable/disable namespace bundle auto split
-loadBalancerAutoBundleSplitEnabled=true
-
-# enable/disable automatic unloading of split bundles
-loadBalancerAutoUnloadSplitBundlesEnabled=true
-
-# maximum topics in a bundle, otherwise bundle split will be triggered
-loadBalancerNamespaceBundleMaxTopics=1000
-
-# maximum sessions (producers + consumers) in a bundle, otherwise bundle split will be triggered
-loadBalancerNamespaceBundleMaxSessions=1000
-
-# maximum msgRate (in + out) in a bundle, otherwise bundle split will be triggered
-loadBalancerNamespaceBundleMaxMsgRate=30000
-
-# maximum bandwidth (in + out) in a bundle, otherwise bundle split will be triggered
-loadBalancerNamespaceBundleMaxBandwidthMbytes=100
-
-# maximum number of bundles in a namespace (for auto-split)
-loadBalancerNamespaceMaximumBundles=128
-
-```
-
-### Shed load automatically
-
-The support for automatic load shedding is available in the load manager of Pulsar. This means that whenever the system recognizes a particular broker is overloaded, the system forces some traffic to be reassigned to less loaded brokers.
-
-When a broker is identified as overloaded, it forces the "unload" of a subset of the bundles, the ones with the highest traffic, that account for the overload percentage.
-
-For example, the default threshold is 85% and if a broker is over quota at 95% CPU usage, then the broker unloads the percent difference plus a 5% margin: `(95% - 85%) + 5% = 15%`.
-
-Since the selection of bundles to offload is based on traffic (as a proxy measure for CPU, network, and memory), the broker unloads bundles accounting for at least 15% of its traffic.
-
-Automatic load shedding is enabled by default, and you can disable it with this setting:
-
-```properties
-
-# Enable/disable automatic bundle unloading for load-shedding
-loadBalancerSheddingEnabled=true
-
-```
-
-Additional settings that apply to shedding:
-
-```properties
-
-# Load shedding interval. Broker periodically checks whether some traffic should be offloaded from
-# some over-loaded broker to other under-loaded brokers
-loadBalancerSheddingIntervalMinutes=1
-
-# Prevent the same topics from being shed and moved to other brokers more than once within this timeframe
-loadBalancerSheddingGracePeriodMinutes=30
-
-```
-
-#### Broker overload thresholds
-
-The determination of when a broker is overloaded is based on thresholds of CPU, network, and memory usage. Whenever either of those metrics reaches the threshold, the system triggers the shedding (if enabled).
-
-By default, the overload threshold is set at 85%:
-
-```properties
-
-# Usage threshold to determine a broker as over-loaded
-loadBalancerBrokerOverloadedThresholdPercentage=85
-
-```
-
-Pulsar gathers the usage stats from the system metrics.
-
-For network utilization, in some cases the network interface speed that Linux reports is not correct and needs to be manually overridden. This is the case for AWS EC2 instances with a 1 Gbps NIC speed, for which the OS reports a 10 Gbps speed.
-
-Because of the incorrect max speed, the Pulsar load manager might think the broker has not reached the NIC capacity, while in fact the broker already uses all the bandwidth and the traffic is slowed down.
-
-You can use the following setting to correct the max NIC speed:
-
-```properties
-
-# Override the auto-detection of the network interfaces max speed.
-# This option is useful in some environments (eg: EC2 VMs) where the max speed
-# reported by Linux is not reflecting the real bandwidth available to the broker.
-# Since the network usage is employed by the load manager to decide when a broker
-# is overloaded, it is important to make sure the info is correct or override it
-# with the right value here. The configured value can be a double (eg: 0.8) and that
-# can be used to trigger load-shedding even before hitting the NIC limits.
-loadBalancerOverrideBrokerNicSpeedGbps=
-
-```
-
-When the value is empty, Pulsar uses the value that the OS reports.
-
-### Distribute anti-affinity namespaces across failure domains
-
-When your application has multiple namespaces and you want one of them available all the time to avoid any downtime, you can group these namespaces and distribute them across different [failure domains](reference-terminology.md#failure-domain) and different brokers. Thus, if one of the failure domains is down (due to a release rollout or broker restarts), it only disrupts namespaces owned by that specific failure domain, and the rest of the namespaces owned by other domains remain available without any impact.
-
-Such a group of namespaces has anti-affinity to each other, that is, all the namespaces in this group are [anti-affinity namespaces](reference-terminology.md#anti-affinity-namespaces) and are distributed to different failure domains in a load-balanced manner.
-
-As illustrated in the following figure, Pulsar has 2 failure domains (Domain1 and Domain2) and each domain has 2 brokers in it. You can create an anti-affinity namespace group that has 4 namespaces in it, and all the 4 namespaces have anti-affinity to each other. The load manager tries to distribute namespaces evenly across all the brokers in the same domain. Since each domain has 2 brokers, every broker owns one namespace from this anti-affinity namespace group, and you can see each domain owns 2 namespaces, and each broker owns 1 namespace.
-
-![Distribute anti-affinity namespaces across failure domains](/assets/anti-affinity-namespaces-across-failure-domains.svg)
-
-The load manager follows an even distribution policy across failure domains to assign anti-affinity namespaces. The following table outlines the even-distributed assignment sequence illustrated in the above figure.
-
-| Assignment sequence | Namespace | Failure domain candidates | Broker candidates | Selected broker |
-|:---|:------------|:------------------|:------------------------------------|:-----------------|
-| 1 | Namespace1 | Domain1, Domain2 | Broker1, Broker2, Broker3, Broker4 | Domain1:Broker1 |
-| 2 | Namespace2 | Domain2 | Broker3, Broker4 | Domain2:Broker3 |
-| 3 | Namespace3 | Domain1, Domain2 | Broker2, Broker4 | Domain1:Broker2 |
-| 4 | Namespace4 | Domain2 | Broker4 | Domain2:Broker4 |
-
-:::tip
-
-* Each namespace belongs to only one anti-affinity group. If a namespace with an existing anti-affinity assignment is assigned to another anti-affinity group, the original assignment is dropped.
-
-* If there are more anti-affinity namespaces than failure domains, the load manager distributes namespaces evenly across all the domains, and also every domain distributes namespaces evenly across all the brokers under that domain.
-
-:::
-
-#### Create a failure domain and register brokers
-
-:::note
-
-One broker can only be registered to a single failure domain.
-
-:::
-
-To create a domain under a specific cluster and register brokers, run the following command:
-
-```bash
-
-pulsar-admin clusters create-failure-domain <cluster-name> --domain-name <domain-name> --broker-list <broker-list>
-
-```
-
-You can also view, update, and delete domains under a specific cluster. For more information, refer to [Pulsar admin doc](/tools/pulsar-admin/).
-
-#### Create an anti-affinity namespace group
-
-An anti-affinity group is created automatically when the first namespace is assigned to the group. To assign a namespace to an anti-affinity group, run the following command, which sets an anti-affinity group name for a namespace.
-
-```bash
-
-pulsar-admin namespaces set-anti-affinity-group <namespace> --group <group-name>
-
-```
-
-For more information about `anti-affinity-group` related commands, refer to [Pulsar admin doc](/tools/pulsar-admin/).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/administration-proxy.md b/site2/website/versioned_docs/version-2.9.1-deprecated/administration-proxy.md
deleted file mode 100644
index 1657e4f88ce825..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/administration-proxy.md
+++ /dev/null
@@ -1,86 +0,0 @@
----
-id: administration-proxy
-title: Pulsar proxy
-sidebar_label: "Pulsar proxy"
-original_id: administration-proxy
----
-
-Pulsar proxy is an optional gateway that is used when direct connections between clients and Pulsar brokers are either infeasible or undesirable. For example, when you run Pulsar in a cloud environment or on [Kubernetes](https://kubernetes.io) or an analogous platform, you can run Pulsar proxy.
-
-## Configure the proxy
-
-Before using the proxy, you need to configure it with the broker addresses in the cluster. You can configure the proxy to connect directly to service discovery, or specify a broker URL in the configuration.
-
-### Use service discovery
-
-Pulsar uses [ZooKeeper](https://zookeeper.apache.org) for service discovery. To connect the proxy to ZooKeeper, specify the following in `conf/proxy.conf`.
-
-```properties
-
-zookeeperServers=zk-0,zk-1,zk-2
-configurationStoreServers=zk-0:2184,zk-remote:2184
-
-```
-
-> To use service discovery, you need to open the network ACLs, so the proxy can connect to the ZooKeeper nodes through the ZooKeeper client port (port `2181`) and the configuration store client port (port `2184`).
-
-> However, using service discovery is not secure: if the network ACL is open and someone compromises a proxy, they gain full access to ZooKeeper.
-
-### Use broker URLs
-
-It is more secure to specify a URL to connect to the brokers.
-
-Proxy authorization requires access to ZooKeeper, so if you use these broker URLs to connect to the brokers, you need to disable authorization at the proxy level. Brokers still authorize requests after the proxy forwards them.
-
-You can configure the broker URLs in `conf/proxy.conf` as follows.
-
-```properties
-
-brokerServiceURL=pulsar://brokers.example.com:6650
-brokerWebServiceURL=http://brokers.example.com:8080
-functionWorkerWebServiceURL=http://function-workers.example.com:8080
-
-```
-
-If you use TLS, configure the broker URLs in the following way:
-
-```properties
-
-brokerServiceURLTLS=pulsar+ssl://brokers.example.com:6651
-brokerWebServiceURLTLS=https://brokers.example.com:8443
-functionWorkerWebServiceURLTLS=https://function-workers.example.com:8443
-
-```
-
-The hostname in the URLs provided should be a DNS entry which points to multiple brokers, or a virtual IP address backed by multiple broker IP addresses, so that the proxy does not lose connectivity to the Pulsar cluster if a single broker becomes unavailable.
-
-The ports to connect to the brokers (6650 and 8080, or in the case of TLS, 6651 and 8443) should be open in the network ACLs.
-
-Note that if you do not use functions, you do not need to configure `functionWorkerWebServiceURL`.
-
-## Start the proxy
-
-To start the proxy:
-
-```bash
-
-$ cd /path/to/pulsar/directory
-$ bin/pulsar proxy
-
-```
-
-> You can run multiple instances of the Pulsar proxy in a cluster.
-
-## Stop the proxy
-
-Pulsar proxy runs in the foreground by default. To stop the proxy, simply stop the process in which the proxy is running.
-
-## Proxy frontends
-
-You can run Pulsar proxy behind some kind of load-distributing frontend, such as an [HAProxy](https://www.digitalocean.com/community/tutorials/an-introduction-to-haproxy-and-load-balancing-concepts) load balancer.
-
-## Use Pulsar clients with the proxy
-
-Once your Pulsar proxy is up and running, preferably behind a load-distributing [frontend](#proxy-frontends), clients can connect to the proxy via whatever address the frontend uses. If the address is the DNS address `pulsar.cluster.default`, for example, the connection URL for clients is `pulsar://pulsar.cluster.default:6650`.
-
-For more information on proxy configuration, refer to [Pulsar proxy](reference-configuration.md#pulsar-proxy).
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/administration-pulsar-manager.md b/site2/website/versioned_docs/version-2.9.1-deprecated/administration-pulsar-manager.md
deleted file mode 100644
index d877cce723e6ab..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/administration-pulsar-manager.md
+++ /dev/null
@@ -1,205 +0,0 @@
----
-id: administration-pulsar-manager
-title: Pulsar Manager
-sidebar_label: "Pulsar Manager"
-original_id: administration-pulsar-manager
----
-
-Pulsar Manager is a web-based GUI management and monitoring tool that helps administrators and users manage and monitor tenants, namespaces, topics, subscriptions, brokers, clusters, and so on, and supports dynamic configuration of multiple environments.
-
-:::note
-
-If you are monitoring your current stats with Pulsar dashboard, we recommend you use Pulsar Manager instead. Pulsar dashboard is deprecated.
-
-:::
-
-## Install
-
-The easiest way to use the Pulsar Manager is to run it inside a [Docker](https://www.docker.com/products/docker) container.
-
-```shell
-
-docker pull apachepulsar/pulsar-manager:v0.2.0
-docker run -it \
-    -p 9527:9527 -p 7750:7750 \
-    -e SPRING_CONFIGURATION_FILE=/pulsar-manager/pulsar-manager/application.properties \
-    apachepulsar/pulsar-manager:v0.2.0
-
-```
-
-* `SPRING_CONFIGURATION_FILE`: Default configuration file for Spring.
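-
-Once the container is up, you can sanity-check that both ports respond before configuring anything (plain HTTP checks against the ports and the CSRF endpoint used in the next section, not a separate Pulsar Manager API):
-
-```shell
-
-# The front end serves the UI on 9527; the back end answers on 7750
-curl -I http://localhost:9527
-curl http://localhost:7750/pulsar-manager/csrf-token
-
-```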
-
-### Set administrator account and password
-
- ```shell
-
- CSRF_TOKEN=$(curl http://localhost:7750/pulsar-manager/csrf-token)
- curl \
-     -H "X-XSRF-TOKEN: $CSRF_TOKEN" \
-     -H "Cookie: XSRF-TOKEN=$CSRF_TOKEN;" \
-     -H "Content-Type: application/json" \
-     -X PUT http://localhost:7750/pulsar-manager/users/superuser \
-     -d '{"name": "admin", "password": "apachepulsar", "description": "test", "email": "username@test.org"}'
-
- ```
-
-You can find the Docker image build files in the [docker](https://github.com/apache/pulsar-manager/tree/master/docker) directory and build an image from the source code as well:
-
-```
-
-git clone https://github.com/apache/pulsar-manager
-cd pulsar-manager/front-end
-npm install --save
-npm run build:prod
-cd ..
-./gradlew build -x test
-cd ..
-docker build -f docker/Dockerfile --build-arg BUILD_DATE=`date -u +"%Y-%m-%dT%H:%M:%SZ"` --build-arg VCS_REF=`latest` --build-arg VERSION=`latest` -t apachepulsar/pulsar-manager .
-
-```
-
-### Use custom databases
-
-If you have a large amount of data, you can use a custom database. The following is an example of PostgreSQL.
-
-1. Initialize database and table structures using the [file](https://github.com/apache/pulsar-manager/tree/master/src/main/resources/META-INF/sql/postgresql-schema.sql).
-
-2. Modify the [configuration file](https://github.com/apache/pulsar-manager/blob/master/src/main/resources/application.properties) and add PostgreSQL configuration.
-
-```
-
-spring.datasource.driver-class-name=org.postgresql.Driver
-spring.datasource.url=jdbc:postgresql://127.0.0.1:5432/pulsar_manager
-spring.datasource.username=postgres
-spring.datasource.password=postgres
-
-```
-
-3. Compile to generate a new executable JAR package.
-
-```
-
-./gradlew build -x test
-
-```
-
-### Enable JWT authentication
-
-If you want to turn on JWT authentication, configure the following parameters:
-
-* `backend.jwt.token`: token for the superuser. You need to configure this parameter during cluster initialization.
-* `jwt.broker.token.mode`: the mode used to generate tokens, one of PUBLIC, PRIVATE, and SECRET.
-* `jwt.broker.public.key`: configure this option if you use the PUBLIC mode.
-* `jwt.broker.private.key`: configure this option if you use the PRIVATE mode.
-* `jwt.broker.secret.key`: configure this option if you use the SECRET mode.
-
-For more information, see [Token Authentication Admin of Pulsar](http://pulsar.apache.org/docs/en/security-token-admin/).
-
-
-If you want to enable JWT authentication, use one of the following methods.
-
-
-* Method 1: use command-line tool
-
-```
-
-wget https://dist.apache.org/repos/dist/release/pulsar/pulsar-manager/pulsar-manager-0.2.0/apache-pulsar-manager-0.2.0-bin.tar.gz
-tar -zxvf apache-pulsar-manager-0.2.0-bin.tar.gz
-cd pulsar-manager
-tar -zxvf pulsar-manager.tar
-cd pulsar-manager
-cp -r ../dist ui
-./bin/pulsar-manager --redirect.host=http://localhost --redirect.port=9527 --insert.stats.interval=600000 --backend.jwt.token=token --jwt.broker.token.mode=PRIVATE --jwt.broker.private.key=file:///path/broker-private.key --jwt.broker.public.key=file:///path/broker-public.key
-
-```
-
-First, [set the administrator account and password](#set-administrator-account-and-password).
-
-Then log in to Pulsar Manager through http://localhost:7750/ui/index.html.
-
-* Method 2: configure the application.properties file
-
-```
-
-backend.jwt.token=token
-
-jwt.broker.token.mode=PRIVATE
-jwt.broker.public.key=file:///path/broker-public.key
-jwt.broker.private.key=file:///path/broker-private.key
-
-or
-jwt.broker.token.mode=SECRET
-jwt.broker.secret.key=file:///path/broker-secret.key
-
-```
-
-* Method 3: use Docker and enable token authentication.
-
-```
-
-export JWT_TOKEN="your-token"
-docker run -it -p 9527:9527 -p 7750:7750 -e REDIRECT_HOST=http://localhost -e REDIRECT_PORT=9527 -e DRIVER_CLASS_NAME=org.postgresql.Driver -e URL='jdbc:postgresql://127.0.0.1:5432/pulsar_manager' -e USERNAME=pulsar -e PASSWORD=pulsar -e LOG_LEVEL=DEBUG -e JWT_TOKEN=$JWT_TOKEN -v $PWD:/data apachepulsar/pulsar-manager:v0.2.0 /bin/sh
-
-```
-
-* `JWT_TOKEN`: the token of the superuser configured for the broker. It is generated by the `bin/pulsar tokens create --secret-key` or `bin/pulsar tokens create --private-key` command.
-* `REDIRECT_HOST`: the IP address of the front-end server.
-* `REDIRECT_PORT`: the port of the front-end server.
-* `DRIVER_CLASS_NAME`: the driver class name of the PostgreSQL database.
-* `URL`: the JDBC URL of your PostgreSQL database, such as jdbc:postgresql://127.0.0.1:5432/pulsar_manager. The Docker image automatically starts a local instance of the PostgreSQL database.
-* `USERNAME`: the username of PostgreSQL.
-* `PASSWORD`: the password of PostgreSQL.
-* `LOG_LEVEL`: the level of log.
-
-* Method 4: use Docker and turn on **token authentication** and **token management** by private key and public key.
-
-```
-
-export JWT_TOKEN="your-token"
-export PRIVATE_KEY="file:///pulsar-manager/secret/my-private.key"
-export PUBLIC_KEY="file:///pulsar-manager/secret/my-public.key"
-docker run -it -p 9527:9527 -p 7750:7750 -e REDIRECT_HOST=http://localhost -e REDIRECT_PORT=9527 -e DRIVER_CLASS_NAME=org.postgresql.Driver -e URL='jdbc:postgresql://127.0.0.1:5432/pulsar_manager' -e USERNAME=pulsar -e PASSWORD=pulsar -e LOG_LEVEL=DEBUG -e JWT_TOKEN=$JWT_TOKEN -e PRIVATE_KEY=$PRIVATE_KEY -e PUBLIC_KEY=$PUBLIC_KEY -v $PWD:/data -v $PWD/secret:/pulsar-manager/secret apachepulsar/pulsar-manager:v0.2.0 /bin/sh
-
-```
-
-* `JWT_TOKEN`: the token of the superuser configured for the broker. It is generated by the `bin/pulsar tokens create --private-key` command.
-* `PRIVATE_KEY`: private key path mounted in the container, generated by the `bin/pulsar tokens create-key-pair` command.
-* `PUBLIC_KEY`: public key path mounted in the container, generated by the `bin/pulsar tokens create-key-pair` command.
-* `$PWD/secret`: the folder where the private key and public key generated by the `bin/pulsar tokens create-key-pair` command are placed locally.
-* `REDIRECT_HOST`: the IP address of the front-end server.
-* `REDIRECT_PORT`: the port of the front-end server.
-* `DRIVER_CLASS_NAME`: the driver class name of the PostgreSQL database.
-* `URL`: the JDBC URL of your PostgreSQL database, such as jdbc:postgresql://127.0.0.1:5432/pulsar_manager. The Docker image automatically starts a local instance of the PostgreSQL database.
-* `USERNAME`: the username of PostgreSQL.
-* `PASSWORD`: the password of PostgreSQL.
-* `LOG_LEVEL`: the level of log.
-
-* Method 5: use Docker and turn on **token authentication** and **token management** by secret key.
-
-```
-
-export JWT_TOKEN="your-token"
-export SECRET_KEY="file:///pulsar-manager/secret/my-secret.key"
-docker run -it -p 9527:9527 -p 7750:7750 -e REDIRECT_HOST=http://localhost -e REDIRECT_PORT=9527 -e DRIVER_CLASS_NAME=org.postgresql.Driver -e URL='jdbc:postgresql://127.0.0.1:5432/pulsar_manager' -e USERNAME=pulsar -e PASSWORD=pulsar -e LOG_LEVEL=DEBUG -e JWT_TOKEN=$JWT_TOKEN -e SECRET_KEY=$SECRET_KEY -v $PWD:/data -v $PWD/secret:/pulsar-manager/secret apachepulsar/pulsar-manager:v0.2.0 /bin/sh
-
-```
-
-* `JWT_TOKEN`: the token of the superuser configured for the broker. It is generated by the `bin/pulsar tokens create --secret-key` command.
-* `SECRET_KEY`: secret key path mounted in the container, generated by the `bin/pulsar tokens create-secret-key` command.
-* `$PWD/secret`: the folder where the secret key generated by the `bin/pulsar tokens create-secret-key` command is placed locally.
-* `REDIRECT_HOST`: the IP address of the front-end server.
-* `REDIRECT_PORT`: the port of the front-end server.
-* `DRIVER_CLASS_NAME`: the driver class name of the PostgreSQL database.
-* `URL`: the JDBC URL of your PostgreSQL database, such as jdbc:postgresql://127.0.0.1:5432/pulsar_manager. The Docker image automatically starts a local instance of the PostgreSQL database.
-* `USERNAME`: the username of PostgreSQL.
-* `PASSWORD`: the password of PostgreSQL.
-* `LOG_LEVEL`: the level of log.
-
-* For more information about backend configurations, see [here](https://github.com/apache/pulsar-manager/blob/master/src/README).
-* For more information about frontend configurations, see [here](https://github.com/apache/pulsar-manager/tree/master/front-end).
-
-## Log in
-
-[Set the administrator account and password](#set-administrator-account-and-password).
-
-Visit http://localhost:9527 to log in.
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/administration-stats.md b/site2/website/versioned_docs/version-2.9.1-deprecated/administration-stats.md
deleted file mode 100644
index ac0c03602f36d5..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/administration-stats.md
+++ /dev/null
@@ -1,64 +0,0 @@
----
-id: administration-stats
-title: Pulsar stats
-sidebar_label: "Pulsar statistics"
-original_id: administration-stats
----
-
-## Partitioned topics
-
-|Stat|Description|
-|---|---|
-|msgRateIn| The sum of publish rates of all local and replication publishers in messages per second.|
-|msgThroughputIn| Same as msgRateIn but in bytes per second instead of messages per second.|
-|msgRateOut| The sum of dispatch rates of all local and replication consumers in messages per second.|
-|msgThroughputOut| Same as msgRateOut but in bytes per second instead of messages per second.|
-|averageMsgSize| Average message size, in bytes, from this publisher within the last interval.|
-|storageSize| The sum of the storage size of the ledgers for this topic.|
-|publishers| The list of all local publishers into the topic. Publishers can be anywhere from zero to thousands.|
-|producerId| Internal identifier for this producer on this topic.|
-|producerName| Internal identifier for this producer, generated by the client library.|
-|address| IP address and source port for the connection of this producer.|
-|connectedSince| Timestamp when this producer was created or last reconnected.|
-|subscriptions| The list of all local subscriptions to the topic.|
-|my-subscription| The name of this subscription (client defined).|
-|msgBacklog| The count of messages in backlog for this subscription.|
-|type| The type of this subscription.|
-|msgRateExpired| The rate at which messages are discarded instead of dispatched from this subscription due to TTL.|
-|consumers| The list of connected consumers for this subscription.|
-|consumerName| Internal identifier for this consumer, generated by the client library.|
-|availablePermits| The number of messages this consumer has space for in the client library's listen queue. A value of 0 means the client library's queue is full and receive() is not being called. A nonzero value means this consumer is ready for messages to be dispatched.|
-|replication| This section gives the stats for cross-colo replication of this topic.|
-|replicationBacklog| The outbound replication backlog in messages.|
-|connected| Whether the outbound replicator is connected.|
-|replicationDelayInSeconds| How long the oldest message has been waiting to be sent through the connection, if connected is true.|
-|inboundConnection| The IP and port of the remote cluster's broker in the publisher connection to this broker.|
-|inboundConnectedSince| The TCP connection being used to publish messages to the remote cluster. If no local publishers are connected, this connection is automatically closed after a minute.|
-
-
-## Topics
-
-|Stat|Description|
-|---|---|
-|entriesAddedCounter| Messages published since this broker loaded this topic.|
-|numberOfEntries| Total number of messages being tracked.|
-|totalSize| Total storage size in bytes of all messages.|
-|currentLedgerEntries| Count of messages written to the ledger currently open for writing.|
-|currentLedgerSize| Size in bytes of messages written to the ledger currently open for writing.|
-|lastLedgerCreatedTimestamp| Time when the last ledger was created.|
-|lastLedgerCreationFailureTimestamp| Time when the last ledger creation failed.|
-|waitingCursorsCount| How many cursors are caught up and waiting for a new message to be published.|
-|pendingAddEntriesCount| How many messages have outstanding (asynchronous) write requests awaiting completion.|
-|lastConfirmedEntry| The ledgerid:entryid of the last message successfully written. If the entryid is -1, then the ledger is open or is currently being opened but has no entries written yet.|
-|state| The state of the cursor ledger. Open means you have a cursor ledger for saving updates of the markDeletePosition.|
-|ledgers| The ordered list of all ledgers for this topic holding its messages.|
-|cursors| The list of all cursors on this topic. Every subscription you saw in the topic stats has one.|
-|markDeletePosition| The ack position: the last message the subscriber acknowledges receiving.|
-|readPosition| The latest position of the subscriber for reading messages.|
-|waitingReadOp| This is true when the subscription has read the latest message published to the topic and is waiting for new messages to be published.|
-|pendingReadOps| The counter for how many outstanding read requests to the bookies are in progress.|
-|messagesConsumedCounter| Number of messages this cursor has acknowledged since this broker loaded this topic.|
-|cursorLedger| The ledger used to persistently store the current markDeletePosition.|
-|cursorLedgerLastEntry| The last entryid used to persistently store the current markDeletePosition.|
-|individuallyDeletedMessages| If acknowledgments are done out of order, shows the ranges of messages acknowledged between the markDeletePosition and the read position.|
-|lastLedgerSwitchTimestamp| The last time the cursor ledger was rolled over.|
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/administration-upgrade.md b/site2/website/versioned_docs/version-2.9.1-deprecated/administration-upgrade.md
deleted file mode 100644
index 72d136b6460f62..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/administration-upgrade.md
+++ /dev/null
@@ -1,168 +0,0 @@
----
-id: administration-upgrade
-title: Upgrade Guide
-sidebar_label: "Upgrade"
-original_id: administration-upgrade
----
-
-## Upgrade guidelines
-
-Apache Pulsar comprises multiple components: ZooKeeper, bookies, and brokers. These components are either stateful or stateless. You do not have to upgrade ZooKeeper nodes unless you have special requirements. While you upgrade, you need to pay attention to bookies (stateful), brokers, and proxies (stateless).
-
-The following are some guidelines on upgrading a Pulsar cluster. Read the guidelines before upgrading.
-
-- Back up all your configuration files before upgrading.
-- Read this guide entirely, make a plan, and then execute the plan. When you make an upgrade plan, take your specific requirements and environment into consideration.
-- Pay attention to the upgrade order of components. In general, you do not need to upgrade your ZooKeeper or configuration store cluster (the global ZooKeeper cluster). You need to upgrade bookies first, and then upgrade brokers, proxies, and your clients.
-- If `autorecovery` is enabled, you need to disable `autorecovery` in the upgrade process, and re-enable it after completing the process.
-- Read the release notes carefully for each release. Release notes describe features and configuration changes that might impact your upgrade.
-- Upgrade a small subset of nodes of each type to canary test the new version before upgrading all nodes of that type in the cluster. When you have upgraded the canary nodes, let them run for a while to ensure that they work correctly.
-- Upgrade one data center to verify the new version before upgrading all data centers if your cluster runs in multi-cluster replicated mode.
-
-> Note: Currently, Apache Pulsar is compatible between versions.
-
-## Upgrade sequence
-
-To upgrade an Apache Pulsar cluster, follow the upgrade sequence.
-
-1. Upgrade ZooKeeper (optional)
-- Canary test: test an upgraded version in one or a small set of ZooKeeper nodes.
-- Rolling upgrade: roll out the upgraded version to all ZooKeeper servers incrementally, one at a time. Monitor your dashboard during the whole rolling upgrade process.
-2. Upgrade bookies
-- Canary test: test an upgraded version in one or a small set of bookies.
-- Rolling upgrade:
-
-  a. Disable `autorecovery` with the following command.
-
-   ```shell
-
-   bin/bookkeeper shell autorecovery -disable
-
-   ```
-
-
-  b. Roll out the upgraded version to all bookies in the cluster after you determine that the version is safe based on the canary test.
-
-  c. After you upgrade all bookies, re-enable `autorecovery` with the following command.
-
-   ```shell
-
-   bin/bookkeeper shell autorecovery -enable
-
-   ```
-
-3. Upgrade brokers
-- Canary test: test an upgraded version in one or a small set of brokers.
-- Rolling upgrade: roll out the upgraded version to all brokers in the cluster after you determine that the version is safe based on the canary test.
-4. Upgrade proxies
-- Canary test: test an upgraded version in one or a small set of proxies.
-- Rolling upgrade: roll out the upgraded version to all proxies in the cluster after you determine that the version is safe based on the canary test.
-
-## Upgrade ZooKeeper (optional)
-When you upgrade ZooKeeper servers, you can run a canary test first, and then upgrade all ZooKeeper servers in the cluster.
-
-### Canary test
-
-You can test an upgraded version on one of the ZooKeeper servers before upgrading all ZooKeeper servers in your cluster.
-
-To upgrade a ZooKeeper server to a new version, complete the following steps:
-
-1. Stop a ZooKeeper server.
-2. Upgrade the binary and configuration files.
-3. Start the ZooKeeper server with the new binary files.
-4. Use `pulsar zookeeper-shell` to connect to the newly upgraded ZooKeeper server and run a few commands to verify that it works as expected.
-5. Run the ZooKeeper server for a few days, observe it, and make sure the ZooKeeper cluster runs well.
-
-#### Canary rollback
-
-If issues occur during the canary test, you can shut down the problematic ZooKeeper node, revert the binary and configuration, and restart ZooKeeper with the reverted binary.
-
-### Upgrade all ZooKeeper servers
-
-After the canary test succeeds on one ZooKeeper server, you can upgrade all ZooKeeper servers in your cluster.
-
-You can upgrade all ZooKeeper servers one by one by following the steps in the canary test.
-
-## Upgrade bookies
-
-When you upgrade bookies, you can run a canary test first, and then upgrade all bookies in the cluster.
-For more details, you can read the Apache BookKeeper [Upgrade guide](http://bookkeeper.apache.org/docs/latest/admin/upgrade).
-
-### Canary test
-
-You can test an upgraded version on one or a small set of bookies before upgrading all bookies in your cluster.
-
-To upgrade a bookie to a new version, complete the following steps:
-
-1. Stop a bookie.
-2. Upgrade the binary and configuration files.
-3. Start the bookie in `ReadOnly` mode to verify that the new version of the bookie runs well for the read workload.
-
-   ```shell
-
-   bin/pulsar bookie --readOnly
-
-   ```
-
-4. When the bookie runs successfully in `ReadOnly` mode, stop the bookie and restart it in `Write/Read` mode.
-
-   ```shell
-
-   bin/pulsar bookie
-
-   ```
-
-5. Observe and make sure the cluster serves both write and read traffic.
-
-#### Canary rollback
-
-If issues occur during the canary test, you can shut down the problematic bookie node. Other bookies in the cluster replace this problematic bookie node via autorecovery.
-
-### Upgrade all bookies
-
-After the canary test succeeds on some bookies, you can upgrade all bookies in your cluster.
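-
-Between individual bookie restarts, it is worth confirming that no ledgers are left under-replicated before you stop the next node. A quick check (the same shell command is used in the decommissioning steps later in this guide):
-
-```shell
-
-# Should report no under-replicated ledgers before the next bookie goes down
-bin/bookkeeper shell listunderreplicated
-
-```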
-
-Before upgrading, decide whether to upgrade the whole cluster at once (a downtime scenario) or one node at a time (a rolling upgrade scenario).
-
-In a rolling upgrade scenario, upgrade one bookie at a time. In a downtime upgrade scenario, shut down the entire cluster, upgrade each bookie, and then start the cluster.
-
-In both scenarios, the procedure is the same for each bookie.
-
-1. Stop the bookie.
-2. Upgrade the software (either new binary or new configuration files).
-3. Start the bookie.
-
-> **Advanced operations**
-> When you upgrade a large BookKeeper cluster in a rolling upgrade scenario, upgrading one bookie at a time is slow. If you configure a rack-aware or region-aware placement policy, you can upgrade bookies rack by rack or region by region, which speeds up the whole upgrade process.
-
-## Upgrade brokers and proxies
-
-The upgrade procedure for brokers and proxies is the same. Brokers and proxies are `stateless`, so upgrading the two services is easy.
-
-### Canary test
-
-You can test an upgraded version on one or a small set of nodes before upgrading all nodes in your cluster.
-
-To upgrade to a new version, complete the following steps:
-
-1. Stop a broker (or proxy).
-2. Upgrade the binary and configuration file.
-3. Start a broker (or proxy).
-
-#### Canary rollback
-
-If issues occur during the canary test, you can shut down the problematic broker (or proxy) node. Revert to the old version and restart the broker (or proxy).
-
-### Upgrade all brokers or proxies
-
-After the canary test succeeds on some brokers or proxies, you can upgrade all brokers or proxies in your cluster.
-
-Before upgrading, decide whether to upgrade the whole cluster at once (a downtime scenario) or in stages (a rolling upgrade scenario).
-
-In a rolling upgrade scenario, you can upgrade one broker or one proxy at a time if the size of the cluster is small. If your cluster is large, you can upgrade brokers or proxies in batches. When you upgrade a batch of brokers or proxies, make sure the remaining brokers and proxies in the cluster have enough capacity to handle the traffic during the upgrade.
-
-In a downtime upgrade scenario, shut down the entire cluster, upgrade each broker or proxy, and then start the cluster.
-
-In both scenarios, the procedure is the same for each broker or proxy.
-
-1. Stop the broker or proxy.
-2. Upgrade the software (either new binary or new configuration files).
-3. Start the broker or proxy.
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/administration-zk-bk.md b/site2/website/versioned_docs/version-2.9.1-deprecated/administration-zk-bk.md
deleted file mode 100644
index f427d43d57dc1a..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/administration-zk-bk.md
+++ /dev/null
@@ -1,386 +0,0 @@
----
-id: administration-zk-bk
-title: ZooKeeper and BookKeeper administration
-sidebar_label: "ZooKeeper and BookKeeper"
-original_id: administration-zk-bk
----
-
-Pulsar relies on two external systems for essential tasks:
-
-* [ZooKeeper](https://zookeeper.apache.org/) is responsible for a wide variety of configuration-related and coordination-related tasks.
-* [BookKeeper](http://bookkeeper.apache.org/) is responsible for [persistent storage](concepts-architecture-overview.md#persistent-storage) of message data.
-
-ZooKeeper and BookKeeper are both open-source [Apache](https://www.apache.org/) projects.
- -> Skip to the [How Pulsar uses ZooKeeper and BookKeeper](#how-pulsar-uses-zookeeper-and-bookkeeper) section below for a more schematic explanation of the role of these two systems in Pulsar. - - -## ZooKeeper - -Each Pulsar instance relies on two separate ZooKeeper quorums. - -* [Local ZooKeeper](#deploy-local-zookeeper) operates at the cluster level and provides cluster-specific configuration management and coordination. Each Pulsar cluster needs to have a dedicated ZooKeeper cluster. -* [Configuration Store](#deploy-configuration-store) operates at the instance level and provides configuration management for the entire system (and thus across clusters). An independent cluster of machines or the same machines that local ZooKeeper uses can provide the configuration store quorum. - -### Deploy local ZooKeeper - -ZooKeeper manages a variety of essential coordination-related and configuration-related tasks for Pulsar. - -To deploy a Pulsar instance, you need to stand up one local ZooKeeper cluster *per Pulsar cluster*. - -To begin, add all ZooKeeper servers to the quorum configuration specified in the [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file. Add a `server.N` line for each node in the cluster to the configuration, where `N` is the number of the ZooKeeper node. The following is an example for a three-node cluster: - -```properties - -server.1=zk1.us-west.example.com:2888:3888 -server.2=zk2.us-west.example.com:2888:3888 -server.3=zk3.us-west.example.com:2888:3888 - -``` - -On each host, you need to specify the node ID in `myid` file of each node, which is in `data/zookeeper` folder of each server by default (you can change the file location via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter). - -> See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed information on `myid` and more. - - -On a ZooKeeper server at `zk1.us-west.example.com`, for example, you can set the `myid` value like this: - -```shell - -$ mkdir -p data/zookeeper -$ echo 1 > data/zookeeper/myid - -``` - -On `zk2.us-west.example.com` the command is `echo 2 > data/zookeeper/myid` and so on. - -Once you add each server to the `zookeeper.conf` configuration and each server has the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool: - -```shell - -$ bin/pulsar-daemon start zookeeper - -``` - -### Deploy configuration store - -The ZooKeeper cluster configured and started up in the section above is a *local* ZooKeeper cluster that you can use to manage a single Pulsar cluster. In addition to a local cluster, however, a full Pulsar instance also requires a configuration store for handling some instance-level configuration and coordination tasks. - -If you deploy a [single-cluster](#single-cluster-pulsar-instance) instance, you do not need a separate cluster for the configuration store. If, however, you deploy a [multi-cluster](#multi-cluster-pulsar-instance) instance, you need to stand up a separate ZooKeeper cluster for configuration tasks. - -#### Single-cluster Pulsar instance - -If your Pulsar instance consists of just one cluster, then you can deploy a configuration store on the same machines as the local ZooKeeper quorum but run on different TCP ports. 
-
-To deploy a ZooKeeper configuration store in a single-cluster instance, add the same ZooKeeper servers that the local quorum uses to the configuration file in [`conf/global_zookeeper.conf`](reference-configuration.md#configuration-store) using the same method as for [local ZooKeeper](#local-zookeeper), but make sure to use a different port (2181 is the default for ZooKeeper). The following is an example that uses port 2184 for a three-node ZooKeeper cluster:
-
-```properties
-
-clientPort=2184
-server.1=zk1.us-west.example.com:2185:2186
-server.2=zk2.us-west.example.com:2185:2186
-server.3=zk3.us-west.example.com:2185:2186
-
-```
-
-As before, create the `myid` files for each server in `data/global-zookeeper/myid`.
-
-#### Multi-cluster Pulsar instance
-
-When you deploy a global Pulsar instance, with clusters distributed across different geographical regions, the configuration store serves as a highly available and strongly consistent metadata store that can tolerate failures and partitions spanning whole regions.
-
-The key here is to make sure the ZK quorum members are spread across at least 3 regions, and that other regions run as observers.
-
-Again, given the very low expected load on the configuration store servers, you can share the same hosts used for the local ZooKeeper quorum.
-
-For example, assume a Pulsar instance with the clusters `us-west`, `us-east`, `us-central`, `eu-central`, and `ap-south`, and assume that each cluster has its own local ZK servers named like this:
-
-```
-
-zk[1-3].${CLUSTER}.example.com
-
-```
-
-In this scenario you want to pick the quorum participants from a few clusters and let all the others be ZK observers. For example, to form a 7-server quorum, you can pick 3 servers from `us-west`, 2 from `us-central`, and 2 from `us-east`.
-
-This guarantees that writes to the configuration store are possible even if one of these regions is unreachable.
-
-The ZK configuration in all the servers looks like:
-
-```properties
-
-clientPort=2184
-server.1=zk1.us-west.example.com:2185:2186
-server.2=zk2.us-west.example.com:2185:2186
-server.3=zk3.us-west.example.com:2185:2186
-server.4=zk1.us-central.example.com:2185:2186
-server.5=zk2.us-central.example.com:2185:2186
-server.6=zk3.us-central.example.com:2185:2186:observer
-server.7=zk1.us-east.example.com:2185:2186
-server.8=zk2.us-east.example.com:2185:2186
-server.9=zk3.us-east.example.com:2185:2186:observer
-server.10=zk1.eu-central.example.com:2185:2186:observer
-server.11=zk2.eu-central.example.com:2185:2186:observer
-server.12=zk3.eu-central.example.com:2185:2186:observer
-server.13=zk1.ap-south.example.com:2185:2186:observer
-server.14=zk2.ap-south.example.com:2185:2186:observer
-server.15=zk3.ap-south.example.com:2185:2186:observer
-
-```
-
-Additionally, ZK observers need to have:
-
-```properties
-
-peerType=observer
-
-```
-
-##### Start the service
-
-Once your configuration store configuration is in place, you can start up the service using [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon)
-
-```shell
-
-$ bin/pulsar-daemon start configuration-store
-
-```
-
-### ZooKeeper configuration
-
-In Pulsar, ZooKeeper configuration is handled by two separate configuration files in the `conf` directory of your Pulsar installation: `conf/zookeeper.conf` for [local ZooKeeper](#local-zookeeper) and `conf/global-zookeeper.conf` for [configuration store](#configuration-store).
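-
-Before editing either file, you can confirm which quorum a given host is serving by connecting with the bundled shell (`pulsar zookeeper-shell` is the same tool referenced in the upgrade guide; passing `-server <host:port>` and a trailing command follows standard ZooKeeper CLI usage and is an assumption here, as is the server address):
-
-```shell
-
-# Connect to the local quorum and list the root znodes Pulsar has created
-bin/pulsar zookeeper-shell -server zk1.us-west.example.com:2181 ls /
-
-```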
#### Local ZooKeeper

The [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file handles the configuration for local ZooKeeper. The table below shows the available parameters:

|Name|Description|Default|
|---|---|---|
|tickTime| The tick is the basic unit of time in ZooKeeper, measured in milliseconds and used to regulate things like heartbeats and timeouts. tickTime is the length of a single tick. |2000|
|initLimit| The maximum time, in ticks, that the leader ZooKeeper server allows follower ZooKeeper servers to successfully connect and sync. The tick time is set in milliseconds using the tickTime parameter. |10|
|syncLimit| The maximum time, in ticks, that a follower ZooKeeper server is allowed to sync with other ZooKeeper servers. The tick time is set in milliseconds using the tickTime parameter. |5|
|dataDir| The location where ZooKeeper stores in-memory database snapshots as well as the transaction log of updates to the database. |data/zookeeper|
|clientPort| The port on which the ZooKeeper server listens for connections. |2181|
|autopurge.snapRetainCount| In ZooKeeper, auto purge determines how many recent snapshots of the database stored in dataDir to retain within the time interval specified by autopurge.purgeInterval (while deleting the rest). |3|
|autopurge.purgeInterval| The time interval, in hours, which triggers the ZooKeeper database purge task. Setting to a non-zero number enables auto purge; setting to 0 disables it. Read this guide before enabling auto purge. |1|
|maxClientCnxns| The maximum number of client connections. Increase this if you need to handle more ZooKeeper clients. |60|

#### Configuration Store

The [`conf/global-zookeeper.conf`](reference-configuration.md#configuration-store) file handles the configuration for the configuration store.

## BookKeeper

BookKeeper stores all durable messages in Pulsar. BookKeeper is a distributed [write-ahead log](https://en.wikipedia.org/wiki/Write-ahead_logging) (WAL) system that guarantees read consistency of independent message logs called ledgers. Individual BookKeeper servers are also called *bookies*.

> To manage message persistence, retention, and expiry in Pulsar, refer to the [retention and expiry cookbook](cookbooks-retention-expiry.md).

### Hardware requirements

Bookie hosts store message data on disk. To provide optimal performance, ensure that the bookies have a suitable hardware configuration. The following are two key dimensions of bookie hardware capacity:

- Read/write disk I/O capacity
- Storage capacity

Message entries written to bookies are always synced to disk before returning an acknowledgement to the Pulsar broker by default. To ensure low write latency, BookKeeper is designed to use multiple devices:

- A **journal** to ensure durability. For sequential writes, it is critical to have fast [fsync](https://linux.die.net/man/2/fsync) operations on bookie hosts. Typically, small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) should suffice, or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache. Both solutions can reach fsync latency of ~0.4 ms.
- A **ledger storage device** to store data. Writes happen in the background, so write I/O is not a big concern. Reads happen sequentially most of the time, and the backlog is read back only when a consumer drains it. To store large amounts of data, a typical configuration involves multiple HDDs with a RAID controller.
### Configure BookKeeper

You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. When you configure each bookie, ensure that the [`zkServers`](reference-configuration.md#bookkeeper-zkServers) parameter is set to the connection string for the local ZooKeeper of the Pulsar cluster.

The minimum configuration changes required in `conf/bookkeeper.conf` are as follows:

:::note

Set `journalDirectory` and `ledgerDirectories` carefully. It is difficult to change them later.

:::

```properties

# Change to point to journal disk mount point
journalDirectory=data/bookkeeper/journal

# Point to ledger storage disk mount point
ledgerDirectories=data/bookkeeper/ledgers

# Point to local ZK quorum
zkServers=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181

# It is recommended to set this parameter. Otherwise, BookKeeper can't start normally in certain environments (for example, Huawei Cloud).
advertisedAddress=

```

To change the ZooKeeper root path that BookKeeper uses, set `zkLedgersRootPath=/MY-PREFIX/ledgers` rather than appending a chroot to the connection string (as in `zkServers=localhost:2181/MY-PREFIX`).

> For more information about BookKeeper, refer to the official [BookKeeper docs](http://bookkeeper.apache.org).

### Deploy BookKeeper

BookKeeper provides [persistent message storage](concepts-architecture-overview.md#persistent-storage) for Pulsar. Each Pulsar broker has its own cluster of bookies. The BookKeeper cluster shares a local ZooKeeper quorum with the Pulsar cluster.

### Start bookies manually

You can start a bookie in the foreground or as a background daemon.

To start a bookie in the foreground, use the [`bookkeeper`](reference-cli-tools.md#bookkeeper) CLI tool:

```bash

$ bin/bookkeeper bookie

```

To start a bookie in the background, use the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:

```bash

$ bin/pulsar-daemon start bookie

```

You can verify whether a bookie works properly with the `bookiesanity` command for the [BookKeeper shell](reference-cli-tools.md#bookkeeper-shell):

```shell

$ bin/bookkeeper shell bookiesanity

```

This command creates a new ledger on the local bookie, writes a few entries, reads them back, and finally deletes the ledger.

### Decommission bookies cleanly

Before you decommission a bookie, you need to check your environment and meet the following requirements.

1. Ensure the state of your cluster supports decommissioning the target bookie. Check whether `EnsembleSize >= Write Quorum >= Ack Quorum` still holds with one less bookie.

2. Ensure the target bookie shows up in the output of the `listbookies` command.

3. Ensure that no other process (an upgrade, for example) is ongoing.

You can then decommission bookies safely. To decommission bookies, complete the following steps.

1. Log in to the bookie node and check whether there are underreplicated ledgers. The decommission command forces the underreplicated ledgers to be replicated.
`$ bin/bookkeeper shell listunderreplicated`

2. Stop the bookie by killing the bookie process. If you deploy bookies in a Kubernetes environment, make sure that no liveness/readiness probes are set up that would spin them back up.

3. Run the decommission command.
   - If you have logged in to the node to be decommissioned, you do not need to provide `-bookieid`.
   - If you are running the decommission command for the target bookie node from another bookie node, pass the target bookie ID in the `-bookieid` argument.
   `$ bin/bookkeeper shell decommissionbookie`
   or
   `$ bin/bookkeeper shell decommissionbookie -bookieid `

4. Validate that no ledgers remain on the decommissioned bookie.
`$ bin/bookkeeper shell listledgers -bookieid `

You can run the following command to check whether the bookie you have decommissioned is still listed in the bookies list:

```bash

./bookkeeper shell listbookies -rw -h
./bookkeeper shell listbookies -ro -h

```

## BookKeeper persistence policies

In Pulsar, you can set *persistence policies* at the namespace level, which determine how BookKeeper handles persistent storage of messages. Policies determine four things:

* The number of acks (guaranteed copies) to wait for on each ledger entry.
* The number of bookies to use for a topic.
* The number of writes to make for each ledger entry.
* The throttling rate for mark-delete operations.

### Set persistence policies

You can set persistence policies for BookKeeper at the [namespace](reference-terminology.md#namespace) level.

#### Pulsar-admin

Use the [`set-persistence`](reference-pulsar-admin.md#namespaces-set-persistence) subcommand and specify a namespace as well as any policies that you want to apply. The available flags are:

Flag | Description | Default
:----|:------------|:-------
`-a`, `--bookkeeper-ack-quorum` | The number of acks (guaranteed copies) to wait on for each entry | 0
`-e`, `--bookkeeper-ensemble` | The number of [bookies](reference-terminology.md#bookie) to use for topics in the namespace | 0
`-w`, `--bookkeeper-write-quorum` | The number of writes to make for each entry | 0
`-r`, `--ml-mark-delete-max-rate` | Throttling rate for mark-delete operations (0 means no throttle) | 0

The following is an example (note that the ensemble must be at least as large as the write quorum, which in turn must be at least as large as the ack quorum):

```shell

$ pulsar-admin namespaces set-persistence my-tenant/my-ns \
  --bookkeeper-ensemble 3 \
  --bookkeeper-write-quorum 2 \
  --bookkeeper-ack-quorum 2

```

#### REST API

{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/setPersistence?version=@pulsar:version_number@}

#### Java

```java

int bkEnsemble = 3;
int bkWriteQuorum = 2;
int bkAckQuorum = 2;
double markDeleteRate = 0.7;
PersistencePolicies policies =
    new PersistencePolicies(bkEnsemble, bkWriteQuorum, bkAckQuorum, markDeleteRate);
admin.namespaces().setPersistence(namespace, policies);

```

### List persistence policies

You can see which persistence policy currently applies to a namespace.

#### Pulsar-admin

Use the [`get-persistence`](reference-pulsar-admin.md#namespaces-get-persistence) subcommand and specify the namespace.

The following is an example:

```shell

$ pulsar-admin namespaces get-persistence my-tenant/my-ns
{
  "bookkeeperEnsemble": 1,
  "bookkeeperWriteQuorum": 1,
  "bookkeeperAckQuorum": 1,
  "managedLedgerMaxMarkDeleteRate": 0
}

```

#### REST API

{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/getPersistence?version=@pulsar:version_number@}

#### Java

```java

PersistencePolicies policies = admin.namespaces().getPersistence(namespace);

```

## How Pulsar uses ZooKeeper and BookKeeper

This diagram illustrates the role of ZooKeeper and BookKeeper in a Pulsar cluster:

![ZooKeeper and BookKeeper](/assets/pulsar-system-architecture.png)

Each Pulsar cluster consists of one or more message brokers.
Each broker relies on an ensemble of bookies. diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/client-libraries-cgo.md b/site2/website/versioned_docs/version-2.9.1-deprecated/client-libraries-cgo.md deleted file mode 100644 index f352f942b77144..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/client-libraries-cgo.md +++ /dev/null @@ -1,579 +0,0 @@ ---- -id: client-libraries-cgo -title: Pulsar CGo client -sidebar_label: "CGo(deprecated)" -original_id: client-libraries-cgo ---- - -You can use Pulsar Go client to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Go (aka Golang). - -All the methods in [producers](#producers), [consumers](#consumers), and [readers](#readers) of a Go client are thread-safe. - -Currently, the following Go clients are maintained in two repositories. - -| Language | Project | Maintainer | License | Description | -|----------|---------|------------|---------|-------------| -| CGo | [pulsar-client-go](https://github.com/apache/pulsar/tree/master/pulsar-client-go) | [Apache Pulsar](https://github.com/apache/pulsar) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | CGo client that depends on C++ client library | -| Go | [pulsar-client-go](https://github.com/apache/pulsar-client-go) | [Apache Pulsar](https://github.com/apache/pulsar) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | A native golang client | - -> **API docs available as well** -> For standard API docs, consult the [Godoc](https://godoc.org/github.com/apache/pulsar/pulsar-client-go/pulsar). - -## Installation - -### Requirements - -Pulsar Go client library is based on the C++ client library. Follow -the instructions for [C++ library](client-libraries-cpp.md) for installing the binaries through [RPM](client-libraries-cpp.md#rpm), [Deb](client-libraries-cpp.md#deb) or [Homebrew packages](client-libraries-cpp.md#macos). - -### Install go package - -> **Compatibility Warning** -> The version number of the Go client **must match** the version number of the Pulsar C++ client library. - -You can install the `pulsar` library locally using `go get`. Note that `go get` doesn't support fetching a specific tag - it will always pull in master's version of the Go client. You'll need a C++ client library that matches master. - -```bash - -$ go get -u github.com/apache/pulsar/pulsar-client-go/pulsar - -``` - -Or you can use [dep](https://github.com/golang/dep) for managing the dependencies. - -```bash - -$ dep ensure -add github.com/apache/pulsar/pulsar-client-go/pulsar@v@pulsar:version@ - -``` - -Once installed locally, you can import it into your project: - -```go - -import "github.com/apache/pulsar/pulsar-client-go/pulsar" - -``` - -## Connection URLs - -To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL. - -Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme and have a default port of 6650. 
Here's an example for `localhost`:

```http

pulsar://localhost:6650

```

A URL for a production Pulsar cluster may look something like this:

```http

pulsar://pulsar.us-west.example.com:6650

```

If you're using [TLS](security-tls-authentication.md) authentication, the URL will look something like this:

```http

pulsar+ssl://pulsar.us-west.example.com:6651

```

## Create a client

In order to interact with Pulsar, you'll first need a `Client` object. You can create a client object using the `NewClient` function, passing in a `ClientOptions` object (more on configuration [below](#client-configuration)). Here's an example:

```go

import (
    "log"
    "runtime"

    "github.com/apache/pulsar/pulsar-client-go/pulsar"
)

func main() {
    client, err := pulsar.NewClient(pulsar.ClientOptions{
        URL: "pulsar://localhost:6650",
        OperationTimeoutSeconds: 5,
        MessageListenerThreads: runtime.NumCPU(),
    })

    if err != nil {
        log.Fatalf("Could not instantiate Pulsar client: %v", err)
    }
}

```

The following configurable parameters are available for Pulsar clients:

Parameter | Description | Default
:---------|:------------|:-------
`URL` | The connection URL for the Pulsar cluster. See [above](#connection-urls) for more info |
`IOThreads` | The number of threads to use for handling connections to Pulsar [brokers](reference-terminology.md#broker) | 1
`OperationTimeoutSeconds` | The timeout for some Go client operations (creating producers, subscribing to and unsubscribing from [topics](reference-terminology.md#topic)). Retries will occur until this threshold is reached, at which point the operation will fail. | 30
`MessageListenerThreads` | The number of threads used by message listeners ([consumers](#consumers) and [readers](#readers)) | 1
`ConcurrentLookupRequests` | The number of concurrent lookup requests that can be sent on each broker connection. Setting a maximum helps to keep from overloading brokers. You should set values over the default of 5000 only if the client needs to produce and/or subscribe to thousands of Pulsar topics. | 5000
`Logger` | A custom logger implementation for the client (as a function that takes a log level, file path, line number, and message). All info, warn, and error messages will be routed to this function. | `nil`
`TLSTrustCertsFilePath` | The file path for the trusted TLS certificate |
`TLSAllowInsecureConnection` | Whether the client accepts untrusted TLS certificates from the broker | `false`
`Authentication` | Configure the authentication provider. (default: no authentication). Example: `Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem")` | `nil`
`StatsIntervalInSeconds` | The interval (in seconds) at which client stats are published | 60

## Producers

Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Go producers using a `ProducerOptions` object.
Here's an example:

```go

producer, err := client.CreateProducer(pulsar.ProducerOptions{
    Topic: "my-topic",
})

if err != nil {
    log.Fatalf("Could not instantiate Pulsar producer: %v", err)
}

defer producer.Close()

msg := pulsar.ProducerMessage{
    Payload: []byte("Hello, Pulsar"),
}

if err := producer.Send(context.Background(), msg); err != nil {
    log.Fatalf("Producer could not send message: %v", err)
}

```

> **Blocking operation**
> When you create a new Pulsar producer, the operation will block (waiting on a go channel) until either the producer is successfully created or an error is thrown.


### Producer operations

Pulsar Go producers have the following methods available:

Method | Description | Return type
:------|:------------|:-----------
`Topic()` | Fetches the producer's [topic](reference-terminology.md#topic) | `string`
`Name()` | Fetches the producer's name | `string`
`Send(context.Context, ProducerMessage)` | Publishes a [message](#messages) to the producer's topic. This call blocks until the message is successfully acknowledged by the Pulsar broker, or an error is thrown if the timeout set using the `SendTimeout` in the producer's [configuration](#producer-configuration) is exceeded. | `error`
`SendAndGetMsgID(context.Context, ProducerMessage)`| Sends a message; this call blocks until the message is successfully acknowledged by the Pulsar broker. | (MessageID, error)
`SendAsync(context.Context, ProducerMessage, func(ProducerMessage, error))` | Publishes a [message](#messages) to the producer's topic asynchronously. The third argument is a callback function that specifies what happens either when the message is acknowledged or an error is thrown. |
`SendAndGetMsgIDAsync(context.Context, ProducerMessage, func(MessageID, error))`| Sends a message in asynchronous mode. The callback reports back the message being published and the eventual error in publishing. |
`LastSequenceID()` | Gets the last sequence ID that was published by this producer. This represents either the automatically assigned or custom sequence ID (set on the `ProducerMessage`) that was published and acknowledged by the broker. | int64
`Flush()`| Flushes all the messages buffered in the client and waits until all messages have been successfully persisted. | error
`Close()` | Closes the producer and releases all resources allocated to it. Once `Close()` is called, no more messages are accepted from the publisher. This method blocks until all pending publish requests have been persisted by Pulsar. If an error is thrown, no pending writes are retried. 
| `error`
`Schema()` | Returns the schema attached to the producer, if any | Schema

Here's a more involved example usage of a producer:

```go

import (
    "context"
    "fmt"
    "log"

    "github.com/apache/pulsar/pulsar-client-go/pulsar"
)

func main() {
    // Instantiate a Pulsar client
    client, err := pulsar.NewClient(pulsar.ClientOptions{
        URL: "pulsar://localhost:6650",
    })

    if err != nil { log.Fatal(err) }

    // Use the client to instantiate a producer
    producer, err := client.CreateProducer(pulsar.ProducerOptions{
        Topic: "my-topic",
    })

    if err != nil { log.Fatal(err) }

    ctx := context.Background()

    // Send 10 messages synchronously and 10 messages asynchronously
    for i := 0; i < 10; i++ {
        // Create a message
        msg := pulsar.ProducerMessage{
            Payload: []byte(fmt.Sprintf("message-%d", i)),
        }

        // Attempt to send the message
        if err := producer.Send(ctx, msg); err != nil {
            log.Fatal(err)
        }

        // Create a different message to send asynchronously
        asyncMsg := pulsar.ProducerMessage{
            Payload: []byte(fmt.Sprintf("async-message-%d", i)),
        }

        // Attempt to send the message asynchronously and handle the response
        producer.SendAsync(ctx, asyncMsg, func(msg pulsar.ProducerMessage, err error) {
            if err != nil { log.Fatal(err) }

            fmt.Printf("message %s successfully published\n", string(msg.Payload))
        })
    }
}

```

### Producer configuration

Parameter | Description | Default
:---------|:------------|:-------
`Topic` | The Pulsar [topic](reference-terminology.md#topic) to which the producer will publish messages |
`Name` | A name for the producer. If you don't explicitly assign a name, Pulsar will automatically generate a globally unique name that you can access later using the `Name()` method. If you choose to explicitly assign a name, it will need to be unique across *all* Pulsar clusters, otherwise the creation operation will throw an error. |
`Properties`| Attach a set of application-defined properties to the producer. These properties will be visible in the topic stats |
`SendTimeout` | When publishing a message to a topic, the producer will wait for an acknowledgment from the responsible Pulsar [broker](reference-terminology.md#broker). If a message is not acknowledged within the threshold set by this parameter, an error will be thrown. If you set `SendTimeout` to -1, the timeout will be set to infinity (and thus removed). Removing the send timeout is recommended when using Pulsar's [message de-duplication](cookbooks-deduplication.md) feature. | 30 seconds
`MaxPendingMessages` | The maximum size of the queue holding pending messages (i.e. messages waiting to receive an acknowledgment from the [broker](reference-terminology.md#broker)). By default, when the queue is full all calls to the `Send` and `SendAsync` methods will fail *unless* `BlockIfQueueFull` is set to `true`. |
`MaxPendingMessagesAcrossPartitions` | Set the number of max pending messages across all the partitions. This setting will be used to lower the max pending messages for each partition `MaxPendingMessages(int)`, if the total exceeds the configured value.|
`BlockIfQueueFull` | If set to `true`, the producer's `Send` and `SendAsync` methods will block when the outgoing message queue is full rather than failing and throwing an error (the size of that queue is dictated by the `MaxPendingMessages` parameter); if set to `false` (the default), `Send` and `SendAsync` operations will fail and throw a `ProducerQueueIsFullError` when the queue is full. 
| `false`
`MessageRoutingMode` | The message routing logic (for producers on [partitioned topics](concepts-architecture-overview.md#partitioned-topics)). This logic is applied only when no key is set on messages. The available options are: round robin (`pulsar.RoundRobinDistribution`, the default), publishing all messages to a single partition (`pulsar.UseSinglePartition`), or a custom partitioning scheme (`pulsar.CustomPartition`). | `pulsar.RoundRobinDistribution`
`HashingScheme` | The hashing function that determines the partition on which a particular message is published (partitioned topics only). The available options are: `pulsar.JavaStringHash` (the equivalent of `String.hashCode()` in Java), `pulsar.Murmur3_32Hash` (applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function), or `pulsar.BoostHash` (applies the hashing function from C++'s [Boost](https://www.boost.org/doc/libs/1_62_0/doc/html/hash.html) library) | `pulsar.JavaStringHash`
`CompressionType` | The message data compression type used by the producer. The available options are [`LZ4`](https://github.com/lz4/lz4), [`ZLIB`](https://zlib.net/), [`ZSTD`](https://facebook.github.io/zstd/) and [`SNAPPY`](https://google.github.io/snappy/). | No compression
`MessageRouter` | By default, Pulsar uses a round-robin routing scheme for [partitioned topics](cookbooks-partitioned.md). The `MessageRouter` parameter enables you to specify custom routing logic via a function that takes the Pulsar message and topic metadata as arguments and returns an integer (the index of the partition to publish to), i.e. a function signature of `func(Message, TopicMetadata) int`. |
`Batching` | Control whether automatic batching of messages is enabled for the producer. | false
`BatchingMaxPublishDelay` | Set the time period within which sent messages are batched, if batching is enabled. If set to a non-zero value, messages are queued until this time interval elapses or `BatchingMaxMessages` is reached, whichever comes first. | 1ms
`BatchingMaxMessages` | Set the maximum number of messages permitted in a batch. If set to a value greater than 1, messages are queued until this threshold is reached or the batch interval has elapsed. | 1000

## Consumers

Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on that topic/those topics. You can [configure](#consumer-configuration) Go consumers using a `ConsumerOptions` object. Here's a basic example that uses channels:

```go

msgChannel := make(chan pulsar.ConsumerMessage)

consumerOpts := pulsar.ConsumerOptions{
    Topic:            "my-topic",
    SubscriptionName: "my-subscription-1",
    Type:             pulsar.Exclusive,
    MessageChannel:   msgChannel,
}

consumer, err := client.Subscribe(consumerOpts)

if err != nil {
    log.Fatalf("Could not establish subscription: %v", err)
}

defer consumer.Close()

for cm := range msgChannel {
    msg := cm.Message

    fmt.Printf("Message ID: %s", msg.ID())
    fmt.Printf("Message value: %s", string(msg.Payload()))

    consumer.Ack(msg)
}

```

> **Blocking operation**
> When you create a new Pulsar consumer, the operation will block (on a go channel) until either the consumer is successfully created or an error is thrown.
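The `Type` option shown in the example above also controls how messages are distributed among consumers. Below is a minimal sketch, under the assumption that the `client` from the earlier example is still in scope, of two `Shared`-mode consumers splitting work on a single subscription:

```go

for i := 0; i < 2; i++ {
    c, err := client.Subscribe(pulsar.ConsumerOptions{
        Topic:            "my-topic",
        SubscriptionName: "my-shared-subscription",
        Type:             pulsar.Shared,
    })
    if err != nil {
        log.Fatalf("Could not establish subscription: %v", err)
    }
    defer c.Close()

    // Each consumer receives a disjoint subset of the messages
    go func(c pulsar.Consumer) {
        for {
            msg, err := c.Receive(context.Background())
            if err != nil {
                return
            }
            c.Ack(msg)
        }
    }(c)
}

```

Because the subscription is shared, the broker distributes messages across both consumers instead of delivering every message to each of them.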
### Consumer operations

Pulsar Go consumers have the following methods available:

Method | Description | Return type
:------|:------------|:-----------
`Topic()` | Returns the consumer's [topic](reference-terminology.md#topic) | `string`
`Subscription()` | Returns the consumer's subscription name | `string`
`Unsubscribe()` | Unsubscribes the consumer from the assigned topic. Throws an error if the unsubscribe operation is somehow unsuccessful. | `error`
`Receive(context.Context)` | Receives a single message from the topic. This method blocks until a message is available. | `(Message, error)`
`Ack(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) | `error`
`AckID(MessageID)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID | `error`
`AckCumulative(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message. The `AckCumulative` method will block until the ack has been sent to the broker. After that, the messages will *not* be redelivered to the consumer. Cumulative acking cannot be used with a [shared](concepts-messaging.md#shared) subscription type. | `error`
`AckCumulativeID(MessageID)` | Acknowledges the reception of all the messages in the stream up to (and including) the provided message. This method will block until the acknowledgment has been sent to the broker. After that, the messages will not be redelivered to this consumer. | error
`Nack(Message)` | Acknowledges the failure to process a single message. | `error`
`NackID(MessageID)` | Acknowledges the failure to process a single message, by message ID. | `error`
`Close()` | Closes the consumer, disabling its ability to receive messages from the broker | `error`
`RedeliverUnackedMessages()` | Redelivers *all* unacknowledged messages on the topic. In [failover](concepts-messaging.md#failover) mode, this request is ignored if the consumer isn't active on the specified topic; in [shared](concepts-messaging.md#shared) mode, redelivered messages are distributed across all consumers connected to the topic. **Note**: this is a *non-blocking* operation that doesn't throw an error. |
`Seek(msgID MessageID)` | Resets the subscription associated with this consumer to a specific message ID. The message ID can either be a specific message or represent the first or last messages in the topic. | error
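For example, `Seek` lets you replay a topic from a position you stored earlier. The following is a minimal sketch; it assumes the `consumer` from the example above and a hypothetical `savedMsgID` (a `MessageID` retrieved from external storage):

```go

// Rewind the subscription to a previously saved position;
// messages from that point onward are redelivered.
if err := consumer.Seek(savedMsgID); err != nil {
    log.Fatalf("Could not seek to the saved position: %v", err)
}

```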
#### Receive example

Here's an example usage of a Go consumer that uses the `Receive()` method to process incoming messages:

```go

import (
    "context"
    "log"

    "github.com/apache/pulsar/pulsar-client-go/pulsar"
)

func main() {
    // Instantiate a Pulsar client
    client, err := pulsar.NewClient(pulsar.ClientOptions{
        URL: "pulsar://localhost:6650",
    })

    if err != nil { log.Fatal(err) }

    // Use the client object to instantiate a consumer
    consumer, err := client.Subscribe(pulsar.ConsumerOptions{
        Topic:            "my-golang-topic",
        SubscriptionName: "sub-1",
        Type:             pulsar.Exclusive,
    })

    if err != nil { log.Fatal(err) }

    defer consumer.Close()

    ctx := context.Background()

    // Listen indefinitely on the topic
    for {
        msg, err := consumer.Receive(ctx)
        if err != nil { log.Fatal(err) }

        // Do something with the message
        err = processMessage(msg)

        if err == nil {
            // Message processed successfully
            consumer.Ack(msg)
        } else {
            // Failed to process the message
            consumer.Nack(msg)
        }
    }
}

```

### Consumer configuration

Parameter | Description | Default
:---------|:------------|:-------
`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the consumer will establish a subscription and listen for messages |
`Topics` | Specify a list of topics this consumer will subscribe to. Either a topic, a list of topics, or a topics pattern is required when subscribing |
`TopicsPattern` | Specify a regular expression to subscribe to multiple topics under the same namespace. Either a topic, a list of topics, or a topics pattern is required when subscribing |
`SubscriptionName` | The subscription name for this consumer |
`Properties` | Attach a set of application-defined properties to the consumer. These properties will be visible in the topic stats |
`Name` | The name of the consumer |
`AckTimeout` | Set the timeout for unacked messages | 0
`NackRedeliveryDelay` | The delay after which to redeliver the messages that failed to be processed (see `Consumer.Nack()`) | 1 minute
`Type` | Available options are `Exclusive`, `Shared`, and `Failover` | `Exclusive`
`SubscriptionInitPos` | The initial position at which the cursor is set when subscribing | Latest
`MessageChannel` | The Go channel used by the consumer. Messages that arrive from the Pulsar topic(s) will be passed to this channel. |
`ReceiverQueueSize` | Sets the size of the consumer's receiver queue, i.e. the number of messages that can be accumulated by the consumer before the application calls `Receive`. A value higher than the default of 1000 could increase consumer throughput, though at the expense of more memory utilization. | 1000
`MaxTotalReceiverQueueSizeAcrossPartitions` | Set the max total receiver queue size across partitions. This setting will be used to reduce the receiver queue size for individual partitions if the total exceeds this value | 50000
`ReadCompacted` | If enabled, the consumer will read messages from the compacted topic rather than reading the full message backlog of the topic. This means that, if the topic has been compacted, the consumer will only see the latest value for each key in the topic, up until the point in the topic message backlog that has been compacted. Beyond that point, the messages will be sent as normal. |
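The `Topics` and `TopicsPattern` options let a single consumer cover multiple topics. The following is a minimal sketch, assuming the `client` from earlier and a hypothetical family of topics in the `public/default` namespace whose names start with `finance-`:

```go

consumer, err := client.Subscribe(pulsar.ConsumerOptions{
    TopicsPattern:    "persistent://public/default/finance-.*",
    SubscriptionName: "finance-sub",
})
if err != nil {
    log.Fatalf("Could not establish subscription: %v", err)
}
defer consumer.Close()

```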
## Readers

Pulsar readers process messages from Pulsar topics. Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recent unacked message). You can [configure](#reader-configuration) Go readers using a `ReaderOptions` object. Here's an example:

```go

reader, err := client.CreateReader(pulsar.ReaderOptions{
    Topic:          "my-golang-topic",
    StartMessageID: pulsar.LatestMessage,
})

```

> **Blocking operation**
> When you create a new Pulsar reader, the operation will block (on a go channel) until either a reader is successfully created or an error is thrown.


### Reader operations

Pulsar Go readers have the following methods available:

Method | Description | Return type
:------|:------------|:-----------
`Topic()` | Returns the reader's [topic](reference-terminology.md#topic) | `string`
`Next(context.Context)` | Receives the next message on the topic (analogous to the `Receive` method for [consumers](#consumer-operations)). This method blocks until a message is available. | `(Message, error)`
`HasNext()` | Checks whether there is any message available to read from the current position | (bool, error)
`Close()` | Closes the reader, disabling its ability to receive messages from the broker | `error`

#### "Next" example

Here's an example usage of a Go reader that uses the `Next()` method to process incoming messages:

```go

import (
    "context"
    "log"

    "github.com/apache/pulsar/pulsar-client-go/pulsar"
)

func main() {
    // Instantiate a Pulsar client
    client, err := pulsar.NewClient(pulsar.ClientOptions{
        URL: "pulsar://localhost:6650",
    })

    if err != nil { log.Fatalf("Could not create client: %v", err) }

    // Use the client to instantiate a reader
    reader, err := client.CreateReader(pulsar.ReaderOptions{
        Topic:          "my-golang-topic",
        StartMessageID: pulsar.EarliestMessage,
    })

    if err != nil { log.Fatalf("Could not create reader: %v", err) }

    defer reader.Close()

    ctx := context.Background()

    // Listen on the topic for incoming messages
    for {
        msg, err := reader.Next(ctx)
        if err != nil { log.Fatalf("Error reading from topic: %v", err) }

        // Process the message
    }
}

```

In the example above, the reader begins reading from the earliest available message (specified by `pulsar.EarliestMessage`). The reader can also begin reading from the latest message (`pulsar.LatestMessage`) or some other message ID specified by bytes using the `DeserializeMessageID` function, which takes a byte array and returns a `MessageID` object. Here's an example:

```go

// Read the last saved message ID from an external store as a byte slice
var lastSavedID []byte

reader, err := client.CreateReader(pulsar.ReaderOptions{
    Topic:          "my-golang-topic",
    StartMessageID: pulsar.DeserializeMessageID(lastSavedID),
})

```

### Reader configuration

Parameter | Description | Default
:---------|:------------|:-------
`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the reader will establish a subscription and listen for messages |
`Name` | The name of the reader |
`StartMessageID` | The initial reader position, i.e. the message at which the reader begins processing messages. The options are `pulsar.EarliestMessage` (the earliest available message on the topic), `pulsar.LatestMessage` (the latest available message on the topic), or a `MessageID` object for a position that isn't earliest or latest. |
`MessageChannel` | The Go channel used by the reader. 
Messages that arrive from the Pulsar topic(s) will be passed to this channel. |
`ReceiverQueueSize` | Sets the size of the reader's receiver queue, i.e. the number of messages that can be accumulated by the reader before the application calls `Next`. A value higher than the default of 1000 could increase reader throughput, though at the expense of more memory utilization. | 1000
`SubscriptionRolePrefix` | The subscription role prefix. | `reader`
`ReadCompacted` | If enabled, the reader will read messages from the compacted topic rather than reading the full message backlog of the topic. This means that, if the topic has been compacted, the reader will only see the latest value for each key in the topic, up until the point in the topic message backlog that has been compacted. Beyond that point, the messages will be sent as normal.|

## Messages

The Pulsar Go client provides a `ProducerMessage` interface that you can use to construct messages to produce on Pulsar topics. Here's an example message:

```go

msg := pulsar.ProducerMessage{
    Payload: []byte("Here is some message data"),
    Key: "message-key",
    Properties: map[string]string{
        "foo": "bar",
    },
    EventTime: time.Now(),
    ReplicationClusters: []string{"cluster1", "cluster3"},
}

if err := producer.Send(context.Background(), msg); err != nil {
    log.Fatalf("Could not publish message due to: %v", err)
}

```

The following parameters are available for `ProducerMessage` objects:

Parameter | Description
:---------|:-----------
`Payload` | The actual data payload of the message
`Value` | The schema-aware value of the message (`Value interface{}`). `Value` and `Payload` are mutually exclusive.
`Key` | The optional key associated with the message (particularly useful for things like topic compaction)
`Properties` | A key-value map (both keys and values must be strings) for any application-specific metadata attached to the message
`EventTime` | The timestamp associated with the message
`ReplicationClusters` | The clusters to which this message will be replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default.
`SequenceID` | Set the sequence ID to assign to the current message

## TLS encryption and authentication

In order to use [TLS encryption](security-tls-transport.md), you'll need to configure your client to do so:

 * Use `pulsar+ssl` URL type
 * Set `TLSTrustCertsFilePath` to the path to the TLS certs used by your client and the Pulsar broker
 * Configure `Authentication` option

Here's an example:

```go

opts := pulsar.ClientOptions{
    URL: "pulsar+ssl://my-cluster.com:6651",
    TLSTrustCertsFilePath: "/path/to/certs/my-cert.csr",
    Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem"),
}

```

## Schema

This example shows how to create a producer and consumer with schema.
- -```go - -var exampleSchemaDef = "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," + - "\"fields\":[{\"name\":\"ID\",\"type\":\"int\"},{\"name\":\"Name\",\"type\":\"string\"}]}" -jsonSchema := NewJsonSchema(exampleSchemaDef, nil) -// create producer -producer, err := client.CreateProducerWithSchema(ProducerOptions{ - Topic: "jsonTopic", -}, jsonSchema) -err = producer.Send(context.Background(), ProducerMessage{ - Value: &testJson{ - ID: 100, - Name: "pulsar", - }, -}) -if err != nil { - log.Fatal(err) -} -defer producer.Close() -//create consumer -var s testJson -consumerJS := NewJsonSchema(exampleSchemaDef, nil) -consumer, err := client.SubscribeWithSchema(ConsumerOptions{ - Topic: "jsonTopic", - SubscriptionName: "sub-2", -}, consumerJS) -if err != nil { - log.Fatal(err) -} -msg, err := consumer.Receive(context.Background()) -if err != nil { - log.Fatal(err) -} -err = msg.GetValue(&s) -if err != nil { - log.Fatal(err) -} -fmt.Println(s.ID) // output: 100 -fmt.Println(s.Name) // output: pulsar -defer consumer.Close() - -``` - diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/client-libraries-cpp.md b/site2/website/versioned_docs/version-2.9.1-deprecated/client-libraries-cpp.md deleted file mode 100644 index 455cf02116d502..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/client-libraries-cpp.md +++ /dev/null @@ -1,708 +0,0 @@ ---- -id: client-libraries-cpp -title: Pulsar C++ client -sidebar_label: "C++" -original_id: client-libraries-cpp ---- - -You can use Pulsar C++ client to create Pulsar producers and consumers in C++. - -All the methods in producer, consumer, and reader of a C++ client are thread-safe. - -## Supported platforms - -Pulsar C++ client is supported on **Linux** ,**MacOS** and **Windows** platforms. - -[Doxygen](http://www.doxygen.nl/)-generated API docs for the C++ client are available [here](/api/cpp). - -## System requirements - -You need to install the following components before using the C++ client: - -* [CMake](https://cmake.org/) -* [Boost](http://www.boost.org/) -* [Protocol Buffers](https://developers.google.com/protocol-buffers/) >= 3 -* [libcurl](https://curl.se/libcurl/) -* [Google Test](https://github.com/google/googletest) - -## Linux - -### Compilation - -1. Clone the Pulsar repository. - -```shell - -$ git clone https://github.com/apache/pulsar - -``` - -2. Install all necessary dependencies. - -```shell - -$ apt-get install cmake libssl-dev libcurl4-openssl-dev liblog4cxx-dev \ - libprotobuf-dev protobuf-compiler libboost-all-dev google-mock libgtest-dev libjsoncpp-dev - -``` - -3. Compile and install [Google Test](https://github.com/google/googletest). - -```shell - -# libgtest-dev version is 1.18.0 or above -$ cd /usr/src/googletest -$ sudo cmake . -$ sudo make -$ sudo cp ./googlemock/libgmock.a ./googlemock/gtest/libgtest.a /usr/lib/ - -# less than 1.18.0 -$ cd /usr/src/gtest -$ sudo cmake . -$ sudo make -$ sudo cp libgtest.a /usr/lib - -$ cd /usr/src/gmock -$ sudo cmake . -$ sudo make -$ sudo cp libgmock.a /usr/lib - -``` - -4. Compile the Pulsar client library for C++ inside the Pulsar repository. - -```shell - -$ cd pulsar-client-cpp -$ cmake . -$ make - -``` - -After you install the components successfully, the files `libpulsar.so` and `libpulsar.a` are in the `lib` folder of the repository. The tools `perfProducer` and `perfConsumer` are in the `perf` directory. - -### Install Dependencies - -> Since 2.1.0 release, Pulsar ships pre-built RPM and Debian packages. 
You can download and install those packages directly.

After you download and install RPM or DEB, the `libpulsar.so`, `libpulsarnossl.so`, `libpulsar.a`, and `libpulsarwithdeps.a` libraries are in your `/usr/lib` directory.

By default, they are built under `${PULSAR_HOME}/pulsar-client-cpp`. You can build them with the command below:

 `cmake . -DBUILD_TESTS=OFF -DLINK_STATIC=ON && make pulsarShared pulsarSharedNossl pulsarStatic pulsarStaticWithDeps -j 3`

These libraries rely on some other libraries. If you want the detailed versions of these dependencies, see the [RPM](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/pkg/rpm/Dockerfile) or [DEB](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/pkg/deb/Dockerfile) files.

1. `libpulsar.so` is a shared library, containing statically linked `boost` and `openssl`. It also dynamically links all other necessary libraries. You can use this Pulsar library with the command below.

```bash

 g++ --std=c++11 PulsarTest.cpp -o test /usr/lib/libpulsar.so -I/usr/local/ssl/include

```

2. `libpulsarnossl.so` is a shared library, similar to `libpulsar.so` except that the libraries `openssl` and `crypto` are dynamically linked. You can use this Pulsar library with the command below.

```bash

 g++ --std=c++11 PulsarTest.cpp -o test /usr/lib/libpulsarnossl.so -lssl -lcrypto -I/usr/local/ssl/include -L/usr/local/ssl/lib

```

3. `libpulsar.a` is a static library. You need to load its dependencies before using this library. You can use this Pulsar library with the command below.

```bash

 g++ --std=c++11 PulsarTest.cpp -o test /usr/lib/libpulsar.a -lssl -lcrypto -ldl -lpthread -I/usr/local/ssl/include -L/usr/local/ssl/lib -lboost_system -lboost_regex -lcurl -lprotobuf -lzstd -lz

```

4. `libpulsarwithdeps.a` is a static library, based on `libpulsar.a`. It additionally archives the dependencies `libboost_regex`, `libboost_system`, `libcurl`, `libprotobuf`, `libzstd`, and `libz`. You can use this Pulsar library with the command below.

```bash

 g++ --std=c++11 PulsarTest.cpp -o test /usr/lib/libpulsarwithdeps.a -lssl -lcrypto -ldl -lpthread -I/usr/local/ssl/include -L/usr/local/ssl/lib

```

`libpulsarwithdeps.a` does not include the OpenSSL-related libraries `libssl` and `libcrypto`, because these two libraries are security-related. It is more reasonable and easier to use the versions provided by the local system to handle security issues and library upgrades.

### Install RPM

1. Download an RPM package from the links in the table.

| Link | Crypto files |
|------|--------------|
| [client](@pulsar:dist_rpm:client@) | [asc](@pulsar:dist_rpm:client@.asc), [sha512](@pulsar:dist_rpm:client@.sha512) |
| [client-debuginfo](@pulsar:dist_rpm:client-debuginfo@) | [asc](@pulsar:dist_rpm:client-debuginfo@.asc), [sha512](@pulsar:dist_rpm:client-debuginfo@.sha512) |
| [client-devel](@pulsar:dist_rpm:client-devel@) | [asc](@pulsar:dist_rpm:client-devel@.asc), [sha512](@pulsar:dist_rpm:client-devel@.sha512) |

2. Install the package using the following command.

```bash

$ rpm -ivh apache-pulsar-client*.rpm

```

After you install RPM successfully, Pulsar libraries are in the `/usr/lib` directory.

:::note

If you get the error that `libpulsar.so: cannot open shared object file: No such file or directory` when starting the Pulsar client, you may need to run `ldconfig` first.

:::

### Install Debian

1. Download a Debian package from the links in the table.
- -| Link | Crypto files | -|------|--------------| -| [client](@pulsar:deb:client@) | [asc](@pulsar:dist_deb:client@.asc), [sha512](@pulsar:dist_deb:client@.sha512) | -| [client-devel](@pulsar:deb:client-devel@) | [asc](@pulsar:dist_deb:client-devel@.asc), [sha512](@pulsar:dist_deb:client-devel@.sha512) | - -2. Install the package using the following command. - -```bash - -$ apt install ./apache-pulsar-client*.deb - -``` - -After you install DEB successfully, Pulsar libraries are in the `/usr/lib` directory. - -### Build - -> If you want to build RPM and Debian packages from the latest master, follow the instructions below. You should run all the instructions at the root directory of your cloned Pulsar repository. - -There are recipes that build RPM and Debian packages containing a -statically linked `libpulsar.so` / `libpulsarnossl.so` / `libpulsar.a` / `libpulsarwithdeps.a` with all required dependencies. - -To build the C++ library packages, you need to build the Java packages first. - -```shell - -mvn install -DskipTests - -``` - -#### RPM - -To build the RPM inside a Docker container, use the command below. The RPMs are in the `pulsar-client-cpp/pkg/rpm/RPMS/x86_64/` path. - -```shell - -pulsar-client-cpp/pkg/rpm/docker-build-rpm.sh - -``` - -| Package name | Content | -|-----|-----| -| pulsar-client | Shared library `libpulsar.so` and `libpulsarnossl.so` | -| pulsar-client-devel | Static library `libpulsar.a`, `libpulsarwithdeps.a`and C++ and C headers | -| pulsar-client-debuginfo | Debug symbols for `libpulsar.so` | - -#### Debian - -To build Debian packages, enter the following command. - -```shell - -pulsar-client-cpp/pkg/deb/docker-build-deb.sh - -``` - -Debian packages are created in the `pulsar-client-cpp/pkg/deb/BUILD/DEB/` path. - -| Package name | Content | -|-----|-----| -| pulsar-client | Shared library `libpulsar.so` and `libpulsarnossl.so` | -| pulsar-client-dev | Static library `libpulsar.a`, `libpulsarwithdeps.a` and C++ and C headers | - -## MacOS - -### Compilation - -1. Clone the Pulsar repository. - -```shell - -$ git clone https://github.com/apache/pulsar - -``` - -2. Install all necessary dependencies. - -```shell - -# OpenSSL installation -$ brew install openssl -$ export OPENSSL_INCLUDE_DIR=/usr/local/opt/openssl/include/ -$ export OPENSSL_ROOT_DIR=/usr/local/opt/openssl/ - -# Protocol Buffers installation -$ brew install protobuf boost boost-python log4cxx -# If you are using python3, you need to install boost-python3 - -# Google Test installation -$ git clone https://github.com/google/googletest.git -$ cd googletest -$ git checkout release-1.12.1 -$ cmake . -$ make install - -``` - -3. Compile the Pulsar client library in the repository that you cloned. - -```shell - -$ cd pulsar-client-cpp -$ cmake . -$ make - -``` - -### Install `libpulsar` - -Pulsar releases are available in the [Homebrew](https://brew.sh/) core repository. You can install the C++ client library with the following command. The package is installed with the library and headers. - -```shell - -brew install libpulsar - -``` - -## Windows (64-bit) - -### Compilation - -1. Clone the Pulsar repository. - -```shell - -$ git clone https://github.com/apache/pulsar - -``` - -2. Install all necessary dependencies. - -```shell - -cd ${PULSAR_HOME}/pulsar-client-cpp -vcpkg install --feature-flags=manifests --triplet x64-windows - -``` - -3. Build C++ libraries. 
```shell

cmake -B ./build -A x64 -DBUILD_PYTHON_WRAPPER=OFF -DBUILD_TESTS=OFF -DVCPKG_TRIPLET=x64-windows -DCMAKE_BUILD_TYPE=Release -S .
cmake --build ./build --config Release

```

> **NOTE**
>
> 1. For Windows 32-bit, you need to use `-A Win32` and `-DVCPKG_TRIPLET=x86-windows`.
> 2. For MSVC Debug mode, you need to replace `Release` with `Debug` for both the `CMAKE_BUILD_TYPE` variable and the `--config` option.

4. Client libraries are available in the following places.

```

${PULSAR_HOME}/pulsar-client-cpp/build/lib/Release/pulsar.lib
${PULSAR_HOME}/pulsar-client-cpp/build/lib/Release/pulsar.dll

```

## Connection URLs

To connect to Pulsar using client libraries, you need to specify a Pulsar protocol URL.

Pulsar protocol URLs are assigned to specific clusters and use the `pulsar` URI scheme. The default port is `6650`. The following is an example for localhost.

```http

pulsar://localhost:6650

```

In a Pulsar cluster in production, the URL looks as follows.

```http

pulsar://pulsar.us-west.example.com:6650

```

If you use TLS authentication, the URL scheme is `pulsar+ssl` and the default port is `6651`. The following is an example.

```http

pulsar+ssl://pulsar.us-west.example.com:6651

```

## Create a consumer

To use Pulsar as a consumer, you need to create a consumer on the C++ client. There are two main ways of using the consumer:
- [Blocking style](#blocking-example): synchronously calling `receive(msg)`.
- [Non-blocking](#consumer-with-a-message-listener) (event based) style: using a message listener.

### Blocking example

The benefit of this approach is that it is the simplest code. Simply keep calling `receive(msg)`, which blocks until a message is received.

This example starts a subscription at the earliest offset and consumes 100 messages.

```c++

#include <pulsar/Client.h>
#include <iostream>

using namespace pulsar;

int main() {
    Client client("pulsar://localhost:6650");

    Consumer consumer;
    ConsumerConfiguration config;
    config.setSubscriptionInitialPosition(InitialPositionEarliest);
    Result result = client.subscribe("persistent://public/default/my-topic", "consumer-1", config, consumer);
    if (result != ResultOk) {
        std::cout << "Failed to subscribe: " << result << std::endl;
        return -1;
    }

    Message msg;
    int ctr = 0;
    // consume 100 messages
    while (ctr < 100) {
        consumer.receive(msg);
        std::cout << "Received: " << msg
                  << " with payload '" << msg.getDataAsString() << "'" << std::endl;

        consumer.acknowledge(msg);
        ctr++;
    }

    std::cout << "Finished consuming synchronously!" << std::endl;

    client.close();
    return 0;
}

```

### Consumer with a message listener

You can avoid running a loop with blocking calls with an event-based style by using a message listener which is invoked for each message that is received.

This example starts a subscription at the earliest offset and consumes 100 messages.
```c++

#include <pulsar/Client.h>
#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

using namespace pulsar;

std::atomic<int> messagesReceived;

void handleAckComplete(Result res) {
    std::cout << "Ack res: " << res << std::endl;
}

void listener(Consumer consumer, const Message& msg) {
    std::cout << "Got message " << msg << " with content '" << msg.getDataAsString() << "'" << std::endl;
    messagesReceived++;
    consumer.acknowledgeAsync(msg.getMessageId(), handleAckComplete);
}

int main() {
    Client client("pulsar://localhost:6650");

    Consumer consumer;
    ConsumerConfiguration config;
    config.setMessageListener(listener);
    config.setSubscriptionInitialPosition(InitialPositionEarliest);
    Result result = client.subscribe("persistent://public/default/my-topic", "consumer-1", config, consumer);
    if (result != ResultOk) {
        std::cout << "Failed to subscribe: " << result << std::endl;
        return -1;
    }

    // wait for 100 messages to be consumed
    while (messagesReceived < 100) {
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }

    std::cout << "Finished consuming asynchronously!" << std::endl;

    client.close();
    return 0;
}

```

## Create a producer

To use Pulsar as a producer, you need to create a producer on the C++ client. There are two main ways of using a producer:
- [Blocking style](#simple-blocking-example): each call to `send` waits for an ack from the broker.
- [Non-blocking asynchronous style](#non-blocking-example): `sendAsync` is called instead of `send` and a callback is supplied for when the ack is received from the broker.

### Simple blocking example

This example sends 100 messages using the blocking style. While simple, it does not produce high throughput as it waits for each ack to come back before sending the next message.

```c++

#include <pulsar/Client.h>
#include <chrono>
#include <iostream>
#include <string>
#include <thread>

using namespace pulsar;

int main() {
    Client client("pulsar://localhost:6650");

    // The producer must be declared before it is passed to createProducer
    Producer producer;
    Result result = client.createProducer("persistent://public/default/my-topic", producer);
    if (result != ResultOk) {
        std::cout << "Error creating producer: " << result << std::endl;
        return -1;
    }

    // Send 100 messages synchronously
    int ctr = 0;
    while (ctr < 100) {
        std::string content = "msg" + std::to_string(ctr);
        Message msg = MessageBuilder().setContent(content).setProperty("x", "1").build();
        Result result = producer.send(msg);
        if (result != ResultOk) {
            std::cout << "The message " << content << " could not be sent, received code: " << result << std::endl;
        } else {
            std::cout << "The message " << content << " sent successfully" << std::endl;
        }

        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        ctr++;
    }

    std::cout << "Finished producing synchronously!" << std::endl;

    client.close();
    return 0;
}

```

### Non-blocking example

This example sends 100 messages using the non-blocking style, calling `sendAsync` instead of `send`. This allows the producer to have multiple messages in flight at a time, which increases throughput.

The producer configuration `blockIfQueueFull` is useful here to avoid `ResultProducerQueueIsFull` errors when the internal queue for outgoing send requests becomes full. Once the internal queue is full, `sendAsync` becomes blocking, which can make your code simpler.

Without this configuration, the result code `ResultProducerQueueIsFull` is passed to the callback. You must decide how to deal with that (retry, discard, etc.).
```c++

#include <pulsar/Client.h>
#include <atomic>
#include <chrono>
#include <functional>
#include <iostream>
#include <string>
#include <thread>

using namespace pulsar;

std::atomic<int> acksReceived;

void callback(Result code, const MessageId& msgId, std::string msgContent) {
    // message processing logic here
    std::cout << "Received ack for msg: " << msgContent << " with code: "
              << code << " -- MsgID: " << msgId << std::endl;
    acksReceived++;
}

int main() {
    Client client("pulsar://localhost:6650");

    ProducerConfiguration producerConf;
    producerConf.setBlockIfQueueFull(true);
    Producer producer;
    Result result = client.createProducer("persistent://public/default/my-topic",
                                          producerConf, producer);
    if (result != ResultOk) {
        std::cout << "Error creating producer: " << result << std::endl;
        return -1;
    }

    // Send 100 messages asynchronously
    int ctr = 0;
    while (ctr < 100) {
        std::string content = "msg" + std::to_string(ctr);
        Message msg = MessageBuilder().setContent(content).setProperty("x", "1").build();
        producer.sendAsync(msg, std::bind(callback,
                                          std::placeholders::_1, std::placeholders::_2, content));

        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        ctr++;
    }

    // wait for 100 messages to be acked
    while (acksReceived < 100) {
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }

    std::cout << "Finished producing asynchronously!" << std::endl;

    client.close();
    return 0;
}

```

### Partitioned topics and lazy producers

When scaling out a Pulsar topic, you may configure a topic to have hundreds of partitions. Likewise, you may have also scaled out your producers so there are hundreds or even thousands of producers. This can put some strain on the Pulsar brokers, because when you create a producer on a partitioned topic, internally it creates one internal producer per partition, which involves communications to the brokers for each one. So for a topic with 1000 partitions and 1000 producers, it ends up creating 1,000,000 internal producers across the producer applications, each of which has to communicate with a broker to find out which broker it should connect to and then perform the connection handshake.

You can reduce the load caused by this combination of a large number of partitions and many producers by doing the following:
- use SinglePartition partition routing mode (this ensures that all messages are only sent to a single, randomly selected partition)
- use non-keyed messages (when messages are keyed, routing is based on the hash of the key and so messages will end up being sent to multiple partitions)
- use lazy producers (this ensures that an internal producer is only created on demand when a message needs to be routed to a partition)

With our example above, that reduces the number of internal producers spread out over the 1000 producer apps from 1,000,000 to just 1000.

Note that there can be extra latency for the first message sent. If you set a low send timeout, this timeout could be reached if the initial connection handshake is slow to complete.

```c++

ProducerConfiguration producerConf;
producerConf.setPartitionsRoutingMode(ProducerConfiguration::UseSinglePartition);
producerConf.setLazyStartPartitionedProducers(true);

```

## Enable authentication in connection URLs

If you use TLS authentication when connecting to Pulsar, you need to use the `pulsar+ssl` scheme in the connection URL, and the default port is `6651`. The following is an example.
```cpp

ClientConfiguration config = ClientConfiguration();
config.setUseTls(true);
config.setTlsTrustCertsFilePath("/path/to/cacert.pem");
config.setTlsAllowInsecureConnection(false);
config.setAuth(pulsar::AuthTls::create(
        "/path/to/client-cert.pem", "/path/to/client-key.pem"));

Client client("pulsar+ssl://my-broker.com:6651", config);

```

For complete examples, refer to [C++ client examples](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp/examples).

## Schema

This section describes some schema examples. For more information about schema, see [Pulsar schema](schema-get-started.md).

### Avro schema

- The following example shows how to create a producer with an Avro schema.

  ```cpp
  
  static const std::string exampleSchema =
      "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\","
      "\"fields\":[{\"name\":\"a\",\"type\":\"int\"},{\"name\":\"b\",\"type\":\"int\"}]}";
  Producer producer;
  ProducerConfiguration producerConf;
  producerConf.setSchema(SchemaInfo(AVRO, "Avro", exampleSchema));
  client.createProducer("topic-avro", producerConf, producer);

  ```

- The following example shows how to create a consumer with an Avro schema.

  ```cpp
  
  static const std::string exampleSchema =
      "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\","
      "\"fields\":[{\"name\":\"a\",\"type\":\"int\"},{\"name\":\"b\",\"type\":\"int\"}]}";
  ConsumerConfiguration consumerConf;
  Consumer consumer;
  consumerConf.setSchema(SchemaInfo(AVRO, "Avro", exampleSchema));
  client.subscribe("topic-avro", "sub-2", consumerConf, consumer);

  ```

### ProtobufNative schema

The following example shows how to create a producer and a consumer with a ProtobufNative schema.

1. Generate the `User` class using Protobuf3.

   :::note

   You need to use Protobuf3 or later versions.

   :::

   ```protobuf
   
   syntax = "proto3";

   message User {
       string name = 1;
       int32 age = 2;
   }

   ```

2. Include `ProtobufNativeSchema.h` in your source code. Ensure the Protobuf dependency has been added to your project.

   ```c++
   
   #include <pulsar/ProtobufNativeSchema.h>

   ```

3. Create a producer to send a `User` instance.

   ```c++
   
   ProducerConfiguration producerConf;
   producerConf.setSchema(createProtobufNativeSchema(User::GetDescriptor()));
   Producer producer;
   client.createProducer("topic-protobuf", producerConf, producer);
   User user;
   user.set_name("my-name");
   user.set_age(10);
   std::string content;
   user.SerializeToString(&content);
   producer.send(MessageBuilder().setContent(content).build());

   ```

4. Create a consumer to receive a `User` instance.
   ```c++
   
   ConsumerConfiguration consumerConf;
   consumerConf.setSchema(createProtobufNativeSchema(User::GetDescriptor()));
   consumerConf.setSubscriptionInitialPosition(InitialPositionEarliest);
   Consumer consumer;
   client.subscribe("topic-protobuf", "my-sub", consumerConf, consumer);
   Message msg;
   consumer.receive(msg);
   User user2;
   user2.ParseFromArray(msg.getData(), msg.getLength());

   ```

diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/client-libraries-dotnet.md b/site2/website/versioned_docs/version-2.9.1-deprecated/client-libraries-dotnet.md
deleted file mode 100644
index b574fa0b2e5ed8..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/client-libraries-dotnet.md
+++ /dev/null
@@ -1,434 +0,0 @@
---
id: client-libraries-dotnet
title: Pulsar C# client
sidebar_label: "C#"
original_id: client-libraries-dotnet
---

You can use the Pulsar C# client (DotPulsar) to create Pulsar producers and consumers in C#. All the methods in the producer, consumer, and reader of a C# client are thread-safe. The official documentation for DotPulsar is available [here](https://github.com/apache/pulsar-dotpulsar/wiki).

## Installation

You can install the Pulsar C# client library either through the dotnet CLI or through Visual Studio. This section describes how to install the Pulsar C# client library through the dotnet CLI. For information about how to install the Pulsar C# client library through Visual Studio, see [here](https://docs.microsoft.com/en-us/visualstudio/mac/nuget-walkthrough?view=vsmac-2019).

### Prerequisites

Install the [.NET Core SDK](https://dotnet.microsoft.com/download/), which provides the dotnet command-line tool. Starting in Visual Studio 2017, the dotnet CLI is automatically installed with any .NET Core related workloads.

### Procedures

To install the Pulsar C# client library, follow these steps:

1. Create a project.

   1. Create a folder for the project.

   2. Open a terminal window and switch to the new folder.

   3. Create the project using the following command.

      ```
      
      dotnet new console
      
      ```

   4. Use `dotnet run` to test that the app has been created properly.

2. Add the DotPulsar NuGet package.

   1. Use the following command to install the `DotPulsar` package.

      ```
      
      dotnet add package DotPulsar
      
      ```

   2. After the command completes, open the `.csproj` file to see the added reference (the version matches the release you installed).

      ```xml
      
      <ItemGroup>
        <PackageReference Include="DotPulsar" Version="x.y.z" />
      </ItemGroup>

      ```

## Client

This section describes some configuration examples for the Pulsar C# client.

### Create client

This example shows how to create a Pulsar C# client connected to localhost.

```c#

var client = PulsarClient.Builder().Build();

```

To create a Pulsar C# client by using the builder, you can specify the following options.

| Option | Description | Default |
| ---- | ---- | ---- |
| ServiceUrl | Set the service URL for the Pulsar cluster. | pulsar://localhost:6650 |
| RetryInterval | Set the time to wait before retrying an operation or a reconnection. | 3s |

### Create producer

This section describes how to create a producer.

- Create a producer by using the builder.

  ```c#
  
  var producer = client.NewProducer()
      .Topic("persistent://public/default/mytopic")
      .Create();

  ```

- Create a producer without using the builder.
  ```c#
  
  var options = new ProducerOptions("persistent://public/default/mytopic");
  var producer = client.CreateProducer(options);

  ```

### Create consumer

This section describes how to create a consumer.

- Create a consumer by using the builder.

  ```c#
  
  var consumer = client.NewConsumer()
      .SubscriptionName("MySubscription")
      .Topic("persistent://public/default/mytopic")
      .Create();

  ```

- Create a consumer without using the builder.

  ```c#
  
  var options = new ConsumerOptions("MySubscription", "persistent://public/default/mytopic");
  var consumer = client.CreateConsumer(options);

  ```

### Create reader

This section describes how to create a reader.

- Create a reader by using the builder.

  ```c#
  
  var reader = client.NewReader()
      .StartMessageId(MessageId.Earliest)
      .Topic("persistent://public/default/mytopic")
      .Create();

  ```

- Create a reader without using the builder.

  ```c#
  
  var options = new ReaderOptions(MessageId.Earliest, "persistent://public/default/mytopic");
  var reader = client.CreateReader(options);

  ```

### Configure encryption policies

The Pulsar C# client supports four kinds of encryption policies:

- `EnforceUnencrypted`: always use unencrypted connections.
- `EnforceEncrypted`: always use encrypted connections.
- `PreferUnencrypted`: use unencrypted connections, if possible.
- `PreferEncrypted`: use encrypted connections, if possible.

This example shows how to set the `EnforceEncrypted` encryption policy.

```c#

var client = PulsarClient.Builder()
    .ConnectionSecurity(EncryptionPolicy.EnforceEncrypted)
    .Build();

```

### Configure authentication

Currently, the Pulsar C# client supports TLS (Transport Layer Security) and JWT (JSON Web Token) authentication.

If you have followed [Authentication using TLS](security-tls-authentication.md), you have a certificate and a key. To use them from the Pulsar C# client, follow these steps:

1. Create an unencrypted and password-less pfx file.

   ```
   
   openssl pkcs12 -export -keypbe NONE -certpbe NONE -out admin.pfx -inkey admin.key.pem -in admin.cert.pem -passout pass:

   ```

2. Use the admin.pfx file to create an X509Certificate2 and pass it to the Pulsar C# client.

   ```c#
   
   var clientCertificate = new X509Certificate2("admin.pfx");
   var client = PulsarClient.Builder()
       .AuthenticateUsingClientCertificate(clientCertificate)
       .Build();

   ```
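For JWT authentication, a minimal sketch might look like the following. This assumes DotPulsar's `AuthenticateUsingToken` builder method and an illustrative token value; check the DotPulsar wiki for the exact method available in your version.

```c#

// Illustrative: pass a JWT issued for this client to the builder.
var client = PulsarClient.Builder()
    .AuthenticateUsingToken("my-jwt-token")
    .Build();

```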
## Producer

A producer is a process that attaches to a topic and publishes messages to a Pulsar broker for processing. This section describes some configuration examples for the producer.

### Send data

This example shows how to send data.

```c#

var data = Encoding.UTF8.GetBytes("Hello World");
await producer.Send(data);

```

### Send messages with customized metadata

- Send messages with customized metadata by using the builder.

  ```c#
  
  var data = Encoding.UTF8.GetBytes("Hello World");
  var messageId = await producer.NewMessage()
      .Property("SomeKey", "SomeValue")
      .Send(data);

  ```

- Send messages with customized metadata without using the builder.

  ```c#
  
  var data = Encoding.UTF8.GetBytes("Hello World");
  var metadata = new MessageMetadata();
  metadata["SomeKey"] = "SomeValue";
  var messageId = await producer.Send(metadata, data);

  ```

## Consumer

A consumer is a process that attaches to a topic through a subscription and then receives messages. This section describes some configuration examples for the consumer.

### Receive messages

This example shows how a consumer receives messages from a topic.

```c#

await foreach (var message in consumer.Messages())
{
    Console.WriteLine("Received: " + Encoding.UTF8.GetString(message.Data.ToArray()));
}

```

### Acknowledge messages

Messages can be acknowledged individually or cumulatively. For details about message acknowledgement, see [acknowledgement](concepts-messaging.md#acknowledgement).

- Acknowledge messages individually.

  ```c#
  
  await foreach (var message in consumer.Messages())
  {
      Console.WriteLine("Received: " + Encoding.UTF8.GetString(message.Data.ToArray()));
      await consumer.Acknowledge(message);
  }

  ```

- Acknowledge messages cumulatively.

  ```c#
  
  await consumer.AcknowledgeCumulative(message);

  ```

### Unsubscribe from topics

This example shows how a consumer unsubscribes from a topic.

```c#

await consumer.Unsubscribe();

```

#### Note

> A consumer cannot be used after it unsubscribes from a topic; unsubscribing disposes the consumer.

## Reader

A reader is effectively a consumer without a cursor. This means that Pulsar does not keep track of your progress and there is no need to acknowledge messages.

This example shows how a reader receives messages.

```c#

await foreach (var message in reader.Messages())
{
    Console.WriteLine("Received: " + Encoding.UTF8.GetString(message.Data.ToArray()));
}

```

## Monitoring

This section describes how to monitor the producer, consumer, and reader state.

### Monitor producer state

The following table lists states available for the producer.

| State | Description |
| ---- | ----|
| Closed | The producer or the Pulsar client has been disposed. |
| Connected | All is well. |
| Disconnected | The connection is lost and attempts are being made to reconnect. |
| Faulted | An unrecoverable error has occurred. |

This example shows how to monitor the producer state.

```c#

private static async ValueTask Monitor(IProducer producer, CancellationToken cancellationToken)
{
    var state = ProducerState.Disconnected;

    while (!cancellationToken.IsCancellationRequested)
    {
        state = await producer.StateChangedFrom(state, cancellationToken);

        var stateMessage = state switch
        {
            ProducerState.Connected => "The producer is connected",
            ProducerState.Disconnected => "The producer is disconnected",
            ProducerState.Closed => "The producer has closed",
            ProducerState.Faulted => "The producer has faulted",
            _ => $"The producer has an unknown state '{state}'"
        };

        Console.WriteLine(stateMessage);

        if (producer.IsFinalState(state))
            return;
    }
}

```
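A minimal sketch of wiring this monitor up alongside your producing code might look like the following; the cancellation handling is illustrative, not part of the original example.

```c#

var cts = new CancellationTokenSource();
var monitorTask = Monitor(producer, cts.Token).AsTask();

// ... produce messages here ...

// Stop monitoring on shutdown; cancellation surfaces as an exception.
cts.Cancel();
try { await monitorTask; }
catch (OperationCanceledException) { /* expected on shutdown */ }

```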
### Monitor consumer state

The following table lists states available for the consumer.

| State | Description |
| ---- | ----|
| Active | All is well. |
| Inactive | All is well. The subscription type is `Failover` and you are not the active consumer. |
| Closed | The consumer or the Pulsar client has been disposed. |
| Disconnected | The connection is lost and attempts are being made to reconnect. |
| Faulted | An unrecoverable error has occurred. |
| ReachedEndOfTopic | No more messages are delivered. |

This example shows how to monitor the consumer state.

```c#

private static async ValueTask Monitor(IConsumer consumer, CancellationToken cancellationToken)
{
    var state = ConsumerState.Disconnected;

    while (!cancellationToken.IsCancellationRequested)
    {
        state = await consumer.StateChangedFrom(state, cancellationToken);

        var stateMessage = state switch
        {
            ConsumerState.Active => "The consumer is active",
            ConsumerState.Inactive => "The consumer is inactive",
            ConsumerState.Disconnected => "The consumer is disconnected",
            ConsumerState.Closed => "The consumer has closed",
            ConsumerState.ReachedEndOfTopic => "The consumer has reached end of topic",
            ConsumerState.Faulted => "The consumer has faulted",
            _ => $"The consumer has an unknown state '{state}'"
        };

        Console.WriteLine(stateMessage);

        if (consumer.IsFinalState(state))
            return;
    }
}

```

### Monitor reader state

The following table lists states available for the reader.

| State | Description |
| ---- | ----|
| Closed | The reader or the Pulsar client has been disposed. |
| Connected | All is well. |
| Disconnected | The connection is lost and attempts are being made to reconnect. |
| Faulted | An unrecoverable error has occurred. |
| ReachedEndOfTopic | No more messages are delivered. |

This example shows how to monitor the reader state.

```c#

private static async ValueTask Monitor(IReader reader, CancellationToken cancellationToken)
{
    var state = ReaderState.Disconnected;

    while (!cancellationToken.IsCancellationRequested)
    {
        state = await reader.StateChangedFrom(state, cancellationToken);

        var stateMessage = state switch
        {
            ReaderState.Connected => "The reader is connected",
            ReaderState.Disconnected => "The reader is disconnected",
            ReaderState.Closed => "The reader has closed",
            ReaderState.ReachedEndOfTopic => "The reader has reached end of topic",
            ReaderState.Faulted => "The reader has faulted",
            _ => $"The reader has an unknown state '{state}'"
        };

        Console.WriteLine(stateMessage);

        if (reader.IsFinalState(state))
            return;
    }
}

```

diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/client-libraries-go.md b/site2/website/versioned_docs/version-2.9.1-deprecated/client-libraries-go.md
deleted file mode 100644
index 6281b03dd8c805..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/client-libraries-go.md
+++ /dev/null
@@ -1,885 +0,0 @@
---
id: client-libraries-go
title: Pulsar Go client
sidebar_label: "Go"
original_id: client-libraries-go
---

> Tip: The CGo client is deprecated. If you want to know more about the CGo client, refer to the [CGo client docs](client-libraries-cgo.md).

You can use the Pulsar [Go client](https://github.com/apache/pulsar-client-go) to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Go (aka Golang).

> **API docs available as well**
> For standard API docs, consult the [Godoc](https://godoc.org/github.com/apache/pulsar-client-go/pulsar).


## Installation

### Install go package

You can install the `pulsar` library locally using `go get`.

```bash

$ go get -u "github.com/apache/pulsar-client-go/pulsar"

```

Once installed locally, you can import it into your project:

```go

import "github.com/apache/pulsar-client-go/pulsar"

```

## Connection URLs

To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL.
Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme and have a default port of 6650. Here's an example for `localhost`:

```http

pulsar://localhost:6650

```

If you have multiple brokers, you can set the URL as below.

```http

pulsar://localhost:6650,localhost:6651,localhost:6652

```

A URL for a production Pulsar cluster may look something like this:

```http

pulsar://pulsar.us-west.example.com:6650

```

If you're using [TLS](security-tls-authentication.md) authentication, the URL will look something like this:

```http

pulsar+ssl://pulsar.us-west.example.com:6651

```

## Create a client

In order to interact with Pulsar, you'll first need a `Client` object. You can create a client object using the `NewClient` function, passing in a `ClientOptions` object (more on configuration [below](#client-configuration)). Here's an example:

```go

import (
    "log"
    "time"

    "github.com/apache/pulsar-client-go/pulsar"
)

func main() {
    client, err := pulsar.NewClient(pulsar.ClientOptions{
        URL:               "pulsar://localhost:6650",
        OperationTimeout:  30 * time.Second,
        ConnectionTimeout: 30 * time.Second,
    })
    if err != nil {
        log.Fatalf("Could not instantiate Pulsar client: %v", err)
    }

    defer client.Close()
}

```

If you have multiple brokers, you can initiate a client object as below.

```go

import (
    "log"
    "time"

    "github.com/apache/pulsar-client-go/pulsar"
)

func main() {
    client, err := pulsar.NewClient(pulsar.ClientOptions{
        URL:               "pulsar://localhost:6650,localhost:6651,localhost:6652",
        OperationTimeout:  30 * time.Second,
        ConnectionTimeout: 30 * time.Second,
    })
    if err != nil {
        log.Fatalf("Could not instantiate Pulsar client: %v", err)
    }

    defer client.Close()
}

```

The following configurable parameters are available for Pulsar clients:

| Name | Description | Default |
| :-------- | :---------- |:---------- |
| URL | Configure the service URL for the Pulsar service.<br /><br />If you have multiple brokers, you can set multiple Pulsar cluster addresses for a client.<br /><br />This parameter is **required**. | None |
| ConnectionTimeout | Timeout for the establishment of a TCP connection | 30s |
| OperationTimeout | Set the operation timeout. Producer-create, subscribe and unsubscribe operations will be retried until this interval, after which the operation will be marked as failed | 30s |
| Authentication | Configure the authentication provider. Example: `Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem")` | no authentication |
| TLSTrustCertsFilePath | Set the path to the trusted TLS certificate file | |
| TLSAllowInsecureConnection | Configure whether the Pulsar client accepts untrusted TLS certificates from the broker | false |
| TLSValidateHostname | Configure whether the Pulsar client verifies the validity of the host name from the broker | false |
| ListenerName | Configure the net model for VPC users to connect to the Pulsar broker | |
| MaxConnectionsPerBroker | Max number of connections to a single broker that is kept in the pool | 1 |
| CustomMetricsLabels | Add custom labels to all the metrics reported by this client instance | |
| Logger | Configure the logger used by the client | logrus.StandardLogger |
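Several of these options can be combined on a single client. The following sketch uses illustrative values only; all the field names come from the table above.

```go

client, err := pulsar.NewClient(pulsar.ClientOptions{
    URL:                        "pulsar+ssl://localhost:6651",
    TLSTrustCertsFilePath:      "/path/to/cacert.pem",
    TLSAllowInsecureConnection: false,
    MaxConnectionsPerBroker:    2,
    OperationTimeout:           30 * time.Second,
})
if err != nil {
    log.Fatalf("Could not instantiate Pulsar client: %v", err)
}
defer client.Close()

```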
## Producers

Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Go producers using a `ProducerOptions` object. Here's an example:

```go

producer, err := client.CreateProducer(pulsar.ProducerOptions{
    Topic: "my-topic",
})
if err != nil {
    log.Fatal(err)
}
defer producer.Close()

_, err = producer.Send(context.Background(), &pulsar.ProducerMessage{
    Payload: []byte("hello"),
})
if err != nil {
    fmt.Println("Failed to publish message", err)
}
fmt.Println("Published message")

```

### Producer operations

Pulsar Go producers have the following methods available:

Method | Description | Return type
:------|:------------|:-----------
`Topic()` | Fetches the producer's [topic](reference-terminology.md#topic)| `string`
`Name()` | Fetches the producer's name | `string`
`Send(context.Context, *ProducerMessage)` | Publishes a [message](#messages) to the producer's topic. This call blocks until the message is successfully acknowledged by the Pulsar broker, or an error is returned if the timeout set using the `SendTimeout` in the producer's [configuration](#producer-configuration) is exceeded. | (MessageID, error)
`SendAsync(context.Context, *ProducerMessage, func(MessageID, *ProducerMessage, error))`| Publishes a message to the producer's topic asynchronously. The supplied callback is invoked once the message is acknowledged by the Pulsar broker, or with an error if it is not. |
`LastSequenceID()` | Gets the last sequence id that was published by this producer. This represents either the automatically assigned or custom sequence id (set on the ProducerMessage) that was published and acknowledged by the broker. | int64
`Flush()`| Flush all the messages buffered in the client and wait until all messages have been successfully persisted. | error
`Close()` | Closes the producer and releases all resources allocated to it. If `Close()` is called then no more messages will be accepted from the publisher. This method will block until all pending publish requests have been persisted by Pulsar. If an error is thrown, no pending writes will be retried.
| - -### Producer Example - -#### How to use message router in producer - -```go - -client, err := NewClient(pulsar.ClientOptions{ - URL: serviceURL, -}) - -if err != nil { - log.Fatal(err) -} -defer client.Close() - -// Only subscribe on the specific partition -consumer, err := client.Subscribe(pulsar.ConsumerOptions{ - Topic: "my-partitioned-topic-partition-2", - SubscriptionName: "my-sub", -}) - -if err != nil { - log.Fatal(err) -} -defer consumer.Close() - -producer, err := client.CreateProducer(pulsar.ProducerOptions{ - Topic: "my-partitioned-topic", - MessageRouter: func(msg *ProducerMessage, tm TopicMetadata) int { - fmt.Println("Routing message ", msg, " -- Partitions: ", tm.NumPartitions()) - return 2 - }, -}) - -if err != nil { - log.Fatal(err) -} -defer producer.Close() - -``` - -#### How to use schema interface in producer - -```go - -type testJSON struct { - ID int `json:"id"` - Name string `json:"name"` -} - -``` - -```go - -var ( - exampleSchemaDef = "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," + - "\"fields\":[{\"name\":\"ID\",\"type\":\"int\"},{\"name\":\"Name\",\"type\":\"string\"}]}" -) - -``` - -```go - -client, err := NewClient(pulsar.ClientOptions{ - URL: "pulsar://localhost:6650", -}) -if err != nil { - log.Fatal(err) -} -defer client.Close() - -properties := make(map[string]string) -properties["pulsar"] = "hello" -jsonSchemaWithProperties := NewJSONSchema(exampleSchemaDef, properties) -producer, err := client.CreateProducer(ProducerOptions{ - Topic: "jsonTopic", - Schema: jsonSchemaWithProperties, -}) -assert.Nil(t, err) - -_, err = producer.Send(context.Background(), &ProducerMessage{ - Value: &testJSON{ - ID: 100, - Name: "pulsar", - }, -}) -if err != nil { - log.Fatal(err) -} -producer.Close() - -``` - -#### How to use delay relative in producer - -```go - -client, err := NewClient(pulsar.ClientOptions{ - URL: "pulsar://localhost:6650", -}) -if err != nil { - log.Fatal(err) -} -defer client.Close() - -topicName := newTopicName() -producer, err := client.CreateProducer(pulsar.ProducerOptions{ - Topic: topicName, - DisableBatching: true, -}) -if err != nil { - log.Fatal(err) -} -defer producer.Close() - -consumer, err := client.Subscribe(pulsar.ConsumerOptions{ - Topic: topicName, - SubscriptionName: "subName", - Type: Shared, -}) -if err != nil { - log.Fatal(err) -} -defer consumer.Close() - -ID, err := producer.Send(context.Background(), &pulsar.ProducerMessage{ - Payload: []byte(fmt.Sprintf("test")), - DeliverAfter: 3 * time.Second, -}) -if err != nil { - log.Fatal(err) -} -fmt.Println(ID) - -ctx, canc := context.WithTimeout(context.Background(), 1*time.Second) -msg, err := consumer.Receive(ctx) -if err != nil { - log.Fatal(err) -} -fmt.Println(msg.Payload()) -canc() - -ctx, canc = context.WithTimeout(context.Background(), 5*time.Second) -msg, err = consumer.Receive(ctx) -if err != nil { - log.Fatal(err) -} -fmt.Println(msg.Payload()) -canc() - -``` - -### Producer configuration - - Name | Description | Default -| :-------- | :---------- |:---------- | -| Topic | Topic specify the topic this consumer will subscribe to. This argument is required when constructing the reader. | | -| Name | Name specify a name for the producer. If not assigned, the system will generate a globally unique name which can be access with Producer.ProducerName(). 
| |
| Properties | Properties attach a set of application defined properties to the producer. These properties will be visible in the topic stats | |
| SendTimeout | SendTimeout sets the timeout for a message that is not acknowledged by the server | 30s |
| DisableBlockIfQueueFull | DisableBlockIfQueueFull controls whether Send and SendAsync block if the producer's message queue is full | false |
| MaxPendingMessages| MaxPendingMessages sets the max size of the queue holding the messages pending to receive an acknowledgment from the broker. | |
| HashingScheme | HashingScheme changes the `HashingScheme` used to choose the partition on which to publish a particular message. | JavaStringHash |
| CompressionType | CompressionType sets the compression type for the producer. | not compressed |
| CompressionLevel | Define the desired compression level. Options: Default, Faster and Better | Default |
| MessageRouter | MessageRouter sets a custom message routing policy by passing an implementation of MessageRouter | |
| DisableBatching | DisableBatching controls whether automatic batching of messages is enabled for the producer. | false |
| BatchingMaxPublishDelay | BatchingMaxPublishDelay sets the time period within which the messages sent will be batched | 1ms |
| BatchingMaxMessages | BatchingMaxMessages sets the maximum number of messages permitted in a batch. | 1000 |
| BatchingMaxSize | BatchingMaxSize sets the maximum number of bytes permitted in a batch. | 128KB |
| Schema | Schema sets a custom schema type by passing an implementation of `Schema` | bytes[] |
| Interceptors | A chain of interceptors. These interceptors are called at some points defined in the `ProducerInterceptor` interface. | None |
| MaxReconnectToBroker | MaxReconnectToBroker sets the maximum retry number of reconnectToBroker | ultimate |
| BatcherBuilderType | BatcherBuilderType sets the batch builder type. This is used to create a batch container when batching is enabled. Options: DefaultBatchBuilder and KeyBasedBatchBuilder | DefaultBatchBuilder |

## Consumers

Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on those topics. You can [configure](#consumer-configuration) Go consumers using a `ConsumerOptions` object. Here's a basic example:

```go

consumer, err := client.Subscribe(pulsar.ConsumerOptions{
    Topic:            "topic-1",
    SubscriptionName: "my-sub",
    Type:             pulsar.Shared,
})
if err != nil {
    log.Fatal(err)
}
defer consumer.Close()

for i := 0; i < 10; i++ {
    msg, err := consumer.Receive(context.Background())
    if err != nil {
        log.Fatal(err)
    }

    fmt.Printf("Received message msgId: %#v -- content: '%s'\n",
        msg.ID(), string(msg.Payload()))

    consumer.Ack(msg)
}

if err := consumer.Unsubscribe(); err != nil {
    log.Fatal(err)
}

```

### Consumer operations

Pulsar Go consumers have the following methods available:

Method | Description | Return type
:------|:------------|:-----------
`Subscription()` | Returns the consumer's subscription name | `string`
`Unsubscribe()` | Unsubscribes the consumer from the assigned topic. Returns an error if the unsubscribe operation is somehow unsuccessful. | `error`
`Receive(context.Context)` | Receives a single message from the topic. This method blocks until a message is available. | `(Message, error)`
`Chan()` | Returns a channel from which to consume messages.
| `<-chan ConsumerMessage` -`Ack(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) | -`AckID(MessageID)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID | -`ReconsumeLater(msg Message, delay time.Duration)` | ReconsumeLater mark a message for redelivery after custom delay | -`Nack(Message)` | Acknowledge the failure to process a single message. | -`NackID(MessageID)` | Acknowledge the failure to process a single message. | -`Seek(msgID MessageID)` | Reset the subscription associated with this consumer to a specific message id. The message id can either be a specific message or represent the first or last messages in the topic. | `error` -`SeekByTime(time time.Time)` | Reset the subscription associated with this consumer to a specific message publish time. | `error` -`Close()` | Closes the consumer, disabling its ability to receive messages from the broker | -`Name()` | Name returns the name of consumer | `string` - -### Receive example - -#### How to use regex consumer - -```go - -client, err := pulsar.NewClient(pulsar.ClientOptions{ - URL: "pulsar://localhost:6650", -}) - -defer client.Close() - -p, err := client.CreateProducer(pulsar.ProducerOptions{ - Topic: topicInRegex, - DisableBatching: true, -}) -if err != nil { - log.Fatal(err) -} -defer p.Close() - -topicsPattern := fmt.Sprintf("persistent://%s/foo.*", namespace) -opts := pulsar.ConsumerOptions{ - TopicsPattern: topicsPattern, - SubscriptionName: "regex-sub", -} -consumer, err := client.Subscribe(opts) -if err != nil { - log.Fatal(err) -} -defer consumer.Close() - -``` - -#### How to use multi topics Consumer - -```go - -func newTopicName() string { - return fmt.Sprintf("my-topic-%v", time.Now().Nanosecond()) -} - - -topic1 := "topic-1" -topic2 := "topic-2" - -client, err := NewClient(pulsar.ClientOptions{ - URL: "pulsar://localhost:6650", -}) -if err != nil { - log.Fatal(err) -} -topics := []string{topic1, topic2} -consumer, err := client.Subscribe(pulsar.ConsumerOptions{ - Topics: topics, - SubscriptionName: "multi-topic-sub", -}) -if err != nil { - log.Fatal(err) -} -defer consumer.Close() - -``` - -#### How to use consumer listener - -```go - -import ( - "fmt" - "log" - - "github.com/apache/pulsar-client-go/pulsar" -) - -func main() { - client, err := pulsar.NewClient(pulsar.ClientOptions{URL: "pulsar://localhost:6650"}) - if err != nil { - log.Fatal(err) - } - - defer client.Close() - - channel := make(chan pulsar.ConsumerMessage, 100) - - options := pulsar.ConsumerOptions{ - Topic: "topic-1", - SubscriptionName: "my-subscription", - Type: pulsar.Shared, - } - - options.MessageChannel = channel - - consumer, err := client.Subscribe(options) - if err != nil { - log.Fatal(err) - } - - defer consumer.Close() - - // Receive messages from channel. The channel returns a struct which contains message and the consumer from where - // the message was received. 
    // It's not necessary here since we have 1 single consumer, but the channel could be
    // shared across multiple consumers as well
    for cm := range channel {
        msg := cm.Message
        fmt.Printf("Received message msgId: %v -- content: '%s'\n",
            msg.ID(), string(msg.Payload()))

        consumer.Ack(msg)
    }
}

```

#### How to use consumer receive timeout

```go

client, err := pulsar.NewClient(pulsar.ClientOptions{
    URL: "pulsar://localhost:6650",
})
if err != nil {
    log.Fatal(err)
}
defer client.Close()

topic := "test-topic-with-no-messages"
ctx, cancel := context.WithTimeout(context.Background(), 500*time.Millisecond)
defer cancel()

// create consumer
consumer, err := client.Subscribe(pulsar.ConsumerOptions{
    Topic:            topic,
    SubscriptionName: "my-sub1",
    Type:             pulsar.Shared,
})
if err != nil {
    log.Fatal(err)
}
defer consumer.Close()

msg, err := consumer.Receive(ctx)
if err != nil {
    log.Fatal(err)
}
fmt.Println(msg.Payload())

```

#### How to use schema in consumer

```go

type testJSON struct {
    ID   int    `json:"id"`
    Name string `json:"name"`
}

```

```go

var (
    exampleSchemaDef = "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," +
        "\"fields\":[{\"name\":\"ID\",\"type\":\"int\"},{\"name\":\"Name\",\"type\":\"string\"}]}"
)

```

```go

client, err := pulsar.NewClient(pulsar.ClientOptions{
    URL: "pulsar://localhost:6650",
})
if err != nil {
    log.Fatal(err)
}
defer client.Close()

var s testJSON

consumerJS := pulsar.NewJSONSchema(exampleSchemaDef, nil)
consumer, err := client.Subscribe(pulsar.ConsumerOptions{
    Topic:                       "jsonTopic",
    SubscriptionName:            "sub-1",
    Schema:                      consumerJS,
    SubscriptionInitialPosition: pulsar.SubscriptionPositionEarliest,
})
if err != nil {
    log.Fatal(err)
}
defer consumer.Close()

msg, err := consumer.Receive(context.Background())
if err != nil {
    log.Fatal(err)
}
if err := msg.GetSchemaValue(&s); err != nil {
    log.Fatal(err)
}

```

### Consumer configuration

| Name | Description | Default |
| :-------- | :---------- |:---------- |
| Topic | Specify the topic this consumer will subscribe to. Either a topic, a list of topics or a topics pattern is required when subscribing. | |
| Topics | Specify a list of topics this consumer will subscribe on. Either a topic, a list of topics or a topics pattern is required when subscribing | |
| TopicsPattern | Specify a regular expression to subscribe to multiple topics under the same namespace. Either a topic, a list of topics or a topics pattern is required when subscribing | |
| AutoDiscoveryPeriod | Specify the interval in which to poll for new partitions or new topics if using a TopicsPattern. | |
| SubscriptionName | Specify the subscription name for this consumer. This argument is required when subscribing | |
| Name | Set the consumer name | |
| Properties | Properties attach a set of application defined properties to the consumer. These properties will be visible in the topic stats | |
| Type | Select the subscription type to be used when subscribing to the topic. | Exclusive |
| SubscriptionInitialPosition | InitialPosition at which the cursor will be set when subscribing | Latest |
| DLQ | Configuration for Dead Letter Queue consumer policy. | no DLQ |
| MessageChannel | Sets a `MessageChannel` for the consumer. When a message is received, it will be pushed to the channel for consumption | |
| ReceiverQueueSize | Sets the size of the consumer receive queue.
| 1000| -| NackRedeliveryDelay | The delay after which to redeliver the messages that failed to be processed | 1min | -| ReadCompacted | If enabled, the consumer will read messages from the compacted topic rather than reading the full message backlog of the topic | false | -| ReplicateSubscriptionState | Mark the subscription as replicated to keep it in sync across clusters | false | -| KeySharedPolicy | Configuration for Key Shared consumer policy. | | -| RetryEnable | Auto retry send messages to default filled DLQPolicy topics | false | -| Interceptors | A chain of interceptors. These interceptors are called at some points defined in the `ConsumerInterceptor` interface. | | -| MaxReconnectToBroker | MaxReconnectToBroker set the maximum retry number of reconnectToBroker. | ultimate | -| Schema | Schema set a custom schema type by passing an implementation of `Schema` | bytes[] | - -## Readers - -Pulsar readers process messages from Pulsar topics. Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recent unacked message). You can [configure](#reader-configuration) Go readers using a `ReaderOptions` object. Here's an example: - -```go - -reader, err := client.CreateReader(pulsar.ReaderOptions{ - Topic: "topic-1", - StartMessageID: pulsar.EarliestMessageID(), -}) -if err != nil { - log.Fatal(err) -} -defer reader.Close() - -``` - -### Reader operations - -Pulsar Go readers have the following methods available: - -Method | Description | Return type -:------|:------------|:----------- -`Topic()` | Returns the reader's [topic](reference-terminology.md#topic) | `string` -`Next(context.Context)` | Receives the next message on the topic (analogous to the `Receive` method for [consumers](#consumer-operations)). This method blocks until a message is available. | `(Message, error)` -`HasNext()` | Check if there is any message available to read from the current position| (bool, error) -`Close()` | Closes the reader, disabling its ability to receive messages from the broker | `error` -`Seek(MessageID)` | Reset the subscription associated with this reader to a specific message ID | `error` -`SeekByTime(time time.Time)` | Reset the subscription associated with this reader to a specific message publish time | `error` - -### Reader example - -#### How to use reader to read 'next' message - -Here's an example usage of a Go reader that uses the `Next()` method to process incoming messages: - -```go - -import ( - "context" - "fmt" - "log" - - "github.com/apache/pulsar-client-go/pulsar" -) - -func main() { - client, err := pulsar.NewClient(pulsar.ClientOptions{URL: "pulsar://localhost:6650"}) - if err != nil { - log.Fatal(err) - } - - defer client.Close() - - reader, err := client.CreateReader(pulsar.ReaderOptions{ - Topic: "topic-1", - StartMessageID: pulsar.EarliestMessageID(), - }) - if err != nil { - log.Fatal(err) - } - defer reader.Close() - - for reader.HasNext() { - msg, err := reader.Next(context.Background()) - if err != nil { - log.Fatal(err) - } - - fmt.Printf("Received message msgId: %#v -- content: '%s'\n", - msg.ID(), string(msg.Payload())) - } -} - -``` - -In the example above, the reader begins reading from the earliest available message (specified by `pulsar.EarliestMessage`). 
The reader can also begin reading from the latest message (`pulsar.LatestMessageID()`) or some other message ID specified by bytes using the `DeserializeMessageID` function, which takes a byte array and returns a `MessageID` object. Here's an example:

```go

lastSavedId := // Read last saved message id from external store as byte[]

reader, err := client.CreateReader(pulsar.ReaderOptions{
    Topic:          "my-golang-topic",
    StartMessageID: pulsar.DeserializeMessageID(lastSavedId),
})

```

#### How to use reader to read specific message

```go

client, err := pulsar.NewClient(pulsar.ClientOptions{
    URL: lookupURL,
})
if err != nil {
    log.Fatal(err)
}
defer client.Close()

topic := "topic-1"
ctx := context.Background()

// create producer
producer, err := client.CreateProducer(pulsar.ProducerOptions{
    Topic:           topic,
    DisableBatching: true,
})
if err != nil {
    log.Fatal(err)
}
defer producer.Close()

// send 10 messages
msgIDs := [10]pulsar.MessageID{}
for i := 0; i < 10; i++ {
    msgID, err := producer.Send(ctx, &pulsar.ProducerMessage{
        Payload: []byte(fmt.Sprintf("hello-%d", i)),
    })
    if err != nil {
        log.Fatal(err)
    }
    msgIDs[i] = msgID
}

// create reader on 5th message (not included)
reader, err := client.CreateReader(pulsar.ReaderOptions{
    Topic:          topic,
    StartMessageID: msgIDs[4],
})
if err != nil {
    log.Fatal(err)
}
defer reader.Close()

// receive the remaining 5 messages
for i := 5; i < 10; i++ {
    msg, err := reader.Next(context.Background())
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(string(msg.Payload()))
}

// create reader on 5th message (included)
readerInclusive, err := client.CreateReader(pulsar.ReaderOptions{
    Topic:                   topic,
    StartMessageID:          msgIDs[4],
    StartMessageIDInclusive: true,
})
if err != nil {
    log.Fatal(err)
}
defer readerInclusive.Close()

```

### Reader configuration

| Name | Description | Default |
| :-------- | :---------- |:---------- |
| Topic | Specify the topic this reader will read from. This argument is required when constructing the reader. | |
| Name | Set the reader name. | |
| Properties | Attach a set of application defined properties to the reader. These properties will be visible in the topic stats | |
| StartMessageID | Initial reader positioning is done by specifying a message ID. | |
| StartMessageIDInclusive | If true, the reader will start at the `StartMessageID`, included. Default is `false` and the reader will start from the "next" message | false |
| MessageChannel | Sets a `MessageChannel` for the reader. When a message is received, it will be pushed to the channel for consumption | |
| ReceiverQueueSize | Sets the size of the reader receive queue. | 1000 |
| SubscriptionRolePrefix | Set the subscription role prefix. | "reader" |
| ReadCompacted | If enabled, the reader will read messages from the compacted topic rather than reading the full message backlog of the topic. ReadCompacted can only be enabled when reading from a persistent topic. | false |
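The reader operations listed earlier also include `Seek` and `SeekByTime` for repositioning an existing reader. As a minimal sketch (reusing the `reader` variable from the examples above, with an illustrative one-hour rewind):

```go

// Reposition the reader to messages published one hour ago.
if err := reader.SeekByTime(time.Now().Add(-1 * time.Hour)); err != nil {
    log.Fatal(err)
}

```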
## Messages

The Pulsar Go client provides a `ProducerMessage` interface that you can use to construct messages to produce on Pulsar topics. Here's an example message:

```go

msg := pulsar.ProducerMessage{
    Payload: []byte("Here is some message data"),
    Key:     "message-key",
    Properties: map[string]string{
        "foo": "bar",
    },
    EventTime:           time.Now(),
    ReplicationClusters: []string{"cluster1", "cluster3"},
}

if _, err := producer.Send(context.Background(), &msg); err != nil {
    log.Fatalf("Could not publish message due to: %v", err)
}

```

The following parameters are available for `ProducerMessage` objects:

Parameter | Description
:---------|:-----------
`Payload` | The actual data payload of the message
`Value` | Value and payload are mutually exclusive; `Value interface{}` is for schema messages.
`Key` | The optional key associated with the message (particularly useful for things like topic compaction)
`OrderingKey` | OrderingKey sets the ordering key of the message.
`Properties` | A key-value map (both keys and values must be strings) for any application-specific metadata attached to the message
`EventTime` | The timestamp associated with the message
`ReplicationClusters` | The clusters to which this message will be replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default.
`SequenceID` | Set the sequence id to assign to the current message
`DeliverAfter` | Request to deliver the message only after the specified relative delay
`DeliverAt` | Deliver the message only at or after the specified absolute timestamp

## TLS encryption and authentication

In order to use [TLS encryption](security-tls-transport.md), you'll need to configure your client to do so:

 * Use the `pulsar+ssl` URL type
 * Set `TLSTrustCertsFilePath` to the path to the TLS certs used by your client and the Pulsar broker
 * Configure the `Authentication` option

Here's an example:

```go

opts := pulsar.ClientOptions{
    URL:                   "pulsar+ssl://my-cluster.com:6651",
    TLSTrustCertsFilePath: "/path/to/certs/my-cert.csr",
    Authentication:        NewAuthenticationTLS("my-cert.pem", "my-key.pem"),
}

```

## OAuth2 authentication

To use [OAuth2 authentication](security-oauth2.md), you'll need to configure your client accordingly.
This example shows how to configure OAuth2 authentication.

```go

oauth := pulsar.NewAuthenticationOAuth2(map[string]string{
    "type":       "client_credentials",
    "issuerUrl":  "https://dev-kt-aa9ne.us.auth0.com",
    "audience":   "https://dev-kt-aa9ne.us.auth0.com/api/v2/",
    "privateKey": "/path/to/privateKey",
    "clientId":   "0Xx...Yyxeny",
})
client, err := pulsar.NewClient(pulsar.ClientOptions{
    URL:            "pulsar://my-cluster:6650",
    Authentication: oauth,
})

```

diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/client-libraries-java.md b/site2/website/versioned_docs/version-2.9.1-deprecated/client-libraries-java.md
deleted file mode 100644
index c3d41a3f13da2e..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/client-libraries-java.md
+++ /dev/null
@@ -1,1038 +0,0 @@
---
id: client-libraries-java
title: Pulsar Java client
sidebar_label: "Java"
original_id: client-libraries-java
---

You can use a Pulsar Java client to create Java [producers](#producer), [consumers](#consumer), and [readers](#reader) of messages and to perform [administrative tasks](admin-api-overview.md). The current Java client version is **@pulsar:version@**.

All the methods in the [producer](#producer), [consumer](#consumer), and [reader](#reader) of a Java client are thread-safe.
Javadoc for the Pulsar client is divided into two domains by package as follows.

| Package | Description | Maven Artifact |
|:-------|:------------|:--------------|
| [`org.apache.pulsar.client.api`](/api/client) | The producer and consumer API | [org.apache.pulsar:pulsar-client:@pulsar:version@](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client%7C@pulsar:version@%7Cjar) |
| [`org.apache.pulsar.client.admin`](/api/admin) | The Java [admin API](admin-api-overview.md) | [org.apache.pulsar:pulsar-client-admin:@pulsar:version@](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client-admin%7C@pulsar:version@%7Cjar) |
| `org.apache.pulsar.client.all` | Includes both `pulsar-client` and `pulsar-client-admin`.<br /><br />Both `pulsar-client` and `pulsar-client-admin` are shaded packages and they shade dependencies independently. Consequently, applications using both `pulsar-client` and `pulsar-client-admin` have redundant shaded classes. It would be troublesome if you introduce new dependencies but forget to update shading rules.<br /><br />In this case, you can use `pulsar-client-all`, which shades dependencies only one time and reduces the size of dependencies. | [org.apache.pulsar:pulsar-client-all:@pulsar:version@](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client-all%7C@pulsar:version@%7Cjar) |

This document focuses only on the client API for producing and consuming messages on Pulsar topics. For how to use the Java admin client, see [Pulsar admin interface](admin-api-overview.md).

## Installation

The latest version of the Pulsar Java client library is available via [Maven Central](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client%7C@pulsar:version@%7Cjar). To use the latest version, add the `pulsar-client` library to your build configuration.

### Maven

If you use Maven, add the following information to the `pom.xml` file.

```xml

<properties>
  <pulsar.version>@pulsar:version@</pulsar.version>
</properties>

<dependency>
  <groupId>org.apache.pulsar</groupId>
  <artifactId>pulsar-client</artifactId>
  <version>${pulsar.version}</version>
</dependency>

```

### Gradle

If you use Gradle, add the following information to the `build.gradle` file.

```groovy

def pulsarVersion = '@pulsar:version@'

dependencies {
    compile group: 'org.apache.pulsar', name: 'pulsar-client', version: pulsarVersion
}

```

## Connection URLs

To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL.

You can assign Pulsar protocol URLs to specific clusters and use the `pulsar` scheme. The default port is `6650`. The following is an example of `localhost`.

```http

pulsar://localhost:6650

```

If you have multiple brokers, the URL is as follows.

```http

pulsar://localhost:6650,localhost:6651,localhost:6652

```

A URL for a production Pulsar cluster is as follows.

```http

pulsar://pulsar.us-west.example.com:6650

```

If you use [TLS](security-tls-authentication.md) authentication, the URL is as follows.

```http

pulsar+ssl://pulsar.us-west.example.com:6651

```

## Client

You can instantiate a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object using just a URL for the target Pulsar [cluster](reference-terminology.md#cluster) like this:

```java

PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650")
        .build();

```

If you have multiple brokers, you can initiate a PulsarClient like this:

```java

PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650,localhost:6651,localhost:6652")
        .build();

```

> ### Default broker URLs for standalone clusters
> If you run a cluster in [standalone mode](getting-started-standalone.md), the broker is available at the `pulsar://localhost:6650` URL by default.

If you create a client, you can use the `loadConf` configuration. The following parameters are available in `loadConf`.

| Name | Type | Description | Default |
|---|---|---|---|
| `serviceUrl` | String | Service URL provider for Pulsar service | None |
| `authPluginClassName` | String | Name of the authentication plugin | None |
| `authParams` | String | Parameters for the authentication plugin.<br /><br />**Example**<br />`key1:val1,key2:val2` | None |
| `operationTimeoutMs` | long | Operation timeout | 30000 |
| `statsIntervalSeconds` | long | Interval between each stats information.<br /><br />Stats is activated with positive `statsInterval`.<br /><br />Set `statsIntervalSeconds` to 1 second at least. | 60 |
| `numIoThreads` | int | The number of threads used for handling connections to brokers | 1 |
| `numListenerThreads` | int | The number of threads used for handling message listeners. The listener thread pool is shared across all the consumers and readers using the "listener" model to get messages. For a given consumer, the listener is always invoked from the same thread to ensure ordering. If you want multiple threads to process a single topic, you need to create a [`shared`](https://pulsar.apache.org/docs/en/next/concepts-messaging/#shared) subscription and multiple consumers for this subscription. This does not ensure ordering. | 1 |
| `useTcpNoDelay` | boolean | Whether to use the TCP no-delay flag on the connection to disable the Nagle algorithm | true |
| `enableTls` | boolean | Whether to use TLS encryption on the connection. Note that this parameter is **deprecated**. If you want to enable TLS, use `pulsar+ssl://` in `serviceUrl` instead. | false |
| `tlsTrustCertsFilePath` | string | Path to the trusted TLS certificate file | None |
| `tlsAllowInsecureConnection` | boolean | Whether the Pulsar client accepts untrusted TLS certificates from the broker | false |
| `tlsHostnameVerificationEnable` | boolean | Whether to enable TLS hostname verification | false |
| `concurrentLookupRequest` | int | The number of concurrent lookup requests allowed to send on each broker connection to prevent overload on the broker | 5000 |
| `maxLookupRequest` | int | The maximum number of lookup requests allowed on each broker connection to prevent overload on the broker | 50000 |
| `maxNumberOfRejectedRequestPerConnection` | int | The maximum number of rejected requests of a broker in a certain time frame (30 seconds) after which the current connection is closed and the client creates a new connection to connect to a different broker | 50 |
| `keepAliveIntervalSeconds` | int | Seconds of keep-alive interval for each client broker connection | 30 |
| `connectionTimeoutMs` | int | Duration of waiting for a connection to a broker to be established.<br /><br />If the duration passes without a response from a broker, the connection attempt is dropped | 10000 |
| `requestTimeoutMs` | int | Maximum duration for completing a request | 60000 |
| `defaultBackoffIntervalNanos` | int | Default duration for a backoff interval | TimeUnit.MILLISECONDS.toNanos(100) |
| `maxBackoffIntervalNanos` | long | Maximum duration for a backoff interval | TimeUnit.SECONDS.toNanos(30) |
| `socks5ProxyAddress` | SocketAddress | SOCKS5 proxy address | None |
| `socks5ProxyUsername` | string | SOCKS5 proxy username | None |
| `socks5ProxyPassword` | string | SOCKS5 proxy password | None |
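These parameters are passed to `loadConf` as a map. The following is a minimal sketch (the parameter choices are illustrative); it assumes `java.util.Map` and `java.util.HashMap` are imported.

```java

Map<String, Object> config = new HashMap<>();
config.put("serviceUrl", "pulsar://localhost:6650");
config.put("connectionTimeoutMs", 5000);

PulsarClient client = PulsarClient.builder()
        .loadConf(config)
        .build();

```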
    Description
    | Default -|---|---|---|--- -`topicName`| string| Topic name| null| -`producerName`| string|Producer name| null -`sendTimeoutMs`| long|Message send timeout in ms.
    If a message is not acknowledged by a server before the `sendTimeout` expires, an error occurs.|30000 -`blockIfQueueFull`|boolean|If it is set to `true`, when the outgoing message queue is full, the `Send` and `SendAsync` methods of producer block, rather than failing and throwing errors.
    If it is set to `false`, when the outgoing message queue is full, the `Send` and `SendAsync` methods of producer fail and `ProducerQueueIsFullError` exceptions occur.

    The `MaxPendingMessages` parameter determines the size of the outgoing message queue.|false -`maxPendingMessages`| int|The maximum size of a queue holding pending messages.

    For example, a message waiting to receive an acknowledgment from a [broker](reference-terminology.md#broker).

    By default, when the queue is full, all calls to the `Send` and `SendAsync` methods fail **unless** you set `BlockIfQueueFull` to `true`.|1000 -`maxPendingMessagesAcrossPartitions`|int|The maximum number of pending messages across partitions.

    Use the setting to lower the max pending messages for each partition ({@link #setMaxPendingMessages(int)}) if the total number exceeds the configured value.|50000 -`messageRoutingMode`| MessageRoutingMode|Message routing logic for producers on [partitioned topics](concepts-architecture-overview.md#partitioned-topics).
    Apply the logic only when setting no key on messages.
    Available options are as follows:
  1706. `pulsar.RoundRobinDistribution`: round robin
  1707. `pulsar.UseSinglePartition`: publish all messages to a single partition
  1708. `pulsar.CustomPartition`: a custom partitioning scheme
  1709. |
  1710. `pulsar.RoundRobinDistribution`
  1711. -`hashingScheme`| HashingScheme|Hashing function determining the partition where you publish a particular message (**partitioned topics only**).
    Available options are as follows:
  1712. `pulsar.JavastringHash`: the equivalent of `string.hashCode()` in Java
  1713. `pulsar.Murmur3_32Hash`: applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function
  1714. `pulsar.BoostHash`: applies the hashing function from C++'s [Boost](https://www.boost.org/doc/libs/1_62_0/doc/html/hash.html) library
  1715. |`HashingScheme.JavastringHash` -`cryptoFailureAction`| ProducerCryptoFailureAction|Producer should take action when encryption fails.
  - **FAIL**: if encryption fails, unencrypted messages fail to send.
  - **SEND**: if encryption fails, unencrypted messages are sent.
  |`ProducerCryptoFailureAction.FAIL`
-`batchingMaxPublishDelayMicros`| long|Batching time period of sending messages.|TimeUnit.MILLISECONDS.toMicros(1)
-`batchingMaxMessages` |int|The maximum number of messages permitted in a batch.|1000
-`batchingEnabled`| boolean|Enable batching of messages. |true
-`compressionType`|CompressionType|Message data compression type used by a producer.
    Available options:
  - [`LZ4`](https://github.com/lz4/lz4)
  - [`ZLIB`](https://zlib.net/)
  - [`ZSTD`](https://facebook.github.io/zstd/)
  - [`SNAPPY`](https://google.github.io/snappy/)
  | No compression
-
-You can configure parameters if you do not want to use the default configuration.
-
-For a full list, see the Javadoc for the {@inject: javadoc:ProducerBuilder:/client/org/apache/pulsar/client/api/ProducerBuilder} class. The following is an example.
-
-```java
-
-Producer producer = client.newProducer()
-        .topic("my-topic")
-        .batchingMaxPublishDelay(10, TimeUnit.MILLISECONDS)
-        .sendTimeout(10, TimeUnit.SECONDS)
-        .blockIfQueueFull(true)
-        .create();
-
-```
-
-### Message routing
-
-When using partitioned topics, you can specify the routing mode whenever you publish messages using a producer. For more information on specifying a routing mode using the Java client, see the [Partitioned Topics cookbook](cookbooks-partitioned.md).
-
-### Async send
-
-You can publish messages [asynchronously](concepts-messaging.md#send-modes) using the Java client. With async send, the producer puts the message in a blocking queue and returns immediately. The client library then sends the message to the broker in the background. If the queue is full (the maximum size is configurable), the producer is blocked or fails immediately when calling the API, depending on arguments passed to the producer.
-
-The following is an example.
-
-```java
-
-producer.sendAsync("my-async-message".getBytes()).thenAccept(msgId -> {
-    System.out.println("Message with ID " + msgId + " successfully sent");
-});
-
-```
-
-As you can see from the example above, async send operations return a {@inject: javadoc:MessageId:/client/org/apache/pulsar/client/api/MessageId} wrapped in a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture).
-
-### Configure messages
-
-In addition to a value, you can set additional items on a given message:
-
-```java
-
-producer.newMessage()
-        .key("my-message-key")
-        .value("my-async-message".getBytes())
-        .property("my-key", "my-value")
-        .property("my-other-key", "my-other-value")
-        .send();
-
-```
-
-You can terminate the builder chain with `sendAsync()` instead and get a future back.
-
-## Consumer
-
-In Pulsar, consumers subscribe to topics and handle messages that producers publish to those topics. You can instantiate a new [consumer](reference-terminology.md#consumer) by first instantiating a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object and passing it a URL for a Pulsar broker (as [above](#client-configuration)).
-
-Once you've instantiated a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object, you can create a {@inject: javadoc:Consumer:/client/org/apache/pulsar/client/api/Consumer} by specifying a [topic](reference-terminology.md#topic) and a [subscription](concepts-messaging.md#subscription-modes).
-
-```java
-
-Consumer consumer = client.newConsumer()
-        .topic("my-topic")
-        .subscriptionName("my-subscription")
-        .subscribe();
-
-```
-
-The `subscribe` method automatically subscribes the consumer to the specified topic and subscription. One way to make the consumer listen on the topic is to set up a `while` loop. In this example loop, the consumer listens for messages, prints the contents of any received message, and then [acknowledges](reference-terminology.md#acknowledgment-ack) that the message has been processed. If the processing logic fails, you can use [negative acknowledgement](reference-terminology.md#acknowledgment-ack) to redeliver the message later.
-
-```java
-
-while (true) {
-    // Wait for a message
-    Message msg = consumer.receive();
-
-    try {
-        // Do something with the message
-        System.out.println("Message received: " + new String(msg.getData()));
-
-        // Acknowledge the message so that it can be deleted by the message broker
-        consumer.acknowledge(msg);
-    } catch (Exception e) {
-        // Message failed to process, redeliver later
-        consumer.negativeAcknowledge(msg);
-    }
-}
-
-```
-
-If you'd rather listen for new messages without blocking your main thread, consider using a `MessageListener`.
-
-```java
-
-MessageListener myMessageListener = (consumer, msg) -> {
-    try {
-        System.out.println("Message received: " + new String(msg.getData()));
-        consumer.acknowledge(msg);
-    } catch (Exception e) {
-        consumer.negativeAcknowledge(msg);
-    }
-};
-
-Consumer consumer = client.newConsumer()
-        .topic("my-topic")
-        .subscriptionName("my-subscription")
-        .messageListener(myMessageListener)
-        .subscribe();
-
-```
-
-### Configure consumer
-
-If you instantiate a `Consumer` object by specifying only a topic and subscription name as in the example above, the consumer uses the default configuration.
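-
-If you prefer to keep settings in a map instead of calling individual builder methods, you can also pass them to the builder through `loadConf`. The following is a minimal sketch only; the parameter names come from the table below, and the values shown are placeholders.
-
-```java
-
-// A sketch: parameter names follow the `loadConf` table below.
-Map<String, Object> config = new HashMap<>();
-config.put("subscriptionName", "my-subscription");
-config.put("receiverQueueSize", 2000);
-
-Consumer consumer = client.newConsumer()
-        .topic("my-topic")
-        .loadConf(config)
-        .subscribe();
-
-```
-
-When you create a consumer, you can use the `loadConf` configuration. The following parameters are available in `loadConf`.
-
- Name|Type |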
    Description
    | Default -|---|---|---|--- -`topicNames`| Set<String>| Topic name| Sets.newTreeSet() - `topicsPattern`|Pattern| Topic pattern |None -`subscriptionName`|String| Subscription name| None -`subscriptionType`|SubscriptionType| Subscription type
    Four subscription types are available:
  1724. Exclusive
  1725. Failover
  1726. Shared
  1727. Key_Shared
  1728. |SubscriptionType.Exclusive -`receiverQueueSize` |int | Size of a consumer's receiver queue.

    That is, the number of messages accumulated by a consumer before an application calls `Receive`.

    A value higher than the default value increases consumer throughput, though at the expense of more memory utilization.| 1000
-`acknowledgementsGroupTimeMicros`|long|Group consumer acknowledgments for a specified time.

    By default, a consumer uses 100ms grouping time to send out acknowledgments to a broker.

    Setting a group time of 0 sends out acknowledgments immediately.

    A longer ack group time is more efficient at the expense of a slight increase in message re-deliveries after a failure.|TimeUnit.MILLISECONDS.toMicros(100) -`negativeAckRedeliveryDelayMicros`|long|Delay to wait before redelivering messages that failed to be processed.

    When an application uses {@link Consumer#negativeAcknowledge(Message)}, failed messages are redelivered after a fixed timeout. |TimeUnit.MINUTES.toMicros(1) -`maxTotalReceiverQueueSizeAcrossPartitions`|int |The max total receiver queue size across partitions.

    This setting reduces the receiver queue size for individual partitions if the total receiver queue size exceeds this value.|50000 -`consumerName`|String|Consumer name|null -`ackTimeoutMillis`|long|Timeout of unacked messages|0 -`tickDurationMillis`|long|Granularity of the ack-timeout redelivery.

    Using a higher `tickDurationMillis` reduces the memory overhead of tracking messages when the ack-timeout is set to a larger value (for example, 1 hour).|1000
-`priorityLevel`|int|Priority level for a consumer to which a broker gives more priority while dispatching messages in the shared subscription mode.

    The broker follows descending priorities. For example, 0=max-priority, 1, 2,...

    In shared subscription mode, the broker **first dispatches messages to the max priority level consumers if they have permits**. Otherwise, the broker considers consumers at the next priority level.

    **Example 1**
    If a subscription has consumerA with `priorityLevel` 0 and consumerB with `priorityLevel` 1, then the broker **only dispatches messages to consumerA until it runs out of permits** and then starts dispatching messages to consumerB.

    **Example 2**
    Consumer Priority, Level, Permits
    C1, 0, 2
    C2, 0, 1
    C3, 0, 1
    C4, 1, 2
    C5, 1, 1

    The order in which a broker dispatches messages to consumers is: C1, C2, C3, C1, C4, C5, C4.|0
-`cryptoFailureAction`|ConsumerCryptoFailureAction|The action the consumer takes when it receives a message that cannot be decrypted.
  - **FAIL**: this is the default option, to fail messages until crypto succeeds.
  - **DISCARD**: silently acknowledge and do not deliver the message to an application.
  - **CONSUME**: deliver encrypted messages to applications. It is the application's responsibility to decrypt the message.

    Note that if an encrypted message is delivered without being decrypted, its decompression fails.

    If messages contain batch messages, a client is not able to retrieve individual messages from the batch.

    A delivered encrypted message contains an {@link EncryptionContext} that holds the encryption and compression information, which an application can use to decrypt the consumed message payload.|
  ConsumerCryptoFailureAction.FAIL
-`properties`|SortedMap|A name or value property of this consumer.

    `properties` is application-defined metadata attached to a consumer.

    When getting topic stats, this metadata is associated with the consumer stats for easier identification.|new TreeMap()
-`readCompacted`|boolean|If enabling `readCompacted`, a consumer reads messages from a compacted topic rather than reading a full message backlog of a topic.

    A consumer only sees the latest value for each key in the compacted topic, up until the point in the topic's message backlog where compaction has been applied. Beyond that point, messages are sent as normal.

    `readCompacted` can only be enabled on subscriptions to persistent topics that have a single active consumer (such as failover or exclusive subscriptions).

    Attempting to enable it on subscriptions to non-persistent topics or on shared subscriptions leads to a subscription call throwing a `PulsarClientException`.|false
-`subscriptionInitialPosition`|SubscriptionInitialPosition|Initial position at which to set the cursor when subscribing to a topic for the first time.|SubscriptionInitialPosition.Latest
-`patternAutoDiscoveryPeriod`|int|Topic auto discovery period when using a pattern for the topic's consumer.

    The default and minimum value is 1 minute.|1 -`regexSubscriptionMode`|RegexSubscriptionMode|When subscribing to a topic using a regular expression, you can pick a certain type of topics.

  - **PersistentOnly**: only subscribe to persistent topics.
  - **NonPersistentOnly**: only subscribe to non-persistent topics.
  - **AllTopics**: subscribe to both persistent and non-persistent topics.
  |RegexSubscriptionMode.PersistentOnly
-`deadLetterPolicy`|DeadLetterPolicy|Dead letter policy for consumers.

    By default, messages that fail to be processed may be redelivered over and over again, potentially without end.

    By using the dead letter mechanism, messages have the max redelivery count. **When exceeding the maximum number of redeliveries, messages are sent to the Dead Letter Topic and acknowledged automatically**.

    You can enable the dead letter mechanism by setting `deadLetterPolicy`.

    **Example**

    client.newConsumer()
    .deadLetterPolicy(DeadLetterPolicy.builder().maxRedeliverCount(10).build())
    .subscribe();


    Default dead letter topic name is `{TopicName}-{Subscription}-DLQ`.

    To set a custom dead letter topic name:
    client.newConsumer()
    .deadLetterPolicy(DeadLetterPolicy.builder().maxRedeliverCount(10)
    .deadLetterTopic("your-topic-name").build())
    .subscribe();


    When you specify the dead letter policy without specifying `ackTimeoutMillis`, the ack timeout is set to 30000 milliseconds.|None
-`autoUpdatePartitions`|boolean|If `autoUpdatePartitions` is enabled, a consumer subscribes to partition increases automatically.

    **Note**: this is only for partitioned consumers.|true
-`replicateSubscriptionState`|boolean|If `replicateSubscriptionState` is enabled, a subscription state is replicated to geo-replicated clusters.|false
-
-You can configure parameters if you do not want to use the default configuration. For a full list, see the Javadoc for the {@inject: javadoc:ConsumerBuilder:/client/org/apache/pulsar/client/api/ConsumerBuilder} class.
-
-The following is an example.
-
-```java
-
-Consumer consumer = client.newConsumer()
-        .topic("my-topic")
-        .subscriptionName("my-subscription")
-        .ackTimeout(10, TimeUnit.SECONDS)
-        .subscriptionType(SubscriptionType.Exclusive)
-        .subscribe();
-
-```
-
-### Async receive
-
-The `receive` method receives messages synchronously (the consumer process is blocked until a message is available). You can also use [async receive](concepts-messaging.md#receive-modes), which returns a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture) object immediately once a new message is available.
-
-The following is an example.
-
-```java
-
-CompletableFuture asyncMessage = consumer.receiveAsync();
-
-```
-
-Async receive operations return a {@inject: javadoc:Message:/client/org/apache/pulsar/client/api/Message} wrapped inside of a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture).
-
-### Batch receive
-
-Use `batchReceive` to receive multiple messages for each call.
-
-The following is an example.
-
-```java
-
-Messages messages = consumer.batchReceive();
-for (Object message : messages) {
-    // do something
-}
-consumer.acknowledge(messages);
-
-```
-
-:::note
-
-Batch receive policy limits the number and bytes of messages in a single batch. You can specify a timeout to wait for enough messages.
-The batch receive is completed if any of the following conditions is met: enough messages have arrived, the size limit in bytes has been reached, or the wait timeout has elapsed.
-
-```java
-
-Consumer consumer = client.newConsumer()
-        .topic("my-topic")
-        .subscriptionName("my-subscription")
-        .batchReceivePolicy(BatchReceivePolicy.builder()
-                .maxNumMessages(100)
-                .maxNumBytes(1024 * 1024)
-                .timeout(200, TimeUnit.MILLISECONDS)
-                .build())
-        .subscribe();
-
-```
-
-The default batch receive policy is:
-
-```java
-
-BatchReceivePolicy.builder()
-        .maxNumMessages(-1)
-        .maxNumBytes(10 * 1024 * 1024)
-        .timeout(100, TimeUnit.MILLISECONDS)
-        .build();
-
-```
-
-:::
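-
-Batch receive also has an asynchronous variant, `batchReceiveAsync`. The following is a minimal sketch, assuming a `Consumer<byte[]>` consumer like the ones above; the processing logic is a placeholder.
-
-```java
-
-consumer.batchReceiveAsync().thenAccept(messages -> {
-    try {
-        for (Message<byte[]> msg : messages) {
-            // Process each message in the received batch
-            System.out.println("Received: " + new String(msg.getData()));
-        }
-        // Acknowledge the whole batch at once
-        consumer.acknowledge(messages);
-    } catch (Exception e) {
-        // Let the whole batch be redelivered later
-        consumer.negativeAcknowledge(messages);
-    }
-});
-
-```
-
-### Multi-topic subscriptions
-
-In addition to subscribing a consumer to a single Pulsar topic, you can also subscribe to multiple topics simultaneously using [multi-topic subscriptions](concepts-messaging.md#multi-topic-subscriptions). To use multi-topic subscriptions you can supply either a regular expression (regex) or a `List` of topics. If you select topics via regex, all topics must be within the same Pulsar namespace.
-
-The following are some examples.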
-
-```java
-
-import org.apache.pulsar.client.api.Consumer;
-import org.apache.pulsar.client.api.PulsarClient;
-
-import java.util.Arrays;
-import java.util.List;
-import java.util.regex.Pattern;
-
-ConsumerBuilder consumerBuilder = pulsarClient.newConsumer()
-        .subscriptionName(subscription);
-
-// Subscribe to all topics in a namespace
-Pattern allTopicsInNamespace = Pattern.compile("public/default/.*");
-Consumer allTopicsConsumer = consumerBuilder
-        .topicsPattern(allTopicsInNamespace)
-        .subscribe();
-
-// Subscribe to a subset of topics in a namespace, based on regex
-Pattern someTopicsInNamespace = Pattern.compile("public/default/foo.*");
-Consumer someTopicsConsumer = consumerBuilder
-        .topicsPattern(someTopicsInNamespace)
-        .subscribe();
-
-```
-
-In the above example, the consumer subscribes to the `persistent` topics that match the topic name pattern. If you want the consumer to subscribe to all `persistent` and `non-persistent` topics that match the topic name pattern, set `subscriptionTopicsMode` to `RegexSubscriptionMode.AllTopics`.
-
-```java
-
-Pattern pattern = Pattern.compile("public/default/.*");
-pulsarClient.newConsumer()
-        .subscriptionName("my-sub")
-        .topicsPattern(pattern)
-        .subscriptionTopicsMode(RegexSubscriptionMode.AllTopics)
-        .subscribe();
-
-```
-
-:::note
-
-By default, the `subscriptionTopicsMode` of the consumer is `PersistentOnly`. Available options of `subscriptionTopicsMode` are `PersistentOnly`, `NonPersistentOnly`, and `AllTopics`.
-
-:::
-
-You can also subscribe to an explicit list of topics (across namespaces if you wish):
-
-```java
-
-List topics = Arrays.asList(
-        "topic-1",
-        "topic-2",
-        "topic-3"
-);
-
-Consumer multiTopicConsumer = consumerBuilder
-        .topics(topics)
-        .subscribe();
-
-// Alternatively:
-Consumer multiTopicConsumer = consumerBuilder
-        .topic(
-            "topic-1",
-            "topic-2",
-            "topic-3"
-        )
-        .subscribe();
-
-```
-
-You can also subscribe to multiple topics asynchronously using the `subscribeAsync` method rather than the synchronous `subscribe` method. The following is an example.
-
-```java
-
-Pattern allTopicsInNamespace = Pattern.compile("persistent://public/default.*");
-consumerBuilder
-        .topics(topics)
-        .subscribeAsync()
-        .thenAccept(this::receiveMessageFromConsumer);
-
-private void receiveMessageFromConsumer(Object consumer) {
-    ((Consumer)consumer).receiveAsync().thenAccept(message -> {
-        // Do something with the received message
-        receiveMessageFromConsumer(consumer);
-    });
-}
-
-```
-
-### Subscription modes
-
-Pulsar has various [subscription modes](concepts-messaging.md#subscription-modes) to match different scenarios. A topic can have multiple subscriptions with different subscription modes. However, a subscription can only have one subscription mode at a time.
-
-A subscription is identified by its subscription name, and a subscription name can specify only one subscription mode at a time. You cannot change the subscription mode unless all existing consumers of this subscription are offline.
-
-Different subscription modes have different message distribution modes. This section describes the differences of subscription modes and how to use them.
-
-To better describe their differences, assume you have a topic named "my-topic" and a producer that has published 10 messages.
-
-```java
-
-Producer producer = client.newProducer(Schema.STRING)
-        .topic("my-topic")
-        .enableBatching(false)
-        .create();
-// 3 messages with "key-1", 3 messages with "key-2", 2 messages with "key-3" and 2 messages with "key-4"
-producer.newMessage().key("key-1").value("message-1-1").send();
-producer.newMessage().key("key-1").value("message-1-2").send();
-producer.newMessage().key("key-1").value("message-1-3").send();
-producer.newMessage().key("key-2").value("message-2-1").send();
-producer.newMessage().key("key-2").value("message-2-2").send();
-producer.newMessage().key("key-2").value("message-2-3").send();
-producer.newMessage().key("key-3").value("message-3-1").send();
-producer.newMessage().key("key-3").value("message-3-2").send();
-producer.newMessage().key("key-4").value("message-4-1").send();
-producer.newMessage().key("key-4").value("message-4-2").send();
-
-```
-
-#### Exclusive
-
-Create a new consumer and subscribe with the `Exclusive` subscription mode.
-
-```java
-
-Consumer consumer = client.newConsumer()
-        .topic("my-topic")
-        .subscriptionName("my-subscription")
-        .subscriptionType(SubscriptionType.Exclusive)
-        .subscribe();
-
-```
-
-Only the first consumer is allowed to attach to the subscription; other consumers receive an error. The first consumer receives all 10 messages, and the consuming order is the same as the producing order.
-
-:::note
-
-If the topic is a partitioned topic, the first consumer subscribes to all of its partitions; other consumers are not assigned partitions and receive an error.
-
-:::
-
-#### Failover
-
-Create new consumers and subscribe with the `Failover` subscription mode.
-
-```java
-
-Consumer consumer1 = client.newConsumer()
-        .topic("my-topic")
-        .subscriptionName("my-subscription")
-        .subscriptionType(SubscriptionType.Failover)
-        .subscribe();
-Consumer consumer2 = client.newConsumer()
-        .topic("my-topic")
-        .subscriptionName("my-subscription")
-        .subscriptionType(SubscriptionType.Failover)
-        .subscribe();
-//consumer1 is the active consumer, consumer2 is the standby consumer.
-//consumer1 receives 5 messages and then crashes, consumer2 takes over as the active consumer.
-
-```
-
-Multiple consumers can attach to the same subscription, yet only the first consumer is active and the others are standby. When the active consumer is disconnected, messages are dispatched to one of the standby consumers, and that standby consumer then becomes the active consumer.
-
-If the first active consumer is disconnected after receiving 5 messages, the standby consumer becomes the active consumer. consumer1 will receive:
-
-```
-
-("key-1", "message-1-1")
-("key-1", "message-1-2")
-("key-1", "message-1-3")
-("key-2", "message-2-1")
-("key-2", "message-2-2")
-
-```
-
-consumer2 will receive:
-
-```
-
-("key-2", "message-2-3")
-("key-3", "message-3-1")
-("key-3", "message-3-2")
-("key-4", "message-4-1")
-("key-4", "message-4-2")
-
-```
-
-:::note
-
-If a topic is a partitioned topic, each partition has only one active consumer; messages of one partition are distributed to only one consumer, and messages of multiple partitions are distributed to multiple consumers.
-
-:::
-
-#### Shared
-
-Create new consumers and subscribe with `Shared` subscription mode:
-
-```java
-
-Consumer consumer1 = client.newConsumer()
-        .topic("my-topic")
-        .subscriptionName("my-subscription")
-        .subscriptionType(SubscriptionType.Shared)
-        .subscribe();
-
-Consumer consumer2 = client.newConsumer()
-        .topic("my-topic")
-        .subscriptionName("my-subscription")
-        .subscriptionType(SubscriptionType.Shared)
-        .subscribe();
-//Both consumer1 and consumer2 are active consumers.
-
-```
-
-In shared subscription mode, multiple consumers can attach to the same subscription and messages are delivered in a round robin distribution across consumers.
-
-If a broker dispatches only one message at a time, consumer1 receives the following information.
-
-```
-
-("key-1", "message-1-1")
-("key-1", "message-1-3")
-("key-2", "message-2-2")
-("key-3", "message-3-1")
-("key-4", "message-4-1")
-
-```
-
-consumer2 receives the following information.
-
-```
-
-("key-1", "message-1-2")
-("key-2", "message-2-1")
-("key-2", "message-2-3")
-("key-3", "message-3-2")
-("key-4", "message-4-2")
-
-```
-
-`Shared` subscription is different from the `Exclusive` and `Failover` subscription modes. `Shared` subscription has better flexibility, but cannot provide an ordering guarantee.
-
-#### Key_shared
-
-The `Key_Shared` subscription mode is available since the 2.4.0 release. Create new consumers and subscribe with the `Key_Shared` subscription mode.
-
-```java
-
-Consumer consumer1 = client.newConsumer()
-        .topic("my-topic")
-        .subscriptionName("my-subscription")
-        .subscriptionType(SubscriptionType.Key_Shared)
-        .subscribe();
-
-Consumer consumer2 = client.newConsumer()
-        .topic("my-topic")
-        .subscriptionName("my-subscription")
-        .subscriptionType(SubscriptionType.Key_Shared)
-        .subscribe();
-//Both consumer1 and consumer2 are active consumers.
-
-```
-
-Just like in the `Shared` subscription mode, all consumers in the `Key_Shared` subscription mode can attach to the same subscription. But the `Key_Shared` subscription mode is different: messages with the same key are delivered to only one consumer, in order. The following is one possible distribution of messages between consumers (by default you do not know in advance which keys are assigned to which consumer, but a given key is only assigned to one consumer at a time).
-
-consumer1 receives the following information.
-
-```
-
-("key-1", "message-1-1")
-("key-1", "message-1-2")
-("key-1", "message-1-3")
-("key-3", "message-3-1")
-("key-3", "message-3-2")
-
-```
-
-consumer2 receives the following information.
-
-```
-
-("key-2", "message-2-1")
-("key-2", "message-2-2")
-("key-2", "message-2-3")
-("key-4", "message-4-1")
-("key-4", "message-4-2")
-
-```
-
-If batching is enabled at the producer side, messages with different keys are added to a batch by default. The broker then dispatches the batch to the consumer, so the default batch mechanism may break the guaranteed message distribution semantics of the `Key_Shared` subscription. To avoid this, the producer needs to use the `KeyBasedBatcher`.
-
-```java
-
-Producer producer = client.newProducer()
-        .topic("my-topic")
-        .batcherBuilder(BatcherBuilder.KEY_BASED)
-        .create();
-
-```
-
-Or the producer can disable batching.
-
-```java
-
-Producer producer = client.newProducer()
-        .topic("my-topic")
-        .enableBatching(false)
-        .create();
-
-```
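-
-The distribution of keys in `Key_Shared` mode can also be tuned with a `KeySharedPolicy` on the consumer. The following is a minimal sketch, assuming the auto-split hash range policy (which is also the default); treat it as illustrative rather than required.
-
-```java
-
-// A sketch: explicitly select the default auto-split hash range policy.
-Consumer consumer = client.newConsumer()
-        .topic("my-topic")
-        .subscriptionName("my-subscription")
-        .subscriptionType(SubscriptionType.Key_Shared)
-        .keySharedPolicy(KeySharedPolicy.autoSplitHashRange())
-        .subscribe();
-
-```
-
-:::note
-
-If the message key is not specified, messages without key are dispatched to one consumer in order by default.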
-
-:::
-
-## Reader
-
-With the [reader interface](concepts-clients.md#reader-interface), Pulsar clients can "manually position" themselves within a topic and read all messages from a specified message onward. The Pulsar API for Java enables you to create {@inject: javadoc:Reader:/client/org/apache/pulsar/client/api/Reader} objects by specifying a topic and a {@inject: javadoc:MessageId:/client/org/apache/pulsar/client/api/MessageId}.
-
-The following is an example.
-
-```java
-
-byte[] msgIdBytes = // Some message ID byte array
-MessageId id = MessageId.fromByteArray(msgIdBytes);
-Reader reader = pulsarClient.newReader()
-        .topic(topic)
-        .startMessageId(id)
-        .create();
-
-while (true) {
-    Message message = reader.readNext();
-    // Process message
-}
-
-```
-
-In the example above, a `Reader` object is instantiated for a specific topic and message (by ID); the reader iterates over each message in the topic after the message identified by `msgIdBytes` (how that value is obtained depends on the application).
-
-The code sample above shows pointing the `Reader` object to a specific message (by ID), but you can also use `MessageId.earliest` to point to the earliest available message on the topic or `MessageId.latest` to point to the most recent available message.
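-
-For instance, the following is a minimal sketch of reading a topic from the beginning; `hasMessageAvailable` is used to stop at the current end of the topic, and the topic name is a placeholder.
-
-```java
-
-// A sketch: read everything published so far, starting from the earliest message.
-Reader reader = pulsarClient.newReader()
-        .topic("my-topic")
-        .startMessageId(MessageId.earliest)
-        .create();
-
-while (reader.hasMessageAvailable()) {
-    Message message = reader.readNext();
-    System.out.println("Read message: " + new String(message.getData()));
-}
-
-```
-
-### Configure reader
-When you create a reader, you can use the `loadConf` configuration. The following parameters are available in `loadConf`.
-
-| Name | Type|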
    Description
    | Default -|---|---|---|--- -`topicName`|String|Topic name. |None -`receiverQueueSize`|int|Size of a consumer's receiver queue.

    That is, the number of messages that can be accumulated by a consumer before an application calls `Receive`.

    A value higher than the default value increases consumer throughput, though at the expense of more memory utilization.|1000 -`readerListener`|ReaderListener<T>|A listener that is called for message received.|None -`readerName`|String|Reader name.|null -`subscriptionName`|String| Subscription name|When there is a single topic, the default subscription name is `"reader-" + 10-digit UUID`.
    When there are multiple topics, the default subscription name is `"multiTopicsReader-" + 10-digit UUID`.
-`subscriptionRolePrefix`|String|Prefix of the subscription role. |null
-`cryptoKeyReader`|CryptoKeyReader|Interface that abstracts the access to a key store.|null
-`cryptoFailureAction`|ConsumerCryptoFailureAction|The action the consumer takes when it receives a message that cannot be decrypted.
  - **FAIL**: this is the default option, to fail messages until crypto succeeds.
  - **DISCARD**: silently acknowledge and do not deliver the message to an application.
  - **CONSUME**: deliver encrypted messages to applications. It is the application's responsibility to decrypt the message.

    Note that if an encrypted message is delivered without being decrypted, its decompression fails.

    If messages contain batch messages, a client is not able to retrieve individual messages from the batch.

    A delivered encrypted message contains an {@link EncryptionContext} that holds the encryption and compression information, which an application can use to decrypt the consumed message payload.|
  ConsumerCryptoFailureAction.FAIL
-`readCompacted`|boolean|If enabling `readCompacted`, a consumer reads messages from a compacted topic rather than a full message backlog of a topic.

    A consumer only sees the latest value for each key in the compacted topic, up until the point in the topic's message backlog where compaction has been applied. Beyond that point, messages are sent as normal.

    `readCompacted` can only be enabled on subscriptions to persistent topics that have a single active consumer (for example, failover or exclusive subscriptions).

    Attempting to enable it on subscriptions to non-persistent topics or on shared subscriptions leads to a subscription call throwing a `PulsarClientException`.|false -`resetIncludeHead`|boolean|If set to true, the first message to be returned is the one specified by `messageId`.

    If set to false, the first message to be returned is the one next to the message specified by `messageId`.|false
-
-### Sticky key range reader
-
-In a sticky key range reader, the broker only dispatches messages whose message-key hash falls within the specified key hash ranges. Multiple key hash ranges can be specified on a reader.
-
-The following is an example to create a sticky key range reader.
-
-```java
-
-pulsarClient.newReader()
-        .topic(topic)
-        .startMessageId(MessageId.earliest)
-        .keyHashRange(Range.of(0, 10000), Range.of(20001, 30000))
-        .create();
-
-```
-
-The total hash range size is 65536, so the max end of a range should be less than or equal to 65535.
-
-## Schema
-
-In Pulsar, all message data consists of byte arrays "under the hood." [Message schemas](schema-get-started.md) enable you to use other types of data when constructing and handling messages (from simple types like strings to more complex, application-specific types). If you construct, say, a [producer](#producer) without specifying a schema, then the producer can only produce messages of type `byte[]`. The following is an example.
-
-```java
-
-Producer producer = client.newProducer()
-        .topic(topic)
-        .create();
-
-```
-
-The producer above is equivalent to a `Producer<byte[]>` (in fact, you should *always* explicitly specify the type). If you'd like to use a producer for a different type of data, you'll need to specify a **schema** that informs Pulsar which data type will be transmitted over the [topic](reference-terminology.md#topic).
-
-### AvroBaseStructSchema example
-
-Let's say that you have a `SensorReading` class that you'd like to transmit over a Pulsar topic:
-
-```java
-
-public class SensorReading {
-    public float temperature;
-
-    public SensorReading(float temperature) {
-        this.temperature = temperature;
-    }
-
-    // A no-arg constructor is required
-    public SensorReading() {
-    }
-
-    public float getTemperature() {
-        return temperature;
-    }
-
-    public void setTemperature(float temperature) {
-        this.temperature = temperature;
-    }
-}
-
-```
-
-You could then create a `Producer` (or `Consumer`) like this:
-
-```java
-
-Producer producer = client.newProducer(JSONSchema.of(SensorReading.class))
-        .topic("sensor-readings")
-        .create();
-
-```
-
-The following schema formats are currently available for Java:
-
-* No schema or the byte array schema (which can be applied using `Schema.BYTES`):
-
-  ```java
-
-  Producer bytesProducer = client.newProducer(Schema.BYTES)
-      .topic("some-raw-bytes-topic")
-      .create();
-
-  ```
-
-  Or, equivalently:
-
-  ```java
-
-  Producer bytesProducer = client.newProducer()
-      .topic("some-raw-bytes-topic")
-      .create();
-
-  ```
-
-* `String` for normal UTF-8-encoded string data. Apply the schema using `Schema.STRING`:
-
-  ```java
-
-  Producer stringProducer = client.newProducer(Schema.STRING)
-      .topic("some-string-topic")
-      .create();
-
-  ```
-
-* Create JSON schemas for POJOs using `Schema.JSON`. The following is an example.
-
-  ```java
-
-  Producer pojoProducer = client.newProducer(Schema.JSON(MyPojo.class))
-      .topic("some-pojo-topic")
-      .create();
-
-  ```
-
-* Generate Protobuf schemas using `Schema.PROTOBUF`. The following example shows how to create the Protobuf schema and use it to instantiate a new producer:
-
-  ```java
-
-  Producer protobufProducer = client.newProducer(Schema.PROTOBUF(MyProtobuf.class))
-      .topic("some-protobuf-topic")
-      .create();
-
-  ```
-
-* Define Avro schemas with `Schema.AVRO`. 
The following code snippet demonstrates how to create and use Avro schema.
-
-  ```java
-
-  Producer avroProducer = client.newProducer(Schema.AVRO(MyAvro.class))
-      .topic("some-avro-topic")
-      .create();
-
-  ```
-
-### ProtobufNativeSchema example
-
-For an example of ProtobufNativeSchema, see [`SchemaDefinition` in `Complex type`](schema-understand.md#complex-type).
-
-## Authentication
-
-Pulsar currently supports three authentication schemes: [TLS](security-tls-authentication.md), [Athenz](security-athenz.md), and [Oauth2](security-oauth2.md). You can use the Pulsar Java client with all of them.
-
-### TLS Authentication
-
-To use [TLS](security-tls-authentication.md), note that the `enableTls` method is deprecated; instead, use a "pulsar+ssl://" URL in `serviceUrl` to enable TLS, point your Pulsar client to a trusted TLS cert path, and provide paths to the client cert and key files.
-
-The following is an example.
-
-```java
-
-Map authParams = new HashMap();
-authParams.put("tlsCertFile", "/path/to/client-cert.pem");
-authParams.put("tlsKeyFile", "/path/to/client-key.pem");
-
-Authentication tlsAuth = AuthenticationFactory
-        .create(AuthenticationTls.class.getName(), authParams);
-
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar+ssl://my-broker.com:6651")
-        .tlsTrustCertsFilePath("/path/to/cacert.pem")
-        .authentication(tlsAuth)
-        .build();
-
-```
-
-### Athenz
-
-To use [Athenz](security-athenz.md) as an authentication provider, you need to [use TLS](#tls-authentication) and provide values for four parameters in a hash:
-
-* `tenantDomain`
-* `tenantService`
-* `providerDomain`
-* `privateKey`
-
-You can also set an optional `keyId`. The following is an example.
-
-```java
-
-Map authParams = new HashMap();
-authParams.put("tenantDomain", "shopping"); // Tenant domain name
-authParams.put("tenantService", "some_app"); // Tenant service name
-authParams.put("providerDomain", "pulsar"); // Provider domain name
-authParams.put("privateKey", "file:///path/to/private.pem"); // Tenant private key path
-authParams.put("keyId", "v1"); // Key id for the tenant private key (optional, default: "0")
-
-Authentication athenzAuth = AuthenticationFactory
-        .create(AuthenticationAthenz.class.getName(), authParams);
-
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar+ssl://my-broker.com:6651")
-        .tlsTrustCertsFilePath("/path/to/cacert.pem")
-        .authentication(athenzAuth)
-        .build();
-
-```
-
-> #### Supported pattern formats
-> The `privateKey` parameter supports the following three pattern formats:
-> * `file:///path/to/file`
-> * `file:/path/to/file`
-> * `data:application/x-pem-file;base64,`
-
-### Oauth2
-
-The following example shows how to use [Oauth2](security-oauth2.md) as an authentication provider for the Pulsar Java client.
-
-You can use the factory method to configure authentication for the Pulsar Java client.
-
-```java
-
-PulsarClient client = PulsarClient.builder()
-    .serviceUrl("pulsar://broker.example.com:6650/")
-    .authentication(
-        AuthenticationFactoryOAuth2.clientCredentials(this.issuerUrl, this.credentialsUrl, this.audience))
-    .build();
-
-```
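-
-For reference, the following is a minimal sketch of wiring up the factory method with concrete values; the issuer URL, credentials file path, and audience below are placeholders, not values you can use as-is.
-
-```java
-
-// Placeholder values; substitute your provider's issuer, credentials file, and audience.
-URL issuerUrl = new URL("https://accounts.example.com");
-URL credentialsUrl = new URL("file:///path/to/credentials.json");
-String audience = "urn:example:pulsar:audience";
-
-PulsarClient client = PulsarClient.builder()
-    .serviceUrl("pulsar://broker.example.com:6650/")
-    .authentication(AuthenticationFactoryOAuth2.clientCredentials(issuerUrl, credentialsUrl, audience))
-    .build();
-
-```
-
-In addition, you can also use the encoded parameters to configure authentication for the Pulsar Java client.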
-
-```java
-
-Authentication auth = AuthenticationFactory
-    .create(AuthenticationOAuth2.class.getName(), "{\"type\":\"client_credentials\",\"privateKey\":\"...\",\"issuerUrl\":\"...\",\"audience\":\"...\"}");
-PulsarClient client = PulsarClient.builder()
-    .serviceUrl("pulsar://broker.example.com:6650/")
-    .authentication(auth)
-    .build();
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/client-libraries-node.md b/site2/website/versioned_docs/version-2.9.1-deprecated/client-libraries-node.md
deleted file mode 100644
index 1ff37b26294666..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/client-libraries-node.md
+++ /dev/null
@@ -1,643 +0,0 @@
----
-id: client-libraries-node
-title: The Pulsar Node.js client
-sidebar_label: "Node.js"
-original_id: client-libraries-node
----
-
-The Pulsar Node.js client can be used to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Node.js.
-
-All the methods in [producers](#producers), [consumers](#consumers), and [readers](#readers) of a Node.js client are thread-safe.
-
-For 1.3.0 or later versions, [type definitions](https://github.com/apache/pulsar-client-node/blob/master/index.d.ts) used in TypeScript are available.
-
-## Installation
-
-You can install the [`pulsar-client`](https://www.npmjs.com/package/pulsar-client) library via [npm](https://www.npmjs.com/).
-
-### Requirements
-The Pulsar Node.js client library is based on the C++ client library.
-Follow [these instructions](client-libraries-cpp.md#compilation) and install the Pulsar C++ client library.
-
-### Compatibility
-
-Compatibility between each version of the Node.js client and the C++ client is as follows:
-
-| Node.js client | C++ client |
-| :------------- | :------------- |
-| 1.0.0 | 2.3.0 or later |
-| 1.1.0 | 2.4.0 or later |
-| 1.2.0 | 2.5.0 or later |
-
-If an incompatible version of the C++ client is installed, you may fail to build or run this library.
-
-### Installation using npm
-
-Install the `pulsar-client` library via [npm](https://www.npmjs.com/):
-
-```shell
-
-$ npm install pulsar-client
-
-```
-
-:::note
-
-Also, this library works only in Node.js 10.x or later because it uses the [`node-addon-api`](https://github.com/nodejs/node-addon-api) module to wrap the C++ library.
-
-:::
-
-## Connection URLs
-To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL.
-
-Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme and have a default port of 6650. Here is an example for `localhost`:
-
-```http
-
-pulsar://localhost:6650
-
-```
-
-A URL for a production Pulsar cluster may look something like this:
-
-```http
-
-pulsar://pulsar.us-west.example.com:6650
-
-```
-
-If you are using [TLS encryption](security-tls-transport.md) or [TLS Authentication](security-tls-authentication.md), the URL looks like this:
-
-```http
-
-pulsar+ssl://pulsar.us-west.example.com:6651
-
-```
-
-## Create a client
-
-In order to interact with Pulsar, you first need a client object. You can create a client instance using a `new` operator and the `Client` method, passing in a client options object (more on configuration [below](#client-configuration)).
- -Here is an example: - -```JavaScript - -const Pulsar = require('pulsar-client'); - -(async () => { - const client = new Pulsar.Client({ - serviceUrl: 'pulsar://localhost:6650', - }); - - await client.close(); -})(); - -``` - -### Client configuration - -The following configurable parameters are available for Pulsar clients: - -| Parameter | Description | Default | -| :-------- | :---------- | :------ | -| `serviceUrl` | The connection URL for the Pulsar cluster. See [above](#connection-urls) for more info. | | -| `authentication` | Configure the authentication provider. (default: no authentication). See [TLS Authentication](security-tls-authentication.md) for more info. | | -| `operationTimeoutSeconds` | The timeout for Node.js client operations (creating producers, subscribing to and unsubscribing from [topics](reference-terminology.md#topic)). Retries occur until this threshold is reached, at which point the operation fails. | 30 | -| `ioThreads` | The number of threads to use for handling connections to Pulsar [brokers](reference-terminology.md#broker). | 1 | -| `messageListenerThreads` | The number of threads used by message listeners ([consumers](#consumers) and [readers](#readers)). | 1 | -| `concurrentLookupRequest` | The number of concurrent lookup requests that can be sent on each broker connection. Setting a maximum helps to keep from overloading brokers. You should set values over the default of 50000 only if the client needs to produce and/or subscribe to thousands of Pulsar topics. | 50000 | -| `tlsTrustCertsFilePath` | The file path for the trusted TLS certificate. | | -| `tlsValidateHostname` | The boolean value of setup whether to enable TLS hostname verification. | `false` | -| `tlsAllowInsecureConnection` | The boolean value of setup whether the Pulsar client accepts untrusted TLS certificate from broker. | `false` | -| `statsIntervalInSeconds` | Interval between each stat info. Stats is activated with positive statsInterval. The value should be set to 1 second at least | 600 | -| `log` | A function that is used for logging. | `console.log` | - -## Producers - -Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Node.js producers using a producer configuration object. - -Here is an example: - -```JavaScript - -const producer = await client.createProducer({ - topic: 'my-topic', -}); - -await producer.send({ - data: Buffer.from("Hello, Pulsar"), -}); - -await producer.close(); - -``` - -> #### Promise operation -> When you create a new Pulsar producer, the operation returns `Promise` object and get producer instance or an error through executor function. -> In this example, using await operator instead of executor function. - -### Producer operations - -Pulsar Node.js producers have the following methods available: - -| Method | Description | Return type | -| :----- | :---------- | :---------- | -| `send(Object)` | Publishes a [message](#messages) to the producer's topic. When the message is successfully acknowledged by the Pulsar broker, or an error is thrown, the Promise object whose result is the message ID runs executor function. | `Promise` | -| `flush()` | Sends message from send queue to Pulsar broker. When the message is successfully acknowledged by the Pulsar broker, or an error is thrown, the Promise object runs executor function. | `Promise` | -| `close()` | Closes the producer and releases all resources allocated to it. Once `close()` is called, no more messages are accepted from the publisher. 
This method returns a Promise object. It runs the executor function when all pending publish requests are persisted by Pulsar. If an error is thrown, no pending writes are retried. | `Promise` | -| `getProducerName()` | Getter method of the producer name. | `string` | -| `getTopic()` | Getter method of the name of the topic. | `string` | - -### Producer configuration - -| Parameter | Description | Default | -| :-------- | :---------- | :------ | -| `topic` | The Pulsar [topic](reference-terminology.md#topic) to which the producer publishes messages. | | -| `producerName` | A name for the producer. If you do not explicitly assign a name, Pulsar automatically generates a globally unique name. If you choose to explicitly assign a name, it needs to be unique across *all* Pulsar clusters, otherwise the creation operation throws an error. | | -| `sendTimeoutMs` | When publishing a message to a topic, the producer waits for an acknowledgment from the responsible Pulsar [broker](reference-terminology.md#broker). If a message is not acknowledged within the threshold set by this parameter, an error is thrown. If you set `sendTimeoutMs` to -1, the timeout is set to infinity (and thus removed). Removing the send timeout is recommended when using Pulsar's [message de-duplication](cookbooks-deduplication.md) feature. | 30000 | -| `initialSequenceId` | The initial sequence ID of the message. When producer send message, add sequence ID to message. The ID is increased each time to send. | | -| `maxPendingMessages` | The maximum size of the queue holding pending messages (i.e. messages waiting to receive an acknowledgment from the [broker](reference-terminology.md#broker)). By default, when the queue is full all calls to the `send` method fails *unless* `blockIfQueueFull` is set to `true`. | 1000 | -| `maxPendingMessagesAcrossPartitions` | The maximum size of the sum of partition's pending queue. | 50000 | -| `blockIfQueueFull` | If set to `true`, the producer's `send` method waits when the outgoing message queue is full rather than failing and throwing an error (the size of that queue is dictated by the `maxPendingMessages` parameter); if set to `false` (the default), `send` operations fails and throw a error when the queue is full. | `false` | -| `messageRoutingMode` | The message routing logic (for producers on [partitioned topics](concepts-messaging.md#partitioned-topics)). This logic is applied only when no key is set on messages. The available options are: round robin (`RoundRobinDistribution`), or publishing all messages to a single partition (`UseSinglePartition`, the default). | `UseSinglePartition` | -| `hashingScheme` | The hashing function that determines the partition on which a particular message is published (partitioned topics only). The available options are: `JavaStringHash` (the equivalent of `String.hashCode()` in Java), `Murmur3_32Hash` (applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function), or `BoostHash` (applies the hashing function from C++'s [Boost](https://www.boost.org/doc/libs/1_62_0/doc/html/hash.html) library). | `BoostHash` | -| `compressionType` | The message data compression type used by the producer. The available options are [`LZ4`](https://github.com/lz4/lz4), and [`Zlib`](https://zlib.net/), [ZSTD](https://github.com/facebook/zstd/), [SNAPPY](https://github.com/google/snappy/). | Compression None | -| `batchingEnabled` | If set to `true`, the producer send message as batch. 
| `true` | -| `batchingMaxPublishDelayMs` | The maximum time of delay sending message in batching. | 10 | -| `batchingMaxMessages` | The maximum size of sending message in each time of batching. | 1000 | -| `properties` | The metadata of producer. | | - -### Producer example - -This example creates a Node.js producer for the `my-topic` topic and sends 10 messages to that topic: - -```JavaScript - -const Pulsar = require('pulsar-client'); - -(async () => { - // Create a client - const client = new Pulsar.Client({ - serviceUrl: 'pulsar://localhost:6650', - }); - - // Create a producer - const producer = await client.createProducer({ - topic: 'my-topic', - }); - - // Send messages - for (let i = 0; i < 10; i += 1) { - const msg = `my-message-${i}`; - producer.send({ - data: Buffer.from(msg), - }); - console.log(`Sent message: ${msg}`); - } - await producer.flush(); - - await producer.close(); - await client.close(); -})(); - -``` - -## Consumers - -Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on that topic/those topics. You can [configure](#consumer-configuration) Node.js consumers using a consumer configuration object. - -Here is an example: - -```JavaScript - -const consumer = await client.subscribe({ - topic: 'my-topic', - subscription: 'my-subscription', -}); - -const msg = await consumer.receive(); -console.log(msg.getData().toString()); -consumer.acknowledge(msg); - -await consumer.close(); - -``` - -> #### Promise operation -> When you create a new Pulsar consumer, the operation returns `Promise` object and get consumer instance or an error through executor function. -> In this example, using await operator instead of executor function. - -### Consumer operations - -Pulsar Node.js consumers have the following methods available: - -| Method | Description | Return type | -| :----- | :---------- | :---------- | -| `receive()` | Receives a single message from the topic. When the message is available, the Promise object run executor function and get message object. | `Promise` | -| `receive(Number)` | Receives a single message from the topic with specific timeout in milliseconds. | `Promise` | -| `acknowledge(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message object. | `void` | -| `acknowledgeId(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID object. | `void` | -| `acknowledgeCumulative(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message. The `acknowledgeCumulative` method returns void, and send the ack to the broker asynchronously. After that, the messages are *not* redelivered to the consumer. Cumulative acking can not be used with a [shared](concepts-messaging.md#shared) subscription type. | `void` | -| `acknowledgeCumulativeId(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message ID. | `void` | -| `negativeAcknowledge(Message)`| [Negatively acknowledges](reference-terminology.md#negative-acknowledgment-nack) a message to the Pulsar broker by message object. | `void` | -| `negativeAcknowledgeId(MessageId)` | [Negatively acknowledges](reference-terminology.md#negative-acknowledgment-nack) a message to the Pulsar broker by message ID object. 
| `void` | -| `close()` | Closes the consumer, disabling its ability to receive messages from the broker. | `Promise` | -| `unsubscribe()` | Unsubscribes the subscription. | `Promise` | - -### Consumer configuration - -| Parameter | Description | Default | -| :-------- | :---------- | :------ | -| `topic` | The Pulsar topic on which the consumer establishes a subscription and listen for messages. | | -| `topics` | The array of topics. | | -| `topicsPattern` | The regular expression for topics. | | -| `subscription` | The subscription name for this consumer. | | -| `subscriptionType` | Available options are `Exclusive`, `Shared`, `Key_Shared`, and `Failover`. | `Exclusive` | -| `subscriptionInitialPosition` | Initial position at which to set cursor when subscribing to a topic at first time. | `SubscriptionInitialPosition.Latest` | -| `ackTimeoutMs` | Acknowledge timeout in milliseconds. | 0 | -| `nAckRedeliverTimeoutMs` | Delay to wait before redelivering messages that failed to be processed. | 60000 | -| `receiverQueueSize` | Sets the size of the consumer's receiver queue, i.e. the number of messages that can be accumulated by the consumer before the application calls `receive`. A value higher than the default of 1000 could increase consumer throughput, though at the expense of more memory utilization. | 1000 | -| `receiverQueueSizeAcrossPartitions` | Set the max total receiver queue size across partitions. This setting is used to reduce the receiver queue size for individual partitions if the total exceeds this value. | 50000 | -| `consumerName` | The name of consumer. Currently(v2.4.1), [failover](concepts-messaging.md#failover) mode use consumer name in ordering. | | -| `properties` | The metadata of consumer. | | -| `listener`| A listener that is called for a message received. | | -| `readCompacted`| If enabling `readCompacted`, a consumer reads messages from a compacted topic rather than reading a full message backlog of a topic.

    A consumer only sees the latest value for each key in the compacted topic, up until the point in the topic's message backlog where compaction has been applied. Beyond that point, messages are sent as normal.

    `readCompacted` can only be enabled on subscriptions to persistent topics that have a single active consumer (such as failover or exclusive subscriptions).

    Attempting to enable it on subscriptions to non-persistent topics or on shared subscriptions leads to a subscription call throwing a `PulsarClientException`. | false | - -### Consumer example - -This example creates a Node.js consumer with the `my-subscription` subscription on the `my-topic` topic, receives messages, prints the content that arrive, and acknowledges each message to the Pulsar broker for 10 times: - -```JavaScript - -const Pulsar = require('pulsar-client'); - -(async () => { - // Create a client - const client = new Pulsar.Client({ - serviceUrl: 'pulsar://localhost:6650', - }); - - // Create a consumer - const consumer = await client.subscribe({ - topic: 'my-topic', - subscription: 'my-subscription', - subscriptionType: 'Exclusive', - }); - - // Receive messages - for (let i = 0; i < 10; i += 1) { - const msg = await consumer.receive(); - console.log(msg.getData().toString()); - consumer.acknowledge(msg); - } - - await consumer.close(); - await client.close(); -})(); - -``` - -Instead a consumer can be created with `listener` to process messages. - -```JavaScript - -// Create a consumer -const consumer = await client.subscribe({ - topic: 'my-topic', - subscription: 'my-subscription', - subscriptionType: 'Exclusive', - listener: (msg, msgConsumer) => { - console.log(msg.getData().toString()); - msgConsumer.acknowledge(msg); - }, -}); - -``` - -## Readers - -Pulsar readers process messages from Pulsar topics. Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recently unacked message). You can [configure](#reader-configuration) Node.js readers using a reader configuration object. - -Here is an example: - -```JavaScript - -const reader = await client.createReader({ - topic: 'my-topic', - startMessageId: Pulsar.MessageId.earliest(), -}); - -const msg = await reader.readNext(); -console.log(msg.getData().toString()); - -await reader.close(); - -``` - -### Reader operations - -Pulsar Node.js readers have the following methods available: - -| Method | Description | Return type | -| :----- | :---------- | :---------- | -| `readNext()` | Receives the next message on the topic (analogous to the `receive` method for [consumers](#consumer-operations)). When the message is available, the Promise object run executor function and get message object. | `Promise` | -| `readNext(Number)` | Receives a single message from the topic with specific timeout in milliseconds. | `Promise` | -| `hasNext()` | Return whether the broker has next message in target topic. | `Boolean` | -| `close()` | Closes the reader, disabling its ability to receive messages from the broker. | `Promise` | - -### Reader configuration - -| Parameter | Description | Default | -| :-------- | :---------- | :------ | -| `topic` | The Pulsar [topic](reference-terminology.md#topic) on which the reader establishes a subscription and listen for messages. | | -| `startMessageId` | The initial reader position, i.e. the message at which the reader begins processing messages. The options are `Pulsar.MessageId.earliest` (the earliest available message on the topic), `Pulsar.MessageId.latest` (the latest available message on the topic), or a message ID object for a position that is not earliest or latest. | | -| `receiverQueueSize` | Sets the size of the reader's receiver queue, i.e. 
the number of messages that can be accumulated by the reader before the application calls `readNext`. A value higher than the default of 1000 could increase reader throughput, though at the expense of more memory utilization. | 1000 | -| `readerName` | The name of the reader. | | -| `subscriptionRolePrefix` | The subscription role prefix. | | -| `readCompacted` | If enabling `readCompacted`, a consumer reads messages from a compacted topic rather than reading a full message backlog of a topic.

    A consumer only sees the latest value for each key in the compacted topic, up until the point in the topic's message backlog where compaction has been applied. Beyond that point, messages are sent as normal.

    `readCompacted` can only be enabled on subscriptions to persistent topics that have a single active consumer (such as failover or exclusive subscriptions).

    Attempting to enable it on subscriptions to non-persistent topics or on shared subscriptions leads to a subscription call throwing a `PulsarClientException`. | `false` | - - -### Reader example - -This example creates a Node.js reader with the `my-topic` topic, reads messages, and prints the content that arrive for 10 times: - -```JavaScript - -const Pulsar = require('pulsar-client'); - -(async () => { - // Create a client - const client = new Pulsar.Client({ - serviceUrl: 'pulsar://localhost:6650', - operationTimeoutSeconds: 30, - }); - - // Create a reader - const reader = await client.createReader({ - topic: 'my-topic', - startMessageId: Pulsar.MessageId.earliest(), - }); - - // read messages - for (let i = 0; i < 10; i += 1) { - const msg = await reader.readNext(); - console.log(msg.getData().toString()); - } - - await reader.close(); - await client.close(); -})(); - -``` - -## Messages - -In Pulsar Node.js client, you have to construct producer message object for producer. - -Here is an example message: - -```JavaScript - -const msg = { - data: Buffer.from('Hello, Pulsar'), - partitionKey: 'key1', - properties: { - 'foo': 'bar', - }, - eventTimestamp: Date.now(), - replicationClusters: [ - 'cluster1', - 'cluster2', - ], -} - -await producer.send(msg); - -``` - -The following keys are available for producer message objects: - -| Parameter | Description | -| :-------- | :---------- | -| `data` | The actual data payload of the message. | -| `properties` | A Object for any application-specific metadata attached to the message. | -| `eventTimestamp` | The timestamp associated with the message. | -| `sequenceId` | The sequence ID of the message. | -| `partitionKey` | The optional key associated with the message (particularly useful for things like topic compaction). | -| `replicationClusters` | The clusters to which this message is replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default. | -| `deliverAt` | The absolute timestamp at or after which the message is delivered. | | -| `deliverAfter` | The relative delay after which the message is delivered. | | - -### Message object operations - -In Pulsar Node.js client, you can receive (or read) message object as consumer (or reader). - -The message object have the following methods available: - -| Method | Description | Return type | -| :----- | :---------- | :---------- | -| `getTopicName()` | Getter method of topic name. | `String` | -| `getProperties()` | Getter method of properties. | `Array` | -| `getData()` | Getter method of message data. | `Buffer` | -| `getMessageId()` | Getter method of [message id object](#message-id-object-operations). | `Object` | -| `getPublishTimestamp()` | Getter method of publish timestamp. | `Number` | -| `getEventTimestamp()` | Getter method of event timestamp. | `Number` | -| `getRedeliveryCount()` | Getter method of redelivery count. | `Number` | -| `getPartitionKey()` | Getter method of partition key. | `String` | - -### Message ID object operations - -In Pulsar Node.js client, you can get message id object from message object. - -The message id object have the following methods available: - -| Method | Description | Return type | -| :----- | :---------- | :---------- | -| `serialize()` | Serialize the message id into a Buffer for storing. | `Buffer` | -| `toString()` | Get message id as String. | `String` | - -The client has static method of message id object. 
You can access it as `Pulsar.MessageId.someStaticMethod` too. - -The following static methods are available for the message id object: - -| Method | Description | Return type | -| :----- | :---------- | :---------- | -| `earliest()` | MessageId representing the earliest, or oldest available message stored in the topic. | `Object` | -| `latest()` | MessageId representing the latest, or last published message in the topic. | `Object` | -| `deserialize(Buffer)` | Deserialize a message id object from a Buffer. | `Object` | - -## End-to-end encryption - -[End-to-end encryption](https://pulsar.apache.org/docs/en/next/cookbooks-encryption/#docsNav) allows applications to encrypt messages at producers and decrypt at consumers. - -### Configuration - -If you want to use the end-to-end encryption feature in the Node.js client, you need to configure `publicKeyPath` and `privateKeyPath` for both producer and consumer. - -``` - -publicKeyPath: "./public.pem" -privateKeyPath: "./private.pem" - -``` - -### Tutorial - -This section provides step-by-step instructions on how to use the end-to-end encryption feature in the Node.js client. - -**Prerequisite** - -- Pulsar C++ client 2.7.1 or later - -**Step** - -1. Create both public and private key pairs. - - **Input** - - ```shell - - openssl genrsa -out private.pem 2048 - openssl rsa -in private.pem -pubout -out public.pem - - ``` - -2. Create a producer to send encrypted messages. - - **Input** - - ```nodejs - - const Pulsar = require('pulsar-client'); - - (async () => { - // Create a client - const client = new Pulsar.Client({ - serviceUrl: 'pulsar://localhost:6650', - operationTimeoutSeconds: 30, - }); - - // Create a producer - const producer = await client.createProducer({ - topic: 'persistent://public/default/my-topic', - sendTimeoutMs: 30000, - batchingEnabled: true, - publicKeyPath: "./public.pem", - privateKeyPath: "./private.pem", - encryptionKey: "encryption-key" - }); - - console.log(producer.ProducerConfig) - // Send messages - for (let i = 0; i < 10; i += 1) { - const msg = `my-message-${i}`; - producer.send({ - data: Buffer.from(msg), - }); - console.log(`Sent message: ${msg}`); - } - await producer.flush(); - - await producer.close(); - await client.close(); - })(); - - ``` - -3. Create a consumer to receive encrypted messages. - - **Input** - - ```nodejs - - const Pulsar = require('pulsar-client'); - - (async () => { - // Create a client - const client = new Pulsar.Client({ - serviceUrl: 'pulsar://172.25.0.3:6650', - operationTimeoutSeconds: 30 - }); - - // Create a consumer - const consumer = await client.subscribe({ - topic: 'persistent://public/default/my-topic', - subscription: 'sub1', - subscriptionType: 'Shared', - ackTimeoutMs: 10000, - publicKeyPath: "./public.pem", - privateKeyPath: "./private.pem" - }); - - console.log(consumer) - // Receive messages - for (let i = 0; i < 10; i += 1) { - const msg = await consumer.receive(); - console.log(msg.getData().toString()); - consumer.acknowledge(msg); - } - - await consumer.close(); - await client.close(); - })(); - - ``` - -4. Run the consumer to receive encrypted messages. - - **Input** - - ```shell - - node consumer.js - - ``` - -5. In a new terminal tab, run the producer to produce encrypted messages. - - **Input** - - ```shell - - node producer.js - - ``` - - Now you can see the producer sends messages and the consumer receives messages successfully. - - **Output** - - This is from the producer side. 
- - ``` - - Sent message: my-message-0 - Sent message: my-message-1 - Sent message: my-message-2 - Sent message: my-message-3 - Sent message: my-message-4 - Sent message: my-message-5 - Sent message: my-message-6 - Sent message: my-message-7 - Sent message: my-message-8 - Sent message: my-message-9 - - ``` - - This is from the consumer side. - - ``` - - my-message-0 - my-message-1 - my-message-2 - my-message-3 - my-message-4 - my-message-5 - my-message-6 - my-message-7 - my-message-8 - my-message-9 - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/client-libraries-python.md b/site2/website/versioned_docs/version-2.9.1-deprecated/client-libraries-python.md deleted file mode 100644 index 90cc840daa0a81..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/client-libraries-python.md +++ /dev/null @@ -1,481 +0,0 @@ ---- -id: client-libraries-python -title: Pulsar Python client -sidebar_label: "Python" -original_id: client-libraries-python ---- - -Pulsar Python client library is a wrapper over the existing [C++ client library](client-libraries-cpp.md) and exposes all of the [same features](/api/cpp). You can find the code in the [Python directory](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp/python) of the C++ client code. - -All the methods in producer, consumer, and reader of a Python client are thread-safe. - -[pdoc](https://github.com/BurntSushi/pdoc)-generated API docs for the Python client are available [here](/api/python). - -## Install - -You can install the [`pulsar-client`](https://pypi.python.org/pypi/pulsar-client) library either via [PyPi](https://pypi.python.org/pypi), using [pip](#installation-using-pip), or by building the library from [source](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp). - -### Install using pip - -To install the `pulsar-client` library as a pre-built package using the [pip](https://pip.pypa.io/en/stable/) package manager: - -```shell - -$ pip install pulsar-client==@pulsar:version_number@ - -``` - -### Optional dependencies -If you install the client libraries on Linux to support services like Pulsar functions or Avro serialization, you can install optional components alongside the `pulsar-client` library. - -```shell - -# avro serialization -$ pip install pulsar-client=='@pulsar:version_number@[avro]' - -# functions runtime -$ pip install pulsar-client=='@pulsar:version_number@[functions]' - -# all optional components -$ pip install pulsar-client=='@pulsar:version_number@[all]' - -``` - -Installation via PyPi is available for the following Python versions: - -Platform | Supported Python versions -:--------|:------------------------- -MacOS
    10.13 (High Sierra), 10.14 (Mojave)
    | 2.7, 3.7 -Linux | 2.7, 3.4, 3.5, 3.6, 3.7, 3.8 - -### Install from source - -To install the `pulsar-client` library by building from source, follow [instructions](client-libraries-cpp.md#compilation) and compile the Pulsar C++ client library. That builds the Python binding for the library. - -To install the built Python bindings: - -```shell - -$ git clone https://github.com/apache/pulsar -$ cd pulsar/pulsar-client-cpp/python -$ sudo python setup.py install - -``` - -## API Reference - -The complete Python API reference is available at [api/python](/api/python). - -## Examples - -You can find a variety of Python code examples for the [pulsar-client](/pulsar-client-cpp/python) library. - -### Producer example - -The following example creates a Python producer for the `my-topic` topic and sends 10 messages on that topic: - -```python - -import pulsar - -client = pulsar.Client('pulsar://localhost:6650') - -producer = client.create_producer('my-topic') - -for i in range(10): - producer.send(('Hello-%d' % i).encode('utf-8')) - -client.close() - -``` - -### Consumer example - -The following example creates a consumer with the `my-subscription` subscription name on the `my-topic` topic, receives incoming messages, prints the content and ID of messages that arrive, and acknowledges each message to the Pulsar broker. - -```python - -import pulsar - -client = pulsar.Client('pulsar://localhost:6650') - -consumer = client.subscribe('my-topic', 'my-subscription') - -while True: - msg = consumer.receive() - try: - print("Received message '{}' id='{}'".format(msg.data(), msg.message_id())) - # Acknowledge successful processing of the message - consumer.acknowledge(msg) - except: - # Message failed to be processed - consumer.negative_acknowledge(msg) - -client.close() - -``` - -This example shows how to configure negative acknowledgement. - -```python - -from pulsar import Client, schema -client = Client('pulsar://localhost:6650') -consumer = client.subscribe('negative_acks','test',schema=schema.StringSchema()) -producer = client.create_producer('negative_acks',schema=schema.StringSchema()) -for i in range(10): - print('send msg "hello-%d"' % i) - producer.send_async('hello-%d' % i, callback=None) -producer.flush() -for i in range(10): - msg = consumer.receive() - consumer.negative_acknowledge(msg) - print('receive and nack msg "%s"' % msg.data()) -for i in range(10): - msg = consumer.receive() - consumer.acknowledge(msg) - print('receive and ack msg "%s"' % msg.data()) -try: - # No more messages expected - msg = consumer.receive(100) -except: - print("no more msg") - pass - -``` - -### Reader interface example - -You can use the Pulsar Python API to use the Pulsar [reader interface](concepts-clients.md#reader-interface). Here's an example: - -```python - -# MessageId taken from a previously fetched message -msg_id = msg.message_id() - -reader = client.create_reader('my-topic', msg_id) - -while True: - msg = reader.read_next() - print("Received message '{}' id='{}'".format(msg.data(), msg.message_id())) - # No acknowledgment - -``` - -### Multi-topic subscriptions - -In addition to subscribing a consumer to a single Pulsar topic, you can also subscribe to multiple topics simultaneously. To use multi-topic subscriptions, you can supply a regular expression (regex) or a `List` of topics. If you select topics via regex, all topics must be within the same Pulsar namespace. 
- -The following is an example: - -```python - -import re -consumer = client.subscribe(re.compile('persistent://public/default/topic-*'), 'my-subscription') -while True: - msg = consumer.receive() - try: - print("Received message '{}' id='{}'".format(msg.data(), msg.message_id())) - # Acknowledge successful processing of the message - consumer.acknowledge(msg) - except: - # Message failed to be processed - consumer.negative_acknowledge(msg) -client.close() - -``` - -## Schema - -### Declare and validate schema - -You can declare a schema by passing a class that inherits -from `pulsar.schema.Record` and defines the fields as -class variables. For example: - -```python - -from pulsar.schema import * - -class Example(Record): - a = String() - b = Integer() - c = Boolean() - -``` - -With this simple schema definition, you can create producers, consumers and readers instances that refer to that. - -```python - -producer = client.create_producer( - topic='my-topic', - schema=AvroSchema(Example) ) - -producer.send(Example(a='Hello', b=1)) - -``` - -After creating the producer, the Pulsar broker validates that the existing topic schema is indeed of "Avro" type and that the format is compatible with the schema definition of the `Example` class. - -If there is a mismatch, an exception occurs in the producer creation. - -Once a producer is created with a certain schema definition, -it will only accept objects that are instances of the declared -schema class. - -Similarly, for a consumer/reader, the consumer will return an -object, instance of the schema record class, rather than the raw -bytes: - -```python - -consumer = client.subscribe( - topic='my-topic', - subscription_name='my-subscription', - schema=AvroSchema(Example) ) - -while True: - msg = consumer.receive() - ex = msg.value() - try: - print("Received message a={} b={} c={}".format(ex.a, ex.b, ex.c)) - # Acknowledge successful processing of the message - consumer.acknowledge(msg) - except: - # Message failed to be processed - consumer.negative_acknowledge(msg) - -``` - -### Supported schema types - -You can use different builtin schema types in Pulsar. All the definitions are in the `pulsar.schema` package. - -| Schema | Notes | -| ------ | ----- | -| `BytesSchema` | Get the raw payload as a `bytes` object. No serialization/deserialization are performed. This is the default schema mode | -| `StringSchema` | Encode/decode payload as a UTF-8 string. Uses `str` objects | -| `JsonSchema` | Require record definition. Serializes the record into standard JSON payload | -| `AvroSchema` | Require record definition. Serializes in AVRO format | - -### Schema definition reference - -The schema definition is done through a class that inherits from `pulsar.schema.Record`. - -This class has a number of fields which can be of either -`pulsar.schema.Field` type or another nested `Record`. All the -fields are specified in the `pulsar.schema` package. The fields -are matching the AVRO fields types. - -| Field Type | Python Type | Notes | -| ---------- | ----------- | ----- | -| `Boolean` | `bool` | | -| `Integer` | `int` | | -| `Long` | `int` | | -| `Float` | `float` | | -| `Double` | `float` | | -| `Bytes` | `bytes` | | -| `String` | `str` | | -| `Array` | `list` | Need to specify record type for items. | -| `Map` | `dict` | Key is always `String`. Need to specify value type. | - -Additionally, any Python `Enum` type can be used as a valid field type. - -#### Fields parameters - -When adding a field, you can use these parameters in the constructor. 
- -| Argument | Default | Notes | -| ---------- | --------| ----- | -| `default` | `None` | Set a default value for the field. Eg: `a = Integer(default=5)` | -| `required` | `False` | Mark the field as "required". It is set in the schema accordingly. | - -#### Schema definition examples - -##### Simple definition - -```python - -class Example(Record): - a = String() - b = Integer() - c = Array(String()) - i = Map(String()) - -``` - -##### Using enums - -```python - -from enum import Enum - -class Color(Enum): - red = 1 - green = 2 - blue = 3 - -class Example(Record): - name = String() - color = Color - -``` - -##### Complex types - -```python - -class MySubRecord(Record): - x = Integer() - y = Long() - z = String() - -class Example(Record): - a = String() - sub = MySubRecord() - -``` - -##### Set namespace for Avro schema - -Set the namespace for Avro Record schema using the special field `_avro_namespace`. - -```python - -class NamespaceDemo(Record): - _avro_namespace = 'xxx.xxx.xxx' - x = String() - y = Integer() - -``` - -The schema definition is like this. - -``` - -{ - 'name': 'NamespaceDemo', 'namespace': 'xxx.xxx.xxx', 'type': 'record', 'fields': [ - {'name': 'x', 'type': ['null', 'string']}, - {'name': 'y', 'type': ['null', 'int']} - ] -} - -``` - -## End-to-end encryption - -[End-to-end encryption](https://pulsar.apache.org/docs/en/next/cookbooks-encryption/#docsNav) allows applications to encrypt messages at producers and decrypt messages at consumers. - -### Configuration - -To use the end-to-end encryption feature in the Python client, you need to configure `publicKeyPath` and `privateKeyPath` for both producer and consumer. - -``` - -publicKeyPath: "./public.pem" -privateKeyPath: "./private.pem" - -``` - -### Tutorial - -This section provides step-by-step instructions on how to use the end-to-end encryption feature in the Python client. - -**Prerequisite** - -- Pulsar Python client 2.7.1 or later - -**Step** - -1. Create both public and private key pairs. - - **Input** - - ```shell - - openssl genrsa -out private.pem 2048 - openssl rsa -in private.pem -pubout -out public.pem - - ``` - -2. Create a producer to send encrypted messages. - - **Input** - - ```python - - import pulsar - - publicKeyPath = "./public.pem" - privateKeyPath = "./private.pem" - crypto_key_reader = pulsar.CryptoKeyReader(publicKeyPath, privateKeyPath) - client = pulsar.Client('pulsar://localhost:6650') - producer = client.create_producer(topic='encryption', encryption_key='encryption', crypto_key_reader=crypto_key_reader) - producer.send('encryption message'.encode('utf8')) - print('sent message') - producer.close() - client.close() - - ``` - -3. Create a consumer to receive encrypted messages. - - **Input** - - ```python - - import pulsar - - publicKeyPath = "./public.pem" - privateKeyPath = "./private.pem" - crypto_key_reader = pulsar.CryptoKeyReader(publicKeyPath, privateKeyPath) - client = pulsar.Client('pulsar://localhost:6650') - consumer = client.subscribe(topic='encryption', subscription_name='encryption-sub', crypto_key_reader=crypto_key_reader) - msg = consumer.receive() - print("Received msg '{}' id = '{}'".format(msg.data(), msg.message_id())) - consumer.close() - client.close() - - ``` - -4. Run the consumer to receive encrypted messages. - - **Input** - - ```shell - - python consumer.py - - ``` - -5. In a new terminal tab, run the producer to produce encrypted messages. 
- - **Input** - - ```shell - - python producer.py - - ``` - - Now you can see the producer sends messages and the consumer receives messages successfully. - - **Output** - - This is from the producer side. - - ``` - - sent message - - ``` - - This is from the consumer side. - - ``` - - Received msg 'encryption message' id = '(0,0,-1,-1)' - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/client-libraries-websocket.md b/site2/website/versioned_docs/version-2.9.1-deprecated/client-libraries-websocket.md deleted file mode 100644 index 77ec54f803dc5b..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/client-libraries-websocket.md +++ /dev/null @@ -1,662 +0,0 @@ ---- -id: client-libraries-websocket -title: Pulsar WebSocket API -sidebar_label: "WebSocket" -original_id: client-libraries-websocket ---- - -Pulsar [WebSocket](https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API) API provides a simple way to interact with Pulsar using languages that do not have an official [client library](getting-started-clients.md). Through WebSocket, you can publish and consume messages and use features available on the [Client Features Matrix](https://github.com/apache/pulsar/wiki/Client-Features-Matrix) page. - - -> You can use Pulsar WebSocket API with any WebSocket client library. See examples for Python and Node.js [below](#client-examples). - -## Running the WebSocket service - -The standalone variant of Pulsar that we recommend using for [local development](getting-started-standalone.md) already has the WebSocket service enabled. - -In non-standalone mode, there are two ways to deploy the WebSocket service: - -* [embedded](#embedded-with-a-pulsar-broker) with a Pulsar broker -* as a [separate component](#as-a-separate-component) - -### Embedded with a Pulsar broker - -In this mode, the WebSocket service will run within the same HTTP service that's already running in the broker. To enable this mode, set the [`webSocketServiceEnabled`](reference-configuration.md#broker-webSocketServiceEnabled) parameter in the [`conf/broker.conf`](reference-configuration.md#broker) configuration file in your installation. - -```properties - -webSocketServiceEnabled=true - -``` - -### As a separate component - -In this mode, the WebSocket service will be run from a Pulsar [broker](reference-terminology.md#broker) as a separate service. Configuration for this mode is handled in the [`conf/websocket.conf`](reference-configuration.md#websocket) configuration file. 
You'll need to set *at least* the following parameters:

* [`configurationStoreServers`](reference-configuration.md#websocket-configurationStoreServers)
* [`webServicePort`](reference-configuration.md#websocket-webServicePort)
* [`clusterName`](reference-configuration.md#websocket-clusterName)

Here's an example:

```properties

configurationStoreServers=zk1:2181,zk2:2181,zk3:2181
webServicePort=8080
clusterName=my-cluster

```

### Security settings

To enable TLS encryption on the WebSocket service:

```properties

tlsEnabled=true
tlsAllowInsecureConnection=false
tlsCertificateFilePath=/path/to/client-websocket.cert.pem
tlsKeyFilePath=/path/to/client-websocket.key-pk8.pem
tlsTrustCertsFilePath=/path/to/ca.cert.pem

```

### Starting the broker

When the configuration is set, you can start the service using the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) tool:

```shell

$ bin/pulsar-daemon start websocket

```

## API Reference

Pulsar's WebSocket API offers three endpoints for [producing](#producer-endpoint) messages, [consuming](#consumer-endpoint) messages, and [reading](#reader-endpoint) messages.

All exchanges via the WebSocket API use JSON.

### Authentication

#### Browser JavaScript WebSocket client

Use the `token` query param to transport the authentication token:

```http

ws://broker-service-url:8080/path?token=token

```

### Producer endpoint

The producer endpoint requires you to specify a tenant, namespace, and topic in the URL:

```http

ws://broker-service-url:8080/ws/v2/producer/persistent/:tenant/:namespace/:topic

```

##### Query param

Key | Type | Required? | Explanation
:---|:-----|:----------|:-----------
`sendTimeoutMillis` | long | no | Send timeout (default: 30 seconds)
`batchingEnabled` | boolean | no | Enable batching of messages (default: false)
`batchingMaxMessages` | int | no | Maximum number of messages permitted in a batch (default: 1000)
`maxPendingMessages` | int | no | Maximum size of the internal queue holding pending messages (default: 1000)
`batchingMaxPublishDelay` | long | no | Time period within which messages are batched (default: 10ms)
`messageRoutingMode` | string | no | Message [routing mode](https://pulsar.apache.org/api/client/index.html?org/apache/pulsar/client/api/ProducerConfiguration.MessageRoutingMode.html) for the partitioned producer: `SinglePartition`, `RoundRobinPartition`
`compressionType` | string | no | Compression [type](https://pulsar.apache.org/api/client/index.html?org/apache/pulsar/client/api/CompressionType.html): `LZ4`, `ZLIB`
`producerName` | string | no | Name of the producer. Pulsar enforces that only one producer with the same name can publish on a topic
`initialSequenceId` | long | no | Baseline for the sequence IDs of messages published by the producer
`hashingScheme` | string | no | [Hashing function](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.HashingScheme.html) to use when publishing on a partitioned topic: `JavaStringHash`, `Murmur3_32Hash`
`token` | string | no | Authentication token; used by the browser JavaScript client


#### Publishing a message

```json

{
  "payload": "SGVsbG8gV29ybGQ=",
  "properties": {"key1": "value1", "key2": "value2"},
  "context": "1"
}

```

Key | Type | Required?
| Explanation -:---|:-----|:----------|:----------- -`payload` | string | yes | Base-64 encoded payload -`properties` | key-value pairs | no | Application-defined properties -`context` | string | no | Application-defined request identifier -`key` | string | no | For partitioned topics, decides which partition to use -`replicationClusters` | array | no | Restrict replication to this list of [clusters](reference-terminology.md#cluster), specified by name - - -##### Example success response - -```json - -{ - "result": "ok", - "messageId": "CAAQAw==", - "context": "1" - } - -``` - -##### Example failure response - -```json - - { - "result": "send-error:3", - "errorMsg": "Failed to de-serialize from JSON", - "context": "1" - } - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`result` | string | yes | `ok` if successful or an error message if unsuccessful -`messageId` | string | yes | Message ID assigned to the published message -`context` | string | no | Application-defined request identifier - - -### Consumer endpoint - -The consumer endpoint requires you to specify a tenant, namespace, and topic, as well as a subscription, in the URL: - -```http - -ws://broker-service-url:8080/ws/v2/consumer/persistent/:tenant/:namespace/:topic/:subscription - -``` - -##### Query param - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`ackTimeoutMillis` | long | no | Set the timeout for unacked messages (default: 0) -`subscriptionType` | string | no | [Subscription type](https://pulsar.apache.org/api/client/index.html?org/apache/pulsar/client/api/SubscriptionType.html): `Exclusive`, `Failover`, `Shared`, `Key_Shared` -`receiverQueueSize` | int | no | Size of the consumer receive queue (default: 1000) -`consumerName` | string | no | Consumer name -`priorityLevel` | int | no | Define a [priority](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setPriorityLevel-int-) for the consumer -`maxRedeliverCount` | int | no | Define a [maxRedeliverCount](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#deadLetterPolicy-org.apache.pulsar.client.api.DeadLetterPolicy-) for the consumer (default: 0). Activates [Dead Letter Topic](https://github.com/apache/pulsar/wiki/PIP-22%3A-Pulsar-Dead-Letter-Topic) feature. -`deadLetterTopic` | string | no | Define a [deadLetterTopic](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#deadLetterPolicy-org.apache.pulsar.client.api.DeadLetterPolicy-) for the consumer (default: {topic}-{subscription}-DLQ). Activates [Dead Letter Topic](https://github.com/apache/pulsar/wiki/PIP-22%3A-Pulsar-Dead-Letter-Topic) feature. -`pullMode` | boolean | no | Enable pull mode (default: false). See "Flow Control" below. -`negativeAckRedeliveryDelay` | int | no | When a message is negatively acknowledged, the delay time before the message is redelivered (in milliseconds). The default value is 60000. -`token` | string | no | Authentication token, this is used for the browser javascript client - -NB: these parameter (except `pullMode`) apply to the internal consumer of the WebSocket service. -So messages will be subject to the redelivery settings as soon as the get into the receive queue, -even if the client doesn't consume on the WebSocket. 
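
For example, a shared subscription with a smaller receive queue and a 30-second negative-ack redelivery delay could be opened with a URL like the following (the host, topic, and subscription name are placeholders):

```http

ws://broker-service-url:8080/ws/v2/consumer/persistent/public/default/my-topic/my-sub?subscriptionType=Shared&receiverQueueSize=500&negativeAckRedeliveryDelay=30000

```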
- -##### Receiving messages - -Server will push messages on the WebSocket session: - -```json - -"messageId": "CAMQADAA", - "payload": "hvXcJvHW7kOSrUn17P2q71RA5SdiXwZBqw==", - "properties": {}, - "publishTime": "2021-10-29T16:01:38.967-07:00", - "redeliveryCount": 0, - "encryptionContext": { - "keys": { - "client-rsa.pem": { - "keyValue": "jEuwS+PeUzmCo7IfLNxqoj4h7txbLjCQjkwpaw5AWJfZ2xoIdMkOuWDkOsqgFmWwxiecakS6GOZHs94x3sxzKHQx9Oe1jpwBg2e7L4fd26pp+WmAiLm/ArZJo6JotTeFSvKO3u/yQtGTZojDDQxiqFOQ1ZbMdtMZA8DpSMuq+Zx7PqLo43UdW1+krjQfE5WD+y+qE3LJQfwyVDnXxoRtqWLpVsAROlN2LxaMbaftv5HckoejJoB4xpf/dPOUqhnRstwQHf6klKT5iNhjsY4usACt78uILT0pEPd14h8wEBidBz/vAlC/zVMEqiDVzgNS7dqEYS4iHbf7cnWVCn3Hxw==", - "metadata": {} - } - }, - "param": "Tfu1PxVm6S9D3+Hk", - "compressionType": "NONE", - "uncompressedMessageSize": 0, - "batchSize": { - "empty": false, - "present": true - } - } - -``` - -Below are the parameters in the WebSocket consumer response. - -- General parameters - - Key | Type | Required? | Explanation - :---|:-----|:----------|:----------- - `messageId` | string | yes | Message ID - `payload` | string | yes | Base-64 encoded payload - `publishTime` | string | yes | Publish timestamp - `redeliveryCount` | number | yes | Number of times this message was already delivered - `properties` | key-value pairs | no | Application-defined properties - `key` | string | no | Original routing key set by producer - `encryptionContext` | EncryptionContext | no | Encryption context that consumers can use to decrypt received messages - `param` | string | no | Initialization vector for cipher (Base64 encoding) - `batchSize` | string | no | Number of entries in a message (if it is a batch message) - `uncompressedMessageSize` | string | no | Message size before compression - `compressionType` | string | no | Algorithm used to compress the message payload - -- `encryptionContext` related parameter - - Key | Type | Required? | Explanation - :---|:-----|:----------|:----------- - `keys` |key-EncryptionKey pairs | yes | Key in `key-EncryptionKey` pairs is an encryption key name. Value in `key-EncryptionKey` pairs is an encryption key object. - -- `encryptionKey` related parameters - - Key | Type | Required? | Explanation - :---|:-----|:----------|:----------- - `keyValue` | string | yes | Encryption key (Base64 encoding) - `metadata` | key-value pairs | no | Application-defined metadata - -#### Acknowledging the message - -Consumer needs to acknowledge the successful processing of the message to -have the Pulsar broker delete it. - -```json - -{ - "messageId": "CAAQAw==" -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`messageId`| string | yes | Message ID of the processed message - -#### Negatively acknowledging messages - -```json - -{ - "type": "negativeAcknowledge", - "messageId": "CAAQAw==" -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`messageId`| string | yes | Message ID of the processed message - -#### Flow control - -##### Push Mode - -By default (`pullMode=false`), the consumer endpoint will use the `receiverQueueSize` parameter both to size its -internal receive queue and to limit the number of unacknowledged messages that are passed to the WebSocket client. -In this mode, if you don't send acknowledgements, the Pulsar WebSocket service will stop sending messages after reaching -`receiverQueueSize` unacked messages sent to the WebSocket client. 
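
As a minimal sketch with the [`ws`](https://websockets.github.io/ws/) package (the endpoint and queue size are illustrative, not defaults), acknowledging each message as it is processed frees a slot in the `receiverQueueSize` budget so the service keeps pushing:

```javascript

const WebSocket = require('ws');

const ws = new WebSocket('ws://localhost:8080/ws/v2/consumer/persistent/public/default/my-topic/my-sub?receiverQueueSize=10');

ws.on('message', function(data) {
  const msg = JSON.parse(data);
  // ... process msg.payload (Base64-encoded) here ...

  // Acknowledge to free one of the 10 in-flight slots; without acks,
  // the service stops pushing after 10 unacknowledged messages.
  ws.send(JSON.stringify({ messageId: msg.messageId }));
});

```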
- -##### Pull Mode - -If you set `pullMode` to `true`, the WebSocket client will need to send `permit` commands to permit the -Pulsar WebSocket service to send more messages. - -```json - -{ - "type": "permit", - "permitMessages": 100 -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`type`| string | yes | Type of command. Must be `permit` -`permitMessages`| int | yes | Number of messages to permit - -NB: in this mode it's possible to acknowledge messages in a different connection. - -#### Check if reach end of topic - -Consumer can check if it has reached end of topic by sending `isEndOfTopic` request. - -**Request** - -```json - -{ - "type": "isEndOfTopic" -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`type`| string | yes | Type of command. Must be `isEndOfTopic` - -**Response** - -```json - -{ - "endOfTopic": "true/false" - } - -``` - -### Reader endpoint - -The reader endpoint requires you to specify a tenant, namespace, and topic in the URL: - -```http - -ws://broker-service-url:8080/ws/v2/reader/persistent/:tenant/:namespace/:topic - -``` - -##### Query param - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`readerName` | string | no | Reader name -`receiverQueueSize` | int | no | Size of the consumer receive queue (default: 1000) -`messageId` | int or enum | no | Message ID to start from, `earliest` or `latest` (default: `latest`) -`token` | string | no | Authentication token, this is used for the browser javascript client - -##### Receiving messages - -Server will push messages on the WebSocket session: - -```json - -{ - "messageId": "CAAQAw==", - "payload": "SGVsbG8gV29ybGQ=", - "properties": {"key1": "value1", "key2": "value2"}, - "publishTime": "2016-08-30 16:45:57.785", - "redeliveryCount": 4 -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`messageId` | string | yes | Message ID -`payload` | string | yes | Base-64 encoded payload -`publishTime` | string | yes | Publish timestamp -`redeliveryCount` | number | yes | Number of times this message was already delivered -`properties` | key-value pairs | no | Application-defined properties -`key` | string | no | Original routing key set by producer - -#### Acknowledging the message - -**In WebSocket**, Reader needs to acknowledge the successful processing of the message to -have the Pulsar WebSocket service update the number of pending messages. -If you don't send acknowledgements, Pulsar WebSocket service will stop sending messages after reaching the pendingMessages limit. - -```json - -{ - "messageId": "CAAQAw==" -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`messageId`| string | yes | Message ID of the processed message - -#### Check if reach end of topic - -Consumer can check if it has reached end of topic by sending `isEndOfTopic` request. - -**Request** - -```json - -{ - "type": "isEndOfTopic" -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`type`| string | yes | Type of command. 
Must be `isEndOfTopic` - -**Response** - -```json - -{ - "endOfTopic": "true/false" - } - -``` - -### Error codes - -In case of error the server will close the WebSocket session using the -following error codes: - -Error Code | Error Message -:----------|:------------- -1 | Failed to create producer -2 | Failed to subscribe -3 | Failed to deserialize from JSON -4 | Failed to serialize to JSON -5 | Failed to authenticate client -6 | Client is not authorized -7 | Invalid payload encoding -8 | Unknown error - -> The application is responsible for re-establishing a new WebSocket session after a backoff period. - -## Client examples - -Below you'll find code examples for the Pulsar WebSocket API in [Python](#python) and [Node.js](#nodejs). - -### Python - -This example uses the [`websocket-client`](https://pypi.python.org/pypi/websocket-client) package. You can install it using [pip](https://pypi.python.org/pypi/pip): - -```shell - -$ pip install websocket-client - -``` - -You can also download it from [PyPI](https://pypi.python.org/pypi/websocket-client). - -#### Python producer - -Here's an example Python producer that sends a simple message to a Pulsar [topic](reference-terminology.md#topic): - -```python - -import websocket, base64, json - -# If set enableTLS to true, your have to set tlsEnabled to true in conf/websocket.conf. -enable_TLS = False -scheme = 'ws' -if enable_TLS: - scheme = 'wss' - -TOPIC = scheme + '://localhost:8080/ws/v2/producer/persistent/public/default/my-topic' - -ws = websocket.create_connection(TOPIC) - -# encode message -s = "Hello World" -firstEncoded = s.encode("UTF-8") -binaryEncoded = base64.b64encode(firstEncoded) -payloadString = binaryEncoded.decode('UTF-8') - -# Send one message as JSON -ws.send(json.dumps({ - 'payload' : payloadString, - 'properties': { - 'key1' : 'value1', - 'key2' : 'value2' - }, - 'context' : 5 -})) - -response = json.loads(ws.recv()) -if response['result'] == 'ok': - print( 'Message published successfully') -else: - print('Failed to publish message:', response) -ws.close() - -``` - -#### Python consumer - -Here's an example Python consumer that listens on a Pulsar topic and prints the message ID whenever a message arrives: - -```python - -import websocket, base64, json - -# If set enableTLS to true, your have to set tlsEnabled to true in conf/websocket.conf. -enable_TLS = False -scheme = 'ws' -if enable_TLS: - scheme = 'wss' - -TOPIC = scheme + '://localhost:8080/ws/v2/consumer/persistent/public/default/my-topic/my-sub' - -ws = websocket.create_connection(TOPIC) - -while True: - msg = json.loads(ws.recv()) - if not msg: break - - print( "Received: {} - payload: {}".format(msg, base64.b64decode(msg['payload']))) - - # Acknowledge successful processing - ws.send(json.dumps({'messageId' : msg['messageId']})) - -ws.close() - -``` - -#### Python reader - -Here's an example Python reader that listens on a Pulsar topic and prints the message ID whenever a message arrives: - -```python - -import websocket, base64, json - -# If set enableTLS to true, your have to set tlsEnabled to true in conf/websocket.conf. 
-enable_TLS = False -scheme = 'ws' -if enable_TLS: - scheme = 'wss' - -TOPIC = scheme + '://localhost:8080/ws/v2/reader/persistent/public/default/my-topic' -ws = websocket.create_connection(TOPIC) - -while True: - msg = json.loads(ws.recv()) - if not msg: break - - print ( "Received: {} - payload: {}".format(msg, base64.b64decode(msg['payload']))) - - # Acknowledge successful processing - ws.send(json.dumps({'messageId' : msg['messageId']})) - -ws.close() - -``` - -### Node.js - -This example uses the [`ws`](https://websockets.github.io/ws/) package. You can install it using [npm](https://www.npmjs.com/): - -```shell - -$ npm install ws - -``` - -#### Node.js producer - -Here's an example Node.js producer that sends a simple message to a Pulsar topic: - -```javascript - -const WebSocket = require('ws'); - -// If set enableTLS to true, your have to set tlsEnabled to true in conf/websocket.conf. -const enableTLS = false; -const topic = `${enableTLS ? 'wss' : 'ws'}://localhost:8080/ws/v2/producer/persistent/public/default/my-topic`; -const ws = new WebSocket(topic); - -var message = { - "payload" : new Buffer("Hello World").toString('base64'), - "properties": { - "key1" : "value1", - "key2" : "value2" - }, - "context" : "1" -}; - -ws.on('open', function() { - // Send one message - ws.send(JSON.stringify(message)); -}); - -ws.on('message', function(message) { - console.log('received ack: %s', message); -}); - -``` - -#### Node.js consumer - -Here's an example Node.js consumer that listens on the same topic used by the producer above: - -```javascript - -const WebSocket = require('ws'); - -// If set enableTLS to true, your have to set tlsEnabled to true in conf/websocket.conf. -const enableTLS = false; -const topic = `${enableTLS ? 'wss' : 'ws'}://localhost:8080/ws/v2/consumer/persistent/public/default/my-topic/my-sub`; -const ws = new WebSocket(topic); - -ws.on('message', function(message) { - var receiveMsg = JSON.parse(message); - console.log('Received: %s - payload: %s', message, new Buffer(receiveMsg.payload, 'base64').toString()); - var ackMsg = {"messageId" : receiveMsg.messageId}; - ws.send(JSON.stringify(ackMsg)); -}); - -``` - -#### NodeJS reader - -```javascript - -const WebSocket = require('ws'); - -// If set enableTLS to true, your have to set tlsEnabled to true in conf/websocket.conf. -const enableTLS = false; -const topic = `${enableTLS ? 
'wss' : 'ws'}://localhost:8080/ws/v2/reader/persistent/public/default/my-topic`; -const ws = new WebSocket(topic); - -ws.on('message', function(message) { - var receiveMsg = JSON.parse(message); - console.log('Received: %s - payload: %s', message, new Buffer(receiveMsg.payload, 'base64').toString()); - var ackMsg = {"messageId" : receiveMsg.messageId}; - ws.send(JSON.stringify(ackMsg)); -}); - -``` - diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/client-libraries.md b/site2/website/versioned_docs/version-2.9.1-deprecated/client-libraries.md deleted file mode 100644 index 607c9317e4b7fb..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/client-libraries.md +++ /dev/null @@ -1,36 +0,0 @@ ---- -id: client-libraries -title: Pulsar client libraries -sidebar_label: "Overview" -original_id: client-libraries ---- - -Pulsar supports the following client libraries: - -- [Java client](client-libraries-java.md) -- [Go client](client-libraries-go.md) -- [Python client](client-libraries-python.md) -- [C++ client](client-libraries-cpp.md) -- [Node.js client](client-libraries-node.md) -- [WebSocket client](client-libraries-websocket.md) -- [C# client](client-libraries-dotnet.md) - -## Feature matrix -Pulsar client feature matrix for different languages is listed on [Client Features Matrix](https://github.com/apache/pulsar/wiki/Client-Features-Matrix) page. - -## Third-party clients - -Besides the official released clients, multiple projects on developing Pulsar clients are available in different languages. - -> If you have developed a new Pulsar client, feel free to submit a pull request and add your client to the list below. - -| Language | Project | Maintainer | License | Description | -|----------|---------|------------|---------|-------------| -| Go | [pulsar-client-go](https://github.com/Comcast/pulsar-client-go) | [Comcast](https://github.com/Comcast) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | A native golang client | -| Go | [go-pulsar](https://github.com/t2y/go-pulsar) | [t2y](https://github.com/t2y) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | -| Haskell | [supernova](https://github.com/cr-org/supernova) | [Chatroulette](https://github.com/cr-org) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Native Pulsar client for Haskell | -| Scala | [neutron](https://github.com/cr-org/neutron) | [Chatroulette](https://github.com/cr-org) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Purely functional Apache Pulsar client for Scala built on top of Fs2 | -| Scala | [pulsar4s](https://github.com/sksamuel/pulsar4s) | [sksamuel](https://github.com/sksamuel) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Idomatic, typesafe, and reactive Scala client for Apache Pulsar | -| Rust | [pulsar-rs](https://github.com/wyyerd/pulsar-rs) | [Wyyerd Group](https://github.com/wyyerd) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Future-based Rust bindings for Apache Pulsar | -| .NET | [pulsar-client-dotnet](https://github.com/fsharplang-ru/pulsar-client-dotnet) | [Lanayx](https://github.com/Lanayx) | 
[![GitHub](https://img.shields.io/badge/license-MIT-green.svg)](https://opensource.org/licenses/MIT) | Native .NET client for C#/F#/VB | -| Node.js | [pulsar-flex](https://github.com/ayeo-flex-org/pulsar-flex) | [Daniel Sinai](https://github.com/danielsinai), [Ron Farkash](https://github.com/ronfarkash), [Gal Rosenberg](https://github.com/galrose)| [![GitHub](https://img.shields.io/badge/license-MIT-green.svg)](https://opensource.org/licenses/MIT) | Native Nodejs client | diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/concepts-architecture-overview.md b/site2/website/versioned_docs/version-2.9.1-deprecated/concepts-architecture-overview.md deleted file mode 100644 index 4baa8c30a0d009..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/concepts-architecture-overview.md +++ /dev/null @@ -1,172 +0,0 @@ ---- -id: concepts-architecture-overview -title: Architecture Overview -sidebar_label: "Architecture" -original_id: concepts-architecture-overview ---- - -At the highest level, a Pulsar instance is composed of one or more Pulsar clusters. Clusters within an instance can [replicate](concepts-replication.md) data amongst themselves. - -In a Pulsar cluster: - -* One or more brokers handles and load balances incoming messages from producers, dispatches messages to consumers, communicates with the Pulsar configuration store to handle various coordination tasks, stores messages in BookKeeper instances (aka bookies), relies on a cluster-specific ZooKeeper cluster for certain tasks, and more. -* A BookKeeper cluster consisting of one or more bookies handles [persistent storage](#persistent-storage) of messages. -* A ZooKeeper cluster specific to that cluster handles coordination tasks between Pulsar clusters. - -The diagram below provides an illustration of a Pulsar cluster: - -![Pulsar architecture diagram](/assets/pulsar-system-architecture.png) - -At the broader instance level, an instance-wide ZooKeeper cluster called the configuration store handles coordination tasks involving multiple clusters, for example [geo-replication](concepts-replication.md). - -## Brokers - -The Pulsar message broker is a stateless component that's primarily responsible for running two other components: - -* An HTTP server that exposes a {@inject: rest:REST:/} API for both administrative tasks and [topic lookup](concepts-clients.md#client-setup-phase) for producers and consumers. The producers connect to the brokers to publish messages and the consumers connect to the brokers to consume the messages. -* A dispatcher, which is an asynchronous TCP server over a custom [binary protocol](developing-binary-protocol.md) used for all data transfers - -Messages are typically dispatched out of a [managed ledger](#managed-ledgers) cache for the sake of performance, *unless* the backlog exceeds the cache size. If the backlog grows too large for the cache, the broker will start reading entries from BookKeeper. - -Finally, to support geo-replication on global topics, the broker manages replicators that tail the entries published in the local region and republish them to the remote region using the Pulsar [Java client library](client-libraries-java.md). - -> For a guide to managing Pulsar brokers, see the [brokers](admin-api-brokers.md) guide. - -## Clusters - -A Pulsar instance consists of one or more Pulsar *clusters*. 
Clusters, in turn, consist of: - -* One or more Pulsar [brokers](#brokers) -* A ZooKeeper quorum used for cluster-level configuration and coordination -* An ensemble of bookies used for [persistent storage](#persistent-storage) of messages - -Clusters can replicate amongst themselves using [geo-replication](concepts-replication.md). - -> For a guide to managing Pulsar clusters, see the [clusters](admin-api-clusters.md) guide. - -## Metadata store - -The Pulsar metadata store maintains all the metadata of a Pulsar cluster, such as topic metadata, schema, broker load data, and so on. Pulsar uses [Apache ZooKeeper](https://zookeeper.apache.org/) for metadata storage, cluster configuration, and coordination. The Pulsar metadata store can be deployed on a separate ZooKeeper cluster or deployed on an existing ZooKeeper cluster. You can use one ZooKeeper cluster for both Pulsar metadata store and BookKeeper metadata store. If you want to deploy Pulsar brokers connected to an existing BookKeeper cluster, you need to deploy separate ZooKeeper clusters for Pulsar metadata store and BookKeeper metadata store respectively. - -In a Pulsar instance: - -* A configuration store quorum stores configuration for tenants, namespaces, and other entities that need to be globally consistent. -* Each cluster has its own local ZooKeeper ensemble that stores cluster-specific configuration and coordination such as which brokers are responsible for which topics as well as ownership metadata, broker load reports, BookKeeper ledger metadata, and more. - -## Configuration store - -The configuration store maintains all the configurations of a Pulsar instance, such as clusters, tenants, namespaces, partitioned topic related configurations, and so on. A Pulsar instance can have a single local cluster, multiple local clusters, or multiple cross-region clusters. Consequently, the configuration store can share the configurations across multiple clusters under a Pulsar instance. The configuration store can be deployed on a separate ZooKeeper cluster or deployed on an existing ZooKeeper cluster. - -## Persistent storage - -Pulsar provides guaranteed message delivery for applications. If a message successfully reaches a Pulsar broker, it will be delivered to its intended target. - -This guarantee requires that non-acknowledged messages are stored in a durable manner until they can be delivered to and acknowledged by consumers. This mode of messaging is commonly called *persistent messaging*. In Pulsar, N copies of all messages are stored and synced on disk, for example 4 copies across two servers with mirrored [RAID](https://en.wikipedia.org/wiki/RAID) volumes on each server. - -### Apache BookKeeper - -Pulsar uses a system called [Apache BookKeeper](http://bookkeeper.apache.org/) for persistent message storage. BookKeeper is a distributed [write-ahead log](https://en.wikipedia.org/wiki/Write-ahead_logging) (WAL) system that provides a number of crucial advantages for Pulsar: - -* It enables Pulsar to utilize many independent logs, called [ledgers](#ledgers). Multiple ledgers can be created for topics over time. -* It offers very efficient storage for sequential data that handles entry replication. -* It guarantees read consistency of ledgers in the presence of various system failures. -* It offers even distribution of I/O across bookies. -* It's horizontally scalable in both capacity and throughput. Capacity can be immediately increased by adding more bookies to a cluster. 
-* Bookies are designed to handle thousands of ledgers with concurrent reads and writes. By using multiple disk devices---one for journal and another for general storage--bookies are able to isolate the effects of read operations from the latency of ongoing write operations. - -In addition to message data, *cursors* are also persistently stored in BookKeeper. Cursors are [subscription](reference-terminology.md#subscription) positions for [consumers](reference-terminology.md#consumer). BookKeeper enables Pulsar to store consumer position in a scalable fashion. - -At the moment, Pulsar supports persistent message storage. This accounts for the `persistent` in all topic names. Here's an example: - -```http - -persistent://my-tenant/my-namespace/my-topic - -``` - -> Pulsar also supports ephemeral ([non-persistent](concepts-messaging.md#non-persistent-topics)) message storage. - - -You can see an illustration of how brokers and bookies interact in the diagram below: - -![Brokers and bookies](/assets/broker-bookie.png) - - -### Ledgers - -A ledger is an append-only data structure with a single writer that is assigned to multiple BookKeeper storage nodes, or bookies. Ledger entries are replicated to multiple bookies. Ledgers themselves have very simple semantics: - -* A Pulsar broker can create a ledger, append entries to the ledger, and close the ledger. -* After the ledger has been closed---either explicitly or because the writer process crashed---it can then be opened only in read-only mode. -* Finally, when entries in the ledger are no longer needed, the whole ledger can be deleted from the system (across all bookies). - -#### Ledger read consistency - -The main strength of Bookkeeper is that it guarantees read consistency in ledgers in the presence of failures. Since the ledger can only be written to by a single process, that process is free to append entries very efficiently, without need to obtain consensus. After a failure, the ledger will go through a recovery process that will finalize the state of the ledger and establish which entry was last committed to the log. After that point, all readers of the ledger are guaranteed to see the exact same content. - -#### Managed ledgers - -Given that Bookkeeper ledgers provide a single log abstraction, a library was developed on top of the ledger called the *managed ledger* that represents the storage layer for a single topic. A managed ledger represents the abstraction of a stream of messages with a single writer that keeps appending at the end of the stream and multiple cursors that are consuming the stream, each with its own associated position. - -Internally, a single managed ledger uses multiple BookKeeper ledgers to store the data. There are two reasons to have multiple ledgers: - -1. After a failure, a ledger is no longer writable and a new one needs to be created. -2. A ledger can be deleted when all cursors have consumed the messages it contains. This allows for periodic rollover of ledgers. - -### Journal storage - -In BookKeeper, *journal* files contain BookKeeper transaction logs. Before making an update to a [ledger](#ledgers), a bookie needs to ensure that a transaction describing the update is written to persistent (non-volatile) storage. A new journal file is created once the bookie starts or the older journal file reaches the journal file size threshold (configured using the [`journalMaxSizeMB`](reference-configuration.md#bookkeeper-journalMaxSizeMB) parameter). 
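
For illustration only, the rollover threshold mentioned above lives in the bookie configuration; a minimal sketch (the value here is arbitrary, not a recommendation):

```properties

# conf/bookkeeper.conf
# Start a new journal file once the current one reaches 2048 MB
journalMaxSizeMB=2048

```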
- -## Pulsar proxy - -One way for Pulsar clients to interact with a Pulsar [cluster](#clusters) is by connecting to Pulsar message [brokers](#brokers) directly. In some cases, however, this kind of direct connection is either infeasible or undesirable because the client doesn't have direct access to broker addresses. If you're running Pulsar in a cloud environment or on [Kubernetes](https://kubernetes.io) or an analogous platform, for example, then direct client connections to brokers are likely not possible. - -The **Pulsar proxy** provides a solution to this problem by acting as a single gateway for all of the brokers in a cluster. If you run the Pulsar proxy (which, again, is optional), all client connections with the Pulsar cluster will flow through the proxy rather than communicating with brokers. - -> For the sake of performance and fault tolerance, you can run as many instances of the Pulsar proxy as you'd like. - -Architecturally, the Pulsar proxy gets all the information it requires from ZooKeeper. When starting the proxy on a machine, you only need to provide ZooKeeper connection strings for the cluster-specific and instance-wide configuration store clusters. Here's an example: - -```bash - -$ bin/pulsar proxy \ - --zookeeper-servers zk-0,zk-1,zk-2 \ - --configuration-store-servers zk-0,zk-1,zk-2 - -``` - -> #### Pulsar proxy docs -> For documentation on using the Pulsar proxy, see the [Pulsar proxy admin documentation](administration-proxy.md). - - -Some important things to know about the Pulsar proxy: - -* Connecting clients don't need to provide *any* specific configuration to use the Pulsar proxy. You won't need to update the client configuration for existing applications beyond updating the IP used for the service URL (for example if you're running a load balancer over the Pulsar proxy). -* [TLS encryption](security-tls-transport.md) and [authentication](security-tls-authentication.md) is supported by the Pulsar proxy - -## Service discovery - -[Clients](getting-started-clients.md) connecting to Pulsar brokers need to be able to communicate with an entire Pulsar instance using a single URL. - -You can use your own service discovery system if you'd like. If you use your own system, there is just one requirement: when a client performs an HTTP request to an endpoint, such as `http://pulsar.us-west.example.com:8080`, the client needs to be redirected to *some* active broker in the desired cluster, whether via DNS, an HTTP or IP redirect, or some other means. - -The diagram below illustrates Pulsar service discovery: - -![alt-text](/assets/pulsar-service-discovery.png) - -In this diagram, the Pulsar cluster is addressable via a single DNS name: `pulsar-cluster.acme.com`. A [Python client](client-libraries-python.md), for example, could access this Pulsar cluster like this: - -```python - -from pulsar import Client - -client = Client('pulsar://pulsar-cluster.acme.com:6650') - -``` - -:::note - -In Pulsar, each topic is handled by only one broker. Initial requests from a client to read, update or delete a topic are sent to a broker that may not be the topic owner. If the broker cannot handle the request for this topic, it redirects the request to the appropriate broker. 
- -::: - diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/concepts-authentication.md b/site2/website/versioned_docs/version-2.9.1-deprecated/concepts-authentication.md deleted file mode 100644 index f6307890c904a7..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/concepts-authentication.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -id: concepts-authentication -title: Authentication and Authorization -sidebar_label: "Authentication and Authorization" -original_id: concepts-authentication ---- - -Pulsar supports a pluggable [authentication](security-overview.md) mechanism which can be configured at the proxy and/or the broker. Pulsar also supports a pluggable [authorization](security-authorization.md) mechanism. These mechanisms work together to identify the client and its access rights on topics, namespaces and tenants. - diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/concepts-clients.md b/site2/website/versioned_docs/version-2.9.1-deprecated/concepts-clients.md deleted file mode 100644 index 4040624f7d6366..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/concepts-clients.md +++ /dev/null @@ -1,92 +0,0 @@ ---- -id: concepts-clients -title: Pulsar Clients -sidebar_label: "Clients" -original_id: concepts-clients ---- - -Pulsar exposes a client API with language bindings for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python.md), [C++](client-libraries-cpp.md) and [C#](client-libraries-dotnet.md). The client API optimizes and encapsulates Pulsar's client-broker communication protocol and exposes a simple and intuitive API for use by applications. - -Under the hood, the current official Pulsar client libraries support transparent reconnection and/or connection failover to brokers, queuing of messages until acknowledged by the broker, and heuristics such as connection retries with backoff. - -> **Custom client libraries** -> If you'd like to create your own client library, we recommend consulting the documentation on Pulsar's custom [binary protocol](developing-binary-protocol.md). - - -## Client setup phase - -Before an application creates a producer/consumer, the Pulsar client library needs to initiate a setup phase including two steps: - -1. The client attempts to determine the owner of the topic by sending an HTTP lookup request to the broker. The request could reach one of the active brokers which, by looking at the (cached) zookeeper metadata knows who is serving the topic or, in case nobody is serving it, tries to assign it to the least loaded broker. -1. Once the client library has the broker address, it creates a TCP connection (or reuse an existing connection from the pool) and authenticates it. Within this connection, client and broker exchange binary commands from a custom protocol. At this point the client sends a command to create producer/consumer to the broker, which will comply after having validated the authorization policy. - -Whenever the TCP connection breaks, the client immediately re-initiates this setup phase and keeps trying with exponential backoff to re-establish the producer or consumer until the operation succeeds. - -## Reader interface - -In Pulsar, the "standard" [consumer interface](concepts-messaging.md#consumers) involves using consumers to listen on [topics](reference-terminology.md#topic), process incoming messages, and finally acknowledge those messages when they are processed. 
Whenever a new subscription is created, it is initially positioned at the end of the topic (by default), and consumers associated with that subscription begin reading from the first message created afterwards. Whenever a consumer connects to a topic using a pre-existing subscription, it begins reading from the earliest unacknowledged message within that subscription. In summary, with the consumer interface, subscription cursors are automatically managed by Pulsar in response to [message acknowledgements](concepts-messaging.md#acknowledgement).

The **reader interface** for Pulsar enables applications to manually manage cursors. When you use a reader to connect to a topic---rather than a consumer---you need to specify *which* message the reader begins reading from when it connects to a topic. When connecting to a topic, the reader interface enables you to begin with:

* The **earliest** available message in the topic
* The **latest** available message in the topic
* Some other message between the earliest and the latest. If you select this option, you'll need to explicitly provide a message ID. Your application will be responsible for "knowing" this message ID in advance, perhaps fetching it from a persistent data store or cache.

The reader interface is helpful for use cases like using Pulsar to provide effectively-once processing semantics for a stream processing system. For this use case, it's essential that the stream processing system be able to "rewind" topics to a specific message and begin reading there. The reader interface provides Pulsar clients with the low-level abstraction necessary to "manually position" themselves within a topic.

Internally, the reader interface is implemented as a consumer using an exclusive, non-durable subscription to the topic with a randomly-allocated name.

[ **IMPORTANT** ]

Unlike subscriptions and consumers, readers are non-durable in nature and do not prevent data in a topic from being deleted. It is therefore ***strongly*** advised that [data retention](cookbooks-retention-expiry.md) be configured. If data retention for a topic is not configured for an adequate amount of time, messages that the reader has not yet read might be deleted, causing the reader to effectively skip those messages. Configuring data retention for a topic guarantees that the reader has a certain amount of time to read each message.

Please also note that a reader can have a "backlog", but the metric is only used for users to know how far behind the reader is. The metric is not considered for any backlog quota calculations. 
- -![The Pulsar consumer and reader interfaces](/assets/pulsar-reader-consumer-interfaces.png) - -Here's a Java example that begins reading from the earliest available message on a topic: - -```java - -import org.apache.pulsar.client.api.Message; -import org.apache.pulsar.client.api.MessageId; -import org.apache.pulsar.client.api.Reader; - -// Create a reader on a topic and for a specific message (and onward) -Reader reader = pulsarClient.newReader() - .topic("reader-api-test") - .startMessageId(MessageId.earliest) - .create(); - -while (true) { - Message message = reader.readNext(); - - // Process the message -} - -``` - -To create a reader that reads from the latest available message: - -```java - -Reader reader = pulsarClient.newReader() - .topic(topic) - .startMessageId(MessageId.latest) - .create(); - -``` - -To create a reader that reads from some message between the earliest and the latest: - -```java - -byte[] msgIdBytes = // Some byte array -MessageId id = MessageId.fromByteArray(msgIdBytes); -Reader reader = pulsarClient.newReader() - .topic(topic) - .startMessageId(id) - .create(); - -``` - diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/concepts-messaging.md b/site2/website/versioned_docs/version-2.9.1-deprecated/concepts-messaging.md deleted file mode 100644 index dbe540c5df8e24..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/concepts-messaging.md +++ /dev/null @@ -1,714 +0,0 @@ ---- -id: concepts-messaging -title: Messaging -sidebar_label: "Messaging" -original_id: concepts-messaging ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -Pulsar is built on the [publish-subscribe](https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern) pattern (often abbreviated to pub-sub). In this pattern, [producers](#producers) publish messages to [topics](#topics); [consumers](#consumers) [subscribe](#subscription-modes) to those topics, process incoming messages, and send [acknowledgements](#acknowledgement) to the broker when processing is finished. - -When a subscription is created, Pulsar [retains](concepts-architecture-overview.md#persistent-storage) all messages, even if the consumer is disconnected. The retained messages are discarded only when a consumer acknowledges that all these messages are processed successfully. - -If the consumption of a message fails and you want this message to be consumed again, then you can enable the automatic redelivery of this message by sending a [negative acknowledgement](#negative-acknowledgement) to the broker or enabling the [acknowledgement timeout](#acknowledgement-timeout) for unacknowledged messages. - -## Messages - -Messages are the basic "unit" of Pulsar. The following table lists the components of messages. - -Component | Description -:---------|:------- -Value / data payload | The data carried by the message. All Pulsar messages contain raw bytes, although message data can also conform to data [schemas](schema-get-started.md). -Key | Messages are optionally tagged with keys, which is useful for things like [topic compaction](concepts-topic-compaction.md). -Properties | An optional key/value map of user-defined properties. -Producer name | The name of the producer who produces the message. If you do not specify a producer name, the default name is used. -Sequence ID | Each Pulsar message belongs to an ordered sequence on its topic. The sequence ID of the message is its order in that sequence. 
-
Publish time | The timestamp of when the message is published. The timestamp is automatically applied by the producer.
Event time | An optional timestamp attached to a message by the application, for example, the time at which the message was processed. If no event time is set, the value is `0`.
TypedMessageBuilder | It is used to construct a message. You can set message properties such as the message key and message value with `TypedMessageBuilder`.
    When you use `TypedMessageBuilder`, set the key as a string. If you set the key as another type, for example, an AVRO object, the key is sent as bytes, and it is difficult to get the AVRO object back on the consumer side.

By default, the maximum size of a message is 5 MB. You can configure the maximum size of a message with the following configurations.

- In the `broker.conf` file.

  ```bash

  # The max size of a message (in bytes).
  maxMessageSize=5242880

  ```

- In the `bookkeeper.conf` file.

  ```bash

  # The max size of the netty frame (in bytes). Any messages received larger than this value are rejected. The default value is 5 MB.
  nettyMaxFrameSizeBytes=5253120

  ```

> For more information on Pulsar messages, see Pulsar [binary protocol](developing-binary-protocol.md).

## Producers

A producer is a process that attaches to a topic and publishes messages to a Pulsar [broker](reference-terminology.md#broker). The Pulsar broker processes the messages.

### Send modes

Producers send messages to brokers synchronously (sync) or asynchronously (async).

| Mode       | Description |
|:-----------|-----------|
| Sync send | The producer waits for an acknowledgement from the broker after sending every message. If the acknowledgement is not received, the producer treats the sending operation as a failure. |
| Async send | The producer puts a message in a blocking queue and returns immediately. The client library sends the message to the broker in the background. If the queue is full (you can [configure](reference-configuration.md#broker) the maximum size), the producer is blocked or fails immediately when calling the API, depending on arguments passed to the producer. |

### Access mode

You can have different types of access modes on topics for producers.

|Access mode | Description
|---|---
`Shared`|Multiple producers can publish on a topic.

    This is the **default** setting. -`Exclusive`|Only one producer can publish on a topic.

    If there is already a producer connected, other producers trying to publish on this topic get errors immediately.

    The "old" producer is evicted and a "new" producer is selected to be the next exclusive producer if the "old" producer experiences a network partition with the broker. -`WaitForExclusive`|If there is already a producer connected, the producer creation is pending (rather than timing out) until the producer gets the `Exclusive` access.

    The producer that succeeds in becoming the exclusive one is treated as the leader. Consequently, if you want to implement a leader election scheme for your application, you can use this access mode. -

:::note

Once an application creates a producer with `Exclusive` or `WaitForExclusive` access mode successfully, the instance of this application is guaranteed to be the **only writer** to the topic. Any other producers trying to produce messages on this topic will either get errors immediately or have to wait until they get the `Exclusive` access.
For more information, see [PIP 68: Exclusive Producer](https://github.com/apache/pulsar/wiki/PIP-68:-Exclusive-Producer).

:::

You can set the producer access mode through the Java client API. For more information, see `ProducerAccessMode` in the [ProducerBuilder.java](https://github.com/apache/pulsar/blob/fc5768ca3bbf92815d142fe30e6bfad70a1b4fc6/pulsar-client-api/src/main/java/org/apache/pulsar/client/api/ProducerBuilder.java) file.


### Compression

You can compress messages published by producers during transportation. Pulsar currently supports the following types of compression:

* [LZ4](https://github.com/lz4/lz4)
* [ZLIB](https://zlib.net/)
* [ZSTD](https://facebook.github.io/zstd/)
* [SNAPPY](https://google.github.io/snappy/)

### Batching

When batching is enabled, the producer accumulates and sends a batch of messages in a single request. The batch size is defined by the maximum number of messages and the maximum publish latency. Therefore, the backlog size represents the total number of batches instead of the total number of messages.

In Pulsar, batches are tracked and stored as single units rather than as individual messages. The consumer unbundles a batch into individual messages. However, scheduled messages (configured through the `deliverAt` or the `deliverAfter` parameter) are always sent as individual messages, even if batching is enabled.

In general, a batch is acknowledged when all of its messages are acknowledged by a consumer. This means that when **not all** messages in a batch are acknowledged, unexpected failures, negative acknowledgements, or acknowledgement timeouts can result in a redelivery of all messages in this batch.

To avoid redelivering acknowledged messages in a batch to the consumer, Pulsar has supported batch index acknowledgement since 2.6.0. When batch index acknowledgement is enabled, the consumer filters out the batch indexes that have been acknowledged and sends the batch index acknowledgement request to the broker. The broker maintains the batch index acknowledgement status and tracks the acknowledgement status of each batch index to avoid dispatching acknowledged messages to the consumer. The batch is deleted when all indexes of the messages in it are acknowledged.

By default, batch index acknowledgement is disabled (`acknowledgmentAtBatchIndexLevelEnabled=false`). You can enable batch index acknowledgement by setting the `acknowledgmentAtBatchIndexLevelEnabled` parameter to `true` at the broker side. Enabling batch index acknowledgement results in more memory overhead.

### Chunking
Before you enable chunking, read the following instructions.
- Batching and chunking cannot be enabled simultaneously. To enable chunking, you must disable batching in advance.
- Chunking is only supported for persistent topics.
- Chunking is only supported for the exclusive and failover subscription modes. (A producer configuration sketch follows this list.) 
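
To make these prerequisites concrete, here is a minimal, hedged sketch of a chunking-enabled producer in Java. It assumes an existing `client` (a `PulsarClient`), the topic name is a placeholder, and batching is disabled explicitly, as chunking requires.

```java

// Sketch: enable chunking for large messages; batching must be disabled.
Producer<byte[]> producer = client.newProducer()
        .topic("persistent://public/default/large-payloads") // placeholder topic
        .enableChunking(true)
        .enableBatching(false)
        .create();

```
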
-

When chunking is enabled (`chunkingEnabled=true`), if the message size is greater than the allowed maximum publish-payload size, the producer splits the original message into chunked messages and publishes them with chunked metadata to the broker separately and in order. On the broker side, the chunked messages are stored in the managed-ledger in the same way as ordinary messages. The only difference is that the consumer needs to buffer the chunked messages and combine them into the real message when all chunked messages have been collected. The chunked messages in the managed-ledger can be interwoven with ordinary messages. If the producer fails to publish all the chunks of a message, the consumer can expire the incomplete chunks if it does not receive all the chunks within the expiry time. By default, the expiry time is set to one minute.

The consumer consumes the chunked messages and buffers them until it receives all the chunks of a message. Then the consumer stitches the chunked messages together and places them into the receiver queue. Clients consume messages from the receiver queue. Once the consumer consumes the entire large message and acknowledges it, the consumer internally sends an acknowledgement for all the chunk messages associated with that large message. You can set the `maxPendingChunkedMessage` parameter on the consumer. When the threshold is reached, the consumer drops the pending chunked messages by silently acknowledging them or asking the broker to redeliver them later by marking them unacknowledged.

The broker does not require any changes to support chunking for non-shared subscriptions. The broker only uses `chunkedMessageRate` to record the chunked message rate on the topic.

#### Handle chunked messages with one producer and one ordered consumer

As shown in the following figure, a topic can have one producer that publishes a large message payload as chunked messages along with regular non-chunked messages. The producer publishes message M1 in three chunks: M1-C1, M1-C2 and M1-C3. The broker stores all three chunked messages in the managed-ledger and dispatches them to the ordered (exclusive/failover) consumer in the same order. The consumer buffers all the chunked messages in memory until it receives all the chunked messages, combines them into one message, and then hands over the original message M1 to the client.

![](/assets/chunking-01.png)

#### Handle chunked messages with multiple producers and one ordered consumer

When multiple publishers publish chunked messages into a single topic, the broker stores all the chunked messages coming from different publishers in the same managed-ledger. As shown below, Producer 1 publishes message M1 in three chunks M1-C1, M1-C2 and M1-C3. Producer 2 publishes message M2 in three chunks M2-C1, M2-C2 and M2-C3. All chunked messages of a specific message are still in order but might not be consecutive in the managed-ledger. This brings some memory pressure to the consumer because the consumer keeps a separate buffer for each large message to aggregate all of its chunks and combine them into one message.

![](/assets/chunking-02.png)

## Consumers

A consumer is a process that attaches to a topic via a subscription and then receives messages.

A consumer sends a [flow permit request](developing-binary-protocol.md#flow-control) to a broker to get messages. There is a queue at the consumer side to receive messages pushed from the broker. 
You can configure the queue size with the [`receiverQueueSize`](client-libraries-java.md#configure-consumer) parameter. The default size is `1000`. Each time `consumer.receive()` is called, a message is dequeued from the buffer.

### Receive modes

Messages are received from [brokers](reference-terminology.md#broker) either synchronously (sync) or asynchronously (async).

| Mode          | Description |
|:--------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Sync receive  | A sync receive is blocked until a message is available. |
| Async receive | An async receive returns immediately with a future value (for example, a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture) in Java) that completes once a new message is available. |

### Listeners

Client libraries provide listener implementations for consumers. For example, the [Java client](client-libraries-java.md) provides a {@inject: javadoc:MessageListener:/client/org/apache/pulsar/client/api/MessageListener} interface. In this interface, the `received` method is called whenever a new message is received.

### Acknowledgement

The consumer sends an acknowledgement request to the broker after it consumes a message successfully. The consumed message then remains stored and is deleted only after all the subscriptions have acknowledged it. If you want to store the messages that have been acknowledged by a consumer, you need to configure the [message retention policy](concepts-messaging.md#message-retention-and-expiry).

For batch messages, you can enable batch index acknowledgement to avoid dispatching acknowledged messages to the consumer. For details about batch index acknowledgement, see [batching](#batching).

Messages can be acknowledged in one of the following two ways:

- Being acknowledged individually. With individual acknowledgement, the consumer acknowledges each message and sends an acknowledgement request to the broker.
- Being acknowledged cumulatively. With cumulative acknowledgement, the consumer **only** acknowledges the last message it received. All messages in the stream up to (and including) the provided message are not redelivered to that consumer.

If you want to acknowledge messages individually, you can use the following API.

```java

consumer.acknowledge(msg);

```

If you want to acknowledge messages cumulatively, you can use the following API.

```java

consumer.acknowledgeCumulative(msg);

```

:::note

Cumulative acknowledgement cannot be used in the [shared subscription mode](#subscription-modes), because the shared subscription mode involves multiple consumers who have access to the same subscription. In the shared subscription mode, messages are acknowledged individually.

:::

### Negative acknowledgement

When a consumer fails to consume a message and intends to consume it again, the consumer should send a negative acknowledgement to the broker. The broker then redelivers the message to the consumer.

Messages are negatively acknowledged individually or cumulatively, depending on the consumption subscription mode.

In the exclusive and failover subscription modes, consumers only negatively acknowledge the last message they receive.

In the shared and Key_Shared subscription modes, consumers can negatively acknowledge messages individually. 
-

Be aware that negative acknowledgements on ordered subscription types, such as Exclusive, Failover and Key_Shared, might cause failed messages to be sent to consumers out of the original order.

If you want to acknowledge messages negatively, you can use the following API.

```java

// Calling this API negatively acknowledges the message
consumer.negativeAcknowledge(msg);

```

:::note

If batching is enabled, all messages in one batch are redelivered to the consumer.

:::

### Acknowledgement timeout

If a message is not consumed successfully, and you want the broker to redeliver this message automatically, then you can enable the automatic redelivery mechanism for unacknowledged messages. With automatic redelivery enabled, the client tracks the unacknowledged messages within the entire `ackTimeout` time range, and automatically sends a `redeliver unacknowledged messages` request to the broker when the acknowledgement timeout expires.

:::note

- If batching is enabled, all messages in one batch are redelivered to the consumer.
- The negative acknowledgement is preferable over the acknowledgement timeout, since negative acknowledgement controls the redelivery of individual messages more precisely and avoids invalid redeliveries when the message processing time exceeds the acknowledgement timeout.

:::

### Dead letter topic

A dead letter topic enables you to consume new messages when some messages cannot be consumed successfully by a consumer. In this mechanism, messages that fail to be consumed are stored in a separate topic, which is called the dead letter topic. You can decide how to handle messages in the dead letter topic.

The following example shows how to enable a dead letter topic in a Java client using the default dead letter topic:

```java

Consumer<byte[]> consumer = pulsarClient.newConsumer(Schema.BYTES)
              .topic(topic)
              .subscriptionName("my-subscription")
              .subscriptionType(SubscriptionType.Shared)
              .deadLetterPolicy(DeadLetterPolicy.builder()
                    .maxRedeliverCount(maxRedeliveryCount)
                    .build())
              .subscribe();

```

The default dead letter topic uses this format:

```

<topicname>-<subscriptionname>-DLQ

```

If you want to specify the name of the dead letter topic, use this Java client example:

```java

Consumer<byte[]> consumer = pulsarClient.newConsumer(Schema.BYTES)
              .topic(topic)
              .subscriptionName("my-subscription")
              .subscriptionType(SubscriptionType.Shared)
              .deadLetterPolicy(DeadLetterPolicy.builder()
                    .maxRedeliverCount(maxRedeliveryCount)
                    .deadLetterTopic("your-topic-name")
                    .build())
              .subscribe();

```

The dead letter topic depends on message redelivery. Messages are redelivered either due to [acknowledgement timeout](#acknowledgement-timeout) or [negative acknowledgement](#negative-acknowledgement). If you are going to use negative acknowledgement on a message, make sure it is negatively acknowledged before the acknowledgement timeout.

:::note

Currently, the dead letter topic is enabled only in the Shared and Key_Shared subscription modes.

:::

### Retry letter topic

For many online business systems, a message is re-consumed due to an exception that occurs in the business logic processing. To configure the delay time for re-consuming the failed messages, you can configure the producer to send messages to both the business topic and the retry letter topic, and enable automatic retry on the consumer. 
When automatic retry is enabled on the consumer, a message that is not consumed successfully is stored in the retry letter topic, and the consumer automatically consumes the failed messages from the retry letter topic after a specified delay time.

By default, automatic retry is disabled. You can set `enableRetry` to `true` to enable automatic retry on the consumer.

This example shows how to consume messages from a retry letter topic.

```java

Consumer<byte[]> consumer = pulsarClient.newConsumer(Schema.BYTES)
                .topic(topic)
                .subscriptionName("my-subscription")
                .subscriptionType(SubscriptionType.Shared)
                .enableRetry(true)
                .receiverQueueSize(100)
                .deadLetterPolicy(DeadLetterPolicy.builder()
                        .maxRedeliverCount(maxRedeliveryCount)
                        .retryLetterTopic("persistent://my-property/my-ns/my-subscription-custom-Retry")
                        .build())
                .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest)
                .subscribe();

```

If you want to put messages into a retry queue, you can use the following API.

```java

consumer.reconsumeLater(msg, 3, TimeUnit.SECONDS);

```

## Topics

As in other pub-sub systems, topics in Pulsar are named channels for transmitting messages from producers to consumers. Topic names are URLs that have a well-defined structure:

```http

{persistent|non-persistent}://tenant/namespace/topic

```

Topic name component | Description
:--------------------|:-----------
`persistent` / `non-persistent` | This identifies the type of topic. Pulsar supports two kinds of topics: [persistent](concepts-architecture-overview.md#persistent-storage) and [non-persistent](#non-persistent-topics). The default is persistent, so if you do not specify a type, the topic is persistent. With persistent topics, all messages are durably persisted on disks (if the broker is not standalone, messages are durably persisted on multiple disks), whereas data for non-persistent topics is not persisted to storage disks.
`tenant` | The topic tenant within the instance. Tenants are essential to multi-tenancy in Pulsar, and are spread across clusters.
`namespace` | The administrative unit of the topic, which acts as a grouping mechanism for related topics. Most topic configuration is performed at the [namespace](#namespaces) level. Each tenant has one or multiple namespaces.
`topic` | The final part of the name. Topic names have no special meaning in a Pulsar instance.

> **No need to explicitly create new topics**
> You do not need to explicitly create topics in Pulsar. If a client attempts to write or receive messages to/from a topic that does not yet exist, Pulsar creates that topic under the namespace provided in the [topic name](#topics) automatically.
> If no tenant or namespace is specified when a client creates a topic, the topic is created in the default tenant and namespace. You can also create a topic in a specified tenant and namespace, such as `persistent://my-tenant/my-namespace/my-topic`. `persistent://my-tenant/my-namespace/my-topic` means the `my-topic` topic is created in the `my-namespace` namespace of the `my-tenant` tenant.

## Namespaces

A namespace is a logical nomenclature within a tenant. A tenant can create multiple namespaces via the [admin API](admin-api-namespaces.md#create). For instance, a tenant with different applications can create a separate namespace for each application. A namespace allows the application to create and manage a hierarchy of topics. 
For example, `my-tenant/app1` is the namespace for the application `app1` of the tenant `my-tenant`. You can create any number of [topics](#topics) under the namespace.

## Subscriptions

A subscription is a named configuration rule that determines how messages are delivered to consumers. Four subscription modes are available in Pulsar: [exclusive](#exclusive), [shared](#shared), [failover](#failover), and [key_shared](#key_shared). These modes are illustrated in the figure below.

![Subscription modes](/assets/pulsar-subscription-types.png)

> **Pub-Sub or Queuing**
> In Pulsar, you can use different subscriptions flexibly.
> * If you want to achieve traditional "fan-out pub-sub messaging" among consumers, specify a unique subscription name for each consumer. This is the exclusive subscription mode.
> * If you want to achieve "message queuing" among consumers, share the same subscription name among multiple consumers (shared, failover, key_shared).
> * If you want to achieve both effects simultaneously, combine the exclusive subscription mode with other subscription modes for consumers.

### Consumerless Subscriptions and Their Corresponding Modes
When a subscription has no consumers, its subscription mode is undefined. A subscription's mode is defined when a consumer connects to the subscription, and the mode can be changed by restarting all consumers with a different configuration.

### Exclusive

In *exclusive* mode, only a single consumer is allowed to attach to the subscription. If multiple consumers subscribe to a topic using the same subscription, an error occurs.

In the diagram below, only **Consumer A-0** is allowed to consume messages.

> Exclusive mode is the default subscription mode.

![Exclusive subscriptions](/assets/pulsar-exclusive-subscriptions.png)

### Failover

In *failover* mode, multiple consumers can attach to the same subscription. A master consumer is picked for a non-partitioned topic, or for each partition of a partitioned topic, and receives messages. When the master consumer disconnects, all (non-acknowledged and subsequent) messages are delivered to the next consumer in line.

For partitioned topics, the broker sorts consumers by priority level and, within the same priority level, by the lexicographical order of consumer names. The broker then tries to evenly assign partitions to the consumers with the highest priority level.

For a non-partitioned topic, the broker picks consumers in the order in which they subscribe to the topic.

In the diagram below, **Consumer-B-0** is the master consumer while **Consumer-B-1** would be the next consumer in line to receive messages if **Consumer-B-0** is disconnected.

![Failover subscriptions](/assets/pulsar-failover-subscriptions.png)

### Shared

In *shared* or *round robin* mode, multiple consumers can attach to the same subscription. Messages are delivered in a round-robin distribution across consumers, and any given message is delivered to only one consumer. When a consumer disconnects, all the messages that were sent to it and not acknowledged are rescheduled for sending to the remaining consumers.

In the diagram below, **Consumer-C-1** and **Consumer-C-2** are able to subscribe to the topic, but **Consumer-C-3** and others could as well.

> **Limitations of shared mode**
> When using shared mode, be aware that:
> * Message ordering is not guaranteed.
> * You cannot use cumulative acknowledgment with shared mode. 
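
To make the subscription modes concrete, here is a minimal, hedged Java sketch; the topic and subscription names are placeholders, and `pulsarClient` is assumed to already exist. The mode is chosen per consumer with `subscriptionType`, so running several copies of this consumer with the same subscription name yields queue-style (shared) consumption:

```java

// Sketch: consumers that share the subscription "work-queue" receive
// disjoint subsets of the topic's messages (shared mode).
Consumer<byte[]> consumer = pulsarClient.newConsumer()
        .topic("my-topic")
        .subscriptionName("work-queue")
        .subscriptionType(SubscriptionType.Shared)
        .subscribe();

```
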
-

![Shared subscriptions](/assets/pulsar-shared-subscriptions.png)

### Key_Shared

In *Key_Shared* mode, multiple consumers can attach to the same subscription. Messages are distributed across consumers, and messages with the same key or the same ordering key are delivered to only one consumer. No matter how many times a message is redelivered, it is delivered to the same consumer. When a consumer connects or disconnects, the consumer serving some message keys changes.

![Key_Shared subscriptions](/assets/pulsar-key-shared-subscriptions.png)

Note that when the consumers are using the Key_Shared subscription mode, you need to **disable batching** or **use key-based batching** for the producers. There are two reasons why key-based batching is necessary for the Key_Shared subscription mode:
1. The broker dispatches messages according to the keys of the messages, but the default batching approach might fail to pack the messages with the same key into the same batch.
2. Since it is the consumers instead of the broker who dispatch the messages from the batches, the key of the first message in one batch is considered the key of all messages in this batch, thereby leading to context errors.

Key-based batching resolves the above-mentioned issues. This batching method ensures that the producers pack the messages with the same key into the same batch. The messages without a key are packed into one batch and this batch has no key. When the broker dispatches messages from this batch, it uses `NON_KEY` as the key. In addition, each consumer is associated with **only one** key and should receive **only one message batch** for the connected key. By default, you can limit batching by configuring the number of messages that producers are allowed to send.

Below are examples of enabling key-based batching under the Key_Shared subscription mode, with `client` being the Pulsar client that you created.

````mdx-code-block
<Tabs defaultValue="Java"
  values={[{"label":"Java","value":"Java"},{"label":"C++","value":"C++"},{"label":"Python","value":"Python"}]}>
<TabItem value="Java">

```java

Producer<byte[]> producer = client.newProducer()
        .topic("my-topic")
        .batcherBuilder(BatcherBuilder.KEY_BASED)
        .create();

```

</TabItem>
<TabItem value="C++">

```cpp

ProducerConfiguration producerConfig;
producerConfig.setBatchingType(ProducerConfiguration::BatchingType::KeyBasedBatching);
Producer producer;
client.createProducer("my-topic", producerConfig, producer);

```

</TabItem>
<TabItem value="Python">

```python

producer = client.create_producer(topic='my-topic', batching_type=pulsar.BatchingType.KeyBased)

```

</TabItem>
</Tabs>
````

> **Limitations of Key_Shared mode**
> When you use Key_Shared mode, be aware that:
> * You need to specify a key or orderingKey for messages.
> * You cannot use cumulative acknowledgment with Key_Shared mode.

## Multi-topic subscriptions

When a consumer subscribes to a Pulsar topic, by default it subscribes to one specific topic, such as `persistent://public/default/my-topic`. As of Pulsar version 1.23.0-incubating, however, Pulsar consumers can simultaneously subscribe to multiple topics. You can define a list of topics in two ways:

* On the basis of a [**reg**ular **ex**pression](https://en.wikipedia.org/wiki/Regular_expression) (regex), for example `persistent://public/default/finance-.*`
* By explicitly defining a list of topics

> When subscribing to multiple topics by regex, all topics must be in the same [namespace](#namespaces). 
When subscribing to multiple topics, the Pulsar client automatically makes a call to the Pulsar API to discover the topics that match the regex pattern/list, and then subscribes to all of them. If any of the topics do not exist, the consumer auto-subscribes to them once the topics are created.

> **No ordering guarantees across multiple topics**
> When a producer sends messages to a single topic, all messages are guaranteed to be read from that topic in the same order. However, these guarantees do not hold across multiple topics. So when a producer sends messages to multiple topics, the order in which messages are read from those topics is not guaranteed to be the same.

The following are multi-topic subscription examples for Java.

```java

import java.util.regex.Pattern;

import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.PulsarClient;

PulsarClient pulsarClient = // Instantiate Pulsar client object

// Subscribe to all topics in a namespace
Pattern allTopicsInNamespace = Pattern.compile("persistent://public/default/.*");
Consumer allTopicsConsumer = pulsarClient.newConsumer()
                .topicsPattern(allTopicsInNamespace)
                .subscriptionName("subscription-1")
                .subscribe();

// Subscribe to a subset of topics in a namespace, based on regex
Pattern someTopicsInNamespace = Pattern.compile("persistent://public/default/foo.*");
Consumer someTopicsConsumer = pulsarClient.newConsumer()
                .topicsPattern(someTopicsInNamespace)
                .subscriptionName("subscription-1")
                .subscribe();

```

For code examples, see [Java](client-libraries-java.md#multi-topic-subscriptions).

## Partitioned topics

Normal topics are served only by a single broker, which limits the maximum throughput of the topic. *Partitioned topics* are a special type of topic that are handled by multiple brokers, thus allowing for higher throughput.

A partitioned topic is actually implemented as N internal topics, where N is the number of partitions. When publishing messages to a partitioned topic, each message is routed to one of several brokers. The distribution of partitions across brokers is handled automatically by Pulsar.

The diagram below illustrates this:

![](/assets/partitioning.png)

The **Topic1** topic has five partitions (**P0** through **P4**) split across three brokers. Because there are more partitions than brokers, two brokers handle two partitions apiece, while the third handles only one (again, Pulsar handles this distribution of partitions automatically).

Messages for this topic are broadcast to two consumers. The [routing mode](#routing-modes) determines which partition each message is published to, while the [subscription mode](#subscription-modes) determines which messages go to which consumers.

Decisions about routing and subscription modes can be made separately in most cases. In general, throughput concerns should guide partitioning/routing decisions, while subscription decisions should be guided by application semantics.

There is no difference between partitioned topics and normal topics in terms of how subscription modes work, as partitioning only determines what happens between when a message is published by a producer and processed and acknowledged by a consumer.

Partitioned topics need to be explicitly created via the [admin API](admin-api-overview.md). The number of partitions can be specified when creating the topic.

### Routing modes

When publishing to partitioned topics, you must specify a *routing mode*. 
The routing mode determines which partition---that is, which internal topic---each message should be published to.

There are three {@inject: javadoc:MessageRoutingMode:/client/org/apache/pulsar/client/api/MessageRoutingMode} options available:

Mode | Description
:--------|:------------
`RoundRobinPartition` | If no key is provided, the producer publishes messages across all partitions in round-robin fashion to achieve maximum throughput. Note that round-robin is not done per individual message; instead, the same partition is kept for the duration of the batching delay, to ensure batching is effective. If a key is specified on the message, the partitioned producer hashes the key and assigns the message to a particular partition. This is the default mode.
`SinglePartition` | If no key is provided, the producer randomly picks one single partition and publishes all the messages into that partition. If a key is specified on the message, the partitioned producer hashes the key and assigns the message to a particular partition.
`CustomPartition` | Use a custom message router implementation that is called to determine the partition for a particular message. Users can create a custom routing mode by using the [Java client](client-libraries-java.md) and implementing the {@inject: javadoc:MessageRouter:/client/org/apache/pulsar/client/api/MessageRouter} interface.

### Ordering guarantee

The ordering of messages is related to the MessageRoutingMode and the message key. Usually, users want a per-key-partition ordering guarantee.

If a key is attached to a message, the message is routed to the corresponding partition based on the hashing scheme specified by {@inject: javadoc:HashingScheme:/client/org/apache/pulsar/client/api/HashingScheme} in {@inject: javadoc:ProducerBuilder:/client/org/apache/pulsar/client/api/ProducerBuilder}, when using either the `SinglePartition` or the `RoundRobinPartition` mode.

Ordering guarantee | Description | Routing Mode and Key
:------------------|:------------|:------------
Per-key-partition | All the messages with the same key are in order and placed in the same partition. | Use either the `SinglePartition` or the `RoundRobinPartition` mode, and a key is provided for each message.
Per-producer | All the messages from the same producer are in order. | Use the `SinglePartition` mode, and no key is provided for each message.

### Hashing scheme

{@inject: javadoc:HashingScheme:/client/org/apache/pulsar/client/api/HashingScheme} is an enum that represents the set of standard hashing functions available when choosing the partition for a particular message.

There are two standard hashing functions available: `JavaStringHash` and `Murmur3_32Hash`.
The default hashing function for a producer is `JavaStringHash`.
Note that `JavaStringHash` is not useful when producers come from multiple language clients; in this case, it is recommended to use `Murmur3_32Hash`.



## Non-persistent topics


By default, Pulsar persistently stores *all* unacknowledged messages on multiple [BookKeeper](concepts-architecture-overview.md#persistent-storage) bookies (storage nodes). Data for messages on persistent topics can thus survive broker restarts and subscriber failover.

Pulsar also, however, supports **non-persistent topics**, which are topics on which messages are *never* persisted to disk and live only in memory. 
When using non-persistent delivery, killing a Pulsar broker or disconnecting a subscriber to a topic means that all in-transit messages are lost on that (non-persistent) topic, meaning that clients may see message loss.

Non-persistent topics have names of this form (note the `non-persistent` in the name):

```http

non-persistent://tenant/namespace/topic

```

> For more info on using non-persistent topics, see the [Non-persistent messaging cookbook](cookbooks-non-persistent.md).

In non-persistent topics, brokers immediately deliver messages to all connected subscribers *without persisting them* in [BookKeeper](concepts-architecture-overview.md#persistent-storage). If a subscriber is disconnected, the broker will not be able to deliver those in-transit messages, and subscribers will never be able to receive those messages again. Eliminating the persistent storage step makes messaging on non-persistent topics slightly faster than on persistent topics in some cases, but with the caveat that some of the core benefits of Pulsar are lost.

> With non-persistent topics, message data lives only in memory. If a message broker fails or message data can otherwise not be retrieved from memory, your message data may be lost. Use non-persistent topics only if you're *certain* that your use case requires it and can sustain it.

By default, non-persistent topics are enabled on Pulsar brokers. You can disable them in the broker's [configuration](reference-configuration.md#broker-enableNonPersistentTopics). You can manage non-persistent topics using the `pulsar-admin topics` command. For more information, see [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/).

### Performance

Non-persistent messaging is usually faster than persistent messaging because brokers don't persist messages and immediately send acks back to the producer as soon as that message is delivered to connected brokers. Producers thus see comparatively low publish latency with non-persistent topics.

### Client API

Producers and consumers can connect to non-persistent topics in the same way as persistent topics, with the crucial difference that the topic name must start with `non-persistent`. All three subscription modes---[exclusive](#exclusive), [shared](#shared), and [failover](#failover)---are supported for non-persistent topics.

Here's an example [Java consumer](client-libraries-java.md#consumers) for a non-persistent topic:

```java

PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650")
        .build();
String npTopic = "non-persistent://public/default/my-topic";
String subscriptionName = "my-subscription-name";

Consumer<byte[]> consumer = client.newConsumer()
        .topic(npTopic)
        .subscriptionName(subscriptionName)
        .subscribe();

```

Here's an example [Java producer](client-libraries-java.md#producer) for the same non-persistent topic:

```java

Producer<byte[]> producer = client.newProducer()
        .topic(npTopic)
        .create();

```


## System topic

A system topic is a predefined topic for internal use within Pulsar. It can be either a persistent or a non-persistent topic.

System topics are used to implement certain features, such as transactions, heartbeat detection, topic-level policies, and resource group services, without depending on third-party components. System topics make the implementation of these features simpler, more self-contained, and more flexible. 
Take heartbeat detection for example: you can leverage the system topic for health checks to internally enable a producer/reader to produce/consume messages under the heartbeat namespace, which can detect whether the current service is still alive.

There are diverse system topics depending on namespaces. The following table outlines the available system topics for each specific namespace.

| Namespace | TopicName | Domain | Count | Usage |
|-----------|-----------|--------|-------|-------|
| pulsar/system | `transaction_coordinator_assign_${id}` | Persistent | Default 16 | Transaction coordinator |
| pulsar/system | `_transaction_log${tc_id}` | Persistent | Default 16 | Transaction log |
| pulsar/system | `resource-usage` | Non-persistent | Default 4 | Resource group service |
| host/port | `heartbeat` | Persistent | 1 | Heartbeat detection |
| User-defined-ns | [`__change_events`](concepts-multi-tenancy.md#namespace-change-events-and-topic-level-policies) | Persistent | Default 4 | Topic events |
| User-defined-ns | `__transaction_buffer_snapshot` | Persistent | One per namespace | Transaction buffer snapshots |
| User-defined-ns | `${topicName}__transaction_pending_ack` | Persistent | One per every topic subscription acknowledged with transactions | Acknowledgements with transactions |

:::note

* You cannot create any system topics.
* By default, system topics are disabled. To enable system topics, you need to change the following configurations in the `conf/broker.conf` or `conf/standalone.conf` file.

  ```conf
  systemTopicEnabled=true
  topicLevelPoliciesEnabled=true
  ```

:::


## Message retention and expiry

By default, Pulsar message brokers:

* immediately delete *all* messages that have been acknowledged by a consumer, and
* [persistently store](concepts-architecture-overview.md#persistent-storage) all unacknowledged messages in a message backlog.

Pulsar has two features, however, that enable you to override this default behavior:

* Message **retention** enables you to store messages that have been acknowledged by a consumer
* Message **expiry** enables you to set a time to live (TTL) for messages that have not yet been acknowledged

> All message retention and expiry is managed at the [namespace](#namespaces) level. For a how-to, see the [Message retention and expiry](cookbooks-retention-expiry.md) cookbook.

The diagram below illustrates both concepts:

![Message retention and expiry](/assets/retention-expiry.png)

With message retention, shown at the top, a retention policy applied to all topics in a namespace dictates that some messages are durably stored in Pulsar even though they've already been acknowledged. Acknowledged messages that are not covered by the retention policy are deleted. Without a retention policy, *all* of the acknowledged messages would be deleted.

With message expiry, shown at the bottom, some messages are deleted, even though they haven't been acknowledged, because they've expired according to the TTL applied to the namespace (for example because a TTL of 5 minutes has been applied and the messages haven't been acknowledged but are 10 minutes old).

## Message deduplication

Message duplication occurs when a message is [persisted](concepts-architecture-overview.md#persistent-storage) by Pulsar more than once. Message deduplication is an optional Pulsar feature that prevents unnecessary message duplication by processing each message only once, even if the message is received more than once. 
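
Deduplication is enabled through administrative configuration rather than client code. As a hedged sketch using the Java admin client (the service URL and namespace below are placeholders), you might enable it for a namespace like this:

```java

import org.apache.pulsar.client.admin.PulsarAdmin;

PulsarAdmin admin = PulsarAdmin.builder()
        .serviceHttpUrl("http://localhost:8080") // placeholder admin URL
        .build();

// Ask brokers to discard duplicate messages for every topic in this namespace.
admin.namespaces().setDeduplicationStatus("public/default", true);

```
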
-

The following diagram illustrates what happens when message deduplication is disabled vs. enabled:

![Pulsar message deduplication](/assets/message-deduplication.png)


Message deduplication is disabled in the scenario shown at the top. Here, a producer publishes message 1 on a topic; the message reaches a Pulsar broker and is [persisted](concepts-architecture-overview.md#persistent-storage) to BookKeeper. The producer then sends message 1 again (in this case due to some retry logic), and the message is received by the broker and stored in BookKeeper again, which means that duplication has occurred.

In the second scenario at the bottom, the producer publishes message 1, which is received by the broker and persisted, as in the first scenario. When the producer attempts to publish the message again, however, the broker knows that it has already seen message 1 and thus does not persist the message.

> Message deduplication is handled at the namespace level or the topic level. For more instructions, see the [message deduplication cookbook](cookbooks-deduplication.md).


### Producer idempotency

The other available approach to message deduplication is to ensure that each message is *only produced once*. This approach is typically called **producer idempotency**. The drawback of this approach is that it defers the work of message deduplication to the application. In Pulsar, by contrast, deduplication is handled at the [broker](reference-terminology.md#broker) level, so you do not need to modify your Pulsar client code. Instead, you only need to make administrative changes. For details, see [Managing message deduplication](cookbooks-deduplication.md).

### Deduplication and effectively-once semantics

Message deduplication makes Pulsar an ideal messaging system to be used in conjunction with stream processing engines (SPEs) and other systems seeking to provide effectively-once processing semantics. Messaging systems that do not offer automatic message deduplication require the SPE or other system to guarantee deduplication, which means that strict message ordering comes at the cost of burdening the application with the responsibility of deduplication. With Pulsar, strict ordering guarantees come at no application-level cost.

> You can find more in-depth information in [this post](https://www.splunk.com/en_us/blog/it/exactly-once-is-not-exactly-the-same.html).

## Delayed message delivery
Delayed message delivery enables you to consume a message later rather than immediately. In this mechanism, a message is stored in BookKeeper. After the message is published to a broker, the `DelayedDeliveryTracker` maintains a time index (time -> messageId) in memory, and the message is delivered to a consumer once the specified delay has passed.

Delayed message delivery only works in the Shared subscription mode. In the Exclusive and Failover subscription modes, the delayed message is dispatched immediately.

The diagram below illustrates the concept of delayed message delivery:

![Delayed Message Delivery](/assets/message_delay.png)

A broker saves a message without any check. When a consumer consumes a message, if the message is set to delay, then the message is added to `DelayedDeliveryTracker`. A subscription checks `DelayedDeliveryTracker` and fetches the messages whose delay has expired.

### Broker
Delayed message delivery is enabled by default. You can change it in the broker configuration file as below:

```

# Whether to enable the delayed delivery for messages. 
-# If disabled, messages are immediately delivered and there is no tracking overhead.
delayedDeliveryEnabled=true

# Control the ticking time for the retry of delayed message delivery,
# affecting the accuracy of the delivery time compared to the scheduled time.
# Default is 1 second.
delayedDeliveryTickTimeMillis=1000

```

### Producer
The following is an example of delayed message delivery for a producer in Java:

```java

// Message to be delivered after the configured delay (here, 3 minutes)
producer.newMessage().deliverAfter(3L, TimeUnit.MINUTES).value("Hello Pulsar!").send();

```

 diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/concepts-multi-tenancy.md b/site2/website/versioned_docs/version-2.9.1-deprecated/concepts-multi-tenancy.md deleted file mode 100644 index 93a59557b2efca..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/concepts-multi-tenancy.md +++ /dev/null @@ -1,67 +0,0 @@ ---- -id: concepts-multi-tenancy -title: Multi Tenancy -sidebar_label: "Multi Tenancy" -original_id: concepts-multi-tenancy ---- -

Pulsar was created from the ground up as a multi-tenant system. To support multi-tenancy, Pulsar has a concept of tenants. Tenants can be spread across clusters and can each have their own [authentication and authorization](security-overview.md) scheme applied to them. They are also the administrative unit at which storage quotas, [message TTL](cookbooks-retention-expiry.md#time-to-live-ttl), and isolation policies can be managed.

The multi-tenant nature of Pulsar is reflected most visibly in topic URLs, which have this structure:

```http

persistent://tenant/namespace/topic

```

As you can see, the tenant is the most basic unit of categorization for topics (more fundamental than the namespace and topic name).

## Tenants

To each tenant in a Pulsar instance you can assign:

* An [authorization](security-authorization.md) scheme
* The set of [clusters](reference-terminology.md#cluster) to which the tenant's configuration applies

## Namespaces

Tenants and namespaces are two key concepts of Pulsar to support multi-tenancy.

* Pulsar is provisioned for specified tenants with appropriate capacity allocated to the tenant.
* A namespace is the administrative unit nomenclature within a tenant. The configuration policies set on a namespace apply to all the topics created in that namespace. A tenant may create multiple namespaces via self-administration using the REST API and the [`pulsar-admin`](reference-pulsar-admin.md) CLI tool. For instance, a tenant with different applications can create a separate namespace for each application.

Names for topics in the same namespace will look like this:

```http

persistent://tenant/app1/topic-1

persistent://tenant/app1/topic-2

persistent://tenant/app1/topic-3

```

### Namespace change events and topic-level policies

Pulsar is a multi-tenant event streaming system. Administrators can manage the tenants and namespaces by setting policies at different levels. However, the policies, such as the retention policy and the storage quota policy, are only available at the namespace level. In many use cases, users need to set a policy at the topic level. The namespace change events approach is proposed for supporting topic-level policies in an efficient way. In this approach, Pulsar is used as an event log to store namespace change events (such as topic policy changes). 
This approach has a few benefits:
- It avoids using ZooKeeper for this purpose and introducing more load on ZooKeeper.
- It uses Pulsar as an event log for propagating the policy cache, which can scale efficiently.
- Pulsar SQL can be used to query the namespace changes and audit the system.

Each namespace has a [system topic](concepts-messaging.md#system-topic) named `__change_events`. This system topic stores change events for a given namespace. The following figure illustrates how to leverage it to update topic-level policies.

![Leverage the system topic to update topic-level policies](/assets/system-topic-for-topic-level-policies.svg)

1. Pulsar Admin clients communicate with the Admin RESTful API to update topic-level policies.
2. Any broker that receives the Admin HTTP request publishes a topic policy change event to the corresponding system topic (`__change_events`) of the namespace.
3. Each broker that owns one or more namespace bundles subscribes to the system topic (`__change_events`) to receive the change events of the namespace.
4. Each broker applies the change events to its policy cache.
5. Once the policy cache is updated, the broker sends the response back to the Pulsar Admin clients.

:::note

By default, the system topic is disabled. To enable the topic-level policy (`topicLevelPoliciesEnabled`=`true`), you need to enable the system topic by setting `systemTopicEnabled` to `true` in the `conf/broker.conf` or `conf/standalone.conf` file.

::: \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/concepts-multiple-advertised-listeners.md b/site2/website/versioned_docs/version-2.9.1-deprecated/concepts-multiple-advertised-listeners.md deleted file mode 100644 index f2e1ae0aadc7ca..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/concepts-multiple-advertised-listeners.md +++ /dev/null @@ -1,44 +0,0 @@ ---- -id: concepts-multiple-advertised-listeners -title: Multiple advertised listeners -sidebar_label: "Multiple advertised listeners" -original_id: concepts-multiple-advertised-listeners ---- -

When a Pulsar cluster is deployed in a production environment, you may need to expose multiple advertised addresses for the broker. For example, when you deploy a Pulsar cluster in Kubernetes and want other clients, which are not in the same Kubernetes cluster, to connect to the Pulsar cluster, you need to assign a broker URL to external clients. But clients in the same Kubernetes cluster can still connect to the Pulsar cluster through the internal network of Kubernetes.

## Advertised listeners

To ensure clients in both internal and external networks can connect to a Pulsar cluster, Pulsar introduces the `advertisedListeners` and `internalListenerName` configuration options into the [broker configuration file](reference-configuration.md#broker), so that the broker supports exposing multiple advertised listeners and the separation of internal and external network traffic.

- The `advertisedListeners` is used to specify multiple advertised listeners. The broker uses the listener as the broker identifier in the load manager and the bundle owner data. The `advertisedListeners` is formatted as `<listener_name>:pulsar://<host>:<port>, <listener_name>:pulsar+ssl://<host>:<port>`. You can set up the `advertisedListeners` like
`advertisedListeners=internal:pulsar://192.168.1.11:6660,internal:pulsar+ssl://192.168.1.11:6651`.

- The `internalListenerName` is used to specify the internal service URL that the broker uses. 
You can specify the `internalListenerName` by choosing one of the `advertisedListeners`. The broker uses the listener name of the first advertised listener as the `internalListenerName` if the `internalListenerName` is absent.
-
-After setting up the `advertisedListeners`, clients can choose one of the listeners as the service URL to create a connection to the broker as long as the network is accessible. However, if the client creates producers or consumers on a topic, the client must send a lookup request to the broker to find the owner broker, and then connect to the owner broker to publish or consume messages. Therefore, you must allow the client to get the corresponding service URL with the same advertised listener name as the one used by the client. This keeps the client side simple and secure.
-
-## Use multiple advertised listeners
-
-This example shows how a Pulsar client uses multiple advertised listeners.
-
-1. Configure multiple advertised listeners in the broker configuration file.
-
-```shell
-
-advertisedListeners={listenerName}:pulsar://xxxx:6650,
-{listenerName}:pulsar+ssl://xxxx:6651
-
-```
-
-2. Specify the listener name for the client.
-
-```java
-
-PulsarClient client = PulsarClient.builder()
-    .serviceUrl("pulsar://xxxx:6650")
-    .listenerName("external")
-    .build();
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/concepts-overview.md b/site2/website/versioned_docs/version-2.9.1-deprecated/concepts-overview.md
deleted file mode 100644
index c643aa0ce7bbce..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/concepts-overview.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-id: concepts-overview
-title: Pulsar Overview
-sidebar_label: "Overview"
-original_id: concepts-overview
----
-
-Pulsar is a multi-tenant, high-performance solution for server-to-server messaging. Originally developed by Yahoo, Pulsar is under the stewardship of the [Apache Software Foundation](https://www.apache.org/).
-
-Key features of Pulsar are listed below:
-
-* Native support for multiple clusters in a Pulsar instance, with seamless [geo-replication](administration-geo.md) of messages across clusters.
-* Very low publish and end-to-end latency.
-* Seamless scalability to over a million topics.
-* A simple [client API](concepts-clients.md) with bindings for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python.md) and [C++](client-libraries-cpp.md).
-* Multiple [subscription modes](concepts-messaging.md#subscription-modes) ([exclusive](concepts-messaging.md#exclusive), [shared](concepts-messaging.md#shared), and [failover](concepts-messaging.md#failover)) for topics.
-* Guaranteed message delivery with [persistent message storage](concepts-architecture-overview.md#persistent-storage) provided by [Apache BookKeeper](http://bookkeeper.apache.org/).
-* A serverless, lightweight computing framework, [Pulsar Functions](functions-overview.md), offers the capability for stream-native data processing.
-* A serverless connector framework, [Pulsar IO](io-overview.md), which is built on Pulsar Functions, makes it easier to move data in and out of Apache Pulsar.
-* [Tiered Storage](concepts-tiered-storage.md) offloads data from hot/warm storage to cold/long-term storage (such as S3 and GCS) when the data ages out.
-
-## Contents
-
-- [Messaging Concepts](concepts-messaging.md)
-- [Architecture Overview](concepts-architecture-overview.md)
-- [Pulsar Clients](concepts-clients.md)
-- [Geo Replication](concepts-replication.md)
-- [Multi Tenancy](concepts-multi-tenancy.md)
-- [Authentication and Authorization](concepts-authentication.md)
-- [Topic Compaction](concepts-topic-compaction.md)
-- [Tiered Storage](concepts-tiered-storage.md)
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/concepts-proxy-sni-routing.md b/site2/website/versioned_docs/version-2.9.1-deprecated/concepts-proxy-sni-routing.md
deleted file mode 100644
index 51419a66cefe6e..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/concepts-proxy-sni-routing.md
+++ /dev/null
@@ -1,180 +0,0 @@
----
-id: concepts-proxy-sni-routing
-title: Proxy support with SNI routing
-sidebar_label: "Proxy support with SNI routing"
-original_id: concepts-proxy-sni-routing
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-A proxy server is an intermediary server that forwards requests from multiple clients to different servers across the Internet. The proxy server acts as a "traffic cop" in both forward and reverse proxy scenarios, and provides benefits to your system such as load balancing, performance, security, and auto-scaling.
-
-The proxy in Pulsar acts as a reverse proxy, and creates a gateway in front of brokers. Pulsar supports proxies such as Apache Traffic Server (ATS), HAProxy, Nginx, and Envoy. These proxy servers support **SNI routing**. SNI routing is used to route traffic to a destination without terminating the SSL connection. Layer 4 routing provides greater transparency because the outbound connection is determined by examining the destination address in the client TCP packets.
-
-Pulsar clients (Java, C++, Python) support the [SNI routing protocol](https://github.com/apache/pulsar/wiki/PIP-60:-Support-Proxy-server-with-SNI-routing), so you can connect to brokers through the proxy. This document walks you through how to set up the ATS proxy, enable SNI routing, and connect Pulsar clients to the broker through the ATS proxy.
-
-## ATS-SNI Routing in Pulsar
-To support [layer-4 SNI routing](https://docs.trafficserver.apache.org/en/latest/admin-guide/layer-4-routing.en.html) with ATS, the inbound connection must be a TLS connection. Pulsar clients support the SNI routing protocol on TLS connections, so when Pulsar clients connect to a broker through the ATS proxy, Pulsar uses ATS as a reverse proxy.
-
-Pulsar supports SNI routing for geo-replication, so brokers can connect to brokers in other clusters through the ATS proxy.
-
-This section explains how to set up and use ATS as a reverse proxy, so Pulsar clients can connect to brokers through the ATS proxy using the SNI routing protocol on a TLS connection.
-
-### Set up ATS Proxy for layer-4 SNI routing
-To support layer-4 SNI routing, you need to configure the `records.config` and `ssl_server_name.config` files.
-
-![Pulsar client SNI](/assets/pulsar-sni-client.png)
-
-The [records.config](https://docs.trafficserver.apache.org/en/latest/admin-guide/files/records.config.en.html) file is located in the `/usr/local/etc/trafficserver/` directory by default. The file lists configurable variables used by ATS.
-
-To configure the `records.config` file, complete the following steps.
-1. Update the TLS port (`http.server_ports`) on which the proxy listens, and update the proxy certs (`ssl.client.cert.path` and `ssl.client.cert.filename`) to secure TLS tunneling.
-2. Configure the server ports (`http.connect_ports`) used for tunneling to the broker. If Pulsar brokers are listening on ports `4443` and `6651`, add those broker service ports to the `http.connect_ports` configuration.
-
-The following is an example.
-
-```
-
-# PROXY TLS PORT
-CONFIG proxy.config.http.server_ports STRING 4443:ssl 4080
-# PROXY CERTS FILE PATH
-CONFIG proxy.config.ssl.client.cert.path STRING /proxy-cert.pem
-# PROXY KEY FILE PATH
-CONFIG proxy.config.ssl.client.cert.filename STRING /proxy-key.pem
-
-
-# The range of origin server ports that can be used for tunneling via CONNECT.
-# Traffic Server allows tunnels only to the specified ports. Supports both wildcards (*) and ranges (e.g. 0-1023).
-CONFIG proxy.config.http.connect_ports STRING 4443 6651
-
-```
-
-The `ssl_server_name` file is used to configure TLS connection handling for inbound and outbound connections. The configuration is determined by the SNI values provided by the inbound connection. The file consists of a set of configuration items, and each is identified by an SNI value (`fqdn`). When an inbound TLS connection is made, the SNI value from the TLS negotiation is matched with the items specified in this file. If the values match, the values specified in that item override the default values.
-
-The following example shows the mapping between the inbound SNI hostname coming from the client and the actual broker service URL to which the request should be redirected. For example, if the client sends the SNI header `pulsar-broker1`, the proxy creates a TLS tunnel by redirecting the request to the `pulsar-broker1:6651` service URL.
-
-```
-
-server_config = {
-  {
-     fqdn = 'pulsar-broker-vip',
-     # Forward to Pulsar broker which is listening on 6651
-     tunnel_route = 'pulsar-broker-vip:6651'
-  },
-  {
-     fqdn = 'pulsar-broker1',
-     # Forward to Pulsar broker-1 which is listening on 6651
-     tunnel_route = 'pulsar-broker1:6651'
-  },
-  {
-     fqdn = 'pulsar-broker2',
-     # Forward to Pulsar broker-2 which is listening on 6651
-     tunnel_route = 'pulsar-broker2:6651'
-  },
-}
-
-```
-
-After you configure the `ssl_server_name.config` and `records.config` files, the ATS proxy server handles SNI routing and creates a TCP tunnel between the client and the broker.
-
-### Configure Pulsar-client with SNI routing
-ATS SNI routing works only with TLS. You need to enable TLS for the ATS proxy and brokers first, configure the SNI routing protocol, and then connect Pulsar clients to brokers through the ATS proxy. Pulsar clients support SNI routing by connecting to the proxy and sending the target broker URL in the SNI header. This is handled internally. You only need to configure the following proxy settings when you create a Pulsar client to use the SNI routing protocol.
-
-````mdx-code-block
-
-
-
-```java
-
-String brokerServiceUrl = "pulsar+ssl://pulsar-broker-vip:6651/";
-String proxyUrl = "pulsar+ssl://ats-proxy:443";
-ClientBuilder clientBuilder = PulsarClient.builder()
-    .serviceUrl(brokerServiceUrl)
-    .tlsTrustCertsFilePath(TLS_TRUST_CERT_FILE_PATH)
-    .enableTls(true)
-    .allowTlsInsecureConnection(false)
-    .proxyServiceUrl(proxyUrl, ProxyProtocol.SNI)
-    .operationTimeout(1000, TimeUnit.MILLISECONDS);
-
-Map<String, String> authParams = new HashMap<>();
-authParams.put("tlsCertFile", TLS_CLIENT_CERT_FILE_PATH);
-authParams.put("tlsKeyFile", TLS_CLIENT_KEY_FILE_PATH);
-clientBuilder.authentication(AuthenticationTls.class.getName(), authParams);
-
-PulsarClient pulsarClient = clientBuilder.build();
-
-```
-
-
-
-```c++
-
-ClientConfiguration config = ClientConfiguration();
-config.setUseTls(true);
-config.setTlsTrustCertsFilePath("/path/to/cacert.pem");
-config.setTlsAllowInsecureConnection(false);
-config.setAuth(pulsar::AuthTls::create(
-            "/path/to/client-cert.pem", "/path/to/client-key.pem"));
-
-Client client("pulsar+ssl://ats-proxy:443", config);
-
-```
-
-
-
-```python
-
-from pulsar import Client, AuthenticationTLS
-
-auth = AuthenticationTLS("/path/to/my-role.cert.pem", "/path/to/my-role.key-pk8.pem")
-client = Client("pulsar+ssl://ats-proxy:443",
-                tls_trust_certs_file_path="/path/to/ca.cert.pem",
-                tls_allow_insecure_connection=False,
-                authentication=auth)
-
-```
-
-
-
-````
-
-### Pulsar geo-replication with SNI routing
-You can use the ATS proxy for geo-replication. Pulsar brokers can connect to brokers in other clusters by using SNI routing. To enable SNI routing for broker connections across clusters, you need to configure the SNI proxy URL in the cluster metadata. Once the SNI proxy URL is configured in the cluster metadata, brokers can connect to brokers in other clusters through the proxy over SNI routing.
-
-![Pulsar client SNI](/assets/pulsar-sni-geo.png)
-
-In this example, a Pulsar cluster is deployed into two separate regions, `us-west` and `us-east`. Both regions are configured with the ATS proxy, and brokers in each region run behind the ATS proxy. We configure the cluster metadata for both clusters, so brokers in one cluster can use SNI routing and connect to brokers in other clusters through the ATS proxy.
-
-(a) Configure the cluster metadata for `us-east` with the `us-east` broker service URL and the `us-east` ATS proxy URL with the SNI proxy-protocol.
-
-```
-
-./pulsar-admin clusters update \
---broker-url-secure pulsar+ssl://east-broker-vip:6651 \
---url http://east-broker-vip:8080 \
---proxy-protocol SNI \
---proxy-url pulsar+ssl://east-ats-proxy:443
-
-```
-
-(b) Configure the cluster metadata for `us-west` with the `us-west` broker service URL and the `us-west` ATS proxy URL with the SNI proxy-protocol.
-
-```
-
-./pulsar-admin clusters update \
---broker-url-secure pulsar+ssl://west-broker-vip:6651 \
---url http://west-broker-vip:8080 \
---proxy-protocol SNI \
---proxy-url pulsar+ssl://west-ats-proxy:443
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/concepts-replication.md b/site2/website/versioned_docs/version-2.9.1-deprecated/concepts-replication.md
deleted file mode 100644
index 799f0eb4d92c6b..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/concepts-replication.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-id: concepts-replication
-title: Geo Replication
-sidebar_label: "Geo Replication"
-original_id: concepts-replication
----
-
-Pulsar enables messages to be produced and consumed in different geo-locations.
For instance, your application may be publishing data in one region or market and you would like to process it for consumption in other regions or markets. [Geo-replication](administration-geo.md) in Pulsar enables you to do that.
-
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/concepts-tiered-storage.md b/site2/website/versioned_docs/version-2.9.1-deprecated/concepts-tiered-storage.md
deleted file mode 100644
index f6988e53a8cd4e..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/concepts-tiered-storage.md
+++ /dev/null
@@ -1,18 +0,0 @@
----
-id: concepts-tiered-storage
-title: Tiered Storage
-sidebar_label: "Tiered Storage"
-original_id: concepts-tiered-storage
----
-
-Pulsar's segment-oriented architecture allows for topic backlogs to grow very large, effectively without limit. However, this can become expensive over time.
-
-One way to alleviate this cost is to use Tiered Storage. With tiered storage, older messages in the backlog can be moved from BookKeeper to a cheaper storage mechanism, while still allowing clients to access the backlog as if nothing had changed.
-
-![Tiered Storage](/assets/pulsar-tiered-storage.png)
-
-> Data written to BookKeeper is replicated to 3 physical machines by default. However, once a segment is sealed in BookKeeper it becomes immutable and can be copied to long-term storage. Long-term storage can achieve cost savings by using mechanisms such as [Reed-Solomon error correction](https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction) to require fewer physical copies of data.
-
-Pulsar currently supports S3, Google Cloud Storage (GCS), and filesystem for [long-term storage](https://pulsar.apache.org/docs/en/cookbooks-tiered-storage/). Offloading to long-term storage is triggered via a REST API or the command-line interface. The user passes in the amount of topic data they wish to retain on BookKeeper, and the broker will copy the backlog data to long-term storage. The original data will then be deleted from BookKeeper after a configured delay (4 hours by default).
-
-> For a guide to setting up tiered storage, see the [Tiered storage cookbook](cookbooks-tiered-storage.md).
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/concepts-topic-compaction.md b/site2/website/versioned_docs/version-2.9.1-deprecated/concepts-topic-compaction.md
deleted file mode 100644
index 34b7ed7fbbd31e..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/concepts-topic-compaction.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-id: concepts-topic-compaction
-title: Topic Compaction
-sidebar_label: "Topic Compaction"
-original_id: concepts-topic-compaction
----
-
-Pulsar was built with highly scalable [persistent storage](concepts-architecture-overview.md#persistent-storage) of message data as a primary objective. Pulsar topics enable you to persistently store as many unacknowledged messages as you need while preserving message ordering. By default, Pulsar stores *all* unacknowledged/unprocessed messages produced on a topic. Accumulating many unacknowledged messages on a topic is necessary for many Pulsar use cases, but it can also be very time-intensive for Pulsar consumers to "rewind" through the entire log of messages.
-
-> For a more practical guide to topic compaction, see the [Topic compaction cookbook](cookbooks-compaction.md).
-
-For some use cases consumers don't need a complete "image" of the topic log.
They may only need a few values to construct a more "shallow" image of the log, perhaps even just the most recent value. For these kinds of use cases Pulsar offers **topic compaction**. When you run compaction on a topic, Pulsar goes through a topic's backlog and removes messages that are *obscured* by later messages, i.e. it goes through the topic on a per-key basis and leaves only the most recent message associated with that key. - -Pulsar's topic compaction feature: - -* Allows for faster "rewind" through topic logs -* Applies only to [persistent topics](concepts-architecture-overview.md#persistent-storage) -* Triggered automatically when the backlog reaches a certain size or can be triggered manually via the command line. See the [Topic compaction cookbook](cookbooks-compaction.md) -* Is conceptually and operationally distinct from [retention and expiry](concepts-messaging.md#message-retention-and-expiry). Topic compaction *does*, however, respect retention. If retention has removed a message from the message backlog of a topic, the message will also not be readable from the compacted topic ledger. - -> #### Topic compaction example: the stock ticker -> An example use case for a compacted Pulsar topic would be a stock ticker topic. On a stock ticker topic, each message bears a timestamped dollar value for stocks for purchase (with the message key holding the stock symbol, e.g. `AAPL` or `GOOG`). With a stock ticker you may care only about the most recent value(s) of the stock and have no interest in historical data (i.e. you don't need to construct a complete image of the topic's sequence of messages per key). Compaction would be highly beneficial in this case because it would keep consumers from needing to rewind through obscured messages. - - -## How topic compaction works - -When topic compaction is triggered [via the CLI](cookbooks-compaction.md), Pulsar will iterate over the entire topic from beginning to end. For each key that it encounters the compaction routine will keep a record of the latest occurrence of that key. - -After that, the broker will create a new [BookKeeper ledger](concepts-architecture-overview.md#ledgers) and make a second iteration through each message on the topic. For each message, if the key matches the latest occurrence of that key, then the key's data payload, message ID, and metadata will be written to the newly created ledger. If the key doesn't match the latest then the message will be skipped and left alone. If any given message has an empty payload, it will be skipped and considered deleted (akin to the concept of [tombstones](https://en.wikipedia.org/wiki/Tombstone_(data_store)) in key-value databases). At the end of this second iteration through the topic, the newly created BookKeeper ledger is closed and two things are written to the topic's metadata: the ID of the BookKeeper ledger and the message ID of the last compacted message (this is known as the **compaction horizon** of the topic). Once this metadata is written compaction is complete. - -After the initial compaction operation, the Pulsar [broker](reference-terminology.md#broker) that owns the topic is notified whenever any future changes are made to the compaction horizon and compacted backlog. 
When such changes occur:
-
-* Clients (consumers and readers) that have read compacted enabled will attempt to read messages from a topic and either:
-  * Read from the topic like normal (if the message ID is greater than or equal to the compaction horizon) or
-  * Read beginning at the compaction horizon (if the message ID is lower than the compaction horizon)
-
-
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/concepts-transactions.md b/site2/website/versioned_docs/version-2.9.1-deprecated/concepts-transactions.md
deleted file mode 100644
index 08490ba06b5d7e..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/concepts-transactions.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-id: transactions
-title: Transactions
-sidebar_label: "Overview"
-original_id: transactions
----
-
-Transactional semantics enable event streaming applications to consume, process, and produce messages in one atomic operation. In Pulsar, a producer or consumer can work with messages across multiple topics and partitions and ensure those messages are processed as a single unit.
-
-The following concepts help you understand Pulsar transactions.
-
-## Transaction coordinator and transaction log
-The transaction coordinator maintains the topics and subscriptions that interact in a transaction. When a transaction is committed, the transaction coordinator interacts with the topic owner broker to complete the transaction.
-
-The transaction coordinator maintains the entire life cycle of transactions, and prevents a transaction from entering an incorrect status.
-
-The transaction coordinator handles transaction timeouts, and ensures that a transaction is aborted after it times out.
-
-All the transaction metadata is persisted in the transaction log. The transaction log is backed by a Pulsar topic. After the transaction coordinator crashes, it can restore the transaction metadata from the transaction log.
-
-## Transaction ID
-The transaction ID (TxnID) identifies a unique transaction in Pulsar. The transaction ID is 128-bit. The highest 16 bits are reserved for the ID of the transaction coordinator, and the remaining bits are used for monotonically increasing numbers in each transaction coordinator. The TxnID makes it easy to locate a crashed transaction.
-
-## Transaction buffer
-Messages produced within a transaction are stored in the transaction buffer. The messages in the transaction buffer are not materialized (visible) to consumers until the transactions are committed. The messages in the transaction buffer are discarded when the transactions are aborted.
-
-## Pending acknowledge state
-Message acknowledgments within a transaction are maintained in the pending acknowledge state before the transaction completes. If a message is in the pending acknowledge state, the message cannot be acknowledged by other transactions until the message is removed from the pending acknowledge state.
-
-The pending acknowledge state is persisted to the pending acknowledge log. The pending acknowledge log is backed by a Pulsar topic. A new broker can restore the state from the pending acknowledge log to ensure the acknowledgement is not lost.
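
The following is a minimal Java sketch of how these pieces fit together end to end. It is an illustrative addition rather than part of the original page: the topic and subscription names are placeholders, and it assumes a broker with transaction support enabled (`transactionCoordinatorEnabled=true` in `broker.conf`).

```java

import java.util.concurrent.TimeUnit;

import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.Schema;
import org.apache.pulsar.client.api.transaction.Transaction;

PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650")
        .enableTransaction(true) // required to reach the transaction coordinator
        .build();

Consumer<String> consumer = client.newConsumer(Schema.STRING)
        .topic("input-topic")
        .subscriptionName("txn-sub")
        .subscribe();

Producer<String> producer = client.newProducer(Schema.STRING)
        .topic("output-topic")
        .sendTimeout(0, TimeUnit.SECONDS) // transactional producers must disable the send timeout
        .create();

Transaction txn = client.newTransaction()
        .withTransactionTimeout(5, TimeUnit.MINUTES)
        .build()
        .get();

// The produce and the acknowledgment below share one TxnID: both become
// visible if the transaction commits, and neither does if it aborts.
Message<String> msg = consumer.receive();
producer.newMessage(txn).value("processed: " + msg.getValue()).send();
consumer.acknowledgeAsync(msg.getMessageId(), txn).get();
txn.commit().get();

```

On a failure path, calling `txn.abort().get()` instead discards the messages held in the transaction buffer and releases the acknowledgments held in the pending acknowledge state.
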
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/cookbooks-bookkeepermetadata.md b/site2/website/versioned_docs/version-2.9.1-deprecated/cookbooks-bookkeepermetadata.md
deleted file mode 100644
index b0fa98dc3b65d5..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/cookbooks-bookkeepermetadata.md
+++ /dev/null
@@ -1,21 +0,0 @@
----
-id: cookbooks-bookkeepermetadata
-title: BookKeeper Ledger Metadata
-original_id: cookbooks-bookkeepermetadata
----
-
-Pulsar stores data on BookKeeper ledgers. You can understand the contents of a ledger by inspecting the metadata attached to the ledger.
-Such metadata is stored on ZooKeeper and is readable using BookKeeper APIs.
-
-Description of current metadata:
-
-| Scope  | Metadata name | Metadata value |
-| ------------- | ------------- | ------------- |
-| All ledgers | application | 'pulsar' |
-| All ledgers | component | 'managed-ledger', 'schema', 'compacted-topic' |
-| Managed ledgers | pulsar/managed-ledger | name of the ledger |
-| Cursor | pulsar/cursor | name of the cursor |
-| Compacted topic | pulsar/compactedTopic | name of the original topic |
-| Compacted topic | pulsar/compactedTo | id of the last compacted message |
-
-
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/cookbooks-compaction.md b/site2/website/versioned_docs/version-2.9.1-deprecated/cookbooks-compaction.md
deleted file mode 100644
index dfa314727241a8..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/cookbooks-compaction.md
+++ /dev/null
@@ -1,142 +0,0 @@
----
-id: cookbooks-compaction
-title: Topic compaction
-sidebar_label: "Topic compaction"
-original_id: cookbooks-compaction
----
-
-Pulsar's [topic compaction](concepts-topic-compaction.md#compaction) feature enables you to create **compacted** topics in which older, "obscured" entries are pruned from the topic, allowing for faster reads through the topic's history (which messages are deemed obscured/outdated/irrelevant will depend on your use case).
-
-To use compaction:
-
-* You need to give messages keys, as topic compaction in Pulsar takes place on a *per-key basis* (i.e. messages are compacted based on their key). For a stock ticker use case, the stock symbol---e.g. `AAPL` or `GOOG`---could serve as the key (more on this [below](#when-should-i-use-compacted-topics)). Messages without keys will be left alone by the compaction process.
-* Compaction can be configured to run [automatically](#configuring-compaction-to-run-automatically), or you can manually [trigger](#triggering-compaction-manually) compaction using the Pulsar administrative API.
-* Your consumers must be [configured](#consumer-configuration) to read from compacted topics ([Java consumers](#java), for example, have a `readCompacted` setting that must be set to `true`). If this configuration is not set, consumers will still be able to read from the non-compacted topic.
-
-
-> Compaction only works on messages that have keys (as in the stock ticker example the stock symbol serves as the key for each message). Keys can thus be thought of as the axis along which compaction is applied. Messages that don't have keys are simply ignored by compaction.
-
-## When should I use compacted topics?
-
-The classic example of a topic that could benefit from compaction would be a stock ticker topic through which consumers can access up-to-date values for specific stocks.
Imagine a scenario in which messages carrying stock value data use the stock symbol as the key (`GOOG`, `AAPL`, `TWTR`, etc.). Compacting this topic would give consumers on the topic two options: - -* They can read from the "original," non-compacted topic in case they need access to "historical" values, i.e. the entirety of the topic's messages. -* They can read from the compacted topic if they only want to see the most up-to-date messages. - -Thus, if you're using a Pulsar topic called `stock-values`, some consumers could have access to all messages in the topic (perhaps because they're performing some kind of number crunching of all values in the last hour) while the consumers used to power the real-time stock ticker only see the compacted topic (and thus aren't forced to process outdated messages). Which variant of the topic any given consumer pulls messages from is determined by the consumer's [configuration](#consumer-configuration). - -> One of the benefits of compaction in Pulsar is that you aren't forced to choose between compacted and non-compacted topics, as the compaction process leaves the original topic as-is and essentially adds an alternate topic. In other words, you can run compaction on a topic and consumers that need access to the non-compacted version of the topic will not be adversely affected. - - -## Configuring compaction to run automatically - -Tenant administrators can configure a policy for compaction at the namespace level. The policy specifies how large the topic backlog can grow before compaction is triggered. - -For example, to trigger compaction when the backlog reaches 100MB: - -```bash - -$ bin/pulsar-admin namespaces set-compaction-threshold \ - --threshold 100M my-tenant/my-namespace - -``` - -Configuring the compaction threshold on a namespace will apply to all topics within that namespace. - -## Triggering compaction manually - -In order to run compaction on a topic, you need to use the [`topics compact`](reference-pulsar-admin.md#topics-compact) command for the [`pulsar-admin`](reference-pulsar-admin.md) CLI tool. Here's an example: - -```bash - -$ bin/pulsar-admin topics compact \ - persistent://my-tenant/my-namespace/my-topic - -``` - -The `pulsar-admin` tool runs compaction via the Pulsar {@inject: rest:REST:/} API. To run compaction in its own dedicated process, i.e. *not* through the REST API, you can use the [`pulsar compact-topic`](reference-cli-tools.md#pulsar-compact-topic) command. Here's an example: - -```bash - -$ bin/pulsar compact-topic \ - --topic persistent://my-tenant-namespace/my-topic - -``` - -> Running compaction in its own process is recommended when you want to avoid interfering with the broker's performance. Broker performance should only be affected, however, when running compaction on topics with a large keyspace (i.e when there are many keys on the topic). The first phase of the compaction process keeps a copy of each key in the topic, which can create memory pressure as the number of keys grows. Using the `pulsar-admin topics compact` command to run compaction through the REST API should present no issues in the overwhelming majority of cases; using `pulsar compact-topic` should correspondingly be considered an edge case. - -The `pulsar compact-topic` command communicates with [ZooKeeper](https://zookeeper.apache.org) directly. In order to establish communication with ZooKeeper, though, the `pulsar` CLI tool will need to have a valid [broker configuration](reference-configuration.md#broker). 
You can either supply a proper configuration in `conf/broker.conf` or specify a non-default location for the configuration:
-
-```bash
-
-$ bin/pulsar compact-topic \
-  --broker-conf /path/to/broker.conf \
-  --topic persistent://my-tenant/my-namespace/my-topic
-
-# If the configuration is in conf/broker.conf
-$ bin/pulsar compact-topic \
-  --topic persistent://my-tenant/my-namespace/my-topic
-
-```
-
-#### When should I trigger compaction?
-
-How often you [trigger compaction](#triggering-compaction-manually) will vary widely based on the use case. If you want a compacted topic to be extremely speedy on read, then you should run compaction fairly frequently.
-
-## Consumer configuration
-
-Pulsar consumers and readers need to be configured to read from compacted topics. The sections below show you how to enable compacted topic reads for Pulsar's language clients.
-
-### Java
-
-In order to read from a compacted topic using a Java consumer, the `readCompacted` parameter must be set to `true`. Here's an example consumer for a compacted topic:
-
-```java
-
-Consumer<byte[]> compactedTopicConsumer = client.newConsumer()
-        .topic("some-compacted-topic")
-        .readCompacted(true)
-        .subscribe();
-
-```
-
-As mentioned above, topic compaction in Pulsar works on a *per-key basis*. That means that messages that you produce on compacted topics need to have keys (the content of the key will depend on your use case). Messages that don't have keys will be ignored by the compaction process. Here's an example of a keyed Pulsar message, built using the producer's message builder:
-
-```java
-
-import org.apache.pulsar.client.api.TypedMessageBuilder;
-
-TypedMessageBuilder<byte[]> msg = producer.newMessage()
-        .key("some-key")
-        .value(someByteArray);
-
-```
-
-The example below shows a message with a key being produced on a compacted Pulsar topic:
-
-```java
-
-import org.apache.pulsar.client.api.Producer;
-import org.apache.pulsar.client.api.PulsarClient;
-
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar://localhost:6650")
-        .build();
-
-Producer<byte[]> compactedTopicProducer = client.newProducer()
-        .topic("some-compacted-topic")
-        .create();
-
-compactedTopicProducer.newMessage()
-        .key("some-key")
-        .value(someByteArray)
-        .send();
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/cookbooks-deduplication.md b/site2/website/versioned_docs/version-2.9.1-deprecated/cookbooks-deduplication.md
deleted file mode 100644
index f7f9e3d7bb425b..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/cookbooks-deduplication.md
+++ /dev/null
@@ -1,151 +0,0 @@
----
-id: cookbooks-deduplication
-title: Message deduplication
-sidebar_label: "Message deduplication"
-original_id: cookbooks-deduplication
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-When **Message deduplication** is enabled, it ensures that each message produced on Pulsar topics is persisted to disk *only once*, even if the message is produced more than once. Message deduplication is handled automatically on the server side.
-
-To use message deduplication in Pulsar, you need to configure your Pulsar brokers and clients.
-
-## How it works
-
-You can enable or disable message deduplication at the namespace level or the topic level. By default, it is disabled on all namespaces or topics.
You can enable it in the following ways: - -* Enable deduplication for all namespaces/topics at the broker-level. -* Enable deduplication for a specific namespace with the `pulsar-admin namespaces` interface. -* Enable deduplication for a specific topic with the `pulsar-admin topics` interface. - -## Configure message deduplication - -You can configure message deduplication in Pulsar using the [`broker.conf`](reference-configuration.md#broker) configuration file. The following deduplication-related parameters are available. - -Parameter | Description | Default -:---------|:------------|:------- -`brokerDeduplicationEnabled` | Sets the default behavior for message deduplication in the Pulsar broker. If it is set to `true`, message deduplication is enabled on all namespaces/topics. If it is set to `false`, you have to enable or disable deduplication at the namespace level or the topic level. | `false` -`brokerDeduplicationMaxNumberOfProducers` | The maximum number of producers for which information is stored for deduplication purposes. | `10000` -`brokerDeduplicationEntriesInterval` | The number of entries after which a deduplication informational snapshot is taken. A larger interval leads to fewer snapshots being taken, though this lengthens the topic recovery time (the time required for entries published after the snapshot to be replayed). | `1000` -`brokerDeduplicationSnapshotIntervalSeconds`| The time period after which a deduplication informational snapshot is taken. It runs simultaneously with `brokerDeduplicationEntriesInterval`. |`120` -`brokerDeduplicationProducerInactivityTimeoutMinutes` | The time of inactivity (in minutes) after which the broker discards deduplication information related to a disconnected producer. | `360` (6 hours) - -### Set default value at the broker-level - -By default, message deduplication is *disabled* on all Pulsar namespaces/topics. To enable it on all namespaces/topics, set the `brokerDeduplicationEnabled` parameter to `true` and re-start the broker. - -Even if you set the value for `brokerDeduplicationEnabled`, enabling or disabling via Pulsar admin CLI overrides the default settings at the broker-level. - -### Enable message deduplication - -Though message deduplication is disabled by default at the broker level, you can enable message deduplication for a specific namespace or topic using the [`pulsar-admin namespaces set-deduplication`](reference-pulsar-admin.md#namespace-set-deduplication) or the [`pulsar-admin topics set-deduplication`](reference-pulsar-admin.md#topic-set-deduplication) command. You can use the `--enable`/`-e` flag and specify the namespace/topic. - -The following example shows how to enable message deduplication at the namespace level. - -```bash - -$ bin/pulsar-admin namespaces set-deduplication \ - public/default \ - --enable # or just -e - -``` - -### Disable message deduplication - -Even if you enable message deduplication at the broker level, you can disable message deduplication for a specific namespace or topic using the [`pulsar-admin namespace set-deduplication`](reference-pulsar-admin.md#namespace-set-deduplication) or the [`pulsar-admin topics set-deduplication`](reference-pulsar-admin.md#topic-set-deduplication) command. Use the `--disable`/`-d` flag and specify the namespace/topic. - -The following example shows how to disable message deduplication at the namespace level. 
-
-```bash
-
-$ bin/pulsar-admin namespaces set-deduplication \
-  public/default \
-  --disable # or just -d
-
-```
-
-## Pulsar clients
-
-If you enable message deduplication in Pulsar brokers, you need to complete the following tasks for your client producers:
-
-1. Specify a name for the producer.
-1. Set the message timeout to `0` (namely, no timeout).
-
-The instructions for Java, Python, and C++ clients are different.
-
-````mdx-code-block
-
-
-To enable message deduplication on a [Java producer](client-libraries-java.md#producers), set the producer name using the `producerName` setter, and set the timeout to `0` using the `sendTimeout` setter.
-
-```java
-
-import org.apache.pulsar.client.api.Producer;
-import org.apache.pulsar.client.api.PulsarClient;
-import java.util.concurrent.TimeUnit;
-
-PulsarClient pulsarClient = PulsarClient.builder()
-        .serviceUrl("pulsar://localhost:6650")
-        .build();
-Producer producer = pulsarClient.newProducer()
-        .producerName("producer-1")
-        .topic("persistent://public/default/topic-1")
-        .sendTimeout(0, TimeUnit.SECONDS)
-        .create();
-
-```
-
-
-
-To enable message deduplication on a [Python producer](client-libraries-python.md#producers), set the producer name using `producer_name`, and set the timeout to `0` using `send_timeout_millis`.
-
-```python
-
-import pulsar
-
-client = pulsar.Client("pulsar://localhost:6650")
-producer = client.create_producer(
-    "persistent://public/default/topic-1",
-    producer_name="producer-1",
-    send_timeout_millis=0)
-
-```
-
-
-
-To enable message deduplication on a [C++ producer](client-libraries-cpp.md#producer), set the producer name using `setProducerName`, and set the timeout to `0` using `setSendTimeout`.
-
-```cpp
-
-#include <pulsar/Client.h>
-
-std::string serviceUrl = "pulsar://localhost:6650";
-std::string topic = "persistent://some-tenant/ns1/topic-1";
-std::string producerName = "producer-1";
-
-Client client(serviceUrl);
-
-ProducerConfiguration producerConfig;
-producerConfig.setSendTimeout(0);
-producerConfig.setProducerName(producerName);
-
-Producer producer;
-
-Result result = client.createProducer(topic, producerConfig, producer);
-
-```
-
-
-
-````
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/cookbooks-encryption.md b/site2/website/versioned_docs/version-2.9.1-deprecated/cookbooks-encryption.md
deleted file mode 100644
index f0d8fb8735eb63..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/cookbooks-encryption.md
+++ /dev/null
@@ -1,184 +0,0 @@
----
-id: cookbooks-encryption
-title: Pulsar Encryption
-sidebar_label: "Encryption"
-original_id: cookbooks-encryption
----
-
-Pulsar encryption allows applications to encrypt messages at the producer and decrypt them at the consumer. Encryption is performed using a public/private key pair configured by the application. Encrypted messages can only be decrypted by consumers with a valid key.
-
-## Asymmetric and symmetric encryption
-
-Pulsar uses a dynamically generated symmetric AES key to encrypt messages (data). The AES key (data key) is encrypted using an application-provided ECDSA/RSA key pair; as a result, there is no need to share the secret with everyone.
-
-A key is a public/private key pair used for encryption/decryption. The producer key is the public key, and the consumer key is the private key of the key pair.
-
-The application configures the producer with the public key. This key is used to encrypt the AES data key. The encrypted data key is sent as part of the message header.
Only entities with the private key (in this case, the consumer) will be able to decrypt the data key, which is used to decrypt the message.
-
-A message can be encrypted with more than one key. Any one of the keys used for encrypting the message is sufficient to decrypt the message.
-
-Pulsar does not store the encryption key anywhere in the Pulsar service. If you lose/delete the private key, your message is irretrievably lost and cannot be recovered.
-
-## Producer
-![alt text](/assets/pulsar-encryption-producer.jpg "Pulsar Encryption Producer")
-
-## Consumer
-![alt text](/assets/pulsar-encryption-consumer.jpg "Pulsar Encryption Consumer")
-
-## Here are the steps to get started:
-
-1. Create your ECDSA or RSA public/private key pair.
-
-```shell
-
-openssl ecparam -name secp521r1 -genkey -param_enc explicit -out test_ecdsa_privkey.pem
-openssl ec -in test_ecdsa_privkey.pem -pubout -outform pkcs8 -out test_ecdsa_pubkey.pem
-
-```
-
-2. Add the public and private keys to your key management system, and configure your producer clients to retrieve public keys and your consumer clients to retrieve private keys.
-3. Implement the CryptoKeyReader::getPublicKey() interface on the producer side and the CryptoKeyReader::getPrivateKey() interface on the consumer side; both are invoked by the Pulsar client to load the keys.
-4. Add the encryption key to the producer configuration: conf.addEncryptionKey("myapp.key")
-5. Add the CryptoKeyReader implementation to the producer/consumer config: conf.setCryptoKeyReader(keyReader)
-6. Sample producer application:
-
-```java
-
-class RawFileKeyReader implements CryptoKeyReader {
-
-    String publicKeyFile = "";
-    String privateKeyFile = "";
-
-    RawFileKeyReader(String pubKeyFile, String privKeyFile) {
-        publicKeyFile = pubKeyFile;
-        privateKeyFile = privKeyFile;
-    }
-
-    @Override
-    public EncryptionKeyInfo getPublicKey(String keyName, Map<String, String> keyMeta) {
-        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
-        try {
-            keyInfo.setKey(Files.readAllBytes(Paths.get(publicKeyFile)));
-        } catch (IOException e) {
-            System.out.println("ERROR: Failed to read public key from file " + publicKeyFile);
-            e.printStackTrace();
-        }
-        return keyInfo;
-    }
-
-    @Override
-    public EncryptionKeyInfo getPrivateKey(String keyName, Map<String, String> keyMeta) {
-        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
-        try {
-            keyInfo.setKey(Files.readAllBytes(Paths.get(privateKeyFile)));
-        } catch (IOException e) {
-            System.out.println("ERROR: Failed to read private key from file " + privateKeyFile);
-            e.printStackTrace();
-        }
-        return keyInfo;
-    }
-}
-
-PulsarClient pulsarClient = PulsarClient.create("http://localhost:8080");
-
-ProducerConfiguration prodConf = new ProducerConfiguration();
-prodConf.setCryptoKeyReader(new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem"));
-prodConf.addEncryptionKey("myappkey");
-
-Producer producer = pulsarClient.createProducer("persistent://my-tenant/my-ns/my-topic", prodConf);
-
-for (int i = 0; i < 10; i++) {
-    producer.send("my-message".getBytes());
-}
-
-pulsarClient.close();
-
-```
-
-7. Sample consumer application:
-
-```java
-
-class RawFileKeyReader implements CryptoKeyReader {
-
-    String publicKeyFile = "";
-    String privateKeyFile = "";
-
-    RawFileKeyReader(String pubKeyFile, String privKeyFile) {
-        publicKeyFile = pubKeyFile;
-        privateKeyFile = privKeyFile;
-    }
-
-    @Override
-    public EncryptionKeyInfo getPublicKey(String keyName, Map<String, String> keyMeta) {
-        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
-        try {
-            keyInfo.setKey(Files.readAllBytes(Paths.get(publicKeyFile)));
-        } catch (IOException e) {
-            System.out.println("ERROR: Failed to read public key from file " + publicKeyFile);
-            e.printStackTrace();
-        }
-        return keyInfo;
-    }
-
-    @Override
-    public EncryptionKeyInfo getPrivateKey(String keyName, Map<String, String> keyMeta) {
-        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
-        try {
-            keyInfo.setKey(Files.readAllBytes(Paths.get(privateKeyFile)));
-        } catch (IOException e) {
-            System.out.println("ERROR: Failed to read private key from file " + privateKeyFile);
-            e.printStackTrace();
-        }
-        return keyInfo;
-    }
-}
-
-ConsumerConfiguration consConf = new ConsumerConfiguration();
-consConf.setCryptoKeyReader(new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem"));
-PulsarClient pulsarClient = PulsarClient.create("http://localhost:8080");
-Consumer consumer = pulsarClient.subscribe("persistent://my-tenant/my-ns/my-topic", "my-subscriber-name", consConf);
-Message msg = null;
-
-for (int i = 0; i < 10; i++) {
-    msg = consumer.receive();
-    // do something
-    System.out.println("Received: " + new String(msg.getData()));
-}
-
-// Acknowledge the consumption of all messages at once
-consumer.acknowledgeCumulative(msg);
-pulsarClient.close();
-
-```
-
-## Key rotation
-Pulsar generates a new AES data key every 4 hours or after a certain number of messages are published. The asymmetric public key is automatically fetched by the producer every 4 hours by calling CryptoKeyReader::getPublicKey() to retrieve the latest version.
-
-## Enabling encryption at the producer application:
-If you produce messages that are consumed across application boundaries, you need to ensure that consumers in other applications have access to one of the private keys that can decrypt the messages. This can be done in two ways:
-1. The consumer application provides you access to their public key, which you add to your producer keys
-1. You grant access to one of the private keys from the pairs used by the producer
-
-In some cases, the producer may want to encrypt the messages with multiple keys. For this, add all such keys to the config. The consumer will be able to decrypt the message as long as it has access to at least one of the keys.
-
-For example, if messages need to be encrypted using two keys, myapp.messagekey1 and myapp.messagekey2:
-
-```java
-
-conf.addEncryptionKey("myapp.messagekey1");
-conf.addEncryptionKey("myapp.messagekey2");
-
-```
-
-## Decrypting encrypted messages at the consumer application:
-Consumers require access to one of the private keys to decrypt messages produced by the producer. If you would like to receive encrypted messages, create a public/private key pair and give your public key to the producer application to encrypt messages using your public key.
-
-## Handling Failures:
-* Producer/consumer loses access to the key
-  * The producer action will fail, indicating the cause of the failure. The application has the option to proceed with sending an unencrypted message in such cases. Call conf.setCryptoFailureAction(ProducerCryptoFailureAction) to control the producer behavior.
The default behavior is to fail the request.
-  * If consumption fails due to a decryption failure or missing keys in the consumer, the application has the option to consume the encrypted message or discard it. Call conf.setCryptoFailureAction(ConsumerCryptoFailureAction) to control the consumer behavior. The default behavior is to fail the request.
-The application will never be able to decrypt the messages if the private key is permanently lost.
-* Batch messaging
-  * If decryption fails and the message contains batch messages, the client will not be able to retrieve individual messages in the batch, so message consumption fails even if conf.setCryptoFailureAction() is set to CONSUME.
-* If decryption fails, the message consumption stops and the application will notice backlog growth in addition to decryption failure messages in the client log. If the application does not have access to the private key to decrypt the message, the only option is to skip/discard backlogged messages.
-
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/cookbooks-message-queue.md b/site2/website/versioned_docs/version-2.9.1-deprecated/cookbooks-message-queue.md
deleted file mode 100644
index eb43cbde5fb818..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/cookbooks-message-queue.md
+++ /dev/null
@@ -1,127 +0,0 @@
----
-id: cookbooks-message-queue
-title: Using Pulsar as a message queue
-sidebar_label: "Message queue"
-original_id: cookbooks-message-queue
----
-
-Message queues are essential components of many large-scale data architectures. If every single work object that passes through your system absolutely *must* be processed in spite of the slowness or downright failure of this or that system component, there's a good chance that you'll need a message queue to step in and ensure that unprocessed data is retained---with correct ordering---until the required actions are taken.
-
-Pulsar is a great choice for a message queue because:
-
-* it was built with [persistent message storage](concepts-architecture-overview.md#persistent-storage) in mind
-* it offers automatic load balancing across [consumers](reference-terminology.md#consumer) for messages on a topic (or custom load balancing if you wish)
-
-> You can use the same Pulsar installation to act as a real-time message bus and as a message queue if you wish (or just one or the other). You can set aside some topics for real-time purposes and other topics for message queue purposes (or use specific namespaces for either purpose if you wish).
-
-
-# Client configuration changes
-
-To use a Pulsar [topic](reference-terminology.md#topic) as a message queue, you should distribute the receiver load on that topic across several consumers (the optimal number of consumers will depend on the load). Each consumer must:
-
-* Establish a [shared subscription](concepts-messaging.md#shared) and use the same subscription name as the other consumers (otherwise the subscription is not shared and the consumers can't act as a processing ensemble)
-* If you'd like to have tight control over message dispatching across consumers, set the consumers' **receiver queue** size very low (potentially even to 0 if necessary). Each Pulsar [consumer](reference-terminology.md#consumer) has a receiver queue that determines how many messages the consumer will attempt to fetch at a time. A receiver queue of 1000 (the default), for example, means that the consumer will attempt to process 1000 messages from the topic's backlog upon connection.
Setting the receiver queue to zero essentially means ensuring that each consumer is only doing one thing at a time.
-
-  The downside to restricting the receiver queue size of consumers is that it limits the potential throughput of those consumers, and a receiver queue size of zero cannot be used with [partitioned topics](reference-terminology.md#partitioned-topic). Whether the performance/control trade-off is worthwhile will depend on your use case.
-
-## Java clients
-
-Here's an example Java consumer configuration that uses a shared subscription:
-
-```java
-
-import org.apache.pulsar.client.api.Consumer;
-import org.apache.pulsar.client.api.PulsarClient;
-import org.apache.pulsar.client.api.SubscriptionType;
-
-String SERVICE_URL = "pulsar://localhost:6650";
-String TOPIC = "persistent://public/default/mq-topic-1";
-String subscription = "sub-1";
-
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl(SERVICE_URL)
-        .build();
-
-Consumer consumer = client.newConsumer()
-        .topic(TOPIC)
-        .subscriptionName(subscription)
-        .subscriptionType(SubscriptionType.Shared)
-        // If you'd like to restrict the receiver queue size
-        .receiverQueueSize(10)
-        .subscribe();
-
-```
-
-## Python clients
-
-Here's an example Python consumer configuration that uses a shared subscription:
-
-```python
-
-from pulsar import Client, ConsumerType
-
-SERVICE_URL = "pulsar://localhost:6650"
-TOPIC = "persistent://public/default/mq-topic-1"
-SUBSCRIPTION = "sub-1"
-
-client = Client(SERVICE_URL)
-consumer = client.subscribe(
-    TOPIC,
-    SUBSCRIPTION,
-    # If you'd like to restrict the receiver queue size
-    receiver_queue_size=10,
-    consumer_type=ConsumerType.Shared)
-
-```
-
-## C++ clients
-
-Here's an example C++ consumer configuration that uses a shared subscription:
-
-```cpp
-
-#include <pulsar/Client.h>
-
-std::string serviceUrl = "pulsar://localhost:6650";
-std::string topic = "persistent://public/default/mq-topic-1";
-std::string subscription = "sub-1";
-
-Client client(serviceUrl);
-
-ConsumerConfiguration consumerConfig;
-consumerConfig.setConsumerType(ConsumerShared);
-// If you'd like to restrict the receiver queue size
-consumerConfig.setReceiverQueueSize(10);
-
-Consumer consumer;
-
-Result result = client.subscribe(topic, subscription, consumerConfig, consumer);
-
-```
-
-## Go clients
-
-Here is an example of a Go consumer configuration that uses a shared subscription:
-
-```go
-
-import "github.com/apache/pulsar-client-go/pulsar"
-
-client, err := pulsar.NewClient(pulsar.ClientOptions{
-    URL: "pulsar://localhost:6650",
-})
-if err != nil {
-    log.Fatal(err)
-}
-consumer, err := client.Subscribe(pulsar.ConsumerOptions{
-    Topic:             "persistent://public/default/mq-topic-1",
-    SubscriptionName:  "sub-1",
-    Type:              pulsar.Shared,
-    ReceiverQueueSize: 10, // If you'd like to restrict the receiver queue size
-})
-if err != nil {
-    log.Fatal(err)
-}
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/cookbooks-non-persistent.md b/site2/website/versioned_docs/version-2.9.1-deprecated/cookbooks-non-persistent.md
deleted file mode 100644
index 178301e86eb8df..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/cookbooks-non-persistent.md
+++ /dev/null
@@ -1,63 +0,0 @@
----
-id: cookbooks-non-persistent
-title: Non-persistent messaging
-sidebar_label: "Non-persistent messaging"
-original_id: cookbooks-non-persistent
----
-
-**Non-persistent topics** are Pulsar topics in which message data is *never* [persistently stored](concepts-architecture-overview.md#persistent-storage) and kept only in memory.
This cookbook provides:
-
-* A basic [conceptual overview](#overview) of non-persistent topics
-* Information about [configurable parameters](#configuration) related to non-persistent topics
-* A guide to the [CLI interface](#cli) for managing non-persistent topics
-
-## Overview
-
-By default, Pulsar persistently stores *all* unacknowledged messages on multiple [BookKeeper](#persistent-storage) bookies (storage nodes). Data for messages on persistent topics can thus survive broker restarts and subscriber failover.
-
-Pulsar also, however, supports **non-persistent topics**, which are topics on which messages are *never* persisted to disk and live only in memory. When using non-persistent delivery, killing a Pulsar [broker](reference-terminology.md#broker) or disconnecting a subscriber to a topic means that all in-transit messages are lost on that (non-persistent) topic, meaning that clients may see message loss.
-
-Non-persistent topics have names of this form (note the `non-persistent` in the name):
-
-```http
-
-non-persistent://tenant/namespace/topic
-
-```
-
-> For more high-level information about non-persistent topics, see the [Concepts and Architecture](concepts-messaging.md#non-persistent-topics) documentation.
-
-## Using
-
-> In order to use non-persistent topics, they must be [enabled](#enabling) in your Pulsar broker configuration.
-
-In order to use non-persistent topics, you only need to differentiate them by name when interacting with them. This [`pulsar-client produce`](reference-cli-tools.md#pulsar-client-produce) command, for example, would produce one message on a non-persistent topic in a standalone cluster:
-
-```bash
-
-$ bin/pulsar-client produce non-persistent://public/default/example-np-topic \
-  --num-produce 1 \
-  --messages "This message will be stored only in memory"
-
-```
-
-> For a more thorough guide to non-persistent topics from an administrative perspective, see the [Non-persistent topics](admin-api-topics.md) guide.
-
-## Enabling
-
-In order to enable non-persistent topics in a Pulsar broker, the [`enableNonPersistentTopics`](reference-configuration.md#broker-enableNonPersistentTopics) parameter must be set to `true`. This is the default, and so you won't need to take any action to enable non-persistent messaging.
-
-
-> #### Configuration for standalone mode
-> If you're running Pulsar in standalone mode, the same configurable parameters are available but in the [`standalone.conf`](reference-configuration.md#standalone) configuration file.
-
-If you'd like to enable *only* non-persistent topics in a broker, you can set the [`enablePersistentTopics`](reference-configuration.md#broker-enablePersistentTopics) parameter to `false` and the `enableNonPersistentTopics` parameter to `true`.
-
-## Managing with CLI
-
-Non-persistent topics can be managed using the [`pulsar-admin non-persistent`](reference-pulsar-admin.md#non-persistent) command-line interface. With that interface you can perform actions like [create a partitioned non-persistent topic](reference-pulsar-admin.md#non-persistent-create-partitioned-topic), get [stats](reference-pulsar-admin.md#non-persistent-stats) for a non-persistent topic, [list](reference-pulsar-admin.md) non-persistent topics under a namespace, and more.
-
-## Using with Pulsar clients
-
-You shouldn't need to make any changes to your Pulsar clients to use non-persistent messaging beyond making sure that you use proper [topic names](#using) with `non-persistent` as the topic type.
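
To make that last point concrete, here is a minimal Java sketch (an illustrative addition, not part of the original cookbook; the service URL and topic name are placeholders). The only thing that marks the topic as non-persistent is its name:

```java

import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;

PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650")
        .build();

// The `non-persistent` scheme in the topic name is the only change;
// the producer API is otherwise identical to the persistent case.
Producer<byte[]> producer = client.newProducer()
        .topic("non-persistent://public/default/example-np-topic")
        .create();

producer.send("This message will be stored only in memory".getBytes());

producer.close();
client.close();

```
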
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/cookbooks-partitioned.md b/site2/website/versioned_docs/version-2.9.1-deprecated/cookbooks-partitioned.md
deleted file mode 100644
index fb9ac354cc6d60..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/cookbooks-partitioned.md
+++ /dev/null
@@ -1,7 +0,0 @@
---
id: cookbooks-partitioned
title: Partitioned topics
sidebar_label: "Partitioned Topics"
original_id: cookbooks-partitioned
---
For details of the content, refer to [manage topics](admin-api-topics.md).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/cookbooks-retention-expiry.md b/site2/website/versioned_docs/version-2.9.1-deprecated/cookbooks-retention-expiry.md
deleted file mode 100644
index c8c46b3caa1bee..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/cookbooks-retention-expiry.md
+++ /dev/null
@@ -1,498 +0,0 @@
---
id: cookbooks-retention-expiry
title: Message retention and expiry
sidebar_label: "Message retention and expiry"
original_id: cookbooks-retention-expiry
---

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````


Pulsar brokers are responsible for handling messages that pass through Pulsar, including [persistent storage](concepts-architecture-overview.md#persistent-storage) of messages. By default, for each topic, brokers only retain messages that are in at least one backlog. A backlog is the set of unacknowledged messages for a particular subscription. As a topic can have multiple subscriptions, a topic can have multiple backlogs.

As a consequence, no messages are retained (by default) on a topic that has not had any subscriptions created for it.

(Note that messages that are no longer being stored are not necessarily immediately deleted, and may in fact still be accessible until the next ledger rollover. Because clients cannot predict when rollovers may happen, it is not wise to rely on a rollover not happening at an inconvenient point in time.)

In Pulsar, you can modify this behavior, with namespace granularity, in two ways:

* You can persistently store messages that are not within a backlog (because they've been acknowledged on every existing subscription, or because there are no subscriptions) by setting [retention policies](#retention-policies).
* Messages that are not acknowledged within a specified timeframe can be automatically acknowledged, by specifying the [time to live](#time-to-live-ttl) (TTL).

Pulsar's [admin interface](admin-api-overview.md) enables you to manage both retention policies and TTL with namespace granularity (and thus within a specific tenant and either on a specific cluster or in the [`global`](concepts-architecture-overview.md#global-cluster) cluster).


> #### Retention and TTL solve two different problems
> * Message retention: Keep the data for at least X hours (even if acknowledged)
> * Time-to-live: Discard data after some time (by automatically acknowledging)
>
> Most applications will want to use at most one of these.


## Retention policies

By default, when a Pulsar message arrives at a broker, the message is stored until it has been acknowledged on all subscriptions, at which point it is marked for deletion. You can override this behavior and retain messages that have already been acknowledged on all subscriptions by setting a *retention policy* for all topics in a given namespace.
Retention is based on both a *size limit* and a *time limit*. - -Retention policies are useful when you use the Reader interface. The Reader interface does not use acknowledgements, and messages do not exist within backlogs. It is required to configure retention for Reader-only use cases. - -When you set a retention policy on topics in a namespace, you must set **both** a *size limit* and a *time limit*. You can refer to the following table to set retention policies in `pulsar-admin` and Java. - -|Time limit|Size limit| Message retention | -|----------|----------|------------------------| -| -1 | -1 | Infinite retention | -| -1 | >0 | Based on the size limit | -| >0 | -1 | Based on the time limit | -| 0 | 0 | Disable message retention (by default) | -| 0 | >0 | Invalid | -| >0 | 0 | Invalid | -| >0 | >0 | Acknowledged messages or messages with no active subscription will not be retained when either time or size reaches the limit. | - -The retention settings apply to all messages on topics that do not have any subscriptions, or to messages that have been acknowledged by all subscriptions. The retention policy settings do not affect unacknowledged messages on topics with subscriptions. The unacknowledged messages are controlled by the backlog quota. - -When a retention limit on a topic is exceeded, the oldest message is marked for deletion until the set of retained messages falls within the specified limits again. - -### Defaults - -You can set message retention at instance level with the following two parameters: `defaultRetentionTimeInMinutes` and `defaultRetentionSizeInMB`. Both parameters are set to `0` by default. - -For more information of the two parameters, refer to the [`broker.conf`](reference-configuration.md#broker) configuration file. - -### Set retention policy - -You can set a retention policy for a namespace by specifying the namespace, a size limit and a time limit in `pulsar-admin`, REST API and Java. - -````mdx-code-block - - - -You can use the [`set-retention`](reference-pulsar-admin.md#namespaces-set-retention) subcommand and specify a namespace, a size limit using the `-s`/`--size` flag, and a time limit using the `-t`/`--time` flag. - -In the following example, the size limit is set to 10 GB and the time limit is set to 3 hours for each topic within the `my-tenant/my-ns` namespace. -- When the size of messages reaches 10 GB on a topic within 3 hours, the acknowledged messages will not be retained. -- After 3 hours, even if the message size is less than 10 GB, the acknowledged messages will not be retained. - -```shell - -$ pulsar-admin namespaces set-retention my-tenant/my-ns \ - --size 10G \ - --time 3h - -``` - -In the following example, the time is not limited and the size limit is set to 1 TB. The size limit determines the retention. - -```shell - -$ pulsar-admin namespaces set-retention my-tenant/my-ns \ - --size 1T \ - --time -1 - -``` - -In the following example, the size is not limited and the time limit is set to 3 hours. The time limit determines the retention. - -```shell - -$ pulsar-admin namespaces set-retention my-tenant/my-ns \ - --size -1 \ - --time 3h - -``` - -To achieve infinite retention, set both values to `-1`. - -```shell - -$ pulsar-admin namespaces set-retention my-tenant/my-ns \ - --size -1 \ - --time -1 - -``` - -To disable the retention policy, set both values to `0`. 

```shell

$ pulsar-admin namespaces set-retention my-tenant/my-ns \
  --size 0 \
  --time 0

```




{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/retention|operation/setRetention?version=@pulsar:version_number@}

:::note

To disable the retention policy, you need to set both the size and time limit to `0`. Setting only one of them (either size or time) to `0` is invalid.

:::




```java

int retentionTime = 10; // 10 minutes
int retentionSize = 500; // 500 megabytes
RetentionPolicies policies = new RetentionPolicies(retentionTime, retentionSize);
admin.namespaces().setRetention(namespace, policies);

```




````

### Get retention policy

You can fetch the retention policy for a namespace by specifying the namespace. The output will be a JSON object with two keys: `retentionTimeInMinutes` and `retentionSizeInMB`.

````mdx-code-block



Use the [`get-retention`](reference-pulsar-admin.md#namespaces) subcommand and specify the namespace.

##### Example

```shell

$ pulsar-admin namespaces get-retention my-tenant/my-ns
{
  "retentionTimeInMinutes": 10,
  "retentionSizeInMB": 500
}

```




{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/retention|operation/getRetention?version=@pulsar:version_number@}




```java

admin.namespaces().getRetention(namespace);

```




````

## Backlog quotas

*Backlogs* are sets of unacknowledged messages for a topic that have been stored by bookies. Pulsar stores all unacknowledged messages in backlogs until they are processed and acknowledged.

You can control the allowable size of backlogs, at the namespace level, using *backlog quotas*. Setting a backlog quota involves setting:

* an allowable *size threshold* for each topic in the namespace
* a *retention policy* that determines which action the [broker](reference-terminology.md#broker) takes if the threshold is exceeded.

The following retention policies are available:

Policy | Action
:------|:------
`producer_request_hold` | The broker will hold and not persist produce request payload
`producer_exception` | The broker will disconnect from the client by throwing an exception
`consumer_backlog_eviction` | The broker will begin discarding backlog messages


> #### Beware the distinction between retention policy types
> As you may have noticed, there are two definitions of the term "retention policy" in Pulsar, one that applies to persistent storage of messages not in backlogs, and one that applies to messages within backlogs.


Backlog quotas are handled at the namespace level. They can be managed with the operations described below.

### Set size/time thresholds and backlog retention policies

You can set a size and/or time threshold and backlog retention policy for all of the topics in a [namespace](reference-terminology.md#namespace) by specifying the namespace, a size limit and/or a time limit in seconds, and a policy by name.

````mdx-code-block



Use the [`set-backlog-quota`](reference-pulsar-admin.md#namespaces) subcommand and specify a namespace, a size limit using the `-l`/`--limit` flag or a time limit using the `-lt`/`--limitTime` flag, a retention policy using the `-p`/`--policy` flag, and a policy type using `-t`/`--type` (default is destination_storage).
- -##### Example - -```shell - -$ pulsar-admin namespaces set-backlog-quota my-tenant/my-ns \ - --limit 2G \ - --policy producer_request_hold - -``` - -```shell - -$ pulsar-admin namespaces set-backlog-quota my-tenant/my-ns/my-topic \ ---limitTime 3600 \ ---policy producer_request_hold \ ---type message_age - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/getBacklogQuotaMap?version=@pulsar:version_number@} - - - - -```java - -long sizeLimit = 2147483648L; -BacklogQuota.RetentionPolicy policy = BacklogQuota.RetentionPolicy.producer_request_hold; -BacklogQuota quota = new BacklogQuota(sizeLimit, policy); -admin.namespaces().setBacklogQuota(namespace, quota); - -``` - - - - -```` - -### Get backlog threshold and backlog retention policy - -You can see which size threshold and backlog retention policy has been applied to a namespace. - -````mdx-code-block - - - -Use the [`get-backlog-quotas`](reference-pulsar-admin.md#pulsar-admin-namespaces-get-backlog-quotas) subcommand and specify a namespace. Here's an example: - -```shell - -$ pulsar-admin namespaces get-backlog-quotas my-tenant/my-ns -{ - "destination_storage": { - "limit" : 2147483648, - "policy" : "producer_request_hold" - } -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/backlogQuotaMap|operation/getBacklogQuotaMap?version=@pulsar:version_number@} - - - - -```java - -Map quotas = - admin.namespaces().getBacklogQuotas(namespace); - -``` - - - - -```` - -### Remove backlog quotas - -````mdx-code-block - - - -Use the [`remove-backlog-quota`](reference-pulsar-admin.md#pulsar-admin-namespaces-remove-backlog-quota) subcommand and specify a namespace, use `t`/`--type` to specify backlog type to remove(default is destination_storage). Here's an example: - -```shell - -$ pulsar-admin namespaces remove-backlog-quota my-tenant/my-ns - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/removeBacklogQuota?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().removeBacklogQuota(namespace); - -``` - - - - -```` - -### Clear backlog - -#### pulsar-admin - -Use the [`clear-backlog`](reference-pulsar-admin.md#pulsar-admin-namespaces-clear-backlog) subcommand. - -##### Example - -```shell - -$ pulsar-admin namespaces clear-backlog my-tenant/my-ns - -``` - -By default, you will be prompted to ensure that you really want to clear the backlog for the namespace. You can override the prompt using the `-f`/`--force` flag. - -## Time to live (TTL) - -By default, Pulsar stores all unacknowledged messages forever. This can lead to heavy disk space usage in cases where a lot of messages are going unacknowledged. If disk space is a concern, you can set a time to live (TTL) that determines how long unacknowledged messages will be retained. - -### Set the TTL for a namespace - -````mdx-code-block - - - -Use the [`set-message-ttl`](reference-pulsar-admin.md#pulsar-admin-namespaces-set-message-ttl) subcommand and specify a namespace and a TTL (in seconds) using the `-ttl`/`--messageTTL` flag. 

##### Example

```shell

$ pulsar-admin namespaces set-message-ttl my-tenant/my-ns \
  --messageTTL 120 # TTL of 2 minutes

```




{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/setNamespaceMessageTTL?version=@pulsar:version_number@}




```java

admin.namespaces().setNamespaceMessageTTL(namespace, ttlInSeconds);

```




````

### Get the TTL configuration for a namespace

````mdx-code-block



Use the [`get-message-ttl`](reference-pulsar-admin.md#pulsar-admin-namespaces-get-message-ttl) subcommand and specify a namespace.

##### Example

```shell

$ pulsar-admin namespaces get-message-ttl my-tenant/my-ns
60

```




{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/getNamespaceMessageTTL?version=@pulsar:version_number@}




```java

admin.namespaces().getNamespaceMessageTTL(namespace)

```




````

### Remove the TTL configuration for a namespace

````mdx-code-block



Use the [`remove-message-ttl`](reference-pulsar-admin.md#pulsar-admin-namespaces-remove-message-ttl) subcommand and specify a namespace.

##### Example

```shell

$ pulsar-admin namespaces remove-message-ttl my-tenant/my-ns

```




{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/removeNamespaceMessageTTL?version=@pulsar:version_number@}




```java

admin.namespaces().removeNamespaceMessageTTL(namespace)

```




````

## Delete messages from namespaces

If you do not set any retention period and never have much of a backlog, the upper limit for retaining acknowledged messages equals the Pulsar segment rollover period + entry log rollover period + (garbage collection interval * garbage collection ratios).

- **Segment rollover period**: basically, the segment rollover period is how often a new segment is created. Once a new segment is created, the old segment will be deleted. By default, this happens either when you have written 50,000 entries (messages) or have waited 240 minutes. You can tune this in your broker.

- **Entry log rollover period**: multiple ledgers in BookKeeper are interleaved into an [entry log](https://bookkeeper.apache.org/docs/4.11.1/getting-started/concepts/#entry-logs). Before the data of a deleted ledger can be reclaimed, the entry log that contains it must be rolled over. The entry log rollover period is configurable, but is purely based on the entry log size. For details, see [here](https://bookkeeper.apache.org/docs/4.11.1/reference/config/#entry-log-settings). Once the entry log is rolled over, the entry log can be garbage collected.

- **Garbage collection interval**: because entry logs have interleaved ledgers, to free up space, the entry logs need to be rewritten. The garbage collection interval is how often BookKeeper performs garbage collection, which is related to minor compaction and major compaction of entry logs. For details, see [here](https://bookkeeper.apache.org/docs/4.11.1/reference/config/#entry-log-compaction-settings).
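As a back-of-the-envelope illustration of that formula, here is a small Java sketch. All four inputs are hypothetical values chosen for the example; read the real ones from your own broker and bookie configuration:

```java

// Hypothetical values, for illustration only
long segmentRolloverMinutes = 240;  // broker segment rollover period
long entryLogRolloverMinutes = 60;  // time until the entry log happens to roll over
long gcIntervalMinutes = 15;        // BookKeeper garbage collection interval
long gcRatio = 2;                   // garbage collection ratio

long upperBoundMinutes = segmentRolloverMinutes
        + entryLogRolloverMinutes
        + gcIntervalMinutes * gcRatio;

// 240 + 60 + (15 * 2) = 330 minutes
System.out.println("Acknowledged data may linger up to ~" + upperBoundMinutes + " minutes");

```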
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/cookbooks-tiered-storage.md b/site2/website/versioned_docs/version-2.9.1-deprecated/cookbooks-tiered-storage.md
deleted file mode 100644
index 3f87de62ca8a14..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/cookbooks-tiered-storage.md
+++ /dev/null
@@ -1,346 +0,0 @@
---
id: cookbooks-tiered-storage
title: Tiered Storage
sidebar_label: "Tiered Storage"
original_id: cookbooks-tiered-storage
---

Pulsar's **Tiered Storage** feature allows older backlog data to be offloaded to long term storage, thereby freeing up space in BookKeeper and reducing storage costs. This cookbook walks you through using tiered storage in your Pulsar cluster.

* Tiered storage uses [Apache jclouds](https://jclouds.apache.org) to support [Amazon S3](https://aws.amazon.com/s3/) and [Google Cloud Storage](https://cloud.google.com/storage/) (GCS for short)
for long term storage. With jclouds, it is easy to add support for more [cloud storage providers](https://jclouds.apache.org/reference/providers/#blobstore-providers) in the future.

* Tiered storage uses [Apache Hadoop](http://hadoop.apache.org/) to support filesystems for long term storage.
With Hadoop, it is easy to add support for more filesystems in the future.

## When should I use Tiered Storage?

Tiered storage should be used when you have a topic for which you want to keep a very long backlog for a long time. For example, if you have a topic containing user actions which you use to train your recommendation systems, you may want to keep that data for a long time, so that if you change your recommendation algorithm you can rerun it against your full user history.

## The offloading mechanism

A topic in Pulsar is backed by a log, known as a managed ledger. This log is composed of an ordered list of segments. Pulsar only ever writes to the final segment of the log. All previous segments are sealed. The data within the segment is immutable. This is known as a segment oriented architecture.

![Tiered storage](/assets/pulsar-tiered-storage.png "Tiered Storage")

The Tiered Storage offloading mechanism takes advantage of this segment oriented architecture. When offloading is requested, the segments of the log are copied, one-by-one, to tiered storage. All segments of the log, apart from the segment currently being written to, can be offloaded.

On the broker, the administrator must configure the bucket and credentials for the cloud storage service.
The configured bucket must exist before attempting to offload. If it does not exist, the offload operation will fail.

Pulsar uses multi-part objects to upload the segment data. It is possible that a broker could crash while uploading the data.
We recommend you add a life cycle rule to your bucket to expire incomplete multi-part uploads after a day or two to avoid
getting charged for incomplete uploads.

When ledgers are offloaded to long term storage, you can still query data in the offloaded ledgers with Pulsar SQL.

## Configuring the offload driver

Offloading is configured in ```broker.conf```.

At a minimum, the administrator must configure the driver, the bucket and the authenticating credentials.
There are also some other knobs to configure, like the bucket region, the max block size in backed storage, etc.
- -Currently we support driver of types: - -- `aws-s3`: [Simple Cloud Storage Service](https://aws.amazon.com/s3/) -- `google-cloud-storage`: [Google Cloud Storage](https://cloud.google.com/storage/) -- `filesystem`: [Filesystem Storage](http://hadoop.apache.org/) - -> Driver names are case-insensitive for driver's name. There is a third driver type, `s3`, which is identical to `aws-s3`, -> though it requires that you specify an endpoint url using `s3ManagedLedgerOffloadServiceEndpoint`. This is useful if -> using a S3 compatible data store, other than AWS. - -```conf - -managedLedgerOffloadDriver=aws-s3 - -``` - -### "aws-s3" Driver configuration - -#### Bucket and Region - -Buckets are the basic containers that hold your data. -Everything that you store in Cloud Storage must be contained in a bucket. -You can use buckets to organize your data and control access to your data, -but unlike directories and folders, you cannot nest buckets. - -```conf - -s3ManagedLedgerOffloadBucket=pulsar-topic-offload - -``` - -Bucket Region is the region where bucket located. Bucket Region is not a required -but a recommended configuration. If it is not configured, It will use the default region. - -With AWS S3, the default region is `US East (N. Virginia)`. Page [AWS Regions and Endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html) contains more information. - -```conf - -s3ManagedLedgerOffloadRegion=eu-west-3 - -``` - -#### Authentication with AWS - -To be able to access AWS S3, you need to authenticate with AWS S3. -Pulsar does not provide any direct means of configuring authentication for AWS S3, -but relies on the mechanisms supported by the [DefaultAWSCredentialsProviderChain](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html). - -Once you have created a set of credentials in the AWS IAM console, they can be configured in a number of ways. - -1. Using ec2 instance metadata credentials - -If you are on AWS instance with an instance profile that provides credentials, Pulsar will use these credentials -if no other mechanism is provided - -2. Set the environment variables **AWS_ACCESS_KEY_ID** and **AWS_SECRET_ACCESS_KEY** in ```conf/pulsar_env.sh```. - -```bash - -export AWS_ACCESS_KEY_ID=ABC123456789 -export AWS_SECRET_ACCESS_KEY=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c - -``` - -> \"export\" is important so that the variables are made available in the environment of spawned processes. - - -3. Add the Java system properties *aws.accessKeyId* and *aws.secretKey* to **PULSAR_EXTRA_OPTS** in `conf/pulsar_env.sh`. - -```bash - -PULSAR_EXTRA_OPTS="${PULSAR_EXTRA_OPTS} ${PULSAR_MEM} ${PULSAR_GC} -Daws.accessKeyId=ABC123456789 -Daws.secretKey=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c -Dio.netty.leakDetectionLevel=disabled -Dio.netty.recycler.maxCapacityPerThread=4096" - -``` - -4. Set the access credentials in ```~/.aws/credentials```. - -```conf - -[default] -aws_access_key_id=ABC123456789 -aws_secret_access_key=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c - -``` - -5. Assuming an IAM role - -If you want to assume an IAM role, this can be done via specifying the following: - -```conf - -s3ManagedLedgerOffloadRole= -s3ManagedLedgerOffloadRoleSessionName=pulsar-s3-offload - -``` - -This will use the `DefaultAWSCredentialsProviderChain` for assuming this role. - -> The broker must be rebooted for credentials specified in pulsar_env to take effect. 
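Putting these pieces together, a minimal ```broker.conf``` fragment for the `aws-s3` driver might look like the following sketch. The bucket, region, and role values are placeholders; credentials are assumed to come from one of the mechanisms above:

```conf

# Example sketch only; substitute your own bucket, region and role
managedLedgerOffloadDriver=aws-s3
s3ManagedLedgerOffloadBucket=pulsar-topic-offload
s3ManagedLedgerOffloadRegion=eu-west-3

# Optional: assume an IAM role rather than using static credentials
# s3ManagedLedgerOffloadRole=arn:aws:iam::123456789012:role/pulsar-offload
# s3ManagedLedgerOffloadRoleSessionName=pulsar-s3-offload

```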
- -#### Configuring the size of block read/write - -Pulsar also provides some knobs to configure the size of requests sent to AWS S3. - -- ```s3ManagedLedgerOffloadMaxBlockSizeInBytes``` configures the maximum size of - a "part" sent during a multipart upload. This cannot be smaller than 5MB. Default is 64MB. -- ```s3ManagedLedgerOffloadReadBufferSizeInBytes``` configures the block size for - each individual read when reading back data from AWS S3. Default is 1MB. - -In both cases, these should not be touched unless you know what you are doing. - -### "google-cloud-storage" Driver configuration - -Buckets are the basic containers that hold your data. Everything that you store in -Cloud Storage must be contained in a bucket. You can use buckets to organize your data and -control access to your data, but unlike directories and folders, you cannot nest buckets. - -```conf - -gcsManagedLedgerOffloadBucket=pulsar-topic-offload - -``` - -Bucket Region is the region where bucket located. Bucket Region is not a required but -a recommended configuration. If it is not configured, It will use the default region. - -Regarding GCS, buckets are default created in the `us multi-regional location`, -page [Bucket Locations](https://cloud.google.com/storage/docs/bucket-locations) contains more information. - -```conf - -gcsManagedLedgerOffloadRegion=europe-west3 - -``` - -#### Authentication with GCS - -The administrator needs to configure `gcsManagedLedgerOffloadServiceAccountKeyFile` in `broker.conf` -for the broker to be able to access the GCS service. `gcsManagedLedgerOffloadServiceAccountKeyFile` is -a Json file, containing the GCS credentials of a service account. -[Service Accounts section of this page](https://support.google.com/googleapi/answer/6158849) contains -more information of how to create this key file for authentication. More information about google cloud IAM -is available [here](https://cloud.google.com/storage/docs/access-control/iam). - -To generate service account credentials or view the public credentials that you've already generated, follow the following steps: - -1. Open the [Service accounts page](https://console.developers.google.com/iam-admin/serviceaccounts). -2. Select a project or create a new one. -3. Click **Create service account**. -4. In the **Create service account** window, type a name for the service account, and select **Furnish a new private key**. If you want to [grant G Suite domain-wide authority](https://developers.google.com/identity/protocols/OAuth2ServiceAccount#delegatingauthority) to the service account, also select **Enable G Suite Domain-wide Delegation**. -5. Click **Create**. - -> Notes: Make ensure that the service account you create has permission to operate GCS, you need to assign **Storage Admin** permission to your service account in [here](https://cloud.google.com/storage/docs/access-control/iam). - -```conf - -gcsManagedLedgerOffloadServiceAccountKeyFile="/Users/hello/Downloads/project-804d5e6a6f33.json" - -``` - -#### Configuring the size of block read/write - -Pulsar also provides some knobs to configure the size of requests sent to GCS. - -- ```gcsManagedLedgerOffloadMaxBlockSizeInBytes``` configures the maximum size of a "part" sent - during a multipart upload. This cannot be smaller than 5MB. Default is 64MB. -- ```gcsManagedLedgerOffloadReadBufferSizeInBytes``` configures the block size for each individual - read when reading back data from GCS. Default is 1MB. 

In both cases, these should not be touched unless you know what you are doing.

### "filesystem" Driver configuration


#### Configure connection address

You can configure the connection address in the `broker.conf` file.

```conf

fileSystemURI="hdfs://127.0.0.1:9000"

```

#### Configure Hadoop profile path

The configuration file is stored in the Hadoop profile path. It contains various settings, such as base path, authentication, and so on.

```conf

fileSystemProfilePath="../conf/filesystem_offload_core_site.xml"

```

The model for storing topic data uses `org.apache.hadoop.io.MapFile`. You can use all of the configurations in `org.apache.hadoop.io.MapFile` for Hadoop.

**Example**

```conf

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value></value>
    </property>

    <property>
        <name>hadoop.tmp.dir</name>
        <value>pulsar</value>
    </property>

    <property>
        <name>io.file.buffer.size</name>
        <value>4096</value>
    </property>

    <property>
        <name>io.seqfile.compress.blocksize</name>
        <value>1000000</value>
    </property>

    <property>
        <name>io.seqfile.compression.type</name>
        <value>BLOCK</value>
    </property>

    <property>
        <name>io.map.index.interval</name>
        <value>128</value>
    </property>
</configuration>

```

For more information about the configurations in `org.apache.hadoop.io.MapFile`, see [Filesystem Storage](http://hadoop.apache.org/).

## Configuring offload to run automatically

Namespace policies can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that the topic has stored on the Pulsar cluster. Once the topic reaches the threshold, an offload operation will be triggered. Setting a negative value to the threshold will disable automatic offloading. Setting the threshold to 0 will cause the broker to offload data as soon as it possibly can.

```bash

$ bin/pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace

```

> Automatic offload runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, offload will not run until the current segment is full.

## Configuring read priority for offloaded messages

By default, once messages are offloaded to long term storage, brokers read them from long term storage, but the messages still exist in BookKeeper for a period that depends on the administrator's configuration. For messages that exist in both BookKeeper and long term storage, if you prefer to read them from BookKeeper, you can use the following commands to change this configuration.

```bash

# default value for -orp is tiered-storage-first
$ bin/pulsar-admin namespaces set-offload-policies my-tenant/my-namespace -orp bookkeeper-first
$ bin/pulsar-admin topics set-offload-policies my-tenant/my-namespace/topic1 -orp bookkeeper-first

```

## Triggering offload manually

Offloading can be manually triggered through a REST endpoint on the Pulsar broker. We provide a CLI which will call this REST endpoint for you.

When triggering offload, you must specify the maximum size, in bytes, of backlog which will be retained locally on BookKeeper. The offload mechanism will offload segments from the start of the topic backlog until this condition is met.

```bash

$ bin/pulsar-admin topics offload --size-threshold 10M my-tenant/my-namespace/topic1
Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1

```

The command to trigger an offload will not wait until the offload operation has completed. To check the status of the offload, use offload-status.

```bash

$ bin/pulsar-admin topics offload-status my-tenant/my-namespace/topic1
Offload is currently running

```

To wait for offload to complete, add the -w flag.
- -```bash - -$ bin/pulsar-admin topics offload-status -w my-tenant/my-namespace/topic1 -Offload was a success - -``` - -If there is an error offloading, the error will be propagated to the offload-status command. - -```bash - -$ bin/pulsar-admin topics offload-status persistent://public/default/topic1 -Error in offload -null - -Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g= - -``` - -` - diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/deploy-aws.md b/site2/website/versioned_docs/version-2.9.1-deprecated/deploy-aws.md deleted file mode 100644 index 93c389b56e2cf1..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/deploy-aws.md +++ /dev/null @@ -1,271 +0,0 @@ ---- -id: deploy-aws -title: Deploying a Pulsar cluster on AWS using Terraform and Ansible -sidebar_label: "Amazon Web Services" -original_id: deploy-aws ---- - -> For instructions on deploying a single Pulsar cluster manually rather than using Terraform and Ansible, see [Deploying a Pulsar cluster on bare metal](deploy-bare-metal.md). For instructions on manually deploying a multi-cluster Pulsar instance, see [Deploying a Pulsar instance on bare metal](deploy-bare-metal-multi-cluster.md). - -One of the easiest ways to get a Pulsar [cluster](reference-terminology.md#cluster) running on [Amazon Web Services](https://aws.amazon.com/) (AWS) is to use the [Terraform](https://terraform.io) infrastructure provisioning tool and the [Ansible](https://www.ansible.com) server automation tool. Terraform can create the resources necessary for running the Pulsar cluster---[EC2](https://aws.amazon.com/ec2/) instances, networking and security infrastructure, etc.---While Ansible can install and run Pulsar on the provisioned resources. - -## Requirements and setup - -In order to install a Pulsar cluster on AWS using Terraform and Ansible, you need to prepare the following things: - -* An [AWS account](https://aws.amazon.com/account/) and the [`aws`](https://aws.amazon.com/cli/) command-line tool -* Python and [pip](https://pip.pypa.io/en/stable/) -* The [`terraform-inventory`](https://github.com/adammck/terraform-inventory) tool, which enables Ansible to use Terraform artifacts - -You also need to make sure that you are currently logged into your AWS account via the `aws` tool: - -```bash - -$ aws configure - -``` - -## Installation - -You can install Ansible on Linux or macOS using pip. - -```bash - -$ pip install ansible - -``` - -You can install Terraform using the instructions [here](https://learn.hashicorp.com/tutorials/terraform/install-cli). - -You also need to have the Terraform and Ansible configuration for Pulsar locally on your machine. 
You can find them in the [GitHub repository](https://github.com/apache/pulsar) of Pulsar, which you can fetch using Git commands: - -```bash - -$ git clone https://github.com/apache/pulsar -$ cd pulsar/deployment/terraform-ansible/aws - -``` - -## SSH setup - -> If you already have an SSH key and want to use it, you can skip the step of generating an SSH key and update `private_key_file` setting -> in `ansible.cfg` file and `public_key_path` setting in `terraform.tfvars` file. -> -> For example, if you already have a private SSH key in `~/.ssh/pulsar_aws` and a public key in `~/.ssh/pulsar_aws.pub`, -> follow the steps below: -> -> 1. update `ansible.cfg` with following values: -> - -> ```shell -> -> private_key_file=~/.ssh/pulsar_aws -> -> -> ``` - -> -> 2. update `terraform.tfvars` with following values: -> - -> ```shell -> -> public_key_path=~/.ssh/pulsar_aws.pub -> -> -> ``` - - -In order to create the necessary AWS resources using Terraform, you need to create an SSH key. Enter the following commands to create a private SSH key in `~/.ssh/id_rsa` and a public key in `~/.ssh/id_rsa.pub`: - -```bash - -$ ssh-keygen -t rsa - -``` - -Do *not* enter a passphrase (hit **Enter** instead when the prompt comes out). Enter the following command to verify that a key has been created: - -```bash - -$ ls ~/.ssh -id_rsa id_rsa.pub - -``` - -## Create AWS resources using Terraform - -To start building AWS resources with Terraform, you need to install all Terraform dependencies. Enter the following command: - -```bash - -$ terraform init -# This will create a .terraform folder - -``` - -After that, you can apply the default Terraform configuration by entering this command: - -```bash - -$ terraform apply - -``` - -Then you see this prompt below: - -```bash - -Do you want to perform these actions? - Terraform will perform the actions described above. - Only 'yes' will be accepted to approve. - - Enter a value: - -``` - -Type `yes` and hit **Enter**. Applying the configuration could take several minutes. When the configuration applying finishes, you can see `Apply complete!` along with some other information, including the number of resources created. - -### Apply a non-default configuration - -You can apply a non-default Terraform configuration by changing the values in the `terraform.tfvars` file. The following variables are available: - -Variable name | Description | Default -:-------------|:------------|:------- -`public_key_path` | The path of the public key that you have generated. | `~/.ssh/id_rsa.pub` -`region` | The AWS region in which the Pulsar cluster runs | `us-west-2` -`availability_zone` | The AWS availability zone in which the Pulsar cluster runs | `us-west-2a` -`aws_ami` | The [Amazon Machine Image](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html) (AMI) that the cluster uses | `ami-9fa343e7` -`num_zookeeper_nodes` | The number of [ZooKeeper](https://zookeeper.apache.org) nodes in the ZooKeeper cluster | 3 -`num_bookie_nodes` | The number of bookies that runs in the cluster | 3 -`num_broker_nodes` | The number of Pulsar brokers that runs in the cluster | 2 -`num_proxy_nodes` | The number of Pulsar proxies that runs in the cluster | 1 -`base_cidr_block` | The root [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) that network assets uses for the cluster | `10.0.0.0/16` -`instance_types` | The EC2 instance types to be used. 
This variable is a map with four keys: `zookeeper` for the ZooKeeper instances, `bookie` for the BookKeeper bookies, and `broker` and `proxy` for the Pulsar brokers and proxies | `t2.small` (ZooKeeper), `i3.xlarge` (BookKeeper) and `c5.2xlarge` (Brokers/Proxies)

### What is installed

When you run the Ansible playbook, the following AWS resources are used:

* 9 total [Elastic Compute Cloud](https://aws.amazon.com/ec2) (EC2) instances running the [ami-9fa343e7](https://access.redhat.com/articles/3135091) Amazon Machine Image (AMI), which runs [Red Hat Enterprise Linux (RHEL) 7.4](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/7.4_release_notes/index). By default, that includes:
  * 3 small VMs for ZooKeeper ([t2.small](https://www.ec2instances.info/?selected=t2.small) instances)
  * 3 larger VMs for BookKeeper [bookies](reference-terminology.md#bookie) ([i3.xlarge](https://www.ec2instances.info/?selected=i3.xlarge) instances)
  * 2 larger VMs for Pulsar [brokers](reference-terminology.md#broker) ([c5.2xlarge](https://www.ec2instances.info/?selected=c5.2xlarge) instances)
  * 1 larger VM for the Pulsar [proxy](reference-terminology.md#proxy) ([c5.2xlarge](https://www.ec2instances.info/?selected=c5.2xlarge) instance)
* An EC2 [security group](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html)
* A [virtual private cloud](https://aws.amazon.com/vpc/) (VPC) for security
* An [API Gateway](https://aws.amazon.com/api-gateway/) for connections from the outside world
* A [route table](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Route_Tables.html) for the Pulsar cluster's VPC
* A [subnet](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html) for the VPC

All EC2 instances for the cluster run in the [us-west-2](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html) region.

### Fetch your Pulsar connection URL

When you apply the Terraform configuration by entering the command `terraform apply`, Terraform outputs a value for the `pulsar_service_url`. The value should look something like this:

```

pulsar://pulsar-elb-1800761694.us-west-2.elb.amazonaws.com:6650

```

You can fetch that value at any time by entering the command `terraform output pulsar_service_url` or parsing the `terraform.tfstate` file (which is JSON, even though the filename does not reflect that):

```bash

$ cat terraform.tfstate | jq .modules[0].outputs.pulsar_service_url.value

```

### Destroy your cluster

At any point, you can destroy all AWS resources associated with your cluster using Terraform's `destroy` command:

```bash

$ terraform destroy

```

## Setup Disks

Before you run the Pulsar playbook, you need to mount the disks to the correct directories on those bookie nodes. Since different types of machines have different disk layouts, you need to update the task defined in the `setup-disk.yaml` file after changing the `instance_types` in your Terraform config.

To setup disks on bookie nodes, enter this command:

```bash

$ ansible-playbook \
  --user='ec2-user' \
  --inventory=`which terraform-inventory` \
  setup-disk.yaml

```

After that, the disks are mounted under `/mnt/journal` as the journal disk, and `/mnt/storage` as the ledger disk.
Remember to enter this command only once. If you attempt to enter this command again after you have run the Pulsar playbook, your disks might be erased again, causing the bookies to fail to start up.
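As an optional sanity check (a suggestion, not part of the upstream playbooks), you can confirm on each bookie node that both mount points exist before moving on:

```bash

$ df -h /mnt/journal /mnt/storage

```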
- -## Run the Pulsar playbook - -Once you have created the necessary AWS resources using Terraform, you can install and run Pulsar on the Terraform-created EC2 instances using Ansible. - -(Optional) If you want to use any [built-in IO connectors](io-connectors.md) , edit the `Download Pulsar IO packages` task in the `deploy-pulsar.yaml` file and uncomment the connectors you want to use. - -To run the playbook, enter this command: - -```bash - -$ ansible-playbook \ - --user='ec2-user' \ - --inventory=`which terraform-inventory` \ - ../deploy-pulsar.yaml - -``` - -If you have created a private SSH key at a location different from `~/.ssh/id_rsa`, you can specify the different location using the `--private-key` flag in the following command: - -```bash - -$ ansible-playbook \ - --user='ec2-user' \ - --inventory=`which terraform-inventory` \ - --private-key="~/.ssh/some-non-default-key" \ - ../deploy-pulsar.yaml - -``` - -## Access the cluster - -You can now access your running Pulsar using the unique Pulsar connection URL for your cluster, which you can obtain following the instructions [above](#fetching-your-pulsar-connection-url). - -For a quick demonstration of accessing the cluster, we can use the Python client for Pulsar and the Python shell. First, install the Pulsar Python module using pip: - -```bash - -$ pip install pulsar-client - -``` - -Now, open up the Python shell using the `python` command: - -```bash - -$ python - -``` - -Once you are in the shell, enter the following command: - -```python - ->>> import pulsar ->>> client = pulsar.Client('pulsar://pulsar-elb-1800761694.us-west-2.elb.amazonaws.com:6650') -# Make sure to use your connection URL ->>> producer = client.create_producer('persistent://public/default/test-topic') ->>> producer.send('Hello world') ->>> client.close() - -``` - -If all of these commands are successful, Pulsar clients can now use your cluster! diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/deploy-bare-metal-multi-cluster.md b/site2/website/versioned_docs/version-2.9.1-deprecated/deploy-bare-metal-multi-cluster.md deleted file mode 100644 index f25b11041c5e34..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/deploy-bare-metal-multi-cluster.md +++ /dev/null @@ -1,452 +0,0 @@ ---- -id: deploy-bare-metal-multi-cluster -title: Deploying a multi-cluster on bare metal -sidebar_label: "Bare metal multi-cluster" -original_id: deploy-bare-metal-multi-cluster ---- - -:::tip - -1. You can use single-cluster Pulsar installation in most use cases, such as experimenting with Pulsar or using Pulsar in a startup or in a single team. If you need to run a multi-cluster Pulsar instance, see the [guide](deploy-bare-metal-multi-cluster.md). -2. If you want to use all built-in [Pulsar IO](io-overview.md) connectors, you need to download `apache-pulsar-io-connectors`package and install `apache-pulsar-io-connectors` under `connectors` directory in the pulsar directory on every broker node or on every function-worker node if you have run a separate cluster of function workers for [Pulsar Functions](functions-overview.md). -3. If you want to use [Tiered Storage](concepts-tiered-storage.md) feature in your Pulsar deployment, you need to download `apache-pulsar-offloaders`package and install `apache-pulsar-offloaders` under `offloaders` directory in the Pulsar directory on every broker node. For more details of how to configure this feature, you can refer to the [Tiered storage cookbook](cookbooks-tiered-storage.md). 
- -::: - -A Pulsar instance consists of multiple Pulsar clusters working in unison. You can distribute clusters across data centers or geographical regions and replicate the clusters amongst themselves using [geo-replication](administration-geo.md).Deploying a multi-cluster Pulsar instance consists of the following steps: - -1. Deploying two separate ZooKeeper quorums: a local quorum for each cluster in the instance and a configuration store quorum for instance-wide tasks -2. Initializing cluster metadata for each cluster -3. Deploying a BookKeeper cluster of bookies in each Pulsar cluster -4. Deploying brokers in each Pulsar cluster - - -> #### Run Pulsar locally or on Kubernetes? -> This guide shows you how to deploy Pulsar in production in a non-Kubernetes environment. If you want to run a standalone Pulsar cluster on a single machine for development purposes, see the [Setting up a local cluster](getting-started-standalone.md) guide. If you want to run Pulsar on [Kubernetes](https://kubernetes.io), see the [Pulsar on Kubernetes](deploy-kubernetes.md) guide, which includes sections on running Pulsar on Kubernetes, on Google Kubernetes Engine and on Amazon Web Services. - -## System requirement - -Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. You need to install 64-bit JRE/JDK 8 or later versions, JRE/JDK 11 is recommended. - -:::note - -Broker is only supported on 64-bit JVM. - -::: - -## Install Pulsar - -To get started running Pulsar, download a binary tarball release in one of the following ways: - -* by clicking the link below and downloading the release from an Apache mirror: - - * Pulsar @pulsar:version@ binary release - -* from the Pulsar [downloads page](pulsar:download_page_url) -* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) -* using [wget](https://www.gnu.org/software/wget): - - ```shell - - $ wget 'https://www.apache.org/dyn/mirrors/mirrors.cgi?action=download&filename=pulsar/pulsar-@pulsar:version@/apache-pulsar-@pulsar:version@-bin.tar.gz' -O apache-pulsar-@pulsar:version@-bin.tar.gz - - ``` - -Once you download the tarball, untar it and `cd` into the resulting directory: - -```bash - -$ tar xvfz apache-pulsar-@pulsar:version@-bin.tar.gz -$ cd apache-pulsar-@pulsar:version@ - -``` - -The Pulsar binary package initially contains the following directories: - -Directory | Contains -:---------|:-------- -`bin` | [Command-line tools](reference-cli-tools.md) of Pulsar, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/) -`conf` | Configuration files for Pulsar, including for [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more -`examples` | A Java JAR file containing example [Pulsar Functions](functions-overview.md) -`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files that Pulsar uses -`licenses` | License files, in `.txt` form, for various components of the Pulsar codebase - -The following directories are created once you begin running Pulsar: - -Directory | Contains -:---------|:-------- -`data` | The data storage directory that ZooKeeper and BookKeeper use -`instances` | Artifacts created for [Pulsar Functions](functions-overview.md) -`logs` | Logs that the installation creates - - -## Deploy ZooKeeper - -Each Pulsar instance relies on two separate ZooKeeper quorums. 
* Local ZooKeeper operates at the cluster level and provides cluster-specific configuration management and coordination. Each Pulsar cluster needs a dedicated ZooKeeper cluster.
* Configuration Store operates at the instance level and provides configuration management for the entire system (and thus across clusters).

You can use an independent cluster of machines or the same machines used by local ZooKeeper to provide the configuration store quorum.


### Deploy local ZooKeeper

ZooKeeper manages a variety of essential coordination-related and configuration-related tasks for Pulsar.

You need to stand up one local ZooKeeper cluster per Pulsar cluster for deploying a Pulsar instance.

To begin, add all ZooKeeper servers to the quorum configuration specified in the [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file. Add a `server.N` line for each node in the cluster to the configuration, where `N` is the number of the ZooKeeper node. The following is an example for a three-node cluster:

```properties

server.1=zk1.us-west.example.com:2888:3888
server.2=zk2.us-west.example.com:2888:3888
server.3=zk3.us-west.example.com:2888:3888

```

On each host, you need to specify the ID of the node in the `myid` file of each node, which is in the `data/zookeeper` folder of each server by default (you can change the file location via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter).

:::tip

See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed information on `myid` and more.

:::

On a ZooKeeper server at `zk1.us-west.example.com`, for example, you could set the `myid` value like this:

```shell

$ mkdir -p data/zookeeper
$ echo 1 > data/zookeeper/myid

```

On `zk2.us-west.example.com` the command looks like `echo 2 > data/zookeeper/myid` and so on.

Once you add each server to the `zookeeper.conf` configuration and each server has the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:

```shell

$ bin/pulsar-daemon start zookeeper

```

### Deploy the configuration store

The ZooKeeper cluster configured and started up in the section above is a local ZooKeeper cluster that you can use to manage a single Pulsar cluster. In addition to a local cluster, however, a full Pulsar instance also requires a configuration store for handling some instance-level configuration and coordination tasks.

If you deploy a single-cluster instance, you do not need a separate cluster for the configuration store. If, however, you deploy a multi-cluster instance, you should stand up a separate ZooKeeper cluster for configuration tasks.

#### Single-cluster Pulsar instance

If your Pulsar instance consists of just one cluster, then you can deploy a configuration store on the same machines as the local ZooKeeper quorum but run on different TCP ports.

To deploy a ZooKeeper configuration store in a single-cluster instance, add the same ZooKeeper servers that the local quorum uses.
You need to use the configuration file in [`conf/global_zookeeper.conf`](reference-configuration.md#configuration-store) using the same method for [local ZooKeeper](#local-zookeeper), but make sure to use a different port (2181 is the default for ZooKeeper). The following is an example that uses port 2184 for a three-node ZooKeeper cluster: - -```properties - -clientPort=2184 -server.1=zk1.us-west.example.com:2185:2186 -server.2=zk2.us-west.example.com:2185:2186 -server.3=zk3.us-west.example.com:2185:2186 - -``` - -As before, create the `myid` files for each server on `data/global-zookeeper/myid`. - -#### Multi-cluster Pulsar instance - -When you deploy a global Pulsar instance, with clusters distributed across different geographical regions, the configuration store serves as a highly available and strongly consistent metadata store that can tolerate failures and partitions spanning whole regions. - -The key here is to make sure the ZK quorum members are spread across at least 3 regions, and other regions run as observers. - -Again, given the very low expected load on the configuration store servers, you can share the same hosts used for the local ZooKeeper quorum. - -For example, assume a Pulsar instance with the following clusters `us-west`, `us-east`, `us-central`, `eu-central`, `ap-south`. Also assume, each cluster has its own local ZK servers named such as the following: - -``` - -zk[1-3].${CLUSTER}.example.com - -``` - -In this scenario if you want to pick the quorum participants from few clusters and let all the others be ZK observers. For example, to form a 7 servers quorum, you can pick 3 servers from `us-west`, 2 from `us-central` and 2 from `us-east`. - -This method guarantees that writes to configuration store is possible even if one of these regions is unreachable. - -The ZK configuration in all the servers looks like: - -```properties - -clientPort=2184 -server.1=zk1.us-west.example.com:2185:2186 -server.2=zk2.us-west.example.com:2185:2186 -server.3=zk3.us-west.example.com:2185:2186 -server.4=zk1.us-central.example.com:2185:2186 -server.5=zk2.us-central.example.com:2185:2186 -server.6=zk3.us-central.example.com:2185:2186:observer -server.7=zk1.us-east.example.com:2185:2186 -server.8=zk2.us-east.example.com:2185:2186 -server.9=zk3.us-east.example.com:2185:2186:observer -server.10=zk1.eu-central.example.com:2185:2186:observer -server.11=zk2.eu-central.example.com:2185:2186:observer -server.12=zk3.eu-central.example.com:2185:2186:observer -server.13=zk1.ap-south.example.com:2185:2186:observer -server.14=zk2.ap-south.example.com:2185:2186:observer -server.15=zk3.ap-south.example.com:2185:2186:observer - -``` - -Additionally, ZK observers need to have the following parameters: - -```properties - -peerType=observer - -``` - -##### Start the service - -Once your configuration store configuration is in place, you can start up the service using [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) - -```shell - -$ bin/pulsar-daemon start configuration-store - -``` - -## Cluster metadata initialization - -Once you set up the cluster-specific ZooKeeper and configuration store quorums for your instance, you need to write some metadata to ZooKeeper for each cluster in your instance. **you only need to write these metadata once**. - -You can initialize this metadata using the [`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata) command of the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool. 
The following is an example: - -```shell - -$ bin/pulsar initialize-cluster-metadata \ - --cluster us-west \ - --zookeeper zk1.us-west.example.com:2181 \ - --configuration-store zk1.us-west.example.com:2184 \ - --web-service-url http://pulsar.us-west.example.com:8080/ \ - --web-service-url-tls https://pulsar.us-west.example.com:8443/ \ - --broker-service-url pulsar://pulsar.us-west.example.com:6650/ \ - --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651/ - -``` - -As you can see from the example above, you need to specify the following: - -* The name of the cluster -* The local ZooKeeper connection string for the cluster -* The configuration store connection string for the entire instance -* The web service URL for the cluster -* A broker service URL enabling interaction with the [brokers](reference-terminology.md#broker) in the cluster - -If you use [TLS](security-tls-transport.md), you also need to specify a TLS web service URL for the cluster as well as a TLS broker service URL for the brokers in the cluster. - -Make sure to run `initialize-cluster-metadata` for each cluster in your instance. - -## Deploy BookKeeper - -BookKeeper provides [persistent message storage](concepts-architecture-overview.md#persistent-storage) for Pulsar. - -Each Pulsar broker needs its own cluster of bookies. The BookKeeper cluster shares a local ZooKeeper quorum with the Pulsar cluster. - -### Configure bookies - -You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. The most important aspect of configuring each bookie is ensuring that the [`zkServers`](reference-configuration.md#bookkeeper-zkServers) parameter is set to the connection string for the local ZooKeeper of Pulsar cluster. - -### Start bookies - -You can start a bookie in two ways: in the foreground or as a background daemon. - -To start a bookie in the background, use the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool: - -```bash - -$ bin/pulsar-daemon start bookie - -``` - -You can verify that the bookie works properly using the `bookiesanity` command for the [BookKeeper shell](reference-cli-tools.md#bookkeeper-shell): - -```bash - -$ bin/bookkeeper shell bookiesanity - -``` - -This command creates a new ledger on the local bookie, writes a few entries, reads them back and finally deletes the ledger. - -After you have started all bookies, you can use the `simpletest` command for [BookKeeper shell](reference-cli-tools.md#shell) on any bookie node, to verify that all bookies in the cluster are running. - -```bash - -$ bin/bookkeeper shell simpletest --ensemble --writeQuorum --ackQuorum --numEntries - -``` - -Bookie hosts are responsible for storing message data on disk. In order for bookies to provide optimal performance, having a suitable hardware configuration is essential for the bookies. The following are key dimensions for bookie hardware capacity. - -* Disk I/O capacity read/write -* Storage capacity - -Message entries written to bookies are always synced to disk before returning an acknowledgement to the Pulsar broker. To ensure low write latency, BookKeeper is -designed to use multiple devices: - -* A **journal** to ensure durability. For sequential writes, having fast [fsync](https://linux.die.net/man/2/fsync) operations on bookie hosts is critical. 
Typically, small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) should suffice, or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache. Both solutions can reach fsync latency of ~0.4 ms. -* A **ledger storage device** is where data is stored until all consumers acknowledge the message. Writes happen in the background, so write I/O is not a big concern. Reads happen sequentially most of the time and the backlog is drained only in case of consumer drain. To store large amounts of data, a typical configuration involves multiple HDDs with a RAID controller. - - - -## Deploy brokers - -Once you set up ZooKeeper, initialize cluster metadata, and spin up BookKeeper bookies, you can deploy brokers. - -### Broker configuration - -You can configure brokers using the [`conf/broker.conf`](reference-configuration.md#broker) configuration file. - -The most important element of broker configuration is ensuring that each broker is aware of its local ZooKeeper quorum as well as the configuration store quorum. Make sure that you set the [`zookeeperServers`](reference-configuration.md#broker-zookeeperServers) parameter to reflect the local quorum and the [`configurationStoreServers`](reference-configuration.md#broker-configurationStoreServers) parameter to reflect the configuration store quorum (although you need to specify only those ZooKeeper servers located in the same cluster). - -You also need to specify the name of the [cluster](reference-terminology.md#cluster) to which the broker belongs using the [`clusterName`](reference-configuration.md#broker-clusterName) parameter. In addition, you need to match the broker and web service ports provided when you initialize the metadata (especially when you use a different port from default) of the cluster. - -The following is an example configuration: - -```properties - -# Local ZooKeeper servers -zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181 - -# Configuration store quorum connection string. -configurationStoreServers=zk1.us-west.example.com:2184,zk2.us-west.example.com:2184,zk3.us-west.example.com:2184 - -clusterName=us-west - -# Broker data port -brokerServicePort=6650 - -# Broker data port for TLS -brokerServicePortTls=6651 - -# Port to use to server HTTP request -webServicePort=8080 - -# Port to use to server HTTPS request -webServicePortTls=8443 - -``` - -### Broker hardware - -Pulsar brokers do not require any special hardware since they do not use the local disk. You had better choose fast CPUs and 10Gbps [NIC](https://en.wikipedia.org/wiki/Network_interface_controller) so that the software can take full advantage of that. - -### Start the broker service - -You can start a broker in the background by using [nohup](https://en.wikipedia.org/wiki/Nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool: - -```shell - -$ bin/pulsar-daemon start broker - -``` - -You can also start brokers in the foreground by using [`pulsar broker`](reference-cli-tools.md#broker): - -```shell - -$ bin/pulsar broker - -``` - -## Service discovery - -[Clients](getting-started-clients.md) connecting to Pulsar brokers need to communicate with an entire Pulsar instance using a single URL. - -You can use your own service discovery system. 
If you use your own system, you only need to satisfy just one requirement: when a client performs an HTTP request to an [endpoint](reference-configuration.md) for a Pulsar cluster, such as `http://pulsar.us-west.example.com:8080`, the client needs to be redirected to some active brokers in the desired cluster, whether via DNS, an HTTP or IP redirect, or some other means. - -> **Service discovery already provided by many scheduling systems** -> Many large-scale deployment systems, such as [Kubernetes](deploy-kubernetes.md), have service discovery systems built in. If you run Pulsar on such a system, you may not need to provide your own service discovery mechanism. - -## Admin client and verification - -At this point your Pulsar instance should be ready to use. You can now configure client machines that can serve as [administrative clients](admin-api-overview.md) for each cluster. You can use the [`conf/client.conf`](reference-configuration.md#client) configuration file to configure admin clients. - -The most important thing is that you point the [`serviceUrl`](reference-configuration.md#client-serviceUrl) parameter to the correct service URL for the cluster: - -```properties - -serviceUrl=http://pulsar.us-west.example.com:8080/ - -``` - -## Provision new tenants - -Pulsar is built as a fundamentally multi-tenant system. - - -If a new tenant wants to use the system, you need to create a new one. You can create a new tenant by using the [`pulsar-admin`](reference-pulsar-admin.md#tenants) CLI tool: - -```shell - -$ bin/pulsar-admin tenants create test-tenant \ - --allowed-clusters us-west \ - --admin-roles test-admin-role - -``` - -In this command, users who identify with `test-admin-role` role can administer the configuration for the `test-tenant` tenant. The `test-tenant` tenant can only use the `us-west` cluster. From now on, this tenant can manage its resources. - -Once you create a tenant, you need to create [namespaces](reference-terminology.md#namespace) for topics within that tenant. - - -The first step is to create a namespace. A namespace is an administrative unit that can contain many topics. A common practice is to create a namespace for each different use case from a single tenant. - -```shell - -$ bin/pulsar-admin namespaces create test-tenant/ns1 - -``` - -##### Test producer and consumer - - -Everything is now ready to send and receive messages. The quickest way to test the system is through the [`pulsar-perf`](reference-cli-tools.md#pulsar-perf) client tool. - - -You can use a topic in the namespace that you have just created. Topics are automatically created the first time when a producer or a consumer tries to use them. 
- -The topic name in this case could be: - -```http - -persistent://test-tenant/ns1/my-topic - -``` - -Start a consumer that creates a subscription on the topic and waits for messages: - -```shell - -$ bin/pulsar-perf consume persistent://test-tenant/ns1/my-topic - -``` - -Start a producer that publishes messages at a fixed rate and reports stats every 10 seconds: - -```shell - -$ bin/pulsar-perf produce persistent://test-tenant/ns1/my-topic - -``` - -To report the topic stats: - -```shell - -$ bin/pulsar-admin topics stats persistent://test-tenant/ns1/my-topic - -``` - diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/deploy-bare-metal.md b/site2/website/versioned_docs/version-2.9.1-deprecated/deploy-bare-metal.md deleted file mode 100644 index 9bb4235cece5da..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/deploy-bare-metal.md +++ /dev/null @@ -1,559 +0,0 @@ ---- -id: deploy-bare-metal -title: Deploy a cluster on bare metal -sidebar_label: "Bare metal" -original_id: deploy-bare-metal ---- - -:::tip - -1. You can use single-cluster Pulsar installation in most use cases, such as experimenting with Pulsar or using Pulsar in a startup or in a single team. If you need to run a multi-cluster Pulsar instance, see the [guide](deploy-bare-metal-multi-cluster.md). -2. If you want to use all built-in [Pulsar IO](io-overview.md) connectors, you need to download `apache-pulsar-io-connectors`package and install `apache-pulsar-io-connectors` under `connectors` directory in the pulsar directory on every broker node or on every function-worker node if you have run a separate cluster of function workers for [Pulsar Functions](functions-overview.md). -3. If you want to use [Tiered Storage](concepts-tiered-storage.md) feature in your Pulsar deployment, you need to download `apache-pulsar-offloaders`package and install `apache-pulsar-offloaders` under `offloaders` directory in the Pulsar directory on every broker node. For more details of how to configure this feature, you can refer to the [Tiered storage cookbook](cookbooks-tiered-storage.md). - -::: - -Deploying a Pulsar cluster consists of the following steps: - -1. Deploy a [ZooKeeper](#deploy-a-zookeeper-cluster) cluster (optional) -2. Initialize [cluster metadata](#initialize-cluster-metadata) -3. Deploy a [BookKeeper](#deploy-a-bookkeeper-cluster) cluster -4. Deploy one or more Pulsar [brokers](#deploy-pulsar-brokers) - -## Preparation - -### Requirements - -Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install 64-bit JRE/JDK 8 or later versions, JRE/JDK 11 is recommended. - -:::tip - -You can reuse existing Zookeeper clusters. - -::: - -To run Pulsar on bare metal, the following configuration is recommended: - -* At least 6 Linux machines or VMs - * 3 for running [ZooKeeper](https://zookeeper.apache.org) - * 3 for running a Pulsar broker, and a [BookKeeper](https://bookkeeper.apache.org) bookie -* A single [DNS](https://en.wikipedia.org/wiki/Domain_Name_System) name covering all of the Pulsar broker hosts - -:::note - -* Broker is only supported on 64-bit JVM. -* If you do not have enough machines, or you want to test Pulsar in cluster mode (and expand the cluster later), You can fully deploy Pulsar on a node on which ZooKeeper, bookie and broker run. -* If you do not have a DNS server, you can use the multi-host format in the service URL instead. 
- -::: - -Each machine in your cluster needs to have [Java 8](https://adoptium.net/?variant=openjdk8) or [Java 11](https://adoptium.net/?variant=openjdk11) installed. - -The following is a diagram showing the basic setup: - -![alt-text](/assets/pulsar-basic-setup.png) - -In this diagram, connecting clients need to communicate with the Pulsar cluster using a single URL. In this case, `pulsar-cluster.acme.com` abstracts over all of the message-handling brokers. Pulsar message brokers run on machines alongside BookKeeper bookies; brokers and bookies, in turn, rely on ZooKeeper. - -### Hardware considerations - -If you deploy a Pulsar cluster, keep in mind the following basic better choices when you do the capacity planning. - -#### ZooKeeper - -For machines running ZooKeeper, it is recommended to use less powerful machines or VMs. Pulsar uses ZooKeeper only for periodic coordination-related and configuration-related tasks, not for basic operations. If you run Pulsar on [Amazon Web Services](https://aws.amazon.com/) (AWS), for example, a [t2.small](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html) instance might likely suffice. - -#### Bookies and Brokers - -For machines running a bookie and a Pulsar broker, more powerful machines are required. For an AWS deployment, for example, [i3.4xlarge](https://aws.amazon.com/blogs/aws/now-available-i3-instances-for-demanding-io-intensive-applications/) instances may be appropriate. On those machines you can use the following: - -* Fast CPUs and 10Gbps [NIC](https://en.wikipedia.org/wiki/Network_interface_controller) (for Pulsar brokers) -* Small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache (for BookKeeper bookies) - -To start a Pulsar instance, below are the minimum and the recommended hardware settings. - -1. The minimum hardware settings (250 Pulsar topics) - - Broker - - CPU: 0.2 - - Memory: 256MB - - Bookie - - CPU: 0.2 - - Memory: 256MB - - Storage: - - Journal: 8GB, PD-SSD - - Ledger: 16GB, PD-STANDARD - -2. The recommended hardware settings (1000 Pulsar topics) - - - Broker - - CPU: 8 - - Memory: 8GB - - Bookie - - CPU: 4 - - Memory: 8GB - - Storage: - - Journal: 256GB, PD-SSD - - Ledger: 2TB, PD-STANDARD - -## Install the Pulsar binary package - -> You need to install the Pulsar binary package on each machine in the cluster, including machines running ZooKeeper and BookKeeper. 
- -To get started deploying a Pulsar cluster on bare metal, you need to download a binary tarball release in one of the following ways: - -* By clicking on the link below directly, which automatically triggers a download: - * Pulsar @pulsar:version@ binary release -* From the Pulsar [downloads page](pulsar:download_page_url) -* From the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) on GitHub -* Using [wget](https://www.gnu.org/software/wget): - -```bash - -$ wget pulsar:binary_release_url - -``` - -Once you download the tarball, untar it and `cd` into the resulting directory: - -```bash - -$ tar xvzf apache-pulsar-@pulsar:version@-bin.tar.gz -$ cd apache-pulsar-@pulsar:version@ - -``` - -The extracted directory contains the following subdirectories: - -Directory | Contains -:---------|:-------- -`bin` |[command-line tools](reference-cli-tools.md) of Pulsar, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/) -`conf` | Configuration files for Pulsar, including for [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more -`data` | The data storage directory that ZooKeeper and BookKeeper use -`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files that Pulsar uses -`logs` | Logs that the installation creates - -## [Install Builtin Connectors (optional)]( https://pulsar.apache.org/docs/en/next/standalone/#install-builtin-connectors-optional) - -> Since Pulsar release `2.1.0-incubating`, Pulsar provides a separate binary distribution, containing all the `builtin` connectors. -> To enable the `builtin` connectors (optional), you can follow the instructions below. - -To use `builtin` connectors, you need to download the connectors tarball release on every broker node in one of the following ways : - -* by clicking the link below and downloading the release from an Apache mirror: - - * Pulsar IO Connectors @pulsar:version@ release - -* from the Pulsar [downloads page](pulsar:download_page_url) -* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) -* using [wget](https://www.gnu.org/software/wget): - - ```shell - - $ wget pulsar:connector_release_url/{connector}-@pulsar:version@.nar - - ``` - -Once you download the .nar file, copy the file to directory `connectors` in the pulsar directory. -For example, if you download the connector file `pulsar-io-aerospike-@pulsar:version@.nar`: - -```bash - -$ mkdir connectors -$ mv pulsar-io-aerospike-@pulsar:version@.nar connectors - -$ ls connectors -pulsar-io-aerospike-@pulsar:version@.nar -... - -``` - -## [Install Tiered Storage Offloaders (optional)](https://pulsar.apache.org/docs/en/next/standalone/#install-tiered-storage-offloaders-optional) - -> Since Pulsar release `2.2.0`, Pulsar releases a separate binary distribution, containing the tiered storage offloaders. -> If you want to enable tiered storage feature, you can follow the instructions as below; otherwise you can -> skip this section for now. 
- -To use tiered storage offloaders, you need to download the offloaders tarball release on every broker node in one of the following ways: - -* by clicking the link below and downloading the release from an Apache mirror: - - * Pulsar Tiered Storage Offloaders @pulsar:version@ release - -* from the Pulsar [downloads page](pulsar:download_page_url) -* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) -* using [wget](https://www.gnu.org/software/wget): - - ```shell - - $ wget pulsar:offloader_release_url - - ``` - -Once you download the tarball, in the Pulsar directory, untar the offloaders package and copy the offloaders as `offloaders` in the Pulsar directory: - -```bash - -$ tar xvfz apache-pulsar-offloaders-@pulsar:version@-bin.tar.gz - -// you can find a directory named `apache-pulsar-offloaders-@pulsar:version@` in the pulsar directory -// then copy the offloaders - -$ mv apache-pulsar-offloaders-@pulsar:version@/offloaders offloaders - -$ ls offloaders -tiered-storage-jcloud-@pulsar:version@.nar - -``` - -For more details of how to configure tiered storage feature, you can refer to the [Tiered storage cookbook](cookbooks-tiered-storage.md) - - -## Deploy a ZooKeeper cluster - -> If you already have an existing zookeeper cluster and want to use it, you can skip this section. - -[ZooKeeper](https://zookeeper.apache.org) manages a variety of essential coordination-related and configuration-related tasks for Pulsar. To deploy a Pulsar cluster, you need to deploy ZooKeeper first. A 3-node ZooKeeper cluster is the recommended configuration. Pulsar does not make heavy use of ZooKeeper, so the lightweight machines or VMs should suffice for running ZooKeeper. - -To begin, add all ZooKeeper servers to the configuration specified in [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) (in the Pulsar directory that you create [above](#install-the-pulsar-binary-package)). The following is an example: - -```properties - -server.1=zk1.us-west.example.com:2888:3888 -server.2=zk2.us-west.example.com:2888:3888 -server.3=zk3.us-west.example.com:2888:3888 - -``` - -> If you only have one machine on which to deploy Pulsar, you only need to add one server entry in the configuration file. - -On each host, you need to specify the ID of the node in the `myid` file, which is in the `data/zookeeper` folder of each server by default (you can change the file location via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter). - -> See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed information on `myid` and more. - -For example, on a ZooKeeper server like `zk1.us-west.example.com`, you can set the `myid` value as follows: - -```bash - -$ mkdir -p data/zookeeper -$ echo 1 > data/zookeeper/myid - -``` - -On `zk2.us-west.example.com`, the command is `echo 2 > data/zookeeper/myid` and so on. - -Once you add each server to the `zookeeper.conf` configuration and have the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool: - -```bash - -$ bin/pulsar-daemon start zookeeper - -``` - -> If you plan to deploy Zookeeper with the Bookie on the same node, you need to start zookeeper by using different stats -> port by configuring the `metricsProvider.httpPort` in zookeeper.conf. 
- -## Initialize cluster metadata - -Once you deploy ZooKeeper for your cluster, you need to write some metadata to ZooKeeper. You only need to write this data **once**. - -You can initialize this metadata using the [`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata) command of the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool. This command can be run on any machine in your Pulsar cluster, so the metadata can be initialized from a ZooKeeper, broker, or bookie machine. The following is an example: - -```shell - -$ bin/pulsar initialize-cluster-metadata \ - --cluster pulsar-cluster-1 \ - --zookeeper zk1.us-west.example.com:2181 \ - --configuration-store zk1.us-west.example.com:2181 \ - --web-service-url http://pulsar.us-west.example.com:8080 \ - --web-service-url-tls https://pulsar.us-west.example.com:8443 \ - --broker-service-url pulsar://pulsar.us-west.example.com:6650 \ - --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651 - -``` - -As you can see from the example above, you will need to specify the following: - -Flag | Description -:----|:----------- -`--cluster` | A name for the cluster -`--zookeeper` | A "local" ZooKeeper connection string for the cluster. This connection string only needs to include *one* machine in the ZooKeeper cluster. -`--configuration-store` | The configuration store connection string for the entire instance. As with the `--zookeeper` flag, this connection string only needs to include *one* machine in the ZooKeeper cluster. -`--web-service-url` | The web service URL for the cluster, plus a port. This URL should be a standard DNS name. The default port is 8080 (you had better not use a different port). -`--web-service-url-tls` | If you use [TLS](security-tls-transport.md), you also need to specify a TLS web service URL for the cluster. The default port is 8443 (you had better not use a different port). -`--broker-service-url` | A broker service URL enabling interaction with the brokers in the cluster. This URL should not use the same DNS name as the web service URL but should use the `pulsar` scheme instead. The default port is 6650 (you had better not use a different port). -`--broker-service-url-tls` | If you use [TLS](security-tls-transport.md), you also need to specify a TLS web service URL for the cluster as well as a TLS broker service URL for the brokers in the cluster. The default port is 6651 (you had better not use a different port). - - -> If you do not have a DNS server, you can use multi-host format in the service URL with the following settings: -> - -> ```shell -> -> --web-service-url http://host1:8080,host2:8080,host3:8080 \ -> --web-service-url-tls https://host1:8443,host2:8443,host3:8443 \ -> --broker-service-url pulsar://host1:6650,host2:6650,host3:6650 \ -> --broker-service-url-tls pulsar+ssl://host1:6651,host2:6651,host3:6651 -> -> -> ``` - -> -> If you want to use an existing BookKeeper cluster, you can add the `--existing-bk-metadata-service-uri` flag as follows: -> - -> ```shell -> -> --existing-bk-metadata-service-uri "zk+null://zk1:2181;zk2:2181/ledgers" \ -> --web-service-url http://host1:8080,host2:8080,host3:8080 \ -> --web-service-url-tls https://host1:8443,host2:8443,host3:8443 \ -> --broker-service-url pulsar://host1:6650,host2:6650,host3:6650 \ -> --broker-service-url-tls pulsar+ssl://host1:6651,host2:6651,host3:6651 -> -> -> ``` - -> You can obtain the metadata service URI of the existing BookKeeper cluster by using the `bin/bookkeeper shell whatisinstanceid` command. 
You must enclose the value in double quotes since the multiple metadata service URIs are separated with semicolons. - -## Deploy a BookKeeper cluster - -[BookKeeper](https://bookkeeper.apache.org) handles all persistent data storage in Pulsar. You need to deploy a cluster of BookKeeper bookies to use Pulsar. You can choose to run a **3-bookie BookKeeper cluster**. - -You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. The most important step in configuring bookies for our purposes here is ensuring that [`zkServers`](reference-configuration.md#bookkeeper-zkServers) is set to the connection string for the ZooKeeper cluster. The following is an example: - -```properties - -zkServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181 - -``` - -Once you appropriately modify the `zkServers` parameter, you can make any other configuration changes that you require. You can find a full listing of the available BookKeeper configuration parameters [here](reference-configuration.md#bookkeeper). However, consulting the [BookKeeper documentation](http://bookkeeper.apache.org/docs/latest/reference/config/) for a more in-depth guide might be a better choice. - -Once you apply the desired configuration in `conf/bookkeeper.conf`, you can start up a bookie on each of your BookKeeper hosts. You can start up each bookie either in the background, using [nohup](https://en.wikipedia.org/wiki/Nohup), or in the foreground. - -To start the bookie in the background, use the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool: - -```bash - -$ bin/pulsar-daemon start bookie - -``` - -To start the bookie in the foreground: - -```bash - -$ bin/pulsar bookie - -``` - -You can verify that a bookie works properly by running the `bookiesanity` command on the [BookKeeper shell](reference-cli-tools.md#shell): - -```bash - -$ bin/bookkeeper shell bookiesanity - -``` - -This command creates an ephemeral BookKeeper ledger on the local bookie, writes a few entries, reads them back, and finally deletes the ledger. - -After you start all the bookies, you can use `simpletest` command for [BookKeeper shell](reference-cli-tools.md#shell) on any bookie node, to verify all the bookies in the cluster are up running. - -```bash - -$ bin/bookkeeper shell simpletest --ensemble --writeQuorum --ackQuorum --numEntries - -``` - -This command creates a `num-bookies` sized ledger on the cluster, writes a few entries, and finally deletes the ledger. - - -## Deploy Pulsar brokers - -Pulsar brokers are the last thing you need to deploy in your Pulsar cluster. Brokers handle Pulsar messages and provide the administrative interface of Pulsar. A good choice is to run **3 brokers**, one for each machine that already runs a BookKeeper bookie. - -### Configure Brokers - -The most important element of broker configuration is ensuring that each broker is aware of the ZooKeeper cluster that you have deployed. Ensure that the [`zookeeperServers`](reference-configuration.md#broker-zookeeperServers) and [`configurationStoreServers`](reference-configuration.md#broker-configurationStoreServers) parameters are correct. In this case, since you only have 1 cluster and no configuration store setup, the `configurationStoreServers` point to the same `zookeeperServers`. 
- -```properties - -zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181 -configurationStoreServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181 - -``` - -You also need to specify the cluster name (matching the name that you provided when you [initialize the metadata of the cluster](#initialize-cluster-metadata)): - -```properties - -clusterName=pulsar-cluster-1 - -``` - -In addition, you need to match the broker and web service ports provided when you initialize the metadata of the cluster (especially when you use a different port than the default): - -```properties - -brokerServicePort=6650 -brokerServicePortTls=6651 -webServicePort=8080 -webServicePortTls=8443 - -``` - -> If you deploy Pulsar in a one-node cluster, you should update the replication settings in `conf/broker.conf` to `1`. -> - -> ```properties -> -> # Number of bookies to use when creating a ledger -> managedLedgerDefaultEnsembleSize=1 -> -> # Number of copies to store for each message -> managedLedgerDefaultWriteQuorum=1 -> -> # Number of guaranteed copies (acks to wait before write is complete) -> managedLedgerDefaultAckQuorum=1 -> -> -> ``` - - -### Enable Pulsar Functions (optional) - -If you want to enable [Pulsar Functions](functions-overview.md), you can follow the instructions as below: - -1. Edit `conf/broker.conf` to enable functions worker, by setting `functionsWorkerEnabled` to `true`. - - ```conf - - functionsWorkerEnabled=true - - ``` - -2. Edit `conf/functions_worker.yml` and set `pulsarFunctionsCluster` to the cluster name that you provide when you [initialize the metadata of the cluster](#initialize-cluster-metadata). - - ```conf - - pulsarFunctionsCluster: pulsar-cluster-1 - - ``` - -If you want to learn more options about deploying the functions worker, check out [Deploy and manage functions worker](functions-worker.md). - -### Start Brokers - -You can then provide any other configuration changes that you want in the [`conf/broker.conf`](reference-configuration.md#broker) file. Once you decide on a configuration, you can start up the brokers for your Pulsar cluster. Like ZooKeeper and BookKeeper, you can start brokers either in the foreground or in the background, using nohup. - -You can start a broker in the foreground using the [`pulsar broker`](reference-cli-tools.md#pulsar-broker) command: - -```bash - -$ bin/pulsar broker - -``` - -You can start a broker in the background using the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool: - -```bash - -$ bin/pulsar-daemon start broker - -``` - -Once you successfully start up all the brokers that you intend to use, your Pulsar cluster should be ready to go! - -## Connect to the running cluster - -Once your Pulsar cluster is up and running, you should be able to connect with it using Pulsar clients. One such client is the [`pulsar-client`](reference-cli-tools.md#pulsar-client) tool, which is included with the Pulsar binary package. The `pulsar-client` tool can publish messages to and consume messages from Pulsar topics and thus provide a simple way to make sure that your cluster runs properly. - -To use the `pulsar-client` tool, first modify the client configuration file in [`conf/client.conf`](reference-configuration.md#client) in your binary package. You need to change the values for `webServiceUrl` and `brokerServiceUrl`, substituting `localhost` (which is the default), with the DNS name that you assign to your broker/bookie hosts. 
The following is an example: - -```properties - -webServiceUrl=http://us-west.example.com:8080 -brokerServiceurl=pulsar://us-west.example.com:6650 - -``` - -> If you do not have a DNS server, you can specify multi-host in service URL as follows: -> - -> ```properties -> -> webServiceUrl=http://host1:8080,host2:8080,host3:8080 -> brokerServiceurl=pulsar://host1:6650,host2:6650,host3:6650 -> -> -> ``` - - -Once that is complete, you can publish a message to the Pulsar topic: - -```bash - -$ bin/pulsar-client produce \ - persistent://public/default/test \ - -n 1 \ - -m "Hello Pulsar" - -``` - -> You may need to use a different cluster name in the topic if you specify a cluster name other than `pulsar-cluster-1`. - -This command publishes a single message to the Pulsar topic. In addition, you can subscribe to the Pulsar topic in a different terminal before publishing messages as below: - -```bash - -$ bin/pulsar-client consume \ - persistent://public/default/test \ - -n 100 \ - -s "consumer-test" \ - -t "Exclusive" - -``` - -Once you successfully publish the above message to the topic, you should see it in the standard output: - -```bash - ------ got message ----- -Hello Pulsar - -``` - -## Run Functions - -> If you have [enabled](#enable-pulsar-functions-optional) Pulsar Functions, you can try out the Pulsar Functions now. - -Create an ExclamationFunction `exclamation`. - -```bash - -bin/pulsar-admin functions create \ - --jar examples/api-examples.jar \ - --classname org.apache.pulsar.functions.api.examples.ExclamationFunction \ - --inputs persistent://public/default/exclamation-input \ - --output persistent://public/default/exclamation-output \ - --tenant public \ - --namespace default \ - --name exclamation - -``` - -Check whether the function runs as expected by [triggering](functions-deploying.md#triggering-pulsar-functions) the function. - -```bash - -bin/pulsar-admin functions trigger --name exclamation --trigger-value "hello world" - -``` - -You should see the following output: - -```shell - -hello world! - -``` - diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/deploy-dcos.md b/site2/website/versioned_docs/version-2.9.1-deprecated/deploy-dcos.md deleted file mode 100644 index 35a0a83d716ade..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/deploy-dcos.md +++ /dev/null @@ -1,200 +0,0 @@ ---- -id: deploy-dcos -title: Deploy Pulsar on DC/OS -sidebar_label: "DC/OS" -original_id: deploy-dcos ---- - -:::tip - -To enable all built-in [Pulsar IO](io-overview.md) connectors in your Pulsar deployment, we recommend you use `apachepulsar/pulsar-all` image instead of `apachepulsar/pulsar` image; the former has already bundled [all built-in connectors](io-overview.md#working-with-connectors). - -::: - -[DC/OS](https://dcos.io/) (the DataCenter Operating System) is a distributed operating system for deploying and managing applications and systems on [Apache Mesos](http://mesos.apache.org/). DC/OS is an open-source tool created and maintained by [Mesosphere](https://mesosphere.com/). - -Apache Pulsar is available as a [Marathon Application Group](https://mesosphere.github.io/marathon/docs/application-groups.html), which runs multiple applications as manageable sets. - -## Prerequisites - -You need to prepare your environment before running Pulsar on DC/OS. 
- -* DC/OS version [1.9](https://docs.mesosphere.com/1.9/) or higher -* A [DC/OS cluster](https://docs.mesosphere.com/1.9/installing/) with at least three agent nodes -* The [DC/OS CLI tool](https://docs.mesosphere.com/1.9/cli/install/) installed -* The [`PulsarGroups.json`](https://github.com/apache/pulsar/blob/master/deployment/dcos/PulsarGroups.json) configuration file from the Pulsar GitHub repo. - - ```bash - - $ curl -O https://raw.githubusercontent.com/apache/pulsar/master/deployment/dcos/PulsarGroups.json - - ``` - -Each node in the DC/OS-managed Mesos cluster must have at least: - -* 4 CPU -* 4 GB of memory -* 60 GB of total persistent disk - -Alternatively, you can change the configuration in `PulsarGroups.json` accordingly to match your resources of the DC/OS cluster. - -## Deploy Pulsar using the DC/OS command interface - -You can deploy Pulsar on DC/OS using this command: - -```bash - -$ dcos marathon group add PulsarGroups.json - -``` - -This command deploys Docker container instances in three groups, which together comprise a Pulsar cluster: - -* 3 bookies (1 [bookie](reference-terminology.md#bookie) on each agent node and 1 [bookie recovery](http://bookkeeper.apache.org/docs/latest/admin/autorecovery/) instance) -* 3 Pulsar [brokers](reference-terminology.md#broker) (1 broker on each node and 1 admin instance) -* 1 [Prometheus](http://prometheus.io/) instance and 1 [Grafana](https://grafana.com/) instance - - -> When you run DC/OS, a ZooKeeper cluster will be running at `master.mesos:2181`, thus you do not have to install or start up ZooKeeper separately. - -After executing the `dcos` command above, click the **Services** tab in the DC/OS [GUI interface](https://docs.mesosphere.com/latest/gui/), which you can access at [http://m1.dcos](http://m1.dcos) in this example. You should see several applications during the deployment. - -![DC/OS command executed](/assets/dcos_command_execute.png) - -![DC/OS command executed2](/assets/dcos_command_execute2.png) - -## The BookKeeper group - -To monitor the status of the BookKeeper cluster deployment, click the **bookkeeper** group in the parent **pulsar** group. - -![DC/OS bookkeeper status](/assets/dcos_bookkeeper_status.png) - -At this point, the status of the 3 [bookies](reference-terminology.md#bookie) are green, which means that the bookies have been deployed successfully and are running. - -![DC/OS bookkeeper running](/assets/dcos_bookkeeper_run.png) - -You can also click each bookie instance to get more detailed information, such as the bookie running log. - -![DC/OS bookie log](/assets/dcos_bookie_log.png) - -To display information about the BookKeeper in ZooKeeper, you can visit [http://m1.dcos/exhibitor](http://m1.dcos/exhibitor). In this example, 3 bookies are under the `available` directory. - -![DC/OS bookkeeper in zk](/assets/dcos_bookkeeper_in_zookeeper.png) - -## The Pulsar broker group - -Similar to the BookKeeper group above, click **brokers** to check the status of the Pulsar brokers. - -![DC/OS broker status](/assets/dcos_broker_status.png) - -![DC/OS broker running](/assets/dcos_broker_run.png) - -You can also click each broker instance to get more detailed information, such as the broker running log. - -![DC/OS broker log](/assets/dcos_broker_log.png) - -Broker cluster information in ZooKeeper is also available through the web UI. In this example, you can see that the `loadbalance` and `managed-ledgers` directories have been created. 
- -![DC/OS broker in zk](/assets/dcos_broker_in_zookeeper.png) - -## Monitor group - -The **monitory** group consists of Prometheus and Grafana. - -![DC/OS monitor status](/assets/dcos_monitor_status.png) - -### Prometheus - -Click the instance of `prom` to get the endpoint of Prometheus, which is `192.168.65.121:9090` in this example. - -![DC/OS prom endpoint](/assets/dcos_prom_endpoint.png) - -If you click that endpoint, you can see the Prometheus dashboard. All the bookies and brokers are listed on [http://192.168.65.121:9090/targets](http://192.168.65.121:9090/targets). - -![DC/OS prom targets](/assets/dcos_prom_targets.png) - -### Grafana - -Click `grafana` to get the endpoint for Grafana, which is `192.168.65.121:3000` in this example. - -![DC/OS grafana endpoint](/assets/dcos_grafana_endpoint.png) - -If you click that endpoint, you can access the Grafana dashboard. - -![DC/OS grafana targets](/assets/dcos_grafana_dashboard.png) - -## Run a simple Pulsar consumer and producer on DC/OS - -Now that you have a fully deployed Pulsar cluster, you can run a simple consumer and producer to show Pulsar on DC/OS in action. - -### Download and prepare the Pulsar Java tutorial - -You can clone a [Pulsar Java tutorial](https://github.com/streamlio/pulsar-java-tutorial) repo. This repo contains a simple Pulsar consumer and producer (you can find more information in the `README` file in this repo). - -```bash - -$ git clone https://github.com/streamlio/pulsar-java-tutorial - -``` - -Change the `SERVICE_URL` from `pulsar://localhost:6650` to `pulsar://a1.dcos:6650` in both [`ConsumerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ConsumerTutorial.java) file and [`ProducerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ProducerTutorial.java) file. - -The `pulsar://a1.dcos:6650` endpoint is for the broker service. You can fetch the endpoint details for each broker instance from the DC/OS GUI. `a1.dcos` is a DC/OS client agent that runs a broker, and you can replace it with the client agent IP address. - -Now, you can change the message number from 10 to 10000000 in the main method in [`ProducerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ProducerTutorial.java) file to produce more messages. - -Then, you can compile the project code using the command below: - -```bash - -$ mvn clean package - -``` - -### Run the consumer and producer - -Execute this command to run the consumer: - -```bash - -$ mvn exec:java -Dexec.mainClass="tutorial.ConsumerTutorial" - -``` - -Execute this command to run the producer: - -```bash - -$ mvn exec:java -Dexec.mainClass="tutorial.ProducerTutorial" - -``` - -You see that the producer is producing messages and the consumer is consuming messages through the DC/OS GUI. - -![DC/OS pulsar producer](/assets/dcos_producer.png) - -![DC/OS pulsar consumer](/assets/dcos_consumer.png) - -### View Grafana metric output - -While the producer and consumer are running, you can access the running metrics from Grafana. - -![DC/OS pulsar dashboard](/assets/dcos_metrics.png) - - -## Uninstall Pulsar - -You can shut down and uninstall the `pulsar` application from DC/OS at any time in one of the following two ways: - -1. Click the three dots at the right end of Pulsar group and choose **Delete** on the DC/OS GUI. - - ![DC/OS pulsar uninstall](/assets/dcos_uninstall.png) - -2. Use the command below. 
- - ```bash - - $ dcos marathon group remove /pulsar - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/deploy-docker.md b/site2/website/versioned_docs/version-2.9.1-deprecated/deploy-docker.md deleted file mode 100644 index 8348d78deb2378..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/deploy-docker.md +++ /dev/null @@ -1,60 +0,0 @@ ---- -id: deploy-docker -title: Deploy a cluster on Docker -sidebar_label: "Docker" -original_id: deploy-docker ---- - -To deploy a Pulsar cluster on Docker, complete the following steps: -1. Deploy a ZooKeeper cluster (optional) -2. Initialize cluster metadata -3. Deploy a BookKeeper cluster -4. Deploy one or more Pulsar brokers - -## Prepare - -To run Pulsar on Docker, you need to create a container for each Pulsar component: ZooKeeper, BookKeeper and broker. You can pull the images of ZooKeeper and BookKeeper separately on [Docker Hub](https://hub.docker.com/), and pull a [Pulsar image](https://hub.docker.com/r/apachepulsar/pulsar-all/tags) for the broker. You can also pull only one [Pulsar image](https://hub.docker.com/r/apachepulsar/pulsar-all/tags) and create three containers with this image. This tutorial takes the second option as an example. - -### Pull a Pulsar image -You can pull a Pulsar image from [Docker Hub](https://hub.docker.com/r/apachepulsar/pulsar-all/tags) with the following command. - -``` - -docker pull apachepulsar/pulsar-all:latest - -``` - -### Create three containers -Create containers for ZooKeeper, BookKeeper and broker. In this example, they are named as `zookeeper`, `bookkeeper` and `broker` respectively. You can name them as you want with the `--name` flag. By default, the container names are created randomly. - -``` - -docker run -it --name bookkeeper apachepulsar/pulsar-all:latest /bin/bash -docker run -it --name zookeeper apachepulsar/pulsar-all:latest /bin/bash -docker run -it --name broker apachepulsar/pulsar-all:latest /bin/bash - -``` - -### Create a network -To deploy a Pulsar cluster on Docker, you need to create a `network` and connect the containers of ZooKeeper, BookKeeper and broker to this network. The following command creates the network `pulsar`: - -``` - -docker network create pulsar - -``` - -### Connect containers to network -Connect the containers of ZooKeeper, BookKeeper and broker to the `pulsar` network with the following commands. - -``` - -docker network connect pulsar zookeeper -docker network connect pulsar bookkeeper -docker network connect pulsar broker - -``` - -To check whether the containers are successfully connected to the network, enter the `docker network inspect pulsar` command. - -For detailed information about how to deploy ZooKeeper cluster, BookKeeper cluster, brokers, see [deploy a cluster on bare metal](deploy-bare-metal.md). diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/deploy-kubernetes.md b/site2/website/versioned_docs/version-2.9.1-deprecated/deploy-kubernetes.md deleted file mode 100644 index 1aefc6ad79f716..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/deploy-kubernetes.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -id: deploy-kubernetes -title: Deploy Pulsar on Kubernetes -sidebar_label: "Kubernetes" -original_id: deploy-kubernetes ---- - -To get up and running with these charts as fast as possible, in a **non-production** use case, we provide -a [quick start guide](getting-started-helm.md) for Proof of Concept (PoC) deployments. 
- -To configure and install a Pulsar cluster on Kubernetes for production usage, follow the complete [Installation Guide](helm-install.md). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/deploy-monitoring.md b/site2/website/versioned_docs/version-2.9.1-deprecated/deploy-monitoring.md deleted file mode 100644 index 2b5c19344dc8c3..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/deploy-monitoring.md +++ /dev/null @@ -1,148 +0,0 @@ ---- -id: deploy-monitoring -title: Monitor -sidebar_label: "Monitor" -original_id: deploy-monitoring ---- - -You can use different ways to monitor a Pulsar cluster, exposing both metrics related to the usage of topics and the overall health of the individual components of the cluster. - -## Collect metrics - -You can collect broker stats, ZooKeeper stats, and BookKeeper stats. - -### Broker stats - -You can collect Pulsar broker metrics from brokers and export the metrics in JSON format. The Pulsar broker metrics mainly have two types: - -* *Destination dumps*, which contain stats for each individual topic. You can fetch the destination dumps using the command below: - - ```shell - - bin/pulsar-admin broker-stats destinations - - ``` - -* Broker metrics, which contain the broker information and topics stats aggregated at namespace level. You can fetch the broker metrics by using the following command: - - ```shell - - bin/pulsar-admin broker-stats monitoring-metrics - - ``` - -All the message rates are updated every minute. - -The aggregated broker metrics are also exposed in the [Prometheus](https://prometheus.io) format at: - -```shell - -http://$BROKER_ADDRESS:8080/metrics/ - -``` - -### ZooKeeper stats - -The local ZooKeeper, configuration store server and clients that are shipped with Pulsar can expose detailed stats through Prometheus. - -```shell - -http://$LOCAL_ZK_SERVER:8000/metrics -http://$GLOBAL_ZK_SERVER:8001/metrics - -``` - -The default port of local ZooKeeper is `8000` and the default port of the configuration store is `8001`. You can use a different stats port by configuring `metricsProvider.httpPort` in the `conf/zookeeper.conf` file. - -### BookKeeper stats - -You can configure the stats frameworks for BookKeeper by modifying the `statsProviderClass` in the `conf/bookkeeper.conf` file. - -The default BookKeeper configuration enables the Prometheus exporter. The configuration is included with Pulsar distribution. - -```shell - -http://$BOOKIE_ADDRESS:8000/metrics - -``` - -The default port for bookie is `8000`. You can change the port by configuring `prometheusStatsHttpPort` in the `conf/bookkeeper.conf` file. - -### Managed cursor acknowledgment state -The acknowledgment state is persistent to the ledger first. When the acknowledgment state fails to be persistent to the ledger, they are persistent to ZooKeeper. To track the stats of acknowledgement, you can configure the metrics for the managed cursor. - -``` - -brk_ml_cursor_persistLedgerSucceed(namespace=", ledger_name="", cursor_name:") -brk_ml_cursor_persistLedgerErrors(namespace="", ledger_name="", cursor_name:"") -brk_ml_cursor_persistZookeeperSucceed(namespace="", ledger_name="", cursor_name:"") -brk_ml_cursor_persistZookeeperErrors(namespace="", ledger_name="", cursor_name:"") -brk_ml_cursor_nonContiguousDeletedMessagesRange(namespace="", ledger_name="", cursor_name:"") - -``` - -Those metrics are added in the Prometheus interface, you can monitor and check the metrics stats in the Grafana. 
- -### Function and connector stats - -You can collect functions worker stats from `functions-worker` and export the metrics in JSON formats, which contain functions worker JVM metrics. - -``` - -pulsar-admin functions-worker monitoring-metrics - -``` - -You can collect functions and connectors metrics from `functions-worker` and export the metrics in JSON formats. - -``` - -pulsar-admin functions-worker function-stats - -``` - -The aggregated functions and connectors metrics can be exposed in Prometheus formats as below. You can get [`FUNCTIONS_WORKER_ADDRESS`](http://pulsar.apache.org/docs/en/next/functions-worker/) and `WORKER_PORT` from the `functions_worker.yml` file. - -``` - -http://$FUNCTIONS_WORKER_ADDRESS:$WORKER_PORT/metrics: - -``` - -## Configure Prometheus - -You can use Prometheus to collect all the metrics exposed for Pulsar components and set up [Grafana](https://grafana.com/) dashboards to display the metrics and monitor your Pulsar cluster. For details, refer to [Prometheus guide](https://prometheus.io/docs/introduction/getting_started/). - -When you run Pulsar on bare metal, you can provide the list of nodes to be probed. When you deploy Pulsar in a Kubernetes cluster, the monitoring is setup automatically. For details, refer to [Kubernetes instructions](helm-deploy.md). - -## Dashboards - -When you collect time series statistics, the major problem is to make sure the number of dimensions attached to the data does not explode. Thus you only need to collect time series of metrics aggregated at the namespace level. - -### Pulsar per-topic dashboard - -The per-topic dashboard instructions are available at [Pulsar manager](administration-pulsar-manager.md). - -### Grafana - -You can use grafana to create dashboard driven by the data that is stored in Prometheus. - -When you deploy Pulsar on Kubernetes, a `pulsar-grafana` Docker image is enabled by default. You can use the docker image with the principal dashboards. - -Enter the command below to use the dashboard manually: - -```shell - -docker run -p3000:3000 \ - -e PROMETHEUS_URL=http://$PROMETHEUS_HOST:9090/ \ - apachepulsar/pulsar-grafana:latest - -``` - -The following are some Grafana dashboards examples: - -- [pulsar-grafana](http://pulsar.apache.org/docs/en/deploy-monitoring/#grafana): a Grafana dashboard that displays metrics collected in Prometheus for Pulsar clusters running on Kubernetes. -- [apache-pulsar-grafana-dashboard](https://github.com/streamnative/apache-pulsar-grafana-dashboard): a collection of Grafana dashboard templates for different Pulsar components running on both Kubernetes and on-premise machines. - - ## Alerting rules - You can set alerting rules according to your Pulsar environment. To configure alerting rules for Apache Pulsar, refer to [alerting rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/). 
\ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/develop-load-manager.md b/site2/website/versioned_docs/version-2.9.1-deprecated/develop-load-manager.md deleted file mode 100644 index 509209b6a852d8..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/develop-load-manager.md +++ /dev/null @@ -1,227 +0,0 @@ ---- -id: develop-load-manager -title: Modular load manager -sidebar_label: "Modular load manager" -original_id: develop-load-manager ---- - -The *modular load manager*, implemented in [`ModularLoadManagerImpl`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/impl/ModularLoadManagerImpl.java), is a flexible alternative to the previously implemented load manager, [`SimpleLoadManagerImpl`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/impl/SimpleLoadManagerImpl.java), which attempts to simplify how load is managed while also providing abstractions so that complex load management strategies may be implemented. - -## Usage - -There are two ways that you can enable the modular load manager: - -1. Change the value of the `loadManagerClassName` parameter in `conf/broker.conf` from `org.apache.pulsar.broker.loadbalance.impl.SimpleLoadManagerImpl` to `org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl`. -2. Using the `pulsar-admin` tool. Here's an example: - - ```shell - - $ pulsar-admin update-dynamic-config \ - --config loadManagerClassName \ - --value org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl - - ``` - - You can use the same method to change back to the original value. In either case, any mistake in specifying the load manager will cause Pulsar to default to `SimpleLoadManagerImpl`. - -## Verification - -There are a few different ways to determine which load manager is being used: - -1. Use `pulsar-admin` to examine the `loadManagerClassName` element: - - ```shell - - $ bin/pulsar-admin brokers get-all-dynamic-config - { - "loadManagerClassName" : "org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl" - } - - ``` - - If there is no `loadManagerClassName` element, then the default load manager is used. - -2. Consult a ZooKeeper load report. With the module load manager, the load report in `/loadbalance/brokers/...` will have many differences. for example the `systemResourceUsage` sub-elements (`bandwidthIn`, `bandwidthOut`, etc.) are now all at the top level. Here is an example load report from the module load manager: - - ```json - - { - "bandwidthIn": { - "limit": 10240000.0, - "usage": 4.256510416666667 - }, - "bandwidthOut": { - "limit": 10240000.0, - "usage": 5.287239583333333 - }, - "bundles": [], - "cpu": { - "limit": 2400.0, - "usage": 5.7353247655435915 - }, - "directMemory": { - "limit": 16384.0, - "usage": 1.0 - } - } - - ``` - - With the simple load manager, the load report in `/loadbalance/brokers/...` will look like this: - - ```json - - { - "systemResourceUsage": { - "bandwidthIn": { - "limit": 10240000.0, - "usage": 0.0 - }, - "bandwidthOut": { - "limit": 10240000.0, - "usage": 0.0 - }, - "cpu": { - "limit": 2400.0, - "usage": 0.0 - }, - "directMemory": { - "limit": 16384.0, - "usage": 1.0 - }, - "memory": { - "limit": 8192.0, - "usage": 3903.0 - } - } - } - - ``` - -3. The command-line [broker monitor](reference-cli-tools.md#monitor-brokers) will have a different output format depending on which load manager implementation is being used. 
- - Here is an example from the modular load manager: - - ``` - - =================================================================================================================== - ||SYSTEM |CPU % |MEMORY % |DIRECT % |BW IN % |BW OUT % |MAX % || - || |0.00 |48.33 |0.01 |0.00 |0.00 |48.33 || - ||COUNT |TOPIC |BUNDLE |PRODUCER |CONSUMER |BUNDLE + |BUNDLE - || - || |4 |4 |0 |2 |4 |0 || - ||LATEST |MSG/S IN |MSG/S OUT |TOTAL |KB/S IN |KB/S OUT |TOTAL || - || |0.00 |0.00 |0.00 |0.00 |0.00 |0.00 || - ||SHORT |MSG/S IN |MSG/S OUT |TOTAL |KB/S IN |KB/S OUT |TOTAL || - || |0.00 |0.00 |0.00 |0.00 |0.00 |0.00 || - ||LONG |MSG/S IN |MSG/S OUT |TOTAL |KB/S IN |KB/S OUT |TOTAL || - || |0.00 |0.00 |0.00 |0.00 |0.00 |0.00 || - =================================================================================================================== - - ``` - - Here is an example from the simple load manager: - - ``` - - =================================================================================================================== - ||COUNT |TOPIC |BUNDLE |PRODUCER |CONSUMER |BUNDLE + |BUNDLE - || - || |4 |4 |0 |2 |0 |0 || - ||RAW SYSTEM |CPU % |MEMORY % |DIRECT % |BW IN % |BW OUT % |MAX % || - || |0.25 |47.94 |0.01 |0.00 |0.00 |47.94 || - ||ALLOC SYSTEM |CPU % |MEMORY % |DIRECT % |BW IN % |BW OUT % |MAX % || - || |0.20 |1.89 | |1.27 |3.21 |3.21 || - ||RAW MSG |MSG/S IN |MSG/S OUT |TOTAL |KB/S IN |KB/S OUT |TOTAL || - || |0.00 |0.00 |0.00 |0.01 |0.01 |0.01 || - ||ALLOC MSG |MSG/S IN |MSG/S OUT |TOTAL |KB/S IN |KB/S OUT |TOTAL || - || |54.84 |134.48 |189.31 |126.54 |320.96 |447.50 || - =================================================================================================================== - - ``` - -It is important to note that the module load manager is _centralized_, meaning that all requests to assign a bundle---whether it's been seen before or whether this is the first time---only get handled by the _lead_ broker (which can change over time). To determine the current lead broker, examine the `/loadbalance/leader` node in ZooKeeper. - -## Implementation - -### Data - -The data monitored by the modular load manager is contained in the [`LoadData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/LoadData.java) class. -Here, the available data is subdivided into the bundle data and the broker data. - -#### Broker - -The broker data is contained in the [`BrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/BrokerData.java) class. It is further subdivided into two parts, -one being the local data which every broker individually writes to ZooKeeper, and the other being the historical broker -data which is written to ZooKeeper by the leader broker. - -##### Local Broker Data -The local broker data is contained in the class [`LocalBrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/java/org/apache/pulsar/policies/data/loadbalancer/LocalBrokerData.java) and provides information about the following resources: - -* CPU usage -* JVM heap memory usage -* Direct memory usage -* Bandwidth in/out usage -* Most recent total message rate in/out across all bundles -* Total number of topics, bundles, producers, and consumers -* Names of all bundles assigned to this broker -* Most recent changes in bundle assignments for this broker - -The local broker data is updated periodically according to the service configuration -"loadBalancerReportUpdateMaxIntervalMinutes". 
After any broker updates their local broker data, the leader broker will -receive the update immediately via a ZooKeeper watch, where the local data is read from the ZooKeeper node -`/loadbalance/brokers/` - -##### Historical Broker Data - -The historical broker data is contained in the [`TimeAverageBrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/TimeAverageBrokerData.java) class. - -In order to reconcile the need to make good decisions in a steady-state scenario and make reactive decisions in a critical scenario, the historical data is split into two parts: the short-term data for reactive decisions, and the long-term data for steady-state decisions. Both time frames maintain the following information: - -* Message rate in/out for the entire broker -* Message throughput in/out for the entire broker - -Unlike the bundle data, the broker data does not maintain samples for the global broker message rates and throughputs, which is not expected to remain steady as new bundles are removed or added. Instead, this data is aggregated over the short-term and long-term data for the bundles. See the section on bundle data to understand how that data is collected and maintained. - -The historical broker data is updated for each broker in memory by the leader broker whenever any broker writes their local data to ZooKeeper. Then, the historical data is written to ZooKeeper by the leader broker periodically according to the configuration `loadBalancerResourceQuotaUpdateIntervalMinutes`. - -##### Bundle Data - -The bundle data is contained in the [`BundleData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/BundleData.java). Like the historical broker data, the bundle data is split into a short-term and a long-term time frame. The information maintained in each time frame: - -* Message rate in/out for this bundle -* Message Throughput In/Out for this bundle -* Current number of samples for this bundle - -The time frames are implemented by maintaining the average of these values over a set, limited number of samples, where -the samples are obtained through the message rate and throughput values in the local data. Thus, if the update interval -for the local data is 2 minutes, the number of short samples is 10 and the number of long samples is 1000, the -short-term data is maintained over a period of `10 samples * 2 minutes / sample = 20 minutes`, while the long-term -data is similarly over a period of 2000 minutes. Whenever there are not enough samples to satisfy a given time frame, -the average is taken only over the existing samples. When no samples are available, default values are assumed until -they are overwritten by the first sample. Currently, the default values are - -* Message rate in/out: 50 messages per second both ways -* Message throughput in/out: 50KB per second both ways - -The bundle data is updated in memory on the leader broker whenever any broker writes their local data to ZooKeeper. -Then, the bundle data is written to ZooKeeper by the leader broker periodically at the same time as the historical -broker data, according to the configuration `loadBalancerResourceQuotaUpdateIntervalMinutes`. 
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/develop-schema.md b/site2/website/versioned_docs/version-2.9.1-deprecated/develop-schema.md
deleted file mode 100644
index 2d4461a5ea2b55..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/develop-schema.md
+++ /dev/null
@@ -1,62 +0,0 @@
---
id: develop-schema
title: Custom schema storage
sidebar_label: "Custom schema storage"
original_id: develop-schema
---

By default, Pulsar stores data type [schemas](concepts-schema-registry.md) in [Apache BookKeeper](https://bookkeeper.apache.org) (which is deployed alongside Pulsar). You can, however, use another storage system if you wish. This doc walks you through creating your own schema storage implementation.

In order to use a non-default (i.e. non-BookKeeper) storage system for Pulsar schemas, you need to implement two Java interfaces: [`SchemaStorage`](#schemastorage-interface) and [`SchemaStorageFactory`](#schemastoragefactory-interface).
## SchemaStorage interface

The `SchemaStorage` interface has the following methods:

```java

public interface SchemaStorage {
    // How schemas are updated
    CompletableFuture<SchemaVersion> put(String key, byte[] value, byte[] hash);

    // How schemas are fetched from storage
    CompletableFuture<StoredSchema> get(String key, SchemaVersion version);

    // How schemas are deleted
    CompletableFuture<SchemaVersion> delete(String key);

    // Utility method for converting a schema version byte array to a SchemaVersion object
    SchemaVersion versionFromBytes(byte[] version);

    // Startup behavior for the schema storage client
    void start() throws Exception;

    // Shutdown behavior for the schema storage client
    void close() throws Exception;
}

```

> For a full-fledged example schema storage implementation, see the [`BookKeeperSchemaStorage`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/schema/BookkeeperSchemaStorage.java) class.
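To make the contract concrete, here is a minimal in-memory sketch of the interface above. It is a toy for illustration only (no durability, coarse locking); the `SchemaVersion#bytes()` method and the `StoredSchema(data, version)` constructor used here are assumptions modeled on the BookKeeper-backed implementation.

```java

import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

public class InMemorySchemaStorage implements SchemaStorage {

    // A SchemaVersion backed by a plain long; bytes() is the only method the
    // interface is assumed to require here.
    static class LongVersion implements SchemaVersion {
        final long version;
        LongVersion(long version) { this.version = version; }
        @Override
        public byte[] bytes() {
            return ByteBuffer.allocate(Long.BYTES).putLong(version).array();
        }
    }

    private final Map<String, List<byte[]>> store = new ConcurrentHashMap<>();

    @Override
    public synchronized CompletableFuture<SchemaVersion> put(String key, byte[] value, byte[] hash) {
        List<byte[]> versions = store.computeIfAbsent(key, k -> new ArrayList<>());
        versions.add(value);
        return CompletableFuture.completedFuture(new LongVersion(versions.size() - 1));
    }

    @Override
    public CompletableFuture<StoredSchema> get(String key, SchemaVersion version) {
        long v = ByteBuffer.wrap(version.bytes()).getLong();
        List<byte[]> versions = store.get(key);
        if (versions == null || v < 0 || v >= versions.size()) {
            return CompletableFuture.completedFuture(null);
        }
        // StoredSchema(data, version) mirrors the broker's value class (assumption).
        return CompletableFuture.completedFuture(new StoredSchema(versions.get((int) v), version));
    }

    @Override
    public synchronized CompletableFuture<SchemaVersion> delete(String key) {
        List<byte[]> removed = store.remove(key);
        long last = removed == null ? -1 : removed.size() - 1;
        return CompletableFuture.completedFuture(new LongVersion(last));
    }

    @Override
    public SchemaVersion versionFromBytes(byte[] version) {
        return new LongVersion(ByteBuffer.wrap(version).getLong());
    }

    @Override
    public void start() throws Exception { /* nothing to initialize */ }

    @Override
    public void close() throws Exception { store.clear(); }
}

```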
## SchemaStorageFactory interface

```java

public interface SchemaStorageFactory {
    @NotNull
    SchemaStorage create(PulsarService pulsar) throws Exception;
}

```

> For a full-fledged example schema storage factory implementation, see the [`BookKeeperSchemaStorageFactory`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/schema/BookkeeperSchemaStorageFactory.java) class.

## Deployment

In order to use your custom schema storage implementation, you'll need to:

1. Package the implementation in a [JAR](https://docs.oracle.com/javase/tutorial/deployment/jar/basicsindex.html) file.
1. Add that jar to the `lib` folder in your Pulsar [binary or source distribution](getting-started-standalone.md#installing-pulsar).
1. Change the `schemaRegistryStorageClassName` configuration in [`broker.conf`](reference-configuration.md#broker) to your custom factory class (i.e. the `SchemaStorageFactory` implementation, not the `SchemaStorage` implementation).
1. Start up Pulsar.
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/develop-tools.md b/site2/website/versioned_docs/version-2.9.1-deprecated/develop-tools.md
deleted file mode 100644
index bc7c29e836e6ac..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/develop-tools.md
+++ /dev/null
@@ -1,112 +0,0 @@
---
id: develop-tools
title: Simulation tools
sidebar_label: "Simulation tools"
original_id: develop-tools
---

It is sometimes necessary to create a test environment and incur artificial load to observe how well load managers
handle the load. The load simulation controller, the load simulation client, and the broker monitor were created to
make creating this load and observing its effects on the managers easier.

## Simulation Client
The simulation client is a machine that creates and subscribes to topics with configurable message rates and sizes.
Because simulating a large load sometimes requires multiple client machines, the user does not interact with the
simulation client directly, but instead delegates requests to the simulation controller, which then signals the
clients to start incurring load. The client implementation is in the class
`org.apache.pulsar.testclient.LoadSimulationClient`.

### Usage
To start a simulation client, use the `pulsar-perf` script with the command `simulation-client` as follows:

```

pulsar-perf simulation-client --port <listen port> --service-url <pulsar service url>

```

The client will then be ready to receive controller commands.
## Simulation Controller
The simulation controller sends signals to the simulation clients, requesting them to create new topics, stop old
topics, change the load incurred by topics, and perform several other tasks. It is implemented in the class
`org.apache.pulsar.testclient.LoadSimulationController` and presents the user with a shell from which to send
commands.

### Usage
To start a simulation controller, use the `pulsar-perf` script with the command `simulation-controller` as follows:

```

pulsar-perf simulation-controller --cluster <cluster to simulate on> --client-port <listen port for clients>
--clients <comma-separated list of client hostnames>

```

The clients should already be started before the controller is started. You will then be presented with a simple prompt,
where you can issue commands to simulation clients. Arguments often refer to tenant names, namespace names, and topic
names. In all cases, the BASE name of the tenants, namespaces, and topics are used. For example, for the topic
`persistent://my_tenant/my_cluster/my_namespace/my_topic`, the tenant name is `my_tenant`, the namespace name is
`my_namespace`, and the topic name is `my_topic`. The controller can perform the following actions:

* Create a topic with a producer and a consumer
  * `trade <tenant> <namespace> <topic> [--rate <message rate>]
  [--rand-rate <lower bound>,<upper bound>]
  [--size <message size>]`
* Create a group of topics with a producer and a consumer
  * `trade_group <tenant> <group> <num namespaces> [--rate <message rate>]
  [--rand-rate <lower bound>,<upper bound>]
  [--separation <separation rate>] [--size <message size>]
  [--topics-per-namespace <topics per namespace>]`
* Change the configuration of an existing topic
  * `change <tenant> <namespace> <topic> [--rate <message rate>]
  [--rand-rate <lower bound>,<upper bound>]
  [--size <message size>]`
* Change the configuration of a group of topics
  * `change_group <tenant> <group> [--rate <message rate>] [--rand-rate <lower bound>,<upper bound>]
  [--size <message size>] [--topics-per-namespace <topics per namespace>]`
* Shutdown a previously created topic
  * `stop <tenant> <namespace> <topic>`
* Shutdown a previously created group of topics
  * `stop_group <tenant> <group>`
* Copy the historical data from one ZooKeeper to another and simulate based on the message rates and sizes in that
history
  * `copy <source zookeeper> <target zookeeper> [--rate-multiplier value]`
* Simulate the load of the historical data on the current ZooKeeper (should be the same ZooKeeper being simulated on)
  * `simulate <zookeeper> [--rate-multiplier value]`
* Stream the latest data from the given active ZooKeeper to simulate the real-time load of that ZooKeeper.
  * `stream <zookeeper> [--rate-multiplier value]`

The "group" arguments in these commands allow the user to create or affect multiple topics at once. Groups are created
when calling the `trade_group` command, and all topics from these groups may be subsequently modified or stopped
with the `change_group` and `stop_group` commands respectively. All ZooKeeper arguments are of the form
`zookeeper_host:port`.

### Difference Between Copy, Simulate, and Stream
The commands `copy`, `simulate`, and `stream` are very similar but have significant differences. `copy` is used when
you want to simulate the load of a static, external ZooKeeper on the ZooKeeper you are simulating on. Thus,
`source zookeeper` should be the ZooKeeper you want to copy and `target zookeeper` should be the ZooKeeper you are
simulating on, and then it will get the full benefit of the historical data of the source in both load manager
implementations. `simulate`, on the other hand, takes in only one ZooKeeper, the one you are simulating on. It assumes
that you are simulating on a ZooKeeper that has historical data for `SimpleLoadManagerImpl` and creates equivalent
historical data for `ModularLoadManagerImpl`. Then, the load according to the historical data is simulated by the
clients. Finally, `stream` takes in an active ZooKeeper different from the ZooKeeper being simulated on, streams
load data from it, and simulates the real-time load. In all cases, the optional `rate-multiplier` argument allows the
user to simulate some proportion of the load. For instance, using `--rate-multiplier 0.05` will cause messages to
be sent at only `5%` of the rate of the load that is being simulated.
## Broker Monitor
To observe the behavior of the load manager in these simulations, you can use the broker monitor, which is
implemented in `org.apache.pulsar.testclient.BrokerMonitor`. The broker monitor prints tabular load data to the
console as it is updated, using ZooKeeper watchers.

### Usage
To start a broker monitor, use the `monitor-brokers` command in the `pulsar-perf` script:

```

pulsar-perf monitor-brokers --connect-string <zookeeper host:port>

```

The console will then continuously print load data until it is interrupted.

diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/developing-binary-protocol.md b/site2/website/versioned_docs/version-2.9.1-deprecated/developing-binary-protocol.md
deleted file mode 100644
index 94306febee93e2..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/developing-binary-protocol.md
+++ /dev/null
@@ -1,606 +0,0 @@
---
id: developing-binary-protocol
title: Pulsar binary protocol specification
sidebar_label: "Binary protocol"
original_id: developing-binary-protocol
---

Pulsar uses a custom binary protocol for communications between producers/consumers and brokers. This protocol is designed to support required features, such as acknowledgements and flow control, while ensuring maximum transport and implementation efficiency.

Clients and brokers exchange *commands* with each other. Commands are formatted as binary [protocol buffer](https://developers.google.com/protocol-buffers/) (aka *protobuf*) messages. The format of protobuf commands is specified in the [`PulsarApi.proto`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/proto/PulsarApi.proto) file and also documented in the [Protobuf interface](#protobuf-interface) section below.

> ### Connection sharing
> Commands for different producers and consumers can be interleaved and sent through the same connection without restriction.

All commands associated with Pulsar's protocol are contained in a [`BaseCommand`](#pulsar.proto.BaseCommand) protobuf message that includes a [`Type`](#pulsar.proto.Type) [enum](https://developers.google.com/protocol-buffers/docs/proto#enum) with all possible subcommands as optional fields. `BaseCommand` messages can specify only one subcommand.

## Framing

Since protobuf doesn't provide any sort of message frame, all messages in the Pulsar protocol are prepended with a 4-byte field that specifies the size of the frame. The maximum allowable size of a single frame is 5 MB.

The Pulsar protocol allows for two types of commands:

1. **Simple commands** that do not carry a message payload.
2. **Payload commands** that bear a payload that is used when publishing or delivering messages.
In payload commands, the protobuf command data is followed by protobuf [metadata](#message-metadata) and then the payload, which is passed in raw format outside of protobuf. All sizes are passed as 4-byte unsigned big endian integers. - -> Message payloads are passed in raw format rather than protobuf format for efficiency reasons. - -### Simple commands - -Simple (payload-free) commands have this basic structure: - -| Component | Description | Size (in bytes) | -|:------------|:----------------------------------------------------------------------------------------|:----------------| -| totalSize | The size of the frame, counting everything that comes after it (in bytes) | 4 | -| commandSize | The size of the protobuf-serialized command | 4 | -| message | The protobuf message serialized in a raw binary format (rather than in protobuf format) | | - -### Payload commands - -Payload commands have this basic structure: - -| Component | Description | Size (in bytes) | -|:-------------|:--------------------------------------------------------------------------------------------|:----------------| -| totalSize | The size of the frame, counting everything that comes after it (in bytes) | 4 | -| commandSize | The size of the protobuf-serialized command | 4 | -| message | The protobuf message serialized in a raw binary format (rather than in protobuf format) | | -| magicNumber | A 2-byte byte array (`0x0e01`) identifying the current format | 2 | -| checksum | A [CRC32-C checksum](http://www.evanjones.ca/crc32c.html) of everything that comes after it | 4 | -| metadataSize | The size of the message [metadata](#message-metadata) | 4 | -| metadata | The message [metadata](#message-metadata) stored as a binary protobuf message | | -| payload | Anything left in the frame is considered the payload and can include any sequence of bytes | | - -## Message metadata - -Message metadata is stored alongside the application-specified payload as a serialized protobuf message. Metadata is created by the producer and passed on unchanged to the consumer. - -| Field | Description | -|:-------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| `producer_name` | The name of the producer that published the message | -| `sequence_id` | The sequence ID of the message, assigned by producer | -| `publish_time` | The publish timestamp in Unix time (i.e. as the number of milliseconds since January 1st, 1970 in UTC) | -| `properties` | A sequence of key/value pairs (using the [`KeyValue`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/proto/PulsarApi.proto#L32) message). These are application-defined keys and values with no special meaning to Pulsar. | -| `replicated_from` *(optional)* | Indicates that the message has been replicated and specifies the name of the [cluster](reference-terminology.md#cluster) where the message was originally published | -| `partition_key` *(optional)* | While publishing on a partition topic, if the key is present, the hash of the key is used to determine which partition to choose. Partition key is used as the message key. 
|
| `compression` *(optional)* | Signals that payload has been compressed and with which compression library |
| `uncompressed_size` *(optional)* | If compression is used, the producer must fill the uncompressed size field with the original payload size |
| `num_messages_in_batch` *(optional)* | If this message is really a [batch](#batch-messages) of multiple entries, this field must be set to the number of messages in the batch |

### Batch messages

When using batch messages, the payload contains a list of entries, each with its
individual metadata, defined by the `SingleMessageMetadata` object.

For a single batch, the payload format looks like this:

| Field | Description |
|:--------------|:------------------------------------------------------------|
| metadataSizeN | The size of the single message metadata serialized Protobuf |
| metadataN | Single message metadata |
| payloadN | Message payload passed by application |

Each metadata entry looks like this:

| Field | Description |
|:---------------------------|:--------------------------------------------------------|
| properties | Application-defined properties |
| partition key *(optional)* | Key to indicate the hashing to a particular partition |
| payload_size | Size of the payload for the single message in the batch |

When compression is enabled, the whole batch will be compressed at once.
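As a worked example of the framing described above, here is a minimal sketch that encodes a simple (payload-free) command frame. The `serializedCommand` argument stands in for a protobuf-serialized `BaseCommand`; the sketch is illustrative and is not taken from the Pulsar client.

```java

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class SimpleCommandFramer {
    /**
     * Frame layout for simple commands:
     *   [totalSize (4 bytes)][commandSize (4 bytes)][command bytes]
     * where totalSize counts everything that comes after itself.
     */
    static byte[] frame(byte[] serializedCommand) throws IOException {
        int commandSize = serializedCommand.length;
        int totalSize = 4 + commandSize; // the commandSize field plus the command bytes

        ByteArrayOutputStream buf = new ByteArrayOutputStream(4 + totalSize);
        DataOutputStream out = new DataOutputStream(buf);
        out.writeInt(totalSize);   // 4-byte big-endian size prefix
        out.writeInt(commandSize);
        out.write(serializedCommand);
        return buf.toByteArray();
    }
}

```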
## Interactions

### Connection establishment

After opening a TCP connection to a broker, typically on port 6650, the client
is responsible for initiating the session.

![Connect interaction](/assets/binary-protocol-connect.png)

After receiving a `Connected` response from the broker, the client can
consider the connection ready to use. Alternatively, if the broker doesn't
validate the client authentication, it will reply with an `Error` command and
close the TCP connection.

Example:

```protobuf

message CommandConnect {
  "client_version" : "Pulsar-Client-Java-v1.15.2",
  "auth_method_name" : "my-authentication-plugin",
  "auth_data" : "my-auth-data",
  "protocol_version" : 6
}

```

Fields:
 * `client_version` → String based identifier. Format is not enforced
 * `auth_method_name` → *(optional)* Name of the authentication plugin if auth
   enabled
 * `auth_data` → *(optional)* Plugin specific authentication data
 * `protocol_version` → Indicates the protocol version supported by the
   client. Broker will not send commands introduced in newer revisions of the
   protocol. Broker might be enforcing a minimum version

```protobuf

message CommandConnected {
  "server_version" : "Pulsar-Broker-v1.15.2",
  "protocol_version" : 6
}

```

Fields:
 * `server_version` → String identifier of broker version
 * `protocol_version` → Protocol version supported by the broker. Client
   must not attempt to send commands introduced in newer revisions of the
   protocol

### Keep Alive

To identify prolonged network partitions between clients and brokers, or cases
in which a machine crashes without interrupting the TCP connection on the remote
end (e.g. power outage, kernel panic, hard reboot), Pulsar includes a
mechanism to probe for the availability status of the remote peer.

Both clients and brokers send `Ping` commands periodically, and they close
the socket if a `Pong` response is not received within a timeout (the broker's
default is 60 seconds).

A valid implementation of a Pulsar client is not required to send the `Ping`
probe, though it is required to promptly reply after receiving one from the
broker in order to prevent the remote side from forcibly closing the TCP connection.

### Producer

In order to send messages, a client needs to establish a producer. When creating
a producer, the broker first verifies that this particular client is
authorized to publish on the topic.

Once the client gets confirmation of the producer creation, it can publish
messages to the broker, referring to the producer id negotiated before.

![Producer interaction](/assets/binary-protocol-producer.png)

##### Command Producer

```protobuf

message CommandProducer {
  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
  "producer_id" : 1,
  "request_id" : 1
}

```

Parameters:
 * `topic` → Complete name of the topic on which to create the producer
 * `producer_id` → Client generated producer identifier. Needs to be unique
   within the same connection
 * `request_id` → Identifier for this request. Used to match the response with
   the originating request. Needs to be unique within the same connection
 * `producer_name` → *(optional)* If a producer name is specified, the name will
   be used, otherwise the broker will generate a unique name. Generated
   producer name is guaranteed to be globally unique. Implementations are
   expected to let the broker generate a new producer name when the producer
   is initially created, then reuse it when recreating the producer after
   reconnections.

The broker will reply with either `ProducerSuccess` or `Error` commands.

##### Command ProducerSuccess

```protobuf

message CommandProducerSuccess {
  "request_id" : 1,
  "producer_name" : "generated-unique-producer-name"
}

```

Parameters:
 * `request_id` → Original id of the `CreateProducer` request
 * `producer_name` → Generated globally unique producer name or the name
   specified by the client, if any.

##### Command Send

Command `Send` is used to publish a new message within the context of an
already existing producer. This command is used in a frame that includes the command
as well as the message payload, for which the complete format is specified in the [payload commands](#payload-commands) section.

```protobuf

message CommandSend {
  "producer_id" : 1,
  "sequence_id" : 0,
  "num_messages" : 1
}

```

Parameters:
 * `producer_id` → id of an existing producer
 * `sequence_id` → each message has an associated sequence id which is expected
   to be implemented with a counter starting at 0. The `SendReceipt` that
   acknowledges the effective publishing of a message will refer to it by
   its sequence id.
 * `num_messages` → *(optional)* Used when publishing a batch of messages at
   once.

##### Command SendReceipt

After a message has been persisted on the configured number of replicas, the
broker will send the acknowledgment receipt to the producer.

```protobuf

message CommandSendReceipt {
  "producer_id" : 1,
  "sequence_id" : 0,
  "message_id" : {
    "ledgerId" : 123,
    "entryId" : 456
  }
}

```

Parameters:
 * `producer_id` → id of producer originating the send request
 * `sequence_id` → sequence id of the published message
 * `message_id` → message id assigned by the system to the published message.
   Unique within a single cluster. The message id is composed of 2 longs, `ledgerId`
   and `entryId`, reflecting that this unique id is assigned when the message is
   appended to a BookKeeper ledger
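To illustrate how a client might correlate `Send` commands with their `SendReceipt`s, here is a minimal sketch of the sequence-id bookkeeping. It is a simplification of what a real client does, and all names here are invented for the example.

```java

import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Tracks in-flight messages by sequence id until the broker acknowledges them.
public class PendingSendTracker {
    private final AtomicLong nextSequenceId = new AtomicLong(0); // counter starting at 0
    private final Map<Long, CompletableFuture<long[]>> pending = new ConcurrentHashMap<>();

    // Called when publishing: returns the sequence id to put in CommandSend,
    // registering a future that completes once the matching SendReceipt arrives.
    public long registerSend(CompletableFuture<long[]> receiptFuture) {
        long seq = nextSequenceId.getAndIncrement();
        pending.put(seq, receiptFuture);
        return seq;
    }

    // Called when a SendReceipt frame is decoded.
    public void onSendReceipt(long sequenceId, long ledgerId, long entryId) {
        CompletableFuture<long[]> f = pending.remove(sequenceId);
        if (f != null) {
            f.complete(new long[] { ledgerId, entryId }); // the assigned message id
        }
    }
}

```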
##### Command CloseProducer

**Note**: *This command can be sent by either producer or broker*.

When receiving a `CloseProducer` command, the broker will stop accepting any
more messages for the producer, wait until all pending messages are persisted
and then reply `Success` to the client.

The broker can send a `CloseProducer` command to the client when it's performing
a graceful failover (e.g. the broker is being restarted, or the topic is being unloaded
by the load balancer to be transferred to a different broker).

When receiving the `CloseProducer`, the client is expected to go through the
service discovery lookup again and recreate the producer. The TCP
connection is not affected.

### Consumer

A consumer is used to attach to a subscription and consume messages from it.
After every reconnection, a client needs to subscribe to the topic. If a
subscription is not already there, a new one will be created.

![Consumer](/assets/binary-protocol-consumer.png)

#### Flow control

After the consumer is ready, the client needs to *give permission* to the
broker to push messages. This is done with the `Flow` command.

A `Flow` command gives additional *permits* to send messages to the consumer.
A typical consumer implementation will use a queue to accumulate these messages
before the application is ready to consume them.

After the application has dequeued half of the messages in the queue, the consumer
sends permits to the broker to ask for more messages (equal to half the queue size).

For example, if the queue size is 1000 and the consumer has consumed 500 messages from the queue,
the consumer sends permits to the broker to ask for 500 more messages.

##### Command Subscribe

```protobuf

message CommandSubscribe {
  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
  "subscription" : "my-subscription-name",
  "subType" : "Exclusive",
  "consumer_id" : 1,
  "request_id" : 1
}

```

Parameters:
 * `topic` → Complete name of the topic on which to create the consumer
 * `subscription` → Subscription name
 * `subType` → Subscription type: Exclusive, Shared, Failover, Key_Shared
 * `consumer_id` → Client generated consumer identifier. Needs to be unique
   within the same connection
 * `request_id` → Identifier for this request. Used to match the response with
   the originating request. Needs to be unique within the same connection
 * `consumer_name` → *(optional)* Clients can specify a consumer name. This
   name can be used to track a particular consumer in the stats. Also, in
   Failover subscription type, the name is used to decide which consumer is
   elected as *master* (the one receiving messages): consumers are sorted by
   their consumer name and the first one is elected master.

##### Command Flow

```protobuf

message CommandFlow {
  "consumer_id" : 1,
  "messagePermits" : 1000
}

```

Parameters:
* `consumer_id` → Id of an already established consumer
* `messagePermits` → Number of additional permits to grant to the broker for
  pushing more messages
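The flow-control loop described above can be sketched as follows. The `sendFlowCommand` hook stands in for writing a `Flow` frame on the wire; the sketch is illustrative rather than the actual client implementation.

```java

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.IntConsumer;

// Accumulates pushed messages and re-grants permits once half the
// receiver queue has been handed to the application.
public class PermitManager<T> {
    private final int queueSize;
    private final BlockingQueue<T> queue;
    private final IntConsumer sendFlowCommand; // writes CommandFlow(messagePermits) to the broker
    private int dequeuedSinceLastFlow = 0;

    public PermitManager(int queueSize, IntConsumer sendFlowCommand) {
        this.queueSize = queueSize;
        this.queue = new ArrayBlockingQueue<>(queueSize);
        this.sendFlowCommand = sendFlowCommand;
        sendFlowCommand.accept(queueSize); // initial permits: enough to fill the queue
    }

    // Called when the broker pushes a CommandMessage.
    public void onMessage(T msg) throws InterruptedException {
        queue.put(msg);
    }

    // Called when the application asks for the next message.
    public T receive() throws InterruptedException {
        T msg = queue.take();
        if (++dequeuedSinceLastFlow >= queueSize / 2) {
            sendFlowCommand.accept(dequeuedSinceLastFlow); // ask for that many more
            dequeuedSinceLastFlow = 0;
        }
        return msg;
    }
}

```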
##### Command Message

Command `Message` is used by the broker to push messages to an existing consumer,
within the limits of the given permits.

This command is used in a frame that includes the message payload as well, for
which the complete format is specified in the [payload commands](#payload-commands)
section.

```protobuf

message CommandMessage {
  "consumer_id" : 1,
  "message_id" : {
    "ledgerId" : 123,
    "entryId" : 456
  }
}

```

##### Command Ack

An `Ack` is used to signal to the broker that a given message has been
successfully processed by the application and can be discarded by the broker.

In addition, the broker will also maintain the consumer position based on the
acknowledged messages.

```protobuf

message CommandAck {
  "consumer_id" : 1,
  "ack_type" : "Individual",
  "message_id" : {
    "ledgerId" : 123,
    "entryId" : 456
  }
}

```

Parameters:
 * `consumer_id` → Id of an already established consumer
 * `ack_type` → Type of acknowledgment: `Individual` or `Cumulative`
 * `message_id` → Id of the message to acknowledge
 * `validation_error` → *(optional)* Indicates that the consumer has discarded
   the messages due to: `UncompressedSizeCorruption`,
   `DecompressionError`, `ChecksumMismatch`, `BatchDeSerializeError`
 * `properties` → *(optional)* Reserved configuration items
 * `txnid_most_bits` → *(optional)* Same as the Transaction Coordinator ID; `txnid_most_bits` and `txnid_least_bits`
   uniquely identify a transaction.
 * `txnid_least_bits` → *(optional)* The ID of the transaction opened in a transaction coordinator;
   `txnid_most_bits` and `txnid_least_bits` uniquely identify a transaction.
 * `request_id` → *(optional)* ID for handling response and timeout.

##### Command AckResponse

An `AckResponse` is the broker's response to an acknowledgment request sent by the client. It contains the `consumer_id` sent in the request.
If a transaction is used, it contains both the Transaction ID and the Request ID that were sent in the request. The client finishes the specific request according to the Request ID. If the `error` field is set, it indicates that the request has failed.

An example of `AckResponse`:

```protobuf

message CommandAckResponse {
  "consumer_id" : 1,
  "txnid_least_bits" : 0,
  "txnid_most_bits" : 1,
  "request_id" : 5
}

```

##### Command CloseConsumer

**Note**: *This command can be sent by either consumer or broker*.

This command behaves the same as [`CloseProducer`](#command-closeproducer).

##### Command RedeliverUnacknowledgedMessages

A consumer can ask the broker to redeliver some or all of the pending messages
that were pushed to that particular consumer and not yet acknowledged.

The protobuf object accepts a list of message ids that the consumer wants to
be redelivered. If the list is empty, the broker will redeliver all the
pending messages.

On redelivery, messages can be sent to the same consumer or, in the case of a
shared subscription, spread across all available consumers.

##### Command ReachedEndOfTopic

This is sent by a broker to a particular consumer, whenever the topic
has been "terminated" and all the messages on the subscription were
acknowledged.

The client should use this command to notify the application that no more
messages are coming from the consumer.

##### Command ConsumerStats

This command is sent by the client to retrieve Subscriber and Consumer level
stats from the broker.
Parameters:
 * `request_id` → Id of the request, used to correlate the request
   and the response.
 * `consumer_id` → Id of an already established consumer.
##### Command ConsumerStatsResponse

This is the broker's response to a ConsumerStats request by the client.
It contains the Subscriber and Consumer level stats of the `consumer_id` sent in the request.
If the `error_code` or the `error_message` field is set, it indicates that the request has failed.

##### Command Unsubscribe

This command is sent by the client to unsubscribe the `consumer_id` from the associated topic.
Parameters:
 * `request_id` → Id of the request.
 * `consumer_id` → Id of an already established consumer which needs to unsubscribe.

## Service discovery

### Topic lookup

Topic lookup needs to be performed each time a client needs to create or
reconnect a producer or a consumer. Lookup is used to discover which particular
broker is serving the topic we are about to use.

Lookup can be done with a REST call as described in the [admin API](admin-api-topics.md#lookup-of-topic)
docs.

Since Pulsar 1.16 it is also possible to perform the lookup within the binary
protocol.

For the sake of example, let's assume we have a service discovery component
running at `pulsar://broker.example.com:6650`

Individual brokers will be running at `pulsar://broker-1.example.com:6650`,
`pulsar://broker-2.example.com:6650`, ...

A client can use a connection to the discovery service host to issue a
`LookupTopic` command. The response can either be a broker hostname to
connect to, or a broker hostname against which to retry the lookup.

The `LookupTopic` command has to be used in a connection that has already
gone through the `Connect` / `Connected` initial handshake.

![Topic lookup](/assets/binary-protocol-topic-lookup.png)

```protobuf

message CommandLookupTopic {
  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
  "request_id" : 1,
  "authoritative" : false
}

```

Fields:
 * `topic` → Topic name to lookup
 * `request_id` → Id of the request that will be passed with its response
 * `authoritative` → The initial lookup request should use false. When following a
   redirect response, the client should pass the same value contained in the
   response

##### LookupTopicResponse

Example of response with successful lookup:

```protobuf

message CommandLookupTopicResponse {
  "request_id" : 1,
  "response" : "Connect",
  "brokerServiceUrl" : "pulsar://broker-1.example.com:6650",
  "brokerServiceUrlTls" : "pulsar+ssl://broker-1.example.com:6651",
  "authoritative" : true
}

```

Example of lookup response with redirection:

```protobuf

message CommandLookupTopicResponse {
  "request_id" : 1,
  "response" : "Redirect",
  "brokerServiceUrl" : "pulsar://broker-2.example.com:6650",
  "brokerServiceUrlTls" : "pulsar+ssl://broker-2.example.com:6651",
  "authoritative" : true
}

```

In this second case, we need to reissue the `LookupTopic` command request
to `broker-2.example.com` and this broker will be able to give a definitive
answer to the lookup request.

### Partitioned topics discovery

Partitioned topics metadata discovery is used to find out if a topic is a
"partitioned topic" and how many partitions were set up.

If the topic is marked as "partitioned", the client is expected to create
multiple producers or consumers, one for each partition, using the `partition-X`
suffix.

This information only needs to be retrieved the first time a producer or
consumer is created. There is no need to do this after reconnections.
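For example, a client that learns a topic has N partitions derives the per-partition topic names mechanically. This small sketch shows the naming convention; the helper itself is invented for illustration.

```java

import java.util.ArrayList;
import java.util.List;

public class PartitionNaming {
    // Given the base topic and the partition count returned by
    // PartitionedTopicMetadataResponse, build the per-partition topic names.
    static List<String> partitionTopics(String baseTopic, int partitions) {
        List<String> names = new ArrayList<>(partitions);
        for (int i = 0; i < partitions; i++) {
            names.add(baseTopic + "-partition-" + i); // the partition-X suffix
        }
        return names;
    }

    public static void main(String[] args) {
        // A producer or consumer is then created for each of these topics.
        partitionTopics("persistent://my-property/my-cluster/my-namespace/my-topic", 3)
                .forEach(System.out::println);
    }
}

```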
The discovery of partitioned topics metadata works very similarly to the topic
lookup. The client sends a request to the service discovery address, and the
response contains the actual metadata.

##### Command PartitionedTopicMetadata

```protobuf

message CommandPartitionedTopicMetadata {
  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
  "request_id" : 1
}

```

Fields:
 * `topic` → the topic for which to check the partitions metadata
 * `request_id` → Id of the request that will be passed with its response

##### Command PartitionedTopicMetadataResponse

Example of response with metadata:

```protobuf

message CommandPartitionedTopicMetadataResponse {
  "request_id" : 1,
  "response" : "Success",
  "partitions" : 32
}

```

## Protobuf interface

All Pulsar's Protobuf definitions can be found {@inject: github:here:/pulsar-common/src/main/proto/PulsarApi.proto}.
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/functions-cli.md b/site2/website/versioned_docs/version-2.9.1-deprecated/functions-cli.md
deleted file mode 100644
index c9fcfa201525f0..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/functions-cli.md
+++ /dev/null
@@ -1,198 +0,0 @@
---
id: functions-cli
title: Pulsar Functions command line tool
sidebar_label: "Reference: CLI"
original_id: functions-cli
---

The following tables list the Pulsar Functions command-line tools, including their modes, commands, and parameters.

## localrun

Run a Pulsar Function locally, rather than deploying it to the Pulsar cluster.

Name | Description | Default
---|---|---
auto-ack | Whether or not the framework acknowledges messages automatically. | true |
broker-service-url | The URL for the Pulsar broker. | |
classname | The class name of a Pulsar Function. | |
client-auth-params | Client authentication parameter. | |
client-auth-plugin | Client authentication plugin using which function-process can connect to broker. | |
CPU | The CPU in cores that need to be allocated per function instance (applicable only to docker runtime). | |
custom-schema-inputs | The map of input topics to Schema class names (as a JSON string). | |
custom-serde-inputs | The map of input topics to SerDe class names (as a JSON string). | |
dead-letter-topic | The topic where all messages that were not processed successfully are sent. This parameter is not supported in Python Functions. | |
disk | The disk in bytes that need to be allocated per function instance (applicable only to docker runtime). | |
fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
function-config-file | The path to a YAML config file specifying the configuration of a Pulsar Function. | |
go | Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | |
hostname-verification-enabled | Enable hostname verification. | false
inputs | The input topic or topics of a Pulsar Function (multiple topics can be specified as a comma-separated list). | |
jar | Path to the jar file for the function (if the function is written in Java). 
It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -instance-id-offset | Start the instanceIds from this offset. | 0 -log-topic | The topic to which the logs a Pulsar Function are produced. | | -max-message-retries | How many times should we try to process a message before giving up. | | -name | The name of a Pulsar Function. | | -namespace | The namespace of a Pulsar Function. | | -output | The output topic of a Pulsar Function (If none is specified, no output is written). | | -output-serde-classname | The SerDe class to be used for messages output by the function. | | -parallelism | The parallelism factor of a Pulsar Function (i.e. the number of function instances to run). | | -processing-guarantees | The processing guarantees (delivery semantics) applied to the function. Available values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]. | ATLEAST_ONCE -py | Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -ram | The ram in bytes that need to be allocated per function instance (applicable only to process/docker runtime). | | -retain-ordering | Function consumes and processes messages in order. | | -schema-type | The builtin schema type or custom schema class name to be used for messages output by the function. | | -sliding-interval-count | The number of messages after which the window slides. | | -sliding-interval-duration-ms | The time duration after which the window slides. | | -subs-name | Pulsar source subscription name if user wants a specific subscription-name for the input-topic consumer. | | -tenant | The tenant of a Pulsar Function. | | -timeout-ms | The message timeout in milliseconds. | | -tls-allow-insecure | Allow insecure tls connection. | false -tls-trust-cert-path | tls trust cert file path. | | -topics-pattern | The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (only supported in Java Function). | | -use-tls | Use tls connection. | false -user-config | User-defined config key/values. | | -window-length-count | The number of messages per window. | | -window-length-duration-ms | The time duration of the window in milliseconds. | | - - -## create - -Create and deploy a Pulsar Function in cluster mode. - -Name | Description | Default ----|---|--- -auto-ack | Whether or not the framework acknowledges messages automatically. | true | -classname | The class name of a Pulsar Function. | | -CPU | The CPU in cores that need to be allocated per function instance (applicable only to docker runtime).| | -custom-runtime-options | A string that encodes options to customize the runtime, see docs for configured runtime for details | | -custom-schema-inputs | The map of input topics to Schema class names (as a JSON string). | | -custom-serde-inputs | The map of input topics to SerDe class names (as a JSON string). | | -dead-letter-topic | The topic where all messages that were not processed successfully are sent. This parameter is not supported in Python Functions. 
| | -disk | The disk in bytes that need to be allocated per function instance (applicable only to docker runtime). | | -fqfn | The Fully Qualified Function Name (FQFN) for the function. | | -function-config-file | The path to a YAML config file specifying the configuration of a Pulsar Function. | | -go | Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -inputs | The input topic or topics of a Pulsar Function (multiple topics can be specified as a comma-separated list). | | -jar | Path to the jar file for the function (if the function is written in Java). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -log-topic | The topic to which the logs of a Pulsar Function are produced. | | -max-message-retries | How many times should we try to process a message before giving up. | | -name | The name of a Pulsar Function. | | -namespace | The namespace of a Pulsar Function. | | -output | The output topic of a Pulsar Function (If none is specified, no output is written). | | -output-serde-classname | The SerDe class to be used for messages output by the function. | | -parallelism | The parallelism factor of a Pulsar Function (i.e. the number of function instances to run). | | -processing-guarantees | The processing guarantees (delivery semantics) applied to the function. Available values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]. | ATLEAST_ONCE -py | Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -ram | The ram in bytes that need to be allocated per function instance (applicable only to process/docker runtime). | | -retain-ordering | Function consumes and processes messages in order. | | -schema-type | The builtin schema type or custom schema class name to be used for messages output by the function. | | -sliding-interval-count | The number of messages after which the window slides. | | -sliding-interval-duration-ms | The time duration after which the window slides. | | -subs-name | Pulsar source subscription name if user wants a specific subscription-name for the input-topic consumer. | | -tenant | The tenant of a Pulsar Function. | | -timeout-ms | The message timeout in milliseconds. | | -topics-pattern | The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (only supported in Java Function). | | -user-config | User-defined config key/values. | | -window-length-count | The number of messages per window. | | -window-length-duration-ms | The time duration of the window in milliseconds. | | - -## delete - -Delete a Pulsar Function that is running on a Pulsar cluster. - -Name | Description | Default ----|---|--- -fqfn | The Fully Qualified Function Name (FQFN) for the function. | | -name | The name of a Pulsar Function. | | -namespace | The namespace of a Pulsar Function. 
| | -tenant | The tenant of a Pulsar Function. | | - -## update - -Update a Pulsar Function that has been deployed to a Pulsar cluster. - -Name | Description | Default ----|---|--- -auto-ack | Whether or not the framework acknowledges messages automatically. | true | -classname | The class name of a Pulsar Function. | | -CPU | The CPU in cores that need to be allocated per function instance (applicable only to docker runtime). | | -custom-runtime-options | A string that encodes options to customize the runtime, see docs for configured runtime for details | | -custom-schema-inputs | The map of input topics to Schema class names (as a JSON string). | | -custom-serde-inputs | The map of input topics to SerDe class names (as a JSON string). | | -dead-letter-topic | The topic where all messages that were not processed successfully are sent. This parameter is not supported in Python Functions. | | -disk | The disk in bytes that need to be allocated per function instance (applicable only to docker runtime). | | -fqfn | The Fully Qualified Function Name (FQFN) for the function. | | -function-config-file | The path to a YAML config file specifying the configuration of a Pulsar Function. | | -go | Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -inputs | The input topic or topics of a Pulsar Function (multiple topics can be specified as a comma-separated list). | | -jar | Path to the jar file for the function (if the function is written in Java). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -log-topic | The topic to which the logs of a Pulsar Function are produced. | | -max-message-retries | How many times should we try to process a message before giving up. | | -name | The name of a Pulsar Function. | | -namespace | The namespace of a Pulsar Function. | | -output | The output topic of a Pulsar Function (If none is specified, no output is written). | | -output-serde-classname | The SerDe class to be used for messages output by the function. | | -parallelism | The parallelism factor of a Pulsar Function (i.e. the number of function instances to run). | | -processing-guarantees | The processing guarantees (delivery semantics) applied to the function. Available values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]. | ATLEAST_ONCE -py | Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -ram | The ram in bytes that need to be allocated per function instance (applicable only to process/docker runtime). | | -retain-ordering | Function consumes and processes messages in order. | | -schema-type | The builtin schema type or custom schema class name to be used for messages output by the function. | | -sliding-interval-count | The number of messages after which the window slides. | | -sliding-interval-duration-ms | The time duration after which the window slides. 
| |
subs-name | Pulsar source subscription name if user wants a specific subscription-name for the input-topic consumer. | |
tenant | The tenant of a Pulsar Function. | |
timeout-ms | The message timeout in milliseconds. | |
topics-pattern | The topic pattern to consume from a list of topics under a namespace that matches the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (only supported in Java Function). | |
update-auth-data | Whether or not to update the auth data. | false
user-config | User-defined config key/values. | |
window-length-count | The number of messages per window. | |
window-length-duration-ms | The time duration of the window in milliseconds. | |

## get

Fetch information about a Pulsar Function.

Name | Description | Default
---|---|---
fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
name | The name of a Pulsar Function. | |
namespace | The namespace of a Pulsar Function. | |
tenant | The tenant of a Pulsar Function. | |

## restart

Restart a function instance.

Name | Description | Default
---|---|---
fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
instance-id | The function instanceId (restarts all instances if instance-id is not provided). | |
name | The name of a Pulsar Function. | |
namespace | The namespace of a Pulsar Function. | |
tenant | The tenant of a Pulsar Function. | |

## stop

Stop a function instance.

Name | Description | Default
---|---|---
fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
instance-id | The function instanceId (stops all instances if instance-id is not provided). | |
name | The name of a Pulsar Function. | |
namespace | The namespace of a Pulsar Function. | |
tenant | The tenant of a Pulsar Function. | |

## start

Start a stopped function instance.

Name | Description | Default
---|---|---
fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
instance-id | The function instanceId (starts all instances if instance-id is not provided). | |
name | The name of a Pulsar Function. | |
namespace | The namespace of a Pulsar Function. | |
tenant | The tenant of a Pulsar Function. | |
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/functions-debug.md b/site2/website/versioned_docs/version-2.9.1-deprecated/functions-debug.md
deleted file mode 100644
index c1f19abda64657..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/functions-debug.md
+++ /dev/null
@@ -1,538 +0,0 @@
---
id: functions-debug
title: Debug Pulsar Functions
sidebar_label: "How-to: Debug"
original_id: functions-debug
---

You can use the following methods to debug Pulsar Functions:

* [Captured stderr](functions-debug.md#captured-stderr)
* [Use unit test](functions-debug.md#use-unit-test)
* [Debug with localrun mode](functions-debug.md#debug-with-localrun-mode)
* [Use log topic](functions-debug.md#use-log-topic)
* [Use Functions CLI](functions-debug.md#use-functions-cli)

## Captured stderr

Function startup information and captured stderr output is written to `logs/functions/<tenant>/<namespace>/<function>/<function>-<instance>.log`

This is useful for debugging why a function fails to start.

## Use unit test

A Pulsar Function is a function with inputs and outputs, so you can test a Pulsar Function in a similar way to any other function.
For example, if you have the following Pulsar Function:

```java

import java.util.function.Function;

public class JavaNativeExclamationFunction implements Function<String, String> {
    @Override
    public String apply(String input) {
        return String.format("%s!", input);
    }
}

```

You can write a simple unit test to test this Pulsar Function.

:::tip

Pulsar uses TestNG for testing.

:::

```java

@Test
public void testJavaNativeExclamationFunction() {
    JavaNativeExclamationFunction exclamation = new JavaNativeExclamationFunction();
    String output = exclamation.apply("foo");
    Assert.assertEquals(output, "foo!");
}

```

The following Pulsar Function implements the `org.apache.pulsar.functions.api.Function` interface.

```java

import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;

public class ExclamationFunction implements Function<String, String> {
    @Override
    public String process(String input, Context context) {
        return String.format("%s!", input);
    }
}

```

In this situation, you can write a unit test for this function as well. Remember to mock the `Context` parameter. The following is an example.

:::tip

Pulsar uses TestNG for testing.

:::

```java

@Test
public void testExclamationFunction() {
    ExclamationFunction exclamation = new ExclamationFunction();
    String output = exclamation.process("foo", mock(Context.class));
    Assert.assertEquals(output, "foo!");
}

```

## Debug with localrun mode

When you run a Pulsar Function in localrun mode, it launches an instance of the function on your local machine as a thread.

In this mode, a Pulsar Function consumes and produces actual data on a Pulsar cluster, and mirrors how the function actually runs in a Pulsar cluster.

:::note

Currently, debugging with localrun mode is only supported by Pulsar Functions written in Java. You need Pulsar version 2.4.0 or later to do the following. Even though localrun is available in versions earlier than Pulsar 2.4.0, you cannot debug with localrun mode programmatically or run Functions as threads.

:::

You can launch your function in the following manner.

```java

FunctionConfig functionConfig = new FunctionConfig();
functionConfig.setName(functionName);
functionConfig.setInputs(Collections.singleton(sourceTopic));
functionConfig.setClassName(ExclamationFunction.class.getName());
functionConfig.setRuntime(FunctionConfig.Runtime.JAVA);
functionConfig.setOutput(sinkTopic);

LocalRunner localRunner = LocalRunner.builder().functionConfig(functionConfig).build();
localRunner.start(true);

```

This way you can easily debug functions using an IDE: set breakpoints and manually step through a function to debug with real data.

The following example illustrates how to programmatically launch a function in localrun mode.
```java

public class ExclamationFunction implements Function<String, String> {

    @Override
    public String process(String s, Context context) throws Exception {
        return s + "!";
    }

    public static void main(String[] args) throws Exception {
        FunctionConfig functionConfig = new FunctionConfig();
        functionConfig.setName("exclamation");
        functionConfig.setInputs(Collections.singleton("input"));
        functionConfig.setClassName(ExclamationFunction.class.getName());
        functionConfig.setRuntime(FunctionConfig.Runtime.JAVA);
        functionConfig.setOutput("output");

        LocalRunner localRunner = LocalRunner.builder().functionConfig(functionConfig).build();
        localRunner.start(false);
    }
}

```

To use localrun mode programmatically, add the following dependency.

```xml

<dependency>
    <groupId>org.apache.pulsar</groupId>
    <artifactId>pulsar-functions-local-runner</artifactId>
    <version>${pulsar.version}</version>
</dependency>

```

For complete code samples, see [here](https://github.com/jerrypeng/pulsar-functions-demos/tree/master/debugging).

:::note

Debugging with localrun mode for Pulsar Functions written in other languages will be supported soon.

:::

## Use log topic

In Pulsar Functions, you can generate log information defined in functions to a specified log topic. You can configure consumers to consume messages from a specified log topic to check the log information.

![Pulsar Functions core programming model](/assets/pulsar-functions-overview.png)

**Example**

```java

import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;
import org.slf4j.Logger;

public class LoggingFunction implements Function<String, Void> {
    @Override
    public Void process(String input, Context context) {
        Logger LOG = context.getLogger();
        String messageId = new String(context.getMessageId());

        if (input.contains("danger")) {
            LOG.warn("A warning was received in message {}", messageId);
        } else {
            LOG.info("Message {} received\nContent: {}", messageId, input);
        }

        return null;
    }
}

```

As shown in the example above, you can get the logger via `context.getLogger()` and assign the logger to the `LOG` variable of `slf4j`, so you can define your desired log information in a function using the `LOG` variable. Meanwhile, you need to specify the topic to which the log information is produced.

**Example**

```bash

$ bin/pulsar-admin functions create \
  --log-topic persistent://public/default/logging-function-logs \
  # Other function configs

```

Messages published to the log topic contain several properties for better reasoning:
- `loglevel` -- the level of the log message.
- `fqn` -- the fully qualified name of the function that pushed this log message.
- `instance` -- the ID of the function instance that pushed this log message.
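To inspect these log messages, you can attach an ordinary consumer to the log topic. Below is a minimal sketch using the Java client; the service URL and subscription name are placeholders.

```java

import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.PulsarClient;

public class LogTopicTailer {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650") // placeholder service URL
                .build();

        Consumer<byte[]> consumer = client.newConsumer()
                .topic("persistent://public/default/logging-function-logs")
                .subscriptionName("log-tailer") // placeholder subscription name
                .subscribe();

        while (true) {
            Message<byte[]> msg = consumer.receive();
            // The properties described above identify the source of each log line.
            System.out.printf("[%s] %s (%s): %s%n",
                    msg.getProperty("loglevel"),
                    msg.getProperty("fqn"),
                    msg.getProperty("instance"),
                    new String(msg.getValue()));
            consumer.acknowledge(msg);
        }
    }
}

```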
-
-## Use Functions CLI
-
-With the [Pulsar Functions CLI](reference-pulsar-admin.md#functions), you can debug Pulsar Functions with the following subcommands:
-
-* `get`
-* `status`
-* `stats`
-* `list`
-* `trigger`
-
-:::tip
-
-For the complete commands of the **Pulsar Functions CLI**, see [here](reference-pulsar-admin.md#functions).
-
-:::
-
-### `get`
-
-Get information about a Pulsar Function.
-
-**Usage**
-
-```bash
-
-$ pulsar-admin functions get options
-
-```
-
-**Options**
-
-|Flag|Description
-|---|---
-|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function.
-|`--name`|The name of a Pulsar Function.
-|`--namespace`|The namespace of a Pulsar Function.
-|`--tenant`|The tenant of a Pulsar Function.
-
-:::tip
-
-`--fqfn` consists of `--name`, `--namespace`, and `--tenant`, so you can specify either `--fqfn` alone or the combination of `--name`, `--namespace`, and `--tenant`.
-
-:::
-
-**Example**
-
-You can specify `--fqfn` to get information about a Pulsar Function.
-
-```bash
-
-$ ./bin/pulsar-admin functions get public/default/ExclamationFunctio6
-
-```
-
-Optionally, you can specify `--name`, `--namespace` and `--tenant` to get information about a Pulsar Function.
-
-```bash
-
-$ ./bin/pulsar-admin functions get \
-  --tenant public \
-  --namespace default \
-  --name ExclamationFunctio6
-
-```
-
-As shown below, the `get` command shows the input, output, runtime, and other information about the _ExclamationFunctio6_ function.
-
-```json
-
-{
-  "tenant": "public",
-  "namespace": "default",
-  "name": "ExclamationFunctio6",
-  "className": "org.example.test.ExclamationFunction",
-  "inputSpecs": {
-    "persistent://public/default/my-topic-1": {
-      "isRegexPattern": false
-    }
-  },
-  "output": "persistent://public/default/test-1",
-  "processingGuarantees": "ATLEAST_ONCE",
-  "retainOrdering": false,
-  "userConfig": {},
-  "runtime": "JAVA",
-  "autoAck": true,
-  "parallelism": 1
-}
-
-```
-
-### `status`
-
-Check the current status of a Pulsar Function.
-
-**Usage**
-
-```bash
-
-$ pulsar-admin functions status options
-
-```
-
-**Options**
-
-|Flag|Description
-|---|---
-|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function.
-|`--instance-id`|The instance ID of a Pulsar Function. If the `--instance-id` is not specified, it gets the status of all instances.
-|`--name`|The name of a Pulsar Function.
-|`--namespace`|The namespace of a Pulsar Function.
-|`--tenant`|The tenant of a Pulsar Function.
-
-**Example**
-
-```bash
-
-$ ./bin/pulsar-admin functions status \
-  --tenant public \
-  --namespace default \
-  --name ExclamationFunctio6
-
-```
-
-As shown below, the `status` command shows the total number of instances, the number of running instances, and, for each instance, details such as received messages, successfully processed messages, user and system exceptions, and the average latency.
-
-```json
-
-{
-  "numInstances" : 1,
-  "numRunning" : 1,
-  "instances" : [ {
-    "instanceId" : 0,
-    "status" : {
-      "running" : true,
-      "error" : "",
-      "numRestarts" : 0,
-      "numReceived" : 1,
-      "numSuccessfullyProcessed" : 1,
-      "numUserExceptions" : 0,
-      "latestUserExceptions" : [ ],
-      "numSystemExceptions" : 0,
-      "latestSystemExceptions" : [ ],
-      "averageLatency" : 0.8385,
-      "lastInvocationTime" : 1557734137987,
-      "workerId" : "c-standalone-fw-23ccc88ef29b-8080"
-    }
-  } ]
-}
-
-```
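-
-To narrow the check to a single instance, you can pass `--instance-id` as well. For example, assuming instance `0` exists for this function:
-
-```bash
-
-$ ./bin/pulsar-admin functions status \
-  --tenant public \
-  --namespace default \
-  --name ExclamationFunctio6 \
-  --instance-id 0
-
-```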
-
-### `stats`
-
-Get the current stats of a Pulsar Function.
-
-**Usage**
-
-```bash
-
-$ pulsar-admin functions stats options
-
-```
-
-**Options**
-
-|Flag|Description
-|---|---
-|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function.
-|`--instance-id`|The instance ID of a Pulsar Function. If the `--instance-id` is not specified, it gets the stats of all instances.
-|`--name`|The name of a Pulsar Function.
-|`--namespace`|The namespace of a Pulsar Function.
-|`--tenant`|The tenant of a Pulsar Function.
-
-**Example**
-
-```bash
-
-$ ./bin/pulsar-admin functions stats \
-  --tenant public \
-  --namespace default \
-  --name ExclamationFunctio6
-
-```
-
-The output is shown as follows:
-
-```json
-
-{
-  "receivedTotal" : 1,
-  "processedSuccessfullyTotal" : 1,
-  "systemExceptionsTotal" : 0,
-  "userExceptionsTotal" : 0,
-  "avgProcessLatency" : 0.8385,
-  "1min" : {
-    "receivedTotal" : 0,
-    "processedSuccessfullyTotal" : 0,
-    "systemExceptionsTotal" : 0,
-    "userExceptionsTotal" : 0,
-    "avgProcessLatency" : null
-  },
-  "lastInvocation" : 1557734137987,
-  "instances" : [ {
-    "instanceId" : 0,
-    "metrics" : {
-      "receivedTotal" : 1,
-      "processedSuccessfullyTotal" : 1,
-      "systemExceptionsTotal" : 0,
-      "userExceptionsTotal" : 0,
-      "avgProcessLatency" : 0.8385,
-      "1min" : {
-        "receivedTotal" : 0,
-        "processedSuccessfullyTotal" : 0,
-        "systemExceptionsTotal" : 0,
-        "userExceptionsTotal" : 0,
-        "avgProcessLatency" : null
-      },
-      "lastInvocation" : 1557734137987,
-      "userMetrics" : { }
-    }
-  } ]
-}
-
-```
-
-### `list`
-
-List all Pulsar Functions running under a specific tenant and namespace.
-
-**Usage**
-
-```bash
-
-$ pulsar-admin functions list options
-
-```
-
-**Options**
-
-|Flag|Description
-|---|---
-|`--namespace`|The namespace of a Pulsar Function.
-|`--tenant`|The tenant of a Pulsar Function.
-
-**Example**
-
-```bash
-
-$ ./bin/pulsar-admin functions list \
-  --tenant public \
-  --namespace default
-
-```
-
-As shown below, the `list` command returns three functions running under the _public_ tenant and the _default_ namespace.
-
-```text
-
-ExclamationFunctio1
-ExclamationFunctio2
-ExclamationFunctio3
-
-```
-
-### `trigger`
-
-Trigger a specified Pulsar Function with a supplied value. This command simulates the execution process of a Pulsar Function and verifies it.
-
-**Usage**
-
-```bash
-
-$ pulsar-admin functions trigger options
-
-```
-
-**Options**
-
-|Flag|Description
-|---|---
-|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function.
-|`--name`|The name of a Pulsar Function.
-|`--namespace`|The namespace of a Pulsar Function.
-|`--tenant`|The tenant of a Pulsar Function.
-|`--topic`|The topic name that a Pulsar Function consumes from.
-|`--trigger-file`|The path to a file that contains the data to trigger a Pulsar Function.
-|`--trigger-value`|The value to trigger a Pulsar Function.
-
-**Example**
-
-```bash
-
-$ ./bin/pulsar-admin functions trigger \
-  --tenant public \
-  --namespace default \
-  --name ExclamationFunctio6 \
-  --topic persistent://public/default/my-topic-1 \
-  --trigger-value "hello pulsar functions"
-
-```
-
-The `trigger` command returns the output of the triggered function, in this case:
-
-```text
-
-hello pulsar functions!
-
-```
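-
-Instead of an inline `--trigger-value`, you can read the payload from a file with `--trigger-file`. For example, assuming `payload.txt` contains the message body:
-
-```bash
-
-$ ./bin/pulsar-admin functions trigger \
-  --tenant public \
-  --namespace default \
-  --name ExclamationFunctio6 \
-  --topic persistent://public/default/my-topic-1 \
-  --trigger-file payload.txt
-
-```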
-
-:::note
-
-You must specify the [entire topic name](getting-started-pulsar.md#topic-names) when using the `--topic` option. Otherwise, the following error occurs.
-
-```text
-
-Function in trigger function has unidentified topic
-Reason: Function in trigger function has unidentified topic
-
-```
-
-:::
-
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/functions-deploy.md b/site2/website/versioned_docs/version-2.9.1-deprecated/functions-deploy.md
deleted file mode 100644
index 2a0d68d6c623c7..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/functions-deploy.md
+++ /dev/null
@@ -1,262 +0,0 @@
----
-id: functions-deploy
-title: Deploy Pulsar Functions
-sidebar_label: "How-to: Deploy"
-original_id: functions-deploy
----
-
-## Requirements
-
-To deploy and manage Pulsar Functions, you need to have a Pulsar cluster running. There are several options for this:
-
-* You can run a [standalone cluster](getting-started-standalone.md) locally on your own machine.
-* You can deploy a Pulsar cluster on [Kubernetes](deploy-kubernetes.md), [Amazon Web Services](deploy-aws.md), [bare metal](deploy-bare-metal.md), [DC/OS](https://dcos.io/), and more.
-
-If you run a non-[standalone](reference-terminology.md#standalone) cluster, you need to obtain the service URL for the cluster. How you obtain the service URL depends on how you deploy your Pulsar cluster.
-
-If you want to deploy and trigger Python user-defined functions, you need to install [the Pulsar Python client](http://pulsar.apache.org/docs/en/client-libraries-python/) on all the machines running [functions workers](functions-worker.md).
-
-## Command-line interface
-
-Pulsar Functions are deployed and managed using the [`pulsar-admin functions`](reference-pulsar-admin.md#functions) interface, which contains commands such as [`create`](reference-pulsar-admin.md#functions-create) for deploying functions in [cluster mode](#cluster-mode), [`trigger`](reference-pulsar-admin.md#trigger) for [triggering](#triggering-pulsar-functions) functions, and [`list`](reference-pulsar-admin.md#list-2) for listing deployed functions.
-
-To learn more commands, refer to [`pulsar-admin functions`](reference-pulsar-admin.md#functions).
-
-### Default arguments
-
-When managing Pulsar Functions, you need to specify a variety of information about functions, including tenant, namespace, input and output topics, and so on. However, some parameters have default values if you do not specify values for them. The following table lists the default values.
-
-Parameter | Default
-:---------|:-------
-Function name | You can specify any value for the class name (except org, library, or similar class names). For example, when you specify the flag `--classname org.example.MyFunction`, the function name is `MyFunction`.
-Tenant | Derived from names of the input topics. If the input topics are under the `marketing` tenant, which means the topic names have the form `persistent://marketing/{namespace}/{topicName}`, the tenant is `marketing`.
-Namespace | Derived from names of the input topics. If the input topics are under the `asia` namespace under the `marketing` tenant, which means the topic names have the form `persistent://marketing/asia/{topicName}`, then the namespace is `asia`.
-Output topic | `{input topic}-{function name}-output`. For example, if an input topic name of a function is `incoming`, and the function name is `exclamation`, then the name of the output topic is `incoming-exclamation-output`.
-Subscription type | For `at-least-once` and `at-most-once` [processing guarantees](functions-overview.md#processing-guarantees), the [`SHARED`](concepts-messaging.md#shared) mode is applied by default; for `effectively-once` guarantees, the [`FAILOVER`](concepts-messaging.md#failover) mode is applied. -Processing guarantees | [`ATLEAST_ONCE`](functions-overview.md#processing-guarantees) -Pulsar service URL | `pulsar://localhost:6650` - -### Example of default arguments - -Take the `create` command as an example. - -```bash - -$ bin/pulsar-admin functions create \ - --jar my-pulsar-functions.jar \ - --classname org.example.MyFunction \ - --inputs my-function-input-topic1,my-function-input-topic2 - -``` - -The function has default values for the function name (`MyFunction`), tenant (`public`), namespace (`default`), subscription type (`SHARED`), processing guarantees (`ATLEAST_ONCE`), and Pulsar service URL (`pulsar://localhost:6650`). - -## Local run mode - -If you run a Pulsar Function in **local run** mode, it runs on the machine from which you enter the commands (on your laptop, an [AWS EC2](https://aws.amazon.com/ec2/) instance, and so on). The following is a [`localrun`](reference-pulsar-admin.md#localrun) command example. - -```bash - -$ bin/pulsar-admin functions localrun \ - --py myfunc.py \ - --classname myfunc.SomeFunction \ - --inputs persistent://public/default/input-1 \ - --output persistent://public/default/output-1 - -``` - -By default, the function connects to a Pulsar cluster running on the same machine, via a local [broker](reference-terminology.md#broker) service URL of `pulsar://localhost:6650`. If you use local run mode to run a function but connect it to a non-local Pulsar cluster, you can specify a different broker URL using the `--brokerServiceUrl` flag. The following is an example. - -```bash - -$ bin/pulsar-admin functions localrun \ - --broker-service-url pulsar://my-cluster-host:6650 \ - # Other function parameters - -``` - -## Cluster mode - -When you run a Pulsar Function in **cluster** mode, the function code is uploaded to a Pulsar broker and runs *alongside the broker* rather than in your [local environment](#local-run-mode). You can run a function in cluster mode using the [`create`](reference-pulsar-admin.md#create-1) command. - -```bash - -$ bin/pulsar-admin functions create \ - --py myfunc.py \ - --classname myfunc.SomeFunction \ - --inputs persistent://public/default/input-1 \ - --output persistent://public/default/output-1 - -``` - -### Update functions in cluster mode - -You can use the [`update`](reference-pulsar-admin.md#update-1) command to update a Pulsar Function running in cluster mode. The following command updates the function created in the [cluster mode](#cluster-mode) section. - -```bash - -$ bin/pulsar-admin functions update \ - --py myfunc.py \ - --classname myfunc.SomeFunction \ - --inputs persistent://public/default/new-input-topic \ - --output persistent://public/default/new-output-topic - -``` - -### Parallelism - -Pulsar Functions run as processes or threads, which are called **instances**. When you run a Pulsar Function, it runs as a single instance by default. With one localrun command, you can only run a single instance of a function. If you want to run multiple instances, you can use localrun command multiple times. - -When you create a function, you can specify the *parallelism* of a function (the number of instances to run). 
You can set the parallelism factor using the `--parallelism` flag of the [`create`](reference-pulsar-admin.md#functions-create) command.
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --parallelism 3 \
-  # Other function info
-
-```
-
-You can adjust the parallelism of an already created function using the [`update`](reference-pulsar-admin.md#update-1) interface.
-
-```bash
-
-$ bin/pulsar-admin functions update \
-  --parallelism 5 \
-  # Other function
-
-```
-
-If you specify a function configuration via YAML, use the `parallelism` parameter. The following is a config file example.
-
-```yaml
-
-# function-config.yaml
-parallelism: 3
-inputs:
-- persistent://public/default/input-1
-output: persistent://public/default/output-1
-# other parameters
-
-```
-
-The following is the corresponding update command.
-
-```bash
-
-$ bin/pulsar-admin functions update \
-  --function-config-file function-config.yaml
-
-```
-
-### Function instance resources
-
-When you run Pulsar Functions in [cluster mode](#cluster-mode), you can specify the resources that are assigned to each function [instance](#parallelism).
-
-Resource | Specified as | Runtimes
-:--------|:----------------|:--------
-CPU | The number of cores | Kubernetes
-RAM | The number of bytes | Process, Docker
-Disk space | The number of bytes | Docker
-
-The following function creation command allocates 8 cores, 8 GB of RAM, and 10 GB of disk space to a function.
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --jar target/my-functions.jar \
-  --classname org.example.functions.MyFunction \
-  --cpu 8 \
-  --ram 8589934592 \
-  --disk 10737418240
-
-```
-
-> #### Resources are *per instance*
-> The resources that you apply to a given Pulsar Function are applied to each instance of the function. For example, if you apply 8 GB of RAM to a function with a parallelism of 5, you are applying 40 GB of RAM for the function in total. Make sure that you take the parallelism (the number of instances) factor into your resource calculations.
-
-### Use Package management service
-
-Package management enables version management and simplifies the upgrade and rollback processes for Functions, Sinks, and Sources. When you use the same function, sink, and source in different namespaces, you can upload them to a common package management system.
-
-To use the [Package management service](admin-api-packages.md), ensure that the service has been enabled in your cluster by setting the following properties in `broker.conf`.
-
-> Note: Package management service is not enabled by default.
-
-```yaml
-
-enablePackagesManagement=true
-packagesManagementStorageProvider=org.apache.pulsar.packages.management.storage.bookkeeper.BookKeeperPackagesStorageProvider
-packagesReplicas=1
-packagesManagementLedgerRootPath=/ledgers
-
-```
-
-With the Package management service enabled, you can [upload a package](admin-api-packages.md#upload-a-package) to the service and get its [package URL](admin-api-packages.md#package-url).
-
-When you have a ready-to-use package URL, you can create the function from it by setting `--jar`, `--py`, or `--go` to the package URL with `pulsar-admin functions create`.
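-
-For example, a `create` command using a package URL might look like the following sketch, where the `function://` URL and the class name are placeholders for the package you uploaded:
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --jar function://public/default/my-function@v1 \
-  --classname org.example.MyFunction \
-  --inputs persistent://public/default/input-1
-
-```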
-
-## Trigger Pulsar Functions
-
-If a Pulsar Function is running in [cluster mode](#cluster-mode), you can **trigger** it at any time using the command line. Triggering a function means that you send a message with a specific value to the function and get the function output (if any) via the command line.
-
-> Triggering a function is to invoke a function by producing a message on one of the input topics. With the [`pulsar-admin functions trigger`](reference-pulsar-admin.md#trigger) command, you can send messages to functions without using the [`pulsar-client`](reference-cli-tools.md#pulsar-client) tool or a language-specific client library.
-
-To learn how to trigger a function, you can start with a Python function that returns a simple string based on the input.
-
-```python
-
-# myfunc.py
-def process(input):
-    return "This function has been triggered with a value of {0}".format(input)
-
-```
-
-Deploy the function in [cluster mode](#cluster-mode) with the `create` command.
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --tenant public \
-  --namespace default \
-  --name myfunc \
-  --py myfunc.py \
-  --classname myfunc \
-  --inputs persistent://public/default/in \
-  --output persistent://public/default/out
-
-```
-
-Then assign a consumer to listen on the output topic for messages from the `myfunc` function with the [`pulsar-client consume`](reference-cli-tools.md#consume) command.
-
-```bash
-
-$ bin/pulsar-client consume persistent://public/default/out \
-  --subscription-name my-subscription \
-  --num-messages 0 # Listen indefinitely
-
-```
-
-And then you can trigger the function.
-
-```bash
-
-$ bin/pulsar-admin functions trigger \
-  --tenant public \
-  --namespace default \
-  --name myfunc \
-  --trigger-value "hello world"
-
-```
-
-The consumer listening on the output topic prints something like the following in the log.
-
-```
-
------ got message -----
-This function has been triggered with a value of hello world
-
-```
-
-> #### Topic info is not required
-> In the `trigger` command, you only need to specify basic information about the function (tenant, namespace, and name). To trigger the function, you do not need to know the function input topics.
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/functions-develop.md b/site2/website/versioned_docs/version-2.9.1-deprecated/functions-develop.md
deleted file mode 100644
index 2e29aa1c474005..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/functions-develop.md
+++ /dev/null
@@ -1,1600 +0,0 @@
----
-id: functions-develop
-title: Develop Pulsar Functions
-sidebar_label: "How-to: Develop"
-original_id: functions-develop
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-This guide describes how to develop Pulsar Functions with different APIs for Java, Python, and Go.
-
-## Available APIs
-In Java and Python, you have two options to write Pulsar Functions. In Go, you can use the Pulsar Functions SDK for Go.
-
-Interface | Description | Use cases
-:---------|:------------|:---------
-Language-native interface | No Pulsar-specific libraries or special dependencies required (only core libraries from Java/Python). | Functions that do not require access to the function [context](#context).
-Pulsar Function SDK for Java/Python/Go | Pulsar-specific libraries that provide a range of functionality not provided by "native" interfaces. | Functions that require access to the function [context](#context).
-
-The language-native function, which adds an exclamation point to all incoming strings and publishes the resulting string to a topic, has no external dependencies. The following example is a language-native function.
- -````mdx-code-block - - - -```Java - -import java.util.function.Function; - -public class JavaNativeExclamationFunction implements Function { - @Override - public String apply(String input) { - return String.format("%s!", input); - } -} - -``` - -For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/JavaNativeExclamationFunction.java). - - - - -```python - -def process(input): - return "{}!".format(input) - -``` - -For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/python-examples/native_exclamation_function.py). - -:::note - -You can write Pulsar Functions in python2 or python3. However, Pulsar only looks for `python` as the interpreter. -If you're running Pulsar Functions on an Ubuntu system that only supports python3, you might fail to -start the functions. In this case, you can create a symlink. Your system will fail if -you subsequently install any other package that depends on Python 2.x. A solution is under development in [Issue 5518](https://github.com/apache/pulsar/issues/5518). - -```bash - -sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 10 - -``` - -::: - - - - -```` - -The following example uses Pulsar Functions SDK. -````mdx-code-block - - - -```Java - -import org.apache.pulsar.functions.api.Context; -import org.apache.pulsar.functions.api.Function; - -public class ExclamationFunction implements Function { - @Override - public String process(String input, Context context) { - return String.format("%s!", input); - } -} - -``` - -For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/ExclamationFunction.java). - - - - -```python - -from pulsar import Function - -class ExclamationFunction(Function): - def __init__(self): - pass - - def process(self, input, context): - return input + '!' - -``` - -For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/python-examples/exclamation_function.py). - - - - -```Go - -package main - -import ( - "context" - "fmt" - - "github.com/apache/pulsar/pulsar-function-go/pf" -) - -func HandleRequest(ctx context.Context, in []byte) error{ - fmt.Println(string(in) + "!") - return nil -} - -func main() { - pf.Start(HandleRequest) -} - -``` - -For complete code, see [here](https://github.com/apache/pulsar/blob/77cf09eafa4f1626a53a1fe2e65dd25f377c1127/pulsar-function-go/examples/inputFunc/inputFunc.go#L20-L36). - - - - -```` - -## Schema registry -Pulsar has a built-in schema registry and is bundled with popular schema types, such as Avro, JSON and Protobuf. Pulsar Functions can leverage the existing schema information from input topics and derive the input type. The schema registry applies for output topic as well. - -## SerDe -SerDe stands for **Ser**ialization and **De**serialization. Pulsar Functions uses SerDe when publishing data to and consuming data from Pulsar topics. How SerDe works by default depends on the language you use for a particular function. - -````mdx-code-block - - - -When you write Pulsar Functions in Java, the following basic Java types are built in and supported by default: `String`, `Double`, `Integer`, `Float`, `Long`, `Short`, and `Byte`. - -To customize Java types, you need to implement the following interface. 
-
-```java
-
-public interface SerDe<T> {
-    T deserialize(byte[] input);
-    byte[] serialize(T input);
-}
-
-```
-
-SerDe works in the following ways in Java Functions.
-- If the input and output topics have schema, Pulsar Functions use the schema for SerDe.
-- If the input or output topics do not exist, Pulsar Functions adopt the following rules to determine SerDe:
-  - If the schema type is specified, Pulsar Functions use the specified schema type.
-  - If SerDe is specified, Pulsar Functions use the specified SerDe, and the schema type for input and output topics is `Byte`.
-  - If neither the schema type nor SerDe is specified, Pulsar Functions use the built-in SerDe. For non-primitive schema types, the built-in SerDe serializes and deserializes objects in the `JSON` format.
-
-
-
-
-In Python, the default SerDe is identity, meaning that the type is serialized as whatever type the producer function returns.
-
-You can specify the SerDe when [creating](functions-deploy.md#cluster-mode) or [running](functions-deploy.md#local-run-mode) functions.
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --tenant public \
-  --namespace default \
-  --name my_function \
-  --py my_function.py \
-  --classname my_function.MyFunction \
-  --custom-serde-inputs '{"input-topic-1":"Serde1","input-topic-2":"Serde2"}' \
-  --output-serde-classname Serde3 \
-  --output output-topic-1
-
-```
-
-This case contains two input topics: `input-topic-1` and `input-topic-2`, each of which is mapped to a different SerDe class (the map must be specified as a JSON string). The output topic, `output-topic-1`, uses the `Serde3` class for SerDe. At the moment, all Pulsar Functions logic, including the processing function and SerDe classes, must be contained within a single Python file.
-
-When using Pulsar Functions for Python, you have three SerDe options:
-
-1. You can use the [`IdentitySerde`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L70), which leaves the data unchanged. The `IdentitySerde` is the **default**. Creating or running a function without explicitly specifying SerDe means that this option is used.
-2. You can use the [`PickleSerDe`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L62), which uses Python [`pickle`](https://docs.python.org/3/library/pickle.html) for SerDe.
-3. You can create a custom SerDe class by implementing the baseline [`SerDe`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L50) class, which has just two methods: [`serialize`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L53) for converting the object into bytes, and [`deserialize`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L58) for converting bytes into an object of the required application-specific type.
-
-The table below shows when you should use each SerDe.
-
-SerDe option | When to use
-:------------|:-----------
-`IdentitySerde` | When you work with simple types like strings, Booleans, integers.
-`PickleSerDe` | When you work with complex, application-specific types and are comfortable with the "best effort" approach of `pickle`.
-Custom SerDe | When you require explicit control over SerDe, potentially for performance or data compatibility purposes.
-
-
-
-
-Currently, the feature is not available in Go.
-
-
-
-
-````
-
-### Example
-Imagine that you're writing Pulsar Functions to process tweet objects. You can refer to the following example of a `Tweet` class.
-
-````mdx-code-block
-
-
-
-```java
-
-public class Tweet {
-    private String username;
-    private String tweetContent;
-
-    public Tweet(String username, String tweetContent) {
-        this.username = username;
-        this.tweetContent = tweetContent;
-    }
-
-    // Standard setters and getters
-}
-
-```
-
-To pass `Tweet` objects directly between Pulsar Functions, you need to provide a custom SerDe class. In the example below, `Tweet` objects are basically strings in which the username and tweet content are separated by a `|`.
-
-```java
-
-package com.example.serde;
-
-import org.apache.pulsar.functions.api.SerDe;
-
-import java.util.regex.Pattern;
-
-public class TweetSerde implements SerDe<Tweet> {
-    public Tweet deserialize(byte[] input) {
-        String s = new String(input);
-        String[] fields = s.split(Pattern.quote("|"));
-        return new Tweet(fields[0], fields[1]);
-    }
-
-    public byte[] serialize(Tweet input) {
-        return String.format("%s|%s", input.getUsername(), input.getTweetContent()).getBytes();
-    }
-}
-
-```
-
-To apply this customized SerDe to a particular Pulsar Function, you need to:
-
-* Package the `Tweet` and `TweetSerde` classes into a JAR.
-* Specify a path to the JAR and SerDe class name when deploying the function.
-
-The following is an example of the [`create`](reference-pulsar-admin.md#create-1) operation.
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --jar /path/to/your.jar \
-  --output-serde-classname com.example.serde.TweetSerde \
-  # Other function attributes
-
-```
-
-> #### Custom SerDe classes must be packaged with your function JARs
-> Pulsar does not store your custom SerDe classes separately from your Pulsar Functions. So you need to include your SerDe classes in your function JARs. If not, Pulsar returns an error.
-
-
-
-
-```python
-
-class Tweet(object):
-    def __init__(self, username, tweet_content):
-        self.username = username
-        self.tweet_content = tweet_content
-
-```
-
-In order to use this class in Pulsar Functions, you have two options:
-
-1. You can specify `PickleSerDe`, which applies the [`pickle`](https://docs.python.org/3/library/pickle.html) library SerDe.
-2. You can create your own SerDe class. The following is an example.
-
-   ```python
-
-   from pulsar import SerDe
-
-   class TweetSerDe(SerDe):
-
-       def serialize(self, input):
-           return bytes("{0}|{1}".format(input.username, input.tweet_content))
-
-       def deserialize(self, input_bytes):
-           tweet_components = str(input_bytes).split('|')
-           return Tweet(tweet_components[0], tweet_components[1])
-
-   ```
-
-For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/python-examples/custom_object_function.py).
-
-
-
-
-````
-
-In both languages, however, you can write custom SerDe logic for more complex, application-specific types.
-
-## Context
-The Java, Python and Go SDKs provide access to a **context object** that can be used by a function. This context object provides a wide variety of information and functionality to the function:
-
-* The name and ID of a Pulsar Function.
-* The message ID of each message. Each Pulsar message is automatically assigned an ID.
-* The key, event time, properties and partition key of each message.
-* The name of the topic to which the message is sent.
-* The names of all input topics as well as the output topic associated with the function.
-* The name of the class used for [SerDe](#serde).
-* The [tenant](reference-terminology.md#tenant) and namespace associated with the function.
-* The ID of the Pulsar Functions instance running the function.
-* The version of the function.
-* The [logger object](functions-develop.md#logger) used by the function, which can be used to create function log messages.
-* Access to arbitrary [user configuration](#user-config) values supplied via the CLI.
-* An interface for recording [metrics](#metrics).
-* An interface for storing and retrieving state in [state storage](#state-storage).
-* A function to publish new messages onto arbitrary topics (see the sketch after this list).
-* A function to ack the message being processed (if auto-ack is disabled).
-* (Java) Access to the Pulsar admin client.
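-
-For instance, the publishing capability mentioned in the list above can be used from Java roughly as follows. This is a sketch: the `ForwardingFunction` name and the `audit` topic are placeholders, and the result of the asynchronous send is deliberately ignored.
-
-```java
-
-import org.apache.pulsar.client.api.Schema;
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-
-public class ForwardingFunction implements Function<String, Void> {
-    @Override
-    public Void process(String input, Context context) throws Exception {
-        // Publish a copy of each input message onto another topic.
-        context.newOutputMessage("persistent://public/default/audit", Schema.STRING)
-               .value(input)
-               .sendAsync();
-        return null;
-    }
-}
-
-```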
-
-````mdx-code-block
-
-
-
-The [Context](https://github.com/apache/pulsar/blob/master/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/Context.java) interface provides a number of methods that you can use to access the function [context](#context). The various method signatures for the `Context` interface are listed as follows.
-
-```java
-
-public interface Context {
-    Record<?> getCurrentRecord();
-    Collection<String> getInputTopics();
-    String getOutputTopic();
-    String getOutputSchemaType();
-    String getTenant();
-    String getNamespace();
-    String getFunctionName();
-    String getFunctionId();
-    String getInstanceId();
-    String getFunctionVersion();
-    Logger getLogger();
-    void incrCounter(String key, long amount);
-    CompletableFuture<Void> incrCounterAsync(String key, long amount);
-    long getCounter(String key);
-    CompletableFuture<Long> getCounterAsync(String key);
-    void putState(String key, ByteBuffer value);
-    CompletableFuture<Void> putStateAsync(String key, ByteBuffer value);
-    void deleteState(String key);
-    ByteBuffer getState(String key);
-    CompletableFuture<ByteBuffer> getStateAsync(String key);
-    Map<String, Object> getUserConfigMap();
-    Optional<Object> getUserConfigValue(String key);
-    Object getUserConfigValueOrDefault(String key, Object defaultValue);
-    void recordMetric(String metricName, double value);
-    <O> CompletableFuture<Void> publish(String topicName, O object, String schemaOrSerdeClassName);
-    <O> CompletableFuture<Void> publish(String topicName, O object);
-    <O> TypedMessageBuilder<O> newOutputMessage(String topicName, Schema<O> schema) throws PulsarClientException;
-    <X> ConsumerBuilder<X> newConsumerBuilder(Schema<X> schema) throws PulsarClientException;
-    PulsarAdmin getPulsarAdmin();
-    PulsarAdmin getPulsarAdmin(String clusterName);
-}
-
-```
-
-The following example uses several methods available via the `Context` object.
-
-```java
-
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-import org.slf4j.Logger;
-
-import java.util.stream.Collectors;
-
-public class ContextFunction implements Function<String, Void> {
-    public Void process(String input, Context context) {
-        Logger LOG = context.getLogger();
-        String inputTopics = context.getInputTopics().stream().collect(Collectors.joining(", "));
-        String functionName = context.getFunctionName();
-
-        String logMessage = String.format("A message with a value of \"%s\" has arrived on one of the following topics: %s\n",
-                input,
-                inputTopics);
-
-        LOG.info(logMessage);
-
-        String metricName = String.format("function-%s-messages-received", functionName);
-        context.recordMetric(metricName, 1);
-
-        return null;
-    }
-}
-
-```
-
-
-
-
-```
-
-class ContextImpl(pulsar.Context):
-    def get_message_id(self):
-        ...
-    def get_message_key(self):
-        ...
-    def get_message_eventtime(self):
-        ...
-    def get_message_properties(self):
-        ...
-    def get_current_message_topic_name(self):
-        ...
-    def get_partition_key(self):
-        ...
- def get_function_name(self): - ... - def get_function_tenant(self): - ... - def get_function_namespace(self): - ... - def get_function_id(self): - ... - def get_instance_id(self): - ... - def get_function_version(self): - ... - def get_logger(self): - ... - def get_user_config_value(self, key): - ... - def get_user_config_map(self): - ... - def record_metric(self, metric_name, metric_value): - ... - def get_input_topics(self): - ... - def get_output_topic(self): - ... - def get_output_serde_class_name(self): - ... - def publish(self, topic_name, message, serde_class_name="serde.IdentitySerDe", - properties=None, compression_type=None, callback=None, message_conf=None): - ... - def ack(self, msgid, topic): - ... - def get_and_reset_metrics(self): - ... - def reset_metrics(self): - ... - def get_metrics(self): - ... - def incr_counter(self, key, amount): - ... - def get_counter(self, key): - ... - def del_counter(self, key): - ... - def put_state(self, key, value): - ... - def get_state(self, key): - ... - -``` - - - - -``` - -func (c *FunctionContext) GetInstanceID() int { - return c.instanceConf.instanceID -} - -func (c *FunctionContext) GetInputTopics() []string { - return c.inputTopics -} - -func (c *FunctionContext) GetOutputTopic() string { - return c.instanceConf.funcDetails.GetSink().Topic -} - -func (c *FunctionContext) GetFuncTenant() string { - return c.instanceConf.funcDetails.Tenant -} - -func (c *FunctionContext) GetFuncName() string { - return c.instanceConf.funcDetails.Name -} - -func (c *FunctionContext) GetFuncNamespace() string { - return c.instanceConf.funcDetails.Namespace -} - -func (c *FunctionContext) GetFuncID() string { - return c.instanceConf.funcID -} - -func (c *FunctionContext) GetFuncVersion() string { - return c.instanceConf.funcVersion -} - -func (c *FunctionContext) GetUserConfValue(key string) interface{} { - return c.userConfigs[key] -} - -func (c *FunctionContext) GetUserConfMap() map[string]interface{} { - return c.userConfigs -} - -func (c *FunctionContext) SetCurrentRecord(record pulsar.Message) { - c.record = record -} - -func (c *FunctionContext) GetCurrentRecord() pulsar.Message { - return c.record -} - -func (c *FunctionContext) NewOutputMessage(topic string) pulsar.Producer { - return c.outputMessage(topic) -} - -``` - -The following example uses several methods available via the `Context` object. - -``` - -import ( - "context" - "fmt" - - "github.com/apache/pulsar/pulsar-function-go/pf" -) - -func contextFunc(ctx context.Context) { - if fc, ok := pf.FromContext(ctx); ok { - fmt.Printf("function ID is:%s, ", fc.GetFuncID()) - fmt.Printf("function version is:%s\n", fc.GetFuncVersion()) - } -} - -``` - -For complete code, see [here](https://github.com/apache/pulsar/blob/77cf09eafa4f1626a53a1fe2e65dd25f377c1127/pulsar-function-go/examples/contextFunc/contextFunc.go#L29-L34). - - - - -```` - -### User config -When you run or update Pulsar Functions created using SDK, you can pass arbitrary key/values to them with the command line with the `--user-config` flag. Key/values must be specified as JSON. The following function creation command passes a user configured key/value to a function. - -```bash - -$ bin/pulsar-admin functions create \ - --name word-filter \ - # Other function configs - --user-config '{"forbidden-word":"rosebud"}' - -``` - -````mdx-code-block - - - -The Java SDK [`Context`](#context) object enables you to access key/value pairs provided to Pulsar Functions via the command line (as JSON). 
The following example passes a key/value pair.
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  # Other function configs
-  --user-config '{"word-of-the-day":"verdure"}'
-
-```
-
-To access that value in a Java function:
-
-```java
-
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-import org.slf4j.Logger;
-
-import java.util.Optional;
-
-public class UserConfigFunction implements Function<String, Void> {
-    @Override
-    public Void process(String input, Context context) {
-        Logger LOG = context.getLogger();
-        Optional<Object> wotd = context.getUserConfigValue("word-of-the-day");
-        if (wotd.isPresent()) {
-            LOG.info("The word of the day is {}", wotd);
-        } else {
-            LOG.warn("No word of the day provided");
-        }
-        return null;
-    }
-}
-
-```
-
-The `UserConfigFunction` function will log the string `"The word of the day is verdure"` every time the function is invoked (which means every time a message arrives). The `word-of-the-day` user config will be changed only when the function is updated with a new config value via the command line.
-
-You can also access the entire user config map or set a default value in case no value is present:
-
-```java
-
-// Get the whole config map
-Map<String, Object> allConfigs = context.getUserConfigMap();
-
-// Get value or resort to default
-String wotd = (String) context.getUserConfigValueOrDefault("word-of-the-day", "perspicacious");
-
-```
-
-> For all key/value pairs passed to Java functions, both the key *and* the value are `String`. To set the value to be a different type, you need to deserialize from the `String` type.
-
-
-
-
-In a Python function, you can access the configuration value like this.
-
-```python
-
-from pulsar import Function
-
-class WordFilter(Function):
-    def process(self, input, context):
-        forbidden_word = context.get_user_config_value("forbidden-word")
-
-        # Don't publish the message if it contains the user-supplied
-        # forbidden word
-        if forbidden_word in input:
-            pass
-        # Otherwise publish the message
-        else:
-            return input
-
-```
-
-The Python SDK [`Context`](#context) object enables you to access key/value pairs provided to Pulsar Functions via the command line (as JSON). The following example passes a key/value pair.
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  # Other function configs \
-  --user-config '{"word-of-the-day":"verdure"}'
-
-```
-
-To access that value in a Python function:
-
-```python
-
-from pulsar import Function
-
-class UserConfigFunction(Function):
-    def process(self, input, context):
-        logger = context.get_logger()
-        wotd = context.get_user_config_value('word-of-the-day')
-        if wotd is None:
-            logger.warn('No word of the day provided')
-        else:
-            logger.info("The word of the day is {0}".format(wotd))
-
-```
-
-
-
-
-The Go SDK [`Context`](#context) object enables you to access key/value pairs provided to Pulsar Functions via the command line (as JSON). The following example passes a key/value pair.
- -```bash - -$ bin/pulsar-admin functions create \ - --go path/to/go/binary - --user-config '{"word-of-the-day":"lackadaisical"}' - -``` - -To access that value in a Go function: - -```go - -func contextFunc(ctx context.Context) { - fc, ok := pf.FromContext(ctx) - if !ok { - logutil.Fatal("Function context is not defined") - } - - wotd := fc.GetUserConfValue("word-of-the-day") - - if wotd == nil { - logutil.Warn("The word of the day is empty") - } else { - logutil.Infof("The word of the day is %s", wotd.(string)) - } -} - -``` - - - - -```` - -### Logger - -````mdx-code-block - - - -Pulsar Functions that use the Java SDK have access to an [SLF4j](https://www.slf4j.org/) [`Logger`](https://www.slf4j.org/api/org/apache/log4j/Logger.html) object that can be used to produce logs at the chosen log level. The following example logs either a `WARNING`- or `INFO`-level log based on whether the incoming string contains the word `danger`. - -```java - -import org.apache.pulsar.functions.api.Context; -import org.apache.pulsar.functions.api.Function; -import org.slf4j.Logger; - -public class LoggingFunction implements Function { - @Override - public void apply(String input, Context context) { - Logger LOG = context.getLogger(); - String messageId = new String(context.getMessageId()); - - if (input.contains("danger")) { - LOG.warn("A warning was received in message {}", messageId); - } else { - LOG.info("Message {} received\nContent: {}", messageId, input); - } - - return null; - } -} - -``` - -If you want your function to produce logs, you need to specify a log topic when creating or running the function. The following is an example. - -```bash - -$ bin/pulsar-admin functions create \ - --jar my-functions.jar \ - --classname my.package.LoggingFunction \ - --log-topic persistent://public/default/logging-function-logs \ - # Other function configs - -``` - -All logs produced by `LoggingFunction` above can be accessed via the `persistent://public/default/logging-function-logs` topic. - -#### Customize Function log level -Additionally, you can use the XML file, `functions_log4j2.xml`, to customize the function log level. -To customize the function log level, create or update `functions_log4j2.xml` in your Pulsar conf directory (for example, `/etc/pulsar/` on bare-metal, or `/pulsar/conf` on Kubernetes) to contain contents such as: - -```xml - - - pulsar-functions-instance - 30 - - - pulsar.log.appender - RollingFile - - - pulsar.log.level - debug - - - bk.log.level - debug - - - - - Console - SYSTEM_OUT - - %d{ISO8601_OFFSET_DATE_TIME_HHMM} [%t] %-5level %logger{36} - %msg%n - - - - RollingFile - ${sys:pulsar.function.log.dir}/${sys:pulsar.function.log.file}.log - ${sys:pulsar.function.log.dir}/${sys:pulsar.function.log.file}-%d{MM-dd-yyyy}-%i.log.gz - true - - %d{ISO8601_OFFSET_DATE_TIME_HHMM} [%t] %-5level %logger{36} - %msg%n - - - - 1 - true - - - 1 GB - - - 0 0 0 * * ? - - - - - ${sys:pulsar.function.log.dir} - 2 - - */${sys:pulsar.function.log.file}*log.gz - - - 30d - - - - - - BkRollingFile - ${sys:pulsar.function.log.dir}/${sys:pulsar.function.log.file}.bk - ${sys:pulsar.function.log.dir}/${sys:pulsar.function.log.file}.bk-%d{MM-dd-yyyy}-%i.log.gz - true - - %d{ISO8601_OFFSET_DATE_TIME_HHMM} [%t] %-5level %logger{36} - %msg%n - - - - 1 - true - - - 1 GB - - - 0 0 0 * * ? 
- - - - - ${sys:pulsar.function.log.dir} - 2 - - */${sys:pulsar.function.log.file}.bk*log.gz - - - 30d - - - - - - - - org.apache.pulsar.functions.runtime.shaded.org.apache.bookkeeper - ${sys:bk.log.level} - false - - BkRollingFile - - - - ${sys:pulsar.log.level} - - ${sys:pulsar.log.appender} - ${sys:pulsar.log.level} - - - - - -``` - -The properties set like: - -```xml - - - pulsar.log.level - debug - - -``` - -propagate to places where they are referenced, such as: - -```xml - - - ${sys:pulsar.log.level} - - ${sys:pulsar.log.appender} - ${sys:pulsar.log.level} - - - -``` - -In the above example, debug level logging would be applied to ALL function logs. -This may be more verbose than you desire. To be more selective, you can apply different log levels to different classes or modules. For example: - -```xml - - - com.example.module - info - false - - ${sys:pulsar.log.appender} - - - -``` - -You can be more specific as well, such as applying a more verbose log level to a class in the module, such as: - -```xml - - - com.example.module.className - debug - false - - Console - - - -``` - -Each `` entry allows you to output the log to a target specified in the definition of the Appender. - -Additivity pertains to whether log messages will be duplicated if multiple Logger entries overlap. -To disable additivity, specify - -```xml - -false - -``` - -as shown in examples above. Disabling additivity prevents duplication of log messages when one or more `` entries contain classes or modules that overlap. - -The `` is defined in the `` section, such as: - -```xml - - - Console - SYSTEM_OUT - - %d{ISO8601_OFFSET_DATE_TIME_HHMM} [%t] %-5level %logger{36} - %msg%n - - - -``` - - - - -Pulsar Functions that use the Python SDK have access to a logging object that can be used to produce logs at the chosen log level. The following example function that logs either a `WARNING`- or `INFO`-level log based on whether the incoming string contains the word `danger`. - -```python - -from pulsar import Function - -class LoggingFunction(Function): - def process(self, input, context): - logger = context.get_logger() - msg_id = context.get_message_id() - if 'danger' in input: - logger.warn("A warning was received in message {0}".format(context.get_message_id())) - else: - logger.info("Message {0} received\nContent: {1}".format(msg_id, input)) - -``` - -If you want your function to produce logs on a Pulsar topic, you need to specify a **log topic** when creating or running the function. The following is an example. - -```bash - -$ bin/pulsar-admin functions create \ - --py logging_function.py \ - --classname logging_function.LoggingFunction \ - --log-topic logging-function-logs \ - # Other function configs - -``` - -All logs produced by `LoggingFunction` above can be accessed via the `logging-function-logs` topic. -Additionally, you can specify the function log level through the broker XML file as described in [Customize Function log level](#customize-function-log-level). - - - - -The following Go Function example shows different log levels based on the function input. - -``` - -import ( - "context" - - "github.com/apache/pulsar/pulsar-function-go/pf" - - log "github.com/apache/pulsar/pulsar-function-go/logutil" -) - -func loggerFunc(ctx context.Context, input []byte) { - if len(input) <= 100 { - log.Infof("This input has a length of: %d", len(input)) - } else { - log.Warnf("This input is getting too long! 
It has {%d} characters", len(input)) - } -} - -func main() { - pf.Start(loggerFunc) -} - -``` - -When you use `logTopic` related functionalities in Go Function, import `github.com/apache/pulsar/pulsar-function-go/logutil`, and you do not have to use the `getLogger()` context object. - -Additionally, you can specify the function log level through the broker XML file, as described here: [Customize Function log level](#customize-function-log-level) - - - - -```` - -### Pulsar admin - -Pulsar Functions using the Java SDK has access to the Pulsar admin client, which allows the Pulsar admin client to manage API calls to current Pulsar clusters or external clusters (if `external-pulsars` is provided). - -````mdx-code-block - - - -Below is an example of how to use the Pulsar admin client exposed from the Function `context`. - -``` - -import org.apache.pulsar.client.admin.PulsarAdmin; -import org.apache.pulsar.functions.api.Context; -import org.apache.pulsar.functions.api.Function; - -/** - * In this particular example, for every input message, - * the function resets the cursor of the current function's subscription to a - * specified timestamp. - */ -public class CursorManagementFunction implements Function { - - @Override - public String process(String input, Context context) throws Exception { - PulsarAdmin adminClient = context.getPulsarAdmin(); - if (adminClient != null) { - String topic = context.getCurrentRecord().getTopicName().isPresent() ? - context.getCurrentRecord().getTopicName().get() : null; - String subName = context.getTenant() + "/" + context.getNamespace() + "/" + context.getFunctionName(); - if (topic != null) { - // 1578188166 below is a random-pick timestamp - adminClient.topics().resetCursor(topic, subName, 1578188166); - return "reset cursor successfully"; - } - } - return null; - } -} - -``` - -If you want your function to get access to the Pulsar admin client, you need to enable this feature by setting `exposeAdminClientEnabled=true` in the `functions_worker.yml` file. You can test whether this feature is enabled or not using the command `pulsar-admin functions localrun` with the flag `--web-service-url`. - -``` - -$ bin/pulsar-admin functions localrun \ - --jar my-functions.jar \ - --classname my.package.CursorManagementFunction \ - --web-service-url http://pulsar-web-service:8080 \ - # Other function configs - -``` - - - - -```` - -## Metrics - -Pulsar Functions allows you to deploy and manage processing functions that consume messages from and publish messages to Pulsar topics easily. It is important to ensure that the running functions are healthy at any time. Pulsar Functions can publish arbitrary metrics to the metrics interface which can be queried. - -:::note - -If a Pulsar Function uses the language-native interface for Java or Python, that function is not able to publish metrics and stats to Pulsar. - -::: - -You can monitor Pulsar Functions that have been deployed with the following methods: - -- Check the metrics provided by Pulsar. - - Pulsar Functions expose the metrics that can be collected and used for monitoring the health of **Java, Python, and Go** functions. You can check the metrics by following the [monitoring](deploy-monitoring.md) guide. - - For the complete list of the function metrics, see [here](reference-metrics.md#pulsar-functions). - -- Set and check your customized metrics. - - In addition to the metrics provided by Pulsar, Pulsar allows you to customize metrics for **Java and Python** functions. 
Function workers automatically collect user-defined metrics and expose them to Prometheus, and you can check them in Grafana.
-
-Here are examples of how to customize metrics for Java and Python functions.
-
-````mdx-code-block
-
-
-
-You can record metrics using the [`Context`](#context) object on a per-key basis. For example, you can set a metric for the `process-count` key and a different metric for the `elevens-count` key every time the function processes a message.
-
-```java
-
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-
-public class MetricRecorderFunction implements Function<Integer, Void> {
-    @Override
-    public Void process(Integer input, Context context) {
-        // Records the metric 1 every time a message arrives
-        context.recordMetric("hit-count", 1);
-
-        // Records the metric only if the arriving number equals 11
-        if (input == 11) {
-            context.recordMetric("elevens-count", 1);
-        }
-
-        return null;
-    }
-}
-
-```
-
-
-
-
-You can record metrics using the [`Context`](#context) object on a per-key basis. For example, you can set a metric for the `process-count` key and a different metric for the `elevens-count` key every time the function processes a message. The following is an example.
-
-```python
-
-from pulsar import Function
-
-class MetricRecorderFunction(Function):
-    def process(self, input, context):
-        context.record_metric('hit-count', 1)
-
-        if input == 11:
-            context.record_metric('elevens-count', 1)
-
-```
-
-
-
-
-Currently, the feature is not available in Go.
-
-
-
-
-````
-
-## Security
-
-If you want to enable security on Pulsar Functions, first you should enable security on [Functions Workers](functions-worker.md). For more details, refer to [Security settings](functions-worker.md#security-settings).
-
-Pulsar Functions can support the following providers:
-
-- ClearTextSecretsProvider
-- EnvironmentBasedSecretsProvider
-
-> Pulsar Functions support ClearTextSecretsProvider by default.
-
-At the same time, Pulsar Functions provides two interfaces, **SecretsProvider** and **SecretsProviderConfigurator**, allowing users to customize the secret provider.
-
-````mdx-code-block
-
-
-
-You can get the secret provider using the [`Context`](#context) object. The following is an example:
-
-```java
-
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-import org.slf4j.Logger;
-
-public class GetSecretProviderFunction implements Function<String, Void> {
-
-    @Override
-    public Void process(String input, Context context) throws Exception {
-        Logger LOG = context.getLogger();
-        String secretProvider = context.getSecret(input);
-
-        if (!secretProvider.isEmpty()) {
-            LOG.info("The secret provider is {}", secretProvider);
-        } else {
-            LOG.warn("No secret provider");
-        }
-
-        return null;
-    }
-}
-
-```
-
-
-
-
-You can get the secret provider using the [`Context`](#context) object. The following is an example:
-
-```python
-
-from pulsar import Function
-
-class GetSecretProviderFunction(Function):
-    def process(self, input, context):
-        logger = context.get_logger()
-        secret_provider = context.get_secret(input)
-        if secret_provider is None:
-            logger.warn('No secret provider')
-        else:
-            logger.info("The secret provider is {0}".format(secret_provider))
-
-```
-
-
-
-
-Currently, the feature is not available in Go.
-
-
-
-
-````
-
-## State storage
-Pulsar Functions use [Apache BookKeeper](https://bookkeeper.apache.org) as a state storage interface.
Pulsar installation, including the local standalone installation, includes deployment of BookKeeper bookies. - -Since Pulsar 2.1.0 release, Pulsar integrates with Apache BookKeeper [table service](https://docs.google.com/document/d/155xAwWv5IdOitHh1NVMEwCMGgB28M3FyMiQSxEpjE-Y/edit#heading=h.56rbh52koe3f) to store the `State` for functions. For example, a `WordCount` function can store its `counters` state into BookKeeper table service via Pulsar Functions State API. - -States are key-value pairs, where the key is a string and the value is arbitrary binary data - counters are stored as 64-bit big-endian binary values. Keys are scoped to an individual Pulsar Function, and shared between instances of that function. - -You can access states within Pulsar Java Functions using the `putState`, `putStateAsync`, `getState`, `getStateAsync`, `incrCounter`, `incrCounterAsync`, `getCounter`, `getCounterAsync` and `deleteState` calls on the context object. You can access states within Pulsar Python Functions using the `putState`, `getState`, `incrCounter`, `getCounter` and `deleteState` calls on the context object. You can also manage states using the [querystate](#query-state) and [putstate](#putstate) options to `pulsar-admin functions`. - -:::note - -State storage is not available in Go. - -::: - -### API - -````mdx-code-block - - - -Currently Pulsar Functions expose the following APIs for mutating and accessing State. These APIs are available in the [Context](functions-develop.md#context) object when you are using Java SDK functions. - -#### incrCounter - -```java - - /** - * Increment the builtin distributed counter referred by key - * @param key The name of the key - * @param amount The amount to be incremented - */ - void incrCounter(String key, long amount); - -``` - -The application can use `incrCounter` to change the counter of a given `key` by the given `amount`. - -#### incrCounterAsync - -```java - - /** - * Increment the builtin distributed counter referred by key - * but dont wait for the completion of the increment operation - * - * @param key The name of the key - * @param amount The amount to be incremented - */ - CompletableFuture incrCounterAsync(String key, long amount); - -``` - -The application can use `incrCounterAsync` to asynchronously change the counter of a given `key` by the given `amount`. - -#### getCounter - -```java - - /** - * Retrieve the counter value for the key. - * - * @param key name of the key - * @return the amount of the counter value for this key - */ - long getCounter(String key); - -``` - -The application can use `getCounter` to retrieve the counter of a given `key` mutated by `incrCounter`. - -Except the `counter` API, Pulsar also exposes a general key/value API for functions to store -general key/value state. - -#### getCounterAsync - -```java - - /** - * Retrieve the counter value for the key, but don't wait - * for the operation to be completed - * - * @param key name of the key - * @return the amount of the counter value for this key - */ - CompletableFuture getCounterAsync(String key); - -``` - -The application can use `getCounterAsync` to asynchronously retrieve the counter of a given `key` mutated by `incrCounterAsync`. - -#### putState - -```java - - /** - * Update the state value for the key. 
#### putState

```java
/**
 * Update the state value for the key.
 *
 * @param key name of the key
 * @param value state value of the key
 */
void putState(String key, ByteBuffer value);
```

#### putStateAsync

```java
/**
 * Update the state value for the key, but don't wait for the operation to be completed.
 *
 * @param key name of the key
 * @param value state value of the key
 */
CompletableFuture<Void> putStateAsync(String key, ByteBuffer value);
```

The application can use `putStateAsync` to asynchronously update the state of a given `key`.

#### getState

```java
/**
 * Retrieve the state value for the key.
 *
 * @param key name of the key
 * @return the state value for the key.
 */
ByteBuffer getState(String key);
```

#### getStateAsync

```java
/**
 * Retrieve the state value for the key, but don't wait for the operation to be completed.
 *
 * @param key name of the key
 * @return the state value for the key.
 */
CompletableFuture<ByteBuffer> getStateAsync(String key);
```

The application can use `getStateAsync` to asynchronously retrieve the state of a given `key`.

#### deleteState

```java
/**
 * Delete the state value for the key.
 *
 * @param key name of the key
 */
void deleteState(String key);
```

Counters and binary values share the same keyspace, so this deletes either type.

Currently, Pulsar Functions expose the following APIs for mutating and accessing state. These APIs are available in the [Context](#context) object when you are using Python SDK functions.

#### incr_counter

```python
def incr_counter(self, key, amount):
    """incr the counter of a given key in the managed state"""
```

The application can use `incr_counter` to change the counter of a given `key` by the given `amount`. If the `key` does not exist, a new key is created.

#### get_counter

```python
def get_counter(self, key):
    """get the counter of a given key in the managed state"""
```

The application can use `get_counter` to retrieve the counter of a given `key` mutated by `incr_counter`.

Besides the `counter` API, Pulsar also exposes a general key/value API for functions to store general key/value state.

#### put_state

```python
def put_state(self, key, value):
    """update the value of a given key in the managed state"""
```

The key is a string, and the value is arbitrary binary data.

#### get_state

```python
def get_state(self, key):
    """get the value of a given key in the managed state"""
```

#### del_counter

```python
def del_counter(self, key):
    """delete the counter of a given key in the managed state"""
```

Counters and binary values share the same keyspace, so this deletes either type.

### Query State

A Pulsar Function can use the [State API](#api) to store state in Pulsar's state storage and retrieve it back later. Additionally, Pulsar provides CLI commands for querying that state.

```shell
$ bin/pulsar-admin functions querystate \
    --tenant <tenant> \
    --namespace <namespace> \
    --name <function-name> \
    --state-storage-url <bk-service-url> \
    --key <state-key> \
    [--watch]
```

If `--watch` is specified, the CLI watches the value of the provided `state-key`.
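Before the full example below, the following is a minimal sketch (the `VisitTrackerFunction` class and the key names are illustrative, not part of the original document) showing how the counter and key/value calls above combine in a single Java function.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;

public class VisitTrackerFunction implements Function<String, Void> {
    @Override
    public Void process(String userId, Context context) throws Exception {
        // Counter API: bump a per-user visit counter.
        context.incrCounter(userId, 1);

        // Key/value API: remember the most recent user as raw bytes.
        context.putState("last-user",
                ByteBuffer.wrap(userId.getBytes(StandardCharsets.UTF_8)));

        // Read the counter back and log it.
        long visits = context.getCounter(userId);
        context.getLogger().info("{} has visited {} times", userId, visits);
        return null;
    }
}
```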
### Example

{@inject: github:WordCountFunction:/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/WordCountFunction.java} is a good example demonstrating how an application can easily store `state` in Pulsar Functions.

```java
import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;

import java.util.Arrays;

public class WordCountFunction implements Function<String, Void> {
    @Override
    public Void process(String input, Context context) throws Exception {
        Arrays.asList(input.split("\\.")).forEach(word -> context.incrCounter(word, 1));
        return null;
    }
}
```

The logic of this `WordCount` function is simple and straightforward:

1. The function first splits the received `String` into multiple words using the regex `\\.`.
2. For each `word`, the function increments the corresponding `counter` by 1 (via `incrCounter(key, amount)`).

The following is the equivalent Python function.

```python
from pulsar import Function

class WordCount(Function):
    def process(self, item, context):
        for word in item.split():
            context.incr_counter(word, 1)
```

The logic of this `WordCount` function is simple and straightforward:

1. The function first splits the received string into multiple words on spaces.
2. For each `word`, the function increments the corresponding `counter` by 1 (via `incr_counter(key, amount)`).

diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/functions-metrics.md b/site2/website/versioned_docs/version-2.9.1-deprecated/functions-metrics.md
deleted file mode 100644
index 8add6693160929..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/functions-metrics.md
+++ /dev/null
@@ -1,7 +0,0 @@
---
id: functions-metrics
title: Metrics for Pulsar Functions
sidebar_label: "Metrics"
original_id: functions-metrics
---

diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/functions-overview.md b/site2/website/versioned_docs/version-2.9.1-deprecated/functions-overview.md
deleted file mode 100644
index 816d301e0fd0e7..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/functions-overview.md
+++ /dev/null
@@ -1,209 +0,0 @@
---
id: functions-overview
title: Pulsar Functions overview
sidebar_label: "Overview"
original_id: functions-overview
---

**Pulsar Functions** are lightweight compute processes that

* consume messages from one or more Pulsar topics,
* apply user-supplied processing logic to each message,
* publish the results of the computation to another topic.

## Goals

With Pulsar Functions, you can create complex processing logic without deploying a separate neighboring system (such as [Apache Storm](http://storm.apache.org/), [Apache Heron](https://heron.incubator.apache.org/), or [Apache Flink](https://flink.apache.org/)). Pulsar Functions are a compute infrastructure built into the Pulsar messaging system.
The core goal is tied to a series of other goals:

* Developer productivity (language-native vs. Pulsar Functions SDK functions)
* Easy troubleshooting
* Operational simplicity (no need for an external processing system)

## Inspirations

Pulsar Functions are inspired by (and take cues from) several systems and paradigms:

* Stream processing engines such as [Apache Storm](http://storm.apache.org/), [Apache Heron](https://apache.github.io/incubator-heron), and [Apache Flink](https://flink.apache.org)
* "Serverless" and "Function as a Service" (FaaS) cloud platforms like [Amazon Web Services Lambda](https://aws.amazon.com/lambda/), [Google Cloud Functions](https://cloud.google.com/functions/), and [Azure Cloud Functions](https://azure.microsoft.com/en-us/services/functions/)

Pulsar Functions can be described as

* [Lambda](https://aws.amazon.com/lambda/)-style functions that are
* specifically designed to use Pulsar as a message bus.

## Programming model

Pulsar Functions provide a wide range of functionality, and the core programming model is simple. Functions receive messages from one or more **input [topics](reference-terminology.md#topic)**. Each time a message is received, the function completes the following tasks:

* Apply some processing logic to the input and write output to:
  * An **output topic** in Pulsar
  * [Apache BookKeeper](functions-develop.md#state-storage)
* Write logs to a **log topic** (potentially for debugging purposes)
* Increment a [counter](#word-count-example)

![Pulsar Functions core programming model](/assets/pulsar-functions-overview.png)

You can use Pulsar Functions to set up the following processing chain:

* A Python function listens on the `raw-sentences` topic, "sanitizes" incoming strings (removing extraneous whitespace and converting all characters to lowercase), and then publishes the results to a `sanitized-sentences` topic.
* A Java function listens on the `sanitized-sentences` topic, counts the number of times each word appears within a specified time window, and publishes the results to a `results` topic.
* Finally, a Python function listens on the `results` topic and writes the results to a MySQL table.

### Word count example

If you implement the classic word count example using Pulsar Functions, it looks something like this:

![Pulsar Functions word count example](/assets/pulsar-functions-word-count.png)

To write the function in Java with the [Pulsar Functions SDK for Java](functions-develop.md#available-apis), you can write the function as follows.

```java
package org.example.functions;

import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;

import java.util.Arrays;

public class WordCountFunction implements Function<String, Void> {
    // This function is invoked every time a message is published to the input topic
    @Override
    public Void process(String input, Context context) throws Exception {
        Arrays.asList(input.split(" ")).forEach(word -> {
            String counterKey = word.toLowerCase();
            context.incrCounter(counterKey, 1);
        });
        return null;
    }
}
```

Bundle and build the JAR file to be deployed, and then deploy it in your Pulsar cluster using the [command line](functions-deploy.md#command-line-interface) as follows.
```bash
$ bin/pulsar-admin functions create \
  --jar target/my-jar-with-dependencies.jar \
  --classname org.example.functions.WordCountFunction \
  --tenant public \
  --namespace default \
  --name word-count \
  --inputs persistent://public/default/sentences \
  --output persistent://public/default/count
```

### Content-based routing example

Pulsar Functions serve a wide range of use cases. The following is a more sophisticated example that involves content-based routing.

For example, a function takes items (strings) as input and publishes them to either a `fruits` or `vegetables` topic, depending on the item. If an item is neither a fruit nor a vegetable, a warning is logged to a [log topic](functions-develop.md#logger). The following is a visual representation.

![Pulsar Functions routing example](/assets/pulsar-functions-routing-example.png)

If you implement this routing functionality in Python, it looks something like this:

```python
from pulsar import Function

class RoutingFunction(Function):
    def __init__(self):
        self.fruits_topic = "persistent://public/default/fruits"
        self.vegetables_topic = "persistent://public/default/vegetables"

    @staticmethod
    def is_fruit(item):
        return item in [b"apple", b"orange", b"pear", b"other fruits..."]

    @staticmethod
    def is_vegetable(item):
        return item in [b"carrot", b"lettuce", b"radish", b"other vegetables..."]

    def process(self, item, context):
        if self.is_fruit(item):
            context.publish(self.fruits_topic, item)
        elif self.is_vegetable(item):
            context.publish(self.vegetables_topic, item)
        else:
            warning = "The item {0} is neither a fruit nor a vegetable".format(item)
            context.get_logger().warn(warning)
```

If this code is stored in `~/router.py`, then you can deploy it in your Pulsar cluster using the [command line](functions-deploy.md#command-line-interface) as follows.

```bash
$ bin/pulsar-admin functions create \
  --py ~/router.py \
  --classname router.RoutingFunction \
  --tenant public \
  --namespace default \
  --name route-fruit-veg \
  --inputs persistent://public/default/basket-items
```

### Functions, messages and message types

Pulsar Functions take byte arrays as input and produce byte arrays as output. However, in languages that support typed interfaces (such as Java), you can write typed functions and bind messages to types in the following ways:

* [Schema Registry](functions-develop.md#schema-registry)
* [SerDe](functions-develop.md#serde)
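As an illustration, here is a minimal sketch of a typed Java function (the `StringLengthFunction` name is illustrative, not part of the original document); the `String` input and `Integer` output are bound to Pulsar messages through the schema and SerDe mechanisms listed above.

```java
import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;

public class StringLengthFunction implements Function<String, Integer> {
    @Override
    public Integer process(String input, Context context) throws Exception {
        // Incoming bytes are deserialized to String, and the returned
        // Integer is serialized back to bytes for the output topic.
        return input == null ? 0 : input.length();
    }
}
```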
## Fully Qualified Function Name (FQFN)

Each Pulsar Function has a **Fully Qualified Function Name** (FQFN) that consists of three elements: the function tenant, namespace, and function name. An FQFN looks like this:

```text
tenant/namespace/name
```

FQFNs enable you to create multiple functions with the same name, provided that they are in different namespaces.

## Supported languages

Currently, you can write Pulsar Functions in Java, Python, and Go. For details, refer to [Develop Pulsar Functions](functions-develop.md).

## Processing guarantees

Pulsar Functions provide three different messaging semantics that you can apply to any function.

Delivery semantics | Description
:------------------|:-------
**At-most-once** delivery | Each message sent to the function is processed at most once; if processing fails, the message is not redelivered (hence "at most").
**At-least-once** delivery | Each message sent to the function can be processed more than once (hence "at least").
**Effectively-once** delivery | Each message sent to the function has exactly one output associated with it.

### Apply processing guarantees to a function

You can set the processing guarantees for a Pulsar Function when you create it. The following [`pulsar-admin functions create`](reference-pulsar-admin.md#create-1) command creates a function with effectively-once guarantees applied.

```bash
$ bin/pulsar-admin functions create \
  --name my-effectively-once-function \
  --processing-guarantees EFFECTIVELY_ONCE \
  # Other function configs
```

The available options for `--processing-guarantees` are:

* `ATMOST_ONCE`
* `ATLEAST_ONCE`
* `EFFECTIVELY_ONCE`

> By default, Pulsar Functions provide at-least-once delivery guarantees. So if you create a function without supplying a value for the `--processing-guarantees` flag, the function provides at-least-once guarantees.

### Update the processing guarantees of a function

You can change the processing guarantees applied to a function using the [`update`](reference-pulsar-admin.md#update-1) command. The following is an example.

```bash
$ bin/pulsar-admin functions update \
  --processing-guarantees ATMOST_ONCE \
  # Other function configs
```

diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/functions-package.md b/site2/website/versioned_docs/version-2.9.1-deprecated/functions-package.md
deleted file mode 100644
index db2c4e987dc7be..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/functions-package.md
+++ /dev/null
@@ -1,493 +0,0 @@
---
id: functions-package
title: Package Pulsar Functions
sidebar_label: "How-to: Package"
original_id: functions-package
---

You can package Pulsar Functions in Java, Python, and Go. Packaging a window function in Java is the same as [packaging a function in Java](#java).

:::note

Currently, the window function is not available in Python and Go.

:::

## Prerequisite

Before running a Pulsar function, you need to start Pulsar. You can [run a standalone Pulsar in Docker](getting-started-docker.md) or [run Pulsar in Kubernetes](getting-started-helm.md).

To check whether the Docker image has started, you can use the `docker ps` command.

## Java

To package a function in Java, complete the following steps.

1. Create a new Maven project with a POM file. In the following code sample, the value of `mainClass` is your package name.

   ```xml
   <?xml version="1.0" encoding="UTF-8"?>
   <project xmlns="http://maven.apache.org/POM/4.0.0"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
       <modelVersion>4.0.0</modelVersion>

       <groupId>java-function</groupId>
       <artifactId>java-function</artifactId>
       <version>1.0-SNAPSHOT</version>

       <dependencies>
           <dependency>
               <groupId>org.apache.pulsar</groupId>
               <artifactId>pulsar-functions-api</artifactId>
               <version>2.6.0</version>
           </dependency>
       </dependencies>

       <build>
           <plugins>
               <plugin>
                   <artifactId>maven-assembly-plugin</artifactId>
                   <configuration>
                       <appendAssemblyId>false</appendAssemblyId>
                       <descriptorRefs>
                           <descriptorRef>jar-with-dependencies</descriptorRef>
                       </descriptorRefs>
                       <archive>
                           <manifest>
                               <mainClass>org.example.test.ExclamationFunction</mainClass>
                           </manifest>
                       </archive>
                   </configuration>
                   <executions>
                       <execution>
                           <id>make-assembly</id>
                           <phase>package</phase>
                           <goals>
                               <goal>assembly</goal>
                           </goals>
                       </execution>
                   </executions>
               </plugin>
               <plugin>
                   <groupId>org.apache.maven.plugins</groupId>
                   <artifactId>maven-compiler-plugin</artifactId>
                   <configuration>
                       <source>8</source>
                       <target>8</target>
                   </configuration>
               </plugin>
           </plugins>
       </build>
   </project>
   ```

2. Write a Java function.

   ```java
   package org.example.test;

   import java.util.function.Function;

   public class ExclamationFunction implements Function<String, String> {
       @Override
       public String apply(String s) {
           return "This is my function!";
       }
   }
   ```

   For the imported package, you can use one of the following interfaces:
   - The function interface provided by Java 8: `java.util.function.Function`
   - The Pulsar Function interface: `org.apache.pulsar.functions.api.Function`

   The main difference between the two interfaces is that the `org.apache.pulsar.functions.api.Function` interface provides the context interface.
When you write a function and want to interact with Pulsar, you can use the context to obtain a wide variety of information and functionality for Pulsar Functions.

   The following example uses the `org.apache.pulsar.functions.api.Function` interface with the context.

   ```java
   package org.example.functions;

   import org.apache.pulsar.functions.api.Context;
   import org.apache.pulsar.functions.api.Function;

   import java.util.Arrays;

   public class WordCountFunction implements Function<String, Void> {
       // This function is invoked every time a message is published to the input topic
       @Override
       public Void process(String input, Context context) throws Exception {
           Arrays.asList(input.split(" ")).forEach(word -> {
               String counterKey = word.toLowerCase();
               context.incrCounter(counterKey, 1);
           });
           return null;
       }
   }
   ```

3. Package the Java function.

   ```bash
   mvn package
   ```

   After the Java function is packaged, a `target` directory is created automatically. Open the `target` directory to check if there is a JAR package similar to `java-function-1.0-SNAPSHOT.jar`.

4. Run the Java function.

   (1) Copy the packaged JAR file into the Pulsar container.

   ```bash
   docker exec -it [CONTAINER ID] /bin/bash
   docker cp java-function-1.0-SNAPSHOT.jar [CONTAINER ID]:/pulsar
   ```

   (2) Run the Java function using the following command.

   ```bash
   ./bin/pulsar-admin functions localrun \
     --classname org.example.test.ExclamationFunction \
     --jar java-function-1.0-SNAPSHOT.jar \
     --inputs persistent://public/default/my-topic-1 \
     --output persistent://public/default/test-1 \
     --tenant public \
     --namespace default \
     --name JavaFunction
   ```

   The following log indicates that the Java function starts successfully.

   ```text
   ...
   07:55:03.724 [main] INFO  org.apache.pulsar.functions.runtime.ProcessRuntime - Started process successfully
   ...
   ```

## Python

Python functions support the following three formats:

- One Python file
- ZIP file
- PIP

### One Python file

To package a function as **one Python file**, complete the following steps.

1. Write a Python function.

   ```python
   from pulsar import Function  # import the Function module from Pulsar

   # The classic ExclamationFunction that appends an exclamation mark
   # to the end of the input
   class ExclamationFunction(Function):
       def __init__(self):
           pass

       def process(self, input, context):
           return input + '!'
   ```

   In this example, when you write a Python function, you need to inherit the `Function` class and implement the `process()` method.

   `process()` mainly has two parameters:

   - `input` represents your input.
   - `context` represents an interface exposed by Pulsar Functions. You can get the attributes in the Python function based on the provided context object.

2. Install the Python client.

   The implementation of a Python function depends on the Python client, so before deploying a Python function, you need to install the corresponding version of the Python client.

   ```bash
   pip install pulsar-client==2.6.0
   ```

3. Run the Python function.

   (1) Copy the Python function file (for example, `exclamation.py`) into the Pulsar container.

   ```bash
   docker exec -it [CONTAINER ID] /bin/bash
   docker cp exclamation.py [CONTAINER ID]:/pulsar
   ```

   (2) Run the Python function using the following command.
   ```bash
   ./bin/pulsar-admin functions localrun \
     --classname exclamation.ExclamationFunction \
     --py exclamation.py \
     --inputs persistent://public/default/my-topic-1 \
     --output persistent://public/default/test-1 \
     --tenant public \
     --namespace default \
     --name PythonFunction
   ```

   The following log indicates that the Python function starts successfully.

   ```text
   ...
   07:55:03.724 [main] INFO  org.apache.pulsar.functions.runtime.ProcessRuntime - Started process successfully
   ...
   ```

### ZIP file

To package a function as a **ZIP file** in Python, complete the following steps.

1. Prepare the ZIP file.

   The following layout is required when packaging a Python function as a ZIP file.

   ```text
   Assuming the zip file is named `func.zip`, unzipping `func.zip` yields:
   "func/src"
   "func/requirements.txt"
   "func/deps"
   ```

   Take [exclamation.zip](https://github.com/apache/pulsar/tree/master/tests/docker-images/latest-version-image/python-examples) as an example. The internal structure of the example is as follows.

   ```text
   .
   ├── deps
   │   └── sh-1.12.14-py2.py3-none-any.whl
   └── src
       └── exclamation.py
   ```

2. Run the Python function.

   (1) Copy the ZIP file into the Pulsar container.

   ```bash
   docker exec -it [CONTAINER ID] /bin/bash
   docker cp exclamation.zip [CONTAINER ID]:/pulsar
   ```

   (2) Run the Python function using the following command.

   ```bash
   ./bin/pulsar-admin functions localrun \
     --classname exclamation \
     --py exclamation.zip \
     --inputs persistent://public/default/in-topic \
     --output persistent://public/default/out-topic \
     --tenant public \
     --namespace default \
     --name PythonFunction
   ```

   The following log indicates that the Python function starts successfully.

   ```text
   ...
   07:55:03.724 [main] INFO  org.apache.pulsar.functions.runtime.ProcessRuntime - Started process successfully
   ...
   ```

### PIP

The PIP method is only supported in the Kubernetes runtime. To package a function with **PIP** in Python, complete the following steps.

1. Configure the `functions_worker.yml` file.

   ```text
   #### Kubernetes Runtime ####
   installUserCodeDependencies: true
   ```

2. Write your Python function.

   ```python
   from pulsar import Function
   import js2xml

   # The classic ExclamationFunction that appends an exclamation mark
   # to the end of the input
   class ExclamationFunction(Function):
       def __init__(self):
           pass

       def process(self, input, context):
           # add your logic
           return input + '!'
   ```

   You can introduce additional dependencies. When the Python function detects that the file currently used is a `whl` file and the `installUserCodeDependencies` parameter is specified, the system uses the `pip install` command to install the dependencies required by the Python function.

3. Generate the `whl` file.

   ```bash
   $ cd $PULSAR_HOME/pulsar-functions/scripts/python
   $ chmod +x generate.sh
   $ ./generate.sh [path of your Python function] [path of the output] [version]
   # e.g: ./generate.sh /path/to/python /path/to/python/output 1.0.0
   ```

   The output is written to `/path/to/python/output`:

   ```text
   -rw-r--r--  1 root  staff   1.8K  8 27 14:29 pulsarfunction-1.0.0-py2-none-any.whl
   -rw-r--r--  1 root  staff   1.4K  8 27 14:29 pulsarfunction-1.0.0.tar.gz
   -rw-r--r--  1 root  staff     0B  8 27 14:29 pulsarfunction.whl
   ```

## Go

To package a function in Go, complete the following steps.

1. Write a Go function.

   Currently, a Go function can **only** be implemented using the SDK, and the interface of the function is exposed in the form of the SDK.
Before using the Go function, you need to import "github.com/apache/pulsar/pulsar-function-go/pf".

   ```go
   import (
       "context"
       "fmt"

       "github.com/apache/pulsar/pulsar-function-go/pf"
   )

   func HandleRequest(ctx context.Context, input []byte) error {
       fmt.Println(string(input) + "!")
       return nil
   }

   func main() {
       pf.Start(HandleRequest)
   }
   ```

   You can use the context to interact with the Go function.

   ```go
   if fc, ok := pf.FromContext(ctx); ok {
       fmt.Printf("function ID is:%s, ", fc.GetFuncID())
       fmt.Printf("function version is:%s\n", fc.GetFuncVersion())
   }
   ```

   When writing a Go function, remember that:
   - In `main()`, you **only** need to register the function name to `Start()`. **Only** one function name is received in `Start()`.
   - The Go function uses Go reflection, based on the received function name, to verify whether the parameter list and returned value list are correct. The parameter list and returned value list **must be** one of the following sample signatures:

   ```go
   func ()
   func () error
   func (input) error
   func () (output, error)
   func (input) (output, error)
   func (context.Context) error
   func (context.Context, input) error
   func (context.Context) (output, error)
   func (context.Context, input) (output, error)
   ```

2. Build the Go function.

   ```bash
   go build [your Go function filename].go
   ```

3. Run the Go function.

   (1) Copy the Go function file into the Pulsar container.

   ```bash
   docker exec -it [CONTAINER ID] /bin/bash
   docker cp [your Go function file] [CONTAINER ID]:/pulsar
   ```

   (2) Run the Go function with the following command.

   ```bash
   ./bin/pulsar-admin functions localrun \
     --go [your go function path] \
     --inputs [input topics] \
     --output [output topic] \
     --tenant [default:public] \
     --namespace [default:default] \
     --name [custom unique go function name]
   ```

   The following log indicates that the Go function starts successfully.

   ```text
   ...
   07:55:03.724 [main] INFO  org.apache.pulsar.functions.runtime.ProcessRuntime - Started process successfully
   ...
   ```

## Start Functions in cluster mode

If you want to start a function in cluster mode, replace `localrun` with `create` in the commands above. The following log indicates that your function starts successfully.

   ```text
   "Created successfully"
   ```

For information about the `--classname`, `--jar`, `--py`, `--go`, and `--inputs` parameters, run the command `./bin/pulsar-admin functions` or see [here](reference-pulsar-admin.md#functions).
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/functions-runtime.md b/site2/website/versioned_docs/version-2.9.1-deprecated/functions-runtime.md
deleted file mode 100644
index 7164bd13668aff..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/functions-runtime.md
+++ /dev/null
@@ -1,403 +0,0 @@
---
id: functions-runtime
title: Configure Functions runtime
sidebar_label: "Setup: Configure Functions runtime"
original_id: functions-runtime
---

You can use the following methods to run functions.

- *Thread*: Invoke functions in threads in the functions worker.
- *Process*: Invoke functions in processes forked by the functions worker.
- *Kubernetes*: Submit functions as Kubernetes StatefulSets via the functions worker.

:::note

Pulsar supports adding labels to the Kubernetes StatefulSets and services while launching functions, which facilitates selecting the target Kubernetes objects.
- -::: - -The differences of the thread and process modes are: -- Thread mode: when a function runs in thread mode, it runs on the same Java virtual machine (JVM) with functions worker. -- Process mode: when a function runs in process mode, it runs on the same machine that functions worker runs. - -## Configure thread runtime -It is easy to configure *Thread* runtime. In most cases, you do not need to configure anything. You can customize the thread group name with the following settings: - -```yaml - -functionRuntimeFactoryClassName: org.apache.pulsar.functions.runtime.thread.ThreadRuntimeFactory -functionRuntimeFactoryConfigs: - threadGroupName: "Your Function Container Group" - -``` - -*Thread* runtime is only supported in Java function. - -## Configure process runtime -When you enable *Process* runtime, you do not need to configure anything. - -```yaml - -functionRuntimeFactoryClassName: org.apache.pulsar.functions.runtime.process.ProcessRuntimeFactory -functionRuntimeFactoryConfigs: - # the directory for storing the function logs - logDirectory: - # change the jar location only when you put the java instance jar in a different location - javaInstanceJarLocation: - # change the python instance location only when you put the python instance jar in a different location - pythonInstanceLocation: - # change the extra dependencies location: - extraFunctionDependenciesDir: - -``` - -*Process* runtime is supported in Java, Python, and Go functions. - -## Configure Kubernetes runtime - -When the functions worker generates Kubernetes manifests and apply the manifests, the Kubernetes runtime works. If you have run functions worker on Kubernetes, you can use the `serviceAccount` associated with the pod that the functions worker is running in. Otherwise, you can configure it to communicate with a Kubernetes cluster. - -The manifests, generated by the functions worker, include a `StatefulSet`, a `Service` (used to communicate with the pods), and a `Secret` for auth credentials (when applicable). The `StatefulSet` manifest (by default) has a single pod, with the number of replicas determined by the "parallelism" of the function. On pod boot, the pod downloads the function payload (via the functions worker REST API). The pod's container image is configurable, but must have the functions runtime. - -The Kubernetes runtime supports secrets, so you can create a Kubernetes secret and expose it as an environment variable in the pod. The Kubernetes runtime is extensible, you can implement classes and customize the way how to generate Kubernetes manifests, how to pass auth data to pods, and how to integrate secrets. - -:::tip - -For the rules of translating Pulsar object names into Kubernetes resource labels, see [here](admin-api-overview.md#how-to-define-pulsar-resource-names-when-running-pulsar-in-kubernetes). - -::: - -### Basic configuration - -It is easy to configure Kubernetes runtime. You can just uncomment the settings of `kubernetesContainerFactory` in the `functions_worker.yaml` file. The following is an example. - -```yaml - -functionRuntimeFactoryClassName: org.apache.pulsar.functions.runtime.kubernetes.KubernetesRuntimeFactory -functionRuntimeFactoryConfigs: - # uri to kubernetes cluster, leave it to empty and it will use the kubernetes settings in function worker - k8Uri: - # the kubernetes namespace to run the function instances. it is `default`, if this setting is left to be empty - jobNamespace: - # The Kubernetes pod name to run the function instances. 
It is set to - # `pf----` if this setting is left to be empty - jobName: - # the docker image to run function instance. by default it is `apachepulsar/pulsar` - pulsarDockerImageName: - # the docker image to run function instance according to different configurations provided by users. - # By default it is `apachepulsar/pulsar`. - # e.g: - # functionDockerImages: - # JAVA: JAVA_IMAGE_NAME - # PYTHON: PYTHON_IMAGE_NAME - # GO: GO_IMAGE_NAME - functionDockerImages: - # "The image pull policy for image used to run function instance. By default it is `IfNotPresent` - imagePullPolicy: IfNotPresent - # the root directory of pulsar home directory in `pulsarDockerImageName`. by default it is `/pulsar`. - # if you are using your own built image in `pulsarDockerImageName`, you need to set this setting accordingly - pulsarRootDir: - # The config admin CLI allows users to customize the configuration of the admin cli tool, such as: - # `/bin/pulsar-admin and /bin/pulsarctl`. By default it is `/bin/pulsar-admin`. If you want to use `pulsarctl` - # you need to set this setting accordingly - configAdminCLI: - # this setting only takes effects if `k8Uri` is set to null. if your function worker is running as a k8 pod, - # setting this to true is let function worker to submit functions to the same k8s cluster as function worker - # is running. setting this to false if your function worker is not running as a k8 pod. - submittingInsidePod: false - # setting the pulsar service url that pulsar function should use to connect to pulsar - # if it is not set, it will use the pulsar service url configured in worker service - pulsarServiceUrl: - # setting the pulsar admin url that pulsar function should use to connect to pulsar - # if it is not set, it will use the pulsar admin url configured in worker service - pulsarAdminUrl: - # The flag indicates to install user code dependencies. (applied to python package) - installUserCodeDependencies: - # The repository that pulsar functions use to download python dependencies - pythonDependencyRepository: - # The repository that pulsar functions use to download extra python dependencies - pythonExtraDependencyRepository: - # the custom labels that function worker uses to select the nodes for pods - customLabels: - # The expected metrics collection interval, in seconds - expectedMetricsCollectionInterval: 30 - # Kubernetes Runtime will periodically checkback on - # this configMap if defined and if there are any changes - # to the kubernetes specific stuff, we apply those changes - changeConfigMap: - # The namespace for storing change config map - changeConfigMapNamespace: - # The ratio cpu request and cpu limit to be set for a function/source/sink. - # The formula for cpu request is cpuRequest = userRequestCpu / cpuOverCommitRatio - cpuOverCommitRatio: 1.0 - # The ratio memory request and memory limit to be set for a function/source/sink. 
- # The formula for memory request is memoryRequest = userRequestMemory / memoryOverCommitRatio - memoryOverCommitRatio: 1.0 - # The port inside the function pod which is used by the worker to communicate with the pod - grpcPort: 9093 - # The port inside the function pod on which prometheus metrics are exposed - metricsPort: 9094 - # The directory inside the function pod where nar packages will be extracted - narExtractionDirectory: - # The classpath where function instance files stored - functionInstanceClassPath: - # the directory for dropping extra function dependencies - # if it is not an absolute path, it is relative to `pulsarRootDir` - extraFunctionDependenciesDir: - # Additional memory padding added on top of the memory requested by the function per on a per instance basis - percentMemoryPadding: 10 - # The duration (in seconds) before the StatefulSet is deleted after a function stops or restarts. - # Value must be a non-negative integer. 0 indicates the StatefulSet is deleted immediately. - # Default is 5 seconds. - gracePeriodSeconds: 5 - -``` - -If you run functions worker embedded in a broker on Kubernetes, you can use the default settings. - -### Run standalone functions worker on Kubernetes - -If you run functions worker standalone (that is, not embedded) on Kubernetes, you need to configure `pulsarSerivceUrl` to be the URL of the broker and `pulsarAdminUrl` as the URL to the functions worker. - -For example, both Pulsar brokers and Function Workers run in the `pulsar` K8S namespace. The brokers have a service called `brokers` and the functions worker has a service called `func-worker`. The settings are as follows: - -```yaml - -pulsarServiceUrl: pulsar://broker.pulsar:6650 // or pulsar+ssl://broker.pulsar:6651 if using TLS -pulsarAdminUrl: http://func-worker.pulsar:8080 // or https://func-worker:8443 if using TLS - -``` - -### Run RBAC in Kubernetes clusters - -If you run RBAC in your Kubernetes cluster, make sure that the service account you use for running functions workers (or brokers, if functions workers run along with brokers) have permissions on the following Kubernetes APIs. - -- services -- configmaps -- pods -- apps.statefulsets - -The following is sufficient: - -```yaml - -apiVersion: rbac.authorization.k8s.io/v1beta1 -kind: ClusterRole -metadata: - name: functions-worker -rules: -- apiGroups: [""] - resources: - - services - - configmaps - - pods - verbs: - - '*' -- apiGroups: - - apps - resources: - - statefulsets - verbs: - - '*' ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: functions-worker ---- -apiVersion: rbac.authorization.k8s.io/v1beta1 -kind: ClusterRoleBinding -metadata: - name: functions-worker -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: functions-worker -subjectsKubernetesSec: -- kind: ServiceAccount - name: functions-worker - -``` - -If the service-account is not properly configured, an error message similar to this is displayed: - -```bash - -22:04:27.696 [Timer-0] ERROR org.apache.pulsar.functions.runtime.KubernetesRuntimeFactory - Error while trying to fetch configmap example-pulsar-4qvmb5gur3c6fc9dih0x1xn8b-function-worker-config at namespace pulsar -io.kubernetes.client.ApiException: Forbidden - at io.kubernetes.client.ApiClient.handleResponse(ApiClient.java:882) ~[io.kubernetes-client-java-2.0.0.jar:?] - at io.kubernetes.client.ApiClient.execute(ApiClient.java:798) ~[io.kubernetes-client-java-2.0.0.jar:?] 
- at io.kubernetes.client.apis.CoreV1Api.readNamespacedConfigMapWithHttpInfo(CoreV1Api.java:23673) ~[io.kubernetes-client-java-api-2.0.0.jar:?] - at io.kubernetes.client.apis.CoreV1Api.readNamespacedConfigMap(CoreV1Api.java:23655) ~[io.kubernetes-client-java-api-2.0.0.jar:?] - at org.apache.pulsar.functions.runtime.KubernetesRuntimeFactory.fetchConfigMap(KubernetesRuntimeFactory.java:284) [org.apache.pulsar-pulsar-functions-runtime-2.4.0-42c3bf949.jar:2.4.0-42c3bf949] - at org.apache.pulsar.functions.runtime.KubernetesRuntimeFactory$1.run(KubernetesRuntimeFactory.java:275) [org.apache.pulsar-pulsar-functions-runtime-2.4.0-42c3bf949.jar:2.4.0-42c3bf949] - at java.util.TimerThread.mainLoop(Timer.java:555) [?:1.8.0_212] - at java.util.TimerThread.run(Timer.java:505) [?:1.8.0_212] - -``` - -### Integrate Kubernetes secrets - -In order to safely distribute secrets, Pulsar Functions can reference Kubernetes secrets. To enable this, set the `secretsProviderConfiguratorClassName` to `org.apache.pulsar.functions.secretsproviderconfigurator.KubernetesSecretsProviderConfigurator`. - -You can create a secret in the namespace where your functions are deployed. For example, you deploy functions to the `pulsar-func` Kubernetes namespace, and you have a secret named `database-creds` with a field name `password`, which you want to mount in the pod as an environment variable called `DATABASE_PASSWORD`. The following functions configuration enables you to reference that secret and mount the value as an environment variable in the pod. - -```Yaml - -tenant: "mytenant" -namespace: "mynamespace" -name: "myfunction" -topicName: "persistent://mytenant/mynamespace/myfuncinput" -className: "com.company.pulsar.myfunction" - -secrets: - # the secret will be mounted from the `password` field in the `database-creds` secret as an env var called `DATABASE_PASSWORD` - DATABASE_PASSWORD: - path: "database-creds" - key: "password" - -``` - -### Enable token authentication - -When you enable authentication for your Pulsar cluster, you need a mechanism for the pod running your function to authenticate with the broker. - -The `org.apache.pulsar.functions.auth.KubernetesFunctionAuthProvider` interface provides support for any authentication mechanism. The `functionAuthProviderClassName` in `function-worker.yml` is used to specify your path to this implementation. - -Pulsar includes an implementation of this interface for token authentication, and distributes the certificate authority via the same implementation. The configuration is similar as follows: - -```Yaml - -functionAuthProviderClassName: org.apache.pulsar.functions.auth.KubernetesSecretsTokenAuthProvider - -``` - -For token authentication, the functions worker captures the token that is used to deploy (or update) the function. The token is saved as a secret and mounted into the pod. - -For custom authentication or TLS, you need to implement this interface or use an alternative mechanism to provide authentication. If you use token authentication and TLS encryption to secure the communication with the cluster, Pulsar passes your certificate authority (CA) to the client, so the client obtains what it needs to authenticate the cluster, and trusts the cluster with your signed certificate. - -:::note - -If you use tokens that expire when deploying functions, these tokens will expire. 
- -::: - -### Run clusters with authentication - -When you run a functions worker in a standalone process (that is, not embedded in the broker) in a cluster with authentication, you must configure your functions worker to interact with the broker and authenticate incoming requests. So you need to configure properties that the broker requires for authentication or authorization. - -For example, if you use token authentication, you need to configure the following properties in the `function-worker.yml` file. - -```Yaml - -clientAuthenticationPlugin: org.apache.pulsar.client.impl.auth.AuthenticationToken -clientAuthenticationParameters: file:///etc/pulsar/token/admin-token.txt -configurationStoreServers: zookeeper-cluster:2181 # auth requires a connection to zookeeper -authenticationProviders: - - "org.apache.pulsar.broker.authentication.AuthenticationProviderToken" -authorizationEnabled: true -authenticationEnabled: true -superUserRoles: - - superuser - - proxy -properties: - tokenSecretKey: file:///etc/pulsar/jwt/secret # if using a secret token, key file must be DER-encoded - tokenPublicKey: file:///etc/pulsar/jwt/public.key # if using public/private key tokens, key file must be DER-encoded - -``` - -:::note - -You must configure both the Function Worker authorization or authentication for the server to authenticate requests and configure the client to be authenticated to communicate with the broker. - -::: - -### Customize Kubernetes runtime - -The Kubernetes integration enables you to implement a class and customize how to generate manifests. You can configure it by setting `runtimeCustomizerClassName` in the `functions-worker.yml` file and use the fully qualified class name. You must implement the `org.apache.pulsar.functions.runtime.kubernetes.KubernetesManifestCustomizer` interface. - -The functions (and sinks/sources) API provides a flag, `customRuntimeOptions`, which is passed to this interface. - -To initialize the `KubernetesManifestCustomizer`, you can provide `runtimeCustomizerConfig` in the `functions-worker.yml` file. `runtimeCustomizerConfig` is passed to the `public void initialize(Map config)` function of the interface. `runtimeCustomizerConfig`is different from the `customRuntimeOptions` as `runtimeCustomizerConfig` is the same across all functions. If you provide both `runtimeCustomizerConfig` and `customRuntimeOptions`, you need to decide how to manage these two configurations in your implementation of `KubernetesManifestCustomizer`. - -Pulsar includes a built-in implementation. To use the basic implementation, set `runtimeCustomizerClassName` to `org.apache.pulsar.functions.runtime.kubernetes.BasicKubernetesManifestCustomizer`. The built-in implementation initialized with `runtimeCustomizerConfig` enables you to pass a JSON document as `customRuntimeOptions` with certain properties to augment, which decides how the manifests are generated. If both `runtimeCustomizerConfig` and `customRuntimeOptions` are provided, `BasicKubernetesManifestCustomizer` uses `customRuntimeOptions` to override the configuration if there are conflicts in these two configurations. - -Below is an example of `customRuntimeOptions`. 
- -```json - -{ - "jobName": "jobname", // the k8s pod name to run this function instance - "jobNamespace": "namespace", // the k8s namespace to run this function in - "extractLabels": { // extra labels to attach to the statefulSet, service, and pods - "extraLabel": "value" - }, - "extraAnnotations": { // extra annotations to attach to the statefulSet, service, and pods - "extraAnnotation": "value" - }, - "nodeSelectorLabels": { // node selector labels to add on to the pod spec - "customLabel": "value" - }, - "tolerations": [ // tolerations to add to the pod spec - { - "key": "custom-key", - "value": "value", - "effect": "NoSchedule" - } - ], - "resourceRequirements": { // values for cpu and memory should be defined as described here: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container - "requests": { - "cpu": 1, - "memory": "4G" - }, - "limits": { - "cpu": 2, - "memory": "8G" - } - } -} - -``` - -## Run clusters with geo-replication - -If you run multiple clusters tied together with geo-replication, it is important to use a different function namespace for each cluster. Otherwise, the function shares a namespace and potentially schedule across clusters. - -For example, if you have two clusters: `east-1` and `west-1`, you can configure the functions workers for `east-1` and `west-1` perspectively as follows. - -```Yaml - -pulsarFunctionsCluster: east-1 -pulsarFunctionsNamespace: public/functions-east-1 - -``` - -```Yaml - -pulsarFunctionsCluster: west-1 -pulsarFunctionsNamespace: public/functions-west-1 - -``` - -This ensures the two different Functions Workers use distinct sets of topics for their internal coordination. - -## Configure standalone functions worker - -When configuring a standalone functions worker, you need to configure properties that the broker requires, especially if you use TLS. And then Functions Worker can communicate with the broker. - -You need to configure the following required properties. - -```Yaml - -workerPort: 8080 -workerPortTls: 8443 # when using TLS -tlsCertificateFilePath: /etc/pulsar/tls/tls.crt # when using TLS -tlsKeyFilePath: /etc/pulsar/tls/tls.key # when using TLS -tlsTrustCertsFilePath: /etc/pulsar/tls/ca.crt # when using TLS -pulsarServiceUrl: pulsar://broker.pulsar:6650/ # or pulsar+ssl://pulsar-prod-broker.pulsar:6651/ when using TLS -pulsarWebServiceUrl: http://broker.pulsar:8080/ # or https://pulsar-prod-broker.pulsar:8443/ when using TLS -useTls: true # when using TLS, critical! - -``` - diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/functions-worker.md b/site2/website/versioned_docs/version-2.9.1-deprecated/functions-worker.md deleted file mode 100644 index 49fc76b30bdaa5..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/functions-worker.md +++ /dev/null @@ -1,386 +0,0 @@ ---- -id: functions-worker -title: Deploy and manage functions worker -sidebar_label: "Setup: Pulsar Functions Worker" -original_id: functions-worker ---- -Before using Pulsar Functions, you need to learn how to set up Pulsar Functions worker and how to [configure Functions runtime](functions-runtime.md). - -Pulsar `functions-worker` is a logic component to run Pulsar Functions in cluster mode. Two options are available, and you can select either based on your requirements. 
-- [run with brokers](#run-functions-worker-with-brokers) -- [run it separately](#run-functions-worker-separately) in a different broker - -:::note - -The `--- Service Urls---` lines in the following diagrams represent Pulsar service URLs that Pulsar client and admin use to connect to a Pulsar cluster. - -::: - -## Run Functions-worker with brokers - -The following diagram illustrates the deployment of functions-workers running along with brokers. - -![assets/functions-worker-corun.png](/assets/functions-worker-corun.png) - -To enable functions-worker running as part of a broker, you need to set `functionsWorkerEnabled` to `true` in the `broker.conf` file. - -```conf - -functionsWorkerEnabled=true - -``` - -If the `functionsWorkerEnabled` is set to `true`, the functions-worker is started as part of a broker. You need to configure the `conf/functions_worker.yml` file to customize your functions_worker. - -Before you run Functions-worker with broker, you have to configure Functions-worker, and then start it with brokers. - -### Configure Functions-Worker to run with brokers -In this mode, most of the settings are already inherited from your broker configuration (for example, configurationStore settings, authentication settings, and so on) since `functions-worker` is running as part of the broker. - -Pay attention to the following required settings when configuring functions-worker in this mode. - -- `numFunctionPackageReplicas`: The number of replicas to store function packages. The default value is `1`, which is good for standalone deployment. For production deployment, to ensure high availability, set it to be larger than `2`. -- `initializedDlogMetadata`: Whether to initialize distributed log metadata in runtime. If it is set to `true`, you must ensure that it has been initialized by `bin/pulsar initialize-cluster-metadata` command. - -If authentication is enabled on the BookKeeper cluster, configure the following BookKeeper authentication settings. - -- `bookkeeperClientAuthenticationPlugin`: the BookKeeper client authentication plugin name. -- `bookkeeperClientAuthenticationParametersName`: the BookKeeper client authentication plugin parameters name. -- `bookkeeperClientAuthenticationParameters`: the BookKeeper client authentication plugin parameters. - -### Configure Stateful-Functions to run with broker - -If you want to use Stateful-Functions related functions (for example, `putState()` and `queryState()` related interfaces), follow steps below. - -1. Enable the **streamStorage** service in the BookKeeper. - - Currently, the service uses the NAR package, so you need to set the configuration in `bookkeeper.conf`. - - ```text - - extraServerComponents=org.apache.bookkeeper.stream.server.StreamStorageLifecycleComponent - - ``` - - After starting bookie, use the following methods to check whether the streamStorage service is started correctly. - - Input: - - ```shell - - telnet localhost 4181 - - ``` - - Output: - - ```text - - Trying 127.0.0.1... - Connected to localhost. - Escape character is '^]'. - - ``` - -2. Turn on this function in `functions_worker.yml`. - - ```text - - stateStorageServiceUrl: bk://:4181 - - ``` - - `bk-service-url` is the service URL pointing to the BookKeeper table service. - -### Start Functions-worker with broker - -Once you have configured the `functions_worker.yml` file, you can start or restart your broker. - -And then you can use the following command to verify if `functions-worker` is running well. 
- -```bash - -curl :8080/admin/v2/worker/cluster - -``` - -After entering the command above, a list of active function workers in the cluster is returned. The output is similar to the following. - -```json - -[{"workerId":"","workerHostname":"","port":8080}] - -``` - -## Run Functions-worker separately - -This section illustrates how to run `functions-worker` as a separate process in separate machines. - -![assets/functions-worker-separated.png](/assets/functions-worker-separated.png) - -:::note - -In this mode, make sure `functionsWorkerEnabled` is set to `false`, so you won't start `functions-worker` with brokers by mistake. Also, while accessing the `functions-worker` to manage any of the functions, the `pulsar-admin` CLI tool or any of the clients should use the `workerHostname` and `workerPort` that you set in [Worker parameters](#worker-parameters) to generate an `--admin-url`. - -::: - -### Configure Functions-worker to run separately - -To run function-worker separately, you have to configure the following parameters. - -#### Worker parameters - -- `workerId`: The type is string. It is unique across clusters, which is used to identify a worker machine. -- `workerHostname`: The hostname of the worker machine. -- `workerPort`: The port that the worker server listens on. Keep it as default if you don't customize it. -- `workerPortTls`: The TLS port that the worker server listens on. Keep it as default if you don't customize it. - -#### Function package parameter - -- `numFunctionPackageReplicas`: The number of replicas to store function packages. The default value is `1`. - -#### Function metadata parameter - -- `pulsarServiceUrl`: The Pulsar service URL for your broker cluster. -- `pulsarWebServiceUrl`: The Pulsar web service URL for your broker cluster. -- `pulsarFunctionsCluster`: Set the value to your Pulsar cluster name (same as the `clusterName` setting in the broker configuration). - -If authentication is enabled for your broker cluster, you *should* configure the authentication plugin and parameters for the functions worker to communicate with the brokers. - -- `brokerClientAuthenticationEnabled`: Whether to enable the broker client authentication used by function workers to talk to brokers. -- `clientAuthenticationPlugin`: The authentication plugin to be used by the Pulsar client used in worker service. -- `clientAuthenticationParameters`: The authentication parameter to be used by the Pulsar client used in worker service. - -#### Security settings - -If you want to enable security on functions workers, you *should*: -- [Enable TLS transport encryption](#enable-tls-transport-encryption) -- [Enable Authentication Provider](#enable-authentication-provider) -- [Enable Authorization Provider](#enable-authorization-provider) -- [Enable End-to-End Encryption](#enable-end-to-end-encryption) - -##### Enable TLS transport encryption - -To enable TLS transport encryption, configure the following settings. - -``` - -useTLS: true -pulsarServiceUrl: pulsar+ssl://localhost:6651/ -pulsarWebServiceUrl: https://localhost:8443 - -tlsEnabled: true -tlsCertificateFilePath: /path/to/functions-worker.cert.pem -tlsKeyFilePath: /path/to/functions-worker.key-pk8.pem -tlsTrustCertsFilePath: /path/to/ca.cert.pem - -// The path to trusted certificates used by the Pulsar client to authenticate with Pulsar brokers -brokerClientTrustCertsFilePath: /path/to/ca.cert.pem - -``` - -For details on TLS encryption, refer to [Transport Encryption using TLS](security-tls-transport.md). 
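For reference, a client talking to a TLS-enabled functions worker needs matching settings on its side. The following is a minimal sketch using the Java admin client, assuming the worker is reachable at `https://localhost:8443` and trusts the CA certificate configured above; adjust the URL and paths for your deployment.

```java
import org.apache.pulsar.client.admin.PulsarAdmin;

public class AdminTlsExample {
    public static void main(String[] args) throws Exception {
        PulsarAdmin admin = PulsarAdmin.builder()
                // Point at the functions worker's TLS web service port.
                .serviceHttpUrl("https://localhost:8443")
                // Trust the CA that signed the worker certificate.
                .tlsTrustCertsFilePath("/path/to/ca.cert.pem")
                .build();

        // List the active workers in the cluster, the same information
        // returned by the /admin/v2/worker/cluster endpoint shown earlier.
        System.out.println(admin.worker().getCluster());

        admin.close();
    }
}
```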
- -##### Enable Authentication Provider - -To enable authentication on Functions Worker, you need to configure the following settings. - -:::note - -Substitute the *providers list* with the providers you want to enable. - -::: - -``` - -authenticationEnabled: true -authenticationProviders: [ provider1, provider2 ] - -``` - -For *TLS Authentication* provider, follow the example below to add the necessary settings. -See [TLS Authentication](security-tls-authentication.md) for more details. - -``` - -brokerClientAuthenticationPlugin: org.apache.pulsar.client.impl.auth.AuthenticationTls -brokerClientAuthenticationParameters: tlsCertFile:/path/to/admin.cert.pem,tlsKeyFile:/path/to/admin.key-pk8.pem - -authenticationEnabled: true -authenticationProviders: ['org.apache.pulsar.broker.authentication.AuthenticationProviderTls'] - -``` - -For *SASL Authentication* provider, add `saslJaasClientAllowedIds` and `saslJaasBrokerSectionName` -under `properties` if needed. - -``` - -properties: - saslJaasClientAllowedIds: .*pulsar.* - saslJaasBrokerSectionName: Broker - -``` - -For *Token Authentication* provider, add necessary settings for `properties` if needed. -See [Token Authentication](security-jwt.md) for more details. -Note: key files must be DER-encoded - -``` - -properties: - tokenSecretKey: file://my/secret.key - # If using public/private - # tokenPublicKey: file:///path/to/public.key - -``` - -##### Enable Authorization Provider - -To enable authorization on Functions Worker, you need to configure `authorizationEnabled`, `authorizationProvider` and `configurationStoreServers`. The authentication provider connects to `configurationStoreServers` to receive namespace policies. - -```yaml - -authorizationEnabled: true -authorizationProvider: org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider -configurationStoreServers: - -``` - -You should also configure a list of superuser roles. The superuser roles are able to access any admin API. The following is a configuration example. - -```yaml - -superUserRoles: - - role1 - - role2 - - role3 - -``` - -##### Enable End-to-End Encryption - -You can use the public and private key pair that the application configures to perform encryption. Only the consumers with a valid key can decrypt the encrypted messages. - -To enable End-to-End encryption on Functions Worker, you can set it by specifying `--producer-config` in the command line terminal, for more information, please refer to [here](security-encryption.md). - -We include the relevant configuration information of `CryptoConfig` into `ProducerConfig`. The specific configurable field information about `CryptoConfig` is as follows: - -```text - -public class CryptoConfig { - private String cryptoKeyReaderClassName; - private Map cryptoKeyReaderConfig; - - private String[] encryptionKeys; - private ProducerCryptoFailureAction producerCryptoFailureAction; - - private ConsumerCryptoFailureAction consumerCryptoFailureAction; -} - -``` - -- `producerCryptoFailureAction`: define the action if producer fail to encrypt data one of `FAIL`, `SEND`. -- `consumerCryptoFailureAction`: define the action if consumer fail to decrypt data one of `FAIL`, `DISCARD`, `CONSUME`. - -#### BookKeeper Authentication - -If authentication is enabled on the BookKeeper cluster, you need configure the BookKeeper authentication settings as follows: - -- `bookkeeperClientAuthenticationPlugin`: the plugin name of BookKeeper client authentication. 
-- `bookkeeperClientAuthenticationParametersName`: the plugin parameters name of BookKeeper client authentication.
-- `bookkeeperClientAuthenticationParameters`: the plugin parameters of BookKeeper client authentication.
-
-### Start Functions-worker
-
-Once you have finished configuring the `functions_worker.yml` configuration file, you can start a `functions-worker` in the background by using [nohup](https://en.wikipedia.org/wiki/Nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
-
-```bash
-
-bin/pulsar-daemon start functions-worker
-
-```
-
-You can also start `functions-worker` in the foreground by using the `pulsar` CLI tool:
-
-```bash
-
-bin/pulsar functions-worker
-
-```
-
-### Configure Proxies for Functions-workers
-
-When you are running `functions-worker` in a separate cluster, the admin REST endpoints are split across two clusters. The `functions`, `function-worker`, `source` and `sink` endpoints are now served
-by the `functions-worker` cluster, while all the other remaining endpoints are served by the broker cluster.
-Hence, you need to configure `pulsar-admin` to use the right service URL for each request.
-
-To address this inconvenience, you can start a proxy cluster that routes the admin REST requests accordingly, giving you one central entry point for your admin service.
-
-If you already have a proxy cluster, continue reading. If you haven't set up a proxy cluster before, you can follow the [instructions](http://pulsar.apache.org/docs/en/administration-proxy/) to
-start proxies.
-
-![assets/functions-worker-separated.png](/assets/functions-worker-separated-proxy.png)
-
-To route functions-related admin requests to the `functions-worker` in a proxy, you can edit the `proxy.conf` file to modify the following settings:
-
-```conf
-
-functionWorkerWebServiceURL=<Worker-service-URL>
-functionWorkerWebServiceURLTLS=<Worker-service-TLS-URL>
-
-```
-
-## Compare the Run-with-Broker and Run-separately modes
-
-As described above, you can run the functions-worker with brokers, or run it separately. Running functions-workers along with brokers is more convenient; however, running functions-workers in a separate cluster provides better resource isolation for running functions in `Process` or `Thread` mode.
-
-To determine which mode suits your case, refer to the following guidelines.
-
-Use the `Run-with-Broker` mode in the following cases:
-- a) if resource isolation is not required when running functions in `Process` or `Thread` mode;
-- b) if you configure the functions-worker to run functions on Kubernetes (where the resource isolation problem is addressed by Kubernetes).
-
-Use the `Run-separately` mode in the following cases:
-- a) if you don't have a Kubernetes cluster;
-- b) if you want to run functions and brokers separately.
-
-## Troubleshooting
-
-**Error message: Namespace missing local cluster name in clusters list**
-
-```
-
-Failed to get partitioned topic metadata: org.apache.pulsar.client.api.PulsarClientException$BrokerMetadataException: Namespace missing local cluster name in clusters list: local_cluster=xyz ns=public/functions clusters=[standalone]
-
-```
-
-This error message appears when either of the following cases occurs:
-- a) a broker is started with `functionsWorkerEnabled=true`, but `pulsarFunctionsCluster` is not set to the correct cluster in the `conf/functions_worker.yml` file;
-- b) a geo-replicated Pulsar cluster is set up with `functionsWorkerEnabled=true`, and while brokers in one cluster run well, brokers in the other cluster do not.
-
-**Workaround**
-
-If either of these cases happens, follow the instructions below to fix the problem:
-
-1. Disable the Functions Worker by setting `functionsWorkerEnabled=false`, and restart the brokers.
-
-2. Get the current clusters list of the `public/functions` namespace.
-
-```bash
-
-bin/pulsar-admin namespaces get-clusters public/functions
-
-```
-
-3. Check whether the cluster is in the clusters list. If the cluster is not in the list, add it and update the clusters list.
-
-```bash
-
-bin/pulsar-admin namespaces set-clusters --clusters <existing-clusters>,<local-cluster> public/functions
-
-```
-
-4. After setting the cluster successfully, enable the functions worker by setting `functionsWorkerEnabled=true`.
-
-5. Set the correct cluster name in `pulsarFunctionsCluster` in the `conf/functions_worker.yml` file, and restart the brokers.
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/getting-started-concepts-and-architecture.md b/site2/website/versioned_docs/version-2.9.1-deprecated/getting-started-concepts-and-architecture.md
deleted file mode 100644
index fe9c3fbc553b2c..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/getting-started-concepts-and-architecture.md
+++ /dev/null
@@ -1,16 +0,0 @@
----
-id: concepts-architecture
-title: Pulsar concepts and architecture
-sidebar_label: "Concepts and architecture"
-original_id: concepts-architecture
----
-
-
-
-
-
-
-
-
-
-
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/getting-started-docker.md b/site2/website/versioned_docs/version-2.9.1-deprecated/getting-started-docker.md
deleted file mode 100644
index de5ead69e164b0..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/getting-started-docker.md
+++ /dev/null
@@ -1,211 +0,0 @@
----
-id: getting-started-docker
-title: Set up a standalone Pulsar in Docker
-sidebar_label: "Run Pulsar in Docker"
-original_id: getting-started-docker
----
-
-For local development and testing, you can run Pulsar in standalone mode on your own machine within a Docker container.
-
-If you have not installed Docker, download the [Community edition](https://www.docker.com/community-edition) and follow the instructions for your OS.
-
-## Start Pulsar in Docker
-
-* For macOS, Linux, and Windows:
-
-  ```shell
-
-  $ docker run -it -p 6650:6650 -p 8080:8080 --mount source=pulsardata,target=/pulsar/data --mount source=pulsarconf,target=/pulsar/conf apachepulsar/pulsar:@pulsar:version@ bin/pulsar standalone
-
-  ```
-
-A few things to note about this command:
- * The data, metadata, and configuration are persisted on Docker volumes in order to not start "fresh" every
-time the container is restarted. For details on the volumes, you can use `docker volume inspect <source-name>`.
- * For Docker on Windows, make sure to configure it to use Linux containers.
-
-If you start Pulsar successfully, you will see `INFO`-level log messages like this:
-
-```
-
-08:18:30.970 [main] INFO org.apache.pulsar.broker.web.WebService - HTTP Service started at http://0.0.0.0:8080
-...
-07:53:37.322 [main] INFO org.apache.pulsar.broker.PulsarService - messaging service is ready, bootstrap service port = 8080, broker url= pulsar://localhost:6650, cluster=standalone, configs=org.apache.pulsar.broker.ServiceConfiguration@98b63c1
-...
-
-```
-
-:::tip
-
-When you start a local standalone cluster, a `public/default` namespace is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics).
- -::: - -## Use Pulsar in Docker - -Pulsar offers client libraries for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python.md) -and [C++](client-libraries-cpp.md). If you're running a local standalone cluster, you can -use one of these root URLs to interact with your cluster: - -* `pulsar://localhost:6650` -* `http://localhost:8080` - -The following example will guide you get started with Pulsar quickly by using the [Python client API](client-libraries-python.md) -client API. - -Install the Pulsar Python client library directly from [PyPI](https://pypi.org/project/pulsar-client/): - -```shell - -$ pip install pulsar-client - -``` - -### Consume a message - -Create a consumer and subscribe to the topic: - -```python - -import pulsar - -client = pulsar.Client('pulsar://localhost:6650') -consumer = client.subscribe('my-topic', - subscription_name='my-sub') - -while True: - msg = consumer.receive() - print("Received message: '%s'" % msg.data()) - consumer.acknowledge(msg) - -client.close() - -``` - -### Produce a message - -Now start a producer to send some test messages: - -```python - -import pulsar - -client = pulsar.Client('pulsar://localhost:6650') -producer = client.create_producer('my-topic') - -for i in range(10): - producer.send(('hello-pulsar-%d' % i).encode('utf-8')) - -client.close() - -``` - -## Get the topic statistics - -In Pulsar, you can use REST, Java, or command-line tools to control every aspect of the system. -For details on APIs, refer to [Admin API Overview](admin-api-overview.md). - -In the simplest example, you can use curl to probe the stats for a particular topic: - -```shell - -$ curl http://localhost:8080/admin/v2/persistent/public/default/my-topic/stats | python -m json.tool - -``` - -The output is something like this: - -```json - -{ - "msgRateIn": 0.0, - "msgThroughputIn": 0.0, - "msgRateOut": 1.8332950480217471, - "msgThroughputOut": 91.33142602871978, - "bytesInCounter": 7097, - "msgInCounter": 143, - "bytesOutCounter": 6607, - "msgOutCounter": 133, - "averageMsgSize": 0.0, - "msgChunkPublished": false, - "storageSize": 7097, - "backlogSize": 0, - "offloadedStorageSize": 0, - "publishers": [ - { - "accessMode": "Shared", - "msgRateIn": 0.0, - "msgThroughputIn": 0.0, - "averageMsgSize": 0.0, - "chunkedMessageRate": 0.0, - "producerId": 0, - "metadata": {}, - "address": "/127.0.0.1:35604", - "connectedSince": "2021-07-04T09:05:43.04788Z", - "clientVersion": "2.8.0", - "producerName": "standalone-2-5" - } - ], - "waitingPublishers": 0, - "subscriptions": { - "my-sub": { - "msgRateOut": 1.8332950480217471, - "msgThroughputOut": 91.33142602871978, - "bytesOutCounter": 6607, - "msgOutCounter": 133, - "msgRateRedeliver": 0.0, - "chunkedMessageRate": 0, - "msgBacklog": 0, - "backlogSize": 0, - "msgBacklogNoDelayed": 0, - "blockedSubscriptionOnUnackedMsgs": false, - "msgDelayed": 0, - "unackedMessages": 0, - "type": "Exclusive", - "activeConsumerName": "3c544f1daa", - "msgRateExpired": 0.0, - "totalMsgExpired": 0, - "lastExpireTimestamp": 0, - "lastConsumedFlowTimestamp": 1625389101290, - "lastConsumedTimestamp": 1625389546070, - "lastAckedTimestamp": 1625389546162, - "lastMarkDeleteAdvancedTimestamp": 1625389546163, - "consumers": [ - { - "msgRateOut": 1.8332950480217471, - "msgThroughputOut": 91.33142602871978, - "bytesOutCounter": 6607, - "msgOutCounter": 133, - "msgRateRedeliver": 0.0, - "chunkedMessageRate": 0.0, - "consumerName": "3c544f1daa", - "availablePermits": 867, - "unackedMessages": 0, - "avgMessagesPerEntry": 
6, - "blockedConsumerOnUnackedMsgs": false, - "lastAckedTimestamp": 1625389546162, - "lastConsumedTimestamp": 1625389546070, - "metadata": {}, - "address": "/127.0.0.1:35472", - "connectedSince": "2021-07-04T08:58:21.287682Z", - "clientVersion": "2.8.0" - } - ], - "isDurable": true, - "isReplicated": false, - "allowOutOfOrderDelivery": false, - "consumersAfterMarkDeletePosition": {}, - "nonContiguousDeletedMessagesRanges": 0, - "nonContiguousDeletedMessagesRangesSerializedSize": 0, - "durable": true, - "replicated": false - } - }, - "replication": {}, - "deduplicationStatus": "Disabled", - "nonContiguousDeletedMessagesRanges": 0, - "nonContiguousDeletedMessagesRangesSerializedSize": 0 -} - -``` - diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/getting-started-helm.md b/site2/website/versioned_docs/version-2.9.1-deprecated/getting-started-helm.md deleted file mode 100644 index 5e9f7044a6d74b..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/getting-started-helm.md +++ /dev/null @@ -1,447 +0,0 @@ ---- -id: getting-started-helm -title: Get started in Kubernetes -sidebar_label: "Run Pulsar in Kubernetes" -original_id: getting-started-helm ---- - -This section guides you through every step of installing and running Apache Pulsar with Helm on Kubernetes quickly, including the following sections: - -- Install the Apache Pulsar on Kubernetes using Helm -- Start and stop Apache Pulsar -- Create topics using `pulsar-admin` -- Produce and consume messages using Pulsar clients -- Monitor Apache Pulsar status with Prometheus and Grafana - -For deploying a Pulsar cluster for production usage, read the documentation on [how to configure and install a Pulsar Helm chart](helm-deploy.md). - -## Prerequisite - -- Kubernetes server 1.14.0+ -- kubectl 1.14.0+ -- Helm 3.0+ - -:::tip - -For the following steps, step 2 and step 3 are for **developers** and step 4 and step 5 are for **administrators**. - -::: - -## Step 0: Prepare a Kubernetes cluster - -Before installing a Pulsar Helm chart, you have to create a Kubernetes cluster. You can follow [the instructions](helm-prepare.md) to prepare a Kubernetes cluster. - -We use [Minikube](https://minikube.sigs.k8s.io/docs/start/) in this quick start guide. To prepare a Kubernetes cluster, follow these steps: - -1. Create a Kubernetes cluster on Minikube. - - ```bash - - minikube start --memory=8192 --cpus=4 --kubernetes-version= - - ``` - - The `` can be any [Kubernetes version supported by your Minikube installation](https://minikube.sigs.k8s.io/docs/reference/configuration/kubernetes/), such as `v1.16.1`. - -2. Set `kubectl` to use Minikube. - - ```bash - - kubectl config use-context minikube - - ``` - -3. To use the [Kubernetes Dashboard](https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/) with the local Kubernetes cluster on Minikube, enter the command below: - - ```bash - - minikube dashboard - - ``` - - The command automatically triggers opening a webpage in your browser. - -## Step 1: Install Pulsar Helm chart - -1. Add Pulsar charts repo. - - ```bash - - helm repo add apache https://pulsar.apache.org/charts - - ``` - - ```bash - - helm repo update - - ``` - -2. Clone the Pulsar Helm chart repository. - - ```bash - - git clone https://github.com/apache/pulsar-helm-chart - cd pulsar-helm-chart - - ``` - -3. Run the script `prepare_helm_release.sh` to create secrets required for installing the Apache Pulsar Helm chart. 
The username `pulsar` and password `pulsar` are used for logging into the Grafana dashboard and Pulsar Manager. - - :::note - - When running the script, you can use `-n` to specify the Kubernetes namespace where the Pulsar Helm chart is installed, `-k` to define the Pulsar Helm release name, and `-c` to create the Kubernetes namespace. For more information about the script, run `./scripts/pulsar/prepare_helm_release.sh --help`. - - ::: - - ```bash - - ./scripts/pulsar/prepare_helm_release.sh \ - -n pulsar \ - -k pulsar-mini \ - -c - - ``` - -4. Use the Pulsar Helm chart to install a Pulsar cluster to Kubernetes. - - :::note - - You need to specify `--set initialize=true` when installing Pulsar the first time. This command installs and starts Apache Pulsar. - - ::: - - ```bash - - helm install \ - --values examples/values-minikube.yaml \ - --set initialize=true \ - --namespace pulsar \ - pulsar-mini apache/pulsar - - ``` - -5. Check the status of all pods. - - ```bash - - kubectl get pods -n pulsar - - ``` - - If all pods start up successfully, you can see that the `STATUS` is changed to `Running` or `Completed`. - - **Output** - - ```bash - - NAME READY STATUS RESTARTS AGE - pulsar-mini-bookie-0 1/1 Running 0 9m27s - pulsar-mini-bookie-init-5gphs 0/1 Completed 0 9m27s - pulsar-mini-broker-0 1/1 Running 0 9m27s - pulsar-mini-grafana-6b7bcc64c7-4tkxd 1/1 Running 0 9m27s - pulsar-mini-prometheus-5fcf5dd84c-w8mgz 1/1 Running 0 9m27s - pulsar-mini-proxy-0 1/1 Running 0 9m27s - pulsar-mini-pulsar-init-t7cqt 0/1 Completed 0 9m27s - pulsar-mini-pulsar-manager-9bcbb4d9f-htpcs 1/1 Running 0 9m27s - pulsar-mini-toolset-0 1/1 Running 0 9m27s - pulsar-mini-zookeeper-0 1/1 Running 0 9m27s - - ``` - -6. Check the status of all services in the namespace `pulsar`. - - ```bash - - kubectl get services -n pulsar - - ``` - - **Output** - - ```bash - - NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - pulsar-mini-bookie ClusterIP None 3181/TCP,8000/TCP 11m - pulsar-mini-broker ClusterIP None 8080/TCP,6650/TCP 11m - pulsar-mini-grafana LoadBalancer 10.106.141.246 3000:31905/TCP 11m - pulsar-mini-prometheus ClusterIP None 9090/TCP 11m - pulsar-mini-proxy LoadBalancer 10.97.240.109 80:32305/TCP,6650:31816/TCP 11m - pulsar-mini-pulsar-manager LoadBalancer 10.103.192.175 9527:30190/TCP 11m - pulsar-mini-toolset ClusterIP None 11m - pulsar-mini-zookeeper ClusterIP None 2888/TCP,3888/TCP,2181/TCP 11m - - ``` - -## Step 2: Use pulsar-admin to create Pulsar tenants/namespaces/topics - -`pulsar-admin` is the CLI (command-Line Interface) tool for Pulsar. In this step, you can use `pulsar-admin` to create resources, including tenants, namespaces, and topics. - -1. Enter the `toolset` container. - - ```bash - - kubectl exec -it -n pulsar pulsar-mini-toolset-0 -- /bin/bash - - ``` - -2. In the `toolset` container, create a tenant named `apache`. - - ```bash - - bin/pulsar-admin tenants create apache - - ``` - - Then you can list the tenants to see if the tenant is created successfully. - - ```bash - - bin/pulsar-admin tenants list - - ``` - - You should see a similar output as below. The tenant `apache` has been successfully created. - - ```bash - - "apache" - "public" - "pulsar" - - ``` - -3. In the `toolset` container, create a namespace named `pulsar` in the tenant `apache`. - - ```bash - - bin/pulsar-admin namespaces create apache/pulsar - - ``` - - Then you can list the namespaces of tenant `apache` to see if the namespace is created successfully. 
- - ```bash - - bin/pulsar-admin namespaces list apache - - ``` - - You should see a similar output as below. The namespace `apache/pulsar` has been successfully created. - - ```bash - - "apache/pulsar" - - ``` - -4. In the `toolset` container, create a topic `test-topic` with `4` partitions in the namespace `apache/pulsar`. - - ```bash - - bin/pulsar-admin topics create-partitioned-topic apache/pulsar/test-topic -p 4 - - ``` - -5. In the `toolset` container, list all the partitioned topics in the namespace `apache/pulsar`. - - ```bash - - bin/pulsar-admin topics list-partitioned-topics apache/pulsar - - ``` - - Then you can see all the partitioned topics in the namespace `apache/pulsar`. - - ```bash - - "persistent://apache/pulsar/test-topic" - - ``` - -## Step 3: Use Pulsar client to produce and consume messages - -You can use the Pulsar client to create producers and consumers to produce and consume messages. - -By default, the Pulsar Helm chart exposes the Pulsar cluster through a Kubernetes `LoadBalancer`. In Minikube, you can use the following command to check the proxy service. - -```bash - -kubectl get services -n pulsar | grep pulsar-mini-proxy - -``` - -You will see a similar output as below. - -```bash - -pulsar-mini-proxy LoadBalancer 10.97.240.109 80:32305/TCP,6650:31816/TCP 28m - -``` - -This output tells what are the node ports that Pulsar cluster's binary port and HTTP port are mapped to. The port after `80:` is the HTTP port while the port after `6650:` is the binary port. - -Then you can find the IP address and exposed ports of your Minikube server by running the following command. - -```bash - -minikube service pulsar-mini-proxy -n pulsar - -``` - -**Output** - -```bash - -|-----------|-------------------|-------------|-------------------------| -| NAMESPACE | NAME | TARGET PORT | URL | -|-----------|-------------------|-------------|-------------------------| -| pulsar | pulsar-mini-proxy | http/80 | http://172.17.0.4:32305 | -| | | pulsar/6650 | http://172.17.0.4:31816 | -|-----------|-------------------|-------------|-------------------------| -🏃 Starting tunnel for service pulsar-mini-proxy. -|-----------|-------------------|-------------|------------------------| -| NAMESPACE | NAME | TARGET PORT | URL | -|-----------|-------------------|-------------|------------------------| -| pulsar | pulsar-mini-proxy | | http://127.0.0.1:61853 | -| | | | http://127.0.0.1:61854 | -|-----------|-------------------|-------------|------------------------| - -``` - -At this point, you can get the service URLs to connect to your Pulsar client. Here are URL examples: - -``` - -webServiceUrl=http://127.0.0.1:61853/ -brokerServiceUrl=pulsar://127.0.0.1:61854/ - -``` - -Then you can proceed with the following steps: - -1. Download the Apache Pulsar tarball from the [downloads page](https://pulsar.apache.org/download/). - -2. Decompress the tarball based on your download file. - - ```bash - - tar -xf .tar.gz - - ``` - -3. Expose `PULSAR_HOME`. - - (1) Enter the directory of the decompressed download file. - - (2) Expose `PULSAR_HOME` as the environment variable. - - ```bash - - export PULSAR_HOME=$(pwd) - - ``` - -4. Configure the Pulsar client. - - In the `${PULSAR_HOME}/conf/client.conf` file, replace `webServiceUrl` and `brokerServiceUrl` with the service URLs you get from the above steps. - -5. Create a subscription to consume messages from `apache/pulsar/test-topic`. - - ```bash - - bin/pulsar-client consume -s sub apache/pulsar/test-topic -n 0 - - ``` - -6. Open a new terminal. 
In the new terminal, create a producer and send 10 messages to the `test-topic` topic. - - ```bash - - bin/pulsar-client produce apache/pulsar/test-topic -m "---------hello apache pulsar-------" -n 10 - - ``` - -7. Verify the results. - - - From the producer side - - **Output** - - The messages have been produced successfully. - - ```bash - - 18:15:15.489 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 10 messages successfully produced - - ``` - - - From the consumer side - - **Output** - - At the same time, you can receive the messages as below. - - ```bash - - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - - ``` - -## Step 4: Use Pulsar Manager to manage the cluster - -[Pulsar Manager](administration-pulsar-manager.md) is a web-based GUI management tool for managing and monitoring Pulsar. - -1. By default, the `Pulsar Manager` is exposed as a separate `LoadBalancer`. You can open the Pulsar Manager UI using the following command: - - ```bash - - minikube service -n pulsar pulsar-mini-pulsar-manager - - ``` - -2. The Pulsar Manager UI will be open in your browser. You can use the username `pulsar` and password `pulsar` to log into Pulsar Manager. - -3. In Pulsar Manager UI, you can create an environment. - - - Click `New Environment` button in the top-left corner. - - Type `pulsar-mini` for the field `Environment Name` in the popup window. - - Type `http://pulsar-mini-broker:8080` for the field `Service URL` in the popup window. - - Click `Confirm` button in the popup window. - -4. After successfully creating an environment, you are redirected to the `tenants` page of that environment. Then you can create `tenants`, `namespaces` and `topics` using the Pulsar Manager. - -## Step 5: Use Prometheus and Grafana to monitor cluster - -Grafana is an open-source visualization tool, which can be used for visualizing time series data into dashboards. - -1. By default, the Grafana is exposed as a separate `LoadBalancer`. You can open the Grafana UI using the following command: - - ```bash - - minikube service pulsar-mini-grafana -n pulsar - - ``` - -2. The Grafana UI is open in your browser. You can use the username `pulsar` and password `pulsar` to log into the Grafana Dashboard. - -3. You can view dashboards for different components of a Pulsar cluster. 
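-
-If your environment cannot provision `LoadBalancer` IPs, or the `minikube service` tunnel is inconvenient, a port-forward is a simple alternative. The following is a minimal sketch that assumes the release name `pulsar-mini` and the namespace `pulsar` used throughout this guide:
-
-```bash
-
-# Forward the Grafana service to localhost, then browse to http://localhost:3000
-kubectl port-forward -n pulsar svc/pulsar-mini-grafana 3000:3000
-
-```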
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/getting-started-pulsar.md b/site2/website/versioned_docs/version-2.9.1-deprecated/getting-started-pulsar.md
deleted file mode 100644
index 752590f57b5585..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/getting-started-pulsar.md
+++ /dev/null
@@ -1,72 +0,0 @@
----
-id: pulsar-2.0
-title: Pulsar 2.0
-sidebar_label: "Pulsar 2.0"
-original_id: pulsar-2.0
----
-
-Pulsar 2.0 is a major new release for Pulsar that brings some bold changes to the platform, including [simplified topic names](#topic-names), the addition of the [Pulsar Functions](functions-overview.md) feature, some terminology changes, and more.
-
-## New features in Pulsar 2.0
-
-Feature | Description
-:-------|:-----------
-[Pulsar Functions](functions-overview.md) | A lightweight compute option for Pulsar
-
-## Major changes
-
-There are a few major changes that you should be aware of, as they may significantly impact your day-to-day usage.
-
-### Properties versus tenants
-
-Previously, Pulsar had a concept of properties. A property is essentially the same thing as a tenant, so the "property" terminology has been removed in version 2.0. The [`pulsar-admin properties`](reference-pulsar-admin.md#pulsar-admin) command-line interface, for example, has been replaced with the [`pulsar-admin tenants`](reference-pulsar-admin.md#pulsar-admin-tenants) interface. In some cases the properties terminology is still used but is now considered deprecated and will be removed entirely in a future release.
-
-### Topic names
-
-Prior to version 2.0, *all* Pulsar topics had the following form:
-
-```http
-
-{persistent|non-persistent}://property/cluster/namespace/topic
-
-```
-
-Several important changes have been made in Pulsar 2.0:
-
-* There is no longer a [cluster component](#no-cluster-component)
-* Properties have been [renamed to tenants](#properties-versus-tenants)
-* You can use a [flexible](#flexible-topic-naming) naming system to shorten many topic names
-* `/` is no longer allowed in topic names
-
-#### No cluster component
-
-The cluster component has been removed from topic names. Thus, all topic names now have the following form:
-
-```http
-
-{persistent|non-persistent}://tenant/namespace/topic
-
-```
-
-> Existing topics that use the legacy name format will continue to work without any change, and there are no plans to change that.
-
-
-#### Flexible topic naming
-
-All topic names in Pulsar 2.0 internally have the form shown [above](#no-cluster-component) but you can now use shorthand names in many cases (for the sake of simplicity). The flexible naming system stems from the fact that there is now a default topic type, tenant, and namespace:
-
-Topic aspect | Default
-:------------|:-------
-topic type | `persistent`
-tenant | `public`
-namespace | `default`
-
-The table below shows some example topic name translations that use implicit defaults:
-
-Input topic name | Translated topic name
-:----------------|:---------------------
-`my-topic` | `persistent://public/default/my-topic`
-`my-tenant/my-namespace/my-topic` | `persistent://my-tenant/my-namespace/my-topic`
-
-> For [non-persistent topics](concepts-messaging.md#non-persistent-topics) you'll need to continue to specify the entire topic name, as the default-based rules for persistent topic names don't apply. Thus you cannot use a shorthand name like `non-persistent://my-topic` and would need to use `non-persistent://public/default/my-topic` instead.
-
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/getting-started-standalone.md b/site2/website/versioned_docs/version-2.9.1-deprecated/getting-started-standalone.md
deleted file mode 100644
index 9137ba291421a9..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/getting-started-standalone.md
+++ /dev/null
@@ -1,268 +0,0 @@
----
-id: getting-started-standalone
-title: Set up a standalone Pulsar locally
-sidebar_label: "Run Pulsar locally"
-original_id: getting-started-standalone
----
-
-For local development and testing, you can run Pulsar in standalone mode on your machine. The standalone mode includes a Pulsar broker, the necessary ZooKeeper and BookKeeper components running inside of a single Java Virtual Machine (JVM) process.
-
-> **Pulsar in production?**
-> If you're looking to run a full production Pulsar installation, see the [Deploying a Pulsar instance](deploy-bare-metal.md) guide.
-
-## Install Pulsar standalone
-
-This tutorial guides you through every step of installing Pulsar locally.
-
-### System requirements
-
-Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install 64-bit JRE/JDK 8 or later versions; JRE/JDK 11 is recommended.
-
-:::tip
-
-By default, Pulsar allocates 2G JVM heap memory to start. You can change this in the `conf/pulsar_env.sh` file under `PULSAR_MEM`. These are extra options passed into the JVM.
-
-:::
-
-:::note
-
-Broker is only supported on 64-bit JVM.
-
-:::
-
-### Install Pulsar using binary release
-
-To get started with Pulsar, download a binary tarball release in one of the following ways:
-
-* download from the Apache mirror (Pulsar @pulsar:version@ binary release)
-
-* download from the Pulsar [downloads page](pulsar:download_page_url)
-
-* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
-
-* use [wget](https://www.gnu.org/software/wget):
-
-  ```shell
-
-  $ wget pulsar:binary_release_url
-
-  ```
-
-After you download the tarball, untar it and use the `cd` command to navigate to the resulting directory:

```bash
-
-$ tar xvfz apache-pulsar-@pulsar:version@-bin.tar.gz
-$ cd apache-pulsar-@pulsar:version@
-
-```
-
-#### What your package contains
-
-The Pulsar binary package initially contains the following directories:
-
-Directory | Contains
-:---------|:--------
-`bin` | Pulsar's command-line tools, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/).
-`conf` | Configuration files for Pulsar, including [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more.
-`examples` | A Java JAR file containing [Pulsar Functions](functions-overview.md) example.
-`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files used by Pulsar.
-`licenses` | License files, in the `.txt` form, for various components of the Pulsar [codebase](https://github.com/apache/pulsar).
-
-These directories are created once you begin running Pulsar.
-
-Directory | Contains
-:---------|:--------
-`data` | The data storage directory used by ZooKeeper and BookKeeper.
-`instances` | Artifacts created for [Pulsar Functions](functions-overview.md).
-`logs` | Logs created by the installation.
-
-:::tip
-
-If you want to use builtin connectors and tiered storage offloaders, you can install them according to the following instructions:
-* [Install builtin connectors (optional)](#install-builtin-connectors-optional)
-* [Install tiered storage offloaders (optional)](#install-tiered-storage-offloaders-optional)
-Otherwise, skip this step and perform the next step [Start Pulsar standalone](#start-pulsar-standalone). Pulsar can be successfully installed without installing builtin connectors and tiered storage offloaders.
-
-:::
-
-### Install builtin connectors (optional)
-
-Since the `2.1.0-incubating` release, Pulsar releases a separate binary distribution, containing all the `builtin` connectors.
-To enable those `builtin` connectors, you can download the connectors tarball release in one of the following ways:
-
-* download from the Apache mirror Pulsar IO Connectors @pulsar:version@ release
-
-* download from the Pulsar [downloads page](pulsar:download_page_url)
-
-* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
-
-* use [wget](https://www.gnu.org/software/wget):
-
-  ```shell
-
-  $ wget pulsar:connector_release_url/{connector}-@pulsar:version@.nar
-
-  ```
-
-After you download the NAR file, copy the file to the `connectors` directory in the pulsar directory.
-For example, if you download the `pulsar-io-aerospike-@pulsar:version@.nar` connector file, enter the following commands:
-
-```bash
-
-$ mkdir connectors
-$ mv pulsar-io-aerospike-@pulsar:version@.nar connectors
-
-$ ls connectors
-pulsar-io-aerospike-@pulsar:version@.nar
-...
-
-```
-
-:::note
-
-* If you are running Pulsar in a bare metal cluster, make sure the `connectors` tarball is unzipped in every pulsar directory of the broker (or in every pulsar directory of function-worker if you are running a separate worker cluster for Pulsar Functions).
-* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DC/OS](https://dcos.io/)), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. The `apachepulsar/pulsar-all` image has already bundled [all builtin connectors](io-overview.md#working-with-connectors).
-
-:::
-
-### Install tiered storage offloaders (optional)
-
-:::tip
-
-- Since the `2.2.0` release, Pulsar releases a separate binary distribution, containing the tiered storage offloaders.
-- To enable the tiered storage feature, follow the instructions below; otherwise skip this section.
- -::: - -To get started with [tiered storage offloaders](concepts-tiered-storage.md), you need to download the offloaders tarball release on every broker node in one of the following ways: - -* download from the Apache mirror Pulsar Tiered Storage Offloaders @pulsar:version@ release - -* download from the Pulsar [downloads page](pulsar:download_page_url) - -* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) - -* use [wget](https://www.gnu.org/software/wget): - - ```shell - - $ wget pulsar:offloader_release_url - - ``` - -After you download the tarball, untar the offloaders package and copy the offloaders as `offloaders` -in the pulsar directory: - -```bash - -$ tar xvfz apache-pulsar-offloaders-@pulsar:version@-bin.tar.gz - -// you will find a directory named `apache-pulsar-offloaders-@pulsar:version@` in the pulsar directory -// then copy the offloaders - -$ mv apache-pulsar-offloaders-@pulsar:version@/offloaders offloaders - -$ ls offloaders -tiered-storage-jcloud-@pulsar:version@.nar - -``` - -For more information on how to configure tiered storage, see [Tiered storage cookbook](cookbooks-tiered-storage.md). - -:::note - -* If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's pulsar directory. -* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DC/OS](https://dcos.io/)), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - -::: - -## Start Pulsar standalone - -Once you have an up-to-date local copy of the release, you can start a local cluster using the [`pulsar`](reference-cli-tools.md#pulsar) command, which is stored in the `bin` directory, and specifying that you want to start Pulsar in standalone mode. - -```bash - -$ bin/pulsar standalone - -``` - -If you have started Pulsar successfully, you will see `INFO`-level log messages like this: - -```bash - -21:59:29.327 [DLM-/stream/storage-OrderedScheduler-3-0] INFO org.apache.bookkeeper.stream.storage.impl.sc.StorageContainerImpl - Successfully started storage container (0). -21:59:34.576 [main] INFO org.apache.pulsar.broker.authentication.AuthenticationService - Authentication is disabled -21:59:34.576 [main] INFO org.apache.pulsar.websocket.WebSocketService - Pulsar WebSocket Service started - -``` - -:::tip - -* The service is running on your terminal, which is under your direct control. If you need to run other commands, open a new terminal window. - -::: - -You can also run the service as a background process using the `pulsar-daemon start standalone` command. For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon). -> -> * By default, there is no encryption, authentication, or authorization configured. Apache Pulsar can be accessed from remote server without any authorization. Please do check [Security Overview](security-overview.md) document to secure your deployment. -> -> * When you start a local standalone cluster, a `public/default` [namespace](concepts-messaging.md#namespaces) is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics). 
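-
-Before moving on, you can confirm that the standalone cluster is up by querying the admin REST API on the default web service port `8080`. This is a quick sketch; the output shown assumes a fresh standalone deployment:
-
-```bash
-
-$ curl http://localhost:8080/admin/v2/clusters
-["standalone"]
-
-```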
- -## Use Pulsar standalone - -Pulsar provides a CLI tool called [`pulsar-client`](reference-cli-tools.md#pulsar-client). The pulsar-client tool enables you to consume and produce messages to a Pulsar topic in a running cluster. - -### Consume a message - -The following command consumes a message with the subscription name `first-subscription` to the `my-topic` topic: - -```bash - -$ bin/pulsar-client consume my-topic -s "first-subscription" - -``` - -If the message has been successfully consumed, you will see a confirmation like the following in the `pulsar-client` logs: - -``` - -22:17:16.781 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully consumed - -``` - -:::tip - -As you have noticed that we do not explicitly create the `my-topic` topic, from which we consume the message. When you consume a message from a topic that does not yet exist, Pulsar creates that topic for you automatically. Producing a message to a topic that does not exist will automatically create that topic for you as well. - -::: - -### Produce a message - -The following command produces a message saying `hello-pulsar` to the `my-topic` topic: - -```bash - -$ bin/pulsar-client produce my-topic --messages "hello-pulsar" - -``` - -If the message has been successfully published to the topic, you will see a confirmation like the following in the `pulsar-client` logs: - -``` - -22:21:08.693 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully produced - -``` - -## Stop Pulsar standalone - -Press `Ctrl+C` to stop a local standalone Pulsar. - -:::tip - -If the service runs as a background process using the `pulsar-daemon start standalone` command, then use the `pulsar-daemon stop standalone` command to stop the service. -For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon). - -::: - diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/helm-deploy.md b/site2/website/versioned_docs/version-2.9.1-deprecated/helm-deploy.md deleted file mode 100644 index 0e7815e4f4d90b..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/helm-deploy.md +++ /dev/null @@ -1,434 +0,0 @@ ---- -id: helm-deploy -title: Deploy Pulsar cluster using Helm -sidebar_label: "Deployment" -original_id: helm-deploy ---- - -Before running `helm install`, you need to decide how to run Pulsar. -Options can be specified using Helm's `--set option.name=value` command line option. - -## Select configuration options - -In each section, collect the options that are combined to use with the `helm install` command. - -### Kubernetes namespace - -By default, the Pulsar Helm chart is installed to a namespace called `pulsar`. - -```yaml - -namespace: pulsar - -``` - -To install the Pulsar Helm chart into a different Kubernetes namespace, you can include this option in the `helm install` command. - -```bash - ---set namespace= - -``` - -By default, the Pulsar Helm chart doesn't create the namespace. - -```yaml - -namespaceCreate: false - -``` - -To use the Pulsar Helm chart to create the Kubernetes namespace automatically, you can include this option in the `helm install` command. - -```bash - ---set namespaceCreate=true - -``` - -### Persistence - -By default, the Pulsar Helm chart creates Volume Claims with the expectation that a dynamic provisioner creates the underlying Persistent Volumes. 
- -```yaml - -volumes: - persistence: true - # configure the components to use local persistent volume - # the local provisioner should be installed prior to enable local persistent volume - local_storage: false - -``` - -To use local persistent volumes as the persistent storage for Helm release, you can install the [local storage provisioner](#install-local-storage-provisioner) and include the following option in the `helm install` command. - -```bash - ---set volumes.local_storage=true - -``` - -:::note - -Before installing the production instance of Pulsar, ensure to plan the storage settings to avoid extra storage migration work. Because after initial installation, you must edit Kubernetes objects manually if you want to change storage settings. - -::: - -The Pulsar Helm chart is designed for production use. To use the Pulsar Helm chart in a development environment (such as Minikube), you can disable persistence by including this option in your `helm install` command. - -```bash - ---set volumes.persistence=false - -``` - -### Affinity - -By default, `anti-affinity` is enabled to ensure pods of the same component can run on different nodes. - -```yaml - -affinity: - anti_affinity: true - -``` - -To use the Pulsar Helm chart in a development environment (such as Minikube), you can disable `anti-affinity` by including this option in your `helm install` command. - -```bash - ---set affinity.anti_affinity=false - -``` - -### Components - -The Pulsar Helm chart is designed for production usage. It deploys a production-ready Pulsar cluster, including Pulsar core components and monitoring components. - -You can customize the components to be deployed by turning on/off individual components. - -```yaml - -## Components -## -## Control what components of Apache Pulsar to deploy for the cluster -components: - # zookeeper - zookeeper: true - # bookkeeper - bookkeeper: true - # bookkeeper - autorecovery - autorecovery: true - # broker - broker: true - # functions - functions: true - # proxy - proxy: true - # toolset - toolset: true - # pulsar manager - pulsar_manager: true - -## Monitoring Components -## -## Control what components of the monitoring stack to deploy for the cluster -monitoring: - # monitoring - prometheus - prometheus: true - # monitoring - grafana - grafana: true - -``` - -### Docker images - -The Pulsar Helm chart is designed to enable controlled upgrades. So it can configure independent image versions for components. You can customize the images by setting individual component. 
- -```yaml - -## Images -## -## Control what images to use for each component -images: - zookeeper: - repository: apachepulsar/pulsar-all - tag: 2.5.0 - pullPolicy: IfNotPresent - bookie: - repository: apachepulsar/pulsar-all - tag: 2.5.0 - pullPolicy: IfNotPresent - autorecovery: - repository: apachepulsar/pulsar-all - tag: 2.5.0 - pullPolicy: IfNotPresent - broker: - repository: apachepulsar/pulsar-all - tag: 2.5.0 - pullPolicy: IfNotPresent - proxy: - repository: apachepulsar/pulsar-all - tag: 2.5.0 - pullPolicy: IfNotPresent - functions: - repository: apachepulsar/pulsar-all - tag: 2.5.0 - prometheus: - repository: prom/prometheus - tag: v1.6.3 - pullPolicy: IfNotPresent - grafana: - repository: streamnative/apache-pulsar-grafana-dashboard-k8s - tag: 0.0.4 - pullPolicy: IfNotPresent - pulsar_manager: - repository: apachepulsar/pulsar-manager - tag: v0.1.0 - pullPolicy: IfNotPresent - hasCommand: false - -``` - -### TLS - -The Pulsar Helm chart can be configured to enable TLS (Transport Layer Security) to protect all the traffic between components. Before enabling TLS, you have to provision TLS certificates for the required components. - -#### Provision TLS certificates using cert-manager - -To use the `cert-manager` to provision the TLS certificates, you have to install the [cert-manager](#install-cert-manager) before installing the Pulsar Helm chart. After successfully installing the cert-manager, you can set `certs.internal_issuer.enabled` to `true`. Therefore, the Pulsar Helm chart can use the `cert-manager` to generate `selfsigning` TLS certificates for the configured components. - -```yaml - -certs: - internal_issuer: - enabled: false - component: internal-cert-issuer - type: selfsigning - -``` - -You can also customize the generated TLS certificates by configuring the fields as the following. - -```yaml - -tls: - # common settings for generating certs - common: - # 90d - duration: 2160h - # 15d - renewBefore: 360h - organization: - - pulsar - keySize: 4096 - keyAlgorithm: rsa - keyEncoding: pkcs8 - -``` - -#### Enable TLS - -After installing the `cert-manager`, you can set `tls.enabled` to `true` to enable TLS encryption for the entire cluster. - -```yaml - -tls: - enabled: false - -``` - -You can also configure whether to enable TLS encryption for individual component. - -```yaml - -tls: - # settings for generating certs for proxy - proxy: - enabled: false - cert_name: tls-proxy - # settings for generating certs for broker - broker: - enabled: false - cert_name: tls-broker - # settings for generating certs for bookies - bookie: - enabled: false - cert_name: tls-bookie - # settings for generating certs for zookeeper - zookeeper: - enabled: false - cert_name: tls-zookeeper - # settings for generating certs for recovery - autorecovery: - cert_name: tls-recovery - # settings for generating certs for toolset - toolset: - cert_name: tls-toolset - -``` - -### Authentication - -By default, authentication is disabled. You can set `auth.authentication.enabled` to `true` to enable authentication. -Currently, the Pulsar Helm chart only supports JWT authentication provider. You can set `auth.authentication.provider` to `jwt` to use the JWT authentication provider. - -```yaml - -# Enable or disable broker authentication and authorization. -auth: - authentication: - enabled: false - provider: "jwt" - jwt: - # Enable JWT authentication - # If the token is generated by a secret key, set the usingSecretKey as true. - # If the token is generated by a private key, set the usingSecretKey as false. 
usingSecretKey: false
-  superUsers:
-    # broker to broker communication
-    broker: "broker-admin"
-    # proxy to broker communication
-    proxy: "proxy-admin"
-    # pulsar-admin client to broker/proxy communication
-    client: "admin"
-
-```
-
-To enable authentication, you can run [prepare helm release](#prepare-the-helm-release) to generate token secret keys and tokens for the three super users specified in the `auth.superUsers` field. The generated token keys and super user tokens are uploaded and stored as Kubernetes secrets prefixed with `<pulsar-release-name>-token-`. You can use the following command to find those secrets.
-
-```bash
-
-kubectl get secrets -n <k8s-namespace>
-
-```
-
-### Authorization
-
-By default, authorization is disabled. Authorization can be enabled only when authentication is enabled.
-
-```yaml
-
-auth:
-  authorization:
-    enabled: false
-
-```
-
-To enable authorization, you can include this option in the `helm install` command.
-
-```bash
-
---set auth.authorization.enabled=true
-
-```
-
-### CPU and RAM resource requirements
-
-By default, the resource requests and the number of replicas for the Pulsar components in the Pulsar Helm chart are adequate for a small production deployment. If you deploy a non-production instance, you can reduce the defaults to fit into a smaller cluster.
-
-Once you have collected all of your configuration options, you can install the dependent charts before installing the Pulsar Helm chart.
-
-## Install dependent charts
-
-### Install local storage provisioner
-
-To use local persistent volumes as the persistent storage, you need to install a storage provisioner for [local persistent volumes](https://kubernetes.io/blog/2019/04/04/kubernetes-1.14-local-persistent-volumes-ga/).
-
-One of the easiest ways to get started is to use the local storage provisioner provided along with the Pulsar Helm chart.
-
-```
-
-helm repo add streamnative https://charts.streamnative.io
-helm repo update
-helm install pulsar-storage-provisioner streamnative/local-storage-provisioner
-
-```
-
-### Install cert-manager
-
-The Pulsar Helm chart uses the [cert-manager](https://github.com/jetstack/cert-manager) to provision and manage TLS certificates automatically. To enable TLS encryption for brokers or proxies, you need to install the cert-manager in advance.
-
-For details about how to install the cert-manager, follow the [official instructions](https://cert-manager.io/docs/installation/kubernetes/#installing-with-helm).
-
-Alternatively, we provide a bash script [install-cert-manager.sh](https://github.com/apache/pulsar-helm-chart/blob/master/scripts/cert-manager/install-cert-manager.sh) to install a cert-manager release to the namespace `cert-manager`.
-
-```bash
-
-git clone https://github.com/apache/pulsar-helm-chart
-cd pulsar-helm-chart
-./scripts/cert-manager/install-cert-manager.sh
-
-```
-
-## Prepare Helm release
-
-Once you have installed all the dependent charts and collected all of your configuration options, you can run [prepare_helm_release.sh](https://github.com/apache/pulsar-helm-chart/blob/master/scripts/pulsar/prepare_helm_release.sh) to prepare the Helm release.
-
-```bash
-
-git clone https://github.com/apache/pulsar-helm-chart
-cd pulsar-helm-chart
-./scripts/pulsar/prepare_helm_release.sh -n <k8s-namespace> -k <pulsar-release-name>
-
-```
-
-The `prepare_helm_release.sh` script creates the following resources:
-
-- A Kubernetes namespace for installing the Pulsar release
-- JWT secret keys and tokens for three super users: `broker-admin`, `proxy-admin`, and `admin`. By default, it generates an asymmetric public/private key pair. You can choose to generate a symmetric secret key by specifying `--symmetric`.
-  - `proxy-admin` role is used for proxies to communicate with brokers.
-  - `broker-admin` role is used for inter-broker communications.
-  - `admin` role is used by the admin tools.
-
-## Deploy Pulsar cluster using Helm
-
-Once you have finished the following three things, you can install a Helm release.
-
-- Collect all of your configuration options.
-- Install the dependent charts.
-- Prepare the Helm release.
-
-In this example, the Helm release is named `pulsar`.
-
-```bash
-
-helm repo add apache https://pulsar.apache.org/charts
-helm repo update
-helm install pulsar apache/pulsar \
-  --timeout 10m \
-  --set initialize=true \
-  --set [your configuration options]
-
-```
-
-:::note
-
-For the first deployment, add the `--set initialize=true` option to initialize bookie and Pulsar cluster metadata.
-
-:::
-
-You can also use the `--version <installation version>` option if you want to install a specific version of the Pulsar Helm chart.
-
-## Monitor deployment
-
-A list of installed resources is output once the Pulsar cluster is deployed. This may take 5-10 minutes.
-
-The status of the deployment can be checked by running the `helm status pulsar` command, which can also be done while the deployment is taking place if you run the command in another terminal.
-
-## Access Pulsar cluster
-
-The default values will create a `ClusterIP` for the following resources, which you can use to interact with the cluster.
-
-- Proxy: You can use the IP address to produce and consume messages to the installed Pulsar cluster.
-- Pulsar Manager: You can access the Pulsar Manager UI at `http://<pulsar-manager-service-ip>:9527`.
-- Grafana Dashboard: You can access the Grafana dashboard at `http://<grafana-service-ip>:3000`.
-
-To find the IP addresses of those components, run the following command:
-
-```bash
-
-kubectl get service -n <k8s-namespace>
-
-```
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/helm-install.md b/site2/website/versioned_docs/version-2.9.1-deprecated/helm-install.md
deleted file mode 100644
index 9f81f52e0dab18..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/helm-install.md
+++ /dev/null
@@ -1,38 +0,0 @@
----
-id: helm-install
-title: Install Apache Pulsar using Helm
-sidebar_label: "Install"
-original_id: helm-install
----
-
-Install Apache Pulsar on Kubernetes with the official Pulsar Helm chart.
-
-## Requirements
-
-To deploy Apache Pulsar on Kubernetes, the following are required.
-
-- kubectl 1.14 or higher, compatible with your cluster ([+/- 1 minor release from your cluster](https://kubernetes.io/docs/tasks/tools/install-kubectl/#before-you-begin))
-- Helm v3 (3.0.2 or higher)
-- A Kubernetes cluster, version 1.14 or higher
-
-## Environment setup
-
-Before deploying Pulsar, you need to prepare your environment.
-
-### Tools
-
-Install [`helm`](helm-tools.md) and [`kubectl`](helm-tools.md) on your computer.
-
-## Cloud cluster preparation
-
-To create and connect to the Kubernetes cluster, follow the instructions:
-
-- [Google Kubernetes Engine](helm-prepare.md#google-kubernetes-engine)
-
-## Pulsar deployment
-
-Once the environment is set up and configuration is generated, you can now proceed to the [deployment of Pulsar](helm-deploy.md).
-
-## Pulsar upgrade
-
-To upgrade an existing Kubernetes installation, follow the [upgrade documentation](helm-upgrade.md).
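-
-Before you begin, it is worth confirming that your local tools meet the requirements above. A minimal sketch (the exact output varies by installation):
-
-```bash
-
-# Print the client versions of kubectl and Helm
-kubectl version --client
-helm version --short
-
-```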
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/helm-overview.md b/site2/website/versioned_docs/version-2.9.1-deprecated/helm-overview.md
deleted file mode 100644
index 125f595cbe68a3..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/helm-overview.md
+++ /dev/null
@@ -1,103 +0,0 @@
----
-id: helm-overview
-title: Apache Pulsar Helm Chart
-sidebar_label: "Overview"
-original_id: helm-overview
----
-
-The [Helm chart](https://github.com/apache/pulsar-helm-chart) helps you install Apache Pulsar in a cloud-native environment.
-
-## Introduction
-
-The Apache Pulsar Helm chart provides one of the most convenient ways to operate Pulsar on Kubernetes. With all the required components, the Helm chart is scalable and thus suitable for large-scale deployments.
-
-The Apache Pulsar Helm chart contains all components to support the features and functions that Pulsar delivers. You can install and configure these components separately.
-
-- Pulsar core components:
-  - ZooKeeper
-  - Bookies
-  - Brokers
-  - Function workers
-  - Proxies
-- Control center:
-  - Pulsar Manager
-  - Prometheus
-  - Grafana
-
-Moreover, the Helm chart supports:
-
-- Security
-  - Automatically provisioned TLS certificates, using [Jetstack](https://www.jetstack.io/)'s [cert-manager](https://cert-manager.io/docs/)
-    - self-signed
-    - [Let's Encrypt](https://letsencrypt.org/)
-  - TLS Encryption
-    - Proxy
-    - Broker
-    - Toolset
-    - Bookie
-    - ZooKeeper
-  - Authentication
-    - JWT
-  - Authorization
-- Storage
-  - Non-persistence storage
-  - Persistent volume
-  - Local persistent volumes
-- Functions
-  - Kubernetes Runtime
-  - Process Runtime
-  - Thread Runtime
-- Operations
-  - Independent image versions for all components, enabling controlled upgrades
-
-## Quick start
-
-To get up and running with the Apache Pulsar Helm chart as fast as possible in a **non-production** use case, we provide a [quick start guide](getting-started-helm.md) for Proof of Concept (PoC) deployments.
-
-This guide walks you through deploying the Apache Pulsar Helm chart with default values and features, but it is *not* suitable for deployments in production-ready environments. To deploy the charts in production under sustained load, you can follow the complete [Installation Guide](helm-install.md).
-
-## Troubleshooting
-
-Although we have done our best to make these charts as seamless as possible, troubles occasionally arise that are beyond our control. We have been collecting tips and tricks for troubleshooting common issues. Please check them first before raising an [issue](https://github.com/apache/pulsar/issues/new/choose), and feel free to add your solutions by creating a [Pull Request](https://github.com/apache/pulsar/compare).
-
-## Installation
-
-The Apache Pulsar Helm chart contains all required dependencies.
-
-If you deploy a PoC for testing, we strongly suggest you follow this [Quick Start Guide](getting-started-helm.md) for your first iteration.
-
-1. [Preparation](helm-prepare.md)
-2. [Deployment](helm-deploy.md)
-
-## Upgrading
-
-Once the Apache Pulsar Helm chart is installed, you can use the `helm upgrade` command to configure and update it.
-
-```bash
-
-helm repo add apache https://pulsar.apache.org/charts
-helm repo update
-helm get values <pulsar-release-name> > pulsar.yaml
-helm upgrade <pulsar-release-name> apache/pulsar -f pulsar.yaml
-
-```
-
-For more detailed information, see [Upgrading](helm-upgrade.md).
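-
-After an upgrade, `helm history` shows the recorded revisions of the release, and `helm rollback` can restore a previous one. A minimal sketch, assuming a release named `pulsar`:
-
-```bash
-
-# Inspect the revisions of the release, then roll back to revision 1 if needed
-helm history pulsar
-helm rollback pulsar 1
-
-```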
- -## Uninstallation - -To uninstall the Apache Pulsar Helm chart, run the following command: - -```bash - -helm delete - -``` - -For the purposes of continuity, some Kubernetes objects in these charts cannot be removed by `helm delete` command. It is recommended to *consciously* remove these items, as they affect re-deployment. - -* PVCs for stateful data: remove these items. - - ZooKeeper: This is your metadata. - - BookKeeper: This is your data. - - Prometheus: This is your metrics data, which can be safely removed. -* Secrets: if the secrets are generated by the [prepare release script](https://github.com/apache/pulsar-helm-chart/blob/master/scripts/pulsar/prepare_helm_release.sh), they contain secret keys and tokens. You can use the [cleanup release script](https://github.com/apache/pulsar-helm-chart/blob/master/scripts/pulsar/cleanup_helm_release.sh) to remove these secrets and tokens as needed. diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/helm-prepare.md b/site2/website/versioned_docs/version-2.9.1-deprecated/helm-prepare.md deleted file mode 100644 index e5d56c7e95e34b..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/helm-prepare.md +++ /dev/null @@ -1,80 +0,0 @@ ---- -id: helm-prepare -title: Prepare Kubernetes resources -sidebar_label: "Prepare" -original_id: helm-prepare ---- - -For a fully functional Pulsar cluster, you need a few resources before deploying the Apache Pulsar Helm chart. The following provides instructions to prepare the Kubernetes cluster before deploying the Pulsar Helm chart. - -- [Google Kubernetes Engine](#google-kubernetes-engine) - - [Manual cluster creation](#manual-cluster-creation) - - [Scripted cluster creation](#scripted-cluster-creation) - - [Create cluster with local SSDs](#create-cluster-with-local-ssds) - -## Google Kubernetes Engine - -To get started easier, a script is provided to create the cluster automatically. Alternatively, a cluster can be created manually as well. - -### Manual cluster creation - -To provision a Kubernetes cluster manually, follow the [GKE instructions](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-cluster). - -### Scripted cluster creation - -A [bootstrap script](https://github.com/streamnative/charts/tree/master/scripts/pulsar/gke_bootstrap_script.sh) has been created to automate much of the setup process for users on GCP/GKE. - -The script can: - -1. Create a new GKE cluster. -2. Allow the cluster to modify DNS (Domain Name Server) records. -3. Setup `kubectl`, and connect it to the cluster. - -Google Cloud SDK is a dependency of this script, so ensure it is [set up correctly](helm-tools.md#connect-to-a-gke-cluster) for the script to work. - -The script reads various parameters from environment variables and an argument `up` or `down` for bootstrap and clean-up respectively. - -The following table describes all variables. - -| **Variable** | **Description** | **Default value** | -| ------------ | --------------- | ----------------- | -| PROJECT | ID of your GCP project | No default value. It requires to be set. 
-| CLUSTER_NAME | Name of the GKE cluster | `pulsar-dev` |
-| CONFDIR | Configuration directory to store Kubernetes configuration | ${HOME}/.config/streamnative |
-| INT_NETWORK | IP space to use within this cluster | `default` |
-| LOCAL_SSD_COUNT | Number of local SSDs | 4 |
-| MACHINE_TYPE | Type of machine to use for nodes | `n1-standard-4` |
-| NUM_NODES | Number of nodes to be created in each of the cluster's zones | 4 |
-| PREEMPTIBLE | Create nodes using preemptible VM instances in the new cluster. | false |
-| REGION | Compute region for the cluster | `us-east1` |
-| USE_LOCAL_SSD | Flag to create a cluster with local SSDs | false |
-| ZONE | Compute zone for the cluster | `us-east1-b` |
-| ZONE_EXTENSION | The extension (`a`, `b`, `c`) of the zone name of the cluster | `b` |
-| EXTRA_CREATE_ARGS | Extra arguments passed to the create command | |
-
-Run the script by passing in your desired parameters. It can work with the default parameters except for `PROJECT`, which is required:
-
-```bash
-
-PROJECT=<gcloud project id> scripts/pulsar/gke_bootstrap_script.sh up
-
-```
-
-The script can also be used to clean up the created GKE resources:
-
-```bash
-
-PROJECT=<gcloud project id> scripts/pulsar/gke_bootstrap_script.sh down
-
-```
-
-#### Create cluster with local SSDs
-
-To install the Pulsar Helm chart using local persistent volumes, you need to create a GKE cluster with local SSDs. You can do so by setting `USE_LOCAL_SSD` to `true` in the following command to create a Pulsar cluster with local SSDs.
-
-```bash
-
-PROJECT=<gcloud project id> USE_LOCAL_SSD=true LOCAL_SSD_COUNT=<local-ssd-count> scripts/pulsar/gke_bootstrap_script.sh up
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/helm-tools.md b/site2/website/versioned_docs/version-2.9.1-deprecated/helm-tools.md
deleted file mode 100644
index 6ba89006913b64..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/helm-tools.md
+++ /dev/null
@@ -1,43 +0,0 @@
----
-id: helm-tools
-title: Required tools for deploying Pulsar Helm Chart
-sidebar_label: "Required Tools"
-original_id: helm-tools
----
-
-Before deploying Pulsar to your Kubernetes cluster, there are some tools you must have installed locally.
-
-## kubectl
-
-kubectl is the tool that talks to the Kubernetes API. kubectl 1.14 or higher is required, and it needs to be compatible with your cluster ([+/- 1 minor release from your cluster](https://kubernetes.io/docs/tasks/tools/install-kubectl/#before-you-begin)).
-
-To install kubectl locally, follow the [Kubernetes documentation](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl).
-
-The server version of kubectl cannot be obtained until we connect to a cluster.
-
-## Helm
-
-Helm is the package manager for Kubernetes. The Apache Pulsar Helm Chart is tested and supported with Helm v3.
-
-### Get Helm
-
-You can get Helm from the project's [releases page](https://github.com/helm/helm/releases), or follow other options under the official documentation of [installing Helm](https://helm.sh/docs/intro/install/).
-
-### Next steps
-
-Once kubectl and Helm are configured, you can configure your [Kubernetes cluster](helm-prepare.md).
-
-## Additional information
-
-### Templates
-
-Templating in Helm is done through Golang's [text/template](https://golang.org/pkg/text/template/) and [sprig](https://godoc.org/github.com/Masterminds/sprig).
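-
-As a small illustration (a generic snippet, not taken from the Pulsar chart itself), a template can combine `text/template` pipelines with sprig functions such as `default` and `quote`:
-
-```yaml
-
-# templates/example-configmap.yaml (hypothetical file)
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  # .Release.Name is a built-in object; lower is a sprig function.
-  name: {{ .Release.Name | lower }}-example
-data:
-  # default supplies a fallback when .Values.clusterName is unset.
-  cluster: {{ .Values.clusterName | default "pulsar" | quote }}
-
-```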
-
-For more information about how all the inner workings behave, check these documents:
-
-- [Functions and Pipelines](https://helm.sh/docs/chart_template_guide/functions_and_pipelines/)
-- [Subcharts and Globals](https://helm.sh/docs/chart_template_guide/subcharts_and_globals/)
-
-### Tips and tricks
-
-For additional information on developing with Helm, check the [tips and tricks section](https://helm.sh/docs/howto/charts_tips_and_tricks/) in the Helm documentation.
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/helm-upgrade.md b/site2/website/versioned_docs/version-2.9.1-deprecated/helm-upgrade.md
deleted file mode 100644
index 7d671e6bfb3c10..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/helm-upgrade.md
+++ /dev/null
@@ -1,43 +0,0 @@
----
-id: helm-upgrade
-title: Upgrade Pulsar Helm release
-sidebar_label: "Upgrade"
-original_id: helm-upgrade
----
-
-Before upgrading your Pulsar installation, check the change log for the specific release you want to upgrade to and look for any release notes that pertain to the new Pulsar Helm chart version.
-
-We also recommend that you provide all values using the `helm upgrade --set key=value` syntax or a `-f values.yaml` file instead of using `--reuse-values`, because some of the current values might be deprecated.
-
-:::note
-
-You can retrieve your previous `--set` arguments cleanly, with `helm get values <release-name>`. If you direct this into a file (`helm get values <release-name> > pulsar.yaml`), you can safely pass this file through `-f`, namely `helm upgrade <release-name> apache/pulsar -f pulsar.yaml`. This safely replaces the behavior of `--reuse-values`.
-
-:::
-
-## Steps
-
-To upgrade Apache Pulsar to a newer version, follow these steps:
-
-1. Check the change log for the specific version you would like to upgrade to.
-2. Go through the [deployment documentation](helm-deploy.md) step by step.
-3. Extract your previous `--set` arguments with the following command.
-
-   ```bash
-
-   helm get values <release-name> > pulsar.yaml
-
-   ```
-
-4. Decide on all the values you need to set.
-5. Perform the upgrade, with all the `--set` arguments extracted in step 3.
-
-   ```bash
-
-   helm upgrade <release-name> apache/pulsar \
-     --version <new version> \
-     -f pulsar.yaml \
-     --set ...
-
-   ```
-
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/io-aerospike-sink.md b/site2/website/versioned_docs/version-2.9.1-deprecated/io-aerospike-sink.md
deleted file mode 100644
index 63d7338a3ba91c..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/io-aerospike-sink.md
+++ /dev/null
@@ -1,26 +0,0 @@
----
-id: io-aerospike-sink
-title: Aerospike sink connector
-sidebar_label: "Aerospike sink connector"
-original_id: io-aerospike-sink
----
-
-The Aerospike sink connector pulls messages from Pulsar topics and persists them to Aerospike clusters.
-
-## Configuration
-
-The configuration of the Aerospike sink connector has the following properties.
-
-### Property
-
-| Name | Type | Required | Default | Description |
-|------|----------|----------|---------|-------------|
-| `seedHosts` |String| true | No default value| The comma-separated list of one or more Aerospike cluster hosts.

    Each host can be specified as a valid IP address or hostname followed by an optional port number. | -| `keyspace` | String| true |No default value |The Aerospike namespace. | -| `columnName` | String | true| No default value|The Aerospike column name. | -|`userName`|String|false|NULL|The Aerospike username.| -|`password`|String|false|NULL|The Aerospike password.| -| `keySet` | String|false |NULL | The Aerospike set name. | -| `maxConcurrentRequests` |int| false | 100 | The maximum number of concurrent Aerospike transactions that a sink can open. | -| `timeoutMs` | int|false | 100 | This property controls `socketTimeout` and `totalTimeout` for Aerospike transactions. | -| `retries` | int|false | 1 |The maximum number of retries before aborting a write transaction to Aerospike. | diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/io-canal-source.md b/site2/website/versioned_docs/version-2.9.1-deprecated/io-canal-source.md deleted file mode 100644 index d1fd43bb0f74e4..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/io-canal-source.md +++ /dev/null @@ -1,235 +0,0 @@ ---- -id: io-canal-source -title: Canal source connector -sidebar_label: "Canal source connector" -original_id: io-canal-source ---- - -The Canal source connector pulls messages from MySQL to Pulsar topics. - -## Configuration - -The configuration of Canal source connector has the following properties. - -### Property - -| Name | Required | Default | Description | -|------|----------|---------|-------------| -| `username` | true | None | Canal server account (not MySQL).| -| `password` | true | None | Canal server password (not MySQL). | -|`destination`|true|None|Source destination that Canal source connector connects to. -| `singleHostname` | false | None | Canal server address.| -| `singlePort` | false | None | Canal server port.| -| `cluster` | true | false | Whether to enable cluster mode based on Canal server configuration or not.

  - true: **cluster** mode.
    If set to true, it talks to `zkServers` to figure out the actual database host.

  - false: **standalone** mode.
    If set to false, it connects to the database specified by `singleHostname` and `singlePort`. |
-| `zkServers` | true | None | Address and port of the ZooKeeper that the Canal source connector talks to in order to figure out the actual database host.|
-| `batchSize` | false | 1000 | Batch size to fetch from Canal. |
-
-### Example
-
-Before using the Canal connector, you can create a configuration file through one of the following methods.
-
-* JSON
-
-  ```json
-
-  {
-    "zkServers": "127.0.0.1:2181",
-    "batchSize": "5120",
-    "destination": "example",
-    "username": "",
-    "password": "",
-    "cluster": false,
-    "singleHostname": "127.0.0.1",
-    "singlePort": "11111"
-  }
-
-  ```
-
-* YAML
-
-  You can create a YAML file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/canal/src/main/resources/canal-mysql-source-config.yaml) below to your YAML file.
-
-  ```yaml
-
-  configs:
-    zkServers: "127.0.0.1:2181"
-    batchSize: 5120
-    destination: "example"
-    username: ""
-    password: ""
-    cluster: false
-    singleHostname: "127.0.0.1"
-    singlePort: 11111
-
-  ```
-
-## Usage
-
-Here is an example of storing MySQL data using the configuration file as above.
-
-1. Start a MySQL server.
-
-   ```bash
-
-   $ docker pull mysql:5.7
-   $ docker run -d -it --rm --name pulsar-mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=canal -e MYSQL_USER=mysqluser -e MYSQL_PASSWORD=mysqlpw mysql:5.7
-
-   ```
-
-2. Create a configuration file `mysqld.cnf`.
-
-   ```bash
-
-   [mysqld]
-   pid-file = /var/run/mysqld/mysqld.pid
-   socket = /var/run/mysqld/mysqld.sock
-   datadir = /var/lib/mysql
-   #log-error = /var/log/mysql/error.log
-   # By default we only accept connections from localhost
-   #bind-address = 127.0.0.1
-   # Disabling symbolic-links is recommended to prevent assorted security risks
-   symbolic-links=0
-   log-bin=mysql-bin
-   binlog-format=ROW
-   server_id=1
-
-   ```
-
-3. Copy the configuration file `mysqld.cnf` to the MySQL server.
-
-   ```bash
-
-   $ docker cp mysqld.cnf pulsar-mysql:/etc/mysql/mysql.conf.d/
-
-   ```
-
-4. Restart the MySQL server.
-
-   ```bash
-
-   $ docker restart pulsar-mysql
-
-   ```
-
-5. Create a test database in the MySQL server.
-
-   ```bash
-
-   $ docker exec -it pulsar-mysql /bin/bash
-   $ mysql -h 127.0.0.1 -uroot -pcanal -e 'create database test;'
-
-   ```
-
-6. Start a Canal server and connect it to the MySQL server.
-
-   ```
-
-   $ docker pull canal/canal-server:v1.1.2
-   $ docker run -d -it --link pulsar-mysql -e canal.auto.scan=false -e canal.destinations=test -e canal.instance.master.address=pulsar-mysql:3306 -e canal.instance.dbUsername=root -e canal.instance.dbPassword=canal -e canal.instance.connectionCharset=UTF-8 -e canal.instance.tsdb.enable=true -e canal.instance.gtidon=false --name=pulsar-canal-server -p 8000:8000 -p 2222:2222 -p 11111:11111 -p 11112:11112 -m 4096m canal/canal-server:v1.1.2
-
-   ```
-
-7. Start Pulsar standalone.
-
-   ```bash
-
-   $ docker pull apachepulsar/pulsar:2.3.0
-   $ docker run -d -it --link pulsar-canal-server -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-standalone apachepulsar/pulsar:2.3.0 bin/pulsar standalone
-
-   ```
-
-8. Modify the configuration file `canal-mysql-source-config.yaml`.
-
-   ```yaml
-
-   configs:
-     zkServers: ""
-     batchSize: "5120"
-     destination: "test"
-     username: ""
-     password: ""
-     cluster: false
-     singleHostname: "pulsar-canal-server"
-     singlePort: "11111"
-
-   ```
-
-9. Create a consumer file `pulsar-client.py`.
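-
-   The following minimal consumer subscribes to the `my-topic` topic that the Canal source writes to, prints each message it receives, and acknowledges it.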
- - ```python - - import pulsar - - client = pulsar.Client('pulsar://localhost:6650') - consumer = client.subscribe('my-topic', - subscription_name='my-sub') - - while True: - msg = consumer.receive() - print("Received message: '%s'" % msg.data()) - consumer.acknowledge(msg) - - client.close() - - ``` - -10. Copy the configuration file `canal-mysql-source-config.yaml` and the consumer file `pulsar-client.py` to Pulsar server. - - ```bash - - $ docker cp canal-mysql-source-config.yaml pulsar-standalone:/pulsar/conf/ - $ docker cp pulsar-client.py pulsar-standalone:/pulsar/ - - ``` - -11. Download a Canal connector and start it. - - ```bash - - $ docker exec -it pulsar-standalone /bin/bash - $ wget https://archive.apache.org/dist/pulsar/pulsar-2.3.0/connectors/pulsar-io-canal-2.3.0.nar -P connectors - $ ./bin/pulsar-admin source localrun \ - --archive ./connectors/pulsar-io-canal-2.3.0.nar \ - --classname org.apache.pulsar.io.canal.CanalStringSource \ - --tenant public \ - --namespace default \ - --name canal \ - --destination-topic-name my-topic \ - --source-config-file /pulsar/conf/canal-mysql-source-config.yaml \ - --parallelism 1 - - ``` - -12. Consume data from MySQL. - - ```bash - - $ docker exec -it pulsar-standalone /bin/bash - $ python pulsar-client.py - - ``` - -13. Open another window to log in MySQL server. - - ```bash - - $ docker exec -it pulsar-mysql /bin/bash - $ mysql -h 127.0.0.1 -uroot -pcanal - - ``` - -14. Create a table, and insert, delete, and update data in MySQL server. - - ```bash - - mysql> use test; - mysql> show tables; - mysql> CREATE TABLE IF NOT EXISTS `test_table`(`test_id` INT UNSIGNED AUTO_INCREMENT,`test_title` VARCHAR(100) NOT NULL, - `test_author` VARCHAR(40) NOT NULL, - `test_date` DATE,PRIMARY KEY ( `test_id` ))ENGINE=InnoDB DEFAULT CHARSET=utf8; - mysql> INSERT INTO test_table (test_title, test_author, test_date) VALUES("a", "b", NOW()); - mysql> UPDATE test_table SET test_title='c' WHERE test_title='a'; - mysql> DELETE FROM test_table WHERE test_title='c'; - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/io-cassandra-sink.md b/site2/website/versioned_docs/version-2.9.1-deprecated/io-cassandra-sink.md deleted file mode 100644 index b27a754f49e182..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/io-cassandra-sink.md +++ /dev/null @@ -1,57 +0,0 @@ ---- -id: io-cassandra-sink -title: Cassandra sink connector -sidebar_label: "Cassandra sink connector" -original_id: io-cassandra-sink ---- - -The Cassandra sink connector pulls messages from Pulsar topics to Cassandra clusters. - -## Configuration - -The configuration of the Cassandra sink connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `roots` | String|true | " " (empty string) | A comma-separated list of Cassandra hosts to connect to.| -| `keyspace` | String|true| " " (empty string)| The key space used for writing pulsar messages.

    **Note: `keyspace` should be created prior to a Cassandra sink.**| -| `keyname` | String|true| " " (empty string)| The key name of the Cassandra column family.

    The column is used for storing Pulsar message keys.

    If a Pulsar message doesn't have any key associated, the message value is used as the key. | -| `columnFamily` | String|true| " " (empty string)| The Cassandra column family name.

    **Note: `columnFamily` should be created prior to a Cassandra sink.**| -| `columnName` | String|true| " " (empty string) | The column name of the Cassandra column family.

    The column is used for storing Pulsar message values. | - -### Example - -Before using the Cassandra sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "roots": "localhost:9042", - "keyspace": "pulsar_test_keyspace", - "columnFamily": "pulsar_test_table", - "keyname": "key", - "columnName": "col" - } - - ``` - -* YAML - - ``` - - configs: - roots: "localhost:9042" - keyspace: "pulsar_test_keyspace" - columnFamily: "pulsar_test_table" - keyname: "key" - columnName: "col" - - ``` - -## Usage - -For more information about **how to connect Pulsar with Cassandra**, see [here](io-quickstart.md#connect-pulsar-to-apache-cassandra). diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/io-cdc-debezium.md b/site2/website/versioned_docs/version-2.9.1-deprecated/io-cdc-debezium.md deleted file mode 100644 index 293ccf2b35e8aa..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/io-cdc-debezium.md +++ /dev/null @@ -1,543 +0,0 @@ ---- -id: io-cdc-debezium -title: Debezium source connector -sidebar_label: "Debezium source connector" -original_id: io-cdc-debezium ---- - -The Debezium source connector pulls messages from MySQL or PostgreSQL -and persists the messages to Pulsar topics. - -## Configuration - -The configuration of Debezium source connector has the following properties. - -| Name | Required | Default | Description | -|------|----------|---------|-------------| -| `task.class` | true | null | A source task class that implemented in Debezium. | -| `database.hostname` | true | null | The address of a database server. | -| `database.port` | true | null | The port number of a database server.| -| `database.user` | true | null | The name of a database user that has the required privileges. | -| `database.password` | true | null | The password for a database user that has the required privileges. | -| `database.server.id` | true | null | The connector’s identifier that must be unique within a database cluster and similar to the database’s server-id configuration property. | -| `database.server.name` | true | null | The logical name of a database server/cluster, which forms a namespace and it is used in all the names of Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the Avro Connector is used. | -| `database.whitelist` | false | null | A list of all databases hosted by this server which is monitored by the connector.

    This is optional, and there are other properties for listing databases and tables to include or exclude from monitoring. | -| `key.converter` | true | null | The converter provided by Kafka Connect to convert record key. | -| `value.converter` | true | null | The converter provided by Kafka Connect to convert record value. | -| `database.history` | true | null | The name of the database history class. | -| `database.history.pulsar.topic` | true | null | The name of the database history topic where the connector writes and recovers DDL statements.

    **Note: this topic is for internal use only and should not be used by consumers.** | -| `database.history.pulsar.service.url` | true | null | Pulsar cluster service URL for history topic. | -| `pulsar.service.url` | true | null | Pulsar cluster service URL for the offset topic used in Debezium. You can use the `bin/pulsar-admin --admin-url http://pulsar:8080 sources localrun --source-config-file configs/pg-pulsar-config.yaml` command to point to the target Pulsar cluster.| -| `offset.storage.topic` | true | null | Record the last committed offsets that the connector successfully completes. | -| `mongodb.hosts` | true | null | The comma-separated list of hostname and port pairs (in the form 'host' or 'host:port') of the MongoDB servers in the replica set. The list contains a single hostname and a port pair. If mongodb.members.auto.discover is set to false, the host and port pair are prefixed with the replica set name (e.g., rs0/localhost:27017). | -| `mongodb.name` | true | null | A unique name that identifies the connector and/or MongoDB replica set or shared cluster that this connector monitors. Each server should be monitored by at most one Debezium connector, since this server name prefixes all persisted Kafka topics emanating from the MongoDB replica set or cluster. | -| `mongodb.user` | true | null | Name of the database user to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. | -| `mongodb.password` | true | null | Password to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. | -| `mongodb.task.id` | true | null | The taskId of the MongoDB connector that attempts to use a separate task for each replica set. | - - - -## Example of MySQL - -You need to create a configuration file before using the Pulsar Debezium connector. - -### Configuration - -You can use one of the following methods to create a configuration file. - -* JSON - - ```json - - { - "database.hostname": "localhost", - "database.port": "3306", - "database.user": "debezium", - "database.password": "dbz", - "database.server.id": "184054", - "database.server.name": "dbserver1", - "database.whitelist": "inventory", - "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory", - "database.history.pulsar.topic": "history-topic", - "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650", - "key.converter": "org.apache.kafka.connect.json.JsonConverter", - "value.converter": "org.apache.kafka.connect.json.JsonConverter", - "pulsar.service.url": "pulsar://127.0.0.1:6650", - "offset.storage.topic": "offset-topic" - } - - ``` - -* YAML - - You can create a `debezium-mysql-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/resources/debezium-mysql-source-config.yaml) below to the `debezium-mysql-source-config.yaml` file. 
- - ```yaml - - tenant: "public" - namespace: "default" - name: "debezium-mysql-source" - topicName: "debezium-mysql-topic" - archive: "connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar" - parallelism: 1 - - configs: - - ## config for mysql, docker image: debezium/example-mysql:0.8 - database.hostname: "localhost" - database.port: "3306" - database.user: "debezium" - database.password: "dbz" - database.server.id: "184054" - database.server.name: "dbserver1" - database.whitelist: "inventory" - database.history: "org.apache.pulsar.io.debezium.PulsarDatabaseHistory" - database.history.pulsar.topic: "history-topic" - database.history.pulsar.service.url: "pulsar://127.0.0.1:6650" - - ## KEY_CONVERTER_CLASS_CONFIG, VALUE_CONVERTER_CLASS_CONFIG - key.converter: "org.apache.kafka.connect.json.JsonConverter" - value.converter: "org.apache.kafka.connect.json.JsonConverter" - - ## PULSAR_SERVICE_URL_CONFIG - pulsar.service.url: "pulsar://127.0.0.1:6650" - - ## OFFSET_STORAGE_TOPIC_CONFIG - offset.storage.topic: "offset-topic" - - ``` - -### Usage - -This example shows how to change the data of a MySQL table using the Pulsar Debezium connector. - -1. Start a MySQL server with a database from which Debezium can capture changes. - - ```bash - - $ docker run -it --rm \ - --name mysql \ - -p 3306:3306 \ - -e MYSQL_ROOT_PASSWORD=debezium \ - -e MYSQL_USER=mysqluser \ - -e MYSQL_PASSWORD=mysqlpw debezium/example-mysql:0.8 - - ``` - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - -3. Start the Pulsar Debezium connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - Make sure the nar file is available at `connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar`. - - ```bash - - $ bin/pulsar-admin source localrun \ - --archive connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar \ - --name debezium-mysql-source --destination-topic-name debezium-mysql-topic \ - --tenant public \ - --namespace default \ - --source-config '{"database.hostname": "localhost","database.port": "3306","database.user": "debezium","database.password": "dbz","database.server.id": "184054","database.server.name": "dbserver1","database.whitelist": "inventory","database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory","database.history.pulsar.topic": "history-topic","database.history.pulsar.service.url": "pulsar://127.0.0.1:6650","key.converter": "org.apache.kafka.connect.json.JsonConverter","value.converter": "org.apache.kafka.connect.json.JsonConverter","pulsar.service.url": "pulsar://127.0.0.1:6650","offset.storage.topic": "offset-topic"}' - - ``` - - * Use the **YAML** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin source localrun \ - --source-config-file debezium-mysql-source-config.yaml - - ``` - -4. Subscribe the topic _sub-products_ for the table _inventory.products_. - - ```bash - - $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0 - - ``` - -5. Start a MySQL client in docker. - - ```bash - - $ docker run -it --rm \ - --name mysqlterm \ - --link mysql \ - --rm mysql:5.7 sh \ - -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"' - - ``` - -6. A MySQL client pops out. - - Use the following commands to change the data of the table _products_. 
- - ``` - - mysql> use inventory; - mysql> show tables; - mysql> SELECT * FROM products; - mysql> UPDATE products SET name='1111111111' WHERE id=101; - mysql> UPDATE products SET name='1111111111' WHERE id=107; - - ``` - - In the terminal window of subscribing topic, you can find the data changes have been kept in the _sub-products_ topic. - -## Example of PostgreSQL - -You need to create a configuration file before using the Pulsar Debezium connector. - -### Configuration - -You can use one of the following methods to create a configuration file. - -* JSON - - ```json - - { - "database.hostname": "localhost", - "database.port": "5432", - "database.user": "postgres", - "database.password": "postgres", - "database.dbname": "postgres", - "database.server.name": "dbserver1", - "schema.whitelist": "inventory", - "pulsar.service.url": "pulsar://127.0.0.1:6650" - } - - ``` - -* YAML - - You can create a `debezium-postgres-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/resources/debezium-postgres-source-config.yaml) below to the `debezium-postgres-source-config.yaml` file. - - ```yaml - - tenant: "public" - namespace: "default" - name: "debezium-postgres-source" - topicName: "debezium-postgres-topic" - archive: "connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar" - parallelism: 1 - - configs: - - ## config for pg, docker image: debezium/example-postgress:0.8 - database.hostname: "localhost" - database.port: "5432" - database.user: "postgres" - database.password: "postgres" - database.dbname: "postgres" - database.server.name: "dbserver1" - schema.whitelist: "inventory" - - ## PULSAR_SERVICE_URL_CONFIG - pulsar.service.url: "pulsar://127.0.0.1:6650" - - ``` - -### Usage - -This example shows how to change the data of a PostgreSQL table using the Pulsar Debezium connector. - - -1. Start a PostgreSQL server with a database from which Debezium can capture changes. - - ```bash - - $ docker pull debezium/example-postgres:0.8 - $ docker run -d -it --rm --name pulsar-postgresql -p 5432:5432 debezium/example-postgres:0.8 - - ``` - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - -3. Start the Pulsar Debezium connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - Make sure the nar file is available at `connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar`. - - ```bash - - $ bin/pulsar-admin source localrun \ - --archive connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar \ - --name debezium-postgres-source \ - --destination-topic-name debezium-postgres-topic \ - --tenant public \ - --namespace default \ - --source-config '{"database.hostname": "localhost","database.port": "5432","database.user": "postgres","database.password": "postgres","database.dbname": "postgres","database.server.name": "dbserver1","schema.whitelist": "inventory","pulsar.service.url": "pulsar://127.0.0.1:6650"}' - - ``` - - * Use the **YAML** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin source localrun \ - --source-config-file debezium-postgres-source-config.yaml - - ``` - -4. Subscribe the topic _sub-products_ for the _inventory.products_ table. - - ``` - - $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0 - - ``` - -5. Start a PostgreSQL client in docker. 
- - ```bash - - $ docker exec -it pulsar-postgresql /bin/bash - - ``` - -6. A PostgreSQL client pops out. - - Use the following commands to change the data of the table _products_. - - ``` - - psql -U postgres postgres - postgres=# \c postgres; - You are now connected to database "postgres" as user "postgres". - postgres=# SET search_path TO inventory; - SET - postgres=# select * from products; - id | name | description | weight - -----+--------------------+---------------------------------------------------------+-------- - 102 | car battery | 12V car battery | 8.1 - 103 | 12-pack drill bits | 12-pack of drill bits with sizes ranging from #40 to #3 | 0.8 - 104 | hammer | 12oz carpenter's hammer | 0.75 - 105 | hammer | 14oz carpenter's hammer | 0.875 - 106 | hammer | 16oz carpenter's hammer | 1 - 107 | rocks | box of assorted rocks | 5.3 - 108 | jacket | water resistent black wind breaker | 0.1 - 109 | spare tire | 24 inch spare tire | 22.2 - 101 | 1111111111 | Small 2-wheel scooter | 3.14 - (9 rows) - - postgres=# UPDATE products SET name='1111111111' WHERE id=107; - UPDATE 1 - - ``` - - In the terminal window of subscribing topic, you can receive the following messages. - - ```bash - - ----- got message ----- - {"schema":{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":107}}�{"schema":{"type":"struct","fields":[{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":false,"field":"name"},{"type":"string","optional":true,"field":"description"},{"type":"double","optional":true,"field":"weight"}],"optional":true,"name":"dbserver1.inventory.products.Value","field":"before"},{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":false,"field":"name"},{"type":"string","optional":true,"field":"description"},{"type":"double","optional":true,"field":"weight"}],"optional":true,"name":"dbserver1.inventory.products.Value","field":"after"},{"type":"struct","fields":[{"type":"string","optional":true,"field":"version"},{"type":"string","optional":true,"field":"connector"},{"type":"string","optional":false,"field":"name"},{"type":"string","optional":false,"field":"db"},{"type":"int64","optional":true,"field":"ts_usec"},{"type":"int64","optional":true,"field":"txId"},{"type":"int64","optional":true,"field":"lsn"},{"type":"string","optional":true,"field":"schema"},{"type":"string","optional":true,"field":"table"},{"type":"boolean","optional":true,"default":false,"field":"snapshot"},{"type":"boolean","optional":true,"field":"last_snapshot_record"}],"optional":false,"name":"io.debezium.connector.postgresql.Source","field":"source"},{"type":"string","optional":false,"field":"op"},{"type":"int64","optional":true,"field":"ts_ms"}],"optional":false,"name":"dbserver1.inventory.products.Envelope"},"payload":{"before":{"id":107,"name":"rocks","description":"box of assorted rocks","weight":5.3},"after":{"id":107,"name":"1111111111","description":"box of assorted rocks","weight":5.3},"source":{"version":"0.9.2.Final","connector":"postgresql","name":"dbserver1","db":"postgres","ts_usec":1559208957661080,"txId":577,"lsn":23862872,"schema":"inventory","table":"products","snapshot":false,"last_snapshot_record":null},"op":"u","ts_ms":1559208957692}} - - ``` - -## Example of MongoDB - -You need to create a configuration file before using the Pulsar Debezium connector. 
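-
-### Configuration
-
-You can use one of the following methods to create a configuration file.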
- -* JSON - - ```json - - { - "mongodb.hosts": "rs0/mongodb:27017", - "mongodb.name": "dbserver1", - "mongodb.user": "debezium", - "mongodb.password": "dbz", - "mongodb.task.id": "1", - "database.whitelist": "inventory", - "pulsar.service.url": "pulsar://127.0.0.1:6650" - } - - ``` - -* YAML - - You can create a `debezium-mongodb-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mongodb/src/main/resources/debezium-mongodb-source-config.yaml) below to the `debezium-mongodb-source-config.yaml` file. - - ```yaml - - tenant: "public" - namespace: "default" - name: "debezium-mongodb-source" - topicName: "debezium-mongodb-topic" - archive: "connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar" - parallelism: 1 - - configs: - - ## config for pg, docker image: debezium/example-postgress:0.10 - mongodb.hosts: "rs0/mongodb:27017", - mongodb.name: "dbserver1", - mongodb.user: "debezium", - mongodb.password: "dbz", - mongodb.task.id: "1", - database.whitelist: "inventory", - - ## PULSAR_SERVICE_URL_CONFIG - pulsar.service.url: "pulsar://127.0.0.1:6650" - - ``` - -### Usage - -This example shows how to change the data of a MongoDB table using the Pulsar Debezium connector. - - -1. Start a MongoDB server with a database from which Debezium can capture changes. - - ```bash - - $ docker pull debezium/example-mongodb:0.10 - $ docker run -d -it --rm --name pulsar-mongodb -e MONGODB_USER=mongodb -e MONGODB_PASSWORD=mongodb -p 27017:27017 debezium/example-mongodb:0.10 - - ``` - - Use the following commands to initialize the data. - - ``` bash - - ./usr/local/bin/init-inventory.sh - - ``` - - If the local host cannot access the container network, you can update the file ```/etc/hosts``` and add a rule ```127.0.0.1 6 f114527a95f```. f114527a95f is container id, you can try to get by ```docker ps -a``` - - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - -3. Start the Pulsar Debezium connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - Make sure the nar file is available at `connectors/pulsar-io-mongodb-@pulsar:version@.nar`. - - ```bash - - $ bin/pulsar-admin source localrun \ - --archive connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar \ - --name debezium-mongodb-source \ - --destination-topic-name debezium-mongodb-topic \ - --tenant public \ - --namespace default \ - --source-config '{"mongodb.hosts": "rs0/mongodb:27017","mongodb.name": "dbserver1","mongodb.user": "debezium","mongodb.password": "dbz","mongodb.task.id": "1","database.whitelist": "inventory","pulsar.service.url": "pulsar://127.0.0.1:6650"}' - - ``` - - * Use the **YAML** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin source localrun \ - --source-config-file debezium-mongodb-source-config.yaml - - ``` - -4. Subscribe the topic _sub-products_ for the _inventory.products_ table. - - ``` - - $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0 - - ``` - -5. Start a MongoDB client in docker. - - ```bash - - $ docker exec -it pulsar-mongodb /bin/bash - - ``` - -6. A MongoDB client pops out. - - ```bash - - mongo -u debezium -p dbz --authenticationDatabase admin localhost:27017/inventory - db.products.update({"_id":NumberLong(104)},{$set:{weight:1.25}}) - - ``` - - In the terminal window of subscribing topic, you can receive the following messages. 
- - ```bash - - ----- got message ----- - {"schema":{"type":"struct","fields":[{"type":"string","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":"104"}}, value = {"schema":{"type":"struct","fields":[{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"after"},{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"patch"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"version"},{"type":"string","optional":false,"field":"connector"},{"type":"string","optional":false,"field":"name"},{"type":"int64","optional":false,"field":"ts_ms"},{"type":"string","optional":true,"name":"io.debezium.data.Enum","version":1,"parameters":{"allowed":"true,last,false"},"default":"false","field":"snapshot"},{"type":"string","optional":false,"field":"db"},{"type":"string","optional":false,"field":"rs"},{"type":"string","optional":false,"field":"collection"},{"type":"int32","optional":false,"field":"ord"},{"type":"int64","optional":true,"field":"h"}],"optional":false,"name":"io.debezium.connector.mongo.Source","field":"source"},{"type":"string","optional":true,"field":"op"},{"type":"int64","optional":true,"field":"ts_ms"}],"optional":false,"name":"dbserver1.inventory.products.Envelope"},"payload":{"after":"{\"_id\": {\"$numberLong\": \"104\"},\"name\": \"hammer\",\"description\": \"12oz carpenter's hammer\",\"weight\": 1.25,\"quantity\": 4}","patch":null,"source":{"version":"0.10.0.Final","connector":"mongodb","name":"dbserver1","ts_ms":1573541905000,"snapshot":"true","db":"inventory","rs":"rs0","collection":"products","ord":1,"h":4983083486544392763},"op":"r","ts_ms":1573541909761}}. - - ``` - -## FAQ - -### Debezium postgres connector will hang when create snap - -```$xslt - -#18 prio=5 os_prio=31 tid=0x00007fd83096f800 nid=0xa403 waiting on condition [0x000070000f534000] - java.lang.Thread.State: WAITING (parking) - at sun.misc.Unsafe.park(Native Method) - - parking to wait for <0x00000007ab025a58> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject) - at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) - at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) - at java.util.concurrent.LinkedBlockingDeque.putLast(LinkedBlockingDeque.java:396) - at java.util.concurrent.LinkedBlockingDeque.put(LinkedBlockingDeque.java:649) - at io.debezium.connector.base.ChangeEventQueue.enqueue(ChangeEventQueue.java:132) - at io.debezium.connector.postgresql.PostgresConnectorTask$Lambda$203/385424085.accept(Unknown Source) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.sendCurrentRecord(RecordsSnapshotProducer.java:402) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.readTable(RecordsSnapshotProducer.java:321) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$takeSnapshot$6(RecordsSnapshotProducer.java:226) - at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$240/1347039967.accept(Unknown Source) - at io.debezium.jdbc.JdbcConnection.queryWithBlockingConsumer(JdbcConnection.java:535) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.takeSnapshot(RecordsSnapshotProducer.java:224) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$start$0(RecordsSnapshotProducer.java:87) - at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$206/589332928.run(Unknown Source) - at 
java.util.concurrent.CompletableFuture.uniRun(CompletableFuture.java:705) - at java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:717) - at java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2010) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.start(RecordsSnapshotProducer.java:87) - at io.debezium.connector.postgresql.PostgresConnectorTask.start(PostgresConnectorTask.java:126) - at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:47) - at org.apache.pulsar.io.kafka.connect.KafkaConnectSource.open(KafkaConnectSource.java:127) - at org.apache.pulsar.io.debezium.DebeziumSource.open(DebeziumSource.java:100) - at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupInput(JavaInstanceRunnable.java:690) - at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupJavaInstance(JavaInstanceRunnable.java:200) - at org.apache.pulsar.functions.instance.JavaInstanceRunnable.run(JavaInstanceRunnable.java:230) - at java.lang.Thread.run(Thread.java:748) - -``` - -If you encounter the above problems in synchronizing data, please refer to [this](https://github.com/apache/pulsar/issues/4075) and add the following configuration to the configuration file: - -```$xslt - -max.queue.size= - -``` - diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/io-cdc.md b/site2/website/versioned_docs/version-2.9.1-deprecated/io-cdc.md deleted file mode 100644 index e6e662884826de..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/io-cdc.md +++ /dev/null @@ -1,26 +0,0 @@ ---- -id: io-cdc -title: CDC connector -sidebar_label: "CDC connector" -original_id: io-cdc ---- - -CDC source connectors capture log changes of databases (such as MySQL, MongoDB, and PostgreSQL) into Pulsar. - -> CDC source connectors are built on top of [Canal](https://github.com/alibaba/canal) and [Debezium](https://debezium.io/) and store all data into Pulsar cluster in a persistent, replicated, and partitioned way. - -Currently, Pulsar has the following CDC connectors. - -Name|Java Class -|---|--- -[Canal source connector](io-canal-source.md)|[org.apache.pulsar.io.canal.CanalStringSource.java](https://github.com/apache/pulsar/blob/master/pulsar-io/canal/src/main/java/org/apache/pulsar/io/canal/CanalStringSource.java) -[Debezium source connector](io-cdc-debezium.md)|
  - [org.apache.pulsar.io.debezium.DebeziumSource.java](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/core/src/main/java/org/apache/pulsar/io/debezium/DebeziumSource.java)
  - [org.apache.pulsar.io.debezium.mysql.DebeziumMysqlSource.java](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/java/org/apache/pulsar/io/debezium/mysql/DebeziumMysqlSource.java)
  - [org.apache.pulsar.io.debezium.postgres.DebeziumPostgresSource.java](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/java/org/apache/pulsar/io/debezium/postgres/DebeziumPostgresSource.java) |
-
-For more information about Canal and Debezium, see the information below.
-
-Subject | Reference
-|---|---
-How to use Canal source connector with MySQL|[Canal guide](https://github.com/alibaba/canal/wiki)
-How does Canal work | [Canal tutorial](https://github.com/alibaba/canal/wiki)
-How to use Debezium source connector with MySQL | [Debezium guide](https://debezium.io/docs/connectors/mysql/)
-How does Debezium work | [Debezium tutorial](https://debezium.io/docs/tutorial/)
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/io-cli.md b/site2/website/versioned_docs/version-2.9.1-deprecated/io-cli.md
deleted file mode 100644
index 3d54bb61875e25..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/io-cli.md
+++ /dev/null
@@ -1,658 +0,0 @@
----
-id: io-cli
-title: Connector Admin CLI
-sidebar_label: "CLI"
-original_id: io-cli
----
-
-The `pulsar-admin` tool helps you manage Pulsar connectors.
-
-## `sources`
-
-An interface for managing Pulsar IO sources (ingress data into Pulsar).
-
-```bash
-
-$ pulsar-admin sources subcommands
-
-```
-
-Subcommands are:
-
-* `create`
-
-* `update`
-
-* `delete`
-
-* `get`
-
-* `status`
-
-* `list`
-
-* `stop`
-
-* `start`
-
-* `restart`
-
-* `localrun`
-
-* `available-sources`
-
-* `reload`
-
-
-### `create`
-
-Submit a Pulsar IO source connector to run in a Pulsar cluster.
-
-#### Usage
-
-```bash
-
-$ pulsar-admin sources create options
-
-```
-
-#### Options
-
-|Flag|Description|
-|----|---|
-| `-a`, `--archive` | The path to the NAR archive for the source.
    It also supports url-path (http/https/file [file protocol assumes that file already exists on worker host]) from which worker can download the package. -| `--classname` | The source's class name if `archive` is file-url-path (file://). -| `--cpu` | The CPU (in cores) that needs to be allocated per source instance (applicable only to Docker runtime). -| `--deserialization-classname` | The SerDe classname for the source. -| `--destination-topic-name` | The Pulsar topic to which data is sent. -| `--disk` | The disk (in bytes) that needs to be allocated per source instance (applicable only to Docker runtime). -|`--name` | The source's name. -| `--namespace` | The source's namespace. -| ` --parallelism` | The source's parallelism factor, that is, the number of source instances to run. -| `--processing-guarantees` | The processing guarantees (also named as delivery semantics) applied to the source. A source connector receives messages from external system and writes messages to a Pulsar topic. The `--processing-guarantees` is used to ensure the processing guarantees for writing messages to the Pulsar topic.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE. -| `--ram` | The RAM (in bytes) that needs to be allocated per source instance (applicable only to the process and Docker runtimes). -| `-st`, `--schema-type` | The schema type.
    Either a builtin schema (for example, AVRO or JSON) or a custom schema class name to be used to encode messages emitted from the source.
-| `--source-config` | Source config key/values.
-| `--source-config-file` | The path to a YAML config file specifying the source's configuration.
-| `-t`, `--source-type` | The source's connector provider.
-| `--tenant` | The source's tenant.
-|`--producer-config`| The custom producer configuration (as a JSON string).
-
-### `update`
-
-Update an already submitted Pulsar IO source connector.
-
-#### Usage
-
-```bash
-
-$ pulsar-admin sources update options
-
-```
-
-#### Options
-
-|Flag|Description|
-|----|---|
-| `-a`, `--archive` | The path to the NAR archive for the source.
    It also supports url-path (http/https/file [file protocol assumes that file already exists on worker host]) from which worker can download the package. -| `--classname` | The source's class name if `archive` is file-url-path (file://). -| `--cpu` | The CPU (in cores) that needs to be allocated per source instance (applicable only to Docker runtime). -| `--deserialization-classname` | The SerDe classname for the source. -| `--destination-topic-name` | The Pulsar topic to which data is sent. -| `--disk` | The disk (in bytes) that needs to be allocated per source instance (applicable only to Docker runtime). -|`--name` | The source's name. -| `--namespace` | The source's namespace. -| ` --parallelism` | The source's parallelism factor, that is, the number of source instances to run. -| `--processing-guarantees` | The processing guarantees (also named as delivery semantics) applied to the source. A source connector receives messages from external system and writes messages to a Pulsar topic. The `--processing-guarantees` is used to ensure the processing guarantees for writing messages to the Pulsar topic.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE. -| `--ram` | The RAM (in bytes) that needs to be allocated per source instance (applicable only to the process and Docker runtimes). -| `-st`, `--schema-type` | The schema type.
    Either a builtin schema (for example, AVRO and JSON) or custom schema class name to be used to encode messages emitted from source. -| `--source-config` | Source config key/values. -| `--source-config-file` | The path to a YAML config file specifying the source's configuration. -| `-t`, `--source-type` | The source's connector provider. The `source-type` parameter of the currently built-in connectors is determined by the setting of the `name` parameter specified in the pulsar-io.yaml file. -| `--tenant` | The source's tenant. -| `--update-auth-data` | Whether or not to update the auth data.
    **Default value: false.** - - -### `delete` - -Delete a Pulsar IO source connector. - -#### Usage - -```bash - -$ pulsar-admin sources delete options - -``` - -#### Option - -|Flag|Description| -|---|---| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - -### `get` - -Get the information about a Pulsar IO source connector. - -#### Usage - -```bash - -$ pulsar-admin sources get options - -``` - -#### Options -|Flag|Description| -|---|---| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - - -### `status` - -Check the current status of a Pulsar Source. - -#### Usage - -```bash - -$ pulsar-admin sources status options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The source ID.
    If `instance-id` is not provided, Pulsar gets status of all instances.| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - -### `list` - -List all running Pulsar IO source connectors. - -#### Usage - -```bash - -$ pulsar-admin sources list options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - - -### `stop` - -Stop a source instance. - -#### Usage - -```bash - -$ pulsar-admin sources stop options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The source instanceID.
    If `instance-id` is not provided, Pulsar stops all instances.| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - -### `start` - -Start a source instance. - -#### Usage - -```bash - -$ pulsar-admin sources start options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The source instanceID.
    If `instance-id` is not provided, Pulsar starts all instances.| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - - -### `restart` - -Restart a source instance. - -#### Usage - -```bash - -$ pulsar-admin sources restart options - -``` - -#### Options -|Flag|Description| -|---|---| -|`--instance-id`|The source instanceID.
    If `instance-id` is not provided, Pulsar restarts all instances. -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - - -### `localrun` - -Run a Pulsar IO source connector locally rather than deploying it to the Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sources localrun options - -``` - -#### Options - -|Flag|Description| -|----|---| -| `-a`, `--archive` | The path to the NAR archive for the Source.
    It also supports url-path (http/https/file [file protocol assumes that file already exists on worker host]) from which worker can download the package. -| `--broker-service-url` | The URL for the Pulsar broker. -|`--classname`|The source's class name if `archive` is file-url-path (file://). -| `--client-auth-params` | Client authentication parameter. -| `--client-auth-plugin` | Client authentication plugin using which function-process can connect to broker. -|`--cpu`|The CPU (in cores) that needs to be allocated per source instance (applicable only to the Docker runtime).| -|`--deserialization-classname`|The SerDe classname for the source. -|`--destination-topic-name`|The Pulsar topic to which data is sent. -|`--disk`|The disk (in bytes) that needs to be allocated per source instance (applicable only to the Docker runtime).| -|`--hostname-verification-enabled`|Enable hostname verification.
    **Default value: false**. -|`--name`|The source’s name.| -|`--namespace`|The source’s namespace.| -|`--parallelism`|The source’s parallelism factor, that is, the number of source instances to run).| -|`--processing-guarantees` | The processing guarantees (also named as delivery semantics) applied to the source. A source connector receives messages from external system and writes messages to a Pulsar topic. The `--processing-guarantees` is used to ensure the processing guarantees for writing messages to the Pulsar topic.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE. -|`--ram`|The RAM (in bytes) that needs to be allocated per source instance (applicable only to the Docker runtime).| -| `-st`, `--schema-type` | The schema type.
    Either a builtin schema (for example, AVRO and JSON) or custom schema class name to be used to encode messages emitted from source. -|`--source-config`|Source config key/values. -|`--source-config-file`|The path to a YAML config file specifying the source’s configuration. -|`--source-type`|The source's connector provider. -|`--tenant`|The source’s tenant. -|`--tls-allow-insecure`|Allow insecure tls connection.
    **Default value: false**. -|`--tls-trust-cert-path`|The tls trust cert file path. -|`--use-tls`|Use tls connection.
    **Default value: false**. -|`--producer-config`| The custom producer configuration (as a JSON string). - -### `available-sources` - -Get the list of Pulsar IO connector sources supported by Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sources available-sources - -``` - -### `reload` - -Reload the available built-in connectors. - -#### Usage - -```bash - -$ pulsar-admin sources reload - -``` - -## `sinks` - -An interface for managing Pulsar IO sinks (egress data from Pulsar). - -```bash - -$ pulsar-admin sinks subcommands - -``` - -Subcommands are: - -* `create` - -* `update` - -* `delete` - -* `get` - -* `status` - -* `list` - -* `stop` - -* `start` - -* `restart` - -* `localrun` - -* `available-sinks` - -* `reload` - - -### `create` - -Submit a Pulsar IO sink connector to run in a Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sinks create options - -``` - -#### Options - -|Flag|Description| -|----|---| -| `-a`, `--archive` | The path to the archive file for the sink.
    It also supports url-path (http/https/file [file protocol assumes that file already exists on worker host]) from which worker can download the package. -| `--auto-ack` | Whether or not the framework will automatically acknowledge messages. -| `--classname` | The sink's class name if `archive` is file-url-path (file://). -| `--cpu` | The CPU (in cores) that needs to be allocated per sink instance (applicable only to Docker runtime). -| `--custom-schema-inputs` | The map of input topics to schema types or class names (as a JSON string). -| `--custom-serde-inputs` | The map of input topics to SerDe class names (as a JSON string). -| `--disk` | The disk (in bytes) that needs to be allocated per sink instance (applicable only to Docker runtime). -|`-i, --inputs` | The sink's input topic or topics (multiple topics can be specified as a comma-separated list). -|`--name` | The sink's name. -| `--namespace` | The sink's namespace. -| ` --parallelism` | The sink's parallelism factor, that is, the number of sink instances to run. -| `--processing-guarantees` | The processing guarantees (also known as delivery semantics) applied to the sink. The `--processing-guarantees` implementation in Pulsar also relies on sink implementation.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE. -| `--ram` | The RAM (in bytes) that needs to be allocated per sink instance (applicable only to the process and Docker runtimes). -| `--retain-ordering` | Sink consumes and sinks messages in order. -| `--sink-config` | sink config key/values. -| `--sink-config-file` | The path to a YAML config file specifying the sink's configuration. -| `-t`, `--sink-type` | The sink's connector provider. The `sink-type` parameter of the currently built-in connectors is determined by the setting of the `name` parameter specified in the pulsar-io.yaml file. -| `--subs-name` | Pulsar source subscription name if user wants a specific subscription-name for input-topic consumer. -| `--tenant` | The sink's tenant. -| `--timeout-ms` | The message timeout in milliseconds. -| `--topics-pattern` | TopicsPattern to consume from list of topics under a namespace that match the pattern.
    `--input` and `--topics-Pattern` are mutually exclusive.
    Add SerDe class name for a pattern in `--customSerdeInputs` (supported for java fun only). - -### `update` - -Update a Pulsar IO sink connector. - -#### Usage - -```bash - -$ pulsar-admin sinks update options - -``` - -#### Options - -|Flag|Description| -|----|---| -| `-a`, `--archive` | The path to the archive file for the sink.
    It also supports url-path (http/https/file [file protocol assumes that file already exists on worker host]) from which worker can download the package. -| `--auto-ack` | Whether or not the framework will automatically acknowledge messages. -| `--classname` | The sink's class name if `archive` is file-url-path (file://). -| `--cpu` | The CPU (in cores) that needs to be allocated per sink instance (applicable only to Docker runtime). -| `--custom-schema-inputs` | The map of input topics to schema types or class names (as a JSON string). -| `--custom-serde-inputs` | The map of input topics to SerDe class names (as a JSON string). -| `--disk` | The disk (in bytes) that needs to be allocated per sink instance (applicable only to Docker runtime). -|`-i, --inputs` | The sink's input topic or topics (multiple topics can be specified as a comma-separated list). -|`--name` | The sink's name. -| `--namespace` | The sink's namespace. -| ` --parallelism` | The sink's parallelism factor, that is, the number of sink instances to run. -| `--processing-guarantees` | The processing guarantees (also known as delivery semantics) applied to the sink. The `--processing-guarantees` implementation in Pulsar also relies on sink implementation.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE.
-| `--ram` | The RAM (in bytes) that needs to be allocated per sink instance (applicable only to the process and Docker runtimes).
-| `--retain-ordering` | Sink consumes and sinks messages in order.
-| `--sink-config` | Sink configuration key/value pairs.
-| `--sink-config-file` | The path to a YAML config file specifying the sink's configuration.
-| `-t`, `--sink-type` | The sink's connector provider.
-| `--subs-name` | Pulsar source subscription name if the user wants a specific subscription name for the input-topic consumer.
-| `--tenant` | The sink's tenant.
-| `--timeout-ms` | The message timeout in milliseconds.
-| `--topics-pattern` | The topic pattern to consume from a list of topics under a namespace that match the pattern.
    `--inputs` and `--topics-pattern` are mutually exclusive.
    Add the SerDe class name for a pattern in `--custom-serde-inputs` (supported for Java functions only).
-| `--update-auth-data` | Whether or not to update the auth data.
    **Default value: false.** - -### `delete` - -Delete a Pulsar IO sink connector. - -#### Usage - -```bash - -$ pulsar-admin sinks delete options - -``` - -#### Option - -|Flag|Description| -|---|---| -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - -### `get` - -Get the information about a Pulsar IO sink connector. - -#### Usage - -```bash - -$ pulsar-admin sinks get options - -``` - -#### Options -|Flag|Description| -|---|---| -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - - -### `status` - -Check the current status of a Pulsar sink. - -#### Usage - -```bash - -$ pulsar-admin sinks status options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The sink ID.
    If `instance-id` is not provided, Pulsar gets the status of all instances.|
-|`--name`|The sink's name.|
-|`--namespace`|The sink's namespace.|
-|`--tenant`|The sink's tenant.|
-
-
-### `list`
-
-List all running Pulsar IO sink connectors.
-
-#### Usage
-
-```bash
-
-$ pulsar-admin sinks list options
-
-```
-
-#### Options
-
-|Flag|Description|
-|---|---|
-|`--namespace`|The sink's namespace.|
-|`--tenant`|The sink's tenant.|
-
-
-### `stop`
-
-Stop a sink instance.
-
-#### Usage
-
-```bash
-
-$ pulsar-admin sinks stop options
-
-```
-
-#### Options
-
-|Flag|Description|
-|---|---|
-|`--instance-id`|The sink instance ID.
    If `instance-id` is not provided, Pulsar stops all instances.|
-|`--name`|The sink's name.|
-|`--namespace`|The sink's namespace.|
-|`--tenant`|The sink's tenant.|
-
-### `start`
-
-Start a sink instance.
-
-#### Usage
-
-```bash
-
-$ pulsar-admin sinks start options
-
-```
-
-#### Options
-
-|Flag|Description|
-|---|---|
-|`--instance-id`|The sink instance ID.
    If `instance-id` is not provided, Pulsar starts all instances.|
-|`--name`|The sink's name.|
-|`--namespace`|The sink's namespace.|
-|`--tenant`|The sink's tenant.|
-
-
-### `restart`
-
-Restart a sink instance.
-
-#### Usage
-
-```bash
-
-$ pulsar-admin sinks restart options
-
-```
-
-#### Options
-
-|Flag|Description|
-|---|---|
-|`--instance-id`|The sink instance ID.
    If `instance-id` is not provided, Pulsar restarts all instances. -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - - -### `localrun` - -Run a Pulsar IO sink connector locally rather than deploying it to the Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sinks localrun options - -``` - -#### Options - -|Flag|Description| -|----|---| -| `-a`, `--archive` | The path to the archive file for the sink.
    It also supports url-path (http/https/file [the file protocol assumes that the file already exists on the worker host]) from which the worker can download the package.
-| `--auto-ack` | Whether or not the framework will automatically acknowledge messages.
-| `--broker-service-url` | The URL for the Pulsar broker.
-|`--classname`|The sink's class name if `archive` is file-url-path (file://).
-| `--client-auth-params` | Client authentication parameter.
-| `--client-auth-plugin` | Client authentication plugin that the function process uses to connect to the broker.
-|`--cpu`|The CPU (in cores) that needs to be allocated per sink instance (applicable only to the Docker runtime).
-| `--custom-schema-inputs` | The map of input topics to schema types or class names (as a JSON string).
-| `--max-redeliver-count` | Maximum number of times that a message is redelivered before being sent to the dead letter queue.
-| `--dead-letter-topic` | Name of the dead letter topic where the failing messages are sent.
-| `--custom-serde-inputs` | The map of input topics to SerDe class names (as a JSON string).
-|`--disk`|The disk (in bytes) that needs to be allocated per sink instance (applicable only to the Docker runtime).|
-|`--hostname-verification-enabled`|Enable hostname verification.
    **Default value: false**.
-| `-i`, `--inputs` | The sink's input topic or topics (multiple topics can be specified as a comma-separated list).
-|`--name`|The sink's name.|
-|`--namespace`|The sink's namespace.|
-|`--parallelism`|The sink's parallelism factor, that is, the number of sink instances to run.|
-|`--processing-guarantees`|The processing guarantees (also known as delivery semantics) applied to the sink. The `--processing-guarantees` implementation in Pulsar also depends on the sink implementation.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE.
-|`--ram`|The RAM (in bytes) that needs to be allocated per sink instance (applicable only to the Docker runtime).|
-|`--retain-ordering` | Sink consumes and sinks messages in order.
-|`--sink-config`|Sink configuration key/value pairs.
-|`--sink-config-file`|The path to a YAML config file specifying the sink's configuration.
-|`--sink-type`|The sink's connector provider.
-|`--subs-name` | Pulsar source subscription name if the user wants a specific subscription name for the input-topic consumer.
-|`--tenant`|The sink's tenant.
-| `--timeout-ms` | The message timeout in milliseconds.
-| `--negative-ack-redelivery-delay-ms` | The negatively-acknowledged message redelivery delay in milliseconds. |
-|`--tls-allow-insecure`|Allow insecure TLS connection.
    **Default value: false**.
-|`--tls-trust-cert-path`|The TLS trust certificate file path.
-| `--topics-pattern` | The topic pattern to consume from a list of topics under a namespace that match the pattern.
    `--inputs` and `--topics-pattern` are mutually exclusive.
    Add the SerDe class name for a pattern in `--custom-serde-inputs` (supported for Java functions only).
-|`--use-tls`|Use TLS connection.
    **Default value: false**. - -### `available-sinks` - -Get the list of Pulsar IO connector sinks supported by Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sinks available-sinks - -``` - -### `reload` - -Reload the available built-in connectors. - -#### Usage - -```bash - -$ pulsar-admin sinks reload - -``` - diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/io-connectors.md b/site2/website/versioned_docs/version-2.9.1-deprecated/io-connectors.md deleted file mode 100644 index 957a02a5a1964a..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/io-connectors.md +++ /dev/null @@ -1,249 +0,0 @@ ---- -id: io-connectors -title: Built-in connector -sidebar_label: "Built-in connector" -original_id: io-connectors ---- - -Pulsar distribution includes a set of common connectors that have been packaged and tested with the rest of Apache Pulsar. These connectors import and export data from some of the most commonly used data systems. - -Using any of these connectors is as easy as writing a simple connector and running the connector locally or submitting the connector to a Pulsar Functions cluster. - -## Source connector - -Pulsar has various source connectors, which are sorted alphabetically as below. - -### Canal - -* [Configuration](io-canal-source.md#configuration) - -* [Example](io-canal-source.md#usage) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/canal/src/main/java/org/apache/pulsar/io/canal/CanalStringSource.java) - - -### Debezium MySQL - -* [Configuration](io-debezium-source.md#configuration) - -* [Example](io-debezium-source.md#example-of-mysql) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/java/org/apache/pulsar/io/debezium/mysql/DebeziumMysqlSource.java) - -### Debezium PostgreSQL - -* [Configuration](io-debezium-source.md#configuration) - -* [Example](io-debezium-source.md#example-of-postgresql) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/java/org/apache/pulsar/io/debezium/postgres/DebeziumPostgresSource.java) - -### Debezium MongoDB - -* [Configuration](io-debezium-source.md#configuration) - -* [Example](io-debezium-source.md#example-of-mongodb) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mongodb/src/main/java/org/apache/pulsar/io/debezium/mongodb/DebeziumMongoDbSource.java) - -### Debezium Oracle - -* [Configuration](io-debezium-source.md#configuration) - -* [Example](io-debezium-source.md#example-of-oracle) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/oracle/src/main/java/org/apache/pulsar/io/debezium/oracle/DebeziumOracleSource.java) - -### Debezium Microsoft SQL Server - -* [Configuration](io-debezium-source.md#configuration) - -* [Example](io-debezium-source.md#example-of-microsoft-sql) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mssql/src/main/java/org/apache/pulsar/io/debezium/mssql/DebeziumMsSqlSource.java) - - -### DynamoDB - -* [Configuration](io-dynamodb-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/dynamodb/src/main/java/org/apache/pulsar/io/dynamodb/DynamoDBSource.java) - -### File - -* [Configuration](io-file-source.md#configuration) - -* [Example](io-file-source.md#usage) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/file/src/main/java/org/apache/pulsar/io/file/FileSource.java) - -### Flume - -* 
[Configuration](io-flume-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/java/org/apache/pulsar/io/flume/FlumeConnector.java) - -### Twitter firehose - -* [Configuration](io-twitter-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/twitter/src/main/java/org/apache/pulsar/io/twitter/TwitterFireHose.java) - -### Kafka - -* [Configuration](io-kafka-source.md#configuration) - -* [Example](io-kafka-source.md#usage) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSource.java) - -### Kinesis - -* [Configuration](io-kinesis-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/kinesis/src/main/java/org/apache/pulsar/io/kinesis/KinesisSource.java) - -### Netty - -* [Configuration](io-netty-source.md#configuration) - -* [Example of TCP](io-netty-source.md#tcp) - -* [Example of HTTP](io-netty-source.md#http) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/netty/src/main/java/org/apache/pulsar/io/netty/NettySource.java) - -### NSQ - -* [Configuration](io-nsq-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/nsq/src/main/java/org/apache/pulsar/io/nsq/NSQSource.java) - -### RabbitMQ - -* [Configuration](io-rabbitmq-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/rabbitmq/src/main/java/org/apache/pulsar/io/rabbitmq/RabbitMQSource.java) - -## Sink connector - -Pulsar has various sink connectors, which are sorted alphabetically as below. - -### Aerospike - -* [Configuration](io-aerospike-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/aerospike/src/main/java/org/apache/pulsar/io/aerospike/AerospikeStringSink.java) - -### Cassandra - -* [Configuration](io-cassandra-sink.md#configuration) - -* [Example](io-cassandra-sink.md#usage) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/cassandra/src/main/java/org/apache/pulsar/io/cassandra/CassandraStringSink.java) - -### ElasticSearch - -* [Configuration](io-elasticsearch-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/elastic-search/src/main/java/org/apache/pulsar/io/elasticsearch/ElasticSearchSink.java) - -### Flume - -* [Configuration](io-flume-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/java/org/apache/pulsar/io/flume/sink/StringSink.java) - -### HBase - -* [Configuration](io-hbase-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/hbase/src/main/java/org/apache/pulsar/io/hbase/HbaseAbstractConfig.java) - -### HDFS2 - -* [Configuration](io-hdfs2-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/hdfs2/src/main/java/org/apache/pulsar/io/hdfs2/AbstractHdfsConnector.java) - -### HDFS3 - -* [Configuration](io-hdfs3-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/hdfs3/src/main/java/org/apache/pulsar/io/hdfs3/AbstractHdfsConnector.java) - -### InfluxDB - -* [Configuration](io-influxdb-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/influxdb/src/main/java/org/apache/pulsar/io/influxdb/InfluxDBGenericRecordSink.java) - -### JDBC ClickHouse - 
-* [Configuration](io-jdbc-sink.md#configuration) - -* [Example](io-jdbc-sink.md#example-for-clickhouse) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/jdbc/clickhouse/src/main/java/org/apache/pulsar/io/jdbc/ClickHouseJdbcAutoSchemaSink.java) - -### JDBC MariaDB - -* [Configuration](io-jdbc-sink.md#configuration) - -* [Example](io-jdbc-sink.md#example-for-mariadb) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/jdbc/mariadb/src/main/java/org/apache/pulsar/io/jdbc/MariadbJdbcAutoSchemaSink.java) - -### JDBC PostgreSQL - -* [Configuration](io-jdbc-sink.md#configuration) - -* [Example](io-jdbc-sink.md#example-for-postgresql) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/jdbc/postgres/src/main/java/org/apache/pulsar/io/jdbc/PostgresJdbcAutoSchemaSink.java) - -### JDBC SQLite - -* [Configuration](io-jdbc-sink.md#configuration) - -* [Example](io-jdbc-sink.md#example-for-sqlite) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/jdbc/sqlite/src/main/java/org/apache/pulsar/io/jdbc/SqliteJdbcAutoSchemaSink.java) - -### Kafka - -* [Configuration](io-kafka-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSink.java) - -### Kinesis - -* [Configuration](io-kinesis-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/kinesis/src/main/java/org/apache/pulsar/io/kinesis/KinesisSink.java) - -### MongoDB - -* [Configuration](io-mongo-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/mongo/src/main/java/org/apache/pulsar/io/mongodb/MongoSink.java) - -### RabbitMQ - -* [Configuration](io-rabbitmq-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/rabbitmq/src/main/java/org/apache/pulsar/io/rabbitmq/RabbitMQSink.java) - -### Redis - -* [Configuration](io-redis-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/redis/src/main/java/org/apache/pulsar/io/redis/RedisAbstractConfig.java) - -### Solr - -* [Configuration](io-solr-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/solr/src/main/java/org/apache/pulsar/io/solr/SolrSinkConfig.java) - diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/io-debezium-source.md b/site2/website/versioned_docs/version-2.9.1-deprecated/io-debezium-source.md deleted file mode 100644 index f94f8336fc0209..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/io-debezium-source.md +++ /dev/null @@ -1,768 +0,0 @@ ---- -id: io-debezium-source -title: Debezium source connector -sidebar_label: "Debezium source connector" -original_id: io-debezium-source ---- - -The Debezium source connector pulls messages from MySQL or PostgreSQL -and persists the messages to Pulsar topics. - -## Configuration - -The configuration of Debezium source connector has the following properties. - -| Name | Required | Default | Description | -|------|----------|---------|-------------| -| `task.class` | true | null | A source task class that implemented in Debezium. | -| `database.hostname` | true | null | The address of a database server. | -| `database.port` | true | null | The port number of a database server.| -| `database.user` | true | null | The name of a database user that has the required privileges. 
|
-| `database.password` | true | null | The password for a database user that has the required privileges. |
-| `database.server.id` | true | null | The connector's identifier, which must be unique within a database cluster and is similar to the database's server-id configuration property. |
-| `database.server.name` | true | null | The logical name of a database server/cluster, which forms a namespace and is used in all the names of Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the Avro converter is used. |
-| `database.whitelist` | false | null | A list of all databases hosted by this server that are monitored by the connector.

    This is optional, and there are other properties for listing databases and tables to include or exclude from monitoring. | -| `key.converter` | true | null | The converter provided by Kafka Connect to convert record key. | -| `value.converter` | true | null | The converter provided by Kafka Connect to convert record value. | -| `database.history` | true | null | The name of the database history class. | -| `database.history.pulsar.topic` | true | null | The name of the database history topic where the connector writes and recovers DDL statements.

    **Note: this topic is for internal use only and should not be used by consumers.** |
-| `database.history.pulsar.service.url` | true | null | Pulsar cluster service URL for the history topic. |
-| `offset.storage.topic` | true | null | The topic on which the connector records the last successfully-committed offsets. |
-| `json-with-envelope` | false | false | Whether the consumed message consists of both schema and payload (`true`) or the payload only (`false`). |
-
-### Converter Options
-
-1. org.apache.kafka.connect.json.JsonConverter
-
-The `json-with-envelope` config is valid only for the JsonConverter. Its default value is false, in which case the consumer uses the schema `Schema.KeyValue(Schema.AUTO_CONSUME(), Schema.AUTO_CONSUME(), KeyValueEncodingType.SEPARATED)` and the message consists of the payload only.
-
-If the `json-with-envelope` value is true, the consumer uses the schema `Schema.KeyValue(Schema.BYTES, Schema.BYTES)` and the message consists of both schema and payload.
-
-2. org.apache.pulsar.kafka.shade.io.confluent.connect.avro.AvroConverter
-
-If users select the AvroConverter, the Pulsar consumer should use the schema `Schema.KeyValue(Schema.AUTO_CONSUME(), Schema.AUTO_CONSUME(), KeyValueEncodingType.SEPARATED)`, and the message consists of the payload.
-
-### MongoDB Configuration
-| Name | Required | Default | Description |
-|------|----------|---------|-------------|
-| `mongodb.hosts` | true | null | The comma-separated list of hostname and port pairs (in the form 'host' or 'host:port') of the MongoDB servers in the replica set. The list contains a single hostname and port pair. If mongodb.members.auto.discover is set to false, the host and port pair are prefixed with the replica set name (e.g., rs0/localhost:27017). |
-| `mongodb.name` | true | null | A unique name that identifies the connector and/or MongoDB replica set or sharded cluster that this connector monitors. Each server should be monitored by at most one Debezium connector, since this server name prefixes all persisted Kafka topics emanating from the MongoDB replica set or cluster. |
-| `mongodb.user` | true | null | Name of the database user to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. |
-| `mongodb.password` | true | null | Password to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. |
-| `mongodb.task.id` | true | null | The taskId of the MongoDB connector that attempts to use a separate task for each replica set. |
-
-
-
-## Example of MySQL
-
-You need to create a configuration file before using the Pulsar Debezium connector.
-
-### Configuration
-
-You can use one of the following methods to create a configuration file. 
- -* JSON - - ```json - - { - "database.hostname": "localhost", - "database.port": "3306", - "database.user": "debezium", - "database.password": "dbz", - "database.server.id": "184054", - "database.server.name": "dbserver1", - "database.whitelist": "inventory", - "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory", - "database.history.pulsar.topic": "history-topic", - "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650", - "key.converter": "org.apache.kafka.connect.json.JsonConverter", - "value.converter": "org.apache.kafka.connect.json.JsonConverter", - "offset.storage.topic": "offset-topic" - } - - ``` - -* YAML - - You can create a `debezium-mysql-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/resources/debezium-mysql-source-config.yaml) below to the `debezium-mysql-source-config.yaml` file. - - ```yaml - - tenant: "public" - namespace: "default" - name: "debezium-mysql-source" - topicName: "debezium-mysql-topic" - archive: "connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar" - parallelism: 1 - - configs: - - ## config for mysql, docker image: debezium/example-mysql:0.8 - database.hostname: "localhost" - database.port: "3306" - database.user: "debezium" - database.password: "dbz" - database.server.id: "184054" - database.server.name: "dbserver1" - database.whitelist: "inventory" - database.history: "org.apache.pulsar.io.debezium.PulsarDatabaseHistory" - database.history.pulsar.topic: "history-topic" - database.history.pulsar.service.url: "pulsar://127.0.0.1:6650" - - ## KEY_CONVERTER_CLASS_CONFIG, VALUE_CONVERTER_CLASS_CONFIG - key.converter: "org.apache.kafka.connect.json.JsonConverter" - value.converter: "org.apache.kafka.connect.json.JsonConverter" - - ## OFFSET_STORAGE_TOPIC_CONFIG - offset.storage.topic: "offset-topic" - - ``` - -### Usage - -This example shows how to change the data of a MySQL table using the Pulsar Debezium connector. - -1. Start a MySQL server with a database from which Debezium can capture changes. - - ```bash - - $ docker run -it --rm \ - --name mysql \ - -p 3306:3306 \ - -e MYSQL_ROOT_PASSWORD=debezium \ - -e MYSQL_USER=mysqluser \ - -e MYSQL_PASSWORD=mysqlpw debezium/example-mysql:0.8 - - ``` - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - -3. Start the Pulsar Debezium connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - Make sure the nar file is available at `connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar`. 
- - ```bash - - $ bin/pulsar-admin source localrun \ - --archive connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar \ - --name debezium-mysql-source --destination-topic-name debezium-mysql-topic \ - --tenant public \ - --namespace default \ - --source-config '{"database.hostname": "localhost","database.port": "3306","database.user": "debezium","database.password": "dbz","database.server.id": "184054","database.server.name": "dbserver1","database.whitelist": "inventory","database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory","database.history.pulsar.topic": "history-topic","database.history.pulsar.service.url": "pulsar://127.0.0.1:6650","key.converter": "org.apache.kafka.connect.json.JsonConverter","value.converter": "org.apache.kafka.connect.json.JsonConverter","pulsar.service.url": "pulsar://127.0.0.1:6650","offset.storage.topic": "offset-topic"}' - - ``` - - :::note - - Currently, the destination topic (specified by the `destination-topic-name` option ) is a required configuration but it is not used for the Debezium connector to save data. The Debezium connector saves data in the following 4 types of topics: - - - One topic named with the database server name ( `database.server.name`) for storing the database metadata messages, such as `public/default/database.server.name`. - - One topic (`database.history.pulsar.topic`) for storing the database history information. The connector writes and recovers DDL statements on this topic. - - One topic (`offset.storage.topic`) for storing the offset metadata messages. The connector saves the last successfully-committed offsets on this topic. - - One per-table topic. The connector writes change events for all operations that occur in a table to a single Pulsar topic that is specific to that table. - - If the automatic topic creation is disabled on your broker, you need to manually create the above 4 types of topics and the destination topic. - - ::: - - * Use the **YAML** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin source localrun \ - --source-config-file debezium-mysql-source-config.yaml - - ``` - -4. Subscribe the topic _sub-products_ for the table _inventory.products_. - - ```bash - - $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0 - - ``` - -5. Start a MySQL client in docker. - - ```bash - - $ docker run -it --rm \ - --name mysqlterm \ - --link mysql \ - --rm mysql:5.7 sh \ - -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"' - - ``` - -6. A MySQL client pops out. - - Use the following commands to change the data of the table _products_. - - ``` - - mysql> use inventory; - mysql> show tables; - mysql> SELECT * FROM products; - mysql> UPDATE products SET name='1111111111' WHERE id=101; - mysql> UPDATE products SET name='1111111111' WHERE id=107; - - ``` - - In the terminal window of subscribing topic, you can find the data changes have been kept in the _sub-products_ topic. - -## Example of PostgreSQL - -You need to create a configuration file before using the Pulsar Debezium connector. - -### Configuration - -You can use one of the following methods to create a configuration file. 
- -* JSON - - ```json - - { - "database.hostname": "localhost", - "database.port": "5432", - "database.user": "postgres", - "database.password": "changeme", - "database.dbname": "postgres", - "database.server.name": "dbserver1", - "plugin.name": "pgoutput", - "schema.whitelist": "public", - "table.whitelist": "public.users", - "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650" - } - - ``` - -* YAML - - You can create a `debezium-postgres-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/resources/debezium-postgres-source-config.yaml) below to the `debezium-postgres-source-config.yaml` file. - - ```yaml - - tenant: "public" - namespace: "default" - name: "debezium-postgres-source" - topicName: "debezium-postgres-topic" - archive: "connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar" - parallelism: 1 - - configs: - - ## config for postgres version 10+, official docker image: postgres:<10+> - database.hostname: "localhost" - database.port: "5432" - database.user: "postgres" - database.password: "changeme" - database.dbname: "postgres" - database.server.name: "dbserver1" - plugin.name: "pgoutput" - schema.whitelist: "public" - table.whitelist: "public.users" - - ## PULSAR_SERVICE_URL_CONFIG - database.history.pulsar.service.url: "pulsar://127.0.0.1:6650" - - ``` - -Notice that `pgoutput` is a standard plugin of Postgres introduced in version 10 - [see Postgres architecture docu](https://www.postgresql.org/docs/10/logical-replication-architecture.html). You don't need to install anything, just make sure the WAL level is set to `logical` (see docker command below and [Postgres docu](https://www.postgresql.org/docs/current/runtime-config-wal.html)). - -### Usage - -This example shows how to change the data of a PostgreSQL table using the Pulsar Debezium connector. - - -1. Start a PostgreSQL server with a database from which Debezium can capture changes. - - ```bash - - $ docker run -d -it --rm \ - --name pulsar-postgres \ - -p 5432:5432 \ - -e POSTGRES_PASSWORD=changeme \ - postgres:13.3 -c wal_level=logical - - ``` - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - -3. Start the Pulsar Debezium connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - Make sure the nar file is available at `connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar`. - - ```bash - - $ bin/pulsar-admin source localrun \ - --archive connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar \ - --name debezium-postgres-source \ - --destination-topic-name debezium-postgres-topic \ - --tenant public \ - --namespace default \ - --source-config '{"database.hostname": "localhost","database.port": "5432","database.user": "postgres","database.password": "changeme","database.dbname": "postgres","database.server.name": "dbserver1","schema.whitelist": "public","table.whitelist": "public.users","pulsar.service.url": "pulsar://127.0.0.1:6650"}' - - ``` - - :::note - - Currently, the destination topic (specified by the `destination-topic-name` option ) is a required configuration but it is not used for the Debezium connector to save data. The Debezium connector saves data in the following 4 types of topics: - - - One topic named with the database server name ( `database.server.name`) for storing the database metadata messages, such as `public/default/database.server.name`. 
- - One topic (`database.history.pulsar.topic`) for storing the database history information. The connector writes and recovers DDL statements on this topic. - - One topic (`offset.storage.topic`) for storing the offset metadata messages. The connector saves the last successfully-committed offsets on this topic. - - One per-table topic. The connector writes change events for all operations that occur in a table to a single Pulsar topic that is specific to that table. - - If the automatic topic creation is disabled on your broker, you need to manually create the above 4 types of topics and the destination topic. - - ::: - - * Use the **YAML** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin source localrun \ - --source-config-file debezium-postgres-source-config.yaml - - ``` - -4. Subscribe the topic _sub-users_ for the _public.users_ table. - - ``` - - $ bin/pulsar-client consume -s "sub-users" public/default/dbserver1.public.users -n 0 - - ``` - -5. Start a PostgreSQL client in docker. - - ```bash - - $ docker exec -it pulsar-postgresql /bin/bash - - ``` - -6. A PostgreSQL client pops out. - - Use the following commands to create sample data in the table _users_. - - ``` - - psql -U postgres -h localhost -p 5432 - Password for user postgres: - - CREATE TABLE users( - id BIGINT GENERATED ALWAYS AS IDENTITY, PRIMARY KEY(id), - hash_firstname TEXT NOT NULL, - hash_lastname TEXT NOT NULL, - gender VARCHAR(6) NOT NULL CHECK (gender IN ('male', 'female')) - ); - - INSERT INTO users(hash_firstname, hash_lastname, gender) - SELECT md5(RANDOM()::TEXT), md5(RANDOM()::TEXT), CASE WHEN RANDOM() < 0.5 THEN 'male' ELSE 'female' END FROM generate_series(1, 100); - - postgres=# select * from users; - - id | hash_firstname | hash_lastname | gender - -------+----------------------------------+----------------------------------+-------- - 1 | 02bf7880eb489edc624ba637f5ab42bd | 3e742c2cc4217d8e3382cc251415b2fb | female - 2 | dd07064326bb9119189032316158f064 | 9c0e938f9eddbd5200ba348965afbc61 | male - 3 | 2c5316fdd9d6595c1cceb70eed12e80c | 8a93d7d8f9d76acfaaa625c82a03ea8b | female - 4 | 3dfa3b4f70d8cd2155567210e5043d2b | 32c156bc28f7f03ab5d28e2588a3dc19 | female - - - postgres=# UPDATE users SET hash_firstname='maxim' WHERE id=1; - UPDATE 1 - - ``` - - In the terminal window of subscribing topic, you can receive the following messages. - - ```bash - - ----- got message ----- - {"before":null,"after":{"id":1,"hash_firstname":"maxim","hash_lastname":"292113d30a3ccee0e19733dd7f88b258","gender":"male"},"source:{"version":"1.0.0.Final","connector":"postgresql","name":"foobar","ts_ms":1624045862644,"snapshot":"false","db":"postgres","schema":"public","table":"users","txId":595,"lsn":24419784,"xmin":null},"op":"u","ts_ms":1624045862648} - ...many more - - ``` - -## Example of MongoDB - -You need to create a configuration file before using the Pulsar Debezium connector. - -### Configuration - -You can use one of the following methods to create a configuration file. 
- -* JSON - - ```json - - { - "mongodb.hosts": "rs0/mongodb:27017", - "mongodb.name": "dbserver1", - "mongodb.user": "debezium", - "mongodb.password": "dbz", - "mongodb.task.id": "1", - "database.whitelist": "inventory", - "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650" - } - - ``` - -* YAML - - You can create a `debezium-mongodb-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mongodb/src/main/resources/debezium-mongodb-source-config.yaml) below to the `debezium-mongodb-source-config.yaml` file. - - ```yaml - - tenant: "public" - namespace: "default" - name: "debezium-mongodb-source" - topicName: "debezium-mongodb-topic" - archive: "connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar" - parallelism: 1 - - configs: - - ## config for pg, docker image: debezium/example-mongodb:0.10 - mongodb.hosts: "rs0/mongodb:27017", - mongodb.name: "dbserver1", - mongodb.user: "debezium", - mongodb.password: "dbz", - mongodb.task.id: "1", - database.whitelist: "inventory", - database.history.pulsar.service.url: "pulsar://127.0.0.1:6650" - - ``` - -### Usage - -This example shows how to change the data of a MongoDB table using the Pulsar Debezium connector. - - -1. Start a MongoDB server with a database from which Debezium can capture changes. - - ```bash - - $ docker pull debezium/example-mongodb:0.10 - $ docker run -d -it --rm --name pulsar-mongodb -e MONGODB_USER=mongodb -e MONGODB_PASSWORD=mongodb -p 27017:27017 debezium/example-mongodb:0.10 - - ``` - - Use the following commands to initialize the data. - - ``` bash - - ./usr/local/bin/init-inventory.sh - - ``` - - If the local host cannot access the container network, you can update the file ```/etc/hosts``` and add a rule ```127.0.0.1 6 f114527a95f```. f114527a95f is container id, you can try to get by ```docker ps -a``` - - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - -3. Start the Pulsar Debezium connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - Make sure the nar file is available at `connectors/pulsar-io-mongodb-@pulsar:version@.nar`. - - ```bash - - $ bin/pulsar-admin source localrun \ - --archive connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar \ - --name debezium-mongodb-source \ - --destination-topic-name debezium-mongodb-topic \ - --tenant public \ - --namespace default \ - --source-config '{"mongodb.hosts": "rs0/mongodb:27017","mongodb.name": "dbserver1","mongodb.user": "debezium","mongodb.password": "dbz","mongodb.task.id": "1","database.whitelist": "inventory","database.history.pulsar.service.url": "pulsar://127.0.0.1:6650"}' - - ``` - - :::note - - Currently, the destination topic (specified by the `destination-topic-name` option ) is a required configuration but it is not used for the Debezium connector to save data. The Debezium connector saves data in the following 4 types of topics: - - - One topic named with the database server name ( `database.server.name`) for storing the database metadata messages, such as `public/default/database.server.name`. - - One topic (`database.history.pulsar.topic`) for storing the database history information. The connector writes and recovers DDL statements on this topic. - - One topic (`offset.storage.topic`) for storing the offset metadata messages. The connector saves the last successfully-committed offsets on this topic. - - One per-table topic. 
The connector writes change events for all operations that occur in a table to a single Pulsar topic that is specific to that table. - - If the automatic topic creation is disabled on your broker, you need to manually create the above 4 types of topics and the destination topic. - - ::: - - * Use the **YAML** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin source localrun \ - --source-config-file debezium-mongodb-source-config.yaml - - ``` - -4. Subscribe the topic _sub-products_ for the _inventory.products_ table. - - ``` - - $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0 - - ``` - -5. Start a MongoDB client in docker. - - ```bash - - $ docker exec -it pulsar-mongodb /bin/bash - - ``` - -6. A MongoDB client pops out. - - ```bash - - mongo -u debezium -p dbz --authenticationDatabase admin localhost:27017/inventory - db.products.update({"_id":NumberLong(104)},{$set:{weight:1.25}}) - - ``` - - In the terminal window of subscribing topic, you can receive the following messages. - - ```bash - - ----- got message ----- - {"schema":{"type":"struct","fields":[{"type":"string","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":"104"}}, value = {"schema":{"type":"struct","fields":[{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"after"},{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"patch"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"version"},{"type":"string","optional":false,"field":"connector"},{"type":"string","optional":false,"field":"name"},{"type":"int64","optional":false,"field":"ts_ms"},{"type":"string","optional":true,"name":"io.debezium.data.Enum","version":1,"parameters":{"allowed":"true,last,false"},"default":"false","field":"snapshot"},{"type":"string","optional":false,"field":"db"},{"type":"string","optional":false,"field":"rs"},{"type":"string","optional":false,"field":"collection"},{"type":"int32","optional":false,"field":"ord"},{"type":"int64","optional":true,"field":"h"}],"optional":false,"name":"io.debezium.connector.mongo.Source","field":"source"},{"type":"string","optional":true,"field":"op"},{"type":"int64","optional":true,"field":"ts_ms"}],"optional":false,"name":"dbserver1.inventory.products.Envelope"},"payload":{"after":"{\"_id\": {\"$numberLong\": \"104\"},\"name\": \"hammer\",\"description\": \"12oz carpenter's hammer\",\"weight\": 1.25,\"quantity\": 4}","patch":null,"source":{"version":"0.10.0.Final","connector":"mongodb","name":"dbserver1","ts_ms":1573541905000,"snapshot":"true","db":"inventory","rs":"rs0","collection":"products","ord":1,"h":4983083486544392763},"op":"r","ts_ms":1573541909761}}. - - ``` - -## Example of Oracle - -### Packaging - -Oracle connector does not include Oracle JDBC driver and you need to package it with the connector. -Major reasons for not including the drivers are the variety of versions and Oracle licensing. It is recommended to use the driver provided with your Oracle DB installation, or you can [download](https://www.oracle.com/database/technologies/appdev/jdbc.html) one. -Integration test have an [example](https://github.com/apache/pulsar/blob/e2bc52d40450fa00af258c4432a5b71d50a5c6e0/tests/docker-images/latest-version-image/Dockerfile#L110-L122) of packaging the driver into the connector nar file. 
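-
-Since a `.nar` file is a zip archive that keeps its bundled dependencies under `META-INF/bundled-dependencies/`, repackaging can be done with standard zip tools. The following is a minimal sketch rather than an official procedure: the driver jar name (`ojdbc8.jar`) and all paths are placeholder assumptions you need to adapt to your installation.
-
-```bash
-
-# Unpack the connector nar (a nar is a zip archive). Paths below are placeholders.
-unzip connectors/pulsar-io-debezium-oracle-@pulsar:version@.nar -d /tmp/oracle-nar
-
-# Drop the Oracle JDBC driver next to the other bundled dependencies.
-cp /path/to/ojdbc8.jar /tmp/oracle-nar/META-INF/bundled-dependencies/
-
-# Repack the nar with the driver included.
-(cd /tmp/oracle-nar && zip -r ../pulsar-io-debezium-oracle-with-driver.nar .)
-
-```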
- -### Configuration - -Debezium [requires](https://debezium.io/documentation/reference/1.5/connectors/oracle.html#oracle-overview) Oracle DB with LogMiner or XStream API enabled. -Supported options and steps for enabling them vary from version to version of Oracle DB. -Steps outlined in the [documentation](https://debezium.io/documentation/reference/1.5/connectors/oracle.html#oracle-overview) and used in the [integration test](https://github.com/apache/pulsar/blob/master/tests/integration/src/test/java/org/apache/pulsar/tests/integration/io/sources/debezium/DebeziumOracleDbSourceTester.java) may or may not work for the version and edition of Oracle DB you are using. -Please refer to the [documentation for Oracle DB](https://docs.oracle.com/en/database/oracle/oracle-database/) as needed. - -Similarly to other connectors, you can use JSON or YAMl to configure the connector. -Using yaml as an example, you can create a debezium-oracle-source-config.yaml file like: - -* JSON - -```json - -{ - "database.hostname": "localhost", - "database.port": "1521", - "database.user": "dbzuser", - "database.password": "dbz", - "database.dbname": "XE", - "database.server.name": "XE", - "schema.exclude.list": "system,dbzuser", - "snapshot.mode": "initial", - "topic.namespace": "public/default", - "task.class": "io.debezium.connector.oracle.OracleConnectorTask", - "value.converter": "org.apache.kafka.connect.json.JsonConverter", - "key.converter": "org.apache.kafka.connect.json.JsonConverter", - "typeClassName": "org.apache.pulsar.common.schema.KeyValue", - "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory", - "database.tcpKeepAlive": "true", - "decimal.handling.mode": "double", - "database.history.pulsar.topic": "debezium-oracle-source-history-topic", - "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650" -} - -``` - -* YAML - -```yaml - -tenant: "public" -namespace: "default" -name: "debezium-oracle-source" -topicName: "debezium-oracle-topic" -parallelism: 1 - -className: "org.apache.pulsar.io.debezium.oracle.DebeziumOracleSource" -database.dbname: "XE" - -configs: - database.hostname: "localhost" - database.port: "1521" - database.user: "dbzuser" - database.password: "dbz" - database.dbname: "XE" - database.server.name: "XE" - schema.exclude.list: "system,dbzuser" - snapshot.mode: "initial" - topic.namespace: "public/default" - task.class: "io.debezium.connector.oracle.OracleConnectorTask" - value.converter: "org.apache.kafka.connect.json.JsonConverter" - key.converter: "org.apache.kafka.connect.json.JsonConverter" - typeClassName: "org.apache.pulsar.common.schema.KeyValue" - database.history: "org.apache.pulsar.io.debezium.PulsarDatabaseHistory" - database.tcpKeepAlive: "true" - decimal.handling.mode: "double" - database.history.pulsar.topic: "debezium-oracle-source-history-topic" - database.history.pulsar.service.url: "pulsar://127.0.0.1:6650" - -``` - -For the full list of configuration properties supported by Debezium, see [Debezium Connector for Oracle](https://debezium.io/documentation/reference/1.5/connectors/oracle.html#oracle-connector-properties). - -## Example of Microsoft SQL - -### Configuration - -Debezium [requires](https://debezium.io/documentation/reference/1.5/connectors/sqlserver.html#sqlserver-overview) SQL Server with CDC enabled. 
-The steps outlined in the [documentation](https://debezium.io/documentation/reference/1.5/connectors/sqlserver.html#setting-up-sqlserver) are used in the [integration test](https://github.com/apache/pulsar/blob/master/tests/integration/src/test/java/org/apache/pulsar/tests/integration/io/sources/debezium/DebeziumMsSqlSourceTester.java).
-For more information, see [Enable and disable change data capture in Microsoft SQL Server](https://docs.microsoft.com/en-us/sql/relational-databases/track-changes/enable-and-disable-change-data-capture-sql-server).
-
-As with other connectors, you can use JSON or YAML to configure the connector.
-
-* JSON
-
-```json
-
-{
-    "database.hostname": "localhost",
-    "database.port": "1433",
-    "database.user": "sa",
-    "database.password": "MyP@ssw0rd!",
-    "database.dbname": "MyTestDB",
-    "database.server.name": "mssql",
-    "snapshot.mode": "schema_only",
-    "topic.namespace": "public/default",
-    "task.class": "io.debezium.connector.sqlserver.SqlServerConnectorTask",
-    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
-    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
-    "typeClassName": "org.apache.pulsar.common.schema.KeyValue",
-    "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory",
-    "database.tcpKeepAlive": "true",
-    "decimal.handling.mode": "double",
-    "database.history.pulsar.topic": "debezium-mssql-source-history-topic",
-    "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650"
-}
-
-```
-
-* YAML
-
-```yaml
-
-tenant: "public"
-namespace: "default"
-name: "debezium-mssql-source"
-topicName: "debezium-mssql-topic"
-parallelism: 1
-
-className: "org.apache.pulsar.io.debezium.mssql.DebeziumMsSqlSource"
-database.dbname: "mssql"
-
-configs:
-  database.hostname: "localhost"
-  database.port: "1433"
-  database.user: "sa"
-  database.password: "MyP@ssw0rd!"
-  database.dbname: "MyTestDB"
-  database.server.name: "mssql"
-  snapshot.mode: "schema_only"
-  topic.namespace: "public/default"
-  task.class: "io.debezium.connector.sqlserver.SqlServerConnectorTask"
-  value.converter: "org.apache.kafka.connect.json.JsonConverter"
-  key.converter: "org.apache.kafka.connect.json.JsonConverter"
-  typeClassName: "org.apache.pulsar.common.schema.KeyValue"
-  database.history: "org.apache.pulsar.io.debezium.PulsarDatabaseHistory"
-  database.tcpKeepAlive: "true"
-  decimal.handling.mode: "double"
-  database.history.pulsar.topic: "debezium-mssql-source-history-topic"
-  database.history.pulsar.service.url: "pulsar://127.0.0.1:6650"
-
-```
-
-For the full list of configuration properties supported by Debezium, see [Debezium Connector for MS SQL](https://debezium.io/documentation/reference/1.5/connectors/sqlserver.html#sqlserver-connector-properties). 
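-
-The MS SQL connector can then be started the same way as the MySQL and PostgreSQL examples earlier on this page. The sketch below is an illustration under stated assumptions rather than a verified walkthrough: the nar file name follows the naming pattern of the other Debezium connectors, and the credentials are the sample values from the configuration above.
-
-```bash
-
-$ bin/pulsar-admin source localrun \
---archive connectors/pulsar-io-debezium-mssql-@pulsar:version@.nar \
---name debezium-mssql-source \
---destination-topic-name debezium-mssql-topic \
---tenant public \
---namespace default \
---source-config '{"database.hostname": "localhost","database.port": "1433","database.user": "sa","database.password": "MyP@ssw0rd!","database.dbname": "MyTestDB","database.server.name": "mssql","snapshot.mode": "schema_only","database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory","database.history.pulsar.topic": "debezium-mssql-source-history-topic","database.history.pulsar.service.url": "pulsar://127.0.0.1:6650"}'
-
-```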
- -## FAQ - -### Debezium postgres connector will hang when create snap - -```$xslt - -#18 prio=5 os_prio=31 tid=0x00007fd83096f800 nid=0xa403 waiting on condition [0x000070000f534000] - java.lang.Thread.State: WAITING (parking) - at sun.misc.Unsafe.park(Native Method) - - parking to wait for <0x00000007ab025a58> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject) - at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) - at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) - at java.util.concurrent.LinkedBlockingDeque.putLast(LinkedBlockingDeque.java:396) - at java.util.concurrent.LinkedBlockingDeque.put(LinkedBlockingDeque.java:649) - at io.debezium.connector.base.ChangeEventQueue.enqueue(ChangeEventQueue.java:132) - at io.debezium.connector.postgresql.PostgresConnectorTask$Lambda$203/385424085.accept(Unknown Source) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.sendCurrentRecord(RecordsSnapshotProducer.java:402) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.readTable(RecordsSnapshotProducer.java:321) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$takeSnapshot$6(RecordsSnapshotProducer.java:226) - at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$240/1347039967.accept(Unknown Source) - at io.debezium.jdbc.JdbcConnection.queryWithBlockingConsumer(JdbcConnection.java:535) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.takeSnapshot(RecordsSnapshotProducer.java:224) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$start$0(RecordsSnapshotProducer.java:87) - at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$206/589332928.run(Unknown Source) - at java.util.concurrent.CompletableFuture.uniRun(CompletableFuture.java:705) - at java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:717) - at java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2010) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.start(RecordsSnapshotProducer.java:87) - at io.debezium.connector.postgresql.PostgresConnectorTask.start(PostgresConnectorTask.java:126) - at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:47) - at org.apache.pulsar.io.kafka.connect.KafkaConnectSource.open(KafkaConnectSource.java:127) - at org.apache.pulsar.io.debezium.DebeziumSource.open(DebeziumSource.java:100) - at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupInput(JavaInstanceRunnable.java:690) - at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupJavaInstance(JavaInstanceRunnable.java:200) - at org.apache.pulsar.functions.instance.JavaInstanceRunnable.run(JavaInstanceRunnable.java:230) - at java.lang.Thread.run(Thread.java:748) - -``` - -If you encounter the above problems in synchronizing data, please refer to [this](https://github.com/apache/pulsar/issues/4075) and add the following configuration to the configuration file: - -```$xslt - -max.queue.size= - -``` - diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/io-debug.md b/site2/website/versioned_docs/version-2.9.1-deprecated/io-debug.md deleted file mode 100644 index 844e101d00d2a7..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/io-debug.md +++ /dev/null @@ -1,407 +0,0 @@ ---- -id: io-debug -title: How to debug Pulsar connectors -sidebar_label: "Debug" -original_id: io-debug ---- -This guide explains how to debug connectors in 
localrun or cluster mode and gives a debugging checklist. -To better demonstrate how to debug Pulsar connectors, here takes a Mongo sink connector as an example. - -**Deploy a Mongo sink environment** -1. Start a Mongo service. - - ```bash - - docker pull mongo:4 - docker run -d -p 27017:27017 --name pulsar-mongo -v $PWD/data:/data/db mongo:4 - - ``` - -2. Create a DB and a collection. - - ```bash - - docker exec -it pulsar-mongo /bin/bash - mongo - > use pulsar - > db.createCollection('messages') - > exit - - ``` - -3. Start Pulsar standalone. - - ```bash - - docker pull apachepulsar/pulsar:2.4.0 - docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --link pulsar-mongo --name pulsar-mongo-standalone apachepulsar/pulsar:2.4.0 bin/pulsar standalone - - ``` - -4. Configure the Mongo sink with the `mongo-sink-config.yaml` file. - - ```bash - - configs: - mongoUri: "mongodb://pulsar-mongo:27017" - database: "pulsar" - collection: "messages" - batchSize: 2 - batchTimeMs: 500 - - ``` - - ```bash - - docker cp mongo-sink-config.yaml pulsar-mongo-standalone:/pulsar/ - - ``` - -5. Download the Mongo sink nar package. - - ```bash - - docker exec -it pulsar-mongo-standalone /bin/bash - curl -O http://apache.01link.hk/pulsar/pulsar-2.4.0/connectors/pulsar-io-mongo-2.4.0.nar - - ``` - -## Debug in localrun mode -Start the Mongo sink in localrun mode using the `localrun` command. -:::tip - -For more information about the `localrun` command, see [`localrun`](reference-connector-admin.md/#localrun-1). - -::: - -```bash - -./bin/pulsar-admin sinks localrun \ ---archive pulsar-io-mongo-2.4.0.nar \ ---tenant public --namespace default \ ---inputs test-mongo \ ---name pulsar-mongo-sink \ ---sink-config-file mongo-sink-config.yaml \ ---parallelism 1 - -``` - -### Use connector log -Use one of the following methods to get a connector log in localrun mode: -* After executing the `localrun` command, the **log is automatically printed on the console**. -* The log is located at: - - ```bash - - logs/functions/tenant/namespace/function-name/function-name-instance-id.log - - ``` - - **Example** - - The path of the Mongo sink connector is: - - ```bash - - logs/functions/public/default/pulsar-mongo-sink/pulsar-mongo-sink-0.log - - ``` - -To clearly explain the log information, here breaks down the large block of information into small blocks and add descriptions for each block. -* This piece of log information shows the storage path of the nar package after decompression. - - ``` - - 08:21:54.132 [main] INFO org.apache.pulsar.common.nar.NarClassLoader - Created class loader with paths: [file:/tmp/pulsar-nar/pulsar-io-mongo-2.4.0.nar-unpacked/, file:/tmp/pulsar-nar/pulsar-io-mongo-2.4.0.nar-unpacked/META-INF/bundled-dependencies/, - - ``` - - :::tip - - If `class cannot be found` exception is thrown, check whether the nar file is decompressed in the folder `file:/tmp/pulsar-nar/pulsar-io-mongo-2.4.0.nar-unpacked/META-INF/bundled-dependencies/` or not. - - ::: - -* This piece of log information illustrates the basic information about the Mongo sink connector, such as tenant, namespace, name, parallelism, resources, and so on, which can be used to **check whether the Mongo sink connector is configured correctly or not**. 
- - ```bash - - 08:21:55.390 [main] INFO org.apache.pulsar.functions.runtime.ThreadRuntime - ThreadContainer starting function with instance config InstanceConfig(instanceId=0, functionId=853d60a1-0c48-44d5-9a5c-6917386476b2, functionVersion=c2ce1458-b69e-4175-88c0-a0a856a2be8c, functionDetails=tenant: "public" - namespace: "default" - name: "pulsar-mongo-sink" - className: "org.apache.pulsar.functions.api.utils.IdentityFunction" - autoAck: true - parallelism: 1 - source { - typeClassName: "[B" - inputSpecs { - key: "test-mongo" - value { - } - } - cleanupSubscription: true - } - sink { - className: "org.apache.pulsar.io.mongodb.MongoSink" - configs: "{\"mongoUri\":\"mongodb://pulsar-mongo:27017\",\"database\":\"pulsar\",\"collection\":\"messages\",\"batchSize\":2,\"batchTimeMs\":500}" - typeClassName: "[B" - } - resources { - cpu: 1.0 - ram: 1073741824 - disk: 10737418240 - } - componentType: SINK - , maxBufferedTuples=1024, functionAuthenticationSpec=null, port=38459, clusterName=local) - - ``` - -* This piece of log information demonstrates the status of the connections to Mongo and configuration information. - - ```bash - - 08:21:56.231 [cluster-ClusterId{value='5d6396a3c9e77c0569ff00eb', description='null'}-pulsar-mongo:27017] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:1, serverValue:8}] to pulsar-mongo:27017 - 08:21:56.326 [cluster-ClusterId{value='5d6396a3c9e77c0569ff00eb', description='null'}-pulsar-mongo:27017] INFO org.mongodb.driver.cluster - Monitor thread successfully connected to server with description ServerDescription{address=pulsar-mongo:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[4, 2, 0]}, minWireVersion=0, maxWireVersion=8, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=89058800} - - ``` - -* This piece of log information explains the configuration of consumers and clients, including the topic name, subscription name, subscription type, and so on. 
- - ```bash - - 08:21:56.719 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl - Starting Pulsar consumer status recorder with config: { - "topicNames" : [ "test-mongo" ], - "topicsPattern" : null, - "subscriptionName" : "public/default/pulsar-mongo-sink", - "subscriptionType" : "Shared", - "receiverQueueSize" : 1000, - "acknowledgementsGroupTimeMicros" : 100000, - "negativeAckRedeliveryDelayMicros" : 60000000, - "maxTotalReceiverQueueSizeAcrossPartitions" : 50000, - "consumerName" : null, - "ackTimeoutMillis" : 0, - "tickDurationMillis" : 1000, - "priorityLevel" : 0, - "cryptoFailureAction" : "CONSUME", - "properties" : { - "application" : "pulsar-sink", - "id" : "public/default/pulsar-mongo-sink", - "instance_id" : "0" - }, - "readCompacted" : false, - "subscriptionInitialPosition" : "Latest", - "patternAutoDiscoveryPeriod" : 1, - "regexSubscriptionMode" : "PersistentOnly", - "deadLetterPolicy" : null, - "autoUpdatePartitions" : true, - "replicateSubscriptionState" : false, - "resetIncludeHead" : false - } - 08:21:56.726 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl - Pulsar client config: { - "serviceUrl" : "pulsar://localhost:6650", - "authPluginClassName" : null, - "authParams" : null, - "operationTimeoutMs" : 30000, - "statsIntervalSeconds" : 60, - "numIoThreads" : 1, - "numListenerThreads" : 1, - "connectionsPerBroker" : 1, - "useTcpNoDelay" : true, - "useTls" : false, - "tlsTrustCertsFilePath" : null, - "tlsAllowInsecureConnection" : false, - "tlsHostnameVerificationEnable" : false, - "concurrentLookupRequest" : 5000, - "maxLookupRequest" : 50000, - "maxNumberOfRejectedRequestPerConnection" : 50, - "keepAliveIntervalSeconds" : 30, - "connectionTimeoutMs" : 10000, - "requestTimeoutMs" : 60000, - "defaultBackoffIntervalNanos" : 100000000, - "maxBackoffIntervalNanos" : 30000000000 - } - - ``` - -## Debug in cluster mode -You can use the following methods to debug a connector in cluster mode: -* [Use connector log](#use-connector-log) -* [Use admin CLI](#use-admin-cli) -### Use connector log -In cluster mode, multiple connectors can run on a worker. To find the log path of a specified connector, use the `workerId` to locate the connector log. -### Use admin CLI -Pulsar admin CLI helps you debug Pulsar connectors with the following subcommands: -* [`get`](#get) - -* [`status`](#status) -* [`topics stats`](#topics-stats) - -**Create a Mongo sink** - -```bash - -./bin/pulsar-admin sinks create \ ---archive pulsar-io-mongo-2.4.0.nar \ ---tenant public \ ---namespace default \ ---inputs test-mongo \ ---name pulsar-mongo-sink \ ---sink-config-file mongo-sink-config.yaml \ ---parallelism 1 - -``` - -### `get` -Use the `get` command to get the basic information about the Mongo sink connector, such as tenant, namespace, name, parallelism, and so on. 
-
-```bash
-
-./bin/pulsar-admin sinks get --tenant public --namespace default --name pulsar-mongo-sink
-{
-  "tenant": "public",
-  "namespace": "default",
-  "name": "pulsar-mongo-sink",
-  "className": "org.apache.pulsar.io.mongodb.MongoSink",
-  "inputSpecs": {
-    "test-mongo": {
-      "isRegexPattern": false
-    }
-  },
-  "configs": {
-    "mongoUri": "mongodb://pulsar-mongo:27017",
-    "database": "pulsar",
-    "collection": "messages",
-    "batchSize": 2.0,
-    "batchTimeMs": 500.0
-  },
-  "parallelism": 1,
-  "processingGuarantees": "ATLEAST_ONCE",
-  "retainOrdering": false,
-  "autoAck": true
-}
-
-```
-
-:::tip
-
-For more information about the `get` command, see [`get`](reference-connector-admin.md/#get-1).
-
-:::
-
-### `status`
-Use the `status` command to get the current status of the Mongo sink connector, such as the number of instances, the number of running instances, the instance ID, the worker ID, and so on.
-
-```bash
-
-./bin/pulsar-admin sinks status \
---tenant public \
---namespace default \
---name pulsar-mongo-sink
-{
-"numInstances" : 1,
-"numRunning" : 1,
-"instances" : [ {
-  "instanceId" : 0,
-  "status" : {
-    "running" : true,
-    "error" : "",
-    "numRestarts" : 0,
-    "numReadFromPulsar" : 0,
-    "numSystemExceptions" : 0,
-    "latestSystemExceptions" : [ ],
-    "numSinkExceptions" : 0,
-    "latestSinkExceptions" : [ ],
-    "numWrittenToSink" : 0,
-    "lastReceivedTime" : 0,
-    "workerId" : "c-standalone-fw-5d202832fd18-8080"
-  }
-} ]
-}
-
-```
-
-:::tip
-
-For more information about the `status` command, see [`status`](reference-connector-admin.md/#status-1).
-If there are multiple connectors running on a worker, `workerId` can locate the worker on which the specified connector is running.
-
-:::
-
-### `topics stats`
-Use the `topics stats` command to get the stats for a topic and its connected producers and consumers, such as whether the topic has received messages or not, whether there is a backlog of messages or not, the available permits and other key information. All rates are computed over a 1-minute window and are relative to the last completed 1-minute period.
-
-```bash
-
-./bin/pulsar-admin topics stats test-mongo
-{
-  "msgRateIn" : 0.0,
-  "msgThroughputIn" : 0.0,
-  "msgRateOut" : 0.0,
-  "msgThroughputOut" : 0.0,
-  "averageMsgSize" : 0.0,
-  "storageSize" : 1,
-  "publishers" : [ ],
-  "subscriptions" : {
-    "public/default/pulsar-mongo-sink" : {
-      "msgRateOut" : 0.0,
-      "msgThroughputOut" : 0.0,
-      "msgRateRedeliver" : 0.0,
-      "msgBacklog" : 0,
-      "blockedSubscriptionOnUnackedMsgs" : false,
-      "msgDelayed" : 0,
-      "unackedMessages" : 0,
-      "type" : "Shared",
-      "msgRateExpired" : 0.0,
-      "consumers" : [ {
-        "msgRateOut" : 0.0,
-        "msgThroughputOut" : 0.0,
-        "msgRateRedeliver" : 0.0,
-        "consumerName" : "dffdd",
-        "availablePermits" : 999,
-        "unackedMessages" : 0,
-        "blockedConsumerOnUnackedMsgs" : false,
-        "metadata" : {
-          "instance_id" : "0",
-          "application" : "pulsar-sink",
-          "id" : "public/default/pulsar-mongo-sink"
-        },
-        "connectedSince" : "2019-08-26T08:48:07.582Z",
-        "clientVersion" : "2.4.0",
-        "address" : "/172.17.0.3:57790"
-      } ],
-      "isReplicated" : false
-    }
-  },
-  "replication" : { },
-  "deduplicationStatus" : "Disabled"
-}
-
-```
-
-:::tip
-
-For more information about the `topics stats` command, see [`topics stats`](http://pulsar.apache.org/docs/en/pulsar-admin/#stats-1).
-
-:::
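-
-For example, to quickly check whether the sink's subscription is falling behind, you can extract the `msgBacklog` field from the stats output. The sketch below assumes the [jq](https://stedolan.github.io/jq/) command-line JSON processor is installed; it is not part of Pulsar.
-
-```bash
-
-# Print only the backlog of the sink's subscription from the stats JSON
-./bin/pulsar-admin topics stats test-mongo | \
-  jq '.subscriptions["public/default/pulsar-mongo-sink"].msgBacklog'
-
-```
-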
-## Checklist
-This checklist indicates the major areas to check when you debug connectors. Use it both as a reminder of what to look for during a thorough review and as a tool to evaluate the status of connectors.
-* Does Pulsar start successfully?
-
-* Does the external service run normally?
-
-* Is the NAR package complete?
-
-* Is the connector configuration file correct?
-
-* In localrun mode, run a connector and check the printed information (connector log) on the console.
-
-* In cluster mode:
-
-   * Use the `get` command to get the basic information.
-
-   * Use the `status` command to get the current status.
-
-   * Use the `topics stats` command to get the stats for a specified topic and its connected producers and consumers.
-
-   * Check the connector log.
-
-* Log in to the external system and verify the result.
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/io-develop.md b/site2/website/versioned_docs/version-2.9.1-deprecated/io-develop.md
deleted file mode 100644
index d6f4f8261ac820..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/io-develop.md
+++ /dev/null
@@ -1,421 +0,0 @@
----
-id: io-develop
-title: How to develop Pulsar connectors
-sidebar_label: "Develop"
-original_id: io-develop
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-This guide describes how to develop Pulsar connectors to move data
-between Pulsar and other systems.
-
-Pulsar connectors are special [Pulsar Functions](functions-overview.md), so creating
-a Pulsar connector is similar to creating a Pulsar function.
-
-Pulsar connectors come in two types:
-
-| Type | Description | Example
-|---|---|---
-{@inject: github:Source:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java}|Import data from another system to Pulsar.|[RabbitMQ source connector](io-rabbitmq.md) imports the messages of a RabbitMQ queue to a Pulsar topic.
-{@inject: github:Sink:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java}|Export data from Pulsar to another system.|[Kinesis sink connector](io-kinesis.md) exports the messages of a Pulsar topic to a Kinesis stream.
-
-## Develop
-
-You can develop Pulsar source connectors and sink connectors.
-
-### Source
-
-To develop a source connector, you need to implement the {@inject: github:Source:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java}
-interface, which means implementing the {@inject: github:open:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} method and the {@inject: github:read:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} method.
-
-1. Implement the {@inject: github:open:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} method.
-
-   ```java
-
-   /**
-    * Open connector with configuration
-    *
-    * @param config initialization config
-    * @param sourceContext
-    * @throws Exception IO type exceptions when opening a connector
-    */
-   void open(final Map<String, Object> config, SourceContext sourceContext) throws Exception;
-
-   ```
-
-   This method is called when the source connector is initialized.
-
-   In this method, you can retrieve all connector specific settings through the passed-in `config` parameter and initialize all necessary resources.
-
-   For example, a Kafka connector can create a Kafka client in this `open` method.
-
-   Besides, Pulsar runtime also provides a `SourceContext` for the
-   connector to access runtime resources for tasks like collecting metrics. The implementation can save the `SourceContext` for future use.
-
-2. Implement the {@inject: github:read:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} method.
-
-   ```java
-
-   /**
-    * Reads the next message from source.&#13;
-    * If source does not have any new messages, this call should block.
-    * @return next message from source. The return result should never be null
-    * @throws Exception
-    */
-   Record<T> read() throws Exception;
-
-   ```
-
-   If there is nothing to return, the implementation should block rather than return `null`.
-
-   The returned {@inject: github:Record:/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/Record.java} should encapsulate the following information, which is needed by the Pulsar IO runtime.
-
-   * {@inject: github:Record:/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/Record.java} should provide the following variables:
-
-     |Variable|Required|Description
-     |---|---|---
-     `TopicName`|No|The Pulsar topic from which the record originated.
-     `Key`|No| Messages can optionally be tagged with keys.

    For more information, see [Routing modes](concepts-messaging.md#routing-modes).|
-     `Value`|Yes|Actual data of the record.
-     `EventTime`|No|Event time of the record from the source.
-     `PartitionId`|No| If the record originates from a partitioned source, it returns its `PartitionId`.&#13;

    `PartitionId` is used as a part of the unique identifier by Pulsar IO runtime to deduplicate messages and achieve the exactly-once processing guarantee.
-     `RecordSequence`|No|If the record originates from a sequential source, it returns its `RecordSequence`.&#13;

    `RecordSequence` is used as a part of the unique identifier by Pulsar IO runtime to deduplicate messages and achieve the exactly-once processing guarantee.
-     `Properties` |No| If the record carries user-defined properties, it returns those properties.
-     `DestinationTopic`|No|The topic to which the message should be written.
-     `Message`|No|A class which carries data sent by users.&#13;

    For more information, see [Message.java](https://github.com/apache/pulsar/blob/master/pulsar-client-api/src/main/java/org/apache/pulsar/client/api/Message.java).|
-
-   * {@inject: github:Record:/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/Record.java} should provide the following methods:
-
-     Method|Description
-     |---|---
-     `ack` |Acknowledge that the record is fully processed.
-     `fail`|Indicate that the record fails to be processed.
-
-## Handle schema information
-
-Pulsar IO automatically handles the schema and provides a strongly typed API based on Java generics.
-If you know the schema type that you are producing, you can declare the Java class relative to that type in your source declaration.
-
-```
-
-public class MySource implements Source<MyClass> {
-    public Record<MyClass> read() {}
-}
-
-```
-
-If you want to implement a source that works with any schema, you can go with `byte[]` (or `ByteBuffer`) and use Schema.AUTO_PRODUCE_BYTES().
-
-```
-
-public class MySource implements Source<byte[]> {
-    public Record<byte[]> read() {
-
-        Schema wantedSchema = ....
-        Record<byte[]> myRecord = new MyRecordImplementation();
-        ....
-    }
-    class MyRecordImplementation implements Record<byte[]> {
-        public byte[] getValue() {
-            return ....encoded byte[]...that represents the value
-        }
-        public Schema<byte[]> getSchema() {
-            return Schema.AUTO_PRODUCE_BYTES(wantedSchema);
-        }
-    }
-}
-
-```
-
-To handle the `KeyValue` type properly, follow the guidelines for your record implementation:
-- It must implement the {@inject: github:KVRecord:/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/KVRecord.java} interface and implement `getKeySchema`, `getValueSchema`, and `getKeyValueEncodingType`
-- It must return a `KeyValue` object as `Record.getValue()`
-- It may return null in `Record.getSchema()`
-
-When Pulsar IO runtime encounters a `KVRecord`, it automatically applies the following changes:
-- Set the `KeyValueSchema` properly
-- Encode the Message Key and the Message Value according to the `KeyValueEncoding` (SEPARATED or INLINE)
-
-:::tip
-
-For more information about **how to create a source connector**, see {@inject: github:KafkaSource:/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSource.java}.
-
-:::
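-
-Putting the pieces together, below is a minimal sketch of a complete source. It is illustrative only: the class name `CounterSource` and its trivial in-memory data generation are not part of the Pulsar API.
-
-```java
-
-import java.util.Map;
-import org.apache.pulsar.functions.api.Record;
-import org.apache.pulsar.io.core.Source;
-import org.apache.pulsar.io.core.SourceContext;
-
-public class CounterSource implements Source<String> {
-    private long counter = 0;
-
-    @Override
-    public void open(Map<String, Object> config, SourceContext sourceContext) throws Exception {
-        // Read connector-specific settings from `config` and initialize resources here.
-    }
-
-    @Override
-    public Record<String> read() throws Exception {
-        // A real source should block until data is available; here we just pace the loop.
-        Thread.sleep(50);
-        final String value = "message-" + counter++;
-        // Record#getValue is the only method you must implement; the others have defaults.
-        return new Record<String>() {
-            @Override
-            public String getValue() {
-                return value;
-            }
-        };
-    }
-
-    @Override
-    public void close() throws Exception {
-        // Release any resources acquired in open().
-    }
-}
-
-```
-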
-### Sink
-
-Developing a sink connector **is similar to** developing a source connector, that is, you need to implement the {@inject: github:Sink:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} interface, which means implementing the {@inject: github:open:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} method and the {@inject: github:write:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} method.
-
-1. Implement the {@inject: github:open:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} method.
-
-   ```java
-
-   /**
-    * Open connector with configuration
-    *
-    * @param config initialization config
-    * @param sinkContext
-    * @throws Exception IO type exceptions when opening a connector
-    */
-   void open(final Map<String, Object> config, SinkContext sinkContext) throws Exception;
-
-   ```
-
-2. Implement the {@inject: github:write:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} method.
-
-   ```java
-
-   /**
-    * Write a message to Sink
-    * @param record record to write to sink
-    * @throws Exception
-    */
-   void write(Record<T> record) throws Exception;
-
-   ```
-
-   During the implementation, you can decide how to write the `Value` and
-   the `Key` to the actual sink, and leverage all the provided information such as
-   `PartitionId` and `RecordSequence` to achieve different processing guarantees.
-
-   You also need to ack records (if messages are sent successfully) or fail records (if messages fail to send).
-
-## Handle schema information
-
-Pulsar IO automatically handles the schema and provides a strongly typed API based on Java generics.
-If you know the schema type that you are consuming, you can declare the Java class relative to that type in your sink declaration.
-
-```
-
-public class MySink implements Sink<MyClass> {
-    public void write(Record<MyClass> record) {}
-}
-
-```
-
-If you want to implement a sink that works with any schema, you can go with the special GenericObject interface.
-
-```
-
-public class MySink implements Sink<GenericObject> {
-    public void write(Record<GenericObject> record) {
-        Schema<GenericObject> schema = record.getSchema();
-        GenericObject genericObject = record.getValue();
-        if (genericObject != null) {
-            SchemaType type = genericObject.getSchemaType();
-            Object nativeObject = genericObject.getNativeObject();
-            ...
-        }
-        ....
-    }
-}
-
-```
-
-In the case of AVRO, JSON, and Protobuf records (schemaType=AVRO,JSON,PROTOBUF_NATIVE), you can cast the
-`genericObject` variable to `GenericRecord` and use the `getFields()` and `getField()` API.
-You are able to access the native AVRO record using `genericObject.getNativeObject()`.
-
-In the case of the KeyValue type, you can access both the schema for the key and the schema for the value using this code.
-
-```
-
-public class MySink implements Sink<GenericObject> {
-    public void write(Record<GenericObject> record) {
-        Schema<GenericObject> schema = record.getSchema();
-        GenericObject genericObject = record.getValue();
-        SchemaType type = genericObject.getSchemaType();
-        Object nativeObject = genericObject.getNativeObject();
-        if (type == SchemaType.KEY_VALUE) {
-            KeyValue keyValue = (KeyValue) nativeObject;
-            Object key = keyValue.getKey();
-            Object value = keyValue.getValue();
-
-            KeyValueSchema keyValueSchema = (KeyValueSchema) schema;
-            Schema keySchema = keyValueSchema.getKeySchema();
-            Schema valueSchema = keyValueSchema.getValueSchema();
-        }
-        ....
-    }
-}
-
-```
-
-## Test
-
-Testing connectors can be challenging because Pulsar IO connectors interact with two systems
-that may be difficult to mock: Pulsar and the system to which the connector is connecting.
-
-It is recommended to write dedicated tests that verify the connector functionality as below
-while mocking the external service.
-
-### Unit test
-
-You can create unit tests for your connector; a minimal sketch is shown after this section.
-
-### Integration test
-
-Once you have written sufficient unit tests, you can add
-separate integration tests to verify end-to-end functionality.
-
-Pulsar uses [testcontainers](https://www.testcontainers.org/) **for all integration tests**.
-
-:::tip
-
-For more information about **how to create integration tests for Pulsar connectors**, see {@inject: github:IntegrationTests:/tests/integration/src/test/java/org/apache/pulsar/tests/integration/io}.
-
-:::
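-
-As referenced above, here is a minimal unit-test sketch for a hypothetical `StringSink` (a `Sink<String>` implementation). It assumes JUnit and Mockito are available on the test classpath; the class and method names are illustrative.
-
-```java
-
-import static org.mockito.Mockito.mock;
-import static org.mockito.Mockito.verify;
-import static org.mockito.Mockito.when;
-
-import java.util.HashMap;
-import org.apache.pulsar.functions.api.Record;
-import org.apache.pulsar.io.core.SinkContext;
-import org.junit.Test;
-
-public class StringSinkTest {
-
-    @Test
-    @SuppressWarnings("unchecked")
-    public void testWriteAcksRecord() throws Exception {
-        StringSink sink = new StringSink();
-        // Open the sink with an empty config and a mocked context.
-        sink.open(new HashMap<>(), mock(SinkContext.class));
-
-        Record<String> record = mock(Record.class);
-        when(record.getValue()).thenReturn("hello");
-
-        sink.write(record);
-        // A well-behaved sink acks records that were written successfully.
-        verify(record).ack();
-
-        sink.close();
-    }
-}
-
-```
-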
-## Package
-
-Once you've developed and tested your connector, you need to package it so that it can be submitted
-to a [Pulsar Functions](functions-overview.md) cluster.
-
-There are two methods to package a connector for the Pulsar Functions runtime: [NAR](#nar) and [uber JAR](#uber-jar).
-
-:::note
-
-If you plan to package and distribute your connector for others to use, you are obligated to
-license and copyright your own code properly. Remember to add the license and copyright to
-all libraries your code uses and to your distribution.
-
-If you use the [NAR](#nar) method, the NAR plugin
-automatically creates a `DEPENDENCIES` file in the generated NAR package, including the proper
-licensing and copyrights of all libraries of your connector.
-
-:::
-
-### NAR
-
-**NAR** stands for NiFi Archive, which is a custom packaging mechanism used by Apache NiFi to provide
-a bit of Java ClassLoader isolation.
-
-:::tip
-
-For more information about **how NAR works**, see [here](https://medium.com/hashmapinc/nifi-nar-files-explained-14113f7796fd).
-
-:::
-
-Pulsar uses the same mechanism for packaging **all** [built-in connectors](io-connectors.md).
-
-The easiest approach to package a Pulsar connector is to create a NAR package using [nifi-nar-maven-plugin](https://mvnrepository.com/artifact/org.apache.nifi/nifi-nar-maven-plugin).
-
-Include this [nifi-nar-maven-plugin](https://mvnrepository.com/artifact/org.apache.nifi/nifi-nar-maven-plugin) in your maven project for your connector as below.
-
-```xml
-
-<plugins>
-  <plugin>
-    <groupId>org.apache.nifi</groupId>
-    <artifactId>nifi-nar-maven-plugin</artifactId>
-    <version>1.2.0</version>
-  </plugin>
-</plugins>
-
-```
-
-You must also create a `resources/META-INF/services/pulsar-io.yaml` file with the following contents:
-
-```yaml
-
-name: connector name
-description: connector description
-sourceClass: fully qualified class name (only if source connector)
-sinkClass: fully qualified class name (only if sink connector)
-
-```
-
-For Gradle users, there is a [Gradle Nar plugin available on the Gradle Plugin Portal](https://plugins.gradle.org/plugin/io.github.lhotari.gradle-nar-plugin).
-
-:::tip
-
-For more information about **how to use NAR for Pulsar connectors**, see {@inject: github:TwitterFirehose:/pulsar-io/twitter/pom.xml}.
-
-:::
-
-### Uber JAR
-
-An alternative approach is to create an **uber JAR** that contains all of the connector's JAR files
-and other resource files. No internal directory structure is necessary.
-
-You can use [maven-shade-plugin](https://maven.apache.org/plugins/maven-shade-plugin/examples/includes-excludes.html) to create an uber JAR as below:
-
-```xml
-
-<plugin>
-  <groupId>org.apache.maven.plugins</groupId>
-  <artifactId>maven-shade-plugin</artifactId>
-  <version>3.1.1</version>
-  <executions>
-    <execution>
-      <phase>package</phase>
-      <goals>
-        <goal>shade</goal>
-      </goals>
-      <configuration>
-        <filters>
-          <filter>
-            <artifact>*:*</artifact>
-          </filter>
-        </filters>
-      </configuration>
-    </execution>
-  </executions>
-</plugin>
-
-```
-
-## Monitor
-
-Pulsar connectors enable you to move data in and out of Pulsar easily. It is important to ensure that the running connectors are healthy at all times. You can monitor Pulsar connectors that have been deployed with the following methods:
-
-- Check the metrics provided by Pulsar.
-
-  Pulsar connectors expose the metrics that can be collected and used for monitoring the health of **Java** connectors. You can check the metrics by following the [monitoring](deploy-monitoring.md) guide.
-
-- Set and check your customized metrics.
-
-  In addition to the metrics provided by Pulsar, Pulsar allows you to customize metrics for **Java** connectors. Function workers collect user-defined metrics to Prometheus automatically and you can check them in Grafana.
-
-Here is an example of how to customize metrics for a Java connector.
-
-````mdx-code-block
-<Tabs defaultValue="Java"
-  values={[{"label":"Java","value":"Java"}]}>
-<TabItem value="Java">
-
-```
-
-public class TestMetricSink implements Sink<String> {
-
-    @Override
-    public void open(Map<String, Object> config, SinkContext sinkContext) throws Exception {
-        sinkContext.recordMetric("foo", 1);
-    }
-
-    @Override
-    public void write(Record<String> record) throws Exception {
-
-    }
-
-    @Override
-    public void close() throws Exception {
-
-    }
-}
-
-```
-
-</TabItem>
-</Tabs>
-````
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/io-dynamodb-source.md b/site2/website/versioned_docs/version-2.9.1-deprecated/io-dynamodb-source.md
deleted file mode 100644
index ce585786eb0428..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/io-dynamodb-source.md
+++ /dev/null
@@ -1,80 +0,0 @@
----
-id: io-dynamodb-source
-title: AWS DynamoDB source connector
-sidebar_label: "AWS DynamoDB source connector"
-original_id: io-dynamodb-source
----
-
-The DynamoDB source connector pulls data from DynamoDB table streams and persists data into Pulsar.
-
-This connector uses the [DynamoDB Streams Kinesis Adapter](https://github.com/awslabs/dynamodb-streams-kinesis-adapter),
-which uses the [Kinesis Consumer Library](https://github.com/awslabs/amazon-kinesis-client) (KCL) to do the actual
-consuming of messages. The KCL uses DynamoDB to track state for consumers and requires CloudWatch access to log metrics.
-
-
-## Configuration
-
-The configuration of the DynamoDB source connector has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-`initialPositionInStream`|InitialPositionInStream|false|LATEST|The position where the connector starts from.&#13;

    Below are the available options:

  1. `AT_TIMESTAMP`: start from the record at or after the specified timestamp.&#13;

  2. `LATEST`: start after the most recent data record.&#13;

  3. `TRIM_HORIZON`: start from the oldest available data record.&#13;
-`startAtTime`|Date|false|" " (empty string)|If set to `AT_TIMESTAMP`, it specifies the point in time to start consumption.
-`applicationName`|String|false|Pulsar IO connector|The name of the KCL application.  Must be unique, as it is used to define the table name for the dynamo table used for state tracking.&#13;

    By default, the application name is included in the user agent string used to make AWS requests. This can assist with troubleshooting, for example, distinguishing requests made by separate connector instances.
-`checkpointInterval`|long|false|60000|The frequency of the KCL checkpoint in milliseconds.
-`backoffTime`|long|false|3000|The amount of time, in milliseconds, to delay between requests when the connector encounters a throttling exception from AWS Kinesis.
-`numRetries`|int|false|3|The number of re-attempts when the connector encounters an exception while trying to set a checkpoint.
-`receiveQueueSize`|int|false|1000|The maximum number of AWS records that can be buffered inside the connector.&#13;

    Once the `receiveQueueSize` is reached, the connector does not consume any messages from Kinesis until some messages in the queue are successfully consumed. -`dynamoEndpoint`|String|false|" " (empty string)|The Dynamo end-point URL, which can be found at [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). -`cloudwatchEndpoint`|String|false|" " (empty string)|The Cloudwatch end-point URL, which can be found at [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). -`awsEndpoint`|String|false|" " (empty string)|The DynamoDB Streams end-point URL, which can be found at [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). -`awsRegion`|String|false|" " (empty string)|The AWS region.

    **Example**
    us-west-1, us-west-2 -`awsDynamodbStreamArn`|String|true|" " (empty string)|The DynamoDB stream arn. -`awsCredentialPluginName`|String|false|" " (empty string)|The fully-qualified class name of implementation of {@inject: github:AwsCredentialProviderPlugin:/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java}.

    `awsCredentialProviderPlugin` has the following built-in plugins:&#13;

  1. `org.apache.pulsar.io.kinesis.AwsDefaultProviderChainPlugin`:&#13;
    this plugin uses the default AWS provider chain.
    For more information, see [using the default credential provider chain](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html#credentials-default).

  2. `org.apache.pulsar.io.kinesis.STSAssumeRoleProviderPlugin`:&#13;
    this plugin takes a configuration via the `awsCredentialPluginParam` that describes a role to assume when running the KCL.
    **JSON configuration example**
    `{"roleArn": "arn...", "roleSessionName": "name"}`

    `awsCredentialPluginName` is a factory class which creates an AWSCredentialsProvider that is used by Kinesis sink.

    If `awsCredentialPluginName` is set to empty, the Kinesis sink creates a default AWSCredentialsProvider which accepts a JSON map of credentials in `awsCredentialPluginParam`.&#13;
-`awsCredentialPluginParam`|String |false|" " (empty string)|The JSON parameter to initialize `awsCredentialsProviderPlugin`.
-
-### Example
-
-Before using the DynamoDB source connector, you need to create a configuration file through one of the following methods.
-
-* JSON
-
-  ```json
-
-  {
-    "awsEndpoint": "https://some.endpoint.aws",
-    "awsRegion": "us-east-1",
-    "awsDynamodbStreamArn": "arn:aws:dynamodb:us-west-2:111122223333:table/TestTable/stream/2015-05-11T21:21:33.291",
-    "awsCredentialPluginParam": "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}",
-    "applicationName": "My test application",
-    "checkpointInterval": "30000",
-    "backoffTime": "4000",
-    "numRetries": "3",
-    "receiveQueueSize": 2000,
-    "initialPositionInStream": "TRIM_HORIZON",
-    "startAtTime": "2019-03-05T19:28:58.000Z"
-  }
-
-  ```
-
-* YAML
-
-  ```yaml
-
-  configs:
-    awsEndpoint: "https://some.endpoint.aws"
-    awsRegion: "us-east-1"
-    awsDynamodbStreamArn: "arn:aws:dynamodb:us-west-2:111122223333:table/TestTable/stream/2015-05-11T21:21:33.291"
-    awsCredentialPluginParam: "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}"
-    applicationName: "My test application"
-    checkpointInterval: 30000
-    backoffTime: 4000
-    numRetries: 3
-    receiveQueueSize: 2000
-    initialPositionInStream: "TRIM_HORIZON"
-    startAtTime: "2019-03-05T19:28:58.000Z"
-
-  ```
-
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/io-elasticsearch-sink.md b/site2/website/versioned_docs/version-2.9.1-deprecated/io-elasticsearch-sink.md
deleted file mode 100644
index b5757b3094a9ac..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/io-elasticsearch-sink.md
+++ /dev/null
@@ -1,242 +0,0 @@
----
-id: io-elasticsearch-sink
-title: Elasticsearch sink connector
-sidebar_label: "Elasticsearch sink connector"
-original_id: io-elasticsearch-sink
----
-
-The Elasticsearch sink connector pulls messages from Pulsar topics and persists the messages to indexes.
-
-
-## Feature
-
-### Handle data
-
-Since Pulsar 2.9.0, the Elasticsearch sink connector has the following ways of
-working. You can choose one of them.
-
-Name | Description
----|---|
-Raw processing | The sink reads from topics and passes the raw content to Elasticsearch.&#13;

    This is the **default** behavior.

    Raw processing was already available **in Pulsar 2.8.x**. -Schema aware | The sink uses the schema and handles AVRO, JSON, and KeyValue schema types while mapping the content to the Elasticsearch document.

    If you set `schemaEnable` to `true`, the sink interprets the contents of the message and you can define a **primary key** that is in turn used as the special `_id` field on Elasticsearch.
-

    This allows you to perform `UPDATE`, `INSERT`, and `DELETE` operations -to Elasticsearch driven by the logical primary key of the message.

    This -is very useful in a typical Change Data Capture scenario in which you follow the -changes on your database, write them to Pulsar (using the Debezium adapter for -instance), and then you write to Elasticsearch.

    You configure the -mapping of the primary key using the `primaryFields` configuration -entry.

    The `DELETE` operation can be performed when the primary key is
-not empty and the remaining value is empty. Use the `nullValueAction` to
-configure this behaviour. The default configuration simply ignores such empty
-values.
-
-### Map multiple indexes
-
-Since Pulsar 2.9.0, the `indexName` property is no longer required. If you omit it, the sink writes to an index named after the Pulsar topic.
-
-### Enable bulk writes
-
-Since Pulsar 2.9.0, you can use bulk writes by setting the `bulkEnabled` property to `true`.
-
-### Enable secure connections via TLS
-
-Since Pulsar 2.9.0, you can enable secure connections with TLS.
-
-## Configuration
-
-The configuration of the Elasticsearch sink connector has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `elasticSearchUrl` | String| true |" " (empty string)| The URL of the Elasticsearch cluster to which the connector connects. |
-| `indexName` | String| true |" " (empty string)| The index name to which the connector writes messages. |
-| `schemaEnable` | Boolean | false | false | Turn on the Schema Aware mode. |
-| `createIndexIfNeeded` | Boolean | false | false | Manage index if missing. |
-| `maxRetries` | Integer | false | 1 | The maximum number of retries for elasticsearch requests. Use -1 to disable it. |
-| `retryBackoffInMs` | Integer | false | 100 | The base time to wait when retrying an Elasticsearch request (in milliseconds). |
-| `maxRetryTimeInSec` | Integer| false | 86400 | The maximum retry time interval in seconds for retrying an elasticsearch request. |
-| `bulkEnabled` | Boolean | false | false | Enable the elasticsearch bulk processor to flush write requests based on the number or size of requests, or after a given period. |
-| `bulkActions` | Integer | false | 1000 | The maximum number of actions per elasticsearch bulk request. Use -1 to disable it. |
-| `bulkSizeInMb` | Integer | false |5 | The maximum size in megabytes of elasticsearch bulk requests. Use -1 to disable it. |
-| `bulkConcurrentRequests` | Integer | false | 0 | The maximum number of in flight elasticsearch bulk requests. The default 0 allows the execution of a single request. A value of 1 means 1 concurrent request is allowed to be executed while accumulating new bulk requests. |
-| `bulkFlushIntervalInMs` | Integer | false | -1 | The maximum period of time to wait for flushing pending writes when bulk writes are enabled. Default is -1 meaning not set. |
-| `compressionEnabled` | Boolean | false |false | Enable elasticsearch request compression. |
-| `connectTimeoutInMs` | Integer | false |5000 | The elasticsearch client connection timeout in milliseconds. |
-| `connectionRequestTimeoutInMs` | Integer | false |1000 | The time in milliseconds for getting a connection from the elasticsearch connection pool. |
-| `connectionIdleTimeoutInMs` | Integer | false |5 | Idle connection timeout to prevent a read timeout. |
-| `keyIgnore` | Boolean | false |true | Whether to ignore the record key to build the Elasticsearch document `_id`. If primaryFields is defined, the connector extracts the primary fields from the payload to build the document `_id`. If no primaryFields are provided, Elasticsearch auto-generates a random document `_id`. |
-| `primaryFields` | String | false | "id" | The comma separated ordered list of field names used to build the Elasticsearch document `_id` from the record value. If this list is a singleton, the field is converted to a string. &#13;
If this list has 2 or more fields, the generated `_id` is a string representation of a JSON array of the field values. |
-| `nullValueAction` | enum (IGNORE,DELETE,FAIL) | false | IGNORE | How to handle records with null values. Possible options are IGNORE, DELETE, or FAIL. The default is to IGNORE the message. |
-| `malformedDocAction` | enum (IGNORE,WARN,FAIL) | false | FAIL | How to handle documents rejected by Elasticsearch due to some malformation. Possible options are IGNORE, WARN, or FAIL. The default is to FAIL the Elasticsearch document. |
-| `stripNulls` | Boolean | false |true | If stripNulls is false, elasticsearch _source includes 'null' for empty fields (for example {"foo": null}), otherwise null fields are stripped. |
-| `socketTimeoutInMs` | Integer | false |60000 | The socket timeout in milliseconds waiting to read the elasticsearch response. |
-| `typeName` | String | false | "_doc" | The type name to which the connector writes messages.&#13;

    The value should be set explicitly to a valid type name other than "_doc" for Elasticsearch versions before 6.2, and left at the default otherwise. |
-| `indexNumberOfShards` | int| false |1| The number of shards of the index. |
-| `indexNumberOfReplicas` | int| false |1 | The number of replicas of the index. |
-| `username` | String| false |" " (empty string)| The username used by the connector to connect to the Elasticsearch cluster.&#13;

    If `username` is set, then `password` should also be provided. |
-| `password` | String| false | " " (empty string)|The password used by the connector to connect to the Elasticsearch cluster.&#13;

    If `username` is set, then `password` should also be provided. | -| `ssl` | ElasticSearchSslConfig | false | | Configuration for TLS encrypted communication | - -### Definition of ElasticSearchSslConfig structure: - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `enabled` | Boolean| false | false | Enable SSL/TLS. | -| `hostnameVerification` | Boolean| false | true | Whether or not to validate node hostnames when using SSL. | -| `truststorePath` | String| false |" " (empty string)| The path to the truststore file. | -| `truststorePassword` | String| false |" " (empty string)| Truststore password. | -| `keystorePath` | String| false |" " (empty string)| The path to the keystore file. | -| `keystorePassword` | String| false |" " (empty string)| Keystore password. | -| `cipherSuites` | String| false |" " (empty string)| SSL/TLS cipher suites. | -| `protocols` | String| false |"TLSv1.2" | Comma separated list of enabled SSL/TLS protocols. | - -## Example - -Before using the Elasticsearch sink connector, you need to create a configuration file through one of the following methods. - -### Configuration - -#### For Elasticsearch After 6.2 - -* JSON - - ```json - - { - "elasticSearchUrl": "http://localhost:9200", - "indexName": "my_index", - "username": "scooby", - "password": "doobie" - } - - ``` - -* YAML - - ```yaml - - configs: - elasticSearchUrl: "http://localhost:9200" - indexName: "my_index" - username: "scooby" - password: "doobie" - - ``` - -#### For Elasticsearch Before 6.2 - -* JSON - - ```json - - { - "elasticSearchUrl": "http://localhost:9200", - "indexName": "my_index", - "typeName": "doc", - "username": "scooby", - "password": "doobie" - } - - ``` - -* YAML - - ```yaml - - configs: - elasticSearchUrl: "http://localhost:9200" - indexName: "my_index" - typeName: "doc" - username: "scooby" - password: "doobie" - - ``` - -### Usage - -1. Start a single node Elasticsearch cluster. - - ```bash - - $ docker run -p 9200:9200 -p 9300:9300 \ - -e "discovery.type=single-node" \ - docker.elastic.co/elasticsearch/elasticsearch:7.13.3 - - ``` - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - - Make sure the NAR file is available at `connectors/pulsar-io-elastic-search-@pulsar:version@.nar`. - -3. Start the Pulsar Elasticsearch connector in local run mode using one of the following methods. - * Use the **JSON** configuration as shown previously. - - ```bash - - $ bin/pulsar-admin sinks localrun \ - --archive connectors/pulsar-io-elastic-search-@pulsar:version@.nar \ - --tenant public \ - --namespace default \ - --name elasticsearch-test-sink \ - --sink-config '{"elasticSearchUrl":"http://localhost:9200","indexName": "my_index","username": "scooby","password": "doobie"}' \ - --inputs elasticsearch_test - - ``` - - * Use the **YAML** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin sinks localrun \ - --archive connectors/pulsar-io-elastic-search-@pulsar:version@.nar \ - --tenant public \ - --namespace default \ - --name elasticsearch-test-sink \ - --sink-config-file elasticsearch-sink.yml \ - --inputs elasticsearch_test - - ``` - -4. Publish records to the topic. - - ```bash - - $ bin/pulsar-client produce elasticsearch_test --messages "{\"a\":1}" - - ``` - -5. Check documents in Elasticsearch. 
-
-   * refresh the index
-
-     ```bash
-
-     $ curl -s http://localhost:9200/my_index/_refresh
-
-     ```
-
-
-   * search documents
-
-     ```bash
-
-     $ curl -s http://localhost:9200/my_index/_search
-
-     ```
-
-   You can see the record published earlier has been successfully written into Elasticsearch.
-
-   ```json
-
-   {"took":2,"timed_out":false,"_shards":{"total":1,"successful":1,"skipped":0,"failed":0},"hits":{"total":{"value":1,"relation":"eq"},"max_score":1.0,"hits":[{"_index":"my_index","_type":"_doc","_id":"FSxemm8BLjG_iC0EeTYJ","_score":1.0,"_source":{"a":1}}]}}
-
-   ```
-
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/io-file-source.md b/site2/website/versioned_docs/version-2.9.1-deprecated/io-file-source.md
deleted file mode 100644
index e9d710cce65e83..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/io-file-source.md
+++ /dev/null
@@ -1,160 +0,0 @@
----
-id: io-file-source
-title: File source connector
-sidebar_label: "File source connector"
-original_id: io-file-source
----
-
-The File source connector pulls messages from files in directories and persists the messages to Pulsar topics.
-
-## Configuration
-
-The configuration of the File source connector has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `inputDirectory` | String|true | No default value|The input directory to pull files. |
-| `recurse` | Boolean|false | true | Whether to pull files from subdirectories or not.|
-| `keepFile` |Boolean|false | false | If set to true, the file is not deleted after it is processed, which means the file can be picked up continually. |
-| `fileFilter` | String|false| [^\\.].* | The file whose name matches the given regular expression is picked up. |
-| `pathFilter` | String |false | NULL | If `recurse` is set to true, the subdirectory whose path matches the given regular expression is scanned. |
-| `minimumFileAge` | Integer|false | 0 | The minimum file age required for a file to be processed.&#13;

    Any file younger than `minimumFileAge` (according to the last modification date) is ignored. |
-| `maximumFileAge` | Long|false |Long.MAX_VALUE | The maximum file age allowed for a file to be processed.&#13;

    Any file older than `maximumFileAge` (according to last modification date) is ignored. |
-| `minimumSize` |Integer| false |1 | The minimum size (in bytes) required for a file to be processed. |
-| `maximumSize` | Double|false |Double.MAX_VALUE| The maximum size (in bytes) allowed for a file to be processed. |
-| `ignoreHiddenFiles` |Boolean| false | true| Whether the hidden files should be ignored or not. |
-| `pollingInterval`|Long | false | 10000L | Indicates how long to wait before performing a directory listing. |
-| `numWorkers` | Integer | false | 1 | The number of worker threads that process files.&#13;

    This allows you to process a larger number of files concurrently.

    However, setting this to a value greater than 1 causes the data from multiple files to be mixed in the target topic. |
-
-### Example
-
-Before using the File source connector, you need to create a configuration file through one of the following methods.
-
-* JSON
-
-  ```json
-
-  {
-    "inputDirectory": "/Users/david",
-    "recurse": true,
-    "keepFile": true,
-    "fileFilter": "[^\\.].*",
-    "pathFilter": "*",
-    "minimumFileAge": 0,
-    "maximumFileAge": 9999999999,
-    "minimumSize": 1,
-    "maximumSize": 5000000,
-    "ignoreHiddenFiles": true,
-    "pollingInterval": 5000,
-    "numWorkers": 1
-  }
-
-  ```
-
-* YAML
-
-  ```yaml
-
-  configs:
-    inputDirectory: "/Users/david"
-    recurse: true
-    keepFile: true
-    fileFilter: "[^\\.].*"
-    pathFilter: "*"
-    minimumFileAge: 0
-    maximumFileAge: 9999999999
-    minimumSize: 1
-    maximumSize: 5000000
-    ignoreHiddenFiles: true
-    pollingInterval: 5000
-    numWorkers: 1
-
-  ```
-
-## Usage
-
-Here is an example of using the File source connector.
-
-1. Pull a Pulsar image.
-
-   ```bash
-
-   $ docker pull apachepulsar/pulsar:{version}
-
-   ```
-
-2. Start Pulsar standalone.
-
-   ```bash
-
-   $ docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-standalone apachepulsar/pulsar:{version} bin/pulsar standalone
-
-   ```
-
-3. Create a configuration file _file-connector.yaml_.
-
-   ```yaml
-
-   configs:
-     inputDirectory: "/opt"
-
-   ```
-
-4. Copy the configuration file _file-connector.yaml_ to the container.
-
-   ```bash
-
-   $ docker cp connectors/file-connector.yaml pulsar-standalone:/pulsar/
-
-   ```
-
-5. Download the File source connector.
-
-   ```bash
-
-   $ curl -O https://mirrors.tuna.tsinghua.edu.cn/apache/pulsar/pulsar-{version}/connectors/pulsar-io-file-{version}.nar
-
-   ```
-
-6. Start the File source connector.
-
-   ```bash
-
-   $ docker exec -it pulsar-standalone /bin/bash
-
-   $ ./bin/pulsar-admin sources localrun \
-   --archive /pulsar/pulsar-io-file-{version}.nar \
-   --name file-test \
-   --destination-topic-name pulsar-file-test \
-   --source-config-file /pulsar/file-connector.yaml
-
-   ```
-
-7. Start a consumer.
-
-   ```bash
-
-   ./bin/pulsar-client consume -s file-test -n 0 pulsar-file-test
-
-   ```
-
-8. Write the message to the file _test.txt_.
-
-   ```bash
-
-   echo "hello world!" > /opt/test.txt
-
-   ```
-
-   The following information appears on the consumer terminal window.
-
-   ```bash
-
-   ----- got message -----
-   hello world!
-
-   ```
-
-
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/io-flume-sink.md b/site2/website/versioned_docs/version-2.9.1-deprecated/io-flume-sink.md
deleted file mode 100644
index b2ace53702f8ca..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/io-flume-sink.md
+++ /dev/null
@@ -1,56 +0,0 @@
----
-id: io-flume-sink
-title: Flume sink connector
-sidebar_label: "Flume sink connector"
-original_id: io-flume-sink
----
-
-The Flume sink connector pulls messages from Pulsar topics to logs.
-
-## Configuration
-
-The configuration of the Flume sink connector has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-`name`|String|true|"" (empty string)|The name of the agent.
-`confFile`|String|true|"" (empty string)|The configuration file.
-`noReloadConf`|Boolean|false|false|Whether to reload configuration file if changed.
-`zkConnString`|String|true|"" (empty string)|The ZooKeeper connection. &#13;
-`zkBasePath`|String|true|"" (empty string)|The base path in ZooKeeper for agent configuration. - -### Example - -Before using the Flume sink connector, you need to create a configuration file through one of the following methods. - -> For more information about the `sink.conf` in the example below, see [here](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/resources/flume/sink.conf). - -* JSON - - ```json - - { - "name": "a1", - "confFile": "sink.conf", - "noReloadConf": "false", - "zkConnString": "", - "zkBasePath": "" - } - - ``` - -* YAML - - ```yaml - - configs: - name: a1 - confFile: sink.conf - noReloadConf: false - zkConnString: "" - zkBasePath: "" - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/io-flume-source.md b/site2/website/versioned_docs/version-2.9.1-deprecated/io-flume-source.md deleted file mode 100644 index b7fd7edad88111..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/io-flume-source.md +++ /dev/null @@ -1,56 +0,0 @@ ---- -id: io-flume-source -title: Flume source connector -sidebar_label: "Flume source connector" -original_id: io-flume-source ---- - -The Flume source connector pulls messages from logs to Pulsar topics. - -## Configuration - -The configuration of the Flume source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -`name`|String|true|"" (empty string)|The name of the agent. -`confFile`|String|true|"" (empty string)|The configuration file. -`noReloadConf`|Boolean|false|false|Whether to reload configuration file if changed. -`zkConnString`|String|true|"" (empty string)|The ZooKeeper connection. -`zkBasePath`|String|true|"" (empty string)|The base path in ZooKeeper for agent configuration. - -### Example - -Before using the Flume source connector, you need to create a configuration file through one of the following methods. - -> For more information about the `source.conf` in the example below, see [here](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/resources/flume/source.conf). - -* JSON - - ```json - - { - "name": "a1", - "confFile": "source.conf", - "noReloadConf": "false", - "zkConnString": "", - "zkBasePath": "" - } - - ``` - -* YAML - - ```yaml - - configs: - name: a1 - confFile: source.conf - noReloadConf: false - zkConnString: "" - zkBasePath: "" - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/io-hbase-sink.md b/site2/website/versioned_docs/version-2.9.1-deprecated/io-hbase-sink.md deleted file mode 100644 index 1737b00fa26805..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/io-hbase-sink.md +++ /dev/null @@ -1,67 +0,0 @@ ---- -id: io-hbase-sink -title: HBase sink connector -sidebar_label: "HBase sink connector" -original_id: io-hbase-sink ---- - -The HBase sink connector pulls the messages from Pulsar topics -and persists the messages to HBase tables - -## Configuration - -The configuration of the HBase sink connector has the following properties. - -### Property - -| Name | Type|Default | Required | Description | -|------|---------|----------|-------------|--- -| `hbaseConfigResources` | String|None | false | HBase system configuration `hbase-site.xml` file. | -| `zookeeperQuorum` | String|None | true | HBase system configuration about `hbase.zookeeper.quorum` value. 
| `zookeeperClientPort` | String|2181 | false | HBase system configuration about `hbase.zookeeper.property.clientPort` value. |
-| `zookeeperZnodeParent` | String|/hbase | false | HBase system configuration about `zookeeper.znode.parent` value. |
-| `tableName` | String|None | true | HBase table, the value is `namespace:tableName`. |
-| `rowKeyName` | String|None | true | HBase table rowkey name. |
-| `familyName` | String|None | true | HBase table column family name. |
-| `qualifierNames` |String| None | true | HBase table column qualifier names. |
-| `batchTimeMs` | Long|1000l| false | HBase table operation timeout in milliseconds. |
-| `batchSize` | int|200| false | Batch size of updates made to the HBase table. |
-
-### Example
-
-Before using the HBase sink connector, you need to create a configuration file through one of the following methods.
-
-* JSON
-
-  ```json
-
-  {
-    "hbaseConfigResources": "hbase-site.xml",
-    "zookeeperQuorum": "localhost",
-    "zookeeperClientPort": "2181",
-    "zookeeperZnodeParent": "/hbase",
-    "tableName": "pulsar_hbase",
-    "rowKeyName": "rowKey",
-    "familyName": "info",
-    "qualifierNames": ["name", "address", "age"]
-  }
-
-  ```
-
-* YAML
-
-  ```yaml
-
-  configs:
-    hbaseConfigResources: "hbase-site.xml"
-    zookeeperQuorum: "localhost"
-    zookeeperClientPort: "2181"
-    zookeeperZnodeParent: "/hbase"
-    tableName: "pulsar_hbase"
-    rowKeyName: "rowKey"
-    familyName: "info"
-    qualifierNames: [ 'name', 'address', 'age']
-
-  ```
-
-
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/io-hdfs2-sink.md b/site2/website/versioned_docs/version-2.9.1-deprecated/io-hdfs2-sink.md
deleted file mode 100644
index 4a8527154430d0..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/io-hdfs2-sink.md
+++ /dev/null
@@ -1,64 +0,0 @@
----
-id: io-hdfs2-sink
-title: HDFS2 sink connector
-sidebar_label: "HDFS2 sink connector"
-original_id: io-hdfs2-sink
----
-
-The HDFS2 sink connector pulls the messages from Pulsar topics
-and persists the messages to HDFS files.
-
-## Configuration
-
-The configuration of the HDFS2 sink connector has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `hdfsConfigResources` | String|true| None | A file or a comma-separated list containing the Hadoop file system configuration.&#13;

    **Example**
    'core-site.xml'
    'hdfs-site.xml' |
-| `directory` | String | true | None|The HDFS directory where files are read from or written to. |
-| `encoding` | String |false |None |The character encoding for the files.&#13;

    **Example**
    UTF-8
    ASCII | -| `compression` | Compression |false |None |The compression code used to compress or de-compress the files on HDFS.

    Below are the available options:
  1. BZIP2&#13;
  2. DEFLATE&#13;
  3. GZIP&#13;
  4. LZ4&#13;
  5. SNAPPY&#13;
|
-| `kerberosUserPrincipal` |String| false| None|The principal account of Kerberos user used for authentication. |
-| `keytab` | String|false|None| The full pathname of the Kerberos keytab file used for authentication. |
-| `filenamePrefix` |String| true, if `compression` is set to `None`. | None |The prefix of the files created inside the HDFS directory.&#13;

    **Example**
    The value of topicA results in files named topicA-. |
-| `fileExtension` | String| true | None | The extension added to the files written to HDFS.&#13;

    **Example**
    '.txt'
    '.seq' | -| `separator` | char|false |None |The character used to separate records in a text file.

    If no value is provided, the contents from all records are concatenated together in one continuous byte array. |
-| `syncInterval` | long| false |0| The interval between calls to flush data to HDFS disk in milliseconds. |
-| `maxPendingRecords` |int| false|Integer.MAX_VALUE | The maximum number of records held in memory before acking.&#13;

    Setting this property to 1 causes every record to be sent to disk before the record is acked.&#13;

    Setting this property to a higher value allows buffering records before flushing them to disk. -| `subdirectoryPattern` | String | false | None | A subdirectory associated with the created time of the sink.
    The pattern is the formatted pattern of `directory`'s subdirectory.

    See [DateTimeFormatter](https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html) for pattern's syntax. | - -### Example - -Before using the HDFS2 sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "hdfsConfigResources": "core-site.xml", - "directory": "/foo/bar", - "filenamePrefix": "prefix", - "fileExtension": ".log", - "compression": "SNAPPY", - "subdirectoryPattern": "yyyy-MM-dd" - } - - ``` - -* YAML - - ```yaml - - configs: - hdfsConfigResources: "core-site.xml" - directory: "/foo/bar" - filenamePrefix: "prefix" - fileExtension: ".log" - compression: "SNAPPY" - subdirectoryPattern: "yyyy-MM-dd" - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/io-hdfs3-sink.md b/site2/website/versioned_docs/version-2.9.1-deprecated/io-hdfs3-sink.md deleted file mode 100644 index aec065a25db7f4..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/io-hdfs3-sink.md +++ /dev/null @@ -1,59 +0,0 @@ ---- -id: io-hdfs3-sink -title: HDFS3 sink connector -sidebar_label: "HDFS3 sink connector" -original_id: io-hdfs3-sink ---- - -The HDFS3 sink connector pulls the messages from Pulsar topics -and persists the messages to HDFS files. - -## Configuration - -The configuration of the HDFS3 sink connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `hdfsConfigResources` | String|true| None | A file or a comma-separated list containing the Hadoop file system configuration.

    **Example**
    'core-site.xml'
    'hdfs-site.xml' |
-| `directory` | String | true | None|The HDFS directory where files are read from or written to. |
-| `encoding` | String |false |None |The character encoding for the files.&#13;

    **Example**
    UTF-8
    ASCII | -| `compression` | Compression |false |None |The compression code used to compress or de-compress the files on HDFS.

    Below are the available options:
  1. BZIP2&#13;
  2. DEFLATE&#13;
  3. GZIP&#13;
  4. LZ4&#13;
  5. SNAPPY&#13;
|
-| `kerberosUserPrincipal` |String| false| None|The principal account of Kerberos user used for authentication. |
-| `keytab` | String|false|None| The full pathname of the Kerberos keytab file used for authentication. |
-| `filenamePrefix` |String| false |None |The prefix of the files created inside the HDFS directory.&#13;

    **Example**
    The value of topicA results in files named topicA-. |
-| `fileExtension` | String| false | None| The extension added to the files written to HDFS.&#13;

    **Example**
    '.txt'
    '.seq' | -| `separator` | char|false |None |The character used to separate records in a text file.

    If no value is provided, the contents from all records are concatenated together in one continuous byte array. |
-| `syncInterval` | long| false |0| The interval between calls to flush data to HDFS disk in milliseconds. |
-| `maxPendingRecords` |int| false|Integer.MAX_VALUE | The maximum number of records held in memory before acking.&#13;

    Setting this property to 1 causes every record to be sent to disk before the record is acked.&#13;

    Setting this property to a higher value allows buffering records before flushing them to disk. - -### Example - -Before using the HDFS3 sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "hdfsConfigResources": "core-site.xml", - "directory": "/foo/bar", - "filenamePrefix": "prefix", - "compression": "SNAPPY" - } - - ``` - -* YAML - - ```yaml - - configs: - hdfsConfigResources: "core-site.xml" - directory: "/foo/bar" - filenamePrefix: "prefix" - compression: "SNAPPY" - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/io-influxdb-sink.md b/site2/website/versioned_docs/version-2.9.1-deprecated/io-influxdb-sink.md deleted file mode 100644 index 9382f8c03121cc..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/io-influxdb-sink.md +++ /dev/null @@ -1,119 +0,0 @@ ---- -id: io-influxdb-sink -title: InfluxDB sink connector -sidebar_label: "InfluxDB sink connector" -original_id: io-influxdb-sink ---- - -The InfluxDB sink connector pulls messages from Pulsar topics -and persists the messages to InfluxDB. - -The InfluxDB sink provides different configurations for InfluxDBv1 and v2 respectively. - -## Configuration - -The configuration of the InfluxDB sink connector has the following properties. - -### Property -#### InfluxDBv2 -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `influxdbUrl` |String| true|" " (empty string) | The URL of the InfluxDB instance. | -| `token` | String|true| " " (empty string) |The authentication token used to authenticate to InfluxDB. | -| `organization` | String| true|" " (empty string) | The InfluxDB organization to write to. | -| `bucket` |String| true | " " (empty string)| The InfluxDB bucket to write to. | -| `precision` | String|false| ns | The timestamp precision for writing data to InfluxDB.

    Below are the available options:
  1. ns&#13;
  2. us&#13;
  3. ms&#13;
  4. s&#13;
|
-| `logLevel` | String|false| NONE|The log level for InfluxDB request and response.&#13;

    Below are the available options:
  1. NONE&#13;
  2. BASIC&#13;
  3. HEADERS&#13;
  4. FULL&#13;
|
-| `gzipEnable` | boolean|false | false | Whether to enable gzip or not. |
-| `batchTimeMs` |long|false| 1000L | The InfluxDB operation time in milliseconds. |
-| `batchSize` | int|false|200| The batch size of writing to InfluxDB. |
-
-#### InfluxDBv1
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `influxdbUrl` |String| true|" " (empty string) | The URL of the InfluxDB instance. |
-| `username` | String|false| " " (empty string) |The username used to authenticate to InfluxDB. |
-| `password` | String| false|" " (empty string) | The password used to authenticate to InfluxDB. |
-| `database` |String| true | " " (empty string)| The InfluxDB database to which messages are written. |
-| `consistencyLevel` | String|false|ONE | The consistency level for writing data to InfluxDB.&#13;

    Below are the available options:
  1. ALL&#13;
  2. ANY&#13;
  3. ONE&#13;
  4. QUORUM&#13;
|
-| `logLevel` | String|false| NONE|The log level for InfluxDB request and response.&#13;

    Below are the available options:
  1. NONE&#13;
  2. BASIC&#13;
  3. HEADERS&#13;
  4. FULL&#13;
|
-| `retentionPolicy` | String|false| autogen| The retention policy for InfluxDB. |
-| `gzipEnable` | boolean|false | false | Whether to enable gzip or not. |
-| `batchTimeMs` |long|false| 1000L | The InfluxDB operation time in milliseconds. |
-| `batchSize` | int|false|200| The batch size of writing to InfluxDB. |
-
-### Example
-Before using the InfluxDB sink connector, you need to create a configuration file through one of the following methods.
-#### InfluxDBv2
-* JSON
-
-  ```json
-
-  {
-    "influxdbUrl": "http://localhost:9999",
-    "organization": "example-org",
-    "bucket": "example-bucket",
-    "token": "xxxx",
-    "precision": "ns",
-    "logLevel": "NONE",
-    "gzipEnable": false,
-    "batchTimeMs": 1000,
-    "batchSize": 100
-  }
-
-  ```
-
-
-* YAML
-
-  ```yaml
-
-  configs:
-    influxdbUrl: "http://localhost:9999"
-    organization: "example-org"
-    bucket: "example-bucket"
-    token: "xxxx"
-    precision: "ns"
-    logLevel: "NONE"
-    gzipEnable: false
-    batchTimeMs: 1000
-    batchSize: 100
-
-  ```
-
-
-#### InfluxDBv1
-
-* JSON
-
-  ```json
-
-  {
-    "influxdbUrl": "http://localhost:8086",
-    "database": "test_db",
-    "consistencyLevel": "ONE",
-    "logLevel": "NONE",
-    "retentionPolicy": "autogen",
-    "gzipEnable": false,
-    "batchTimeMs": 1000,
-    "batchSize": 100
-  }
-
-  ```
-
-* YAML
-
-  ```yaml
-
-  configs:
-    influxdbUrl: "http://localhost:8086"
-    database: "test_db"
-    consistencyLevel: "ONE"
-    logLevel: "NONE"
-    retentionPolicy: "autogen"
-    gzipEnable: false
-    batchTimeMs: 1000
-    batchSize: 100
-
-  ```
-
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/io-jdbc-sink.md b/site2/website/versioned_docs/version-2.9.1-deprecated/io-jdbc-sink.md
deleted file mode 100644
index 77dbb61fccd7ed..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/io-jdbc-sink.md
+++ /dev/null
@@ -1,157 +0,0 @@
----
-id: io-jdbc-sink
-title: JDBC sink connector
-sidebar_label: "JDBC sink connector"
-original_id: io-jdbc-sink
----
-
-The JDBC sink connectors allow pulling messages from Pulsar topics
-and persisting the messages to ClickHouse, MariaDB, PostgreSQL, and SQLite.
-
-> Currently, INSERT, DELETE and UPDATE operations are supported.
-
-## Configuration
-
-The configuration of all JDBC sink connectors has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `userName` | String|false | " " (empty string) | The username used to connect to the database specified by `jdbcUrl`.&#13;

    **Note: `userName` is case-sensitive.**| -| `password` | String|false | " " (empty string)| The password used to connect to the database specified by `jdbcUrl`.

    **Note: `password` is case-sensitive.**| -| `jdbcUrl` | String|true | " " (empty string) | The JDBC URL of the database to which the connector connects. | -| `tableName` | String|true | " " (empty string) | The name of the table to which the connector writes. | -| `nonKey` | String|false | " " (empty string) | A comma-separated list contains the fields used in updating events. | -| `key` | String|false | " " (empty string) | A comma-separated list contains the fields used in `where` condition of updating and deleting events. | -| `timeoutMs` | int| false|500 | The JDBC operation timeout in milliseconds. | -| `batchSize` | int|false | 200 | The batch size of updates made to the database. | - -### Example for ClickHouse - -* JSON - - ```json - - { - "userName": "clickhouse", - "password": "password", - "jdbcUrl": "jdbc:clickhouse://localhost:8123/pulsar_clickhouse_jdbc_sink", - "tableName": "pulsar_clickhouse_jdbc_sink" - } - - ``` - -* YAML - - ```yaml - - tenant: "public" - namespace: "default" - name: "jdbc-clickhouse-sink" - topicName: "persistent://public/default/jdbc-clickhouse-topic" - sinkType: "jdbc-clickhouse" - configs: - userName: "clickhouse" - password: "password" - jdbcUrl: "jdbc:clickhouse://localhost:8123/pulsar_clickhouse_jdbc_sink" - tableName: "pulsar_clickhouse_jdbc_sink" - - ``` - -### Example for MariaDB - -* JSON - - ```json - - { - "userName": "mariadb", - "password": "password", - "jdbcUrl": "jdbc:mariadb://localhost:3306/pulsar_mariadb_jdbc_sink", - "tableName": "pulsar_mariadb_jdbc_sink" - } - - ``` - -* YAML - - ```yaml - - tenant: "public" - namespace: "default" - name: "jdbc-mariadb-sink" - topicName: "persistent://public/default/jdbc-mariadb-topic" - sinkType: "jdbc-mariadb" - configs: - userName: "mariadb" - password: "password" - jdbcUrl: "jdbc:mariadb://localhost:3306/pulsar_mariadb_jdbc_sink" - tableName: "pulsar_mariadb_jdbc_sink" - - ``` - -### Example for PostgreSQL - -Before using the JDBC PostgreSQL sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "userName": "postgres", - "password": "password", - "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink", - "tableName": "pulsar_postgres_jdbc_sink" - } - - ``` - -* YAML - - ```yaml - - tenant: "public" - namespace: "default" - name: "jdbc-postgres-sink" - topicName: "persistent://public/default/jdbc-postgres-topic" - sinkType: "jdbc-postgres" - configs: - userName: "postgres" - password: "password" - jdbcUrl: "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink" - tableName: "pulsar_postgres_jdbc_sink" - - ``` - -For more information on **how to use this JDBC sink connector**, see [connect Pulsar to PostgreSQL](io-quickstart.md#connect-pulsar-to-postgresql). 
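To see the PostgreSQL example end to end, you can publish schema'd messages to the sink's input topic; each record is mapped to a row. The following Python sketch is an added illustration, not part of the connector itself: it assumes a local standalone broker at `pulsar://localhost:6650`, the `persistent://public/default/jdbc-postgres-topic` input topic from the YAML above, and a target table with `id` and `name` columns; the `TableRow` record class is a hypothetical name.

```python
import pulsar
from pulsar.schema import AvroSchema, Integer, Record, String

# Hypothetical record type; its fields must mirror the target table's
# columns so the JDBC sink can map each field to a column.
class TableRow(Record):
    id = Integer()
    name = String()

client = pulsar.Client('pulsar://localhost:6650')
producer = client.create_producer(
    'persistent://public/default/jdbc-postgres-topic',  # the sink's input topic
    schema=AvroSchema(TableRow),
)

# Each schema'd message becomes one operation executed by the sink
# (an INSERT by default; see the `key` and `nonKey` options above).
producer.send(TableRow(id=1, name='hello-jdbc'))

client.close()
```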
- -### Example for SQLite - -* JSON - - ```json - - { - "jdbcUrl": "jdbc:sqlite:db.sqlite", - "tableName": "pulsar_sqlite_jdbc_sink" - } - - ``` - -* YAML - - ```yaml - - tenant: "public" - namespace: "default" - name: "jdbc-sqlite-sink" - topicName: "persistent://public/default/jdbc-sqlite-topic" - sinkType: "jdbc-sqlite" - configs: - jdbcUrl: "jdbc:sqlite:db.sqlite" - tableName: "pulsar_sqlite_jdbc_sink" - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/io-kafka-sink.md b/site2/website/versioned_docs/version-2.9.1-deprecated/io-kafka-sink.md deleted file mode 100644 index 09dad4ce70bac9..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/io-kafka-sink.md +++ /dev/null @@ -1,72 +0,0 @@ ---- -id: io-kafka-sink -title: Kafka sink connector -sidebar_label: "Kafka sink connector" -original_id: io-kafka-sink ---- - -The Kafka sink connector pulls messages from Pulsar topics and persists the messages -to Kafka topics. - -This guide explains how to configure and use the Kafka sink connector. - -## Configuration - -The configuration of the Kafka sink connector has the following parameters. - -### Property - -| Name | Type| Required | Default | Description -|------|----------|---------|-------------|-------------| -| `bootstrapServers` |String| true | " " (empty string) | A comma-separated list of host and port pairs for establishing the initial connection to the Kafka cluster. | -|`acks`|String|true|" " (empty string) |The number of acknowledgments that the producer requires the leader to receive before a request completes.
    This controls the durability of the sent records.
-|`batchsize`|long|false|16384L|The maximum size, in bytes, of the batch that a Kafka producer attempts to collect before sending records to brokers.
-|`maxRequestSize`|long|false|1048576L|The maximum size of a Kafka request in bytes.
-|`topic`|String|true|" " (empty string) |The Kafka topic which receives messages from Pulsar.
-| `keyDeserializationClass` | String|false | org.apache.kafka.common.serialization.StringSerializer | The serializer class for Kafka producers to serialize keys.
-| `valueDeserializationClass` | String|false | org.apache.kafka.common.serialization.ByteArraySerializer | The serializer class for Kafka producers to serialize values.

    The serializer is set by a specific implementation of [`KafkaAbstractSink`](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSink.java). -|`producerConfigProperties`|Map|false|" " (empty string)|The producer configuration properties to be passed to producers.

    **Note: other properties specified in the connector configuration file take precedence over this configuration**.
-
-
-### Example
-
-Before using the Kafka sink connector, you need to create a configuration file through one of the following methods.
-
-* JSON
-
-  ```json
-
-  {
-    "bootstrapServers": "localhost:6667",
-    "topic": "test",
-    "acks": "1",
-    "batchSize": "16384",
-    "maxRequestSize": "1048576",
-    "producerConfigProperties":
-    {
-      "client.id": "test-pulsar-producer",
-      "security.protocol": "SASL_PLAINTEXT",
-      "sasl.mechanism": "GSSAPI",
-      "sasl.kerberos.service.name": "kafka",
-      "acks": "all"
-    }
-  }
-
-  ```
-
-* YAML
-
-  ```yaml
-
-  configs:
-    bootstrapServers: "localhost:6667"
-    topic: "test"
-    acks: "1"
-    batchSize: "16384"
-    maxRequestSize: "1048576"
-    producerConfigProperties:
-      client.id: "test-pulsar-producer"
-      security.protocol: "SASL_PLAINTEXT"
-      sasl.mechanism: "GSSAPI"
-      sasl.kerberos.service.name: "kafka"
-      acks: "all"
-  ```
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/io-kafka-source.md b/site2/website/versioned_docs/version-2.9.1-deprecated/io-kafka-source.md
deleted file mode 100644
index 53448699e21b4a..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/io-kafka-source.md
+++ /dev/null
@@ -1,226 +0,0 @@
----
-id: io-kafka-source
-title: Kafka source connector
-sidebar_label: "Kafka source connector"
-original_id: io-kafka-source
----
-
-The Kafka source connector pulls messages from Kafka topics and persists the messages
-to Pulsar topics.
-
-This guide explains how to configure and use the Kafka source connector.
-
-## Configuration
-
-The configuration of the Kafka source connector has the following properties.
-
-### Property
-
-| Name | Type| Required | Default | Description
-|------|----------|---------|-------------|-------------|
-| `bootstrapServers` |String| true | " " (empty string) | A comma-separated list of host and port pairs for establishing the initial connection to the Kafka cluster. |
-| `groupId` |String| true | " " (empty string) | A unique string that identifies the group of consumer processes to which this consumer belongs. |
-| `fetchMinBytes` | long|false | 1 | The minimum number of bytes expected for each fetch response. |
-| `autoCommitEnabled` | boolean |false | true | If set to true, the consumer's offset is periodically committed in the background.

    This committed offset is used as the position from which a new consumer begins when the process fails. |
-| `autoCommitIntervalMs` | long|false | 5000 | The frequency in milliseconds that the consumer offsets are auto-committed to Kafka if `autoCommitEnabled` is set to true. |
-| `heartbeatIntervalMs` | long| false | 3000 | The interval between heartbeats to the consumer coordinator when using Kafka's group management facilities.

    **Note: `heartbeatIntervalMs` must be smaller than `sessionTimeoutMs`**.| -| `sessionTimeoutMs` | long|false | 30000 | The timeout used to detect consumer failures when using Kafka's group management facility. | -| `topic` | String|true | " " (empty string)| The Kafka topic which sends messages to Pulsar. | -| `consumerConfigProperties` | Map| false | " " (empty string) | The consumer configuration properties to be passed to consumers.

    **Note: other properties specified in the connector configuration file take precedence over this configuration**. | -| `keyDeserializationClass` | String|false | org.apache.kafka.common.serialization.StringDeserializer | The deserializer class for Kafka consumers to deserialize keys.
    The deserializer is set by a specific implementation of [`KafkaAbstractSource`](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSource.java). -| `valueDeserializationClass` | String|false | org.apache.kafka.common.serialization.ByteArrayDeserializer | The deserializer class for Kafka consumers to deserialize values. -| `autoOffsetReset` | String | false | "earliest" | The default offset reset policy. | - -### Schema Management - -This Kafka source connector applies the schema to the topic depending on the data type that is present on the Kafka topic. -You can detect the data type from the `keyDeserializationClass` and `valueDeserializationClass` configuration parameters. - -If the `valueDeserializationClass` is `org.apache.kafka.common.serialization.StringDeserializer`, you can set Schema.STRING() as schema type on the Pulsar topic. - -If `valueDeserializationClass` is `io.confluent.kafka.serializers.KafkaAvroDeserializer`, Pulsar downloads the AVRO schema from the Confluent Schema Registry® -and sets it properly on the Pulsar topic. - -In this case, you need to set `schema.registry.url` inside of the `consumerConfigProperties` configuration entry -of the source. - -If `keyDeserializationClass` is not `org.apache.kafka.common.serialization.StringDeserializer`, it means -that you do not have a String as key and the Kafka Source uses the KeyValue schema type with the SEPARATED encoding. - -Pulsar supports AVRO format for keys. - -In this case, you can have a Pulsar topic with the following properties: -- Schema: KeyValue schema with SEPARATED encoding -- Key: the content of key of the Kafka message (base64 encoded) -- Value: the content of value of the Kafka message -- KeySchema: the schema detected from `keyDeserializationClass` -- ValueSchema: the schema detected from `valueDeserializationClass` - -Topic compaction and partition routing use the Pulsar key, that contains the Kafka key, and so they are driven by the same value that you have on Kafka. - -When you consume data from Pulsar topics, you can use the `KeyValue` schema. In this way, you can decode the data properly. -If you want to access the raw key, you can use the `Message#getKeyBytes()` API. - -### Example - -Before using the Kafka source connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "bootstrapServers": "pulsar-kafka:9092", - "groupId": "test-pulsar-io", - "topic": "my-topic", - "sessionTimeoutMs": "10000", - "autoCommitEnabled": false - } - - ``` - -* YAML - - ```yaml - - configs: - bootstrapServers: "pulsar-kafka:9092" - groupId: "test-pulsar-io" - topic: "my-topic" - sessionTimeoutMs: "10000" - autoCommitEnabled: false - - ``` - -## Usage - -Here is an example of using the Kafka source connector with the configuration file as shown previously. - -1. Download a Kafka client and a Kafka connector. - - ```bash - - $ wget https://repo1.maven.org/maven2/org/apache/kafka/kafka-clients/0.10.2.1/kafka-clients-0.10.2.1.jar - - $ wget https://archive.apache.org/dist/pulsar/pulsar-2.4.0/connectors/pulsar-io-kafka-2.4.0.nar - - ``` - -2. Create a network. - - ```bash - - $ docker network create kafka-pulsar - - ``` - -3. Pull a ZooKeeper image and start ZooKeeper. - - ```bash - - $ docker pull wurstmeister/zookeeper - - $ docker run -d -it -p 2181:2181 --name pulsar-kafka-zookeeper --network kafka-pulsar wurstmeister/zookeeper - - ``` - -4. Pull a Kafka image and start Kafka. 
-
-   ```bash
-
-   $ docker pull wurstmeister/kafka:2.11-1.0.2
-
-   $ docker run -d -it --network kafka-pulsar -p 6667:6667 -p 9092:9092 -e KAFKA_ADVERTISED_HOST_NAME=pulsar-kafka -e KAFKA_ZOOKEEPER_CONNECT=pulsar-kafka-zookeeper:2181 --name pulsar-kafka wurstmeister/kafka:2.11-1.0.2
-
-   ```
-
-5. Pull a Pulsar image and start Pulsar standalone.
-
-   ```bash
-
-   $ docker pull apachepulsar/pulsar:@pulsar:version@
-
-   $ docker run -d -it --network kafka-pulsar -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-kafka-standalone apachepulsar/pulsar:@pulsar:version@ bin/pulsar standalone
-
-   ```
-
-6. Create a producer file _kafka-producer.py_.
-
-   ```python
-
-   from kafka import KafkaProducer
-   producer = KafkaProducer(bootstrap_servers='pulsar-kafka:9092')
-   future = producer.send('my-topic', b'hello world')
-   future.get()
-
-   ```
-
-7. Create a consumer file _pulsar-client.py_.
-
-   ```python
-
-   import pulsar
-
-   client = pulsar.Client('pulsar://localhost:6650')
-   consumer = client.subscribe('my-topic', subscription_name='my-aa')
-
-   while True:
-       msg = consumer.receive()
-       print(msg)
-       print(dir(msg))
-       print("Received message: '%s'" % msg.data())
-       consumer.acknowledge(msg)
-
-   client.close()
-
-   ```
-
-8. Copy the following files to Pulsar.
-
-   ```bash
-
-   $ docker cp pulsar-io-kafka-@pulsar:version@.nar pulsar-kafka-standalone:/pulsar
-   $ docker cp kafkaSourceConfig.yaml pulsar-kafka-standalone:/pulsar/conf
-   $ docker cp pulsar-client.py pulsar-kafka-standalone:/pulsar/
-   $ docker cp kafka-producer.py pulsar-kafka-standalone:/pulsar/
-
-   ```
-
-9. Open a new terminal window and start the Kafka source connector in local run mode.
-
-   ```bash
-
-   $ docker exec -it pulsar-kafka-standalone /bin/bash
-
-   $ ./bin/pulsar-admin source localrun \
-   --archive ./pulsar-io-kafka-@pulsar:version@.nar \
-   --classname org.apache.pulsar.io.kafka.KafkaBytesSource \
-   --tenant public \
-   --namespace default \
-   --name kafka \
-   --destination-topic-name my-topic \
-   --source-config-file ./conf/kafkaSourceConfig.yaml \
-   --parallelism 1
-
-   ```
-
-10. Open a new terminal window, run the consumer _pulsar-client.py_, and then send messages with the producer _kafka-producer.py_.
-
-   ```bash
-
-   $ docker exec -it pulsar-kafka-standalone /bin/bash
-
-   $ pip install kafka-python
-
-   $ python3 pulsar-client.py &
-
-   $ python3 kafka-producer.py
-
-   ```
-
-   The following information appears on the consumer terminal window.
-
-   ```bash
-
-   Received message: 'hello world'
-
-   ```
-
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/io-kinesis-sink.md b/site2/website/versioned_docs/version-2.9.1-deprecated/io-kinesis-sink.md
deleted file mode 100644
index 153587dcfc783e..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/io-kinesis-sink.md
+++ /dev/null
@@ -1,80 +0,0 @@
----
-id: io-kinesis-sink
-title: Kinesis sink connector
-sidebar_label: "Kinesis sink connector"
-original_id: io-kinesis-sink
----
-
-The Kinesis sink connector pulls data from Pulsar and persists data into Amazon Kinesis.
-
-## Configuration
-
-The configuration of the Kinesis sink connector has the following property.
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-`messageFormat`|MessageFormat|true|ONLY_RAW_PAYLOAD|Message format in which Kinesis sink converts Pulsar messages and publishes to Kinesis streams.

    Below are the available options:

  * `ONLY_RAW_PAYLOAD`: Kinesis sink directly publishes the Pulsar message payload as a message into the configured Kinesis stream.

  * `FULL_MESSAGE_IN_JSON`: Kinesis sink creates a JSON payload with the Pulsar message payload, properties, and encryptionCtx, and publishes the JSON payload into the configured Kinesis stream.

  * `FULL_MESSAGE_IN_FB`: Kinesis sink creates a flatbuffer-serialized payload with the Pulsar message payload, properties, and encryptionCtx, and publishes the flatbuffer payload into the configured Kinesis stream.
-`retainOrdering`|boolean|false|false|Whether the Pulsar connector retains ordering when moving messages from Pulsar to Kinesis.
-`awsEndpoint`|String|false|" " (empty string)|The Kinesis end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
-`awsRegion`|String|false|" " (empty string)|The AWS region.

    **Example**
    us-west-1, us-west-2
-`awsKinesisStreamName`|String|true|" " (empty string)|The Kinesis stream name.
-`awsCredentialPluginName`|String|false|" " (empty string)|The fully-qualified class name of the implementation of {@inject: github:AwsCredentialProviderPlugin:/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java}.

    It is a factory class that creates the AWSCredentialsProvider used by the Kinesis sink.

    If it is empty, the Kinesis sink creates a default AWSCredentialsProvider which accepts json-map of credentials in `awsCredentialPluginParam`. -`awsCredentialPluginParam`|String |false|" " (empty string)|The JSON parameter to initialize `awsCredentialsProviderPlugin`. - -### Built-in plugins - -The following are built-in `AwsCredentialProviderPlugin` plugins: - -* `org.apache.pulsar.io.aws.AwsDefaultProviderChainPlugin` - - This plugin takes no configuration, it uses the default AWS provider chain. - - For more information, see [AWS documentation](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html#credentials-default). - -* `org.apache.pulsar.io.aws.STSAssumeRoleProviderPlugin` - - This plugin takes a configuration (via the `awsCredentialPluginParam`) that describes a role to assume when running the KCL. - - This configuration takes the form of a small json document like: - - ```json - - {"roleArn": "arn...", "roleSessionName": "name"} - - ``` - -### Example - -Before using the Kinesis sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "awsEndpoint": "some.endpoint.aws", - "awsRegion": "us-east-1", - "awsKinesisStreamName": "my-stream", - "awsCredentialPluginParam": "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}", - "messageFormat": "ONLY_RAW_PAYLOAD", - "retainOrdering": "true" - } - - ``` - -* YAML - - ```yaml - - configs: - awsEndpoint: "some.endpoint.aws" - awsRegion: "us-east-1" - awsKinesisStreamName: "my-stream" - awsCredentialPluginParam: "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}" - messageFormat: "ONLY_RAW_PAYLOAD" - retainOrdering: "true" - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/io-kinesis-source.md b/site2/website/versioned_docs/version-2.9.1-deprecated/io-kinesis-source.md deleted file mode 100644 index 0d07eefc3703b3..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/io-kinesis-source.md +++ /dev/null @@ -1,81 +0,0 @@ ---- -id: io-kinesis-source -title: Kinesis source connector -sidebar_label: "Kinesis source connector" -original_id: io-kinesis-source ---- - -The Kinesis source connector pulls data from Amazon Kinesis and persists data into Pulsar. - -This connector uses the [Kinesis Consumer Library](https://github.com/awslabs/amazon-kinesis-client) (KCL) to do the actual consuming of messages. The KCL uses DynamoDB to track state for consumers. - -> Note: currently, the Kinesis source connector only supports raw messages. If you use KMS encrypted messages, the encrypted messages are sent to downstream. This connector will support decrypting messages in the future release. - - -## Configuration - -The configuration of the Kinesis source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -`initialPositionInStream`|InitialPositionInStream|false|LATEST|The position where the connector starts from.

    Below are the available options:

  * `AT_TIMESTAMP`: start from the record at or after the specified timestamp.

  * `LATEST`: start after the most recent data record.

  * `TRIM_HORIZON`: start from the oldest available data record.
-`startAtTime`|Date|false|" " (empty string)|If set to `AT_TIMESTAMP`, it specifies the point in time to start consumption.
-`applicationName`|String|false|Pulsar IO connector|The name of the Amazon Kinesis application.

    By default, the application name is included in the user agent string used to make AWS requests. This can assist with troubleshooting, for example, distinguishing requests made by separate connector instances.
-`checkpointInterval`|long|false|60000|The frequency of the Kinesis stream checkpoint in milliseconds.
-`backoffTime`|long|false|3000|The amount of time, in milliseconds, to delay between requests when the connector encounters a throttling exception from AWS Kinesis.
-`numRetries`|int|false|3|The number of re-attempts when the connector encounters an exception while trying to set a checkpoint.
-`receiveQueueSize`|int|false|1000|The maximum number of AWS records that can be buffered inside the connector.

    Once the `receiveQueueSize` is reached, the connector does not consume any messages from Kinesis until some messages in the queue are successfully consumed.
-`dynamoEndpoint`|String|false|" " (empty string)|The Dynamo end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
-`cloudwatchEndpoint`|String|false|" " (empty string)|The Cloudwatch end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
-`useEnhancedFanOut`|boolean|false|true|If set to true, it uses Kinesis enhanced fan-out.

    If set to false, it uses polling.
-`awsEndpoint`|String|false|" " (empty string)|The Kinesis end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
-`awsRegion`|String|false|" " (empty string)|The AWS region.

    **Example**
    us-west-1, us-west-2
-`awsKinesisStreamName`|String|true|" " (empty string)|The Kinesis stream name.
-`awsCredentialPluginName`|String|false|" " (empty string)|The fully-qualified class name of the implementation of {@inject: github:AwsCredentialProviderPlugin:/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java}.

    `awsCredentialProviderPlugin` has the following built-in plugins:

  * `org.apache.pulsar.io.kinesis.AwsDefaultProviderChainPlugin`:
    this plugin uses the default AWS provider chain.
    For more information, see [using the default credential provider chain](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html#credentials-default).

  * `org.apache.pulsar.io.kinesis.STSAssumeRoleProviderPlugin`:
    this plugin takes a configuration via the `awsCredentialPluginParam` that describes a role to assume when running the KCL.
    **JSON configuration example**
    `{"roleArn": "arn...", "roleSessionName": "name"}`

    `awsCredentialPluginName` is a factory class that creates the AWSCredentialsProvider used by the Kinesis sink.

    If `awsCredentialPluginName` is set to empty, the Kinesis sink creates a default AWSCredentialsProvider which accepts a JSON map of credentials in `awsCredentialPluginParam`.
  1801. -`awsCredentialPluginParam`|String |false|" " (empty string)|The JSON parameter to initialize `awsCredentialsProviderPlugin`. - -### Example - -Before using the Kinesis source connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "awsEndpoint": "https://some.endpoint.aws", - "awsRegion": "us-east-1", - "awsKinesisStreamName": "my-stream", - "awsCredentialPluginParam": "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}", - "applicationName": "My test application", - "checkpointInterval": "30000", - "backoffTime": "4000", - "numRetries": "3", - "receiveQueueSize": 2000, - "initialPositionInStream": "TRIM_HORIZON", - "startAtTime": "2019-03-05T19:28:58.000Z" - } - - ``` - -* YAML - - ```yaml - - configs: - awsEndpoint: "https://some.endpoint.aws" - awsRegion: "us-east-1" - awsKinesisStreamName: "my-stream" - awsCredentialPluginParam: "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}" - applicationName: "My test application" - checkpointInterval: 30000 - backoffTime: 4000 - numRetries: 3 - receiveQueueSize: 2000 - initialPositionInStream: "TRIM_HORIZON" - startAtTime: "2019-03-05T19:28:58.000Z" - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/io-mongo-sink.md b/site2/website/versioned_docs/version-2.9.1-deprecated/io-mongo-sink.md deleted file mode 100644 index 30c15a6c280938..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/io-mongo-sink.md +++ /dev/null @@ -1,56 +0,0 @@ ---- -id: io-mongo-sink -title: MongoDB sink connector -sidebar_label: "MongoDB sink connector" -original_id: io-mongo-sink ---- - -The MongoDB sink connector pulls messages from Pulsar topics -and persists the messages to collections. - -## Configuration - -The configuration of the MongoDB sink connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `mongoUri` | String| true| " " (empty string) | The MongoDB URI to which the connector connects.

    For more information, see [connection string URI format](https://docs.mongodb.com/manual/reference/connection-string/). |
-| `database` | String| true| " " (empty string)| The database name to which the collection belongs. |
-| `collection` | String| true| " " (empty string)| The collection name to which the connector writes messages. |
-| `batchSize` | int|false|100 | The batch size of writing messages to collections. |
-| `batchTimeMs` |long|false|1000| The batch operation interval in milliseconds. |
-
-
-### Example
-
-Before using the Mongo sink connector, you need to create a configuration file through one of the following methods.
-
-* JSON
-
-  ```json
-
-  {
-    "mongoUri": "mongodb://localhost:27017",
-    "database": "pulsar",
-    "collection": "messages",
-    "batchSize": "2",
-    "batchTimeMs": "500"
-  }
-
-  ```
-
-* YAML
-
-  ```yaml
-
-  configs:
-    mongoUri: "mongodb://localhost:27017"
-    database: "pulsar"
-    collection: "messages"
-    batchSize: 2
-    batchTimeMs: 500
-
-  ```
-
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/io-netty-source.md b/site2/website/versioned_docs/version-2.9.1-deprecated/io-netty-source.md
deleted file mode 100644
index e1ec8d863115b3..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/io-netty-source.md
+++ /dev/null
@@ -1,241 +0,0 @@
----
-id: io-netty-source
-title: Netty source connector
-sidebar_label: "Netty source connector"
-original_id: io-netty-source
----
-
-The Netty source connector opens a port that accepts incoming data via the configured network protocol
-and publishes it to user-defined Pulsar topics.
-
-This connector can be used in a containerized (for example, k8s) deployment. Otherwise, if the connector is running in process or thread mode, instances may conflict with each other when listening on ports.
-
-## Configuration
-
-The configuration of the Netty source connector has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `type` |String| true |tcp | The network protocol over which data is transmitted to Netty.

    Below are the available options:
  * tcp
  * http
  * udp
  1805. | -| `host` | String|true | 127.0.0.1 | The host name or address on which the source instance listen. | -| `port` | int|true | 10999 | The port on which the source instance listen. | -| `numberOfThreads` |int| true |1 | The number of threads of Netty TCP server to accept incoming connections and handle the traffic of accepted connections. | - - -### Example - -Before using the Netty source connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "type": "tcp", - "host": "127.0.0.1", - "port": "10911", - "numberOfThreads": "1" - } - - ``` - -* YAML - - ```yaml - - configs: - type: "tcp" - host: "127.0.0.1" - port: 10999 - numberOfThreads: 1 - - ``` - -## Usage - -The following examples show how to use the Netty source connector with TCP and HTTP. - -### TCP - -1. Start Pulsar standalone. - - ```bash - - $ docker pull apachepulsar/pulsar:{version} - - $ docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-netty-standalone apachepulsar/pulsar:{version} bin/pulsar standalone - - ``` - -2. Create a configuration file _netty-source-config.yaml_. - - ```yaml - - configs: - type: "tcp" - host: "127.0.0.1" - port: 10999 - numberOfThreads: 1 - - ``` - -3. Copy the configuration file _netty-source-config.yaml_ to Pulsar server. - - ```bash - - $ docker cp netty-source-config.yaml pulsar-netty-standalone:/pulsar/conf/ - - ``` - -4. Download the Netty source connector. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - curl -O http://mirror-hk.koddos.net/apache/pulsar/pulsar-{version}/connectors/pulsar-io-netty-{version}.nar - - ``` - -5. Start the Netty source connector. - - ```bash - - $ ./bin/pulsar-admin sources localrun \ - --archive pulsar-io-@pulsar:version@.nar \ - --tenant public \ - --namespace default \ - --name netty \ - --destination-topic-name netty-topic \ - --source-config-file netty-source-config.yaml \ - --parallelism 1 - - ``` - -6. Consume data. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - - $ ./bin/pulsar-client consume -t Exclusive -s netty-sub netty-topic -n 0 - - ``` - -7. Open another terminal window to send data to the Netty source. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - - $ apt-get update - - $ apt-get -y install telnet - - $ root@1d19327b2c67:/pulsar# telnet 127.0.0.1 10999 - Trying 127.0.0.1... - Connected to 127.0.0.1. - Escape character is '^]'. - hello - world - - ``` - -8. The following information appears on the consumer terminal window. - - ```bash - - ----- got message ----- - hello - - ----- got message ----- - world - - ``` - -### HTTP - -1. Start Pulsar standalone. - - ```bash - - $ docker pull apachepulsar/pulsar:{version} - - $ docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-netty-standalone apachepulsar/pulsar:{version} bin/pulsar standalone - - ``` - -2. Create a configuration file _netty-source-config.yaml_. - - ```yaml - - configs: - type: "http" - host: "127.0.0.1" - port: 10999 - numberOfThreads: 1 - - ``` - -3. Copy the configuration file _netty-source-config.yaml_ to Pulsar server. - - ```bash - - $ docker cp netty-source-config.yaml pulsar-netty-standalone:/pulsar/conf/ - - ``` - -4. Download the Netty source connector. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - curl -O http://mirror-hk.koddos.net/apache/pulsar/pulsar-{version}/connectors/pulsar-io-netty-{version}.nar - - ``` - -5. Start the Netty source connector. 
- - ```bash - - $ ./bin/pulsar-admin sources localrun \ - --archive pulsar-io-@pulsar:version@.nar \ - --tenant public \ - --namespace default \ - --name netty \ - --destination-topic-name netty-topic \ - --source-config-file netty-source-config.yaml \ - --parallelism 1 - - ``` - -6. Consume data. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - - $ ./bin/pulsar-client consume -t Exclusive -s netty-sub netty-topic -n 0 - - ``` - -7. Open another terminal window to send data to the Netty source. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - - $ curl -X POST --data 'hello, world!' http://127.0.0.1:10999/ - - ``` - -8. The following information appears on the consumer terminal window. - - ```bash - - ----- got message ----- - hello, world! - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/io-nsq-source.md b/site2/website/versioned_docs/version-2.9.1-deprecated/io-nsq-source.md deleted file mode 100644 index b61e7e100c22e1..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/io-nsq-source.md +++ /dev/null @@ -1,21 +0,0 @@ ---- -id: io-nsq-source -title: NSQ source connector -sidebar_label: "NSQ source connector" -original_id: io-nsq-source ---- - -The NSQ source connector receives messages from NSQ topics -and writes messages to Pulsar topics. - -## Configuration - -The configuration of the NSQ source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `lookupds` |String| true | " " (empty string) | A comma-separated list of nsqlookupds to connect to. | -| `topic` | String|true | " " (empty string) | The NSQ topic to transport. | -| `channel` | String |false | pulsar-transport-{$topic} | The channel to consume from on the provided NSQ topic. | \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/io-overview.md b/site2/website/versioned_docs/version-2.9.1-deprecated/io-overview.md deleted file mode 100644 index 3db5ee34042d3f..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/io-overview.md +++ /dev/null @@ -1,164 +0,0 @@ ---- -id: io-overview -title: Pulsar connector overview -sidebar_label: "Overview" -original_id: io-overview ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -Messaging systems are most powerful when you can easily use them with external systems like databases and other messaging systems. - -**Pulsar IO connectors** enable you to easily create, deploy, and manage connectors that interact with external systems, such as [Apache Cassandra](https://cassandra.apache.org), [Aerospike](https://www.aerospike.com), and many others. - - -## Concept - -Pulsar IO connectors come in two types: **source** and **sink**. - -This diagram illustrates the relationship between source, Pulsar, and sink: - -![Pulsar IO diagram](/assets/pulsar-io.png "Pulsar IO connectors (sources and sinks)") - - -### Source - -> Sources **feed data from external systems into Pulsar**. - -Common sources include other messaging systems and firehose-style data pipeline APIs. - -For the complete list of Pulsar built-in source connectors, see [source connector](io-connectors.md#source-connector). - -### Sink - -> Sinks **feed data from Pulsar into external systems**. - -Common sinks include other messaging systems and SQL and NoSQL databases. 
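To make the data flow concrete, the following is a minimal hand-rolled sketch of what a sink does: it consumes messages from a Pulsar topic and writes each one to an external store (SQLite here, purely for illustration). This is an added sketch, not a Pulsar IO connector, and the topic, subscription, and file names are illustrative; real deployments should use the built-in connectors listed below.

```python
import sqlite3

import pulsar

# Illustrative "sink": read from a Pulsar topic, persist to SQLite.
client = pulsar.Client('pulsar://localhost:6650')  # assumes a local broker
consumer = client.subscribe('my-topic', subscription_name='sqlite-sink-demo')

db = sqlite3.connect('sink-demo.db')
db.execute('CREATE TABLE IF NOT EXISTS messages (id TEXT PRIMARY KEY, payload BLOB)')

try:
    while True:
        msg = consumer.receive()
        # Write to the external system first and acknowledge afterwards;
        # this gives the at-least-once behavior described below under
        # "Processing guarantee".
        db.execute('INSERT OR REPLACE INTO messages VALUES (?, ?)',
                   (str(msg.message_id()), msg.data()))
        db.commit()
        consumer.acknowledge(msg)
finally:
    client.close()
```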
- -For the complete list of Pulsar built-in sink connectors, see [sink connector](io-connectors.md#sink-connector). - -## Processing guarantee - -Processing guarantees are used to handle errors when writing messages to Pulsar topics. - -> Pulsar connectors and Functions use the **same** processing guarantees as below. - -Delivery semantic | Description -:------------------|:------- -`at-most-once` | Each message sent to a connector is to be **processed once** or **not to be processed**. -`at-least-once` | Each message sent to a connector is to be **processed once** or **more than once**. -`effectively-once` | Each message sent to a connector has **one output associated** with it. - -> Processing guarantees for connectors not just rely on Pulsar guarantee but also **relate to external systems**, that is, **the implementation of source and sink**. - -* Source: Pulsar ensures that writing messages to Pulsar topics respects to the processing guarantees. It is within Pulsar's control. - -* Sink: the processing guarantees rely on the sink implementation. If the sink implementation does not handle retries in an idempotent way, the sink does not respect to the processing guarantees. - -### Set - -When creating a connector, you can set the processing guarantee with the following semantics: - -* ATLEAST_ONCE - -* ATMOST_ONCE - -* EFFECTIVELY_ONCE - -> If `--processing-guarantees` is not specified when creating a connector, the default semantic is `ATLEAST_ONCE`. - -Here takes **Admin CLI** as an example. For more information about **REST API** or **JAVA Admin API**, see [here](io-use.md#create). - -````mdx-code-block - - - - -```bash - -$ bin/pulsar-admin sources create \ - --processing-guarantees ATMOST_ONCE \ - # Other source configs - -``` - -For more information about the options of `pulsar-admin sources create`, see [here](reference-connector-admin.md#create). - - - - -```bash - -$ bin/pulsar-admin sinks create \ - --processing-guarantees EFFECTIVELY_ONCE \ - # Other sink configs - -``` - -For more information about the options of `pulsar-admin sinks create`, see [here](reference-connector-admin.md#create-1). - - - - -```` - -### Update - -After creating a connector, you can update the processing guarantee with the following semantics: - -* ATLEAST_ONCE - -* ATMOST_ONCE - -* EFFECTIVELY_ONCE - -Here takes **Admin CLI** as an example. For more information about **REST API** or **JAVA Admin API**, see [here](io-use.md#create). - -````mdx-code-block - - - - -```bash - -$ bin/pulsar-admin sources update \ - --processing-guarantees EFFECTIVELY_ONCE \ - # Other source configs - -``` - -For more information about the options of `pulsar-admin sources update`, see [here](reference-connector-admin.md#update). - - - - -```bash - -$ bin/pulsar-admin sinks update \ - --processing-guarantees ATMOST_ONCE \ - # Other sink configs - -``` - -For more information about the options of `pulsar-admin sinks update`, see [here](reference-connector-admin.md#update-1). - - - - -```` - - -## Work with connector - -You can manage Pulsar connectors (for example, create, update, start, stop, restart, reload, delete and perform other operations on connectors) via the [Connector Admin CLI](reference-connector-admin.md) with [sources](io-cli.md#sources) and [sinks](io-cli.md#sinks) subcommands. - -Connectors (sources and sinks) and Functions are components of instances, and they all run on Functions workers. 
When managing a source, sink, or function via [Connector Admin CLI](reference-connector-admin.md) or [Functions Admin CLI](functions-cli.md), an instance is started on a worker. For more information, see [Functions worker](functions-worker.md#run-functions-worker-separately).
-
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/io-quickstart.md b/site2/website/versioned_docs/version-2.9.1-deprecated/io-quickstart.md
deleted file mode 100644
index 40eaf5c1de2214..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/io-quickstart.md
+++ /dev/null
@@ -1,963 +0,0 @@
----
-id: io-quickstart
-title: How to connect Pulsar to database
-sidebar_label: "Get started"
-original_id: io-quickstart
----
-
-This tutorial provides a hands-on look at how you can move data out of Pulsar without writing a single line of code.
-
-It is helpful to review the [concepts](io-overview.md) for Pulsar I/O while running the steps in this guide to gain a deeper understanding.
-
-At the end of this tutorial, you are able to:
-
-- [Connect Pulsar to Cassandra](#Connect-Pulsar-to-Cassandra)
-
-- [Connect Pulsar to PostgreSQL](#Connect-Pulsar-to-PostgreSQL)
-
-:::tip
-
-* These instructions assume you are running Pulsar in [standalone mode](getting-started-standalone.md). However, all
-the commands used in this tutorial can be used in a multi-node Pulsar cluster without any changes.
-* All the instructions are assumed to be run from the root directory of a Pulsar binary distribution.
-
-:::
-
-## Install Pulsar and built-in connector
-
-Before connecting Pulsar to a database, you need to install Pulsar and the desired built-in connector.
-
-For more information about **how to install a standalone Pulsar and built-in connectors**, see [here](getting-started-standalone.md/#installing-pulsar).
-
-## Start Pulsar standalone
-
-1. Start Pulsar locally.
-
-   ```bash
-
-   bin/pulsar standalone
-
-   ```
-
-   All the components of a Pulsar service are started in order.
-
-   You can curl these Pulsar service endpoints to make sure the Pulsar service is up and running correctly.
-
-2. Check Pulsar binary protocol port.
-
-   ```bash
-
-   telnet localhost 6650
-
-   ```
-
-3. Check Pulsar Function cluster.
-
-   ```bash
-
-   curl -s http://localhost:8080/admin/v2/worker/cluster
-
-   ```
-
-   **Example output**
-
-   ```json
-
-   [{"workerId":"c-standalone-fw-localhost-6750","workerHostname":"localhost","port":6750}]
-
-   ```
-
-4. Make sure a public tenant and a default namespace exist.
-
-   ```bash
-
-   curl -s http://localhost:8080/admin/v2/namespaces/public
-
-   ```
-
-   **Example output**
-
-   ```json
-
-   ["public/default","public/functions"]
-
-   ```
-
-5. All built-in connectors should be listed as available.
- - ```bash - - curl -s http://localhost:8080/admin/v2/functions/connectors - - ``` - - **Example output** - - ```json - - [{"name":"aerospike","description":"Aerospike database sink","sinkClass":"org.apache.pulsar.io.aerospike.AerospikeStringSink"},{"name":"cassandra","description":"Writes data into Cassandra","sinkClass":"org.apache.pulsar.io.cassandra.CassandraStringSink"},{"name":"kafka","description":"Kafka source and sink connector","sourceClass":"org.apache.pulsar.io.kafka.KafkaStringSource","sinkClass":"org.apache.pulsar.io.kafka.KafkaBytesSink"},{"name":"kinesis","description":"Kinesis sink connector","sinkClass":"org.apache.pulsar.io.kinesis.KinesisSink"},{"name":"rabbitmq","description":"RabbitMQ source connector","sourceClass":"org.apache.pulsar.io.rabbitmq.RabbitMQSource"},{"name":"twitter","description":"Ingest data from Twitter firehose","sourceClass":"org.apache.pulsar.io.twitter.TwitterFireHose"}] - - ``` - - If an error occurs when starting Pulsar service, you may see an exception at the terminal running `pulsar/standalone`, - or you can navigate to the `logs` directory under the Pulsar directory to view the logs. - -## Connect Pulsar to Cassandra - -This section demonstrates how to connect Pulsar to Cassandra. - -:::tip - -* Make sure you have Docker installed. If you do not have one, see [install Docker](https://docs.docker.com/docker-for-mac/install/). -* The Cassandra sink connector reads messages from Pulsar topics and writes the messages into Cassandra tables. For more information, see [Cassandra sink connector](io-cassandra-sink.md). - -::: - -### Setup a Cassandra cluster - -This example uses `cassandra` Docker image to start a single-node Cassandra cluster in Docker. - -1. Start a Cassandra cluster. - - ```bash - - docker run -d --rm --name=cassandra -p 9042:9042 cassandra - - ``` - - :::note - - Before moving to the next steps, make sure the Cassandra cluster is running. - - ::: - -2. Make sure the Docker process is running. - - ```bash - - docker ps - - ``` - -3. Check the Cassandra logs to make sure the Cassandra process is running as expected. - - ```bash - - docker logs cassandra - - ``` - -4. Check the status of the Cassandra cluster. - - ```bash - - docker exec cassandra nodetool status - - ``` - - **Example output** - - ``` - - Datacenter: datacenter1 - ======================= - Status=Up/Down - |/ State=Normal/Leaving/Joining/Moving - -- Address Load Tokens Owns (effective) Host ID Rack - UN 172.17.0.2 103.67 KiB 256 100.0% af0e4b2f-84e0-4f0b-bb14-bd5f9070ff26 rack1 - - ``` - -5. Use `cqlsh` to connect to the Cassandra cluster. - - ```bash - - $ docker exec -ti cassandra cqlsh localhost - Connected to Test Cluster at localhost:9042. - [cqlsh 5.0.1 | Cassandra 3.11.2 | CQL spec 3.4.4 | Native protocol v4] - Use HELP for help. - cqlsh> - - ``` - -6. Create a keyspace `pulsar_test_keyspace`. - - ```bash - - cqlsh> CREATE KEYSPACE pulsar_test_keyspace WITH replication = {'class':'SimpleStrategy', 'replication_factor':1}; - - ``` - -7. Create a table `pulsar_test_table`. - - ```bash - - cqlsh> USE pulsar_test_keyspace; - cqlsh:pulsar_test_keyspace> CREATE TABLE pulsar_test_table (key text PRIMARY KEY, col text); - - ``` - -### Configure a Cassandra sink - -Now that we have a Cassandra cluster running locally. - -In this section, you need to configure a Cassandra sink connector. - -To run a Cassandra sink connector, you need to prepare a configuration file including the information that Pulsar connector runtime needs to know. 
- -For example, how Pulsar connector can find the Cassandra cluster, what is the keyspace and the table that Pulsar connector uses for writing Pulsar messages to, and so on. - -You can create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "roots": "localhost:9042", - "keyspace": "pulsar_test_keyspace", - "columnFamily": "pulsar_test_table", - "keyname": "key", - "columnName": "col" - } - - ``` - -* YAML - - ```yaml - - configs: - roots: "localhost:9042" - keyspace: "pulsar_test_keyspace" - columnFamily: "pulsar_test_table" - keyname: "key" - columnName: "col" - - ``` - -For more information, see [Cassandra sink connector](io-cassandra-sink.md). - -### Create a Cassandra sink - -You can use the [Connector Admin CLI](io-cli.md) -to create a sink connector and perform other operations on them. - -Run the following command to create a Cassandra sink connector with sink type _cassandra_ and the config file _examples/cassandra-sink.yml_ created previously. - -#### Note -> The `sink-type` parameter of the currently built-in connectors is determined by the setting of the `name` parameter specified in the pulsar-io.yaml file. - -```bash - -bin/pulsar-admin sinks create \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink \ - --sink-type cassandra \ - --sink-config-file examples/cassandra-sink.yml \ - --inputs test_cassandra - -``` - -Once the command is executed, Pulsar creates the sink connector _cassandra-test-sink_. - -This sink connector runs -as a Pulsar Function and writes the messages produced in the topic _test_cassandra_ to the Cassandra table _pulsar_test_table_. - -### Inspect a Cassandra sink - -You can use the [Connector Admin CLI](io-cli.md) -to monitor a connector and perform other operations on it. - -* Get the information of a Cassandra sink. - - ```bash - - bin/pulsar-admin sinks get \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink - - ``` - - **Example output** - - ```json - - { - "tenant": "public", - "namespace": "default", - "name": "cassandra-test-sink", - "className": "org.apache.pulsar.io.cassandra.CassandraStringSink", - "inputSpecs": { - "test_cassandra": { - "isRegexPattern": false - } - }, - "configs": { - "roots": "localhost:9042", - "keyspace": "pulsar_test_keyspace", - "columnFamily": "pulsar_test_table", - "keyname": "key", - "columnName": "col" - }, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "autoAck": true, - "archive": "builtin://cassandra" - } - - ``` - -* Check the status of a Cassandra sink. - - ```bash - - bin/pulsar-admin sinks status \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink - - ``` - - **Example output** - - ```json - - { - "numInstances" : 1, - "numRunning" : 1, - "instances" : [ { - "instanceId" : 0, - "status" : { - "running" : true, - "error" : "", - "numRestarts" : 0, - "numReadFromPulsar" : 0, - "numSystemExceptions" : 0, - "latestSystemExceptions" : [ ], - "numSinkExceptions" : 0, - "latestSinkExceptions" : [ ], - "numWrittenToSink" : 0, - "lastReceivedTime" : 0, - "workerId" : "c-standalone-fw-localhost-8080" - } - } ] - } - - ``` - -### Verify a Cassandra sink - -1. Produce some messages to the input topic of the Cassandra sink _test_cassandra_. - - ```bash - - for i in {0..9}; do bin/pulsar-client produce -m "key-$i" -n 1 test_cassandra; done - - ``` - -2. Inspect the status of the Cassandra sink _test_cassandra_. 
- - ```bash - - bin/pulsar-admin sinks status \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink - - ``` - - You can see 10 messages are processed by the Cassandra sink _test_cassandra_. - - **Example output** - - ```json - - { - "numInstances" : 1, - "numRunning" : 1, - "instances" : [ { - "instanceId" : 0, - "status" : { - "running" : true, - "error" : "", - "numRestarts" : 0, - "numReadFromPulsar" : 10, - "numSystemExceptions" : 0, - "latestSystemExceptions" : [ ], - "numSinkExceptions" : 0, - "latestSinkExceptions" : [ ], - "numWrittenToSink" : 10, - "lastReceivedTime" : 1551685489136, - "workerId" : "c-standalone-fw-localhost-8080" - } - } ] - } - - ``` - -3. Use `cqlsh` to connect to the Cassandra cluster. - - ```bash - - docker exec -ti cassandra cqlsh localhost - - ``` - -4. Check the data of the Cassandra table _pulsar_test_table_. - - ```bash - - cqlsh> use pulsar_test_keyspace; - cqlsh:pulsar_test_keyspace> select * from pulsar_test_table; - - key | col - --------+-------- - key-5 | key-5 - key-0 | key-0 - key-9 | key-9 - key-2 | key-2 - key-1 | key-1 - key-3 | key-3 - key-6 | key-6 - key-7 | key-7 - key-4 | key-4 - key-8 | key-8 - - ``` - -### Delete a Cassandra Sink - -You can use the [Connector Admin CLI](io-cli.md) -to delete a connector and perform other operations on it. - -```bash - -bin/pulsar-admin sinks delete \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink - -``` - -## Connect Pulsar to PostgreSQL - -This section demonstrates how to connect Pulsar to PostgreSQL. - -:::tip - -* Make sure you have Docker installed. If you do not have one, see [install Docker](https://docs.docker.com/docker-for-mac/install/). -* The JDBC sink connector pulls messages from Pulsar topics and persists the messages to ClickHouse, MariaDB, PostgreSQL, or SQlite. - -::: - ->For more information, see [JDBC sink connector](io-jdbc-sink.md). - - -### Setup a PostgreSQL cluster - -This example uses the PostgreSQL 12 docker image to start a single-node PostgreSQL cluster in Docker. - -1. Pull the PostgreSQL 12 image from Docker. - - ```bash - - $ docker pull postgres:12 - - ``` - -2. Start PostgreSQL. - - ```bash - - $ docker run -d -it --rm \ - --name pulsar-postgres \ - -p 5432:5432 \ - -e POSTGRES_PASSWORD=password \ - -e POSTGRES_USER=postgres \ - postgres:12 - - ``` - - #### Tip - - Flag | Description | This example - ---|---|---| - `-d` | To start a container in detached mode. | / - `-it` | Keep STDIN open even if not attached and allocate a terminal. | / - `--rm` | Remove the container automatically when it exits. | / - `-name` | Assign a name to the container. | This example specifies _pulsar-postgres_ for the container. - `-p` | Publish the port of the container to the host. | This example publishes the port _5432_ of the container to the host. - `-e` | Set environment variables. | This example sets the following variables:
    - The password for the user is _password_.
    - The name for the user is _postgres_. - - :::tip - - For more information about Docker commands, see [Docker CLI](https://docs.docker.com/engine/reference/commandline/run/). - - ::: - -3. Check if PostgreSQL has been started successfully. - - ```bash - - $ docker logs -f pulsar-postgres - - ``` - - PostgreSQL has been started successfully if the following message appears. - - ```text - - 2020-05-11 20:09:24.492 UTC [1] LOG: starting PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit - 2020-05-11 20:09:24.492 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 - 2020-05-11 20:09:24.492 UTC [1] LOG: listening on IPv6 address "::", port 5432 - 2020-05-11 20:09:24.499 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" - 2020-05-11 20:09:24.523 UTC [55] LOG: database system was shut down at 2020-05-11 20:09:24 UTC - 2020-05-11 20:09:24.533 UTC [1] LOG: database system is ready to accept connections - - ``` - -4. Access to PostgreSQL. - - ```bash - - $ docker exec -it pulsar-postgres /bin/bash - - ``` - -5. Create a PostgreSQL table _pulsar_postgres_jdbc_sink_. - - ```bash - - $ psql -U postgres postgres - - postgres=# create table if not exists pulsar_postgres_jdbc_sink - ( - id serial PRIMARY KEY, - name VARCHAR(255) NOT NULL - ); - - ``` - -### Configure a JDBC sink - -Now we have a PostgreSQL running locally. - -In this section, you need to configure a JDBC sink connector. - -1. Add a configuration file. - - To run a JDBC sink connector, you need to prepare a YAML configuration file including the information that Pulsar connector runtime needs to know. - - For example, how Pulsar connector can find the PostgreSQL cluster, what is the JDBC URL and the table that Pulsar connector uses for writing messages to. - - Create a _pulsar-postgres-jdbc-sink.yaml_ file, copy the following contents to this file, and place the file in the `pulsar/connectors` folder. - - ```yaml - - configs: - userName: "postgres" - password: "password" - jdbcUrl: "jdbc:postgresql://localhost:5432/postgres" - tableName: "pulsar_postgres_jdbc_sink" - - ``` - -2. Create a schema. - - Create a _avro-schema_ file, copy the following contents to this file, and place the file in the `pulsar/connectors` folder. - - ```json - - { - "type": "AVRO", - "schema": "{\"type\":\"record\",\"name\":\"Test\",\"fields\":[{\"name\":\"id\",\"type\":[\"null\",\"int\"]},{\"name\":\"name\",\"type\":[\"null\",\"string\"]}]}", - "properties": {} - } - - ``` - - :::tip - - For more information about AVRO, see [Apache Avro](https://avro.apache.org/docs/1.9.1/). - - ::: - -3. Upload a schema to a topic. - - This example uploads the _avro-schema_ schema to the _pulsar-postgres-jdbc-sink-topic_ topic. - - ```bash - - $ bin/pulsar-admin schemas upload pulsar-postgres-jdbc-sink-topic -f ./connectors/avro-schema - - ``` - -4. Check if the schema has been uploaded successfully. - - ```bash - - $ bin/pulsar-admin schemas get pulsar-postgres-jdbc-sink-topic - - ``` - - The schema has been uploaded successfully if the following message appears. - - ```json - - {"name":"pulsar-postgres-jdbc-sink-topic","schema":"{\"type\":\"record\",\"name\":\"Test\",\"fields\":[{\"name\":\"id\",\"type\":[\"null\",\"int\"]},{\"name\":\"name\",\"type\":[\"null\",\"string\"]}]}","type":"AVRO","properties":{}} - - ``` - -### Create a JDBC sink - -You can use the [Connector Admin CLI](io-cli.md) -to create a sink connector and perform other operations on it. 
- -This example creates a sink connector and specifies the desired information. - -```bash - -$ bin/pulsar-admin sinks create \ ---archive ./connectors/pulsar-io-jdbc-postgres-@pulsar:version@.nar \ ---inputs pulsar-postgres-jdbc-sink-topic \ ---name pulsar-postgres-jdbc-sink \ ---sink-config-file ./connectors/pulsar-postgres-jdbc-sink.yaml \ ---parallelism 1 - -``` - -Once the command is executed, Pulsar creates a sink connector _pulsar-postgres-jdbc-sink_. - -This sink connector runs as a Pulsar Function and writes the messages produced in the topic _pulsar-postgres-jdbc-sink-topic_ to the PostgreSQL table _pulsar_postgres_jdbc_sink_. - - #### Tip - - Flag | Description | This example - ---|---|---| - `--archive` | The path to the archive file for the sink. | _pulsar-io-jdbc-postgres-@pulsar:version@.nar_ | - `--inputs` | The input topic(s) of the sink.

    Multiple topics can be specified as a comma-separated list.|| - `--name` | The name of the sink. | _pulsar-postgres-jdbc-sink_ | - `--sink-config-file` | The path to a YAML config file specifying the configuration of the sink. | _pulsar-postgres-jdbc-sink.yaml_ | - `--parallelism` | The parallelism factor of the sink.

    For example, the number of sink instances to run. | _1_ | - -:::tip - -For more information about `pulsar-admin sinks create options`, see [here](io-cli.md#sinks). - -::: - -The sink has been created successfully if the following message appears. - -```bash - -"Created successfully" - -``` - -### Inspect a JDBC sink - -You can use the [Connector Admin CLI](io-cli.md) -to monitor a connector and perform other operations on it. - -* List all running JDBC sink(s). - - ```bash - - $ bin/pulsar-admin sinks list \ - --tenant public \ - --namespace default - - ``` - - :::tip - - For more information about `pulsar-admin sinks list options`, see [here](io-cli.md/#list-1). - - ::: - - The result shows that only the _postgres-jdbc-sink_ sink is running. - - ```json - - [ - "pulsar-postgres-jdbc-sink" - ] - - ``` - -* Get the information of a JDBC sink. - - ```bash - - $ bin/pulsar-admin sinks get \ - --tenant public \ - --namespace default \ - --name pulsar-postgres-jdbc-sink - - ``` - - :::tip - - For more information about `pulsar-admin sinks get options`, see [here](io-cli.md/#get-1). - - ::: - - The result shows the information of the sink connector, including tenant, namespace, topic and so on. - - ```json - - { - "tenant": "public", - "namespace": "default", - "name": "pulsar-postgres-jdbc-sink", - "className": "org.apache.pulsar.io.jdbc.PostgresJdbcAutoSchemaSink", - "inputSpecs": { - "pulsar-postgres-jdbc-sink-topic": { - "isRegexPattern": false - } - }, - "configs": { - "password": "password", - "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink", - "userName": "postgres", - "tableName": "pulsar_postgres_jdbc_sink" - }, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "autoAck": true - } - - ``` - -* Get the status of a JDBC sink - - ```bash - - $ bin/pulsar-admin sinks status \ - --tenant public \ - --namespace default \ - --name pulsar-postgres-jdbc-sink - - ``` - - :::tip - - For more information about `pulsar-admin sinks status options`, see [here](io-cli.md/#status-1). - - ::: - - The result shows the current status of sink connector, including the number of instance, running status, worker ID and so on. - - ```json - - { - "numInstances" : 1, - "numRunning" : 1, - "instances" : [ { - "instanceId" : 0, - "status" : { - "running" : true, - "error" : "", - "numRestarts" : 0, - "numReadFromPulsar" : 0, - "numSystemExceptions" : 0, - "latestSystemExceptions" : [ ], - "numSinkExceptions" : 0, - "latestSinkExceptions" : [ ], - "numWrittenToSink" : 0, - "lastReceivedTime" : 0, - "workerId" : "c-standalone-fw-192.168.2.52-8080" - } - } ] - } - - ``` - -### Stop a JDBC sink - -You can use the [Connector Admin CLI](io-cli.md) -to stop a connector and perform other operations on it. - -```bash - -$ bin/pulsar-admin sinks stop \ ---tenant public \ ---namespace default \ ---name pulsar-postgres-jdbc-sink - -``` - -:::tip - -For more information about `pulsar-admin sinks stop options`, see [here](io-cli.md/#stop-1). - -::: - -The sink instance has been stopped successfully if the following message disappears. - -```bash - -"Stopped successfully" - -``` - -### Restart a JDBC sink - -You can use the [Connector Admin CLI](io-cli.md) -to restart a connector and perform other operations on it. - -```bash - -$ bin/pulsar-admin sinks restart \ ---tenant public \ ---namespace default \ ---name pulsar-postgres-jdbc-sink - -``` - -:::tip - -For more information about `pulsar-admin sinks restart options`, see [here](io-cli.md/#restart-1). 
- -::: - -The sink instance has been started successfully if the following message disappears. - -```bash - -"Started successfully" - -``` - -:::tip - -* Optionally, you can run a standalone sink connector using `pulsar-admin sinks localrun options`. -Note that `pulsar-admin sinks localrun options` **runs a sink connector locally**, while `pulsar-admin sinks start options` **starts a sink connector in a cluster**. -* For more information about `pulsar-admin sinks localrun options`, see [here](io-cli.md#localrun-1). - -::: - -### Update a JDBC sink - -You can use the [Connector Admin CLI](io-cli.md) -to update a connector and perform other operations on it. - -This example updates the parallelism of the _pulsar-postgres-jdbc-sink_ sink connector to 2. - -```bash - -$ bin/pulsar-admin sinks update \ ---name pulsar-postgres-jdbc-sink \ ---parallelism 2 - -``` - -:::tip - -For more information about `pulsar-admin sinks update options`, see [here](io-cli.md/#update-1). - -::: - -The sink connector has been updated successfully if the following message disappears. - -```bash - -"Updated successfully" - -``` - -This example double-checks the information. - -```bash - -$ bin/pulsar-admin sinks get \ ---tenant public \ ---namespace default \ ---name pulsar-postgres-jdbc-sink - -``` - -The result shows that the parallelism is 2. - -```json - -{ - "tenant": "public", - "namespace": "default", - "name": "pulsar-postgres-jdbc-sink", - "className": "org.apache.pulsar.io.jdbc.PostgresJdbcAutoSchemaSink", - "inputSpecs": { - "pulsar-postgres-jdbc-sink-topic": { - "isRegexPattern": false - } - }, - "configs": { - "password": "password", - "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink", - "userName": "postgres", - "tableName": "pulsar_postgres_jdbc_sink" - }, - "parallelism": 2, - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "autoAck": true -} - -``` - -### Delete a JDBC sink - -You can use the [Connector Admin CLI](io-cli.md) -to delete a connector and perform other operations on it. - -This example deletes the _pulsar-postgres-jdbc-sink_ sink connector. - -```bash - -$ bin/pulsar-admin sinks delete \ ---tenant public \ ---namespace default \ ---name pulsar-postgres-jdbc-sink - -``` - -:::tip - -For more information about `pulsar-admin sinks delete options`, see [here](io-cli.md/#delete-1). - -::: - -The sink connector has been deleted successfully if the following message appears. - -```text - -"Deleted successfully" - -``` - -This example double-checks the status of the sink connector. - -```bash - -$ bin/pulsar-admin sinks get \ ---tenant public \ ---namespace default \ ---name pulsar-postgres-jdbc-sink - -``` - -The result shows that the sink connector does not exist. - -```text - -HTTP 404 Not Found - -Reason: Sink pulsar-postgres-jdbc-sink doesn't exist - -``` - diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/io-rabbitmq-sink.md b/site2/website/versioned_docs/version-2.9.1-deprecated/io-rabbitmq-sink.md deleted file mode 100644 index d7fda99460dc97..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/io-rabbitmq-sink.md +++ /dev/null @@ -1,85 +0,0 @@ ---- -id: io-rabbitmq-sink -title: RabbitMQ sink connector -sidebar_label: "RabbitMQ sink connector" -original_id: io-rabbitmq-sink ---- - -The RabbitMQ sink connector pulls messages from Pulsar topics -and persist the messages to RabbitMQ queues. - - -## Configuration - -The configuration of the RabbitMQ sink connector has the following properties. 
- - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `connectionName` |String| true | " " (empty string) | The connection name. | -| `host` | String| true | " " (empty string) | The RabbitMQ host. | -| `port` | int |true | 5672 | The RabbitMQ port. | -| `virtualHost` |String|true | / | The virtual host used to connect to RabbitMQ. | -| `username` | String|false | guest | The username used to authenticate to RabbitMQ. | -| `password` | String|false | guest | The password used to authenticate to RabbitMQ. | -| `queueName` | String|true | " " (empty string) | The RabbitMQ queue name that messages should be read from or written to. | -| `requestedChannelMax` | int|false | 0 | The initially requested maximum channel number.

    0 means unlimited. | -| `requestedFrameMax` | int|false |0 | The initially requested maximum frame size in octets.

    0 means unlimited. | -| `connectionTimeout` | int|false | 60000 | The timeout of TCP connection establishment in milliseconds.

    0 means infinite. | -| `handshakeTimeout` | int|false | 10000 | The timeout of AMQP0-9-1 protocol handshake in milliseconds. | -| `requestedHeartbeat` | int|false | 60 | The exchange to publish messages. | -| `exchangeName` | String|true | " " (empty string) | The maximum number of messages that the server delivers.

    0 means unlimited. | -| `prefetchGlobal` |String|true | " " (empty string) |The routing key used to publish messages. | - - -### Example - -Before using the RabbitMQ sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "host": "localhost", - "port": "5672", - "virtualHost": "/", - "username": "guest", - "password": "guest", - "queueName": "test-queue", - "connectionName": "test-connection", - "requestedChannelMax": "0", - "requestedFrameMax": "0", - "connectionTimeout": "60000", - "handshakeTimeout": "10000", - "requestedHeartbeat": "60", - "exchangeName": "test-exchange", - "routingKey": "test-key" - } - - ``` - -* YAML - - ```yaml - - configs: - host: "localhost" - port: 5672 - virtualHost: "/", - username: "guest" - password: "guest" - queueName: "test-queue" - connectionName: "test-connection" - requestedChannelMax: 0 - requestedFrameMax: 0 - connectionTimeout: 60000 - handshakeTimeout: 10000 - requestedHeartbeat: 60 - exchangeName: "test-exchange" - routingKey: "test-key" - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/io-rabbitmq-source.md b/site2/website/versioned_docs/version-2.9.1-deprecated/io-rabbitmq-source.md deleted file mode 100644 index c2c31cc97d10d9..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/io-rabbitmq-source.md +++ /dev/null @@ -1,85 +0,0 @@ ---- -id: io-rabbitmq-source -title: RabbitMQ source connector -sidebar_label: "RabbitMQ source connector" -original_id: io-rabbitmq-source ---- - -The RabbitMQ source connector receives messages from RabbitMQ clusters -and writes messages to Pulsar topics. - -## Configuration - -The configuration of the RabbitMQ source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `connectionName` |String| true | " " (empty string) | The connection name. | -| `host` | String| true | " " (empty string) | The RabbitMQ host. | -| `port` | int |true | 5672 | The RabbitMQ port. | -| `virtualHost` |String|true | / | The virtual host used to connect to RabbitMQ. | -| `username` | String|false | guest | The username used to authenticate to RabbitMQ. | -| `password` | String|false | guest | The password used to authenticate to RabbitMQ. | -| `queueName` | String|true | " " (empty string) | The RabbitMQ queue name that messages should be read from or written to. | -| `requestedChannelMax` | int|false | 0 | The initially requested maximum channel number.

    0 means unlimited. | -| `requestedFrameMax` | int|false |0 | The initially requested maximum frame size in octets.

    0 means unlimited. | -| `connectionTimeout` | int|false | 60000 | The timeout of TCP connection establishment in milliseconds.

    0 means infinite. | -| `handshakeTimeout` | int|false | 10000 | The timeout of AMQP0-9-1 protocol handshake in milliseconds. | -| `requestedHeartbeat` | int|false | 60 | The requested heartbeat timeout in seconds. | -| `prefetchCount` | int|false | 0 | The maximum number of messages that the server delivers.

    0 means unlimited. | -| `prefetchGlobal` | boolean|false | false |Whether the setting should be applied to the entire channel rather than each consumer. | -| `passive` | boolean|false | false | Whether the rabbitmq consumer should create its own queue or bind to an existing one. | - -### Example - -Before using the RabbitMQ source connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "host": "localhost", - "port": "5672", - "virtualHost": "/", - "username": "guest", - "password": "guest", - "queueName": "test-queue", - "connectionName": "test-connection", - "requestedChannelMax": "0", - "requestedFrameMax": "0", - "connectionTimeout": "60000", - "handshakeTimeout": "10000", - "requestedHeartbeat": "60", - "prefetchCount": "0", - "prefetchGlobal": "false", - "passive": "false" - } - - ``` - -* YAML - - ```yaml - - configs: - host: "localhost" - port: 5672 - virtualHost: "/" - username: "guest" - password: "guest" - queueName: "test-queue" - connectionName: "test-connection" - requestedChannelMax: 0 - requestedFrameMax: 0 - connectionTimeout: 60000 - handshakeTimeout: 10000 - requestedHeartbeat: 60 - prefetchCount: 0 - prefetchGlobal: "false" - passive: "false" - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/io-redis-sink.md b/site2/website/versioned_docs/version-2.9.1-deprecated/io-redis-sink.md deleted file mode 100644 index 0caf21bcf62e88..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/io-redis-sink.md +++ /dev/null @@ -1,156 +0,0 @@ ---- -id: io-redis-sink -title: Redis sink connector -sidebar_label: "Redis sink connector" -original_id: io-redis-sink ---- - -The Redis sink connector pulls messages from Pulsar topics -and persists the messages to a Redis database. - - - -## Configuration - -The configuration of the Redis sink connector has the following properties. - - - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `redisHosts` |String|true|" " (empty string) | A comma-separated list of Redis hosts to connect to. | -| `redisPassword` |String|false|" " (empty string) | The password used to connect to Redis. | -| `redisDatabase` | int|true|0 | The Redis database to connect to. | -| `clientMode` |String| false|Standalone | The client mode when interacting with Redis cluster.

    Below are the available options:
  1806. Standalone
  1807. Cluster
  1808. | -| `autoReconnect` | boolean|false|true | Whether the Redis client automatically reconnect or not. | -| `requestQueue` | int|false|2147483647 | The maximum number of queued requests to Redis. | -| `tcpNoDelay` |boolean| false| false | Whether to enable TCP with no delay or not. | -| `keepAlive` | boolean|false | false |Whether to enable a keepalive to Redis or not. | -| `connectTimeout` |long| false|10000 | The time to wait before timing out when connecting in milliseconds. | -| `operationTimeout` | long|false|10000 | The time before an operation is marked as timed out in milliseconds . | -| `batchTimeMs` | int|false|1000 | The Redis operation time in milliseconds. | -| `batchSize` | int|false|200 | The batch size of writing to Redis database. | - - -### Example - -Before using the Redis sink connector, you need to create a configuration file in the path you will start Pulsar service (i.e. `PULSAR_HOME`) through one of the following methods. - -* JSON - - ```json - - { - "redisHosts": "localhost:6379", - "redisPassword": "mypassword", - "redisDatabase": "0", - "clientMode": "Standalone", - "operationTimeout": "2000", - "batchSize": "1", - "batchTimeMs": "1000", - "connectTimeout": "3000" - } - - ``` - -* YAML - - ```yaml - - configs: - redisHosts: "localhost:6379" - redisPassword: "mypassword" - redisDatabase: 0 - clientMode: "Standalone" - operationTimeout: 2000 - batchSize: 1 - batchTimeMs: 1000 - connectTimeout: 3000 - - ``` - -### Usage - -This example shows how to write records to a Redis database using the Pulsar Redis connector. - -1. Start a Redis server. - - ```bash - - $ docker pull redis:5.0.5 - $ docker run -d -p 6379:6379 --name my-redis redis:5.0.5 --requirepass "mypassword" - - ``` - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - - Make sure the NAR file is available at `connectors/pulsar-io-redis-@pulsar:version@.nar`. - -3. Start the Pulsar Redis connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin sinks localrun \ - --archive connectors/pulsar-io-redis-@pulsar:version@.nar \ - --tenant public \ - --namespace default \ - --name my-redis-sink \ - --sink-config '{"redisHosts": "localhost:6379","redisPassword": "mypassword","redisDatabase": "0","clientMode": "Standalone","operationTimeout": "3000","batchSize": "1"}' \ - --inputs my-redis-topic - - ``` - - * Use the **YAML** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin sinks localrun \ - --archive connectors/pulsar-io-redis-@pulsar:version@.nar \ - --tenant public \ - --namespace default \ - --name my-redis-sink \ - --sink-config-file redis-sink-config.yaml \ - --inputs my-redis-topic - - ``` - -4. Publish records to the topic. - - ```bash - - $ bin/pulsar-client produce \ - persistent://public/default/my-redis-topic \ - -k "streaming" \ - -m "Pulsar" - - ``` - -5. Start a Redis client in Docker. - - ```bash - - $ docker exec -it my-redis redis-cli -a "mypassword" - - ``` - -6. Check the key/value in Redis. 
- - ``` - - 127.0.0.1:6379> keys * - 1) "streaming" - 127.0.0.1:6379> get "streaming" - "Pulsar" - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/io-solr-sink.md b/site2/website/versioned_docs/version-2.9.1-deprecated/io-solr-sink.md deleted file mode 100644 index df2c3612c38eb6..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/io-solr-sink.md +++ /dev/null @@ -1,65 +0,0 @@ ---- -id: io-solr-sink -title: Solr sink connector -sidebar_label: "Solr sink connector" -original_id: io-solr-sink ---- - -The Solr sink connector pulls messages from Pulsar topics -and persists the messages to Solr collections. - - - -## Configuration - -The configuration of the Solr sink connector has the following properties. - - - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `solrUrl` | String|true|" " (empty string) |
1. Comma-separated ZooKeeper hosts with chroot, used in SolrCloud mode. <br/>**Example**<br/>`localhost:2181,localhost:2182/chroot` <br/><br/>2. URL to connect to Solr, used in standalone mode. <br/>**Example**<br/>`localhost:8983/solr` |
-| `solrMode` | String|true|SolrCloud| The client mode when interacting with the Solr cluster. <br/><br/>Below are the available options: <br/>1. Standalone <br/>2. SolrCloud |
-| `solrCollection` |String|true| " " (empty string) | Solr collection name to which records need to be written. |
-| `solrCommitWithinMs` |int| false|10 | The time in milliseconds within which Solr commits updates.|
-| `username` |String|false| " " (empty string) | The username for basic authentication. <br/><br/>**Note: `username` is case-sensitive.** |
-| `password` | String|false| " " (empty string) | The password for basic authentication. <br/><br/>
    **Note: `password` is case-sensitive.** | - - - -### Example - -Before using the Solr sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "solrUrl": "localhost:2181,localhost:2182/chroot", - "solrMode": "SolrCloud", - "solrCollection": "techproducts", - "solrCommitWithinMs": 100, - "username": "fakeuser", - "password": "fake@123" - } - - ``` - -* YAML - - ```yaml - - { - solrUrl: "localhost:2181,localhost:2182/chroot" - solrMode: "SolrCloud" - solrCollection: "techproducts" - solrCommitWithinMs: 100 - username: "fakeuser" - password: "fake@123" - } - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/io-twitter-source.md b/site2/website/versioned_docs/version-2.9.1-deprecated/io-twitter-source.md deleted file mode 100644 index 8de3504dd0fef2..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/io-twitter-source.md +++ /dev/null @@ -1,28 +0,0 @@ ---- -id: io-twitter-source -title: Twitter Firehose source connector -sidebar_label: "Twitter Firehose source connector" -original_id: io-twitter-source ---- - -The Twitter Firehose source connector receives tweets from Twitter Firehose and -writes the tweets to Pulsar topics. - -## Configuration - -The configuration of the Twitter Firehose source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `consumerKey` | String|true | " " (empty string) | The twitter OAuth consumer key.

    For more information, see [Access tokens](https://developer.twitter.com/en/docs/basics/authentication/guides/access-tokens). | -| `consumerSecret` | String |true | " " (empty string) | The twitter OAuth consumer secret. | -| `token` | String|true | " " (empty string) | The twitter OAuth token. | -| `tokenSecret` | String|true | " " (empty string) | The twitter OAuth secret. | -| `guestimateTweetTime`|Boolean|false|false|Most firehose events have null createdAt time.

    If `guestimateTweetTime` set to true, the connector estimates the createdTime of each firehose event to be current time. -| `clientName` | String |false | openconnector-twitter-source| The twitter firehose client name. | -| `clientHosts` |String| false | Constants.STREAM_HOST | The twitter firehose hosts to which client connects. | -| `clientBufferSize` | int|false | 50000 | The buffer size for buffering tweets fetched from twitter firehose. | - -> For more information about OAuth credentials, see [Twitter developers portal](https://developer.twitter.com/en.html). diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/io-twitter.md b/site2/website/versioned_docs/version-2.9.1-deprecated/io-twitter.md deleted file mode 100644 index 3b2f6325453c3c..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/io-twitter.md +++ /dev/null @@ -1,7 +0,0 @@ ---- -id: io-twitter -title: Twitter Firehose Connector -sidebar_label: "Twitter Firehose Connector" -original_id: io-twitter ---- - diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/io-use.md b/site2/website/versioned_docs/version-2.9.1-deprecated/io-use.md deleted file mode 100644 index da9ed746c4d372..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/io-use.md +++ /dev/null @@ -1,1787 +0,0 @@ ---- -id: io-use -title: How to use Pulsar connectors -sidebar_label: "Use" -original_id: io-use ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -This guide describes how to use Pulsar connectors. - -## Install a connector - -Pulsar bundles several [builtin connectors](io-connectors.md) used to move data in and out of commonly used systems (such as database and messaging system). Optionally, you can create and use your desired non-builtin connectors. - -:::note - -When using a non-builtin connector, you need to specify the path of a archive file for the connector. - -::: - -To set up a builtin connector, follow -the instructions [here](getting-started-standalone.md#installing-builtin-connectors). - -After the setup, the builtin connector is automatically discovered by Pulsar brokers (or function-workers), so no additional installation steps are required. - -## Configure a connector - -You can configure the following information: - -* [Configure a default storage location for a connector](#configure-a-default-storage-location-for-a-connector) - -* [Configure a connector with a YAML file](#configure-a-connector-with-yaml-file) - -### Configure a default storage location for a connector - -To configure a default folder for builtin connectors, set the `connectorsDirectory` parameter in the `./conf/functions_worker.yml` configuration file. - -**Example** - -Set the `./connectors` folder as the default storage location for builtin connectors. - -``` - -######################## -# Connectors -######################## - -connectorsDirectory: ./connectors - -``` - -### Configure a connector with a YAML file - -To configure a connector, you need to provide a YAML configuration file when creating a connector. - -The YAML configuration file tells Pulsar where to locate connectors and how to connect connectors with Pulsar topics. 
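For instance, such a file is typically passed to the CLI when creating the connector. A minimal sketch, assuming the built-in Cassandra connector is installed and using a hypothetical file name `cassandra-sink.yaml` and input topic `my-input-topic`:

```bash

$ bin/pulsar-admin sinks create \
  --sink-type cassandra \
  --sink-config-file cassandra-sink.yaml \
  --inputs my-input-topic

```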
- -**Example 1** - -Below is a YAML configuration file of a Cassandra sink, which tells Pulsar: - -* Which Cassandra cluster to connect - -* What is the `keyspace` and `columnFamily` to be used in Cassandra for collecting data - -* How to map Pulsar messages into Cassandra table key and columns - -```shell - -tenant: public -namespace: default -name: cassandra-test-sink -... -# cassandra specific config -configs: - roots: "localhost:9042" - keyspace: "pulsar_test_keyspace" - columnFamily: "pulsar_test_table" - keyname: "key" - columnName: "col" - -``` - -**Example 2** - -Below is a YAML configuration file of a Kafka source. - -```shell - -configs: - bootstrapServers: "pulsar-kafka:9092" - groupId: "test-pulsar-io" - topic: "my-topic" - sessionTimeoutMs: "10000" - autoCommitEnabled: "false" - -``` - -**Example 3** - -Below is a YAML configuration file of a PostgreSQL JDBC sink. - -```shell - -configs: - userName: "postgres" - password: "password" - jdbcUrl: "jdbc:postgresql://localhost:5432/test_jdbc" - tableName: "test_jdbc" - -``` - -## Get available connectors - -Before starting using connectors, you can perform the following operations: - -* [Reload connectors](#reload) - -* [Get a list of available connectors](#get-available-connectors) - -### `reload` - -If you add or delete a nar file in a connector folder, reload the available builtin connector before using it. - -#### Source - -Use the `reload` subcommand. - -```shell - -$ pulsar-admin sources reload - -``` - -For more information, see [`here`](io-cli.md#reload). - -#### Sink - -Use the `reload` subcommand. - -```shell - -$ pulsar-admin sinks reload - -``` - -For more information, see [`here`](io-cli.md#reload-1). - -### `available` - -After reloading connectors (optional), you can get a list of available connectors. - -#### Source - -Use the `available-sources` subcommand. - -```shell - -$ pulsar-admin sources available-sources - -``` - -#### Sink - -Use the `available-sinks` subcommand. - -```shell - -$ pulsar-admin sinks available-sinks - -``` - -## Run a connector - -To run a connector, you can perform the following operations: - -* [Create a connector](#create) - -* [Start a connector](#start) - -* [Run a connector locally](#localrun) - -### `create` - -You can create a connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Create a source connector. - -````mdx-code-block - - - - -Use the `create` subcommand. - -``` - -$ pulsar-admin sources create options - -``` - -For more information, see [here](io-cli.md#create). - - - - -Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/registerSource?version=@pulsar:version_number@} - - - - -* Create a source connector with a **local file**. - - ```java - - void createSource(SourceConfig sourceConfig, - String fileName) - throws PulsarAdminException - - ``` - - **Parameter** - - |Name|Description - |---|--- - `sourceConfig` | The source configuration object - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`createSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#createSource-SourceConfig-java.lang.String-). - -* Create a source connector using a **remote file** with a URL from which fun-pkg can be downloaded. - - ```java - - void createSourceWithUrl(SourceConfig sourceConfig, - String pkgUrl) - throws PulsarAdminException - - ``` - - Supported URLs are `http` and `file`. 
- - **Example** - - * HTTP: http://www.repo.com/fileName.jar - - * File: file:///dir/fileName.jar - - **Parameter** - - Parameter| Description - |---|--- - `sourceConfig` | The source configuration object - `pkgUrl` | URL from which pkg can be downloaded - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`createSourceWithUrl`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#createSourceWithUrl-SourceConfig-java.lang.String-). - - - - -```` - -#### Sink - -Create a sink connector. - -````mdx-code-block - - - - -Use the `create` subcommand. - -``` - -$ pulsar-admin sinks create options - -``` - -For more information, see [here](io-cli.md#create-1). - - - - -Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sinkName|operation/registerSink?version=@pulsar:version_number@} - - - - -* Create a sink connector with a **local file**. - - ```java - - void createSink(SinkConfig sinkConfig, - String fileName) - throws PulsarAdminException - - ``` - - **Parameter** - - |Name|Description - |---|--- - `sinkConfig` | The sink configuration object - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`createSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#createSink-SinkConfig-java.lang.String-). - -* Create a sink connector using a **remote file** with a URL from which fun-pkg can be downloaded. - - ```java - - void createSinkWithUrl(SinkConfig sinkConfig, - String pkgUrl) - throws PulsarAdminException - - ``` - - Supported URLs are `http` and `file`. - - **Example** - - * HTTP: http://www.repo.com/fileName.jar - - * File: file:///dir/fileName.jar - - **Parameter** - - Parameter| Description - |---|--- - `sinkConfig` | The sink configuration object - `pkgUrl` | URL from which pkg can be downloaded - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`createSinkWithUrl`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#createSinkWithUrl-SinkConfig-java.lang.String-). - - - - -```` - -### `start` - -You can start a connector using **Admin CLI** or **REST API**. - -#### Source - -Start a source connector. - -````mdx-code-block - - - - -Use the `start` subcommand. - -``` - -$ pulsar-admin sources start options - -``` - -For more information, see [here](io-cli.md#start). - - - - -* Start **all** source connectors. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/start|operation/startSource?version=@pulsar:version_number@} - -* Start a **specified** source connector. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/:instanceId/start|operation/startSource?version=@pulsar:version_number@} - - - - -```` - -#### Sink - -Start a sink connector. - -````mdx-code-block - - - - -Use the `start` subcommand. - -``` - -$ pulsar-admin sinks start options - -``` - -For more information, see [here](io-cli.md#start-1). - - - - -* Start **all** sink connectors. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sinkName/start|operation/startSink?version=@pulsar:version_number@} - -* Start a **specified** sink connector. 
- - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sourceName/:instanceId/start|operation/startSink?version=@pulsar:version_number@} - - - - -```` - -### `localrun` - -You can run a connector locally rather than deploying it on a Pulsar cluster using **Admin CLI**. - -#### Source - -Run a source connector locally. - -````mdx-code-block - - - - -Use the `localrun` subcommand. - -``` - -$ pulsar-admin sources localrun options - -``` - -For more information, see [here](io-cli.md#localrun). - - - - -```` - -#### Sink - -Run a sink connector locally. - -````mdx-code-block - - - - -Use the `localrun` subcommand. - -``` - -$ pulsar-admin sinks localrun options - -``` - -For more information, see [here](io-cli.md#localrun-1). - - - - -```` - -## Monitor a connector - -To monitor a connector, you can perform the following operations: - -* [Get the information of a connector](#get) - -* [Get the list of all running connectors](#list) - -* [Get the current status of a connector](#status) - -### `get` - -You can get the information of a connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Get the information of a source connector. - -````mdx-code-block - - - - -Use the `get` subcommand. - -``` - -$ pulsar-admin sources get options - -``` - -For more information, see [here](io-cli.md#get). - - - - -Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/getSourceInfo?version=@pulsar:version_number@} - - - - -```java - -SourceConfig getSource(String tenant, - String namespace, - String source) - throws PulsarAdminException - -``` - -**Example** - -This is a sourceConfig. - -```java - -{ - "tenant": "tenantName", - "namespace": "namespaceName", - "name": "sourceName", - "className": "className", - "topicName": "topicName", - "configs": {}, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "resources": { - "cpu": 1.0, - "ram": 1073741824, - "disk": 10737418240 - } -} - -``` - -This is a sourceConfig example. - -``` - -{ - "tenant": "public", - "namespace": "default", - "name": "debezium-mysql-source", - "className": "org.apache.pulsar.io.debezium.mysql.DebeziumMysqlSource", - "topicName": "debezium-mysql-topic", - "configs": { - "database.user": "debezium", - "database.server.id": "184054", - "database.server.name": "dbserver1", - "database.port": "3306", - "database.hostname": "localhost", - "database.password": "dbz", - "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650", - "value.converter": "org.apache.kafka.connect.json.JsonConverter", - "database.whitelist": "inventory", - "key.converter": "org.apache.kafka.connect.json.JsonConverter", - "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory", - "pulsar.service.url": "pulsar://127.0.0.1:6650", - "database.history.pulsar.topic": "history-topic2" - }, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "resources": { - "cpu": 1.0, - "ram": 1073741824, - "disk": 10737418240 - } -} - -``` - -**Exception** - -Exception name | Description -|---|--- -`PulsarAdminException.NotAuthorizedException` | You don't have the admin permission -`PulsarAdminException.NotFoundException` | Cluster doesn't exist -`PulsarAdminException` | Unexpected error - -For more information, see [`getSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#getSource-java.lang.String-java.lang.String-java.lang.String-). 
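As a concrete usage sketch of the `getSource` call above, assuming a local admin endpoint at `http://localhost:8080` (the URL is an assumption; the source name is the one from the example config):

```java

import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.common.io.SourceConfig;

public class GetSourceExample {
    public static void main(String[] args) throws Exception {
        // Build an admin client against the broker's HTTP endpoint (URL assumed).
        try (PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080")
                .build()) {
            // Fetch the configuration of the registered source.
            SourceConfig config = admin.sources().getSource("public", "default", "debezium-mysql-source");
            System.out.println(config.getClassName());
        }
    }
}

```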
- - - - -```` - -#### Sink - -Get the information of a sink connector. - -````mdx-code-block - - - - -Use the `get` subcommand. - -``` - -$ pulsar-admin sinks get options - -``` - -For more information, see [here](io-cli.md#get-1). - - - - -Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sinks/:tenant/:namespace/:sinkName|operation/getSinkInfo?version=@pulsar:version_number@} - - - - -```java - -SinkConfig getSink(String tenant, - String namespace, - String sink) - throws PulsarAdminException - -``` - -**Example** - -This is a sinkConfig. - -```json - -{ -"tenant": "tenantName", -"namespace": "namespaceName", -"name": "sinkName", -"className": "className", -"inputSpecs": { -"topicName": { - "isRegexPattern": false -} -}, -"configs": {}, -"parallelism": 1, -"processingGuarantees": "ATLEAST_ONCE", -"retainOrdering": false, -"autoAck": true -} - -``` - -This is a sinkConfig example. - -```json - -{ - "tenant": "public", - "namespace": "default", - "name": "pulsar-postgres-jdbc-sink", - "className": "org.apache.pulsar.io.jdbc.PostgresJdbcAutoSchemaSink", - "inputSpecs": { - "pulsar-postgres-jdbc-sink-topic": { - "isRegexPattern": false - } - }, - "configs": { - "password": "password", - "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink", - "userName": "postgres", - "tableName": "pulsar_postgres_jdbc_sink" - }, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "autoAck": true -} - -``` - -**Parameter description** - -Name| Description -|---|--- -`tenant` | Tenant name -`namespace` | Namespace name -`sink` | Sink name - -For more information, see [`getSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#getSink-java.lang.String-java.lang.String-java.lang.String-). - - - - -```` - -### `list` - -You can get the list of all running connectors using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Get the list of all running source connectors. - -````mdx-code-block - - - - -Use the `list` subcommand. - -``` - -$ pulsar-admin sources list options - -``` - -For more information, see [here](io-cli.md#list). - - - - -Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sources/:tenant/:namespace|operation/listSources?version=@pulsar:version_number@} - - - - -```java - -List listSources(String tenant, - String namespace) - throws PulsarAdminException - -``` - -**Response example** - -```java - -["f1", "f2", "f3"] - -``` - -**Exception** - -Exception name | Description -|---|--- -`PulsarAdminException.NotAuthorizedException` | You don't have the admin permission -`PulsarAdminException` | Unexpected error - -For more information, see [`listSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#listSources-java.lang.String-java.lang.String-). - - - - -```` - -#### Sink - -Get the list of all running sink connectors. - -````mdx-code-block - - - - -Use the `list` subcommand. - -``` - -$ pulsar-admin sinks list options - -``` - -For more information, see [here](io-cli.md#list-1). 
- - - - -Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sinks/:tenant/:namespace|operation/listSinks?version=@pulsar:version_number@} - - - - -```java - -List listSinks(String tenant, - String namespace) - throws PulsarAdminException - -``` - -**Response example** - -```java - -["f1", "f2", "f3"] - -``` - -**Exception** - -Exception name | Description -|---|--- -`PulsarAdminException.NotAuthorizedException` | You don't have the admin permission -`PulsarAdminException` | Unexpected error - -For more information, see [`listSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#listSinks-java.lang.String-java.lang.String-). - - - - -```` - -### `status` - -You can get the current status of a connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Get the current status of a source connector. - -````mdx-code-block - - - - -Use the `status` subcommand. - -``` - -$ pulsar-admin sources status options - -``` - -For more information, see [here](io-cli.md#status). - - - - -* Get the current status of **all** source connectors. - - Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sources/:tenant/:namespace/:sourceName/status|operation/getSourceStatus?version=@pulsar:version_number@} - -* Gets the current status of a **specified** source connector. - - Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sources/:tenant/:namespace/:sourceName/:instanceId/status|operation/getSourceStatus?version=@pulsar:version_number@} - - - - -* Get the current status of **all** source connectors. - - ```java - - SourceStatus getSourceStatus(String tenant, - String namespace, - String source) - throws PulsarAdminException - - ``` - - **Parameter** - - Parameter| Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `sink` | Source name - - **Exception** - - Name | Description - |---|--- - `PulsarAdminException` | Unexpected error - - For more information, see [`getSourceStatus`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#getSource-java.lang.String-java.lang.String-java.lang.String-). - -* Gets the current status of a **specified** source connector. - - ```java - - SourceStatus.SourceInstanceStatus.SourceInstanceStatusData getSourceStatus(String tenant, - String namespace, - String source, - int id) - throws PulsarAdminException - - ``` - - **Parameter** - - Parameter| Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `sink` | Source name - `id` | Source instanceID - - **Exception** - - Exception name | Description - |---|--- - `PulsarAdminException` | Unexpected error - - For more information, see [`getSourceStatus`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#getSourceStatus-java.lang.String-java.lang.String-java.lang.String-int-). - - - - -```` - -#### Sink - -Get the current status of a Pulsar sink connector. - -````mdx-code-block - - - - -Use the `status` subcommand. - -``` - -$ pulsar-admin sinks status options - -``` - -For more information, see [here](io-cli.md#status-1). - - - - -* Get the current status of **all** sink connectors. - - Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sinks/:tenant/:namespace/:sinkName/status|operation/getSinkStatus?version=@pulsar:version_number@} - -* Gets the current status of a **specified** sink connector. 
- - Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sinks/:tenant/:namespace/:sourceName/:instanceId/status|operation/getSinkInstanceStatus?version=@pulsar:version_number@} - - - - -* Get the current status of **all** sink connectors. - - ```java - - SinkStatus getSinkStatus(String tenant, - String namespace, - String sink) - throws PulsarAdminException - - ``` - - **Parameter** - - Parameter| Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `sink` | Source name - - **Exception** - - Exception name | Description - |---|--- - `PulsarAdminException` | Unexpected error - - For more information, see [`getSinkStatus`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#getSinkStatus-java.lang.String-java.lang.String-java.lang.String-). - -* Gets the current status of a **specified** source connector. - - ```java - - SinkStatus.SinkInstanceStatus.SinkInstanceStatusData getSinkStatus(String tenant, - String namespace, - String sink, - int id) - throws PulsarAdminException - - ``` - - **Parameter** - - Parameter| Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `sink` | Source name - `id` | Sink instanceID - - **Exception** - - Exception name | Description - |---|--- - `PulsarAdminException` | Unexpected error - - For more information, see [`getSinkStatusWithInstanceID`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#getSinkStatus-java.lang.String-java.lang.String-java.lang.String-int-). - - - - -```` - -## Update a connector - -### `update` - -You can update a running connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Update a running Pulsar source connector. - -````mdx-code-block - - - - -Use the `update` subcommand. - -``` - -$ pulsar-admin sources update options - -``` - -For more information, see [here](io-cli.md#update). - - - - -Send a `PUT` request to this endpoint: {@inject: endpoint|PUT|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/updateSource?version=@pulsar:version_number@} - - - - -* Update a running source connector with a **local file**. - - ```java - - void updateSource(SourceConfig sourceConfig, - String fileName) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - |`sourceConfig` | The source configuration object - - **Exception** - - |Name|Description| - |---|--- - |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission - | `PulsarAdminException.NotFoundException` | Cluster doesn't exist - | `PulsarAdminException` | Unexpected error - - For more information, see [`updateSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#updateSource-SourceConfig-java.lang.String-). - -* Update a source connector using a **remote file** with a URL from which fun-pkg can be downloaded. - - ```java - - void updateSourceWithUrl(SourceConfig sourceConfig, - String pkgUrl) - throws PulsarAdminException - - ``` - - Supported URLs are `http` and `file`. 
- - **Example** - - * HTTP: http://www.repo.com/fileName.jar - - * File: file:///dir/fileName.jar - - **Parameter** - - | Name | Description - |---|--- - | `sourceConfig` | The source configuration object - | `pkgUrl` | URL from which pkg can be downloaded - - **Exception** - - |Name|Description| - |---|--- - |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission - | `PulsarAdminException.NotFoundException` | Cluster doesn't exist - | `PulsarAdminException` | Unexpected error - -For more information, see [`createSourceWithUrl`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#updateSourceWithUrl-SourceConfig-java.lang.String-). - - - - -```` - -#### Sink - -Update a running Pulsar sink connector. - -````mdx-code-block - - - - -Use the `update` subcommand. - -``` - -$ pulsar-admin sinks update options - -``` - -For more information, see [here](io-cli.md#update-1). - - - - -Send a `PUT` request to this endpoint: {@inject: endpoint|PUT|/admin/v3/sinks/:tenant/:namespace/:sinkName|operation/updateSink?version=@pulsar:version_number@} - - - - -* Update a running sink connector with a **local file**. - - ```java - - void updateSink(SinkConfig sinkConfig, - String fileName) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - |`sinkConfig` | The sink configuration object - - **Exception** - - |Name|Description| - |---|--- - |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission - | `PulsarAdminException.NotFoundException` | Cluster doesn't exist - | `PulsarAdminException` | Unexpected error - - For more information, see [`updateSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#updateSink-SinkConfig-java.lang.String-). - -* Update a sink connector using a **remote file** with a URL from which fun-pkg can be downloaded. - - ```java - - void updateSinkWithUrl(SinkConfig sinkConfig, - String pkgUrl) - throws PulsarAdminException - - ``` - - Supported URLs are `http` and `file`. - - **Example** - - * HTTP: http://www.repo.com/fileName.jar - - * File: file:///dir/fileName.jar - - **Parameter** - - | Name | Description - |---|--- - | `sinkConfig` | The sink configuration object - | `pkgUrl` | URL from which pkg can be downloaded - - **Exception** - - |Name|Description| - |---|--- - |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission - |`PulsarAdminException.NotFoundException` | Cluster doesn't exist - |`PulsarAdminException` | Unexpected error - -For more information, see [`updateSinkWithUrl`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#updateSinkWithUrl-SinkConfig-java.lang.String-). - - - - -```` - -## Stop a connector - -### `stop` - -You can stop a connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Stop a source connector. - -````mdx-code-block - - - - -Use the `stop` subcommand. - -``` - -$ pulsar-admin sources stop options - -``` - -For more information, see [here](io-cli.md#stop). - - - - -* Stop **all** source connectors. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/stopSource?version=@pulsar:version_number@} - -* Stop a **specified** source connector. 
- - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/:instanceId|operation/stopSource?version=@pulsar:version_number@} - - - - -* Stop **all** source connectors. - - ```java - - void stopSource(String tenant, - String namespace, - String source) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `source` | Source name - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`stopSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#stopSource-java.lang.String-java.lang.String-java.lang.String-). - -* Stop a **specified** source connector. - - ```java - - void stopSource(String tenant, - String namespace, - String source, - int instanceId) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `source` | Source name - `instanceId` | Source instanceID - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`stopSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#stopSource-java.lang.String-java.lang.String-java.lang.String-int-). - - - - -```` - -#### Sink - -Stop a sink connector. - -````mdx-code-block - - - - -Use the `stop` subcommand. - -``` - -$ pulsar-admin sinks stop options - -``` - -For more information, see [here](io-cli.md#stop-1). - - - - -* Stop **all** sink connectors. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sinkName/stop|operation/stopSink?version=@pulsar:version_number@} - -* Stop a **specified** sink connector. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sinkeName/:instanceId/stop|operation/stopSink?version=@pulsar:version_number@} - - - - -* Stop **all** sink connectors. - - ```java - - void stopSink(String tenant, - String namespace, - String sink) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `source` | Source name - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`stopSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#stopSink-java.lang.String-java.lang.String-java.lang.String-). - -* Stop a **specified** sink connector. - - ```java - - void stopSink(String tenant, - String namespace, - String sink, - int instanceId) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `source` | Source name - `instanceId` | Source instanceID - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`stopSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#stopSink-java.lang.String-java.lang.String-java.lang.String-int-). - - - - -```` - -## Restart a connector - -### `restart` - -You can restart a connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Restart a source connector. - -````mdx-code-block - - - - -Use the `restart` subcommand. 
- -``` - -$ pulsar-admin sources restart options - -``` - -For more information, see [here](io-cli.md#restart). - - - - -* Restart **all** source connectors. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/restart|operation/restartSource?version=@pulsar:version_number@} - -* Restart a **specified** source connector. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/:instanceId/restart|operation/restartSource?version=@pulsar:version_number@} - - - - -* Restart **all** source connectors. - - ```java - - void restartSource(String tenant, - String namespace, - String source) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `source` | Source name - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`restartSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#restartSource-java.lang.String-java.lang.String-java.lang.String-). - -* Restart a **specified** source connector. - - ```java - - void restartSource(String tenant, - String namespace, - String source, - int instanceId) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `source` | Source name - `instanceId` | Source instanceID - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`restartSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#restartSource-java.lang.String-java.lang.String-java.lang.String-int-). - - - - -```` - -#### Sink - -Restart a sink connector. - -````mdx-code-block - - - - -Use the `restart` subcommand. - -``` - -$ pulsar-admin sinks restart options - -``` - -For more information, see [here](io-cli.md#restart-1). - - - - -* Restart **all** sink connectors. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sinkName/restart|operation/restartSource?version=@pulsar:version_number@} - -* Restart a **specified** sink connector. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sinkName/:instanceId/restart|operation/restartSource?version=@pulsar:version_number@} - - - - -* Restart all Pulsar sink connectors. - - ```java - - void restartSink(String tenant, - String namespace, - String sink) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `sink` | Sink name - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`restartSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#restartSink-java.lang.String-java.lang.String-java.lang.String-). - -* Restart a **specified** sink connector. 
- - ```java - - void restartSink(String tenant, - String namespace, - String sink, - int instanceId) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `source` | Source name - `instanceId` | Sink instanceID - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`restartSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#restartSink-java.lang.String-java.lang.String-java.lang.String-int-). - - - - -```` - -## Delete a connector - -### `delete` - -You can delete a connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Delete a source connector. - -````mdx-code-block - - - - -Use the `delete` subcommand. - -``` - -$ pulsar-admin sources delete options - -``` - -For more information, see [here](io-cli.md#delete). - - - - -Delete al Pulsar source connector. - -Send a `DELETE` request to this endpoint: {@inject: endpoint|DELETE|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/deregisterSource?version=@pulsar:version_number@} - - - - -Delete a source connector. - -```java - -void deleteSource(String tenant, - String namespace, - String source) - throws PulsarAdminException - -``` - -**Parameter** - -| Name | Description -|---|--- -`tenant` | Tenant name -`namespace` | Namespace name -`source` | Source name - -**Exception** - -|Name|Description| -|---|--- -|`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission -| `PulsarAdminException.NotFoundException` | Cluster doesn't exist -| `PulsarAdminException.PreconditionFailedException` | Cluster is not empty -| `PulsarAdminException` | Unexpected error - -For more information, see [`deleteSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#deleteSource-java.lang.String-java.lang.String-java.lang.String-). - - - - -```` - -#### Sink - -Delete a sink connector. - -````mdx-code-block - - - - -Use the `delete` subcommand. - -``` - -$ pulsar-admin sinks delete options - -``` - -For more information, see [here](io-cli.md#delete-1). - - - - -Delete a sink connector. - -Send a `DELETE` request to this endpoint: {@inject: endpoint|DELETE|/admin/v3/sinks/:tenant/:namespace/:sinkName|operation/deregisterSink?version=@pulsar:version_number@} - - - - -Delete a Pulsar sink connector. - -```java - -void deleteSink(String tenant, - String namespace, - String source) - throws PulsarAdminException - -``` - -**Parameter** - -| Name | Description -|---|--- -`tenant` | Tenant name -`namespace` | Namespace name -`sink` | Sink name - -**Exception** - -|Name|Description| -|---|--- -|`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission -| `PulsarAdminException.NotFoundException` | Cluster doesn't exist -| `PulsarAdminException.PreconditionFailedException` | Cluster is not empty -| `PulsarAdminException` | Unexpected error - -For more information, see [`deleteSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#deleteSink-java.lang.String-java.lang.String-java.lang.String-). 
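As a usage sketch of `deleteSink`, assuming a local admin endpoint at `http://localhost:8080` (the URL is an assumption; the sink name is the one used throughout this guide):

```java

try (PulsarAdmin admin = PulsarAdmin.builder()
        .serviceHttpUrl("http://localhost:8080") // assumed local admin endpoint
        .build()) {
    // Deregister the sink from the public/default namespace.
    admin.sinks().deleteSink("public", "default", "pulsar-postgres-jdbc-sink");
}

```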
- - - - -```` diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/kubernetes-helm.md b/site2/website/versioned_docs/version-2.9.1-deprecated/kubernetes-helm.md deleted file mode 100644 index ea92a0968cd7d0..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/kubernetes-helm.md +++ /dev/null @@ -1,441 +0,0 @@ ---- -id: kubernetes-helm -title: Get started in Kubernetes -sidebar_label: "Run Pulsar in Kubernetes" -original_id: kubernetes-helm ---- - -This section guides you through every step of installing and running Apache Pulsar with Helm on Kubernetes quickly, including the following sections: - -- Install the Apache Pulsar on Kubernetes using Helm -- Start and stop Apache Pulsar -- Create topics using `pulsar-admin` -- Produce and consume messages using Pulsar clients -- Monitor Apache Pulsar status with Prometheus and Grafana - -For deploying a Pulsar cluster for production usage, read the documentation on [how to configure and install a Pulsar Helm chart](helm-deploy.md). - -## Prerequisite - -- Kubernetes server 1.14.0+ -- kubectl 1.14.0+ -- Helm 3.0+ - -:::tip - -For the following steps, step 2 and step 3 are for **developers** and step 4 and step 5 are for **administrators**. - -::: - -## Step 0: Prepare a Kubernetes cluster - -Before installing a Pulsar Helm chart, you have to create a Kubernetes cluster. You can follow [the instructions](helm-prepare.md) to prepare a Kubernetes cluster. - -We use [Minikube](https://minikube.sigs.k8s.io/docs/start/) in this quick start guide. To prepare a Kubernetes cluster, follow these steps: - -1. Create a Kubernetes cluster on Minikube. - - ```bash - - minikube start --memory=8192 --cpus=4 --kubernetes-version= - - ``` - - The `` can be any [Kubernetes version supported by your Minikube installation](https://minikube.sigs.k8s.io/docs/reference/configuration/kubernetes/), such as `v1.16.1`. - -2. Set `kubectl` to use Minikube. - - ```bash - - kubectl config use-context minikube - - ``` - -3. To use the [Kubernetes Dashboard](https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/) with the local Kubernetes cluster on Minikube, enter the command below: - - ```bash - - minikube dashboard - - ``` - - The command automatically triggers opening a webpage in your browser. - -## Step 1: Install Pulsar Helm chart - -1. Add Pulsar charts repo. - - ```bash - - helm repo add apache https://pulsar.apache.org/charts - - ``` - - ```bash - - helm repo update - - ``` - -2. Clone the Pulsar Helm chart repository. - - ```bash - - git clone https://github.com/apache/pulsar-helm-chart - cd pulsar-helm-chart - - ``` - -3. Run the script `prepare_helm_release.sh` to create secrets required for installing the Apache Pulsar Helm chart. The username `pulsar` and password `pulsar` are used for logging into the Grafana dashboard and Pulsar Manager. - - ```bash - - ./scripts/pulsar/prepare_helm_release.sh \ - -n pulsar \ - -k pulsar-mini \ - -c - - ``` - -4. Use the Pulsar Helm chart to install a Pulsar cluster to Kubernetes. - - :::note - - You need to specify `--set initialize=true` when installing Pulsar the first time. This command installs and starts Apache Pulsar. - - ::: - - ```bash - - helm install \ - --values examples/values-minikube.yaml \ - --set initialize=true \ - --namespace pulsar \ - pulsar-mini apache/pulsar - - ``` - -5. Check the status of all pods. 
- - ```bash - - kubectl get pods -n pulsar - - ``` - - If all pods start up successfully, you can see that the `STATUS` is changed to `Running` or `Completed`. - - **Output** - - ```bash - - NAME READY STATUS RESTARTS AGE - pulsar-mini-bookie-0 1/1 Running 0 9m27s - pulsar-mini-bookie-init-5gphs 0/1 Completed 0 9m27s - pulsar-mini-broker-0 1/1 Running 0 9m27s - pulsar-mini-grafana-6b7bcc64c7-4tkxd 1/1 Running 0 9m27s - pulsar-mini-prometheus-5fcf5dd84c-w8mgz 1/1 Running 0 9m27s - pulsar-mini-proxy-0 1/1 Running 0 9m27s - pulsar-mini-pulsar-init-t7cqt 0/1 Completed 0 9m27s - pulsar-mini-pulsar-manager-9bcbb4d9f-htpcs 1/1 Running 0 9m27s - pulsar-mini-toolset-0 1/1 Running 0 9m27s - pulsar-mini-zookeeper-0 1/1 Running 0 9m27s - - ``` - -6. Check the status of all services in the namespace `pulsar`. - - ```bash - - kubectl get services -n pulsar - - ``` - - **Output** - - ```bash - - NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - pulsar-mini-bookie ClusterIP None 3181/TCP,8000/TCP 11m - pulsar-mini-broker ClusterIP None 8080/TCP,6650/TCP 11m - pulsar-mini-grafana LoadBalancer 10.106.141.246 3000:31905/TCP 11m - pulsar-mini-prometheus ClusterIP None 9090/TCP 11m - pulsar-mini-proxy LoadBalancer 10.97.240.109 80:32305/TCP,6650:31816/TCP 11m - pulsar-mini-pulsar-manager LoadBalancer 10.103.192.175 9527:30190/TCP 11m - pulsar-mini-toolset ClusterIP None 11m - pulsar-mini-zookeeper ClusterIP None 2888/TCP,3888/TCP,2181/TCP 11m - - ``` - -## Step 2: Use pulsar-admin to create Pulsar tenants/namespaces/topics - -`pulsar-admin` is the CLI (command-Line Interface) tool for Pulsar. In this step, you can use `pulsar-admin` to create resources, including tenants, namespaces, and topics. - -1. Enter the `toolset` container. - - ```bash - - kubectl exec -it -n pulsar pulsar-mini-toolset-0 -- /bin/bash - - ``` - -2. In the `toolset` container, create a tenant named `apache`. - - ```bash - - bin/pulsar-admin tenants create apache - - ``` - - Then you can list the tenants to see if the tenant is created successfully. - - ```bash - - bin/pulsar-admin tenants list - - ``` - - You should see a similar output as below. The tenant `apache` has been successfully created. - - ```bash - - "apache" - "public" - "pulsar" - - ``` - -3. In the `toolset` container, create a namespace named `pulsar` in the tenant `apache`. - - ```bash - - bin/pulsar-admin namespaces create apache/pulsar - - ``` - - Then you can list the namespaces of tenant `apache` to see if the namespace is created successfully. - - ```bash - - bin/pulsar-admin namespaces list apache - - ``` - - You should see a similar output as below. The namespace `apache/pulsar` has been successfully created. - - ```bash - - "apache/pulsar" - - ``` - -4. In the `toolset` container, create a topic `test-topic` with `4` partitions in the namespace `apache/pulsar`. - - ```bash - - bin/pulsar-admin topics create-partitioned-topic apache/pulsar/test-topic -p 4 - - ``` - -5. In the `toolset` container, list all the partitioned topics in the namespace `apache/pulsar`. - - ```bash - - bin/pulsar-admin topics list-partitioned-topics apache/pulsar - - ``` - - Then you can see all the partitioned topics in the namespace `apache/pulsar`. - - ```bash - - "persistent://apache/pulsar/test-topic" - - ``` - -## Step 3: Use Pulsar client to produce and consume messages - -You can use the Pulsar client to create producers and consumers to produce and consume messages. - -By default, the Pulsar Helm chart exposes the Pulsar cluster through a Kubernetes `LoadBalancer`. 
In Minikube, you can use the following command to check the proxy service. - -```bash - -kubectl get services -n pulsar | grep pulsar-mini-proxy - -``` - -You will see a similar output as below. - -```bash - -pulsar-mini-proxy LoadBalancer 10.97.240.109 80:32305/TCP,6650:31816/TCP 28m - -``` - -This output tells what are the node ports that Pulsar cluster's binary port and HTTP port are mapped to. The port after `80:` is the HTTP port while the port after `6650:` is the binary port. - -Then you can find the IP address and exposed ports of your Minikube server by running the following command. - -```bash - -minikube service pulsar-mini-proxy -n pulsar - -``` - -**Output** - -```bash - -|-----------|-------------------|-------------|-------------------------| -| NAMESPACE | NAME | TARGET PORT | URL | -|-----------|-------------------|-------------|-------------------------| -| pulsar | pulsar-mini-proxy | http/80 | http://172.17.0.4:32305 | -| | | pulsar/6650 | http://172.17.0.4:31816 | -|-----------|-------------------|-------------|-------------------------| -🏃 Starting tunnel for service pulsar-mini-proxy. -|-----------|-------------------|-------------|------------------------| -| NAMESPACE | NAME | TARGET PORT | URL | -|-----------|-------------------|-------------|------------------------| -| pulsar | pulsar-mini-proxy | | http://127.0.0.1:61853 | -| | | | http://127.0.0.1:61854 | -|-----------|-------------------|-------------|------------------------| - -``` - -At this point, you can get the service URLs to connect to your Pulsar client. Here are URL examples: - -``` - -webServiceUrl=http://127.0.0.1:61853/ -brokerServiceUrl=pulsar://127.0.0.1:61854/ - -``` - -Then you can proceed with the following steps: - -1. Download the Apache Pulsar tarball from the [downloads page](https://pulsar.apache.org/download/). - -2. Decompress the tarball based on your download file. - - ```bash - - tar -xf .tar.gz - - ``` - -3. Expose `PULSAR_HOME`. - - (1) Enter the directory of the decompressed download file. - - (2) Expose `PULSAR_HOME` as the environment variable. - - ```bash - - export PULSAR_HOME=$(pwd) - - ``` - -4. Configure the Pulsar client. - - In the `${PULSAR_HOME}/conf/client.conf` file, replace `webServiceUrl` and `brokerServiceUrl` with the service URLs you get from the above steps. - -5. Create a subscription to consume messages from `apache/pulsar/test-topic`. - - ```bash - - bin/pulsar-client consume -s sub apache/pulsar/test-topic -n 0 - - ``` - -6. Open a new terminal. In the new terminal, create a producer and send 10 messages to the `test-topic` topic. - - ```bash - - bin/pulsar-client produce apache/pulsar/test-topic -m "---------hello apache pulsar-------" -n 10 - - ``` - -7. Verify the results. - - - From the producer side - - **Output** - - The messages have been produced successfully. - - ```bash - - 18:15:15.489 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 10 messages successfully produced - - ``` - - - From the consumer side - - **Output** - - At the same time, you can receive the messages as below. 
- - ```bash - - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - - ``` - -## Step 4: Use Pulsar Manager to manage the cluster - -[Pulsar Manager](administration-pulsar-manager.md) is a web-based GUI management tool for managing and monitoring Pulsar. - -1. By default, the `Pulsar Manager` is exposed as a separate `LoadBalancer`. You can open the Pulsar Manager UI using the following command: - - ```bash - - minikube service -n pulsar pulsar-mini-pulsar-manager - - ``` - -2. The Pulsar Manager UI will be open in your browser. You can use the username `pulsar` and password `pulsar` to log into Pulsar Manager. - -3. In Pulsar Manager UI, you can create an environment. - - - Click `New Environment` button in the top-left corner. - - Type `pulsar-mini` for the field `Environment Name` in the popup window. - - Type `http://pulsar-mini-broker:8080` for the field `Service URL` in the popup window. - - Click `Confirm` button in the popup window. - -4. After successfully creating an environment, you are redirected to the `tenants` page of that environment. Then you can create `tenants`, `namespaces` and `topics` using the Pulsar Manager. - -## Step 5: Use Prometheus and Grafana to monitor cluster - -Grafana is an open-source visualization tool, which can be used for visualizing time series data into dashboards. - -1. By default, the Grafana is exposed as a separate `LoadBalancer`. You can open the Grafana UI using the following command: - - ```bash - - minikube service pulsar-mini-grafana -n pulsar - - ``` - -2. The Grafana UI is open in your browser. You can use the username `pulsar` and password `pulsar` to log into the Grafana Dashboard. - -3. You can view dashboards for different components of a Pulsar cluster. diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/performance-pulsar-perf.md b/site2/website/versioned_docs/version-2.9.1-deprecated/performance-pulsar-perf.md deleted file mode 100644 index 7f45498604536c..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/performance-pulsar-perf.md +++ /dev/null @@ -1,229 +0,0 @@ ---- -id: performance-pulsar-perf -title: Pulsar Perf -sidebar_label: "Pulsar Perf" -original_id: performance-pulsar-perf ---- - -The Pulsar Perf is a built-in performance test tool for Apache Pulsar. You can use the Pulsar Perf to test message writing or reading performance. For detailed information about performance tuning, see [here](https://streamnative.io/en/blog/tech/2021-01-14-pulsar-architecture-performance-tuning). - -## Produce messages - -This example shows how the Pulsar Perf produces messages with default options. For all configuration options available for the `pulsar-perf produce` command, see [configuration options](#configuration-options-for-pulsar-perf-produce). - -``` - -bin/pulsar-perf produce my-topic - -``` - -After the command is executed, the test data is continuously output on the Console. 
-
-**Output**
-
-```
-
-19:53:31.459 [pulsar-perf-producer-exec-1-1] INFO org.apache.pulsar.testclient.PerformanceProducer - Created 1 producers
-19:53:31.482 [pulsar-timer-5-1] WARN com.scurrilous.circe.checksum.Crc32cIntChecksum - Failed to load Circe JNI library. Falling back to Java based CRC32c provider
-19:53:40.861 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 93.7 msg/s --- 0.7 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.575 ms - med: 3.460 - 95pct: 4.790 - 99pct: 5.308 - 99.9pct: 5.834 - 99.99pct: 6.609 - Max: 6.609
-19:53:50.909 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.437 ms - med: 3.328 - 95pct: 4.656 - 99pct: 5.071 - 99.9pct: 5.519 - 99.99pct: 5.588 - Max: 5.588
-19:54:00.926 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.376 ms - med: 3.276 - 95pct: 4.520 - 99pct: 4.939 - 99.9pct: 5.440 - 99.99pct: 5.490 - Max: 5.490
-19:54:10.940 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.298 ms - med: 3.220 - 95pct: 4.474 - 99pct: 4.926 - 99.9pct: 5.645 - 99.99pct: 5.654 - Max: 5.654
-19:54:20.956 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.1 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.308 ms - med: 3.199 - 95pct: 4.532 - 99pct: 4.871 - 99.9pct: 5.291 - 99.99pct: 5.323 - Max: 5.323
-19:54:30.972 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.249 ms - med: 3.144 - 95pct: 4.437 - 99pct: 4.970 - 99.9pct: 5.329 - 99.99pct: 5.414 - Max: 5.414
-19:54:40.987 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.435 ms - med: 3.361 - 95pct: 4.772 - 99pct: 5.150 - 99.9pct: 5.373 - 99.99pct: 5.837 - Max: 5.837
-^C19:54:44.325 [Thread-1] INFO org.apache.pulsar.testclient.PerformanceProducer - Aggregated throughput stats --- 7286 records sent --- 99.140 msg/s --- 0.775 Mbit/s
-19:54:44.336 [Thread-1] INFO org.apache.pulsar.testclient.PerformanceProducer - Aggregated latency stats --- Latency: mean: 3.383 ms - med: 3.293 - 95pct: 4.610 - 99pct: 5.059 - 99.9pct: 5.588 - 99.99pct: 5.837 - 99.999pct: 6.609 - Max: 6.609
-
-```
-
-From this output, you can read off the throughput statistics and the write latency statistics. The aggregated statistics are printed when Pulsar Perf is stopped; you can press **Ctrl**+**C** to stop it. If you specify a filename with the `--histogram-file` parameter, a file with the test result in [HdrHistogram](http://hdrhistogram.github.io/HdrHistogram/) format appears in your working directory after Pulsar Perf is stopped. You can then visualize the result with the [HdrHistogram Plotter](https://hdrhistogram.github.io/HdrHistogram/plotFiles.html); for details, see [HdrHistogram Plotter](#hdrhistogram-plotter).
-
-### Configuration options for `pulsar-perf produce`
-
-You can list all available options by executing the `bin/pulsar-perf produce -h` command and modify them as required.
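-
-For example, the following invocation is a minimal sketch of a tuned test run; the topic name and values are illustrative, and the options used here are described in the table below.
-
-```
-
-# Publish 512-byte messages at 1000 msg/s for 60 seconds,
-# writing an HdrHistogram latency file for later plotting.
-bin/pulsar-perf produce my-topic --rate 1000 --size 512 --test-duration 60 --histogram-file produce-latency.hgrm
-
-```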
- -The following table lists configuration options available for the `pulsar-perf produce` command. - -| Option | Description | Default value| -|----|----|----| -| access-mode | Set the producer access mode. Valid values are `Shared`, `Exclusive` and `WaitForExclusive`. | Shared | -| admin-url | Set the Pulsar admin URL. | N/A | -| auth-params | Set the authentication parameters, whose format is determined by the implementation of the `configure` method in the authentication plugin class, such as "key1:val1,key2:val2" or "{"key1":"val1","key2":"val2"}". | N/A | -| auth-plugin | Set the authentication plugin class name. | N/A | -| listener-name | Set the listener name for the broker. | N/A | -| batch-max-bytes | Set the maximum number of bytes for each batch. | 4194304 | -| batch-max-messages | Set the maximum number of messages for each batch. | 1000 | -| batch-time-window | Set a window for a batch of messages. | 1 ms | -| busy-wait | Enable or disable Busy-Wait on the Pulsar client. | false | -| chunking | Configure whether to split the message and publish in chunks if message size is larger than allowed max size. | false | -| compression | Compress the message payload. | N/A | -| conf-file | Set the configuration file. | N/A | -| delay | Mark messages with a given delay. | 0s | -| encryption-key-name | Set the name of the public key used to encrypt the payload. | N/A | -| encryption-key-value-file | Set the file which contains the public key used to encrypt the payload. | N/A | -| exit-on-failure | Configure whether to exit from the process on publish failure. | false | -| format-class | Set the custom formatter class name. | org.apache.pulsar.testclient.DefaultMessageFormatter | -| format-payload | Configure whether to format %i as a message index in the stream from producer and/or %t as the timestamp nanoseconds. | false | -| help | Configure the help message. | false | -| histogram-file | HdrHistogram output file | N/A | -| max-connections | Set the maximum number of TCP connections to a single broker. | 100 | -| max-outstanding | Set the maximum number of outstanding messages. | 1000 | -| max-outstanding-across-partitions | Set the maximum number of outstanding messages across partitions. | 50000 | -| message-key-generation-mode | Set the generation mode of message key. Valid options are `autoIncrement`, `random`. | N/A | -| num-io-threads | Set the number of threads to be used for handling connections to brokers. | 1 | -| num-messages | Set the number of messages to be published in total. If it is set to 0, it keeps publishing messages. | 0 | -| num-producers | Set the number of producers for each topic. | 1 | -| num-test-threads | Set the number of test threads. | 1 | -| num-topic | Set the number of topics. | 1 | -| partitions | Configure whether to create partitioned topics with the given number of partitions. | N/A | -| payload-delimiter | Set the delimiter used to split lines when using payload from a file. | \n | -| payload-file | Use the payload from an UTF-8 encoded text file and a payload is randomly selected when messages are published. | N/A | -| producer-name | Set the producer name. | N/A | -| rate | Set the publish rate of messages across topics. | 100 | -| send-timeout | Set the sendTimeout. | 0 | -| separator | Set the separator between the topic and topic number. | - | -| service-url | Set the Pulsar service URL. | | -| size | Set the message size. | 1024 bytes | -| stats-interval-seconds | Set the statistics interval. If it is set to 0, statistics is disabled. 
| 0 |
-| test-duration | Set the test duration. If it is set to 0, it keeps publishing messages. | 0s |
-| trust-cert-file | Set the path for the trusted TLS certificate file. | |
-| warmup-time | Set the warm-up time. | 1s |
-| tls-allow-insecure | Set the allowed insecure TLS connection. | N/A |
-
-## Consume messages
-
-This example shows how the Pulsar Perf consumes messages with default options.
-
-```
-
-bin/pulsar-perf consume my-topic
-
-```
-
-After the command is executed, the test data is continuously output on the Console.
-
-**Output**
-
-```
-
-20:35:37.071 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Start receiving from 1 consumers on 1 topics
-20:35:41.150 [pulsar-client-io-1-9] WARN com.scurrilous.circe.checksum.Crc32cIntChecksum - Failed to load Circe JNI library. Falling back to Java based CRC32c provider
-20:35:47.092 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 59.572 msg/s -- 0.465 Mbit/s --- Latency: mean: 11.298 ms - med: 10 - 95pct: 15 - 99pct: 98 - 99.9pct: 137 - 99.99pct: 152 - Max: 152
-20:35:57.104 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 99.958 msg/s -- 0.781 Mbit/s --- Latency: mean: 9.176 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 18 - Max: 18
-20:36:07.115 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 100.006 msg/s -- 0.781 Mbit/s --- Latency: mean: 9.316 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 17 - Max: 17
-20:36:17.125 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 100.085 msg/s -- 0.782 Mbit/s --- Latency: mean: 9.327 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 17 - Max: 17
-20:36:27.136 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 99.900 msg/s -- 0.780 Mbit/s --- Latency: mean: 9.404 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 17 - Max: 17
-20:36:37.147 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 99.985 msg/s -- 0.781 Mbit/s --- Latency: mean: 8.998 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 17 - Max: 17
-^C20:36:42.755 [Thread-1] INFO org.apache.pulsar.testclient.PerformanceConsumer - Aggregated throughput stats --- 6051 records received --- 92.125 msg/s --- 0.720 Mbit/s
-20:36:42.759 [Thread-1] INFO org.apache.pulsar.testclient.PerformanceConsumer - Aggregated latency stats --- Latency: mean: 9.422 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 98 - 99.99pct: 137 - 99.999pct: 152 - Max: 152
-
-```
-
-From this output, you can read off the throughput statistics and the end-to-end latency statistics. The aggregated statistics are printed after Pulsar Perf is stopped; you can press **Ctrl**+**C** to stop it.
-
-### Configuration options for `pulsar-perf consume`
-
-You can list all available options by executing the `bin/pulsar-perf consume -h` command and modify them as required.
-
-The following table lists configuration options available for the `pulsar-perf consume` command.
-
-| Option | Description | Default value |
-|----|----|----|
-| acks-delay-millis | Set the acknowledgment grouping delay in milliseconds. | 100 ms |
-| auth-params | Set the authentication parameters, whose format is determined by the implementation of the `configure` method in the authentication plugin class, such as "key1:val1,key2:val2" or "{"key1":"val1","key2":"val2"}".
| N/A | -| auth-plugin | Set the authentication plugin class name. | N/A | -| auto_ack_chunk_q_full | Configure whether to automatically ack for the oldest message in receiver queue if the queue is full. | false | -| listener-name | Set the listener name for the broker. | N/A | -| batch-index-ack | Enable or disable the batch index acknowledgment. | false | -| busy-wait | Enable or disable Busy-Wait on the Pulsar client. | false | -| conf-file | Set the configuration file. | N/A | -| encryption-key-name | Set the name of the public key used to encrypt the payload. | N/A | -| encryption-key-value-file | Set the file which contains the public key used to encrypt the payload. | N/A | -| help | Configure the help message. | false | -| histogram-file | HdrHistogram output file | N/A | -| expire_time_incomplete_chunked_messages | Set the expiration time for incomplete chunk messages (in milliseconds). | 0 | -| max-connections | Set the maximum number of TCP connections to a single broker. | 100 | -| max_chunked_msg | Set the max pending chunk messages. | 0 | -| num-consumers | Set the number of consumers for each topic. | 1 | -| num-io-threads |Set the number of threads to be used for handling connections to brokers. | 1 | -| num-subscriptions | Set the number of subscriptions (per topic). | 1 | -| num-topic | Set the number of topics. | 1 | -| pool-messages | Configure whether to use the pooled message. | true | -| rate | Simulate a slow message consumer (rate in msg/s). | 0.0 | -| receiver-queue-size | Set the size of the receiver queue. | 1000 | -| receiver-queue-size-across-partitions | Set the max total size of the receiver queue across partitions. | 50000 | -| replicated | Configure whether the subscription status should be replicated. | false | -| service-url | Set the Pulsar service URL. | | -| stats-interval-seconds | Set the statistics interval. If it is set to 0, statistics is disabled. | 0 | -| subscriber-name | Set the subscriber name prefix. | | -| subscription-position | Set the subscription position. Valid values are `Latest`, `Earliest`.| Latest | -| subscription-type | Set the subscription type.
  1. Exclusive
  2. Shared
  3. Failover
  4. Key_Shared
| Exclusive |
-| test-duration | Set the test duration (in seconds). If the value is less than or equal to 0, it keeps consuming messages. | 0 |
-| tls-allow-insecure | Set the allowed insecure TLS connection. | N/A |
-| trust-cert-file | Set the path for the trusted TLS certificate file. | |
-
-## Configurations
-
-Pulsar Perf uses `conf/client.conf` as its default client configuration and `conf/log4j2.yaml` as its default Log4j configuration. If you want to connect to another Pulsar cluster, update the `brokerServiceUrl` in the client configuration.
-
-You can use the following commands to change the configuration file and the Log4j configuration file.
-
-```
-
-export PULSAR_CLIENT_CONF=<path-to-client-config-file>
-export PULSAR_LOG_CONF=<path-to-log4j2-config-file>
-
-```
-
-In addition, you can use the following command to configure JVM options through an environment variable:
-
-```
-
-export PULSAR_EXTRA_OPTS='-Xms4g -Xmx4g -XX:MaxDirectMemorySize=4g'
-
-```
-
-## HdrHistogram Plotter
-
-The [HdrHistogram Plotter](https://hdrhistogram.github.io/HdrHistogram/plotFiles.html) is a visualization tool for Pulsar Perf test results, which makes the results easier to inspect.
-
-To check test results through the HdrHistogram Plotter, follow these steps:
-
-1. Clone the HdrHistogram repository from GitHub to your local machine.
-
-   ```
-   
-   git clone https://github.com/HdrHistogram/HdrHistogram.git
-   
-   ```
-
-2. Switch to the HdrHistogram folder.
-
-   ```
-   
-   cd HdrHistogram
-   
-   ```
-
-3. Install the HdrHistogram Plotter.
-
-   ```
-   
-   mvn clean install -DskipTests
-   
-   ```
-
-4. Transform the file generated by Pulsar Perf.
-
-   ```
-   
-   ./HistogramLogProcessor -i <histogram-file> -o <output-file>
-   
-   ```
-
-5. You will get two output files. Upload the output file with the `.hgrm` filename extension to the [HdrHistogram Plotter](https://hdrhistogram.github.io/HdrHistogram/plotFiles.html).
-
-6. Check the test result through the graphical user interface of the HdrHistogram Plotter, as shown below.
-
-   ![](/assets/perf-produce.png)
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/reference-cli-tools.md b/site2/website/versioned_docs/version-2.9.1-deprecated/reference-cli-tools.md
deleted file mode 100644
index 6893da3ec6394b..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/reference-cli-tools.md
+++ /dev/null
@@ -1,941 +0,0 @@
----
-id: reference-cli-tools
-title: Pulsar command-line tools
-sidebar_label: "Pulsar CLI tools"
-original_id: reference-cli-tools
----
-
-Pulsar offers several command-line tools that you can use for managing Pulsar installations, performance testing, using command-line producers and consumers, and more.
-
-All Pulsar command-line tools can be run from the `bin` directory of your [installed Pulsar package](getting-started-standalone.md). The following tools are currently documented:
-
-* [`pulsar`](#pulsar)
-* [`pulsar-client`](#pulsar-client)
-* [`pulsar-daemon`](#pulsar-daemon)
-* [`pulsar-perf`](#pulsar-perf)
-* [`bookkeeper`](#bookkeeper)
-* [`broker-tool`](#broker-tool)
-
-> ### Getting help
-> You can get help for any CLI tool, command, or subcommand using the `--help` flag, or `-h` for short. Here's an example:
-
-> ```shell
-> 
-> $ bin/pulsar broker --help
-> 
-> 
-> ```
-
-
-## `pulsar`
-
-The pulsar tool is used to start Pulsar components, such as bookies and ZooKeeper, in the foreground.
-
-These processes can also be started in the background, with nohup, by using the pulsar-daemon tool, which has the same command interface as pulsar.
-
-Usage:
-
-```bash
-
-$ pulsar command
-
-```
-
-Commands:
-* `bookie`
-* `broker`
-* `compact-topic`
-* `configuration-store`
-* `initialize-cluster-metadata`
-* `proxy`
-* `standalone`
-* `websocket`
-* `zookeeper`
-* `zookeeper-shell`
-
-Example:
-
-```bash
-
-$ PULSAR_BROKER_CONF=/path/to/broker.conf pulsar broker
-
-```
-
-The table below lists the environment variables that you can use to configure the `pulsar` tool.
-
-|Variable|Description|Default|
-|---|---|---|
-|`PULSAR_LOG_CONF`|Log4j configuration file|`conf/log4j2.yaml`|
-|`PULSAR_BROKER_CONF`|Configuration file for the broker|`conf/broker.conf`|
-|`PULSAR_BOOKKEEPER_CONF`|Configuration file for the bookie|`conf/bookkeeper.conf`|
-|`PULSAR_ZK_CONF`|Configuration file for ZooKeeper|`conf/zookeeper.conf`|
-|`PULSAR_CONFIGURATION_STORE_CONF`|Configuration file for the configuration store|`conf/global_zookeeper.conf`|
-|`PULSAR_WEBSOCKET_CONF`|Configuration file for the WebSocket proxy|`conf/websocket.conf`|
-|`PULSAR_STANDALONE_CONF`|Configuration file for standalone|`conf/standalone.conf`|
-|`PULSAR_EXTRA_OPTS`|Extra options to be passed to the JVM||
-|`PULSAR_EXTRA_CLASSPATH`|Extra paths for Pulsar's classpath||
-|`PULSAR_PID_DIR`|Folder where the pulsar server PID file should be stored||
-|`PULSAR_STOP_TIMEOUT`|Wait time before forcefully killing the server instance if attempts to stop it are not successful||
-
-
-
-### `bookie`
-
-Starts up a bookie server
-
-Usage:
-
-```bash
-
-$ pulsar bookie options
-
-```
-
-Options
-
-|Option|Description|Default|
-|---|---|---|
-|`-readOnly`|Force start a read-only bookie server|false|
-|`-withAutoRecovery`|Start auto-recovery service bookie server|false|
-
-
-Example
-
-```bash
-
-$ PULSAR_BOOKKEEPER_CONF=/path/to/bookkeeper.conf pulsar bookie \
-  -readOnly \
-  -withAutoRecovery
-
-```
-
-### `broker`
-
-Starts up a Pulsar broker
-
-Usage
-
-```bash
-
-$ pulsar broker options
-
-```
-
-Options
-
-|Option|Description|Default|
-|---|---|---|
-|`-bc` , `--bookie-conf`|Configuration file for BookKeeper||
-|`-rb` , `--run-bookie`|Run a BookKeeper bookie on the same host as the Pulsar broker|false|
-|`-ra` , `--run-bookie-autorecovery`|Run a BookKeeper autorecovery daemon on the same host as the Pulsar broker|false|
-
-Example
-
-```bash
-
-$ PULSAR_BROKER_CONF=/path/to/broker.conf pulsar broker
-
-```
-
-### `compact-topic`
-
-Run compaction against a Pulsar topic (in a new process)
-
-Usage
-
-```bash
-
-$ pulsar compact-topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-t` , `--topic`|The Pulsar topic that you would like to compact||
-
-Example
-
-```bash
-
-$ pulsar compact-topic --topic topic-to-compact
-
-```
-
-### `configuration-store`
-
-Starts up the Pulsar configuration store
-
-Usage
-
-```bash
-
-$ pulsar configuration-store
-
-```
-
-Example
-
-```bash
-
-$ PULSAR_CONFIGURATION_STORE_CONF=/path/to/configuration_store.conf pulsar configuration-store
-
-```
-
-### `initialize-cluster-metadata`
-
-One-time cluster metadata initialization
-
-Usage
-
-```bash
-
-$ pulsar initialize-cluster-metadata options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-ub` , `--broker-service-url`|The broker service URL for the new cluster||
-|`-tb` , `--broker-service-url-tls`|The broker service URL for the new cluster with TLS encryption||
-|`-c` , `--cluster`|Cluster name||
-|`-cs` , `--configuration-store`|The configuration store quorum connection string||
-|`--existing-bk-metadata-service-uri`|The metadata service URI of the existing
BookKeeper cluster that you want to use|| -|`-h` , `--help`|Cluster name|false| -|`--initial-num-stream-storage-containers`|The number of storage containers of BookKeeper stream storage|16| -|`--initial-num-transaction-coordinators`|The number of transaction coordinators assigned in a cluster|16| -|`-uw` , `--web-service-url`|The web service URL for the new cluster|| -|`-tw` , `--web-service-url-tls`|The web service URL for the new cluster with TLS encryption|| -|`-zk` , `--zookeeper`|The local ZooKeeper quorum connection string|| -|`--zookeeper-session-timeout-ms`|The local ZooKeeper session timeout. The time unit is in millisecond(ms)|30000| - - -### `proxy` - -Manages the Pulsar proxy - -Usage - -```bash - -$ pulsar proxy options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--configuration-store`|Configuration store connection string|| -|`-zk` , `--zookeeper-servers`|Local ZooKeeper connection string|| - -Example - -```bash - -$ PULSAR_PROXY_CONF=/path/to/proxy.conf pulsar proxy \ - --zookeeper-servers zk-0,zk-1,zk2 \ - --configuration-store zk-0,zk-1,zk-2 - -``` - -### `standalone` - -Run a broker service with local bookies and local ZooKeeper - -Usage - -```bash - -$ pulsar standalone options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-a` , `--advertised-address`|The standalone broker advertised address|| -|`--bookkeeper-dir`|Local bookies’ base data directory|data/standalone/bookeeper| -|`--bookkeeper-port`|Local bookies’ base port|3181| -|`--no-broker`|Only start ZooKeeper and BookKeeper services, not the broker|false| -|`--num-bookies`|The number of local bookies|1| -|`--only-broker`|Only start the Pulsar broker service (not ZooKeeper or BookKeeper)|| -|`--wipe-data`|Clean up previous ZooKeeper/BookKeeper data|| -|`--zookeeper-dir`|Local ZooKeeper’s data directory|data/standalone/zookeeper| -|`--zookeeper-port` |Local ZooKeeper’s port|2181| - -Example - -```bash - -$ PULSAR_STANDALONE_CONF=/path/to/standalone.conf pulsar standalone - -``` - -### `websocket` - -Usage - -```bash - -$ pulsar websocket - -``` - -Example - -```bash - -$ PULSAR_WEBSOCKET_CONF=/path/to/websocket.conf pulsar websocket - -``` - -### `zookeeper` - -Starts up a ZooKeeper cluster - -Usage - -```bash - -$ pulsar zookeeper - -``` - -Example - -```bash - -$ PULSAR_ZK_CONF=/path/to/zookeeper.conf pulsar zookeeper - -``` - -### `zookeeper-shell` - -Connects to a running ZooKeeper cluster using the ZooKeeper shell - -Usage - -```bash - -$ pulsar zookeeper-shell options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-c`, `--conf`|Configuration file for ZooKeeper|| -|`-server`|Configuration zk address, eg: `127.0.0.1:2181`|| - - - -## `pulsar-client` - -The pulsar-client tool - -Usage - -```bash - -$ pulsar-client command - -``` - -Commands -* `produce` -* `consume` - - -Options - -|Flag|Description|Default| -|---|---|---| -|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class, for example "key1:val1,key2:val2" or "{\"key1\":\"val1\",\"key2\":\"val2\"}"|{"saslJaasClientSectionName":"PulsarClient", "serverType":"broker"}| -|`--auth-plugin`|Authentication plugin class name|org.apache.pulsar.client.impl.auth.AuthenticationSasl| -|`--listener-name`|Listener name for the broker|| -|`--proxy-protocol`|Proxy protocol to select type of routing at proxy|| -|`--proxy-url`|Proxy-server URL to which to connect|| -|`--url`|Broker URL to which to connect|pulsar://localhost:6650/
    ws://localhost:8080 | -| `-v`, `--version` | Get the version of the Pulsar client -|`-h`, `--help`|Show this help - - -### `produce` -Send a message or messages to a specific broker and topic - -Usage - -```bash - -$ pulsar-client produce topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-f`, `--files`|Comma-separated file paths to send; either -m or -f must be specified|[]| -|`-m`, `--messages`|Comma-separated string of messages to send; either -m or -f must be specified|[]| -|`-n`, `--num-produce`|The number of times to send the message(s); the count of messages/files * num-produce should be below 1000|1| -|`-r`, `--rate`|Rate (in messages per second) at which to produce; a value 0 means to produce messages as fast as possible|0.0| -|`-c`, `--chunking`|Split the message and publish in chunks if the message size is larger than the allowed max size|false| -|`-s`, `--separator`|Character to split messages string with.|","| -|`-k`, `--key`|Message key to add|key=value string, like k1=v1,k2=v2.| -|`-p`, `--properties`|Properties to add. If you want to add multiple properties, use the comma as the separator, e.g. `k1=v1,k2=v2`.| | -|`-ekn`, `--encryption-key-name`|The public key name to encrypt payload.| | -|`-ekv`, `--encryption-key-value`|The URI of public key to encrypt payload. For example, `file:///path/to/public.key` or `data:application/x-pem-file;base64,*****`.| | - - -### `consume` -Consume messages from a specific broker and topic - -Usage - -```bash - -$ pulsar-client consume topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--hex`|Display binary messages in hexadecimal format.|false| -|`-n`, `--num-messages`|Number of messages to consume, 0 means to consume forever.|1| -|`-r`, `--rate`|Rate (in messages per second) at which to consume; a value 0 means to consume messages as fast as possible|0.0| -|`--regex`|Indicate the topic name is a regex pattern|false| -|`-s`, `--subscription-name`|Subscription name|| -|`-t`, `--subscription-type`|The type of the subscription. Possible values: Exclusive, Shared, Failover, Key_Shared.|Exclusive| -|`-p`, `--subscription-position`|The position of the subscription. Possible values: Latest, Earliest.|Latest| -|`-m`, `--subscription-mode`|Subscription mode.|Durable| -|`-q`, `--queue-size`|The size of consumer's receiver queue.|0| -|`-mc`, `--max_chunked_msg`|Max pending chunk messages.|0| -|`-ac`, `--auto_ack_chunk_q_full`|Auto ack for the oldest message in consumer's receiver queue if the queue full.|false| -|`--hide-content`|Do not print the message to the console.|false| -|`-st`, `--schema-type`|Set the schema type. Use `auto_consume` to dump AVRO and other structured data types. Possible values: bytes, auto_consume.|bytes| -|`-ekv`, `--encryption-key-value`|The URI of public key to encrypt payload. For example, `file:///path/to/public.key` or `data:application/x-pem-file;base64,*****`.| | -|`-pm`, `--pool-messages`|Use the pooled message.|true| - -## `pulsar-daemon` -A wrapper around the pulsar tool that’s used to start and stop processes, such as ZooKeeper, bookies, and Pulsar brokers, in the background using nohup. - -pulsar-daemon has a similar interface to the pulsar command but adds start and stop commands for various services. For a listing of those services, run pulsar-daemon to see the help output or see the documentation for the pulsar command. 
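-
-For example, here is a minimal sketch of a daemonized session (assuming you run it from the Pulsar installation directory; the service name is one of the services listed for the pulsar command, whose usage is shown below):
-
-```bash
-
-# Start a standalone Pulsar in the background using nohup;
-# logs are typically written under the logs/ directory.
-$ bin/pulsar-daemon start standalone
-
-# Stop the same service.
-$ bin/pulsar-daemon stop standalone
-
-```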
- -Usage - -```bash - -$ pulsar-daemon command - -``` - -Commands -* `start` -* `stop` - - -### `start` -Start a service in the background using nohup. - -Usage - -```bash - -$ pulsar-daemon start service - -``` - -### `stop` -Stop a service that’s already been started using start. - -Usage - -```bash - -$ pulsar-daemon stop service options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|-force|Stop the service forcefully if not stopped by normal shutdown.|false| - - - -## `pulsar-perf` -A tool for performance testing a Pulsar broker. - -Usage - -```bash - -$ pulsar-perf command - -``` - -Commands -* `consume` -* `produce` -* `read` -* `websocket-producer` -* `managed-ledger` -* `monitor-brokers` -* `simulation-client` -* `simulation-controller` -* `help` - -Environment variables - -The table below lists the environment variables that you can use to configure the pulsar-perf tool. - -|Variable|Description|Default| -|---|---|---| -|`PULSAR_LOG_CONF`|Log4j configuration file|conf/log4j2.yaml| -|`PULSAR_CLIENT_CONF`|Configuration file for the client|conf/client.conf| -|`PULSAR_EXTRA_OPTS`|Extra options to be passed to the JVM|| -|`PULSAR_EXTRA_CLASSPATH`|Extra paths for Pulsar's classpath|| - - -### `consume` -Run a consumer - -Usage - -``` - -$ pulsar-perf consume options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.|| -|`--auth-plugin`|Authentication plugin class name|| -|`-ac`, `--auto_ack_chunk_q_full`|Auto ack for the oldest message in consumer's receiver queue if the queue full|false| -|`--listener-name`|Listener name for the broker|| -|`--acks-delay-millis`|Acknowledgements grouping delay in millis|100| -|`--batch-index-ack`|Enable or disable the batch index acknowledgment|false| -|`-bw`, `--busy-wait`|Enable or disable Busy-Wait on the Pulsar client|false| -|`-v`, `--encryption-key-value-file`|The file which contains the private key to decrypt payload|| -|`-h`, `--help`|Help message|false| -|`--conf-file`|Configuration file|| -|`-m`, `--num-messages`|Number of messages to consume in total. If the value is equal to or smaller than 0, it keeps consuming messages.|0| -|`-e`, `--expire_time_incomplete_chunked_messages`|The expiration time for incomplete chunk messages (in milliseconds)|0| -|`-c`, `--max-connections`|Max number of TCP connections to a single broker|100| -|`-mc`, `--max_chunked_msg`|Max pending chunk messages|0| -|`-n`, `--num-consumers`|Number of consumers (per topic)|1| -|`-ioThreads`, `--num-io-threads`|Set the number of threads to be used for handling connections to brokers|1| -|`-ns`, `--num-subscriptions`|Number of subscriptions (per topic)|1| -|`-t`, `--num-topics`|The number of topics|1| -|`-pm`, `--pool-messages`|Use the pooled message|true| -|`-r`, `--rate`|Simulate a slow message consumer (rate in msg/s)|0| -|`-q`, `--receiver-queue-size`|Size of the receiver queue|1000| -|`-p`, `--receiver-queue-size-across-partitions`|Max total size of the receiver queue across partitions|50000| -|`--replicated`|Whether the subscription status should be replicated|false| -|`-u`, `--service-url`|Pulsar service URL|| -|`-i`, `--stats-interval-seconds`|Statistics interval seconds. 
If 0, statistics will be disabled|0| -|`-s`, `--subscriber-name`|Subscriber name prefix|| -|`-ss`, `--subscriptions`|A list of subscriptions to consume on (e.g. sub1,sub2)|sub| -|`-st`, `--subscription-type`|Subscriber type. Possible values are Exclusive, Shared, Failover, Key_Shared.|Exclusive| -|`-sp`, `--subscription-position`|Subscriber position. Possible values are Latest, Earliest.|Latest| -|`-time`, `--test-duration`|Test duration (in seconds). If this value is less than or equal to 0, it keeps consuming messages.|0| -|`--trust-cert-file`|Path for the trusted TLS certificate file|| -|`--tls-allow-insecure`|Allow insecure TLS connection|| - - -### `produce` -Run a producer - -Usage - -```bash - -$ pulsar-perf produce options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-am`, `--access-mode`|Producer access mode. Valid values are `Shared`, `Exclusive` and `WaitForExclusive`|Shared| -|`-au`, `--admin-url`|Pulsar admin URL|| -|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.|| -|`--auth-plugin`|Authentication plugin class name|| -|`--listener-name`|Listener name for the broker|| -|`-b`, `--batch-time-window`|Batch messages in a window of the specified number of milliseconds|1| -|`-bb`, `--batch-max-bytes`|Maximum number of bytes per batch|4194304| -|`-bm`, `--batch-max-messages`|Maximum number of messages per batch|1000| -|`-bw`, `--busy-wait`|Enable or disable Busy-Wait on the Pulsar client|false| -|`-ch`, `--chunking`|Split the message and publish in chunks if the message size is larger than allowed max size|false| -|`-d`, `--delay`|Mark messages with a given delay in seconds|0s| -|`-z`, `--compression`|Compress messages’ payload. Possible values are NONE, LZ4, ZLIB, ZSTD or SNAPPY.|| -|`--conf-file`|Configuration file|| -|`-k`, `--encryption-key-name`|The public key name to encrypt payload|| -|`-v`, `--encryption-key-value-file`|The file which contains the public key to encrypt payload|| -|`-ef`, `--exit-on-failure`|Exit from the process on publish failure|false| -|`-fc`, `--format-class`|Custom Formatter class name|org.apache.pulsar.testclient.DefaultMessageFormatter| -|`-fp`, `--format-payload`|Format %i as a message index in the stream from producer and/or %t as the timestamp nanoseconds|false| -|`-h`, `--help`|Help message|false| -|`-c`, `--max-connections`|Max number of TCP connections to a single broker|100| -|`-o`, `--max-outstanding`|Max number of outstanding messages|1000| -|`-p`, `--max-outstanding-across-partitions`|Max number of outstanding messages across partitions|50000| -|`-m`, `--num-messages`|Number of messages to publish in total. If this value is less than or equal to 0, it keeps publishing messages.|0| -|`-mk`, `--message-key-generation-mode`|The generation mode of message key. Valid options are `autoIncrement`, `random`|| -|`-ioThreads`, `--num-io-threads`|Set the number of threads to be used for handling connections to brokers|1| -|`-n`, `--num-producers`|The number of producers (per topic)|1| -|`-threads`, `--num-test-threads`|Number of test threads|1| -|`-t`, `--num-topic`|The number of topics|1| -|`-np`, `--partitions`|Create partitioned topics with the given number of partitions. 
Setting this value to 0 means not trying to create a topic|| -|`-f`, `--payload-file`|Use payload from an UTF-8 encoded text file and a payload will be randomly selected when publishing messages|| -|`-e`, `--payload-delimiter`|The delimiter used to split lines when using payload from a file|\n| -|`-pn`, `--producer-name`|Producer Name|| -|`-r`, `--rate`|Publish rate msg/s across topics|100| -|`--send-timeout`|Set the sendTimeout|0| -|`--separator`|Separator between the topic and topic number|-| -|`-u`, `--service-url`|Pulsar service URL|| -|`-s`, `--size`|Message size (in bytes)|1024| -|`-i`, `--stats-interval-seconds`|Statistics interval seconds. If 0, statistics will be disabled.|0| -|`-time`, `--test-duration`|Test duration (in seconds). If this value is less than or equal to 0, it keeps publishing messages.|0| -|`--trust-cert-file`|Path for the trusted TLS certificate file|| -|`--warmup-time`|Warm-up time in seconds|1| -|`--tls-allow-insecure`|Allow insecure TLS connection|| - - -### `read` -Run a topic reader - -Usage - -```bash - -$ pulsar-perf read options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.|| -|`--auth-plugin`|Authentication plugin class name|| -|`--listener-name`|Listener name for the broker|| -|`--conf-file`|Configuration file|| -|`-h`, `--help`|Help message|false| -|`-n`, `--num-messages`|Number of messages to consume in total. If the value is equal to or smaller than 0, it keeps consuming messages.|0| -|`-c`, `--max-connections`|Max number of TCP connections to a single broker|100| -|`-ioThreads`, `--num-io-threads`|Set the number of threads to be used for handling connections to brokers|1| -|`-t`, `--num-topics`|The number of topics|1| -|`-r`, `--rate`|Simulate a slow message reader (rate in msg/s)|0| -|`-q`, `--receiver-queue-size`|Size of the receiver queue|1000| -|`-u`, `--service-url`|Pulsar service URL|| -|`-m`, `--start-message-id`|Start message id. This can be either 'earliest', 'latest' or a specific message id by using 'lid:eid'|earliest| -|`-i`, `--stats-interval-seconds`|Statistics interval seconds. If 0, statistics will be disabled.|0| -|`-time`, `--test-duration`|Test duration (in seconds). If this value is less than or equal to 0, it keeps consuming messages.|0| -|`--trust-cert-file`|Path for the trusted TLS certificate file|| -|`--use-tls`|Use TLS encryption on the connection|false| -|`--tls-allow-insecure`|Allow insecure TLS connection|| - -### `websocket-producer` -Run a websocket producer - -Usage - -```bash - -$ pulsar-perf websocket-producer options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.|| -|`--auth-plugin`|Authentication plugin class name|| -|`--conf-file`|Configuration file|| -|`-h`, `--help`|Help message|false| -|`-m`, `--num-messages`|Number of messages to publish in total. 
If this value is less than or equal to 0, it keeps publishing messages.|0| -|`-t`, `--num-topic`|The number of topics|1| -|`-f`, `--payload-file`|Use payload from a file instead of empty buffer|| -|`-e`, `--payload-delimiter`|The delimiter used to split lines when using payload from a file|\n| -|`-fp`, `--format-payload`|Format %i as a message index in the stream from producer and/or %t as the timestamp nanoseconds|false| -|`-fc`, `--format-class`|Custom formatter class name|`org.apache.pulsar.testclient.DefaultMessageFormatter`| -|`-u`, `--proxy-url`|Pulsar Proxy URL, e.g., "ws://localhost:8080/"|| -|`-r`, `--rate`|Publish rate msg/s across topics|100| -|`-s`, `--size`|Message size in byte|1024| -|`-time`, `--test-duration`|Test duration (in seconds). If this value is less than or equal to 0, it keeps publishing messages.|0| - - -### `managed-ledger` -Write directly on managed-ledgers - -Usage - -```bash - -$ pulsar-perf managed-ledger options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-a`, `--ack-quorum`|Ledger ack quorum|1| -|`-dt`, `--digest-type`|BookKeeper digest type. Possible Values: [CRC32, MAC, CRC32C, DUMMY]|CRC32C| -|`-e`, `--ensemble-size`|Ledger ensemble size|1| -|`-h`, `--help`|Help message|false| -|`-c`, `--max-connections`|Max number of TCP connections to a single bookie|1| -|`-o`, `--max-outstanding`|Max number of outstanding requests|1000| -|`-m`, `--num-messages`|Number of messages to publish in total. If this value is less than or equal to 0, it keeps publishing messages.|0| -|`-t`, `--num-topic`|Number of managed ledgers|1| -|`-r`, `--rate`|Write rate msg/s across managed ledgers|100| -|`-s`, `--size`|Message size in byte|1024| -|`-time`, `--test-duration`|Test duration (in seconds). If this value is less than or equal to 0, it keeps publishing messages.|0| -|`--threads`|Number of threads writing|1| -|`-w`, `--write-quorum`|Ledger write quorum|1| -|`-zk`, `--zookeeperServers`|ZooKeeper connection string|| - - -### `monitor-brokers` -Continuously receive broker data and/or load reports - -Usage - -```bash - -$ pulsar-perf monitor-brokers options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--connect-string`|A connection string for one or more ZooKeeper servers|| -|`-h`, `--help`|Help message|false| - - -### `simulation-client` -Run a simulation server acting as a Pulsar client. Uses the client configuration specified in `conf/client.conf`. - -Usage - -```bash - -$ pulsar-perf simulation-client options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--port`|Port to listen on for controller|0| -|`--service-url`|Pulsar Service URL|| -|`-h`, `--help`|Help message|false| - -### `simulation-controller` -Run a simulation controller to give commands to servers - -Usage - -```bash - -$ pulsar-perf simulation-controller options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--client-port`|The port that the clients are listening on|0| -|`--clients`|Comma-separated list of client hostnames|| -|`--cluster`|The cluster to test on|| -|`-h`, `--help`|Help message|false| - - -### `help` -This help message - -Usage - -```bash - -$ pulsar-perf help - -``` - -## `bookkeeper` -A tool for managing BookKeeper. - -Usage - -```bash - -$ bookkeeper command - -``` - -Commands -* `auto-recovery` -* `bookie` -* `localbookie` -* `upgrade` -* `shell` - - -Environment variables - -The table below lists the environment variables that you can use to configure the bookkeeper tool. 
- -|Variable|Description|Default| -|---|---|---| -|BOOKIE_LOG_CONF|Log4j configuration file|conf/log4j2.yaml| -|BOOKIE_CONF|BookKeeper configuration file|conf/bk_server.conf| -|BOOKIE_EXTRA_OPTS|Extra options to be passed to the JVM|| -|BOOKIE_EXTRA_CLASSPATH|Extra paths for BookKeeper's classpath|| -|ENTRY_FORMATTER_CLASS|The Java class used to format entries|| -|BOOKIE_PID_DIR|Folder where the BookKeeper server PID file should be stored|| -|BOOKIE_STOP_TIMEOUT|Wait time before forcefully killing the Bookie server instance if attempts to stop it are not successful|| - - -### `auto-recovery` -Runs an auto-recovery service daemon - -Usage - -```bash - -$ bookkeeper auto-recovery options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-c`, `--conf`|Configuration for the auto-recovery daemon|| - - -### `bookie` -Starts up a BookKeeper server (aka bookie) - -Usage - -```bash - -$ bookkeeper bookie options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-c`, `--conf`|Configuration for the auto-recovery daemon|| -|-readOnly|Force start a read-only bookie server|false| -|-withAutoRecovery|Start auto-recovery service bookie server|false| - - -### `localbookie` -Runs a test ensemble of N bookies locally - -Usage - -```bash - -$ bookkeeper localbookie N - -``` - -### `upgrade` -Upgrade the bookie’s filesystem - -Usage - -```bash - -$ bookkeeper upgrade options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-c`, `--conf`|Configuration for the auto-recovery daemon|| -|`-u`, `--upgrade`|Upgrade the bookie’s directories|| - - -### `shell` -Run shell for admin commands. To see a full listing of those commands, run bookkeeper shell without an argument. - -Usage - -```bash - -$ bookkeeper shell - -``` - -Example - -```bash - -$ bookkeeper shell bookiesanity - -``` - -## `broker-tool` - -The `broker- tool` is used for operations on a specific broker. - -Usage - -```bash - -$ broker-tool command - -``` - -Commands -* `load-report` -* `help` - -Example -Two ways to get more information about a command as below: - -```bash - -$ broker-tool help command -$ broker-tool command --help - -``` - -### `load-report` - -Collect the load report of a specific broker. -The command is run on a broker, and used for troubleshooting why broker can’t collect right load report. - -Options - -|Flag|Description|Default| -|---|---|---| -|`-i`, `--interval`| Interval to collect load report, in milliseconds || -|`-h`, `--help`| Display help information || - diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/reference-configuration.md b/site2/website/versioned_docs/version-2.9.1-deprecated/reference-configuration.md deleted file mode 100644 index d71ce8214b21da..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/reference-configuration.md +++ /dev/null @@ -1,774 +0,0 @@ ---- -id: reference-configuration -title: Pulsar configuration -sidebar_label: "Pulsar configuration" -original_id: reference-configuration ---- - - - - -You can manage Pulsar configuration by configuration files in the [`conf`](https://github.com/apache/pulsar/tree/master/conf) directory of a Pulsar [installation](getting-started-standalone.md). 
- -- [BookKeeper](#bookkeeper) -- [Broker](#broker) -- [Client](#client) -- [Log4j](#log4j) -- [Log4j shell](#log4j-shell) -- [Standalone](#standalone) -- [WebSocket](#websocket) -- [Pulsar proxy](#pulsar-proxy) -- [ZooKeeper](#zookeeper) - -## BookKeeper - -BookKeeper is a replicated log storage system that Pulsar uses for durable storage of all messages. - - -|Name|Description|Default| -|---|---|---| -|bookiePort|The port on which the bookie server listens.|3181| -|allowLoopback|Whether the bookie is allowed to use a loopback interface as its primary interface (that is the interface used to establish its identity). By default, loopback interfaces are not allowed to work as the primary interface. Using a loopback interface as the primary interface usually indicates a configuration error. For example, it’s fairly common in some VPS setups to not configure a hostname or to have the hostname resolve to `127.0.0.1`. If this is the case, then all bookies in the cluster will establish their identities as `127.0.0.1:3181` and only one will be able to join the cluster. For VPSs configured like this, you should explicitly set the listening interface.|false| -|listeningInterface|The network interface on which the bookie listens. By default, the bookie listens on all interfaces.|eth0| -|advertisedAddress|Configure a specific hostname or IP address that the bookie should use to advertise itself to clients. By default, the bookie advertises either its own IP address or hostname according to the `listeningInterface` and `useHostNameAsBookieID` settings.|N/A| -|allowMultipleDirsUnderSameDiskPartition|Configure the bookie to enable/disable multiple ledger/index/journal directories in the same filesystem disk partition.|false| -|minUsableSizeForIndexFileCreation|The minimum safe usable size available in index directory for bookie to create index files while replaying journal at the time of bookie starts in Readonly Mode (in bytes).|1073741824| -|journalDirectory|The directory where BookKeeper outputs its write-ahead log (WAL).|data/bookkeeper/journal| -|journalDirectories|Directories that BookKeeper outputs its write ahead log. Multiple directories are available, being separated by `,`. For example: `journalDirectories=/tmp/bk-journal1,/tmp/bk-journal2`. If `journalDirectories` is set, the bookies skip `journalDirectory` and use this setting directory.|/tmp/bk-journal| -|ledgerDirectories|The directory where BookKeeper outputs ledger snapshots. This could define multiple directories to store snapshots separated by `,`, for example `ledgerDirectories=/tmp/bk1-data,/tmp/bk2-data`. Ideally, ledger dirs and the journal dir are each in a different device, which reduces the contention between random I/O and sequential write. It is possible to run with a single disk, but performance will be significantly lower.|data/bookkeeper/ledgers| -|ledgerManagerType|The type of ledger manager used to manage how ledgers are stored, managed, and garbage collected. See [BookKeeper Internals](http://bookkeeper.apache.org/docs/latest/getting-started/concepts) for more info.|hierarchical| -|zkLedgersRootPath|The root ZooKeeper path used to store ledger metadata. This parameter is used by the ZooKeeper-based ledger manager as a root znode to store all ledgers.|/ledgers| -|ledgerStorageClass|Ledger storage implementation class|org.apache.bookkeeper.bookie.storage.ldb.DbLedgerStorage| -|entryLogFilePreallocationEnabled|Enable or disable entry logger preallocation|true| -|logSizeLimit|Max file size of the entry logger, in bytes. 
A new entry log file will be created when the old one reaches the file size limitation.|1073741824|
|minorCompactionThreshold|Threshold of minor compaction. Entry log files whose remaining size percentage reaches below this threshold will be compacted in a minor compaction. If set to less than zero, the minor compaction is disabled.|0.2|
|minorCompactionInterval|Time interval to run minor compaction, in seconds. If set to less than zero, the minor compaction is disabled. Note: should be greater than gcWaitTime. |3600|
|majorCompactionThreshold|The threshold of major compaction. Entry log files whose remaining size percentage reaches below this threshold will be compacted in a major compaction. Those entry log files whose remaining size percentage is still higher than the threshold will never be compacted. If set to less than zero, the major compaction is disabled.|0.5|
|majorCompactionInterval|The time interval to run major compaction, in seconds. If set to less than zero, the major compaction is disabled. Note: should be greater than gcWaitTime. |86400|
|readOnlyModeEnabled|If `readOnlyModeEnabled=true`, then when all ledger disks are full, the bookie is converted to read-only mode and serves only read requests. Otherwise the bookie is shut down.|true|
|forceReadOnlyBookie|Whether the bookie is forcibly started in read-only mode.|false|
|persistBookieStatusEnabled|Persist the bookie status locally on the disks, so that bookies can retain their status across restarts.|false|
|compactionMaxOutstandingRequests|Sets the maximum number of entries that can be compacted without flushing. When compacting, the entries are written to the entry log and the new offsets are cached in memory. Once the entry log is flushed, the index is updated with the new offsets. This parameter controls the number of entries added to the entry log before a flush is forced. A higher value for this parameter means more memory will be used for offsets. Each offset consists of 3 longs. This parameter should not be modified unless you're fully aware of the consequences.|100000|
|compactionRate|The rate at which compaction will read entries, in adds per second.|1000|
|isThrottleByBytes|Throttle compaction by bytes or by entries.|false|
|compactionRateByEntries|The rate at which compaction will read entries, in adds per second.|1000|
|compactionRateByBytes|Set the rate at which compaction reads entries. The unit is bytes added per second.|1000000|
|journalMaxSizeMB|Max file size of the journal file, in megabytes. A new journal file will be created when the old one reaches the file size limitation.|2048|
|journalMaxBackups|The max number of old journal files to keep.
Keeping a number of old journal files helps data recovery in special cases.|5|
|journalPreAllocSizeMB|How much space to pre-allocate at a time in the journal.|16|
|journalWriteBufferSizeKB|The size of the write buffers used for the journal, in KB.|64|
|journalRemoveFromPageCache|Whether pages should be removed from the page cache after a force write.|true|
|journalAdaptiveGroupWrites|Whether to group journal force writes, which optimizes group commit for higher throughput.|true|
|journalMaxGroupWaitMSec|The maximum latency to impose on a journal write to achieve grouping.|1|
|journalAlignmentSize|All journal writes and commits should be aligned to the given size.|4096|
|journalBufferedWritesThreshold|Maximum writes to buffer to achieve grouping.|524288|
|journalFlushWhenQueueEmpty|Whether to flush the journal when the journal queue is empty.|false|
|numJournalCallbackThreads|The number of threads that should handle journal callbacks.|8|
|openLedgerRereplicationGracePeriod | The grace period, in milliseconds, that the replication worker waits before fencing and replicating a ledger fragment that's still being written to upon bookie failure. | 30000 |
|rereplicationEntryBatchSize|The maximum number of entries to keep in a fragment for re-replication.|100|
|autoRecoveryDaemonEnabled|Whether the bookie itself can start the auto-recovery service.|true|
|lostBookieRecoveryDelay|How long to wait, in seconds, before starting auto recovery of a lost bookie.|0|
|gcWaitTime|The interval before the next garbage collection is triggered, in milliseconds. Since garbage collection runs in the background, too frequent GC hurts performance. It is better to use a longer GC interval if there is enough disk capacity.|900000|
|gcOverreplicatedLedgerWaitTime|The interval before the next garbage collection of over-replicated ledgers, in milliseconds. This should not run very frequently, since the metadata for all the ledgers on the bookie is read from ZooKeeper.|86400000|
|flushInterval|The interval at which ledger index pages are flushed to disk, in milliseconds. Flushing index files introduces a lot of random disk I/O. If the journal dir and the ledger dirs are on different devices, flushing does not affect performance. But if they are on the same device, performance degrades significantly with too frequent flushing. You can consider increasing the flush interval to get better performance, at the cost of a longer bookie server restart after a failure.|60000|
|bookieDeathWatchInterval|Interval at which to watch whether the bookie is dead or not, in milliseconds.|1000|
|allowStorageExpansion|Allow the bookie storage to expand. Newly added ledger and index dirs must be empty.|false|
|zkServers|A list of one or more servers on which ZooKeeper is running. The server list can be comma-separated values, for example: zkServers=zk1:2181,zk2:2181,zk3:2181.|localhost:2181|
|zkTimeout|ZooKeeper client session timeout, in milliseconds. The bookie server exits if it receives SESSION_EXPIRED because it was partitioned off from ZooKeeper for longer than the session timeout; JVM garbage collection or disk I/O can cause SESSION_EXPIRED.
Increasing this value can help avoid this issue.|30000|
|zkRetryBackoffStartMs|The initial backoff time for ZooKeeper client retries, in milliseconds.|1000|
|zkRetryBackoffMaxMs|The maximum backoff time for ZooKeeper client retries, in milliseconds.|10000|
|zkEnableSecurity|Set ACLs on every node written on ZooKeeper, allowing users to read and write BookKeeper metadata stored on ZooKeeper. In order to make ACLs work, you need to set up ZooKeeper JAAS authentication. All the bookies and clients need to share the same user, which is usually done using Kerberos authentication. See the ZooKeeper documentation.|false|
|httpServerEnabled|The flag that enables/disables starting the admin HTTP server.|false|
|httpServerPort|The HTTP server port to listen on. By default, the value is `8080`. If you want to keep it consistent with the Prometheus stats provider, you can set it to `8000`.|8080|
|httpServerClass|The HTTP server class.|org.apache.bookkeeper.http.vertx.VertxHttpServer|
|serverTcpNoDelay|This setting is used to enable/disable Nagle's algorithm, which is a means of improving the efficiency of TCP/IP networks by reducing the number of packets that need to be sent over the network. If you are sending many small messages, such that more than one can fit in a single IP packet, setting this to false to enable the Nagle algorithm can provide better performance.|true|
|serverSockKeepalive|This setting is used to send keep-alive messages on connection-oriented sockets.|true|
|serverTcpLinger|The socket linger timeout on close. When enabled, a close or shutdown will not return until all queued messages for the socket have been successfully sent or the linger timeout has been reached. Otherwise, the call returns immediately and the closing is done in the background.|0|
|byteBufAllocatorSizeMax|The maximum buffer size of the received ByteBuf allocator.|1048576|
|nettyMaxFrameSizeBytes|The maximum Netty frame size in bytes. Any message received larger than this will be rejected.|5253120|
|openFileLimit|Max number of ledger index files that can be opened in the bookie server. If the number of ledger index files reaches this limit, the bookie server starts to swap some ledgers from memory to disk. Too frequent swapping affects performance. You can tune this number to gain performance according to your requirements.|0|
|pageSize|Size of an index page in the ledger cache, in bytes. A larger index page can improve the performance of writing pages to disk, which is efficient when you have a small number of ledgers and these ledgers have a similar number of entries. If you have a large number of ledgers and each ledger has fewer entries, a smaller index page improves memory usage.|8192|
|pageLimit|How many index pages are provided in the ledger cache. If the number of index pages reaches this limit, the bookie server starts to swap some ledgers from memory to disk. You can increase this value when you find that swapping becomes more frequent. But make sure pageLimit*pageSize is not more than the JVM max memory limit, otherwise you would get an OutOfMemoryException. In general, increasing pageLimit and using a smaller index page gives better performance in the case of a large number of ledgers with fewer entries. If pageLimit is -1, the bookie server uses 1/3 of the JVM memory to compute the maximum number of index pages.|0|
|readOnlyModeEnabled|If all configured ledger directories are full, then support only read requests for clients.
If "readOnlyModeEnabled=true" then on all ledger disks full, bookie will be converted to read-only mode and serve only read requests. Otherwise the bookie will be shutdown. By default this will be disabled.|true| -|diskUsageThreshold|For each ledger dir, maximum disk space which can be used. Default is 0.95f. i.e. 95% of disk can be used at most after which nothing will be written to that partition. If all ledger dir partitions are full, then bookie will turn to readonly mode if ‘readOnlyModeEnabled=true’ is set, else it will shutdown. Valid values should be in between 0 and 1 (exclusive).|0.95| -|diskCheckInterval|Disk check interval in milli seconds, interval to check the ledger dirs usage.|10000| -|auditorPeriodicCheckInterval|Interval at which the auditor will do a check of all ledgers in the cluster. By default this runs once a week. The interval is set in seconds. To disable the periodic check completely, set this to 0. Note that periodic checking will put extra load on the cluster, so it should not be run more frequently than once a day.|604800| -|sortedLedgerStorageEnabled|Whether sorted-ledger storage is enabled.|true| -|auditorPeriodicBookieCheckInterval|The interval between auditor bookie checks. The auditor bookie check, checks ledger metadata to see which bookies should contain entries for each ledger. If a bookie which should contain entries is unavailable, thea the ledger containing that entry is marked for recovery. Setting this to 0 disabled the periodic check. Bookie checks will still run when a bookie fails. The interval is specified in seconds.|86400| -|numAddWorkerThreads|The number of threads that should handle write requests. if zero, the writes would be handled by netty threads directly.|0| -|numReadWorkerThreads|The number of threads that should handle read requests. if zero, the reads would be handled by netty threads directly.|8| -|numHighPriorityWorkerThreads|The umber of threads that should be used for high priority requests (i.e. recovery reads and adds, and fencing).|8| -|maxPendingReadRequestsPerThread|If read workers threads are enabled, limit the number of pending requests, to avoid the executor queue to grow indefinitely.|2500| -|maxPendingAddRequestsPerThread|The limited number of pending requests, which is used to avoid the executor queue to grow indefinitely when add workers threads are enabled.|10000| -|isForceGCAllowWhenNoSpace|Whether force compaction is allowed when the disk is full or almost full. Forcing GC could get some space back, but could also fill up the disk space more quickly. This is because new log files are created before GC, while old garbage log files are deleted after GC.|false| -|verifyMetadataOnGC|True if the bookie should double check `readMetadata` prior to GC.|false| -|flushEntrylogBytes|Entry log flush interval in bytes. Flushing in smaller chunks but more frequently reduces spikes in disk I/O. Flushing too frequently may also affect performance negatively.|268435456| -|readBufferSizeBytes|The number of bytes we should use as capacity for BufferedReadChannel.|4096| -|writeBufferSizeBytes|The number of bytes used as capacity for the write buffer|65536| -|useHostNameAsBookieID|Whether the bookie should use its hostname to register with the coordination service (e.g.: zookeeper service). When false, bookie will use its ip address for the registration.|false| -|bookieId | If you want to custom a bookie ID or use a dynamic network address for the bookie, you can set the `bookieId`.

    Bookie advertises itself using the `bookieId` rather than the `BookieSocketAddress` (`hostname:port` or `IP:port`). If you set the `bookieId`, then the `useHostNameAsBookieID` does not take effect.

    The `bookieId` is a non-empty string that can contain ASCII digits and letters ([a-zA-Z0-9]), colons, dashes, and dots.

    For more information about `bookieId`, see [here](http://bookkeeper.apache.org/bps/BP-41-bookieid/).|N/A| -|allowEphemeralPorts|Whether the bookie is allowed to use an ephemeral port (port 0) as its server port. By default, an ephemeral port is not allowed. Using an ephemeral port as the service port usually indicates a configuration error. However, in unit tests, using an ephemeral port will address port conflict problems and allow running tests in parallel.|false| -|enableLocalTransport|Whether the bookie is allowed to listen for the BookKeeper clients executed on the local JVM.|false| -|disableServerSocketBind|Whether the bookie is allowed to disable bind on network interfaces. This bookie will be available only to BookKeeper clients executed on the local JVM.|false| -|skipListArenaChunkSize|The number of bytes that we should use as chunk allocation for `org.apache.bookkeeper.bookie.SkipListArena`.|4194304| -|skipListArenaMaxAllocSize|The maximum size that we should allocate from the skiplist arena. Allocations larger than this should be allocated directly by the VM to avoid fragmentation.|131072| -|bookieAuthProviderFactoryClass|The factory class name of the bookie authentication provider. If this is null, then there is no authentication.|null| -|statsProviderClass||org.apache.bookkeeper.stats.prometheus.PrometheusMetricsProvider| -|prometheusStatsHttpPort||8000| -|dbStorage_writeCacheMaxSizeMb|Size of Write Cache. Memory is allocated from JVM direct memory. Write cache is used to buffer entries before flushing into the entry log. For good performance, it should be big enough to hold a substantial amount of entries in the flush interval.|25% of direct memory| -|dbStorage_readAheadCacheMaxSizeMb|Size of Read cache. Memory is allocated from JVM direct memory. This read cache is pre-filled doing read-ahead whenever a cache miss happens. By default, it is allocated to 25% of the available direct memory.|N/A| -|dbStorage_readAheadCacheBatchSize|How many entries to pre-fill in cache after a read cache miss|1000| -|dbStorage_rocksDB_blockCacheSize|Size of RocksDB block-cache. For best performance, this cache should be big enough to hold a significant portion of the index database which can reach ~2GB in some cases. By default, it uses 10% of direct memory.|N/A| -|dbStorage_rocksDB_writeBufferSizeMB||64| -|dbStorage_rocksDB_sstSizeInMB||64| -|dbStorage_rocksDB_blockSize||65536| -|dbStorage_rocksDB_bloomFilterBitsPerKey||10| -|dbStorage_rocksDB_numLevels||-1| -|dbStorage_rocksDB_numFilesInLevel0||4| -|dbStorage_rocksDB_maxSizeInLevel1MB||256| - -## Broker - -Pulsar brokers are responsible for handling incoming messages from producers, dispatching messages to consumers, replicating data between clusters, and more. - -|Name|Description|Default| -|---|---|---| -|advertisedListeners|Specify multiple advertised listeners for the broker.

    The format is `<listener_name>:pulsar://<host>:<port>`.

    If there are multiple listeners, separate them with commas.

    **Note**: do not use this configuration with `advertisedAddress` and `brokerServicePort`. If the value of this configuration is empty, the broker uses `advertisedAddress` and `brokerServicePort`|/| -|internalListenerName|Specify the internal listener name for the broker.

    **Note**: the listener name must be contained in `advertisedListeners`.

    If the value of this configuration is empty, the broker uses the first listener as the internal listener.|/| -|authenticateOriginalAuthData| If this flag is set to `true`, the broker authenticates the original Auth data; else it just accepts the originalPrincipal and authorizes it (if required). |false| -|enablePersistentTopics| Whether persistent topics are enabled on the broker |true| -|enableNonPersistentTopics| Whether non-persistent topics are enabled on the broker |true| -|functionsWorkerEnabled| Whether the Pulsar Functions worker service is enabled in the broker |false| -|exposePublisherStats|Whether to enable topic level metrics.|true| -|statsUpdateFrequencyInSecs||60| -|statsUpdateInitialDelayInSecs||60| -|zookeeperServers| Zookeeper quorum connection string || -|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300| -|configurationStoreServers| Configuration store connection string (as a comma-separated list) || -|brokerServicePort| Broker data port |6650| -|brokerServicePortTls| Broker data port for TLS |6651| -|webServicePort| Port to use to server HTTP request |8080| -|webServicePortTls| Port to use to server HTTPS request |8443| -|webSocketServiceEnabled| Enable the WebSocket API service in broker |false| -|webSocketNumIoThreads|The number of IO threads in Pulsar Client used in WebSocket proxy.|8| -|webSocketConnectionsPerBroker|The number of connections per Broker in Pulsar Client used in WebSocket proxy.|8| -|webSocketSessionIdleTimeoutMillis|Time in milliseconds that idle WebSocket session times out.|300000| -|webSocketMaxTextFrameSize|The maximum size of a text message during parsing in WebSocket proxy.|1048576| -|exposeTopicLevelMetricsInPrometheus|Whether to enable topic level metrics.|true| -|exposeConsumerLevelMetricsInPrometheus|Whether to enable consumer level metrics.|false| -|jvmGCMetricsLoggerClassName|Classname of Pluggable JVM GC metrics logger that can log GC specific metrics.|N/A| -|bindAddress| Hostname or IP address the service binds on, default is 0.0.0.0. |0.0.0.0| -|bindAddresses| Additional Hostname or IP addresses the service binds on: `listener_name:scheme://host:port,...`. || -|advertisedAddress| Hostname or IP address the service advertises to the outside world. If not set, the value of `InetAddress.getLocalHost().getHostName()` is used. || -|clusterName| Name of the cluster to which this broker belongs to || -|maxTenants|The maximum number of tenants that can be created in each Pulsar cluster. When the number of tenants reaches the threshold, the broker rejects the request of creating a new tenant. The default value 0 disables the check. |0| -| maxNamespacesPerTenant | The maximum number of namespaces that can be created in each tenant. When the number of namespaces reaches this threshold, the broker rejects the request of creating a new tenant. The default value 0 disables the check. |0| -|brokerDeduplicationEnabled| Sets the default behavior for message deduplication in the broker. If enabled, the broker will reject messages that were already stored in the topic. This setting can be overridden on a per-namespace basis. |false| -|brokerDeduplicationMaxNumberOfProducers| The maximum number of producers for which information will be stored for deduplication purposes. |10000| -|brokerDeduplicationEntriesInterval| The number of entries after which a deduplication informational snapshot is taken. 
A larger interval will lead to fewer snapshots being taken, though this would also lengthen the topic recovery time (the time required for entries published after the snapshot to be replayed). |1000|
|brokerDeduplicationSnapshotIntervalSeconds| The time period after which a deduplication informational snapshot is taken. It runs simultaneously with `brokerDeduplicationEntriesInterval`. |120|
|brokerDeduplicationProducerInactivityTimeoutMinutes| The time of inactivity (in minutes) after which the broker will discard deduplication information related to a disconnected producer. |360|
|dispatchThrottlingRatePerReplicatorInMsg| The default messages-per-second dispatch throttling limit for every replicator in replication. A value of `0` disables replication message dispatch throttling.| 0 |
|dispatchThrottlingRatePerReplicatorInByte| The default bytes-per-second dispatch throttling limit for every replicator in replication. A value of `0` disables replication message-byte dispatch throttling.| 0 |
|zooKeeperSessionTimeoutMillis| ZooKeeper session timeout, in milliseconds |30000|
|brokerShutdownTimeoutMs| Time to wait for graceful broker shutdown. After this time elapses, the process will be killed |60000|
|skipBrokerShutdownOnOOM| Flag to skip broker shutdown when the broker handles an out-of-memory error. |false|
|backlogQuotaCheckEnabled| Enable the backlog quota check, which enforces an action on the topic when the quota is reached |true|
|backlogQuotaCheckIntervalInSeconds| How often to check for topics that have reached the quota |60|
|backlogQuotaDefaultLimitBytes| The default per-topic backlog quota limit. A value less than 0 means no limitation. By default, it is -1. | -1 |
|backlogQuotaDefaultRetentionPolicy|The default backlog quota retention policy. By default, it is `producer_request_hold`.
  1. 'producer_request_hold' Policy which holds the producer's send request until the resource becomes available (or holding times out)
  2. 'producer_exception' Policy which throws `javax.jms.ResourceAllocationException` to the producer
  3. 'consumer_backlog_eviction' Policy which evicts the oldest message from the slowest consumer's backlog
|producer_request_hold|
|allowAutoTopicCreation| Enable topic auto-creation when a new producer or consumer connects |true|
|allowAutoTopicCreationType| The type of topic that is allowed to be automatically created (partitioned/non-partitioned) |non-partitioned|
|allowAutoSubscriptionCreation| Enable subscription auto-creation when a new consumer connects |true|
|defaultNumPartitions| The default number of partitions for automatically created topics if `allowAutoTopicCreationType` is partitioned |1|
|brokerDeleteInactiveTopicsEnabled| Enable the deletion of inactive topics. If topics are not consumed for a while, these inactive topics might be cleaned up. Deleting inactive topics is enabled by default. The default period is 1 minute. |true|
|brokerDeleteInactiveTopicsFrequencySeconds| How often to check for inactive topics |60|
| brokerDeleteInactiveTopicsMode | Set the mode to delete inactive topics.
  1. `delete_when_no_subscriptions`: delete the topic which has no subscriptions or active producers.
  2. `delete_when_subscriptions_caught_up`: delete the topic whose subscriptions have no backlogs and which has no active producers or consumers.
| `delete_when_no_subscriptions` |
| brokerDeleteInactiveTopicsMaxInactiveDurationSeconds | Set the maximum duration for inactive topics. If it is not specified, the `brokerDeleteInactiveTopicsFrequencySeconds` parameter is adopted. | N/A |
|forceDeleteTenantAllowed| Allows you to delete a tenant forcefully. |false|
|forceDeleteNamespaceAllowed| Allows you to delete a namespace forcefully. |false|
|messageExpiryCheckIntervalInMinutes| The frequency of proactively checking and purging expired messages. |5|
|brokerServiceCompactionMonitorIntervalInSeconds| Interval between checks to determine whether topics with compaction policies need compaction. |60|
|brokerServiceCompactionThresholdInBytes|If the estimated backlog size is greater than this threshold, compaction is triggered.

    Setting this threshold to 0 disables the compaction check.|N/A|
|delayedDeliveryEnabled| Whether to enable delayed delivery for messages. If disabled, messages are delivered immediately and there is no tracking overhead.|true|
|delayedDeliveryTickTimeMillis|Control the tick time for retrying on delayed delivery, which affects the accuracy of the delivery time compared to the scheduled time. By default, it is 1 second.|1000|
|activeConsumerFailoverDelayTimeMillis| How long to delay rewinding the cursor and dispatching messages when the active consumer is changed. |1000|
|clientLibraryVersionCheckEnabled| Enable checks for the minimum allowed client library version |false|
|clientLibraryVersionCheckAllowUnversioned| Allow client libraries with no version information |true|
|statusFilePath| Path for the file used to determine the rotation status for the broker when responding to service discovery health checks ||
|preferLaterVersions| If true (and ModularLoadManagerImpl is being used), the load manager will attempt to use only brokers running the latest software version (to minimize impact to bundles) |false|
|maxNumPartitionsPerPartitionedTopic|Max number of partitions per partitioned topic. Use 0 or a negative number to disable the check|0|
| maxSubscriptionsPerTopic | Maximum number of subscriptions allowed to subscribe to a topic. Once this limit is reached, the broker rejects new subscriptions until the number of subscriptions decreases. When the value is set to 0, the limit check is disabled. | 0 |
| maxProducersPerTopic | Maximum number of producers allowed to connect to a topic. Once this limit is reached, the broker rejects new producers until the number of connected producers decreases. When the value is set to 0, the limit check is disabled. | 0 |
| maxConsumersPerTopic | Maximum number of consumers allowed to connect to a topic. Once this limit is reached, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 |
| maxConsumersPerSubscription | Maximum number of consumers allowed to connect to a subscription. Once this limit is reached, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 |
|tlsEnabled|Deprecated - Use `webServicePortTls` and `brokerServicePortTls` instead. |false|
|tlsCertificateFilePath| Path for the TLS certificate file ||
|tlsKeyFilePath| Path for the TLS private key file ||
|tlsTrustCertsFilePath| Path for the trusted TLS certificate file. This cert is used to verify that any certs presented by connecting clients are signed by a certificate authority. If this verification fails, then the certs are untrusted and the connections are dropped. ||
|tlsAllowInsecureConnection| Accept untrusted TLS certificates from clients. If it is set to `true`, a client with a cert which cannot be verified with the 'tlsTrustCertsFilePath' cert will be allowed to connect to the server, though the cert will not be used for client authentication. |false|
|tlsProtocols|Specify the TLS protocols the broker will use to negotiate during the TLS handshake. Multiple values can be specified, separated by commas. Example:- ```TLSv1.3```, ```TLSv1.2``` ||
|tlsCiphers|Specify the TLS ciphers the broker will use to negotiate during the TLS handshake. Multiple values can be specified, separated by commas.
Example:- ```TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256```||
|tlsEnabledWithKeyStore| Enable TLS with KeyStore type configuration in the broker |false|
|tlsProvider| TLS Provider for KeyStore type ||
|tlsKeyStoreType| TLS KeyStore type configuration in the broker: JKS, PKCS12 |JKS|
|tlsKeyStore| TLS KeyStore path in the broker ||
|tlsKeyStorePassword| TLS KeyStore password for the broker ||
|brokerClientTlsEnabledWithKeyStore| Whether the internal client uses the KeyStore type to authenticate with Pulsar brokers |false|
|brokerClientSslProvider| The TLS Provider used by the internal client to authenticate with other Pulsar brokers ||
|brokerClientTlsTrustStoreType| TLS TrustStore type configuration for the internal client: JKS, PKCS12, used by the internal client to authenticate with Pulsar brokers |JKS|
|brokerClientTlsTrustStore| TLS TrustStore path for the internal client, used by the internal client to authenticate with Pulsar brokers ||
|brokerClientTlsTrustStorePassword| TLS TrustStore password for the internal client, used by the internal client to authenticate with Pulsar brokers ||
|brokerClientTlsCiphers| Specify the TLS ciphers the internal client will use to negotiate during the TLS handshake. (a comma-separated list of ciphers) e.g. [TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256]||
|brokerClientTlsProtocols|Specify the TLS protocols the broker will use to negotiate during the TLS handshake. (a comma-separated list of protocol names). e.g. `TLSv1.3`, `TLSv1.2` ||
|ttlDurationDefaultInSeconds|The default Time to Live (TTL) for namespaces if the TTL is not configured in namespace policies. When the value is set to `0`, TTL is disabled. By default, TTL is disabled. |0|
|tokenSecretKey| Configure the secret key to be used to validate auth tokens. The key can be specified like: `tokenSecretKey=data:;base64,xxxxxxxxx` or `tokenSecretKey=file:///my/secret.key`. Note: the key file must be DER-encoded.||
|tokenPublicKey| Configure the public key to be used to validate auth tokens. The key can be specified like: `tokenPublicKey=data:;base64,xxxxxxxxx` or `tokenPublicKey=file:///my/secret.key`. Note: the key file must be DER-encoded.||
|tokenPublicAlg| Configure the algorithm to be used to validate auth tokens. This can be any of the asymmetric algorithms supported by Java JWT (https://github.com/jwtk/jjwt#signature-algorithms-keys) |RS256|
|tokenAuthClaim| Specify which of the token's claims will be used as the authentication "principal" or "role". The default "sub" claim will be used if this is left blank ||
|tokenAudienceClaim| The token audience "claim" name, e.g. "aud", that will be used to get the audience from the token. If not set, the audience will not be verified. ||
|tokenAudience| The token audience stands for this broker. The `tokenAudienceClaim` field of a valid token needs to contain this value. ||
|maxUnackedMessagesPerConsumer| Max number of unacknowledged messages allowed for a consumer on a shared subscription. The broker will stop sending messages to the consumer once this limit is reached, until the consumer starts acknowledging messages back. A value of 0 disables the unacked-message limit check, and the consumer can receive messages without any restriction |50000|
|maxUnackedMessagesPerSubscription| Max number of unacknowledged messages allowed per shared subscription. The broker will stop dispatching messages to all consumers of the subscription once this limit is reached, until consumers start acknowledging messages back and the unacked count falls to limit/2.
A value of 0 disables the unacked-message limit check, and the dispatcher can dispatch messages without any restriction |200000|
|subscriptionRedeliveryTrackerEnabled| Enable the subscription message redelivery tracker |true|
|subscriptionExpirationTimeMinutes | How long after the last consumption before an inactive subscription is deleted.

    Setting this configuration to a value **greater than 0** deletes inactive subscriptions automatically.
    Setting this configuration to **0** does not delete inactive subscriptions automatically.

    Since this configuration takes effect on all topics, if there is even one topic whose subscriptions should not be deleted automatically, you need to set it to 0.
    Instead, you can set a subscription expiration time for each **namespace** using the [`pulsar-admin namespaces set-subscription-expiration-time options` command](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-subscription-expiration-time-em-). | 0 |
|maxConcurrentLookupRequest| Max number of concurrent lookup requests the broker allows, to throttle heavy incoming lookup traffic |50000|
|maxConcurrentTopicLoadRequest| Max number of concurrent topic loading requests the broker allows, to control the number of ZooKeeper operations |5000|
|authenticationEnabled| Enable authentication |false|
|authenticationProviders| Authentication provider name list, which is a comma-separated list of class names ||
| authenticationRefreshCheckSeconds | Interval of time for checking for expired authentication credentials | 60 |
|authorizationEnabled| Enforce authorization |false|
|superUserRoles| Role names that are treated as "super-user", meaning they will be able to do all admin operations and publish/consume from all topics ||
|brokerClientAuthenticationPlugin| Authentication settings of the broker itself. Used when the broker connects to other brokers, either in the same or other clusters ||
|brokerClientAuthenticationParameters|||
|athenzDomainNames| Supported Athenz provider domain names (comma separated) for authentication ||
|exposePreciseBacklogInPrometheus| Enable exposing the precise backlog stats; set to false to calculate using the published counter and consumed counter, which is more efficient but may be inaccurate. |false|
|schemaRegistryStorageClassName|The schema storage implementation used by this broker.|org.apache.pulsar.broker.service.schema.BookkeeperSchemaStorageFactory|
|isSchemaValidationEnforced|Enforce schema validation in the following case: if a producer without a schema attempts to produce to a topic with a schema, the producer will fail to connect. Please be careful when using this, since non-Java clients don't support schema. If this setting is enabled, then non-Java clients fail to produce.|false|
| topicFencingTimeoutSeconds | If a topic remains fenced for a certain time period (in seconds), it is closed forcefully. If set to 0 or a negative number, the fenced topic is not closed. | 0 |
|offloadersDirectory|The directory for all the offloader implementations.|./offloaders|
|bookkeeperMetadataServiceUri| Metadata service URI that BookKeeper uses for loading the corresponding metadata driver and resolving its metadata service location. This value can be fetched using the `bookkeeper shell whatisinstanceid` command in a BookKeeper cluster. For example: zk+hierarchical://localhost:2181/ledgers. The metadata service URI list can also be semicolon-separated values like below: zk+hierarchical://zk1:2181;zk2:2181;zk3:2181/ledgers ||
|bookkeeperClientAuthenticationPlugin| Authentication plugin to use when connecting to bookies ||
|bookkeeperClientAuthenticationParametersName| BookKeeper auth plugin implementation-specific parameters name and values ||
|bookkeeperClientAuthenticationParameters|||
|bookkeeperClientNumWorkerThreads| Number of BookKeeper client worker threads.
The default is Runtime.getRuntime().availableProcessors() ||
|bookkeeperClientTimeoutInSeconds| Timeout for BK add / read operations |30|
|bookkeeperClientSpeculativeReadTimeoutInMillis| Speculative reads are initiated if a read request doesn't complete within a certain time. A value of 0 disables speculative reads |0|
|bookkeeperNumberOfChannelsPerBookie| Number of channels per bookie |16|
|bookkeeperClientHealthCheckEnabled| Enable the bookies health check. Bookies that have more than the configured number of failures within the interval will be quarantined for some time. During this period, new ledgers won't be created on these bookies |true|
|bookkeeperClientHealthCheckIntervalSeconds||60|
|bookkeeperClientHealthCheckErrorThresholdPerInterval||5|
|bookkeeperClientHealthCheckQuarantineTimeInSeconds ||1800|
|bookkeeperClientRackawarePolicyEnabled| Enable the rack-aware bookie selection policy. BK will choose bookies from different racks when forming a new bookie ensemble |true|
|bookkeeperClientRegionawarePolicyEnabled| Enable the region-aware bookie selection policy. BK will choose bookies from different regions and racks when forming a new bookie ensemble. If enabled, the value of bookkeeperClientRackawarePolicyEnabled is ignored |false|
|bookkeeperClientMinNumRacksPerWriteQuorum| Minimum number of racks per write quorum. The BK rack-aware bookie selection policy will try to get bookies from at least 'bookkeeperClientMinNumRacksPerWriteQuorum' racks for a write quorum. |2|
|bookkeeperClientEnforceMinNumRacksPerWriteQuorum| Enforces the rack-aware bookie selection policy to pick bookies from 'bookkeeperClientMinNumRacksPerWriteQuorum' racks for a write quorum. If BK can't find such a bookie, it throws BKNotEnoughBookiesException instead of picking a random one. |false|
|bookkeeperClientReorderReadSequenceEnabled| Enable/disable reordering the read sequence when reading entries. |false|
|bookkeeperClientIsolationGroups| Enable bookie isolation by specifying a list of bookie groups to choose from. Any bookie outside the specified groups will not be used by the broker ||
|bookkeeperClientSecondaryIsolationGroups| Enable a bookie secondary-isolation group if bookkeeperClientIsolationGroups doesn't have enough bookies available. ||
|bookkeeperClientMinAvailableBookiesInIsolationGroups| Minimum number of bookies that should be available as part of bookkeeperClientIsolationGroups, else the broker will include bookkeeperClientSecondaryIsolationGroups bookies in the isolated list. ||
|bookkeeperClientGetBookieInfoIntervalSeconds| Set the interval to periodically check bookie info |86400|
|bookkeeperClientGetBookieInfoRetryIntervalSeconds| Set the interval to retry a failed bookie info lookup |60|
|bookkeeperEnableStickyReads | Enable/disable having read operations for a ledger be sticky to a single bookie. If this flag is enabled, the client will use one single bookie (by preference) to read all entries for a ledger. | true |
|managedLedgerDefaultEnsembleSize| Number of bookies to use when creating a ledger |2|
|managedLedgerDefaultWriteQuorum| Number of copies to store for each message |2|
|managedLedgerDefaultAckQuorum| Number of guaranteed copies (acks to wait for before a write is complete) |2|
|managedLedgerCacheSizeMB| Amount of memory to use for caching data payloads in the managed ledger. This memory is allocated from JVM direct memory and it's shared across all the topics running in the same broker.
By default, uses 1/5th of available direct memory || -|managedLedgerCacheCopyEntries| Whether we should make a copy of the entry payloads when inserting in cache| false| -|managedLedgerCacheEvictionWatermark| Threshold to which bring down the cache level when eviction is triggered |0.9| -|managedLedgerCacheEvictionFrequency| Configure the cache eviction frequency for the managed ledger cache (evictions/sec) | 100.0 | -|managedLedgerCacheEvictionTimeThresholdMillis| All entries that have stayed in cache for more than the configured time, will be evicted | 1000 | -|managedLedgerCursorBackloggedThreshold| Configure the threshold (in number of entries) from where a cursor should be considered 'backlogged' and thus should be set as inactive. | 1000| -|managedLedgerDefaultMarkDeleteRateLimit| Rate limit the amount of writes per second generated by consumer acking the messages |1.0| -|managedLedgerMaxEntriesPerLedger| The max number of entries to append to a ledger before triggering a rollover. A ledger rollover is triggered after the min rollover time has passed and one of the following conditions is true:
    • The max rollover time has been reached
    • The max entries have been written to the ledger
    • The max ledger size has been written to the ledger
    |50000|
|managedLedgerMinLedgerRolloverTimeMinutes| Minimum time between ledger rollovers for a topic |10|
|managedLedgerMaxLedgerRolloverTimeMinutes| Maximum time before forcing a ledger rollover for a topic |240|
|managedLedgerCursorMaxEntriesPerLedger| Max number of entries to append to a cursor ledger |50000|
|managedLedgerCursorRolloverTimeInSeconds| Max time before triggering a rollover on a cursor ledger |14400|
|managedLedgerMaxUnackedRangesToPersist| Max number of "acknowledgment holes" that are going to be persistently stored. When acknowledging out of order, a consumer will leave holes that are supposed to be quickly filled by acking all the messages. The information on which messages are acknowledged is persisted by compressing it into "ranges" of acknowledged messages. After the max number of ranges is reached, the information will only be tracked in memory and messages will be redelivered in case of crashes. |1000|
|autoSkipNonRecoverableData| Skip reading non-recoverable/unreadable data ledgers under the managed ledger's list. It helps when data ledgers get corrupted in BookKeeper and the managed cursor is stuck at that ledger. |false|
|loadBalancerEnabled| Enable the load balancer |true|
|loadBalancerPlacementStrategy| Strategy to assign a new bundle |weightedRandomSelection|
|loadBalancerReportUpdateThresholdPercentage| Percentage of change needed to trigger a load report update |10|
|loadBalancerReportUpdateMaxIntervalMinutes| Maximum interval between load report updates |15|
|loadBalancerHostUsageCheckIntervalMinutes| Frequency of reports to collect |1|
|loadBalancerSheddingIntervalMinutes| Load shedding interval. The broker periodically checks whether some traffic should be offloaded from over-loaded brokers to under-loaded brokers |30|
|loadBalancerSheddingGracePeriodMinutes| Prevent the same topics from being shed and moved to another broker more than once within this timeframe |30|
|loadBalancerBrokerMaxTopics| Usage threshold to allocate the max number of topics to a broker |50000|
|loadBalancerBrokerUnderloadedThresholdPercentage| Usage threshold to determine a broker as under-loaded |1|
|loadBalancerBrokerOverloadedThresholdPercentage| Usage threshold to determine a broker as over-loaded |85|
|loadBalancerResourceQuotaUpdateIntervalMinutes| Interval to update the namespace bundle resource quota |15|
|loadBalancerBrokerComfortLoadLevelPercentage| Usage threshold to determine a broker as having just the right level of load |65|
|loadBalancerAutoBundleSplitEnabled| Enable/disable namespace bundle auto-split |false|
|loadBalancerNamespaceBundleMaxTopics| Maximum topics in a bundle; otherwise bundle split will be triggered |1000|
|loadBalancerNamespaceBundleMaxSessions| Maximum sessions (producers + consumers) in a bundle; otherwise bundle split will be triggered |1000|
|loadBalancerNamespaceBundleMaxMsgRate| Maximum msgRate (in + out) in a bundle; otherwise bundle split will be triggered |1000|
|loadBalancerNamespaceBundleMaxBandwidthMbytes| Maximum bandwidth (in + out) in a bundle; otherwise bundle split will be triggered |100|
|loadBalancerNamespaceMaximumBundles| Maximum number of bundles in a namespace |128|
|replicationMetricsEnabled| Enable replication metrics |true|
|replicationConnectionsPerBroker| Max number of connections to open for each broker in a remote cluster. More connections host-to-host lead to better throughput over high-latency links.
|16|
|replicationProducerQueueSize| Replicator producer queue size |1000|
|replicatorPrefix| Replicator prefix used for the replicator producer name and cursor name |pulsar.repl|
|replicationTlsEnabled| Enable TLS when talking with other clusters to replicate messages |false|
|brokerServicePurgeInactiveFrequencyInSeconds|Deprecated. Use `brokerDeleteInactiveTopicsFrequencySeconds`.|60|
|transactionCoordinatorEnabled|Whether to enable the transaction coordinator in the broker.|true|
|transactionMetadataStoreProviderClassName| |org.apache.pulsar.transaction.coordinator.impl.InMemTransactionMetadataStoreProvider|
|defaultRetentionTimeInMinutes| Default message retention time. 0 means retention is disabled. -1 means data is not removed by time quota |0|
|defaultRetentionSizeInMB| Default retention size. 0 means retention is disabled. -1 means data is not removed by size quota |0|
|keepAliveIntervalSeconds| How often to check whether the connections are still alive |30|
|bootstrapNamespaces| The bootstrap name. | N/A |
|loadManagerClassName| Name of the load manager to use |org.apache.pulsar.broker.loadbalance.impl.SimpleLoadManagerImpl|
|supportedNamespaceBundleSplitAlgorithms| Supported algorithm names for namespace bundle split |[range_equally_divide,topic_count_equally_divide]|
|defaultNamespaceBundleSplitAlgorithm| Default algorithm name for namespace bundle split |range_equally_divide|
|managedLedgerOffloadDriver| The directory for all the offloader implementations `offloadersDirectory=./offloaders`. Driver to use to offload old data to long-term storage (possible values: S3, aws-s3, google-cloud-storage). When using google-cloud-storage, make sure both Google Cloud Storage and the Google Cloud Storage JSON API are enabled for the project (check from Developers Console -> Api&auth -> APIs). ||
|managedLedgerOffloadMaxThreads| Maximum number of thread pool threads for ledger offloading |2|
|managedLedgerOffloadPrefetchRounds|The maximum prefetch rounds for ledger reading for offloading.|1|
|managedLedgerUnackedRangesOpenCacheSetEnabled| Use an open range set to cache unacknowledged messages |true|
|managedLedgerOffloadDeletionLagMs|Delay between a ledger being successfully offloaded to long-term storage and the ledger being deleted from BookKeeper | 14400000|
|managedLedgerOffloadAutoTriggerSizeThresholdBytes|The number of bytes before triggering automatic offload to long-term storage |-1 (disabled)|
|s3ManagedLedgerOffloadRegion| For Amazon S3 ledger offload, the AWS region ||
|s3ManagedLedgerOffloadBucket| For Amazon S3 ledger offload, the bucket to place offloaded ledgers into ||
|s3ManagedLedgerOffloadServiceEndpoint| For Amazon S3 ledger offload, an alternative endpoint to connect to (useful for testing) ||
|s3ManagedLedgerOffloadMaxBlockSizeInBytes| For Amazon S3 ledger offload, the max block size in bytes. (64MB by default, 5MB minimum) |67108864|
|s3ManagedLedgerOffloadReadBufferSizeInBytes| For Amazon S3 ledger offload, the read buffer size in bytes (1MB by default) |1048576|
|gcsManagedLedgerOffloadRegion|For Google Cloud Storage ledger offload, the region where the offload bucket is located. Go to this page for more details: https://cloud.google.com/storage/docs/bucket-locations .|N/A|
|gcsManagedLedgerOffloadBucket|For Google Cloud Storage ledger offload, the bucket to place offloaded ledgers into.|N/A|
|gcsManagedLedgerOffloadMaxBlockSizeInBytes|For Google Cloud Storage ledger offload, the maximum block size in bytes.
(64MB by default, 5MB minimum)|67108864| -|gcsManagedLedgerOffloadReadBufferSizeInBytes|For Google Cloud Storage ledger offload, Read buffer size in bytes. (1MB by default)|1048576| -|gcsManagedLedgerOffloadServiceAccountKeyFile|For Google Cloud Storage, path to json file containing service account credentials. For more details, see the "Service Accounts" section of https://support.google.com/googleapi/answer/6158849 .|N/A| -|fileSystemProfilePath|For File System Storage, file system profile path.|../conf/filesystem_offload_core_site.xml| -|fileSystemURI|For File System Storage, file system uri.|N/A| -|s3ManagedLedgerOffloadRole| For Amazon S3 ledger offload, provide a role to assume before writing to s3 || -|s3ManagedLedgerOffloadRoleSessionName| For Amazon S3 ledger offload, provide a role session name when using a role |pulsar-s3-offload| -| acknowledgmentAtBatchIndexLevelEnabled | Enable or disable the batch index acknowledgement. | false | -|enableReplicatedSubscriptions|Whether to enable tracking of replicated subscriptions state across clusters.|true| -|replicatedSubscriptionsSnapshotFrequencyMillis|The frequency of snapshots for replicated subscriptions tracking.|1000| -|replicatedSubscriptionsSnapshotTimeoutSeconds|The timeout for building a consistent snapshot for tracking replicated subscriptions state.|30| -|replicatedSubscriptionsSnapshotMaxCachedPerSubscription|The maximum number of snapshot to be cached per subscription.|10| -|maxMessagePublishBufferSizeInMB|The maximum memory size for a broker to handle messages that are sent by producers. If the processing message size exceeds this value, the broker stops reading data from the connection. The processing messages refer to the messages that are sent to the broker but the broker has not sent response to the client. Usually the messages are waiting to be written to bookies. It is shared across all the topics running in the same broker. The value `-1` disables the memory limitation. By default, it is 50% of direct memory.|N/A| -|messagePublishBufferCheckIntervalInMillis|Interval between checks to see if message publish buffer size exceeds the maximum. Use `0` or negative number to disable the max publish buffer limiting.|100| -|retentionCheckIntervalInSeconds|Check between intervals to see if consumed ledgers need to be trimmed. Use 0 or negative number to disable the check.|120| -| maxMessageSize | Set the maximum size of a message. | 5242880 | -| preciseTopicPublishRateLimiterEnable | Enable precise topic publish rate limiting. | false | -| lazyCursorRecovery | Whether to recover cursors lazily when trying to recover a managed ledger backing a persistent topic. It can improve write availability of topics. The caveat is now when recovered ledger is ready to write we're not sure if all old consumers' last mark delete position(ack position) can be recovered or not. So user can make the trade off or have custom logic in application to checkpoint consumer state.| false | -|haProxyProtocolEnabled | Enable or disable the [HAProxy](http://www.haproxy.org/) protocol. |false| -| maxTopicsPerNamespace | The maximum number of persistent topics that can be created in the namespace. When the number of topics reaches this threshold, the broker rejects the request of creating a new topic, including the auto-created topics by the producer or consumer, until the number of connected consumers decreases. The default value 0 disables the check. 
| 0 | -|subscriptionTypesEnabled| Enable all subscription types, which are exclusive, shared, failover, and key_shared. | Exclusive, Shared, Failover, Key_Shared | -| managedLedgerInfoCompressionType | Compression type of managed ledger information.

    Available options are `NONE`, `LZ4`, `ZLIB`, `ZSTD`, and `SNAPPY`.

    If this value is `NONE` or invalid, the `managedLedgerInfo` is not compressed.

    **Note** that after enabling this configuration, if you want to downgrade a broker, you need to change the value to `NONE` and make sure all ledger metadata is saved without compression. | None |
| additionalServlets | Additional servlet name.

    If you have multiple additional servlets, separate them by commas.

    For example, additionalServlet_1, additionalServlet_2 | N/A | -| additionalServletDirectory | Location of broker additional servlet NAR directory | ./brokerAdditionalServlet | -|narExtractionDirectory | The extraction directory of the nar package.
    Available for Protocol Handler, Additional Servlets, Offloaders, Broker Interceptor. | System.getProperty("java.io.tmpdir") | - -## Client - -You can use the [`pulsar-client`](reference-cli-tools.md#pulsar-client) CLI tool to publish messages to and consume messages from Pulsar topics. You can use this tool in place of a client library. - -|Name|Description|Default| -|---|---|---| -|webServiceUrl| The web URL for the cluster. |http://localhost:8080/| -|brokerServiceUrl| The Pulsar protocol URL for the cluster. |pulsar://localhost:6650/| -|authPlugin| The authentication plugin. || -|authParams| The authentication parameters for the cluster, as a comma-separated string. || -|useTls| Whether to enforce the TLS authentication in the cluster. |false| -| tlsAllowInsecureConnection | Allow TLS connections to servers whose certificate cannot be verified to have been signed by a trusted certificate authority. | false | -| tlsEnableHostnameVerification | Whether the server hostname must match the common name of the certificate that is used by the server. | false | -|tlsTrustCertsFilePath||| -| useKeyStoreTls | Enable TLS with KeyStore type configuration in the broker. | false | -| tlsTrustStoreType | TLS TrustStore type configuration.
  1. JKS
  2. PKCS12
|JKS|
| tlsTrustStore | TLS TrustStore path. | |
| tlsTrustStorePassword | TLS TrustStore password. | |

## Log4j

You can set the log level and configuration in the [log4j2.yaml](https://github.com/apache/pulsar/blob/d557e0aa286866363bc6261dec87790c055db1b0/conf/log4j2.yaml#L155) file. The following logging configuration parameters are available.

|Name|Default|
|---|---|
|pulsar.root.logger| WARN,CONSOLE|
|pulsar.log.dir| logs|
|pulsar.log.file| pulsar.log|
|log4j.rootLogger| ${pulsar.root.logger}|
|log4j.appender.CONSOLE| org.apache.log4j.ConsoleAppender|
|log4j.appender.CONSOLE.Threshold| DEBUG|
|log4j.appender.CONSOLE.layout| org.apache.log4j.PatternLayout|
|log4j.appender.CONSOLE.layout.ConversionPattern| %d{ISO8601} - %-5p - [%t:%C{1}@%L] - %m%n|
|log4j.appender.ROLLINGFILE| org.apache.log4j.DailyRollingFileAppender|
|log4j.appender.ROLLINGFILE.Threshold| DEBUG|
|log4j.appender.ROLLINGFILE.File| ${pulsar.log.dir}/${pulsar.log.file}|
|log4j.appender.ROLLINGFILE.layout| org.apache.log4j.PatternLayout|
|log4j.appender.ROLLINGFILE.layout.ConversionPattern| %d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n|
|log4j.appender.TRACEFILE| org.apache.log4j.FileAppender|
|log4j.appender.TRACEFILE.Threshold| TRACE|
|log4j.appender.TRACEFILE.File| pulsar-trace.log|
|log4j.appender.TRACEFILE.layout| org.apache.log4j.PatternLayout|
|log4j.appender.TRACEFILE.layout.ConversionPattern| %d{ISO8601} - %-5p [%t:%C{1}@%L][%x] - %m%n|

> Note: 'topic' in log4j2.appender is configurable.
> - If you want to append all logs to a single topic, set the same topic name.
> - If you want to append logs to different topics, you can set different topic names.

## Log4j shell

|Name|Default|
|---|---|
|bookkeeper.root.logger| ERROR,CONSOLE|
|log4j.rootLogger| ${bookkeeper.root.logger}|
|log4j.appender.CONSOLE| org.apache.log4j.ConsoleAppender|
|log4j.appender.CONSOLE.Threshold| DEBUG|
|log4j.appender.CONSOLE.layout| org.apache.log4j.PatternLayout|
|log4j.appender.CONSOLE.layout.ConversionPattern| %d{ABSOLUTE} %-5p %m%n|
|log4j.logger.org.apache.zookeeper| ERROR|
|log4j.logger.org.apache.bookkeeper| ERROR|
|log4j.logger.org.apache.bookkeeper.bookie.BookieShell| INFO|


## Standalone

|Name|Description|Default|
|---|---|---|
|authenticateOriginalAuthData| If this flag is set to `true`, the broker authenticates the original auth data; else it just accepts the originalPrincipal and authorizes it (if required). |false|
|zookeeperServers| The quorum connection string for local ZooKeeper ||
|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300|
|configurationStoreServers| Configuration store connection string (as a comma-separated list) ||
|brokerServicePort| The port on which the standalone broker listens for connections |6650|
|webServicePort| The port used by the standalone broker for HTTP requests |8080|
|bindAddress| The hostname or IP address on which the standalone service binds |0.0.0.0|
|bindAddresses| Additional hostname or IP addresses the service binds on: `listener_name:scheme://host:port,...`. ||
|advertisedAddress| The hostname or IP address that the standalone service advertises to the outside world. If not set, the value of `InetAddress.getLocalHost().getHostName()` is used.
||
| numAcceptorThreads | Number of threads to use for the Netty acceptor | 1 |
| numIOThreads | Number of threads to use for Netty IO | 2 * Runtime.getRuntime().availableProcessors() |
| numHttpServerThreads | Number of threads to use for HTTP request processing | 2 * Runtime.getRuntime().availableProcessors()|
|isRunningStandalone|This flag controls features that are meant to be used when running in standalone mode.|N/A|
|clusterName| The name of the cluster that this broker belongs to. |standalone|
| failureDomainsEnabled | Enable the cluster's failure domains, which can distribute brokers into logical regions. | false |
|zooKeeperSessionTimeoutMillis| The ZooKeeper session timeout, in milliseconds. |30000|
|zooKeeperOperationTimeoutSeconds|ZooKeeper operation timeout in seconds.|30|
|brokerShutdownTimeoutMs| The time to wait for graceful broker shutdown. After this time elapses, the process will be killed. |60000|
|skipBrokerShutdownOnOOM| Flag to skip broker shutdown when the broker handles an out-of-memory error. |false|
|backlogQuotaCheckEnabled| Enable the backlog quota check, which enforces a specified action when the quota is reached. |true|
|backlogQuotaCheckIntervalInSeconds| How often to check for topics that have reached the backlog quota. |60|
|backlogQuotaDefaultLimitBytes| The default per-topic backlog quota limit. A value less than 0 means no limitation. By default, it is -1. |-1|
|ttlDurationDefaultInSeconds|The default Time to Live (TTL) for namespaces if the TTL is not configured in namespace policies. When the value is set to `0`, TTL is disabled. By default, TTL is disabled. |0|
|brokerDeleteInactiveTopicsEnabled| Enable the deletion of inactive topics. If topics are not consumed for a while, these inactive topics might be cleaned up. Deleting inactive topics is enabled by default. The default period is 1 minute. |true|
|brokerDeleteInactiveTopicsFrequencySeconds| How often to check for inactive topics, in seconds. |60|
| maxPendingPublishRequestsPerConnection | Maximum pending publish requests per connection, to avoid keeping a large number of pending requests in memory | 1000|
|messageExpiryCheckIntervalInMinutes| How often to proactively check and purge expired messages. |5|
|activeConsumerFailoverDelayTimeMillis| How long to delay rewinding the cursor and dispatching messages when the active consumer is changed. |1000|
| subscriptionExpirationTimeMinutes | How long after the last consumption before inactive subscriptions are deleted. When it is set to 0, inactive subscriptions are not deleted automatically | 0 |
| subscriptionRedeliveryTrackerEnabled | Enable the subscription message redelivery tracker to send the redelivery count to the consumer. | true |
|subscriptionKeySharedEnable|Whether to enable the Key_Shared subscription.|true|
| subscriptionKeySharedUseConsistentHashing | In the Key_Shared subscription mode, with the default AUTO_SPLIT mode, use splitting ranges or consistent hashing to reassign keys to new consumers. | false |
| subscriptionKeySharedConsistentHashingReplicaPoints | In the Key_Shared subscription mode, the number of points in the consistent-hashing ring. The greater the number, the more equal the assignment of keys to consumers. | 100 |
| subscriptionExpiryCheckIntervalInMinutes | How frequently to proactively check and purge expired subscriptions |5 |
| brokerDeduplicationEnabled | Set the default behavior for message deduplication in the broker. This can be overridden per-namespace. If it is enabled, the broker rejects messages that are already stored in the topic.
| false |
-| brokerDeduplicationMaxNumberOfProducers | Maximum number of producers whose information is persisted for deduplication purposes | 10000 |
-| brokerDeduplicationEntriesInterval | Number of entries after which a deduplication information snapshot is taken. A greater interval leads to fewer snapshots being taken, though it increases the topic recovery time when the entries published after the snapshot need to be replayed. | 1000 |
-| brokerDeduplicationProducerInactivityTimeoutMinutes | The time of inactivity (in minutes) after which the broker discards deduplication information related to a disconnected producer. | 360 |
-| defaultNumberOfNamespaceBundles | When a namespace is created without specifying the number of bundles, this value is used as the default setting.| 4 |
-|clientLibraryVersionCheckEnabled| Enable checks for the minimum allowed client library version. |false|
-|clientLibraryVersionCheckAllowUnversioned| Allow client libraries with no version information |true|
-|statusFilePath| The path for the file used to determine the rotation status for the broker when responding to service discovery health checks |/usr/local/apache/htdocs|
-|maxUnackedMessagesPerConsumer| The maximum number of unacknowledged messages allowed to be received by consumers on a shared subscription. The broker stops sending messages to a consumer once this limit is reached, until the consumer begins acknowledging messages. A value of 0 disables the unacked message limit check and thus allows consumers to receive messages without any restrictions. |50000|
-|maxUnackedMessagesPerSubscription| The same as above, except per subscription rather than per consumer. |200000|
-| maxUnackedMessagesPerBroker | Maximum number of unacknowledged messages allowed per broker. Once this limit is reached, the broker stops dispatching messages to all shared subscriptions that have a higher number of unacknowledged messages, until subscriptions start acknowledging messages back and the unacknowledged message count falls to limit/2. When the value is set to 0, the unacknowledged message limit check is disabled and the broker does not block dispatchers. | 0 |
-| maxUnackedMessagesPerSubscriptionOnBrokerBlocked | Once the broker reaches the maxUnackedMessagesPerBroker limit, it blocks subscriptions whose unacknowledged messages exceed this percentage limit, and those subscriptions do not receive any new messages until they acknowledge messages back. | 0.16 |
-| unblockStuckSubscriptionEnabled|The broker periodically checks whether a subscription is stuck and unblocks it if this flag is enabled.|false|
-|zookeeperSessionExpiredPolicy|There are two policies for when a ZooKeeper session expiry happens, "shutdown" and "reconnect". If it is set to the "shutdown" policy, the broker shuts down when the ZooKeeper session expires. If it is set to the "reconnect" policy, the broker tries to reconnect to the ZooKeeper server and re-register metadata with ZooKeeper. Note: the "reconnect" policy is an experimental feature.|shutdown|
-| topicPublisherThrottlingTickTimeMillis | Tick time for scheduling the task that checks topic publish rate limiting across all topics. A lower value can improve accuracy while throttling publishing, but it uses more CPU to perform frequent checks. (Disable publish throttling with value 0) | 10|
-| brokerPublisherThrottlingTickTimeMillis | Tick time for scheduling the task that checks broker publish rate limiting across all topics.
A lower value can improve accuracy while throttling publishing, but it uses more CPU to perform frequent checks. When the value is set to 0, publish throttling is disabled. |50 |
-| brokerPublisherThrottlingMaxMessageRate | Maximum rate (in 1 second) of messages a broker is allowed to publish if message rate limiting is enabled. When the value is set to 0, message rate limiting is disabled. | 0|
-| brokerPublisherThrottlingMaxByteRate | Maximum rate (in 1 second) of bytes a broker is allowed to publish if byte rate limiting is enabled. When the value is set to 0, byte rate limiting is disabled. | 0 |
-|subscribeThrottlingRatePerConsumer|Too many subscribe requests from a consumer can cause the broker to rewind consumer cursors and load data from bookies, causing high network bandwidth usage. When a positive value is set, the broker throttles the subscribe requests for one consumer. Otherwise, the throttling is disabled. By default, throttling is disabled.|0|
-|subscribeRatePeriodPerConsumerInSecond|Rate period for {subscribeThrottlingRatePerConsumer}. By default, it is 30s.|30|
-| dispatchThrottlingRatePerTopicInMsg | Default message (per second) dispatch throttling limit for every topic. When the value is set to 0, the default message dispatch throttling limit is disabled. |0 |
-| dispatchThrottlingRatePerTopicInByte | Default byte (per second) dispatch throttling limit for every topic. When the value is set to 0, the default byte dispatch throttling limit is disabled. | 0|
-| dispatchThrottlingRateRelativeToPublishRate | Enable dispatch rate limiting relative to the publish rate. | false |
-|dispatchThrottlingRatePerSubscriptionInMsg|The default message dispatch throttling limit for a subscription. The value of 0 disables message dispatch throttling.|0|
-|dispatchThrottlingRatePerSubscriptionInByte|The default message-byte dispatch throttling limit for a subscription. The value of 0 disables message-byte dispatch throttling.|0|
-| dispatchThrottlingOnNonBacklogConsumerEnabled | Enable dispatch throttling for both caught-up consumers and consumers with backlogs. | true |
-|dispatcherMaxReadBatchSize|The maximum number of entries to read from BookKeeper. By default, it is 100 entries.|100|
-|dispatcherMaxReadSizeBytes|The maximum size in bytes of entries to read from BookKeeper. By default, it is 5MB.|5242880|
-|dispatcherMinReadBatchSize|The minimum number of entries to read from BookKeeper. By default, it is 1 entry. When an error occurs while reading entries from BookKeeper, the broker backs off the batch size to this minimum number.|1|
-|dispatcherMaxRoundRobinBatchSize|The maximum number of entries to dispatch for a shared subscription. By default, it is 20 entries.|20|
-| preciseDispatcherFlowControl | Precise dispatcher flow control based on the historical number of messages per entry. | false |
-| streamingDispatch | Whether to use the streaming read dispatcher. It can be useful when there is a huge backlog to drain: instead of reading in micro batches, the read from BookKeeper can be streamlined to make the most of consumer capacity until the BookKeeper read limit or the consumer process limit is hit, and consumer flow control can then be used to tune the speed. This feature is currently in preview and may change in a subsequent release. | false |
-| maxConcurrentLookupRequest | Maximum number of concurrent lookup requests that the broker allows, to throttle heavy incoming lookup traffic.
| 50000 |
-| maxConcurrentTopicLoadRequest | Maximum number of concurrent topic loading requests that the broker allows, to control the number of zk-operations. | 5000 |
-| maxConcurrentNonPersistentMessagePerConnection | Maximum number of concurrent non-persistent messages that can be processed per connection. | 1000 |
-| numWorkerThreadsForNonPersistentTopic | Number of worker threads to serve non-persistent topics. | 8 |
-| enablePersistentTopics | Enable the broker to load persistent topics. | true |
-| enableNonPersistentTopics | Enable the broker to load non-persistent topics. | true |
-| maxNumPartitionsPerPartitionedTopic | Maximum number of partitions per partitioned topic. When the value is set to a negative number or is set to 0, the check is disabled. | 0 |
-| maxSubscriptionsPerTopic | Maximum number of subscriptions allowed to subscribe to a topic. Once this limit is reached, the broker rejects new subscriptions until the number of subscriptions decreases. When the value is set to 0, the limit check is disabled. | 0 |
-| maxProducersPerTopic | Maximum number of producers allowed to connect to a topic. Once this limit is reached, the broker rejects new producers until the number of connected producers decreases. When the value is set to 0, the limit check is disabled. | 0 |
-| maxConsumersPerTopic | Maximum number of consumers allowed to connect to a topic. Once this limit is reached, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 |
-| maxConsumersPerSubscription | Maximum number of consumers allowed to connect to a subscription. Once this limit is reached, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 |
-| tlsCertRefreshCheckDurationSec | TLS certificate refresh duration in seconds. When the value is set to 0, the TLS certificate is checked on every new connection. | 300 |
-| tlsCertificateFilePath | Path for the TLS certificate file. | |
-| tlsKeyFilePath | Path for the TLS private key file. | |
-| tlsTrustCertsFilePath | Path for the trusted TLS certificate file.| |
-| tlsAllowInsecureConnection | Accept untrusted TLS certificates from clients. If it is set to true, a client with a certificate that cannot be verified with the 'tlsTrustCertsFilePath' certificate is allowed to connect to the server, though the certificate will not be used for client authentication. | false |
-| tlsProtocols | Specify the TLS protocols the broker uses to negotiate during the TLS handshake. | |
-| tlsCiphers | Specify the TLS ciphers the broker uses to negotiate during the TLS handshake. | |
-| tlsRequireTrustedClientCertOnConnect | Trusted client certificates are required to connect over TLS. The connection is rejected if the client certificate is not trusted. In effect, this requires that all connecting clients perform TLS client authentication. | false |
-| tlsEnabledWithKeyStore | Enable TLS with KeyStore type configuration in the broker. | false |
-| tlsProvider | TLS provider for the KeyStore type. | |
-| tlsKeyStoreType | TLS KeyStore type configuration in the broker.
    • JKS
    • PKCS12
    |JKS|
-| tlsKeyStore | TLS KeyStore path in the broker. | |
-| tlsKeyStorePassword | TLS KeyStore password for the broker. | |
-| tlsTrustStoreType | TLS TrustStore type configuration in the broker.
    • JKS
    • PKCS12
    |JKS|
-| tlsTrustStore | TLS TrustStore path in the broker. | |
-| tlsTrustStorePassword | TLS TrustStore password for the broker. | |
-| brokerClientTlsEnabledWithKeyStore | Configure whether the internal client uses the KeyStore type to authenticate with Pulsar brokers. | false |
-| brokerClientSslProvider | The TLS provider used by the internal client to authenticate with other Pulsar brokers. | |
-| brokerClientTlsTrustStoreType | TLS TrustStore type configuration for the internal client to authenticate with Pulsar brokers.
    • JKS
    • PKCS12
    | JKS |
-| brokerClientTlsTrustStore | TLS TrustStore path for the internal client to authenticate with Pulsar brokers. | |
-| brokerClientTlsTrustStorePassword | TLS TrustStore password for the internal client to authenticate with Pulsar brokers. | |
-| brokerClientTlsCiphers | Specify the TLS ciphers that the internal client uses to negotiate during the TLS handshake. | |
-| brokerClientTlsProtocols | Specify the TLS protocols that the internal client uses to negotiate during the TLS handshake. | |
-| systemTopicEnabled | Enable/Disable system topics. | false |
-| topicLevelPoliciesEnabled | Enable or disable topic-level policies. Topic-level policies depend on the system topic, so please enable the system topic first. | false |
-| topicFencingTimeoutSeconds | If a topic remains fenced for a certain time period (in seconds), it is closed forcefully. If set to 0 or a negative number, the fenced topic is not closed. | 0 |
-| proxyRoles | Role names that are treated as "proxy roles". If the broker sees a request with a proxy role, it demands to see a valid original principal. | |
-|authenticationEnabled| Enable authentication for the broker. |false|
-|authenticationProviders| A comma-separated list of class names for authentication providers. |false|
-|authorizationEnabled| Enforce authorization in brokers. |false|
-| authorizationProvider | Authorization provider fully qualified class name. | org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider |
-| authorizationAllowWildcardsMatching | Allow wildcard matching in authorization. Wildcard matching is applicable only when the wildcard character (*) appears at the **first** or **last** position. | false |
-|superUserRoles| Role names that are treated as "superusers." Superusers are authorized to perform all admin tasks. | |
-|brokerClientAuthenticationPlugin| The authentication settings of the broker itself. Used when the broker connects to other brokers, either in the same cluster or from other clusters. | |
-|brokerClientAuthenticationParameters| The parameters that go along with the plugin specified using brokerClientAuthenticationPlugin. | |
-|athenzDomainNames| Supported Athenz authentication provider domain names as a comma-separated list. | |
-| anonymousUserRole | When this parameter is not empty, unauthenticated users act as the anonymousUserRole. | |
-|tokenSecretKey| Configure the secret key to be used to validate auth tokens. The key can be specified like: `tokenSecretKey=data:;base64,xxxxxxxxx` or `tokenSecretKey=file:///my/secret.key`. Note: the key file must be DER-encoded.||
-|tokenPublicKey| Configure the public key to be used to validate auth tokens. The key can be specified like: `tokenPublicKey=data:;base64,xxxxxxxxx` or `tokenPublicKey=file:///my/secret.key`. Note: the key file must be DER-encoded.||
-|tokenAuthClaim| Specify the token claim that will be used as the authentication "principal" or "role". The "subject" field will be used if this is left blank ||
-|tokenAudienceClaim| The token audience "claim" name, e.g. "aud". It is used to get the audience from the token. If it is not set, the audience is not verified. ||
-| tokenAudience | The token audience stands for this broker. The field `tokenAudienceClaim` of a valid token needs to contain this parameter.| |
-|saslJaasClientAllowedIds|This is a regexp that limits the range of possible ids which can connect to the broker using SASL.
By default, it is set to `SaslConstants.JAAS_CLIENT_ALLOWED_IDS_DEFAULT`, which is ".*pulsar.*", so only clients whose id contains 'pulsar' are allowed to connect.|N/A|
-|saslJaasBrokerSectionName|Service principal, for the login context name. By default, it is set to `SaslConstants.JAAS_DEFAULT_BROKER_SECTION_NAME`, which is "Broker".|N/A|
-|httpMaxRequestSize|If the value is larger than 0, the broker rejects all HTTP requests with bodies larger than the configured limit.|-1|
-|exposePreciseBacklogInPrometheus| Enable exposing precise backlog stats. Set to false to calculate the backlog from the published counter and consumed counter, which is more efficient but may be inaccurate. |false|
-|bookkeeperMetadataServiceUri|The metadata service URI that BookKeeper uses for loading the corresponding metadata driver and resolving its metadata service location. This value can be fetched using the `bookkeeper shell whatisinstanceid` command in the BookKeeper cluster. For example: `zk+hierarchical://localhost:2181/ledgers`. The metadata service URI list can also be semicolon-separated values like: `zk+hierarchical://zk1:2181;zk2:2181;zk3:2181/ledgers`.|N/A|
-|bookkeeperClientAuthenticationPlugin| Authentication plugin to be used when connecting to bookies (BookKeeper servers). ||
-|bookkeeperClientAuthenticationParametersName| BookKeeper authentication plugin implementation parameters and values. ||
-|bookkeeperClientAuthenticationParameters| Parameters associated with the bookkeeperClientAuthenticationParametersName ||
-|bookkeeperClientNumWorkerThreads| Number of BookKeeper client worker threads. Default is Runtime.getRuntime().availableProcessors() ||
-|bookkeeperClientTimeoutInSeconds| Timeout for BookKeeper add and read operations. |30|
-|bookkeeperClientSpeculativeReadTimeoutInMillis| Speculative reads are initiated if a read request doesn't complete within a certain time. A value of 0 disables speculative reads. |0|
-|bookkeeperUseV2WireProtocol|Use the older BookKeeper wire protocol with bookies.|true|
-|bookkeeperClientHealthCheckEnabled| Enable bookie health checks. |true|
-|bookkeeperClientHealthCheckIntervalSeconds| The time interval, in seconds, at which health checks are performed. New ledgers are not created during health checks. |60|
-|bookkeeperClientHealthCheckErrorThresholdPerInterval| Error threshold for health checks. |5|
-|bookkeeperClientHealthCheckQuarantineTimeInSeconds| How long, in seconds, to quarantine bookies that have more than the allowed number of failures within the time interval specified by bookkeeperClientHealthCheckIntervalSeconds |1800|
-|bookkeeperClientGetBookieInfoIntervalSeconds|Specify options for the GetBookieInfo check. This setting helps keep the list of bookies on the brokers up to date.|86400|
-|bookkeeperClientGetBookieInfoRetryIntervalSeconds|Specify options for the GetBookieInfo check. This setting helps keep the list of bookies on the brokers up to date.|60|
-|bookkeeperClientRackawarePolicyEnabled| |true|
-|bookkeeperClientRegionawarePolicyEnabled| |false|
-|bookkeeperClientMinNumRacksPerWriteQuorum| |2|
-|bookkeeperClientEnforceMinNumRacksPerWriteQuorum| |false|
-|bookkeeperClientReorderReadSequenceEnabled| |false|
-|bookkeeperClientIsolationGroups|||
-|bookkeeperClientSecondaryIsolationGroups| Enable a bookie secondary isolation group if bookkeeperClientIsolationGroups doesn't have enough bookies available.
||
-|bookkeeperClientMinAvailableBookiesInIsolationGroups| Minimum number of bookies that should be available as part of bookkeeperClientIsolationGroups; otherwise, the broker includes bookkeeperClientSecondaryIsolationGroups bookies in the isolated list. ||
-| bookkeeperTLSProviderFactoryClass | Set the client security provider factory class name. | org.apache.bookkeeper.tls.TLSContextFactory |
-| bookkeeperTLSClientAuthentication | Enable TLS authentication with bookies. | false |
-| bookkeeperTLSKeyFileType | Supported types: PEM, JKS, PKCS12. | PEM |
-| bookkeeperTLSTrustCertTypes | Supported types: PEM, JKS, PKCS12. | PEM |
-| bookkeeperTLSKeyStorePasswordPath | Path to the file containing the keystore password, if the client keystore is password protected. | |
-| bookkeeperTLSTrustStorePasswordPath | Path to the file containing the truststore password, if the client truststore is password protected. | |
-| bookkeeperTLSKeyFilePath | Path for the TLS private key file. | |
-| bookkeeperTLSCertificateFilePath | Path for the TLS certificate file. | |
-| bookkeeperTLSTrustCertsFilePath | Path for the trusted TLS certificate file. | |
-| bookkeeperTlsCertFilesRefreshDurationSeconds | TLS certificate refresh duration for the BookKeeper client, in seconds (0 to disable the check). | |
-| bookkeeperDiskWeightBasedPlacementEnabled | Enable/Disable disk-weight-based placement. | false |
-| bookkeeperExplicitLacIntervalInMills | Set the interval to check the need for sending an explicit LAC. When the value is set to 0, no explicit LAC is sent. | 0 |
-| bookkeeperClientExposeStatsToPrometheus | Expose BookKeeper client managed ledger stats to Prometheus. | false |
-|managedLedgerDefaultEnsembleSize| |1|
-|managedLedgerDefaultWriteQuorum| |1|
-|managedLedgerDefaultAckQuorum| |1|
-| managedLedgerDigestType | Default type of checksum to use when writing to BookKeeper. | CRC32C |
-| managedLedgerNumSchedulerThreads | Number of threads to be used for managed ledger scheduled tasks. | 8 |
-|managedLedgerCacheSizeMB| |N/A|
-|managedLedgerCacheCopyEntries| Whether to copy the entry payloads when inserting them into the cache.| false|
-|managedLedgerCacheEvictionWatermark| |0.9|
-|managedLedgerCacheEvictionFrequency| Configure the cache eviction frequency for the managed ledger cache (evictions/sec) | 100.0 |
-|managedLedgerCacheEvictionTimeThresholdMillis| All entries that have stayed in the cache for more than the configured time will be evicted | 1000 |
-|managedLedgerCursorBackloggedThreshold| Configure the threshold (in number of entries) from which a cursor should be considered 'backlogged' and thus set as inactive. | 1000|
-|managedLedgerUnackedRangesOpenCacheSetEnabled| Use an open range set to cache unacknowledged messages |true|
-|managedLedgerDefaultMarkDeleteRateLimit| |0.1|
-|managedLedgerMaxEntriesPerLedger| |50000|
-|managedLedgerMinLedgerRolloverTimeMinutes| |10|
-|managedLedgerMaxLedgerRolloverTimeMinutes| |240|
-|managedLedgerCursorMaxEntriesPerLedger| |50000|
-|managedLedgerCursorRolloverTimeInSeconds| |14400|
-| managedLedgerMaxSizePerLedgerMbytes | Maximum ledger size before triggering a rollover for a topic. | 2048 |
-| managedLedgerMaxUnackedRangesToPersist | Maximum number of "acknowledgment holes" that are going to be persistently stored. When acknowledging out of order, a consumer leaves holes that are supposed to be quickly filled by acknowledging all the messages. The information about which messages are acknowledged is persisted by compressing it into "ranges" of acknowledged messages.
After the max number of ranges is reached, the information is only tracked in memory, and messages are redelivered in case of crashes. | 10000 |
-| managedLedgerMaxUnackedRangesToPersistInZooKeeper | Maximum number of "acknowledgment holes" that can be stored in ZooKeeper. If the number of unacknowledged message ranges is higher than this limit, the broker persists unacknowledged ranges into BookKeeper to avoid additional data overhead in ZooKeeper. | 1000 |
-|autoSkipNonRecoverableData| |false|
-| managedLedgerMetadataOperationsTimeoutSeconds | Operation timeout while updating managed-ledger metadata. | 60 |
-| managedLedgerReadEntryTimeoutSeconds | Read entry timeout when the broker tries to read messages from BookKeeper. | 0 |
-| managedLedgerAddEntryTimeoutSeconds | Add entry timeout when the broker tries to publish a message to BookKeeper. | 0 |
-| managedLedgerNewEntriesCheckDelayInMillis | New entries check delay for the cursor under the managed ledger. If there are no new messages in the topic, the cursor tries to check again after the delay time. For consumption-latency-sensitive scenarios, you can set the value to a smaller value or 0; note that a smaller value may degrade consumption throughput. By default, it is 10ms. |10|
-| managedLedgerPrometheusStatsLatencyRolloverSeconds | Managed ledger Prometheus stats latency rollover seconds. | 60 |
-| managedLedgerTraceTaskExecution | Whether to trace managed ledger task execution time. | true |
-|loadBalancerEnabled| |false|
-|loadBalancerPlacementStrategy| |weightedRandomSelection|
-|loadBalancerReportUpdateThresholdPercentage| |10|
-|loadBalancerReportUpdateMaxIntervalMinutes| |15|
-|loadBalancerHostUsageCheckIntervalMinutes| |1|
-|loadBalancerSheddingIntervalMinutes| |30|
-|loadBalancerSheddingGracePeriodMinutes| |30|
-|loadBalancerBrokerMaxTopics| |50000|
-|loadBalancerBrokerUnderloadedThresholdPercentage| |1|
-|loadBalancerBrokerOverloadedThresholdPercentage| |85|
-|loadBalancerResourceQuotaUpdateIntervalMinutes| |15|
-|loadBalancerBrokerComfortLoadLevelPercentage| |65|
-|loadBalancerAutoBundleSplitEnabled| |false|
-| loadBalancerAutoUnloadSplitBundlesEnabled | Enable/Disable automatic unloading of split bundles. | true |
-|loadBalancerNamespaceBundleMaxTopics| |1000|
-|loadBalancerNamespaceBundleMaxSessions| |1000|
-|loadBalancerNamespaceBundleMaxMsgRate| |1000|
-|loadBalancerNamespaceBundleMaxBandwidthMbytes| |100|
-|loadBalancerNamespaceMaximumBundles| |128|
-| loadBalancerBrokerThresholdShedderPercentage | The broker resource usage threshold. When the broker resource usage is greater than the Pulsar cluster average resource usage, the threshold shedder is triggered to offload bundles from the broker. It only takes effect in the ThresholdShedder strategy. | 10 |
-| loadBalancerHistoryResourcePercentage | The history usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 0.9 |
-| loadBalancerBandwithInResourceWeight | The inbound bandwidth usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 |
-| loadBalancerBandwithOutResourceWeight | The outbound bandwidth usage weight when calculating new resource usage.
It only takes effect in the ThresholdShedder strategy. | 1.0 |
-| loadBalancerCPUResourceWeight | The CPU usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 |
-| loadBalancerMemoryResourceWeight | The heap memory usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 |
-| loadBalancerDirectMemoryResourceWeight | The direct memory usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 |
-| loadBalancerBundleUnloadMinThroughputThreshold | Bundle unload minimum throughput threshold, to avoid unloading bundles frequently. It only takes effect in the ThresholdShedder strategy. | 10 |
-|replicationMetricsEnabled| |true|
-|replicationConnectionsPerBroker| |16|
-|replicationProducerQueueSize| |1000|
-| replicationPolicyCheckDurationSeconds | Duration to check the replication policy, to avoid replicator inconsistency due to a missing ZooKeeper watch. When the value is set to 0, the replication policy check is disabled. | 600 |
-|defaultRetentionTimeInMinutes| |0|
-|defaultRetentionSizeInMB| |0|
-|keepAliveIntervalSeconds| |30|
-|haProxyProtocolEnabled | Enable or disable the [HAProxy](http://www.haproxy.org/) protocol. |false|
-|bookieId | If you want to customize a bookie ID or use a dynamic network address for a bookie, you can set the `bookieId`.

    Bookie advertises itself using the `bookieId` rather than the `BookieSocketAddress` (`hostname:port` or `IP:port`).

    The `bookieId` is a non-empty string that can contain ASCII digits and letters ([a-zA-Z0-9]), colons, dashes, and dots.

    For more information about `bookieId`, see [here](http://bookkeeper.apache.org/bps/BP-41-bookieid/).|/|
-| maxTopicsPerNamespace | The maximum number of persistent topics that can be created in a namespace. When the number of topics reaches this threshold, the broker rejects requests to create new topics, including topics auto-created by producers or consumers, until the number of topics in the namespace decreases. The default value 0 disables the check. | 0 |
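Many of these settings can be changed by appending key=value lines to `conf/broker.conf` and restarting the broker. The following sketch is illustrative only: the keys come from the table above, but the values are hypothetical examples, not recommendations.

```shell
# Hypothetical example: tighten a few broker limits documented above.
cat >> conf/broker.conf <<'EOF'
# Check for inactive topics every 10 minutes instead of the default 60 seconds
brokerDeleteInactiveTopicsEnabled=true
brokerDeleteInactiveTopicsFrequencySeconds=600
# Cap unacknowledged messages on shared subscriptions
maxUnackedMessagesPerConsumer=50000
maxUnackedMessagesPerSubscription=200000
# Allow at most 10 producers per topic (0 would disable the check)
maxProducersPerTopic=10
EOF
```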
-
-## WebSocket
-
-|Name|Description|Default|
-|---|---|---|
-|configurationStoreServers |||
-|zooKeeperSessionTimeoutMillis| |30000|
-|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300|
-|serviceUrl|||
-|serviceUrlTls|||
-|brokerServiceUrl|||
-|brokerServiceUrlTls|||
-|webServicePort||8080|
-|webServicePortTls||8443|
-|bindAddress||0.0.0.0|
-|clusterName |||
-|authenticationEnabled||false|
-|authenticationProviders|||
-|authorizationEnabled||false|
-|superUserRoles |||
-|brokerClientAuthenticationPlugin|||
-|brokerClientAuthenticationParameters|||
-|tlsEnabled||false|
-|tlsAllowInsecureConnection||false|
-|tlsCertificateFilePath|||
-|tlsKeyFilePath |||
-|tlsTrustCertsFilePath|||
-
-## Pulsar proxy
-
-The [Pulsar proxy](concepts-architecture-overview.md#pulsar-proxy) can be configured in the `conf/proxy.conf` file.
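As a minimal, hypothetical sketch, a proxy pointing at a broker cluster might be configured as follows; the host names are placeholders, and the keys are described in the table below.

```shell
# Hypothetical example: point the proxy at the broker cluster.
cat >> conf/proxy.conf <<'EOF'
brokerServiceURL=pulsar://broker.example.com:6650
brokerWebServiceURL=http://broker.example.com:8080
# Listen for binary protocol clients on the default port
servicePort=6650
# Log TCP channel and command information without message bodies
proxyLogLevel=1
EOF
```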
-
-|Name|Description|Default|
-|---|---|---|
-|forwardAuthorizationCredentials| Forward client authorization credentials to the broker for re-authorization; make sure authentication is enabled for this to take effect. |false|
-|zookeeperServers| The ZooKeeper quorum connection string (as a comma-separated list) ||
-|configurationStoreServers| Configuration store connection string (as a comma-separated list) ||
-| brokerServiceURL | The service URL pointing to the broker cluster. Must begin with `pulsar://`. | |
-| brokerServiceURLTLS | The TLS service URL pointing to the broker cluster. Must begin with `pulsar+ssl://`. | |
-| brokerWebServiceURL | The web service URL pointing to the broker cluster | |
-| brokerWebServiceURLTLS | The TLS web service URL pointing to the broker cluster | |
-| functionWorkerWebServiceURL | The web service URL pointing to the function worker cluster. It is only configured when you set up function workers in a separate cluster. | |
-| functionWorkerWebServiceURLTLS | The TLS web service URL pointing to the function worker cluster. It is only configured when you set up function workers in a separate cluster. | |
-|zookeeperSessionTimeoutMs| ZooKeeper session timeout (in milliseconds) |30000|
-|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300|
-|advertisedAddress|Hostname or IP address the service advertises to the outside world. If not set, the value of `InetAddress.getLocalHost().getHostname()` is used.|N/A|
-|servicePort| The port to use for serving binary Protobuf requests |6650|
-|servicePortTls| The port to use for serving binary Protobuf TLS requests |6651|
-|statusFilePath| Path for the file used to determine the rotation status for the proxy instance when responding to service discovery health checks ||
-| proxyLogLevel | Proxy log level
    • 0: Do not log any TCP channel information.
    • 1: Parse and log any TCP channel information and command information without message body.
    • 2: Parse and log channel information, command information and message body.
    | 0 |
-|authenticationEnabled| Whether authentication is enabled for the Pulsar proxy |false|
-|authenticateMetricsEndpoint| Whether the '/metrics' endpoint requires authentication. Defaults to true. 'authenticationEnabled' must also be set for this to take effect. |true|
-|authenticationProviders| Authentication provider name list (a comma-separated list of class names) ||
-|authorizationEnabled| Whether authorization is enforced by the Pulsar proxy |false|
-|authorizationProvider| Authorization provider as a fully qualified class name |org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider|
-| anonymousUserRole | When this parameter is not empty, unauthenticated users act as the anonymousUserRole. | |
-|brokerClientAuthenticationPlugin| The authentication plugin used by the Pulsar proxy to authenticate with Pulsar brokers ||
-|brokerClientAuthenticationParameters| The authentication parameters used by the Pulsar proxy to authenticate with Pulsar brokers ||
-|brokerClientTrustCertsFilePath| The path to trusted certificates used by the Pulsar proxy to authenticate with Pulsar brokers ||
-|superUserRoles| Role names that are treated as "super-users," meaning that they are able to perform all admin tasks ||
-|maxConcurrentInboundConnections| Max concurrent inbound connections. The proxy rejects requests beyond that. |10000|
-|maxConcurrentLookupRequests| Max concurrent lookup requests. The proxy errors out requests beyond that. |50000|
-|tlsEnabledInProxy| Deprecated - use `servicePortTls` and `webServicePortTls` instead. |false|
-|tlsEnabledWithBroker| Whether TLS is enabled when communicating with Pulsar brokers. |false|
-| tlsCertRefreshCheckDurationSec | TLS certificate refresh duration in seconds. If the value is set to 0, the TLS certificate is checked on every new connection. | 300 |
-|tlsCertificateFilePath| Path for the TLS certificate file ||
-|tlsKeyFilePath| Path for the TLS private key file ||
-|tlsTrustCertsFilePath| Path for the trusted TLS certificate PEM file ||
-|tlsHostnameVerificationEnabled| Whether the hostname is validated when the proxy creates a TLS connection with brokers |false|
-|tlsRequireTrustedClientCertOnConnect| Whether client certificates are required for TLS. Connections are rejected if the client certificate isn't trusted. |false|
-|tlsProtocols|Specify the TLS protocols the proxy uses to negotiate during the TLS handshake. Multiple values can be specified, separated by commas. Example: `TLSv1.3`, `TLSv1.2` ||
-|tlsCiphers|Specify the TLS ciphers the proxy uses to negotiate during the TLS handshake. Multiple values can be specified, separated by commas. Example: `TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256`||
-| httpReverseProxyConfigs | HTTP directives to redirect to non-Pulsar services | |
-| httpOutputBufferSize | HTTP output buffer size. The amount of data that is buffered for HTTP requests before it is flushed to the channel. A larger buffer size may result in higher HTTP throughput, though it may take longer for the client to see data. If using HTTP streaming via the reverse proxy, this should be set to the minimum value (1) so that clients see the data as soon as possible. | 32768 |
-| httpNumThreads | Number of threads to use for HTTP request processing| 2 * Runtime.getRuntime().availableProcessors() |
-|tokenSecretKey| Configure the secret key to be used to validate auth tokens. The key can be specified like: `tokenSecretKey=data:;base64,xxxxxxxxx` or `tokenSecretKey=file:///my/secret.key`.
Note: the key file must be DER-encoded.||
-|tokenPublicKey| Configure the public key to be used to validate auth tokens. The key can be specified like: `tokenPublicKey=data:;base64,xxxxxxxxx` or `tokenPublicKey=file:///my/secret.key`. Note: the key file must be DER-encoded.||
-|tokenAuthClaim| Specify the token claim that will be used as the authentication "principal" or "role". The "subject" field will be used if this is left blank ||
-|tokenAudienceClaim| The token audience "claim" name, e.g. "aud". It is used to get the audience from the token. If it is not set, the audience is not verified. ||
-| tokenAudience | The token audience stands for this broker. The field `tokenAudienceClaim` of a valid token needs to contain this parameter.| |
-|haProxyProtocolEnabled | Enable or disable the [HAProxy](http://www.haproxy.org/) protocol. |false|
-
-## ZooKeeper
-
-ZooKeeper handles a broad range of essential configuration- and coordination-related tasks for Pulsar. The default configuration file for ZooKeeper is the `conf/zookeeper.conf` file in your Pulsar installation. The following parameters are available:
-
-|Name|Description|Default|
-|---|---|---|
-|tickTime| The tick is the basic unit of time in ZooKeeper, measured in milliseconds and used to regulate things like heartbeats and timeouts. tickTime is the length of a single tick. |2000|
-|initLimit| The maximum time, in ticks, that the leader ZooKeeper server allows follower ZooKeeper servers to successfully connect and sync. The tick time is set in milliseconds using the tickTime parameter. |10|
-|syncLimit| The maximum time, in ticks, that a follower ZooKeeper server is allowed to sync with other ZooKeeper servers. The tick time is set in milliseconds using the tickTime parameter. |5|
-|dataDir| The location where ZooKeeper stores in-memory database snapshots as well as the transaction log of updates to the database. |data/zookeeper|
-|clientPort| The port on which the ZooKeeper server listens for connections. |2181|
-|admin.enableServer|Whether the ZooKeeper admin server is enabled.|true|
-|admin.serverPort|The port at which the admin server listens.|9990|
-|autopurge.snapRetainCount| In ZooKeeper, auto purge determines how many recent snapshots of the database stored in dataDir to retain within the time interval specified by autopurge.purgeInterval (while deleting the rest). |3|
-|autopurge.purgeInterval| The time interval, in hours, at which the ZooKeeper database purge task is triggered. Setting this to a non-zero number enables auto purge; setting it to 0 disables it. Read the [ZooKeeper Administrator's Guide](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html) before enabling auto purge. |1|
-|forceSync|Requires updates to be synced to the media of the transaction log before finishing processing the update. If this option is set to 'no', ZooKeeper will not require updates to be synced to the media. WARNING: it is not recommended to run a production ZooKeeper cluster with `forceSync` disabled.|yes|
-|maxClientCnxns| The maximum number of client connections. Increase this if you need to handle more ZooKeeper clients. |60|
-
-In addition to the parameters in the table above, configuring ZooKeeper for Pulsar involves adding a `server.N` line to the `conf/zookeeper.conf` file for each node in the ZooKeeper cluster, where `N` is the number of the ZooKeeper node.
Here's an example for a three-node ZooKeeper cluster: - -```properties - -server.1=zk1.us-west.example.com:2888:3888 -server.2=zk2.us-west.example.com:2888:3888 -server.3=zk3.us-west.example.com:2888:3888 - -``` - -> We strongly recommend consulting the [ZooKeeper Administrator's Guide](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html) for a more thorough and comprehensive introduction to ZooKeeper configuration diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/reference-connector-admin.md b/site2/website/versioned_docs/version-2.9.1-deprecated/reference-connector-admin.md deleted file mode 100644 index f1240bf8db17de..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/reference-connector-admin.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -id: reference-connector-admin -title: Connector Admin CLI -sidebar_label: "Connector Admin CLI" -original_id: reference-connector-admin ---- - -> **Important** -> -> For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/). -> - diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/reference-metrics.md b/site2/website/versioned_docs/version-2.9.1-deprecated/reference-metrics.md deleted file mode 100644 index e4e12d89ac5ae5..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/reference-metrics.md +++ /dev/null @@ -1,556 +0,0 @@ ---- -id: reference-metrics -title: Pulsar Metrics -sidebar_label: "Pulsar Metrics" -original_id: reference-metrics ---- - - - -Pulsar exposes the following metrics in Prometheus format. You can monitor your clusters with those metrics. - -* [ZooKeeper](#zookeeper) -* [BookKeeper](#bookkeeper) -* [Broker](#broker) -* [Pulsar Functions](#pulsar-functions) -* [Proxy](#proxy) -* [Pulsar SQL Worker](#pulsar-sql-worker) -* [Pulsar transaction](#pulsar-transaction) - -The following types of metrics are available: - -- [Counter](https://prometheus.io/docs/concepts/metric_types/#counter): a cumulative metric that represents a single monotonically increasing counter. The value increases by default. You can reset the value to zero or restart your cluster. -- [Gauge](https://prometheus.io/docs/concepts/metric_types/#gauge): a metric that represents a single numerical value that can arbitrarily go up and down. -- [Histogram](https://prometheus.io/docs/concepts/metric_types/#histogram): a histogram samples observations (usually things like request durations or response sizes) and counts them in configurable buckets. -- [Summary](https://prometheus.io/docs/concepts/metric_types/#summary): similar to a histogram, a summary samples observations (usually things like request durations and response sizes). While it also provides a total count of observations and a sum of all observed values, it calculates configurable quantiles over a sliding time window. - -## ZooKeeper - -The ZooKeeper metrics are exposed under "/metrics" at port `8000`. You can use a different port by configuring the `metricsProvider.httpPort` in conf/zookeeper.conf. - -### Server metrics - -| Name | Type | Description | -|---|---|---| -| znode_count | Gauge | The number of z-nodes stored. | -| approximate_data_size | Gauge | The approximate size of all of z-nodes stored. | -| num_alive_connections | Gauge | The number of currently lived connections. | -| watch_count | Gauge | The number of watchers registered. | -| ephemerals_count | Gauge | The number of ephemeral z-nodes. 
| - -### Request metrics - -| Name | Type | Description | -|---|---|---| -| request_commit_queued | Counter | The total number of requests already committed by a particular server. | -| updatelatency | Summary | The update requests latency calculated in milliseconds. | -| readlatency | Summary | The read requests latency calculated in milliseconds. | - -## BookKeeper - -The BookKeeper metrics are exposed under "/metrics" at port `8000`. You can change the port by updating `prometheusStatsHttpPort` -in the `bookkeeper.conf` configuration file. - -### Server metrics - -| Name | Type | Description | -|---|---|---| -| bookie_SERVER_STATUS | Gauge | The server status for bookie server.
    • 1: the bookie is running in writable mode.
    • 0: the bookie is running in readonly mode.
    | -| bookkeeper_server_ADD_ENTRY_count | Counter | The total number of ADD_ENTRY requests received at the bookie. The `success` label is used to distinguish successes and failures. | -| bookkeeper_server_READ_ENTRY_count | Counter | The total number of READ_ENTRY requests received at the bookie. The `success` label is used to distinguish successes and failures. | -| bookie_WRITE_BYTES | Counter | The total number of bytes written to the bookie. | -| bookie_READ_BYTES | Counter | The total number of bytes read from the bookie. | -| bookkeeper_server_ADD_ENTRY_REQUEST | Summary | The summary of request latency of ADD_ENTRY requests at the bookie. The `success` label is used to distinguish successes and failures. | -| bookkeeper_server_READ_ENTRY_REQUEST | Summary | The summary of request latency of READ_ENTRY requests at the bookie. The `success` label is used to distinguish successes and failures. | - -### Journal metrics - -| Name | Type | Description | -|---|---|---| -| bookie_journal_JOURNAL_SYNC_count | Counter | The total number of journal fsync operations happening at the bookie. The `success` label is used to distinguish successes and failures. | -| bookie_journal_JOURNAL_QUEUE_SIZE | Gauge | The total number of requests pending in the journal queue. | -| bookie_journal_JOURNAL_FORCE_WRITE_QUEUE_SIZE | Gauge | The total number of force write (fsync) requests pending in the force-write queue. | -| bookie_journal_JOURNAL_CB_QUEUE_SIZE | Gauge | The total number of callbacks pending in the callback queue. | -| bookie_journal_JOURNAL_ADD_ENTRY | Summary | The summary of request latency of adding entries to the journal. | -| bookie_journal_JOURNAL_SYNC | Summary | The summary of fsync latency of syncing data to the journal disk. | - -### Storage metrics - -| Name | Type | Description | -|---|---|---| -| bookie_ledgers_count | Gauge | The total number of ledgers stored in the bookie. | -| bookie_entries_count | Gauge | The total number of entries stored in the bookie. | -| bookie_write_cache_size | Gauge | The bookie write cache size (in bytes). | -| bookie_read_cache_size | Gauge | The bookie read cache size (in bytes). | -| bookie_DELETED_LEDGER_COUNT | Counter | The total number of ledgers deleted since the bookie has started. | -| bookie_ledger_writable_dirs | Gauge | The number of writable directories in the bookie. | - -## Broker - -The broker metrics are exposed under "/metrics" at port `8080`. You can change the port by updating `webServicePort` to a different port -in the `broker.conf` configuration file. - -All the metrics exposed by a broker are labelled with `cluster=${pulsar_cluster}`. The name of Pulsar cluster is the value of `${pulsar_cluster}`, which you have configured in the `broker.conf` file. 
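For a quick look at these metrics, you can scrape the broker's metrics endpoint directly. The sketch below assumes a broker running locally with the default `webServicePort` of `8080`.

```shell
# Fetch the Prometheus-format metrics and show the first few Pulsar series
curl -s http://localhost:8080/metrics | grep '^pulsar_' | head -n 20
```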
- -The following metrics are available for broker: - -- [ZooKeeper](#zookeeper) - - [Server metrics](#server-metrics) - - [Request metrics](#request-metrics) -- [BookKeeper](#bookkeeper) - - [Server metrics](#server-metrics-1) - - [Journal metrics](#journal-metrics) - - [Storage metrics](#storage-metrics) -- [Broker](#broker) - - [Namespace metrics](#namespace-metrics) - - [Replication metrics](#replication-metrics) - - [Topic metrics](#topic-metrics) - - [Replication metrics](#replication-metrics-1) - - [ManagedLedgerCache metrics](#managedledgercache-metrics) - - [ManagedLedger metrics](#managedledger-metrics) - - [LoadBalancing metrics](#loadbalancing-metrics) - - [BundleUnloading metrics](#bundleunloading-metrics) - - [BundleSplit metrics](#bundlesplit-metrics) - - [Subscription metrics](#subscription-metrics) - - [Consumer metrics](#consumer-metrics) - - [Managed ledger bookie client metrics](#managed-ledger-bookie-client-metrics) - - [Token metrics](#token-metrics) - - [Authentication metrics](#authentication-metrics) - - [Connection metrics](#connection-metrics) -- [Pulsar Functions](#pulsar-functions) -- [Proxy](#proxy) -- [Pulsar SQL Worker](#pulsar-sql-worker) -- [Pulsar transaction](#pulsar-transaction) - -### Namespace metrics - -> Namespace metrics are only exposed when `exposeTopicLevelMetricsInPrometheus` is set to `false`. - -All the namespace metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you configured in `broker.conf`. -- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name. - -| Name | Type | Description | -|---|---|---| -| pulsar_topics_count | Gauge | The number of Pulsar topics of the namespace owned by this broker. | -| pulsar_subscriptions_count | Gauge | The number of Pulsar subscriptions of the namespace served by this broker. | -| pulsar_producers_count | Gauge | The number of active producers of the namespace connected to this broker. | -| pulsar_consumers_count | Gauge | The number of active consumers of the namespace connected to this broker. | -| pulsar_rate_in | Gauge | The total message rate of the namespace coming into this broker (messages/second). | -| pulsar_rate_out | Gauge | The total message rate of the namespace going out from this broker (messages/second). | -| pulsar_throughput_in | Gauge | The total throughput of the namespace coming into this broker (bytes/second). | -| pulsar_throughput_out | Gauge | The total throughput of the namespace going out from this broker (bytes/second). | -| pulsar_storage_size | Gauge | The total storage size of the topics in this namespace owned by this broker (bytes). | -| pulsar_storage_logical_size | Gauge | The storage size of topics in the namespace owned by the broker without replicas (in bytes). | -| pulsar_storage_backlog_size | Gauge | The total backlog size of the topics of this namespace owned by this broker (messages). | -| pulsar_storage_offloaded_size | Gauge | The total amount of the data in this namespace offloaded to the tiered storage (bytes). | -| pulsar_storage_write_rate | Gauge | The total message batches (entries) written to the storage for this namespace (message batches / second). | -| pulsar_storage_read_rate | Gauge | The total message batches (entries) read from the storage for this namespace (message batches / second). | -| pulsar_subscription_delayed | Gauge | The total message batches (entries) are delayed for dispatching. 
|
-| pulsar_storage_write_latency_le_* | Histogram | The rate of entries written for this namespace whose storage write latency is below the given threshold.
    Available thresholds:
    • pulsar_storage_write_latency_le_0_5: <= 0.5ms
    • pulsar_storage_write_latency_le_1: <= 1ms
    • pulsar_storage_write_latency_le_5: <= 5ms
    • pulsar_storage_write_latency_le_10: <= 10ms
    • pulsar_storage_write_latency_le_20: <= 20ms
    • pulsar_storage_write_latency_le_50: <= 50ms
    • pulsar_storage_write_latency_le_100: <= 100ms
    • pulsar_storage_write_latency_le_200: <= 200ms
    • pulsar_storage_write_latency_le_1000: <= 1s
    • pulsar_storage_write_latency_le_overflow: > 1s
    |
-| pulsar_entry_size_le_* | Histogram | The rate of entries written for this namespace whose entry size is below the given threshold.
    Available thresholds:
    • pulsar_entry_size_le_128: <= 128 bytes
    • pulsar_entry_size_le_512: <= 512 bytes
    • pulsar_entry_size_le_1_kb: <= 1 KB
    • pulsar_entry_size_le_2_kb: <= 2 KB
    • pulsar_entry_size_le_4_kb: <= 4 KB
    • pulsar_entry_size_le_16_kb: <= 16 KB
    • pulsar_entry_size_le_100_kb: <= 100 KB
    • pulsar_entry_size_le_1_mb: <= 1 MB
    • pulsar_entry_size_le_overflow: > 1 MB
    |
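These metrics are designed to be scraped by Prometheus. A minimal, hypothetical scrape job might look like the following; the broker addresses are placeholders for your own brokers' advertised addresses.

```shell
# Hypothetical example: generate a minimal Prometheus config for the brokers
cat > prometheus.yml <<'EOF'
scrape_configs:
  - job_name: pulsar-broker
    metrics_path: /metrics
    static_configs:
      - targets: ['broker-1:8080', 'broker-2:8080']
EOF
```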
-
-#### Replication metrics
-
-If a namespace is configured to be replicated among multiple Pulsar clusters, the corresponding replication metrics are also exposed when `replicationMetricsEnabled` is enabled.
-
-All the replication metrics are also labelled with `remoteCluster=${pulsar_remote_cluster}`.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_replication_rate_in | Gauge | The total message rate of the namespace replicating from the remote cluster (messages/second). |
-| pulsar_replication_rate_out | Gauge | The total message rate of the namespace replicating to the remote cluster (messages/second). |
-| pulsar_replication_throughput_in | Gauge | The total throughput of the namespace replicating from the remote cluster (bytes/second). |
-| pulsar_replication_throughput_out | Gauge | The total throughput of the namespace replicating to the remote cluster (bytes/second). |
-| pulsar_replication_backlog | Gauge | The total backlog of the namespace replicating to the remote cluster (messages). |
-| pulsar_replication_rate_expired | Gauge | Total rate of messages expired (messages/second). |
-| pulsar_replication_connected_count | Gauge | The number of replication subscribers up and running to replicate to the remote cluster. |
-| pulsar_replication_delay_in_seconds | Gauge | Time in seconds from when a message was produced to when it is about to be replicated. |
-
-### Topic metrics
-
-> Topic metrics are only exposed when `exposeTopicLevelMetricsInPrometheus` is set to `true`.
-
-All the topic metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you configured in `broker.conf`.
-- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.
-- *topic*: `topic=${pulsar_topic}`. `${pulsar_topic}` is the topic name.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_subscriptions_count | Gauge | The number of Pulsar subscriptions of the topic served by this broker. |
-| pulsar_producers_count | Gauge | The number of active producers of the topic connected to this broker. |
-| pulsar_consumers_count | Gauge | The number of active consumers of the topic connected to this broker. |
-| pulsar_rate_in | Gauge | The total message rate of the topic coming into this broker (messages/second). |
-| pulsar_rate_out | Gauge | The total message rate of the topic going out from this broker (messages/second). |
-| pulsar_throughput_in | Gauge | The total throughput of the topic coming into this broker (bytes/second). |
-| pulsar_throughput_out | Gauge | The total throughput of the topic going out from this broker (bytes/second). |
-| pulsar_storage_size | Gauge | The total storage size of this topic owned by this broker (bytes). |
-| pulsar_storage_logical_size | Gauge | The storage size of this topic owned by this broker, excluding replicas (bytes). |
-| pulsar_storage_backlog_size | Gauge | The total backlog size of this topic owned by this broker (messages). |
-| pulsar_storage_offloaded_size | Gauge | The total amount of data in this topic offloaded to tiered storage (bytes). |
-| pulsar_storage_backlog_quota_limit | Gauge | The backlog quota limit of this topic (bytes). |
-| pulsar_storage_write_rate | Gauge | The total message batches (entries) written to the storage for this topic (message batches / second).
|
-| pulsar_storage_read_rate | Gauge | The total message batches (entries) read from the storage for this topic (message batches / second). |
-| pulsar_subscription_delayed | Gauge | The total message batches (entries) that are delayed for dispatching. |
-| pulsar_storage_write_latency_le_* | Histogram | The rate of entries written for this topic whose storage write latency is below the given threshold.
    Available thresholds:
    • pulsar_storage_write_latency_le_0_5: <= 0.5ms
    • pulsar_storage_write_latency_le_1: <= 1ms
    • pulsar_storage_write_latency_le_5: <= 5ms
    • pulsar_storage_write_latency_le_10: <= 10ms
    • pulsar_storage_write_latency_le_20: <= 20ms
    • pulsar_storage_write_latency_le_50: <= 50ms
    • pulsar_storage_write_latency_le_100: <= 100ms
    • pulsar_storage_write_latency_le_200: <= 200ms
    • pulsar_storage_write_latency_le_1000: <= 1s
    • pulsar_storage_write_latency_le_overflow: > 1s
    |
-| pulsar_entry_size_le_* | Histogram | The rate of entries written for this topic whose entry size is below the given threshold.
    Available thresholds:
    • pulsar_entry_size_le_128: <= 128 bytes
    • pulsar_entry_size_le_512: <= 512 bytes
    • pulsar_entry_size_le_1_kb: <= 1 KB
    • pulsar_entry_size_le_2_kb: <= 2 KB
    • pulsar_entry_size_le_4_kb: <= 4 KB
    • pulsar_entry_size_le_16_kb: <= 16 KB
    • pulsar_entry_size_le_100_kb: <= 100 KB
    • pulsar_entry_size_le_1_mb: <= 1 MB
    • pulsar_entry_size_le_overflow: > 1 MB
    |
-| pulsar_in_bytes_total | Counter | The total number of bytes of messages received for this topic. |
-| pulsar_in_messages_total | Counter | The total number of messages received for this topic. |
-| pulsar_out_bytes_total | Counter | The total number of bytes of messages read from this topic. |
-| pulsar_out_messages_total | Counter | The total number of messages read from this topic. |
-| pulsar_compaction_removed_event_count | Gauge | The total number of events removed by compaction. |
-| pulsar_compaction_succeed_count | Gauge | The total number of successful compactions. |
-| pulsar_compaction_failed_count | Gauge | The total number of failed compactions. |
-| pulsar_compaction_duration_time_in_mills | Gauge | The duration of the compaction, in milliseconds. |
-| pulsar_compaction_read_throughput | Gauge | The read throughput of the compaction. |
-| pulsar_compaction_write_throughput | Gauge | The write throughput of the compaction. |
-| pulsar_compaction_latency_le_* | Histogram | The compaction latency below the given threshold.
    Available thresholds:
    • pulsar_compaction_latency_le_0_5: <= 0.5ms
    • pulsar_compaction_latency_le_1: <= 1ms
    • pulsar_compaction_latency_le_5: <= 5ms
    • pulsar_compaction_latency_le_10: <= 10ms
    • pulsar_compaction_latency_le_20: <= 20ms
    • pulsar_compaction_latency_le_50: <= 50ms
    • pulsar_compaction_latency_le_100: <= 100ms
    • pulsar_compaction_latency_le_200: <= 200ms
    • pulsar_compaction_latency_le_1000: <= 1s
    • pulsar_compaction_latency_le_overflow: > 1s
    |
-| pulsar_compaction_compacted_entries_count | Gauge | The total number of compacted entries. |
-| pulsar_compaction_compacted_entries_size |Gauge | The total size of the compacted entries. |
-
-#### Replication metrics
-
-If the namespace that a topic belongs to is configured to be replicated among multiple Pulsar clusters, the corresponding replication metrics are also exposed when `replicationMetricsEnabled` is enabled.
-
-All the replication metrics are labelled with `remoteCluster=${pulsar_remote_cluster}`.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_replication_rate_in | Gauge | The total message rate of the topic replicating from the remote cluster (messages/second). |
-| pulsar_replication_rate_out | Gauge | The total message rate of the topic replicating to the remote cluster (messages/second). |
-| pulsar_replication_throughput_in | Gauge | The total throughput of the topic replicating from the remote cluster (bytes/second). |
-| pulsar_replication_throughput_out | Gauge | The total throughput of the topic replicating to the remote cluster (bytes/second). |
-| pulsar_replication_backlog | Gauge | The total backlog of the topic replicating to the remote cluster (messages). |
-
-### ManagedLedgerCache metrics
-All the ManagedLedgerCache metrics are labelled with the following labels:
-- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file.
-
-| Name | Type | Description |
-| --- | --- | --- |
-| pulsar_ml_cache_evictions | Gauge | The number of cache evictions during the last minute. |
-| pulsar_ml_cache_hits_rate | Gauge | The number of cache hits per second. |
-| pulsar_ml_cache_hits_throughput | Gauge | The amount of data retrieved from the cache, in bytes/s |
-| pulsar_ml_cache_misses_rate | Gauge | The number of cache misses per second |
-| pulsar_ml_cache_misses_throughput | Gauge | The amount of data retrieved from the cache, in bytes/s |
-| pulsar_ml_cache_pool_active_allocations | Gauge | The number of currently active allocations in the direct arena |
-| pulsar_ml_cache_pool_active_allocations_huge | Gauge | The number of currently active huge allocations in the direct arena |
-| pulsar_ml_cache_pool_active_allocations_normal | Gauge | The number of currently active normal allocations in the direct arena |
-| pulsar_ml_cache_pool_active_allocations_small | Gauge | The number of currently active small allocations in the direct arena |
-| pulsar_ml_cache_pool_allocated | Gauge | The total allocated memory of chunk lists in the direct arena |
-| pulsar_ml_cache_pool_used | Gauge | The total used memory of chunk lists in the direct arena |
-| pulsar_ml_cache_used_size | Gauge | The size in bytes used to store the entry payloads |
-| pulsar_ml_count | Gauge | The number of currently opened managed ledgers |
-
-### ManagedLedger metrics
-All the managedLedger metrics are labelled with the following labels:
-- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file.
-- namespace: namespace=${pulsar_namespace}. ${pulsar_namespace} is the namespace name.
-- quantile: quantile=${quantile}. The quantile label only applies to `Histogram` type metrics and represents the threshold for the given bucket.
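The `Histogram` metrics in the table below carry these quantile labels on the metrics endpoint. As a quick, hypothetical check (assuming a broker at `localhost:8080`):

```shell
# List the AddEntryLatency buckets together with their quantile labels
curl -s http://localhost:8080/metrics | grep 'pulsar_ml_AddEntryLatencyBuckets'
```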
-
-### ManagedLedger metrics
-All the managedLedger metrics are labelled with the following labels:
-- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file.
-- namespace: namespace=${pulsar_namespace}. ${pulsar_namespace} is the namespace name.
-- quantile: quantile=${quantile}. The quantile label is only present on `Histogram` type metrics and represents the latency threshold of the given bucket.
-
-| Name | Type | Description |
-| --- | --- | --- |
-| pulsar_ml_AddEntryBytesRate | Gauge | The bytes/s rate of messages added. |
-| pulsar_ml_AddEntryWithReplicasBytesRate | Gauge | The bytes/s rate of messages added, including all stored replicas. |
-| pulsar_ml_AddEntryErrors | Gauge | The number of addEntry requests that failed. |
-| pulsar_ml_AddEntryLatencyBuckets | Histogram | The latency of adding a ledger entry with a given quantile (threshold), including the time spent waiting in the queue on the broker side.
    Available quantile:
    • quantile="0.0_0.5" is AddEntryLatency between (0.0ms, 0.5ms]
    • quantile="0.5_1.0" is AddEntryLatency between (0.5ms, 1.0ms]
    • quantile="1.0_5.0" is AddEntryLatency between (1ms, 5ms]
    • quantile="5.0_10.0" is AddEntryLatency between (5ms, 10ms]
    • quantile="10.0_20.0" is AddEntryLatency between (10ms, 20ms]
    • quantile="20.0_50.0" is AddEntryLatency between (20ms, 50ms]
    • quantile="50.0_100.0" is AddEntryLatency between (50ms, 100ms]
    • quantile="100.0_200.0" is AddEntryLatency between (100ms, 200ms]
    • quantile="200.0_1000.0" is AddEntryLatency between (200ms, 1s]
    | -| pulsar_ml_AddEntryLatencyBuckets_OVERFLOW | Gauge | The number of times the AddEntryLatency is longer than 1 second | -| pulsar_ml_AddEntryMessagesRate | Gauge | The msg/s rate of messages added | -| pulsar_ml_AddEntrySucceed | Gauge | The number of addEntry requests that succeeded | -| pulsar_ml_EntrySizeBuckets | Histogram | The added entry size of a ledger with a given quantile.
    Available quantile:
    • quantile="0.0_128.0" is EntrySize between (0byte, 128byte]
    • quantile="128.0_512.0" is EntrySize between (128byte, 512byte]
    • quantile="512.0_1024.0" is EntrySize between (512byte, 1KB]
    • quantile="1024.0_2048.0" is EntrySize between (1KB, 2KB]
    • quantile="2048.0_4096.0" is EntrySize between (2KB, 4KB]
    • quantile="4096.0_16384.0" is EntrySize between (4KB, 16KB]
    • quantile="16384.0_102400.0" is EntrySize between (16KB, 100KB]
    • quantile="102400.0_1232896.0" is EntrySize between (100KB, 1MB]
    | -| pulsar_ml_EntrySizeBuckets_OVERFLOW |Gauge | The number of times the EntrySize is larger than 1MB | -| pulsar_ml_LedgerSwitchLatencyBuckets | Histogram | The ledger switch latency with a given quantile.
    Available quantile:
    • quantile="0.0_0.5" is LedgerSwitchLatency between (0ms, 0.5ms]
    • quantile="0.5_1.0" is LedgerSwitchLatency between (0.5ms, 1ms]
    • quantile="1.0_5.0" is LedgerSwitchLatency between (1ms, 5ms]
    • quantile="5.0_10.0" is LedgerSwitchLatency between (5ms, 10ms]
    • quantile="10.0_20.0" is LedgerSwitchLatency between (10ms, 20ms]
    • quantile="20.0_50.0" is LedgerSwitchLatency between (20ms, 50ms]
    • quantile="50.0_100.0" is LedgerSwitchLatency between (50ms, 100ms]
    • quantile="100.0_200.0" is LedgerSwitchLatency between (100ms, 200ms]
    • quantile="200.0_1000.0" is LedgerSwitchLatency between (200ms, 1000ms]
    | -| pulsar_ml_LedgerSwitchLatencyBuckets_OVERFLOW | Gauge | The number of times the ledger switch latency is longer than 1 second | -| pulsar_ml_LedgerAddEntryLatencyBuckets | Histogram | The latency for bookie client to persist a ledger entry from broker to BookKeeper service with a given quantile (threshold).
    Available quantile:
    • quantile="0.0_0.5" is LedgerAddEntryLatency between (0.0ms, 0.5ms]
    • quantile="0.5_1.0" is LedgerAddEntryLatency between (0.5ms, 1.0ms]
    • quantile="1.0_5.0" is LedgerAddEntryLatency between (1ms, 5ms]
    • quantile="5.0_10.0" is LedgerAddEntryLatency between (5ms, 10ms]
    • quantile="10.0_20.0" is LedgerAddEntryLatency between (10ms, 20ms]
    • quantile="20.0_50.0" is LedgerAddEntryLatency between (20ms, 50ms]
    • quantile="50.0_100.0" is LedgerAddEntryLatency between (50ms, 100ms]
    • quantile="100.0_200.0" is LedgerAddEntryLatency between (100ms, 200ms]
    • quantile="200.0_1000.0" is LedgerAddEntryLatency between (200ms, 1s]
|
-| pulsar_ml_LedgerAddEntryLatencyBuckets_OVERFLOW | Gauge | The number of times the LedgerAddEntryLatency is longer than 1 second. |
-| pulsar_ml_MarkDeleteRate | Gauge | The rate of mark-delete operations (ops/s). |
-| pulsar_ml_NumberOfMessagesInBacklog | Gauge | The number of backlog messages for all the consumers. |
-| pulsar_ml_ReadEntriesBytesRate | Gauge | The bytes/s rate of messages read. |
-| pulsar_ml_ReadEntriesErrors | Gauge | The number of readEntries requests that failed. |
-| pulsar_ml_ReadEntriesRate | Gauge | The msg/s rate of messages read. |
-| pulsar_ml_ReadEntriesSucceeded | Gauge | The number of readEntries requests that succeeded. |
-| pulsar_ml_StoredMessagesSize | Gauge | The total size of the messages in active ledgers (accounting for the multiple copies stored). |
-
-### Managed cursor acknowledgment state
-
-The acknowledgment state is first persisted to the ledger. If persisting to the ledger fails, the state is persisted to ZooKeeper instead. To track acknowledgment stats, you can configure the metrics for the managed cursor.
-
-All the cursor acknowledgment state metrics are labelled with the following labels:
-
-- namespace: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.
-
-- ledger_name: `ledger_name=${pulsar_ledger_name}`. `${pulsar_ledger_name}` is the ledger name.
-
-- cursor_name: `cursor_name=${pulsar_cursor_name}`. `${pulsar_cursor_name}` is the cursor name.
-
-Name |Type |Description
-|---|---|---
-brk_ml_cursor_persistLedgerSucceed(namespace="", ledger_name="", cursor_name="")|Gauge|The number of acknowledgment states that were persisted to a ledger.
-brk_ml_cursor_persistLedgerErrors(namespace="", ledger_name="", cursor_name="")|Gauge|The number of ledger errors that occurred when acknowledgment states failed to be persisted to the ledger.
-brk_ml_cursor_persistZookeeperSucceed(namespace="", ledger_name="", cursor_name="")|Gauge|The number of acknowledgment states that were persisted to ZooKeeper.
-brk_ml_cursor_persistZookeeperErrors(namespace="", ledger_name="", cursor_name="")|Gauge|The number of ledger errors that occurred when acknowledgment states failed to be persisted to ZooKeeper.
-brk_ml_cursor_nonContiguousDeletedMessagesRange(namespace="", ledger_name="", cursor_name="")|Gauge|The number of non-contiguous deleted message ranges.
-brk_ml_cursor_writeLedgerSize(namespace="", ledger_name="", cursor_name="")|Gauge|The size of data written to the ledger.
-brk_ml_cursor_writeLedgerLogicalSize(namespace="", ledger_name="", cursor_name="")|Gauge|The size of data written to the ledger (not accounting for replicas).
-brk_ml_cursor_readLedgerSize(namespace="", ledger_name="", cursor_name="")|Gauge|The size of data read from the ledger.
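When the ZooKeeper persistence counters start moving, cursors are falling back to the metadata store, which is worth investigating. A minimal way to watch for this, assuming a default local broker:

```bash
# Non-zero error counters here mean acknowledgment state could not be
# persisted to the ledger and fell back to ZooKeeper
# (localhost:8080 assumes a default local broker).
curl -s http://localhost:8080/metrics/ | grep -E 'brk_ml_cursor_persist(Ledger|Zookeeper)Errors'
```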
-
-### LoadBalancing metrics
-All the loadbalancing metrics are labelled with the following labels:
-- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file.
-- broker: broker=${broker}. ${broker} is the IP address of the broker.
-- metric: metric="loadBalancing".
-
-| Name | Type | Description |
-| --- | --- | --- |
-| pulsar_lb_bandwidth_in_usage | Gauge | The broker inbound bandwidth usage. |
-| pulsar_lb_bandwidth_out_usage | Gauge | The broker outbound bandwidth usage. |
-| pulsar_lb_cpu_usage | Gauge | The broker CPU usage. |
-| pulsar_lb_directMemory_usage | Gauge | The broker process direct memory usage. |
-| pulsar_lb_memory_usage | Gauge | The broker process memory usage. |
-
-#### BundleUnloading metrics
-All the bundleUnloading metrics are labelled with the following labels:
-- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file.
-- metric: metric="bundleUnloading".
-
-| Name | Type | Description |
-| --- | --- | --- |
-| pulsar_lb_unload_broker_count | Counter | The number of brokers unloaded in this bundle unloading run. |
-| pulsar_lb_unload_bundle_count | Counter | The number of bundles unloaded in this bundle unloading run. |
-
-#### BundleSplit metrics
-All the bundleSplit metrics are labelled with the following labels:
-- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file.
-- metric: metric="bundlesSplit".
-
-| Name | Type | Description |
-| --- | --- | --- |
-| pulsar_lb_bundles_split_count | Counter | The total count of bundle splits performed by this leader broker. |
-
-### Subscription metrics
-
-> Subscription metrics are only exposed when `exposeTopicLevelMetricsInPrometheus` is set to `true`.
-
-All the subscription metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.
-- *topic*: `topic=${pulsar_topic}`. `${pulsar_topic}` is the topic name.
-- *subscription*: `subscription=${subscription}`. `${subscription}` is the topic subscription name.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_subscription_back_log | Gauge | The total backlog of a subscription (messages). |
-| pulsar_subscription_delayed | Gauge | The total number of messages delayed to be dispatched for a subscription (messages). |
-| pulsar_subscription_msg_rate_redeliver | Gauge | The total message rate for messages being redelivered (messages/second). |
-| pulsar_subscription_unacked_messages | Gauge | The total number of unacknowledged messages of a subscription (messages). |
-| pulsar_subscription_blocked_on_unacked_messages | Gauge | Indicates whether a subscription is blocked on unacknowledged messages or not.
    • 1 means the subscription is blocked waiting for unacknowledged messages to be acked.
    • 0 means the subscription is not blocked waiting for unacknowledged messages to be acked.
    | -| pulsar_subscription_msg_rate_out | Gauge | The total message dispatch rate for a subscription (messages/second). | -| pulsar_subscription_msg_throughput_out | Gauge | The total message dispatch throughput for a subscription (bytes/second). | - -### Consumer metrics - -> Consumer metrics are only exposed when both `exposeTopicLevelMetricsInPrometheus` and `exposeConsumerLevelMetricsInPrometheus` are set to `true`. - -All the consumer metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file. -- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name. -- *topic*: `topic=${pulsar_topic}`. `${pulsar_topic}` is the topic name. -- *subscription*: `subscription=${subscription}`. `${subscription}` is the topic subscription name. -- *consumer_name*: `consumer_name=${consumer_name}`. `${consumer_name}` is the topic consumer name. -- *consumer_id*: `consumer_id=${consumer_id}`. `${consumer_id}` is the topic consumer id. - -| Name | Type | Description | -|---|---|---| -| pulsar_consumer_msg_rate_redeliver | Gauge | The total message rate for message being redelivered (messages/second). | -| pulsar_consumer_unacked_messages | Gauge | The total number of unacknowledged messages of a consumer (messages). | -| pulsar_consumer_blocked_on_unacked_messages | Gauge | Indicate whether a consumer is blocked on unacknowledged messages or not.
    • 1 means the consumer is blocked waiting for unacknowledged messages to be acked.
    • 0 means the consumer is not blocked waiting for unacknowledged messages to be acked.
|
-| pulsar_consumer_msg_rate_out | Gauge | The total message dispatch rate for a consumer (messages/second). |
-| pulsar_consumer_msg_throughput_out | Gauge | The total message dispatch throughput for a consumer (bytes/second). |
-| pulsar_consumer_available_permits | Gauge | The available permits for a consumer. |
-
-### Managed ledger bookie client metrics
-
-All the managed ledger bookie client metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-
-| Name | Type | Description |
-| --- | --- | --- |
-| pulsar_managedLedger_client_bookkeeper_ml_scheduler_completed_tasks_* | Gauge | The number of tasks that the scheduler executor has completed.
    The number of such metrics is determined by the number of scheduler executor threads, which is configured by `managedLedgerNumSchedulerThreads` in `broker.conf`.
    | -| pulsar_managedLedger_client_bookkeeper_ml_scheduler_queue_* | Gauge | The number of tasks queued in the scheduler executor's queue.
    The number of such metrics is determined by the number of scheduler executor threads, which is configured by `managedLedgerNumSchedulerThreads` in `broker.conf`.
    | -| pulsar_managedLedger_client_bookkeeper_ml_scheduler_total_tasks_* | Gauge | The total number of tasks the scheduler executor received.
    The number of such metrics is determined by the number of scheduler executor threads, which is configured by `managedLedgerNumSchedulerThreads` in `broker.conf`.
|
-| pulsar_managedLedger_client_bookkeeper_ml_scheduler_task_execution | Summary | The scheduler task execution latency calculated in milliseconds. |
-| pulsar_managedLedger_client_bookkeeper_ml_scheduler_task_queued | Summary | The scheduler task queued latency calculated in milliseconds. |
-
-### Token metrics
-
-All the token metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_expired_token_count | Counter | The number of expired tokens in Pulsar. |
-| pulsar_expiring_token_minutes | Histogram | The remaining time of expiring tokens in minutes. |
-
-### Authentication metrics
-
-All the authentication metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-- *provider_name*: `provider_name=${provider_name}`. `${provider_name}` is the class name of the authentication provider.
-- *auth_method*: `auth_method=${auth_method}`. `${auth_method}` is the authentication method of the authentication provider.
-- *reason*: `reason=${reason}`. `${reason}` is the reason for the failed authentication operation. (This label is only for `pulsar_authentication_failures_count`.)
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_authentication_success_count| Counter | The number of successful authentication operations. |
-| pulsar_authentication_failures_count | Counter | The number of failed authentication operations. |
-
-### Connection metrics
-
-All the connection metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-- *broker*: `broker=${advertised_address}`. `${advertised_address}` is the advertised address of the broker.
-- *metric*: `metric=${metric}`. `${metric}` is the connection metric collective name.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_active_connections| Gauge | The number of active connections. |
-| pulsar_connection_created_total_count | Gauge | The total number of connections created. |
-| pulsar_connection_create_success_count | Gauge | The number of successfully created connections. |
-| pulsar_connection_create_fail_count | Gauge | The number of failed connection attempts. |
-| pulsar_connection_closed_total_count | Gauge | The total number of closed connections. |
-| pulsar_broker_throttled_connections | Gauge | The number of throttled connections. |
-| pulsar_broker_throttled_connections_global_limit | Gauge | The number of connections throttled because of the global connection limit. |
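To keep an eye on connection churn and throttling, you can filter the scraped metrics for the counters above. A minimal sketch, assuming a default local broker:

```bash
# Check active, failed, and throttled connection counters on the broker
# (localhost:8080 assumes a default local broker).
curl -s http://localhost:8080/metrics/ | \
  grep -E 'pulsar_(active_connections|connection_create_fail_count|broker_throttled_connections)'
```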
-
-## Pulsar Functions
-
-All the Pulsar Functions metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_function_processed_successfully_total | Counter | The total number of messages processed successfully. |
-| pulsar_function_processed_successfully_total_1min | Counter | The total number of messages processed successfully in the last 1 minute. |
-| pulsar_function_system_exceptions_total | Counter | The total number of system exceptions. |
-| pulsar_function_system_exceptions_total_1min | Counter | The total number of system exceptions in the last 1 minute. |
-| pulsar_function_user_exceptions_total | Counter | The total number of user exceptions. |
-| pulsar_function_user_exceptions_total_1min | Counter | The total number of user exceptions in the last 1 minute. |
-| pulsar_function_process_latency_ms | Summary | The process latency in milliseconds. |
-| pulsar_function_process_latency_ms_1min | Summary | The process latency in milliseconds in the last 1 minute. |
-| pulsar_function_last_invocation | Gauge | The timestamp of the last invocation of the function. |
-| pulsar_function_received_total | Counter | The total number of messages received from source. |
-| pulsar_function_received_total_1min | Counter | The total number of messages received from source in the last 1 minute. |
-pulsar_function_user_metric_ | Summary|The user-defined metrics.
-
-## Connectors
-
-All the Pulsar connector metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.
-
-Connector metrics contain **source** metrics and **sink** metrics.
-
-- **Source** metrics
-
-  | Name | Type | Description |
-  |---|---|---|
-  pulsar_source_written_total|Counter|The total number of records written to a Pulsar topic.
-  pulsar_source_written_total_1min|Counter|The total number of records written to a Pulsar topic in the last 1 minute.
-  pulsar_source_received_total|Counter|The total number of records received from source.
-  pulsar_source_received_total_1min|Counter|The total number of records received from source in the last 1 minute.
-  pulsar_source_last_invocation|Gauge|The timestamp of the last invocation of the source.
-  pulsar_source_source_exception|Gauge|The exception from a source.
-  pulsar_source_source_exceptions_total|Counter|The total number of source exceptions.
-  pulsar_source_source_exceptions_total_1min |Counter|The total number of source exceptions in the last 1 minute.
-  pulsar_source_system_exception|Gauge|The exception from system code.
-  pulsar_source_system_exceptions_total|Counter|The total number of system exceptions.
-  pulsar_source_system_exceptions_total_1min|Counter|The total number of system exceptions in the last 1 minute.
-  pulsar_source_user_metric_ | Summary|The user-defined metrics.
-
-- **Sink** metrics
-
-  | Name | Type | Description |
-  |---|---|---|
-  pulsar_sink_written_total|Counter| The total number of records processed by a sink.
-  pulsar_sink_written_total_1min|Counter| The total number of records processed by a sink in the last 1 minute.
-  pulsar_sink_received_total_1min|Counter| The total number of messages that a sink has received from Pulsar topics in the last 1 minute.
-  pulsar_sink_received_total|Counter| The total number of records that a sink has received from Pulsar topics.
-  pulsar_sink_last_invocation|Gauge|The timestamp of the last invocation of the sink.
-  pulsar_sink_sink_exception|Gauge|The exception from a sink.
-  pulsar_sink_sink_exceptions_total|Counter|The total number of sink exceptions.
-  pulsar_sink_sink_exceptions_total_1min |Counter|The total number of sink exceptions in the last 1 minute.
-  pulsar_sink_system_exception|Gauge|The exception from system code.
-  pulsar_sink_system_exceptions_total|Counter|The total number of system exceptions.
-  pulsar_sink_system_exceptions_total_1min|Counter|The total number of system exceptions in the last 1 minute.
-  pulsar_sink_user_metric_ | Summary|The user-defined metrics.
-
-## Proxy
-
-All the proxy metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-- *kubernetes_pod_name*: `kubernetes_pod_name=${kubernetes_pod_name}`. `${kubernetes_pod_name}` is the Kubernetes pod name.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_proxy_active_connections | Gauge | Number of connections currently active in the proxy. |
-| pulsar_proxy_new_connections | Counter | Counter of connections being opened in the proxy. |
-| pulsar_proxy_rejected_connections | Counter | Counter for connections rejected due to throttling. |
-| pulsar_proxy_binary_ops | Counter | Counter of proxy operations. |
-| pulsar_proxy_binary_bytes | Counter | Counter of proxy bytes. |
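The proxy serves its own Prometheus endpoint on its web service port, so these counters can be checked the same way as the broker's. A minimal sketch, assuming a proxy running locally with the default web service port 8080 and metrics enabled:

```bash
# Scrape the proxy's Prometheus endpoint and keep the proxy counters
# (localhost:8080 assumes a default local proxy with metrics enabled).
curl -s http://localhost:8080/metrics/ | grep 'pulsar_proxy_'
```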
-
-## Pulsar SQL Worker
-
-| Name | Type | Description |
-|---|---|---|
-| split_bytes_read | Counter | Number of bytes read from BookKeeper. |
-| split_num_messages_deserialized | Counter | Number of messages deserialized. |
-| split_num_record_deserialized | Counter | Number of records deserialized. |
-| split_bytes_read_per_query | Summary | Total number of bytes read per query. |
-| split_entry_deserialize_time | Summary | Time spent on deserializing entries. |
-| split_entry_deserialize_time_per_query | Summary | Time spent on deserializing entries per query. |
-| split_entry_queue_dequeue_wait_time | Summary | Time spent waiting to get an entry from the entry queue because it is empty. |
-| split_entry_queue_dequeue_wait_time_per_query | Summary | Total time spent waiting to get an entry from the entry queue per query. |
-| split_message_queue_dequeue_wait_time_per_query | Summary | Time spent waiting to dequeue from the message queue because it is empty, per query. |
-| split_message_queue_enqueue_wait_time | Summary | Time spent waiting to enqueue to the message queue because it is full. |
-| split_message_queue_enqueue_wait_time_per_query | Summary | Time spent waiting to enqueue to the message queue because it is full, per query. |
-| split_num_entries_per_batch | Summary | Number of entries per batch. |
-| split_num_entries_per_query | Summary | Number of entries per query. |
-| split_num_messages_deserialized_per_entry | Summary | Number of messages deserialized per entry. |
-| split_num_messages_deserialized_per_query | Summary | Number of messages deserialized per query. |
-| split_read_attempts | Summary | Number of read attempts (fail if queues are full). |
-| split_read_attempts_per_query | Summary | Number of read attempts per query. |
-| split_read_latency_per_batch | Summary | Latency of reads per batch. |
-| split_read_latency_per_query | Summary | Total read latency per query. |
-| split_record_deserialize_time | Summary | Time spent on deserializing messages to records. For example, Avro, JSON, and so on. |
-| split_record_deserialize_time_per_query | Summary | Time spent on deserializing messages to records per query. |
-| split_total_execution_time | Summary | The total execution time. |
-
-## Pulsar transaction
-
-All the transaction metrics are labelled with the following labels:
-
-- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
-- *coordinator_id*: `coordinator_id=${coordinator_id}`. `${coordinator_id}` is the coordinator id.
-
-| Name | Type | Description |
-|---|---|---|
-| pulsar_txn_active_count | Gauge | Number of active transactions. |
-| pulsar_txn_created_count | Counter | Number of created transactions. |
-| pulsar_txn_committed_count | Counter | Number of committed transactions. |
-| pulsar_txn_aborted_count | Counter | Number of aborted transactions of this coordinator. |
-| pulsar_txn_timeout_count | Counter | Number of timed-out transactions. |
-| pulsar_txn_append_log_count | Counter | Number of appended transaction logs. |
-| pulsar_txn_execution_latency_le_* | Histogram | Transaction execution latency.
    Available latencies are as below:
    • latency="10" is TransactionExecutionLatency between (0ms, 10ms]
    • latency="20" is TransactionExecutionLatency between (10ms, 20ms]
    • latency="50" is TransactionExecutionLatency between (20ms, 50ms]
    • latency="100" is TransactionExecutionLatency between (50ms, 100ms]
    • latency="500" is TransactionExecutionLatency between (100ms, 500ms]
    • latency="1000" is TransactionExecutionLatency between (500ms, 1000ms]
    • latency="5000" is TransactionExecutionLatency between (1s, 5s]
    • latency="15000" is TransactionExecutionLatency between (5s, 15s]
    • latency="30000" is TransactionExecutionLatency between (15s, 30s]
    • latency="60000" is TransactionExecutionLatency between (30s, 60s]
    • latency="300000" is TransactionExecutionLatency between (1m, 5m]
    • latency="1500000" is TransactionExecutionLatency between (5m, 15m]
    • latency="3000000" is TransactionExecutionLatency between (15m, 30m]
    • latency="overflow" is TransactionExecutionLatency between (30m, ∞]
    |
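To watch a single coordinator, you can filter the scraped metrics on the coordinator_id label. A minimal sketch, assuming a default local broker and coordinator id 0 (the id is just an example):

```bash
# Show the active-transaction gauge for transaction coordinator 0
# (localhost:8080 and coordinator_id "0" are example values).
curl -s http://localhost:8080/metrics/ | grep 'pulsar_txn_active_count' | grep 'coordinator_id="0"'
```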
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/reference-pulsar-admin.md b/site2/website/versioned_docs/version-2.9.1-deprecated/reference-pulsar-admin.md
deleted file mode 100644
index e306289a8798a5..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/reference-pulsar-admin.md
+++ /dev/null
@@ -1,3297 +0,0 @@
----
-id: reference-pulsar-admin
-title: Pulsar admin CLI
-sidebar_label: "Pulsar Admin CLI"
-original_id: reference-pulsar-admin
----
-
-> **Important**
->
-> This page is deprecated and no longer updated. For the latest and complete information about `pulsar-admin`, including commands, flags, descriptions, and more, see the [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/)
-
-The `pulsar-admin` tool enables you to manage Pulsar installations, including clusters, brokers, namespaces, tenants, and more.
-
-Usage
-
-```bash
-
-$ pulsar-admin command
-
-```
-
-Commands
-* `broker-stats`
-* `brokers`
-* `clusters`
-* `functions`
-* `functions-worker`
-* `namespaces`
-* `ns-isolation-policy`
-* `sources`
-
-  For more information, see [here](io-cli.md#sources)
-* `sinks`
-
-  For more information, see [here](io-cli.md#sinks)
-* `topics`
-* `tenants`
-* `resource-quotas`
-* `schemas`
-
-## `broker-stats`
-
-Operations to collect broker statistics
-
-```bash
-
-$ pulsar-admin broker-stats subcommand
-
-```
-
-Subcommands
-* `allocator-stats`
-* `topics(destinations)`
-* `mbeans`
-* `monitoring-metrics`
-* `load-report`
-
-
-### `allocator-stats`
-
-Dump allocator stats
-
-Usage
-
-```bash
-
-$ pulsar-admin broker-stats allocator-stats allocator-name
-
-```
-
-### `topics(destinations)`
-
-Dump topic stats
-
-Usage
-
-```bash
-
-$ pulsar-admin broker-stats topics options
-
-```
-
-Options
-|Flag|Description|Default|
-|---|---|---|
-|`-i`, `--indent`|Indent JSON output|false|
-
-### `mbeans`
-
-Dump MBean stats
-
-Usage
-
-```bash
-
-$ pulsar-admin broker-stats mbeans options
-
-```
-
-Options
-|Flag|Description|Default|
-|---|---|---|
-|`-i`, `--indent`|Indent JSON output|false|
-
-
-### `monitoring-metrics`
-
-Dump metrics for monitoring
-
-Usage
-
-```bash
-
-$ pulsar-admin broker-stats monitoring-metrics options
-
-```
-
-Options
-|Flag|Description|Default|
-|---|---|---|
-|`-i`, `--indent`|Indent JSON output|false|
-
-
-### `load-report`
-
-Dump broker load-report
-
-Usage
-
-```bash
-
-$ pulsar-admin broker-stats load-report
-
-```
-
-## `brokers`
-
-Operations about brokers
-
-```bash
-
-$ pulsar-admin brokers subcommand
-
-```
-
-Subcommands
-* `list`
-* `leader-broker`
-* `namespaces`
-* `update-dynamic-config`
-* `list-dynamic-config`
-* `delete-dynamic-config`
-* `get-all-dynamic-config`
-* `get-internal-config`
-* `get-runtime-config`
-* `healthcheck`
-
-### `list`
-List active brokers of the cluster
-
-Usage
-
-```bash
-
-$ pulsar-admin brokers list cluster-name
-
-```
-
-### `leader-broker`
-Get the information of the leader broker
-
-Usage
-
-```bash
-
-$ pulsar-admin brokers leader-broker
-
-```
-
-### `namespaces`
-List namespaces owned by the broker
-
-Usage
-
-```bash
-
-$ pulsar-admin brokers namespaces cluster-name options
-
-```
-
-Options
-|Flag|Description|Default|
-|---|---|---|
-|`--url`|The URL for the broker||
-
-
-### `update-dynamic-config`
-Update a broker's dynamic service configuration
-
-Usage
-
-```bash
-
-$ pulsar-admin brokers update-dynamic-config options
-
-```
-
-Options
-|Flag|Description|Default|
-|---|---|---|
-|`--config`|Service configuration parameter name||
-|`--value`|Value for the configuration parameter specified with the `--config` flag||
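For example, a sketch of overriding one dynamic setting at runtime (the parameter name below is illustrative; run `list-dynamic-config` to see what your broker actually supports):

```bash
# Raise the per-topic dispatch throttling rate without restarting the broker.
# dispatchThrottlingRatePerTopicInMsg is an example parameter; confirm it with
# `pulsar-admin brokers list-dynamic-config` first.
$ pulsar-admin brokers update-dynamic-config \
  --config dispatchThrottlingRatePerTopicInMsg \
  --value 2000
```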
-
-
-### `list-dynamic-config`
-Get the list of updatable configuration names
-
-Usage
-
-```bash
-
-$ pulsar-admin brokers list-dynamic-config
-
-```
-
-### `delete-dynamic-config`
-Delete the dynamic service configuration of a broker
-
-Usage
-
-```bash
-
-$ pulsar-admin brokers delete-dynamic-config options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--config`|Service configuration parameter name||
-
-
-### `get-all-dynamic-config`
-Get all overridden dynamic-configuration values
-
-Usage
-
-```bash
-
-$ pulsar-admin brokers get-all-dynamic-config
-
-```
-
-### `get-internal-config`
-Get internal configuration information
-
-Usage
-
-```bash
-
-$ pulsar-admin brokers get-internal-config
-
-```
-
-### `get-runtime-config`
-Get runtime configuration values
-
-Usage
-
-```bash
-
-$ pulsar-admin brokers get-runtime-config
-
-```
-
-### `healthcheck`
-Run a health check against the broker
-
-Usage
-
-```bash
-
-$ pulsar-admin brokers healthcheck
-
-```
-
-## `clusters`
-Operations about clusters
-
-Usage
-
-```bash
-
-$ pulsar-admin clusters subcommand
-
-```
-
-Subcommands
-* `get`
-* `create`
-* `update`
-* `delete`
-* `list`
-* `update-peer-clusters`
-* `get-peer-clusters`
-* `get-failure-domain`
-* `create-failure-domain`
-* `update-failure-domain`
-* `delete-failure-domain`
-* `list-failure-domains`
-
-
-### `get`
-Get the configuration data for the specified cluster
-
-Usage
-
-```bash
-
-$ pulsar-admin clusters get cluster-name
-
-```
-
-### `create`
-Provisions a new cluster. This operation requires Pulsar super-user privileges.
-
-Usage
-
-```bash
-
-$ pulsar-admin clusters create cluster-name options
-
-```
-
-Options
-|Flag|Description|Default|
-|---|---|---|
-|`--broker-url`|The URL for the broker service||
-|`--broker-url-secure`|The broker service URL for a secure connection||
-|`--url`|service-url||
-|`--url-secure`|service-url for secure connection||
-
-
-### `update`
-Update the configuration for a cluster
-
-Usage
-
-```bash
-
-$ pulsar-admin clusters update cluster-name options
-
-```
-
-Options
-|Flag|Description|Default|
-|---|---|---|
-|`--broker-url`|The URL for the broker service||
-|`--broker-url-secure`|The broker service URL for a secure connection||
-|`--url`|service-url||
-|`--url-secure`|service-url for secure connection||
-
-
-### `delete`
-Deletes an existing cluster
-
-Usage
-
-```bash
-
-$ pulsar-admin clusters delete cluster-name
-
-```
-
-### `list`
-List the existing clusters
-
-Usage
-
-```bash
-
-$ pulsar-admin clusters list
-
-```
-
-### `update-peer-clusters`
-Update peer cluster names
-
-Usage
-
-```bash
-
-$ pulsar-admin clusters update-peer-clusters cluster-name options
-
-```
-
-Options
-|Flag|Description|Default|
-|---|---|---|
-|`--peer-clusters`|Comma-separated peer cluster names (pass an empty string "" to delete the list)||
-
-### `get-peer-clusters`
-Get the list of peer clusters
-
-Usage
-
-```bash
-
-$ pulsar-admin clusters get-peer-clusters
-
-```
-
-### `get-failure-domain`
-Get the configured brokers of a failure domain
-
-Usage
-
-```bash
-
-$ pulsar-admin clusters get-failure-domain cluster-name options
-
-```
-
-Options
-|Flag|Description|Default|
-|---|---|---|
-|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster||
-
-### `create-failure-domain`
-Create a new failure domain for a cluster (updates it if already created)
-
-Usage
-
-```bash
-
-$ pulsar-admin clusters create-failure-domain cluster-name options
-
-```
-
-Options
-|Flag|Description|Default|
-|---|---|---|
-|`--broker-list`|Comma separated broker list|| -|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster|| - -### `update-failure-domain` -Update failure domain for a cluster (creates a new one if not exist) - -Usage - -```bash - -$ pulsar-admin clusters update-failure-domain cluster-name options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--broker-list`|Comma separated broker list|| -|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster|| - -### `delete-failure-domain` -Delete an existing failure domain - -Usage - -```bash - -$ pulsar-admin clusters delete-failure-domain cluster-name options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster|| - -### `list-failure-domains` -List the existing failure domains for a cluster - -Usage - -```bash - -$ pulsar-admin clusters list-failure-domains cluster-name - -``` - -## `functions` - -A command-line interface for Pulsar Functions - -Usage - -```bash - -$ pulsar-admin functions subcommand - -``` - -Subcommands -* `localrun` -* `create` -* `delete` -* `update` -* `get` -* `restart` -* `stop` -* `start` -* `status` -* `stats` -* `list` -* `querystate` -* `putstate` -* `trigger` - - -### `localrun` -Run the Pulsar Function locally (rather than deploying it to the Pulsar cluster) - - -Usage - -```bash - -$ pulsar-admin functions localrun options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--cpu`|The cpu in cores that need to be allocated per function instance(applicable only to docker runtime)|| -|`--ram`|The ram in bytes that need to be allocated per function instance(applicable only to process/docker runtime)|| -|`--disk`|The disk in bytes that need to be allocated per function instance(applicable only to docker runtime)|| -|`--auto-ack`|Whether or not the framework will automatically acknowledge messages|| -|`--subs-name`|Pulsar source subscription name if user wants a specific subscription-name for input-topic consumer|| -|`--broker-service-url `|The URL of the Pulsar broker|| -|`--classname`|The function's class name|| -|`--custom-serde-inputs`|The map of input topics to SerDe class names (as a JSON string)|| -|`--custom-schema-inputs`|The map of input topics to Schema class names (as a JSON string)|| -|`--client-auth-params`|Client authentication param|| -|`--client-auth-plugin`|Client authentication plugin using which function-process can connect to broker|| -|`--function-config-file`|The path to a YAML config file specifying the function's configuration|| -|`--hostname-verification-enabled`|Enable hostname verification|false| -|`--instance-id-offset`|Start the instanceIds from this offset|0| -|`--inputs`|The function's input topic or topics (multiple topics can be specified as a comma-separated list)|| -|`--log-topic`|The topic to which the function's logs are produced|| -|`--jar`|Path to the jar file for the function (if the function is written in Java). 
It also supports URL paths [http/https/file (the file protocol assumes that the file already exists on the worker host)/function (package URL from the packages management service)] from which the worker can download the package.||
-|`--name`|The function's name||
-|`--namespace`|The function's namespace||
-|`--output`|The function's output topic (If none is specified, no output is written)||
-|`--output-serde-classname`|The SerDe class to be used for messages output by the function||
-|`--parallelism`|The function’s parallelism factor, i.e. the number of instances of the function to run|1|
-|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the function. Possible Values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]|ATLEAST_ONCE|
-|`--py`|Path to the main Python file/Python wheel file for the function (if the function is written in Python). It also supports URL paths [http/https/file (the file protocol assumes that the file already exists on the worker host)/function (package URL from the packages management service)] from which the worker can download the package.||
-|`--go`|Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL paths [http/https/file (the file protocol assumes that the file already exists on the worker host)/function (package URL from the packages management service)] from which the worker can download the package.||
-|`--schema-type`|The builtin schema type or custom schema class name to be used for messages output by the function||
-|`--sliding-interval-count`|The number of messages after which the window slides||
-|`--sliding-interval-duration-ms`|The time duration after which the window slides||
-|`--state-storage-service-url`|The URL for the state storage service. By default, it is set to the service URL of Apache BookKeeper. This service URL must be added manually when the Pulsar Function runs locally.||
-|`--tenant`|The function’s tenant||
-|`--topics-pattern`|The topic pattern to consume from a list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add the SerDe class name for a pattern in --custom-serde-inputs (supported for Java functions only)||
-|`--user-config`|User-defined config key/values||
-|`--window-length-count`|The number of messages per window||
-|`--window-length-duration-ms`|The time duration of the window in milliseconds||
-|`--dead-letter-topic`|The topic where all messages which could not be processed successfully are sent||
-|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function||
-|`--max-message-retries`|How many times to try to process a message before giving up||
-|`--retain-ordering`|Function consumes and processes messages in order||
-|`--retain-key-ordering`|Function consumes and processes messages in key order||
-|`--timeout-ms`|The message timeout in milliseconds||
-|`--tls-allow-insecure`|Allow insecure TLS connections|false|
-|`--tls-trust-cert-path`|The TLS trust cert file path||
-|`--use-tls`|Use a TLS connection|false|
-|`--producer-config`| The custom producer configuration (as a JSON string) | |
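As a concrete sketch of running a Java function locally against a local broker (all names, paths, and topics below are illustrative):

```bash
# Run a function locally rather than deploying it to the cluster.
# The jar, class name, and topic names are example values.
$ pulsar-admin functions localrun \
  --jar my-functions.jar \
  --classname org.example.ExclamationFunction \
  --inputs persistent://public/default/in \
  --output persistent://public/default/out \
  --broker-service-url pulsar://localhost:6650
```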
-
-
-### `create`
-Create a Pulsar Function in cluster mode (i.e. deploy it on a Pulsar cluster)
-
-Usage
-
-```
-
-$ pulsar-admin functions create options
-
-```
-
-Options
-|Flag|Description|Default|
-|---|---|---|
-|`--cpu`|The CPU in cores that needs to be allocated per function instance (applicable only to the docker runtime)||
-|`--ram`|The RAM in bytes that needs to be allocated per function instance (applicable only to the process/docker runtime)||
-|`--disk`|The disk in bytes that needs to be allocated per function instance (applicable only to the docker runtime)||
-|`--auto-ack`|Whether or not the framework will automatically acknowledge messages||
-|`--subs-name`|Pulsar source subscription name if the user wants a specific subscription name for the input-topic consumer||
-|`--classname`|The function's class name||
-|`--custom-serde-inputs`|The map of input topics to SerDe class names (as a JSON string)||
-|`--custom-schema-inputs`|The map of input topics to Schema class names (as a JSON string)||
-|`--function-config-file`|The path to a YAML config file specifying the function's configuration||
-|`--inputs`|The function's input topic or topics (multiple topics can be specified as a comma-separated list)||
-|`--log-topic`|The topic to which the function's logs are produced||
-|`--jar`|Path to the jar file for the function (if the function is written in Java). It also supports URL paths [http/https/file (the file protocol assumes that the file already exists on the worker host)/function (package URL from the packages management service)] from which the worker can download the package.||
-|`--name`|The function's name||
-|`--namespace`|The function’s namespace||
-|`--output`|The function's output topic (If none is specified, no output is written)||
-|`--output-serde-classname`|The SerDe class to be used for messages output by the function||
-|`--parallelism`|The function’s parallelism factor, i.e. the number of instances of the function to run|1|
-|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the function. Possible Values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]|ATLEAST_ONCE|
-|`--py`|Path to the main Python file/Python wheel file for the function (if the function is written in Python). It also supports URL paths [http/https/file (the file protocol assumes that the file already exists on the worker host)/function (package URL from the packages management service)] from which the worker can download the package.||
-|`--go`|Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL paths [http/https/file (the file protocol assumes that the file already exists on the worker host)/function (package URL from the packages management service)] from which the worker can download the package.||
-|`--schema-type`|The builtin schema type or custom schema class name to be used for messages output by the function||
-|`--sliding-interval-count`|The number of messages after which the window slides||
-|`--sliding-interval-duration-ms`|The time duration after which the window slides||
-|`--tenant`|The function’s tenant||
-|`--topics-pattern`|The topic pattern to consume from a list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive.
Add SerDe class name for a pattern in --custom-serde-inputs (supported for java fun only)|| -|`--user-config`|User-defined config key/values|| -|`--window-length-count`|The number of messages per window|| -|`--window-length-duration-ms`|The time duration of the window in milliseconds|| -|`--dead-letter-topic`|The topic where all messages which could not be processed|| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--max-message-retries`|How many times should we try to process a message before giving up|| -|`--retain-ordering`|Function consumes and processes messages in order|| -|`--retain-key-ordering`|Function consumes and processes messages in key order|| -|`--timeout-ms`|The message timeout in milliseconds|| -|`--producer-config`| The custom producer configuration (as a JSON string) | | - - -### `delete` -Delete a Pulsar Function that's running on a Pulsar cluster - -Usage - -```bash - -$ pulsar-admin functions delete options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `update` -Update a Pulsar Function that's been deployed to a Pulsar cluster - -Usage - -```bash - -$ pulsar-admin functions update options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--cpu`|The cpu in cores that need to be allocated per function instance(applicable only to docker runtime)|| -|`--ram`|The ram in bytes that need to be allocated per function instance(applicable only to process/docker runtime)|| -|`--disk`|The disk in bytes that need to be allocated per function instance(applicable only to docker runtime)|| -|`--auto-ack`|Whether or not the framework will automatically acknowledge messages|| -|`--subs-name`|Pulsar source subscription name if user wants a specific subscription-name for input-topic consumer|| -|`--classname`|The function's class name|| -|`--custom-serde-inputs`|The map of input topics to SerDe class names (as a JSON string)|| -|`--custom-schema-inputs`|The map of input topics to Schema class names (as a JSON string)|| -|`--function-config-file`|The path to a YAML config file specifying the function's configuration|| -|`--inputs`|The function's input topic or topics (multiple topics can be specified as a comma-separated list)|| -|`--log-topic`|The topic to which the function's logs are produced|| -|`--jar`|Path to the jar file for the function (if the function is written in Java). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--name`|The function's name|| -|`--namespace`|The function’s namespace|| -|`--output`|The function's output topic (If none is specified, no output is written)|| -|`--output-serde-classname`|The SerDe class to be used for messages output by the function|| -|`--parallelism`|The function’s parallelism factor, i.e. the number of instances of the function to run|1| -|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the function. Possible Values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]|ATLEAST_ONCE| -|`--py`|Path to the main Python file/Python Wheel file for the function (if the function is written in Python). 
It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--go`|Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--schema-type`|The builtin schema type or custom schema class name to be used for messages output by the function|| -|`--sliding-interval-count`|The number of messages after which the window slides|| -|`--sliding-interval-duration-ms`|The time duration after which the window slides|| -|`--tenant`|The function’s tenant|| -|`--topics-pattern`|The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (supported for java fun only)|| -|`--user-config`|User-defined config key/values|| -|`--window-length-count`|The number of messages per window|| -|`--window-length-duration-ms`|The time duration of the window in milliseconds|| -|`--dead-letter-topic`|The topic where all messages which could not be processed|| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--max-message-retries`|How many times should we try to process a message before giving up|| -|`--retain-ordering`|Function consumes and processes messages in order|| -|`--retain-key-ordering`|Function consumes and processes messages in key order|| -|`--timeout-ms`|The message timeout in milliseconds|| -|`--producer-config`| The custom producer configuration (as a JSON string) | | - - -### `get` -Fetch information about a Pulsar Function - -Usage - -```bash - -$ pulsar-admin functions get options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `restart` -Restart function instance - -Usage - -```bash - -$ pulsar-admin functions restart options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--instance-id`|The function instanceId (restart all instances if instance-id is not provided)|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `stop` -Stops function instance - -Usage - -```bash - -$ pulsar-admin functions stop options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--instance-id`|The function instanceId (stop all instances if instance-id is not provided)|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `start` -Starts a stopped function instance - -Usage - -```bash - -$ pulsar-admin functions start options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--instance-id`|The function instanceId (start all instances if instance-id is not provided)|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's 
tenant|| - - -### `status` -Check the current status of a Pulsar Function - -Usage - -```bash - -$ pulsar-admin functions status options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--instance-id`|The function instanceId (Get-status of all instances if instance-id is not provided)|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `stats` -Get the current stats of a Pulsar Function - -Usage - -```bash - -$ pulsar-admin functions stats options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--instance-id`|The function instanceId (Get-stats of all instances if instance-id is not provided)|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - -### `list` -List all of the Pulsar Functions running under a specific tenant and namespace - -Usage - -```bash - -$ pulsar-admin functions list options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `querystate` -Fetch the current state associated with a Pulsar Function running in cluster mode - -Usage - -```bash - -$ pulsar-admin functions querystate options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`-k`, `--key`|The key for the state you want to fetch|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| -|`-w`, `--watch`|Watch for changes in the value associated with a key for a Pulsar Function|false| - -### `putstate` -Put a key/value pair to the state associated with a Pulsar Function - -Usage - -```bash - -$ pulsar-admin functions putstate options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the Pulsar Function|| -|`--name`|The name of a Pulsar Function|| -|`--namespace`|The namespace of a Pulsar Function|| -|`--tenant`|The tenant of a Pulsar Function|| -|`-s`, `--state`|The FunctionState that needs to be put|| - -### `trigger` -Triggers the specified Pulsar Function with a supplied value - -Usage - -```bash - -$ pulsar-admin functions trigger options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| -|`--topic`|The specific topic name that the function consumes from that you want to inject the data to|| -|`--trigger-file`|The path to the file that contains the data with which you'd like to trigger the function|| -|`--trigger-value`|The value with which you want to trigger the function|| - - -## `functions-worker` -Operations to collect function-worker statistics - -```bash - -$ pulsar-admin functions-worker subcommand - -``` - -Subcommands - -* `function-stats` -* `get-cluster` -* `get-cluster-leader` -* `get-function-assignments` -* `monitoring-metrics` - -### `function-stats` - -Dump all functions stats running on this broker - -Usage - -```bash - -$ pulsar-admin functions-worker function-stats - -``` - -### `get-cluster` - -Get all workers belonging to this cluster - -Usage - -```bash - -$ pulsar-admin functions-worker 
get-cluster - -``` - -### `get-cluster-leader` - -Get the leader of the worker cluster - -Usage - -```bash - -$ pulsar-admin functions-worker get-cluster-leader - -``` - -### `get-function-assignments` - -Get the assignments of the functions across the worker cluster - -Usage - -```bash - -$ pulsar-admin functions-worker get-function-assignments - -``` - -### `monitoring-metrics` - -Dump metrics for Monitoring - -Usage - -```bash - -$ pulsar-admin functions-worker monitoring-metrics - -``` - -## `namespaces` - -Operations for managing namespaces - -```bash - -$ pulsar-admin namespaces subcommand - -``` - -Subcommands -* `list` -* `topics` -* `policies` -* `create` -* `delete` -* `set-deduplication` -* `set-auto-topic-creation` -* `remove-auto-topic-creation` -* `set-auto-subscription-creation` -* `remove-auto-subscription-creation` -* `permissions` -* `grant-permission` -* `revoke-permission` -* `grant-subscription-permission` -* `revoke-subscription-permission` -* `set-clusters` -* `get-clusters` -* `get-backlog-quotas` -* `set-backlog-quota` -* `remove-backlog-quota` -* `get-persistence` -* `set-persistence` -* `get-message-ttl` -* `set-message-ttl` -* `remove-message-ttl` -* `get-anti-affinity-group` -* `set-anti-affinity-group` -* `get-anti-affinity-namespaces` -* `delete-anti-affinity-group` -* `get-retention` -* `set-retention` -* `unload` -* `split-bundle` -* `set-dispatch-rate` -* `get-dispatch-rate` -* `set-replicator-dispatch-rate` -* `get-replicator-dispatch-rate` -* `set-subscribe-rate` -* `get-subscribe-rate` -* `set-subscription-dispatch-rate` -* `get-subscription-dispatch-rate` -* `clear-backlog` -* `unsubscribe` -* `set-encryption-required` -* `set-delayed-delivery` -* `get-delayed-delivery` -* `set-subscription-auth-mode` -* `get-max-producers-per-topic` -* `set-max-producers-per-topic` -* `get-max-consumers-per-topic` -* `set-max-consumers-per-topic` -* `get-max-consumers-per-subscription` -* `set-max-consumers-per-subscription` -* `get-max-unacked-messages-per-subscription` -* `set-max-unacked-messages-per-subscription` -* `get-max-unacked-messages-per-consumer` -* `set-max-unacked-messages-per-consumer` -* `get-compaction-threshold` -* `set-compaction-threshold` -* `get-offload-threshold` -* `set-offload-threshold` -* `get-offload-deletion-lag` -* `set-offload-deletion-lag` -* `clear-offload-deletion-lag` -* `get-schema-autoupdate-strategy` -* `set-schema-autoupdate-strategy` -* `set-offload-policies` -* `get-offload-policies` -* `set-max-subscriptions-per-topic` -* `get-max-subscriptions-per-topic` -* `remove-max-subscriptions-per-topic` - - -### `list` -Get the namespaces for a tenant - -Usage - -```bash - -$ pulsar-admin namespaces list tenant-name - -``` - -### `topics` -Get the list of topics for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces topics tenant/namespace - -``` - -### `policies` -Get the configuration policies of a namespace - -Usage - -```bash - -$ pulsar-admin namespaces policies tenant/namespace - -``` - -### `create` -Create a new namespace - -Usage - -```bash - -$ pulsar-admin namespaces create tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`-b`, `--bundles`|The number of bundles to activate|0| -|`-c`, `--clusters`|List of clusters this namespace will be assigned|| - - -### `delete` -Deletes a namespace. 
The namespace needs to be empty - -Usage - -```bash - -$ pulsar-admin namespaces delete tenant/namespace - -``` - -### `set-deduplication` -Enable or disable message deduplication on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-deduplication tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--enable`, `-e`|Enable message deduplication on the specified namespace|false| -|`--disable`, `-d`|Disable message deduplication on the specified namespace|false| - -### `set-auto-topic-creation` -Enable or disable autoTopicCreation for a namespace, overriding broker settings - -Usage - -```bash - -$ pulsar-admin namespaces set-auto-topic-creation tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--enable`, `-e`|Enable allowAutoTopicCreation on namespace|false| -|`--disable`, `-d`|Disable allowAutoTopicCreation on namespace|false| -|`--type`, `-t`|Type of topic to be auto-created. Possible values: (partitioned, non-partitioned)|non-partitioned| -|`--num-partitions`, `-n`|Default number of partitions of topic to be auto-created, applicable to partitioned topics only|| - -### `remove-auto-topic-creation` -Remove override of autoTopicCreation for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces remove-auto-topic-creation tenant/namespace - -``` - -### `set-auto-subscription-creation` -Enable autoSubscriptionCreation for a namespace, overriding broker settings - -Usage - -```bash - -$ pulsar-admin namespaces set-auto-subscription-creation tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--enable`, `-e`|Enable allowAutoSubscriptionCreation on namespace|false| - -### `remove-auto-subscription-creation` -Remove override of autoSubscriptionCreation for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces remove-auto-subscription-creation tenant/namespace - -``` - -### `permissions` -Get the permissions on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces permissions tenant/namespace - -``` - -### `grant-permission` -Grant permissions on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces grant-permission tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--actions`|Actions to be granted (`produce` or `consume`)|| -|`--role`|The client role to which to grant the permissions|| - - -### `revoke-permission` -Revoke permissions on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces revoke-permission tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--role`|The client role to which to revoke the permissions|| - -### `grant-subscription-permission` -Grant permissions to access subscription admin-api - -Usage - -```bash - -$ pulsar-admin namespaces grant-subscription-permission tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--roles`|The client roles to which to grant the permissions (comma separated roles)|| -|`--subscription`|The subscription name for which permission will be granted to roles|| - -### `revoke-subscription-permission` -Revoke permissions to access subscription admin-api - -Usage - -```bash - -$ pulsar-admin namespaces revoke-subscription-permission tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--role`|The client role to which to revoke the permissions|| -|`--subscription`|The subscription name for which permission will be revoked to roles|| - -### 
`set-clusters` -Set replication clusters for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-clusters tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`-c`, `--clusters`|Replication clusters ID list (comma-separated values)|| - - -### `get-clusters` -Get replication clusters for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-clusters tenant/namespace - -``` - -### `get-backlog-quotas` -Get the backlog quota policies for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-backlog-quotas tenant/namespace - -``` - -### `set-backlog-quota` -Set a backlog quota policy for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-backlog-quota tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-l`, `--limit`|The backlog size limit (for example `10M` or `16G`)|| -|`-lt`, `--limitTime`|Time limit in second, non-positive number for disabling time limit. (for example 3600 for 1 hour)|| -|`-p`, `--policy`|The retention policy to enforce when the limit is reached. The valid options are: `producer_request_hold`, `producer_exception` or `consumer_backlog_eviction`| -|`-t`, `--type`|Backlog quota type to set. The valid options are: `destination_storage`, `message_age` |destination_storage| - -Example - -```bash - -$ pulsar-admin namespaces set-backlog-quota my-tenant/my-ns \ ---limit 2G \ ---policy producer_request_hold - -``` - -```bash - -$ pulsar-admin namespaces set-backlog-quota my-tenant/my-ns \ ---limitTime 3600 \ ---policy producer_request_hold \ ---type message_age - -``` - -### `remove-backlog-quota` -Remove a backlog quota policy from a namespace - -|Flag|Description|Default| -|---|---|---| -|`-t`, `--type`|Backlog quota type to remove. The valid options are: `destination_storage`, `message_age` |destination_storage| - -Usage - -```bash - -$ pulsar-admin namespaces remove-backlog-quota tenant/namespace - -``` - -### `get-persistence` -Get the persistence policies for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-persistence tenant/namespace - -``` - -### `set-persistence` -Set the persistence policies for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-persistence tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-a`, `--bookkeeper-ack-quorum`|The number of acks (guaranteed copies) to wait for each entry|0| -|`-e`, `--bookkeeper-ensemble`|The number of bookies to use for a topic|0| -|`-w`, `--bookkeeper-write-quorum`|How many writes to make of each entry|0| -|`-r`, `--ml-mark-delete-max-rate`|Throttling rate of mark-delete operation (0 means no throttle)|| - - -### `get-message-ttl` -Get the message TTL for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-message-ttl tenant/namespace - -``` - -### `set-message-ttl` -Set the message TTL for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-message-ttl tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-ttl`, `--messageTTL`|Message TTL in seconds. When the value is set to `0`, TTL is disabled. TTL is disabled by default. |0| - -### `remove-message-ttl` -Remove the message TTL for a namespace. 
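-
-Taken together with the set/get subcommands above and the remove subcommand below, a typical TTL lifecycle looks like the following sketch (the tenant and namespace names are illustrative):
-
-```bash
-
-# Illustrative tenant/namespace; TTL values are in seconds
-$ pulsar-admin namespaces set-message-ttl my-tenant/my-ns --messageTTL 7200
-
-# Confirm the TTL now in effect
-$ pulsar-admin namespaces get-message-ttl my-tenant/my-ns
-
-# Fall back to the broker default by removing the namespace-level TTL
-$ pulsar-admin namespaces remove-message-ttl my-tenant/my-ns
-
-```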
- -Usage - -```bash - -$ pulsar-admin namespaces remove-message-ttl tenant/namespace - -``` - -### `get-anti-affinity-group` -Get Anti-affinity group name for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-anti-affinity-group tenant/namespace - -``` - -### `set-anti-affinity-group` -Set Anti-affinity group name for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-anti-affinity-group tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-g`, `--group`|Anti-affinity group name|| - -### `get-anti-affinity-namespaces` -Get Anti-affinity namespaces grouped with the given anti-affinity group name - -Usage - -```bash - -$ pulsar-admin namespaces get-anti-affinity-namespaces options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-c`, `--cluster`|Cluster name|| -|`-g`, `--group`|Anti-affinity group name|| -|`-p`, `--tenant`|Tenant is only used for authorization. Client has to be admin of any of the tenant to access this api|| - -### `delete-anti-affinity-group` -Remove Anti-affinity group name for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces delete-anti-affinity-group tenant/namespace - -``` - -### `get-retention` -Get the retention policy that is applied to each topic within the specified namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-retention tenant/namespace - -``` - -### `set-retention` -Set the retention policy for each topic within the specified namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-retention tenant/namespace - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-s`, `--size`|The retention size limits (for example 10M, 16G or 3T) for each topic in the namespace. 0 means no retention and -1 means infinite size retention|| -|`-t`, `--time`|The retention time in minutes, hours, days, or weeks. Examples: 100m, 13h, 2d, 5w. 0 means no retention and -1 means infinite time retention|| - - -### `unload` -Unload a namespace or namespace bundle from the current serving broker. - -Usage - -```bash - -$ pulsar-admin namespaces unload tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-b`, `--bundle`|{start-boundary}_{end-boundary} (e.g. 0x00000000_0xffffffff)|| - -### `split-bundle` -Split a namespace-bundle from the current serving broker - -Usage - -```bash - -$ pulsar-admin namespaces split-bundle tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-b`, `--bundle`|{start-boundary}_{end-boundary} (e.g. 
0x00000000_0xffffffff)|| -|`-u`, `--unload`|Unload newly split bundles after splitting old bundle|false| - -### `set-dispatch-rate` -Set message-dispatch-rate for all topics of the namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-dispatch-rate tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-bd`, `--byte-dispatch-rate`|The byte dispatch rate (default -1 will be overwrite if not passed)|-1| -|`-dt`, `--dispatch-rate-period`|The dispatch rate period in second type (default 1 second will be overwrite if not passed)|1| -|`-md`, `--msg-dispatch-rate`|The message dispatch rate (default -1 will be overwrite if not passed)|-1| - -### `get-dispatch-rate` -Get configured message-dispatch-rate for all topics of the namespace (Disabled if value < 0) - -Usage - -```bash - -$ pulsar-admin namespaces get-dispatch-rate tenant/namespace - -``` - -### `set-replicator-dispatch-rate` -Set replicator message-dispatch-rate for all topics of the namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-replicator-dispatch-rate tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-bd`, `--byte-dispatch-rate`|The byte dispatch rate (default -1 will be overwrite if not passed)|-1| -|`-dt`, `--dispatch-rate-period`|The dispatch rate period in second type (default 1 second will be overwrite if not passed)|1| -|`-md`, `--msg-dispatch-rate`|The message dispatch rate (default -1 will be overwrite if not passed)|-1| - -### `get-replicator-dispatch-rate` -Get replicator configured message-dispatch-rate for all topics of the namespace (Disabled if value < 0) - -Usage - -```bash - -$ pulsar-admin namespaces get-replicator-dispatch-rate tenant/namespace - -``` - -### `set-subscribe-rate` -Set subscribe-rate per consumer for all topics of the namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-subscribe-rate tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-sr`, `--subscribe-rate`|The subscribe rate (default -1 will be overwrite if not passed)|-1| -|`-st`, `--subscribe-rate-period`|The subscribe rate period in second type (default 30 second will be overwrite if not passed)|30| - -### `get-subscribe-rate` -Get configured subscribe-rate per consumer for all topics of the namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-subscribe-rate tenant/namespace - -``` - -### `set-subscription-dispatch-rate` -Set subscription message-dispatch-rate for all subscription of the namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-subscription-dispatch-rate tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-bd`, `--byte-dispatch-rate`|The byte dispatch rate (default -1 will be overwrite if not passed)|-1| -|`-dt`, `--dispatch-rate-period`|The dispatch rate period in second type (default 1 second will be overwrite if not passed)|1| -|`-md`, `--sub-msg-dispatch-rate`|The message dispatch rate (default -1 will be overwrite if not passed)|-1| - -### `get-subscription-dispatch-rate` -Get subscription configured message-dispatch-rate for all topics of the namespace (Disabled if value < 0) - -Usage - -```bash - -$ pulsar-admin namespaces get-subscription-dispatch-rate tenant/namespace - -``` - -### `clear-backlog` -Clear the backlog for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces clear-backlog tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-b`, 
`--bundle`|{start-boundary}_{end-boundary} (e.g. 0x00000000_0xffffffff)|| -|`-force`, `--force`|Whether to force a clear backlog without prompt|false| -|`-s`, `--sub`|The subscription name|| - - -### `unsubscribe` -Unsubscribe the given subscription on all destinations on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces unsubscribe tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-b`, `--bundle`|{start-boundary}_{end-boundary} (e.g. 0x00000000_0xffffffff)|| -|`-s`, `--sub`|The subscription name|| - -### `set-encryption-required` -Enable or disable message encryption required for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-encryption-required tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-d`, `--disable`|Disable message encryption required|false| -|`-e`, `--enable`|Enable message encryption required|false| - -### `set-delayed-delivery` -Set the delayed delivery policy on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-delayed-delivery tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-d`, `--disable`|Disable delayed delivery messages|false| -|`-e`, `--enable`|Enable delayed delivery messages|false| -|`-t`, `--time`|The tick time for when retrying on delayed delivery messages|1s| - - -### `get-delayed-delivery` -Get the delayed delivery policy on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-delayed-delivery-time tenant/namespace - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-t`, `--time`|The tick time for when retrying on delayed delivery messages|1s| - - -### `set-subscription-auth-mode` -Set subscription auth mode on a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-subscription-auth-mode tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-m`, `--subscription-auth-mode`|Subscription authorization mode for Pulsar policies. 
Valid options are: [None, Prefix]|| - -### `get-max-producers-per-topic` -Get maxProducersPerTopic for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-max-producers-per-topic tenant/namespace - -``` - -### `set-max-producers-per-topic` -Set maxProducersPerTopic for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-max-producers-per-topic tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-p`, `--max-producers-per-topic`|maxProducersPerTopic for a namespace|0| - -### `get-max-consumers-per-topic` -Get maxConsumersPerTopic for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-max-consumers-per-topic tenant/namespace - -``` - -### `set-max-consumers-per-topic` -Set maxConsumersPerTopic for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-max-consumers-per-topic tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-c`, `--max-consumers-per-topic`|maxConsumersPerTopic for a namespace|0| - -### `get-max-consumers-per-subscription` -Get maxConsumersPerSubscription for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-max-consumers-per-subscription tenant/namespace - -``` - -### `set-max-consumers-per-subscription` -Set maxConsumersPerSubscription for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-max-consumers-per-subscription tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-c`, `--max-consumers-per-subscription`|maxConsumersPerSubscription for a namespace|0| - -### `get-max-unacked-messages-per-subscription` -Get maxUnackedMessagesPerSubscription for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-max-unacked-messages-per-subscription tenant/namespace - -``` - -### `set-max-unacked-messages-per-subscription` -Set maxUnackedMessagesPerSubscription for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-max-unacked-messages-per-subscription tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-c`, `--max-unacked-messages-per-subscription`|maxUnackedMessagesPerSubscription for a namespace|-1| - -### `get-max-unacked-messages-per-consumer` -Get maxUnackedMessagesPerConsumer for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-max-unacked-messages-per-consumer tenant/namespace - -``` - -### `set-max-unacked-messages-per-consumer` -Set maxUnackedMessagesPerConsumer for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-max-unacked-messages-per-consumer tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-c`, `--max-unacked-messages-per-consumer`|maxUnackedMessagesPerConsumer for a namespace|-1| - - -### `get-compaction-threshold` -Get compactionThreshold for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-compaction-threshold tenant/namespace - -``` - -### `set-compaction-threshold` -Set compactionThreshold for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-compaction-threshold tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-t`, `--threshold`|Maximum number of bytes in a topic backlog before compaction is triggered (eg: 10M, 16G, 3T). 
0 disables automatic compaction|0| - - -### `get-offload-threshold` -Get offloadThreshold for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-offload-threshold tenant/namespace - -``` - -### `set-offload-threshold` -Set offloadThreshold for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-offload-threshold tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-s`, `--size`|Maximum number of bytes stored in the pulsar cluster for a topic before data will start being automatically offloaded to longterm storage (eg: 10M, 16G, 3T, 100). Negative values disable automatic offload. 0 triggers offloading as soon as possible.|-1| - -### `get-offload-deletion-lag` -Get offloadDeletionLag, in minutes, for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-offload-deletion-lag tenant/namespace - -``` - -### `set-offload-deletion-lag` -Set offloadDeletionLag for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-offload-deletion-lag tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-l`, `--lag`|Duration to wait after offloading a ledger segment, before deleting the copy of that segment from cluster local storage. (eg: 10m, 5h, 3d, 2w).|-1| - -### `clear-offload-deletion-lag` -Clear offloadDeletionLag for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces clear-offload-deletion-lag tenant/namespace - -``` - -### `get-schema-autoupdate-strategy` -Get the schema auto-update strategy for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-schema-autoupdate-strategy tenant/namespace - -``` - -### `set-schema-autoupdate-strategy` -Set the schema auto-update strategy for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-schema-autoupdate-strategy tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-c`, `--compatibility`|Compatibility level required for new schemas created via a Producer. Possible values (Full, Backward, Forward, None).|Full| -|`-d`, `--disabled`|Disable automatic schema updates.|false| - -### `get-publish-rate` -Get the message publish rate for each topic in a namespace, in bytes as well as messages per second - -Usage - -```bash - -$ pulsar-admin namespaces get-publish-rate tenant/namespace - -``` - -### `set-publish-rate` -Set the message publish rate for each topic in a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-publish-rate tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-m`, `--msg-publish-rate`|Threshold for number of messages per second per topic in the namespace (-1 implies not set, 0 for no limit).|-1| -|`-b`, `--byte-publish-rate`|Threshold for number of bytes per second per topic in the namespace (-1 implies not set, 0 for no limit).|-1| - -### `set-offload-policies` -Set the offload policy for a namespace. 
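-
-As a hedged sketch of how the flags documented below combine (the tenant, namespace, region, and bucket names are illustrative):
-
-```bash
-
-# Offload to an S3 bucket once a topic accumulates 10 GB of ledger data,
-# and also offload data older than one day
-$ pulsar-admin namespaces set-offload-policies my-tenant/my-ns \
---driver aws-s3 \
---region us-west-2 \
---bucket my-offload-bucket \
---offloadAfterThreshold 10G \
---offloadAfterElapsed 1d
-
-```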
- -Usage - -```bash - -$ pulsar-admin namespaces set-offload-policies tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-d`, `--driver`|Driver to use to offload old data to long term storage,(Possible values: S3, aws-s3, google-cloud-storage)|| -|`-r`, `--region`|The long term storage region|| -|`-b`, `--bucket`|Bucket to place offloaded ledger into|| -|`-e`, `--endpoint`|Alternative endpoint to connect to|| -|`-i`, `--aws-id`|AWS Credential Id to use when using driver S3 or aws-s3|| -|`-s`, `--aws-secret`|AWS Credential Secret to use when using driver S3 or aws-s3|| -|`-ro`, `--s3-role`|S3 Role used for STSAssumeRoleSessionCredentialsProvider using driver S3 or aws-s3|| -|`-rsn`, `--s3-role-session-name`|S3 role session name used for STSAssumeRoleSessionCredentialsProvider using driver S3 or aws-s3|| -|`-mbs`, `--maxBlockSize`|Max block size|64MB| -|`-rbs`, `--readBufferSize`|Read buffer size|1MB| -|`-oat`, `--offloadAfterThreshold`|Offload after threshold size (eg: 1M, 5M)|| -|`-oae`, `--offloadAfterElapsed`|Offload after elapsed in millis (or minutes, hours,days,weeks eg: 100m, 3h, 2d, 5w).|| - -### `get-offload-policies` -Get the offload policy for a namespace. - -Usage - -```bash - -$ pulsar-admin namespaces get-offload-policies tenant/namespace - -``` - -### `set-max-subscriptions-per-topic` -Set the maximum subscription per topic for a namespace. - -Usage - -```bash - -$ pulsar-admin namespaces set-max-subscriptions-per-topic tenant/namespace - -``` - -### `get-max-subscriptions-per-topic` -Get the maximum subscription per topic for a namespace. - -Usage - -```bash - -$ pulsar-admin namespaces get-max-subscriptions-per-topic tenant/namespace - -``` - -### `remove-max-subscriptions-per-topic` -Remove the maximum subscription per topic for a namespace. - -Usage - -```bash - -$ pulsar-admin namespaces remove-max-subscriptions-per-topic tenant/namespace - -``` - -## `ns-isolation-policy` -Operations for managing namespace isolation policies. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy subcommand - -``` - -Subcommands -* `set` -* `get` -* `list` -* `delete` -* `brokers` -* `broker` - -### `set` -Create/update a namespace isolation policy for a cluster. This operation requires Pulsar superuser privileges. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy set cluster-name policy-name options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`--auto-failover-policy-params`|Comma-separated name=value auto failover policy parameters|[]| -|`--auto-failover-policy-type`|Auto failover policy type name. Currently available options: min_available.|[]| -|`--namespaces`|Comma-separated namespaces regex list|[]| -|`--primary`|Comma-separated primary broker regex list|[]| -|`--secondary`|Comma-separated secondary broker regex list|[]| - - -### `get` -Get the namespace isolation policy of a cluster. This operation requires Pulsar superuser privileges. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy get cluster-name policy-name - -``` - -### `list` -List all namespace isolation policies of a cluster. This operation requires Pulsar superuser privileges. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy list cluster-name - -``` - -### `delete` -Delete namespace isolation policy of a cluster. This operation requires superuser privileges. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy delete - -``` - -### `brokers` -List all brokers with namespace-isolation policies attached to it. 
This operation requires Pulsar super-user privileges. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy brokers cluster-name - -``` - -### `broker` -Get broker with namespace-isolation policies attached to it. This operation requires Pulsar super-user privileges. - -Usage - -```bash - -$ pulsar-admin ns-isolation-policy broker cluster-name options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`--broker`|Broker name to get namespace-isolation policies attached to it|| - -## `topics` -Operations for managing Pulsar topics (both persistent and non-persistent). - -Usage - -```bash - -$ pulsar-admin topics subcommand - -``` - -From Pulsar 2.7.0, some namespace-level policies are available on topic level. To enable topic-level policy in Pulsar, you need to configure the following parameters in the `broker.conf` file. - -```shell - -systemTopicEnabled=true -topicLevelPoliciesEnabled=true - -``` - -Subcommands -* `compact` -* `compaction-status` -* `offload` -* `offload-status` -* `create-partitioned-topic` -* `create-missed-partitions` -* `delete-partitioned-topic` -* `create` -* `get-partitioned-topic-metadata` -* `update-partitioned-topic` -* `list-partitioned-topics` -* `list` -* `terminate` -* `permissions` -* `grant-permission` -* `revoke-permission` -* `lookup` -* `bundle-range` -* `delete` -* `unload` -* `create-subscription` -* `subscriptions` -* `unsubscribe` -* `stats` -* `stats-internal` -* `info-internal` -* `partitioned-stats` -* `partitioned-stats-internal` -* `skip` -* `clear-backlog` -* `expire-messages` -* `expire-messages-all-subscriptions` -* `peek-messages` -* `reset-cursor` -* `get-message-by-id` -* `last-message-id` -* `get-backlog-quotas` -* `set-backlog-quota` -* `remove-backlog-quota` -* `get-persistence` -* `set-persistence` -* `remove-persistence` -* `get-message-ttl` -* `set-message-ttl` -* `remove-message-ttl` -* `get-deduplication` -* `set-deduplication` -* `remove-deduplication` -* `get-retention` -* `set-retention` -* `remove-retention` -* `get-dispatch-rate` -* `set-dispatch-rate` -* `remove-dispatch-rate` -* `get-max-unacked-messages-per-subscription` -* `set-max-unacked-messages-per-subscription` -* `remove-max-unacked-messages-per-subscription` -* `get-max-unacked-messages-per-consumer` -* `set-max-unacked-messages-per-consumer` -* `remove-max-unacked-messages-per-consumer` -* `get-delayed-delivery` -* `set-delayed-delivery` -* `remove-delayed-delivery` -* `get-max-producers` -* `set-max-producers` -* `remove-max-producers` -* `get-max-consumers` -* `set-max-consumers` -* `remove-max-consumers` -* `get-compaction-threshold` -* `set-compaction-threshold` -* `remove-compaction-threshold` -* `get-offload-policies` -* `set-offload-policies` -* `remove-offload-policies` -* `get-inactive-topic-policies` -* `set-inactive-topic-policies` -* `remove-inactive-topic-policies` -* `set-max-subscriptions` -* `get-max-subscriptions` -* `remove-max-subscriptions` - -### `compact` -Run compaction on the specified topic (persistent topics only) - -Usage - -``` - -$ pulsar-admin topics compact persistent://tenant/namespace/topic - -``` - -### `compaction-status` -Check the status of a topic compaction (persistent topics only) - -Usage - -```bash - -$ pulsar-admin topics compaction-status persistent://tenant/namespace/topic - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-w`, `--wait-complete`|Wait for compaction to complete|false| - - -### `offload` -Trigger offload of data from a topic to long-term storage (e.g. 
Amazon S3)
-
-Usage
-
-```bash
-
-$ pulsar-admin topics offload persistent://tenant/namespace/topic options
-
-```
-
-Options
-|Flag|Description|Default|
-|---|---|---|
-|`-s`, `--size-threshold`|The maximum amount of data to keep in BookKeeper for the specific topic||
-
-
-### `offload-status`
-Check the status of data offloading from a topic to long-term storage
-
-Usage
-
-```bash
-
-$ pulsar-admin topics offload-status persistent://tenant/namespace/topic options
-
-```
-
-Options
-|Flag|Description|Default|
-|---|---|---|
-|`-w`, `--wait-complete`|Wait for offloading to complete|false|
-
-
-### `create-partitioned-topic`
-Create a partitioned topic. A partitioned topic must be created before producers can publish to it.
-
-:::note
-
-By default, topics are considered inactive 60 seconds after creation and are deleted automatically to avoid generating trash data.
-To disable this feature, set `brokerDeleteInactiveTopicsEnabled` to `false`.
-To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to your desired value.
-For more information about these two parameters, see [here](reference-configuration.md#broker).
-
-:::
-
-Usage
-
-```bash
-
-$ pulsar-admin topics create-partitioned-topic {persistent|non-persistent}://tenant/namespace/topic options
-
-```
-
-Options
-|Flag|Description|Default|
-|---|---|---|
-|`-p`, `--partitions`|The number of partitions for the topic|0|
-
-### `create-missed-partitions`
-Try to create missing partitions for a partitioned topic. This can be used to repair a partitioned topic
-whose partitions were never created, for example when topic auto-creation is disabled.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics create-missed-partitions persistent://tenant/namespace/topic
-
-```
-
-### `delete-partitioned-topic`
-Delete a partitioned topic. This will also delete all the partitions of the topic if they exist.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics delete-partitioned-topic {persistent|non-persistent}://tenant/namespace/topic
-
-```
-
-### `create`
-Creates a non-partitioned topic. A non-partitioned topic must be explicitly created by the user if allowAutoTopicCreation or createIfMissing is disabled.
-
-:::note
-
-By default, topics are considered inactive 60 seconds after creation and are deleted automatically to avoid generating trash data.
-To disable this feature, set `brokerDeleteInactiveTopicsEnabled` to `false`.
-To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to your desired value.
-For more information about these two parameters, see [here](reference-configuration.md#broker).
-
-:::
-
-Usage
-
-```bash
-
-$ pulsar-admin topics create {persistent|non-persistent}://tenant/namespace/topic
-
-```
-
-### `get-partitioned-topic-metadata`
-Get the partitioned topic metadata. If the topic is not created or is a non-partitioned topic, this will return an empty topic with zero partitions.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics get-partitioned-topic-metadata {persistent|non-persistent}://tenant/namespace/topic
-
-```
-
-### `update-partitioned-topic`
-Update an existing non-global partitioned topic. The new number of partitions must be greater than the existing number of partitions.
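-
-For example, growing a topic from 4 to 8 partitions with the subcommands in this section might look like the following sketch (the topic name is illustrative):
-
-```bash
-
-# Create the topic with 4 partitions
-$ pulsar-admin topics create-partitioned-topic persistent://my-tenant/my-ns/my-topic --partitions 4
-
-# Later, grow it to 8 partitions; requests that shrink the count are rejected
-$ pulsar-admin topics update-partitioned-topic persistent://my-tenant/my-ns/my-topic --partitions 8
-
-# Verify the new partition count
-$ pulsar-admin topics get-partitioned-topic-metadata persistent://my-tenant/my-ns/my-topic
-
-```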
- -Usage - -```bash - -$ pulsar-admin topics update-partitioned-topic {persistent|non-persistent}://tenant/namespace/topic options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`-p`, `--partitions`|The number of partitions for the topic|0| - -### `list-partitioned-topics` -Get the list of partitioned topics under a namespace. - -Usage - -```bash - -$ pulsar-admin topics list-partitioned-topics tenant/namespace - -``` - -### `list` -Get the list of topics under a namespace - -Usage - -``` - -$ pulsar-admin topics list tenant/cluster/namespace - -``` - -### `terminate` -Terminate a persistent topic (disallow further messages from being published on the topic) - -Usage - -```bash - -$ pulsar-admin topics terminate persistent://tenant/namespace/topic - -``` - -### `permissions` -Get the permissions on a topic. Retrieve the effective permissions for a destination. These permissions are defined by the permissions set at the namespace level combined (union) with any eventual specific permissions set on the topic. - -Usage - -```bash - -$ pulsar-admin topics permissions topic - -``` - -### `grant-permission` -Grant a new permission to a client role on a single topic - -Usage - -```bash - -$ pulsar-admin topics grant-permission {persistent|non-persistent}://tenant/namespace/topic options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--actions`|Actions to be granted (`produce` or `consume`)|| -|`--role`|The client role to which to grant the permissions|| - - -### `revoke-permission` -Revoke permissions to a client role on a single topic. If the permission was not set at the topic level, but rather at the namespace level, this operation will return an error (HTTP status code 412). - -Usage - -```bash - -$ pulsar-admin topics revoke-permission topic - -``` - -### `lookup` -Look up a topic from the current serving broker - -Usage - -```bash - -$ pulsar-admin topics lookup topic - -``` - -### `bundle-range` -Get the namespace bundle which contains the given topic - -Usage - -```bash - -$ pulsar-admin topics bundle-range topic - -``` - -### `delete` -Delete a topic. The topic cannot be deleted if there are any active subscriptions or producers connected to the topic. - -Usage - -```bash - -$ pulsar-admin topics delete topic - -``` - -### `unload` -Unload a topic - -Usage - -```bash - -$ pulsar-admin topics unload topic - -``` - -### `create-subscription` -Create a new subscription on a topic. - -Usage - -```bash - -$ pulsar-admin topics create-subscription [options] persistent://tenant/namespace/topic - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`-m`, `--messageId`|messageId where to create the subscription. It can be either 'latest', 'earliest' or (ledgerId:entryId)|latest| -|`-s`, `--subscription`|Subscription to reset position on|| - -### `subscriptions` -Get the list of subscriptions on the topic - -Usage - -```bash - -$ pulsar-admin topics subscriptions topic - -``` - -### `unsubscribe` -Delete a durable subscriber from a topic - -Usage - -```bash - -$ pulsar-admin topics unsubscribe topic options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`-s`, `--subscription`|The subscription to delete|| -|`-f`, `--force`|Disconnect and close all consumers and delete subscription forcefully|false| - - -### `stats` -Get the stats for the topic and its connected producers and consumers. All rates are computed over a 1-minute window and are relative to the last completed 1-minute period. 
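-
-The command prints a JSON document. An abridged, illustrative sample is shown below; the values are made up, and the publisher and subscription details are omitted:
-
-```json
-
-{
-  "msgRateIn" : 100.0,
-  "msgThroughputIn" : 10240.0,
-  "msgRateOut" : 100.0,
-  "msgThroughputOut" : 10240.0,
-  "averageMsgSize" : 102.4,
-  "storageSize" : 502100,
-  "publishers" : [],
-  "subscriptions" : {}
-}
-
-```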
- -Usage - -```bash - -$ pulsar-admin topics stats topic - -``` - -:::note - -The unit of `storageSize` and `averageMsgSize` is Byte. - -::: - -### `stats-internal` -Get the internal stats for the topic - -Usage - -```bash - -$ pulsar-admin topics stats-internal topic - -``` - -### `info-internal` -Get the internal metadata info for the topic - -Usage - -```bash - -$ pulsar-admin topics info-internal topic - -``` - -### `partitioned-stats` -Get the stats for the partitioned topic and its connected producers and consumers. All rates are computed over a 1-minute window and are relative to the last completed 1-minute period. - -Usage - -```bash - -$ pulsar-admin topics partitioned-stats topic options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--per-partition`|Get per-partition stats|false| - -### `partitioned-stats-internal` -Get the internal stats for the partitioned topic and its connected producers and consumers. All the rates are computed over a 1 minute window and are relative the last completed 1 minute period. - -Usage - -```bash - -$ pulsar-admin topics partitioned-stats-internal topic - -``` - -### `skip` -Skip some messages for the subscription - -Usage - -```bash - -$ pulsar-admin topics skip topic options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`-n`, `--count`|The number of messages to skip|0| -|`-s`, `--subscription`|The subscription on which to skip messages|| - - -### `clear-backlog` -Clear backlog (skip all the messages) for the subscription - -Usage - -```bash - -$ pulsar-admin topics clear-backlog topic options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`-s`, `--subscription`|The subscription to clear|| - - -### `expire-messages` -Expire messages that are older than the given expiry time (in seconds) for the subscription. - -Usage - -```bash - -$ pulsar-admin topics expire-messages topic options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`-t`, `--expireTime`|Expire messages older than the time (in seconds)|0| -|`-s`, `--subscription`|The subscription to skip messages on|| - - -### `expire-messages-all-subscriptions` -Expire messages older than the given expiry time (in seconds) for all subscriptions - -Usage - -```bash - -$ pulsar-admin topics expire-messages-all-subscriptions topic options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`-t`, `--expireTime`|Expire messages older than the time (in seconds)|0| - - -### `peek-messages` -Peek some messages for the subscription. - -Usage - -```bash - -$ pulsar-admin topics peek-messages topic options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`-n`, `--count`|The number of messages|0| -|`-s`, `--subscription`|Subscription to get messages from|| - - -### `reset-cursor` -Reset position for subscription to a position that is closest to timestamp or messageId. - -Usage - -```bash - -$ pulsar-admin topics reset-cursor topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-s`, `--subscription`|Subscription to reset position on|| -|`-t`, `--time`|The time in minutes to reset back to (or minutes, hours, days, weeks, etc.). Examples: `100m`, `3h`, `2d`, `5w`.|| -|`-m`, `--messageId`| The messageId to reset back to (ledgerId:entryId). 
||
-
-### `get-message-by-id`
-Get message by ledger id and entry id
-
-Usage
-
-```bash
-
-$ pulsar-admin topics get-message-by-id topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-l`, `--ledgerId`|The ledger id|0|
-|`-e`, `--entryId`|The entry id|0|
-
-### `last-message-id`
-Get the last committed message ID of the topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics last-message-id persistent://tenant/namespace/topic
-
-```
-
-### `get-backlog-quotas`
-Get the backlog quota policies for a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics get-backlog-quotas tenant/namespace/topic
-
-```
-
-### `set-backlog-quota`
-Set a backlog quota policy for a topic.
-
-|Flag|Description|Default|
-|----|---|---|
-|`-l`, `--limit`|The backlog size limit (for example `10M` or `16G`)||
-|`-lt`, `--limitTime`|Time limit in seconds, non-positive number for disabling time limit. (for example 3600 for 1 hour)||
-|`-p`, `--policy`|The retention policy to enforce when the limit is reached. The valid options are: `producer_request_hold`, `producer_exception` or `consumer_backlog_eviction`||
-|`-t`, `--type`|Backlog quota type to set. The valid options are: `destination_storage`, `message_age`|destination_storage|
-
-Usage
-
-```bash
-
-$ pulsar-admin topics set-backlog-quota tenant/namespace/topic options
-
-```
-
-Example
-
-```bash
-
-$ pulsar-admin topics set-backlog-quota my-tenant/my-ns/my-topic \
---limit 2G \
---policy producer_request_hold
-
-```
-
-```bash
-
-$ pulsar-admin topics set-backlog-quota my-tenant/my-ns/my-topic \
---limitTime 3600 \
---policy producer_request_hold \
---type message_age
-
-```
-
-### `remove-backlog-quota`
-Remove a backlog quota policy from a topic.
-
-|Flag|Description|Default|
-|---|---|---|
-|`-t`, `--type`|Backlog quota type to remove. The valid options are: `destination_storage`, `message_age`|destination_storage|
-
-Usage
-
-```bash
-
-$ pulsar-admin topics remove-backlog-quota tenant/namespace/topic
-
-```
-
-### `get-persistence`
-Get the persistence policies for a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics get-persistence tenant/namespace/topic
-
-```
-
-### `set-persistence`
-Set the persistence policies for a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics set-persistence tenant/namespace/topic options
-
-```
-
-Options
-|Flag|Description|Default|
-|----|---|---|
-|`-e`, `--bookkeeper-ensemble`|Number of bookies to use for a topic|0|
-|`-w`, `--bookkeeper-write-quorum`|How many writes to make of each entry|0|
-|`-a`, `--bookkeeper-ack-quorum`|Number of acks (guaranteed copies) to wait for each entry|0|
-|`-r`, `--ml-mark-delete-max-rate`|Throttling rate of mark-delete operation (0 means no throttle)||
-
-### `remove-persistence`
-Remove the persistence policy for a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics remove-persistence tenant/namespace/topic
-
-```
-
-### `get-message-ttl`
-Get the message TTL for a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics get-message-ttl tenant/namespace/topic
-
-```
-
-### `set-message-ttl`
-Set the message TTL for a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics set-message-ttl tenant/namespace/topic options
-
-```
-
-Options
-|Flag|Description|Default|
-|----|---|---|
-|`-ttl`, `--messageTTL`|Message TTL for a topic in seconds, allowed range from 1 to `Integer.MAX_VALUE`|0|
-
-### `remove-message-ttl`
-Remove the message TTL for a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics remove-message-ttl tenant/namespace/topic
-
-```
-
-### `get-deduplication`
-Get a deduplication policy for a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics get-deduplication tenant/namespace/topic
-
-```
-
-### `set-deduplication`
-Set a deduplication policy for a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics set-deduplication tenant/namespace/topic options
-
-```
-
-Options
-|Flag|Description|Default|
-|---|---|---|
-|`--enable`, `-e`|Enable message deduplication on the specified topic.|false|
-|`--disable`, `-d`|Disable message deduplication on the specified topic.|false|
-
-### `remove-deduplication`
-Remove a deduplication policy for a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics remove-deduplication tenant/namespace/topic
-
-```
-
-## `tenants`
-Operations for managing tenants
-
-Usage
-
-```bash
-
-$ pulsar-admin tenants subcommand
-
-```
-
-Subcommands
-* `list`
-* `get`
-* `create`
-* `update`
-* `delete`
-
-### `list`
-List the existing tenants
-
-Usage
-
-```bash
-
-$ pulsar-admin tenants list
-
-```
-
-### `get`
-Gets the configuration of a tenant
-
-Usage
-
-```bash
-
-$ pulsar-admin tenants get tenant-name
-
-```
-
-### `create`
-Creates a new tenant
-
-Usage
-
-```bash
-
-$ pulsar-admin tenants create tenant-name options
-
-```
-
-Options
-|Flag|Description|Default|
-|----|---|---|
-|`-r`, `--admin-roles`|Comma-separated admin roles||
-|`-c`, `--allowed-clusters`|Comma-separated allowed clusters||
-
-### `update`
-Updates a tenant
-
-Usage
-
-```bash
-
-$ pulsar-admin tenants update tenant-name options
-
-```
-
-Options
-|Flag|Description|Default|
-|----|---|---|
-|`-r`, `--admin-roles`|Comma-separated admin roles||
-|`-c`, `--allowed-clusters`|Comma-separated allowed clusters||
-
-
-### `delete`
-Deletes an existing tenant
-
-Usage
-
-```bash
-
-$ pulsar-admin tenants delete tenant-name
-
-```
-
-Options
-|Flag|Description|Default|
-|----|---|---|
-|`-f`, `--force`|Delete a tenant forcefully by deleting all namespaces under it.|false|
-
-
-## `resource-quotas`
-Operations for managing resource quotas
-
-Usage
-
-```bash
-
-$ pulsar-admin resource-quotas subcommand
-
-```
-
-Subcommands
-* `get`
-* `set`
-* `reset-namespace-bundle-quota`
-
-
-### `get`
-Get the resource quota for a specified namespace bundle, or the default quota if no namespace/bundle is specified.
-
-Usage
-
-```bash
-
-$ pulsar-admin resource-quotas get options
-
-```
-
-Options
-|Flag|Description|Default|
-|----|---|---|
-|`-b`, `--bundle`|A bundle of the form {start-boundary}_{end_boundary}. This must be specified together with -n/--namespace.||
-|`-n`, `--namespace`|The namespace||
-
-
-### `set`
-Set the resource quota for the specified namespace bundle, or the default quota if no namespace/bundle is specified.
-
-Usage
-
-```bash
-
-$ pulsar-admin resource-quotas set options
-
-```
-
-Options
-|Flag|Description|Default|
-|----|---|---|
-|`-bi`, `--bandwidthIn`|The expected inbound bandwidth (in bytes/second)|0|
-|`-bo`, `--bandwidthOut`|Expected outbound bandwidth (in bytes/second)|0|
-|`-b`, `--bundle`|A bundle of the form {start-boundary}_{end_boundary}.
This must be specified together with -n/--namespace.|| -|`-d`, `--dynamic`|Allow to be dynamically re-calculated (or not)|false| -|`-mem`, `--memory`|Expectred memory usage (in megabytes)|0| -|`-mi`, `--msgRateIn`|Expected incoming messages per second|0| -|`-mo`, `--msgRateOut`|Expected outgoing messages per second|0| -|`-n`, `--namespace`|The namespace as tenant/namespace, for example my-tenant/my-ns. Must be specified together with -b/--bundle.|| - - -### `reset-namespace-bundle-quota` -Reset the specified namespace bundle's resource quota to a default value. - -Usage - -```bash - -$ pulsar-admin resource-quotas reset-namespace-bundle-quota options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-b`, `--bundle`|A bundle of the form {start-boundary}_{end_boundary}. This must be specified together with -n/--namespace.|| -|`-n`, `--namespace`|The namespace|| - - - -## `schemas` -Operations related to Schemas associated with Pulsar topics. - -Usage - -``` - -$ pulsar-admin schemas subcommand - -``` - -Subcommands -* `upload` -* `delete` -* `get` -* `extract` - - -### `upload` -Upload the schema definition for a topic - -Usage - -```bash - -$ pulsar-admin schemas upload persistent://tenant/namespace/topic options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`--filename`|The path to the schema definition file. An example schema file is available under conf directory.|| - - -### `delete` -Delete the schema definition associated with a topic - -Usage - -```bash - -$ pulsar-admin schemas delete persistent://tenant/namespace/topic - -``` - -### `get` -Retrieve the schema definition associated with a topic (at a given version if version is supplied). - -Usage - -```bash - -$ pulsar-admin schemas get persistent://tenant/namespace/topic options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`--version`|The version of the schema definition to retrieve for a topic.|| - -### `extract` -Provide the schema definition for a topic via Java class name contained in a JAR file - -Usage - -```bash - -$ pulsar-admin schemas extract persistent://tenant/namespace/topic options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-c`, `--classname`|The Java class name|| -|`-j`, `--jar`|A path to the JAR file which contains the above Java class|| -|`-t`, `--type`|The type of the schema (avro or json)|| diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/reference-rest-api-overview.md b/site2/website/versioned_docs/version-2.9.1-deprecated/reference-rest-api-overview.md deleted file mode 100644 index 4bdcf23483a2b5..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/reference-rest-api-overview.md +++ /dev/null @@ -1,18 +0,0 @@ ---- -id: reference-rest-api-overview -title: Pulsar REST APIs -sidebar_label: "Pulsar REST APIs" ---- - -A REST API (also known as RESTful API, REpresentational State Transfer Application Programming Interface) is a set of definitions and protocols for building and integrating application software, using HTTP requests to GET, PUT, POST, and DELETE data following the REST standards. In essence, REST API is a set of remote calls using standard methods to request and return data in a specific format between two systems. - -Pulsar provides a variety of REST APIs that enable you to interact with Pulsar to retrieve information or perform an action. 
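-
-For example, most operations that `pulsar-admin` performs are plain HTTP calls against these APIs, so you can issue them directly; a hedged sketch against a broker's web service port, which defaults to 8080 (host and tenant name are illustrative):
-
-```bash
-
-# List tenants via the admin REST API (equivalent to `pulsar-admin tenants list`)
-$ curl http://localhost:8080/admin/v2/tenants
-
-# List the namespaces under a tenant
-$ curl http://localhost:8080/admin/v2/namespaces/my-tenant
-
-```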
- -| REST API category | Description | -| --- | --- | -| [Admin](https://pulsar.apache.org/admin-rest-api/?version=master) | REST APIs for administrative operations.| -| [Functions](https://pulsar.apache.org/functions-rest-api/?version=master) | REST APIs for function-specific operations.| -| [Sources](https://pulsar.apache.org/source-rest-api/?version=master) | REST APIs for source-specific operations.| -| [Sinks](https://pulsar.apache.org/sink-rest-api/?version=master) | REST APIs for sink-specific operations.| -| [Packages](https://pulsar.apache.org/packages-rest-api/?version=master) | REST APIs for package-specific operations. A package can be a group of functions, sources, and sinks.| - diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/reference-terminology.md b/site2/website/versioned_docs/version-2.9.1-deprecated/reference-terminology.md deleted file mode 100644 index e5099141c3231e..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/reference-terminology.md +++ /dev/null @@ -1,176 +0,0 @@ ---- -id: reference-terminology -title: Pulsar Terminology -sidebar_label: "Terminology" -original_id: reference-terminology ---- - -Here is a glossary of terms related to Apache Pulsar: - -### Concepts - -#### Pulsar - -Pulsar is a distributed messaging system originally created by Yahoo but now under the stewardship of the Apache Software Foundation. - -#### Message - -Messages are the basic unit of Pulsar. They're what [producers](#producer) publish to [topics](#topic) -and what [consumers](#consumer) then consume from topics. - -#### Topic - -A named channel used to pass messages published by [producers](#producer) to [consumers](#consumer) who -process those [messages](#message). - -#### Partitioned Topic - -A topic that is served by multiple Pulsar [brokers](#broker), which enables higher throughput. - -#### Namespace - -A grouping mechanism for related [topics](#topic). - -#### Namespace Bundle - -A virtual group of [topics](#topic) that belong to the same [namespace](#namespace). A namespace bundle -is defined as a range between two 32-bit hashes, such as 0x00000000 and 0xffffffff. - -#### Tenant - -An administrative unit for allocating capacity and enforcing an authentication/authorization scheme. - -#### Subscription - -A lease on a [topic](#topic) established by a group of [consumers](#consumer). Pulsar has four subscription -modes (exclusive, shared, failover and key_shared). - -#### Pub-Sub - -A messaging pattern in which [producer](#producer) processes publish messages on [topics](#topic) that -are then consumed (processed) by [consumer](#consumer) processes. - -#### Producer - -A process that publishes [messages](#message) to a Pulsar [topic](#topic). - -#### Consumer - -A process that establishes a subscription to a Pulsar [topic](#topic) and processes messages published -to that topic by [producers](#producer). - -#### Reader - -Pulsar readers are message processors much like Pulsar [consumers](#consumer) but with two crucial differences: - -- you can specify *where* on a topic readers begin processing messages (consumers always begin with the latest - available unacked message); -- readers don't retain data or acknowledge messages. - -#### Cursor - -The subscription position for a [consumer](#consumer). - -#### Acknowledgment (ack) - -A message sent to a Pulsar broker by a [consumer](#consumer) that a message has been successfully processed. 
-An acknowledgement (ack) is Pulsar's way of knowing that the message can be deleted from the system;
-if no acknowledgement is received, the message is retained until it is processed.
-
-#### Negative Acknowledgment (nack)
-
-When an application fails to process a particular message, it can send a "negative ack" to Pulsar
-to signal that the message should be replayed at a later time. (By default, failed messages are
-replayed after a one-minute delay.) Be aware that negative acknowledgment on ordered subscription types,
-such as Exclusive, Failover and Key_Shared, can cause failed messages to arrive at consumers out of the original order.
-
-#### Unacknowledged
-
-A message that has been delivered to a consumer for processing but not yet confirmed as processed by the consumer.
-
-#### Retention Policy
-
-Size and time limits that you can set on a [namespace](#namespace) to configure retention of [messages](#message)
-that have already been [acknowledged](#acknowledgement-ack).
-
-#### Multi-Tenancy
-
-The ability to isolate [namespaces](#namespace), specify quotas, and configure authentication and authorization
-on a per-[tenant](#tenant) basis.
-
-#### Failure Domain
-
-A logical domain under a Pulsar cluster. Each logical domain contains a pre-configured list of brokers.
-
-#### Anti-affinity Namespaces
-
-A group of namespaces that have anti-affinity to each other.
-
-### Architecture
-
-#### Standalone
-
-A lightweight Pulsar broker in which all components run in a single Java Virtual Machine (JVM) process. Standalone
-clusters can be run on a single machine and are useful for development purposes.
-
-#### Cluster
-
-A set of Pulsar [brokers](#broker) and [BookKeeper](#bookkeeper) servers (aka [bookies](#bookie)).
-Clusters can reside in different geographical regions and replicate messages to one another
-in a process called [geo-replication](#geo-replication).
-
-#### Instance
-
-A group of Pulsar [clusters](#cluster) that act together as a single unit.
-
-#### Geo-Replication
-
-Replication of messages across Pulsar [clusters](#cluster), potentially in different datacenters
-or geographical regions.
-
-#### Configuration Store
-
-Pulsar's configuration store (previously known as global ZooKeeper) is a ZooKeeper quorum that
-is used for configuration-specific tasks. A multi-cluster Pulsar installation requires just one
-configuration store across all [clusters](#cluster).
-
-#### Topic Lookup
-
-A service provided by Pulsar [brokers](#broker) that enables connecting clients to automatically determine
-which Pulsar [cluster](#cluster) is responsible for a [topic](#topic) (and thus where message traffic for
-the topic needs to be routed).
-
-#### Service Discovery
-
-A mechanism provided by Pulsar that enables connecting clients to use just a single URL to interact
-with all the [brokers](#broker) in a [cluster](#cluster).
-
-#### Broker
-
-A stateless component of Pulsar [clusters](#cluster) that runs two other components: an HTTP server
-exposing a REST interface for administration and topic lookup and a [dispatcher](#dispatcher) that
-handles all message transfers. Pulsar clusters typically consist of multiple brokers.
-
-#### Dispatcher
-
-An asynchronous TCP server used for all data transfers in and out of a Pulsar [broker](#broker). The Pulsar
-dispatcher uses a custom binary protocol for all communications.
-
-### Storage
-
-#### BookKeeper
-
-[Apache BookKeeper](http://bookkeeper.apache.org/) is a scalable, low-latency persistent log storage
-service that Pulsar uses to store data.
- -#### Bookie - -Bookie is the name of an individual BookKeeper server. It is effectively the storage server of Pulsar. - -#### Ledger - -An append-only data structure in [BookKeeper](#bookkeeper) that is used to persistently store messages in Pulsar [topics](#topic). - -### Functions - -Pulsar Functions are lightweight functions that can consume messages from Pulsar topics, apply custom processing logic, and, if desired, publish results to topics. diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/schema-evolution-compatibility.md b/site2/website/versioned_docs/version-2.9.1-deprecated/schema-evolution-compatibility.md deleted file mode 100644 index 3e78429df69da2..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/schema-evolution-compatibility.md +++ /dev/null @@ -1,201 +0,0 @@ ---- -id: schema-evolution-compatibility -title: Schema evolution and compatibility -sidebar_label: "Schema evolution and compatibility" -original_id: schema-evolution-compatibility ---- - -Normally, schemas do not stay the same over a long period of time. Instead, they undergo evolutions to satisfy new needs. - -This chapter examines how Pulsar schema evolves and what Pulsar schema compatibility check strategies are. - -## Schema evolution - -Pulsar schema is defined in a data structure called `SchemaInfo`. - -Each `SchemaInfo` stored with a topic has a version. The version is used to manage the schema changes happening within a topic. - -The message produced with `SchemaInfo` is tagged with a schema version. When a message is consumed by a Pulsar client, the Pulsar client can use the schema version to retrieve the corresponding `SchemaInfo` and use the correct schema information to deserialize data. - -### What is schema evolution? - -Schemas store the details of attributes and types. To satisfy new business requirements, you need to update schemas inevitably over time, which is called **schema evolution**. - -Any schema changes affect downstream consumers. Schema evolution ensures that the downstream consumers can seamlessly handle data encoded with both old schemas and new schemas. - -### How Pulsar schema should evolve? - -The answer is Pulsar schema compatibility check strategy. It determines how schema compares old schemas with new schemas in topics. - -For more information, see [Schema compatibility check strategy](#schema-compatibility-check-strategy). - -### How does Pulsar support schema evolution? - -1. When a producer/consumer/reader connects to a broker, the broker deploys the schema compatibility checker configured by `schemaRegistryCompatibilityCheckers` to enforce schema compatibility check. - - The schema compatibility checker is one instance per schema type. - - Currently, Avro and JSON have their own compatibility checkers, while all the other schema types share the default compatibility checker which disables schema evolution. - -2. The producer/consumer/reader sends its client `SchemaInfo` to the broker. - -3. The broker knows the schema type and locates the schema compatibility checker for that type. - -4. The broker uses the checker to check if the `SchemaInfo` is compatible with the latest schema of the topic by applying its compatibility check strategy. - - Currently, the compatibility check strategy is configured at the namespace level and applied to all the topics within that namespace. - -## Schema compatibility check strategy - -Pulsar has 8 schema compatibility check strategies, which are summarized in the following table. 
-
-Suppose that you have a topic containing three schemas (V1, V2, and V3), V1 is the oldest and V3 is the latest:
-
-| Compatibility check strategy | Definition | Changes allowed | Check against which schema | Upgrade first |
-| --- | --- | --- | --- | --- |
-| `ALWAYS_COMPATIBLE` | Disable schema compatibility check. | All changes are allowed | All previous versions | Any order |
-| `ALWAYS_INCOMPATIBLE` | Disable schema evolution. | All changes are disabled | None | None |
-| `BACKWARD` | Consumers using the schema V3 can process data written by producers using the schema V3 or V2. | Add optional fields, delete fields | Latest version | Consumers |
-| `BACKWARD_TRANSITIVE` | Consumers using the schema V3 can process data written by producers using the schema V3, V2 or V1. | Add optional fields, delete fields | All previous versions | Consumers |
-| `FORWARD` | Consumers using the schema V3 or V2 can process data written by producers using the schema V3. | Add fields, delete optional fields | Latest version | Producers |
-| `FORWARD_TRANSITIVE` | Consumers using the schema V3, V2 or V1 can process data written by producers using the schema V3. | Add fields, delete optional fields | All previous versions | Producers |
-| `FULL` | Backward and forward compatible between the schema V3 and V2. | Modify optional fields | Latest version | Any order |
-| `FULL_TRANSITIVE` | Backward and forward compatible among the schema V3, V2, and V1. | Modify optional fields | All previous versions | Any order |
-
-### ALWAYS_COMPATIBLE and ALWAYS_INCOMPATIBLE
-
-| Compatibility check strategy | Definition | Note |
-| --- | --- | --- |
-| `ALWAYS_COMPATIBLE` | Disable schema compatibility check. | None |
-| `ALWAYS_INCOMPATIBLE` | Disable schema evolution, that is, any schema change is rejected. | For all schema types except Avro and JSON, the default schema compatibility check strategy is `ALWAYS_INCOMPATIBLE`; for Avro and JSON, the default is `FULL`. |
  1861. | - -#### Example - -* Example 1 - - In some situations, an application needs to store events of several different types in the same Pulsar topic. - - In particular, when developing a data model in an `Event Sourcing` style, you might have several kinds of events that affect the state of an entity. - - For example, for a user entity, there are `userCreated`, `userAddressChanged` and `userEnquiryReceived` events. The application requires that those events are always read in the same order. - - Consequently, those events need to go in the same Pulsar partition to maintain order. This application can use `ALWAYS_COMPATIBLE` to allow different kinds of events co-exist in the same topic. - -* Example 2 - - Sometimes we also make incompatible changes. - - For example, you are modifying a field type from `string` to `int`. - - In this case, you need to: - - * Upgrade all producers and consumers to the new schema versions at the same time. - - * Optionally, create a new topic and start migrating applications to use the new topic and the new schema, avoiding the need to handle two incompatible versions in the same topic. - -### BACKWARD and BACKWARD_TRANSITIVE - -Suppose that you have a topic containing three schemas (V1, V2, and V3), V1 is the oldest and V3 is the latest: - -| Compatibility check strategy | Definition | Description | -|---|---|---| -`BACKWARD` | Consumers using the new schema can process data written by producers using the **last schema**. | The consumers using the schema V3 can process data written by producers using the schema V3 or V2. | -`BACKWARD_TRANSITIVE` | Consumers using the new schema can process data written by producers using **all previous schemas**. | The consumers using the schema V3 can process data written by producers using the schema V3, V2, or V1. | - -#### Example - -* Example 1 - - Remove a field. - - A consumer constructed to process events without one field can process events written with the old schema containing the field, and the consumer will ignore that field. - -* Example 2 - - You want to load all Pulsar data into a Hive data warehouse and run SQL queries against the data. - - Same SQL queries must continue to work even the data is changed. To support it, you can evolve the schemas using the `BACKWARD` strategy. - -### FORWARD and FORWARD_TRANSITIVE - -Suppose that you have a topic containing three schemas (V1, V2, and V3), V1 is the oldest and V3 is the latest: - -| Compatibility check strategy | Definition | Description | -|---|---|---| -`FORWARD` | Consumers using the **last schema** can process data written by producers using a new schema, even though they may not be able to use the full capabilities of the new schema. | The consumers using the schema V3 or V2 can process data written by producers using the schema V3. | -`FORWARD_TRANSITIVE` | Consumers using **all previous schemas** can process data written by producers using a new schema. | The consumers using the schema V3, V2, or V1 can process data written by producers using the schema V3. - -#### Example - -* Example 1 - - Add a field. - - In most data formats, consumers written to process events without new fields can continue doing so even when they receive new events containing new fields. - -* Example 2 - - If a consumer has an application logic tied to a full version of a schema, the application logic may not be updated instantly when the schema evolves. - - In this case, you need to project data with a new schema onto an old schema that the application understands. 
- - Consequently, you can evolve the schemas using the `FORWARD` strategy to ensure that the old schema can process data encoded with the new schema. - -### FULL and FULL_TRANSITIVE - -Suppose that you have a topic containing three schemas (V1, V2, and V3), V1 is the oldest and V3 is the latest: - -| Compatibility check strategy | Definition | Description | Note | -| --- | --- | --- | --- | -| `FULL` | Schemas are both backward and forward compatible, which means: Consumers using the last schema can process data written by producers using the new schema. AND Consumers using the new schema can process data written by producers using the last schema. | Consumers using the schema V3 can process data written by producers using the schema V3 or V2. AND Consumers using the schema V3 or V2 can process data written by producers using the schema V3. |
<li>For Avro and JSON, the default schema compatibility check strategy is `FULL`.</li><li>For all schema types except Avro and JSON, the default schema compatibility check strategy is `ALWAYS_INCOMPATIBLE`.</li>
  1864. | -| `FULL_TRANSITIVE` | The new schema is backward and forward compatible with all previously registered schemas. | Consumers using the schema V3 can process data written by producers using the schema V3, V2 or V1. AND Consumers using the schema V3, V2 or V1 can process data written by producers using the schema V3. | None | - -#### Example - -In some data formats, for example, Avro, you can define fields with default values. Consequently, adding or removing a field with a default value is a fully compatible change. - -## Schema verification - -When a producer or a consumer tries to connect to a topic, a broker performs some checks to verify a schema. - -### Producer - -When a producer tries to connect to a topic (suppose ignore the schema auto creation), a broker does the following checks: - -* Check if the schema carried by the producer exists in the schema registry or not. - - * If the schema is already registered, then the producer is connected to a broker and produce messages with that schema. - - * If the schema is not registered, then Pulsar verifies if the schema is allowed to be registered based on the configured compatibility check strategy. - -### Consumer -When a consumer tries to connect to a topic, a broker checks if a carried schema is compatible with a registered schema based on the configured schema compatibility check strategy. - -| Compatibility check strategy | Check logic | -| --- | --- | -| `ALWAYS_COMPATIBLE` | All pass | -| `ALWAYS_INCOMPATIBLE` | No pass | -| `BACKWARD` | Can read the last schema | -| `BACKWARD_TRANSITIVE` | Can read all schemas | -| `FORWARD` | Can read the last schema | -| `FORWARD_TRANSITIVE` | Can read the last schema | -| `FULL` | Can read the last schema | -| `FULL_TRANSITIVE` | Can read all schemas | - -## Order of upgrading clients - -The order of upgrading client applications is determined by the compatibility check strategy. - -For example, the producers using schemas to write data to Pulsar and the consumers using schemas to read data from Pulsar. - -| Compatibility check strategy | Upgrade first | Description | -| --- | --- | --- | -| `ALWAYS_COMPATIBLE` | Any order | The compatibility check is disabled. Consequently, you can upgrade the producers and consumers in **any order**. | -| `ALWAYS_INCOMPATIBLE` | None | The schema evolution is disabled. | -|
<li>`BACKWARD`</li><li>`BACKWARD_TRANSITIVE`</li> | Consumers | There is no guarantee that consumers using the old schema can read data produced using the new schema. Consequently, **upgrade all consumers first**, and then start producing new data. |
-| <li>`FORWARD`</li><li>`FORWARD_TRANSITIVE`</li> | Producers | There is no guarantee that consumers using the new schema can read data produced using the old schema. Consequently, **upgrade all producers first** to use the new schema and ensure that the data already produced using the old schemas are not available to consumers, and then upgrade the consumers. |
-| <li>`FULL`</li><li>`FULL_TRANSITIVE`</li>
  1875. | Any order | There is no guarantee that consumers using the old schema can read data produced using the new schema and consumers using the new schema can read data produced using the old schema. Consequently, you can upgrade the producers and consumers in **any order**. | - - - - diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/schema-get-started.md b/site2/website/versioned_docs/version-2.9.1-deprecated/schema-get-started.md deleted file mode 100644 index afacb0fa51f2ef..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/schema-get-started.md +++ /dev/null @@ -1,102 +0,0 @@ ---- -id: schema-get-started -title: Get started -sidebar_label: "Get started" -original_id: schema-get-started ---- - -This chapter introduces Pulsar schemas and explains why they are important. - -## Schema Registry - -Type safety is extremely important in any application built around a message bus like Pulsar. - -Producers and consumers need some kind of mechanism for coordinating types at the topic level to avoid various potential problems arise. For example, serialization and deserialization issues. - -Applications typically adopt one of the following approaches to guarantee type safety in messaging. Both approaches are available in Pulsar, and you're free to adopt one or the other or to mix and match on a per-topic basis. - -#### Note -> -> Currently, the Pulsar schema registry is only available for the [Java client](client-libraries-java.md), [CGo client](client-libraries-cgo.md), [Python client](client-libraries-python.md), and [C++ client](client-libraries-cpp.md). - -### Client-side approach - -Producers and consumers are responsible for not only serializing and deserializing messages (which consist of raw bytes) but also "knowing" which types are being transmitted via which topics. - -If a producer is sending temperature sensor data on the topic `topic-1`, consumers of that topic will run into trouble if they attempt to parse that data as moisture sensor readings. - -Producers and consumers can send and receive messages consisting of raw byte arrays and leave all type safety enforcement to the application on an "out-of-band" basis. - -### Server-side approach - -Producers and consumers inform the system which data types can be transmitted via the topic. - -With this approach, the messaging system enforces type safety and ensures that producers and consumers remain synced. - -Pulsar has a built-in **schema registry** that enables clients to upload data schemas on a per-topic basis. Those schemas dictate which data types are recognized as valid for that topic. - -## Why use schema - -When a schema is enabled, Pulsar does parse data, it takes bytes as inputs and sends bytes as outputs. While data has meaning beyond bytes, you need to parse data and might encounter parse exceptions which mainly occur in the following situations: - -* The field does not exist - -* The field type has changed (for example, `string` is changed to `int`) - -There are a few methods to prevent and overcome these exceptions, for example, you can catch exceptions when parsing errors, which makes code hard to maintain; or you can adopt a schema management system to perform schema evolution, not to break downstream applications, and enforces type safety to max extend in the language you are using, the solution is Pulsar Schema. 
- -Pulsar schema enables you to use language-specific types of data when constructing and handling messages from simple types like `string` to more complex application-specific types. - -**Example** - -You can use the _User_ class to define the messages sent to Pulsar topics. - -``` - -public class User { - String name; - int age; -} - -``` - -When constructing a producer with the _User_ class, you can specify a schema or not as below. - -### Without schema - -If you construct a producer without specifying a schema, then the producer can only produce messages of type `byte[]`. If you have a POJO class, you need to serialize the POJO into bytes before sending messages. - -**Example** - -``` - -Producer producer = client.newProducer() - .topic(topic) - .create(); -User user = new User("Tom", 28); -byte[] message = … // serialize the `user` by yourself; -producer.send(message); - -``` - -### With schema - -If you construct a producer with specifying a schema, then you can send a class to a topic directly without worrying about how to serialize POJOs into bytes. - -**Example** - -This example constructs a producer with the _JSONSchema_, and you can send the _User_ class to topics directly without worrying about how to serialize it into bytes. - -``` - -Producer producer = client.newProducer(JSONSchema.of(User.class)) - .topic(topic) - .create(); -User user = new User("Tom", 28); -producer.send(user); - -``` - -### Summary - -When constructing a producer with a schema, you do not need to serialize messages into bytes, instead Pulsar schema does this job in the background. diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/schema-manage.md b/site2/website/versioned_docs/version-2.9.1-deprecated/schema-manage.md deleted file mode 100644 index c588aae619eee9..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/schema-manage.md +++ /dev/null @@ -1,639 +0,0 @@ ---- -id: schema-manage -title: Manage schema -sidebar_label: "Manage schema" -original_id: schema-manage ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -This guide demonstrates the ways to manage schemas: - -* Automatically - - * [Schema AutoUpdate](#schema-autoupdate) - -* Manually - - * [Schema manual management](#schema-manual-management) - - * [Custom schema storage](#custom-schema-storage) - -## Schema AutoUpdate - -If a schema passes the schema compatibility check, Pulsar producer automatically updates this schema to the topic it produces by default. - -### AutoUpdate for producer - -For a producer, the `AutoUpdate` happens in the following cases: - -* If a **topic doesn’t have a schema**, Pulsar registers a schema automatically. - -* If a **topic has a schema**: - - * If a **producer doesn’t carry a schema**: - - * If `isSchemaValidationEnforced` or `schemaValidationEnforced` is **disabled** in the namespace to which the topic belongs, the producer is allowed to connect to the topic and produce data. - - * If `isSchemaValidationEnforced` or `schemaValidationEnforced` is **enabled** in the namespace to which the topic belongs, the producer is rejected and disconnected. - - * If a **producer carries a schema**: - - A broker performs the compatibility check based on the configured compatibility check strategy of the namespace to which the topic belongs. - - * If the schema is registered, a producer is connected to a broker. 
- - * If the schema is not registered: - - * If `isAllowAutoUpdateSchema` sets to **false**, the producer is rejected to connect to a broker. - - * If `isAllowAutoUpdateSchema` sets to **true**: - - * If the schema passes the compatibility check, then the broker registers a new schema automatically for the topic and the producer is connected. - - * If the schema does not pass the compatibility check, then the broker does not register a schema and the producer is rejected to connect to a broker. - -![AutoUpdate Producer](/assets/schema-producer.png) - -### AutoUpdate for consumer - -For a consumer, the `AutoUpdate` happens in the following cases: - -* If a **consumer connects to a topic without a schema** (which means the consumer receiving raw bytes), the consumer can connect to the topic successfully without doing any compatibility check. - -* If a **consumer connects to a topic with a schema**. - - * If a topic does not have all of them (a schema/data/a local consumer and a local producer): - - * If `isAllowAutoUpdateSchema` sets to **true**, then the consumer registers a schema and it is connected to a broker. - - * If `isAllowAutoUpdateSchema` sets to **false**, then the consumer is rejected to connect to a broker. - - * If a topic has one of them (a schema/data/a local consumer and a local producer), then the schema compatibility check is performed. - - * If the schema passes the compatibility check, then the consumer is connected to the broker. - - * If the schema does not pass the compatibility check, then the consumer is rejected to connect to the broker. - -![AutoUpdate Consumer](/assets/schema-consumer.png) - - -### Manage AutoUpdate strategy - -You can use the `pulsar-admin` command to manage the `AutoUpdate` strategy as below: - -* [Enable AutoUpdate](#enable-autoupdate) - -* [Disable AutoUpdate](#disable-autoupdate) - -* [Adjust compatibility](#adjust-compatibility) - -#### Enable AutoUpdate - -To enable `AutoUpdate` on a namespace, you can use the `pulsar-admin` command. - -```bash - -bin/pulsar-admin namespaces set-is-allow-auto-update-schema --enable tenant/namespace - -``` - -#### Disable AutoUpdate - -To disable `AutoUpdate` on a namespace, you can use the `pulsar-admin` command. - -```bash - -bin/pulsar-admin namespaces set-is-allow-auto-update-schema --disable tenant/namespace - -``` - -Once the `AutoUpdate` is disabled, you can only register a new schema using the `pulsar-admin` command. - -#### Adjust compatibility - -To adjust the schema compatibility level on a namespace, you can use the `pulsar-admin` command. - -```bash - -bin/pulsar-admin namespaces set-schema-compatibility-strategy --compatibility tenant/namespace - -``` - -### Schema validation - -By default, `schemaValidationEnforced` is **disabled** for producers: - -* This means a producer without a schema can produce any kind of messages to a topic with schemas, which may result in producing trash data to the topic. - -* This allows non-java language clients that don’t support schema can produce messages to a topic with schemas. - -However, if you want a stronger guarantee on the topics with schemas, you can enable `schemaValidationEnforced` across the whole cluster or on a per-namespace basis. - -#### Enable schema validation - -To enable `schemaValidationEnforced` on a namespace, you can use the `pulsar-admin` command. 
- -```bash - -bin/pulsar-admin namespaces set-schema-validation-enforce --enable tenant/namespace - -``` - -#### Disable schema validation - -To disable `schemaValidationEnforced` on a namespace, you can use the `pulsar-admin` command. - -```bash - -bin/pulsar-admin namespaces set-schema-validation-enforce --disable tenant/namespace - -``` - -## Schema manual management - -To manage schemas, you can use one of the following methods. - -| Method | Description | -| --- | --- | -| **Admin CLI**
| You can use the `pulsar-admin` tool to manage Pulsar schemas, brokers, clusters, sources, sinks, topics, tenants, and so on. For more information about how to use the `pulsar-admin` tool, see [here](reference-pulsar-admin.md). |
-| **REST API** | Pulsar exposes a schema-related management API in Pulsar's admin RESTful API. You can access the admin RESTful endpoint directly to manage schemas. For more information about how to use the Pulsar REST API, see [here](http://pulsar.apache.org/admin-rest-api/). |
-| **Java Admin API**
  1878. | Pulsar provides Java admin library. | - -### Upload a schema - -To upload (register) a new schema for a topic, you can use one of the following methods. - -````mdx-code-block - - - - -Use the `upload` subcommand. - -```bash - -$ pulsar-admin schemas upload --filename - -``` - -The `schema-definition-file` is in JSON format. - -```json - -{ - "type": "", - "schema": "", - "properties": {} // the properties associated with the schema -} - -``` - -The `schema-definition-file` includes the following fields: - -| Field | Description | -| --- | --- | -| `type` | The schema type. | -| `schema` | The schema definition data, which is encoded in UTF 8 charset.
<li>If the schema is a **primitive** schema, this field should be blank.</li><li>If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition.</li> |
-| `properties` | The additional properties associated with the schema. |
-
-Here are examples of the `schema-definition-file` for a JSON schema.
-
-**Example 1**
-
-```json
-
-{
-  "type": "JSON",
-  "schema": "{\"type\":\"record\",\"name\":\"User\",\"namespace\":\"com.foo\",\"fields\":[{\"name\":\"file1\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"file2\",\"type\":\"string\",\"default\":null},{\"name\":\"file3\",\"type\":[\"null\",\"string\"],\"default\":\"dfdf\"}]}",
-  "properties": {}
-}
-
-```
-
-**Example 2**
-
-```json
-
-{
-  "type": "STRING",
-  "schema": "",
-  "properties": {
-    "key1": "value1"
-  }
-}
-
-```
-
    - - -Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v2/schemas/:tenant/:namespace/:topic/schema|operation/uploadSchema?version=@pulsar:version_number@} - -The post payload is in JSON format. - -```json - -{ - "type": "", - "schema": "", - "properties": {} // the properties associated with the schema -} - -``` - -The post payload includes the following fields: - -| Field | Description | -| --- | --- | -| `type` | The schema type. | -| `schema` | The schema definition data, which is encoded in UTF 8 charset.
<li>If the schema is a **primitive** schema, this field should be blank.</li><li>If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition.</li> |
-| `properties` | The additional properties associated with the schema. |
-
    - - -```java - -void createSchema(String topic, PostSchemaPayload schemaPayload) - -``` - -The `PostSchemaPayload` includes the following fields: - -| Field | Description | -| --- | --- | -| `type` | The schema type. | -| `schema` | The schema definition data, which is encoded in UTF 8 charset.
<li>If the schema is a **primitive** schema, this field should be blank.</li><li>If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition.</li> |
-| `properties` | The additional properties associated with the schema. |
-
-Here is an example of `PostSchemaPayload`:
-
-```java
-
-PulsarAdmin admin = …;
-
-PostSchemaPayload payload = new PostSchemaPayload();
-payload.setType("INT8");
-payload.setSchema("");
-
-admin.createSchema("my-tenant/my-ns/my-topic", payload);
-
-```
-
    - -
    -```` - -### Get a schema (latest) - -To get the latest schema for a topic, you can use one of the following methods. - -````mdx-code-block - - - - -Use the `get` subcommand. - -```bash - -$ pulsar-admin schemas get - -{ - "version": 0, - "type": "String", - "timestamp": 0, - "data": "string", - "properties": { - "property1": "string", - "property2": "string" - } -} - -``` - - - - -Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v2/schemas/:tenant/:namespace/:topic/schema|operation/getSchema?version=@pulsar:version_number@} - -Here is an example of a response, which is returned in JSON format. - -```json - -{ - "version": "", - "type": "", - "timestamp": "", - "data": "", - "properties": {} // the properties associated with the schema -} - -``` - -The response includes the following fields: - -| Field | Description | -| --- | --- | -| `version` | The schema version, which is a long number. | -| `type` | The schema type. | -| `timestamp` | The timestamp of creating this version of schema. | -| `data` | The schema definition data, which is encoded in UTF 8 charset.
<li>If the schema is a **primitive** schema, this field should be blank.</li><li>If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition.</li> |
-| `properties` | The additional properties associated with the schema. |
-
    - - -```java - -SchemaInfo createSchema(String topic) - -``` - -The `SchemaInfo` includes the following fields: - -| Field | Description | -| --- | --- | -| `name` | The schema name. | -| `type` | The schema type. | -| `schema` | A byte array of the schema definition data, which is encoded in UTF 8 charset.
<li>If the schema is a **primitive** schema, this byte array should be empty.</li><li>If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition converted to a byte array.</li> |
-| `properties` | The additional properties associated with the schema. |
-
-Here is an example of `SchemaInfo`:
-
-```java
-
-PulsarAdmin admin = …;
-
-SchemaInfo si = admin.getSchema("my-tenant/my-ns/my-topic");
-
-```
-
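-Once fetched, you can inspect the returned `SchemaInfo` against the fields listed above — a minimal sketch, reusing `si` from the example:
-
-```java
-
-// getName() and getType() correspond to the `name` and `type` fields above.
-System.out.println(si.getName());
-System.out.println(si.getType());
-
-```
-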
    - -
    -```` - -### Get a schema (specific) - -To get a specific version of a schema, you can use one of the following methods. - -````mdx-code-block - - - - -Use the `get` subcommand. - -```bash - -$ pulsar-admin schemas get --version= - -``` - - - - -Send a `GET` request to a schema endpoint: {@inject: endpoint|GET|/admin/v2/schemas/:tenant/:namespace/:topic/schema/:version|operation/getSchema?version=@pulsar:version_number@} - -Here is an example of a response, which is returned in JSON format. - -```json - -{ - "version": "", - "type": "", - "timestamp": "", - "data": "", - "properties": {} // the properties associated with the schema -} - -``` - -The response includes the following fields: - -| Field | Description | -| --- | --- | -| `version` | The schema version, which is a long number. | -| `type` | The schema type. | -| `timestamp` | The timestamp of creating this version of schema. | -| `data` | The schema definition data, which is encoded in UTF 8 charset.
<li>If the schema is a **primitive** schema, this field should be blank.</li><li>If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition.</li> |
-| `properties` | The additional properties associated with the schema. |
-
    - - -```java - -SchemaInfo createSchema(String topic, long version) - -``` - -The `SchemaInfo` includes the following fields: - -| Field | Description | -| --- | --- | -| `name` | The schema name. | -| `type` | The schema type. | -| `schema` | A byte array of the schema definition data, which is encoded in UTF 8.
<li>If the schema is a **primitive** schema, this byte array should be empty.</li><li>If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition converted to a byte array.</li> |
-| `properties` | The additional properties associated with the schema. |
-
-Here is an example of `SchemaInfo`:
-
-```java
-
-PulsarAdmin admin = …;
-
-SchemaInfo si = admin.getSchema("my-tenant/my-ns/my-topic", 1L);
-
-```
-
    - -
    -```` - -### Extract a schema - -To provide a schema via a topic, you can use the following method. - -````mdx-code-block - - - - -Use the `extract` subcommand. - -```bash - -$ pulsar-admin schemas extract --classname --jar --type - -``` - - - - -```` - -### Delete a schema - -To delete a schema for a topic, you can use one of the following methods. - -:::note - -In any case, the **delete** action deletes **all versions** of a schema registered for a topic. - -::: - -````mdx-code-block - - - - -Use the `delete` subcommand. - -```bash - -$ pulsar-admin schemas delete - -``` - - - - -Send a `DELETE` request to a schema endpoint: {@inject: endpoint|DELETE|/admin/v2/schemas/:tenant/:namespace/:topic/schema|operation/deleteSchema?version=@pulsar:version_number@} - -Here is an example of a response, which is returned in JSON format. - -```json - -{ - "version": "", -} - -``` - -The response includes the following field: - -Field | Description | ----|---| -`version` | The schema version, which is a long number. | - - - - -```java - -void deleteSchema(String topic) - -``` - -Here is an example of deleting a schema. - -```java - -PulsarAdmin admin = …; - -admin.deleteSchema("my-tenant/my-ns/my-topic"); - -``` - - - - -```` - -## Custom schema storage - -By default, Pulsar stores various data types of schemas in [Apache BookKeeper](https://bookkeeper.apache.org) deployed alongside Pulsar. - -However, you can use another storage system if needed. - -### Implement - -To use a non-default (non-BookKeeper) storage system for Pulsar schemas, you need to implement the following Java interfaces: - -* [SchemaStorage interface](#schemastorage-interface) - -* [SchemaStorageFactory interface](#schemastoragefactory-interface) - -#### SchemaStorage interface - -The `SchemaStorage` interface has the following methods: - -```java - -public interface SchemaStorage { - // How schemas are updated - CompletableFuture put(String key, byte[] value, byte[] hash); - - // How schemas are fetched from storage - CompletableFuture get(String key, SchemaVersion version); - - // How schemas are deleted - CompletableFuture delete(String key); - - // Utility method for converting a schema version byte array to a SchemaVersion object - SchemaVersion versionFromBytes(byte[] version); - - // Startup behavior for the schema storage client - void start() throws Exception; - - // Shutdown behavior for the schema storage client - void close() throws Exception; -} - -``` - -:::tip - -For a complete example of **schema storage** implementation, see [BookKeeperSchemaStorage](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/schema/BookkeeperSchemaStorage.java) class. - -::: - -#### SchemaStorageFactory interface - -The `SchemaStorageFactory` interface has the following method: - -```java - -public interface SchemaStorageFactory { - @NotNull - SchemaStorage create(PulsarService pulsar) throws Exception; -} - -``` - -:::tip - -For a complete example of **schema storage factory** implementation, see [BookKeeperSchemaStorageFactory](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/schema/BookkeeperSchemaStorageFactory.java) class. - -::: - -### Deploy - -To use your custom schema storage implementation, perform the following steps. - -1. Package the implementation in a [JAR](https://docs.oracle.com/javase/tutorial/deployment/jar/basicsindex.html) file. - -2. 
Add the JAR file to the `lib` folder in your Pulsar binary or source distribution. - -3. Change the `schemaRegistryStorageClassName` configuration in `broker.conf` to your custom factory class. - -4. Start Pulsar. diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/schema-understand.md b/site2/website/versioned_docs/version-2.9.1-deprecated/schema-understand.md deleted file mode 100644 index 55bc662c666338..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/schema-understand.md +++ /dev/null @@ -1,576 +0,0 @@ ---- -id: schema-understand -title: Understand schema -sidebar_label: "Understand schema" -original_id: schema-understand ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -This chapter explains the basic concepts of Pulsar schema, focuses on the topics of particular importance, and provides additional background. - -## SchemaInfo - -Pulsar schema is defined in a data structure called `SchemaInfo`. - -The `SchemaInfo` is stored and enforced on a per-topic basis and cannot be stored at the namespace or tenant level. - -A `SchemaInfo` consists of the following fields: - -| Field | Description | -| --- | --- | -| `name` | Schema name (a string). | -| `type` | Schema type, which determines how to interpret the schema data.
<li>Predefined schema: see [here](schema-understand.md#schema-type).</li><li>Customized schema: it is left as an empty string.</li>
  1930. | -| `schema`(`payload`) | Schema data, which is a sequence of 8-bit unsigned bytes and schema-type specific. | -| `properties` | It is a user defined properties as a string/string map. Applications can use this bag for carrying any application specific logics. Possible properties might be the Git hash associated with the schema, an environment string like `dev` or `prod`. | - -**Example** - -This is the `SchemaInfo` of a string. - -```json - -{ - "name": "test-string-schema", - "type": "STRING", - "schema": "", - "properties": {} -} - -``` - -## Schema type - -Pulsar supports various schema types, which are mainly divided into two categories: - -* Primitive type - -* Complex type - -### Primitive type - -Currently, Pulsar supports the following primitive types: - -| Primitive Type | Description | -|---|---| -| `BOOLEAN` | A binary value | -| `INT8` | A 8-bit signed integer | -| `INT16` | A 16-bit signed integer | -| `INT32` | A 32-bit signed integer | -| `INT64` | A 64-bit signed integer | -| `FLOAT` | A single precision (32-bit) IEEE 754 floating-point number | -| `DOUBLE` | A double-precision (64-bit) IEEE 754 floating-point number | -| `BYTES` | A sequence of 8-bit unsigned bytes | -| `STRING` | A Unicode character sequence | -| `TIMESTAMP` (`DATE`, `TIME`) | A logic type represents a specific instant in time with millisecond precision.
    It stores the number of milliseconds since `January 1, 1970, 00:00:00 GMT` as an `INT64` value | -| INSTANT | A single instantaneous point on the time-line with nanoseconds precision| -| LOCAL_DATE | An immutable date-time object that represents a date, often viewed as year-month-day| -| LOCAL_TIME | An immutable date-time object that represents a time, often viewed as hour-minute-second. Time is represented to nanosecond precision.| -| LOCAL_DATE_TIME | An immutable date-time object that represents a date-time, often viewed as year-month-day-hour-minute-second | - -For primitive types, Pulsar does not store any schema data in `SchemaInfo`. The `type` in `SchemaInfo` is used to determine how to serialize and deserialize the data. - -Some of the primitive schema implementations can use `properties` to store implementation-specific tunable settings. For example, a `string` schema can use `properties` to store the encoding charset to serialize and deserialize strings. - -The conversions between **Pulsar schema types** and **language-specific primitive types** are as below. - -| Schema Type | Java Type| Python Type | Go Type | -|---|---|---|---| -| BOOLEAN | boolean | bool | bool | -| INT8 | byte | | int8 | -| INT16 | short | | int16 | -| INT32 | int | | int32 | -| INT64 | long | | int64 | -| FLOAT | float | float | float32 | -| DOUBLE | double | float | float64| -| BYTES | byte[], ByteBuffer, ByteBuf | bytes | []byte | -| STRING | string | str | string| -| TIMESTAMP | java.sql.Timestamp | | | -| TIME | java.sql.Time | | | -| DATE | java.util.Date | | | -| INSTANT | java.time.Instant | | | -| LOCAL_DATE | java.time.LocalDate | | | -| LOCAL_TIME | java.time.LocalDateTime | | -| LOCAL_DATE_TIME | java.time.LocalTime | | - -**Example** - -This example demonstrates how to use a string schema. - -1. Create a producer with a string schema and send messages. - - ```java - - Producer producer = client.newProducer(Schema.STRING).create(); - producer.newMessage().value("Hello Pulsar!").send(); - - ``` - -2. Create a consumer with a string schema and receive messages. - - ```java - - Consumer consumer = client.newConsumer(Schema.STRING).subscribe(); - consumer.receive(); - - ``` - -### Complex type - -Currently, Pulsar supports the following complex types: - -| Complex Type | Description | -|---|---| -| `keyvalue` | Represents a complex type of a key/value pair. | -| `struct` | Handles structured data. It supports `AvroBaseStructSchema` and `ProtobufNativeSchema`. | - -#### keyvalue - -`Keyvalue` schema helps applications define schemas for both key and value. - -For `SchemaInfo` of `keyvalue` schema, Pulsar stores the `SchemaInfo` of key schema and the `SchemaInfo` of value schema together. - -Pulsar provides the following methods to encode a key/value pair in messages: - -* `INLINE` - -* `SEPARATED` - -You can choose the encoding type when constructing the key/value schema. - -````mdx-code-block - - - - -Key/value pairs are encoded together in the message payload. - - - - -Key is encoded in the message key and the value is encoded in the message payload. - -**Example** - -This example shows how to construct a key/value schema and then use it to produce and consume messages. - -1. Construct a key/value schema with `INLINE` encoding type. - - ```java - - Schema> kvSchema = Schema.KeyValue( - Schema.INT32, - Schema.STRING, - KeyValueEncodingType.INLINE - ); - - ``` - -2. Optionally, construct a key/value schema with `SEPARATED` encoding type. 
- - ```java - - Schema> kvSchema = Schema.KeyValue( - Schema.INT32, - Schema.STRING, - KeyValueEncodingType.SEPARATED - ); - - ``` - -3. Produce messages using a key/value schema. - - ```java - - Schema> kvSchema = Schema.KeyValue( - Schema.INT32, - Schema.STRING, - KeyValueEncodingType.SEPARATED - ); - - Producer> producer = client.newProducer(kvSchema) - .topic(TOPIC) - .create(); - - final int key = 100; - final String value = "value-100"; - - // send the key/value message - producer.newMessage() - .value(new KeyValue(key, value)) - .send(); - - ``` - -4. Consume messages using a key/value schema. - - ```java - - Schema> kvSchema = Schema.KeyValue( - Schema.INT32, - Schema.STRING, - KeyValueEncodingType.SEPARATED - ); - - Consumer> consumer = client.newConsumer(kvSchema) - ... - .topic(TOPIC) - .subscriptionName(SubscriptionName).subscribe(); - - // receive key/value pair - Message> msg = consumer.receive(); - KeyValue kv = msg.getValue(); - - ``` - - - - -```` - -#### struct - -This section describes the details of type and usage of the `struct` schema. - -##### Type - -`struct` schema supports `AvroBaseStructSchema` and `ProtobufNativeSchema`. - -|Type|Description| ----|---| -`AvroBaseStructSchema`|Pulsar uses [Avro Specification](http://avro.apache.org/docs/current/spec.html) to declare the schema definition for `AvroBaseStructSchema`, which supports `AvroSchema`, `JsonSchema`, and `ProtobufSchema`.

    This allows Pulsar:
    - to use the same tools to manage schema definitions
    - to use different serialization or deserialization methods to handle data| -`ProtobufNativeSchema`|`ProtobufNativeSchema` is based on protobuf native Descriptor.

    This allows Pulsar:
    - to use native protobuf-v3 to serialize or deserialize data
    - to use `AutoConsume` to deserialize data. - -##### Usage - -Pulsar provides the following methods to use the `struct` schema: - -* `static` - -* `generic` - -* `SchemaDefinition` - -````mdx-code-block - - - - -You can predefine the `struct` schema, which can be a POJO in Java, a `struct` in Go, or classes generated by Avro or Protobuf tools. - -**Example** - -Pulsar gets the schema definition from the predefined `struct` using an Avro library. The schema definition is the schema data stored as a part of the `SchemaInfo`. - -1. Create the _User_ class to define the messages sent to Pulsar topics. - - ```java - - @Builder - @AllArgsConstructor - @NoArgsConstructor - public static class User { - String name; - int age; - } - - ``` - -2. Create a producer with a `struct` schema and send messages. - - ```java - - Producer producer = client.newProducer(Schema.AVRO(User.class)).create(); - producer.newMessage().value(User.builder().name("pulsar-user").age(1).build()).send(); - - ``` - -3. Create a consumer with a `struct` schema and receive messages - - ```java - - Consumer consumer = client.newConsumer(Schema.AVRO(User.class)).subscribe(); - User user = consumer.receive(); - - ``` - - - - -Sometimes applications do not have pre-defined structs, and you can use this method to define schema and access data. - -You can define the `struct` schema using the `GenericSchemaBuilder`, generate a generic struct using `GenericRecordBuilder` and consume messages into `GenericRecord`. - -**Example** - -1. Use `RecordSchemaBuilder` to build a schema. - - ```java - - RecordSchemaBuilder recordSchemaBuilder = SchemaBuilder.record("schemaName"); - recordSchemaBuilder.field("intField").type(SchemaType.INT32); - SchemaInfo schemaInfo = recordSchemaBuilder.build(SchemaType.AVRO); - - Producer producer = client.newProducer(Schema.generic(schemaInfo)).create(); - - ``` - -2. Use `RecordBuilder` to build the struct records. - - ```java - - producer.newMessage().value(schema.newRecordBuilder() - .set("intField", 32) - .build()).send(); - - ``` - - - - -You can define the `schemaDefinition` to generate a `struct` schema. - -**Example** - -1. Create the _User_ class to define the messages sent to Pulsar topics. - - ```java - - @Builder - @AllArgsConstructor - @NoArgsConstructor - public static class User { - String name; - int age; - } - - ``` - -2. Create a producer with a `SchemaDefinition` and send messages. - - ```java - - SchemaDefinition schemaDefinition = SchemaDefinition.builder().withPojo(User.class).build(); - Producer producer = client.newProducer(Schema.AVRO(schemaDefinition)).create(); - producer.newMessage().value(User.builder().name("pulsar-user").age(1).build()).send(); - - ``` - -3. Create a consumer with a `SchemaDefinition` schema and receive messages - - ```java - - SchemaDefinition schemaDefinition = SchemaDefinition.builder().withPojo(User.class).build(); - Consumer consumer = client.newConsumer(Schema.AVRO(schemaDefinition)).subscribe(); - User user = consumer.receive().getValue(); - - ``` - - - - -```` - -### Auto Schema - -If you don't know the schema type of a Pulsar topic in advance, you can use AUTO schema to produce or consume generic records to or from brokers. - -| Auto Schema Type | Description | -|---|---| -| `AUTO_PRODUCE` | This is useful for transferring data **from a producer to a Pulsar topic that has a schema**. | -| `AUTO_CONSUME` | This is useful for transferring data **from a Pulsar topic that has a schema to a consumer**. 
| - -#### AUTO_PRODUCE - -`AUTO_PRODUCE` schema helps a producer validate whether the bytes sent by the producer is compatible with the schema of a topic. - -**Example** - -Suppose that: - -* You have a producer processing messages from a Kafka topic _K_. - -* You have a Pulsar topic _P_, and you do not know its schema type. - -* Your application reads the messages from _K_ and writes the messages to _P_. - -In this case, you can use `AUTO_PRODUCE` to verify whether the bytes produced by _K_ can be sent to _P_ or not. - -```java - -Produce pulsarProducer = client.newProducer(Schema.AUTO_PRODUCE()) - … - .create(); - -byte[] kafkaMessageBytes = … ; - -pulsarProducer.produce(kafkaMessageBytes); - -``` - -#### AUTO_CONSUME - -`AUTO_CONSUME` schema helps a Pulsar topic validate whether the bytes sent by a Pulsar topic is compatible with a consumer, that is, the Pulsar topic deserializes messages into language-specific objects using the `SchemaInfo` retrieved from broker-side. - -Currently, `AUTO_CONSUME` supports AVRO, JSON and ProtobufNativeSchema schemas. It deserializes messages into `GenericRecord`. - -**Example** - -Suppose that: - -* You have a Pulsar topic _P_. - -* You have a consumer (for example, MySQL) receiving messages from the topic _P_. - -* Your application reads the messages from _P_ and writes the messages to MySQL. - -In this case, you can use `AUTO_CONSUME` to verify whether the bytes produced by _P_ can be sent to MySQL or not. - -```java - -Consumer pulsarConsumer = client.newConsumer(Schema.AUTO_CONSUME()) - … - .subscribe(); - -Message msg = consumer.receive() ; -GenericRecord record = msg.getValue(); - -``` - -### Native Avro Schema - -When migrating or ingesting event or message data from external systems (such as Kafka and Cassandra), the events are often already serialized in Avro format. The applications producing the data typically have validated the data against their schemas (including compatibility checks) and stored them in a database or a dedicated service (such as a schema registry). The schema of each serialized data record is usually retrievable by some metadata attached to that record. In such cases, a Pulsar producer doesn't need to repeat the schema validation step when sending the ingested events to a topic. All it needs to do is passing each message or event with its schema to Pulsar. - -Hence, we provide `Schema.NATIVE_AVRO` to wrap a native Avro schema of type `org.apache.avro.Schema`. The result is a schema instance of Pulsar that accepts a serialized Avro payload without validating it against the wrapped Avro schema. - -**Example** - -```java - -org.apache.avro.Schema nativeAvroSchema = … ; - -Producer producer = pulsarClient.newProducer().topic("ingress").create(); - -byte[] content = … ; - -producer.newMessage(Schema.NATIVE_AVRO(nativeAvroSchema)).value(content).send(); - -``` - -## Schema version - -Each `SchemaInfo` stored with a topic has a version. Schema version manages schema changes happening within a topic. - -Messages produced with a given `SchemaInfo` is tagged with a schema version, so when a message is consumed by a Pulsar client, the Pulsar client can use the schema version to retrieve the corresponding `SchemaInfo` and then use the `SchemaInfo` to deserialize data. - -Schemas are versioned in succession. Schema storage happens in a broker that handles the associated topics so that version assignments can be made. 
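-
-Consumers can also observe this version tag directly on each received message. The following is a minimal, illustrative sketch; the `consumer` instance is assumed rather than taken from this page:
-
-```java
-
-Message<GenericRecord> msg = consumer.receive();
-// The broker tags each message with the version of the schema it was produced with.
-byte[] schemaVersion = msg.getSchemaVersion();
-
-```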
- -Once a version is assigned/fetched to/for a schema, all subsequent messages produced by that producer are tagged with the appropriate version. - -**Example** - -The following example illustrates how the schema version works. - -Suppose that a Pulsar [Java client](client-libraries-java.md) created using the code below attempts to connect to Pulsar and begins to send messages: - -```java - -PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://localhost:6650") - .build(); - -Producer producer = client.newProducer(JSONSchema.of(SensorReading.class)) - .topic("sensor-data") - .sendTimeout(3, TimeUnit.SECONDS) - .create(); - -``` - -The table below lists the possible scenarios when this connection attempt occurs and what happens in each scenario: - -| Scenario | What happens | -| --- | --- | -|
<li>No schema exists for the topic.</li> | (1) The producer is created using the given schema. (2) Since no existing schema is compatible with the `SensorReading` schema, the schema is transmitted to the broker and stored. (3) Any consumer created using the same schema or topic can consume messages from the `sensor-data` topic. |
-| <li>A schema already exists.</li><li>The producer connects using the same schema that is already stored.</li> | (1) The schema is transmitted to the broker. (2) The broker determines that the schema is compatible. (3) The broker attempts to store the schema in [BookKeeper](concepts-architecture-overview.md#persistent-storage) but then determines that it's already stored, so it is used to tag produced messages. |
-| <li>A schema already exists.</li><li>The producer connects using a new schema that is compatible.</li>
  1938. | (1) The schema is transmitted to the broker. (2) The broker determines that the schema is compatible and stores the new schema as the current version (with a new version number). | - -## How does schema work - -Pulsar schemas are applied and enforced at the **topic** level (schemas cannot be applied at the namespace or tenant level). - -Producers and consumers upload schemas to brokers, so Pulsar schemas work on the producer side and the consumer side. - -### Producer side - -This diagram illustrates how does schema work on the Producer side. - -![Schema works at the producer side](/assets/schema-producer.png) - -1. The application uses a schema instance to construct a producer instance. - - The schema instance defines the schema for the data being produced using the producer instance. - - Take AVRO as an example, Pulsar extracts schema definition from the POJO class and constructs the `SchemaInfo` that the producer needs to pass to a broker when it connects. - -2. The producer connects to the broker with the `SchemaInfo` extracted from the passed-in schema instance. - -3. The broker looks up the schema in the schema storage to check if it is already a registered schema. - -4. If yes, the broker skips the schema validation since it is a known schema, and returns the schema version to the producer. - -5. If no, the broker verifies whether a schema can be automatically created in this namespace: - - * If `isAllowAutoUpdateSchema` sets to **true**, then a schema can be created, and the broker validates the schema based on the schema compatibility check strategy defined for the topic. - - * If `isAllowAutoUpdateSchema` sets to **false**, then a schema can not be created, and the producer is rejected to connect to the broker. - -**Tip**: - -`isAllowAutoUpdateSchema` can be set via **Pulsar admin API** or **REST API.** - -For how to set `isAllowAutoUpdateSchema` via Pulsar admin API, see [Manage AutoUpdate Strategy](schema-manage.md/#manage-autoupdate-strategy). - -6. If the schema is allowed to be updated, then the compatible strategy check is performed. - - * If the schema is compatible, the broker stores it and returns the schema version to the producer. - - All the messages produced by this producer are tagged with the schema version. - - * If the schema is incompatible, the broker rejects it. - -### Consumer side - -This diagram illustrates how does Schema work on the consumer side. - -![Schema works at the consumer side](/assets/schema-consumer.png) - -1. The application uses a schema instance to construct a consumer instance. - - The schema instance defines the schema that the consumer uses for decoding messages received from a broker. - -2. The consumer connects to the broker with the `SchemaInfo` extracted from the passed-in schema instance. - -3. The broker determines whether the topic has one of them (a schema/data/a local consumer and a local producer). - -4. If a topic does not have all of them (a schema/data/a local consumer and a local producer): - - * If `isAllowAutoUpdateSchema` sets to **true**, then the consumer registers a schema and it is connected to a broker. - - * If `isAllowAutoUpdateSchema` sets to **false**, then the consumer is rejected to connect to a broker. - -5. If a topic has one of them (a schema/data/a local consumer and a local producer), then the schema compatibility check is performed. - - * If the schema passes the compatibility check, then the consumer is connected to the broker. 
- - * If the schema does not pass the compatibility check, then the consumer is rejected to connect to the broker. - -6. The consumer receives messages from the broker. - - If the schema used by the consumer supports schema versioning (for example, AVRO schema), the consumer fetches the `SchemaInfo` of the version tagged in messages and uses the passed-in schema and the schema tagged in messages to decode the messages. diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/security-athenz.md b/site2/website/versioned_docs/version-2.9.1-deprecated/security-athenz.md deleted file mode 100644 index 8a39fe25316d07..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/security-athenz.md +++ /dev/null @@ -1,98 +0,0 @@ ---- -id: security-athenz -title: Authentication using Athenz -sidebar_label: "Authentication using Athenz" -original_id: security-athenz ---- - -[Athenz](https://github.com/AthenZ/athenz) is a role-based authentication/authorization system. In Pulsar, you can use Athenz role tokens (also known as *z-tokens*) to establish the identify of the client. - -## Athenz authentication settings - -A [decentralized Athenz system](https://github.com/AthenZ/athenz/blob/master/docs/decent_authz_flow.md) contains an [authori**Z**ation **M**anagement **S**ystem](https://github.com/AthenZ/athenz/blob/master/docs/setup_zms.md) (ZMS) server and an [authori**Z**ation **T**oken **S**ystem](https://github.com/AthenZ/athenz/blob/master/docs/setup_zts) (ZTS) server. - -To begin, you need to set up Athenz service access control. You need to create domains for the *provider* (which provides some resources to other services with some authentication/authorization policies) and the *tenant* (which is provisioned to access some resources in a provider). In this case, the provider corresponds to the Pulsar service itself and the tenant corresponds to each application using Pulsar (typically, a [tenant](reference-terminology.md#tenant) in Pulsar). - -### Create the tenant domain and service - -On the [tenant](reference-terminology.md#tenant) side, you need to do the following things: - -1. Create a domain, such as `shopping` -2. Generate a private/public key pair -3. Create a service, such as `some_app`, on the domain with the public key - -Note that you need to specify the private key generated in step 2 when the Pulsar client connects to the [broker](reference-terminology.md#broker) (see client configuration examples for [Java](client-libraries-java.md#tls-authentication) and [C++](client-libraries-cpp.md#tls-authentication)). - -For more specific steps involving the Athenz UI, refer to [Example Service Access Control Setup](https://github.com/AthenZ/athenz/blob/master/docs/example_service_athenz_setup.md#client-tenant-domain). - -### Create the provider domain and add the tenant service to some role members - -On the provider side, you need to do the following things: - -1. Create a domain, such as `pulsar` -2. Create a role -3. Add the tenant service to members of the role - -Note that you can specify any action and resource in step 2 since they are not used on Pulsar. In other words, Pulsar uses the Athenz role token only for authentication, *not* for authorization. - -For more specific steps involving UI, refer to [Example Service Access Control Setup](https://github.com/AthenZ/athenz/blob/master/docs/example_service_athenz_setup.md#server-provider-domain). 
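-
-For reference, this is how a Java client typically presents the tenant service identity configured above. A minimal, illustrative sketch — the service URL and key path are placeholders, and the parameters mirror the `authParams` shown later on this page:
-
-```java
-
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar+ssl://broker.example.com:6651")
-        // Same plugin and parameters as the broker/CLI settings below
-        .authentication("org.apache.pulsar.client.impl.auth.AuthenticationAthenz",
-                "{\"tenantDomain\":\"shopping\",\"tenantService\":\"some_app\",\"providerDomain\":\"pulsar\",\"privateKey\":\"file:///path/to/private.pem\",\"keyId\":\"v1\"}")
-        .build();
-
-```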
- -## Configure the broker for Athenz - -> ### TLS encryption -> -> Note that when you are using Athenz as an authentication provider, you had better use TLS encryption -> as it can protect role tokens from being intercepted and reused. (for more details involving TLS encryption see [Architecture - Data Model](https://github.com/AthenZ/athenz/blob/master/docs/data_model)). - -In the `conf/broker.conf` configuration file in your Pulsar installation, you need to provide the class name of the Athenz authentication provider as well as a comma-separated list of provider domain names. - -```properties - -# Add the Athenz auth provider -authenticationEnabled=true -authorizationEnabled=true -authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderAthenz -athenzDomainNames=pulsar - -# Enable TLS -tlsEnabled=true -tlsCertificateFilePath=/path/to/broker-cert.pem -tlsKeyFilePath=/path/to/broker-key.pem - -# Authentication settings of the broker itself. Used when the broker connects to other brokers, either in same or other clusters -brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationAthenz -brokerClientAuthenticationParameters={"tenantDomain":"shopping","tenantService":"some_app","providerDomain":"pulsar","privateKey":"file:///path/to/private.pem","keyId":"v1"} - -``` - -> A full listing of parameters is available in the `conf/broker.conf` file, you can also find the default -> values for those parameters in [Broker Configuration](reference-configuration.md#broker). - -## Configure clients for Athenz - -For more information on Pulsar client authentication using Athenz, see the following language-specific docs: - -* [Java client](client-libraries-java.md#athenz) - -## Configure CLI tools for Athenz - -[Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-pulsar-admin.md), [`pulsar-perf`](reference-cli-tools.md#pulsar-perf), and [`pulsar-client`](reference-cli-tools.md#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation. - -You need to add the following authentication parameters to the `conf/client.conf` config file to use Athenz with CLI tools of Pulsar: - -```properties - -# URL for the broker -serviceUrl=https://broker.example.com:8443/ - -# Set Athenz auth plugin and its parameters -authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationAthenz -authParams={"tenantDomain":"shopping","tenantService":"some_app","providerDomain":"pulsar","privateKey":"file:///path/to/private.pem","keyId":"v1"} - -# Enable TLS -useTls=true -tlsAllowInsecureConnection=false -tlsTrustCertsFilePath=/path/to/cacert.pem - -``` - diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/security-authorization.md b/site2/website/versioned_docs/version-2.9.1-deprecated/security-authorization.md deleted file mode 100644 index 9cfd7c8c203f63..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/security-authorization.md +++ /dev/null @@ -1,130 +0,0 @@ ---- -id: security-authorization -title: Authentication and authorization in Pulsar -sidebar_label: "Authorization and ACLs" -original_id: security-authorization ---- - - -In Pulsar, the [authentication provider](security-overview.md#authentication-providers) is responsible for properly identifying clients and associating the clients with [role tokens](security-overview.md#role-tokens). If you only enable authentication, an authenticated role token has the ability to access all resources in the cluster. 
-
-The role tokens with the most privileges are the *superusers*. The *superusers* can create and destroy tenants, and have full access to all tenant resources.
-
-When a superuser creates a [tenant](reference-terminology.md#tenant), that tenant is assigned an admin role. A client with the admin role token can then create, modify and destroy namespaces, and grant and revoke permissions to *other role tokens* on those namespaces.
-
-## Broker and Proxy Setup
-
-### Enable authorization and assign superusers
-You can enable authorization and assign superusers in the broker configuration file ([`conf/broker.conf`](reference-configuration.md#broker)).
-
-```properties
-
-authorizationEnabled=true
-superUserRoles=my-super-user-1,my-super-user-2
-
-```
-
-> A full list of parameters is available in the `conf/broker.conf` file.
-> You can also find the default values for those parameters in [Broker Configuration](reference-configuration.md#broker).
-
-Typically, superuser roles are used for administrators and clients, as well as for broker-to-broker authorization. When you use [geo-replication](concepts-replication.md), every broker needs to be able to publish to topics in all the other clusters.
-
-You can also enable authorization for the proxy in the proxy configuration file (`conf/proxy.conf`). Once you enable authorization on the proxy, the proxy does an additional authorization check before forwarding the request to a broker.
-If you enable authorization on the broker, the broker checks the authorization of the request when the broker receives the forwarded request.
-
-### Proxy Roles
-
-By default, the broker treats the connection between a proxy and the broker as a normal user connection. The broker authenticates the user as the role configured in `proxy.conf` (see ["Enable TLS Authentication on Proxies"](security-tls-authentication.md#enable-tls-authentication-on-proxies)). However, when the user connects to the cluster through a proxy, the user rarely wants to act as the proxy's role. The user expects to be able to interact with the cluster as the role for which they have authenticated with the proxy.
-
-Pulsar uses *proxy roles* to enable this. Proxy roles are specified in the broker configuration file, [`conf/broker.conf`](reference-configuration.md#broker). If a client that is authenticated with a broker is one of its `proxyRoles`, all requests from that client must also carry information about the role of the client that is authenticated with the proxy. This information is called the *original principal*. If the *original principal* is absent, the client is not able to access anything.
-
-You must authorize both the *proxy role* and the *original principal* to access a resource to ensure that the resource is accessible via the proxy. Administrators can take two approaches to authorize the *proxy role* and the *original principal*.
-
-The more secure approach is to grant access to the proxy roles each time you grant access to a resource. For example, if you have a proxy role named `proxy1`, when the superuser creates a tenant, you should specify `proxy1` as one of the admin roles. When a role is granted permissions to produce or consume from a namespace, if that client wants to produce or consume through a proxy, you should also grant `proxy1` the same permissions.
-
-Another approach is to make the proxy role a superuser. This allows the proxy to access all resources. The client still needs to authenticate with the proxy, and all requests made through the proxy have their role downgraded to the *original principal* of the authenticated client. However, if the proxy is compromised, a bad actor could get full access to your cluster.
-
-You can specify the roles as proxy roles in [`conf/broker.conf`](reference-configuration.md#broker).
-
-```properties
-
-proxyRoles=my-proxy-role
-
-# if you want to allow superusers to use the proxy (see above)
-superUserRoles=my-super-user-1,my-super-user-2,my-proxy-role
-
-```
-
-## Administer tenants
-
-A Pulsar [instance](reference-terminology.md#instance) administrator or some kind of self-service portal typically provisions a Pulsar [tenant](reference-terminology.md#tenant).
-
-You can manage tenants using the [`pulsar-admin`](reference-pulsar-admin.md) tool.
-
-### Create a new tenant
-
-The following is an example tenant creation command:
-
-```shell
-
-$ bin/pulsar-admin tenants create my-tenant \
-  --admin-roles my-admin-role \
-  --allowed-clusters us-west,us-east
-
-```
-
-This command creates a new tenant `my-tenant` that is allowed to use the clusters `us-west` and `us-east`.
-
-A client that successfully identifies itself as having the role `my-admin-role` is allowed to perform all administrative tasks on this tenant.
-
-The structure of topic names in Pulsar reflects the hierarchy between tenants, clusters, and namespaces:
-
-```shell
-
-persistent://tenant/namespace/topic
-
-```
-
-### Manage permissions
-
-You can use [Pulsar Admin Tools](admin-api-permissions.md) for managing permissions in Pulsar, as shown in the sketch below.
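-
-For illustration, a sketch of granting a role permission to produce and consume on a namespace of the tenant created above (the role and namespace names are placeholders):
-
-```shell
-
-$ bin/pulsar-admin namespaces grant-permission my-tenant/my-namespace \
-  --role my-app-role \
-  --actions produce,consume
-
-```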
-
-### Pulsar admin authentication
-
-```java
-
-PulsarAdmin admin = PulsarAdmin.builder()
-                    .serviceHttpUrl("http://broker:8080")
-                    .authentication("com.org.MyAuthPluginClass", "param1:value1")
-                    .build();
-
-```
-
-To use TLS:
-
-```java
-
-PulsarAdmin admin = PulsarAdmin.builder()
-                    .serviceHttpUrl("https://broker:8080")
-                    .authentication("com.org.MyAuthPluginClass", "param1:value1")
-                    .tlsTrustCertsFilePath("/path/to/trust/cert")
-                    .build();
-
-```
-
-## Authorize an authenticated client with multiple roles
-
-When a client is identified with multiple roles in a token (that is, the role claim in the token is an array) during the authentication process, Pulsar supports checking the permissions of all the roles and authorizes the client as long as one of its roles has the required permissions.
-
-> **Note**
-> This authorization method is only compatible with [JWT authentication](security-jwt.md).
-
-To enable this authorization method, configure the authorization provider as `MultiRolesTokenAuthorizationProvider` in the `conf/broker.conf` file.
-
- ```properties
-
- # Authorization provider fully qualified class-name
- authorizationProvider=org.apache.pulsar.broker.authorization.MultiRolesTokenAuthorizationProvider
-
- ```
-
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/security-basic-auth.md b/site2/website/versioned_docs/version-2.9.1-deprecated/security-basic-auth.md
deleted file mode 100644
index 2585526bb478af..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/security-basic-auth.md
+++ /dev/null
@@ -1,127 +0,0 @@
----
-id: security-basic-auth
-title: Authentication using HTTP basic
-sidebar_label: "Authentication using HTTP basic"
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-[Basic authentication](https://en.wikipedia.org/wiki/Basic_access_authentication) is a simple authentication scheme built into the HTTP protocol, which uses base64-encoded username and password pairs as credentials.
-
-## Prerequisites
-
-Install [`htpasswd`](https://httpd.apache.org/docs/2.4/programs/htpasswd.html) in your environment to create a password file for storing username-password pairs.
-
-* For Ubuntu/Debian, run the following command to install `htpasswd`.
-
-  ```
-  apt install apache2-utils
-  ```
-
-* For CentOS/RHEL, run the following command to install `htpasswd`.
-
-  ```
-  yum install httpd-tools
-  ```
-
-## Create your authentication file
-
-:::note
-Currently, you can use MD5 (recommended) or CRYPT to hash your password.
-:::
-
-Create a password file named `.htpasswd` with a user account `superuser/admin`:
-* Use MD5 (recommended):
-
-  ```
-  htpasswd -cmb /path/to/.htpasswd superuser admin
-  ```
-
-* Use CRYPT:
-
-  ```
-  htpasswd -cdb /path/to/.htpasswd superuser admin
-  ```
-
-You can preview the content of your password file by running the following command:
-
-```
-cat /path/to/.htpasswd
-superuser:$apr1$GBIYZYFZ$MzLcPrvoUky16mLcK6UtX/
-```
-
-## Enable basic authentication on brokers
-
-To configure brokers to authenticate clients, complete the following steps.
-
-1. Add the following parameters to the `conf/broker.conf` file. If you use a standalone Pulsar, you need to add these parameters to the `conf/standalone.conf` file.
-
-   ```
-   # Configuration to enable Basic authentication
-   authenticationEnabled=true
-   authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderBasic
-
-   # Authentication settings of the broker itself. Used when the broker connects to other brokers, either in same or other clusters
-   brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationBasic
-   brokerClientAuthenticationParameters={"userId":"superuser","password":"admin"}
-
-   # If this flag is set then the broker authenticates the original Auth data
-   # else it just accepts the originalPrincipal and authorizes it (if required).
-   authenticateOriginalAuthData=true
-   ```
-
-2. Set the environment variable `PULSAR_EXTRA_OPTS` to `-Dpulsar.auth.basic.conf=/path/to/.htpasswd`. Pulsar reads this environment variable to locate the password file for HTTP basic authentication, as sketched below.
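-
-   For illustration, a minimal sketch of setting the variable before starting the broker (the path is a placeholder; this assumes your start scripts pick up `PULSAR_EXTRA_OPTS`, as `conf/pulsar_env.sh` does):
-
-   ```
-   export PULSAR_EXTRA_OPTS="-Dpulsar.auth.basic.conf=/path/to/.htpasswd"
-   bin/pulsar broker
-   ```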
-
-## Enable basic authentication on proxies
-
-To configure proxies to authenticate clients, complete the following steps.
-
-1. Add the following parameters to the `conf/proxy.conf` file:
-
-   ```
-   # For clients connecting to the proxy
-   authenticationEnabled=true
-   authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderBasic
-
-   # For the proxy to connect to brokers
-   brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationBasic
-   brokerClientAuthenticationParameters={"userId":"superuser","password":"admin"}
-
-   # Whether client authorization credentials are forwarded to the broker for re-authorization.
-   # Authentication must be enabled via authenticationEnabled=true for this to take effect.
-   forwardAuthorizationCredentials=true
-   ```
-
-2. Set the environment variable `PULSAR_EXTRA_OPTS` to `-Dpulsar.auth.basic.conf=/path/to/.htpasswd`. Pulsar reads this environment variable to locate the password file for HTTP basic authentication.
-
-## Configure basic authentication in CLI tools
-
-[Command-line tools](/docs/next/reference-cli-tools), such as [Pulsar-admin](/tools/pulsar-admin/), [Pulsar-perf](/tools/pulsar-perf/) and [Pulsar-client](/tools/pulsar-client/), use the `conf/client.conf` file in your Pulsar installation. To configure basic authentication in Pulsar CLI tools, you need to add the following parameters to the `conf/client.conf` file.
-
-```
-authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationBasic
-authParams={"userId":"superuser","password":"admin"}
-```
-
-## Configure basic authentication in Pulsar clients
-
-The following example shows how to configure basic authentication when using Pulsar clients.
-
-````mdx-code-block
-<Tabs
-  defaultValue="Java"
-  values={[{"label":"Java","value":"Java"}]}>
-<TabItem value="Java">
-
-  ```java
-  AuthenticationBasic auth = new AuthenticationBasic();
-  auth.configure("{\"userId\":\"superuser\",\"password\":\"admin\"}");
-  PulsarClient client = PulsarClient.builder()
-     .serviceUrl("pulsar://broker.example.com:6650")
-     .authentication(auth)
-     .build();
-  ```
-
-</TabItem>
-</Tabs>
-````
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/security-bouncy-castle.md b/site2/website/versioned_docs/version-2.9.1-deprecated/security-bouncy-castle.md
deleted file mode 100644
index be937055d8e311..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/security-bouncy-castle.md
+++ /dev/null
@@ -1,157 +0,0 @@
----
-id: security-bouncy-castle
-title: Bouncy Castle Providers
-sidebar_label: "Bouncy Castle Providers"
-original_id: security-bouncy-castle
----
-
-## BouncyCastle Introduction
-
-`Bouncy Castle` is a Java library that complements the default Java Cryptographic Extension (JCE),
-and it provides more cipher suites and algorithms than the default JCE provided by Sun.
-
-In addition to that, `Bouncy Castle` has lots of utilities for reading arcane formats like PEM and ASN.1 that no sane person would want to rewrite themselves.
-
-In Pulsar, security and crypto have dependencies on BouncyCastle jars. For details on installing and configuring Bouncy Castle FIPS, see the [BC FIPS Documentation](https://www.bouncycastle.org/documentation.html), especially the **User Guides** and **Security Policy** PDFs.
-
-`Bouncy Castle` provides both [FIPS](https://www.bouncycastle.org/fips_faq.html) and non-FIPS versions. However, a JVM cannot include both versions at the same time; you need to exclude the current version before including the other.
-
-In Pulsar, the security and crypto methods also depend on `Bouncy Castle`, especially in [TLS Authentication](security-tls-authentication.md) and [Transport Encryption](security-encryption.md). This document describes the configuration of the BouncyCastle FIPS (BC-FIPS) and non-FIPS (BC-non-FIPS) versions while using Pulsar.
-
-## How BouncyCastle modules are packaged in Pulsar
-
-In Pulsar's `bouncy-castle` module, we provide two submodules, `bouncy-castle-bc` (for the non-FIPS version) and `bouncy-castle-bcfips` (for the FIPS version), to package the BC jars together and make including and excluding `Bouncy Castle` easier.
-
-To achieve this goal, we need to package several `bouncy-castle` jars together into the `bouncy-castle-bc` or `bouncy-castle-bcfips` jar.
-Each of the original bouncy-castle jars is security-related, so BouncyCastle dutifully supplies a signature for each jar.
-But when we re-package, Maven shade explodes the BouncyCastle jar file, which puts the signatures into META-INF;
-these signatures aren't valid for the new uber-jar (the signatures are only valid for the original BC jars).
-Usually, you will encounter an error like `java.lang.SecurityException: Invalid signature file digest for Manifest main attributes`.
-
-You could exclude these signatures in the Maven pom file to avoid the above error:
-
-```
-
-META-INF/*.SF
-META-INF/*.DSA
-META-INF/*.RSA
-
-```
-
-But this can also lead to new, cryptic errors, e.g. `java.security.NoSuchAlgorithmException: PBEWithSHA256And256BitAES-CBC-BC SecretKeyFactory not available`.
-By explicitly specifying where to find the algorithm, like this: `SecretKeyFactory.getInstance("PBEWithSHA256And256BitAES-CBC-BC","BC")`,
-you get the real error: `java.security.NoSuchProviderException: JCE cannot authenticate the provider BC`.
-
-So, we used an [executable packer plugin](https://github.com/nthuemmel/executable-packer-maven-plugin) that uses a jar-in-jar approach to preserve the BouncyCastle signatures in a single, executable jar.
-
-### Include dependencies of BC-non-FIPS
-
-The Pulsar module `bouncy-castle-bc`, which is defined by `bouncy-castle/bc/pom.xml`, contains the needed non-FIPS jars for Pulsar and is packaged as a jar-in-jar (you need to provide the `pkg` classifier).
-
-```xml
-
-<dependency>
-  <groupId>org.bouncycastle</groupId>
-  <artifactId>bcpkix-jdk15on</artifactId>
-  <version>${bouncycastle.version}</version>
-</dependency>
-
-<dependency>
-  <groupId>org.bouncycastle</groupId>
-  <artifactId>bcprov-ext-jdk15on</artifactId>
-  <version>${bouncycastle.version}</version>
-</dependency>
-
-```
-
-By using this `bouncy-castle-bc` module, you can easily include and exclude the BouncyCastle non-FIPS jars.
-
-### Modules that include the BC-non-FIPS module (`bouncy-castle-bc`)
-
-For the Pulsar client, users need the bouncy-castle module, so `pulsar-client-original` includes the `bouncy-castle-bc` module and has the `pkg` classifier set to reference the `jar-in-jar` package.
-It is included as in the following example:
-
-```xml
-
-<dependency>
-  <groupId>org.apache.pulsar</groupId>
-  <artifactId>bouncy-castle-bc</artifactId>
-  <version>${pulsar.version}</version>
-  <classifier>pkg</classifier>
-</dependency>
-
-```
-
-By default, `bouncy-castle-bc` is already included in `pulsar-client-original`, and `pulsar-client-original` is included in a lot of other modules like `pulsar-client-admin` and `pulsar-broker`.
-But for the shaded-jar and signature reasons above, we should not package Pulsar's `bouncy-castle` module into `pulsar-client-all` and other shaded modules directly, such as `pulsar-client-shaded`, `pulsar-client-admin-shaded` and `pulsar-broker-shaded`.
-So in the shaded modules, we exclude the `bouncy-castle` modules.
-
-```xml
-
-<filters>
-  <filter>
-    <artifact>org.apache.pulsar:pulsar-client-original</artifact>
-    <includes>
-      <include>**</include>
-    </includes>
-    <excludes>
-      <exclude>org/bouncycastle/**</exclude>
-    </excludes>
-  </filter>
-</filters>
-
-```
-
-That means the `bouncy-castle`-related jars are not shaded in these fat jars.
-
-### Module BC-FIPS (`bouncy-castle-bcfips`)
-
-The Pulsar module `bouncy-castle-bcfips`, which is defined by `bouncy-castle/bcfips/pom.xml`, contains the needed FIPS jars for Pulsar.
-Similar to `bouncy-castle-bc`, `bouncy-castle-bcfips` is also packaged as a `jar-in-jar` package for easy include/exclude.
-
-```xml
-
-<dependency>
-  <groupId>org.bouncycastle</groupId>
-  <artifactId>bc-fips</artifactId>
-  <version>${bouncycastlefips.version}</version>
-</dependency>
-
-<dependency>
-  <groupId>org.bouncycastle</groupId>
-  <artifactId>bcpkix-fips</artifactId>
-  <version>${bouncycastlefips.version}</version>
-</dependency>
-
-```
-
-### Exclude BC-non-FIPS and include BC-FIPS
-
-If you want to switch from the BC-non-FIPS to the BC-FIPS version, here is an example for the `pulsar-broker` module:
-
-```xml
-
-<dependency>
-  <groupId>org.apache.pulsar</groupId>
-  <artifactId>pulsar-broker</artifactId>
-  <version>${pulsar.version}</version>
-  <exclusions>
-    <exclusion>
-      <groupId>org.apache.pulsar</groupId>
-      <artifactId>bouncy-castle-bc</artifactId>
-    </exclusion>
-  </exclusions>
-</dependency>
-
-<dependency>
-  <groupId>org.apache.pulsar</groupId>
-  <artifactId>bouncy-castle-bcfips</artifactId>
-  <version>${pulsar.version}</version>
-  <classifier>pkg</classifier>
-</dependency>
-
-```
-
-For more examples, you can reference the module `bcfips-include-test`.
-
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/security-encryption.md b/site2/website/versioned_docs/version-2.9.1-deprecated/security-encryption.md
deleted file mode 100644
index cbf7b3f28fe7bb..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/security-encryption.md
+++ /dev/null
@@ -1,200 +0,0 @@
----
-id: security-encryption
-title: Pulsar Encryption
-sidebar_label: "End-to-End Encryption"
-original_id: security-encryption
----
-
-Applications can use Pulsar encryption to encrypt messages on the producer side and decrypt messages on the consumer side. You can use the public and private key pair that the application configures to perform encryption. Only the consumers with a valid key can decrypt the encrypted messages.
-
-## Asymmetric and symmetric encryption
-
-Pulsar uses a dynamically generated symmetric AES key to encrypt messages (data). You can use the application-provided ECDSA (Elliptic Curve Digital Signature Algorithm) or RSA (Rivest–Shamir–Adleman) key pair to encrypt the AES key (data key), so you do not have to share the secret with everyone.
-
-A key is a public and private key pair used for encryption and decryption. The producer key is the public key of the key pair, and the consumer key is the private key of the key pair.
-
-The application configures the producer with the public key. You can use this key to encrypt the AES data key. The encrypted data key is sent as part of the message header. Only entities with the private key (in this case the consumer) are able to decrypt the data key, which is used to decrypt the message.
-
-You can encrypt a message with more than one key. Any one of the keys used for encrypting the message is sufficient to decrypt the message.
-
-Pulsar does not store the encryption key anywhere in the Pulsar service. If you lose or delete the private key, your message is irretrievably lost and is unrecoverable.
-
-## Producer
-![alt text](/assets/pulsar-encryption-producer.jpg "Pulsar Encryption Producer")
-
-## Consumer
-![alt text](/assets/pulsar-encryption-consumer.jpg "Pulsar Encryption Consumer")
-
-## Get started
-
-1. Create your ECDSA or RSA public and private key pair by using the following commands.
-   * ECDSA (for Java clients only)
-
-   ```shell
-
-   openssl ecparam -name secp521r1 -genkey -param_enc explicit -out test_ecdsa_privkey.pem
-   openssl ec -in test_ecdsa_privkey.pem -pubout -outform pem -out test_ecdsa_pubkey.pem
-
-   ```
-
-   * RSA (for C++, Python and Node.js clients)
-
-   ```shell
-
-   openssl genrsa -out test_rsa_privkey.pem 2048
-   openssl rsa -in test_rsa_privkey.pem -pubout -outform pkcs8 -out test_rsa_pubkey.pem
-
-   ```
-
-2. Add the public and private keys to your key management system, and configure your producer clients to retrieve public keys and your consumer clients to retrieve private keys.
-
-3. Implement the `CryptoKeyReader` interface, specifically `CryptoKeyReader.getPublicKey()` for the producer and `CryptoKeyReader.getPrivateKey()` for the consumer, which the Pulsar client invokes to load the keys.
-
-4. Add the encryption key name to the producer builder: `PulsarClient.newProducer().addEncryptionKey("myapp.key")`.
-
-5. Add the `CryptoKeyReader` implementation to the producer or consumer builder: `PulsarClient.newProducer().cryptoKeyReader(keyReader)` / `PulsarClient.newConsumer().cryptoKeyReader(keyReader)`.
-
-6. Sample producer application:
-
-```java
-
-class RawFileKeyReader implements CryptoKeyReader {
-
-    String publicKeyFile = "";
-    String privateKeyFile = "";
-
-    RawFileKeyReader(String pubKeyFile, String privKeyFile) {
-        publicKeyFile = pubKeyFile;
-        privateKeyFile = privKeyFile;
-    }
-
-    @Override
-    public EncryptionKeyInfo getPublicKey(String keyName, Map<String, String> keyMeta) {
-        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
-        try {
-            keyInfo.setKey(Files.readAllBytes(Paths.get(publicKeyFile)));
-        } catch (IOException e) {
-            System.out.println("ERROR: Failed to read public key from file " + publicKeyFile);
-            e.printStackTrace();
-        }
-        return keyInfo;
-    }
-
-    @Override
-    public EncryptionKeyInfo getPrivateKey(String keyName, Map<String, String> keyMeta) {
-        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
-        try {
-            keyInfo.setKey(Files.readAllBytes(Paths.get(privateKeyFile)));
-        } catch (IOException e) {
-            System.out.println("ERROR: Failed to read private key from file " + privateKeyFile);
-            e.printStackTrace();
-        }
-        return keyInfo;
-    }
-}
-
-PulsarClient pulsarClient = PulsarClient.builder().serviceUrl("pulsar://localhost:6650").build();
-
-Producer<byte[]> producer = pulsarClient.newProducer()
-        .topic("persistent://my-tenant/my-ns/my-topic")
-        .addEncryptionKey("myappkey")
-        .cryptoKeyReader(new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem"))
-        .create();
-
-for (int i = 0; i < 10; i++) {
-    producer.send("my-message".getBytes());
-}
-
-producer.close();
-pulsarClient.close();
-
-```
-
-7. Sample consumer application:
-
-```java
-
-class RawFileKeyReader implements CryptoKeyReader {
-
-    String publicKeyFile = "";
-    String privateKeyFile = "";
-
-    RawFileKeyReader(String pubKeyFile, String privKeyFile) {
-        publicKeyFile = pubKeyFile;
-        privateKeyFile = privKeyFile;
-    }
-
-    @Override
-    public EncryptionKeyInfo getPublicKey(String keyName, Map<String, String> keyMeta) {
-        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
-        try {
-            keyInfo.setKey(Files.readAllBytes(Paths.get(publicKeyFile)));
-        } catch (IOException e) {
-            System.out.println("ERROR: Failed to read public key from file " + publicKeyFile);
-            e.printStackTrace();
-        }
-        return keyInfo;
-    }
-
-    @Override
-    public EncryptionKeyInfo getPrivateKey(String keyName, Map<String, String> keyMeta) {
-        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
-        try {
-            keyInfo.setKey(Files.readAllBytes(Paths.get(privateKeyFile)));
-        } catch (IOException e) {
-            System.out.println("ERROR: Failed to read private key from file " + privateKeyFile);
-            e.printStackTrace();
-        }
-        return keyInfo;
-    }
-}
-
-PulsarClient pulsarClient = PulsarClient.builder().serviceUrl("pulsar://localhost:6650").build();
-Consumer<byte[]> consumer = pulsarClient.newConsumer()
-        .topic("persistent://my-tenant/my-ns/my-topic")
-        .subscriptionName("my-subscriber-name")
-        .cryptoKeyReader(new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem"))
-        .subscribe();
-Message<byte[]> msg = null;
-
-for (int i = 0; i < 10; i++) {
-    msg = consumer.receive();
-    // do something
-    System.out.println("Received: " + new String(msg.getData()));
-}
-
-// Acknowledge the consumption of all messages at once
-consumer.acknowledgeCumulative(msg);
-consumer.close();
-pulsarClient.close();
-
-```
-
-## Key rotation
-Pulsar generates a new AES data key every 4 hours or after publishing a certain number of messages. A producer fetches the asymmetric public key every 4 hours by calling `CryptoKeyReader.getPublicKey()` to retrieve the latest version.
-
-## Enable encryption at the producer application
-If you produce messages that are consumed across application boundaries, you need to ensure that consumers in other applications have access to one of the private keys that can decrypt the messages. You can do this in two ways:
-1. The consumer application provides you access to their public key, which you add to your producer keys.
-2. You grant access to one of the private keys from the pairs that the producer uses.
-
-When a producer wants to encrypt messages with multiple keys, the producer adds all such keys to its configuration. A consumer can decrypt a message as long as the consumer has access to at least one of the keys.
-
-If you need to encrypt the messages using two keys (`myapp.messagekey1` and `myapp.messagekey2`), refer to the following example.
-
-```java
-
-PulsarClient.newProducer().addEncryptionKey("myapp.messagekey1").addEncryptionKey("myapp.messagekey2");
-
-```
-
-## Decrypt encrypted messages at the consumer application
-Consumers require access to one of the private keys to decrypt messages that the producer produces. If you want to receive encrypted messages, create a public/private key pair and give your public key to the producer application, so that it can encrypt messages using your public key.
-
-## Handle failures
-* Producer/consumer loses access to the key
-  * The producer action fails and indicates the cause of the failure. The application has the option to proceed with sending unencrypted messages in such cases. Call `PulsarClient.newProducer().cryptoFailureAction(ProducerCryptoFailureAction)` to control the producer behavior. The default behavior is to fail the request.
-  * If consumption fails due to a decryption failure or missing keys in the consumer, the application has the option to consume the encrypted message or discard it. Call `PulsarClient.newConsumer().cryptoFailureAction(ConsumerCryptoFailureAction)` to control the consumer behavior. The default behavior is to fail the request. The application is never able to decrypt the messages if the private key is permanently lost.
-* Batch messaging
-  * If decryption fails and the message contains batch messages, the client is not able to retrieve the individual messages in the batch; hence message consumption fails even if `cryptoFailureAction()` is set to `ConsumerCryptoFailureAction.CONSUME`.
-* If decryption fails, the message consumption stops, and the application notices backlog growth in addition to decryption failure messages in the client log. If the application does not have access to the private key to decrypt the message, the only option is to skip or discard the backlogged messages.
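-
-For illustration, a minimal sketch of wiring up the failure actions described above on the producer and consumer builders (it reuses a `keyReader` like the `RawFileKeyReader` from the samples; `ProducerCryptoFailureAction` and `ConsumerCryptoFailureAction` are in `org.apache.pulsar.client.api`):
-
-```java
-
-Producer<byte[]> producer = pulsarClient.newProducer()
-        .topic("persistent://my-tenant/my-ns/my-topic")
-        .addEncryptionKey("myappkey")
-        .cryptoKeyReader(keyReader)
-        // Send the message unencrypted if encryption fails (default is to fail the request)
-        .cryptoFailureAction(ProducerCryptoFailureAction.SEND)
-        .create();
-
-Consumer<byte[]> consumer = pulsarClient.newConsumer()
-        .topic("persistent://my-tenant/my-ns/my-topic")
-        .subscriptionName("my-subscriber-name")
-        .cryptoKeyReader(keyReader)
-        // Deliver the still-encrypted payload instead of failing (default is to fail the request)
-        .cryptoFailureAction(ConsumerCryptoFailureAction.CONSUME)
-        .subscribe();
-
-```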
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/security-extending.md b/site2/website/versioned_docs/version-2.9.1-deprecated/security-extending.md
deleted file mode 100644
index e7484453b8beb8..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/security-extending.md
+++ /dev/null
@@ -1,207 +0,0 @@
----
-id: security-extending
-title: Extending Authentication and Authorization in Pulsar
-sidebar_label: "Extending"
-original_id: security-extending
----
-
-Pulsar provides a way to use custom authentication and authorization mechanisms.
-
-## Authentication
-
-Pulsar supports mutual TLS and Athenz authentication plugins. For how to use these authentication plugins, you can refer to the description in [Security](security-overview.md).
-
-You can use a custom authentication mechanism by providing the implementation in the form of two plugins. One plugin is for the client library, and the other plugin is for the Pulsar Proxy and/or Pulsar Broker to validate the credentials.
-
-### Client authentication plugin
-
-For the client library, you need to implement `org.apache.pulsar.client.api.Authentication`. You can then pass this class when you create a Pulsar client:
-
-```java
-
-PulsarClient client = PulsarClient.builder()
-    .serviceUrl("pulsar://localhost:6650")
-    .authentication(new MyAuthentication())
-    .build();
-
-```
-
-You implement two interfaces on the client side:
- * `Authentication` -> http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/Authentication.html
- * `AuthenticationDataProvider` -> http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/AuthenticationDataProvider.html
-
-
-The `Authentication` implementation in turn provides the client credentials in the form of `org.apache.pulsar.client.api.AuthenticationDataProvider`. This makes it possible to return different kinds of authentication tokens for different types of connections, or to pass a certificate chain to use for TLS.
-
-
-You can find examples for client authentication providers at:
-
- * Mutual TLS Auth -- https://github.com/apache/pulsar/tree/master/pulsar-client/src/main/java/org/apache/pulsar/client/impl/auth
- * Athenz -- https://github.com/apache/pulsar/tree/master/pulsar-client-auth-athenz/src/main/java/org/apache/pulsar/client/impl/auth
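-
-For illustration, a minimal sketch of the client-side pair. `MyAuthentication`, the `my-auth` method name, and the hard-coded credential are made up for this example; a real plugin would carry real credentials:
-
-```java
-
-import java.io.IOException;
-import java.util.Map;
-
-import org.apache.pulsar.client.api.Authentication;
-import org.apache.pulsar.client.api.AuthenticationDataProvider;
-import org.apache.pulsar.client.api.PulsarClientException;
-
-public class MyAuthentication implements Authentication {
-
-    @Override
-    public String getAuthMethodName() {
-        // Must match the method name reported by the broker-side provider
-        return "my-auth";
-    }
-
-    @Override
-    public AuthenticationDataProvider getAuthData() throws PulsarClientException {
-        return new AuthenticationDataProvider() {
-            @Override
-            public boolean hasDataFromCommand() {
-                return true;
-            }
-
-            @Override
-            public String getCommandData() {
-                // The credential string sent to the broker on connect
-                return "secret-credentials";
-            }
-        };
-    }
-
-    @Override
-    public void configure(Map<String, String> authParams) {
-        // Read plugin parameters here if needed
-    }
-
-    @Override
-    public void start() throws PulsarClientException {
-        // One-time initialization
-    }
-
-    @Override
-    public void close() throws IOException {
-        // Release resources
-    }
-}
-
-```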
-
-### Proxy/Broker authentication plugin
-
-On the proxy/broker side, you need to configure the corresponding plugin to validate the credentials that the client sends. The Proxy and Broker can support multiple authentication providers at the same time.
-
-In `conf/broker.conf` you can choose to specify a list of valid providers:
-
-```properties
-
-# Authentication provider name list, which is a comma-separated list of class names
-authenticationProviders=
-
-```
-
-On this side, you implement a single interface, `org.apache.pulsar.broker.authentication.AuthenticationProvider`:
-
-```java
-
-/**
- * Provider of authentication mechanism
- */
-public interface AuthenticationProvider extends Closeable {
-
-    /**
-     * Perform initialization for the authentication provider
-     *
-     * @param config
-     *            broker config object
-     * @throws IOException
-     *             if the initialization fails
-     */
-    void initialize(ServiceConfiguration config) throws IOException;
-
-    /**
-     * @return the authentication method name supported by this provider
-     */
-    String getAuthMethodName();
-
-    /**
-     * Validate the authentication for the given credentials with the specified authentication data
-     *
-     * @param authData
-     *            provider specific authentication data
-     * @return the "role" string for the authenticated connection, if the authentication was successful
-     * @throws AuthenticationException
-     *             if the credentials are not valid
-     */
-    String authenticate(AuthenticationDataSource authData) throws AuthenticationException;
-
-}
-
-```
-
-The following are examples of broker authentication plugins:
-
- * Mutual TLS -- https://github.com/apache/pulsar/blob/master/pulsar-broker-common/src/main/java/org/apache/pulsar/broker/authentication/AuthenticationProviderTls.java
 * Athenz -- https://github.com/apache/pulsar/blob/master/pulsar-broker-auth-athenz/src/main/java/org/apache/pulsar/broker/authentication/AuthenticationProviderAthenz.java
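-
-For illustration, a minimal broker-side sketch that validates the credential sent by the client plugin sketched above. `MyAuthenticationProvider`, the `my-auth` method name, and the hard-coded credential are all made up for this example:
-
-```java
-
-import java.io.IOException;
-
-import javax.naming.AuthenticationException;
-
-import org.apache.pulsar.broker.ServiceConfiguration;
-import org.apache.pulsar.broker.authentication.AuthenticationDataSource;
-import org.apache.pulsar.broker.authentication.AuthenticationProvider;
-
-public class MyAuthenticationProvider implements AuthenticationProvider {
-
-    @Override
-    public void initialize(ServiceConfiguration config) throws IOException {
-        // Read provider-specific settings from the broker configuration here
-    }
-
-    @Override
-    public String getAuthMethodName() {
-        // Must match the client plugin's method name
-        return "my-auth";
-    }
-
-    @Override
-    public String authenticate(AuthenticationDataSource authData) throws AuthenticationException {
-        String credentials = authData.hasDataFromCommand() ? authData.getCommandData() : null;
-        if ("secret-credentials".equals(credentials)) {
-            // The returned "role" is what the authorization layer sees
            return "my-role";
-        }
-        throw new AuthenticationException("Invalid credentials");
-    }
-
-    @Override
-    public void close() throws IOException {
-        // Release resources held by the provider
-    }
-}
-
-```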
-
-## Authorization
-
-Authorization is the operation that checks whether a particular "role" or "principal" has permission to perform a certain operation.
-
-By default, you can use the embedded authorization provider provided by Pulsar. You can also configure a different authorization provider through a plugin.
-Note that although the Authentication plugin is designed for use in both the Proxy and the Broker,
-the Authorization plugin is designed only for use on the Broker. However, the Proxy does perform some simple authorization checks on roles if authorization is enabled.
-
-To provide a custom provider, you need to implement the `org.apache.pulsar.broker.authorization.AuthorizationProvider` interface, put this class in the Pulsar broker classpath and configure the class in `conf/broker.conf`:
-
- ```properties
-
- # Authorization provider fully qualified class-name
- authorizationProvider=org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider
-
- ```
-
-```java
-
-/**
- * Provider of authorization mechanism
- */
-public interface AuthorizationProvider extends Closeable {
-
-    /**
-     * Perform initialization for the authorization provider
-     *
-     * @param conf
-     *            broker config object
-     * @param configCache
-     *            pulsar zk configuration cache service
-     * @throws IOException
-     *             if the initialization fails
-     */
-    void initialize(ServiceConfiguration conf, ConfigurationCacheService configCache) throws IOException;
-
-    /**
-     * Check if the specified role has permission to send messages to the specified fully qualified topic name.
-     *
-     * @param topicName
-     *            the fully qualified topic name associated with the topic.
-     * @param role
-     *            the app id used to send messages to the topic.
-     */
-    CompletableFuture<Boolean> canProduceAsync(TopicName topicName, String role,
-            AuthenticationDataSource authenticationData);
-
-    /**
-     * Check if the specified role has permission to receive messages from the specified fully qualified topic name.
-     *
-     * @param topicName
-     *            the fully qualified topic name associated with the topic.
-     * @param role
-     *            the app id used to receive messages from the topic.
-     * @param subscription
-     *            the subscription name defined by the client
-     */
-    CompletableFuture<Boolean> canConsumeAsync(TopicName topicName, String role,
-            AuthenticationDataSource authenticationData, String subscription);
-
-    /**
-     * Check whether the specified role can perform a lookup for the specified topic.
-     *
-     * For that the caller needs to have producer or consumer permission.
-     *
-     * @param topicName
-     * @param role
-     * @return
-     * @throws Exception
-     */
-    CompletableFuture<Boolean> canLookupAsync(TopicName topicName, String role,
-            AuthenticationDataSource authenticationData);
-
-    /**
-     *
-     * Grant authorization-action permission on a namespace to the given client
-     *
-     * @param namespace
-     * @param actions
-     * @param role
-     * @param authDataJson
-     *            additional authdata in json format
-     * @return CompletableFuture
-     * @completesWith
-     *            IllegalArgumentException when namespace not found
-     *            IllegalStateException when failed to grant permission
-     */
-    CompletableFuture<Void> grantPermissionAsync(NamespaceName namespace, Set<AuthAction> actions, String role,
-            String authDataJson);
-
-    /**
-     * Grant authorization-action permission on a topic to the given client
-     *
-     * @param topicName
-     * @param role
-     * @param authDataJson
-     *            additional authdata in json format
-     * @return CompletableFuture
-     * @completesWith
-     *            IllegalArgumentException when namespace not found
-     *            IllegalStateException when failed to grant permission
-     */
-    CompletableFuture<Void> grantPermissionAsync(TopicName topicName, Set<AuthAction> actions, String role,
-            String authDataJson);
-
-}
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/security-jwt.md b/site2/website/versioned_docs/version-2.9.1-deprecated/security-jwt.md
deleted file mode 100644
index 1fa65b7c27f60c..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/security-jwt.md
+++ /dev/null
@@ -1,331 +0,0 @@
----
-id: security-jwt
-title: Client authentication using tokens based on JSON Web Tokens
-sidebar_label: "Authentication using JWT"
-original_id: security-jwt
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-## Token authentication overview
-
-Pulsar supports authenticating clients using security tokens that are based on [JSON Web Tokens](https://jwt.io/introduction/) ([RFC-7519](https://tools.ietf.org/html/rfc7519)).
-
-You can use tokens to identify a Pulsar client and associate it with some "principal" (or "role") that
-is permitted to do some actions (e.g. publish to a topic or consume from a topic).
-
-A user typically gets a token string from the administrator (or some automated service).
-
-The compact representation of a signed JWT is a string that looks like the following:
-
-```
-
-eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY
-
-```
-
-The application specifies the token when you create the client instance. An alternative is to pass a "token supplier" (a function that returns the token when the client library needs one).
-
-> #### Always use TLS transport encryption
-> Sending a token is equivalent to sending a password over the wire. You should always use TLS encryption when you connect to the Pulsar service. See
-> [Transport Encryption using TLS](security-tls-transport.md) for more details.
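-
-For orientation, the three dot-separated parts of the compact form are a base64url-encoded header, payload, and signature. A sketch of decoding the example token above with the standard `base64` tool (padding added manually; the third part is a binary signature and is omitted):
-
-```shell
-
-$ echo 'eyJhbGciOiJIUzI1NiJ9' | base64 -d
-{"alg":"HS256"}
-$ echo 'eyJzdWIiOiJKb2UifQ==' | base64 -d
-{"sub":"Joe"}
-
-```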
-
-### CLI Tools
-
-[Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-pulsar-admin.md), [`pulsar-perf`](reference-cli-tools.md#pulsar-perf), and [`pulsar-client`](reference-cli-tools.md#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation.
-
-You need to add the following parameters to that file to use the token authentication with the CLI tools of Pulsar:
-
-```properties
-
-webServiceUrl=http://broker.example.com:8080/
-brokerServiceUrl=pulsar://broker.example.com:6650/
-authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken
-authParams=token:eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY
-
-```
-
-The token string can also be read from a file, for example:
-
-```
-
-authParams=file:///path/to/token/file
-
-```
-
-### Pulsar client
-
-You can use tokens to authenticate the following Pulsar clients.
-
-````mdx-code-block
-<Tabs
-  defaultValue="Java"
-  values={[{"label":"Java","value":"Java"},{"label":"Python","value":"Python"},{"label":"Go","value":"Go"},{"label":"C++","value":"C++"},{"label":"C#","value":"C#"}]}>
-<TabItem value="Java">
-
-```java
-
-PulsarClient client = PulsarClient.builder()
-    .serviceUrl("pulsar://broker.example.com:6650/")
-    .authentication(
-        AuthenticationFactory.token("eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY"))
-    .build();
-
-```
-
-Similarly, you can also pass a `Supplier`:
-
-```java
-
-PulsarClient client = PulsarClient.builder()
-    .serviceUrl("pulsar://broker.example.com:6650/")
-    .authentication(
-        AuthenticationFactory.token(() -> {
-            // Read token from custom source
-            return readToken();
-        }))
-    .build();
-
-```
-
-</TabItem>
-<TabItem value="Python">
-
-```python
-
-from pulsar import Client, AuthenticationToken
-
-client = Client('pulsar://broker.example.com:6650/',
-                authentication=AuthenticationToken('eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY'))
-
-```
-
-Alternatively, you can also pass a `Supplier`:
-
-```python
-
-def read_token():
-    with open('/path/to/token.txt') as tf:
-        return tf.read().strip()
-
-client = Client('pulsar://broker.example.com:6650/',
-                authentication=AuthenticationToken(read_token))
-
-```
-
-</TabItem>
-<TabItem value="Go">
-
-```go
-
-client, err := NewClient(ClientOptions{
-    URL:            "pulsar://localhost:6650",
-    Authentication: NewAuthenticationToken("eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY"),
-})
-
-```
-
-Similarly, you can also pass a `Supplier`:
-
-```go
-
-client, err := NewClient(ClientOptions{
-    URL: "pulsar://localhost:6650",
-    Authentication: NewAuthenticationTokenSupplier(func () string {
-        // Read token from custom source
-        return readToken()
-    }),
-})
-
-```
-
-</TabItem>
-<TabItem value="C++">
-
-```c++
-
-#include <pulsar/Client.h>
-
-pulsar::ClientConfiguration config;
-config.setAuth(pulsar::AuthToken::createWithToken("eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY"));
-
-pulsar::Client client("pulsar://broker.example.com:6650/", config);
-
-```
-
-</TabItem>
-<TabItem value="C#">
-
-```c#
-
-var client = PulsarClient.Builder()
-                         .AuthenticateUsingToken("eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY")
-                         .Build();
-
-```
-
-</TabItem>
-</Tabs>
-````
-
-## Enable token authentication
-
-The following sections describe how to enable token authentication on a Pulsar cluster.
-
-JWT supports two different kinds of keys in order to generate and validate the tokens:
-
- * Symmetric:
-    - You can use a single ***secret*** key to generate and validate tokens.
- * Asymmetric: a pair of keys consisting of a private key and a public key.
-    - You can use the ***private*** key to generate tokens.
-    - You can use the ***public*** key to validate tokens.
-
-### Create a secret key
-
-When you use a secret key, the administrator creates the key and uses it to generate the client tokens. You can also configure this key on the brokers in order to validate the clients.
-
-The output file is generated in the root of your Pulsar installation directory. You can also provide an absolute path for the output file using the command below.
-
-```shell
-
-$ bin/pulsar tokens create-secret-key --output my-secret.key
-
-```
-
-Enter this command to generate a base64-encoded secret key.
-
-```shell
-
-$ bin/pulsar tokens create-secret-key --output /opt/my-secret.key --base64
-
-```
-
-### Create a key pair
-
-When you use public and private keys, you need to create a pair of keys. Pulsar supports all algorithms that the Java JWT library (shown [here](https://github.com/jwtk/jjwt#signature-algorithms-keys)) supports.
-
-The output files are generated in the root of your Pulsar installation directory.
-You can also provide absolute paths for the output files using the command below.
-
-```shell
-
-$ bin/pulsar tokens create-key-pair --output-private-key my-private.key --output-public-key my-public.key
-
-```
-
- * Store `my-private.key` in a safe location; only the administrator uses `my-private.key` to generate new tokens.
- * `my-public.key` is distributed to all Pulsar brokers. You can publicly share this file without any security concern.
-
-### Generate tokens
-
-A token is the credential associated with a user. The association is done through the "principal" or "role". In the case of JWT tokens, this field is typically referred to as **subject**, though they are exactly the same concept.
-
-Use this command to generate a token; the generated token is required to have a **subject** field set.
-
-```shell
-
-$ bin/pulsar tokens create --secret-key file:///path/to/my-secret.key \
-            --subject test-user
-
-```
-
-This command prints the token string on stdout.
-
-Similarly, you can create a token by passing the "private" key using the command below:
-
-```shell
-
-$ bin/pulsar tokens create --private-key file:///path/to/my-private.key \
-            --subject test-user
-
-```
-
-Finally, you can enter the following command to create a token with a pre-defined TTL, after which the token is automatically invalidated.
-
-```shell
-
-$ bin/pulsar tokens create --secret-key file:///path/to/my-secret.key \
-            --subject test-user \
-            --expiry-time 1y
-
-```
-
-### Authorization
-
-The token itself does not have any permissions associated with it. The authorization engine determines whether the token should have permissions or not. Once you have created the token, you can grant permissions for this token to do certain actions. The following is an example.
-
-```shell
-
-$ bin/pulsar-admin namespaces grant-permission my-tenant/my-namespace \
-            --role test-user \
-            --actions produce,consume
-
-```
-
-### Enable token authentication on Brokers
-
-To configure brokers to authenticate clients, add the following parameters to `broker.conf`:
-
-```properties
-
-# Configuration to enable authentication and authorization
-authenticationEnabled=true
-authorizationEnabled=true
-authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken
-
-# Authentication settings of the broker itself. Used when the broker connects to other brokers, either in same or other clusters
-brokerClientTlsEnabled=true
-brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken
-brokerClientAuthenticationParameters={"token":"eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0LXVzZXIifQ.9OHgE9ZUDeBTZs7nSMEFIuGNEX18FLR3qvy8mqxSxXw"}
-# Either configure the token string or specify to read it from a file. The following three available formats are all valid:
-# brokerClientAuthenticationParameters={"token":"your-token-string"}
-# brokerClientAuthenticationParameters=token:your-token-string
-# brokerClientAuthenticationParameters=file:///path/to/token
-brokerClientTrustCertsFilePath=/path/my-ca/certs/ca.cert.pem
-
-# If this flag is set then the broker authenticates the original Auth data
-# else it just accepts the originalPrincipal and authorizes it (if required).
-authenticateOriginalAuthData=true
-
-# If using a secret key (Note: key files must be DER-encoded)
-tokenSecretKey=file:///path/to/secret.key
-# The key can also be passed inline:
-# tokenSecretKey=data:;base64,FLFyW0oLJ2Fi22KKCm21J18mbAdztfSHN/lAT5ucEKU=
-
-# If using public/private keys (Note: key files must be DER-encoded)
-# tokenPublicKey=file:///path/to/public.key
-
-```
-
-### Enable token authentication on Proxies
-
-To configure proxies to authenticate clients, add the following parameters to `proxy.conf`.
-
-The proxy uses its own token when connecting to brokers. You need to configure the role token for this key pair in the `proxyRoles` of the brokers. For more details, see the [authorization guide](security-authorization.md).
-
-```properties
-
-# For clients connecting to the proxy
-authenticationEnabled=true
-authorizationEnabled=true
-authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken
-tokenSecretKey=file:///path/to/secret.key
-
-# For the proxy to connect to brokers
-brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken
-brokerClientAuthenticationParameters={"token":"eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0LXVzZXIifQ.9OHgE9ZUDeBTZs7nSMEFIuGNEX18FLR3qvy8mqxSxXw"}
-# Either configure the token string or specify to read it from a file. The following three available formats are all valid:
-# brokerClientAuthenticationParameters={"token":"your-token-string"}
-# brokerClientAuthenticationParameters=token:your-token-string
-# brokerClientAuthenticationParameters=file:///path/to/token
-
-# Whether client authorization credentials are forwarded to the broker for re-authorization.
-# Authentication must be enabled via authenticationEnabled=true for this to take effect.
-forwardAuthorizationCredentials=true
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/security-kerberos.md b/site2/website/versioned_docs/version-2.9.1-deprecated/security-kerberos.md
deleted file mode 100644
index c49fa3bea1fce0..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/security-kerberos.md
+++ /dev/null
@@ -1,443 +0,0 @@
----
-id: security-kerberos
-title: Authentication using Kerberos
-sidebar_label: "Authentication using Kerberos"
-original_id: security-kerberos
----
-
-[Kerberos](https://web.mit.edu/kerberos/) is a network authentication protocol. By using secret-key cryptography, [Kerberos](https://web.mit.edu/kerberos/) is designed to provide strong authentication for client applications and server applications.
-
-In Pulsar, you can use Kerberos with [SASL](https://en.wikipedia.org/wiki/Simple_Authentication_and_Security_Layer) as a choice for authentication. Pulsar uses the [Java Authentication and Authorization Service (JAAS)](https://en.wikipedia.org/wiki/Java_Authentication_and_Authorization_Service) for SASL configuration, so you need to provide JAAS configurations for Kerberos authentication.
-
-This document introduces how to configure `Kerberos` with `SASL` between Pulsar clients and brokers, and how to configure Kerberos for the Pulsar proxy, in detail.
-
-## Configuration for Kerberos between Client and Broker
-
-### Prerequisites
-
-To begin, you need to set up (or already have) a [Key Distribution Center (KDC)](https://en.wikipedia.org/wiki/Key_distribution_center), and you need to configure and run it in advance.
-
-If your organization already uses a Kerberos server (for example, by using `Active Directory`), you do not have to install a new server for Pulsar. If your organization does not use a Kerberos server, you need to install one. Your Linux vendor might have packages for `Kerberos`. For how to install and configure Kerberos, refer to [Ubuntu](https://help.ubuntu.com/community/Kerberos) and
-[Redhat](https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Managing_Smart_Cards/installing-kerberos.html).
-
-Note that if you use Oracle Java, you need to download the JCE policy files for your Java version and copy them to the `$JAVA_HOME/jre/lib/security` directory.
-
-#### Kerberos principals
-
-If you use an existing Kerberos system, ask your Kerberos administrator for a principal for each broker in your cluster and for every operating system user that accesses Pulsar with Kerberos authentication (via clients and tools).
-
-If you have installed your own Kerberos system, you can create these principals with the following commands:
-
-```shell
-
-### add Principals for broker
-sudo /usr/sbin/kadmin.local -q 'addprinc -randkey broker/{hostname}@{REALM}'
-sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{broker-keytabname}.keytab broker/{hostname}@{REALM}"
-### add Principals for client
-sudo /usr/sbin/kadmin.local -q 'addprinc -randkey client/{hostname}@{REALM}'
-sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{client-keytabname}.keytab client/{hostname}@{REALM}"
-
-```
-
-Note that *Kerberos* requires that all your hosts can be resolved with their FQDNs.
-
-The first part of a broker principal (for example, `broker` in `broker/{hostname}@{REALM}`) is the `serverType` of each host. The suggested values of `serverType` are `broker` (the host machine runs the Pulsar Broker service) and `proxy` (the host machine runs the Pulsar Proxy service).
-
-#### Configure how to connect to KDC
-
-You need to specify the path to the `krb5.conf` file for both the client side and the broker side, as shown below. The content of the `krb5.conf` file indicates the default realm and KDC information. See [JDK's Kerberos Requirements](https://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/KerberosReq.html) for more details.
-
-```shell
-
--Djava.security.krb5.conf=/etc/pulsar/krb5.conf
-
-```
-
-Here is an example of the `krb5.conf` file.
-
-In the configuration file, `EXAMPLE.COM` is the default realm, and `kdc = localhost:62037` is the KDC server URL for the realm `EXAMPLE.COM`:
-
-```
-
-[libdefaults]
- default_realm = EXAMPLE.COM
-
-[realms]
- EXAMPLE.COM = {
-        kdc = localhost:62037
- }
-
-```
-
-Machines configured with Kerberos usually already have a system-wide configuration, in which case this configuration is optional.
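-
-As a quick sanity check of this setup (an illustrative sketch; it assumes the client principal and keytab created above and a running KDC):
-
-```shell
-
-# Obtain a ticket with the client keytab, then list the ticket cache
-kinit -kt /etc/security/keytabs/{client-keytabname}.keytab client/{hostname}@{REALM}
-klist
-
-```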
-
-#### JAAS configuration file
-
-You need a JAAS configuration file for the client side and the broker side. The JAAS configuration file provides the sections of information that are used to connect to the KDC. Here is an example named `pulsar_jaas.conf`:
-
-```
-
- PulsarBroker {
-   com.sun.security.auth.module.Krb5LoginModule required
-   useKeyTab=true
-   storeKey=true
-   useTicketCache=false
-   keyTab="/etc/security/keytabs/pulsarbroker.keytab"
-   principal="broker/localhost@EXAMPLE.COM";
-};
-
- PulsarClient {
-   com.sun.security.auth.module.Krb5LoginModule required
-   useKeyTab=true
-   storeKey=true
-   useTicketCache=false
-   keyTab="/etc/security/keytabs/pulsarclient.keytab"
-   principal="client/localhost@EXAMPLE.COM";
-};
-
-```
-
-You need to set the `JAAS` configuration file path as a JVM parameter for the client and the broker. For example:
-
-```shell
-
-    -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf
-
-```
-
-In the `pulsar_jaas.conf` file above:
-
-1. `PulsarBroker` is a section name in the JAAS file that each broker uses. This section tells the broker which principal to use inside Kerberos and the location of the keytab where the principal is stored. `PulsarBroker` allows the broker to use the keytab specified in this section.
-2. `PulsarClient` is a section name in the JAAS file that each client uses. This section tells the client which principal to use inside Kerberos and the location of the keytab where the principal is stored. `PulsarClient` allows the client to use the keytab specified in this section.
-   The following examples also reuse this `PulsarClient` section in both the Pulsar internal admin configuration and in the CLI commands `bin/pulsar-client`, `bin/pulsar-perf` and `bin/pulsar-admin`. You can also add different sections for different use cases.
-
-You can have 2 separate JAAS configuration files:
-* the file for a broker that has sections for both `PulsarBroker` and `PulsarClient`;
-* the file for a client that only has a `PulsarClient` section.
-
-
-### Kerberos configuration for Brokers
-
-#### Configure the `broker.conf` file
-
- In the `broker.conf` file, set the Kerberos related configurations.
-
- - Set `authenticationEnabled` to `true`;
- - Set `authenticationProviders` to choose `AuthenticationProviderSasl`;
- - Set `saslJaasClientAllowedIds` to a regex for the principals that are allowed to connect to the broker;
- - Set `saslJaasBrokerSectionName` to the section in the JAAS configuration file that the broker uses;
-
- To make the Pulsar internal admin client work properly, you need to set the configuration in the `broker.conf` file as below:
-
- - Set `brokerClientAuthenticationPlugin` to the client plugin `AuthenticationSasl`;
- - Set `brokerClientAuthenticationParameters` to the JSON string `{"saslJaasClientSectionName":"PulsarClient", "serverType":"broker"}`, in which `PulsarClient` is the section name in the `pulsar_jaas.conf` file, and `"serverType":"broker"` indicates that the internal admin client connects to a Pulsar Broker;
-
- Here is an example:
-
-```
-
-authenticationEnabled=true
-authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderSasl
-saslJaasClientAllowedIds=.*client.*
-saslJaasBrokerSectionName=PulsarBroker
-
-## Authentication settings of the broker itself. Used when the broker connects to other brokers
-brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationSasl
-brokerClientAuthenticationParameters={"saslJaasClientSectionName":"PulsarClient", "serverType":"broker"}
-
-```
-
-#### Set Broker JVM parameter
-
- Set the JVM parameters for the JAAS configuration file and the krb5 configuration file with additional options.
-
-```shell
-
-   -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf
-
-```
-
-You can add this at the end of `PULSAR_EXTRA_OPTS` in the file [`pulsar_env.sh`](https://github.com/apache/pulsar/blob/master/conf/pulsar_env.sh).
-
-You must ensure that the operating system user who starts the broker can reach the keytabs configured in the `pulsar_jaas.conf` file and the KDC server configured in the `krb5.conf` file.
-
-### Kerberos configuration for clients
-
-#### Java Client and Java Admin Client
-
-In the client application, include `pulsar-client-auth-sasl` in your project dependencies.
-
-```
-
-<dependency>
-  <groupId>org.apache.pulsar</groupId>
-  <artifactId>pulsar-client-auth-sasl</artifactId>
-  <version>${pulsar.version}</version>
-</dependency>
-
-```
-
-Configure the authentication type to use `AuthenticationSasl`, and also provide the authentication parameters to it.
-
-You need two parameters:
-- `saslJaasClientSectionName`. This parameter corresponds to the section in the JAAS configuration file that the client uses;
-- `serverType`. This parameter indicates whether the client connects to a broker or a proxy, and the client uses this parameter to know which server-side principal should be used.
-
-When you authenticate between the client and the broker with the settings in the above JAAS configuration file, you need to set `saslJaasClientSectionName` to `PulsarClient` and set `serverType` to `broker`.
-
-The following is an example of creating a Java client:
-
- ```java
-
- System.setProperty("java.security.auth.login.config", "/etc/pulsar/pulsar_jaas.conf");
- System.setProperty("java.security.krb5.conf", "/etc/pulsar/krb5.conf");
-
- Map<String, String> authParams = Maps.newHashMap();
- authParams.put("saslJaasClientSectionName", "PulsarClient");
- authParams.put("serverType", "broker");
-
- Authentication saslAuth = AuthenticationFactory
-         .create(org.apache.pulsar.client.impl.auth.AuthenticationSasl.class.getName(), authParams);
-
- PulsarClient client = PulsarClient.builder()
-         .serviceUrl("pulsar://my-broker.com:6650")
-         .authentication(saslAuth)
-         .build();
-
- ```
-
-> The first two lines in the example above are hard-coded; alternatively, you can set additional JVM parameters for the JAAS and krb5 configuration files when you run the application, like below:
-
-```
-
-java -cp -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf $APP-jar-with-dependencies.jar $CLASSNAME
-
-```
-
-You must ensure that the operating system user who starts the Pulsar client can reach the keytabs configured in the `pulsar_jaas.conf` file and the KDC server configured in the `krb5.conf` file.
-
-#### Configure CLI tools
-
-If you use a command-line tool (such as `bin/pulsar-client`, `bin/pulsar-perf` and `bin/pulsar-admin`), you need to perform the following steps:
-
-Step 1. Configure your `client.conf` as below.
-
-```shell
-
-authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationSasl
-authParams={"saslJaasClientSectionName":"PulsarClient", "serverType":"broker"}
-
-```
-
-Step 2. Set the JVM parameters for the JAAS configuration file and the krb5 configuration file with additional options.
```shell
-Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf
```

You can add this at the end of `PULSAR_EXTRA_OPTS` in the file [`pulsar_tools_env.sh`](https://github.com/apache/pulsar/blob/master/conf/pulsar_tools_env.sh), or add the line `OPTS="$OPTS -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf "` directly to the CLI tool script.

The meaning of these configurations is the same as in the Java client section.

## Kerberos configuration for working with Pulsar Proxy

With the above configuration, the client and broker can authenticate each other using Kerberos.

A client that connects through Pulsar Proxy is a little different: Pulsar Proxy (as a SASL server in Kerberos) authenticates the client (as a SASL client in Kerberos) first, and then the Pulsar broker authenticates Pulsar Proxy.

In comparison with the above client-broker configuration, here is how to configure Pulsar Proxy.

### Create principal for Pulsar Proxy in Kerberos

Compared with the above configuration, you need to add a new principal for Pulsar Proxy. If you already have principals for the client and broker, you only need to add the proxy principal here.

```shell
### add Principals for Pulsar Proxy
sudo /usr/sbin/kadmin.local -q 'addprinc -randkey proxy/{hostname}@{REALM}'
sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{proxy-keytabname}.keytab proxy/{hostname}@{REALM}"
### add Principals for broker
sudo /usr/sbin/kadmin.local -q 'addprinc -randkey broker/{hostname}@{REALM}'
sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{broker-keytabname}.keytab broker/{hostname}@{REALM}"
### add Principals for client
sudo /usr/sbin/kadmin.local -q 'addprinc -randkey client/{hostname}@{REALM}'
sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{client-keytabname}.keytab client/{hostname}@{REALM}"
```

### Add a section in JAAS configuration file for Pulsar Proxy

In comparison with the above configuration, add a new section for Pulsar Proxy in the JAAS configuration file.

Here is an example named `pulsar_jaas.conf`:

```
PulsarBroker {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  useTicketCache=false
  keyTab="/etc/security/keytabs/pulsarbroker.keytab"
  principal="broker/localhost@EXAMPLE.COM";
};

PulsarProxy {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  useTicketCache=false
  keyTab="/etc/security/keytabs/pulsarproxy.keytab"
  principal="proxy/localhost@EXAMPLE.COM";
};

PulsarClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  useTicketCache=false
  keyTab="/etc/security/keytabs/pulsarclient.keytab"
  principal="client/localhost@EXAMPLE.COM";
};
```

### Proxy client configuration

The Pulsar client configuration is similar to the client and broker configuration, except that you set `serverType` to `proxy` instead of `broker`, because the client does the Kerberos authentication against the proxy.
```java
System.setProperty("java.security.auth.login.config", "/etc/pulsar/pulsar_jaas.conf");
System.setProperty("java.security.krb5.conf", "/etc/pulsar/krb5.conf");

Map<String, String> authParams = Maps.newHashMap();
authParams.put("saslJaasClientSectionName", "PulsarClient");
authParams.put("serverType", "proxy"); // the only difference from the broker example

Authentication saslAuth = AuthenticationFactory
    .create(org.apache.pulsar.client.impl.auth.AuthenticationSasl.class.getName(), authParams);

PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar://my-broker.com:6650")
    .authentication(saslAuth)
    .build();
```

> The first two lines in the example above are hard-coded. Alternatively, you can set additional JVM parameters for the JAAS and krb5 configuration files when you run the application, like below:

```
java -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf -cp $APP-jar-with-dependencies.jar $CLASSNAME
```

### Kerberos configuration for Pulsar proxy service

In the `proxy.conf` file, set the Kerberos-related configuration. Here is an example:

```shell
## related to authenticating clients
authenticationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderSasl
saslJaasClientAllowedIds=.*client.*
saslJaasBrokerSectionName=PulsarProxy

## related to being authenticated by brokers
brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationSasl
brokerClientAuthenticationParameters={"saslJaasClientSectionName":"PulsarProxy", "serverType":"broker"}
forwardAuthorizationCredentials=true
```

The first part relates to authentication between the client and Pulsar Proxy. In this phase, the client works as a SASL client, while Pulsar Proxy works as a SASL server.

The second part relates to authentication between Pulsar Proxy and the Pulsar broker. In this phase, Pulsar Proxy works as a SASL client, while the Pulsar broker works as a SASL server.

### Broker side configuration

The broker-side configuration file is the same as the above `broker.conf`; you do not need any special configuration for Pulsar Proxy.

```
authenticationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderSasl
saslJaasClientAllowedIds=.*client.*
saslJaasBrokerSectionName=PulsarBroker
```

## Regarding authorization and role token

For Kerberos authentication, we usually use the authenticated principal as the role token for Pulsar authorization. For more information about authorization in Pulsar, see [security authorization](security-authorization.md).

If you enable `authorizationEnabled`, you need to set `superUserRoles` in `broker.conf` to the name registered in the KDC.

For example:

```bash
superUserRoles=client/{clientIp}@EXAMPLE.COM
```

## Regarding authentication between ZooKeeper and Broker

The Pulsar broker acts as a Kerberos client when it authenticates with ZooKeeper.
According to the [ZooKeeper document](https://cwiki.apache.org/confluence/display/ZOOKEEPER/Client-Server+mutual+authentication), you need these settings in `conf/zookeeper.conf`:

```
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
```

Add a `Client` section to the `pulsar_jaas.conf` file that the Pulsar broker uses:

```
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  useTicketCache=false
  keyTab="/etc/security/keytabs/pulsarbroker.keytab"
  principal="broker/localhost@EXAMPLE.COM";
};
```

In this setting, the Pulsar broker's principal and keytab file indicate the broker's role when it authenticates with ZooKeeper.

## Regarding authentication between BookKeeper and Broker

The Pulsar broker acts as a Kerberos client when it authenticates with a bookie. According to the [BookKeeper document](http://bookkeeper.apache.org/docs/latest/security/sasl/), you need to add the `bookkeeperClientAuthenticationPlugin` parameter in `broker.conf`:

```
bookkeeperClientAuthenticationPlugin=org.apache.bookkeeper.sasl.SASLClientProviderFactory
```

In this setting, `SASLClientProviderFactory` creates a BookKeeper SASL client in the broker, and the broker uses the created SASL client to authenticate with a bookie node.

Add a `BookKeeper` section to the `pulsar_jaas.conf` file that the Pulsar broker uses:

```
BookKeeper {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  useTicketCache=false
  keyTab="/etc/security/keytabs/pulsarbroker.keytab"
  principal="broker/localhost@EXAMPLE.COM";
};
```

In this setting, the Pulsar broker's principal and keytab file indicate the broker's role when it authenticates with a bookie.
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/security-oauth2.md b/site2/website/versioned_docs/version-2.9.1-deprecated/security-oauth2.md
deleted file mode 100644
index ad61fdf665e5e8..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/security-oauth2.md
+++ /dev/null
@@ -1,232 +0,0 @@
---
id: security-oauth2
title: Client authentication using OAuth 2.0 access tokens
sidebar_label: "Authentication using OAuth 2.0 access tokens"
original_id: security-oauth2
---

Pulsar supports authenticating clients using OAuth 2.0 access tokens. You can use OAuth 2.0 access tokens to identify a Pulsar client and associate the client with some "principal" (or "role"), which is permitted to do some actions, such as publishing messages to a topic or consuming messages from a topic.

This module supports the Pulsar client authentication plugin for OAuth 2.0. After communicating with the OAuth 2.0 server, the Pulsar client gets an access token from the OAuth 2.0 server and passes this access token to the Pulsar broker to do the authentication. The broker can use `org.apache.pulsar.broker.authentication.AuthenticationProviderToken`, or you can add your own `AuthenticationProvider` to work with this module.

## Authentication provider configuration

This library allows you to authenticate the Pulsar client by using an access token that is obtained from an OAuth 2.0 authorization service, which acts as a _token issuer_.

### Authentication types

The authentication type determines how to obtain an access token through an OAuth 2.0 authorization flow.
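Whichever authentication type the client uses, the broker side only ever sees the resulting access token. The following is a minimal broker-side sketch, not a complete configuration; it assumes the issuer's access tokens are JWTs that `AuthenticationProviderToken` can validate, and the key path is a placeholder:

```properties
# broker.conf — minimal sketch for validating OAuth 2.0 access tokens
authenticationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken
# public key used to verify the signature of incoming access tokens (placeholder path)
tokenPublicKey=file:///path/to/issuer-public.key
```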
:::note

Currently, the Pulsar Java client only supports the `client_credentials` authentication type.

:::

#### Client credentials

The following table lists the parameters supported for the `client_credentials` authentication type.

| Parameter | Description | Example | Required or not |
| --- | --- | --- | --- |
| `type` | OAuth 2.0 authentication type. | `client_credentials` (default) | Optional |
| `issuerUrl` | URL of the authentication provider which allows the Pulsar client to obtain an access token | `https://accounts.google.com` | Required |
| `privateKey` | URL to a JSON credentials file | Support the following pattern formats:
  1. `file:///path/to/file`
  2. `file:/path/to/file`
  3. `data:application/json;base64,<base64-encoded value>`
  4. | Required |
| `audience` | An OAuth 2.0 "resource server" identifier for the Pulsar cluster | `https://broker.example.com` | Optional |

The credentials file contains the service account credentials used with the client authentication type. The following shows an example of a credentials file `credentials_file.json`.

```json
{
  "type": "client_credentials",
  "client_id": "d9ZyX97q1ef8Cr81WHVC4hFQ64vSlDK3",
  "client_secret": "on1uJ...k6F6R",
  "client_email": "1234567890-abcdefghijklmnopqrstuvwxyz@developer.gserviceaccount.com",
  "issuer_url": "https://accounts.google.com"
}
```

In the above example, the authentication type defaults to `client_credentials`, and the `client_id` and `client_secret` fields are required.

### Typical original OAuth2 request mapping

The following shows a typical original OAuth2 request, which is used to obtain an access token from the OAuth2 server.

```bash
curl --request POST \
  --url https://dev-kt-aa9ne.us.auth0.com/oauth/token \
  --header 'content-type: application/json' \
  --data '{
  "client_id":"Xd23RHsUnvUlP7wchjNYOaIfazgeHd9x",
  "client_secret":"rT7ps7WY8uhdVuBTKWZkttwLdQotmdEliaM5rLfmgNibvqziZ-g07ZH52N_poGAb",
  "audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/",
  "grant_type":"client_credentials"}'
```

In the above example, the mapping relationship is as follows.

- The `issuerUrl` parameter in this plugin is mapped to `--url https://dev-kt-aa9ne.us.auth0.com`.
- The `privateKey` file parameter in this plugin should contain at least the `client_id` and `client_secret` fields.
- The `audience` parameter in this plugin is mapped to `"audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/"`. This field is only used by some identity providers.

## Client Configuration

You can use the OAuth2 authentication provider with the following Pulsar clients.

### Java

You can use the factory method to configure authentication for the Pulsar Java client.

```java
import org.apache.pulsar.client.impl.auth.oauth2.AuthenticationFactoryOAuth2;

URL issuerUrl = new URL("https://dev-kt-aa9ne.us.auth0.com");
URL credentialsUrl = new URL("file:///path/to/KeyFile.json");
String audience = "https://dev-kt-aa9ne.us.auth0.com/api/v2/";

PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar://broker.example.com:6650/")
    .authentication(
        AuthenticationFactoryOAuth2.clientCredentials(issuerUrl, credentialsUrl, audience))
    .build();
```

Alternatively, you can use JSON-encoded parameters to configure authentication for the Pulsar Java client.

```java
Authentication auth = AuthenticationFactory
    .create(AuthenticationOAuth2.class.getName(),
        "{\"type\":\"client_credentials\",\"privateKey\":\"./key/path/..\",\"issuerUrl\":\"...\",\"audience\":\"...\"}");
PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar://broker.example.com:6650/")
    .authentication(auth)
    .build();
```

### C++ client

The C++ client is similar to the Java client. You need to provide the `issuer_url`, `private_key` (the credentials file path), and `audience` parameters.
```c++
#include <pulsar/Client.h>

pulsar::ClientConfiguration config;
std::string params = R"({
    "issuer_url": "https://dev-kt-aa9ne.us.auth0.com",
    "private_key": "../../pulsar-broker/src/test/resources/authentication/token/cpp_credentials_file.json",
    "audience": "https://dev-kt-aa9ne.us.auth0.com/api/v2/"})";

config.setAuth(pulsar::AuthOauth2::create(params));

pulsar::Client client("pulsar://broker.example.com:6650/", config);
```

### Go client

To enable OAuth2 authentication in the Go client, you need to configure OAuth2 authentication. This example shows how to configure OAuth2 authentication in the Go client.

```go
oauth := pulsar.NewAuthenticationOAuth2(map[string]string{
    "type":       "client_credentials",
    "issuerUrl":  "https://dev-kt-aa9ne.us.auth0.com",
    "audience":   "https://dev-kt-aa9ne.us.auth0.com/api/v2/",
    "privateKey": "/path/to/privateKey",
    "clientId":   "0Xx...Yyxeny",
})
client, err := pulsar.NewClient(pulsar.ClientOptions{
    URL:            "pulsar://my-cluster:6650",
    Authentication: oauth,
})
```

### Python client

To enable OAuth2 authentication in the Python client, you need to configure OAuth2 authentication. This example shows how to configure OAuth2 authentication in the Python client.

```python
from pulsar import Client, AuthenticationOauth2

params = '''
{
    "issuer_url": "https://dev-kt-aa9ne.us.auth0.com",
    "private_key": "/path/to/privateKey",
    "audience": "https://dev-kt-aa9ne.us.auth0.com/api/v2/"
}
'''

client = Client("pulsar://my-cluster:6650", authentication=AuthenticationOauth2(params))
```

## CLI configuration

This section describes how to use Pulsar CLI tools to connect to a cluster through the OAuth2 authentication plugin.

### pulsar-admin

This example shows how to use pulsar-admin to connect to a cluster through the OAuth2 authentication plugin.

```shell script
bin/pulsar-admin --admin-url https://streamnative.cloud:443 \
--auth-plugin org.apache.pulsar.client.impl.auth.oauth2.AuthenticationOAuth2 \
--auth-params '{"privateKey":"file:///path/to/key/file.json",
    "issuerUrl":"https://dev-kt-aa9ne.us.auth0.com",
    "audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/"}' \
tenants list
```

Set the `admin-url` parameter to the web service URL, which is a combination of the protocol, hostname, and port, such as `https://localhost:8443`.
Set the `privateKey`, `issuerUrl`, and `audience` parameters to the values based on the configuration in the key file. For details, see [authentication types](#authentication-types).

### pulsar-client

This example shows how to use pulsar-client to connect to a cluster through the OAuth2 authentication plugin.

```shell script
bin/pulsar-client \
--url SERVICE_URL \
--auth-plugin org.apache.pulsar.client.impl.auth.oauth2.AuthenticationOAuth2 \
--auth-params '{"privateKey":"file:///path/to/key/file.json",
    "issuerUrl":"https://dev-kt-aa9ne.us.auth0.com",
    "audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/"}' \
produce test-topic -m "test-message" -n 10
```

Set the `url` parameter to the broker service URL, which is a combination of the protocol, hostname, and port, such as `pulsar://localhost:6650`.
Set the `privateKey`, `issuerUrl`, and `audience` parameters to the values based on the configuration in the key file. For details, see [authentication types](#authentication-types).

### pulsar-perf

This example shows how to use pulsar-perf to connect to a cluster through the OAuth2 authentication plugin.
```shell script
bin/pulsar-perf produce --service-url pulsar+ssl://streamnative.cloud:6651 \
--auth-plugin org.apache.pulsar.client.impl.auth.oauth2.AuthenticationOAuth2 \
--auth-params '{"privateKey":"file:///path/to/key/file.json",
    "issuerUrl":"https://dev-kt-aa9ne.us.auth0.com",
    "audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/"}' \
-r 1000 -s 1024 test-topic
```

Set the `service-url` parameter to the broker service URL, which is a combination of the protocol, hostname, and port, such as `pulsar+ssl://localhost:6651`.
Set the `privateKey`, `issuerUrl`, and `audience` parameters to the values based on the configuration in the key file. For details, see [authentication types](#authentication-types).
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/security-overview.md b/site2/website/versioned_docs/version-2.9.1-deprecated/security-overview.md
deleted file mode 100644
index c6bd9b64e4f766..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/security-overview.md
+++ /dev/null
@@ -1,36 +0,0 @@
---
id: security-overview
title: Pulsar security overview
sidebar_label: "Overview"
original_id: security-overview
---

As the central message bus for a business, Apache Pulsar is frequently used for storing mission-critical data. Therefore, enabling security features in Pulsar is crucial.

By default, Pulsar configures no encryption, authentication, or authorization. Any client can communicate with Apache Pulsar via plain-text service URLs, so you must ensure that access via these plain-text service URLs is restricted to trusted clients only. In such cases, you can use network segmentation and/or authorization ACLs to restrict access to trusted IPs. If you use neither, the state of the cluster is wide open and anyone can access it.

Pulsar supports a pluggable authentication mechanism, which Pulsar clients use to authenticate with brokers and proxies. You can also configure Pulsar to support multiple authentication sources.

The Pulsar broker validates the authentication credentials when a connection is established. After the initial connection is authenticated, the "principal" token is stored for authorization, and the connection is not re-authenticated on every request. The broker periodically checks the expiration status of every `ServerCnx` object. You can set `authenticationRefreshCheckSeconds` on the broker to control how frequently the expiration status is checked; by default, it is set to 60s. When the authentication has expired, the broker forces the connection to re-authenticate. If the re-authentication fails, the broker disconnects the client.

The broker supports learning whether a particular client supports authentication refreshing. If a client supports authentication refreshing and the credential is expired, the authentication provider calls the `refreshAuthentication` method to initiate the refreshing process. If a client does not support authentication refreshing and the credential is expired, the broker disconnects the client.

You should secure the service components in your Apache Pulsar deployment.

## Role tokens

In Pulsar, a *role* is a string, like `admin` or `app1`, which can represent a single client or multiple clients. You can use roles to control permission for clients to produce or consume from certain topics, administer the configuration for tenants, and so on.
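Roles become meaningful once the authorization layer grants them actions. Here is a minimal sketch with `pulsar-admin`; the tenant, namespace, and role names are placeholders:

```bash
# allow the role "test-user" to produce and consume in one namespace
bin/pulsar-admin namespaces grant-permission my-tenant/my-namespace \
  --role test-user \
  --actions produce,consume
```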
Apache Pulsar uses an [Authentication Provider](#authentication-providers) to establish the identity of a client and then assign a *role token* to that client. This role token is then used for [Authorization and ACLs](security-authorization.md) to determine what the client is authorized to do.

## Authentication providers

Currently Pulsar supports the following authentication providers:

- [TLS Authentication](security-tls-authentication.md)
- [Athenz](security-athenz.md)
- [Kerberos](security-kerberos.md)
- [JSON Web Token Authentication](security-jwt.md)
- [OAuth 2.0 authentication](security-oauth2.md)
- [HTTP basic authentication](security-basic-auth.md)

diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/security-tls-authentication.md b/site2/website/versioned_docs/version-2.9.1-deprecated/security-tls-authentication.md
deleted file mode 100644
index 85d2240f413060..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/security-tls-authentication.md
+++ /dev/null
@@ -1,222 +0,0 @@
---
id: security-tls-authentication
title: Authentication using TLS
sidebar_label: "Authentication using TLS"
original_id: security-tls-authentication
---

## TLS authentication overview

TLS authentication is an extension of [TLS transport encryption](security-tls-transport.md). Not only do servers have keys and certs that the client uses to verify the identity of servers; clients also have keys and certs that the server uses to verify the identity of clients. You must have TLS transport encryption configured on your cluster before you can use TLS authentication. This guide assumes you already have TLS transport encryption configured.

`Bouncy Castle Provider` provides the TLS-related cipher suites and algorithms in Pulsar. If you need the [FIPS](https://www.bouncycastle.org/fips_faq.html) version of `Bouncy Castle Provider`, refer to the [Bouncy Castle page](security-bouncy-castle.md).

### Create client certificates

Client certificates are generated using the certificate authority. Server certificates are also generated with the same certificate authority.

The biggest difference between client certs and server certs is that the **common name** for the client certificate is the **role token** which that client is authenticated as.

To use client certificates, you need to set `tlsRequireTrustedClientCertOnConnect=true` on the broker side. For details, refer to [TLS broker configuration](security-tls-transport.md#configure-broker).

First, enter the following command to generate the key:

```bash
$ openssl genrsa -out admin.key.pem 2048
```

Similar to the broker, the client expects the key to be in [PKCS 8](https://en.wikipedia.org/wiki/PKCS_8) format, so you need to convert it by entering the following command:

```bash
$ openssl pkcs8 -topk8 -inform PEM -outform PEM \
      -in admin.key.pem -out admin.key-pk8.pem -nocrypt
```

Next, enter the command below to generate the certificate request. When you are asked for a **common name**, enter the **role token** that you want this key pair to authenticate a client as.

```bash
$ openssl req -config openssl.cnf \
      -key admin.key.pem -new -sha256 -out admin.csr.pem
```

:::note

If openssl.cnf is not specified, read [Certificate authority](http://pulsar.apache.org/docs/en/security-tls-transport/#certificate-authority) to get the openssl.cnf.

:::

Then, enter the command below to sign the request with the certificate authority.
Note that the client certs use the **usr_cert** extension, which allows the cert to be used for client authentication.

```bash
$ openssl ca -config openssl.cnf -extensions usr_cert \
      -days 1000 -notext -md sha256 \
      -in admin.csr.pem -out admin.cert.pem
```

This command gives you a cert, `admin.cert.pem`, and a key, `admin.key-pk8.pem`. Together with `ca.cert.pem`, clients can use this cert and this key to authenticate themselves to brokers and proxies as the role token ``admin``.

:::note

If an "unable to load CA private key" error occurs in this step with the reason "No such file or directory: /etc/pki/CA/private/cakey.pem", try the commands below:

```bash
$ cd /etc/pki/tls/misc/CA
$ ./CA -newca
```

to generate `cakey.pem`.

:::

## Enable TLS authentication on brokers

To configure brokers to authenticate clients, add the following parameters to `broker.conf`, alongside [the configuration to enable tls transport](security-tls-transport.md#broker-configuration):

```properties
# Configuration to enable authentication
authenticationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderTls

# operations and publish/consume from all topics
superUserRoles=admin

# Authentication settings of the broker itself. Used when the broker connects to other brokers, either in same or other clusters
brokerClientTlsEnabled=true
brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationTls
brokerClientAuthenticationParameters={"tlsCertFile":"/path/my-ca/admin.cert.pem","tlsKeyFile":"/path/my-ca/admin.key-pk8.pem"}
brokerClientTrustCertsFilePath=/path/my-ca/certs/ca.cert.pem
```

## Enable TLS authentication on proxies

To configure proxies to authenticate clients, add the following parameters to `proxy.conf`, alongside [the configuration to enable tls transport](security-tls-transport.md#proxy-configuration).

The proxy should have its own client key pair for connecting to brokers. You need to configure the role token for this key pair in the ``proxyRoles`` of the brokers. See the [authorization guide](security-authorization.md) for more details.

```properties
# For clients connecting to the proxy
authenticationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderTls

# For the proxy to connect to brokers
brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationTls
brokerClientAuthenticationParameters=tlsCertFile:/path/to/proxy.cert.pem,tlsKeyFile:/path/to/proxy.key-pk8.pem
```

## Client configuration

When you use TLS authentication, the client connects via TLS transport. You need to configure the client to use ```https://``` and port 8443 for the web service URL, and ```pulsar+ssl://``` and port 6651 for the broker service URL.

### CLI tools

[Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-pulsar-admin.md), [`pulsar-perf`](reference-cli-tools.md#pulsar-perf), and [`pulsar-client`](reference-cli-tools.md#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation.
You need to add the following parameters to that file to use TLS authentication with the CLI tools of Pulsar:

```properties
webServiceUrl=https://broker.example.com:8443/
brokerServiceUrl=pulsar+ssl://broker.example.com:6651/
useTls=true
tlsAllowInsecureConnection=false
tlsTrustCertsFilePath=/path/to/ca.cert.pem
authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationTls
authParams=tlsCertFile:/path/to/my-role.cert.pem,tlsKeyFile:/path/to/my-role.key-pk8.pem
```

### Java client

```java
import org.apache.pulsar.client.api.PulsarClient;

PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar+ssl://broker.example.com:6651/")
    .enableTls(true)
    .tlsTrustCertsFilePath("/path/to/ca.cert.pem")
    .authentication("org.apache.pulsar.client.impl.auth.AuthenticationTls",
                    "tlsCertFile:/path/to/my-role.cert.pem,tlsKeyFile:/path/to/my-role.key-pk8.pem")
    .build();
```

### Python client

```python
from pulsar import Client, AuthenticationTLS

auth = AuthenticationTLS("/path/to/my-role.cert.pem", "/path/to/my-role.key-pk8.pem")
client = Client("pulsar+ssl://broker.example.com:6651/",
                tls_trust_certs_file_path="/path/to/ca.cert.pem",
                tls_allow_insecure_connection=False,
                authentication=auth)
```

### C++ client

```c++
#include <pulsar/Client.h>

pulsar::ClientConfiguration config;
config.setUseTls(true);
config.setTlsTrustCertsFilePath("/path/to/ca.cert.pem");
config.setTlsAllowInsecureConnection(false);

pulsar::AuthenticationPtr auth = pulsar::AuthTls::create("/path/to/my-role.cert.pem",
                                                         "/path/to/my-role.key-pk8.pem");
config.setAuth(auth);

pulsar::Client client("pulsar+ssl://broker.example.com:6651/", config);
```

### Node.js client

```JavaScript
const Pulsar = require('pulsar-client');

(async () => {
  const auth = new Pulsar.AuthenticationTls({
    certificatePath: '/path/to/my-role.cert.pem',
    privateKeyPath: '/path/to/my-role.key-pk8.pem',
  });

  const client = new Pulsar.Client({
    serviceUrl: 'pulsar+ssl://broker.example.com:6651/',
    authentication: auth,
    tlsTrustCertsFilePath: '/path/to/ca.cert.pem',
  });
})();
```

### C# client

```c#
var clientCertificate = new X509Certificate2("admin.pfx");
var client = PulsarClient.Builder()
    .AuthenticateUsingClientCertificate(clientCertificate)
    .Build();
```

diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/security-tls-keystore.md b/site2/website/versioned_docs/version-2.9.1-deprecated/security-tls-keystore.md
deleted file mode 100644
index 0b3b50fcebb104..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/security-tls-keystore.md
+++ /dev/null
@@ -1,342 +0,0 @@
---
id: security-tls-keystore
title: Using TLS with KeyStore configure
sidebar_label: "Using TLS with KeyStore configure"
original_id: security-tls-keystore
---

## Overview

Apache Pulsar supports [TLS encryption](security-tls-transport.md) and [TLS authentication](security-tls-authentication.md) between clients and the Apache Pulsar service. By default it uses PEM-format file configuration. This page describes how to use the [KeyStore](https://en.wikipedia.org/wiki/Java_KeyStore) type configuration for TLS.

## TLS encryption with KeyStore configuration

### Generate TLS key and certificate

The first step of deploying TLS is to generate the key and the certificate for each machine in the cluster. You can use Java's `keytool` utility to accomplish this task.
We will generate the key into a temporary keystore initially for the broker, so that we can export and sign it later with the CA.

```shell
keytool -keystore broker.keystore.jks -alias localhost -validity {validity} -genkeypair -keyalg RSA
```

You need to specify two parameters in the above command:

1. `keystore`: the keystore file that stores the certificate. The *keystore* file contains the private key of the certificate; hence, it needs to be kept safely.
2. `validity`: the valid time of the certificate in days.

> Ensure that the common name (CN) matches exactly the fully qualified domain name (FQDN) of the server.
The client compares the CN with the DNS domain name to ensure that it is indeed connecting to the desired server, not a malicious one.

### Creating your own CA

After the first step, each broker in the cluster has a public-private key pair and a certificate to identify the machine. The certificate, however, is unsigned, which means that an attacker can create such a certificate to pretend to be any machine.

Therefore, it is important to prevent forged certificates by signing them for each machine in the cluster. A certificate authority (CA) is responsible for signing certificates. A CA works like a government that issues passports: the government stamps (signs) each passport so that the passport becomes difficult to forge. Other governments verify the stamps to ensure the passport is authentic. Similarly, the CA signs the certificates, and the cryptography guarantees that a signed certificate is computationally difficult to forge. Thus, as long as the CA is a genuine and trusted authority, the clients have high assurance that they are connecting to the authentic machines.

```shell
openssl req -new -x509 -keyout ca-key -out ca-cert -days 365
```

The generated CA is simply a *public-private* key pair and certificate, and it is intended to sign other certificates.

The next step is to add the generated CA to the clients' truststore so that the clients can trust this CA:

```shell
keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert
```

NOTE: If you configure the brokers to require client authentication by setting `tlsRequireTrustedClientCertOnConnect` to `true` in the broker configuration, then you must also provide a truststore for the brokers, and it should contain all the CA certificates that signed the clients' keys.

```shell
keytool -keystore broker.truststore.jks -alias CARoot -import -file ca-cert
```

In contrast to the keystore, which stores each machine's own identity, the truststore of a client stores all the certificates that the client should trust. Importing a certificate into one's truststore also means trusting all certificates that are signed by that certificate. As in the analogy above, trusting the government (CA) also means trusting all passports (certificates) that it has issued. This attribute is called the chain of trust, and it is particularly useful when deploying TLS on a large BookKeeper cluster. You can sign all certificates in the cluster with a single CA, and have all machines share the same truststore that trusts the CA. That way all machines can authenticate all other machines.

### Signing the certificate

The next step is to sign all certificates in the keystore with the CA we generated.
First, you need to export the certificate from the keystore:

```shell
keytool -keystore broker.keystore.jks -alias localhost -certreq -file cert-file
```

Then sign it with the CA:

```shell
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days {validity} -CAcreateserial -passin pass:{ca-password}
```

Finally, you need to import both the certificate of the CA and the signed certificate into the keystore:

```shell
keytool -keystore broker.keystore.jks -alias CARoot -import -file ca-cert
keytool -keystore broker.keystore.jks -alias localhost -import -file cert-signed
```

The definitions of the parameters are the following:

1. `keystore`: the location of the keystore
2. `ca-cert`: the certificate of the CA
3. `ca-key`: the private key of the CA
4. `ca-password`: the passphrase of the CA
5. `cert-file`: the exported, unsigned certificate of the broker
6. `cert-signed`: the signed certificate of the broker

### Configuring brokers

Brokers enable TLS by providing valid `brokerServicePortTls` and `webServicePortTls` values, and they also need `tlsEnabledWithKeyStore` set to `true` to use the KeyStore type configuration. Besides this, the KeyStore path, KeyStore password, TrustStore path, and TrustStore password need to be provided. Since the broker creates an internal client/admin client to communicate with other brokers, you also need to provide configuration for them, similar to how you configure the outside client/admin client. If `tlsRequireTrustedClientCertOnConnect` is `true`, the broker rejects the connection if the client certificate is not trusted.

The following TLS configs are needed on the broker side:

```properties
tlsEnabledWithKeyStore=true
# key store
tlsKeyStoreType=JKS
tlsKeyStore=/var/private/tls/broker.keystore.jks
tlsKeyStorePassword=brokerpw

# trust store
tlsTrustStoreType=JKS
tlsTrustStore=/var/private/tls/broker.truststore.jks
tlsTrustStorePassword=brokerpw

# internal client/admin-client config
brokerClientTlsEnabled=true
brokerClientTlsEnabledWithKeyStore=true
brokerClientTlsTrustStoreType=JKS
brokerClientTlsTrustStore=/var/private/tls/client.truststore.jks
brokerClientTlsTrustStorePassword=clientpw
```

NOTE: it is important to restrict access to the store files via filesystem permissions.

If you have configured TLS on the broker, you can disable the non-TLS ports by setting the values of the following configurations to empty, as below.

```
brokerServicePort=
webServicePort=
```

In this case, you need to set the following configurations.

```conf
brokerClientTlsEnabled=true // Set this to true
brokerClientTlsEnabledWithKeyStore=true  // Set this to true
brokerClientTlsTrustStore= // Set this to your desired value
brokerClientTlsTrustStorePassword= // Set this to your desired value
```

Optional settings that may be worth considering:

1. `tlsClientAuthentication=false`: Enable/Disable using TLS for authentication. This config, when enabled, authenticates the other end of the communication channel. It should be enabled on both brokers and clients for mutual TLS.
2. `tlsCiphers=[TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256]`: A cipher suite is a named combination of authentication, encryption, MAC and key exchange algorithms used to negotiate the security settings for a network connection using the TLS network protocol. By default, it is null. See [OpenSSL Ciphers](https://www.openssl.org/docs/man1.0.2/apps/ciphers.html) and [JDK Ciphers](http://docs.oracle.com/javase/8/docs/technotes/guides/security/StandardNames.html#ciphersuites).
3. `tlsProtocols=[TLSv1.3,TLSv1.2]`: list out the TLS protocols that you are going to accept from clients. By default, it is not set.

### Configuring Clients

This is similar to [TLS encryption configuration for clients with the PEM type](security-tls-transport.md#Client configuration). For a minimal configuration, you need to provide the TrustStore information. For example:

1. For [Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-cli-tools#pulsar-admin), [`pulsar-perf`](reference-cli-tools#pulsar-perf), and [`pulsar-client`](reference-cli-tools#pulsar-client), use the `conf/client.conf` config file in a Pulsar installation.

   ```properties
   webServiceUrl=https://broker.example.com:8443/
   brokerServiceUrl=pulsar+ssl://broker.example.com:6651/
   useKeyStoreTls=true
   tlsTrustStoreType=JKS
   tlsTrustStorePath=/var/private/tls/client.truststore.jks
   tlsTrustStorePassword=clientpw
   ```

1. For the Java client:

   ```java
   import org.apache.pulsar.client.api.PulsarClient;

   PulsarClient client = PulsarClient.builder()
       .serviceUrl("pulsar+ssl://broker.example.com:6651/")
       .enableTls(true)
       .useKeyStoreTls(true)
       .tlsTrustStorePath("/var/private/tls/client.truststore.jks")
       .tlsTrustStorePassword("clientpw")
       .allowTlsInsecureConnection(false)
       .build();
   ```

1. For the Java admin client:

   ```java
   PulsarAdmin admin = PulsarAdmin.builder().serviceHttpUrl("https://broker.example.com:8443")
       .useKeyStoreTls(true)
       .tlsTrustStorePath("/var/private/tls/client.truststore.jks")
       .tlsTrustStorePassword("clientpw")
       .allowTlsInsecureConnection(false)
       .build();
   ```

## TLS authentication with KeyStore configuration

This is similar to [TLS authentication with the PEM type](security-tls-authentication.md).

### Broker authentication config

`broker.conf`

```properties
# Configuration to enable authentication
authenticationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderTls

# this should be the CN for one of client keystore.
superUserRoles=admin

# Enable KeyStore type
tlsEnabledWithKeyStore=true
tlsRequireTrustedClientCertOnConnect=true

# key store
tlsKeyStoreType=JKS
tlsKeyStore=/var/private/tls/broker.keystore.jks
tlsKeyStorePassword=brokerpw

# trust store
tlsTrustStoreType=JKS
tlsTrustStore=/var/private/tls/broker.truststore.jks
tlsTrustStorePassword=brokerpw

# internal client/admin-client config
brokerClientTlsEnabled=true
brokerClientTlsEnabledWithKeyStore=true
brokerClientTlsTrustStoreType=JKS
brokerClientTlsTrustStore=/var/private/tls/client.truststore.jks
brokerClientTlsTrustStorePassword=clientpw
# internal auth config
brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationKeyStoreTls
brokerClientAuthenticationParameters={"keyStoreType":"JKS","keyStorePath":"/var/private/tls/client.keystore.jks","keyStorePassword":"clientpw"}
# currently the websocket service does not support the keystore type
webSocketServiceEnabled=false
```

### Client authentication config

Besides the TLS encryption configuration, the main work is configuring the KeyStore, which contains a valid CN as the client role, for the client. For example:
1. For [Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-cli-tools#pulsar-admin), [`pulsar-perf`](reference-cli-tools#pulsar-perf), and [`pulsar-client`](reference-cli-tools#pulsar-client), use the `conf/client.conf` config file in a Pulsar installation.

   ```properties
   webServiceUrl=https://broker.example.com:8443/
   brokerServiceUrl=pulsar+ssl://broker.example.com:6651/
   useKeyStoreTls=true
   tlsTrustStoreType=JKS
   tlsTrustStorePath=/var/private/tls/client.truststore.jks
   tlsTrustStorePassword=clientpw
   authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationKeyStoreTls
   authParams={"keyStoreType":"JKS","keyStorePath":"/path/to/keystorefile","keyStorePassword":"keystorepw"}
   ```

1. For the Java client:

   ```java
   import org.apache.pulsar.client.api.PulsarClient;

   PulsarClient client = PulsarClient.builder()
       .serviceUrl("pulsar+ssl://broker.example.com:6651/")
       .enableTls(true)
       .useKeyStoreTls(true)
       .tlsTrustStorePath("/var/private/tls/client.truststore.jks")
       .tlsTrustStorePassword("clientpw")
       .allowTlsInsecureConnection(false)
       .authentication(
           "org.apache.pulsar.client.impl.auth.AuthenticationKeyStoreTls",
           "keyStoreType:JKS,keyStorePath:/var/private/tls/client.keystore.jks,keyStorePassword:clientpw")
       .build();
   ```

1. For the Java admin client:

   ```java
   PulsarAdmin admin = PulsarAdmin.builder().serviceHttpUrl("https://broker.example.com:8443")
       .useKeyStoreTls(true)
       .tlsTrustStorePath("/var/private/tls/client.truststore.jks")
       .tlsTrustStorePassword("clientpw")
       .allowTlsInsecureConnection(false)
       .authentication(
           "org.apache.pulsar.client.impl.auth.AuthenticationKeyStoreTls",
           "keyStoreType:JKS,keyStorePath:/var/private/tls/client.keystore.jks,keyStorePassword:clientpw")
       .build();
   ```

## Enabling TLS Logging

You can enable TLS debug logging at the JVM level by starting the brokers and/or clients with the `javax.net.debug` system property. For example:

```shell
-Djavax.net.debug=all
```

You can find more details in the Oracle documentation on [debugging SSL/TLS connections](http://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/ReadDebug.html).
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/security-tls-transport.md b/site2/website/versioned_docs/version-2.9.1-deprecated/security-tls-transport.md
deleted file mode 100644
index 2cad17a78c3507..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/security-tls-transport.md
+++ /dev/null
@@ -1,295 +0,0 @@
---
id: security-tls-transport
title: Transport Encryption using TLS
sidebar_label: "Transport Encryption using TLS"
original_id: security-tls-transport
---

## TLS overview

By default, Apache Pulsar clients communicate with the Apache Pulsar service in plain text. This means that all data is sent in the clear. You can use TLS to encrypt this traffic and protect it from snooping by a man-in-the-middle attacker.

You can also configure TLS for both encryption and authentication. Use this guide to configure just TLS transport encryption and refer to [here](security-tls-authentication.md) for TLS authentication configuration. Alternatively, you can use [another authentication mechanism](security-athenz.md) on top of TLS transport encryption.

> Note that enabling TLS may impact performance due to encryption overhead.
## TLS concepts

TLS is a form of [public key cryptography](https://en.wikipedia.org/wiki/Public-key_cryptography). Encryption is performed using key pairs consisting of a public key and a private key: the public key encrypts the messages and the private key decrypts them.

To use TLS transport encryption, you need two kinds of key pairs: **server key pairs** and a **certificate authority**.

You can use a third kind of key pair, **client key pairs**, for [client authentication](security-tls-authentication.md).

You should store the **certificate authority** private key in a very secure location (a fully encrypted, disconnected, air-gapped computer). As for the certificate authority public key, the **trust cert**, you can freely share it.

For both client and server key pairs, the administrator first generates a private key and a certificate request, then uses the certificate authority private key to sign the certificate request, and finally generates a certificate. This certificate is the public key for the server/client key pair.

For TLS transport encryption, the clients can use the **trust cert** to verify that the server has a key pair that the certificate authority signed when the clients are talking to the server. A man-in-the-middle attacker does not have access to the certificate authority, so they cannot create a server with such a key pair.

For TLS authentication, the server uses the **trust cert** to verify that the client has a key pair that the certificate authority signed. The common name of the **client cert** is then used as the client's role token (see [Overview](security-overview.md)).

`Bouncy Castle Provider` provides the cipher suites and algorithms in Pulsar. If you need the [FIPS](https://www.bouncycastle.org/fips_faq.html) version of `Bouncy Castle Provider`, refer to the [Bouncy Castle page](security-bouncy-castle.md).

## Create TLS certificates

Creating TLS certificates for Pulsar involves creating a [certificate authority](#certificate-authority) (CA), [server certificate](#server-certificate), and [client certificate](#client-certificate).

Follow the guide below to set up a certificate authority. You can also refer to plenty of resources on the internet for more details. We recommend [this guide](https://jamielinux.com/docs/openssl-certificate-authority/index.html) for your detailed reference.

### Certificate authority

1. Create the certificate for the CA. You can use the CA to sign both the broker and client certificates. This ensures that each party will trust the others. You should store the CA in a very secure location (ideally completely disconnected from networks, air-gapped, and fully encrypted).

2. Enter the following commands to create a directory for your CA, and place [this openssl configuration file](https://github.com/apache/pulsar/tree/master/site2/website/static/examples/openssl.cnf) in the directory. You may want to modify the default answers for company name and department in the configuration file. Export the location of the CA directory to the environment variable CA_HOME. The configuration file uses this environment variable to find the rest of the files and directories that the CA needs.

```bash
mkdir my-ca
cd my-ca
wget https://raw.githubusercontent.com/apache/pulsar-site/main/site2/website/static/examples/openssl.cnf
export CA_HOME=$(pwd)
```

3. Enter the commands below to create the necessary directories, keys and certs.
- -```bash - -mkdir certs crl newcerts private -chmod 700 private/ -touch index.txt -echo 1000 > serial -openssl genrsa -aes256 -out private/ca.key.pem 4096 -chmod 400 private/ca.key.pem -openssl req -config openssl.cnf -key private/ca.key.pem \ - -new -x509 -days 7300 -sha256 -extensions v3_ca \ - -out certs/ca.cert.pem -chmod 444 certs/ca.cert.pem - -``` - -4. After you answer the question prompts, CA-related files are stored in the `./my-ca` directory. Within that directory: - -* `certs/ca.cert.pem` is the public certificate. This public certificates is meant to be distributed to all parties involved. -* `private/ca.key.pem` is the private key. You only need it when you are signing a new certificate for either broker or clients and you must safely guard this private key. - -### Server certificate - -Once you have created a CA certificate, you can create certificate requests and sign them with the CA. - -The following commands ask you a few questions and then create the certificates. When you are asked for the common name, you should match the hostname of the broker. You can also use a wildcard to match a group of broker hostnames, for example, `*.broker.usw.example.com`. This ensures that multiple machines can reuse the same certificate. - -:::tip - -Sometimes matching the hostname is not possible or makes no sense, -such as when you create the brokers with random hostnames, or you -plan to connect to the hosts via their IP. In these cases, you -should configure the client to disable TLS hostname verification. For more -details, you can see [the host verification section in client configuration](#hostname-verification). - -::: - -1. Enter the command below to generate the key. - -```bash - -openssl genrsa -out broker.key.pem 2048 - -``` - -The broker expects the key to be in [PKCS 8](https://en.wikipedia.org/wiki/PKCS_8) format, so enter the following command to convert it. - -```bash - -openssl pkcs8 -topk8 -inform PEM -outform PEM \ - -in broker.key.pem -out broker.key-pk8.pem -nocrypt - -``` - -2. Enter the following command to generate the certificate request. - -```bash - -openssl req -config openssl.cnf \ - -key broker.key.pem -new -sha256 -out broker.csr.pem - -``` - -3. Sign it with the certificate authority by entering the command below. - -```bash - -openssl ca -config openssl.cnf -extensions server_cert \ - -days 1000 -notext -md sha256 \ - -in broker.csr.pem -out broker.cert.pem - -``` - -At this point, you have a cert, `broker.cert.pem`, and a key, `broker.key-pk8.pem`, which you can use along with `ca.cert.pem` to configure TLS transport encryption for your broker and proxy nodes. - -## Configure broker - -To configure a Pulsar [broker](reference-terminology.md#broker) to use TLS transport encryption, you need to make some changes to `broker.conf`, which locates in the `conf` directory of your [Pulsar installation](getting-started-standalone.md). 
Add these values to the configuration file (substituting the appropriate certificate paths where necessary):

```properties
tlsEnabled=true
tlsRequireTrustedClientCertOnConnect=true
tlsCertificateFilePath=/path/to/broker.cert.pem
tlsKeyFilePath=/path/to/broker.key-pk8.pem
tlsTrustCertsFilePath=/path/to/ca.cert.pem
```

> You can find a full list of parameters available in the `conf/broker.conf` file,
> as well as the default values for those parameters, in [Broker Configuration](reference-configuration.md#broker)
>
### TLS Protocol Version and Cipher

You can configure the broker (and proxy) to require specific TLS protocol versions and ciphers for TLS negotiation. You can use the TLS protocol versions and ciphers to stop clients from requesting downgraded TLS protocol versions or ciphers that may have weaknesses.

Both the TLS protocol version and cipher properties can take multiple values, separated by commas. The possible values for protocol version and ciphers depend on the TLS provider that you are using. Pulsar uses OpenSSL if it is available, and otherwise defaults back to the JDK implementation.

```properties
tlsProtocols=TLSv1.3,TLSv1.2
tlsCiphers=TLS_DH_RSA_WITH_AES_256_GCM_SHA384,TLS_DH_RSA_WITH_AES_256_CBC_SHA
```

OpenSSL currently supports ```TLSv1.1```, ```TLSv1.2``` and ```TLSv1.3``` for the protocol version. You can acquire a list of supported ciphers from the openssl ciphers command, i.e. ```openssl ciphers -tls1_3```.

For JDK 11, you can obtain a list of supported values from the documentation:
- [TLS protocol](https://docs.oracle.com/en/java/javase/11/security/oracle-providers.html#GUID-7093246A-31A3-4304-AC5F-5FB6400405E2__SUNJSSEPROVIDERPROTOCOLPARAMETERS-BBF75009)
- [Ciphers](https://docs.oracle.com/en/java/javase/11/security/oracle-providers.html#GUID-7093246A-31A3-4304-AC5F-5FB6400405E2__SUNJSSE_CIPHER_SUITES)

## Proxy Configuration

Proxies need to configure TLS in two directions: for clients connecting to the proxy, and for the proxy connecting to brokers.

```properties
# For clients connecting to the proxy
tlsEnabledInProxy=true
tlsCertificateFilePath=/path/to/broker.cert.pem
tlsKeyFilePath=/path/to/broker.key-pk8.pem
tlsTrustCertsFilePath=/path/to/ca.cert.pem

# For the proxy to connect to brokers
tlsEnabledWithBroker=true
brokerClientTrustCertsFilePath=/path/to/ca.cert.pem
```

## Client configuration

When you enable TLS transport encryption, you need to configure the client to use ```https://``` and port 8443 for the web service URL, and ```pulsar+ssl://``` and port 6651 for the broker service URL.

As the server certificate that you generated above does not belong to any of the default trust chains, you also need to either specify the path to the **trust cert** (recommended), or tell the client to allow untrusted server certs.

### Hostname verification

Hostname verification is a TLS security feature whereby a client can refuse to connect to a server if the "CommonName" does not match the hostname to which the client is connecting. By default, Pulsar clients disable hostname verification, as it requires that each broker has a DNS record and a unique cert.

Moreover, as the administrator has full control of the certificate authority, a bad actor is unlikely to be able to pull off a man-in-the-middle attack. `allowInsecureConnection` allows the client to connect to servers whose cert has not been signed by an approved CA.
The client disables `allowInsecureConnection` by default, and you should keep it disabled in production environments. As long as `allowInsecureConnection` is disabled, a man-in-the-middle attack requires that the attacker has access to the CA.

One scenario where you may want to enable hostname verification is where you have multiple proxy nodes behind a VIP, and the VIP has a DNS record, for example, pulsar.mycompany.com. In this case, you can generate a TLS cert with pulsar.mycompany.com as the "CommonName," and then enable hostname verification on the client.

The examples below show that hostname verification is disabled for the CLI tools/Java/Python/C++/Node.js/C# clients by default.

### CLI tools

[Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-cli-tools.md#pulsar-admin), [`pulsar-perf`](reference-cli-tools.md#pulsar-perf), and [`pulsar-client`](reference-cli-tools.md#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation.

You need to add the following parameters to that file to use TLS transport with the CLI tools of Pulsar:

```properties
webServiceUrl=https://broker.example.com:8443/
brokerServiceUrl=pulsar+ssl://broker.example.com:6651/
useTls=true
tlsAllowInsecureConnection=false
tlsTrustCertsFilePath=/path/to/ca.cert.pem
tlsEnableHostnameVerification=false
```

#### Java client

```java
import org.apache.pulsar.client.api.PulsarClient;

PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar+ssl://broker.example.com:6651/")
    .enableTls(true)
    .tlsTrustCertsFilePath("/path/to/ca.cert.pem")
    .enableTlsHostnameVerification(false) // false by default, in any case
    .allowTlsInsecureConnection(false) // false by default, in any case
    .build();
```

#### Python client

```python
from pulsar import Client

client = Client("pulsar+ssl://broker.example.com:6651/",
                tls_hostname_verification=False,
                tls_trust_certs_file_path="/path/to/ca.cert.pem",
                tls_allow_insecure_connection=False)  # defaults to False from v2.2.0 onwards
```

#### C++ client

```c++
#include <pulsar/Client.h>

pulsar::ClientConfiguration config = pulsar::ClientConfiguration();
config.setUseTls(true);  // shouldn't be needed soon
config.setTlsTrustCertsFilePath(caPath);
config.setTlsAllowInsecureConnection(false);
config.setAuth(pulsar::AuthTls::create(clientPublicKeyPath, clientPrivateKeyPath));
config.setValidateHostName(false);
```

#### Node.js client

```JavaScript
const Pulsar = require('pulsar-client');

(async () => {
  const client = new Pulsar.Client({
    serviceUrl: 'pulsar+ssl://broker.example.com:6651/',
    tlsTrustCertsFilePath: '/path/to/ca.cert.pem',
    useTls: true,
    tlsValidateHostname: false,
    tlsAllowInsecureConnection: false,
  });
})();
```

#### C# client

```c#
var certificate = new X509Certificate2("ca.cert.pem");
var client = PulsarClient.Builder()
    .TrustedCertificateAuthority(certificate) // If the CA is not trusted on the host, you can add it explicitly.
    .VerifyCertificateAuthority(true) // Default is 'true'
    .VerifyCertificateName(false) // Default is 'false'
    .Build();
```

> Note that `VerifyCertificateName` refers to the configuration of hostname verification in the C# client.
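Independent of any particular client, one quick way to sanity-check what the broker presents on its TLS port is OpenSSL's built-in client; the hostname, port, and CA path below are placeholders:

```shell
# prints the broker's certificate chain and the TLS handshake result
openssl s_client -connect broker.example.com:6651 -CAfile /path/to/ca.cert.pem </dev/null
```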
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/security-token-admin.md b/site2/website/versioned_docs/version-2.9.1-deprecated/security-token-admin.md
deleted file mode 100644
index a265f6320d28fe..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/security-token-admin.md
+++ /dev/null
@@ -1,183 +0,0 @@
---
id: security-token-admin
title: Token authentication admin
sidebar_label: "Token authentication admin"
original_id: security-token-admin
---

## Token Authentication Overview

Pulsar supports authenticating clients using security tokens that are based on [JSON Web Tokens](https://jwt.io/introduction/) ([RFC-7519](https://tools.ietf.org/html/rfc7519)).

Tokens are used to identify a Pulsar client and associate it with some "principal" (or "role"), which
is then granted permissions to perform certain actions (for example, publishing to or consuming from a topic).

A user is typically given a token string by an administrator (or some automated service).

The compact representation of a signed JWT is a string that looks like:

```

 eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY

```

The application specifies the token when creating the client instance. An alternative is to pass
a "token supplier", that is, a function that returns the token whenever the client library
needs one.

> #### Always use TLS transport encryption
> Sending a token is equivalent to sending a password over the wire. It is strongly recommended to
> always use TLS encryption when talking to the Pulsar service. See
> [Transport Encryption using TLS](security-tls-transport.md)

## Secret vs Public/Private keys

JWT supports two different kinds of keys for generating and validating tokens:

 * Symmetric: there is a single ***Secret*** key that is used both to generate and validate tokens.
 * Asymmetric: there is a pair of keys.
     - The ***Private*** key is used to generate tokens.
     - The ***Public*** key is used to validate tokens.

### Secret key

When using a secret key, the administrator creates the key and uses it
to generate the client tokens. The same key is also configured on the
brokers to allow them to validate the clients.

#### Creating a secret key

> The output file is generated in the root of your Pulsar installation directory. You can also provide an absolute path for the output file.

```shell

$ bin/pulsar tokens create-secret-key --output my-secret.key

```

To generate a base64-encoded secret key:

```shell

$ bin/pulsar tokens create-secret-key --output /opt/my-secret.key --base64

```

### Public/Private keys

With public/private keys, you need to create a pair of keys. Pulsar supports all algorithms supported by the Java JWT library shown [here](https://github.com/jwtk/jjwt#signature-algorithms-keys).

#### Creating a key pair

> The output files are generated in the root of your Pulsar installation directory. You can also provide an absolute path for the output files.

```shell

$ bin/pulsar tokens create-key-pair --output-private-key my-private.key --output-public-key my-public.key

```

 * `my-private.key` is stored in a safe location and only used by the administrator to generate
   new tokens.
 * `my-public.key` is distributed to all Pulsar brokers. This file can be publicly shared without
   any security concern.

## Generating tokens

A token is the credential associated with a user. The association is done through the "principal",
or "role".
In the case of JWTs, this field is typically referred to as the **subject**, though
it is exactly the same concept.

The generated token must then have a **subject** field set.

```shell

$ bin/pulsar tokens create --secret-key file:///path/to/my-secret.key \
 --subject test-user

```

This command prints the token string to stdout.

Similarly, you can create a token by passing the "private" key:

```shell

$ bin/pulsar tokens create --private-key file:///path/to/my-private.key \
 --subject test-user

```

Finally, a token can also be created with a pre-defined TTL. After that time,
the token is automatically invalidated.

```shell

$ bin/pulsar tokens create --secret-key file:///path/to/my-secret.key \
 --subject test-user \
 --expiry-time 1y

```

## Authorization

The token itself does not have any permissions associated with it; these are determined by the
authorization engine. Once the token is created, you can grant permissions for this role to perform certain
actions. For example:

```shell

$ bin/pulsar-admin namespaces grant-permission my-tenant/my-namespace \
 --role test-user \
 --actions produce,consume

```

## Enabling Token Authentication ...

### ... on Brokers

To configure brokers to authenticate clients, put the following in `broker.conf`:

```properties

# Configuration to enable authentication and authorization
authenticationEnabled=true
authorizationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken

# If using secret key (Note: key files must be DER-encoded)
tokenSecretKey=file:///path/to/secret.key
# The key can also be passed inline:
# tokenSecretKey=data:;base64,FLFyW0oLJ2Fi22KKCm21J18mbAdztfSHN/lAT5ucEKU=

# If using public/private (Note: key files must be DER-encoded)
# tokenPublicKey=file:///path/to/public.key

```

### ... on Proxies

To configure proxies to authenticate clients, put the following in `proxy.conf`.

The proxy has its own token that it uses when talking to brokers. The role associated with this
token's key pair should be configured in the ``proxyRoles`` setting of the brokers. See the [authorization guide](security-authorization.md) for more details.

```properties

# For clients connecting to the proxy
authenticationEnabled=true
authorizationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken
tokenSecretKey=file:///path/to/secret.key

# For the proxy to connect to brokers
brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken
brokerClientAuthenticationParameters={"token":"eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0LXVzZXIifQ.9OHgE9ZUDeBTZs7nSMEFIuGNEX18FLR3qvy8mqxSxXw"}
# Or, alternatively, read token from file
# brokerClientAuthenticationParameters=file:///path/to/proxy-token.txt

```

diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/sql-deployment-configurations.md b/site2/website/versioned_docs/version-2.9.1-deprecated/sql-deployment-configurations.md
deleted file mode 100644
index 02d8bc78f6cb9d..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/sql-deployment-configurations.md
+++ /dev/null
@@ -1,199 +0,0 @@
---
id: sql-deployment-configurations
title: Pulsar SQL configuration and deployment
sidebar_label: "Configuration and deployment"
original_id: sql-deployment-configurations
---

You can configure the Presto Pulsar connector and deploy a cluster with the following instructions.
## Configure Presto Pulsar Connector
You can configure the Presto Pulsar connector in the `${project.root}/conf/presto/catalog/pulsar.properties` properties file. The configuration for the connector and the default values are as follows.

```properties

# name of the connector to be displayed in the catalog
connector.name=pulsar

# the URL of the Pulsar broker service
pulsar.web-service-url=http://localhost:8080

# URI of the ZooKeeper cluster
pulsar.zookeeper-uri=localhost:2181

# minimum number of entries to read at a single time
pulsar.entry-read-batch-size=100

# default number of splits to use per query
pulsar.target-num-splits=4

```

You can connect Presto to a Pulsar cluster with multiple hosts. To configure multiple hosts for brokers, add multiple URLs to `pulsar.web-service-url`. To configure multiple hosts for ZooKeeper, add multiple URIs to `pulsar.zookeeper-uri`. The following is an example.

```

pulsar.web-service-url=http://localhost:8080,localhost:8081,localhost:8082
pulsar.zookeeper-uri=localhost1,localhost2:2181

```

**Note: by default, Pulsar SQL does not read the last message in a topic.** This is by design and is controlled by settings: by default, the BookKeeper LAC (LastAddConfirmed) only advances when subsequent entries are added. If no subsequent entry is added, the last written entry is not visible to readers until the ledger is closed. This is not a problem for Pulsar itself, which reads through the managed ledger, but Pulsar SQL reads directly from the BookKeeper ledger.

If you want to get the last message in a topic, set the following configurations:

1. For the broker configuration, set `bookkeeperExplicitLacIntervalInMills` > 0 in `broker.conf` or `standalone.conf`.

2. For the Presto configuration, set `pulsar.bookkeeper-explicit-interval` > 0 and `pulsar.bookkeeper-use-v2-protocol=false`.

However, using the BookKeeper V3 protocol introduces additional GC overhead to BookKeeper, as it uses Protobuf.

## Query data from existing Presto clusters

If you already have a Presto cluster, you can copy the Presto Pulsar connector plugin to your existing cluster. Download the archived plugin package with the following command.

```bash

$ wget pulsar:binary_release_url

```

## Deploy a new cluster

Since Pulsar SQL is powered by [Trino (formerly Presto SQL)](https://trino.io), the deployment configuration for the Pulsar SQL worker is the same as for Presto.

:::note

For how to set up a standalone single node environment, refer to [Query data](sql-getting-started.md).

:::

You can use the same CLI arguments as the Presto launcher.
```bash

$ ./bin/pulsar sql-worker --help
Usage: launcher [options] command

Commands: run, start, stop, restart, kill, status

Options:
 -h, --help show this help message and exit
 -v, --verbose Run verbosely
 --etc-dir=DIR Defaults to INSTALL_PATH/etc
 --launcher-config=FILE
 Defaults to INSTALL_PATH/bin/launcher.properties
 --node-config=FILE Defaults to ETC_DIR/node.properties
 --jvm-config=FILE Defaults to ETC_DIR/jvm.config
 --config=FILE Defaults to ETC_DIR/config.properties
 --log-levels-file=FILE
 Defaults to ETC_DIR/log.properties
 --data-dir=DIR Defaults to INSTALL_PATH
 --pid-file=FILE Defaults to DATA_DIR/var/run/launcher.pid
 --launcher-log-file=FILE
 Defaults to DATA_DIR/var/log/launcher.log (only in
 daemon mode)
 --server-log-file=FILE
 Defaults to DATA_DIR/var/log/server.log (only in
 daemon mode)
 -D NAME=VALUE Set a Java system property

```

The default configuration for the cluster is located in `${project.root}/conf/presto`. You can customize your deployment by modifying the default configuration.

You can set the worker to read from a different configuration directory, or set a different directory to write data to.

```bash

$ ./bin/pulsar sql-worker run --etc-dir /tmp/incubator-pulsar/conf/presto --data-dir /tmp/presto-1

```

You can start the worker as a daemon process.

```bash

$ ./bin/pulsar sql-worker start

```

### Deploy a cluster on multiple nodes

You can deploy a Pulsar SQL cluster or Presto cluster on multiple nodes. The following example shows how to deploy a cluster on a three-node cluster.

1. Copy the Pulsar binary distribution to three nodes.

The first node runs as the Presto coordinator. The minimal configuration requirement in the `${project.root}/conf/presto/config.properties` file is as follows.

```properties

coordinator=true
node-scheduler.include-coordinator=true
http-server.http.port=8080
query.max-memory=50GB
query.max-memory-per-node=1GB
discovery-server.enabled=true
discovery.uri=

```

The other two nodes serve as worker nodes; you can use the following configuration for them.

```properties

coordinator=false
http-server.http.port=8080
query.max-memory=50GB
query.max-memory-per-node=1GB
discovery.uri=

```

2. Modify the `pulsar.web-service-url` and `pulsar.zookeeper-uri` configurations in the `${project.root}/conf/presto/catalog/pulsar.properties` file accordingly for the three nodes.

3. Start the coordinator node.

```

$ ./bin/pulsar sql-worker run

```

4. Start the worker nodes.

```

$ ./bin/pulsar sql-worker run

```

5. Start the SQL CLI and check the status of your cluster.

```bash

$ ./bin/pulsar sql --server

```

6. Check the status of your nodes.

```bash

presto> SELECT * FROM system.runtime.nodes;
 node_id | http_uri | node_version | coordinator | state
---------+-------------------------+--------------+-------------+--------
 1 | http://192.168.2.1:8081 | testversion | true | active
 3 | http://192.168.2.2:8081 | testversion | false | active
 2 | http://192.168.2.3:8081 | testversion | false | active

```

For more information about deployment in Presto, refer to [Presto deployment](https://trino.io/docs/current/installation/deployment.html).

:::note

The broker does not advance the LAC, so when Pulsar SQL bypasses the broker to query data, it can only read entries up to the LAC that all the bookies have learned.
You can enable the broker to periodically write the LAC by setting `bookkeeperExplicitLacIntervalInMills` in `broker.conf`.

:::

diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/sql-getting-started.md b/site2/website/versioned_docs/version-2.9.1-deprecated/sql-getting-started.md
deleted file mode 100644
index 8a5cd7199b365a..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/sql-getting-started.md
+++ /dev/null
@@ -1,187 +0,0 @@
---
id: sql-getting-started
title: Query data with Pulsar SQL
sidebar_label: "Query data"
original_id: sql-getting-started
---

Before querying data in Pulsar, you need to install Pulsar and the built-in connectors.

## Requirements
1. Install [Pulsar](getting-started-standalone.md#install-pulsar-standalone).
2. Install Pulsar [built-in connectors](getting-started-standalone.md#install-builtin-connectors-optional).

## Query data in Pulsar
To query data in Pulsar with Pulsar SQL, complete the following steps.

1. Start a Pulsar standalone cluster.

```bash

./bin/pulsar standalone

```

2. Start a Pulsar SQL worker.

```bash

./bin/pulsar sql-worker run

```

3. After the Pulsar standalone cluster and the SQL worker are initialized, run the SQL CLI.

```bash

./bin/pulsar sql

```

4. Test with SQL commands.

```bash

presto> show catalogs;
 Catalog
---------
 pulsar
 system
(2 rows)

Query 20180829_211752_00004_7qpwh, FINISHED, 1 node
Splits: 19 total, 19 done (100.00%)
0:00 [0 rows, 0B] [0 rows/s, 0B/s]


presto> show schemas in pulsar;
        Schema
-----------------------
 information_schema
 public/default
 public/functions
 sample/standalone/ns1
(4 rows)

Query 20180829_211818_00005_7qpwh, FINISHED, 1 node
Splits: 19 total, 19 done (100.00%)
0:00 [4 rows, 89B] [21 rows/s, 471B/s]


presto> show tables in pulsar."public/default";
 Table
-------
(0 rows)

Query 20180829_211839_00006_7qpwh, FINISHED, 1 node
Splits: 19 total, 19 done (100.00%)
0:00 [0 rows, 0B] [0 rows/s, 0B/s]

```

Since there is no data in Pulsar yet, no records are returned.

5. Start the built-in connector _DataGeneratorSource_ and ingest some mock data.

```bash

./bin/pulsar-admin sources create --name generator --destinationTopicName generator_test --source-type data-generator

```

Then you can query a topic in the namespace "public/default".

```bash

presto> show tables in pulsar."public/default";
     Table
----------------
 generator_test
(1 row)

Query 20180829_213202_00000_csyeu, FINISHED, 1 node
Splits: 19 total, 19 done (100.00%)
0:02 [1 rows, 38B] [0 rows/s, 17B/s]

```

You can now query the data within the topic "generator_test".
```bash

presto> select * from pulsar."public/default".generator_test;

 firstname | middlename | lastname | email | username | password | telephonenumber | age | companyemail | nationalidentitycardnumber |
-------------+-------------+-------------+----------------------------------+--------------+----------+-----------------+-----+-----------------------------------------------+----------------------------+
 Genesis | Katherine | Wiley | genesis.wiley@gmail.com | genesisw | y9D2dtU3 | 959-197-1860 | 71 | genesis.wiley@interdemconsulting.eu | 880-58-9247 |
 Brayden | | Stanton | brayden.stanton@yahoo.com | braydens | ZnjmhXik | 220-027-867 | 81 | brayden.stanton@supermemo.eu | 604-60-7069 |
 Benjamin | Julian | Velasquez | benjamin.velasquez@yahoo.com | benjaminv | 8Bc7m3eb | 298-377-0062 | 21 | benjamin.velasquez@hostesltd.biz | 213-32-5882 |
 Michael | Thomas | Donovan | donovan@mail.com | michaeld | OqBm9MLs | 078-134-4685 | 55 | michael.donovan@memortech.eu | 443-30-3442 |
 Brooklyn | Avery | Roach | brooklynroach@yahoo.com | broach | IxtBLafO | 387-786-2998 | 68 | brooklyn.roach@warst.biz | 085-88-3973 |
 Skylar | | Bradshaw | skylarbradshaw@yahoo.com | skylarb | p6eC6cKy | 210-872-608 | 96 | skylar.bradshaw@flyhigh.eu | 453-46-0334 |
.
.
.

```

You can query the mock data.

## Query your own data
If you want to query your own data, you need to ingest it first. You can write a simple producer that writes custom-defined data to Pulsar. The following is an example.

```java

import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.impl.schema.AvroSchema;

public class TestProducer {

    public static class Foo {
        private int field1 = 1;
        private String field2;
        private long field3;

        public Foo() {
        }

        public int getField1() {
            return field1;
        }

        public void setField1(int field1) {
            this.field1 = field1;
        }

        public String getField2() {
            return field2;
        }

        public void setField2(String field2) {
            this.field2 = field2;
        }

        public long getField3() {
            return field3;
        }

        public void setField3(long field3) {
            this.field3 = field3;
        }
    }

    public static void main(String[] args) throws Exception {
        PulsarClient pulsarClient = PulsarClient.builder().serviceUrl("pulsar://localhost:6650").build();
        Producer<Foo> producer = pulsarClient.newProducer(AvroSchema.of(Foo.class)).topic("test_topic").create();

        for (int i = 0; i < 1000; i++) {
            Foo foo = new Foo();
            foo.setField1(i);
            foo.setField2("foo" + i);
            foo.setField3(System.currentTimeMillis());
            producer.newMessage().value(foo).send();
        }
        producer.close();
        pulsarClient.close();
    }
}

```

diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/sql-overview.md b/site2/website/versioned_docs/version-2.9.1-deprecated/sql-overview.md
deleted file mode 100644
index 0235d0256c5f19..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/sql-overview.md
+++ /dev/null
@@ -1,18 +0,0 @@
---
id: sql-overview
title: Pulsar SQL Overview
sidebar_label: "Overview"
original_id: sql-overview
---

Apache Pulsar is used to store streams of event data, and the event data is structured with predefined fields. With the implementation of the [Schema Registry](schema-get-started.md), you can store structured data in Pulsar and query the data by using [Trino (formerly Presto SQL)](https://trino.io/).

As the core of Pulsar SQL, the Presto Pulsar connector enables Presto workers within a Presto cluster to query data from Pulsar.
![The Pulsar consumer and reader interfaces](/assets/pulsar-sql-arch-2.png)

Query performance is efficient and highly scalable, because Pulsar adopts a [two-level, segment-based architecture](concepts-architecture-overview.md#apache-bookkeeper).

Topics in Pulsar are stored as segments in [Apache BookKeeper](https://bookkeeper.apache.org/). Each topic segment is replicated to some BookKeeper nodes, which enables concurrent reads and high read throughput. You can configure the number of BookKeeper nodes, and the default number is `3`. The Presto Pulsar connector reads data directly from BookKeeper, so Presto workers can read concurrently from a horizontally scalable number of BookKeeper nodes.

![The Pulsar consumer and reader interfaces](/assets/pulsar-sql-arch-1.png)

diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/sql-rest-api.md b/site2/website/versioned_docs/version-2.9.1-deprecated/sql-rest-api.md
deleted file mode 100644
index c92fd62f7d8703..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/sql-rest-api.md
+++ /dev/null
@@ -1,192 +0,0 @@
---
id: sql-rest-api
title: Pulsar SQL REST APIs
sidebar_label: "REST APIs"
original_id: sql-rest-api
---

This section lists resources that make up the Presto REST API v1.

## Request for Presto services

All requests for Presto services should use version v1 of the Presto REST API.

To request services, use the explicit URL `http://presto.service:8081/v1`. Replace `presto.service:8081` with your real Presto address before sending requests.

`POST` requests require the `X-Presto-User` header. If you use authentication, you must use the same `username` that is specified in the authentication configuration. If you do not use authentication, you can specify anything for `username`.

```properties

X-Presto-User: username

```

For more information about headers, refer to [PrestoHeaders](https://github.com/trinodb/trino).

## Schema

You pass the SQL statement in the HTTP body. All data is received as a JSON document that might contain a `nextUri` link. If the received JSON document contains a `nextUri` link, the request continues with the `nextUri` link until the received data no longer contains one. If no error is returned, the query completed successfully. If an `error` field is displayed in `stats`, the query failed.

The following is an example of `show catalogs`. The query continues until the received JSON document no longer contains a `nextUri` link. Since no `error` is displayed in `stats`, the query completed successfully.
- -```powershell - -➜ ~ curl --header "X-Presto-User: test-user" --request POST --data 'show catalogs' http://localhost:8081/v1/statement -{ - "infoUri" : "http://localhost:8081/ui/query.html?20191113_033653_00006_dg6hb", - "stats" : { - "queued" : true, - "nodes" : 0, - "userTimeMillis" : 0, - "cpuTimeMillis" : 0, - "wallTimeMillis" : 0, - "processedBytes" : 0, - "processedRows" : 0, - "runningSplits" : 0, - "queuedTimeMillis" : 0, - "queuedSplits" : 0, - "completedSplits" : 0, - "totalSplits" : 0, - "scheduled" : false, - "peakMemoryBytes" : 0, - "state" : "QUEUED", - "elapsedTimeMillis" : 0 - }, - "id" : "20191113_033653_00006_dg6hb", - "nextUri" : "http://localhost:8081/v1/statement/20191113_033653_00006_dg6hb/1" -} - -➜ ~ curl http://localhost:8081/v1/statement/20191113_033653_00006_dg6hb/1 -{ - "infoUri" : "http://localhost:8081/ui/query.html?20191113_033653_00006_dg6hb", - "nextUri" : "http://localhost:8081/v1/statement/20191113_033653_00006_dg6hb/2", - "id" : "20191113_033653_00006_dg6hb", - "stats" : { - "state" : "PLANNING", - "totalSplits" : 0, - "queued" : false, - "userTimeMillis" : 0, - "completedSplits" : 0, - "scheduled" : false, - "wallTimeMillis" : 0, - "runningSplits" : 0, - "queuedSplits" : 0, - "cpuTimeMillis" : 0, - "processedRows" : 0, - "processedBytes" : 0, - "nodes" : 0, - "queuedTimeMillis" : 1, - "elapsedTimeMillis" : 2, - "peakMemoryBytes" : 0 - } -} - -➜ ~ curl http://localhost:8081/v1/statement/20191113_033653_00006_dg6hb/2 -{ - "id" : "20191113_033653_00006_dg6hb", - "data" : [ - [ - "pulsar" - ], - [ - "system" - ] - ], - "infoUri" : "http://localhost:8081/ui/query.html?20191113_033653_00006_dg6hb", - "columns" : [ - { - "typeSignature" : { - "rawType" : "varchar", - "arguments" : [ - { - "kind" : "LONG_LITERAL", - "value" : 6 - } - ], - "literalArguments" : [], - "typeArguments" : [] - }, - "name" : "Catalog", - "type" : "varchar(6)" - } - ], - "stats" : { - "wallTimeMillis" : 104, - "scheduled" : true, - "userTimeMillis" : 14, - "progressPercentage" : 100, - "totalSplits" : 19, - "nodes" : 1, - "cpuTimeMillis" : 16, - "queued" : false, - "queuedTimeMillis" : 1, - "state" : "FINISHED", - "peakMemoryBytes" : 0, - "elapsedTimeMillis" : 111, - "processedBytes" : 0, - "processedRows" : 0, - "queuedSplits" : 0, - "rootStage" : { - "cpuTimeMillis" : 1, - "runningSplits" : 0, - "state" : "FINISHED", - "completedSplits" : 1, - "subStages" : [ - { - "cpuTimeMillis" : 14, - "runningSplits" : 0, - "state" : "FINISHED", - "completedSplits" : 17, - "subStages" : [ - { - "wallTimeMillis" : 7, - "subStages" : [], - "stageId" : "2", - "done" : true, - "nodes" : 1, - "totalSplits" : 1, - "processedBytes" : 22, - "processedRows" : 2, - "queuedSplits" : 0, - "userTimeMillis" : 1, - "cpuTimeMillis" : 1, - "runningSplits" : 0, - "state" : "FINISHED", - "completedSplits" : 1 - } - ], - "wallTimeMillis" : 92, - "nodes" : 1, - "done" : true, - "stageId" : "1", - "userTimeMillis" : 12, - "processedRows" : 2, - "processedBytes" : 51, - "queuedSplits" : 0, - "totalSplits" : 17 - } - ], - "wallTimeMillis" : 5, - "done" : true, - "nodes" : 1, - "stageId" : "0", - "userTimeMillis" : 1, - "processedRows" : 2, - "processedBytes" : 22, - "totalSplits" : 1, - "queuedSplits" : 0 - }, - "runningSplits" : 0, - "completedSplits" : 19 - } -} - -``` - -:::note - -Since the response data is not in sync with the query state from the perspective of clients, you cannot rely on the response data to determine whether the query completes. 
- -::: - -For more information about Presto REST API, refer to [Presto HTTP Protocol](https://github.com/prestosql/presto/wiki/HTTP-Protocol). diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/standalone-docker.md b/site2/website/versioned_docs/version-2.9.1-deprecated/standalone-docker.md deleted file mode 100644 index 1710ec819d7a4e..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/standalone-docker.md +++ /dev/null @@ -1,214 +0,0 @@ ---- -id: standalone-docker -title: Set up a standalone Pulsar in Docker -sidebar_label: "Run Pulsar in Docker" -original_id: standalone-docker ---- - -For local development and testing, you can run Pulsar in standalone mode on your own machine within a Docker container. - -If you have not installed Docker, download the [Community edition](https://www.docker.com/community-edition) and follow the instructions for your OS. - -## Start Pulsar in Docker - -* For MacOS, Linux, and Windows: - - ```shell - - $ docker run -it -p 6650:6650 -p 8080:8080 --mount source=pulsardata,target=/pulsar/data --mount source=pulsarconf,target=/pulsar/conf apachepulsar/pulsar:@pulsar:version@ bin/pulsar standalone - - ``` - -A few things to note about this command: - * The data, metadata, and configuration are persisted on Docker volumes in order to not start "fresh" every -time the container is restarted. For details on the volumes you can use `docker volume inspect ` - * For Docker on Windows make sure to configure it to use Linux containers - -If you start Pulsar successfully, you will see `INFO`-level log messages like this: - -``` - -08:18:30.970 [main] INFO org.apache.pulsar.broker.web.WebService - HTTP Service started at http://0.0.0.0:8080 -... -07:53:37.322 [main] INFO org.apache.pulsar.broker.PulsarService - messaging service is ready, bootstrap service port = 8080, broker url= pulsar://localhost:6650, cluster=standalone, configs=org.apache.pulsar.broker.ServiceConfiguration@98b63c1 -... - -``` - -:::tip - -When you start a local standalone cluster, a `public/default` - -::: - -namespace is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. -For more information, see [Topics](concepts-messaging.md#topics). - -## Use Pulsar in Docker - -Pulsar offers client libraries for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python.md) -and [C++](client-libraries-cpp.md). If you're running a local standalone cluster, you can -use one of these root URLs to interact with your cluster: - -* `pulsar://localhost:6650` -* `http://localhost:8080` - -The following example will guide you get started with Pulsar quickly by using the [Python client API](client-libraries-python.md) -client API. 
- -Install the Pulsar Python client library directly from [PyPI](https://pypi.org/project/pulsar-client/): - -```shell - -$ pip install pulsar-client - -``` - -### Consume a message - -Create a consumer and subscribe to the topic: - -```python - -import pulsar - -client = pulsar.Client('pulsar://localhost:6650') -consumer = client.subscribe('my-topic', - subscription_name='my-sub') - -while True: - msg = consumer.receive() - print("Received message: '%s'" % msg.data()) - consumer.acknowledge(msg) - -client.close() - -``` - -### Produce a message - -Now start a producer to send some test messages: - -```python - -import pulsar - -client = pulsar.Client('pulsar://localhost:6650') -producer = client.create_producer('my-topic') - -for i in range(10): - producer.send(('hello-pulsar-%d' % i).encode('utf-8')) - -client.close() - -``` - -## Get the topic statistics - -In Pulsar, you can use REST, Java, or command-line tools to control every aspect of the system. -For details on APIs, refer to [Admin API Overview](admin-api-overview.md). - -In the simplest example, you can use curl to probe the stats for a particular topic: - -```shell - -$ curl http://localhost:8080/admin/v2/persistent/public/default/my-topic/stats | python -m json.tool - -``` - -The output is something like this: - -```json - -{ - "msgRateIn": 0.0, - "msgThroughputIn": 0.0, - "msgRateOut": 1.8332950480217471, - "msgThroughputOut": 91.33142602871978, - "bytesInCounter": 7097, - "msgInCounter": 143, - "bytesOutCounter": 6607, - "msgOutCounter": 133, - "averageMsgSize": 0.0, - "msgChunkPublished": false, - "storageSize": 7097, - "backlogSize": 0, - "offloadedStorageSize": 0, - "publishers": [ - { - "accessMode": "Shared", - "msgRateIn": 0.0, - "msgThroughputIn": 0.0, - "averageMsgSize": 0.0, - "chunkedMessageRate": 0.0, - "producerId": 0, - "metadata": {}, - "address": "/127.0.0.1:35604", - "connectedSince": "2021-07-04T09:05:43.04788Z", - "clientVersion": "2.8.0", - "producerName": "standalone-2-5" - } - ], - "waitingPublishers": 0, - "subscriptions": { - "my-sub": { - "msgRateOut": 1.8332950480217471, - "msgThroughputOut": 91.33142602871978, - "bytesOutCounter": 6607, - "msgOutCounter": 133, - "msgRateRedeliver": 0.0, - "chunkedMessageRate": 0, - "msgBacklog": 0, - "backlogSize": 0, - "msgBacklogNoDelayed": 0, - "blockedSubscriptionOnUnackedMsgs": false, - "msgDelayed": 0, - "unackedMessages": 0, - "type": "Exclusive", - "activeConsumerName": "3c544f1daa", - "msgRateExpired": 0.0, - "totalMsgExpired": 0, - "lastExpireTimestamp": 0, - "lastConsumedFlowTimestamp": 1625389101290, - "lastConsumedTimestamp": 1625389546070, - "lastAckedTimestamp": 1625389546162, - "lastMarkDeleteAdvancedTimestamp": 1625389546163, - "consumers": [ - { - "msgRateOut": 1.8332950480217471, - "msgThroughputOut": 91.33142602871978, - "bytesOutCounter": 6607, - "msgOutCounter": 133, - "msgRateRedeliver": 0.0, - "chunkedMessageRate": 0.0, - "consumerName": "3c544f1daa", - "availablePermits": 867, - "unackedMessages": 0, - "avgMessagesPerEntry": 6, - "blockedConsumerOnUnackedMsgs": false, - "lastAckedTimestamp": 1625389546162, - "lastConsumedTimestamp": 1625389546070, - "metadata": {}, - "address": "/127.0.0.1:35472", - "connectedSince": "2021-07-04T08:58:21.287682Z", - "clientVersion": "2.8.0" - } - ], - "isDurable": true, - "isReplicated": false, - "allowOutOfOrderDelivery": false, - "consumersAfterMarkDeletePosition": {}, - "nonContiguousDeletedMessagesRanges": 0, - "nonContiguousDeletedMessagesRangesSerializedSize": 0, - "durable": true, - "replicated": 
false - } - }, - "replication": {}, - "deduplicationStatus": "Disabled", - "nonContiguousDeletedMessagesRanges": 0, - "nonContiguousDeletedMessagesRangesSerializedSize": 0 -} - -``` - diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/standalone.md b/site2/website/versioned_docs/version-2.9.1-deprecated/standalone.md deleted file mode 100644 index 9bff74d92c10bc..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/standalone.md +++ /dev/null @@ -1,268 +0,0 @@ ---- -id: standalone -title: Set up a standalone Pulsar locally -sidebar_label: "Run Pulsar locally" -original_id: standalone ---- - -For local development and testing, you can run Pulsar in standalone mode on your machine. The standalone mode includes a Pulsar broker, the necessary ZooKeeper and BookKeeper components running inside of a single Java Virtual Machine (JVM) process. - -> **Pulsar in production?** -> If you're looking to run a full production Pulsar installation, see the [Deploying a Pulsar instance](deploy-bare-metal.md) guide. - -## Install Pulsar standalone - -This tutorial guides you through every step of installing Pulsar locally. - -### System requirements - -Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install 64-bit JRE/JDK 8 or later versions, JRE/JDK 11 is recommended. - -:::tip - -By default, Pulsar allocates 2G JVM heap memory to start. It can be changed in `conf/pulsar_env.sh` file under `PULSAR_MEM`. This is extra options passed into JVM. - -::: - -:::note - -Broker is only supported on 64-bit JVM. - -::: - -### Install Pulsar using binary release - -To get started with Pulsar, download a binary tarball release in one of the following ways: - -* download from the Apache mirror (Pulsar @pulsar:version@ binary release) - -* download from the Pulsar [downloads page](pulsar:download_page_url) - -* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) - -* use [wget](https://www.gnu.org/software/wget): - - ```shell - - $ wget pulsar:binary_release_url - - ``` - -After you download the tarball, untar it and use the `cd` command to navigate to the resulting directory: - -```bash - -$ tar xvfz apache-pulsar-@pulsar:version@-bin.tar.gz -$ cd apache-pulsar-@pulsar:version@ - -``` - -#### What your package contains - -The Pulsar binary package initially contains the following directories: - -Directory | Contains -:---------|:-------- -`bin` | Pulsar's command-line tools, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/). -`conf` | Configuration files for Pulsar, including [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more. -`examples` | A Java JAR file containing [Pulsar Functions](functions-overview.md) example. -`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files used by Pulsar. -`licenses` | License files, in the`.txt` form, for various components of the Pulsar [codebase](https://github.com/apache/pulsar). - -These directories are created once you begin running Pulsar. - -Directory | Contains -:---------|:-------- -`data` | The data storage directory used by ZooKeeper and BookKeeper. -`instances` | Artifacts created for [Pulsar Functions](functions-overview.md). -`logs` | Logs created by the installation. 
:::tip

If you want to use builtin connectors and tiered storage offloaders, you can install them according to the following instructions:
* [Install builtin connectors (optional)](#install-builtin-connectors-optional)
* [Install tiered storage offloaders (optional)](#install-tiered-storage-offloaders-optional)
Otherwise, skip this step and perform the next step [Start Pulsar standalone](#start-pulsar-standalone). Pulsar can be successfully installed without installing builtin connectors and tiered storage offloaders.

:::

### Install builtin connectors (optional)

Since the `2.1.0-incubating` release, Pulsar has released a separate binary distribution containing all the `builtin` connectors.
To enable those `builtin` connectors, you can download the connectors tarball release in one of the following ways:

* download from the Apache mirror Pulsar IO Connectors @pulsar:version@ release

* download from the Pulsar [downloads page](pulsar:download_page_url)

* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)

* use [wget](https://www.gnu.org/software/wget):

  ```shell

  $ wget pulsar:connector_release_url/{connector}-@pulsar:version@.nar

  ```

After you download the nar file, copy the file to the `connectors` directory in the pulsar directory.
For example, if you download the `pulsar-io-aerospike-@pulsar:version@.nar` connector file, enter the following commands:

```bash

$ mkdir connectors
$ mv pulsar-io-aerospike-@pulsar:version@.nar connectors

$ ls connectors
pulsar-io-aerospike-@pulsar:version@.nar
...

```

:::note

* If you are running Pulsar in a bare metal cluster, make sure the `connectors` tarball is unzipped in every pulsar directory of the broker (or in every pulsar directory of function-worker if you are running a separate worker cluster for Pulsar Functions).
* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DC/OS](https://dcos.io/)), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. The `apachepulsar/pulsar-all` image has already bundled [all builtin connectors](io-overview.md#working-with-connectors).

:::

### Install tiered storage offloaders (optional)

:::tip

- Since the `2.2.0` release, Pulsar has released a separate binary distribution containing the tiered storage offloaders.
- To enable the tiered storage feature, follow the instructions below; otherwise, skip this section.
- -::: - -To get started with [tiered storage offloaders](concepts-tiered-storage.md), you need to download the offloaders tarball release on every broker node in one of the following ways: - -* download from the Apache mirror Pulsar Tiered Storage Offloaders @pulsar:version@ release - -* download from the Pulsar [downloads page](pulsar:download_page_url) - -* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) - -* use [wget](https://www.gnu.org/software/wget): - - ```shell - - $ wget pulsar:offloader_release_url - - ``` - -After you download the tarball, untar the offloaders package and copy the offloaders as `offloaders` -in the pulsar directory: - -```bash - -$ tar xvfz apache-pulsar-offloaders-@pulsar:version@-bin.tar.gz - -// you will find a directory named `apache-pulsar-offloaders-@pulsar:version@` in the pulsar directory -// then copy the offloaders - -$ mv apache-pulsar-offloaders-@pulsar:version@/offloaders offloaders - -$ ls offloaders -tiered-storage-jcloud-@pulsar:version@.nar - -``` - -For more information on how to configure tiered storage, see [Tiered storage cookbook](cookbooks-tiered-storage.md). - -:::note - -* If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's pulsar directory. -* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DC/OS](https://dcos.io/)), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - -::: - -## Start Pulsar standalone - -Once you have an up-to-date local copy of the release, you can start a local cluster using the [`pulsar`](reference-cli-tools.md#pulsar) command, which is stored in the `bin` directory, and specifying that you want to start Pulsar in standalone mode. - -```bash - -$ bin/pulsar standalone - -``` - -If you have started Pulsar successfully, you will see `INFO`-level log messages like this: - -```bash - -21:59:29.327 [DLM-/stream/storage-OrderedScheduler-3-0] INFO org.apache.bookkeeper.stream.storage.impl.sc.StorageContainerImpl - Successfully started storage container (0). -21:59:34.576 [main] INFO org.apache.pulsar.broker.authentication.AuthenticationService - Authentication is disabled -21:59:34.576 [main] INFO org.apache.pulsar.websocket.WebSocketService - Pulsar WebSocket Service started - -``` - -:::tip - -* The service is running on your terminal, which is under your direct control. If you need to run other commands, open a new terminal window. - -::: - -You can also run the service as a background process using the `pulsar-daemon start standalone` command. For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon). -> -> * By default, there is no encryption, authentication, or authorization configured. Apache Pulsar can be accessed from remote server without any authorization. Please do check [Security Overview](security-overview.md) document to secure your deployment. -> -> * When you start a local standalone cluster, a `public/default` [namespace](concepts-messaging.md#namespaces) is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics). 
## Use Pulsar standalone

Pulsar provides a CLI tool called [`pulsar-client`](reference-cli-tools.md#pulsar-client). The pulsar-client tool enables you to consume messages from and produce messages to a Pulsar topic in a running cluster.

### Consume a message

The following command consumes a message from the `my-topic` topic with the subscription name `first-subscription`:

```bash

$ bin/pulsar-client consume my-topic -s "first-subscription"

```

If the message has been successfully consumed, you will see a confirmation like the following in the `pulsar-client` logs:

```

22:17:16.781 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully consumed

```

:::tip

As you may have noticed, we did not explicitly create the `my-topic` topic from which we consumed the message. When you consume a message from a topic that does not yet exist, Pulsar creates that topic for you automatically. Producing a message to a topic that does not exist automatically creates that topic for you as well.

:::

### Produce a message

The following command produces a message saying `hello-pulsar` to the `my-topic` topic:

```bash

$ bin/pulsar-client produce my-topic --messages "hello-pulsar"

```

If the message has been successfully published to the topic, you will see a confirmation like the following in the `pulsar-client` logs:

```

22:21:08.693 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully produced

```

## Stop Pulsar standalone

Press `Ctrl+C` to stop a local standalone Pulsar.

:::tip

If the service runs as a background process using the `pulsar-daemon start standalone` command, then use the `pulsar-daemon stop standalone` command to stop the service.
For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon).

:::

diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/tiered-storage-aliyun.md b/site2/website/versioned_docs/version-2.9.1-deprecated/tiered-storage-aliyun.md
deleted file mode 100644
index 2486b92df485b3..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/tiered-storage-aliyun.md
+++ /dev/null
@@ -1,259 +0,0 @@
---
id: tiered-storage-aliyun
title: Use Aliyun OSS offloader with Pulsar
sidebar_label: "Aliyun OSS offloader"
original_id: tiered-storage-aliyun
---

This chapter guides you through every step of installing and configuring the Aliyun Object Storage Service (OSS) offloader and using it with Pulsar.

## Installation

Follow the steps below to install the Aliyun OSS offloader.

### Prerequisite

- Pulsar: 2.8.0 or later versions

### Step

This example uses Pulsar 2.8.0.

1. Download the Pulsar tarball, see [here](https://pulsar.apache.org/docs/en/standalone/#install-pulsar-using-binary-release).

2. Download and untar the Pulsar offloaders package, then copy the Pulsar offloaders as `offloaders` in the Pulsar directory, see [here](https://pulsar.apache.org/docs/en/standalone/#install-tiered-storage-offloaders-optional).

   **Output**

   As shown from the output, Pulsar uses [Apache jclouds](https://jclouds.apache.org) to support [AWS S3](https://aws.amazon.com/s3/), [GCS](https://cloud.google.com/storage/), [Azure](https://portal.azure.com/#home), and [Aliyun OSS](https://www.aliyun.com/product/oss) for long-term storage.
- - ``` - - tiered-storage-file-system-2.8.0.nar - tiered-storage-jcloud-2.8.0.nar - - ``` - - :::note - - * If you are running Pulsar in a bare-metal cluster, make sure that `offloaders` tarball is unzipped in every broker's Pulsar directory. - * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8s and DCOS), you can use the `apachepulsar/pulsar-all` image. The `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - - ::: - -## Configuration - -:::note - -Before offloading data from BookKeeper to Aliyun OSS, you need to configure some properties of the Aliyun OSS offload driver. - -::: - -Besides, you can also configure the Aliyun OSS offloader to run it automatically or trigger it manually. - -### Configure Aliyun OSS offloader driver - -You can configure the Aliyun OSS offloader driver in the configuration file `broker.conf` or `standalone.conf`. - -- **Required** configurations are as below. - - | Required configuration | Description | Example value | - | --- | --- |--- | - | `managedLedgerOffloadDriver` | Offloader driver name, which is case-insensitive. | aliyun-oss | - | `offloadersDirectory` | Offloader directory | offloaders | - | `managedLedgerOffloadBucket` | Bucket | pulsar-topic-offload | - | `managedLedgerOffloadServiceEndpoint` | Endpoint | http://oss-cn-hongkong.aliyuncs.com | - -- **Optional** configurations are as below. - - | Optional | Description | Example value | - | --- | --- | --- | - | `managedLedgerOffloadReadBufferSizeInBytes` | Size of block read | 1 MB | - | `managedLedgerOffloadMaxBlockSizeInBytes` | Size of block write | 64 MB | - | `managedLedgerMinLedgerRolloverTimeMinutes` | Minimum time between ledger rollover for a topic

    **Note**: it is not recommended that you set this configuration in the production environment. | 2 | - | `managedLedgerMaxEntriesPerLedger` | Maximum number of entries to append to a ledger before triggering a rollover.

    **Note**: it is not recommended that you set this configuration in the production environment. | 5000 | - -#### Bucket (required) - -A bucket is a basic container that holds your data. Everything you store in Aliyun OSS must be contained in a bucket. You can use a bucket to organize your data and control access to your data, but unlike directory and folder, you cannot nest a bucket. - -##### Example - -This example names the bucket as _pulsar-topic-offload_. - -```conf - -managedLedgerOffloadBucket=pulsar-topic-offload - -``` - -#### Endpoint (required) - -The endpoint is the region where a bucket is located. - -:::tip - -For more information about Aliyun OSS regions and endpoints, see [International website](https://www.alibabacloud.com/help/doc-detail/31837.htm) or [Chinese website](https://help.aliyun.com/document_detail/31837.html). - -::: - - -##### Example - -This example sets the endpoint as _oss-us-west-1-internal_. - -``` - -managedLedgerOffloadServiceEndpoint=http://oss-us-west-1-internal.aliyuncs.com - -``` - -#### Authentication (required) - -To be able to access Aliyun OSS, you need to authenticate with Aliyun OSS. - -Set the environment variables `ALIYUN_OSS_ACCESS_KEY_ID` and `ALIYUN_OSS_ACCESS_KEY_SECRET` in `conf/pulsar_env.sh`. - -"export" is important so that the variables are made available in the environment of spawned processes. - -```bash - -export ALIYUN_OSS_ACCESS_KEY_ID=ABC123456789 -export ALIYUN_OSS_ACCESS_KEY_SECRET=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c - -``` - -#### Size of block read/write - -You can configure the size of a request sent to or read from Aliyun OSS in the configuration file `broker.conf` or `standalone.conf`. - -| Configuration | Description | Default value | -| --- | --- | --- | -| `managedLedgerOffloadReadBufferSizeInBytes` | Block size for each individual read when reading back data from Aliyun OSS. | 1 MB | -| `managedLedgerOffloadMaxBlockSizeInBytes` | Maximum size of a "part" sent during a multipart upload to Aliyun OSS. It **cannot** be smaller than 5 MB. | 64 MB | - -### Run Aliyun OSS offloader automatically - -Namespace policy can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic reaches the threshold, an offloading operation is triggered automatically. - -| Threshold value | Action | -| --- | --- | -| > 0 | It triggers the offloading operation if the topic storage reaches its threshold. | -| = 0 | It causes a broker to offload data as soon as possible. | -| < 0 | It disables automatic offloading operation. | - -Automatic offloading runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, the offloader does not work until the current segment is full. - -You can configure the threshold size using CLI tools, such as pulsar-admin. - -The offload configurations in `broker.conf` and `standalone.conf` are used for the namespaces that do not have namespace level offload policies. Each namespace can have its own offload policy. If you want to set offload policy for each namespace, use the command [`pulsar-admin namespaces set-offload-policies options`](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-policies-em-) command. - -#### Example - -This example sets the Aliyun OSS offloader threshold size to 10 MB using pulsar-admin. 
- -```bash - -bin/pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace - -``` - -:::tip - -For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-threshold-em-). - -::: - -### Run Aliyun OSS offloader manually - -For individual topics, you can trigger the Aliyun OSS offloader manually using one of the following methods: - -- Use REST endpoint. - -- Use CLI tools (such as pulsar-admin). - - To trigger it via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are moved to Aliyun OSS until the threshold is no longer exceeded. Older segments are moved first. - -#### Example - -- This example triggers the Aliyun OSS offloader to run manually using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload --size-threshold 10M my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1 - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-em-). - - ::: - -- This example checks the Aliyun OSS offloader status using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload is currently running - - ``` - - To wait for the Aliyun OSS offloader to complete the job, add the `-w` flag. - - ```bash - - bin/pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Offload was a success - - ``` - - If there is an error in offloading, the error is propagated to the `pulsar-admin topics offload-status` command. - - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Error in offload - null - - Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g= - - ``` - -` - - :::tip - - For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-status-em-). 
- - ::: - diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/tiered-storage-aws.md b/site2/website/versioned_docs/version-2.9.1-deprecated/tiered-storage-aws.md deleted file mode 100644 index 20a6382e770cc6..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/tiered-storage-aws.md +++ /dev/null @@ -1,331 +0,0 @@ ---- -id: tiered-storage-aws -title: Use AWS S3 offloader with Pulsar -sidebar_label: "AWS S3 offloader" -original_id: tiered-storage-aws ---- - -This chapter guides you through every step of installing and configuring the AWS S3 offloader and using it with Pulsar. - -## Installation - -Follow the steps below to install the AWS S3 offloader. - -### Prerequisite - -- Pulsar: 2.4.2 or later versions - -### Step - -This example uses Pulsar 2.5.1. - -1. Download the Pulsar tarball using one of the following ways: - - * Download from the [Apache mirror](https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz) - - * Download from the Pulsar [downloads page](https://pulsar.apache.org/download) - - * Use [wget](https://www.gnu.org/software/wget): - - ```shell - - wget https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz - - ``` - -2. Download and untar the Pulsar offloaders package. - - ```bash - - wget https://downloads.apache.org/pulsar/pulsar-2.5.1/apache-pulsar-offloaders-2.5.1-bin.tar.gz - tar xvfz apache-pulsar-offloaders-2.5.1-bin.tar.gz - - ``` - -3. Copy the Pulsar offloaders as `offloaders` in the Pulsar directory. - - ``` - - mv apache-pulsar-offloaders-2.5.1/offloaders apache-pulsar-2.5.1/offloaders - - ls offloaders - - ``` - - **Output** - - As shown from the output, Pulsar uses [Apache jclouds](https://jclouds.apache.org) to support [AWS S3](https://aws.amazon.com/s3/) and [GCS](https://cloud.google.com/storage/) for long term storage. - - ``` - - tiered-storage-file-system-2.5.1.nar - tiered-storage-jcloud-2.5.1.nar - - ``` - - :::note - - * If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's Pulsar directory. - * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8s and DCOS), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - - ::: - -## Configuration - -:::note - -Before offloading data from BookKeeper to AWS S3, you need to configure some properties of the AWS S3 offload driver. - -::: - -Besides, you can also configure the AWS S3 offloader to run it automatically or trigger it manually. - -### Configure AWS S3 offloader driver - -You can configure the AWS S3 offloader driver in the configuration file `broker.conf` or `standalone.conf`. - -- **Required** configurations are as below. - - Required configuration | Description | Example value - |---|---|--- - `managedLedgerOffloadDriver` | Offloader driver name, which is case-insensitive.

    **Note**: there is a third driver type, S3, which is identical to AWS S3, except that S3 requires you to specify an endpoint URL using `s3ManagedLedgerOffloadServiceEndpoint`. This is useful if you use an S3-compatible data store other than AWS S3. | aws-s3
  `offloadersDirectory` | Offloader directory | offloaders
  `s3ManagedLedgerOffloadBucket` | Bucket | pulsar-topic-offload

- **Optional** configurations are as below.

  Optional | Description | Example value
  |---|---|---
  `s3ManagedLedgerOffloadRegion` | Bucket region<br><br>**Note**: before specifying a value for this parameter, you need to set the following configurations; otherwise, you might get an error.<br><br>- Set [`s3ManagedLedgerOffloadServiceEndpoint`](https://docs.aws.amazon.com/general/latest/gr/s3.html), for example, `s3ManagedLedgerOffloadServiceEndpoint=https://s3.YOUR_REGION.amazonaws.com`.<br><br>- Grant the `GetBucketLocation` permission to a user. For how to grant the `GetBucketLocation` permission to a user, see [here](https://docs.aws.amazon.com/AmazonS3/latest/dev/using-with-s3-actions.html#using-with-s3-actions-related-to-buckets).| eu-west-3
  `s3ManagedLedgerOffloadReadBufferSizeInBytes`|Size of block read|1 MB
  `s3ManagedLedgerOffloadMaxBlockSizeInBytes`|Size of block write|64 MB
  `managedLedgerMinLedgerRolloverTimeMinutes`|Minimum time between ledger rollover for a topic<br><br>**Note**: it is not recommended that you set this configuration in the production environment.|2
  `managedLedgerMaxEntriesPerLedger`|Maximum number of entries to append to a ledger before triggering a rollover.<br><br>
    **Note**: it is not recommended that you set this configuration in the production environment.|5000 - -#### Bucket (required) - -A bucket is a basic container that holds your data. Everything you store in AWS S3 must be contained in a bucket. You can use a bucket to organize your data and control access to your data, but unlike directory and folder, you cannot nest a bucket. - -##### Example - -This example names the bucket as _pulsar-topic-offload_. - -```conf - -s3ManagedLedgerOffloadBucket=pulsar-topic-offload - -``` - -#### Bucket region - -A bucket region is a region where a bucket is located. If a bucket region is not specified, the **default** region (`US East (N. Virginia)`) is used. - -:::tip - -For more information about AWS regions and endpoints, see [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). - -::: - - -##### Example - -This example sets the bucket region as _europe-west-3_. - -``` - -s3ManagedLedgerOffloadRegion=eu-west-3 - -``` - -#### Authentication (required) - -To be able to access AWS S3, you need to authenticate with AWS S3. - -Pulsar does not provide any direct methods of configuring authentication for AWS S3, -but relies on the mechanisms supported by the [DefaultAWSCredentialsProviderChain](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html). - -Once you have created a set of credentials in the AWS IAM console, you can configure credentials using one of the following methods. - -* Use EC2 instance metadata credentials. - - If you are on AWS instance with an instance profile that provides credentials, Pulsar uses these credentials if no other mechanism is provided. - -* Set the environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` in `conf/pulsar_env.sh`. - - "export" is important so that the variables are made available in the environment of spawned processes. - - ```bash - - export AWS_ACCESS_KEY_ID=ABC123456789 - export AWS_SECRET_ACCESS_KEY=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c - - ``` - -* Add the Java system properties `aws.accessKeyId` and `aws.secretKey` to `PULSAR_EXTRA_OPTS` in `conf/pulsar_env.sh`. - - ```bash - - PULSAR_EXTRA_OPTS="${PULSAR_EXTRA_OPTS} ${PULSAR_MEM} ${PULSAR_GC} -Daws.accessKeyId=ABC123456789 -Daws.secretKey=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c -Dio.netty.leakDetectionLevel=disabled -Dio.netty.recycler.maxCapacityPerThread=4096" - - ``` - -* Set the access credentials in `~/.aws/credentials`. - - ```conf - - [default] - aws_access_key_id=ABC123456789 - aws_secret_access_key=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c - - ``` - -* Assume an IAM role. - - This example uses the `DefaultAWSCredentialsProviderChain` for assuming this role. - - The broker must be rebooted for credentials specified in `pulsar_env` to take effect. - - ```conf - - s3ManagedLedgerOffloadRole= - s3ManagedLedgerOffloadRoleSessionName=pulsar-s3-offload - - ``` - -#### Size of block read/write - -You can configure the size of a request sent to or read from AWS S3 in the configuration file `broker.conf` or `standalone.conf`. - -Configuration|Description|Default value -|---|---|--- -`s3ManagedLedgerOffloadReadBufferSizeInBytes`|Block size for each individual read when reading back data from AWS S3.|1 MB -`s3ManagedLedgerOffloadMaxBlockSizeInBytes`|Maximum size of a "part" sent during a multipart upload to AWS S3. It **cannot** be smaller than 5 MB. 
|64 MB - -### Configure AWS S3 offloader to run automatically - -Namespace policy can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic reaches the threshold, an offloading operation is triggered automatically. - -Threshold value|Action -|---|--- -> 0 | It triggers the offloading operation if the topic storage reaches its threshold. -= 0|It causes a broker to offload data as soon as possible. -< 0 |It disables automatic offloading operation. - -Automatic offloading runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, offloader does not work until the current segment is full. - -You can configure the threshold size using CLI tools, such as pulsar-admin. - -The offload configurations in `broker.conf` and `standalone.conf` are used for the namespaces that do not have namespace level offload policies. Each namespace can have its own offload policy. If you want to set offload policy for each namespace, use the command [`pulsar-admin namespaces set-offload-policies options`](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-policies-em-) command. - -#### Example - -This example sets the AWS S3 offloader threshold size to 10 MB using pulsar-admin. - -```bash - -bin/pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace - -``` - -:::tip - -For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-threshold-em-). - -::: - -### Configure AWS S3 offloader to run manually - -For individual topics, you can trigger AWS S3 offloader manually using one of the following methods: - -- Use REST endpoint. - -- Use CLI tools (such as pulsar-admin). - - To trigger it via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are moved to AWS S3 until the threshold is no longer exceeded. Older segments are moved first. - -#### Example - -- This example triggers the AWS S3 offloader to run manually using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload --size-threshold 10M my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1 - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-em-). - - ::: - -- This example checks the AWS S3 offloader status using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload is currently running - - ``` - - To wait for the AWS S3 offloader to complete the job, add the `-w` flag. - - ```bash - - bin/pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Offload was a success - - ``` - - If there is an error in offloading, the error is propagated to the `pulsar-admin topics offload-status` command. 
- - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Error in offload - null - - Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g= - - ``` - -` - - :::tip - - For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-status-em-). - - ::: - -## Tutorial - -For the complete and step-by-step instructions on how to use the AWS S3 offloader with Pulsar, see [here](https://hub.streamnative.io/offloaders/aws-s3/2.5.1#usage). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/tiered-storage-azure.md b/site2/website/versioned_docs/version-2.9.1-deprecated/tiered-storage-azure.md deleted file mode 100644 index 5923a33147135c..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/tiered-storage-azure.md +++ /dev/null @@ -1,266 +0,0 @@ ---- -id: tiered-storage-azure -title: Use Azure BlobStore offloader with Pulsar -sidebar_label: "Azure BlobStore offloader" -original_id: tiered-storage-azure ---- - -This chapter guides you through every step of installing and configuring the Azure BlobStore offloader and using it with Pulsar. - -## Installation - -Follow the steps below to install the Azure BlobStore offloader. - -### Prerequisite - -- Pulsar: 2.6.2 or later versions - -### Step - -This example uses Pulsar 2.6.2. - -1. Download the Pulsar tarball using one of the following ways: - - * Download from the [Apache mirror](https://archive.apache.org/dist/pulsar/pulsar-2.6.2/apache-pulsar-2.6.2-bin.tar.gz) - - * Download from the Pulsar [downloads page](https://pulsar.apache.org/download) - - * Use [wget](https://www.gnu.org/software/wget): - - ```shell - - wget https://archive.apache.org/dist/pulsar/pulsar-2.6.2/apache-pulsar-2.6.2-bin.tar.gz - - ``` - -2. Download and untar the Pulsar offloaders package. - - ```bash - - wget https://downloads.apache.org/pulsar/pulsar-2.6.2/apache-pulsar-offloaders-2.6.2-bin.tar.gz - tar xvfz apache-pulsar-offloaders-2.6.2-bin.tar.gz - - ``` - -3. Copy the Pulsar offloaders as `offloaders` in the Pulsar directory. - - ``` - - mv apache-pulsar-offloaders-2.6.2/offloaders apache-pulsar-2.6.2/offloaders - - ls offloaders - - ``` - - **Output** - - As shown from the output, Pulsar uses [Apache jclouds](https://jclouds.apache.org) to support [AWS S3](https://aws.amazon.com/s3/), [GCS](https://cloud.google.com/storage/) and [Azure](https://portal.azure.com/#home) for long term storage. - - ``` - - tiered-storage-file-system-2.6.2.nar - tiered-storage-jcloud-2.6.2.nar - - ``` - - :::note - - * If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's Pulsar directory. 
- * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8s and DCOS), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - - ::: - -## Configuration - -:::note - -Before offloading data from BookKeeper to Azure BlobStore, you need to configure some properties of the Azure BlobStore offload driver. - -::: - -Besides, you can also configure the Azure BlobStore offloader to run it automatically or trigger it manually. - -### Configure Azure BlobStore offloader driver - -You can configure the Azure BlobStore offloader driver in the configuration file `broker.conf` or `standalone.conf`. - -- **Required** configurations are as below. - - Required configuration | Description | Example value - |---|---|--- - `managedLedgerOffloadDriver` | Offloader driver name | azureblob - `offloadersDirectory` | Offloader directory | offloaders - `managedLedgerOffloadBucket` | Bucket | pulsar-topic-offload - -- **Optional** configurations are as below. - - Optional | Description | Example value - |---|---|--- - `managedLedgerOffloadReadBufferSizeInBytes`|Size of block read|1 MB - `managedLedgerOffloadMaxBlockSizeInBytes`|Size of block write|64 MB - `managedLedgerMinLedgerRolloverTimeMinutes`|Minimum time between ledger rollover for a topic
    **Note**: it is not recommended that you set this configuration in the production environment.|2
  `managedLedgerMaxEntriesPerLedger`|Maximum number of entries to append to a ledger before triggering a rollover.<br><br>
    **Note**: it is not recommended that you set this configuration in the production environment.|5000 - -#### Bucket (required) - -A bucket is a basic container that holds your data. Everything you store in Azure BlobStore must be contained in a bucket. You can use a bucket to organize your data and control access to your data, but unlike directory and folder, you cannot nest a bucket. - -##### Example - -This example names the bucket as _pulsar-topic-offload_. - -```conf - -managedLedgerOffloadBucket=pulsar-topic-offload - -``` - -#### Authentication (required) - -To be able to access Azure BlobStore, you need to authenticate with Azure BlobStore. - -* Set the environment variables `AZURE_STORAGE_ACCOUNT` and `AZURE_STORAGE_ACCESS_KEY` in `conf/pulsar_env.sh`. - - "export" is important so that the variables are made available in the environment of spawned processes. - - ```bash - - export AZURE_STORAGE_ACCOUNT=ABC123456789 - export AZURE_STORAGE_ACCESS_KEY=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c - - ``` - -#### Size of block read/write - -You can configure the size of a request sent to or read from Azure BlobStore in the configuration file `broker.conf` or `standalone.conf`. - -Configuration|Description|Default value -|---|---|--- -`managedLedgerOffloadReadBufferSizeInBytes`|Block size for each individual read when reading back data from Azure BlobStore store.|1 MB -`managedLedgerOffloadMaxBlockSizeInBytes`|Maximum size of a "part" sent during a multipart upload to Azure BlobStore store. It **cannot** be smaller than 5 MB. |64 MB - -### Configure Azure BlobStore offloader to run automatically - -Namespace policy can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic reaches the threshold, an offloading operation is triggered automatically. - -Threshold value|Action -|---|--- -> 0 | It triggers the offloading operation if the topic storage reaches its threshold. -= 0|It causes a broker to offload data as soon as possible. -< 0 |It disables automatic offloading operation. - -Automatic offloading runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, offloader does not work until the current segment is full. - -You can configure the threshold size using CLI tools, such as pulsar-admin. - -The offload configurations in `broker.conf` and `standalone.conf` are used for the namespaces that do not have namespace level offload policies. Each namespace can have its own offload policy. If you want to set offload policy for each namespace, use the command [`pulsar-admin namespaces set-offload-policies options`](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-policies-em-) command. - -#### Example - -This example sets the Azure BlobStore offloader threshold size to 10 MB using pulsar-admin. - -```bash - -bin/pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace - -``` - -:::tip - -For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-threshold-em-). - -::: - -### Configure Azure BlobStore offloader to run manually - -For individual topics, you can trigger Azure BlobStore offloader manually using one of the following methods: - -- Use REST endpoint. 
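
  For example, you can call the topic offload endpoint of the admin REST API directly. The following sketch is illustrative rather than taken from this guide: it assumes a broker web service on `localhost:8080`, and the message ID in the request body (the position up to which data is offloaded) is a made-up ledger/entry pair.

  ```shell

  # Illustrative only: trigger an offload of topic1 up to message ID 123:45
  curl -X PUT http://localhost:8080/admin/v2/persistent/my-tenant/my-namespace/topic1/offload \
       -H 'Content-Type: application/json' \
       -d '{"ledgerId": 123, "entryId": 45, "partitionIndex": -1}'

  ```
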
- -- Use CLI tools (such as pulsar-admin). - - To trigger it via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are moved to Azure BlobStore until the threshold is no longer exceeded. Older segments are moved first. - -#### Example - -- This example triggers the Azure BlobStore offloader to run manually using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload --size-threshold 10M my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1 - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-em-). - - ::: - -- This example checks the Azure BlobStore offloader status using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload is currently running - - ``` - - To wait for the Azure BlobStore offloader to complete the job, add the `-w` flag. - - ```bash - - bin/pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Offload was a success - - ``` - - If there is an error in offloading, the error is propagated to the `pulsar-admin topics offload-status` command. - - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Error in offload - null - - Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: - - ``` - -` - - :::tip - - For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-status-em-). - - ::: - diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/tiered-storage-filesystem.md b/site2/website/versioned_docs/version-2.9.1-deprecated/tiered-storage-filesystem.md deleted file mode 100644 index 8164e68208bd7d..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/tiered-storage-filesystem.md +++ /dev/null @@ -1,317 +0,0 @@ ---- -id: tiered-storage-filesystem -title: Use filesystem offloader with Pulsar -sidebar_label: "Filesystem offloader" -original_id: tiered-storage-filesystem ---- - -This chapter guides you through every step of installing and configuring the filesystem offloader and using it with Pulsar. - -## Installation - -Follow the steps below to install the filesystem offloader. - -### Prerequisite - -- Pulsar: 2.4.2 or later versions - -- Hadoop: 3.x.x - -### Step - -This example uses Pulsar 2.5.1. - -1. Download the Pulsar tarball using one of the following ways: - - * Download from the [Apache mirror](https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz) - - * Download from the Pulsar [download page](https://pulsar.apache.org/download) - - * Use [wget](https://www.gnu.org/software/wget) - - ```shell - - wget https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz - - ``` - -2. Download and untar the Pulsar offloaders package. 
- - ```bash - - wget https://downloads.apache.org/pulsar/pulsar-2.5.1/apache-pulsar-offloaders-2.5.1-bin.tar.gz - - tar xvfz apache-pulsar-offloaders-2.5.1-bin.tar.gz - - ``` - - :::note - - * If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's Pulsar directory. - * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8S and DCOS), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - - ::: - -3. Copy the Pulsar offloaders as `offloaders` in the Pulsar directory. - - ``` - - mv apache-pulsar-offloaders-2.5.1/offloaders apache-pulsar-2.5.1/offloaders - - ls offloaders - - ``` - - **Output** - - ``` - - tiered-storage-file-system-2.5.1.nar - tiered-storage-jcloud-2.5.1.nar - - ``` - - :::note - - * If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's Pulsar directory. - * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8s and DCOS), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - - ::: - -## Configuration - -:::note - -Before offloading data from BookKeeper to filesystem, you need to configure some properties of the filesystem offloader driver. - -::: - -Besides, you can also configure the filesystem offloader to run it automatically or trigger it manually. - -### Configure filesystem offloader driver - -You can configure filesystem offloader driver in the configuration file `broker.conf` or `standalone.conf`. - -- **Required** configurations are as below. - - Required configuration | Description | Example value - |---|---|--- - `managedLedgerOffloadDriver` | Offloader driver name, which is case-insensitive. | filesystem - `fileSystemURI` | Connection address | hdfs://127.0.0.1:9000 - `fileSystemProfilePath` | Hadoop profile path | conf/filesystem_offload_core_site.xml - -- **Optional** configurations are as below. - - Optional configuration| Description | Example value - |---|---|--- - `managedLedgerMinLedgerRolloverTimeMinutes`|Minimum time between ledger rollover for a topic
    **Note**: it is not recommended that you set this configuration in the production environment.|2
  `managedLedgerMaxEntriesPerLedger`|Maximum number of entries to append to a ledger before triggering a rollover.<br><br>
    **Note**: it is not recommended that you set this configuration in the production environment.|5000 - -#### Offloader driver (required) - -Offloader driver name, which is case-insensitive. - -This example sets the offloader driver name as _filesystem_. - -```conf - -managedLedgerOffloadDriver=filesystem - -``` - -#### Connection address (required) - -Connection address is the URI to access the default Hadoop distributed file system. - -##### Example - -This example sets the connection address as _hdfs://127.0.0.1:9000_. - -```conf - -fileSystemURI=hdfs://127.0.0.1:9000 - -``` - -#### Hadoop profile path (required) - -The configuration file is stored in the Hadoop profile path. It contains various settings for Hadoop performance tuning. - -##### Example - -This example sets the Hadoop profile path as _conf/filesystem_offload_core_site.xml_. - -```conf - -fileSystemProfilePath=conf/filesystem_offload_core_site.xml - -``` - -You can set the following configurations in the _filesystem_offload_core_site.xml_ file. - -``` - - - fs.defaultFS - - - - - hadoop.tmp.dir - pulsar - - - - io.file.buffer.size - 4096 - - - - io.seqfile.compress.blocksize - 1000000 - - - - io.seqfile.compression.type - BLOCK - - - - io.map.index.interval - 128 - - -``` - -:::tip - -For more information about the Hadoop HDFS, see [here](https://hadoop.apache.org/docs/current/). - -::: - -### Configure filesystem offloader to run automatically - -Namespace policy can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic reaches the threshold, an offload operation is triggered automatically. - -Threshold value|Action -|---|--- -> 0 | It triggers the offloading operation if the topic storage reaches its threshold. -= 0|It causes a broker to offload data as soon as possible. -< 0 |It disables automatic offloading operation. - -Automatic offload runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, offloader does not work until the current segment is full. - -You can configure the threshold size using CLI tools, such as pulsar-admin. - -#### Example - -This example sets the filesystem offloader threshold size to 10 MB using pulsar-admin. - -```bash - -pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace - -``` - -:::tip - -For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#set-offload-threshold). - -::: - -### Configure filesystem offloader to run manually - -For individual topics, you can trigger filesystem offloader manually using one of the following methods: - -- Use REST endpoint. - -- Use CLI tools (such as pulsar-admin). - -To trigger via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are offloaded to the filesystem until the threshold is no longer exceeded. Older segments are offloaded first. - -#### Example - -- This example triggers the filesystem offloader to run manually using pulsar-admin. 
- - ```bash - - pulsar-admin topics offload --size-threshold 10M persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1 - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#offload). - - ::: - -- This example checks filesystem offloader status using pulsar-admin. - - ```bash - - pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload is currently running - - ``` - - To wait for the filesystem to complete the job, add the `-w` flag. - - ```bash - - pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Offload was a success - - ``` - - If there is an error in the offloading operation, the error is propagated to the `pulsar-admin topics offload-status` command. - - ```bash - - pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Error in offload - null - - Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g= - - ``` - -` - - :::tip - - For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#offload-status). - - ::: - -## Tutorial - -For the complete and step-by-step instructions on how to use the filesystem offloader with Pulsar, see [here](https://hub.streamnative.io/offloaders/filesystem/2.5.1). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/tiered-storage-gcs.md b/site2/website/versioned_docs/version-2.9.1-deprecated/tiered-storage-gcs.md deleted file mode 100644 index afb1e9a10081ce..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/tiered-storage-gcs.md +++ /dev/null @@ -1,321 +0,0 @@ ---- -id: tiered-storage-gcs -title: Use GCS offloader with Pulsar -sidebar_label: "GCS offloader" -original_id: tiered-storage-gcs ---- - -This chapter guides you through every step of installing and configuring the GCS offloader and using it with Pulsar. - -## Installation - -Follow the steps below to install the GCS offloader. - -### Prerequisite - -- Pulsar: 2.4.2 or later versions - -### Step - -This example uses Pulsar 2.5.1. - -1. Download the Pulsar tarball using one of the following ways: - - * Download from the [Apache mirror](https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz) - - * Download from the Pulsar [download page](https://pulsar.apache.org/download) - - * Use [wget](https://www.gnu.org/software/wget) - - ```shell - - wget https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz - - ``` - -2. Download and untar the Pulsar offloaders package. 
- - ```bash - - wget https://downloads.apache.org/pulsar/pulsar-2.5.1/apache-pulsar-offloaders-2.5.1-bin.tar.gz - - tar xvfz apache-pulsar-offloaders-2.5.1-bin.tar.gz - - ``` - - :::note - - * If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's Pulsar directory. - * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8S and DCOS), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - - ::: - -3. Copy the Pulsar offloaders as `offloaders` in the Pulsar directory. - - ``` - - mv apache-pulsar-offloaders-2.5.1/offloaders apache-pulsar-2.5.1/offloaders - - ls offloaders - - ``` - - **Output** - - As shown in the output, Pulsar uses [Apache jclouds](https://jclouds.apache.org) to support GCS and AWS S3 for long term storage. - - ``` - - tiered-storage-file-system-2.5.1.nar - tiered-storage-jcloud-2.5.1.nar - - ``` - -## Configuration - -:::note - -Before offloading data from BookKeeper to GCS, you need to configure some properties of the GCS offloader driver. - -::: - -Besides, you can also configure the GCS offloader to run it automatically or trigger it manually. - -### Configure GCS offloader driver - -You can configure GCS offloader driver in the configuration file `broker.conf` or `standalone.conf`. - -- **Required** configurations are as below. - - **Required** configuration | Description | Example value - |---|---|--- - `managedLedgerOffloadDriver`|Offloader driver name, which is case-insensitive.|google-cloud-storage - `offloadersDirectory`|Offloader directory|offloaders - `gcsManagedLedgerOffloadBucket`|Bucket|pulsar-topic-offload - `gcsManagedLedgerOffloadRegion`|Bucket region|europe-west3 - `gcsManagedLedgerOffloadServiceAccountKeyFile`|Authentication |/Users/user-name/Downloads/project-804d5e6a6f33.json - -- **Optional** configurations are as below. - - Optional configuration|Description|Example value - |---|---|--- - `gcsManagedLedgerOffloadReadBufferSizeInBytes`|Size of block read|1 MB - `gcsManagedLedgerOffloadMaxBlockSizeInBytes`|Size of block write|64 MB - `managedLedgerMinLedgerRolloverTimeMinutes`|Minimum time between ledger rollover for a topic.|2 - `managedLedgerMaxEntriesPerLedger`|The max number of entries to append to a ledger before triggering a rollover.|5000 - -#### Bucket (required) - -A bucket is a basic container that holds your data. Everything you store in GCS **must** be contained in a bucket. You can use a bucket to organize your data and control access to your data, but unlike directory and folder, you can not nest a bucket. - -##### Example - -This example names the bucket as _pulsar-topic-offload_. - -```conf - -gcsManagedLedgerOffloadBucket=pulsar-topic-offload - -``` - -#### Bucket region (required) - -Bucket region is the region where a bucket is located. If a bucket region is not specified, the **default** region (`us multi-regional location`) is used. - -:::tip - -For more information about bucket location, see [here](https://cloud.google.com/storage/docs/bucket-locations). - -::: - -##### Example - -This example sets the bucket region as _europe-west3_. - -``` - -gcsManagedLedgerOffloadRegion=europe-west3 - -``` - -#### Authentication (required) - -To enable a broker access GCS, you need to configure `gcsManagedLedgerOffloadServiceAccountKeyFile` in the configuration file `broker.conf`. 
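
For example, putting the required properties together, a minimal `broker.conf` sketch looks like the following (the bucket name, region, and key file path are placeholders rather than values from this guide):

```conf

managedLedgerOffloadDriver=google-cloud-storage
offloadersDirectory=offloaders
gcsManagedLedgerOffloadBucket=pulsar-topic-offload
gcsManagedLedgerOffloadRegion=europe-west3
gcsManagedLedgerOffloadServiceAccountKeyFile=/path/to/gcs-service-account.json

```
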
- -`gcsManagedLedgerOffloadServiceAccountKeyFile` is -a JSON file, containing GCS credentials of a service account. - -##### Example - -To generate service account credentials or view the public credentials that you've already generated, follow the following steps. - -1. Navigate to the [Service accounts page](https://console.developers.google.com/iam-admin/serviceaccounts). - -2. Select a project or create a new one. - -3. Click **Create service account**. - -4. In the **Create service account** window, type a name for the service account and select **Furnish a new private key**. - - If you want to [grant G Suite domain-wide authority](https://developers.google.com/identity/protocols/OAuth2ServiceAccount#delegatingauthority) to the service account, select **Enable G Suite Domain-wide Delegation**. - -5. Click **Create**. - - :::note - - Make sure the service account you create has permission to operate GCS, you need to assign **Storage Admin** permission to your service account [here](https://cloud.google.com/storage/docs/access-control/iam). - - ::: - -6. You can get the following information and set this in `broker.conf`. - - ```conf - - gcsManagedLedgerOffloadServiceAccountKeyFile="/Users/user-name/Downloads/project-804d5e6a6f33.json" - - ``` - - :::tip - - - For more information about how to create `gcsManagedLedgerOffloadServiceAccountKeyFile`, see [here](https://support.google.com/googleapi/answer/6158849). - - For more information about Google Cloud IAM, see [here](https://cloud.google.com/storage/docs/access-control/iam). - - ::: - -#### Size of block read/write - -You can configure the size of a request sent to or read from GCS in the configuration file `broker.conf`. - -Configuration|Description -|---|--- -`gcsManagedLedgerOffloadReadBufferSizeInBytes`|Block size for each individual read when reading back data from GCS.
    The **default** value is 1 MB.
`gcsManagedLedgerOffloadMaxBlockSizeInBytes`|Maximum size of a "part" sent during a multipart upload to GCS.<br><br>It **cannot** be smaller than 5 MB.<br><br>
    The **default** value is 64 MB. - -### Configure GCS offloader to run automatically - -Namespace policy can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic reaches the threshold, an offload operation is triggered automatically. - -Threshold value|Action -|---|--- -> 0 | It triggers the offloading operation if the topic storage reaches its threshold. -= 0|It causes a broker to offload data as soon as possible. -< 0 |It disables automatic offloading operation. - -Automatic offloading runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, offloader does not work until the current segment is full. - -You can configure the threshold size using CLI tools, such as pulsar-admin. - -The offload configurations in `broker.conf` and `standalone.conf` are used for the namespaces that do not have namespace level offload policies. Each namespace can have its own offload policy. If you want to set offload policy for each namespace, use the command [`pulsar-admin namespaces set-offload-policies options`](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-policies-em-) command. - -#### Example - -This example sets the GCS offloader threshold size to 10 MB using pulsar-admin. - -```bash - -pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace - -``` - -:::tip - -For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#set-offload-threshold). - -::: - -### Configure GCS offloader to run manually - -For individual topics, you can trigger GCS offloader manually using one of the following methods: - -- Use REST endpoint. - -- Use CLI tools (such as pulsar-admin). - - To trigger the GCS via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are moved to GCS until the threshold is no longer exceeded. Older segments are moved first. - -#### Example - -- This example triggers the GCS offloader to run manually using pulsar-admin with the command `pulsar-admin topics offload (topic-name) (threshold)`. - - ```bash - - pulsar-admin topics offload persistent://my-tenant/my-namespace/topic1 10M - - ``` - - **Output** - - ```bash - - Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1 - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#offload). - - ::: - -- This example checks the GCS offloader status using pulsar-admin with the command `pulsar-admin topics offload-status options`. - - ```bash - - pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload is currently running - - ``` - - To wait for GCS to complete the job, add the `-w` flag. 
- - ```bash - - pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Offload was a success - - ``` - - If there is an error in offloading, the error is propagated to the `pulsar-admin topics offload-status` command. - - ```bash - - pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Error in offload - null - - Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g= - - ``` - -` - - :::tip - - For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#offload-status). - - ::: - -## Tutorial - -For the complete and step-by-step instructions on how to use the GCS offloader with Pulsar, see [here](https://hub.streamnative.io/offloaders/gcs/2.5.1#usage). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/tiered-storage-overview.md b/site2/website/versioned_docs/version-2.9.1-deprecated/tiered-storage-overview.md deleted file mode 100644 index bebd16d28a0981..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/tiered-storage-overview.md +++ /dev/null @@ -1,52 +0,0 @@ ---- -id: tiered-storage-overview -title: Overview of tiered storage -sidebar_label: "Overview" -original_id: tiered-storage-overview ---- - -Pulsar's **Tiered Storage** feature allows older backlog data to be moved from BookKeeper to long term and cheaper storage, while still allowing clients to access the backlog as if nothing has changed. - -* Tiered storage uses [Apache jclouds](https://jclouds.apache.org) to support [Amazon S3](https://aws.amazon.com/s3/) and [GCS (Google Cloud Storage)](https://cloud.google.com/storage/) for long term storage. - - With jclouds, it is easy to add support for more [cloud storage providers](https://jclouds.apache.org/reference/providers/#blobstore-providers) in the future. - - :::tip - - - For more information about how to use the AWS S3 offloader with Pulsar, see [here](tiered-storage-aws.md). - - - For more information about how to use the GCS offloader with Pulsar, see [here](tiered-storage-gcs.md). - - ::: - -* Tiered storage uses [Apache Hadoop](http://hadoop.apache.org/) to support filesystems for long term storage. - - With Hadoop, it is easy to add support for more filesystems in the future. - - :::tip - - For more information about how to use the filesystem offloader with Pulsar, see [here](tiered-storage-filesystem.md). - - ::: - -## When to use tiered storage? - -Tiered storage should be used when you have a topic for which you want to keep a very long backlog for a long time. - -For example, if you have a topic containing user actions which you use to train your recommendation systems, you may want to keep that data for a long time, so that if you change your recommendation algorithm, you can rerun it against your full user history. - -## How does tiered storage work? 
- -A topic in Pulsar is backed by a **log**, known as a **managed ledger**. This log is composed of an ordered list of segments. Pulsar only writes to the final segment of the log. All previous segments are sealed. The data within the segment is immutable. This is known as a **segment oriented architecture**. - -![Tiered storage](/assets/pulsar-tiered-storage.png "Tiered Storage") - -The tiered storage offloading mechanism takes advantage of segment oriented architecture. When offloading is requested, the segments of the log are copied one-by-one to tiered storage. All segments of the log (apart from the current segment) written to tiered storage can be offloaded. - -Data written to BookKeeper is replicated to 3 physical machines by default. However, once a segment is sealed in BookKeeper, it becomes immutable and can be copied to long term storage. Long term storage can achieve cost savings by using mechanisms such as [Reed-Solomon error correction](https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction) to require fewer physical copies of data. - -Before offloading ledgers to long term storage, you need to configure buckets, credentials, and other properties for the cloud storage service. Additionally, Pulsar uses multi-part objects to upload the segment data and brokers may crash while uploading the data. It is recommended that you add a life cycle rule for your bucket to expire incomplete multi-part upload after a day or two days to avoid getting charged for incomplete uploads. Moreover, you can trigger the offloading operation manually (via REST API or CLI) or automatically (via CLI). - -After offloading ledgers to long term storage, you can still query data in the offloaded ledgers with Pulsar SQL. - -For more information about tiered storage for Pulsar topics, see [here](https://github.com/apache/pulsar/wiki/PIP-17:-Tiered-storage-for-Pulsar-topics). diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/transaction-api.md b/site2/website/versioned_docs/version-2.9.1-deprecated/transaction-api.md deleted file mode 100644 index ecbd0da12c786d..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/transaction-api.md +++ /dev/null @@ -1,172 +0,0 @@ ---- -id: transactions-api -title: Transactions API -sidebar_label: "Transactions API" -original_id: transactions-api ---- - -All messages in a transaction are available only to consumers after the transaction has been committed. If a transaction has been aborted, all the writes and acknowledgments in this transaction roll back. - -## Prerequisites -1. To enable transactions in Pulsar, you need to configure the parameter in the `broker.conf` file. - -``` - -transactionCoordinatorEnabled=true - -``` - -2. Initialize transaction coordinator metadata, so the transaction coordinators can leverage advantages of the partitioned topic, such as load balance. - -``` - -bin/pulsar initialize-transaction-coordinator-metadata -cs 127.0.0.1:2181 -c standalone - -``` - -After initializing transaction coordinator metadata, you can use the transactions API. The following APIs are available. - -## Initialize Pulsar client - -You can enable transaction for transaction client and initialize transaction coordinator client. - -``` - -PulsarClient pulsarClient = PulsarClient.builder() - .serviceUrl("pulsar://localhost:6650") - .enableTransaction(true) - .build(); - -``` - -## Start transactions -You can start transaction in the following way. 
- -``` - -Transaction txn = pulsarClient - .newTransaction() - .withTransactionTimeout(5, TimeUnit.MINUTES) - .build() - .get(); - -``` - -## Produce transaction messages - -A transaction parameter is required when producing new transaction messages. The semantic of the transaction messages in Pulsar is `read-committed`, so the consumer cannot receive the ongoing transaction messages before the transaction is committed. - -``` - -producer.newMessage(txn).value("Hello Pulsar Transaction".getBytes()).sendAsync(); - -``` - -## Acknowledge the messages with the transaction - -The transaction acknowledgement requires a transaction parameter. The transaction acknowledgement marks the messages state to pending-ack state. When the transaction is committed, the pending-ack state becomes ack state. If the transaction is aborted, the pending-ack state becomes unack state. - -``` - -Message message = consumer.receive(); -consumer.acknowledgeAsync(message.getMessageId(), txn); - -``` - -## Commit transactions - -When the transaction is committed, consumers receive the transaction messages and the pending-ack state becomes ack state. - -``` - -txn.commit().get(); - -``` - -## Abort transaction - -When the transaction is aborted, the transaction acknowledgement is canceled and the pending-ack messages are redelivered. - -``` - -txn.abort().get(); - -``` - -### Example -The following example shows how messages are processed in transaction. - -``` - -PulsarClient pulsarClient = PulsarClient.builder() - .serviceUrl(getPulsarServiceList().get(0).getBrokerServiceUrl()) - .statsInterval(0, TimeUnit.SECONDS) - .enableTransaction(true) - .build(); - -String sourceTopic = "public/default/source-topic"; -String sinkTopic = "public/default/sink-topic"; - -Producer sourceProducer = pulsarClient - .newProducer(Schema.STRING) - .topic(sourceTopic) - .create(); -sourceProducer.newMessage().value("hello pulsar transaction").sendAsync(); - -Consumer sourceConsumer = pulsarClient - .newConsumer(Schema.STRING) - .topic(sourceTopic) - .subscriptionName("test") - .subscriptionType(SubscriptionType.Shared) - .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest) - .subscribe(); - -Producer sinkProducer = pulsarClient - .newProducer(Schema.STRING) - .topic(sinkTopic) - .sendTimeout(0, TimeUnit.MILLISECONDS) - .create(); - -Transaction txn = pulsarClient - .newTransaction() - .withTransactionTimeout(5, TimeUnit.MINUTES) - .build() - .get(); - -// source message acknowledgement and sink message produce belong to one transaction, -// they are combined into an atomic operation. -Message message = sourceConsumer.receive(); -sourceConsumer.acknowledgeAsync(message.getMessageId(), txn); -sinkProducer.newMessage(txn).value("sink data").sendAsync(); - -txn.commit().get(); - -``` - -## Enable batch messages in transactions - -To enable batch messages in transactions, you need to enable the batch index acknowledgement feature. The transaction acks check whether the batch index acknowledgement conflicts. - -To enable batch index acknowledgement, you need to set `acknowledgmentAtBatchIndexLevelEnabled` to `true` in the `broker.conf` or `standalone.conf` file. - -``` - -acknowledgmentAtBatchIndexLevelEnabled=true - -``` - -And then you need to call the `enableBatchIndexAcknowledgment(true)` method in the consumer builder. 
- -``` - -Consumer sinkConsumer = pulsarClient - .newConsumer() - .topic(transferTopic) - .subscriptionName("sink-topic") - .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest) - .subscriptionType(SubscriptionType.Shared) - .enableBatchIndexAcknowledgment(true) // enable batch index acknowledgement - .subscribe(); - -``` - diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/transaction-guarantee.md b/site2/website/versioned_docs/version-2.9.1-deprecated/transaction-guarantee.md deleted file mode 100644 index 9db2d254e159f6..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/transaction-guarantee.md +++ /dev/null @@ -1,17 +0,0 @@ ---- -id: transactions-guarantee -title: Transactions Guarantee -sidebar_label: "Transactions Guarantee" -original_id: transactions-guarantee ---- - -Pulsar transactions support the following guarantee. - -## Atomic multi-partition writes and multi-subscription acknowledges -Transactions enable atomic writes to multiple topics and partitions. A batch of messages in a transaction can be received from, produced to, and acknowledged by many partitions. All the operations involved in a transaction succeed or fail as a single unit. - -## Read transactional message -All the messages in a transaction are available only for consumers until the transaction is committed. - -## Acknowledge transactional message -A message is acknowledged successfully only once by a consumer under the subscription when acknowledging the message with the transaction ID. \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/txn-how.md b/site2/website/versioned_docs/version-2.9.1-deprecated/txn-how.md deleted file mode 100644 index add072448aeb34..00000000000000 --- a/site2/website/versioned_docs/version-2.9.1-deprecated/txn-how.md +++ /dev/null @@ -1,151 +0,0 @@ ---- -id: txn-how -title: How transactions work? -sidebar_label: "How transactions work?" -original_id: txn-how ---- - -This section describes transaction components and how the components work together. For the complete design details, see [PIP-31: Transactional Streaming](https://docs.google.com/document/d/145VYp09JKTw9jAT-7yNyFU255FptB2_B2Fye100ZXDI/edit#heading=h.bm5ainqxosrx). - -## Key concept - -It is important to know the following key concepts, which is a prerequisite for understanding how transactions work. - -### Transaction coordinator - -The transaction coordinator (TC) is a module running inside a Pulsar broker. - -* It maintains the entire life cycle of transactions and prevents a transaction from getting into an incorrect status. - -* It handles transaction timeout, and ensures that the transaction is aborted after a transaction timeout. - -### Transaction log - -All the transaction metadata persists in the transaction log. The transaction log is backed by a Pulsar topic. If the transaction coordinator crashes, it can restore the transaction metadata from the transaction log. - -The transaction log stores the transaction status rather than actual messages in the transaction (the actual messages are stored in the actual topic partitions). - -### Transaction buffer - -Messages produced to a topic partition within a transaction are stored in the transaction buffer (TB) of that topic partition. The messages in the transaction buffer are not visible to consumers until the transactions are committed. The messages in the transaction buffer are discarded when the transactions are aborted. 
- -Transaction buffer stores all ongoing and aborted transactions in memory. All messages are sent to the actual partitioned Pulsar topics. After transactions are committed, the messages in the transaction buffer are materialized (visible) to consumers. When the transactions are aborted, the messages in the transaction buffer are discarded. - -### Transaction ID - -Transaction ID (TxnID) identifies a unique transaction in Pulsar. The transaction ID is 128-bit. The highest 16 bits are reserved for the ID of the transaction coordinator, and the remaining bits are used for monotonically increasing numbers in each transaction coordinator. It is easy to locate the transaction crash with the TxnID. - -### Pending acknowledge state - -Pending acknowledge state maintains message acknowledgments within a transaction before a transaction completes. If a message is in the pending acknowledge state, the message cannot be acknowledged by other transactions until the message is removed from the pending acknowledge state. - -The pending acknowledge state is persisted to the pending acknowledge log (cursor ledger). A new broker can restore the state from the pending acknowledge log to ensure the acknowledgement is not lost. - -## Data flow - -At a high level, the data flow can be split into several steps: - -1. Begin a transaction. - -2. Publish messages with a transaction. - -3. Acknowledge messages with a transaction. - -4. End a transaction. - -To help you debug or tune the transaction for better performance, review the following diagrams and descriptions. - -### 1. Begin a transaction - -Before introducing the transaction in Pulsar, a producer is created and then messages are sent to brokers and stored in data logs. - -![](/assets/txn-3.png) - -Let’s walk through the steps for _beginning a transaction_. - -| Step | Description | -| --- | --- | -| 1.1 | The first step is that the Pulsar client finds the transaction coordinator. | -| 1.2 | The transaction coordinator allocates a transaction ID for the transaction. In the transaction log, the transaction is logged with its transaction ID and status (OPEN), which ensures the transaction status is persisted regardless of transaction coordinator crashes. | -| 1.3 | The transaction log sends the result of persisting the transaction ID to the transaction coordinator. | -| 1.4 | After the transaction status entry is logged, the transaction coordinator brings the transaction ID back to the Pulsar client. | - -### 2. Publish messages with a transaction - -In this stage, the Pulsar client enters a transaction loop, repeating the `consume-process-produce` operation for all the messages that comprise the transaction. This is a long phase and is potentially composed of multiple produce and acknowledgement requests. - -![](/assets/txn-4.png) - -Let’s walk through the steps for _publishing messages with a transaction_. - -| Step | Description | -| --- | --- | -| 2.1.1 | Before the Pulsar client produces messages to a new topic partition, it sends a request to the transaction coordinator to add the partition to the transaction. | -| 2.1.2 | The transaction coordinator logs the partition changes of the transaction into the transaction log for durability, which ensures the transaction coordinator knows all the partitions that a transaction is handling. The transaction coordinator can commit or abort changes on each partition at the end-partition phase. 
| -| 2.1.3 | The transaction log sends the result of logging the new partition (used for producing messages) to the transaction coordinator. | -| 2.1.4 | The transaction coordinator sends the result of adding a new produced partition to the transaction. | -| 2.2.1 | The Pulsar client starts producing messages to partitions. The flow of this part is the same as the normal flow of producing messages except that the batch of messages produced by a transaction contains transaction IDs. | -| 2.2.2 | The broker writes messages to a partition. | - -### 3. Acknowledge messages with a transaction - -In this phase, the Pulsar client sends a request to the transaction coordinator and a new subscription is acknowledged as a part of a transaction. - -![](/assets/txn-5.png) - -Let’s walk through the steps for _acknowledging messages with a transaction_. - -| Step | Description | -| --- | --- | -| 3.1.1 | The Pulsar client sends a request to add an acknowledged subscription to the transaction coordinator. | -| 3.1.2 | The transaction coordinator logs the addition of subscription, which ensures that it knows all subscriptions handled by a transaction and can commit or abort changes on each subscription at the end phase. | -| 3.1.3 | The transaction log sends the result of logging the new partition (used for acknowledging messages) to the transaction coordinator. | -| 3.1.4 | The transaction coordinator sends the result of adding the new acknowledged partition to the transaction. | -| 3.2 | The Pulsar client acknowledges messages on the subscription. The flow of this part is the same as the normal flow of acknowledging messages except that the acknowledged request carries a transaction ID. | -| 3.3 | The broker receiving the acknowledgement request checks if the acknowledgment belongs to a transaction or not. | - -### 4. End a transaction - -At the end of a transaction, the Pulsar client decides to commit or abort the transaction. The transaction can be aborted when a conflict is detected on acknowledging messages. - -#### 4.1 End transaction request - -When the Pulsar client finishes a transaction, it issues an end transaction request. - -![](/assets/txn-6.png) - -Let’s walk through the steps for _ending the transaction_. - -| Step | Description | -| --- | --- | -| 4.1.1 | The Pulsar client issues an end transaction request (with a field indicating whether the transaction is to be committed or aborted) to the transaction coordinator. | -| 4.1.2 | The transaction coordinator writes a COMMITTING or ABORTING message to its transaction log. | -| 4.1.3 | The transaction log sends the result of logging the committing or aborting status. | - -#### 4.2 Finalize a transaction - -The transaction coordinator starts the process of committing or aborting messages to all the partitions involved in this transaction. - -![](/assets/txn-7.png) - -Let’s walk through the steps for _finalizing a transaction_. - -| Step | Description | -| --- | --- | -| 4.2.1 | The transaction coordinator commits transactions on subscriptions and commits transactions on partitions at the same time. | -| 4.2.2 | The broker (produce) writes produced committed markers to the actual partitions. At the same time, the broker (ack) writes acked committed marks to the subscription pending ack partitions. | -| 4.2.3 | The data log sends the result of writing produced committed marks to the broker. At the same time, pending ack data log sends the result of writing acked committed marks to the broker. The cursor moves to the next position. 
|

#### 4.3 Mark a transaction as COMMITTED or ABORTED

The transaction coordinator writes the final transaction status to the transaction log to complete the transaction.

![](/assets/txn-8.png)

Let’s walk through the steps for _marking a transaction as COMMITTED or ABORTED_.

| Step | Description |
| --- | --- |
| 4.3.1 | After all produced messages and acknowledgments to all partitions involved in this transaction have been successfully committed or aborted, the transaction coordinator writes the final COMMITTED or ABORTED transaction status message to its transaction log, indicating that the transaction is complete. All the messages associated with the transaction in its transaction log can then be safely removed. |
| 4.3.2 | The transaction log sends the result of the committed transaction to the transaction coordinator. |
| 4.3.3 | The transaction coordinator sends the result of the committed transaction to the Pulsar client. |
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/txn-monitor.md b/site2/website/versioned_docs/version-2.9.1-deprecated/txn-monitor.md
deleted file mode 100644
index 5b50953772d092..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/txn-monitor.md
+++ /dev/null
@@ -1,10 +0,0 @@
---
id: txn-monitor
title: How to monitor transactions?
sidebar_label: "How to monitor transactions?"
original_id: txn-monitor
---

You can monitor the status of transactions in Prometheus and Grafana using the [transaction metrics](https://pulsar.apache.org/docs/en/next/reference-metrics/#pulsar-transaction).

For how to configure Prometheus and Grafana, see [here](https://pulsar.apache.org/docs/en/next/deploy-monitoring).
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/txn-use.md b/site2/website/versioned_docs/version-2.9.1-deprecated/txn-use.md
deleted file mode 100644
index de0e4a92f1b27e..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/txn-use.md
+++ /dev/null
@@ -1,105 +0,0 @@
---
id: txn-use
title: How to use transactions?
sidebar_label: "How to use transactions?"
original_id: txn-use
---

## Transaction API

The transaction feature is primarily a server-side and protocol-level feature. You can use the transaction feature via the [transaction API](https://pulsar.apache.org/api/admin/), which is available in **Pulsar 2.8.0 or later**.

To use the transaction API, you do not need any additional settings in the Pulsar client. **By default**, transactions are **disabled**.

Currently, the transaction API is only available for **Java** clients. Support for other language clients will be added in future releases.

## Quick start

This section provides an example of how to use the transaction API to send and receive messages in a Java client.

1. Start Pulsar 2.8.0 or later.

2. Enable transactions.

   Change the configuration in the `broker.conf` file.

   ```
   
   transactionCoordinatorEnabled=true
   
   ```

   If you want to enable batch messages in transactions, follow the steps below.

   Set `acknowledgmentAtBatchIndexLevelEnabled` to `true` in the `broker.conf` or `standalone.conf` file.

   ```
   
   acknowledgmentAtBatchIndexLevelEnabled=true
   
   ```

3. Initialize transaction coordinator metadata.

   The transaction coordinator can leverage the advantages of partitioned topics (such as load balancing).

   **Input**

   ```
   
   bin/pulsar initialize-transaction-coordinator-metadata -cs 127.0.0.1:2181 -c standalone
   
   ```

   **Output**

   ```
   
   Transaction coordinator metadata setup success
   
   ```

4. Initialize a Pulsar client.

   ```
   
   PulsarClient client = PulsarClient.builder()
       .serviceUrl("pulsar://localhost:6650")
       .enableTransaction(true)
       .build();
   
   ```

Now you can start using the transaction API to send and receive messages. Below is an example of a `consume-process-produce` application written in Java.

![](/assets/txn-9.png)

Let’s walk through this example step by step.

| Step | Description |
| --- | --- |
| 1. Start a transaction. | The application opens a new transaction by calling `PulsarClient.newTransaction`. It specifies the transaction timeout as 1 minute. If the transaction is not committed within 1 minute, it is automatically aborted. |
| 2. Receive messages from topics. | The application creates two normal consumers to receive messages from the topics _input-topic-1_ and _input-topic-2_ respectively. |
| 3. Publish messages to topics with the transaction. | The application creates two producers to produce the resulting messages to the output topics _output-topic-1_ and _output-topic-2_ respectively. The application applies the processing logic and generates two output messages. The application sends those two output messages as part of the transaction opened in the first step via `Producer.newMessage(Transaction)`. |
| 4. Acknowledge the messages with the transaction. | In the same transaction, the application acknowledges the two input messages. |
| 5. Commit the transaction. | The application commits the transaction by calling `Transaction.commit()` on the open transaction. The commit operation ensures the two input messages are marked as acknowledged and the two output messages are written successfully to the output topics. |

[1] Example of enabling batch message acks in transactions in the consumer builder.

```

Consumer<byte[]> sinkConsumer = pulsarClient
    .newConsumer()
    .topic(transferTopic)
    .subscriptionName("sink-topic")
    .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest)
    .subscriptionType(SubscriptionType.Shared)
    .enableBatchIndexAcknowledgment(true) // enable batch index acknowledgment
    .subscribe();

```

diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/txn-what.md b/site2/website/versioned_docs/version-2.9.1-deprecated/txn-what.md
deleted file mode 100644
index 844f19a700f8f0..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/txn-what.md
+++ /dev/null
@@ -1,60 +0,0 @@
---
id: txn-what
title: What are transactions?
sidebar_label: "What are transactions?"
original_id: txn-what
---

Transactions strengthen the message delivery semantics of Apache Pulsar and the [processing guarantees of Pulsar Functions](https://pulsar.apache.org/docs/en/next/functions-overview/#processing-guarantees). The Pulsar Transaction API supports atomic writes and acknowledgments across multiple topics.

Transactions allow:

- A producer to send a batch of messages to multiple topics where all messages in the batch are eventually visible to any consumer, or none are ever visible to consumers.

- End-to-end exactly-once semantics (execute a `consume-process-produce` operation exactly once).

## Transaction semantics

Pulsar transactions have the following semantics:

* All operations within a transaction are committed as a single unit. 

  * Either all messages are committed, or none of them are.

  * Each message is written or processed exactly once, without data loss or duplicates (even in the event of failures).

  * If a transaction is aborted, all the writes and acknowledgments in this transaction roll back.

* A group of messages in a transaction can be received from, produced to, and acknowledged by multiple partitions.

  * Consumers are only allowed to read committed (acked) messages. In other words, the broker does not deliver transactional messages that are part of an open transaction or messages that are part of an aborted transaction.

  * Message writes across multiple partitions are atomic.

  * Message acks across multiple subscriptions are atomic. A message is acked successfully only once by a consumer under the subscription when acknowledging the message with the transaction ID.

## Transactions and stream processing

Stream processing on Pulsar is a `consume-process-produce` operation on Pulsar topics:

* `Consume`: a source operator that runs a Pulsar consumer reads messages from one or multiple Pulsar topics.

* `Process`: a processing operator transforms the messages.

* `Produce`: a sink operator that runs a Pulsar producer writes the resulting messages to one or multiple Pulsar topics.

![](/assets/txn-2.png)

Pulsar transactions support end-to-end exactly-once stream processing, which means messages are not lost from a source operator and messages are not duplicated to a sink operator.

## Use case

Prior to Pulsar 2.8.0, there was no easy way to build stream processing applications with Pulsar that achieve exactly-once processing guarantees. With the transaction introduced in Pulsar 2.8.0, the following services support exactly-once semantics:

* [Pulsar Flink connector](https://flink.apache.org/2021/01/07/pulsar-flink-connector-270.html)

  Prior to Pulsar 2.8.0, if you wanted to build stream applications using Pulsar and Flink, the Pulsar Flink connector only supported an exactly-once source connector and an at-least-once sink connector. This meant the highest end-to-end processing guarantee was at-least-once, so streaming applications could produce duplicate messages to the resulting topics in Pulsar.

  With the transaction introduced in Pulsar 2.8.0, the Pulsar Flink sink connector can support exactly-once semantics by implementing the designated `TwoPhaseCommitSinkFunction` and hooking up the Flink sink message lifecycle with the Pulsar transaction API.

* Support for Pulsar Functions and other connectors will be added in future releases.
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/txn-why.md b/site2/website/versioned_docs/version-2.9.1-deprecated/txn-why.md
deleted file mode 100644
index 1ed8769977654e..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/txn-why.md
+++ /dev/null
@@ -1,45 +0,0 @@
---
id: txn-why
title: Why transactions?
sidebar_label: "Why transactions?"
original_id: txn-why
---

Pulsar transactions (txn) enable event streaming applications to consume, process, and produce messages in one atomic operation. The reasons for developing this feature are summarized below.

## Demand for stream processing

The demand for stream processing applications with stronger processing guarantees has grown along with the rise of stream processing. 
For example, in the financial industry, financial institutions use stream processing engines to process debits and credits for users. This type of use case requires that every message is processed exactly once, without exception.

In other words, if a stream processing application consumes message A and produces the result as a message B (B = f(A)), then the exactly-once processing guarantee means that A is marked as consumed if and only if B is successfully produced, and vice versa.

![](/assets/txn-1.png)

The Pulsar transactions API strengthens the message delivery semantics and the processing guarantees for stream processing. It enables stream processing applications to consume, process, and produce messages in one atomic operation. That means a batch of messages in a transaction can be received from, produced to, and acknowledged by many topic partitions. All the operations involved in a transaction succeed or fail as a single unit.

## Limitation of idempotent producer

Avoiding data loss or duplication can be achieved by using the Pulsar idempotent producer, but it does not provide guarantees for writes across multiple partitions.

In Pulsar, the highest level of message delivery guarantee is to use an [idempotent producer](https://pulsar.apache.org/docs/en/next/concepts-messaging/#producer-idempotency) with exactly-once semantics on a single partition, that is, each message is persisted exactly once without data loss or duplication. However, there are some limitations in this solution:

- Due to the monotonically increasing sequence ID, this solution only works on a single partition and within a single producer session (that is, for producing one message), so there is no atomicity when producing multiple messages to one or multiple partitions.

  In this case, if there are some failures (for example, client / broker / bookie crashes, network failure, and more) in the process of producing and receiving messages, messages are re-processed and re-delivered, which may cause data loss or data duplication:

  - For the producer: if the producer retries sending messages, some messages are persisted multiple times; if the producer does not retry sending messages, some messages are persisted once and others are lost.

  - For the consumer: since the consumer does not know whether the broker has received its acknowledgments, the consumer may not retry sending acks, which causes it to receive duplicate messages.

- Similarly, Pulsar Functions only guarantees exactly-once semantics for an idempotent function on a single event, not for processing multiple events or producing multiple results that happen exactly once.

  For example, if a function accepts multiple events and produces one result (for example, a window function), the function may fail between producing the result and acknowledging the incoming messages, or even between acknowledging individual events, which causes all (or some) incoming messages to be re-delivered and reprocessed, and a new result to be generated.

  However, many scenarios need atomic guarantees across multiple partitions and sessions.

- Consumers need to rely on additional mechanisms to acknowledge (ack) messages exactly once.

  For example, consumers are required to store the MessageID along with its acked state. After the topic is unloaded, the subscription can recover the acked state of this MessageID in memory when the topic is loaded again. 
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.9.1-deprecated/window-functions-context.md b/site2/website/versioned_docs/version-2.9.1-deprecated/window-functions-context.md
deleted file mode 100644
index f80fea57989ef0..00000000000000
--- a/site2/website/versioned_docs/version-2.9.1-deprecated/window-functions-context.md
+++ /dev/null
@@ -1,581 +0,0 @@
---
id: window-functions-context
title: Window Functions Context
sidebar_label: "Window Functions: Context"
original_id: window-functions-context
---

The Java SDK provides access to a **window context object** that can be used by a window function. This context object provides a wide variety of information and functionality for Pulsar window functions, as described below.

- [Spec](#spec)

  * Names of all input topics and the output topic associated with the function.
  * Tenant and namespace associated with the function.
  * Pulsar window function name, ID, and version.
  * ID of the Pulsar function instance running the window function.
  * Number of instances that invoke the window function.
  * Built-in type or custom class name of the output schema.

- [Logger](#logger)

  * Logger object used by the window function, which can be used to create window function log messages.

- [User config](#user-config)

  * Access to arbitrary user configuration values.

- [Routing](#routing)

  * Routing is supported in Pulsar window functions. Pulsar window functions send messages to arbitrary topics as per the `publish` interface.

- [Metrics](#metrics)

  * Interface for recording metrics.

- [State storage](#state-storage)

  * Interface for storing and retrieving state in [state storage](#state-storage).

## Spec

Spec contains the basic information of a function.

### Get input topics

The `getInputTopics` method gets the **name list** of all input topics.

This example demonstrates how to get the name list of all input topics in a Java window function.

```java

public class GetInputTopicsWindowFunction implements WindowFunction<String, Void> {
    @Override
    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
        Collection<String> inputTopics = context.getInputTopics();
        System.out.println(inputTopics);

        return null;
    }

}

```

### Get output topic

The `getOutputTopic` method gets the **name of a topic** to which the message is sent.

This example demonstrates how to get the name of an output topic in a Java window function.

```java

public class GetOutputTopicWindowFunction implements WindowFunction<String, Void> {
    @Override
    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
        String outputTopic = context.getOutputTopic();
        System.out.println(outputTopic);

        return null;
    }
}

```

### Get tenant

The `getTenant` method gets the tenant name associated with the window function.

This example demonstrates how to get the tenant name in a Java window function.

```java

public class GetTenantWindowFunction implements WindowFunction<String, Void> {
    @Override
    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
        String tenant = context.getTenant();
        System.out.println(tenant);

        return null;
    }

}

```

### Get namespace

The `getNamespace` method gets the namespace associated with the window function.

This example demonstrates how to get the namespace in a Java window function. 

```java

public class GetNamespaceWindowFunction implements WindowFunction<String, Void> {
    @Override
    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
        String ns = context.getNamespace();
        System.out.println(ns);

        return null;
    }

}

```

### Get function name

The `getFunctionName` method gets the window function name.

This example demonstrates how to get the function name in a Java window function.

```java

public class GetNameOfWindowFunction implements WindowFunction<String, Void> {
    @Override
    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
        String functionName = context.getFunctionName();
        System.out.println(functionName);

        return null;
    }

}

```

### Get function ID

The `getFunctionId` method gets the window function ID.

This example demonstrates how to get the function ID in a Java window function.

```java

public class GetFunctionIDWindowFunction implements WindowFunction<String, Void> {
    @Override
    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
        String functionID = context.getFunctionId();
        System.out.println(functionID);

        return null;
    }

}

```

### Get function version

The `getFunctionVersion` method gets the window function version.

This example demonstrates how to get the function version of a Java window function.

```java

public class GetVersionOfWindowFunction implements WindowFunction<String, Void> {
    @Override
    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
        String functionVersion = context.getFunctionVersion();
        System.out.println(functionVersion);

        return null;
    }

}

```

### Get instance ID

The `getInstanceId` method gets the instance ID of a window function.

This example demonstrates how to get the instance ID in a Java window function.

```java

public class GetInstanceIDWindowFunction implements WindowFunction<String, Void> {
    @Override
    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
        int instanceId = context.getInstanceId();
        System.out.println(instanceId);

        return null;
    }

}

```

### Get num instances

The `getNumInstances` method gets the number of instances that invoke the window function.

This example demonstrates how to get the number of instances in a Java window function.

```java

public class GetNumInstancesWindowFunction implements WindowFunction<String, Void> {
    @Override
    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
        int numInstances = context.getNumInstances();
        System.out.println(numInstances);

        return null;
    }

}

```

### Get output schema type

The `getOutputSchemaType` method gets the built-in type or custom class name of the output schema.

This example demonstrates how to get the output schema type of a Java window function.

```java

public class GetOutputSchemaTypeWindowFunction implements WindowFunction<String, Void> {

    @Override
    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
        String schemaType = context.getOutputSchemaType();
        System.out.println(schemaType);

        return null;
    }
}

```

## Logger

Pulsar window functions using the Java SDK have access to an [SLF4j](https://www.slf4j.org/) [`Logger`](https://www.slf4j.org/api/org/slf4j/Logger.html) object that can be used to produce logs at the chosen log level. 
This example logs an `INFO`-level message for each record in the window in a Java window function.

```java

import java.util.Collection;
import org.apache.pulsar.functions.api.Record;
import org.apache.pulsar.functions.api.WindowContext;
import org.apache.pulsar.functions.api.WindowFunction;
import org.slf4j.Logger;

public class LoggingWindowFunction implements WindowFunction<String, Void> {
    @Override
    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
        Logger log = context.getLogger();
        for (Record<String> record : inputs) {
            log.info(record + "-window-log");
        }
        return null;
    }

}

```

If you need your function to produce logs, specify a log topic when creating or running the function.

```bash

bin/pulsar-admin functions create \
  --jar my-functions.jar \
  --classname my.package.LoggingFunction \
  --log-topic persistent://public/default/logging-function-logs \
  # Other function configs

```

You can access all logs produced by `LoggingFunction` via the `persistent://public/default/logging-function-logs` topic.

## Metrics

Pulsar window functions can publish arbitrary metrics to the metrics interface, which can be queried.

:::note

If a Pulsar window function uses the language-native interface for Java, that function is not able to publish metrics and stats to Pulsar.

:::

You can record metrics using the context object on a per-key basis.

This example records the event time of each processed message under the `MessageEventTime` key in a Java window function.

```java

import java.util.Collection;
import org.apache.pulsar.functions.api.Record;
import org.apache.pulsar.functions.api.WindowContext;
import org.apache.pulsar.functions.api.WindowFunction;


/**
 * Example function that wants to keep track of
 * the event time of each message sent.
 */
public class UserMetricWindowFunction implements WindowFunction<String, Void> {
    @Override
    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {

        for (Record<String> record : inputs) {
            if (record.getEventTime().isPresent()) {
                context.recordMetric("MessageEventTime", record.getEventTime().get().doubleValue());
            }
        }

        return null;
    }
}

```

## User config

When you run or update Pulsar Functions that are created using the SDK, you can pass arbitrary key/value pairs to them with the `--user-config` flag. Key/value pairs **must** be specified as JSON.

This example passes a user-configured key/value to a function.

```bash

bin/pulsar-admin functions create \
  --name word-filter \
  --user-config '{"forbidden-word":"rosebud"}' \
  # Other function configs

```

### API
You can use the following APIs to get user-defined information for window functions.
#### getUserConfigMap

The `getUserConfigMap` API gets a map of all user-defined key/value configurations for the window function.

```java

/**
 * Get a map of all user-defined key/value configs for the function.
 *
 * @return The full map of user-defined config values
 */
 Map<String, Object> getUserConfigMap();

```

#### getUserConfigValue

The `getUserConfigValue` API gets a user-defined key/value.

```java

/**
 * Get any user-defined key/value.
 *
 * @param key The key
 * @return The Optional value specified by the user for that key. 
- */ - Optional getUserConfigValue(String key); - -``` - -#### getUserConfigValueOrDefault - -The `getUserConfigValueOrDefault` API gets a user-defined key/value or a default value if none is present. - -```java - -/** - * Get any user-defined key/value or a default value if none is present. - * - * @param key - * @param defaultValue - * @return Either the user config value associated with a given key or a supplied default value - */ - Object getUserConfigValueOrDefault(String key, Object defaultValue); - -``` - -This example demonstrates how to access key/value pairs provided to Pulsar window functions. - -Java SDK context object enables you to access key/value pairs provided to Pulsar window functions via the command line (as JSON). - -:::tip - -For all key/value pairs passed to Java window functions, both the `key` and the `value` are `String`. To set the value to be a different type, you need to deserialize it from the `String` type. - -::: - -This example passes a key/value pair in a Java window function. - -```bash - -bin/pulsar-admin functions create \ - --user-config '{"word-of-the-day":"verdure"}' \ - # Other function configs - -``` - -This example accesses values in a Java window function. - -The `UserConfigFunction` function logs the string `"The word of the day is verdure"` every time the function is invoked (which means every time a message arrives). The user config of `word-of-the-day` is changed **only** when the function is updated with a new config value via -multiple ways, such as the command line tool or REST API. - -```java - -import org.apache.pulsar.functions.api.Context; -import org.apache.pulsar.functions.api.Function; -import org.slf4j.Logger; - -import java.util.Optional; - -public class UserConfigWindowFunction implements WindowFunction { - @Override - public String process(Collection> input, WindowContext context) throws Exception { - Optional whatToWrite = context.getUserConfigValue("WhatToWrite"); - if (whatToWrite.get() != null) { - return (String)whatToWrite.get(); - } else { - return "Not a nice way"; - } - } - -} - -``` - -If no value is provided, you can access the entire user config map or set a default value. - -```java - -// Get the whole config map -Map allConfigs = context.getUserConfigMap(); - -// Get value or resort to default -String wotd = context.getUserConfigValueOrDefault("word-of-the-day", "perspicacious"); - -``` - -## Routing - -You can use the `context.publish()` interface to publish as many results as you want. - -This example shows that the `PublishFunction` class uses the built-in function in the context to publish messages to the `publishTopic` in a Java function. - -```java - -public class PublishWindowFunction implements WindowFunction { - @Override - public Void process(Collection> input, WindowContext context) throws Exception { - String publishTopic = (String) context.getUserConfigValueOrDefault("publish-topic", "publishtopic"); - String output = String.format("%s!", input); - context.publish(publishTopic, output); - - return null; - } - -} - -``` - -## State storage - -Pulsar window functions use [Apache BookKeeper](https://bookkeeper.apache.org) as a state storage interface. Apache Pulsar installation (including the standalone installation) includes the deployment of BookKeeper bookies. - -Apache Pulsar integrates with Apache BookKeeper `table service` to store the `state` for functions. For example, the `WordCount` function can store its `counters` state into BookKeeper table service via Pulsar Functions state APIs. 

States are key-value pairs, where the key is a string and the value is arbitrary binary data—counters are stored as 64-bit big-endian binary values. Keys are scoped to an individual Pulsar Function and shared between instances of that function.

Currently, Pulsar window functions expose a Java API to access, update, and manage states. These APIs are available in the context object when you use Java SDK functions.

| Java API | Description |
|---|---|
| `incrCounter` | Increases a built-in distributed counter referred to by the key. |
| `getCounter` | Gets the counter value for the key. |
| `putState` | Updates the state value for the key. |

You can use the following APIs to access, update, and manage states in Java window functions.

#### incrCounter

The `incrCounter` API increases a built-in distributed counter referred to by the key.

Applications use the `incrCounter` API to change the counter of a given `key` by the given `amount`. If the `key` does not exist, a new key is created.

```java

    /**
     * Increment the builtin distributed counter referred by key
     * @param key The name of the key
     * @param amount The amount to be incremented
     */
    void incrCounter(String key, long amount);

```

#### getCounter

The `getCounter` API gets the counter value for the key.

Applications use the `getCounter` API to retrieve the counter of a given `key` changed by the `incrCounter` API.

```java

    /**
     * Retrieve the counter value for the key.
     *
     * @param key name of the key
     * @return the amount of the counter value for this key
     */
    long getCounter(String key);

```

In addition to the `getCounter` API, Pulsar also exposes a general key/value API (`putState`) for functions to store general key/value state.

#### putState

The `putState` API updates the state value for the key.

```java

    /**
     * Update the state value for the key.
     *
     * @param key name of the key
     * @param value state value of the key
     */
    void putState(String key, ByteBuffer value);

```

This example demonstrates how applications store states in Pulsar window functions.

The logic of the `WordCountWindowFunction` is simple and straightforward.

1. The function first splits the received string into multiple words using the regex `\\.`.

2. For each `word`, the function increments the corresponding `counter` by 1 via `incrCounter(key, amount)`.

```java

import java.util.Arrays;
import java.util.Collection;

import org.apache.pulsar.functions.api.Record;
import org.apache.pulsar.functions.api.WindowContext;
import org.apache.pulsar.functions.api.WindowFunction;

public class WordCountWindowFunction implements WindowFunction<String, Void> {
    @Override
    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
        for (Record<String> input : inputs) {
            Arrays.asList(input.getValue().split("\\.")).forEach(word -> context.incrCounter(word, 1));
        }
        return null;

    }
}

```

diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/about.md b/site2/website/versioned_docs/version-2.9.2-deprecated/about.md
deleted file mode 100644
index 219912ac658220..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/about.md
+++ /dev/null
@@ -1,56 +0,0 @@
---
slug: /
id: about
title: Welcome to the doc portal!
sidebar_label: "About"
---

import BlockLinks from "@site/src/components/BlockLinks";
import BlockLink from "@site/src/components/BlockLink";
import { docUrl } from "@site/src/utils/index";


# Welcome to the doc portal!
***

This portal holds a variety of support documents to help you work with Pulsar. 
If you’re a beginner, there are tutorials and explainers to help you understand Pulsar and how it works.

If you’re an experienced coder, review this page to learn the easiest way to access the specific content you’re looking for.

## Get Started Now

## Navigation
***

There are several ways to get around in the doc portal. The index navigation pane is a table of contents for the entire archive. The archive is divided into sections, like chapters in a book. Click the title of a topic to view it.

In-context links provide an easy way to immediately reference related topics. Click the underlined term to view the topic.

Links to related topics can be found at the bottom of each topic page. Click the link to view the topic.

![Page Linking](/assets/page-linking.png)

## Continuous Improvement
***
As you probably know, we are working on a new user experience for our documentation portal that will make learning about and building on top of Apache Pulsar a much better experience. Whether you need overview concepts, how-to procedures, curated guides, or quick references, we’re building content to support it. This welcome page is just the first step. We will be providing updates every month.

## Help Improve These Documents
***

You’ll notice an Edit button at the bottom and top of each page. Click it to open a landing page with instructions for requesting changes to posted documents. These are your resources. Participation is not only welcomed – it’s essential!

## Join the Community!
***

The Pulsar community on GitHub is active, passionate, and knowledgeable. Join discussions, voice opinions, suggest features, and dive into the code itself. Find your Pulsar family here at [apache/pulsar](https://github.com/apache/pulsar).

An equally passionate community can be found in the [Pulsar Slack channel](https://apache-pulsar.slack.com/). You’ll need an invitation to join, but many GitHub Pulsar community members are Slack members too. Join, hang out, learn, and make some new friends.

diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/adaptors-kafka.md b/site2/website/versioned_docs/version-2.9.2-deprecated/adaptors-kafka.md
deleted file mode 100644
index ea256049710fd1..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/adaptors-kafka.md
+++ /dev/null
@@ -1,276 +0,0 @@
---
id: adaptors-kafka
title: Pulsar adaptor for Apache Kafka
sidebar_label: "Kafka client wrapper"
original_id: adaptors-kafka
---


Pulsar provides an easy option for applications that are currently written using the [Apache Kafka](http://kafka.apache.org) Java client API.

## Using the Pulsar Kafka compatibility wrapper

In an existing application, replace the regular Kafka client dependency with the Pulsar Kafka wrapper. Remove the following dependency in `pom.xml`:

```xml

<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka-clients</artifactId>
  <version>0.10.2.1</version>
</dependency>

```

Then include this dependency for the Pulsar Kafka wrapper:

```xml

<dependency>
  <groupId>org.apache.pulsar</groupId>
  <artifactId>pulsar-client-kafka</artifactId>
  <version>@pulsar:version@</version>
</dependency>

```

With the new dependency, the existing code works without any changes. You need to adjust the configuration to make sure it points the producers and consumers to a Pulsar service rather than to Kafka, and uses a particular Pulsar topic. 
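
As a minimal sketch of that adjustment, the two settings that typically change are shown below; the service URL and topic name are placeholders for your own deployment:

```java

Properties props = new Properties();
// A Pulsar service URL replaces the list of Kafka bootstrap servers.
props.put("bootstrap.servers", "pulsar://localhost:6650");

// Topics are addressed by their fully qualified Pulsar name.
String topic = "persistent://public/default/my-topic";

```

The full producer and consumer examples below show these settings in context.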

## Using the Pulsar Kafka compatibility wrapper together with the existing Kafka client

When migrating from Kafka to Pulsar, the application might use the original Kafka client and the Pulsar Kafka wrapper together during migration. In that case, you should consider using the unshaded Pulsar Kafka client wrapper.

```xml

<dependency>
  <groupId>org.apache.pulsar</groupId>
  <artifactId>pulsar-client-kafka-original</artifactId>
  <version>@pulsar:version@</version>
</dependency>

```

When using this dependency, construct producers using `org.apache.kafka.clients.producer.PulsarKafkaProducer` instead of `org.apache.kafka.clients.producer.KafkaProducer`, and consumers using `org.apache.kafka.clients.consumer.PulsarKafkaConsumer` instead of `org.apache.kafka.clients.consumer.KafkaConsumer`.

## Producer example

```java

// Topic needs to be a regular Pulsar topic
String topic = "persistent://public/default/my-topic";

Properties props = new Properties();
// Point to a Pulsar service
props.put("bootstrap.servers", "pulsar://localhost:6650");

props.put("key.serializer", IntegerSerializer.class.getName());
props.put("value.serializer", StringSerializer.class.getName());

Producer<Integer, String> producer = new KafkaProducer<>(props);

for (int i = 0; i < 10; i++) {
    producer.send(new ProducerRecord<>(topic, i, "hello-" + i));
    log.info("Message {} sent successfully", i);
}

producer.close();

```

## Consumer example

```java

String topic = "persistent://public/default/my-topic";

Properties props = new Properties();
// Point to a Pulsar service
props.put("bootstrap.servers", "pulsar://localhost:6650");
props.put("group.id", "my-subscription-name");
props.put("enable.auto.commit", "false");
props.put("key.deserializer", IntegerDeserializer.class.getName());
props.put("value.deserializer", StringDeserializer.class.getName());

Consumer<Integer, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Arrays.asList(topic));

while (true) {
    ConsumerRecords<Integer, String> records = consumer.poll(100);
    records.forEach(record -> {
        log.info("Received record: {}", record);
    });

    // Commit last offset
    consumer.commitSync();
}

```

## Complete Examples

You can find the complete producer and consumer examples [here](https://github.com/apache/pulsar-adapters/tree/master/pulsar-client-kafka-compat/pulsar-client-kafka-tests/src/test/java/org/apache/pulsar/client/kafka/compat/examples).

## Compatibility matrix

Currently, the Pulsar Kafka wrapper supports most of the operations offered by the Kafka API.

### Producer

APIs:

| Producer Method | Supported | Notes |
|:---|:---|:---|
| `Future<RecordMetadata> send(ProducerRecord<K, V> record)` | Yes | |
| `Future<RecordMetadata> send(ProducerRecord<K, V> record, Callback callback)` | Yes | |
| `void flush()` | Yes | |
| `List<PartitionInfo> partitionsFor(String topic)` | No | |
| `Map<MetricName, ? extends Metric> metrics()` | No | |
| `void close()` | Yes | |
| `void close(long timeout, TimeUnit unit)` | Yes | |

Properties:

| Config property | Supported | Notes |
|:---|:---|:---|
| `acks` | Ignored | Durability and quorum writes are configured at the namespace level |
| `auto.offset.reset` | Yes | It uses a default value of `earliest` if you do not give a specific setting. |
| `batch.size` | Ignored | |
| `bootstrap.servers` | Yes | |
| `buffer.memory` | Ignored | |
| `client.id` | Ignored | |
| `compression.type` | Yes | Allows `gzip` and `lz4`. No `snappy`. 
|
| `connections.max.idle.ms` | Yes | Only supports up to 2,147,483,647,000 (Integer.MAX_VALUE * 1000) ms of idle time |
| `interceptor.classes` | Yes | |
| `key.serializer` | Yes | |
| `linger.ms` | Yes | Controls the group commit time when batching messages |
| `max.block.ms` | Ignored | |
| `max.in.flight.requests.per.connection` | Ignored | In Pulsar, ordering is maintained even with multiple requests in flight |
| `max.request.size` | Ignored | |
| `metric.reporters` | Ignored | |
| `metrics.num.samples` | Ignored | |
| `metrics.sample.window.ms` | Ignored | |
| `partitioner.class` | Yes | |
| `receive.buffer.bytes` | Ignored | |
| `reconnect.backoff.ms` | Ignored | |
| `request.timeout.ms` | Ignored | |
| `retries` | Ignored | The Pulsar client retries with exponential backoff until the send timeout expires. |
| `send.buffer.bytes` | Ignored | |
| `timeout.ms` | Yes | |
| `value.serializer` | Yes | |


### Consumer

The following table lists consumer APIs.

| Consumer Method | Supported | Notes |
|:---|:---|:---|
| `Set<TopicPartition> assignment()` | No | |
| `Set<String> subscription()` | Yes | |
| `void subscribe(Collection<String> topics)` | Yes | |
| `void subscribe(Collection<String> topics, ConsumerRebalanceListener callback)` | No | |
| `void assign(Collection<TopicPartition> partitions)` | No | |
| `void subscribe(Pattern pattern, ConsumerRebalanceListener callback)` | No | |
| `void unsubscribe()` | Yes | |
| `ConsumerRecords<K, V> poll(long timeoutMillis)` | Yes | |
| `void commitSync()` | Yes | |
| `void commitSync(Map<TopicPartition, OffsetAndMetadata> offsets)` | Yes | |
| `void commitAsync()` | Yes | |
| `void commitAsync(OffsetCommitCallback callback)` | Yes | |
| `void commitAsync(Map<TopicPartition, OffsetAndMetadata> offsets, OffsetCommitCallback callback)` | Yes | |
| `void seek(TopicPartition partition, long offset)` | Yes | |
| `void seekToBeginning(Collection<TopicPartition> partitions)` | Yes | |
| `void seekToEnd(Collection<TopicPartition> partitions)` | Yes | |
| `long position(TopicPartition partition)` | Yes | |
| `OffsetAndMetadata committed(TopicPartition partition)` | Yes | |
| `Map<MetricName, ? extends Metric> metrics()` | No | |
| `List<PartitionInfo> partitionsFor(String topic)` | No | |
| `Map<String, List<PartitionInfo>> listTopics()` | No | |
| `Set<TopicPartition> paused()` | No | |
| `void pause(Collection<TopicPartition> partitions)` | No | |
| `void resume(Collection<TopicPartition> partitions)` | No | |
| `Map<TopicPartition, OffsetAndTimestamp> offsetsForTimes(Map<TopicPartition, Long> timestampsToSearch)` | No | |
| `Map<TopicPartition, Long> beginningOffsets(Collection<TopicPartition> partitions)` | No | |
| `Map<TopicPartition, Long> endOffsets(Collection<TopicPartition> partitions)` | No | |
| `void close()` | Yes | |
| `void close(long timeout, TimeUnit unit)` | Yes | |
| `void wakeup()` | No | |

Properties:

| Config property | Supported | Notes |
|:---|:---|:---|
| `group.id` | Yes | Maps to a Pulsar subscription name |
| `max.poll.records` | Yes | |
| `max.poll.interval.ms` | Ignored | Messages are "pushed" from the broker |
| `session.timeout.ms` | Ignored | |
| `heartbeat.interval.ms` | Ignored | |
| `bootstrap.servers` | Yes | Needs to point to a single Pulsar service URL |
| `enable.auto.commit` | Yes | |
| `auto.commit.interval.ms` | Ignored | With auto-commit, acks are sent immediately to the broker |
| `partition.assignment.strategy` | Ignored | |
| `auto.offset.reset` | Yes | Only supports `earliest` and `latest`. 
| -| `fetch.min.bytes` | Ignored | | -| `fetch.max.bytes` | Ignored | | -| `fetch.max.wait.ms` | Ignored | | -| `interceptor.classes` | Yes | | -| `metadata.max.age.ms` | Ignored | | -| `max.partition.fetch.bytes` | Ignored | | -| `send.buffer.bytes` | Ignored | | -| `receive.buffer.bytes` | Ignored | | -| `client.id` | Ignored | | - - -## Customize Pulsar configurations - -You can configure Pulsar authentication provider directly from the Kafka properties. - -### Pulsar client properties - -| Config property | Default | Notes | -|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------| -| [`pulsar.authentication.class`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setAuthentication-org.apache.pulsar.client.api.Authentication-) | | Configure to auth provider. For example, `org.apache.pulsar.client.impl.auth.AuthenticationTls`.| -| [`pulsar.authentication.params.map`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setAuthentication-java.lang.String-java.util.Map-) | | Map which represents parameters for the Authentication-Plugin. | -| [`pulsar.authentication.params.string`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setAuthentication-java.lang.String-java.lang.String-) | | String which represents parameters for the Authentication-Plugin, for example, `key1:val1,key2:val2`. | -| [`pulsar.use.tls`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setUseTls-boolean-) | `false` | Enable TLS transport encryption. | -| [`pulsar.tls.trust.certs.file.path`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setTlsTrustCertsFilePath-java.lang.String-) | | Path for the TLS trust certificate store. | -| [`pulsar.tls.allow.insecure.connection`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setTlsAllowInsecureConnection-boolean-) | `false` | Accept self-signed certificates from brokers. | -| [`pulsar.operation.timeout.ms`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setOperationTimeout-int-java.util.concurrent.TimeUnit-) | `30000` | General operations timeout. | -| [`pulsar.stats.interval.seconds`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setStatsInterval-long-java.util.concurrent.TimeUnit-) | `60` | Pulsar client lib stats printing interval. | -| [`pulsar.num.io.threads`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setIoThreads-int-) | `1` | The number of Netty IO threads to use. | -| [`pulsar.connections.per.broker`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setConnectionsPerBroker-int-) | `1` | The maximum number of connection to each broker. | -| [`pulsar.use.tcp.nodelay`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setUseTcpNoDelay-boolean-) | `true` | TCP no-delay. | -| [`pulsar.concurrent.lookup.requests`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setConcurrentLookupRequest-int-) | `50000` | The maximum number of concurrent topic lookups. 
| -| [`pulsar.max.number.rejected.request.per.connection`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setMaxNumberOfRejectedRequestPerConnection-int-) | `50` | The threshold of errors to forcefully close a connection. | -| [`pulsar.keepalive.interval.ms`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientBuilder.html#keepAliveInterval-int-java.util.concurrent.TimeUnit-)| `30000` | Keep alive interval for each client-broker-connection. | - - -### Pulsar producer properties - -| Config property | Default | Notes | -|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------| -| [`pulsar.producer.name`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setProducerName-java.lang.String-) | | Specify the producer name. | -| [`pulsar.producer.initial.sequence.id`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setInitialSequenceId-long-) | | Specify baseline for sequence ID of this producer. | -| [`pulsar.producer.max.pending.messages`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setMaxPendingMessages-int-) | `1000` | Set the maximum size of the message queue pending to receive an acknowledgment from the broker. | -| [`pulsar.producer.max.pending.messages.across.partitions`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setMaxPendingMessagesAcrossPartitions-int-) | `50000` | Set the maximum number of pending messages across all the partitions. | -| [`pulsar.producer.batching.enabled`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setBatchingEnabled-boolean-) | `true` | Control whether automatic batching of messages is enabled for the producer. | -| [`pulsar.producer.batching.max.messages`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setBatchingMaxMessages-int-) | `1000` | The maximum number of messages in a batch. | -| [`pulsar.block.if.producer.queue.full`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setBlockIfQueueFull-boolean-) | | Specify the block producer if queue is full. | -| [`pulsar.crypto.reader.factory.class.name`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setCryptoKeyReader-org.apache.pulsar.client.api.CryptoKeyReader-) | | Specify the CryptoReader-Factory(`CryptoKeyReaderFactory`) classname which allows producer to create CryptoKeyReader. | - - -### Pulsar consumer Properties - -| Config property | Default | Notes | -|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------| -| [`pulsar.consumer.name`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setConsumerName-java.lang.String-) | | Specify the consumer name. | -| [`pulsar.consumer.receiver.queue.size`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setReceiverQueueSize-int-) | 1000 | Set the size of the consumer receiver queue. 
| -| [`pulsar.consumer.acknowledgments.group.time.millis`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#acknowledgmentGroupTime-long-java.util.concurrent.TimeUnit-) | 100 | Set the maximum amount of group time for consumers to send the acknowledgments to the broker. | -| [`pulsar.consumer.total.receiver.queue.size.across.partitions`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setMaxTotalReceiverQueueSizeAcrossPartitions-int-) | 50000 | Set the maximum size of the total receiver queue across partitions. | -| [`pulsar.consumer.subscription.topics.mode`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#subscriptionTopicsMode-Mode-) | PersistentOnly | Set the subscription topic mode for consumers. | -| [`pulsar.crypto.reader.factory.class.name`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setCryptoKeyReader-org.apache.pulsar.client.api.CryptoKeyReader-) | | Specify the CryptoReader-Factory(`CryptoKeyReaderFactory`) classname which allows consumer to create CryptoKeyReader. | diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/adaptors-spark.md b/site2/website/versioned_docs/version-2.9.2-deprecated/adaptors-spark.md deleted file mode 100644 index e14f13b5d4b079..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/adaptors-spark.md +++ /dev/null @@ -1,91 +0,0 @@ ---- -id: adaptors-spark -title: Pulsar adaptor for Apache Spark -sidebar_label: "Apache Spark" -original_id: adaptors-spark ---- - -## Spark Streaming receiver -The Spark Streaming receiver for Pulsar is a custom receiver that enables Apache [Spark Streaming](https://spark.apache.org/streaming/) to receive raw data from Pulsar. - -An application can receive data in [Resilient Distributed Dataset](https://spark.apache.org/docs/latest/programming-guide.html#resilient-distributed-datasets-rdds) (RDD) format via the Spark Streaming receiver and can process it in a variety of ways. - -### Prerequisites - -To use the receiver, include a dependency for the `pulsar-spark` library in your Java configuration. 

#### Maven

If you're using Maven, add this to your `pom.xml`:

```xml

<properties>
  <pulsar.version>@pulsar:version@</pulsar.version>
</properties>

<dependency>
  <groupId>org.apache.pulsar</groupId>
  <artifactId>pulsar-spark</artifactId>
  <version>${pulsar.version}</version>
</dependency>

```

#### Gradle

If you're using Gradle, add this to your `build.gradle` file:

```groovy

def pulsarVersion = "@pulsar:version@"

dependencies {
    compile group: 'org.apache.pulsar', name: 'pulsar-spark', version: pulsarVersion
}

```

### Usage

Pass an instance of `SparkStreamingPulsarReceiver` to the `receiverStream` method in `JavaStreamingContext`:

```java

String serviceUrl = "pulsar://localhost:6650/";
String topic = "persistent://public/default/test_src";
String subs = "test_sub";

SparkConf sparkConf = new SparkConf().setMaster("local[*]").setAppName("Pulsar Spark Example");

JavaStreamingContext jsc = new JavaStreamingContext(sparkConf, Durations.seconds(60));

ConsumerConfigurationData<byte[]> pulsarConf = new ConsumerConfigurationData<>();

Set<String> set = new HashSet<>();
set.add(topic);
pulsarConf.setTopicNames(set);
pulsarConf.setSubscriptionName(subs);

SparkStreamingPulsarReceiver pulsarReceiver = new SparkStreamingPulsarReceiver(
    serviceUrl,
    pulsarConf,
    new AuthenticationDisabled());

JavaReceiverInputDStream<byte[]> lineDStream = jsc.receiverStream(pulsarReceiver);

```

For a complete example, click [here](https://github.com/apache/pulsar-adapters/blob/master/examples/spark/src/main/java/org/apache/spark/streaming/receiver/example/SparkStreamingPulsarReceiverExample.java). In this example, the number of received messages that contain the string "Pulsar" is counted.

Note that, if needed, other Pulsar authentication classes can be used. For example, in order to use a token during authentication, the following parameters for the `SparkStreamingPulsarReceiver` constructor can be set:

```java

SparkStreamingPulsarReceiver pulsarReceiver = new SparkStreamingPulsarReceiver(
    serviceUrl,
    pulsarConf,
    new AuthenticationToken("token:<auth-token>"));

```

diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/adaptors-storm.md b/site2/website/versioned_docs/version-2.9.2-deprecated/adaptors-storm.md
deleted file mode 100644
index 76d507164777db..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/adaptors-storm.md
+++ /dev/null
@@ -1,96 +0,0 @@
---
id: adaptors-storm
title: Pulsar adaptor for Apache Storm
sidebar_label: "Apache Storm"
original_id: adaptors-storm
---

Pulsar Storm is an adaptor for integrating with [Apache Storm](http://storm.apache.org/) topologies. It provides core Storm implementations for sending and receiving data.

An application can inject data into a Storm topology via a generic Pulsar spout, as well as consume data from a Storm topology via a generic Pulsar bolt.

## Using the Pulsar Storm Adaptor

Include the dependency for the Pulsar Storm adaptor:

```xml

<dependency>
  <groupId>org.apache.pulsar</groupId>
  <artifactId>pulsar-storm</artifactId>
  <version>${pulsar.version}</version>
</dependency>

```

## Pulsar Spout

The Pulsar Spout allows the data published on a topic to be consumed by a Storm topology. It emits a Storm tuple based on the message received and the `MessageToValuesMapper` provided by the client.

The tuples that fail to be processed by the downstream bolts will be re-injected by the spout with an exponential backoff, within a configurable timeout (the default is 60 seconds) or a configurable number of retries, whichever comes first, after which the message is acknowledged by the consumer. 
Here's an example construction of a spout:

```java

MessageToValuesMapper messageToValuesMapper = new MessageToValuesMapper() {

    @Override
    public Values toValues(Message<byte[]> msg) {
        return new Values(new String(msg.getData()));
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // declare the output fields
        declarer.declare(new Fields("string"));
    }
};

// Configure a Pulsar Spout
PulsarSpoutConfiguration spoutConf = new PulsarSpoutConfiguration();
spoutConf.setServiceUrl("pulsar://broker.messaging.usw.example.com:6650");
spoutConf.setTopic("persistent://my-property/usw/my-ns/my-topic1");
spoutConf.setSubscriptionName("my-subscriber-name1");
spoutConf.setMessageToValuesMapper(messageToValuesMapper);

// Create a Pulsar Spout
PulsarSpout spout = new PulsarSpout(spoutConf);

```

For a complete example, click [here](https://github.com/apache/pulsar-adapters/blob/master/pulsar-storm/src/test/java/org/apache/pulsar/storm/PulsarSpoutTest.java).

## Pulsar Bolt

The Pulsar Bolt allows data in a Storm topology to be published on a topic. It publishes messages based on the Storm tuple received and the `TupleToMessageMapper` provided by the client.

A partitioned topic can also be used to publish messages on different topics. In the implementation of the `TupleToMessageMapper`, a "key" needs to be provided in the message, which sends the messages with the same key to the same topic. Here's an example bolt:

```java

TupleToMessageMapper tupleToMessageMapper = new TupleToMessageMapper() {

    @Override
    public TypedMessageBuilder<byte[]> toMessage(TypedMessageBuilder<byte[]> msgBuilder, Tuple tuple) {
        String receivedMessage = tuple.getString(0);
        // message processing
        String processedMsg = receivedMessage + "-processed";
        return msgBuilder.value(processedMsg.getBytes());
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // declare the output fields
    }
};

// Configure a Pulsar Bolt
PulsarBoltConfiguration boltConf = new PulsarBoltConfiguration();
boltConf.setServiceUrl("pulsar://broker.messaging.usw.example.com:6650");
boltConf.setTopic("persistent://my-property/usw/my-ns/my-topic2");
boltConf.setTupleToMessageMapper(tupleToMessageMapper);

// Create a Pulsar Bolt
PulsarBolt bolt = new PulsarBolt(boltConf);

```

diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-brokers.md b/site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-brokers.md
deleted file mode 100644
index 930fe69ecfb0ee..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-brokers.md
+++ /dev/null
@@ -1,286 +0,0 @@
---
id: admin-api-brokers
title: Managing Brokers
sidebar_label: "Brokers"
original_id: admin-api-brokers
---

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````


> **Important**
>
> This page only shows **some frequently used operations**.
>
> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/).
>
> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc.
>
> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/). 
- -Pulsar brokers consist of two components: - -1. An HTTP server exposing a {@inject: rest:REST:/} interface administration and [topic](reference-terminology.md#topic) lookup. -2. A dispatcher that handles all Pulsar [message](reference-terminology.md#message) transfers. - -[Brokers](reference-terminology.md#broker) can be managed via: - -* The `brokers` command of the [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/) tool -* The `/admin/v2/brokers` endpoint of the admin {@inject: rest:REST:/} API -* The `brokers` method of the `PulsarAdmin` object in the [Java API](client-libraries-java.md) - -In addition to being configurable when you start them up, brokers can also be [dynamically configured](#dynamic-broker-configuration). - -> See the [Configuration](reference-configuration.md#broker) page for a full listing of broker-specific configuration parameters. - -## Brokers resources - -### List active brokers - -Fetch all available active brokers that are serving traffic with cluster name. - -````mdx-code-block - - - -```shell - -$ pulsar-admin brokers list use - -``` - -``` - -broker1.use.org.com:8080 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/brokers/:cluster|operation/getActiveBrokers?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().getActiveBrokers(clusterName) - -``` - - - - -```` - -### Get the information of the leader broker - -Fetch the information of the leader broker, for example, the service url. - -````mdx-code-block - - - -```shell - -$ pulsar-admin brokers leader-broker - -``` - -``` - -BrokerInfo(serviceUrl=broker1.use.org.com:8080) - -``` - - - - -{@inject: endpoint|GET|/admin/v2/brokers/leaderBroker|operation/getLeaderBroker?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().getLeaderBroker() - -``` - -For the detail of the code above, see [here](https://github.com/apache/pulsar/blob/master/pulsar-client-admin/src/main/java/org/apache/pulsar/client/admin/internal/BrokersImpl.java#L80) - - - - -```` - -#### list of namespaces owned by a given broker - -It finds all namespaces which are owned and served by a given broker. - -````mdx-code-block - - - -```shell - -$ pulsar-admin brokers namespaces use \ - --url broker1.use.org.com:8080 - -``` - -```json - -{ - "my-property/use/my-ns/0x00000000_0xffffffff": { - "broker_assignment": "shared", - "is_controlled": false, - "is_active": true - } -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/brokers/:cluster/:broker/ownedNamespaces|operation/getOwnedNamespaes?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().getOwnedNamespaces(cluster,brokerUrl); - -``` - - - - -```` - -### Dynamic broker configuration - -One way to configure a Pulsar [broker](reference-terminology.md#broker) is to supply a [configuration](reference-configuration.md#broker) when the broker is [started up](reference-cli-tools.md#pulsar-broker). - -But since all broker configuration in Pulsar is stored in ZooKeeper, configuration values can also be dynamically updated *while the broker is running*. When you update broker configuration dynamically, ZooKeeper will notify the broker of the change and the broker will then override any existing configuration values. - -* The [`brokers`](reference-pulsar-admin.md#brokers) command for the [`pulsar-admin`](reference-pulsar-admin.md) tool has a variety of subcommands that enable you to manipulate a broker's configuration dynamically, enabling you to [update config values](#update-dynamic-configuration) and more. 
-* In the Pulsar admin {@inject: rest:REST:/} API, dynamic configuration is managed through the `/admin/v2/brokers/configuration` endpoint. - -### Update dynamic configuration - -````mdx-code-block - - - -The [`update-dynamic-config`](reference-pulsar-admin.md#brokers-update-dynamic-config) subcommand will update existing configuration. It takes two arguments: the name of the parameter and the new value using the `config` and `value` flag respectively. Here's an example for the [`brokerShutdownTimeoutMs`](reference-configuration.md#broker-brokerShutdownTimeoutMs) parameter: - -```shell - -$ pulsar-admin brokers update-dynamic-config --config brokerShutdownTimeoutMs --value 100 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/brokers/configuration/:configName/:configValue|operation/updateDynamicConfiguration?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().updateDynamicConfiguration(configName, configValue); - -``` - - - - -```` - -### List updated values - -Fetch a list of all potentially updatable configuration parameters. -````mdx-code-block - - - -```shell - -$ pulsar-admin brokers list-dynamic-config -brokerShutdownTimeoutMs - -``` - - - - -{@inject: endpoint|GET|/admin/v2/brokers/configuration|operation/getDynamicConfigurationName?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().getDynamicConfigurationNames(); - -``` - - - - -```` - -### List all - -Fetch a list of all parameters that have been dynamically updated. - -````mdx-code-block - - - -```shell - -$ pulsar-admin brokers get-all-dynamic-config -brokerShutdownTimeoutMs:100 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/brokers/configuration/values|operation/getAllDynamicConfigurations?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().getAllDynamicConfigurations(); - -``` - - - - -```` diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-clusters.md b/site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-clusters.md deleted file mode 100644 index 1d0c5dc9786f5a..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-clusters.md +++ /dev/null @@ -1,318 +0,0 @@ ---- -id: admin-api-clusters -title: Managing Clusters -sidebar_label: "Clusters" -original_id: admin-api-clusters ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -> **Important** -> -> This page only shows **some frequently used operations**. -> -> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/) -> -> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. -> -> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/). - -Pulsar clusters consist of one or more Pulsar [brokers](reference-terminology.md#broker), one or more [BookKeeper](reference-terminology.md#bookkeeper) -servers (aka [bookies](reference-terminology.md#bookie)), and a [ZooKeeper](https://zookeeper.apache.org) cluster that provides configuration and coordination management. 
- -Clusters can be managed via: - -* The `clusters` command of the [`pulsar-admin`]([reference-pulsar-admin.md](https://pulsar.apache.org/tools/pulsar-admin/)) tool -* The `/admin/v2/clusters` endpoint of the admin {@inject: rest:REST:/} API -* The `clusters` method of the `PulsarAdmin` object in the [Java API](client-libraries-java.md) - -## Clusters resources - -### Provision - -New clusters can be provisioned using the admin interface. - -> Please note that this operation requires superuser privileges. - -````mdx-code-block - - - -You can provision a new cluster using the [`create`](reference-pulsar-admin.md#clusters-create) subcommand. Here's an example: - -```shell - -$ pulsar-admin clusters create cluster-1 \ - --url http://my-cluster.org.com:8080 \ - --broker-url pulsar://my-cluster.org.com:6650 - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/clusters/:cluster|operation/createCluster?version=@pulsar:version_number@} - - - - -```java - -ClusterData clusterData = new ClusterData( - serviceUrl, - serviceUrlTls, - brokerServiceUrl, - brokerServiceUrlTls -); -admin.clusters().createCluster(clusterName, clusterData); - -``` - - - - -```` - -### Initialize cluster metadata - -When provision a new cluster, you need to initialize that cluster's [metadata](concepts-architecture-overview.md#metadata-store). When initializing cluster metadata, you need to specify all of the following: - -* The name of the cluster -* The local ZooKeeper connection string for the cluster -* The configuration store connection string for the entire instance -* The web service URL for the cluster -* A broker service URL enabling interaction with the [brokers](reference-terminology.md#broker) in the cluster - -You must initialize cluster metadata *before* starting up any [brokers](admin-api-brokers.md) that will belong to the cluster. - -> **No cluster metadata initialization through the REST API or the Java admin API** -> -> Unlike most other admin functions in Pulsar, cluster metadata initialization cannot be performed via the admin REST API -> or the admin Java client, as metadata initialization involves communicating with ZooKeeper directly. -> Instead, you can use the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool, in particular -> the [`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata) command. - -Here's an example cluster metadata initialization command: - -```shell - -bin/pulsar initialize-cluster-metadata \ - --cluster us-west \ - --zookeeper zk1.us-west.example.com:2181 \ - --configuration-store zk1.us-west.example.com:2184 \ - --web-service-url http://pulsar.us-west.example.com:8080/ \ - --web-service-url-tls https://pulsar.us-west.example.com:8443/ \ - --broker-service-url pulsar://pulsar.us-west.example.com:6650/ \ - --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651/ - -``` - -You'll need to use `--*-tls` flags only if you're using [TLS authentication](security-tls-authentication.md) in your instance. - -### Get configuration - -You can fetch the [configuration](reference-configuration.md) for an existing cluster at any time. - -````mdx-code-block - - - -Use the [`get`](reference-pulsar-admin.md#clusters-get) subcommand and specify the name of the cluster. 
Here's an example: - -```shell - -$ pulsar-admin clusters get cluster-1 -{ - "serviceUrl": "http://my-cluster.org.com:8080/", - "serviceUrlTls": null, - "brokerServiceUrl": "pulsar://my-cluster.org.com:6650/", - "brokerServiceUrlTls": null - "peerClusterNames": null -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/clusters/:cluster|operation/getCluster?version=@pulsar:version_number@} - - - - -```java - -admin.clusters().getCluster(clusterName); - -``` - - - - -```` - -### Update - -You can update the configuration for an existing cluster at any time. - -````mdx-code-block - - - -Use the [`update`](reference-pulsar-admin.md#clusters-update) subcommand and specify new configuration values using flags. - -```shell - -$ pulsar-admin clusters update cluster-1 \ - --url http://my-cluster.org.com:4081 \ - --broker-url pulsar://my-cluster.org.com:3350 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/clusters/:cluster|operation/updateCluster?version=@pulsar:version_number@} - - - - -```java - -ClusterData clusterData = new ClusterData( - serviceUrl, - serviceUrlTls, - brokerServiceUrl, - brokerServiceUrlTls -); -admin.clusters().updateCluster(clusterName, clusterData); - -``` - - - - -```` - -### Delete - -Clusters can be deleted from a Pulsar [instance](reference-terminology.md#instance). - -````mdx-code-block - - - -Use the [`delete`](reference-pulsar-admin.md#clusters-delete) subcommand and specify the name of the cluster. - -``` - -$ pulsar-admin clusters delete cluster-1 - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/clusters/:cluster|operation/deleteCluster?version=@pulsar:version_number@} - - - - -```java - -admin.clusters().deleteCluster(clusterName); - -``` - - - - -```` - -### List - -You can fetch a list of all clusters in a Pulsar [instance](reference-terminology.md#instance). - -````mdx-code-block - - - -Use the [`list`](reference-pulsar-admin.md#clusters-list) subcommand. - -```shell - -$ pulsar-admin clusters list -cluster-1 -cluster-2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/clusters|operation/getClusters?version=@pulsar:version_number@} - - - - -```java - -admin.clusters().getClusters(); - -``` - - - - -```` - -### Update peer-cluster data - -Peer clusters can be configured for a given cluster in a Pulsar [instance](reference-terminology.md#instance). - -````mdx-code-block - - - -Use the [`update-peer-clusters`](reference-pulsar-admin.md#clusters-update-peer-clusters) subcommand and specify the list of peer-cluster names. - -``` - -$ pulsar-admin update-peer-clusters cluster-1 --peer-clusters cluster-2 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/clusters/:cluster/peers|operation/setPeerClusterNames?version=@pulsar:version_number@} - - - - -```java - -admin.clusters().updatePeerClusterNames(clusterName, peerClusterList); - -``` - - - - -```` \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-functions.md b/site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-functions.md deleted file mode 100644 index d73386caf9b418..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-functions.md +++ /dev/null @@ -1,830 +0,0 @@ ---- -id: admin-api-functions -title: Manage Functions -sidebar_label: "Functions" -original_id: admin-api-functions ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -> **Important** -> -> This page only shows **some frequently used operations**. 
-> -> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/) -> -> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. -> -> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/). - -**Pulsar Functions** are lightweight compute processes that - -* consume messages from one or more Pulsar topics -* apply a user-supplied processing logic to each message -* publish the results of the computation to another topic - -Functions can be managed via the following methods. - -Method | Description ----|--- -**Admin CLI** | The `functions` command of the [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/) tool. -**REST API** |The `/admin/v3/functions` endpoint of the admin {@inject: rest:REST:/} API. -**Java Admin API**| The `functions` method of the `PulsarAdmin` object in the [Java API](client-libraries-java.md). - -## Function resources - -You can perform the following operations on functions. - -### Create a function - -You can create a Pulsar function in cluster mode (deploy it on a Pulsar cluster) using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`create`](reference-pulsar-admin.md#functions-create) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions create \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --inputs test-input-topic \ - --output persistent://public/default/test-output-topic \ - --classname org.apache.pulsar.functions.api.examples.ExclamationFunction \ - --jar /examples/api-examples.jar - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName|operation/registerFunction?version=@pulsar:version_number@} - - - - -```java - -FunctionConfig functionConfig = new FunctionConfig(); -functionConfig.setTenant(tenant); -functionConfig.setNamespace(namespace); -functionConfig.setName(functionName); -functionConfig.setRuntime(FunctionConfig.Runtime.JAVA); -functionConfig.setParallelism(1); -functionConfig.setClassName("org.apache.pulsar.functions.api.examples.ExclamationFunction"); -functionConfig.setProcessingGuarantees(FunctionConfig.ProcessingGuarantees.ATLEAST_ONCE); -functionConfig.setTopicsPattern(sourceTopicPattern); -functionConfig.setSubName(subscriptionName); -functionConfig.setAutoAck(true); -functionConfig.setOutput(sinkTopic); -admin.functions().createFunction(functionConfig, fileName); - -``` - - - - -```` - -### Update a function - -You can update a Pulsar function that has been deployed to a Pulsar cluster using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`update`](reference-pulsar-admin.md#functions-update) subcommand. 
- -**Example** - -```shell - -$ pulsar-admin functions update \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --output persistent://public/default/update-output-topic \ - # other options - -``` - - - - -{@inject: endpoint|PUT|/admin/v3/functions/:tenant/:namespace/:functionName|operation/updateFunction?version=@pulsar:version_number@} - - - - -```java - -FunctionConfig functionConfig = new FunctionConfig(); -functionConfig.setTenant(tenant); -functionConfig.setNamespace(namespace); -functionConfig.setName(functionName); -functionConfig.setRuntime(FunctionConfig.Runtime.JAVA); -functionConfig.setParallelism(1); -functionConfig.setClassName("org.apache.pulsar.functions.api.examples.ExclamationFunction"); -UpdateOptions updateOptions = new UpdateOptions(); -updateOptions.setUpdateAuthData(updateAuthData); -admin.functions().updateFunction(functionConfig, userCodeFile, updateOptions); - -``` - - - - -```` - -### Start an instance of a function - -You can start a stopped function instance with `instance-id` using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`start`](reference-pulsar-admin.md#functions-start) subcommand. - -```shell - -$ pulsar-admin functions start \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/start|operation/startFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().startFunction(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Start all instances of a function - -You can start all stopped function instances using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`start`](reference-pulsar-admin.md#functions-start) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions start \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/start|operation/startFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().startFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### Stop an instance of a function - -You can stop a function instance with `instance-id` using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`stop`](reference-pulsar-admin.md#functions-stop) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions stop \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/stop|operation/stopFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().stopFunction(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Stop all instances of a function - -You can stop all function instances using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`stop`](reference-pulsar-admin.md#functions-stop) subcommand. 
- -**Example** - -```shell - -$ pulsar-admin functions stop \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/stop|operation/stopFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().stopFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### Restart an instance of a function - -Restart a function instance with `instance-id` using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`restart`](reference-pulsar-admin.md#functions-restart) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions restart \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/restart|operation/restartFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().restartFunction(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Restart all instances of a function - -You can restart all function instances using Admin CLI, REST API or Java admin API. - -````mdx-code-block - - - -Use the [`restart`](reference-pulsar-admin.md#functions-restart) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions restart \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/restart|operation/restartFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().restartFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### List all functions - -You can list all Pulsar functions running under a specific tenant and namespace using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`list`](reference-pulsar-admin.md#functions-list) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions list \ - --tenant public \ - --namespace default - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace|operation/listFunctions?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctions(tenant, namespace); - -``` - - - - -```` - -### Delete a function - -You can delete a Pulsar function that is running on a Pulsar cluster using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`delete`](reference-pulsar-admin.md#functions-delete) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions delete \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) - -``` - - - - -{@inject: endpoint|DELETE|/admin/v3/functions/:tenant/:namespace/:functionName|operation/deregisterFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().deleteFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### Get info about a function - -You can get information about a Pulsar function currently running in cluster mode using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`get`](reference-pulsar-admin.md#functions-get) subcommand. 
- -**Example** - -```shell - -$ pulsar-admin functions get \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName|operation/getFunctionInfo?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### Get status of an instance of a function - -You can get the current status of a Pulsar function instance with `instance-id` using Admin CLI, REST API or Java Admin API. -````mdx-code-block - - - -Use the [`status`](reference-pulsar-admin.md#functions-status) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions status \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/status|operation/getFunctionInstanceStatus?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionStatus(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Get status of all instances of a function - -You can get the current status of a Pulsar function instance using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`status`](reference-pulsar-admin.md#functions-status) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions status \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/status|operation/getFunctionStatus?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionStatus(tenant, namespace, functionName); - -``` - - - - -```` - -### Get stats of an instance of a function - -You can get the current stats of a Pulsar Function instance with `instance-id` using Admin CLI, REST API or Java admin API. -````mdx-code-block - - - -Use the [`stats`](reference-pulsar-admin.md#functions-stats) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions stats \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/stats|operation/getFunctionInstanceStats?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionStats(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Get stats of all instances of a function - -You can get the current stats of a Pulsar function using Admin CLI, REST API or Java admin API. - -````mdx-code-block - - - -Use the [`stats`](reference-pulsar-admin.md#functions-stats) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions stats \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/stats|operation/getFunctionStats?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionStats(tenant, namespace, functionName); - -``` - - - - -```` - -### Trigger a function - -You can trigger a specified Pulsar function with a supplied value using Admin CLI, REST API or Java admin API. - -````mdx-code-block - - - -Use the [`trigger`](reference-pulsar-admin.md#functions-trigger) subcommand. 
- -**Example** - -```shell - -$ pulsar-admin functions trigger \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --topic (the name of input topic) \ - --trigger-value \"hello pulsar\" - # or --trigger-file (the path of trigger file) - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/trigger|operation/triggerFunction?version=@pulsar:version_number@} - - - - -```java - -admin.functions().triggerFunction(tenant, namespace, functionName, topic, triggerValue, triggerFile); - -``` - - - - -```` - -### Put state associated with a function - -You can put the state associated with a Pulsar function using Admin CLI, REST API or Java admin API. - -````mdx-code-block - - - -Use the [`putstate`](reference-pulsar-admin.md#functions-putstate) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions putstate \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --state "{\"key\":\"pulsar\", \"stringValue\":\"hello pulsar\"}" - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/state/:key|operation/putFunctionState?version=@pulsar:version_number@} - - - - -```java - -TypeReference typeRef = new TypeReference() {}; -FunctionState stateRepr = ObjectMapperFactory.getThreadLocal().readValue(state, typeRef); -admin.functions().putFunctionState(tenant, namespace, functionName, stateRepr); - -``` - - - - -```` - -### Fetch state associated with a function - -You can fetch the current state associated with a Pulsar function using Admin CLI, REST API or Java admin API. - -````mdx-code-block - - - -Use the [`querystate`](reference-pulsar-admin.md#functions-querystate) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions querystate \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --key (the key of state) - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/state/:key|operation/getFunctionState?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionState(tenant, namespace, functionName, key); - -``` - - - - -```` \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-namespaces.md b/site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-namespaces.md deleted file mode 100644 index fa6d9efe251ab0..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-namespaces.md +++ /dev/null @@ -1,1267 +0,0 @@ ---- -id: admin-api-namespaces -title: Managing Namespaces -sidebar_label: "Namespaces" -original_id: admin-api-namespaces ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -> **Important** -> -> This page only shows **some frequently used operations**. -> -> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/). -> -> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. -> -> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/). 
- -Pulsar [namespaces](reference-terminology.md#namespace) are logical groupings of [topics](reference-terminology.md#topic). - -Namespaces can be managed via: - -* The `namespaces` command of the [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/) tool -* The `/admin/v2/namespaces` endpoint of the admin {@inject: rest:REST:/} API -* The `namespaces` method of the `PulsarAdmin` object in the [Java API](client-libraries-java.md) - -## Namespaces resources - -### Create namespaces - -You can create new namespaces under a given [tenant](reference-terminology.md#tenant). - -````mdx-code-block - - - -Use the [`create`](reference-pulsar-admin.md#namespaces-create) subcommand and specify the namespace by name: - -```shell - -$ pulsar-admin namespaces create test-tenant/test-namespace - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace|operation/createNamespace?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().createNamespace(namespace); - -``` - - - - -```` - -### Get policies - -You can fetch the current policies associated with a namespace at any time. - -````mdx-code-block - - - -Use the [`policies`](reference-pulsar-admin.md#namespaces-policies) subcommand and specify the namespace: - -```shell - -$ pulsar-admin namespaces policies test-tenant/test-namespace -{ - "auth_policies": { - "namespace_auth": {}, - "destination_auth": {} - }, - "replication_clusters": [], - "bundles_activated": true, - "bundles": { - "boundaries": [ - "0x00000000", - "0xffffffff" - ], - "numBundles": 1 - }, - "backlog_quota_map": {}, - "persistence": null, - "latency_stats_sample_rate": {}, - "message_ttl_in_seconds": 0, - "retention_policies": null, - "deleted": false -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace|operation/getPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getPolicies(namespace); - -``` - - - - -```` - -### List namespaces - -You can list all namespaces within a given Pulsar [tenant](reference-terminology.md#tenant). - -````mdx-code-block - - - -Use the [`list`](reference-pulsar-admin.md#namespaces-list) subcommand and specify the tenant: - -```shell - -$ pulsar-admin namespaces list test-tenant -test-tenant/ns1 -test-tenant/ns2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant|operation/getTenantNamespaces?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getNamespaces(tenant); - -``` - - - - -```` - -### Delete namespaces - -You can delete existing namespaces from a tenant. - -````mdx-code-block - - - -Use the [`delete`](reference-pulsar-admin.md#namespaces-delete) subcommand and specify the namespace: - -```shell - -$ pulsar-admin namespaces delete test-tenant/ns1 - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace|operation/deleteNamespace?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().deleteNamespace(namespace); - -``` - - - - -```` - -### Configure replication clusters - -#### Set replication cluster - -You can set replication clusters for a namespace to enable Pulsar to internally replicate the published messages from one colocation facility to another. 
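-
-For orientation, here is a single hedged Java sketch of the whole flow; the service URL, tenant/namespace, and cluster name are placeholders, and the individual per-interface examples follow below:
-
-```java
-
-import java.util.Collections;
-import org.apache.pulsar.client.admin.PulsarAdmin;
-
-PulsarAdmin admin = PulsarAdmin.builder()
-        .serviceHttpUrl("http://localhost:8080")
-        .build();
-// Replicate messages published under test-tenant/ns1 to cluster cl1
-admin.namespaces().setNamespaceReplicationClusters("test-tenant/ns1", Collections.singleton("cl1"));
-admin.close();
-
-```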
- -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-clusters test-tenant/ns1 \ - --clusters cl1 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/replication|operation/setNamespaceReplicationClusters?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setNamespaceReplicationClusters(namespace, clusters); - -``` - - - - -```` - -#### Get replication cluster - -You can get the list of replication clusters for a given namespace. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-clusters test-tenant/cl1/ns1 - -``` - -``` - -cl2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/replication|operation/getNamespaceReplicationClusters?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getNamespaceReplicationClusters(namespace) - -``` - - - - -```` - -### Configure backlog quota policies - -#### Set backlog quota policies - -Backlog quota helps the broker to restrict bandwidth/storage of a namespace once it reaches a certain threshold limit. Admin can set the limit and take corresponding action after the limit is reached. - - 1. producer_request_hold: broker holds but not persists produce request payload - - 2. producer_exception: broker disconnects with the client by giving an exception - - 3. consumer_backlog_eviction: broker starts discarding backlog messages - -Backlog quota restriction can be taken care by defining restriction of backlog-quota-type: destination_storage. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-backlog-quota --limit 10G --limitTime 36000 --policy producer_request_hold test-tenant/ns1 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/setBacklogQuota?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setBacklogQuota(namespace, new BacklogQuota(limit, limitTime, policy)) - -``` - - - - -```` - -#### Get backlog quota policies - -You can get a configured backlog quota for a given namespace. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-backlog-quotas test-tenant/ns1 - -``` - -```json - -{ - "destination_storage": { - "limit": 10, - "policy": "producer_request_hold" - } -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/backlogQuotaMap|operation/getBacklogQuotaMap?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getBacklogQuotaMap(namespace); - -``` - - - - -```` - -#### Remove backlog quota policies - -You can remove backlog quota policies for a given namespace. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces remove-backlog-quota test-tenant/ns1 - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/removeBacklogQuota?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().removeBacklogQuota(namespace, backlogQuotaType) - -``` - - - - -```` - -### Configure persistence policies - -#### Set persistence policies - -Persistence policies allow users to configure persistency-level for all topic messages under a given namespace. 
- - - Bookkeeper-ack-quorum: Number of acks (guaranteed copies) to wait for each entry, default: 0 - - - Bookkeeper-ensemble: Number of bookies to use for a topic, default: 0 - - - Bookkeeper-write-quorum: How many writes to make of each entry, default: 0 - - - Ml-mark-delete-max-rate: Throttling rate of mark-delete operation (0 means no throttle), default: 0.0 - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-persistence --bookkeeper-ack-quorum 2 --bookkeeper-ensemble 3 --bookkeeper-write-quorum 2 --ml-mark-delete-max-rate 0 test-tenant/ns1 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/setPersistence?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setPersistence(namespace,new PersistencePolicies(bookkeeperEnsemble, bookkeeperWriteQuorum,bookkeeperAckQuorum,managedLedgerMaxMarkDeleteRate)) - -``` - - - - -```` - -#### Get persistence policies - -You can get the configured persistence policies of a given namespace. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-persistence test-tenant/ns1 - -``` - -```json - -{ - "bookkeeperEnsemble": 3, - "bookkeeperWriteQuorum": 2, - "bookkeeperAckQuorum": 2, - "managedLedgerMaxMarkDeleteRate": 0 -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/getPersistence?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getPersistence(namespace) - -``` - - - - -```` - -### Configure namespace bundles - -#### Unload namespace bundles - -The namespace bundle is a virtual group of topics which belong to the same namespace. If the broker gets overloaded with the number of bundles, this command can help unload a bundle from that broker, so it can be served by some other less-loaded brokers. The namespace bundle ID ranges from 0x00000000 to 0xffffffff. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces unload --bundle 0x00000000_0xffffffff test-tenant/ns1 - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace/:bundle/unload|operation/unloadNamespaceBundle?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().unloadNamespaceBundle(namespace, bundle) - -``` - - - - -```` - -#### Split namespace bundles - -One namespace bundle can contain multiple topics but can be served by only one broker. If a single bundle is creating an excessive load on a broker, an admin can split the bundle using the command below, permitting one or more of the new bundles to be unloaded, thus balancing the load across the brokers. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces split-bundle --bundle 0x00000000_0xffffffff test-tenant/ns1 - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace/:bundle/split|operation/splitNamespaceBundle?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().splitNamespaceBundle(namespace, bundle) - -``` - - - - -```` - -### Configure message TTL - -#### Set message-ttl - -You can configure the time to live (in seconds) duration for messages. In the example below, the message-ttl is set as 100s. 
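-
-As a quick sketch, the same 100-second TTL can be applied through the Java admin client (assuming an initialized `PulsarAdmin` instance named `admin`; the namespace name is a placeholder):
-
-```java
-
-// Messages in test-tenant/ns1 that stay unacknowledged for 100 seconds expire automatically
-admin.namespaces().setNamespaceMessageTTL("test-tenant/ns1", 100);
-
-```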
- -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-message-ttl --messageTTL 100 test-tenant/ns1 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/setNamespaceMessageTTL?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setNamespaceMessageTTL(namespace, messageTTL) - -``` - - - - -```` - -#### Get message-ttl - -When the message-ttl for a namespace is set, you can use the command below to get the configured value. This example comtinues the example of the command `set message-ttl`, so the returned value is 100(s). - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-message-ttl test-tenant/ns1 - -``` - -``` - -100 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/getNamespaceMessageTTL?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getNamespaceMessageTTL(namespace) - -``` - -``` - -100 - -``` - - - - -```` - -#### Remove message-ttl - -Remove a message TTL of the configured namespace. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces remove-message-ttl test-tenant/ns1 - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/removeNamespaceMessageTTL?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().removeNamespaceMessageTTL(namespace) - -``` - - - - -```` - - -### Clear backlog - -#### Clear namespace backlog - -It clears all message backlog for all the topics that belong to a specific namespace. You can also clear backlog for a specific subscription as well. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces clear-backlog --sub my-subscription test-tenant/ns1 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/clearBacklog|operation/clearNamespaceBacklogForSubscription?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().clearNamespaceBacklogForSubscription(namespace, subscription) - -``` - - - - -```` - -#### Clear bundle backlog - -It clears all message backlog for all the topics that belong to a specific NamespaceBundle. You can also clear backlog for a specific subscription as well. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces clear-backlog --bundle 0x00000000_0xffffffff --sub my-subscription test-tenant/ns1 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/:bundle/clearBacklog|operation/clearNamespaceBundleBacklogForSubscription?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().clearNamespaceBundleBacklogForSubscription(namespace, bundle, subscription) - -``` - - - - -```` - -### Configure retention - -#### Set retention - -Each namespace contains multiple topics and the retention size (storage size) of each topic should not exceed a specific threshold or it should be stored for a certain period. This command helps configure the retention size and time of topics in a given namespace. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-retention --size 100 --time 10 test-tenant/ns1 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/retention|operation/setRetention?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setRetention(namespace, new RetentionPolicies(retentionTimeInMin, retentionSizeInMB)) - -``` - - - - -```` - -#### Get retention - -It shows retention information of a given namespace. 
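-
-For example, a short hedged Java sketch (client setup omitted, namespace name assumed) that reads the configured values back:
-
-```java
-
-RetentionPolicies retention = admin.namespaces().getRetention("test-tenant/ns1");
-// With the set-retention example above, this prints: 10 min, 100 MB
-System.out.println(retention.getRetentionTimeInMinutes() + " min, "
-        + retention.getRetentionSizeInMB() + " MB");
-
-```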
- -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-retention test-tenant/ns1 - -``` - -```json - -{ - "retentionTimeInMinutes": 10, - "retentionSizeInMB": 100 -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/retention|operation/getRetention?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getRetention(namespace) - -``` - - - - -```` - -### Configure dispatch throttling for topics - -#### Set dispatch throttling for topics - -It sets message dispatch rate for all the topics under a given namespace. -The dispatch rate can be restricted by the number of messages per X seconds (`msg-dispatch-rate`) or by the number of message-bytes per X second (`byte-dispatch-rate`). -dispatch rate is in second and it can be configured with `dispatch-rate-period`. Default value of `msg-dispatch-rate` and `byte-dispatch-rate` is -1 which -disables the throttling. - -:::note - -- If neither `clusterDispatchRate` nor `topicDispatchRate` is configured, dispatch throttling is disabled. -- If `topicDispatchRate` is not configured, `clusterDispatchRate` takes effect. -- If `topicDispatchRate` is configured, `topicDispatchRate` takes effect. - -::: - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-dispatch-rate test-tenant/ns1 \ - --msg-dispatch-rate 1000 \ - --byte-dispatch-rate 1048576 \ - --dispatch-rate-period 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/dispatchRate|operation/setDispatchRate?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setDispatchRate(namespace, new DispatchRate(1000, 1048576, 1)) - -``` - - - - -```` - -#### Get configured message-rate for topics - -It shows configured message-rate for the namespace (topics under this namespace can dispatch this many messages per second) - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-dispatch-rate test-tenant/ns1 - -``` - -```json - -{ - "dispatchThrottlingRatePerTopicInMsg" : 1000, - "dispatchThrottlingRatePerTopicInByte" : 1048576, - "ratePeriodInSecond" : 1 -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/dispatchRate|operation/getDispatchRate?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getDispatchRate(namespace) - -``` - - - - -```` - -### Configure dispatch throttling for subscription - -#### Set dispatch throttling for subscription - -It sets message dispatch rate for all the subscription of topics under a given namespace. -The dispatch rate can be restricted by the number of messages per X seconds (`msg-dispatch-rate`) or by the number of message-bytes per X second (`byte-dispatch-rate`). -dispatch rate is in second and it can be configured with `dispatch-rate-period`. Default value of `msg-dispatch-rate` and `byte-dispatch-rate` is -1 which -disables the throttling. 
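-
-Sketched with the Java admin client (assuming an initialized `PulsarAdmin` named `admin`; the namespace name is a placeholder), a limit of 1000 msg/s and 1 MB/s per subscription could look like:
-
-```java
-
-// 1000 messages and 1048576 bytes per 1-second period; -1 disables a limit
-admin.namespaces().setSubscriptionDispatchRate("test-tenant/ns1",
-        new DispatchRate(1000, 1048576, 1));
-
-```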
- -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-subscription-dispatch-rate test-tenant/ns1 \ - --msg-dispatch-rate 1000 \ - --byte-dispatch-rate 1048576 \ - --dispatch-rate-period 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/subscriptionDispatchRate|operation/setDispatchRate?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setSubscriptionDispatchRate(namespace, new DispatchRate(1000, 1048576, 1)) - -``` - - - - -```` - -#### Get configured message-rate for subscription - -It shows configured message-rate for the namespace (topics under this namespace can dispatch this many messages per second) - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-subscription-dispatch-rate test-tenant/ns1 - -``` - -```json - -{ - "dispatchThrottlingRatePerTopicInMsg" : 1000, - "dispatchThrottlingRatePerTopicInByte" : 1048576, - "ratePeriodInSecond" : 1 -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/subscriptionDispatchRate|operation/getDispatchRate?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getSubscriptionDispatchRate(namespace) - -``` - - - - -```` - -### Configure dispatch throttling for replicator - -#### Set dispatch throttling for replicator - -It sets message dispatch rate for all the replicator between replication clusters under a given namespace. -The dispatch rate can be restricted by the number of messages per X seconds (`msg-dispatch-rate`) or by the number of message-bytes per X second (`byte-dispatch-rate`). -dispatch rate is in second and it can be configured with `dispatch-rate-period`. Default value of `msg-dispatch-rate` and `byte-dispatch-rate` is -1 which -disables the throttling. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-replicator-dispatch-rate test-tenant/ns1 \ - --msg-dispatch-rate 1000 \ - --byte-dispatch-rate 1048576 \ - --dispatch-rate-period 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/replicatorDispatchRate|operation/setDispatchRate?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setReplicatorDispatchRate(namespace, new DispatchRate(1000, 1048576, 1)) - -``` - - - - -```` - -#### Get configured message-rate for replicator - -It shows configured message-rate for the namespace (topics under this namespace can dispatch this many messages per second) - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-replicator-dispatch-rate test-tenant/ns1 - -``` - -```json - -{ - "dispatchThrottlingRatePerTopicInMsg" : 1000, - "dispatchThrottlingRatePerTopicInByte" : 1048576, - "ratePeriodInSecond" : 1 -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/replicatorDispatchRate|operation/getDispatchRate?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getReplicatorDispatchRate(namespace) - -``` - - - - -```` - -### Configure deduplication snapshot interval - -#### Get deduplication snapshot interval - -It shows configured `deduplicationSnapshotInterval` for a namespace (Each topic under the namespace will take a deduplication snapshot according to this interval) - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-deduplication-snapshot-interval test-tenant/ns1 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/deduplicationSnapshotInterval|operation/getDeduplicationSnapshotInterval?version=@pulsar:version_number@} - - - - -```java - 
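-// Fetch the namespace-level deduplicationSnapshotInterval; each topic under
-// the namespace takes a deduplication snapshot at this interval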
-admin.namespaces().getDeduplicationSnapshotInterval(namespace) - -``` - - - - -```` - -#### Set deduplication snapshot interval - -Set configured `deduplicationSnapshotInterval` for a namespace. Each topic under the namespace will take a deduplication snapshot according to this interval. -`brokerDeduplicationEnabled` must be set to `true` for this property to take effect. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-deduplication-snapshot-interval test-tenant/ns1 --interval 1000 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/deduplicationSnapshotInterval|operation/setDeduplicationSnapshotInterval?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().setDeduplicationSnapshotInterval(namespace, 1000) - -``` - - - - -```` - -#### Remove deduplication snapshot interval - -Remove configured `deduplicationSnapshotInterval` of a namespace (Each topic under the namespace will take a deduplication snapshot according to this interval) - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces remove-deduplication-snapshot-interval test-tenant/ns1 - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/deduplicationSnapshotInterval|operation/deleteDeduplicationSnapshotInterval?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().removeDeduplicationSnapshotInterval(namespace) - -``` - - - - -```` - -### Namespace isolation - -You can use the [Pulsar isolation policy](administration-isolation.md) to allocate resources (broker and bookie) for a namespace. - -### Unload namespaces from a broker - -You can unload a namespace, or a [namespace bundle](reference-terminology.md#namespace-bundle), from the Pulsar [broker](reference-terminology.md#broker) that is currently responsible for it. - -#### pulsar-admin - -Use the [`unload`](reference-pulsar-admin.md#unload) subcommand of the [`namespaces`](reference-pulsar-admin.md#namespaces) command. - -````mdx-code-block - - - -```shell - -$ pulsar-admin namespaces unload my-tenant/my-ns - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace/unload|operation/unloadNamespace?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().unload(namespace) - -``` - - - - -```` diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-non-partitioned-topics.md b/site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-non-partitioned-topics.md deleted file mode 100644 index e6347bb8c363a1..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-non-partitioned-topics.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -id: admin-api-non-partitioned-topics -title: Managing non-partitioned topics -sidebar_label: "Non-partitioned topics" -original_id: admin-api-non-partitioned-topics ---- - -For details of the content, refer to [manage topics](admin-api-topics.md). 
\ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-non-persistent-topics.md b/site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-non-persistent-topics.md deleted file mode 100644 index 3126a6494c7153..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-non-persistent-topics.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -id: admin-api-non-persistent-topics -title: Managing non-persistent topics -sidebar_label: "Non-Persistent topics" -original_id: admin-api-non-persistent-topics ---- - -For details of the content, refer to [manage topics](admin-api-topics.md). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-overview.md b/site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-overview.md deleted file mode 100644 index 1154c625aff7b5..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-overview.md +++ /dev/null @@ -1,144 +0,0 @@ ---- -id: admin-api-overview -title: Pulsar admin interface -sidebar_label: "Overview" -original_id: admin-api-overview ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -The Pulsar admin interface enables you to manage all important entities in a Pulsar instance, such as tenants, topics, and namespaces. - -You can interact with the admin interface via: - -- The `pulsar-admin` CLI tool, which is available in the `bin` folder of your Pulsar installation: - - ```shell - - bin/pulsar-admin - - ``` - - > **Important** - > - > For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/). - -- HTTP calls, which are made against the admin {@inject: rest:REST:/} API provided by Pulsar brokers. For some RESTful APIs, they might be redirected to the owner brokers for serving with [`307 Temporary Redirect`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/307), hence the HTTP callers should handle `307 Temporary Redirect`. If you use `curl` commands, you should specify `-L` to handle redirections. - - > **Important** - > - > For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. - -- A Java client interface. - - > **Important** - > - > For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/). - -> **The REST API is the admin interface**. Both the `pulsar-admin` CLI tool and the Java client use the REST API. If you implement your own admin interface client, you should use the REST API. - -## Admin setup - -Each of the three admin interfaces (the `pulsar-admin` CLI tool, the {@inject: rest:REST:/} API, and the [Java admin API](/api/admin)) requires some special setup if you have enabled authentication in your Pulsar instance. - -````mdx-code-block - - - -If you have enabled authentication, you need to provide an auth configuration to use the `pulsar-admin` tool. By default, the configuration for the `pulsar-admin` tool is in the [`conf/client.conf`](reference-configuration.md#client) file. 
The following are the available parameters: - -|Name|Description|Default| -|----|-----------|-------| -|webServiceUrl|The web URL for the cluster.|http://localhost:8080/| -|brokerServiceUrl|The Pulsar protocol URL for the cluster.|pulsar://localhost:6650/| -|authPlugin|The authentication plugin.| | -|authParams|The authentication parameters for the cluster, as a comma-separated string.| | -|useTls|Whether or not TLS authentication will be enforced in the cluster.|false| -|tlsAllowInsecureConnection|Accept untrusted TLS certificate from client.|false| -|tlsTrustCertsFilePath|Path for the trusted TLS certificate file.| | - - - - -You can find details for the REST API exposed by Pulsar brokers in this {@inject: rest:document:/}. - - - - -To use the Java admin API, instantiate a {@inject: javadoc:PulsarAdmin:/admin/org/apache/pulsar/client/admin/PulsarAdmin} object, and specify a URL for a Pulsar broker and a {@inject: javadoc:PulsarAdminBuilder:/admin/org/apache/pulsar/client/admin/PulsarAdminBuilder}. The following is a minimal example using `localhost`: - -```java - -String url = "http://localhost:8080"; -// Pass auth-plugin class fully-qualified name if Pulsar-security enabled -String authPluginClassName = "com.org.MyAuthPluginClass"; -// Pass auth-param if auth-plugin class requires it -String authParams = "param1=value1"; -boolean useTls = false; -boolean tlsAllowInsecureConnection = false; -String tlsTrustCertsFilePath = null; -PulsarAdmin admin = PulsarAdmin.builder() -.authentication(authPluginClassName,authParams) -.serviceHttpUrl(url) -.tlsTrustCertsFilePath(tlsTrustCertsFilePath) -.allowTlsInsecureConnection(tlsAllowInsecureConnection) -.build(); - -``` - -If you use multiple brokers, you can use multi-host like Pulsar service. For example, - -```java - -String url = "http://localhost:8080,localhost:8081,localhost:8082"; -// Pass auth-plugin class fully-qualified name if Pulsar-security enabled -String authPluginClassName = "com.org.MyAuthPluginClass"; -// Pass auth-param if auth-plugin class requires it -String authParams = "param1=value1"; -boolean useTls = false; -boolean tlsAllowInsecureConnection = false; -String tlsTrustCertsFilePath = null; -PulsarAdmin admin = PulsarAdmin.builder() -.authentication(authPluginClassName,authParams) -.serviceHttpUrl(url) -.tlsTrustCertsFilePath(tlsTrustCertsFilePath) -.allowTlsInsecureConnection(tlsAllowInsecureConnection) -.build(); - -``` - - - - -```` - -## How to define Pulsar resource names when running Pulsar in Kubernetes -If you run Pulsar Functions or connectors on Kubernetes, you need to follow Kubernetes naming convention to define the names of your Pulsar resources, whichever admin interface you use. - -Kubernetes requires a name that can be used as a DNS subdomain name as defined in [RFC 1123](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names). Pulsar supports more legal characters than Kubernetes naming convention. If you create a Pulsar resource name with special characters that are not supported by Kubernetes (for example, including colons in a Pulsar namespace name), Kubernetes runtime translates the Pulsar object names into Kubernetes resource labels which are in RFC 1123-compliant forms. Consequently, you can run functions or connectors using Kubernetes runtime. 
The rules for translating Pulsar object names into Kubernetes resource labels are as below:
-
-- Truncate to 63 characters
-
-- Replace the following characters with dashes (-):
-
-  - Non-alphanumeric characters
-
-  - Underscores (_)
-
-  - Dots (.)
-
-- Replace beginning and ending non-alphanumeric characters with 0
-
-:::tip
-
-- If you get an error in translating Pulsar object names into Kubernetes resource labels (for example, you may have a naming collision if your Pulsar object name is too long) or want to customize the translation rules, see [customize Kubernetes runtime](https://pulsar.apache.org/docs/en/next/functions-runtime/#customize-kubernetes-runtime).
-- For how to configure the Kubernetes runtime, see [here](https://pulsar.apache.org/docs/en/next/functions-runtime/#configure-kubernetes-runtime).
-
-:::
-
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-packages.md b/site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-packages.md
deleted file mode 100644
index 2852fb74a02be3..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-packages.md
+++ /dev/null
@@ -1,391 +0,0 @@
----
-id: admin-api-packages
-title: Manage packages
-sidebar_label: "Packages"
-original_id: admin-api-packages
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-> **Important**
->
-> This page only shows **some frequently used operations**.
->
-> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/)
->
-> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc.
->
-> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/).
-
-Package management enables version management and simplifies the upgrade and rollback processes for Functions, Sinks, and Sources. When you use the same function, sink, or source in different namespaces, you can upload it to a common package management system.
-
-## Package name
-
-A `package` is identified by five parts: `type`, `tenant`, `namespace`, `package name`, and `version`. The `tenant`, `namespace`, and `package name` parts are combined into the fully qualified `name`.
-
-| Part | Description |
-|-------|-------------|
-|`type` |The type of the package. The following types are supported: `function`, `sink`, and `source`. |
-| `name`|The fully qualified name of the package: `<tenant>/<namespace>/<package name>`.|
-|`version`|The version of the package.|
-
-The following is a code sample.
-
-```java
-
-class PackageName {
-    private final PackageType type;
-    private final String namespace;
-    private final String tenant;
-    private final String name;
-    private final String version;
-}
-
-enum PackageType {
-    FUNCTION("function"), SINK("sink"), SOURCE("source");
-}
-
-```
-
-## Package URL
-A package is located using a URL. The package URL is written in the following format:
-
-```shell
-
-<type>://<tenant>/<namespace>/<package name>@<version>
-
-```
-
-The following are package URL examples:
-
-`sink://public/default/mysql-sink@1.0`
-`function://my-tenant/my-ns/my-function@0.1`
-`source://my-tenant/my-ns/mysql-cdc-source@2.3`
-
-The package management system stores the data, versions, and metadata of each package. The metadata is shown in the following table.
-
-| Metadata | Description |
-|----------|-------------|
-|description|The description of the package.|
-|contact |The contact information of a package. For example, team email.|
-|create_time| The time when the package is created.|
-|modification_time| The time when the package is modified.|
-|properties |A key/value map that stores your own information.|
-
-## Permissions
-
-The packages are organized by tenant and namespace, so you can apply the tenant and namespace permissions to packages directly.
-
-## Package resources
-You can manage packages with the command line tool, the REST API, or the Java client.
-
-### Upload a package
-You can upload a package to the package management service in the following ways.
-
-````mdx-code-block
-
-
-```shell
-
-bin/pulsar-admin packages upload functions://public/default/example@v0.1 --path package-file --description package-description
-
-```
-
-
-
-{@inject: endpoint|POST|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version|operation/upload?version=@pulsar:version_number@}
-
-
-
-Upload a package to the package management service synchronously.
-
-```java
-
- void upload(PackageMetadata metadata, String packageName, String path) throws PulsarAdminException;
-
-```
-
-Upload a package to the package management service asynchronously.
-
-```java
-
- CompletableFuture<Void> uploadAsync(PackageMetadata metadata, String packageName, String path);
-
-```
-
-
-
-````
-
-### Download a package
-You can download a package from the package management service in the following ways.
-
-````mdx-code-block
-
-
-```shell
-
-bin/pulsar-admin packages download functions://public/default/example@v0.1 --path package-file
-
-```
-
-
-
-{@inject: endpoint|GET|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version|operation/download?version=@pulsar:version_number@}
-
-
-
-Download a package from the package management service synchronously.
-
-```java
-
- void download(String packageName, String path) throws PulsarAdminException;
-
-```
-
-Download a package from the package management service asynchronously.
-
-```java
-
- CompletableFuture<Void> downloadAsync(String packageName, String path);
-
-```
-
-
-
-````
-
-### List all versions of a package
-You can get a list of all versions of a package in the following ways.
-````mdx-code-block
-
-
-```shell
-
-bin/pulsar-admin packages list-versions functions://public/default/example
-
-```
-
-
-
-{@inject: endpoint|GET|/admin/v3/packages/:type/:tenant/:namespace/:packageName|operation/listPackageVersion?version=@pulsar:version_number@}
-
-
-
-List all versions of a package synchronously.
-
-```java
-
- List<String> listPackageVersions(String packageName) throws PulsarAdminException;
-
-```
-
-List all versions of a package asynchronously.
-
-```java
-
- CompletableFuture<List<String>> listPackageVersionsAsync(String packageName);
-
-```
-
-
-
-````
-
-### List all the specified type packages under a namespace
-You can get a list of all the packages with the given type in a namespace in the following ways.
-````mdx-code-block
-
-
-```shell
-
-bin/pulsar-admin packages list --type function public/default
-
-```
-
-
-
-{@inject: endpoint|GET|/admin/v3/packages/:type/:tenant/:namespace|operation/listPackages?version=@pulsar:version_number@}
-
-
-
-List all the packages with the given type in a namespace synchronously.
-
-```java
-
- List<String> listPackages(String type, String namespace) throws PulsarAdminException;
-
-```
-
-List all the packages with the given type in a namespace asynchronously.
-
-```java
-
- CompletableFuture<List<String>> listPackagesAsync(String type, String namespace);
-
-```
-
-
-
-````
-
-### Get the metadata of a package
-You can get the metadata of a package in the following ways.
-
-````mdx-code-block
-
-
-```shell
-
-bin/pulsar-admin packages get-metadata function://public/default/test@v1
-
-```
-
-
-
-{@inject: endpoint|GET|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version/metadata|operation/getMeta?version=@pulsar:version_number@}
-
-
-
-Get the metadata of a package synchronously.
-
-```java
-
- PackageMetadata getMetadata(String packageName) throws PulsarAdminException;
-
-```
-
-Get the metadata of a package asynchronously.
-
-```java
-
- CompletableFuture<PackageMetadata> getMetadataAsync(String packageName);
-
-```
-
-
-
-````
-
-### Update the metadata of a package
-You can update the metadata of a package in the following ways.
-````mdx-code-block
-
-
-```shell
-
-bin/pulsar-admin packages update-metadata function://public/default/example@v0.1 --description update-description
-
-```
-
-
-
-{@inject: endpoint|PUT|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version/metadata|operation/updateMeta?version=@pulsar:version_number@}
-
-
-
-Update the metadata of a package synchronously.
-
-```java
-
- void updateMetadata(String packageName, PackageMetadata metadata) throws PulsarAdminException;
-
-```
-
-Update the metadata of a package asynchronously.
-
-```java
-
- CompletableFuture<Void> updateMetadataAsync(String packageName, PackageMetadata metadata);
-
-```
-
-
-
-````
-
-### Delete a specified package
-You can delete a specified package with its package name in the following ways.
-
-````mdx-code-block
-
-
-The following command example deletes a package of version 0.1.
-
-```shell
-
-bin/pulsar-admin packages delete functions://public/default/example@v0.1
-
-```
-
-
-
-{@inject: endpoint|DELETE|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version|operation/delete?version=@pulsar:version_number@}
-
-
-
-Delete a specified package synchronously.
-
-```java
-
- void delete(String packageName) throws PulsarAdminException;
-
-```
-
-Delete a specified package asynchronously.
-
-```java
-
- CompletableFuture<Void> deleteAsync(String packageName);
-
-```
-
-
-
-````
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-partitioned-topics.md b/site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-partitioned-topics.md
deleted file mode 100644
index 5ce182282e0324..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-partitioned-topics.md
+++ /dev/null
@@ -1,8 +0,0 @@
----
-id: admin-api-partitioned-topics
-title: Managing partitioned topics
-sidebar_label: "Partitioned topics"
-original_id: admin-api-partitioned-topics
----
-
-For details of the content, refer to [manage topics](admin-api-topics.md).
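-
-Tying the package operations above together, the following is a minimal end-to-end sketch with the Java admin client. It is not taken from the original page: the broker address `http://localhost:8080`, the artifact path `/tmp/function.nar`, and the assumption that `PackageMetadata` exposes a builder are all illustrative, so adjust them to your environment.
-
-```java
-
-import org.apache.pulsar.client.admin.PulsarAdmin;
-import org.apache.pulsar.packages.management.core.common.PackageMetadata;
-
-public class PackageLifecycleExample {
-    public static void main(String[] args) throws Exception {
-        try (PulsarAdmin admin = PulsarAdmin.builder()
-                .serviceHttpUrl("http://localhost:8080") // assumed broker address
-                .build()) {
-            String packageName = "function://public/default/example@v0.1";
-
-            // Upload a local artifact under a versioned package name.
-            PackageMetadata metadata = PackageMetadata.builder() // assumed builder
-                    .description("example function package")
-                    .build();
-            admin.packages().upload(metadata, packageName, "/tmp/function.nar");
-
-            // List every version of the package, then download one of them.
-            admin.packages().listPackageVersions(packageName)
-                    .forEach(System.out::println);
-            admin.packages().download(packageName, "/tmp/example-copy.nar");
-        }
-    }
-}
-
-```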
\ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-permissions.md b/site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-permissions.md deleted file mode 100644 index 2496c9be54eb26..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-permissions.md +++ /dev/null @@ -1,184 +0,0 @@ ---- -id: admin-api-permissions -title: Managing permissions -sidebar_label: "Permissions" -original_id: admin-api-permissions ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -> **Important** -> -> This page only shows **some frequently used operations**. -> -> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/) -> -> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. -> -> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/). - -Permissions in Pulsar are managed at the [namespace](reference-terminology.md#namespace) level -(that is, within [tenants](reference-terminology.md#tenant) and [clusters](reference-terminology.md#cluster)). - -## Grant permissions - -You can grant permissions to specific roles for lists of operations such as `produce` and `consume`. - -````mdx-code-block - - - -Use the [`grant-permission`](reference-pulsar-admin.md#grant-permission) subcommand and specify a namespace, actions using the `--actions` flag, and a role using the `--role` flag: - -```shell - -$ pulsar-admin namespaces grant-permission test-tenant/ns1 \ - --actions produce,consume \ - --role admin10 - -``` - -Wildcard authorization can be performed when `authorizationAllowWildcardsMatching` is set to `true` in `broker.conf`. - -e.g. - -```shell - -$ pulsar-admin namespaces grant-permission test-tenant/ns1 \ - --actions produce,consume \ - --role 'my.role.*' - -``` - -Then, roles `my.role.1`, `my.role.2`, `my.role.foo`, `my.role.bar`, etc. can produce and consume. - -```shell - -$ pulsar-admin namespaces grant-permission test-tenant/ns1 \ - --actions produce,consume \ - --role '*.role.my' - -``` - -Then, roles `1.role.my`, `2.role.my`, `foo.role.my`, `bar.role.my`, etc. can produce and consume. - -**Note**: A wildcard matching works at **the beginning or end of the role name only**. - -e.g. - -```shell - -$ pulsar-admin namespaces grant-permission test-tenant/ns1 \ - --actions produce,consume \ - --role 'my.*.role' - -``` - -In this case, only the role `my.*.role` has permissions. -Roles `my.1.role`, `my.2.role`, `my.foo.role`, `my.bar.role`, etc. **cannot** produce and consume. - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/permissions/:role|operation/grantPermissionOnNamespace?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().grantPermissionOnNamespace(namespace, role, getAuthActions(actions)); - -``` - - - - -```` - -## Get permissions - -You can see which permissions have been granted to which roles in a namespace. 
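-
-For instance, with the Java admin client the returned value is a map from role name to the set of granted actions. A small sketch, not part of the original page, assuming an already-built `PulsarAdmin` instance named `admin`:
-
-```java
-
-import java.util.Map;
-import java.util.Set;
-import org.apache.pulsar.common.policies.data.AuthAction;
-
-// Each entry maps a role to the actions it may perform in the namespace.
-Map<String, Set<AuthAction>> permissions =
-        admin.namespaces().getPermissions("test-tenant/ns1");
-permissions.forEach((role, actions) ->
-        System.out.println(role + " -> " + actions));
-
-```
-
-The tabbed examples below show the equivalent `pulsar-admin`, REST, and Java calls.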
- -````mdx-code-block - - - -Use the [`permissions`](reference-pulsar-admin#permissions) subcommand and specify a namespace: - -```shell - -$ pulsar-admin namespaces permissions test-tenant/ns1 -{ - "admin10": [ - "produce", - "consume" - ] -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/permissions|operation/getPermissions?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getPermissions(namespace); - -``` - - - - -```` - -## Revoke permissions - -You can revoke permissions from specific roles, which means that those roles will no longer have access to the specified namespace. - -````mdx-code-block - - - -Use the [`revoke-permission`](reference-pulsar-admin.md#revoke-permission) subcommand and specify a namespace and a role using the `--role` flag: - -```shell - -$ pulsar-admin namespaces revoke-permission test-tenant/ns1 \ - --role admin10 - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/permissions/:role|operation/revokePermissionsOnNamespace?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().revokePermissionsOnNamespace(namespace, role); - -``` - - - - -```` \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-persistent-topics.md b/site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-persistent-topics.md deleted file mode 100644 index 50d135b72f5424..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-persistent-topics.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -id: admin-api-persistent-topics -title: Managing persistent topics -sidebar_label: "Persistent topics" -original_id: admin-api-persistent-topics ---- - -For details of the content, refer to [manage topics](admin-api-topics.md). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-schemas.md b/site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-schemas.md deleted file mode 100644 index 9ffe21f5b0f750..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-schemas.md +++ /dev/null @@ -1,7 +0,0 @@ ---- -id: admin-api-schemas -title: Managing Schemas -sidebar_label: "Schemas" -original_id: admin-api-schemas ---- - diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-tenants.md b/site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-tenants.md deleted file mode 100644 index 3e13e54a68b2cd..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-tenants.md +++ /dev/null @@ -1,238 +0,0 @@ ---- -id: admin-api-tenants -title: Managing Tenants -sidebar_label: "Tenants" -original_id: admin-api-tenants ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -> **Important** -> -> This page only shows **some frequently used operations**. -> -> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/) -> -> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. -> -> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/). 
-
-Tenants, like namespaces, can be managed using the [admin API](admin-api-overview.md). There are currently two configurable aspects of tenants:
-
-* Admin roles
-* Allowed clusters
-
-## Tenant resources
-
-### List
-
-You can list all of the tenants associated with an [instance](reference-terminology.md#instance).
-
-````mdx-code-block
-
-
-Use the [`list`](reference-pulsar-admin.md#tenants-list) subcommand.
-
-```shell
-
-$ pulsar-admin tenants list
-my-tenant-1
-my-tenant-2
-
-```
-
-
-
-{@inject: endpoint|GET|/admin/v2/tenants|operation/getTenants?version=@pulsar:version_number@}
-
-
-
-```java
-
-admin.tenants().getTenants();
-
-```
-
-
-
-````
-
-### Create
-
-You can create a new tenant.
-
-````mdx-code-block
-
-
-Use the [`create`](reference-pulsar-admin.md#tenants-create) subcommand:
-
-```shell
-
-$ pulsar-admin tenants create my-tenant
-
-```
-
-When creating a tenant, you can assign admin roles using the `-r`/`--admin-roles` flag. You can specify multiple roles as a comma-separated list. Here are some examples:
-
-```shell
-
-$ pulsar-admin tenants create my-tenant \
-  --admin-roles role1,role2,role3
-
-$ pulsar-admin tenants create my-tenant \
-  -r role1
-
-```
-
-
-
-{@inject: endpoint|PUT|/admin/v2/tenants/:tenant|operation/createTenant?version=@pulsar:version_number@}
-
-
-
-```java
-
-admin.tenants().createTenant(tenantName, tenantInfo);
-
-```
-
-
-
-````
-
-### Get configuration
-
-You can fetch the [configuration](reference-configuration.md) for an existing tenant at any time.
-
-````mdx-code-block
-
-
-Use the [`get`](reference-pulsar-admin.md#tenants-get) subcommand and specify the name of the tenant. Here's an example:
-
-```shell
-
-$ pulsar-admin tenants get my-tenant
-{
-  "adminRoles": [
-    "admin1",
-    "admin2"
-  ],
-  "allowedClusters": [
-    "cl1",
-    "cl2"
-  ]
-}
-
-```
-
-
-
-{@inject: endpoint|GET|/admin/v2/tenants/:tenant|operation/getTenant?version=@pulsar:version_number@}
-
-
-
-```java
-
-admin.tenants().getTenantInfo(tenantName);
-
-```
-
-
-
-````
-
-### Delete
-
-Tenants can be deleted from a Pulsar [instance](reference-terminology.md#instance).
-
-````mdx-code-block
-
-
-Use the [`delete`](reference-pulsar-admin.md#tenants-delete) subcommand and specify the name of the tenant.
-
-```shell
-
-$ pulsar-admin tenants delete my-tenant
-
-```
-
-
-
-{@inject: endpoint|DELETE|/admin/v2/tenants/:tenant|operation/deleteTenant?version=@pulsar:version_number@}
-
-
-
-```java
-
-admin.tenants().deleteTenant(tenantName);
-
-```
-
-
-
-````
-
-### Update
-
-You can update a tenant's configuration.
-
-````mdx-code-block
-
-
-Use the [`update`](reference-pulsar-admin.md#tenants-update) subcommand.
-
-```shell
-
-$ pulsar-admin tenants update my-tenant
-
-```
-
-
-
-{@inject: endpoint|POST|/admin/v2/tenants/:tenant|operation/updateTenant?version=@pulsar:version_number@}
-
-
-
-```java
-
-admin.tenants().updateTenant(tenantName, tenantInfo);
-
-```
-
-
-
-````
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-topics.md b/site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-topics.md
deleted file mode 100644
index 5d36b81e5cae79..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/admin-api-topics.md
+++ /dev/null
@@ -1,2334 +0,0 @@
----
-id: admin-api-topics
-title: Manage topics
-sidebar_label: "Topics"
-original_id: admin-api-topics
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-> **Important**
->
-> This page only shows **some frequently used operations**.
->
-> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/)
->
-> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc.
->
-> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/).
-
-Pulsar has persistent and non-persistent topics. A persistent topic is a logical endpoint for publishing and consuming messages. The topic name structure for persistent topics is:
-
-```shell
-
-persistent://tenant/namespace/topic
-
-```
-
-Non-persistent topics are used in applications that only consume real-time published messages and do not need a persistence guarantee. Skipping persistence reduces message-publish latency by removing the overhead of persisting messages. The topic name structure for non-persistent topics is:
-
-```shell
-
-non-persistent://tenant/namespace/topic
-
-```
-
-## Manage topic resources
-Whether a topic is persistent or non-persistent, you can manage its resources through the `pulsar-admin` tool, the REST API, and Java.
-
-:::note
-
-In the REST API, `:schema` stands for persistent or non-persistent. `:tenant`, `:namespace`, and `:x` are variables; replace them with the real tenant, namespace, and `x` names when using them.
-Take {@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getList?version=@pulsar:version_number@} as an example. To get the list of persistent topics in the REST API, use `https://pulsar.apache.org/admin/v2/persistent/my-tenant/my-namespace`. To get the list of non-persistent topics, use `https://pulsar.apache.org/admin/v2/non-persistent/my-tenant/my-namespace`.
-
-:::
-
-### List of topics
-
-You can get the list of topics under a given namespace in the following ways.
-
-````mdx-code-block
-
-
-```shell
-
-$ pulsar-admin topics list \
-  my-tenant/my-namespace
-
-```
-
-
-
-{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getList?version=@pulsar:version_number@}
-
-
-
-```java
-
-String namespace = "my-tenant/my-namespace";
-admin.topics().getList(namespace);
-
-```
-
-
-
-````
-
-### Grant permission
-
-You can grant permissions on a client role to perform specific actions on a given topic in the following ways.
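-
-As a concrete illustration before the tabbed variants, the following standalone Java sketch grants both actions in one call. It is not part of the original page: the broker URL is an assumption, and `EnumSet` from the JDK is used instead of Guava's `Sets`:
-
-```java
-
-import java.util.EnumSet;
-import org.apache.pulsar.client.admin.PulsarAdmin;
-import org.apache.pulsar.common.policies.data.AuthAction;
-
-try (PulsarAdmin admin = PulsarAdmin.builder()
-        .serviceHttpUrl("http://localhost:8080") // assumed broker address
-        .build()) {
-    // Allow the role to both publish to and consume from the topic.
-    admin.topics().grantPermission(
-            "persistent://test-tenant/ns1/tp1",
-            "application1",
-            EnumSet.of(AuthAction.produce, AuthAction.consume));
-}
-
-```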
- -````mdx-code-block - - - -```shell - -$ pulsar-admin topics grant-permission \ - --actions produce,consume --role application1 \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/permissions/:role|operation/grantPermissionsOnTopic?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String role = "test-role"; -Set actions = Sets.newHashSet(AuthAction.produce, AuthAction.consume); -admin.topics().grantPermission(topic, role, actions); - -``` - - - - -```` - -### Get permission - -You can fetch permission in the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics permissions \ - persistent://test-tenant/ns1/tp1 \ - -{ - "application1": [ - "consume", - "produce" - ] -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/permissions|operation/getPermissionsOnTopic?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().getPermissions(topic); - -``` - - - - -```` - -### Revoke permission - -You can revoke a permission granted on a client role in the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics revoke-permission \ - --role application1 \ - persistent://test-tenant/ns1/tp1 \ - -{ - "application1": [ - "consume", - "produce" - ] -} - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/:schema/:tenant/:namespace/:topic/permissions/:role|operation/revokePermissionsOnTopic?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String role = "test-role"; -admin.topics().revokePermissions(topic, role); - -``` - - - - -```` - -### Delete topic - -You can delete a topic in the following ways. You cannot delete a topic if any active subscription or producers is connected to the topic. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics delete \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/:schema/:tenant/:namespace/:topic|operation/deleteTopic?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().delete(topic); - -``` - - - - -```` - -### Unload topic - -You can unload a topic in the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics unload \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/:schema/:tenant/:namespace/:topic/unload|operation/unloadTopic?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().unload(topic); - -``` - - - - -```` - -### Get stats - -You can check the following statistics of a given non-partitioned topic. - - - **msgRateIn**: The sum of all local and replication publishers' publish rates (msg/s). - - - **msgThroughputIn**: The sum of all local and replication publishers' publish rates (bytes/s). - - - **msgRateOut**: The sum of all local and replication consumers' dispatch rates(msg/s). - - - **msgThroughputOut**: The sum of all local and replication consumers' dispatch rates (bytes/s). - - - **averageMsgSize**: The average size (in bytes) of messages published within the last interval. - - - **storageSize**: The sum of the ledgers' storage size for this topic. The space used to store the messages for the topic. 
- - - **bytesInCounter**: Total bytes published to the topic. - - - **msgInCounter**: Total messages published to the topic. - - - **bytesOutCounter**: Total bytes delivered to consumers. - - - **msgOutCounter**: Total messages delivered to consumers. - - - **msgChunkPublished**: Topic has chunked message published on it. - - - **backlogSize**: Estimated total unconsumed or backlog size (in bytes). - - - **offloadedStorageSize**: Space used to store the offloaded messages for the topic (in bytes). - - - **waitingPublishers**: The number of publishers waiting in a queue in exclusive access mode. - - - **deduplicationStatus**: The status of message deduplication for the topic. - - - **topicEpoch**: The topic epoch or empty if not set. - - - **nonContiguousDeletedMessagesRanges**: The number of non-contiguous deleted messages ranges. - - - **nonContiguousDeletedMessagesRangesSerializedSize**: The serialized size of non-contiguous deleted messages ranges. - - - **publishers**: The list of all local publishers into the topic. The list ranges from zero to thousands. - - - **accessMode**: The type of access to the topic that the producer requires. - - - **msgRateIn**: The total rate of messages (msg/s) published by this publisher. - - - **msgThroughputIn**: The total throughput (bytes/s) of the messages published by this publisher. - - - **averageMsgSize**: The average message size in bytes from this publisher within the last interval. - - - **chunkedMessageRate**: The total rate of chunked messages published by this publisher. - - - **producerId**: The internal identifier for this producer on this topic. - - - **producerName**: The internal identifier for this producer, generated by the client library. - - - **address**: The IP address and source port for the connection of this producer. - - - **connectedSince**: The timestamp when this producer is created or reconnected last time. - - - **clientVersion**: The client library version of this producer. - - - **metadata**: Metadata (key/value strings) associated with this publisher. - - - **subscriptions**: The list of all local subscriptions to the topic. - - - **my-subscription**: The name of this subscription. It is defined by the client. - - - **msgRateOut**: The total rate of messages (msg/s) delivered on this subscription. - - - **msgThroughputOut**: The total throughput (bytes/s) delivered on this subscription. - - - **msgBacklog**: The number of messages in the subscription backlog. - - - **type**: The subscription type. - - - **msgRateExpired**: The rate at which messages were discarded instead of dispatched from this subscription due to TTL. - - - **lastExpireTimestamp**: The timestamp of the last message expire execution. - - - **lastConsumedFlowTimestamp**: The timestamp of the last flow command received. - - - **lastConsumedTimestamp**: The latest timestamp of all the consumed timestamp of the consumers. - - - **lastAckedTimestamp**: The latest timestamp of all the acked timestamp of the consumers. - - - **bytesOutCounter**: Total bytes delivered to consumer. - - - **msgOutCounter**: Total messages delivered to consumer. - - - **msgRateRedeliver**: Total rate of messages redelivered on this subscription (msg/s). - - - **chunkedMessageRate**: Chunked message dispatch rate. - - - **backlogSize**: Size of backlog for this subscription (in bytes). - - - **msgBacklogNoDelayed**: Number of messages in the subscription backlog that do not contain the delay messages. 
- - - **blockedSubscriptionOnUnackedMsgs**: Flag to verify if a subscription is blocked due to reaching threshold of unacked messages. - - - **msgDelayed**: Number of delayed messages currently being tracked. - - - **unackedMessages**: Number of unacknowledged messages for the subscription, where an unacknowledged message is one that has been sent to a consumer but not yet acknowledged. This field is only meaningful when using a subscription that tracks individual message acknowledgement. - - - **activeConsumerName**: The name of the consumer that is active for single active consumer subscriptions. For example, failover or exclusive. - - - **totalMsgExpired**: Total messages expired on this subscription. - - - **lastMarkDeleteAdvancedTimestamp**: Last MarkDelete position advanced timestamp. - - - **durable**: Whether the subscription is durable or ephemeral (for example, from a reader). - - - **replicated**: Mark that the subscription state is kept in sync across different regions. - - - **allowOutOfOrderDelivery**: Whether out of order delivery is allowed on the Key_Shared subscription. - - - **keySharedMode**: Whether the Key_Shared subscription mode is AUTO_SPLIT or STICKY. - - - **consumersAfterMarkDeletePosition**: This is for Key_Shared subscription to get the recentJoinedConsumers in the Key_Shared subscription. - - - **nonContiguousDeletedMessagesRanges**: The number of non-contiguous deleted messages ranges. - - - **nonContiguousDeletedMessagesRangesSerializedSize**: The serialized size of non-contiguous deleted messages ranges. - - - **consumers**: The list of connected consumers for this subscription. - - - **msgRateOut**: The total rate of messages (msg/s) delivered to the consumer. - - - **msgThroughputOut**: The total throughput (bytes/s) delivered to the consumer. - - - **consumerName**: The internal identifier for this consumer, generated by the client library. - - - **availablePermits**: The number of messages that the consumer has space for in the client library's listen queue. `0` means the client library's queue is full and `receive()` isn't being called. A non-zero value means this consumer is ready for dispatched messages. - - - **unackedMessages**: The number of unacknowledged messages for the consumer, where an unacknowledged message is one that has been sent to the consumer but not yet acknowledged. This field is only meaningful when using a subscription that tracks individual message acknowledgement. - - - **blockedConsumerOnUnackedMsgs**: The flag used to verify if the consumer is blocked due to reaching threshold of the unacknowledged messages. - - - **lastConsumedTimestamp**: The timestamp when the consumer reads a message the last time. - - - **lastAckedTimestamp**: The timestamp when the consumer acknowledges a message the last time. - - - **address**: The IP address and source port for the connection of this consumer. - - - **connectedSince**: The timestamp when this consumer is created or reconnected last time. - - - **clientVersion**: The client library version of this consumer. - - - **bytesOutCounter**: Total bytes delivered to consumer. - - - **msgOutCounter**: Total messages delivered to consumer. - - - **msgRateRedeliver**: Total rate of messages redelivered by this consumer (msg/s). - - - **chunkedMessageRate**: The total rate of chunked messages delivered to this consumer. - - - **avgMessagesPerEntry**: Number of average messages per entry for the consumer consumed. 
- - - **readPositionWhenJoining**: The read position of the cursor when the consumer joining. - - - **keyHashRanges**: Hash ranges assigned to this consumer if is Key_Shared sub mode. - - - **metadata**: Metadata (key/value strings) associated with this consumer. - - - **replication**: This section gives the stats for cross-colo replication of this topic - - - **msgRateIn**: The total rate (msg/s) of messages received from the remote cluster. - - - **msgThroughputIn**: The total throughput (bytes/s) received from the remote cluster. - - - **msgRateOut**: The total rate of messages (msg/s) delivered to the replication-subscriber. - - - **msgThroughputOut**: The total throughput (bytes/s) delivered to the replication-subscriber. - - - **msgRateExpired**: The total rate of messages (msg/s) expired. - - - **replicationBacklog**: The number of messages pending to be replicated to remote cluster. - - - **connected**: Whether the outbound replicator is connected. - - - **replicationDelayInSeconds**: How long the oldest message has been waiting to be sent through the connection, if connected is `true`. - - - **inboundConnection**: The IP and port of the broker in the remote cluster's publisher connection to this broker. - - - **inboundConnectedSince**: The TCP connection being used to publish messages to the remote cluster. If there are no local publishers connected, this connection is automatically closed after a minute. - - - **outboundConnection**: The address of the outbound replication connection. - - - **outboundConnectedSince**: The timestamp of establishing outbound connection. - -The following is an example of a topic status. - -```json - -{ - "msgRateIn" : 0.0, - "msgThroughputIn" : 0.0, - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "bytesInCounter" : 504, - "msgInCounter" : 9, - "bytesOutCounter" : 2296, - "msgOutCounter" : 41, - "averageMsgSize" : 0.0, - "msgChunkPublished" : false, - "storageSize" : 504, - "backlogSize" : 0, - "offloadedStorageSize" : 0, - "publishers" : [ { - "accessMode" : "Shared", - "msgRateIn" : 0.0, - "msgThroughputIn" : 0.0, - "averageMsgSize" : 0.0, - "chunkedMessageRate" : 0.0, - "producerId" : 0, - "metadata" : { }, - "address" : "/127.0.0.1:65402", - "connectedSince" : "2021-06-09T17:22:55.913+08:00", - "clientVersion" : "2.9.0-SNAPSHOT", - "producerName" : "standalone-1-0" - } ], - "waitingPublishers" : 0, - "subscriptions" : { - "sub-demo" : { - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "bytesOutCounter" : 2296, - "msgOutCounter" : 41, - "msgRateRedeliver" : 0.0, - "chunkedMessageRate" : 0, - "msgBacklog" : 0, - "backlogSize" : 0, - "msgBacklogNoDelayed" : 0, - "blockedSubscriptionOnUnackedMsgs" : false, - "msgDelayed" : 0, - "unackedMessages" : 0, - "type" : "Exclusive", - "activeConsumerName" : "20b81", - "msgRateExpired" : 0.0, - "totalMsgExpired" : 0, - "lastExpireTimestamp" : 0, - "lastConsumedFlowTimestamp" : 1623230565356, - "lastConsumedTimestamp" : 1623230583946, - "lastAckedTimestamp" : 1623230584033, - "lastMarkDeleteAdvancedTimestamp" : 1623230584033, - "consumers" : [ { - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "bytesOutCounter" : 2296, - "msgOutCounter" : 41, - "msgRateRedeliver" : 0.0, - "chunkedMessageRate" : 0.0, - "consumerName" : "20b81", - "availablePermits" : 959, - "unackedMessages" : 0, - "avgMessagesPerEntry" : 314, - "blockedConsumerOnUnackedMsgs" : false, - "lastAckedTimestamp" : 1623230584033, - "lastConsumedTimestamp" : 1623230583946, - "metadata" : { }, - "address" : "/127.0.0.1:65172", - 
"connectedSince" : "2021-06-09T17:22:45.353+08:00", - "clientVersion" : "2.9.0-SNAPSHOT" - } ], - "allowOutOfOrderDelivery": false, - "consumersAfterMarkDeletePosition" : { }, - "nonContiguousDeletedMessagesRanges" : 0, - "nonContiguousDeletedMessagesRangesSerializedSize" : 0, - "durable" : true, - "replicated" : false - } - }, - "replication" : { }, - "deduplicationStatus" : "Disabled", - "nonContiguousDeletedMessagesRanges" : 0, - "nonContiguousDeletedMessagesRangesSerializedSize" : 0 -} - -``` - -To get the status of a topic, you can use the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics stats \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/stats|operation/getStats?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().getStats(topic); - -``` - - - - -```` - -### Get internal stats - -You can get the detailed statistics of a topic. - - - **entriesAddedCounter**: Messages published since this broker loaded this topic. - - - **numberOfEntries**: The total number of messages being tracked. - - - **totalSize**: The total storage size in bytes of all messages. - - - **currentLedgerEntries**: The count of messages written to the ledger that is currently open for writing. - - - **currentLedgerSize**: The size in bytes of messages written to the ledger that is currently open for writing. - - - **lastLedgerCreatedTimestamp**: The time when the last ledger is created. - - - **lastLedgerCreationFailureTimestamp:** The time when the last ledger failed. - - - **waitingCursorsCount**: The number of cursors that are "caught up" and waiting for a new message to be published. - - - **pendingAddEntriesCount**: The number of messages that complete (asynchronous) write requests. - - - **lastConfirmedEntry**: The ledgerid:entryid of the last message that is written successfully. If the entryid is `-1`, then the ledger is open, yet no entries are written. - - - **state**: The state of this ledger for writing. The state `LedgerOpened` means that a ledger is open for saving published messages. - - - **ledgers**: The ordered list of all ledgers for this topic holding messages. - - - **ledgerId**: The ID of this ledger. - - - **entries**: The total number of entries that belong to this ledger. - - - **size**: The size of messages written to this ledger (in bytes). - - - **offloaded**: Whether this ledger is offloaded. - - - **metadata**: The ledger metadata. - - - **schemaLedgers**: The ordered list of all ledgers for this topic schema. - - - **ledgerId**: The ID of this ledger. - - - **entries**: The total number of entries that belong to this ledger. - - - **size**: The size of messages written to this ledger (in bytes). - - - **offloaded**: Whether this ledger is offloaded. - - - **metadata**: The ledger metadata. - - - **compactedLedger**: The ledgers holding un-acked messages after topic compaction. - - - **ledgerId**: The ID of this ledger. - - - **entries**: The total number of entries that belong to this ledger. - - - **size**: The size of messages written to this ledger (in bytes). - - - **offloaded**: Whether this ledger is offloaded. The value is `false` for the compacted topic ledger. - - - **cursors**: The list of all cursors on this topic. Each subscription in the topic stats has a cursor. - - - **markDeletePosition**: All messages before the markDeletePosition are acknowledged by the subscriber. 
- - - **readPosition**: The latest position of subscriber for reading message. - - - **waitingReadOp**: This is true when the subscription has read the latest message published to the topic and is waiting for new messages to be published. - - - **pendingReadOps**: The counter for how many outstanding read requests to the BookKeepers in progress. - - - **messagesConsumedCounter**: The number of messages this cursor has acked since this broker loaded this topic. - - - **cursorLedger**: The ledger being used to persistently store the current markDeletePosition. - - - **cursorLedgerLastEntry**: The last entryid used to persistently store the current markDeletePosition. - - - **individuallyDeletedMessages**: If acknowledges are being done out of order, the ranges of messages acknowledged between the markDeletePosition and the read-position shows. - - - **lastLedgerSwitchTimestamp**: The last time the cursor ledger is rolled over. - - - **state**: The state of the cursor ledger: `Open` means you have a cursor ledger for saving updates of the markDeletePosition. - -The following is an example of the detailed statistics of a topic. - -```json - -{ - "entriesAddedCounter":0, - "numberOfEntries":0, - "totalSize":0, - "currentLedgerEntries":0, - "currentLedgerSize":0, - "lastLedgerCreatedTimestamp":"2021-01-22T21:12:14.868+08:00", - "lastLedgerCreationFailureTimestamp":null, - "waitingCursorsCount":0, - "pendingAddEntriesCount":0, - "lastConfirmedEntry":"3:-1", - "state":"LedgerOpened", - "ledgers":[ - { - "ledgerId":3, - "entries":0, - "size":0, - "offloaded":false, - "metadata":null - } - ], - "cursors":{ - "test":{ - "markDeletePosition":"3:-1", - "readPosition":"3:-1", - "waitingReadOp":false, - "pendingReadOps":0, - "messagesConsumedCounter":0, - "cursorLedger":4, - "cursorLedgerLastEntry":1, - "individuallyDeletedMessages":"[]", - "lastLedgerSwitchTimestamp":"2021-01-22T21:12:14.966+08:00", - "state":"Open", - "numberOfEntriesSinceFirstNotAckedMessage":0, - "totalNonContiguousDeletedMessagesRange":0, - "properties":{ - - } - } - }, - "schemaLedgers":[ - { - "ledgerId":1, - "entries":11, - "size":10, - "offloaded":false, - "metadata":null - } - ], - "compactedLedger":{ - "ledgerId":-1, - "entries":-1, - "size":-1, - "offloaded":false, - "metadata":null - } -} - -``` - -To get the internal status of a topic, you can use the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics stats-internal \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/internalStats|operation/getInternalStats?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().getInternalStats(topic); - -``` - - - - -```` - -### Peek messages - -You can peek a number of messages for a specific subscription of a given topic in the following ways. 
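-
-For example, with the Java admin client you can decode the peeked payloads directly. A small sketch, not part of the original page, assuming an existing `PulsarAdmin` named `admin` and UTF-8 string payloads:
-
-```java
-
-import java.nio.charset.StandardCharsets;
-import java.util.List;
-import org.apache.pulsar.client.api.Message;
-
-// Peek the first ten messages of the subscription without consuming them.
-List<Message<byte[]>> messages = admin.topics().peekMessages(
-        "persistent://test-tenant/ns1/tp1", "my-subscription", 10);
-for (Message<byte[]> msg : messages) {
-    System.out.println(msg.getMessageId() + ": "
-            + new String(msg.getData(), StandardCharsets.UTF_8));
-}
-
-```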
-````mdx-code-block - - - -```shell - -$ pulsar-admin topics peek-messages \ - --count 10 --subscription my-subscription \ - persistent://test-tenant/ns1/tp1 \ - -Message ID: 315674752:0 -Properties: { "X-Pulsar-publish-time" : "2015-07-13 17:40:28.451" } -msg-payload - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/position/:messagePosition|operation/peekNthMessage?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String subName = "my-subscription"; -int numMessages = 1; -admin.topics().peekMessages(topic, subName, numMessages); - -``` - - - - -```` - -### Get message by ID - -You can fetch the message with the given ledger ID and entry ID in the following ways. - -````mdx-code-block - - - -```shell - -$ ./bin/pulsar-admin topics get-message-by-id \ - persistent://public/default/my-topic \ - -l 10 -e 0 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/ledger/:ledgerId/entry/:entryId|operation/getMessageById?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -long ledgerId = 10; -long entryId = 10; -admin.topics().getMessageById(topic, ledgerId, entryId); - -``` - - - - -```` - -### Examine messages - -You can examine a specific message on a topic by position relative to the earliest or the latest message. - -````mdx-code-block - - - -```shell - -./bin/pulsar-admin topics examine-messages \ - persistent://public/default/my-topic \ - -i latest -m 1 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/examinemessage?initialPosition=:initialPosition&messagePosition=:messagePosition|operation/examineMessage?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().examineMessage(topic, "latest", 1); - -``` - - - - -```` - -### Get message ID - -You can get message ID published at or just after the given datetime. - -````mdx-code-block - - - -```shell - -./bin/pulsar-admin topics get-message-id \ - persistent://public/default/my-topic \ - -d 2021-06-28T19:01:17Z - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/messageid/:timestamp|operation/getMessageIdByTimestamp?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -long timestamp = System.currentTimeMillis() -admin.topics().getMessageIdByTimestamp(topic, timestamp); - -``` - - - - -```` - - -### Skip messages - -You can skip a number of messages for a specific subscription of a given topic in the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics skip \ - --count 10 --subscription my-subscription \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/skip/:numMessages|operation/skipMessages?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String subName = "my-subscription"; -int numMessages = 1; -admin.topics().skipMessages(topic, subName, numMessages); - -``` - - - - -```` - -### Skip all messages - -You can skip all the old messages for a specific subscription of a given topic. 
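-
-A common pattern is to check the backlog first and clear it only when something is actually pending. A hedged Java sketch, not part of the original page, assuming an existing `PulsarAdmin` named `admin` and the getter-style `TopicStats` of recent client versions:
-
-```java
-
-import org.apache.pulsar.common.policies.data.SubscriptionStats;
-
-String topic = "persistent://test-tenant/ns1/tp1";
-String subscription = "my-subscription";
-
-// Inspect the subscription backlog, then clear it if anything is pending.
-SubscriptionStats subStats =
-        admin.topics().getStats(topic).getSubscriptions().get(subscription);
-if (subStats != null && subStats.getMsgBacklog() > 0) {
-    admin.topics().skipAllMessages(topic, subscription);
-}
-
-```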
-
-````mdx-code-block
-
-
-```shell
-
-$ pulsar-admin topics skip-all \
-  --subscription my-subscription \
-  persistent://test-tenant/ns1/tp1
-
-```
-
-
-
-{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/skip_all|operation/skipAllMessages?version=@pulsar:version_number@}
-
-
-
-```java
-
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-String subName = "my-subscription";
-admin.topics().skipAllMessages(topic, subName);
-
-```
-
-
-
-````
-
-### Reset cursor
-
-You can reset a subscription cursor back to the position it held X minutes ago. Pulsar calculates the time and cursor position X minutes before and resets the cursor to that position. You can reset the cursor in the following ways.
-
-````mdx-code-block
-
-
-```shell
-
-$ pulsar-admin topics reset-cursor \
-  --subscription my-subscription --time 10 \
-  persistent://test-tenant/ns1/tp1
-
-```
-
-
-
-{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/resetcursor/:timestamp|operation/resetCursor?version=@pulsar:version_number@}
-
-
-
-```java
-
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-String subName = "my-subscription";
-long timestamp = 2342343L;
-admin.topics().resetCursor(topic, subName, timestamp);
-
-```
-
-
-
-````
-
-### Look up topic's owner broker
-
-You can locate the owner broker of the given topic in the following ways.
-
-````mdx-code-block
-
-
-```shell
-
-$ pulsar-admin topics lookup \
-  persistent://test-tenant/ns1/tp1
-
- "pulsar://broker1.org.com:4480"
-
-```
-
-
-
-{@inject: endpoint|GET|/lookup/v2/topic/:schema/:tenant/:namespace/:topic|/?version=@pulsar:version_number@}
-
-
-
-```java
-
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-admin.lookup().lookupTopic(topic);
-
-```
-
-
-
-````
-
-### Get bundle
-
-You can get the range of the bundle that the given topic belongs to in the following ways.
-
-````mdx-code-block
-
-
-```shell
-
-$ pulsar-admin topics bundle-range \
-  persistent://test-tenant/ns1/tp1
-
- "0x00000000_0xffffffff"
-
-```
-
-
-
-{@inject: endpoint|GET|/lookup/v2/topic/:topic_domain/:tenant/:namespace/:topic/bundle|/?version=@pulsar:version_number@}
-
-
-
-```java
-
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-admin.lookup().getBundleRange(topic);
-
-```
-
-
-
-````
-
-### Get subscriptions
-
-You can check all subscription names for a given topic in the following ways.
-
-````mdx-code-block
-
-
-```shell
-
-$ pulsar-admin topics subscriptions \
-  persistent://test-tenant/ns1/tp1
-
- my-subscription
-
-```
-
-
-
-{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/subscriptions|operation/getSubscriptions?version=@pulsar:version_number@}
-
-
-
-```java
-
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-admin.topics().getSubscriptions(topic);
-
-```
-
-
-
-````
-
-### Unsubscribe
-
-When a subscription no longer processes messages, you can unsubscribe it in the following ways.
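-
-For instance, you can guard the deletion by checking that the subscription still exists. A minimal Java sketch, not part of the original page, assuming an existing `PulsarAdmin` named `admin`; note that deletion fails while consumers are still connected:
-
-```java
-
-String topic = "persistent://test-tenant/ns1/tp1";
-
-// Delete the subscription only if it is still present on the topic.
-if (admin.topics().getSubscriptions(topic).contains("my-subscription")) {
-    admin.topics().deleteSubscription(topic, "my-subscription");
-}
-
-```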
- -````mdx-code-block - - - -```shell - -$ pulsar-admin topics unsubscribe \ - --subscription my-subscription \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/:topic/subscription/:subscription|operation/deleteSubscription?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String subscriptionName = "my-subscription"; -admin.topics().deleteSubscription(topic, subscriptionName); - -``` - - - - -```` - -### Last Message Id - -You can get the last committed message ID for a persistent topic. It is available since 2.3.0 release. - -````mdx-code-block - - - -```shell - -pulsar-admin topics last-message-id topic-name - -``` - - - - -{@inject: endpoint|Get|/admin/v2/:schema/:tenant/:namespace/:topic/lastMessageId|operation/getLastMessageId?version=@pulsar:version_number@} - - - - -```Java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().getLastMessage(topic); - -``` - - - - -```` - - - -### Configure deduplication snapshot interval - -#### Get deduplication snapshot interval - -To get the topic-level deduplication snapshot interval, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics get-deduplication-snapshot-interval options - -``` - - - - -{@inject: endpoint|GET|/admin/v2/topics/:tenant/:namespace/:topic/deduplicationSnapshotInterval|operation/getDeduplicationSnapshotInterval?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getDeduplicationSnapshotInterval(topic) - -``` - - - - -```` - -#### Set deduplication snapshot interval - -To set the topic-level deduplication snapshot interval, use one of the following methods. - -> **Prerequisite** `brokerDeduplicationEnabled` must be set to `true`. - -````mdx-code-block - - - -``` - -pulsar-admin topics set-deduplication-snapshot-interval options - -``` - - - - -{@inject: endpoint|POST|/admin/v2/topics/:tenant/:namespace/:topic/deduplicationSnapshotInterval|operation/setDeduplicationSnapshotInterval?version=@pulsar:version_number@} - - - - -```java - -admin.topics().setDeduplicationSnapshotInterval(topic, 1000) - -``` - - - - -```` - -#### Remove deduplication snapshot interval - -To remove the topic-level deduplication snapshot interval, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics remove-deduplication-snapshot-interval options - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/topics/:tenant/:namespace/:topic/deduplicationSnapshotInterval|operation/deleteDeduplicationSnapshotInterval?version=@pulsar:version_number@} - - - - -```java - -admin.topics().removeDeduplicationSnapshotInterval(topic) - -``` - - - - -```` - - -### Configure inactive topic policies - -#### Get inactive topic policies - -To get the topic-level inactive topic policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics get-inactive-topic-policies options - -``` - - - - -{@inject: endpoint|GET|/admin/v2/topics/:tenant/:namespace/:topic/inactiveTopicPolicies|operation/getInactiveTopicPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getInactiveTopicPolicies(topic) - -``` - - - - -```` - -#### Set inactive topic policies - -To set the topic-level inactive topic policies, use one of the following methods. 
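-
-For reference, the `inactiveTopicPolicies` object passed to the Java call can be built as in this sketch. This is not part of the original page; it assumes the `InactiveTopicPolicies` class in `org.apache.pulsar.common.policies.data` with an all-args constructor taking the delete mode, the maximum inactive duration, and an enable flag, so verify the signature against your client version:
-
-```java
-
-import org.apache.pulsar.common.policies.data.InactiveTopicDeleteMode;
-import org.apache.pulsar.common.policies.data.InactiveTopicPolicies;
-
-// Delete the topic once it has been inactive for 10 minutes and has no subscriptions.
-InactiveTopicPolicies inactiveTopicPolicies = new InactiveTopicPolicies(
-        InactiveTopicDeleteMode.delete_when_no_subscriptions,
-        600,   // maxInactiveDurationSeconds (assumed parameter order)
-        true); // deleteWhileInactive
-admin.topics().setInactiveTopicPolicies(
-        "persistent://test-tenant/ns1/tp1", inactiveTopicPolicies);
-
-```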
- -````mdx-code-block - - - -``` - -pulsar-admin topics set-inactive-topic-policies options - -``` - - - - -{@inject: endpoint|POST|/admin/v2/topics/:tenant/:namespace/:topic/inactiveTopicPolicies|operation/setInactiveTopicPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().setInactiveTopicPolicies(topic, inactiveTopicPolicies) - -``` - - - - -```` - -#### Remove inactive topic policies - -To remove the topic-level inactive topic policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics remove-inactive-topic-policies options - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/topics/:tenant/:namespace/:topic/inactiveTopicPolicies|operation/removeInactiveTopicPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().removeInactiveTopicPolicies(topic) - -``` - - - - -```` - - -### Configure offload policies - -#### Get offload policies - -To get the topic-level offload policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics get-offload-policies options - -``` - - - - -{@inject: endpoint|GET|/admin/v2/topics/:tenant/:namespace/:topic/offloadPolicies|operation/getOffloadPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getOffloadPolicies(topic) - -``` - - - - -```` - -#### Set offload policies - -To set the topic-level offload policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics set-offload-policies options - -``` - - - - -{@inject: endpoint|POST|/admin/v2/topics/:tenant/:namespace/:topic/offloadPolicies|operation/setOffloadPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().setOffloadPolicies(topic, offloadPolicies) - -``` - - - - -```` - -#### Remove offload policies - -To remove the topic-level offload policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics remove-offload-policies options - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/topics/:tenant/:namespace/:topic/offloadPolicies|operation/removeOffloadPolicies?version=@pulsar:version_number@} - - - - -```java - -admin.topics().removeOffloadPolicies(topic) - -``` - - - - -```` - -## Manage non-partitioned topics -You can use Pulsar [admin API](admin-api-overview.md) to create, delete and check status of non-partitioned topics. - -### Create -Non-partitioned topics must be explicitly created. When creating a new non-partitioned topic, you need to provide a name for the topic. - -By default, 60 seconds after creation, topics are considered inactive and deleted automatically to avoid generating trash data. To disable this feature, set `brokerDeleteInactiveTopicsEnabled` to `false`. To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to a specific value. - -For more information about the two parameters, see [here](reference-configuration.md#broker). - -You can create non-partitioned topics in the following ways. -````mdx-code-block - - - -When you create non-partitioned topics with the [`create`](reference-pulsar-admin.md#create-3) command, you need to specify the topic name as an argument. 
- -```shell - -$ bin/pulsar-admin topics create \ - persistent://my-tenant/my-namespace/my-topic - -``` - -:::note - -When you create a non-partitioned topic with the suffix '-partition-' followed by numeric value like 'xyz-topic-partition-x' for the topic name, if a partitioned topic with same suffix 'xyz-topic-partition-y' exists, then the numeric value(x) for the non-partitioned topic must be larger than the number of partitions(y) of the partitioned topic. Otherwise, you cannot create such a non-partitioned topic. - -::: - - - - -{@inject: endpoint|PUT|/admin/v2/:schema/:tenant/:namespace/:topic|operation/createNonPartitionedTopic?version=@pulsar:version_number@} - - - - -```java - -String topicName = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().createNonPartitionedTopic(topicName); - -``` - - - - -```` - -### Delete -You can delete non-partitioned topics in the following ways. -````mdx-code-block - - - -```shell - -$ bin/pulsar-admin topics delete \ - persistent://my-tenant/my-namespace/my-topic - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/:schema/:tenant/:namespace/:topic|operation/deleteTopic?version=@pulsar:version_number@} - - - - -```java - -admin.topics().delete(topic); - -``` - - - - -```` - -### List - -You can get the list of topics under a given namespace in the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics list tenant/namespace -persistent://tenant/namespace/topic1 -persistent://tenant/namespace/topic2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getList?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getList(namespace); - -``` - - - - -```` - -### Stats - -You can check the current statistics of a given topic. The following is an example. For description of each stats, refer to [get stats](#get-stats). - -```json - -{ - "msgRateIn": 4641.528542257553, - "msgThroughputIn": 44663039.74947473, - "msgRateOut": 0, - "msgThroughputOut": 0, - "averageMsgSize": 1232439.816728665, - "storageSize": 135532389160, - "publishers": [ - { - "msgRateIn": 57.855383881403576, - "msgThroughputIn": 558994.7078932219, - "averageMsgSize": 613135, - "producerId": 0, - "producerName": null, - "address": null, - "connectedSince": null - } - ], - "subscriptions": { - "my-topic_subscription": { - "msgRateOut": 0, - "msgThroughputOut": 0, - "msgBacklog": 116632, - "type": null, - "msgRateExpired": 36.98245516804671, - "consumers": [] - } - }, - "replication": {} -} - -``` - -You can check the current statistics of a given topic and its connected producers and consumers in the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics stats \ - persistent://test-tenant/namespace/topic \ - --get-precise-backlog - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/stats|operation/getStats?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getStats(topic, false /* is precise backlog */); - -``` - - - - -```` - -## Manage partitioned topics -You can use Pulsar [admin API](admin-api-overview.md) to create, update, delete and check status of partitioned topics. - -### Create - -Partitioned topics must be explicitly created. When creating a new partitioned topic, you need to provide a name and the number of partitions for the topic. - -By default, 60 seconds after creation, topics are considered inactive and deleted automatically to avoid generating trash data. 
To disable this feature, set `brokerDeleteInactiveTopicsEnabled` to `false`. To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to a specific value. - -For more information about the two parameters, see [here](reference-configuration.md#broker). - -You can create partitioned topics in the following ways. -````mdx-code-block - - - -When you create partitioned topics with the [`create-partitioned-topic`](reference-pulsar-admin.md#create-partitioned-topic) -command, you need to specify the topic name as an argument and the number of partitions using the `-p` or `--partitions` flag. - -```shell - -$ bin/pulsar-admin topics create-partitioned-topic \ - persistent://my-tenant/my-namespace/my-topic \ - --partitions 4 - -``` - -:::note - -If a non-partitioned topic with the suffix '-partition-' followed by a numeric value like 'xyz-topic-partition-10', you can not create a partitioned topic with name 'xyz-topic', because the partitions of the partitioned topic could override the existing non-partitioned topic. To create such partitioned topic, you have to delete that non-partitioned topic first. - -::: - - - - -{@inject: endpoint|PUT|/admin/v2/:schema/:tenant/:namespace/:topic/partitions|operation/createPartitionedTopic?version=@pulsar:version_number@} - - - - -```java - -String topicName = "persistent://my-tenant/my-namespace/my-topic"; -int numPartitions = 4; -admin.topics().createPartitionedTopic(topicName, numPartitions); - -``` - - - - -```` - -### Create missed partitions - -When topic auto-creation is disabled, and you have a partitioned topic without any partitions, you can use the [`create-missed-partitions`](reference-pulsar-admin.md#create-missed-partitions) command to create partitions for the topic. - -````mdx-code-block - - - -You can create missed partitions with the [`create-missed-partitions`](reference-pulsar-admin.md#create-missed-partitions) command and specify the topic name as an argument. - -```shell - -$ bin/pulsar-admin topics create-missed-partitions \ - persistent://my-tenant/my-namespace/my-topic \ - -``` - - - - -{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic|operation/createMissedPartitions?version=@pulsar:version_number@} - - - - -```java - -String topicName = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().createMissedPartitions(topicName); - -``` - - - - -```` - -### Get metadata - -Partitioned topics are associated with metadata, you can view it as a JSON object. The following metadata field is available. - -Field | Description -:-----|:------- -`partitions` | The number of partitions into which the topic is divided. - -````mdx-code-block - - - -You can check the number of partitions in a partitioned topic with the [`get-partitioned-topic-metadata`](reference-pulsar-admin.md#get-partitioned-topic-metadata) subcommand. - -```shell - -$ pulsar-admin topics get-partitioned-topic-metadata \ - persistent://my-tenant/my-namespace/my-topic -{ - "partitions": 4 -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/partitions|operation/getPartitionedMetadata?version=@pulsar:version_number@} - - - - -```java - -String topicName = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().getPartitionedTopicMetadata(topicName); - -``` - - - - -```` - -### Update - -You can update the number of partitions for an existing partitioned topic *if* the topic is non-global. However, you can only add the partition number. 
Decrementing the number of partitions would delete the topic, which is not supported in Pulsar. - -Producers and consumers can find the newly created partitions automatically. - -````mdx-code-block - - - -You can update partitioned topics with the [`update-partitioned-topic`](reference-pulsar-admin.md#update-partitioned-topic) command. - -```shell - -$ pulsar-admin topics update-partitioned-topic \ - persistent://my-tenant/my-namespace/my-topic \ - --partitions 8 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:cluster/:namespace/:destination/partitions|operation/updatePartitionedTopic?version=@pulsar:version_number@} - - - - -```java - -admin.topics().updatePartitionedTopic(topic, numPartitions); - -``` - - - - -```` - -### Delete -You can delete partitioned topics with the [`delete-partitioned-topic`](reference-pulsar-admin.md#delete-partitioned-topic) command, REST API and Java. - -````mdx-code-block - - - -```shell - -$ bin/pulsar-admin topics delete-partitioned-topic \ - persistent://my-tenant/my-namespace/my-topic - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/:schema/:topic/:namespace/:destination/partitions|operation/deletePartitionedTopic?version=@pulsar:version_number@} - - - - -```java - -admin.topics().delete(topic); - -``` - - - - -```` - -### List -You can get the list of topics under a given namespace in the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics list tenant/namespace -persistent://tenant/namespace/topic1 -persistent://tenant/namespace/topic2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getPartitionedTopicList?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getList(namespace); - -``` - - - - -```` - -### Stats - -You can check the current statistics of a given partitioned topic. The following is an example. For description of each stats, refer to [get stats](#get-stats). - -Note that in the subscription JSON object, `chuckedMessageRate` is deprecated. Please use `chunkedMessageRate`. Both will be sent in the JSON for now. - -```json - -{ - "msgRateIn" : 999.992947159793, - "msgThroughputIn" : 1070918.4635439808, - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "bytesInCounter" : 270318763, - "msgInCounter" : 252489, - "bytesOutCounter" : 0, - "msgOutCounter" : 0, - "averageMsgSize" : 1070.926056966454, - "msgChunkPublished" : false, - "storageSize" : 270316646, - "backlogSize" : 200921133, - "publishers" : [ { - "msgRateIn" : 999.992947159793, - "msgThroughputIn" : 1070918.4635439808, - "averageMsgSize" : 1070.3333333333333, - "chunkedMessageRate" : 0.0, - "producerId" : 0 - } ], - "subscriptions" : { - "test" : { - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "bytesOutCounter" : 0, - "msgOutCounter" : 0, - "msgRateRedeliver" : 0.0, - "chuckedMessageRate" : 0, - "chunkedMessageRate" : 0, - "msgBacklog" : 144318, - "msgBacklogNoDelayed" : 144318, - "blockedSubscriptionOnUnackedMsgs" : false, - "msgDelayed" : 0, - "unackedMessages" : 0, - "msgRateExpired" : 0.0, - "lastExpireTimestamp" : 0, - "lastConsumedFlowTimestamp" : 0, - "lastConsumedTimestamp" : 0, - "lastAckedTimestamp" : 0, - "consumers" : [ ], - "isDurable" : true, - "isReplicated" : false - } - }, - "replication" : { }, - "metadata" : { - "partitions" : 3 - }, - "partitions" : { } -} - -``` - -You can check the current statistics of a given partitioned topic and its connected producers and consumers in the following ways. 
- -````mdx-code-block - - - -```shell - -$ pulsar-admin topics partitioned-stats \ - persistent://test-tenant/namespace/topic \ - --per-partition - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/partitioned-stats|operation/getPartitionedStats?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getPartitionedStats(topic, true /* per partition */, false /* is precise backlog */); - -``` - - - - -```` - -### Internal stats - -You can check the detailed statistics of a topic. The following is an example. For description of each stats, refer to [get internal stats](#get-internal-stats). - -```json - -{ - "entriesAddedCounter": 20449518, - "numberOfEntries": 3233, - "totalSize": 331482, - "currentLedgerEntries": 3233, - "currentLedgerSize": 331482, - "lastLedgerCreatedTimestamp": "2016-06-29 03:00:23.825", - "lastLedgerCreationFailureTimestamp": null, - "waitingCursorsCount": 1, - "pendingAddEntriesCount": 0, - "lastConfirmedEntry": "324711539:3232", - "state": "LedgerOpened", - "ledgers": [ - { - "ledgerId": 324711539, - "entries": 0, - "size": 0 - } - ], - "cursors": { - "my-subscription": { - "markDeletePosition": "324711539:3133", - "readPosition": "324711539:3233", - "waitingReadOp": true, - "pendingReadOps": 0, - "messagesConsumedCounter": 20449501, - "cursorLedger": 324702104, - "cursorLedgerLastEntry": 21, - "individuallyDeletedMessages": "[(324711539:3134‥324711539:3136], (324711539:3137‥324711539:3140], ]", - "lastLedgerSwitchTimestamp": "2016-06-29 01:30:19.313", - "state": "Open" - } - } -} - -``` - -You can get the internal stats for the partitioned topic in the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics stats-internal \ - persistent://test-tenant/namespace/topic - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/internalStats|operation/getInternalStats?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getInternalStats(topic); - -``` - - - - -```` - -### Get backlog size - -You can get backlog size of a single topic partition or a nonpartitioned topic given a message ID (in bytes). - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics get-backlog-size \ - -m 1:1 \ - persistent://test-tenant/ns1/tp1-partition-0 \ - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/:schema/:tenant/:namespace/:topic/backlogSize|operation/getBacklogSizeByMessageId?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -MessageId messageId = MessageId.earliest; -admin.topics().getBacklogSizeByMessageId(topic, messageId); - -``` - - - - -```` - -## Publish to partitioned topics - -By default, Pulsar topics are served by a single broker, which limits the maximum throughput of a topic. *Partitioned topics* can span multiple brokers and thus allow for higher throughput. - -You can publish to partitioned topics using Pulsar client libraries. When publishing to partitioned topics, you must specify a routing mode. If you do not specify any routing mode when you create a new producer, the round robin routing mode is used. - -### Routing mode - -You can specify the routing mode in the ProducerConfiguration object that you use to configure your producer. The routing mode determines which partition(internal topic) that each message should be published to. - -The following {@inject: javadoc:MessageRoutingMode:/client/org/apache/pulsar/client/api/MessageRoutingMode} options are available. 
- -Mode | Description -:--------|:------------ -`RoundRobinPartition` | If no key is provided, the producer publishes messages across all partitions in round-robin policy to achieve the maximum throughput. Round-robin is not done per individual message, round-robin is set to the same boundary of batching delay to ensure that batching is effective. If a key is specified on the message, the partitioned producer hashes the key and assigns message to a particular partition. This is the default mode. -`SinglePartition` | If no key is provided, the producer picks a single partition randomly and publishes all messages into that partition. If a key is specified on the message, the partitioned producer hashes the key and assigns message to a particular partition. -`CustomPartition` | Use custom message router implementation that is called to determine the partition for a particular message. You can create a custom routing mode by using the Java client and implementing the {@inject: javadoc:MessageRouter:/client/org/apache/pulsar/client/api/MessageRouter} interface. - -The following is an example: - -```java - -String pulsarBrokerRootUrl = "pulsar://localhost:6650"; -String topic = "persistent://my-tenant/my-namespace/my-topic"; - -PulsarClient pulsarClient = PulsarClient.builder().serviceUrl(pulsarBrokerRootUrl).build(); -Producer producer = pulsarClient.newProducer() - .topic(topic) - .messageRoutingMode(MessageRoutingMode.SinglePartition) - .create(); -producer.send("Partitioned topic message".getBytes()); - -``` - -### Custom message router - -To use a custom message router, you need to provide an implementation of the {@inject: javadoc:MessageRouter:/client/org/apache/pulsar/client/api/MessageRouter} interface, which has just one `choosePartition` method: - -```java - -public interface MessageRouter extends Serializable { - int choosePartition(Message msg); -} - -``` - -The following router routes every message to partition 10: - -```java - -public class AlwaysTenRouter implements MessageRouter { - public int choosePartition(Message msg) { - return 10; - } -} - -``` - -With that implementation, you can send - -```java - -String pulsarBrokerRootUrl = "pulsar://localhost:6650"; -String topic = "persistent://my-tenant/my-cluster-my-namespace/my-topic"; - -PulsarClient pulsarClient = PulsarClient.builder().serviceUrl(pulsarBrokerRootUrl).build(); -Producer producer = pulsarClient.newProducer() - .topic(topic) - .messageRouter(new AlwaysTenRouter()) - .create(); -producer.send("Partitioned topic message".getBytes()); - -``` - -### How to choose partitions when using a key -If a message has a key, it supersedes the round robin routing policy. The following example illustrates how to choose the partition when using a key. - -```java - -// If the message has a key, it supersedes the round robin routing policy - if (msg.hasKey()) { - return signSafeMod(hash.makeHash(msg.getKey()), topicMetadata.numPartitions()); - } - - if (isBatchingEnabled) { // if batching is enabled, choose partition on `partitionSwitchMs` boundary. 
- long currentMs = clock.millis(); - return signSafeMod(currentMs / partitionSwitchMs + startPtnIdx, topicMetadata.numPartitions()); - } else { - return signSafeMod(PARTITION_INDEX_UPDATER.getAndIncrement(this), topicMetadata.numPartitions()); - } - -``` - diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/administration-dashboard.md b/site2/website/versioned_docs/version-2.9.2-deprecated/administration-dashboard.md deleted file mode 100644 index 92bd7e17869d7b..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/administration-dashboard.md +++ /dev/null @@ -1,76 +0,0 @@ ---- -id: administration-dashboard -title: Pulsar dashboard -sidebar_label: "Dashboard" -original_id: administration-dashboard ---- - -:::note - -Pulsar dashboard is deprecated. We recommend you use [Pulsar Manager](administration-pulsar-manager.md) to manage and monitor the stats of your topics. - -::: - -Pulsar dashboard is a web application that enables users to monitor current stats for all [topics](reference-terminology.md#topic) in tabular form. - -The dashboard is a data collector that polls stats from all the brokers in a Pulsar instance (across multiple clusters) and stores all the information in a [PostgreSQL](https://www.postgresql.org/) database. - -You can use the [Django](https://www.djangoproject.com) web app to render the collected data. - -## Install - -The easiest way to use the dashboard is to run it inside a [Docker](https://www.docker.com/products/docker) container. - -```shell - -$ SERVICE_URL=http://broker.example.com:8080/ -$ docker run -p 80:80 \ - -e SERVICE_URL=$SERVICE_URL \ - apachepulsar/pulsar-dashboard:@pulsar:version@ - -``` - -You can find the {@inject: github:Dockerfile:/dashboard/Dockerfile} in the `dashboard` directory and build an image from scratch as well: - -```shell - -$ docker build -t apachepulsar/pulsar-dashboard dashboard - -``` - -If token authentication is enabled: -> Provided token should have super-user access. - -```shell - -$ SERVICE_URL=http://broker.example.com:8080/ -$ JWT_TOKEN=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c -$ docker run -p 80:80 \ - -e SERVICE_URL=$SERVICE_URL \ - -e JWT_TOKEN=$JWT_TOKEN \ - apachepulsar/pulsar-dashboard - -``` - - -You need to specify only one service URL for a Pulsar cluster. Internally, the collector figures out all the existing clusters and the brokers from where it needs to pull the metrics. If you connect the dashboard to Pulsar running in standalone mode, the URL is `http://:8080` by default. `` is the IP address or hostname of the machine that runs Pulsar standalone. The IP address or hostname should be accessible from the running dashboard in the docker instance. - -Once the Docker container starts, the web dashboard is accessible via `localhost` or whichever host that Docker uses. - -> The `SERVICE_URL` that the dashboard uses needs to be reachable from inside the Docker container. - -If the Pulsar service runs in standalone mode in `localhost`, the `SERVICE_URL` has to -be the IP address of the machine. - -Similarly, given the Pulsar standalone advertises itself with localhost by default, you need to -explicitly set the advertise address to the host IP address. For example: - -```shell - -$ bin/pulsar standalone --advertised-address 1.2.3.4 - -``` - -### Known issues - -Currently, only Pulsar Token [authentication](security-overview.md#authentication-providers) is supported. 
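-
-For reference, the super-user JWT passed as `JWT_TOKEN` above can be generated with Pulsar's token tool. The following is a sketch that assumes token authentication with a symmetric secret key; the key path and the `admin` subject are placeholders, and the subject must map to a role with super-user access:
-
-```shell
-
-$ bin/pulsar tokens create \
-  --secret-key file:///path/to/my-secret.key \
-  --subject admin
-
-```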
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/administration-geo.md b/site2/website/versioned_docs/version-2.9.2-deprecated/administration-geo.md
deleted file mode 100644
index 1d2a9620007f4b..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/administration-geo.md
+++ /dev/null
@@ -1,238 +0,0 @@
----
-id: administration-geo
-title: Pulsar geo-replication
-sidebar_label: "Geo-replication"
-original_id: administration-geo
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-*Geo-replication* is the replication of persistently stored message data across multiple clusters of a Pulsar instance.
-
-## How geo-replication works
-
-The diagram below illustrates the process of geo-replication across Pulsar clusters:
-
-![Replication Diagram](/assets/geo-replication.png)
-
-In this diagram, whenever **P1**, **P2**, and **P3** producers publish messages to the **T1** topic on **Cluster-A**, **Cluster-B**, and **Cluster-C** clusters respectively, those messages are instantly replicated across clusters. Once the messages are replicated, **C1** and **C2** consumers can consume those messages from their respective clusters.
-
-Without geo-replication, **C1** and **C2** consumers are not able to consume messages that the **P3** producer publishes.
-
-## Geo-replication and Pulsar properties
-
-You must enable geo-replication on a per-tenant basis in Pulsar. You can enable geo-replication between clusters only when a tenant is created that allows access to both clusters.
-
-Although geo-replication must be enabled between two clusters, it is actually managed at the namespace level. You must complete the following tasks to enable geo-replication for a namespace:
-
-* [Enable geo-replication namespaces](#enable-geo-replication-namespaces)
-* Configure that namespace to replicate across two or more provisioned clusters
-
-Any message published on *any* topic in that namespace is replicated to all clusters in the specified set.
-
-## Local persistence and forwarding
-
-When messages are produced on a Pulsar topic, messages are first persisted in the local cluster, and then forwarded asynchronously to the remote clusters.
-
-In normal cases, when there are no connectivity issues, messages are replicated immediately, at the same time as they are dispatched to local consumers. Typically, the network [round-trip time](https://en.wikipedia.org/wiki/Round-trip_delay_time) (RTT) between the remote regions defines the end-to-end delivery latency.
-
-Applications can create producers and consumers in any of the clusters, even when the remote clusters are not reachable (for example, during a network partition).
-
-Producers and consumers can publish messages to and consume messages from any cluster in a Pulsar instance. However, subscriptions are not only local to the cluster where they are created; they can also be transferred between clusters once replicated subscriptions are enabled. With replicated subscriptions, you can keep subscription state in sync. Therefore, a topic can be asynchronously replicated across multiple geographical regions. In case of failover, a consumer can restart consuming messages from the failure point in a different cluster.
-
-In the aforementioned example, the **T1** topic is replicated among three clusters, **Cluster-A**, **Cluster-B**, and **Cluster-C**.
-
-All messages produced in any of the three clusters are delivered to all subscriptions in other clusters.
In this case, **C1** and **C2** consumers receive all messages that **P1**, **P2**, and **P3** producers publish. Ordering is still guaranteed on a per-producer basis.
-
-## Configure replication
-
-The following example connects three clusters: **us-east**, **us-west**, and **us-cent**.
-
-### Connect replication clusters
-
-To replicate data among clusters, you need to configure each cluster to connect to the others. You can use the [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/) tool to create a connection.
-
-**Example**
-
-Suppose that you have 3 replication clusters: `us-west`, `us-cent`, and `us-east`.
-
-1. Configure the connection from `us-west` to `us-east`.
-
-   Run the following command on `us-west`.
-
-```shell
-
-$ bin/pulsar-admin clusters create \
-  --broker-url pulsar://<DNS-of-us-east>:<port> \
-  --url http://<DNS-of-us-east>:<port> \
-  us-east
-
-```
-
-   :::tip
-
-   - If you want to use a secure connection for a cluster, you can use the flags `--broker-url-secure` and `--url-secure`. For more information, see [pulsar-admin clusters create](https://pulsar.apache.org/tools/pulsar-admin/).
-   - Different clusters may have different authentication settings. You can use the authentication flags `--auth-plugin` and `--auth-parameters` together to set cluster authentication, which overrides `brokerClientAuthenticationPlugin` and `brokerClientAuthenticationParameters` if `authenticationEnabled` is set to `true` in `broker.conf` and `standalone.conf`. For more information, see [authentication and authorization](concepts-authentication.md).
-
-   :::
-
-2. Configure the connection from `us-west` to `us-cent`.
-
-   Run the following command on `us-west`.
-
-```shell
-
-$ bin/pulsar-admin clusters create \
-  --broker-url pulsar://<DNS-of-us-cent>:<port> \
-  --url http://<DNS-of-us-cent>:<port> \
-  us-cent
-
-```
-
-3. Run similar commands on `us-east` and `us-cent` to create connections among clusters.
-
-### Grant permissions to properties
-
-To replicate to a cluster, the tenant needs permission to use that cluster. You can grant permission to the tenant when you create the tenant, or grant it later.
-
-Specify all the intended clusters when you create a tenant:
-
-```shell
-
-$ bin/pulsar-admin tenants create my-tenant \
-  --admin-roles my-admin-role \
-  --allowed-clusters us-west,us-east,us-cent
-
-```
-
-To update permissions of an existing tenant, use `update` instead of `create`.
-
-### Enable geo-replication
-
-You can enable geo-replication at **namespace** or **topic** level.
-
-#### Enable geo-replication at namespace level
-
-You can create a namespace with the following command sample.
-
-```shell
-
-$ bin/pulsar-admin namespaces create my-tenant/my-namespace
-
-```
-
-Initially, the namespace is not assigned to any cluster. You can assign the namespace to clusters using the `set-clusters` subcommand:
-
-```shell
-
-$ bin/pulsar-admin namespaces set-clusters my-tenant/my-namespace \
-  --clusters us-west,us-east,us-cent
-
-```
-
-### Use topics with geo-replication
-
-Once you create a geo-replication namespace, any topics that producers or consumers create within that namespace are replicated across clusters. Typically, each application uses the `serviceUrl` for the local cluster.
-
-#### Selective replication
-
-By default, messages are replicated to all clusters configured for the namespace. You can restrict replication selectively by specifying a replication list for a message, and then that message is replicated only to the subset in the replication list.
-
-The following is an example for the [Java API](client-libraries-java.md).
Note the use of the `setReplicationClusters` method when you construct the {@inject: javadoc:Message:/client/org/apache/pulsar/client/api/Message} object: - -```java - -List restrictReplicationTo = Arrays.asList( - "us-west", - "us-east" -); - -Producer producer = client.newProducer() - .topic("some-topic") - .create(); - -producer.newMessage() - .value("my-payload".getBytes()) - .setReplicationClusters(restrictReplicationTo) - .send(); - -``` - -#### Topic stats - -You can check topic-specific statistics for geo-replication topics using one of the following methods. - -````mdx-code-block - - - -Use the [`pulsar-admin topics stats`](https://pulsar.apache.org/tools/pulsar-admin/) command. - -```shell - -$ bin/pulsar-admin topics stats persistent://my-tenant/my-namespace/my-topic - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/stats|operation/getStats?version=@pulsar:version_number@} - - - - -```` - -Each cluster reports its own local stats, including the incoming and outgoing replication rates and backlogs. - -#### Delete a geo-replication topic - -Given that geo-replication topics exist in multiple regions, directly deleting a geo-replication topic is not possible. Instead, you should rely on automatic topic garbage collection. - -In Pulsar, a topic is automatically deleted when the topic meets the following three conditions: -- no producers or consumers are connected to it; -- no subscriptions to it; -- no more messages are kept for retention. -For geo-replication topics, each region uses a fault-tolerant mechanism to decide when deleting the topic locally is safe. - -You can explicitly disable topic garbage collection by setting `brokerDeleteInactiveTopicsEnabled` to `false` in your [broker configuration](reference-configuration.md#broker). - -To delete a geo-replication topic, close all producers and consumers on the topic, and delete all of its local subscriptions in every replication cluster. When Pulsar determines that no valid subscription for the topic remains across the system, it will garbage collect the topic. - -## Replicated subscriptions - -Pulsar supports replicated subscriptions, so you can keep subscription state in sync, within a sub-second timeframe, in the context of a topic that is being asynchronously replicated across multiple geographical regions. - -In case of failover, a consumer can restart consuming from the failure point in a different cluster. - -### Enable replicated subscription - -Replicated subscription is disabled by default. You can enable replicated subscription when creating a consumer. - -```java - -Consumer consumer = client.newConsumer(Schema.STRING) - .topic("my-topic") - .subscriptionName("my-subscription") - .replicateSubscriptionState(true) - .subscribe(); - -``` - -### Advantages - - * It is easy to implement the logic. - * You can choose to enable or disable replicated subscription. - * When you enable it, the overhead is low, and it is easy to configure. - * When you disable it, the overhead is zero. - -### Limitations - -When you enable replicated subscription, you're creating a consistent distributed snapshot to establish an association between message ids from different clusters. The snapshots are taken periodically. The default value is `1 second`. It means that a consumer failing over to a different cluster can potentially receive 1 second of duplicates. You can also configure the frequency of the snapshot in the `broker.conf` file. 
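-
-For reference, a minimal `broker.conf` sketch follows. The property name is an assumption based on common Pulsar broker configurations, so verify it against the `broker.conf` shipped with your version:
-
-```properties
-
-# Take a consistent snapshot that associates message IDs across clusters
-# once per second (the assumed default)
-replicatedSubscriptionsSnapshotFrequencyMillis=1000
-
-```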
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/administration-isolation.md b/site2/website/versioned_docs/version-2.9.2-deprecated/administration-isolation.md deleted file mode 100644 index d2de042a2e7415..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/administration-isolation.md +++ /dev/null @@ -1,115 +0,0 @@ ---- -id: administration-isolation -title: Pulsar isolation -sidebar_label: "Pulsar isolation" -original_id: administration-isolation ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -In an organization, a Pulsar instance provides services to multiple teams. When organizing the resources across multiple teams, you want to make a suitable isolation plan to avoid the resource competition between different teams and applications and provide high-quality messaging service. In this case, you need to take resource isolation into consideration and weigh your intended actions against expected and unexpected consequences. - -To enforce resource isolation, you can use the Pulsar isolation policy, which allows you to allocate resources (**broker** and **bookie**) for the namespace. - -## Broker isolation - -In Pulsar, when namespaces (more specifically, namespace bundles) are assigned dynamically to brokers, the namespace isolation policy limits the set of brokers that can be used for assignment. Before topics are assigned to brokers, you can set the namespace isolation policy with a primary or a secondary regex to select desired brokers. - -You can set a namespace isolation policy for a cluster using one of the following methods. - -````mdx-code-block - - - - -``` - -pulsar-admin ns-isolation-policy set options - -``` - -For more information about the command `pulsar-admin ns-isolation-policy set options`, see [here](https://pulsar.apache.org/tools/pulsar-admin/). - -**Example** - -```shell - -bin/pulsar-admin ns-isolation-policy set \ ---auto-failover-policy-type min_available \ ---auto-failover-policy-params min_limit=1,usage_threshold=80 \ ---namespaces my-tenant/my-namespace \ ---primary 10.193.216.* my-cluster policy-name - -``` - - - - -[PUT /admin/v2/namespaces/{tenant}/{namespace}](https://pulsar.apache.org/admin-rest-api/?version=master&apiversion=v2#operation/createNamespace) - - - - -For how to set namespace isolation policy using Java admin API, see [here](https://github.com/apache/pulsar/blob/master/pulsar-client-admin/src/main/java/org/apache/pulsar/client/admin/internal/NamespacesImpl.java#L251). - - - - -```` - -## Bookie isolation - -A namespace can be isolated into user-defined groups of bookies, which guarantees all the data that belongs to the namespace is stored in desired bookies. The bookie affinity group uses the BookKeeper [rack-aware placement policy](https://bookkeeper.apache.org/docs/latest/api/javadoc/org/apache/bookkeeper/client/EnsemblePlacementPolicy.html) and it is a way to feed rack information which is stored as JSON format in znode. - -You can set a bookie affinity group using one of the following methods. - -````mdx-code-block - - - - -``` - -pulsar-admin namespaces set-bookie-affinity-group options - -``` - -For more information about the command `pulsar-admin namespaces set-bookie-affinity-group options`, see [here](https://pulsar.apache.org/tools/pulsar-admin/). 
- -**Example** - -```shell - -bin/pulsar-admin bookies set-bookie-rack \ ---bookie 127.0.0.1:3181 \ ---hostname 127.0.0.1:3181 \ ---group group-bookie1 \ ---rack rack1 - -bin/pulsar-admin namespaces set-bookie-affinity-group public/default \ ---primary-group group-bookie1 - -``` - - - - -[POST /admin/v2/namespaces/{tenant}/{namespace}/persistence/bookieAffinity](https://pulsar.apache.org/admin-rest-api/?version=master&apiversion=v2#operation/setBookieAffinityGroup) - - - - -For how to set bookie affinity group for a namespace using Java admin API, see [here](https://github.com/apache/pulsar/blob/master/pulsar-client-admin/src/main/java/org/apache/pulsar/client/admin/internal/NamespacesImpl.java#L1164). - - - - -```` diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/administration-load-balance.md b/site2/website/versioned_docs/version-2.9.2-deprecated/administration-load-balance.md deleted file mode 100644 index 788c84a59317b0..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/administration-load-balance.md +++ /dev/null @@ -1,250 +0,0 @@ ---- -id: administration-load-balance -title: Pulsar load balance -sidebar_label: "Load balance" -original_id: administration-load-balance ---- - -## Load balance across Pulsar brokers - -Pulsar is an horizontally scalable messaging system, so the traffic in a logical cluster must be balanced across all the available Pulsar brokers as evenly as possible, which is a core requirement. - -You can use multiple settings and tools to control the traffic distribution which require a bit of context to understand how the traffic is managed in Pulsar. Though, in most cases, the core requirement mentioned above is true out of the box and you should not worry about it. - -## Pulsar load manager architecture - -The following part introduces the basic architecture of the Pulsar load manager. - -### Assign topics to brokers dynamically - -Topics are dynamically assigned to brokers based on the load conditions of all brokers in the cluster. - -When a client starts using new topics that are not assigned to any broker, a process is triggered to choose the best suited broker to acquire ownership of these topics according to the load conditions. - -In case of partitioned topics, different partitions are assigned to different brokers. Here "topic" means either a non-partitioned topic or one partition of a topic. - -The assignment is "dynamic" because the assignment changes quickly. For example, if the broker owning the topic crashes, the topic is reassigned immediately to another broker. Another scenario is that the broker owning the topic becomes overloaded. In this case, the topic is reassigned to a less loaded broker. - -The stateless nature of brokers makes the dynamic assignment possible, so you can quickly expand or shrink the cluster based on usage. - -#### Assignment granularity - -The assignment of topics or partitions to brokers is not done at the topics or partitions level, but done at the Bundle level (a higher level). The reason is to amortize the amount of information that you need to keep track. Based on CPU, memory, traffic load and other indexes, topics are assigned to a particular broker dynamically. - -Instead of individual topic or partition assignment, each broker takes ownership of a subset of the topics for a namespace. This subset is called a "*bundle*" and effectively this subset is a sharding mechanism. 
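-
-To make this sharding model concrete, the following is an illustrative Java sketch of the hash-range mechanism described in the next paragraphs: the namespace's hash space is divided into equal-sized bundles, and a topic is mapped to a bundle by hashing its name. This is a simplified model for illustration only, not Pulsar's actual `NamespaceBundles` implementation, which uses its own hash function and supports uneven ranges after bundle splits:
-
-```java
-
-import java.nio.charset.StandardCharsets;
-import java.util.zip.CRC32;
-
-public class BundleSketch {
-    // Upper bound of the hash space: 2^32
-    private static final long FULL_RANGE = 1L << 32;
-
-    // Hash a topic name into [0, 2^32)
-    static long hashTopic(String topicName) {
-        CRC32 crc = new CRC32();
-        crc.update(topicName.getBytes(StandardCharsets.UTF_8));
-        return crc.getValue();
-    }
-
-    // Map the hash onto one of numBundles equal-sized ranges
-    static int bundleFor(String topicName, int numBundles) {
-        long rangeSize = FULL_RANGE / numBundles;
-        return (int) Math.min(hashTopic(topicName) / rangeSize, numBundles - 1);
-    }
-
-    public static void main(String[] args) {
-        // With 16 bundles, every topic in the namespace deterministically
-        // lands in one of the bundle indexes 0..15
-        System.out.println(bundleFor("persistent://my-tenant/my-namespace/my-topic", 16));
-    }
-}
-
-```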
-
-The namespace is the "administrative" unit: many config knobs or operations are done at the namespace level.
-
-For assignment, a namespace is sharded into a list of "bundles", with each bundle comprising a portion of the overall hash range of the namespace.
-
-Topics are assigned to a particular bundle by taking the hash of the topic name and checking which bundle the hash falls into.
-
-Each bundle is independent of the others and thus is independently assigned to different brokers.
-
-### Create namespaces and bundles
-
-When you create a new namespace, the namespace is set to use the default number of bundles. You can set this in `conf/broker.conf`:
-
-```properties
-
-# When a namespace is created without specifying the number of bundles, this
-# value will be used as the default
-defaultNumberOfNamespaceBundles=4
-
-```
-
-You can either change the system default, or override it when you create a new namespace:
-
-```shell
-
-$ bin/pulsar-admin namespaces create my-tenant/my-namespace --clusters us-west --bundles 16
-
-```
-
-With this command, you create a namespace with 16 initial bundles. Therefore, the topics for this namespace can immediately be spread across up to 16 brokers.
-
-In general, if you know the expected traffic and number of topics in advance, it is better to start with a reasonable number of bundles instead of waiting for the system to auto-correct the distribution.
-
-On the same note, it is beneficial to start with more bundles than the number of brokers, because of the hashing nature of the distribution of topics into bundles. For example, for a namespace with 1000 topics, using something like 64 bundles achieves a good distribution of traffic across 16 brokers.
-
-### Unload topics and bundles
-
-You can "unload" a topic in Pulsar with an admin operation. Unloading means closing the topic, releasing ownership, and reassigning the topic to a new broker, based on the current load.
-
-When unloading happens, the client experiences a small latency blip, typically in the order of tens of milliseconds, while the topic is reassigned.
-
-Unloading is the mechanism that the load manager uses to perform load shedding, but you can also trigger the unloading manually, for example to correct the assignments and redistribute traffic even before any broker becomes overloaded.
-
-Unloading a topic has no effect on the assignment, but just closes and reopens the particular topic:
-
-```shell
-
-pulsar-admin topics unload persistent://tenant/namespace/topic
-
-```
-
-To unload all topics for a namespace and trigger reassignments:
-
-```shell
-
-pulsar-admin namespaces unload tenant/namespace
-
-```
-
-### Split namespace bundles
-
-Since the load for the topics in a bundle might change over time and predicting the load might be hard, bundle split is designed to deal with these issues. The broker splits a bundle into two, and the new, smaller bundles can be reassigned to different brokers.
-
-The splitting is based on some tunable thresholds. Any existing bundle that exceeds any of the thresholds is a candidate to be split. By default, the newly split bundles are also immediately offloaded to other brokers, to facilitate the traffic distribution.
-
-You can split namespace bundles in two ways, by setting `supportedNamespaceBundleSplitAlgorithms` to `range_equally_divide` or `topic_count_equally_divide` in the `broker.conf` file. The former splits the bundle into two parts with the same hash range size; the latter splits the bundle into two parts with the same number of topics. You can also configure other parameters for namespace bundles.
-
-```properties
-
-# enable/disable namespace bundle auto split
-loadBalancerAutoBundleSplitEnabled=true
-
-# enable/disable automatic unloading of split bundles
-loadBalancerAutoUnloadSplitBundlesEnabled=true
-
-# maximum topics in a bundle, otherwise bundle split will be triggered
-loadBalancerNamespaceBundleMaxTopics=1000
-
-# maximum sessions (producers + consumers) in a bundle, otherwise bundle split will be triggered
-loadBalancerNamespaceBundleMaxSessions=1000
-
-# maximum msgRate (in + out) in a bundle, otherwise bundle split will be triggered
-loadBalancerNamespaceBundleMaxMsgRate=30000
-
-# maximum bandwidth (in + out) in a bundle, otherwise bundle split will be triggered
-loadBalancerNamespaceBundleMaxBandwidthMbytes=100
-
-# maximum number of bundles in a namespace (for auto-split)
-loadBalancerNamespaceMaximumBundles=128
-
-```
-
-### Shed load automatically
-
-The support for automatic load shedding is available in the load manager of Pulsar. This means that whenever the system recognizes that a particular broker is overloaded, the system forces some traffic to be reassigned to less loaded brokers.
-
-When a broker is identified as overloaded, the broker is forced to "unload" a subset of the bundles, the ones with the highest traffic, that accounts for the overload percentage.
-
-For example, the default threshold is 85%, and if a broker is over quota at 95% CPU usage, then the broker unloads the percent difference plus a 5% margin: `(95% - 85%) + 5% = 15%`.
-
-Given that the selection of bundles to offload is based on traffic (as a proxy measure for CPU, network, and memory), the broker unloads bundles that account for at least 15% of its traffic.
-
-Automatic load shedding is enabled by default, and you can disable it with this setting:
-
-```properties
-
-# Enable/disable automatic bundle unloading for load-shedding
-loadBalancerSheddingEnabled=true
-
-```
-
-Additional settings that apply to shedding:
-
-```properties
-
-# Load shedding interval. Broker periodically checks whether some traffic should be offloaded from
-# some over-loaded broker to other under-loaded brokers
-loadBalancerSheddingIntervalMinutes=1
-
-# Prevent the same topics from being shed and moved to other brokers more than once within this timeframe
-loadBalancerSheddingGracePeriodMinutes=30
-
-```
-
-#### Broker overload thresholds
-
-The determination of when a broker is overloaded is based on thresholds of CPU, network, and memory usage. Whenever either of those metrics reaches the threshold, the system triggers the shedding (if enabled).
-
-By default, the overload threshold is set at 85%:
-
-```properties
-
-# Usage threshold to determine a broker as over-loaded
-loadBalancerBrokerOverloadedThresholdPercentage=85
-
-```
-
-Pulsar gathers the usage stats from the system metrics.
-
-In the case of network utilization, the network interface speed that Linux reports is sometimes not correct and needs to be manually overridden. This is the case in AWS EC2 instances with 1Gbps NIC speed, for which the OS reports 10Gbps speed.
-
-Because of the incorrect max speed, the Pulsar load manager might think the broker has not reached the NIC capacity, while in fact the broker already uses all the bandwidth and the traffic is slowed down.
-
-You can use the following setting to correct the max NIC speed:
-
-```properties
-
-# Override the auto-detection of the network interfaces max speed.
-# This option is useful in some environments (eg: EC2 VMs) where the max speed -# reported by Linux is not reflecting the real bandwidth available to the broker. -# Since the network usage is employed by the load manager to decide when a broker -# is overloaded, it is important to make sure the info is correct or override it -# with the right value here. The configured value can be a double (eg: 0.8) and that -# can be used to trigger load-shedding even before hitting on NIC limits. -loadBalancerOverrideBrokerNicSpeedGbps= - -``` - -When the value is empty, Pulsar uses the value that the OS reports. - -### Distribute anti-affinity namespaces across failure domains - -When your application has multiple namespaces and you want one of them available all the time to avoid any downtime, you can group these namespaces and distribute them across different [failure domains](reference-terminology.md#failure-domain) and different brokers. Thus, if one of the failure domains is down (due to release rollout or brokers restart), it only disrupts namespaces owned by that specific failure domain and the rest of the namespaces owned by other domains remain available without any impact. - -Such a group of namespaces has anti-affinity to each other, that is, all the namespaces in this group are [anti-affinity namespaces](reference-terminology.md#anti-affinity-namespaces) and are distributed to different failure domains in a load-balanced manner. - -As illustrated in the following figure, Pulsar has 2 failure domains (Domain1 and Domain2) and each domain has 2 brokers in it. You can create an anti-affinity namespace group that has 4 namespaces in it, and all the 4 namespaces have anti-affinity to each other. The load manager tries to distribute namespaces evenly across all the brokers in the same domain. Since each domain has 2 brokers, every broker owns one namespace from this anti-affinity namespace group, and you can see each domain owns 2 namespaces, and each broker owns 1 namespace. - -![Distribute anti-affinity namespaces across failure domains](/assets/anti-affinity-namespaces-across-failure-domains.svg) - -The load manager follows an even distribution policy across failure domains to assign anti-affinity namespaces. The following table outlines the even-distributed assignment sequence illustrated in the above figure. - -| Assignment sequence | Namespace | Failure domain candidates | Broker candidates | Selected broker | -|:---|:------------|:------------------|:------------------------------------|:-----------------| -| 1 | Namespace1 | Domain1, Domain2 | Broker1, Broker2, Broker3, Broker4 | Domain1:Broker1 | -| 2 | Namespace2 | Domain2 | Broker3, Broker4 | Domain2:Broker3 | -| 3 | Namespace3 | Domain1, Domain2 | Broker2, Broker4 | Domain1:Broker2 | -| 4 | Namespace4 | Domain2 | Broker4 | Domain2:Broker4 | - -:::tip - -* Each namespace belongs to only one anti-affinity group. If a namespace with an existing anti-affinity assignment is assigned to another anti-affinity group, the original assignment is dropped. - -* If there are more anti-affinity namespaces than failure domains, the load manager distributes namespaces evenly across all the domains, and also every domain distributes namespaces evenly across all the brokers under that domain. - -::: - -#### Create a failure domain and register brokers - -:::note - -One broker can only be registered to a single failure domain. 
-
-:::
-
-To create a domain under a specific cluster and register brokers, run the following command:
-
-```bash
-
-pulsar-admin clusters create-failure-domain <cluster-name> --domain-name <domain-name> --broker-list <broker-list>
-
-```
-
-You can also view, update, and delete domains under a specific cluster. For more information, refer to [Pulsar admin doc](/tools/pulsar-admin/).
-
-#### Create an anti-affinity namespace group
-
-An anti-affinity group is created automatically when the first namespace is assigned to the group. To assign a namespace to an anti-affinity group, run the following command, which sets an anti-affinity group name for a namespace.
-
-```bash
-
-pulsar-admin namespaces set-anti-affinity-group <namespace> --group <group-name>
-
-```
-
-For more information about `anti-affinity-group` related commands, refer to [Pulsar admin doc](/tools/pulsar-admin/).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/administration-proxy.md b/site2/website/versioned_docs/version-2.9.2-deprecated/administration-proxy.md
deleted file mode 100644
index 577eff7db0253a..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/administration-proxy.md
+++ /dev/null
@@ -1,125 +0,0 @@
----
-id: administration-proxy
-title: Pulsar proxy
-sidebar_label: "Pulsar proxy"
-original_id: administration-proxy
----
-
-Pulsar proxy is an optional gateway that is used when direct connections between clients and Pulsar brokers are either infeasible or undesirable. For example, when you run Pulsar in a cloud environment or on [Kubernetes](https://kubernetes.io) or an analogous platform, you can run Pulsar proxy.
-
-The Pulsar proxy is not intended to be exposed on the public internet. The security considerations in the current design expect network perimeter security. The requirement of network perimeter security can be achieved with private networks.
-
-If a proxy deployment cannot be protected with network perimeter security, the alternative would be to use [Pulsar's "Proxy SNI routing" feature](concepts-proxy-sni-routing.md) with a properly secured and audited solution. In that case, the Pulsar proxy component is not used at all.
-
-## Configure the proxy
-
-Before using a proxy, you need to configure it with a broker's address in the cluster. You can either configure the broker URLs in the proxy configuration, or configure the proxy to connect directly using service discovery.
-
-> In a production environment, service discovery is not recommended.
-
-### Use broker URLs
-
-It is more secure to specify a URL to connect to the brokers.
-
-Proxy authorization requires access to ZooKeeper, so if you use these broker URLs to connect to the brokers, you need to disable authorization at the proxy level. Brokers still authorize requests after the proxy forwards them.
-
-You can configure the broker URLs in `conf/proxy.conf` as follows.
-
-```properties
-brokerServiceURL=pulsar://brokers.example.com:6650
-brokerWebServiceURL=http://brokers.example.com:8080
-functionWorkerWebServiceURL=http://function-workers.example.com:8080
-```
-
-If you use TLS, configure the broker URLs in the following way:
-
-```properties
-brokerServiceURLTLS=pulsar+ssl://brokers.example.com:6651
-brokerWebServiceURLTLS=https://brokers.example.com:8443
-functionWorkerWebServiceURL=https://function-workers.example.com:8443
-```
-
-The hostname in the URLs provided should be a DNS entry that points to multiple brokers, or a virtual IP address that is backed by multiple broker IP addresses, so that the proxy does not lose connectivity to the Pulsar cluster if a single broker becomes unavailable.
-
-The ports to connect to the brokers (6650 and 8080, or in the case of TLS, 6651 and 8443) should be open in the network ACLs.
-
-Note that if you do not use functions, you do not need to configure `functionWorkerWebServiceURL`.
-
-### Use service discovery
-
-Pulsar uses [ZooKeeper](https://zookeeper.apache.org) for service discovery. To connect the proxy to ZooKeeper, specify the following in `conf/proxy.conf`.
-
-```properties
-metadataStoreUrl=my-zk-0:2181,my-zk-1:2181,my-zk-2:2181
-configurationMetadataStoreUrl=my-zk-0:2184,my-zk-remote:2184
-```
-
-> To use service discovery, you need to open the network ACLs, so that the proxy can connect to the ZooKeeper nodes through the ZooKeeper client port (port `2181`) and the configuration store client port (port `2184`).
-
-> However, using service discovery is not secure: if the network ACL is open and someone compromises a proxy, they gain full access to ZooKeeper.
-
-### Restricting target broker addresses to mitigate CVE-2022-24280
-
-The Pulsar proxy trusts clients to provide valid target broker addresses to connect to.
-Unless the Pulsar proxy is explicitly configured to limit access, it is vulnerable as described in the security advisory [Apache Pulsar Proxy target broker address isn't validated (CVE-2022-24280)](https://github.com/apache/pulsar/wiki/CVE-2022-24280).
-
-It is necessary to limit proxied broker connections to known broker addresses by specifying the `brokerProxyAllowedHostNames` and `brokerProxyAllowedIPAddresses` settings.
-
-When specifying `brokerProxyAllowedHostNames`, it is possible to use a wildcard.
-Note that `*` is a wildcard that matches any character in the hostname. It also matches dot `.` characters.
-
-It is recommended to use a pattern that matches only the desired brokers and no other hosts in the local network. By default, Pulsar lookups use the default host name of the broker. This can be overridden with the `advertisedAddress` setting in `broker.conf`.
-
-To increase security, it is also possible to restrict access with the `brokerProxyAllowedIPAddresses` setting. It is not mandatory to configure `brokerProxyAllowedIPAddresses` when `brokerProxyAllowedHostNames` is properly configured so that the pattern matches only the target brokers.
-The `brokerProxyAllowedIPAddresses` setting supports a comma-separated list of IP addresses, IP address ranges, and IP address networks [(supported format reference)](https://seancfoley.github.io/IPAddress/IPAddress/apidocs/inet/ipaddr/IPAddressString.html).
-
-Example: limiting by host name in a Kubernetes deployment:
-```yaml
-  # example of limiting to Kubernetes statefulset hostnames that contain "broker-"
-  PULSAR_PREFIX_brokerProxyAllowedHostNames: '*broker-*.*.*.svc.cluster.local'
-```
-
-Example: limiting by both host name and IP address in a `proxy.conf` file for a host deployment:
-```properties
-# require "broker" in host name
-brokerProxyAllowedHostNames=*broker*.localdomain
-# limit target ip addresses to a specific network
-brokerProxyAllowedIPAddresses=10.0.0.0/8
-```
-
-Example: limiting by multiple host name patterns and multiple IP address ranges in a `proxy.conf` file for a host deployment:
-```properties
-# require "broker" in host name
-brokerProxyAllowedHostNames=*broker*.localdomain,*broker*.otherdomain
-# limit target ip addresses to a specific network or range demonstrating multiple supported formats
-brokerProxyAllowedIPAddresses=10.10.0.0/16,192.168.1.100-120,172.16.2.*,10.1.2.3
-```
-
-
-## Start the proxy
-
-To start the proxy:
-
-```bash
-
-$ cd /path/to/pulsar/directory
-$ bin/pulsar proxy
-
-```
-
-> You can run multiple instances of the Pulsar proxy in a cluster.
-
-## Stop the proxy
-
-Pulsar proxy runs in the foreground by default. To stop the proxy, simply stop the process in which the proxy is running.
-
-## Proxy frontends
-
-You can run Pulsar proxy behind some kind of load-distributing frontend, such as an [HAProxy](https://www.digitalocean.com/community/tutorials/an-introduction-to-haproxy-and-load-balancing-concepts) load balancer.
-
-## Use Pulsar clients with the proxy
-
-Once your Pulsar proxy is up and running, preferably behind a load-distributing [frontend](#proxy-frontends), clients can connect to the proxy via whichever address the frontend uses. If the address is the DNS address `pulsar.cluster.default`, for example, the connection URL for clients is `pulsar://pulsar.cluster.default:6650`.
-
-For more information on proxy configuration, refer to [Pulsar proxy](reference-configuration.md#pulsar-proxy).
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/administration-pulsar-manager.md b/site2/website/versioned_docs/version-2.9.2-deprecated/administration-pulsar-manager.md
deleted file mode 100644
index d877cce723e6ab..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/administration-pulsar-manager.md
+++ /dev/null
@@ -1,205 +0,0 @@
----
-id: administration-pulsar-manager
-title: Pulsar Manager
-sidebar_label: "Pulsar Manager"
-original_id: administration-pulsar-manager
----
-
-Pulsar Manager is a web-based GUI management and monitoring tool that helps administrators and users manage and monitor tenants, namespaces, topics, subscriptions, brokers, clusters, and so on, and supports dynamic configuration of multiple environments.
-
-:::note
-
-If you are monitoring your current stats with Pulsar dashboard, we recommend you use Pulsar Manager instead. Pulsar dashboard is deprecated.
-
-:::
-
-## Install
-
-The easiest way to use the Pulsar Manager is to run it inside a [Docker](https://www.docker.com/products/docker) container.
-
-```shell
-
-docker pull apachepulsar/pulsar-manager:v0.2.0
-docker run -it \
-    -p 9527:9527 -p 7750:7750 \
-    -e SPRING_CONFIGURATION_FILE=/pulsar-manager/pulsar-manager/application.properties \
-    apachepulsar/pulsar-manager:v0.2.0
-
-```
-
-* `SPRING_CONFIGURATION_FILE`: the default configuration file for Spring.
- -### Set administrator account and password - - ```shell - - CSRF_TOKEN=$(curl http://localhost:7750/pulsar-manager/csrf-token) - curl \ - -H 'X-XSRF-TOKEN: $CSRF_TOKEN' \ - -H 'Cookie: XSRF-TOKEN=$CSRF_TOKEN;' \ - -H "Content-Type: application/json" \ - -X PUT http://localhost:7750/pulsar-manager/users/superuser \ - -d '{"name": "admin", "password": "apachepulsar", "description": "test", "email": "username@test.org"}' - - ``` - -You can find the docker image in the [Docker Hub](https://github.com/apache/pulsar-manager/tree/master/docker) directory and build an image from the source code as well: - -``` - -git clone https://github.com/apache/pulsar-manager -cd pulsar-manager/front-end -npm install --save -npm run build:prod -cd .. -./gradlew build -x test -cd .. -docker build -f docker/Dockerfile --build-arg BUILD_DATE=`date -u +"%Y-%m-%dT%H:%M:%SZ"` --build-arg VCS_REF=`latest` --build-arg VERSION=`latest` -t apachepulsar/pulsar-manager . - -``` - -### Use custom databases - -If you have a large amount of data, you can use a custom database. The following is an example of PostgreSQL. - -1. Initialize database and table structures using the [file](https://github.com/apache/pulsar-manager/tree/master/src/main/resources/META-INF/sql/postgresql-schema.sql). - -2. Modify the [configuration file](https://github.com/apache/pulsar-manager/blob/master/src/main/resources/application.properties) and add PostgreSQL configuration. - -``` - -spring.datasource.driver-class-name=org.postgresql.Driver -spring.datasource.url=jdbc:postgresql://127.0.0.1:5432/pulsar_manager -spring.datasource.username=postgres -spring.datasource.password=postgres - -``` - -3. Compile to generate a new executable jar package. - -``` - -./gradlew build -x test - -``` - -### Enable JWT authentication - -If you want to turn on JWT authentication, configure the following parameters: - -* `backend.jwt.token`: token for the superuser. You need to configure this parameter during cluster initialization. -* `jwt.broker.token.mode`: multiple modes of generating token, including PUBLIC, PRIVATE, and SECRET. -* `jwt.broker.public.key`: configure this option if you use the PUBLIC mode. -* `jwt.broker.private.key`: configure this option if you use the PRIVATE mode. -* `jwt.broker.secret.key`: configure this option if you use the SECRET mode. - -For more information, see [Token Authentication Admin of Pulsar](http://pulsar.apache.org/docs/en/security-token-admin/). - - -If you want to enable JWT authentication, use one of the following methods. - - -* Method 1: use command-line tool - -``` - -wget https://dist.apache.org/repos/dist/release/pulsar/pulsar-manager/pulsar-manager-0.2.0/apache-pulsar-manager-0.2.0-bin.tar.gz -tar -zxvf apache-pulsar-manager-0.2.0-bin.tar.gz -cd pulsar-manager -tar -zxvf pulsar-manager.tar -cd pulsar-manager -cp -r ../dist ui -./bin/pulsar-manager --redirect.host=http://localhost --redirect.port=9527 insert.stats.interval=600000 --backend.jwt.token=token --jwt.broker.token.mode=PRIVATE --jwt.broker.private.key=file:///path/broker-private.key --jwt.broker.public.key=file:///path/broker-public.key - -``` - -Firstly, [set the administrator account and password](#set-administrator-account-and-password) - -Secondly, log in to Pulsar manager through http://localhost:7750/ui/index.html. 
-
-* Method 2: configure the application.properties file
-
-```
-
-backend.jwt.token=token
-
-jwt.broker.token.mode=PRIVATE
-jwt.broker.public.key=file:///path/broker-public.key
-jwt.broker.private.key=file:///path/broker-private.key
-
-# Or, for the SECRET mode:
-jwt.broker.token.mode=SECRET
-jwt.broker.secret.key=file:///path/broker-secret.key
-
-```
-
-* Method 3: use Docker and enable token authentication.
-
-```
-
-export JWT_TOKEN="your-token"
-docker run -it -p 9527:9527 -p 7750:7750 -e REDIRECT_HOST=http://localhost -e REDIRECT_PORT=9527 -e DRIVER_CLASS_NAME=org.postgresql.Driver -e URL='jdbc:postgresql://127.0.0.1:5432/pulsar_manager' -e USERNAME=pulsar -e PASSWORD=pulsar -e LOG_LEVEL=DEBUG -e JWT_TOKEN=$JWT_TOKEN -v $PWD:/data apachepulsar/pulsar-manager:v0.2.0 /bin/sh
-
-```
-
-* `JWT_TOKEN`: the token of the superuser configured for the broker. It is generated by the `bin/pulsar tokens create --secret-key` or `bin/pulsar tokens create --private-key` command.
-* `REDIRECT_HOST`: the IP address of the front-end server.
-* `REDIRECT_PORT`: the port of the front-end server.
-* `DRIVER_CLASS_NAME`: the driver class name of the PostgreSQL database.
-* `URL`: the JDBC URL of your PostgreSQL database, such as jdbc:postgresql://127.0.0.1:5432/pulsar_manager. The docker image automatically starts a local instance of the PostgreSQL database.
-* `USERNAME`: the username of PostgreSQL.
-* `PASSWORD`: the password of PostgreSQL.
-* `LOG_LEVEL`: the log level.
-
-* Method 4: use Docker and turn on **token authentication** and **token management** by private key and public key.
-
-```
-
-export JWT_TOKEN="your-token"
-export PRIVATE_KEY="file:///pulsar-manager/secret/my-private.key"
-export PUBLIC_KEY="file:///pulsar-manager/secret/my-public.key"
-docker run -it -p 9527:9527 -p 7750:7750 -e REDIRECT_HOST=http://localhost -e REDIRECT_PORT=9527 -e DRIVER_CLASS_NAME=org.postgresql.Driver -e URL='jdbc:postgresql://127.0.0.1:5432/pulsar_manager' -e USERNAME=pulsar -e PASSWORD=pulsar -e LOG_LEVEL=DEBUG -e JWT_TOKEN=$JWT_TOKEN -e PRIVATE_KEY=$PRIVATE_KEY -e PUBLIC_KEY=$PUBLIC_KEY -v $PWD:/data -v $PWD/secret:/pulsar-manager/secret apachepulsar/pulsar-manager:v0.2.0 /bin/sh
-
-```
-
-* `JWT_TOKEN`: the token of the superuser configured for the broker. It is generated by the `bin/pulsar tokens create --private-key` command.
-* `PRIVATE_KEY`: the private key path mounted in the container, generated by the `bin/pulsar tokens create-key-pair` command.
-* `PUBLIC_KEY`: the public key path mounted in the container, generated by the `bin/pulsar tokens create-key-pair` command.
-* `$PWD/secret`: the local folder where the private key and public key generated by the `bin/pulsar tokens create-key-pair` command are placed.
-* `REDIRECT_HOST`: the IP address of the front-end server.
-* `REDIRECT_PORT`: the port of the front-end server.
-* `DRIVER_CLASS_NAME`: the driver class name of the PostgreSQL database.
-* `URL`: the JDBC URL of your PostgreSQL database, such as jdbc:postgresql://127.0.0.1:5432/pulsar_manager. The docker image automatically starts a local instance of the PostgreSQL database.
-* `USERNAME`: the username of PostgreSQL.
-* `PASSWORD`: the password of PostgreSQL.
-* `LOG_LEVEL`: the log level.
-
-* Method 5: use Docker and turn on **token authentication** and **token management** by secret key.
-
-```
-
-export JWT_TOKEN="your-token"
-export SECRET_KEY="file:///pulsar-manager/secret/my-secret.key"
-docker run -it -p 9527:9527 -p 7750:7750 -e REDIRECT_HOST=http://localhost -e REDIRECT_PORT=9527 -e DRIVER_CLASS_NAME=org.postgresql.Driver -e URL='jdbc:postgresql://127.0.0.1:5432/pulsar_manager' -e USERNAME=pulsar -e PASSWORD=pulsar -e LOG_LEVEL=DEBUG -e JWT_TOKEN=$JWT_TOKEN -e SECRET_KEY=$SECRET_KEY -v $PWD:/data -v $PWD/secret:/pulsar-manager/secret apachepulsar/pulsar-manager:v0.2.0 /bin/sh
-
-```
-
-* `JWT_TOKEN`: the token of the superuser configured for the broker. It is generated by the `bin/pulsar tokens create --secret-key` command.
-* `SECRET_KEY`: the secret key path mounted in the container, generated by the `bin/pulsar tokens create-secret-key` command.
-* `$PWD/secret`: the local folder where the secret key generated by the `bin/pulsar tokens create-secret-key` command is placed.
-* `REDIRECT_HOST`: the IP address of the front-end server.
-* `REDIRECT_PORT`: the port of the front-end server.
-* `DRIVER_CLASS_NAME`: the driver class name of the PostgreSQL database.
-* `URL`: the JDBC URL of your PostgreSQL database, such as jdbc:postgresql://127.0.0.1:5432/pulsar_manager. The docker image automatically starts a local instance of the PostgreSQL database.
-* `USERNAME`: the username of PostgreSQL.
-* `PASSWORD`: the password of PostgreSQL.
-* `LOG_LEVEL`: the log level.
-
-* For more information about backend configurations, see [here](https://github.com/apache/pulsar-manager/blob/master/src/README).
-* For more information about frontend configurations, see [here](https://github.com/apache/pulsar-manager/tree/master/front-end).
-
-## Log in
-
-[Set the administrator account and password](#set-administrator-account-and-password).
-
-Visit http://localhost:9527 to log in.
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/administration-stats.md b/site2/website/versioned_docs/version-2.9.2-deprecated/administration-stats.md
deleted file mode 100644
index ac0c03602f36d5..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/administration-stats.md
+++ /dev/null
@@ -1,64 +0,0 @@
----
-id: administration-stats
-title: Pulsar stats
-sidebar_label: "Pulsar statistics"
-original_id: administration-stats
----
-
-## Partitioned topics
-
-|Stat|Description|
-|---|---|
-|msgRateIn| The sum of publish rates of all local and replication publishers in messages per second.|
-|msgThroughputIn| Same as msgRateIn but in bytes per second instead of messages per second.|
-|msgRateOut| The sum of dispatch rates of all local and replication consumers in messages per second.|
-|msgThroughputOut| Same as msgRateOut but in bytes per second instead of messages per second.|
-|averageMsgSize| Average message size, in bytes, from this publisher within the last interval.|
-|storageSize| The sum of the storage size of the ledgers for this topic.|
-|publishers| The list of all local publishers into the topic. Publishers can be anywhere from zero to thousands.|
-|producerId| Internal identifier for this producer on this topic.|
-|producerName| Internal identifier for this producer, generated by the client library.|
-|address| IP address and source port for the connection of this producer.|
-|connectedSince| Timestamp when this producer was created or last reconnected.|
-|subscriptions| The list of all local subscriptions to the topic.|
-|my-subscription| The name of this subscription (client defined).|
-|msgBacklog| The count of messages in backlog for this subscription.|
-|type| The type of this subscription.|
-|msgRateExpired| The rate at which messages are discarded instead of dispatched from this subscription due to TTL.|
-|consumers| The list of connected consumers for this subscription.|
-|consumerName| Internal identifier for this consumer, generated by the client library.|
-|availablePermits| The number of messages this consumer has space for in the client library's listen queue. A value of 0 means the client library's queue is full and receive() is not being called. A nonzero value means this consumer is ready to be dispatched messages.|
-|replication| This section gives the stats for cross-colo replication of this topic.|
-|replicationBacklog| The outbound replication backlog in messages.|
-|connected| Whether the outbound replicator is connected.|
-|replicationDelayInSeconds| How long the oldest message has been waiting to be sent through the connection, if connected is true.|
-|inboundConnection| The IP and port of the broker in the publisher connection of the remote cluster to this broker.|
-|inboundConnectedSince| The TCP connection being used to publish messages to the remote cluster. If no local publishers are connected, this connection is automatically closed after a minute.|
-
-
-## Topics
-
-|Stat|Description|
-|---|---|
-|entriesAddedCounter| Messages published since this broker loaded this topic.|
-|numberOfEntries| Total number of messages being tracked.|
-|totalSize| Total storage size in bytes of all messages.|
-|currentLedgerEntries| Count of messages written to the ledger currently open for writing.|
-|currentLedgerSize| Size in bytes of messages written to the ledger currently open for writing.|
-|lastLedgerCreatedTimestamp| Time when the last ledger was created.|
-|lastLedgerCreationFailureTimestamp| Time when the last ledger creation failed.|
-|waitingCursorsCount| How many cursors are caught up and waiting for a new message to be published.|
-|pendingAddEntriesCount| How many messages have (asynchronous) write requests awaiting completion.|
-|lastConfirmedEntry| The ledgerid:entryid of the last message successfully written. If the entryid is -1, then the ledger is open or is currently being opened but has no entries written yet.|
-|state| The state of the cursor ledger. Open means you have a cursor ledger for saving updates of the markDeletePosition.|
-|ledgers| The ordered list of all ledgers for this topic holding its messages.|
-|cursors| The list of all cursors on this topic. Every subscription you saw in the topic stats has one.|
-|markDeletePosition| The ack position: the last message the subscriber acknowledges receiving.|
-|readPosition| The latest position of the subscriber for reading messages.|
-|waitingReadOp| This is true when the subscription has read the latest message published to the topic and is waiting for new messages to be published.|
-|pendingReadOps| The counter for how many outstanding read requests to the BookKeepers are in progress.|
-|messagesConsumedCounter| Number of messages this cursor has acked since this broker loaded this topic.|
-|cursorLedger| The ledger used to persistently store the current markDeletePosition.|
-|cursorLedgerLastEntry| The last entryid used to persistently store the current markDeletePosition.|
-|individuallyDeletedMessages| If acks are done out of order, shows the ranges of messages acked between the markDeletePosition and the read-position.|
-|lastLedgerSwitchTimestamp| The last time the cursor ledger was rolled over.|
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/administration-upgrade.md b/site2/website/versioned_docs/version-2.9.2-deprecated/administration-upgrade.md
deleted file mode 100644
index 72d136b6460f62..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/administration-upgrade.md
+++ /dev/null
@@ -1,168 +0,0 @@
----
-id: administration-upgrade
-title: Upgrade Guide
-sidebar_label: "Upgrade"
-original_id: administration-upgrade
----
-
-## Upgrade guidelines
-
-Apache Pulsar comprises multiple components: ZooKeeper, bookies, and brokers. These components are either stateful or stateless. You do not have to upgrade ZooKeeper nodes unless you have special requirements. While you upgrade, you need to pay attention to bookies (stateful), brokers, and proxies (stateless).
-
-The following are some guidelines on upgrading a Pulsar cluster. Read the guidelines before upgrading.
-
-- Back up all your configuration files before upgrading.
-- Read the guide entirely, make a plan, and then execute the plan. When you make your upgrade plan, take your specific requirements and environment into consideration.
-- Pay attention to the upgrading order of components. In general, you do not need to upgrade your ZooKeeper or configuration store cluster (the global ZooKeeper cluster). You need to upgrade bookies first, and then upgrade brokers, proxies, and your clients.
-- If `autorecovery` is enabled, disable `autorecovery` in the upgrade process, and re-enable it after completing the process.
-- Read the release notes carefully for each release. Release notes contain features and configuration changes that might impact your upgrade.
-- Upgrade a small subset of nodes of each type to canary test the new version before upgrading all nodes of that type in the cluster. When you have upgraded the canary nodes, let them run for a while to ensure that they work correctly.
-- Upgrade one data center to verify the new version before upgrading all data centers if your cluster runs in multi-cluster replicated mode.
-
-> Note: Currently, Apache Pulsar maintains compatibility between versions.
-
-## Upgrade sequence
-
-To upgrade an Apache Pulsar cluster, follow the upgrade sequence.
-
-1. Upgrade ZooKeeper (optional)
-- Canary test: test an upgraded version in one or a small set of ZooKeeper nodes.
-- Rolling upgrade: roll out the upgraded version to all ZooKeeper servers incrementally, one at a time. Monitor your dashboard during the whole rolling upgrade process.
-2. Upgrade bookies
-- Canary test: test an upgraded version in one or a small set of bookies.
-- Rolling upgrade:
-
-  a. Disable `autorecovery` with the following command.
-
-   ```shell
-
-   bin/bookkeeper shell autorecovery -disable
-
-   ```
-
-
-  b. Roll out the upgraded version to all bookies in the cluster after you determine that the version is safe based on the canary test.
-
-  c. After you upgrade all bookies, re-enable `autorecovery` with the following command.
-
-   ```shell
-
-   bin/bookkeeper shell autorecovery -enable
-
-   ```
-
-3. Upgrade brokers
-- Canary test: test an upgraded version in one or a small set of brokers.
-- Rolling upgrade: roll out the upgraded version to all brokers in the cluster after you determine that the version is safe based on the canary test.
-4. Upgrade proxies
-- Canary test: test an upgraded version in one or a small set of proxies.
-- Rolling upgrade: roll out the upgraded version to all proxies in the cluster after you determine that the version is safe based on the canary test.
-
-## Upgrade ZooKeeper (optional)
-While you upgrade ZooKeeper servers, you can do a canary test first, and then upgrade all ZooKeeper servers in the cluster.
-
-### Canary test
-
-You can test an upgraded version in one of the ZooKeeper servers before upgrading all ZooKeeper servers in your cluster.
-
-To upgrade a ZooKeeper server to a new version, complete the following steps:
-
-1. Stop the ZooKeeper server.
-2. Upgrade the binary and configuration files.
-3. Start the ZooKeeper server with the new binary files.
-4. Use `pulsar zookeeper-shell` to connect to the newly upgraded ZooKeeper server and run a few commands to verify that it works as expected.
-5. Run the ZooKeeper server for a few days, observing it to make sure the ZooKeeper cluster runs well.
-
-#### Canary rollback
-
-If issues occur during the canary test, you can shut down the problematic ZooKeeper node, revert the binary and configuration, and restart ZooKeeper with the reverted binary.
-
-### Upgrade all ZooKeeper servers
-
-After the canary test upgrading one ZooKeeper server in your cluster, you can upgrade all ZooKeeper servers in your cluster.
-
-You can upgrade all ZooKeeper servers one by one, following the steps in the canary test.
-
-## Upgrade bookies
-
-While you upgrade bookies, you can do a canary test first, and then upgrade all bookies in the cluster.
-For more details, you can read the Apache BookKeeper [Upgrade guide](http://bookkeeper.apache.org/docs/latest/admin/upgrade).
-
-### Canary test
-
-You can test an upgraded version in one or a small set of bookies before upgrading all bookies in your cluster.
-
-To upgrade a bookie to a new version, complete the following steps:
-
-1. Stop the bookie.
-2. Upgrade the binary and configuration files.
-3. Start the bookie in `ReadOnly` mode to verify that the bookie of this new version runs well for read workloads.
-
-   ```shell
-
-   bin/pulsar bookie --readOnly
-
-   ```
-
-4. When the bookie runs successfully in `ReadOnly` mode, stop the bookie and restart it in `Write/Read` mode.
-
-   ```shell
-
-   bin/pulsar bookie
-
-   ```
-
-5. Observe and make sure the cluster serves both write and read traffic.
-
-#### Canary rollback
-
-If issues occur during the canary test, you can shut down the problematic bookie node. Other bookies in the cluster replace this problematic bookie node via autorecovery.
-
-### Upgrade all bookies
-
-After the canary test upgrading some bookies in your cluster, you can upgrade all bookies in your cluster.
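-
-In the rolling scenario described below, the per-bookie loop might look like the following sketch (the host names are hypothetical, and the `bookiesanity` check is an optional verification before moving on):
-
-```shell
-
-# Hypothetical rolling upgrade over a list of bookie hosts
-for host in bookie1.example.com bookie2.example.com bookie3.example.com; do
-  ssh "$host" "bin/pulsar-daemon stop bookie"
-  # ... replace the binaries and configuration files on $host ...
-  ssh "$host" "bin/pulsar-daemon start bookie"
-  # Verify the bookie before proceeding to the next one
-  ssh "$host" "bin/bookkeeper shell bookiesanity"
-done
-
-```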
-
-Before upgrading, you have to decide between two scenarios: a rolling upgrade, or a downtime upgrade in which you take the whole cluster down at once.
-
-In a rolling upgrade scenario, upgrade one bookie at a time. In a downtime upgrade scenario, shut down the entire cluster, upgrade each bookie, and then start the cluster.
-
-In both scenarios, the procedure is the same for each bookie.
-
-1. Stop the bookie.
-2. Upgrade the software (either new binary or new configuration files).
-3. Start the bookie.
-
-> **Advanced operations**
-> When you upgrade a large BookKeeper cluster in a rolling upgrade scenario, upgrading one bookie at a time is slow. If you configure a rack-aware or region-aware placement policy, you can upgrade bookies rack by rack or region by region, which speeds up the whole upgrade process.
-
-## Upgrade brokers and proxies
-
-The upgrade procedure for brokers and proxies is the same. Brokers and proxies are `stateless`, so upgrading the two services is easy.
-
-### Canary test
-
-You can test an upgraded version in one or a small set of nodes before upgrading all nodes in your cluster.
-
-To upgrade to a new version, complete the following steps:
-
-1. Stop a broker (or proxy).
-2. Upgrade the binary and configuration file.
-3. Start a broker (or proxy).
-
-#### Canary rollback
-
-If issues occur during the canary test, you can shut down the problematic broker (or proxy) node. Revert to the old version and restart the broker (or proxy).
-
-### Upgrade all brokers or proxies
-
-After the canary test upgrading some brokers or proxies in your cluster, you can upgrade all brokers or proxies in your cluster.
-
-Before upgrading, you have to decide between the same two scenarios: a rolling upgrade or a downtime upgrade.
-
-In a rolling upgrade scenario, you can upgrade one broker or one proxy at a time if the size of the cluster is small. If your cluster is large, you can upgrade brokers or proxies in batches. When you upgrade a batch of brokers or proxies, make sure the remaining brokers and proxies in the cluster have enough capacity to handle the traffic during the upgrade.
-
-In a downtime upgrade scenario, shut down the entire cluster, upgrade each broker or proxy, and then start the cluster.
-
-In both scenarios, the procedure is the same for each broker or proxy.
-
-1. Stop the broker or proxy.
-2. Upgrade the software (either new binary or new configuration files).
-3. Start the broker or proxy.
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/administration-zk-bk.md b/site2/website/versioned_docs/version-2.9.2-deprecated/administration-zk-bk.md
deleted file mode 100644
index f427d43d57dc1a..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/administration-zk-bk.md
+++ /dev/null
@@ -1,386 +0,0 @@
----
-id: administration-zk-bk
-title: ZooKeeper and BookKeeper administration
-sidebar_label: "ZooKeeper and BookKeeper"
-original_id: administration-zk-bk
----
-
-Pulsar relies on two external systems for essential tasks:
-
-* [ZooKeeper](https://zookeeper.apache.org/) is responsible for a wide variety of configuration-related and coordination-related tasks.
-* [BookKeeper](http://bookkeeper.apache.org/) is responsible for [persistent storage](concepts-architecture-overview.md#persistent-storage) of message data.
-
-ZooKeeper and BookKeeper are both open-source [Apache](https://www.apache.org/) projects.
- -> Skip to the [How Pulsar uses ZooKeeper and BookKeeper](#how-pulsar-uses-zookeeper-and-bookkeeper) section below for a more schematic explanation of the role of these two systems in Pulsar. - - -## ZooKeeper - -Each Pulsar instance relies on two separate ZooKeeper quorums. - -* [Local ZooKeeper](#deploy-local-zookeeper) operates at the cluster level and provides cluster-specific configuration management and coordination. Each Pulsar cluster needs to have a dedicated ZooKeeper cluster. -* [Configuration Store](#deploy-configuration-store) operates at the instance level and provides configuration management for the entire system (and thus across clusters). An independent cluster of machines or the same machines that local ZooKeeper uses can provide the configuration store quorum. - -### Deploy local ZooKeeper - -ZooKeeper manages a variety of essential coordination-related and configuration-related tasks for Pulsar. - -To deploy a Pulsar instance, you need to stand up one local ZooKeeper cluster *per Pulsar cluster*. - -To begin, add all ZooKeeper servers to the quorum configuration specified in the [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file. Add a `server.N` line for each node in the cluster to the configuration, where `N` is the number of the ZooKeeper node. The following is an example for a three-node cluster: - -```properties - -server.1=zk1.us-west.example.com:2888:3888 -server.2=zk2.us-west.example.com:2888:3888 -server.3=zk3.us-west.example.com:2888:3888 - -``` - -On each host, you need to specify the node ID in `myid` file of each node, which is in `data/zookeeper` folder of each server by default (you can change the file location via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter). - -> See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed information on `myid` and more. - - -On a ZooKeeper server at `zk1.us-west.example.com`, for example, you can set the `myid` value like this: - -```shell - -$ mkdir -p data/zookeeper -$ echo 1 > data/zookeeper/myid - -``` - -On `zk2.us-west.example.com` the command is `echo 2 > data/zookeeper/myid` and so on. - -Once you add each server to the `zookeeper.conf` configuration and each server has the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool: - -```shell - -$ bin/pulsar-daemon start zookeeper - -``` - -### Deploy configuration store - -The ZooKeeper cluster configured and started up in the section above is a *local* ZooKeeper cluster that you can use to manage a single Pulsar cluster. In addition to a local cluster, however, a full Pulsar instance also requires a configuration store for handling some instance-level configuration and coordination tasks. - -If you deploy a [single-cluster](#single-cluster-pulsar-instance) instance, you do not need a separate cluster for the configuration store. If, however, you deploy a [multi-cluster](#multi-cluster-pulsar-instance) instance, you need to stand up a separate ZooKeeper cluster for configuration tasks. - -#### Single-cluster Pulsar instance - -If your Pulsar instance consists of just one cluster, then you can deploy a configuration store on the same machines as the local ZooKeeper quorum but run on different TCP ports. 
-
-To deploy a ZooKeeper configuration store in a single-cluster instance, add the same ZooKeeper servers that the local quorum uses to the configuration file in [`conf/global_zookeeper.conf`](reference-configuration.md#configuration-store) using the same method for [local ZooKeeper](#local-zookeeper), but make sure to use a different port (2181 is the default for ZooKeeper). The following is an example that uses port 2184 for a three-node ZooKeeper cluster:
-
-```properties
-
-clientPort=2184
-server.1=zk1.us-west.example.com:2185:2186
-server.2=zk2.us-west.example.com:2185:2186
-server.3=zk3.us-west.example.com:2185:2186
-
-```
-
-As before, create the `myid` files for each server in `data/global-zookeeper/myid`.
-
-#### Multi-cluster Pulsar instance
-
-When you deploy a global Pulsar instance, with clusters distributed across different geographical regions, the configuration store serves as a highly available and strongly consistent metadata store that can tolerate failures and partitions spanning whole regions.
-
-The key here is to make sure the ZK quorum members are spread across at least 3 regions and that the other regions run as observers.
-
-Again, given the very low expected load on the configuration store servers, you can share the same hosts used for the local ZooKeeper quorum.
-
-For example, assume a Pulsar instance with the following clusters: `us-west`, `us-east`, `us-central`, `eu-central`, and `ap-south`. Also assume that each cluster has its own local ZK servers named like this:
-
-```
-
-zk[1-3].${CLUSTER}.example.com
-
-```
-
-In this scenario, you want to pick the quorum participants from a few clusters and let all the others be ZK observers. For example, to form a 7-server quorum, you can pick 3 servers from `us-west`, 2 from `us-central`, and 2 from `us-east`.
-
-This guarantees that writes to the configuration store are possible even if one of these regions is unreachable.
-
-The ZK configuration in all the servers looks like:
-
-```properties
-
-clientPort=2184
-server.1=zk1.us-west.example.com:2185:2186
-server.2=zk2.us-west.example.com:2185:2186
-server.3=zk3.us-west.example.com:2185:2186
-server.4=zk1.us-central.example.com:2185:2186
-server.5=zk2.us-central.example.com:2185:2186
-server.6=zk3.us-central.example.com:2185:2186:observer
-server.7=zk1.us-east.example.com:2185:2186
-server.8=zk2.us-east.example.com:2185:2186
-server.9=zk3.us-east.example.com:2185:2186:observer
-server.10=zk1.eu-central.example.com:2185:2186:observer
-server.11=zk2.eu-central.example.com:2185:2186:observer
-server.12=zk3.eu-central.example.com:2185:2186:observer
-server.13=zk1.ap-south.example.com:2185:2186:observer
-server.14=zk2.ap-south.example.com:2185:2186:observer
-server.15=zk3.ap-south.example.com:2185:2186:observer
-
-```
-
-Additionally, ZK observers need to have:
-
-```properties
-
-peerType=observer
-
-```
-
-##### Start the service
-
-Once your configuration store configuration is in place, you can start up the service using [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon):
-
-```shell
-
-$ bin/pulsar-daemon start configuration-store
-
-```
-
-### ZooKeeper configuration
-
-In Pulsar, ZooKeeper configuration is handled by two separate configuration files in the `conf` directory of your Pulsar installation: `conf/zookeeper.conf` for [local ZooKeeper](#local-zookeeper) and `conf/global-zookeeper.conf` for the [configuration store](#configuration-store).
-
-#### Local ZooKeeper
-
-The [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file handles the configuration for local ZooKeeper. The table below shows the available parameters:
-
-|Name|Description|Default|
-|---|---|---|
-|tickTime| The tick is the basic unit of time in ZooKeeper, measured in milliseconds and used to regulate things like heartbeats and timeouts. tickTime is the length of a single tick. |2000|
-|initLimit| The maximum time, in ticks, that the leader ZooKeeper server allows follower ZooKeeper servers to successfully connect and sync. The tick time is set in milliseconds using the tickTime parameter. |10|
-|syncLimit| The maximum time, in ticks, that a follower ZooKeeper server is allowed to sync with other ZooKeeper servers. The tick time is set in milliseconds using the tickTime parameter. |5|
-|dataDir| The location where ZooKeeper stores in-memory database snapshots as well as the transaction log of updates to the database. |data/zookeeper|
-|clientPort| The port on which the ZooKeeper server listens for connections. |2181|
-|autopurge.snapRetainCount| In ZooKeeper, auto purge determines how many recent snapshots of the database stored in dataDir to retain within the time interval specified by autopurge.purgeInterval (while deleting the rest). |3|
-|autopurge.purgeInterval| The time interval, in hours, which triggers the ZooKeeper database purge task. Setting to a non-zero number enables auto purge; setting to 0 disables it. Read this guide before enabling auto purge. |1|
-|maxClientCnxns| The maximum number of client connections. Increase this if you need to handle more ZooKeeper clients. |60|
-
-
-#### Configuration Store
-
-The [`conf/global-zookeeper.conf`](reference-configuration.md#configuration-store) file handles the configuration for the configuration store. Since the configuration store is itself a ZooKeeper ensemble, it supports the same parameters as local ZooKeeper.
-
-
-## BookKeeper
-
-BookKeeper stores all durable messages in Pulsar. BookKeeper is a distributed [write-ahead log](https://en.wikipedia.org/wiki/Write-ahead_logging) (WAL) system that guarantees read consistency of independent message logs called ledgers. Individual BookKeeper servers are also called *bookies*.
-
-> To manage message persistence, retention, and expiry in Pulsar, refer to this [cookbook](cookbooks-retention-expiry.md).
-
-### Hardware requirements
-
-Bookie hosts store message data on disk. To provide optimal performance, ensure that the bookies have a suitable hardware configuration. The following are two key dimensions of bookie hardware capacity:
-
-- Read/write disk I/O capacity
-- Storage capacity
-
-By default, message entries written to bookies are always synced to disk before an acknowledgement is returned to the Pulsar broker. To ensure low write latency, BookKeeper is designed to use multiple devices:
-
-- A **journal** to ensure durability. For sequential writes, it is critical to have fast [fsync](https://linux.die.net/man/2/fsync) operations on bookie hosts. Typically, small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) should suffice, or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache. Both solutions can reach fsync latency of ~0.4 ms.
-- A **ledger storage device** stores data. Writes happen in the background, so write I/O is not a big concern. Reads happen sequentially most of the time, and the backlog is read back only when a consumer drains it. To store large amounts of data, a typical configuration involves multiple HDDs with a RAID controller.
-
-### Configure BookKeeper
-
-You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. When you configure each bookie, ensure that the [`zkServers`](reference-configuration.md#bookkeeper-zkServers) parameter is set to the connection string for the local ZooKeeper of the Pulsar cluster.
-
-The minimum configuration changes required in `conf/bookkeeper.conf` are as follows:
-
-:::note
-
-Set `journalDirectory` and `ledgerDirectories` carefully. It is difficult to change them later.
-
-:::
-
-```properties
-
-# Change to point to journal disk mount point
-journalDirectory=data/bookkeeper/journal
-
-# Point to ledger storage disk mount point
-ledgerDirectories=data/bookkeeper/ledgers
-
-# Point to local ZK quorum
-zkServers=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
-
-# It is recommended to set this parameter. Otherwise, BookKeeper can't start normally in certain environments (for example, Huawei Cloud).
-advertisedAddress=
-
-```
-
-To change the ZooKeeper root path that BookKeeper uses, use `zkLedgersRootPath=/MY-PREFIX/ledgers` instead of `zkServers=localhost:2181/MY-PREFIX`.
-
-> For more information about BookKeeper, refer to the official [BookKeeper docs](http://bookkeeper.apache.org).
-
-### Deploy BookKeeper
-
-BookKeeper provides [persistent message storage](concepts-architecture-overview.md#persistent-storage) for Pulsar. Each Pulsar broker has its own cluster of bookies. The BookKeeper cluster shares a local ZooKeeper quorum with the Pulsar cluster.
-
-### Start bookies manually
-
-You can start a bookie in the foreground or as a background daemon.
-
-To start a bookie in the foreground, use the [`bookkeeper`](reference-cli-tools.md#bookkeeper) CLI tool:
-
-```bash
-
-$ bin/bookkeeper bookie
-
-```
-
-To start a bookie in the background, use the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
-
-```bash
-
-$ bin/pulsar-daemon start bookie
-
-```
-
-You can verify whether the bookie works properly with the `bookiesanity` command for the [BookKeeper shell](reference-cli-tools.md#bookkeeper-shell):
-
-```shell
-
-$ bin/bookkeeper shell bookiesanity
-
-```
-
-This command creates a new ledger on the local bookie, writes a few entries, reads them back, and finally deletes the ledger.
-
-### Decommission bookies cleanly
-
-Before you decommission a bookie, you need to check your environment and meet the following requirements.
-
-1. Ensure the state of your cluster supports decommissioning the target bookie. Check if `EnsembleSize >= Write Quorum >= Ack Quorum` is `true` with one less bookie.
-
-2. Ensure the target bookie is shown in the output of the `listbookies` command.
-
-3. Ensure that no other process is ongoing (e.g., an upgrade).
-
-And then you can decommission bookies safely. To decommission bookies, complete the following steps.
-
-1. Log in to the bookie node and check whether there are underreplicated ledgers. The decommission command forces replication of any underreplicated ledgers.
-`$ bin/bookkeeper shell listunderreplicated`
-
-2. Stop the bookie by killing the bookie process. If you deploy in a Kubernetes environment, make sure that no liveness/readiness probes are set up to spin the bookies back up.
-
-3. Run the decommission command.
-   - If you have logged in to the node to be decommissioned, you do not need to provide `-bookieid`.
-   - If you are running the decommission command for the target bookie node from another bookie node, you should mention the target bookie ID in the arguments for `-bookieid`.
-`$ bin/bookkeeper shell decommissionbookie`
-or
-`$ bin/bookkeeper shell decommissionbookie -bookieid <bookie-id>`
-
-4. Validate that no ledgers are on the decommissioned bookie.
-`$ bin/bookkeeper shell listledgers -bookieid <bookie-id>`
-
-You can run the following command to check if the bookie you have decommissioned is listed in the bookies list:
-
-```bash
-
-./bookkeeper shell listbookies -rw -h
-./bookkeeper shell listbookies -ro -h
-
-```
-
-## BookKeeper persistence policies
-
-In Pulsar, you can set *persistence policies* at the namespace level, which determine how BookKeeper handles persistent storage of messages. Policies determine four things:
-
-* The number of acks (guaranteed copies) to wait for each ledger entry.
-* The number of bookies to use for a topic.
-* The number of writes to make for each ledger entry.
-* The throttling rate for mark-delete operations.
-
-### Set persistence policies
-
-You can set persistence policies for BookKeeper at the [namespace](reference-terminology.md#namespace) level.
-
-#### Pulsar-admin
-
-Use the [`set-persistence`](reference-pulsar-admin.md#namespaces-set-persistence) subcommand and specify a namespace as well as any policies that you want to apply. The available flags are:
-
-Flag | Description | Default
-:----|:------------|:-------
-`-a`, `--bookkeeper-ack-quorum` | The number of acks (guaranteed copies) to wait on for each entry | 0
-`-e`, `--bookkeeper-ensemble` | The number of [bookies](reference-terminology.md#bookie) to use for topics in the namespace | 0
-`-w`, `--bookkeeper-write-quorum` | The number of writes to make for each entry | 0
-`-r`, `--ml-mark-delete-max-rate` | Throttling rate for mark-delete operations (0 means no throttle) | 0
-
-The following is an example:
-
-```shell
-
-$ pulsar-admin namespaces set-persistence my-tenant/my-ns \
-  --bookkeeper-ensemble 3 \
-  --bookkeeper-ack-quorum 2
-
-```
-
-#### REST API
-
-{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/setPersistence?version=@pulsar:version_number@}
-
-#### Java
-
-```java
-
-int bkEnsemble = 3;
-int bkQuorum = 2;
-int bkAckQuorum = 2;
-double markDeleteRate = 0.7;
-PersistencePolicies policies =
-  new PersistencePolicies(bkEnsemble, bkQuorum, bkAckQuorum, markDeleteRate);
-admin.namespaces().setPersistence(namespace, policies);
-
-```
-
-### List persistence policies
-
-You can see which persistence policy currently applies to a namespace.
-
-#### Pulsar-admin
-
-Use the [`get-persistence`](reference-pulsar-admin.md#namespaces-get-persistence) subcommand and specify the namespace.
-
-The following is an example:
-
-```shell
-
-$ pulsar-admin namespaces get-persistence my-tenant/my-ns
-{
-  "bookkeeperEnsemble": 1,
-  "bookkeeperWriteQuorum": 1,
-  "bookkeeperAckQuorum": 1,
-  "managedLedgerMaxMarkDeleteRate": 0
-}
-
-```
-
-#### REST API
-
-{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/getPersistence?version=@pulsar:version_number@}
-
-#### Java
-
-```java
-
-PersistencePolicies policies = admin.namespaces().getPersistence(namespace);
-
-```
-
-## How Pulsar uses ZooKeeper and BookKeeper
-
-This diagram illustrates the role of ZooKeeper and BookKeeper in a Pulsar cluster:
-
-![ZooKeeper and BookKeeper](/assets/pulsar-system-architecture.png)
-
-Each Pulsar cluster consists of one or more message brokers.
Each broker relies on an ensemble of bookies. diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/client-libraries-cgo.md b/site2/website/versioned_docs/version-2.9.2-deprecated/client-libraries-cgo.md deleted file mode 100644 index f352f942b77144..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/client-libraries-cgo.md +++ /dev/null @@ -1,579 +0,0 @@ ---- -id: client-libraries-cgo -title: Pulsar CGo client -sidebar_label: "CGo(deprecated)" -original_id: client-libraries-cgo ---- - -You can use Pulsar Go client to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Go (aka Golang). - -All the methods in [producers](#producers), [consumers](#consumers), and [readers](#readers) of a Go client are thread-safe. - -Currently, the following Go clients are maintained in two repositories. - -| Language | Project | Maintainer | License | Description | -|----------|---------|------------|---------|-------------| -| CGo | [pulsar-client-go](https://github.com/apache/pulsar/tree/master/pulsar-client-go) | [Apache Pulsar](https://github.com/apache/pulsar) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | CGo client that depends on C++ client library | -| Go | [pulsar-client-go](https://github.com/apache/pulsar-client-go) | [Apache Pulsar](https://github.com/apache/pulsar) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | A native golang client | - -> **API docs available as well** -> For standard API docs, consult the [Godoc](https://godoc.org/github.com/apache/pulsar/pulsar-client-go/pulsar). - -## Installation - -### Requirements - -Pulsar Go client library is based on the C++ client library. Follow -the instructions for [C++ library](client-libraries-cpp.md) for installing the binaries through [RPM](client-libraries-cpp.md#rpm), [Deb](client-libraries-cpp.md#deb) or [Homebrew packages](client-libraries-cpp.md#macos). - -### Install go package - -> **Compatibility Warning** -> The version number of the Go client **must match** the version number of the Pulsar C++ client library. - -You can install the `pulsar` library locally using `go get`. Note that `go get` doesn't support fetching a specific tag - it will always pull in master's version of the Go client. You'll need a C++ client library that matches master. - -```bash - -$ go get -u github.com/apache/pulsar/pulsar-client-go/pulsar - -``` - -Or you can use [dep](https://github.com/golang/dep) for managing the dependencies. - -```bash - -$ dep ensure -add github.com/apache/pulsar/pulsar-client-go/pulsar@v@pulsar:version@ - -``` - -Once installed locally, you can import it into your project: - -```go - -import "github.com/apache/pulsar/pulsar-client-go/pulsar" - -``` - -## Connection URLs - -To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL. - -Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme and have a default port of 6650. 
Here's an example for `localhost`:
-
-```http
-
-pulsar://localhost:6650
-
-```
-
-A URL for a production Pulsar cluster may look something like this:
-
-```http
-
-pulsar://pulsar.us-west.example.com:6650
-
-```
-
-If you're using [TLS](security-tls-authentication.md) authentication, the URL will look something like this:
-
-```http
-
-pulsar+ssl://pulsar.us-west.example.com:6651
-
-```
-
-## Create a client
-
-In order to interact with Pulsar, you'll first need a `Client` object. You can create a client object using the `NewClient` function, passing in a `ClientOptions` object (more on configuration [below](#client-configuration)). Here's an example:
-
-```go
-
-import (
-    "log"
-    "runtime"
-
-    "github.com/apache/pulsar/pulsar-client-go/pulsar"
-)
-
-func main() {
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-        URL: "pulsar://localhost:6650",
-        OperationTimeoutSeconds: 5,
-        MessageListenerThreads: runtime.NumCPU(),
-    })
-
-    if err != nil {
-        log.Fatalf("Could not instantiate Pulsar client: %v", err)
-    }
-}
-
-```
-
-The following configurable parameters are available for Pulsar clients:
-
-Parameter | Description | Default
-:---------|:------------|:-------
-`URL` | The connection URL for the Pulsar cluster. See [above](#urls) for more info |
-`IOThreads` | The number of threads to use for handling connections to Pulsar [brokers](reference-terminology.md#broker) | 1
-`OperationTimeoutSeconds` | The timeout for some Go client operations (creating producers, subscribing to and unsubscribing from [topics](reference-terminology.md#topic)). Retries will occur until this threshold is reached, at which point the operation will fail. | 30
-`MessageListenerThreads` | The number of threads used by message listeners ([consumers](#consumers) and [readers](#readers)) | 1
-`ConcurrentLookupRequests` | The number of concurrent lookup requests that can be sent on each broker connection. Setting a maximum helps to keep from overloading brokers. You should set values over the default of 5000 only if the client needs to produce and/or subscribe to thousands of Pulsar topics. | 5000
-`Logger` | A custom logger implementation for the client (as a function that takes a log level, file path, line number, and message). All info, warn, and error messages will be routed to this function. | `nil`
-`TLSTrustCertsFilePath` | The file path for the trusted TLS certificate |
-`TLSAllowInsecureConnection` | Whether the client accepts untrusted TLS certificates from the broker | `false`
-`Authentication` | Configure the authentication provider. (default: no authentication). Example: `Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem")` | `nil`
-`StatsIntervalInSeconds` | The interval (in seconds) at which client stats are published | 60
-
-## Producers
-
-Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Go producers using a `ProducerOptions` object. Here's an example:
-
-```go
-
-producer, err := client.CreateProducer(pulsar.ProducerOptions{
-    Topic: "my-topic",
-})
-
-if err != nil {
-    log.Fatalf("Could not instantiate Pulsar producer: %v", err)
-}
-
-defer producer.Close()
-
-msg := pulsar.ProducerMessage{
-    Payload: []byte("Hello, Pulsar"),
-}
-
-if err := producer.Send(context.Background(), msg); err != nil {
-    log.Fatalf("Producer could not send message: %v", err)
-}
-
-```
-
-> **Blocking operation**
-> When you create a new Pulsar producer, the operation will block (waiting on a go channel) until either a producer is successfully created or an error is thrown.
-
-
-### Producer operations
-
-Pulsar Go producers have the following methods available:
-
-Method | Description | Return type
-:------|:------------|:-----------
-`Topic()` | Fetches the producer's [topic](reference-terminology.md#topic)| `string`
-`Name()` | Fetches the producer's name | `string`
-`Send(context.Context, ProducerMessage)` | Publishes a [message](#messages) to the producer's topic. This call will block until the message is successfully acknowledged by the Pulsar broker, or an error will be thrown if the timeout set using the `SendTimeout` in the producer's [configuration](#producer-configuration) is exceeded. | `error`
-`SendAndGetMsgID(context.Context, ProducerMessage)`| Sends a message; this call will block until it is successfully acknowledged by the Pulsar broker. | (MessageID, error)
-`SendAsync(context.Context, ProducerMessage, func(ProducerMessage, error))` | Publishes a [message](#messages) to the producer's topic asynchronously. The third argument is a callback function that specifies what happens either when the message is acknowledged or an error is thrown. |
-`SendAndGetMsgIDAsync(context.Context, ProducerMessage, func(MessageID, error))`| Sends a message in asynchronous mode. The callback will report back the message being published and the eventual error in publishing. |
-`LastSequenceID()` | Get the last sequence id that was published by this producer. This represents either the automatically assigned or custom sequence id (set on the ProducerMessage) that was published and acknowledged by the broker. | int64
-`Flush()`| Flush all the messages buffered in the client and wait until all messages have been successfully persisted. | error
-`Close()` | Closes the producer and releases all resources allocated to it. If `Close()` is called then no more messages will be accepted from the publisher. This method will block until all pending publish requests have been persisted by Pulsar. If an error is thrown, no pending writes will be retried. | `error`
-`Schema()` | Fetches the schema used by the producer | Schema
-
-Here's a more involved example usage of a producer:
-
-```go
-
-import (
-    "context"
-    "fmt"
-    "log"
-
-    "github.com/apache/pulsar/pulsar-client-go/pulsar"
-)
-
-func main() {
-    // Instantiate a Pulsar client
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-        URL: "pulsar://localhost:6650",
-    })
-
-    if err != nil { log.Fatal(err) }
-
-    // Use the client to instantiate a producer
-    producer, err := client.CreateProducer(pulsar.ProducerOptions{
-        Topic: "my-topic",
-    })
-
-    if err != nil { log.Fatal(err) }
-
-    ctx := context.Background()
-
-    // Send 10 messages synchronously and 10 messages asynchronously
-    for i := 0; i < 10; i++ {
-        // Create a message
-        msg := pulsar.ProducerMessage{
-            Payload: []byte(fmt.Sprintf("message-%d", i)),
-        }
-
-        // Attempt to send the message
-        if err := producer.Send(ctx, msg); err != nil {
-            log.Fatal(err)
-        }
-
-        // Create a different message to send asynchronously
-        asyncMsg := pulsar.ProducerMessage{
-            Payload: []byte(fmt.Sprintf("async-message-%d", i)),
-        }
-
-        // Attempt to send the message asynchronously and handle the response
-        producer.SendAsync(ctx, asyncMsg, func(msg pulsar.ProducerMessage, err error) {
-            if err != nil { log.Fatal(err) }
-
-            fmt.Printf("message %s successfully published", string(msg.Payload))
-        })
-    }
-}
-
-```
-
-### Producer configuration
-
-Parameter | Description | Default
-:---------|:------------|:-------
-`Topic` | The Pulsar [topic](reference-terminology.md#topic) to which the producer will publish messages |
-`Name` | A name for the producer. If you don't explicitly assign a name, Pulsar will automatically generate a globally unique name that you can access later using the `Name()` method. If you choose to explicitly assign a name, it will need to be unique across *all* Pulsar clusters, otherwise the creation operation will throw an error. |
-`Properties` | Attach a set of application-defined properties to the producer. These properties will be visible in the topic stats |
-`SendTimeout` | When publishing a message to a topic, the producer will wait for an acknowledgment from the responsible Pulsar [broker](reference-terminology.md#broker). If a message is not acknowledged within the threshold set by this parameter, an error will be thrown. If you set `SendTimeout` to -1, the timeout will be set to infinity (and thus removed). Removing the send timeout is recommended when using Pulsar's [message de-duplication](cookbooks-deduplication.md) feature. | 30 seconds
-`MaxPendingMessages` | The maximum size of the queue holding pending messages (i.e. messages waiting to receive an acknowledgment from the [broker](reference-terminology.md#broker)). By default, when the queue is full all calls to the `Send` and `SendAsync` methods will fail *unless* `BlockIfQueueFull` is set to `true`. |
-`MaxPendingMessagesAcrossPartitions` | Set the maximum number of pending messages across all the partitions. This setting will be used to lower the per-partition `MaxPendingMessages` limit if the total exceeds the configured value. |
-`BlockIfQueueFull` | If set to `true`, the producer's `Send` and `SendAsync` methods will block when the outgoing message queue is full rather than failing and throwing an error (the size of that queue is dictated by the `MaxPendingMessages` parameter); if set to `false` (the default), `Send` and `SendAsync` operations will fail and throw a `ProducerQueueIsFullError` when the queue is full. | `false`
-`MessageRoutingMode` | The message routing logic (for producers on [partitioned topics](concepts-architecture-overview.md#partitioned-topics)). This logic is applied only when no key is set on messages. The available options are: round robin (`pulsar.RoundRobinDistribution`, the default), publishing all messages to a single partition (`pulsar.UseSinglePartition`), or a custom partitioning scheme (`pulsar.CustomPartition`). | `pulsar.RoundRobinDistribution`
-`HashingScheme` | The hashing function that determines the partition on which a particular message is published (partitioned topics only). The available options are: `pulsar.JavaStringHash` (the equivalent of `String.hashCode()` in Java), `pulsar.Murmur3_32Hash` (applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function), or `pulsar.BoostHash` (applies the hashing function from C++'s [Boost](https://www.boost.org/doc/libs/1_62_0/doc/html/hash.html) library) | `pulsar.JavaStringHash`
-`CompressionType` | The message data compression type used by the producer. The available options are [`LZ4`](https://github.com/lz4/lz4), [`ZLIB`](https://zlib.net/), [`ZSTD`](https://facebook.github.io/zstd/) and [`SNAPPY`](https://google.github.io/snappy/). | No compression
-`MessageRouter` | By default, Pulsar uses a round-robin routing scheme for [partitioned topics](cookbooks-partitioned.md). The `MessageRouter` parameter enables you to specify custom routing logic via a function that takes the Pulsar message and topic metadata as an argument and returns an integer (the index of the partition to publish to), i.e. a function signature of `func(Message, TopicMetadata) int`. |
-`Batching` | Control whether automatic batching of messages is enabled for the producer. | false
-`BatchingMaxPublishDelay` | Set the time period within which the messages sent will be batched (default: 1ms) if batch messages are enabled. If set to a non-zero value, messages will be queued until this time interval elapses or until the batch is full (see `BatchingMaxMessages`). | 1ms
-`BatchingMaxMessages` | Set the maximum number of messages permitted in a batch (default: 1000). If set to a value greater than 1, messages will be queued until this threshold is reached or the batch interval has elapsed. | 1000
-
-## Consumers
-
-Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on that topic/those topics. You can [configure](#consumer-configuration) Go consumers using a `ConsumerOptions` object. Here's a basic example that uses channels:
-
-```go
-
-msgChannel := make(chan pulsar.ConsumerMessage)
-
-consumerOpts := pulsar.ConsumerOptions{
-    Topic:            "my-topic",
-    SubscriptionName: "my-subscription-1",
-    Type:             pulsar.Exclusive,
-    MessageChannel:   msgChannel,
-}
-
-consumer, err := client.Subscribe(consumerOpts)
-
-if err != nil {
-    log.Fatalf("Could not establish subscription: %v", err)
-}
-
-defer consumer.Close()
-
-for cm := range msgChannel {
-    msg := cm.Message
-
-    fmt.Printf("Message ID: %s", msg.ID())
-    fmt.Printf("Message value: %s", string(msg.Payload()))
-
-    consumer.Ack(msg)
-}
-
-```
-
-> **Blocking operation**
-> When you create a new Pulsar consumer, the operation will block (on a go channel) until either a consumer is successfully created or an error is thrown.
-
-
-### Consumer operations
-
-Pulsar Go consumers have the following methods available:
-
-Method | Description | Return type
-:------|:------------|:-----------
-`Topic()` | Returns the consumer's [topic](reference-terminology.md#topic) | `string`
-`Subscription()` | Returns the consumer's subscription name | `string`
-`Unsubscribe()` | Unsubscribes the consumer from the assigned topic. Throws an error if the unsubscribe operation is somehow unsuccessful. | `error`
-`Receive(context.Context)` | Receives a single message from the topic. This method blocks until a message is available. | `(Message, error)`
-`Ack(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) | `error`
-`AckID(MessageID)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID | `error`
-`AckCumulative(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message. The `AckCumulative` method will block until the ack has been sent to the broker. After that, the messages will *not* be redelivered to the consumer. Cumulative acking cannot be used with a [shared](concepts-messaging.md#shared) subscription type. | `error`
-`AckCumulativeID(MessageID)` | Ack the reception of all the messages in the stream up to (and including) the provided message. This method will block until the acknowledge has been sent to the broker. After that, the messages will not be re-delivered to this consumer. | error
-`Nack(Message)` | Acknowledge the failure to process a single message. | `error`
-`NackID(MessageID)` | Acknowledge the failure to process a single message. | `error`
-`Close()` | Closes the consumer, disabling its ability to receive messages from the broker | `error`
-`RedeliverUnackedMessages()` | Redelivers *all* unacknowledged messages on the topic. In [failover](concepts-messaging.md#failover) mode, this request is ignored if the consumer isn't active on the specified topic; in [shared](concepts-messaging.md#shared) mode, redelivered messages are distributed across all consumers connected to the topic. **Note**: this is a *non-blocking* operation that doesn't throw an error. |
-`Seek(msgID MessageID)` | Resets the subscription associated with this consumer to a specific message ID. The message ID can either be a specific message or represent the first or last messages in the topic. | error
-
-#### Receive example
-
-Here's an example usage of a Go consumer that uses the `Receive()` method to process incoming messages:
-
-```go
-
-import (
-    "context"
-    "log"
-
-    "github.com/apache/pulsar/pulsar-client-go/pulsar"
-)
-
-func main() {
-    // Instantiate a Pulsar client
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-        URL: "pulsar://localhost:6650",
-    })
-
-    if err != nil { log.Fatal(err) }
-
-    // Use the client object to instantiate a consumer
-    consumer, err := client.Subscribe(pulsar.ConsumerOptions{
-        Topic:            "my-golang-topic",
-        SubscriptionName: "sub-1",
-        Type:             pulsar.Exclusive,
-    })
-
-    if err != nil { log.Fatal(err) }
-
-    defer consumer.Close()
-
-    ctx := context.Background()
-
-    // Listen indefinitely on the topic
-    for {
-        msg, err := consumer.Receive(ctx)
-        if err != nil { log.Fatal(err) }
-
-        // Do something with the message
-        err = processMessage(msg)
-
-        if err == nil {
-            // Message processed successfully
-            consumer.Ack(msg)
-        } else {
-            // Failed to process messages
-            consumer.Nack(msg)
-        }
-    }
-}
-
-```
-
-### Consumer configuration
-
-Parameter | Description | Default
-:---------|:------------|:-------
-`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the consumer will establish a subscription and listen for messages |
-`Topics` | Specify a list of topics this consumer will subscribe to. Either a topic, a list of topics, or a topics pattern is required when subscribing |
-`TopicsPattern` | Specify a regular expression to subscribe to multiple topics under the same namespace. Either a topic, a list of topics, or a topics pattern is required when subscribing |
-`SubscriptionName` | The subscription name for this consumer |
-`Properties` | Attach a set of application-defined properties to the consumer. These properties will be visible in the topic stats |
-`Name` | The name of the consumer |
-`AckTimeout` | Set the timeout for unacked messages | 0
-`NackRedeliveryDelay` | The delay after which to redeliver the messages that failed to be processed. Default is 1 minute. (See `Consumer.Nack()`) | 1 minute
-`Type` | Available options are `Exclusive`, `Shared`, and `Failover` | `Exclusive`
-`SubscriptionInitPos` | The initial position at which the cursor will be set when subscribing | Latest
-`MessageChannel` | The Go channel used by the consumer. Messages that arrive from the Pulsar topic(s) will be passed to this channel. |
-`ReceiverQueueSize` | Sets the size of the consumer's receiver queue, i.e. the number of messages that can be accumulated by the consumer before the application calls `Receive`. A value higher than the default of 1000 could increase consumer throughput, though at the expense of more memory utilization. | 1000
-`MaxTotalReceiverQueueSizeAcrossPartitions` | Set the max total receiver queue size across partitions. This setting will be used to reduce the receiver queue size for individual partitions if the total exceeds this value | 50000
-`ReadCompacted` | If enabled, the consumer will read messages from the compacted topic rather than reading the full message backlog of the topic. This means that, if the topic has been compacted, the consumer will only see the latest value for each key in the topic, up until the point in the topic message backlog that has been compacted. Beyond that point, the messages will be sent as normal. |
-
-## Readers
-
-Pulsar readers process messages from Pulsar topics. Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recent unacked message). You can [configure](#reader-configuration) Go readers using a `ReaderOptions` object. Here's an example:
-
-## Readers
-
-Pulsar readers process messages from Pulsar topics. Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recent unacked message). You can [configure](#reader-configuration) Go readers using a `ReaderOptions` object. Here's an example:
-
-```go
-
-reader, err := client.CreateReader(pulsar.ReaderOptions{
-    Topic:          "my-golang-topic",
-    StartMessageID: pulsar.LatestMessage,
-})
-
-```
-
-> **Blocking operation**
-> When you create a new Pulsar reader, the operation will block (on a go channel) until either a reader is successfully created or an error is thrown.
-
-
-### Reader operations
-
-Pulsar Go readers have the following methods available:
-
-Method | Description | Return type
-:------|:------------|:-----------
-`Topic()` | Returns the reader's [topic](reference-terminology.md#topic) | `string`
-`Next(context.Context)` | Receives the next message on the topic (analogous to the `Receive` method for [consumers](#consumer-operations)). This method blocks until a message is available. | `(Message, error)`
-`HasNext()` | Checks if there is any message available to read from the current position | `(bool, error)`
-`Close()` | Closes the reader, disabling its ability to receive messages from the broker | `error`
-
-#### "Next" example
-
-Here's an example usage of a Go reader that uses the `Next()` method to process incoming messages:
-
-```go
-
-import (
-    "context"
-    "log"
-
-    "github.com/apache/pulsar/pulsar-client-go/pulsar"
-)
-
-func main() {
-    // Instantiate a Pulsar client
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-        URL: "pulsar://localhost:6650",
-    })
-
-    if err != nil { log.Fatalf("Could not create client: %v", err) }
-
-    // Use the client to instantiate a reader
-    reader, err := client.CreateReader(pulsar.ReaderOptions{
-        Topic:          "my-golang-topic",
-        StartMessageID: pulsar.EarliestMessage,
-    })
-
-    if err != nil { log.Fatalf("Could not create reader: %v", err) }
-
-    defer reader.Close()
-
-    ctx := context.Background()
-
-    // Listen on the topic for incoming messages
-    for {
-        msg, err := reader.Next(ctx)
-        if err != nil { log.Fatalf("Error reading from topic: %v", err) }
-
-        // Process the message
-    }
-}
-
-```
-
-In the example above, the reader begins reading from the earliest available message (specified by `pulsar.EarliestMessage`). The reader can also begin reading from the latest message (`pulsar.LatestMessage`) or some other message ID specified by bytes using the `DeserializeMessageID` function, which takes a byte array and returns a `MessageID` object. Here's an example:
-
-```go
-
-// Read the last saved message ID from an external store as a byte slice
-var lastSavedID []byte
-
-reader, err := client.CreateReader(pulsar.ReaderOptions{
-    Topic:          "my-golang-topic",
-    StartMessageID: pulsar.DeserializeMessageID(lastSavedID),
-})
-
-```
-
-### Reader configuration
-
-Parameter | Description | Default
-:---------|:------------|:-------
-`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the reader will establish a subscription and listen for messages |
-`Name` | The name of the reader |
-`StartMessageID` | The initial reader position, i.e. the message at which the reader begins processing messages. The options are `pulsar.EarliestMessage` (the earliest available message on the topic), `pulsar.LatestMessage` (the latest available message on the topic), or a `MessageID` object for a position that isn't earliest or latest. |
-`MessageChannel` | The Go channel used by the reader. Messages that arrive from the Pulsar topic(s) will be passed to this channel. |
-`ReceiverQueueSize` | Sets the size of the reader's receiver queue, i.e. the number of messages that can be accumulated by the reader before the application calls `Next`. A value higher than the default of 1000 could increase reader throughput, though at the expense of more memory utilization. | 1000
-`SubscriptionRolePrefix` | The subscription role prefix. | `reader`
-`ReadCompacted` | If enabled, the reader will read messages from the compacted topic rather than reading the full message backlog of the topic. This means that, if the topic has been compacted, the reader will only see the latest value for each key in the topic, up until the point in the topic message backlog that has been compacted. Beyond that point, the messages will be sent as normal. |
-
-## Messages
-
-The Pulsar Go client provides a `ProducerMessage` interface that you can use to construct messages to produce on Pulsar topics. Here's an example message:
-
-```go
-
-msg := pulsar.ProducerMessage{
-    Payload: []byte("Here is some message data"),
-    Key:     "message-key",
-    Properties: map[string]string{
-        "foo": "bar",
-    },
-    EventTime:           time.Now(),
-    ReplicationClusters: []string{"cluster1", "cluster3"},
-}
-
-if err := producer.Send(context.Background(), msg); err != nil {
-    log.Fatalf("Could not publish message due to: %v", err)
-}
-
-```
-
-The following parameters are available for `ProducerMessage` objects:
-
-Parameter | Description
-:---------|:-----------
-`Payload` | The actual data payload of the message
-`Value` | The value for schema-based messages (`Value interface{}`). `Value` and `Payload` are mutually exclusive.
-`Key` | The optional key associated with the message (particularly useful for things like topic compaction)
-`Properties` | A key-value map (both keys and values must be strings) for any application-specific metadata attached to the message
-`EventTime` | The timestamp associated with the message
-`ReplicationClusters` | The clusters to which this message will be replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default.
-`SequenceID` | Set the sequence ID to assign to the current message
-
-## TLS encryption and authentication
-
-In order to use [TLS encryption](security-tls-transport.md), you'll need to configure your client to do so:
-
- * Use the `pulsar+ssl` URL type
- * Set `TLSTrustCertsFilePath` to the path to the TLS certs used by your client and the Pulsar broker
- * Configure the `Authentication` option
-
-Here's an example:
-
-```go
-
-opts := pulsar.ClientOptions{
-    URL: "pulsar+ssl://my-cluster.com:6651",
-    TLSTrustCertsFilePath: "/path/to/certs/my-cert.csr",
-    Authentication: pulsar.NewAuthenticationTLS("my-cert.pem", "my-key.pem"),
-}
-
-```
-
-## Schema
-
-This example shows how to create a producer and consumer with schema. 
- -```go - -var exampleSchemaDef = "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," + - "\"fields\":[{\"name\":\"ID\",\"type\":\"int\"},{\"name\":\"Name\",\"type\":\"string\"}]}" -jsonSchema := NewJsonSchema(exampleSchemaDef, nil) -// create producer -producer, err := client.CreateProducerWithSchema(ProducerOptions{ - Topic: "jsonTopic", -}, jsonSchema) -err = producer.Send(context.Background(), ProducerMessage{ - Value: &testJson{ - ID: 100, - Name: "pulsar", - }, -}) -if err != nil { - log.Fatal(err) -} -defer producer.Close() -//create consumer -var s testJson -consumerJS := NewJsonSchema(exampleSchemaDef, nil) -consumer, err := client.SubscribeWithSchema(ConsumerOptions{ - Topic: "jsonTopic", - SubscriptionName: "sub-2", -}, consumerJS) -if err != nil { - log.Fatal(err) -} -msg, err := consumer.Receive(context.Background()) -if err != nil { - log.Fatal(err) -} -err = msg.GetValue(&s) -if err != nil { - log.Fatal(err) -} -fmt.Println(s.ID) // output: 100 -fmt.Println(s.Name) // output: pulsar -defer consumer.Close() - -``` - diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/client-libraries-cpp.md b/site2/website/versioned_docs/version-2.9.2-deprecated/client-libraries-cpp.md deleted file mode 100644 index 455cf02116d502..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/client-libraries-cpp.md +++ /dev/null @@ -1,708 +0,0 @@ ---- -id: client-libraries-cpp -title: Pulsar C++ client -sidebar_label: "C++" -original_id: client-libraries-cpp ---- - -You can use Pulsar C++ client to create Pulsar producers and consumers in C++. - -All the methods in producer, consumer, and reader of a C++ client are thread-safe. - -## Supported platforms - -Pulsar C++ client is supported on **Linux** ,**MacOS** and **Windows** platforms. - -[Doxygen](http://www.doxygen.nl/)-generated API docs for the C++ client are available [here](/api/cpp). - -## System requirements - -You need to install the following components before using the C++ client: - -* [CMake](https://cmake.org/) -* [Boost](http://www.boost.org/) -* [Protocol Buffers](https://developers.google.com/protocol-buffers/) >= 3 -* [libcurl](https://curl.se/libcurl/) -* [Google Test](https://github.com/google/googletest) - -## Linux - -### Compilation - -1. Clone the Pulsar repository. - -```shell - -$ git clone https://github.com/apache/pulsar - -``` - -2. Install all necessary dependencies. - -```shell - -$ apt-get install cmake libssl-dev libcurl4-openssl-dev liblog4cxx-dev \ - libprotobuf-dev protobuf-compiler libboost-all-dev google-mock libgtest-dev libjsoncpp-dev - -``` - -3. Compile and install [Google Test](https://github.com/google/googletest). - -```shell - -# libgtest-dev version is 1.18.0 or above -$ cd /usr/src/googletest -$ sudo cmake . -$ sudo make -$ sudo cp ./googlemock/libgmock.a ./googlemock/gtest/libgtest.a /usr/lib/ - -# less than 1.18.0 -$ cd /usr/src/gtest -$ sudo cmake . -$ sudo make -$ sudo cp libgtest.a /usr/lib - -$ cd /usr/src/gmock -$ sudo cmake . -$ sudo make -$ sudo cp libgmock.a /usr/lib - -``` - -4. Compile the Pulsar client library for C++ inside the Pulsar repository. - -```shell - -$ cd pulsar-client-cpp -$ cmake . -$ make - -``` - -After you install the components successfully, the files `libpulsar.so` and `libpulsar.a` are in the `lib` folder of the repository. The tools `perfProducer` and `perfConsumer` are in the `perf` directory. - -### Install Dependencies - -> Since 2.1.0 release, Pulsar ships pre-built RPM and Debian packages. 
You can download and install those packages directly.
-
-After you download and install RPM or DEB, the `libpulsar.so`, `libpulsarnossl.so`, `libpulsar.a`, and `libpulsarwithdeps.a` libraries are in your `/usr/lib` directory.
-
-By default, they are built in the code path `${PULSAR_HOME}/pulsar-client-cpp`. You can build them with the command below.
-
- `cmake . -DBUILD_TESTS=OFF -DLINK_STATIC=ON && make pulsarShared pulsarSharedNossl pulsarStatic pulsarStaticWithDeps -j 3`.
-
-These libraries rely on some other libraries. For the detailed versions of the dependencies, see the [RPM](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/pkg/rpm/Dockerfile) or [DEB](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/pkg/deb/Dockerfile) files.
-
-1. `libpulsar.so` is a shared library, containing statically linked `boost` and `openssl`. It also dynamically links all other necessary libraries. You can use this Pulsar library with the command below.
-
-```bash
-
- g++ --std=c++11 PulsarTest.cpp -o test /usr/lib/libpulsar.so -I/usr/local/ssl/include
-
-```
-
-2. `libpulsarnossl.so` is a shared library, similar to `libpulsar.so` except that the libraries `openssl` and `crypto` are dynamically linked. You can use this Pulsar library with the command below.
-
-```bash
-
- g++ --std=c++11 PulsarTest.cpp -o test /usr/lib/libpulsarnossl.so -lssl -lcrypto -I/usr/local/ssl/include -L/usr/local/ssl/lib
-
-```
-
-3. `libpulsar.a` is a static library. You need to load its dependencies before using this library. You can use this Pulsar library with the command below.
-
-```bash
-
- g++ --std=c++11 PulsarTest.cpp -o test /usr/lib/libpulsar.a -lssl -lcrypto -ldl -lpthread -I/usr/local/ssl/include -L/usr/local/ssl/lib -lboost_system -lboost_regex -lcurl -lprotobuf -lzstd -lz
-
-```
-
-4. `libpulsarwithdeps.a` is a static library, based on `libpulsar.a`. It additionally archives the dependencies `libboost_regex`, `libboost_system`, `libcurl`, `libprotobuf`, `libzstd`, and `libz`. You can use this Pulsar library with the command below.
-
-```bash
-
- g++ --std=c++11 PulsarTest.cpp -o test /usr/lib/libpulsarwithdeps.a -lssl -lcrypto -ldl -lpthread -I/usr/local/ssl/include -L/usr/local/ssl/lib
-
-```
-
-`libpulsarwithdeps.a` does not include the OpenSSL libraries `libssl` and `libcrypto`, because these two libraries are security-related. It is more reasonable, and easier, to use the versions provided by the local system, so that security issues and library upgrades are handled there.
-
-### Install RPM
-
-1. Download an RPM package from the links in the table.
-
-| Link | Crypto files |
-|------|--------------|
-| [client](@pulsar:dist_rpm:client@) | [asc](@pulsar:dist_rpm:client@.asc), [sha512](@pulsar:dist_rpm:client@.sha512) |
-| [client-debuginfo](@pulsar:dist_rpm:client-debuginfo@) | [asc](@pulsar:dist_rpm:client-debuginfo@.asc), [sha512](@pulsar:dist_rpm:client-debuginfo@.sha512) |
-| [client-devel](@pulsar:dist_rpm:client-devel@) | [asc](@pulsar:dist_rpm:client-devel@.asc), [sha512](@pulsar:dist_rpm:client-devel@.sha512) |
-
-2. Install the package using the following command.
-
-```bash
-
-$ rpm -ivh apache-pulsar-client*.rpm
-
-```
-
-After you install RPM successfully, Pulsar libraries are in the `/usr/lib` directory.
-
-:::note
-
-If you get the error that `libpulsar.so: cannot open shared object file: No such file or directory` when starting the Pulsar client, you may need to run `ldconfig` first.
-
-:::
-
-### Install Debian
-
-1. Download a Debian package from the links in the table.
- -| Link | Crypto files | -|------|--------------| -| [client](@pulsar:deb:client@) | [asc](@pulsar:dist_deb:client@.asc), [sha512](@pulsar:dist_deb:client@.sha512) | -| [client-devel](@pulsar:deb:client-devel@) | [asc](@pulsar:dist_deb:client-devel@.asc), [sha512](@pulsar:dist_deb:client-devel@.sha512) | - -2. Install the package using the following command. - -```bash - -$ apt install ./apache-pulsar-client*.deb - -``` - -After you install DEB successfully, Pulsar libraries are in the `/usr/lib` directory. - -### Build - -> If you want to build RPM and Debian packages from the latest master, follow the instructions below. You should run all the instructions at the root directory of your cloned Pulsar repository. - -There are recipes that build RPM and Debian packages containing a -statically linked `libpulsar.so` / `libpulsarnossl.so` / `libpulsar.a` / `libpulsarwithdeps.a` with all required dependencies. - -To build the C++ library packages, you need to build the Java packages first. - -```shell - -mvn install -DskipTests - -``` - -#### RPM - -To build the RPM inside a Docker container, use the command below. The RPMs are in the `pulsar-client-cpp/pkg/rpm/RPMS/x86_64/` path. - -```shell - -pulsar-client-cpp/pkg/rpm/docker-build-rpm.sh - -``` - -| Package name | Content | -|-----|-----| -| pulsar-client | Shared library `libpulsar.so` and `libpulsarnossl.so` | -| pulsar-client-devel | Static library `libpulsar.a`, `libpulsarwithdeps.a`and C++ and C headers | -| pulsar-client-debuginfo | Debug symbols for `libpulsar.so` | - -#### Debian - -To build Debian packages, enter the following command. - -```shell - -pulsar-client-cpp/pkg/deb/docker-build-deb.sh - -``` - -Debian packages are created in the `pulsar-client-cpp/pkg/deb/BUILD/DEB/` path. - -| Package name | Content | -|-----|-----| -| pulsar-client | Shared library `libpulsar.so` and `libpulsarnossl.so` | -| pulsar-client-dev | Static library `libpulsar.a`, `libpulsarwithdeps.a` and C++ and C headers | - -## MacOS - -### Compilation - -1. Clone the Pulsar repository. - -```shell - -$ git clone https://github.com/apache/pulsar - -``` - -2. Install all necessary dependencies. - -```shell - -# OpenSSL installation -$ brew install openssl -$ export OPENSSL_INCLUDE_DIR=/usr/local/opt/openssl/include/ -$ export OPENSSL_ROOT_DIR=/usr/local/opt/openssl/ - -# Protocol Buffers installation -$ brew install protobuf boost boost-python log4cxx -# If you are using python3, you need to install boost-python3 - -# Google Test installation -$ git clone https://github.com/google/googletest.git -$ cd googletest -$ git checkout release-1.12.1 -$ cmake . -$ make install - -``` - -3. Compile the Pulsar client library in the repository that you cloned. - -```shell - -$ cd pulsar-client-cpp -$ cmake . -$ make - -``` - -### Install `libpulsar` - -Pulsar releases are available in the [Homebrew](https://brew.sh/) core repository. You can install the C++ client library with the following command. The package is installed with the library and headers. - -```shell - -brew install libpulsar - -``` - -## Windows (64-bit) - -### Compilation - -1. Clone the Pulsar repository. - -```shell - -$ git clone https://github.com/apache/pulsar - -``` - -2. Install all necessary dependencies. - -```shell - -cd ${PULSAR_HOME}/pulsar-client-cpp -vcpkg install --feature-flags=manifests --triplet x64-windows - -``` - -3. Build C++ libraries. 
-
-```shell
-
-cmake -B ./build -A x64 -DBUILD_PYTHON_WRAPPER=OFF -DBUILD_TESTS=OFF -DVCPKG_TRIPLET=x64-windows -DCMAKE_BUILD_TYPE=Release -S .
-cmake --build ./build --config Release
-
-```
-
-> **NOTE**
->
-> 1. For Windows 32-bit, you need to use `-A Win32` and `-DVCPKG_TRIPLET=x86-windows`.
-> 2. For MSVC Debug mode, you need to replace `Release` with `Debug` for both the `CMAKE_BUILD_TYPE` variable and the `--config` option.
-
-4. Client libraries are available in the following places.
-
-```
-
-${PULSAR_HOME}/pulsar-client-cpp/build/lib/Release/pulsar.lib
-${PULSAR_HOME}/pulsar-client-cpp/build/lib/Release/pulsar.dll
-
-```
-
-## Connection URLs
-
-To connect to Pulsar using client libraries, you need to specify a Pulsar protocol URL.
-
-Pulsar protocol URLs are assigned to specific clusters and use the `pulsar` URI scheme. The default port is `6650`. The following is an example for localhost.
-
-```http
-
-pulsar://localhost:6650
-
-```
-
-In a Pulsar cluster in production, the URL looks as follows.
-
-```http
-
-pulsar://pulsar.us-west.example.com:6650
-
-```
-
-If you use TLS authentication, the URL scheme is `pulsar+ssl` and the default port is `6651`. The following is an example.
-
-```http
-
-pulsar+ssl://pulsar.us-west.example.com:6651
-
-```
-
-## Create a consumer
-
-To use Pulsar as a consumer, you need to create a consumer on the C++ client. There are two main ways of using the consumer:
-- [Blocking style](#blocking-example): synchronously calling `receive(msg)`.
-- [Non-blocking](#consumer-with-a-message-listener) (event based) style: using a message listener.
-
-### Blocking example
-
-The benefit of this approach is that it is the simplest code. You simply keep calling `receive(msg)`, which blocks until a message is received.
-
-This example starts a subscription at the earliest offset and consumes 100 messages.
-
-```c++
-
-#include <pulsar/Client.h>
-#include <iostream>
-
-using namespace pulsar;
-
-int main() {
-    Client client("pulsar://localhost:6650");
-
-    Consumer consumer;
-    ConsumerConfiguration config;
-    config.setSubscriptionInitialPosition(InitialPositionEarliest);
-    Result result = client.subscribe("persistent://public/default/my-topic", "consumer-1", config, consumer);
-    if (result != ResultOk) {
-        std::cout << "Failed to subscribe: " << result << std::endl;
-        return -1;
-    }
-
-    Message msg;
-    int ctr = 0;
-    // consume 100 messages
-    while (ctr < 100) {
-        consumer.receive(msg);
-        std::cout << "Received: " << msg
-                  << " with payload '" << msg.getDataAsString() << "'" << std::endl;
-
-        consumer.acknowledge(msg);
-        ctr++;
-    }
-
-    std::cout << "Finished consuming synchronously!" << std::endl;
-
-    client.close();
-    return 0;
-}
-
-```
-
-### Consumer with a message listener
-
-You can avoid running a loop with blocking calls by using an event-based style with a message listener, which is invoked for each message that is received.
-
-This example starts a subscription at the earliest offset and consumes 100 messages.
-
-```c++
-
-#include <pulsar/Client.h>
-#include <atomic>
-#include <thread>
-#include <iostream>
-
-using namespace pulsar;
-
-std::atomic<int> messagesReceived;
-
-void handleAckComplete(Result res) {
-    std::cout << "Ack res: " << res << std::endl;
-}
-
-void listener(Consumer consumer, const Message& msg) {
-    std::cout << "Got message " << msg << " with content '" << msg.getDataAsString() << "'" << std::endl;
-    messagesReceived++;
-    consumer.acknowledgeAsync(msg.getMessageId(), handleAckComplete);
-}
-
-int main() {
-    Client client("pulsar://localhost:6650");
-
-    Consumer consumer;
-    ConsumerConfiguration config;
-    config.setMessageListener(listener);
-    config.setSubscriptionInitialPosition(InitialPositionEarliest);
-    Result result = client.subscribe("persistent://public/default/my-topic", "consumer-1", config, consumer);
-    if (result != ResultOk) {
-        std::cout << "Failed to subscribe: " << result << std::endl;
-        return -1;
-    }
-
-    // wait for 100 messages to be consumed
-    while (messagesReceived < 100) {
-        std::this_thread::sleep_for(std::chrono::milliseconds(100));
-    }
-
-    std::cout << "Finished consuming asynchronously!" << std::endl;
-
-    client.close();
-    return 0;
-}
-
-```
-
-## Create a producer
-
-To use Pulsar as a producer, you need to create a producer on the C++ client. There are two main ways of using a producer:
-- [Blocking style](#simple-blocking-example): each call to `send` waits for an ack from the broker.
-- [Non-blocking asynchronous style](#non-blocking-example): `sendAsync` is called instead of `send` and a callback is supplied for when the ack is received from the broker.
-
-### Simple blocking example
-
-This example sends 100 messages using the blocking style. While simple, it does not produce high throughput as it waits for each ack to come back before sending the next message.
-
-```c++
-
-#include <pulsar/Client.h>
-#include <thread>
-#include <iostream>
-
-using namespace pulsar;
-
-int main() {
-    Client client("pulsar://localhost:6650");
-
-    Producer producer;
-    Result result = client.createProducer("persistent://public/default/my-topic", producer);
-    if (result != ResultOk) {
-        std::cout << "Error creating producer: " << result << std::endl;
-        return -1;
-    }
-
-    // Send 100 messages synchronously
-    int ctr = 0;
-    while (ctr < 100) {
-        std::string content = "msg" + std::to_string(ctr);
-        Message msg = MessageBuilder().setContent(content).setProperty("x", "1").build();
-        Result result = producer.send(msg);
-        if (result != ResultOk) {
-            std::cout << "The message " << content << " could not be sent, received code: " << result << std::endl;
-        } else {
-            std::cout << "The message " << content << " sent successfully" << std::endl;
-        }
-
-        std::this_thread::sleep_for(std::chrono::milliseconds(100));
-        ctr++;
-    }
-
-    std::cout << "Finished producing synchronously!" << std::endl;
-
-    client.close();
-    return 0;
-}
-
-```
-
-### Non-blocking example
-
-This example sends 100 messages using the non-blocking style, calling `sendAsync` instead of `send`. This allows the producer to have multiple messages in flight at a time, which increases throughput.
-
-The producer configuration `blockIfQueueFull` is useful here to avoid `ResultProducerQueueIsFull` errors when the internal queue for outgoing send requests becomes full. Once the internal queue is full, `sendAsync` becomes blocking, which can make your code simpler.
-
-Without this configuration, the result code `ResultProducerQueueIsFull` is passed to the callback. You must decide how to deal with that (retry, discard, etc.).
-
-```c++
-
-#include <pulsar/Client.h>
-#include <atomic>
-#include <thread>
-#include <functional>
-#include <iostream>
-
-using namespace pulsar;
-
-std::atomic<int> acksReceived;
-
-void callback(Result code, const MessageId& msgId, std::string msgContent) {
-    // message processing logic here
-    std::cout << "Received ack for msg: " << msgContent << " with code: "
-              << code << " -- MsgID: " << msgId << std::endl;
-    acksReceived++;
-}
-
-int main() {
-    Client client("pulsar://localhost:6650");
-
-    ProducerConfiguration producerConf;
-    producerConf.setBlockIfQueueFull(true);
-    Producer producer;
-    Result result = client.createProducer("persistent://public/default/my-topic",
-                                          producerConf, producer);
-    if (result != ResultOk) {
-        std::cout << "Error creating producer: " << result << std::endl;
-        return -1;
-    }
-
-    // Send 100 messages asynchronously
-    int ctr = 0;
-    while (ctr < 100) {
-        std::string content = "msg" + std::to_string(ctr);
-        Message msg = MessageBuilder().setContent(content).setProperty("x", "1").build();
-        producer.sendAsync(msg, std::bind(callback,
-                                          std::placeholders::_1, std::placeholders::_2, content));
-
-        std::this_thread::sleep_for(std::chrono::milliseconds(100));
-        ctr++;
-    }
-
-    // wait for 100 messages to be acked
-    while (acksReceived < 100) {
-        std::this_thread::sleep_for(std::chrono::milliseconds(100));
-    }
-
-    std::cout << "Finished producing asynchronously!" << std::endl;
-
-    client.close();
-    return 0;
-}
-
-```
-
-### Partitioned topics and lazy producers
-
-When scaling out a Pulsar topic, you may configure a topic to have hundreds of partitions. Likewise, you may have also scaled out your producers so there are hundreds or even thousands of producers. This can put some strain on the Pulsar brokers: when you create a producer on a partitioned topic, the client internally creates one producer per partition, each of which involves communication with the brokers. So for a topic with 1000 partitions and 1000 producers, it ends up creating 1,000,000 internal producers across the producer applications, each of which has to communicate with a broker to find out which broker it should connect to and then perform the connection handshake.
-
-You can reduce the load caused by this combination of a large number of partitions and many producers by doing the following:
-- use SinglePartition partition routing mode (this ensures that all messages are only sent to a single, randomly selected partition)
-- use non-keyed messages (when messages are keyed, routing is based on the hash of the key and so messages will end up being sent to multiple partitions)
-- use lazy producers (this ensures that an internal producer is only created on demand when a message needs to be routed to a partition)
-
-With our example above, that reduces the number of internal producers spread out over the 1000 producer apps from 1,000,000 to just 1000.
-
-Note that there can be extra latency for the first message sent. If you set a low send timeout, this timeout could be reached if the initial connection handshake is slow to complete.
-
-```c++
-
-ProducerConfiguration producerConf;
-producerConf.setPartitionsRoutingMode(ProducerConfiguration::UseSinglePartition);
-producerConf.setLazyStartPartitionedProducers(true);
-
-```
-
-## Enable authentication in connection URLs
-If you use TLS authentication when connecting to Pulsar, the connection URL uses the `pulsar+ssl` scheme and the default port is `6651`. The following is an example.
-
-```cpp
-
-ClientConfiguration config = ClientConfiguration();
-config.setUseTls(true);
-config.setTlsTrustCertsFilePath("/path/to/cacert.pem");
-config.setTlsAllowInsecureConnection(false);
-config.setAuth(pulsar::AuthTls::create(
-    "/path/to/client-cert.pem", "/path/to/client-key.pem"));
-
-Client client("pulsar+ssl://my-broker.com:6651", config);
-
-```
-
-For complete examples, refer to [C++ client examples](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp/examples).
-
-## Schema
-
-This section describes some examples about schema. For more information about schema, see [Pulsar schema](schema-get-started.md).
-
-### Avro schema
-
-- The following example shows how to create a producer with an Avro schema.
-
-  ```cpp
-
-  static const std::string exampleSchema =
-      "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\","
-      "\"fields\":[{\"name\":\"a\",\"type\":\"int\"},{\"name\":\"b\",\"type\":\"int\"}]}";
-  Producer producer;
-  ProducerConfiguration producerConf;
-  producerConf.setSchema(SchemaInfo(AVRO, "Avro", exampleSchema));
-  client.createProducer("topic-avro", producerConf, producer);
-
-  ```
-
-- The following example shows how to create a consumer with an Avro schema.
-
-  ```cpp
-
-  static const std::string exampleSchema =
-      "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\","
-      "\"fields\":[{\"name\":\"a\",\"type\":\"int\"},{\"name\":\"b\",\"type\":\"int\"}]}";
-  ConsumerConfiguration consumerConf;
-  Consumer consumer;
-  consumerConf.setSchema(SchemaInfo(AVRO, "Avro", exampleSchema));
-  client.subscribe("topic-avro", "sub-2", consumerConf, consumer);
-
-  ```
-
-### ProtobufNative schema
-
-The following example shows how to create a producer and a consumer with a ProtobufNative schema.
-
-1. Generate the `User` class using Protobuf3.
-
-   :::note
-
-   You need to use Protobuf3 or later versions.
-
-   :::
-
-   ```protobuf
-
-   syntax = "proto3";
-
-   message User {
-       string name = 1;
-       int32 age = 2;
-   }
-
-   ```
-
-2. Include the `ProtobufNativeSchema.h` in your source code. Ensure the Protobuf dependency has been added to your project.
-
-   ```c++
-
-   #include <pulsar/ProtobufNativeSchema.h>
-
-   ```
-
-3. Create a producer to send a `User` instance.
-
-   ```c++
-
-   ProducerConfiguration producerConf;
-   producerConf.setSchema(createProtobufNativeSchema(User::GetDescriptor()));
-   Producer producer;
-   client.createProducer("topic-protobuf", producerConf, producer);
-   User user;
-   user.set_name("my-name");
-   user.set_age(10);
-   std::string content;
-   user.SerializeToString(&content);
-   producer.send(MessageBuilder().setContent(content).build());
-
-   ```
-
-4. Create a consumer to receive a `User` instance.
-
-   ```c++
-
-   ConsumerConfiguration consumerConf;
-   consumerConf.setSchema(createProtobufNativeSchema(User::GetDescriptor()));
-   consumerConf.setSubscriptionInitialPosition(InitialPositionEarliest);
-   Consumer consumer;
-   client.subscribe("topic-protobuf", "my-sub", consumerConf, consumer);
-   Message msg;
-   consumer.receive(msg);
-   User user2;
-   user2.ParseFromArray(msg.getData(), msg.getLength());
-
-   ```
-
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/client-libraries-dotnet.md b/site2/website/versioned_docs/version-2.9.2-deprecated/client-libraries-dotnet.md
deleted file mode 100644
index b574fa0b2e5ed8..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/client-libraries-dotnet.md
+++ /dev/null
@@ -1,434 +0,0 @@
----
-id: client-libraries-dotnet
-title: Pulsar C# client
-sidebar_label: "C#"
-original_id: client-libraries-dotnet
----
-
-You can use the Pulsar C# client (DotPulsar) to create Pulsar producers and consumers in C#. All the methods in the producer, consumer, and reader of a C# client are thread-safe. The official documentation for DotPulsar is available [here](https://github.com/apache/pulsar-dotpulsar/wiki).
-
-## Installation
-
-You can install the Pulsar C# client library either through the dotnet CLI or through Visual Studio. This section describes how to install the Pulsar C# client library through the dotnet CLI. For information about how to install the Pulsar C# client library through Visual Studio, see [here](https://docs.microsoft.com/en-us/visualstudio/mac/nuget-walkthrough?view=vsmac-2019).
-
-### Prerequisites
-
-Install the [.NET Core SDK](https://dotnet.microsoft.com/download/), which provides the dotnet command-line tool. Starting in Visual Studio 2017, the dotnet CLI is automatically installed with any .NET Core related workloads.
-
-### Procedures
-
-To install the Pulsar C# client library, follow these steps:
-
-1. Create a project.
-
-   1. Create a folder for the project.
-
-   2. Open a terminal window and switch to the new folder.
-
-   3. Create the project using the following command.
-
-      ```
-
-      dotnet new console
-
-      ```
-
-   4. Use `dotnet run` to test that the app has been created properly.
-
-2. Add the DotPulsar NuGet package.
-
-   1. Use the following command to install the `DotPulsar` package.
-
-      ```
-
-      dotnet add package DotPulsar
-
-      ```
-
-   2. After the command completes, open the `.csproj` file to see the added reference.
-
-      ```xml
-
-      <ItemGroup>
-        <PackageReference Include="DotPulsar" Version="..." />
-      </ItemGroup>
-
-      ```
-
-## Client
-
-This section describes some configuration examples for the Pulsar C# client.
-
-### Create client
-
-This example shows how to create a Pulsar C# client connected to localhost.
-
-```c#
-
-var client = PulsarClient.Builder().Build();
-
-```
-
-To create a Pulsar C# client by using the builder, you can specify the following options.
-
-| Option | Description | Default |
-| ---- | ---- | ---- |
-| ServiceUrl | Set the service URL for the Pulsar cluster. | pulsar://localhost:6650 |
-| RetryInterval | Set the time to wait before retrying an operation or a reconnection. | 3s |
-
-### Create producer
-
-This section describes how to create a producer.
-
-- Create a producer by using the builder.
-
-  ```c#
-
-  var producer = client.NewProducer()
-      .Topic("persistent://public/default/mytopic")
-      .Create();
-
-  ```
-
-- Create a producer without using the builder.
-
-  ```c#
-
-  var options = new ProducerOptions("persistent://public/default/mytopic");
-  var producer = client.CreateProducer(options);
-
-  ```
-
-### Create consumer
-
-This section describes how to create a consumer.
-
-- Create a consumer by using the builder.
-
-  ```c#
-
-  var consumer = client.NewConsumer()
-      .SubscriptionName("MySubscription")
-      .Topic("persistent://public/default/mytopic")
-      .Create();
-
-  ```
-
-- Create a consumer without using the builder.
-
-  ```c#
-
-  var options = new ConsumerOptions("MySubscription", "persistent://public/default/mytopic");
-  var consumer = client.CreateConsumer(options);
-
-  ```
-
-### Create reader
-
-This section describes how to create a reader.
-
-- Create a reader by using the builder.
-
-  ```c#
-
-  var reader = client.NewReader()
-      .StartMessageId(MessageId.Earliest)
-      .Topic("persistent://public/default/mytopic")
-      .Create();
-
-  ```
-
-- Create a reader without using the builder.
-
-  ```c#
-
-  var options = new ReaderOptions(MessageId.Earliest, "persistent://public/default/mytopic");
-  var reader = client.CreateReader(options);
-
-  ```
-
-### Configure encryption policies
-
-The Pulsar C# client supports four kinds of encryption policies:
-
-- `EnforceUnencrypted`: always use unencrypted connections.
-- `EnforceEncrypted`: always use encrypted connections.
-- `PreferUnencrypted`: use unencrypted connections, if possible.
-- `PreferEncrypted`: use encrypted connections, if possible.
-
-This example shows how to set the `EnforceEncrypted` encryption policy.
-
-```c#
-
-var client = PulsarClient.Builder()
-    .ConnectionSecurity(EncryptionPolicy.EnforceEncrypted)
-    .Build();
-
-```
-
-### Configure authentication
-
-Currently, the Pulsar C# client supports TLS (Transport Layer Security) and JWT (JSON Web Token) authentication.
-
-If you have followed [Authentication using TLS](security-tls-authentication.md), you get a certificate and a key. To use them from the Pulsar C# client, follow these steps:
-
-1. Create an unencrypted and password-less pfx file.
-
-   ```bash
-
-   openssl pkcs12 -export -keypbe NONE -certpbe NONE -out admin.pfx -inkey admin.key.pem -in admin.cert.pem -passout pass:
-
-   ```
-
-2. Use the admin.pfx file to create an X509Certificate2 and pass it to the Pulsar C# client.
-
-   ```c#
-
-   var clientCertificate = new X509Certificate2("admin.pfx");
-   var client = PulsarClient.Builder()
-       .AuthenticateUsingClientCertificate(clientCertificate)
-       .Build();
-
-   ```
-
-## Producer
-
-A producer is a process that attaches to a topic and publishes messages to a Pulsar broker for processing. This section describes some configuration examples about the producer.
-
-### Send data
-
-This example shows how to send data.
-
-```c#
-
-var data = Encoding.UTF8.GetBytes("Hello World");
-await producer.Send(data);
-
-```
-
-### Send messages with customized metadata
-
-- Send messages with customized metadata by using the builder.
-
-  ```c#
-
-  var data = Encoding.UTF8.GetBytes("Hello World");
-  var messageId = await producer.NewMessage()
-      .Property("SomeKey", "SomeValue")
-      .Send(data);
-
-  ```
-
-- Send messages with customized metadata without using the builder.
-
-  ```c#
-
-  var data = Encoding.UTF8.GetBytes("Hello World");
-  var metadata = new MessageMetadata();
-  metadata["SomeKey"] = "SomeValue";
-  var messageId = await producer.Send(metadata, data);
-
-  ```
-
-## Consumer
-
-A consumer is a process that attaches to a topic through a subscription and then receives messages. This section describes some configuration examples about the consumer.
-
-### Receive messages
-
-This example shows how a consumer receives messages from a topic.
-
-```c#
-
-await foreach (var message in consumer.Messages())
-{
-    Console.WriteLine("Received: " + Encoding.UTF8.GetString(message.Data.ToArray()));
-}
-
-```
-
-### Acknowledge messages
-
-Messages can be acknowledged individually or cumulatively. For details about message acknowledgement, see [acknowledgement](concepts-messaging.md#acknowledgement).
-
-- Acknowledge messages individually.
-
-  ```c#
-
-  await foreach (var message in consumer.Messages())
-  {
-      Console.WriteLine("Received: " + Encoding.UTF8.GetString(message.Data.ToArray()));
-      await consumer.Acknowledge(message);
-  }
-
-  ```
-
-- Acknowledge messages cumulatively.
-
-  ```c#
-
-  await consumer.AcknowledgeCumulative(message);
-
-  ```
-
-### Unsubscribe from topics
-
-This example shows how a consumer unsubscribes from a topic.
-
-```c#
-
-await consumer.Unsubscribe();
-
-```
-
-#### Note
-
-> A consumer cannot be used after it unsubscribes from a topic; it is disposed.
-
-## Reader
-
-A reader is actually just a consumer without a cursor. This means that Pulsar does not keep track of your progress and there is no need to acknowledge messages.
-
-This example shows how a reader receives messages.
-
-```c#
-
-await foreach (var message in reader.Messages())
-{
-    Console.WriteLine("Received: " + Encoding.UTF8.GetString(message.Data.ToArray()));
-}
-
-```
-
-## Monitoring
-
-This section describes how to monitor the producer, consumer, and reader state.
-
-### Monitor producer state
-
-The following table lists states available for the producer.
-
-| State | Description |
-| ---- | ----|
-| Closed | The producer or the Pulsar client has been disposed. |
-| Connected | All is well. |
-| Disconnected | The connection is lost and attempts are being made to reconnect. |
-| Faulted | An unrecoverable error has occurred. |
-
-This example shows how to monitor the producer state.
-
-```c#
-
-private static async ValueTask Monitor(IProducer producer, CancellationToken cancellationToken)
-{
-    var state = ProducerState.Disconnected;
-
-    while (!cancellationToken.IsCancellationRequested)
-    {
-        state = await producer.StateChangedFrom(state, cancellationToken);
-
-        var stateMessage = state switch
-        {
-            ProducerState.Connected => "The producer is connected",
-            ProducerState.Disconnected => "The producer is disconnected",
-            ProducerState.Closed => "The producer has closed",
-            ProducerState.Faulted => "The producer has faulted",
-            _ => $"The producer has an unknown state '{state}'"
-        };
-
-        Console.WriteLine(stateMessage);
-
-        if (producer.IsFinalState(state))
-            return;
-    }
-}
-
-```
-
-### Monitor consumer state
-
-The following table lists states available for the consumer.
-
-| State | Description |
-| ---- | ----|
-| Active | All is well. |
-| Inactive | All is well. The subscription type is `Failover` and you are not the active consumer. |
-| Closed | The consumer or the Pulsar client has been disposed. |
-| Disconnected | The connection is lost and attempts are being made to reconnect. |
-| Faulted | An unrecoverable error has occurred. |
-| ReachedEndOfTopic | No more messages are delivered. |
-
-This example shows how to monitor the consumer state.
- -```c# - -private static async ValueTask Monitor(IConsumer consumer, CancellationToken cancellationToken) -{ - var state = ConsumerState.Disconnected; - - while (!cancellationToken.IsCancellationRequested) - { - state = await consumer.StateChangedFrom(state, cancellationToken); - - var stateMessage = state switch - { - ConsumerState.Active => "The consumer is active", - ConsumerState.Inactive => "The consumer is inactive", - ConsumerState.Disconnected => "The consumer is disconnected", - ConsumerState.Closed => "The consumer has closed", - ConsumerState.ReachedEndOfTopic => "The consumer has reached end of topic", - ConsumerState.Faulted => "The consumer has faulted", - _ => $"The consumer has an unknown state '{state}'" - }; - - Console.WriteLine(stateMessage); - - if (consumer.IsFinalState(state)) - return; - } -} - -``` - -### Monitor reader state - -The following table lists states available for the reader. - -| State | Description | -| ---- | ----| -| Closed | The reader or the Pulsar client has been disposed. | -| Connected | All is well. | -| Disconnected | The connection is lost and attempts are being made to reconnect. -| Faulted | An unrecoverable error has occurred. | -| ReachedEndOfTopic | No more messages are delivered. | - -This example shows how to monitor the reader state. - -```c# - -private static async ValueTask Monitor(IReader reader, CancellationToken cancellationToken) -{ - var state = ReaderState.Disconnected; - - while (!cancellationToken.IsCancellationRequested) - { - state = await reader.StateChangedFrom(state, cancellationToken); - - var stateMessage = state switch - { - ReaderState.Connected => "The reader is connected", - ReaderState.Disconnected => "The reader is disconnected", - ReaderState.Closed => "The reader has closed", - ReaderState.ReachedEndOfTopic => "The reader has reached end of topic", - ReaderState.Faulted => "The reader has faulted", - _ => $"The reader has an unknown state '{state}'" - }; - - Console.WriteLine(stateMessage); - - if (reader.IsFinalState(state)) - return; - } -} - -``` - diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/client-libraries-go.md b/site2/website/versioned_docs/version-2.9.2-deprecated/client-libraries-go.md deleted file mode 100644 index 6281b03dd8c805..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/client-libraries-go.md +++ /dev/null @@ -1,885 +0,0 @@ ---- -id: client-libraries-go -title: Pulsar Go client -sidebar_label: "Go" -original_id: client-libraries-go ---- - -> Tips: Currently, the CGo client will be deprecated, if you want to know more about the CGo client, please refer to [CGo client docs](client-libraries-cgo.md) - -You can use Pulsar [Go client](https://github.com/apache/pulsar-client-go) to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Go (aka Golang). - -> **API docs available as well** -> For standard API docs, consult the [Godoc](https://godoc.org/github.com/apache/pulsar-client-go/pulsar). - - -## Installation - -### Install go package - -You can install the `pulsar` library locally using `go get`. - -```bash - -$ go get -u "github.com/apache/pulsar-client-go/pulsar" - -``` - -Once installed locally, you can import it into your project: - -```go - -import "github.com/apache/pulsar-client-go/pulsar" - -``` - -## Connection URLs - -To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL. 
-
-Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme, and have a default port of 6650. Here's an example for `localhost`:
-
-```http
-
-pulsar://localhost:6650
-
-```
-
-If you have multiple brokers, you can set the URL as below.
-
-```
-
-pulsar://localhost:6650,localhost:6651,localhost:6652
-
-```
-
-A URL for a production Pulsar cluster may look something like this:
-
-```http
-
-pulsar://pulsar.us-west.example.com:6650
-
-```
-
-If you're using [TLS](security-tls-authentication.md) authentication, the URL will look something like this:
-
-```http
-
-pulsar+ssl://pulsar.us-west.example.com:6651
-
-```
-
-## Create a client
-
-In order to interact with Pulsar, you'll first need a `Client` object. You can create a client object using the `NewClient` function, passing in a `ClientOptions` object (more on configuration [below](#client-configuration)). Here's an example:
-
-```go
-
-import (
-    "log"
-    "time"
-
-    "github.com/apache/pulsar-client-go/pulsar"
-)
-
-func main() {
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-        URL:               "pulsar://localhost:6650",
-        OperationTimeout:  30 * time.Second,
-        ConnectionTimeout: 30 * time.Second,
-    })
-    if err != nil {
-        log.Fatalf("Could not instantiate Pulsar client: %v", err)
-    }
-
-    defer client.Close()
-}
-
-```
-
-If you have multiple brokers, you can initiate a client object as below.
-
-```go
-
-import (
-    "log"
-    "time"
-
-    "github.com/apache/pulsar-client-go/pulsar"
-)
-
-func main() {
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-        URL:               "pulsar://localhost:6650,localhost:6651,localhost:6652",
-        OperationTimeout:  30 * time.Second,
-        ConnectionTimeout: 30 * time.Second,
-    })
-    if err != nil {
-        log.Fatalf("Could not instantiate Pulsar client: %v", err)
-    }
-
-    defer client.Close()
-}
-
-```
-
-The following configurable parameters are available for Pulsar clients:
-
- Name | Description | Default
-| :-------- | :---------- |:---------- |
-| URL | Configure the service URL for the Pulsar service. If you have multiple brokers, you can set multiple Pulsar cluster addresses for a client. This parameter is **required**. | None |
-| ConnectionTimeout | Timeout for the establishment of a TCP connection | 30s |
-| OperationTimeout | Set the operation timeout. Producer-create, subscribe, and unsubscribe operations will be retried until this interval, after which the operation will be marked as failed | 30s |
-| Authentication | Configure the authentication provider. Example: `Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem")` | no authentication |
-| TLSTrustCertsFilePath | Set the path to the trusted TLS certificate file | |
-| TLSAllowInsecureConnection | Configure whether the Pulsar client accepts untrusted TLS certificates from the broker | false |
-| TLSValidateHostname | Configure whether the Pulsar client verifies the validity of the host name from the broker | false |
-| ListenerName | Configure the net model for VPC users to connect to the Pulsar broker | |
-| MaxConnectionsPerBroker | Max number of connections to a single broker that is kept in the pool | 1 |
-| CustomMetricsLabels | Add custom labels to all the metrics reported by this client instance | |
-| Logger | Configure the logger used by the client | logrus.StandardLogger |
-
-## Producers
-
-Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Go producers using a `ProducerOptions` object. Here's an example:
-
-```go
-
-producer, err := client.CreateProducer(pulsar.ProducerOptions{
-    Topic: "my-topic",
-})
-
-if err != nil {
-    log.Fatal(err)
-}
-defer producer.Close()
-
-_, err = producer.Send(context.Background(), &pulsar.ProducerMessage{
-    Payload: []byte("hello"),
-})
-
-if err != nil {
-    fmt.Println("Failed to publish message", err)
-}
-fmt.Println("Published message")
-
-```
-
-### Producer operations
-
-Pulsar Go producers have the following methods available:
-
-Method | Description | Return type
-:------|:------------|:-----------
-`Topic()` | Fetches the producer's [topic](reference-terminology.md#topic) | `string`
-`Name()` | Fetches the producer's name | `string`
-`Send(context.Context, *ProducerMessage)` | Publishes a [message](#messages) to the producer's topic. This call blocks until the message is successfully acknowledged by the Pulsar broker, or an error is thrown if the timeout set using `SendTimeout` in the producer's [configuration](#producer-configuration) is exceeded. | `(MessageID, error)`
-`SendAsync(context.Context, *ProducerMessage, func(MessageID, *ProducerMessage, error))` | Publishes a message asynchronously. The provided callback is invoked once the message is successfully acknowledged by the Pulsar broker, or with an error if the send fails. |
-`LastSequenceID()` | Gets the last sequence ID that was published by this producer. This represents either the automatically assigned or custom sequence ID (set on the `ProducerMessage`) that was published and acknowledged by the broker. | `int64`
-`Flush()` | Flushes all the messages buffered in the client and waits until all messages have been successfully persisted. | `error`
-`Close()` | Closes the producer and releases all resources allocated to it. If `Close()` is called then no more messages will be accepted from the publisher. This method blocks until all pending publish requests have been persisted by Pulsar. If an error is thrown, no pending writes will be retried. |
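-
-The `SendAsync` and `Flush` entries in the table above are easy to gloss over, so here is a minimal sketch of asynchronous publishing that reuses the `producer` created earlier (the callback signature is the one listed in the table; the topic and payload are illustrative):
-
-```go
-
-// Publish without blocking; the callback fires once the broker acknowledges
-// the message or the send fails.
-producer.SendAsync(context.Background(), &pulsar.ProducerMessage{
-    Payload: []byte("hello-async"),
-}, func(id pulsar.MessageID, msg *pulsar.ProducerMessage, err error) {
-    if err != nil {
-        log.Printf("Failed to publish: %v", err)
-        return
-    }
-    log.Printf("Published message with ID: %v", id)
-})
-
-// Flush blocks until every message buffered by the client, including the
-// one above, has been persisted by the broker.
-if err := producer.Flush(); err != nil {
-    log.Fatal(err)
-}
-
-```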
-
-### Producer Example
-
-#### How to use message router in producer
-
-```go
-
-client, err := pulsar.NewClient(pulsar.ClientOptions{
-    URL: "pulsar://localhost:6650",
-})
-
-if err != nil {
-    log.Fatal(err)
-}
-defer client.Close()
-
-// Only subscribe on the specific partition
-consumer, err := client.Subscribe(pulsar.ConsumerOptions{
-    Topic:            "my-partitioned-topic-partition-2",
-    SubscriptionName: "my-sub",
-})
-
-if err != nil {
-    log.Fatal(err)
-}
-defer consumer.Close()
-
-producer, err := client.CreateProducer(pulsar.ProducerOptions{
-    Topic: "my-partitioned-topic",
-    MessageRouter: func(msg *pulsar.ProducerMessage, tm pulsar.TopicMetadata) int {
-        fmt.Println("Routing message ", msg, " -- Partitions: ", tm.NumPartitions())
-        return 2
-    },
-})
-
-if err != nil {
-    log.Fatal(err)
-}
-defer producer.Close()
-
-```
-
-#### How to use schema interface in producer
-
-```go
-
-type testJSON struct {
-    ID   int    `json:"id"`
-    Name string `json:"name"`
-}
-
-```
-
-```go
-
-var (
-    exampleSchemaDef = "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," +
-        "\"fields\":[{\"name\":\"ID\",\"type\":\"int\"},{\"name\":\"Name\",\"type\":\"string\"}]}"
-)
-
-```
-
-```go
-
-client, err := pulsar.NewClient(pulsar.ClientOptions{
-    URL: "pulsar://localhost:6650",
-})
-if err != nil {
-    log.Fatal(err)
-}
-defer client.Close()
-
-properties := make(map[string]string)
-properties["pulsar"] = "hello"
-jsonSchemaWithProperties := pulsar.NewJSONSchema(exampleSchemaDef, properties)
-producer, err := client.CreateProducer(pulsar.ProducerOptions{
-    Topic:  "jsonTopic",
-    Schema: jsonSchemaWithProperties,
-})
-if err != nil {
-    log.Fatal(err)
-}
-
-_, err = producer.Send(context.Background(), &pulsar.ProducerMessage{
-    Value: &testJSON{
-        ID:   100,
-        Name: "pulsar",
-    },
-})
-if err != nil {
-    log.Fatal(err)
-}
-producer.Close()
-
-```
-
-#### How to use delay relative in producer
-
-```go
-
-client, err := pulsar.NewClient(pulsar.ClientOptions{
-    URL: "pulsar://localhost:6650",
-})
-if err != nil {
-    log.Fatal(err)
-}
-defer client.Close()
-
-topicName := newTopicName()
-producer, err := client.CreateProducer(pulsar.ProducerOptions{
-    Topic:           topicName,
-    DisableBatching: true,
-})
-if err != nil {
-    log.Fatal(err)
-}
-defer producer.Close()
-
-consumer, err := client.Subscribe(pulsar.ConsumerOptions{
-    Topic:            topicName,
-    SubscriptionName: "subName",
-    Type:             pulsar.Shared,
-})
-if err != nil {
-    log.Fatal(err)
-}
-defer consumer.Close()
-
-ID, err := producer.Send(context.Background(), &pulsar.ProducerMessage{
-    Payload:      []byte("test"),
-    DeliverAfter: 3 * time.Second,
-})
-if err != nil {
-    log.Fatal(err)
-}
-fmt.Println(ID)
-
-// The message is delivered after 3 seconds, so this first attempt with a
-// 1-second timeout is expected to fail.
-ctx, canc := context.WithTimeout(context.Background(), 1*time.Second)
-msg, err := consumer.Receive(ctx)
-if err != nil {
-    fmt.Println("No message within 1 second, as expected:", err)
-} else {
-    fmt.Println(msg.Payload())
-}
-canc()
-
-// Within 5 seconds the delayed message becomes available.
-ctx, canc = context.WithTimeout(context.Background(), 5*time.Second)
-msg, err = consumer.Receive(ctx)
-if err != nil {
-    log.Fatal(err)
-}
-fmt.Println(msg.Payload())
-canc()
-
-```
-
-### Producer configuration
-
- Name | Description | Default
-| :-------- | :---------- |:---------- |
-| Topic | Specify the topic this producer will publish on. This argument is required when constructing the producer. | |
-| Name | Specify a name for the producer. If not assigned, the system will generate a globally unique name which can be accessed with `Producer.ProducerName()`. | |
-| Properties | Attach a set of application-defined properties to the producer. These properties will be visible in the topic stats | |
-| SendTimeout | Set the timeout for a message that is not acknowledged by the server | 30s |
-| DisableBlockIfQueueFull | Control whether `Send` and `SendAsync` block if the producer's message queue is full | false |
-| MaxPendingMessages | Set the max size of the queue holding the messages pending to receive an acknowledgment from the broker. | |
-| HashingScheme | Change the `HashingScheme` used to choose the partition on which to publish a particular message. | JavaStringHash |
-| CompressionType | Set the compression type for the producer. | not compressed |
-| CompressionLevel | Define the desired compression level. Options: Default, Faster, and Better | Default |
-| MessageRouter | Set a custom message routing policy by passing an implementation of MessageRouter | |
-| DisableBatching | Control whether automatic batching of messages is enabled for the producer. | false |
-| BatchingMaxPublishDelay | Set the time period within which the messages sent will be batched | 1ms |
-| BatchingMaxMessages | Set the maximum number of messages permitted in a batch. | 1000 |
-| BatchingMaxSize | Set the maximum number of bytes permitted in a batch. | 128KB |
-| Schema | Set a custom schema type by passing an implementation of `Schema` | bytes[] |
-| Interceptors | A chain of interceptors. These interceptors are called at some points defined in the `ProducerInterceptor` interface. | None |
-| MaxReconnectToBroker | Set the maximum number of reconnection attempts to the broker | unlimited |
-| BatcherBuilderType | Set the batch builder type. This is used to create a batch container when batching is enabled. Options: DefaultBatchBuilder and KeyBasedBatchBuilder | DefaultBatchBuilder |
-
-## Consumers
-
-Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on that topic/those topics. You can [configure](#consumer-configuration) Go consumers using a `ConsumerOptions` object. Here's a basic example that uses channels:
-
-```go
-
-consumer, err := client.Subscribe(pulsar.ConsumerOptions{
-    Topic:            "topic-1",
-    SubscriptionName: "my-sub",
-    Type:             pulsar.Shared,
-})
-if err != nil {
-    log.Fatal(err)
-}
-defer consumer.Close()
-
-for i := 0; i < 10; i++ {
-    msg, err := consumer.Receive(context.Background())
-    if err != nil {
-        log.Fatal(err)
-    }
-
-    fmt.Printf("Received message msgId: %#v -- content: '%s'\n",
-        msg.ID(), string(msg.Payload()))
-
-    consumer.Ack(msg)
-}
-
-if err := consumer.Unsubscribe(); err != nil {
-    log.Fatal(err)
-}
-
-```
-
-### Consumer operations
-
-Pulsar Go consumers have the following methods available:
-
-Method | Description | Return type
-:------|:------------|:-----------
-`Subscription()` | Returns the consumer's subscription name | `string`
-`Unsubscribe()` | Unsubscribes the consumer from the assigned topic. Returns an error if the unsubscribe operation fails. | `error`
-`Receive(context.Context)` | Receives a single message from the topic. This method blocks until a message is available. | `(Message, error)`
-`Chan()` | Returns a channel from which to consume messages. | `<-chan ConsumerMessage`
-`Ack(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) |
-`AckID(MessageID)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID |
-`ReconsumeLater(msg Message, delay time.Duration)` | Marks a message for redelivery after a custom delay |
-`Nack(Message)` | Acknowledges the failure to process a single message. |
-`NackID(MessageID)` | Acknowledges the failure to process a single message, by message ID. |
-`Seek(msgID MessageID)` | Resets the subscription associated with this consumer to a specific message ID. The message ID can either be a specific message or represent the first or last messages in the topic. | `error`
-`SeekByTime(time time.Time)` | Resets the subscription associated with this consumer to a specific message publish time. | `error`
-`Close()` | Closes the consumer, disabling its ability to receive messages from the broker |
-`Name()` | Returns the name of the consumer | `string`
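-
-`ReconsumeLater` works together with the `RetryEnable` option described in [Consumer configuration](#consumer-configuration) below. As a minimal sketch, assuming the default retry-topic wiring and a hypothetical `shouldRetry` application check:
-
-```go
-
-consumer, err := client.Subscribe(pulsar.ConsumerOptions{
-    Topic:            "my-topic",
-    SubscriptionName: "retry-sub",
-    Type:             pulsar.Shared,
-    RetryEnable:      true, // messages passed to ReconsumeLater are routed through a retry topic
-})
-if err != nil {
-    log.Fatal(err)
-}
-defer consumer.Close()
-
-msg, err := consumer.Receive(context.Background())
-if err != nil {
-    log.Fatal(err)
-}
-
-if shouldRetry(msg) { // hypothetical application-specific check
-    // Redeliver this message after one minute instead of nacking it immediately.
-    consumer.ReconsumeLater(msg, time.Minute)
-} else {
-    consumer.Ack(msg)
-}
-
-```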
-
-### Receive example
-
-#### How to use regex consumer
-
-```go
-
-client, err := pulsar.NewClient(pulsar.ClientOptions{
-    URL: "pulsar://localhost:6650",
-})
-if err != nil {
-    log.Fatal(err)
-}
-defer client.Close()
-
-p, err := client.CreateProducer(pulsar.ProducerOptions{
-    Topic:           topicInRegex,
-    DisableBatching: true,
-})
-if err != nil {
-    log.Fatal(err)
-}
-defer p.Close()
-
-topicsPattern := fmt.Sprintf("persistent://%s/foo.*", namespace)
-opts := pulsar.ConsumerOptions{
-    TopicsPattern:    topicsPattern,
-    SubscriptionName: "regex-sub",
-}
-consumer, err := client.Subscribe(opts)
-if err != nil {
-    log.Fatal(err)
-}
-defer consumer.Close()
-
-```
-
-#### How to use multi topics Consumer
-
-```go
-
-func newTopicName() string {
-    return fmt.Sprintf("my-topic-%v", time.Now().Nanosecond())
-}
-
-
-topic1 := "topic-1"
-topic2 := "topic-2"
-
-client, err := pulsar.NewClient(pulsar.ClientOptions{
-    URL: "pulsar://localhost:6650",
-})
-if err != nil {
-    log.Fatal(err)
-}
-topics := []string{topic1, topic2}
-consumer, err := client.Subscribe(pulsar.ConsumerOptions{
-    Topics:           topics,
-    SubscriptionName: "multi-topic-sub",
-})
-if err != nil {
-    log.Fatal(err)
-}
-defer consumer.Close()
-
-```
-
-#### How to use consumer listener
-
-```go
-
-import (
-    "fmt"
-    "log"
-
-    "github.com/apache/pulsar-client-go/pulsar"
-)
-
-func main() {
-    client, err := pulsar.NewClient(pulsar.ClientOptions{URL: "pulsar://localhost:6650"})
-    if err != nil {
-        log.Fatal(err)
-    }
-
-    defer client.Close()
-
-    channel := make(chan pulsar.ConsumerMessage, 100)
-
-    options := pulsar.ConsumerOptions{
-        Topic:            "topic-1",
-        SubscriptionName: "my-subscription",
-        Type:             pulsar.Shared,
-    }
-
-    options.MessageChannel = channel
-
-    consumer, err := client.Subscribe(options)
-    if err != nil {
-        log.Fatal(err)
-    }
-
-    defer consumer.Close()
-
-    // Receive messages from the channel. The channel returns a struct which contains the message and the consumer
-    // from where the message was received. It's not necessary here since we have a single consumer, but the channel
-    // could be shared across multiple consumers as well
-    for cm := range channel {
-        msg := cm.Message
-        fmt.Printf("Received message msgId: %v -- content: '%s'\n",
-            msg.ID(), string(msg.Payload()))
-
-        consumer.Ack(msg)
-    }
-}
-
-```
#### How to use consumer listener

```go

import (
    "fmt"
    "log"

    "github.com/apache/pulsar-client-go/pulsar"
)

func main() {
    client, err := pulsar.NewClient(pulsar.ClientOptions{URL: "pulsar://localhost:6650"})
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    channel := make(chan pulsar.ConsumerMessage, 100)

    options := pulsar.ConsumerOptions{
        Topic:            "topic-1",
        SubscriptionName: "my-subscription",
        Type:             pulsar.Shared,
    }

    options.MessageChannel = channel

    consumer, err := client.Subscribe(options)
    if err != nil {
        log.Fatal(err)
    }
    defer consumer.Close()

    // Receive messages from the channel. The channel returns a struct that contains both
    // the message and the consumer from which the message was received. That is not strictly
    // necessary here, since there is a single consumer, but the channel could be shared
    // across multiple consumers as well.
    for cm := range channel {
        msg := cm.Message
        fmt.Printf("Received message msgId: %v -- content: '%s'\n",
            msg.ID(), string(msg.Payload()))

        consumer.Ack(msg)
    }
}

```

#### How to use consumer receive timeout

```go

client, err := pulsar.NewClient(pulsar.ClientOptions{
    URL: "pulsar://localhost:6650",
})
if err != nil {
    log.Fatal(err)
}
defer client.Close()

topic := "test-topic-with-no-messages"
ctx, cancel := context.WithTimeout(context.Background(), 500*time.Millisecond)
defer cancel()

// create consumer
consumer, err := client.Subscribe(pulsar.ConsumerOptions{
    Topic:            topic,
    SubscriptionName: "my-sub1",
    Type:             pulsar.Shared,
})
if err != nil {
    log.Fatal(err)
}
defer consumer.Close()

msg, err := consumer.Receive(ctx)
if err != nil {
    log.Fatal(err)
}
fmt.Println(msg.Payload())

```

#### How to use schema in consumer

```go

type testJSON struct {
    ID   int    `json:"id"`
    Name string `json:"name"`
}

```

```go

var (
    exampleSchemaDef = "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," +
        "\"fields\":[{\"name\":\"ID\",\"type\":\"int\"},{\"name\":\"Name\",\"type\":\"string\"}]}"
)

```

```go

client, err := pulsar.NewClient(pulsar.ClientOptions{
    URL: "pulsar://localhost:6650",
})
if err != nil {
    log.Fatal(err)
}
defer client.Close()

var s testJSON

consumerJS := pulsar.NewJSONSchema(exampleSchemaDef, nil)
consumer, err := client.Subscribe(pulsar.ConsumerOptions{
    Topic:                       "jsonTopic",
    SubscriptionName:            "sub-1",
    Schema:                      consumerJS,
    SubscriptionInitialPosition: pulsar.SubscriptionPositionEarliest,
})
if err != nil {
    log.Fatal(err)
}
defer consumer.Close()

msg, err := consumer.Receive(context.Background())
if err != nil {
    log.Fatal(err)
}
err = msg.GetSchemaValue(&s)
if err != nil {
    log.Fatal(err)
}

```

### Consumer configuration

 Name | Description | Default
| :-------- | :---------- |:---------- |
| Topic | Topic specifies the topic this consumer will subscribe to. This argument is required when subscribing. | |
| Topics | Specify a list of topics this consumer will subscribe to. Either a topic, a list of topics, or a topics pattern is required when subscribing. | |
| TopicsPattern | Specify a regular expression to subscribe to multiple topics under the same namespace. Either a topic, a list of topics, or a topics pattern is required when subscribing. | |
| AutoDiscoveryPeriod | Specify the interval at which to poll for new partitions or new topics when using a TopicsPattern. | |
| SubscriptionName | Specify the subscription name for this consumer. This argument is required when subscribing. | |
| Name | Set the consumer name. | |
| Properties | Properties attaches a set of application-defined properties to the consumer. These properties will be visible in the topic stats. | |
| Type | Select the subscription type to be used when subscribing to the topic. | Exclusive |
| SubscriptionInitialPosition | The initial position at which the cursor will be set when subscribing. | Latest |
| DLQ | Configuration for the Dead Letter Queue consumer policy. | no DLQ |
| MessageChannel | Sets a `MessageChannel` for the consumer. When a message is received, it will be pushed to the channel for consumption. | |
| ReceiverQueueSize | Sets the size of the consumer receive queue. | 1000 |
| NackRedeliveryDelay | The delay after which to redeliver the messages that failed to be processed. | 1min |
| ReadCompacted | If enabled, the consumer will read messages from the compacted topic rather than reading the full message backlog of the topic. | false |
| ReplicateSubscriptionState | Mark the subscription as replicated to keep it in sync across clusters. | false |
| KeySharedPolicy | Configuration for the Key Shared consumer policy. | |
| RetryEnable | Automatically retry sending messages to the default-filled DLQPolicy topics. | false |
| Interceptors | A chain of interceptors. These interceptors are called at some points defined in the `ConsumerInterceptor` interface. | |
| MaxReconnectToBroker | MaxReconnectToBroker sets the maximum number of reconnectToBroker retries. | unlimited |
| Schema | Schema sets a custom schema type by passing an implementation of `Schema`. | bytes[] |
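As a concrete illustration, the options below combine several of these settings in one subscription. This is a minimal sketch (the topic, subscription name, and values are illustrative, and an existing `client` is assumed):

```go

consumer, err := client.Subscribe(pulsar.ConsumerOptions{
    Topic:               "topic-1",
    SubscriptionName:    "my-sub",
    Type:                pulsar.Shared,
    ReceiverQueueSize:   500,              // smaller receive queue to limit memory use
    NackRedeliveryDelay: 10 * time.Second, // redeliver nacked messages after 10 seconds
})
if err != nil {
    log.Fatal(err)
}
defer consumer.Close()

```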
## Readers

Pulsar readers process messages from Pulsar topics. Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recent unacked message). You can [configure](#reader-configuration) Go readers using a `ReaderOptions` object. Here's an example:

```go

reader, err := client.CreateReader(pulsar.ReaderOptions{
    Topic:          "topic-1",
    StartMessageID: pulsar.EarliestMessageID(),
})
if err != nil {
    log.Fatal(err)
}
defer reader.Close()

```

### Reader operations

Pulsar Go readers have the following methods available:

Method | Description | Return type
:------|:------------|:-----------
`Topic()` | Returns the reader's [topic](reference-terminology.md#topic) | `string`
`Next(context.Context)` | Receives the next message on the topic (analogous to the `Receive` method for [consumers](#consumer-operations)). This method blocks until a message is available. | `(Message, error)`
`HasNext()` | Checks if there is any message available to read from the current position | `bool`
`Close()` | Closes the reader, disabling its ability to receive messages from the broker | `error`
`Seek(MessageID)` | Resets the subscription associated with this reader to a specific message ID | `error`
`SeekByTime(time time.Time)` | Resets the subscription associated with this reader to a specific message publish time | `error`

### Reader example

#### How to use reader to read 'next' message

Here's an example usage of a Go reader that uses the `Next()` method to process incoming messages:

```go

import (
    "context"
    "fmt"
    "log"

    "github.com/apache/pulsar-client-go/pulsar"
)

func main() {
    client, err := pulsar.NewClient(pulsar.ClientOptions{URL: "pulsar://localhost:6650"})
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    reader, err := client.CreateReader(pulsar.ReaderOptions{
        Topic:          "topic-1",
        StartMessageID: pulsar.EarliestMessageID(),
    })
    if err != nil {
        log.Fatal(err)
    }
    defer reader.Close()

    for reader.HasNext() {
        msg, err := reader.Next(context.Background())
        if err != nil {
            log.Fatal(err)
        }

        fmt.Printf("Received message msgId: %#v -- content: '%s'\n",
            msg.ID(), string(msg.Payload()))
    }
}

```

In the example above, the reader begins reading from the earliest available message (specified by `pulsar.EarliestMessageID()`).
The reader can also begin reading from the latest message (`pulsar.LatestMessageID()`) or from some other message ID serialized as bytes, using the `DeserializeMessageID` function, which takes a byte array and returns a `MessageID` and an error. Here's an example:

```go

// Read the last saved message ID from an external store as a byte array
var lastSavedID []byte

msgID, err := pulsar.DeserializeMessageID(lastSavedID)
if err != nil {
    log.Fatal(err)
}

reader, err := client.CreateReader(pulsar.ReaderOptions{
    Topic:          "my-golang-topic",
    StartMessageID: msgID,
})

```

#### How to use reader to read specific message

```go

client, err := pulsar.NewClient(pulsar.ClientOptions{
    URL: lookupURL,
})
if err != nil {
    log.Fatal(err)
}
defer client.Close()

topic := "topic-1"
ctx := context.Background()

// create producer
producer, err := client.CreateProducer(pulsar.ProducerOptions{
    Topic:           topic,
    DisableBatching: true,
})
if err != nil {
    log.Fatal(err)
}
defer producer.Close()

// send 10 messages
msgIDs := [10]pulsar.MessageID{}
for i := 0; i < 10; i++ {
    msgID, err := producer.Send(ctx, &pulsar.ProducerMessage{
        Payload: []byte(fmt.Sprintf("hello-%d", i)),
    })
    if err != nil {
        log.Fatal(err)
    }
    msgIDs[i] = msgID
}

// create reader on 5th message (not included)
reader, err := client.CreateReader(pulsar.ReaderOptions{
    Topic:          topic,
    StartMessageID: msgIDs[4],
})
if err != nil {
    log.Fatal(err)
}
defer reader.Close()

// receive the remaining 5 messages
for i := 5; i < 10; i++ {
    msg, err := reader.Next(context.Background())
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("Read message: '%s'\n", string(msg.Payload()))
}

// create reader on 5th message (included)
readerInclusive, err := client.CreateReader(pulsar.ReaderOptions{
    Topic:                   topic,
    StartMessageID:          msgIDs[4],
    StartMessageIDInclusive: true,
})
if err != nil {
    log.Fatal(err)
}
defer readerInclusive.Close()

```

### Reader configuration

 Name | Description | Default
| :-------- | :---------- |:---------- |
| Topic | Topic specifies the topic this reader will read from. This argument is required when constructing the reader. | |
| Name | Name sets the reader name. | |
| Properties | Attach a set of application-defined properties to the reader. These properties will be visible in the topic stats. | |
| StartMessageID | StartMessageID sets the initial reader position by specifying a message ID. | |
| StartMessageIDInclusive | If true, the reader will start at the `StartMessageID`, included. The default is `false`, and the reader will start from the "next" message. | false |
| MessageChannel | MessageChannel sets a `MessageChannel` for the reader. When a message is received, it will be pushed to the channel for consumption. | |
| ReceiverQueueSize | ReceiverQueueSize sets the size of the reader's receive queue. | 1000 |
| SubscriptionRolePrefix| SubscriptionRolePrefix sets the subscription role prefix. | "reader" |
| ReadCompacted | If enabled, the reader will read messages from the compacted topic rather than reading the full message backlog of the topic. ReadCompacted can only be enabled when reading from a persistent topic. | false|
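To make these options concrete, here is a minimal sketch of a reader configured beyond the defaults (the topic and values are illustrative, and an existing `client` on a persistent topic is assumed):

```go

reader, err := client.CreateReader(pulsar.ReaderOptions{
    Topic:                   "topic-1",
    StartMessageID:          pulsar.EarliestMessageID(),
    StartMessageIDInclusive: true, // also deliver the message at StartMessageID itself
    ReceiverQueueSize:       500,
    ReadCompacted:           true, // only valid when reading from a persistent topic
})
if err != nil {
    log.Fatal(err)
}
defer reader.Close()

```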
## Messages

The Pulsar Go client provides a `ProducerMessage` interface that you can use to construct messages to produce on Pulsar topics. Here's an example message:

```go

msg := pulsar.ProducerMessage{
    Payload: []byte("Here is some message data"),
    Key:     "message-key",
    Properties: map[string]string{
        "foo": "bar",
    },
    EventTime:           time.Now(),
    ReplicationClusters: []string{"cluster1", "cluster3"},
}

if _, err := producer.Send(context.Background(), &msg); err != nil {
    log.Fatalf("Could not publish message due to: %v", err)
}

```

The following parameters are available for `ProducerMessage` objects:

Parameter | Description
:---------|:-----------
`Payload` | The actual data payload of the message
`Value` | Value and Payload are mutually exclusive; use `Value interface{}` for schema-based messages.
`Key` | The optional key associated with the message (particularly useful for things like topic compaction)
`OrderingKey` | OrderingKey sets the ordering key of the message.
`Properties` | A key-value map (both keys and values must be strings) for any application-specific metadata attached to the message
`EventTime` | The timestamp associated with the message
`ReplicationClusters` | The clusters to which this message will be replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default.
`SequenceID` | Sets the sequence ID to assign to the current message
`DeliverAfter` | Requests to deliver the message only after the specified relative delay
`DeliverAt` | Delivers the message only at or after the specified absolute timestamp

## TLS encryption and authentication

In order to use [TLS encryption](security-tls-transport.md), you'll need to configure your client to do so:

 * Use the `pulsar+ssl` URL type
 * Set `TLSTrustCertsFilePath` to the path to the TLS certs used by your client and the Pulsar broker
 * Configure the `Authentication` option

Here's an example:

```go

opts := pulsar.ClientOptions{
    URL:                   "pulsar+ssl://my-cluster.com:6651",
    TLSTrustCertsFilePath: "/path/to/certs/my-cert.csr",
    Authentication:        pulsar.NewAuthenticationTLS("my-cert.pem", "my-key.pem"),
}

```

## OAuth2 authentication

To use [OAuth2 authentication](security-oauth2.md), you'll need to configure your client accordingly. This example shows how to configure OAuth2 authentication.

```go

oauth := pulsar.NewAuthenticationOAuth2(map[string]string{
    "type":       "client_credentials",
    "issuerUrl":  "https://dev-kt-aa9ne.us.auth0.com",
    "audience":   "https://dev-kt-aa9ne.us.auth0.com/api/v2/",
    "privateKey": "/path/to/privateKey",
    "clientId":   "0Xx...Yyxeny",
})
client, err := pulsar.NewClient(pulsar.ClientOptions{
    URL:            "pulsar://my-cluster:6650",
    Authentication: oauth,
})

```

diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/client-libraries-java.md b/site2/website/versioned_docs/version-2.9.2-deprecated/client-libraries-java.md
deleted file mode 100644
index c3d41a3f13da2e..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/client-libraries-java.md
+++ /dev/null
@@ -1,1038 +0,0 @@
----
-id: client-libraries-java
-title: Pulsar Java client
-sidebar_label: "Java"
-original_id: client-libraries-java
----

You can use a Pulsar Java client to create the Java [producer](#producer), [consumer](#consumer), and [reader](#reader) of messages and to perform [administrative tasks](admin-api-overview.md). The current Java client version is **@pulsar:version@**.

All the methods in [producer](#producer), [consumer](#consumer), and [reader](#reader) of a Java client are thread-safe.

Javadoc for the Pulsar client is divided into two domains by package as follows.

Package | Description | Maven Artifact
:-------|:------------|:--------------
[`org.apache.pulsar.client.api`](/api/client) | The producer and consumer API | [org.apache.pulsar:pulsar-client:@pulsar:version@](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client%7C@pulsar:version@%7Cjar)
[`org.apache.pulsar.client.admin`](/api/admin) | The Java [admin API](admin-api-overview.md) | [org.apache.pulsar:pulsar-client-admin:@pulsar:version@](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client-admin%7C@pulsar:version@%7Cjar)
`org.apache.pulsar.client.all` | Includes both `pulsar-client` and `pulsar-client-admin`.<br /><br />Both `pulsar-client` and `pulsar-client-admin` are shaded packages and they shade dependencies independently. Consequently, applications using both `pulsar-client` and `pulsar-client-admin` have redundant shaded classes. It would be troublesome if you introduce new dependencies but forget to update the shading rules.<br /><br />In this case, you can use `pulsar-client-all`, which shades dependencies only one time and reduces the size of dependencies. | [org.apache.pulsar:pulsar-client-all:@pulsar:version@](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client-all%7C@pulsar:version@%7Cjar)
This document focuses only on the client API for producing and consuming messages on Pulsar topics. For how to use the Java admin client, see [Pulsar admin interface](admin-api-overview.md).

## Installation

The latest version of the Pulsar Java client library is available via [Maven Central](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client%7C@pulsar:version@%7Cjar). To use the latest version, add the `pulsar-client` library to your build configuration.

### Maven

If you use Maven, add the following information to the `pom.xml` file.

```xml

<!-- in your <properties> block -->
<pulsar.version>@pulsar:version@</pulsar.version>

<!-- in your <dependencies> block -->
<dependency>
  <groupId>org.apache.pulsar</groupId>
  <artifactId>pulsar-client</artifactId>
  <version>${pulsar.version}</version>
</dependency>

```

### Gradle

If you use Gradle, add the following information to the `build.gradle` file.

```groovy

def pulsarVersion = '@pulsar:version@'

dependencies {
    compile group: 'org.apache.pulsar', name: 'pulsar-client', version: pulsarVersion
}

```

## Connection URLs

To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL.

You can assign Pulsar protocol URLs to specific clusters and use the `pulsar` scheme. The default port is `6650`. The following is an example of `localhost`.

```http

pulsar://localhost:6650

```

If you have multiple brokers, the URL is as follows.

```http

pulsar://localhost:6650,localhost:6651,localhost:6652

```

A URL for a production Pulsar cluster is as follows.

```http

pulsar://pulsar.us-west.example.com:6650

```

If you use [TLS](security-tls-authentication.md) authentication, the URL is as follows.

```http

pulsar+ssl://pulsar.us-west.example.com:6651

```

## Client

You can instantiate a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object using just a URL for the target Pulsar [cluster](reference-terminology.md#cluster) like this:

```java

PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650")
        .build();

```

If you have multiple brokers, you can initiate a PulsarClient like this:

```java

PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650,localhost:6651,localhost:6652")
        .build();

```

> ### Default broker URLs for standalone clusters
> If you run a cluster in [standalone mode](getting-started-standalone.md), the broker is available at the `pulsar://localhost:6650` URL by default.

If you create a client, you can use the `loadConf` configuration. The following parameters are available in `loadConf`.

| Name | Type | Description | Default |
|---|---|---|---|
`serviceUrl` | String | Service URL provider for the Pulsar service | None
`authPluginClassName` | String | Name of the authentication plugin | None
`authParams` | String | Parameters for the authentication plugin.<br /><br />**Example**<br />key1:val1,key2:val2 | None
`operationTimeoutMs` | long | Operation timeout | 30000
`statsIntervalSeconds` | long | Interval between each stats information.<br /><br />Stats is activated with a positive `statsInterval`.<br /><br />Set `statsIntervalSeconds` to at least 1 second. | 60
`numIoThreads` | int | The number of threads used for handling connections to brokers | 1
`numListenerThreads` | int | The number of threads used for handling message listeners. The listener thread pool is shared across all the consumers and readers using the "listener" model to get messages. For a given consumer, the listener is always invoked from the same thread to ensure ordering. If you want multiple threads to process a single topic, you need to create a [`shared`](https://pulsar.apache.org/docs/en/next/concepts-messaging/#shared) subscription and multiple consumers for this subscription. This does not ensure ordering. | 1
`useTcpNoDelay` | boolean | Whether to use the TCP no-delay flag on the connection to disable the Nagle algorithm | true
`enableTls` | boolean | Whether to use TLS encryption on the connection. Note that this parameter is **deprecated**. If you want to enable TLS, use `pulsar+ssl://` in `serviceUrl` instead. | false
`tlsTrustCertsFilePath` | string | Path to the trusted TLS certificate file | None
`tlsAllowInsecureConnection` | boolean | Whether the Pulsar client accepts an untrusted TLS certificate from the broker | false
`tlsHostnameVerificationEnable` | boolean | Whether to enable TLS hostname verification | false
`concurrentLookupRequest` | int | The number of concurrent lookup requests allowed to be sent on each broker connection to prevent overload on the broker | 5000
`maxLookupRequest` | int | The maximum number of lookup requests allowed on each broker connection to prevent overload on the broker | 50000
`maxNumberOfRejectedRequestPerConnection` | int | The maximum number of rejected requests of a broker in a certain time frame (30 seconds) after which the current connection is closed and the client creates a new connection to connect to a different broker | 50
`keepAliveIntervalSeconds` | int | Seconds of keep-alive interval for each client-broker connection | 30
`connectionTimeoutMs` | int | Duration of waiting for a connection to a broker to be established.<br /><br />If the duration passes without a response from a broker, the connection attempt is dropped | 10000
`requestTimeoutMs` | int | Maximum duration for completing a request | 60000
`defaultBackoffIntervalNanos` | int | Default duration for a backoff interval | TimeUnit.MILLISECONDS.toNanos(100)
`maxBackoffIntervalNanos` | long | Maximum duration for a backoff interval | TimeUnit.SECONDS.toNanos(30)
`socks5ProxyAddress` | SocketAddress | SOCKS5 proxy address | None
`socks5ProxyUsername` | string | SOCKS5 proxy username | None
`socks5ProxyPassword` | string | SOCKS5 proxy password | None

Check out the Javadoc for the {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} class for a full list of configurable parameters.

> In addition to client-level configuration, you can also apply [producer](#configure-producer) and [consumer](#configure-consumer) specific configuration as described in the sections below.

## Producer

In Pulsar, producers write messages to topics. Once you've instantiated a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object (as in the section [above](#client-configuration)), you can create a {@inject: javadoc:Producer:/client/org/apache/pulsar/client/api/Producer} for a specific Pulsar [topic](reference-terminology.md#topic).

```java

Producer<byte[]> producer = client.newProducer()
        .topic("my-topic")
        .create();

// You can then send messages to the broker and topic you specified:
producer.send("My message".getBytes());

```

By default, producers produce messages that consist of byte arrays. You can produce different types by specifying a message [schema](#schema).

```java

Producer<String> stringProducer = client.newProducer(Schema.STRING)
        .topic("my-topic")
        .create();
stringProducer.send("My message");

```

> Make sure that you close your producers, consumers, and clients when you do not need them.

> ```java
>
> producer.close();
> consumer.close();
> client.close();
>
> ```

> Close operations can also be asynchronous:

> ```java
>
> producer.closeAsync()
>    .thenRun(() -> System.out.println("Producer closed"))
>    .exceptionally((ex) -> {
>        System.err.println("Failed to close producer: " + ex);
>        return null;
>    });
>
> ```

### Configure producer

If you instantiate a `Producer` object by specifying only a topic name as in the example above, the producer uses the default configuration.

If you create a producer, you can use the `loadConf` configuration. The following parameters are available in `loadConf`.

| Name | Type | Description | Default |
|---|---|---|---|
`topicName` | string | Topic name | null
`producerName` | string | Producer name | null
`sendTimeoutMs` | long | Message send timeout in ms.<br />If a message is not acknowledged by a server before the `sendTimeout` expires, an error occurs. | 30000
`blockIfQueueFull` | boolean | If it is set to `true`, when the outgoing message queue is full, the `Send` and `SendAsync` methods of producer block, rather than failing and throwing errors.<br />If it is set to `false`, when the outgoing message queue is full, the `Send` and `SendAsync` methods of producer fail and `ProducerQueueIsFullError` exceptions occur.<br /><br />The `MaxPendingMessages` parameter determines the size of the outgoing message queue. | false
`maxPendingMessages` | int | The maximum size of a queue holding pending messages.<br /><br />For example, a message waiting to receive an acknowledgment from a [broker](reference-terminology.md#broker).<br /><br />By default, when the queue is full, all calls to the `Send` and `SendAsync` methods fail **unless** you set `BlockIfQueueFull` to `true`. | 1000
`maxPendingMessagesAcrossPartitions` | int | The maximum number of pending messages across partitions.<br /><br />Use the setting to lower the max pending messages for each partition ({@link #setMaxPendingMessages(int)}) if the total number exceeds the configured value. | 50000
`messageRoutingMode` | MessageRoutingMode | Message routing logic for producers on [partitioned topics](concepts-architecture-overview.md#partitioned-topics).<br />Apply the logic only when setting no key on messages.<br />Available options are as follows:<br />- `pulsar.RoundRobinDistribution`: round robin<br />- `pulsar.UseSinglePartition`: publish all messages to a single partition<br />- `pulsar.CustomPartition`: a custom partitioning scheme | `pulsar.RoundRobinDistribution`
`hashingScheme` | HashingScheme | Hashing function determining the partition where you publish a particular message (**partitioned topics only**).<br />Available options are as follows:<br />- `pulsar.JavastringHash`: the equivalent of `string.hashCode()` in Java<br />- `pulsar.Murmur3_32Hash`: applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function<br />- `pulsar.BoostHash`: applies the hashing function from C++'s [Boost](https://www.boost.org/doc/libs/1_62_0/doc/html/hash.html) library | `HashingScheme.JavastringHash`
`cryptoFailureAction` | ProducerCryptoFailureAction | The action the producer takes when encryption fails.<br />- **FAIL**: if encryption fails, unencrypted messages fail to send.<br />- **SEND**: if encryption fails, unencrypted messages are sent. | `ProducerCryptoFailureAction.FAIL`
`batchingMaxPublishDelayMicros` | long | Batching time period of sending messages. | TimeUnit.MILLISECONDS.toMicros(1)
`batchingMaxMessages` | int | The maximum number of messages permitted in a batch. | 1000
`batchingEnabled` | boolean | Enable batching of messages. | true
`compressionType` | CompressionType | Message data compression type used by a producer.<br />Available options:<br />- [`LZ4`](https://github.com/lz4/lz4)<br />- [`ZLIB`](https://zlib.net/)<br />- [`ZSTD`](https://facebook.github.io/zstd/)<br />- [`SNAPPY`](https://google.github.io/snappy/) | No compression

You can configure parameters if you do not want to use the default configuration.

For a full list, see the Javadoc for the {@inject: javadoc:ProducerBuilder:/client/org/apache/pulsar/client/api/ProducerBuilder} class. The following is an example.

```java

Producer<byte[]> producer = client.newProducer()
        .topic("my-topic")
        .batchingMaxPublishDelay(10, TimeUnit.MILLISECONDS)
        .sendTimeout(10, TimeUnit.SECONDS)
        .blockIfQueueFull(true)
        .create();

```

### Message routing

When using partitioned topics, you can specify the routing mode whenever you publish messages using a producer. For more information on specifying a routing mode using the Java client, see the [Partitioned Topics cookbook](cookbooks-partitioned.md).

### Async send

You can publish messages [asynchronously](concepts-messaging.md#send-modes) using the Java client. With async send, the producer puts the message in a blocking queue and returns immediately. Then the client library sends the message to the broker in the background. If the queue is full (max size configurable), the producer is blocked or fails immediately when calling the API, depending on arguments passed to the producer.

The following is an example.

```java

producer.sendAsync("my-async-message".getBytes()).thenAccept(msgId -> {
    System.out.println("Message with ID " + msgId + " successfully sent");
});

```

As you can see from the example above, async send operations return a {@inject: javadoc:MessageId:/client/org/apache/pulsar/client/api/MessageId} wrapped in a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture).

### Configure messages

In addition to a value, you can set additional items on a given message:

```java

producer.newMessage()
    .key("my-message-key")
    .value("my-async-message".getBytes())
    .property("my-key", "my-value")
    .property("my-other-key", "my-other-value")
    .send();

```

You can terminate the builder chain with `sendAsync()` and get a future in return.

## Consumer

In Pulsar, consumers subscribe to topics and handle messages that producers publish to those topics. You can instantiate a new [consumer](reference-terminology.md#consumer) by first instantiating a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object and passing it a URL for a Pulsar broker (as [above](#client-configuration)).

Once you've instantiated a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object, you can create a {@inject: javadoc:Consumer:/client/org/apache/pulsar/client/api/Consumer} by specifying a [topic](reference-terminology.md#topic) and a [subscription](concepts-messaging.md#subscription-modes).

```java

Consumer consumer = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .subscribe();

```

The `subscribe` method automatically subscribes the consumer to the specified topic and subscription. One way to make the consumer listen on the topic is to set up a `while` loop. In this example loop, the consumer listens for messages, prints the contents of any received message, and then [acknowledges](reference-terminology.md#acknowledgment-ack) that the message has been processed. If the processing logic fails, you can use [negative acknowledgement](reference-terminology.md#acknowledgment-ack) to redeliver the message later.

```java

while (true) {
    // Wait for a message
    Message msg = consumer.receive();

    try {
        // Do something with the message
        System.out.println("Message received: " + new String(msg.getData()));

        // Acknowledge the message so that it can be deleted by the message broker
        consumer.acknowledge(msg);
    } catch (Exception e) {
        // Message failed to process, redeliver later
        consumer.negativeAcknowledge(msg);
    }
}

```

If you don't want to block your main thread but rather listen constantly for new messages, consider using a `MessageListener`.

```java

MessageListener myMessageListener = (consumer, msg) -> {
    try {
        System.out.println("Message received: " + new String(msg.getData()));
        consumer.acknowledge(msg);
    } catch (Exception e) {
        consumer.negativeAcknowledge(msg);
    }
};

Consumer consumer = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .messageListener(myMessageListener)
        .subscribe();

```

### Configure consumer

If you instantiate a `Consumer` object by specifying only a topic and subscription name as in the example above, the consumer uses the default configuration.

When you create a consumer, you can use the `loadConf` configuration. The following parameters are available in `loadConf`.

| Name | Type | Description | Default |
|---|---|---|---|
`topicNames` | Set&lt;String&gt; | Topic name | Sets.newTreeSet()
`topicsPattern` | Pattern | Topic pattern | None
`subscriptionName` | String | Subscription name | None
`subscriptionType` | SubscriptionType | Subscription type.<br />Four subscription types are available:<br />- Exclusive<br />- Failover<br />- Shared<br />- Key_Shared | SubscriptionType.Exclusive
`receiverQueueSize` | int | Size of a consumer's receiver queue.<br /><br />For example, the number of messages accumulated by a consumer before an application calls `Receive`.<br /><br />A value higher than the default value increases consumer throughput, though at the expense of more memory utilization. | 1000
`acknowledgementsGroupTimeMicros` | long | Group a consumer acknowledgment for a specified time.<br /><br />By default, a consumer uses 100ms grouping time to send out acknowledgments to a broker.<br /><br />Setting a group time of 0 sends out acknowledgments immediately.<br /><br />A longer ack group time is more efficient at the expense of a slight increase in message re-deliveries after a failure. | TimeUnit.MILLISECONDS.toMicros(100)
`negativeAckRedeliveryDelayMicros` | long | Delay to wait before redelivering messages that failed to be processed.<br /><br />When an application uses {@link Consumer#negativeAcknowledge(Message)}, failed messages are redelivered after a fixed timeout. | TimeUnit.MINUTES.toMicros(1)
`maxTotalReceiverQueueSizeAcrossPartitions` | int | The max total receiver queue size across partitions.<br /><br />This setting reduces the receiver queue size for individual partitions if the total receiver queue size exceeds this value. | 50000
`consumerName` | String | Consumer name | null
`ackTimeoutMillis` | long | Timeout of unacked messages | 0
`tickDurationMillis` | long | Granularity of the ack-timeout redelivery.<br /><br />Using a higher `tickDurationMillis` reduces the memory overhead to track messages when setting ack-timeout to a bigger value (for example, 1 hour). | 1000
`priorityLevel` | int | Priority level for a consumer to which a broker gives more priority while dispatching messages in the shared subscription mode.<br /><br />The broker follows descending priorities. For example, 0=max-priority, 1, 2,...<br /><br />In shared subscription mode, the broker **first dispatches messages to the max priority level consumers if they have permits**. Otherwise, the broker considers next priority level consumers.<br /><br />**Example 1**<br />If a subscription has consumerA with `priorityLevel` 0 and consumerB with `priorityLevel` 1, then the broker **only dispatches messages to consumerA until it runs out of permits** and then starts dispatching messages to consumerB.<br /><br />**Example 2**<br />Consumer Priority, Level, Permits<br />C1, 0, 2<br />C2, 0, 1<br />C3, 0, 1<br />C4, 1, 2<br />C5, 1, 1<br /><br />The order in which a broker dispatches messages to consumers is: C1, C2, C3, C1, C4, C5, C4. | 0
`cryptoFailureAction` | ConsumerCryptoFailureAction | The action the consumer takes when it receives a message that cannot be decrypted.<br />- **FAIL**: this is the default option; fail messages until crypto succeeds.<br />- **DISCARD**: silently acknowledge and do not deliver the message to the application.<br />- **CONSUME**: deliver encrypted messages to the application. It is the application's responsibility to decrypt the message.<br /><br />The decompression of the message fails.<br /><br />If messages contain batch messages, a client is not able to retrieve individual messages in the batch.<br /><br />The delivered encrypted message contains {@link EncryptionContext}, which carries the encryption and compression information needed for the application to decrypt the consumed message payload. | ConsumerCryptoFailureAction.FAIL
`properties` | SortedMap | A name or value property of this consumer.<br /><br />`properties` is application-defined metadata attached to a consumer.<br /><br />When getting topic stats, this metadata is associated with the consumer stats for easier identification. | new TreeMap()
`readCompacted` | boolean | If enabling `readCompacted`, a consumer reads messages from a compacted topic rather than reading a full message backlog of a topic.<br /><br />A consumer only sees the latest value for each key in the compacted topic, up until reaching the point in the topic message when compacting backlog. Beyond that point, messages are sent as normal.<br /><br />`readCompacted` can only be enabled on subscriptions to persistent topics, which have a single active consumer (like failover or exclusive subscriptions).<br /><br />Attempting to enable it on subscriptions to non-persistent topics or on shared subscriptions leads to a subscription call throwing a `PulsarClientException`. | false
`subscriptionInitialPosition` | SubscriptionInitialPosition | Initial position at which to set cursor when subscribing to a topic for the first time. | SubscriptionInitialPosition.Latest
`patternAutoDiscoveryPeriod` | int | Topic auto discovery period when using a pattern for a topic's consumer.<br /><br />The default and minimum value is 1 minute. | 1
`regexSubscriptionMode` | RegexSubscriptionMode | When subscribing to a topic using a regular expression, you can pick a certain type of topics.<br />- **PersistentOnly**: only subscribe to persistent topics.<br />- **NonPersistentOnly**: only subscribe to non-persistent topics.<br />- **AllTopics**: subscribe to both persistent and non-persistent topics. | RegexSubscriptionMode.PersistentOnly
`deadLetterPolicy` | DeadLetterPolicy | Dead letter policy for consumers.<br /><br />By default, some messages are probably redelivered many times, possibly without end.<br /><br />By using the dead letter mechanism, messages have a max redelivery count. **When exceeding the maximum number of redeliveries, messages are sent to the Dead Letter Topic and acknowledged automatically**.<br /><br />You can enable the dead letter mechanism by setting `deadLetterPolicy`.<br /><br />**Example**<br />client.newConsumer()<br />.deadLetterPolicy(DeadLetterPolicy.builder().maxRedeliverCount(10).build())<br />.subscribe();<br /><br />The default dead letter topic name is `{TopicName}-{Subscription}-DLQ`.<br /><br />To set a custom dead letter topic name:<br />client.newConsumer()<br />.deadLetterPolicy(DeadLetterPolicy.builder().maxRedeliverCount(10)<br />.deadLetterTopic("your-topic-name").build())<br />.subscribe();<br /><br />When specifying the dead letter policy while not specifying `ackTimeoutMillis`, the ack timeout is set to 30000 milliseconds. | None
`autoUpdatePartitions` | boolean | If `autoUpdatePartitions` is enabled, a consumer subscribes to partition increases automatically.<br /><br />**Note**: this is only for partitioned consumers. | true
`replicateSubscriptionState` | boolean | If `replicateSubscriptionState` is enabled, the subscription state is replicated to geo-replicated clusters. | false

You can configure parameters if you do not want to use the default configuration. For a full list, see the Javadoc for the {@inject: javadoc:ConsumerBuilder:/client/org/apache/pulsar/client/api/ConsumerBuilder} class.

The following is an example.

```java

Consumer consumer = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .ackTimeout(10, TimeUnit.SECONDS)
        .subscriptionType(SubscriptionType.Exclusive)
        .subscribe();

```

### Async receive

The `receive` method receives messages synchronously (the consumer process is blocked until a message is available). You can also use [async receive](concepts-messaging.md#receive-modes), which returns a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture) object immediately once a new message is available.

The following is an example.

```java

CompletableFuture<Message> asyncMessage = consumer.receiveAsync();

```

Async receive operations return a {@inject: javadoc:Message:/client/org/apache/pulsar/client/api/Message} wrapped inside of a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture).

### Batch receive

Use `batchReceive` to receive multiple messages for each call.

The following is an example.

```java

Messages messages = consumer.batchReceive();
for (Object message : messages) {
    // do something
}
consumer.acknowledge(messages);

```

:::note

The batch receive policy limits the number and bytes of messages in a single batch. You can specify a timeout to wait for enough messages.
The batch receive is completed if any of the following conditions is met: enough number of messages, enough bytes of messages, or the wait timeout expires.

```java

Consumer consumer = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .batchReceivePolicy(BatchReceivePolicy.builder()
                .maxNumMessages(100)
                .maxNumBytes(1024 * 1024)
                .timeout(200, TimeUnit.MILLISECONDS)
                .build())
        .subscribe();

```

The default batch receive policy is:

```java

BatchReceivePolicy.builder()
        .maxNumMessages(-1)
        .maxNumBytes(10 * 1024 * 1024)
        .timeout(100, TimeUnit.MILLISECONDS)
        .build();

```

:::

### Multi-topic subscriptions

In addition to subscribing a consumer to a single Pulsar topic, you can also subscribe to multiple topics simultaneously using [multi-topic subscriptions](concepts-messaging.md#multi-topic-subscriptions). To use multi-topic subscriptions, you can supply either a regular expression (regex) or a `List` of topics. If you select topics via regex, all topics must be within the same Pulsar namespace.

The following are some examples.

```java

import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.PulsarClient;

import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

ConsumerBuilder consumerBuilder = pulsarClient.newConsumer()
        .subscriptionName(subscription);

// Subscribe to all topics in a namespace
Pattern allTopicsInNamespace = Pattern.compile("public/default/.*");
Consumer allTopicsConsumer = consumerBuilder
        .topicsPattern(allTopicsInNamespace)
        .subscribe();

// Subscribe to a subset of topics in a namespace, based on regex
Pattern someTopicsInNamespace = Pattern.compile("public/default/foo.*");
Consumer someTopicsConsumer = consumerBuilder
        .topicsPattern(someTopicsInNamespace)
        .subscribe();

```

In the above example, the consumer subscribes to the `persistent` topics that match the topic name pattern. If you want the consumer to subscribe to all `persistent` and `non-persistent` topics that match the topic name pattern, set `subscriptionTopicsMode` to `RegexSubscriptionMode.AllTopics`.

```java

Pattern pattern = Pattern.compile("public/default/.*");
pulsarClient.newConsumer()
        .subscriptionName("my-sub")
        .topicsPattern(pattern)
        .subscriptionTopicsMode(RegexSubscriptionMode.AllTopics)
        .subscribe();

```

:::note

By default, the `subscriptionTopicsMode` of the consumer is `PersistentOnly`. Available options of `subscriptionTopicsMode` are `PersistentOnly`, `NonPersistentOnly`, and `AllTopics`.

:::

You can also subscribe to an explicit list of topics (across namespaces if you wish):

```java

List<String> topics = Arrays.asList(
        "topic-1",
        "topic-2",
        "topic-3"
);

Consumer multiTopicConsumer = consumerBuilder
        .topics(topics)
        .subscribe();

// Alternatively:
Consumer multiTopicConsumer = consumerBuilder
        .topic(
                "topic-1",
                "topic-2",
                "topic-3"
        )
        .subscribe();

```

You can also subscribe to multiple topics asynchronously using the `subscribeAsync` method rather than the synchronous `subscribe` method. The following is an example.

```java

Pattern allTopicsInNamespace = Pattern.compile("persistent://public/default.*");
consumerBuilder
        .topics(topics)
        .subscribeAsync()
        .thenAccept(this::receiveMessageFromConsumer);

private void receiveMessageFromConsumer(Object consumer) {
    ((Consumer) consumer).receiveAsync().thenAccept(message -> {
        // Do something with the received message
        receiveMessageFromConsumer(consumer);
    });
}

```

### Subscription modes

Pulsar has various [subscription modes](concepts-messaging.md#subscription-modes) to match different scenarios. A topic can have multiple subscriptions with different subscription modes. However, a subscription can only have one subscription mode at a time.

A subscription is identified by its subscription name, and a subscription name can specify only one subscription mode at a time. You cannot change the subscription mode unless all existing consumers of this subscription are offline.

Different subscription modes have different message distribution modes. This section describes the differences between the subscription modes and how to use them.

In order to better describe their differences, assume you have a topic named "my-topic" and that the producer has published 10 messages.

```java

Producer<String> producer = client.newProducer(Schema.STRING)
        .topic("my-topic")
        .enableBatching(false)
        .create();
// 3 messages with "key-1", 3 messages with "key-2", 2 messages with "key-3" and 2 messages with "key-4"
producer.newMessage().key("key-1").value("message-1-1").send();
producer.newMessage().key("key-1").value("message-1-2").send();
producer.newMessage().key("key-1").value("message-1-3").send();
producer.newMessage().key("key-2").value("message-2-1").send();
producer.newMessage().key("key-2").value("message-2-2").send();
producer.newMessage().key("key-2").value("message-2-3").send();
producer.newMessage().key("key-3").value("message-3-1").send();
producer.newMessage().key("key-3").value("message-3-2").send();
producer.newMessage().key("key-4").value("message-4-1").send();
producer.newMessage().key("key-4").value("message-4-2").send();

```

#### Exclusive

Create a new consumer and subscribe with the `Exclusive` subscription mode.

```java

Consumer consumer = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .subscriptionType(SubscriptionType.Exclusive)
        .subscribe();

```

Only the first consumer is allowed to attach to the subscription; other consumers receive an error. The first consumer receives all 10 messages, and the consuming order is the same as the producing order.

:::note

If the topic is a partitioned topic, the first consumer subscribes to all of its partitions; other consumers are not assigned partitions and receive an error.

:::

#### Failover

Create new consumers and subscribe with the `Failover` subscription mode.

```java

Consumer consumer1 = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .subscriptionType(SubscriptionType.Failover)
        .subscribe();
Consumer consumer2 = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .subscriptionType(SubscriptionType.Failover)
        .subscribe();
// consumer1 is the active consumer, consumer2 is the standby consumer.
// consumer1 receives 5 messages and then crashes, consumer2 takes over as the active consumer.

```

Multiple consumers can attach to the same subscription, yet only the first consumer is active, and the others are standby. When the active consumer is disconnected, messages will be dispatched to one of the standby consumers, and that standby consumer then becomes the active consumer.

If the first active consumer is disconnected after receiving 5 messages, the standby consumer takes over. consumer1 will have received:

```

("key-1", "message-1-1")
("key-1", "message-1-2")
("key-1", "message-1-3")
("key-2", "message-2-1")
("key-2", "message-2-2")

```

consumer2 will receive:

```

("key-2", "message-2-3")
("key-3", "message-3-1")
("key-3", "message-3-2")
("key-4", "message-4-1")
("key-4", "message-4-2")

```

:::note

If a topic is a partitioned topic, each partition has only one active consumer; messages of one partition are distributed to only one consumer, and messages of multiple partitions are distributed to multiple consumers.

:::

#### Shared

Create new consumers and subscribe with the `Shared` subscription mode:

```java

Consumer consumer1 = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .subscriptionType(SubscriptionType.Shared)
        .subscribe();

Consumer consumer2 = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .subscriptionType(SubscriptionType.Shared)
        .subscribe();
// Both consumer1 and consumer2 are active consumers.

```

In shared subscription mode, multiple consumers can attach to the same subscription and messages are delivered in a round robin distribution across consumers.

If a broker dispatches only one message at a time, consumer1 receives the following information.

```

("key-1", "message-1-1")
("key-1", "message-1-3")
("key-2", "message-2-2")
("key-3", "message-3-1")
("key-4", "message-4-1")

```

consumer2 receives the following information.

```

("key-1", "message-1-2")
("key-2", "message-2-1")
("key-2", "message-2-3")
("key-3", "message-3-2")
("key-4", "message-4-2")

```

The `Shared` subscription mode is different from the `Exclusive` and `Failover` subscription modes: it provides better flexibility, but cannot provide ordering guarantees.

#### Key_shared

The `Key_Shared` subscription mode is available since the 2.4.0 release. Create new consumers and subscribe with the `Key_Shared` subscription mode.

```java

Consumer consumer1 = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .subscriptionType(SubscriptionType.Key_Shared)
        .subscribe();

Consumer consumer2 = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .subscriptionType(SubscriptionType.Key_Shared)
        .subscribe();
// Both consumer1 and consumer2 are active consumers.

```

Just like in the `Shared` subscription mode, all consumers in the `Key_Shared` subscription mode can attach to the same subscription. But the `Key_Shared` subscription mode differs in that messages with the same key are delivered to only one consumer, in order. By default, you do not know in advance which keys will be assigned to which consumer, but a given key is assigned to only one consumer at a time.

consumer1 receives the following information.

```

("key-1", "message-1-1")
("key-1", "message-1-2")
("key-1", "message-1-3")
("key-3", "message-3-1")
("key-3", "message-3-2")

```

consumer2 receives the following information.

```

("key-2", "message-2-1")
("key-2", "message-2-2")
("key-2", "message-2-3")
("key-4", "message-4-1")
("key-4", "message-4-2")

```

If batching is enabled at the producer side, messages with different keys are added to a batch by default. The broker will dispatch the batch to the consumer, so the default batch mechanism may break the guaranteed message distribution semantics of the Key_Shared subscription. The producer needs to use the `KeyBasedBatcher`.

```java

Producer producer = client.newProducer()
        .topic("my-topic")
        .batcherBuilder(BatcherBuilder.KEY_BASED)
        .create();

```

Or the producer can disable batching.

```java

Producer producer = client.newProducer()
        .topic("my-topic")
        .enableBatching(false)
        .create();

```

:::note

If the message key is not specified, messages without keys are dispatched to one consumer in order by default.

:::

## Reader

With the [reader interface](concepts-clients.md#reader-interface), Pulsar clients can "manually position" themselves within a topic and read all messages from a specified message onward. The Pulsar API for Java enables you to create {@inject: javadoc:Reader:/client/org/apache/pulsar/client/api/Reader} objects by specifying a topic and a {@inject: javadoc:MessageId:/client/org/apache/pulsar/client/api/MessageId}.

The following is an example.

```java

byte[] msgIdBytes = // Some message ID byte array
MessageId id = MessageId.fromByteArray(msgIdBytes);
Reader reader = pulsarClient.newReader()
        .topic(topic)
        .startMessageId(id)
        .create();

while (true) {
    Message message = reader.readNext();
    // Process message
}

```

In the example above, a `Reader` object is instantiated for a specific topic and message (by ID); the reader iterates over each message in the topic after the message identified by `msgIdBytes` (how that value is obtained depends on the application).

The code sample above shows pointing the `Reader` object to a specific message (by ID), but you can also use `MessageId.earliest` to point to the earliest available message on the topic or `MessageId.latest` to point to the most recent available message.

### Configure reader

When you create a reader, you can use the `loadConf` configuration. The following parameters are available in `loadConf`.

| Name | Type | Description | Default |
|---|---|---|---|
`topicName` | String | Topic name. | None
`receiverQueueSize` | int | Size of a consumer's receiver queue.<br /><br />For example, the number of messages that can be accumulated by a consumer before an application calls `Receive`.<br /><br />A value higher than the default value increases consumer throughput, though at the expense of more memory utilization. | 1000
`readerListener` | ReaderListener&lt;T&gt; | A listener that is called for each received message. | None
`readerName` | String | Reader name. | null
`subscriptionName` | String | Subscription name | When there is a single topic, the default subscription name is `"reader-" + 10-digit UUID`.<br />When there are multiple topics, the default subscription name is `"multiTopicsReader-" + 10-digit UUID`.
`subscriptionRolePrefix` | String | Prefix of the subscription role. | null
`cryptoKeyReader` | CryptoKeyReader | Interface that abstracts the access to a key store. | null
`cryptoFailureAction` | ConsumerCryptoFailureAction | The action the reader takes when it receives a message that cannot be decrypted.<br />- **FAIL**: this is the default option; fail messages until crypto succeeds.<br />- **DISCARD**: silently acknowledge and do not deliver the message to the application.<br />- **CONSUME**: deliver encrypted messages to the application. It is the application's responsibility to decrypt the message.<br /><br />The message decompression fails.<br /><br />If messages contain batch messages, a client is not able to retrieve individual messages in the batch.<br /><br />The delivered encrypted message contains {@link EncryptionContext}, which carries the encryption and compression information needed for the application to decrypt the consumed message payload. | ConsumerCryptoFailureAction.FAIL
`readCompacted` | boolean | If enabling `readCompacted`, a consumer reads messages from a compacted topic rather than a full message backlog of a topic.<br /><br />A consumer only sees the latest value for each key in the compacted topic, up until reaching the point in the topic message when compacting backlog. Beyond that point, messages are sent as normal.<br /><br />`readCompacted` can only be enabled on subscriptions to persistent topics, which have a single active consumer (for example, failover or exclusive subscriptions).<br /><br />Attempting to enable it on subscriptions to non-persistent topics or on shared subscriptions leads to a subscription call throwing a `PulsarClientException`. | false
`resetIncludeHead` | boolean | If set to true, the first message to be returned is the one specified by `messageId`.<br /><br />If set to false, the first message to be returned is the one next to the message specified by `messageId`. | false

### Sticky key range reader

With a sticky key range reader, the broker only dispatches messages whose message-key hash falls within the specified key hash ranges. Multiple key hash ranges can be specified on a reader.

The following is an example of creating a sticky key range reader.

```java

pulsarClient.newReader()
        .topic(topic)
        .startMessageId(MessageId.earliest)
        .keyHashRange(Range.of(0, 10000), Range.of(20001, 30000))
        .create();

```

The total hash range size is 65536, so the max end of a range should be less than or equal to 65535.

## Schema

In Pulsar, all message data consists of byte arrays "under the hood." [Message schemas](schema-get-started.md) enable you to use other types of data when constructing and handling messages (from simple types like strings to more complex, application-specific types). If you construct, say, a [producer](#producer) without specifying a schema, then the producer can only produce messages of type `byte[]`. The following is an example.

```java

Producer<byte[]> producer = client.newProducer()
        .topic(topic)
        .create();

```

The producer above is equivalent to a `Producer<byte[]>` (in fact, you should *always* explicitly specify the type). If you'd like to use a producer for a different type of data, you'll need to specify a **schema** that informs Pulsar which data type will be transmitted over the [topic](reference-terminology.md#topic).

### AvroBaseStructSchema example

Let's say that you have a `SensorReading` class that you'd like to transmit over a Pulsar topic:

```java

public class SensorReading {
    public float temperature;

    public SensorReading(float temperature) {
        this.temperature = temperature;
    }

    // A no-arg constructor is required
    public SensorReading() {
    }

    public float getTemperature() {
        return temperature;
    }

    public void setTemperature(float temperature) {
        this.temperature = temperature;
    }
}

```

You could then create a `Producer<SensorReading>` (or `Consumer<SensorReading>`) like this:

```java

Producer<SensorReading> producer = client.newProducer(JSONSchema.of(SensorReading.class))
        .topic("sensor-readings")
        .create();

```

The following schema formats are currently available for Java:

* No schema or the byte array schema (which can be applied using `Schema.BYTES`):

  ```java

  Producer<byte[]> bytesProducer = client.newProducer(Schema.BYTES)
      .topic("some-raw-bytes-topic")
      .create();

  ```

  Or, equivalently:

  ```java

  Producer<byte[]> bytesProducer = client.newProducer()
      .topic("some-raw-bytes-topic")
      .create();

  ```

* `String` for normal UTF-8-encoded string data. Apply the schema using `Schema.STRING`:

  ```java

  Producer<String> stringProducer = client.newProducer(Schema.STRING)
      .topic("some-string-topic")
      .create();

  ```

* Create JSON schemas for POJOs using `Schema.JSON`. The following is an example.

  ```java

  Producer<MyPojo> pojoProducer = client.newProducer(Schema.JSON(MyPojo.class))
      .topic("some-pojo-topic")
      .create();

  ```

* Generate Protobuf schemas using `Schema.PROTOBUF`. The following example shows how to create the Protobuf schema and use it to instantiate a new producer:

  ```java

  Producer<MyProtobuf> protobufProducer = client.newProducer(Schema.PROTOBUF(MyProtobuf.class))
      .topic("some-protobuf-topic")
      .create();

  ```

* Define Avro schemas with `Schema.AVRO`.
The following code snippet demonstrates how to create and use an Avro schema.
-
-  ```java
-
-  Producer<MyAvro> avroProducer = client.newProducer(Schema.AVRO(MyAvro.class))
-      .topic("some-avro-topic")
-      .create();
-
-  ```
-
-### ProtobufNativeSchema example
-
-For an example of `ProtobufNativeSchema`, see [`SchemaDefinition` in `Complex type`](schema-understand.md#complex-type).
-
-## Authentication
-
-Pulsar currently supports three authentication schemes: [TLS](security-tls-authentication.md), [Athenz](security-athenz.md), and [Oauth2](security-oauth2.md). You can use the Pulsar Java client with all of them.
-
-### TLS Authentication
-
-To use [TLS](security-tls-authentication.md), note that the `enableTls` method is deprecated; instead, use the `pulsar+ssl://` scheme in the `serviceUrl` to enable TLS, point your Pulsar client to the trusted TLS cert path, and provide paths to the client cert and key files.
-
-The following is an example.
-
-```java
-
-Map<String, String> authParams = new HashMap<>();
-authParams.put("tlsCertFile", "/path/to/client-cert.pem");
-authParams.put("tlsKeyFile", "/path/to/client-key.pem");
-
-Authentication tlsAuth = AuthenticationFactory
-    .create(AuthenticationTls.class.getName(), authParams);
-
-PulsarClient client = PulsarClient.builder()
-    .serviceUrl("pulsar+ssl://my-broker.com:6651")
-    .tlsTrustCertsFilePath("/path/to/cacert.pem")
-    .authentication(tlsAuth)
-    .build();
-
-```
-
-### Athenz
-
-To use [Athenz](security-athenz.md) as an authentication provider, you need to [use TLS](#tls-authentication) and provide values for four parameters in a hash:
-
-* `tenantDomain`
-* `tenantService`
-* `providerDomain`
-* `privateKey`
-
-You can also set an optional `keyId`. The following is an example.
-
-```java
-
-Map<String, String> authParams = new HashMap<>();
-authParams.put("tenantDomain", "shopping"); // Tenant domain name
-authParams.put("tenantService", "some_app"); // Tenant service name
-authParams.put("providerDomain", "pulsar"); // Provider domain name
-authParams.put("privateKey", "file:///path/to/private.pem"); // Tenant private key path
-authParams.put("keyId", "v1"); // Key id for the tenant private key (optional, default: "0")
-
-Authentication athenzAuth = AuthenticationFactory
-    .create(AuthenticationAthenz.class.getName(), authParams);
-
-PulsarClient client = PulsarClient.builder()
-    .serviceUrl("pulsar+ssl://my-broker.com:6651")
-    .tlsTrustCertsFilePath("/path/to/cacert.pem")
-    .authentication(athenzAuth)
-    .build();
-
-```
-
-> #### Supported pattern formats
-> The `privateKey` parameter supports the following three pattern formats:
-> * `file:///path/to/file`
-> * `file:/path/to/file`
-> * `data:application/x-pem-file;base64,`
-
-### Oauth2
-
-The following example shows how to use [Oauth2](security-oauth2.md) as an authentication provider for the Pulsar Java client.
-
-You can use the factory method to configure authentication for the Pulsar Java client.
-
-```java
-
-PulsarClient client = PulsarClient.builder()
-    .serviceUrl("pulsar://broker.example.com:6650/")
-    .authentication(
-        AuthenticationFactoryOAuth2.clientCredentials(this.issuerUrl, this.credentialsUrl, this.audience))
-    .build();
-
-```
-
-In addition, you can also use the encoded parameters to configure authentication for the Pulsar Java client.
-
-```java
-
-Authentication auth = AuthenticationFactory
-    .create(AuthenticationOAuth2.class.getName(),
-        "{\"type\":\"client_credentials\",\"privateKey\":\"...\",\"issuerUrl\":\"...\",\"audience\":\"...\"}");
-PulsarClient client = PulsarClient.builder()
-    .serviceUrl("pulsar://broker.example.com:6650/")
-    .authentication(auth)
-    .build();
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/client-libraries-node.md b/site2/website/versioned_docs/version-2.9.2-deprecated/client-libraries-node.md
deleted file mode 100644
index 1ff37b26294666..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/client-libraries-node.md
+++ /dev/null
@@ -1,643 +0,0 @@
----
-id: client-libraries-node
-title: The Pulsar Node.js client
-sidebar_label: "Node.js"
-original_id: client-libraries-node
----
-
-The Pulsar Node.js client can be used to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Node.js.
-
-All the methods in [producers](#producers), [consumers](#consumers), and [readers](#readers) of a Node.js client are thread-safe.
-
-For 1.3.0 or later versions, [type definitions](https://github.com/apache/pulsar-client-node/blob/master/index.d.ts) used in TypeScript are available.
-
-## Installation
-
-You can install the [`pulsar-client`](https://www.npmjs.com/package/pulsar-client) library via [npm](https://www.npmjs.com/).
-
-### Requirements
-The Pulsar Node.js client library is based on the C++ client library.
-Follow [these instructions](client-libraries-cpp.md#compilation) to install the Pulsar C++ client library first.
-
-### Compatibility
-
-Compatibility between each version of the Node.js client and the C++ client is as follows:
-
-| Node.js client | C++ client |
-| :------------- | :------------- |
-| 1.0.0 | 2.3.0 or later |
-| 1.1.0 | 2.4.0 or later |
-| 1.2.0 | 2.5.0 or later |
-
-If an incompatible version of the C++ client is installed, this library may fail to build or run.
-
-### Installation using npm
-
-Install the `pulsar-client` library via [npm](https://www.npmjs.com/):
-
-```shell
-
-$ npm install pulsar-client
-
-```
-
-:::note
-
-This library works only in Node.js 10.x or later because it uses the [`node-addon-api`](https://github.com/nodejs/node-addon-api) module to wrap the C++ library.
-
-:::
-
-## Connection URLs
-To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL.
-
-Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme, and have a default port of 6650. Here is an example for `localhost`:
-
-```http
-
-pulsar://localhost:6650
-
-```
-
-A URL for a production Pulsar cluster may look something like this:
-
-```http
-
-pulsar://pulsar.us-west.example.com:6650
-
-```
-
-If you are using [TLS encryption](security-tls-transport.md) or [TLS Authentication](security-tls-authentication.md), the URL looks like this:
-
-```http
-
-pulsar+ssl://pulsar.us-west.example.com:6651
-
-```
-
-## Create a client
-
-In order to interact with Pulsar, you first need a client object. You can create a client instance using a `new` operator and the `Client` method, passing in a client options object (more on configuration [below](#client-configuration)).
-
-Here is an example:
-
-```JavaScript
-
-const Pulsar = require('pulsar-client');
-
-(async () => {
-  const client = new Pulsar.Client({
-    serviceUrl: 'pulsar://localhost:6650',
-  });
-
-  await client.close();
-})();
-
-```
-
-### Client configuration
-
-The following configurable parameters are available for Pulsar clients:
-
-| Parameter | Description | Default |
-| :-------- | :---------- | :------ |
-| `serviceUrl` | The connection URL for the Pulsar cluster. See [above](#connection-urls) for more info. | |
-| `authentication` | Configure the authentication provider. (default: no authentication). See [TLS Authentication](security-tls-authentication.md) for more info. | |
-| `operationTimeoutSeconds` | The timeout for Node.js client operations (creating producers, subscribing to and unsubscribing from [topics](reference-terminology.md#topic)). Retries occur until this threshold is reached, at which point the operation fails. | 30 |
-| `ioThreads` | The number of threads to use for handling connections to Pulsar [brokers](reference-terminology.md#broker). | 1 |
-| `messageListenerThreads` | The number of threads used by message listeners ([consumers](#consumers) and [readers](#readers)). | 1 |
-| `concurrentLookupRequest` | The number of concurrent lookup requests that can be sent on each broker connection. Setting a maximum helps to keep from overloading brokers. You should set values over the default of 50000 only if the client needs to produce and/or subscribe to thousands of Pulsar topics. | 50000 |
-| `tlsTrustCertsFilePath` | The file path for the trusted TLS certificate. | |
-| `tlsValidateHostname` | Whether to enable TLS hostname verification. | `false` |
-| `tlsAllowInsecureConnection` | Whether the Pulsar client accepts an untrusted TLS certificate from the broker. | `false` |
-| `statsIntervalInSeconds` | Interval between stats reports. Stats are activated when the interval is set to a positive value. The value should be at least 1 second. | 600 |
-| `log` | A function that is used for logging. | `console.log` |
-
-## Producers
-
-Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Node.js producers using a producer configuration object.
-
-Here is an example:
-
-```JavaScript
-
-const producer = await client.createProducer({
-  topic: 'my-topic',
-});
-
-await producer.send({
-  data: Buffer.from("Hello, Pulsar"),
-});
-
-await producer.close();
-
-```
-
-> #### Promise operation
-> When you create a new Pulsar producer, the operation returns a `Promise` object through which you get the producer instance or an error via the executor function.
-> This example uses the `await` operator instead of the executor function.
-
-### Producer operations
-
-Pulsar Node.js producers have the following methods available:
-
-| Method | Description | Return type |
-| :----- | :---------- | :---------- |
-| `send(Object)` | Publishes a [message](#messages) to the producer's topic. When the message is successfully acknowledged by the Pulsar broker, or an error is thrown, the Promise object, whose result is the message ID, runs its executor function. | `Promise` |
-| `flush()` | Sends messages from the send queue to the Pulsar broker. When the messages are successfully acknowledged by the Pulsar broker, or an error is thrown, the Promise object runs its executor function. | `Promise` |
-| `close()` | Closes the producer and releases all resources allocated to it. Once `close()` is called, no more messages are accepted from the publisher.
This method returns a Promise object. It runs the executor function when all pending publish requests are persisted by Pulsar. If an error is thrown, no pending writes are retried. | `Promise` |
-| `getProducerName()` | Getter method of the producer name. | `string` |
-| `getTopic()` | Getter method of the name of the topic. | `string` |
-
-### Producer configuration
-
-| Parameter | Description | Default |
-| :-------- | :---------- | :------ |
-| `topic` | The Pulsar [topic](reference-terminology.md#topic) to which the producer publishes messages. | |
-| `producerName` | A name for the producer. If you do not explicitly assign a name, Pulsar automatically generates a globally unique name. If you choose to explicitly assign a name, it needs to be unique across *all* Pulsar clusters, otherwise the creation operation throws an error. | |
-| `sendTimeoutMs` | When publishing a message to a topic, the producer waits for an acknowledgment from the responsible Pulsar [broker](reference-terminology.md#broker). If a message is not acknowledged within the threshold set by this parameter, an error is thrown. If you set `sendTimeoutMs` to -1, the timeout is set to infinity (and thus removed). Removing the send timeout is recommended when using Pulsar's [message de-duplication](cookbooks-deduplication.md) feature. | 30000 |
-| `initialSequenceId` | The initial sequence ID of the message. When the producer sends a message, it attaches a sequence ID to it, and the ID is incremented for each message sent. | |
-| `maxPendingMessages` | The maximum size of the queue holding pending messages (i.e. messages waiting to receive an acknowledgment from the [broker](reference-terminology.md#broker)). By default, when the queue is full, all calls to the `send` method fail *unless* `blockIfQueueFull` is set to `true`. | 1000 |
-| `maxPendingMessagesAcrossPartitions` | The maximum size of the sum of the partitions' pending queues. | 50000 |
-| `blockIfQueueFull` | If set to `true`, the producer's `send` method waits when the outgoing message queue is full rather than failing and throwing an error (the size of that queue is dictated by the `maxPendingMessages` parameter); if set to `false` (the default), `send` operations fail and throw an error when the queue is full. | `false` |
-| `messageRoutingMode` | The message routing logic (for producers on [partitioned topics](concepts-messaging.md#partitioned-topics)). This logic is applied only when no key is set on messages. The available options are: round robin (`RoundRobinDistribution`), or publishing all messages to a single partition (`UseSinglePartition`, the default). | `UseSinglePartition` |
-| `hashingScheme` | The hashing function that determines the partition on which a particular message is published (partitioned topics only). The available options are: `JavaStringHash` (the equivalent of `String.hashCode()` in Java), `Murmur3_32Hash` (applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function), or `BoostHash` (applies the hashing function from C++'s [Boost](https://www.boost.org/doc/libs/1_62_0/doc/html/hash.html) library). | `BoostHash` |
-| `compressionType` | The message data compression type used by the producer. The available options are [`LZ4`](https://github.com/lz4/lz4), [`Zlib`](https://zlib.net/), [`ZSTD`](https://github.com/facebook/zstd/), and [`Snappy`](https://github.com/google/snappy/). | Compression None |
-| `batchingEnabled` | If set to `true`, the producer sends messages in batches.
| `true` |
-| `batchingMaxPublishDelayMs` | The maximum delay, in milliseconds, before a batch of messages is sent. | 10 |
-| `batchingMaxMessages` | The maximum number of messages in a batch. | 1000 |
-| `properties` | The metadata of the producer. | |
-
-### Producer example
-
-This example creates a Node.js producer for the `my-topic` topic and sends 10 messages to that topic:
-
-```JavaScript
-
-const Pulsar = require('pulsar-client');
-
-(async () => {
-  // Create a client
-  const client = new Pulsar.Client({
-    serviceUrl: 'pulsar://localhost:6650',
-  });
-
-  // Create a producer
-  const producer = await client.createProducer({
-    topic: 'my-topic',
-  });
-
-  // Send messages
-  for (let i = 0; i < 10; i += 1) {
-    const msg = `my-message-${i}`;
-    producer.send({
-      data: Buffer.from(msg),
-    });
-    console.log(`Sent message: ${msg}`);
-  }
-  await producer.flush();
-
-  await producer.close();
-  await client.close();
-})();
-
-```
-
-## Consumers
-
-Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on those topics. You can [configure](#consumer-configuration) Node.js consumers using a consumer configuration object.
-
-Here is an example:
-
-```JavaScript
-
-const consumer = await client.subscribe({
-  topic: 'my-topic',
-  subscription: 'my-subscription',
-});
-
-const msg = await consumer.receive();
-console.log(msg.getData().toString());
-consumer.acknowledge(msg);
-
-await consumer.close();
-
-```
-
-> #### Promise operation
-> When you create a new Pulsar consumer, the operation returns a `Promise` object through which you get the consumer instance or an error via the executor function.
-> This example uses the `await` operator instead of the executor function.
-
-### Consumer operations
-
-Pulsar Node.js consumers have the following methods available:
-
-| Method | Description | Return type |
-| :----- | :---------- | :---------- |
-| `receive()` | Receives a single message from the topic. When a message is available, the Promise object runs its executor function and gets the message object. | `Promise` |
-| `receive(Number)` | Receives a single message from the topic with a specific timeout in milliseconds. | `Promise` |
-| `acknowledge(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message object. | `void` |
-| `acknowledgeId(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID object. | `void` |
-| `acknowledgeCumulative(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message. The `acknowledgeCumulative` method returns void and sends the ack to the broker asynchronously. After that, the messages are *not* redelivered to the consumer. Cumulative acking cannot be used with a [shared](concepts-messaging.md#shared) subscription type. | `void` |
-| `acknowledgeCumulativeId(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message ID. | `void` |
-| `negativeAcknowledge(Message)`| [Negatively acknowledges](reference-terminology.md#negative-acknowledgment-nack) a message to the Pulsar broker by message object. | `void` |
-| `negativeAcknowledgeId(MessageId)` | [Negatively acknowledges](reference-terminology.md#negative-acknowledgment-nack) a message to the Pulsar broker by message ID object.
| `void` |
-| `close()` | Closes the consumer, disabling its ability to receive messages from the broker. | `Promise` |
-| `unsubscribe()` | Unsubscribes the subscription. | `Promise` |
-
-### Consumer configuration
-
-| Parameter | Description | Default |
-| :-------- | :---------- | :------ |
-| `topic` | The Pulsar topic on which the consumer establishes a subscription and listens for messages. | |
-| `topics` | The array of topics. | |
-| `topicsPattern` | The regular expression for topics. | |
-| `subscription` | The subscription name for this consumer. | |
-| `subscriptionType` | Available options are `Exclusive`, `Shared`, `Key_Shared`, and `Failover`. | `Exclusive` |
-| `subscriptionInitialPosition` | The initial position of the cursor when subscribing to a topic for the first time. | `SubscriptionInitialPosition.Latest` |
-| `ackTimeoutMs` | Acknowledge timeout in milliseconds. | 0 |
-| `nAckRedeliverTimeoutMs` | Delay to wait before redelivering messages that failed to be processed. | 60000 |
-| `receiverQueueSize` | Sets the size of the consumer's receiver queue, i.e. the number of messages that can be accumulated by the consumer before the application calls `receive`. A value higher than the default of 1000 could increase consumer throughput, though at the expense of more memory utilization. | 1000 |
-| `receiverQueueSizeAcrossPartitions` | Set the max total receiver queue size across partitions. This setting is used to reduce the receiver queue size for individual partitions if the total exceeds this value. | 50000 |
-| `consumerName` | The name of the consumer. Currently (v2.4.1), [failover](concepts-messaging.md#failover) mode uses the consumer name for ordering. | |
-| `properties` | The metadata of the consumer. | |
-| `listener`| A listener that is called for each received message. | |
-| `readCompacted`| If `readCompacted` is enabled, a consumer reads messages from a compacted topic rather than reading the full message backlog of a topic.

    A consumer only sees the latest value for each key in the compacted topic, up until the point in the topic's message backlog at which compaction took place. Beyond that point, messages are sent as normal.

    `readCompacted` can only be enabled on subscriptions to persistent topics that have a single active consumer (such as failover or exclusive subscriptions).

    Attempting to enable it on subscriptions to non-persistent topics or on shared subscriptions leads to a subscription call throwing a `PulsarClientException`. | false |
-
-### Consumer example
-
-This example creates a Node.js consumer with the `my-subscription` subscription on the `my-topic` topic, receives 10 messages, prints their content, and acknowledges each message to the Pulsar broker:
-
-```JavaScript
-
-const Pulsar = require('pulsar-client');
-
-(async () => {
-  // Create a client
-  const client = new Pulsar.Client({
-    serviceUrl: 'pulsar://localhost:6650',
-  });
-
-  // Create a consumer
-  const consumer = await client.subscribe({
-    topic: 'my-topic',
-    subscription: 'my-subscription',
-    subscriptionType: 'Exclusive',
-  });
-
-  // Receive messages
-  for (let i = 0; i < 10; i += 1) {
-    const msg = await consumer.receive();
-    console.log(msg.getData().toString());
-    consumer.acknowledge(msg);
-  }
-
-  await consumer.close();
-  await client.close();
-})();
-
-```
-
-Alternatively, a consumer can be created with a `listener` to process messages.
-
-```JavaScript
-
-// Create a consumer
-const consumer = await client.subscribe({
-  topic: 'my-topic',
-  subscription: 'my-subscription',
-  subscriptionType: 'Exclusive',
-  listener: (msg, msgConsumer) => {
-    console.log(msg.getData().toString());
-    msgConsumer.acknowledge(msg);
-  },
-});
-
-```
-
-## Readers
-
-Pulsar readers process messages from Pulsar topics. Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recently unacked message). You can [configure](#reader-configuration) Node.js readers using a reader configuration object.
-
-Here is an example:
-
-```JavaScript
-
-const reader = await client.createReader({
-  topic: 'my-topic',
-  startMessageId: Pulsar.MessageId.earliest(),
-});
-
-const msg = await reader.readNext();
-console.log(msg.getData().toString());
-
-await reader.close();
-
-```
-
-### Reader operations
-
-Pulsar Node.js readers have the following methods available:
-
-| Method | Description | Return type |
-| :----- | :---------- | :---------- |
-| `readNext()` | Receives the next message on the topic (analogous to the `receive` method for [consumers](#consumer-operations)). When a message is available, the Promise object runs its executor function and gets the message object. | `Promise` |
-| `readNext(Number)` | Receives a single message from the topic with a specific timeout in milliseconds. | `Promise` |
-| `hasNext()` | Returns whether the broker has a next message in the target topic. | `Boolean` |
-| `close()` | Closes the reader, disabling its ability to receive messages from the broker. | `Promise` |
-
-### Reader configuration
-
-| Parameter | Description | Default |
-| :-------- | :---------- | :------ |
-| `topic` | The Pulsar [topic](reference-terminology.md#topic) on which the reader establishes a subscription and listens for messages. | |
-| `startMessageId` | The initial reader position, i.e. the message at which the reader begins processing messages. The options are `Pulsar.MessageId.earliest` (the earliest available message on the topic), `Pulsar.MessageId.latest` (the latest available message on the topic), or a message ID object for a position that is not earliest or latest. | |
-| `receiverQueueSize` | Sets the size of the reader's receiver queue, i.e. the number of messages that can be accumulated by the reader before the application calls `readNext`. A value higher than the default of 1000 could increase reader throughput, though at the expense of more memory utilization. | 1000 |
-| `readerName` | The name of the reader. | |
-| `subscriptionRolePrefix` | The subscription role prefix. | |
-| `readCompacted` | If `readCompacted` is enabled, a consumer reads messages from a compacted topic rather than reading the full message backlog of a topic.

    A consumer only sees the latest value for each key in the compacted topic, up until the point in the topic's message backlog at which compaction took place. Beyond that point, messages are sent as normal.

    `readCompacted` can only be enabled on subscriptions to persistent topics that have a single active consumer (such as failover or exclusive subscriptions).

    Attempting to enable it on subscriptions to non-persistent topics or on shared subscriptions leads to a subscription call throwing a `PulsarClientException`. | `false` | - - -### Reader example - -This example creates a Node.js reader with the `my-topic` topic, reads messages, and prints the content that arrive for 10 times: - -```JavaScript - -const Pulsar = require('pulsar-client'); - -(async () => { - // Create a client - const client = new Pulsar.Client({ - serviceUrl: 'pulsar://localhost:6650', - operationTimeoutSeconds: 30, - }); - - // Create a reader - const reader = await client.createReader({ - topic: 'my-topic', - startMessageId: Pulsar.MessageId.earliest(), - }); - - // read messages - for (let i = 0; i < 10; i += 1) { - const msg = await reader.readNext(); - console.log(msg.getData().toString()); - } - - await reader.close(); - await client.close(); -})(); - -``` - -## Messages - -In Pulsar Node.js client, you have to construct producer message object for producer. - -Here is an example message: - -```JavaScript - -const msg = { - data: Buffer.from('Hello, Pulsar'), - partitionKey: 'key1', - properties: { - 'foo': 'bar', - }, - eventTimestamp: Date.now(), - replicationClusters: [ - 'cluster1', - 'cluster2', - ], -} - -await producer.send(msg); - -``` - -The following keys are available for producer message objects: - -| Parameter | Description | -| :-------- | :---------- | -| `data` | The actual data payload of the message. | -| `properties` | A Object for any application-specific metadata attached to the message. | -| `eventTimestamp` | The timestamp associated with the message. | -| `sequenceId` | The sequence ID of the message. | -| `partitionKey` | The optional key associated with the message (particularly useful for things like topic compaction). | -| `replicationClusters` | The clusters to which this message is replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default. | -| `deliverAt` | The absolute timestamp at or after which the message is delivered. | | -| `deliverAfter` | The relative delay after which the message is delivered. | | - -### Message object operations - -In Pulsar Node.js client, you can receive (or read) message object as consumer (or reader). - -The message object have the following methods available: - -| Method | Description | Return type | -| :----- | :---------- | :---------- | -| `getTopicName()` | Getter method of topic name. | `String` | -| `getProperties()` | Getter method of properties. | `Array` | -| `getData()` | Getter method of message data. | `Buffer` | -| `getMessageId()` | Getter method of [message id object](#message-id-object-operations). | `Object` | -| `getPublishTimestamp()` | Getter method of publish timestamp. | `Number` | -| `getEventTimestamp()` | Getter method of event timestamp. | `Number` | -| `getRedeliveryCount()` | Getter method of redelivery count. | `Number` | -| `getPartitionKey()` | Getter method of partition key. | `String` | - -### Message ID object operations - -In Pulsar Node.js client, you can get message id object from message object. - -The message id object have the following methods available: - -| Method | Description | Return type | -| :----- | :---------- | :---------- | -| `serialize()` | Serialize the message id into a Buffer for storing. | `Buffer` | -| `toString()` | Get message id as String. | `String` | - -The client has static method of message id object. 
You can access it as `Pulsar.MessageId.someStaticMethod` too. - -The following static methods are available for the message id object: - -| Method | Description | Return type | -| :----- | :---------- | :---------- | -| `earliest()` | MessageId representing the earliest, or oldest available message stored in the topic. | `Object` | -| `latest()` | MessageId representing the latest, or last published message in the topic. | `Object` | -| `deserialize(Buffer)` | Deserialize a message id object from a Buffer. | `Object` | - -## End-to-end encryption - -[End-to-end encryption](https://pulsar.apache.org/docs/en/next/cookbooks-encryption/#docsNav) allows applications to encrypt messages at producers and decrypt at consumers. - -### Configuration - -If you want to use the end-to-end encryption feature in the Node.js client, you need to configure `publicKeyPath` and `privateKeyPath` for both producer and consumer. - -``` - -publicKeyPath: "./public.pem" -privateKeyPath: "./private.pem" - -``` - -### Tutorial - -This section provides step-by-step instructions on how to use the end-to-end encryption feature in the Node.js client. - -**Prerequisite** - -- Pulsar C++ client 2.7.1 or later - -**Step** - -1. Create both public and private key pairs. - - **Input** - - ```shell - - openssl genrsa -out private.pem 2048 - openssl rsa -in private.pem -pubout -out public.pem - - ``` - -2. Create a producer to send encrypted messages. - - **Input** - - ```nodejs - - const Pulsar = require('pulsar-client'); - - (async () => { - // Create a client - const client = new Pulsar.Client({ - serviceUrl: 'pulsar://localhost:6650', - operationTimeoutSeconds: 30, - }); - - // Create a producer - const producer = await client.createProducer({ - topic: 'persistent://public/default/my-topic', - sendTimeoutMs: 30000, - batchingEnabled: true, - publicKeyPath: "./public.pem", - privateKeyPath: "./private.pem", - encryptionKey: "encryption-key" - }); - - console.log(producer.ProducerConfig) - // Send messages - for (let i = 0; i < 10; i += 1) { - const msg = `my-message-${i}`; - producer.send({ - data: Buffer.from(msg), - }); - console.log(`Sent message: ${msg}`); - } - await producer.flush(); - - await producer.close(); - await client.close(); - })(); - - ``` - -3. Create a consumer to receive encrypted messages. - - **Input** - - ```nodejs - - const Pulsar = require('pulsar-client'); - - (async () => { - // Create a client - const client = new Pulsar.Client({ - serviceUrl: 'pulsar://172.25.0.3:6650', - operationTimeoutSeconds: 30 - }); - - // Create a consumer - const consumer = await client.subscribe({ - topic: 'persistent://public/default/my-topic', - subscription: 'sub1', - subscriptionType: 'Shared', - ackTimeoutMs: 10000, - publicKeyPath: "./public.pem", - privateKeyPath: "./private.pem" - }); - - console.log(consumer) - // Receive messages - for (let i = 0; i < 10; i += 1) { - const msg = await consumer.receive(); - console.log(msg.getData().toString()); - consumer.acknowledge(msg); - } - - await consumer.close(); - await client.close(); - })(); - - ``` - -4. Run the consumer to receive encrypted messages. - - **Input** - - ```shell - - node consumer.js - - ``` - -5. In a new terminal tab, run the producer to produce encrypted messages. - - **Input** - - ```shell - - node producer.js - - ``` - - Now you can see the producer sends messages and the consumer receives messages successfully. - - **Output** - - This is from the producer side. 
- - ``` - - Sent message: my-message-0 - Sent message: my-message-1 - Sent message: my-message-2 - Sent message: my-message-3 - Sent message: my-message-4 - Sent message: my-message-5 - Sent message: my-message-6 - Sent message: my-message-7 - Sent message: my-message-8 - Sent message: my-message-9 - - ``` - - This is from the consumer side. - - ``` - - my-message-0 - my-message-1 - my-message-2 - my-message-3 - my-message-4 - my-message-5 - my-message-6 - my-message-7 - my-message-8 - my-message-9 - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/client-libraries-python.md b/site2/website/versioned_docs/version-2.9.2-deprecated/client-libraries-python.md deleted file mode 100644 index 90cc840daa0a81..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/client-libraries-python.md +++ /dev/null @@ -1,481 +0,0 @@ ---- -id: client-libraries-python -title: Pulsar Python client -sidebar_label: "Python" -original_id: client-libraries-python ---- - -Pulsar Python client library is a wrapper over the existing [C++ client library](client-libraries-cpp.md) and exposes all of the [same features](/api/cpp). You can find the code in the [Python directory](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp/python) of the C++ client code. - -All the methods in producer, consumer, and reader of a Python client are thread-safe. - -[pdoc](https://github.com/BurntSushi/pdoc)-generated API docs for the Python client are available [here](/api/python). - -## Install - -You can install the [`pulsar-client`](https://pypi.python.org/pypi/pulsar-client) library either via [PyPi](https://pypi.python.org/pypi), using [pip](#installation-using-pip), or by building the library from [source](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp). - -### Install using pip - -To install the `pulsar-client` library as a pre-built package using the [pip](https://pip.pypa.io/en/stable/) package manager: - -```shell - -$ pip install pulsar-client==@pulsar:version_number@ - -``` - -### Optional dependencies -If you install the client libraries on Linux to support services like Pulsar functions or Avro serialization, you can install optional components alongside the `pulsar-client` library. - -```shell - -# avro serialization -$ pip install pulsar-client=='@pulsar:version_number@[avro]' - -# functions runtime -$ pip install pulsar-client=='@pulsar:version_number@[functions]' - -# all optional components -$ pip install pulsar-client=='@pulsar:version_number@[all]' - -``` - -Installation via PyPi is available for the following Python versions: - -Platform | Supported Python versions -:--------|:------------------------- -MacOS
    10.13 (High Sierra), 10.14 (Mojave)
    | 2.7, 3.7 -Linux | 2.7, 3.4, 3.5, 3.6, 3.7, 3.8 - -### Install from source - -To install the `pulsar-client` library by building from source, follow [instructions](client-libraries-cpp.md#compilation) and compile the Pulsar C++ client library. That builds the Python binding for the library. - -To install the built Python bindings: - -```shell - -$ git clone https://github.com/apache/pulsar -$ cd pulsar/pulsar-client-cpp/python -$ sudo python setup.py install - -``` - -## API Reference - -The complete Python API reference is available at [api/python](/api/python). - -## Examples - -You can find a variety of Python code examples for the [pulsar-client](/pulsar-client-cpp/python) library. - -### Producer example - -The following example creates a Python producer for the `my-topic` topic and sends 10 messages on that topic: - -```python - -import pulsar - -client = pulsar.Client('pulsar://localhost:6650') - -producer = client.create_producer('my-topic') - -for i in range(10): - producer.send(('Hello-%d' % i).encode('utf-8')) - -client.close() - -``` - -### Consumer example - -The following example creates a consumer with the `my-subscription` subscription name on the `my-topic` topic, receives incoming messages, prints the content and ID of messages that arrive, and acknowledges each message to the Pulsar broker. - -```python - -import pulsar - -client = pulsar.Client('pulsar://localhost:6650') - -consumer = client.subscribe('my-topic', 'my-subscription') - -while True: - msg = consumer.receive() - try: - print("Received message '{}' id='{}'".format(msg.data(), msg.message_id())) - # Acknowledge successful processing of the message - consumer.acknowledge(msg) - except: - # Message failed to be processed - consumer.negative_acknowledge(msg) - -client.close() - -``` - -This example shows how to configure negative acknowledgement. - -```python - -from pulsar import Client, schema -client = Client('pulsar://localhost:6650') -consumer = client.subscribe('negative_acks','test',schema=schema.StringSchema()) -producer = client.create_producer('negative_acks',schema=schema.StringSchema()) -for i in range(10): - print('send msg "hello-%d"' % i) - producer.send_async('hello-%d' % i, callback=None) -producer.flush() -for i in range(10): - msg = consumer.receive() - consumer.negative_acknowledge(msg) - print('receive and nack msg "%s"' % msg.data()) -for i in range(10): - msg = consumer.receive() - consumer.acknowledge(msg) - print('receive and ack msg "%s"' % msg.data()) -try: - # No more messages expected - msg = consumer.receive(100) -except: - print("no more msg") - pass - -``` - -### Reader interface example - -You can use the Pulsar Python API to use the Pulsar [reader interface](concepts-clients.md#reader-interface). Here's an example: - -```python - -# MessageId taken from a previously fetched message -msg_id = msg.message_id() - -reader = client.create_reader('my-topic', msg_id) - -while True: - msg = reader.read_next() - print("Received message '{}' id='{}'".format(msg.data(), msg.message_id())) - # No acknowledgment - -``` - -### Multi-topic subscriptions - -In addition to subscribing a consumer to a single Pulsar topic, you can also subscribe to multiple topics simultaneously. To use multi-topic subscriptions, you can supply a regular expression (regex) or a `List` of topics. If you select topics via regex, all topics must be within the same Pulsar namespace. 
- -The following is an example: - -```python - -import re -consumer = client.subscribe(re.compile('persistent://public/default/topic-*'), 'my-subscription') -while True: - msg = consumer.receive() - try: - print("Received message '{}' id='{}'".format(msg.data(), msg.message_id())) - # Acknowledge successful processing of the message - consumer.acknowledge(msg) - except: - # Message failed to be processed - consumer.negative_acknowledge(msg) -client.close() - -``` - -## Schema - -### Declare and validate schema - -You can declare a schema by passing a class that inherits -from `pulsar.schema.Record` and defines the fields as -class variables. For example: - -```python - -from pulsar.schema import * - -class Example(Record): - a = String() - b = Integer() - c = Boolean() - -``` - -With this simple schema definition, you can create producers, consumers and readers instances that refer to that. - -```python - -producer = client.create_producer( - topic='my-topic', - schema=AvroSchema(Example) ) - -producer.send(Example(a='Hello', b=1)) - -``` - -After creating the producer, the Pulsar broker validates that the existing topic schema is indeed of "Avro" type and that the format is compatible with the schema definition of the `Example` class. - -If there is a mismatch, an exception occurs in the producer creation. - -Once a producer is created with a certain schema definition, -it will only accept objects that are instances of the declared -schema class. - -Similarly, for a consumer/reader, the consumer will return an -object, instance of the schema record class, rather than the raw -bytes: - -```python - -consumer = client.subscribe( - topic='my-topic', - subscription_name='my-subscription', - schema=AvroSchema(Example) ) - -while True: - msg = consumer.receive() - ex = msg.value() - try: - print("Received message a={} b={} c={}".format(ex.a, ex.b, ex.c)) - # Acknowledge successful processing of the message - consumer.acknowledge(msg) - except: - # Message failed to be processed - consumer.negative_acknowledge(msg) - -``` - -### Supported schema types - -You can use different builtin schema types in Pulsar. All the definitions are in the `pulsar.schema` package. - -| Schema | Notes | -| ------ | ----- | -| `BytesSchema` | Get the raw payload as a `bytes` object. No serialization/deserialization are performed. This is the default schema mode | -| `StringSchema` | Encode/decode payload as a UTF-8 string. Uses `str` objects | -| `JsonSchema` | Require record definition. Serializes the record into standard JSON payload | -| `AvroSchema` | Require record definition. Serializes in AVRO format | - -### Schema definition reference - -The schema definition is done through a class that inherits from `pulsar.schema.Record`. - -This class has a number of fields which can be of either -`pulsar.schema.Field` type or another nested `Record`. All the -fields are specified in the `pulsar.schema` package. The fields -are matching the AVRO fields types. - -| Field Type | Python Type | Notes | -| ---------- | ----------- | ----- | -| `Boolean` | `bool` | | -| `Integer` | `int` | | -| `Long` | `int` | | -| `Float` | `float` | | -| `Double` | `float` | | -| `Bytes` | `bytes` | | -| `String` | `str` | | -| `Array` | `list` | Need to specify record type for items. | -| `Map` | `dict` | Key is always `String`. Need to specify value type. | - -Additionally, any Python `Enum` type can be used as a valid field type. - -#### Fields parameters - -When adding a field, you can use these parameters in the constructor. 
- -| Argument | Default | Notes | -| ---------- | --------| ----- | -| `default` | `None` | Set a default value for the field. Eg: `a = Integer(default=5)` | -| `required` | `False` | Mark the field as "required". It is set in the schema accordingly. | - -#### Schema definition examples - -##### Simple definition - -```python - -class Example(Record): - a = String() - b = Integer() - c = Array(String()) - i = Map(String()) - -``` - -##### Using enums - -```python - -from enum import Enum - -class Color(Enum): - red = 1 - green = 2 - blue = 3 - -class Example(Record): - name = String() - color = Color - -``` - -##### Complex types - -```python - -class MySubRecord(Record): - x = Integer() - y = Long() - z = String() - -class Example(Record): - a = String() - sub = MySubRecord() - -``` - -##### Set namespace for Avro schema - -Set the namespace for Avro Record schema using the special field `_avro_namespace`. - -```python - -class NamespaceDemo(Record): - _avro_namespace = 'xxx.xxx.xxx' - x = String() - y = Integer() - -``` - -The schema definition is like this. - -``` - -{ - 'name': 'NamespaceDemo', 'namespace': 'xxx.xxx.xxx', 'type': 'record', 'fields': [ - {'name': 'x', 'type': ['null', 'string']}, - {'name': 'y', 'type': ['null', 'int']} - ] -} - -``` - -## End-to-end encryption - -[End-to-end encryption](https://pulsar.apache.org/docs/en/next/cookbooks-encryption/#docsNav) allows applications to encrypt messages at producers and decrypt messages at consumers. - -### Configuration - -To use the end-to-end encryption feature in the Python client, you need to configure `publicKeyPath` and `privateKeyPath` for both producer and consumer. - -``` - -publicKeyPath: "./public.pem" -privateKeyPath: "./private.pem" - -``` - -### Tutorial - -This section provides step-by-step instructions on how to use the end-to-end encryption feature in the Python client. - -**Prerequisite** - -- Pulsar Python client 2.7.1 or later - -**Step** - -1. Create both public and private key pairs. - - **Input** - - ```shell - - openssl genrsa -out private.pem 2048 - openssl rsa -in private.pem -pubout -out public.pem - - ``` - -2. Create a producer to send encrypted messages. - - **Input** - - ```python - - import pulsar - - publicKeyPath = "./public.pem" - privateKeyPath = "./private.pem" - crypto_key_reader = pulsar.CryptoKeyReader(publicKeyPath, privateKeyPath) - client = pulsar.Client('pulsar://localhost:6650') - producer = client.create_producer(topic='encryption', encryption_key='encryption', crypto_key_reader=crypto_key_reader) - producer.send('encryption message'.encode('utf8')) - print('sent message') - producer.close() - client.close() - - ``` - -3. Create a consumer to receive encrypted messages. - - **Input** - - ```python - - import pulsar - - publicKeyPath = "./public.pem" - privateKeyPath = "./private.pem" - crypto_key_reader = pulsar.CryptoKeyReader(publicKeyPath, privateKeyPath) - client = pulsar.Client('pulsar://localhost:6650') - consumer = client.subscribe(topic='encryption', subscription_name='encryption-sub', crypto_key_reader=crypto_key_reader) - msg = consumer.receive() - print("Received msg '{}' id = '{}'".format(msg.data(), msg.message_id())) - consumer.close() - client.close() - - ``` - -4. Run the consumer to receive encrypted messages. - - **Input** - - ```shell - - python consumer.py - - ``` - -5. In a new terminal tab, run the producer to produce encrypted messages. 
- - **Input** - - ```shell - - python producer.py - - ``` - - Now you can see the producer sends messages and the consumer receives messages successfully. - - **Output** - - This is from the producer side. - - ``` - - sent message - - ``` - - This is from the consumer side. - - ``` - - Received msg 'encryption message' id = '(0,0,-1,-1)' - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/client-libraries-websocket.md b/site2/website/versioned_docs/version-2.9.2-deprecated/client-libraries-websocket.md deleted file mode 100644 index 77ec54f803dc5b..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/client-libraries-websocket.md +++ /dev/null @@ -1,662 +0,0 @@ ---- -id: client-libraries-websocket -title: Pulsar WebSocket API -sidebar_label: "WebSocket" -original_id: client-libraries-websocket ---- - -Pulsar [WebSocket](https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API) API provides a simple way to interact with Pulsar using languages that do not have an official [client library](getting-started-clients.md). Through WebSocket, you can publish and consume messages and use features available on the [Client Features Matrix](https://github.com/apache/pulsar/wiki/Client-Features-Matrix) page. - - -> You can use Pulsar WebSocket API with any WebSocket client library. See examples for Python and Node.js [below](#client-examples). - -## Running the WebSocket service - -The standalone variant of Pulsar that we recommend using for [local development](getting-started-standalone.md) already has the WebSocket service enabled. - -In non-standalone mode, there are two ways to deploy the WebSocket service: - -* [embedded](#embedded-with-a-pulsar-broker) with a Pulsar broker -* as a [separate component](#as-a-separate-component) - -### Embedded with a Pulsar broker - -In this mode, the WebSocket service will run within the same HTTP service that's already running in the broker. To enable this mode, set the [`webSocketServiceEnabled`](reference-configuration.md#broker-webSocketServiceEnabled) parameter in the [`conf/broker.conf`](reference-configuration.md#broker) configuration file in your installation. - -```properties - -webSocketServiceEnabled=true - -``` - -### As a separate component - -In this mode, the WebSocket service will be run from a Pulsar [broker](reference-terminology.md#broker) as a separate service. Configuration for this mode is handled in the [`conf/websocket.conf`](reference-configuration.md#websocket) configuration file. 
You'll need to set *at least* the following parameters: - -* [`configurationStoreServers`](reference-configuration.md#websocket-configurationStoreServers) -* [`webServicePort`](reference-configuration.md#websocket-webServicePort) -* [`clusterName`](reference-configuration.md#websocket-clusterName) - -Here's an example: - -```properties - -configurationStoreServers=zk1:2181,zk2:2181,zk3:2181 -webServicePort=8080 -clusterName=my-cluster - -``` - -### Security settings - -To enable TLS encryption on WebSocket service: - -```properties - -tlsEnabled=true -tlsAllowInsecureConnection=false -tlsCertificateFilePath=/path/to/client-websocket.cert.pem -tlsKeyFilePath=/path/to/client-websocket.key-pk8.pem -tlsTrustCertsFilePath=/path/to/ca.cert.pem - -``` - -### Starting the broker - -When the configuration is set, you can start the service using the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) tool: - -```shell - -$ bin/pulsar-daemon start websocket - -``` - -## API Reference - -Pulsar's WebSocket API offers three endpoints for [producing](#producer-endpoint) messages, [consuming](#consumer-endpoint) messages and [reading](#reader-endpoint) messages. - -All exchanges via the WebSocket API use JSON. - -### Authentication - -#### Browser javascript WebSocket client - -Use the query param `token` transport the authentication token. - -```http - -ws://broker-service-url:8080/path?token=token - -``` - -### Producer endpoint - -The producer endpoint requires you to specify a tenant, namespace, and topic in the URL: - -```http - -ws://broker-service-url:8080/ws/v2/producer/persistent/:tenant/:namespace/:topic - -``` - -##### Query param - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`sendTimeoutMillis` | long | no | Send timeout (default: 30 secs) -`batchingEnabled` | boolean | no | Enable batching of messages (default: false) -`batchingMaxMessages` | int | no | Maximum number of messages permitted in a batch (default: 1000) -`maxPendingMessages` | int | no | Set the max size of the internal-queue holding the messages (default: 1000) -`batchingMaxPublishDelay` | long | no | Time period within which the messages will be batched (default: 10ms) -`messageRoutingMode` | string | no | Message [routing mode](https://pulsar.apache.org/api/client/index.html?org/apache/pulsar/client/api/ProducerConfiguration.MessageRoutingMode.html) for the partitioned producer: `SinglePartition`, `RoundRobinPartition` -`compressionType` | string | no | Compression [type](https://pulsar.apache.org/api/client/index.html?org/apache/pulsar/client/api/CompressionType.html): `LZ4`, `ZLIB` -`producerName` | string | no | Specify the name for the producer. Pulsar will enforce only one producer with same name can be publishing on a topic -`initialSequenceId` | long | no | Set the baseline for the sequence ids for messages published by the producer. -`hashingScheme` | string | no | [Hashing function](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.HashingScheme.html) to use when publishing on a partitioned topic: `JavaStringHash`, `Murmur3_32Hash` -`token` | string | no | Authentication token, this is used for the browser javascript client - - -#### Publishing a message - -```json - -{ - "payload": "SGVsbG8gV29ybGQ=", - "properties": {"key1": "value1", "key2": "value2"}, - "context": "1" -} - -``` - -Key | Type | Required? 
| Explanation
-:---|:-----|:----------|:-----------
-`payload` | string | yes | Base-64 encoded payload
-`properties` | key-value pairs | no | Application-defined properties
-`context` | string | no | Application-defined request identifier
-`key` | string | no | For partitioned topics, decides which partition to use
-`replicationClusters` | array | no | Restrict replication to this list of [clusters](reference-terminology.md#cluster), specified by name
-
-
-##### Example success response
-
-```json
-
-{
-   "result": "ok",
-   "messageId": "CAAQAw==",
-   "context": "1"
-}
-
-```
-
-##### Example failure response
-
-```json
-
-{
-   "result": "send-error:3",
-   "errorMsg": "Failed to de-serialize from JSON",
-   "context": "1"
-}
-
-```
-
-Key | Type | Required? | Explanation
-:---|:-----|:----------|:-----------
-`result` | string | yes | `ok` if successful or an error message if unsuccessful
-`messageId` | string | yes | Message ID assigned to the published message
-`context` | string | no | Application-defined request identifier
-
-
-### Consumer endpoint
-
-The consumer endpoint requires you to specify a tenant, namespace, and topic, as well as a subscription, in the URL:
-
-```http
-
-ws://broker-service-url:8080/ws/v2/consumer/persistent/:tenant/:namespace/:topic/:subscription
-
-```
-
-##### Query param
-
-Key | Type | Required? | Explanation
-:---|:-----|:----------|:-----------
-`ackTimeoutMillis` | long | no | Set the timeout for unacked messages (default: 0)
-`subscriptionType` | string | no | [Subscription type](https://pulsar.apache.org/api/client/index.html?org/apache/pulsar/client/api/SubscriptionType.html): `Exclusive`, `Failover`, `Shared`, `Key_Shared`
-`receiverQueueSize` | int | no | Size of the consumer receive queue (default: 1000)
-`consumerName` | string | no | Consumer name
-`priorityLevel` | int | no | Define a [priority](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setPriorityLevel-int-) for the consumer
-`maxRedeliverCount` | int | no | Define a [maxRedeliverCount](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#deadLetterPolicy-org.apache.pulsar.client.api.DeadLetterPolicy-) for the consumer (default: 0). Activates [Dead Letter Topic](https://github.com/apache/pulsar/wiki/PIP-22%3A-Pulsar-Dead-Letter-Topic) feature.
-`deadLetterTopic` | string | no | Define a [deadLetterTopic](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#deadLetterPolicy-org.apache.pulsar.client.api.DeadLetterPolicy-) for the consumer (default: {topic}-{subscription}-DLQ). Activates [Dead Letter Topic](https://github.com/apache/pulsar/wiki/PIP-22%3A-Pulsar-Dead-Letter-Topic) feature.
-`pullMode` | boolean | no | Enable pull mode (default: false). See "Flow Control" below.
-`negativeAckRedeliveryDelay` | int | no | When a message is negatively acknowledged, the delay time before the message is redelivered (in milliseconds). The default value is 60000.
-`token` | string | no | Authentication token; this is used for the browser JavaScript client
-
-NB: these parameters (except `pullMode`) apply to the internal consumer of the WebSocket service.
-So messages will be subject to the redelivery settings as soon as they get into the receive queue,
-even if the client doesn't consume on the WebSocket.
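-
-As a minimal sketch of how these query parameters are applied, the example below simply appends them to the consumer URL. It uses the [`websocket-client`](https://pypi.python.org/pypi/websocket-client) package from the Python examples further down; the topic, subscription, and parameter values are illustrative placeholders, not recommended defaults.
-
-```python
-
-import json
-import websocket
-
-# Consumer endpoint with a few of the query parameters described above
-# appended to the URL (placeholder topic, subscription, and values).
-TOPIC = ('ws://localhost:8080/ws/v2/consumer/persistent/public/default/my-topic/my-sub'
-         '?subscriptionType=Shared&receiverQueueSize=500&negativeAckRedeliveryDelay=30000')
-
-ws = websocket.create_connection(TOPIC)
-msg = json.loads(ws.recv())
-print(msg['messageId'])
-
-# Acknowledge promptly: the redelivery settings above apply from the moment a
-# message enters the internal receive queue.
-ws.send(json.dumps({'messageId': msg['messageId']}))
-ws.close()
-
-```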
-
-##### Receiving messages
-
-The server pushes messages on the WebSocket session:
-
-```json
-
-{
-  "messageId": "CAMQADAA",
-  "payload": "hvXcJvHW7kOSrUn17P2q71RA5SdiXwZBqw==",
-  "properties": {},
-  "publishTime": "2021-10-29T16:01:38.967-07:00",
-  "redeliveryCount": 0,
-  "encryptionContext": {
-    "keys": {
-      "client-rsa.pem": {
-        "keyValue": "jEuwS+PeUzmCo7IfLNxqoj4h7txbLjCQjkwpaw5AWJfZ2xoIdMkOuWDkOsqgFmWwxiecakS6GOZHs94x3sxzKHQx9Oe1jpwBg2e7L4fd26pp+WmAiLm/ArZJo6JotTeFSvKO3u/yQtGTZojDDQxiqFOQ1ZbMdtMZA8DpSMuq+Zx7PqLo43UdW1+krjQfE5WD+y+qE3LJQfwyVDnXxoRtqWLpVsAROlN2LxaMbaftv5HckoejJoB4xpf/dPOUqhnRstwQHf6klKT5iNhjsY4usACt78uILT0pEPd14h8wEBidBz/vAlC/zVMEqiDVzgNS7dqEYS4iHbf7cnWVCn3Hxw==",
-        "metadata": {}
-      }
-    },
-    "param": "Tfu1PxVm6S9D3+Hk",
-    "compressionType": "NONE",
-    "uncompressedMessageSize": 0,
-    "batchSize": {
-      "empty": false,
-      "present": true
-    }
-  }
-}
-
-```
-
-Below are the parameters in the WebSocket consumer response.
-
-- General parameters
-
-  Key | Type | Required? | Explanation
-  :---|:-----|:----------|:-----------
-  `messageId` | string | yes | Message ID
-  `payload` | string | yes | Base-64 encoded payload
-  `publishTime` | string | yes | Publish timestamp
-  `redeliveryCount` | number | yes | Number of times this message was already delivered
-  `properties` | key-value pairs | no | Application-defined properties
-  `key` | string | no | Original routing key set by producer
-  `encryptionContext` | EncryptionContext | no | Encryption context that consumers can use to decrypt received messages
-  `param` | string | no | Initialization vector for cipher (Base64 encoding)
-  `batchSize` | string | no | Number of entries in a message (if it is a batch message)
-  `uncompressedMessageSize` | string | no | Message size before compression
-  `compressionType` | string | no | Algorithm used to compress the message payload
-
-- `encryptionContext` related parameter
-
-  Key | Type | Required? | Explanation
-  :---|:-----|:----------|:-----------
-  `keys` |key-EncryptionKey pairs | yes | Key in `key-EncryptionKey` pairs is an encryption key name. Value in `key-EncryptionKey` pairs is an encryption key object.
-
-- `encryptionKey` related parameters
-
-  Key | Type | Required? | Explanation
-  :---|:-----|:----------|:-----------
-  `keyValue` | string | yes | Encryption key (Base64 encoding)
-  `metadata` | key-value pairs | no | Application-defined metadata
-
-#### Acknowledging the message
-
-The consumer needs to acknowledge successful processing of a message so that the Pulsar broker can delete it.
-
-```json
-
-{
-  "messageId": "CAAQAw=="
-}
-
-```
-
-Key | Type | Required? | Explanation
-:---|:-----|:----------|:-----------
-`messageId`| string | yes | Message ID of the processed message
-
-#### Negatively acknowledging messages
-
-```json
-
-{
-  "type": "negativeAcknowledge",
-  "messageId": "CAAQAw=="
-}
-
-```
-
-Key | Type | Required? | Explanation
-:---|:-----|:----------|:-----------
-`messageId`| string | yes | Message ID of the processed message
-
-#### Flow control
-
-##### Push Mode
-
-By default (`pullMode=false`), the consumer endpoint will use the `receiverQueueSize` parameter both to size its
-internal receive queue and to limit the number of unacknowledged messages that are passed to the WebSocket client.
-In this mode, if you don't send acknowledgements, the Pulsar WebSocket service will stop sending messages after reaching
-`receiverQueueSize` unacked messages sent to the WebSocket client.
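-
-As a minimal sketch of the push-mode loop (again using the `websocket-client` package from the Python examples below, with placeholder topic and subscription names), acknowledging each message as it is processed keeps the number of unacked messages below `receiverQueueSize`, so the service keeps pushing:
-
-```python
-
-import json
-import websocket
-
-TOPIC = 'ws://localhost:8080/ws/v2/consumer/persistent/public/default/my-topic/my-sub'
-
-ws = websocket.create_connection(TOPIC)
-while True:
-    msg = json.loads(ws.recv())
-    # Process the message, then acknowledge it; without the acknowledgement
-    # the service stops pushing once receiverQueueSize messages are unacked.
-    ws.send(json.dumps({'messageId': msg['messageId']}))
-
-```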
- -##### Pull Mode - -If you set `pullMode` to `true`, the WebSocket client will need to send `permit` commands to permit the -Pulsar WebSocket service to send more messages. - -```json - -{ - "type": "permit", - "permitMessages": 100 -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`type`| string | yes | Type of command. Must be `permit` -`permitMessages`| int | yes | Number of messages to permit - -NB: in this mode it's possible to acknowledge messages in a different connection. - -#### Check if reach end of topic - -Consumer can check if it has reached end of topic by sending `isEndOfTopic` request. - -**Request** - -```json - -{ - "type": "isEndOfTopic" -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`type`| string | yes | Type of command. Must be `isEndOfTopic` - -**Response** - -```json - -{ - "endOfTopic": "true/false" - } - -``` - -### Reader endpoint - -The reader endpoint requires you to specify a tenant, namespace, and topic in the URL: - -```http - -ws://broker-service-url:8080/ws/v2/reader/persistent/:tenant/:namespace/:topic - -``` - -##### Query param - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`readerName` | string | no | Reader name -`receiverQueueSize` | int | no | Size of the consumer receive queue (default: 1000) -`messageId` | int or enum | no | Message ID to start from, `earliest` or `latest` (default: `latest`) -`token` | string | no | Authentication token, this is used for the browser javascript client - -##### Receiving messages - -Server will push messages on the WebSocket session: - -```json - -{ - "messageId": "CAAQAw==", - "payload": "SGVsbG8gV29ybGQ=", - "properties": {"key1": "value1", "key2": "value2"}, - "publishTime": "2016-08-30 16:45:57.785", - "redeliveryCount": 4 -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`messageId` | string | yes | Message ID -`payload` | string | yes | Base-64 encoded payload -`publishTime` | string | yes | Publish timestamp -`redeliveryCount` | number | yes | Number of times this message was already delivered -`properties` | key-value pairs | no | Application-defined properties -`key` | string | no | Original routing key set by producer - -#### Acknowledging the message - -**In WebSocket**, Reader needs to acknowledge the successful processing of the message to -have the Pulsar WebSocket service update the number of pending messages. -If you don't send acknowledgements, Pulsar WebSocket service will stop sending messages after reaching the pendingMessages limit. - -```json - -{ - "messageId": "CAAQAw==" -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`messageId`| string | yes | Message ID of the processed message - -#### Check if reach end of topic - -Consumer can check if it has reached end of topic by sending `isEndOfTopic` request. - -**Request** - -```json - -{ - "type": "isEndOfTopic" -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`type`| string | yes | Type of command. 
Must be `isEndOfTopic`

**Response**

```json

{
  "endOfTopic": "true/false"
}

```

### Error codes

In case of an error, the server closes the WebSocket session using one of the
following error codes:

Error Code | Error Message
:----------|:-------------
1 | Failed to create producer
2 | Failed to subscribe
3 | Failed to deserialize from JSON
4 | Failed to serialize to JSON
5 | Failed to authenticate client
6 | Client is not authorized
7 | Invalid payload encoding
8 | Unknown error

> The application is responsible for establishing a new WebSocket session after a backoff period.

## Client examples

Below you'll find code examples for the Pulsar WebSocket API in [Python](#python) and [Node.js](#nodejs).

### Python

This example uses the [`websocket-client`](https://pypi.python.org/pypi/websocket-client) package. You can install it using [pip](https://pypi.python.org/pypi/pip):

```shell

$ pip install websocket-client

```

You can also download it from [PyPI](https://pypi.python.org/pypi/websocket-client).

#### Python producer

Here's an example Python producer that sends a simple message to a Pulsar [topic](reference-terminology.md#topic):

```python

import websocket, base64, json

# If you set enable_TLS to True, you also have to set tlsEnabled to true in conf/websocket.conf.
enable_TLS = False
scheme = 'ws'
if enable_TLS:
    scheme = 'wss'

TOPIC = scheme + '://localhost:8080/ws/v2/producer/persistent/public/default/my-topic'

ws = websocket.create_connection(TOPIC)

# Encode the message payload as Base64
s = "Hello World"
firstEncoded = s.encode("UTF-8")
binaryEncoded = base64.b64encode(firstEncoded)
payloadString = binaryEncoded.decode('UTF-8')

# Send one message as JSON
ws.send(json.dumps({
    'payload' : payloadString,
    'properties': {
        'key1' : 'value1',
        'key2' : 'value2'
    },
    'context' : 5
}))

response = json.loads(ws.recv())
if response['result'] == 'ok':
    print('Message published successfully')
else:
    print('Failed to publish message:', response)
ws.close()

```

#### Python consumer

Here's an example Python consumer that listens on a Pulsar topic and prints the message ID whenever a message arrives:

```python

import websocket, base64, json

# If you set enable_TLS to True, you also have to set tlsEnabled to true in conf/websocket.conf.
enable_TLS = False
scheme = 'ws'
if enable_TLS:
    scheme = 'wss'

TOPIC = scheme + '://localhost:8080/ws/v2/consumer/persistent/public/default/my-topic/my-sub'

ws = websocket.create_connection(TOPIC)

while True:
    msg = json.loads(ws.recv())
    if not msg: break

    print("Received: {} - payload: {}".format(msg, base64.b64decode(msg['payload'])))

    # Acknowledge successful processing
    ws.send(json.dumps({'messageId' : msg['messageId']}))

ws.close()

```

#### Python reader

Here's an example Python reader that listens on a Pulsar topic and prints the message ID whenever a message arrives:

```python

import websocket, base64, json

# If you set enable_TLS to True, you also have to set tlsEnabled to true in conf/websocket.conf.
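# Optionally, append "?messageId=earliest" to the URL below to start reading from the
# beginning of the topic; the reader endpoint defaults to "latest" (see the query
# parameters above).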
enable_TLS = False
scheme = 'ws'
if enable_TLS:
    scheme = 'wss'

TOPIC = scheme + '://localhost:8080/ws/v2/reader/persistent/public/default/my-topic'
ws = websocket.create_connection(TOPIC)

while True:
    msg = json.loads(ws.recv())
    if not msg: break

    print("Received: {} - payload: {}".format(msg, base64.b64decode(msg['payload'])))

    # Acknowledge successful processing
    ws.send(json.dumps({'messageId' : msg['messageId']}))

ws.close()

```

### Node.js

This example uses the [`ws`](https://websockets.github.io/ws/) package. You can install it using [npm](https://www.npmjs.com/):

```shell

$ npm install ws

```

#### Node.js producer

Here's an example Node.js producer that sends a simple message to a Pulsar topic:

```javascript

const WebSocket = require('ws');

// If you set enableTLS to true, you also have to set tlsEnabled to true in conf/websocket.conf.
const enableTLS = false;
const topic = `${enableTLS ? 'wss' : 'ws'}://localhost:8080/ws/v2/producer/persistent/public/default/my-topic`;
const ws = new WebSocket(topic);

var message = {
  "payload" : Buffer.from("Hello World").toString('base64'),
  "properties": {
    "key1" : "value1",
    "key2" : "value2"
  },
  "context" : "1"
};

ws.on('open', function() {
  // Send one message
  ws.send(JSON.stringify(message));
});

ws.on('message', function(message) {
  console.log('received ack: %s', message);
});

```

#### Node.js consumer

Here's an example Node.js consumer that listens on the same topic used by the producer above:

```javascript

const WebSocket = require('ws');

// If you set enableTLS to true, you also have to set tlsEnabled to true in conf/websocket.conf.
const enableTLS = false;
const topic = `${enableTLS ? 'wss' : 'ws'}://localhost:8080/ws/v2/consumer/persistent/public/default/my-topic/my-sub`;
const ws = new WebSocket(topic);

ws.on('message', function(message) {
  var receiveMsg = JSON.parse(message);
  console.log('Received: %s - payload: %s', message, Buffer.from(receiveMsg.payload, 'base64').toString());
  var ackMsg = {"messageId" : receiveMsg.messageId};
  ws.send(JSON.stringify(ackMsg));
});

```

#### Node.js reader

```javascript

const WebSocket = require('ws');

// If you set enableTLS to true, you also have to set tlsEnabled to true in conf/websocket.conf.
const enableTLS = false;
const topic = `${enableTLS ? 
'wss' : 'ws'}://localhost:8080/ws/v2/reader/persistent/public/default/my-topic`;
const ws = new WebSocket(topic);

ws.on('message', function(message) {
  var receiveMsg = JSON.parse(message);
  console.log('Received: %s - payload: %s', message, Buffer.from(receiveMsg.payload, 'base64').toString());
  var ackMsg = {"messageId" : receiveMsg.messageId};
  ws.send(JSON.stringify(ackMsg));
});

```

diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/client-libraries.md b/site2/website/versioned_docs/version-2.9.2-deprecated/client-libraries.md
deleted file mode 100644
index 607c9317e4b7fb..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/client-libraries.md
+++ /dev/null
@@ -1,36 +0,0 @@
----
-id: client-libraries
-title: Pulsar client libraries
-sidebar_label: "Overview"
-original_id: client-libraries
----

Pulsar supports the following client libraries:

- [Java client](client-libraries-java.md)
- [Go client](client-libraries-go.md)
- [Python client](client-libraries-python.md)
- [C++ client](client-libraries-cpp.md)
- [Node.js client](client-libraries-node.md)
- [WebSocket client](client-libraries-websocket.md)
- [C# client](client-libraries-dotnet.md)

## Feature matrix
The Pulsar client feature matrix for different languages is available on the [Client Features Matrix](https://github.com/apache/pulsar/wiki/Client-Features-Matrix) page.

## Third-party clients

Besides the officially released clients, multiple community projects for developing Pulsar clients are available in different languages.

> If you have developed a new Pulsar client, feel free to submit a pull request and add your client to the list below.

| Language | Project | Maintainer | License | Description |
|----------|---------|------------|---------|-------------|
| Go | [pulsar-client-go](https://github.com/Comcast/pulsar-client-go) | [Comcast](https://github.com/Comcast) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | A native Go client |
| Go | [go-pulsar](https://github.com/t2y/go-pulsar) | [t2y](https://github.com/t2y) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) |
| Haskell | [supernova](https://github.com/cr-org/supernova) | [Chatroulette](https://github.com/cr-org) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Native Pulsar client for Haskell |
| Scala | [neutron](https://github.com/cr-org/neutron) | [Chatroulette](https://github.com/cr-org) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Purely functional Apache Pulsar client for Scala built on top of Fs2 |
| Scala | [pulsar4s](https://github.com/sksamuel/pulsar4s) | [sksamuel](https://github.com/sksamuel) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Idiomatic, typesafe, and reactive Scala client for Apache Pulsar |
| Rust | [pulsar-rs](https://github.com/wyyerd/pulsar-rs) | [Wyyerd Group](https://github.com/wyyerd) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Future-based Rust bindings for Apache Pulsar |
| .NET | [pulsar-client-dotnet](https://github.com/fsharplang-ru/pulsar-client-dotnet) | [Lanayx](https://github.com/Lanayx) | 
[![GitHub](https://img.shields.io/badge/license-MIT-green.svg)](https://opensource.org/licenses/MIT) | Native .NET client for C#/F#/VB |
| Node.js | [pulsar-flex](https://github.com/ayeo-flex-org/pulsar-flex) | [Daniel Sinai](https://github.com/danielsinai), [Ron Farkash](https://github.com/ronfarkash), [Gal Rosenberg](https://github.com/galrose)| [![GitHub](https://img.shields.io/badge/license-MIT-green.svg)](https://opensource.org/licenses/MIT) | Native Node.js client |
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/concepts-architecture-overview.md b/site2/website/versioned_docs/version-2.9.2-deprecated/concepts-architecture-overview.md
deleted file mode 100644
index 4baa8c30a0d009..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/concepts-architecture-overview.md
+++ /dev/null
@@ -1,172 +0,0 @@
----
-id: concepts-architecture-overview
-title: Architecture Overview
-sidebar_label: "Architecture"
-original_id: concepts-architecture-overview
----

At the highest level, a Pulsar instance is composed of one or more Pulsar clusters. Clusters within an instance can [replicate](concepts-replication.md) data amongst themselves.

In a Pulsar cluster:

* One or more brokers handle and load balance incoming messages from producers, dispatch messages to consumers, communicate with the Pulsar configuration store to handle various coordination tasks, store messages in BookKeeper instances (aka bookies), rely on a cluster-specific ZooKeeper cluster for certain tasks, and more.
* A BookKeeper cluster consisting of one or more bookies handles [persistent storage](#persistent-storage) of messages.
* A ZooKeeper cluster specific to that cluster handles coordination tasks for that cluster.

The diagram below provides an illustration of a Pulsar cluster:

![Pulsar architecture diagram](/assets/pulsar-system-architecture.png)

At the broader instance level, an instance-wide ZooKeeper cluster called the configuration store handles coordination tasks involving multiple clusters, for example [geo-replication](concepts-replication.md).

## Brokers

The Pulsar message broker is a stateless component that's primarily responsible for running two other components:

* An HTTP server that exposes a {@inject: rest:REST:/} API for both administrative tasks and [topic lookup](concepts-clients.md#client-setup-phase) for producers and consumers. The producers connect to the brokers to publish messages and the consumers connect to the brokers to consume the messages.
* A dispatcher, which is an asynchronous TCP server over a custom [binary protocol](developing-binary-protocol.md) used for all data transfers.

Messages are typically dispatched out of a [managed ledger](#managed-ledgers) cache for the sake of performance, *unless* the backlog exceeds the cache size. If the backlog grows too large for the cache, the broker starts reading entries from BookKeeper.

Finally, to support geo-replication on global topics, the broker manages replicators that tail the entries published in the local region and republish them to the remote region using the Pulsar [Java client library](client-libraries-java.md).

> For a guide to managing Pulsar brokers, see the [brokers](admin-api-brokers.md) guide.

## Clusters

A Pulsar instance consists of one or more Pulsar *clusters*. 
Clusters, in turn, consist of:

* One or more Pulsar [brokers](#brokers)
* A ZooKeeper quorum used for cluster-level configuration and coordination
* An ensemble of bookies used for [persistent storage](#persistent-storage) of messages

Clusters can replicate amongst themselves using [geo-replication](concepts-replication.md).

> For a guide to managing Pulsar clusters, see the [clusters](admin-api-clusters.md) guide.

## Metadata store

The Pulsar metadata store maintains all the metadata of a Pulsar cluster, such as topic metadata, schema, broker load data, and so on. Pulsar uses [Apache ZooKeeper](https://zookeeper.apache.org/) for metadata storage, cluster configuration, and coordination. The Pulsar metadata store can be deployed on a separate ZooKeeper cluster or deployed on an existing ZooKeeper cluster. You can use one ZooKeeper cluster for both the Pulsar metadata store and the BookKeeper metadata store. If you want to deploy Pulsar brokers connected to an existing BookKeeper cluster, you need to deploy separate ZooKeeper clusters for the Pulsar metadata store and the BookKeeper metadata store respectively.

In a Pulsar instance:

* A configuration store quorum stores configuration for tenants, namespaces, and other entities that need to be globally consistent.
* Each cluster has its own local ZooKeeper ensemble that stores cluster-specific configuration and handles coordination tasks, such as which brokers are responsible for which topics, as well as ownership metadata, broker load reports, BookKeeper ledger metadata, and more.

## Configuration store

The configuration store maintains all the configurations of a Pulsar instance, such as clusters, tenants, namespaces, partitioned topic related configurations, and so on. A Pulsar instance can have a single local cluster, multiple local clusters, or multiple cross-region clusters. Consequently, the configuration store can share the configurations across multiple clusters under a Pulsar instance. The configuration store can be deployed on a separate ZooKeeper cluster or deployed on an existing ZooKeeper cluster.

## Persistent storage

Pulsar provides guaranteed message delivery for applications. If a message successfully reaches a Pulsar broker, it will be delivered to its intended target.

This guarantee requires that non-acknowledged messages are stored in a durable manner until they can be delivered to and acknowledged by consumers. This mode of messaging is commonly called *persistent messaging*. In Pulsar, N copies of all messages are stored and synced on disk, for example, 4 copies across two servers with mirrored [RAID](https://en.wikipedia.org/wiki/RAID) volumes on each server.

### Apache BookKeeper

Pulsar uses a system called [Apache BookKeeper](http://bookkeeper.apache.org/) for persistent message storage. BookKeeper is a distributed [write-ahead log](https://en.wikipedia.org/wiki/Write-ahead_logging) (WAL) system that provides a number of crucial advantages for Pulsar:

* It enables Pulsar to utilize many independent logs, called [ledgers](#ledgers). Multiple ledgers can be created for topics over time.
* It offers very efficient storage for sequential data that handles entry replication.
* It guarantees read consistency of ledgers in the presence of various system failures.
* It offers even distribution of I/O across bookies.
* It's horizontally scalable in both capacity and throughput. Capacity can be immediately increased by adding more bookies to a cluster. 
* Bookies are designed to handle thousands of ledgers with concurrent reads and writes. By using multiple disk devices---one for journal and another for general storage---bookies are able to isolate the effects of read operations from the latency of ongoing write operations.

In addition to message data, *cursors* are also persistently stored in BookKeeper. Cursors are [subscription](reference-terminology.md#subscription) positions for [consumers](reference-terminology.md#consumer). BookKeeper enables Pulsar to store consumer positions in a scalable fashion.

At the moment, Pulsar supports persistent message storage. This accounts for the `persistent` in all topic names. Here's an example:

```http

persistent://my-tenant/my-namespace/my-topic

```

> Pulsar also supports ephemeral ([non-persistent](concepts-messaging.md#non-persistent-topics)) message storage.


You can see an illustration of how brokers and bookies interact in the diagram below:

![Brokers and bookies](/assets/broker-bookie.png)


### Ledgers

A ledger is an append-only data structure with a single writer that is assigned to multiple BookKeeper storage nodes, or bookies. Ledger entries are replicated to multiple bookies. Ledgers themselves have very simple semantics:

* A Pulsar broker can create a ledger, append entries to the ledger, and close the ledger.
* After the ledger has been closed---either explicitly or because the writer process crashed---it can then be opened only in read-only mode.
* Finally, when entries in the ledger are no longer needed, the whole ledger can be deleted from the system (across all bookies).

#### Ledger read consistency

The main strength of BookKeeper is that it guarantees read consistency in ledgers in the presence of failures. Since the ledger can only be written to by a single process, that process is free to append entries very efficiently, without the need to obtain consensus. After a failure, the ledger goes through a recovery process that finalizes the state of the ledger and establishes which entry was last committed to the log. After that point, all readers of the ledger are guaranteed to see the exact same content.

#### Managed ledgers

Given that BookKeeper ledgers provide a single log abstraction, a library was developed on top of the ledger called the *managed ledger* that represents the storage layer for a single topic. A managed ledger represents the abstraction of a stream of messages with a single writer that keeps appending at the end of the stream and multiple cursors that are consuming the stream, each with its own associated position.

Internally, a single managed ledger uses multiple BookKeeper ledgers to store the data. There are two reasons to have multiple ledgers:

1. After a failure, a ledger is no longer writable and a new one needs to be created.
2. A ledger can be deleted when all cursors have consumed the messages it contains. This allows for periodic rollover of ledgers.

### Journal storage

In BookKeeper, *journal* files contain BookKeeper transaction logs. Before making an update to a [ledger](#ledgers), a bookie needs to ensure that a transaction describing the update is written to persistent (non-volatile) storage. A new journal file is created once the bookie starts or the older journal file reaches the journal file size threshold (configured using the [`journalMaxSizeMB`](reference-configuration.md#bookkeeper-journalMaxSizeMB) parameter). 
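A sketch of the corresponding setting in `conf/bookkeeper.conf` (the value below is illustrative; check the reference configuration for the default in your release):

```bash

# Max size of a journal file (in MB) before the bookie rolls over to a new one.
journalMaxSizeMB=2048

```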

## Pulsar proxy

One way for Pulsar clients to interact with a Pulsar [cluster](#clusters) is by connecting to Pulsar message [brokers](#brokers) directly. In some cases, however, this kind of direct connection is either infeasible or undesirable because the client doesn't have direct access to broker addresses. If you're running Pulsar in a cloud environment or on [Kubernetes](https://kubernetes.io) or an analogous platform, for example, then direct client connections to brokers are likely not possible.

The **Pulsar proxy** provides a solution to this problem by acting as a single gateway for all of the brokers in a cluster. If you run the Pulsar proxy (which, again, is optional), all client connections with the Pulsar cluster flow through the proxy rather than going to the brokers directly.

> For the sake of performance and fault tolerance, you can run as many instances of the Pulsar proxy as you'd like.

Architecturally, the Pulsar proxy gets all the information it requires from ZooKeeper. When starting the proxy on a machine, you only need to provide ZooKeeper connection strings for the cluster-specific and instance-wide configuration store clusters. Here's an example:

```bash

$ bin/pulsar proxy \
  --zookeeper-servers zk-0,zk-1,zk-2 \
  --configuration-store-servers zk-0,zk-1,zk-2

```

> #### Pulsar proxy docs
> For documentation on using the Pulsar proxy, see the [Pulsar proxy admin documentation](administration-proxy.md).


Some important things to know about the Pulsar proxy:

* Connecting clients don't need to provide *any* specific configuration to use the Pulsar proxy. You won't need to update the client configuration for existing applications beyond updating the IP used for the service URL (for example if you're running a load balancer over the Pulsar proxy).
* [TLS encryption](security-tls-transport.md) and [authentication](security-tls-authentication.md) are supported by the Pulsar proxy.

## Service discovery

[Clients](getting-started-clients.md) connecting to Pulsar brokers need to be able to communicate with an entire Pulsar instance using a single URL.

You can use your own service discovery system if you'd like. If you use your own system, there is just one requirement: when a client performs an HTTP request to an endpoint, such as `http://pulsar.us-west.example.com:8080`, the client needs to be redirected to *some* active broker in the desired cluster, whether via DNS, an HTTP or IP redirect, or some other means.

The diagram below illustrates Pulsar service discovery:

![Pulsar service discovery](/assets/pulsar-service-discovery.png)

In this diagram, the Pulsar cluster is addressable via a single DNS name: `pulsar-cluster.acme.com`. A [Python client](client-libraries-python.md), for example, could access this Pulsar cluster like this:

```python

from pulsar import Client

client = Client('pulsar://pulsar-cluster.acme.com:6650')

```

:::note

In Pulsar, each topic is handled by only one broker. Initial requests from a client to read, update or delete a topic are sent to a broker that may not be the topic owner. If the broker cannot handle the request for this topic, it redirects the request to the appropriate broker. 

:::

diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/concepts-authentication.md b/site2/website/versioned_docs/version-2.9.2-deprecated/concepts-authentication.md
deleted file mode 100644
index f6307890c904a7..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/concepts-authentication.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-id: concepts-authentication
-title: Authentication and Authorization
-sidebar_label: "Authentication and Authorization"
-original_id: concepts-authentication
----

Pulsar supports a pluggable [authentication](security-overview.md) mechanism which can be configured at the proxy and/or the broker. Pulsar also supports a pluggable [authorization](security-authorization.md) mechanism. These mechanisms work together to identify the client and its access rights on topics, namespaces and tenants.

diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/concepts-clients.md b/site2/website/versioned_docs/version-2.9.2-deprecated/concepts-clients.md
deleted file mode 100644
index 4040624f7d6366..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/concepts-clients.md
+++ /dev/null
@@ -1,92 +0,0 @@
----
-id: concepts-clients
-title: Pulsar Clients
-sidebar_label: "Clients"
-original_id: concepts-clients
----

Pulsar exposes a client API with language bindings for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python.md), [C++](client-libraries-cpp.md) and [C#](client-libraries-dotnet.md). The client API optimizes and encapsulates Pulsar's client-broker communication protocol and exposes a simple and intuitive API for use by applications.

Under the hood, the current official Pulsar client libraries support transparent reconnection and/or connection failover to brokers, queuing of messages until acknowledged by the broker, and heuristics such as connection retries with backoff.

> **Custom client libraries**
> If you'd like to create your own client library, we recommend consulting the documentation on Pulsar's custom [binary protocol](developing-binary-protocol.md).


## Client setup phase

Before an application creates a producer/consumer, the Pulsar client library needs to initiate a setup phase including two steps:

1. The client attempts to determine the owner of the topic by sending an HTTP lookup request to the broker. The request could reach one of the active brokers which, by looking at the (cached) ZooKeeper metadata, knows who is serving the topic or, in case nobody is serving it, tries to assign it to the least-loaded broker.
1. Once the client library has the broker address, it creates a TCP connection (or reuses an existing connection from the pool) and authenticates it. Within this connection, the client and broker exchange binary commands from a custom protocol. At this point, the client sends a command to create a producer/consumer to the broker, which complies after having validated the authorization policy.

Whenever the TCP connection breaks, the client immediately re-initiates this setup phase and keeps trying with exponential backoff to re-establish the producer or consumer until the operation succeeds.

## Reader interface

In Pulsar, the "standard" [consumer interface](concepts-messaging.md#consumers) involves using consumers to listen on [topics](reference-terminology.md#topic), process incoming messages, and finally acknowledge those messages when they are processed. 
Whenever a new subscription is created, it is initially positioned at the end of the topic (by default), and consumers associated with that subscription begin reading with the first message created afterwards. Whenever a consumer connects to a topic using a pre-existing subscription, it begins reading from the earliest unacknowledged message within that subscription. In summary, with the consumer interface, subscription cursors are automatically managed by Pulsar in response to [message acknowledgements](concepts-messaging.md#acknowledgement).

The **reader interface** for Pulsar enables applications to manually manage cursors. When you use a reader to connect to a topic---rather than a consumer---you need to specify *which* message the reader begins reading from when it connects to a topic. When connecting to a topic, the reader interface enables you to begin with:

* The **earliest** available message in the topic
* The **latest** available message in the topic
* Some other message between the earliest and the latest. If you select this option, you'll need to explicitly provide a message ID. Your application will be responsible for "knowing" this message ID in advance, perhaps fetching it from a persistent data store or cache.

The reader interface is helpful for use cases like using Pulsar to provide effectively-once processing semantics for a stream processing system. For this use case, it's essential that the stream processing system be able to "rewind" topics to a specific message and begin reading there. The reader interface provides Pulsar clients with the low-level abstraction necessary to "manually position" themselves within a topic.

Internally, the reader interface is implemented as a consumer using an exclusive, non-durable subscription to the topic with a randomly-allocated name.

[ **IMPORTANT** ]

Unlike subscriptions/consumers, readers are non-durable in nature and do not prevent data in a topic from being deleted, thus it is ***strongly*** advised that [data retention](cookbooks-retention-expiry.md) be configured. If data retention for a topic is not configured for an adequate amount of time, messages that the reader has not yet read might be deleted. This causes the readers to essentially skip messages. Configuring the data retention for a topic guarantees that the reader has a certain amount of time to read a message.

Please also note that a reader can have a "backlog", but the metric is only used for users to know how far behind the reader is. The metric is not considered for any backlog quota calculations. 
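For example, retention can be configured at the namespace level with the `pulsar-admin` tool (a sketch; the namespace and limits below are placeholders):

```bash

$ bin/pulsar-admin namespaces set-retention my-tenant/my-ns \
  --time 3d \
  --size 10G

```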

![The Pulsar consumer and reader interfaces](/assets/pulsar-reader-consumer-interfaces.png)

Here's a Java example that begins reading from the earliest available message on a topic:

```java

import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.MessageId;
import org.apache.pulsar.client.api.Reader;

// Create a reader on a topic and for a specific message (and onward)
Reader<byte[]> reader = pulsarClient.newReader()
    .topic("reader-api-test")
    .startMessageId(MessageId.earliest)
    .create();

while (true) {
    Message<byte[]> message = reader.readNext();

    // Process the message
}

```

To create a reader that reads from the latest available message:

```java

Reader<byte[]> reader = pulsarClient.newReader()
    .topic(topic)
    .startMessageId(MessageId.latest)
    .create();

```

To create a reader that reads from some message between the earliest and the latest:

```java

byte[] msgIdBytes = // Some byte array
MessageId id = MessageId.fromByteArray(msgIdBytes);
Reader<byte[]> reader = pulsarClient.newReader()
    .topic(topic)
    .startMessageId(id)
    .create();

```

diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/concepts-messaging.md b/site2/website/versioned_docs/version-2.9.2-deprecated/concepts-messaging.md
deleted file mode 100644
index c7469606f30552..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/concepts-messaging.md
+++ /dev/null
@@ -1,713 +0,0 @@
----
-id: concepts-messaging
-title: Messaging
-sidebar_label: "Messaging"
-original_id: concepts-messaging
----

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````


Pulsar is built on the [publish-subscribe](https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern) pattern (often abbreviated to pub-sub). In this pattern, [producers](#producers) publish messages to [topics](#topics); [consumers](#consumers) [subscribe](#subscription-modes) to those topics, process incoming messages, and send [acknowledgements](#acknowledgement) to the broker when processing is finished.

When a subscription is created, Pulsar [retains](concepts-architecture-overview.md#persistent-storage) all messages, even if the consumer is disconnected. The retained messages are discarded only when a consumer acknowledges that all these messages are processed successfully.

If the consumption of a message fails and you want this message to be consumed again, then you can enable the automatic redelivery of this message by sending a [negative acknowledgement](#negative-acknowledgement) to the broker or enabling the [acknowledgement timeout](#acknowledgement-timeout) for unacknowledged messages.

## Messages

Messages are the basic "unit" of Pulsar. The following table lists the components of messages.

Component | Description
:---------|:-------
Value / data payload | The data carried by the message. All Pulsar messages contain raw bytes, although message data can also conform to data [schemas](schema-get-started.md).
Key | Messages are optionally tagged with keys, which is useful for things like [topic compaction](concepts-topic-compaction.md).
Properties | An optional key/value map of user-defined properties.
Producer name | The name of the producer who produces the message. If you do not specify a producer name, the default name is used.
Sequence ID | Each Pulsar message belongs to an ordered sequence on its topic. The sequence ID of the message is its order in that sequence. 
Publish time | The timestamp of when the message is published. The timestamp is automatically applied by the producer.
Event time | An optional timestamp attached to a message by applications. For example, applications attach a timestamp on when the message is processed. If no event time is set, the value is `0`.
TypedMessageBuilder | It is used to construct a message. You can set message properties such as the message key and message value with `TypedMessageBuilder`. <br /> When you set `TypedMessageBuilder`, set the key as a string. If you set the key as other types, for example, an AVRO object, the key is sent as bytes, and it is difficult to get the AVRO object back on the consumer.

The default size of a message is 5 MB. You can configure the max size of a message with the following configurations.

- In the `broker.conf` file.

  ```bash
  
  # The max size of a message (in bytes).
  maxMessageSize=5242880
  
  ```

- In the `bookkeeper.conf` file.

  ```bash
  
  # The max size of the netty frame (in bytes). Any messages received larger than this value are rejected. The default value is 5 MB.
  nettyMaxFrameSizeBytes=5253120
  
  ```

> For more information on Pulsar messages, see Pulsar [binary protocol](developing-binary-protocol.md).

## Producers

A producer is a process that attaches to a topic and publishes messages to a Pulsar [broker](reference-terminology.md#broker). The Pulsar broker processes the messages.

### Send modes

Producers send messages to brokers synchronously (sync) or asynchronously (async).

| Mode | Description |
|:-----------|-----------|
| Sync send | The producer waits for an acknowledgement from the broker after sending every message. If the acknowledgment is not received, the producer treats the sending operation as a failure. |
| Async send | The producer puts a message in a blocking queue and returns immediately. The client library sends the message to the broker in the background. If the queue is full (you can [configure](reference-configuration.md#broker) the maximum size), the producer is blocked or fails immediately when calling the API, depending on arguments passed to the producer. |

### Access mode

You can have different types of access modes on topics for producers.

|Access mode | Description
|---|---
`Shared`|Multiple producers can publish on a topic. <br /><br />This is the **default** setting.
`Exclusive`|Only one producer can publish on a topic. <br /><br />If there is already a producer connected, other producers trying to publish on this topic get errors immediately. <br /><br />The "old" producer is evicted and a "new" producer is selected to be the next exclusive producer if the "old" producer experiences a network partition with the broker.
`WaitForExclusive`|If there is already a producer connected, the producer creation is pending (rather than timing out) until the producer gets the `Exclusive` access. <br /><br />The producer that succeeds in becoming the exclusive one is treated as the leader. Consequently, if you want to implement the leader election scheme for your application, you can use this access mode.

:::note

Once an application creates a producer with `Exclusive` or `WaitForExclusive` access mode successfully, the instance of this application is guaranteed to be the **only writer** to the topic. Any other producers trying to produce messages on this topic will either get errors immediately or have to wait until they get the `Exclusive` access.
For more information, see [PIP 68: Exclusive Producer](https://github.com/apache/pulsar/wiki/PIP-68:-Exclusive-Producer).

:::

You can set the producer access mode through the Java Client API. For more information, see `ProducerAccessMode` in the [ProducerBuilder.java](https://github.com/apache/pulsar/blob/fc5768ca3bbf92815d142fe30e6bfad70a1b4fc6/pulsar-client-api/src/main/java/org/apache/pulsar/client/api/ProducerBuilder.java) file.


### Compression

You can compress messages published by producers during transportation. Pulsar currently supports the following types of compression:

* [LZ4](https://github.com/lz4/lz4)
* [ZLIB](https://zlib.net/)
* [ZSTD](https://facebook.github.io/zstd/)
* [SNAPPY](https://google.github.io/snappy/)

### Batching

When batching is enabled, the producer accumulates and sends a batch of messages in a single request. The batch size is defined by the maximum number of messages and the maximum publish latency. Therefore, the backlog size represents the total number of batches instead of the total number of messages.

In Pulsar, batches are tracked and stored as single units rather than as individual messages. The consumer unbundles a batch into individual messages. However, scheduled messages (configured through the `deliverAt` or the `deliverAfter` parameter) are always sent as individual messages even if batching is enabled.

In general, a batch is acknowledged when all of its messages are acknowledged by a consumer. It means that when **not all** batch messages are acknowledged, unexpected failures, negative acknowledgements, or acknowledgement timeouts can result in a redelivery of all messages in this batch.

To avoid redelivering acknowledged messages in a batch to the consumer, Pulsar has supported batch index acknowledgement since Pulsar 2.6.0. When batch index acknowledgement is enabled, the consumer filters out the batch indexes that have been acknowledged and sends the batch index acknowledgement request to the broker. The broker maintains the batch index acknowledgement status and tracks the acknowledgement status of each batch index to avoid dispatching acknowledged messages to the consumer. The batch is deleted when all indexes of the messages in it are acknowledged.

By default, batch index acknowledgement is disabled (`acknowledgmentAtBatchIndexLevelEnabled=false`). You can enable batch index acknowledgement by setting the `acknowledgmentAtBatchIndexLevelEnabled` parameter to `true` at the broker side. Enabling batch index acknowledgement results in more memory overhead.

### Chunking
Before you enable chunking, read the following instructions; a minimal configuration sketch follows the list.
- Batching and chunking cannot be enabled simultaneously. To enable chunking, you must disable batching in advance.
- Chunking is only supported for persisted topics.
- Chunking is only supported for the exclusive and failover subscription modes.
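As a sketch, assuming a `PulsarClient` instance named `client` (as in the other examples on this page), chunking is enabled on the producer builder, with batching explicitly disabled:

```java

Producer<byte[]> producer = client.newProducer()
    .topic("my-topic")
    .enableChunking(true)   // split oversized payloads into chunked messages
    .enableBatching(false)  // chunking and batching cannot be enabled at the same time
    .create();

```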

When chunking is enabled (`chunkingEnabled=true`), if the message size is greater than the allowed maximum publish-payload size, the producer splits the original message into chunked messages and publishes them with chunked metadata to the broker separately and in order. At the broker side, the chunked messages are stored in the managed-ledger in the same way as ordinary messages. The only difference is that the consumer needs to buffer the chunked messages and combine them into the real message when all chunked messages have been collected. The chunked messages in the managed-ledger can be interwoven with ordinary messages. If the producer fails to publish all the chunks of a message, the consumer can expire the incomplete chunks when it fails to receive all of them within the expiry time. By default, the expiry time is set to one minute.

The consumer consumes the chunked messages and buffers them until it receives all the chunks of a message. The consumer then stitches the chunked messages together and places them into the receiver-queue. Clients consume messages from the receiver-queue. Once the consumer consumes the entire large message and acknowledges it, the consumer internally sends acknowledgement of all the chunk messages associated with that large message. You can set the `maxPendingChunkedMessage` parameter on the consumer. When the threshold is reached, the consumer drops the unchunked messages by silently acknowledging them or asking the broker to redeliver them later by marking them unacknowledged.

The broker does not require any changes to support chunking for non-shared subscriptions. The broker only uses `chunkedMessageRate` to record the chunked message rate on the topic.

#### Handle chunked messages with one producer and one ordered consumer

As shown in the following figure, a topic has one producer that publishes large message payloads as chunked messages along with regular non-chunked messages. The producer publishes message M1 in three chunks M1-C1, M1-C2 and M1-C3. The broker stores all the three chunked messages in the managed-ledger and dispatches them to the ordered (exclusive/failover) consumer in the same order. The consumer buffers all the chunked messages in memory until it receives all the chunked messages, combines them into one message and then hands over the original message M1 to the client.

![](/assets/chunking-01.png)

#### Handle chunked messages with multiple producers and one ordered consumer

When multiple publishers publish chunked messages into a single topic, the broker stores all the chunked messages coming from different publishers in the same managed-ledger. As shown below, Producer 1 publishes message M1 in three chunks M1-C1, M1-C2 and M1-C3. Producer 2 publishes message M2 in three chunks M2-C1, M2-C2 and M2-C3. All chunked messages of a specific message are still in order but might not be consecutive in the managed-ledger. This brings some memory pressure to the consumer because the consumer keeps a separate buffer for each large message to aggregate all of its chunks and combine them into one message.

![](/assets/chunking-02.png)

## Consumers

A consumer is a process that attaches to a topic via a subscription and then receives messages.

A consumer sends a [flow permit request](developing-binary-protocol.md#flow-control) to a broker to get messages. There is a queue at the consumer side to receive messages pushed from the broker. 
You can configure the queue size with the [`receiverQueueSize`](client-libraries-java.md#configure-consumer) parameter. The default size is `1000`. Each time `consumer.receive()` is called, a message is dequeued from the buffer.

### Receive modes

Messages are received from [brokers](reference-terminology.md#broker) either synchronously (sync) or asynchronously (async).

| Mode | Description |
|:--------------|:--------------|
| Sync receive | A sync receive is blocked until a message is available. |
| Async receive | An async receive returns immediately with a future value—for example, a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture) in Java—that completes once a new message is available. |

### Listeners

Client libraries provide listener implementations for consumers. For example, the [Java client](client-libraries-java.md) provides a {@inject: javadoc:MessageListener:/client/org/apache/pulsar/client/api/MessageListener} interface. In this interface, the `received` method is called whenever a new message is received.

### Acknowledgement

The consumer sends an acknowledgement request to the broker after it consumes a message successfully. Then, this consumed message is permanently stored and deleted only after all the subscriptions have acknowledged it. If you want to store the messages that have been acknowledged by a consumer, you need to configure the [message retention policy](concepts-messaging.md#message-retention-and-expiry).

For batch messages, you can enable batch index acknowledgement to avoid dispatching acknowledged messages to the consumer. For details about batch index acknowledgement, see [batching](#batching).

Messages can be acknowledged in one of the following two ways:

- Being acknowledged individually. With individual acknowledgement, the consumer acknowledges each message and sends an acknowledgement request to the broker.
- Being acknowledged cumulatively. With cumulative acknowledgement, the consumer **only** acknowledges the last message it received. All messages in the stream up to (and including) the provided message are not redelivered to that consumer.

If you want to acknowledge messages individually, you can use the following API.

```java

consumer.acknowledge(msg);

```

If you want to acknowledge messages cumulatively, you can use the following API.

```java

consumer.acknowledgeCumulative(msg);

```

:::note

Cumulative acknowledgement cannot be used in the [shared subscription mode](#subscription-modes), because the shared subscription mode involves multiple consumers who have access to the same subscription. In the shared subscription mode, messages are acknowledged individually.

:::

### Negative acknowledgement

When a consumer fails to consume a message and intends to consume it again, this consumer should send a negative acknowledgement to the broker. Then, the broker redelivers this message to the consumer.

Messages are negatively acknowledged individually or cumulatively, depending on the consumption subscription mode.

In the exclusive and failover subscription modes, consumers only negatively acknowledge the last message they receive.

In the shared and Key_Shared subscription modes, consumers can negatively acknowledge messages individually. 

Be aware that negative acknowledgments on ordered subscription types, such as Exclusive, Failover and Key_Shared, might cause failed messages to be sent to consumers out of the original order.

If you want to acknowledge messages negatively, you can use the following API.

```java

// Calling this API negatively acknowledges the message
consumer.negativeAcknowledge(msg);

```

:::note

If batching is enabled, all messages in one batch are redelivered to the consumer.

:::

### Acknowledgement timeout

If a message is not consumed successfully, and you want the broker to redeliver this message automatically, then you can enable the automatic redelivery mechanism for unacknowledged messages. With automatic redelivery enabled, the client tracks unacknowledged messages within the entire `ackTimeout` window, and automatically sends a `redeliver unacknowledged messages` request to the broker when the acknowledgement timeout expires.

:::note

- If batching is enabled, all messages in one batch are redelivered to the consumer.
- The negative acknowledgement is preferable over the acknowledgement timeout, since negative acknowledgement controls the redelivery of individual messages more precisely and avoids invalid redeliveries when the message processing time exceeds the acknowledgement timeout.

:::

### Dead letter topic

Dead letter topic enables you to consume new messages when some messages cannot be consumed successfully by a consumer. In this mechanism, messages that fail to be consumed are stored in a separate topic, which is called the dead letter topic. You can decide how to handle messages in the dead letter topic.

The following example shows how to enable dead letter topic in a Java client using the default dead letter topic:

```java

Consumer<byte[]> consumer = pulsarClient.newConsumer(Schema.BYTES)
    .topic(topic)
    .subscriptionName("my-subscription")
    .subscriptionType(SubscriptionType.Shared)
    .deadLetterPolicy(DeadLetterPolicy.builder()
        .maxRedeliverCount(maxRedeliveryCount)
        .build())
    .subscribe();

```

The default dead letter topic uses this format:

```

<topicname>-<subscriptionname>-DLQ

```

If you want to specify the name of the dead letter topic, use this Java client example:

```java

Consumer<byte[]> consumer = pulsarClient.newConsumer(Schema.BYTES)
    .topic(topic)
    .subscriptionName("my-subscription")
    .subscriptionType(SubscriptionType.Shared)
    .deadLetterPolicy(DeadLetterPolicy.builder()
        .maxRedeliverCount(maxRedeliveryCount)
        .deadLetterTopic("your-topic-name")
        .build())
    .subscribe();

```

Dead letter topic depends on message redelivery. Messages are redelivered either due to [acknowledgement timeout](#acknowledgement-timeout) or [negative acknowledgement](#negative-acknowledgement). If you are going to use negative acknowledgement on a message, make sure it is negatively acknowledged before the acknowledgement timeout.

:::note

Currently, dead letter topic is enabled in the Shared and Key_Shared subscription modes.

:::

### Retry letter topic

For many online business systems, a message needs to be re-consumed because an exception occurs in the business logic processing. To configure the delay time for re-consuming the failed messages, you can configure the producer to send messages to both the business topic and the retry letter topic, and enable automatic retry on the consumer. 
When automatic retry is enabled on the consumer, a message that is not consumed is stored in the retry letter topic, and the consumer automatically consumes the failed messages from the retry letter topic after a specified delay time.

By default, automatic retry is disabled. You can set `enableRetry` to `true` to enable automatic retry on the consumer.

This example shows how to consume messages from a retry letter topic.

```java

Consumer<byte[]> consumer = pulsarClient.newConsumer(Schema.BYTES)
    .topic(topic)
    .subscriptionName("my-subscription")
    .subscriptionType(SubscriptionType.Shared)
    .enableRetry(true)
    .receiverQueueSize(100)
    .deadLetterPolicy(DeadLetterPolicy.builder()
        .maxRedeliverCount(maxRedeliveryCount)
        .retryLetterTopic("persistent://my-property/my-ns/my-subscription-custom-Retry")
        .build())
    .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest)
    .subscribe();

```

If you want to put messages into a retry letter topic, you can use the following API.

```java

consumer.reconsumeLater(msg, 3, TimeUnit.SECONDS);

```

## Topics

As in other pub-sub systems, topics in Pulsar are named channels for transmitting messages from producers to consumers. Topic names are URLs that have a well-defined structure:

```http

{persistent|non-persistent}://tenant/namespace/topic

```

Topic name component | Description
:--------------------|:-----------
`persistent` / `non-persistent` | This identifies the type of topic. Pulsar supports two kinds of topics: [persistent](concepts-architecture-overview.md#persistent-storage) and [non-persistent](#non-persistent-topics). The default is persistent, so if you do not specify a type, the topic is persistent. With persistent topics, all messages are durably persisted on disks (if the broker is not standalone, messages are durably persisted on multiple disks), whereas data for non-persistent topics is not persisted to storage disks.
`tenant` | The topic tenant within the instance. Tenants are essential to multi-tenancy in Pulsar, and are spread across clusters.
`namespace` | The administrative unit of the topic, which acts as a grouping mechanism for related topics. Most topic configuration is performed at the [namespace](#namespaces) level. Each tenant has one or multiple namespaces.
`topic` | The final part of the name. Topic names have no special meaning in a Pulsar instance.

> **No need to explicitly create new topics**
> You do not need to explicitly create topics in Pulsar. If a client attempts to write or receive messages to/from a topic that does not yet exist, Pulsar creates that topic under the namespace provided in the [topic name](#topics) automatically.
> If no tenant or namespace is specified when a client creates a topic, the topic is created in the default tenant and namespace. You can also create a topic in a specified tenant and namespace, such as `persistent://my-tenant/my-namespace/my-topic`. `persistent://my-tenant/my-namespace/my-topic` means the `my-topic` topic is created in the `my-namespace` namespace of the `my-tenant` tenant.

## Namespaces

A namespace is a logical nomenclature within a tenant. A tenant creates multiple namespaces via the [admin API](admin-api-namespaces.md#create). For instance, a tenant with different applications can create a separate namespace for each application. A namespace allows the application to create and manage a hierarchy of topics. 
For instance, `my-tenant/app1` is the namespace for the application `app1` of the tenant `my-tenant`. You can create any number of [topics](#topics) under the namespace.

## Subscriptions

A subscription is a named configuration rule that determines how messages are delivered to consumers. Four subscription modes are available in Pulsar: [exclusive](#exclusive), [shared](#shared), [failover](#failover), and [key_shared](#key_shared). These modes are illustrated in the figure below.

![Subscription modes](/assets/pulsar-subscription-types.png)

> **Pub-Sub or Queuing**
> In Pulsar, you can use different subscriptions flexibly.
> * If you want to achieve traditional "fan-out pub-sub messaging" among consumers, specify a unique subscription name for each consumer. This is the exclusive subscription mode.
> * If you want to achieve "message queuing" among consumers, share the same subscription name among multiple consumers (shared, failover, key_shared).
> * If you want to achieve both effects simultaneously, combine exclusive subscription mode with other subscription modes for consumers.

### Consumerless Subscriptions and Their Corresponding Modes
When a subscription has no consumers, its subscription mode is undefined. A subscription's mode is defined when a consumer connects to the subscription, and the mode can be changed by restarting all consumers with a different configuration.

### Exclusive

In *exclusive* mode, only a single consumer is allowed to attach to the subscription. If multiple consumers subscribe to a topic using the same subscription, an error occurs.

In the diagram below, only **Consumer A-0** is allowed to consume messages.

> Exclusive mode is the default subscription mode.

![Exclusive subscriptions](/assets/pulsar-exclusive-subscriptions.png)

### Failover

In *failover* mode, multiple consumers can attach to the same subscription. A master consumer is picked for a non-partitioned topic, or for each partition of a partitioned topic, and receives the messages. When the master consumer disconnects, all (non-acknowledged and subsequent) messages are delivered to the next consumer in line.

For partitioned topics, the broker sorts consumers by priority level and by the lexicographical order of consumer names, and then tries to evenly assign partitions to the consumers with the highest priority level.

For a non-partitioned topic, the broker picks consumers in the order in which they subscribed to the topic.

In the diagram below, **Consumer-B-0** is the master consumer while **Consumer-B-1** would be the next consumer in line to receive messages if **Consumer-B-0** is disconnected.

![Failover subscriptions](/assets/pulsar-failover-subscriptions.png)

### Shared

In *shared* or *round robin* mode, multiple consumers can attach to the same subscription. Messages are delivered in a round robin distribution across consumers, and any given message is delivered to only one consumer. When a consumer disconnects, all the messages that were sent to it and not acknowledged will be rescheduled for sending to the remaining consumers.

In the diagram below, **Consumer-C-1** and **Consumer-C-2** are able to subscribe to the topic, but **Consumer-C-3** and others could as well.

> **Limitations of shared mode**
> When using shared mode, be aware that:
> * Message ordering is not guaranteed.
> * You cannot use cumulative acknowledgment with shared mode. 
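As an illustration (a sketch that mirrors the Java consumer examples above), the subscription mode is selected when subscribing:

```java

Consumer<byte[]> consumer = pulsarClient.newConsumer()
    .topic("my-topic")
    .subscriptionName("my-subscription")
    .subscriptionType(SubscriptionType.Shared)
    .subscribe();

```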

![Shared subscriptions](/assets/pulsar-shared-subscriptions.png)

### Key_Shared

In *Key_Shared* mode, multiple consumers can attach to the same subscription. Messages are delivered in a distribution across consumers, and messages with the same key or same ordering key are delivered to only one consumer. No matter how many times the message is re-delivered, it is delivered to the same consumer. When a consumer connects or disconnects, the consumer serving some keys of messages changes.

![Key_Shared subscriptions](/assets/pulsar-key-shared-subscriptions.png)

Note that when the consumers are using the Key_Shared subscription mode, you need to **disable batching** or **use key-based batching** for the producers. There are two reasons why the key-based batching is necessary for Key_Shared subscription mode:
1. The broker dispatches messages according to the keys of the messages, but the default batching approach might fail to pack the messages with the same key to the same batch.
2. Since it is the consumers instead of the broker who dispatch the messages from the batches, the key of the first message in one batch is considered as the key of all messages in this batch, thereby leading to context errors.

The key-based batching aims at resolving the above-mentioned issues. This batching method ensures that the producers pack the messages with the same key to the same batch. The messages without a key are packed into one batch and this batch has no key. When the broker dispatches messages from this batch, it uses `NON_KEY` as the key. In addition, each consumer is associated with **only one** key and should receive **only one message batch** for the connected key. By default, you can limit batching by configuring the number of messages that producers are allowed to send.

Below are examples of enabling the key-based batching under the Key_Shared subscription mode, with `client` being the Pulsar client that you created.

````mdx-code-block
<Tabs defaultValue="Java"
  values={[{"label":"Java","value":"Java"},{"label":"C++","value":"C++"},{"label":"Python","value":"Python"}]}>
<TabItem value="Java">

```

Producer<byte[]> producer = client.newProducer()
        .topic("my-topic")
        .batcherBuilder(BatcherBuilder.KEY_BASED)
        .create();

```

</TabItem>
<TabItem value="C++">

```

ProducerConfiguration producerConfig;
producerConfig.setBatchingType(ProducerConfiguration::BatchingType::KeyBasedBatching);
Producer producer;
client.createProducer("my-topic", producerConfig, producer);

```

</TabItem>
<TabItem value="Python">

```

producer = client.create_producer(topic='my-topic', batching_type=pulsar.BatchingType.KeyBased)

```

</TabItem>
</Tabs>
````

> **Limitations of Key_Shared mode**
> When you use Key_Shared mode, be aware that:
> * You need to specify a key or orderingKey for messages.
> * You cannot use cumulative acknowledgment with Key_Shared mode.

## Multi-topic subscriptions

When a consumer subscribes to a Pulsar topic, by default it subscribes to one specific topic, such as `persistent://public/default/my-topic`. As of Pulsar version 1.23.0-incubating, however, Pulsar consumers can simultaneously subscribe to multiple topics. You can define a list of topics in two ways:

* On the basis of a [**reg**ular **ex**pression](https://en.wikipedia.org/wiki/Regular_expression) (regex), for example `persistent://public/default/finance-.*`
* By explicitly defining a list of topics

> When subscribing to multiple topics by regex, all topics must be in the same [namespace](#namespaces). 
-
-When subscribing to multiple topics, the Pulsar client automatically makes a call to the Pulsar API to discover the topics that match the regex pattern/list, and then subscribes to all of them. If any of the topics do not exist, the consumer auto-subscribes to them once the topics are created.
-
-> **No ordering guarantees across multiple topics**
-> When a producer sends messages to a single topic, all messages are guaranteed to be read from that topic in the same order. However, these guarantees do not hold across multiple topics. So when a producer sends messages to multiple topics, the order in which messages are read from those topics is not guaranteed to be the same.
-
-The following are multi-topic subscription examples for Java.
-
-```java
-
-import java.util.regex.Pattern;
-
-import org.apache.pulsar.client.api.Consumer;
-import org.apache.pulsar.client.api.PulsarClient;
-
-PulsarClient pulsarClient = PulsarClient.builder()
-        .serviceUrl("pulsar://localhost:6650")
-        .build();
-
-// Subscribe to all topics in a namespace
-Pattern allTopicsInNamespace = Pattern.compile("persistent://public/default/.*");
-Consumer<byte[]> allTopicsConsumer = pulsarClient.newConsumer()
-        .topicsPattern(allTopicsInNamespace)
-        .subscriptionName("subscription-1")
-        .subscribe();
-
-// Subscribe to a subset of topics in a namespace, based on regex
-Pattern someTopicsInNamespace = Pattern.compile("persistent://public/default/foo.*");
-Consumer<byte[]> someTopicsConsumer = pulsarClient.newConsumer()
-        .topicsPattern(someTopicsInNamespace)
-        .subscriptionName("subscription-1")
-        .subscribe();
-
-```
-
-For code examples, see [Java](client-libraries-java.md#multi-topic-subscriptions).
-
-## Partitioned topics
-
-Normal topics are served only by a single broker, which limits the maximum throughput of the topic. *Partitioned topics* are a special type of topic that is handled by multiple brokers, thus allowing for higher throughput.
-
-A partitioned topic is actually implemented as N internal topics, where N is the number of partitions. When publishing messages to a partitioned topic, each message is routed to one of several brokers. The distribution of partitions across brokers is handled automatically by Pulsar.
-
-The diagram below illustrates this:
-
-![](/assets/partitioning.png)
-
-The **Topic1** topic has five partitions (**P0** through **P4**) split across three brokers. Because there are more partitions than brokers, two brokers handle two partitions apiece, while the third handles only one (again, Pulsar handles this distribution of partitions automatically).
-
-Messages for this topic are broadcast to two consumers. The [routing mode](#routing-modes) determines which partition each message is published to, while the [subscription mode](#subscription-modes) determines which messages go to which consumers.
-
-Decisions about routing and subscription modes can be made separately in most cases. In general, throughput concerns should guide partitioning/routing decisions, while subscription decisions should be guided by application semantics.
-
-There is no difference between partitioned topics and normal topics in terms of how subscription modes work, as partitioning only affects what happens between the time a message is published by a producer and the time it is processed and acknowledged by a consumer.
-
-Partitioned topics need to be explicitly created via the [admin API](admin-api-overview.md). The number of partitions can be specified when creating the topic, as shown in the sketch below.
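-
-As a concrete illustration of that admin call, the following minimal sketch creates a partitioned topic with the Java admin client. It assumes the broker's admin endpoint is reachable at `http://localhost:8080`; the tenant, namespace, and partition count are illustrative:
-
-```java
-
-import org.apache.pulsar.client.admin.PulsarAdmin;
-
-PulsarAdmin admin = PulsarAdmin.builder()
-        .serviceHttpUrl("http://localhost:8080")
-        .build();
-
-// Create a topic with 4 partitions; Pulsar spreads the partitions across brokers.
-admin.topics().createPartitionedTopic(
-        "persistent://my-tenant/my-namespace/my-topic", 4);
-
-admin.close();
-
-```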
-### Routing modes
-
-When publishing to partitioned topics, you must specify a *routing mode*. The routing mode determines which partition---that is, which internal topic---each message should be published to.
-
-Three {@inject: javadoc:MessageRoutingMode:/client/org/apache/pulsar/client/api/MessageRoutingMode} options are available:
-
-Mode | Description
-:--------|:------------
-`RoundRobinPartition` | If no key is provided, the producer publishes messages across all partitions in round-robin fashion to achieve maximum throughput. Note that round-robin is not done per individual message; rather, the producer stays on a partition for the duration of the batching delay, to ensure batching is effective. If a key is specified on the message, the partitioned producer hashes the key and assigns the message to a particular partition. This is the default mode.
-`SinglePartition` | If no key is provided, the producer randomly picks one single partition and publishes all messages into that partition. If a key is specified on the message, the partitioned producer hashes the key and assigns the message to a particular partition.
-`CustomPartition` | Use a custom message router implementation that is called to determine the partition for a particular message. Users can create a custom routing mode by using the [Java client](client-libraries-java.md) and implementing the {@inject: javadoc:MessageRouter:/client/org/apache/pulsar/client/api/MessageRouter} interface.
-
-### Ordering guarantee
-
-The ordering of messages is related to the MessageRoutingMode and the message key. Usually, users want a per-key-partition ordering guarantee.
-
-If a key is attached to the messages, they are routed to the corresponding partitions based on the hashing scheme specified by {@inject: javadoc:HashingScheme:/client/org/apache/pulsar/client/api/HashingScheme} in {@inject: javadoc:ProducerBuilder:/client/org/apache/pulsar/client/api/ProducerBuilder}, when using either the `SinglePartition` or the `RoundRobinPartition` mode.
-
-Ordering guarantee | Description | Routing Mode and Key
-:------------------|:------------|:------------
-Per-key-partition | All messages with the same key are in order and placed in the same partition. | Use either `SinglePartition` or `RoundRobinPartition` mode, and provide a key with each message.
-Per-producer | All messages from the same producer are in order. | Use `SinglePartition` mode, and provide no key for the messages.
-
-### Hashing scheme
-
-{@inject: javadoc:HashingScheme:/client/org/apache/pulsar/client/api/HashingScheme} is an enum that represents the set of standard hashing functions available when choosing the partition for a particular message.
-
-There are two types of standard hashing functions available: `JavaStringHash` and `Murmur3_32Hash`.
-The default hashing function for a producer is `JavaStringHash`.
-Note that `JavaStringHash` is not suitable when producers come from multiple language clients; in that case, it is recommended to use `Murmur3_32Hash`.
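-
-For instance, a producer for a partitioned topic might select the routing mode and hashing scheme as in this minimal sketch (`client` is assumed to be an existing `PulsarClient`, and the topic name is illustrative):
-
-```java
-
-import org.apache.pulsar.client.api.HashingScheme;
-import org.apache.pulsar.client.api.MessageRoutingMode;
-import org.apache.pulsar.client.api.Producer;
-
-Producer<byte[]> producer = client.newProducer()
-        .topic("persistent://public/default/my-partitioned-topic")
-        // Keyless messages are spread round-robin; keyed messages are hashed to a partition.
-        .messageRoutingMode(MessageRoutingMode.RoundRobinPartition)
-        // Murmur3_32Hash hashes keys consistently across language clients.
-        .hashingScheme(HashingScheme.Murmur3_32Hash)
-        .create();
-
-```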
-
-## Non-persistent topics
-
-
-By default, Pulsar persistently stores *all* unacknowledged messages on multiple [BookKeeper](concepts-architecture-overview.md#persistent-storage) bookies (storage nodes). Data for messages on persistent topics can thus survive broker restarts and subscriber failover.
-
-Pulsar also, however, supports **non-persistent topics**, which are topics on which messages are *never* persisted to disk and live only in memory. When using non-persistent delivery, killing a Pulsar broker or disconnecting a subscriber from a topic means that all in-transit messages on that (non-persistent) topic are lost, so clients may see message loss.
-
-Non-persistent topics have names of this form (note the `non-persistent` in the name):
-
-```http
-
-non-persistent://tenant/namespace/topic
-
-```
-
-> For more info on using non-persistent topics, see the [Non-persistent messaging cookbook](cookbooks-non-persistent.md).
-
-In non-persistent topics, brokers immediately deliver messages to all connected subscribers *without persisting them* in [BookKeeper](concepts-architecture-overview.md#persistent-storage). If a subscriber is disconnected, the broker will not be able to deliver those in-transit messages, and subscribers will never be able to receive those messages again. Eliminating the persistent storage step makes messaging on non-persistent topics slightly faster than on persistent topics in some cases, but with the caveat that some of the core benefits of Pulsar are lost.
-
-> With non-persistent topics, message data lives only in memory. If a message broker fails or message data can otherwise not be retrieved from memory, your message data may be lost. Use non-persistent topics only if you're *certain* that your use case requires it and can sustain it.
-
-By default, non-persistent topics are enabled on Pulsar brokers. You can disable them in the broker's [configuration](reference-configuration.md#broker-enableNonPersistentTopics). You can manage non-persistent topics using the `pulsar-admin topics` command. For more information, see [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/).
-
-### Performance
-
-Non-persistent messaging is usually faster than persistent messaging because brokers don't persist messages and immediately send acks back to the producer as soon as the message is delivered to connected consumers. Producers thus see comparatively low publish latency with non-persistent topics.
-
-### Client API
-
-Producers and consumers can connect to non-persistent topics in the same way as persistent topics, with the crucial difference that the topic name must start with `non-persistent`. All three subscription modes---[exclusive](#exclusive), [shared](#shared), and [failover](#failover)---are supported for non-persistent topics.
-
-Here's an example [Java consumer](client-libraries-java.md#consumers) for a non-persistent topic:
-
-```java
-
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar://localhost:6650")
-        .build();
-String npTopic = "non-persistent://public/default/my-topic";
-String subscriptionName = "my-subscription-name";
-
-Consumer<byte[]> consumer = client.newConsumer()
-        .topic(npTopic)
-        .subscriptionName(subscriptionName)
-        .subscribe();
-
-```
-
-Here's an example [Java producer](client-libraries-java.md#producer) for the same non-persistent topic:
-
-```java
-
-Producer<byte[]> producer = client.newProducer()
-        .topic(npTopic)
-        .create();
-
-```
-
-## System topic
-
-A system topic is a predefined topic for internal use within Pulsar. It can be either a persistent or a non-persistent topic.
-
-System topics serve to implement certain features and eliminate dependencies on third-party components, such as transactions, heartbeat detections, topic-level policies, and resource group services. System topics keep the implementation of these features simple, independent, and flexible. Take heartbeat detection for example: you can leverage the health-check system topic to internally enable a producer/reader to produce/consume messages under the heartbeat namespace, which detects whether the current service is still alive.
-
-System topics vary by namespace. The following table outlines the available system topics for each namespace.
-
-| Namespace | TopicName | Domain | Count | Usage |
-|-----------|-----------|--------|-------|-------|
-| pulsar/system | `transaction_coordinator_assign_${id}` | Persistent | Default 16 | Transaction coordinator |
-| pulsar/system | `_transaction_log${tc_id}` | Persistent | Default 16 | Transaction log |
-| pulsar/system | `resource-usage` | Non-persistent | Default 4 | Resource group service |
-| host/port | `heartbeat` | Persistent | 1 | Heartbeat detection |
-| User-defined-ns | [`__change_events`](concepts-multi-tenancy.md#namespace-change-events-and-topic-level-policies) | Persistent | Default 4 | Topic events |
-| User-defined-ns | `__transaction_buffer_snapshot` | Persistent | One per namespace | Transaction buffer snapshots |
-| User-defined-ns | `${topicName}__transaction_pending_ack` | Persistent | One per every topic subscription acknowledged with transactions | Acknowledgements with transactions |
-
-:::note
-
-* You cannot create any system topics.
-* By default, system topics are disabled. To enable system topics, you need to change the following configurations in the `conf/broker.conf` or `conf/standalone.conf` file.
-
-  ```conf
-  systemTopicEnabled=true
-  topicLevelPoliciesEnabled=true
-  ```
-
-:::
-
-
-## Message retention and expiry
-
-By default, Pulsar message brokers:
-
-* immediately delete *all* messages that have been acknowledged by a consumer, and
-* [persistently store](concepts-architecture-overview.md#persistent-storage) all unacknowledged messages in a message backlog.
-
-Pulsar has two features, however, that enable you to override this default behavior:
-
-* Message **retention** enables you to store messages that have been acknowledged by a consumer
-* Message **expiry** enables you to set a time to live (TTL) for messages that have not yet been acknowledged
-
-> All message retention and expiry is managed at the [namespace](#namespaces) level. For a how-to, see the [Message retention and expiry](cookbooks-retention-expiry.md) cookbook.
-
-The diagram below illustrates both concepts:
-
-![Message retention and expiry](/assets/retention-expiry.png)
-
-With message retention, shown at the top, a retention policy applied to all topics in a namespace dictates that some messages are durably stored in Pulsar even though they've already been acknowledged. Acknowledged messages that are not covered by the retention policy are deleted. Without a retention policy, *all* of the acknowledged messages would be deleted.
-
-With message expiry, shown at the bottom, some messages are deleted, even though they haven't been acknowledged, because they've expired according to the TTL applied to the namespace (for example because a TTL of 5 minutes has been applied and the messages haven't been acknowledged but are 10 minutes old).
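-
-As a concrete illustration, a namespace retention policy can be set with the Java admin client, as in this minimal sketch. The admin endpoint, namespace, and limits are illustrative assumptions; the `pulsar-admin namespaces set-retention` command achieves the same from the CLI:
-
-```java
-
-import org.apache.pulsar.client.admin.PulsarAdmin;
-import org.apache.pulsar.common.policies.data.RetentionPolicies;
-
-PulsarAdmin admin = PulsarAdmin.builder()
-        .serviceHttpUrl("http://localhost:8080")
-        .build();
-
-// Retain acknowledged messages for up to 60 minutes, capped at 512 MB per topic.
-admin.namespaces().setRetention("my-tenant/my-namespace",
-        new RetentionPolicies(60, 512));
-
-admin.close();
-
-```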
-
-## Message deduplication
-
-Message duplication occurs when a message is [persisted](concepts-architecture-overview.md#persistent-storage) by Pulsar more than once. Message deduplication is an optional Pulsar feature that prevents unnecessary message duplication by processing each message only once, even if the message is received more than once.
-
-The following diagram illustrates what happens when message deduplication is disabled vs. enabled:
-
-![Pulsar message deduplication](/assets/message-deduplication.png)
-
-
-Message deduplication is disabled in the scenario shown at the top. Here, a producer publishes message 1 on a topic; the message reaches a Pulsar broker and is [persisted](concepts-architecture-overview.md#persistent-storage) to BookKeeper. The producer then sends message 1 again (in this case due to some retry logic), and the message is received by the broker and stored in BookKeeper again, which means that duplication has occurred.
-
-In the second scenario at the bottom, the producer publishes message 1, which is received by the broker and persisted, as in the first scenario. When the producer attempts to publish the message again, however, the broker knows that it has already seen message 1 and thus does not persist the message.
-
-> Message deduplication is handled at the namespace level or the topic level. For more instructions, see the [message deduplication cookbook](cookbooks-deduplication.md).
-
-
-### Producer idempotency
-
-The other available approach to message deduplication is to ensure that each message is *only produced once*. This approach is typically called **producer idempotency**. The drawback of this approach is that it defers the work of message deduplication to the application. In Pulsar, deduplication is instead handled at the [broker](reference-terminology.md#broker) level, so you do not need to modify your Pulsar client code. You only need to make administrative changes. For details, see [Managing message deduplication](cookbooks-deduplication.md).
-
-### Deduplication and effectively-once semantics
-
-Message deduplication makes Pulsar an ideal messaging system to be used in conjunction with stream processing engines (SPEs) and other systems seeking to provide effectively-once processing semantics. Messaging systems that do not offer automatic message deduplication require the SPE or other system to guarantee deduplication, which means that strict message ordering comes at the cost of burdening the application with the responsibility of deduplication. With Pulsar, strict ordering guarantees come at no application-level cost.
-
-> You can find more in-depth information in [this post](https://www.splunk.com/en_us/blog/it/exactly-once-is-not-exactly-the-same.html).
-
-## Delayed message delivery
-Delayed message delivery enables you to consume a message later rather than immediately. In this mechanism, a message is stored in BookKeeper after it is published to a broker; the `DelayedDeliveryTracker` maintains a time index (time -> messageId) in memory, and the message is delivered to a consumer once the specified delay has passed.
-
-Delayed message delivery only works in Shared subscription mode. In Exclusive and Failover subscription modes, the delayed message is dispatched immediately.
-
-The diagram below illustrates the concept of delayed message delivery:
-
-![Delayed Message Delivery](/assets/message_delay.png)
-
-A broker saves a message without any check on the delay. When the message is to be dispatched to a consumer, if it is set to be delayed, it is added to the `DelayedDeliveryTracker`. A subscription checks the `DelayedDeliveryTracker` and fetches the messages whose delay has expired.
-
-### Broker
-Delayed message delivery is enabled by default. You can change it in the broker configuration file as below:
-
-```
-
-# Whether to enable the delayed delivery for messages.
-# If disabled, messages are immediately delivered and there is no tracking overhead.
-delayedDeliveryEnabled=true
-
-# Control the ticking time for the retry of delayed message delivery,
-# affecting the accuracy of the delivery time compared to the scheduled time.
-# Default is 1 second.
-delayedDeliveryTickTimeMillis=1000
-
-```
-
-### Producer
-The following is an example of delayed message delivery for a producer in Java:
-
-```java
-
-// message to be delivered at the configured delay interval
-producer.newMessage().deliverAfter(3L, TimeUnit.MINUTES).value("Hello Pulsar!").send();
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/concepts-multi-tenancy.md b/site2/website/versioned_docs/version-2.9.2-deprecated/concepts-multi-tenancy.md
deleted file mode 100644
index 93a59557b2efca..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/concepts-multi-tenancy.md
+++ /dev/null
@@ -1,67 +0,0 @@
----
-id: concepts-multi-tenancy
-title: Multi Tenancy
-sidebar_label: "Multi Tenancy"
-original_id: concepts-multi-tenancy
----
-
-Pulsar was created from the ground up as a multi-tenant system. To support multi-tenancy, Pulsar has a concept of tenants. Tenants can be spread across clusters and can each have their own [authentication and authorization](security-overview.md) scheme applied to them. They are also the administrative unit at which storage quotas, [message TTL](cookbooks-retention-expiry.md#time-to-live-ttl), and isolation policies can be managed.
-
-The multi-tenant nature of Pulsar is reflected most visibly in topic URLs, which have this structure:
-
-```http
-
-persistent://tenant/namespace/topic
-
-```
-
-As you can see, the tenant is the most basic unit of categorization for topics (more fundamental than the namespace and topic name).
-
-## Tenants
-
-To each tenant in a Pulsar instance you can assign:
-
-* An [authorization](security-authorization.md) scheme
-* The set of [clusters](reference-terminology.md#cluster) to which the tenant's configuration applies
-
-## Namespaces
-
-Tenants and namespaces are two key concepts of Pulsar that support multi-tenancy.
-
-* Pulsar is provisioned for specified tenants with appropriate capacity allocated to each tenant.
-* A namespace is the administrative unit within a tenant. The configuration policies set on a namespace apply to all the topics created in that namespace. A tenant may create multiple namespaces via self-administration using the REST API and the [`pulsar-admin`](reference-pulsar-admin.md) CLI tool. For instance, a tenant with different applications can create a separate namespace for each application.
-
-Names for topics in the same namespace will look like this:
-
-```http
-
-persistent://tenant/app1/topic-1
-
-persistent://tenant/app1/topic-2
-
-persistent://tenant/app1/topic-3
-
-```
-
-### Namespace change events and topic-level policies
-
-Pulsar is a multi-tenant event streaming system. Administrators can manage the tenants and namespaces by setting policies at different levels. However, the policies, such as the retention policy and the storage quota policy, are only available at the namespace level. In many use cases, users need to set a policy at the topic level. The namespace change events approach supports topic-level policies in an efficient way. In this approach, Pulsar is used as an event log to store namespace change events (such as topic policy changes). This approach has a few benefits:
-- Avoid using ZooKeeper and introducing more load to ZooKeeper.
-- Use Pulsar as an event log for propagating the policy cache. It can scale efficiently.
-- Use Pulsar SQL to query the namespace changes and audit the system.
-
-Each namespace has a [system topic](concepts-messaging.md#system-topic) named `__change_events`. This system topic stores change events for a given namespace. The following figure illustrates how to leverage it to update topic-level policies.
-
-![Leverage the system topic to update topic-level policies](/assets/system-topic-for-topic-level-policies.svg)
-
-1. Pulsar Admin clients communicate with the Admin RESTful API to update topic-level policies.
-2. Any broker that receives the Admin HTTP request publishes a topic policy change event to the corresponding system topic (`__change_events`) of the namespace.
-3. Each broker that owns a namespace bundle(s) subscribes to the system topic (`__change_events`) to receive the change events of the namespace.
-4. Each broker applies the change events to its policy cache.
-5. Once the policy cache is updated, the broker sends the response back to the Pulsar Admin clients.
-
-:::note
-
-By default, the system topic is disabled. To enable topic-level policies (`topicLevelPoliciesEnabled=true`), you need to enable the system topic by setting `systemTopicEnabled` to `true` in the `conf/broker.conf` or `conf/standalone.conf` file.
-
-:::
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/concepts-multiple-advertised-listeners.md b/site2/website/versioned_docs/version-2.9.2-deprecated/concepts-multiple-advertised-listeners.md
deleted file mode 100644
index f2e1ae0aadc7ca..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/concepts-multiple-advertised-listeners.md
+++ /dev/null
@@ -1,44 +0,0 @@
----
-id: concepts-multiple-advertised-listeners
-title: Multiple advertised listeners
-sidebar_label: "Multiple advertised listeners"
-original_id: concepts-multiple-advertised-listeners
----
-
-When a Pulsar cluster is deployed in a production environment, it may be necessary to expose multiple advertised addresses for the broker. For example, when you deploy a Pulsar cluster in Kubernetes and want other clients, which are not in the same Kubernetes cluster, to connect to the Pulsar cluster, you need to assign a broker URL to external clients. But clients in the same Kubernetes cluster can still connect to the Pulsar cluster through the internal network of Kubernetes.
-
-## Advertised listeners
-
-To ensure clients in both internal and external networks can connect to a Pulsar cluster, Pulsar introduces the `advertisedListeners` and `internalListenerName` configuration options in the [broker configuration file](reference-configuration.md#broker), so that the broker can expose multiple advertised listeners and separate internal from external network traffic.
-
-- The `advertisedListeners` option is used to specify multiple advertised listeners. The broker uses the listener as the broker identifier in the load manager and the bundle owner data. `advertisedListeners` is formatted as `<listenerName>:pulsar://<host>:<port>, <listenerName>:pulsar+ssl://<host>:<port>`. You can set up the `advertisedListeners` like
-`advertisedListeners=internal:pulsar://192.168.1.11:6660,internal:pulsar+ssl://192.168.1.11:6651`.
-- The `internalListenerName` option is used to specify the internal service URL that the broker uses. You can specify the `internalListenerName` by choosing one of the `advertisedListeners`. If the `internalListenerName` is absent, the broker uses the listener name of the first advertised listener as the `internalListenerName`.
-
-After setting up the `advertisedListeners`, clients can choose one of the listeners as the service URL to create a connection to the broker, as long as the network is accessible. However, if the client creates producers or consumers on a topic, it must send a lookup request to the broker to find the owner broker, and then connect to the owner broker to publish or consume messages. Therefore, you must ensure that the client can get the corresponding service URL with the same advertised listener name as the one the client uses. This helps keep the client side simple and secure.
-
-## Use multiple advertised listeners
-
-This example shows how a Pulsar client uses multiple advertised listeners.
-
-1. Configure multiple advertised listeners in the broker configuration file.
-
-```shell
-
-advertisedListeners={listenerName}:pulsar://xxxx:6650,{listenerName}:pulsar+ssl://xxxx:6651
-
-```
-
-2. Specify the listener name for the client.
-
-```java
-
-PulsarClient client = PulsarClient.builder()
-    .serviceUrl("pulsar://xxxx:6650")
-    .listenerName("external")
-    .build();
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/concepts-overview.md b/site2/website/versioned_docs/version-2.9.2-deprecated/concepts-overview.md
deleted file mode 100644
index c643aa0ce7bbce..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/concepts-overview.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-id: concepts-overview
-title: Pulsar Overview
-sidebar_label: "Overview"
-original_id: concepts-overview
----
-
-Pulsar is a multi-tenant, high-performance solution for server-to-server messaging. Originally developed by Yahoo, Pulsar is under the stewardship of the [Apache Software Foundation](https://www.apache.org/).
-
-Key features of Pulsar are listed below:
-
-* Native support for multiple clusters in a Pulsar instance, with seamless [geo-replication](administration-geo.md) of messages across clusters.
-* Very low publish and end-to-end latency.
-* Seamless scalability to over a million topics.
-* A simple [client API](concepts-clients.md) with bindings for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python.md) and [C++](client-libraries-cpp.md).
-* Multiple [subscription modes](concepts-messaging.md#subscription-modes) ([exclusive](concepts-messaging.md#exclusive), [shared](concepts-messaging.md#shared), and [failover](concepts-messaging.md#failover)) for topics.
-* Guaranteed message delivery with [persistent message storage](concepts-architecture-overview.md#persistent-storage) provided by [Apache BookKeeper](http://bookkeeper.apache.org/).
-* A serverless lightweight computing framework, [Pulsar Functions](functions-overview.md), that offers the capability for stream-native data processing.
-* A serverless connector framework, [Pulsar IO](io-overview.md), which is built on Pulsar Functions and makes it easier to move data in and out of Apache Pulsar.
-* [Tiered Storage](concepts-tiered-storage.md), which offloads data from hot/warm storage to cold/long-term storage (such as S3 and GCS) as the data ages out.
-
-## Contents
-
-- [Messaging Concepts](concepts-messaging.md)
-- [Architecture Overview](concepts-architecture-overview.md)
-- [Pulsar Clients](concepts-clients.md)
-- [Geo Replication](concepts-replication.md)
-- [Multi Tenancy](concepts-multi-tenancy.md)
-- [Authentication and Authorization](concepts-authentication.md)
-- [Topic Compaction](concepts-topic-compaction.md)
-- [Tiered Storage](concepts-tiered-storage.md)
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/concepts-proxy-sni-routing.md b/site2/website/versioned_docs/version-2.9.2-deprecated/concepts-proxy-sni-routing.md
deleted file mode 100644
index 51419a66cefe6e..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/concepts-proxy-sni-routing.md
+++ /dev/null
@@ -1,180 +0,0 @@
----
-id: concepts-proxy-sni-routing
-title: Proxy support with SNI routing
-sidebar_label: "Proxy support with SNI routing"
-original_id: concepts-proxy-sni-routing
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-A proxy server is an intermediary server that forwards requests from multiple clients to different servers across the Internet. The proxy server acts as a "traffic cop" in both forward and reverse proxy scenarios, and provides benefits to your system such as load balancing, performance, security, auto-scaling, and so on.
-
-The proxy in Pulsar acts as a reverse proxy and creates a gateway in front of brokers. Pulsar does not natively integrate third-party proxies such as Apache Traffic Server (ATS), HAProxy, Nginx, and Envoy, but these proxy servers support **SNI routing**. SNI routing is used to route traffic to a destination without terminating the SSL connection. Layer 4 routing provides greater transparency because the outbound connection is determined by examining the destination address in the client TCP packets.
-
-Pulsar clients (Java, C++, Python) support the [SNI routing protocol](https://github.com/apache/pulsar/wiki/PIP-60:-Support-Proxy-server-with-SNI-routing), so you can connect to brokers through the proxy. This document walks you through how to set up the ATS proxy, enable SNI routing, and connect Pulsar clients to brokers through the ATS proxy.
-
-## ATS-SNI Routing in Pulsar
-To support [layer-4 SNI routing](https://docs.trafficserver.apache.org/en/latest/admin-guide/layer-4-routing.en.html) with ATS, the inbound connection must be a TLS connection. Pulsar clients support the SNI routing protocol on TLS connections, so when Pulsar clients connect to a broker through the ATS proxy, Pulsar uses ATS as a reverse proxy.
-
-Pulsar supports SNI routing for geo-replication, so brokers can connect to brokers in other clusters through the ATS proxy.
-
-This section explains how to set up and use ATS as a reverse proxy, so Pulsar clients can connect to brokers through the ATS proxy using the SNI routing protocol on a TLS connection.
-
-### Set up ATS Proxy for layer-4 SNI routing
-To support layer 4 SNI routing, you need to configure the `records.config` and `ssl_server_name.config` files.
-
-![Pulsar client SNI](/assets/pulsar-sni-client.png)
-
-The [records.config](https://docs.trafficserver.apache.org/en/latest/admin-guide/files/records.config.en.html) file is located in the `/usr/local/etc/trafficserver/` directory by default. The file lists configurable variables used by the ATS.
-
-To configure the `records.config` file, complete the following steps.
-1. Update the TLS port (`http.server_ports`) on which the proxy listens, and update the proxy certs (`ssl.client.cert.path` and `ssl.client.cert.filename`) to secure TLS tunneling.
-2. Configure the server ports (`http.connect_ports`) used for tunneling to the broker. If Pulsar brokers are listening on the `4443` and `6651` ports, add those broker service ports to the `http.connect_ports` configuration.
-
-The following is an example.
-
-```
-
-# PROXY TLS PORT
-CONFIG proxy.config.http.server_ports STRING 4443:ssl 4080
-# PROXY CERTS FILE PATH
-CONFIG proxy.config.ssl.client.cert.path STRING /proxy-cert.pem
-# PROXY KEY FILE PATH
-CONFIG proxy.config.ssl.client.cert.filename STRING /proxy-key.pem
-
-
-# The range of origin server ports that can be used for tunneling via CONNECT.
-# Traffic Server allows tunnels only to the specified ports.
-# Supports both wildcards (*) and ranges (e.g. 0-1023).
-CONFIG proxy.config.http.connect_ports STRING 4443 6651
-
-```
-
-The `ssl_server_name` file is used to configure TLS connection handling for inbound and outbound connections. The configuration is determined by the SNI values provided by the inbound connection. The file consists of a set of configuration items, each identified by an SNI value (`fqdn`). When an inbound TLS connection is made, the SNI value from the TLS negotiation is matched against the items specified in this file. If the values match, the values specified in that item override the default values.
-
-The following example shows the mapping of the inbound SNI hostname coming from the client to the actual broker service URL where the request should be redirected. For example, if the client sends the SNI header `pulsar-broker1`, the proxy creates a TLS tunnel by redirecting the request to the `pulsar-broker1:6651` service URL.
-
-```
-
-server_config = {
-  {
-     fqdn = 'pulsar-broker-vip',
-     # Forward to Pulsar broker which is listening on 6651
-     tunnel_route = 'pulsar-broker-vip:6651'
-  },
-  {
-     fqdn = 'pulsar-broker1',
-     # Forward to Pulsar broker-1 which is listening on 6651
-     tunnel_route = 'pulsar-broker1:6651'
-  },
-  {
-     fqdn = 'pulsar-broker2',
-     # Forward to Pulsar broker-2 which is listening on 6651
-     tunnel_route = 'pulsar-broker2:6651'
-  },
-}
-
-```
-
-After you configure the `ssl_server_name.config` and `records.config` files, the ATS-proxy server handles SNI routing and creates a TCP tunnel between the client and the broker.
-
-### Configure Pulsar-client with SNI routing
-ATS SNI-routing works only with TLS. You need to enable TLS for the ATS proxy and brokers first, configure the SNI routing protocol, and then connect Pulsar clients to brokers through the ATS proxy. Pulsar clients support SNI routing by connecting to the proxy and sending the target broker URL in the SNI header. This process is handled internally. You only need to configure the following proxy settings when you create a Pulsar client to use the SNI routing protocol.
-
-````mdx-code-block
-
-```java
-
-String brokerServiceUrl = "pulsar+ssl://pulsar-broker-vip:6651/";
-String proxyUrl = "pulsar+ssl://ats-proxy:443";
-ClientBuilder clientBuilder = PulsarClient.builder()
-        .serviceUrl(brokerServiceUrl)
-        .tlsTrustCertsFilePath(TLS_TRUST_CERT_FILE_PATH)
-        .enableTls(true)
-        .allowTlsInsecureConnection(false)
-        .proxyServiceUrl(proxyUrl, ProxyProtocol.SNI)
-        .operationTimeout(1000, TimeUnit.MILLISECONDS);
-
-Map<String, String> authParams = new HashMap<>();
-authParams.put("tlsCertFile", TLS_CLIENT_CERT_FILE_PATH);
-authParams.put("tlsKeyFile", TLS_CLIENT_KEY_FILE_PATH);
-clientBuilder.authentication(AuthenticationTls.class.getName(), authParams);
-
-PulsarClient pulsarClient = clientBuilder.build();
-
-```
-
-```c++
-
-ClientConfiguration config = ClientConfiguration();
-config.setUseTls(true);
-config.setTlsTrustCertsFilePath("/path/to/cacert.pem");
-config.setTlsAllowInsecureConnection(false);
-config.setAuth(pulsar::AuthTls::create(
-    "/path/to/client-cert.pem", "/path/to/client-key.pem"));
-
-Client client("pulsar+ssl://ats-proxy:443", config);
-
-```
-
-```python
-
-from pulsar import Client, AuthenticationTLS
-
-auth = AuthenticationTLS("/path/to/my-role.cert.pem", "/path/to/my-role.key-pk8.pem")
-client = Client("pulsar+ssl://ats-proxy:443",
-                tls_trust_certs_file_path="/path/to/ca.cert.pem",
-                tls_allow_insecure_connection=False,
-                authentication=auth)
-
-```
-
-````
-
-### Pulsar geo-replication with SNI routing
-You can use the ATS proxy for geo-replication: Pulsar brokers can connect to brokers in other clusters by using SNI routing. To enable SNI routing for broker connections across clusters, you need to configure the SNI proxy URL in the cluster metadata. Once the SNI proxy URL is configured in the cluster metadata, brokers can connect to brokers in other clusters through the proxy over SNI routing.
-
-![Pulsar client SNI](/assets/pulsar-sni-geo.png)
-
-In this example, a Pulsar cluster is deployed into two separate regions, `us-west` and `us-east`. Both regions are configured with the ATS proxy, and brokers in each region run behind the ATS proxy. We configure the cluster metadata for both clusters, so brokers in one cluster can use SNI routing and connect to brokers in the other cluster through the ATS proxy.
-
-(a) Configure the cluster metadata for `us-east` with the `us-east` broker service URL and the `us-east` ATS proxy URL with the SNI proxy-protocol.
-
-```
-
-./pulsar-admin clusters update \
---broker-url-secure pulsar+ssl://east-broker-vip:6651 \
---url http://east-broker-vip:8080 \
---proxy-protocol SNI \
---proxy-url pulsar+ssl://east-ats-proxy:443
-
-```
-
-(b) Configure the cluster metadata for `us-west` with the `us-west` broker service URL and the `us-west` ATS proxy URL with the SNI proxy-protocol.
-
-```
-
-./pulsar-admin clusters update \
---broker-url-secure pulsar+ssl://west-broker-vip:6651 \
---url http://west-broker-vip:8080 \
---proxy-protocol SNI \
---proxy-url pulsar+ssl://west-ats-proxy:443
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/concepts-replication.md b/site2/website/versioned_docs/version-2.9.2-deprecated/concepts-replication.md
deleted file mode 100644
index 799f0eb4d92c6b..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/concepts-replication.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-id: concepts-replication
-title: Geo Replication
-sidebar_label: "Geo Replication"
-original_id: concepts-replication
----
-
-Pulsar enables messages to be produced and consumed in different geo-locations. For instance, your application may be publishing data in one region or market and you would like to process it for consumption in other regions or markets. [Geo-replication](administration-geo.md) in Pulsar enables you to do that.
-
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/concepts-tiered-storage.md b/site2/website/versioned_docs/version-2.9.2-deprecated/concepts-tiered-storage.md
deleted file mode 100644
index f6988e53a8cd4e..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/concepts-tiered-storage.md
+++ /dev/null
@@ -1,18 +0,0 @@
----
-id: concepts-tiered-storage
-title: Tiered Storage
-sidebar_label: "Tiered Storage"
-original_id: concepts-tiered-storage
----
-
-Pulsar's segment-oriented architecture allows for topic backlogs to grow very large, effectively without limit. However, this can become expensive over time.
-
-One way to alleviate this cost is to use Tiered Storage. With tiered storage, older messages in the backlog can be moved from BookKeeper to a cheaper storage mechanism, while still allowing clients to access the backlog as if nothing had changed.
-
-![Tiered Storage](/assets/pulsar-tiered-storage.png)
-
-> Data written to BookKeeper is replicated to 3 physical machines by default. However, once a segment is sealed in BookKeeper it becomes immutable and can be copied to long term storage. Long term storage can achieve cost savings by using mechanisms such as [Reed-Solomon error correction](https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction) to require fewer physical copies of data.
-
-Pulsar currently supports S3, Google Cloud Storage (GCS), and filesystem for [long term store](https://pulsar.apache.org/docs/en/cookbooks-tiered-storage/). Offloading to long term storage is triggered via a REST API or the command-line interface. The user passes in the amount of topic data they wish to retain on BookKeeper, and the broker copies the backlog data to long term storage. The original data is then deleted from BookKeeper after a configured delay (4 hours by default).
-
-> For a guide for setting up tiered storage, see the [Tiered storage cookbook](cookbooks-tiered-storage.md).
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/concepts-topic-compaction.md b/site2/website/versioned_docs/version-2.9.2-deprecated/concepts-topic-compaction.md
deleted file mode 100644
index 34b7ed7fbbd31e..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/concepts-topic-compaction.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-id: concepts-topic-compaction
-title: Topic Compaction
-sidebar_label: "Topic Compaction"
-original_id: concepts-topic-compaction
----
-
-Pulsar was built with highly scalable [persistent storage](concepts-architecture-overview.md#persistent-storage) of message data as a primary objective. Pulsar topics enable you to persistently store as many unacknowledged messages as you need while preserving message ordering. By default, Pulsar stores *all* unacknowledged/unprocessed messages produced on a topic. Accumulating many unacknowledged messages on a topic is necessary for many Pulsar use cases, but it can also be very time intensive for Pulsar consumers to "rewind" through the entire log of messages.
-
-> For a more practical guide to topic compaction, see the [Topic compaction cookbook](cookbooks-compaction.md).
-
-For some use cases, consumers don't need a complete "image" of the topic log. They may only need a few values to construct a more "shallow" image of the log, perhaps even just the most recent value. For these kinds of use cases, Pulsar offers **topic compaction**. When you run compaction on a topic, Pulsar goes through the topic's backlog and removes messages that are *obscured* by later messages, i.e. it goes through the topic on a per-key basis and leaves only the most recent message associated with each key.
-
-Pulsar's topic compaction feature:
-
-* Allows for faster "rewind" through topic logs
-* Applies only to [persistent topics](concepts-architecture-overview.md#persistent-storage)
-* Is triggered automatically when the backlog reaches a certain size, or can be triggered manually via the command line. See the [Topic compaction cookbook](cookbooks-compaction.md)
-* Is conceptually and operationally distinct from [retention and expiry](concepts-messaging.md#message-retention-and-expiry). Topic compaction *does*, however, respect retention. If retention has removed a message from the message backlog of a topic, the message will also not be readable from the compacted topic ledger.
-
-> #### Topic compaction example: the stock ticker
-> An example use case for a compacted Pulsar topic would be a stock ticker topic. On a stock ticker topic, each message bears a timestamped dollar value for stocks for purchase (with the message key holding the stock symbol, e.g. `AAPL` or `GOOG`). With a stock ticker you may care only about the most recent value(s) of the stock and have no interest in historical data (i.e. you don't need to construct a complete image of the topic's sequence of messages per key). Compaction would be highly beneficial in this case because it would keep consumers from needing to rewind through obscured messages.
-
-
-## How topic compaction works
-
-When topic compaction is triggered [via the CLI](cookbooks-compaction.md), Pulsar iterates over the entire topic from beginning to end. For each key that it encounters, the compaction routine keeps a record of the latest occurrence of that key.
-
-After that, the broker creates a new [BookKeeper ledger](concepts-architecture-overview.md#ledgers) and makes a second iteration through each message on the topic. For each message, if the key matches the latest occurrence of that key, then the key's data payload, message ID, and metadata are written to the newly created ledger. If the key doesn't match the latest occurrence, the message is skipped and left alone. If any given message has an empty payload, it is skipped and considered deleted (akin to the concept of [tombstones](https://en.wikipedia.org/wiki/Tombstone_(data_store)) in key-value databases). At the end of this second iteration through the topic, the newly created BookKeeper ledger is closed and two things are written to the topic's metadata: the ID of the BookKeeper ledger and the message ID of the last compacted message (this is known as the **compaction horizon** of the topic). Once this metadata is written, compaction is complete.
-
-After the initial compaction operation, the Pulsar [broker](reference-terminology.md#broker) that owns the topic is notified whenever any future changes are made to the compaction horizon and compacted backlog. When such changes occur:
-
-* Clients (consumers and readers) that have compacted reads enabled (`readCompacted=true`) will attempt to read messages from the topic and either:
-  * Read from the topic like normal (if the message ID is greater than or equal to the compaction horizon) or
-  * Read beginning at the compaction horizon (if the message ID is lower than the compaction horizon)
-
-
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/concepts-transactions.md b/site2/website/versioned_docs/version-2.9.2-deprecated/concepts-transactions.md
deleted file mode 100644
index 08490ba06b5d7e..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/concepts-transactions.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-id: transactions
-title: Transactions
-sidebar_label: "Overview"
-original_id: transactions
----
-
-Transactional semantics enable event streaming applications to consume, process, and produce messages in one atomic operation. In Pulsar, a producer or consumer can work with messages across multiple topics and partitions and ensure those messages are processed as a single unit.
-
-The following concepts help you understand Pulsar transactions.
-
-## Transaction coordinator and transaction log
-The transaction coordinator maintains the topics and subscriptions that interact in a transaction. When a transaction is committed, the transaction coordinator interacts with the topic owner broker to complete the transaction.
-
-The transaction coordinator maintains the entire life cycle of transactions, and prevents a transaction from entering an incorrect status.
-
-The transaction coordinator handles transaction timeouts, and ensures that a transaction is aborted after its timeout elapses.
-
-All the transaction metadata is persisted in the transaction log. The transaction log is backed by a Pulsar topic. If the transaction coordinator crashes, it can restore the transaction metadata from the transaction log.
-
-## Transaction ID
-The transaction ID (TxnID) identifies a unique transaction in Pulsar. The transaction ID is 128 bits long. The highest 16 bits are reserved for the ID of the transaction coordinator, and the remaining bits are used for monotonically increasing numbers within each transaction coordinator. The TxnID makes it easy to locate where a transaction failed.
-
-## Transaction buffer
-Messages produced within a transaction are stored in the transaction buffer. The messages in the transaction buffer are not materialized (visible) to consumers until the transaction is committed. The messages in the transaction buffer are discarded when the transaction is aborted.
-
-## Pending acknowledge state
-Message acknowledgments within a transaction are maintained in the pending acknowledge state until the transaction completes. If a message is in the pending acknowledge state, the message cannot be acknowledged by other transactions until it is removed from the pending acknowledge state.
-
-The pending acknowledge state is persisted to the pending acknowledge log. The pending acknowledge log is backed by a Pulsar topic. A new broker can restore the state from the pending acknowledge log to ensure the acknowledgment is not lost.
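-
-To make these concepts concrete, the following is a minimal sketch of the transaction API in the Java client. It assumes transactions are enabled on the broker (`transactionCoordinatorEnabled=true`) and that `producer`, `consumer`, and a received message `msg` already exist; names and timeouts are illustrative:
-
-```java
-
-import java.util.concurrent.TimeUnit;
-import org.apache.pulsar.client.api.PulsarClient;
-import org.apache.pulsar.client.api.transaction.Transaction;
-
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar://localhost:6650")
-        .enableTransaction(true)
-        .build();
-
-// Begin a transaction; a transaction coordinator assigns its TxnID.
-Transaction txn = client.newTransaction()
-        .withTransactionTimeout(5, TimeUnit.MINUTES)
-        .build()
-        .get();
-
-// Messages produced with txn sit in the transaction buffer until commit.
-producer.newMessage(txn).value("payload".getBytes()).send();
-
-// Acknowledgments with txn are held in the pending acknowledge state.
-consumer.acknowledgeAsync(msg.getMessageId(), txn);
-
-// Committing materializes the produced messages and finalizes the acks.
-txn.commit().get();
-
-```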
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/cookbooks-bookkeepermetadata.md b/site2/website/versioned_docs/version-2.9.2-deprecated/cookbooks-bookkeepermetadata.md
deleted file mode 100644
index b0fa98dc3b65d5..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/cookbooks-bookkeepermetadata.md
+++ /dev/null
@@ -1,21 +0,0 @@
----
-id: cookbooks-bookkeepermetadata
-title: BookKeeper Ledger Metadata
-original_id: cookbooks-bookkeepermetadata
----
-
-Pulsar stores data on BookKeeper ledgers; you can understand the contents of a ledger by inspecting the metadata attached to it.
-Such metadata is stored on ZooKeeper and is readable using the BookKeeper APIs.
-
-Description of current metadata:
-
-| Scope | Metadata name | Metadata value |
-| ------------- | ------------- | ------------- |
-| All ledgers | application | 'pulsar' |
-| All ledgers | component | 'managed-ledger', 'schema', 'compacted-topic' |
-| Managed ledgers | pulsar/managed-ledger | name of the ledger |
-| Cursor | pulsar/cursor | name of the cursor |
-| Compacted topic | pulsar/compactedTopic | name of the original topic |
-| Compacted topic | pulsar/compactedTo | id of the last compacted message |
-
-
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/cookbooks-compaction.md b/site2/website/versioned_docs/version-2.9.2-deprecated/cookbooks-compaction.md
deleted file mode 100644
index dfa314727241a8..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/cookbooks-compaction.md
+++ /dev/null
@@ -1,142 +0,0 @@
----
-id: cookbooks-compaction
-title: Topic compaction
-sidebar_label: "Topic compaction"
-original_id: cookbooks-compaction
----
-
-Pulsar's [topic compaction](concepts-topic-compaction.md#compaction) feature enables you to create **compacted** topics in which older, "obscured" entries are pruned from the topic, allowing for faster reads through the topic's history (which messages are deemed obscured/outdated/irrelevant will depend on your use case).
-
-To use compaction:
-
-* You need to give messages keys, as topic compaction in Pulsar takes place on a *per-key basis* (i.e. messages are compacted based on their key). For a stock ticker use case, the stock symbol---e.g. `AAPL` or `GOOG`---could serve as the key (more on this [below](#when-should-i-use-compacted-topics)). Messages without keys will be left alone by the compaction process.
-* Compaction can be configured to run [automatically](#configuring-compaction-to-run-automatically), or you can manually [trigger](#triggering-compaction-manually) compaction using the Pulsar administrative API.
-* Your consumers must be [configured](#consumer-configuration) to read from compacted topics ([Java consumers](#java), for example, have a `readCompacted` setting that must be set to `true`). If this configuration is not set, consumers will still be able to read from the non-compacted topic.
-
-
-> Compaction only works on messages that have keys (as in the stock ticker example, where the stock symbol serves as the key for each message). Keys can thus be thought of as the axis along which compaction is applied. Messages that don't have keys are simply ignored by compaction.
-
-## When should I use compacted topics?
-
-The classic example of a topic that could benefit from compaction would be a stock ticker topic through which consumers can access up-to-date values for specific stocks. Imagine a scenario in which messages carrying stock value data use the stock symbol as the key (`GOOG`, `AAPL`, `TWTR`, etc.). Compacting this topic would give consumers on the topic two options:
-
-* They can read from the "original," non-compacted topic in case they need access to "historical" values, i.e. the entirety of the topic's messages.
-* They can read from the compacted topic if they only want to see the most up-to-date messages.
-
-Thus, if you're using a Pulsar topic called `stock-values`, some consumers could have access to all messages in the topic (perhaps because they're performing some kind of number crunching of all values in the last hour) while the consumers used to power the real-time stock ticker only see the compacted topic (and thus aren't forced to process outdated messages). Which variant of the topic any given consumer pulls messages from is determined by the consumer's [configuration](#consumer-configuration).
-
-> One of the benefits of compaction in Pulsar is that you aren't forced to choose between compacted and non-compacted topics, as the compaction process leaves the original topic as-is and essentially adds an alternate topic. In other words, you can run compaction on a topic and consumers that need access to the non-compacted version of the topic will not be adversely affected.
-
-
-## Configuring compaction to run automatically
-
-Tenant administrators can configure a policy for compaction at the namespace level. The policy specifies how large the topic backlog can grow before compaction is triggered.
-
-For example, to trigger compaction when the backlog reaches 100MB:
-
-```bash
-
-$ bin/pulsar-admin namespaces set-compaction-threshold \
-  --threshold 100M my-tenant/my-namespace
-
-```
-
-Configuring the compaction threshold on a namespace will apply to all topics within that namespace.
-
-## Triggering compaction manually
-
-In order to run compaction on a topic, you need to use the [`topics compact`](reference-pulsar-admin.md#topics-compact) command for the [`pulsar-admin`](reference-pulsar-admin.md) CLI tool. Here's an example:
-
-```bash
-
-$ bin/pulsar-admin topics compact \
-  persistent://my-tenant/my-namespace/my-topic
-
-```
-
-The `pulsar-admin` tool runs compaction via the Pulsar {@inject: rest:REST:/} API. To run compaction in its own dedicated process, i.e. *not* through the REST API, you can use the [`pulsar compact-topic`](reference-cli-tools.md#pulsar-compact-topic) command. Here's an example:
-
-```bash
-
-$ bin/pulsar compact-topic \
-  --topic persistent://my-tenant/my-namespace/my-topic
-
-```
-
-> Running compaction in its own process is recommended when you want to avoid interfering with the broker's performance. Broker performance should only be affected, however, when running compaction on topics with a large keyspace (i.e. when there are many keys on the topic). The first phase of the compaction process keeps a copy of each key in the topic, which can create memory pressure as the number of keys grows. Using the `pulsar-admin topics compact` command to run compaction through the REST API should present no issues in the overwhelming majority of cases; using `pulsar compact-topic` should correspondingly be considered an edge case.
-
-The `pulsar compact-topic` command communicates with [ZooKeeper](https://zookeeper.apache.org) directly. In order to establish communication with ZooKeeper, though, the `pulsar` CLI tool will need to have a valid [broker configuration](reference-configuration.md#broker).
-You can either supply a proper configuration in `conf/broker.conf` or specify a non-default location for the configuration:
-
-```bash
-
-$ bin/pulsar compact-topic \
-  --broker-conf /path/to/broker.conf \
-  --topic persistent://my-tenant/my-namespace/my-topic
-
-# If the configuration is in conf/broker.conf
-$ bin/pulsar compact-topic \
-  --topic persistent://my-tenant/my-namespace/my-topic
-
-```
-
-#### When should I trigger compaction?
-
-How often you [trigger compaction](#triggering-compaction-manually) will vary widely based on the use case. If you want a compacted topic to be extremely speedy on read, then you should run compaction fairly frequently.
-
-## Consumer configuration
-
-Pulsar consumers and readers need to be configured to read from compacted topics. The sections below show you how to enable compacted topic reads for Pulsar's language clients.
-
-### Java
-
-In order to read from a compacted topic using a Java consumer, the `readCompacted` parameter must be set to `true`. Here's an example consumer for a compacted topic:
-
-```java
-
-Consumer<byte[]> compactedTopicConsumer = client.newConsumer()
-        .topic("some-compacted-topic")
-        .readCompacted(true)
-        .subscribe();
-
-```
-
-As mentioned above, topic compaction in Pulsar works on a *per-key basis*. That means that messages that you produce on compacted topics need to have keys (the content of the key will depend on your use case). Messages that don't have keys will be ignored by the compaction process. Here's an example of attaching a key to a Pulsar message through a producer's message builder:
-
-```java
-
-producer.newMessage()
-        .key("some-key")
-        .value(someByteArray)
-        .send();
-
-```
-
-The example below shows a message with a key being produced on a compacted Pulsar topic:
-
-```java
-
-import org.apache.pulsar.client.api.Producer;
-import org.apache.pulsar.client.api.PulsarClient;
-
-PulsarClient client = PulsarClient.builder()
-    .serviceUrl("pulsar://localhost:6650")
-    .build();
-
-Producer<byte[]> compactedTopicProducer = client.newProducer()
-    .topic("some-compacted-topic")
-    .create();
-
-compactedTopicProducer.newMessage()
-    .key("some-key")
-    .value(someByteArray)
-    .send();
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/cookbooks-deduplication.md b/site2/website/versioned_docs/version-2.9.2-deprecated/cookbooks-deduplication.md
deleted file mode 100644
index f7f9e3d7bb425b..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/cookbooks-deduplication.md
+++ /dev/null
@@ -1,151 +0,0 @@
----
-id: cookbooks-deduplication
-title: Message deduplication
-sidebar_label: "Message deduplication"
-original_id: cookbooks-deduplication
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-When **message deduplication** is enabled, it ensures that each message produced on Pulsar topics is persisted to disk *only once*, even if the message is produced more than once. Message deduplication is handled automatically on the server side.
-
-To use message deduplication in Pulsar, you need to configure your Pulsar brokers and clients.
-
-## How it works
-
-You can enable or disable message deduplication at the namespace level or the topic level. By default, it is disabled on all namespaces and topics.
You can enable it in the following ways:

* Enable deduplication for all namespaces/topics at the broker level.
* Enable deduplication for a specific namespace with the `pulsar-admin namespaces` interface.
* Enable deduplication for a specific topic with the `pulsar-admin topics` interface.

## Configure message deduplication

You can configure message deduplication in Pulsar using the [`broker.conf`](reference-configuration.md#broker) configuration file. The following deduplication-related parameters are available.

Parameter | Description | Default
:---------|:------------|:-------
`brokerDeduplicationEnabled` | Sets the default behavior for message deduplication in the Pulsar broker. If it is set to `true`, message deduplication is enabled on all namespaces/topics. If it is set to `false`, you have to enable or disable deduplication at the namespace level or the topic level. | `false`
`brokerDeduplicationMaxNumberOfProducers` | The maximum number of producers for which information is stored for deduplication purposes. | `10000`
`brokerDeduplicationEntriesInterval` | The number of entries after which a deduplication informational snapshot is taken. A larger interval leads to fewer snapshots being taken, though this lengthens the topic recovery time (the time required for entries published after the snapshot to be replayed). | `1000`
`brokerDeduplicationSnapshotIntervalSeconds`| The time period after which a deduplication informational snapshot is taken. It works in conjunction with `brokerDeduplicationEntriesInterval`. |`120`
`brokerDeduplicationProducerInactivityTimeoutMinutes` | The time of inactivity (in minutes) after which the broker discards deduplication information related to a disconnected producer. | `360` (6 hours)

### Set the default value at the broker level

By default, message deduplication is *disabled* on all Pulsar namespaces/topics. To enable it on all namespaces/topics, set the `brokerDeduplicationEnabled` parameter to `true` and restart the broker.

Regardless of the value of `brokerDeduplicationEnabled`, enabling or disabling deduplication via the Pulsar admin CLI overrides the default settings at the broker level.

### Enable message deduplication

Though message deduplication is disabled by default at the broker level, you can enable message deduplication for a specific namespace or topic using the [`pulsar-admin namespaces set-deduplication`](reference-pulsar-admin.md#namespace-set-deduplication) or the [`pulsar-admin topics set-deduplication`](reference-pulsar-admin.md#topic-set-deduplication) command. You can use the `--enable`/`-e` flag and specify the namespace/topic.

The following example shows how to enable message deduplication at the namespace level.

```bash

$ bin/pulsar-admin namespaces set-deduplication \
  public/default \
  --enable # or just -e

```

### Disable message deduplication

Even if you enable message deduplication at the broker level, you can disable message deduplication for a specific namespace or topic using the [`pulsar-admin namespaces set-deduplication`](reference-pulsar-admin.md#namespace-set-deduplication) or the [`pulsar-admin topics set-deduplication`](reference-pulsar-admin.md#topic-set-deduplication) command. Use the `--disable`/`-d` flag and specify the namespace/topic.

The following example shows how to disable message deduplication at the namespace level.
```bash

$ bin/pulsar-admin namespaces set-deduplication \
  public/default \
  --disable # or just -d

```

## Pulsar clients

If you enable message deduplication in Pulsar brokers, you need to complete the following tasks for your client producers:

1. Specify a name for the producer.
1. Set the message timeout to `0` (that is, no timeout).

The instructions for Java, Python, and C++ clients are different.

````mdx-code-block


To enable message deduplication on a [Java producer](client-libraries-java.md#producers), set the producer name using the `producerName` setter, and set the timeout to `0` using the `sendTimeout` setter.

```java

import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;
import java.util.concurrent.TimeUnit;

PulsarClient pulsarClient = PulsarClient.builder()
    .serviceUrl("pulsar://localhost:6650")
    .build();
Producer producer = pulsarClient.newProducer()
    .producerName("producer-1")
    .topic("persistent://public/default/topic-1")
    .sendTimeout(0, TimeUnit.SECONDS)
    .create();

```


To enable message deduplication on a [Python producer](client-libraries-python.md#producers), set the producer name using `producer_name`, and set the timeout to `0` using `send_timeout_millis`.

```python

import pulsar

client = pulsar.Client("pulsar://localhost:6650")
producer = client.create_producer(
    "persistent://public/default/topic-1",
    producer_name="producer-1",
    send_timeout_millis=0)

```


To enable message deduplication on a [C++ producer](client-libraries-cpp.md#producer), set the producer name using `setProducerName`, and set the timeout to `0` using `setSendTimeout`.

```cpp

#include <pulsar/Client.h>

std::string serviceUrl = "pulsar://localhost:6650";
std::string topic = "persistent://some-tenant/ns1/topic-1";
std::string producerName = "producer-1";

Client client(serviceUrl);

ProducerConfiguration producerConfig;
producerConfig.setSendTimeout(0);
producerConfig.setProducerName(producerName);

Producer producer;

Result result = client.createProducer(topic, producerConfig, producer);

```


````
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/cookbooks-encryption.md b/site2/website/versioned_docs/version-2.9.2-deprecated/cookbooks-encryption.md
deleted file mode 100644
index f0d8fb8735eb63..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/cookbooks-encryption.md
+++ /dev/null
@@ -1,184 +0,0 @@
---
id: cookbooks-encryption
title: Pulsar Encryption
sidebar_label: "Encryption"
original_id: cookbooks-encryption
---

Pulsar encryption allows applications to encrypt messages at the producer and decrypt them at the consumer. Encryption is performed using the public/private key pair configured by the application. Encrypted messages can only be decrypted by consumers with a valid key.

## Asymmetric and symmetric encryption

Pulsar uses a dynamically generated symmetric AES key to encrypt messages (data). The AES key (data key) is encrypted using an application-provided ECDSA/RSA key pair; as a result, there is no need to share the secret with everyone.

A key is a public/private key pair used for encryption/decryption. The producer key is the public key, and the consumer key is the private key of the key pair.

The application configures the producer with the public key. This key is used to encrypt the AES data key. The encrypted data key is sent as part of the message header.
Only entities with the private key (in this case the consumer) will be able to decrypt the data key, which is used to decrypt the message.

A message can be encrypted with more than one key. Any one of the keys used for encrypting the message is sufficient to decrypt the message.

Pulsar does not store the encryption key anywhere in the Pulsar service. If you lose or delete the private key, your messages are irretrievably lost and cannot be recovered.

## Producer
![alt text](/assets/pulsar-encryption-producer.jpg "Pulsar Encryption Producer")

## Consumer
![alt text](/assets/pulsar-encryption-consumer.jpg "Pulsar Encryption Consumer")

## Getting started

1. Create your ECDSA or RSA public/private key pair.

```shell

openssl ecparam -name secp521r1 -genkey -param_enc explicit -out test_ecdsa_privkey.pem
openssl ec -in test_ecdsa_privkey.pem -pubout -outform pkcs8 -out test_ecdsa_pubkey.pem

```

2. Add the public and private keys to your key management system, and configure your producer clients to retrieve public keys and consumer clients to retrieve private keys.
3. Implement the `CryptoKeyReader::getPublicKey()` interface on the producer side and the `CryptoKeyReader::getPrivateKey()` interface on the consumer side; the Pulsar client invokes these to load the keys.
4. Add the encryption key to the producer configuration: `conf.addEncryptionKey("myapp.key")`
5. Add the `CryptoKeyReader` implementation to the producer/consumer config: `conf.setCryptoKeyReader(keyReader)`
6. Sample producer application:

```java

class RawFileKeyReader implements CryptoKeyReader {

    String publicKeyFile = "";
    String privateKeyFile = "";

    RawFileKeyReader(String pubKeyFile, String privKeyFile) {
        publicKeyFile = pubKeyFile;
        privateKeyFile = privKeyFile;
    }

    @Override
    public EncryptionKeyInfo getPublicKey(String keyName, Map<String, String> keyMeta) {
        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
        try {
            keyInfo.setKey(Files.readAllBytes(Paths.get(publicKeyFile)));
        } catch (IOException e) {
            System.out.println("ERROR: Failed to read public key from file " + publicKeyFile);
            e.printStackTrace();
        }
        return keyInfo;
    }

    @Override
    public EncryptionKeyInfo getPrivateKey(String keyName, Map<String, String> keyMeta) {
        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
        try {
            keyInfo.setKey(Files.readAllBytes(Paths.get(privateKeyFile)));
        } catch (IOException e) {
            System.out.println("ERROR: Failed to read private key from file " + privateKeyFile);
            e.printStackTrace();
        }
        return keyInfo;
    }
}

PulsarClient pulsarClient = PulsarClient.create("http://localhost:8080");

ProducerConfiguration prodConf = new ProducerConfiguration();
prodConf.setCryptoKeyReader(new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem"));
prodConf.addEncryptionKey("myappkey");

Producer producer = pulsarClient.createProducer("persistent://my-tenant/my-ns/my-topic", prodConf);

for (int i = 0; i < 10; i++) {
    producer.send("my-message".getBytes());
}

pulsarClient.close();

```

7.
Sample consumer application:

```java

class RawFileKeyReader implements CryptoKeyReader {

    String publicKeyFile = "";
    String privateKeyFile = "";

    RawFileKeyReader(String pubKeyFile, String privKeyFile) {
        publicKeyFile = pubKeyFile;
        privateKeyFile = privKeyFile;
    }

    @Override
    public EncryptionKeyInfo getPublicKey(String keyName, Map<String, String> keyMeta) {
        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
        try {
            keyInfo.setKey(Files.readAllBytes(Paths.get(publicKeyFile)));
        } catch (IOException e) {
            System.out.println("ERROR: Failed to read public key from file " + publicKeyFile);
            e.printStackTrace();
        }
        return keyInfo;
    }

    @Override
    public EncryptionKeyInfo getPrivateKey(String keyName, Map<String, String> keyMeta) {
        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
        try {
            keyInfo.setKey(Files.readAllBytes(Paths.get(privateKeyFile)));
        } catch (IOException e) {
            System.out.println("ERROR: Failed to read private key from file " + privateKeyFile);
            e.printStackTrace();
        }
        return keyInfo;
    }
}

ConsumerConfiguration consConf = new ConsumerConfiguration();
consConf.setCryptoKeyReader(new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem"));
PulsarClient pulsarClient = PulsarClient.create("http://localhost:8080");
Consumer consumer = pulsarClient.subscribe("persistent://my-tenant/my-ns/my-topic", "my-subscriber-name", consConf);
Message msg = null;

for (int i = 0; i < 10; i++) {
    msg = consumer.receive();
    // do something
    System.out.println("Received: " + new String(msg.getData()));
}

// Acknowledge the consumption of all messages at once
consumer.acknowledgeCumulative(msg);
pulsarClient.close();

```

## Key rotation
Pulsar generates a new AES data key every 4 hours or after a certain number of messages are published. The producer automatically fetches the asymmetric public key every 4 hours by calling `CryptoKeyReader::getPublicKey()` to retrieve the latest version.

## Enabling encryption at the producer application
If you produce messages that are consumed across application boundaries, you need to ensure that consumers in other applications have access to one of the private keys that can decrypt the messages. This can be done in two ways:
1. The consumer application provides you access to its public key, which you add to your producer keys.
1. You grant access to one of the private keys from the pairs used by the producer.

In some cases, the producer may want to encrypt the messages with multiple keys. For this, add all such keys to the config. The consumer will be able to decrypt the message as long as it has access to at least one of the keys.

For example, if messages need to be encrypted using two keys, `myapp.messagekey1` and `myapp.messagekey2`:

```java

conf.addEncryptionKey("myapp.messagekey1");
conf.addEncryptionKey("myapp.messagekey2");

```

## Decrypting encrypted messages at the consumer application
Consumers require access to one of the private keys to decrypt messages produced by the producer. If you would like to receive encrypted messages, create a public/private key pair and give your public key to the producer application, which uses it to encrypt messages.

## Handling failures
* Producer/consumer loses access to the key
  * The produce action will fail, indicating the cause of the failure. The application has the option to proceed with sending unencrypted messages in such cases. Call `conf.setCryptoFailureAction(ProducerCryptoFailureAction)` to control the producer behavior.
The default behavior is to fail the request.
  * If consumption fails due to a decryption failure or missing keys on the consumer, the application has the option to consume the encrypted message or discard it. Call `conf.setCryptoFailureAction(ConsumerCryptoFailureAction)` to control the consumer behavior. The default behavior is to fail the request. The application will never be able to decrypt the messages if the private key is permanently lost.
* Batch messaging
  * If decryption fails and the message contains batch messages, the client will not be able to retrieve individual messages in the batch, so message consumption fails even if `conf.setCryptoFailureAction()` is set to `CONSUME`.
* If decryption fails, message consumption stops and the application will notice backlog growth in addition to decryption failure messages in the client log. If the application does not have access to the private key to decrypt the message, the only option is to skip or discard backlogged messages.

diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/cookbooks-message-queue.md b/site2/website/versioned_docs/version-2.9.2-deprecated/cookbooks-message-queue.md
deleted file mode 100644
index eb43cbde5fb818..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/cookbooks-message-queue.md
+++ /dev/null
@@ -1,127 +0,0 @@
---
id: cookbooks-message-queue
title: Using Pulsar as a message queue
sidebar_label: "Message queue"
original_id: cookbooks-message-queue
---

Message queues are essential components of many large-scale data architectures. If every single work object that passes through your system absolutely *must* be processed in spite of the slowness or downright failure of this or that system component, there's a good chance that you'll need a message queue to step in and ensure that unprocessed data is retained---with correct ordering---until the required actions are taken.

Pulsar is a great choice for a message queue because:

* it was built with [persistent message storage](concepts-architecture-overview.md#persistent-storage) in mind
* it offers automatic load balancing across [consumers](reference-terminology.md#consumer) for messages on a topic (or custom load balancing if you wish)

> You can use the same Pulsar installation to act as a real-time message bus and as a message queue if you wish (or just one or the other). You can set aside some topics for real-time purposes and other topics for message queue purposes (or use specific namespaces for either purpose if you wish).


# Client configuration changes

To use a Pulsar [topic](reference-terminology.md#topic) as a message queue, you should distribute the receiver load on that topic across several consumers (the optimal number of consumers will depend on the load). Each consumer must:

* Establish a [shared subscription](concepts-messaging.md#shared) and use the same subscription name as the other consumers (otherwise the subscription is not shared and the consumers can't act as a processing ensemble)
* If you'd like to have tight control over message dispatching across consumers, set the consumers' **receiver queue** size very low (potentially even to 0 if necessary). Each Pulsar [consumer](reference-terminology.md#consumer) has a receiver queue that determines how many messages the consumer will attempt to fetch at a time. A receiver queue of 1000 (the default), for example, means that the consumer will attempt to process 1000 messages from the topic's backlog upon connection.
Setting the receiver queue to zero essentially means ensuring that each consumer is only doing one thing at a time.

  The downside to restricting the receiver queue size of consumers is that it limits the potential throughput of those consumers and cannot be used with [partitioned topics](reference-terminology.md#partitioned-topic). Whether the performance/control trade-off is worthwhile will depend on your use case.

## Java clients

Here's an example Java consumer configuration that uses a shared subscription:

```java

import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.SubscriptionType;

String SERVICE_URL = "pulsar://localhost:6650";
String TOPIC = "persistent://public/default/mq-topic-1";
String subscription = "sub-1";

PulsarClient client = PulsarClient.builder()
    .serviceUrl(SERVICE_URL)
    .build();

Consumer consumer = client.newConsumer()
    .topic(TOPIC)
    .subscriptionName(subscription)
    .subscriptionType(SubscriptionType.Shared)
    // If you'd like to restrict the receiver queue size
    .receiverQueueSize(10)
    .subscribe();

```

## Python clients

Here's an example Python consumer configuration that uses a shared subscription:

```python

from pulsar import Client, ConsumerType

SERVICE_URL = "pulsar://localhost:6650"
TOPIC = "persistent://public/default/mq-topic-1"
SUBSCRIPTION = "sub-1"

client = Client(SERVICE_URL)
consumer = client.subscribe(
    TOPIC,
    SUBSCRIPTION,
    # If you'd like to restrict the receiver queue size
    receiver_queue_size=10,
    consumer_type=ConsumerType.Shared)

```

## C++ clients

Here's an example C++ consumer configuration that uses a shared subscription:

```cpp

#include <pulsar/Client.h>

std::string serviceUrl = "pulsar://localhost:6650";
std::string topic = "persistent://public/default/mq-topic-1";
std::string subscription = "sub-1";

Client client(serviceUrl);

ConsumerConfiguration consumerConfig;
consumerConfig.setConsumerType(ConsumerShared);
// If you'd like to restrict the receiver queue size
consumerConfig.setReceiverQueueSize(10);

Consumer consumer;

Result result = client.subscribe(topic, subscription, consumerConfig, consumer);

```

## Go clients

Here is an example of a Go consumer configuration that uses a shared subscription:

```go

import (
	"log"

	"github.com/apache/pulsar-client-go/pulsar"
)

client, err := pulsar.NewClient(pulsar.ClientOptions{
	URL: "pulsar://localhost:6650",
})
if err != nil {
	log.Fatal(err)
}
consumer, err := client.Subscribe(pulsar.ConsumerOptions{
	Topic:             "persistent://public/default/mq-topic-1",
	SubscriptionName:  "sub-1",
	Type:              pulsar.Shared,
	ReceiverQueueSize: 10, // If you'd like to restrict the receiver queue size
})
if err != nil {
	log.Fatal(err)
}

```
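In queue-style workloads, a consumer that fails to process a message usually wants the broker to redeliver it to some consumer in the group rather than acknowledging it. The sketch below shows one way to do this with the Java client's negative acknowledgement API (`negativeAcknowledge` and `negativeAckRedeliveryDelay`); the `process` method and the redelivery delay are placeholders for your own logic and tuning:

```java

import java.util.concurrent.TimeUnit;

import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.SubscriptionType;

PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar://localhost:6650")
    .build();

Consumer<byte[]> consumer = client.newConsumer()
    .topic("persistent://public/default/mq-topic-1")
    .subscriptionName("sub-1")
    .subscriptionType(SubscriptionType.Shared)
    // Redeliver negatively acknowledged messages after a delay (illustrative value)
    .negativeAckRedeliveryDelay(1, TimeUnit.MINUTES)
    .subscribe();

while (true) {
    Message<byte[]> msg = consumer.receive();
    try {
        process(msg); // placeholder for your processing logic
        consumer.acknowledge(msg);
    } catch (Exception e) {
        // Return the message for redelivery to one of the consumers on the subscription
        consumer.negativeAcknowledge(msg);
    }
}

```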
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/cookbooks-non-persistent.md b/site2/website/versioned_docs/version-2.9.2-deprecated/cookbooks-non-persistent.md
deleted file mode 100644
index 178301e86eb8df..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/cookbooks-non-persistent.md
+++ /dev/null
@@ -1,63 +0,0 @@
---
id: cookbooks-non-persistent
title: Non-persistent messaging
sidebar_label: "Non-persistent messaging"
original_id: cookbooks-non-persistent
---

**Non-persistent topics** are Pulsar topics in which message data is *never* [persistently stored](concepts-architecture-overview.md#persistent-storage) and kept only in memory. This cookbook provides:

* A basic [conceptual overview](#overview) of non-persistent topics
* Information about [configurable parameters](#configuration) related to non-persistent topics
* A guide to the [CLI interface](#cli) for managing non-persistent topics

## Overview

By default, Pulsar persistently stores *all* unacknowledged messages on multiple [BookKeeper](#persistent-storage) bookies (storage nodes). Data for messages on persistent topics can thus survive broker restarts and subscriber failover.

Pulsar also, however, supports **non-persistent topics**, which are topics on which messages are *never* persisted to disk and live only in memory. When using non-persistent delivery, killing a Pulsar [broker](reference-terminology.md#broker) or disconnecting a subscriber to a topic means that all in-transit messages are lost on that (non-persistent) topic, so clients may see message loss.

Non-persistent topics have names of this form (note the `non-persistent` in the name):

```http

non-persistent://tenant/namespace/topic

```

> For more high-level information about non-persistent topics, see the [Concepts and Architecture](concepts-messaging.md#non-persistent-topics) documentation.

## Using

> In order to use non-persistent topics, they must be [enabled](#enabling) in your Pulsar broker configuration.

In order to use non-persistent topics, you only need to differentiate them by name when interacting with them. This [`pulsar-client produce`](reference-cli-tools.md#pulsar-client-produce) command, for example, would produce one message on a non-persistent topic in a standalone cluster:

```bash

$ bin/pulsar-client produce non-persistent://public/default/example-np-topic \
  --num-produce 1 \
  --messages "This message will be stored only in memory"

```

> For a more thorough guide to non-persistent topics from an administrative perspective, see the [Non-persistent topics](admin-api-topics.md) guide.

## Enabling

In order to enable non-persistent topics in a Pulsar broker, the [`enableNonPersistentTopics`](reference-configuration.md#broker-enableNonPersistentTopics) parameter must be set to `true`. This is the default, so you won't need to take any action to enable non-persistent messaging.


> #### Configuration for standalone mode
> If you're running Pulsar in standalone mode, the same configurable parameters are available but in the [`standalone.conf`](reference-configuration.md#standalone) configuration file.

If you'd like to enable *only* non-persistent topics in a broker, you can set the [`enablePersistentTopics`](reference-configuration.md#broker-enablePersistentTopics) parameter to `false` and the `enableNonPersistentTopics` parameter to `true`.

## Managing with the CLI

Non-persistent topics can be managed using the [`pulsar-admin non-persistent`](reference-pulsar-admin.md#non-persistent) command-line interface. With that interface you can perform actions like [creating a partitioned non-persistent topic](reference-pulsar-admin.md#non-persistent-create-partitioned-topic), getting [stats](reference-pulsar-admin.md#non-persistent-stats) for a non-persistent topic, [listing](reference-pulsar-admin.md) non-persistent topics under a namespace, and more.

## Using with Pulsar clients

You shouldn't need to make any changes to your Pulsar clients to use non-persistent messaging beyond making sure that you use proper [topic names](#using) with `non-persistent` as the topic type.
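To make that point concrete, here is a minimal Java sketch that produces one message to the same example topic used above; the topic name and payload are purely illustrative, and the only non-persistent-specific detail is the topic name prefix:

```java

import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;

PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar://localhost:6650")
    .build();

// Identical to a persistent-topic producer, except for the "non-persistent://" prefix
Producer<byte[]> producer = client.newProducer()
    .topic("non-persistent://public/default/example-np-topic")
    .create();

producer.send("This message will be stored only in memory".getBytes());

producer.close();
client.close();

```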

diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/cookbooks-partitioned.md b/site2/website/versioned_docs/version-2.9.2-deprecated/cookbooks-partitioned.md
deleted file mode 100644
index fb9ac354cc6d60..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/cookbooks-partitioned.md
+++ /dev/null
@@ -1,7 +0,0 @@
---
id: cookbooks-partitioned
title: Partitioned topics
sidebar_label: "Partitioned Topics"
original_id: cookbooks-partitioned
---
For details of the content, refer to [manage topics](admin-api-topics.md).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/cookbooks-retention-expiry.md b/site2/website/versioned_docs/version-2.9.2-deprecated/cookbooks-retention-expiry.md
deleted file mode 100644
index c8c46b3caa1bee..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/cookbooks-retention-expiry.md
+++ /dev/null
@@ -1,498 +0,0 @@
---
id: cookbooks-retention-expiry
title: Message retention and expiry
sidebar_label: "Message retention and expiry"
original_id: cookbooks-retention-expiry
---

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````


Pulsar brokers are responsible for handling messages that pass through Pulsar, including [persistent storage](concepts-architecture-overview.md#persistent-storage) of messages. By default, for each topic, brokers only retain messages that are in at least one backlog. A backlog is the set of unacknowledged messages for a particular subscription. As a topic can have multiple subscriptions, a topic can have multiple backlogs.

As a consequence, no messages are retained (by default) on a topic that has not had any subscriptions created for it.

(Note that messages that are no longer being stored are not necessarily immediately deleted, and may in fact still be accessible until the next ledger rollover. Because clients cannot predict when rollovers may happen, it is not wise to rely on a rollover not happening at an inconvenient point in time.)

In Pulsar, you can modify this behavior, with namespace granularity, in two ways:

* You can persistently store messages that are not within a backlog (because they've been acknowledged on every existing subscription, or because there are no subscriptions) by setting [retention policies](#retention-policies).
* Messages that are not acknowledged within a specified timeframe can be automatically acknowledged by specifying the [time to live](#time-to-live-ttl) (TTL).

Pulsar's [admin interface](admin-api-overview.md) enables you to manage both retention policies and TTL with namespace granularity (and thus within a specific tenant and either on a specific cluster or in the [`global`](concepts-architecture-overview.md#global-cluster) cluster).


> #### Retention and TTL solve two different problems
> * Message retention: Keep the data for at least X hours (even if acknowledged)
> * Time-to-live: Discard data after some time (by automatically acknowledging)
>
> Most applications will want to use at most one of these.


## Retention policies

By default, when a Pulsar message arrives at a broker, the message is stored until it has been acknowledged on all subscriptions, at which point it is marked for deletion. You can override this behavior and retain messages that have already been acknowledged on all subscriptions by setting a *retention policy* for all topics in a given namespace.
Retention is based on both a *size limit* and a *time limit*.

Retention policies are useful when you use the Reader interface. The Reader interface does not use acknowledgements, and messages do not exist within backlogs. You must configure retention for Reader-only use cases.

When you set a retention policy on topics in a namespace, you must set **both** a *size limit* and a *time limit*. You can refer to the following table to set retention policies in `pulsar-admin` and Java.

|Time limit|Size limit| Message retention      |
|----------|----------|------------------------|
| -1       | -1       | Infinite retention  |
| -1       | >0       | Based on the size limit  |
| >0       | -1       | Based on the time limit  |
| 0        | 0        | Disable message retention (by default) |
| 0        | >0       | Invalid  |
| >0       | 0        | Invalid  |
| >0       | >0       | Acknowledged messages or messages with no active subscription will not be retained when either time or size reaches the limit. |

The retention settings apply to all messages on topics that do not have any subscriptions, or to messages that have been acknowledged by all subscriptions. The retention policy settings do not affect unacknowledged messages on topics with subscriptions. The unacknowledged messages are controlled by the backlog quota.

When a retention limit on a topic is exceeded, the oldest message is marked for deletion until the set of retained messages falls within the specified limits again.

### Defaults

You can set message retention at the instance level with the following two parameters: `defaultRetentionTimeInMinutes` and `defaultRetentionSizeInMB`. Both parameters are set to `0` by default.

For more information about the two parameters, refer to the [`broker.conf`](reference-configuration.md#broker) configuration file.

### Set retention policy

You can set a retention policy for a namespace by specifying the namespace, a size limit and a time limit in `pulsar-admin`, REST API and Java.

````mdx-code-block


You can use the [`set-retention`](reference-pulsar-admin.md#namespaces-set-retention) subcommand and specify a namespace, a size limit using the `-s`/`--size` flag, and a time limit using the `-t`/`--time` flag.

In the following example, the size limit is set to 10 GB and the time limit is set to 3 hours for each topic within the `my-tenant/my-ns` namespace.
- When the size of messages reaches 10 GB on a topic within 3 hours, the acknowledged messages will not be retained.
- After 3 hours, even if the message size is less than 10 GB, the acknowledged messages will not be retained.

```shell

$ pulsar-admin namespaces set-retention my-tenant/my-ns \
  --size 10G \
  --time 3h

```

In the following example, the time is not limited and the size limit is set to 1 TB. The size limit determines the retention.

```shell

$ pulsar-admin namespaces set-retention my-tenant/my-ns \
  --size 1T \
  --time -1

```

In the following example, the size is not limited and the time limit is set to 3 hours. The time limit determines the retention.

```shell

$ pulsar-admin namespaces set-retention my-tenant/my-ns \
  --size -1 \
  --time 3h

```

To achieve infinite retention, set both values to `-1`.

```shell

$ pulsar-admin namespaces set-retention my-tenant/my-ns \
  --size -1 \
  --time -1

```

To disable the retention policy, set both values to `0`.
```shell

$ pulsar-admin namespaces set-retention my-tenant/my-ns \
  --size 0 \
  --time 0

```


{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/retention|operation/setRetention?version=@pulsar:version_number@}

:::note

To disable the retention policy, you need to set both the size and time limits to `0`. Setting only one of the size or time limits to `0` is invalid.

:::


```java

int retentionTime = 10; // 10 minutes
int retentionSize = 500; // 500 megabytes
RetentionPolicies policies = new RetentionPolicies(retentionTime, retentionSize);
admin.namespaces().setRetention(namespace, policies);

```


````

### Get retention policy

You can fetch the retention policy for a namespace by specifying the namespace. The output will be a JSON object with two keys: `retentionTimeInMinutes` and `retentionSizeInMB`.

````mdx-code-block


Use the [`get-retention`](reference-pulsar-admin.md#namespaces) subcommand and specify the namespace.

##### Example

```shell

$ pulsar-admin namespaces get-retention my-tenant/my-ns
{
  "retentionTimeInMinutes": 10,
  "retentionSizeInMB": 500
}

```


{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/retention|operation/getRetention?version=@pulsar:version_number@}


```java

admin.namespaces().getRetention(namespace);

```


````

## Backlog quotas

*Backlogs* are sets of unacknowledged messages for a topic that have been stored by bookies. Pulsar stores all unacknowledged messages in backlogs until they are processed and acknowledged.

You can control the allowable size of backlogs, at the namespace level, using *backlog quotas*. Setting a backlog quota involves setting:

* an allowable *size threshold* for each topic in the namespace
* a *retention policy* that determines which action the [broker](reference-terminology.md#broker) takes if the threshold is exceeded.

The following retention policies are available:

Policy | Action
:------|:------
`producer_request_hold` | The broker will hold and not persist produce request payloads
`producer_exception` | The broker will disconnect from the client by throwing an exception
`consumer_backlog_eviction` | The broker will begin discarding backlog messages


> #### Beware the distinction between retention policy types
> As you may have noticed, there are two definitions of the term "retention policy" in Pulsar, one that applies to persistent storage of messages not in backlogs, and one that applies to messages within backlogs.


Backlog quotas are handled at the namespace level. They can be managed using the following operations.

### Set size/time thresholds and backlog retention policies

You can set a size and/or time threshold and backlog retention policy for all of the topics in a [namespace](reference-terminology.md#namespace) by specifying the namespace, a size limit and/or a time limit in seconds, and a policy by name.

````mdx-code-block


Use the [`set-backlog-quota`](reference-pulsar-admin.md#namespaces) subcommand and specify a namespace, a size limit using the `-l`/`--limit` flag or a time limit using the `-lt`/`--limitTime` flag, a retention policy using the `-p`/`--policy` flag, and a policy type using `-t`/`--type` (the default is `destination_storage`).
##### Example

```shell

$ pulsar-admin namespaces set-backlog-quota my-tenant/my-ns \
  --limit 2G \
  --policy producer_request_hold

```

```shell

$ pulsar-admin namespaces set-backlog-quota my-tenant/my-ns \
  --limitTime 3600 \
  --policy producer_request_hold \
  --type message_age

```


{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/getBacklogQuotaMap?version=@pulsar:version_number@}


```java

long sizeLimit = 2147483648L;
BacklogQuota.RetentionPolicy policy = BacklogQuota.RetentionPolicy.producer_request_hold;
BacklogQuota quota = new BacklogQuota(sizeLimit, policy);
admin.namespaces().setBacklogQuota(namespace, quota);

```


````

### Get backlog threshold and backlog retention policy

You can see which size threshold and backlog retention policy have been applied to a namespace.

````mdx-code-block


Use the [`get-backlog-quotas`](reference-pulsar-admin.md#pulsar-admin-namespaces-get-backlog-quotas) subcommand and specify a namespace. Here's an example:

```shell

$ pulsar-admin namespaces get-backlog-quotas my-tenant/my-ns
{
  "destination_storage": {
    "limit" : 2147483648,
    "policy" : "producer_request_hold"
  }
}

```


{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/backlogQuotaMap|operation/getBacklogQuotaMap?version=@pulsar:version_number@}


```java

Map<BacklogQuotaType, BacklogQuota> quotas =
  admin.namespaces().getBacklogQuotas(namespace);

```


````

### Remove backlog quotas

````mdx-code-block


Use the [`remove-backlog-quota`](reference-pulsar-admin.md#pulsar-admin-namespaces-remove-backlog-quota) subcommand and specify a namespace, and use `-t`/`--type` to specify the backlog type to remove (the default is `destination_storage`). Here's an example:

```shell

$ pulsar-admin namespaces remove-backlog-quota my-tenant/my-ns

```


{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/removeBacklogQuota?version=@pulsar:version_number@}


```java

admin.namespaces().removeBacklogQuota(namespace);

```


````

### Clear backlog

#### pulsar-admin

Use the [`clear-backlog`](reference-pulsar-admin.md#pulsar-admin-namespaces-clear-backlog) subcommand.

##### Example

```shell

$ pulsar-admin namespaces clear-backlog my-tenant/my-ns

```

By default, you will be prompted to ensure that you really want to clear the backlog for the namespace. You can override the prompt using the `-f`/`--force` flag.

## Time to live (TTL)

By default, Pulsar stores all unacknowledged messages forever. This can lead to heavy disk space usage in cases where a lot of messages are going unacknowledged. If disk space is a concern, you can set a time to live (TTL) that determines how long unacknowledged messages will be retained.

### Set the TTL for a namespace

````mdx-code-block


Use the [`set-message-ttl`](reference-pulsar-admin.md#pulsar-admin-namespaces-set-message-ttl) subcommand and specify a namespace and a TTL (in seconds) using the `-ttl`/`--messageTTL` flag.
##### Example

```shell

$ pulsar-admin namespaces set-message-ttl my-tenant/my-ns \
  --messageTTL 120 # TTL of 2 minutes

```


{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/setNamespaceMessageTTL?version=@pulsar:version_number@}


```java

admin.namespaces().setNamespaceMessageTTL(namespace, ttlInSeconds);

```


````

### Get the TTL configuration for a namespace

````mdx-code-block


Use the [`get-message-ttl`](reference-pulsar-admin.md#pulsar-admin-namespaces-get-message-ttl) subcommand and specify a namespace.

##### Example

```shell

$ pulsar-admin namespaces get-message-ttl my-tenant/my-ns
60

```


{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/getNamespaceMessageTTL?version=@pulsar:version_number@}


```java

admin.namespaces().getNamespaceMessageTTL(namespace)

```


````

### Remove the TTL configuration for a namespace

````mdx-code-block


Use the [`remove-message-ttl`](reference-pulsar-admin.md#pulsar-admin-namespaces-remove-message-ttl) subcommand and specify a namespace.

##### Example

```shell

$ pulsar-admin namespaces remove-message-ttl my-tenant/my-ns

```


{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/removeNamespaceMessageTTL?version=@pulsar:version_number@}


```java

admin.namespaces().removeNamespaceMessageTTL(namespace)

```


````

## Delete messages from namespaces

If you do not have any retention period and never have much of a backlog, the upper limit for retaining acknowledged messages equals the Pulsar segment rollover period + the entry log rollover period + (garbage collection interval * garbage collection ratio).

- **Segment rollover period**: basically, the segment rollover period is how often a new segment is created. Once a new segment is created, the old segment can be deleted. By default, this happens either when you have written 50,000 entries (messages) or have waited 240 minutes. You can tune this in your broker.

- **Entry log rollover period**: multiple ledgers in BookKeeper are interleaved into an [entry log](https://bookkeeper.apache.org/docs/4.11.1/getting-started/concepts/#entry-logs). For the data of a deleted ledger to be removed, the entire entry log must first be rolled over.
The entry log rollover period is configurable, but is purely based on the entry log size. For details, see [here](https://bookkeeper.apache.org/docs/4.11.1/reference/config/#entry-log-settings). Once the entry log is rolled over, the entry log can be garbage collected.

- **Garbage collection interval**: because entry logs have interleaved ledgers, to free up space, the entry logs need to be rewritten. The garbage collection interval is how often BookKeeper performs garbage collection, which is related to minor compaction and major compaction of entry logs. For details, see [here](https://bookkeeper.apache.org/docs/4.11.1/reference/config/#entry-log-compaction-settings).
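As a rough worked example of that upper limit: taking the default 240-minute segment rollover mentioned above, and assuming (purely for illustration) an entry log rollover of 60 minutes and a 15-minute garbage collection interval with a ratio of 2, an acknowledged message could remain on disk for up to 240 + 60 + (15 * 2) = 330 minutes. The actual values depend entirely on your BookKeeper configuration and write volume.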
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/cookbooks-tiered-storage.md b/site2/website/versioned_docs/version-2.9.2-deprecated/cookbooks-tiered-storage.md
deleted file mode 100644
index 5afeaa388d6407..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/cookbooks-tiered-storage.md
+++ /dev/null
@@ -1,346 +0,0 @@
---
id: cookbooks-tiered-storage
title: Tiered Storage
sidebar_label: "Tiered Storage"
original_id: cookbooks-tiered-storage
---

Pulsar's **Tiered Storage** feature allows older backlog data to be offloaded to long term storage, thereby freeing up space in BookKeeper and reducing storage costs. This cookbook walks you through using tiered storage in your Pulsar cluster.

* Tiered storage uses [Apache jclouds](https://jclouds.apache.org) to support [Amazon S3](https://aws.amazon.com/s3/) and [Google Cloud Storage](https://cloud.google.com/storage/) (GCS for short)
for long term storage. With jclouds, it is easy to add support for more [cloud storage providers](https://jclouds.apache.org/reference/providers/#blobstore-providers) in the future.

* Tiered storage uses [Apache Hadoop](http://hadoop.apache.org/) to support filesystems for long term storage.
With Hadoop, it is easy to add support for more filesystems in the future.

## When should I use Tiered Storage?

Tiered storage should be used when you have a topic for which you want to keep a very long backlog for a long time. For example, if you have a topic containing user actions which you use to train your recommendation systems, you may want to keep that data for a long time, so that if you change your recommendation algorithm you can rerun it against your full user history.

## The offloading mechanism

A topic in Pulsar is backed by a log, known as a managed ledger. This log is composed of an ordered list of segments. Pulsar only ever writes to the final segment of the log. All previous segments are sealed. The data within a segment is immutable. This is known as a segment oriented architecture.

![Tiered storage](/assets/pulsar-tiered-storage.png "Tiered Storage")

The Tiered Storage offloading mechanism takes advantage of this segment oriented architecture. When offloading is requested, the segments of the log are copied, one-by-one, to tiered storage. All segments of the log, apart from the segment currently being written to, can be offloaded.

On the broker, the administrator must configure the bucket and credentials for the cloud storage service.
The configured bucket must exist before attempting to offload. If it does not exist, the offload operation will fail.

Pulsar uses multi-part objects to upload the segment data, and it is possible that a broker could crash while uploading the data.
We recommend you add a lifecycle rule to your bucket to expire incomplete multipart uploads after a day or two to avoid
being charged for incomplete uploads.

When ledgers are offloaded to long term storage, you can still query data in the offloaded ledgers with Pulsar SQL.
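You can observe this segment structure directly. The `pulsar-admin topics stats-internal` command prints the managed ledger's list of ledgers (that is, its segments) for a topic; once offloading has run, the entries for offloaded segments are flagged in that output (the exact field names vary by version, and the topic name below is just an example):

```bash

$ bin/pulsar-admin topics stats-internal persistent://my-tenant/my-namespace/topic1

```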
## Configuring the offload driver

Offloading is configured in ```broker.conf```.

At a minimum, the administrator must configure the driver, the bucket and the authenticating credentials.
There are also other knobs to configure, such as the bucket region and the max block size in backed storage.

Currently, the following driver types are supported:

- `aws-s3`: [Simple Cloud Storage Service](https://aws.amazon.com/s3/)
- `google-cloud-storage`: [Google Cloud Storage](https://cloud.google.com/storage/)
- `filesystem`: [Filesystem Storage](http://hadoop.apache.org/)

> Driver names are case-insensitive. There is an additional driver type, `s3`, which is identical to `aws-s3`,
> though it requires that you specify an endpoint url using `s3ManagedLedgerOffloadServiceEndpoint`. This is useful if
> using an S3 compatible data store other than AWS.

```conf

managedLedgerOffloadDriver=aws-s3

```

### "aws-s3" Driver configuration

#### Bucket and Region

Buckets are the basic containers that hold your data.
Everything that you store in Cloud Storage must be contained in a bucket.
You can use buckets to organize your data and control access to your data,
but unlike directories and folders, you cannot nest buckets.

```conf

s3ManagedLedgerOffloadBucket=pulsar-topic-offload

```

The bucket region is the region where the bucket is located. The bucket region is not a required
but a recommended configuration. If it is not configured, the default region is used.

With AWS S3, the default region is `US East (N. Virginia)`. The page [AWS Regions and Endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html) contains more information.

```conf

s3ManagedLedgerOffloadRegion=eu-west-3

```

#### Authentication with AWS

To be able to access AWS S3, you need to authenticate with AWS S3.
Pulsar does not provide any direct means of configuring authentication for AWS S3,
but relies on the mechanisms supported by the [DefaultAWSCredentialsProviderChain](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html).

Once you have created a set of credentials in the AWS IAM console, they can be configured in a number of ways.

1. Using EC2 instance metadata credentials

If you are on an AWS instance with an instance profile that provides credentials, Pulsar will use these credentials
if no other mechanism is provided.

2. Set the environment variables **AWS_ACCESS_KEY_ID** and **AWS_SECRET_ACCESS_KEY** in ```conf/pulsar_env.sh```.

```bash

export AWS_ACCESS_KEY_ID=ABC123456789
export AWS_SECRET_ACCESS_KEY=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c

```

> \"export\" is important so that the variables are made available in the environment of spawned processes.


3. Add the Java system properties *aws.accessKeyId* and *aws.secretKey* to **PULSAR_EXTRA_OPTS** in `conf/pulsar_env.sh`.

```bash

PULSAR_EXTRA_OPTS="${PULSAR_EXTRA_OPTS} ${PULSAR_MEM} ${PULSAR_GC} -Daws.accessKeyId=ABC123456789 -Daws.secretKey=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c -Dio.netty.leakDetectionLevel=disabled -Dio.netty.recycler.maxCapacity.default=1000 -Dio.netty.recycler.linkCapacity=1024"

```

4. Set the access credentials in ```~/.aws/credentials```.

```conf

[default]
aws_access_key_id=ABC123456789
aws_secret_access_key=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c

```

5. Assuming an IAM role

If you want to assume an IAM role, this can be done by specifying the following:

```conf

s3ManagedLedgerOffloadRole=<aws role arn>
s3ManagedLedgerOffloadRoleSessionName=pulsar-s3-offload

```

This will use the `DefaultAWSCredentialsProviderChain` for assuming this role.

> The broker must be rebooted for credentials specified in `pulsar_env.sh` to take effect.
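As noted earlier, it is also worth expiring incomplete multipart uploads so that interrupted offload attempts do not accrue storage charges. One way to do this is with a bucket lifecycle rule; the following is a sketch using the standard AWS CLI, where the bucket name and the one-day window are assumptions:

```bash

$ aws s3api put-bucket-lifecycle-configuration \
  --bucket pulsar-topic-offload \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "abort-incomplete-multipart-uploads",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 1 }
    }]
  }'

```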
#### Configuring the size of block read/write

Pulsar also provides some knobs to configure the size of requests sent to AWS S3.

- ```s3ManagedLedgerOffloadMaxBlockSizeInBytes``` configures the maximum size of
  a "part" sent during a multipart upload. This cannot be smaller than 5MB. Default is 64MB.
- ```s3ManagedLedgerOffloadReadBufferSizeInBytes``` configures the block size for
  each individual read when reading back data from AWS S3. Default is 1MB.

In both cases, these should not be touched unless you know what you are doing.

### "google-cloud-storage" Driver configuration

Buckets are the basic containers that hold your data. Everything that you store in
Cloud Storage must be contained in a bucket. You can use buckets to organize your data and
control access to your data, but unlike directories and folders, you cannot nest buckets.

```conf

gcsManagedLedgerOffloadBucket=pulsar-topic-offload

```

The bucket region is the region where the bucket is located. The bucket region is not a required but
a recommended configuration. If it is not configured, the default region is used.

For GCS, buckets are created in the `us` multi-regional location by default. The [Bucket Locations](https://cloud.google.com/storage/docs/bucket-locations) page contains more information.

```conf

gcsManagedLedgerOffloadRegion=europe-west3

```

#### Authentication with GCS

The administrator needs to configure `gcsManagedLedgerOffloadServiceAccountKeyFile` in `broker.conf`
for the broker to be able to access the GCS service. `gcsManagedLedgerOffloadServiceAccountKeyFile` is
a JSON file containing the GCS credentials of a service account.
The [Service Accounts section of this page](https://support.google.com/googleapi/answer/6158849) contains
more information about how to create this key file for authentication. More information about Google Cloud IAM
is available [here](https://cloud.google.com/storage/docs/access-control/iam).

To generate service account credentials or view the public credentials that you've already generated, follow these steps:

1. Open the [Service accounts page](https://console.developers.google.com/iam-admin/serviceaccounts).
2. Select a project or create a new one.
3. Click **Create service account**.
4. In the **Create service account** window, type a name for the service account, and select **Furnish a new private key**. If you want to [grant G Suite domain-wide authority](https://developers.google.com/identity/protocols/OAuth2ServiceAccount#delegatingauthority) to the service account, also select **Enable G Suite Domain-wide Delegation**.
5. Click **Create**.

> Note: Make sure that the service account you create has permission to operate GCS. You need to assign the **Storage Admin** permission to your service account as described [here](https://cloud.google.com/storage/docs/access-control/iam).

```conf

gcsManagedLedgerOffloadServiceAccountKeyFile="/Users/hello/Downloads/project-804d5e6a6f33.json"

```
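If you prefer to manage service accounts from the command line rather than the console, the same result can be achieved with the `gcloud` CLI. This is a sketch only; the project ID, service account name, and key file path below are all assumptions:

```bash

# Grant the Storage Admin role to the service account (assumed names)
$ gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:pulsar-offload@my-project.iam.gserviceaccount.com" \
  --role="roles/storage.admin"

# Create a JSON key file for the service account
$ gcloud iam service-accounts keys create /path/to/offload-key.json \
  --iam-account=pulsar-offload@my-project.iam.gserviceaccount.com

```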
#### Configuring the size of block read/write

Pulsar also provides some knobs to configure the size of requests sent to GCS.

- ```gcsManagedLedgerOffloadMaxBlockSizeInBytes``` configures the maximum size of a "part" sent
  during a multipart upload. This cannot be smaller than 5MB. Default is 64MB.
- ```gcsManagedLedgerOffloadReadBufferSizeInBytes``` configures the block size for each individual
  read when reading back data from GCS. Default is 1MB.

In both cases, these should not be touched unless you know what you are doing.

### "filesystem" Driver configuration


#### Configure connection address

You can configure the connection address in the `broker.conf` file.

```conf

fileSystemURI="hdfs://127.0.0.1:9000"

```

#### Configure Hadoop profile path

The configuration file is stored in the Hadoop profile path. It contains various settings, such as base path, authentication, and so on.

```conf

fileSystemProfilePath="../conf/filesystem_offload_core_site.xml"

```

The model for storing topic data uses `org.apache.hadoop.io.MapFile`. You can use all of the configurations in `org.apache.hadoop.io.MapFile` for Hadoop.

**Example**

```conf

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value></value>
    </property>

    <property>
        <name>hadoop.tmp.dir</name>
        <value>pulsar</value>
    </property>

    <property>
        <name>io.file.buffer.size</name>
        <value>4096</value>
    </property>

    <property>
        <name>io.seqfile.compress.blocksize</name>
        <value>1000000</value>
    </property>

    <property>
        <name>io.seqfile.compression.type</name>
        <value>BLOCK</value>
    </property>

    <property>
        <name>io.map.index.interval</name>
        <value>128</value>
    </property>
</configuration>

```

For more information about the configurations in `org.apache.hadoop.io.MapFile`, see [Filesystem Storage](http://hadoop.apache.org/).
## Configuring offload to run automatically

Namespace policies can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that the topic has stored on the Pulsar cluster. Once the topic reaches the threshold, an offload operation will be triggered. Setting a negative value for the threshold will disable automatic offloading. Setting the threshold to 0 will cause the broker to offload data as soon as it possibly can.

```bash

$ bin/pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace

```

> Automatic offload runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, offload will not be triggered until the current segment is full.

## Configuring read priority for offloaded messages

By default, once messages are offloaded to long term storage, brokers will read them from long term storage, although the messages still exist in BookKeeper for a period that depends on the administrator's configuration. For messages that exist in both BookKeeper and long term storage, if you prefer to read them from BookKeeper, you can use the following commands to change this configuration.

```bash

# default value for -orp is tiered-storage-first
$ bin/pulsar-admin namespaces set-offload-policies my-tenant/my-namespace -orp bookkeeper-first
$ bin/pulsar-admin topics set-offload-policies my-tenant/my-namespace/topic1 -orp bookkeeper-first

```

## Triggering offload manually

Offloading can be manually triggered through a REST endpoint on the Pulsar broker. We provide a CLI which will call this REST endpoint for you.

When triggering offload, you must specify the maximum size, in bytes, of backlog which will be retained locally in BookKeeper. The offload mechanism will offload segments from the start of the topic backlog until this condition is met.

```bash

$ bin/pulsar-admin topics offload --size-threshold 10M my-tenant/my-namespace/topic1
Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1

```

The command that triggers an offload does not wait until the offload operation has completed. To check the status of the offload, use `offload-status`.

```bash

$ bin/pulsar-admin topics offload-status my-tenant/my-namespace/topic1
Offload is currently running

```

To wait for offload to complete, add the `-w` flag.
```bash

$ bin/pulsar-admin topics offload-status -w my-tenant/my-namespace/topic1
Offload was a success

```

If there is an error offloading, the error will be propagated to the `offload-status` command.

```bash

$ bin/pulsar-admin topics offload-status persistent://public/default/topic1
Error in offload
null

Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=

```

diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/deploy-aws.md b/site2/website/versioned_docs/version-2.9.2-deprecated/deploy-aws.md
deleted file mode 100644
index 0d94fc13cdb2d6..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/deploy-aws.md
+++ /dev/null
@@ -1,271 +0,0 @@
---
id: deploy-aws
title: Deploying a Pulsar cluster on AWS using Terraform and Ansible
sidebar_label: "Amazon Web Services"
original_id: deploy-aws
---

> For instructions on deploying a single Pulsar cluster manually rather than using Terraform and Ansible, see [Deploying a Pulsar cluster on bare metal](deploy-bare-metal.md). For instructions on manually deploying a multi-cluster Pulsar instance, see [Deploying a Pulsar instance on bare metal](deploy-bare-metal-multi-cluster.md).

One of the easiest ways to get a Pulsar [cluster](reference-terminology.md#cluster) running on [Amazon Web Services](https://aws.amazon.com/) (AWS) is to use the [Terraform](https://terraform.io) infrastructure provisioning tool and the [Ansible](https://www.ansible.com) server automation tool. Terraform can create the resources necessary for running the Pulsar cluster---[EC2](https://aws.amazon.com/ec2/) instances, networking and security infrastructure, etc.---while Ansible can install and run Pulsar on the provisioned resources.

## Requirements and setup

In order to install a Pulsar cluster on AWS using Terraform and Ansible, you need to prepare the following:

* An [AWS account](https://aws.amazon.com/account/) and the [`aws`](https://aws.amazon.com/cli/) command-line tool
* Python and [pip](https://pip.pypa.io/en/stable/)
* The [`terraform-inventory`](https://github.com/adammck/terraform-inventory) tool, which enables Ansible to use Terraform artifacts

You also need to make sure that you are currently logged into your AWS account via the `aws` tool:

```bash

$ aws configure

```

## Installation

You can install Ansible on Linux or macOS using pip.

```bash

$ pip install ansible

```

You can install Terraform using the instructions [here](https://www.terraform.io/intro/getting-started/install.html).
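Before proceeding, you may want to confirm that the tools are available on your `PATH` (the exact version output will differ):

```bash

$ aws --version
$ ansible --version
$ terraform version

```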
You can find them in the [GitHub repository](https://github.com/apache/pulsar) of Pulsar, which you can fetch using Git commands:

```bash

$ git clone https://github.com/apache/pulsar
$ cd pulsar/deployment/terraform-ansible/aws

```

## SSH setup

> If you already have an SSH key and want to use it, you can skip the step of generating an SSH key and update the `private_key_file` setting
> in the `ansible.cfg` file and the `public_key_path` setting in the `terraform.tfvars` file.
>
> For example, if you already have a private SSH key in `~/.ssh/pulsar_aws` and a public key in `~/.ssh/pulsar_aws.pub`,
> follow the steps below:
>
> 1. Update `ansible.cfg` with the following values:
>

> ```shell
>
> private_key_file=~/.ssh/pulsar_aws
>
>
> ```

>
> 2. Update `terraform.tfvars` with the following values:
>

> ```shell
>
> public_key_path=~/.ssh/pulsar_aws.pub
>
>
> ```


In order to create the necessary AWS resources using Terraform, you need to create an SSH key. Enter the following commands to create a private SSH key in `~/.ssh/id_rsa` and a public key in `~/.ssh/id_rsa.pub`:

```bash

$ ssh-keygen -t rsa

```

Do *not* enter a passphrase (just press **Enter** when prompted). Enter the following command to verify that a key has been created:

```bash

$ ls ~/.ssh
id_rsa id_rsa.pub

```

## Create AWS resources using Terraform

To start building AWS resources with Terraform, you need to install all Terraform dependencies. Enter the following command:

```bash

$ terraform init
# This will create a .terraform folder

```

After that, you can apply the default Terraform configuration by entering this command:

```bash

$ terraform apply

```

You then see the following prompt:

```bash

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value:

```

Type `yes` and hit **Enter**. Applying the configuration could take several minutes. When the apply finishes, you can see `Apply complete!` along with some other information, including the number of resources created.

### Apply a non-default configuration

You can apply a non-default Terraform configuration by changing the values in the `terraform.tfvars` file. The following variables are available:

Variable name | Description | Default
:-------------|:------------|:-------
`public_key_path` | The path of the public key that you have generated. | `~/.ssh/id_rsa.pub`
`region` | The AWS region in which the Pulsar cluster runs | `us-west-2`
`availability_zone` | The AWS availability zone in which the Pulsar cluster runs | `us-west-2a`
`aws_ami` | The [Amazon Machine Image](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html) (AMI) that the cluster uses | `ami-9fa343e7`
`num_zookeeper_nodes` | The number of [ZooKeeper](https://zookeeper.apache.org) nodes in the ZooKeeper cluster | 3
`num_bookie_nodes` | The number of bookies that run in the cluster | 3
`num_broker_nodes` | The number of Pulsar brokers that run in the cluster | 2
`num_proxy_nodes` | The number of Pulsar proxies that run in the cluster | 1
`base_cidr_block` | The root [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) that network assets use for the cluster | `10.0.0.0/16`
`instance_types` | The EC2 instance types to be used. This variable is a map with four keys: `zookeeper` for the ZooKeeper instances, `bookie` for the BookKeeper bookies, and `broker` and `proxy` for Pulsar brokers and proxies | `t2.small` (ZooKeeper), `i3.xlarge` (BookKeeper) and `c5.2xlarge` (Brokers/Proxies)

### What is installed

When you run the Ansible playbook, the following AWS resources are used:

* 9 total [Elastic Compute Cloud](https://aws.amazon.com/ec2) (EC2) instances running the [ami-9fa343e7](https://access.redhat.com/articles/3135091) Amazon Machine Image (AMI), which runs [Red Hat Enterprise Linux (RHEL) 7.4](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/7.4_release_notes/index). By default, that includes:
  * 3 small VMs for ZooKeeper ([t2.small](https://www.ec2instances.info/?selected=t2.small) instances)
  * 3 larger VMs for BookKeeper [bookies](reference-terminology.md#bookie) ([i3.xlarge](https://www.ec2instances.info/?selected=i3.xlarge) instances)
  * 2 larger VMs for Pulsar [brokers](reference-terminology.md#broker) ([c5.2xlarge](https://www.ec2instances.info/?selected=c5.2xlarge) instances)
  * 1 larger VM for the Pulsar [proxy](reference-terminology.md#proxy) ([c5.2xlarge](https://www.ec2instances.info/?selected=c5.2xlarge) instance)
* An EC2 [security group](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html)
* A [virtual private cloud](https://aws.amazon.com/vpc/) (VPC) for security
* An [API Gateway](https://aws.amazon.com/api-gateway/) for connections from the outside world
* A [route table](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Route_Tables.html) for the Pulsar cluster's VPC
* A [subnet](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html) for the VPC

All EC2 instances for the cluster run in the [us-west-2](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html) region.

### Fetch your Pulsar connection URL

When you apply the Terraform configuration by entering the command `terraform apply`, Terraform outputs a value for the `pulsar_service_url`. The value should look something like this:

```

pulsar://pulsar-elb-1800761694.us-west-2.elb.amazonaws.com:6650

```

You can fetch that value at any time by entering the command `terraform output pulsar_service_url` or parsing the `terraform.tfstate` file (which is JSON, even though the filename does not reflect that):

```bash

$ cat terraform.tfstate | jq .modules[0].outputs.pulsar_service_url.value

```

### Destroy your cluster

At any point, you can destroy all AWS resources associated with your cluster using Terraform's `destroy` command:

```bash

$ terraform destroy

```

## Setup Disks

Before you run the Pulsar playbook, you need to mount the disks to the correct directories on the bookie nodes. Since different types of machines have different disk layouts, you need to update the task defined in the `setup-disk.yaml` file after changing the `instance_types` in your Terraform config.

To set up disks on the bookie nodes, enter this command:

```bash

$ ansible-playbook \
  --user='ec2-user' \
  --inventory=`which terraform-inventory` \
  setup-disk.yaml

```

After that, the disks are mounted under `/mnt/journal` as the journal disk and `/mnt/storage` as the ledger disk.
Remember to enter this command only once. If you enter this command again after you have run the Pulsar playbook, your disks might be erased, causing the bookies to fail to start up.
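
Before moving on, it can be worth confirming that the mounts exist on every bookie host. The following ad-hoc check is an illustrative sketch only: it reuses the `ec2-user`/`terraform-inventory` setup from above, and the `bookie` host group name is an assumption about how your Terraform inventory labels the bookie instances.

```bash

# Illustrative check only: list the mounted journal and ledger disks on each
# bookie host. The "bookie" group name is an assumption about your inventory.
$ ansible bookie \
  --user='ec2-user' \
  --inventory=`which terraform-inventory` \
  -m shell -a 'df -h /mnt/journal /mnt/storage'

```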

## Run the Pulsar playbook

Once you have created the necessary AWS resources using Terraform, you can install and run Pulsar on the Terraform-created EC2 instances using Ansible.

(Optional) If you want to use any [built-in IO connectors](io-connectors.md), edit the `Download Pulsar IO packages` task in the `deploy-pulsar.yaml` file and uncomment the connectors you want to use.

To run the playbook, enter this command:

```bash

$ ansible-playbook \
  --user='ec2-user' \
  --inventory=`which terraform-inventory` \
  ../deploy-pulsar.yaml

```

If you have created a private SSH key at a location different from `~/.ssh/id_rsa`, you can specify the different location using the `--private-key` flag in the following command:

```bash

$ ansible-playbook \
  --user='ec2-user' \
  --inventory=`which terraform-inventory` \
  --private-key="~/.ssh/some-non-default-key" \
  ../deploy-pulsar.yaml

```

## Access the cluster

You can now access your running Pulsar cluster using the unique Pulsar connection URL for your cluster, which you can obtain following the instructions [above](#fetch-your-pulsar-connection-url).

For a quick demonstration of accessing the cluster, we can use the Python client for Pulsar and the Python shell. First, install the Pulsar Python module using pip:

```bash

$ pip install pulsar-client

```

Now, open up the Python shell using the `python` command:

```bash

$ python

```

Once you are in the shell, enter the following commands:

```python

>>> import pulsar
>>> client = pulsar.Client('pulsar://pulsar-elb-1800761694.us-west-2.elb.amazonaws.com:6650')
# Make sure to use your connection URL
>>> producer = client.create_producer('persistent://public/default/test-topic')
>>> producer.send(('Hello world').encode('utf-8'))
>>> client.close()

```

If all of these commands are successful, Pulsar clients can now use your cluster!
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/deploy-bare-metal-multi-cluster.md b/site2/website/versioned_docs/version-2.9.2-deprecated/deploy-bare-metal-multi-cluster.md
deleted file mode 100644
index 49fd3938c64d42..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/deploy-bare-metal-multi-cluster.md
+++ /dev/null
@@ -1,452 +0,0 @@
---
id: deploy-bare-metal-multi-cluster
title: Deploying a multi-cluster on bare metal
sidebar_label: "Bare metal multi-cluster"
original_id: deploy-bare-metal-multi-cluster
---

:::tip

1. You can use a single-cluster Pulsar installation in most use cases, such as experimenting with Pulsar or using Pulsar in a startup or in a single team. If you need to run a multi-cluster Pulsar instance, see the [guide](deploy-bare-metal-multi-cluster.md).
2. If you want to use all built-in [Pulsar IO](io-overview.md) connectors, you need to download the `apache-pulsar-io-connectors` package and install it under the `connectors` directory in the Pulsar directory on every broker node, or on every function-worker node if you run a separate cluster of function workers for [Pulsar Functions](functions-overview.md).
3. If you want to use the [Tiered Storage](concepts-tiered-storage.md) feature in your Pulsar deployment, you need to download the `apache-pulsar-offloaders` package and install it under the `offloaders` directory in the Pulsar directory on every broker node. For more details on how to configure this feature, you can refer to the [Tiered storage cookbook](cookbooks-tiered-storage.md).

:::

A Pulsar instance consists of multiple Pulsar clusters working in unison. You can distribute clusters across data centers or geographical regions and replicate the clusters amongst themselves using [geo-replication](administration-geo.md). Deploying a multi-cluster Pulsar instance consists of the following steps:

1. Deploying two separate ZooKeeper quorums: a local quorum for each cluster in the instance and a configuration store quorum for instance-wide tasks
2. Initializing cluster metadata for each cluster
3. Deploying a BookKeeper cluster of bookies in each Pulsar cluster
4. Deploying brokers in each Pulsar cluster


> #### Run Pulsar locally or on Kubernetes?
> This guide shows you how to deploy Pulsar in production in a non-Kubernetes environment. If you want to run a standalone Pulsar cluster on a single machine for development purposes, see the [Setting up a local cluster](getting-started-standalone.md) guide. If you want to run Pulsar on [Kubernetes](https://kubernetes.io), see the [Pulsar on Kubernetes](deploy-kubernetes.md) guide, which includes sections on running Pulsar on Kubernetes, on Google Kubernetes Engine and on Amazon Web Services.

## System requirement

Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. You need to install 64-bit JRE/JDK 8 or later versions.

:::note

Broker is only supported on 64-bit JVM.

:::

## Install Pulsar

To get started running Pulsar, download a binary tarball release in one of the following ways:

* by clicking the link below and downloading the release from an Apache mirror:

  * Pulsar @pulsar:version@ binary release

* from the Pulsar [downloads page](pulsar:download_page_url)
* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
* using [wget](https://www.gnu.org/software/wget):

  ```shell

  $ wget 'https://www.apache.org/dyn/mirrors/mirrors.cgi?action=download&filename=pulsar/pulsar-@pulsar:version@/apache-pulsar-@pulsar:version@-bin.tar.gz' -O apache-pulsar-@pulsar:version@-bin.tar.gz

  ```

Once you download the tarball, untar it and `cd` into the resulting directory:

```bash

$ tar xvfz apache-pulsar-@pulsar:version@-bin.tar.gz
$ cd apache-pulsar-@pulsar:version@

```

The Pulsar binary package initially contains the following directories:

Directory | Contains
:---------|:--------
`bin` | [Command-line tools](reference-cli-tools.md) of Pulsar, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/)
`conf` | Configuration files for Pulsar, including for [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more
`examples` | A Java JAR file containing example [Pulsar Functions](functions-overview.md)
`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files that Pulsar uses
`licenses` | License files, in `.txt` form, for various components of the Pulsar codebase

The following directories are created once you begin running Pulsar:

Directory | Contains
:---------|:--------
`data` | The data storage directory that ZooKeeper and BookKeeper use
`instances` | Artifacts created for [Pulsar Functions](functions-overview.md)
`logs` | Logs that the installation creates


## Deploy ZooKeeper

Each Pulsar instance relies on two separate ZooKeeper quorums.

* Local ZooKeeper operates at the cluster level and provides cluster-specific configuration management and coordination. Each Pulsar cluster needs a dedicated ZooKeeper cluster.
* Configuration Store operates at the instance level and provides configuration management for the entire system (and thus across clusters). An independent cluster of machines or the same machines that local ZooKeeper uses can provide the configuration store quorum.


### Deploy local ZooKeeper

ZooKeeper manages a variety of essential coordination-related and configuration-related tasks for Pulsar.

You need to stand up one local ZooKeeper cluster per Pulsar cluster for deploying a Pulsar instance.

To begin, add all ZooKeeper servers to the quorum configuration specified in the [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file. Add a `server.N` line for each node in the cluster to the configuration, where `N` is the number of the ZooKeeper node. The following is an example for a three-node cluster:

```properties

server.1=zk1.us-west.example.com:2888:3888
server.2=zk2.us-west.example.com:2888:3888
server.3=zk3.us-west.example.com:2888:3888

```

On each host, you need to specify the ID of the node in the `myid` file of each node, which is in the `data/zookeeper` folder of each server by default (you can change the file location via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter).

:::tip

See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed information on `myid` and more.

:::

On a ZooKeeper server at `zk1.us-west.example.com`, for example, you could set the `myid` value like this:

```shell

$ mkdir -p data/zookeeper
$ echo 1 > data/zookeeper/myid

```

On `zk2.us-west.example.com` the command looks like `echo 2 > data/zookeeper/myid` and so on.

Once you add each server to the `zookeeper.conf` configuration and each server has the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:

```shell

$ bin/pulsar-daemon start zookeeper

```

### Deploy the configuration store

The ZooKeeper cluster configured and started up in the section above is a local ZooKeeper cluster that you can use to manage a single Pulsar cluster. In addition to a local cluster, however, a full Pulsar instance also requires a configuration store for handling some instance-level configuration and coordination tasks.

If you deploy a single-cluster instance, you do not need a separate cluster for the configuration store. If, however, you deploy a multi-cluster instance, you should stand up a separate ZooKeeper cluster for configuration tasks.

#### Single-cluster Pulsar instance

If your Pulsar instance consists of just one cluster, then you can deploy a configuration store on the same machines as the local ZooKeeper quorum but run on different TCP ports.

To deploy a ZooKeeper configuration store in a single-cluster instance, add the same ZooKeeper servers that the local quorum uses.
You need to use the configuration file in [`conf/global_zookeeper.conf`](reference-configuration.md#configuration-store) using the same method as for [local ZooKeeper](#local-zookeeper), but make sure to use a different port (2181 is the default for ZooKeeper). The following is an example that uses port 2184 for a three-node ZooKeeper cluster:

```properties

clientPort=2184
server.1=zk1.us-west.example.com:2185:2186
server.2=zk2.us-west.example.com:2185:2186
server.3=zk3.us-west.example.com:2185:2186

```

As before, create the `myid` files for each server on `data/global-zookeeper/myid`.

#### Multi-cluster Pulsar instance

When you deploy a global Pulsar instance, with clusters distributed across different geographical regions, the configuration store serves as a highly available and strongly consistent metadata store that can tolerate failures and partitions spanning whole regions.

The key here is to make sure the ZK quorum members are spread across at least 3 regions, and other regions run as observers.

Again, given the very low expected load on the configuration store servers, you can share the same hosts used for the local ZooKeeper quorum.

For example, assume a Pulsar instance with the following clusters: `us-west`, `us-east`, `us-central`, `eu-central`, `ap-south`. Also assume that each cluster has its own local ZK servers named as follows:

```

zk[1-3].${CLUSTER}.example.com

```

In this scenario, you want to pick the quorum participants from a few clusters and let all the others be ZK observers. For example, to form a 7-server quorum, you can pick 3 servers from `us-west`, 2 from `us-central` and 2 from `us-east`.

This method guarantees that writes to the configuration store are possible even if one of these regions is unreachable.

The ZK configuration in all the servers looks like:

```properties

clientPort=2184
server.1=zk1.us-west.example.com:2185:2186
server.2=zk2.us-west.example.com:2185:2186
server.3=zk3.us-west.example.com:2185:2186
server.4=zk1.us-central.example.com:2185:2186
server.5=zk2.us-central.example.com:2185:2186
server.6=zk3.us-central.example.com:2185:2186:observer
server.7=zk1.us-east.example.com:2185:2186
server.8=zk2.us-east.example.com:2185:2186
server.9=zk3.us-east.example.com:2185:2186:observer
server.10=zk1.eu-central.example.com:2185:2186:observer
server.11=zk2.eu-central.example.com:2185:2186:observer
server.12=zk3.eu-central.example.com:2185:2186:observer
server.13=zk1.ap-south.example.com:2185:2186:observer
server.14=zk2.ap-south.example.com:2185:2186:observer
server.15=zk3.ap-south.example.com:2185:2186:observer

```

Additionally, ZK observers need to have the following parameters:

```properties

peerType=observer

```

##### Start the service

Once your configuration store configuration is in place, you can start up the service using [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon):

```shell

$ bin/pulsar-daemon start configuration-store

```

## Cluster metadata initialization

Once you set up the cluster-specific ZooKeeper and configuration store quorums for your instance, you need to write some metadata to ZooKeeper for each cluster in your instance. **You only need to write this metadata once.**

You can initialize this metadata using the [`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata) command of the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool.
The following is an example:

```shell

$ bin/pulsar initialize-cluster-metadata \
  --cluster us-west \
  --zookeeper zk1.us-west.example.com:2181 \
  --configuration-store zk1.us-west.example.com:2184 \
  --web-service-url http://pulsar.us-west.example.com:8080/ \
  --web-service-url-tls https://pulsar.us-west.example.com:8443/ \
  --broker-service-url pulsar://pulsar.us-west.example.com:6650/ \
  --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651/

```

As you can see from the example above, you need to specify the following:

* The name of the cluster
* The local ZooKeeper connection string for the cluster
* The configuration store connection string for the entire instance
* The web service URL for the cluster
* A broker service URL enabling interaction with the [brokers](reference-terminology.md#broker) in the cluster

If you use [TLS](security-tls-transport.md), you also need to specify a TLS web service URL for the cluster as well as a TLS broker service URL for the brokers in the cluster.

Make sure to run `initialize-cluster-metadata` for each cluster in your instance.

## Deploy BookKeeper

BookKeeper provides [persistent message storage](concepts-architecture-overview.md#persistent-storage) for Pulsar.

Each Pulsar cluster needs its own cluster of bookies. The BookKeeper cluster shares a local ZooKeeper quorum with the Pulsar cluster.

### Configure bookies

You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. The most important aspect of configuring each bookie is ensuring that the [`zkServers`](reference-configuration.md#bookkeeper-zkServers) parameter is set to the connection string for the local ZooKeeper of the Pulsar cluster.

### Start bookies

You can start a bookie in two ways: in the foreground or as a background daemon.

To start a bookie in the background, use the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:

```bash

$ bin/pulsar-daemon start bookie

```

You can verify that the bookie works properly using the `bookiesanity` command for the [BookKeeper shell](reference-cli-tools.md#bookkeeper-shell):

```bash

$ bin/bookkeeper shell bookiesanity

```

This command creates a new ledger on the local bookie, writes a few entries, reads them back and finally deletes the ledger.

After you have started all bookies, you can use the `simpletest` command for the [BookKeeper shell](reference-cli-tools.md#shell) on any bookie node, to verify that all bookies in the cluster are running.

```bash

$ bin/bookkeeper shell simpletest --ensemble <num-bookies> --writeQuorum <num-bookies> --ackQuorum <num-bookies> --numEntries <num-entries>

```

Bookie hosts are responsible for storing message data on disk. In order for bookies to provide optimal performance, having a suitable hardware configuration is essential for the bookies. The following are key dimensions for bookie hardware capacity.

* Disk I/O capacity read/write
* Storage capacity

Message entries written to bookies are always synced to disk before returning an acknowledgement to the Pulsar broker. To ensure low write latency, BookKeeper is designed to use multiple devices:

* A **journal** to ensure durability. For sequential writes, having fast [fsync](https://linux.die.net/man/2/fsync) operations on bookie hosts is critical.
Typically, small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) should suffice, or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache. Both solutions can reach fsync latency of ~0.4 ms.
* A **ledger storage device** is where data is stored until all consumers acknowledge the message. Writes happen in the background, so write I/O is not a big concern. Reads happen sequentially most of the time, and the backlog is read back only when consumers fall behind and drain it. To store large amounts of data, a typical configuration involves multiple HDDs with a RAID controller.



## Deploy brokers

Once you set up ZooKeeper, initialize cluster metadata, and spin up BookKeeper bookies, you can deploy brokers.

### Broker configuration

You can configure brokers using the [`conf/broker.conf`](reference-configuration.md#broker) configuration file.

The most important element of broker configuration is ensuring that each broker is aware of its local ZooKeeper quorum as well as the configuration store quorum. Make sure that you set the [`zookeeperServers`](reference-configuration.md#broker-zookeeperServers) parameter to reflect the local quorum and the [`configurationStoreServers`](reference-configuration.md#broker-configurationStoreServers) parameter to reflect the configuration store quorum (although you need to specify only those ZooKeeper servers located in the same cluster).

You also need to specify the name of the [cluster](reference-terminology.md#cluster) to which the broker belongs using the [`clusterName`](reference-configuration.md#broker-clusterName) parameter. In addition, you need to match the broker and web service ports provided when you initialize the metadata of the cluster (especially when you use a port different from the default).

The following is an example configuration:

```properties

# Local ZooKeeper servers
zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181

# Configuration store quorum connection string.
configurationStoreServers=zk1.us-west.example.com:2184,zk2.us-west.example.com:2184,zk3.us-west.example.com:2184

clusterName=us-west

# Broker data port
brokerServicePort=6650

# Broker data port for TLS
brokerServicePortTls=6651

# Port to use to serve HTTP requests
webServicePort=8080

# Port to use to serve HTTPS requests
webServicePortTls=8443

```

### Broker hardware

Pulsar brokers do not require any special hardware since they do not use the local disk. Choose fast CPUs and a 10Gbps [NIC](https://en.wikipedia.org/wiki/Network_interface_controller) so that the software can take full advantage of them.

### Start the broker service

You can start a broker in the background by using [nohup](https://en.wikipedia.org/wiki/Nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:

```shell

$ bin/pulsar-daemon start broker

```

You can also start brokers in the foreground by using [`pulsar broker`](reference-cli-tools.md#broker):

```shell

$ bin/pulsar broker

```

## Service discovery

[Clients](getting-started-clients.md) connecting to Pulsar brokers need to communicate with an entire Pulsar instance using a single URL.

You can use your own service discovery system.
If you use your own system, you only need to satisfy one requirement: when a client performs an HTTP request to an [endpoint](reference-configuration.md) for a Pulsar cluster, such as `http://pulsar.us-west.example.com:8080`, the client needs to be redirected to an active broker in the desired cluster, whether via DNS, an HTTP or IP redirect, or some other means.

> **Service discovery already provided by many scheduling systems**
> Many large-scale deployment systems, such as [Kubernetes](deploy-kubernetes.md), have service discovery systems built in. If you run Pulsar on such a system, you may not need to provide your own service discovery mechanism.

## Admin client and verification

At this point your Pulsar instance should be ready to use. You can now configure client machines that can serve as [administrative clients](admin-api-overview.md) for each cluster. You can use the [`conf/client.conf`](reference-configuration.md#client) configuration file to configure admin clients.

The most important thing is that you point the [`serviceUrl`](reference-configuration.md#client-serviceUrl) parameter to the correct service URL for the cluster:

```properties

serviceUrl=http://pulsar.us-west.example.com:8080/

```

## Provision new tenants

Pulsar is built as a fundamentally multi-tenant system.


If a new tenant wants to use the system, you need to create a new tenant. You can create a new tenant by using the [`pulsar-admin`](reference-pulsar-admin.md#tenants) CLI tool:

```shell

$ bin/pulsar-admin tenants create test-tenant \
  --allowed-clusters us-west \
  --admin-roles test-admin-role

```

In this command, users who identify with the `test-admin-role` role can administer the configuration for the `test-tenant` tenant. The `test-tenant` tenant can only use the `us-west` cluster. From now on, this tenant can manage its resources.

Once you create a tenant, you need to create [namespaces](reference-terminology.md#namespace) for topics within that tenant.


The first step is to create a namespace. A namespace is an administrative unit that can contain many topics. A common practice is to create a namespace for each different use case from a single tenant.

```shell

$ bin/pulsar-admin namespaces create test-tenant/ns1

```

### Test producer and consumer


Everything is now ready to send and receive messages. The quickest way to test the system is through the [`pulsar-perf`](reference-cli-tools.md#pulsar-perf) client tool.


You can use a topic in the namespace that you have just created. Topics are automatically created the first time a producer or a consumer tries to use them.

The topic name in this case could be:

```http

persistent://test-tenant/ns1/my-topic

```

Start a consumer that creates a subscription on the topic and waits for messages:

```shell

$ bin/pulsar-perf consume persistent://test-tenant/ns1/my-topic

```

Start a producer that publishes messages at a fixed rate and reports stats every 10 seconds:

```shell

$ bin/pulsar-perf produce persistent://test-tenant/ns1/my-topic

```

To report the topic stats:

```shell

$ bin/pulsar-admin topics stats persistent://test-tenant/ns1/my-topic

```

diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/deploy-bare-metal.md b/site2/website/versioned_docs/version-2.9.2-deprecated/deploy-bare-metal.md
deleted file mode 100644
index 292157e3ddc89d..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/deploy-bare-metal.md
+++ /dev/null
@@ -1,559 +0,0 @@
---
id: deploy-bare-metal
title: Deploy a cluster on bare metal
sidebar_label: "Bare metal"
original_id: deploy-bare-metal
---

:::tip

1. You can use a single-cluster Pulsar installation in most use cases, such as experimenting with Pulsar or using Pulsar in a startup or in a single team. If you need to run a multi-cluster Pulsar instance, see the [guide](deploy-bare-metal-multi-cluster.md).
2. If you want to use all built-in [Pulsar IO](io-overview.md) connectors, you need to download the `apache-pulsar-io-connectors` package and install it under the `connectors` directory in the Pulsar directory on every broker node, or on every function-worker node if you run a separate cluster of function workers for [Pulsar Functions](functions-overview.md).
3. If you want to use the [Tiered Storage](concepts-tiered-storage.md) feature in your Pulsar deployment, you need to download the `apache-pulsar-offloaders` package and install it under the `offloaders` directory in the Pulsar directory on every broker node. For more details on how to configure this feature, you can refer to the [Tiered storage cookbook](cookbooks-tiered-storage.md).

:::

Deploying a Pulsar cluster consists of the following steps:

1. Deploy a [ZooKeeper](#deploy-a-zookeeper-cluster) cluster (optional)
2. Initialize [cluster metadata](#initialize-cluster-metadata)
3. Deploy a [BookKeeper](#deploy-a-bookkeeper-cluster) cluster
4. Deploy one or more Pulsar [brokers](#deploy-pulsar-brokers)

## Preparation

### Requirements

Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install 64-bit JRE/JDK 8 or later versions.

:::tip

You can reuse existing ZooKeeper clusters.

:::

To run Pulsar on bare metal, the following configuration is recommended:

* At least 6 Linux machines or VMs
  * 3 for running [ZooKeeper](https://zookeeper.apache.org)
  * 3 for running a Pulsar broker, and a [BookKeeper](https://bookkeeper.apache.org) bookie
* A single [DNS](https://en.wikipedia.org/wiki/Domain_Name_System) name covering all of the Pulsar broker hosts

:::note

* Broker is only supported on 64-bit JVM.
* If you do not have enough machines, or you want to test Pulsar in cluster mode (and expand the cluster later), you can deploy Pulsar fully on a single node, on which ZooKeeper, a bookie, and a broker all run.
* If you do not have a DNS server, you can use the multi-host format in the service URL instead.

:::

Each machine in your cluster needs to have [Java 8](https://adoptium.net/?variant=openjdk8) or [Java 11](https://adoptium.net/?variant=openjdk11) installed.

The following is a diagram showing the basic setup:

![alt-text](/assets/pulsar-basic-setup.png)

In this diagram, connecting clients need to communicate with the Pulsar cluster using a single URL. In this case, `pulsar-cluster.acme.com` abstracts over all of the message-handling brokers. Pulsar message brokers run on machines alongside BookKeeper bookies; brokers and bookies, in turn, rely on ZooKeeper.

### Hardware considerations

If you deploy a Pulsar cluster, keep in mind the following recommendations when you do capacity planning.

#### ZooKeeper

For machines running ZooKeeper, it is recommended to use less powerful machines or VMs. Pulsar uses ZooKeeper only for periodic coordination-related and configuration-related tasks, not for basic operations. If you run Pulsar on [Amazon Web Services](https://aws.amazon.com/) (AWS), for example, a [t2.small](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html) instance will likely suffice.

#### Bookies and Brokers

For machines running a bookie and a Pulsar broker, more powerful machines are required. For an AWS deployment, for example, [i3.4xlarge](https://aws.amazon.com/blogs/aws/now-available-i3-instances-for-demanding-io-intensive-applications/) instances may be appropriate. On those machines you can use the following:

* Fast CPUs and 10Gbps [NIC](https://en.wikipedia.org/wiki/Network_interface_controller) (for Pulsar brokers)
* Small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache (for BookKeeper bookies)

To start a Pulsar instance, the minimum and recommended hardware settings are as follows.

1. The minimum hardware settings (250 Pulsar topics)

   - Broker
     - CPU: 0.2
     - Memory: 256MB
   - Bookie
     - CPU: 0.2
     - Memory: 256MB
     - Storage:
       - Journal: 8GB, PD-SSD
       - Ledger: 16GB, PD-STANDARD

2. The recommended hardware settings (1000 Pulsar topics)

   - Broker
     - CPU: 8
     - Memory: 8GB
   - Bookie
     - CPU: 4
     - Memory: 8GB
     - Storage:
       - Journal: 256GB, PD-SSD
       - Ledger: 2TB, PD-STANDARD

## Install the Pulsar binary package

> You need to install the Pulsar binary package on each machine in the cluster, including machines running ZooKeeper and BookKeeper.
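
If you distribute the package by hand, a small loop over your hosts is often enough. The following is a minimal sketch only; the hostnames are placeholders for your own ZooKeeper, bookie, and broker machines.

```bash

# Illustrative sketch only: copy and unpack the release on every machine.
# The hostnames below are placeholders for your own cluster hosts.
$ for host in zk1 zk2 zk3 pulsar1 pulsar2 pulsar3; do
    scp apache-pulsar-@pulsar:version@-bin.tar.gz "${host}:" && \
    ssh "${host}" "tar xvzf apache-pulsar-@pulsar:version@-bin.tar.gz"
  done

```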

To get started deploying a Pulsar cluster on bare metal, you need to download a binary tarball release in one of the following ways:

* By clicking on the link below directly, which automatically triggers a download:
  * Pulsar @pulsar:version@ binary release
* From the Pulsar [downloads page](pulsar:download_page_url)
* From the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) on GitHub
* Using [wget](https://www.gnu.org/software/wget):

```bash

$ wget pulsar:binary_release_url

```

Once you download the tarball, untar it and `cd` into the resulting directory:

```bash

$ tar xvzf apache-pulsar-@pulsar:version@-bin.tar.gz
$ cd apache-pulsar-@pulsar:version@

```

The extracted directory contains the following subdirectories:

Directory | Contains
:---------|:--------
`bin` | [Command-line tools](reference-cli-tools.md) of Pulsar, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/)
`conf` | Configuration files for Pulsar, including for [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more
`data` | The data storage directory that ZooKeeper and BookKeeper use
`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files that Pulsar uses
`logs` | Logs that the installation creates

## [Install Builtin Connectors (optional)](https://pulsar.apache.org/docs/en/next/standalone/#install-builtin-connectors-optional)

> Since Pulsar release `2.1.0-incubating`, Pulsar provides a separate binary distribution, containing all the `builtin` connectors.
> To enable the `builtin` connectors (optional), you can follow the instructions below.

To use `builtin` connectors, you need to download the connectors tarball release on every broker node in one of the following ways:

* by clicking the link below and downloading the release from an Apache mirror:

  * Pulsar IO Connectors @pulsar:version@ release

* from the Pulsar [downloads page](pulsar:download_page_url)
* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
* using [wget](https://www.gnu.org/software/wget):

  ```shell

  $ wget pulsar:connector_release_url/{connector}-@pulsar:version@.nar

  ```

Once you download the .nar file, copy the file to the `connectors` directory in the Pulsar directory.
For example, if you download the connector file `pulsar-io-aerospike-@pulsar:version@.nar`:

```bash

$ mkdir connectors
$ mv pulsar-io-aerospike-@pulsar:version@.nar connectors

$ ls connectors
pulsar-io-aerospike-@pulsar:version@.nar
...

```

## [Install Tiered Storage Offloaders (optional)](https://pulsar.apache.org/docs/en/next/standalone/#install-tiered-storage-offloaders-optional)

> Since Pulsar release `2.2.0`, Pulsar releases a separate binary distribution, containing the tiered storage offloaders.
> If you want to enable the tiered storage feature, you can follow the instructions below; otherwise you can
> skip this section for now.

To use tiered storage offloaders, you need to download the offloaders tarball release on every broker node in one of the following ways:

* by clicking the link below and downloading the release from an Apache mirror:

  * Pulsar Tiered Storage Offloaders @pulsar:version@ release

* from the Pulsar [downloads page](pulsar:download_page_url)
* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
* using [wget](https://www.gnu.org/software/wget):

  ```shell

  $ wget pulsar:offloader_release_url

  ```

Once you download the tarball, untar the offloaders package in the Pulsar directory and move the extracted offloaders into an `offloaders` directory there:

```bash

$ tar xvfz apache-pulsar-offloaders-@pulsar:version@-bin.tar.gz

# you can find a directory named `apache-pulsar-offloaders-@pulsar:version@` in the pulsar directory
# then copy the offloaders

$ mv apache-pulsar-offloaders-@pulsar:version@/offloaders offloaders

$ ls offloaders
tiered-storage-jcloud-@pulsar:version@.nar

```

For more details on how to configure the tiered storage feature, you can refer to the [Tiered storage cookbook](cookbooks-tiered-storage.md).


## Deploy a ZooKeeper cluster

> If you already have an existing ZooKeeper cluster and want to use it, you can skip this section.

[ZooKeeper](https://zookeeper.apache.org) manages a variety of essential coordination-related and configuration-related tasks for Pulsar. To deploy a Pulsar cluster, you need to deploy ZooKeeper first. A 3-node ZooKeeper cluster is the recommended configuration. Pulsar does not make heavy use of ZooKeeper, so lightweight machines or VMs should suffice for running ZooKeeper.

To begin, add all ZooKeeper servers to the configuration specified in [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) (in the Pulsar directory that you create [above](#install-the-pulsar-binary-package)). The following is an example:

```properties

server.1=zk1.us-west.example.com:2888:3888
server.2=zk2.us-west.example.com:2888:3888
server.3=zk3.us-west.example.com:2888:3888

```

> If you only have one machine on which to deploy Pulsar, you only need to add one server entry in the configuration file.

On each host, you need to specify the ID of the node in the `myid` file, which is in the `data/zookeeper` folder of each server by default (you can change the file location via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter).

> See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed information on `myid` and more.

For example, on a ZooKeeper server like `zk1.us-west.example.com`, you can set the `myid` value as follows:

```bash

$ mkdir -p data/zookeeper
$ echo 1 > data/zookeeper/myid

```

On `zk2.us-west.example.com`, the command is `echo 2 > data/zookeeper/myid` and so on.

Once you add each server to the `zookeeper.conf` configuration and have the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:

```bash

$ bin/pulsar-daemon start zookeeper

```

> If you plan to deploy ZooKeeper and a bookie on the same node, start ZooKeeper with a different stats
> port by configuring `metricsProvider.httpPort` in `zookeeper.conf`.
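
As a minimal sketch of that note, you can point the ZooKeeper metrics provider at an unused port before starting the daemon. The port `7001` below is an arbitrary free port, not a required value:

```bash

# Illustrative only: pick an unused stats port (7001 is an arbitrary choice)
# so the ZooKeeper metrics endpoint does not collide with the bookie's stats
# endpoint on a shared node, then start ZooKeeper as usual.
$ echo 'metricsProvider.httpPort=7001' >> conf/zookeeper.conf
$ bin/pulsar-daemon start zookeeper

```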

## Initialize cluster metadata

Once you deploy ZooKeeper for your cluster, you need to write some metadata to ZooKeeper. You only need to write this data **once**.

You can initialize this metadata using the [`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata) command of the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool. This command can be run on any machine in your Pulsar cluster, so the metadata can be initialized from a ZooKeeper, broker, or bookie machine. The following is an example:

```shell

$ bin/pulsar initialize-cluster-metadata \
  --cluster pulsar-cluster-1 \
  --zookeeper zk1.us-west.example.com:2181 \
  --configuration-store zk1.us-west.example.com:2181 \
  --web-service-url http://pulsar.us-west.example.com:8080 \
  --web-service-url-tls https://pulsar.us-west.example.com:8443 \
  --broker-service-url pulsar://pulsar.us-west.example.com:6650 \
  --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651

```

As you can see from the example above, you need to specify the following:

Flag | Description
:----|:-----------
`--cluster` | A name for the cluster
`--zookeeper` | A "local" ZooKeeper connection string for the cluster. This connection string only needs to include *one* machine in the ZooKeeper cluster.
`--configuration-store` | The configuration store connection string for the entire instance. As with the `--zookeeper` flag, this connection string only needs to include *one* machine in the ZooKeeper cluster.
`--web-service-url` | The web service URL for the cluster, plus a port. This URL should be a standard DNS name. The default port is 8080 (using a different port is not recommended).
`--web-service-url-tls` | If you use [TLS](security-tls-transport.md), you also need to specify a TLS web service URL for the cluster. The default port is 8443 (using a different port is not recommended).
`--broker-service-url` | A broker service URL enabling interaction with the brokers in the cluster. This URL can use the same DNS name as the web service URL but must use the `pulsar` scheme instead. The default port is 6650 (using a different port is not recommended).
`--broker-service-url-tls` | If you use [TLS](security-tls-transport.md), you also need to specify a TLS web service URL for the cluster as well as a TLS broker service URL for the brokers in the cluster. The default port is 6651 (using a different port is not recommended).


> If you do not have a DNS server, you can use the multi-host format in the service URL with the following settings:
>

> ```shell
>
> --web-service-url http://host1:8080,host2:8080,host3:8080 \
> --web-service-url-tls https://host1:8443,host2:8443,host3:8443 \
> --broker-service-url pulsar://host1:6650,host2:6650,host3:6650 \
> --broker-service-url-tls pulsar+ssl://host1:6651,host2:6651,host3:6651
>
>
> ```

>
> If you want to use an existing BookKeeper cluster, you can add the `--existing-bk-metadata-service-uri` flag as follows:
>

> ```shell
>
> --existing-bk-metadata-service-uri "zk+null://zk1:2181;zk2:2181/ledgers" \
> --web-service-url http://host1:8080,host2:8080,host3:8080 \
> --web-service-url-tls https://host1:8443,host2:8443,host3:8443 \
> --broker-service-url pulsar://host1:6650,host2:6650,host3:6650 \
> --broker-service-url-tls pulsar+ssl://host1:6651,host2:6651,host3:6651
>
>
> ```

> You can obtain the metadata service URI of the existing BookKeeper cluster by using the `bin/bookkeeper shell whatisinstanceid` command.
> You must enclose the value in double quotes since the multiple metadata service URIs are separated with semicolons.

## Deploy a BookKeeper cluster

[BookKeeper](https://bookkeeper.apache.org) handles all persistent data storage in Pulsar. You need to deploy a cluster of BookKeeper bookies to use Pulsar. You can choose to run a **3-bookie BookKeeper cluster**.

You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. The most important step in configuring bookies for our purposes here is ensuring that [`zkServers`](reference-configuration.md#bookkeeper-zkServers) is set to the connection string for the ZooKeeper cluster. The following is an example:

```properties

zkServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181

```

Once you appropriately modify the `zkServers` parameter, you can make any other configuration changes that you require. You can find a full listing of the available BookKeeper configuration parameters [here](reference-configuration.md#bookkeeper). However, consulting the [BookKeeper documentation](http://bookkeeper.apache.org/docs/latest/reference/config/) for a more in-depth guide might be a better choice.

Once you apply the desired configuration in `conf/bookkeeper.conf`, you can start up a bookie on each of your BookKeeper hosts. You can start up each bookie either in the background, using [nohup](https://en.wikipedia.org/wiki/Nohup), or in the foreground.

To start the bookie in the background, use the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:

```bash

$ bin/pulsar-daemon start bookie

```

To start the bookie in the foreground:

```bash

$ bin/pulsar bookie

```

You can verify that a bookie works properly by running the `bookiesanity` command on the [BookKeeper shell](reference-cli-tools.md#shell):

```bash

$ bin/bookkeeper shell bookiesanity

```

This command creates an ephemeral BookKeeper ledger on the local bookie, writes a few entries, reads them back, and finally deletes the ledger.

After you start all the bookies, you can use the `simpletest` command of the [BookKeeper shell](reference-cli-tools.md#shell) on any bookie node to verify that all the bookies in the cluster are up and running.

```bash

$ bin/bookkeeper shell simpletest --ensemble <num-bookies> --writeQuorum <num-bookies> --ackQuorum <num-bookies> --numEntries <num-entries>

```

This command creates a `<num-bookies>`-sized ledger on the cluster, writes a few entries, and finally deletes the ledger.


## Deploy Pulsar brokers

Pulsar brokers are the last thing you need to deploy in your Pulsar cluster. Brokers handle Pulsar messages and provide the administrative interface of Pulsar. A good choice is to run **3 brokers**, one for each machine that already runs a BookKeeper bookie.

### Configure Brokers

The most important element of broker configuration is ensuring that each broker is aware of the ZooKeeper cluster that you have deployed. Ensure that the [`zookeeperServers`](reference-configuration.md#broker-zookeeperServers) and [`configurationStoreServers`](reference-configuration.md#broker-configurationStoreServers) parameters are correct. In this case, since you only have 1 cluster and no configuration store setup, the `configurationStoreServers` parameter points to the same `zookeeperServers`.

```properties

zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
configurationStoreServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181

```

You also need to specify the cluster name (matching the name that you provided when you [initialized the metadata of the cluster](#initialize-cluster-metadata)):

```properties

clusterName=pulsar-cluster-1

```

In addition, you need to match the broker and web service ports provided when you initialize the metadata of the cluster (especially when you use a port different from the default):

```properties

brokerServicePort=6650
brokerServicePortTls=6651
webServicePort=8080
webServicePortTls=8443

```

> If you deploy Pulsar in a one-node cluster, you should update the replication settings in `conf/broker.conf` to `1`.
>

> ```properties
>
> # Number of bookies to use when creating a ledger
> managedLedgerDefaultEnsembleSize=1
>
> # Number of copies to store for each message
> managedLedgerDefaultWriteQuorum=1
>
> # Number of guaranteed copies (acks to wait before write is complete)
> managedLedgerDefaultAckQuorum=1
>
>
> ```


### Enable Pulsar Functions (optional)

If you want to enable [Pulsar Functions](functions-overview.md), follow the instructions below:

1. Edit `conf/broker.conf` to enable the functions worker by setting `functionsWorkerEnabled` to `true`.

   ```conf

   functionsWorkerEnabled=true

   ```

2. Edit `conf/functions_worker.yml` and set `pulsarFunctionsCluster` to the cluster name that you provided when you [initialized the metadata of the cluster](#initialize-cluster-metadata).

   ```conf

   pulsarFunctionsCluster: pulsar-cluster-1

   ```

If you want to learn more options about deploying the functions worker, check out [Deploy and manage functions worker](functions-worker.md).

### Start Brokers

You can then provide any other configuration changes that you want in the [`conf/broker.conf`](reference-configuration.md#broker) file. Once you decide on a configuration, you can start up the brokers for your Pulsar cluster. Like ZooKeeper and BookKeeper, you can start brokers either in the foreground or in the background, using nohup.

You can start a broker in the foreground using the [`pulsar broker`](reference-cli-tools.md#pulsar-broker) command:

```bash

$ bin/pulsar broker

```

You can start a broker in the background using the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:

```bash

$ bin/pulsar-daemon start broker

```

Once you successfully start up all the brokers that you intend to use, your Pulsar cluster should be ready to go!

## Connect to the running cluster

Once your Pulsar cluster is up and running, you should be able to connect with it using Pulsar clients. One such client is the [`pulsar-client`](reference-cli-tools.md#pulsar-client) tool, which is included with the Pulsar binary package. The `pulsar-client` tool can publish messages to and consume messages from Pulsar topics and thus provides a simple way to make sure that your cluster runs properly.

To use the `pulsar-client` tool, first modify the client configuration file in [`conf/client.conf`](reference-configuration.md#client) in your binary package. You need to change the values for `webServiceUrl` and `brokerServiceUrl`, substituting `localhost` (the default) with the DNS name that you assign to your broker/bookie hosts.
The following is an example:

```properties

webServiceUrl=http://us-west.example.com:8080
brokerServiceUrl=pulsar://us-west.example.com:6650

```

> If you do not have a DNS server, you can specify multiple hosts in the service URLs as follows:
>

> ```properties
>
> webServiceUrl=http://host1:8080,host2:8080,host3:8080
> brokerServiceUrl=pulsar://host1:6650,host2:6650,host3:6650
>
>
> ```


Once that is complete, you can publish a message to the Pulsar topic:

```bash

$ bin/pulsar-client produce \
  persistent://public/default/test \
  -n 1 \
  -m "Hello Pulsar"

```

> You may need to use a different cluster name in the topic if you specify a cluster name other than `pulsar-cluster-1`.

This command publishes a single message to the Pulsar topic. In addition, you can subscribe to the Pulsar topic in a different terminal before publishing messages as below:

```bash

$ bin/pulsar-client consume \
  persistent://public/default/test \
  -n 100 \
  -s "consumer-test" \
  -t "Exclusive"

```

Once you successfully publish the above message to the topic, you should see it in the standard output:

```bash

----- got message -----
Hello Pulsar

```

## Run Functions

> If you have [enabled](#enable-pulsar-functions-optional) Pulsar Functions, you can try out the Pulsar Functions now.

Create an ExclamationFunction `exclamation`.

```bash

bin/pulsar-admin functions create \
  --jar examples/api-examples.jar \
  --classname org.apache.pulsar.functions.api.examples.ExclamationFunction \
  --inputs persistent://public/default/exclamation-input \
  --output persistent://public/default/exclamation-output \
  --tenant public \
  --namespace default \
  --name exclamation

```

Check whether the function runs as expected by [triggering](functions-deploying.md#triggering-pulsar-functions) the function.

```bash

bin/pulsar-admin functions trigger --name exclamation --trigger-value "hello world"

```

You should see the following output:

```shell

hello world!

```

diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/deploy-dcos.md b/site2/website/versioned_docs/version-2.9.2-deprecated/deploy-dcos.md
deleted file mode 100644
index 35a0a83d716ade..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/deploy-dcos.md
+++ /dev/null
@@ -1,200 +0,0 @@
---
id: deploy-dcos
title: Deploy Pulsar on DC/OS
sidebar_label: "DC/OS"
original_id: deploy-dcos
---

:::tip

To enable all built-in [Pulsar IO](io-overview.md) connectors in your Pulsar deployment, we recommend you use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image; the former has already bundled [all built-in connectors](io-overview.md#working-with-connectors).

:::

[DC/OS](https://dcos.io/) (the DataCenter Operating System) is a distributed operating system for deploying and managing applications and systems on [Apache Mesos](http://mesos.apache.org/). DC/OS is an open-source tool created and maintained by [Mesosphere](https://mesosphere.com/).

Apache Pulsar is available as a [Marathon Application Group](https://mesosphere.github.io/marathon/docs/application-groups.html), which runs multiple applications as manageable sets.

## Prerequisites

You need to prepare your environment before running Pulsar on DC/OS.

* DC/OS version [1.9](https://docs.mesosphere.com/1.9/) or higher
* A [DC/OS cluster](https://docs.mesosphere.com/1.9/installing/) with at least three agent nodes
* The [DC/OS CLI tool](https://docs.mesosphere.com/1.9/cli/install/) installed
* The [`PulsarGroups.json`](https://github.com/apache/pulsar/blob/master/deployment/dcos/PulsarGroups.json) configuration file from the Pulsar GitHub repo.

  ```bash

  $ curl -O https://raw.githubusercontent.com/apache/pulsar/master/deployment/dcos/PulsarGroups.json

  ```

Each node in the DC/OS-managed Mesos cluster must have at least:

* 4 CPUs
* 4 GB of memory
* 60 GB of total persistent disk

Alternatively, you can change the configuration in `PulsarGroups.json` to match the resources of your DC/OS cluster.

## Deploy Pulsar using the DC/OS command interface

You can deploy Pulsar on DC/OS using this command:

```bash

$ dcos marathon group add PulsarGroups.json

```

This command deploys Docker container instances in three groups, which together comprise a Pulsar cluster:

* 3 bookies (1 [bookie](reference-terminology.md#bookie) on each agent node and 1 [bookie recovery](http://bookkeeper.apache.org/docs/latest/admin/autorecovery/) instance)
* 3 Pulsar [brokers](reference-terminology.md#broker) (1 broker on each node and 1 admin instance)
* 1 [Prometheus](http://prometheus.io/) instance and 1 [Grafana](https://grafana.com/) instance


> When you run DC/OS, a ZooKeeper cluster will be running at `master.mesos:2181`, thus you do not have to install or start up ZooKeeper separately.

After executing the `dcos` command above, click the **Services** tab in the DC/OS [GUI interface](https://docs.mesosphere.com/latest/gui/), which you can access at [http://m1.dcos](http://m1.dcos) in this example. You should see several applications during the deployment.

![DC/OS command executed](/assets/dcos_command_execute.png)

![DC/OS command executed2](/assets/dcos_command_execute2.png)

## The BookKeeper group

To monitor the status of the BookKeeper cluster deployment, click the **bookkeeper** group in the parent **pulsar** group.

![DC/OS bookkeeper status](/assets/dcos_bookkeeper_status.png)

At this point, the statuses of the 3 [bookies](reference-terminology.md#bookie) are green, which means that the bookies have been deployed successfully and are running.

![DC/OS bookkeeper running](/assets/dcos_bookkeeper_run.png)

You can also click each bookie instance to get more detailed information, such as the bookie running log.

![DC/OS bookie log](/assets/dcos_bookie_log.png)

To display information about BookKeeper in ZooKeeper, you can visit [http://m1.dcos/exhibitor](http://m1.dcos/exhibitor). In this example, 3 bookies are under the `available` directory.

![DC/OS bookkeeper in zk](/assets/dcos_bookkeeper_in_zookeeper.png)

## The Pulsar broker group

Similar to the BookKeeper group above, click **brokers** to check the status of the Pulsar brokers.

![DC/OS broker status](/assets/dcos_broker_status.png)

![DC/OS broker running](/assets/dcos_broker_run.png)

You can also click each broker instance to get more detailed information, such as the broker running log.

![DC/OS broker log](/assets/dcos_broker_log.png)

Broker cluster information in ZooKeeper is also available through the web UI. In this example, you can see that the `loadbalance` and `managed-ledgers` directories have been created.
-
-![DC/OS broker in zk](/assets/dcos_broker_in_zookeeper.png)
-
-## Monitor group
-
-The **monitor** group consists of Prometheus and Grafana.
-
-![DC/OS monitor status](/assets/dcos_monitor_status.png)
-
-### Prometheus
-
-Click the instance of `prom` to get the endpoint of Prometheus, which is `192.168.65.121:9090` in this example.
-
-![DC/OS prom endpoint](/assets/dcos_prom_endpoint.png)
-
-If you click that endpoint, you can see the Prometheus dashboard. All the bookies and brokers are listed on [http://192.168.65.121:9090/targets](http://192.168.65.121:9090/targets).
-
-![DC/OS prom targets](/assets/dcos_prom_targets.png)
-
-### Grafana
-
-Click `grafana` to get the endpoint for Grafana, which is `192.168.65.121:3000` in this example.
-
-![DC/OS grafana endpoint](/assets/dcos_grafana_endpoint.png)
-
-If you click that endpoint, you can access the Grafana dashboard.
-
-![DC/OS grafana targets](/assets/dcos_grafana_dashboard.png)
-
-## Run a simple Pulsar consumer and producer on DC/OS
-
-Now that you have a fully deployed Pulsar cluster, you can run a simple consumer and producer to show Pulsar on DC/OS in action.
-
-### Download and prepare the Pulsar Java tutorial
-
-You can clone a [Pulsar Java tutorial](https://github.com/streamlio/pulsar-java-tutorial) repo. This repo contains a simple Pulsar consumer and producer (you can find more information in the `README` file in this repo).
-
-```bash
-
-$ git clone https://github.com/streamlio/pulsar-java-tutorial
-
-```
-
-Change the `SERVICE_URL` from `pulsar://localhost:6650` to `pulsar://a1.dcos:6650` in both the [`ConsumerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ConsumerTutorial.java) and [`ProducerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ProducerTutorial.java) files.
-
-The `pulsar://a1.dcos:6650` endpoint is for the broker service. You can fetch the endpoint details for each broker instance from the DC/OS GUI. `a1.dcos` is a DC/OS client agent that runs a broker, and you can replace it with the client agent IP address.
-
-Now, you can change the message number from 10 to 10000000 in the main method in the [`ProducerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ProducerTutorial.java) file to produce more messages.
-
-Then, you can compile the project code using the command below:
-
-```bash
-
-$ mvn clean package
-
-```
-
-### Run the consumer and producer
-
-Execute this command to run the consumer:
-
-```bash
-
-$ mvn exec:java -Dexec.mainClass="tutorial.ConsumerTutorial"
-
-```
-
-Execute this command to run the producer:
-
-```bash
-
-$ mvn exec:java -Dexec.mainClass="tutorial.ProducerTutorial"
-
-```
-
-You can see that the producer is producing messages and the consumer is consuming messages through the DC/OS GUI.
-
-![DC/OS pulsar producer](/assets/dcos_producer.png)
-
-![DC/OS pulsar consumer](/assets/dcos_consumer.png)
-
-### View Grafana metric output
-
-While the producer and consumer are running, you can access the running metrics from Grafana.
-
-![DC/OS pulsar dashboard](/assets/dcos_metrics.png)
-
-
-## Uninstall Pulsar
-
-You can shut down and uninstall the `pulsar` application from DC/OS at any time in one of the following two ways:
-
-1. Click the three dots at the right end of the Pulsar group and choose **Delete** on the DC/OS GUI.
-
-   ![DC/OS pulsar uninstall](/assets/dcos_uninstall.png)
-
-2. Use the command below.
- - ```bash - - $ dcos marathon group remove /pulsar - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/deploy-docker.md b/site2/website/versioned_docs/version-2.9.2-deprecated/deploy-docker.md deleted file mode 100644 index 8348d78deb2378..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/deploy-docker.md +++ /dev/null @@ -1,60 +0,0 @@ ---- -id: deploy-docker -title: Deploy a cluster on Docker -sidebar_label: "Docker" -original_id: deploy-docker ---- - -To deploy a Pulsar cluster on Docker, complete the following steps: -1. Deploy a ZooKeeper cluster (optional) -2. Initialize cluster metadata -3. Deploy a BookKeeper cluster -4. Deploy one or more Pulsar brokers - -## Prepare - -To run Pulsar on Docker, you need to create a container for each Pulsar component: ZooKeeper, BookKeeper and broker. You can pull the images of ZooKeeper and BookKeeper separately on [Docker Hub](https://hub.docker.com/), and pull a [Pulsar image](https://hub.docker.com/r/apachepulsar/pulsar-all/tags) for the broker. You can also pull only one [Pulsar image](https://hub.docker.com/r/apachepulsar/pulsar-all/tags) and create three containers with this image. This tutorial takes the second option as an example. - -### Pull a Pulsar image -You can pull a Pulsar image from [Docker Hub](https://hub.docker.com/r/apachepulsar/pulsar-all/tags) with the following command. - -``` - -docker pull apachepulsar/pulsar-all:latest - -``` - -### Create three containers -Create containers for ZooKeeper, BookKeeper and broker. In this example, they are named as `zookeeper`, `bookkeeper` and `broker` respectively. You can name them as you want with the `--name` flag. By default, the container names are created randomly. - -``` - -docker run -it --name bookkeeper apachepulsar/pulsar-all:latest /bin/bash -docker run -it --name zookeeper apachepulsar/pulsar-all:latest /bin/bash -docker run -it --name broker apachepulsar/pulsar-all:latest /bin/bash - -``` - -### Create a network -To deploy a Pulsar cluster on Docker, you need to create a `network` and connect the containers of ZooKeeper, BookKeeper and broker to this network. The following command creates the network `pulsar`: - -``` - -docker network create pulsar - -``` - -### Connect containers to network -Connect the containers of ZooKeeper, BookKeeper and broker to the `pulsar` network with the following commands. - -``` - -docker network connect pulsar zookeeper -docker network connect pulsar bookkeeper -docker network connect pulsar broker - -``` - -To check whether the containers are successfully connected to the network, enter the `docker network inspect pulsar` command. - -For detailed information about how to deploy ZooKeeper cluster, BookKeeper cluster, brokers, see [deploy a cluster on bare metal](deploy-bare-metal.md). diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/deploy-kubernetes.md b/site2/website/versioned_docs/version-2.9.2-deprecated/deploy-kubernetes.md deleted file mode 100644 index 1aefc6ad79f716..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/deploy-kubernetes.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -id: deploy-kubernetes -title: Deploy Pulsar on Kubernetes -sidebar_label: "Kubernetes" -original_id: deploy-kubernetes ---- - -To get up and running with these charts as fast as possible, in a **non-production** use case, we provide -a [quick start guide](getting-started-helm.md) for Proof of Concept (PoC) deployments. 
-
-To configure and install a Pulsar cluster on Kubernetes for production usage, follow the complete [Installation Guide](helm-install.md).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/deploy-monitoring.md b/site2/website/versioned_docs/version-2.9.2-deprecated/deploy-monitoring.md
deleted file mode 100644
index 2b5c19344dc8c3..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/deploy-monitoring.md
+++ /dev/null
@@ -1,148 +0,0 @@
----
-id: deploy-monitoring
-title: Monitor
-sidebar_label: "Monitor"
-original_id: deploy-monitoring
----
-
-You can monitor a Pulsar cluster in different ways, exposing both metrics related to the usage of topics and the overall health of the individual components of the cluster.
-
-## Collect metrics
-
-You can collect broker stats, ZooKeeper stats, and BookKeeper stats.
-
-### Broker stats
-
-You can collect Pulsar broker metrics from brokers and export the metrics in JSON format. The Pulsar broker metrics mainly have two types:
-
-* *Destination dumps*, which contain stats for each individual topic. You can fetch the destination dumps using the command below:
-
-  ```shell
-
-  bin/pulsar-admin broker-stats destinations
-
-  ```
-
-* Broker metrics, which contain the broker information and topic stats aggregated at the namespace level. You can fetch the broker metrics by using the following command:
-
-  ```shell
-
-  bin/pulsar-admin broker-stats monitoring-metrics
-
-  ```
-
-All the message rates are updated every minute.
-
-The aggregated broker metrics are also exposed in the [Prometheus](https://prometheus.io) format at:
-
-```shell
-
-http://$BROKER_ADDRESS:8080/metrics/
-
-```
-
-### ZooKeeper stats
-
-The local ZooKeeper, configuration store server and clients that are shipped with Pulsar can expose detailed stats through Prometheus.
-
-```shell
-
-http://$LOCAL_ZK_SERVER:8000/metrics
-http://$GLOBAL_ZK_SERVER:8001/metrics
-
-```
-
-The default port of local ZooKeeper is `8000` and the default port of the configuration store is `8001`. You can use a different stats port by configuring `metricsProvider.httpPort` in the `conf/zookeeper.conf` file.
-
-### BookKeeper stats
-
-You can configure the stats frameworks for BookKeeper by modifying the `statsProviderClass` in the `conf/bookkeeper.conf` file.
-
-The default BookKeeper configuration enables the Prometheus exporter. The configuration is included with the Pulsar distribution.
-
-```shell
-
-http://$BOOKIE_ADDRESS:8000/metrics
-
-```
-
-The default port for bookie is `8000`. You can change the port by configuring `prometheusStatsHttpPort` in the `conf/bookkeeper.conf` file.
-
-### Managed cursor acknowledgment state
-The acknowledgment state is first persisted to the ledger. When persisting the acknowledgment state to the ledger fails, it is persisted to ZooKeeper. To track acknowledgment stats, you can configure the following metrics for the managed cursor.
-
-```
-
-brk_ml_cursor_persistLedgerSucceed(namespace="", ledger_name="", cursor_name="")
-brk_ml_cursor_persistLedgerErrors(namespace="", ledger_name="", cursor_name="")
-brk_ml_cursor_persistZookeeperSucceed(namespace="", ledger_name="", cursor_name="")
-brk_ml_cursor_persistZookeeperErrors(namespace="", ledger_name="", cursor_name="")
-brk_ml_cursor_nonContiguousDeletedMessagesRange(namespace="", ledger_name="", cursor_name="")
-
-```
-
-These metrics are exposed through the Prometheus interface, and you can monitor and check them in Grafana.
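-
-For a quick sanity check outside of Grafana, you can scrape the broker's Prometheus endpoint directly and filter for the cursor metrics (a minimal sketch, assuming the default broker HTTP port shown above):
-
-```shell
-
-curl -s http://$BROKER_ADDRESS:8080/metrics/ | grep brk_ml_cursor
-
-```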
-
-### Function and connector stats
-
-You can collect functions worker stats from `functions-worker` and export the metrics in JSON format, which contains functions worker JVM metrics.
-
-```
-
-pulsar-admin functions-worker monitoring-metrics
-
-```
-
-You can collect functions and connectors metrics from `functions-worker` and export the metrics in JSON format.
-
-```
-
-pulsar-admin functions-worker function-stats
-
-```
-
-The aggregated functions and connectors metrics can be exposed in the Prometheus format as below. You can get [`FUNCTIONS_WORKER_ADDRESS`](http://pulsar.apache.org/docs/en/next/functions-worker/) and `WORKER_PORT` from the `functions_worker.yml` file.
-
-```
-
-http://$FUNCTIONS_WORKER_ADDRESS:$WORKER_PORT/metrics
-
-```
-
-## Configure Prometheus
-
-You can use Prometheus to collect all the metrics exposed for Pulsar components and set up [Grafana](https://grafana.com/) dashboards to display the metrics and monitor your Pulsar cluster. For details, refer to the [Prometheus guide](https://prometheus.io/docs/introduction/getting_started/).
-
-When you run Pulsar on bare metal, you can provide the list of nodes to be probed. When you deploy Pulsar in a Kubernetes cluster, the monitoring is set up automatically. For details, refer to the [Kubernetes instructions](helm-deploy.md).
-
-## Dashboards
-
-When you collect time series statistics, the major problem is to make sure the number of dimensions attached to the data does not explode. Thus, you only need to collect time series of metrics aggregated at the namespace level.
-
-### Pulsar per-topic dashboard
-
-The per-topic dashboard instructions are available at [Pulsar manager](administration-pulsar-manager.md).
-
-### Grafana
-
-You can use Grafana to create dashboards driven by the data that is stored in Prometheus.
-
-When you deploy Pulsar on Kubernetes, a `pulsar-grafana` Docker image is enabled by default. You can use this image with the principal dashboards.
-
-Enter the command below to use the dashboard manually:
-
-```shell
-
-docker run -p3000:3000 \
-  -e PROMETHEUS_URL=http://$PROMETHEUS_HOST:9090/ \
-  apachepulsar/pulsar-grafana:latest
-
-```
-
-The following are some Grafana dashboard examples:
-
-- [pulsar-grafana](http://pulsar.apache.org/docs/en/deploy-monitoring/#grafana): a Grafana dashboard that displays metrics collected in Prometheus for Pulsar clusters running on Kubernetes.
-- [apache-pulsar-grafana-dashboard](https://github.com/streamnative/apache-pulsar-grafana-dashboard): a collection of Grafana dashboard templates for different Pulsar components running on both Kubernetes and on-premise machines.
-
-## Alerting rules
-
-You can set alerting rules according to your Pulsar environment. To configure alerting rules for Apache Pulsar, refer to [alerting rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/).
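-
-As a starting point, the following is a minimal, hypothetical Prometheus alerting rule built on the managed cursor metrics described above. The metric name, the threshold, and the assumption that the error metric behaves like a counter are all assumptions to verify against your environment:
-
-```yaml
-
-groups:
-  - name: pulsar-broker-alerts
-    rules:
-      - alert: ManagedCursorPersistErrors
-        # Assumes brk_ml_cursor_persistZookeeperErrors is exposed as a counter.
-        expr: rate(brk_ml_cursor_persistZookeeperErrors[5m]) > 0
-        for: 5m
-        labels:
-          severity: warning
-        annotations:
-          summary: "Managed cursor failed to persist acknowledgment state to ZooKeeper"
-
-```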
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/develop-load-manager.md b/site2/website/versioned_docs/version-2.9.2-deprecated/develop-load-manager.md
deleted file mode 100644
index 509209b6a852d8..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/develop-load-manager.md
+++ /dev/null
@@ -1,227 +0,0 @@
----
-id: develop-load-manager
-title: Modular load manager
-sidebar_label: "Modular load manager"
-original_id: develop-load-manager
----
-
-The *modular load manager*, implemented in [`ModularLoadManagerImpl`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/impl/ModularLoadManagerImpl.java), is a flexible alternative to the previously implemented load manager, [`SimpleLoadManagerImpl`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/impl/SimpleLoadManagerImpl.java). It attempts to simplify how load is managed while also providing abstractions so that complex load management strategies may be implemented.
-
-## Usage
-
-There are two ways that you can enable the modular load manager:
-
-1. Change the value of the `loadManagerClassName` parameter in `conf/broker.conf` from `org.apache.pulsar.broker.loadbalance.impl.SimpleLoadManagerImpl` to `org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl`.
-2. Using the `pulsar-admin` tool. Here's an example:
-
-   ```shell
-
-   $ pulsar-admin brokers update-dynamic-config \
-     --config loadManagerClassName \
-     --value org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl
-
-   ```
-
-   You can use the same method to change back to the original value. In either case, any mistake in specifying the load manager will cause Pulsar to default to `SimpleLoadManagerImpl`.
-
-## Verification
-
-There are a few different ways to determine which load manager is being used:
-
-1. Use `pulsar-admin` to examine the `loadManagerClassName` element:
-
-   ```shell
-
-   $ bin/pulsar-admin brokers get-all-dynamic-config
-   {
-     "loadManagerClassName" : "org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl"
-   }
-
-   ```
-
-   If there is no `loadManagerClassName` element, then the default load manager is used.
-
-2. Consult a ZooKeeper load report. With the modular load manager, the load report in `/loadbalance/brokers/...` will have many differences. For example, the `systemResourceUsage` sub-elements (`bandwidthIn`, `bandwidthOut`, etc.) are now all at the top level. Here is an example load report from the modular load manager:
-
-   ```json
-
-   {
-     "bandwidthIn": {
-       "limit": 10240000.0,
-       "usage": 4.256510416666667
-     },
-     "bandwidthOut": {
-       "limit": 10240000.0,
-       "usage": 5.287239583333333
-     },
-     "bundles": [],
-     "cpu": {
-       "limit": 2400.0,
-       "usage": 5.7353247655435915
-     },
-     "directMemory": {
-       "limit": 16384.0,
-       "usage": 1.0
-     }
-   }
-
-   ```
-
-   With the simple load manager, the load report in `/loadbalance/brokers/...` will look like this:
-
-   ```json
-
-   {
-     "systemResourceUsage": {
-       "bandwidthIn": {
-         "limit": 10240000.0,
-         "usage": 0.0
-       },
-       "bandwidthOut": {
-         "limit": 10240000.0,
-         "usage": 0.0
-       },
-       "cpu": {
-         "limit": 2400.0,
-         "usage": 0.0
-       },
-       "directMemory": {
-         "limit": 16384.0,
-         "usage": 1.0
-       },
-       "memory": {
-         "limit": 8192.0,
-         "usage": 3903.0
-       }
-     }
-   }
-
-   ```
-
-3. The command-line [broker monitor](reference-cli-tools.md#monitor-brokers) will have a different output format depending on which load manager implementation is being used.
-
-   Here is an example from the modular load manager:
-
-   ```
-
-   ===================================================================================================================
-   ||SYSTEM         |CPU %          |MEMORY %       |DIRECT %       |BW IN %        |BW OUT %       |MAX %          ||
-   ||               |0.00           |48.33          |0.01           |0.00           |0.00           |48.33          ||
-   ||COUNT          |TOPIC          |BUNDLE         |PRODUCER       |CONSUMER       |BUNDLE +       |BUNDLE -       ||
-   ||               |4              |4              |0              |2              |4              |0              ||
-   ||LATEST         |MSG/S IN       |MSG/S OUT      |TOTAL          |KB/S IN        |KB/S OUT       |TOTAL          ||
-   ||               |0.00           |0.00           |0.00           |0.00           |0.00           |0.00           ||
-   ||SHORT          |MSG/S IN       |MSG/S OUT      |TOTAL          |KB/S IN        |KB/S OUT       |TOTAL          ||
-   ||               |0.00           |0.00           |0.00           |0.00           |0.00           |0.00           ||
-   ||LONG           |MSG/S IN       |MSG/S OUT      |TOTAL          |KB/S IN        |KB/S OUT       |TOTAL          ||
-   ||               |0.00           |0.00           |0.00           |0.00           |0.00           |0.00           ||
-   ===================================================================================================================
-
-   ```
-
-   Here is an example from the simple load manager:
-
-   ```
-
-   ===================================================================================================================
-   ||COUNT          |TOPIC          |BUNDLE         |PRODUCER       |CONSUMER       |BUNDLE +       |BUNDLE -       ||
-   ||               |4              |4              |0              |2              |0              |0              ||
-   ||RAW SYSTEM     |CPU %          |MEMORY %       |DIRECT %       |BW IN %        |BW OUT %       |MAX %          ||
-   ||               |0.25           |47.94          |0.01           |0.00           |0.00           |47.94          ||
-   ||ALLOC SYSTEM   |CPU %          |MEMORY %       |DIRECT %       |BW IN %        |BW OUT %       |MAX %          ||
-   ||               |0.20           |1.89           |               |1.27           |3.21           |3.21           ||
-   ||RAW MSG        |MSG/S IN       |MSG/S OUT      |TOTAL          |KB/S IN        |KB/S OUT       |TOTAL          ||
-   ||               |0.00           |0.00           |0.00           |0.01           |0.01           |0.01           ||
-   ||ALLOC MSG      |MSG/S IN       |MSG/S OUT      |TOTAL          |KB/S IN        |KB/S OUT       |TOTAL          ||
-   ||               |54.84          |134.48         |189.31         |126.54         |320.96         |447.50         ||
-   ===================================================================================================================
-
-   ```
-
-It is important to note that the modular load manager is _centralized_, meaning that all requests to assign a bundle---whether it's been seen before or whether this is the first time---only get handled by the _lead_ broker (which can change over time). To determine the current lead broker, examine the `/loadbalance/leader` node in ZooKeeper.
-
-## Implementation
-
-### Data
-
-The data monitored by the modular load manager is contained in the [`LoadData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/LoadData.java) class.
-Here, the available data is subdivided into the bundle data and the broker data.
-
-#### Broker
-
-The broker data is contained in the [`BrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/BrokerData.java) class. It is further subdivided into two parts,
-one being the local data which every broker individually writes to ZooKeeper, and the other being the historical broker
-data which is written to ZooKeeper by the leader broker.
-
-##### Local Broker Data
-The local broker data is contained in the class [`LocalBrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/java/org/apache/pulsar/policies/data/loadbalancer/LocalBrokerData.java) and provides information about the following resources:
-
-* CPU usage
-* JVM heap memory usage
-* Direct memory usage
-* Bandwidth in/out usage
-* Most recent total message rate in/out across all bundles
-* Total number of topics, bundles, producers, and consumers
-* Names of all bundles assigned to this broker
-* Most recent changes in bundle assignments for this broker
-
-The local broker data is updated periodically according to the service configuration
After any broker updates their local broker data, the leader broker will -receive the update immediately via a ZooKeeper watch, where the local data is read from the ZooKeeper node -`/loadbalance/brokers/` - -##### Historical Broker Data - -The historical broker data is contained in the [`TimeAverageBrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/TimeAverageBrokerData.java) class. - -In order to reconcile the need to make good decisions in a steady-state scenario and make reactive decisions in a critical scenario, the historical data is split into two parts: the short-term data for reactive decisions, and the long-term data for steady-state decisions. Both time frames maintain the following information: - -* Message rate in/out for the entire broker -* Message throughput in/out for the entire broker - -Unlike the bundle data, the broker data does not maintain samples for the global broker message rates and throughputs, which is not expected to remain steady as new bundles are removed or added. Instead, this data is aggregated over the short-term and long-term data for the bundles. See the section on bundle data to understand how that data is collected and maintained. - -The historical broker data is updated for each broker in memory by the leader broker whenever any broker writes their local data to ZooKeeper. Then, the historical data is written to ZooKeeper by the leader broker periodically according to the configuration `loadBalancerResourceQuotaUpdateIntervalMinutes`. - -##### Bundle Data - -The bundle data is contained in the [`BundleData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/BundleData.java). Like the historical broker data, the bundle data is split into a short-term and a long-term time frame. The information maintained in each time frame: - -* Message rate in/out for this bundle -* Message Throughput In/Out for this bundle -* Current number of samples for this bundle - -The time frames are implemented by maintaining the average of these values over a set, limited number of samples, where -the samples are obtained through the message rate and throughput values in the local data. Thus, if the update interval -for the local data is 2 minutes, the number of short samples is 10 and the number of long samples is 1000, the -short-term data is maintained over a period of `10 samples * 2 minutes / sample = 20 minutes`, while the long-term -data is similarly over a period of 2000 minutes. Whenever there are not enough samples to satisfy a given time frame, -the average is taken only over the existing samples. When no samples are available, default values are assumed until -they are overwritten by the first sample. Currently, the default values are - -* Message rate in/out: 50 messages per second both ways -* Message throughput in/out: 50KB per second both ways - -The bundle data is updated in memory on the leader broker whenever any broker writes their local data to ZooKeeper. -Then, the bundle data is written to ZooKeeper by the leader broker periodically at the same time as the historical -broker data, according to the configuration `loadBalancerResourceQuotaUpdateIntervalMinutes`. 
-
-### Traffic Distribution
-
-The modular load manager uses the abstraction provided by [`ModularLoadManagerStrategy`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/ModularLoadManagerStrategy.java) to make decisions about bundle assignment. The strategy makes a decision by considering the service configuration, the entire load data, and the bundle data for the bundle to be assigned. Currently, the only supported strategy is [`LeastLongTermMessageRate`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/impl/LeastLongTermMessageRate.java), though soon users will have the ability to inject their own strategies if desired.
-
-#### Least Long Term Message Rate Strategy
-
-As its name suggests, the least long term message rate strategy attempts to distribute bundles across brokers so that
-the message rate in the long-term time window for each broker is roughly the same. However, simply balancing load based
-on message rate does not handle the issue of asymmetric resource burden per message on each broker. Thus, the system
-resource usages, which are CPU, memory, direct memory, bandwidth in, and bandwidth out, are also considered in the
-assignment process. This is done by weighting the final message rate according to
-`1 / (overload_threshold - max_usage)`, where `overload_threshold` corresponds to the configuration
-`loadBalancerBrokerOverloadedThresholdPercentage` and `max_usage` is the maximum proportion among the system resources
-that is being utilized by the candidate broker. This multiplier ensures that machines that are more heavily taxed
-by the same message rates will receive less load. For example, with an overload threshold of 85%, the message rate of a
-broker at 80% maximum usage is weighted by `1 / (0.85 - 0.80) = 20`, while that of a broker at 45% usage is weighted by
-only `1 / 0.40 = 2.5`, so the more heavily loaded broker appears far busier and receives fewer new bundles. In
-particular, this tries to ensure that if one machine is overloaded, then all machines are approximately overloaded. In
-the case in which a broker's max usage exceeds the overload threshold, that broker is not considered for bundle
-assignment. If all brokers are overloaded, the bundle is randomly assigned.
-
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/develop-schema.md b/site2/website/versioned_docs/version-2.9.2-deprecated/develop-schema.md
deleted file mode 100644
index 2d4461a5ea2b55..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/develop-schema.md
+++ /dev/null
@@ -1,62 +0,0 @@
----
-id: develop-schema
-title: Custom schema storage
-sidebar_label: "Custom schema storage"
-original_id: develop-schema
----
-
-By default, Pulsar stores data type [schemas](concepts-schema-registry.md) in [Apache BookKeeper](https://bookkeeper.apache.org) (which is deployed alongside Pulsar). You can, however, use another storage system if you wish. This doc walks you through creating your own schema storage implementation.
-
-In order to use a non-default (i.e. non-BookKeeper) storage system for Pulsar schemas, you need to implement two Java interfaces: [`SchemaStorage`](#schemastorage-interface) and [`SchemaStorageFactory`](#schemastoragefactory-interface).
-
-## SchemaStorage interface
-
-The `SchemaStorage` interface has the following methods:
-
-```java
-
-public interface SchemaStorage {
-    // How schemas are updated
-    CompletableFuture<SchemaVersion> put(String key, byte[] value, byte[] hash);
-
-    // How schemas are fetched from storage
-    CompletableFuture<StoredSchema> get(String key, SchemaVersion version);
-
-    // How schemas are deleted
-    CompletableFuture<SchemaVersion> delete(String key);
-
-    // Utility method for converting a schema version byte array to a SchemaVersion object
-    SchemaVersion versionFromBytes(byte[] version);
-
-    // Startup behavior for the schema storage client
-    void start() throws Exception;
-
-    // Shutdown behavior for the schema storage client
-    void close() throws Exception;
-}
-
-```
-
-> For a full-fledged example schema storage implementation, see the [`BookKeeperSchemaStorage`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/schema/BookkeeperSchemaStorage.java) class.
-
-## SchemaStorageFactory interface
-
-```java
-
-public interface SchemaStorageFactory {
-    @NotNull
-    SchemaStorage create(PulsarService pulsar) throws Exception;
-}
-
-```
-
-> For a full-fledged example schema storage factory implementation, see the [`BookKeeperSchemaStorageFactory`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/schema/BookkeeperSchemaStorageFactory.java) class.
-
-## Deployment
-
-In order to use your custom schema storage implementation, you'll need to:
-
-1. Package the implementation in a [JAR](https://docs.oracle.com/javase/tutorial/deployment/jar/basicsindex.html) file.
-1. Add that jar to the `lib` folder in your Pulsar [binary or source distribution](getting-started-standalone.md#installing-pulsar).
-1. Change the `schemaRegistryStorageClassName` configuration in [`broker.conf`](reference-configuration.md#broker) to your custom factory class (i.e. the `SchemaStorageFactory` implementation, not the `SchemaStorage` implementation).
-1. Start up Pulsar.
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/develop-tools.md b/site2/website/versioned_docs/version-2.9.2-deprecated/develop-tools.md
deleted file mode 100644
index bc7c29e836e6ac..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/develop-tools.md
+++ /dev/null
@@ -1,112 +0,0 @@
----
-id: develop-tools
-title: Simulation tools
-sidebar_label: "Simulation tools"
-original_id: develop-tools
----
-
-It is sometimes necessary to create a test environment and incur artificial load to observe how well load managers
-handle the load. The load simulation controller, the load simulation client, and the broker monitor were created to
-make it easier to create this load and observe its effects on the managers.
-
-## Simulation Client
-The simulation client is a machine which will create and subscribe to topics with configurable message rates and sizes.
-Because simulating large load sometimes requires multiple client machines, the user does not interact
-with the simulation client directly, but instead delegates their requests to the simulation controller, which will then
-send signals to clients to start incurring load. The client implementation is in the class
-`org.apache.pulsar.testclient.LoadSimulationClient`.
-
-### Usage
-To start a simulation client, use the `pulsar-perf` script with the command `simulation-client` as follows:
-
-```
-
-pulsar-perf simulation-client --port --service-url
-
-```
-
-The client will then be ready to receive controller commands.
-## Simulation Controller
-The simulation controller sends signals to the simulation clients, requesting them to create new topics, stop old
-topics, change the load incurred by topics, as well as several other tasks. It is implemented in the class
-`org.apache.pulsar.testclient.LoadSimulationController` and presents a shell to the user as an interface to send
-commands with.
-
-### Usage
-To start a simulation controller, use the `pulsar-perf` script with the command `simulation-controller` as follows:
-
-```
-
-pulsar-perf simulation-controller --cluster --client-port
---clients
-
-```
-
-The clients should already be started before the controller is started. You will then be presented with a simple prompt,
-where you can issue commands to simulation clients. Arguments often refer to tenant names, namespace names, and topic
-names. In all cases, the BASE name of the tenants, namespaces, and topics is used. For example, for the topic
-`persistent://my_tenant/my_cluster/my_namespace/my_topic`, the tenant name is `my_tenant`, the namespace name is
-`my_namespace`, and the topic name is `my_topic`. The controller can perform the following actions:
-
-* Create a topic with a producer and a consumer
-  * `trade [--rate ]
-  [--rand-rate ,]
-  [--size ]`
-* Create a group of topics with a producer and a consumer
-  * `trade_group [--rate ]
-  [--rand-rate ,]
-  [--separation ] [--size ]
-  [--topics-per-namespace ]`
-* Change the configuration of an existing topic
-  * `change [--rate ]
-  [--rand-rate ,]
-  [--size ]`
-* Change the configuration of a group of topics
-  * `change_group [--rate ] [--rand-rate ,]
-  [--size ] [--topics-per-namespace ]`
-* Shutdown a previously created topic
-  * `stop `
-* Shutdown a previously created group of topics
-  * `stop_group `
-* Copy the historical data from one ZooKeeper to another and simulate based on the message rates and sizes in that
-history
-  * `copy [--rate-multiplier value]`
-* Simulate the load of the historical data on the current ZooKeeper (should be same ZooKeeper being simulated on)
-  * `simulate [--rate-multiplier value]`
-* Stream the latest data from the given active ZooKeeper to simulate the real-time load of that ZooKeeper.
-  * `stream [--rate-multiplier value]`
-
-The "group" arguments in these commands allow the user to create or affect multiple topics at once. Groups are created
-when calling the `trade_group` command, and all topics from these groups may be subsequently modified or stopped
-with the `change_group` and `stop_group` commands respectively. All ZooKeeper arguments are of the form
-`zookeeper_host:port`.
-
-### Difference Between Copy, Simulate, and Stream
-The commands `copy`, `simulate`, and `stream` are very similar but have significant differences. `copy` is used when
-you want to simulate the load of a static, external ZooKeeper on the ZooKeeper you are simulating on. Thus,
-`source zookeeper` should be the ZooKeeper you want to copy and `target zookeeper` should be the ZooKeeper you are
-simulating on, and then it will get the full benefit of the historical data of the source in both load manager
-implementations. `simulate` on the other hand takes in only one ZooKeeper, the one you are simulating on.
It assumes -that you are simulating on a ZooKeeper that has historical data for `SimpleLoadManagerImpl` and creates equivalent -historical data for `ModularLoadManagerImpl`. Then, the load according to the historical data is simulated by the -clients. Finally, `stream` takes in an active ZooKeeper different than the ZooKeeper being simulated on and streams -load data from it and simulates the real-time load. In all cases, the optional `rate-multiplier` argument allows the -user to simulate some proportion of the load. For instance, using `--rate-multiplier 0.05` will cause messages to -be sent at only `5%` of the rate of the load that is being simulated. - -## Broker Monitor -To observe the behavior of the load manager in these simulations, one may utilize the broker monitor, which is -implemented in `org.apache.pulsar.testclient.BrokerMonitor`. The broker monitor will print tabular load data to the -console as it is updated using watchers. - -### Usage -To start a broker monitor, use the `monitor-brokers` command in the `pulsar-perf` script: - -``` - -pulsar-perf monitor-brokers --connect-string - -``` - -The console will then continuously print load data until it is interrupted. - diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/developing-binary-protocol.md b/site2/website/versioned_docs/version-2.9.2-deprecated/developing-binary-protocol.md deleted file mode 100644 index b34b7a4cf90354..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/developing-binary-protocol.md +++ /dev/null @@ -1,606 +0,0 @@ ---- -id: developing-binary-protocol -title: Pulsar binary protocol specification -sidebar_label: "Binary protocol" -original_id: developing-binary-protocol ---- - -Pulsar uses a custom binary protocol for communications between producers/consumers and brokers. This protocol is designed to support required features, such as acknowledgements and flow control, while ensuring maximum transport and implementation efficiency. - -Clients and brokers exchange *commands* with each other. Commands are formatted as binary [protocol buffer](https://developers.google.com/protocol-buffers/) (aka *protobuf*) messages. The format of protobuf commands is specified in the [`PulsarApi.proto`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/proto/PulsarApi.proto) file and also documented in the [Protobuf interface](#protobuf-interface) section below. - -> ### Connection sharing -> Commands for different producers and consumers can be interleaved and sent through the same connection without restriction. - -All commands associated with Pulsar's protocol are contained in a [`BaseCommand`](#pulsar.proto.BaseCommand) protobuf message that includes a [`Type`](#pulsar.proto.Type) [enum](https://developers.google.com/protocol-buffers/docs/proto#enum) with all possible subcommands as optional fields. `BaseCommand` messages can specify only one subcommand. - -## Framing - -Since protobuf doesn't provide any sort of message frame, all messages in the Pulsar protocol are prepended with a 4-byte field that specifies the size of the frame. The maximum allowable size of a single frame is 5 MB. - -The Pulsar protocol allows for two types of commands: - -1. **Simple commands** that do not carry a message payload. -2. **Payload commands** that bear a payload that is used when publishing or delivering messages. 
In payload commands, the protobuf command data is followed by protobuf [metadata](#message-metadata) and then the payload, which is passed in raw format outside of protobuf. All sizes are passed as 4-byte unsigned big endian integers. - -> Message payloads are passed in raw format rather than protobuf format for efficiency reasons. - -### Simple commands - -Simple (payload-free) commands have this basic structure: - -| Component | Description | Size (in bytes) | -|:------------|:----------------------------------------------------------------------------------------|:----------------| -| totalSize | The size of the frame, counting everything that comes after it (in bytes) | 4 | -| commandSize | The size of the protobuf-serialized command | 4 | -| message | The protobuf message serialized in a raw binary format (rather than in protobuf format) | | - -### Payload commands - -Payload commands have this basic structure: - -| Component | Description | Size (in bytes) | -|:-------------|:--------------------------------------------------------------------------------------------|:----------------| -| totalSize | The size of the frame, counting everything that comes after it (in bytes) | 4 | -| commandSize | The size of the protobuf-serialized command | 4 | -| message | The protobuf message serialized in a raw binary format (rather than in protobuf format) | | -| magicNumber | A 2-byte byte array (`0x0e01`) identifying the current format | 2 | -| checksum | A [CRC32-C checksum](http://www.evanjones.ca/crc32c.html) of everything that comes after it | 4 | -| metadataSize | The size of the message [metadata](#message-metadata) | 4 | -| metadata | The message [metadata](#message-metadata) stored as a binary protobuf message | | -| payload | Anything left in the frame is considered the payload and can include any sequence of bytes | | - -## Message metadata - -Message metadata is stored alongside the application-specified payload as a serialized protobuf message. Metadata is created by the producer and passed on unchanged to the consumer. - -| Field | Description | -|:-------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| `producer_name` | The name of the producer that published the message | -| `sequence_id` | The sequence ID of the message, assigned by producer | -| `publish_time` | The publish timestamp in Unix time (i.e. as the number of milliseconds since January 1st, 1970 in UTC) | -| `properties` | A sequence of key/value pairs (using the [`KeyValue`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/proto/PulsarApi.proto#L32) message). These are application-defined keys and values with no special meaning to Pulsar. | -| `replicated_from` *(optional)* | Indicates that the message has been replicated and specifies the name of the [cluster](reference-terminology.md#cluster) where the message was originally published | -| `partition_key` *(optional)* | While publishing on a partition topic, if the key is present, the hash of the key is used to determine which partition to choose. Partition key is used as the message key. 
|
-| `compression` *(optional)* | Signals that the payload has been compressed and with which compression library |
-| `uncompressed_size` *(optional)* | If compression is used, the producer must fill the uncompressed size field with the original payload size |
-| `num_messages_in_batch` *(optional)* | If this message is really a [batch](#batch-messages) of multiple entries, this field must be set to the number of messages in the batch |
-
-### Batch messages
-
-When using batch messages, the payload contains a list of entries,
-each of them with its individual metadata, defined by the `SingleMessageMetadata`
-object.
-
-
-For a single batch, the payload format will look like this:
-
-
-| Field         | Description                                                  |
-|:--------------|:-------------------------------------------------------------|
-| metadataSizeN | The size of the single message metadata serialized Protobuf  |
-| metadataN     | Single message metadata                                      |
-| payloadN      | Message payload passed by application                        |
-
-Each metadata field looks like this:
-
-| Field                      | Description                                               |
-|:---------------------------|:---------------------------------------------------------|
-| properties                 | Application-defined properties                            |
-| partition key *(optional)* | Key to indicate the hashing to a particular partition     |
-| payload_size               | Size of the payload for the single message in the batch   |
-
-When compression is enabled, the whole batch will be compressed at once.
-
-## Interactions
-
-### Connection establishment
-
-After opening a TCP connection to a broker, typically on port 6650, the client
-is responsible for initiating the session.
-
-![Connect interaction](/assets/binary-protocol-connect.png)
-
-After receiving a `Connected` response from the broker, the client can
-consider the connection ready to use. Alternatively, if the broker fails to
-validate the client's authentication, it will reply with an `Error` command and
-close the TCP connection.
-
-Example:
-
-```protobuf
-
-message CommandConnect {
-  "client_version" : "Pulsar-Client-Java-v1.15.2",
-  "auth_method_name" : "my-authentication-plugin",
-  "auth_data" : "my-auth-data",
-  "protocol_version" : 6
-}
-
-```
-
-Fields:
- * `client_version` → String based identifier. Format is not enforced
- * `auth_method_name` → *(optional)* Name of the authentication plugin if auth
-   enabled
- * `auth_data` → *(optional)* Plugin specific authentication data
- * `protocol_version` → Indicates the protocol version supported by the
-   client. Broker will not send commands introduced in newer revisions of the
-   protocol. Broker might be enforcing a minimum version
-
-```protobuf
-
-message CommandConnected {
-  "server_version" : "Pulsar-Broker-v1.15.2",
-  "protocol_version" : 6
-}
-
-```
-
-Fields:
- * `server_version` → String identifier of broker version
- * `protocol_version` → Protocol version supported by the broker. Client
-   must not attempt to send commands introduced in newer revisions of the
-   protocol
-
-### Keep Alive
-
-To identify prolonged network partitions between clients and brokers or cases
-in which a machine crashes without interrupting the TCP connection on the remote
-end (eg: power outage, kernel panic, hard reboot...), we have introduced a
-mechanism to probe for the availability status of the remote peer.
-
-Both clients and brokers send `Ping` commands periodically and close the
-socket if a `Pong` response is not received within a timeout (the default
-used by the broker is 60s).
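-
-As an illustration of the probing mechanism described above, here is a minimal
-Java sketch of a keep-alive watchdog (a hypothetical illustration, not the
-actual client code; `sendPingCommand()` and `closeSocket()` are placeholders):
-a `Ping` is sent periodically and the connection is closed if no `Pong`
-arrives within the timeout.
-
-```java
-
-import java.util.concurrent.Executors;
-import java.util.concurrent.ScheduledExecutorService;
-import java.util.concurrent.TimeUnit;
-
-class KeepAliveWatchdog {
-    private static final long TIMEOUT_NANOS = TimeUnit.SECONDS.toNanos(60); // broker default
-    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
-    private volatile long lastPongNanos = System.nanoTime();
-
-    void start() {
-        scheduler.scheduleAtFixedRate(() -> {
-            if (System.nanoTime() - lastPongNanos > TIMEOUT_NANOS) {
-                closeSocket(); // no Pong within the timeout: assume the peer is gone
-            } else {
-                sendPingCommand(); // periodic probe
-            }
-        }, 30, 30, TimeUnit.SECONDS);
-    }
-
-    void onPongReceived() {
-        lastPongNanos = System.nanoTime(); // record the most recent Pong
-    }
-
-    private void sendPingCommand() { /* write a Ping command to the connection */ }
-
-    private void closeSocket() { /* tear down the TCP connection */ }
-}
-
-```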
- -A valid implementation of a Pulsar client is not required to send the `Ping` -probe, though it is required to promptly reply after receiving one from the -broker in order to prevent the remote side from forcibly closing the TCP connection. - - -### Producer - -In order to send messages, a client needs to establish a producer. When creating -a producer, the broker will first verify that this particular client is -authorized to publish on the topic. - -Once the client gets confirmation of the producer creation, it can publish -messages to the broker, referring to the producer id negotiated before. - -![Producer interaction](/assets/binary-protocol-producer.png) - -##### Command Producer - -```protobuf - -message CommandProducer { - "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic", - "producer_id" : 1, - "request_id" : 1 -} - -``` - -Parameters: - * `topic` → Complete topic name to where you want to create the producer on - * `producer_id` → Client generated producer identifier. Needs to be unique - within the same connection - * `request_id` → Identifier for this request. Used to match the response with - the originating request. Needs to be unique within the same connection - * `producer_name` → *(optional)* If a producer name is specified, the name will - be used, otherwise the broker will generate a unique name. Generated - producer name is guaranteed to be globally unique. Implementations are - expected to let the broker generate a new producer name when the producer - is initially created, then reuse it when recreating the producer after - reconnections. - -The broker will reply with either `ProducerSuccess` or `Error` commands. - -##### Command ProducerSuccess - -```protobuf - -message CommandProducerSuccess { - "request_id" : 1, - "producer_name" : "generated-unique-producer-name" -} - -``` - -Parameters: - * `request_id` → Original id of the `CreateProducer` request - * `producer_name` → Generated globally unique producer name or the name - specified by the client, if any. - -##### Command Send - -Command `Send` is used to publish a new message within the context of an -already existing producer. This command is used in a frame that includes command -as well as message payload, for which the complete format is specified in the [payload commands](#payload-commands) section. - -```protobuf - -message CommandSend { - "producer_id" : 1, - "sequence_id" : 0, - "num_messages" : 1 -} - -``` - -Parameters: - * `producer_id` → id of an existing producer - * `sequence_id` → each message has an associated sequence id which is expected - to be implemented with a counter starting at 0. The `SendReceipt` that - acknowledges the effective publishing of messages will refer to it by - its sequence id. - * `num_messages` → *(optional)* Used when publishing a batch of messages at - once. - -##### Command SendReceipt - -After a message has been persisted on the configured number of replicas, the -broker will send the acknowledgment receipt to the producer. - -```protobuf - -message CommandSendReceipt { - "producer_id" : 1, - "sequence_id" : 0, - "message_id" : { - "ledgerId" : 123, - "entryId" : 456 - } -} - -``` - -Parameters: - * `producer_id` → id of producer originating the send request - * `sequence_id` → sequence id of the published message - * `message_id` → message id assigned by the system to the published message - Unique within a single cluster. 
Message id is composed of 2 longs, `ledgerId` - and `entryId`, that reflect that this unique id is assigned when appending - to a BookKeeper ledger - - -##### Command CloseProducer - -**Note**: *This command can be sent by either producer or broker*. - -When receiving a `CloseProducer` command, the broker will stop accepting any -more messages for the producer, wait until all pending messages are persisted -and then reply `Success` to the client. - -The broker can send a `CloseProducer` command to client when it's performing -a graceful failover (eg: broker is being restarted, or the topic is being unloaded -by load balancer to be transferred to a different broker). - -When receiving the `CloseProducer`, the client is expected to go through the -service discovery lookup again and recreate the producer again. The TCP -connection is not affected. - -### Consumer - -A consumer is used to attach to a subscription and consume messages from it. -After every reconnection, a client needs to subscribe to the topic. If a -subscription is not already there, a new one will be created. - -![Consumer](/assets/binary-protocol-consumer.png) - -#### Flow control - -After the consumer is ready, the client needs to *give permission* to the -broker to push messages. This is done with the `Flow` command. - -A `Flow` command gives additional *permits* to send messages to the consumer. -A typical consumer implementation will use a queue to accumulate these messages -before the application is ready to consume them. - -After the application has dequeued half of the messages in the queue, the consumer -sends permits to the broker to ask for more messages (equals to half of the messages in the queue). - -For example, if the queue size is 1000 and the consumer consumes 500 messages in the queue. -Then the consumer sends permits to the broker to ask for 500 messages. - -##### Command Subscribe - -```protobuf - -message CommandSubscribe { - "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic", - "subscription" : "my-subscription-name", - "subType" : "Exclusive", - "consumer_id" : 1, - "request_id" : 1 -} - -``` - -Parameters: - * `topic` → Complete topic name to where you want to create the consumer on - * `subscription` → Subscription name - * `subType` → Subscription type: Exclusive, Shared, Failover, Key_Shared - * `consumer_id` → Client generated consumer identifier. Needs to be unique - within the same connection - * `request_id` → Identifier for this request. Used to match the response with - the originating request. Needs to be unique within the same connection - * `consumer_name` → *(optional)* Clients can specify a consumer name. This - name can be used to track a particular consumer in the stats. Also, in - Failover subscription type, the name is used to decide which consumer is - elected as *master* (the one receiving messages): consumers are sorted by - their consumer name and the first one is elected master. - -##### Command Flow - -```protobuf - -message CommandFlow { - "consumer_id" : 1, - "messagePermits" : 1000 -} - -``` - -Parameters: -* `consumer_id` → Id of an already established consumer -* `messagePermits` → Number of additional permits to grant to the broker for - pushing more messages - -##### Command Message - -Command `Message` is used by the broker to push messages to an existing consumer, -within the limits of the given permits. 
- - -This command is used in a frame that includes the message payload as well, for -which the complete format is specified in the [payload commands](#payload-commands) -section. - -```protobuf - -message CommandMessage { - "consumer_id" : 1, - "message_id" : { - "ledgerId" : 123, - "entryId" : 456 - } -} - -``` - -##### Command Ack - -An `Ack` is used to signal to the broker that a given message has been -successfully processed by the application and can be discarded by the broker. - -In addition, the broker will also maintain the consumer position based on the -acknowledged messages. - -```protobuf - -message CommandAck { - "consumer_id" : 1, - "ack_type" : "Individual", - "message_id" : { - "ledgerId" : 123, - "entryId" : 456 - } -} - -``` - -Parameters: - * `consumer_id` → Id of an already established consumer - * `ack_type` → Type of acknowledgment: `Individual` or `Cumulative` - * `message_id` → Id of the message to acknowledge - * `validation_error` → *(optional)* Indicates that the consumer has discarded - the messages due to: `UncompressedSizeCorruption`, - `DecompressionError`, `ChecksumMismatch`, `BatchDeSerializeError` - * `properties` → *(optional)* Reserved configuration items - * `txnid_most_bits` → *(optional)* Same as Transaction Coordinator ID, `txnid_most_bits` and `txnid_least_bits` - uniquely identify a transaction. - * `txnid_least_bits` → *(optional)* The ID of the transaction opened in a transaction coordinator, - `txnid_most_bits` and `txnid_least_bits`uniquely identify a transaction. - * `request_id` → *(optional)* ID for handling response and timeout. - - - ##### Command AckResponse - -An `AckResponse` is the broker’s response to acknowledge a request sent by the client. It contains the `consumer_id` sent in the request. -If a transaction is used, it contains both the Transaction ID and the Request ID that are sent in the request. The client finishes the specific request according to the Request ID. If the `error` field is set, it indicates that the request has failed. - -An example of `AckResponse` with redirection: - -```protobuf - -message CommandAckResponse { - "consumer_id" : 1, - "txnid_least_bits" = 0, - "txnid_most_bits" = 1, - "request_id" = 5 -} - -``` - -##### Command CloseConsumer - -***Note***: **This command can be sent by either producer or broker*. - -This command behaves the same as [`CloseProducer`](#command-closeproducer) - -##### Command RedeliverUnacknowledgedMessages - -A consumer can ask the broker to redeliver some or all of the pending messages -that were pushed to that particular consumer and not yet acknowledged. - -The protobuf object accepts a list of message ids that the consumer wants to -be redelivered. If the list is empty, the broker will redeliver all the -pending messages. - -On redelivery, messages can be sent to the same consumer or, in the case of a -shared subscription, spread across all available consumers. - - -##### Command ReachedEndOfTopic - -This is sent by a broker to a particular consumer, whenever the topic -has been "terminated" and all the messages on the subscription were -acknowledged. - -The client should use this command to notify the application that no more -messages are coming from the consumer. - -##### Command ConsumerStats - -This command is sent by the client to retrieve Subscriber and Consumer level -stats from the broker. -Parameters: - * `request_id` → Id of the request, used to correlate the request - and the response. - * `consumer_id` → Id of an already established consumer. 
- -##### Command ConsumerStatsResponse - -This is the broker's response to ConsumerStats request by the client. -It contains the Subscriber and Consumer level stats of the `consumer_id` sent in the request. -If the `error_code` or the `error_message` field is set it indicates that the request has failed. - -##### Command Unsubscribe - -This command is sent by the client to unsubscribe the `consumer_id` from the associated topic. -Parameters: - * `request_id` → Id of the request. - * `consumer_id` → Id of an already established consumer which needs to unsubscribe. - - -## Service discovery - -### Topic lookup - -Topic lookup needs to be performed each time a client needs to create or -reconnect a producer or a consumer. Lookup is used to discover which particular -broker is serving the topic we are about to use. - -Lookup can be done with a REST call as described in the [admin API](admin-api-topics.md#look-up-topics-owner-broker) -docs. - -Since Pulsar-1.16 it is also possible to perform the lookup within the binary -protocol. - -For the sake of example, let's assume we have a service discovery component -running at `pulsar://broker.example.com:6650` - -Individual brokers will be running at `pulsar://broker-1.example.com:6650`, -`pulsar://broker-2.example.com:6650`, ... - -A client can use a connection to the discovery service host to issue a -`LookupTopic` command. The response can either be a broker hostname to -connect to, or a broker hostname to which retry the lookup. - -The `LookupTopic` command has to be used in a connection that has already -gone through the `Connect` / `Connected` initial handshake. - -![Topic lookup](/assets/binary-protocol-topic-lookup.png) - -```protobuf - -message CommandLookupTopic { - "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic", - "request_id" : 1, - "authoritative" : false -} - -``` - -Fields: - * `topic` → Topic name to lookup - * `request_id` → Id of the request that will be passed with its response - * `authoritative` → Initial lookup request should use false. When following a - redirect response, client should pass the same value contained in the - response - -##### LookupTopicResponse - -Example of response with successful lookup: - -```protobuf - -message CommandLookupTopicResponse { - "request_id" : 1, - "response" : "Connect", - "brokerServiceUrl" : "pulsar://broker-1.example.com:6650", - "brokerServiceUrlTls" : "pulsar+ssl://broker-1.example.com:6651", - "authoritative" : true -} - -``` - -Example of lookup response with redirection: - -```protobuf - -message CommandLookupTopicResponse { - "request_id" : 1, - "response" : "Redirect", - "brokerServiceUrl" : "pulsar://broker-2.example.com:6650", - "brokerServiceUrlTls" : "pulsar+ssl://broker-2.example.com:6651", - "authoritative" : true -} - -``` - -In this second case, we need to reissue the `LookupTopic` command request -to `broker-2.example.com` and this broker will be able to give a definitive -answer to the lookup request. - -### Partitioned topics discovery - -Partitioned topics metadata discovery is used to find out if a topic is a -"partitioned topic" and how many partitions were set up. - -If the topic is marked as "partitioned", the client is expected to create -multiple producers or consumers, one for each partition, using the `partition-X` -suffix. - -This information only needs to be retrieved the first time a producer or -consumer is created. There is no need to do this after reconnections. 
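-
-For example, for a partitioned topic named `my-topic` with 3 partitions (a
-hypothetical topic, shown only to illustrate the `partition-X` suffix
-convention described above), a client creates one producer or consumer for
-each of:
-
-```
-
-persistent://my-property/my-cluster/my-namespace/my-topic-partition-0
-persistent://my-property/my-cluster/my-namespace/my-topic-partition-1
-persistent://my-property/my-cluster/my-namespace/my-topic-partition-2
-
-```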
-
-The discovery of partitioned topics metadata works very similarly to the topic
-lookup. The client sends a request to the service discovery address, and the
-response contains the actual metadata.
-
-##### Command PartitionedTopicMetadata
-
-```protobuf
-
-message CommandPartitionedTopicMetadata {
-  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
-  "request_id" : 1
-}
-
-```
-
-Fields:
- * `topic` → the topic for which to check the partitions metadata
- * `request_id` → Id of the request that will be passed with its response
-
-
-##### Command PartitionedTopicMetadataResponse
-
-Example of a response with metadata:
-
-```protobuf
-
-message CommandPartitionedTopicMetadataResponse {
-  "request_id" : 1,
-  "response" : "Success",
-  "partitions" : 32
-}
-
-```
-
-## Protobuf interface
-
-All Pulsar's Protobuf definitions can be found {@inject: github:here:/pulsar-common/src/main/proto/PulsarApi.proto}.
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/functions-cli.md b/site2/website/versioned_docs/version-2.9.2-deprecated/functions-cli.md
deleted file mode 100644
index c9fcfa201525f0..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/functions-cli.md
+++ /dev/null
@@ -1,198 +0,0 @@
----
-id: functions-cli
-title: Pulsar Functions command line tool
-sidebar_label: "Reference: CLI"
-original_id: functions-cli
----
-
-The following tables list the Pulsar Functions command-line tools, covering their modes, commands, and parameters.
-
-## localrun
-
-Run Pulsar Functions locally, rather than deploying them to the Pulsar cluster.
-
-Name | Description | Default
----|---|---
-auto-ack | Whether or not the framework acknowledges messages automatically. | true |
-broker-service-url | The URL for the Pulsar broker. | |
-classname | The class name of a Pulsar Function. | |
-client-auth-params | Client authentication parameter. | |
-client-auth-plugin | The client authentication plugin the function process uses to connect to the broker. | |
-CPU | The CPU in cores that need to be allocated per function instance (applicable only to docker runtime). | |
-custom-schema-inputs | The map of input topics to Schema class names (as a JSON string). | |
-custom-serde-inputs | The map of input topics to SerDe class names (as a JSON string). | |
-dead-letter-topic | The topic where all messages that were not processed successfully are sent. This parameter is not supported in Python Functions. | |
-disk | The disk in bytes that need to be allocated per function instance (applicable only to docker runtime). | |
-fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
-function-config-file | The path to a YAML config file specifying the configuration of a Pulsar Function. | |
-go | Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | |
-hostname-verification-enabled | Enable hostname verification. | false
-inputs | The input topic or topics of a Pulsar Function (multiple topics can be specified as a comma-separated list). | |
-jar | Path to the jar file for the function (if the function is written in Java). 
It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | |
-instance-id-offset | Start the instanceIds from this offset. | 0
-log-topic | The topic to which the logs of a Pulsar Function are produced. | |
-max-message-retries | How many times should we try to process a message before giving up. | |
-name | The name of a Pulsar Function. | |
-namespace | The namespace of a Pulsar Function. | |
-output | The output topic of a Pulsar Function (If none is specified, no output is written). | |
-output-serde-classname | The SerDe class to be used for messages output by the function. | |
-parallelism | The parallelism factor of a Pulsar Function (i.e. the number of function instances to run). | |
-processing-guarantees | The processing guarantees (delivery semantics) applied to the function. Available values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]. | ATLEAST_ONCE
-py | Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | |
-ram | The ram in bytes that need to be allocated per function instance (applicable only to process/docker runtime). | |
-retain-ordering | Function consumes and processes messages in order. | |
-schema-type | The builtin schema type or custom schema class name to be used for messages output by the function. | |
-sliding-interval-count | The number of messages after which the window slides. | |
-sliding-interval-duration-ms | The time duration after which the window slides. | |
-subs-name | Pulsar source subscription name if user wants a specific subscription-name for the input-topic consumer. | |
-tenant | The tenant of a Pulsar Function. | |
-timeout-ms | The message timeout in milliseconds. | |
-tls-allow-insecure | Allow insecure TLS connection. | false
-tls-trust-cert-path | The TLS trust certificate file path. | |
-topics-pattern | The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (only supported in Java Function). | |
-use-tls | Use TLS connection. | false
-user-config | User-defined config key/values. | |
-window-length-count | The number of messages per window. | |
-window-length-duration-ms | The time duration of the window in milliseconds. | |
-
-
-## create
-
-Create and deploy a Pulsar Function in cluster mode.
-
-Name | Description | Default
----|---|---
-auto-ack | Whether or not the framework acknowledges messages automatically. | true |
-classname | The class name of a Pulsar Function. | |
-CPU | The CPU in cores that need to be allocated per function instance (applicable only to docker runtime). | |
-custom-runtime-options | A string that encodes options to customize the runtime, see docs for configured runtime for details | |
-custom-schema-inputs | The map of input topics to Schema class names (as a JSON string). | |
-custom-serde-inputs | The map of input topics to SerDe class names (as a JSON string). | |
-dead-letter-topic | The topic where all messages that were not processed successfully are sent. This parameter is not supported in Python Functions. 
| | -disk | The disk in bytes that need to be allocated per function instance (applicable only to docker runtime). | | -fqfn | The Fully Qualified Function Name (FQFN) for the function. | | -function-config-file | The path to a YAML config file specifying the configuration of a Pulsar Function. | | -go | Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -inputs | The input topic or topics of a Pulsar Function (multiple topics can be specified as a comma-separated list). | | -jar | Path to the jar file for the function (if the function is written in Java). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -log-topic | The topic to which the logs of a Pulsar Function are produced. | | -max-message-retries | How many times should we try to process a message before giving up. | | -name | The name of a Pulsar Function. | | -namespace | The namespace of a Pulsar Function. | | -output | The output topic of a Pulsar Function (If none is specified, no output is written). | | -output-serde-classname | The SerDe class to be used for messages output by the function. | | -parallelism | The parallelism factor of a Pulsar Function (i.e. the number of function instances to run). | | -processing-guarantees | The processing guarantees (delivery semantics) applied to the function. Available values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]. | ATLEAST_ONCE -py | Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -ram | The ram in bytes that need to be allocated per function instance (applicable only to process/docker runtime). | | -retain-ordering | Function consumes and processes messages in order. | | -schema-type | The builtin schema type or custom schema class name to be used for messages output by the function. | | -sliding-interval-count | The number of messages after which the window slides. | | -sliding-interval-duration-ms | The time duration after which the window slides. | | -subs-name | Pulsar source subscription name if user wants a specific subscription-name for the input-topic consumer. | | -tenant | The tenant of a Pulsar Function. | | -timeout-ms | The message timeout in milliseconds. | | -topics-pattern | The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (only supported in Java Function). | | -user-config | User-defined config key/values. | | -window-length-count | The number of messages per window. | | -window-length-duration-ms | The time duration of the window in milliseconds. | | - -## delete - -Delete a Pulsar Function that is running on a Pulsar cluster. - -Name | Description | Default ----|---|--- -fqfn | The Fully Qualified Function Name (FQFN) for the function. | | -name | The name of a Pulsar Function. | | -namespace | The namespace of a Pulsar Function. 
| | -tenant | The tenant of a Pulsar Function. | | - -## update - -Update a Pulsar Function that has been deployed to a Pulsar cluster. - -Name | Description | Default ----|---|--- -auto-ack | Whether or not the framework acknowledges messages automatically. | true | -classname | The class name of a Pulsar Function. | | -CPU | The CPU in cores that need to be allocated per function instance (applicable only to docker runtime). | | -custom-runtime-options | A string that encodes options to customize the runtime, see docs for configured runtime for details | | -custom-schema-inputs | The map of input topics to Schema class names (as a JSON string). | | -custom-serde-inputs | The map of input topics to SerDe class names (as a JSON string). | | -dead-letter-topic | The topic where all messages that were not processed successfully are sent. This parameter is not supported in Python Functions. | | -disk | The disk in bytes that need to be allocated per function instance (applicable only to docker runtime). | | -fqfn | The Fully Qualified Function Name (FQFN) for the function. | | -function-config-file | The path to a YAML config file specifying the configuration of a Pulsar Function. | | -go | Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -inputs | The input topic or topics of a Pulsar Function (multiple topics can be specified as a comma-separated list). | | -jar | Path to the jar file for the function (if the function is written in Java). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -log-topic | The topic to which the logs of a Pulsar Function are produced. | | -max-message-retries | How many times should we try to process a message before giving up. | | -name | The name of a Pulsar Function. | | -namespace | The namespace of a Pulsar Function. | | -output | The output topic of a Pulsar Function (If none is specified, no output is written). | | -output-serde-classname | The SerDe class to be used for messages output by the function. | | -parallelism | The parallelism factor of a Pulsar Function (i.e. the number of function instances to run). | | -processing-guarantees | The processing guarantees (delivery semantics) applied to the function. Available values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]. | ATLEAST_ONCE -py | Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -ram | The ram in bytes that need to be allocated per function instance (applicable only to process/docker runtime). | | -retain-ordering | Function consumes and processes messages in order. | | -schema-type | The builtin schema type or custom schema class name to be used for messages output by the function. | | -sliding-interval-count | The number of messages after which the window slides. | | -sliding-interval-duration-ms | The time duration after which the window slides. 
| |
-subs-name | Pulsar source subscription name if user wants a specific subscription-name for the input-topic consumer. | |
-tenant | The tenant of a Pulsar Function. | |
-timeout-ms | The message timeout in milliseconds. | |
-topics-pattern | The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (only supported in Java Function). | |
-update-auth-data | Whether or not to update the auth data. | false
-user-config | User-defined config key/values. | |
-window-length-count | The number of messages per window. | |
-window-length-duration-ms | The time duration of the window in milliseconds. | |
-
-## get
-
-Fetch information about a Pulsar Function.
-
-Name | Description | Default
----|---|---
-fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
-name | The name of a Pulsar Function. | |
-namespace | The namespace of a Pulsar Function. | |
-tenant | The tenant of a Pulsar Function. | |
-
-## restart
-
-Restart a function instance.
-
-Name | Description | Default
----|---|---
-fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
-instance-id | The function instanceId (restart all instances if instance-id is not provided). | |
-name | The name of a Pulsar Function. | |
-namespace | The namespace of a Pulsar Function. | |
-tenant | The tenant of a Pulsar Function. | |
-
-## stop
-
-Stop a function instance.
-
-Name | Description | Default
----|---|---
-fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
-instance-id | The function instanceId (stop all instances if instance-id is not provided). | |
-name | The name of a Pulsar Function. | |
-namespace | The namespace of a Pulsar Function. | |
-tenant | The tenant of a Pulsar Function. | |
-
-## start
-
-Start a stopped function instance.
-
-Name | Description | Default
----|---|---
-fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
-instance-id | The function instanceId (start all instances if instance-id is not provided). | |
-name | The name of a Pulsar Function. | |
-namespace | The namespace of a Pulsar Function. | |
-tenant | The tenant of a Pulsar Function. | |
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/functions-debug.md b/site2/website/versioned_docs/version-2.9.2-deprecated/functions-debug.md
deleted file mode 100644
index c1f19abda64657..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/functions-debug.md
+++ /dev/null
@@ -1,538 +0,0 @@
----
-id: functions-debug
-title: Debug Pulsar Functions
-sidebar_label: "How-to: Debug"
-original_id: functions-debug
----
-
-You can use the following methods to debug Pulsar Functions:
-
-* [Captured stderr](functions-debug.md#captured-stderr)
-* [Use unit test](functions-debug.md#use-unit-test)
-* [Debug with localrun mode](functions-debug.md#debug-with-localrun-mode)
-* [Use log topic](functions-debug.md#use-log-topic)
-* [Use Functions CLI](functions-debug.md#use-functions-cli)
-
-## Captured stderr
-
-Function startup information and captured stderr output are written to `logs/functions/<tenant>/<namespace>/<function>/<function>-<instance>.log`.
-
-This is useful for debugging why a function fails to start.
-
-## Use unit test
-
-A Pulsar Function is a function with inputs and outputs, so you can test it in a similar way to any other function.
-
-For example, if you have the following Pulsar Function:
-
-```java
-
-import java.util.function.Function;
-
-public class JavaNativeExclamationFunction implements Function<String, String> {
-    @Override
-    public String apply(String input) {
-        return String.format("%s!", input);
-    }
-}
-
-```
-
-You can write a simple unit test to test a Pulsar Function.
-
-:::tip
-
-Pulsar uses TestNG for testing.
-
-:::
-
-```java
-
-@Test
-public void testJavaNativeExclamationFunction() {
-    JavaNativeExclamationFunction exclamation = new JavaNativeExclamationFunction();
-    String output = exclamation.apply("foo");
-    Assert.assertEquals(output, "foo!");
-}
-
-```
-
-The following Pulsar Function implements the `org.apache.pulsar.functions.api.Function` interface.
-
-```java
-
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-
-public class ExclamationFunction implements Function<String, String> {
-    @Override
-    public String process(String input, Context context) {
-        return String.format("%s!", input);
-    }
-}
-
-```
-
-In this situation, you can write a unit test for this function as well. Remember to mock the `Context` parameter. The following is an example.
-
-:::tip
-
-Pulsar uses TestNG for testing.
-
-:::
-
-```java
-
-@Test
-public void testExclamationFunction() {
-    ExclamationFunction exclamation = new ExclamationFunction();
-    String output = exclamation.process("foo", mock(Context.class));
-    Assert.assertEquals(output, "foo!");
-}
-
-```
-
-## Debug with localrun mode
-
-When you run a Pulsar Function in localrun mode, it launches an instance of the function on your local machine as a thread.
-
-In this mode, a Pulsar Function consumes messages from and produces messages to an actual Pulsar cluster, mirroring how the function runs in a Pulsar cluster.
-
-:::note
-
-Currently, debugging with localrun mode is only supported by Pulsar Functions written in Java. You need Pulsar version 2.4.0 or later to do the following. Even though localrun is available in versions earlier than Pulsar 2.4.0, you cannot debug with localrun mode programmatically or run Functions as threads.
-
-:::
-
-You can launch your function in the following manner.
-
-```java
-
-FunctionConfig functionConfig = new FunctionConfig();
-functionConfig.setName(functionName);
-functionConfig.setInputs(Collections.singleton(sourceTopic));
-functionConfig.setClassName(ExclamationFunction.class.getName());
-functionConfig.setRuntime(FunctionConfig.Runtime.JAVA);
-functionConfig.setOutput(sinkTopic);
-
-LocalRunner localRunner = LocalRunner.builder().functionConfig(functionConfig).build();
-localRunner.start(true);
-
-```
-
-This way, you can easily debug functions in an IDE: set breakpoints and manually step through a function to debug with real data.
-
-The following example illustrates how to programmatically launch a function in localrun mode.
-
-```java
-
-public class ExclamationFunction implements Function<String, String> {
-
-    @Override
-    public String process(String s, Context context) throws Exception {
-        return s + "!";
-    }
-
-    public static void main(String[] args) throws Exception {
-        FunctionConfig functionConfig = new FunctionConfig();
-        functionConfig.setName("exclamation");
-        functionConfig.setInputs(Collections.singleton("input"));
-        functionConfig.setClassName(ExclamationFunction.class.getName());
-        functionConfig.setRuntime(FunctionConfig.Runtime.JAVA);
-        functionConfig.setOutput("output");
-
-        LocalRunner localRunner = LocalRunner.builder().functionConfig(functionConfig).build();
-        localRunner.start(false);
-    }
-}
-
-```
-
-To use localrun mode programmatically, add the following dependency.
-
-```xml
-
-<dependency>
-    <groupId>org.apache.pulsar</groupId>
-    <artifactId>pulsar-functions-local-runner</artifactId>
-    <version>${pulsar.version}</version>
-</dependency>
-
-```
-
-For complete code samples, see [here](https://github.com/jerrypeng/pulsar-functions-demos/tree/master/debugging).
-
-:::note
-
-Debugging with localrun mode for Pulsar Functions written in other languages will be supported soon.
-
-:::
-
-## Use log topic
-
-In Pulsar Functions, you can publish log information defined in functions to a specified log topic. You can configure consumers to consume messages from the log topic to check the log information.
-
-![Pulsar Functions core programming model](/assets/pulsar-functions-overview.png)
-
-**Example**
-
-```java
-
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-import org.slf4j.Logger;
-
-public class LoggingFunction implements Function<String, Void> {
-    @Override
-    public Void process(String input, Context context) {
-        Logger LOG = context.getLogger();
-        String messageId = new String(context.getMessageId());
-
-        if (input.contains("danger")) {
-            LOG.warn("A warning was received in message {}", messageId);
-        } else {
-            LOG.info("Message {} received\nContent: {}", messageId, input);
-        }
-
-        return null;
-    }
-}
-
-```
-
-As shown in the example above, you can get the logger via `context.getLogger()` and assign it to a `LOG` variable of the `slf4j` `Logger` type, so you can define your desired log information in a function using the `LOG` variable. You also need to specify the topic to which the log information is produced.
-
-**Example**
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --log-topic persistent://public/default/logging-function-logs \
-  # Other function configs
-
-```
-
-The messages published to the log topic contain several properties for better reasoning:
-- `loglevel` -- the level of the log message.
-- `fqn` -- the fully qualified name of the function that pushes this log message.
-- `instance` -- the ID of the function instance that pushes this log message.
-
-## Use Functions CLI
-
-With the [Pulsar Functions CLI](reference-pulsar-admin.md#functions), you can debug Pulsar Functions with the following subcommands:
-
-* `get`
-* `status`
-* `stats`
-* `list`
-* `trigger`
-
-:::tip
-
-For the complete commands of the **Pulsar Functions CLI**, see [here](reference-pulsar-admin.md#functions).
-
-:::
-
-### `get`
-
-Get information about a Pulsar Function.
-
-**Usage**
-
-```bash
-
-$ pulsar-admin functions get options
-
-```
-
-**Options**
-
-|Flag|Description
-|---|---
-|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function.
-|`--name`|The name of a Pulsar Function.
-|`--namespace`|The namespace of a Pulsar Function.
-|`--tenant`|The tenant of a Pulsar Function.
- -:::tip - -`--fqfn` consists of `--name`, `--namespace` and `--tenant`, so you can specify either `--fqfn` or `--name`, `--namespace` and `--tenant`. - -::: - -**Example** - -You can specify `--fqfn` to get information about a Pulsar Function. - -```bash - -$ ./bin/pulsar-admin functions get public/default/ExclamationFunctio6 - -``` - -Optionally, you can specify `--name`, `--namespace` and `--tenant` to get information about a Pulsar Function. - -```bash - -$ ./bin/pulsar-admin functions get \ - --tenant public \ - --namespace default \ - --name ExclamationFunctio6 - -``` - -As shown below, the `get` command shows input, output, runtime, and other information about the _ExclamationFunctio6_ function. - -```json - -{ - "tenant": "public", - "namespace": "default", - "name": "ExclamationFunctio6", - "className": "org.example.test.ExclamationFunction", - "inputSpecs": { - "persistent://public/default/my-topic-1": { - "isRegexPattern": false - } - }, - "output": "persistent://public/default/test-1", - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "userConfig": {}, - "runtime": "JAVA", - "autoAck": true, - "parallelism": 1 -} - -``` - -### `status` - -Check the current status of a Pulsar Function. - -**Usage** - -```bash - -$ pulsar-admin functions status options - -``` - -**Options** - -|Flag|Description -|---|--- -|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function. -|`--instance-id`|The instance ID of a Pulsar Function
    If the `--instance-id` is not specified, it gets the IDs of all instances.
    -|`--name`|The name of a Pulsar Function. -|`--namespace`|The namespace of a Pulsar Function. -|`--tenant`|The tenant of a Pulsar Function. - -**Example** - -```bash - -$ ./bin/pulsar-admin functions status \ - --tenant public \ - --namespace default \ - --name ExclamationFunctio6 \ - -``` - -As shown below, the `status` command shows the number of instances, running instances, the instance running under the _ExclamationFunctio6_ function, received messages, successfully processed messages, system exceptions, the average latency and so on. - -```json - -{ - "numInstances" : 1, - "numRunning" : 1, - "instances" : [ { - "instanceId" : 0, - "status" : { - "running" : true, - "error" : "", - "numRestarts" : 0, - "numReceived" : 1, - "numSuccessfullyProcessed" : 1, - "numUserExceptions" : 0, - "latestUserExceptions" : [ ], - "numSystemExceptions" : 0, - "latestSystemExceptions" : [ ], - "averageLatency" : 0.8385, - "lastInvocationTime" : 1557734137987, - "workerId" : "c-standalone-fw-23ccc88ef29b-8080" - } - } ] -} - -``` - -### `stats` - -Get the current stats of a Pulsar Function. - -**Usage** - -```bash - -$ pulsar-admin functions stats options - -``` - -**Options** - -|Flag|Description -|---|--- -|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function. -|`--instance-id`|The instance ID of a Pulsar Function.
    If the `--instance-id` is not specified, it gets the IDs of all instances.
    -|`--name`|The name of a Pulsar Function. -|`--namespace`|The namespace of a Pulsar Function. -|`--tenant`|The tenant of a Pulsar Function. - -**Example** - -```bash - -$ ./bin/pulsar-admin functions stats \ - --tenant public \ - --namespace default \ - --name ExclamationFunctio6 \ - -``` - -The output is shown as follows: - -```json - -{ - "receivedTotal" : 1, - "processedSuccessfullyTotal" : 1, - "systemExceptionsTotal" : 0, - "userExceptionsTotal" : 0, - "avgProcessLatency" : 0.8385, - "1min" : { - "receivedTotal" : 0, - "processedSuccessfullyTotal" : 0, - "systemExceptionsTotal" : 0, - "userExceptionsTotal" : 0, - "avgProcessLatency" : null - }, - "lastInvocation" : 1557734137987, - "instances" : [ { - "instanceId" : 0, - "metrics" : { - "receivedTotal" : 1, - "processedSuccessfullyTotal" : 1, - "systemExceptionsTotal" : 0, - "userExceptionsTotal" : 0, - "avgProcessLatency" : 0.8385, - "1min" : { - "receivedTotal" : 0, - "processedSuccessfullyTotal" : 0, - "systemExceptionsTotal" : 0, - "userExceptionsTotal" : 0, - "avgProcessLatency" : null - }, - "lastInvocation" : 1557734137987, - "userMetrics" : { } - } - } ] -} - -``` - -### `list` - -List all Pulsar Functions running under a specific tenant and namespace. - -**Usage** - -```bash - -$ pulsar-admin functions list options - -``` - -**Options** - -|Flag|Description -|---|--- -|`--namespace`|The namespace of a Pulsar Function. -|`--tenant`|The tenant of a Pulsar Function. - -**Example** - -```bash - -$ ./bin/pulsar-admin functions list \ - --tenant public \ - --namespace default - -``` - -As shown below, the `list` command returns three functions running under the _public_ tenant and the _default_ namespace. - -```text - -ExclamationFunctio1 -ExclamationFunctio2 -ExclamationFunctio3 - -``` - -### `trigger` - -Trigger a specified Pulsar Function with a supplied value. This command simulates the execution process of a Pulsar Function and verifies it. - -**Usage** - -```bash - -$ pulsar-admin functions trigger options - -``` - -**Options** - -|Flag|Description -|---|--- -|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function. -|`--name`|The name of a Pulsar Function. -|`--namespace`|The namespace of a Pulsar Function. -|`--tenant`|The tenant of a Pulsar Function. -|`--topic`|The topic name that a Pulsar Function consumes from. -|`--trigger-file`|The path to a file that contains the data to trigger a Pulsar Function. -|`--trigger-value`|The value to trigger a Pulsar Function. - -**Example** - -```bash - -$ ./bin/pulsar-admin functions trigger \ - --tenant public \ - --namespace default \ - --name ExclamationFunctio6 \ - --topic persistent://public/default/my-topic-1 \ - --trigger-value "hello pulsar functions" - -``` - -As shown below, the `trigger` command returns the following result: - -```text - -This is my function! - -``` - -:::note - -You must specify the [entire topic name](getting-started-pulsar.md#topic-names) when using the `--topic` option. Otherwise, the following error occurs. 
- -```text - -Function in trigger function has unidentified topic -Reason: Function in trigger function has unidentified topic - -``` - -::: - diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/functions-deploy.md b/site2/website/versioned_docs/version-2.9.2-deprecated/functions-deploy.md deleted file mode 100644 index 2a0d68d6c623c7..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/functions-deploy.md +++ /dev/null @@ -1,262 +0,0 @@ ---- -id: functions-deploy -title: Deploy Pulsar Functions -sidebar_label: "How-to: Deploy" -original_id: functions-deploy ---- - -## Requirements - -To deploy and manage Pulsar Functions, you need to have a Pulsar cluster running. There are several options for this: - -* You can run a [standalone cluster](getting-started-standalone.md) locally on your own machine. -* You can deploy a Pulsar cluster on [Kubernetes](deploy-kubernetes.md), [Amazon Web Services](deploy-aws.md), [bare metal](deploy-bare-metal.md), [DC/OS](https://dcos.io/), and more. - -If you run a non-[standalone](reference-terminology.md#standalone) cluster, you need to obtain the service URL for the cluster. How you obtain the service URL depends on how you deploy your Pulsar cluster. - -If you want to deploy and trigger Python user-defined functions, you need to install [the pulsar python client](http://pulsar.apache.org/docs/en/client-libraries-python/) on all the machines running [functions workers](functions-worker.md). - -## Command-line interface - -Pulsar Functions are deployed and managed using the [`pulsar-admin functions`](reference-pulsar-admin.md#functions) interface, which contains commands such as [`create`](reference-pulsar-admin.md#functions-create) for deploying functions in [cluster mode](#cluster-mode), [`trigger`](reference-pulsar-admin.md#trigger) for [triggering](#triggering-pulsar-functions) functions, [`list`](reference-pulsar-admin.md#list-2) for listing deployed functions. - -To learn more commands, refer to [`pulsar-admin functions`](reference-pulsar-admin.md#functions). - -### Default arguments - -When managing Pulsar Functions, you need to specify a variety of information about functions, including tenant, namespace, input and output topics, and so on. However, some parameters have default values if you do not specify values for them. The following table lists the default values. - -Parameter | Default -:---------|:------- -Function name | You can specify any value for the class name (except org, library, or similar class names). For example, when you specify the flag `--classname org.example.MyFunction`, the function name is `MyFunction`. -Tenant | Derived from names of the input topics. If the input topics are under the `marketing` tenant, which means the topic names have the form `persistent://marketing/{namespace}/{topicName}`, the tenant is `marketing`. -Namespace | Derived from names of the input topics. If the input topics are under the `asia` namespace under the `marketing` tenant, which means the topic names have the form `persistent://marketing/asia/{topicName}`, then the namespace is `asia`. -Output topic | `{input topic}-{function name}-output`. For example, if an input topic name of a function is `incoming`, and the function name is `exclamation`, then the name of the output topic is `incoming-exclamation-output`. 
-
-Subscription type | For `at-least-once` and `at-most-once` [processing guarantees](functions-overview.md#processing-guarantees), the [`SHARED`](concepts-messaging.md#shared) mode is applied by default; for `effectively-once` guarantees, the [`FAILOVER`](concepts-messaging.md#failover) mode is applied.
-Processing guarantees | [`ATLEAST_ONCE`](functions-overview.md#processing-guarantees)
-Pulsar service URL | `pulsar://localhost:6650`
-
-### Example of default arguments
-
-Take the `create` command as an example.
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --jar my-pulsar-functions.jar \
-  --classname org.example.MyFunction \
-  --inputs my-function-input-topic1,my-function-input-topic2
-
-```
-
-The function has default values for the function name (`MyFunction`), tenant (`public`), namespace (`default`), subscription type (`SHARED`), processing guarantees (`ATLEAST_ONCE`), and Pulsar service URL (`pulsar://localhost:6650`).
-
-## Local run mode
-
-If you run a Pulsar Function in **local run** mode, it runs on the machine from which you enter the commands (on your laptop, an [AWS EC2](https://aws.amazon.com/ec2/) instance, and so on). The following is a [`localrun`](reference-pulsar-admin.md#localrun) command example.
-
-```bash
-
-$ bin/pulsar-admin functions localrun \
-  --py myfunc.py \
-  --classname myfunc.SomeFunction \
-  --inputs persistent://public/default/input-1 \
-  --output persistent://public/default/output-1
-
-```
-
-By default, the function connects to a Pulsar cluster running on the same machine, via a local [broker](reference-terminology.md#broker) service URL of `pulsar://localhost:6650`. If you use local run mode to run a function but connect it to a non-local Pulsar cluster, you can specify a different broker URL using the `--broker-service-url` flag. The following is an example.
-
-```bash
-
-$ bin/pulsar-admin functions localrun \
-  --broker-service-url pulsar://my-cluster-host:6650 \
-  # Other function parameters
-
-```
-
-## Cluster mode
-
-When you run a Pulsar Function in **cluster** mode, the function code is uploaded to a Pulsar broker and runs *alongside the broker* rather than in your [local environment](#local-run-mode). You can run a function in cluster mode using the [`create`](reference-pulsar-admin.md#create-1) command.
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --py myfunc.py \
-  --classname myfunc.SomeFunction \
-  --inputs persistent://public/default/input-1 \
-  --output persistent://public/default/output-1
-
-```
-
-### Update functions in cluster mode
-
-You can use the [`update`](reference-pulsar-admin.md#update-1) command to update a Pulsar Function running in cluster mode. The following command updates the function created in the [cluster mode](#cluster-mode) section.
-
-```bash
-
-$ bin/pulsar-admin functions update \
-  --py myfunc.py \
-  --classname myfunc.SomeFunction \
-  --inputs persistent://public/default/new-input-topic \
-  --output persistent://public/default/new-output-topic
-
-```
-
-### Parallelism
-
-Pulsar Functions run as processes or threads, which are called **instances**. When you run a Pulsar Function, it runs as a single instance by default. With one localrun command, you can only run a single instance of a function. If you want to run multiple instances, you can run the localrun command multiple times.
-
-When you create a function, you can specify the *parallelism* of a function (the number of instances to run). 
You can set the parallelism factor using the `--parallelism` flag of the [`create`](reference-pulsar-admin.md#functions-create) command.
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --parallelism 3 \
-  # Other function info
-
-```
-
-You can adjust the parallelism of an already created function using the [`update`](reference-pulsar-admin.md#update-1) interface.
-
-```bash
-
-$ bin/pulsar-admin functions update \
-  --parallelism 5 \
-  # Other function
-
-```
-
-If you specify a function configuration via YAML, use the `parallelism` parameter. The following is a config file example.
-
-```yaml
-
-# function-config.yaml
-parallelism: 3
-inputs:
-- persistent://public/default/input-1
-output: persistent://public/default/output-1
-# other parameters
-
-```
-
-The following is the corresponding update command.
-
-```bash
-
-$ bin/pulsar-admin functions update \
-  --function-config-file function-config.yaml
-
-```
-
-### Function instance resources
-
-When you run Pulsar Functions in [cluster mode](#cluster-mode), you can specify the resources that are assigned to each function [instance](#parallelism).
-
-Resource | Specified as | Runtimes
-:--------|:----------------|:--------
-CPU | The number of cores | Kubernetes
-RAM | The number of bytes | Process, Docker
-Disk space | The number of bytes | Docker
-
-The following function creation command allocates 8 cores, 8 GB of RAM, and 10 GB of disk space to a function.
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --jar target/my-functions.jar \
-  --classname org.example.functions.MyFunction \
-  --cpu 8 \
-  --ram 8589934592 \
-  --disk 10737418240
-
-```
-
-> #### Resources are *per instance*
-> The resources that you apply to a given Pulsar Function are applied to each instance of the function. For example, if you apply 8 GB of RAM to a function with a parallelism of 5, you are applying 40 GB of RAM for the function in total. Make sure that you take the parallelism factor (the number of instances) into account in your resource calculations.
-
-### Use Package management service
-
-Package management enables version management and simplifies the upgrade and rollback processes for Functions, Sinks, and Sources. When you use the same function, sink and source in different namespaces, you can upload them to a common package management system.
-
-To use the [Package management service](admin-api-packages.md), ensure that the package management service has been enabled in your cluster by setting the following properties in `broker.conf`.
-
-> Note: Package management service is not enabled by default.
-
-```yaml
-
-enablePackagesManagement=true
-packagesManagementStorageProvider=org.apache.pulsar.packages.management.storage.bookkeeper.BookKeeperPackagesStorageProvider
-packagesReplicas=1
-packagesManagementLedgerRootPath=/ledgers
-
-```
-
-With the Package management service enabled, you can upload your function packages by [uploading a package](admin-api-packages.md#upload-a-package) to the service and get the [package URL](admin-api-packages.md#package-url).
-
-When you have a ready-to-use package URL, you can create the function with the package URL by setting `--jar`, `--py`, or `--go` to the package URL with `pulsar-admin functions create`.
-
-## Trigger Pulsar Functions
-
-If a Pulsar Function is running in [cluster mode](#cluster-mode), you can **trigger** it at any time using the command line. Triggering a function means that you send a message with a specific value to the function and get the function output (if any) via the command line.
-
-> Triggering a function is to invoke a function by producing a message on one of the input topics. With the [`pulsar-admin functions trigger`](reference-pulsar-admin.md#trigger) command, you can send messages to functions without using the [`pulsar-client`](reference-cli-tools.md#pulsar-client) tool or a language-specific client library.
-
-To learn how to trigger a function, you can start with a Python function that returns a simple string based on the input.
-
-```python
-
-# myfunc.py
-def process(input):
-    return "This function has been triggered with a value of {0}".format(input)
-
-```
-
-You can deploy the function in [cluster mode](functions-deploy.md#cluster-mode) as follows.
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --tenant public \
-  --namespace default \
-  --name myfunc \
-  --py myfunc.py \
-  --classname myfunc \
-  --inputs persistent://public/default/in \
-  --output persistent://public/default/out
-
-```
-
-Then assign a consumer to listen on the output topic for messages from the `myfunc` function with the [`pulsar-client consume`](reference-cli-tools.md#consume) command.
-
-```bash
-
-$ bin/pulsar-client consume persistent://public/default/out \
-  --subscription-name my-subscription \
-  --num-messages 0 # Listen indefinitely
-
-```
-
-And then you can trigger the function.
-
-```bash
-
-$ bin/pulsar-admin functions trigger \
-  --tenant public \
-  --namespace default \
-  --name myfunc \
-  --trigger-value "hello world"
-
-```
-
-The consumer listening on the output topic prints something like the following in the log.
-
-```
-
------ got message -----
-This function has been triggered with a value of hello world
-
-```
-
-> #### Topic info is not required
-> In the `trigger` command, you only need to specify basic information about the function (tenant, namespace, and name). To trigger the function, you do not need to know the function input topics.
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/functions-develop.md b/site2/website/versioned_docs/version-2.9.2-deprecated/functions-develop.md
deleted file mode 100644
index 2e29aa1c474005..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/functions-develop.md
+++ /dev/null
@@ -1,1600 +0,0 @@
----
-id: functions-develop
-title: Develop Pulsar Functions
-sidebar_label: "How-to: Develop"
-original_id: functions-develop
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-You learn how to develop Pulsar Functions with different APIs for Java, Python and Go.
-
-## Available APIs
-In Java and Python, you have two options to write Pulsar Functions. In Go, you can use the Pulsar Functions SDK for Go.
-
-Interface | Description | Use cases
-:---------|:------------|:---------
-Language-native interface | No Pulsar-specific libraries or special dependencies required (only core libraries from Java/Python). | Functions that do not require access to the function [context](#context).
-Pulsar Function SDK for Java/Python/Go | Pulsar-specific libraries that provide a range of functionality not provided by "native" interfaces. | Functions that require access to the function [context](#context).
-
-The language-native function, which adds an exclamation point to all incoming strings and publishes the resulting string to a topic, has no external dependencies. The following example is a language-native function.
-
-````mdx-code-block
-<Tabs
-  defaultValue="Java"
-  values={[{"label":"Java","value":"Java"},{"label":"Python","value":"Python"}]}>
-<TabItem value="Java">
-
-```Java
-
-import java.util.function.Function;
-
-public class JavaNativeExclamationFunction implements Function<String, String> {
-    @Override
-    public String apply(String input) {
-        return String.format("%s!", input);
-    }
-}
-
-```
-
-For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/JavaNativeExclamationFunction.java).
-
-</TabItem>
-<TabItem value="Python">
-
-```python
-
-def process(input):
-    return "{}!".format(input)
-
-```
-
-For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/python-examples/native_exclamation_function.py).
-
-:::note
-
-You can write Pulsar Functions in python2 or python3. However, Pulsar only looks for `python` as the interpreter.
-If you're running Pulsar Functions on an Ubuntu system that only supports python3, you might fail to
-start the functions. In this case, you can create a symlink. Your system will fail if
-you subsequently install any other package that depends on Python 2.x. A solution is under development in [Issue 5518](https://github.com/apache/pulsar/issues/5518).
-
-```bash
-
-sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 10
-
-```
-
-:::
-
-</TabItem>
-
-</Tabs>
-````
-
-The following example uses the Pulsar Functions SDK.
-````mdx-code-block
-<Tabs
-  defaultValue="Java"
-  values={[{"label":"Java","value":"Java"},{"label":"Python","value":"Python"},{"label":"Go","value":"Go"}]}>
-<TabItem value="Java">
-
-```Java
-
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-
-public class ExclamationFunction implements Function<String, String> {
-    @Override
-    public String process(String input, Context context) {
-        return String.format("%s!", input);
-    }
-}
-
-```
-
-For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/ExclamationFunction.java).
-
-</TabItem>
-<TabItem value="Python">
-
-```python
-
-from pulsar import Function
-
-class ExclamationFunction(Function):
-    def __init__(self):
-        pass
-
-    def process(self, input, context):
-        return input + '!'
-
-```
-
-For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/python-examples/exclamation_function.py).
-
-</TabItem>
-<TabItem value="Go">
-
-```Go
-
-package main
-
-import (
-    "context"
-    "fmt"
-
-    "github.com/apache/pulsar/pulsar-function-go/pf"
-)
-
-func HandleRequest(ctx context.Context, in []byte) error {
-    fmt.Println(string(in) + "!")
-    return nil
-}
-
-func main() {
-    pf.Start(HandleRequest)
-}
-
-```
-
-For complete code, see [here](https://github.com/apache/pulsar/blob/77cf09eafa4f1626a53a1fe2e65dd25f377c1127/pulsar-function-go/examples/inputFunc/inputFunc.go#L20-L36).
-
-</TabItem>
-
-</Tabs>
-````
-
-## Schema registry
-Pulsar has a built-in schema registry and is bundled with popular schema types, such as Avro, JSON and Protobuf. Pulsar Functions can leverage the existing schema information from input topics to derive the input type. The schema registry applies to the output topic as well.
-
-## SerDe
-SerDe stands for **Ser**ialization and **De**serialization. Pulsar Functions uses SerDe when publishing data to and consuming data from Pulsar topics. How SerDe works by default depends on the language you use for a particular function.
-
-````mdx-code-block
-<Tabs
-  defaultValue="Java"
-  values={[{"label":"Java","value":"Java"},{"label":"Python","value":"Python"},{"label":"Go","value":"Go"}]}>
-<TabItem value="Java">
-
-When you write Pulsar Functions in Java, the following basic Java types are built in and supported by default: `String`, `Double`, `Integer`, `Float`, `Long`, `Short`, and `Byte`.
-
-To customize Java types, you need to implement the following interface.
-
-```java
-
-public interface SerDe<T> {
-    T deserialize(byte[] input);
-    byte[] serialize(T input);
-}
-
-```
-
-SerDe works in the following ways in Java Functions.
-- If the input and output topics have schema, Pulsar Functions use the schema for SerDe.
-- If the input or output topics do not exist yet (so no schema is available), Pulsar Functions adopt the following rules to determine SerDe:
-  - If the schema type is specified, Pulsar Functions use the specified schema type.
-  - If SerDe is specified, Pulsar Functions use the specified SerDe, and the schema type for input and output topics is `Byte`.
-  - If neither the schema type nor SerDe is specified, Pulsar Functions use the built-in SerDe. For non-primitive schema type, the built-in SerDe serializes and deserializes objects in the `JSON` format.
-
-</TabItem>
-<TabItem value="Python">
-
-In Python, the default SerDe is identity, meaning that the type is serialized as whatever type the producer function returns.
-
-You can specify the SerDe when [creating](functions-deploy.md#cluster-mode) or [running](functions-deploy.md#local-run-mode) functions.
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --tenant public \
-  --namespace default \
-  --name my_function \
-  --py my_function.py \
-  --classname my_function.MyFunction \
-  --custom-serde-inputs '{"input-topic-1":"Serde1","input-topic-2":"Serde2"}' \
-  --output-serde-classname Serde3 \
-  --output output-topic-1
-
-```
-
-This case contains two input topics: `input-topic-1` and `input-topic-2`, each of which is mapped to a different SerDe class (the map must be specified as a JSON string). The output topic, `output-topic-1`, uses the `Serde3` class for SerDe. At the moment, all Pulsar Functions logic, including the processing function and SerDe classes, must be contained within a single Python file.
-
-When using Pulsar Functions for Python, you have three SerDe options:
-
-1. You can use the [`IdentitySerde`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L70), which leaves the data unchanged. The `IdentitySerDe` is the **default**. Creating or running a function without explicitly specifying SerDe means that this option is used.
-2. You can use the [`PickleSerDe`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L62), which uses Python [`pickle`](https://docs.python.org/3/library/pickle.html) for SerDe.
-3. You can create a custom SerDe class by implementing the baseline [`SerDe`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L50) class, which has just two methods: [`serialize`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L53) for converting the object into bytes, and [`deserialize`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L58) for converting bytes into an object of the required application-specific type.
-
-The table below shows when you should use each SerDe.
-
-SerDe option | When to use
-:------------|:-----------
-`IdentitySerde` | When you work with simple types like strings, Booleans, integers.
-`PickleSerDe` | When you work with complex, application-specific types and are comfortable with the "best effort" approach of `pickle`.
-Custom SerDe | When you require explicit control over SerDe, potentially for performance or data compatibility purposes.
-
-</TabItem>
-<TabItem value="Go">
-
-Currently, the feature is not available in Go.
-
-</TabItem>
-
-</Tabs>
-````
-
-### Example
-Imagine that you're writing Pulsar Functions that process tweet objects. You can refer to the following example of a `Tweet` class.
-
-````mdx-code-block
-<Tabs
-  defaultValue="Java"
-  values={[{"label":"Java","value":"Java"},{"label":"Python","value":"Python"}]}>
-<TabItem value="Java">
-
-```java
-
-public class Tweet {
-    private String username;
-    private String tweetContent;
-
-    public Tweet(String username, String tweetContent) {
-        this.username = username;
-        this.tweetContent = tweetContent;
-    }
-
-    // Standard setters and getters
-}
-
-```
-
-To pass `Tweet` objects directly between Pulsar Functions, you need to provide a custom SerDe class. In the example below, `Tweet` objects are basically strings in which the username and tweet content are separated by a `|`.
-
-```java
-
-package com.example.serde;
-
-import org.apache.pulsar.functions.api.SerDe;
-
-import java.util.regex.Pattern;
-
-public class TweetSerde implements SerDe<Tweet> {
-    public Tweet deserialize(byte[] input) {
-        String s = new String(input);
-        String[] fields = s.split(Pattern.quote("|"));
-        return new Tweet(fields[0], fields[1]);
-    }
-
-    public byte[] serialize(Tweet input) {
-        return String.format("%s|%s", input.getUsername(), input.getTweetContent()).getBytes();
-    }
-}
-
-```
-
-To apply this customized SerDe to a particular Pulsar Function, you need to:
-
-* Package the `Tweet` and `TweetSerde` classes into a JAR.
-* Specify a path to the JAR and SerDe class name when deploying the function.
-
-The following is an example of the [`create`](reference-pulsar-admin.md#create-1) operation.
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --jar /path/to/your.jar \
-  --output-serde-classname com.example.serde.TweetSerde \
-  # Other function attributes
-
-```
-
-> #### Custom SerDe classes must be packaged with your function JARs
-> Pulsar does not store your custom SerDe classes separately from your Pulsar Functions. So you need to include your SerDe classes in your function JARs. If not, Pulsar returns an error.
-
-</TabItem>
-<TabItem value="Python">
-
-```python
-
-class Tweet(object):
-    def __init__(self, username, tweet_content):
-        self.username = username
-        self.tweet_content = tweet_content
-
-```
-
-In order to use this class in Pulsar Functions, you have two options:
-
-1. You can specify `PickleSerDe`, which applies the [`pickle`](https://docs.python.org/3/library/pickle.html) library SerDe.
-2. You can create your own SerDe class. The following is an example.
-
-   ```python
-
-   from pulsar import SerDe
-
-   class TweetSerDe(SerDe):
-
-       def serialize(self, input):
-           return bytes("{0}|{1}".format(input.username, input.tweet_content))
-
-       def deserialize(self, input_bytes):
-           tweet_components = str(input_bytes).split('|')
-           return Tweet(tweet_components[0], tweet_components[1])
-
-   ```
-
-For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/python-examples/custom_object_function.py).
-
-</TabItem>
-
-</Tabs>
-````
-
-In both languages, however, you can write custom SerDe logic for more complex, application-specific types.
-
-## Context
-Java, Python and Go SDKs provide access to a **context object** that can be used by a function. This context object provides a wide variety of information and functionality to the function:
-
-* The name and ID of a Pulsar Function.
-* The message ID of each message. Each Pulsar message is automatically assigned an ID.
-* The key, event time, properties and partition key of each message.
-* The name of the topic to which the message is sent.
-* The names of all input topics as well as the output topic associated with the function.
-* The name of the class used for [SerDe](#serde).
-* The [tenant](reference-terminology.md#tenant) and namespace associated with the function. -* The ID of the Pulsar Functions instance running the function. -* The version of the function. -* The [logger object](functions-develop.md#logger) used by the function, which can be used to create function log messages. -* Access to arbitrary [user configuration](#user-config) values supplied via the CLI. -* An interface for recording [metrics](#metrics). -* An interface for storing and retrieving state in [state storage](#state-storage). -* A function to publish new messages onto arbitrary topics. -* A function to ack the message being processed (if auto-ack is disabled). -* (Java) get Pulsar admin client. - -````mdx-code-block - - - -The [Context](https://github.com/apache/pulsar/blob/master/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/Context.java) interface provides a number of methods that you can use to access the function [context](#context). The various method signatures for the `Context` interface are listed as follows. - -```java - -public interface Context { - Record getCurrentRecord(); - Collection getInputTopics(); - String getOutputTopic(); - String getOutputSchemaType(); - String getTenant(); - String getNamespace(); - String getFunctionName(); - String getFunctionId(); - String getInstanceId(); - String getFunctionVersion(); - Logger getLogger(); - void incrCounter(String key, long amount); - void incrCounterAsync(String key, long amount); - long getCounter(String key); - long getCounterAsync(String key); - void putState(String key, ByteBuffer value); - void putStateAsync(String key, ByteBuffer value); - void deleteState(String key); - ByteBuffer getState(String key); - ByteBuffer getStateAsync(String key); - Map getUserConfigMap(); - Optional getUserConfigValue(String key); - Object getUserConfigValueOrDefault(String key, Object defaultValue); - void recordMetric(String metricName, double value); - CompletableFuture publish(String topicName, O object, String schemaOrSerdeClassName); - CompletableFuture publish(String topicName, O object); - TypedMessageBuilder newOutputMessage(String topicName, Schema schema) throws PulsarClientException; - ConsumerBuilder newConsumerBuilder(Schema schema) throws PulsarClientException; - PulsarAdmin getPulsarAdmin(); - PulsarAdmin getPulsarAdmin(String clusterName); -} - -``` - -The following example uses several methods available via the `Context` object. - -```java - -import org.apache.pulsar.functions.api.Context; -import org.apache.pulsar.functions.api.Function; -import org.slf4j.Logger; - -import java.util.stream.Collectors; - -public class ContextFunction implements Function { - public Void process(String input, Context context) { - Logger LOG = context.getLogger(); - String inputTopics = context.getInputTopics().stream().collect(Collectors.joining(", ")); - String functionName = context.getFunctionName(); - - String logMessage = String.format("A message with a value of \"%s\" has arrived on one of the following topics: %s\n", - input, - inputTopics); - - LOG.info(logMessage); - - String metricName = String.format("function-%s-messages-received", functionName); - context.recordMetric(metricName, 1); - - return null; - } -} - -``` - - - - -``` - -class ContextImpl(pulsar.Context): - def get_message_id(self): - ... - def get_message_key(self): - ... - def get_message_eventtime(self): - ... - def get_message_properties(self): - ... - def get_current_message_topic_name(self): - ... - def get_partition_key(self): - ... 
- def get_function_name(self): - ... - def get_function_tenant(self): - ... - def get_function_namespace(self): - ... - def get_function_id(self): - ... - def get_instance_id(self): - ... - def get_function_version(self): - ... - def get_logger(self): - ... - def get_user_config_value(self, key): - ... - def get_user_config_map(self): - ... - def record_metric(self, metric_name, metric_value): - ... - def get_input_topics(self): - ... - def get_output_topic(self): - ... - def get_output_serde_class_name(self): - ... - def publish(self, topic_name, message, serde_class_name="serde.IdentitySerDe", - properties=None, compression_type=None, callback=None, message_conf=None): - ... - def ack(self, msgid, topic): - ... - def get_and_reset_metrics(self): - ... - def reset_metrics(self): - ... - def get_metrics(self): - ... - def incr_counter(self, key, amount): - ... - def get_counter(self, key): - ... - def del_counter(self, key): - ... - def put_state(self, key, value): - ... - def get_state(self, key): - ... - -``` - - - - -``` - -func (c *FunctionContext) GetInstanceID() int { - return c.instanceConf.instanceID -} - -func (c *FunctionContext) GetInputTopics() []string { - return c.inputTopics -} - -func (c *FunctionContext) GetOutputTopic() string { - return c.instanceConf.funcDetails.GetSink().Topic -} - -func (c *FunctionContext) GetFuncTenant() string { - return c.instanceConf.funcDetails.Tenant -} - -func (c *FunctionContext) GetFuncName() string { - return c.instanceConf.funcDetails.Name -} - -func (c *FunctionContext) GetFuncNamespace() string { - return c.instanceConf.funcDetails.Namespace -} - -func (c *FunctionContext) GetFuncID() string { - return c.instanceConf.funcID -} - -func (c *FunctionContext) GetFuncVersion() string { - return c.instanceConf.funcVersion -} - -func (c *FunctionContext) GetUserConfValue(key string) interface{} { - return c.userConfigs[key] -} - -func (c *FunctionContext) GetUserConfMap() map[string]interface{} { - return c.userConfigs -} - -func (c *FunctionContext) SetCurrentRecord(record pulsar.Message) { - c.record = record -} - -func (c *FunctionContext) GetCurrentRecord() pulsar.Message { - return c.record -} - -func (c *FunctionContext) NewOutputMessage(topic string) pulsar.Producer { - return c.outputMessage(topic) -} - -``` - -The following example uses several methods available via the `Context` object. - -``` - -import ( - "context" - "fmt" - - "github.com/apache/pulsar/pulsar-function-go/pf" -) - -func contextFunc(ctx context.Context) { - if fc, ok := pf.FromContext(ctx); ok { - fmt.Printf("function ID is:%s, ", fc.GetFuncID()) - fmt.Printf("function version is:%s\n", fc.GetFuncVersion()) - } -} - -``` - -For complete code, see [here](https://github.com/apache/pulsar/blob/77cf09eafa4f1626a53a1fe2e65dd25f377c1127/pulsar-function-go/examples/contextFunc/contextFunc.go#L29-L34). - - - - -```` - -### User config -When you run or update Pulsar Functions created using SDK, you can pass arbitrary key/values to them with the command line with the `--user-config` flag. Key/values must be specified as JSON. The following function creation command passes a user configured key/value to a function. - -```bash - -$ bin/pulsar-admin functions create \ - --name word-filter \ - # Other function configs - --user-config '{"forbidden-word":"rosebud"}' - -``` - -````mdx-code-block - - - -The Java SDK [`Context`](#context) object enables you to access key/value pairs provided to Pulsar Functions via the command line (as JSON). 
The following example passes a key/value pair.

```bash

$ bin/pulsar-admin functions create \
  # Other function configs
  --user-config '{"word-of-the-day":"verdure"}'

```

To access that value in a Java function:

```java

import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;
import org.slf4j.Logger;

import java.util.Optional;

public class UserConfigFunction implements Function<String, Void> {
    @Override
    public Void process(String input, Context context) {
        Logger LOG = context.getLogger();
        Optional<Object> wotd = context.getUserConfigValue("word-of-the-day");
        if (wotd.isPresent()) {
            LOG.info("The word of the day is {}", wotd.get());
        } else {
            LOG.warn("No word of the day provided");
        }
        return null;
    }
}

```

The `UserConfigFunction` function will log the string `"The word of the day is verdure"` every time the function is invoked (that is, every time a message arrives). The `word-of-the-day` user config will change only when the function is updated with a new config value via the command line.

You can also access the entire user config map, or set a default value in case no value is present:

```java

// Get the whole config map
Map<String, Object> allConfigs = context.getUserConfigMap();

// Get value or resort to default
String wotd = (String) context.getUserConfigValueOrDefault("word-of-the-day", "perspicacious");

```

> For all key/value pairs passed to Java functions, both the key *and* the value are `String`. To use the value as a different type, you need to deserialize it from the `String` type.

In a Python function, you can access a configuration value like this.

```python

from pulsar import Function

class WordFilter(Function):
    def process(self, input, context):
        forbidden_word = context.get_user_config_map()["forbidden-word"]

        # Don't publish the message if it contains the user-supplied
        # forbidden word
        if forbidden_word in input:
            pass
        # Otherwise publish the message
        else:
            return input

```

The Python SDK [`Context`](#context) object enables you to access key/value pairs provided to Pulsar Functions via the command line (as JSON). The following example passes a key/value pair.

```bash

$ bin/pulsar-admin functions create \
  # Other function configs \
  --user-config '{"word-of-the-day":"verdure"}'

```

To access that value in a Python function:

```python

from pulsar import Function

class UserConfigFunction(Function):
    def process(self, input, context):
        logger = context.get_logger()
        wotd = context.get_user_config_value('word-of-the-day')
        if wotd is None:
            logger.warn('No word of the day provided')
        else:
            logger.info("The word of the day is {0}".format(wotd))

```

The Go SDK [`Context`](#context) object enables you to access key/value pairs provided to Pulsar Functions via the command line (as JSON). The following example passes a key/value pair.

```bash

$ bin/pulsar-admin functions create \
  --go path/to/go/binary \
  --user-config '{"word-of-the-day":"lackadaisical"}'

```

To access that value in a Go function:

```go

func contextFunc(ctx context.Context) {
    fc, ok := pf.FromContext(ctx)
    if !ok {
        logutil.Fatal("Function context is not defined")
    }

    wotd := fc.GetUserConfValue("word-of-the-day")

    if wotd == nil {
        logutil.Warn("The word of the day is empty")
    } else {
        logutil.Infof("The word of the day is %s", wotd.(string))
    }
}

```

````

### Logger

````mdx-code-block

Pulsar Functions that use the Java SDK have access to an [SLF4j](https://www.slf4j.org/) [`Logger`](https://www.slf4j.org/api/org/apache/log4j/Logger.html) object that can be used to produce logs at the chosen log level. The following example logs either a `WARNING`- or `INFO`-level message based on whether the incoming string contains the word `danger`.

```java

import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;
import org.slf4j.Logger;

public class LoggingFunction implements Function<String, Void> {
    @Override
    public Void process(String input, Context context) {
        Logger LOG = context.getLogger();
        String messageId = context.getCurrentRecord().getMessage()
                .map(msg -> msg.getMessageId().toString())
                .orElse("(unknown)");

        if (input.contains("danger")) {
            LOG.warn("A warning was received in message {}", messageId);
        } else {
            LOG.info("Message {} received\nContent: {}", messageId, input);
        }

        return null;
    }
}

```

If you want your function to produce logs, you need to specify a log topic when creating or running the function. The following is an example.

```bash

$ bin/pulsar-admin functions create \
  --jar my-functions.jar \
  --classname my.package.LoggingFunction \
  --log-topic persistent://public/default/logging-function-logs \
  # Other function configs

```

All logs produced by `LoggingFunction` above can be accessed via the `persistent://public/default/logging-function-logs` topic.

#### Customize Function log level
Additionally, you can use an XML file, `functions_log4j2.xml`, to customize the function log level.
To customize the function log level, create or update `functions_log4j2.xml` in your Pulsar conf directory (for example, `/etc/pulsar/` on bare-metal, or `/pulsar/conf` on Kubernetes) to contain contents such as:

```xml

<Configuration>
    <name>pulsar-functions-instance</name>
    <monitorInterval>30</monitorInterval>
    <Properties>
        <Property>
            <name>pulsar.log.appender</name>
            <value>RollingFile</value>
        </Property>
        <Property>
            <name>pulsar.log.level</name>
            <value>debug</value>
        </Property>
        <Property>
            <name>bk.log.level</name>
            <value>debug</value>
        </Property>
    </Properties>
    <Appenders>
        <Console>
            <name>Console</name>
            <target>SYSTEM_OUT</target>
            <PatternLayout>
                <Pattern>%d{ISO8601_OFFSET_DATE_TIME_HHMM} [%t] %-5level %logger{36} - %msg%n</Pattern>
            </PatternLayout>
        </Console>
        <RollingFile>
            <name>RollingFile</name>
            <fileName>${sys:pulsar.function.log.dir}/${sys:pulsar.function.log.file}.log</fileName>
            <filePattern>${sys:pulsar.function.log.dir}/${sys:pulsar.function.log.file}-%d{MM-dd-yyyy}-%i.log.gz</filePattern>
            <immediateFlush>true</immediateFlush>
            <PatternLayout>
                <Pattern>%d{ISO8601_OFFSET_DATE_TIME_HHMM} [%t] %-5level %logger{36} - %msg%n</Pattern>
            </PatternLayout>
            <Policies>
                <TimeBasedTriggeringPolicy>
                    <interval>1</interval>
                    <modulate>true</modulate>
                </TimeBasedTriggeringPolicy>
                <SizeBasedTriggeringPolicy>
                    <size>1 GB</size>
                </SizeBasedTriggeringPolicy>
                <CronTriggeringPolicy>
                    <schedule>0 0 0 * * ?</schedule>
                </CronTriggeringPolicy>
            </Policies>
            <DefaultRolloverStrategy>
                <Delete>
                    <basePath>${sys:pulsar.function.log.dir}</basePath>
                    <maxDepth>2</maxDepth>
                    <IfFileName>
                        <glob>*/${sys:pulsar.function.log.file}*log.gz</glob>
                    </IfFileName>
                    <IfLastModified>
                        <age>30d</age>
                    </IfLastModified>
                </Delete>
            </DefaultRolloverStrategy>
        </RollingFile>
        <RollingRandomAccessFile>
            <name>BkRollingFile</name>
            <fileName>${sys:pulsar.function.log.dir}/${sys:pulsar.function.log.file}.bk</fileName>
            <filePattern>${sys:pulsar.function.log.dir}/${sys:pulsar.function.log.file}.bk-%d{MM-dd-yyyy}-%i.log.gz</filePattern>
            <immediateFlush>true</immediateFlush>
            <PatternLayout>
                <Pattern>%d{ISO8601_OFFSET_DATE_TIME_HHMM} [%t] %-5level %logger{36} - %msg%n</Pattern>
            </PatternLayout>
            <Policies>
                <TimeBasedTriggeringPolicy>
                    <interval>1</interval>
                    <modulate>true</modulate>
                </TimeBasedTriggeringPolicy>
                <SizeBasedTriggeringPolicy>
                    <size>1 GB</size>
                </SizeBasedTriggeringPolicy>
                <CronTriggeringPolicy>
                    <schedule>0 0 0 * * ?</schedule>
                </CronTriggeringPolicy>
            </Policies>
            <DefaultRolloverStrategy>
                <Delete>
                    <basePath>${sys:pulsar.function.log.dir}</basePath>
                    <maxDepth>2</maxDepth>
                    <IfFileName>
                        <glob>*/${sys:pulsar.function.log.file}.bk*log.gz</glob>
                    </IfFileName>
                    <IfLastModified>
                        <age>30d</age>
                    </IfLastModified>
                </Delete>
            </DefaultRolloverStrategy>
        </RollingRandomAccessFile>
    </Appenders>
    <Loggers>
        <Logger>
            <name>org.apache.pulsar.functions.runtime.shaded.org.apache.bookkeeper</name>
            <level>${sys:bk.log.level}</level>
            <additivity>false</additivity>
            <AppenderRef>
                <ref>BkRollingFile</ref>
            </AppenderRef>
        </Logger>
        <Root>
            <level>${sys:pulsar.log.level}</level>
            <AppenderRef>
                <ref>${sys:pulsar.log.appender}</ref>
                <level>${sys:pulsar.log.level}</level>
            </AppenderRef>
        </Root>
    </Loggers>
</Configuration>

```

Properties set like this:

```xml

<Property>
    <name>pulsar.log.level</name>
    <value>debug</value>
</Property>

```

propagate to the places where they are referenced, such as:

```xml

<Root>
    <level>${sys:pulsar.log.level}</level>
    <AppenderRef>
        <ref>${sys:pulsar.log.appender}</ref>
        <level>${sys:pulsar.log.level}</level>
    </AppenderRef>
</Root>

```

In the above example, debug level logging would be applied to ALL function logs.
This may be more verbose than you desire. To be more selective, you can apply different log levels to different classes or modules. For example:

```xml

<Logger>
    <name>com.example.module</name>
    <level>info</level>
    <additivity>false</additivity>
    <AppenderRef>
        <ref>${sys:pulsar.log.appender}</ref>
    </AppenderRef>
</Logger>

```

You can be more specific as well, such as applying a more verbose log level to a class in the module:

```xml

<Logger>
    <name>com.example.module.className</name>
    <level>debug</level>
    <additivity>false</additivity>
    <AppenderRef>
        <ref>Console</ref>
    </AppenderRef>
</Logger>

```

Each `<AppenderRef>` entry allows you to output the log to a target specified in the definition of the appender.

Additivity pertains to whether log messages will be duplicated if multiple `<Logger>` entries overlap.
To disable additivity, specify

```xml

<additivity>false</additivity>

```

as shown in the examples above. Disabling additivity prevents duplication of log messages when one or more `<Logger>` entries contain classes or modules that overlap.

The appender referenced by `<AppenderRef>` is defined in the `<Appenders>` section, such as:

```xml

<Console>
    <name>Console</name>
    <target>SYSTEM_OUT</target>
    <PatternLayout>
        <Pattern>%d{ISO8601_OFFSET_DATE_TIME_HHMM} [%t] %-5level %logger{36} - %msg%n</Pattern>
    </PatternLayout>
</Console>

```

Pulsar Functions that use the Python SDK have access to a logging object that can be used to produce logs at the chosen log level. The following example function logs either a `WARNING`- or `INFO`-level message based on whether the incoming string contains the word `danger`.

```python

from pulsar import Function

class LoggingFunction(Function):
    def process(self, input, context):
        logger = context.get_logger()
        msg_id = context.get_message_id()
        if 'danger' in input:
            logger.warn("A warning was received in message {0}".format(msg_id))
        else:
            logger.info("Message {0} received\nContent: {1}".format(msg_id, input))

```

If you want your function to produce logs on a Pulsar topic, you need to specify a **log topic** when creating or running the function. The following is an example.

```bash

$ bin/pulsar-admin functions create \
  --py logging_function.py \
  --classname logging_function.LoggingFunction \
  --log-topic logging-function-logs \
  # Other function configs

```

All logs produced by `LoggingFunction` above can be accessed via the `logging-function-logs` topic.
Additionally, you can specify the function log level through the broker XML file as described in [Customize Function log level](#customize-function-log-level).

The following Go Function example shows different log levels based on the function input.

```go

import (
    "context"

    "github.com/apache/pulsar/pulsar-function-go/pf"

    log "github.com/apache/pulsar/pulsar-function-go/logutil"
)

func loggerFunc(ctx context.Context, input []byte) {
    if len(input) <= 100 {
        log.Infof("This input has a length of: %d", len(input))
    } else {
        log.Warnf("This input is getting too long!
It has {%d} characters", len(input)) - } -} - -func main() { - pf.Start(loggerFunc) -} - -``` - -When you use `logTopic` related functionalities in Go Function, import `github.com/apache/pulsar/pulsar-function-go/logutil`, and you do not have to use the `getLogger()` context object. - -Additionally, you can specify the function log level through the broker XML file, as described here: [Customize Function log level](#customize-function-log-level) - - - - -```` - -### Pulsar admin - -Pulsar Functions using the Java SDK has access to the Pulsar admin client, which allows the Pulsar admin client to manage API calls to current Pulsar clusters or external clusters (if `external-pulsars` is provided). - -````mdx-code-block - - - -Below is an example of how to use the Pulsar admin client exposed from the Function `context`. - -``` - -import org.apache.pulsar.client.admin.PulsarAdmin; -import org.apache.pulsar.functions.api.Context; -import org.apache.pulsar.functions.api.Function; - -/** - * In this particular example, for every input message, - * the function resets the cursor of the current function's subscription to a - * specified timestamp. - */ -public class CursorManagementFunction implements Function { - - @Override - public String process(String input, Context context) throws Exception { - PulsarAdmin adminClient = context.getPulsarAdmin(); - if (adminClient != null) { - String topic = context.getCurrentRecord().getTopicName().isPresent() ? - context.getCurrentRecord().getTopicName().get() : null; - String subName = context.getTenant() + "/" + context.getNamespace() + "/" + context.getFunctionName(); - if (topic != null) { - // 1578188166 below is a random-pick timestamp - adminClient.topics().resetCursor(topic, subName, 1578188166); - return "reset cursor successfully"; - } - } - return null; - } -} - -``` - -If you want your function to get access to the Pulsar admin client, you need to enable this feature by setting `exposeAdminClientEnabled=true` in the `functions_worker.yml` file. You can test whether this feature is enabled or not using the command `pulsar-admin functions localrun` with the flag `--web-service-url`. - -``` - -$ bin/pulsar-admin functions localrun \ - --jar my-functions.jar \ - --classname my.package.CursorManagementFunction \ - --web-service-url http://pulsar-web-service:8080 \ - # Other function configs - -``` - - - - -```` - -## Metrics - -Pulsar Functions allows you to deploy and manage processing functions that consume messages from and publish messages to Pulsar topics easily. It is important to ensure that the running functions are healthy at any time. Pulsar Functions can publish arbitrary metrics to the metrics interface which can be queried. - -:::note - -If a Pulsar Function uses the language-native interface for Java or Python, that function is not able to publish metrics and stats to Pulsar. - -::: - -You can monitor Pulsar Functions that have been deployed with the following methods: - -- Check the metrics provided by Pulsar. - - Pulsar Functions expose the metrics that can be collected and used for monitoring the health of **Java, Python, and Go** functions. You can check the metrics by following the [monitoring](deploy-monitoring.md) guide. - - For the complete list of the function metrics, see [here](reference-metrics.md#pulsar-functions). - -- Set and check your customized metrics. - - In addition to the metrics provided by Pulsar, Pulsar allows you to customize metrics for **Java and Python** functions. 
Function workers collect user-defined metrics to Prometheus automatically and you can check them in Grafana. - -Here are examples of how to customize metrics for Java and Python functions. - -````mdx-code-block - - - -You can record metrics using the [`Context`](#context) object on a per-key basis. For example, you can set a metric for the `process-count` key and a different metric for the `elevens-count` key every time the function processes a message. - -```java - -import org.apache.pulsar.functions.api.Context; -import org.apache.pulsar.functions.api.Function; - -public class MetricRecorderFunction implements Function { - @Override - public void apply(Integer input, Context context) { - // Records the metric 1 every time a message arrives - context.recordMetric("hit-count", 1); - - // Records the metric only if the arriving number equals 11 - if (input == 11) { - context.recordMetric("elevens-count", 1); - } - - return null; - } -} - -``` - - - - -You can record metrics using the [`Context`](#context) object on a per-key basis. For example, you can set a metric for the `process-count` key and a different metric for the `elevens-count` key every time the function processes a message. The following is an example. - -```python - -from pulsar import Function - -class MetricRecorderFunction(Function): - def process(self, input, context): - context.record_metric('hit-count', 1) - - if input == 11: - context.record_metric('elevens-count', 1) - -``` - - - - -Currently, the feature is not available in Go. - - - - -```` - -## Security - -If you want to enable security on Pulsar Functions, first you should enable security on [Functions Workers](functions-worker.md). For more details, refer to [Security settings](functions-worker.md#security-settings). - -Pulsar Functions can support the following providers: - -- ClearTextSecretsProvider -- EnvironmentBasedSecretsProvider - -> Pulsar Function supports ClearTextSecretsProvider by default. - -At the same time, Pulsar Functions provides two interfaces, **SecretsProvider** and **SecretsProviderConfigurator**, allowing users to customize secret provider. - -````mdx-code-block - - - -You can get secret provider using the [`Context`](#context) object. The following is an example: - -```java - -import org.apache.pulsar.functions.api.Context; -import org.apache.pulsar.functions.api.Function; -import org.slf4j.Logger; - -public class GetSecretProviderFunction implements Function { - - @Override - public Void process(String input, Context context) throws Exception { - Logger LOG = context.getLogger(); - String secretProvider = context.getSecret(input); - - if (!secretProvider.isEmpty()) { - LOG.info("The secret provider is {}", secretProvider); - } else { - LOG.warn("No secret provider"); - } - - return null; - } -} - -``` - - - - -You can get secret provider using the [`Context`](#context) object. The following is an example: - -```python - -from pulsar import Function - -class GetSecretProviderFunction(Function): - def process(self, input, context): - logger = context.get_logger() - secret_provider = context.get_secret(input) - if secret_provider is None: - logger.warn('No secret provider') - else: - logger.info("The secret provider is {0}".format(secret_provider)) - -``` - - - - -Currently, the feature is not available in Go. - - - - -```` - -## State storage -Pulsar Functions use [Apache BookKeeper](https://bookkeeper.apache.org) as a state storage interface. 
A Pulsar installation, including the local standalone installation, includes a deployment of BookKeeper bookies.

Since the Pulsar 2.1.0 release, Pulsar integrates with the Apache BookKeeper [table service](https://docs.google.com/document/d/155xAwWv5IdOitHh1NVMEwCMGgB28M3FyMiQSxEpjE-Y/edit#heading=h.56rbh52koe3f) to store the `State` for functions. For example, a `WordCount` function can store its `counters` state into the BookKeeper table service via the Pulsar Functions State API.

States are key-value pairs, where the key is a string and the value is arbitrary binary data; counters are stored as 64-bit big-endian binary values. Keys are scoped to an individual Pulsar Function and shared between instances of that function.

You can access states within Pulsar Java Functions using the `putState`, `putStateAsync`, `getState`, `getStateAsync`, `incrCounter`, `incrCounterAsync`, `getCounter`, `getCounterAsync`, and `deleteState` calls on the context object. You can access states within Pulsar Python Functions using the `put_state`, `get_state`, `incr_counter`, `get_counter`, and `del_counter` calls on the context object. You can also manage states using the [querystate](#query-state) and [putstate](#putstate) options to `pulsar-admin functions`.

:::note

State storage is not available in Go.

:::

### API

````mdx-code-block

Currently, Pulsar Functions expose the following APIs for mutating and accessing State. These APIs are available in the [Context](functions-develop.md#context) object when you are using Java SDK functions.

#### incrCounter

```java

    /**
     * Increment the builtin distributed counter referred by key
     * @param key The name of the key
     * @param amount The amount to be incremented
     */
    void incrCounter(String key, long amount);

```

The application can use `incrCounter` to change the counter of a given `key` by the given `amount`.

#### incrCounterAsync

```java

    /**
     * Increment the builtin distributed counter referred by key
     * but don't wait for the completion of the increment operation
     *
     * @param key The name of the key
     * @param amount The amount to be incremented
     */
    CompletableFuture<Void> incrCounterAsync(String key, long amount);

```

The application can use `incrCounterAsync` to asynchronously change the counter of a given `key` by the given `amount`.

#### getCounter

```java

    /**
     * Retrieve the counter value for the key.
     *
     * @param key name of the key
     * @return the amount of the counter value for this key
     */
    long getCounter(String key);

```

The application can use `getCounter` to retrieve the counter of a given `key` mutated by `incrCounter`.

In addition to the `counter` API, Pulsar also exposes a general key/value API for functions to store
general key/value state.

#### getCounterAsync

```java

    /**
     * Retrieve the counter value for the key, but don't wait
     * for the operation to be completed
     *
     * @param key name of the key
     * @return the amount of the counter value for this key
     */
    CompletableFuture<Long> getCounterAsync(String key);

```

The application can use `getCounterAsync` to asynchronously retrieve the counter of a given `key` mutated by `incrCounterAsync`.

#### putState

```java

    /**
     * Update the state value for the key.
     *
     * @param key name of the key
     * @param value state value of the key
     */
    void putState(String key, ByteBuffer value);

```

#### putStateAsync

```java

    /**
     * Update the state value for the key, but don't wait for the operation to be completed
     *
     * @param key name of the key
     * @param value state value of the key
     */
    CompletableFuture<Void> putStateAsync(String key, ByteBuffer value);

```

The application can use `putStateAsync` to asynchronously update the state of a given `key`.

#### getState

```java

    /**
     * Retrieve the state value for the key.
     *
     * @param key name of the key
     * @return the state value for the key.
     */
    ByteBuffer getState(String key);

```

#### getStateAsync

```java

    /**
     * Retrieve the state value for the key, but don't wait for the operation to be completed
     *
     * @param key name of the key
     * @return the state value for the key.
     */
    CompletableFuture<ByteBuffer> getStateAsync(String key);

```

The application can use `getStateAsync` to asynchronously retrieve the state of a given `key`.

#### deleteState

```java

    /**
     * Delete the state value for the key.
     *
     * @param key name of the key
     */
    void deleteState(String key);

```

Counters and binary values share the same keyspace, so this deletes either type.

Currently, Pulsar Functions expose the following APIs for mutating and accessing State. These APIs are available in the [Context](#context) object when you are using Python SDK functions.

#### incr_counter

```python

  def incr_counter(self, key, amount):
    """Increment the counter of a given key in the managed state"""

```

The application can use `incr_counter` to change the counter of a given `key` by the given `amount`.
If the `key` does not exist, a new key is created.

#### get_counter

```python

  def get_counter(self, key):
    """Get the counter of a given key in the managed state"""

```

The application can use `get_counter` to retrieve the counter of a given `key` mutated by `incr_counter`.

In addition to the `counter` API, Pulsar also exposes a general key/value API for functions to store
general key/value state.

#### put_state

```python

  def put_state(self, key, value):
    """Update the value of a given key in the managed state"""

```

The key is a string, and the value is arbitrary binary data.

#### get_state

```python

  def get_state(self, key):
    """Get the value of a given key in the managed state"""

```

#### del_counter

```python

  def del_counter(self, key):
    """Delete the counter of a given key in the managed state"""

```

Counters and binary values share the same keyspace, so this deletes either type.

````

### Query State

A Pulsar Function can use the [State API](#api) for storing state into Pulsar's state storage
and retrieving state back from Pulsar's state storage. Additionally, Pulsar provides
CLI commands for querying its state.

```shell

$ bin/pulsar-admin functions querystate \
    --tenant <tenant> \
    --namespace <namespace> \
    --name <function-name> \
    --state-storage-url <bookkeeper-service-url> \
    --key <state-key> \
    [--watch]

```

If `--watch` is specified, the CLI will watch the value of the provided `state-key`.

### Example

````mdx-code-block

{@inject: github:WordCountFunction:/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/WordCountFunction.java} is a good example
that demonstrates how an application can easily store `state` in Pulsar Functions.
- -```java - -import org.apache.pulsar.functions.api.Context; -import org.apache.pulsar.functions.api.Function; - -import java.util.Arrays; - -public class WordCountFunction implements Function { - @Override - public Void process(String input, Context context) throws Exception { - Arrays.asList(input.split("\\.")).forEach(word -> context.incrCounter(word, 1)); - return null; - } -} - -``` - -The logic of this `WordCount` function is pretty simple and straightforward: - -1. The function first splits the received `String` into multiple words using regex `\\.`. -2. For each `word`, the function increments the corresponding `counter` by 1 (via `incrCounter(key, amount)`). - - - - -```python - -from pulsar import Function - -class WordCount(Function): - def process(self, item, context): - for word in item.split(): - context.incr_counter(word, 1) - -``` - -The logic of this `WordCount` function is pretty simple and straightforward: - -1. The function first splits the received string into multiple words on space. -2. For each `word`, the function increments the corresponding `counter` by 1 (via `incr_counter(key, amount)`). - - - - -```` diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/functions-metrics.md b/site2/website/versioned_docs/version-2.9.2-deprecated/functions-metrics.md deleted file mode 100644 index 8add6693160929..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/functions-metrics.md +++ /dev/null @@ -1,7 +0,0 @@ ---- -id: functions-metrics -title: Metrics for Pulsar Functions -sidebar_label: "Metrics" -original_id: functions-metrics ---- - diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/functions-overview.md b/site2/website/versioned_docs/version-2.9.2-deprecated/functions-overview.md deleted file mode 100644 index 816d301e0fd0e7..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/functions-overview.md +++ /dev/null @@ -1,209 +0,0 @@ ---- -id: functions-overview -title: Pulsar Functions overview -sidebar_label: "Overview" -original_id: functions-overview ---- - -**Pulsar Functions** are lightweight compute processes that - -* consume messages from one or more Pulsar topics, -* apply a user-supplied processing logic to each message, -* publish the results of the computation to another topic. - - -## Goals -With Pulsar Functions, you can create complex processing logic without deploying a separate neighboring system (such as [Apache Storm](http://storm.apache.org/), [Apache Heron](https://heron.incubator.apache.org/), [Apache Flink](https://flink.apache.org/)). Pulsar Functions are computing infrastructure of Pulsar messaging system. 
The core goal is tied to a series of other goals: - -* Developer productivity (language-native vs Pulsar Functions SDK functions) -* Easy troubleshooting -* Operational simplicity (no need for an external processing system) - -## Inspirations -Pulsar Functions are inspired by (and take cues from) several systems and paradigms: - -* Stream processing engines such as [Apache Storm](http://storm.apache.org/), [Apache Heron](https://apache.github.io/incubator-heron), and [Apache Flink](https://flink.apache.org) -* "Serverless" and "Function as a Service" (FaaS) cloud platforms like [Amazon Web Services Lambda](https://aws.amazon.com/lambda/), [Google Cloud Functions](https://cloud.google.com/functions/), and [Azure Cloud Functions](https://azure.microsoft.com/en-us/services/functions/) - -Pulsar Functions can be described as - -* [Lambda](https://aws.amazon.com/lambda/)-style functions that are -* specifically designed to use Pulsar as a message bus. - -## Programming model -Pulsar Functions provide a wide range of functionality, and the core programming model is simple. Functions receive messages from one or more **input [topics](reference-terminology.md#topic)**. Each time a message is received, the function will complete the following tasks. - - * Apply some processing logic to the input and write output to: - * An **output topic** in Pulsar - * [Apache BookKeeper](functions-develop.md#state-storage) - * Write logs to a **log topic** (potentially for debugging purposes) - * Increment a [counter](#word-count-example) - -![Pulsar Functions core programming model](/assets/pulsar-functions-overview.png) - -You can use Pulsar Functions to set up the following processing chain: - -* A Python function listens for the `raw-sentences` topic and "sanitizes" incoming strings (removing extraneous whitespace and converting all characters to lowercase) and then publishes the results to a `sanitized-sentences` topic. -* A Java function listens for the `sanitized-sentences` topic, counts the number of times each word appears within a specified time window, and publishes the results to a `results` topic -* Finally, a Python function listens for the `results` topic and writes the results to a MySQL table. - - -### Word count example - -If you implement the classic word count example using Pulsar Functions, it looks something like this: - -![Pulsar Functions word count example](/assets/pulsar-functions-word-count.png) - -To write the function in Java with [Pulsar Functions SDK for Java](functions-develop.md#available-apis), you can write the function as follows. - -```java - -package org.example.functions; - -import org.apache.pulsar.functions.api.Context; -import org.apache.pulsar.functions.api.Function; - -import java.util.Arrays; - -public class WordCountFunction implements Function { - // This function is invoked every time a message is published to the input topic - @Override - public Void process(String input, Context context) throws Exception { - Arrays.asList(input.split(" ")).forEach(word -> { - String counterKey = word.toLowerCase(); - context.incrCounter(counterKey, 1); - }); - return null; - } -} - -``` - -Bundle and build the JAR file to be deployed, and then deploy it in your Pulsar cluster using the [command line](functions-deploy.md#command-line-interface) as follows. 
- -```bash - -$ bin/pulsar-admin functions create \ - --jar target/my-jar-with-dependencies.jar \ - --classname org.example.functions.WordCountFunction \ - --tenant public \ - --namespace default \ - --name word-count \ - --inputs persistent://public/default/sentences \ - --output persistent://public/default/count - -``` - -### Content-based routing example - -Pulsar Functions are used in many cases. The following is a sophisticated example that involves content-based routing. - -For example, a function takes items (strings) as input and publishes them to either a `fruits` or `vegetables` topic, depending on the item. Or, if an item is neither fruit nor vegetable, a warning is logged to a [log topic](functions-develop.md#logger). The following is a visual representation. - -![Pulsar Functions routing example](/assets/pulsar-functions-routing-example.png) - -If you implement this routing functionality in Python, it looks something like this: - -```python - -from pulsar import Function - -class RoutingFunction(Function): - def __init__(self): - self.fruits_topic = "persistent://public/default/fruits" - self.vegetables_topic = "persistent://public/default/vegetables" - - @staticmethod - def is_fruit(item): - return item in [b"apple", b"orange", b"pear", b"other fruits..."] - - @staticmethod - def is_vegetable(item): - return item in [b"carrot", b"lettuce", b"radish", b"other vegetables..."] - - def process(self, item, context): - if self.is_fruit(item): - context.publish(self.fruits_topic, item) - elif self.is_vegetable(item): - context.publish(self.vegetables_topic, item) - else: - warning = "The item {0} is neither a fruit nor a vegetable".format(item) - context.get_logger().warn(warning) - -``` - -If this code is stored in `~/router.py`, then you can deploy it in your Pulsar cluster using the [command line](functions-deploy.md#command-line-interface) as follows. - -```bash - -$ bin/pulsar-admin functions create \ - --py ~/router.py \ - --classname router.RoutingFunction \ - --tenant public \ - --namespace default \ - --name route-fruit-veg \ - --inputs persistent://public/default/basket-items - -``` - -### Functions, messages and message types -Pulsar Functions take byte arrays as inputs and spit out byte arrays as output. However in languages that support typed interfaces(Java), you can write typed Functions, and bind messages to types in the following ways. -* [Schema Registry](functions-develop.md#schema-registry) -* [SerDe](functions-develop.md#serde) - - -## Fully Qualified Function Name (FQFN) -Each Pulsar Function has a **Fully Qualified Function Name** (FQFN) that consists of three elements: the function tenant, namespace, and function name. FQFN looks like this: - -```http - -tenant/namespace/name - -``` - -FQFNs enable you to create multiple functions with the same name provided that they are in different namespaces. - -## Supported languages -Currently, you can write Pulsar Functions in Java, Python, and Go. For details, refer to [Develop Pulsar Functions](functions-develop.md). - -## Processing guarantees -Pulsar Functions provide three different messaging semantics that you can apply to any function. - -Delivery semantics | Description -:------------------|:------- -**At-most-once** delivery | Each message sent to the function is likely to be processed, or not to be processed (hence "at most"). -**At-least-once** delivery | Each message sent to the function can be processed more than once (hence the "at least"). 
**Effectively-once** delivery | Each message sent to the function has exactly one output associated with it.


### Apply processing guarantees to a function
You can set the processing guarantees for a Pulsar Function when you create the function. The following [`pulsar-admin functions create`](reference-pulsar-admin.md#create-1) command creates a function with effectively-once guarantees applied.

```bash

$ bin/pulsar-admin functions create \
  --name my-effectively-once-function \
  --processing-guarantees EFFECTIVELY_ONCE \
  # Other function configs

```

The available options for `--processing-guarantees` are:

* `ATMOST_ONCE`
* `ATLEAST_ONCE`
* `EFFECTIVELY_ONCE`

> By default, Pulsar Functions provide at-least-once delivery guarantees. So if you create a function without supplying a value for the `--processing-guarantees` flag, the function provides at-least-once guarantees.

### Update the processing guarantees of a function
You can change the processing guarantees applied to a function using the [`update`](reference-pulsar-admin.md#update-1) command. The following is an example.

```bash

$ bin/pulsar-admin functions update \
  --processing-guarantees ATMOST_ONCE \
  # Other function configs

```

diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/functions-package.md b/site2/website/versioned_docs/version-2.9.2-deprecated/functions-package.md
deleted file mode 100644
index db2c4e987dc7be..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/functions-package.md
+++ /dev/null
@@ -1,493 +0,0 @@
---
id: functions-package
title: Package Pulsar Functions
sidebar_label: "How-to: Package"
original_id: functions-package
---

You can package Pulsar functions in Java, Python, and Go. Packaging the window function in Java is the same as [packaging a function in Java](#java).

:::note

Currently, the window function is not available in Python and Go.

:::

## Prerequisite

Before running a Pulsar function, you need to start Pulsar. You can [run a standalone Pulsar in Docker](getting-started-docker.md), or [run Pulsar in Kubernetes](getting-started-helm.md).

To check whether the Docker image starts, you can use the `docker ps` command.

## Java

To package a function in Java, complete the following steps.

1. Create a new maven project with a pom file. In the following code sample, the value of `mainClass` is the fully qualified name of your main class.

   ```xml

   <?xml version="1.0" encoding="UTF-8"?>
   <project xmlns="http://maven.apache.org/POM/4.0.0"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
       <modelVersion>4.0.0</modelVersion>

       <groupId>java-function</groupId>
       <artifactId>java-function</artifactId>
       <version>1.0-SNAPSHOT</version>

       <dependencies>
           <dependency>
               <groupId>org.apache.pulsar</groupId>
               <artifactId>pulsar-functions-api</artifactId>
               <version>2.6.0</version>
           </dependency>
       </dependencies>

       <build>
           <plugins>
               <plugin>
                   <artifactId>maven-assembly-plugin</artifactId>
                   <configuration>
                       <appendAssemblyId>false</appendAssemblyId>
                       <descriptorRefs>
                           <descriptorRef>jar-with-dependencies</descriptorRef>
                       </descriptorRefs>
                       <archive>
                           <manifest>
                               <mainClass>org.example.test.ExclamationFunction</mainClass>
                           </manifest>
                       </archive>
                   </configuration>
                   <executions>
                       <execution>
                           <id>make-assembly</id>
                           <phase>package</phase>
                           <goals>
                               <goal>assembly</goal>
                           </goals>
                       </execution>
                   </executions>
               </plugin>
               <plugin>
                   <groupId>org.apache.maven.plugins</groupId>
                   <artifactId>maven-compiler-plugin</artifactId>
                   <configuration>
                       <source>8</source>
                       <target>8</target>
                   </configuration>
               </plugin>
           </plugins>
       </build>
   </project>

   ```

2. Write a Java function.

   ```java

   package org.example.test;

   import java.util.function.Function;

   public class ExclamationFunction implements Function<String, String> {
       @Override
       public String apply(String s) {
           return "This is my function!";
       }
   }

   ```

   For the imported package, you can use one of the following interfaces:
   - Function interface provided by Java 8: `java.util.function.Function`
   - Pulsar Function interface: `org.apache.pulsar.functions.api.Function`

   The main difference between the two interfaces is that the `org.apache.pulsar.functions.api.Function` interface provides the context interface.
When you write a function and want to interact with it, you can use context to obtain a wide variety of information and functionality for Pulsar Functions. - - The following example uses `org.apache.pulsar.functions.api.Function` interface with context. - - ``` - - package org.example.functions; - import org.apache.pulsar.functions.api.Context; - import org.apache.pulsar.functions.api.Function; - - import java.util.Arrays; - public class WordCountFunction implements Function { - // This function is invoked every time a message is published to the input topic - @Override - public Void process(String input, Context context) throws Exception { - Arrays.asList(input.split(" ")).forEach(word -> { - String counterKey = word.toLowerCase(); - context.incrCounter(counterKey, 1); - }); - return null; - } - } - - ``` - -3. Package the Java function. - - ```bash - - mvn package - - ``` - - After the Java function is packaged, a `target` directory is created automatically. Open the `target` directory to check if there is a JAR package similar to `java-function-1.0-SNAPSHOT.jar`. - - -4. Run the Java function. - - (1) Copy the packaged jar file to the Pulsar image. - - ```bash - - docker exec -it [CONTAINER ID] /bin/bash - docker cp CONTAINER ID:/pulsar - - ``` - - (2) Run the Java function using the following command. - - ```bash - - ./bin/pulsar-admin functions localrun \ - --classname org.example.test.ExclamationFunction \ - --jar java-function-1.0-SNAPSHOT.jar \ - --inputs persistent://public/default/my-topic-1 \ - --output persistent://public/default/test-1 \ - --tenant public \ - --namespace default \ - --name JavaFunction - - ``` - - The following log indicates that the Java function starts successfully. - - ```text - - ... - 07:55:03.724 [main] INFO org.apache.pulsar.functions.runtime.ProcessRuntime - Started process successfully - ... - - ``` - -## Python - -Python Function supports the following three formats: - -- One python file -- ZIP file -- PIP - -### One python file - -To package a function with **one python file** in Python, complete the following steps. - -1. Write a Python function. - - ``` - - from pulsar import Function // import the Function module from Pulsar - - # The classic ExclamationFunction that appends an exclamation at the end - # of the input - class ExclamationFunction(Function): - def __init__(self): - pass - - def process(self, input, context): - return input + '!' - - ``` - - In this example, when you write a Python function, you need to inherit the Function class and implement the `process()` method. - - `process()` mainly has two parameters: - - - `input` represents your input. - - - `context` represents an interface exposed by the Pulsar Function. You can get the attributes in the Python function based on the provided context object. - -2. Install a Python client. - - The implementation of a Python function depends on the Python client, so before deploying a Python function, you need to install the corresponding version of the Python client. - - ```bash - - pip install python-client==2.6.0 - - ``` - -3. Run the Python Function. - - (1) Copy the Python function file to the Pulsar image. - - ```bash - - docker exec -it [CONTAINER ID] /bin/bash - docker cp CONTAINER ID:/pulsar - - ``` - - (2) Run the Python function using the following command. 
- - ```bash - - ./bin/pulsar-admin functions localrun \ - --classname org.example.test.ExclamationFunction \ - --py \ - --inputs persistent://public/default/my-topic-1 \ - --output persistent://public/default/test-1 \ - --tenant public \ - --namespace default \ - --name PythonFunction - - ``` - - The following log indicates that the Python function starts successfully. - - ```text - - ... - 07:55:03.724 [main] INFO org.apache.pulsar.functions.runtime.ProcessRuntime - Started process successfully - ... - - ``` - -### ZIP file - -To package a function with the **ZIP file** in Python, complete the following steps. - -1. Prepare the ZIP file. - - The following is required when packaging the ZIP file of the Python Function. - - ```text - - Assuming the zip file is named as `func.zip`, unzip the `func.zip` folder: - "func/src" - "func/requirements.txt" - "func/deps" - - ``` - - Take [exclamation.zip](https://github.com/apache/pulsar/tree/master/tests/docker-images/latest-version-image/python-examples) as an example. The internal structure of the example is as follows. - - ```text - - . - ├── deps - │   └── sh-1.12.14-py2.py3-none-any.whl - └── src - └── exclamation.py - - ``` - -2. Run the Python Function. - - (1) Copy the ZIP file to the Pulsar image. - - ```bash - - docker exec -it [CONTAINER ID] /bin/bash - docker cp CONTAINER ID:/pulsar - - ``` - - (2) Run the Python function using the following command. - - ```bash - - ./bin/pulsar-admin functions localrun \ - --classname exclamation \ - --py \ - --inputs persistent://public/default/in-topic \ - --output persistent://public/default/out-topic \ - --tenant public \ - --namespace default \ - --name PythonFunction - - ``` - - The following log indicates that the Python function starts successfully. - - ```text - - ... - 07:55:03.724 [main] INFO org.apache.pulsar.functions.runtime.ProcessRuntime - Started process successfully - ... - - ``` - -### PIP - -The PIP method is only supported in Kubernetes runtime. To package a function with **PIP** in Python, complete the following steps. - -1. Configure the `functions_worker.yml` file. - - ```text - - #### Kubernetes Runtime #### - installUserCodeDependencies: true - - ``` - -2. Write your Python Function. - - ``` - - from pulsar import Function - import js2xml - - # The classic ExclamationFunction that appends an exclamation at the end - # of the input - class ExclamationFunction(Function): - def __init__(self): - pass - - def process(self, input, context): - // add your logic - return input + '!' - - ``` - - You can introduce additional dependencies. When Python Function detects that the file currently used is `whl` and the `installUserCodeDependencies` parameter is specified, the system uses the `pip install` command to install the dependencies required in Python Function. - -3. Generate the `whl` file. - - ```shell script - - $ cd $PULSAR_HOME/pulsar-functions/scripts/python - $ chmod +x generate.sh - $ ./generate.sh - # e.g: ./generate.sh /path/to/python /path/to/python/output 1.0.0 - - ``` - - The output is written in `/path/to/python/output`: - - ```text - - -rw-r--r-- 1 root staff 1.8K 8 27 14:29 pulsarfunction-1.0.0-py2-none-any.whl - -rw-r--r-- 1 root staff 1.4K 8 27 14:29 pulsarfunction-1.0.0.tar.gz - -rw-r--r-- 1 root staff 0B 8 27 14:29 pulsarfunction.whl - - ``` - -## Go - -To package a function in Go, complete the following steps. - -1. Write a Go function. - - Currently, Go function can be **only** implemented using SDK and the interface of the function is exposed in the form of SDK. 
Before using the Go function, you need to import "github.com/apache/pulsar/pulsar-function-go/pf". - - ``` - - import ( - "context" - "fmt" - - "github.com/apache/pulsar/pulsar-function-go/pf" - ) - - func HandleRequest(ctx context.Context, input []byte) error { - fmt.Println(string(input) + "!") - return nil - } - - func main() { - pf.Start(HandleRequest) - } - - ``` - - You can use context to connect to the Go function. - - ``` - - if fc, ok := pf.FromContext(ctx); ok { - fmt.Printf("function ID is:%s, ", fc.GetFuncID()) - fmt.Printf("function version is:%s\n", fc.GetFuncVersion()) - } - - ``` - - When writing a Go function, remember that - - In `main()`, you **only** need to register the function name to `Start()`. **Only** one function name is received in `Start()`. - - Go function uses Go reflection, which is based on the received function name, to verify whether the parameter list and returned value list are correct. The parameter list and returned value list **must be** one of the following sample functions: - - ``` - - func () - func () error - func (input) error - func () (output, error) - func (input) (output, error) - func (context.Context) error - func (context.Context, input) error - func (context.Context) (output, error) - func (context.Context, input) (output, error) - - ``` - -2. Build the Go function. - - ``` - - go build .go - - ``` - -3. Run the Go Function. - - (1) Copy the Go function file to the Pulsar image. - - ```bash - - docker exec -it [CONTAINER ID] /bin/bash - docker cp CONTAINER ID:/pulsar - - ``` - - (2) Run the Go function with the following command. - - ``` - - ./bin/pulsar-admin functions localrun \ - --go [your go function path] - --inputs [input topics] \ - --output [output topic] \ - --tenant [default:public] \ - --namespace [default:default] \ - --name [custom unique go function name] - - ``` - - The following log indicates that the Go function starts successfully. - - ```text - - ... - 07:55:03.724 [main] INFO org.apache.pulsar.functions.runtime.ProcessRuntime - Started process successfully - ... - - ``` - -## Start Functions in cluster mode -If you want to start a function in cluster mode, replace `localrun` with `create` in the commands above. The following log indicates that your function starts successfully. - - ```text - - "Created successfully" - - ``` - -For information about parameters on `--classname`, `--jar`, `--py`, `--go`, `--inputs`, run the command `./bin/pulsar-admin functions` or see [here](reference-pulsar-admin.md#functions). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/functions-runtime.md b/site2/website/versioned_docs/version-2.9.2-deprecated/functions-runtime.md deleted file mode 100644 index 7164bd13668aff..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/functions-runtime.md +++ /dev/null @@ -1,403 +0,0 @@ ---- -id: functions-runtime -title: Configure Functions runtime -sidebar_label: "Setup: Configure Functions runtime" -original_id: functions-runtime ---- - -You can use the following methods to run functions. - -- *Thread*: Invoke functions threads in functions worker. -- *Process*: Invoke functions in processes forked by functions worker. -- *Kubernetes*: Submit functions as Kubernetes StatefulSets by functions worker. - -:::note - -Pulsar supports adding labels to the Kubernetes StatefulSets and services while launching functions, which facilitates selecting the target Kubernetes objects. 
- -::: - -The differences of the thread and process modes are: -- Thread mode: when a function runs in thread mode, it runs on the same Java virtual machine (JVM) with functions worker. -- Process mode: when a function runs in process mode, it runs on the same machine that functions worker runs. - -## Configure thread runtime -It is easy to configure *Thread* runtime. In most cases, you do not need to configure anything. You can customize the thread group name with the following settings: - -```yaml - -functionRuntimeFactoryClassName: org.apache.pulsar.functions.runtime.thread.ThreadRuntimeFactory -functionRuntimeFactoryConfigs: - threadGroupName: "Your Function Container Group" - -``` - -*Thread* runtime is only supported in Java function. - -## Configure process runtime -When you enable *Process* runtime, you do not need to configure anything. - -```yaml - -functionRuntimeFactoryClassName: org.apache.pulsar.functions.runtime.process.ProcessRuntimeFactory -functionRuntimeFactoryConfigs: - # the directory for storing the function logs - logDirectory: - # change the jar location only when you put the java instance jar in a different location - javaInstanceJarLocation: - # change the python instance location only when you put the python instance jar in a different location - pythonInstanceLocation: - # change the extra dependencies location: - extraFunctionDependenciesDir: - -``` - -*Process* runtime is supported in Java, Python, and Go functions. - -## Configure Kubernetes runtime - -When the functions worker generates Kubernetes manifests and apply the manifests, the Kubernetes runtime works. If you have run functions worker on Kubernetes, you can use the `serviceAccount` associated with the pod that the functions worker is running in. Otherwise, you can configure it to communicate with a Kubernetes cluster. - -The manifests, generated by the functions worker, include a `StatefulSet`, a `Service` (used to communicate with the pods), and a `Secret` for auth credentials (when applicable). The `StatefulSet` manifest (by default) has a single pod, with the number of replicas determined by the "parallelism" of the function. On pod boot, the pod downloads the function payload (via the functions worker REST API). The pod's container image is configurable, but must have the functions runtime. - -The Kubernetes runtime supports secrets, so you can create a Kubernetes secret and expose it as an environment variable in the pod. The Kubernetes runtime is extensible, you can implement classes and customize the way how to generate Kubernetes manifests, how to pass auth data to pods, and how to integrate secrets. - -:::tip - -For the rules of translating Pulsar object names into Kubernetes resource labels, see [here](admin-api-overview.md#how-to-define-pulsar-resource-names-when-running-pulsar-in-kubernetes). - -::: - -### Basic configuration - -It is easy to configure Kubernetes runtime. You can just uncomment the settings of `kubernetesContainerFactory` in the `functions_worker.yaml` file. The following is an example. - -```yaml - -functionRuntimeFactoryClassName: org.apache.pulsar.functions.runtime.kubernetes.KubernetesRuntimeFactory -functionRuntimeFactoryConfigs: - # uri to kubernetes cluster, leave it to empty and it will use the kubernetes settings in function worker - k8Uri: - # the kubernetes namespace to run the function instances. it is `default`, if this setting is left to be empty - jobNamespace: - # The Kubernetes pod name to run the function instances. 
It is set to - # `pf----` if this setting is left to be empty - jobName: - # the docker image to run function instance. by default it is `apachepulsar/pulsar` - pulsarDockerImageName: - # the docker image to run function instance according to different configurations provided by users. - # By default it is `apachepulsar/pulsar`. - # e.g: - # functionDockerImages: - # JAVA: JAVA_IMAGE_NAME - # PYTHON: PYTHON_IMAGE_NAME - # GO: GO_IMAGE_NAME - functionDockerImages: - # "The image pull policy for image used to run function instance. By default it is `IfNotPresent` - imagePullPolicy: IfNotPresent - # the root directory of pulsar home directory in `pulsarDockerImageName`. by default it is `/pulsar`. - # if you are using your own built image in `pulsarDockerImageName`, you need to set this setting accordingly - pulsarRootDir: - # The config admin CLI allows users to customize the configuration of the admin cli tool, such as: - # `/bin/pulsar-admin and /bin/pulsarctl`. By default it is `/bin/pulsar-admin`. If you want to use `pulsarctl` - # you need to set this setting accordingly - configAdminCLI: - # this setting only takes effects if `k8Uri` is set to null. if your function worker is running as a k8 pod, - # setting this to true is let function worker to submit functions to the same k8s cluster as function worker - # is running. setting this to false if your function worker is not running as a k8 pod. - submittingInsidePod: false - # setting the pulsar service url that pulsar function should use to connect to pulsar - # if it is not set, it will use the pulsar service url configured in worker service - pulsarServiceUrl: - # setting the pulsar admin url that pulsar function should use to connect to pulsar - # if it is not set, it will use the pulsar admin url configured in worker service - pulsarAdminUrl: - # The flag indicates to install user code dependencies. (applied to python package) - installUserCodeDependencies: - # The repository that pulsar functions use to download python dependencies - pythonDependencyRepository: - # The repository that pulsar functions use to download extra python dependencies - pythonExtraDependencyRepository: - # the custom labels that function worker uses to select the nodes for pods - customLabels: - # The expected metrics collection interval, in seconds - expectedMetricsCollectionInterval: 30 - # Kubernetes Runtime will periodically checkback on - # this configMap if defined and if there are any changes - # to the kubernetes specific stuff, we apply those changes - changeConfigMap: - # The namespace for storing change config map - changeConfigMapNamespace: - # The ratio cpu request and cpu limit to be set for a function/source/sink. - # The formula for cpu request is cpuRequest = userRequestCpu / cpuOverCommitRatio - cpuOverCommitRatio: 1.0 - # The ratio memory request and memory limit to be set for a function/source/sink. 
- # The formula for memory request is memoryRequest = userRequestMemory / memoryOverCommitRatio - memoryOverCommitRatio: 1.0 - # The port inside the function pod which is used by the worker to communicate with the pod - grpcPort: 9093 - # The port inside the function pod on which prometheus metrics are exposed - metricsPort: 9094 - # The directory inside the function pod where nar packages will be extracted - narExtractionDirectory: - # The classpath where function instance files stored - functionInstanceClassPath: - # the directory for dropping extra function dependencies - # if it is not an absolute path, it is relative to `pulsarRootDir` - extraFunctionDependenciesDir: - # Additional memory padding added on top of the memory requested by the function per on a per instance basis - percentMemoryPadding: 10 - # The duration (in seconds) before the StatefulSet is deleted after a function stops or restarts. - # Value must be a non-negative integer. 0 indicates the StatefulSet is deleted immediately. - # Default is 5 seconds. - gracePeriodSeconds: 5 - -``` - -If you run functions worker embedded in a broker on Kubernetes, you can use the default settings. - -### Run standalone functions worker on Kubernetes - -If you run functions worker standalone (that is, not embedded) on Kubernetes, you need to configure `pulsarSerivceUrl` to be the URL of the broker and `pulsarAdminUrl` as the URL to the functions worker. - -For example, both Pulsar brokers and Function Workers run in the `pulsar` K8S namespace. The brokers have a service called `brokers` and the functions worker has a service called `func-worker`. The settings are as follows: - -```yaml - -pulsarServiceUrl: pulsar://broker.pulsar:6650 // or pulsar+ssl://broker.pulsar:6651 if using TLS -pulsarAdminUrl: http://func-worker.pulsar:8080 // or https://func-worker:8443 if using TLS - -``` - -### Run RBAC in Kubernetes clusters - -If you run RBAC in your Kubernetes cluster, make sure that the service account you use for running functions workers (or brokers, if functions workers run along with brokers) have permissions on the following Kubernetes APIs. - -- services -- configmaps -- pods -- apps.statefulsets - -The following is sufficient: - -```yaml - -apiVersion: rbac.authorization.k8s.io/v1beta1 -kind: ClusterRole -metadata: - name: functions-worker -rules: -- apiGroups: [""] - resources: - - services - - configmaps - - pods - verbs: - - '*' -- apiGroups: - - apps - resources: - - statefulsets - verbs: - - '*' ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: functions-worker ---- -apiVersion: rbac.authorization.k8s.io/v1beta1 -kind: ClusterRoleBinding -metadata: - name: functions-worker -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: functions-worker -subjectsKubernetesSec: -- kind: ServiceAccount - name: functions-worker - -``` - -If the service-account is not properly configured, an error message similar to this is displayed: - -```bash - -22:04:27.696 [Timer-0] ERROR org.apache.pulsar.functions.runtime.KubernetesRuntimeFactory - Error while trying to fetch configmap example-pulsar-4qvmb5gur3c6fc9dih0x1xn8b-function-worker-config at namespace pulsar -io.kubernetes.client.ApiException: Forbidden - at io.kubernetes.client.ApiClient.handleResponse(ApiClient.java:882) ~[io.kubernetes-client-java-2.0.0.jar:?] - at io.kubernetes.client.ApiClient.execute(ApiClient.java:798) ~[io.kubernetes-client-java-2.0.0.jar:?] 
-
-### Integrate Kubernetes secrets
-
-In order to safely distribute secrets, Pulsar Functions can reference Kubernetes secrets. To enable this, set the `secretsProviderConfiguratorClassName` to `org.apache.pulsar.functions.secretsproviderconfigurator.KubernetesSecretsProviderConfigurator`.
-
-You can create a secret in the namespace where your functions are deployed. For example, you deploy functions to the `pulsar-func` Kubernetes namespace, and you have a secret named `database-creds` with a field named `password`, which you want to mount in the pod as an environment variable called `DATABASE_PASSWORD`. The following functions configuration enables you to reference that secret and mount the value as an environment variable in the pod.
-
-```Yaml
-
-tenant: "mytenant"
-namespace: "mynamespace"
-name: "myfunction"
-topicName: "persistent://mytenant/mynamespace/myfuncinput"
-className: "com.company.pulsar.myfunction"
-
-secrets:
-  # the value of the `password` field in the `database-creds` secret is mounted as an env var called `DATABASE_PASSWORD`
-  DATABASE_PASSWORD:
-    path: "database-creds"
-    key: "password"
-
-```
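-
-The referenced secret must exist in the functions namespace before the pod starts. As a sketch, reusing the names from the example above, it could be created with:
-
-```bash
-
-# Create the `database-creds` secret with a `password` field in the
-# namespace where the functions are deployed (`pulsar-func` here)
-kubectl create secret generic database-creds \
-  --from-literal=password='my-db-password' \
-  -n pulsar-func
-
-```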
-
-### Enable token authentication
-
-When you enable authentication for your Pulsar cluster, you need a mechanism for the pod running your function to authenticate with the broker.
-
-The `org.apache.pulsar.functions.auth.KubernetesFunctionAuthProvider` interface provides support for any authentication mechanism. The `functionAuthProviderClassName` in `function-worker.yml` is used to specify the path to your implementation.
-
-Pulsar includes an implementation of this interface for token authentication, and distributes the certificate authority via the same implementation. The configuration is similar to the following:
-
-```Yaml
-
-functionAuthProviderClassName: org.apache.pulsar.functions.auth.KubernetesSecretsTokenAuthProvider
-
-```
-
-For token authentication, the functions worker captures the token that is used to deploy (or update) the function. The token is saved as a secret and mounted into the pod.
-
-For custom authentication or TLS, you need to implement this interface or use an alternative mechanism to provide authentication. If you use token authentication and TLS encryption to secure the communication with the cluster, Pulsar passes your certificate authority (CA) to the client, so the client obtains what it needs to authenticate the cluster, and trusts the cluster with your signed certificate.
-
-:::note
-
-If the token you use to deploy a function has an expiry time, the saved copy of the token expires at the same time, and the function may then fail to authenticate with the broker.
-
-:::
-
-### Run clusters with authentication
-
-When you run a functions worker in a standalone process (that is, not embedded in the broker) in a cluster with authentication, you must configure your functions worker to interact with the broker and authenticate incoming requests. So you need to configure properties that the broker requires for authentication or authorization.
-
-For example, if you use token authentication, you need to configure the following properties in the `function-worker.yml` file.
-
-```Yaml
-
-clientAuthenticationPlugin: org.apache.pulsar.client.impl.auth.AuthenticationToken
-clientAuthenticationParameters: file:///etc/pulsar/token/admin-token.txt
-configurationStoreServers: zookeeper-cluster:2181 # auth requires a connection to zookeeper
-authenticationProviders:
-  - "org.apache.pulsar.broker.authentication.AuthenticationProviderToken"
-authorizationEnabled: true
-authenticationEnabled: true
-superUserRoles:
-  - superuser
-  - proxy
-properties:
-  tokenSecretKey: file:///etc/pulsar/jwt/secret # if using a secret token, key file must be DER-encoded
-  tokenPublicKey: file:///etc/pulsar/jwt/public.key # if using public/private key tokens, key file must be DER-encoded
-
-```
-
-:::note
-
-You must configure the functions worker for both roles: as a server that authenticates incoming requests, and as a client that authenticates itself to communicate with the broker.
-
-:::
-
-### Customize Kubernetes runtime
-
-The Kubernetes integration enables you to implement a class and customize how manifests are generated. You can configure it by setting `runtimeCustomizerClassName` in the `functions-worker.yml` file to the fully qualified class name. You must implement the `org.apache.pulsar.functions.runtime.kubernetes.KubernetesManifestCustomizer` interface.
-
-The functions (and sinks/sources) API provides a flag, `customRuntimeOptions`, which is passed to this interface.
-
-To initialize the `KubernetesManifestCustomizer`, you can provide `runtimeCustomizerConfig` in the `functions-worker.yml` file. `runtimeCustomizerConfig` is passed to the `public void initialize(Map<String, Object> config)` function of the interface. `runtimeCustomizerConfig` is different from `customRuntimeOptions` in that `runtimeCustomizerConfig` is the same across all functions. If you provide both `runtimeCustomizerConfig` and `customRuntimeOptions`, you need to decide how to manage these two configurations in your implementation of `KubernetesManifestCustomizer`.
-
-Pulsar includes a built-in implementation. To use the basic implementation, set `runtimeCustomizerClassName` to `org.apache.pulsar.functions.runtime.kubernetes.BasicKubernetesManifestCustomizer`. The built-in implementation, initialized with `runtimeCustomizerConfig`, enables you to pass a JSON document as `customRuntimeOptions` with certain properties that augment how the manifests are generated. If both `runtimeCustomizerConfig` and `customRuntimeOptions` are provided, `BasicKubernetesManifestCustomizer` uses `customRuntimeOptions` to override the configuration if there are conflicts between the two.
-
-Below is an example of `customRuntimeOptions`.
-
-```json
-
-{
-  "jobName": "jobname", // the k8s pod name to run this function instance
-  "jobNamespace": "namespace", // the k8s namespace to run this function in
-  "extraLabels": { // extra labels to attach to the statefulSet, service, and pods
-    "extraLabel": "value"
-  },
-  "extraAnnotations": { // extra annotations to attach to the statefulSet, service, and pods
-    "extraAnnotation": "value"
-  },
-  "nodeSelectorLabels": { // node selector labels to add on to the pod spec
-    "customLabel": "value"
-  },
-  "tolerations": [ // tolerations to add to the pod spec
-    {
-      "key": "custom-key",
-      "value": "value",
-      "effect": "NoSchedule"
-    }
-  ],
-  "resourceRequirements": { // values for cpu and memory should be defined as described here: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container
-    "requests": {
-      "cpu": 1,
-      "memory": "4G"
-    },
-    "limits": {
-      "cpu": 2,
-      "memory": "8G"
-    }
-  }
-}
-
-```
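-
-These options are supplied per function at submission time. As a sketch (the function name, jar path, and option values here are illustrative), you can pass them through `pulsar-admin`:
-
-```bash
-
-# Create a function and hand custom runtime options to the manifest customizer
-bin/pulsar-admin functions create \
-  --name myfunction \
-  --inputs persistent://public/default/myfuncinput \
-  --classname com.company.pulsar.myfunction \
-  --jar /path/to/function.jar \
-  --custom-runtime-options '{"jobNamespace": "pulsar-func", "extraLabels": {"team": "data"}}'
-
-```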
-
-## Run clusters with geo-replication
-
-If you run multiple clusters tied together with geo-replication, it is important to use a different function namespace for each cluster. Otherwise, the functions share a namespace and could potentially be scheduled across clusters.
-
-For example, if you have two clusters, `east-1` and `west-1`, you can configure the functions workers for `east-1` and `west-1` respectively as follows.
-
-```Yaml
-
-pulsarFunctionsCluster: east-1
-pulsarFunctionsNamespace: public/functions-east-1
-
-```
-
-```Yaml
-
-pulsarFunctionsCluster: west-1
-pulsarFunctionsNamespace: public/functions-west-1
-
-```
-
-This ensures the two different Functions Workers use distinct sets of topics for their internal coordination.
-
-## Configure standalone functions worker
-
-When configuring a standalone functions worker, you need to configure the properties that the broker requires, especially if you use TLS, so that the functions worker can communicate with the broker.
-
-You need to configure the following required properties.
-
-```Yaml
-
-workerPort: 8080
-workerPortTls: 8443 # when using TLS
-tlsCertificateFilePath: /etc/pulsar/tls/tls.crt # when using TLS
-tlsKeyFilePath: /etc/pulsar/tls/tls.key # when using TLS
-tlsTrustCertsFilePath: /etc/pulsar/tls/ca.crt # when using TLS
-pulsarServiceUrl: pulsar://broker.pulsar:6650/ # or pulsar+ssl://pulsar-prod-broker.pulsar:6651/ when using TLS
-pulsarWebServiceUrl: http://broker.pulsar:8080/ # or https://pulsar-prod-broker.pulsar:8443/ when using TLS
-useTls: true # when using TLS, critical!
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/functions-worker.md b/site2/website/versioned_docs/version-2.9.2-deprecated/functions-worker.md
deleted file mode 100644
index 35e26926bb7aba..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/functions-worker.md
+++ /dev/null
@@ -1,385 +0,0 @@
----
-id: functions-worker
-title: Deploy and manage functions worker
-sidebar_label: "Setup: Pulsar Functions Worker"
-original_id: functions-worker
----
-Before using Pulsar Functions, you need to learn how to set up the Pulsar Functions worker and how to [configure Functions runtime](functions-runtime.md).
-
-Pulsar `functions-worker` is a logical component that runs Pulsar Functions in cluster mode. Two options are available, and you can select either based on your requirements.
-
-- [run with brokers](#run-functions-worker-with-brokers)
-- [run it separately](#run-functions-worker-separately) on separate machines
-
-:::note
-
-The `--- Service Urls---` lines in the following diagrams represent Pulsar service URLs that the Pulsar client and admin use to connect to a Pulsar cluster.
-
-:::
-
-## Run Functions-worker with brokers
-
-The following diagram illustrates the deployment of functions-workers running along with brokers.
-
-![assets/functions-worker-corun.png](/assets/functions-worker-corun.png)
-
-To enable the functions-worker running as part of a broker, you need to set `functionsWorkerEnabled` to `true` in the `broker.conf` file.
-
-```conf
-
-functionsWorkerEnabled=true
-
-```
-
-If `functionsWorkerEnabled` is set to `true`, the functions-worker is started as part of a broker. You need to configure the `conf/functions_worker.yml` file to customize your functions worker.
-
-Before you run the functions-worker with brokers, you have to configure the functions-worker, and then start it with the brokers.
-
-### Configure Functions-Worker to run with brokers
-
-In this mode, most of the settings are already inherited from your broker configuration (for example, configurationStore settings, authentication settings, and so on) since `functions-worker` is running as part of the broker.
-
-Pay attention to the following required settings when configuring the functions-worker in this mode.
-
-- `numFunctionPackageReplicas`: The number of replicas to store function packages. The default value is `1`, which is good for standalone deployment. For production deployment, to ensure high availability, set it to be larger than `2`.
-- `initializedDlogMetadata`: Whether to initialize distributed log metadata at runtime. If it is set to `true`, you must ensure that it has been initialized by the `bin/pulsar initialize-cluster-metadata` command.
-
-If authentication is enabled on the BookKeeper cluster, configure the following BookKeeper authentication settings.
-
-- `bookkeeperClientAuthenticationPlugin`: the BookKeeper client authentication plugin name.
-- `bookkeeperClientAuthenticationParametersName`: the BookKeeper client authentication plugin parameters name.
-- `bookkeeperClientAuthenticationParameters`: the BookKeeper client authentication plugin parameters.
-
-### Configure Stateful-Functions to run with broker
-
-If you want to use Stateful-Functions related features (for example, the `putState()` and `queryState()` related interfaces), follow the steps below.
-
-1. Enable the **streamStorage** service in BookKeeper.
-
-   Currently, the service uses the NAR package, so you need to set the configuration in `bookkeeper.conf`.
-
-   ```text
-   
-   extraServerComponents=org.apache.bookkeeper.stream.server.StreamStorageLifecycleComponent
-   
-   ```
-
-   After starting the bookie, use the following method to check whether the streamStorage service is started correctly.
-
-   Input:
-
-   ```shell
-   
-   telnet localhost 4181
-   
-   ```
-
-   Output:
-
-   ```text
-   
-   Trying 127.0.0.1...
-   Connected to localhost.
-   Escape character is '^]'.
-   
-   ```
-
-2. Turn on this function in `functions_worker.yml`.
-
-   ```text
-   
-   stateStorageServiceUrl: bk://<bk-service-url>:4181
-   
-   ```
-
-   `<bk-service-url>` is the service URL pointing to the BookKeeper table service.
-
-### Start Functions-worker with broker
-
-Once you have configured the `functions_worker.yml` file, you can start or restart your broker.
-
-Then you can use the following command to verify that the `functions-worker` is running.
-
-```bash
-
-curl <worker-hostname>:8080/admin/v2/worker/cluster
-
-```
-
-After entering the command above, a list of active function workers in the cluster is returned. The output is similar to the following.
-
-```json
-
-[{"workerId":"<worker-id>","workerHostname":"<worker-hostname>","port":8080}]
-
-```
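-
-If you enabled the state storage service as described above, you can also sanity-check it by querying the state of a deployed function. This is a sketch; the function and key names are illustrative:
-
-```bash
-
-# Read the state value stored under `my-key` by a running stateful function
-bin/pulsar-admin functions querystate \
-  --tenant public \
-  --namespace default \
-  --name my-stateful-function \
-  --key my-key
-
-```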
-
-## Run Functions-worker separately
-
-This section illustrates how to run `functions-worker` as a separate process on separate machines.
-
-![assets/functions-worker-separated.png](/assets/functions-worker-separated.png)
-
-:::note
-
-In this mode, make sure `functionsWorkerEnabled` is set to `false`, so you won't start `functions-worker` with brokers by mistake. Also, while accessing the `functions-worker` to manage any of the functions, the `pulsar-admin` CLI tool or any of the clients should use the `workerHostname` and `workerPort` that you set in [Worker parameters](#worker-parameters) to generate an `--admin-url`.
-
-:::
-
-### Configure Functions-worker to run separately
-
-To run the functions-worker separately, you have to configure the following parameters.
-
-#### Worker parameters
-
-- `workerId`: A string that is unique across the cluster, used to identify a worker machine.
-- `workerHostname`: The hostname of the worker machine.
-- `workerPort`: The port that the worker server listens on. Keep it as default if you don't customize it.
-- `workerPortTls`: The TLS port that the worker server listens on. Keep it as default if you don't customize it.
-
-#### Function package parameter
-
-- `numFunctionPackageReplicas`: The number of replicas to store function packages. The default value is `1`.
-
-#### Function metadata parameter
-
-- `pulsarServiceUrl`: The Pulsar service URL for your broker cluster.
-- `pulsarWebServiceUrl`: The Pulsar web service URL for your broker cluster.
-- `pulsarFunctionsCluster`: Set the value to your Pulsar cluster name (same as the `clusterName` setting in the broker configuration).
-
-If authentication is enabled for your broker cluster, you *should* configure the authentication plugin and parameters for the functions worker to communicate with the brokers.
-
-- `clientAuthenticationPlugin`
-- `clientAuthenticationParameters`
-
-#### Security settings
-
-If you want to enable security on functions workers, you *should*:
-- [Enable TLS transport encryption](#enable-tls-transport-encryption)
-- [Enable Authentication Provider](#enable-authentication-provider)
-- [Enable Authorization Provider](#enable-authorization-provider)
-- [Enable End-to-End Encryption](#enable-end-to-end-encryption)
-
-##### Enable TLS transport encryption
-
-To enable TLS transport encryption, configure the following settings.
-
-```
-
-useTLS: true
-pulsarServiceUrl: pulsar+ssl://localhost:6651/
-pulsarWebServiceUrl: https://localhost:8443
-
-tlsEnabled: true
-tlsCertificateFilePath: /path/to/functions-worker.cert.pem
-tlsKeyFilePath: /path/to/functions-worker.key-pk8.pem
-tlsTrustCertsFilePath: /path/to/ca.cert.pem
-
-# The path to trusted certificates used by the Pulsar client to authenticate with Pulsar brokers
-brokerClientTrustCertsFilePath: /path/to/ca.cert.pem
-
-```
-
-For details on TLS encryption, refer to [Transport Encryption using TLS](security-tls-transport.md).
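-
-Once the worker is running with TLS enabled, a quick check is to call its admin endpoint over the TLS port. This is a sketch that assumes `workerPortTls: 8443` and reuses the trust certificate path from the settings above:
-
-```bash
-
-# Verify that the worker answers over TLS
-curl --cacert /path/to/ca.cert.pem https://localhost:8443/admin/v2/worker/cluster
-
-```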
-
-##### Enable Authentication Provider
-
-To enable authentication on the Functions Worker, you need to configure the following settings.
-
-:::note
-
-Substitute the *providers list* with the providers you want to enable.
-
-:::
-
-```
-
-authenticationEnabled: true
-authenticationProviders: [ provider1, provider2 ]
-
-```
-
-For the *TLS Authentication* provider, follow the example below to add the necessary settings.
-See [TLS Authentication](security-tls-authentication.md) for more details.
-
-```
-
-brokerClientAuthenticationPlugin: org.apache.pulsar.client.impl.auth.AuthenticationTls
-brokerClientAuthenticationParameters: tlsCertFile:/path/to/admin.cert.pem,tlsKeyFile:/path/to/admin.key-pk8.pem
-
-authenticationEnabled: true
-authenticationProviders: ['org.apache.pulsar.broker.authentication.AuthenticationProviderTls']
-
-```
-
-For the *SASL Authentication* provider, add `saslJaasClientAllowedIds` and `saslJaasBrokerSectionName`
-under `properties` if needed.
-
-```
-
-properties:
-  saslJaasClientAllowedIds: .*pulsar.*
-  saslJaasBrokerSectionName: Broker
-
-```
-
-For the *Token Authentication* provider, add the necessary settings under `properties` if needed.
-See [Token Authentication](security-jwt.md) for more details.
-Note: key files must be DER-encoded.
-
-```
-
-properties:
-  tokenSecretKey: file://my/secret.key
-  # If using public/private keys
-  # tokenPublicKey: file:///path/to/public.key
-
-```
-
-##### Enable Authorization Provider
-
-To enable authorization on the Functions Worker, you need to configure `authorizationEnabled`, `authorizationProvider` and `configurationStoreServers`. The authorization provider connects to `configurationStoreServers` to receive namespace policies.
-
-```yaml
-
-authorizationEnabled: true
-authorizationProvider: org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider
-configurationStoreServers: <configuration-store-servers>
-
-```
-
-You should also configure a list of superuser roles. The superuser roles are able to access any admin API. The following is a configuration example.
-
-```yaml
-
-superUserRoles:
-  - role1
-  - role2
-  - role3
-
-```
-
-##### Enable End-to-End Encryption
-
-Encryption uses a public/private key pair configured by the application. Only the consumers with a valid key can decrypt the encrypted messages.
-
-To enable end-to-end encryption on the Functions Worker, specify `--producer-config` on the command line when deploying the function. For more information, refer to [here](security-encryption.md).
-
-The relevant configuration of `CryptoConfig` is included in `ProducerConfig`. The configurable fields of `CryptoConfig` are as follows:
-
-```text
-
-public class CryptoConfig {
-    private String cryptoKeyReaderClassName;
-    private Map<String, Object> cryptoKeyReaderConfig;
-
-    private String[] encryptionKeys;
-    private ProducerCryptoFailureAction producerCryptoFailureAction;
-
-    private ConsumerCryptoFailureAction consumerCryptoFailureAction;
-}
-
-```
-
-- `producerCryptoFailureAction`: defines the action to take if the producer fails to encrypt data; one of `FAIL`, `SEND`.
-- `consumerCryptoFailureAction`: defines the action to take if the consumer fails to decrypt data; one of `FAIL`, `DISCARD`, `CONSUME`.
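-
-As a sketch, the `ProducerConfig` JSON passed via `--producer-config` can carry these crypto settings. The `cryptoConfig` field name, the key reader class, and the paths below are illustrative assumptions, not verified API details:
-
-```bash
-
-# Deploy a function whose output producer encrypts messages (illustrative values)
-bin/pulsar-admin functions create \
-  --name my-encrypting-function \
-  --inputs persistent://public/default/input-topic \
-  --classname com.company.pulsar.MyFunction \
-  --jar /path/to/function.jar \
-  --producer-config '{
-    "cryptoConfig": {
-      "cryptoKeyReaderClassName": "com.company.pulsar.MyCryptoKeyReader",
-      "cryptoKeyReaderConfig": {"publicKeyPath": "/keys/public.pem"},
-      "encryptionKeys": ["my-app-key"],
-      "producerCryptoFailureAction": "FAIL"
-    }
-  }'
-
-```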
-
-#### BookKeeper Authentication
-
-If authentication is enabled on the BookKeeper cluster, you need to configure the BookKeeper authentication settings as follows:
-
-- `bookkeeperClientAuthenticationPlugin`: the plugin name of BookKeeper client authentication.
-- `bookkeeperClientAuthenticationParametersName`: the plugin parameters name of BookKeeper client authentication.
-- `bookkeeperClientAuthenticationParameters`: the plugin parameters of BookKeeper client authentication.
-
-### Start Functions-worker
-
-Once you have finished configuring the `functions_worker.yml` configuration file, you can start a `functions-worker` in the background by using [nohup](https://en.wikipedia.org/wiki/Nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
-
-```bash
-
-bin/pulsar-daemon start functions-worker
-
-```
-
-You can also start the `functions-worker` in the foreground by using the `pulsar` CLI tool:
-
-```bash
-
-bin/pulsar functions-worker
-
-```
-
-### Configure Proxies for Functions-workers
-
-When you are running `functions-worker` in a separate cluster, the admin REST endpoints are split across two clusters. The `functions`, `function-worker`, `source` and `sink` endpoints are now served
-by the `functions-worker` cluster, while all the other remaining endpoints are served by the broker cluster.
-Hence you need to configure your `pulsar-admin` to use the right service URL.
-
-To address this inconvenience, you can start a proxy cluster for routing the admin REST requests, which gives you one central entry point for your admin service.
-
-If you already have a proxy cluster, continue reading. If you haven't set up a proxy cluster before, you can follow the [instructions](http://pulsar.apache.org/docs/en/administration-proxy/) to
-start proxies.
-
-![assets/functions-worker-separated.png](/assets/functions-worker-separated-proxy.png)
-
-To enable routing functions-related admin requests to the `functions-worker` in a proxy, you can edit the `proxy.conf` file to modify the following settings:
-
-```conf
-
-functionWorkerWebServiceURL=
-functionWorkerWebServiceURLTLS=
-
-```
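-
-For example, reusing the `func-worker` Kubernetes service from the earlier example, the values could look like this (illustrative URLs only; substitute your own functions-worker endpoints):
-
-```bash
-
-# Append example worker URLs to the proxy configuration
-cat >> conf/proxy.conf <<'EOF'
-functionWorkerWebServiceURL=http://func-worker.pulsar:8080
-functionWorkerWebServiceURLTLS=https://func-worker.pulsar:8443
-EOF
-
-```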
-
-## Compare the Run-with-Broker and Run-separately modes
-
-As described above, you can run the functions-worker with brokers, or run it separately. Running functions-workers along with brokers is more convenient. However, running functions-workers in a separate cluster provides better resource isolation for running functions in `Process` or `Thread` mode.
-
-To determine which mode fits your case, refer to the following guidelines.
-
-Use the `Run-with-Broker` mode in the following cases:
-- a) if resource isolation is not required when running functions in `Process` or `Thread` mode;
-- b) if you configure the functions-worker to run functions on Kubernetes (where the resource isolation problem is addressed by Kubernetes).
-
-Use the `Run-separately` mode in the following cases:
-- a) you don't have a Kubernetes cluster;
-- b) if you want to run functions and brokers separately.
-
-## Troubleshooting
-
-**Error message: Namespace missing local cluster name in clusters list**
-
-```
-
-Failed to get partitioned topic metadata: org.apache.pulsar.client.api.PulsarClientException$BrokerMetadataException: Namespace missing local cluster name in clusters list: local_cluster=xyz ns=public/functions clusters=[standalone]
-
-```
-
-This error message appears when either of the following occurs:
-- a) a broker is started with `functionsWorkerEnabled=true`, but `pulsarFunctionsCluster` is not set to the correct cluster in the `conf/functions_worker.yml` file;
-- b) setting up a geo-replicated Pulsar cluster with `functionsWorkerEnabled=true`; while brokers in one cluster run well, brokers in the other cluster do not work well.
-
-**Workaround**
-
-If either of these cases happens, follow the instructions below to fix the problem:
-
-1. Disable the Functions Worker by setting `functionsWorkerEnabled=false`, and restart the brokers.
-
-2. Get the current clusters list of the `public/functions` namespace.
-
-```bash
-
-bin/pulsar-admin namespaces get-clusters public/functions
-
-```
-
-3. Check if the cluster is in the clusters list. If the cluster is not in the list, add it to the list and update the clusters list.
-
-```bash
-
-bin/pulsar-admin namespaces set-clusters --clusters <existing-clusters>,<new-cluster> public/functions
-
-```
-
-4. After setting the cluster successfully, enable the functions worker by setting `functionsWorkerEnabled=true`.
-
-5. Set the correct cluster name in `pulsarFunctionsCluster` in the `conf/functions_worker.yml` file, and restart the brokers.
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/getting-started-concepts-and-architecture.md b/site2/website/versioned_docs/version-2.9.2-deprecated/getting-started-concepts-and-architecture.md
deleted file mode 100644
index fe9c3fbc553b2c..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/getting-started-concepts-and-architecture.md
+++ /dev/null
@@ -1,16 +0,0 @@
----
-id: concepts-architecture
-title: Pulsar concepts and architecture
-sidebar_label: "Concepts and architecture"
-original_id: concepts-architecture
----
-
-
-
-
-
-
-
-
-
-
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/getting-started-docker.md b/site2/website/versioned_docs/version-2.9.2-deprecated/getting-started-docker.md
deleted file mode 100644
index de5ead69e164b0..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/getting-started-docker.md
+++ /dev/null
@@ -1,211 +0,0 @@
----
-id: getting-started-docker
-title: Set up a standalone Pulsar in Docker
-sidebar_label: "Run Pulsar in Docker"
-original_id: getting-started-docker
----
-
-For local development and testing, you can run Pulsar in standalone mode on your own machine within a Docker container.
-
-If you have not installed Docker, download the [Community edition](https://www.docker.com/community-edition) and follow the instructions for your OS.
-
-## Start Pulsar in Docker
-
-* For MacOS, Linux, and Windows:
-
-  ```shell
-  
-  $ docker run -it -p 6650:6650 -p 8080:8080 --mount source=pulsardata,target=/pulsar/data --mount source=pulsarconf,target=/pulsar/conf apachepulsar/pulsar:@pulsar:version@ bin/pulsar standalone
-  
-  ```
-
-A few things to note about this command:
- * The data, metadata, and configuration are persisted on Docker volumes in order to not start "fresh" every
-time the container is restarted. For details on the volumes, you can use `docker volume inspect <source-name>`.
- * For Docker on Windows, make sure to configure it to use Linux containers.
-
-If you start Pulsar successfully, you will see `INFO`-level log messages like this:
-
-```
-
-08:18:30.970 [main] INFO  org.apache.pulsar.broker.web.WebService - HTTP Service started at http://0.0.0.0:8080
-...
-07:53:37.322 [main] INFO org.apache.pulsar.broker.PulsarService - messaging service is ready, bootstrap service port = 8080, broker url= pulsar://localhost:6650, cluster=standalone, configs=org.apache.pulsar.broker.ServiceConfiguration@98b63c1
-...
-
-```
-
-:::tip
-
-When you start a local standalone cluster, a `public/default` namespace is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics).
-
-:::
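-
-Before producing or consuming, you can confirm that the broker's admin endpoint is reachable on the mapped port:
-
-```shell
-
-# Should return the list of clusters, e.g. ["standalone"]
-$ curl http://localhost:8080/admin/v2/clusters
-
-```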
-
-## Use Pulsar in Docker
-
-Pulsar offers client libraries for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python.md)
-and [C++](client-libraries-cpp.md). If you're running a local standalone cluster, you can
-use one of these root URLs to interact with your cluster:
-
-* `pulsar://localhost:6650`
-* `http://localhost:8080`
-
-The following example guides you through getting started with Pulsar quickly by using the [Python client API](client-libraries-python.md).
-
-Install the Pulsar Python client library directly from [PyPI](https://pypi.org/project/pulsar-client/):
-
-```shell
-
-$ pip install pulsar-client
-
-```
-
-### Consume a message
-
-Create a consumer and subscribe to the topic:
-
-```python
-
-import pulsar
-
-client = pulsar.Client('pulsar://localhost:6650')
-consumer = client.subscribe('my-topic',
-                            subscription_name='my-sub')
-
-while True:
-    msg = consumer.receive()
-    print("Received message: '%s'" % msg.data())
-    consumer.acknowledge(msg)
-
-client.close()
-
-```
-
-### Produce a message
-
-Now start a producer to send some test messages:
-
-```python
-
-import pulsar
-
-client = pulsar.Client('pulsar://localhost:6650')
-producer = client.create_producer('my-topic')
-
-for i in range(10):
-    producer.send(('hello-pulsar-%d' % i).encode('utf-8'))
-
-client.close()
-
-```
-
-## Get the topic statistics
-
-In Pulsar, you can use REST, Java, or command-line tools to control every aspect of the system.
-For details on APIs, refer to [Admin API Overview](admin-api-overview.md).
-
-In the simplest example, you can use curl to probe the stats for a particular topic:
-
-```shell
-
-$ curl http://localhost:8080/admin/v2/persistent/public/default/my-topic/stats | python -m json.tool
-
-```
-
-The output is something like this:
-
-```json
-
-{
-  "msgRateIn": 0.0,
-  "msgThroughputIn": 0.0,
-  "msgRateOut": 1.8332950480217471,
-  "msgThroughputOut": 91.33142602871978,
-  "bytesInCounter": 7097,
-  "msgInCounter": 143,
-  "bytesOutCounter": 6607,
-  "msgOutCounter": 133,
-  "averageMsgSize": 0.0,
-  "msgChunkPublished": false,
-  "storageSize": 7097,
-  "backlogSize": 0,
-  "offloadedStorageSize": 0,
-  "publishers": [
-    {
-      "accessMode": "Shared",
-      "msgRateIn": 0.0,
-      "msgThroughputIn": 0.0,
-      "averageMsgSize": 0.0,
-      "chunkedMessageRate": 0.0,
-      "producerId": 0,
-      "metadata": {},
-      "address": "/127.0.0.1:35604",
-      "connectedSince": "2021-07-04T09:05:43.04788Z",
-      "clientVersion": "2.8.0",
-      "producerName": "standalone-2-5"
-    }
-  ],
-  "waitingPublishers": 0,
-  "subscriptions": {
-    "my-sub": {
-      "msgRateOut": 1.8332950480217471,
-      "msgThroughputOut": 91.33142602871978,
-      "bytesOutCounter": 6607,
-      "msgOutCounter": 133,
-      "msgRateRedeliver": 0.0,
-      "chunkedMessageRate": 0,
-      "msgBacklog": 0,
-      "backlogSize": 0,
-      "msgBacklogNoDelayed": 0,
-      "blockedSubscriptionOnUnackedMsgs": false,
-      "msgDelayed": 0,
-      "unackedMessages": 0,
-      "type": "Exclusive",
-      "activeConsumerName": "3c544f1daa",
-      "msgRateExpired": 0.0,
-      "totalMsgExpired": 0,
-      "lastExpireTimestamp": 0,
-      "lastConsumedFlowTimestamp": 1625389101290,
-      "lastConsumedTimestamp": 1625389546070,
-      "lastAckedTimestamp": 1625389546162,
-      "lastMarkDeleteAdvancedTimestamp": 1625389546163,
-      "consumers": [
-        {
-          "msgRateOut": 1.8332950480217471,
-          "msgThroughputOut": 91.33142602871978,
-          "bytesOutCounter": 6607,
-          "msgOutCounter": 133,
-          "msgRateRedeliver": 0.0,
-          "chunkedMessageRate": 0.0,
-          "consumerName": "3c544f1daa",
-          "availablePermits": 867,
-          "unackedMessages": 0,
-          "avgMessagesPerEntry": 6,
-          "blockedConsumerOnUnackedMsgs": false,
-          "lastAckedTimestamp": 1625389546162,
-          "lastConsumedTimestamp": 1625389546070,
-          "metadata": {},
-          "address": "/127.0.0.1:35472",
-          "connectedSince": "2021-07-04T08:58:21.287682Z",
"2021-07-04T08:58:21.287682Z", - "clientVersion": "2.8.0" - } - ], - "isDurable": true, - "isReplicated": false, - "allowOutOfOrderDelivery": false, - "consumersAfterMarkDeletePosition": {}, - "nonContiguousDeletedMessagesRanges": 0, - "nonContiguousDeletedMessagesRangesSerializedSize": 0, - "durable": true, - "replicated": false - } - }, - "replication": {}, - "deduplicationStatus": "Disabled", - "nonContiguousDeletedMessagesRanges": 0, - "nonContiguousDeletedMessagesRangesSerializedSize": 0 -} - -``` - diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/getting-started-helm.md b/site2/website/versioned_docs/version-2.9.2-deprecated/getting-started-helm.md deleted file mode 100644 index 5e9f7044a6d74b..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/getting-started-helm.md +++ /dev/null @@ -1,447 +0,0 @@ ---- -id: getting-started-helm -title: Get started in Kubernetes -sidebar_label: "Run Pulsar in Kubernetes" -original_id: getting-started-helm ---- - -This section guides you through every step of installing and running Apache Pulsar with Helm on Kubernetes quickly, including the following sections: - -- Install the Apache Pulsar on Kubernetes using Helm -- Start and stop Apache Pulsar -- Create topics using `pulsar-admin` -- Produce and consume messages using Pulsar clients -- Monitor Apache Pulsar status with Prometheus and Grafana - -For deploying a Pulsar cluster for production usage, read the documentation on [how to configure and install a Pulsar Helm chart](helm-deploy.md). - -## Prerequisite - -- Kubernetes server 1.14.0+ -- kubectl 1.14.0+ -- Helm 3.0+ - -:::tip - -For the following steps, step 2 and step 3 are for **developers** and step 4 and step 5 are for **administrators**. - -::: - -## Step 0: Prepare a Kubernetes cluster - -Before installing a Pulsar Helm chart, you have to create a Kubernetes cluster. You can follow [the instructions](helm-prepare.md) to prepare a Kubernetes cluster. - -We use [Minikube](https://minikube.sigs.k8s.io/docs/start/) in this quick start guide. To prepare a Kubernetes cluster, follow these steps: - -1. Create a Kubernetes cluster on Minikube. - - ```bash - - minikube start --memory=8192 --cpus=4 --kubernetes-version= - - ``` - - The `` can be any [Kubernetes version supported by your Minikube installation](https://minikube.sigs.k8s.io/docs/reference/configuration/kubernetes/), such as `v1.16.1`. - -2. Set `kubectl` to use Minikube. - - ```bash - - kubectl config use-context minikube - - ``` - -3. To use the [Kubernetes Dashboard](https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/) with the local Kubernetes cluster on Minikube, enter the command below: - - ```bash - - minikube dashboard - - ``` - - The command automatically triggers opening a webpage in your browser. - -## Step 1: Install Pulsar Helm chart - -1. Add Pulsar charts repo. - - ```bash - - helm repo add apache https://pulsar.apache.org/charts - - ``` - - ```bash - - helm repo update - - ``` - -2. Clone the Pulsar Helm chart repository. - - ```bash - - git clone https://github.com/apache/pulsar-helm-chart - cd pulsar-helm-chart - - ``` - -3. Run the script `prepare_helm_release.sh` to create secrets required for installing the Apache Pulsar Helm chart. The username `pulsar` and password `pulsar` are used for logging into the Grafana dashboard and Pulsar Manager. 
- - :::note - - When running the script, you can use `-n` to specify the Kubernetes namespace where the Pulsar Helm chart is installed, `-k` to define the Pulsar Helm release name, and `-c` to create the Kubernetes namespace. For more information about the script, run `./scripts/pulsar/prepare_helm_release.sh --help`. - - ::: - - ```bash - - ./scripts/pulsar/prepare_helm_release.sh \ - -n pulsar \ - -k pulsar-mini \ - -c - - ``` - -4. Use the Pulsar Helm chart to install a Pulsar cluster to Kubernetes. - - :::note - - You need to specify `--set initialize=true` when installing Pulsar the first time. This command installs and starts Apache Pulsar. - - ::: - - ```bash - - helm install \ - --values examples/values-minikube.yaml \ - --set initialize=true \ - --namespace pulsar \ - pulsar-mini apache/pulsar - - ``` - -5. Check the status of all pods. - - ```bash - - kubectl get pods -n pulsar - - ``` - - If all pods start up successfully, you can see that the `STATUS` is changed to `Running` or `Completed`. - - **Output** - - ```bash - - NAME READY STATUS RESTARTS AGE - pulsar-mini-bookie-0 1/1 Running 0 9m27s - pulsar-mini-bookie-init-5gphs 0/1 Completed 0 9m27s - pulsar-mini-broker-0 1/1 Running 0 9m27s - pulsar-mini-grafana-6b7bcc64c7-4tkxd 1/1 Running 0 9m27s - pulsar-mini-prometheus-5fcf5dd84c-w8mgz 1/1 Running 0 9m27s - pulsar-mini-proxy-0 1/1 Running 0 9m27s - pulsar-mini-pulsar-init-t7cqt 0/1 Completed 0 9m27s - pulsar-mini-pulsar-manager-9bcbb4d9f-htpcs 1/1 Running 0 9m27s - pulsar-mini-toolset-0 1/1 Running 0 9m27s - pulsar-mini-zookeeper-0 1/1 Running 0 9m27s - - ``` - -6. Check the status of all services in the namespace `pulsar`. - - ```bash - - kubectl get services -n pulsar - - ``` - - **Output** - - ```bash - - NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - pulsar-mini-bookie ClusterIP None 3181/TCP,8000/TCP 11m - pulsar-mini-broker ClusterIP None 8080/TCP,6650/TCP 11m - pulsar-mini-grafana LoadBalancer 10.106.141.246 3000:31905/TCP 11m - pulsar-mini-prometheus ClusterIP None 9090/TCP 11m - pulsar-mini-proxy LoadBalancer 10.97.240.109 80:32305/TCP,6650:31816/TCP 11m - pulsar-mini-pulsar-manager LoadBalancer 10.103.192.175 9527:30190/TCP 11m - pulsar-mini-toolset ClusterIP None 11m - pulsar-mini-zookeeper ClusterIP None 2888/TCP,3888/TCP,2181/TCP 11m - - ``` - -## Step 2: Use pulsar-admin to create Pulsar tenants/namespaces/topics - -`pulsar-admin` is the CLI (command-Line Interface) tool for Pulsar. In this step, you can use `pulsar-admin` to create resources, including tenants, namespaces, and topics. - -1. Enter the `toolset` container. - - ```bash - - kubectl exec -it -n pulsar pulsar-mini-toolset-0 -- /bin/bash - - ``` - -2. In the `toolset` container, create a tenant named `apache`. - - ```bash - - bin/pulsar-admin tenants create apache - - ``` - - Then you can list the tenants to see if the tenant is created successfully. - - ```bash - - bin/pulsar-admin tenants list - - ``` - - You should see a similar output as below. The tenant `apache` has been successfully created. - - ```bash - - "apache" - "public" - "pulsar" - - ``` - -3. In the `toolset` container, create a namespace named `pulsar` in the tenant `apache`. - - ```bash - - bin/pulsar-admin namespaces create apache/pulsar - - ``` - - Then you can list the namespaces of tenant `apache` to see if the namespace is created successfully. - - ```bash - - bin/pulsar-admin namespaces list apache - - ``` - - You should see a similar output as below. The namespace `apache/pulsar` has been successfully created. 
-
-   ```bash
-   
-   "apache/pulsar"
-   
-   ```
-
-4. In the `toolset` container, create a topic `test-topic` with `4` partitions in the namespace `apache/pulsar`.
-
-   ```bash
-   
-   bin/pulsar-admin topics create-partitioned-topic apache/pulsar/test-topic -p 4
-   
-   ```
-
-5. In the `toolset` container, list all the partitioned topics in the namespace `apache/pulsar`.
-
-   ```bash
-   
-   bin/pulsar-admin topics list-partitioned-topics apache/pulsar
-   
-   ```
-
-   Then you can see all the partitioned topics in the namespace `apache/pulsar`.
-
-   ```bash
-   
-   "persistent://apache/pulsar/test-topic"
-   
-   ```
-
-## Step 3: Use Pulsar client to produce and consume messages
-
-You can use the Pulsar client to create producers and consumers to produce and consume messages.
-
-By default, the Pulsar Helm chart exposes the Pulsar cluster through a Kubernetes `LoadBalancer`. In Minikube, you can use the following command to check the proxy service.
-
-```bash
-
-kubectl get services -n pulsar | grep pulsar-mini-proxy
-
-```
-
-You will see a similar output as below.
-
-```bash
-
-pulsar-mini-proxy   LoadBalancer   10.97.240.109   <pending>   80:32305/TCP,6650:31816/TCP   28m
-
-```
-
-This output shows which node ports the Pulsar cluster's binary port and HTTP port are mapped to. The port after `80:` is the HTTP port while the port after `6650:` is the binary port.
-
-Then you can find the IP address and exposed ports of your Minikube server by running the following command.
-
-```bash
-
-minikube service pulsar-mini-proxy -n pulsar
-
-```
-
-**Output**
-
-```bash
-
-|-----------|-------------------|-------------|-------------------------|
-| NAMESPACE |       NAME        | TARGET PORT |           URL           |
-|-----------|-------------------|-------------|-------------------------|
-| pulsar    | pulsar-mini-proxy | http/80     | http://172.17.0.4:32305 |
-|           |                   | pulsar/6650 | http://172.17.0.4:31816 |
-|-----------|-------------------|-------------|-------------------------|
-🏃  Starting tunnel for service pulsar-mini-proxy.
-|-----------|-------------------|-------------|------------------------|
-| NAMESPACE |       NAME        | TARGET PORT |          URL           |
-|-----------|-------------------|-------------|------------------------|
-| pulsar    | pulsar-mini-proxy |             | http://127.0.0.1:61853 |
-|           |                   |             | http://127.0.0.1:61854 |
-|-----------|-------------------|-------------|------------------------|
-
-```
-
-At this point, you can get the service URLs for your Pulsar client to connect to. Here are URL examples:
-
-```
-
-webServiceUrl=http://127.0.0.1:61853/
-brokerServiceUrl=pulsar://127.0.0.1:61854/
-
-```
-
-Then you can proceed with the following steps:
-
-1. Download the Apache Pulsar tarball from the [downloads page](https://pulsar.apache.org/download/).
-
-2. Decompress the tarball based on your download file.
-
-   ```bash
-   
-   tar -xf <file-name>.tar.gz
-   
-   ```
-
-3. Export `PULSAR_HOME`.
-
-   (1) Enter the directory of the decompressed download file.
-
-   (2) Export `PULSAR_HOME` as an environment variable.
-
-   ```bash
-   
-   export PULSAR_HOME=$(pwd)
-   
-   ```
-
-4. Configure the Pulsar client.
-
-   In the `${PULSAR_HOME}/conf/client.conf` file, replace `webServiceUrl` and `brokerServiceUrl` with the service URLs you got from the above steps.
-
-5. Create a subscription to consume messages from `apache/pulsar/test-topic`.
-
-   ```bash
-   
-   bin/pulsar-client consume -s sub apache/pulsar/test-topic -n 0
-   
-   ```
-
-6. Open a new terminal. In the new terminal, create a producer and send 10 messages to the `test-topic` topic.
- - ```bash - - bin/pulsar-client produce apache/pulsar/test-topic -m "---------hello apache pulsar-------" -n 10 - - ``` - -7. Verify the results. - - - From the producer side - - **Output** - - The messages have been produced successfully. - - ```bash - - 18:15:15.489 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 10 messages successfully produced - - ``` - - - From the consumer side - - **Output** - - At the same time, you can receive the messages as below. - - ```bash - - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - - ``` - -## Step 4: Use Pulsar Manager to manage the cluster - -[Pulsar Manager](administration-pulsar-manager.md) is a web-based GUI management tool for managing and monitoring Pulsar. - -1. By default, the `Pulsar Manager` is exposed as a separate `LoadBalancer`. You can open the Pulsar Manager UI using the following command: - - ```bash - - minikube service -n pulsar pulsar-mini-pulsar-manager - - ``` - -2. The Pulsar Manager UI will be open in your browser. You can use the username `pulsar` and password `pulsar` to log into Pulsar Manager. - -3. In Pulsar Manager UI, you can create an environment. - - - Click `New Environment` button in the top-left corner. - - Type `pulsar-mini` for the field `Environment Name` in the popup window. - - Type `http://pulsar-mini-broker:8080` for the field `Service URL` in the popup window. - - Click `Confirm` button in the popup window. - -4. After successfully creating an environment, you are redirected to the `tenants` page of that environment. Then you can create `tenants`, `namespaces` and `topics` using the Pulsar Manager. - -## Step 5: Use Prometheus and Grafana to monitor cluster - -Grafana is an open-source visualization tool, which can be used for visualizing time series data into dashboards. - -1. By default, the Grafana is exposed as a separate `LoadBalancer`. You can open the Grafana UI using the following command: - - ```bash - - minikube service pulsar-mini-grafana -n pulsar - - ``` - -2. The Grafana UI is open in your browser. You can use the username `pulsar` and password `pulsar` to log into the Grafana Dashboard. - -3. You can view dashboards for different components of a Pulsar cluster. diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/getting-started-pulsar.md b/site2/website/versioned_docs/version-2.9.2-deprecated/getting-started-pulsar.md deleted file mode 100644 index 752590f57b5585..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/getting-started-pulsar.md +++ /dev/null @@ -1,72 +0,0 @@ ---- -id: pulsar-2.0 -title: Pulsar 2.0 -sidebar_label: "Pulsar 2.0" -original_id: pulsar-2.0 ---- - -Pulsar 2.0 is a major new release for Pulsar that brings some bold changes to the platform, including [simplified topic names](#topic-names), the addition of the [Pulsar Functions](functions-overview.md) feature, some terminology changes, and more. 
-
-## New features in Pulsar 2.0
-
-Feature | Description
-:-------|:-----------
-[Pulsar Functions](functions-overview.md) | A lightweight compute option for Pulsar
-
-## Major changes
-
-There are a few major changes that you should be aware of, as they may significantly impact your day-to-day usage.
-
-### Properties versus tenants
-
-Previously, Pulsar had a concept of properties. A property is essentially the exact same thing as a tenant, so the "property" terminology has been removed in version 2.0. The [`pulsar-admin properties`](reference-pulsar-admin.md#pulsar-admin) command-line interface, for example, has been replaced with the [`pulsar-admin tenants`](reference-pulsar-admin.md#pulsar-admin-tenants) interface. In some cases the properties terminology is still used but is now considered deprecated and will be removed entirely in a future release.
-
-### Topic names
-
-Prior to version 2.0, *all* Pulsar topics had the following form:
-
-```http
-
-{persistent|non-persistent}://property/cluster/namespace/topic
-
-```
-
-Several important changes have been made in Pulsar 2.0:
-
-* There is no longer a [cluster component](#no-cluster-component)
-* Properties have been [renamed to tenants](#properties-versus-tenants)
-* You can use a [flexible](#flexible-topic-naming) naming system to shorten many topic names
-* `/` is not allowed in topic names
-
-#### No cluster component
-
-The cluster component has been removed from topic names. Thus, all topic names now have the following form:
-
-```http
-
-{persistent|non-persistent}://tenant/namespace/topic
-
-```
-
-> Existing topics that use the legacy name format will continue to work without any change, and there are no plans to change that.
-
-
-#### Flexible topic naming
-
-All topic names in Pulsar 2.0 internally have the form shown [above](#no-cluster-component) but you can now use shorthand names in many cases (for the sake of simplicity). The flexible naming system stems from the fact that there is now a default topic type, tenant, and namespace:
-
-Topic aspect | Default
-:------------|:-------
-topic type | `persistent`
-tenant | `public`
-namespace | `default`
-
-The table below shows some example topic name translations that use implicit defaults:
-
-Input topic name | Translated topic name
-:----------------|:---------------------
-`my-topic` | `persistent://public/default/my-topic`
-`my-tenant/my-namespace/my-topic` | `persistent://my-tenant/my-namespace/my-topic`
-
-> For [non-persistent topics](concepts-messaging.md#non-persistent-topics) you'll need to continue to specify the entire topic name, as the default-based rules for persistent topic names don't apply. Thus you cannot use a shorthand name like `non-persistent://my-topic` and would need to use `non-persistent://public/default/my-topic` instead.
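-
-For example, on a standalone cluster, the following two commands publish to the same topic, since the shorthand name expands with the defaults above:
-
-```shell
-
-# Both commands target persistent://public/default/my-topic
-$ bin/pulsar-client produce my-topic -m "hello"
-$ bin/pulsar-client produce persistent://public/default/my-topic -m "hello"
-
-```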
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/getting-started-standalone.md b/site2/website/versioned_docs/version-2.9.2-deprecated/getting-started-standalone.md
deleted file mode 100644
index 573a33771d64ea..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/getting-started-standalone.md
+++ /dev/null
@@ -1,268 +0,0 @@
----
-id: getting-started-standalone
-title: Set up a standalone Pulsar locally
-sidebar_label: "Run Pulsar locally"
-original_id: getting-started-standalone
----
-
-For local development and testing, you can run Pulsar in standalone mode on your machine. The standalone mode includes a Pulsar broker, the necessary ZooKeeper and BookKeeper components running inside of a single Java Virtual Machine (JVM) process.
-
-> **Pulsar in production?**
-> If you're looking to run a full production Pulsar installation, see the [Deploying a Pulsar instance](deploy-bare-metal.md) guide.
-
-## Install Pulsar standalone
-
-This tutorial guides you through every step of installing Pulsar locally.
-
-### System requirements
-
-Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install 64-bit JRE/JDK 8 or later versions.
-
-:::tip
-
-By default, Pulsar allocates 2G JVM heap memory to start. It can be changed in the `conf/pulsar_env.sh` file under `PULSAR_MEM`. These are extra options passed to the JVM.
-
-:::
-
-:::note
-
-Broker is only supported on 64-bit JVM.
-
-:::
-
-### Install Pulsar using binary release
-
-To get started with Pulsar, download a binary tarball release in one of the following ways:
-
-* download from the Apache mirror (Pulsar @pulsar:version@ binary release)
-
-* download from the Pulsar [downloads page](pulsar:download_page_url)
-
-* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
-
-* use [wget](https://www.gnu.org/software/wget):
-
-  ```shell
-  
-  $ wget pulsar:binary_release_url
-  
-  ```
-
-After you download the tarball, untar it and use the `cd` command to navigate to the resulting directory:
-
-```bash
-
-$ tar xvfz apache-pulsar-@pulsar:version@-bin.tar.gz
-$ cd apache-pulsar-@pulsar:version@
-
-```
-
-#### What your package contains
-
-The Pulsar binary package initially contains the following directories:
-
-Directory | Contains
-:---------|:--------
-`bin` | Pulsar's command-line tools, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/).
-`conf` | Configuration files for Pulsar, including [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more.
-`examples` | A Java JAR file containing a [Pulsar Functions](functions-overview.md) example.
-`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files used by Pulsar.
-`licenses` | License files, in the `.txt` form, for various components of the Pulsar [codebase](https://github.com/apache/pulsar).
-
-These directories are created once you begin running Pulsar.
-
-Directory | Contains
-:---------|:--------
-`data` | The data storage directory used by ZooKeeper and BookKeeper.
-`instances` | Artifacts created for [Pulsar Functions](functions-overview.md).
-`logs` | Logs created by the installation.
-
-:::tip
-
-If you want to use builtin connectors and tiered storage offloaders, you can install them according to the following instructions:
-* [Install builtin connectors (optional)](#install-builtin-connectors-optional)
-* [Install tiered storage offloaders (optional)](#install-tiered-storage-offloaders-optional)
-Otherwise, skip this step and perform the next step [Start Pulsar standalone](#start-pulsar-standalone). Pulsar can be successfully installed without installing builtin connectors and tiered storage offloaders.
-
-:::
-
-### Install builtin connectors (optional)
-
-Since the `2.1.0-incubating` release, Pulsar releases a separate binary distribution, containing all the `builtin` connectors.
-To enable those `builtin` connectors, you can download the connectors tarball release in one of the following ways: - -* download from the Apache mirror Pulsar IO Connectors @pulsar:version@ release - -* download from the Pulsar [downloads page](pulsar:download_page_url) - -* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) - -* use [wget](https://www.gnu.org/software/wget): - - ```shell - - $ wget pulsar:connector_release_url/{connector}-@pulsar:version@.nar - - ``` - -After you download the nar file, copy the file to the `connectors` directory in the pulsar directory. -For example, if you download the `pulsar-io-aerospike-@pulsar:version@.nar` connector file, enter the following commands: - -```bash - -$ mkdir connectors -$ mv pulsar-io-aerospike-@pulsar:version@.nar connectors - -$ ls connectors -pulsar-io-aerospike-@pulsar:version@.nar -... - -``` - -:::note - -* If you are running Pulsar in a bare metal cluster, make sure `connectors` tarball is unzipped in every pulsar directory of the broker (or in every pulsar directory of function-worker if you are running a separate worker cluster for Pulsar Functions). -* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DC/OS](https://dcos.io/)), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled [all builtin connectors](io-overview.md#working-with-connectors). - -::: - -### Install tiered storage offloaders (optional) - -:::tip - -- Since `2.2.0` release, Pulsar releases a separate binary distribution, containing the tiered storage offloaders. -- To enable tiered storage feature, follow the instructions below; otherwise skip this section. - -::: - -To get started with [tiered storage offloaders](concepts-tiered-storage.md), you need to download the offloaders tarball release on every broker node in one of the following ways: - -* download from the Apache mirror Pulsar Tiered Storage Offloaders @pulsar:version@ release - -* download from the Pulsar [downloads page](pulsar:download_page_url) - -* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) - -* use [wget](https://www.gnu.org/software/wget): - - ```shell - - $ wget pulsar:offloader_release_url - - ``` - -After you download the tarball, untar the offloaders package and copy the offloaders as `offloaders` -in the pulsar directory: - -```bash - -$ tar xvfz apache-pulsar-offloaders-@pulsar:version@-bin.tar.gz - -// you will find a directory named `apache-pulsar-offloaders-@pulsar:version@` in the pulsar directory -// then copy the offloaders - -$ mv apache-pulsar-offloaders-@pulsar:version@/offloaders offloaders - -$ ls offloaders -tiered-storage-jcloud-@pulsar:version@.nar - -``` - -For more information on how to configure tiered storage, see [Tiered storage cookbook](cookbooks-tiered-storage.md). - -:::note - -* If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's pulsar directory. -* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DC/OS](https://dcos.io/)), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. 
-
-:::
-
-## Start Pulsar standalone
-
-Once you have an up-to-date local copy of the release, you can start a local cluster using the [`pulsar`](reference-cli-tools.md#pulsar) command, which is stored in the `bin` directory, and specifying that you want to start Pulsar in standalone mode.
-
-```bash
-
-$ bin/pulsar standalone
-
-```
-
-If you have started Pulsar successfully, you will see `INFO`-level log messages like this:
-
-```bash
-
-21:59:29.327 [DLM-/stream/storage-OrderedScheduler-3-0] INFO  org.apache.bookkeeper.stream.storage.impl.sc.StorageContainerImpl - Successfully started storage container (0).
-21:59:34.576 [main] INFO  org.apache.pulsar.broker.authentication.AuthenticationService - Authentication is disabled
-21:59:34.576 [main] INFO  org.apache.pulsar.websocket.WebSocketService - Pulsar WebSocket Service started
-
-```
-
-:::tip
-
-* The service is running on your terminal, which is under your direct control. If you need to run other commands, open a new terminal window.
-
-:::
-
-You can also run the service as a background process using the `pulsar-daemon start standalone` command. For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon).
-
-:::note
-
-* By default, there is no encryption, authentication, or authorization configured. Apache Pulsar can be accessed from a remote server without any authorization. Please check the [Security Overview](security-overview.md) document to secure your deployment.
-
-* When you start a local standalone cluster, a `public/default` [namespace](concepts-messaging.md#namespaces) is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics).
-
-:::
-
-## Use Pulsar standalone
-
-Pulsar provides a CLI tool called [`pulsar-client`](reference-cli-tools.md#pulsar-client). The pulsar-client tool enables you to consume and produce messages to a Pulsar topic in a running cluster.
-
-### Consume a message
-
-The following command consumes a message with the subscription name `first-subscription` from the `my-topic` topic:
-
-```bash
-
-$ bin/pulsar-client consume my-topic -s "first-subscription"
-
-```
-
-If the message has been successfully consumed, you will see a confirmation like the following in the `pulsar-client` logs:
-
-```
-
-22:17:16.781 [main] INFO  org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully consumed
-
-```
-
-:::tip
-
-As you may have noticed, we did not explicitly create the `my-topic` topic that we consume the message from. When you consume a message from a topic that does not yet exist, Pulsar creates that topic for you automatically. Producing a message to a topic that does not exist will automatically create that topic for you as well.
-
-:::
-
-### Produce a message
-
-The following command produces a message saying `hello-pulsar` to the `my-topic` topic:
-
-```bash
-
-$ bin/pulsar-client produce my-topic --messages "hello-pulsar"
-
-```
-
-If the message has been successfully published to the topic, you will see a confirmation like the following in the `pulsar-client` logs:
-
-```
-
-22:21:08.693 [main] INFO  org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully produced
-
-```
-
-## Stop Pulsar standalone
-
-Press `Ctrl+C` to stop a local standalone Pulsar.
- -:::tip - -If the service runs as a background process using the `pulsar-daemon start standalone` command, then use the `pulsar-daemon stop standalone` command to stop the service. -For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon). - -::: - diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/helm-deploy.md b/site2/website/versioned_docs/version-2.9.2-deprecated/helm-deploy.md deleted file mode 100644 index 0e7815e4f4d90b..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/helm-deploy.md +++ /dev/null @@ -1,434 +0,0 @@ ---- -id: helm-deploy -title: Deploy Pulsar cluster using Helm -sidebar_label: "Deployment" -original_id: helm-deploy ---- - -Before running `helm install`, you need to decide how to run Pulsar. -Options can be specified using Helm's `--set option.name=value` command line option. - -## Select configuration options - -In each section, collect the options that are combined to use with the `helm install` command. - -### Kubernetes namespace - -By default, the Pulsar Helm chart is installed to a namespace called `pulsar`. - -```yaml - -namespace: pulsar - -``` - -To install the Pulsar Helm chart into a different Kubernetes namespace, you can include this option in the `helm install` command. - -```bash - ---set namespace= - -``` - -By default, the Pulsar Helm chart doesn't create the namespace. - -```yaml - -namespaceCreate: false - -``` - -To use the Pulsar Helm chart to create the Kubernetes namespace automatically, you can include this option in the `helm install` command. - -```bash - ---set namespaceCreate=true - -``` - -### Persistence - -By default, the Pulsar Helm chart creates Volume Claims with the expectation that a dynamic provisioner creates the underlying Persistent Volumes. - -```yaml - -volumes: - persistence: true - # configure the components to use local persistent volume - # the local provisioner should be installed prior to enable local persistent volume - local_storage: false - -``` - -To use local persistent volumes as the persistent storage for Helm release, you can install the [local storage provisioner](#install-local-storage-provisioner) and include the following option in the `helm install` command. - -```bash - ---set volumes.local_storage=true - -``` - -:::note - -Before installing the production instance of Pulsar, ensure to plan the storage settings to avoid extra storage migration work. Because after initial installation, you must edit Kubernetes objects manually if you want to change storage settings. - -::: - -The Pulsar Helm chart is designed for production use. To use the Pulsar Helm chart in a development environment (such as Minikube), you can disable persistence by including this option in your `helm install` command. - -```bash - ---set volumes.persistence=false - -``` - -### Affinity - -By default, `anti-affinity` is enabled to ensure pods of the same component can run on different nodes. - -```yaml - -affinity: - anti_affinity: true - -``` - -To use the Pulsar Helm chart in a development environment (such as Minikube), you can disable `anti-affinity` by including this option in your `helm install` command. - -```bash - ---set affinity.anti_affinity=false - -``` - -### Components - -The Pulsar Helm chart is designed for production usage. It deploys a production-ready Pulsar cluster, including Pulsar core components and monitoring components. - -You can customize the components to be deployed by turning on/off individual components. 
- -```yaml - -## Components -## -## Control what components of Apache Pulsar to deploy for the cluster -components: - # zookeeper - zookeeper: true - # bookkeeper - bookkeeper: true - # bookkeeper - autorecovery - autorecovery: true - # broker - broker: true - # functions - functions: true - # proxy - proxy: true - # toolset - toolset: true - # pulsar manager - pulsar_manager: true - -## Monitoring Components -## -## Control what components of the monitoring stack to deploy for the cluster -monitoring: - # monitoring - prometheus - prometheus: true - # monitoring - grafana - grafana: true - -``` - -### Docker images - -The Pulsar Helm chart is designed to enable controlled upgrades. So it can configure independent image versions for components. You can customize the images by setting individual component. - -```yaml - -## Images -## -## Control what images to use for each component -images: - zookeeper: - repository: apachepulsar/pulsar-all - tag: 2.5.0 - pullPolicy: IfNotPresent - bookie: - repository: apachepulsar/pulsar-all - tag: 2.5.0 - pullPolicy: IfNotPresent - autorecovery: - repository: apachepulsar/pulsar-all - tag: 2.5.0 - pullPolicy: IfNotPresent - broker: - repository: apachepulsar/pulsar-all - tag: 2.5.0 - pullPolicy: IfNotPresent - proxy: - repository: apachepulsar/pulsar-all - tag: 2.5.0 - pullPolicy: IfNotPresent - functions: - repository: apachepulsar/pulsar-all - tag: 2.5.0 - prometheus: - repository: prom/prometheus - tag: v1.6.3 - pullPolicy: IfNotPresent - grafana: - repository: streamnative/apache-pulsar-grafana-dashboard-k8s - tag: 0.0.4 - pullPolicy: IfNotPresent - pulsar_manager: - repository: apachepulsar/pulsar-manager - tag: v0.1.0 - pullPolicy: IfNotPresent - hasCommand: false - -``` - -### TLS - -The Pulsar Helm chart can be configured to enable TLS (Transport Layer Security) to protect all the traffic between components. Before enabling TLS, you have to provision TLS certificates for the required components. - -#### Provision TLS certificates using cert-manager - -To use the `cert-manager` to provision the TLS certificates, you have to install the [cert-manager](#install-cert-manager) before installing the Pulsar Helm chart. After successfully installing the cert-manager, you can set `certs.internal_issuer.enabled` to `true`. Therefore, the Pulsar Helm chart can use the `cert-manager` to generate `selfsigning` TLS certificates for the configured components. - -```yaml - -certs: - internal_issuer: - enabled: false - component: internal-cert-issuer - type: selfsigning - -``` - -You can also customize the generated TLS certificates by configuring the fields as the following. - -```yaml - -tls: - # common settings for generating certs - common: - # 90d - duration: 2160h - # 15d - renewBefore: 360h - organization: - - pulsar - keySize: 4096 - keyAlgorithm: rsa - keyEncoding: pkcs8 - -``` - -#### Enable TLS - -After installing the `cert-manager`, you can set `tls.enabled` to `true` to enable TLS encryption for the entire cluster. - -```yaml - -tls: - enabled: false - -``` - -You can also configure whether to enable TLS encryption for individual component. 
- -```yaml - -tls: - # settings for generating certs for proxy - proxy: - enabled: false - cert_name: tls-proxy - # settings for generating certs for broker - broker: - enabled: false - cert_name: tls-broker - # settings for generating certs for bookies - bookie: - enabled: false - cert_name: tls-bookie - # settings for generating certs for zookeeper - zookeeper: - enabled: false - cert_name: tls-zookeeper - # settings for generating certs for recovery - autorecovery: - cert_name: tls-recovery - # settings for generating certs for toolset - toolset: - cert_name: tls-toolset - -``` - -### Authentication - -By default, authentication is disabled. You can set `auth.authentication.enabled` to `true` to enable authentication. -Currently, the Pulsar Helm chart only supports JWT authentication provider. You can set `auth.authentication.provider` to `jwt` to use the JWT authentication provider. - -```yaml - -# Enable or disable broker authentication and authorization. -auth: - authentication: - enabled: false - provider: "jwt" - jwt: - # Enable JWT authentication - # If the token is generated by a secret key, set the usingSecretKey as true. - # If the token is generated by a private key, set the usingSecretKey as false. - usingSecretKey: false - superUsers: - # broker to broker communication - broker: "broker-admin" - # proxy to broker communication - proxy: "proxy-admin" - # pulsar-admin client to broker/proxy communication - client: "admin" - -``` - -To enable authentication, you can run [prepare helm release](#prepare-the-helm-release) to generate token secret keys and tokens for three super users specified in the `auth.superUsers` field. The generated token keys and super user tokens are uploaded and stored as Kubernetes secrets prefixed with `-token-`. You can use the following command to find those secrets. - -```bash - -kubectl get secrets -n - -``` - -### Authorization - -By default, authorization is disabled. Authorization can be enabled only when authentication is enabled. - -```yaml - -auth: - authorization: - enabled: false - -``` - -To enable authorization, you can include this option in the `helm install` command. - -```bash - ---set auth.authorization.enabled=true - -``` - -### CPU and RAM resource requirements - -By default, the resource requests and the number of replicas for the Pulsar components in the Pulsar Helm chart are adequate for a small production deployment. If you deploy a non-production instance, you can reduce the defaults to fit into a smaller cluster. - -Once you have all of your configuration options collected, you can install dependent charts before installing the Pulsar Helm chart. - -## Install dependent charts - -### Install local storage provisioner - -To use local persistent volumes as the persistent storage, you need to install a storage provisioner for [local persistent volumes](https://kubernetes.io/blog/2019/04/04/kubernetes-1.14-local-persistent-volumes-ga/). - -One of the easiest way to get started is to use the local storage provisioner provided along with the Pulsar Helm chart. - -``` - -helm repo add streamnative https://charts.streamnative.io -helm repo update -helm install pulsar-storage-provisioner streamnative/local-storage-provisioner - -``` - -### Install cert-manager - -The Pulsar Helm chart uses the [cert-manager](https://github.com/jetstack/cert-manager) to provision and manage TLS certificates automatically. To enable TLS encryption for brokers or proxies, you need to install the cert-manager in advance. 
-
-For details about how to install the cert-manager, follow the [official instructions](https://cert-manager.io/docs/installation/kubernetes/#installing-with-helm).
-
-Alternatively, we provide a bash script [install-cert-manager.sh](https://github.com/apache/pulsar-helm-chart/blob/master/scripts/cert-manager/install-cert-manager.sh) to install a cert-manager release to the namespace `cert-manager`.
-
-```bash
-
-git clone https://github.com/apache/pulsar-helm-chart
-cd pulsar-helm-chart
-./scripts/cert-manager/install-cert-manager.sh
-
-```
-
-## Prepare Helm release
-
-Once you have installed all the dependent charts and collected all of your configuration options, you can run [prepare_helm_release.sh](https://github.com/apache/pulsar-helm-chart/blob/master/scripts/pulsar/prepare_helm_release.sh) to prepare the Helm release.
-
-```bash
-
-git clone https://github.com/apache/pulsar-helm-chart
-cd pulsar-helm-chart
-./scripts/pulsar/prepare_helm_release.sh -n <k8s-namespace> -k <helm-release-name>
-
-```
-
-The `prepare_helm_release` script creates the following resources:
-
-- A Kubernetes namespace for installing the Pulsar release
-- JWT secret keys and tokens for three super users: `broker-admin`, `proxy-admin`, and `admin`. By default, it generates an asymmetric public/private key pair. You can choose to generate a symmetric secret key by specifying `--symmetric`.
-  - The `proxy-admin` role is used for proxies to communicate to brokers.
-  - The `broker-admin` role is used for inter-broker communications.
-  - The `admin` role is used by the admin tools.
-
-## Deploy Pulsar cluster using Helm
-
-Once you have finished the following three things, you can install a Helm release.
-
-- Collect all of your configuration options.
-- Install dependent charts.
-- Prepare the Helm release.
-
-In this example, the Helm release is named `pulsar`.
-
-```bash
-
-helm repo add apache https://pulsar.apache.org/charts
-helm repo update
-helm install pulsar apache/pulsar \
-    --timeout 10m \
-    --set initialize=true \
-    --set [your configuration options]
-
-```
-
-:::note
-
-For the first deployment, add the `--set initialize=true` option to initialize bookie and Pulsar cluster metadata.
-
-:::
-
-You can also use the `--version <chart-version>` option if you want to install a specific version of the Pulsar Helm chart.
-
-## Monitor deployment
-
-A list of installed resources is output once the Pulsar cluster is deployed. This may take 5-10 minutes.
-
-The status of the deployment can be checked by running the `helm status pulsar` command, which can also be done while the deployment is taking place if you run the command in another terminal.
-
-## Access Pulsar cluster
-
-The default values will create a `ClusterIP` for the following resources, which you can use to interact with the cluster.
-
-- Proxy: You can use the IP address to produce and consume messages to the installed Pulsar cluster.
-- Pulsar Manager: You can access the Pulsar Manager UI at `http://<pulsar-manager-IP>:9527`.
-- Grafana Dashboard: You can access the Grafana dashboard at `http://<grafana-dashboard-IP>:3000`.
-
-To find the IP addresses of those components, run the following command:
-
-```bash
-
-kubectl get service -n <k8s-namespace>
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/helm-install.md b/site2/website/versioned_docs/version-2.9.2-deprecated/helm-install.md
deleted file mode 100644
index 9f81f52e0dab18..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/helm-install.md
+++ /dev/null
@@ -1,38 +0,0 @@
----
-id: helm-install
-title: Install Apache Pulsar using Helm
-sidebar_label: "Install"
-original_id: helm-install
----
-
-Install Apache Pulsar on Kubernetes with the official Pulsar Helm chart.
-
-## Requirements
-
-To deploy Apache Pulsar on Kubernetes, the following tools are required.
-
-- kubectl 1.14 or higher, compatible with your cluster ([+/- 1 minor release from your cluster](https://kubernetes.io/docs/tasks/tools/install-kubectl/#before-you-begin))
-- Helm v3 (3.0.2 or higher)
-- A Kubernetes cluster, version 1.14 or higher
-
-## Environment setup
-
-Before deploying Pulsar, you need to prepare your environment.
-
-### Tools
-
-Install [`helm`](helm-tools.md) and [`kubectl`](helm-tools.md) on your computer.
-
-## Cloud cluster preparation
-
-To create and connect to the Kubernetes cluster, follow the instructions:
-
-- [Google Kubernetes Engine](helm-prepare.md#google-kubernetes-engine)
-
-## Pulsar deployment
-
-Once the environment is set up and configuration is generated, you can now proceed to the [deployment of Pulsar](helm-deploy.md).
-
-## Pulsar upgrade
-
-To upgrade an existing Kubernetes installation, follow the [upgrade documentation](helm-upgrade.md).
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/helm-overview.md b/site2/website/versioned_docs/version-2.9.2-deprecated/helm-overview.md
deleted file mode 100644
index 125f595cbe68a3..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/helm-overview.md
+++ /dev/null
@@ -1,103 +0,0 @@
----
-id: helm-overview
-title: Apache Pulsar Helm Chart
-sidebar_label: "Overview"
-original_id: helm-overview
----
-
-The [Helm chart](https://github.com/apache/pulsar-helm-chart) lets you install Apache Pulsar in a cloud-native environment.
-
-## Introduction
-
-The Apache Pulsar Helm chart provides one of the most convenient ways to operate Pulsar on Kubernetes. With all the required components, the Helm chart is scalable and thus suitable for large-scale deployments.
-
-The Apache Pulsar Helm chart contains all components to support the features and functions that Pulsar delivers. You can install and configure these components separately.
-
-- Pulsar core components:
-  - ZooKeeper
-  - Bookies
-  - Brokers
-  - Function workers
-  - Proxies
-- Control center:
-  - Pulsar Manager
-  - Prometheus
-  - Grafana
-
-Moreover, the Helm chart supports:
-
-- Security
-  - Automatically provisioned TLS certificates, using [Jetstack](https://www.jetstack.io/)'s [cert-manager](https://cert-manager.io/docs/)
-    - self-signed
-    - [Let's Encrypt](https://letsencrypt.org/)
-  - TLS Encryption
-    - Proxy
-    - Broker
-    - Toolset
-    - Bookie
-    - ZooKeeper
-  - Authentication
-    - JWT
-  - Authorization
-- Storage
-  - Non-persistent storage
-  - Persistent volume
-  - Local persistent volumes
-- Functions
-  - Kubernetes Runtime
-  - Process Runtime
-  - Thread Runtime
-- Operations
-  - Independent image versions for all components, enabling controlled upgrades
-
-## Quick start
-
-To get the Apache Pulsar Helm chart running as fast as possible in a **non-production** use case, we provide a [quick start guide](getting-started-helm.md) for Proof of Concept (PoC) deployments.
-
-This guide walks you through deploying the Apache Pulsar Helm chart with default values and features, but it is *not* suitable for deployments in production-ready environments. To deploy the charts in production under sustained load, you can follow the complete [Installation Guide](helm-install.md).
-
-## Troubleshooting
-
-Although we have done our best to make these charts as seamless as possible, issues outside our control do occur occasionally. We have been collecting tips and tricks for troubleshooting common issues. Please check them first before raising an [issue](https://github.com/apache/pulsar/issues/new/choose), and feel free to add your solutions by creating a [Pull Request](https://github.com/apache/pulsar/compare).
-
-## Installation
-
-The Apache Pulsar Helm chart contains all required dependencies.
-
-If you deploy a PoC for testing, we strongly suggest you follow this [Quick Start Guide](getting-started-helm.md) for your first iteration.
-
-1. [Preparation](helm-prepare.md)
-2. [Deployment](helm-deploy.md)
-
-## Upgrading
-
-Once the Apache Pulsar Helm chart is installed, you can use the `helm upgrade` command to configure and update it.
-
-```bash
-
-helm repo add apache https://pulsar.apache.org/charts
-helm repo update
-helm get values <release-name> > pulsar.yaml
-helm upgrade <release-name> apache/pulsar -f pulsar.yaml
-
-```
-
-For more detailed information, see [Upgrading](helm-upgrade.md).
-
-## Uninstallation
-
-To uninstall the Apache Pulsar Helm chart, run the following command:
-
-```bash
-
-helm delete <release-name>
-
-```
-
-For the purposes of continuity, some Kubernetes objects in these charts cannot be removed by the `helm delete` command. It is recommended to *consciously* remove these items, as they affect re-deployment.
-
-* PVCs for stateful data: remove these items.
-  - ZooKeeper: This is your metadata.
-  - BookKeeper: This is your data.
-  - Prometheus: This is your metrics data, which can be safely removed.
-* Secrets: if the secrets are generated by the [prepare release script](https://github.com/apache/pulsar-helm-chart/blob/master/scripts/pulsar/prepare_helm_release.sh), they contain secret keys and tokens. You can use the [cleanup release script](https://github.com/apache/pulsar-helm-chart/blob/master/scripts/pulsar/cleanup_helm_release.sh) to remove these secrets and tokens as needed.
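-
-As an illustration, a manual cleanup after `helm delete <release-name>` could look like the following sketch. It assumes the release and namespace are both named `pulsar` and that the chart labels its PVCs with the release name; verify the actual labels and claim names in your deployment before deleting anything.
-
-```bash
-
-# List the persistent volume claims left behind by the release.
-kubectl get pvc -n pulsar
-
-# Delete the stateful data volumes once you are sure they are no longer needed.
-kubectl delete pvc -n pulsar -l release=pulsar
-
-```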
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/helm-prepare.md b/site2/website/versioned_docs/version-2.9.2-deprecated/helm-prepare.md deleted file mode 100644 index e5d56c7e95e34b..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/helm-prepare.md +++ /dev/null @@ -1,80 +0,0 @@ ---- -id: helm-prepare -title: Prepare Kubernetes resources -sidebar_label: "Prepare" -original_id: helm-prepare ---- - -For a fully functional Pulsar cluster, you need a few resources before deploying the Apache Pulsar Helm chart. The following provides instructions to prepare the Kubernetes cluster before deploying the Pulsar Helm chart. - -- [Google Kubernetes Engine](#google-kubernetes-engine) - - [Manual cluster creation](#manual-cluster-creation) - - [Scripted cluster creation](#scripted-cluster-creation) - - [Create cluster with local SSDs](#create-cluster-with-local-ssds) - -## Google Kubernetes Engine - -To get started easier, a script is provided to create the cluster automatically. Alternatively, a cluster can be created manually as well. - -### Manual cluster creation - -To provision a Kubernetes cluster manually, follow the [GKE instructions](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-cluster). - -### Scripted cluster creation - -A [bootstrap script](https://github.com/streamnative/charts/tree/master/scripts/pulsar/gke_bootstrap_script.sh) has been created to automate much of the setup process for users on GCP/GKE. - -The script can: - -1. Create a new GKE cluster. -2. Allow the cluster to modify DNS (Domain Name Server) records. -3. Setup `kubectl`, and connect it to the cluster. - -Google Cloud SDK is a dependency of this script, so ensure it is [set up correctly](helm-tools.md#connect-to-a-gke-cluster) for the script to work. - -The script reads various parameters from environment variables and an argument `up` or `down` for bootstrap and clean-up respectively. - -The following table describes all variables. - -| **Variable** | **Description** | **Default value** | -| ------------ | --------------- | ----------------- | -| PROJECT | ID of your GCP project | No default value. It requires to be set. | -| CLUSTER_NAME | Name of the GKE cluster | `pulsar-dev` | -| CONFDIR | Configuration directory to store Kubernetes configuration | ${HOME}/.config/streamnative | -| INT_NETWORK | IP space to use within this cluster | `default` | -| LOCAL_SSD_COUNT | Number of local SSD counts | 4 | -| MACHINE_TYPE | Type of machine to use for nodes | `n1-standard-4` | -| NUM_NODES | Number of nodes to be created in each of the cluster's zones | 4 | -| PREEMPTIBLE | Create nodes using preemptible VM instances in the new cluster. | false | -| REGION | Compute region for the cluster | `us-east1` | -| USE_LOCAL_SSD | Flag to create a cluster with local SSDs | false | -| ZONE | Compute zone for the cluster | `us-east1-b` | -| ZONE_EXTENSION | The extension (`a`, `b`, `c`) of the zone name of the cluster | `b` | -| EXTRA_CREATE_ARGS | Extra arguments passed to create command | | - -Run the script, by passing in your desired parameters. It can work with the default parameters except for `PROJECT` which is required: - -```bash - -PROJECT= scripts/pulsar/gke_bootstrap_script.sh up - -``` - -The script can also be used to clean up the created GKE resources. 
- -```bash - -PROJECT= scripts/pulsar/gke_bootstrap_script.sh down - -``` - -#### Create cluster with local SSDs - -To install the Pulsar Helm chart using local persistent volumes, you need to create a GKE cluster with local SSDs. You can do so by specifying `USE_LOCAL_SSD` to be `true` in the following command to create a Pulsar cluster with local SSDs. - -``` - -PROJECT= USE_LOCAL_SSD=true LOCAL_SSD_COUNT= scripts/pulsar/gke_bootstrap_script.sh up - -``` - diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/helm-tools.md b/site2/website/versioned_docs/version-2.9.2-deprecated/helm-tools.md deleted file mode 100644 index 6ba89006913b64..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/helm-tools.md +++ /dev/null @@ -1,43 +0,0 @@ ---- -id: helm-tools -title: Required tools for deploying Pulsar Helm Chart -sidebar_label: "Required Tools" -original_id: helm-tools ---- - -Before deploying Pulsar to your Kubernetes cluster, there are some tools you must have installed locally. - -## kubectl - -kubectl is the tool that talks to the Kubernetes API. kubectl 1.14 or higher is required and it needs to be compatible with your cluster ([+/- 1 minor release from your cluster](https://kubernetes.io/docs/tasks/tools/install-kubectl/#before-you-begin)). - -To Install kubectl locally, follow the [Kubernetes documentation](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl). - -The server version of kubectl cannot be obtained until we connect to a cluster. - -## Helm - -Helm is the package manager for Kubernetes. The Apache Pulsar Helm Chart is tested and supported with Helm v3. - -### Get Helm - -You can get Helm from the project's [releases page](https://github.com/helm/helm/releases), or follow other options under the official documentation of [installing Helm](https://helm.sh/docs/intro/install/). - -### Next steps - -Once kubectl and Helm are configured, you can configure your [Kubernetes cluster](helm-prepare.md). - -## Additional information - -### Templates - -Templating in Helm is done through Golang's [text/template](https://golang.org/pkg/text/template/) and [sprig](https://godoc.org/github.com/Masterminds/sprig). - -For more information about how all the inner workings behave, check these documents: - -- [Functions and Pipelines](https://helm.sh/docs/chart_template_guide/functions_and_pipelines/) -- [Subcharts and Globals](https://helm.sh/docs/chart_template_guide/subcharts_and_globals/) - -### Tips and tricks - -For additional information on developing with Helm, check [tips and tricks section](https://helm.sh/docs/howto/charts_tips_and_tricks/) in the Helm repository. \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/helm-upgrade.md b/site2/website/versioned_docs/version-2.9.2-deprecated/helm-upgrade.md deleted file mode 100644 index 7d671e6bfb3c10..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/helm-upgrade.md +++ /dev/null @@ -1,43 +0,0 @@ ---- -id: helm-upgrade -title: Upgrade Pulsar Helm release -sidebar_label: "Upgrade" -original_id: helm-upgrade ---- - -Before upgrading your Pulsar installation, you need to check the change log corresponding to the specific release you want to upgrade to and look for any release notes that might pertain to the new Pulsar helm chart version. 
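-
-For example, you can first check which chart and app versions are currently installed (a minimal sketch; it assumes the release was installed into the `pulsar` namespace):
-
-```bash
-
-# Show the installed release together with its chart and app versions.
-helm list -n pulsar
-
-```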
-
-We also recommend that you provide all values using the `helm upgrade --set key=value` syntax or a `-f values.yaml` file instead of using `--reuse-values`, because some of the current values might be deprecated.
-
-:::note
-
-You can retrieve your previous `--set` arguments cleanly, with `helm get values <release-name>`. If you direct this into a file (`helm get values <release-name> > pulsar.yaml`), you can safely pass this file through `-f`, namely `helm upgrade <release-name> apache/pulsar -f pulsar.yaml`. This safely replaces the behavior of `--reuse-values`.
-
-:::
-
-## Steps
-
-To upgrade Apache Pulsar to a newer version, follow these steps:
-
-1. Check the change log for the specific version you would like to upgrade to.
-2. Go through [deployment documentation](helm-deploy.md) step by step.
-3. Extract your previous `--set` arguments with the following command.
-
-   ```bash
-   
-   helm get values <release-name> > pulsar.yaml
-   
-   ```
-
-4. Decide all the values you need to set.
-5. Perform the upgrade, with all the `--set` arguments decided in step 4.
-
-   ```bash
-   
-   helm upgrade <release-name> apache/pulsar \
-       --version <new-version> \
-       -f pulsar.yaml \
-       --set ...
-   
-   ```
-
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/io-aerospike-sink.md b/site2/website/versioned_docs/version-2.9.2-deprecated/io-aerospike-sink.md
deleted file mode 100644
index 63d7338a3ba91c..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/io-aerospike-sink.md
+++ /dev/null
@@ -1,26 +0,0 @@
----
-id: io-aerospike-sink
-title: Aerospike sink connector
-sidebar_label: "Aerospike sink connector"
-original_id: io-aerospike-sink
----
-
-The Aerospike sink connector pulls messages from Pulsar topics to Aerospike clusters.
-
-## Configuration
-
-The configuration of the Aerospike sink connector has the following properties.
-
-### Property
-
-| Name | Type | Required | Default | Description |
-|------|------|----------|---------|-------------|
-| `seedHosts` | String | true | No default value | The comma-separated list of one or more Aerospike cluster hosts. <br /><br />Each host can be specified as a valid IP address or hostname followed by an optional port number. |

    Each host can be specified as a valid IP address or hostname followed by an optional port number. | -| `keyspace` | String| true |No default value |The Aerospike namespace. | -| `columnName` | String | true| No default value|The Aerospike column name. | -|`userName`|String|false|NULL|The Aerospike username.| -|`password`|String|false|NULL|The Aerospike password.| -| `keySet` | String|false |NULL | The Aerospike set name. | -| `maxConcurrentRequests` |int| false | 100 | The maximum number of concurrent Aerospike transactions that a sink can open. | -| `timeoutMs` | int|false | 100 | This property controls `socketTimeout` and `totalTimeout` for Aerospike transactions. | -| `retries` | int|false | 1 |The maximum number of retries before aborting a write transaction to Aerospike. | diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/io-canal-source.md b/site2/website/versioned_docs/version-2.9.2-deprecated/io-canal-source.md deleted file mode 100644 index d1fd43bb0f74e4..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/io-canal-source.md +++ /dev/null @@ -1,235 +0,0 @@ ---- -id: io-canal-source -title: Canal source connector -sidebar_label: "Canal source connector" -original_id: io-canal-source ---- - -The Canal source connector pulls messages from MySQL to Pulsar topics. - -## Configuration - -The configuration of Canal source connector has the following properties. - -### Property - -| Name | Required | Default | Description | -|------|----------|---------|-------------| -| `username` | true | None | Canal server account (not MySQL).| -| `password` | true | None | Canal server password (not MySQL). | -|`destination`|true|None|Source destination that Canal source connector connects to. -| `singleHostname` | false | None | Canal server address.| -| `singlePort` | false | None | Canal server port.| -| `cluster` | true | false | Whether to enable cluster mode based on Canal server configuration or not.

  1982. true: **cluster** mode.
    If set to true, it talks to `zkServers` to figure out the actual database host.

  1983. false: **standalone** mode.
    If set to false, it connects to the database specified by `singleHostname` and `singlePort`.
  1984. | -| `zkServers` | true | None | Address and port of the Zookeeper that Canal source connector talks to figure out the actual database host.| -| `batchSize` | false | 1000 | Batch size to fetch from Canal. | - -### Example - -Before using the Canal connector, you can create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "zkServers": "127.0.0.1:2181", - "batchSize": "5120", - "destination": "example", - "username": "", - "password": "", - "cluster": false, - "singleHostname": "127.0.0.1", - "singlePort": "11111", - } - - ``` - -* YAML - - You can create a YAML file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/canal/src/main/resources/canal-mysql-source-config.yaml) below to your YAML file. - - ```yaml - - configs: - zkServers: "127.0.0.1:2181" - batchSize: 5120 - destination: "example" - username: "" - password: "" - cluster: false - singleHostname: "127.0.0.1" - singlePort: 11111 - - ``` - -## Usage - -Here is an example of storing MySQL data using the configuration file as above. - -1. Start a MySQL server. - - ```bash - - $ docker pull mysql:5.7 - $ docker run -d -it --rm --name pulsar-mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=canal -e MYSQL_USER=mysqluser -e MYSQL_PASSWORD=mysqlpw mysql:5.7 - - ``` - -2. Create a configuration file `mysqld.cnf`. - - ```bash - - [mysqld] - pid-file = /var/run/mysqld/mysqld.pid - socket = /var/run/mysqld/mysqld.sock - datadir = /var/lib/mysql - #log-error = /var/log/mysql/error.log - # By default we only accept connections from localhost - #bind-address = 127.0.0.1 - # Disabling symbolic-links is recommended to prevent assorted security risks - symbolic-links=0 - log-bin=mysql-bin - binlog-format=ROW - server_id=1 - - ``` - -3. Copy the configuration file `mysqld.cnf` to MySQL server. - - ```bash - - $ docker cp mysqld.cnf pulsar-mysql:/etc/mysql/mysql.conf.d/ - - ``` - -4. Restart the MySQL server. - - ```bash - - $ docker restart pulsar-mysql - - ``` - -5. Create a test database in MySQL server. - - ```bash - - $ docker exec -it pulsar-mysql /bin/bash - $ mysql -h 127.0.0.1 -uroot -pcanal -e 'create database test;' - - ``` - -6. Start a Canal server and connect to MySQL server. - - ``` - - $ docker pull canal/canal-server:v1.1.2 - $ docker run -d -it --link pulsar-mysql -e canal.auto.scan=false -e canal.destinations=test -e canal.instance.master.address=pulsar-mysql:3306 -e canal.instance.dbUsername=root -e canal.instance.dbPassword=canal -e canal.instance.connectionCharset=UTF-8 -e canal.instance.tsdb.enable=true -e canal.instance.gtidon=false --name=pulsar-canal-server -p 8000:8000 -p 2222:2222 -p 11111:11111 -p 11112:11112 -m 4096m canal/canal-server:v1.1.2 - - ``` - -7. Start Pulsar standalone. - - ```bash - - $ docker pull apachepulsar/pulsar:2.3.0 - $ docker run -d -it --link pulsar-canal-server -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-standalone apachepulsar/pulsar:2.3.0 bin/pulsar standalone - - ``` - -8. Modify the configuration file `canal-mysql-source-config.yaml`. - - ```yaml - - configs: - zkServers: "" - batchSize: "5120" - destination: "test" - username: "" - password: "" - cluster: false - singleHostname: "pulsar-canal-server" - singlePort: "11111" - - ``` - -9. Create a consumer file `pulsar-client.py`. 
-
-   ```python
-   
-   import pulsar
-   
-   # Connect to the local standalone cluster and subscribe to the target topic.
-   client = pulsar.Client('pulsar://localhost:6650')
-   consumer = client.subscribe('my-topic',
-                               subscription_name='my-sub')
-   
-   try:
-       while True:
-           msg = consumer.receive()
-           print("Received message: '%s'" % msg.data())
-           consumer.acknowledge(msg)
-   finally:
-       # Close the client when the loop is interrupted (for example, with Ctrl+C).
-       client.close()
-   
-   ```
-
-10. Copy the configuration file `canal-mysql-source-config.yaml` and the consumer file `pulsar-client.py` to the Pulsar server.
-
-    ```bash
-    
-    $ docker cp canal-mysql-source-config.yaml pulsar-standalone:/pulsar/conf/
-    $ docker cp pulsar-client.py pulsar-standalone:/pulsar/
-    
-    ```
-
-11. Download a Canal connector and start it.
-
-    ```bash
-    
-    $ docker exec -it pulsar-standalone /bin/bash
-    $ wget https://archive.apache.org/dist/pulsar/pulsar-2.3.0/connectors/pulsar-io-canal-2.3.0.nar -P connectors
-    $ ./bin/pulsar-admin source localrun \
-    --archive ./connectors/pulsar-io-canal-2.3.0.nar \
-    --classname org.apache.pulsar.io.canal.CanalStringSource \
-    --tenant public \
-    --namespace default \
-    --name canal \
-    --destination-topic-name my-topic \
-    --source-config-file /pulsar/conf/canal-mysql-source-config.yaml \
-    --parallelism 1
-    
-    ```
-
-12. Consume data from MySQL.
-
-    ```bash
-    
-    $ docker exec -it pulsar-standalone /bin/bash
-    $ python pulsar-client.py
-    
-    ```
-
-13. Open another window to log in to the MySQL server.
-
-    ```bash
-    
-    $ docker exec -it pulsar-mysql /bin/bash
-    $ mysql -h 127.0.0.1 -uroot -pcanal
-    
-    ```
-
-14. Create a table, and insert, delete, and update data in the MySQL server.
-
-    ```bash
-    
-    mysql> use test;
-    mysql> show tables;
-    mysql> CREATE TABLE IF NOT EXISTS `test_table`(`test_id` INT UNSIGNED AUTO_INCREMENT,`test_title` VARCHAR(100) NOT NULL,
-    `test_author` VARCHAR(40) NOT NULL,
-    `test_date` DATE,PRIMARY KEY ( `test_id` ))ENGINE=InnoDB DEFAULT CHARSET=utf8;
-    mysql> INSERT INTO test_table (test_title, test_author, test_date) VALUES("a", "b", NOW());
-    mysql> UPDATE test_table SET test_title='c' WHERE test_title='a';
-    mysql> DELETE FROM test_table WHERE test_title='c';
-    
-    ```
-
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/io-cassandra-sink.md b/site2/website/versioned_docs/version-2.9.2-deprecated/io-cassandra-sink.md
deleted file mode 100644
index b27a754f49e182..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/io-cassandra-sink.md
+++ /dev/null
@@ -1,57 +0,0 @@
----
-id: io-cassandra-sink
-title: Cassandra sink connector
-sidebar_label: "Cassandra sink connector"
-original_id: io-cassandra-sink
----
-
-The Cassandra sink connector pulls messages from Pulsar topics to Cassandra clusters.
-
-## Configuration
-
-The configuration of the Cassandra sink connector has the following properties.
-
-### Property
-
-| Name | Type | Required | Default | Description |
-|------|------|----------|---------|-------------|
-| `roots` | String | true | " " (empty string) | A comma-separated list of Cassandra hosts to connect to. |
-| `keyspace` | String | true | " " (empty string) | The keyspace used for writing Pulsar messages. <br /><br />**Note: `keyspace` should be created prior to a Cassandra sink.** |

    **Note: `keyspace` should be created prior to a Cassandra sink.**| -| `keyname` | String|true| " " (empty string)| The key name of the Cassandra column family.

    The column is used for storing Pulsar message keys.

    If a Pulsar message doesn't have any key associated, the message value is used as the key. | -| `columnFamily` | String|true| " " (empty string)| The Cassandra column family name.

    **Note: `columnFamily` should be created prior to a Cassandra sink.**| -| `columnName` | String|true| " " (empty string) | The column name of the Cassandra column family.

    The column is used for storing Pulsar message values. | - -### Example - -Before using the Cassandra sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "roots": "localhost:9042", - "keyspace": "pulsar_test_keyspace", - "columnFamily": "pulsar_test_table", - "keyname": "key", - "columnName": "col" - } - - ``` - -* YAML - - ``` - - configs: - roots: "localhost:9042" - keyspace: "pulsar_test_keyspace" - columnFamily: "pulsar_test_table" - keyname: "key" - columnName: "col" - - ``` - -## Usage - -For more information about **how to connect Pulsar with Cassandra**, see [here](io-quickstart.md#connect-pulsar-to-apache-cassandra). diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/io-cdc-debezium.md b/site2/website/versioned_docs/version-2.9.2-deprecated/io-cdc-debezium.md deleted file mode 100644 index 293ccf2b35e8aa..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/io-cdc-debezium.md +++ /dev/null @@ -1,543 +0,0 @@ ---- -id: io-cdc-debezium -title: Debezium source connector -sidebar_label: "Debezium source connector" -original_id: io-cdc-debezium ---- - -The Debezium source connector pulls messages from MySQL or PostgreSQL -and persists the messages to Pulsar topics. - -## Configuration - -The configuration of Debezium source connector has the following properties. - -| Name | Required | Default | Description | -|------|----------|---------|-------------| -| `task.class` | true | null | A source task class that implemented in Debezium. | -| `database.hostname` | true | null | The address of a database server. | -| `database.port` | true | null | The port number of a database server.| -| `database.user` | true | null | The name of a database user that has the required privileges. | -| `database.password` | true | null | The password for a database user that has the required privileges. | -| `database.server.id` | true | null | The connector’s identifier that must be unique within a database cluster and similar to the database’s server-id configuration property. | -| `database.server.name` | true | null | The logical name of a database server/cluster, which forms a namespace and it is used in all the names of Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the Avro Connector is used. | -| `database.whitelist` | false | null | A list of all databases hosted by this server which is monitored by the connector.

    This is optional, and there are other properties for listing databases and tables to include or exclude from monitoring. | -| `key.converter` | true | null | The converter provided by Kafka Connect to convert record key. | -| `value.converter` | true | null | The converter provided by Kafka Connect to convert record value. | -| `database.history` | true | null | The name of the database history class. | -| `database.history.pulsar.topic` | true | null | The name of the database history topic where the connector writes and recovers DDL statements.

    **Note: this topic is for internal use only and should not be used by consumers.** | -| `database.history.pulsar.service.url` | true | null | Pulsar cluster service URL for history topic. | -| `pulsar.service.url` | true | null | Pulsar cluster service URL for the offset topic used in Debezium. You can use the `bin/pulsar-admin --admin-url http://pulsar:8080 sources localrun --source-config-file configs/pg-pulsar-config.yaml` command to point to the target Pulsar cluster.| -| `offset.storage.topic` | true | null | Record the last committed offsets that the connector successfully completes. | -| `mongodb.hosts` | true | null | The comma-separated list of hostname and port pairs (in the form 'host' or 'host:port') of the MongoDB servers in the replica set. The list contains a single hostname and a port pair. If mongodb.members.auto.discover is set to false, the host and port pair are prefixed with the replica set name (e.g., rs0/localhost:27017). | -| `mongodb.name` | true | null | A unique name that identifies the connector and/or MongoDB replica set or shared cluster that this connector monitors. Each server should be monitored by at most one Debezium connector, since this server name prefixes all persisted Kafka topics emanating from the MongoDB replica set or cluster. | -| `mongodb.user` | true | null | Name of the database user to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. | -| `mongodb.password` | true | null | Password to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. | -| `mongodb.task.id` | true | null | The taskId of the MongoDB connector that attempts to use a separate task for each replica set. | - - - -## Example of MySQL - -You need to create a configuration file before using the Pulsar Debezium connector. - -### Configuration - -You can use one of the following methods to create a configuration file. - -* JSON - - ```json - - { - "database.hostname": "localhost", - "database.port": "3306", - "database.user": "debezium", - "database.password": "dbz", - "database.server.id": "184054", - "database.server.name": "dbserver1", - "database.whitelist": "inventory", - "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory", - "database.history.pulsar.topic": "history-topic", - "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650", - "key.converter": "org.apache.kafka.connect.json.JsonConverter", - "value.converter": "org.apache.kafka.connect.json.JsonConverter", - "pulsar.service.url": "pulsar://127.0.0.1:6650", - "offset.storage.topic": "offset-topic" - } - - ``` - -* YAML - - You can create a `debezium-mysql-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/resources/debezium-mysql-source-config.yaml) below to the `debezium-mysql-source-config.yaml` file. 
- - ```yaml - - tenant: "public" - namespace: "default" - name: "debezium-mysql-source" - topicName: "debezium-mysql-topic" - archive: "connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar" - parallelism: 1 - - configs: - - ## config for mysql, docker image: debezium/example-mysql:0.8 - database.hostname: "localhost" - database.port: "3306" - database.user: "debezium" - database.password: "dbz" - database.server.id: "184054" - database.server.name: "dbserver1" - database.whitelist: "inventory" - database.history: "org.apache.pulsar.io.debezium.PulsarDatabaseHistory" - database.history.pulsar.topic: "history-topic" - database.history.pulsar.service.url: "pulsar://127.0.0.1:6650" - - ## KEY_CONVERTER_CLASS_CONFIG, VALUE_CONVERTER_CLASS_CONFIG - key.converter: "org.apache.kafka.connect.json.JsonConverter" - value.converter: "org.apache.kafka.connect.json.JsonConverter" - - ## PULSAR_SERVICE_URL_CONFIG - pulsar.service.url: "pulsar://127.0.0.1:6650" - - ## OFFSET_STORAGE_TOPIC_CONFIG - offset.storage.topic: "offset-topic" - - ``` - -### Usage - -This example shows how to change the data of a MySQL table using the Pulsar Debezium connector. - -1. Start a MySQL server with a database from which Debezium can capture changes. - - ```bash - - $ docker run -it --rm \ - --name mysql \ - -p 3306:3306 \ - -e MYSQL_ROOT_PASSWORD=debezium \ - -e MYSQL_USER=mysqluser \ - -e MYSQL_PASSWORD=mysqlpw debezium/example-mysql:0.8 - - ``` - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - -3. Start the Pulsar Debezium connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - Make sure the nar file is available at `connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar`. - - ```bash - - $ bin/pulsar-admin source localrun \ - --archive connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar \ - --name debezium-mysql-source --destination-topic-name debezium-mysql-topic \ - --tenant public \ - --namespace default \ - --source-config '{"database.hostname": "localhost","database.port": "3306","database.user": "debezium","database.password": "dbz","database.server.id": "184054","database.server.name": "dbserver1","database.whitelist": "inventory","database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory","database.history.pulsar.topic": "history-topic","database.history.pulsar.service.url": "pulsar://127.0.0.1:6650","key.converter": "org.apache.kafka.connect.json.JsonConverter","value.converter": "org.apache.kafka.connect.json.JsonConverter","pulsar.service.url": "pulsar://127.0.0.1:6650","offset.storage.topic": "offset-topic"}' - - ``` - - * Use the **YAML** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin source localrun \ - --source-config-file debezium-mysql-source-config.yaml - - ``` - -4. Subscribe the topic _sub-products_ for the table _inventory.products_. - - ```bash - - $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0 - - ``` - -5. Start a MySQL client in docker. - - ```bash - - $ docker run -it --rm \ - --name mysqlterm \ - --link mysql \ - --rm mysql:5.7 sh \ - -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"' - - ``` - -6. A MySQL client pops out. - - Use the following commands to change the data of the table _products_. 
- - ``` - - mysql> use inventory; - mysql> show tables; - mysql> SELECT * FROM products; - mysql> UPDATE products SET name='1111111111' WHERE id=101; - mysql> UPDATE products SET name='1111111111' WHERE id=107; - - ``` - - In the terminal window of subscribing topic, you can find the data changes have been kept in the _sub-products_ topic. - -## Example of PostgreSQL - -You need to create a configuration file before using the Pulsar Debezium connector. - -### Configuration - -You can use one of the following methods to create a configuration file. - -* JSON - - ```json - - { - "database.hostname": "localhost", - "database.port": "5432", - "database.user": "postgres", - "database.password": "postgres", - "database.dbname": "postgres", - "database.server.name": "dbserver1", - "schema.whitelist": "inventory", - "pulsar.service.url": "pulsar://127.0.0.1:6650" - } - - ``` - -* YAML - - You can create a `debezium-postgres-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/resources/debezium-postgres-source-config.yaml) below to the `debezium-postgres-source-config.yaml` file. - - ```yaml - - tenant: "public" - namespace: "default" - name: "debezium-postgres-source" - topicName: "debezium-postgres-topic" - archive: "connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar" - parallelism: 1 - - configs: - - ## config for pg, docker image: debezium/example-postgress:0.8 - database.hostname: "localhost" - database.port: "5432" - database.user: "postgres" - database.password: "postgres" - database.dbname: "postgres" - database.server.name: "dbserver1" - schema.whitelist: "inventory" - - ## PULSAR_SERVICE_URL_CONFIG - pulsar.service.url: "pulsar://127.0.0.1:6650" - - ``` - -### Usage - -This example shows how to change the data of a PostgreSQL table using the Pulsar Debezium connector. - - -1. Start a PostgreSQL server with a database from which Debezium can capture changes. - - ```bash - - $ docker pull debezium/example-postgres:0.8 - $ docker run -d -it --rm --name pulsar-postgresql -p 5432:5432 debezium/example-postgres:0.8 - - ``` - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - -3. Start the Pulsar Debezium connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - Make sure the nar file is available at `connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar`. - - ```bash - - $ bin/pulsar-admin source localrun \ - --archive connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar \ - --name debezium-postgres-source \ - --destination-topic-name debezium-postgres-topic \ - --tenant public \ - --namespace default \ - --source-config '{"database.hostname": "localhost","database.port": "5432","database.user": "postgres","database.password": "postgres","database.dbname": "postgres","database.server.name": "dbserver1","schema.whitelist": "inventory","pulsar.service.url": "pulsar://127.0.0.1:6650"}' - - ``` - - * Use the **YAML** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin source localrun \ - --source-config-file debezium-postgres-source-config.yaml - - ``` - -4. Subscribe the topic _sub-products_ for the _inventory.products_ table. - - ``` - - $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0 - - ``` - -5. Start a PostgreSQL client in docker. 
- - ```bash - - $ docker exec -it pulsar-postgresql /bin/bash - - ``` - -6. A PostgreSQL client pops out. - - Use the following commands to change the data of the table _products_. - - ``` - - psql -U postgres postgres - postgres=# \c postgres; - You are now connected to database "postgres" as user "postgres". - postgres=# SET search_path TO inventory; - SET - postgres=# select * from products; - id | name | description | weight - -----+--------------------+---------------------------------------------------------+-------- - 102 | car battery | 12V car battery | 8.1 - 103 | 12-pack drill bits | 12-pack of drill bits with sizes ranging from #40 to #3 | 0.8 - 104 | hammer | 12oz carpenter's hammer | 0.75 - 105 | hammer | 14oz carpenter's hammer | 0.875 - 106 | hammer | 16oz carpenter's hammer | 1 - 107 | rocks | box of assorted rocks | 5.3 - 108 | jacket | water resistent black wind breaker | 0.1 - 109 | spare tire | 24 inch spare tire | 22.2 - 101 | 1111111111 | Small 2-wheel scooter | 3.14 - (9 rows) - - postgres=# UPDATE products SET name='1111111111' WHERE id=107; - UPDATE 1 - - ``` - - In the terminal window of subscribing topic, you can receive the following messages. - - ```bash - - ----- got message ----- - {"schema":{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":107}}�{"schema":{"type":"struct","fields":[{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":false,"field":"name"},{"type":"string","optional":true,"field":"description"},{"type":"double","optional":true,"field":"weight"}],"optional":true,"name":"dbserver1.inventory.products.Value","field":"before"},{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":false,"field":"name"},{"type":"string","optional":true,"field":"description"},{"type":"double","optional":true,"field":"weight"}],"optional":true,"name":"dbserver1.inventory.products.Value","field":"after"},{"type":"struct","fields":[{"type":"string","optional":true,"field":"version"},{"type":"string","optional":true,"field":"connector"},{"type":"string","optional":false,"field":"name"},{"type":"string","optional":false,"field":"db"},{"type":"int64","optional":true,"field":"ts_usec"},{"type":"int64","optional":true,"field":"txId"},{"type":"int64","optional":true,"field":"lsn"},{"type":"string","optional":true,"field":"schema"},{"type":"string","optional":true,"field":"table"},{"type":"boolean","optional":true,"default":false,"field":"snapshot"},{"type":"boolean","optional":true,"field":"last_snapshot_record"}],"optional":false,"name":"io.debezium.connector.postgresql.Source","field":"source"},{"type":"string","optional":false,"field":"op"},{"type":"int64","optional":true,"field":"ts_ms"}],"optional":false,"name":"dbserver1.inventory.products.Envelope"},"payload":{"before":{"id":107,"name":"rocks","description":"box of assorted rocks","weight":5.3},"after":{"id":107,"name":"1111111111","description":"box of assorted rocks","weight":5.3},"source":{"version":"0.9.2.Final","connector":"postgresql","name":"dbserver1","db":"postgres","ts_usec":1559208957661080,"txId":577,"lsn":23862872,"schema":"inventory","table":"products","snapshot":false,"last_snapshot_record":null},"op":"u","ts_ms":1559208957692}} - - ``` - -## Example of MongoDB - -You need to create a configuration file before using the Pulsar Debezium connector. 
-
-* JSON
-
-  ```json
-  
-  {
-    "mongodb.hosts": "rs0/mongodb:27017",
-    "mongodb.name": "dbserver1",
-    "mongodb.user": "debezium",
-    "mongodb.password": "dbz",
-    "mongodb.task.id": "1",
-    "database.whitelist": "inventory",
-    "pulsar.service.url": "pulsar://127.0.0.1:6650"
-  }
-  
-  ```
-
-* YAML
-
-  You can create a `debezium-mongodb-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mongodb/src/main/resources/debezium-mongodb-source-config.yaml) below to the `debezium-mongodb-source-config.yaml` file.
-
-  ```yaml
-  
-  tenant: "public"
-  namespace: "default"
-  name: "debezium-mongodb-source"
-  topicName: "debezium-mongodb-topic"
-  archive: "connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar"
-  parallelism: 1
-  
-  configs:
-  
-    ## config for mongodb, docker image: debezium/example-mongodb:0.10
-    mongodb.hosts: "rs0/mongodb:27017"
-    mongodb.name: "dbserver1"
-    mongodb.user: "debezium"
-    mongodb.password: "dbz"
-    mongodb.task.id: "1"
-    database.whitelist: "inventory"
-  
-    ## PULSAR_SERVICE_URL_CONFIG
-    pulsar.service.url: "pulsar://127.0.0.1:6650"
-  
-  ```
-
-### Usage
-
-This example shows how to change the data of a MongoDB collection using the Pulsar Debezium connector.
-
-1. Start a MongoDB server with a database from which Debezium can capture changes.
-
-   ```bash
-   
-   $ docker pull debezium/example-mongodb:0.10
-   $ docker run -d -it --rm --name pulsar-mongodb -e MONGODB_USER=mongodb -e MONGODB_PASSWORD=mongodb -p 27017:27017 debezium/example-mongodb:0.10
-   
-   ```
-
-   Use the following commands to initialize the data.
-
-   ```bash
-   
-   ./usr/local/bin/init-inventory.sh
-   
-   ```
-
-   If the local host cannot access the container network, you can update the file `/etc/hosts` and add a rule `127.0.0.1 f114527a95f`, where `f114527a95f` is the container ID; you can get it by running `docker ps -a`.
-
-2. Start a Pulsar service locally in standalone mode.
-
-   ```bash
-   
-   $ bin/pulsar standalone
-   
-   ```
-
-3. Start the Pulsar Debezium connector in local run mode using one of the following methods.
-
-   * Use the **JSON** configuration file as shown previously.
-
-     Make sure the NAR file is available at `connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar`.
-
-     ```bash
-     
-     $ bin/pulsar-admin source localrun \
-     --archive connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar \
-     --name debezium-mongodb-source \
-     --destination-topic-name debezium-mongodb-topic \
-     --tenant public \
-     --namespace default \
-     --source-config '{"mongodb.hosts": "rs0/mongodb:27017","mongodb.name": "dbserver1","mongodb.user": "debezium","mongodb.password": "dbz","mongodb.task.id": "1","database.whitelist": "inventory","pulsar.service.url": "pulsar://127.0.0.1:6650"}'
-     
-     ```
-
-   * Use the **YAML** configuration file as shown previously.
-
-     ```bash
-     
-     $ bin/pulsar-admin source localrun \
-     --source-config-file debezium-mongodb-source-config.yaml
-     
-     ```
-
-4. Subscribe to the topic _sub-products_ for the _inventory.products_ table.
-
-   ```bash
-   
-   $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0
-   
-   ```
-
-5. Start a MongoDB client in docker.
-
-   ```bash
-   
-   $ docker exec -it pulsar-mongodb /bin/bash
-   
-   ```
-
-6. A MongoDB client prompt appears.
-
-   ```bash
-   
-   mongo -u debezium -p dbz --authenticationDatabase admin localhost:27017/inventory
-   db.products.update({"_id":NumberLong(104)},{$set:{weight:1.25}})
-   
-   ```
-
-   In the terminal window where you subscribed to the topic, you can receive the following messages.
- - ```bash - - ----- got message ----- - {"schema":{"type":"struct","fields":[{"type":"string","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":"104"}}, value = {"schema":{"type":"struct","fields":[{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"after"},{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"patch"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"version"},{"type":"string","optional":false,"field":"connector"},{"type":"string","optional":false,"field":"name"},{"type":"int64","optional":false,"field":"ts_ms"},{"type":"string","optional":true,"name":"io.debezium.data.Enum","version":1,"parameters":{"allowed":"true,last,false"},"default":"false","field":"snapshot"},{"type":"string","optional":false,"field":"db"},{"type":"string","optional":false,"field":"rs"},{"type":"string","optional":false,"field":"collection"},{"type":"int32","optional":false,"field":"ord"},{"type":"int64","optional":true,"field":"h"}],"optional":false,"name":"io.debezium.connector.mongo.Source","field":"source"},{"type":"string","optional":true,"field":"op"},{"type":"int64","optional":true,"field":"ts_ms"}],"optional":false,"name":"dbserver1.inventory.products.Envelope"},"payload":{"after":"{\"_id\": {\"$numberLong\": \"104\"},\"name\": \"hammer\",\"description\": \"12oz carpenter's hammer\",\"weight\": 1.25,\"quantity\": 4}","patch":null,"source":{"version":"0.10.0.Final","connector":"mongodb","name":"dbserver1","ts_ms":1573541905000,"snapshot":"true","db":"inventory","rs":"rs0","collection":"products","ord":1,"h":4983083486544392763},"op":"r","ts_ms":1573541909761}}. - - ``` - -## FAQ - -### Debezium postgres connector will hang when create snap - -```$xslt - -#18 prio=5 os_prio=31 tid=0x00007fd83096f800 nid=0xa403 waiting on condition [0x000070000f534000] - java.lang.Thread.State: WAITING (parking) - at sun.misc.Unsafe.park(Native Method) - - parking to wait for <0x00000007ab025a58> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject) - at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) - at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) - at java.util.concurrent.LinkedBlockingDeque.putLast(LinkedBlockingDeque.java:396) - at java.util.concurrent.LinkedBlockingDeque.put(LinkedBlockingDeque.java:649) - at io.debezium.connector.base.ChangeEventQueue.enqueue(ChangeEventQueue.java:132) - at io.debezium.connector.postgresql.PostgresConnectorTask$Lambda$203/385424085.accept(Unknown Source) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.sendCurrentRecord(RecordsSnapshotProducer.java:402) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.readTable(RecordsSnapshotProducer.java:321) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$takeSnapshot$6(RecordsSnapshotProducer.java:226) - at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$240/1347039967.accept(Unknown Source) - at io.debezium.jdbc.JdbcConnection.queryWithBlockingConsumer(JdbcConnection.java:535) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.takeSnapshot(RecordsSnapshotProducer.java:224) - at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$start$0(RecordsSnapshotProducer.java:87) - at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$206/589332928.run(Unknown Source) - at 
java.util.concurrent.CompletableFuture.uniRun(CompletableFuture.java:705)
- at java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:717)
- at java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2010)
- at io.debezium.connector.postgresql.RecordsSnapshotProducer.start(RecordsSnapshotProducer.java:87)
- at io.debezium.connector.postgresql.PostgresConnectorTask.start(PostgresConnectorTask.java:126)
- at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:47)
- at org.apache.pulsar.io.kafka.connect.KafkaConnectSource.open(KafkaConnectSource.java:127)
- at org.apache.pulsar.io.debezium.DebeziumSource.open(DebeziumSource.java:100)
- at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupInput(JavaInstanceRunnable.java:690)
- at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupJavaInstance(JavaInstanceRunnable.java:200)
- at org.apache.pulsar.functions.instance.JavaInstanceRunnable.run(JavaInstanceRunnable.java:230)
- at java.lang.Thread.run(Thread.java:748)
-
-```
-
-If you encounter the above problem when synchronizing data, refer to [this issue](https://github.com/apache/pulsar/issues/4075) and add the following configuration to the configuration file:
-
-```
-
-max.queue.size=
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/io-cdc.md b/site2/website/versioned_docs/version-2.9.2-deprecated/io-cdc.md
deleted file mode 100644
index e6e662884826de..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/io-cdc.md
+++ /dev/null
@@ -1,26 +0,0 @@
----
-id: io-cdc
-title: CDC connector
-sidebar_label: "CDC connector"
-original_id: io-cdc
----
-
-CDC source connectors capture log changes of databases (such as MySQL, MongoDB, and PostgreSQL) into Pulsar.
-
-> CDC source connectors are built on top of [Canal](https://github.com/alibaba/canal) and [Debezium](https://debezium.io/) and store all data in a Pulsar cluster in a persistent, replicated, and partitioned way.
-
-Currently, Pulsar has the following CDC connectors.
-
-Name|Java Class
-|---|---
-[Canal source connector](io-canal-source.md)|[org.apache.pulsar.io.canal.CanalStringSource.java](https://github.com/apache/pulsar/blob/master/pulsar-io/canal/src/main/java/org/apache/pulsar/io/canal/CanalStringSource.java)
-[Debezium source connector](io-cdc-debezium.md)|[org.apache.pulsar.io.debezium.DebeziumSource.java](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/core/src/main/java/org/apache/pulsar/io/debezium/DebeziumSource.java)<br />[org.apache.pulsar.io.debezium.mysql.DebeziumMysqlSource.java](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/java/org/apache/pulsar/io/debezium/mysql/DebeziumMysqlSource.java)<br />[org.apache.pulsar.io.debezium.postgres.DebeziumPostgresSource.java](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/java/org/apache/pulsar/io/debezium/postgres/DebeziumPostgresSource.java)
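-
-As a quick illustration of how these connectors are deployed, the sketch below submits the Debezium MySQL source with `pulsar-admin`. It is a hypothetical example: the NAR path and the `debezium-mysql-source-config.yaml` file are assumptions, and the full set of options is described in the [Connector Admin CLI](io-cli.md) reference and the connector guides linked above.
-
-```bash
-
-# Hypothetical sketch: submit a CDC source connector to a running cluster.
-$ bin/pulsar-admin sources create \
-  --archive connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar \
-  --name debezium-mysql-source \
-  --destination-topic-name debezium-mysql-topic \
-  --tenant public \
-  --namespace default \
-  --source-config-file debezium-mysql-source-config.yaml
-
-```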
-
-For more information about Canal and Debezium, see the information below.
-
-Subject | Reference
-|---|---
-How to use Canal source connector with MySQL|[Canal guide](https://github.com/alibaba/canal/wiki)
-How does Canal work | [Canal tutorial](https://github.com/alibaba/canal/wiki)
-How to use Debezium source connector with MySQL | [Debezium guide](https://debezium.io/docs/connectors/mysql/)
-How does Debezium work | [Debezium tutorial](https://debezium.io/docs/tutorial/)
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/io-cli.md b/site2/website/versioned_docs/version-2.9.2-deprecated/io-cli.md
deleted file mode 100644
index 3d54bb61875e25..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/io-cli.md
+++ /dev/null
@@ -1,658 +0,0 @@
----
-id: io-cli
-title: Connector Admin CLI
-sidebar_label: "CLI"
-original_id: io-cli
----
-
-The `pulsar-admin` tool helps you manage Pulsar connectors.
-
-## `sources`
-
-An interface for managing Pulsar IO sources (ingress data into Pulsar).
-
-```bash
-
-$ pulsar-admin sources subcommands
-
-```
-
-Subcommands are:
-
-* `create`
-
-* `update`
-
-* `delete`
-
-* `get`
-
-* `status`
-
-* `list`
-
-* `stop`
-
-* `start`
-
-* `restart`
-
-* `localrun`
-
-* `available-sources`
-
-* `reload`
-
-
-### `create`
-
-Submit a Pulsar IO source connector to run in a Pulsar cluster.
-
-#### Usage
-
-```bash
-
-$ pulsar-admin sources create options
-
-```
-
-#### Options
-
-|Flag|Description|
-|----|---|
-| `-a`, `--archive` | The path to the NAR archive for the source.
It also supports a URL path (http/https/file; the file protocol assumes that the file already exists on the worker host) from which the worker can download the package.
-| `--classname` | The source's class name if `archive` is a file URL path (file://).
-| `--cpu` | The CPU (in cores) that needs to be allocated per source instance (applicable only to the Docker runtime).
-| `--deserialization-classname` | The SerDe classname for the source.
-| `--destination-topic-name` | The Pulsar topic to which data is sent.
-| `--disk` | The disk (in bytes) that needs to be allocated per source instance (applicable only to the Docker runtime).
-|`--name` | The source's name.
-| `--namespace` | The source's namespace.
-| `--parallelism` | The source's parallelism factor, that is, the number of source instances to run.
-| `--processing-guarantees` | The processing guarantees (also known as delivery semantics) applied to the source. A source connector receives messages from an external system and writes messages to a Pulsar topic. The `--processing-guarantees` option is used to ensure the processing guarantees for writing messages to the Pulsar topic.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE. -| `--ram` | The RAM (in bytes) that needs to be allocated per source instance (applicable only to the process and Docker runtimes). -| `-st`, `--schema-type` | The schema type.
Either a builtin schema (for example, AVRO and JSON) or a custom schema class name to be used to encode messages emitted from the source.
-| `--source-config` | Source config key/values.
-| `--source-config-file` | The path to a YAML config file specifying the source's configuration.
-| `-t`, `--source-type` | The source's connector provider.
-| `--tenant` | The source's tenant.
-|`--producer-config`| The custom producer configuration (as a JSON string).
-
-### `update`
-
-Update an already submitted Pulsar IO source connector.
-
-#### Usage
-
-```bash
-
-$ pulsar-admin sources update options
-
-```
-
-#### Options
-
-|Flag|Description|
-|----|---|
-| `-a`, `--archive` | The path to the NAR archive for the source.
It also supports a URL path (http/https/file; the file protocol assumes that the file already exists on the worker host) from which the worker can download the package.
-| `--classname` | The source's class name if `archive` is a file URL path (file://).
-| `--cpu` | The CPU (in cores) that needs to be allocated per source instance (applicable only to the Docker runtime).
-| `--deserialization-classname` | The SerDe classname for the source.
-| `--destination-topic-name` | The Pulsar topic to which data is sent.
-| `--disk` | The disk (in bytes) that needs to be allocated per source instance (applicable only to the Docker runtime).
-|`--name` | The source's name.
-| `--namespace` | The source's namespace.
-| `--parallelism` | The source's parallelism factor, that is, the number of source instances to run.
-| `--processing-guarantees` | The processing guarantees (also known as delivery semantics) applied to the source. A source connector receives messages from an external system and writes messages to a Pulsar topic. The `--processing-guarantees` option is used to ensure the processing guarantees for writing messages to the Pulsar topic.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE. -| `--ram` | The RAM (in bytes) that needs to be allocated per source instance (applicable only to the process and Docker runtimes). -| `-st`, `--schema-type` | The schema type.
Either a builtin schema (for example, AVRO and JSON) or a custom schema class name to be used to encode messages emitted from the source.
-| `--source-config` | Source config key/values.
-| `--source-config-file` | The path to a YAML config file specifying the source's configuration.
-| `-t`, `--source-type` | The source's connector provider. The `source-type` parameter of the currently built-in connectors is determined by the setting of the `name` parameter specified in the pulsar-io.yaml file.
-| `--tenant` | The source's tenant.
-| `--update-auth-data` | Whether or not to update the auth data.
    **Default value: false.** - - -### `delete` - -Delete a Pulsar IO source connector. - -#### Usage - -```bash - -$ pulsar-admin sources delete options - -``` - -#### Option - -|Flag|Description| -|---|---| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - -### `get` - -Get the information about a Pulsar IO source connector. - -#### Usage - -```bash - -$ pulsar-admin sources get options - -``` - -#### Options -|Flag|Description| -|---|---| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - - -### `status` - -Check the current status of a Pulsar Source. - -#### Usage - -```bash - -$ pulsar-admin sources status options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The source ID.
If `instance-id` is not provided, Pulsar gets the status of all instances.|
-|`--name`|The source's name.|
-|`--namespace`|The source's namespace.|
-|`--tenant`|The source's tenant.|
-
-### `list`
-
-List all running Pulsar IO source connectors.
-
-#### Usage
-
-```bash
-
-$ pulsar-admin sources list options
-
-```
-
-#### Options
-
-|Flag|Description|
-|---|---|
-|`--namespace`|The source's namespace.|
-|`--tenant`|The source's tenant.|
-
-
-### `stop`
-
-Stop a source instance.
-
-#### Usage
-
-```bash
-
-$ pulsar-admin sources stop options
-
-```
-
-#### Options
-
-|Flag|Description|
-|---|---|
-|`--instance-id`|The source instance ID.
If `instance-id` is not provided, Pulsar stops all instances.|
-|`--name`|The source's name.|
-|`--namespace`|The source's namespace.|
-|`--tenant`|The source's tenant.|
-
-### `start`
-
-Start a source instance.
-
-#### Usage
-
-```bash
-
-$ pulsar-admin sources start options
-
-```
-
-#### Options
-
-|Flag|Description|
-|---|---|
-|`--instance-id`|The source instance ID.
If `instance-id` is not provided, Pulsar starts all instances.|
-|`--name`|The source's name.|
-|`--namespace`|The source's namespace.|
-|`--tenant`|The source's tenant.|
-
-
-### `restart`
-
-Restart a source instance.
-
-#### Usage
-
-```bash
-
-$ pulsar-admin sources restart options
-
-```
-
-#### Options
-|Flag|Description|
-|---|---|
-|`--instance-id`|The source instance ID.
If `instance-id` is not provided, Pulsar restarts all instances.
-|`--name`|The source's name.|
-|`--namespace`|The source's namespace.|
-|`--tenant`|The source's tenant.|
-
-
-### `localrun`
-
-Run a Pulsar IO source connector locally rather than deploying it to the Pulsar cluster.
-
-#### Usage
-
-```bash
-
-$ pulsar-admin sources localrun options
-
-```
-
-#### Options
-
-|Flag|Description|
-|----|---|
-| `-a`, `--archive` | The path to the NAR archive for the source.
It also supports a URL path (http/https/file; the file protocol assumes that the file already exists on the worker host) from which the worker can download the package.
-| `--broker-service-url` | The URL for the Pulsar broker.
-|`--classname`|The source's class name if `archive` is a file URL path (file://).
-| `--client-auth-params` | The client authentication parameter.
-| `--client-auth-plugin` | The client authentication plugin that the function process uses to connect to the broker.
-|`--cpu`|The CPU (in cores) that needs to be allocated per source instance (applicable only to the Docker runtime).|
-|`--deserialization-classname`|The SerDe classname for the source.
-|`--destination-topic-name`|The Pulsar topic to which data is sent.
-|`--disk`|The disk (in bytes) that needs to be allocated per source instance (applicable only to the Docker runtime).|
-|`--hostname-verification-enabled`|Enable hostname verification.
**Default value: false**.
-|`--name`|The source’s name.|
-|`--namespace`|The source’s namespace.|
-|`--parallelism`|The source’s parallelism factor, that is, the number of source instances to run.|
-|`--processing-guarantees` | The processing guarantees (also known as delivery semantics) applied to the source. A source connector receives messages from an external system and writes messages to a Pulsar topic. The `--processing-guarantees` option is used to ensure the processing guarantees for writing messages to the Pulsar topic.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE. -|`--ram`|The RAM (in bytes) that needs to be allocated per source instance (applicable only to the Docker runtime).| -| `-st`, `--schema-type` | The schema type.
Either a builtin schema (for example, AVRO and JSON) or a custom schema class name to be used to encode messages emitted from the source.
-|`--source-config`|Source config key/values.
-|`--source-config-file`|The path to a YAML config file specifying the source’s configuration.
-|`--source-type`|The source's connector provider.
-|`--tenant`|The source’s tenant.
-|`--tls-allow-insecure`|Allow an insecure TLS connection.
**Default value: false**.
-|`--tls-trust-cert-path`|The TLS trust certificate file path.
-|`--use-tls`|Use a TLS connection.
    **Default value: false**. -|`--producer-config`| The custom producer configuration (as a JSON string). - -### `available-sources` - -Get the list of Pulsar IO connector sources supported by Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sources available-sources - -``` - -### `reload` - -Reload the available built-in connectors. - -#### Usage - -```bash - -$ pulsar-admin sources reload - -``` - -## `sinks` - -An interface for managing Pulsar IO sinks (egress data from Pulsar). - -```bash - -$ pulsar-admin sinks subcommands - -``` - -Subcommands are: - -* `create` - -* `update` - -* `delete` - -* `get` - -* `status` - -* `list` - -* `stop` - -* `start` - -* `restart` - -* `localrun` - -* `available-sinks` - -* `reload` - - -### `create` - -Submit a Pulsar IO sink connector to run in a Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sinks create options - -``` - -#### Options - -|Flag|Description| -|----|---| -| `-a`, `--archive` | The path to the archive file for the sink.
It also supports a URL path (http/https/file; the file protocol assumes that the file already exists on the worker host) from which the worker can download the package.
-| `--auto-ack` | Whether or not the framework will automatically acknowledge messages.
-| `--classname` | The sink's class name if `archive` is a file URL path (file://).
-| `--cpu` | The CPU (in cores) that needs to be allocated per sink instance (applicable only to the Docker runtime).
-| `--custom-schema-inputs` | The map of input topics to schema types or class names (as a JSON string).
-| `--custom-serde-inputs` | The map of input topics to SerDe class names (as a JSON string).
-| `--disk` | The disk (in bytes) that needs to be allocated per sink instance (applicable only to the Docker runtime).
-|`-i, --inputs` | The sink's input topic or topics (multiple topics can be specified as a comma-separated list).
-|`--name` | The sink's name.
-| `--namespace` | The sink's namespace.
-| `--parallelism` | The sink's parallelism factor, that is, the number of sink instances to run.
-| `--processing-guarantees` | The processing guarantees (also known as delivery semantics) applied to the sink. The `--processing-guarantees` implementation in Pulsar also relies on the sink implementation.
The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE.
-| `--ram` | The RAM (in bytes) that needs to be allocated per sink instance (applicable only to the process and Docker runtimes).
-| `--retain-ordering` | The sink consumes and sinks messages in order.
-| `--sink-config` | Sink config key/values.
-| `--sink-config-file` | The path to a YAML config file specifying the sink's configuration.
-| `-t`, `--sink-type` | The sink's connector provider. The `sink-type` parameter of the currently built-in connectors is determined by the setting of the `name` parameter specified in the pulsar-io.yaml file.
-| `--subs-name` | The Pulsar subscription name if the user wants a specific subscription name for the input-topic consumer.
-| `--tenant` | The sink's tenant.
-| `--timeout-ms` | The message timeout in milliseconds.
-| `--topics-pattern` | The topics pattern to consume from a list of topics under a namespace that match the pattern.
`--inputs` and `--topics-pattern` are mutually exclusive.
Add the SerDe class name for a pattern in `--customSerdeInputs` (supported for Java functions only).
-
-### `update`
-
-Update a Pulsar IO sink connector.
-
-#### Usage
-
-```bash
-
-$ pulsar-admin sinks update options
-
-```
-
-#### Options
-
-|Flag|Description|
-|----|---|
-| `-a`, `--archive` | The path to the archive file for the sink.
It also supports a URL path (http/https/file; the file protocol assumes that the file already exists on the worker host) from which the worker can download the package.
-| `--auto-ack` | Whether or not the framework will automatically acknowledge messages.
-| `--classname` | The sink's class name if `archive` is a file URL path (file://).
-| `--cpu` | The CPU (in cores) that needs to be allocated per sink instance (applicable only to the Docker runtime).
-| `--custom-schema-inputs` | The map of input topics to schema types or class names (as a JSON string).
-| `--custom-serde-inputs` | The map of input topics to SerDe class names (as a JSON string).
-| `--disk` | The disk (in bytes) that needs to be allocated per sink instance (applicable only to the Docker runtime).
-|`-i, --inputs` | The sink's input topic or topics (multiple topics can be specified as a comma-separated list).
-|`--name` | The sink's name.
-| `--namespace` | The sink's namespace.
-| `--parallelism` | The sink's parallelism factor, that is, the number of sink instances to run.
-| `--processing-guarantees` | The processing guarantees (also known as delivery semantics) applied to the sink. The `--processing-guarantees` implementation in Pulsar also relies on the sink implementation.
The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE.
-| `--ram` | The RAM (in bytes) that needs to be allocated per sink instance (applicable only to the process and Docker runtimes).
-| `--retain-ordering` | The sink consumes and sinks messages in order.
-| `--sink-config` | Sink config key/values.
-| `--sink-config-file` | The path to a YAML config file specifying the sink's configuration.
-| `-t`, `--sink-type` | The sink's connector provider.
-| `--subs-name` | The Pulsar subscription name if the user wants a specific subscription name for the input-topic consumer.
-| `--tenant` | The sink's tenant.
-| `--timeout-ms` | The message timeout in milliseconds.
-| `--topics-pattern` | The topics pattern to consume from a list of topics under a namespace that match the pattern.
`--inputs` and `--topics-pattern` are mutually exclusive.
Add the SerDe class name for a pattern in `--customSerdeInputs` (supported for Java functions only).
-| `--update-auth-data` | Whether or not to update the auth data.
    **Default value: false.** - -### `delete` - -Delete a Pulsar IO sink connector. - -#### Usage - -```bash - -$ pulsar-admin sinks delete options - -``` - -#### Option - -|Flag|Description| -|---|---| -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - -### `get` - -Get the information about a Pulsar IO sink connector. - -#### Usage - -```bash - -$ pulsar-admin sinks get options - -``` - -#### Options -|Flag|Description| -|---|---| -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - - -### `status` - -Check the current status of a Pulsar sink. - -#### Usage - -```bash - -$ pulsar-admin sinks status options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The sink ID.
If `instance-id` is not provided, Pulsar gets the status of all instances.|
-|`--name`|The sink's name.|
-|`--namespace`|The sink's namespace.|
-|`--tenant`|The sink's tenant.|
-
-
-### `list`
-
-List all running Pulsar IO sink connectors.
-
-#### Usage
-
-```bash
-
-$ pulsar-admin sinks list options
-
-```
-
-#### Options
-
-|Flag|Description|
-|---|---|
-|`--namespace`|The sink's namespace.|
-|`--tenant`|The sink's tenant.|
-
-
-### `stop`
-
-Stop a sink instance.
-
-#### Usage
-
-```bash
-
-$ pulsar-admin sinks stop options
-
-```
-
-#### Options
-
-|Flag|Description|
-|---|---|
-|`--instance-id`|The sink instance ID.
If `instance-id` is not provided, Pulsar stops all instances.|
-|`--name`|The sink's name.|
-|`--namespace`|The sink's namespace.|
-|`--tenant`|The sink's tenant.|
-
-### `start`
-
-Start a sink instance.
-
-#### Usage
-
-```bash
-
-$ pulsar-admin sinks start options
-
-```
-
-#### Options
-
-|Flag|Description|
-|---|---|
-|`--instance-id`|The sink instance ID.
If `instance-id` is not provided, Pulsar starts all instances.|
-|`--name`|The sink's name.|
-|`--namespace`|The sink's namespace.|
-|`--tenant`|The sink's tenant.|
-
-
-### `restart`
-
-Restart a sink instance.
-
-#### Usage
-
-```bash
-
-$ pulsar-admin sinks restart options
-
-```
-
-#### Options
-
-|Flag|Description|
-|---|---|
-|`--instance-id`|The sink instance ID.
If `instance-id` is not provided, Pulsar restarts all instances.
-|`--name`|The sink's name.|
-|`--namespace`|The sink's namespace.|
-|`--tenant`|The sink's tenant.|
-
-
-### `localrun`
-
-Run a Pulsar IO sink connector locally rather than deploying it to the Pulsar cluster.
-
-#### Usage
-
-```bash
-
-$ pulsar-admin sinks localrun options
-
-```
-
-#### Options
-
-|Flag|Description|
-|----|---|
-| `-a`, `--archive` | The path to the archive file for the sink.
It also supports a URL path (http/https/file; the file protocol assumes that the file already exists on the worker host) from which the worker can download the package.
-| `--auto-ack` | Whether or not the framework will automatically acknowledge messages.
-| `--broker-service-url` | The URL for the Pulsar broker.
-|`--classname`|The sink's class name if `archive` is a file URL path (file://).
-| `--client-auth-params` | The client authentication parameter.
-| `--client-auth-plugin` | The client authentication plugin that the function process uses to connect to the broker.
-|`--cpu`|The CPU (in cores) that needs to be allocated per sink instance (applicable only to the Docker runtime).
-| `--custom-schema-inputs` | The map of input topics to schema types or class names (as a JSON string).
-| `--max-redeliver-count` | The maximum number of times that a message is redelivered before being sent to the dead letter queue.
-| `--dead-letter-topic` | The name of the dead letter topic where the failing messages are sent.
-| `--custom-serde-inputs` | The map of input topics to SerDe class names (as a JSON string).
-|`--disk`|The disk (in bytes) that needs to be allocated per sink instance (applicable only to the Docker runtime).|
-|`--hostname-verification-enabled`|Enable hostname verification.
**Default value: false**.
-| `-i`, `--inputs` | The sink's input topic or topics (multiple topics can be specified as a comma-separated list).
-|`--name`|The sink’s name.|
-|`--namespace`|The sink’s namespace.|
-|`--parallelism`|The sink’s parallelism factor, that is, the number of sink instances to run.|
-|`--processing-guarantees`|The processing guarantees (also known as delivery semantics) applied to the sink. The `--processing-guarantees` implementation in Pulsar also relies on the sink implementation.
The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE.
-|`--ram`|The RAM (in bytes) that needs to be allocated per sink instance (applicable only to the Docker runtime).|
-|`--retain-ordering` | The sink consumes and sinks messages in order.
-|`--sink-config`|Sink config key/values.
-|`--sink-config-file`|The path to a YAML config file specifying the sink’s configuration.
-|`--sink-type`|The sink's connector provider.
-|`--subs-name` | The Pulsar subscription name if the user wants a specific subscription name for the input-topic consumer.
-|`--tenant`|The sink’s tenant.
-| `--timeout-ms` | The message timeout in milliseconds.
-| `--negative-ack-redelivery-delay-ms` | The negatively-acknowledged message redelivery delay in milliseconds. |
-|`--tls-allow-insecure`|Allow an insecure TLS connection.
**Default value: false**.
-|`--tls-trust-cert-path`|The TLS trust certificate file path.
-| `--topics-pattern` | The topics pattern to consume from a list of topics under a namespace that match the pattern.
`--inputs` and `--topics-pattern` are mutually exclusive.
Add the SerDe class name for a pattern in `--customSerdeInputs` (supported for Java functions only).
-|`--use-tls`|Use a TLS connection.
    **Default value: false**. - -### `available-sinks` - -Get the list of Pulsar IO connector sinks supported by Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sinks available-sinks - -``` - -### `reload` - -Reload the available built-in connectors. - -#### Usage - -```bash - -$ pulsar-admin sinks reload - -``` - diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/io-connectors.md b/site2/website/versioned_docs/version-2.9.2-deprecated/io-connectors.md deleted file mode 100644 index 957a02a5a1964a..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/io-connectors.md +++ /dev/null @@ -1,249 +0,0 @@ ---- -id: io-connectors -title: Built-in connector -sidebar_label: "Built-in connector" -original_id: io-connectors ---- - -Pulsar distribution includes a set of common connectors that have been packaged and tested with the rest of Apache Pulsar. These connectors import and export data from some of the most commonly used data systems. - -Using any of these connectors is as easy as writing a simple connector and running the connector locally or submitting the connector to a Pulsar Functions cluster. - -## Source connector - -Pulsar has various source connectors, which are sorted alphabetically as below. - -### Canal - -* [Configuration](io-canal-source.md#configuration) - -* [Example](io-canal-source.md#usage) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/canal/src/main/java/org/apache/pulsar/io/canal/CanalStringSource.java) - - -### Debezium MySQL - -* [Configuration](io-debezium-source.md#configuration) - -* [Example](io-debezium-source.md#example-of-mysql) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/java/org/apache/pulsar/io/debezium/mysql/DebeziumMysqlSource.java) - -### Debezium PostgreSQL - -* [Configuration](io-debezium-source.md#configuration) - -* [Example](io-debezium-source.md#example-of-postgresql) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/java/org/apache/pulsar/io/debezium/postgres/DebeziumPostgresSource.java) - -### Debezium MongoDB - -* [Configuration](io-debezium-source.md#configuration) - -* [Example](io-debezium-source.md#example-of-mongodb) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mongodb/src/main/java/org/apache/pulsar/io/debezium/mongodb/DebeziumMongoDbSource.java) - -### Debezium Oracle - -* [Configuration](io-debezium-source.md#configuration) - -* [Example](io-debezium-source.md#example-of-oracle) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/oracle/src/main/java/org/apache/pulsar/io/debezium/oracle/DebeziumOracleSource.java) - -### Debezium Microsoft SQL Server - -* [Configuration](io-debezium-source.md#configuration) - -* [Example](io-debezium-source.md#example-of-microsoft-sql) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mssql/src/main/java/org/apache/pulsar/io/debezium/mssql/DebeziumMsSqlSource.java) - - -### DynamoDB - -* [Configuration](io-dynamodb-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/dynamodb/src/main/java/org/apache/pulsar/io/dynamodb/DynamoDBSource.java) - -### File - -* [Configuration](io-file-source.md#configuration) - -* [Example](io-file-source.md#usage) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/file/src/main/java/org/apache/pulsar/io/file/FileSource.java) - -### Flume - -* 
[Configuration](io-flume-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/java/org/apache/pulsar/io/flume/FlumeConnector.java) - -### Twitter firehose - -* [Configuration](io-twitter-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/twitter/src/main/java/org/apache/pulsar/io/twitter/TwitterFireHose.java) - -### Kafka - -* [Configuration](io-kafka-source.md#configuration) - -* [Example](io-kafka-source.md#usage) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSource.java) - -### Kinesis - -* [Configuration](io-kinesis-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/kinesis/src/main/java/org/apache/pulsar/io/kinesis/KinesisSource.java) - -### Netty - -* [Configuration](io-netty-source.md#configuration) - -* [Example of TCP](io-netty-source.md#tcp) - -* [Example of HTTP](io-netty-source.md#http) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/netty/src/main/java/org/apache/pulsar/io/netty/NettySource.java) - -### NSQ - -* [Configuration](io-nsq-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/nsq/src/main/java/org/apache/pulsar/io/nsq/NSQSource.java) - -### RabbitMQ - -* [Configuration](io-rabbitmq-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/rabbitmq/src/main/java/org/apache/pulsar/io/rabbitmq/RabbitMQSource.java) - -## Sink connector - -Pulsar has various sink connectors, which are sorted alphabetically as below. - -### Aerospike - -* [Configuration](io-aerospike-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/aerospike/src/main/java/org/apache/pulsar/io/aerospike/AerospikeStringSink.java) - -### Cassandra - -* [Configuration](io-cassandra-sink.md#configuration) - -* [Example](io-cassandra-sink.md#usage) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/cassandra/src/main/java/org/apache/pulsar/io/cassandra/CassandraStringSink.java) - -### ElasticSearch - -* [Configuration](io-elasticsearch-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/elastic-search/src/main/java/org/apache/pulsar/io/elasticsearch/ElasticSearchSink.java) - -### Flume - -* [Configuration](io-flume-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/java/org/apache/pulsar/io/flume/sink/StringSink.java) - -### HBase - -* [Configuration](io-hbase-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/hbase/src/main/java/org/apache/pulsar/io/hbase/HbaseAbstractConfig.java) - -### HDFS2 - -* [Configuration](io-hdfs2-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/hdfs2/src/main/java/org/apache/pulsar/io/hdfs2/AbstractHdfsConnector.java) - -### HDFS3 - -* [Configuration](io-hdfs3-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/hdfs3/src/main/java/org/apache/pulsar/io/hdfs3/AbstractHdfsConnector.java) - -### InfluxDB - -* [Configuration](io-influxdb-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/influxdb/src/main/java/org/apache/pulsar/io/influxdb/InfluxDBGenericRecordSink.java) - -### JDBC ClickHouse - 
-* [Configuration](io-jdbc-sink.md#configuration) - -* [Example](io-jdbc-sink.md#example-for-clickhouse) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/jdbc/clickhouse/src/main/java/org/apache/pulsar/io/jdbc/ClickHouseJdbcAutoSchemaSink.java) - -### JDBC MariaDB - -* [Configuration](io-jdbc-sink.md#configuration) - -* [Example](io-jdbc-sink.md#example-for-mariadb) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/jdbc/mariadb/src/main/java/org/apache/pulsar/io/jdbc/MariadbJdbcAutoSchemaSink.java) - -### JDBC PostgreSQL - -* [Configuration](io-jdbc-sink.md#configuration) - -* [Example](io-jdbc-sink.md#example-for-postgresql) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/jdbc/postgres/src/main/java/org/apache/pulsar/io/jdbc/PostgresJdbcAutoSchemaSink.java) - -### JDBC SQLite - -* [Configuration](io-jdbc-sink.md#configuration) - -* [Example](io-jdbc-sink.md#example-for-sqlite) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/jdbc/sqlite/src/main/java/org/apache/pulsar/io/jdbc/SqliteJdbcAutoSchemaSink.java) - -### Kafka - -* [Configuration](io-kafka-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSink.java) - -### Kinesis - -* [Configuration](io-kinesis-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/kinesis/src/main/java/org/apache/pulsar/io/kinesis/KinesisSink.java) - -### MongoDB - -* [Configuration](io-mongo-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/mongo/src/main/java/org/apache/pulsar/io/mongodb/MongoSink.java) - -### RabbitMQ - -* [Configuration](io-rabbitmq-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/rabbitmq/src/main/java/org/apache/pulsar/io/rabbitmq/RabbitMQSink.java) - -### Redis - -* [Configuration](io-redis-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/redis/src/main/java/org/apache/pulsar/io/redis/RedisAbstractConfig.java) - -### Solr - -* [Configuration](io-solr-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/solr/src/main/java/org/apache/pulsar/io/solr/SolrSinkConfig.java) - diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/io-debezium-source.md b/site2/website/versioned_docs/version-2.9.2-deprecated/io-debezium-source.md deleted file mode 100644 index f9b7e10a8e2824..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/io-debezium-source.md +++ /dev/null @@ -1,768 +0,0 @@ ---- -id: io-debezium-source -title: Debezium source connector -sidebar_label: "Debezium source connector" -original_id: io-debezium-source ---- - -The Debezium source connector pulls messages from MySQL or PostgreSQL -and persists the messages to Pulsar topics. - -## Configuration - -The configuration of Debezium source connector has the following properties. - -| Name | Required | Default | Description | -|------|----------|---------|-------------| -| `task.class` | true | null | A source task class that implemented in Debezium. | -| `database.hostname` | true | null | The address of a database server. | -| `database.port` | true | null | The port number of a database server.| -| `database.user` | true | null | The name of a database user that has the required privileges. 
|
-| `database.password` | true | null | The password for a database user that has the required privileges. |
-| `database.server.id` | true | null | The connector’s identifier, which must be unique within a database cluster and is similar to the database’s server-id configuration property. |
-| `database.server.name` | true | null | The logical name of a database server/cluster, which forms a namespace and is used in all the names of Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the Avro Connector is used. |
-| `database.whitelist` | false | null | A list of all databases hosted by this server that are monitored by the connector.

    This is optional, and there are other properties for listing databases and tables to include or exclude from monitoring. | -| `key.converter` | true | null | The converter provided by Kafka Connect to convert record key. | -| `value.converter` | true | null | The converter provided by Kafka Connect to convert record value. | -| `database.history` | true | null | The name of the database history class. | -| `database.history.pulsar.topic` | true | null | The name of the database history topic where the connector writes and recovers DDL statements.

**Note: this topic is for internal use only and should not be used by consumers.** |
-| `database.history.pulsar.service.url` | true | null | The Pulsar cluster service URL for the history topic. |
-| `offset.storage.topic` | true | null | The topic that records the last committed offsets that the connector successfully completed. |
-| `json-with-envelope` | false | false | Whether the message consists of the payload only. |
-
-### Converter Options
-
-1. org.apache.kafka.connect.json.JsonConverter
-
-The `json-with-envelope` config is valid only for the JsonConverter. Its default value is false; the consumer uses the schema `Schema.KeyValue(Schema.AUTO_CONSUME(), Schema.AUTO_CONSUME(), KeyValueEncodingType.SEPARATED)`, and the message consists of the payload only.
-
-If the `json-with-envelope` value is true, the consumer uses the schema `Schema.KeyValue(Schema.BYTES, Schema.BYTES)`, and the message consists of the schema and the payload.
-
-2. org.apache.pulsar.kafka.shade.io.confluent.connect.avro.AvroConverter
-
-If users select the AvroConverter, the Pulsar consumer should use the schema `Schema.KeyValue(Schema.AUTO_CONSUME(), Schema.AUTO_CONSUME(), KeyValueEncodingType.SEPARATED)`, and the message consists of the payload.
-
-### MongoDB Configuration
-| Name | Required | Default | Description |
-|------|----------|---------|-------------|
-| `mongodb.hosts` | true | null | The comma-separated list of hostname and port pairs (in the form 'host' or 'host:port') of the MongoDB servers in the replica set. The list contains a single hostname and port pair. If mongodb.members.auto.discover is set to false, the host and port pair are prefixed with the replica set name (e.g., rs0/localhost:27017). |
-| `mongodb.name` | true | null | A unique name that identifies the connector and/or the MongoDB replica set or sharded cluster that this connector monitors. Each server should be monitored by at most one Debezium connector, since this server name prefixes all persisted Kafka topics emanating from the MongoDB replica set or cluster. |
-| `mongodb.user` | true | null | The name of the database user to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. |
-| `mongodb.password` | true | null | The password to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. |
-| `mongodb.task.id` | true | null | The taskId of the MongoDB connector that attempts to use a separate task for each replica set. |
-
-
-
-## Example of MySQL
-
-You need to create a configuration file before using the Pulsar Debezium connector.
-
-### Configuration
-
-You can use one of the following methods to create a configuration file.
- -* JSON - - ```json - - { - "database.hostname": "localhost", - "database.port": "3306", - "database.user": "debezium", - "database.password": "dbz", - "database.server.id": "184054", - "database.server.name": "dbserver1", - "database.whitelist": "inventory", - "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory", - "database.history.pulsar.topic": "history-topic", - "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650", - "key.converter": "org.apache.kafka.connect.json.JsonConverter", - "value.converter": "org.apache.kafka.connect.json.JsonConverter", - "offset.storage.topic": "offset-topic" - } - - ``` - -* YAML - - You can create a `debezium-mysql-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/resources/debezium-mysql-source-config.yaml) below to the `debezium-mysql-source-config.yaml` file. - - ```yaml - - tenant: "public" - namespace: "default" - name: "debezium-mysql-source" - topicName: "debezium-mysql-topic" - archive: "connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar" - parallelism: 1 - - configs: - - ## config for mysql, docker image: debezium/example-mysql:0.8 - database.hostname: "localhost" - database.port: "3306" - database.user: "debezium" - database.password: "dbz" - database.server.id: "184054" - database.server.name: "dbserver1" - database.whitelist: "inventory" - database.history: "org.apache.pulsar.io.debezium.PulsarDatabaseHistory" - database.history.pulsar.topic: "history-topic" - database.history.pulsar.service.url: "pulsar://127.0.0.1:6650" - - ## KEY_CONVERTER_CLASS_CONFIG, VALUE_CONVERTER_CLASS_CONFIG - key.converter: "org.apache.kafka.connect.json.JsonConverter" - value.converter: "org.apache.kafka.connect.json.JsonConverter" - - ## OFFSET_STORAGE_TOPIC_CONFIG - offset.storage.topic: "offset-topic" - - ``` - -### Usage - -This example shows how to change the data of a MySQL table using the Pulsar Debezium connector. - -1. Start a MySQL server with a database from which Debezium can capture changes. - - ```bash - - $ docker run -it --rm \ - --name mysql \ - -p 3306:3306 \ - -e MYSQL_ROOT_PASSWORD=debezium \ - -e MYSQL_USER=mysqluser \ - -e MYSQL_PASSWORD=mysqlpw debezium/example-mysql:0.8 - - ``` - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - -3. Start the Pulsar Debezium connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - Make sure the nar file is available at `connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar`. 
- - ```bash - - $ bin/pulsar-admin source localrun \ - --archive connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar \ - --name debezium-mysql-source --destination-topic-name debezium-mysql-topic \ - --tenant public \ - --namespace default \ - --source-config '{"database.hostname": "localhost","database.port": "3306","database.user": "debezium","database.password": "dbz","database.server.id": "184054","database.server.name": "dbserver1","database.whitelist": "inventory","database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory","database.history.pulsar.topic": "history-topic","database.history.pulsar.service.url": "pulsar://127.0.0.1:6650","key.converter": "org.apache.kafka.connect.json.JsonConverter","value.converter": "org.apache.kafka.connect.json.JsonConverter","pulsar.service.url": "pulsar://127.0.0.1:6650","offset.storage.topic": "offset-topic"}' - - ``` - - :::note - - Currently, the destination topic (specified by the `destination-topic-name` option ) is a required configuration but it is not used for the Debezium connector to save data. The Debezium connector saves data in the following 4 types of topics: - - - One topic named with the database server name ( `database.server.name`) for storing the database metadata messages, such as `public/default/database.server.name`. - - One topic (`database.history.pulsar.topic`) for storing the database history information. The connector writes and recovers DDL statements on this topic. - - One topic (`offset.storage.topic`) for storing the offset metadata messages. The connector saves the last successfully-committed offsets on this topic. - - One per-table topic. The connector writes change events for all operations that occur in a table to a single Pulsar topic that is specific to that table. - - If the automatic topic creation is disabled on your broker, you need to manually create the above 4 types of topics and the destination topic. - - ::: - - * Use the **YAML** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin source localrun \ - --source-config-file debezium-mysql-source-config.yaml - - ``` - -4. Subscribe the topic _sub-products_ for the table _inventory.products_. - - ```bash - - $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0 - - ``` - -5. Start a MySQL client in docker. - - ```bash - - $ docker run -it --rm \ - --name mysqlterm \ - --link mysql \ - --rm mysql:5.7 sh \ - -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"' - - ``` - -6. A MySQL client pops out. - - Use the following commands to change the data of the table _products_. - - ``` - - mysql> use inventory; - mysql> show tables; - mysql> SELECT * FROM products; - mysql> UPDATE products SET name='1111111111' WHERE id=101; - mysql> UPDATE products SET name='1111111111' WHERE id=107; - - ``` - - In the terminal window of subscribing topic, you can find the data changes have been kept in the _sub-products_ topic. - -## Example of PostgreSQL - -You need to create a configuration file before using the Pulsar Debezium connector. - -### Configuration - -You can use one of the following methods to create a configuration file. 
- -* JSON - - ```json - - { - "database.hostname": "localhost", - "database.port": "5432", - "database.user": "postgres", - "database.password": "changeme", - "database.dbname": "postgres", - "database.server.name": "dbserver1", - "plugin.name": "pgoutput", - "schema.whitelist": "public", - "table.whitelist": "public.users", - "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650" - } - - ``` - -* YAML - - You can create a `debezium-postgres-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/resources/debezium-postgres-source-config.yaml) below to the `debezium-postgres-source-config.yaml` file. - - ```yaml - - tenant: "public" - namespace: "default" - name: "debezium-postgres-source" - topicName: "debezium-postgres-topic" - archive: "connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar" - parallelism: 1 - - configs: - - ## config for postgres version 10+, official docker image: postgres:<10+> - database.hostname: "localhost" - database.port: "5432" - database.user: "postgres" - database.password: "changeme" - database.dbname: "postgres" - database.server.name: "dbserver1" - plugin.name: "pgoutput" - schema.whitelist: "public" - table.whitelist: "public.users" - - ## PULSAR_SERVICE_URL_CONFIG - database.history.pulsar.service.url: "pulsar://127.0.0.1:6650" - - ``` - -Notice that `pgoutput` is a standard plugin of Postgres introduced in version 10 - [see Postgres architecture docu](https://www.postgresql.org/docs/10/logical-replication-architecture.html). You don't need to install anything, just make sure the WAL level is set to `logical` (see docker command below and [Postgres docu](https://www.postgresql.org/docs/current/runtime-config-wal.html)). - -### Usage - -This example shows how to change the data of a PostgreSQL table using the Pulsar Debezium connector. - - -1. Start a PostgreSQL server with a database from which Debezium can capture changes. - - ```bash - - $ docker run -d -it --rm \ - --name pulsar-postgres \ - -p 5432:5432 \ - -e POSTGRES_PASSWORD=changeme \ - postgres:13.3 -c wal_level=logical - - ``` - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - -3. Start the Pulsar Debezium connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - Make sure the nar file is available at `connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar`. - - ```bash - - $ bin/pulsar-admin source localrun \ - --archive connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar \ - --name debezium-postgres-source \ - --destination-topic-name debezium-postgres-topic \ - --tenant public \ - --namespace default \ - --source-config '{"database.hostname": "localhost","database.port": "5432","database.user": "postgres","database.password": "changeme","database.dbname": "postgres","database.server.name": "dbserver1","schema.whitelist": "public","table.whitelist": "public.users","pulsar.service.url": "pulsar://127.0.0.1:6650"}' - - ``` - - :::note - - Currently, the destination topic (specified by the `destination-topic-name` option ) is a required configuration but it is not used for the Debezium connector to save data. The Debezium connector saves data in the following 4 types of topics: - - - One topic named with the database server name ( `database.server.name`) for storing the database metadata messages, such as `public/default/database.server.name`. 
- - One topic (`database.history.pulsar.topic`) for storing the database history information. The connector writes and recovers DDL statements on this topic. - - One topic (`offset.storage.topic`) for storing the offset metadata messages. The connector saves the last successfully-committed offsets on this topic. - - One per-table topic. The connector writes change events for all operations that occur in a table to a single Pulsar topic that is specific to that table. - - If the automatic topic creation is disabled on your broker, you need to manually create the above 4 types of topics and the destination topic. - - ::: - - * Use the **YAML** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin source localrun \ - --source-config-file debezium-postgres-source-config.yaml - - ``` - -4. Subscribe the topic _sub-users_ for the _public.users_ table. - - ``` - - $ bin/pulsar-client consume -s "sub-users" public/default/dbserver1.public.users -n 0 - - ``` - -5. Start a PostgreSQL client in docker. - - ```bash - - $ docker exec -it pulsar-postgresql /bin/bash - - ``` - -6. A PostgreSQL client pops out. - - Use the following commands to create sample data in the table _users_. - - ``` - - psql -U postgres -h localhost -p 5432 - Password for user postgres: - - CREATE TABLE users( - id BIGINT GENERATED ALWAYS AS IDENTITY, PRIMARY KEY(id), - hash_firstname TEXT NOT NULL, - hash_lastname TEXT NOT NULL, - gender VARCHAR(6) NOT NULL CHECK (gender IN ('male', 'female')) - ); - - INSERT INTO users(hash_firstname, hash_lastname, gender) - SELECT md5(RANDOM()::TEXT), md5(RANDOM()::TEXT), CASE WHEN RANDOM() < 0.5 THEN 'male' ELSE 'female' END FROM generate_series(1, 100); - - postgres=# select * from users; - - id | hash_firstname | hash_lastname | gender - -------+----------------------------------+----------------------------------+-------- - 1 | 02bf7880eb489edc624ba637f5ab42bd | 3e742c2cc4217d8e3382cc251415b2fb | female - 2 | dd07064326bb9119189032316158f064 | 9c0e938f9eddbd5200ba348965afbc61 | male - 3 | 2c5316fdd9d6595c1cceb70eed12e80c | 8a93d7d8f9d76acfaaa625c82a03ea8b | female - 4 | 3dfa3b4f70d8cd2155567210e5043d2b | 32c156bc28f7f03ab5d28e2588a3dc19 | female - - - postgres=# UPDATE users SET hash_firstname='maxim' WHERE id=1; - UPDATE 1 - - ``` - - In the terminal window of subscribing topic, you can receive the following messages. - - ```bash - - ----- got message ----- - {"before":null,"after":{"id":1,"hash_firstname":"maxim","hash_lastname":"292113d30a3ccee0e19733dd7f88b258","gender":"male"},"source:{"version":"1.0.0.Final","connector":"postgresql","name":"foobar","ts_ms":1624045862644,"snapshot":"false","db":"postgres","schema":"public","table":"users","txId":595,"lsn":24419784,"xmin":null},"op":"u","ts_ms":1624045862648} - ...many more - - ``` - -## Example of MongoDB - -You need to create a configuration file before using the Pulsar Debezium connector. - -### Configuration - -You can use one of the following methods to create a configuration file. 
- -* JSON - - ```json - - { - "mongodb.hosts": "rs0/mongodb:27017", - "mongodb.name": "dbserver1", - "mongodb.user": "debezium", - "mongodb.password": "dbz", - "mongodb.task.id": "1", - "database.whitelist": "inventory", - "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650" - } - - ``` - -* YAML - - You can create a `debezium-mongodb-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mongodb/src/main/resources/debezium-mongodb-source-config.yaml) below to the `debezium-mongodb-source-config.yaml` file. - - ```yaml - - tenant: "public" - namespace: "default" - name: "debezium-mongodb-source" - topicName: "debezium-mongodb-topic" - archive: "connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar" - parallelism: 1 - - configs: - - ## config for pg, docker image: debezium/example-mongodb:0.10 - mongodb.hosts: "rs0/mongodb:27017", - mongodb.name: "dbserver1", - mongodb.user: "debezium", - mongodb.password: "dbz", - mongodb.task.id: "1", - database.whitelist: "inventory", - database.history.pulsar.service.url: "pulsar://127.0.0.1:6650" - - ``` - -### Usage - -This example shows how to change the data of a MongoDB table using the Pulsar Debezium connector. - - -1. Start a MongoDB server with a database from which Debezium can capture changes. - - ```bash - - $ docker pull debezium/example-mongodb:0.10 - $ docker run -d -it --rm --name pulsar-mongodb -e MONGODB_USER=mongodb -e MONGODB_PASSWORD=mongodb -p 27017:27017 debezium/example-mongodb:0.10 - - ``` - - Use the following commands to initialize the data. - - ``` bash - - ./usr/local/bin/init-inventory.sh - - ``` - - If the local host cannot access the container network, you can update the file ```/etc/hosts``` and add a rule ```127.0.0.1 6 f114527a95f```. f114527a95f is container id, you can try to get by ```docker ps -a``` - - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - -3. Start the Pulsar Debezium connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - Make sure the nar file is available at `connectors/pulsar-io-mongodb-@pulsar:version@.nar`. - - ```bash - - $ bin/pulsar-admin source localrun \ - --archive connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar \ - --name debezium-mongodb-source \ - --destination-topic-name debezium-mongodb-topic \ - --tenant public \ - --namespace default \ - --source-config '{"mongodb.hosts": "rs0/mongodb:27017","mongodb.name": "dbserver1","mongodb.user": "debezium","mongodb.password": "dbz","mongodb.task.id": "1","database.whitelist": "inventory","database.history.pulsar.service.url": "pulsar://127.0.0.1:6650"}' - - ``` - - :::note - - Currently, the destination topic (specified by the `destination-topic-name` option ) is a required configuration but it is not used for the Debezium connector to save data. The Debezium connector saves data in the following 4 types of topics: - - - One topic named with the database server name ( `database.server.name`) for storing the database metadata messages, such as `public/default/database.server.name`. - - One topic (`database.history.pulsar.topic`) for storing the database history information. The connector writes and recovers DDL statements on this topic. - - One topic (`offset.storage.topic`) for storing the offset metadata messages. The connector saves the last successfully-committed offsets on this topic. - - One per-table topic. 
-
-     If the automatic topic creation is disabled on your broker, you need to manually create the above 4 types of topics and the destination topic.
-
-     :::
-
-   * Use the **YAML** configuration file as shown previously.
-
-     ```bash
-
-     $ bin/pulsar-admin source localrun \
-     --source-config-file debezium-mongodb-source-config.yaml
-
-     ```
-
-4. Subscribe to the topic _sub-products_ for the _inventory.products_ table.
-
-   ```
-
-   $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0
-
-   ```
-
-5. Start a MongoDB client in Docker.
-
-   ```bash
-
-   $ docker exec -it pulsar-mongodb /bin/bash
-
-   ```
-
-6. A MongoDB client session opens. Use the following commands to update the data.
-
-   ```bash
-
-   mongo -u debezium -p dbz --authenticationDatabase admin localhost:27017/inventory
-   db.products.update({"_id":NumberLong(104)},{$set:{weight:1.25}})
-
-   ```
-
-   In the terminal window where you subscribed to the topic, you receive messages similar to the following.
-
-   ```bash
-
-   ----- got message -----
-   {"schema":{"type":"struct","fields":[{"type":"string","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":"104"}}, value = {"schema":{"type":"struct","fields":[{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"after"},{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"patch"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"version"},{"type":"string","optional":false,"field":"connector"},{"type":"string","optional":false,"field":"name"},{"type":"int64","optional":false,"field":"ts_ms"},{"type":"string","optional":true,"name":"io.debezium.data.Enum","version":1,"parameters":{"allowed":"true,last,false"},"default":"false","field":"snapshot"},{"type":"string","optional":false,"field":"db"},{"type":"string","optional":false,"field":"rs"},{"type":"string","optional":false,"field":"collection"},{"type":"int32","optional":false,"field":"ord"},{"type":"int64","optional":true,"field":"h"}],"optional":false,"name":"io.debezium.connector.mongo.Source","field":"source"},{"type":"string","optional":true,"field":"op"},{"type":"int64","optional":true,"field":"ts_ms"}],"optional":false,"name":"dbserver1.inventory.products.Envelope"},"payload":{"after":"{\"_id\": {\"$numberLong\": \"104\"},\"name\": \"hammer\",\"description\": \"12oz carpenter's hammer\",\"weight\": 1.25,\"quantity\": 4}","patch":null,"source":{"version":"0.10.0.Final","connector":"mongodb","name":"dbserver1","ts_ms":1573541905000,"snapshot":"true","db":"inventory","rs":"rs0","collection":"products","ord":1,"h":4983083486544392763},"op":"r","ts_ms":1573541909761}}.
-
-   ```
-
-## Example of Oracle
-
-### Packaging
-
-The Oracle connector does not include the Oracle JDBC driver, so you need to package the driver with the connector.
-The main reasons for not including the driver are the variety of driver versions and Oracle licensing. It is recommended to use the driver provided with your Oracle DB installation, or you can [download](https://www.oracle.com/database/technologies/appdev/jdbc.html) one.
-The integration tests have an [example](https://github.com/apache/pulsar/blob/e2bc52d40450fa00af258c4432a5b71d50a5c6e0/tests/docker-images/latest-version-image/Dockerfile#L110-L122) of packaging the driver into the connector nar file.
-
-### Configuration
-
-Debezium [requires](https://debezium.io/documentation/reference/1.5/connectors/oracle.html#oracle-overview) Oracle DB with LogMiner or XStream API enabled.
-The supported options and the steps for enabling them vary from version to version of Oracle DB.
-The steps outlined in the [documentation](https://debezium.io/documentation/reference/1.5/connectors/oracle.html#oracle-overview) and used in the [integration test](https://github.com/apache/pulsar/blob/master/tests/integration/src/test/java/org/apache/pulsar/tests/integration/io/sources/debezium/DebeziumOracleDbSourceTester.java) may or may not work for the version and edition of Oracle DB you are using.
-Please refer to the [documentation for Oracle DB](https://docs.oracle.com/en/database/oracle/oracle-database/) as needed.
-
-As with other connectors, you can use JSON or YAML to configure the connector. For example, with YAML you can create a `debezium-oracle-source-config.yaml` file like the one below.
-
-* JSON
-
-```json
-
-{
-    "database.hostname": "localhost",
-    "database.port": "1521",
-    "database.user": "dbzuser",
-    "database.password": "dbz",
-    "database.dbname": "XE",
-    "database.server.name": "XE",
-    "schema.exclude.list": "system,dbzuser",
-    "snapshot.mode": "initial",
-    "topic.namespace": "public/default",
-    "task.class": "io.debezium.connector.oracle.OracleConnectorTask",
-    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
-    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
-    "typeClassName": "org.apache.pulsar.common.schema.KeyValue",
-    "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory",
-    "database.tcpKeepAlive": "true",
-    "decimal.handling.mode": "double",
-    "database.history.pulsar.topic": "debezium-oracle-source-history-topic",
-    "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650"
-}
-
-```
-
-* YAML
-
-```yaml
-
-tenant: "public"
-namespace: "default"
-name: "debezium-oracle-source"
-topicName: "debezium-oracle-topic"
-parallelism: 1
-
-className: "org.apache.pulsar.io.debezium.oracle.DebeziumOracleSource"
-database.dbname: "XE"
-
-configs:
-    database.hostname: "localhost"
-    database.port: "1521"
-    database.user: "dbzuser"
-    database.password: "dbz"
-    database.dbname: "XE"
-    database.server.name: "XE"
-    schema.exclude.list: "system,dbzuser"
-    snapshot.mode: "initial"
-    topic.namespace: "public/default"
-    task.class: "io.debezium.connector.oracle.OracleConnectorTask"
-    value.converter: "org.apache.kafka.connect.json.JsonConverter"
-    key.converter: "org.apache.kafka.connect.json.JsonConverter"
-    typeClassName: "org.apache.pulsar.common.schema.KeyValue"
-    database.history: "org.apache.pulsar.io.debezium.PulsarDatabaseHistory"
-    database.tcpKeepAlive: "true"
-    decimal.handling.mode: "double"
-    database.history.pulsar.topic: "debezium-oracle-source-history-topic"
-    database.history.pulsar.service.url: "pulsar://127.0.0.1:6650"
-
-```
-
-For the full list of configuration properties supported by Debezium, see [Debezium Connector for Oracle](https://debezium.io/documentation/reference/1.5/connectors/oracle.html#oracle-connector-properties).
-
-## Example of Microsoft SQL
-
-### Configuration
-
-Debezium [requires](https://debezium.io/documentation/reference/1.5/connectors/sqlserver.html#sqlserver-overview) SQL Server with CDC enabled.
-The steps outlined in the [documentation](https://debezium.io/documentation/reference/1.5/connectors/sqlserver.html#setting-up-sqlserver) and used in the [integration test](https://github.com/apache/pulsar/blob/master/tests/integration/src/test/java/org/apache/pulsar/tests/integration/io/sources/debezium/DebeziumMsSqlSourceTester.java) show how to set this up.
-For more information, see [Enable and disable change data capture in Microsoft SQL Server](https://docs.microsoft.com/en-us/sql/relational-databases/track-changes/enable-and-disable-change-data-capture-sql-server).
-
-As with other connectors, you can use JSON or YAML to configure the connector.
-
-* JSON
-
-```json
-
-{
-    "database.hostname": "localhost",
-    "database.port": "1433",
-    "database.user": "sa",
-    "database.password": "MyP@ssw0rd!",
-    "database.dbname": "MyTestDB",
-    "database.server.name": "mssql",
-    "snapshot.mode": "schema_only",
-    "topic.namespace": "public/default",
-    "task.class": "io.debezium.connector.sqlserver.SqlServerConnectorTask",
-    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
-    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
-    "typeClassName": "org.apache.pulsar.common.schema.KeyValue",
-    "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory",
-    "database.tcpKeepAlive": "true",
-    "decimal.handling.mode": "double",
-    "database.history.pulsar.topic": "debezium-mssql-source-history-topic",
-    "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650"
-}
-
-```
-
-* YAML
-
-```yaml
-
-tenant: "public"
-namespace: "default"
-name: "debezium-mssql-source"
-topicName: "debezium-mssql-topic"
-parallelism: 1
-
-className: "org.apache.pulsar.io.debezium.mssql.DebeziumMsSqlSource"
-database.dbname: "mssql"
-
-configs:
-    database.hostname: "localhost"
-    database.port: "1433"
-    database.user: "sa"
-    database.password: "MyP@ssw0rd!"
-    database.dbname: "MyTestDB"
-    database.server.name: "mssql"
-    snapshot.mode: "schema_only"
-    topic.namespace: "public/default"
-    task.class: "io.debezium.connector.sqlserver.SqlServerConnectorTask"
-    value.converter: "org.apache.kafka.connect.json.JsonConverter"
-    key.converter: "org.apache.kafka.connect.json.JsonConverter"
-    typeClassName: "org.apache.pulsar.common.schema.KeyValue"
-    database.history: "org.apache.pulsar.io.debezium.PulsarDatabaseHistory"
-    database.tcpKeepAlive: "true"
-    decimal.handling.mode: "double"
-    database.history.pulsar.topic: "debezium-mssql-source-history-topic"
-    database.history.pulsar.service.url: "pulsar://127.0.0.1:6650"
-
-```
-
-For the full list of configuration properties supported by Debezium, see [Debezium Connector for MS SQL](https://debezium.io/documentation/reference/1.5/connectors/sqlserver.html#sqlserver-connector-properties).
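-
-Once one of these Debezium sources is running, you can inspect the change events from any Pulsar client. The following is a minimal Java sketch for doing so; it is not part of the connector. The service URL matches the local standalone setup used throughout these examples, while the topic name `mssql.MyTestDB.dbo.customers` and the class name are illustrative assumptions, since the actual per-table topic name depends on your `database.server.name`, schema, and table names.
-
-```java
-
-import java.nio.charset.StandardCharsets;
-import org.apache.pulsar.client.api.Consumer;
-import org.apache.pulsar.client.api.Message;
-import org.apache.pulsar.client.api.PulsarClient;
-
-public class ChangeEventTail {
-    public static void main(String[] args) throws Exception {
-        // Assumes a local standalone broker, as in the examples above.
-        PulsarClient client = PulsarClient.builder()
-                .serviceUrl("pulsar://127.0.0.1:6650")
-                .build();
-        // Hypothetical per-table topic; Debezium writes one topic per table.
-        Consumer<byte[]> consumer = client.newConsumer()
-                .topic("mssql.MyTestDB.dbo.customers")
-                .subscriptionName("sub-change-events")
-                .subscribe();
-        while (true) {
-            Message<byte[]> msg = consumer.receive();
-            // Each message value is a JSON-encoded change event.
-            System.out.println(new String(msg.getData(), StandardCharsets.UTF_8));
-            consumer.acknowledge(msg);
-        }
-    }
-}
-
-```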
-
-## FAQ
-
-### The Debezium PostgreSQL connector hangs when creating a snapshot
-
-```
-
-#18 prio=5 os_prio=31 tid=0x00007fd83096f800 nid=0xa403 waiting on condition [0x000070000f534000]
-    java.lang.Thread.State: WAITING (parking)
-     at sun.misc.Unsafe.park(Native Method)
-     - parking to wait for  <0x00000007ab025a58> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
-     at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
-     at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
-     at java.util.concurrent.LinkedBlockingDeque.putLast(LinkedBlockingDeque.java:396)
-     at java.util.concurrent.LinkedBlockingDeque.put(LinkedBlockingDeque.java:649)
-     at io.debezium.connector.base.ChangeEventQueue.enqueue(ChangeEventQueue.java:132)
-     at io.debezium.connector.postgresql.PostgresConnectorTask$Lambda$203/385424085.accept(Unknown Source)
-     at io.debezium.connector.postgresql.RecordsSnapshotProducer.sendCurrentRecord(RecordsSnapshotProducer.java:402)
-     at io.debezium.connector.postgresql.RecordsSnapshotProducer.readTable(RecordsSnapshotProducer.java:321)
-     at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$takeSnapshot$6(RecordsSnapshotProducer.java:226)
-     at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$240/1347039967.accept(Unknown Source)
-     at io.debezium.jdbc.JdbcConnection.queryWithBlockingConsumer(JdbcConnection.java:535)
-     at io.debezium.connector.postgresql.RecordsSnapshotProducer.takeSnapshot(RecordsSnapshotProducer.java:224)
-     at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$start$0(RecordsSnapshotProducer.java:87)
-     at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$206/589332928.run(Unknown Source)
-     at java.util.concurrent.CompletableFuture.uniRun(CompletableFuture.java:705)
-     at java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:717)
-     at java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2010)
-     at io.debezium.connector.postgresql.RecordsSnapshotProducer.start(RecordsSnapshotProducer.java:87)
-     at io.debezium.connector.postgresql.PostgresConnectorTask.start(PostgresConnectorTask.java:126)
-     at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:47)
-     at org.apache.pulsar.io.kafka.connect.KafkaConnectSource.open(KafkaConnectSource.java:127)
-     at org.apache.pulsar.io.debezium.DebeziumSource.open(DebeziumSource.java:100)
-     at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupInput(JavaInstanceRunnable.java:690)
-     at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupJavaInstance(JavaInstanceRunnable.java:200)
-     at org.apache.pulsar.functions.instance.JavaInstanceRunnable.run(JavaInstanceRunnable.java:230)
-     at java.lang.Thread.run(Thread.java:748)
-
-```
-
-If you encounter the above problem when synchronizing data, see [this issue](https://github.com/apache/pulsar/issues/4075) and add the following configuration to the configuration file:
-
-```
-
-max.queue.size=
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/io-debug.md b/site2/website/versioned_docs/version-2.9.2-deprecated/io-debug.md
deleted file mode 100644
index 844e101d00d2a7..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/io-debug.md
+++ /dev/null
@@ -1,407 +0,0 @@
----
-id: io-debug
-title: How to debug Pulsar connectors
-sidebar_label: "Debug"
-original_id: io-debug
----
-This guide explains how to debug connectors in localrun mode or cluster mode and provides a debugging checklist.
-To better demonstrate how to debug Pulsar connectors, this guide takes the Mongo sink connector as an example.
-
-**Deploy a Mongo sink environment**
-1. Start a Mongo service.
-
-   ```bash
-
-   docker pull mongo:4
-   docker run -d -p 27017:27017 --name pulsar-mongo -v $PWD/data:/data/db mongo:4
-
-   ```
-
-2. Create a DB and a collection.
-
-   ```bash
-
-   docker exec -it pulsar-mongo /bin/bash
-   mongo
-   > use pulsar
-   > db.createCollection('messages')
-   > exit
-
-   ```
-
-3. Start Pulsar standalone.
-
-   ```bash
-
-   docker pull apachepulsar/pulsar:2.4.0
-   docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --link pulsar-mongo --name pulsar-mongo-standalone apachepulsar/pulsar:2.4.0 bin/pulsar standalone
-
-   ```
-
-4. Configure the Mongo sink with the `mongo-sink-config.yaml` file.
-
-   ```yaml
-
-   configs:
-    mongoUri: "mongodb://pulsar-mongo:27017"
-    database: "pulsar"
-    collection: "messages"
-    batchSize: 2
-    batchTimeMs: 500
-
-   ```
-
-   ```bash
-
-   docker cp mongo-sink-config.yaml pulsar-mongo-standalone:/pulsar/
-
-   ```
-
-5. Download the Mongo sink nar package.
-
-   ```bash
-
-   docker exec -it pulsar-mongo-standalone /bin/bash
-   curl -O http://apache.01link.hk/pulsar/pulsar-2.4.0/connectors/pulsar-io-mongo-2.4.0.nar
-
-   ```
-
-## Debug in localrun mode
-Start the Mongo sink in localrun mode using the `localrun` command.
-:::tip
-
-For more information about the `localrun` command, see [`localrun`](reference-connector-admin.md/#localrun-1).
-
-:::
-
-```bash
-
-./bin/pulsar-admin sinks localrun \
---archive pulsar-io-mongo-2.4.0.nar \
---tenant public --namespace default \
---inputs test-mongo \
---name pulsar-mongo-sink \
---sink-config-file mongo-sink-config.yaml \
---parallelism 1
-
-```
-
-### Use connector log
-Use one of the following methods to get a connector log in localrun mode:
-* After executing the `localrun` command, the **log is automatically printed on the console**.
-* The log is located at:
-
-  ```bash
-
-  logs/functions/tenant/namespace/function-name/function-name-instance-id.log
-
-  ```
-
-  **Example**
-
-  The path of the Mongo sink connector is:
-
-  ```bash
-
-  logs/functions/public/default/pulsar-mongo-sink/pulsar-mongo-sink-0.log
-
-  ```
-
-To explain the log information clearly, the following breaks the large block of log information into small blocks and adds a description for each block.
-* This piece of log information shows the storage path of the nar package after decompression.
-
-  ```
-
-  08:21:54.132 [main] INFO  org.apache.pulsar.common.nar.NarClassLoader - Created class loader with paths: [file:/tmp/pulsar-nar/pulsar-io-mongo-2.4.0.nar-unpacked/, file:/tmp/pulsar-nar/pulsar-io-mongo-2.4.0.nar-unpacked/META-INF/bundled-dependencies/,
-
-  ```
-
-  :::tip
-
-  If a `class cannot be found` exception is thrown, check whether the nar file is decompressed in the folder `file:/tmp/pulsar-nar/pulsar-io-mongo-2.4.0.nar-unpacked/META-INF/bundled-dependencies/` or not.
-
-  :::
-
-* This piece of log information illustrates the basic information about the Mongo sink connector, such as tenant, namespace, name, parallelism, resources, and so on, which can be used to **check whether the Mongo sink connector is configured correctly or not**.
- - ```bash - - 08:21:55.390 [main] INFO org.apache.pulsar.functions.runtime.ThreadRuntime - ThreadContainer starting function with instance config InstanceConfig(instanceId=0, functionId=853d60a1-0c48-44d5-9a5c-6917386476b2, functionVersion=c2ce1458-b69e-4175-88c0-a0a856a2be8c, functionDetails=tenant: "public" - namespace: "default" - name: "pulsar-mongo-sink" - className: "org.apache.pulsar.functions.api.utils.IdentityFunction" - autoAck: true - parallelism: 1 - source { - typeClassName: "[B" - inputSpecs { - key: "test-mongo" - value { - } - } - cleanupSubscription: true - } - sink { - className: "org.apache.pulsar.io.mongodb.MongoSink" - configs: "{\"mongoUri\":\"mongodb://pulsar-mongo:27017\",\"database\":\"pulsar\",\"collection\":\"messages\",\"batchSize\":2,\"batchTimeMs\":500}" - typeClassName: "[B" - } - resources { - cpu: 1.0 - ram: 1073741824 - disk: 10737418240 - } - componentType: SINK - , maxBufferedTuples=1024, functionAuthenticationSpec=null, port=38459, clusterName=local) - - ``` - -* This piece of log information demonstrates the status of the connections to Mongo and configuration information. - - ```bash - - 08:21:56.231 [cluster-ClusterId{value='5d6396a3c9e77c0569ff00eb', description='null'}-pulsar-mongo:27017] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:1, serverValue:8}] to pulsar-mongo:27017 - 08:21:56.326 [cluster-ClusterId{value='5d6396a3c9e77c0569ff00eb', description='null'}-pulsar-mongo:27017] INFO org.mongodb.driver.cluster - Monitor thread successfully connected to server with description ServerDescription{address=pulsar-mongo:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[4, 2, 0]}, minWireVersion=0, maxWireVersion=8, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=89058800} - - ``` - -* This piece of log information explains the configuration of consumers and clients, including the topic name, subscription name, subscription type, and so on. 
- - ```bash - - 08:21:56.719 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl - Starting Pulsar consumer status recorder with config: { - "topicNames" : [ "test-mongo" ], - "topicsPattern" : null, - "subscriptionName" : "public/default/pulsar-mongo-sink", - "subscriptionType" : "Shared", - "receiverQueueSize" : 1000, - "acknowledgementsGroupTimeMicros" : 100000, - "negativeAckRedeliveryDelayMicros" : 60000000, - "maxTotalReceiverQueueSizeAcrossPartitions" : 50000, - "consumerName" : null, - "ackTimeoutMillis" : 0, - "tickDurationMillis" : 1000, - "priorityLevel" : 0, - "cryptoFailureAction" : "CONSUME", - "properties" : { - "application" : "pulsar-sink", - "id" : "public/default/pulsar-mongo-sink", - "instance_id" : "0" - }, - "readCompacted" : false, - "subscriptionInitialPosition" : "Latest", - "patternAutoDiscoveryPeriod" : 1, - "regexSubscriptionMode" : "PersistentOnly", - "deadLetterPolicy" : null, - "autoUpdatePartitions" : true, - "replicateSubscriptionState" : false, - "resetIncludeHead" : false - } - 08:21:56.726 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl - Pulsar client config: { - "serviceUrl" : "pulsar://localhost:6650", - "authPluginClassName" : null, - "authParams" : null, - "operationTimeoutMs" : 30000, - "statsIntervalSeconds" : 60, - "numIoThreads" : 1, - "numListenerThreads" : 1, - "connectionsPerBroker" : 1, - "useTcpNoDelay" : true, - "useTls" : false, - "tlsTrustCertsFilePath" : null, - "tlsAllowInsecureConnection" : false, - "tlsHostnameVerificationEnable" : false, - "concurrentLookupRequest" : 5000, - "maxLookupRequest" : 50000, - "maxNumberOfRejectedRequestPerConnection" : 50, - "keepAliveIntervalSeconds" : 30, - "connectionTimeoutMs" : 10000, - "requestTimeoutMs" : 60000, - "defaultBackoffIntervalNanos" : 100000000, - "maxBackoffIntervalNanos" : 30000000000 - } - - ``` - -## Debug in cluster mode -You can use the following methods to debug a connector in cluster mode: -* [Use connector log](#use-connector-log) -* [Use admin CLI](#use-admin-cli) -### Use connector log -In cluster mode, multiple connectors can run on a worker. To find the log path of a specified connector, use the `workerId` to locate the connector log. -### Use admin CLI -Pulsar admin CLI helps you debug Pulsar connectors with the following subcommands: -* [`get`](#get) - -* [`status`](#status) -* [`topics stats`](#topics-stats) - -**Create a Mongo sink** - -```bash - -./bin/pulsar-admin sinks create \ ---archive pulsar-io-mongo-2.4.0.nar \ ---tenant public \ ---namespace default \ ---inputs test-mongo \ ---name pulsar-mongo-sink \ ---sink-config-file mongo-sink-config.yaml \ ---parallelism 1 - -``` - -### `get` -Use the `get` command to get the basic information about the Mongo sink connector, such as tenant, namespace, name, parallelism, and so on. 
-
-```bash
-
-./bin/pulsar-admin sinks get --tenant public --namespace default --name pulsar-mongo-sink
-{
-  "tenant": "public",
-  "namespace": "default",
-  "name": "pulsar-mongo-sink",
-  "className": "org.apache.pulsar.io.mongodb.MongoSink",
-  "inputSpecs": {
-    "test-mongo": {
-      "isRegexPattern": false
-    }
-  },
-  "configs": {
-    "mongoUri": "mongodb://pulsar-mongo:27017",
-    "database": "pulsar",
-    "collection": "messages",
-    "batchSize": 2.0,
-    "batchTimeMs": 500.0
-  },
-  "parallelism": 1,
-  "processingGuarantees": "ATLEAST_ONCE",
-  "retainOrdering": false,
-  "autoAck": true
-}
-
-```
-
-:::tip
-
-For more information about the `get` command, see [`get`](reference-connector-admin.md/#get-1).
-
-:::
-
-### `status`
-Use the `status` command to get the current status of the Mongo sink connector, such as the number of instances, the number of running instances, the instance ID, the worker ID, and so on.
-
-```bash
-
-./bin/pulsar-admin sinks status \
---tenant public \
---namespace default \
---name pulsar-mongo-sink
-{
-"numInstances" : 1,
-"numRunning" : 1,
-"instances" : [ {
-  "instanceId" : 0,
-  "status" : {
-    "running" : true,
-    "error" : "",
-    "numRestarts" : 0,
-    "numReadFromPulsar" : 0,
-    "numSystemExceptions" : 0,
-    "latestSystemExceptions" : [ ],
-    "numSinkExceptions" : 0,
-    "latestSinkExceptions" : [ ],
-    "numWrittenToSink" : 0,
-    "lastReceivedTime" : 0,
-    "workerId" : "c-standalone-fw-5d202832fd18-8080"
-  }
-} ]
-}
-
-```
-
-:::tip
-
-For more information about the `status` command, see [`status`](reference-connector-admin.md/#status-1).
-If there are multiple connectors running on a worker, `workerId` can locate the worker on which the specified connector is running.
-
-:::
-
-### `topics stats`
-Use the `topics stats` command to get the stats for a topic and its connected producers and consumers, such as whether the topic has received messages or not, whether there is a backlog of messages or not, the available permits, and other key information. All rates are computed over a 1-minute window and are relative to the last completed 1-minute period.
-
-```bash
-
-./bin/pulsar-admin topics stats test-mongo
-{
-  "msgRateIn" : 0.0,
-  "msgThroughputIn" : 0.0,
-  "msgRateOut" : 0.0,
-  "msgThroughputOut" : 0.0,
-  "averageMsgSize" : 0.0,
-  "storageSize" : 1,
-  "publishers" : [ ],
-  "subscriptions" : {
-    "public/default/pulsar-mongo-sink" : {
-      "msgRateOut" : 0.0,
-      "msgThroughputOut" : 0.0,
-      "msgRateRedeliver" : 0.0,
-      "msgBacklog" : 0,
-      "blockedSubscriptionOnUnackedMsgs" : false,
-      "msgDelayed" : 0,
-      "unackedMessages" : 0,
-      "type" : "Shared",
-      "msgRateExpired" : 0.0,
-      "consumers" : [ {
-        "msgRateOut" : 0.0,
-        "msgThroughputOut" : 0.0,
-        "msgRateRedeliver" : 0.0,
-        "consumerName" : "dffdd",
-        "availablePermits" : 999,
-        "unackedMessages" : 0,
-        "blockedConsumerOnUnackedMsgs" : false,
-        "metadata" : {
-          "instance_id" : "0",
-          "application" : "pulsar-sink",
-          "id" : "public/default/pulsar-mongo-sink"
-        },
-        "connectedSince" : "2019-08-26T08:48:07.582Z",
-        "clientVersion" : "2.4.0",
-        "address" : "/172.17.0.3:57790"
-      } ],
-      "isReplicated" : false
-    }
-  },
-  "replication" : { },
-  "deduplicationStatus" : "Disabled"
-}
-
-```
-
-:::tip
-
-For more information about the `topics stats` command, see [`topics stats`](http://pulsar.apache.org/docs/en/pulsar-admin/#stats-1).
-
-:::
-
-## Checklist
-This checklist indicates the major areas to check when you debug connectors. It is a reminder of what to look for to ensure a thorough review and an evaluation tool to get the status of connectors.
-* Does Pulsar start successfully? - -* Does the external service run normally? - -* Is the nar package complete? - -* Is the connector configuration file correct? - -* In localrun mode, run a connector and check the printed information (connector log) on the console. - -* In cluster mode: - - * Use the `get` command to get the basic information. - - * Use the `status` command to get the current status. - * Use the `topics stats` command to get the stats for a specified topic and its connected producers and consumers. - - * Check the connector log. -* Enter into the external system and verify the result. diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/io-develop.md b/site2/website/versioned_docs/version-2.9.2-deprecated/io-develop.md deleted file mode 100644 index d6f4f8261ac820..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/io-develop.md +++ /dev/null @@ -1,421 +0,0 @@ ---- -id: io-develop -title: How to develop Pulsar connectors -sidebar_label: "Develop" -original_id: io-develop ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -This guide describes how to develop Pulsar connectors to move data -between Pulsar and other systems. - -Pulsar connectors are special [Pulsar Functions](functions-overview.md), so creating -a Pulsar connector is similar to creating a Pulsar function. - -Pulsar connectors come in two types: - -| Type | Description | Example -|---|---|--- -{@inject: github:Source:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java}|Import data from another system to Pulsar.|[RabbitMQ source connector](io-rabbitmq.md) imports the messages of a RabbitMQ queue to a Pulsar topic. -{@inject: github:Sink:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java}|Export data from Pulsar to another system.|[Kinesis sink connector](io-kinesis.md) exports the messages of a Pulsar topic to a Kinesis stream. - -## Develop - -You can develop Pulsar source connectors and sink connectors. - -### Source - -Developing a source connector is to implement the {@inject: github:Source:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} -interface, which means you need to implement the {@inject: github:open:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} method and the {@inject: github:read:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} method. - -1. Implement the {@inject: github:open:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} method. - - ```java - - /** - * Open connector with configuration - * - * @param config initialization config - * @param sourceContext - * @throws Exception IO type exceptions when opening a connector - */ - void open(final Map config, SourceContext sourceContext) throws Exception; - - ``` - - This method is called when the source connector is initialized. - - In this method, you can retrieve all connector specific settings through the passed-in `config` parameter and initialize all necessary resources. - - For example, a Kafka connector can create a Kafka client in this `open` method. - - Besides, Pulsar runtime also provides a `SourceContext` for the - connector to access runtime resources for tasks like collecting metrics. The implementation can save the `SourceContext` for future use. - -2. Implement the {@inject: github:read:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} method. - - ```java - - /** - * Reads the next message from source. 
-    * If source does not have any new messages, this call should block.
-    * @return next message from source. The return result should never be null
-    * @throws Exception
-    */
-    Record<T> read() throws Exception;
-
-   ```
-
-   If there is nothing to return, the implementation should block rather than return `null`.
-
-   The returned {@inject: github:Record:/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/Record.java} should encapsulate the following information, which is needed by the Pulsar IO runtime.
-
-   * {@inject: github:Record:/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/Record.java} should provide the following variables:
-
-     |Variable|Required|Description
-     |---|---|---
-     `TopicName`|No|The Pulsar topic name from which the record originated.
-     `Key`|No|Messages can optionally be tagged with keys.<br /><br />For more information, see [Routing modes](concepts-messaging.md#routing-modes).|
-     `Value`|Yes|Actual data of the record.
-     `EventTime`|No|Event time of the record from the source.
-     `PartitionId`|No|If the record originated from a partitioned source, it returns its `PartitionId`.<br /><br />`PartitionId` is used as a part of the unique identifier by the Pulsar IO runtime to deduplicate messages and achieve the exactly-once processing guarantee.
-     `RecordSequence`|No|If the record originated from a sequential source, it returns its `RecordSequence`.<br /><br />`RecordSequence` is used as a part of the unique identifier by the Pulsar IO runtime to deduplicate messages and achieve the exactly-once processing guarantee.
-     `Properties`|No|If the record carries user-defined properties, it returns those properties.
-     `DestinationTopic`|No|The topic to which the message should be written.
-     `Message`|No|A class which carries data sent by users.<br /><br />For more information, see [Message.java](https://github.com/apache/pulsar/blob/master/pulsar-client-api/src/main/java/org/apache/pulsar/client/api/Message.java).|
-
-   * {@inject: github:Record:/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/Record.java} should provide the following methods:
-
-     Method|Description
-     |---|---
-     `ack` |Acknowledge that the record is fully processed.
-     `fail`|Indicate that the record fails to be processed.
-
-## Handle schema information
-
-Pulsar IO automatically handles the schema and provides a strongly typed API based on Java generics.
-If you know the schema type that you are producing, you can declare the Java class relative to that type in your source declaration.
-
-```
-
-public class MySource implements Source<String> {
-    public Record<String> read() {}
-}
-
-```
-
-If you want to implement a source that works with any schema, you can go with `byte[]` (or `ByteBuffer`) and use Schema.AUTO_PRODUCE_BYTES().
-
-```
-
-public class MySource implements Source<byte[]> {
-    public Record<byte[]> read() {
-
-        Schema wantedSchema = ....
-        Record<byte[]> myRecord = new MyRecordImplementation();
-        ....
-    }
-    class MyRecordImplementation implements Record<byte[]> {
-         public byte[] getValue() {
-            return ....encoded byte[]...that represents the value
-         }
-         public Schema<byte[]> getSchema() {
-             return Schema.AUTO_PRODUCE_BYTES(wantedSchema);
-         }
-    }
-}
-
-```
-
-To handle the `KeyValue` type properly, follow these guidelines for your record implementation:
-- It must implement the {@inject: github:KVRecord:/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/KVRecord.java} interface and implement `getKeySchema`, `getValueSchema`, and `getKeyValueEncodingType`
-- It must return a `KeyValue` object as `Record.getValue()`
-- It may return null in `Record.getSchema()`
-
-When the Pulsar IO runtime encounters a `KVRecord`, it automatically applies the following changes:
-- Sets the `KeyValueSchema` properly
-- Encodes the message key and the message value according to the `KeyValueEncodingType` (SEPARATED or INLINE)
-
-:::tip
-
-For more information about **how to create a source connector**, see {@inject: github:KafkaSource:/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSource.java}.
-
-:::
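-
-To make the blocking `read()` contract above concrete, here is a minimal, self-contained sketch of a source backed by an in-memory queue. It is illustrative only: the class name and the queue are assumptions for this example, and a real connector would feed the queue from the external system it connects to.
-
-```java
-
-import java.util.Map;
-import java.util.concurrent.BlockingQueue;
-import java.util.concurrent.LinkedBlockingQueue;
-import org.apache.pulsar.functions.api.Record;
-import org.apache.pulsar.io.core.Source;
-import org.apache.pulsar.io.core.SourceContext;
-
-public class QueueBackedSource implements Source<String> {
-    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
-
-    @Override
-    public void open(Map<String, Object> config, SourceContext sourceContext) throws Exception {
-        // A real connector would connect to the external system here and
-        // push incoming data into the queue from a background thread.
-    }
-
-    @Override
-    public Record<String> read() throws Exception {
-        // Blocks until data is available; never returns null.
-        final String value = queue.take();
-        return new Record<String>() {
-            @Override
-            public String getValue() {
-                return value;
-            }
-        };
-    }
-
-    @Override
-    public void close() throws Exception {
-        // Release any resources acquired in open().
-    }
-}
-
-```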
-
-### Sink
-
-Developing a sink connector **is similar to** developing a source connector, that is, you need to implement the {@inject: github:Sink:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} interface, which means implementing the {@inject: github:open:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} method and the {@inject: github:write:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} method.
-
-1. Implement the {@inject: github:open:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} method.
-
-   ```java
-
-   /**
-    * Open connector with configuration
-    *
-    * @param config initialization config
-    * @param sinkContext
-    * @throws Exception IO type exceptions when opening a connector
-    */
-   void open(final Map<String, Object> config, SinkContext sinkContext) throws Exception;
-
-   ```
-
-2. Implement the {@inject: github:write:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} method.
-
-   ```java
-
-   /**
-    * Write a message to Sink
-    * @param record record to write to sink
-    * @throws Exception
-    */
-   void write(Record<T> record) throws Exception;
-
-   ```
-
-   During the implementation, you can decide how to write the `Value` and the `Key` to the actual sink, and leverage all the provided information such as `PartitionId` and `RecordSequence` to achieve different processing guarantees.
-
-   You also need to ack records (if messages are sent successfully) or fail records (if messages fail to be sent).
-
-## Handling schema information
-
-Pulsar IO automatically handles the schema and provides a strongly typed API based on Java generics.
-If you know the schema type that you are consuming from, you can declare the Java class relative to that type in your sink declaration.
-
-```
-
-public class MySink implements Sink<String> {
-    public void write(Record<String> record) {}
-}
-
-```
-
-If you want to implement a sink that works with any schema, you can go with the special `GenericObject` interface.
-
-```
-
-public class MySink implements Sink<GenericObject> {
-    public void write(Record<GenericObject> record) {
-        Schema<GenericObject> schema = record.getSchema();
-        GenericObject genericObject = record.getValue();
-        if (genericObject != null) {
-            SchemaType type = genericObject.getSchemaType();
-            Object nativeObject = genericObject.getNativeObject();
-            ...
-        }
-        ....
-    }
-}
-
-```
-
-In the case of AVRO, JSON, and Protobuf records (schemaType=AVRO,JSON,PROTOBUF_NATIVE), you can cast the `genericObject` variable to `GenericRecord` and use the `getFields()` and `getField()` API.
-You can access the native AVRO record using `genericObject.getNativeObject()`.
-
-In the case of the KeyValue type, you can access both the schema for the key and the schema for the value using this code.
-
-```
-
-public class MySink implements Sink<GenericObject> {
-    public void write(Record<GenericObject> record) {
-        Schema<GenericObject> schema = record.getSchema();
-        GenericObject genericObject = record.getValue();
-        SchemaType type = genericObject.getSchemaType();
-        Object nativeObject = genericObject.getNativeObject();
-        if (type == SchemaType.KEY_VALUE) {
-            KeyValue keyValue = (KeyValue) nativeObject;
-            Object key = keyValue.getKey();
-            Object value = keyValue.getValue();
-
-            KeyValueSchema keyValueSchema = (KeyValueSchema) schema;
-            Schema keySchema = keyValueSchema.getKeySchema();
-            Schema valueSchema = keyValueSchema.getValueSchema();
-        }
-        ....
-    }
-}
-
-```
-
-## Test
-
-Testing connectors can be challenging because Pulsar IO connectors interact with two systems that may be difficult to mock: Pulsar and the system to which the connector is connecting.
-
-It is recommended to write special tests that verify the connector functionality as below while mocking the external service.
-
-### Unit test
-
-You can create unit tests for your connector.
-
-### Integration test
-
-Once you have written sufficient unit tests, you can add separate integration tests to verify end-to-end functionality.
-
-Pulsar uses [testcontainers](https://www.testcontainers.org/) **for all integration tests**.
-
-:::tip
-
-For more information about **how to create integration tests for Pulsar connectors**, see {@inject: github:IntegrationTests:/tests/integration/src/test/java/org/apache/pulsar/tests/integration/io}.
-
-:::
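-
-At the unit-test level, a minimal sketch might drive the sink directly with a hand-rolled `Record`, with no broker involved. Everything here is an assumption for illustration: `MySink` is the hypothetical sink from the snippets above, JUnit 5 is assumed to be on the classpath, and the assertion encodes the expectation that a sink acks records it writes successfully.
-
-```java
-
-import java.util.HashMap;
-import java.util.concurrent.atomic.AtomicBoolean;
-import org.apache.pulsar.functions.api.Record;
-import org.junit.jupiter.api.Test;
-import static org.junit.jupiter.api.Assertions.assertTrue;
-
-class MySinkTest {
-
-    @Test
-    void writeAcksTheRecord() throws Exception {
-        MySink sink = new MySink();                  // hypothetical sink under test
-        sink.open(new HashMap<>(), null);            // null SinkContext is fine for a pure unit test
-
-        AtomicBoolean acked = new AtomicBoolean(false);
-        Record<String> record = new Record<String>() {
-            @Override
-            public String getValue() {
-                return "hello";
-            }
-
-            @Override
-            public void ack() {
-                acked.set(true);                     // a sink is expected to ack on success
-            }
-        };
-
-        sink.write(record);
-        assertTrue(acked.get());
-    }
-}
-
-```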
-
-## Package
-
-Once you've developed and tested your connector, you need to package it so that it can be submitted to a [Pulsar Functions](functions-overview.md) cluster.
-
-There are two methods to work with Pulsar Functions' runtime, that is, [NAR](#nar) and [uber JAR](#uber-jar).
-
-:::note
-
-If you plan to package and distribute your connector for others to use, you are obligated to license and copyright your own code properly. Remember to add the license and copyright to all libraries your code uses and to your distribution.
-
-If you use the [NAR](#nar) method, the NAR plugin automatically creates a `DEPENDENCIES` file in the generated NAR package, including the proper licensing and copyrights of all libraries of your connector.
-
-:::
-
-### NAR
-
-**NAR** stands for NiFi Archive, which is a custom packaging mechanism used by Apache NiFi to provide a bit of Java ClassLoader isolation.
-
-:::tip
-
-For more information about **how NAR works**, see [here](https://medium.com/hashmapinc/nifi-nar-files-explained-14113f7796fd).
-
-:::
-
-Pulsar uses the same mechanism for packaging **all** [built-in connectors](io-connectors.md).
-
-The easiest approach to package a Pulsar connector is to create a NAR package using [nifi-nar-maven-plugin](https://mvnrepository.com/artifact/org.apache.nifi/nifi-nar-maven-plugin).
-
-Include this [nifi-nar-maven-plugin](https://mvnrepository.com/artifact/org.apache.nifi/nifi-nar-maven-plugin) in your maven project for your connector as below.
-
-```xml
-
-<plugins>
-  <plugin>
-    <groupId>org.apache.nifi</groupId>
-    <artifactId>nifi-nar-maven-plugin</artifactId>
-    <version>1.2.0</version>
-  </plugin>
-</plugins>
-
-```
-
-You must also create a `resources/META-INF/services/pulsar-io.yaml` file with the following contents:
-
-```yaml
-
-name: connector name
-description: connector description
-sourceClass: fully qualified class name (only if source connector)
-sinkClass: fully qualified class name (only if sink connector)
-
-```
-
-For Gradle users, there is a [Gradle Nar plugin available on the Gradle Plugin Portal](https://plugins.gradle.org/plugin/io.github.lhotari.gradle-nar-plugin).
-
-:::tip
-
-For more information about **how to use NAR for Pulsar connectors**, see {@inject: github:TwitterFirehose:/pulsar-io/twitter/pom.xml}.
-
-:::
-
-### Uber JAR
-
-An alternative approach is to create an **uber JAR** that contains all of the connector's JAR files and other resource files. No internal directory structure is necessary.
-
-You can use [maven-shade-plugin](https://maven.apache.org/plugins/maven-shade-plugin/examples/includes-excludes.html) to create an uber JAR as below:
-
-```xml
-
-<plugin>
-  <groupId>org.apache.maven.plugins</groupId>
-  <artifactId>maven-shade-plugin</artifactId>
-  <version>3.1.1</version>
-  <executions>
-    <execution>
-      <phase>package</phase>
-      <goals>
-        <goal>shade</goal>
-      </goals>
-      <configuration>
-        <filters>
-          <filter>
-            <artifact>*:*</artifact>
-          </filter>
-        </filters>
-      </configuration>
-    </execution>
-  </executions>
-</plugin>
-
-```
-
-## Monitor
-
-Pulsar connectors enable you to move data in and out of Pulsar easily. It is important to ensure that the running connectors are healthy at any time. You can monitor Pulsar connectors that have been deployed with the following methods:
-
-- Check the metrics provided by Pulsar.
-
-  Pulsar connectors expose the metrics that can be collected and used for monitoring the health of **Java** connectors. You can check the metrics by following the [monitoring](deploy-monitoring.md) guide.
-
-- Set and check your customized metrics.
-
-  In addition to the metrics provided by Pulsar, Pulsar allows you to customize metrics for **Java** connectors. Function workers collect user-defined metrics to Prometheus automatically and you can check them in Grafana.
-
-Here is an example of how to customize metrics for a Java connector.
-
-```java
-
-public class TestMetricSink implements Sink<String> {
-
-    @Override
-    public void open(Map<String, Object> config, SinkContext sinkContext) throws Exception {
-        sinkContext.recordMetric("foo", 1);
-    }
-
-    @Override
-    public void write(Record<String> record) throws Exception {
-
-    }
-
-    @Override
-    public void close() throws Exception {
-
-    }
-}
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/io-dynamodb-source.md b/site2/website/versioned_docs/version-2.9.2-deprecated/io-dynamodb-source.md
deleted file mode 100644
index ce585786eb0428..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/io-dynamodb-source.md
+++ /dev/null
@@ -1,80 +0,0 @@
----
-id: io-dynamodb-source
-title: AWS DynamoDB source connector
-sidebar_label: "AWS DynamoDB source connector"
-original_id: io-dynamodb-source
----
-
-The DynamoDB source connector pulls data from DynamoDB table streams and persists data into Pulsar.
-
-This connector uses the [DynamoDB Streams Kinesis Adapter](https://github.com/awslabs/dynamodb-streams-kinesis-adapter),
-which uses the [Kinesis Consumer Library](https://github.com/awslabs/amazon-kinesis-client) (KCL) to do the actual
-consuming of messages. The KCL uses DynamoDB to track state for consumers and requires CloudWatch access to log metrics.
-
-
-## Configuration
-
-The configuration of the DynamoDB source connector has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-`initialPositionInStream`|InitialPositionInStream|false|LATEST|The position where the connector starts from.<br /><br />Below are the available options:<br /><br />1. `AT_TIMESTAMP`: start from the record at or after the specified timestamp.<br /><br />2. `LATEST`: start after the most recent data record.<br /><br />3. `TRIM_HORIZON`: start from the oldest available data record.
-`startAtTime`|Date|false|" " (empty string)|If `initialPositionInStream` is set to `AT_TIMESTAMP`, this specifies the point in time to start consumption.
-`applicationName`|String|false|Pulsar IO connector|The name of the KCL application. Must be unique, as it is used to define the table name for the dynamo table used for state tracking.<br /><br />By default, the application name is included in the user agent string used to make AWS requests. This can assist with troubleshooting, for example, distinguishing requests made by separate connector instances.
-`checkpointInterval`|long|false|60000|The frequency of the KCL checkpoint in milliseconds.
-`backoffTime`|long|false|3000|The amount of time to delay between requests when the connector encounters a throttling exception from AWS Kinesis in milliseconds.
-`numRetries`|int|false|3|The number of re-attempts when the connector encounters an exception while trying to set a checkpoint.
-`receiveQueueSize`|int|false|1000|The maximum number of AWS records that can be buffered inside the connector.<br /><br />Once the `receiveQueueSize` is reached, the connector does not consume any messages from Kinesis until some messages in the queue are successfully consumed.
-`dynamoEndpoint`|String|false|" " (empty string)|The Dynamo end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
-`cloudwatchEndpoint`|String|false|" " (empty string)|The Cloudwatch end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
-`awsEndpoint`|String|false|" " (empty string)|The DynamoDB Streams end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
-`awsRegion`|String|false|" " (empty string)|The AWS region.<br /><br />**Example**<br />us-west-1, us-west-2
-`awsDynamodbStreamArn`|String|true|" " (empty string)|The DynamoDB stream arn.
-`awsCredentialPluginName`|String|false|" " (empty string)|The fully-qualified class name of implementation of {@inject: github:AwsCredentialProviderPlugin:/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java}.<br /><br />`awsCredentialProviderPlugin` has the following built-in plugins:<br /><br />1. `org.apache.pulsar.io.kinesis.AwsDefaultProviderChainPlugin`:<br />this plugin uses the default AWS provider chain.<br />For more information, see [using the default credential provider chain](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html#credentials-default).<br /><br />2. `org.apache.pulsar.io.kinesis.STSAssumeRoleProviderPlugin`:<br />this plugin takes a configuration via the `awsCredentialPluginParam` that describes a role to assume when running the KCL.<br />**JSON configuration example**<br />`{"roleArn": "arn...", "roleSessionName": "name"}`<br /><br />`awsCredentialPluginName` is a factory class which creates an AWSCredentialsProvider that is used by the Kinesis sink.<br /><br />If `awsCredentialPluginName` is set to empty, the Kinesis sink creates a default AWSCredentialsProvider which accepts a json-map of credentials in `awsCredentialPluginParam`.
-`awsCredentialPluginParam`|String |false|" " (empty string)|The JSON parameter to initialize `awsCredentialsProviderPlugin`.
-
-### Example
-
-Before using the DynamoDB source connector, you need to create a configuration file through one of the following methods.
-
-* JSON
-
-  ```json
-
-  {
-     "awsEndpoint": "https://some.endpoint.aws",
-     "awsRegion": "us-east-1",
-     "awsDynamodbStreamArn": "arn:aws:dynamodb:us-west-2:111122223333:table/TestTable/stream/2015-05-11T21:21:33.291",
-     "awsCredentialPluginParam": "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}",
-     "applicationName": "My test application",
-     "checkpointInterval": "30000",
-     "backoffTime": "4000",
-     "numRetries": "3",
-     "receiveQueueSize": 2000,
-     "initialPositionInStream": "TRIM_HORIZON",
-     "startAtTime": "2019-03-05T19:28:58.000Z"
-  }
-
-  ```
-
-* YAML
-
-  ```yaml
-
-  configs:
-     awsEndpoint: "https://some.endpoint.aws"
-     awsRegion: "us-east-1"
-     awsDynamodbStreamArn: "arn:aws:dynamodb:us-west-2:111122223333:table/TestTable/stream/2015-05-11T21:21:33.291"
-     awsCredentialPluginParam: "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}"
-     applicationName: "My test application"
-     checkpointInterval: 30000
-     backoffTime: 4000
-     numRetries: 3
-     receiveQueueSize: 2000
-     initialPositionInStream: "TRIM_HORIZON"
-     startAtTime: "2019-03-05T19:28:58.000Z"
-
-  ```
-
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/io-elasticsearch-sink.md b/site2/website/versioned_docs/version-2.9.2-deprecated/io-elasticsearch-sink.md
deleted file mode 100644
index b5757b3094a9ac..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/io-elasticsearch-sink.md
+++ /dev/null
@@ -1,242 +0,0 @@
----
-id: io-elasticsearch-sink
-title: Elasticsearch sink connector
-sidebar_label: "Elasticsearch sink connector"
-original_id: io-elasticsearch-sink
----
-
-The Elasticsearch sink connector pulls messages from Pulsar topics and persists the messages to indexes.
-
-
-## Feature
-
-### Handle data
-
-Since Pulsar 2.9.0, the Elasticsearch sink connector has the following ways of working. You can choose one of them.
-
-Name | Description
----|---|
-Raw processing | The sink reads from topics and passes the raw content to Elasticsearch.<br /><br />This is the **default** behavior.<br /><br />Raw processing was already available **in Pulsar 2.8.x**.
-Schema aware | The sink uses the schema and handles AVRO, JSON, and KeyValue schema types while mapping the content to the Elasticsearch document.<br /><br />If you set `schemaEnable` to `true`, the sink interprets the contents of the message and you can define a **primary key** that is in turn used as the special `_id` field on Elasticsearch.<br /><br />This allows you to perform `UPDATE`, `INSERT`, and `DELETE` operations to Elasticsearch driven by the logical primary key of the message.<br /><br />This is very useful in a typical Change Data Capture scenario in which you follow the changes on your database, write them to Pulsar (using the Debezium adapter for instance), and then you write to Elasticsearch.<br /><br />You configure the mapping of the primary key using the `primaryFields` configuration entry.<br /><br />The `DELETE` operation can be performed when the primary key is not empty and the remaining value is empty. Use the `nullValueAction` to configure this behaviour. The default configuration simply ignores such empty values.
-
-### Map multiple indexes
-
-Since Pulsar 2.9.0, the `indexName` property is no longer required. If you omit it, the sink writes to an index named after the Pulsar topic name.
-
-### Enable bulk writes
-
-Since Pulsar 2.9.0, you can use bulk writes by setting the `bulkEnabled` property to `true`.
-
-### Enable secure connections via TLS
-
-Since Pulsar 2.9.0, you can enable secure connections with TLS.
-
-## Configuration
-
-The configuration of the Elasticsearch sink connector has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `elasticSearchUrl` | String| true |" " (empty string)| The URL of elastic search cluster to which the connector connects. |
-| `indexName` | String| true |" " (empty string)| The index name to which the connector writes messages. |
-| `schemaEnable` | Boolean | false | false | Turn on the Schema Aware mode. |
-| `createIndexIfNeeded` | Boolean | false | false | Manage index if missing. |
-| `maxRetries` | Integer | false | 1 | The maximum number of retries for elasticsearch requests. Use -1 to disable it. |
-| `retryBackoffInMs` | Integer | false | 100 | The base time to wait when retrying an Elasticsearch request (in milliseconds). |
-| `maxRetryTimeInSec` | Integer| false | 86400 | The maximum retry time interval in seconds for retrying an elasticsearch request. |
-| `bulkEnabled` | Boolean | false | false | Enable the elasticsearch bulk processor to flush write requests based on the number or size of requests, or after a given period. |
-| `bulkActions` | Integer | false | 1000 | The maximum number of actions per elasticsearch bulk request. Use -1 to disable it. |
-| `bulkSizeInMb` | Integer | false |5 | The maximum size in megabytes of elasticsearch bulk requests. Use -1 to disable it. |
-| `bulkConcurrentRequests` | Integer | false | 0 | The maximum number of in flight elasticsearch bulk requests. The default 0 allows the execution of a single request. A value of 1 means 1 concurrent request is allowed to be executed while accumulating new bulk requests. |
-| `bulkFlushIntervalInMs` | Integer | false | -1 | The maximum period of time to wait for flushing pending writes when bulk writes are enabled. Default is -1 meaning not set. |
-| `compressionEnabled` | Boolean | false |false | Enable elasticsearch request compression. |
-| `connectTimeoutInMs` | Integer | false |5000 | The elasticsearch client connection timeout in milliseconds. |
-| `connectionRequestTimeoutInMs` | Integer | false |1000 | The time in milliseconds for getting a connection from the elasticsearch connection pool. |
-| `connectionIdleTimeoutInMs` | Integer | false |5 | Idle connection timeout to prevent a read timeout. |
-| `keyIgnore` | Boolean | false |true | Whether to ignore the record key to build the Elasticsearch document `_id`. If primaryFields is defined, the connector extracts the primary fields from the payload to build the document `_id`. If no primaryFields are provided, elasticsearch auto generates a random document `_id`. |
-| `primaryFields` | String | false | "id" | The comma separated ordered list of field names used to build the Elasticsearch document `_id` from the record value. If this list is a singleton, the field is converted as a string. If this list has 2 or more fields, the generated `_id` is a string representation of a JSON array of the field values. |
-| `nullValueAction` | enum (IGNORE,DELETE,FAIL) | false | IGNORE | How to handle records with null values. Possible options are IGNORE, DELETE, or FAIL. Default is to IGNORE the message. |
-| `malformedDocAction` | enum (IGNORE,WARN,FAIL) | false | FAIL | How to handle elasticsearch rejected documents due to some malformation. Possible options are IGNORE, WARN, or FAIL. Default is to FAIL the Elasticsearch document. |
-| `stripNulls` | Boolean | false |true | If stripNulls is false, elasticsearch _source includes 'null' for empty fields (for example {"foo": null}), otherwise null fields are stripped. |
-| `socketTimeoutInMs` | Integer | false |60000 | The socket timeout in milliseconds waiting to read the elasticsearch response. |
-| `typeName` | String | false | "_doc" | The type name to which the connector writes messages.<br /><br />The value should be set explicitly to a valid type name other than "_doc" for Elasticsearch versions before 6.2, and left to the default otherwise. |
-| `indexNumberOfShards` | int| false |1| The number of shards of the index. |
-| `indexNumberOfReplicas` | int| false |1 | The number of replicas of the index. |
-| `username` | String| false |" " (empty string)| The username used by the connector to connect to the elastic search cluster.<br /><br />If `username` is set, then `password` should also be provided. |
-| `password` | String| false | " " (empty string)|The password used by the connector to connect to the elastic search cluster.
    If `username` is set, then `password` should also be provided. | -| `ssl` | ElasticSearchSslConfig | false | | Configuration for TLS encrypted communication | - -### Definition of ElasticSearchSslConfig structure: - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `enabled` | Boolean| false | false | Enable SSL/TLS. | -| `hostnameVerification` | Boolean| false | true | Whether or not to validate node hostnames when using SSL. | -| `truststorePath` | String| false |" " (empty string)| The path to the truststore file. | -| `truststorePassword` | String| false |" " (empty string)| Truststore password. | -| `keystorePath` | String| false |" " (empty string)| The path to the keystore file. | -| `keystorePassword` | String| false |" " (empty string)| Keystore password. | -| `cipherSuites` | String| false |" " (empty string)| SSL/TLS cipher suites. | -| `protocols` | String| false |"TLSv1.2" | Comma separated list of enabled SSL/TLS protocols. | - -## Example - -Before using the Elasticsearch sink connector, you need to create a configuration file through one of the following methods. - -### Configuration - -#### For Elasticsearch After 6.2 - -* JSON - - ```json - - { - "elasticSearchUrl": "http://localhost:9200", - "indexName": "my_index", - "username": "scooby", - "password": "doobie" - } - - ``` - -* YAML - - ```yaml - - configs: - elasticSearchUrl: "http://localhost:9200" - indexName: "my_index" - username: "scooby" - password: "doobie" - - ``` - -#### For Elasticsearch Before 6.2 - -* JSON - - ```json - - { - "elasticSearchUrl": "http://localhost:9200", - "indexName": "my_index", - "typeName": "doc", - "username": "scooby", - "password": "doobie" - } - - ``` - -* YAML - - ```yaml - - configs: - elasticSearchUrl: "http://localhost:9200" - indexName: "my_index" - typeName: "doc" - username: "scooby" - password: "doobie" - - ``` - -### Usage - -1. Start a single node Elasticsearch cluster. - - ```bash - - $ docker run -p 9200:9200 -p 9300:9300 \ - -e "discovery.type=single-node" \ - docker.elastic.co/elasticsearch/elasticsearch:7.13.3 - - ``` - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - - Make sure the NAR file is available at `connectors/pulsar-io-elastic-search-@pulsar:version@.nar`. - -3. Start the Pulsar Elasticsearch connector in local run mode using one of the following methods. - * Use the **JSON** configuration as shown previously. - - ```bash - - $ bin/pulsar-admin sinks localrun \ - --archive connectors/pulsar-io-elastic-search-@pulsar:version@.nar \ - --tenant public \ - --namespace default \ - --name elasticsearch-test-sink \ - --sink-config '{"elasticSearchUrl":"http://localhost:9200","indexName": "my_index","username": "scooby","password": "doobie"}' \ - --inputs elasticsearch_test - - ``` - - * Use the **YAML** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin sinks localrun \ - --archive connectors/pulsar-io-elastic-search-@pulsar:version@.nar \ - --tenant public \ - --namespace default \ - --name elasticsearch-test-sink \ - --sink-config-file elasticsearch-sink.yml \ - --inputs elasticsearch_test - - ``` - -4. Publish records to the topic. - - ```bash - - $ bin/pulsar-client produce elasticsearch_test --messages "{\"a\":1}" - - ``` - -5. Check documents in Elasticsearch. 
- - * refresh the index - - ```bash - - $ curl -s http://localhost:9200/my_index/_refresh - - ``` - - - * search documents - - ```bash - - $ curl -s http://localhost:9200/my_index/_search - - ``` - - You can see the record that published earlier has been successfully written into Elasticsearch. - - ```json - - {"took":2,"timed_out":false,"_shards":{"total":1,"successful":1,"skipped":0,"failed":0},"hits":{"total":{"value":1,"relation":"eq"},"max_score":1.0,"hits":[{"_index":"my_index","_type":"_doc","_id":"FSxemm8BLjG_iC0EeTYJ","_score":1.0,"_source":{"a":1}}]}} - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/io-file-source.md b/site2/website/versioned_docs/version-2.9.2-deprecated/io-file-source.md deleted file mode 100644 index e9d710cce65e83..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/io-file-source.md +++ /dev/null @@ -1,160 +0,0 @@ ---- -id: io-file-source -title: File source connector -sidebar_label: "File source connector" -original_id: io-file-source ---- - -The File source connector pulls messages from files in directories and persists the messages to Pulsar topics. - -## Configuration - -The configuration of the File source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `inputDirectory` | String|true | No default value|The input directory to pull files. | -| `recurse` | Boolean|false | true | Whether to pull files from subdirectory or not.| -| `keepFile` |Boolean|false | false | If set to true, the file is not deleted after it is processed, which means the file can be picked up continually. | -| `fileFilter` | String|false| [^\\.].* | The file whose name matches the given regular expression is picked up. | -| `pathFilter` | String |false | NULL | If `recurse` is set to true, the subdirectory whose path matches the given regular expression is scanned. | -| `minimumFileAge` | Integer|false | 0 | The minimum age that a file can be processed.

    Any file younger than `minimumFileAge` (according to the last modification date) is ignored. | -| `maximumFileAge` | Long|false |Long.MAX_VALUE | The maximum age that a file can be processed.

        Any file older than `maximumFileAge` (according to the last modification date) is ignored. |
    -| `minimumSize` |Integer| false |1 | The minimum size (in bytes) of a file that can be processed. |
    -| `maximumSize` | Double|false |Double.MAX_VALUE| The maximum size (in bytes) of a file that can be processed. |
    -| `ignoreHiddenFiles` |Boolean| false | true| Whether hidden files should be ignored or not. |
    -| `pollingInterval`|Long | false | 10000L | Indicates how long to wait before performing a directory listing. |
    -| `numWorkers` | Integer | false | 1 | The number of worker threads that process files.<br>
    

    This allows you to process a larger number of files concurrently.

    However, setting this to a value greater than 1 makes the data from multiple files mixed in the target topic. | - -### Example - -Before using the File source connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "inputDirectory": "/Users/david", - "recurse": true, - "keepFile": true, - "fileFilter": "[^\\.].*", - "pathFilter": "*", - "minimumFileAge": 0, - "maximumFileAge": 9999999999, - "minimumSize": 1, - "maximumSize": 5000000, - "ignoreHiddenFiles": true, - "pollingInterval": 5000, - "numWorkers": 1 - } - - ``` - -* YAML - - ```yaml - - configs: - inputDirectory: "/Users/david" - recurse: true - keepFile: true - fileFilter: "[^\\.].*" - pathFilter: "*" - minimumFileAge: 0 - maximumFileAge: 9999999999 - minimumSize: 1 - maximumSize: 5000000 - ignoreHiddenFiles: true - pollingInterval: 5000 - numWorkers: 1 - - ``` - -## Usage - -Here is an example of using the File source connecter. - -1. Pull a Pulsar image. - - ```bash - - $ docker pull apachepulsar/pulsar:{version} - - ``` - -2. Start Pulsar standalone. - - ```bash - - $ docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-standalone apachepulsar/pulsar:{version} bin/pulsar standalone - - ``` - -3. Create a configuration file _file-connector.yaml_. - - ```yaml - - configs: - inputDirectory: "/opt" - - ``` - -4. Copy the configuration file _file-connector.yaml_ to the container. - - ```bash - - $ docker cp connectors/file-connector.yaml pulsar-standalone:/pulsar/ - - ``` - -5. Download the File source connector. - - ```bash - - $ curl -O https://mirrors.tuna.tsinghua.edu.cn/apache/pulsar/pulsar-{version}/connectors/pulsar-io-file-{version}.nar - - ``` - -6. Start the File source connector. - - ```bash - - $ docker exec -it pulsar-standalone /bin/bash - - $ ./bin/pulsar-admin sources localrun \ - --archive /pulsar/pulsar-io-file-{version}.nar \ - --name file-test \ - --destination-topic-name pulsar-file-test \ - --source-config-file /pulsar/file-connector.yaml - - ``` - -7. Start a consumer. - - ```bash - - ./bin/pulsar-client consume -s file-test -n 0 pulsar-file-test - - ``` - -8. Write the message to the file _test.txt_. - - ```bash - - echo "hello world!" > /opt/test.txt - - ``` - - The following information appears on the consumer terminal window. - - ```bash - - ----- got message ----- - hello world! - - ``` - - \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/io-flume-sink.md b/site2/website/versioned_docs/version-2.9.2-deprecated/io-flume-sink.md deleted file mode 100644 index b2ace53702f8ca..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/io-flume-sink.md +++ /dev/null @@ -1,56 +0,0 @@ ---- -id: io-flume-sink -title: Flume sink connector -sidebar_label: "Flume sink connector" -original_id: io-flume-sink ---- - -The Flume sink connector pulls messages from Pulsar topics to logs. - -## Configuration - -The configuration of the Flume sink connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -`name`|String|true|"" (empty string)|The name of the agent. -`confFile`|String|true|"" (empty string)|The configuration file. -`noReloadConf`|Boolean|false|false|Whether to reload configuration file if changed. -`zkConnString`|String|true|"" (empty string)|The ZooKeeper connection. 
-`zkBasePath`|String|true|"" (empty string)|The base path in ZooKeeper for agent configuration. - -### Example - -Before using the Flume sink connector, you need to create a configuration file through one of the following methods. - -> For more information about the `sink.conf` in the example below, see [here](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/resources/flume/sink.conf). - -* JSON - - ```json - - { - "name": "a1", - "confFile": "sink.conf", - "noReloadConf": "false", - "zkConnString": "", - "zkBasePath": "" - } - - ``` - -* YAML - - ```yaml - - configs: - name: a1 - confFile: sink.conf - noReloadConf: false - zkConnString: "" - zkBasePath: "" - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/io-flume-source.md b/site2/website/versioned_docs/version-2.9.2-deprecated/io-flume-source.md deleted file mode 100644 index b7fd7edad88111..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/io-flume-source.md +++ /dev/null @@ -1,56 +0,0 @@ ---- -id: io-flume-source -title: Flume source connector -sidebar_label: "Flume source connector" -original_id: io-flume-source ---- - -The Flume source connector pulls messages from logs to Pulsar topics. - -## Configuration - -The configuration of the Flume source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -`name`|String|true|"" (empty string)|The name of the agent. -`confFile`|String|true|"" (empty string)|The configuration file. -`noReloadConf`|Boolean|false|false|Whether to reload configuration file if changed. -`zkConnString`|String|true|"" (empty string)|The ZooKeeper connection. -`zkBasePath`|String|true|"" (empty string)|The base path in ZooKeeper for agent configuration. - -### Example - -Before using the Flume source connector, you need to create a configuration file through one of the following methods. - -> For more information about the `source.conf` in the example below, see [here](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/resources/flume/source.conf). - -* JSON - - ```json - - { - "name": "a1", - "confFile": "source.conf", - "noReloadConf": "false", - "zkConnString": "", - "zkBasePath": "" - } - - ``` - -* YAML - - ```yaml - - configs: - name: a1 - confFile: source.conf - noReloadConf: false - zkConnString: "" - zkBasePath: "" - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/io-hbase-sink.md b/site2/website/versioned_docs/version-2.9.2-deprecated/io-hbase-sink.md deleted file mode 100644 index 1737b00fa26805..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/io-hbase-sink.md +++ /dev/null @@ -1,67 +0,0 @@ ---- -id: io-hbase-sink -title: HBase sink connector -sidebar_label: "HBase sink connector" -original_id: io-hbase-sink ---- - -The HBase sink connector pulls the messages from Pulsar topics -and persists the messages to HBase tables - -## Configuration - -The configuration of the HBase sink connector has the following properties. - -### Property - -| Name | Type|Default | Required | Description | -|------|---------|----------|-------------|--- -| `hbaseConfigResources` | String|None | false | HBase system configuration `hbase-site.xml` file. | -| `zookeeperQuorum` | String|None | true | HBase system configuration about `hbase.zookeeper.quorum` value. 
| -| `zookeeperClientPort` | String|2181 | false | HBase system configuration about `hbase.zookeeper.property.clientPort` value. | -| `zookeeperZnodeParent` | String|/hbase | false | HBase system configuration about `zookeeper.znode.parent` value. | -| `tableName` | None |String | true | HBase table, the value is `namespace:tableName`. | -| `rowKeyName` | String|None | true | HBase table rowkey name. | -| `familyName` | String|None | true | HBase table column family name. | -| `qualifierNames` |String| None | true | HBase table column qualifier names. | -| `batchTimeMs` | Long|1000l| false | HBase table operation timeout in milliseconds. | -| `batchSize` | int|200| false | Batch size of updates made to the HBase table. | - -### Example - -Before using the HBase sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "hbaseConfigResources": "hbase-site.xml", - "zookeeperQuorum": "localhost", - "zookeeperClientPort": "2181", - "zookeeperZnodeParent": "/hbase", - "tableName": "pulsar_hbase", - "rowKeyName": "rowKey", - "familyName": "info", - "qualifierNames": [ 'name', 'address', 'age'] - } - - ``` - -* YAML - - ```yaml - - configs: - hbaseConfigResources: "hbase-site.xml" - zookeeperQuorum: "localhost" - zookeeperClientPort: "2181" - zookeeperZnodeParent: "/hbase" - tableName: "pulsar_hbase" - rowKeyName: "rowKey" - familyName: "info" - qualifierNames: [ 'name', 'address', 'age'] - - ``` - - \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/io-hdfs2-sink.md b/site2/website/versioned_docs/version-2.9.2-deprecated/io-hdfs2-sink.md deleted file mode 100644 index 4a8527154430d0..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/io-hdfs2-sink.md +++ /dev/null @@ -1,64 +0,0 @@ ---- -id: io-hdfs2-sink -title: HDFS2 sink connector -sidebar_label: "HDFS2 sink connector" -original_id: io-hdfs2-sink ---- - -The HDFS2 sink connector pulls the messages from Pulsar topics -and persists the messages to HDFS files. - -## Configuration - -The configuration of the HDFS2 sink connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `hdfsConfigResources` | String|true| None | A file or a comma-separated list containing the Hadoop file system configuration.

    **Example**
    'core-site.xml'
        'hdfs-site.xml' |
    -| `directory` | String | true | None|The HDFS directory where files are read from or written to. |
    -| `encoding` | String |false |None |The character encoding for the files.<br>
    

    **Example**
    UTF-8
    ASCII | -| `compression` | Compression |false |None |The compression code used to compress or de-compress the files on HDFS.

    Below are the available options:
      * BZIP2<br>
      * DEFLATE<br>
      * GZIP<br>
      * LZ4<br>
      * SNAPPY |
    -| `kerberosUserPrincipal` |String| false| None|The principal account of Kerberos user used for authentication. |
    -| `keytab` | String|false|None| The full pathname of the Kerberos keytab file used for authentication. |
    -| `filenamePrefix` |String| true, if `compression` is set to `None`. | None |The prefix of the files created inside the HDFS directory.<br>
    

    **Example**
        A value of topicA results in files named topicA-. |
    -| `fileExtension` | String| true | None | The extension added to the files written to HDFS.<br>
    

    **Example**
    '.txt'
    '.seq' | -| `separator` | char|false |None |The character used to separate records in a text file.

        If no value is provided, the contents from all records are concatenated together in one continuous byte array. |
    -| `syncInterval` | long| false |0| The interval between calls to flush data to HDFS disk in milliseconds. |
    -| `maxPendingRecords` |int| false|Integer.MAX_VALUE | The maximum number of records held in memory before acking.<br>
    

        Setting this property to 1 causes every record to be sent to disk before the record is acked.<br>
    

    Setting this property to a higher value allows buffering records before flushing them to disk. -| `subdirectoryPattern` | String | false | None | A subdirectory associated with the created time of the sink.
        The pattern is the date-time format used to name the subdirectory created under `directory`.<br>
    

    See [DateTimeFormatter](https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html) for pattern's syntax. | - -### Example - -Before using the HDFS2 sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "hdfsConfigResources": "core-site.xml", - "directory": "/foo/bar", - "filenamePrefix": "prefix", - "fileExtension": ".log", - "compression": "SNAPPY", - "subdirectoryPattern": "yyyy-MM-dd" - } - - ``` - -* YAML - - ```yaml - - configs: - hdfsConfigResources: "core-site.xml" - directory: "/foo/bar" - filenamePrefix: "prefix" - fileExtension: ".log" - compression: "SNAPPY" - subdirectoryPattern: "yyyy-MM-dd" - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/io-hdfs3-sink.md b/site2/website/versioned_docs/version-2.9.2-deprecated/io-hdfs3-sink.md deleted file mode 100644 index aec065a25db7f4..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/io-hdfs3-sink.md +++ /dev/null @@ -1,59 +0,0 @@ ---- -id: io-hdfs3-sink -title: HDFS3 sink connector -sidebar_label: "HDFS3 sink connector" -original_id: io-hdfs3-sink ---- - -The HDFS3 sink connector pulls the messages from Pulsar topics -and persists the messages to HDFS files. - -## Configuration - -The configuration of the HDFS3 sink connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `hdfsConfigResources` | String|true| None | A file or a comma-separated list containing the Hadoop file system configuration.

    **Example**
    'core-site.xml'
        'hdfs-site.xml' |
    -| `directory` | String | true | None|The HDFS directory where files are read from or written to. |
    -| `encoding` | String |false |None |The character encoding for the files.<br>
    

    **Example**
    UTF-8
    ASCII | -| `compression` | Compression |false |None |The compression code used to compress or de-compress the files on HDFS.

    Below are the available options:
      * BZIP2<br>
      * DEFLATE<br>
      * GZIP<br>
      * LZ4<br>
      * SNAPPY |
    -| `kerberosUserPrincipal` |String| false| None|The principal account of Kerberos user used for authentication. |
    -| `keytab` | String|false|None| The full pathname of the Kerberos keytab file used for authentication. |
    -| `filenamePrefix` |String| false |None |The prefix of the files created inside the HDFS directory.<br>
    

    **Example**
        A value of topicA results in files named topicA-. |
    -| `fileExtension` | String| false | None| The extension added to the files written to HDFS.<br>
    

    **Example**
    '.txt'
    '.seq' | -| `separator` | char|false |None |The character used to separate records in a text file.

        If no value is provided, the contents from all records are concatenated together in one continuous byte array. |
    -| `syncInterval` | long| false |0| The interval between calls to flush data to HDFS disk in milliseconds. |
    -| `maxPendingRecords` |int| false|Integer.MAX_VALUE | The maximum number of records held in memory before acking.<br>
    

        Setting this property to 1 causes every record to be sent to disk before the record is acked.<br>
    

    Setting this property to a higher value allows buffering records before flushing them to disk. - -### Example - -Before using the HDFS3 sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "hdfsConfigResources": "core-site.xml", - "directory": "/foo/bar", - "filenamePrefix": "prefix", - "compression": "SNAPPY" - } - - ``` - -* YAML - - ```yaml - - configs: - hdfsConfigResources: "core-site.xml" - directory: "/foo/bar" - filenamePrefix: "prefix" - compression: "SNAPPY" - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/io-influxdb-sink.md b/site2/website/versioned_docs/version-2.9.2-deprecated/io-influxdb-sink.md deleted file mode 100644 index 9382f8c03121cc..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/io-influxdb-sink.md +++ /dev/null @@ -1,119 +0,0 @@ ---- -id: io-influxdb-sink -title: InfluxDB sink connector -sidebar_label: "InfluxDB sink connector" -original_id: io-influxdb-sink ---- - -The InfluxDB sink connector pulls messages from Pulsar topics -and persists the messages to InfluxDB. - -The InfluxDB sink provides different configurations for InfluxDBv1 and v2 respectively. - -## Configuration - -The configuration of the InfluxDB sink connector has the following properties. - -### Property -#### InfluxDBv2 -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `influxdbUrl` |String| true|" " (empty string) | The URL of the InfluxDB instance. | -| `token` | String|true| " " (empty string) |The authentication token used to authenticate to InfluxDB. | -| `organization` | String| true|" " (empty string) | The InfluxDB organization to write to. | -| `bucket` |String| true | " " (empty string)| The InfluxDB bucket to write to. | -| `precision` | String|false| ns | The timestamp precision for writing data to InfluxDB.

    Below are the available options:
      * ns<br>
      * us<br>
      * ms<br>
      * s |
    -| `logLevel` | String|false| NONE|The log level for InfluxDB request and response.<br>
    

    Below are the available options:
      * NONE<br>
      * BASIC<br>
      * HEADERS<br>
      * FULL |
    -| `gzipEnable` | boolean|false | false | Whether to enable gzip or not. |
    -| `batchTimeMs` |long|false| 1000L | The InfluxDB operation time in milliseconds. |
    -| `batchSize` | int|false|200| The batch size of writing to InfluxDB. |
    -
    -#### InfluxDBv1
    -| Name | Type|Required | Default | Description
    -|------|----------|----------|---------|-------------|
    -| `influxdbUrl` |String| true|" " (empty string) | The URL of the InfluxDB instance. |
    -| `username` | String|false| " " (empty string) |The username used to authenticate to InfluxDB. |
    -| `password` | String| false|" " (empty string) | The password used to authenticate to InfluxDB. |
    -| `database` |String| true | " " (empty string)| The InfluxDB database to which messages are written. |
    -| `consistencyLevel` | String|false|ONE | The consistency level for writing data to InfluxDB.<br>
    

    Below are the available options:
      * ALL<br>
      * ANY<br>
      * ONE<br>
      * QUORUM |
    -| `logLevel` | String|false| NONE|The log level for InfluxDB request and response.<br>
    

    Below are the available options:
      * NONE<br>
      * BASIC<br>
      * HEADERS<br>
      * FULL<br>
    
  2027. | -| `retentionPolicy` | String|false| autogen| The retention policy for InfluxDB. | -| `gzipEnable` | boolean|false | false | Whether to enable gzip or not. | -| `batchTimeMs` |long|false| 1000L | The InfluxDB operation time in milliseconds. | -| `batchSize` | int|false|200| The batch size of writing to InfluxDB. | - -### Example -Before using the InfluxDB sink connector, you need to create a configuration file through one of the following methods. -#### InfluxDBv2 -* JSON - - ```json - - { - "influxdbUrl": "http://localhost:9999", - "organization": "example-org", - "bucket": "example-bucket", - "token": "xxxx", - "precision": "ns", - "logLevel": "NONE", - "gzipEnable": false, - "batchTimeMs": 1000, - "batchSize": 100 - } - - ``` - - -* YAML - - ```yaml - - configs: - influxdbUrl: "http://localhost:9999" - organization: "example-org" - bucket: "example-bucket" - token: "xxxx" - precision: "ns" - logLevel: "NONE" - gzipEnable: false - batchTimeMs: 1000 - batchSize: 100 - - ``` - - -#### InfluxDBv1 - -* JSON - - ```json - - { - "influxdbUrl": "http://localhost:8086", - "database": "test_db", - "consistencyLevel": "ONE", - "logLevel": "NONE", - "retentionPolicy": "autogen", - "gzipEnable": false, - "batchTimeMs": 1000, - "batchSize": 100 - } - - ``` - -* YAML - - ```yaml - - configs: - influxdbUrl: "http://localhost:8086" - database: "test_db" - consistencyLevel: "ONE" - logLevel: "NONE" - retentionPolicy: "autogen" - gzipEnable: false - batchTimeMs: 1000 - batchSize: 100 - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/io-jdbc-sink.md b/site2/website/versioned_docs/version-2.9.2-deprecated/io-jdbc-sink.md deleted file mode 100644 index 77dbb61fccd7ed..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/io-jdbc-sink.md +++ /dev/null @@ -1,157 +0,0 @@ ---- -id: io-jdbc-sink -title: JDBC sink connector -sidebar_label: "JDBC sink connector" -original_id: io-jdbc-sink ---- - -The JDBC sink connectors allow pulling messages from Pulsar topics -and persists the messages to ClickHouse, MariaDB, PostgreSQL, and SQLite. - -> Currently, INSERT, DELETE and UPDATE operations are supported. - -## Configuration - -The configuration of all JDBC sink connectors has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `userName` | String|false | " " (empty string) | The username used to connect to the database specified by `jdbcUrl`.

    **Note: `userName` is case-sensitive.**| -| `password` | String|false | " " (empty string)| The password used to connect to the database specified by `jdbcUrl`.

    **Note: `password` is case-sensitive.**| -| `jdbcUrl` | String|true | " " (empty string) | The JDBC URL of the database to which the connector connects. | -| `tableName` | String|true | " " (empty string) | The name of the table to which the connector writes. | -| `nonKey` | String|false | " " (empty string) | A comma-separated list contains the fields used in updating events. | -| `key` | String|false | " " (empty string) | A comma-separated list contains the fields used in `where` condition of updating and deleting events. | -| `timeoutMs` | int| false|500 | The JDBC operation timeout in milliseconds. | -| `batchSize` | int|false | 200 | The batch size of updates made to the database. | - -### Example for ClickHouse - -* JSON - - ```json - - { - "userName": "clickhouse", - "password": "password", - "jdbcUrl": "jdbc:clickhouse://localhost:8123/pulsar_clickhouse_jdbc_sink", - "tableName": "pulsar_clickhouse_jdbc_sink" - } - - ``` - -* YAML - - ```yaml - - tenant: "public" - namespace: "default" - name: "jdbc-clickhouse-sink" - topicName: "persistent://public/default/jdbc-clickhouse-topic" - sinkType: "jdbc-clickhouse" - configs: - userName: "clickhouse" - password: "password" - jdbcUrl: "jdbc:clickhouse://localhost:8123/pulsar_clickhouse_jdbc_sink" - tableName: "pulsar_clickhouse_jdbc_sink" - - ``` - -### Example for MariaDB - -* JSON - - ```json - - { - "userName": "mariadb", - "password": "password", - "jdbcUrl": "jdbc:mariadb://localhost:3306/pulsar_mariadb_jdbc_sink", - "tableName": "pulsar_mariadb_jdbc_sink" - } - - ``` - -* YAML - - ```yaml - - tenant: "public" - namespace: "default" - name: "jdbc-mariadb-sink" - topicName: "persistent://public/default/jdbc-mariadb-topic" - sinkType: "jdbc-mariadb" - configs: - userName: "mariadb" - password: "password" - jdbcUrl: "jdbc:mariadb://localhost:3306/pulsar_mariadb_jdbc_sink" - tableName: "pulsar_mariadb_jdbc_sink" - - ``` - -### Example for PostgreSQL - -Before using the JDBC PostgreSQL sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "userName": "postgres", - "password": "password", - "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink", - "tableName": "pulsar_postgres_jdbc_sink" - } - - ``` - -* YAML - - ```yaml - - tenant: "public" - namespace: "default" - name: "jdbc-postgres-sink" - topicName: "persistent://public/default/jdbc-postgres-topic" - sinkType: "jdbc-postgres" - configs: - userName: "postgres" - password: "password" - jdbcUrl: "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink" - tableName: "pulsar_postgres_jdbc_sink" - - ``` - -For more information on **how to use this JDBC sink connector**, see [connect Pulsar to PostgreSQL](io-quickstart.md#connect-pulsar-to-postgresql). 
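
    As a hedged sketch of how such a configuration might be deployed (the config file name `pulsar-postgres-jdbc-sink.yaml`, the topic name, and the exact connector archive name below are illustrative assumptions that vary by release, not values fixed by this guide), the sink can be created with `pulsar-admin`:

    ```bash

    # Create the JDBC PostgreSQL sink from the YAML config shown above.
    # Archive, topic, and file names here are placeholders; adjust them
    # to match your installation.
    $ bin/pulsar-admin sinks create \
        --archive connectors/pulsar-io-jdbc-postgres-@pulsar:version@.nar \
        --inputs pulsar-postgres-jdbc-sink-topic \
        --name pulsar-postgres-jdbc-sink \
        --sink-config-file pulsar-postgres-jdbc-sink.yaml \
        --parallelism 1

    ```

    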
- -### Example for SQLite - -* JSON - - ```json - - { - "jdbcUrl": "jdbc:sqlite:db.sqlite", - "tableName": "pulsar_sqlite_jdbc_sink" - } - - ``` - -* YAML - - ```yaml - - tenant: "public" - namespace: "default" - name: "jdbc-sqlite-sink" - topicName: "persistent://public/default/jdbc-sqlite-topic" - sinkType: "jdbc-sqlite" - configs: - jdbcUrl: "jdbc:sqlite:db.sqlite" - tableName: "pulsar_sqlite_jdbc_sink" - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/io-kafka-sink.md b/site2/website/versioned_docs/version-2.9.2-deprecated/io-kafka-sink.md deleted file mode 100644 index 09dad4ce70bac9..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/io-kafka-sink.md +++ /dev/null @@ -1,72 +0,0 @@ ---- -id: io-kafka-sink -title: Kafka sink connector -sidebar_label: "Kafka sink connector" -original_id: io-kafka-sink ---- - -The Kafka sink connector pulls messages from Pulsar topics and persists the messages -to Kafka topics. - -This guide explains how to configure and use the Kafka sink connector. - -## Configuration - -The configuration of the Kafka sink connector has the following parameters. - -### Property - -| Name | Type| Required | Default | Description -|------|----------|---------|-------------|-------------| -| `bootstrapServers` |String| true | " " (empty string) | A comma-separated list of host and port pairs for establishing the initial connection to the Kafka cluster. | -|`acks`|String|true|" " (empty string) |The number of acknowledgments that the producer requires the leader to receive before a request completes.
        This controls the durability of the sent records.
    -|`batchsize`|long|false|16384L|The batch size (in bytes) into which a Kafka producer attempts to batch records before sending them to brokers.
    -|`maxRequestSize`|long|false|1048576L|The maximum size of a Kafka request in bytes.
    -|`topic`|String|true|" " (empty string) |The Kafka topic that receives messages from Pulsar.
    -| `keyDeserializationClass` | String|false | org.apache.kafka.common.serialization.StringSerializer | The serializer class for Kafka producers to serialize keys.
    -| `valueDeserializationClass` | String|false | org.apache.kafka.common.serialization.ByteArraySerializer | The serializer class for Kafka producers to serialize values.<br>
    

    The serializer is set by a specific implementation of [`KafkaAbstractSink`](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSink.java). -|`producerConfigProperties`|Map|false|" " (empty string)|The producer configuration properties to be passed to producers.

        **Note: other properties specified in the connector configuration file take precedence over this configuration**.
    -
    -
    -### Example
    -
    -Before using the Kafka sink connector, you need to create a configuration file through one of the following methods.
    -
    -* JSON
    -
    -  ```json
    -
    -  {
    -    "bootstrapServers": "localhost:6667",
    -    "topic": "test",
    -    "acks": "1",
    -    "batchSize": "16384",
    -    "maxRequestSize": "1048576",
    -    "producerConfigProperties":
    -     {
    -        "client.id": "test-pulsar-producer",
    -        "security.protocol": "SASL_PLAINTEXT",
    -        "sasl.mechanism": "GSSAPI",
    -        "sasl.kerberos.service.name": "kafka",
    -        "acks": "all"
    -     }
    -  }
    -
    -  ```
    -
    -* YAML
    -
    -  ```yaml
    -
    -  configs:
    -    bootstrapServers: "localhost:6667"
    -    topic: "test"
    -    acks: "1"
    -    batchSize: "16384"
    -    maxRequestSize: "1048576"
    -    producerConfigProperties:
    -      client.id: "test-pulsar-producer"
    -      security.protocol: "SASL_PLAINTEXT"
    -      sasl.mechanism: "GSSAPI"
    -      sasl.kerberos.service.name: "kafka"
    -      acks: "all"
    -
    -  ```
    diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/io-kafka-source.md b/site2/website/versioned_docs/version-2.9.2-deprecated/io-kafka-source.md
    deleted file mode 100644
    index 35b04c52b41258..00000000000000
    --- a/site2/website/versioned_docs/version-2.9.2-deprecated/io-kafka-source.md
    +++ /dev/null
    @@ -1,240 +0,0 @@
    ----
    -id: io-kafka-source
    -title: Kafka source connector
    -sidebar_label: "Kafka source connector"
    -original_id: io-kafka-source
    ----
    -
    -The Kafka source connector pulls messages from Kafka topics and persists the messages
    -to Pulsar topics.
    -
    -This guide explains how to configure and use the Kafka source connector.
    -
    -## Configuration
    -
    -The configuration of the Kafka source connector has the following properties.
    -
    -### Property
    -
    -| Name | Type| Required | Default | Description
    -|------|----------|---------|-------------|-------------|
    -| `bootstrapServers` |String| true | " " (empty string) | A comma-separated list of host and port pairs for establishing the initial connection to the Kafka cluster. |
    -| `groupId` |String| true | " " (empty string) | A unique string that identifies the group of consumer processes to which this consumer belongs. |
    -| `fetchMinBytes` | long|false | 1 | The minimum number of bytes expected for each fetch response. |
    -| `autoCommitEnabled` | boolean |false | true | If set to true, the consumer's offset is periodically committed in the background.<br>
    

        This committed offset is used, when the process fails, as the position from which a new consumer begins. | -| `autoCommitIntervalMs` | long|false | 5000 | The frequency in milliseconds that the consumer offsets are auto-committed to Kafka if `autoCommitEnabled` is set to true. | -| `heartbeatIntervalMs` | long| false | 3000 | The interval between heartbeats to the consumer when using Kafka's group management facilities.<br>
    

    **Note: `heartbeatIntervalMs` must be smaller than `sessionTimeoutMs`**.| -| `sessionTimeoutMs` | long|false | 30000 | The timeout used to detect consumer failures when using Kafka's group management facility. | -| `topic` | String|true | " " (empty string)| The Kafka topic that sends messages to Pulsar. | -| `consumerConfigProperties` | Map| false | " " (empty string) | The consumer configuration properties to be passed to consumers.

    **Note: other properties specified in the connector configuration file take precedence over this configuration**. | -| `keyDeserializationClass` | String|false | org.apache.kafka.common.serialization.StringDeserializer | The deserializer class for Kafka consumers to deserialize keys.
    The deserializer is set by a specific implementation of [`KafkaAbstractSource`](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSource.java). -| `valueDeserializationClass` | String|false | org.apache.kafka.common.serialization.ByteArrayDeserializer | The deserializer class for Kafka consumers to deserialize values. -| `autoOffsetReset` | String | false | earliest | The default offset reset policy. | - -### Schema Management - -This Kafka source connector applies the schema to the topic depending on the data type that is present on the Kafka topic. -You can detect the data type from the `keyDeserializationClass` and `valueDeserializationClass` configuration parameters. - -If the `valueDeserializationClass` is `org.apache.kafka.common.serialization.StringDeserializer`, you can set Schema.STRING() as schema type on the Pulsar topic. - -If `valueDeserializationClass` is `io.confluent.kafka.serializers.KafkaAvroDeserializer`, Pulsar downloads the AVRO schema from the Confluent Schema Registry® -and sets it properly on the Pulsar topic. - -In this case, you need to set `schema.registry.url` inside of the `consumerConfigProperties` configuration entry -of the source. - -If `keyDeserializationClass` is not `org.apache.kafka.common.serialization.StringDeserializer`, it means -that you do not have a String as key and the Kafka Source uses the KeyValue schema type with the SEPARATED encoding. - -Pulsar supports AVRO format for keys. - -In this case, you can have a Pulsar topic with the following properties: -- Schema: KeyValue schema with SEPARATED encoding -- Key: the content of key of the Kafka message (base64 encoded) -- Value: the content of value of the Kafka message -- KeySchema: the schema detected from `keyDeserializationClass` -- ValueSchema: the schema detected from `valueDeserializationClass` - -Topic compaction and partition routing use the Pulsar key, that contains the Kafka key, and so they are driven by the same value that you have on Kafka. - -When you consume data from Pulsar topics, you can use the `KeyValue` schema. In this way, you can decode the data properly. -If you want to access the raw key, you can use the `Message#getKeyBytes()` API. - -### Example - -Before using the Kafka source connector, you need to create a configuration file through one of the following methods. - -- JSON - - ```json - - { - "bootstrapServers": "pulsar-kafka:9092", - "groupId": "test-pulsar-io", - "topic": "my-topic", - "sessionTimeoutMs": "10000", - "autoCommitEnabled": false - } - - ``` - -- YAML - - ```yaml - - configs: - bootstrapServers: "pulsar-kafka:9092" - groupId: "test-pulsar-io" - topic: "my-topic" - sessionTimeoutMs: "10000" - autoCommitEnabled: false - - ``` - -## Usage - -You can make the Kafka source connector as a Pulsar built-in connector and use it on a standalone cluster or an on-premises cluster. - -### Standalone cluster - -This example describes how to use the Kafka source connector to feed data from Kafka and write data to Pulsar topics in the standalone mode. - -#### Prerequisites - -- Install [Docker](https://docs.docker.com/get-docker/)(Community Edition). - -#### Steps - -1. Download and start the Confluent Platform. - -For details, see the [documentation](https://docs.confluent.io/platform/current/quickstart/ce-docker-quickstart.html#step-1-download-and-start-cp) to install the Kafka service locally. - -2. Pull a Pulsar image and start Pulsar in standalone mode. 
- - ```bash - - docker pull apachepulsar/pulsar:latest - - docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-kafka-standalone apachepulsar/pulsar:latest bin/pulsar standalone - - ``` - -3. Create a producer file _kafka-producer.py_. - - ```python - - from kafka import KafkaProducer - producer = KafkaProducer(bootstrap_servers='localhost:9092') - future = producer.send('my-topic', b'hello world') - future.get() - - ``` - -4. Create a consumer file _pulsar-client.py_. - - ```python - - import pulsar - - client = pulsar.Client('pulsar://localhost:6650') - consumer = client.subscribe('my-topic', subscription_name='my-aa') - - while True: - msg = consumer.receive() - print msg - print dir(msg) - print("Received message: '%s'" % msg.data()) - consumer.acknowledge(msg) - - client.close() - - ``` - -5. Copy the following files to Pulsar. - - ```bash - - docker cp pulsar-io-kafka.nar pulsar-kafka-standalone:/pulsar - docker cp kafkaSourceConfig.yaml pulsar-kafka-standalone:/pulsar/conf - - ``` - -6. Open a new terminal window and start the Kafka source connector in local run mode. - - ```bash - - docker exec -it pulsar-kafka-standalone /bin/bash - - ./bin/pulsar-admin source localrun \ - --archive ./pulsar-io-kafka.nar \ - --tenant public \ - --namespace default \ - --name kafka \ - --destination-topic-name my-topic \ - --source-config-file ./conf/kafkaSourceConfig.yaml \ - --parallelism 1 - - ``` - -7. Open a new terminal window and run the Kafka producer locally. - - ```bash - - python3 kafka-producer.py - - ``` - -8. Open a new terminal window and run the Pulsar consumer locally. - - ```bash - - python3 pulsar-client.py - - ``` - -The following information appears on the consumer terminal window. - - ```bash - - Received message: 'hello world' - - ``` - -### On-premises cluster - -This example explains how to create a Kafka source connector in an on-premises cluster. - -1. Copy the NAR package of the Kafka connector to the Pulsar connectors directory. - - ``` - - cp pulsar-io-kafka-{{connector:version}}.nar $PULSAR_HOME/connectors/pulsar-io-kafka-{{connector:version}}.nar - - ``` - -2. Reload all [built-in connectors](https://pulsar.apache.org/docs/en/next/io-connectors/). - - ``` - - PULSAR_HOME/bin/pulsar-admin sources reload - - ``` - -3. Check whether the Kafka source connector is available on the list or not. - - ``` - - PULSAR_HOME/bin/pulsar-admin sources available-sources - - ``` - -4. Create a Kafka source connector on a Pulsar cluster using the [`pulsar-admin sources create`](http://pulsar.apache.org/tools/pulsar-admin/2.9.0-SNAPSHOT/#-em-create-em--14) command. - - ``` - - PULSAR_HOME/bin/pulsar-admin sources create \ - --source-config-file - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/io-kinesis-sink.md b/site2/website/versioned_docs/version-2.9.2-deprecated/io-kinesis-sink.md deleted file mode 100644 index 153587dcfc783e..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/io-kinesis-sink.md +++ /dev/null @@ -1,80 +0,0 @@ ---- -id: io-kinesis-sink -title: Kinesis sink connector -sidebar_label: "Kinesis sink connector" -original_id: io-kinesis-sink ---- - -The Kinesis sink connector pulls data from Pulsar and persists data into Amazon Kinesis. - -## Configuration - -The configuration of the Kinesis sink connector has the following property. 
- -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -`messageFormat`|MessageFormat|true|ONLY_RAW_PAYLOAD|Message format in which Kinesis sink converts Pulsar messages and publishes to Kinesis streams.

    Below are the available options:

      * `ONLY_RAW_PAYLOAD`: Kinesis sink directly publishes Pulsar message payload as a message into the configured Kinesis stream.<br>
    <br>
      * `FULL_MESSAGE_IN_JSON`: Kinesis sink creates a JSON payload with Pulsar message payload, properties and encryptionCtx, and publishes JSON payload into the configured Kinesis stream.<br>
    <br>
      * `FULL_MESSAGE_IN_FB`: Kinesis sink creates a flatbuffer serialized payload with Pulsar message payload, properties and encryptionCtx, and publishes flatbuffer payload into the configured Kinesis stream.
    -`retainOrdering`|boolean|false|false|Whether the Pulsar connector retains ordering when moving messages from Pulsar to Kinesis or not.
    -`awsEndpoint`|String|false|" " (empty string)|The Kinesis end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
    -`awsRegion`|String|false|" " (empty string)|The AWS region.<br>
    

    **Example**
    us-west-1, us-west-2 -`awsKinesisStreamName`|String|true|" " (empty string)|The Kinesis stream name. -`awsCredentialPluginName`|String|false|" " (empty string)|The fully-qualified class name of implementation of {@inject: github:AwsCredentialProviderPlugin:/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java}.

    It is a factory class which creates an AWSCredentialsProvider that is used by Kinesis sink.

    If it is empty, the Kinesis sink creates a default AWSCredentialsProvider which accepts json-map of credentials in `awsCredentialPluginParam`. -`awsCredentialPluginParam`|String |false|" " (empty string)|The JSON parameter to initialize `awsCredentialsProviderPlugin`. - -### Built-in plugins - -The following are built-in `AwsCredentialProviderPlugin` plugins: - -* `org.apache.pulsar.io.aws.AwsDefaultProviderChainPlugin` - - This plugin takes no configuration, it uses the default AWS provider chain. - - For more information, see [AWS documentation](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html#credentials-default). - -* `org.apache.pulsar.io.aws.STSAssumeRoleProviderPlugin` - - This plugin takes a configuration (via the `awsCredentialPluginParam`) that describes a role to assume when running the KCL. - - This configuration takes the form of a small json document like: - - ```json - - {"roleArn": "arn...", "roleSessionName": "name"} - - ``` - -### Example - -Before using the Kinesis sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "awsEndpoint": "some.endpoint.aws", - "awsRegion": "us-east-1", - "awsKinesisStreamName": "my-stream", - "awsCredentialPluginParam": "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}", - "messageFormat": "ONLY_RAW_PAYLOAD", - "retainOrdering": "true" - } - - ``` - -* YAML - - ```yaml - - configs: - awsEndpoint: "some.endpoint.aws" - awsRegion: "us-east-1" - awsKinesisStreamName: "my-stream" - awsCredentialPluginParam: "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}" - messageFormat: "ONLY_RAW_PAYLOAD" - retainOrdering: "true" - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/io-kinesis-source.md b/site2/website/versioned_docs/version-2.9.2-deprecated/io-kinesis-source.md deleted file mode 100644 index 0d07eefc3703b3..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/io-kinesis-source.md +++ /dev/null @@ -1,81 +0,0 @@ ---- -id: io-kinesis-source -title: Kinesis source connector -sidebar_label: "Kinesis source connector" -original_id: io-kinesis-source ---- - -The Kinesis source connector pulls data from Amazon Kinesis and persists data into Pulsar. - -This connector uses the [Kinesis Consumer Library](https://github.com/awslabs/amazon-kinesis-client) (KCL) to do the actual consuming of messages. The KCL uses DynamoDB to track state for consumers. - -> Note: currently, the Kinesis source connector only supports raw messages. If you use KMS encrypted messages, the encrypted messages are sent to downstream. This connector will support decrypting messages in the future release. - - -## Configuration - -The configuration of the Kinesis source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -`initialPositionInStream`|InitialPositionInStream|false|LATEST|The position where the connector starts from.

    Below are the available options:

      * `AT_TIMESTAMP`: start from the record at or after the specified timestamp.<br>
    <br>
      * `LATEST`: start after the most recent data record.<br>
    <br>
      * `TRIM_HORIZON`: start from the oldest available data record.
    -`startAtTime`|Date|false|" " (empty string)|If set to `AT_TIMESTAMP`, it specifies the point in time to start consumption.
    -`applicationName`|String|false|Pulsar IO connector|The name of the Amazon Kinesis application.<br>
    

        By default, the application name is included in the user agent string used to make AWS requests. This can assist with troubleshooting, for example, distinguishing requests made by separate connector instances.
    -`checkpointInterval`|long|false|60000|The frequency of the Kinesis stream checkpoint in milliseconds.
    -`backoffTime`|long|false|3000|The amount of time, in milliseconds, to delay between requests when the connector encounters a throttling exception from AWS Kinesis.
    -`numRetries`|int|false|3|The number of re-attempts when the connector encounters an exception while trying to set a checkpoint.
    -`receiveQueueSize`|int|false|1000|The maximum number of AWS records that can be buffered inside the connector.<br>
    

    Once the `receiveQueueSize` is reached, the connector does not consume any messages from Kinesis until some messages in the queue are successfully consumed. -`dynamoEndpoint`|String|false|" " (empty string)|The Dynamo end-point URL, which can be found at [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). -`cloudwatchEndpoint`|String|false|" " (empty string)|The Cloudwatch end-point URL, which can be found at [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). -`useEnhancedFanOut`|boolean|false|true|If set to true, it uses Kinesis enhanced fan-out.

    If set to false, it uses polling. -`awsEndpoint`|String|false|" " (empty string)|The Kinesis end-point URL, which can be found at [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). -`awsRegion`|String|false|" " (empty string)|The AWS region.

    **Example**
    us-west-1, us-west-2 -`awsKinesisStreamName`|String|true|" " (empty string)|The Kinesis stream name. -`awsCredentialPluginName`|String|false|" " (empty string)|The fully-qualified class name of implementation of {@inject: github:AwsCredentialProviderPlugin:/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java}.

        `awsCredentialProviderPlugin` has the following built-in plugins:<br>
    

      * `org.apache.pulsar.io.kinesis.AwsDefaultProviderChainPlugin`:<br>
        this plugin uses the default AWS provider chain.<br>
        For more information, see [using the default credential provider chain](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html#credentials-default).<br>
    <br>
      * `org.apache.pulsar.io.kinesis.STSAssumeRoleProviderPlugin`:<br>
        this plugin takes a configuration via the `awsCredentialPluginParam` that describes a role to assume when running the KCL.<br>
        **JSON configuration example**<br>
        `{"roleArn": "arn...", "roleSessionName": "name"}`<br>
    <br>
        `awsCredentialPluginName` is a factory class which creates an AWSCredentialsProvider that is used by the Kinesis source.<br>
    <br>
        If `awsCredentialPluginName` is set to empty, the Kinesis source creates a default AWSCredentialsProvider which accepts a JSON map of credentials in `awsCredentialPluginParam`.<br>
    
  2038. -`awsCredentialPluginParam`|String |false|" " (empty string)|The JSON parameter to initialize `awsCredentialsProviderPlugin`. - -### Example - -Before using the Kinesis source connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "awsEndpoint": "https://some.endpoint.aws", - "awsRegion": "us-east-1", - "awsKinesisStreamName": "my-stream", - "awsCredentialPluginParam": "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}", - "applicationName": "My test application", - "checkpointInterval": "30000", - "backoffTime": "4000", - "numRetries": "3", - "receiveQueueSize": 2000, - "initialPositionInStream": "TRIM_HORIZON", - "startAtTime": "2019-03-05T19:28:58.000Z" - } - - ``` - -* YAML - - ```yaml - - configs: - awsEndpoint: "https://some.endpoint.aws" - awsRegion: "us-east-1" - awsKinesisStreamName: "my-stream" - awsCredentialPluginParam: "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}" - applicationName: "My test application" - checkpointInterval: 30000 - backoffTime: 4000 - numRetries: 3 - receiveQueueSize: 2000 - initialPositionInStream: "TRIM_HORIZON" - startAtTime: "2019-03-05T19:28:58.000Z" - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/io-mongo-sink.md b/site2/website/versioned_docs/version-2.9.2-deprecated/io-mongo-sink.md deleted file mode 100644 index 30c15a6c280938..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/io-mongo-sink.md +++ /dev/null @@ -1,56 +0,0 @@ ---- -id: io-mongo-sink -title: MongoDB sink connector -sidebar_label: "MongoDB sink connector" -original_id: io-mongo-sink ---- - -The MongoDB sink connector pulls messages from Pulsar topics -and persists the messages to collections. - -## Configuration - -The configuration of the MongoDB sink connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `mongoUri` | String| true| " " (empty string) | The MongoDB URI to which the connector connects.

    For more information, see [connection string URI format](https://docs.mongodb.com/manual/reference/connection-string/). | -| `database` | String| true| " " (empty string)| The database name to which the collection belongs. | -| `collection` | String| true| " " (empty string)| The collection name to which the connector writes messages. | -| `batchSize` | int|false|100 | The batch size of writing messages to collections. | -| `batchTimeMs` |long|false|1000| The batch operation interval in milliseconds. | - - -### Example - -Before using the Mongo sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "mongoUri": "mongodb://localhost:27017", - "database": "pulsar", - "collection": "messages", - "batchSize": "2", - "batchTimeMs": "500" - } - - ``` - -* YAML - - ```yaml - - configs: - mongoUri: "mongodb://localhost:27017" - database: "pulsar" - collection: "messages" - batchSize: 2 - batchTimeMs: 500 - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/io-netty-source.md b/site2/website/versioned_docs/version-2.9.2-deprecated/io-netty-source.md deleted file mode 100644 index e1ec8d863115b3..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/io-netty-source.md +++ /dev/null @@ -1,241 +0,0 @@ ---- -id: io-netty-source -title: Netty source connector -sidebar_label: "Netty source connector" -original_id: io-netty-source ---- - -The Netty source connector opens a port that accepts incoming data via the configured network protocol -and publish it to user-defined Pulsar topics. - -This connector can be used in a containerized (for example, k8s) deployment. Otherwise, if the connector is running in process or thread mode, the instance may be conflicting on listening to ports. - -## Configuration - -The configuration of the Netty source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `type` |String| true |tcp | The network protocol over which data is transmitted to netty.

    Below are the available options:
      * tcp<br>
      * http<br>
      * udp<br>
    
  2042. | -| `host` | String|true | 127.0.0.1 | The host name or address on which the source instance listen. | -| `port` | int|true | 10999 | The port on which the source instance listen. | -| `numberOfThreads` |int| true |1 | The number of threads of Netty TCP server to accept incoming connections and handle the traffic of accepted connections. | - - -### Example - -Before using the Netty source connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "type": "tcp", - "host": "127.0.0.1", - "port": "10911", - "numberOfThreads": "1" - } - - ``` - -* YAML - - ```yaml - - configs: - type: "tcp" - host: "127.0.0.1" - port: 10999 - numberOfThreads: 1 - - ``` - -## Usage - -The following examples show how to use the Netty source connector with TCP and HTTP. - -### TCP - -1. Start Pulsar standalone. - - ```bash - - $ docker pull apachepulsar/pulsar:{version} - - $ docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-netty-standalone apachepulsar/pulsar:{version} bin/pulsar standalone - - ``` - -2. Create a configuration file _netty-source-config.yaml_. - - ```yaml - - configs: - type: "tcp" - host: "127.0.0.1" - port: 10999 - numberOfThreads: 1 - - ``` - -3. Copy the configuration file _netty-source-config.yaml_ to Pulsar server. - - ```bash - - $ docker cp netty-source-config.yaml pulsar-netty-standalone:/pulsar/conf/ - - ``` - -4. Download the Netty source connector. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - curl -O http://mirror-hk.koddos.net/apache/pulsar/pulsar-{version}/connectors/pulsar-io-netty-{version}.nar - - ``` - -5. Start the Netty source connector. - - ```bash - - $ ./bin/pulsar-admin sources localrun \ - --archive pulsar-io-@pulsar:version@.nar \ - --tenant public \ - --namespace default \ - --name netty \ - --destination-topic-name netty-topic \ - --source-config-file netty-source-config.yaml \ - --parallelism 1 - - ``` - -6. Consume data. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - - $ ./bin/pulsar-client consume -t Exclusive -s netty-sub netty-topic -n 0 - - ``` - -7. Open another terminal window to send data to the Netty source. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - - $ apt-get update - - $ apt-get -y install telnet - - $ root@1d19327b2c67:/pulsar# telnet 127.0.0.1 10999 - Trying 127.0.0.1... - Connected to 127.0.0.1. - Escape character is '^]'. - hello - world - - ``` - -8. The following information appears on the consumer terminal window. - - ```bash - - ----- got message ----- - hello - - ----- got message ----- - world - - ``` - -### HTTP - -1. Start Pulsar standalone. - - ```bash - - $ docker pull apachepulsar/pulsar:{version} - - $ docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-netty-standalone apachepulsar/pulsar:{version} bin/pulsar standalone - - ``` - -2. Create a configuration file _netty-source-config.yaml_. - - ```yaml - - configs: - type: "http" - host: "127.0.0.1" - port: 10999 - numberOfThreads: 1 - - ``` - -3. Copy the configuration file _netty-source-config.yaml_ to Pulsar server. - - ```bash - - $ docker cp netty-source-config.yaml pulsar-netty-standalone:/pulsar/conf/ - - ``` - -4. Download the Netty source connector. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - curl -O http://mirror-hk.koddos.net/apache/pulsar/pulsar-{version}/connectors/pulsar-io-netty-{version}.nar - - ``` - -5. Start the Netty source connector. 
- - ```bash - - $ ./bin/pulsar-admin sources localrun \ - --archive pulsar-io-@pulsar:version@.nar \ - --tenant public \ - --namespace default \ - --name netty \ - --destination-topic-name netty-topic \ - --source-config-file netty-source-config.yaml \ - --parallelism 1 - - ``` - -6. Consume data. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - - $ ./bin/pulsar-client consume -t Exclusive -s netty-sub netty-topic -n 0 - - ``` - -7. Open another terminal window to send data to the Netty source. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - - $ curl -X POST --data 'hello, world!' http://127.0.0.1:10999/ - - ``` - -8. The following information appears on the consumer terminal window. - - ```bash - - ----- got message ----- - hello, world! - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/io-nsq-source.md b/site2/website/versioned_docs/version-2.9.2-deprecated/io-nsq-source.md deleted file mode 100644 index b61e7e100c22e1..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/io-nsq-source.md +++ /dev/null @@ -1,21 +0,0 @@ ---- -id: io-nsq-source -title: NSQ source connector -sidebar_label: "NSQ source connector" -original_id: io-nsq-source ---- - -The NSQ source connector receives messages from NSQ topics -and writes messages to Pulsar topics. - -## Configuration - -The configuration of the NSQ source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `lookupds` |String| true | " " (empty string) | A comma-separated list of nsqlookupds to connect to. | -| `topic` | String|true | " " (empty string) | The NSQ topic to transport. | -| `channel` | String |false | pulsar-transport-{$topic} | The channel to consume from on the provided NSQ topic. | \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/io-overview.md b/site2/website/versioned_docs/version-2.9.2-deprecated/io-overview.md deleted file mode 100644 index 3db5ee34042d3f..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/io-overview.md +++ /dev/null @@ -1,164 +0,0 @@ ---- -id: io-overview -title: Pulsar connector overview -sidebar_label: "Overview" -original_id: io-overview ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -Messaging systems are most powerful when you can easily use them with external systems like databases and other messaging systems. - -**Pulsar IO connectors** enable you to easily create, deploy, and manage connectors that interact with external systems, such as [Apache Cassandra](https://cassandra.apache.org), [Aerospike](https://www.aerospike.com), and many others. - - -## Concept - -Pulsar IO connectors come in two types: **source** and **sink**. - -This diagram illustrates the relationship between source, Pulsar, and sink: - -![Pulsar IO diagram](/assets/pulsar-io.png "Pulsar IO connectors (sources and sinks)") - - -### Source - -> Sources **feed data from external systems into Pulsar**. - -Common sources include other messaging systems and firehose-style data pipeline APIs. - -For the complete list of Pulsar built-in source connectors, see [source connector](io-connectors.md#source-connector). - -### Sink - -> Sinks **feed data from Pulsar into external systems**. - -Common sinks include other messaging systems and SQL and NoSQL databases. 
- -For the complete list of Pulsar built-in sink connectors, see [sink connector](io-connectors.md#sink-connector). - -## Processing guarantee - -Processing guarantees are used to handle errors when writing messages to Pulsar topics. - -> Pulsar connectors and Functions use the **same** processing guarantees as below. - -Delivery semantic | Description -:------------------|:------- -`at-most-once` | Each message sent to a connector is to be **processed once** or **not to be processed**. -`at-least-once` | Each message sent to a connector is to be **processed once** or **more than once**. -`effectively-once` | Each message sent to a connector has **one output associated** with it. - -> Processing guarantees for connectors not just rely on Pulsar guarantee but also **relate to external systems**, that is, **the implementation of source and sink**. - -* Source: Pulsar ensures that writing messages to Pulsar topics respects to the processing guarantees. It is within Pulsar's control. - -* Sink: the processing guarantees rely on the sink implementation. If the sink implementation does not handle retries in an idempotent way, the sink does not respect to the processing guarantees. - -### Set - -When creating a connector, you can set the processing guarantee with the following semantics: - -* ATLEAST_ONCE - -* ATMOST_ONCE - -* EFFECTIVELY_ONCE - -> If `--processing-guarantees` is not specified when creating a connector, the default semantic is `ATLEAST_ONCE`. - -Here takes **Admin CLI** as an example. For more information about **REST API** or **JAVA Admin API**, see [here](io-use.md#create). - -````mdx-code-block - - - - -```bash - -$ bin/pulsar-admin sources create \ - --processing-guarantees ATMOST_ONCE \ - # Other source configs - -``` - -For more information about the options of `pulsar-admin sources create`, see [here](reference-connector-admin.md#create). - - - - -```bash - -$ bin/pulsar-admin sinks create \ - --processing-guarantees EFFECTIVELY_ONCE \ - # Other sink configs - -``` - -For more information about the options of `pulsar-admin sinks create`, see [here](reference-connector-admin.md#create-1). - - - - -```` - -### Update - -After creating a connector, you can update the processing guarantee with the following semantics: - -* ATLEAST_ONCE - -* ATMOST_ONCE - -* EFFECTIVELY_ONCE - -Here takes **Admin CLI** as an example. For more information about **REST API** or **JAVA Admin API**, see [here](io-use.md#create). - -````mdx-code-block - - - - -```bash - -$ bin/pulsar-admin sources update \ - --processing-guarantees EFFECTIVELY_ONCE \ - # Other source configs - -``` - -For more information about the options of `pulsar-admin sources update`, see [here](reference-connector-admin.md#update). - - - - -```bash - -$ bin/pulsar-admin sinks update \ - --processing-guarantees ATMOST_ONCE \ - # Other sink configs - -``` - -For more information about the options of `pulsar-admin sinks update`, see [here](reference-connector-admin.md#update-1). - - - - -```` - - -## Work with connector - -You can manage Pulsar connectors (for example, create, update, start, stop, restart, reload, delete and perform other operations on connectors) via the [Connector Admin CLI](reference-connector-admin.md) with [sources](io-cli.md#sources) and [sinks](io-cli.md#sinks) subcommands. - -Connectors (sources and sinks) and Functions are components of instances, and they all run on Functions workers. 
-When managing a source, sink, or function via the [Connector Admin CLI](reference-connector-admin.md) or the [Functions Admin CLI](functions-cli.md), an instance is started on a worker. For more information, see [Functions worker](functions-worker.md#run-functions-worker-separately).
-
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/io-quickstart.md b/site2/website/versioned_docs/version-2.9.2-deprecated/io-quickstart.md
deleted file mode 100644
index 40eaf5c1de2214..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/io-quickstart.md
+++ /dev/null
@@ -1,963 +0,0 @@
----
-id: io-quickstart
-title: How to connect Pulsar to database
-sidebar_label: "Get started"
-original_id: io-quickstart
----
-
-This tutorial provides a hands-on look at how you can move data out of Pulsar without writing a single line of code.
-
-It is helpful to review the [concepts](io-overview.md) for Pulsar I/O while running the steps in this guide to gain a deeper understanding.
-
-At the end of this tutorial, you are able to:
-
-- [Connect Pulsar to Cassandra](#connect-pulsar-to-cassandra)
-
-- [Connect Pulsar to PostgreSQL](#connect-pulsar-to-postgresql)
-
-:::tip
-
-* These instructions assume you are running Pulsar in [standalone mode](getting-started-standalone.md). However, all the commands used in this tutorial can be used in a multi-node Pulsar cluster without any changes.
-* All the instructions assume that you run them from the root directory of a Pulsar binary distribution.
-
-:::
-
-## Install Pulsar and built-in connector
-
-Before connecting Pulsar to a database, you need to install Pulsar and the desired built-in connector.
-
-For more information about **how to install a standalone Pulsar and built-in connectors**, see [here](getting-started-standalone.md/#installing-pulsar).
-
-## Start Pulsar standalone
-
-1. Start Pulsar locally.
-
-   ```bash
-
-   bin/pulsar standalone
-
-   ```
-
-   All the components of a Pulsar service are started in order.
-
-   You can curl these Pulsar service endpoints to make sure the Pulsar service is up and running correctly.
-
-2. Check the Pulsar binary protocol port.
-
-   ```bash
-
-   telnet localhost 6650
-
-   ```
-
-3. Check the Pulsar Function cluster.
-
-   ```bash
-
-   curl -s http://localhost:8080/admin/v2/worker/cluster
-
-   ```
-
-   **Example output**
-
-   ```json
-
-   [{"workerId":"c-standalone-fw-localhost-6750","workerHostname":"localhost","port":6750}]
-
-   ```
-
-4. Make sure a public tenant and a default namespace exist.
-
-   ```bash
-
-   curl -s http://localhost:8080/admin/v2/namespaces/public
-
-   ```
-
-   **Example output**
-
-   ```json
-
-   ["public/default","public/functions"]
-
-   ```
-
-5. All built-in connectors should be listed as available.
- - ```bash - - curl -s http://localhost:8080/admin/v2/functions/connectors - - ``` - - **Example output** - - ```json - - [{"name":"aerospike","description":"Aerospike database sink","sinkClass":"org.apache.pulsar.io.aerospike.AerospikeStringSink"},{"name":"cassandra","description":"Writes data into Cassandra","sinkClass":"org.apache.pulsar.io.cassandra.CassandraStringSink"},{"name":"kafka","description":"Kafka source and sink connector","sourceClass":"org.apache.pulsar.io.kafka.KafkaStringSource","sinkClass":"org.apache.pulsar.io.kafka.KafkaBytesSink"},{"name":"kinesis","description":"Kinesis sink connector","sinkClass":"org.apache.pulsar.io.kinesis.KinesisSink"},{"name":"rabbitmq","description":"RabbitMQ source connector","sourceClass":"org.apache.pulsar.io.rabbitmq.RabbitMQSource"},{"name":"twitter","description":"Ingest data from Twitter firehose","sourceClass":"org.apache.pulsar.io.twitter.TwitterFireHose"}] - - ``` - - If an error occurs when starting Pulsar service, you may see an exception at the terminal running `pulsar/standalone`, - or you can navigate to the `logs` directory under the Pulsar directory to view the logs. - -## Connect Pulsar to Cassandra - -This section demonstrates how to connect Pulsar to Cassandra. - -:::tip - -* Make sure you have Docker installed. If you do not have one, see [install Docker](https://docs.docker.com/docker-for-mac/install/). -* The Cassandra sink connector reads messages from Pulsar topics and writes the messages into Cassandra tables. For more information, see [Cassandra sink connector](io-cassandra-sink.md). - -::: - -### Setup a Cassandra cluster - -This example uses `cassandra` Docker image to start a single-node Cassandra cluster in Docker. - -1. Start a Cassandra cluster. - - ```bash - - docker run -d --rm --name=cassandra -p 9042:9042 cassandra - - ``` - - :::note - - Before moving to the next steps, make sure the Cassandra cluster is running. - - ::: - -2. Make sure the Docker process is running. - - ```bash - - docker ps - - ``` - -3. Check the Cassandra logs to make sure the Cassandra process is running as expected. - - ```bash - - docker logs cassandra - - ``` - -4. Check the status of the Cassandra cluster. - - ```bash - - docker exec cassandra nodetool status - - ``` - - **Example output** - - ``` - - Datacenter: datacenter1 - ======================= - Status=Up/Down - |/ State=Normal/Leaving/Joining/Moving - -- Address Load Tokens Owns (effective) Host ID Rack - UN 172.17.0.2 103.67 KiB 256 100.0% af0e4b2f-84e0-4f0b-bb14-bd5f9070ff26 rack1 - - ``` - -5. Use `cqlsh` to connect to the Cassandra cluster. - - ```bash - - $ docker exec -ti cassandra cqlsh localhost - Connected to Test Cluster at localhost:9042. - [cqlsh 5.0.1 | Cassandra 3.11.2 | CQL spec 3.4.4 | Native protocol v4] - Use HELP for help. - cqlsh> - - ``` - -6. Create a keyspace `pulsar_test_keyspace`. - - ```bash - - cqlsh> CREATE KEYSPACE pulsar_test_keyspace WITH replication = {'class':'SimpleStrategy', 'replication_factor':1}; - - ``` - -7. Create a table `pulsar_test_table`. - - ```bash - - cqlsh> USE pulsar_test_keyspace; - cqlsh:pulsar_test_keyspace> CREATE TABLE pulsar_test_table (key text PRIMARY KEY, col text); - - ``` - -### Configure a Cassandra sink - -Now that we have a Cassandra cluster running locally. - -In this section, you need to configure a Cassandra sink connector. - -To run a Cassandra sink connector, you need to prepare a configuration file including the information that Pulsar connector runtime needs to know. 
- -For example, how Pulsar connector can find the Cassandra cluster, what is the keyspace and the table that Pulsar connector uses for writing Pulsar messages to, and so on. - -You can create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "roots": "localhost:9042", - "keyspace": "pulsar_test_keyspace", - "columnFamily": "pulsar_test_table", - "keyname": "key", - "columnName": "col" - } - - ``` - -* YAML - - ```yaml - - configs: - roots: "localhost:9042" - keyspace: "pulsar_test_keyspace" - columnFamily: "pulsar_test_table" - keyname: "key" - columnName: "col" - - ``` - -For more information, see [Cassandra sink connector](io-cassandra-sink.md). - -### Create a Cassandra sink - -You can use the [Connector Admin CLI](io-cli.md) -to create a sink connector and perform other operations on them. - -Run the following command to create a Cassandra sink connector with sink type _cassandra_ and the config file _examples/cassandra-sink.yml_ created previously. - -#### Note -> The `sink-type` parameter of the currently built-in connectors is determined by the setting of the `name` parameter specified in the pulsar-io.yaml file. - -```bash - -bin/pulsar-admin sinks create \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink \ - --sink-type cassandra \ - --sink-config-file examples/cassandra-sink.yml \ - --inputs test_cassandra - -``` - -Once the command is executed, Pulsar creates the sink connector _cassandra-test-sink_. - -This sink connector runs -as a Pulsar Function and writes the messages produced in the topic _test_cassandra_ to the Cassandra table _pulsar_test_table_. - -### Inspect a Cassandra sink - -You can use the [Connector Admin CLI](io-cli.md) -to monitor a connector and perform other operations on it. - -* Get the information of a Cassandra sink. - - ```bash - - bin/pulsar-admin sinks get \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink - - ``` - - **Example output** - - ```json - - { - "tenant": "public", - "namespace": "default", - "name": "cassandra-test-sink", - "className": "org.apache.pulsar.io.cassandra.CassandraStringSink", - "inputSpecs": { - "test_cassandra": { - "isRegexPattern": false - } - }, - "configs": { - "roots": "localhost:9042", - "keyspace": "pulsar_test_keyspace", - "columnFamily": "pulsar_test_table", - "keyname": "key", - "columnName": "col" - }, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "autoAck": true, - "archive": "builtin://cassandra" - } - - ``` - -* Check the status of a Cassandra sink. - - ```bash - - bin/pulsar-admin sinks status \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink - - ``` - - **Example output** - - ```json - - { - "numInstances" : 1, - "numRunning" : 1, - "instances" : [ { - "instanceId" : 0, - "status" : { - "running" : true, - "error" : "", - "numRestarts" : 0, - "numReadFromPulsar" : 0, - "numSystemExceptions" : 0, - "latestSystemExceptions" : [ ], - "numSinkExceptions" : 0, - "latestSinkExceptions" : [ ], - "numWrittenToSink" : 0, - "lastReceivedTime" : 0, - "workerId" : "c-standalone-fw-localhost-8080" - } - } ] - } - - ``` - -### Verify a Cassandra sink - -1. Produce some messages to the input topic of the Cassandra sink _test_cassandra_. - - ```bash - - for i in {0..9}; do bin/pulsar-client produce -m "key-$i" -n 1 test_cassandra; done - - ``` - -2. Inspect the status of the Cassandra sink _test_cassandra_. 
- - ```bash - - bin/pulsar-admin sinks status \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink - - ``` - - You can see 10 messages are processed by the Cassandra sink _test_cassandra_. - - **Example output** - - ```json - - { - "numInstances" : 1, - "numRunning" : 1, - "instances" : [ { - "instanceId" : 0, - "status" : { - "running" : true, - "error" : "", - "numRestarts" : 0, - "numReadFromPulsar" : 10, - "numSystemExceptions" : 0, - "latestSystemExceptions" : [ ], - "numSinkExceptions" : 0, - "latestSinkExceptions" : [ ], - "numWrittenToSink" : 10, - "lastReceivedTime" : 1551685489136, - "workerId" : "c-standalone-fw-localhost-8080" - } - } ] - } - - ``` - -3. Use `cqlsh` to connect to the Cassandra cluster. - - ```bash - - docker exec -ti cassandra cqlsh localhost - - ``` - -4. Check the data of the Cassandra table _pulsar_test_table_. - - ```bash - - cqlsh> use pulsar_test_keyspace; - cqlsh:pulsar_test_keyspace> select * from pulsar_test_table; - - key | col - --------+-------- - key-5 | key-5 - key-0 | key-0 - key-9 | key-9 - key-2 | key-2 - key-1 | key-1 - key-3 | key-3 - key-6 | key-6 - key-7 | key-7 - key-4 | key-4 - key-8 | key-8 - - ``` - -### Delete a Cassandra Sink - -You can use the [Connector Admin CLI](io-cli.md) -to delete a connector and perform other operations on it. - -```bash - -bin/pulsar-admin sinks delete \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink - -``` - -## Connect Pulsar to PostgreSQL - -This section demonstrates how to connect Pulsar to PostgreSQL. - -:::tip - -* Make sure you have Docker installed. If you do not have one, see [install Docker](https://docs.docker.com/docker-for-mac/install/). -* The JDBC sink connector pulls messages from Pulsar topics and persists the messages to ClickHouse, MariaDB, PostgreSQL, or SQlite. - -::: - ->For more information, see [JDBC sink connector](io-jdbc-sink.md). - - -### Setup a PostgreSQL cluster - -This example uses the PostgreSQL 12 docker image to start a single-node PostgreSQL cluster in Docker. - -1. Pull the PostgreSQL 12 image from Docker. - - ```bash - - $ docker pull postgres:12 - - ``` - -2. Start PostgreSQL. - - ```bash - - $ docker run -d -it --rm \ - --name pulsar-postgres \ - -p 5432:5432 \ - -e POSTGRES_PASSWORD=password \ - -e POSTGRES_USER=postgres \ - postgres:12 - - ``` - - #### Tip - - Flag | Description | This example - ---|---|---| - `-d` | To start a container in detached mode. | / - `-it` | Keep STDIN open even if not attached and allocate a terminal. | / - `--rm` | Remove the container automatically when it exits. | / - `-name` | Assign a name to the container. | This example specifies _pulsar-postgres_ for the container. - `-p` | Publish the port of the container to the host. | This example publishes the port _5432_ of the container to the host. - `-e` | Set environment variables. | This example sets the following variables:
    - The password for the user is _password_.
    - The name for the user is _postgres_. - - :::tip - - For more information about Docker commands, see [Docker CLI](https://docs.docker.com/engine/reference/commandline/run/). - - ::: - -3. Check if PostgreSQL has been started successfully. - - ```bash - - $ docker logs -f pulsar-postgres - - ``` - - PostgreSQL has been started successfully if the following message appears. - - ```text - - 2020-05-11 20:09:24.492 UTC [1] LOG: starting PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit - 2020-05-11 20:09:24.492 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 - 2020-05-11 20:09:24.492 UTC [1] LOG: listening on IPv6 address "::", port 5432 - 2020-05-11 20:09:24.499 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" - 2020-05-11 20:09:24.523 UTC [55] LOG: database system was shut down at 2020-05-11 20:09:24 UTC - 2020-05-11 20:09:24.533 UTC [1] LOG: database system is ready to accept connections - - ``` - -4. Access to PostgreSQL. - - ```bash - - $ docker exec -it pulsar-postgres /bin/bash - - ``` - -5. Create a PostgreSQL table _pulsar_postgres_jdbc_sink_. - - ```bash - - $ psql -U postgres postgres - - postgres=# create table if not exists pulsar_postgres_jdbc_sink - ( - id serial PRIMARY KEY, - name VARCHAR(255) NOT NULL - ); - - ``` - -### Configure a JDBC sink - -Now we have a PostgreSQL running locally. - -In this section, you need to configure a JDBC sink connector. - -1. Add a configuration file. - - To run a JDBC sink connector, you need to prepare a YAML configuration file including the information that Pulsar connector runtime needs to know. - - For example, how Pulsar connector can find the PostgreSQL cluster, what is the JDBC URL and the table that Pulsar connector uses for writing messages to. - - Create a _pulsar-postgres-jdbc-sink.yaml_ file, copy the following contents to this file, and place the file in the `pulsar/connectors` folder. - - ```yaml - - configs: - userName: "postgres" - password: "password" - jdbcUrl: "jdbc:postgresql://localhost:5432/postgres" - tableName: "pulsar_postgres_jdbc_sink" - - ``` - -2. Create a schema. - - Create a _avro-schema_ file, copy the following contents to this file, and place the file in the `pulsar/connectors` folder. - - ```json - - { - "type": "AVRO", - "schema": "{\"type\":\"record\",\"name\":\"Test\",\"fields\":[{\"name\":\"id\",\"type\":[\"null\",\"int\"]},{\"name\":\"name\",\"type\":[\"null\",\"string\"]}]}", - "properties": {} - } - - ``` - - :::tip - - For more information about AVRO, see [Apache Avro](https://avro.apache.org/docs/1.9.1/). - - ::: - -3. Upload a schema to a topic. - - This example uploads the _avro-schema_ schema to the _pulsar-postgres-jdbc-sink-topic_ topic. - - ```bash - - $ bin/pulsar-admin schemas upload pulsar-postgres-jdbc-sink-topic -f ./connectors/avro-schema - - ``` - -4. Check if the schema has been uploaded successfully. - - ```bash - - $ bin/pulsar-admin schemas get pulsar-postgres-jdbc-sink-topic - - ``` - - The schema has been uploaded successfully if the following message appears. - - ```json - - {"name":"pulsar-postgres-jdbc-sink-topic","schema":"{\"type\":\"record\",\"name\":\"Test\",\"fields\":[{\"name\":\"id\",\"type\":[\"null\",\"int\"]},{\"name\":\"name\",\"type\":[\"null\",\"string\"]}]}","type":"AVRO","properties":{}} - - ``` - -### Create a JDBC sink - -You can use the [Connector Admin CLI](io-cli.md) -to create a sink connector and perform other operations on it. 
- -This example creates a sink connector and specifies the desired information. - -```bash - -$ bin/pulsar-admin sinks create \ ---archive ./connectors/pulsar-io-jdbc-postgres-@pulsar:version@.nar \ ---inputs pulsar-postgres-jdbc-sink-topic \ ---name pulsar-postgres-jdbc-sink \ ---sink-config-file ./connectors/pulsar-postgres-jdbc-sink.yaml \ ---parallelism 1 - -``` - -Once the command is executed, Pulsar creates a sink connector _pulsar-postgres-jdbc-sink_. - -This sink connector runs as a Pulsar Function and writes the messages produced in the topic _pulsar-postgres-jdbc-sink-topic_ to the PostgreSQL table _pulsar_postgres_jdbc_sink_. - - #### Tip - - Flag | Description | This example - ---|---|---| - `--archive` | The path to the archive file for the sink. | _pulsar-io-jdbc-postgres-@pulsar:version@.nar_ | - `--inputs` | The input topic(s) of the sink.

    Multiple topics can be specified as a comma-separated list.|| - `--name` | The name of the sink. | _pulsar-postgres-jdbc-sink_ | - `--sink-config-file` | The path to a YAML config file specifying the configuration of the sink. | _pulsar-postgres-jdbc-sink.yaml_ | - `--parallelism` | The parallelism factor of the sink.

    For example, the number of sink instances to run. | _1_ | - -:::tip - -For more information about `pulsar-admin sinks create options`, see [here](io-cli.md#sinks). - -::: - -The sink has been created successfully if the following message appears. - -```bash - -"Created successfully" - -``` - -### Inspect a JDBC sink - -You can use the [Connector Admin CLI](io-cli.md) -to monitor a connector and perform other operations on it. - -* List all running JDBC sink(s). - - ```bash - - $ bin/pulsar-admin sinks list \ - --tenant public \ - --namespace default - - ``` - - :::tip - - For more information about `pulsar-admin sinks list options`, see [here](io-cli.md/#list-1). - - ::: - - The result shows that only the _postgres-jdbc-sink_ sink is running. - - ```json - - [ - "pulsar-postgres-jdbc-sink" - ] - - ``` - -* Get the information of a JDBC sink. - - ```bash - - $ bin/pulsar-admin sinks get \ - --tenant public \ - --namespace default \ - --name pulsar-postgres-jdbc-sink - - ``` - - :::tip - - For more information about `pulsar-admin sinks get options`, see [here](io-cli.md/#get-1). - - ::: - - The result shows the information of the sink connector, including tenant, namespace, topic and so on. - - ```json - - { - "tenant": "public", - "namespace": "default", - "name": "pulsar-postgres-jdbc-sink", - "className": "org.apache.pulsar.io.jdbc.PostgresJdbcAutoSchemaSink", - "inputSpecs": { - "pulsar-postgres-jdbc-sink-topic": { - "isRegexPattern": false - } - }, - "configs": { - "password": "password", - "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink", - "userName": "postgres", - "tableName": "pulsar_postgres_jdbc_sink" - }, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "autoAck": true - } - - ``` - -* Get the status of a JDBC sink - - ```bash - - $ bin/pulsar-admin sinks status \ - --tenant public \ - --namespace default \ - --name pulsar-postgres-jdbc-sink - - ``` - - :::tip - - For more information about `pulsar-admin sinks status options`, see [here](io-cli.md/#status-1). - - ::: - - The result shows the current status of sink connector, including the number of instance, running status, worker ID and so on. - - ```json - - { - "numInstances" : 1, - "numRunning" : 1, - "instances" : [ { - "instanceId" : 0, - "status" : { - "running" : true, - "error" : "", - "numRestarts" : 0, - "numReadFromPulsar" : 0, - "numSystemExceptions" : 0, - "latestSystemExceptions" : [ ], - "numSinkExceptions" : 0, - "latestSinkExceptions" : [ ], - "numWrittenToSink" : 0, - "lastReceivedTime" : 0, - "workerId" : "c-standalone-fw-192.168.2.52-8080" - } - } ] - } - - ``` - -### Stop a JDBC sink - -You can use the [Connector Admin CLI](io-cli.md) -to stop a connector and perform other operations on it. - -```bash - -$ bin/pulsar-admin sinks stop \ ---tenant public \ ---namespace default \ ---name pulsar-postgres-jdbc-sink - -``` - -:::tip - -For more information about `pulsar-admin sinks stop options`, see [here](io-cli.md/#stop-1). - -::: - -The sink instance has been stopped successfully if the following message disappears. - -```bash - -"Stopped successfully" - -``` - -### Restart a JDBC sink - -You can use the [Connector Admin CLI](io-cli.md) -to restart a connector and perform other operations on it. - -```bash - -$ bin/pulsar-admin sinks restart \ ---tenant public \ ---namespace default \ ---name pulsar-postgres-jdbc-sink - -``` - -:::tip - -For more information about `pulsar-admin sinks restart options`, see [here](io-cli.md/#restart-1). 
- -::: - -The sink instance has been started successfully if the following message disappears. - -```bash - -"Started successfully" - -``` - -:::tip - -* Optionally, you can run a standalone sink connector using `pulsar-admin sinks localrun options`. -Note that `pulsar-admin sinks localrun options` **runs a sink connector locally**, while `pulsar-admin sinks start options` **starts a sink connector in a cluster**. -* For more information about `pulsar-admin sinks localrun options`, see [here](io-cli.md#localrun-1). - -::: - -### Update a JDBC sink - -You can use the [Connector Admin CLI](io-cli.md) -to update a connector and perform other operations on it. - -This example updates the parallelism of the _pulsar-postgres-jdbc-sink_ sink connector to 2. - -```bash - -$ bin/pulsar-admin sinks update \ ---name pulsar-postgres-jdbc-sink \ ---parallelism 2 - -``` - -:::tip - -For more information about `pulsar-admin sinks update options`, see [here](io-cli.md/#update-1). - -::: - -The sink connector has been updated successfully if the following message disappears. - -```bash - -"Updated successfully" - -``` - -This example double-checks the information. - -```bash - -$ bin/pulsar-admin sinks get \ ---tenant public \ ---namespace default \ ---name pulsar-postgres-jdbc-sink - -``` - -The result shows that the parallelism is 2. - -```json - -{ - "tenant": "public", - "namespace": "default", - "name": "pulsar-postgres-jdbc-sink", - "className": "org.apache.pulsar.io.jdbc.PostgresJdbcAutoSchemaSink", - "inputSpecs": { - "pulsar-postgres-jdbc-sink-topic": { - "isRegexPattern": false - } - }, - "configs": { - "password": "password", - "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink", - "userName": "postgres", - "tableName": "pulsar_postgres_jdbc_sink" - }, - "parallelism": 2, - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "autoAck": true -} - -``` - -### Delete a JDBC sink - -You can use the [Connector Admin CLI](io-cli.md) -to delete a connector and perform other operations on it. - -This example deletes the _pulsar-postgres-jdbc-sink_ sink connector. - -```bash - -$ bin/pulsar-admin sinks delete \ ---tenant public \ ---namespace default \ ---name pulsar-postgres-jdbc-sink - -``` - -:::tip - -For more information about `pulsar-admin sinks delete options`, see [here](io-cli.md/#delete-1). - -::: - -The sink connector has been deleted successfully if the following message appears. - -```text - -"Deleted successfully" - -``` - -This example double-checks the status of the sink connector. - -```bash - -$ bin/pulsar-admin sinks get \ ---tenant public \ ---namespace default \ ---name pulsar-postgres-jdbc-sink - -``` - -The result shows that the sink connector does not exist. - -```text - -HTTP 404 Not Found - -Reason: Sink pulsar-postgres-jdbc-sink doesn't exist - -``` - diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/io-rabbitmq-sink.md b/site2/website/versioned_docs/version-2.9.2-deprecated/io-rabbitmq-sink.md deleted file mode 100644 index d7fda99460dc97..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/io-rabbitmq-sink.md +++ /dev/null @@ -1,85 +0,0 @@ ---- -id: io-rabbitmq-sink -title: RabbitMQ sink connector -sidebar_label: "RabbitMQ sink connector" -original_id: io-rabbitmq-sink ---- - -The RabbitMQ sink connector pulls messages from Pulsar topics -and persist the messages to RabbitMQ queues. - - -## Configuration - -The configuration of the RabbitMQ sink connector has the following properties. 
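-
-Before walking through the individual properties, it may help to see where this configuration ends up. The following is a minimal, hypothetical sketch that deploys the sink with the Connector Admin CLI; the archive path follows the usual built-in connector naming, and the config file name `rabbitmq-sink-config.yaml` is illustrative:
-
-```bash
-
-# Deploy the RabbitMQ sink from a NAR archive and a config file;
-# both paths are illustrative and depend on your installation.
-bin/pulsar-admin sinks create \
-  --archive connectors/pulsar-io-rabbitmq-@pulsar:version@.nar \
-  --tenant public \
-  --namespace default \
-  --name rabbitmq-test-sink \
-  --sink-config-file rabbitmq-sink-config.yaml \
-  --inputs test-rabbitmq
-
-```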
- - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `connectionName` |String| true | " " (empty string) | The connection name. | -| `host` | String| true | " " (empty string) | The RabbitMQ host. | -| `port` | int |true | 5672 | The RabbitMQ port. | -| `virtualHost` |String|true | / | The virtual host used to connect to RabbitMQ. | -| `username` | String|false | guest | The username used to authenticate to RabbitMQ. | -| `password` | String|false | guest | The password used to authenticate to RabbitMQ. | -| `queueName` | String|true | " " (empty string) | The RabbitMQ queue name that messages should be read from or written to. | -| `requestedChannelMax` | int|false | 0 | The initially requested maximum channel number.

    0 means unlimited. | -| `requestedFrameMax` | int|false |0 | The initially requested maximum frame size in octets.

    0 means unlimited. | -| `connectionTimeout` | int|false | 60000 | The timeout of TCP connection establishment in milliseconds.

-    0 means infinite. |
-| `handshakeTimeout` | int|false | 10000 | The timeout of the AMQP 0-9-1 protocol handshake in milliseconds. |
-| `exchangeName` | String|true | " " (empty string) | The exchange to publish messages to. |
-| `requestedHeartbeat` | int|false | 60 | The requested heartbeat timeout in seconds.

    0 means unlimited. | -| `prefetchGlobal` |String|true | " " (empty string) |The routing key used to publish messages. | - - -### Example - -Before using the RabbitMQ sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "host": "localhost", - "port": "5672", - "virtualHost": "/", - "username": "guest", - "password": "guest", - "queueName": "test-queue", - "connectionName": "test-connection", - "requestedChannelMax": "0", - "requestedFrameMax": "0", - "connectionTimeout": "60000", - "handshakeTimeout": "10000", - "requestedHeartbeat": "60", - "exchangeName": "test-exchange", - "routingKey": "test-key" - } - - ``` - -* YAML - - ```yaml - - configs: - host: "localhost" - port: 5672 - virtualHost: "/", - username: "guest" - password: "guest" - queueName: "test-queue" - connectionName: "test-connection" - requestedChannelMax: 0 - requestedFrameMax: 0 - connectionTimeout: 60000 - handshakeTimeout: 10000 - requestedHeartbeat: 60 - exchangeName: "test-exchange" - routingKey: "test-key" - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/io-rabbitmq-source.md b/site2/website/versioned_docs/version-2.9.2-deprecated/io-rabbitmq-source.md deleted file mode 100644 index c2c31cc97d10d9..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/io-rabbitmq-source.md +++ /dev/null @@ -1,85 +0,0 @@ ---- -id: io-rabbitmq-source -title: RabbitMQ source connector -sidebar_label: "RabbitMQ source connector" -original_id: io-rabbitmq-source ---- - -The RabbitMQ source connector receives messages from RabbitMQ clusters -and writes messages to Pulsar topics. - -## Configuration - -The configuration of the RabbitMQ source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `connectionName` |String| true | " " (empty string) | The connection name. | -| `host` | String| true | " " (empty string) | The RabbitMQ host. | -| `port` | int |true | 5672 | The RabbitMQ port. | -| `virtualHost` |String|true | / | The virtual host used to connect to RabbitMQ. | -| `username` | String|false | guest | The username used to authenticate to RabbitMQ. | -| `password` | String|false | guest | The password used to authenticate to RabbitMQ. | -| `queueName` | String|true | " " (empty string) | The RabbitMQ queue name that messages should be read from or written to. | -| `requestedChannelMax` | int|false | 0 | The initially requested maximum channel number.

    0 means unlimited. | -| `requestedFrameMax` | int|false |0 | The initially requested maximum frame size in octets.

    0 means unlimited. | -| `connectionTimeout` | int|false | 60000 | The timeout of TCP connection establishment in milliseconds.

    0 means infinite. | -| `handshakeTimeout` | int|false | 10000 | The timeout of AMQP0-9-1 protocol handshake in milliseconds. | -| `requestedHeartbeat` | int|false | 60 | The requested heartbeat timeout in seconds. | -| `prefetchCount` | int|false | 0 | The maximum number of messages that the server delivers.

    0 means unlimited. | -| `prefetchGlobal` | boolean|false | false |Whether the setting should be applied to the entire channel rather than each consumer. | -| `passive` | boolean|false | false | Whether the rabbitmq consumer should create its own queue or bind to an existing one. | - -### Example - -Before using the RabbitMQ source connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "host": "localhost", - "port": "5672", - "virtualHost": "/", - "username": "guest", - "password": "guest", - "queueName": "test-queue", - "connectionName": "test-connection", - "requestedChannelMax": "0", - "requestedFrameMax": "0", - "connectionTimeout": "60000", - "handshakeTimeout": "10000", - "requestedHeartbeat": "60", - "prefetchCount": "0", - "prefetchGlobal": "false", - "passive": "false" - } - - ``` - -* YAML - - ```yaml - - configs: - host: "localhost" - port: 5672 - virtualHost: "/" - username: "guest" - password: "guest" - queueName: "test-queue" - connectionName: "test-connection" - requestedChannelMax: 0 - requestedFrameMax: 0 - connectionTimeout: 60000 - handshakeTimeout: 10000 - requestedHeartbeat: 60 - prefetchCount: 0 - prefetchGlobal: "false" - passive: "false" - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/io-redis-sink.md b/site2/website/versioned_docs/version-2.9.2-deprecated/io-redis-sink.md deleted file mode 100644 index 0caf21bcf62e88..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/io-redis-sink.md +++ /dev/null @@ -1,156 +0,0 @@ ---- -id: io-redis-sink -title: Redis sink connector -sidebar_label: "Redis sink connector" -original_id: io-redis-sink ---- - -The Redis sink connector pulls messages from Pulsar topics -and persists the messages to a Redis database. - - - -## Configuration - -The configuration of the Redis sink connector has the following properties. - - - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `redisHosts` |String|true|" " (empty string) | A comma-separated list of Redis hosts to connect to. | -| `redisPassword` |String|false|" " (empty string) | The password used to connect to Redis. | -| `redisDatabase` | int|true|0 | The Redis database to connect to. | -| `clientMode` |String| false|Standalone | The client mode when interacting with Redis cluster.

-    Below are the available options:
-  * Standalone
-  * Cluster
  2045. | -| `autoReconnect` | boolean|false|true | Whether the Redis client automatically reconnect or not. | -| `requestQueue` | int|false|2147483647 | The maximum number of queued requests to Redis. | -| `tcpNoDelay` |boolean| false| false | Whether to enable TCP with no delay or not. | -| `keepAlive` | boolean|false | false |Whether to enable a keepalive to Redis or not. | -| `connectTimeout` |long| false|10000 | The time to wait before timing out when connecting in milliseconds. | -| `operationTimeout` | long|false|10000 | The time before an operation is marked as timed out in milliseconds . | -| `batchTimeMs` | int|false|1000 | The Redis operation time in milliseconds. | -| `batchSize` | int|false|200 | The batch size of writing to Redis database. | - - -### Example - -Before using the Redis sink connector, you need to create a configuration file in the path you will start Pulsar service (i.e. `PULSAR_HOME`) through one of the following methods. - -* JSON - - ```json - - { - "redisHosts": "localhost:6379", - "redisPassword": "mypassword", - "redisDatabase": "0", - "clientMode": "Standalone", - "operationTimeout": "2000", - "batchSize": "1", - "batchTimeMs": "1000", - "connectTimeout": "3000" - } - - ``` - -* YAML - - ```yaml - - configs: - redisHosts: "localhost:6379" - redisPassword: "mypassword" - redisDatabase: 0 - clientMode: "Standalone" - operationTimeout: 2000 - batchSize: 1 - batchTimeMs: 1000 - connectTimeout: 3000 - - ``` - -### Usage - -This example shows how to write records to a Redis database using the Pulsar Redis connector. - -1. Start a Redis server. - - ```bash - - $ docker pull redis:5.0.5 - $ docker run -d -p 6379:6379 --name my-redis redis:5.0.5 --requirepass "mypassword" - - ``` - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - - Make sure the NAR file is available at `connectors/pulsar-io-redis-@pulsar:version@.nar`. - -3. Start the Pulsar Redis connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin sinks localrun \ - --archive connectors/pulsar-io-redis-@pulsar:version@.nar \ - --tenant public \ - --namespace default \ - --name my-redis-sink \ - --sink-config '{"redisHosts": "localhost:6379","redisPassword": "mypassword","redisDatabase": "0","clientMode": "Standalone","operationTimeout": "3000","batchSize": "1"}' \ - --inputs my-redis-topic - - ``` - - * Use the **YAML** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin sinks localrun \ - --archive connectors/pulsar-io-redis-@pulsar:version@.nar \ - --tenant public \ - --namespace default \ - --name my-redis-sink \ - --sink-config-file redis-sink-config.yaml \ - --inputs my-redis-topic - - ``` - -4. Publish records to the topic. - - ```bash - - $ bin/pulsar-client produce \ - persistent://public/default/my-redis-topic \ - -k "streaming" \ - -m "Pulsar" - - ``` - -5. Start a Redis client in Docker. - - ```bash - - $ docker exec -it my-redis redis-cli -a "mypassword" - - ``` - -6. Check the key/value in Redis. 
- - ``` - - 127.0.0.1:6379> keys * - 1) "streaming" - 127.0.0.1:6379> get "streaming" - "Pulsar" - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/io-solr-sink.md b/site2/website/versioned_docs/version-2.9.2-deprecated/io-solr-sink.md deleted file mode 100644 index df2c3612c38eb6..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/io-solr-sink.md +++ /dev/null @@ -1,65 +0,0 @@ ---- -id: io-solr-sink -title: Solr sink connector -sidebar_label: "Solr sink connector" -original_id: io-solr-sink ---- - -The Solr sink connector pulls messages from Pulsar topics -and persists the messages to Solr collections. - - - -## Configuration - -The configuration of the Solr sink connector has the following properties. - - - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `solrUrl` | String|true|" " (empty string) |
-    The URL to connect to Solr.
-    In SolrCloud mode, specify comma-separated ZooKeeper hosts with chroot.
-    **Example**
-    `localhost:2181,localhost:2182/chroot`
-    In standalone mode, specify the Solr URL directly.
-    **Example**
-    `localhost:8983/solr` |
-| `solrMode` | String|true|SolrCloud| The client mode when interacting with the Solr cluster.
-
-    Below are the available options:
-  * Standalone
-  * SolrCloud |
-| `solrCollection` |String|true| " " (empty string) | The Solr collection name to which records are written. |
-| `solrCommitWithinMs` |int| false|10 | The time in milliseconds within which Solr commits updates. |
-| `username` |String|false| " " (empty string) | The username for basic authentication.
-
-    **Note: `username` is case-sensitive.** |
-| `password` | String|false| " " (empty string) | The password for basic authentication.

    **Note: `password` is case-sensitive.** | - - - -### Example - -Before using the Solr sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "solrUrl": "localhost:2181,localhost:2182/chroot", - "solrMode": "SolrCloud", - "solrCollection": "techproducts", - "solrCommitWithinMs": 100, - "username": "fakeuser", - "password": "fake@123" - } - - ``` - -* YAML - - ```yaml - - { - solrUrl: "localhost:2181,localhost:2182/chroot" - solrMode: "SolrCloud" - solrCollection: "techproducts" - solrCommitWithinMs: 100 - username: "fakeuser" - password: "fake@123" - } - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/io-twitter-source.md b/site2/website/versioned_docs/version-2.9.2-deprecated/io-twitter-source.md deleted file mode 100644 index 8de3504dd0fef2..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/io-twitter-source.md +++ /dev/null @@ -1,28 +0,0 @@ ---- -id: io-twitter-source -title: Twitter Firehose source connector -sidebar_label: "Twitter Firehose source connector" -original_id: io-twitter-source ---- - -The Twitter Firehose source connector receives tweets from Twitter Firehose and -writes the tweets to Pulsar topics. - -## Configuration - -The configuration of the Twitter Firehose source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `consumerKey` | String|true | " " (empty string) | The twitter OAuth consumer key.

    For more information, see [Access tokens](https://developer.twitter.com/en/docs/basics/authentication/guides/access-tokens). | -| `consumerSecret` | String |true | " " (empty string) | The twitter OAuth consumer secret. | -| `token` | String|true | " " (empty string) | The twitter OAuth token. | -| `tokenSecret` | String|true | " " (empty string) | The twitter OAuth secret. | -| `guestimateTweetTime`|Boolean|false|false|Most firehose events have null createdAt time.

    If `guestimateTweetTime` set to true, the connector estimates the createdTime of each firehose event to be current time. -| `clientName` | String |false | openconnector-twitter-source| The twitter firehose client name. | -| `clientHosts` |String| false | Constants.STREAM_HOST | The twitter firehose hosts to which client connects. | -| `clientBufferSize` | int|false | 50000 | The buffer size for buffering tweets fetched from twitter firehose. | - -> For more information about OAuth credentials, see [Twitter developers portal](https://developer.twitter.com/en.html). diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/io-twitter.md b/site2/website/versioned_docs/version-2.9.2-deprecated/io-twitter.md deleted file mode 100644 index 3b2f6325453c3c..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/io-twitter.md +++ /dev/null @@ -1,7 +0,0 @@ ---- -id: io-twitter -title: Twitter Firehose Connector -sidebar_label: "Twitter Firehose Connector" -original_id: io-twitter ---- - diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/io-use.md b/site2/website/versioned_docs/version-2.9.2-deprecated/io-use.md deleted file mode 100644 index da9ed746c4d372..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/io-use.md +++ /dev/null @@ -1,1787 +0,0 @@ ---- -id: io-use -title: How to use Pulsar connectors -sidebar_label: "Use" -original_id: io-use ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -This guide describes how to use Pulsar connectors. - -## Install a connector - -Pulsar bundles several [builtin connectors](io-connectors.md) used to move data in and out of commonly used systems (such as database and messaging system). Optionally, you can create and use your desired non-builtin connectors. - -:::note - -When using a non-builtin connector, you need to specify the path of a archive file for the connector. - -::: - -To set up a builtin connector, follow -the instructions [here](getting-started-standalone.md#installing-builtin-connectors). - -After the setup, the builtin connector is automatically discovered by Pulsar brokers (or function-workers), so no additional installation steps are required. - -## Configure a connector - -You can configure the following information: - -* [Configure a default storage location for a connector](#configure-a-default-storage-location-for-a-connector) - -* [Configure a connector with a YAML file](#configure-a-connector-with-yaml-file) - -### Configure a default storage location for a connector - -To configure a default folder for builtin connectors, set the `connectorsDirectory` parameter in the `./conf/functions_worker.yml` configuration file. - -**Example** - -Set the `./connectors` folder as the default storage location for builtin connectors. - -``` - -######################## -# Connectors -######################## - -connectorsDirectory: ./connectors - -``` - -### Configure a connector with a YAML file - -To configure a connector, you need to provide a YAML configuration file when creating a connector. - -The YAML configuration file tells Pulsar where to locate connectors and how to connect connectors with Pulsar topics. 
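-
-In practice, this file is handed to the Connector Admin CLI when the connector is created. The following is a minimal, hypothetical sketch, assuming the Cassandra sink configuration of Example 1 below is saved as `connectors/cassandra-sink.yml`:
-
-```bash
-
-# Create a sink from a YAML config file; the file path is illustrative.
-bin/pulsar-admin sinks create \
-  --sink-type cassandra \
-  --sink-config-file connectors/cassandra-sink.yml \
-  --inputs test_cassandra
-
-```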
- -**Example 1** - -Below is a YAML configuration file of a Cassandra sink, which tells Pulsar: - -* Which Cassandra cluster to connect - -* What is the `keyspace` and `columnFamily` to be used in Cassandra for collecting data - -* How to map Pulsar messages into Cassandra table key and columns - -```shell - -tenant: public -namespace: default -name: cassandra-test-sink -... -# cassandra specific config -configs: - roots: "localhost:9042" - keyspace: "pulsar_test_keyspace" - columnFamily: "pulsar_test_table" - keyname: "key" - columnName: "col" - -``` - -**Example 2** - -Below is a YAML configuration file of a Kafka source. - -```shell - -configs: - bootstrapServers: "pulsar-kafka:9092" - groupId: "test-pulsar-io" - topic: "my-topic" - sessionTimeoutMs: "10000" - autoCommitEnabled: "false" - -``` - -**Example 3** - -Below is a YAML configuration file of a PostgreSQL JDBC sink. - -```shell - -configs: - userName: "postgres" - password: "password" - jdbcUrl: "jdbc:postgresql://localhost:5432/test_jdbc" - tableName: "test_jdbc" - -``` - -## Get available connectors - -Before starting using connectors, you can perform the following operations: - -* [Reload connectors](#reload) - -* [Get a list of available connectors](#get-available-connectors) - -### `reload` - -If you add or delete a nar file in a connector folder, reload the available builtin connector before using it. - -#### Source - -Use the `reload` subcommand. - -```shell - -$ pulsar-admin sources reload - -``` - -For more information, see [`here`](io-cli.md#reload). - -#### Sink - -Use the `reload` subcommand. - -```shell - -$ pulsar-admin sinks reload - -``` - -For more information, see [`here`](io-cli.md#reload-1). - -### `available` - -After reloading connectors (optional), you can get a list of available connectors. - -#### Source - -Use the `available-sources` subcommand. - -```shell - -$ pulsar-admin sources available-sources - -``` - -#### Sink - -Use the `available-sinks` subcommand. - -```shell - -$ pulsar-admin sinks available-sinks - -``` - -## Run a connector - -To run a connector, you can perform the following operations: - -* [Create a connector](#create) - -* [Start a connector](#start) - -* [Run a connector locally](#localrun) - -### `create` - -You can create a connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Create a source connector. - -````mdx-code-block - - - - -Use the `create` subcommand. - -``` - -$ pulsar-admin sources create options - -``` - -For more information, see [here](io-cli.md#create). - - - - -Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/registerSource?version=@pulsar:version_number@} - - - - -* Create a source connector with a **local file**. - - ```java - - void createSource(SourceConfig sourceConfig, - String fileName) - throws PulsarAdminException - - ``` - - **Parameter** - - |Name|Description - |---|--- - `sourceConfig` | The source configuration object - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`createSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#createSource-SourceConfig-java.lang.String-). - -* Create a source connector using a **remote file** with a URL from which fun-pkg can be downloaded. - - ```java - - void createSourceWithUrl(SourceConfig sourceConfig, - String pkgUrl) - throws PulsarAdminException - - ``` - - Supported URLs are `http` and `file`. 
- - **Example** - - * HTTP: http://www.repo.com/fileName.jar - - * File: file:///dir/fileName.jar - - **Parameter** - - Parameter| Description - |---|--- - `sourceConfig` | The source configuration object - `pkgUrl` | URL from which pkg can be downloaded - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`createSourceWithUrl`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#createSourceWithUrl-SourceConfig-java.lang.String-). - - - - -```` - -#### Sink - -Create a sink connector. - -````mdx-code-block - - - - -Use the `create` subcommand. - -``` - -$ pulsar-admin sinks create options - -``` - -For more information, see [here](io-cli.md#create-1). - - - - -Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sinkName|operation/registerSink?version=@pulsar:version_number@} - - - - -* Create a sink connector with a **local file**. - - ```java - - void createSink(SinkConfig sinkConfig, - String fileName) - throws PulsarAdminException - - ``` - - **Parameter** - - |Name|Description - |---|--- - `sinkConfig` | The sink configuration object - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`createSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#createSink-SinkConfig-java.lang.String-). - -* Create a sink connector using a **remote file** with a URL from which fun-pkg can be downloaded. - - ```java - - void createSinkWithUrl(SinkConfig sinkConfig, - String pkgUrl) - throws PulsarAdminException - - ``` - - Supported URLs are `http` and `file`. - - **Example** - - * HTTP: http://www.repo.com/fileName.jar - - * File: file:///dir/fileName.jar - - **Parameter** - - Parameter| Description - |---|--- - `sinkConfig` | The sink configuration object - `pkgUrl` | URL from which pkg can be downloaded - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`createSinkWithUrl`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#createSinkWithUrl-SinkConfig-java.lang.String-). - - - - -```` - -### `start` - -You can start a connector using **Admin CLI** or **REST API**. - -#### Source - -Start a source connector. - -````mdx-code-block - - - - -Use the `start` subcommand. - -``` - -$ pulsar-admin sources start options - -``` - -For more information, see [here](io-cli.md#start). - - - - -* Start **all** source connectors. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/start|operation/startSource?version=@pulsar:version_number@} - -* Start a **specified** source connector. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/:instanceId/start|operation/startSource?version=@pulsar:version_number@} - - - - -```` - -#### Sink - -Start a sink connector. - -````mdx-code-block - - - - -Use the `start` subcommand. - -``` - -$ pulsar-admin sinks start options - -``` - -For more information, see [here](io-cli.md#start-1). - - - - -* Start **all** sink connectors. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sinkName/start|operation/startSink?version=@pulsar:version_number@} - -* Start a **specified** sink connector. 
- - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sourceName/:instanceId/start|operation/startSink?version=@pulsar:version_number@} - - - - -```` - -### `localrun` - -You can run a connector locally rather than deploying it on a Pulsar cluster using **Admin CLI**. - -#### Source - -Run a source connector locally. - -````mdx-code-block - - - - -Use the `localrun` subcommand. - -``` - -$ pulsar-admin sources localrun options - -``` - -For more information, see [here](io-cli.md#localrun). - - - - -```` - -#### Sink - -Run a sink connector locally. - -````mdx-code-block - - - - -Use the `localrun` subcommand. - -``` - -$ pulsar-admin sinks localrun options - -``` - -For more information, see [here](io-cli.md#localrun-1). - - - - -```` - -## Monitor a connector - -To monitor a connector, you can perform the following operations: - -* [Get the information of a connector](#get) - -* [Get the list of all running connectors](#list) - -* [Get the current status of a connector](#status) - -### `get` - -You can get the information of a connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Get the information of a source connector. - -````mdx-code-block - - - - -Use the `get` subcommand. - -``` - -$ pulsar-admin sources get options - -``` - -For more information, see [here](io-cli.md#get). - - - - -Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/getSourceInfo?version=@pulsar:version_number@} - - - - -```java - -SourceConfig getSource(String tenant, - String namespace, - String source) - throws PulsarAdminException - -``` - -**Example** - -This is a sourceConfig. - -```java - -{ - "tenant": "tenantName", - "namespace": "namespaceName", - "name": "sourceName", - "className": "className", - "topicName": "topicName", - "configs": {}, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "resources": { - "cpu": 1.0, - "ram": 1073741824, - "disk": 10737418240 - } -} - -``` - -This is a sourceConfig example. - -``` - -{ - "tenant": "public", - "namespace": "default", - "name": "debezium-mysql-source", - "className": "org.apache.pulsar.io.debezium.mysql.DebeziumMysqlSource", - "topicName": "debezium-mysql-topic", - "configs": { - "database.user": "debezium", - "database.server.id": "184054", - "database.server.name": "dbserver1", - "database.port": "3306", - "database.hostname": "localhost", - "database.password": "dbz", - "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650", - "value.converter": "org.apache.kafka.connect.json.JsonConverter", - "database.whitelist": "inventory", - "key.converter": "org.apache.kafka.connect.json.JsonConverter", - "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory", - "pulsar.service.url": "pulsar://127.0.0.1:6650", - "database.history.pulsar.topic": "history-topic2" - }, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "resources": { - "cpu": 1.0, - "ram": 1073741824, - "disk": 10737418240 - } -} - -``` - -**Exception** - -Exception name | Description -|---|--- -`PulsarAdminException.NotAuthorizedException` | You don't have the admin permission -`PulsarAdminException.NotFoundException` | Cluster doesn't exist -`PulsarAdminException` | Unexpected error - -For more information, see [`getSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#getSource-java.lang.String-java.lang.String-java.lang.String-). 
- - - - -```` - -#### Sink - -Get the information of a sink connector. - -````mdx-code-block - - - - -Use the `get` subcommand. - -``` - -$ pulsar-admin sinks get options - -``` - -For more information, see [here](io-cli.md#get-1). - - - - -Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sinks/:tenant/:namespace/:sinkName|operation/getSinkInfo?version=@pulsar:version_number@} - - - - -```java - -SinkConfig getSink(String tenant, - String namespace, - String sink) - throws PulsarAdminException - -``` - -**Example** - -This is a sinkConfig. - -```json - -{ -"tenant": "tenantName", -"namespace": "namespaceName", -"name": "sinkName", -"className": "className", -"inputSpecs": { -"topicName": { - "isRegexPattern": false -} -}, -"configs": {}, -"parallelism": 1, -"processingGuarantees": "ATLEAST_ONCE", -"retainOrdering": false, -"autoAck": true -} - -``` - -This is a sinkConfig example. - -```json - -{ - "tenant": "public", - "namespace": "default", - "name": "pulsar-postgres-jdbc-sink", - "className": "org.apache.pulsar.io.jdbc.PostgresJdbcAutoSchemaSink", - "inputSpecs": { - "pulsar-postgres-jdbc-sink-topic": { - "isRegexPattern": false - } - }, - "configs": { - "password": "password", - "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink", - "userName": "postgres", - "tableName": "pulsar_postgres_jdbc_sink" - }, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "autoAck": true -} - -``` - -**Parameter description** - -Name| Description -|---|--- -`tenant` | Tenant name -`namespace` | Namespace name -`sink` | Sink name - -For more information, see [`getSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#getSink-java.lang.String-java.lang.String-java.lang.String-). - - - - -```` - -### `list` - -You can get the list of all running connectors using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Get the list of all running source connectors. - -````mdx-code-block - - - - -Use the `list` subcommand. - -``` - -$ pulsar-admin sources list options - -``` - -For more information, see [here](io-cli.md#list). - - - - -Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sources/:tenant/:namespace|operation/listSources?version=@pulsar:version_number@} - - - - -```java - -List listSources(String tenant, - String namespace) - throws PulsarAdminException - -``` - -**Response example** - -```java - -["f1", "f2", "f3"] - -``` - -**Exception** - -Exception name | Description -|---|--- -`PulsarAdminException.NotAuthorizedException` | You don't have the admin permission -`PulsarAdminException` | Unexpected error - -For more information, see [`listSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#listSources-java.lang.String-java.lang.String-). - - - - -```` - -#### Sink - -Get the list of all running sink connectors. - -````mdx-code-block - - - - -Use the `list` subcommand. - -``` - -$ pulsar-admin sinks list options - -``` - -For more information, see [here](io-cli.md#list-1). 
- - - - -Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sinks/:tenant/:namespace|operation/listSinks?version=@pulsar:version_number@} - - - - -```java - -List listSinks(String tenant, - String namespace) - throws PulsarAdminException - -``` - -**Response example** - -```java - -["f1", "f2", "f3"] - -``` - -**Exception** - -Exception name | Description -|---|--- -`PulsarAdminException.NotAuthorizedException` | You don't have the admin permission -`PulsarAdminException` | Unexpected error - -For more information, see [`listSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#listSinks-java.lang.String-java.lang.String-). - - - - -```` - -### `status` - -You can get the current status of a connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Get the current status of a source connector. - -````mdx-code-block - - - - -Use the `status` subcommand. - -``` - -$ pulsar-admin sources status options - -``` - -For more information, see [here](io-cli.md#status). - - - - -* Get the current status of **all** source connectors. - - Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sources/:tenant/:namespace/:sourceName/status|operation/getSourceStatus?version=@pulsar:version_number@} - -* Gets the current status of a **specified** source connector. - - Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sources/:tenant/:namespace/:sourceName/:instanceId/status|operation/getSourceStatus?version=@pulsar:version_number@} - - - - -* Get the current status of **all** source connectors. - - ```java - - SourceStatus getSourceStatus(String tenant, - String namespace, - String source) - throws PulsarAdminException - - ``` - - **Parameter** - - Parameter| Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `sink` | Source name - - **Exception** - - Name | Description - |---|--- - `PulsarAdminException` | Unexpected error - - For more information, see [`getSourceStatus`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#getSource-java.lang.String-java.lang.String-java.lang.String-). - -* Gets the current status of a **specified** source connector. - - ```java - - SourceStatus.SourceInstanceStatus.SourceInstanceStatusData getSourceStatus(String tenant, - String namespace, - String source, - int id) - throws PulsarAdminException - - ``` - - **Parameter** - - Parameter| Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `sink` | Source name - `id` | Source instanceID - - **Exception** - - Exception name | Description - |---|--- - `PulsarAdminException` | Unexpected error - - For more information, see [`getSourceStatus`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#getSourceStatus-java.lang.String-java.lang.String-java.lang.String-int-). - - - - -```` - -#### Sink - -Get the current status of a Pulsar sink connector. - -````mdx-code-block - - - - -Use the `status` subcommand. - -``` - -$ pulsar-admin sinks status options - -``` - -For more information, see [here](io-cli.md#status-1). - - - - -* Get the current status of **all** sink connectors. - - Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sinks/:tenant/:namespace/:sinkName/status|operation/getSinkStatus?version=@pulsar:version_number@} - -* Gets the current status of a **specified** sink connector. 
-
-#### Sink
-
-Get the current status of a Pulsar sink connector.
-
-````mdx-code-block
-
-
-
-
-Use the `status` subcommand.
-
-```
-
-$ pulsar-admin sinks status options
-
-```
-
-For more information, see [here](io-cli.md#status-1).
-
-
-
-
-* Get the current status of **all** instances of a sink connector.
-
-  Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sinks/:tenant/:namespace/:sinkName/status|operation/getSinkStatus?version=@pulsar:version_number@}
-
-* Get the current status of a **specified** sink connector instance.
-
-  Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sinks/:tenant/:namespace/:sinkName/:instanceId/status|operation/getSinkInstanceStatus?version=@pulsar:version_number@}
-
-
-
-
-* Get the current status of **all** instances of a sink connector.
-
-  ```java
-
-  SinkStatus getSinkStatus(String tenant,
-        String namespace,
-        String sink)
-        throws PulsarAdminException
-
-  ```
-
-  **Parameter**
-
-  Parameter| Description
-  |---|---
-  `tenant` | Tenant name
-  `namespace` | Namespace name
-  `sink` | Sink name
-
-  **Exception**
-
-  Exception name | Description
-  |---|---
-  `PulsarAdminException` | Unexpected error
-
-  For more information, see [`getSinkStatus`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#getSinkStatus-java.lang.String-java.lang.String-java.lang.String-).
-
-* Get the current status of a **specified** sink connector instance.
-
-  ```java
-
-  SinkStatus.SinkInstanceStatus.SinkInstanceStatusData getSinkStatus(String tenant,
-        String namespace,
-        String sink,
-        int id)
-        throws PulsarAdminException
-
-  ```
-
-  **Parameter**
-
-  Parameter| Description
-  |---|---
-  `tenant` | Tenant name
-  `namespace` | Namespace name
-  `sink` | Sink name
-  `id` | Sink instance ID
-
-  **Exception**
-
-  Exception name | Description
-  |---|---
-  `PulsarAdminException` | Unexpected error
-
-  For more information, see [`getSinkStatusWithInstanceID`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#getSinkStatus-java.lang.String-java.lang.String-java.lang.String-int-).
-
-
-
-
-````
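-
-For instance, reusing the example sink from earlier on this page (`pulsar-postgres-jdbc-sink`), a status check might look like this:
-
-```
-
-# Print the status of every instance of the sink.
-$ pulsar-admin sinks status \
-  --tenant public \
-  --namespace default \
-  --name pulsar-postgres-jdbc-sink
-
-```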
-
-## Update a connector
-
-### `update`
-
-You can update a running connector using **Admin CLI**, **REST API** or **Java admin API**.
-
-#### Source
-
-Update a running Pulsar source connector.
-
-````mdx-code-block
-
-
-
-
-Use the `update` subcommand.
-
-```
-
-$ pulsar-admin sources update options
-
-```
-
-For more information, see [here](io-cli.md#update).
-
-
-
-
-Send a `PUT` request to this endpoint: {@inject: endpoint|PUT|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/updateSource?version=@pulsar:version_number@}
-
-
-
-
-* Update a running source connector with a **local file**.
-
-  ```java
-
-  void updateSource(SourceConfig sourceConfig,
-        String fileName)
-        throws PulsarAdminException
-
-  ```
-
-  **Parameter**
-
-  | Name | Description
-  |---|---
-  |`sourceConfig` | The source configuration object
-
-  **Exception**
-
-  |Name|Description|
-  |---|---
-  |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission
-  | `PulsarAdminException.NotFoundException` | Source doesn't exist
-  | `PulsarAdminException` | Unexpected error
-
-  For more information, see [`updateSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#updateSource-SourceConfig-java.lang.String-).
-
-* Update a source connector using a **remote file**, with a URL from which the connector package can be downloaded.
-
-  ```java
-
-  void updateSourceWithUrl(SourceConfig sourceConfig,
-        String pkgUrl)
-        throws PulsarAdminException
-
-  ```
-
-  Supported URLs are `http` and `file`.
-
-  **Example**
-
-  * HTTP: http://www.repo.com/fileName.jar
-
-  * File: file:///dir/fileName.jar
-
-  **Parameter**
-
-  | Name | Description
-  |---|---
-  | `sourceConfig` | The source configuration object
-  | `pkgUrl` | URL from which the package can be downloaded
-
-  **Exception**
-
-  |Name|Description|
-  |---|---
-  |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission
-  | `PulsarAdminException.NotFoundException` | Source doesn't exist
-  | `PulsarAdminException` | Unexpected error
-
-For more information, see [`updateSourceWithUrl`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#updateSourceWithUrl-SourceConfig-java.lang.String-).
-
-
-
-
-````
-
-#### Sink
-
-Update a running Pulsar sink connector.
-
-````mdx-code-block
-
-
-
-
-Use the `update` subcommand.
-
-```
-
-$ pulsar-admin sinks update options
-
-```
-
-For more information, see [here](io-cli.md#update-1).
-
-
-
-
-Send a `PUT` request to this endpoint: {@inject: endpoint|PUT|/admin/v3/sinks/:tenant/:namespace/:sinkName|operation/updateSink?version=@pulsar:version_number@}
-
-
-
-
-* Update a running sink connector with a **local file**.
-
-  ```java
-
-  void updateSink(SinkConfig sinkConfig,
-        String fileName)
-        throws PulsarAdminException
-
-  ```
-
-  **Parameter**
-
-  | Name | Description
-  |---|---
-  |`sinkConfig` | The sink configuration object
-
-  **Exception**
-
-  |Name|Description|
-  |---|---
-  |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission
-  | `PulsarAdminException.NotFoundException` | Sink doesn't exist
-  | `PulsarAdminException` | Unexpected error
-
-  For more information, see [`updateSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#updateSink-SinkConfig-java.lang.String-).
-
-* Update a sink connector using a **remote file**, with a URL from which the connector package can be downloaded.
-
-  ```java
-
-  void updateSinkWithUrl(SinkConfig sinkConfig,
-        String pkgUrl)
-        throws PulsarAdminException
-
-  ```
-
-  Supported URLs are `http` and `file`.
-
-  **Example**
-
-  * HTTP: http://www.repo.com/fileName.jar
-
-  * File: file:///dir/fileName.jar
-
-  **Parameter**
-
-  | Name | Description
-  |---|---
-  | `sinkConfig` | The sink configuration object
-  | `pkgUrl` | URL from which the package can be downloaded
-
-  **Exception**
-
-  |Name|Description|
-  |---|---
-  |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission
-  |`PulsarAdminException.NotFoundException` | Sink doesn't exist
-  |`PulsarAdminException` | Unexpected error
-
-For more information, see [`updateSinkWithUrl`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#updateSinkWithUrl-SinkConfig-java.lang.String-).
-
-
-
-
-````
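-
-As a minimal sketch (the sink name matches the earlier example, and `--parallelism` is just one of the fields you can change), updating a running sink from the CLI might look like this:
-
-```
-
-# Raise the sink's parallelism from 1 to 2 without touching other settings.
-$ pulsar-admin sinks update \
-  --tenant public \
-  --namespace default \
-  --name pulsar-postgres-jdbc-sink \
-  --parallelism 2
-
-```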
-
-## Stop a connector
-
-### `stop`
-
-You can stop a connector using **Admin CLI**, **REST API** or **Java admin API**.
-
-#### Source
-
-Stop a source connector.
-
-````mdx-code-block
-
-
-
-
-Use the `stop` subcommand.
-
-```
-
-$ pulsar-admin sources stop options
-
-```
-
-For more information, see [here](io-cli.md#stop).
-
-
-
-
-* Stop **all** instances of a source connector.
-
-  Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/stop|operation/stopSource?version=@pulsar:version_number@}
-
-* Stop a **specified** source connector instance.
-
-  Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/:instanceId/stop|operation/stopSource?version=@pulsar:version_number@}
-
-
-
-
-* Stop **all** instances of a source connector.
-
-  ```java
-
-  void stopSource(String tenant,
-        String namespace,
-        String source)
-        throws PulsarAdminException
-
-  ```
-
-  **Parameter**
-
-  | Name | Description
-  |---|---
-  `tenant` | Tenant name
-  `namespace` | Namespace name
-  `source` | Source name
-
-  **Exception**
-
-  |Name|Description|
-  |---|---
-  | `PulsarAdminException` | Unexpected error
-
-  For more information, see [`stopSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#stopSource-java.lang.String-java.lang.String-java.lang.String-).
-
-* Stop a **specified** source connector instance.
-
-  ```java
-
-  void stopSource(String tenant,
-        String namespace,
-        String source,
-        int instanceId)
-        throws PulsarAdminException
-
-  ```
-
-  **Parameter**
-
-  | Name | Description
-  |---|---
-  `tenant` | Tenant name
-  `namespace` | Namespace name
-  `source` | Source name
-  `instanceId` | Source instance ID
-
-  **Exception**
-
-  |Name|Description|
-  |---|---
-  | `PulsarAdminException` | Unexpected error
-
-  For more information, see [`stopSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#stopSource-java.lang.String-java.lang.String-java.lang.String-int-).
-
-
-
-
-````
-
-#### Sink
-
-Stop a sink connector.
-
-````mdx-code-block
-
-
-
-
-Use the `stop` subcommand.
-
-```
-
-$ pulsar-admin sinks stop options
-
-```
-
-For more information, see [here](io-cli.md#stop-1).
-
-
-
-
-* Stop **all** instances of a sink connector.
-
-  Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sinkName/stop|operation/stopSink?version=@pulsar:version_number@}
-
-* Stop a **specified** sink connector instance.
-
-  Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sinkName/:instanceId/stop|operation/stopSink?version=@pulsar:version_number@}
-
-
-
-
-* Stop **all** instances of a sink connector.
-
-  ```java
-
-  void stopSink(String tenant,
-        String namespace,
-        String sink)
-        throws PulsarAdminException
-
-  ```
-
-  **Parameter**
-
-  | Name | Description
-  |---|---
-  `tenant` | Tenant name
-  `namespace` | Namespace name
-  `sink` | Sink name
-
-  **Exception**
-
-  |Name|Description|
-  |---|---
-  | `PulsarAdminException` | Unexpected error
-
-  For more information, see [`stopSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#stopSink-java.lang.String-java.lang.String-java.lang.String-).
-
-* Stop a **specified** sink connector instance.
-
-  ```java
-
-  void stopSink(String tenant,
-        String namespace,
-        String sink,
-        int instanceId)
-        throws PulsarAdminException
-
-  ```
-
-  **Parameter**
-
-  | Name | Description
-  |---|---
-  `tenant` | Tenant name
-  `namespace` | Namespace name
-  `sink` | Sink name
-  `instanceId` | Sink instance ID
-
-  **Exception**
-
-  |Name|Description|
-  |---|---
-  | `PulsarAdminException` | Unexpected error
-
-  For more information, see [`stopSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#stopSink-java.lang.String-java.lang.String-java.lang.String-int-).
-
-
-
-
-````
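-
-For example (the source name `my-source` is hypothetical), stopping a connector from the CLI might look like this:
-
-```
-
-# Stop every instance of the source; pass --instance-id to stop a single instance.
-$ pulsar-admin sources stop \
-  --tenant public \
-  --namespace default \
-  --name my-source
-
-```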
-
-## Restart a connector
-
-### `restart`
-
-You can restart a connector using **Admin CLI**, **REST API** or **Java admin API**.
-
-#### Source
-
-Restart a source connector.
-
-````mdx-code-block
-
-
-
-
-Use the `restart` subcommand.
-
-```
-
-$ pulsar-admin sources restart options
-
-```
-
-For more information, see [here](io-cli.md#restart).
-
-
-
-
-* Restart **all** instances of a source connector.
-
-  Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/restart|operation/restartSource?version=@pulsar:version_number@}
-
-* Restart a **specified** source connector instance.
-
-  Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/:instanceId/restart|operation/restartSource?version=@pulsar:version_number@}
-
-
-
-
-* Restart **all** instances of a source connector.
-
-  ```java
-
-  void restartSource(String tenant,
-        String namespace,
-        String source)
-        throws PulsarAdminException
-
-  ```
-
-  **Parameter**
-
-  | Name | Description
-  |---|---
-  `tenant` | Tenant name
-  `namespace` | Namespace name
-  `source` | Source name
-
-  **Exception**
-
-  |Name|Description|
-  |---|---
-  | `PulsarAdminException` | Unexpected error
-
-  For more information, see [`restartSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#restartSource-java.lang.String-java.lang.String-java.lang.String-).
-
-* Restart a **specified** source connector instance.
-
-  ```java
-
-  void restartSource(String tenant,
-        String namespace,
-        String source,
-        int instanceId)
-        throws PulsarAdminException
-
-  ```
-
-  **Parameter**
-
-  | Name | Description
-  |---|---
-  `tenant` | Tenant name
-  `namespace` | Namespace name
-  `source` | Source name
-  `instanceId` | Source instance ID
-
-  **Exception**
-
-  |Name|Description|
-  |---|---
-  | `PulsarAdminException` | Unexpected error
-
-  For more information, see [`restartSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#restartSource-java.lang.String-java.lang.String-java.lang.String-int-).
-
-
-
-
-````
-
-#### Sink
-
-Restart a sink connector.
-
-````mdx-code-block
-
-
-
-
-Use the `restart` subcommand.
-
-```
-
-$ pulsar-admin sinks restart options
-
-```
-
-For more information, see [here](io-cli.md#restart-1).
-
-
-
-
-* Restart **all** instances of a sink connector.
-
-  Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sinkName/restart|operation/restartSink?version=@pulsar:version_number@}
-
-* Restart a **specified** sink connector instance.
-
-  Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sinkName/:instanceId/restart|operation/restartSink?version=@pulsar:version_number@}
-
-
-
-
-* Restart **all** instances of a sink connector.
-
-  ```java
-
-  void restartSink(String tenant,
-        String namespace,
-        String sink)
-        throws PulsarAdminException
-
-  ```
-
-  **Parameter**
-
-  | Name | Description
-  |---|---
-  `tenant` | Tenant name
-  `namespace` | Namespace name
-  `sink` | Sink name
-
-  **Exception**
-
-  |Name|Description|
-  |---|---
-  | `PulsarAdminException` | Unexpected error
-
-  For more information, see [`restartSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#restartSink-java.lang.String-java.lang.String-java.lang.String-).
-
-* Restart a **specified** sink connector instance.
-
-  ```java
-
-  void restartSink(String tenant,
-        String namespace,
-        String sink,
-        int instanceId)
-        throws PulsarAdminException
-
-  ```
-
-  **Parameter**
-
-  | Name | Description
-  |---|---
-  `tenant` | Tenant name
-  `namespace` | Namespace name
-  `sink` | Sink name
-  `instanceId` | Sink instance ID
-
-  **Exception**
-
-  |Name|Description|
-  |---|---
-  | `PulsarAdminException` | Unexpected error
-
-  For more information, see [`restartSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#restartSink-java.lang.String-java.lang.String-java.lang.String-int-).
-
-
-
-
-````
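-
-As a quick sketch (the sink name is reused from the earlier example), restarting a connector from the CLI might look like this:
-
-```
-
-# Restart every instance of the sink.
-$ pulsar-admin sinks restart \
-  --tenant public \
-  --namespace default \
-  --name pulsar-postgres-jdbc-sink
-
-```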
-
-## Delete a connector
-
-### `delete`
-
-You can delete a connector using **Admin CLI**, **REST API** or **Java admin API**.
-
-#### Source
-
-Delete a source connector.
-
-````mdx-code-block
-
-
-
-
-Use the `delete` subcommand.
-
-```
-
-$ pulsar-admin sources delete options
-
-```
-
-For more information, see [here](io-cli.md#delete).
-
-
-
-
-Delete a Pulsar source connector.
-
-Send a `DELETE` request to this endpoint: {@inject: endpoint|DELETE|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/deregisterSource?version=@pulsar:version_number@}
-
-
-
-
-Delete a source connector.
-
-```java
-
-void deleteSource(String tenant,
-        String namespace,
-        String source)
-        throws PulsarAdminException
-
-```
-
-**Parameter**
-
-| Name | Description
-|---|---
-`tenant` | Tenant name
-`namespace` | Namespace name
-`source` | Source name
-
-**Exception**
-
-|Name|Description|
-|---|---
-|`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission
-| `PulsarAdminException.NotFoundException` | Source doesn't exist
-| `PulsarAdminException.PreconditionFailedException` | Request precondition failed
-| `PulsarAdminException` | Unexpected error
-
-For more information, see [`deleteSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#deleteSource-java.lang.String-java.lang.String-java.lang.String-).
-
-
-
-
-````
-
-#### Sink
-
-Delete a sink connector.
-
-````mdx-code-block
-
-
-
-
-Use the `delete` subcommand.
-
-```
-
-$ pulsar-admin sinks delete options
-
-```
-
-For more information, see [here](io-cli.md#delete-1).
-
-
-
-
-Delete a sink connector.
-
-Send a `DELETE` request to this endpoint: {@inject: endpoint|DELETE|/admin/v3/sinks/:tenant/:namespace/:sinkName|operation/deregisterSink?version=@pulsar:version_number@}
-
-
-
-
-Delete a Pulsar sink connector.
-
-```java
-
-void deleteSink(String tenant,
-        String namespace,
-        String sink)
-        throws PulsarAdminException
-
-```
-
-**Parameter**
-
-| Name | Description
-|---|---
-`tenant` | Tenant name
-`namespace` | Namespace name
-`sink` | Sink name
-
-**Exception**
-
-|Name|Description|
-|---|---
-|`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission
-| `PulsarAdminException.NotFoundException` | Sink doesn't exist
-| `PulsarAdminException.PreconditionFailedException` | Request precondition failed
-| `PulsarAdminException` | Unexpected error
-
-For more information, see [`deleteSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#deleteSink-java.lang.String-java.lang.String-java.lang.String-).
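-
-A matching CLI sketch (same names as the earlier example):
-
-```
-
-# Deregister the sink from the cluster.
-$ pulsar-admin sinks delete \
-  --tenant public \
-  --namespace default \
-  --name pulsar-postgres-jdbc-sink
-
-```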
- - - - -```` diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/performance-pulsar-perf.md b/site2/website/versioned_docs/version-2.9.2-deprecated/performance-pulsar-perf.md deleted file mode 100644 index 7f45498604536c..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/performance-pulsar-perf.md +++ /dev/null @@ -1,229 +0,0 @@ ---- -id: performance-pulsar-perf -title: Pulsar Perf -sidebar_label: "Pulsar Perf" -original_id: performance-pulsar-perf ---- - -The Pulsar Perf is a built-in performance test tool for Apache Pulsar. You can use the Pulsar Perf to test message writing or reading performance. For detailed information about performance tuning, see [here](https://streamnative.io/en/blog/tech/2021-01-14-pulsar-architecture-performance-tuning). - -## Produce messages - -This example shows how the Pulsar Perf produces messages with default options. For all configuration options available for the `pulsar-perf produce` command, see [configuration options](#configuration-options-for-pulsar-perf-produce). - -``` - -bin/pulsar-perf produce my-topic - -``` - -After the command is executed, the test data is continuously output on the Console. - -**Output** - -``` - -19:53:31.459 [pulsar-perf-producer-exec-1-1] INFO org.apache.pulsar.testclient.PerformanceProducer - Created 1 producers -19:53:31.482 [pulsar-timer-5-1] WARN com.scurrilous.circe.checksum.Crc32cIntChecksum - Failed to load Circe JNI library. Falling back to Java based CRC32c provider -19:53:40.861 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 93.7 msg/s --- 0.7 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.575 ms - med: 3.460 - 95pct: 4.790 - 99pct: 5.308 - 99.9pct: 5.834 - 99.99pct: 6.609 - Max: 6.609 -19:53:50.909 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.437 ms - med: 3.328 - 95pct: 4.656 - 99pct: 5.071 - 99.9pct: 5.519 - 99.99pct: 5.588 - Max: 5.588 -19:54:00.926 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.376 ms - med: 3.276 - 95pct: 4.520 - 99pct: 4.939 - 99.9pct: 5.440 - 99.99pct: 5.490 - Max: 5.490 -19:54:10.940 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.298 ms - med: 3.220 - 95pct: 4.474 - 99pct: 4.926 - 99.9pct: 5.645 - 99.99pct: 5.654 - Max: 5.654 -19:54:20.956 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.1 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.308 ms - med: 3.199 - 95pct: 4.532 - 99pct: 4.871 - 99.9pct: 5.291 - 99.99pct: 5.323 - Max: 5.323 -19:54:30.972 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.249 ms - med: 3.144 - 95pct: 4.437 - 99pct: 4.970 - 99.9pct: 5.329 - 99.99pct: 5.414 - Max: 5.414 -19:54:40.987 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.435 ms - med: 3.361 - 95pct: 4.772 - 99pct: 5.150 - 99.9pct: 5.373 - 99.99pct: 5.837 - Max: 5.837 -^C19:54:44.325 [Thread-1] INFO org.apache.pulsar.testclient.PerformanceProducer - Aggregated throughput stats --- 7286 records sent --- 99.140 msg/s --- 0.775 Mbit/s -19:54:44.336 
[Thread-1] INFO org.apache.pulsar.testclient.PerformanceProducer - Aggregated latency stats --- Latency: mean: 3.383 ms - med: 3.293 - 95pct: 4.610 - 99pct: 5.059 - 99.9pct: 5.588 - 99.99pct: 5.837 - 99.999pct: 6.609 - Max: 6.609 - -``` - -From the above test data, you can get the throughput statistics and the write latency statistics. The aggregated statistics is printed when the Pulsar Perf is stopped. You can press **Ctrl**+**C** to stop the Pulsar Perf. If you specify a filename with the `--histogram-file` parameter, a file with the [HdrHistogram](http://hdrhistogram.github.io/HdrHistogram/) formatted test result appears under your directory after Pulsar Perf is stopped. You can also check the test result through [HdrHistogram Plotter](https://hdrhistogram.github.io/HdrHistogram/plotFiles.html). For details about how to check the test result through [HdrHistogram Plotter](https://hdrhistogram.github.io/HdrHistogram/plotFiles.html), see [HdrHistogram Plotter](#hdrhistogram-plotter). - -### Configuration options for `pulsar-perf produce` - -You can get all options by executing the `bin/pulsar-perf produce -h` command. Therefore, you can modify these options as required. - -The following table lists configuration options available for the `pulsar-perf produce` command. - -| Option | Description | Default value| -|----|----|----| -| access-mode | Set the producer access mode. Valid values are `Shared`, `Exclusive` and `WaitForExclusive`. | Shared | -| admin-url | Set the Pulsar admin URL. | N/A | -| auth-params | Set the authentication parameters, whose format is determined by the implementation of the `configure` method in the authentication plugin class, such as "key1:val1,key2:val2" or "{"key1":"val1","key2":"val2"}". | N/A | -| auth-plugin | Set the authentication plugin class name. | N/A | -| listener-name | Set the listener name for the broker. | N/A | -| batch-max-bytes | Set the maximum number of bytes for each batch. | 4194304 | -| batch-max-messages | Set the maximum number of messages for each batch. | 1000 | -| batch-time-window | Set a window for a batch of messages. | 1 ms | -| busy-wait | Enable or disable Busy-Wait on the Pulsar client. | false | -| chunking | Configure whether to split the message and publish in chunks if message size is larger than allowed max size. | false | -| compression | Compress the message payload. | N/A | -| conf-file | Set the configuration file. | N/A | -| delay | Mark messages with a given delay. | 0s | -| encryption-key-name | Set the name of the public key used to encrypt the payload. | N/A | -| encryption-key-value-file | Set the file which contains the public key used to encrypt the payload. | N/A | -| exit-on-failure | Configure whether to exit from the process on publish failure. | false | -| format-class | Set the custom formatter class name. | org.apache.pulsar.testclient.DefaultMessageFormatter | -| format-payload | Configure whether to format %i as a message index in the stream from producer and/or %t as the timestamp nanoseconds. | false | -| help | Configure the help message. | false | -| histogram-file | HdrHistogram output file | N/A | -| max-connections | Set the maximum number of TCP connections to a single broker. | 100 | -| max-outstanding | Set the maximum number of outstanding messages. | 1000 | -| max-outstanding-across-partitions | Set the maximum number of outstanding messages across partitions. | 50000 | -| message-key-generation-mode | Set the generation mode of message key. Valid options are `autoIncrement`, `random`. 
| N/A | -| num-io-threads | Set the number of threads to be used for handling connections to brokers. | 1 | -| num-messages | Set the number of messages to be published in total. If it is set to 0, it keeps publishing messages. | 0 | -| num-producers | Set the number of producers for each topic. | 1 | -| num-test-threads | Set the number of test threads. | 1 | -| num-topic | Set the number of topics. | 1 | -| partitions | Configure whether to create partitioned topics with the given number of partitions. | N/A | -| payload-delimiter | Set the delimiter used to split lines when using payload from a file. | \n | -| payload-file | Use the payload from an UTF-8 encoded text file and a payload is randomly selected when messages are published. | N/A | -| producer-name | Set the producer name. | N/A | -| rate | Set the publish rate of messages across topics. | 100 | -| send-timeout | Set the sendTimeout. | 0 | -| separator | Set the separator between the topic and topic number. | - | -| service-url | Set the Pulsar service URL. | | -| size | Set the message size. | 1024 bytes | -| stats-interval-seconds | Set the statistics interval. If it is set to 0, statistics is disabled. | 0 | -| test-duration | Set the test duration. If it is set to 0, it keeps publishing tests. | 0s | -| trust-cert-file | Set the path for the trusted TLS certificate file. | | | -| warmup-time | Set the warm-up time. | 1s | -| tls-allow-insecure | Set the allowed insecure TLS connection. | N/A | - -## Consume messages - -This example shows how the Pulsar Perf consumes messages with default options. - -``` - -bin/pulsar-perf consume my-topic - -``` - -After the command is executed, the test data is continuously output on the Console. - -**Output** - -``` - -20:35:37.071 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Start receiving from 1 consumers on 1 topics -20:35:41.150 [pulsar-client-io-1-9] WARN com.scurrilous.circe.checksum.Crc32cIntChecksum - Failed to load Circe JNI library. 
Falling back to Java based CRC32c provider -20:35:47.092 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 59.572 msg/s -- 0.465 Mbit/s --- Latency: mean: 11.298 ms - med: 10 - 95pct: 15 - 99pct: 98 - 99.9pct: 137 - 99.99pct: 152 - Max: 152 -20:35:57.104 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 99.958 msg/s -- 0.781 Mbit/s --- Latency: mean: 9.176 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 18 - Max: 18 -20:36:07.115 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 100.006 msg/s -- 0.781 Mbit/s --- Latency: mean: 9.316 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 17 - Max: 17 -20:36:17.125 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 100.085 msg/s -- 0.782 Mbit/s --- Latency: mean: 9.327 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 17 - Max: 17 -20:36:27.136 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 99.900 msg/s -- 0.780 Mbit/s --- Latency: mean: 9.404 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 17 - Max: 17 -20:36:37.147 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 99.985 msg/s -- 0.781 Mbit/s --- Latency: mean: 8.998 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 17 - Max: 17 -^C20:36:42.755 [Thread-1] INFO org.apache.pulsar.testclient.PerformanceConsumer - Aggregated throughput stats --- 6051 records received --- 92.125 msg/s --- 0.720 Mbit/s -20:36:42.759 [Thread-1] INFO org.apache.pulsar.testclient.PerformanceConsumer - Aggregated latency stats --- Latency: mean: 9.422 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 98 - 99.99pct: 137 - 99.999pct: 152 - Max: 152 - -``` - -From the output test data, you can get the throughput statistics and the end-to-end latency statistics. The aggregated statistics is printed after the Pulsar Perf is stopped. You can press **Ctrl**+**C** to stop the Pulsar Perf. - -### Configuration options for `pulsar-perf consume` - -You can get all options by executing the `bin/pulsar-perf consume -h` command. Therefore, you can modify these options as required. - -The following table lists configuration options available for the `pulsar-perf consume` command. - -| Option | Description | Default value | -|----|----|----| -| acks-delay-millis | Set the acknowledgment grouping delay in milliseconds. | 100 ms | -| auth-params | Set the authentication parameters, whose format is determined by the implementation of the `configure` method in the authentication plugin class, such as "key1:val1,key2:val2" or "{"key1":"val1","key2":"val2"}". | N/A | -| auth-plugin | Set the authentication plugin class name. | N/A | -| auto_ack_chunk_q_full | Configure whether to automatically ack for the oldest message in receiver queue if the queue is full. | false | -| listener-name | Set the listener name for the broker. | N/A | -| batch-index-ack | Enable or disable the batch index acknowledgment. | false | -| busy-wait | Enable or disable Busy-Wait on the Pulsar client. | false | -| conf-file | Set the configuration file. | N/A | -| encryption-key-name | Set the name of the public key used to encrypt the payload. | N/A | -| encryption-key-value-file | Set the file which contains the public key used to encrypt the payload. | N/A | -| help | Configure the help message. 
| false | -| histogram-file | HdrHistogram output file | N/A | -| expire_time_incomplete_chunked_messages | Set the expiration time for incomplete chunk messages (in milliseconds). | 0 | -| max-connections | Set the maximum number of TCP connections to a single broker. | 100 | -| max_chunked_msg | Set the max pending chunk messages. | 0 | -| num-consumers | Set the number of consumers for each topic. | 1 | -| num-io-threads |Set the number of threads to be used for handling connections to brokers. | 1 | -| num-subscriptions | Set the number of subscriptions (per topic). | 1 | -| num-topic | Set the number of topics. | 1 | -| pool-messages | Configure whether to use the pooled message. | true | -| rate | Simulate a slow message consumer (rate in msg/s). | 0.0 | -| receiver-queue-size | Set the size of the receiver queue. | 1000 | -| receiver-queue-size-across-partitions | Set the max total size of the receiver queue across partitions. | 50000 | -| replicated | Configure whether the subscription status should be replicated. | false | -| service-url | Set the Pulsar service URL. | | -| stats-interval-seconds | Set the statistics interval. If it is set to 0, statistics is disabled. | 0 | -| subscriber-name | Set the subscriber name prefix. | | -| subscription-position | Set the subscription position. Valid values are `Latest`, `Earliest`.| Latest | -| subscription-type | Set the subscription type.
Valid values are `Exclusive`, `Shared`, `Failover`, `Key_Shared`. | Exclusive |
-| test-duration | Set the test duration (in seconds). If the value is less than or equal to 0, it keeps consuming messages. | 0 |
-| tls-allow-insecure | Set the allowed insecure TLS connection. | N/A |
-| trust-cert-file | Set the path for the trusted TLS certificate file. | |
-
-## Configurations
-
-By default, the Pulsar Perf uses `conf/client.conf` as the client configuration and `conf/log4j2.yaml` as the Log4j configuration. If you want to connect to other Pulsar clusters, update `brokerServiceUrl` in the client configuration.
-
-You can use the following commands to change the configuration file and the Log4j configuration file (the paths are placeholders):
-
-```
-
-export PULSAR_CLIENT_CONF=<path-to-client-conf>
-export PULSAR_LOG_CONF=<path-to-log4j2-yaml>
-
-```
-
-In addition, you can use the following command to configure the JVM through environment variables:
-
-```
-
-export PULSAR_EXTRA_OPTS='-Xms4g -Xmx4g -XX:MaxDirectMemorySize=4g'
-
-```
-
-## HdrHistogram Plotter
-
-The [HdrHistogram Plotter](https://hdrhistogram.github.io/HdrHistogram/plotFiles.html) is a visualization tool that makes Pulsar Perf test results easier to inspect.
-
-To check test results through the HdrHistogram Plotter, follow these steps:
-
-1. Clone the HdrHistogram repository from GitHub to your local machine.
-
-   ```
-
-   git clone https://github.com/HdrHistogram/HdrHistogram.git
-
-   ```
-
-2. Switch to the HdrHistogram folder.
-
-   ```
-
-   cd HdrHistogram
-
-   ```
-
-3. Install the HdrHistogram Plotter.
-
-   ```
-
-   mvn clean install -DskipTests
-
-   ```
-
-4. Transform the file generated by the Pulsar Perf (the file names are placeholders).
-
-   ```
-
-   ./HistogramLogProcessor -i <input-hdr-file> -o <output-file>
-
-   ```
-
-5. You will get two output files. Upload the output file with the `.hgrm` extension to the [HdrHistogram Plotter](https://hdrhistogram.github.io/HdrHistogram/plotFiles.html).
-
-6. Check the test result through the graphical user interface of the HdrHistogram Plotter, as shown below.
-
-   ![](/assets/perf-produce.png)
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/reference-cli-tools.md b/site2/website/versioned_docs/version-2.9.2-deprecated/reference-cli-tools.md
deleted file mode 100644
index 6893da3ec6394b..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/reference-cli-tools.md
+++ /dev/null
@@ -1,941 +0,0 @@
----
-id: reference-cli-tools
-title: Pulsar command-line tools
-sidebar_label: "Pulsar CLI tools"
-original_id: reference-cli-tools
----
-
-Pulsar offers several command-line tools that you can use for managing Pulsar installations, performance testing, running command-line producers and consumers, and more.
-
-All Pulsar command-line tools can be run from the `bin` directory of your [installed Pulsar package](getting-started-standalone.md). The following tools are currently documented:
-
-* [`pulsar`](#pulsar)
-* [`pulsar-client`](#pulsar-client)
-* [`pulsar-daemon`](#pulsar-daemon)
-* [`pulsar-perf`](#pulsar-perf)
-* [`bookkeeper`](#bookkeeper)
-* [`broker-tool`](#broker-tool)
-
-> ### Getting help
-> You can get help for any CLI tool, command, or subcommand using the `--help` flag, or `-h` for short. Here's an example:

-> ```shell
->
-> $ bin/pulsar broker --help
->
->
-> ```
-
-
-## `pulsar`
-
-The pulsar tool is used to start Pulsar components, such as bookies and ZooKeeper, in the foreground.
-
-These processes can also be started in the background with nohup by using the pulsar-daemon tool, which has the same command interface as pulsar.
- -Usage: - -```bash - -$ pulsar command - -``` - -Commands: -* `bookie` -* `broker` -* `compact-topic` -* `configuration-store` -* `initialize-cluster-metadata` -* `proxy` -* `standalone` -* `websocket` -* `zookeeper` -* `zookeeper-shell` - -Example: - -```bash - -$ PULSAR_BROKER_CONF=/path/to/broker.conf pulsar broker - -``` - -The table below lists the environment variables that you can use to configure the `pulsar` tool. - -|Variable|Description|Default| -|---|---|---| -|`PULSAR_LOG_CONF`|Log4j configuration file|`conf/log4j2.yaml`| -|`PULSAR_BROKER_CONF`|Configuration file for broker|`conf/broker.conf`| -|`PULSAR_BOOKKEEPER_CONF`|description: Configuration file for bookie|`conf/bookkeeper.conf`| -|`PULSAR_ZK_CONF`|Configuration file for zookeeper|`conf/zookeeper.conf`| -|`PULSAR_CONFIGURATION_STORE_CONF`|Configuration file for the configuration store|`conf/global_zookeeper.conf`| -|`PULSAR_WEBSOCKET_CONF`|Configuration file for websocket proxy|`conf/websocket.conf`| -|`PULSAR_STANDALONE_CONF`|Configuration file for standalone|`conf/standalone.conf`| -|`PULSAR_EXTRA_OPTS`|Extra options to be passed to the jvm|| -|`PULSAR_EXTRA_CLASSPATH`|Extra paths for Pulsar's classpath|| -|`PULSAR_PID_DIR`|Folder where the pulsar server PID file should be stored|| -|`PULSAR_STOP_TIMEOUT`|Wait time before forcefully killing the Bookie server instance if attempts to stop it are not successful|| - - - -### `bookie` - -Starts up a bookie server - -Usage: - -```bash - -$ pulsar bookie options - -``` - -Options - -|Option|Description|Default| -|---|---|---| -|`-readOnly`|Force start a read-only bookie server|false| -|`-withAutoRecovery`|Start auto-recover service bookie server|false| - - -Example - -```bash - -$ PULSAR_BOOKKEEPER_CONF=/path/to/bookkeeper.conf pulsar bookie \ - -readOnly \ - -withAutoRecovery - -``` - -### `broker` - -Starts up a Pulsar broker - -Usage - -```bash - -$ pulsar broker options - -``` - -Options - -|Option|Description|Default| -|---|---|---| -|`-bc` , `--bookie-conf`|Configuration file for BookKeeper|| -|`-rb` , `--run-bookie`|Run a BookKeeper bookie on the same host as the Pulsar broker|false| -|`-ra` , `--run-bookie-autorecovery`|Run a BookKeeper autorecovery daemon on the same host as the Pulsar broker|false| - -Example - -```bash - -$ PULSAR_BROKER_CONF=/path/to/broker.conf pulsar broker - -``` - -### `compact-topic` - -Run compaction against a Pulsar topic (in a new process) - -Usage - -```bash - -$ pulsar compact-topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-t` , `--topic`|The Pulsar topic that you would like to compact|| - -Example - -```bash - -$ pulsar compact-topic --topic topic-to-compact - -``` - -### `configuration-store` - -Starts up the Pulsar configuration store - -Usage - -```bash - -$ pulsar configuration-store - -``` - -Example - -```bash - -$ PULSAR_CONFIGURATION_STORE_CONF=/path/to/configuration_store.conf pulsar configuration-store - -``` - -### `initialize-cluster-metadata` - -One-time cluster metadata initialization - -Usage - -```bash - -$ pulsar initialize-cluster-metadata options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-ub` , `--broker-service-url`|The broker service URL for the new cluster|| -|`-tb` , `--broker-service-url-tls`|The broker service URL for the new cluster with TLS encryption|| -|`-c` , `--cluster`|Cluster name|| -|`-cs` , `--configuration-store`|The configuration store quorum connection string|| -|`--existing-bk-metadata-service-uri`|The metadata service URI of the existing 
BookKeeper cluster that you want to use||
-|`-h` , `--help`|Show the help message|false|
-|`--initial-num-stream-storage-containers`|The number of storage containers of BookKeeper stream storage|16|
-|`--initial-num-transaction-coordinators`|The number of transaction coordinators assigned in a cluster|16|
-|`-uw` , `--web-service-url`|The web service URL for the new cluster||
-|`-tw` , `--web-service-url-tls`|The web service URL for the new cluster with TLS encryption||
-|`-zk` , `--zookeeper`|The local ZooKeeper quorum connection string||
-|`--zookeeper-session-timeout-ms`|The local ZooKeeper session timeout in milliseconds (ms)|30000|
-
-
-### `proxy`
-
-Manages the Pulsar proxy
-
-Usage
-
-```bash
-
-$ pulsar proxy options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--configuration-store`|Configuration store connection string||
-|`-zk` , `--zookeeper-servers`|Local ZooKeeper connection string||
-
-Example
-
-```bash
-
-$ PULSAR_PROXY_CONF=/path/to/proxy.conf pulsar proxy \
-  --zookeeper-servers zk-0,zk-1,zk-2 \
-  --configuration-store zk-0,zk-1,zk-2
-
-```
-
-### `standalone`
-
-Run a broker service with local bookies and local ZooKeeper
-
-Usage
-
-```bash
-
-$ pulsar standalone options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-a` , `--advertised-address`|The standalone broker advertised address||
-|`--bookkeeper-dir`|Local bookies’ base data directory|data/standalone/bookkeeper|
-|`--bookkeeper-port`|Local bookies’ base port|3181|
-|`--no-broker`|Only start ZooKeeper and BookKeeper services, not the broker|false|
-|`--num-bookies`|The number of local bookies|1|
-|`--only-broker`|Only start the Pulsar broker service (not ZooKeeper or BookKeeper)||
-|`--wipe-data`|Clean up previous ZooKeeper/BookKeeper data||
-|`--zookeeper-dir`|Local ZooKeeper’s data directory|data/standalone/zookeeper|
-|`--zookeeper-port` |Local ZooKeeper’s port|2181|
-
-Example
-
-```bash
-
-$ PULSAR_STANDALONE_CONF=/path/to/standalone.conf pulsar standalone
-
-```
-
-### `websocket`
-
-Starts up the Pulsar WebSocket proxy
-
-Usage
-
-```bash
-
-$ pulsar websocket
-
-```
-
-Example
-
-```bash
-
-$ PULSAR_WEBSOCKET_CONF=/path/to/websocket.conf pulsar websocket
-
-```
-
-### `zookeeper`
-
-Starts up a ZooKeeper cluster
-
-Usage
-
-```bash
-
-$ pulsar zookeeper
-
-```
-
-Example
-
-```bash
-
-$ PULSAR_ZK_CONF=/path/to/zookeeper.conf pulsar zookeeper
-
-```
-
-### `zookeeper-shell`
-
-Connects to a running ZooKeeper cluster using the ZooKeeper shell
-
-Usage
-
-```bash
-
-$ pulsar zookeeper-shell options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-c`, `--conf`|Configuration file for ZooKeeper||
-|`-server`|The ZooKeeper server address to connect to, for example `127.0.0.1:2181`||
-
-
-
-## `pulsar-client`
-
-The pulsar-client tool lets you produce and consume messages from the command line.
-
-Usage
-
-```bash
-
-$ pulsar-client command
-
-```
-
-Commands
-* `produce`
-* `consume`
-
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class, for example "key1:val1,key2:val2" or "{\"key1\":\"val1\",\"key2\":\"val2\"}"|{"saslJaasClientSectionName":"PulsarClient", "serverType":"broker"}|
-|`--auth-plugin`|Authentication plugin class name|org.apache.pulsar.client.impl.auth.AuthenticationSasl|
-|`--listener-name`|Listener name for the broker||
-|`--proxy-protocol`|Proxy protocol to select type of routing at proxy||
-|`--proxy-url`|Proxy-server URL to which to connect||
-|`--url`|Broker URL to which to connect|pulsar://localhost:6650/
    ws://localhost:8080 | -| `-v`, `--version` | Get the version of the Pulsar client -|`-h`, `--help`|Show this help - - -### `produce` -Send a message or messages to a specific broker and topic - -Usage - -```bash - -$ pulsar-client produce topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-f`, `--files`|Comma-separated file paths to send; either -m or -f must be specified|[]| -|`-m`, `--messages`|Comma-separated string of messages to send; either -m or -f must be specified|[]| -|`-n`, `--num-produce`|The number of times to send the message(s); the count of messages/files * num-produce should be below 1000|1| -|`-r`, `--rate`|Rate (in messages per second) at which to produce; a value 0 means to produce messages as fast as possible|0.0| -|`-c`, `--chunking`|Split the message and publish in chunks if the message size is larger than the allowed max size|false| -|`-s`, `--separator`|Character to split messages string with.|","| -|`-k`, `--key`|Message key to add|key=value string, like k1=v1,k2=v2.| -|`-p`, `--properties`|Properties to add. If you want to add multiple properties, use the comma as the separator, e.g. `k1=v1,k2=v2`.| | -|`-ekn`, `--encryption-key-name`|The public key name to encrypt payload.| | -|`-ekv`, `--encryption-key-value`|The URI of public key to encrypt payload. For example, `file:///path/to/public.key` or `data:application/x-pem-file;base64,*****`.| | - - -### `consume` -Consume messages from a specific broker and topic - -Usage - -```bash - -$ pulsar-client consume topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--hex`|Display binary messages in hexadecimal format.|false| -|`-n`, `--num-messages`|Number of messages to consume, 0 means to consume forever.|1| -|`-r`, `--rate`|Rate (in messages per second) at which to consume; a value 0 means to consume messages as fast as possible|0.0| -|`--regex`|Indicate the topic name is a regex pattern|false| -|`-s`, `--subscription-name`|Subscription name|| -|`-t`, `--subscription-type`|The type of the subscription. Possible values: Exclusive, Shared, Failover, Key_Shared.|Exclusive| -|`-p`, `--subscription-position`|The position of the subscription. Possible values: Latest, Earliest.|Latest| -|`-m`, `--subscription-mode`|Subscription mode.|Durable| -|`-q`, `--queue-size`|The size of consumer's receiver queue.|0| -|`-mc`, `--max_chunked_msg`|Max pending chunk messages.|0| -|`-ac`, `--auto_ack_chunk_q_full`|Auto ack for the oldest message in consumer's receiver queue if the queue full.|false| -|`--hide-content`|Do not print the message to the console.|false| -|`-st`, `--schema-type`|Set the schema type. Use `auto_consume` to dump AVRO and other structured data types. Possible values: bytes, auto_consume.|bytes| -|`-ekv`, `--encryption-key-value`|The URI of public key to encrypt payload. For example, `file:///path/to/public.key` or `data:application/x-pem-file;base64,*****`.| | -|`-pm`, `--pool-messages`|Use the pooled message.|true| - -## `pulsar-daemon` -A wrapper around the pulsar tool that’s used to start and stop processes, such as ZooKeeper, bookies, and Pulsar brokers, in the background using nohup. - -pulsar-daemon has a similar interface to the pulsar command but adds start and stop commands for various services. For a listing of those services, run pulsar-daemon to see the help output or see the documentation for the pulsar command. 
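-
-For example, a typical local workflow might look like the following sketch (using the standalone service; any service listed by pulsar-daemon works the same way):
-
-```bash
-
-# Start a standalone Pulsar in the background; logs go to the logs/ directory by default.
-$ pulsar-daemon start standalone
-
-# Later, stop it; add -force to kill the process if a clean shutdown fails.
-$ pulsar-daemon stop standalone
-
-```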
- -Usage - -```bash - -$ pulsar-daemon command - -``` - -Commands -* `start` -* `stop` - - -### `start` -Start a service in the background using nohup. - -Usage - -```bash - -$ pulsar-daemon start service - -``` - -### `stop` -Stop a service that’s already been started using start. - -Usage - -```bash - -$ pulsar-daemon stop service options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|-force|Stop the service forcefully if not stopped by normal shutdown.|false| - - - -## `pulsar-perf` -A tool for performance testing a Pulsar broker. - -Usage - -```bash - -$ pulsar-perf command - -``` - -Commands -* `consume` -* `produce` -* `read` -* `websocket-producer` -* `managed-ledger` -* `monitor-brokers` -* `simulation-client` -* `simulation-controller` -* `help` - -Environment variables - -The table below lists the environment variables that you can use to configure the pulsar-perf tool. - -|Variable|Description|Default| -|---|---|---| -|`PULSAR_LOG_CONF`|Log4j configuration file|conf/log4j2.yaml| -|`PULSAR_CLIENT_CONF`|Configuration file for the client|conf/client.conf| -|`PULSAR_EXTRA_OPTS`|Extra options to be passed to the JVM|| -|`PULSAR_EXTRA_CLASSPATH`|Extra paths for Pulsar's classpath|| - - -### `consume` -Run a consumer - -Usage - -``` - -$ pulsar-perf consume options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.|| -|`--auth-plugin`|Authentication plugin class name|| -|`-ac`, `--auto_ack_chunk_q_full`|Auto ack for the oldest message in consumer's receiver queue if the queue full|false| -|`--listener-name`|Listener name for the broker|| -|`--acks-delay-millis`|Acknowledgements grouping delay in millis|100| -|`--batch-index-ack`|Enable or disable the batch index acknowledgment|false| -|`-bw`, `--busy-wait`|Enable or disable Busy-Wait on the Pulsar client|false| -|`-v`, `--encryption-key-value-file`|The file which contains the private key to decrypt payload|| -|`-h`, `--help`|Help message|false| -|`--conf-file`|Configuration file|| -|`-m`, `--num-messages`|Number of messages to consume in total. If the value is equal to or smaller than 0, it keeps consuming messages.|0| -|`-e`, `--expire_time_incomplete_chunked_messages`|The expiration time for incomplete chunk messages (in milliseconds)|0| -|`-c`, `--max-connections`|Max number of TCP connections to a single broker|100| -|`-mc`, `--max_chunked_msg`|Max pending chunk messages|0| -|`-n`, `--num-consumers`|Number of consumers (per topic)|1| -|`-ioThreads`, `--num-io-threads`|Set the number of threads to be used for handling connections to brokers|1| -|`-ns`, `--num-subscriptions`|Number of subscriptions (per topic)|1| -|`-t`, `--num-topics`|The number of topics|1| -|`-pm`, `--pool-messages`|Use the pooled message|true| -|`-r`, `--rate`|Simulate a slow message consumer (rate in msg/s)|0| -|`-q`, `--receiver-queue-size`|Size of the receiver queue|1000| -|`-p`, `--receiver-queue-size-across-partitions`|Max total size of the receiver queue across partitions|50000| -|`--replicated`|Whether the subscription status should be replicated|false| -|`-u`, `--service-url`|Pulsar service URL|| -|`-i`, `--stats-interval-seconds`|Statistics interval seconds. 
If 0, statistics will be disabled|0| -|`-s`, `--subscriber-name`|Subscriber name prefix|| -|`-ss`, `--subscriptions`|A list of subscriptions to consume on (e.g. sub1,sub2)|sub| -|`-st`, `--subscription-type`|Subscriber type. Possible values are Exclusive, Shared, Failover, Key_Shared.|Exclusive| -|`-sp`, `--subscription-position`|Subscriber position. Possible values are Latest, Earliest.|Latest| -|`-time`, `--test-duration`|Test duration (in seconds). If this value is less than or equal to 0, it keeps consuming messages.|0| -|`--trust-cert-file`|Path for the trusted TLS certificate file|| -|`--tls-allow-insecure`|Allow insecure TLS connection|| - - -### `produce` -Run a producer - -Usage - -```bash - -$ pulsar-perf produce options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-am`, `--access-mode`|Producer access mode. Valid values are `Shared`, `Exclusive` and `WaitForExclusive`|Shared| -|`-au`, `--admin-url`|Pulsar admin URL|| -|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.|| -|`--auth-plugin`|Authentication plugin class name|| -|`--listener-name`|Listener name for the broker|| -|`-b`, `--batch-time-window`|Batch messages in a window of the specified number of milliseconds|1| -|`-bb`, `--batch-max-bytes`|Maximum number of bytes per batch|4194304| -|`-bm`, `--batch-max-messages`|Maximum number of messages per batch|1000| -|`-bw`, `--busy-wait`|Enable or disable Busy-Wait on the Pulsar client|false| -|`-ch`, `--chunking`|Split the message and publish in chunks if the message size is larger than allowed max size|false| -|`-d`, `--delay`|Mark messages with a given delay in seconds|0s| -|`-z`, `--compression`|Compress messages’ payload. Possible values are NONE, LZ4, ZLIB, ZSTD or SNAPPY.|| -|`--conf-file`|Configuration file|| -|`-k`, `--encryption-key-name`|The public key name to encrypt payload|| -|`-v`, `--encryption-key-value-file`|The file which contains the public key to encrypt payload|| -|`-ef`, `--exit-on-failure`|Exit from the process on publish failure|false| -|`-fc`, `--format-class`|Custom Formatter class name|org.apache.pulsar.testclient.DefaultMessageFormatter| -|`-fp`, `--format-payload`|Format %i as a message index in the stream from producer and/or %t as the timestamp nanoseconds|false| -|`-h`, `--help`|Help message|false| -|`-c`, `--max-connections`|Max number of TCP connections to a single broker|100| -|`-o`, `--max-outstanding`|Max number of outstanding messages|1000| -|`-p`, `--max-outstanding-across-partitions`|Max number of outstanding messages across partitions|50000| -|`-m`, `--num-messages`|Number of messages to publish in total. If this value is less than or equal to 0, it keeps publishing messages.|0| -|`-mk`, `--message-key-generation-mode`|The generation mode of message key. Valid options are `autoIncrement`, `random`|| -|`-ioThreads`, `--num-io-threads`|Set the number of threads to be used for handling connections to brokers|1| -|`-n`, `--num-producers`|The number of producers (per topic)|1| -|`-threads`, `--num-test-threads`|Number of test threads|1| -|`-t`, `--num-topic`|The number of topics|1| -|`-np`, `--partitions`|Create partitioned topics with the given number of partitions. 
Setting this value to 0 means not trying to create a topic|| -|`-f`, `--payload-file`|Use payload from an UTF-8 encoded text file and a payload will be randomly selected when publishing messages|| -|`-e`, `--payload-delimiter`|The delimiter used to split lines when using payload from a file|\n| -|`-pn`, `--producer-name`|Producer Name|| -|`-r`, `--rate`|Publish rate msg/s across topics|100| -|`--send-timeout`|Set the sendTimeout|0| -|`--separator`|Separator between the topic and topic number|-| -|`-u`, `--service-url`|Pulsar service URL|| -|`-s`, `--size`|Message size (in bytes)|1024| -|`-i`, `--stats-interval-seconds`|Statistics interval seconds. If 0, statistics will be disabled.|0| -|`-time`, `--test-duration`|Test duration (in seconds). If this value is less than or equal to 0, it keeps publishing messages.|0| -|`--trust-cert-file`|Path for the trusted TLS certificate file|| -|`--warmup-time`|Warm-up time in seconds|1| -|`--tls-allow-insecure`|Allow insecure TLS connection|| - - -### `read` -Run a topic reader - -Usage - -```bash - -$ pulsar-perf read options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.|| -|`--auth-plugin`|Authentication plugin class name|| -|`--listener-name`|Listener name for the broker|| -|`--conf-file`|Configuration file|| -|`-h`, `--help`|Help message|false| -|`-n`, `--num-messages`|Number of messages to consume in total. If the value is equal to or smaller than 0, it keeps consuming messages.|0| -|`-c`, `--max-connections`|Max number of TCP connections to a single broker|100| -|`-ioThreads`, `--num-io-threads`|Set the number of threads to be used for handling connections to brokers|1| -|`-t`, `--num-topics`|The number of topics|1| -|`-r`, `--rate`|Simulate a slow message reader (rate in msg/s)|0| -|`-q`, `--receiver-queue-size`|Size of the receiver queue|1000| -|`-u`, `--service-url`|Pulsar service URL|| -|`-m`, `--start-message-id`|Start message id. This can be either 'earliest', 'latest' or a specific message id by using 'lid:eid'|earliest| -|`-i`, `--stats-interval-seconds`|Statistics interval seconds. If 0, statistics will be disabled.|0| -|`-time`, `--test-duration`|Test duration (in seconds). If this value is less than or equal to 0, it keeps consuming messages.|0| -|`--trust-cert-file`|Path for the trusted TLS certificate file|| -|`--use-tls`|Use TLS encryption on the connection|false| -|`--tls-allow-insecure`|Allow insecure TLS connection|| - -### `websocket-producer` -Run a websocket producer - -Usage - -```bash - -$ pulsar-perf websocket-producer options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.|| -|`--auth-plugin`|Authentication plugin class name|| -|`--conf-file`|Configuration file|| -|`-h`, `--help`|Help message|false| -|`-m`, `--num-messages`|Number of messages to publish in total. 
If this value is less than or equal to 0, it keeps publishing messages.|0| -|`-t`, `--num-topic`|The number of topics|1| -|`-f`, `--payload-file`|Use payload from a file instead of empty buffer|| -|`-e`, `--payload-delimiter`|The delimiter used to split lines when using payload from a file|\n| -|`-fp`, `--format-payload`|Format %i as a message index in the stream from producer and/or %t as the timestamp nanoseconds|false| -|`-fc`, `--format-class`|Custom formatter class name|`org.apache.pulsar.testclient.DefaultMessageFormatter`| -|`-u`, `--proxy-url`|Pulsar Proxy URL, e.g., "ws://localhost:8080/"|| -|`-r`, `--rate`|Publish rate msg/s across topics|100| -|`-s`, `--size`|Message size in byte|1024| -|`-time`, `--test-duration`|Test duration (in seconds). If this value is less than or equal to 0, it keeps publishing messages.|0| - - -### `managed-ledger` -Write directly on managed-ledgers - -Usage - -```bash - -$ pulsar-perf managed-ledger options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-a`, `--ack-quorum`|Ledger ack quorum|1| -|`-dt`, `--digest-type`|BookKeeper digest type. Possible Values: [CRC32, MAC, CRC32C, DUMMY]|CRC32C| -|`-e`, `--ensemble-size`|Ledger ensemble size|1| -|`-h`, `--help`|Help message|false| -|`-c`, `--max-connections`|Max number of TCP connections to a single bookie|1| -|`-o`, `--max-outstanding`|Max number of outstanding requests|1000| -|`-m`, `--num-messages`|Number of messages to publish in total. If this value is less than or equal to 0, it keeps publishing messages.|0| -|`-t`, `--num-topic`|Number of managed ledgers|1| -|`-r`, `--rate`|Write rate msg/s across managed ledgers|100| -|`-s`, `--size`|Message size in byte|1024| -|`-time`, `--test-duration`|Test duration (in seconds). If this value is less than or equal to 0, it keeps publishing messages.|0| -|`--threads`|Number of threads writing|1| -|`-w`, `--write-quorum`|Ledger write quorum|1| -|`-zk`, `--zookeeperServers`|ZooKeeper connection string|| - - -### `monitor-brokers` -Continuously receive broker data and/or load reports - -Usage - -```bash - -$ pulsar-perf monitor-brokers options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--connect-string`|A connection string for one or more ZooKeeper servers|| -|`-h`, `--help`|Help message|false| - - -### `simulation-client` -Run a simulation server acting as a Pulsar client. Uses the client configuration specified in `conf/client.conf`. - -Usage - -```bash - -$ pulsar-perf simulation-client options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--port`|Port to listen on for controller|0| -|`--service-url`|Pulsar Service URL|| -|`-h`, `--help`|Help message|false| - -### `simulation-controller` -Run a simulation controller to give commands to servers - -Usage - -```bash - -$ pulsar-perf simulation-controller options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--client-port`|The port that the clients are listening on|0| -|`--clients`|Comma-separated list of client hostnames|| -|`--cluster`|The cluster to test on|| -|`-h`, `--help`|Help message|false| - - -### `help` -This help message - -Usage - -```bash - -$ pulsar-perf help - -``` - -## `bookkeeper` -A tool for managing BookKeeper. - -Usage - -```bash - -$ bookkeeper command - -``` - -Commands -* `auto-recovery` -* `bookie` -* `localbookie` -* `upgrade` -* `shell` - - -Environment variables - -The table below lists the environment variables that you can use to configure the bookkeeper tool. 
- -|Variable|Description|Default| -|---|---|---| -|BOOKIE_LOG_CONF|Log4j configuration file|conf/log4j2.yaml| -|BOOKIE_CONF|BookKeeper configuration file|conf/bk_server.conf| -|BOOKIE_EXTRA_OPTS|Extra options to be passed to the JVM|| -|BOOKIE_EXTRA_CLASSPATH|Extra paths for BookKeeper's classpath|| -|ENTRY_FORMATTER_CLASS|The Java class used to format entries|| -|BOOKIE_PID_DIR|Folder where the BookKeeper server PID file should be stored|| -|BOOKIE_STOP_TIMEOUT|Wait time before forcefully killing the Bookie server instance if attempts to stop it are not successful|| - - -### `auto-recovery` -Runs an auto-recovery service daemon - -Usage - -```bash - -$ bookkeeper auto-recovery options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-c`, `--conf`|Configuration for the auto-recovery daemon|| - - -### `bookie` -Starts up a BookKeeper server (aka bookie) - -Usage - -```bash - -$ bookkeeper bookie options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-c`, `--conf`|Configuration for the auto-recovery daemon|| -|-readOnly|Force start a read-only bookie server|false| -|-withAutoRecovery|Start auto-recovery service bookie server|false| - - -### `localbookie` -Runs a test ensemble of N bookies locally - -Usage - -```bash - -$ bookkeeper localbookie N - -``` - -### `upgrade` -Upgrade the bookie’s filesystem - -Usage - -```bash - -$ bookkeeper upgrade options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-c`, `--conf`|Configuration for the auto-recovery daemon|| -|`-u`, `--upgrade`|Upgrade the bookie’s directories|| - - -### `shell` -Run shell for admin commands. To see a full listing of those commands, run bookkeeper shell without an argument. - -Usage - -```bash - -$ bookkeeper shell - -``` - -Example - -```bash - -$ bookkeeper shell bookiesanity - -``` - -## `broker-tool` - -The `broker- tool` is used for operations on a specific broker. - -Usage - -```bash - -$ broker-tool command - -``` - -Commands -* `load-report` -* `help` - -Example -Two ways to get more information about a command as below: - -```bash - -$ broker-tool help command -$ broker-tool command --help - -``` - -### `load-report` - -Collect the load report of a specific broker. -The command is run on a broker, and used for troubleshooting why broker can’t collect right load report. - -Options - -|Flag|Description|Default| -|---|---|---| -|`-i`, `--interval`| Interval to collect load report, in milliseconds || -|`-h`, `--help`| Display help information || - diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/reference-configuration.md b/site2/website/versioned_docs/version-2.9.2-deprecated/reference-configuration.md deleted file mode 100644 index d71ce8214b21da..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/reference-configuration.md +++ /dev/null @@ -1,774 +0,0 @@ ---- -id: reference-configuration -title: Pulsar configuration -sidebar_label: "Pulsar configuration" -original_id: reference-configuration ---- - - - - -You can manage Pulsar configuration by configuration files in the [`conf`](https://github.com/apache/pulsar/tree/master/conf) directory of a Pulsar [installation](getting-started-standalone.md). 
- -- [BookKeeper](#bookkeeper) -- [Broker](#broker) -- [Client](#client) -- [Log4j](#log4j) -- [Log4j shell](#log4j-shell) -- [Standalone](#standalone) -- [WebSocket](#websocket) -- [Pulsar proxy](#pulsar-proxy) -- [ZooKeeper](#zookeeper) - -## BookKeeper - -BookKeeper is a replicated log storage system that Pulsar uses for durable storage of all messages. - - -|Name|Description|Default| -|---|---|---| -|bookiePort|The port on which the bookie server listens.|3181| -|allowLoopback|Whether the bookie is allowed to use a loopback interface as its primary interface (that is the interface used to establish its identity). By default, loopback interfaces are not allowed to work as the primary interface. Using a loopback interface as the primary interface usually indicates a configuration error. For example, it’s fairly common in some VPS setups to not configure a hostname or to have the hostname resolve to `127.0.0.1`. If this is the case, then all bookies in the cluster will establish their identities as `127.0.0.1:3181` and only one will be able to join the cluster. For VPSs configured like this, you should explicitly set the listening interface.|false| -|listeningInterface|The network interface on which the bookie listens. By default, the bookie listens on all interfaces.|eth0| -|advertisedAddress|Configure a specific hostname or IP address that the bookie should use to advertise itself to clients. By default, the bookie advertises either its own IP address or hostname according to the `listeningInterface` and `useHostNameAsBookieID` settings.|N/A| -|allowMultipleDirsUnderSameDiskPartition|Configure the bookie to enable/disable multiple ledger/index/journal directories in the same filesystem disk partition.|false| -|minUsableSizeForIndexFileCreation|The minimum safe usable size available in index directory for bookie to create index files while replaying journal at the time of bookie starts in Readonly Mode (in bytes).|1073741824| -|journalDirectory|The directory where BookKeeper outputs its write-ahead log (WAL).|data/bookkeeper/journal| -|journalDirectories|Directories that BookKeeper outputs its write ahead log. Multiple directories are available, being separated by `,`. For example: `journalDirectories=/tmp/bk-journal1,/tmp/bk-journal2`. If `journalDirectories` is set, the bookies skip `journalDirectory` and use this setting directory.|/tmp/bk-journal| -|ledgerDirectories|The directory where BookKeeper outputs ledger snapshots. This could define multiple directories to store snapshots separated by `,`, for example `ledgerDirectories=/tmp/bk1-data,/tmp/bk2-data`. Ideally, ledger dirs and the journal dir are each in a different device, which reduces the contention between random I/O and sequential write. It is possible to run with a single disk, but performance will be significantly lower.|data/bookkeeper/ledgers| -|ledgerManagerType|The type of ledger manager used to manage how ledgers are stored, managed, and garbage collected. See [BookKeeper Internals](http://bookkeeper.apache.org/docs/latest/getting-started/concepts) for more info.|hierarchical| -|zkLedgersRootPath|The root ZooKeeper path used to store ledger metadata. This parameter is used by the ZooKeeper-based ledger manager as a root znode to store all ledgers.|/ledgers| -|ledgerStorageClass|Ledger storage implementation class|org.apache.bookkeeper.bookie.storage.ldb.DbLedgerStorage| -|entryLogFilePreallocationEnabled|Enable or disable entry logger preallocation|true| -|logSizeLimit|Max file size of the entry logger, in bytes. 
A new entry log file will be created when the old one reaches the file size limitation.|1073741824| -|minorCompactionThreshold|Threshold of minor compaction. Entry log files whose remaining size percentage reaches below this threshold will be compacted in a minor compaction. If set to less than zero, the minor compaction is disabled.|0.2| -|minorCompactionInterval|Time interval to run minor compaction, in seconds. If set to less than zero, the minor compaction is disabled. Note: should be greater than gcWaitTime. |3600| -|majorCompactionThreshold|The threshold of major compaction. Entry log files whose remaining size percentage reaches below this threshold will be compacted in a major compaction. Those entry log files whose remaining size percentage is still higher than the threshold will never be compacted. If set to less than zero, the minor compaction is disabled.|0.5| -|majorCompactionInterval|The time interval to run major compaction, in seconds. If set to less than zero, the major compaction is disabled. Note: should be greater than gcWaitTime. |86400| -|readOnlyModeEnabled|If `readOnlyModeEnabled=true`, then on all full ledger disks, bookie will be converted to read-only mode and serve only read requests. Otherwise the bookie will be shutdown.|true| -|forceReadOnlyBookie|Whether the bookie is force started in read only mode.|false| -|persistBookieStatusEnabled|Persist the bookie status locally on the disks. So the bookies can keep their status upon restarts.|false| -|compactionMaxOutstandingRequests|Sets the maximum number of entries that can be compacted without flushing. When compacting, the entries are written to the entrylog and the new offsets are cached in memory. Once the entrylog is flushed the index is updated with the new offsets. This parameter controls the number of entries added to the entrylog before a flush is forced. A higher value for this parameter means more memory will be used for offsets. Each offset consists of 3 longs. This parameter should not be modified unless you’re fully aware of the consequences.|100000| -|compactionRate|The rate at which compaction will read entries, in adds per second.|1000| -|isThrottleByBytes|Throttle compaction by bytes or by entries.|false| -|compactionRateByEntries|The rate at which compaction will read entries, in adds per second.|1000| -|compactionRateByBytes|Set the rate at which compaction reads entries. The unit is bytes added per second.|1000000| -|journalMaxSizeMB|Max file size of journal file, in megabytes. A new journal file will be created when the old one reaches the file size limitation.|2048| -|journalMaxBackups|The max number of old journal files to keep. 
Keeping a number of old journal files would help data recovery in special cases.|5|
-|journalPreAllocSizeMB|How much space to pre-allocate at a time in the journal, in MB.|16|
-|journalWriteBufferSizeKB|The size of the write buffers used for the journal, in KB.|64|
-|journalRemoveFromPageCache|Whether pages should be removed from the page cache after force write.|true|
-|journalAdaptiveGroupWrites|Whether to group journal force writes, which optimizes group commit for higher throughput.|true|
-|journalMaxGroupWaitMSec|The maximum latency to impose on a journal write to achieve grouping.|1|
-|journalAlignmentSize|All the journal writes and commits should be aligned to the given size|4096|
-|journalBufferedWritesThreshold|Maximum writes to buffer to achieve grouping|524288|
-|journalFlushWhenQueueEmpty|Whether to flush the journal when the journal queue is empty|false|
-|numJournalCallbackThreads|The number of threads that should handle journal callbacks|8|
-|openLedgerRereplicationGracePeriod | The grace period, in milliseconds, that the replication worker waits before fencing and replicating a ledger fragment that's still being written to upon bookie failure. | 30000 |
-|rereplicationEntryBatchSize|The number of max entries to keep in fragment for re-replication|100|
-|autoRecoveryDaemonEnabled|Whether the bookie itself can start the auto-recovery service.|true|
-|lostBookieRecoveryDelay|How long to wait, in seconds, before starting auto recovery of a lost bookie.|0|
-|gcWaitTime|The interval at which to trigger the next garbage collection, in milliseconds. Since garbage collection runs in the background, too frequent gc will hurt performance. It is better to use a higher gc interval if there is enough disk capacity.|900000|
-|gcOverreplicatedLedgerWaitTime|The interval at which to trigger the next garbage collection of overreplicated ledgers, in milliseconds. This should not be run very frequently since we read the metadata for all the ledgers on the bookie from zk.|86400000|
-|flushInterval|The interval at which to flush ledger index pages to disk, in milliseconds. Flushing index files introduces much random disk I/O. If the journal dir and ledger dirs are each on different devices, flushing does not affect performance. But if the journal dir and ledger dirs are on the same device, performance degrades significantly with too frequent flushing. You can consider increasing the flush interval to get better performance, but you pay with a longer bookie server restart after a failure.|60000|
-|bookieDeathWatchInterval|Interval to watch whether the bookie is dead or not, in milliseconds|1000|
-|allowStorageExpansion|Allow the bookie storage to expand. Newly added ledger and index dirs must be empty.|false|
-|zkServers|A list of one or more servers on which zookeeper is running. The server list can be comma separated values, for example: zkServers=zk1:2181,zk2:2181,zk3:2181.|localhost:2181|
-|zkTimeout|ZooKeeper client session timeout in milliseconds. The bookie server will exit if it receives SESSION_EXPIRED because it was partitioned off from ZooKeeper for more than the session timeout; JVM garbage collection or disk I/O can cause SESSION_EXPIRED. 
Increment this value could help avoiding this issue|30000| -|zkRetryBackoffStartMs|The start time that the Zookeeper client backoff retries in milliseconds.|1000| -|zkRetryBackoffMaxMs|The maximum time that the Zookeeper client backoff retries in milliseconds.|10000| -|zkEnableSecurity|Set ACLs on every node written on ZooKeeper, allowing users to read and write BookKeeper metadata stored on ZooKeeper. In order to make ACLs work you need to setup ZooKeeper JAAS authentication. All the bookies and Client need to share the same user, and this is usually done using Kerberos authentication. See ZooKeeper documentation.|false| -|httpServerEnabled|The flag enables/disables starting the admin http server.|false| -|httpServerPort|The HTTP server port to listen on. By default, the value is `8080`. If you want to keep it consistent with the Prometheus stats provider, you can set it to `8000`.|8080 -|httpServerClass|The http server class.|org.apache.bookkeeper.http.vertx.VertxHttpServer| -|serverTcpNoDelay|This settings is used to enabled/disabled Nagle’s algorithm, which is a means of improving the efficiency of TCP/IP networks by reducing the number of packets that need to be sent over the network. If you are sending many small messages, such that more than one can fit in a single IP packet, setting server.tcpnodelay to false to enable Nagle algorithm can provide better performance.|true| -|serverSockKeepalive|This setting is used to send keep-alive messages on connection-oriented sockets.|true| -|serverTcpLinger|The socket linger timeout on close. When enabled, a close or shutdown will not return until all queued messages for the socket have been successfully sent or the linger timeout has been reached. Otherwise, the call returns immediately and the closing is done in the background.|0| -|byteBufAllocatorSizeMax|The maximum buf size of the received ByteBuf allocator.|1048576| -|nettyMaxFrameSizeBytes|The maximum netty frame size in bytes. Any message received larger than this will be rejected.|5253120| -|openFileLimit|Max number of ledger index files could be opened in bookie server If number of ledger index files reaches this limitation, bookie server started to swap some ledgers from memory to disk. Too frequent swap will affect performance. You can tune this number to gain performance according your requirements.|0| -|pageSize|Size of a index page in ledger cache, in bytes A larger index page can improve performance writing page to disk, which is efficient when you have small number of ledgers and these ledgers have similar number of entries. If you have large number of ledgers and each ledger has fewer entries, smaller index page would improve memory usage.|8192| -|pageLimit|How many index pages provided in ledger cache If number of index pages reaches this limitation, bookie server starts to swap some ledgers from memory to disk. You can increment this value when you found swap became more frequent. But make sure pageLimit*pageSize should not more than JVM max memory limitation, otherwise you would got OutOfMemoryException. In general, incrementing pageLimit, using smaller index page would gain better performance in lager number of ledgers with fewer entries case If pageLimit is -1, bookie server will use 1/3 of JVM memory to compute the limitation of number of index pages.|0| -|readOnlyModeEnabled|If all ledger directories configured are full, then support only read requests for clients. 
If "readOnlyModeEnabled=true" then on all ledger disks full, bookie will be converted to read-only mode and serve only read requests. Otherwise the bookie will be shutdown. By default this will be disabled.|true| -|diskUsageThreshold|For each ledger dir, maximum disk space which can be used. Default is 0.95f. i.e. 95% of disk can be used at most after which nothing will be written to that partition. If all ledger dir partitions are full, then bookie will turn to readonly mode if ‘readOnlyModeEnabled=true’ is set, else it will shutdown. Valid values should be in between 0 and 1 (exclusive).|0.95| -|diskCheckInterval|Disk check interval in milli seconds, interval to check the ledger dirs usage.|10000| -|auditorPeriodicCheckInterval|Interval at which the auditor will do a check of all ledgers in the cluster. By default this runs once a week. The interval is set in seconds. To disable the periodic check completely, set this to 0. Note that periodic checking will put extra load on the cluster, so it should not be run more frequently than once a day.|604800| -|sortedLedgerStorageEnabled|Whether sorted-ledger storage is enabled.|true| -|auditorPeriodicBookieCheckInterval|The interval between auditor bookie checks. The auditor bookie check, checks ledger metadata to see which bookies should contain entries for each ledger. If a bookie which should contain entries is unavailable, thea the ledger containing that entry is marked for recovery. Setting this to 0 disabled the periodic check. Bookie checks will still run when a bookie fails. The interval is specified in seconds.|86400| -|numAddWorkerThreads|The number of threads that should handle write requests. if zero, the writes would be handled by netty threads directly.|0| -|numReadWorkerThreads|The number of threads that should handle read requests. if zero, the reads would be handled by netty threads directly.|8| -|numHighPriorityWorkerThreads|The umber of threads that should be used for high priority requests (i.e. recovery reads and adds, and fencing).|8| -|maxPendingReadRequestsPerThread|If read workers threads are enabled, limit the number of pending requests, to avoid the executor queue to grow indefinitely.|2500| -|maxPendingAddRequestsPerThread|The limited number of pending requests, which is used to avoid the executor queue to grow indefinitely when add workers threads are enabled.|10000| -|isForceGCAllowWhenNoSpace|Whether force compaction is allowed when the disk is full or almost full. Forcing GC could get some space back, but could also fill up the disk space more quickly. This is because new log files are created before GC, while old garbage log files are deleted after GC.|false| -|verifyMetadataOnGC|True if the bookie should double check `readMetadata` prior to GC.|false| -|flushEntrylogBytes|Entry log flush interval in bytes. Flushing in smaller chunks but more frequently reduces spikes in disk I/O. Flushing too frequently may also affect performance negatively.|268435456| -|readBufferSizeBytes|The number of bytes we should use as capacity for BufferedReadChannel.|4096| -|writeBufferSizeBytes|The number of bytes used as capacity for the write buffer|65536| -|useHostNameAsBookieID|Whether the bookie should use its hostname to register with the coordination service (e.g.: zookeeper service). When false, bookie will use its ip address for the registration.|false| -|bookieId | If you want to custom a bookie ID or use a dynamic network address for the bookie, you can set the `bookieId`.

    Bookie advertises itself using the `bookieId` rather than the `BookieSocketAddress` (`hostname:port` or `IP:port`). If you set the `bookieId`, then the `useHostNameAsBookieID` does not take effect.

    The `bookieId` is a non-empty string that can contain ASCII digits and letters ([a-zA-Z0-9]), colons, dashes, and dots.

    For more information about `bookieId`, see [here](http://bookkeeper.apache.org/bps/BP-41-bookieid/).|N/A| -|allowEphemeralPorts|Whether the bookie is allowed to use an ephemeral port (port 0) as its server port. By default, an ephemeral port is not allowed. Using an ephemeral port as the service port usually indicates a configuration error. However, in unit tests, using an ephemeral port will address port conflict problems and allow running tests in parallel.|false| -|enableLocalTransport|Whether the bookie is allowed to listen for the BookKeeper clients executed on the local JVM.|false| -|disableServerSocketBind|Whether the bookie is allowed to disable bind on network interfaces. This bookie will be available only to BookKeeper clients executed on the local JVM.|false| -|skipListArenaChunkSize|The number of bytes that we should use as chunk allocation for `org.apache.bookkeeper.bookie.SkipListArena`.|4194304| -|skipListArenaMaxAllocSize|The maximum size that we should allocate from the skiplist arena. Allocations larger than this should be allocated directly by the VM to avoid fragmentation.|131072| -|bookieAuthProviderFactoryClass|The factory class name of the bookie authentication provider. If this is null, then there is no authentication.|null| -|statsProviderClass||org.apache.bookkeeper.stats.prometheus.PrometheusMetricsProvider| -|prometheusStatsHttpPort||8000| -|dbStorage_writeCacheMaxSizeMb|Size of Write Cache. Memory is allocated from JVM direct memory. Write cache is used to buffer entries before flushing into the entry log. For good performance, it should be big enough to hold a substantial amount of entries in the flush interval.|25% of direct memory| -|dbStorage_readAheadCacheMaxSizeMb|Size of Read cache. Memory is allocated from JVM direct memory. This read cache is pre-filled doing read-ahead whenever a cache miss happens. By default, it is allocated to 25% of the available direct memory.|N/A| -|dbStorage_readAheadCacheBatchSize|How many entries to pre-fill in cache after a read cache miss|1000| -|dbStorage_rocksDB_blockCacheSize|Size of RocksDB block-cache. For best performance, this cache should be big enough to hold a significant portion of the index database which can reach ~2GB in some cases. By default, it uses 10% of direct memory.|N/A| -|dbStorage_rocksDB_writeBufferSizeMB||64| -|dbStorage_rocksDB_sstSizeInMB||64| -|dbStorage_rocksDB_blockSize||65536| -|dbStorage_rocksDB_bloomFilterBitsPerKey||10| -|dbStorage_rocksDB_numLevels||-1| -|dbStorage_rocksDB_numFilesInLevel0||4| -|dbStorage_rocksDB_maxSizeInLevel1MB||256| - -## Broker - -Pulsar brokers are responsible for handling incoming messages from producers, dispatching messages to consumers, replicating data between clusters, and more. - -|Name|Description|Default| -|---|---|---| -|advertisedListeners|Specify multiple advertised listeners for the broker.

    The format is `<listener_name>:pulsar://<host>:<port>`.

    If there are multiple listeners, separate them with commas.

    **Note**: do not use this configuration with `advertisedAddress` and `brokerServicePort`. If the value of this configuration is empty, the broker uses `advertisedAddress` and `brokerServicePort`|/| -|internalListenerName|Specify the internal listener name for the broker.

    **Note**: the listener name must be contained in `advertisedListeners`.

    If the value of this configuration is empty, the broker uses the first listener as the internal listener.|/| -|authenticateOriginalAuthData| If this flag is set to `true`, the broker authenticates the original Auth data; else it just accepts the originalPrincipal and authorizes it (if required). |false| -|enablePersistentTopics| Whether persistent topics are enabled on the broker |true| -|enableNonPersistentTopics| Whether non-persistent topics are enabled on the broker |true| -|functionsWorkerEnabled| Whether the Pulsar Functions worker service is enabled in the broker |false| -|exposePublisherStats|Whether to enable topic level metrics.|true| -|statsUpdateFrequencyInSecs||60| -|statsUpdateInitialDelayInSecs||60| -|zookeeperServers| Zookeeper quorum connection string || -|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300| -|configurationStoreServers| Configuration store connection string (as a comma-separated list) || -|brokerServicePort| Broker data port |6650| -|brokerServicePortTls| Broker data port for TLS |6651| -|webServicePort| Port to use to server HTTP request |8080| -|webServicePortTls| Port to use to server HTTPS request |8443| -|webSocketServiceEnabled| Enable the WebSocket API service in broker |false| -|webSocketNumIoThreads|The number of IO threads in Pulsar Client used in WebSocket proxy.|8| -|webSocketConnectionsPerBroker|The number of connections per Broker in Pulsar Client used in WebSocket proxy.|8| -|webSocketSessionIdleTimeoutMillis|Time in milliseconds that idle WebSocket session times out.|300000| -|webSocketMaxTextFrameSize|The maximum size of a text message during parsing in WebSocket proxy.|1048576| -|exposeTopicLevelMetricsInPrometheus|Whether to enable topic level metrics.|true| -|exposeConsumerLevelMetricsInPrometheus|Whether to enable consumer level metrics.|false| -|jvmGCMetricsLoggerClassName|Classname of Pluggable JVM GC metrics logger that can log GC specific metrics.|N/A| -|bindAddress| Hostname or IP address the service binds on, default is 0.0.0.0. |0.0.0.0| -|bindAddresses| Additional Hostname or IP addresses the service binds on: `listener_name:scheme://host:port,...`. || -|advertisedAddress| Hostname or IP address the service advertises to the outside world. If not set, the value of `InetAddress.getLocalHost().getHostName()` is used. || -|clusterName| Name of the cluster to which this broker belongs to || -|maxTenants|The maximum number of tenants that can be created in each Pulsar cluster. When the number of tenants reaches the threshold, the broker rejects the request of creating a new tenant. The default value 0 disables the check. |0| -| maxNamespacesPerTenant | The maximum number of namespaces that can be created in each tenant. When the number of namespaces reaches this threshold, the broker rejects the request of creating a new tenant. The default value 0 disables the check. |0| -|brokerDeduplicationEnabled| Sets the default behavior for message deduplication in the broker. If enabled, the broker will reject messages that were already stored in the topic. This setting can be overridden on a per-namespace basis. |false| -|brokerDeduplicationMaxNumberOfProducers| The maximum number of producers for which information will be stored for deduplication purposes. |10000| -|brokerDeduplicationEntriesInterval| The number of entries after which a deduplication informational snapshot is taken. 
A larger interval will lead to fewer snapshots being taken, though this would also lengthen the topic recovery time (the time required for entries published after the snapshot to be replayed). |1000|
-|brokerDeduplicationSnapshotIntervalSeconds| The time period after which a deduplication informational snapshot is taken. It runs simultaneously with `brokerDeduplicationEntriesInterval`. |120|
-|brokerDeduplicationProducerInactivityTimeoutMinutes| The time of inactivity (in minutes) after which the broker will discard deduplication information related to a disconnected producer. |360|
-|dispatchThrottlingRatePerReplicatorInMsg| The default messages per second dispatch throttling-limit for every replicator in replication. The value of `0` means disabling replication message dispatch-throttling| 0 |
-|dispatchThrottlingRatePerReplicatorInByte| The default bytes per second dispatch throttling-limit for every replicator in replication. The value of `0` means disabling replication message-byte dispatch-throttling| 0 |
-|zooKeeperSessionTimeoutMillis| Zookeeper session timeout in milliseconds |30000|
-|brokerShutdownTimeoutMs| Time to wait for broker graceful shutdown. After this time elapses, the process will be killed |60000|
-|skipBrokerShutdownOnOOM| Flag to skip broker shutdown when broker handles Out of memory error. |false|
-|backlogQuotaCheckEnabled| Enable backlog quota check. Enforces action on topic when the quota is reached |true|
-|backlogQuotaCheckIntervalInSeconds| How often to check for topics that have reached the quota |60|
-|backlogQuotaDefaultLimitBytes| The default per-topic backlog quota limit. Being less than 0 means no limitation. By default, it is -1. | -1 |
-|backlogQuotaDefaultRetentionPolicy|The default backlog quota retention policy. By default, it is `producer_request_hold`.
  - `producer_request_hold`: a policy which holds the producer's send request until the resource becomes available (or holding times out)
  - `producer_exception`: a policy which throws `javax.jms.ResourceAllocationException` to the producer
  - `consumer_backlog_eviction`: a policy which evicts the oldest message from the slowest consumer's backlog
  |producer_request_hold|
-|allowAutoTopicCreation| Enable topic auto creation if a new producer or consumer connected |true|
-|allowAutoTopicCreationType| The type of topic that is allowed to be automatically created. (partitioned/non-partitioned) |non-partitioned|
-|allowAutoSubscriptionCreation| Enable subscription auto creation if a new consumer connected |true|
-|defaultNumPartitions| The default number of partitions for topics that are automatically created, if `allowAutoTopicCreationType` is partitioned |1|
-|brokerDeleteInactiveTopicsEnabled| Enable the deletion of inactive topics. If topics are not consumed for some while, these inactive topics might be cleaned up. Deleting inactive topics is enabled by default. The default period is 1 minute. |true|
-|brokerDeleteInactiveTopicsFrequencySeconds| How often to check for inactive topics |60|
| brokerDeleteInactiveTopicsMode | Set the mode to delete inactive topics.
  - `delete_when_no_subscriptions`: delete the topic which has no subscriptions or active producers.
  - `delete_when_subscriptions_caught_up`: delete the topic whose subscriptions have no backlogs and which has no active producers or consumers.
  | `delete_when_no_subscriptions` |
| brokerDeleteInactiveTopicsMaxInactiveDurationSeconds | Set the maximum duration for inactive topics. If it is not specified, the `brokerDeleteInactiveTopicsFrequencySeconds` parameter is adopted. | N/A |
-|forceDeleteTenantAllowed| Allow you to delete a tenant forcefully. |false|
-|forceDeleteNamespaceAllowed| Allow you to delete a namespace forcefully. |false|
-|messageExpiryCheckIntervalInMinutes| The frequency of proactively checking and purging expired messages. |5|
-|brokerServiceCompactionMonitorIntervalInSeconds| Interval between checks to determine whether topics with compaction policies need compaction. |60|
brokerServiceCompactionThresholdInBytes|If the estimated backlog size is greater than this threshold, compaction is triggered.

    Set this threshold to 0 means disabling the compression check.|N/A -|delayedDeliveryEnabled| Whether to enable the delayed delivery for messages. If disabled, messages will be immediately delivered and there will be no tracking overhead.|true| -|delayedDeliveryTickTimeMillis|Control the tick time for retrying on delayed delivery, which affects the accuracy of the delivery time compared to the scheduled time. By default, it is 1 second.|1000| -|activeConsumerFailoverDelayTimeMillis| How long to delay rewinding cursor and dispatching messages when active consumer is changed. |1000| -|clientLibraryVersionCheckEnabled| Enable check for minimum allowed client library version |false| -|clientLibraryVersionCheckAllowUnversioned| Allow client libraries with no version information |true| -|statusFilePath| Path for the file used to determine the rotation status for the broker when responding to service discovery health checks || -|preferLaterVersions| If true, (and ModularLoadManagerImpl is being used), the load manager will attempt to use only brokers running the latest software version (to minimize impact to bundles) |false| -|maxNumPartitionsPerPartitionedTopic|Max number of partitions per partitioned topic. Use 0 or negative number to disable the check|0| -| maxSubscriptionsPerTopic | Maximum number of subscriptions allowed to subscribe to a topic. Once this limit reaches, the broker rejects new subscriptions until the number of subscriptions decreases. When the value is set to 0, the limit check is disabled. | 0 | -| maxProducersPerTopic | Maximum number of producers allowed to connect to a topic. Once this limit reaches, the broker rejects new producers until the number of connected producers decreases. When the value is set to 0, the limit check is disabled. | 0 | -| maxConsumersPerTopic | Maximum number of consumers allowed to connect to a topic. Once this limit reaches, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 | -| maxConsumersPerSubscription | Maximum number of consumers allowed to connect to a subscription. Once this limit reaches, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 | -|tlsEnabled|Deprecated - Use `webServicePortTls` and `brokerServicePortTls` instead. |false| -|tlsCertificateFilePath| Path for the TLS certificate file || -|tlsKeyFilePath| Path for the TLS private key file || -|tlsTrustCertsFilePath| Path for the trusted TLS certificate file. This cert is used to verify that any certs presented by connecting clients are signed by a certificate authority. If this verification fails, then the certs are untrusted and the connections are dropped. || -|tlsAllowInsecureConnection| Accept untrusted TLS certificate from client. If it is set to `true`, a client with a cert which cannot be verified with the 'tlsTrustCertsFilePath' cert will be allowed to connect to the server, though the cert will not be used for client authentication. |false| -|tlsProtocols|Specify the tls protocols the broker will use to negotiate during TLS Handshake. Multiple values can be specified, separated by commas. Example:- ```TLSv1.3```, ```TLSv1.2``` || -|tlsCiphers|Specify the tls cipher the broker will use to negotiate during TLS Handshake. Multiple values can be specified, separated by commas. 
Example:- ```TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256```|| -|tlsEnabledWithKeyStore| Enable TLS with KeyStore type configuration in broker |false| -|tlsProvider| TLS Provider for KeyStore type || -|tlsKeyStoreType| LS KeyStore type configuration in broker: JKS, PKCS12 |JKS| -|tlsKeyStore| TLS KeyStore path in broker || -|tlsKeyStorePassword| TLS KeyStore password for broker || -|brokerClientTlsEnabledWithKeyStore| Whether internal client use KeyStore type to authenticate with Pulsar brokers |false| -|brokerClientSslProvider| The TLS Provider used by internal client to authenticate with other Pulsar brokers || -|brokerClientTlsTrustStoreType| TLS TrustStore type configuration for internal client: JKS, PKCS12, used by the internal client to authenticate with Pulsar brokers |JKS| -|brokerClientTlsTrustStore| TLS TrustStore path for internal client, used by the internal client to authenticate with Pulsar brokers || -|brokerClientTlsTrustStorePassword| TLS TrustStore password for internal client, used by the internal client to authenticate with Pulsar brokers || -|brokerClientTlsCiphers| Specify the tls cipher the internal client will use to negotiate during TLS Handshake. (a comma-separated list of ciphers) e.g. [TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256]|| -|brokerClientTlsProtocols|Specify the tls protocols the broker will use to negotiate during TLS handshake. (a comma-separated list of protocol names). e.g. `TLSv1.3`, `TLSv1.2` || -|ttlDurationDefaultInSeconds|The default Time to Live (TTL) for namespaces if the TTL is not configured at namespace policies. When the value is set to `0`, TTL is disabled. By default, TTL is disabled. |0| -|tokenSecretKey| Configure the secret key to be used to validate auth tokens. The key can be specified like: `tokenSecretKey=data:;base64,xxxxxxxxx` or `tokenSecretKey=file:///my/secret.key`. Note: key file must be DER-encoded.|| -|tokenPublicKey| Configure the public key to be used to validate auth tokens. The key can be specified like: `tokenPublicKey=data:;base64,xxxxxxxxx` or `tokenPublicKey=file:///my/secret.key`. Note: key file must be DER-encoded.|| -|tokenPublicAlg| Configure the algorithm to be used to validate auth tokens. This can be any of the asymettric algorithms supported by Java JWT (https://github.com/jwtk/jjwt#signature-algorithms-keys) |RS256| -|tokenAuthClaim| Specify which of the token's claims will be used as the authentication "principal" or "role". The default "sub" claim will be used if this is left blank || -|tokenAudienceClaim| The token audience "claim" name, e.g. "aud", that will be used to get the audience from token. If not set, audience will not be verified. || -|tokenAudience| The token audience stands for this broker. The field `tokenAudienceClaim` of a valid token, need contains this. || -|maxUnackedMessagesPerConsumer| Max number of unacknowledged messages allowed to receive messages by a consumer on a shared subscription. Broker will stop sending messages to consumer once, this limit reaches until consumer starts acknowledging messages back. Using a value of 0, is disabling unackeMessage limit check and consumer can receive messages without any restriction |50000| -|maxUnackedMessagesPerSubscription| Max number of unacknowledged messages allowed per shared subscription. Broker will stop dispatching messages to all consumers of the subscription once this limit reaches until consumer starts acknowledging messages back and unack count reaches to limit/2. 
Using a value of 0 disables the unacked-message limit check, and the dispatcher can dispatch messages without any restriction |200000|
-|subscriptionRedeliveryTrackerEnabled| Enable subscription message redelivery tracker |true|
-|subscriptionExpirationTimeMinutes | How long (in minutes) after the last consumption before an inactive subscription is deleted.

    Setting this configuration to a value **greater than 0** deletes inactive subscriptions automatically.
    Setting this configuration to **0** does not delete inactive subscriptions automatically.

    Since this configuration takes effect on all topics, if there is even one topic whose subscriptions should not be deleted automatically, you need to set it to 0.
    Instead, you can set a subscription expiration time for each **namespace** using the [`pulsar-admin namespaces set-subscription-expiration-time options` command](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-subscription-expiration-time-em-). | 0 | -|maxConcurrentLookupRequest| Max number of concurrent lookup request broker allows to throttle heavy incoming lookup traffic |50000| -|maxConcurrentTopicLoadRequest| Max number of concurrent topic loading request broker allows to control number of zk-operations |5000| -|authenticationEnabled| Enable authentication |false| -|authenticationProviders| Authentication provider name list, which is comma separated list of class names || -| authenticationRefreshCheckSeconds | Interval of time for checking for expired authentication credentials | 60 | -|authorizationEnabled| Enforce authorization |false| -|superUserRoles| Role names that are treated as "super-user", meaning they will be able to do all admin operations and publish/consume from all topics || -|brokerClientAuthenticationPlugin| Authentication settings of the broker itself. Used when the broker connects to other brokers, either in same or other clusters || -|brokerClientAuthenticationParameters||| -|athenzDomainNames| Supported Athenz provider domain names(comma separated) for authentication || -|exposePreciseBacklogInPrometheus| Enable expose the precise backlog stats, set false to use published counter and consumed counter to calculate, this would be more efficient but may be inaccurate. |false| -|schemaRegistryStorageClassName|The schema storage implementation used by this broker.|org.apache.pulsar.broker.service.schema.BookkeeperSchemaStorageFactory| -|isSchemaValidationEnforced|Enforce schema validation on following cases: if a producer without a schema attempts to produce to a topic with schema, the producer will be failed to connect. PLEASE be carefully on using this, since non-java clients don't support schema. If this setting is enabled, then non-java clients fail to produce.|false| -| topicFencingTimeoutSeconds | If a topic remains fenced for a certain time period (in seconds), it is closed forcefully. If set to 0 or a negative number, the fenced topic is not closed. | 0 | -|offloadersDirectory|The directory for all the offloader implementations.|./offloaders| -|bookkeeperMetadataServiceUri| Metadata service uri that bookkeeper is used for loading corresponding metadata driver and resolving its metadata service location. This value can be fetched using `bookkeeper shell whatisinstanceid` command in BookKeeper cluster. For example: zk+hierarchical://localhost:2181/ledgers. The metadata service uri list can also be semicolon separated values like below: zk+hierarchical://zk1:2181;zk2:2181;zk3:2181/ledgers || -|bookkeeperClientAuthenticationPlugin| Authentication plugin to use when connecting to bookies || -|bookkeeperClientAuthenticationParametersName| BookKeeper auth plugin implementation specifics parameters name and values || -|bookkeeperClientAuthenticationParameters||| -|bookkeeperClientNumWorkerThreads| Number of BookKeeper client worker threads. 
Default is Runtime.getRuntime().availableProcessors() || -|bookkeeperClientTimeoutInSeconds| Timeout for BK add / read operations |30| -|bookkeeperClientSpeculativeReadTimeoutInMillis| Speculative reads are initiated if a read request doesn’t complete within a certain time Using a value of 0, is disabling the speculative reads |0| -|bookkeeperNumberOfChannelsPerBookie| Number of channels per bookie |16| -|bookkeeperClientHealthCheckEnabled| Enable bookies health check. Bookies that have more than the configured number of failure within the interval will be quarantined for some time. During this period, new ledgers won’t be created on these bookies |true| -|bookkeeperClientHealthCheckIntervalSeconds||60| -|bookkeeperClientHealthCheckErrorThresholdPerInterval||5| -|bookkeeperClientHealthCheckQuarantineTimeInSeconds ||1800| -|bookkeeperClientRackawarePolicyEnabled| Enable rack-aware bookie selection policy. BK will chose bookies from different racks when forming a new bookie ensemble |true| -|bookkeeperClientRegionawarePolicyEnabled| Enable region-aware bookie selection policy. BK will chose bookies from different regions and racks when forming a new bookie ensemble. If enabled, the value of bookkeeperClientRackawarePolicyEnabled is ignored |false| -|bookkeeperClientMinNumRacksPerWriteQuorum| Minimum number of racks per write quorum. BK rack-aware bookie selection policy will try to get bookies from at least 'bookkeeperClientMinNumRacksPerWriteQuorum' racks for a write quorum. |2| -|bookkeeperClientEnforceMinNumRacksPerWriteQuorum| Enforces rack-aware bookie selection policy to pick bookies from 'bookkeeperClientMinNumRacksPerWriteQuorum' racks for a writeQuorum. If BK can't find bookie then it would throw BKNotEnoughBookiesException instead of picking random one. |false| -|bookkeeperClientReorderReadSequenceEnabled| Enable/disable reordering read sequence on reading entries. |false| -|bookkeeperClientIsolationGroups| Enable bookie isolation by specifying a list of bookie groups to choose from. Any bookie outside the specified groups will not be used by the broker || -|bookkeeperClientSecondaryIsolationGroups| Enable bookie secondary-isolation group if bookkeeperClientIsolationGroups doesn't have enough bookie available. || -|bookkeeperClientMinAvailableBookiesInIsolationGroups| Minimum bookies that should be available as part of bookkeeperClientIsolationGroups else broker will include bookkeeperClientSecondaryIsolationGroups bookies in isolated list. || -|bookkeeperClientGetBookieInfoIntervalSeconds| Set the interval to periodically check bookie info |86400| -|bookkeeperClientGetBookieInfoRetryIntervalSeconds| Set the interval to retry a failed bookie info lookup |60| -|bookkeeperEnableStickyReads | Enable/disable having read operations for a ledger to be sticky to a single bookie. If this flag is enabled, the client will use one single bookie (by preference) to read all entries for a ledger. | true | -|managedLedgerDefaultEnsembleSize| Number of bookies to use when creating a ledger |2| -|managedLedgerDefaultWriteQuorum| Number of copies to store for each message |2| -|managedLedgerDefaultAckQuorum| Number of guaranteed copies (acks to wait before write is complete) |2| -|managedLedgerCacheSizeMB| Amount of memory to use for caching data payload in managed ledger. This memory is allocated from JVM direct memory and it’s shared across all the topics running in the same broker. 
By default, uses 1/5th of available direct memory || -|managedLedgerCacheCopyEntries| Whether we should make a copy of the entry payloads when inserting in cache| false| -|managedLedgerCacheEvictionWatermark| Threshold to which bring down the cache level when eviction is triggered |0.9| -|managedLedgerCacheEvictionFrequency| Configure the cache eviction frequency for the managed ledger cache (evictions/sec) | 100.0 | -|managedLedgerCacheEvictionTimeThresholdMillis| All entries that have stayed in cache for more than the configured time, will be evicted | 1000 | -|managedLedgerCursorBackloggedThreshold| Configure the threshold (in number of entries) from where a cursor should be considered 'backlogged' and thus should be set as inactive. | 1000| -|managedLedgerDefaultMarkDeleteRateLimit| Rate limit the amount of writes per second generated by consumer acking the messages |1.0| -|managedLedgerMaxEntriesPerLedger| The max number of entries to append to a ledger before triggering a rollover. A ledger rollover is triggered after the min rollover time has passed and one of the following conditions is true:
    • The max rollover time has been reached
    • The max entries have been written to the ledger
    • The max ledger size has been written to the ledger
    |50000| -|managedLedgerMinLedgerRolloverTimeMinutes| Minimum time between ledger rollover for a topic |10| -|managedLedgerMaxLedgerRolloverTimeMinutes| Maximum time before forcing a ledger rollover for a topic |240| -|managedLedgerCursorMaxEntriesPerLedger| Max number of entries to append to a cursor ledger |50000| -|managedLedgerCursorRolloverTimeInSeconds| Max time before triggering a rollover on a cursor ledger |14400| -|managedLedgerMaxUnackedRangesToPersist| Max number of "acknowledgment holes" that are going to be persistently stored. When acknowledging out of order, a consumer will leave holes that are supposed to be quickly filled by acking all the messages. The information of which messages are acknowledged is persisted by compressing in "ranges" of messages that were acknowledged. After the max number of ranges is reached, the information will only be tracked in memory and messages will be redelivered in case of crashes. |1000| -|autoSkipNonRecoverableData| Skip reading non-recoverable/unreadable data-ledger under managed-ledger’s list.It helps when data-ledgers gets corrupted at bookkeeper and managed-cursor is stuck at that ledger. |false| -|loadBalancerEnabled| Enable load balancer |true| -|loadBalancerPlacementStrategy| Strategy to assign a new bundle weightedRandomSelection || -|loadBalancerReportUpdateThresholdPercentage| Percentage of change to trigger load report update |10| -|loadBalancerReportUpdateMaxIntervalMinutes| maximum interval to update load report |15| -|loadBalancerHostUsageCheckIntervalMinutes| Frequency of report to collect |1| -|loadBalancerSheddingIntervalMinutes| Load shedding interval. Broker periodically checks whether some traffic should be offload from some over-loaded broker to other under-loaded brokers |30| -|loadBalancerSheddingGracePeriodMinutes| Prevent the same topics to be shed and moved to other broker more than once within this timeframe |30| -|loadBalancerBrokerMaxTopics| Usage threshold to allocate max number of topics to broker |50000| -|loadBalancerBrokerUnderloadedThresholdPercentage| Usage threshold to determine a broker as under-loaded |1| -|loadBalancerBrokerOverloadedThresholdPercentage| Usage threshold to determine a broker as over-loaded |85| -|loadBalancerResourceQuotaUpdateIntervalMinutes| Interval to update namespace bundle resource quota |15| -|loadBalancerBrokerComfortLoadLevelPercentage| Usage threshold to determine a broker is having just right level of load |65| -|loadBalancerAutoBundleSplitEnabled| enable/disable namespace bundle auto split |false| -|loadBalancerNamespaceBundleMaxTopics| maximum topics in a bundle, otherwise bundle split will be triggered |1000| -|loadBalancerNamespaceBundleMaxSessions| maximum sessions (producers + consumers) in a bundle, otherwise bundle split will be triggered |1000| -|loadBalancerNamespaceBundleMaxMsgRate| maximum msgRate (in + out) in a bundle, otherwise bundle split will be triggered |1000| -|loadBalancerNamespaceBundleMaxBandwidthMbytes| maximum bandwidth (in + out) in a bundle, otherwise bundle split will be triggered |100| -|loadBalancerNamespaceMaximumBundles| maximum number of bundles in a namespace |128| -|replicationMetricsEnabled| Enable replication metrics |true| -|replicationConnectionsPerBroker| Max number of connections to open for each broker in a remote cluster More connections host-to-host lead to better throughput over high-latency links. 
|16| -|replicationProducerQueueSize| Replicator producer queue size |1000| -|replicatorPrefix| Replicator prefix used for replicator producer name and cursor name pulsar.repl|| -|replicationTlsEnabled| Enable TLS when talking with other clusters to replicate messages |false| -|brokerServicePurgeInactiveFrequencyInSeconds|Deprecated. Use `brokerDeleteInactiveTopicsFrequencySeconds`.|60| -|transactionCoordinatorEnabled|Whether to enable transaction coordinator in broker.|true| -|transactionMetadataStoreProviderClassName| |org.apache.pulsar.transaction.coordinator.impl.InMemTransactionMetadataStoreProvider| -|defaultRetentionTimeInMinutes| Default message retention time. 0 means retention is disabled. -1 means data is not removed by time quota |0| -|defaultRetentionSizeInMB| Default retention size. 0 means retention is disabled. -1 means data is not removed by size quota |0| -|keepAliveIntervalSeconds| How often to check whether the connections are still alive |30| -|bootstrapNamespaces| The bootstrap name. | N/A | -|loadManagerClassName| Name of load manager to use |org.apache.pulsar.broker.loadbalance.impl.SimpleLoadManagerImpl| -|supportedNamespaceBundleSplitAlgorithms| Supported algorithms name for namespace bundle split |[range_equally_divide,topic_count_equally_divide]| -|defaultNamespaceBundleSplitAlgorithm| Default algorithm name for namespace bundle split |range_equally_divide| -|managedLedgerOffloadDriver| The directory for all the offloader implementations `offloadersDirectory=./offloaders`. Driver to use to offload old data to long term storage (Possible values: S3, aws-s3, google-cloud-storage). When using google-cloud-storage, Make sure both Google Cloud Storage and Google Cloud Storage JSON API are enabled for the project (check from Developers Console -> Api&auth -> APIs). || -|managedLedgerOffloadMaxThreads| Maximum number of thread pool threads for ledger offloading |2| -|managedLedgerOffloadPrefetchRounds|The maximum prefetch rounds for ledger reading for offloading.|1| -|managedLedgerUnackedRangesOpenCacheSetEnabled| Use Open Range-Set to cache unacknowledged messages |true| -|managedLedgerOffloadDeletionLagMs|Delay between a ledger being successfully offloaded to long term storage and the ledger being deleted from bookkeeper | 14400000| -|managedLedgerOffloadAutoTriggerSizeThresholdBytes|The number of bytes before triggering automatic offload to long term storage |-1 (disabled)| -|s3ManagedLedgerOffloadRegion| For Amazon S3 ledger offload, AWS region || -|s3ManagedLedgerOffloadBucket| For Amazon S3 ledger offload, Bucket to place offloaded ledger into || -|s3ManagedLedgerOffloadServiceEndpoint| For Amazon S3 ledger offload, Alternative endpoint to connect to (useful for testing) || -|s3ManagedLedgerOffloadMaxBlockSizeInBytes| For Amazon S3 ledger offload, Max block size in bytes. (64MB by default, 5MB minimum) |67108864| -|s3ManagedLedgerOffloadReadBufferSizeInBytes| For Amazon S3 ledger offload, Read buffer size in bytes (1MB by default) |1048576| -|gcsManagedLedgerOffloadRegion|For Google Cloud Storage ledger offload, region where offload bucket is located. Go to this page for more details: https://cloud.google.com/storage/docs/bucket-locations .|N/A| -|gcsManagedLedgerOffloadBucket|For Google Cloud Storage ledger offload, Bucket to place offloaded ledger into.|N/A| -|gcsManagedLedgerOffloadMaxBlockSizeInBytes|For Google Cloud Storage ledger offload, the maximum block size in bytes. 
(64MB by default, 5MB minimum)|67108864| -|gcsManagedLedgerOffloadReadBufferSizeInBytes|For Google Cloud Storage ledger offload, Read buffer size in bytes. (1MB by default)|1048576| -|gcsManagedLedgerOffloadServiceAccountKeyFile|For Google Cloud Storage, path to json file containing service account credentials. For more details, see the "Service Accounts" section of https://support.google.com/googleapi/answer/6158849 .|N/A| -|fileSystemProfilePath|For File System Storage, file system profile path.|../conf/filesystem_offload_core_site.xml| -|fileSystemURI|For File System Storage, file system uri.|N/A| -|s3ManagedLedgerOffloadRole| For Amazon S3 ledger offload, provide a role to assume before writing to s3 || -|s3ManagedLedgerOffloadRoleSessionName| For Amazon S3 ledger offload, provide a role session name when using a role |pulsar-s3-offload| -| acknowledgmentAtBatchIndexLevelEnabled | Enable or disable the batch index acknowledgement. | false | -|enableReplicatedSubscriptions|Whether to enable tracking of replicated subscriptions state across clusters.|true| -|replicatedSubscriptionsSnapshotFrequencyMillis|The frequency of snapshots for replicated subscriptions tracking.|1000| -|replicatedSubscriptionsSnapshotTimeoutSeconds|The timeout for building a consistent snapshot for tracking replicated subscriptions state.|30| -|replicatedSubscriptionsSnapshotMaxCachedPerSubscription|The maximum number of snapshot to be cached per subscription.|10| -|maxMessagePublishBufferSizeInMB|The maximum memory size for a broker to handle messages that are sent by producers. If the processing message size exceeds this value, the broker stops reading data from the connection. The processing messages refer to the messages that are sent to the broker but the broker has not sent response to the client. Usually the messages are waiting to be written to bookies. It is shared across all the topics running in the same broker. The value `-1` disables the memory limitation. By default, it is 50% of direct memory.|N/A| -|messagePublishBufferCheckIntervalInMillis|Interval between checks to see if message publish buffer size exceeds the maximum. Use `0` or negative number to disable the max publish buffer limiting.|100| -|retentionCheckIntervalInSeconds|Check between intervals to see if consumed ledgers need to be trimmed. Use 0 or negative number to disable the check.|120| -| maxMessageSize | Set the maximum size of a message. | 5242880 | -| preciseTopicPublishRateLimiterEnable | Enable precise topic publish rate limiting. | false | -| lazyCursorRecovery | Whether to recover cursors lazily when trying to recover a managed ledger backing a persistent topic. It can improve write availability of topics. The caveat is now when recovered ledger is ready to write we're not sure if all old consumers' last mark delete position(ack position) can be recovered or not. So user can make the trade off or have custom logic in application to checkpoint consumer state.| false | -|haProxyProtocolEnabled | Enable or disable the [HAProxy](http://www.haproxy.org/) protocol. |false| -| maxTopicsPerNamespace | The maximum number of persistent topics that can be created in the namespace. When the number of topics reaches this threshold, the broker rejects the request of creating a new topic, including the auto-created topics by the producer or consumer, until the number of connected consumers decreases. The default value 0 disables the check. 
| 0 | -|subscriptionTypesEnabled| Enable all subscription types, which are exclusive, shared, failover, and key_shared. | Exclusive, Shared, Failover, Key_Shared | -| managedLedgerInfoCompressionType | Compression type of managed ledger information.

    Available options are `NONE`, `LZ4`, `ZLIB`, `ZSTD`, and `SNAPPY`.

    If this value is `NONE` or invalid, the `managedLedgerInfo` is not compressed.

    **Note** that after enabling this configuration, if you want to downgrade a broker, you need to change the value to `NONE` and make sure all ledger metadata is saved without compression. | None |
-| additionalServlets | Additional servlet name.

    If you have multiple additional servlets, separate them by commas.

    For example, additionalServlet_1, additionalServlet_2 | N/A |
-| additionalServletDirectory | Location of the broker additional servlet NAR directory. | ./brokerAdditionalServlet |
-|narExtractionDirectory | The extraction directory of the NAR package.
    Available for Protocol Handler, Additional Servlets, Offloaders, Broker Interceptor. | System.getProperty("java.io.tmpdir") | - -## Client - -You can use the [`pulsar-client`](reference-cli-tools.md#pulsar-client) CLI tool to publish messages to and consume messages from Pulsar topics. You can use this tool in place of a client library. - -|Name|Description|Default| -|---|---|---| -|webServiceUrl| The web URL for the cluster. |http://localhost:8080/| -|brokerServiceUrl| The Pulsar protocol URL for the cluster. |pulsar://localhost:6650/| -|authPlugin| The authentication plugin. || -|authParams| The authentication parameters for the cluster, as a comma-separated string. || -|useTls| Whether to enforce the TLS authentication in the cluster. |false| -| tlsAllowInsecureConnection | Allow TLS connections to servers whose certificate cannot be verified to have been signed by a trusted certificate authority. | false | -| tlsEnableHostnameVerification | Whether the server hostname must match the common name of the certificate that is used by the server. | false | -|tlsTrustCertsFilePath||| -| useKeyStoreTls | Enable TLS with KeyStore type configuration in the broker. | false | -| tlsTrustStoreType | TLS TrustStore type configuration.
  • JKS
  • PKCS12
  2066. |JKS| -| tlsTrustStore | TLS TrustStore path. | | -| tlsTrustStorePassword | TLS TrustStore password. | | - - - - - - -## Log4j - -You can set the log level and configuration in the [log4j2.yaml](https://github.com/apache/pulsar/blob/d557e0aa286866363bc6261dec87790c055db1b0/conf/log4j2.yaml#L155) file. The following logging configuration parameters are available. - -|Name|Default| -|---|---| -|pulsar.root.logger| WARN,CONSOLE| -|pulsar.log.dir| logs| -|pulsar.log.file| pulsar.log| -|log4j.rootLogger| ${pulsar.root.logger}| -|log4j.appender.CONSOLE| org.apache.log4j.ConsoleAppender| -|log4j.appender.CONSOLE.Threshold| DEBUG| -|log4j.appender.CONSOLE.layout| org.apache.log4j.PatternLayout| -|log4j.appender.CONSOLE.layout.ConversionPattern| %d{ISO8601} - %-5p - [%t:%C{1}@%L] - %m%n| -|log4j.appender.ROLLINGFILE| org.apache.log4j.DailyRollingFileAppender| -|log4j.appender.ROLLINGFILE.Threshold| DEBUG| -|log4j.appender.ROLLINGFILE.File| ${pulsar.log.dir}/${pulsar.log.file}| -|log4j.appender.ROLLINGFILE.layout| org.apache.log4j.PatternLayout| -|log4j.appender.ROLLINGFILE.layout.ConversionPattern| %d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n| -|log4j.appender.TRACEFILE| org.apache.log4j.FileAppender| -|log4j.appender.TRACEFILE.Threshold| TRACE| -|log4j.appender.TRACEFILE.File| pulsar-trace.log| -|log4j.appender.TRACEFILE.layout| org.apache.log4j.PatternLayout| -|log4j.appender.TRACEFILE.layout.ConversionPattern| %d{ISO8601} - %-5p [%t:%C{1}@%L][%x] - %m%n| - -> Note: 'topic' in log4j2.appender is configurable. -> - If you want to append all logs to a single topic, set the same topic name. -> - If you want to append logs to different topics, you can set different topic names. - -## Log4j shell - -|Name|Default| -|---|---| -|bookkeeper.root.logger| ERROR,CONSOLE| -|log4j.rootLogger| ${bookkeeper.root.logger}| -|log4j.appender.CONSOLE| org.apache.log4j.ConsoleAppender| -|log4j.appender.CONSOLE.Threshold| DEBUG| -|log4j.appender.CONSOLE.layout| org.apache.log4j.PatternLayout| -|log4j.appender.CONSOLE.layout.ConversionPattern| %d{ABSOLUTE} %-5p %m%n| -|log4j.logger.org.apache.zookeeper| ERROR| -|log4j.logger.org.apache.bookkeeper| ERROR| -|log4j.logger.org.apache.bookkeeper.bookie.BookieShell| INFO| - - -## Standalone - -|Name|Description|Default| -|---|---|---| -|authenticateOriginalAuthData| If this flag is set to `true`, the broker authenticates the original Auth data; else it just accepts the originalPrincipal and authorizes it (if required). |false| -|zookeeperServers| The quorum connection string for local ZooKeeper || -|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300| -|configurationStoreServers| Configuration store connection string (as a comma-separated list) || -|brokerServicePort| The port on which the standalone broker listens for connections |6650| -|webServicePort| The port used by the standalone broker for HTTP requests |8080| -|bindAddress| The hostname or IP address on which the standalone service binds |0.0.0.0| -|bindAddresses| Additional Hostname or IP addresses the service binds on: `listener_name:scheme://host:port,...`. || -|advertisedAddress| The hostname or IP address that the standalone service advertises to the outside world. If not set, the value of `InetAddress.getLocalHost().getHostName()` is used. 
|| -| numAcceptorThreads | Number of threads to use for Netty Acceptor | 1 | -| numIOThreads | Number of threads to use for Netty IO | 2 * Runtime.getRuntime().availableProcessors() | -| numHttpServerThreads | Number of threads to use for HTTP requests processing | 2 * Runtime.getRuntime().availableProcessors()| -|isRunningStandalone|This flag controls features that are meant to be used when running in standalone mode.|N/A| -|clusterName| The name of the cluster that this broker belongs to. |standalone| -| failureDomainsEnabled | Enable cluster's failure-domain which can distribute brokers into logical region. | false | -|zooKeeperSessionTimeoutMillis| The ZooKeeper session timeout, in milliseconds. |30000| -|zooKeeperOperationTimeoutSeconds|ZooKeeper operation timeout in seconds.|30| -|brokerShutdownTimeoutMs| The time to wait for graceful broker shutdown. After this time elapses, the process will be killed. |60000| -|skipBrokerShutdownOnOOM| Flag to skip broker shutdown when broker handles Out of memory error. |false| -|backlogQuotaCheckEnabled| Enable the backlog quota check, which enforces a specified action when the quota is reached. |true| -|backlogQuotaCheckIntervalInSeconds| How often to check for topics that have reached the backlog quota. |60| -|backlogQuotaDefaultLimitBytes| The default per-topic backlog quota limit. Being less than 0 means no limitation. By default, it is -1. |-1| -|ttlDurationDefaultInSeconds|The default Time to Live (TTL) for namespaces if the TTL is not configured at namespace policies. When the value is set to `0`, TTL is disabled. By default, TTL is disabled. |0| -|brokerDeleteInactiveTopicsEnabled| Enable the deletion of inactive topics. If topics are not consumed for some while, these inactive topics might be cleaned up. Deleting inactive topics is enabled by default. The default period is 1 minute. |true| -|brokerDeleteInactiveTopicsFrequencySeconds| How often to check for inactive topics, in seconds. |60| -| maxPendingPublishRequestsPerConnection | Maximum pending publish requests per connection to avoid keeping large number of pending requests in memory | 1000| -|messageExpiryCheckIntervalInMinutes| How often to proactively check and purged expired messages. |5| -|activeConsumerFailoverDelayTimeMillis| How long to delay rewinding cursor and dispatching messages when active consumer is changed. |1000| -| subscriptionExpirationTimeMinutes | How long to delete inactive subscriptions from last consumption. When it is set to 0, inactive subscriptions are not deleted automatically | 0 | -| subscriptionRedeliveryTrackerEnabled | Enable subscription message redelivery tracker to send redelivery count to consumer. | true | -|subscriptionKeySharedEnable|Whether to enable the Key_Shared subscription.|true| -| subscriptionKeySharedUseConsistentHashing | In the Key_Shared subscription mode, with default AUTO_SPLIT mode, use splitting ranges or consistent hashing to reassign keys to new consumers. | false | -| subscriptionKeySharedConsistentHashingReplicaPoints | In the Key_Shared subscription mode, the number of points in the consistent-hashing ring. The greater the number, the more equal the assignment of keys to consumers. | 100 | -| subscriptionExpiryCheckIntervalInMinutes | How frequently to proactively check and purge expired subscription |5 | -| brokerDeduplicationEnabled | Set the default behavior for message deduplication in the broker. This can be overridden per-namespace. If it is enabled, the broker rejects messages that are already stored in the topic. 
| false | -| brokerDeduplicationMaxNumberOfProducers | Maximum number of producer information that it's going to be persisted for deduplication purposes | 10000 | -| brokerDeduplicationEntriesInterval | Number of entries after which a deduplication information snapshot is taken. A greater interval leads to less snapshots being taken though it would increase the topic recovery time, when the entries published after the snapshot need to be replayed. | 1000 | -| brokerDeduplicationProducerInactivityTimeoutMinutes | The time of inactivity (in minutes) after which the broker discards deduplication information related to a disconnected producer. | 360 | -| defaultNumberOfNamespaceBundles | When a namespace is created without specifying the number of bundles, this value is used as the default setting.| 4 | -|clientLibraryVersionCheckEnabled| Enable checks for minimum allowed client library version. |false| -|clientLibraryVersionCheckAllowUnversioned| Allow client libraries with no version information |true| -|statusFilePath| The path for the file used to determine the rotation status for the broker when responding to service discovery health checks |/usr/local/apache/htdocs| -|maxUnackedMessagesPerConsumer| The maximum number of unacknowledged messages allowed to be received by consumers on a shared subscription. The broker will stop sending messages to a consumer once this limit is reached or until the consumer begins acknowledging messages. A value of 0 disables the unacked message limit check and thus allows consumers to receive messages without any restrictions. |50000| -|maxUnackedMessagesPerSubscription| The same as above, except per subscription rather than per consumer. |200000| -| maxUnackedMessagesPerBroker | Maximum number of unacknowledged messages allowed per broker. Once this limit reaches, the broker stops dispatching messages to all shared subscriptions which has a higher number of unacknowledged messages until subscriptions start acknowledging messages back and unacknowledged messages count reaches to limit/2. When the value is set to 0, unacknowledged message limit check is disabled and broker does not block dispatchers. | 0 | -| maxUnackedMessagesPerSubscriptionOnBrokerBlocked | Once the broker reaches maxUnackedMessagesPerBroker limit, it blocks subscriptions which have higher unacknowledged messages than this percentage limit and subscription does not receive any new messages until that subscription acknowledges messages back. | 0.16 | -| unblockStuckSubscriptionEnabled|Broker periodically checks if subscription is stuck and unblock if flag is enabled.|false| -|zookeeperSessionExpiredPolicy|There are two policies when ZooKeeper session expired happens, "shutdown" and "reconnect". If it is set to "shutdown" policy, when ZooKeeper session expired happens, the broker is shutdown. If it is set to "reconnect" policy, the broker tries to reconnect to ZooKeeper server and re-register metadata to ZooKeeper. Note: the "reconnect" policy is an experiment feature.|shutdown| -| topicPublisherThrottlingTickTimeMillis | Tick time to schedule task that checks topic publish rate limiting across all topics. A lower value can improve accuracy while throttling publish but it uses more CPU to perform frequent check. (Disable publish throttling with value 0) | 10| -| brokerPublisherThrottlingTickTimeMillis | Tick time to schedule task that checks broker publish rate limiting across all topics. 
A lower value can improve accuracy while throttling publishing, but it uses more CPU to perform frequent checks. When the value is set to 0, publish throttling is disabled. |50 |
-| brokerPublisherThrottlingMaxMessageRate | Maximum rate (in 1 second) of messages allowed to publish for a broker if the message rate limiting is enabled. When the value is set to 0, message rate limiting is disabled. | 0|
-| brokerPublisherThrottlingMaxByteRate | Maximum rate (in 1 second) of bytes allowed to publish for a broker if the byte rate limiting is enabled. When the value is set to 0, the byte rate limiting is disabled. | 0 |
-|subscribeThrottlingRatePerConsumer|Too many subscribe requests from a consumer can cause the broker to rewind consumer cursors and load data from bookies, hence causing high network bandwidth usage. When a positive value is set, the broker throttles the subscribe requests for one consumer. Otherwise, the throttling is disabled. By default, throttling is disabled.|0|
-|subscribeRatePeriodPerConsumerInSecond|Rate period for {subscribeThrottlingRatePerConsumer}. By default, it is 30s.|30|
-| dispatchThrottlingRatePerTopicInMsg | Default message (per second) dispatch throttling-limit for every topic. When the value is set to 0, the default message dispatch throttling-limit is disabled. |0 |
-| dispatchThrottlingRatePerTopicInByte | Default byte (per second) dispatch throttling-limit for every topic. When the value is set to 0, the default byte dispatch throttling-limit is disabled. | 0|
-| dispatchThrottlingRateRelativeToPublishRate | Enable dispatch rate-limiting relative to the publish rate. | false |
-|dispatchThrottlingRatePerSubscriptionInMsg|The default message dispatch throttling-limit for a subscription. The value of 0 disables message dispatch-throttling.|0|
-|dispatchThrottlingRatePerSubscriptionInByte|The default message-byte dispatch throttling-limit for a subscription. The value of 0 disables message-byte dispatch-throttling.|0|
-| dispatchThrottlingOnNonBacklogConsumerEnabled | Enable dispatch-throttling for both caught-up consumers and consumers with backlogs. | true |
-|dispatcherMaxReadBatchSize|The maximum number of entries to read from BookKeeper. By default, it is 100 entries.|100|
-|dispatcherMaxReadSizeBytes|The maximum size in bytes of entries to read from BookKeeper. By default, it is 5MB.|5242880|
-|dispatcherMinReadBatchSize|The minimum number of entries to read from BookKeeper. By default, it is 1 entry. When an error occurs while reading entries from BookKeeper, the broker backs off the batch size to this minimum number.|1|
-|dispatcherMaxRoundRobinBatchSize|The maximum number of entries to dispatch for a shared subscription. By default, it is 20 entries.|20|
-| preciseDispatcherFlowControl | Precise dispatcher flow control based on the history message count of each entry. | false |
-| streamingDispatch | Whether to use the streaming read dispatcher. This can be useful when there is a huge backlog to drain: instead of reading in micro batches, the broker streams the read from BookKeeper to make the most of consumer capacity until it hits the BookKeeper read limit or the consumer processing limit, and consumer flow control can then be used to tune the speed. This feature is currently in preview and can change in a subsequent release. | false |
-| maxConcurrentLookupRequest | Maximum number of concurrent lookup requests that the broker allows, in order to throttle heavy incoming lookup traffic.
| 50000 | -| maxConcurrentTopicLoadRequest | Maximum number of concurrent topic loading request that the broker allows to control the number of zk-operations. | 5000 | -| maxConcurrentNonPersistentMessagePerConnection | Maximum number of concurrent non-persistent message that can be processed per connection. | 1000 | -| numWorkerThreadsForNonPersistentTopic | Number of worker threads to serve non-persistent topic. | 8 | -| enablePersistentTopics | Enable broker to load persistent topics. | true | -| enableNonPersistentTopics | Enable broker to load non-persistent topics. | true | -| maxNumPartitionsPerPartitionedTopic | Maximum number of partitions per partitioned topic. When the value is set to a negative number or is set to 0, the check is disabled. | 0 | -| maxSubscriptionsPerTopic | Maximum number of subscriptions allowed to subscribe to a topic. Once this limit reaches, the broker rejects new subscriptions until the number of subscriptions decreases. When the value is set to 0, the limit check is disabled. | 0 | -| maxProducersPerTopic | Maximum number of producers allowed to connect to a topic. Once this limit reaches, the broker rejects new producers until the number of connected producers decreases. When the value is set to 0, the limit check is disabled. | 0 | -| maxConsumersPerTopic | Maximum number of consumers allowed to connect to a topic. Once this limit reaches, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 | -| maxConsumersPerSubscription | Maximum number of consumers allowed to connect to a subscription. Once this limit reaches, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 | -| tlsCertRefreshCheckDurationSec | TLS certificate refresh duration in seconds. When the value is set to 0, check the TLS certificate on every new connection. | 300 | -| tlsCertificateFilePath | Path for the TLS certificate file. | | -| tlsKeyFilePath | Path for the TLS private key file. | | -| tlsTrustCertsFilePath | Path for the trusted TLS certificate file.| | -| tlsAllowInsecureConnection | Accept untrusted TLS certificate from the client. If it is set to true, a client with a certificate which cannot be verified with the 'tlsTrustCertsFilePath' certificate is allowed to connect to the server, though the certificate is not be used for client authentication. | false | -| tlsProtocols | Specify the TLS protocols the broker uses to negotiate during TLS handshake. | | -| tlsCiphers | Specify the TLS cipher the broker uses to negotiate during TLS Handshake. | | -| tlsRequireTrustedClientCertOnConnect | Trusted client certificates are required for to connect TLS. Reject the Connection if the client certificate is not trusted. In effect, this requires that all connecting clients perform TLS client authentication. | false | -| tlsEnabledWithKeyStore | Enable TLS with KeyStore type configuration in broker. | false | -| tlsProvider | TLS Provider for KeyStore type. | | -| tlsKeyStoreType | TLS KeyStore type configuration in the broker.
  • JKS
  • PKCS12
  |JKS|
-| tlsKeyStore | TLS KeyStore path in the broker. | |
-| tlsKeyStorePassword | TLS KeyStore password for the broker. | |
-| tlsTrustStoreType | TLS TrustStore type configuration in the broker
  • JKS
  • PKCS12
  |JKS|
-| tlsTrustStore | TLS TrustStore path in the broker. | |
-| tlsTrustStorePassword | TLS TrustStore password for the broker. | |
-| brokerClientTlsEnabledWithKeyStore | Configure whether the internal client uses the KeyStore type to authenticate with Pulsar brokers. | false |
-| brokerClientSslProvider | The TLS Provider used by the internal client to authenticate with other Pulsar brokers. | |
-| brokerClientTlsTrustStoreType | TLS TrustStore type configuration for the internal client to authenticate with Pulsar brokers.
  • JKS
  • PKCS12
  2075. | JKS | -| brokerClientTlsTrustStore | TLS TrustStore path for the internal client to authenticate with Pulsar brokers. | | -| brokerClientTlsTrustStorePassword | TLS TrustStore password for the internal client to authenticate with Pulsar brokers. | | -| brokerClientTlsCiphers | Specify the TLS cipher that the internal client uses to negotiate during TLS Handshake. | | -| brokerClientTlsProtocols | Specify the TLS protocols that the broker uses to negotiate during TLS handshake. | | -| systemTopicEnabled | Enable/Disable system topics. | false | -| topicLevelPoliciesEnabled | Enable or disable topic level policies. Topic level policies depends on the system topic. Please enable the system topic first. | false | -| topicFencingTimeoutSeconds | If a topic remains fenced for a certain time period (in seconds), it is closed forcefully. If set to 0 or a negative number, the fenced topic is not closed. | 0 | -| proxyRoles | Role names that are treated as "proxy roles". If the broker sees a request with role as proxyRoles, it demands to see a valid original principal. | | -|authenticationEnabled| Enable authentication for the broker. |false| -|authenticationProviders| A comma-separated list of class names for authentication providers. |false| -|authorizationEnabled| Enforce authorization in brokers. |false| -| authorizationProvider | Authorization provider fully qualified class-name. | org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider | -| authorizationAllowWildcardsMatching | Allow wildcard matching in authorization. Wildcard matching is applicable only when the wildcard-character (*) presents at the **first** or **last** position. | false | -|superUserRoles| Role names that are treated as "superusers." Superusers are authorized to perform all admin tasks. | | -|brokerClientAuthenticationPlugin| The authentication settings of the broker itself. Used when the broker connects to other brokers either in the same cluster or from other clusters. | | -|brokerClientAuthenticationParameters| The parameters that go along with the plugin specified using brokerClientAuthenticationPlugin. | | -|athenzDomainNames| Supported Athenz authentication provider domain names as a comma-separated list. | | -| anonymousUserRole | When this parameter is not empty, unauthenticated users perform as anonymousUserRole. | | -|tokenSecretKey| Configure the secret key to be used to validate auth tokens. The key can be specified like: `tokenSecretKey=data:;base64,xxxxxxxxx` or `tokenSecretKey=file:///my/secret.key`. Note: key file must be DER-encoded.|| -|tokenPublicKey| Configure the public key to be used to validate auth tokens. The key can be specified like: `tokenPublicKey=data:;base64,xxxxxxxxx` or `tokenPublicKey=file:///my/secret.key`. Note: key file must be DER-encoded.|| -|tokenAuthClaim| Specify the token claim that will be used as the authentication "principal" or "role". The "subject" field will be used if this is left blank || -|tokenAudienceClaim| The token audience "claim" name, e.g. "aud". It is used to get the audience from token. If it is not set, the audience is not verified. || -| tokenAudience | The token audience stands for this broker. The field `tokenAudienceClaim` of a valid token need contains this parameter.| | -|saslJaasClientAllowedIds|This is a regexp, which limits the range of possible ids which can connect to the Broker using SASL. 
By default, it is set to `SaslConstants.JAAS_CLIENT_ALLOWED_IDS_DEFAULT`, which is ".*pulsar.*", so only clients whose id contains 'pulsar' are allowed to connect.|N/A| -|saslJaasBrokerSectionName|Service Principal, for login context name. By default, it is set to `SaslConstants.JAAS_DEFAULT_BROKER_SECTION_NAME`, which is "Broker".|N/A| -|httpMaxRequestSize|If the value is larger than 0, it rejects all HTTP requests with bodies larged than the configured limit.|-1| -|exposePreciseBacklogInPrometheus| Enable expose the precise backlog stats, set false to use published counter and consumed counter to calculate, this would be more efficient but may be inaccurate. |false| -|bookkeeperMetadataServiceUri|Metadata service uri is what BookKeeper used for loading corresponding metadata driver and resolving its metadata service location. This value can be fetched using `bookkeeper shell whatisinstanceid` command in BookKeeper cluster. For example: `zk+hierarchical://localhost:2181/ledgers`. The metadata service uri list can also be semicolon separated values like: `zk+hierarchical://zk1:2181;zk2:2181;zk3:2181/ledgers`.|N/A| -|bookkeeperClientAuthenticationPlugin| Authentication plugin to be used when connecting to bookies (BookKeeper servers). || -|bookkeeperClientAuthenticationParametersName| BookKeeper authentication plugin implementation parameters and values. || -|bookkeeperClientAuthenticationParameters| Parameters associated with the bookkeeperClientAuthenticationParametersName || -|bookkeeperClientNumWorkerThreads| Number of BookKeeper client worker threads. Default is Runtime.getRuntime().availableProcessors() || -|bookkeeperClientTimeoutInSeconds| Timeout for BookKeeper add and read operations. |30| -|bookkeeperClientSpeculativeReadTimeoutInMillis| Speculative reads are initiated if a read request doesn’t complete within a certain time. A value of 0 disables speculative reads. |0| -|bookkeeperUseV2WireProtocol|Use older Bookkeeper wire protocol with bookie.|true| -|bookkeeperClientHealthCheckEnabled| Enable bookie health checks. |true| -|bookkeeperClientHealthCheckIntervalSeconds| The time interval, in seconds, at which health checks are performed. New ledgers are not created during health checks. |60| -|bookkeeperClientHealthCheckErrorThresholdPerInterval| Error threshold for health checks. |5| -|bookkeeperClientHealthCheckQuarantineTimeInSeconds| If bookies have more than the allowed number of failures within the time interval specified by bookkeeperClientHealthCheckIntervalSeconds |1800| -|bookkeeperClientGetBookieInfoIntervalSeconds|Specify options for the GetBookieInfo check. This setting helps ensure the list of bookies that are up to date on the brokers.|86400| -|bookkeeperClientGetBookieInfoRetryIntervalSeconds|Specify options for the GetBookieInfo check. This setting helps ensure the list of bookies that are up to date on the brokers.|60| -|bookkeeperClientRackawarePolicyEnabled| |true| -|bookkeeperClientRegionawarePolicyEnabled| |false| -|bookkeeperClientMinNumRacksPerWriteQuorum| |2| -|bookkeeperClientMinNumRacksPerWriteQuorum| |false| -|bookkeeperClientReorderReadSequenceEnabled| |false| -|bookkeeperClientIsolationGroups||| -|bookkeeperClientSecondaryIsolationGroups| Enable bookie secondary-isolation group if bookkeeperClientIsolationGroups doesn't have enough bookie available. 
|| -|bookkeeperClientMinAvailableBookiesInIsolationGroups| Minimum bookies that should be available as part of bookkeeperClientIsolationGroups else broker will include bookkeeperClientSecondaryIsolationGroups bookies in isolated list. || -| bookkeeperTLSProviderFactoryClass | Set the client security provider factory class name. | org.apache.bookkeeper.tls.TLSContextFactory | -| bookkeeperTLSClientAuthentication | Enable TLS authentication with bookie. | false | -| bookkeeperTLSKeyFileType | Supported type: PEM, JKS, PKCS12. | PEM | -| bookkeeperTLSTrustCertTypes | Supported type: PEM, JKS, PKCS12. | PEM | -| bookkeeperTLSKeyStorePasswordPath | Path to file containing keystore password, if the client keystore is password protected. | | -| bookkeeperTLSTrustStorePasswordPath | Path to file containing truststore password, if the client truststore is password protected. | | -| bookkeeperTLSKeyFilePath | Path for the TLS private key file. | | -| bookkeeperTLSCertificateFilePath | Path for the TLS certificate file. | | -| bookkeeperTLSTrustCertsFilePath | Path for the trusted TLS certificate file. | | -| bookkeeperTlsCertFilesRefreshDurationSeconds | Tls cert refresh duration at bookKeeper-client in seconds (0 to disable check). | | -| bookkeeperDiskWeightBasedPlacementEnabled | Enable/Disable disk weight based placement. | false | -| bookkeeperExplicitLacIntervalInMills | Set the interval to check the need for sending an explicit LAC. When the value is set to 0, no explicit LAC is sent. | 0 | -| bookkeeperClientExposeStatsToPrometheus | Expose BookKeeper client managed ledger stats to Prometheus. | false | -|managedLedgerDefaultEnsembleSize| |1| -|managedLedgerDefaultWriteQuorum| |1| -|managedLedgerDefaultAckQuorum| |1| -| managedLedgerDigestType | Default type of checksum to use when writing to BookKeeper. | CRC32C | -| managedLedgerNumSchedulerThreads | Number of threads to be used for managed ledger scheduled tasks. | 8 | -|managedLedgerCacheSizeMB| |N/A| -|managedLedgerCacheCopyEntries| Whether to copy the entry payloads when inserting in cache.| false| -|managedLedgerCacheEvictionWatermark| |0.9| -|managedLedgerCacheEvictionFrequency| Configure the cache eviction frequency for the managed ledger cache (evictions/sec) | 100.0 | -|managedLedgerCacheEvictionTimeThresholdMillis| All entries that have stayed in cache for more than the configured time, will be evicted | 1000 | -|managedLedgerCursorBackloggedThreshold| Configure the threshold (in number of entries) from where a cursor should be considered 'backlogged' and thus should be set as inactive. | 1000| -|managedLedgerUnackedRangesOpenCacheSetEnabled| Use Open Range-Set to cache unacknowledged messages |true| -|managedLedgerDefaultMarkDeleteRateLimit| |0.1| -|managedLedgerMaxEntriesPerLedger| |50000| -|managedLedgerMinLedgerRolloverTimeMinutes| |10| -|managedLedgerMaxLedgerRolloverTimeMinutes| |240| -|managedLedgerCursorMaxEntriesPerLedger| |50000| -|managedLedgerCursorRolloverTimeInSeconds| |14400| -| managedLedgerMaxSizePerLedgerMbytes | Maximum ledger size before triggering a rollover for a topic. | 2048 | -| managedLedgerMaxUnackedRangesToPersist | Maximum number of "acknowledgment holes" that are going to be persistently stored. When acknowledging out of order, a consumer leaves holes that are supposed to be quickly filled by acknowledging all the messages. The information of which messages are acknowledged is persisted by compressing in "ranges" of messages that were acknowledged. 
After the max number of ranges is reached, the information is only tracked in memory and messages are redelivered in case of crashes. | 10000 | -| managedLedgerMaxUnackedRangesToPersistInZooKeeper | Maximum number of "acknowledgment holes" that can be stored in Zookeeper. If the number of unacknowledged message range is higher than this limit, the broker persists unacknowledged ranges into bookkeeper to avoid additional data overhead into Zookeeper. | 1000 | -|autoSkipNonRecoverableData| |false| -| managedLedgerMetadataOperationsTimeoutSeconds | Operation timeout while updating managed-ledger metadata. | 60 | -| managedLedgerReadEntryTimeoutSeconds | Read entries timeout when the broker tries to read messages from BookKeeper. | 0 | -| managedLedgerAddEntryTimeoutSeconds | Add entry timeout when the broker tries to publish message to BookKeeper. | 0 | -| managedLedgerNewEntriesCheckDelayInMillis | New entries check delay for the cursor under the managed ledger. If no new messages in the topic, the cursor tries to check again after the delay time. For consumption latency sensitive scenario, you can set the value to a smaller value or 0. Of course, a smaller value may degrade consumption throughput.|10 ms| -| managedLedgerPrometheusStatsLatencyRolloverSeconds | Managed ledger prometheus stats latency rollover seconds. | 60 | -| managedLedgerTraceTaskExecution | Whether to trace managed ledger task execution time. | true | -|managedLedgerNewEntriesCheckDelayInMillis|New entries check delay for the cursor under the managed ledger. If no new messages in the topic, the cursor will try to check again after the delay time. For consumption latency sensitive scenario, it can be set to a smaller value or 0. A smaller value degrades consumption throughput. By default, it is 10ms.|10| -|loadBalancerEnabled| |false| -|loadBalancerPlacementStrategy| |weightedRandomSelection| -|loadBalancerReportUpdateThresholdPercentage| |10| -|loadBalancerReportUpdateMaxIntervalMinutes| |15| -|loadBalancerHostUsageCheckIntervalMinutes| |1| -|loadBalancerSheddingIntervalMinutes| |30| -|loadBalancerSheddingGracePeriodMinutes| |30| -|loadBalancerBrokerMaxTopics| |50000| -|loadBalancerBrokerUnderloadedThresholdPercentage| |1| -|loadBalancerBrokerOverloadedThresholdPercentage| |85| -|loadBalancerResourceQuotaUpdateIntervalMinutes| |15| -|loadBalancerBrokerComfortLoadLevelPercentage| |65| -|loadBalancerAutoBundleSplitEnabled| |false| -| loadBalancerAutoUnloadSplitBundlesEnabled | Enable/Disable automatic unloading of split bundles. | true | -|loadBalancerNamespaceBundleMaxTopics| |1000| -|loadBalancerNamespaceBundleMaxSessions| |1000| -|loadBalancerNamespaceBundleMaxMsgRate| |1000| -|loadBalancerNamespaceBundleMaxBandwidthMbytes| |100| -|loadBalancerNamespaceMaximumBundles| |128| -| loadBalancerBrokerThresholdShedderPercentage | The broker resource usage threshold. When the broker resource usage is greater than the pulsar cluster average resource usage, the threshold shedder is triggered to offload bundles from the broker. It only takes effect in the ThresholdShedder strategy. | 10 | -| loadBalancerHistoryResourcePercentage | The history usage when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 0.9 | -| loadBalancerBandwithInResourceWeight | The BandWithIn usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 | -| loadBalancerBandwithOutResourceWeight | The BandWithOut usage weight when calculating new resource usage. 
It only takes effect in the ThresholdShedder strategy. | 1.0 | -| loadBalancerCPUResourceWeight | The CPU usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 | -| loadBalancerMemoryResourceWeight | The heap memory usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 | -| loadBalancerDirectMemoryResourceWeight | The direct memory usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 | -| loadBalancerBundleUnloadMinThroughputThreshold | Bundle unload minimum throughput threshold. Avoid bundle unload frequently. It only takes effect in the ThresholdShedder strategy. | 10 | -|replicationMetricsEnabled| |true| -|replicationConnectionsPerBroker| |16| -|replicationProducerQueueSize| |1000| -| replicationPolicyCheckDurationSeconds | Duration to check replication policy to avoid replicator inconsistency due to missing ZooKeeper watch. When the value is set to 0, disable checking replication policy. | 600 | -|defaultRetentionTimeInMinutes| |0| -|defaultRetentionSizeInMB| |0| -|keepAliveIntervalSeconds| |30| -|haProxyProtocolEnabled | Enable or disable the [HAProxy](http://www.haproxy.org/) protocol. |false| -|bookieId | If you want to custom a bookie ID or use a dynamic network address for a bookie, you can set the `bookieId`.

    The bookie advertises itself using the `bookieId` rather than the `BookieSocketAddress` (`hostname:port` or `IP:port`).

    The `bookieId` is a non-empty string that can contain ASCII digits and letters ([a-zA-Z0-9]), colons, dashes, and dots.

    For more information about `bookieId`, see [here](http://bookkeeper.apache.org/bps/BP-41-bookieid/).|/| -| maxTopicsPerNamespace | The maximum number of persistent topics that can be created in the namespace. When the number of topics reaches this threshold, the broker rejects the request of creating a new topic, including the auto-created topics by the producer or consumer, until the number of connected consumers decreases. The default value 0 disables the check. | 0 | - -## WebSocket - -|Name|Description|Default| -|---|---|---| -|configurationStoreServers ||| -|zooKeeperSessionTimeoutMillis| |30000| -|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300| -|serviceUrl||| -|serviceUrlTls||| -|brokerServiceUrl||| -|brokerServiceUrlTls||| -|webServicePort||8080| -|webServicePortTls||8443| -|bindAddress||0.0.0.0| -|clusterName ||| -|authenticationEnabled||false| -|authenticationProviders||| -|authorizationEnabled||false| -|superUserRoles ||| -|brokerClientAuthenticationPlugin||| -|brokerClientAuthenticationParameters||| -|tlsEnabled||false| -|tlsAllowInsecureConnection||false| -|tlsCertificateFilePath||| -|tlsKeyFilePath ||| -|tlsTrustCertsFilePath||| - -## Pulsar proxy - -The [Pulsar proxy](concepts-architecture-overview.md#pulsar-proxy) can be configured in the `conf/proxy.conf` file. - - -|Name|Description|Default| -|---|---|---| -|forwardAuthorizationCredentials| Forward client authorization credentials to Broker for re-authorization, and make sure authentication is enabled for this to take effect. |false| -|zookeeperServers| The ZooKeeper quorum connection string (as a comma-separated list) || -|configurationStoreServers| Configuration store connection string (as a comma-separated list) || -| brokerServiceURL | The service URL pointing to the broker cluster. Must begin with `pulsar://`. | | -| brokerServiceURLTLS | The TLS service URL pointing to the broker cluster. Must begin with `pulsar+ssl://`. | | -| brokerWebServiceURL | The Web service URL pointing to the broker cluster | | -| brokerWebServiceURLTLS | The TLS Web service URL pointing to the broker cluster | | -| functionWorkerWebServiceURL | The Web service URL pointing to the function worker cluster. It is only configured when you setup function workers in a separate cluster. | | -| functionWorkerWebServiceURLTLS | The TLS Web service URL pointing to the function worker cluster. It is only configured when you setup function workers in a separate cluster. | | -|zookeeperSessionTimeoutMs| ZooKeeper session timeout (in milliseconds) |30000| -|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300| -|advertisedAddress|Hostname or IP address the service advertises to the outside world. If not set, the value of `InetAddress.getLocalHost().getHostname()` is used.|N/A| -|servicePort| The port to use for server binary Protobuf requests |6650| -|servicePortTls| The port to use to server binary Protobuf TLS requests |6651| -|statusFilePath| Path for the file used to determine the rotation status for the proxy instance when responding to service discovery health checks || -| proxyLogLevel | Proxy log level
  • 0: Do not log any TCP channel information.
  • 1: Parse and log any TCP channel information and command information without message body.
  • 2: Parse and log channel information, command information and message body.
  2079. | 0 | -|authenticationEnabled| Whether authentication is enabled for the Pulsar proxy |false| -|authenticateMetricsEndpoint| Whether the '/metrics' endpoint requires authentication. Defaults to true. 'authenticationEnabled' must also be set for this to take effect. |true| -|authenticationProviders| Authentication provider name list (a comma-separated list of class names) || -|authorizationEnabled| Whether authorization is enforced by the Pulsar proxy |false| -|authorizationProvider| Authorization provider as a fully qualified class name |org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider| -| anonymousUserRole | When this parameter is not empty, unauthenticated users perform as anonymousUserRole. | | -|brokerClientAuthenticationPlugin| The authentication plugin used by the Pulsar proxy to authenticate with Pulsar brokers || -|brokerClientAuthenticationParameters| The authentication parameters used by the Pulsar proxy to authenticate with Pulsar brokers || -|brokerClientTrustCertsFilePath| The path to trusted certificates used by the Pulsar proxy to authenticate with Pulsar brokers || -|superUserRoles| Role names that are treated as "super-users," meaning that they will be able to perform all admin || -|maxConcurrentInboundConnections| Max concurrent inbound connections. The proxy will reject requests beyond that. |10000| -|maxConcurrentLookupRequests| Max concurrent outbound connections. The proxy will error out requests beyond that. |50000| -|tlsEnabledInProxy| Deprecated - use `servicePortTls` and `webServicePortTls` instead. |false| -|tlsEnabledWithBroker| Whether TLS is enabled when communicating with Pulsar brokers. |false| -| tlsCertRefreshCheckDurationSec | TLS certificate refresh duration in seconds. If the value is set 0, check TLS certificate every new connection. | 300 | -|tlsCertificateFilePath| Path for the TLS certificate file || -|tlsKeyFilePath| Path for the TLS private key file || -|tlsTrustCertsFilePath| Path for the trusted TLS certificate pem file || -|tlsHostnameVerificationEnabled| Whether the hostname is validated when the proxy creates a TLS connection with brokers |false| -|tlsRequireTrustedClientCertOnConnect| Whether client certificates are required for TLS. Connections are rejected if the client certificate isn’t trusted. |false| -|tlsProtocols|Specify the tls protocols the broker will use to negotiate during TLS Handshake. Multiple values can be specified, separated by commas. Example:- ```TLSv1.3```, ```TLSv1.2``` || -|tlsCiphers|Specify the tls cipher the broker will use to negotiate during TLS Handshake. Multiple values can be specified, separated by commas. Example:- ```TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256```|| -| httpReverseProxyConfigs | HTTP directs to redirect to non-pulsar services | | -| httpOutputBufferSize | HTTP output buffer size. The amount of data that will be buffered for HTTP requests before it is flushed to the channel. A larger buffer size may result in higher HTTP throughput though it may take longer for the client to see data. If using HTTP streaming via the reverse proxy, this should be set to the minimum value (1) so that clients see the data as soon as possible. | 32768 | -| httpNumThreads | Number of threads to use for HTTP requests processing| 2 * Runtime.getRuntime().availableProcessors() | -|tokenSecretKey| Configure the secret key to be used to validate auth tokens. The key can be specified like: `tokenSecretKey=data:;base64,xxxxxxxxx` or `tokenSecretKey=file:///my/secret.key`. 
Note: key file must be DER-encoded.|| -|tokenPublicKey| Configure the public key to be used to validate auth tokens. The key can be specified like: `tokenPublicKey=data:;base64,xxxxxxxxx` or `tokenPublicKey=file:///my/secret.key`. Note: key file must be DER-encoded.|| -|tokenAuthClaim| Specify the token claim that will be used as the authentication "principal" or "role". The "subject" field will be used if this is left blank || -|tokenAudienceClaim| The token audience "claim" name, e.g. "aud". It is used to get the audience from token. If it is not set, the audience is not verified. || -| tokenAudience | The token audience stands for this broker. The field `tokenAudienceClaim` of a valid token need contains this parameter.| | -|haProxyProtocolEnabled | Enable or disable the [HAProxy](http://www.haproxy.org/) protocol. |false| - -## ZooKeeper - -ZooKeeper handles a broad range of essential configuration- and coordination-related tasks for Pulsar. The default configuration file for ZooKeeper is in the `conf/zookeeper.conf` file in your Pulsar installation. The following parameters are available: - - -|Name|Description|Default| -|---|---|---| -|tickTime| The tick is the basic unit of time in ZooKeeper, measured in milliseconds and used to regulate things like heartbeats and timeouts. tickTime is the length of a single tick. |2000| -|initLimit| The maximum time, in ticks, that the leader ZooKeeper server allows follower ZooKeeper servers to successfully connect and sync. The tick time is set in milliseconds using the tickTime parameter. |10| -|syncLimit| The maximum time, in ticks, that a follower ZooKeeper server is allowed to sync with other ZooKeeper servers. The tick time is set in milliseconds using the tickTime parameter. |5| -|dataDir| The location where ZooKeeper will store in-memory database snapshots as well as the transaction log of updates to the database. |data/zookeeper| -|clientPort| The port on which the ZooKeeper server will listen for connections. |2181| -|admin.enableServer|The port at which the admin listens.|true| -|admin.serverPort|The port at which the admin listens.|9990| -|autopurge.snapRetainCount| In ZooKeeper, auto purge determines how many recent snapshots of the database stored in dataDir to retain within the time interval specified by autopurge.purgeInterval (while deleting the rest). |3| -|autopurge.purgeInterval| The time interval, in hours, by which the ZooKeeper database purge task is triggered. Setting to a non-zero number will enable auto purge; setting to 0 will disable. Read this guide before enabling auto purge. |1| -|forceSync|Requires updates to be synced to media of the transaction log before finishing processing the update. If this option is set to 'no', ZooKeeper will not require updates to be synced to the media. WARNING: it's not recommended to run a production ZK cluster with `forceSync` disabled.|yes| -|maxClientCnxns| The maximum number of client connections. Increase this if you need to handle more ZooKeeper clients. |60| - - - - -In addition to the parameters in the table above, configuring ZooKeeper for Pulsar involves adding -a `server.N` line to the `conf/zookeeper.conf` file for each node in the ZooKeeper cluster, where `N` is the number of the ZooKeeper node. 
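Each node must also record its own ID locally: ZooKeeper reads it from a `myid` file in the node's `dataDir`. A minimal sketch, assuming the default `dataDir=data/zookeeper` from the table above:

```shell
# On the host running server.1, write that node's ZooKeeper ID to the myid file
mkdir -p data/zookeeper
echo 1 > data/zookeeper/myid
```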
Here's an example for a three-node ZooKeeper cluster: - -```properties - -server.1=zk1.us-west.example.com:2888:3888 -server.2=zk2.us-west.example.com:2888:3888 -server.3=zk3.us-west.example.com:2888:3888 - -``` - -> We strongly recommend consulting the [ZooKeeper Administrator's Guide](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html) for a more thorough and comprehensive introduction to ZooKeeper configuration diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/reference-connector-admin.md b/site2/website/versioned_docs/version-2.9.2-deprecated/reference-connector-admin.md deleted file mode 100644 index f1240bf8db17de..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/reference-connector-admin.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -id: reference-connector-admin -title: Connector Admin CLI -sidebar_label: "Connector Admin CLI" -original_id: reference-connector-admin ---- - -> **Important** -> -> For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/). -> - diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/reference-metrics.md b/site2/website/versioned_docs/version-2.9.2-deprecated/reference-metrics.md deleted file mode 100644 index e4e12d89ac5ae5..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/reference-metrics.md +++ /dev/null @@ -1,556 +0,0 @@ ---- -id: reference-metrics -title: Pulsar Metrics -sidebar_label: "Pulsar Metrics" -original_id: reference-metrics ---- - - - -Pulsar exposes the following metrics in Prometheus format. You can monitor your clusters with those metrics. - -* [ZooKeeper](#zookeeper) -* [BookKeeper](#bookkeeper) -* [Broker](#broker) -* [Pulsar Functions](#pulsar-functions) -* [Proxy](#proxy) -* [Pulsar SQL Worker](#pulsar-sql-worker) -* [Pulsar transaction](#pulsar-transaction) - -The following types of metrics are available: - -- [Counter](https://prometheus.io/docs/concepts/metric_types/#counter): a cumulative metric that represents a single monotonically increasing counter. The value increases by default. You can reset the value to zero or restart your cluster. -- [Gauge](https://prometheus.io/docs/concepts/metric_types/#gauge): a metric that represents a single numerical value that can arbitrarily go up and down. -- [Histogram](https://prometheus.io/docs/concepts/metric_types/#histogram): a histogram samples observations (usually things like request durations or response sizes) and counts them in configurable buckets. -- [Summary](https://prometheus.io/docs/concepts/metric_types/#summary): similar to a histogram, a summary samples observations (usually things like request durations and response sizes). While it also provides a total count of observations and a sum of all observed values, it calculates configurable quantiles over a sliding time window. - -## ZooKeeper - -The ZooKeeper metrics are exposed under "/metrics" at port `8000`. You can use a different port by configuring the `metricsProvider.httpPort` in conf/zookeeper.conf. - -### Server metrics - -| Name | Type | Description | -|---|---|---| -| znode_count | Gauge | The number of z-nodes stored. | -| approximate_data_size | Gauge | The approximate size of all of z-nodes stored. | -| num_alive_connections | Gauge | The number of currently lived connections. | -| watch_count | Gauge | The number of watchers registered. | -| ephemerals_count | Gauge | The number of ephemeral z-nodes. 
| - -### Request metrics - -| Name | Type | Description | -|---|---|---| -| request_commit_queued | Counter | The total number of requests already committed by a particular server. | -| updatelatency | Summary | The update requests latency calculated in milliseconds. | -| readlatency | Summary | The read requests latency calculated in milliseconds. | - -## BookKeeper - -The BookKeeper metrics are exposed under "/metrics" at port `8000`. You can change the port by updating `prometheusStatsHttpPort` -in the `bookkeeper.conf` configuration file. - -### Server metrics - -| Name | Type | Description | -|---|---|---| -| bookie_SERVER_STATUS | Gauge | The server status for bookie server.
    • 1: the bookie is running in writable mode.
    • 0: the bookie is running in readonly mode.
    | -| bookkeeper_server_ADD_ENTRY_count | Counter | The total number of ADD_ENTRY requests received at the bookie. The `success` label is used to distinguish successes and failures. | -| bookkeeper_server_READ_ENTRY_count | Counter | The total number of READ_ENTRY requests received at the bookie. The `success` label is used to distinguish successes and failures. | -| bookie_WRITE_BYTES | Counter | The total number of bytes written to the bookie. | -| bookie_READ_BYTES | Counter | The total number of bytes read from the bookie. | -| bookkeeper_server_ADD_ENTRY_REQUEST | Summary | The summary of request latency of ADD_ENTRY requests at the bookie. The `success` label is used to distinguish successes and failures. | -| bookkeeper_server_READ_ENTRY_REQUEST | Summary | The summary of request latency of READ_ENTRY requests at the bookie. The `success` label is used to distinguish successes and failures. | - -### Journal metrics - -| Name | Type | Description | -|---|---|---| -| bookie_journal_JOURNAL_SYNC_count | Counter | The total number of journal fsync operations happening at the bookie. The `success` label is used to distinguish successes and failures. | -| bookie_journal_JOURNAL_QUEUE_SIZE | Gauge | The total number of requests pending in the journal queue. | -| bookie_journal_JOURNAL_FORCE_WRITE_QUEUE_SIZE | Gauge | The total number of force write (fsync) requests pending in the force-write queue. | -| bookie_journal_JOURNAL_CB_QUEUE_SIZE | Gauge | The total number of callbacks pending in the callback queue. | -| bookie_journal_JOURNAL_ADD_ENTRY | Summary | The summary of request latency of adding entries to the journal. | -| bookie_journal_JOURNAL_SYNC | Summary | The summary of fsync latency of syncing data to the journal disk. | - -### Storage metrics - -| Name | Type | Description | -|---|---|---| -| bookie_ledgers_count | Gauge | The total number of ledgers stored in the bookie. | -| bookie_entries_count | Gauge | The total number of entries stored in the bookie. | -| bookie_write_cache_size | Gauge | The bookie write cache size (in bytes). | -| bookie_read_cache_size | Gauge | The bookie read cache size (in bytes). | -| bookie_DELETED_LEDGER_COUNT | Counter | The total number of ledgers deleted since the bookie has started. | -| bookie_ledger_writable_dirs | Gauge | The number of writable directories in the bookie. | - -## Broker - -The broker metrics are exposed under "/metrics" at port `8080`. You can change the port by updating `webServicePort` to a different port -in the `broker.conf` configuration file. - -All the metrics exposed by a broker are labelled with `cluster=${pulsar_cluster}`. The name of Pulsar cluster is the value of `${pulsar_cluster}`, which you have configured in the `broker.conf` file. 
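
For example, you can check these labels by scraping the metrics endpoint directly. A minimal sketch, assuming a broker running locally on the default `webServicePort=8080`; the sample output line is illustrative, not literal:

```shell
# Fetch the broker metrics and inspect the labels of one metric
curl -s http://localhost:8080/metrics | grep pulsar_topics_count
# Illustrative output:
# pulsar_topics_count{cluster="standalone", namespace="public/default"} 4
```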
- -The following metrics are available for broker: - -- [ZooKeeper](#zookeeper) - - [Server metrics](#server-metrics) - - [Request metrics](#request-metrics) -- [BookKeeper](#bookkeeper) - - [Server metrics](#server-metrics-1) - - [Journal metrics](#journal-metrics) - - [Storage metrics](#storage-metrics) -- [Broker](#broker) - - [Namespace metrics](#namespace-metrics) - - [Replication metrics](#replication-metrics) - - [Topic metrics](#topic-metrics) - - [Replication metrics](#replication-metrics-1) - - [ManagedLedgerCache metrics](#managedledgercache-metrics) - - [ManagedLedger metrics](#managedledger-metrics) - - [LoadBalancing metrics](#loadbalancing-metrics) - - [BundleUnloading metrics](#bundleunloading-metrics) - - [BundleSplit metrics](#bundlesplit-metrics) - - [Subscription metrics](#subscription-metrics) - - [Consumer metrics](#consumer-metrics) - - [Managed ledger bookie client metrics](#managed-ledger-bookie-client-metrics) - - [Token metrics](#token-metrics) - - [Authentication metrics](#authentication-metrics) - - [Connection metrics](#connection-metrics) -- [Pulsar Functions](#pulsar-functions) -- [Proxy](#proxy) -- [Pulsar SQL Worker](#pulsar-sql-worker) -- [Pulsar transaction](#pulsar-transaction) - -### Namespace metrics - -> Namespace metrics are only exposed when `exposeTopicLevelMetricsInPrometheus` is set to `false`. - -All the namespace metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you configured in `broker.conf`. -- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name. - -| Name | Type | Description | -|---|---|---| -| pulsar_topics_count | Gauge | The number of Pulsar topics of the namespace owned by this broker. | -| pulsar_subscriptions_count | Gauge | The number of Pulsar subscriptions of the namespace served by this broker. | -| pulsar_producers_count | Gauge | The number of active producers of the namespace connected to this broker. | -| pulsar_consumers_count | Gauge | The number of active consumers of the namespace connected to this broker. | -| pulsar_rate_in | Gauge | The total message rate of the namespace coming into this broker (messages/second). | -| pulsar_rate_out | Gauge | The total message rate of the namespace going out from this broker (messages/second). | -| pulsar_throughput_in | Gauge | The total throughput of the namespace coming into this broker (bytes/second). | -| pulsar_throughput_out | Gauge | The total throughput of the namespace going out from this broker (bytes/second). | -| pulsar_storage_size | Gauge | The total storage size of the topics in this namespace owned by this broker (bytes). | -| pulsar_storage_logical_size | Gauge | The storage size of topics in the namespace owned by the broker without replicas (in bytes). | -| pulsar_storage_backlog_size | Gauge | The total backlog size of the topics of this namespace owned by this broker (messages). | -| pulsar_storage_offloaded_size | Gauge | The total amount of the data in this namespace offloaded to the tiered storage (bytes). | -| pulsar_storage_write_rate | Gauge | The total message batches (entries) written to the storage for this namespace (message batches / second). | -| pulsar_storage_read_rate | Gauge | The total message batches (entries) read from the storage for this namespace (message batches / second). | -| pulsar_subscription_delayed | Gauge | The total message batches (entries) are delayed for dispatching. 
|
-| pulsar_storage_write_latency_le_* | Histogram | The entry rate of a namespace, where the storage write latency is smaller than a given threshold.
    Available thresholds:
    • pulsar_storage_write_latency_le_0_5: <= 0.5ms
    • pulsar_storage_write_latency_le_1: <= 1ms
    • pulsar_storage_write_latency_le_5: <= 5ms
    • pulsar_storage_write_latency_le_10: <= 10ms
    • pulsar_storage_write_latency_le_20: <= 20ms
    • pulsar_storage_write_latency_le_50: <= 50ms
    • pulsar_storage_write_latency_le_100: <= 100ms
    • pulsar_storage_write_latency_le_200: <= 200ms
    • pulsar_storage_write_latency_le_1000: <= 1s
    • pulsar_storage_write_latency_le_overflow: > 1s
    |
-| pulsar_entry_size_le_* | Histogram | The entry rate of a namespace, where the entry size is smaller than a given threshold.
    Available thresholds:
    • pulsar_entry_size_le_128: <= 128 bytes
    • pulsar_entry_size_le_512: <= 512 bytes
    • pulsar_entry_size_le_1_kb: <= 1 KB
    • pulsar_entry_size_le_2_kb: <= 2 KB
    • pulsar_entry_size_le_4_kb: <= 4 KB
    • pulsar_entry_size_le_16_kb: <= 16 KB
    • pulsar_entry_size_le_100_kb: <= 100 KB
    • pulsar_entry_size_le_1_mb: <= 1 MB
    • pulsar_entry_size_le_overflow: > 1 MB
    | -

#### Replication metrics

If a namespace is configured to be replicated among multiple Pulsar clusters, the corresponding replication metrics are also exposed when `replicationMetricsEnabled` is enabled.

All the replication metrics are also labelled with `remoteCluster=${pulsar_remote_cluster}`.

| Name | Type | Description |
|---|---|---|
| pulsar_replication_rate_in | Gauge | The total message rate of the namespace replicating from the remote cluster (messages/second). |
| pulsar_replication_rate_out | Gauge | The total message rate of the namespace replicating to the remote cluster (messages/second). |
| pulsar_replication_throughput_in | Gauge | The total throughput of the namespace replicating from the remote cluster (bytes/second). |
| pulsar_replication_throughput_out | Gauge | The total throughput of the namespace replicating to the remote cluster (bytes/second). |
| pulsar_replication_backlog | Gauge | The total backlog of the namespace replicating to the remote cluster (messages). |
| pulsar_replication_rate_expired | Gauge | The total rate of expired messages (messages/second). |
| pulsar_replication_connected_count | Gauge | The number of replication subscribers that are up and running to replicate to the remote cluster. |
| pulsar_replication_delay_in_seconds | Gauge | The time in seconds from when a message was produced to when it is about to be replicated. |


### Topic metrics

> Topic metrics are only exposed when `exposeTopicLevelMetricsInPrometheus` is set to `true`.

All the topic metrics are labelled with the following labels:

- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you configured in `broker.conf`.
- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.
- *topic*: `topic=${pulsar_topic}`. `${pulsar_topic}` is the topic name.

| Name | Type | Description |
|---|---|---|
| pulsar_subscriptions_count | Gauge | The number of Pulsar subscriptions of the topic served by this broker. |
| pulsar_producers_count | Gauge | The number of active producers of the topic connected to this broker. |
| pulsar_consumers_count | Gauge | The number of active consumers of the topic connected to this broker. |
| pulsar_rate_in | Gauge | The total message rate of the topic coming into this broker (messages/second). |
| pulsar_rate_out | Gauge | The total message rate of the topic going out from this broker (messages/second). |
| pulsar_throughput_in | Gauge | The total throughput of the topic coming into this broker (bytes/second). |
| pulsar_throughput_out | Gauge | The total throughput of the topic going out from this broker (bytes/second). |
| pulsar_storage_size | Gauge | The total storage size of this topic owned by this broker (bytes). |
| pulsar_storage_logical_size | Gauge | The storage size of this topic owned by this broker, excluding replicas (bytes). |
| pulsar_storage_backlog_size | Gauge | The total backlog size of this topic owned by this broker (messages). |
| pulsar_storage_offloaded_size | Gauge | The total amount of the data in this topic offloaded to the tiered storage (bytes). |
| pulsar_storage_backlog_quota_limit | Gauge | The backlog quota limit of this topic (bytes). |
| pulsar_storage_write_rate | Gauge | The total rate of message batches (entries) written to the storage for this topic (message batches/second). &#13;
|
-| pulsar_storage_read_rate | Gauge | The total rate of message batches (entries) read from the storage for this topic (message batches/second). |
-| pulsar_subscription_delayed | Gauge | The total number of message batches (entries) that are delayed for dispatching. |
-| pulsar_storage_write_latency_le_* | Histogram | The entry rate of a topic where the storage write latency is smaller than a given threshold.&#13;
    Available thresholds:
    • pulsar_storage_write_latency_le_0_5: <= 0.5ms
    • pulsar_storage_write_latency_le_1: <= 1ms
    • pulsar_storage_write_latency_le_5: <= 5ms
    • pulsar_storage_write_latency_le_10: <= 10ms
    • pulsar_storage_write_latency_le_20: <= 20ms
    • pulsar_storage_write_latency_le_50: <= 50ms
    • pulsar_storage_write_latency_le_100: <= 100ms
    • pulsar_storage_write_latency_le_200: <= 200ms
    • pulsar_storage_write_latency_le_1000: <= 1s
    • pulsar_storage_write_latency_le_overflow: > 1s
    |
-| pulsar_entry_size_le_* | Histogram | The entry rate of a topic where the entry size is smaller than a given threshold.&#13;
    Available thresholds:
    • pulsar_entry_size_le_128: <= 128 bytes
    • pulsar_entry_size_le_512: <= 512 bytes
    • pulsar_entry_size_le_1_kb: <= 1 KB
    • pulsar_entry_size_le_2_kb: <= 2 KB
    • pulsar_entry_size_le_4_kb: <= 4 KB
    • pulsar_entry_size_le_16_kb: <= 16 KB
    • pulsar_entry_size_le_100_kb: <= 100 KB
    • pulsar_entry_size_le_1_mb: <= 1 MB
    • pulsar_entry_size_le_overflow: > 1 MB
    |
-| pulsar_in_bytes_total | Counter | The total number of bytes received for this topic. |
-| pulsar_in_messages_total | Counter | The total number of messages received for this topic. |
-| pulsar_out_bytes_total | Counter | The total number of bytes read from this topic. |
-| pulsar_out_messages_total | Counter | The total number of messages read from this topic. |
-| pulsar_compaction_removed_event_count | Gauge | The total number of events removed by compaction. |
-| pulsar_compaction_succeed_count | Gauge | The total number of successful compactions. |
-| pulsar_compaction_failed_count | Gauge | The total number of failed compactions. |
-| pulsar_compaction_duration_time_in_mills | Gauge | The duration of compaction in milliseconds. |
-| pulsar_compaction_read_throughput | Gauge | The read throughput of compaction. |
-| pulsar_compaction_write_throughput | Gauge | The write throughput of compaction. |
-| pulsar_compaction_latency_le_* | Histogram | The compaction latency within a given threshold.&#13;
    Available thresholds:
    • pulsar_compaction_latency_le_0_5: <= 0.5ms
    • pulsar_compaction_latency_le_1: <= 1ms
    • pulsar_compaction_latency_le_5: <= 5ms
    • pulsar_compaction_latency_le_10: <= 10ms
    • pulsar_compaction_latency_le_20: <= 20ms
    • pulsar_compaction_latency_le_50: <= 50ms
    • pulsar_compaction_latency_le_100: <= 100ms
    • pulsar_compaction_latency_le_200: <= 200ms
    • pulsar_compaction_latency_le_1000: <= 1s
    • pulsar_compaction_latency_le_overflow: > 1s
    |
-| pulsar_compaction_compacted_entries_count | Gauge | The total number of the compacted entries. |
-| pulsar_compaction_compacted_entries_size | Gauge | The total size of the compacted entries. |

#### Replication metrics

If a namespace that a topic belongs to is configured to be replicated among multiple Pulsar clusters, the corresponding replication metrics are also exposed when `replicationMetricsEnabled` is enabled.

All the replication metrics are labelled with `remoteCluster=${pulsar_remote_cluster}`.

| Name | Type | Description |
|---|---|---|
| pulsar_replication_rate_in | Gauge | The total message rate of the topic replicating from the remote cluster (messages/second). |
| pulsar_replication_rate_out | Gauge | The total message rate of the topic replicating to the remote cluster (messages/second). |
| pulsar_replication_throughput_in | Gauge | The total throughput of the topic replicating from the remote cluster (bytes/second). |
| pulsar_replication_throughput_out | Gauge | The total throughput of the topic replicating to the remote cluster (bytes/second). |
| pulsar_replication_backlog | Gauge | The total backlog of the topic replicating to the remote cluster (messages). |

### ManagedLedgerCache metrics
All the ManagedLedgerCache metrics are labelled with the following labels:
- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file.

| Name | Type | Description |
| --- | --- | --- |
| pulsar_ml_cache_evictions | Gauge | The number of cache evictions during the last minute. |
| pulsar_ml_cache_hits_rate | Gauge | The number of cache hits per second. |
| pulsar_ml_cache_hits_throughput | Gauge | The amount of data retrieved from the cache (bytes/s). |
| pulsar_ml_cache_misses_rate | Gauge | The number of cache misses per second. |
| pulsar_ml_cache_misses_throughput | Gauge | The amount of data requested but not found in the cache (bytes/s). |
| pulsar_ml_cache_pool_active_allocations | Gauge | The number of currently active allocations in the direct arena. |
| pulsar_ml_cache_pool_active_allocations_huge | Gauge | The number of currently active huge allocations in the direct arena. |
| pulsar_ml_cache_pool_active_allocations_normal | Gauge | The number of currently active normal allocations in the direct arena. |
| pulsar_ml_cache_pool_active_allocations_small | Gauge | The number of currently active small allocations in the direct arena. |
| pulsar_ml_cache_pool_allocated | Gauge | The total allocated memory of chunk lists in the direct arena. |
| pulsar_ml_cache_pool_used | Gauge | The total used memory of chunk lists in the direct arena. |
| pulsar_ml_cache_used_size | Gauge | The size in bytes used to store the entry payloads. |
| pulsar_ml_count | Gauge | The number of currently opened managed ledgers. |

### ManagedLedger metrics
All the managedLedger metrics are labelled with the following labels:
- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file.
- namespace: namespace=${pulsar_namespace}. ${pulsar_namespace} is the namespace name.
- quantile: quantile=${quantile}. The quantile label is only used for `Histogram` type metrics, and represents the threshold of a given bucket. &#13;
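As a quick sanity check, the sketch below pulls these managed-ledger metrics straight from a broker's Prometheus endpoint and filters them by the labels above. The host, port, and namespace are illustrative placeholders (assuming the default web service port 8080), not values defined in this document; the full metric list follows.

```shell
# Scrape the broker's Prometheus endpoint and keep only the
# managed-ledger metrics of one namespace (placeholder values).
curl -s http://broker.example.com:8080/metrics/ | \
  grep '^pulsar_ml_' | \
  grep 'namespace="public/default"'
```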
-

| Name | Type | Description |
| --- | --- | --- |
| pulsar_ml_AddEntryBytesRate | Gauge | The bytes/s rate of messages added |
| pulsar_ml_AddEntryWithReplicasBytesRate | Gauge | The bytes/s rate of messages added with replicas |
| pulsar_ml_AddEntryErrors | Gauge | The number of addEntry requests that failed |
| pulsar_ml_AddEntryLatencyBuckets | Histogram | The latency of adding a ledger entry with a given quantile (threshold), including the time spent waiting in the queue on the broker side&#13;
    Available quantile:
    • quantile="0.0_0.5" is AddEntryLatency between (0.0ms, 0.5ms]
    • quantile="0.5_1.0" is AddEntryLatency between (0.5ms, 1.0ms]
    • quantile="1.0_5.0" is AddEntryLatency between (1ms, 5ms]
    • quantile="5.0_10.0" is AddEntryLatency between (5ms, 10ms]
    • quantile="10.0_20.0" is AddEntryLatency between (10ms, 20ms]
    • quantile="20.0_50.0" is AddEntryLatency between (20ms, 50ms]
    • quantile="50.0_100.0" is AddEntryLatency between (50ms, 100ms]
    • quantile="100.0_200.0" is AddEntryLatency between (100ms, 200ms]
    • quantile="200.0_1000.0" is AddEntryLatency between (200ms, 1s]
    | -| pulsar_ml_AddEntryLatencyBuckets_OVERFLOW | Gauge | The number of times the AddEntryLatency is longer than 1 second | -| pulsar_ml_AddEntryMessagesRate | Gauge | The msg/s rate of messages added | -| pulsar_ml_AddEntrySucceed | Gauge | The number of addEntry requests that succeeded | -| pulsar_ml_EntrySizeBuckets | Histogram | The added entry size of a ledger with a given quantile.
    Available quantile:
    • quantile="0.0_128.0" is EntrySize between (0byte, 128byte]
    • quantile="128.0_512.0" is EntrySize between (128byte, 512byte]
    • quantile="512.0_1024.0" is EntrySize between (512byte, 1KB]
    • quantile="1024.0_2048.0" is EntrySize between (1KB, 2KB]
    • quantile="2048.0_4096.0" is EntrySize between (2KB, 4KB]
    • quantile="4096.0_16384.0" is EntrySize between (4KB, 16KB]
    • quantile="16384.0_102400.0" is EntrySize between (16KB, 100KB]
    • quantile="102400.0_1232896.0" is EntrySize between (100KB, 1MB]
    | -| pulsar_ml_EntrySizeBuckets_OVERFLOW |Gauge | The number of times the EntrySize is larger than 1MB | -| pulsar_ml_LedgerSwitchLatencyBuckets | Histogram | The ledger switch latency with a given quantile.
    Available quantile:
    • quantile="0.0_0.5" is EntrySize between (0ms, 0.5ms]
    • quantile="0.5_1.0" is EntrySize between (0.5ms, 1ms]
    • quantile="1.0_5.0" is EntrySize between (1ms, 5ms]
    • quantile="5.0_10.0" is EntrySize between (5ms, 10ms]
    • quantile="10.0_20.0" is EntrySize between (10ms, 20ms]
    • quantile="20.0_50.0" is EntrySize between (20ms, 50ms]
    • quantile="50.0_100.0" is EntrySize between (50ms, 100ms]
    • quantile="100.0_200.0" is EntrySize between (100ms, 200ms]
    • quantile="200.0_1000.0" is EntrySize between (200ms, 1000ms]
    | -| pulsar_ml_LedgerSwitchLatencyBuckets_OVERFLOW | Gauge | The number of times the ledger switch latency is longer than 1 second | -| pulsar_ml_LedgerAddEntryLatencyBuckets | Histogram | The latency for bookie client to persist a ledger entry from broker to BookKeeper service with a given quantile (threshold).
    Available quantile:
    • quantile="0.0_0.5" is LedgerAddEntryLatency between (0.0ms, 0.5ms]
    • quantile="0.5_1.0" is LedgerAddEntryLatency between (0.5ms, 1.0ms]
    • quantile="1.0_5.0" is LedgerAddEntryLatency between (1ms, 5ms]
    • quantile="5.0_10.0" is LedgerAddEntryLatency between (5ms, 10ms]
    • quantile="10.0_20.0" is LedgerAddEntryLatency between (10ms, 20ms]
    • quantile="20.0_50.0" is LedgerAddEntryLatency between (20ms, 50ms]
    • quantile="50.0_100.0" is LedgerAddEntryLatency between (50ms, 100ms]
    • quantile="100.0_200.0" is LedgerAddEntryLatency between (100ms, 200ms]
    • quantile="200.0_1000.0" is LedgerAddEntryLatency between (200ms, 1s]
    |
-| pulsar_ml_LedgerAddEntryLatencyBuckets_OVERFLOW | Gauge | The number of times the LedgerAddEntryLatency is longer than 1 second |
-| pulsar_ml_MarkDeleteRate | Gauge | The rate of mark-delete ops/s |
-| pulsar_ml_NumberOfMessagesInBacklog | Gauge | The number of backlog messages for all the consumers |
-| pulsar_ml_ReadEntriesBytesRate | Gauge | The bytes/s rate of messages read |
-| pulsar_ml_ReadEntriesErrors | Gauge | The number of readEntries requests that failed |
-| pulsar_ml_ReadEntriesRate | Gauge | The msg/s rate of messages read |
-| pulsar_ml_ReadEntriesSucceeded | Gauge | The number of readEntries requests that succeeded |
-| pulsar_ml_StoredMessagesSize | Gauge | The total size of the messages in active ledgers (accounting for the multiple copies stored) |

### Managed cursor acknowledgment state

The acknowledgment state is first persisted to the ledger. If it fails to be persisted to the ledger, it is persisted to ZooKeeper. To track acknowledgment stats, you can configure the metrics for the managed cursor.

All the cursor acknowledgment state metrics are labelled with the following labels:

- namespace: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.

- ledger_name: `ledger_name=${pulsar_ledger_name}`. `${pulsar_ledger_name}` is the ledger name.

- cursor_name: `cursor_name=${pulsar_cursor_name}`. `${pulsar_cursor_name}` is the cursor name.

Name |Type |Description
|---|---|---
brk_ml_cursor_persistLedgerSucceed(namespace="", ledger_name="", cursor_name="")|Gauge|The number of acknowledgment states that are persisted to a ledger.|
brk_ml_cursor_persistLedgerErrors(namespace="", ledger_name="", cursor_name="")|Gauge|The number of errors that occurred when acknowledgment states failed to be persisted to the ledger.|
brk_ml_cursor_persistZookeeperSucceed(namespace="", ledger_name="", cursor_name="")|Gauge|The number of acknowledgment states that are persisted to ZooKeeper.
brk_ml_cursor_persistZookeeperErrors(namespace="", ledger_name="", cursor_name="")|Gauge|The number of errors that occurred when acknowledgment states failed to be persisted to ZooKeeper.
brk_ml_cursor_nonContiguousDeletedMessagesRange(namespace="", ledger_name="", cursor_name="")|Gauge|The number of non-contiguous deleted message ranges.
brk_ml_cursor_writeLedgerSize(namespace="", ledger_name="", cursor_name="")|Gauge|The size of data written to the ledger.
brk_ml_cursor_writeLedgerLogicalSize(namespace="", ledger_name="", cursor_name="")|Gauge|The size of data written to the ledger, excluding replicas.
brk_ml_cursor_readLedgerSize(namespace="", ledger_name="", cursor_name="")|Gauge|The size of data read from the ledger.

### LoadBalancing metrics
All the loadbalancing metrics are labelled with the following labels:
- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file.
- broker: broker=${broker}. ${broker} is the IP address of the broker.
- metric: metric="loadBalancing". &#13;
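As with the managed-ledger example above, a minimal sketch for pulling just these load-balancing gauges from one broker is shown below; the endpoint is an illustrative placeholder (assuming the default web service port 8080), and the `metric="loadBalancing"` label is what distinguishes them. The gauges themselves follow.

```shell
# List the load-balancing gauges reported by one broker
# (placeholder host and port).
curl -s http://broker.example.com:8080/metrics/ | \
  grep 'metric="loadBalancing"'
```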
-

| Name | Type | Description |
| --- | --- | --- |
| pulsar_lb_bandwidth_in_usage | Gauge | The broker inbound bandwidth usage |
| pulsar_lb_bandwidth_out_usage | Gauge | The broker outbound bandwidth usage |
| pulsar_lb_cpu_usage | Gauge | The broker CPU usage |
| pulsar_lb_directMemory_usage | Gauge | The broker process direct memory usage |
| pulsar_lb_memory_usage | Gauge | The broker process memory usage |

#### BundleUnloading metrics
All the bundleUnloading metrics are labelled with the following labels:
- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file.
- metric: metric="bundleUnloading".

| Name | Type | Description |
| --- | --- | --- |
| pulsar_lb_unload_broker_count | Counter | The number of brokers unloaded in this bundle unloading |
| pulsar_lb_unload_bundle_count | Counter | The number of bundles unloaded in this bundle unloading |

#### BundleSplit metrics
All the bundleSplit metrics are labelled with the following labels:
- cluster: cluster=${pulsar_cluster}. ${pulsar_cluster} is the cluster name that you have configured in the `broker.conf` file.
- metric: metric="bundlesSplit".

| Name | Type | Description |
| --- | --- | --- |
| pulsar_lb_bundles_split_count | Counter | The total count of bundle splits in this leader broker |

### Subscription metrics

> Subscription metrics are only exposed when `exposeTopicLevelMetricsInPrometheus` is set to `true`.

All the subscription metrics are labelled with the following labels:

- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.
- *topic*: `topic=${pulsar_topic}`. `${pulsar_topic}` is the topic name.
- *subscription*: `subscription=${subscription}`. `${subscription}` is the topic subscription name.

| Name | Type | Description |
|---|---|---|
| pulsar_subscription_back_log | Gauge | The total backlog of a subscription (messages). |
| pulsar_subscription_delayed | Gauge | The total number of messages that are delayed to be dispatched for a subscription (messages). |
| pulsar_subscription_msg_rate_redeliver | Gauge | The total message rate for messages being redelivered (messages/second). |
| pulsar_subscription_unacked_messages | Gauge | The total number of unacknowledged messages of a subscription (messages). |
| pulsar_subscription_blocked_on_unacked_messages | Gauge | Indicates whether a subscription is blocked on unacknowledged messages.&#13;
    • 1 means the subscription is blocked waiting for unacknowledged messages to be acked.&#13;
    • 0 means the subscription is not blocked waiting for unacknowledged messages to be acked.&#13;
    |
-| pulsar_subscription_msg_rate_out | Gauge | The total message dispatch rate for a subscription (messages/second). |
-| pulsar_subscription_msg_throughput_out | Gauge | The total message dispatch throughput for a subscription (bytes/second). |

### Consumer metrics

> Consumer metrics are only exposed when both `exposeTopicLevelMetricsInPrometheus` and `exposeConsumerLevelMetricsInPrometheus` are set to `true`.

All the consumer metrics are labelled with the following labels:

- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.
- *topic*: `topic=${pulsar_topic}`. `${pulsar_topic}` is the topic name.
- *subscription*: `subscription=${subscription}`. `${subscription}` is the topic subscription name.
- *consumer_name*: `consumer_name=${consumer_name}`. `${consumer_name}` is the topic consumer name.
- *consumer_id*: `consumer_id=${consumer_id}`. `${consumer_id}` is the topic consumer id.

| Name | Type | Description |
|---|---|---|
| pulsar_consumer_msg_rate_redeliver | Gauge | The total message rate for messages being redelivered (messages/second). |
| pulsar_consumer_unacked_messages | Gauge | The total number of unacknowledged messages of a consumer (messages). |
| pulsar_consumer_blocked_on_unacked_messages | Gauge | Indicates whether a consumer is blocked on unacknowledged messages.&#13;
    • 1 means the consumer is blocked waiting for unacknowledged messages to be acked.&#13;
    • 0 means the consumer is not blocked waiting for unacknowledged messages to be acked.&#13;
    |
-| pulsar_consumer_msg_rate_out | Gauge | The total message dispatch rate for a consumer (messages/second). |
-| pulsar_consumer_msg_throughput_out | Gauge | The total message dispatch throughput for a consumer (bytes/second). |
-| pulsar_consumer_available_permits | Gauge | The available permits for a consumer. |

### Managed ledger bookie client metrics

All the managed ledger bookie client metrics are labelled with the following labels:

- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.

| Name | Type | Description |
| --- | --- | --- |
| pulsar_managedLedger_client_bookkeeper_ml_scheduler_completed_tasks_* | Gauge | The number of tasks completed by the scheduler executor.&#13;
    The number of metrics is determined by the number of scheduler executor threads, which is configured by `managedLedgerNumSchedulerThreads` in `broker.conf`.&#13;
    | -| pulsar_managedLedger_client_bookkeeper_ml_scheduler_queue_* | Gauge | The number of tasks queued in the scheduler executor's queue.
    The number of metrics is determined by the number of scheduler executor threads, which is configured by `managedLedgerNumSchedulerThreads` in `broker.conf`.&#13;
    | -| pulsar_managedLedger_client_bookkeeper_ml_scheduler_total_tasks_* | Gauge | The total number of tasks the scheduler executor received.
    The number of metrics is determined by the number of scheduler executor threads, which is configured by `managedLedgerNumSchedulerThreads` in `broker.conf`.&#13;
    |
-| pulsar_managedLedger_client_bookkeeper_ml_scheduler_task_execution | Summary | The scheduler task execution latency calculated in milliseconds. |
-| pulsar_managedLedger_client_bookkeeper_ml_scheduler_task_queued | Summary | The scheduler task queued latency calculated in milliseconds. |

### Token metrics

All the token metrics are labelled with the following labels:

- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.

| Name | Type | Description |
|---|---|---|
| pulsar_expired_token_count | Counter | The number of expired tokens in Pulsar. |
| pulsar_expiring_token_minutes | Histogram | The remaining time of expiring tokens in minutes. |

### Authentication metrics

All the authentication metrics are labelled with the following labels:

- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
- *provider_name*: `provider_name=${provider_name}`. `${provider_name}` is the class name of the authentication provider.
- *auth_method*: `auth_method=${auth_method}`. `${auth_method}` is the authentication method of the authentication provider.
- *reason*: `reason=${reason}`. `${reason}` is the reason why the authentication operation failed. (This label is only for `pulsar_authentication_failures_count`.)

| Name | Type | Description |
|---|---|---|
| pulsar_authentication_success_count | Counter | The number of successful authentication operations. |
| pulsar_authentication_failures_count | Counter | The number of failed authentication operations. |

### Connection metrics

All the connection metrics are labelled with the following labels:

- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
- *broker*: `broker=${advertised_address}`. `${advertised_address}` is the advertised address of the broker.
- *metric*: `metric=${metric}`. `${metric}` is the connection metric collective name.

| Name | Type | Description |
|---|---|---|
| pulsar_active_connections | Gauge | The number of active connections. |
| pulsar_connection_created_total_count | Gauge | The total number of connections. |
| pulsar_connection_create_success_count | Gauge | The number of successfully created connections. |
| pulsar_connection_create_fail_count | Gauge | The number of failed connections. |
| pulsar_connection_closed_total_count | Gauge | The total number of closed connections. |
| pulsar_broker_throttled_connections | Gauge | The number of throttled connections. |
| pulsar_broker_throttled_connections_global_limit | Gauge | The number of throttled connections because of per-connection limit. |

## Pulsar Functions

All the Pulsar Functions metrics are labelled with the following labels:

- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.

| Name | Type | Description |
|---|---|---|
| pulsar_function_processed_successfully_total | Counter | The total number of messages processed successfully. |
| pulsar_function_processed_successfully_total_1min | Counter | The total number of messages processed successfully in the last 1 minute. |
| pulsar_function_system_exceptions_total | Counter | The total number of system exceptions. &#13;
| -| pulsar_function_system_exceptions_total_1min | Counter | The total number of system exceptions in the last 1 minute. | -| pulsar_function_user_exceptions_total | Counter | The total number of user exceptions. | -| pulsar_function_user_exceptions_total_1min | Counter | The total number of user exceptions in the last 1 minute. | -| pulsar_function_process_latency_ms | Summary | The process latency in milliseconds. | -| pulsar_function_process_latency_ms_1min | Summary | The process latency in milliseconds in the last 1 minute. | -| pulsar_function_last_invocation | Gauge | The timestamp of the last invocation of the function. | -| pulsar_function_received_total | Counter | The total number of messages received from source. | -| pulsar_function_received_total_1min | Counter | The total number of messages received from source in the last 1 minute. | -pulsar_function_user_metric_ | Summary|The user-defined metrics. - -## Connectors - -All the Pulsar connector metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file. -- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name. - -Connector metrics contain **source** metrics and **sink** metrics. - -- **Source** metrics - - | Name | Type | Description | - |---|---|---| - pulsar_source_written_total|Counter|The total number of records written to a Pulsar topic. - pulsar_source_written_total_1min|Counter|The total number of records written to a Pulsar topic in the last 1 minute. - pulsar_source_received_total|Counter|The total number of records received from source. - pulsar_source_received_total_1min|Counter|The total number of records received from source in the last 1 minute. - pulsar_source_last_invocation|Gauge|The timestamp of the last invocation of the source. - pulsar_source_source_exception|Gauge|The exception from a source. - pulsar_source_source_exceptions_total|Counter|The total number of source exceptions. - pulsar_source_source_exceptions_total_1min |Counter|The total number of source exceptions in the last 1 minute. - pulsar_source_system_exception|Gauge|The exception from system code. - pulsar_source_system_exceptions_total|Counter|The total number of system exceptions. - pulsar_source_system_exceptions_total_1min|Counter|The total number of system exceptions in the last 1 minute. - pulsar_source_user_metric_ | Summary|The user-defined metrics. - -- **Sink** metrics - - | Name | Type | Description | - |---|---|---| - pulsar_sink_written_total|Counter| The total number of records processed by a sink. - pulsar_sink_written_total_1min|Counter| The total number of records processed by a sink in the last 1 minute. - pulsar_sink_received_total_1min|Counter| The total number of messages that a sink has received from Pulsar topics in the last 1 minute. - pulsar_sink_received_total|Counter| The total number of records that a sink has received from Pulsar topics. - pulsar_sink_last_invocation|Gauge|The timestamp of the last invocation of the sink. - pulsar_sink_sink_exception|Gauge|The exception from a sink. - pulsar_sink_sink_exceptions_total|Counter|The total number of sink exceptions. - pulsar_sink_sink_exceptions_total_1min |Counter|The total number of sink exceptions in the last 1 minute. - pulsar_sink_system_exception|Gauge|The exception from system code. - pulsar_sink_system_exceptions_total|Counter|The total number of system exceptions. 
-
  pulsar_sink_system_exceptions_total_1min|Counter|The total number of system exceptions in the last 1 minute.
  pulsar_sink_user_metric_ | Summary|The user-defined metrics.

## Proxy

All the proxy metrics are labelled with the following labels:

- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
- *kubernetes_pod_name*: `kubernetes_pod_name=${kubernetes_pod_name}`. `${kubernetes_pod_name}` is the Kubernetes pod name.

| Name | Type | Description |
|---|---|---|
| pulsar_proxy_active_connections | Gauge | Number of connections currently active in the proxy. |
| pulsar_proxy_new_connections | Counter | Counter of connections being opened in the proxy. |
| pulsar_proxy_rejected_connections | Counter | Counter for connections rejected due to throttling. |
| pulsar_proxy_binary_ops | Counter | Counter of proxy operations. |
| pulsar_proxy_binary_bytes | Counter | Counter of proxy bytes. |

## Pulsar SQL Worker

| Name | Type | Description |
|---|---|---|
| split_bytes_read | Counter | Number of bytes read from BookKeeper. |
| split_num_messages_deserialized | Counter | Number of messages deserialized. |
| split_num_record_deserialized | Counter | Number of records deserialized. |
| split_bytes_read_per_query | Summary | Total number of bytes read per query. |
| split_entry_deserialize_time | Summary | Time spent on deserializing entries. |
| split_entry_deserialize_time_per_query | Summary | Time spent on deserializing entries per query. |
| split_entry_queue_dequeue_wait_time | Summary | Time spent waiting to get an entry from the entry queue because it is empty. |
| split_entry_queue_dequeue_wait_time_per_query | Summary | Total time spent waiting to get an entry from the entry queue per query. |
| split_message_queue_dequeue_wait_time_per_query | Summary | Time spent waiting to dequeue from the message queue because it is empty, per query. |
| split_message_queue_enqueue_wait_time | Summary | Time spent waiting for message queue enqueue because the message queue is full. |
| split_message_queue_enqueue_wait_time_per_query | Summary | Time spent waiting for message queue enqueue because the message queue is full, per query. |
| split_num_entries_per_batch | Summary | Number of entries per batch. |
| split_num_entries_per_query | Summary | Number of entries per query. |
| split_num_messages_deserialized_per_entry | Summary | Number of messages deserialized per entry. |
| split_num_messages_deserialized_per_query | Summary | Number of messages deserialized per query. |
| split_read_attempts | Summary | Number of read attempts (fail if queues are full). |
| split_read_attempts_per_query | Summary | Number of read attempts per query. |
| split_read_latency_per_batch | Summary | Latency of reads per batch. |
| split_read_latency_per_query | Summary | Total read latency per query. |
| split_record_deserialize_time | Summary | Time spent on deserializing messages to records. For example, Avro, JSON, and so on. |
| split_record_deserialize_time_per_query | Summary | Time spent on deserializing messages to records per query. |
| split_total_execution_time | Summary | The total execution time. |

## Pulsar transaction

All the transaction metrics are labelled with the following labels:

- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
- *coordinator_id*: `coordinator_id=${coordinator_id}`. &#13;
`${coordinator_id}` is the coordinator id. - -| Name | Type | Description | -|---|---|---| -| pulsar_txn_active_count | Gauge | Number of active transactions. | -| pulsar_txn_created_count | Counter | Number of created transactions. | -| pulsar_txn_committed_count | Counter | Number of committed transactions. | -| pulsar_txn_aborted_count | Counter | Number of aborted transactions of this coordinator. | -| pulsar_txn_timeout_count | Counter | Number of timeout transactions. | -| pulsar_txn_append_log_count | Counter | Number of append transaction logs. | -| pulsar_txn_execution_latency_le_* | Histogram | Transaction execution latency.
    Available latencies are as below:
    • latency="10" is TransactionExecutionLatency between (0ms, 10ms]
    • latency="20" is TransactionExecutionLatency between (10ms, 20ms]
    • latency="50" is TransactionExecutionLatency between (20ms, 50ms]
    • latency="100" is TransactionExecutionLatency between (50ms, 100ms]
    • latency="500" is TransactionExecutionLatency between (100ms, 500ms]
    • latency="1000" is TransactionExecutionLatency between (500ms, 1000ms]
    • latency="5000" is TransactionExecutionLatency between (1s, 5s]
    • latency="15000" is TransactionExecutionLatency between (5s, 15s]
    • latency="30000" is TransactionExecutionLatency between (15s, 30s]
    • latency="60000" is TransactionExecutionLatency between (30s, 60s]
    • latency="300000" is TransactionExecutionLatency between (1m,5m]
    • latency="1500000" is TransactionExecutionLatency between (5m,15m]
    • latency="3000000" is TransactionExecutionLatency between (15m,30m]
    • latency="overflow" is TransactionExecutionLatency between (30m,∞]
    | diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/reference-pulsar-admin.md b/site2/website/versioned_docs/version-2.9.2-deprecated/reference-pulsar-admin.md deleted file mode 100644 index e306289a8798a5..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/reference-pulsar-admin.md +++ /dev/null @@ -1,3297 +0,0 @@ ---- -id: reference-pulsar-admin -title: Pulsar admin CLI -sidebar_label: "Pulsar Admin CLI" -original_id: reference-pulsar-admin ---- - -> **Important** -> -> This page is deprecated and not updated anymore. For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/) - -The `pulsar-admin` tool enables you to manage Pulsar installations, including clusters, brokers, namespaces, tenants, and more. - -Usage - -```bash - -$ pulsar-admin command - -``` - -Commands -* `broker-stats` -* `brokers` -* `clusters` -* `functions` -* `functions-worker` -* `namespaces` -* `ns-isolation-policy` -* `sources` - - For more information, see [here](io-cli.md#sources) -* `sinks` - - For more information, see [here](io-cli.md#sinks) -* `topics` -* `tenants` -* `resource-quotas` -* `schemas` - -## `broker-stats` - -Operations to collect broker statistics - -```bash - -$ pulsar-admin broker-stats subcommand - -``` - -Subcommands -* `allocator-stats` -* `topics(destinations)` -* `mbeans` -* `monitoring-metrics` -* `load-report` - - -### `allocator-stats` - -Dump allocator stats - -Usage - -```bash - -$ pulsar-admin broker-stats allocator-stats allocator-name - -``` - -### `topics(destinations)` - -Dump topic stats - -Usage - -```bash - -$ pulsar-admin broker-stats topics options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`-i`, `--indent`|Indent JSON output|false| - -### `mbeans` - -Dump Mbean stats - -Usage - -```bash - -$ pulsar-admin broker-stats mbeans options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`-i`, `--indent`|Indent JSON output|false| - - -### `monitoring-metrics` - -Dump metrics for monitoring - -Usage - -```bash - -$ pulsar-admin broker-stats monitoring-metrics options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`-i`, `--indent`|Indent JSON output|false| - - -### `load-report` - -Dump broker load-report - -Usage - -```bash - -$ pulsar-admin broker-stats load-report - -``` - -## `brokers` - -Operations about brokers - -```bash - -$ pulsar-admin brokers subcommand - -``` - -Subcommands -* `list` -* `namespaces` -* `update-dynamic-config` -* `list-dynamic-config` -* `get-all-dynamic-config` -* `get-internal-config` -* `get-runtime-config` -* `healthcheck` - -### `list` -List active brokers of the cluster - -Usage - -```bash - -$ pulsar-admin brokers list cluster-name - -``` - -### `leader-broker` -Get the information of the leader broker - -Usage - -```bash - -$ pulsar-admin brokers leader-broker - -``` - -### `namespaces` -List namespaces owned by the broker - -Usage - -```bash - -$ pulsar-admin brokers namespaces cluster-name options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--url`|The URL for the broker|| - - -### `update-dynamic-config` -Update a broker's dynamic service configuration - -Usage - -```bash - -$ pulsar-admin brokers update-dynamic-config options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--config`|Service configuration parameter name|| -|`--value`|Value for the configuration parameter value specified using 
the `--config` flag|| - - -### `list-dynamic-config` -Get list of updatable configuration name - -Usage - -```bash - -$ pulsar-admin brokers list-dynamic-config - -``` - -### `delete-dynamic-config` -Delete dynamic-serviceConfiguration of broker - -Usage - -```bash - -$ pulsar-admin brokers delete-dynamic-config options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--config`|Service configuration parameter name|| - - -### `get-all-dynamic-config` -Get all overridden dynamic-configuration values - -Usage - -```bash - -$ pulsar-admin brokers get-all-dynamic-config - -``` - -### `get-internal-config` -Get internal configuration information - -Usage - -```bash - -$ pulsar-admin brokers get-internal-config - -``` - -### `get-runtime-config` -Get runtime configuration values - -Usage - -```bash - -$ pulsar-admin brokers get-runtime-config - -``` - -### `healthcheck` -Run a health check against the broker - -Usage - -```bash - -$ pulsar-admin brokers healthcheck - -``` - -## `clusters` -Operations about clusters - -Usage - -```bash - -$ pulsar-admin clusters subcommand - -``` - -Subcommands -* `get` -* `create` -* `update` -* `delete` -* `list` -* `update-peer-clusters` -* `get-peer-clusters` -* `get-failure-domain` -* `create-failure-domain` -* `update-failure-domain` -* `delete-failure-domain` -* `list-failure-domains` - - -### `get` -Get the configuration data for the specified cluster - -Usage - -```bash - -$ pulsar-admin clusters get cluster-name - -``` - -### `create` -Provisions a new cluster. This operation requires Pulsar super-user privileges. - -Usage - -```bash - -$ pulsar-admin clusters create cluster-name options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--broker-url`|The URL for the broker service.|| -|`--broker-url-secure`|The broker service URL for a secure connection|| -|`--url`|service-url|| -|`--url-secure`|service-url for secure connection|| - - -### `update` -Update the configuration for a cluster - -Usage - -```bash - -$ pulsar-admin clusters update cluster-name options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--broker-url`|The URL for the broker service.|| -|`--broker-url-secure`|The broker service URL for a secure connection|| -|`--url`|service-url|| -|`--url-secure`|service-url for secure connection|| - - -### `delete` -Deletes an existing cluster - -Usage - -```bash - -$ pulsar-admin clusters delete cluster-name - -``` - -### `list` -List the existing clusters - -Usage - -```bash - -$ pulsar-admin clusters list - -``` - -### `update-peer-clusters` -Update peer cluster names - -Usage - -```bash - -$ pulsar-admin clusters update-peer-clusters cluster-name options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--peer-clusters`|Comma separated peer cluster names (Pass empty string "" to delete list)|| - -### `get-peer-clusters` -Get list of peer clusters - -Usage - -```bash - -$ pulsar-admin clusters get-peer-clusters - -``` - -### `get-failure-domain` -Get the configuration brokers of a failure domain - -Usage - -```bash - -$ pulsar-admin clusters get-failure-domain cluster-name options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster|| - -### `create-failure-domain` -Create a new failure domain for a cluster (updates it if already created) - -Usage - -```bash - -$ pulsar-admin clusters create-failure-domain cluster-name options - -``` - -Options -|Flag|Description|Default| -|---|---|---| 
-|`--broker-list`|Comma separated broker list|| -|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster|| - -### `update-failure-domain` -Update failure domain for a cluster (creates a new one if not exist) - -Usage - -```bash - -$ pulsar-admin clusters update-failure-domain cluster-name options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--broker-list`|Comma separated broker list|| -|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster|| - -### `delete-failure-domain` -Delete an existing failure domain - -Usage - -```bash - -$ pulsar-admin clusters delete-failure-domain cluster-name options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster|| - -### `list-failure-domains` -List the existing failure domains for a cluster - -Usage - -```bash - -$ pulsar-admin clusters list-failure-domains cluster-name - -``` - -## `functions` - -A command-line interface for Pulsar Functions - -Usage - -```bash - -$ pulsar-admin functions subcommand - -``` - -Subcommands -* `localrun` -* `create` -* `delete` -* `update` -* `get` -* `restart` -* `stop` -* `start` -* `status` -* `stats` -* `list` -* `querystate` -* `putstate` -* `trigger` - - -### `localrun` -Run the Pulsar Function locally (rather than deploying it to the Pulsar cluster) - - -Usage - -```bash - -$ pulsar-admin functions localrun options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--cpu`|The cpu in cores that need to be allocated per function instance(applicable only to docker runtime)|| -|`--ram`|The ram in bytes that need to be allocated per function instance(applicable only to process/docker runtime)|| -|`--disk`|The disk in bytes that need to be allocated per function instance(applicable only to docker runtime)|| -|`--auto-ack`|Whether or not the framework will automatically acknowledge messages|| -|`--subs-name`|Pulsar source subscription name if user wants a specific subscription-name for input-topic consumer|| -|`--broker-service-url `|The URL of the Pulsar broker|| -|`--classname`|The function's class name|| -|`--custom-serde-inputs`|The map of input topics to SerDe class names (as a JSON string)|| -|`--custom-schema-inputs`|The map of input topics to Schema class names (as a JSON string)|| -|`--client-auth-params`|Client authentication param|| -|`--client-auth-plugin`|Client authentication plugin using which function-process can connect to broker|| -|`--function-config-file`|The path to a YAML config file specifying the function's configuration|| -|`--hostname-verification-enabled`|Enable hostname verification|false| -|`--instance-id-offset`|Start the instanceIds from this offset|0| -|`--inputs`|The function's input topic or topics (multiple topics can be specified as a comma-separated list)|| -|`--log-topic`|The topic to which the function's logs are produced|| -|`--jar`|Path to the jar file for the function (if the function is written in Java). 
It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--output`|The function's output topic (If none is specified, no output is written)|| -|`--output-serde-classname`|The SerDe class to be used for messages output by the function|| -|`--parallelism`|The function’s parallelism factor, i.e. the number of instances of the function to run|1| -|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the function. Possible Values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]|ATLEAST_ONCE| -|`--py`|Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--go`|Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--schema-type`|The builtin schema type or custom schema class name to be used for messages output by the function|| -|`--sliding-interval-count`|The number of messages after which the window slides|| -|`--sliding-interval-duration-ms`|The time duration after which the window slides|| -|`--state-storage-service-url`|The URL for the state storage service. By default, it it set to the service URL of the Apache BookKeeper. This service URL must be added manually when the Pulsar Function runs locally. || -|`--tenant`|The function’s tenant|| -|`--topics-pattern`|The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (supported for java fun only)|| -|`--user-config`|User-defined config key/values|| -|`--window-length-count`|The number of messages per window|| -|`--window-length-duration-ms`|The time duration of the window in milliseconds|| -|`--dead-letter-topic`|The topic where all messages which could not be processed successfully are sent|| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--max-message-retries`|How many times should we try to process a message before giving up|| -|`--retain-ordering`|Function consumes and processes messages in order|| -|`--retain-key-ordering`|Function consumes and processes messages in key order|| -|`--timeout-ms`|The message timeout in milliseconds|| -|`--tls-allow-insecure`|Allow insecure tls connection|false| -|`--tls-trust-cert-path`|The tls trust cert file path|| -|`--use-tls`|Use tls connection|false| -|`--producer-config`| The custom producer configuration (as a JSON string) | | - - -### `create` -Create a Pulsar Function in cluster mode (i.e. 
deploy it on a Pulsar cluster) - -Usage - -``` - -$ pulsar-admin functions create options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--cpu`|The cpu in cores that need to be allocated per function instance(applicable only to docker runtime)|| -|`--ram`|The ram in bytes that need to be allocated per function instance(applicable only to process/docker runtime)|| -|`--disk`|The disk in bytes that need to be allocated per function instance(applicable only to docker runtime)|| -|`--auto-ack`|Whether or not the framework will automatically acknowledge messages|| -|`--subs-name`|Pulsar source subscription name if user wants a specific subscription-name for input-topic consumer|| -|`--classname`|The function's class name|| -|`--custom-serde-inputs`|The map of input topics to SerDe class names (as a JSON string)|| -|`--custom-schema-inputs`|The map of input topics to Schema class names (as a JSON string)|| -|`--function-config-file`|The path to a YAML config file specifying the function's configuration|| -|`--inputs`|The function's input topic or topics (multiple topics can be specified as a comma-separated list)|| -|`--log-topic`|The topic to which the function's logs are produced|| -|`--jar`|Path to the jar file for the function (if the function is written in Java). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--name`|The function's name|| -|`--namespace`|The function’s namespace|| -|`--output`|The function's output topic (If none is specified, no output is written)|| -|`--output-serde-classname`|The SerDe class to be used for messages output by the function|| -|`--parallelism`|The function’s parallelism factor, i.e. the number of instances of the function to run|1| -|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the function. Possible Values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]|ATLEAST_ONCE| -|`--py`|Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--go`|Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--schema-type`|The builtin schema type or custom schema class name to be used for messages output by the function|| -|`--sliding-interval-count`|The number of messages after which the window slides|| -|`--sliding-interval-duration-ms`|The time duration after which the window slides|| -|`--tenant`|The function’s tenant|| -|`--topics-pattern`|The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. 
Add SerDe class name for a pattern in --custom-serde-inputs (supported for java fun only)|| -|`--user-config`|User-defined config key/values|| -|`--window-length-count`|The number of messages per window|| -|`--window-length-duration-ms`|The time duration of the window in milliseconds|| -|`--dead-letter-topic`|The topic where all messages which could not be processed|| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--max-message-retries`|How many times should we try to process a message before giving up|| -|`--retain-ordering`|Function consumes and processes messages in order|| -|`--retain-key-ordering`|Function consumes and processes messages in key order|| -|`--timeout-ms`|The message timeout in milliseconds|| -|`--producer-config`| The custom producer configuration (as a JSON string) | | - - -### `delete` -Delete a Pulsar Function that's running on a Pulsar cluster - -Usage - -```bash - -$ pulsar-admin functions delete options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `update` -Update a Pulsar Function that's been deployed to a Pulsar cluster - -Usage - -```bash - -$ pulsar-admin functions update options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--cpu`|The cpu in cores that need to be allocated per function instance(applicable only to docker runtime)|| -|`--ram`|The ram in bytes that need to be allocated per function instance(applicable only to process/docker runtime)|| -|`--disk`|The disk in bytes that need to be allocated per function instance(applicable only to docker runtime)|| -|`--auto-ack`|Whether or not the framework will automatically acknowledge messages|| -|`--subs-name`|Pulsar source subscription name if user wants a specific subscription-name for input-topic consumer|| -|`--classname`|The function's class name|| -|`--custom-serde-inputs`|The map of input topics to SerDe class names (as a JSON string)|| -|`--custom-schema-inputs`|The map of input topics to Schema class names (as a JSON string)|| -|`--function-config-file`|The path to a YAML config file specifying the function's configuration|| -|`--inputs`|The function's input topic or topics (multiple topics can be specified as a comma-separated list)|| -|`--log-topic`|The topic to which the function's logs are produced|| -|`--jar`|Path to the jar file for the function (if the function is written in Java). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--name`|The function's name|| -|`--namespace`|The function’s namespace|| -|`--output`|The function's output topic (If none is specified, no output is written)|| -|`--output-serde-classname`|The SerDe class to be used for messages output by the function|| -|`--parallelism`|The function’s parallelism factor, i.e. the number of instances of the function to run|1| -|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the function. Possible Values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]|ATLEAST_ONCE| -|`--py`|Path to the main Python file/Python Wheel file for the function (if the function is written in Python). 
It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--go`|Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--schema-type`|The builtin schema type or custom schema class name to be used for messages output by the function|| -|`--sliding-interval-count`|The number of messages after which the window slides|| -|`--sliding-interval-duration-ms`|The time duration after which the window slides|| -|`--tenant`|The function’s tenant|| -|`--topics-pattern`|The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (supported for java fun only)|| -|`--user-config`|User-defined config key/values|| -|`--window-length-count`|The number of messages per window|| -|`--window-length-duration-ms`|The time duration of the window in milliseconds|| -|`--dead-letter-topic`|The topic where all messages which could not be processed|| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--max-message-retries`|How many times should we try to process a message before giving up|| -|`--retain-ordering`|Function consumes and processes messages in order|| -|`--retain-key-ordering`|Function consumes and processes messages in key order|| -|`--timeout-ms`|The message timeout in milliseconds|| -|`--producer-config`| The custom producer configuration (as a JSON string) | | - - -### `get` -Fetch information about a Pulsar Function - -Usage - -```bash - -$ pulsar-admin functions get options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `restart` -Restart function instance - -Usage - -```bash - -$ pulsar-admin functions restart options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--instance-id`|The function instanceId (restart all instances if instance-id is not provided)|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `stop` -Stops function instance - -Usage - -```bash - -$ pulsar-admin functions stop options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--instance-id`|The function instanceId (stop all instances if instance-id is not provided)|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `start` -Starts a stopped function instance - -Usage - -```bash - -$ pulsar-admin functions start options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--instance-id`|The function instanceId (start all instances if instance-id is not provided)|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's 
### `status`
Check the current status of a Pulsar Function

Usage

```bash

$ pulsar-admin functions status options

```

Options
|Flag|Description|Default|
|---|---|---|
|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function||
|`--instance-id`|The function instanceId (get the status of all instances if instance-id is not provided)||
|`--name`|The function's name||
|`--namespace`|The function's namespace||
|`--tenant`|The function's tenant||

### `stats`
Get the current stats of a Pulsar Function

Usage

```bash

$ pulsar-admin functions stats options

```

Options
|Flag|Description|Default|
|---|---|---|
|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function||
|`--instance-id`|The function instanceId (get the stats of all instances if instance-id is not provided)||
|`--name`|The function's name||
|`--namespace`|The function's namespace||
|`--tenant`|The function's tenant||

### `list`
List all Pulsar Functions running under a specific tenant and namespace

Usage

```bash

$ pulsar-admin functions list options

```

Options
|Flag|Description|Default|
|---|---|---|
|`--namespace`|The function's namespace||
|`--tenant`|The function's tenant||

### `querystate`
Fetch the current state associated with a Pulsar Function running in cluster mode

Usage

```bash

$ pulsar-admin functions querystate options

```

Options
|Flag|Description|Default|
|---|---|---|
|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function||
|`-k`, `--key`|The key for the state you want to fetch||
|`--name`|The function's name||
|`--namespace`|The function's namespace||
|`--tenant`|The function's tenant||
|`-w`, `--watch`|Watch for changes in the value associated with a key for a Pulsar Function|false|

### `putstate`
Put a key/value pair to the state associated with a Pulsar Function

Usage

```bash

$ pulsar-admin functions putstate options

```

Options

|Flag|Description|Default|
|---|---|---|
|`--fqfn`|The Fully Qualified Function Name (FQFN) for the Pulsar Function||
|`--name`|The name of a Pulsar Function||
|`--namespace`|The namespace of a Pulsar Function||
|`--tenant`|The tenant of a Pulsar Function||
|`-s`, `--state`|The FunctionState to store (as a JSON string)||

### `trigger`
Trigger the specified Pulsar Function with a supplied value

Usage

```bash

$ pulsar-admin functions trigger options

```

Options
|Flag|Description|Default|
|---|---|---|
|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function||
|`--name`|The function's name||
|`--namespace`|The function's namespace||
|`--tenant`|The function's tenant||
|`--topic`|The specific topic consumed by the function into which you want to inject the data||
|`--trigger-file`|The path to the file that contains the data with which you'd like to trigger the function||
|`--trigger-value`|The value with which you want to trigger the function||
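Example

As a sketch, triggering a function with an inline value might look like this (the names are illustrative, and the value is assumed to match the function's expected input):

```bash

$ pulsar-admin functions trigger \
  --tenant public \
  --namespace default \
  --name my-function \
  --trigger-value "hello pulsar"

```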
## `functions-worker`
Operations to collect function-worker statistics

```bash

$ pulsar-admin functions-worker subcommand

```

Subcommands

* `function-stats`
* `get-cluster`
* `get-cluster-leader`
* `get-function-assignments`
* `monitoring-metrics`

### `function-stats`

Dump the stats of all functions running on this broker

Usage

```bash

$ pulsar-admin functions-worker function-stats

```

### `get-cluster`

Get all workers belonging to this cluster

Usage

```bash

$ pulsar-admin functions-worker get-cluster

```

### `get-cluster-leader`

Get the leader of the worker cluster

Usage

```bash

$ pulsar-admin functions-worker get-cluster-leader

```

### `get-function-assignments`

Get the assignments of the functions across the worker cluster

Usage

```bash

$ pulsar-admin functions-worker get-function-assignments

```

### `monitoring-metrics`

Dump metrics for monitoring

Usage

```bash

$ pulsar-admin functions-worker monitoring-metrics

```

## `namespaces`

Operations for managing namespaces

```bash

$ pulsar-admin namespaces subcommand

```

Subcommands
* `list`
* `topics`
* `policies`
* `create`
* `delete`
* `set-deduplication`
* `set-auto-topic-creation`
* `remove-auto-topic-creation`
* `set-auto-subscription-creation`
* `remove-auto-subscription-creation`
* `permissions`
* `grant-permission`
* `revoke-permission`
* `grant-subscription-permission`
* `revoke-subscription-permission`
* `set-clusters`
* `get-clusters`
* `get-backlog-quotas`
* `set-backlog-quota`
* `remove-backlog-quota`
* `get-persistence`
* `set-persistence`
* `get-message-ttl`
* `set-message-ttl`
* `remove-message-ttl`
* `get-anti-affinity-group`
* `set-anti-affinity-group`
* `get-anti-affinity-namespaces`
* `delete-anti-affinity-group`
* `get-retention`
* `set-retention`
* `unload`
* `split-bundle`
* `set-dispatch-rate`
* `get-dispatch-rate`
* `set-replicator-dispatch-rate`
* `get-replicator-dispatch-rate`
* `set-subscribe-rate`
* `get-subscribe-rate`
* `set-subscription-dispatch-rate`
* `get-subscription-dispatch-rate`
* `clear-backlog`
* `unsubscribe`
* `set-encryption-required`
* `set-delayed-delivery`
* `get-delayed-delivery`
* `set-subscription-auth-mode`
* `get-max-producers-per-topic`
* `set-max-producers-per-topic`
* `get-max-consumers-per-topic`
* `set-max-consumers-per-topic`
* `get-max-consumers-per-subscription`
* `set-max-consumers-per-subscription`
* `get-max-unacked-messages-per-subscription`
* `set-max-unacked-messages-per-subscription`
* `get-max-unacked-messages-per-consumer`
* `set-max-unacked-messages-per-consumer`
* `get-compaction-threshold`
* `set-compaction-threshold`
* `get-offload-threshold`
* `set-offload-threshold`
* `get-offload-deletion-lag`
* `set-offload-deletion-lag`
* `clear-offload-deletion-lag`
* `get-schema-autoupdate-strategy`
* `set-schema-autoupdate-strategy`
* `get-publish-rate`
* `set-publish-rate`
* `set-offload-policies`
* `get-offload-policies`
* `set-max-subscriptions-per-topic`
* `get-max-subscriptions-per-topic`
* `remove-max-subscriptions-per-topic`

### `list`
Get the namespaces for a tenant

Usage

```bash

$ pulsar-admin namespaces list tenant-name

```

### `topics`
Get the list of topics for a namespace

Usage

```bash

$ pulsar-admin namespaces topics tenant/namespace

```

### `policies`
Get the configuration policies of a namespace

Usage

```bash

$ pulsar-admin namespaces policies tenant/namespace

```

### `create`
Create a new namespace

Usage

```bash

$ pulsar-admin namespaces create tenant/namespace options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-b`, `--bundles`|The number of bundles to activate|0|
|`-c`, `--clusters`|List of clusters this namespace will be assigned to||
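Example

A hypothetical invocation, assuming a tenant `my-tenant` already exists and the namespace should be assigned to an illustrative cluster `us-west`:

```bash

$ pulsar-admin namespaces create my-tenant/my-ns \
  --clusters us-west \
  --bundles 16

```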
### `delete`
Deletes a namespace. The namespace must be empty.

Usage

```bash

$ pulsar-admin namespaces delete tenant/namespace

```

### `set-deduplication`
Enable or disable message deduplication on a namespace

Usage

```bash

$ pulsar-admin namespaces set-deduplication tenant/namespace options

```

Options
|Flag|Description|Default|
|---|---|---|
|`--enable`, `-e`|Enable message deduplication on the specified namespace|false|
|`--disable`, `-d`|Disable message deduplication on the specified namespace|false|

### `set-auto-topic-creation`
Enable or disable autoTopicCreation for a namespace, overriding broker settings

Usage

```bash

$ pulsar-admin namespaces set-auto-topic-creation tenant/namespace options

```

Options
|Flag|Description|Default|
|---|---|---|
|`--enable`, `-e`|Enable allowAutoTopicCreation on the namespace|false|
|`--disable`, `-d`|Disable allowAutoTopicCreation on the namespace|false|
|`--type`, `-t`|Type of topic to be auto-created. Possible values: (partitioned, non-partitioned)|non-partitioned|
|`--num-partitions`, `-n`|Default number of partitions for an auto-created topic, applicable to partitioned topics only||

### `remove-auto-topic-creation`
Remove the override of autoTopicCreation for a namespace

Usage

```bash

$ pulsar-admin namespaces remove-auto-topic-creation tenant/namespace

```

### `set-auto-subscription-creation`
Enable autoSubscriptionCreation for a namespace, overriding broker settings

Usage

```bash

$ pulsar-admin namespaces set-auto-subscription-creation tenant/namespace options

```

Options
|Flag|Description|Default|
|---|---|---|
|`--enable`, `-e`|Enable allowAutoSubscriptionCreation on the namespace|false|

### `remove-auto-subscription-creation`
Remove the override of autoSubscriptionCreation for a namespace

Usage

```bash

$ pulsar-admin namespaces remove-auto-subscription-creation tenant/namespace

```

### `permissions`
Get the permissions on a namespace

Usage

```bash

$ pulsar-admin namespaces permissions tenant/namespace

```

### `grant-permission`
Grant permissions on a namespace

Usage

```bash

$ pulsar-admin namespaces grant-permission tenant/namespace options

```

Options
|Flag|Description|Default|
|---|---|---|
|`--actions`|Actions to be granted (`produce` or `consume`)||
|`--role`|The client role to which to grant the permissions||

### `revoke-permission`
Revoke permissions on a namespace

Usage

```bash

$ pulsar-admin namespaces revoke-permission tenant/namespace options

```

Options
|Flag|Description|Default|
|---|---|---|
|`--role`|The client role from which to revoke the permissions||

### `grant-subscription-permission`
Grant permissions to access the subscription admin API

Usage

```bash

$ pulsar-admin namespaces grant-subscription-permission tenant/namespace options

```

Options
|Flag|Description|Default|
|---|---|---|
|`--roles`|The client roles to which to grant the permissions (comma-separated roles)||
|`--subscription`|The subscription name for which permission will be granted to the roles||

### `revoke-subscription-permission`
Revoke permissions to access the subscription admin API

Usage

```bash

$ pulsar-admin namespaces revoke-subscription-permission tenant/namespace options

```

Options
|Flag|Description|Default|
|---|---|---|
|`--role`|The client role from which to revoke the permissions||
|`--subscription`|The subscription name for which the permission will be revoked||
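Example

For instance, granting two illustrative roles access to a subscription's admin API might look like:

```bash

$ pulsar-admin namespaces grant-subscription-permission my-tenant/my-ns \
  --subscription my-sub \
  --roles role-1,role-2

```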
### `set-clusters`
Set replication clusters for a namespace

Usage

```bash

$ pulsar-admin namespaces set-clusters tenant/namespace options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-c`, `--clusters`|Replication cluster ID list (comma-separated values)||

### `get-clusters`
Get replication clusters for a namespace

Usage

```bash

$ pulsar-admin namespaces get-clusters tenant/namespace

```

### `get-backlog-quotas`
Get the backlog quota policies for a namespace

Usage

```bash

$ pulsar-admin namespaces get-backlog-quotas tenant/namespace

```

### `set-backlog-quota`
Set a backlog quota policy for a namespace

Usage

```bash

$ pulsar-admin namespaces set-backlog-quota tenant/namespace options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-l`, `--limit`|The backlog size limit (for example `10M` or `16G`)||
|`-lt`, `--limitTime`|Time limit in seconds; a non-positive number disables the time limit (for example, 3600 for 1 hour)||
|`-p`, `--policy`|The retention policy to enforce when the limit is reached. The valid options are: `producer_request_hold`, `producer_exception` or `consumer_backlog_eviction`||
|`-t`, `--type`|Backlog quota type to set. The valid options are: `destination_storage`, `message_age`|destination_storage|

Example

```bash

$ pulsar-admin namespaces set-backlog-quota my-tenant/my-ns \
--limit 2G \
--policy producer_request_hold

```

```bash

$ pulsar-admin namespaces set-backlog-quota my-tenant/my-ns \
--limitTime 3600 \
--policy producer_request_hold \
--type message_age

```

### `remove-backlog-quota`
Remove a backlog quota policy from a namespace

Usage

```bash

$ pulsar-admin namespaces remove-backlog-quota tenant/namespace

```

Options
|Flag|Description|Default|
|---|---|---|
|`-t`, `--type`|Backlog quota type to remove. The valid options are: `destination_storage`, `message_age`|destination_storage|

### `get-persistence`
Get the persistence policies for a namespace

Usage

```bash

$ pulsar-admin namespaces get-persistence tenant/namespace

```

### `set-persistence`
Set the persistence policies for a namespace

Usage

```bash

$ pulsar-admin namespaces set-persistence tenant/namespace options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-a`, `--bookkeeper-ack-quorum`|The number of acks (guaranteed copies) to wait for each entry|0|
|`-e`, `--bookkeeper-ensemble`|The number of bookies to use for a topic|0|
|`-w`, `--bookkeeper-write-quorum`|How many writes to make for each entry|0|
|`-r`, `--ml-mark-delete-max-rate`|Throttling rate of the mark-delete operation (0 means no throttle)||

### `get-message-ttl`
Get the message TTL for a namespace

Usage

```bash

$ pulsar-admin namespaces get-message-ttl tenant/namespace

```

### `set-message-ttl`
Set the message TTL for a namespace

Usage

```bash

$ pulsar-admin namespaces set-message-ttl tenant/namespace options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-ttl`, `--messageTTL`|Message TTL in seconds. When the value is set to `0`, TTL is disabled. TTL is disabled by default.|0|
### `remove-message-ttl`
Remove the message TTL for a namespace.

Usage

```bash

$ pulsar-admin namespaces remove-message-ttl tenant/namespace

```

### `get-anti-affinity-group`
Get the anti-affinity group name for a namespace

Usage

```bash

$ pulsar-admin namespaces get-anti-affinity-group tenant/namespace

```

### `set-anti-affinity-group`
Set the anti-affinity group name for a namespace

Usage

```bash

$ pulsar-admin namespaces set-anti-affinity-group tenant/namespace options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-g`, `--group`|Anti-affinity group name||

### `get-anti-affinity-namespaces`
Get the anti-affinity namespaces grouped under the given anti-affinity group name

Usage

```bash

$ pulsar-admin namespaces get-anti-affinity-namespaces options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-c`, `--cluster`|Cluster name||
|`-g`, `--group`|Anti-affinity group name||
|`-p`, `--tenant`|The tenant is only used for authorization. The client has to be an admin of one of the tenants to access this API||

### `delete-anti-affinity-group`
Remove the anti-affinity group name for a namespace

Usage

```bash

$ pulsar-admin namespaces delete-anti-affinity-group tenant/namespace

```

### `get-retention`
Get the retention policy that is applied to each topic within the specified namespace

Usage

```bash

$ pulsar-admin namespaces get-retention tenant/namespace

```

### `set-retention`
Set the retention policy for each topic within the specified namespace

Usage

```bash

$ pulsar-admin namespaces set-retention tenant/namespace options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-s`, `--size`|The retention size limit (for example 10M, 16G or 3T) for each topic in the namespace. 0 means no retention and -1 means infinite size retention||
|`-t`, `--time`|The retention time in minutes, hours, days, or weeks. Examples: 100m, 13h, 2d, 5w. 0 means no retention and -1 means infinite time retention||

### `unload`
Unload a namespace or namespace bundle from the current serving broker.

Usage

```bash

$ pulsar-admin namespaces unload tenant/namespace options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-b`, `--bundle`|{start-boundary}_{end-boundary} (e.g. 0x00000000_0xffffffff)||
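Example

A sketch of unloading a single bundle (the namespace and bundle range are illustrative):

```bash

$ pulsar-admin namespaces unload my-tenant/my-ns \
  --bundle 0x00000000_0xffffffff

```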
### `split-bundle`
Split a namespace bundle from the current serving broker

Usage

```bash

$ pulsar-admin namespaces split-bundle tenant/namespace options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-b`, `--bundle`|{start-boundary}_{end-boundary} (e.g. 0x00000000_0xffffffff)||
|`-u`, `--unload`|Unload the newly split bundles after splitting the old bundle|false|

### `set-dispatch-rate`
Set the message-dispatch-rate for all topics of the namespace

Usage

```bash

$ pulsar-admin namespaces set-dispatch-rate tenant/namespace options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-bd`, `--byte-dispatch-rate`|The byte dispatch rate (the default -1 overwrites the current value if not passed)|-1|
|`-dt`, `--dispatch-rate-period`|The dispatch rate period, in seconds (the default 1 second overwrites the current value if not passed)|1|
|`-md`, `--msg-dispatch-rate`|The message dispatch rate (the default -1 overwrites the current value if not passed)|-1|

### `get-dispatch-rate`
Get the configured message-dispatch-rate for all topics of the namespace (disabled if the value is < 0)

Usage

```bash

$ pulsar-admin namespaces get-dispatch-rate tenant/namespace

```

### `set-replicator-dispatch-rate`
Set the replicator message-dispatch-rate for all topics of the namespace

Usage

```bash

$ pulsar-admin namespaces set-replicator-dispatch-rate tenant/namespace options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-bd`, `--byte-dispatch-rate`|The byte dispatch rate (the default -1 overwrites the current value if not passed)|-1|
|`-dt`, `--dispatch-rate-period`|The dispatch rate period, in seconds (the default 1 second overwrites the current value if not passed)|1|
|`-md`, `--msg-dispatch-rate`|The message dispatch rate (the default -1 overwrites the current value if not passed)|-1|

### `get-replicator-dispatch-rate`
Get the configured replicator message-dispatch-rate for all topics of the namespace (disabled if the value is < 0)

Usage

```bash

$ pulsar-admin namespaces get-replicator-dispatch-rate tenant/namespace

```

### `set-subscribe-rate`
Set the subscribe-rate per consumer for all topics of the namespace

Usage

```bash

$ pulsar-admin namespaces set-subscribe-rate tenant/namespace options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-sr`, `--subscribe-rate`|The subscribe rate (the default -1 overwrites the current value if not passed)|-1|
|`-st`, `--subscribe-rate-period`|The subscribe rate period, in seconds (the default 30 seconds overwrites the current value if not passed)|30|

### `get-subscribe-rate`
Get the configured subscribe-rate per consumer for all topics of the namespace

Usage

```bash

$ pulsar-admin namespaces get-subscribe-rate tenant/namespace

```

### `set-subscription-dispatch-rate`
Set the subscription message-dispatch-rate for all subscriptions of the namespace

Usage

```bash

$ pulsar-admin namespaces set-subscription-dispatch-rate tenant/namespace options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-bd`, `--byte-dispatch-rate`|The byte dispatch rate (the default -1 overwrites the current value if not passed)|-1|
|`-dt`, `--dispatch-rate-period`|The dispatch rate period, in seconds (the default 1 second overwrites the current value if not passed)|1|
|`-md`, `--sub-msg-dispatch-rate`|The message dispatch rate (the default -1 overwrites the current value if not passed)|-1|

### `get-subscription-dispatch-rate`
Get the configured subscription message-dispatch-rate for all topics of the namespace (disabled if the value is < 0)

Usage

```bash

$ pulsar-admin namespaces get-subscription-dispatch-rate tenant/namespace

```
### `clear-backlog`
Clear the backlog for a namespace

Usage

```bash

$ pulsar-admin namespaces clear-backlog tenant/namespace options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-b`, `--bundle`|{start-boundary}_{end-boundary} (e.g. 0x00000000_0xffffffff)||
|`-force`, `--force`|Whether to force-clear the backlog without prompting|false|
|`-s`, `--sub`|The subscription name||

### `unsubscribe`
Unsubscribe the given subscription on all destinations of a namespace

Usage

```bash

$ pulsar-admin namespaces unsubscribe tenant/namespace options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-b`, `--bundle`|{start-boundary}_{end-boundary} (e.g. 0x00000000_0xffffffff)||
|`-s`, `--sub`|The subscription name||

### `set-encryption-required`
Enable or disable required message encryption for a namespace

Usage

```bash

$ pulsar-admin namespaces set-encryption-required tenant/namespace options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-d`, `--disable`|Disable required message encryption|false|
|`-e`, `--enable`|Enable required message encryption|false|

### `set-delayed-delivery`
Set the delayed delivery policy on a namespace

Usage

```bash

$ pulsar-admin namespaces set-delayed-delivery tenant/namespace options

```

Options

|Flag|Description|Default|
|---|---|---|
|`-d`, `--disable`|Disable delayed delivery messages|false|
|`-e`, `--enable`|Enable delayed delivery messages|false|
|`-t`, `--time`|The tick time used when retrying delayed delivery messages|1s|

### `get-delayed-delivery`
Get the delayed delivery policy on a namespace

Usage

```bash

$ pulsar-admin namespaces get-delayed-delivery tenant/namespace

```
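Example

For example, enabling delayed delivery with a 1-second tick time via `set-delayed-delivery` might look like this (the namespace name is illustrative):

```bash

$ pulsar-admin namespaces set-delayed-delivery my-tenant/my-ns \
  --enable \
  --time 1s

```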
### `set-subscription-auth-mode`
Set the subscription auth mode on a namespace

Usage

```bash

$ pulsar-admin namespaces set-subscription-auth-mode tenant/namespace options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-m`, `--subscription-auth-mode`|Subscription authorization mode for Pulsar policies. Valid options are: [None, Prefix]||

### `get-max-producers-per-topic`
Get maxProducersPerTopic for a namespace

Usage

```bash

$ pulsar-admin namespaces get-max-producers-per-topic tenant/namespace

```

### `set-max-producers-per-topic`
Set maxProducersPerTopic for a namespace

Usage

```bash

$ pulsar-admin namespaces set-max-producers-per-topic tenant/namespace options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-p`, `--max-producers-per-topic`|maxProducersPerTopic for a namespace|0|

### `get-max-consumers-per-topic`
Get maxConsumersPerTopic for a namespace

Usage

```bash

$ pulsar-admin namespaces get-max-consumers-per-topic tenant/namespace

```

### `set-max-consumers-per-topic`
Set maxConsumersPerTopic for a namespace

Usage

```bash

$ pulsar-admin namespaces set-max-consumers-per-topic tenant/namespace options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-c`, `--max-consumers-per-topic`|maxConsumersPerTopic for a namespace|0|

### `get-max-consumers-per-subscription`
Get maxConsumersPerSubscription for a namespace

Usage

```bash

$ pulsar-admin namespaces get-max-consumers-per-subscription tenant/namespace

```

### `set-max-consumers-per-subscription`
Set maxConsumersPerSubscription for a namespace

Usage

```bash

$ pulsar-admin namespaces set-max-consumers-per-subscription tenant/namespace options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-c`, `--max-consumers-per-subscription`|maxConsumersPerSubscription for a namespace|0|

### `get-max-unacked-messages-per-subscription`
Get maxUnackedMessagesPerSubscription for a namespace

Usage

```bash

$ pulsar-admin namespaces get-max-unacked-messages-per-subscription tenant/namespace

```

### `set-max-unacked-messages-per-subscription`
Set maxUnackedMessagesPerSubscription for a namespace

Usage

```bash

$ pulsar-admin namespaces set-max-unacked-messages-per-subscription tenant/namespace options

```

Options

|Flag|Description|Default|
|---|---|---|
|`-c`, `--max-unacked-messages-per-subscription`|maxUnackedMessagesPerSubscription for a namespace|-1|

### `get-max-unacked-messages-per-consumer`
Get maxUnackedMessagesPerConsumer for a namespace

Usage

```bash

$ pulsar-admin namespaces get-max-unacked-messages-per-consumer tenant/namespace

```

### `set-max-unacked-messages-per-consumer`
Set maxUnackedMessagesPerConsumer for a namespace

Usage

```bash

$ pulsar-admin namespaces set-max-unacked-messages-per-consumer tenant/namespace options

```

Options

|Flag|Description|Default|
|---|---|---|
|`-c`, `--max-unacked-messages-per-consumer`|maxUnackedMessagesPerConsumer for a namespace|-1|
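Example

A hypothetical invocation (the namespace and limit are illustrative):

```bash

$ pulsar-admin namespaces set-max-unacked-messages-per-consumer my-tenant/my-ns \
  --max-unacked-messages-per-consumer 1000

```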
### `get-compaction-threshold`
Get compactionThreshold for a namespace

Usage

```bash

$ pulsar-admin namespaces get-compaction-threshold tenant/namespace

```

### `set-compaction-threshold`
Set compactionThreshold for a namespace

Usage

```bash

$ pulsar-admin namespaces set-compaction-threshold tenant/namespace options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-t`, `--threshold`|Maximum number of bytes in a topic backlog before compaction is triggered (eg: 10M, 16G, 3T). 0 disables automatic compaction|0|

### `get-offload-threshold`
Get offloadThreshold for a namespace

Usage

```bash

$ pulsar-admin namespaces get-offload-threshold tenant/namespace

```

### `set-offload-threshold`
Set offloadThreshold for a namespace

Usage

```bash

$ pulsar-admin namespaces set-offload-threshold tenant/namespace options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-s`, `--size`|Maximum number of bytes stored in the Pulsar cluster for a topic before data starts being automatically offloaded to long-term storage (eg: 10M, 16G, 3T, 100). Negative values disable automatic offload. 0 triggers offloading as soon as possible.|-1|

### `get-offload-deletion-lag`
Get offloadDeletionLag, in minutes, for a namespace

Usage

```bash

$ pulsar-admin namespaces get-offload-deletion-lag tenant/namespace

```

### `set-offload-deletion-lag`
Set offloadDeletionLag for a namespace

Usage

```bash

$ pulsar-admin namespaces set-offload-deletion-lag tenant/namespace options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-l`, `--lag`|Duration to wait after offloading a ledger segment before deleting the copy of that segment from cluster-local storage (eg: 10m, 5h, 3d, 2w).|-1|

### `clear-offload-deletion-lag`
Clear offloadDeletionLag for a namespace

Usage

```bash

$ pulsar-admin namespaces clear-offload-deletion-lag tenant/namespace

```

### `get-schema-autoupdate-strategy`
Get the schema auto-update strategy for a namespace

Usage

```bash

$ pulsar-admin namespaces get-schema-autoupdate-strategy tenant/namespace

```

### `set-schema-autoupdate-strategy`
Set the schema auto-update strategy for a namespace

Usage

```bash

$ pulsar-admin namespaces set-schema-autoupdate-strategy tenant/namespace options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-c`, `--compatibility`|Compatibility level required for new schemas created via a Producer. Possible values (Full, Backward, Forward, None).|Full|
|`-d`, `--disabled`|Disable automatic schema updates.|false|

### `get-publish-rate`
Get the message publish rate for each topic in a namespace, in bytes as well as messages per second

Usage

```bash

$ pulsar-admin namespaces get-publish-rate tenant/namespace

```

### `set-publish-rate`
Set the message publish rate for each topic in a namespace

Usage

```bash

$ pulsar-admin namespaces set-publish-rate tenant/namespace options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-m`, `--msg-publish-rate`|Threshold for the number of messages per second per topic in the namespace (-1 implies not set, 0 for no limit).|-1|
|`-b`, `--byte-publish-rate`|Threshold for the number of bytes per second per topic in the namespace (-1 implies not set, 0 for no limit).|-1|
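Example

For instance, capping each topic in a namespace at roughly 1000 messages and 1 MB per second might look like the following (the values are illustrative):

```bash

$ pulsar-admin namespaces set-publish-rate my-tenant/my-ns \
  --msg-publish-rate 1000 \
  --byte-publish-rate 1048576

```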
### `set-offload-policies`
Set the offload policy for a namespace.

Usage

```bash

$ pulsar-admin namespaces set-offload-policies tenant/namespace options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-d`, `--driver`|The driver used to offload old data to long-term storage (possible values: S3, aws-s3, google-cloud-storage)||
|`-r`, `--region`|The long-term storage region||
|`-b`, `--bucket`|The bucket to place offloaded ledgers into||
|`-e`, `--endpoint`|An alternative endpoint to connect to||
|`-i`, `--aws-id`|The AWS credential ID to use with the S3 or aws-s3 driver||
|`-s`, `--aws-secret`|The AWS credential secret to use with the S3 or aws-s3 driver||
|`-ro`, `--s3-role`|The S3 role used for STSAssumeRoleSessionCredentialsProvider with the S3 or aws-s3 driver||
|`-rsn`, `--s3-role-session-name`|The S3 role session name used for STSAssumeRoleSessionCredentialsProvider with the S3 or aws-s3 driver||
|`-mbs`, `--maxBlockSize`|Max block size|64MB|
|`-rbs`, `--readBufferSize`|Read buffer size|1MB|
|`-oat`, `--offloadAfterThreshold`|Offload after threshold size (eg: 1M, 5M)||
|`-oae`, `--offloadAfterElapsed`|Offload after elapsed time, in milliseconds (or minutes, hours, days, weeks, eg: 100m, 3h, 2d, 5w).||

### `get-offload-policies`
Get the offload policy for a namespace.

Usage

```bash

$ pulsar-admin namespaces get-offload-policies tenant/namespace

```

### `set-max-subscriptions-per-topic`
Set the maximum number of subscriptions per topic for a namespace.

Usage

```bash

$ pulsar-admin namespaces set-max-subscriptions-per-topic tenant/namespace

```

### `get-max-subscriptions-per-topic`
Get the maximum number of subscriptions per topic for a namespace.

Usage

```bash

$ pulsar-admin namespaces get-max-subscriptions-per-topic tenant/namespace

```

### `remove-max-subscriptions-per-topic`
Remove the maximum number of subscriptions per topic for a namespace.

Usage

```bash

$ pulsar-admin namespaces remove-max-subscriptions-per-topic tenant/namespace

```

## `ns-isolation-policy`
Operations for managing namespace isolation policies.

Usage

```bash

$ pulsar-admin ns-isolation-policy subcommand

```

Subcommands
* `set`
* `get`
* `list`
* `delete`
* `brokers`
* `broker`

### `set`
Create/update a namespace isolation policy for a cluster. This operation requires Pulsar superuser privileges.

Usage

```bash

$ pulsar-admin ns-isolation-policy set cluster-name policy-name options

```

Options
|Flag|Description|Default|
|---|---|---|
|`--auto-failover-policy-params`|Comma-separated name=value auto-failover policy parameters|[]|
|`--auto-failover-policy-type`|Auto-failover policy type name. Currently available option: min_available.|[]|
|`--namespaces`|Comma-separated namespaces regex list|[]|
|`--primary`|Comma-separated primary broker regex list|[]|
|`--secondary`|Comma-separated secondary broker regex list|[]|
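Example

A sketch of creating a policy for an illustrative cluster `my-cluster`; the `min_limit` and `usage_threshold` parameter names are assumptions that follow the `min_available` policy type:

```bash

$ pulsar-admin ns-isolation-policy set my-cluster my-policy \
  --auto-failover-policy-type min_available \
  --auto-failover-policy-params min_limit=3,usage_threshold=100 \
  --namespaces "my-tenant/my-ns.*" \
  --primary "broker1-.*"

```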
### `get`
Get the namespace isolation policy of a cluster. This operation requires Pulsar superuser privileges.

Usage

```bash

$ pulsar-admin ns-isolation-policy get cluster-name policy-name

```

### `list`
List all namespace isolation policies of a cluster. This operation requires Pulsar superuser privileges.

Usage

```bash

$ pulsar-admin ns-isolation-policy list cluster-name

```

### `delete`
Delete the namespace isolation policy of a cluster. This operation requires Pulsar superuser privileges.

Usage

```bash

$ pulsar-admin ns-isolation-policy delete cluster-name policy-name

```

### `brokers`
List all brokers with namespace-isolation policies attached to them. This operation requires Pulsar superuser privileges.

Usage

```bash

$ pulsar-admin ns-isolation-policy brokers cluster-name

```

### `broker`
Get a broker with the namespace-isolation policies attached to it. This operation requires Pulsar superuser privileges.

Usage

```bash

$ pulsar-admin ns-isolation-policy broker cluster-name options

```

Options
|Flag|Description|Default|
|---|---|---|
|`--broker`|The broker name for which to get the attached namespace-isolation policies||

## `topics`
Operations for managing Pulsar topics (both persistent and non-persistent).

Usage

```bash

$ pulsar-admin topics subcommand

```

Since Pulsar 2.7.0, some namespace-level policies are also available at the topic level. To enable topic-level policies in Pulsar, configure the following parameters in the `broker.conf` file.

```shell

systemTopicEnabled=true
topicLevelPoliciesEnabled=true

```

Subcommands
* `compact`
* `compaction-status`
* `offload`
* `offload-status`
* `create-partitioned-topic`
* `create-missed-partitions`
* `delete-partitioned-topic`
* `create`
* `get-partitioned-topic-metadata`
* `update-partitioned-topic`
* `list-partitioned-topics`
* `list`
* `terminate`
* `permissions`
* `grant-permission`
* `revoke-permission`
* `lookup`
* `bundle-range`
* `delete`
* `unload`
* `create-subscription`
* `subscriptions`
* `unsubscribe`
* `stats`
* `stats-internal`
* `info-internal`
* `partitioned-stats`
* `partitioned-stats-internal`
* `skip`
* `clear-backlog`
* `expire-messages`
* `expire-messages-all-subscriptions`
* `peek-messages`
* `reset-cursor`
* `get-message-by-id`
* `last-message-id`
* `get-backlog-quotas`
* `set-backlog-quota`
* `remove-backlog-quota`
* `get-persistence`
* `set-persistence`
* `remove-persistence`
* `get-message-ttl`
* `set-message-ttl`
* `remove-message-ttl`
* `get-deduplication`
* `set-deduplication`
* `remove-deduplication`
* `get-retention`
* `set-retention`
* `remove-retention`
* `get-dispatch-rate`
* `set-dispatch-rate`
* `remove-dispatch-rate`
* `get-max-unacked-messages-per-subscription`
* `set-max-unacked-messages-per-subscription`
* `remove-max-unacked-messages-per-subscription`
* `get-max-unacked-messages-per-consumer`
* `set-max-unacked-messages-per-consumer`
* `remove-max-unacked-messages-per-consumer`
* `get-delayed-delivery`
* `set-delayed-delivery`
* `remove-delayed-delivery`
* `get-max-producers`
* `set-max-producers`
* `remove-max-producers`
* `get-max-consumers`
* `set-max-consumers`
* `remove-max-consumers`
* `get-compaction-threshold`
* `set-compaction-threshold`
* `remove-compaction-threshold`
* `get-offload-policies`
* `set-offload-policies`
* `remove-offload-policies`
* `get-inactive-topic-policies`
* `set-inactive-topic-policies`
* `remove-inactive-topic-policies`
* `set-max-subscriptions`
* `get-max-subscriptions`
* `remove-max-subscriptions`

### `compact`
Run compaction on the specified topic (persistent topics only)

Usage

```bash

$ pulsar-admin topics compact persistent://tenant/namespace/topic

```

### `compaction-status`
Check the status of a topic compaction (persistent topics only)

Usage

```bash

$ pulsar-admin topics compaction-status persistent://tenant/namespace/topic

```

Options
|Flag|Description|Default|
|---|---|---|
|`-w`, `--wait-complete`|Wait for compaction to complete|false|
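Example

For example, compacting a topic and then waiting for the compaction to complete might look like this (the topic name is illustrative):

```bash

$ pulsar-admin topics compact persistent://my-tenant/my-ns/my-topic
$ pulsar-admin topics compaction-status --wait-complete persistent://my-tenant/my-ns/my-topic

```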
### `offload`
Trigger offload of data from a topic to long-term storage (e.g. Amazon S3)

Usage

```bash

$ pulsar-admin topics offload persistent://tenant/namespace/topic options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-s`, `--size-threshold`|The maximum amount of data to keep in BookKeeper for the specified topic||

### `offload-status`
Check the status of data offloading from a topic to long-term storage

Usage

```bash

$ pulsar-admin topics offload-status persistent://tenant/namespace/topic options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-w`, `--wait-complete`|Wait for offloading to complete|false|

### `create-partitioned-topic`
Create a partitioned topic. A partitioned topic must be created before producers can publish to it.

:::note

By default, topics are considered inactive 60 seconds after creation and are deleted automatically to prevent generating trash data.
To disable this feature, set `brokerDeleteInactiveTopicsEnabled` to `false`.
To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to your desired value.
For more information about these two parameters, see [here](reference-configuration.md#broker).

:::

Usage

```bash

$ pulsar-admin topics create-partitioned-topic {persistent|non-persistent}://tenant/namespace/topic options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-p`, `--partitions`|The number of partitions for the topic|0|

### `create-missed-partitions`
Try to create missing partitions for a partitioned topic. This can be used to repair a partitioned topic whose partitions were not created, for example when topic auto-creation is disabled.

Usage

```bash

$ pulsar-admin topics create-missed-partitions persistent://tenant/namespace/topic

```

### `delete-partitioned-topic`
Delete a partitioned topic. This also deletes all the partitions of the topic if they exist.

Usage

```bash

$ pulsar-admin topics delete-partitioned-topic {persistent|non-persistent}://tenant/namespace/topic

```

### `create`
Creates a non-partitioned topic. A non-partitioned topic must be explicitly created by the user if allowAutoTopicCreation or createIfMissing is disabled.

:::note

By default, topics are considered inactive 60 seconds after creation and are deleted automatically to prevent generating trash data.
To disable this feature, set `brokerDeleteInactiveTopicsEnabled` to `false`.
To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to your desired value.
For more information about these two parameters, see [here](reference-configuration.md#broker).

:::

Usage

```bash

$ pulsar-admin topics create {persistent|non-persistent}://tenant/namespace/topic

```

### `get-partitioned-topic-metadata`
Get the partitioned topic metadata. If the topic is not created or is a non-partitioned topic, this returns an empty topic with zero partitions.

Usage

```bash

$ pulsar-admin topics get-partitioned-topic-metadata {persistent|non-persistent}://tenant/namespace/topic

```
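Example

A hypothetical example that creates a 4-partition topic and then reads back its metadata (the names are illustrative):

```bash

$ pulsar-admin topics create-partitioned-topic persistent://my-tenant/my-ns/my-topic \
  --partitions 4
$ pulsar-admin topics get-partitioned-topic-metadata persistent://my-tenant/my-ns/my-topic

```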
### `update-partitioned-topic`
Update an existing non-global partitioned topic. The new number of partitions must be greater than the existing number of partitions.

Usage

```bash

$ pulsar-admin topics update-partitioned-topic {persistent|non-persistent}://tenant/namespace/topic options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-p`, `--partitions`|The number of partitions for the topic|0|

### `list-partitioned-topics`
Get the list of partitioned topics under a namespace.

Usage

```bash

$ pulsar-admin topics list-partitioned-topics tenant/namespace

```

### `list`
Get the list of topics under a namespace

Usage

```bash

$ pulsar-admin topics list tenant/namespace

```

### `terminate`
Terminate a persistent topic (disallow further messages from being published on the topic)

Usage

```bash

$ pulsar-admin topics terminate persistent://tenant/namespace/topic

```

### `permissions`
Get the permissions on a topic. Retrieve the effective permissions for a destination. These permissions are defined by the permissions set at the namespace level combined (union) with any specific permissions set on the topic.

Usage

```bash

$ pulsar-admin topics permissions topic

```

### `grant-permission`
Grant a new permission to a client role on a single topic

Usage

```bash

$ pulsar-admin topics grant-permission {persistent|non-persistent}://tenant/namespace/topic options

```

Options
|Flag|Description|Default|
|---|---|---|
|`--actions`|Actions to be granted (`produce` or `consume`)||
|`--role`|The client role to which to grant the permissions||

### `revoke-permission`
Revoke permissions from a client role on a single topic. If the permission was not set at the topic level, but rather at the namespace level, this operation returns an error (HTTP status code 412).

Usage

```bash

$ pulsar-admin topics revoke-permission topic

```

### `lookup`
Look up a topic from the current serving broker

Usage

```bash

$ pulsar-admin topics lookup topic

```

### `bundle-range`
Get the namespace bundle which contains the given topic

Usage

```bash

$ pulsar-admin topics bundle-range topic

```

### `delete`
Delete a topic. The topic cannot be deleted if there are any active subscriptions or producers connected to the topic.

Usage

```bash

$ pulsar-admin topics delete topic

```

### `unload`
Unload a topic

Usage

```bash

$ pulsar-admin topics unload topic

```

### `create-subscription`
Create a new subscription on a topic.

Usage

```bash

$ pulsar-admin topics create-subscription [options] persistent://tenant/namespace/topic

```

Options
|Flag|Description|Default|
|---|---|---|
|`-m`, `--messageId`|The messageId at which to create the subscription. It can be either 'latest', 'earliest' or (ledgerId:entryId)|latest|
|`-s`, `--subscription`|The name of the subscription to create||

### `subscriptions`
Get the list of subscriptions on the topic

Usage

```bash

$ pulsar-admin topics subscriptions topic

```

### `unsubscribe`
Delete a durable subscriber from a topic

Usage

```bash

$ pulsar-admin topics unsubscribe topic options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-s`, `--subscription`|The subscription to delete||
|`-f`, `--force`|Disconnect and close all consumers and delete the subscription forcefully|false|
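Example

For instance, forcefully deleting an illustrative subscription might look like:

```bash

$ pulsar-admin topics unsubscribe persistent://my-tenant/my-ns/my-topic \
  --subscription my-sub \
  --force

```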
### `stats`
Get the stats for the topic and its connected producers and consumers. All rates are computed over a 1-minute window and are relative to the last completed 1-minute period.

Usage

```bash

$ pulsar-admin topics stats topic

```

:::note

The unit of `storageSize` and `averageMsgSize` is bytes.

:::

### `stats-internal`
Get the internal stats for the topic

Usage

```bash

$ pulsar-admin topics stats-internal topic

```

### `info-internal`
Get the internal metadata info for the topic

Usage

```bash

$ pulsar-admin topics info-internal topic

```

### `partitioned-stats`
Get the stats for the partitioned topic and its connected producers and consumers. All rates are computed over a 1-minute window and are relative to the last completed 1-minute period.

Usage

```bash

$ pulsar-admin topics partitioned-stats topic options

```

Options
|Flag|Description|Default|
|---|---|---|
|`--per-partition`|Get per-partition stats|false|

### `partitioned-stats-internal`
Get the internal stats for the partitioned topic and its connected producers and consumers. All rates are computed over a 1-minute window and are relative to the last completed 1-minute period.

Usage

```bash

$ pulsar-admin topics partitioned-stats-internal topic

```

### `skip`
Skip some messages for the subscription

Usage

```bash

$ pulsar-admin topics skip topic options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-n`, `--count`|The number of messages to skip|0|
|`-s`, `--subscription`|The subscription on which to skip messages||

### `clear-backlog`
Clear the backlog (skip all the messages) for the subscription

Usage

```bash

$ pulsar-admin topics clear-backlog topic options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-s`, `--subscription`|The subscription to clear||

### `expire-messages`
Expire messages that are older than the given expiry time (in seconds) for the subscription.

Usage

```bash

$ pulsar-admin topics expire-messages topic options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-t`, `--expireTime`|Expire messages older than this time (in seconds)|0|
|`-s`, `--subscription`|The subscription on which to expire messages||

### `expire-messages-all-subscriptions`
Expire messages older than the given expiry time (in seconds) for all subscriptions

Usage

```bash

$ pulsar-admin topics expire-messages-all-subscriptions topic options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-t`, `--expireTime`|Expire messages older than this time (in seconds)|0|

### `peek-messages`
Peek some messages for the subscription.

Usage

```bash

$ pulsar-admin topics peek-messages topic options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-n`, `--count`|The number of messages|0|
|`-s`, `--subscription`|The subscription to get messages from||

### `reset-cursor`
Reset the subscription position to the position closest to the given timestamp or messageId.

Usage

```bash

$ pulsar-admin topics reset-cursor topic options

```

Options

|Flag|Description|Default|
|---|---|---|
|`-s`, `--subscription`|The subscription to reset the position on||
|`-t`, `--time`|The time to reset back to, in minutes (or hours, days, weeks, etc.). Examples: `100m`, `3h`, `2d`, `5w`.||
|`-m`, `--messageId`|The messageId to reset back to (ledgerId:entryId)||

### `get-message-by-id`
Get a message by ledger id and entry id

Usage

```bash

$ pulsar-admin topics get-message-by-id topic options

```

Options

|Flag|Description|Default|
|---|---|---|
|`-l`, `--ledgerId`|The ledger id|0|
|`-e`, `--entryId`|The entry id|0|

### `last-message-id`
Get the last committed message ID of the topic.

Usage

```bash

$ pulsar-admin topics last-message-id persistent://tenant/namespace/topic

```

### `get-backlog-quotas`
Get the backlog quota policies for a topic.

Usage

```bash

$ pulsar-admin topics get-backlog-quotas tenant/namespace/topic

```

### `set-backlog-quota`
Set a backlog quota policy for a topic.

Usage

```bash

$ pulsar-admin topics set-backlog-quota tenant/namespace/topic options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-l`, `--limit`|The backlog size limit (for example `10M` or `16G`)||
|`-lt`, `--limitTime`|Time limit in seconds; a non-positive number disables the time limit (for example, 3600 for 1 hour)||
|`-p`, `--policy`|The retention policy to enforce when the limit is reached. The valid options are: `producer_request_hold`, `producer_exception` or `consumer_backlog_eviction`||
|`-t`, `--type`|Backlog quota type to set. The valid options are: `destination_storage`, `message_age`|destination_storage|

Example

```bash

$ pulsar-admin topics set-backlog-quota my-tenant/my-ns/my-topic \
--limit 2G \
--policy producer_request_hold

```

```bash

$ pulsar-admin topics set-backlog-quota my-tenant/my-ns/my-topic \
--limitTime 3600 \
--policy producer_request_hold \
--type message_age

```

### `remove-backlog-quota`
Remove a backlog quota policy from a topic.

Usage

```bash

$ pulsar-admin topics remove-backlog-quota tenant/namespace/topic

```

Options
|Flag|Description|Default|
|---|---|---|
|`-t`, `--type`|Backlog quota type to remove. The valid options are: `destination_storage`, `message_age`|destination_storage|

### `get-persistence`
Get the persistence policies for a topic.

Usage

```bash

$ pulsar-admin topics get-persistence tenant/namespace/topic

```

### `set-persistence`
Set the persistence policies for a topic.

Usage

```bash

$ pulsar-admin topics set-persistence tenant/namespace/topic options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-e`, `--bookkeeper-ensemble`|The number of bookies to use for a topic|0|
|`-w`, `--bookkeeper-write-quorum`|How many writes to make for each entry|0|
|`-a`, `--bookkeeper-ack-quorum`|The number of acks (guaranteed copies) to wait for each entry|0|
|`-r`, `--ml-mark-delete-max-rate`|Throttling rate of the mark-delete operation (0 means no throttle)||

### `remove-persistence`
Remove the persistence policy for a topic.

Usage

```bash

$ pulsar-admin topics remove-persistence tenant/namespace/topic

```

### `get-message-ttl`
Get the message TTL for a topic.

Usage

```bash

$ pulsar-admin topics get-message-ttl tenant/namespace/topic

```

### `set-message-ttl`
Set the message TTL for a topic.

Usage

```bash

$ pulsar-admin topics set-message-ttl tenant/namespace/topic options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-ttl`, `--messageTTL`|Message TTL for a topic in seconds; the allowed range is from 1 to `Integer.MAX_VALUE`|0|
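Example

A sketch that sets a 2-minute TTL on a topic (the names are illustrative):

```bash

$ pulsar-admin topics set-message-ttl persistent://my-tenant/my-ns/my-topic \
  --messageTTL 120

```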
### `remove-message-ttl`
Remove the message TTL for a topic.

Usage

```bash

$ pulsar-admin topics remove-message-ttl tenant/namespace/topic

```

### `get-deduplication`
Get the deduplication policy for a topic.

Usage

```bash

$ pulsar-admin topics get-deduplication tenant/namespace/topic

```

### `set-deduplication`
Set the deduplication policy for a topic.

Usage

```bash

$ pulsar-admin topics set-deduplication tenant/namespace/topic options

```

Options
|Flag|Description|Default|
|---|---|---|
|`--enable`, `-e`|Enable message deduplication on the specified topic.|false|
|`--disable`, `-d`|Disable message deduplication on the specified topic.|false|

### `remove-deduplication`
Remove the deduplication policy for a topic.

Usage

```bash

$ pulsar-admin topics remove-deduplication tenant/namespace/topic

```

## `tenants`
Operations for managing tenants

Usage

```bash

$ pulsar-admin tenants subcommand

```

Subcommands
* `list`
* `get`
* `create`
* `update`
* `delete`

### `list`
List the existing tenants

Usage

```bash

$ pulsar-admin tenants list

```

### `get`
Get the configuration of a tenant

Usage

```bash

$ pulsar-admin tenants get tenant-name

```

### `create`
Create a new tenant

Usage

```bash

$ pulsar-admin tenants create tenant-name options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-r`, `--admin-roles`|Comma-separated admin roles||
|`-c`, `--allowed-clusters`|Comma-separated allowed clusters||

### `update`
Update a tenant

Usage

```bash

$ pulsar-admin tenants update tenant-name options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-r`, `--admin-roles`|Comma-separated admin roles||
|`-c`, `--allowed-clusters`|Comma-separated allowed clusters||

### `delete`
Delete an existing tenant

Usage

```bash

$ pulsar-admin tenants delete tenant-name

```

Options
|Flag|Description|Default|
|---|---|---|
|`-f`, `--force`|Delete a tenant forcefully by deleting all namespaces under it.|false|

## `resource-quotas`
Operations for managing resource quotas

Usage

```bash

$ pulsar-admin resource-quotas subcommand

```

Subcommands
* `get`
* `set`
* `reset-namespace-bundle-quota`

### `get`
Get the resource quota for a specified namespace bundle, or the default quota if no namespace/bundle is specified.

Usage

```bash

$ pulsar-admin resource-quotas get options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-b`, `--bundle`|A bundle of the form {start-boundary}_{end-boundary}. This must be specified together with -n/--namespace.||
|`-n`, `--namespace`|The namespace||

### `set`
Set the resource quota for the specified namespace bundle, or the default quota if no namespace/bundle is specified.

Usage

```bash

$ pulsar-admin resource-quotas set options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-bi`, `--bandwidthIn`|The expected inbound bandwidth (in bytes/second)|0|
|`-bo`, `--bandwidthOut`|The expected outbound bandwidth (in bytes/second)|0|
|`-b`, `--bundle`|A bundle of the form {start-boundary}_{end-boundary}. This must be specified together with -n/--namespace.||
|`-d`, `--dynamic`|Allow the quota to be dynamically re-calculated (or not)|false|
|`-mem`, `--memory`|Expected memory usage (in megabytes)|0|
|`-mi`, `--msgRateIn`|Expected incoming messages per second|0|
|`-mo`, `--msgRateOut`|Expected outgoing messages per second|0|
|`-n`, `--namespace`|The namespace as tenant/namespace, for example my-tenant/my-ns. This must be specified together with -b/--bundle.||

### `reset-namespace-bundle-quota`
Reset the specified namespace bundle's resource quota to the default value.

Usage

```bash

$ pulsar-admin resource-quotas reset-namespace-bundle-quota options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-b`, `--bundle`|A bundle of the form {start-boundary}_{end-boundary}. This must be specified together with -n/--namespace.||
|`-n`, `--namespace`|The namespace||

## `schemas`
Operations related to schemas associated with Pulsar topics.

Usage

```bash

$ pulsar-admin schemas subcommand

```

Subcommands
* `upload`
* `delete`
* `get`
* `extract`

### `upload`
Upload the schema definition for a topic

Usage

```bash

$ pulsar-admin schemas upload persistent://tenant/namespace/topic options

```

Options
|Flag|Description|Default|
|---|---|---|
|`--filename`|The path to the schema definition file. An example schema file is available under the conf directory.||

### `delete`
Delete the schema definition associated with a topic

Usage

```bash

$ pulsar-admin schemas delete persistent://tenant/namespace/topic

```

### `get`
Retrieve the schema definition associated with a topic (at a given version, if a version is supplied).

Usage

```bash

$ pulsar-admin schemas get persistent://tenant/namespace/topic options

```

Options
|Flag|Description|Default|
|---|---|---|
|`--version`|The version of the schema definition to retrieve for a topic.||

### `extract`
Provide the schema definition for a topic via a Java class name contained in a JAR file

Usage

```bash

$ pulsar-admin schemas extract persistent://tenant/namespace/topic options

```

Options
|Flag|Description|Default|
|---|---|---|
|`-c`, `--classname`|The Java class name||
|`-j`, `--jar`|A path to the JAR file which contains the above Java class||
|`-t`, `--type`|The type of the schema (avro or json)||

diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/reference-rest-api-overview.md b/site2/website/versioned_docs/version-2.9.2-deprecated/reference-rest-api-overview.md
deleted file mode 100644
index 4bdcf23483a2b5..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/reference-rest-api-overview.md
+++ /dev/null
@@ -1,18 +0,0 @@
---
id: reference-rest-api-overview
title: Pulsar REST APIs
sidebar_label: "Pulsar REST APIs"
---

A REST API (also known as a RESTful API, REpresentational State Transfer Application Programming Interface) is a set of definitions and protocols for building and integrating application software, using HTTP requests to GET, PUT, POST, and DELETE data following the REST standards. In essence, a REST API is a set of remote calls using standard methods to request and return data in a specific format between two systems.

Pulsar provides a variety of REST APIs that enable you to interact with Pulsar to retrieve information or perform an action.
-
-| REST API category | Description |
-| --- | --- |
-| [Admin](https://pulsar.apache.org/admin-rest-api/?version=master) | REST APIs for administrative operations. |
-| [Functions](https://pulsar.apache.org/functions-rest-api/?version=master) | REST APIs for function-specific operations. |
-| [Sources](https://pulsar.apache.org/source-rest-api/?version=master) | REST APIs for source-specific operations. |
-| [Sinks](https://pulsar.apache.org/sink-rest-api/?version=master) | REST APIs for sink-specific operations. |
-| [Packages](https://pulsar.apache.org/packages-rest-api/?version=master) | REST APIs for package-specific operations. A package can be a group of functions, sources, and sinks. |
-
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/reference-terminology.md b/site2/website/versioned_docs/version-2.9.2-deprecated/reference-terminology.md
deleted file mode 100644
index e5099141c3231e..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/reference-terminology.md
+++ /dev/null
@@ -1,176 +0,0 @@
----
-id: reference-terminology
-title: Pulsar Terminology
-sidebar_label: "Terminology"
-original_id: reference-terminology
----
-
-Here is a glossary of terms related to Apache Pulsar:
-
-### Concepts
-
-#### Pulsar
-
-Pulsar is a distributed messaging system originally created by Yahoo but now under the stewardship of the Apache Software Foundation.
-
-#### Message
-
-Messages are the basic unit of Pulsar. They're what [producers](#producer) publish to [topics](#topic)
-and what [consumers](#consumer) then consume from topics.
-
-#### Topic
-
-A named channel used to pass messages published by [producers](#producer) to [consumers](#consumer) who
-process those [messages](#message).
-
-#### Partitioned Topic
-
-A topic that is served by multiple Pulsar [brokers](#broker), which enables higher throughput.
-
-#### Namespace
-
-A grouping mechanism for related [topics](#topic).
-
-#### Namespace Bundle
-
-A virtual group of [topics](#topic) that belong to the same [namespace](#namespace). A namespace bundle
-is defined as a range between two 32-bit hashes, such as 0x00000000 and 0xffffffff.
-
-#### Tenant
-
-An administrative unit for allocating capacity and enforcing an authentication/authorization scheme.
-
-#### Subscription
-
-A lease on a [topic](#topic) established by a group of [consumers](#consumer). Pulsar has four subscription
-modes (exclusive, shared, failover and key_shared).
-
-#### Pub-Sub
-
-A messaging pattern in which [producer](#producer) processes publish messages on [topics](#topic) that
-are then consumed (processed) by [consumer](#consumer) processes.
-
-#### Producer
-
-A process that publishes [messages](#message) to a Pulsar [topic](#topic).
-
-#### Consumer
-
-A process that establishes a subscription to a Pulsar [topic](#topic) and processes messages published
-to that topic by [producers](#producer).
-
-#### Reader
-
-Pulsar readers are message processors much like Pulsar [consumers](#consumer) but with two crucial differences:
-
-- you can specify *where* on a topic readers begin processing messages (consumers always begin with the latest
-  available unacked message);
-- readers don't retain data or acknowledge messages.
-
-#### Cursor
-
-The subscription position for a [consumer](#consumer).
-
-#### Acknowledgment (ack)
-
-A confirmation sent to a Pulsar broker by a [consumer](#consumer) to signal that a message has been successfully processed.
-An acknowledgment (ack) is Pulsar's way of knowing that the message can be deleted from the system;
-if no acknowledgment is received, the message is retained until it is processed.
-
-#### Negative Acknowledgment (nack)
-
-When an application fails to process a particular message, it can send a "negative ack" to Pulsar
-to signal that the message should be replayed at a later time. (By default, failed messages are
-replayed after a 1 minute delay.) Be aware that negative acknowledgment on ordered subscription types,
-such as Exclusive, Failover and Key_Shared, can cause failed messages to arrive at consumers out of the original order.
-
-#### Unacknowledged
-
-A message that has been delivered to a consumer for processing but not yet confirmed as processed by the consumer.
-
-#### Retention Policy
-
-Size and time limits that you can set on a [namespace](#namespace) to configure retention of [messages](#message)
-that have already been [acknowledged](#acknowledgment-ack).
-
-#### Multi-Tenancy
-
-The ability to isolate [namespaces](#namespace), specify quotas, and configure authentication and authorization
-on a per-[tenant](#tenant) basis.
-
-#### Failure Domain
-
-A logical domain under a Pulsar cluster. Each logical domain contains a pre-configured list of brokers.
-
-#### Anti-affinity Namespaces
-
-A group of namespaces that have anti-affinity to each other.
-
-### Architecture
-
-#### Standalone
-
-A lightweight Pulsar broker in which all components run in a single Java Virtual Machine (JVM) process. Standalone
-clusters can be run on a single machine and are useful for development purposes.
-
-#### Cluster
-
-A set of Pulsar [brokers](#broker) and [BookKeeper](#bookkeeper) servers (aka [bookies](#bookie)).
-Clusters can reside in different geographical regions and replicate messages to one another
-in a process called [geo-replication](#geo-replication).
-
-#### Instance
-
-A group of Pulsar [clusters](#cluster) that act together as a single unit.
-
-#### Geo-Replication
-
-Replication of messages across Pulsar [clusters](#cluster), potentially in different datacenters
-or geographical regions.
-
-#### Configuration Store
-
-Pulsar's configuration store (previously known as *global ZooKeeper*) is a ZooKeeper quorum that
-is used for configuration-specific tasks. A multi-cluster Pulsar installation requires just one
-configuration store across all [clusters](#cluster).
-
-#### Topic Lookup
-
-A service provided by Pulsar [brokers](#broker) that enables connecting clients to automatically determine
-which Pulsar [cluster](#cluster) is responsible for a [topic](#topic) (and thus where message traffic for
-the topic needs to be routed).
-
-#### Service Discovery
-
-A mechanism provided by Pulsar that enables connecting clients to use just a single URL to interact
-with all the [brokers](#broker) in a [cluster](#cluster).
-
-#### Broker
-
-A stateless component of Pulsar [clusters](#cluster) that runs two other components: an HTTP server
-exposing a REST interface for administration and topic lookup, and a [dispatcher](#dispatcher) that
-handles all message transfers. Pulsar clusters typically consist of multiple brokers.
-
-#### Dispatcher
-
-An asynchronous TCP server used for all data transfers in and out of a Pulsar [broker](#broker). The Pulsar
-dispatcher uses a custom binary protocol for all communications.
-
-### Storage
-
-#### BookKeeper
-
-[Apache BookKeeper](http://bookkeeper.apache.org/) is a scalable, low-latency persistent log storage
-service that Pulsar uses to store data.
-
-#### Bookie
-
-Bookie is the name of an individual BookKeeper server. It is effectively the storage server of Pulsar.
-
-#### Ledger
-
-An append-only data structure in [BookKeeper](#bookkeeper) that is used to persistently store messages in Pulsar [topics](#topic).
-
-### Functions
-
-Pulsar Functions are lightweight functions that can consume messages from Pulsar topics, apply custom processing logic, and, if desired, publish results to topics.
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/schema-evolution-compatibility.md b/site2/website/versioned_docs/version-2.9.2-deprecated/schema-evolution-compatibility.md
deleted file mode 100644
index 3e78429df69da2..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/schema-evolution-compatibility.md
+++ /dev/null
@@ -1,201 +0,0 @@
----
-id: schema-evolution-compatibility
-title: Schema evolution and compatibility
-sidebar_label: "Schema evolution and compatibility"
-original_id: schema-evolution-compatibility
----
-
-Normally, schemas do not stay the same over a long period of time. Instead, they evolve to satisfy new needs.
-
-This chapter examines how Pulsar schemas evolve and what the Pulsar schema compatibility check strategies are.
-
-## Schema evolution
-
-Pulsar schema is defined in a data structure called `SchemaInfo`.
-
-Each `SchemaInfo` stored with a topic has a version. The version is used to manage the schema changes happening within a topic.
-
-A message produced with a given `SchemaInfo` is tagged with the corresponding schema version. When a message is consumed by a Pulsar client, the client can use the schema version to retrieve the corresponding `SchemaInfo` and use the correct schema information to deserialize data.
-
-### What is schema evolution?
-
-Schemas store the details of attributes and types. To satisfy new business requirements, you inevitably need to update schemas over time, which is called **schema evolution**.
-
-Any schema change affects downstream consumers. Schema evolution ensures that the downstream consumers can seamlessly handle data encoded with both old schemas and new schemas.
-
-### How should Pulsar schemas evolve?
-
-The answer is the Pulsar schema compatibility check strategy. It determines how the broker compares a new schema with the existing schemas of a topic.
-
-For more information, see [Schema compatibility check strategy](#schema-compatibility-check-strategy).
-
-### How does Pulsar support schema evolution?
-
-1. When a producer/consumer/reader connects to a broker, the broker deploys the schema compatibility checker configured by `schemaRegistryCompatibilityCheckers` to enforce the schema compatibility check.
-
-   The schema compatibility checker is one instance per schema type.
-
-   Currently, Avro and JSON have their own compatibility checkers, while all the other schema types share the default compatibility checker, which disables schema evolution.
-
-2. The producer/consumer/reader sends its client `SchemaInfo` to the broker.
-
-3. The broker knows the schema type and locates the schema compatibility checker for that type.
-
-4. The broker uses the checker to check if the `SchemaInfo` is compatible with the latest schema of the topic by applying its compatibility check strategy.
-
-   Currently, the compatibility check strategy is configured at the namespace level and applied to all the topics within that namespace.
-
-## Schema compatibility check strategy
-
-Pulsar has 8 schema compatibility check strategies, which are summarized in the following table.
-
-Suppose that you have a topic containing three schemas (V1, V2, and V3), where V1 is the oldest and V3 is the latest:
-
-| Compatibility check strategy | Definition | Changes allowed | Check against which schema | Upgrade first |
-| --- | --- | --- | --- | --- |
-| `ALWAYS_COMPATIBLE` | Disable schema compatibility check. | All changes are allowed | All previous versions | Any order |
-| `ALWAYS_INCOMPATIBLE` | Disable schema evolution. | All changes are disabled | None | None |
-| `BACKWARD` | Consumers using the schema V3 can process data written by producers using the schema V3 or V2. | Add optional fields, delete fields | Latest version | Consumers |
-| `BACKWARD_TRANSITIVE` | Consumers using the schema V3 can process data written by producers using the schema V3, V2 or V1. | Add optional fields, delete fields | All previous versions | Consumers |
-| `FORWARD` | Consumers using the schema V3 or V2 can process data written by producers using the schema V3. | Add fields, delete optional fields | Latest version | Producers |
-| `FORWARD_TRANSITIVE` | Consumers using the schema V3, V2 or V1 can process data written by producers using the schema V3. | Add fields, delete optional fields | All previous versions | Producers |
-| `FULL` | Backward and forward compatible between the schema V3 and V2. | Modify optional fields | Latest version | Any order |
-| `FULL_TRANSITIVE` | Backward and forward compatible among the schemas V3, V2, and V1. | Modify optional fields | All previous versions | Any order |
-
-### ALWAYS_COMPATIBLE and ALWAYS_INCOMPATIBLE
-
-| Compatibility check strategy | Definition | Note |
-| --- | --- | --- |
-| `ALWAYS_COMPATIBLE` | Disable schema compatibility check. | None |
-| `ALWAYS_INCOMPATIBLE` | Disable schema evolution, that is, any schema change is rejected. | For all schema types except Avro and JSON, the default schema compatibility check strategy is `ALWAYS_INCOMPATIBLE`. For Avro and JSON, the default is `FULL`. |
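-
-Either default can be overridden per namespace. A minimal sketch with `pulsar-admin` (the strategy and namespace name are illustrative; the command itself is covered in [Manage schema](schema-manage.md)):
-
-```bash
-
-bin/pulsar-admin namespaces set-schema-compatibility-strategy --compatibility BACKWARD my-tenant/my-ns
-
-```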
-
-#### Example
-
-* Example 1
-
-  In some situations, an application needs to store events of several different types in the same Pulsar topic.
-
-  In particular, when developing a data model in an `Event Sourcing` style, you might have several kinds of events that affect the state of an entity.
-
-  For example, for a user entity, there are `userCreated`, `userAddressChanged` and `userEnquiryReceived` events. The application requires that those events are always read in the same order.
-
-  Consequently, those events need to go in the same Pulsar partition to maintain order. This application can use `ALWAYS_COMPATIBLE` to allow different kinds of events to co-exist in the same topic.
-
-* Example 2
-
-  Sometimes we also make incompatible changes.
-
-  For example, you are modifying a field type from `string` to `int`.
-
-  In this case, you need to:
-
-  * Upgrade all producers and consumers to the new schema versions at the same time.
-
-  * Optionally, create a new topic and start migrating applications to use the new topic and the new schema, avoiding the need to handle two incompatible versions in the same topic.
-
-### BACKWARD and BACKWARD_TRANSITIVE
-
-Suppose that you have a topic containing three schemas (V1, V2, and V3), where V1 is the oldest and V3 is the latest:
-
-| Compatibility check strategy | Definition | Description |
-|---|---|---|
-| `BACKWARD` | Consumers using the new schema can process data written by producers using the **last schema**. | The consumers using the schema V3 can process data written by producers using the schema V3 or V2. |
-| `BACKWARD_TRANSITIVE` | Consumers using the new schema can process data written by producers using **all previous schemas**. | The consumers using the schema V3 can process data written by producers using the schema V3, V2, or V1. |
-
-#### Example
-
-* Example 1
-
-  Remove a field.
-
-  A consumer constructed to process events without one field can process events written with the old schema that contains the field; the consumer simply ignores that field.
-
-* Example 2
-
-  You want to load all Pulsar data into a Hive data warehouse and run SQL queries against the data.
-
-  The same SQL queries must continue to work even when the data changes. To support this, you can evolve the schemas using the `BACKWARD` strategy.
-
-### FORWARD and FORWARD_TRANSITIVE
-
-Suppose that you have a topic containing three schemas (V1, V2, and V3), where V1 is the oldest and V3 is the latest:
-
-| Compatibility check strategy | Definition | Description |
-|---|---|---|
-| `FORWARD` | Consumers using the **last schema** can process data written by producers using a new schema, even though they may not be able to use the full capabilities of the new schema. | The consumers using the schema V3 or V2 can process data written by producers using the schema V3. |
-| `FORWARD_TRANSITIVE` | Consumers using **all previous schemas** can process data written by producers using a new schema. | The consumers using the schema V3, V2, or V1 can process data written by producers using the schema V3. |
-
-#### Example
-
-* Example 1
-
-  Add a field.
-
-  In most data formats, consumers written to process events without new fields can continue doing so even when they receive new events containing new fields.
-
-* Example 2
-
-  If a consumer has application logic tied to a full version of a schema, the application logic may not be updated instantly when the schema evolves.
-
-  In this case, you need to project data with a new schema onto an old schema that the application understands.
-
-  Consequently, you can evolve the schemas using the `FORWARD` strategy to ensure that the old schema can process data encoded with the new schema.
-
-### FULL and FULL_TRANSITIVE
-
-Suppose that you have a topic containing three schemas (V1, V2, and V3), where V1 is the oldest and V3 is the latest:
-
-| Compatibility check strategy | Definition | Description | Note |
-| --- | --- | --- | --- |
-| `FULL` | Schemas are both backward and forward compatible, which means: Consumers using the last schema can process data written by producers using the new schema. AND Consumers using the new schema can process data written by producers using the last schema. | Consumers using the schema V3 can process data written by producers using the schema V3 or V2. AND Consumers using the schema V3 or V2 can process data written by producers using the schema V3. | For Avro and JSON, the default schema compatibility check strategy is `FULL`. For all schema types except Avro and JSON, the default is `ALWAYS_INCOMPATIBLE`. |
-| `FULL_TRANSITIVE` | The new schema is backward and forward compatible with all previously registered schemas. | Consumers using the schema V3 can process data written by producers using the schema V3, V2 or V1. AND Consumers using the schema V3, V2 or V1 can process data written by producers using the schema V3. | None |
-
-#### Example
-
-In some data formats, for example, Avro, you can define fields with default values. Consequently, adding or removing a field with a default value is a fully compatible change.
-
-## Schema verification
-
-When a producer or a consumer tries to connect to a topic, a broker performs some checks to verify the schema.
-
-### Producer
-
-When a producer tries to connect to a topic (leaving schema auto-creation aside), the broker does the following checks:
-
-* Check if the schema carried by the producer exists in the schema registry or not.
-
-  * If the schema is already registered, then the producer is connected to the broker and produces messages with that schema.
-
-  * If the schema is not registered, then Pulsar verifies if the schema is allowed to be registered based on the configured compatibility check strategy.
-
-### Consumer
-When a consumer tries to connect to a topic, a broker checks if the carried schema is compatible with a registered schema based on the configured schema compatibility check strategy.
-
-| Compatibility check strategy | Check logic |
-| --- | --- |
-| `ALWAYS_COMPATIBLE` | All pass |
-| `ALWAYS_INCOMPATIBLE` | No pass |
-| `BACKWARD` | Can read the last schema |
-| `BACKWARD_TRANSITIVE` | Can read all schemas |
-| `FORWARD` | Can read the last schema |
-| `FORWARD_TRANSITIVE` | Can read the last schema |
-| `FULL` | Can read the last schema |
-| `FULL_TRANSITIVE` | Can read all schemas |
-
-## Order of upgrading clients
-
-The order in which you upgrade client applications is determined by the compatibility check strategy.
-
-For example, the producers use schemas to write data to Pulsar and the consumers use schemas to read data from Pulsar.
-
-| Compatibility check strategy | Upgrade first | Description |
-| --- | --- | --- |
-| `ALWAYS_COMPATIBLE` | Any order | The compatibility check is disabled. Consequently, you can upgrade the producers and consumers in **any order**. |
-| `ALWAYS_INCOMPATIBLE` | None | The schema evolution is disabled. |
-| `BACKWARD` and `BACKWARD_TRANSITIVE` | Consumers | There is no guarantee that consumers using the old schema can read data produced using the new schema. Consequently, **upgrade all consumers first**, and then start producing new data. |
-| `FORWARD` and `FORWARD_TRANSITIVE` | Producers | There is no guarantee that consumers using the new schema can read data produced using the old schema. Consequently, **upgrade all producers first** to use the new schema, ensure that the data already produced using the old schemas is not available to consumers, and then upgrade the consumers. |
-| `FULL` and `FULL_TRANSITIVE` | Any order | It is guaranteed that consumers using the old schema can read data produced using the new schema, and that consumers using the new schema can read data produced using the old schema. Consequently, you can upgrade the producers and consumers in **any order**. |
-
-
-
-
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/schema-get-started.md b/site2/website/versioned_docs/version-2.9.2-deprecated/schema-get-started.md
deleted file mode 100644
index afacb0fa51f2ef..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/schema-get-started.md
+++ /dev/null
@@ -1,102 +0,0 @@
----
-id: schema-get-started
-title: Get started
-sidebar_label: "Get started"
-original_id: schema-get-started
----
-
-This chapter introduces Pulsar schemas and explains why they are important.
-
-## Schema Registry
-
-Type safety is extremely important in any application built around a message bus like Pulsar.
-
-Producers and consumers need some kind of mechanism for coordinating types at the topic level to avoid various potential problems, such as serialization and deserialization issues.
-
-Applications typically adopt one of the following approaches to guarantee type safety in messaging. Both approaches are available in Pulsar, and you're free to adopt one or the other or to mix and match on a per-topic basis.
-
-#### Note
->
-> Currently, the Pulsar schema registry is only available for the [Java client](client-libraries-java.md), [CGo client](client-libraries-cgo.md), [Python client](client-libraries-python.md), and [C++ client](client-libraries-cpp.md).
-
-### Client-side approach
-
-Producers and consumers are responsible for not only serializing and deserializing messages (which consist of raw bytes) but also "knowing" which types are being transmitted via which topics.
-
-If a producer is sending temperature sensor data on the topic `topic-1`, consumers of that topic will run into trouble if they attempt to parse that data as moisture sensor readings.
-
-Producers and consumers can send and receive messages consisting of raw byte arrays and leave all type safety enforcement to the application on an "out-of-band" basis.
-
-### Server-side approach
-
-Producers and consumers inform the system which data types can be transmitted via the topic.
-
-With this approach, the messaging system enforces type safety and ensures that producers and consumers remain synced.
-
-Pulsar has a built-in **schema registry** that enables clients to upload data schemas on a per-topic basis. Those schemas dictate which data types are recognized as valid for that topic.
-
-## Why use schema
-
-When a schema is not enabled, Pulsar does not parse data; it takes bytes as inputs and sends bytes as outputs. But data has meaning beyond bytes, so you need to parse it and might encounter parse exceptions, which mainly occur in the following situations:
-
-* The field does not exist
-
-* The field type has changed (for example, `string` is changed to `int`)
-
-There are a few methods to prevent and overcome these exceptions. For example, you can catch exceptions on parsing errors, but that makes code hard to maintain. Alternatively, you can adopt a schema management system that performs schema evolution without breaking downstream applications and enforces type safety to the maximum extent in the language you are using. That solution is Pulsar Schema.
- -Pulsar schema enables you to use language-specific types of data when constructing and handling messages from simple types like `string` to more complex application-specific types. - -**Example** - -You can use the _User_ class to define the messages sent to Pulsar topics. - -``` - -public class User { - String name; - int age; -} - -``` - -When constructing a producer with the _User_ class, you can specify a schema or not as below. - -### Without schema - -If you construct a producer without specifying a schema, then the producer can only produce messages of type `byte[]`. If you have a POJO class, you need to serialize the POJO into bytes before sending messages. - -**Example** - -``` - -Producer producer = client.newProducer() - .topic(topic) - .create(); -User user = new User("Tom", 28); -byte[] message = … // serialize the `user` by yourself; -producer.send(message); - -``` - -### With schema - -If you construct a producer with specifying a schema, then you can send a class to a topic directly without worrying about how to serialize POJOs into bytes. - -**Example** - -This example constructs a producer with the _JSONSchema_, and you can send the _User_ class to topics directly without worrying about how to serialize it into bytes. - -``` - -Producer producer = client.newProducer(JSONSchema.of(User.class)) - .topic(topic) - .create(); -User user = new User("Tom", 28); -producer.send(user); - -``` - -### Summary - -When constructing a producer with a schema, you do not need to serialize messages into bytes, instead Pulsar schema does this job in the background. diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/schema-manage.md b/site2/website/versioned_docs/version-2.9.2-deprecated/schema-manage.md deleted file mode 100644 index c588aae619eee9..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/schema-manage.md +++ /dev/null @@ -1,639 +0,0 @@ ---- -id: schema-manage -title: Manage schema -sidebar_label: "Manage schema" -original_id: schema-manage ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -This guide demonstrates the ways to manage schemas: - -* Automatically - - * [Schema AutoUpdate](#schema-autoupdate) - -* Manually - - * [Schema manual management](#schema-manual-management) - - * [Custom schema storage](#custom-schema-storage) - -## Schema AutoUpdate - -If a schema passes the schema compatibility check, Pulsar producer automatically updates this schema to the topic it produces by default. - -### AutoUpdate for producer - -For a producer, the `AutoUpdate` happens in the following cases: - -* If a **topic doesn’t have a schema**, Pulsar registers a schema automatically. - -* If a **topic has a schema**: - - * If a **producer doesn’t carry a schema**: - - * If `isSchemaValidationEnforced` or `schemaValidationEnforced` is **disabled** in the namespace to which the topic belongs, the producer is allowed to connect to the topic and produce data. - - * If `isSchemaValidationEnforced` or `schemaValidationEnforced` is **enabled** in the namespace to which the topic belongs, the producer is rejected and disconnected. - - * If a **producer carries a schema**: - - A broker performs the compatibility check based on the configured compatibility check strategy of the namespace to which the topic belongs. - - * If the schema is registered, a producer is connected to a broker. 
- - * If the schema is not registered: - - * If `isAllowAutoUpdateSchema` sets to **false**, the producer is rejected to connect to a broker. - - * If `isAllowAutoUpdateSchema` sets to **true**: - - * If the schema passes the compatibility check, then the broker registers a new schema automatically for the topic and the producer is connected. - - * If the schema does not pass the compatibility check, then the broker does not register a schema and the producer is rejected to connect to a broker. - -![AutoUpdate Producer](/assets/schema-producer.png) - -### AutoUpdate for consumer - -For a consumer, the `AutoUpdate` happens in the following cases: - -* If a **consumer connects to a topic without a schema** (which means the consumer receiving raw bytes), the consumer can connect to the topic successfully without doing any compatibility check. - -* If a **consumer connects to a topic with a schema**. - - * If a topic does not have all of them (a schema/data/a local consumer and a local producer): - - * If `isAllowAutoUpdateSchema` sets to **true**, then the consumer registers a schema and it is connected to a broker. - - * If `isAllowAutoUpdateSchema` sets to **false**, then the consumer is rejected to connect to a broker. - - * If a topic has one of them (a schema/data/a local consumer and a local producer), then the schema compatibility check is performed. - - * If the schema passes the compatibility check, then the consumer is connected to the broker. - - * If the schema does not pass the compatibility check, then the consumer is rejected to connect to the broker. - -![AutoUpdate Consumer](/assets/schema-consumer.png) - - -### Manage AutoUpdate strategy - -You can use the `pulsar-admin` command to manage the `AutoUpdate` strategy as below: - -* [Enable AutoUpdate](#enable-autoupdate) - -* [Disable AutoUpdate](#disable-autoupdate) - -* [Adjust compatibility](#adjust-compatibility) - -#### Enable AutoUpdate - -To enable `AutoUpdate` on a namespace, you can use the `pulsar-admin` command. - -```bash - -bin/pulsar-admin namespaces set-is-allow-auto-update-schema --enable tenant/namespace - -``` - -#### Disable AutoUpdate - -To disable `AutoUpdate` on a namespace, you can use the `pulsar-admin` command. - -```bash - -bin/pulsar-admin namespaces set-is-allow-auto-update-schema --disable tenant/namespace - -``` - -Once the `AutoUpdate` is disabled, you can only register a new schema using the `pulsar-admin` command. - -#### Adjust compatibility - -To adjust the schema compatibility level on a namespace, you can use the `pulsar-admin` command. - -```bash - -bin/pulsar-admin namespaces set-schema-compatibility-strategy --compatibility tenant/namespace - -``` - -### Schema validation - -By default, `schemaValidationEnforced` is **disabled** for producers: - -* This means a producer without a schema can produce any kind of messages to a topic with schemas, which may result in producing trash data to the topic. - -* This allows non-java language clients that don’t support schema can produce messages to a topic with schemas. - -However, if you want a stronger guarantee on the topics with schemas, you can enable `schemaValidationEnforced` across the whole cluster or on a per-namespace basis. - -#### Enable schema validation - -To enable `schemaValidationEnforced` on a namespace, you can use the `pulsar-admin` command. 
-
-```bash
-
-bin/pulsar-admin namespaces set-schema-validation-enforce --enable tenant/namespace
-
-```
-
-#### Disable schema validation
-
-To disable `schemaValidationEnforced` on a namespace, you can use the `pulsar-admin` command.
-
-```bash
-
-bin/pulsar-admin namespaces set-schema-validation-enforce --disable tenant/namespace
-
-```
-
-## Schema manual management
-
-To manage schemas, you can use one of the following methods.
-
-| Method | Description |
-| --- | --- |
-| **Admin CLI** | You can use the `pulsar-admin` tool to manage Pulsar schemas, brokers, clusters, sources, sinks, topics, tenants and so on. For more information about how to use the `pulsar-admin` tool, see [here](reference-pulsar-admin.md). |
-| **REST API** | Pulsar exposes schema-related management API in Pulsar's admin RESTful API. You can access the admin RESTful endpoint directly to manage schemas. For more information about how to use the Pulsar REST API, see [here](http://pulsar.apache.org/admin-rest-api/). |
-| **Java Admin API** | Pulsar provides a Java admin library. |
-
-### Upload a schema
-
-To upload (register) a new schema for a topic, you can use one of the following methods.
-
-````mdx-code-block
-<Tabs defaultValue="Admin CLI"
-  values={[{"label":"Admin CLI","value":"Admin CLI"},{"label":"REST API","value":"REST API"},{"label":"Java Admin API","value":"Java Admin API"}]}>
-<TabItem value="Admin CLI">
-
-Use the `upload` subcommand.
-
-```bash
-
-$ pulsar-admin schemas upload --filename <schema-definition-file>
-
-```
-
-The `schema-definition-file` is in JSON format.
-
-```json
-
-{
-  "type": "<schema-type>",
-  "schema": "<an-utf8-encoded-string-of-schema-definition-data>",
-  "properties": {} // the properties associated with the schema
-}
-
-```
-
-The `schema-definition-file` includes the following fields:
-
-| Field | Description |
-| --- | --- |
-| `type` | The schema type. |
-| `schema` | The schema definition data, which is encoded in UTF-8 charset. If the schema is a **primitive** schema, this field should be blank. If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition. |
-| `properties` | The additional properties associated with the schema. |
-
-Here are examples of the `schema-definition-file` for a JSON schema.
-
-**Example 1**
-
-```json
-
-{
-  "type": "JSON",
-  "schema": "{\"type\":\"record\",\"name\":\"User\",\"namespace\":\"com.foo\",\"fields\":[{\"name\":\"file1\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"file2\",\"type\":\"string\",\"default\":null},{\"name\":\"file3\",\"type\":[\"null\",\"string\"],\"default\":\"dfdf\"}]}",
-  "properties": {}
-}
-
-```
-
-**Example 2**
-
-```json
-
-{
-  "type": "STRING",
-  "schema": "",
-  "properties": {
-    "key1": "value1"
-  }
-}
-
-```
-
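-For instance, assuming the Example 1 definition above is saved as `avro-schema.json` (the file name and topic are illustrative), the upload might look like:
-
-```bash
-
-$ pulsar-admin schemas upload persistent://my-tenant/my-ns/my-topic --filename avro-schema.json
-
-```
-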
-</TabItem>
-<TabItem value="REST API">
-
-Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v2/schemas/:tenant/:namespace/:topic/schema|operation/uploadSchema?version=@pulsar:version_number@}
-
-The post payload is in JSON format.
-
-```json
-
-{
-  "type": "<schema-type>",
-  "schema": "<an-utf8-encoded-string-of-schema-definition-data>",
-  "properties": {} // the properties associated with the schema
-}
-
-```
-
-The post payload includes the following fields:
-
-| Field | Description |
-| --- | --- |
-| `type` | The schema type. |
-| `schema` | The schema definition data, which is encoded in UTF-8 charset. If the schema is a **primitive** schema, this field should be blank. If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition. |
-| `properties` | The additional properties associated with the schema. |
-
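-For instance, a sketch with `curl` (host, port, and names are illustrative; an unauthenticated cluster is assumed):
-
-```bash
-
-curl -X POST http://localhost:8080/admin/v2/schemas/my-tenant/my-ns/my-topic/schema \
-  -H "Content-Type: application/json" \
-  -d '{"type": "STRING", "schema": "", "properties": {}}'
-
-```
-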
-</TabItem>
-<TabItem value="Java Admin API">
-
-```java
-
-void createSchema(String topic, PostSchemaPayload schemaPayload)
-
-```
-
-The `PostSchemaPayload` includes the following fields:
-
-| Field | Description |
-| --- | --- |
-| `type` | The schema type. |
-| `schema` | The schema definition data, which is encoded in UTF-8 charset. If the schema is a **primitive** schema, this field should be blank. If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition. |
-| `properties` | The additional properties associated with the schema. |
-
-Here is an example of `PostSchemaPayload`:
-
-```java
-
-PulsarAdmin admin = …;
-
-PostSchemaPayload payload = new PostSchemaPayload();
-payload.setType("INT8");
-payload.setSchema("");
-
-admin.createSchema("my-tenant/my-ns/my-topic", payload);
-
-```
-
-</TabItem>
-</Tabs>
-````
-
-### Get a schema (latest)
-
-To get the latest schema for a topic, you can use one of the following methods.
-
-````mdx-code-block
-<Tabs defaultValue="Admin CLI"
-  values={[{"label":"Admin CLI","value":"Admin CLI"},{"label":"REST API","value":"REST API"},{"label":"Java Admin API","value":"Java Admin API"}]}>
-<TabItem value="Admin CLI">
-
-Use the `get` subcommand.
-
-```bash
-
-$ pulsar-admin schemas get <topic-name>
-
-{
-  "version": 0,
-  "type": "String",
-  "timestamp": 0,
-  "data": "string",
-  "properties": {
-    "property1": "string",
-    "property2": "string"
-  }
-}
-
-```
-
-</TabItem>
-<TabItem value="REST API">
-
-Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v2/schemas/:tenant/:namespace/:topic/schema|operation/getSchema?version=@pulsar:version_number@}
-
-Here is an example of a response, which is returned in JSON format.
-
-```json
-
-{
-  "version": "<the-version-number-of-the-schema>",
-  "type": "<the-schema-type>",
-  "timestamp": "<the-creation-timestamp-of-the-version-of-the-schema>",
-  "data": "<an-utf8-encoded-string-of-schema-definition-data>",
-  "properties": {} // the properties associated with the schema
-}
-
-```
-
-The response includes the following fields:
-
-| Field | Description |
-| --- | --- |
-| `version` | The schema version, which is a long number. |
-| `type` | The schema type. |
-| `timestamp` | The timestamp of creating this version of schema. |
-| `data` | The schema definition data, which is encoded in UTF-8 charset. If the schema is a **primitive** schema, this field should be blank. If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition. |
-| `properties` | The additional properties associated with the schema. |
-
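-For instance, a sketch with `curl` (host, port, and names are illustrative; an unauthenticated cluster is assumed):
-
-```bash
-
-curl http://localhost:8080/admin/v2/schemas/my-tenant/my-ns/my-topic/schema
-
-```
-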
-</TabItem>
-<TabItem value="Java Admin API">
-
-```java
-
-SchemaInfo getSchema(String topic)
-
-```
-
-The `SchemaInfo` includes the following fields:
-
-| Field | Description |
-| --- | --- |
-| `name` | The schema name. |
-| `type` | The schema type. |
-| `schema` | A byte array of the schema definition data, which is encoded in UTF-8 charset. If the schema is a **primitive** schema, this byte array should be empty. If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition converted to a byte array. |
-| `properties` | The additional properties associated with the schema. |
-
-Here is an example of `SchemaInfo`:
-
-```java
-
-PulsarAdmin admin = …;
-
-SchemaInfo si = admin.getSchema("my-tenant/my-ns/my-topic");
-
-```
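-
-As a quick follow-up, the retrieved `SchemaInfo` can be inspected directly (a sketch; the printed values depend on the topic's schema):
-
-```java
-
-System.out.println(si.getType());             // the schema type, for example AVRO
-System.out.println(si.getSchemaDefinition()); // the schema definition as a string
-
-```
-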
-</TabItem>
-</Tabs>
-````
-
-### Get a schema (specific)
-
-To get a specific version of a schema, you can use one of the following methods.
-
-````mdx-code-block
-<Tabs defaultValue="Admin CLI"
-  values={[{"label":"Admin CLI","value":"Admin CLI"},{"label":"REST API","value":"REST API"},{"label":"Java Admin API","value":"Java Admin API"}]}>
-<TabItem value="Admin CLI">
-
-Use the `get` subcommand.
-
-```bash
-
-$ pulsar-admin schemas get <topic-name> --version=<version>
-
-```
-
-</TabItem>
-<TabItem value="REST API">
-
-Send a `GET` request to a schema endpoint: {@inject: endpoint|GET|/admin/v2/schemas/:tenant/:namespace/:topic/schema/:version|operation/getSchema?version=@pulsar:version_number@}
-
-Here is an example of a response, which is returned in JSON format.
-
-```json
-
-{
-  "version": "<the-version-number-of-the-schema>",
-  "type": "<the-schema-type>",
-  "timestamp": "<the-creation-timestamp-of-the-version-of-the-schema>",
-  "data": "<an-utf8-encoded-string-of-schema-definition-data>",
-  "properties": {} // the properties associated with the schema
-}
-
-```
-
-The response includes the following fields:
-
-| Field | Description |
-| --- | --- |
-| `version` | The schema version, which is a long number. |
-| `type` | The schema type. |
-| `timestamp` | The timestamp of creating this version of schema. |
-| `data` | The schema definition data, which is encoded in UTF-8 charset. If the schema is a **primitive** schema, this field should be blank. If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition. |
-| `properties` | The additional properties associated with the schema. |
-
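-For instance, a sketch with `curl` that fetches version 1 (host, port, and names are illustrative):
-
-```bash
-
-curl http://localhost:8080/admin/v2/schemas/my-tenant/my-ns/my-topic/schema/1
-
-```
-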
-</TabItem>
-<TabItem value="Java Admin API">
-
-```java
-
-SchemaInfo getSchema(String topic, long version)
-
-```
-
-The `SchemaInfo` includes the following fields:
-
-| Field | Description |
-| --- | --- |
-| `name` | The schema name. |
-| `type` | The schema type. |
-| `schema` | A byte array of the schema definition data, which is encoded in UTF-8. If the schema is a **primitive** schema, this byte array should be empty. If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition converted to a byte array. |
-| `properties` | The additional properties associated with the schema. |
-
-Here is an example of `SchemaInfo`:
-
-```java
-
-PulsarAdmin admin = …;
-
-SchemaInfo si = admin.getSchema("my-tenant/my-ns/my-topic", 1L);
-
-```
-
-</TabItem>
-</Tabs>
-
    -```` - -### Extract a schema - -To provide a schema via a topic, you can use the following method. - -````mdx-code-block - - - - -Use the `extract` subcommand. - -```bash - -$ pulsar-admin schemas extract --classname --jar --type - -``` - - - - -```` - -### Delete a schema - -To delete a schema for a topic, you can use one of the following methods. - -:::note - -In any case, the **delete** action deletes **all versions** of a schema registered for a topic. - -::: - -````mdx-code-block - - - - -Use the `delete` subcommand. - -```bash - -$ pulsar-admin schemas delete - -``` - - - - -Send a `DELETE` request to a schema endpoint: {@inject: endpoint|DELETE|/admin/v2/schemas/:tenant/:namespace/:topic/schema|operation/deleteSchema?version=@pulsar:version_number@} - -Here is an example of a response, which is returned in JSON format. - -```json - -{ - "version": "", -} - -``` - -The response includes the following field: - -Field | Description | ----|---| -`version` | The schema version, which is a long number. | - - - - -```java - -void deleteSchema(String topic) - -``` - -Here is an example of deleting a schema. - -```java - -PulsarAdmin admin = …; - -admin.deleteSchema("my-tenant/my-ns/my-topic"); - -``` - - - - -```` - -## Custom schema storage - -By default, Pulsar stores various data types of schemas in [Apache BookKeeper](https://bookkeeper.apache.org) deployed alongside Pulsar. - -However, you can use another storage system if needed. - -### Implement - -To use a non-default (non-BookKeeper) storage system for Pulsar schemas, you need to implement the following Java interfaces: - -* [SchemaStorage interface](#schemastorage-interface) - -* [SchemaStorageFactory interface](#schemastoragefactory-interface) - -#### SchemaStorage interface - -The `SchemaStorage` interface has the following methods: - -```java - -public interface SchemaStorage { - // How schemas are updated - CompletableFuture put(String key, byte[] value, byte[] hash); - - // How schemas are fetched from storage - CompletableFuture get(String key, SchemaVersion version); - - // How schemas are deleted - CompletableFuture delete(String key); - - // Utility method for converting a schema version byte array to a SchemaVersion object - SchemaVersion versionFromBytes(byte[] version); - - // Startup behavior for the schema storage client - void start() throws Exception; - - // Shutdown behavior for the schema storage client - void close() throws Exception; -} - -``` - -:::tip - -For a complete example of **schema storage** implementation, see [BookKeeperSchemaStorage](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/schema/BookkeeperSchemaStorage.java) class. - -::: - -#### SchemaStorageFactory interface - -The `SchemaStorageFactory` interface has the following method: - -```java - -public interface SchemaStorageFactory { - @NotNull - SchemaStorage create(PulsarService pulsar) throws Exception; -} - -``` - -:::tip - -For a complete example of **schema storage factory** implementation, see [BookKeeperSchemaStorageFactory](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/schema/BookkeeperSchemaStorageFactory.java) class. - -::: - -### Deploy - -To use your custom schema storage implementation, perform the following steps. - -1. Package the implementation in a [JAR](https://docs.oracle.com/javase/tutorial/deployment/jar/basicsindex.html) file. - -2. 
Add the JAR file to the `lib` folder in your Pulsar binary or source distribution.
-
-3. Change the `schemaRegistryStorageClassName` configuration in `broker.conf` to your custom factory class.
-
-4. Start Pulsar.
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/schema-understand.md b/site2/website/versioned_docs/version-2.9.2-deprecated/schema-understand.md
deleted file mode 100644
index 55bc662c666338..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/schema-understand.md
+++ /dev/null
@@ -1,576 +0,0 @@
----
-id: schema-understand
-title: Understand schema
-sidebar_label: "Understand schema"
-original_id: schema-understand
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-This chapter explains the basic concepts of Pulsar schema, focuses on the topics of particular importance, and provides additional background.
-
-## SchemaInfo
-
-Pulsar schema is defined in a data structure called `SchemaInfo`.
-
-The `SchemaInfo` is stored and enforced on a per-topic basis and cannot be stored at the namespace or tenant level.
-
-A `SchemaInfo` consists of the following fields:
-
-| Field | Description |
-| --- | --- |
-| `name` | Schema name (a string). |
-| `type` | Schema type, which determines how to interpret the schema data. For a predefined schema type, see [here](schema-understand.md#schema-type). For a customized schema, it is left as an empty string. |
-| `schema` (`payload`) | Schema data, which is a sequence of 8-bit unsigned bytes and is schema-type specific. |
-| `properties` | A user-defined string/string map of properties. Applications can use this bag to carry any application-specific logic. Possible properties might be the Git hash associated with the schema, or an environment string like `dev` or `prod`. |
-
-**Example**
-
-This is the `SchemaInfo` of a string.
-
-```json
-
-{
-  "name": "test-string-schema",
-  "type": "STRING",
-  "schema": "",
-  "properties": {}
-}
-
-```
-
-## Schema type
-
-Pulsar supports various schema types, which are mainly divided into two categories:
-
-* Primitive type
-
-* Complex type
-
-### Primitive type
-
-Currently, Pulsar supports the following primitive types:
-
-| Primitive Type | Description |
-|---|---|
-| `BOOLEAN` | A binary value |
-| `INT8` | An 8-bit signed integer |
-| `INT16` | A 16-bit signed integer |
-| `INT32` | A 32-bit signed integer |
-| `INT64` | A 64-bit signed integer |
-| `FLOAT` | A single precision (32-bit) IEEE 754 floating-point number |
-| `DOUBLE` | A double-precision (64-bit) IEEE 754 floating-point number |
-| `BYTES` | A sequence of 8-bit unsigned bytes |
-| `STRING` | A Unicode character sequence |
-| `TIMESTAMP` (`DATE`, `TIME`) | A logical type representing a specific instant in time with millisecond precision. It stores the number of milliseconds since `January 1, 1970, 00:00:00 GMT` as an `INT64` value |
-| INSTANT | A single instantaneous point on the time-line with nanoseconds precision |
-| LOCAL_DATE | An immutable date-time object that represents a date, often viewed as year-month-day |
-| LOCAL_TIME | An immutable date-time object that represents a time, often viewed as hour-minute-second. Time is represented to nanosecond precision. |
-| LOCAL_DATE_TIME | An immutable date-time object that represents a date-time, often viewed as year-month-day-hour-minute-second |
-
-For primitive types, Pulsar does not store any schema data in `SchemaInfo`. The `type` in `SchemaInfo` is used to determine how to serialize and deserialize the data.
-
-Some of the primitive schema implementations can use `properties` to store implementation-specific tunable settings. For example, a `string` schema can use `properties` to store the encoding charset to serialize and deserialize strings.
-
-The conversions between **Pulsar schema types** and **language-specific primitive types** are as below.
-
-| Schema Type | Java Type | Python Type | Go Type |
-|---|---|---|---|
-| BOOLEAN | boolean | bool | bool |
-| INT8 | byte | | int8 |
-| INT16 | short | | int16 |
-| INT32 | int | | int32 |
-| INT64 | long | | int64 |
-| FLOAT | float | float | float32 |
-| DOUBLE | double | float | float64 |
-| BYTES | byte[], ByteBuffer, ByteBuf | bytes | []byte |
-| STRING | string | str | string |
-| TIMESTAMP | java.sql.Timestamp | | |
-| TIME | java.sql.Time | | |
-| DATE | java.util.Date | | |
-| INSTANT | java.time.Instant | | |
-| LOCAL_DATE | java.time.LocalDate | | |
-| LOCAL_TIME | java.time.LocalTime | | |
-| LOCAL_DATE_TIME | java.time.LocalDateTime | | |
-
-**Example**
-
-This example demonstrates how to use a string schema.
-
-1. Create a producer with a string schema and send messages.
-
-   ```java
-
-   Producer<String> producer = client.newProducer(Schema.STRING).create();
-   producer.newMessage().value("Hello Pulsar!").send();
-
-   ```
-
-2. Create a consumer with a string schema and receive messages.
-
-   ```java
-
-   Consumer<String> consumer = client.newConsumer(Schema.STRING).subscribe();
-   consumer.receive();
-
-   ```
-
-### Complex type
-
-Currently, Pulsar supports the following complex types:
-
-| Complex Type | Description |
-|---|---|
-| `keyvalue` | Represents a complex type of a key/value pair. |
-| `struct` | Handles structured data. It supports `AvroBaseStructSchema` and `ProtobufNativeSchema`. |
-
-#### keyvalue
-
-`Keyvalue` schema helps applications define schemas for both key and value.
-
-For `SchemaInfo` of `keyvalue` schema, Pulsar stores the `SchemaInfo` of the key schema and the `SchemaInfo` of the value schema together.
-
-Pulsar provides the following methods to encode a key/value pair in messages:
-
-* `INLINE`
-
-* `SEPARATED`
-
-You can choose the encoding type when constructing the key/value schema.
-
-````mdx-code-block
-<Tabs defaultValue="INLINE"
-  values={[{"label":"INLINE","value":"INLINE"},{"label":"SEPARATED","value":"SEPARATED"}]}>
-<TabItem value="INLINE">
-
-Key/value pairs are encoded together in the message payload.
-
-</TabItem>
-<TabItem value="SEPARATED">
-
-The key is encoded in the message key and the value is encoded in the message payload.
-
-**Example**
-
-This example shows how to construct a key/value schema and then use it to produce and consume messages.
-
-1. Construct a key/value schema with `INLINE` encoding type.
-
-   ```java
-
-   Schema<KeyValue<Integer, String>> kvSchema = Schema.KeyValue(
-       Schema.INT32,
-       Schema.STRING,
-       KeyValueEncodingType.INLINE
-   );
-
-   ```
-
-2. Optionally, construct a key/value schema with `SEPARATED` encoding type.
- - ```java - - Schema> kvSchema = Schema.KeyValue( - Schema.INT32, - Schema.STRING, - KeyValueEncodingType.SEPARATED - ); - - ``` - -3. Produce messages using a key/value schema. - - ```java - - Schema> kvSchema = Schema.KeyValue( - Schema.INT32, - Schema.STRING, - KeyValueEncodingType.SEPARATED - ); - - Producer> producer = client.newProducer(kvSchema) - .topic(TOPIC) - .create(); - - final int key = 100; - final String value = "value-100"; - - // send the key/value message - producer.newMessage() - .value(new KeyValue(key, value)) - .send(); - - ``` - -4. Consume messages using a key/value schema. - - ```java - - Schema> kvSchema = Schema.KeyValue( - Schema.INT32, - Schema.STRING, - KeyValueEncodingType.SEPARATED - ); - - Consumer> consumer = client.newConsumer(kvSchema) - ... - .topic(TOPIC) - .subscriptionName(SubscriptionName).subscribe(); - - // receive key/value pair - Message> msg = consumer.receive(); - KeyValue kv = msg.getValue(); - - ``` - - - - -```` - -#### struct - -This section describes the details of type and usage of the `struct` schema. - -##### Type - -`struct` schema supports `AvroBaseStructSchema` and `ProtobufNativeSchema`. - -|Type|Description| ----|---| -`AvroBaseStructSchema`|Pulsar uses [Avro Specification](http://avro.apache.org/docs/current/spec.html) to declare the schema definition for `AvroBaseStructSchema`, which supports `AvroSchema`, `JsonSchema`, and `ProtobufSchema`.

    This allows Pulsar:
    - to use the same tools to manage schema definitions
    - to use different serialization or deserialization methods to handle data| -`ProtobufNativeSchema`|`ProtobufNativeSchema` is based on protobuf native Descriptor.

    This allows Pulsar:
    - to use native protobuf-v3 to serialize or deserialize data
    - to use `AutoConsume` to deserialize data. - -##### Usage - -Pulsar provides the following methods to use the `struct` schema: - -* `static` - -* `generic` - -* `SchemaDefinition` - -````mdx-code-block - - - - -You can predefine the `struct` schema, which can be a POJO in Java, a `struct` in Go, or classes generated by Avro or Protobuf tools. - -**Example** - -Pulsar gets the schema definition from the predefined `struct` using an Avro library. The schema definition is the schema data stored as a part of the `SchemaInfo`. - -1. Create the _User_ class to define the messages sent to Pulsar topics. - - ```java - - @Builder - @AllArgsConstructor - @NoArgsConstructor - public static class User { - String name; - int age; - } - - ``` - -2. Create a producer with a `struct` schema and send messages. - - ```java - - Producer producer = client.newProducer(Schema.AVRO(User.class)).create(); - producer.newMessage().value(User.builder().name("pulsar-user").age(1).build()).send(); - - ``` - -3. Create a consumer with a `struct` schema and receive messages - - ```java - - Consumer consumer = client.newConsumer(Schema.AVRO(User.class)).subscribe(); - User user = consumer.receive(); - - ``` - - - - -Sometimes applications do not have pre-defined structs, and you can use this method to define schema and access data. - -You can define the `struct` schema using the `GenericSchemaBuilder`, generate a generic struct using `GenericRecordBuilder` and consume messages into `GenericRecord`. - -**Example** - -1. Use `RecordSchemaBuilder` to build a schema. - - ```java - - RecordSchemaBuilder recordSchemaBuilder = SchemaBuilder.record("schemaName"); - recordSchemaBuilder.field("intField").type(SchemaType.INT32); - SchemaInfo schemaInfo = recordSchemaBuilder.build(SchemaType.AVRO); - - Producer producer = client.newProducer(Schema.generic(schemaInfo)).create(); - - ``` - -2. Use `RecordBuilder` to build the struct records. - - ```java - - producer.newMessage().value(schema.newRecordBuilder() - .set("intField", 32) - .build()).send(); - - ``` - - - - -You can define the `schemaDefinition` to generate a `struct` schema. - -**Example** - -1. Create the _User_ class to define the messages sent to Pulsar topics. - - ```java - - @Builder - @AllArgsConstructor - @NoArgsConstructor - public static class User { - String name; - int age; - } - - ``` - -2. Create a producer with a `SchemaDefinition` and send messages. - - ```java - - SchemaDefinition schemaDefinition = SchemaDefinition.builder().withPojo(User.class).build(); - Producer producer = client.newProducer(Schema.AVRO(schemaDefinition)).create(); - producer.newMessage().value(User.builder().name("pulsar-user").age(1).build()).send(); - - ``` - -3. Create a consumer with a `SchemaDefinition` schema and receive messages - - ```java - - SchemaDefinition schemaDefinition = SchemaDefinition.builder().withPojo(User.class).build(); - Consumer consumer = client.newConsumer(Schema.AVRO(schemaDefinition)).subscribe(); - User user = consumer.receive().getValue(); - - ``` - - - - -```` - -### Auto Schema - -If you don't know the schema type of a Pulsar topic in advance, you can use AUTO schema to produce or consume generic records to or from brokers. - -| Auto Schema Type | Description | -|---|---| -| `AUTO_PRODUCE` | This is useful for transferring data **from a producer to a Pulsar topic that has a schema**. | -| `AUTO_CONSUME` | This is useful for transferring data **from a Pulsar topic that has a schema to a consumer**. 
| - -#### AUTO_PRODUCE - -`AUTO_PRODUCE` schema helps a producer validate whether the bytes sent by the producer is compatible with the schema of a topic. - -**Example** - -Suppose that: - -* You have a producer processing messages from a Kafka topic _K_. - -* You have a Pulsar topic _P_, and you do not know its schema type. - -* Your application reads the messages from _K_ and writes the messages to _P_. - -In this case, you can use `AUTO_PRODUCE` to verify whether the bytes produced by _K_ can be sent to _P_ or not. - -```java - -Produce pulsarProducer = client.newProducer(Schema.AUTO_PRODUCE()) - … - .create(); - -byte[] kafkaMessageBytes = … ; - -pulsarProducer.produce(kafkaMessageBytes); - -``` - -#### AUTO_CONSUME - -`AUTO_CONSUME` schema helps a Pulsar topic validate whether the bytes sent by a Pulsar topic is compatible with a consumer, that is, the Pulsar topic deserializes messages into language-specific objects using the `SchemaInfo` retrieved from broker-side. - -Currently, `AUTO_CONSUME` supports AVRO, JSON and ProtobufNativeSchema schemas. It deserializes messages into `GenericRecord`. - -**Example** - -Suppose that: - -* You have a Pulsar topic _P_. - -* You have a consumer (for example, MySQL) receiving messages from the topic _P_. - -* Your application reads the messages from _P_ and writes the messages to MySQL. - -In this case, you can use `AUTO_CONSUME` to verify whether the bytes produced by _P_ can be sent to MySQL or not. - -```java - -Consumer pulsarConsumer = client.newConsumer(Schema.AUTO_CONSUME()) - … - .subscribe(); - -Message msg = consumer.receive() ; -GenericRecord record = msg.getValue(); - -``` - -### Native Avro Schema - -When migrating or ingesting event or message data from external systems (such as Kafka and Cassandra), the events are often already serialized in Avro format. The applications producing the data typically have validated the data against their schemas (including compatibility checks) and stored them in a database or a dedicated service (such as a schema registry). The schema of each serialized data record is usually retrievable by some metadata attached to that record. In such cases, a Pulsar producer doesn't need to repeat the schema validation step when sending the ingested events to a topic. All it needs to do is passing each message or event with its schema to Pulsar. - -Hence, we provide `Schema.NATIVE_AVRO` to wrap a native Avro schema of type `org.apache.avro.Schema`. The result is a schema instance of Pulsar that accepts a serialized Avro payload without validating it against the wrapped Avro schema. - -**Example** - -```java - -org.apache.avro.Schema nativeAvroSchema = … ; - -Producer producer = pulsarClient.newProducer().topic("ingress").create(); - -byte[] content = … ; - -producer.newMessage(Schema.NATIVE_AVRO(nativeAvroSchema)).value(content).send(); - -``` - -## Schema version - -Each `SchemaInfo` stored with a topic has a version. Schema version manages schema changes happening within a topic. - -Messages produced with a given `SchemaInfo` is tagged with a schema version, so when a message is consumed by a Pulsar client, the Pulsar client can use the schema version to retrieve the corresponding `SchemaInfo` and then use the `SchemaInfo` to deserialize data. - -Schemas are versioned in succession. Schema storage happens in a broker that handles the associated topics so that version assignments can be made. 
-
-Once a version is assigned to or fetched for a schema, all subsequent messages produced by that producer are tagged with the appropriate version.
-
-**Example**
-
-The following example illustrates how the schema version works.
-
-Suppose that a Pulsar [Java client](client-libraries-java.md) created using the code below attempts to connect to Pulsar and begins to send messages:
-
-```java
-
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar://localhost:6650")
-        .build();
-
-Producer<SensorReading> producer = client.newProducer(JSONSchema.of(SensorReading.class))
-        .topic("sensor-data")
-        .sendTimeout(3, TimeUnit.SECONDS)
-        .create();
-
-```
-
-The table below lists the possible scenarios when this connection attempt occurs and what happens in each scenario:
-
-| Scenario | What happens |
-| --- | --- |
-| No schema exists for the topic. | (1) The producer is created using the given schema. (2) Since no existing schema is compatible with the `SensorReading` schema, the schema is transmitted to the broker and stored. (3) Any consumer created using the same schema or topic can consume messages from the `sensor-data` topic. |
-| A schema already exists. The producer connects using the same schema that is already stored. | (1) The schema is transmitted to the broker. (2) The broker determines that the schema is compatible. (3) The broker attempts to store the schema in [BookKeeper](concepts-architecture-overview.md#persistent-storage) but then determines that it's already stored, so it is used to tag produced messages. |
-| A schema already exists. The producer connects using a new schema that is compatible. | (1) The schema is transmitted to the broker. (2) The broker determines that the schema is compatible and stores the new schema as the current version (with a new version number). |
-
-## How does schema work
-
-Pulsar schemas are applied and enforced at the **topic** level (schemas cannot be applied at the namespace or tenant level).
-
-Producers and consumers upload schemas to brokers, so Pulsar schemas work on the producer side and the consumer side.
-
-### Producer side
-
-This diagram illustrates how schema works on the producer side.
-
-![Schema works at the producer side](/assets/schema-producer.png)
-
-1. The application uses a schema instance to construct a producer instance.
-
-   The schema instance defines the schema for the data being produced using the producer instance.
-
-   Take AVRO as an example: Pulsar extracts the schema definition from the POJO class and constructs the `SchemaInfo` that the producer needs to pass to a broker when it connects.
-
-2. The producer connects to the broker with the `SchemaInfo` extracted from the passed-in schema instance.
-
-3. The broker looks up the schema in the schema storage to check if it is already a registered schema.
-
-4. If yes, the broker skips the schema validation since it is a known schema, and returns the schema version to the producer.
-
-5. If no, the broker verifies whether a schema can be automatically created in this namespace:
-
-  * If `isAllowAutoUpdateSchema` sets to **true**, then a schema can be created, and the broker validates the schema based on the schema compatibility check strategy defined for the topic.
-
-  * If `isAllowAutoUpdateSchema` sets to **false**, then a schema can not be created, and the producer is rejected to connect to the broker.
-
-**Tip**:
-
-`isAllowAutoUpdateSchema` can be set via **Pulsar admin API** or **REST API.**
-
-For how to set `isAllowAutoUpdateSchema` via Pulsar admin API, see [Manage AutoUpdate Strategy](schema-manage.md/#manage-autoupdate-strategy).
-
-6. If the schema is allowed to be updated, then the compatibility check is performed.
-
-  * If the schema is compatible, the broker stores it and returns the schema version to the producer.
-
-    All the messages produced by this producer are tagged with the schema version.
-
-  * If the schema is incompatible, the broker rejects it.
-
-### Consumer side
-
-This diagram illustrates how schema works on the consumer side.
-
-![Schema works at the consumer side](/assets/schema-consumer.png)
-
-1. The application uses a schema instance to construct a consumer instance.
-
-   The schema instance defines the schema that the consumer uses for decoding messages received from a broker.
-
-2. The consumer connects to the broker with the `SchemaInfo` extracted from the passed-in schema instance.
-
-3. The broker determines whether the topic has one of them (a schema/data/a local consumer and a local producer).
-
-4. If a topic does not have all of them (a schema/data/a local consumer and a local producer):
-
-  * If `isAllowAutoUpdateSchema` sets to **true**, then the consumer registers a schema and it is connected to a broker.
-
-  * If `isAllowAutoUpdateSchema` sets to **false**, then the consumer is rejected to connect to a broker.
-
-5. If a topic has one of them (a schema/data/a local consumer and a local producer), then the schema compatibility check is performed.
-
-  * If the schema passes the compatibility check, then the consumer is connected to the broker.
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/security-athenz.md b/site2/website/versioned_docs/version-2.9.2-deprecated/security-athenz.md
deleted file mode 100644
index 8a39fe25316d07..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/security-athenz.md
+++ /dev/null
@@ -1,98 +0,0 @@
---
id: security-athenz
title: Authentication using Athenz
sidebar_label: "Authentication using Athenz"
original_id: security-athenz
---

[Athenz](https://github.com/AthenZ/athenz) is a role-based authentication/authorization system. In Pulsar, you can use Athenz role tokens (also known as *z-tokens*) to establish the identity of the client.

## Athenz authentication settings

A [decentralized Athenz system](https://github.com/AthenZ/athenz/blob/master/docs/decent_authz_flow.md) contains an [authori**Z**ation **M**anagement **S**ystem](https://github.com/AthenZ/athenz/blob/master/docs/setup_zms.md) (ZMS) server and an [authori**Z**ation **T**oken **S**ystem](https://github.com/AthenZ/athenz/blob/master/docs/setup_zts) (ZTS) server.

To begin, you need to set up Athenz service access control. You need to create domains for the *provider* (which provides some resources to other services with some authentication/authorization policies) and the *tenant* (which is provisioned to access some resources in a provider). In this case, the provider corresponds to the Pulsar service itself and the tenant corresponds to each application using Pulsar (typically, a [tenant](reference-terminology.md#tenant) in Pulsar).

### Create the tenant domain and service

On the [tenant](reference-terminology.md#tenant) side, you need to do the following things:

1. Create a domain, such as `shopping`
2. Generate a private/public key pair
3. Create a service, such as `some_app`, on the domain with the public key

Note that you need to specify the private key generated in step 2 when the Pulsar client connects to the [broker](reference-terminology.md#broker) (see client configuration examples for [Java](client-libraries-java.md#tls-authentication) and [C++](client-libraries-cpp.md#tls-authentication)).

For more specific steps involving the Athenz UI, refer to [Example Service Access Control Setup](https://github.com/AthenZ/athenz/blob/master/docs/example_service_athenz_setup.md#client-tenant-domain).

### Create the provider domain and add the tenant service to some role members

On the provider side, you need to do the following things:

1. Create a domain, such as `pulsar`
2. Create a role
3. Add the tenant service to members of the role

Note that you can specify any action and resource in step 2 since they are not used by Pulsar. In other words, Pulsar uses the Athenz role token only for authentication, *not* for authorization.

For more specific steps involving the UI, refer to [Example Service Access Control Setup](https://github.com/AthenZ/athenz/blob/master/docs/example_service_athenz_setup.md#server-provider-domain).
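With the tenant service and key pair in place, a Java client can authenticate using the Athenz plugin. The following is a minimal sketch, assuming the example domain and service names used on this page and a TLS-enabled broker endpoint (the URL and certificate paths are placeholders):

```java

import java.util.HashMap;
import java.util.Map;
import org.apache.pulsar.client.api.Authentication;
import org.apache.pulsar.client.api.AuthenticationFactory;
import org.apache.pulsar.client.api.PulsarClient;

Map<String, String> authParams = new HashMap<>();
authParams.put("tenantDomain", "shopping");                  // tenant domain created above
authParams.put("tenantService", "some_app");                 // tenant service created above
authParams.put("providerDomain", "pulsar");                  // provider domain created above
authParams.put("privateKey", "file:///path/to/private.pem"); // private key generated in step 2
authParams.put("keyId", "v1");

Authentication athenzAuth = AuthenticationFactory.create(
        "org.apache.pulsar.client.impl.auth.AuthenticationAthenz", authParams);

PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar+ssl://broker.example.com:6651") // use TLS so role tokens are not sent in the clear
        .tlsTrustCertsFilePath("/path/to/cacert.pem")
        .authentication(athenzAuth)
        .build();

```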
## Configure the broker for Athenz

> ### TLS encryption
>
> Note that when you use Athenz as an authentication provider, you should also use TLS encryption,
> as it protects role tokens from being intercepted and reused. (For more details on TLS encryption, see [Architecture - Data Model](https://github.com/AthenZ/athenz/blob/master/docs/data_model).)

In the `conf/broker.conf` configuration file in your Pulsar installation, you need to provide the class name of the Athenz authentication provider as well as a comma-separated list of provider domain names.

```properties

# Add the Athenz auth provider
authenticationEnabled=true
authorizationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderAthenz
athenzDomainNames=pulsar

# Enable TLS
tlsEnabled=true
tlsCertificateFilePath=/path/to/broker-cert.pem
tlsKeyFilePath=/path/to/broker-key.pem

# Authentication settings of the broker itself. Used when the broker connects to other brokers, either in same or other clusters
brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationAthenz
brokerClientAuthenticationParameters={"tenantDomain":"shopping","tenantService":"some_app","providerDomain":"pulsar","privateKey":"file:///path/to/private.pem","keyId":"v1"}

```

> A full listing of parameters is available in the `conf/broker.conf` file. You can also find the default
> values for those parameters in [Broker Configuration](reference-configuration.md#broker).

## Configure clients for Athenz

For more information on Pulsar client authentication using Athenz, see the following language-specific docs:

* [Java client](client-libraries-java.md#athenz)

## Configure CLI tools for Athenz

[Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-pulsar-admin.md), [`pulsar-perf`](reference-cli-tools.md#pulsar-perf), and [`pulsar-client`](reference-cli-tools.md#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation.

To use Athenz with Pulsar's CLI tools, you need to add the following authentication parameters to the `conf/client.conf` config file:

```properties

# URL for the broker
serviceUrl=https://broker.example.com:8443/

# Set Athenz auth plugin and its parameters
authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationAthenz
authParams={"tenantDomain":"shopping","tenantService":"some_app","providerDomain":"pulsar","privateKey":"file:///path/to/private.pem","keyId":"v1"}

# Enable TLS
useTls=true
tlsAllowInsecureConnection=false
tlsTrustCertsFilePath=/path/to/cacert.pem

```

diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/security-authorization.md b/site2/website/versioned_docs/version-2.9.2-deprecated/security-authorization.md
deleted file mode 100644
index 9cfd7c8c203f63..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/security-authorization.md
+++ /dev/null
@@ -1,130 +0,0 @@
---
id: security-authorization
title: Authentication and authorization in Pulsar
sidebar_label: "Authorization and ACLs"
original_id: security-authorization
---


In Pulsar, the [authentication provider](security-overview.md#authentication-providers) is responsible for properly identifying clients and associating the clients with [role tokens](security-overview.md#role-tokens). If you only enable authentication, an authenticated role token has the ability to access all resources in the cluster.
*Authorization* is the process that determines *what* clients are able to do.

The role tokens with the most privileges are the *superusers*. The *superusers* can create and destroy tenants, along with having full access to all tenant resources.

When a superuser creates a [tenant](reference-terminology.md#tenant), that tenant is assigned an admin role. A client with the admin role token can then create, modify and destroy namespaces, and grant and revoke permissions to *other role tokens* on those namespaces.

## Broker and Proxy Setup

### Enable authorization and assign superusers
You can enable authorization and assign the superusers in the broker ([`conf/broker.conf`](reference-configuration.md#broker)) configuration file.

```properties

authorizationEnabled=true
superUserRoles=my-super-user-1,my-super-user-2

```

> A full list of parameters is available in the `conf/broker.conf` file.
> You can also find the default values for those parameters in [Broker Configuration](reference-configuration.md#broker).

Typically, you use superuser roles for administrators and clients, as well as for broker-to-broker authorization. When you use [geo-replication](concepts-replication.md), every broker needs to be able to publish to topics in all the other clusters.

You can also enable authorization for the proxy in the proxy configuration file (`conf/proxy.conf`). Once you enable authorization on the proxy, the proxy does an additional authorization check before forwarding the request to a broker.
If you enable authorization on the broker, the broker checks the authorization of the request when the broker receives the forwarded request.

### Proxy Roles

By default, the broker treats the connection between a proxy and the broker as a normal user connection. The broker authenticates the user as the role configured in `proxy.conf` (see ["Enable TLS Authentication on Proxies"](security-tls-authentication.md#enable-tls-authentication-on-proxies)). However, when the user connects to the cluster through a proxy, the user rarely wants to be authorized as the proxy's own role. Instead, the user expects to be able to interact with the cluster as the role for which they have authenticated with the proxy.

Pulsar uses *proxy roles* to enable this. Proxy roles are specified in the broker configuration file, [`conf/broker.conf`](reference-configuration.md#broker). If a client that is authenticated with a broker is one of its `proxyRoles`, all requests from that client must also carry information about the role of the client that is authenticated with the proxy. This information is called the *original principal*. If the *original principal* is absent, the client is not able to access anything.

You must authorize both the *proxy role* and the *original principal* to access a resource to ensure that the resource is accessible via the proxy. Administrators can take two approaches to authorize the *proxy role* and the *original principal*.

The more secure approach is to grant access to the proxy roles each time you grant access to a resource, as in the sketch below. For example, if you have a proxy role named `proxy1`, when the superuser creates a tenant, you should specify `proxy1` as one of the admin roles. When a role is granted permissions to produce or consume from a namespace, if that client wants to produce or consume through a proxy, you should also grant `proxy1` the same permissions.
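A minimal sketch of that first approach with the Java admin client (the namespace, role names, and service URL are placeholders):

```java

import java.util.EnumSet;
import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.common.policies.data.AuthAction;

PulsarAdmin admin = PulsarAdmin.builder()
        .serviceHttpUrl("http://broker.example.com:8080")
        .build();

// Grant produce/consume permissions to the client role...
admin.namespaces().grantPermissionOnNamespace("my-tenant/my-namespace",
        "my-client-role", EnumSet.of(AuthAction.produce, AuthAction.consume));

// ...and grant the proxy role the same permissions, so requests made
// through the proxy pass both authorization checks.
admin.namespaces().grantPermissionOnNamespace("my-tenant/my-namespace",
        "proxy1", EnumSet.of(AuthAction.produce, AuthAction.consume));

```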
Another approach is to make the proxy role a superuser. This allows the proxy to access all resources. The client still needs to authenticate with the proxy, and all requests made through the proxy have their role downgraded to the *original principal* of the authenticated client. However, if the proxy is compromised, a bad actor could get full access to your cluster.

You can specify the roles as proxy roles in [`conf/broker.conf`](reference-configuration.md#broker).

```properties

proxyRoles=my-proxy-role

# if you want to allow superusers to use the proxy (see above)
superUserRoles=my-super-user-1,my-super-user-2,my-proxy-role

```

## Administer tenants

Pulsar [instance](reference-terminology.md#instance) administrators, or some kind of self-service portal, typically provision a Pulsar [tenant](reference-terminology.md#tenant).

You can manage tenants using the [`pulsar-admin`](reference-pulsar-admin.md) tool.

### Create a new tenant

The following is an example tenant creation command:

```shell

$ bin/pulsar-admin tenants create my-tenant \
  --admin-roles my-admin-role \
  --allowed-clusters us-west,us-east

```

This command creates a new tenant `my-tenant` that is allowed to use the clusters `us-west` and `us-east`.

A client that successfully identifies itself as having the role `my-admin-role` is allowed to perform all administrative tasks on this tenant.

The structure of topic names in Pulsar reflects the hierarchy between tenants, clusters, and namespaces:

```shell

persistent://tenant/namespace/topic

```

### Manage permissions

You can use [Pulsar Admin Tools](admin-api-permissions.md) for managing permissions in Pulsar.

### Pulsar admin authentication

```java

PulsarAdmin admin = PulsarAdmin.builder()
                    .serviceHttpUrl("http://broker:8080")
                    .authentication("com.org.MyAuthPluginClass", "param1:value1")
                    .build();

```

To use TLS:

```java

PulsarAdmin admin = PulsarAdmin.builder()
                    .serviceHttpUrl("https://broker:8080")
                    .authentication("com.org.MyAuthPluginClass", "param1:value1")
                    .tlsTrustCertsFilePath("/path/to/trust/cert")
                    .build();

```

## Authorize an authenticated client with multiple roles

When a client is identified with multiple roles in a token (that is, the role claim in the token is an array) during the authentication process, Pulsar supports checking the permissions of all the roles, and it authorizes the client as long as one of its roles has the required permissions.

> **Note**
> This authorization method is only compatible with [JWT authentication](security-jwt.md).

To enable this authorization method, configure the authorization provider as `MultiRolesTokenAuthorizationProvider` in the `conf/broker.conf` file.

 ```properties

 # Authorization provider fully qualified class-name
 authorizationProvider=org.apache.pulsar.broker.authorization.MultiRolesTokenAuthorizationProvider

 ```

diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/security-basic-auth.md b/site2/website/versioned_docs/version-2.9.2-deprecated/security-basic-auth.md
deleted file mode 100644
index 2585526bb478af..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/security-basic-auth.md
+++ /dev/null
@@ -1,127 +0,0 @@
---
id: security-basic-auth
title: Authentication using HTTP basic
sidebar_label: "Authentication using HTTP basic"
---

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````

[Basic authentication](https://en.wikipedia.org/wiki/Basic_access_authentication) is a simple authentication scheme built into the HTTP protocol, which uses base64-encoded username and password pairs as credentials.

## Prerequisites

Install [`htpasswd`](https://httpd.apache.org/docs/2.4/programs/htpasswd.html) in your environment to create a password file for storing username-password pairs.

* For Ubuntu/Debian, run the following command to install `htpasswd`.

  ```
  apt install apache2-utils
  ```

* For CentOS/RHEL, run the following command to install `htpasswd`.

  ```
  yum install httpd-tools
  ```

## Create your authentication file

:::note
Currently, you can use MD5 (recommended) or CRYPT to hash the passwords in your authentication file.
:::

Create a password file named `.htpasswd` with a user account `superuser/admin`:
* Use MD5 hashing (recommended):

  ```
  htpasswd -cmb /path/to/.htpasswd superuser admin
  ```

* Use CRYPT hashing:

  ```
  htpasswd -cdb /path/to/.htpasswd superuser admin
  ```

You can preview the content of your password file by running the following command:

```
cat /path/to/.htpasswd
superuser:$apr1$GBIYZYFZ$MzLcPrvoUky16mLcK6UtX/
```

## Enable basic authentication on brokers

To configure brokers to authenticate clients, complete the following steps.

1. Add the following parameters to the `conf/broker.conf` file. If you use a standalone Pulsar, you need to add these parameters to the `conf/standalone.conf` file.

   ```
   # Configuration to enable Basic authentication
   authenticationEnabled=true
   authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderBasic

   # Authentication settings of the broker itself. Used when the broker connects to other brokers, either in same or other clusters
   brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationBasic
   brokerClientAuthenticationParameters={"userId":"superuser","password":"admin"}

   # If this flag is set then the broker authenticates the original Auth data
   # else it just accepts the originalPrincipal and authorizes it (if required).
   authenticateOriginalAuthData=true
   ```

2. Set an environment variable named `PULSAR_EXTRA_OPTS` to the value `-Dpulsar.auth.basic.conf=/path/to/.htpasswd`. Pulsar reads this environment variable to implement HTTP basic authentication.
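After completing the steps above, one way to verify the broker setup is to call the admin API as the account you created. A minimal Java sketch (the service URL is a placeholder; `superuser`/`admin` match the `.htpasswd` entry created earlier):

```java

import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.client.impl.auth.AuthenticationBasic;

AuthenticationBasic auth = new AuthenticationBasic();
auth.configure("{\"userId\":\"superuser\",\"password\":\"admin\"}");

PulsarAdmin admin = PulsarAdmin.builder()
        .serviceHttpUrl("http://broker.example.com:8080")
        .authentication(auth)
        .build();

// If basic authentication is wired up correctly, this call succeeds
// as the authenticated superuser.
System.out.println(admin.tenants().getTenants());

```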
## Enable basic authentication on proxies

To configure proxies to authenticate clients, complete the following steps.

1. Add the following parameters to the `conf/proxy.conf` file:

   ```
   # For clients connecting to the proxy
   authenticationEnabled=true
   authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderBasic

   # For the proxy to connect to brokers
   brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationBasic
   brokerClientAuthenticationParameters={"userId":"superuser","password":"admin"}

   # Whether client authorization credentials are forwarded to the broker for re-authorization.
   # Authentication must be enabled via authenticationEnabled=true for this to take effect.
   forwardAuthorizationCredentials=true
   ```

2. Set an environment variable named `PULSAR_EXTRA_OPTS` to the value `-Dpulsar.auth.basic.conf=/path/to/.htpasswd`. Pulsar reads this environment variable to implement HTTP basic authentication.

## Configure basic authentication in CLI tools

[Command-line tools](/docs/next/reference-cli-tools), such as [Pulsar-admin](/tools/pulsar-admin/), [Pulsar-perf](/tools/pulsar-perf/) and [Pulsar-client](/tools/pulsar-client/), use the `conf/client.conf` file in your Pulsar installation. To configure basic authentication in Pulsar CLI tools, you need to add the following parameters to the `conf/client.conf` file.

```
authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationBasic
authParams={"userId":"superuser","password":"admin"}
```


## Configure basic authentication in Pulsar clients

The following example shows how to configure basic authentication when using Pulsar clients.

```java
AuthenticationBasic auth = new AuthenticationBasic();
auth.configure("{\"userId\":\"superuser\",\"password\":\"admin\"}");
PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar://broker.example.com:6650")
    .authentication(auth)
    .build();
```

diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/security-bouncy-castle.md b/site2/website/versioned_docs/version-2.9.2-deprecated/security-bouncy-castle.md
deleted file mode 100644
index be937055d8e311..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/security-bouncy-castle.md
+++ /dev/null
@@ -1,157 +0,0 @@
---
id: security-bouncy-castle
title: Bouncy Castle Providers
sidebar_label: "Bouncy Castle Providers"
original_id: security-bouncy-castle
---

## BouncyCastle Introduction

`Bouncy Castle` is a Java library that complements the default Java Cryptographic Extension (JCE),
and it provides more cipher suites and algorithms than the default JCE provided by Sun.

In addition to that, `Bouncy Castle` has lots of utilities for reading arcane formats like PEM and ASN.1 that no sane person would want to rewrite themselves.

In Pulsar, security and crypto have dependencies on BouncyCastle jars. For details on installing and configuring Bouncy Castle FIPS, see the [BC FIPS Documentation](https://www.bouncycastle.org/documentation.html), especially the **User Guides** and **Security Policy** PDFs.

`Bouncy Castle` provides both a [FIPS](https://www.bouncycastle.org/fips_faq.html) and a non-FIPS version. However, you cannot include both versions in the same JVM; you need to exclude the current version before including the other.

In Pulsar, the security and crypto methods also depend on `Bouncy Castle`, especially in [TLS Authentication](security-tls-authentication.md) and [Transport Encryption](security-encryption.md).
This document describes how to configure Pulsar to use either the BouncyCastle FIPS (BC-FIPS) or non-FIPS (BC-non-FIPS) version.

## How BouncyCastle modules are packaged in Pulsar

In Pulsar's `bouncy-castle` module, we provide 2 sub-modules: `bouncy-castle-bc` (for the non-FIPS version) and `bouncy-castle-bcfips` (for the FIPS version), which package the BC jars together to make including and excluding `Bouncy Castle` easier.

To achieve this goal, we need to package several `bouncy-castle` jars together into the `bouncy-castle-bc` or `bouncy-castle-bcfips` jar.
Each of the original bouncy-castle jars is security-related, so BouncyCastle dutifully supplies a signature for each JAR.
But when we do the re-packaging, the Maven shade plugin explodes the BouncyCastle jar file, which puts the signatures into META-INF;
these signatures aren't valid for the new uber-jar (signatures are only valid for the original BC jars).
Usually, you will encounter an error like `java.lang.SecurityException: Invalid signature file digest for Manifest main attributes`.

You could exclude these signatures in the Maven POM file to avoid the above error:

```access transformers

META-INF/*.SF
META-INF/*.DSA
META-INF/*.RSA

```

But this can also lead to new, cryptic errors, e.g. `java.security.NoSuchAlgorithmException: PBEWithSHA256And256BitAES-CBC-BC SecretKeyFactory not available`.
By explicitly specifying where to find the algorithm, like this: `SecretKeyFactory.getInstance("PBEWithSHA256And256BitAES-CBC-BC","BC")`,
you get the real error: `java.security.NoSuchProviderException: JCE cannot authenticate the provider BC`.

So, we used an [executable packer plugin](https://github.com/nthuemmel/executable-packer-maven-plugin) that uses a jar-in-jar approach to preserve the BouncyCastle signatures in a single, executable jar.

### Include dependencies of BC-non-FIPS

Pulsar module `bouncy-castle-bc`, which is defined by `bouncy-castle/bc/pom.xml`, contains the needed non-FIPS jars for Pulsar and is packaged as a jar-in-jar (you need to provide the `pkg` classifier).

```xml

<dependency>
  <groupId>org.bouncycastle</groupId>
  <artifactId>bcpkix-jdk15on</artifactId>
  <version>${bouncycastle.version}</version>
</dependency>

<dependency>
  <groupId>org.bouncycastle</groupId>
  <artifactId>bcprov-ext-jdk15on</artifactId>
  <version>${bouncycastle.version}</version>
</dependency>

```

By using this `bouncy-castle-bc` module, you can easily include and exclude the BouncyCastle non-FIPS jars.

### Modules that include the BC-non-FIPS module (`bouncy-castle-bc`)

For the Pulsar client, users need the bouncy-castle module, so `pulsar-client-original` includes the `bouncy-castle-bc` module and has `pkg` set to reference the `jar-in-jar` package.
It is included as in the following example:

```xml

<dependency>
  <groupId>org.apache.pulsar</groupId>
  <artifactId>bouncy-castle-bc</artifactId>
  <version>${pulsar.version}</version>
  <classifier>pkg</classifier>
</dependency>

```

By default, `bouncy-castle-bc` is already included in `pulsar-client-original`, and `pulsar-client-original` is in turn included in a lot of other modules like `pulsar-client-admin` and `pulsar-broker`.
But for the shaded-jar signature reasons above, we should not package Pulsar's `bouncy-castle` module into `pulsar-client-all` or other shaded modules directly, such as `pulsar-client-shaded`, `pulsar-client-admin-shaded` and `pulsar-broker-shaded`.
So in the shaded modules, we exclude the `bouncy-castle` modules.

```xml

<filters>
  <filter>
    <artifact>org.apache.pulsar:pulsar-client-original</artifact>
    <includes>
      <include>**</include>
    </includes>
    <excludes>
      <exclude>org/bouncycastle/**</exclude>
    </excludes>
  </filter>
</filters>

```

That means `bouncy-castle`-related jars are not shaded in these fat jars.

### Module BC-FIPS (`bouncy-castle-bcfips`)

Pulsar module `bouncy-castle-bcfips`, which is defined by `bouncy-castle/bcfips/pom.xml`, contains the needed FIPS jars for Pulsar.
Similar to `bouncy-castle-bc`, `bouncy-castle-bcfips` is also packaged as a `jar-in-jar` package for easy inclusion and exclusion.

```xml

<dependency>
  <groupId>org.bouncycastle</groupId>
  <artifactId>bc-fips</artifactId>
  <version>${bouncycastlefips.version}</version>
</dependency>

<dependency>
  <groupId>org.bouncycastle</groupId>
  <artifactId>bcpkix-fips</artifactId>
  <version>${bouncycastlefips.version}</version>
</dependency>

```

### Exclude BC-non-FIPS and include BC-FIPS

If you want to switch from the BC-non-FIPS to the BC-FIPS version, here is an example for the `pulsar-broker` module:

```xml

<dependency>
  <groupId>org.apache.pulsar</groupId>
  <artifactId>pulsar-broker</artifactId>
  <version>${pulsar.version}</version>
  <exclusions>
    <exclusion>
      <groupId>org.apache.pulsar</groupId>
      <artifactId>bouncy-castle-bc</artifactId>
    </exclusion>
  </exclusions>
</dependency>

<dependency>
  <groupId>org.apache.pulsar</groupId>
  <artifactId>bouncy-castle-bcfips</artifactId>
  <version>${pulsar.version}</version>
  <classifier>pkg</classifier>
</dependency>

```


For more examples, you can reference the `bcfips-include-test` module.

diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/security-encryption.md b/site2/website/versioned_docs/version-2.9.2-deprecated/security-encryption.md
deleted file mode 100644
index c2f3530d94d9e4..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/security-encryption.md
+++ /dev/null
@@ -1,200 +0,0 @@
---
id: security-encryption
title: Pulsar Encryption
sidebar_label: "End-to-End Encryption"
original_id: security-encryption
---

Applications can use Pulsar encryption to encrypt messages on the producer side and decrypt messages on the consumer side. You can use the public and private key pair that the application configures to perform encryption. Only the consumers with a valid key can decrypt the encrypted messages.

## Asymmetric and symmetric encryption

Pulsar uses a dynamically generated symmetric AES key to encrypt messages (data). You can use the application-provided ECDSA (Elliptic Curve Digital Signature Algorithm) or RSA (Rivest–Shamir–Adleman) key pair to encrypt the AES key (data key), so you do not have to share the secret with everyone.

A key is a public/private key pair used for encryption and decryption. The producer key is the public key of the key pair, and the consumer key is the private key of the key pair.

The application configures the producer with the public key. You can use this key to encrypt the AES data key. The encrypted data key is sent as part of the message header. Only entities with the private key (in this case the consumer) are able to decrypt the data key, which is used to decrypt the message.

You can encrypt a message with more than one key. Any one of the keys used for encrypting the message is sufficient to decrypt the message.

Pulsar does not store the encryption key anywhere in the Pulsar service. If you lose or delete the private key, your message is irretrievably lost and unrecoverable.

## Producer
![alt text](/assets/pulsar-encryption-producer.jpg "Pulsar Encryption Producer")

## Consumer
![alt text](/assets/pulsar-encryption-consumer.jpg "Pulsar Encryption Consumer")

## Get started

1. Create your ECDSA or RSA public and private key pair by using the following commands.
   * ECDSA (for Java clients only)

     ```shell

     openssl ecparam -name secp521r1 -genkey -param_enc explicit -out test_ecdsa_privkey.pem
     openssl ec -in test_ecdsa_privkey.pem -pubout -outform pem -out test_ecdsa_pubkey.pem

     ```

   * RSA (for C++, Python and Node.js clients)

     ```shell

     openssl genrsa -out test_rsa_privkey.pem 2048
     openssl rsa -in test_rsa_privkey.pem -pubout -outform pkcs8 -out test_rsa_pubkey.pem

     ```

2. Add the public and private keys to your key management system, and configure your producers to retrieve the public keys and your consumers to retrieve the private keys.
3. Implement the `CryptoKeyReader` interface, specifically `CryptoKeyReader.getPublicKey()` for the producer and `CryptoKeyReader.getPrivateKey()` for the consumer, which the Pulsar client invokes to load the key.

4. Add the encryption key name to the producer builder: `PulsarClient.newProducer().addEncryptionKey("myapp.key")`.

5. Add the `CryptoKeyReader` implementation to the producer or consumer builder: `PulsarClient.newProducer().cryptoKeyReader(keyReader)` / `PulsarClient.newConsumer().cryptoKeyReader(keyReader)`.

6. Sample producer application:

```java

class RawFileKeyReader implements CryptoKeyReader {

    String publicKeyFile = "";
    String privateKeyFile = "";

    RawFileKeyReader(String pubKeyFile, String privKeyFile) {
        publicKeyFile = pubKeyFile;
        privateKeyFile = privKeyFile;
    }

    @Override
    public EncryptionKeyInfo getPublicKey(String keyName, Map<String, String> keyMeta) {
        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
        try {
            keyInfo.setKey(Files.readAllBytes(Paths.get(publicKeyFile)));
        } catch (IOException e) {
            System.out.println("ERROR: Failed to read public key from file " + publicKeyFile);
            e.printStackTrace();
        }
        return keyInfo;
    }

    @Override
    public EncryptionKeyInfo getPrivateKey(String keyName, Map<String, String> keyMeta) {
        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
        try {
            keyInfo.setKey(Files.readAllBytes(Paths.get(privateKeyFile)));
        } catch (IOException e) {
            System.out.println("ERROR: Failed to read private key from file " + privateKeyFile);
            e.printStackTrace();
        }
        return keyInfo;
    }
}

PulsarClient pulsarClient = PulsarClient.builder().serviceUrl("pulsar://localhost:6650").build();

Producer<byte[]> producer = pulsarClient.newProducer()
        .topic("persistent://my-tenant/my-ns/my-topic")
        .addEncryptionKey("myappkey")
        .cryptoKeyReader(new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem"))
        .create();

for (int i = 0; i < 10; i++) {
    producer.send("my-message".getBytes());
}

producer.close();
pulsarClient.close();

```
7. Sample consumer application:

```java

class RawFileKeyReader implements CryptoKeyReader {

    String publicKeyFile = "";
    String privateKeyFile = "";

    RawFileKeyReader(String pubKeyFile, String privKeyFile) {
        publicKeyFile = pubKeyFile;
        privateKeyFile = privKeyFile;
    }

    @Override
    public EncryptionKeyInfo getPublicKey(String keyName, Map<String, String> keyMeta) {
        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
        try {
            keyInfo.setKey(Files.readAllBytes(Paths.get(publicKeyFile)));
        } catch (IOException e) {
            System.out.println("ERROR: Failed to read public key from file " + publicKeyFile);
            e.printStackTrace();
        }
        return keyInfo;
    }

    @Override
    public EncryptionKeyInfo getPrivateKey(String keyName, Map<String, String> keyMeta) {
        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
        try {
            keyInfo.setKey(Files.readAllBytes(Paths.get(privateKeyFile)));
        } catch (IOException e) {
            System.out.println("ERROR: Failed to read private key from file " + privateKeyFile);
            e.printStackTrace();
        }
        return keyInfo;
    }
}

PulsarClient pulsarClient = PulsarClient.builder().serviceUrl("pulsar://localhost:6650").build();
Consumer<byte[]> consumer = pulsarClient.newConsumer()
        .topic("persistent://my-tenant/my-ns/my-topic")
        .subscriptionName("my-subscriber-name")
        .cryptoKeyReader(new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem"))
        .subscribe();
Message<byte[]> msg = null;

for (int i = 0; i < 10; i++) {
    msg = consumer.receive();
    // do something
    System.out.println("Received: " + new String(msg.getData()));
}

// Acknowledge the consumption of all messages at once
consumer.acknowledgeCumulative(msg);
consumer.close();
pulsarClient.close();

```

## Key rotation
Pulsar generates a new AES data key every 4 hours or after a certain number of messages are published. A producer fetches the asymmetric public key every 4 hours by calling `CryptoKeyReader.getPublicKey()` to retrieve the latest version.

## Enable encryption at the producer application
If you produce messages that are consumed across application boundaries, you need to ensure that consumers in other applications have access to one of the private keys that can decrypt the messages. You can do this in two ways:
1. The consumer application provides you access to its public key, which you add to your producer keys.
2. You grant access to one of the private keys from the pairs that the producer uses.

When producers want to encrypt messages with multiple keys, producers add all such keys to the configuration. The consumer can decrypt the message as long as the consumer has access to at least one of the keys.

If you need to encrypt the messages using 2 keys (`myapp.messagekey1` and `myapp.messagekey2`), refer to the following example.

```java

PulsarClient.newProducer().addEncryptionKey("myapp.messagekey1").addEncryptionKey("myapp.messagekey2");

```

## Decrypt encrypted messages at the consumer application
Consumers need access to one of the private keys to decrypt messages that the producer produces. If you want to receive encrypted messages, create a public/private key pair and give your public key to the producer application to encrypt messages with.

## Handle failures
* Producer/Consumer loses access to the key
  * The producer action fails and indicates the cause of the failure. The application has the option to proceed with sending unencrypted messages in such cases. Call `PulsarClient.newProducer().cryptoFailureAction(ProducerCryptoFailureAction)` to control the producer behavior (see the sketch after this list). The default behavior is to fail the request.
  * If consumption fails due to a decryption failure or missing keys in the consumer, the application has the option to consume the encrypted message or discard it. Call `PulsarClient.newConsumer().cryptoFailureAction(ConsumerCryptoFailureAction)` to control the consumer behavior. The default behavior is to fail the request. The application is never able to decrypt the messages if the private key is permanently lost.
* Batch messaging
  * If decryption fails and the message contains batch messages, the client is not able to retrieve the individual messages in the batch, hence message consumption fails even if `cryptoFailureAction()` is set to `ConsumerCryptoFailureAction.CONSUME`.
* If decryption fails, the message consumption stops and the application notices backlog growth in addition to decryption failure messages in the client log. If the application does not have access to the private key to decrypt the message, the only option is to skip or discard the backlogged messages.
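Both failure-action knobs from the list above are set on the builders. A minimal sketch, reusing the `RawFileKeyReader`, topic, and `pulsarClient` from the samples above:

```java

// Producer: send unencrypted instead of failing when the key cannot be loaded.
Producer<byte[]> tolerantProducer = pulsarClient.newProducer()
        .topic("persistent://my-tenant/my-ns/my-topic")
        .addEncryptionKey("myappkey")
        .cryptoKeyReader(new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem"))
        .cryptoFailureAction(ProducerCryptoFailureAction.SEND)
        .create();

// Consumer: deliver the still-encrypted payload instead of failing on decryption errors.
Consumer<byte[]> tolerantConsumer = pulsarClient.newConsumer()
        .topic("persistent://my-tenant/my-ns/my-topic")
        .subscriptionName("my-subscriber-name")
        .cryptoKeyReader(new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem"))
        .cryptoFailureAction(ConsumerCryptoFailureAction.CONSUME)
        .subscribe();

```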
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/security-extending.md b/site2/website/versioned_docs/version-2.9.2-deprecated/security-extending.md
deleted file mode 100644
index e7484453b8beb8..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/security-extending.md
+++ /dev/null
@@ -1,207 +0,0 @@
---
id: security-extending
title: Extending Authentication and Authorization in Pulsar
sidebar_label: "Extending"
original_id: security-extending
---

Pulsar provides a way to use custom authentication and authorization mechanisms.

## Authentication

Pulsar supports mutual TLS and Athenz authentication plugins. For how to use these authentication plugins, refer to the description in [Security](security-overview.md).

You can use a custom authentication mechanism by providing the implementation in the form of two plugins. One plugin is for the client library, and the other plugin is for the Pulsar proxy and/or Pulsar broker to validate the credentials.

### Client authentication plugin

For the client library, you need to implement `org.apache.pulsar.client.api.Authentication`. As shown below, you can pass this class when you create a Pulsar client:

```java

PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar://localhost:6650")
    .authentication(new MyAuthentication())
    .build();

```

You can implement 2 interfaces on the client side:
 * `Authentication` -> http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/Authentication.html
 * `AuthenticationDataProvider` -> http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/AuthenticationDataProvider.html


The `Authentication` implementation in turn needs to provide the client credentials in the form of `org.apache.pulsar.client.api.AuthenticationDataProvider`. This makes it possible to return different kinds of authentication tokens for different types of connections, or to pass a certificate chain to use for TLS.


You can find examples of client authentication providers at:

 * Mutual TLS Auth -- https://github.com/apache/pulsar/tree/master/pulsar-client/src/main/java/org/apache/pulsar/client/impl/auth
 * Athenz -- https://github.com/apache/pulsar/tree/master/pulsar-client-auth-athenz/src/main/java/org/apache/pulsar/client/impl/auth
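A custom client plugin can be as small as the following sketch. It assumes a hypothetical scheme that passes an opaque token taken from the auth params over the binary protocol; the names `MyAuthentication` and `my-auth` are illustrative, and the method name must match the one returned by the broker-side provider described in the next section.

```java

import java.io.IOException;
import java.util.Map;
import org.apache.pulsar.client.api.Authentication;
import org.apache.pulsar.client.api.AuthenticationDataProvider;
import org.apache.pulsar.client.api.PulsarClientException;

public class MyAuthentication implements Authentication {

    private String token;

    @Override
    public String getAuthMethodName() {
        return "my-auth"; // must match the broker-side provider's getAuthMethodName()
    }

    @Override
    public AuthenticationDataProvider getAuthData() throws PulsarClientException {
        return new AuthenticationDataProvider() {
            @Override
            public boolean hasDataFromCommand() {
                return true;
            }

            @Override
            public String getCommandData() {
                return token; // credentials sent over the binary protocol
            }
        };
    }

    @Override
    public void configure(Map<String, String> authParams) {
        this.token = authParams.get("token"); // hypothetical parameter name
    }

    @Override
    public void start() throws PulsarClientException {
        // no-op: nothing to initialize for this scheme
    }

    @Override
    public void close() throws IOException {
        // no-op
    }
}

```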
### Proxy/Broker authentication plugin

On the proxy/broker side, you need to configure the corresponding plugin to validate the credentials that the client sends. The proxy and broker can support multiple authentication providers at the same time.

In `conf/broker.conf` you can specify a list of valid providers:

```properties

# Authentication provider name list, which is a comma-separated list of class names
authenticationProviders=

```

On the broker side, there is one single interface to implement, `org.apache.pulsar.broker.authentication.AuthenticationProvider`:

```java

/**
 * Provider of authentication mechanism
 */
public interface AuthenticationProvider extends Closeable {

    /**
     * Perform initialization for the authentication provider
     *
     * @param config
     *            broker config object
     * @throws IOException
     *             if the initialization fails
     */
    void initialize(ServiceConfiguration config) throws IOException;

    /**
     * @return the authentication method name supported by this provider
     */
    String getAuthMethodName();

    /**
     * Validate the authentication for the given credentials with the specified authentication data
     *
     * @param authData
     *            provider specific authentication data
     * @return the "role" string for the authenticated connection, if the authentication was successful
     * @throws AuthenticationException
     *             if the credentials are not valid
     */
    String authenticate(AuthenticationDataSource authData) throws AuthenticationException;

}

```

The following are examples of broker authentication plugins:

 * Mutual TLS -- https://github.com/apache/pulsar/blob/master/pulsar-broker-common/src/main/java/org/apache/pulsar/broker/authentication/AuthenticationProviderTls.java
 * Athenz -- https://github.com/apache/pulsar/blob/master/pulsar-broker-auth-athenz/src/main/java/org/apache/pulsar/broker/authentication/AuthenticationProviderAthenz.java

## Authorization

Authorization is the operation that checks whether a particular "role" or "principal" has permission to perform a certain operation.

By default, you can use the embedded authorization provider provided by Pulsar. You can also configure a different authorization provider through a plugin.
Note that although the authentication plugin is designed for use in both the proxy and the broker, the authorization plugin is designed only for use on the broker. However, the proxy does perform some simple authorization checks of roles if authorization is enabled.

To provide a custom provider, you need to implement the `org.apache.pulsar.broker.authorization.AuthorizationProvider` interface, put this class in the Pulsar broker classpath, and configure the class in `conf/broker.conf`:

 ```properties

 # Authorization provider fully qualified class-name
 authorizationProvider=org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider

 ```

```java

/**
 * Provider of authorization mechanism
 */
public interface AuthorizationProvider extends Closeable {

    /**
     * Perform initialization for the authorization provider
     *
     * @param conf
     *            broker config object
     * @param configCache
     *            pulsar zk configuration cache service
     * @throws IOException
     *             if the initialization fails
     */
    void initialize(ServiceConfiguration conf, ConfigurationCacheService configCache) throws IOException;

    /**
     * Check if the specified role has permission to send messages to the specified fully qualified topic name.
     *
     * @param topicName
     *            the fully qualified topic name associated with the topic.
     * @param role
     *            the app id used to send messages to the topic.
     */
    CompletableFuture<Boolean> canProduceAsync(TopicName topicName, String role,
            AuthenticationDataSource authenticationData);

    /**
     * Check if the specified role has permission to receive messages from the specified fully qualified topic name.
     *
     * @param topicName
     *            the fully qualified topic name associated with the topic.
     * @param role
     *            the app id used to receive messages from the topic.
     * @param subscription
     *            the subscription name defined by the client
     */
    CompletableFuture<Boolean> canConsumeAsync(TopicName topicName, String role,
            AuthenticationDataSource authenticationData, String subscription);

    /**
     * Check whether the specified role can perform a lookup for the specified topic.
     *
     * For that the caller needs to have producer or consumer permission.
     *
     * @param topicName
     * @param role
     * @return
     * @throws Exception
     */
    CompletableFuture<Boolean> canLookupAsync(TopicName topicName, String role,
            AuthenticationDataSource authenticationData);

    /**
     * Grant authorization-action permission on a namespace to the given client
     *
     * @param namespace
     * @param actions
     * @param role
     * @param authDataJson
     *            additional authdata in json format
     * @return CompletableFuture
     * @completesWith
     *            IllegalArgumentException when namespace not found
     *            IllegalStateException when failed to grant permission
     */
    CompletableFuture<Void> grantPermissionAsync(NamespaceName namespace, Set<AuthAction> actions, String role,
            String authDataJson);

    /**
     * Grant authorization-action permission on a topic to the given client
     *
     * @param topicName
     * @param role
     * @param authDataJson
     *            additional authdata in json format
     * @return CompletableFuture
     * @completesWith
     *            IllegalArgumentException when namespace not found
     *            IllegalStateException when failed to grant permission
     */
    CompletableFuture<Void> grantPermissionAsync(TopicName topicName, Set<AuthAction> actions, String role,
            String authDataJson);

}

```

diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/security-jwt.md b/site2/website/versioned_docs/version-2.9.2-deprecated/security-jwt.md
deleted file mode 100644
index 1fa65b7c27f60c..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/security-jwt.md
+++ /dev/null
@@ -1,331 +0,0 @@
---
id: security-jwt
title: Client authentication using tokens based on JSON Web Tokens
sidebar_label: "Authentication using JWT"
original_id: security-jwt
---

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````


## Token authentication overview

Pulsar supports authenticating clients using security tokens that are based on [JSON Web Tokens](https://jwt.io/introduction/) ([RFC-7519](https://tools.ietf.org/html/rfc7519)).

You can use tokens to identify a Pulsar client and associate it with some "principal" (or "role") that is permitted to do some actions (e.g., publish to a topic or consume from a topic).

A user typically gets a token string from the administrator (or some automated service).

The compact representation of a signed JWT is a string that looks like the following:

```

eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY

```

The application specifies the token when it creates the client instance. An alternative is to pass a "token supplier" (a function that returns the token when the client library needs one).

> #### Always use TLS transport encryption
> Sending a token is equivalent to sending a password over the wire. You should use TLS encryption all the time when you connect to the Pulsar service. See
> [Transport Encryption using TLS](security-tls-transport.md) for more details.

### CLI Tools

[Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-pulsar-admin.md), [`pulsar-perf`](reference-cli-tools.md#pulsar-perf), and [`pulsar-client`](reference-cli-tools.md#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation.

You need to add the following parameters to that file to use token authentication with Pulsar's CLI tools:

```properties

webServiceUrl=http://broker.example.com:8080/
brokerServiceUrl=pulsar://broker.example.com:6650/
authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken
authParams=token:eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY

```

The token string can also be read from a file, for example:

```

authParams=file:///path/to/token/file

```

### Pulsar client

You can use tokens to authenticate the following Pulsar clients.
````mdx-code-block
<Tabs defaultValue="Java"
  values={[{"label":"Java","value":"Java"},{"label":"Python","value":"Python"},{"label":"Go","value":"Go"},{"label":"C++","value":"C++"},{"label":"C#","value":"C#"}]}>
<TabItem value="Java">

```java

PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar://broker.example.com:6650/")
    .authentication(
        AuthenticationFactory.token("eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY"))
    .build();

```

Similarly, you can also pass a `Supplier<String>`:

```java

PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar://broker.example.com:6650/")
    .authentication(
        AuthenticationFactory.token(() -> {
            // Read token from custom source
            return readToken();
        }))
    .build();

```

</TabItem>
<TabItem value="Python">

```python

from pulsar import Client, AuthenticationToken

client = Client('pulsar://broker.example.com:6650/',
                authentication=AuthenticationToken('eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY'))

```

Alternatively, you can also pass a `Supplier`:

```python

def read_token():
    with open('/path/to/token.txt') as tf:
        return tf.read().strip()

client = Client('pulsar://broker.example.com:6650/',
                authentication=AuthenticationToken(read_token))

```

</TabItem>
<TabItem value="Go">

```go

client, err := NewClient(ClientOptions{
	URL:            "pulsar://localhost:6650",
	Authentication: NewAuthenticationToken("eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY"),
})

```

Similarly, you can also pass a `Supplier`:

```go

client, err := NewClient(ClientOptions{
	URL: "pulsar://localhost:6650",
	Authentication: NewAuthenticationTokenSupplier(func () string {
		// Read token from custom source
		return readToken()
	}),
})

```

</TabItem>
<TabItem value="C++">

```c++

#include <pulsar/Client.h>

pulsar::ClientConfiguration config;
config.setAuth(pulsar::AuthToken::createWithToken("eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY"));

pulsar::Client client("pulsar://broker.example.com:6650/", config);

```

</TabItem>
<TabItem value="C#">

```c#

var client = PulsarClient.Builder()
                         .AuthenticateUsingToken("eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY")
                         .Build();

```

</TabItem>
</Tabs>
````

## Enable token authentication

For how to enable token authentication on a Pulsar cluster, refer to the guide below.

JWT supports two different kinds of keys in order to generate and validate the tokens:

 * Symmetric:
    - You can use a single ***secret*** key to generate and validate tokens.
 * Asymmetric: a pair of keys consisting of a private key and a public key.
    - You can use the ***private*** key to generate tokens.
    - You can use the ***public*** key to validate tokens.

### Create a secret key

When you use a secret key, the administrator creates the key and uses it to generate the client tokens. You can also configure brokers with this key in order to validate clients.

The output file is generated in the root of your Pulsar installation directory.
You can also provide an absolute path for the output file using the command below.

```shell

$ bin/pulsar tokens create-secret-key --output my-secret.key

```

Enter this command to generate a base64-encoded secret key.

```shell

$ bin/pulsar tokens create-secret-key --output /opt/my-secret.key --base64

```

### Create a key pair

When you use public and private keys, you need to create a pair of keys. Pulsar supports all algorithms that the Java JWT library (shown [here](https://github.com/jwtk/jjwt#signature-algorithms-keys)) supports.

The output file is generated in the root of your Pulsar installation directory. You can also provide an absolute path for the output file using the command below.

```shell

$ bin/pulsar tokens create-key-pair --output-private-key my-private.key --output-public-key my-public.key

```

 * Store `my-private.key` in a safe location; only the administrator can use `my-private.key` to generate new tokens.
 * `my-public.key` is distributed to all Pulsar brokers. You can publicly share this file without any security concern.

### Generate tokens

A token is the credential associated with a user. The association is done through the "principal" or "role". In the case of JWT tokens, this field is typically referred to as the **subject**, though they are exactly the same concept.

You need to use this command to require the generated token to have a **subject** field set.

```shell

$ bin/pulsar tokens create --secret-key file:///path/to/my-secret.key \
            --subject test-user

```

This command prints the token string on stdout.

Similarly, you can create a token by passing the "private" key using the command below:

```shell

$ bin/pulsar tokens create --private-key file:///path/to/my-private.key \
            --subject test-user

```

Finally, you can enter the following command to create a token with a pre-defined TTL, after which the token is automatically invalidated.

```shell

$ bin/pulsar tokens create --secret-key file:///path/to/my-secret.key \
            --subject test-user \
            --expiry-time 1y

```

### Authorization

The token itself does not have any permissions associated with it. The authorization engine determines whether the token should have permissions or not. Once you have created the token, you can grant permissions for this token to do certain actions. The following is an example.

```shell

$ bin/pulsar-admin namespaces grant-permission my-tenant/my-namespace \
            --role test-user \
            --actions produce,consume

```

### Enable token authentication on Brokers

To configure brokers to authenticate clients, add the following parameters to `broker.conf`:

```properties

# Configuration to enable authentication and authorization
authenticationEnabled=true
authorizationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken

# Authentication settings of the broker itself. Used when the broker connects to other brokers, either in same or other clusters
brokerClientTlsEnabled=true
brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken
brokerClientAuthenticationParameters={"token":"eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0LXVzZXIifQ.9OHgE9ZUDeBTZs7nSMEFIuGNEX18FLR3qvy8mqxSxXw"}
# Either configure the token string or specify to read it from a file. The following three available formats are all valid:
# brokerClientAuthenticationParameters={"token":"your-token-string"}
# brokerClientAuthenticationParameters=token:your-token-string
# brokerClientAuthenticationParameters=file:///path/to/token
brokerClientTrustCertsFilePath=/path/my-ca/certs/ca.cert.pem

# If this flag is set then the broker authenticates the original Auth data
# else it just accepts the originalPrincipal and authorizes it (if required).
authenticateOriginalAuthData=true

# If using a secret key (Note: key files must be DER-encoded)
tokenSecretKey=file:///path/to/secret.key
# The key can also be passed inline:
# tokenSecretKey=data:;base64,FLFyW0oLJ2Fi22KKCm21J18mbAdztfSHN/lAT5ucEKU=

# If using public/private keys (Note: key files must be DER-encoded)
# tokenPublicKey=file:///path/to/public.key

```

### Enable token authentication on Proxies

To configure proxies to authenticate clients, add the following parameters to `proxy.conf`:

The proxy uses its own token when connecting to brokers. You need to configure the role token for this key pair in the `proxyRoles` of the brokers. For more details, see the [authorization guide](security-authorization.md).

```properties

# For clients connecting to the proxy
authenticationEnabled=true
authorizationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken
tokenSecretKey=file:///path/to/secret.key

# For the proxy to connect to brokers
brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken
brokerClientAuthenticationParameters={"token":"eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0LXVzZXIifQ.9OHgE9ZUDeBTZs7nSMEFIuGNEX18FLR3qvy8mqxSxXw"}
# Either configure the token string or specify to read it from a file. The following three available formats are all valid:
# brokerClientAuthenticationParameters={"token":"your-token-string"}
# brokerClientAuthenticationParameters=token:your-token-string
# brokerClientAuthenticationParameters=file:///path/to/token

# Whether client authorization credentials are forwarded to the broker for re-authorization.
# Authentication must be enabled via authenticationEnabled=true for this to take effect.
forwardAuthorizationCredentials=true

```

diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/security-kerberos.md b/site2/website/versioned_docs/version-2.9.2-deprecated/security-kerberos.md
deleted file mode 100644
index c49fa3bea1fce0..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/security-kerberos.md
+++ /dev/null
@@ -1,443 +0,0 @@
---
id: security-kerberos
title: Authentication using Kerberos
sidebar_label: "Authentication using Kerberos"
original_id: security-kerberos
---

[Kerberos](https://web.mit.edu/kerberos/) is a network authentication protocol. By using secret-key cryptography, [Kerberos](https://web.mit.edu/kerberos/) is designed to provide strong authentication for client applications and server applications.

In Pulsar, you can use Kerberos with [SASL](https://en.wikipedia.org/wiki/Simple_Authentication_and_Security_Layer) as a choice for authentication. Pulsar uses the [Java Authentication and Authorization Service (JAAS)](https://en.wikipedia.org/wiki/Java_Authentication_and_Authorization_Service) for SASL configuration, so you need to provide JAAS configurations for Kerberos authentication.

This document introduces in detail how to configure `Kerberos` with `SASL` between Pulsar clients and brokers, and how to configure Kerberos for the Pulsar proxy.

## Configuration for Kerberos between Client and Broker

### Prerequisites

To begin, you need to set up (or already have) a [Key Distribution Center (KDC)](https://en.wikipedia.org/wiki/Key_distribution_center). You also need to configure and run the KDC in advance.
If your organization already uses a Kerberos server (for example, by using `Active Directory`), you do not have to install a new server for Pulsar. If your organization does not use a Kerberos server, you need to install one. Your Linux vendor might have packages for `Kerberos`. For how to install and configure Kerberos, refer to [Ubuntu](https://help.ubuntu.com/community/Kerberos) and
[Redhat](https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Managing_Smart_Cards/installing-kerberos.html).

Note that if you use Oracle Java, you need to download the JCE policy files for your Java version and copy them to the `$JAVA_HOME/jre/lib/security` directory.

#### Kerberos principals

If you use an existing Kerberos system, ask your Kerberos administrator for a principal for each broker in your cluster and for every operating system user that accesses Pulsar with Kerberos authentication (via clients and tools).

If you have installed your own Kerberos system, you can create these principals with the following commands:

```shell

### add Principals for broker
sudo /usr/sbin/kadmin.local -q 'addprinc -randkey broker/{hostname}@{REALM}'
sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{broker-keytabname}.keytab broker/{hostname}@{REALM}"
### add Principals for client
sudo /usr/sbin/kadmin.local -q 'addprinc -randkey client/{hostname}@{REALM}'
sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{client-keytabname}.keytab client/{hostname}@{REALM}"

```

Note that *Kerberos* requires that all your hosts can be resolved by their FQDNs.

The first part of the broker principal (for example, `broker` in `broker/{hostname}@{REALM}`) is the `serverType` of each host. The suggested values of `serverType` are `broker` (the host machine runs the Pulsar broker service) and `proxy` (the host machine runs the Pulsar proxy service).

#### Configure how to connect to KDC

You need to enter the command below to specify the path to the `krb5.conf` file for both the client side and the broker side. The content of the `krb5.conf` file indicates the default realm and KDC information. See [JDK's Kerberos Requirements](https://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/KerberosReq.html) for more details.

```shell

-Djava.security.krb5.conf=/etc/pulsar/krb5.conf

```

Here is an example of the krb5.conf file:

In the configuration file, `EXAMPLE.COM` is the default realm; `kdc = localhost:62037` is the KDC server URL for realm `EXAMPLE.COM`:

```

[libdefaults]
 default_realm = EXAMPLE.COM

[realms]
 EXAMPLE.COM = {
  kdc = localhost:62037
 }

```

Usually, machines configured with Kerberos already have a system-wide configuration, so this configuration is optional.

#### JAAS configuration file

You need a JAAS configuration file for the client side and the broker side. The JAAS configuration file provides the information used to connect to the KDC.
Here is an example named `pulsar_jaas.conf`:

```

 PulsarBroker {
 com.sun.security.auth.module.Krb5LoginModule required
 useKeyTab=true
 storeKey=true
 useTicketCache=false
 keyTab="/etc/security/keytabs/pulsarbroker.keytab"
 principal="broker/localhost@EXAMPLE.COM";
};

 PulsarClient {
 com.sun.security.auth.module.Krb5LoginModule required
 useKeyTab=true
 storeKey=true
 useTicketCache=false
 keyTab="/etc/security/keytabs/pulsarclient.keytab"
 principal="client/localhost@EXAMPLE.COM";
};

```

You need to set the `JAAS` configuration file path as a JVM parameter for the client and the broker. For example:

```shell

 -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf

```

In the `pulsar_jaas.conf` file above:

1. `PulsarBroker` is a section name in the JAAS file that each broker uses. This section tells the broker which Kerberos principal to use and the location of the keytab where the principal is stored. `PulsarBroker` allows the broker to use the keytab specified in this section.
2. `PulsarClient` is a section name in the JAAS file that each client uses. This section tells the client which Kerberos principal to use and the location of the keytab where the principal is stored. `PulsarClient` allows the client to use the keytab specified in this section.
   The following examples also reuse this `PulsarClient` section in both the Pulsar internal admin configuration and in the CLI commands `bin/pulsar-client`, `bin/pulsar-perf`, and `bin/pulsar-admin`. You can also add different sections for different use cases.

You can have two separate JAAS configuration files:
* the file for a broker, which has both the `PulsarBroker` and `PulsarClient` sections;
* the file for a client, which has only a `PulsarClient` section.


### Kerberos configuration for Brokers

#### Configure the `broker.conf` file

In the `broker.conf` file, set the Kerberos-related configurations:

 - Set `authenticationEnabled` to `true`;
 - Set `authenticationProviders` to choose `AuthenticationProviderSasl`;
 - Set `saslJaasClientAllowedIds` to a regex of the principals that are allowed to connect to the broker;
 - Set `saslJaasBrokerSectionName` to the broker section name in the JAAS configuration file;

To make the Pulsar internal admin client work properly, you also need to set the following in the `broker.conf` file:
 - Set `brokerClientAuthenticationPlugin` to the client plugin `AuthenticationSasl`;
 - Set `brokerClientAuthenticationParameters` to the JSON string `{"saslJaasClientSectionName":"PulsarClient", "serverType":"broker"}`, in which `PulsarClient` is the section name in the `pulsar_jaas.conf` file and `"serverType":"broker"` indicates that the internal admin client connects to a Pulsar Broker;

Here is an example:

```

authenticationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderSasl
saslJaasClientAllowedIds=.*client.*
saslJaasBrokerSectionName=PulsarBroker

## Authentication settings of the broker itself. Used when the broker connects to other brokers
brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationSasl
brokerClientAuthenticationParameters={"saslJaasClientSectionName":"PulsarClient", "serverType":"broker"}

```

#### Set Broker JVM parameter

Set the JVM parameters for the JAAS configuration file and the krb5 configuration file with additional options:

```shell

   -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf

```

You can add this at the end of `PULSAR_EXTRA_OPTS` in the file [`pulsar_env.sh`](https://github.com/apache/pulsar/blob/master/conf/pulsar_env.sh).

You must ensure that the operating system user who starts the broker can reach the keytabs configured in the `pulsar_jaas.conf` file and the KDC server configured in the `krb5.conf` file.

### Kerberos configuration for clients

#### Java Client and Java Admin Client

In your client application, include `pulsar-client-auth-sasl` in your project dependencies:

```xml

<dependency>
  <groupId>org.apache.pulsar</groupId>
  <artifactId>pulsar-client-auth-sasl</artifactId>
  <version>${pulsar.version}</version>
</dependency>

```

Configure the authentication type to `AuthenticationSasl` and provide the authentication parameters to it.

You need two parameters:
- `saslJaasClientSectionName`: the client section name in the JAAS configuration file;
- `serverType`: whether this client connects to a broker or a proxy. The client uses this parameter to know which server-side principal to expect.

To authenticate between client and broker with the settings in the JAAS configuration file above, set `saslJaasClientSectionName` to `PulsarClient` and `serverType` to `broker`.

The following is an example of creating a Java client:

 ```java

 System.setProperty("java.security.auth.login.config", "/etc/pulsar/pulsar_jaas.conf");
 System.setProperty("java.security.krb5.conf", "/etc/pulsar/krb5.conf");

 Map<String, String> authParams = Maps.newHashMap();
 authParams.put("saslJaasClientSectionName", "PulsarClient");
 authParams.put("serverType", "broker");

 Authentication saslAuth = AuthenticationFactory
         .create(org.apache.pulsar.client.impl.auth.AuthenticationSasl.class.getName(), authParams);

 PulsarClient client = PulsarClient.builder()
         .serviceUrl("pulsar://my-broker.com:6650")
         .authentication(saslAuth)
         .build();

 ```

> The first two lines in the example above are hard-coded; alternatively, you can set additional JVM parameters for the JAAS and krb5 configuration files when you run the application, like below:

```

java -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf -cp $APP-jar-with-dependencies.jar $CLASSNAME

```

You must ensure that the operating system user who starts the Pulsar client can reach the keytabs configured in the `pulsar_jaas.conf` file and the KDC server configured in the `krb5.conf` file.

#### Configure CLI tools

If you use a command-line tool (such as `bin/pulsar-client`, `bin/pulsar-perf`, or `bin/pulsar-admin`), you need to perform the following steps:

Step 1. Add the following parameters to your `client.conf`:

```shell

authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationSasl
authParams={"saslJaasClientSectionName":"PulsarClient", "serverType":"broker"}

```

Step 2. Set the JVM parameters for the JAAS configuration file and the krb5 configuration file with additional options:

```shell

   -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf

```

You can add this at the end of `PULSAR_EXTRA_OPTS` in the file [`pulsar_tools_env.sh`](https://github.com/apache/pulsar/blob/master/conf/pulsar_tools_env.sh),
or add the line `OPTS="$OPTS -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf "` directly to the CLI tool script.

These configurations have the same meaning as in the Java client section above.

## Kerberos configuration for working with Pulsar Proxy

With the above configuration, the client and the broker can authenticate each other using Kerberos.

A client that connects through Pulsar Proxy is slightly different: Pulsar Proxy (as a SASL server in Kerberos) first authenticates the client (as a SASL client in Kerberos), and then the Pulsar broker authenticates Pulsar Proxy.

The following sections show how to extend the above client and broker configuration for Pulsar Proxy.

### Create principal for Pulsar Proxy in Kerberos

In addition to the principals above, you need a new principal for Pulsar Proxy. If you already have principals for the client and the broker, you only need to add the proxy principal here.

```shell

### add Principals for Pulsar Proxy
sudo /usr/sbin/kadmin.local -q 'addprinc -randkey proxy/{hostname}@{REALM}'
sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{proxy-keytabname}.keytab proxy/{hostname}@{REALM}"
### add Principals for broker
sudo /usr/sbin/kadmin.local -q 'addprinc -randkey broker/{hostname}@{REALM}'
sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{broker-keytabname}.keytab broker/{hostname}@{REALM}"
### add Principals for client
sudo /usr/sbin/kadmin.local -q 'addprinc -randkey client/{hostname}@{REALM}'
sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{client-keytabname}.keytab client/{hostname}@{REALM}"

```

### Add a section in JAAS configuration file for Pulsar Proxy

Compared with the above configuration, add a new section for Pulsar Proxy to the JAAS configuration file.

Here is an example named `pulsar_jaas.conf`:

```

 PulsarBroker {
 com.sun.security.auth.module.Krb5LoginModule required
 useKeyTab=true
 storeKey=true
 useTicketCache=false
 keyTab="/etc/security/keytabs/pulsarbroker.keytab"
 principal="broker/localhost@EXAMPLE.COM";
};

 PulsarProxy {
 com.sun.security.auth.module.Krb5LoginModule required
 useKeyTab=true
 storeKey=true
 useTicketCache=false
 keyTab="/etc/security/keytabs/pulsarproxy.keytab"
 principal="proxy/localhost@EXAMPLE.COM";
};

 PulsarClient {
 com.sun.security.auth.module.Krb5LoginModule required
 useKeyTab=true
 storeKey=true
 useTicketCache=false
 keyTab="/etc/security/keytabs/pulsarclient.keytab"
 principal="client/localhost@EXAMPLE.COM";
};

```

### Proxy client configuration

The proxy client configuration is similar to the client and broker configuration above, except that you set `serverType` to `proxy` instead of `broker`, because the client performs Kerberos authentication with the proxy:

 ```java

 System.setProperty("java.security.auth.login.config", "/etc/pulsar/pulsar_jaas.conf");
 System.setProperty("java.security.krb5.conf", "/etc/pulsar/krb5.conf");

 Map<String, String> authParams = Maps.newHashMap();
 authParams.put("saslJaasClientSectionName", "PulsarClient");
 authParams.put("serverType", "proxy");  // ** this is the difference **

 Authentication saslAuth = AuthenticationFactory
         .create(org.apache.pulsar.client.impl.auth.AuthenticationSasl.class.getName(), authParams);

 PulsarClient client = PulsarClient.builder()
         .serviceUrl("pulsar://my-broker.com:6650")
         .authentication(saslAuth)
         .build();

 ```

> The first two lines in the example above are hard-coded; alternatively, you can set additional JVM parameters for the JAAS and krb5 configuration files when you run the application, like below:

```

java -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf -cp $APP-jar-with-dependencies.jar $CLASSNAME

```

### Kerberos configuration for Pulsar proxy service

In the `proxy.conf` file, set the Kerberos-related configuration. Here is an example:

```shell

## Settings for authenticating clients
authenticationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderSasl
saslJaasClientAllowedIds=.*client.*
saslJaasBrokerSectionName=PulsarProxy

## Settings for being authenticated by brokers
brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationSasl
brokerClientAuthenticationParameters={"saslJaasClientSectionName":"PulsarProxy", "serverType":"broker"}
forwardAuthorizationCredentials=true

```

The first part relates to authentication between the client and Pulsar Proxy. In this phase, the client works as a SASL client, while Pulsar Proxy works as a SASL server.

The second part relates to authentication between Pulsar Proxy and Pulsar Broker. In this phase, Pulsar Proxy works as a SASL client, while Pulsar Broker works as a SASL server.

### Broker side configuration

The broker-side configuration file is the same as the above `broker.conf`; you do not need any special configuration for Pulsar Proxy.

```

authenticationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderSasl
saslJaasClientAllowedIds=.*client.*
saslJaasBrokerSectionName=PulsarBroker

```

## Regarding authorization and role token

For Kerberos authentication, we usually use the authenticated principal as the role token for Pulsar authorization. For more information about authorization in Pulsar, see [security authorization](security-authorization.md).

If you enable `authorizationEnabled`, you need to set `superUserRoles` in `broker.conf` to the names registered in the KDC.

For example:

```bash

superUserRoles=client/{clientIp}@EXAMPLE.COM

```

## Regarding authentication between ZooKeeper and Broker

Pulsar Broker acts as a Kerberos client when it authenticates with ZooKeeper.
According to the [ZooKeeper documentation](https://cwiki.apache.org/confluence/display/ZOOKEEPER/Client-Server+mutual+authentication), you need these settings in `conf/zookeeper.conf`:

```

authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl

```

Add a `Client` section to the `pulsar_jaas.conf` file that Pulsar Broker uses:

```

 Client {
 com.sun.security.auth.module.Krb5LoginModule required
 useKeyTab=true
 storeKey=true
 useTicketCache=false
 keyTab="/etc/security/keytabs/pulsarbroker.keytab"
 principal="broker/localhost@EXAMPLE.COM";
};

```

In this section, the Pulsar Broker principal and keyTab file indicate the broker's role when it authenticates with ZooKeeper.

## Regarding authentication between BookKeeper and Broker

Pulsar Broker acts as a Kerberos client when it authenticates with bookies. According to the [BookKeeper documentation](http://bookkeeper.apache.org/docs/latest/security/sasl/), you need to add the `bookkeeperClientAuthenticationPlugin` parameter to `broker.conf`:

```

bookkeeperClientAuthenticationPlugin=org.apache.bookkeeper.sasl.SASLClientProviderFactory

```

In this setting, `SASLClientProviderFactory` creates a BookKeeper SASL client in the broker, and the broker uses this SASL client to authenticate with a bookie node.

Add a `BookKeeper` section to the `pulsar_jaas.conf` file that Pulsar Broker uses:

```

 BookKeeper {
 com.sun.security.auth.module.Krb5LoginModule required
 useKeyTab=true
 storeKey=true
 useTicketCache=false
 keyTab="/etc/security/keytabs/pulsarbroker.keytab"
 principal="broker/localhost@EXAMPLE.COM";
};

```

In this section, the Pulsar Broker principal and keyTab file indicate the broker's role when it authenticates with bookies.
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/security-oauth2.md b/site2/website/versioned_docs/version-2.9.2-deprecated/security-oauth2.md
deleted file mode 100644
index ecafaf85f6bdfb..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/security-oauth2.md
+++ /dev/null
@@ -1,232 +0,0 @@
---
id: security-oauth2
title: Client authentication using OAuth 2.0 access tokens
sidebar_label: "Authentication using OAuth 2.0 access tokens"
original_id: security-oauth2
---

Pulsar supports authenticating clients using OAuth 2.0 access tokens. You can use OAuth 2.0 access tokens to identify a Pulsar client and associate the Pulsar client with some "principal" (or "role"), which is permitted to perform some actions, such as publishing messages to a topic or consuming messages from a topic.

This module supports the Pulsar client authentication plugin for OAuth 2.0. After communicating with the OAuth 2.0 server, the Pulsar client gets an access token from it and passes this access token to the Pulsar broker for authentication. The broker can use `org.apache.pulsar.broker.authentication.AuthenticationProviderToken`, or you can add your own `AuthenticationProvider` to work with this module.

## Authentication provider configuration

This library allows you to authenticate the Pulsar client by using an access token that is obtained from an OAuth 2.0 authorization service, which acts as a _token issuer_.

### Authentication types

The authentication type determines how to obtain an access token through an OAuth 2.0 authorization flow.

:::note

Currently, the Pulsar Java client only supports the `client_credentials` authentication type.

:::

#### Client credentials

The following table lists the parameters supported for the `client_credentials` authentication type.

| Parameter | Description | Example | Required or not |
| --- | --- | --- | --- |
| `type` | OAuth 2.0 authentication type. | `client_credentials` (default) | Optional |
| `issuerUrl` | URL of the authentication provider which allows the Pulsar client to obtain an access token | `https://accounts.google.com` | Required |
| `privateKey` | URL to a JSON credentials file | Supports the following pattern formats: <br /> 1. `file:///path/to/file` <br /> 2. `file:/path/to/file` <br /> 3. `data:application/json;base64,` | Required |
| `audience` | An OAuth 2.0 "resource server" identifier for the Pulsar cluster | `https://broker.example.com` | Optional |

The credentials file contains the service account credentials used with the `client_credentials` authentication type. The following shows an example of a credentials file, `credentials_file.json`.

```json

{
  "type": "client_credentials",
  "client_id": "d9ZyX97q1ef8Cr81WHVC4hFQ64vSlDK3",
  "client_secret": "on1uJ...k6F6R",
  "client_email": "1234567890-abcdefghijklmnopqrstuvwxyz@developer.gserviceaccount.com",
  "issuer_url": "https://accounts.google.com"
}

```

In the above example, the authentication type is set to `client_credentials` by default, and the fields `client_id` and `client_secret` are required.

### Typical original OAuth2 request mapping

The following shows a typical original OAuth2 request, which is used to obtain an access token from the OAuth2 server.

```bash

curl --request POST \
  --url https://dev-kt-aa9ne.us.auth0.com/oauth/token \
  --header 'content-type: application/json' \
  --data '{
  "client_id":"Xd23RHsUnvUlP7wchjNYOaIfazgeHd9x",
  "client_secret":"rT7ps7WY8uhdVuBTKWZkttwLdQotmdEliaM5rLfmgNibvqziZ-g07ZH52N_poGAb",
  "audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/",
  "grant_type":"client_credentials"}'

```

In the above example, the mapping relationship is as follows.

- The `issuerUrl` parameter in this plugin is mapped to `--url https://dev-kt-aa9ne.us.auth0.com`.
- The `privateKey` file parameter in this plugin should contain at least the `client_id` and `client_secret` fields.
- The `audience` parameter in this plugin is mapped to `"audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/"`. This field is only used by some identity providers.

## Client Configuration

You can use the OAuth2 authentication provider with the following Pulsar clients.

### Java

You can use the factory method to configure authentication for the Pulsar Java client.

```java

import org.apache.pulsar.client.impl.auth.oauth2.AuthenticationFactoryOAuth2;

URL issuerUrl = new URL("https://dev-kt-aa9ne.us.auth0.com");
URL credentialsUrl = new URL("file:///path/to/KeyFile.json");
String audience = "https://dev-kt-aa9ne.us.auth0.com/api/v2/";

PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar://broker.example.com:6650/")
    .authentication(
        AuthenticationFactoryOAuth2.clientCredentials(issuerUrl, credentialsUrl, audience))
    .build();

```

In addition, you can use encoded parameters to configure authentication for the Pulsar Java client.

```java

Authentication auth = AuthenticationFactory
    .create(AuthenticationOAuth2.class.getName(),
        "{\"type\":\"client_credentials\",\"privateKey\":\"./key/path/..\",\"issuerUrl\":\"...\",\"audience\":\"...\"}");
PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar://broker.example.com:6650/")
    .authentication(auth)
    .build();

```

### C++ client

The C++ client is similar to the Java client. You need to provide the parameters `issuer_url`, `private_key` (the credentials file path), and `audience`:

```c++

#include <pulsar/Client.h>

pulsar::ClientConfiguration config;
std::string params = R"({
    "issuer_url": "https://dev-kt-aa9ne.us.auth0.com",
    "private_key": "../../pulsar-broker/src/test/resources/authentication/token/cpp_credentials_file.json",
    "audience": "https://dev-kt-aa9ne.us.auth0.com/api/v2/"})";

config.setAuth(pulsar::AuthOauth2::create(params));

pulsar::Client client("pulsar://broker.example.com:6650/", config);

```

### Go client

To enable OAuth2 authentication in the Go client, configure OAuth2 authentication as shown in the following example.

```go

oauth := pulsar.NewAuthenticationOAuth2(map[string]string{
		"type":       "client_credentials",
		"issuerUrl":  "https://dev-kt-aa9ne.us.auth0.com",
		"audience":   "https://dev-kt-aa9ne.us.auth0.com/api/v2/",
		"privateKey": "/path/to/privateKey",
		"clientId":   "0Xx...Yyxeny",
	})
client, err := pulsar.NewClient(pulsar.ClientOptions{
		URL:            "pulsar://my-cluster:6650",
		Authentication: oauth,
})

```

### Python client

To enable OAuth2 authentication in the Python client, configure OAuth2 authentication as shown in the following example.

```python

from pulsar import Client, AuthenticationOauth2

params = '''
{
    "issuer_url": "https://dev-kt-aa9ne.us.auth0.com",
    "private_key": "/path/to/privateKey",
    "audience": "https://dev-kt-aa9ne.us.auth0.com/api/v2/"
}
'''

client = Client("pulsar://my-cluster:6650", authentication=AuthenticationOauth2(params))

```

## CLI configuration

This section describes how to use the Pulsar CLI tools to connect to a cluster through the OAuth2 authentication plugin.

### pulsar-admin

This example shows how to use pulsar-admin to connect to a cluster through the OAuth2 authentication plugin.

```shell script

bin/pulsar-admin --admin-url https://streamnative.cloud:443 \
--auth-plugin org.apache.pulsar.client.impl.auth.oauth2.AuthenticationOAuth2 \
--auth-params '{"privateKey":"file:///path/to/key/file.json",
    "issuerUrl":"https://dev-kt-aa9ne.us.auth0.com",
    "audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/"}' \
tenants list

```

Set the `admin-url` parameter to the web service URL, which is a combination of the protocol, hostname, and port, such as `https://localhost:8443`.
Set the `privateKey`, `issuerUrl`, and `audience` parameters to values based on the configuration in the key file. For details, see [authentication types](#authentication-types).

### pulsar-client

This example shows how to use pulsar-client to connect to a cluster through the OAuth2 authentication plugin.

```shell script

bin/pulsar-client \
--url SERVICE_URL \
--auth-plugin org.apache.pulsar.client.impl.auth.oauth2.AuthenticationOAuth2 \
--auth-params '{"privateKey":"file:///path/to/key/file.json",
    "issuerUrl":"https://dev-kt-aa9ne.us.auth0.com",
    "audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/"}' \
produce test-topic -m "test-message" -n 10

```

Set the `url` parameter to the broker service URL, which is a combination of the protocol, hostname, and port, such as `pulsar://localhost:6650`.
Set the `privateKey`, `issuerUrl`, and `audience` parameters to values based on the configuration in the key file. For details, see [authentication types](#authentication-types).

### pulsar-perf

This example shows how to use pulsar-perf to connect to a cluster through the OAuth2 authentication plugin.

```shell script

bin/pulsar-perf produce --service-url pulsar+ssl://streamnative.cloud:6651 \
--auth-plugin org.apache.pulsar.client.impl.auth.oauth2.AuthenticationOAuth2 \
--auth-params '{"privateKey":"file:///path/to/key/file.json",
    "issuerUrl":"https://dev-kt-aa9ne.us.auth0.com",
    "audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/"}' \
-r 1000 -s 1024 test-topic

```

Set the `--service-url` parameter to the broker service URL, which is a combination of the protocol, hostname, and port, such as `pulsar://localhost:6650`.
Set the `privateKey`, `issuerUrl`, and `audience` parameters to values based on the configuration in the key file. For details, see [authentication types](#authentication-types).
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/security-overview.md b/site2/website/versioned_docs/version-2.9.2-deprecated/security-overview.md
deleted file mode 100644
index a8120f984bf82b..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/security-overview.md
+++ /dev/null
@@ -1,37 +0,0 @@
---
id: security-overview
title: Pulsar security overview
sidebar_label: "Overview"
original_id: security-overview
---

As the central message bus for a business, Apache Pulsar is frequently used for storing mission-critical data. Therefore, enabling security features in Pulsar is crucial.

By default, Pulsar configures no encryption, authentication, or authorization. Any client can communicate with Apache Pulsar via plain-text service URLs, so you must ensure that access via these plain-text service URLs is restricted to trusted clients only. You can use network segmentation and/or authorization ACLs to restrict access to trusted IPs. If you use neither, the cluster is wide open and anyone can access it.

Pulsar supports a pluggable authentication mechanism, which Pulsar clients use to authenticate with brokers and proxies. You can also configure Pulsar to support multiple authentication sources.

The Pulsar broker validates the authentication credentials when a connection is established. After the initial connection is authenticated, the "principal" token is stored for authorization, and the connection is not re-authenticated on every operation. The broker periodically checks the expiration status of every `ServerCnx` object. You can set `authenticationRefreshCheckSeconds` on the broker to control how frequently the expiration status is checked; by default, it is set to 60 seconds. When the authentication expires, the broker forces the connection to re-authenticate. If the re-authentication fails, the broker disconnects the client.

The broker can learn whether a particular client supports authentication refreshing. If a client supports authentication refreshing and the credential is expired, the authentication provider calls the `refreshAuthentication` method to initiate the refreshing process. If a client does not support authentication refreshing and the credential is expired, the broker disconnects the client.

You should secure all service components in your Apache Pulsar deployment.

## Role tokens

In Pulsar, a *role* is a string, like `admin` or `app1`, which can represent a single client or multiple clients. You can use roles to control permissions for clients to produce to or consume from certain topics, administer the configuration of tenants, and so on.
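For example, here is a minimal sketch of granting such a role permissions with the Java admin client; the service URL, namespace, and role name below are illustrative placeholders, not values prescribed by Pulsar:

```java

import java.util.EnumSet;

import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.common.policies.data.AuthAction;

// Connect to the admin API (placeholder URL; substitute your own web service URL).
PulsarAdmin admin = PulsarAdmin.builder()
        .serviceHttpUrl("http://localhost:8080")
        .build();

// Clients that authenticate as the role "app1" may now produce to and
// consume from every topic in the namespace "my-tenant/my-namespace".
admin.namespaces().grantPermissionOnNamespace(
        "my-tenant/my-namespace",
        "app1",
        EnumSet.of(AuthAction.produce, AuthAction.consume));

admin.close();

```

The same grant can also be made from the command line with `bin/pulsar-admin namespaces grant-permission my-tenant/my-namespace --role app1 --actions produce,consume`.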

Apache Pulsar uses an [Authentication Provider](#authentication-providers) to establish the identity of a client and then assign a *role token* to that client. This role token is then used for [Authorization and ACLs](security-authorization.md) to determine what the client is authorized to do.

## Authentication providers

Currently, Pulsar supports the following authentication providers:

- [TLS Authentication](security-tls-authentication.md)
- [Athenz](security-athenz.md)
- [Kerberos](security-kerberos.md)
- [JSON Web Token Authentication](security-jwt.md)
- [OAuth 2.0 authentication](security-oauth2.md)
- [HTTP basic authentication](security-basic-auth.md)


diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/security-tls-authentication.md b/site2/website/versioned_docs/version-2.9.2-deprecated/security-tls-authentication.md
deleted file mode 100644
index 85d2240f413060..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/security-tls-authentication.md
+++ /dev/null
@@ -1,222 +0,0 @@
---
id: security-tls-authentication
title: Authentication using TLS
sidebar_label: "Authentication using TLS"
original_id: security-tls-authentication
---

## TLS authentication overview

TLS authentication is an extension of [TLS transport encryption](security-tls-transport.md). Not only do servers have keys and certs that the client uses to verify their identity, but clients also have keys and certs that the server uses to verify their identity. You must have TLS transport encryption configured on your cluster before you can use TLS authentication. This guide assumes you already have TLS transport encryption configured.

`Bouncy Castle Provider` provides TLS-related cipher suites and algorithms in Pulsar. If you need the [FIPS](https://www.bouncycastle.org/fips_faq.html) version of `Bouncy Castle Provider`, refer to the [Bouncy Castle page](security-bouncy-castle.md).

### Create client certificates

Client certificates are generated using the certificate authority. Server certificates are also generated with the same certificate authority.

The biggest difference between client certs and server certs is that the **common name** for the client certificate is the **role token** that the client is authenticated as.

To use client certificates, you need to set `tlsRequireTrustedClientCertOnConnect=true` on the broker side. For details, refer to [TLS broker configuration](security-tls-transport.md#configure-broker).

First, enter the following command to generate the key:

```bash

$ openssl genrsa -out admin.key.pem 2048

```

Similar to the broker, the client expects the key to be in [PKCS 8](https://en.wikipedia.org/wiki/PKCS_8) format, so you need to convert it by entering the following command:

```bash

$ openssl pkcs8 -topk8 -inform PEM -outform PEM \
      -in admin.key.pem -out admin.key-pk8.pem -nocrypt

```

Next, enter the command below to generate the certificate request. When you are asked for a **common name**, enter the **role token** that you want this key pair to authenticate a client as.

```bash

$ openssl req -config openssl.cnf \
      -key admin.key.pem -new -sha256 -out admin.csr.pem

```

:::note

If `openssl.cnf` is not specified, read [Certificate authority](http://pulsar.apache.org/docs/en/security-tls-transport/#certificate-authority) to get it.

:::

Then, enter the command below to sign the request with the certificate authority.
Note that the client cert uses the **usr_cert** extension, which allows the cert to be used for client authentication.

```bash

$ openssl ca -config openssl.cnf -extensions usr_cert \
      -days 1000 -notext -md sha256 \
      -in admin.csr.pem -out admin.cert.pem

```

This gives you a cert, `admin.cert.pem`, and a key, `admin.key-pk8.pem`. Together with `ca.cert.pem`, clients can use this cert and key to authenticate themselves to brokers and proxies as the role token `admin`.

:::note

If the "unable to load CA private key" error occurs in this step with the reason "No such file or directory: /etc/pki/CA/private/cakey.pem", try the commands below to generate `cakey.pem`:

```bash

$ cd /etc/pki/tls/misc/CA
$ ./CA -newca

```

:::

## Enable TLS authentication on brokers

To configure brokers to authenticate clients, add the following parameters to `broker.conf`, alongside [the configuration to enable TLS transport](security-tls-transport.md#broker-configuration):

```properties

# Configuration to enable authentication
authenticationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderTls

# Roles that are allowed to do all admin operations and publish/consume from all topics
superUserRoles=admin

# Authentication settings of the broker itself. Used when the broker connects to other brokers, either in same or other clusters
brokerClientTlsEnabled=true
brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationTls
brokerClientAuthenticationParameters={"tlsCertFile":"/path/my-ca/admin.cert.pem","tlsKeyFile":"/path/my-ca/admin.key-pk8.pem"}
brokerClientTrustCertsFilePath=/path/my-ca/certs/ca.cert.pem

```

## Enable TLS authentication on proxies

To configure proxies to authenticate clients, add the following parameters to `proxy.conf`, alongside [the configuration to enable TLS transport](security-tls-transport.md#proxy-configuration).

The proxy should have its own client key pair for connecting to brokers. You need to configure the role token for this key pair in the `proxyRoles` setting of the brokers. See the [authorization guide](security-authorization.md) for more details.

```properties

# For clients connecting to the proxy
authenticationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderTls

# For the proxy to connect to brokers
brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationTls
brokerClientAuthenticationParameters=tlsCertFile:/path/to/proxy.cert.pem,tlsKeyFile:/path/to/proxy.key-pk8.pem

```

## Client configuration

When you use TLS authentication, the client connects via TLS transport. You need to configure the client to use `https://` and port 8443 for the web service URL, and `pulsar+ssl://` and port 6651 for the broker service URL.

### CLI tools

[Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-pulsar-admin.md), [`pulsar-perf`](reference-cli-tools.md#pulsar-perf), and [`pulsar-client`](reference-cli-tools.md#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation.

You need to add the following parameters to that file to use TLS authentication with the CLI tools of Pulsar:

```properties

webServiceUrl=https://broker.example.com:8443/
brokerServiceUrl=pulsar+ssl://broker.example.com:6651/
useTls=true
tlsAllowInsecureConnection=false
tlsTrustCertsFilePath=/path/to/ca.cert.pem
authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationTls
authParams=tlsCertFile:/path/to/my-role.cert.pem,tlsKeyFile:/path/to/my-role.key-pk8.pem

```

### Java client

```java

import org.apache.pulsar.client.api.PulsarClient;

PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar+ssl://broker.example.com:6651/")
    .enableTls(true)
    .tlsTrustCertsFilePath("/path/to/ca.cert.pem")
    .authentication("org.apache.pulsar.client.impl.auth.AuthenticationTls",
                    "tlsCertFile:/path/to/my-role.cert.pem,tlsKeyFile:/path/to/my-role.key-pk8.pem")
    .build();

```

### Python client

```python

from pulsar import Client, AuthenticationTLS

auth = AuthenticationTLS("/path/to/my-role.cert.pem", "/path/to/my-role.key-pk8.pem")
client = Client("pulsar+ssl://broker.example.com:6651/",
                tls_trust_certs_file_path="/path/to/ca.cert.pem",
                tls_allow_insecure_connection=False,
                authentication=auth)

```

### C++ client

```c++

#include <pulsar/Client.h>

pulsar::ClientConfiguration config;
config.setUseTls(true);
config.setTlsTrustCertsFilePath("/path/to/ca.cert.pem");
config.setTlsAllowInsecureConnection(false);

pulsar::AuthenticationPtr auth = pulsar::AuthTls::create("/path/to/my-role.cert.pem",
                                                         "/path/to/my-role.key-pk8.pem");
config.setAuth(auth);

pulsar::Client client("pulsar+ssl://broker.example.com:6651/", config);

```

### Node.js client

```JavaScript

const Pulsar = require('pulsar-client');

(async () => {
  const auth = new Pulsar.AuthenticationTls({
    certificatePath: '/path/to/my-role.cert.pem',
    privateKeyPath: '/path/to/my-role.key-pk8.pem',
  });

  const client = new Pulsar.Client({
    serviceUrl: 'pulsar+ssl://broker.example.com:6651/',
    authentication: auth,
    tlsTrustCertsFilePath: '/path/to/ca.cert.pem',
  });
})();

```

### C# client

```c#

var clientCertificate = new X509Certificate2("admin.pfx");
var client = PulsarClient.Builder()
                         .AuthenticateUsingClientCertificate(clientCertificate)
                         .Build();

```

diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/security-tls-keystore.md b/site2/website/versioned_docs/version-2.9.2-deprecated/security-tls-keystore.md
deleted file mode 100644
index 0b3b50fcebb104..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/security-tls-keystore.md
+++ /dev/null
@@ -1,342 +0,0 @@
---
id: security-tls-keystore
title: Using TLS with KeyStore configure
sidebar_label: "Using TLS with KeyStore configure"
original_id: security-tls-keystore
---

## Overview

Apache Pulsar supports [TLS encryption](security-tls-transport.md) and [TLS authentication](security-tls-authentication.md) between clients and the Apache Pulsar service.
By default, it uses PEM-format file configuration. This page describes how to use [KeyStore](https://en.wikipedia.org/wiki/Java_KeyStore) type configuration for TLS.


## TLS encryption with KeyStore configure

### Generate TLS key and certificate

The first step of deploying TLS is to generate the key and the certificate for each machine in the cluster.
You can use Java’s `keytool` utility to accomplish this task.
We will initially generate the key into a temporary keystore for the broker, so that we can export and sign it later with the CA.

```shell

keytool -keystore broker.keystore.jks -alias localhost -validity {validity} -genkeypair -keyalg RSA

```

You need to specify two parameters in the above command:

1. `keystore`: the keystore file that stores the certificate. The *keystore* file contains the private key of
   the certificate; hence, it needs to be kept safely.
2. `validity`: the valid time of the certificate in days.

> Ensure that the common name (CN) matches exactly the fully qualified domain name (FQDN) of the server.
The client compares the CN with the DNS domain name to ensure that it is indeed connecting to the desired server, not a malicious one.

### Creating your own CA

After the first step, each broker in the cluster has a public-private key pair, and a certificate to identify the machine.
The certificate, however, is unsigned, which means that an attacker can create such a certificate to pretend to be any machine.

Therefore, it is important to prevent forged certificates by signing them for each machine in the cluster.
A `certificate authority (CA)` is responsible for signing certificates. A CA works like a government that issues passports:
the government stamps (signs) each passport so that the passport becomes difficult to forge. Other governments verify the stamps
to ensure the passport is authentic. Similarly, the CA signs the certificates, and the cryptography guarantees that a signed
certificate is computationally difficult to forge. Thus, as long as the CA is a genuine and trusted authority, the clients have
high assurance that they are connecting to the authentic machines.

```shell

openssl req -new -x509 -keyout ca-key -out ca-cert -days 365

```

The generated CA is simply a *public-private* key pair and certificate, and it is intended to sign other certificates.

The next step is to add the generated CA to the clients' truststore so that the clients can trust this CA:

```shell

keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert

```

NOTE: If you configure the brokers to require client authentication by setting `tlsRequireTrustedClientCertOnConnect` to `true` in the
broker configuration, then you must also provide a truststore for the brokers, and it should contain all the CA certificates that client keys were signed by.

```shell

keytool -keystore broker.truststore.jks -alias CARoot -import -file ca-cert

```

In contrast to the keystore, which stores each machine’s own identity, the truststore of a client stores all the certificates
that the client should trust. Importing a certificate into one’s truststore also means trusting all certificates that are signed
by that certificate. As in the analogy above, trusting the government (CA) also means trusting all passports (certificates) that
it has issued. This attribute is called the chain of trust, and it is particularly useful when deploying TLS on a large Pulsar cluster.
You can sign all certificates in the cluster with a single CA, and have all machines share the same truststore that trusts the CA.
That way all machines can authenticate all other machines.


### Signing the certificate

The next step is to sign all certificates in the keystore with the CA we generated.
First, you need to export the certificate from the keystore:

```shell

keytool -keystore broker.keystore.jks -alias localhost -certreq -file cert-file

```

Then sign it with the CA:

```shell

openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days {validity} -CAcreateserial -passin pass:{ca-password}

```

Finally, you need to import both the certificate of the CA and the signed certificate into the keystore:

```shell

keytool -keystore broker.keystore.jks -alias CARoot -import -file ca-cert
keytool -keystore broker.keystore.jks -alias localhost -import -file cert-signed

```

The definitions of the parameters are the following:

1. `keystore`: the location of the keystore
2. `ca-cert`: the certificate of the CA
3. `ca-key`: the private key of the CA
4. `ca-password`: the passphrase of the CA
5. `cert-file`: the exported, unsigned certificate of the broker
6. `cert-signed`: the signed certificate of the broker

### Configuring brokers

Brokers enable TLS by providing valid `brokerServicePortTls` and `webServicePortTls` values, and by setting `tlsEnabledWithKeyStore` to `true` to use KeyStore type configuration.
Besides this, the KeyStore path, KeyStore password, TrustStore path, and TrustStore password need to be provided.
Since the broker creates an internal client/admin client to communicate with other brokers, you also need to provide configuration for it, similar to how you configure an external client/admin client.
If `tlsRequireTrustedClientCertOnConnect` is `true`, the broker rejects the connection if the client certificate is not trusted.

The following TLS configs are needed on the broker side:

```properties

tlsEnabledWithKeyStore=true
# key store
tlsKeyStoreType=JKS
tlsKeyStore=/var/private/tls/broker.keystore.jks
tlsKeyStorePassword=brokerpw

# trust store
tlsTrustStoreType=JKS
tlsTrustStore=/var/private/tls/broker.truststore.jks
tlsTrustStorePassword=brokerpw

# internal client/admin-client config
brokerClientTlsEnabled=true
brokerClientTlsEnabledWithKeyStore=true
brokerClientTlsTrustStoreType=JKS
brokerClientTlsTrustStore=/var/private/tls/client.truststore.jks
brokerClientTlsTrustStorePassword=clientpw

```

NOTE: it is important to restrict access to the store files via filesystem permissions.

If you have configured TLS on the broker and want to disable the non-TLS ports, you can set the values of the following configurations to empty, as below.

```

brokerServicePort=
webServicePort=

```

In this case, you need to set the following configurations.

```conf

brokerClientTlsEnabled=true // Set this to true
brokerClientTlsEnabledWithKeyStore=true  // Set this to true
brokerClientTlsTrustStore= // Set this to your desired value
brokerClientTlsTrustStorePassword= // Set this to your desired value

```

Optional settings that may be worth considering:

1. `tlsClientAuthentication=false`: Enable/Disable using TLS for authentication. When enabled, this config authenticates the other end
   of the communication channel. It should be enabled on both brokers and clients for mutual TLS.
2. `tlsCiphers=[TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256]`: A cipher suite is a named combination of authentication, encryption, MAC and key exchange
   algorithms used to negotiate the security settings for a network connection using the TLS protocol. By default,
   it is null. See
   [OpenSSL Ciphers](https://www.openssl.org/docs/man1.0.2/apps/ciphers.html) and
   [JDK Ciphers](http://docs.oracle.com/javase/8/docs/technotes/guides/security/StandardNames.html#ciphersuites) for the supported values.
3. `tlsProtocols=[TLSv1.3,TLSv1.2]`: list out the TLS protocols that you are going to accept from clients.
   By default, it is not set.

### Configuring Clients

This is similar to [TLS encryption configuring for clients with PEM type](security-tls-transport.md#client-configuration).
For a minimal configuration, you need to provide the TrustStore information.

For example:

1. For [command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-cli-tools#pulsar-admin), [`pulsar-perf`](reference-cli-tools#pulsar-perf), and [`pulsar-client`](reference-cli-tools#pulsar-client), use the `conf/client.conf` config file in a Pulsar installation:

   ```properties

   webServiceUrl=https://broker.example.com:8443/
   brokerServiceUrl=pulsar+ssl://broker.example.com:6651/
   useKeyStoreTls=true
   tlsTrustStoreType=JKS
   tlsTrustStorePath=/var/private/tls/client.truststore.jks
   tlsTrustStorePassword=clientpw

   ```

1. For the Java client:

   ```java

   import org.apache.pulsar.client.api.PulsarClient;

   PulsarClient client = PulsarClient.builder()
       .serviceUrl("pulsar+ssl://broker.example.com:6651/")
       .enableTls(true)
       .useKeyStoreTls(true)
       .tlsTrustStorePath("/var/private/tls/client.truststore.jks")
       .tlsTrustStorePassword("clientpw")
       .allowTlsInsecureConnection(false)
       .build();

   ```

1. For the Java admin client:

   ```java

   PulsarAdmin admin = PulsarAdmin.builder().serviceHttpUrl("https://broker.example.com:8443")
       .useKeyStoreTls(true)
       .tlsTrustStorePath("/var/private/tls/client.truststore.jks")
       .tlsTrustStorePassword("clientpw")
       .allowTlsInsecureConnection(false)
       .build();

   ```

## TLS authentication with KeyStore configure

This is similar to [TLS authentication with PEM type](security-tls-authentication.md).

### broker authentication config

`broker.conf`

```properties

# Configuration to enable authentication
authenticationEnabled=true
authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderTls

# this should be the CN of one of the client keystores.
superUserRoles=admin

# Enable KeyStore type
tlsEnabledWithKeyStore=true
tlsRequireTrustedClientCertOnConnect=true

# key store
tlsKeyStoreType=JKS
tlsKeyStore=/var/private/tls/broker.keystore.jks
tlsKeyStorePassword=brokerpw

# trust store
tlsTrustStoreType=JKS
tlsTrustStore=/var/private/tls/broker.truststore.jks
tlsTrustStorePassword=brokerpw

# internal client/admin-client config
brokerClientTlsEnabled=true
brokerClientTlsEnabledWithKeyStore=true
brokerClientTlsTrustStoreType=JKS
brokerClientTlsTrustStore=/var/private/tls/client.truststore.jks
brokerClientTlsTrustStorePassword=clientpw
# internal auth config
brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationKeyStoreTls
brokerClientAuthenticationParameters={"keyStoreType":"JKS","keyStorePath":"/var/private/tls/client.keystore.jks","keyStorePassword":"clientpw"}
# currently the WebSocket service does not support KeyStore type
webSocketServiceEnabled=false

```

### client authentication configuring

In addition to the TLS encryption configuration, the main work is to configure a KeyStore for the client that contains a certificate with a valid CN as the client role.

For example:
1. For [command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-cli-tools#pulsar-admin), [`pulsar-perf`](reference-cli-tools#pulsar-perf), and [`pulsar-client`](reference-cli-tools#pulsar-client), use the `conf/client.conf` config file in a Pulsar installation:

   ```properties

   webServiceUrl=https://broker.example.com:8443/
   brokerServiceUrl=pulsar+ssl://broker.example.com:6651/
   useKeyStoreTls=true
   tlsTrustStoreType=JKS
   tlsTrustStorePath=/var/private/tls/client.truststore.jks
   tlsTrustStorePassword=clientpw
   authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationKeyStoreTls
   authParams={"keyStoreType":"JKS","keyStorePath":"/path/to/keystorefile","keyStorePassword":"keystorepw"}

   ```

1. For the Java client:

   ```java

   import org.apache.pulsar.client.api.PulsarClient;

   PulsarClient client = PulsarClient.builder()
       .serviceUrl("pulsar+ssl://broker.example.com:6651/")
       .enableTls(true)
       .useKeyStoreTls(true)
       .tlsTrustStorePath("/var/private/tls/client.truststore.jks")
       .tlsTrustStorePassword("clientpw")
       .allowTlsInsecureConnection(false)
       .authentication(
               "org.apache.pulsar.client.impl.auth.AuthenticationKeyStoreTls",
               "keyStoreType:JKS,keyStorePath:/var/private/tls/client.keystore.jks,keyStorePassword:clientpw")
       .build();

   ```

1. For the Java admin client:

   ```java

   PulsarAdmin admin = PulsarAdmin.builder().serviceHttpUrl("https://broker.example.com:8443")
       .useKeyStoreTls(true)
       .tlsTrustStorePath("/var/private/tls/client.truststore.jks")
       .tlsTrustStorePassword("clientpw")
       .allowTlsInsecureConnection(false)
       .authentication(
               "org.apache.pulsar.client.impl.auth.AuthenticationKeyStoreTls",
               "keyStoreType:JKS,keyStorePath:/var/private/tls/client.keystore.jks,keyStorePassword:clientpw")
       .build();

   ```

## Enabling TLS Logging

You can enable TLS debug logging at the JVM level by starting the brokers and/or clients with the `javax.net.debug` system property. For example:

```shell

-Djavax.net.debug=all

```

You can find more details in the Oracle documentation on [debugging SSL/TLS connections](http://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/ReadDebug.html).
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/security-tls-transport.md b/site2/website/versioned_docs/version-2.9.2-deprecated/security-tls-transport.md
deleted file mode 100644
index 2cad17a78c3507..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/security-tls-transport.md
+++ /dev/null
@@ -1,295 +0,0 @@
---
id: security-tls-transport
title: Transport Encryption using TLS
sidebar_label: "Transport Encryption using TLS"
original_id: security-tls-transport
---

## TLS overview

By default, Apache Pulsar clients communicate with the Apache Pulsar service in plain text. This means that all data is sent in the clear. You can use TLS to encrypt this traffic and protect it from snooping by a man-in-the-middle attacker.

You can also configure TLS for both encryption and authentication. Use this guide to configure just TLS transport encryption, and refer to [TLS authentication](security-tls-authentication.md) for TLS authentication configuration. Alternatively, you can use [another authentication mechanism](security-athenz.md) on top of TLS transport encryption.

> Note that enabling TLS may impact performance due to encryption overhead.

## TLS concepts

TLS is a form of [public key cryptography](https://en.wikipedia.org/wiki/Public-key_cryptography). Encryption is performed using key pairs consisting of a public key and a private key: the public key encrypts the messages and the private key decrypts them.

To use TLS transport encryption, you need two kinds of key pairs, **server key pairs** and a **certificate authority**.

You can use a third kind of key pair, **client key pairs**, for [client authentication](security-tls-authentication.md).

You should store the **certificate authority** private key in a very secure location (a fully encrypted, disconnected, air-gapped computer). As for the certificate authority public key, the **trust cert**, you can share it freely.

For both client and server key pairs, the administrator first generates a private key and a certificate request, then uses the certificate authority private key to sign the certificate request, and finally generates a certificate. This certificate is the public key for the server/client key pair.

For TLS transport encryption, the clients can use the **trust cert** to verify that the server has a key pair that the certificate authority signed when the clients are talking to the server. A man-in-the-middle attacker does not have access to the certificate authority, so they cannot create a server with such a key pair.

For TLS authentication, the server uses the **trust cert** to verify that the client has a key pair that the certificate authority signed. The common name of the **client cert** is then used as the client's role token (see [Overview](security-overview.md)).

`Bouncy Castle Provider` provides cipher suites and algorithms in Pulsar. If you need the [FIPS](https://www.bouncycastle.org/fips_faq.html) version of `Bouncy Castle Provider`, refer to the [Bouncy Castle page](security-bouncy-castle.md).

## Create TLS certificates

Creating TLS certificates for Pulsar involves creating a [certificate authority](#certificate-authority) (CA), a [server certificate](#server-certificate), and a [client certificate](#client-certificate).

Follow the guide below to set up a certificate authority. You can also refer to plenty of resources on the internet for more details. We recommend [this guide](https://jamielinux.com/docs/openssl-certificate-authority/index.html) for detailed reference.

### Certificate authority

1. Create the certificate for the CA. You can use the CA to sign both the broker and client certificates. This ensures that each party will trust the others. You should store the CA in a very secure location (ideally completely disconnected from networks, air-gapped, and fully encrypted).

2. Enter the following commands to create a directory for your CA, and place [this openssl configuration file](https://github.com/apache/pulsar/tree/master/site2/website/static/examples/openssl.cnf) in the directory. You may want to modify the default answers for company name and department in the configuration file. Export the location of the CA directory to the `CA_HOME` environment variable. The configuration file uses this environment variable to find the rest of the files and directories that the CA needs.

```bash

mkdir my-ca
cd my-ca
wget https://raw.githubusercontent.com/apache/pulsar-site/main/site2/website/static/examples/openssl.cnf
export CA_HOME=$(pwd)

```

3. Enter the commands below to create the necessary directories, keys, and certs.

```bash

mkdir certs crl newcerts private
chmod 700 private/
touch index.txt
echo 1000 > serial
openssl genrsa -aes256 -out private/ca.key.pem 4096
chmod 400 private/ca.key.pem
openssl req -config openssl.cnf -key private/ca.key.pem \
    -new -x509 -days 7300 -sha256 -extensions v3_ca \
    -out certs/ca.cert.pem
chmod 444 certs/ca.cert.pem

```

4. After you answer the question prompts, CA-related files are stored in the `./my-ca` directory. Within that directory:

* `certs/ca.cert.pem` is the public certificate. This public certificate is meant to be distributed to all parties involved.
* `private/ca.key.pem` is the private key. You only need it when signing a new certificate for brokers or clients, and you must safely guard this private key.

### Server certificate

Once you have created a CA certificate, you can create certificate requests and sign them with the CA.

The following commands ask you a few questions and then create the certificates. When you are asked for the common name, you should match the hostname of the broker. You can also use a wildcard to match a group of broker hostnames, for example, `*.broker.usw.example.com`. This ensures that multiple machines can reuse the same certificate.

:::tip

Sometimes matching the hostname is not possible or makes no sense,
such as when you create the brokers with random hostnames, or you
plan to connect to the hosts via their IP. In these cases, you
should configure the client to disable TLS hostname verification. For more
details, see [the hostname verification section in client configuration](#hostname-verification).

:::

1. Enter the command below to generate the key:

```bash

openssl genrsa -out broker.key.pem 2048

```

The broker expects the key to be in [PKCS 8](https://en.wikipedia.org/wiki/PKCS_8) format, so enter the following command to convert it:

```bash

openssl pkcs8 -topk8 -inform PEM -outform PEM \
      -in broker.key.pem -out broker.key-pk8.pem -nocrypt

```

2. Enter the following command to generate the certificate request:

```bash

openssl req -config openssl.cnf \
    -key broker.key.pem -new -sha256 -out broker.csr.pem

```

3. Sign it with the certificate authority by entering the command below:

```bash

openssl ca -config openssl.cnf -extensions server_cert \
    -days 1000 -notext -md sha256 \
    -in broker.csr.pem -out broker.cert.pem

```

At this point, you have a cert, `broker.cert.pem`, and a key, `broker.key-pk8.pem`, which you can use along with `ca.cert.pem` to configure TLS transport encryption for your broker and proxy nodes.

## Configure broker

To configure a Pulsar [broker](reference-terminology.md#broker) to use TLS transport encryption, you need to make some changes to `broker.conf`, which is located in the `conf` directory of your [Pulsar installation](getting-started-standalone.md).
-
-Add these values to the configuration file (substituting the appropriate certificate paths where necessary):
-
-```properties
-
-tlsEnabled=true
-tlsRequireTrustedClientCertOnConnect=true
-tlsCertificateFilePath=/path/to/broker.cert.pem
-tlsKeyFilePath=/path/to/broker.key-pk8.pem
-tlsTrustCertsFilePath=/path/to/ca.cert.pem
-
-```
-
-> You can find a full list of parameters available in the `conf/broker.conf` file,
-> as well as the default values for those parameters, in [Broker Configuration](reference-configuration.md#broker).
-
-### TLS Protocol Version and Cipher
-
-You can configure the broker (and proxy) to require specific TLS protocol versions and ciphers for TLS negotiation. You can use the TLS protocol versions and ciphers to stop clients from requesting downgraded TLS protocol versions or ciphers that may have weaknesses.
-
-Both the TLS protocol versions and cipher properties can take multiple values, separated by commas. The possible values for protocol version and ciphers depend on the TLS provider that you are using. Pulsar uses OpenSSL if it is available; otherwise, it falls back to the JDK implementation.
-
-```properties
-
-tlsProtocols=TLSv1.3,TLSv1.2
-tlsCiphers=TLS_DH_RSA_WITH_AES_256_GCM_SHA384,TLS_DH_RSA_WITH_AES_256_CBC_SHA
-
-```
-
-OpenSSL currently supports `TLSv1.1`, `TLSv1.2` and `TLSv1.3` for the protocol version. You can obtain a list of supported ciphers from the `openssl ciphers` command, for example, `openssl ciphers -tls1_3`.
-
-For JDK 11, you can obtain a list of supported values from the documentation:
-- [TLS protocol](https://docs.oracle.com/en/java/javase/11/security/oracle-providers.html#GUID-7093246A-31A3-4304-AC5F-5FB6400405E2__SUNJSSEPROVIDERPROTOCOLPARAMETERS-BBF75009)
-- [Ciphers](https://docs.oracle.com/en/java/javase/11/security/oracle-providers.html#GUID-7093246A-31A3-4304-AC5F-5FB6400405E2__SUNJSSE_CIPHER_SUITES)
-
-## Proxy Configuration
-
-Proxies need to configure TLS in two directions: for clients connecting to the proxy, and for the proxy connecting to brokers.
-
-```properties
-
-# For clients connecting to the proxy
-tlsEnabledInProxy=true
-tlsCertificateFilePath=/path/to/broker.cert.pem
-tlsKeyFilePath=/path/to/broker.key-pk8.pem
-tlsTrustCertsFilePath=/path/to/ca.cert.pem
-
-# For the proxy to connect to brokers
-tlsEnabledWithBroker=true
-brokerClientTrustCertsFilePath=/path/to/ca.cert.pem
-
-```
-
-## Client configuration
-
-When you enable TLS transport encryption, you need to configure the client to use `https://` and port 8443 for the web service URL, and `pulsar+ssl://` and port 6651 for the broker service URL.
-
-As the server certificate that you generated above does not belong to any of the default trust chains, you also need to either specify the path to the **trust cert** (recommended), or tell the client to allow untrusted server certs.
-
-### Hostname verification
-
-Hostname verification is a TLS security feature whereby a client can refuse to connect to a server if the "CommonName" does not match the hostname to which the client is connecting. By default, Pulsar clients disable hostname verification, as it requires that each broker has a DNS record and a unique cert.
-
-Moreover, as the administrator has full control of the certificate authority, a bad actor is unlikely to be able to pull off a man-in-the-middle attack. "allowInsecureConnection" allows the client to connect to servers whose cert has not been signed by an approved CA.
The client disables "allowInsecureConnection" by default, and you should always disable "allowInsecureConnection" in production environments. As long as you disable "allowInsecureConnection", a man-in-the-middle attack requires that the attacker has access to the CA.
-
-One scenario where you may want to enable hostname verification is when you have multiple proxy nodes behind a VIP, and the VIP has a DNS record, for example, pulsar.mycompany.com. In this case, you can generate a TLS cert with pulsar.mycompany.com as the "CommonName," and then enable hostname verification on the client.
-
-The examples below show how to configure TLS transport encryption, with hostname verification disabled by default, for the CLI tools and the Java, Python, C++, Node.js, and C# clients.
-
-### CLI tools
-
-[Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-cli-tools.md#pulsar-admin), [`pulsar-perf`](reference-cli-tools.md#pulsar-perf), and [`pulsar-client`](reference-cli-tools.md#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation.
-
-You need to add the following parameters to that file to use TLS transport with Pulsar's CLI tools:
-
-```properties
-
-webServiceUrl=https://broker.example.com:8443/
-brokerServiceUrl=pulsar+ssl://broker.example.com:6651/
-useTls=true
-tlsAllowInsecureConnection=false
-tlsTrustCertsFilePath=/path/to/ca.cert.pem
-tlsEnableHostnameVerification=false
-
-```
-
-#### Java client
-
-```java
-
-import org.apache.pulsar.client.api.PulsarClient;
-
-PulsarClient client = PulsarClient.builder()
-    .serviceUrl("pulsar+ssl://broker.example.com:6651/")
-    .enableTls(true)
-    .tlsTrustCertsFilePath("/path/to/ca.cert.pem")
-    .enableTlsHostnameVerification(false) // false by default, in any case
-    .allowTlsInsecureConnection(false) // false by default, in any case
-    .build();
-
-```
-
-#### Python client
-
-```python
-
-from pulsar import Client
-
-client = Client("pulsar+ssl://broker.example.com:6651/",
-                tls_hostname_verification=False,
-                tls_trust_certs_file_path="/path/to/ca.cert.pem",
-                tls_allow_insecure_connection=False)  # defaults to false from v2.2.0 onwards
-
-```
-
-#### C++ client
-
-```c++
-
-#include <pulsar/Client.h>
-
-ClientConfiguration config = ClientConfiguration();
-config.setUseTls(true);  // shouldn't be needed soon
-config.setTlsTrustCertsFilePath(caPath);
-config.setTlsAllowInsecureConnection(false);
-config.setAuth(pulsar::AuthTls::create(clientPublicKeyPath, clientPrivateKeyPath));
-config.setValidateHostName(false);
-
-```
-
-#### Node.js client
-
-```JavaScript
-
-const Pulsar = require('pulsar-client');
-
-(async () => {
-  const client = new Pulsar.Client({
-    serviceUrl: 'pulsar+ssl://broker.example.com:6651/',
-    tlsTrustCertsFilePath: '/path/to/ca.cert.pem',
-    useTls: true,
-    tlsValidateHostname: false,
-    tlsAllowInsecureConnection: false,
-  });
-})();
-
-```
-
-#### C# client
-
-```c#
-
-var certificate = new X509Certificate2("ca.cert.pem");
-var client = PulsarClient.Builder()
-    .TrustedCertificateAuthority(certificate) // If the CA is not trusted on the host, you can add it explicitly.
-    .VerifyCertificateAuthority(true) // Default is 'true'
-    .VerifyCertificateName(false) // Default is 'false'
-    .Build();
-
-```
-
-> Note that `VerifyCertificateName` refers to the configuration of hostname verification in the C# client.
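-
-For the VIP scenario described above, where the certificate's "CommonName" matches a stable DNS name, you can keep the same trust cert and simply turn hostname verification back on. The following Java sketch is illustrative only; the `pulsar.mycompany.com` endpoint is a hypothetical placeholder, and the builder methods are the same ones used in the Java example above:
-
-```java
-
-import org.apache.pulsar.client.api.PulsarClient;
-
-// Sketch: connect through a VIP whose DNS record matches the certificate's
-// CommonName, so hostname verification can be enabled for extra protection.
-PulsarClient client = PulsarClient.builder()
-    .serviceUrl("pulsar+ssl://pulsar.mycompany.com:6651/") // hypothetical VIP endpoint
-    .tlsTrustCertsFilePath("/path/to/ca.cert.pem")
-    .enableTlsHostnameVerification(true) // refuse to connect if the CommonName does not match
-    .allowTlsInsecureConnection(false)
-    .build();
-
-```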
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/security-token-admin.md b/site2/website/versioned_docs/version-2.9.2-deprecated/security-token-admin.md
deleted file mode 100644
index a265f6320d28fe..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/security-token-admin.md
+++ /dev/null
@@ -1,183 +0,0 @@
----
-id: security-token-admin
-title: Token authentication admin
-sidebar_label: "Token authentication admin"
-original_id: security-token-admin
----
-
-## Token Authentication Overview
-
-Pulsar supports authenticating clients using security tokens that are based on [JSON Web Tokens](https://jwt.io/introduction/) ([RFC-7519](https://tools.ietf.org/html/rfc7519)).
-
-Tokens are used to identify a Pulsar client and associate it with some "principal" (or "role"), which is then granted permissions to perform certain actions (for example, publish to or consume from a topic).
-
-A user is typically given a token string by an administrator (or some automated service).
-
-The compact representation of a signed JWT is a string that looks like:
-
-```
-
- eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY
-
-```
-
-The application specifies the token when creating the client instance. An alternative is to pass a "token supplier", that is to say, a function that returns the token when the client library needs one.
-
-> #### Always use TLS transport encryption
-> Sending a token is equivalent to sending a password over the wire. It is strongly recommended to
-> always use TLS encryption when talking to the Pulsar service. See
-> [Transport Encryption using TLS](security-tls-transport.md).
-
-## Secret vs Public/Private keys
-
-JWT supports two different kinds of keys in order to generate and validate the tokens:
-
- * Symmetric: there is a single ***Secret*** key that is used both to generate and validate tokens.
- * Asymmetric: there is a pair of keys.
-   * The ***Private*** key is used to generate tokens.
-   * The ***Public*** key is used to validate tokens.
-
-### Secret key
-
-When using a secret key, the administrator creates the key and uses it to generate the client tokens. This key is also configured on the brokers to allow them to validate the clients.
-
-#### Creating a secret key
-
-> The output file is generated in the root of your Pulsar installation directory. You can also provide an absolute path for the output file.
-
-```shell
-
-$ bin/pulsar tokens create-secret-key --output my-secret.key
-
-```
-
-To generate a base64-encoded secret key:
-
-```shell
-
-$ bin/pulsar tokens create-secret-key --output /opt/my-secret.key --base64
-
-```
-
-### Public/Private keys
-
-With public/private keys, you need to create a pair of keys. Pulsar supports all algorithms supported by the Java JWT library shown [here](https://github.com/jwtk/jjwt#signature-algorithms-keys).
-
-#### Creating a key pair
-
-> The output files are generated in the root of your Pulsar installation directory. You can also provide an absolute path for the output files.
-
-```shell
-
-$ bin/pulsar tokens create-key-pair --output-private-key my-private.key --output-public-key my-public.key
-
-```
-
- * `my-private.key` is stored in a safe location and is only used by the administrator to generate new tokens.
- * `my-public.key` is distributed to all Pulsar brokers. This file can be publicly shared without any security concern.
-
-## Generating tokens
-
-A token is the credential associated with a user. The association is done through the "principal", or "role".
In the case of JWT tokens, this field is typically referred to as the **subject**, though it is exactly the same concept.
-
-The generated token is then required to have a **subject** field set.
-
-```shell
-
-$ bin/pulsar tokens create --secret-key file:///path/to/my-secret.key \
- --subject test-user
-
-```
-
-This command prints the token string on stdout.
-
-Similarly, you can create a token by passing the "private" key:
-
-```shell
-
-$ bin/pulsar tokens create --private-key file:///path/to/my-private.key \
- --subject test-user
-
-```
-
-Finally, a token can also be created with a pre-defined TTL. After that time, the token is automatically invalidated.
-
-```shell
-
-$ bin/pulsar tokens create --secret-key file:///path/to/my-secret.key \
- --subject test-user \
- --expiry-time 1y
-
-```
-
-## Authorization
-
-The token itself does not have any permissions associated with it. Permissions are determined by the authorization engine. Once the token is created, you can grant permissions for this token to perform certain actions. For example:
-
-```shell
-
-$ bin/pulsar-admin namespaces grant-permission my-tenant/my-namespace \
- --role test-user \
- --actions produce,consume
-
-```
-
-## Enabling Token Authentication ...
-
-### ... on Brokers
-
-To configure brokers to authenticate clients, put the following in `broker.conf`:
-
-```properties
-
-# Configuration to enable authentication and authorization
-authenticationEnabled=true
-authorizationEnabled=true
-authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken
-
-# If using secret key (Note: key files must be DER-encoded)
-tokenSecretKey=file:///path/to/secret.key
-# The key can also be passed inline:
-# tokenSecretKey=data:;base64,FLFyW0oLJ2Fi22KKCm21J18mbAdztfSHN/lAT5ucEKU=
-
-# If using public/private (Note: key files must be DER-encoded)
-# tokenPublicKey=file:///path/to/public.key
-
-```
-
-### ... on Proxies
-
-To configure proxies to authenticate clients, put the following in `proxy.conf`.
-
-The proxy has its own token that it uses when talking to brokers. The role token for this key pair should be configured in the ``proxyRoles`` of the brokers. See the [authorization guide](security-authorization.md) for more details.
-
-```properties
-
-# For clients connecting to the proxy
-authenticationEnabled=true
-authorizationEnabled=true
-authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken
-tokenSecretKey=file:///path/to/secret.key
-
-# For the proxy to connect to brokers
-brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken
-brokerClientAuthenticationParameters={"token":"eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0LXVzZXIifQ.9OHgE9ZUDeBTZs7nSMEFIuGNEX18FLR3qvy8mqxSxXw"}
-# Or, alternatively, read token from file
-# brokerClientAuthenticationParameters=file:///path/to/proxy-token.txt
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/sql-deployment-configurations.md b/site2/website/versioned_docs/version-2.9.2-deprecated/sql-deployment-configurations.md
deleted file mode 100644
index 02d8bc78f6cb9d..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/sql-deployment-configurations.md
+++ /dev/null
@@ -1,199 +0,0 @@
----
-id: sql-deployment-configurations
-title: Pulsar SQL configuration and deployment
-sidebar_label: "Configuration and deployment"
-original_id: sql-deployment-configurations
----
-
-You can configure the Presto Pulsar connector and deploy a cluster with the following instructions.
-
-## Configure Presto Pulsar Connector
-You can configure the Presto Pulsar Connector in the `${project.root}/conf/presto/catalog/pulsar.properties` properties file. The configuration for the connector and the default values are as follows.
-
-```properties
-
-# name of the connector to be displayed in the catalog
-connector.name=pulsar
-
-# the URL of the Pulsar broker service
-pulsar.web-service-url=http://localhost:8080
-
-# URI of the ZooKeeper cluster
-pulsar.zookeeper-uri=localhost:2181
-
-# minimum number of entries to read at a single time
-pulsar.entry-read-batch-size=100
-
-# default number of splits to use per query
-pulsar.target-num-splits=4
-
-```
-
-You can connect Presto to a Pulsar cluster with multiple hosts. To configure multiple hosts for brokers, add multiple URLs to `pulsar.web-service-url`. To configure multiple hosts for ZooKeeper, add multiple URIs to `pulsar.zookeeper-uri`. The following is an example.
-
-```
-
-pulsar.web-service-url=http://localhost:8080,localhost:8081,localhost:8082
-pulsar.zookeeper-uri=localhost1,localhost2:2181
-
-```
-
-**Note: by default, Pulsar SQL does not get the last message in a topic**. This is by design and is controlled by settings. By default, the BookKeeper LAC (LastAddConfirmed) only advances when subsequent entries are added. If no subsequent entry is added, the last written entry is not visible to readers until the ledger is closed. This is not a problem for Pulsar itself, which reads through the managed ledger, but Pulsar SQL reads directly from the BookKeeper ledger.
-
-If you want to get the last message in a topic, set the following configurations:
-
-1. For the broker configuration, set `bookkeeperExplicitLacIntervalInMills` > 0 in `broker.conf` or `standalone.conf`.
-
-2. For the Presto configuration, set `pulsar.bookkeeper-explicit-interval` > 0 and `pulsar.bookkeeper-use-v2-protocol=false`.
-
-However, using the BookKeeper V3 protocol introduces additional GC overhead to BookKeeper, as it uses Protobuf.
-
-## Query data from existing Presto clusters
-
-If you already have a Presto cluster, you can copy the Presto Pulsar connector plugin to your existing cluster. Download the archived plugin package with the following command.
-
-```bash
-
-$ wget pulsar:binary_release_url
-
-```
-
-## Deploy a new cluster
-
-Since Pulsar SQL is powered by [Trino (formerly Presto SQL)](https://trino.io), the configuration for deployment is the same for the Pulsar SQL worker.
-
-:::note
-
-For how to set up a standalone single node environment, refer to [Query data](sql-getting-started.md).
-
-:::
-
-You can use the same CLI args as the Presto launcher.
-
-```bash
-
-$ ./bin/pulsar sql-worker --help
-Usage: launcher [options] command
-
-Commands: run, start, stop, restart, kill, status
-
-Options:
-  -h, --help            show this help message and exit
-  -v, --verbose         Run verbosely
-  --etc-dir=DIR         Defaults to INSTALL_PATH/etc
-  --launcher-config=FILE
-                        Defaults to INSTALL_PATH/bin/launcher.properties
-  --node-config=FILE    Defaults to ETC_DIR/node.properties
-  --jvm-config=FILE     Defaults to ETC_DIR/jvm.config
-  --config=FILE         Defaults to ETC_DIR/config.properties
-  --log-levels-file=FILE
-                        Defaults to ETC_DIR/log.properties
-  --data-dir=DIR        Defaults to INSTALL_PATH
-  --pid-file=FILE       Defaults to DATA_DIR/var/run/launcher.pid
-  --launcher-log-file=FILE
-                        Defaults to DATA_DIR/var/log/launcher.log (only in
-                        daemon mode)
-  --server-log-file=FILE
-                        Defaults to DATA_DIR/var/log/server.log (only in
-                        daemon mode)
-  -D NAME=VALUE         Set a Java system property
-
-```
-
-The default configuration for the cluster is located in `${project.root}/conf/presto`. You can customize your deployment by modifying the default configuration.
-
-You can set the worker to read from a different configuration directory, or set a different directory to write data.
-
-```bash
-
-$ ./bin/pulsar sql-worker run --etc-dir /tmp/incubator-pulsar/conf/presto --data-dir /tmp/presto-1
-
-```
-
-You can start the worker as a daemon process.
-
-```bash
-
-$ ./bin/pulsar sql-worker start
-
-```
-
-### Deploy a cluster on multiple nodes
-
-You can deploy a Pulsar SQL cluster or Presto cluster on multiple nodes. The following example shows how to deploy a cluster on a three-node cluster.
-
-1. Copy the Pulsar binary distribution to three nodes.
-
-The first node runs as the Presto coordinator. The minimal configuration requirement in the `${project.root}/conf/presto/config.properties` file is as follows (`<coordinator-url>` is a placeholder for your coordinator's address).
-
-```properties
-
-coordinator=true
-node-scheduler.include-coordinator=true
-http-server.http.port=8080
-query.max-memory=50GB
-query.max-memory-per-node=1GB
-discovery-server.enabled=true
-discovery.uri=<coordinator-url>
-
-```
-
-The other two nodes serve as worker nodes. You can use the following configuration for worker nodes.
-
-```properties
-
-coordinator=false
-http-server.http.port=8080
-query.max-memory=50GB
-query.max-memory-per-node=1GB
-discovery.uri=<coordinator-url>
-
-```
-
-2. Modify the `pulsar.web-service-url` and `pulsar.zookeeper-uri` configuration in the `${project.root}/conf/presto/catalog/pulsar.properties` file accordingly for the three nodes.
-
-3. Start the coordinator node.
-
-```
-
-$ ./bin/pulsar sql-worker run
-
-```
-
-4. Start the worker nodes.
-
-```
-
-$ ./bin/pulsar sql-worker run
-
-```
-
-5. Start the SQL CLI and check the status of your cluster.
-
-```bash
-
-$ ./bin/pulsar sql --server <coordinator-url>
-
-```
-
-6. Check the status of your nodes.
-
-```bash
-
-presto> SELECT * FROM system.runtime.nodes;
- node_id |        http_uri         | node_version | coordinator | state
----------+-------------------------+--------------+-------------+--------
- 1       | http://192.168.2.1:8081 | testversion  | true        | active
- 3       | http://192.168.2.2:8081 | testversion  | false       | active
- 2       | http://192.168.2.3:8081 | testversion  | false       | active
-
-```
-
-For more information about deployment in Presto, refer to [Presto deployment](https://trino.io/docs/current/installation/deployment.html).
-
-:::note
-
-The broker does not advance the LAC, so when Pulsar SQL bypasses the broker to query data, it can only read entries up to the LAC that all the bookies have learned.
You can enable periodic writes of the LAC on the broker by setting "bookkeeperExplicitLacIntervalInMills" in the broker.conf.
-
-:::
-
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/sql-getting-started.md b/site2/website/versioned_docs/version-2.9.2-deprecated/sql-getting-started.md
deleted file mode 100644
index 8a5cd7199b365a..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/sql-getting-started.md
+++ /dev/null
@@ -1,187 +0,0 @@
----
-id: sql-getting-started
-title: Query data with Pulsar SQL
-sidebar_label: "Query data"
-original_id: sql-getting-started
----
-
-Before querying data in Pulsar, you need to install Pulsar and the built-in connectors.
-
-## Requirements
-1. Install [Pulsar](getting-started-standalone.md#install-pulsar-standalone).
-2. Install Pulsar [built-in connectors](getting-started-standalone.md#install-builtin-connectors-optional).
-
-## Query data in Pulsar
-To query data in Pulsar with Pulsar SQL, complete the following steps.
-
-1. Start a Pulsar standalone cluster.
-
-```bash
-
-./bin/pulsar standalone
-
-```
-
-2. Start a Pulsar SQL worker.
-
-```bash
-
-./bin/pulsar sql-worker run
-
-```
-
-3. After initializing the Pulsar standalone cluster and the SQL worker, run the SQL CLI.
-
-```bash
-
-./bin/pulsar sql
-
-```
-
-4. Test with SQL commands.
-
-```bash
-
-presto> show catalogs;
- Catalog
----------
- pulsar
- system
-(2 rows)
-
-Query 20180829_211752_00004_7qpwh, FINISHED, 1 node
-Splits: 19 total, 19 done (100.00%)
-0:00 [0 rows, 0B] [0 rows/s, 0B/s]
-
-
-presto> show schemas in pulsar;
-        Schema
------------------------
- information_schema
- public/default
- public/functions
- sample/standalone/ns1
-(4 rows)
-
-Query 20180829_211818_00005_7qpwh, FINISHED, 1 node
-Splits: 19 total, 19 done (100.00%)
-0:00 [4 rows, 89B] [21 rows/s, 471B/s]
-
-
-presto> show tables in pulsar."public/default";
- Table
--------
-(0 rows)
-
-Query 20180829_211839_00006_7qpwh, FINISHED, 1 node
-Splits: 19 total, 19 done (100.00%)
-0:00 [0 rows, 0B] [0 rows/s, 0B/s]
-
-```
-
-Since there is no data in Pulsar yet, no records are returned.
-
-5. Start the built-in connector _DataGeneratorSource_ and ingest some mock data.
-
-```bash
-
-./bin/pulsar-admin sources create --name generator --destinationTopicName generator_test --source-type data-generator
-
-```
-
-Then you can query a topic in the namespace "public/default".
-
-```bash
-
-presto> show tables in pulsar."public/default";
-     Table
-----------------
- generator_test
-(1 row)
-
-Query 20180829_213202_00000_csyeu, FINISHED, 1 node
-Splits: 19 total, 19 done (100.00%)
-0:02 [1 rows, 38B] [0 rows/s, 17B/s]
-
-```
-
-You can now query the data within the topic "generator_test".
-
-```bash
-
-presto> select * from pulsar."public/default".generator_test;
-
- firstname | middlename | lastname | email | username | password | telephonenumber | age | companyemail | nationalidentitycardnumber |
------------+------------+----------+-------+----------+----------+-----------------+-----+--------------+----------------------------+
- Genesis | Katherine | Wiley | genesis.wiley@gmail.com | genesisw | y9D2dtU3 | 959-197-1860 | 71 | genesis.wiley@interdemconsulting.eu | 880-58-9247 |
- Brayden | | Stanton | brayden.stanton@yahoo.com | braydens | ZnjmhXik | 220-027-867 | 81 | brayden.stanton@supermemo.eu | 604-60-7069 |
- Benjamin | Julian | Velasquez | benjamin.velasquez@yahoo.com | benjaminv | 8Bc7m3eb | 298-377-0062 | 21 | benjamin.velasquez@hostesltd.biz | 213-32-5882 |
- Michael | Thomas | Donovan | donovan@mail.com | michaeld | OqBm9MLs | 078-134-4685 | 55 | michael.donovan@memortech.eu | 443-30-3442 |
- Brooklyn | Avery | Roach | brooklynroach@yahoo.com | broach | IxtBLafO | 387-786-2998 | 68 | brooklyn.roach@warst.biz | 085-88-3973 |
- Skylar | | Bradshaw | skylarbradshaw@yahoo.com | skylarb | p6eC6cKy | 210-872-608 | 96 | skylar.bradshaw@flyhigh.eu | 453-46-0334 |
-.
-.
-.
-
-```
-
-You can query the mock data.
-
-## Query your own data
-If you want to query your own data, you need to ingest your own data first. You can write a simple producer to write custom-defined data to Pulsar. The following is an example.
-
-```java
-
-import org.apache.pulsar.client.api.Producer;
-import org.apache.pulsar.client.api.PulsarClient;
-import org.apache.pulsar.client.impl.schema.AvroSchema;
-
-public class TestProducer {
-
-    public static class Foo {
-        private int field1 = 1;
-        private String field2;
-        private long field3;
-
-        public Foo() {
-        }
-
-        public int getField1() {
-            return field1;
-        }
-
-        public void setField1(int field1) {
-            this.field1 = field1;
-        }
-
-        public String getField2() {
-            return field2;
-        }
-
-        public void setField2(String field2) {
-            this.field2 = field2;
-        }
-
-        public long getField3() {
-            return field3;
-        }
-
-        public void setField3(long field3) {
-            this.field3 = field3;
-        }
-    }
-
-    public static void main(String[] args) throws Exception {
-        PulsarClient pulsarClient = PulsarClient.builder().serviceUrl("pulsar://localhost:6650").build();
-        Producer<Foo> producer = pulsarClient.newProducer(AvroSchema.of(Foo.class)).topic("test_topic").create();
-
-        for (int i = 0; i < 1000; i++) {
-            Foo foo = new Foo();
-            foo.setField1(i);
-            foo.setField2("foo" + i);
-            foo.setField3(System.currentTimeMillis());
-            producer.newMessage().value(foo).send();
-        }
-        producer.close();
-        pulsarClient.close();
-    }
-}
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/sql-overview.md b/site2/website/versioned_docs/version-2.9.2-deprecated/sql-overview.md
deleted file mode 100644
index 0235d0256c5f19..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/sql-overview.md
+++ /dev/null
@@ -1,18 +0,0 @@
----
-id: sql-overview
-title: Pulsar SQL Overview
-sidebar_label: "Overview"
-original_id: sql-overview
----
-
-Apache Pulsar is used to store streams of event data, and the event data is structured with predefined fields. With the implementation of the [Schema Registry](schema-get-started.md), you can store structured data in Pulsar and query the data by using [Trino (formerly Presto SQL)](https://trino.io/).
-
-As the core of Pulsar SQL, the Presto Pulsar connector enables Presto workers within a Presto cluster to query data from Pulsar.
-
-![The Pulsar consumer and reader interfaces](/assets/pulsar-sql-arch-2.png)
-
-The query performance is efficient and highly scalable because Pulsar adopts a [two-level segment-based architecture](concepts-architecture-overview.md#apache-bookkeeper).
-
-Topics in Pulsar are stored as segments in [Apache BookKeeper](https://bookkeeper.apache.org/). Each topic segment is replicated to multiple BookKeeper nodes, which enables concurrent reads and high read throughput. You can configure the number of BookKeeper nodes; the default number is `3`. In the Presto Pulsar connector, data is read directly from BookKeeper, so Presto workers can read concurrently from a horizontally scalable number of BookKeeper nodes.
-
-![The Pulsar consumer and reader interfaces](/assets/pulsar-sql-arch-1.png)
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/sql-rest-api.md b/site2/website/versioned_docs/version-2.9.2-deprecated/sql-rest-api.md
deleted file mode 100644
index c92fd62f7d8703..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/sql-rest-api.md
+++ /dev/null
@@ -1,192 +0,0 @@
----
-id: sql-rest-api
-title: Pulsar SQL REST APIs
-sidebar_label: "REST APIs"
-original_id: sql-rest-api
----
-
-This section lists resources that make up the Presto REST API v1.
-
-## Request for Presto services
-
-All requests for Presto services should use version v1 of the Presto REST API.
-
-To request services, use the explicit URL `http://presto.service:8081/v1`. You need to update `presto.service:8081` with your real Presto address before sending requests.
-
-`POST` requests require the `X-Presto-User` header. If you use authentication, you must use the same `username` that is specified in the authentication configuration. If you do not use authentication, you can specify anything for `username`.
-
-```properties
-
-X-Presto-User: username
-
-```
-
-For more information about headers, refer to [PrestoHeaders](https://github.com/trinodb/trino).
-
-## Schema
-
-You can send the statement in the HTTP body. All data is received as a JSON document that might contain a `nextUri` link. If the received JSON document contains a `nextUri` link, the request continues with the `nextUri` link until the received data does not contain a `nextUri` link. If no error is returned, the query completes successfully. If an `error` field is displayed in `stats`, the query has failed.
-
-The following is an example of `show catalogs`. The query continues until the received JSON document does not contain a `nextUri` link. Since no `error` is displayed in `stats`, the query has completed successfully.
- -```powershell - -➜ ~ curl --header "X-Presto-User: test-user" --request POST --data 'show catalogs' http://localhost:8081/v1/statement -{ - "infoUri" : "http://localhost:8081/ui/query.html?20191113_033653_00006_dg6hb", - "stats" : { - "queued" : true, - "nodes" : 0, - "userTimeMillis" : 0, - "cpuTimeMillis" : 0, - "wallTimeMillis" : 0, - "processedBytes" : 0, - "processedRows" : 0, - "runningSplits" : 0, - "queuedTimeMillis" : 0, - "queuedSplits" : 0, - "completedSplits" : 0, - "totalSplits" : 0, - "scheduled" : false, - "peakMemoryBytes" : 0, - "state" : "QUEUED", - "elapsedTimeMillis" : 0 - }, - "id" : "20191113_033653_00006_dg6hb", - "nextUri" : "http://localhost:8081/v1/statement/20191113_033653_00006_dg6hb/1" -} - -➜ ~ curl http://localhost:8081/v1/statement/20191113_033653_00006_dg6hb/1 -{ - "infoUri" : "http://localhost:8081/ui/query.html?20191113_033653_00006_dg6hb", - "nextUri" : "http://localhost:8081/v1/statement/20191113_033653_00006_dg6hb/2", - "id" : "20191113_033653_00006_dg6hb", - "stats" : { - "state" : "PLANNING", - "totalSplits" : 0, - "queued" : false, - "userTimeMillis" : 0, - "completedSplits" : 0, - "scheduled" : false, - "wallTimeMillis" : 0, - "runningSplits" : 0, - "queuedSplits" : 0, - "cpuTimeMillis" : 0, - "processedRows" : 0, - "processedBytes" : 0, - "nodes" : 0, - "queuedTimeMillis" : 1, - "elapsedTimeMillis" : 2, - "peakMemoryBytes" : 0 - } -} - -➜ ~ curl http://localhost:8081/v1/statement/20191113_033653_00006_dg6hb/2 -{ - "id" : "20191113_033653_00006_dg6hb", - "data" : [ - [ - "pulsar" - ], - [ - "system" - ] - ], - "infoUri" : "http://localhost:8081/ui/query.html?20191113_033653_00006_dg6hb", - "columns" : [ - { - "typeSignature" : { - "rawType" : "varchar", - "arguments" : [ - { - "kind" : "LONG_LITERAL", - "value" : 6 - } - ], - "literalArguments" : [], - "typeArguments" : [] - }, - "name" : "Catalog", - "type" : "varchar(6)" - } - ], - "stats" : { - "wallTimeMillis" : 104, - "scheduled" : true, - "userTimeMillis" : 14, - "progressPercentage" : 100, - "totalSplits" : 19, - "nodes" : 1, - "cpuTimeMillis" : 16, - "queued" : false, - "queuedTimeMillis" : 1, - "state" : "FINISHED", - "peakMemoryBytes" : 0, - "elapsedTimeMillis" : 111, - "processedBytes" : 0, - "processedRows" : 0, - "queuedSplits" : 0, - "rootStage" : { - "cpuTimeMillis" : 1, - "runningSplits" : 0, - "state" : "FINISHED", - "completedSplits" : 1, - "subStages" : [ - { - "cpuTimeMillis" : 14, - "runningSplits" : 0, - "state" : "FINISHED", - "completedSplits" : 17, - "subStages" : [ - { - "wallTimeMillis" : 7, - "subStages" : [], - "stageId" : "2", - "done" : true, - "nodes" : 1, - "totalSplits" : 1, - "processedBytes" : 22, - "processedRows" : 2, - "queuedSplits" : 0, - "userTimeMillis" : 1, - "cpuTimeMillis" : 1, - "runningSplits" : 0, - "state" : "FINISHED", - "completedSplits" : 1 - } - ], - "wallTimeMillis" : 92, - "nodes" : 1, - "done" : true, - "stageId" : "1", - "userTimeMillis" : 12, - "processedRows" : 2, - "processedBytes" : 51, - "queuedSplits" : 0, - "totalSplits" : 17 - } - ], - "wallTimeMillis" : 5, - "done" : true, - "nodes" : 1, - "stageId" : "0", - "userTimeMillis" : 1, - "processedRows" : 2, - "processedBytes" : 22, - "totalSplits" : 1, - "queuedSplits" : 0 - }, - "runningSplits" : 0, - "completedSplits" : 19 - } -} - -``` - -:::note - -Since the response data is not in sync with the query state from the perspective of clients, you cannot rely on the response data to determine whether the query completes. 
-
-:::
-
-For more information about Presto REST API, refer to [Presto HTTP Protocol](https://github.com/prestosql/presto/wiki/HTTP-Protocol).
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/standalone.md b/site2/website/versioned_docs/version-2.9.2-deprecated/standalone.md
deleted file mode 100644
index a487be92adfcf4..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/standalone.md
+++ /dev/null
@@ -1,268 +0,0 @@
----
-id: standalone
-title: Set up a standalone Pulsar locally
-sidebar_label: "Run Pulsar locally"
-original_id: standalone
----
-
-For local development and testing, you can run Pulsar in standalone mode on your machine. The standalone mode includes a Pulsar broker, the necessary ZooKeeper and BookKeeper components running inside of a single Java Virtual Machine (JVM) process.
-
-> **Pulsar in production?**
-> If you're looking to run a full production Pulsar installation, see the [Deploying a Pulsar instance](deploy-bare-metal.md) guide.
-
-## Install Pulsar standalone
-
-This tutorial guides you through every step of installing Pulsar locally.
-
-### System requirements
-
-Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install 64-bit JRE/JDK 8 or later versions.
-
-:::tip
-
-By default, Pulsar allocates 2G JVM heap memory to start. It can be changed in the `conf/pulsar_env.sh` file under `PULSAR_MEM`. These are extra options passed into the JVM.
-
-:::
-
-:::note
-
-Broker is only supported on 64-bit JVM.
-
-:::
-
-### Install Pulsar using binary release
-
-To get started with Pulsar, download a binary tarball release in one of the following ways:
-
-* download from the Apache mirror (Pulsar @pulsar:version@ binary release)
-
-* download from the Pulsar [downloads page](pulsar:download_page_url)
-
-* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
-
-* use [wget](https://www.gnu.org/software/wget):
-
-  ```shell
-
-  $ wget pulsar:binary_release_url
-
-  ```
-
-After you download the tarball, untar it and use the `cd` command to navigate to the resulting directory:
-
-```bash
-
-$ tar xvfz apache-pulsar-@pulsar:version@-bin.tar.gz
-$ cd apache-pulsar-@pulsar:version@
-
-```
-
-#### What your package contains
-
-The Pulsar binary package initially contains the following directories:
-
-Directory | Contains
-:---------|:--------
-`bin` | Pulsar's command-line tools, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/).
-`conf` | Configuration files for Pulsar, including [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more.
-`examples` | A Java JAR file containing [Pulsar Functions](functions-overview.md) example.
-`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files used by Pulsar.
-`licenses` | License files, in `.txt` form, for various components of the Pulsar [codebase](https://github.com/apache/pulsar).
-
-These directories are created once you begin running Pulsar.
-
-Directory | Contains
-:---------|:--------
-`data` | The data storage directory used by ZooKeeper and BookKeeper.
-`instances` | Artifacts created for [Pulsar Functions](functions-overview.md).
-`logs` | Logs created by the installation.
-
-:::tip
-
-If you want to use builtin connectors and tiered storage offloaders, you can install them according to the following instructions:
-* [Install builtin connectors (optional)](#install-builtin-connectors-optional)
-* [Install tiered storage offloaders (optional)](#install-tiered-storage-offloaders-optional)
-Otherwise, skip this step and perform the next step [Start Pulsar standalone](#start-pulsar-standalone). Pulsar can be successfully installed without installing builtin connectors and tiered storage offloaders.
-
-:::
-
-### Install builtin connectors (optional)
-
-Since `2.1.0-incubating` release, Pulsar releases a separate binary distribution, containing all the `builtin` connectors.
-To enable those `builtin` connectors, you can download the connectors tarball release in one of the following ways:
-
-* download from the Apache mirror Pulsar IO Connectors @pulsar:version@ release
-
-* download from the Pulsar [downloads page](pulsar:download_page_url)
-
-* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
-
-* use [wget](https://www.gnu.org/software/wget):
-
-  ```shell
-
-  $ wget pulsar:connector_release_url/{connector}-@pulsar:version@.nar
-
-  ```
-
-After you download the NAR file, copy the file to the `connectors` directory in the pulsar directory.
-For example, if you download the `pulsar-io-aerospike-@pulsar:version@.nar` connector file, enter the following commands:
-
-```bash
-
-$ mkdir connectors
-$ mv pulsar-io-aerospike-@pulsar:version@.nar connectors
-
-$ ls connectors
-pulsar-io-aerospike-@pulsar:version@.nar
-...
-
-```
-
-:::note
-
-* If you are running Pulsar in a bare metal cluster, make sure the `connectors` tarball is unzipped in every pulsar directory of the broker (or in every pulsar directory of function-worker if you are running a separate worker cluster for Pulsar Functions).
-* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DC/OS](https://dcos.io/)), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. The `apachepulsar/pulsar-all` image has already bundled [all builtin connectors](io-overview.md#working-with-connectors).
-
-:::
-
-### Install tiered storage offloaders (optional)
-
-:::tip
-
-- Since `2.2.0` release, Pulsar releases a separate binary distribution, containing the tiered storage offloaders.
-
-- To enable the tiered storage feature, follow the instructions below; otherwise, skip this section.
-
-:::
-
-To get started with [tiered storage offloaders](concepts-tiered-storage.md), you need to download the offloaders tarball release on every broker node in one of the following ways:
-
-* download from the Apache mirror Pulsar Tiered Storage Offloaders @pulsar:version@ release
-
-* download from the Pulsar [downloads page](pulsar:download_page_url)
-
-* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
-
-* use [wget](https://www.gnu.org/software/wget):
-
-  ```shell
-
-  $ wget pulsar:offloader_release_url
-
-  ```
-
-After you download the tarball, untar the offloaders package and copy the offloaders as `offloaders`
-in the pulsar directory:
-
-```bash
-
-$ tar xvfz apache-pulsar-offloaders-@pulsar:version@-bin.tar.gz
-
-# you will find a directory named `apache-pulsar-offloaders-@pulsar:version@` in the pulsar directory
-# then copy the offloaders
-
-$ mv apache-pulsar-offloaders-@pulsar:version@/offloaders offloaders
-
-$ ls offloaders
-tiered-storage-jcloud-@pulsar:version@.nar
-
-```
-
-For more information on how to configure tiered storage, see [Tiered storage cookbook](cookbooks-tiered-storage.md).
-
-:::note
-
-* If you are running Pulsar in a bare metal cluster, make sure that the `offloaders` tarball is unzipped in every broker's pulsar directory.
-* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DC/OS](https://dcos.io/)), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. The `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders.
-
-:::
-
-## Start Pulsar standalone
-
-Once you have an up-to-date local copy of the release, you can start a local cluster using the [`pulsar`](reference-cli-tools.md#pulsar) command, which is stored in the `bin` directory, and specifying that you want to start Pulsar in standalone mode.
-
-```bash
-
-$ bin/pulsar standalone
-
-```
-
-If you have started Pulsar successfully, you will see `INFO`-level log messages like this:
-
-```bash
-
-21:59:29.327 [DLM-/stream/storage-OrderedScheduler-3-0] INFO org.apache.bookkeeper.stream.storage.impl.sc.StorageContainerImpl - Successfully started storage container (0).
-21:59:34.576 [main] INFO org.apache.pulsar.broker.authentication.AuthenticationService - Authentication is disabled
-21:59:34.576 [main] INFO org.apache.pulsar.websocket.WebSocketService - Pulsar WebSocket Service started
-
-```
-
-:::tip
-
-* The service is running on your terminal, which is under your direct control. If you need to run other commands, open a new terminal window.
-
-:::
-
-You can also run the service as a background process using the `pulsar-daemon start standalone` command. For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon).
-
-> * By default, there is no encryption, authentication, or authorization configured. Apache Pulsar can be accessed from a remote server without any authorization. Please check the [Security Overview](security-overview.md) document to secure your deployment.
->
-> * When you start a local standalone cluster, a `public/default` [namespace](concepts-messaging.md#namespaces) is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics).
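-
-If you prefer to verify the standalone cluster from code rather than with the CLI tools described next, the following Java sketch performs a minimal produce/consume round trip. It is illustrative only: it assumes the default `pulsar://localhost:6650` service URL, and the topic and subscription names are hypothetical examples in the auto-created `public/default` namespace.
-
-```java
-
-import org.apache.pulsar.client.api.*;
-
-public class StandaloneSmokeTest {
-    public static void main(String[] args) throws Exception {
-        PulsarClient client = PulsarClient.builder()
-                .serviceUrl("pulsar://localhost:6650") // default standalone broker URL
-                .build();
-
-        // Subscribe first so the message produced below is not missed.
-        Consumer<byte[]> consumer = client.newConsumer()
-                .topic("smoke-test-topic")
-                .subscriptionName("smoke-test-subscription")
-                .subscribe();
-
-        Producer<byte[]> producer = client.newProducer()
-                .topic("smoke-test-topic")
-                .create();
-        producer.send("hello-pulsar".getBytes());
-
-        // Receive and acknowledge the message to confirm the round trip.
-        Message<byte[]> msg = consumer.receive();
-        System.out.println("Received: " + new String(msg.getData()));
-        consumer.acknowledge(msg);
-
-        producer.close();
-        consumer.close();
-        client.close();
-    }
-}
-
-```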
-
-## Use Pulsar standalone
-
-Pulsar provides a CLI tool called [`pulsar-client`](reference-cli-tools.md#pulsar-client). The pulsar-client tool enables you to produce and consume messages on a Pulsar topic in a running cluster.
-
-### Consume a message
-
-The following command consumes a message from the `my-topic` topic with the subscription name `first-subscription`:
-
-```bash
-
-$ bin/pulsar-client consume my-topic -s "first-subscription"
-
-```
-
-If the message has been successfully consumed, you will see a confirmation like the following in the `pulsar-client` logs:
-
-```
-
-22:17:16.781 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully consumed
-
-```
-
-:::tip
-
-As you may have noticed, we do not explicitly create the `my-topic` topic from which we consume the message. When you consume a message from a topic that does not yet exist, Pulsar creates that topic for you automatically. Producing a message to a topic that does not exist will automatically create that topic for you as well.
-
-:::
-
-### Produce a message
-
-The following command produces a message saying `hello-pulsar` to the `my-topic` topic:
-
-```bash
-
-$ bin/pulsar-client produce my-topic --messages "hello-pulsar"
-
-```
-
-If the message has been successfully published to the topic, you will see a confirmation like the following in the `pulsar-client` logs:
-
-```
-
-22:21:08.693 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully produced
-
-```
-
-## Stop Pulsar standalone
-
-Press `Ctrl+C` to stop a local standalone Pulsar.
-
-:::tip
-
-If the service runs as a background process using the `pulsar-daemon start standalone` command, then use the `pulsar-daemon stop standalone` command to stop the service.
-For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon).
-
-:::
-
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/tiered-storage-aliyun.md b/site2/website/versioned_docs/version-2.9.2-deprecated/tiered-storage-aliyun.md
deleted file mode 100644
index 2486b92df485b3..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/tiered-storage-aliyun.md
+++ /dev/null
@@ -1,259 +0,0 @@
----
-id: tiered-storage-aliyun
-title: Use Aliyun OSS offloader with Pulsar
-sidebar_label: "Aliyun OSS offloader"
-original_id: tiered-storage-aliyun
----
-
-This chapter guides you through every step of installing and configuring the Aliyun Object Storage Service (OSS) offloader and using it with Pulsar.
-
-## Installation
-
-Follow the steps below to install the Aliyun OSS offloader.
-
-### Prerequisite
-
-- Pulsar: 2.8.0 or later versions
-
-### Step
-
-This example uses Pulsar 2.8.0.
-
-1. Download the Pulsar tarball, see [here](https://pulsar.apache.org/docs/en/standalone/#install-pulsar-using-binary-release).
-
-2. Download and untar the Pulsar offloaders package, then copy the Pulsar offloaders as `offloaders` in the Pulsar directory, see [here](https://pulsar.apache.org/docs/en/standalone/#install-tiered-storage-offloaders-optional).
-
-   **Output**
-
-   As shown from the output, Pulsar uses [Apache jclouds](https://jclouds.apache.org) to support [AWS S3](https://aws.amazon.com/s3/), [GCS](https://cloud.google.com/storage/), [Azure](https://portal.azure.com/#home), and [Aliyun OSS](https://www.aliyun.com/product/oss) for long-term storage.
-
-   ```
-   
-   tiered-storage-file-system-2.8.0.nar
-   tiered-storage-jcloud-2.8.0.nar
-   
-   ```
-
-   :::note
-
-   * If you are running Pulsar in a bare-metal cluster, make sure that the `offloaders` tarball is unzipped in every broker's Pulsar directory.
-   * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8s and DCOS), you can use the `apachepulsar/pulsar-all` image. The `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders.
-
-   :::
-
-## Configuration
-
-:::note
-
-Before offloading data from BookKeeper to Aliyun OSS, you need to configure some properties of the Aliyun OSS offload driver.
-
-:::
-
-In addition, you can configure the Aliyun OSS offloader to run automatically, or trigger it manually.
-
-### Configure Aliyun OSS offloader driver
-
-You can configure the Aliyun OSS offloader driver in the configuration file `broker.conf` or `standalone.conf`.
-
-- **Required** configurations are as below.
-
-  | Required configuration | Description | Example value |
-  | --- | --- |--- |
-  | `managedLedgerOffloadDriver` | Offloader driver name, which is case-insensitive. | aliyun-oss |
-  | `offloadersDirectory` | Offloader directory | offloaders |
-  | `managedLedgerOffloadBucket` | Bucket | pulsar-topic-offload |
-  | `managedLedgerOffloadServiceEndpoint` | Endpoint | http://oss-cn-hongkong.aliyuncs.com |
-
-- **Optional** configurations are as below.
-
-  | Optional | Description | Example value |
-  | --- | --- | --- |
-  | `managedLedgerOffloadReadBufferSizeInBytes` | Size of block read | 1 MB |
-  | `managedLedgerOffloadMaxBlockSizeInBytes` | Size of block write | 64 MB |
-  | `managedLedgerMinLedgerRolloverTimeMinutes` | Minimum time between ledger rollover for a topic.<br /><br />**Note**: it is not recommended that you set this configuration in the production environment. | 2 |
-  | `managedLedgerMaxEntriesPerLedger` | Maximum number of entries to append to a ledger before triggering a rollover.<br /><br />**Note**: it is not recommended that you set this configuration in the production environment. | 5000 |
- -```bash - -bin/pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace - -``` - -:::tip - -For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-threshold-em-). - -::: - -### Run Aliyun OSS offloader manually - -For individual topics, you can trigger the Aliyun OSS offloader manually using one of the following methods: - -- Use REST endpoint. - -- Use CLI tools (such as pulsar-admin). - - To trigger it via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are moved to Aliyun OSS until the threshold is no longer exceeded. Older segments are moved first. - -#### Example - -- This example triggers the Aliyun OSS offloader to run manually using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload --size-threshold 10M my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1 - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-em-). - - ::: - -- This example checks the Aliyun OSS offloader status using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload is currently running - - ``` - - To wait for the Aliyun OSS offloader to complete the job, add the `-w` flag. - - ```bash - - bin/pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Offload was a success - - ``` - - If there is an error in offloading, the error is propagated to the `pulsar-admin topics offload-status` command. - - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Error in offload - null - - Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g= - - ``` - -` - - :::tip - - For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-status-em-). 
- - ::: - diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/tiered-storage-aws.md b/site2/website/versioned_docs/version-2.9.2-deprecated/tiered-storage-aws.md deleted file mode 100644 index 1f6e4b05a25045..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/tiered-storage-aws.md +++ /dev/null @@ -1,331 +0,0 @@ ---- -id: tiered-storage-aws -title: Use AWS S3 offloader with Pulsar -sidebar_label: "AWS S3 offloader" -original_id: tiered-storage-aws ---- - -This chapter guides you through every step of installing and configuring the AWS S3 offloader and using it with Pulsar. - -## Installation - -Follow the steps below to install the AWS S3 offloader. - -### Prerequisite - -- Pulsar: 2.4.2 or later versions - -### Step - -This example uses Pulsar 2.5.1. - -1. Download the Pulsar tarball using one of the following ways: - - * Download from the [Apache mirror](https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz) - - * Download from the Pulsar [downloads page](https://pulsar.apache.org/download) - - * Use [wget](https://www.gnu.org/software/wget): - - ```shell - - wget https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz - - ``` - -2. Download and untar the Pulsar offloaders package. - - ```bash - - wget https://downloads.apache.org/pulsar/pulsar-2.5.1/apache-pulsar-offloaders-2.5.1-bin.tar.gz - tar xvfz apache-pulsar-offloaders-2.5.1-bin.tar.gz - - ``` - -3. Copy the Pulsar offloaders as `offloaders` in the Pulsar directory. - - ``` - - mv apache-pulsar-offloaders-2.5.1/offloaders apache-pulsar-2.5.1/offloaders - - ls offloaders - - ``` - - **Output** - - As shown from the output, Pulsar uses [Apache jclouds](https://jclouds.apache.org) to support [AWS S3](https://aws.amazon.com/s3/) and [GCS](https://cloud.google.com/storage/) for long term storage. - - ``` - - tiered-storage-file-system-2.5.1.nar - tiered-storage-jcloud-2.5.1.nar - - ``` - - :::note - - * If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's Pulsar directory. - * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8s and DCOS), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - - ::: - -## Configuration - -:::note - -Before offloading data from BookKeeper to AWS S3, you need to configure some properties of the AWS S3 offload driver. - -::: - -Besides, you can also configure the AWS S3 offloader to run it automatically or trigger it manually. - -### Configure AWS S3 offloader driver - -You can configure the AWS S3 offloader driver in the configuration file `broker.conf` or `standalone.conf`. - -- **Required** configurations are as below. - - Required configuration | Description | Example value - |---|---|--- - `managedLedgerOffloadDriver` | Offloader driver name, which is case-insensitive.

    **Note**: there is a third driver type, S3, which is identical to AWS S3, though S3 requires that you specify an endpoint URL using `s3ManagedLedgerOffloadServiceEndpoint`. This is useful if using an S3 compatible data store other than AWS S3. | aws-s3 - `offloadersDirectory` | Offloader directory | offloaders - `s3ManagedLedgerOffloadBucket` | Bucket | pulsar-topic-offload - -- **Optional** configurations are as below. - - Optional | Description | Example value - |---|---|--- - `s3ManagedLedgerOffloadRegion` | Bucket region

    **Note**: before specifying a value for this parameter, you must set the following configurations first. Otherwise, you might get an error.

    - Set [`s3ManagedLedgerOffloadServiceEndpoint`](https://docs.aws.amazon.com/general/latest/gr/s3.html). For example: `s3ManagedLedgerOffloadServiceEndpoint=https://s3.YOUR_REGION.amazonaws.com`

    - Grant the `GetBucketLocation` permission to a user.

    For how to grant `GetBucketLocation` permission to a user, see [here](https://docs.aws.amazon.com/AmazonS3/latest/dev/using-with-s3-actions.html#using-with-s3-actions-related-to-buckets).| eu-west-3 - `s3ManagedLedgerOffloadReadBufferSizeInBytes`|Size of block read|1 MB - `s3ManagedLedgerOffloadMaxBlockSizeInBytes`|Size of block write|64 MB - `managedLedgerMinLedgerRolloverTimeMinutes`|Minimum time between ledger rollover for a topic

    **Note**: it is not recommended that you set this configuration in the production environment.|2 - `managedLedgerMaxEntriesPerLedger`|Maximum number of entries to append to a ledger before triggering a rollover.

    **Note**: it is not recommended that you set this configuration in the production environment.|5000 - -#### Bucket (required) - -A bucket is a basic container that holds your data. Everything you store in AWS S3 must be contained in a bucket. You can use a bucket to organize your data and control access to your data, but unlike directory and folder, you cannot nest a bucket. - -##### Example - -This example names the bucket as _pulsar-topic-offload_. - -```conf - -s3ManagedLedgerOffloadBucket=pulsar-topic-offload - -``` - -#### Bucket region - -A bucket region is a region where a bucket is located. If a bucket region is not specified, the **default** region (`US East (N. Virginia)`) is used. - -:::tip - -For more information about AWS regions and endpoints, see [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). - -::: - - -##### Example - -This example sets the bucket region as _europe-west-3_. - -``` - -s3ManagedLedgerOffloadRegion=eu-west-3 - -``` - -#### Authentication (required) - -To be able to access AWS S3, you need to authenticate with AWS S3. - -Pulsar does not provide any direct methods of configuring authentication for AWS S3, -but relies on the mechanisms supported by the [DefaultAWSCredentialsProviderChain](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html). - -Once you have created a set of credentials in the AWS IAM console, you can configure credentials using one of the following methods. - -* Use EC2 instance metadata credentials. - - If you are on AWS instance with an instance profile that provides credentials, Pulsar uses these credentials if no other mechanism is provided. - -* Set the environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` in `conf/pulsar_env.sh`. - - "export" is important so that the variables are made available in the environment of spawned processes. - - ```bash - - export AWS_ACCESS_KEY_ID=ABC123456789 - export AWS_SECRET_ACCESS_KEY=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c - - ``` - -* Add the Java system properties `aws.accessKeyId` and `aws.secretKey` to `PULSAR_EXTRA_OPTS` in `conf/pulsar_env.sh`. - - ```bash - - PULSAR_EXTRA_OPTS="${PULSAR_EXTRA_OPTS} ${PULSAR_MEM} ${PULSAR_GC} -Daws.accessKeyId=ABC123456789 -Daws.secretKey=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c -Dio.netty.leakDetectionLevel=disabled -Dio.netty.recycler.maxCapacity.default=1000 -Dio.netty.recycler.linkCapacity=1024" - - ``` - -* Set the access credentials in `~/.aws/credentials`. - - ```conf - - [default] - aws_access_key_id=ABC123456789 - aws_secret_access_key=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c - - ``` - -* Assume an IAM role. - - This example uses the `DefaultAWSCredentialsProviderChain` for assuming this role. - - The broker must be rebooted for credentials specified in `pulsar_env` to take effect. - - ```conf - - s3ManagedLedgerOffloadRole= - s3ManagedLedgerOffloadRoleSessionName=pulsar-s3-offload - - ``` - -#### Size of block read/write - -You can configure the size of a request sent to or read from AWS S3 in the configuration file `broker.conf` or `standalone.conf`. - -Configuration|Description|Default value -|---|---|--- -`s3ManagedLedgerOffloadReadBufferSizeInBytes`|Block size for each individual read when reading back data from AWS S3.|1 MB -`s3ManagedLedgerOffloadMaxBlockSizeInBytes`|Maximum size of a "part" sent during a multipart upload to AWS S3. It **cannot** be smaller than 5 MB. 
|64 MB - -### Configure AWS S3 offloader to run automatically - -Namespace policy can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic reaches the threshold, an offloading operation is triggered automatically. - -Threshold value|Action -|---|--- -> 0 | It triggers the offloading operation if the topic storage reaches its threshold. -= 0|It causes a broker to offload data as soon as possible. -< 0 |It disables automatic offloading operation. - -Automatic offloading runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, offloader does not work until the current segment is full. - -You can configure the threshold size using CLI tools, such as pulsar-admin. - -The offload configurations in `broker.conf` and `standalone.conf` are used for the namespaces that do not have namespace level offload policies. Each namespace can have its own offload policy. If you want to set offload policy for each namespace, use the command [`pulsar-admin namespaces set-offload-policies options`](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-policies-em-) command. - -#### Example - -This example sets the AWS S3 offloader threshold size to 10 MB using pulsar-admin. - -```bash - -bin/pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace - -``` - -:::tip - -For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-threshold-em-). - -::: - -### Configure AWS S3 offloader to run manually - -For individual topics, you can trigger AWS S3 offloader manually using one of the following methods: - -- Use REST endpoint. - -- Use CLI tools (such as pulsar-admin). - - To trigger it via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are moved to AWS S3 until the threshold is no longer exceeded. Older segments are moved first. - -#### Example - -- This example triggers the AWS S3 offloader to run manually using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload --size-threshold 10M my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1 - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-em-). - - ::: - -- This example checks the AWS S3 offloader status using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload is currently running - - ``` - - To wait for the AWS S3 offloader to complete the job, add the `-w` flag. - - ```bash - - bin/pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Offload was a success - - ``` - - If there is an error in offloading, the error is propagated to the `pulsar-admin topics offload-status` command. 
- - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Error in offload - null - - Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g= - - ``` - -` - - :::tip - - For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-status-em-). - - ::: - -## Tutorial - -For the complete and step-by-step instructions on how to use the AWS S3 offloader with Pulsar, see [here](https://hub.streamnative.io/offloaders/aws-s3/2.5.1#usage). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/tiered-storage-azure.md b/site2/website/versioned_docs/version-2.9.2-deprecated/tiered-storage-azure.md deleted file mode 100644 index 5923a33147135c..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/tiered-storage-azure.md +++ /dev/null @@ -1,266 +0,0 @@ ---- -id: tiered-storage-azure -title: Use Azure BlobStore offloader with Pulsar -sidebar_label: "Azure BlobStore offloader" -original_id: tiered-storage-azure ---- - -This chapter guides you through every step of installing and configuring the Azure BlobStore offloader and using it with Pulsar. - -## Installation - -Follow the steps below to install the Azure BlobStore offloader. - -### Prerequisite - -- Pulsar: 2.6.2 or later versions - -### Step - -This example uses Pulsar 2.6.2. - -1. Download the Pulsar tarball using one of the following ways: - - * Download from the [Apache mirror](https://archive.apache.org/dist/pulsar/pulsar-2.6.2/apache-pulsar-2.6.2-bin.tar.gz) - - * Download from the Pulsar [downloads page](https://pulsar.apache.org/download) - - * Use [wget](https://www.gnu.org/software/wget): - - ```shell - - wget https://archive.apache.org/dist/pulsar/pulsar-2.6.2/apache-pulsar-2.6.2-bin.tar.gz - - ``` - -2. Download and untar the Pulsar offloaders package. - - ```bash - - wget https://downloads.apache.org/pulsar/pulsar-2.6.2/apache-pulsar-offloaders-2.6.2-bin.tar.gz - tar xvfz apache-pulsar-offloaders-2.6.2-bin.tar.gz - - ``` - -3. Copy the Pulsar offloaders as `offloaders` in the Pulsar directory. - - ``` - - mv apache-pulsar-offloaders-2.6.2/offloaders apache-pulsar-2.6.2/offloaders - - ls offloaders - - ``` - - **Output** - - As shown from the output, Pulsar uses [Apache jclouds](https://jclouds.apache.org) to support [AWS S3](https://aws.amazon.com/s3/), [GCS](https://cloud.google.com/storage/) and [Azure](https://portal.azure.com/#home) for long term storage. - - ``` - - tiered-storage-file-system-2.6.2.nar - tiered-storage-jcloud-2.6.2.nar - - ``` - - :::note - - * If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's Pulsar directory. 
- * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8s and DCOS), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - - ::: - -## Configuration - -:::note - -Before offloading data from BookKeeper to Azure BlobStore, you need to configure some properties of the Azure BlobStore offload driver. - -::: - -Besides, you can also configure the Azure BlobStore offloader to run it automatically or trigger it manually. - -### Configure Azure BlobStore offloader driver - -You can configure the Azure BlobStore offloader driver in the configuration file `broker.conf` or `standalone.conf`. - -- **Required** configurations are as below. - - Required configuration | Description | Example value - |---|---|--- - `managedLedgerOffloadDriver` | Offloader driver name | azureblob - `offloadersDirectory` | Offloader directory | offloaders - `managedLedgerOffloadBucket` | Bucket | pulsar-topic-offload - -- **Optional** configurations are as below. - - Optional | Description | Example value - |---|---|--- - `managedLedgerOffloadReadBufferSizeInBytes`|Size of block read|1 MB - `managedLedgerOffloadMaxBlockSizeInBytes`|Size of block write|64 MB - `managedLedgerMinLedgerRolloverTimeMinutes`|Minimum time between ledger rollover for a topic

    **Note**: it is not recommended that you set this configuration in the production environment.|2 - `managedLedgerMaxEntriesPerLedger`|Maximum number of entries to append to a ledger before triggering a rollover.

    **Note**: it is not recommended that you set this configuration in the production environment.|5000 - -#### Bucket (required) - -A bucket is a basic container that holds your data. Everything you store in Azure BlobStore must be contained in a bucket. You can use a bucket to organize your data and control access to your data, but unlike directory and folder, you cannot nest a bucket. - -##### Example - -This example names the bucket as _pulsar-topic-offload_. - -```conf - -managedLedgerOffloadBucket=pulsar-topic-offload - -``` - -#### Authentication (required) - -To be able to access Azure BlobStore, you need to authenticate with Azure BlobStore. - -* Set the environment variables `AZURE_STORAGE_ACCOUNT` and `AZURE_STORAGE_ACCESS_KEY` in `conf/pulsar_env.sh`. - - "export" is important so that the variables are made available in the environment of spawned processes. - - ```bash - - export AZURE_STORAGE_ACCOUNT=ABC123456789 - export AZURE_STORAGE_ACCESS_KEY=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c - - ``` - -#### Size of block read/write - -You can configure the size of a request sent to or read from Azure BlobStore in the configuration file `broker.conf` or `standalone.conf`. - -Configuration|Description|Default value -|---|---|--- -`managedLedgerOffloadReadBufferSizeInBytes`|Block size for each individual read when reading back data from Azure BlobStore store.|1 MB -`managedLedgerOffloadMaxBlockSizeInBytes`|Maximum size of a "part" sent during a multipart upload to Azure BlobStore store. It **cannot** be smaller than 5 MB. |64 MB - -### Configure Azure BlobStore offloader to run automatically - -Namespace policy can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic reaches the threshold, an offloading operation is triggered automatically. - -Threshold value|Action -|---|--- -> 0 | It triggers the offloading operation if the topic storage reaches its threshold. -= 0|It causes a broker to offload data as soon as possible. -< 0 |It disables automatic offloading operation. - -Automatic offloading runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, offloader does not work until the current segment is full. - -You can configure the threshold size using CLI tools, such as pulsar-admin. - -The offload configurations in `broker.conf` and `standalone.conf` are used for the namespaces that do not have namespace level offload policies. Each namespace can have its own offload policy. If you want to set offload policy for each namespace, use the command [`pulsar-admin namespaces set-offload-policies options`](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-policies-em-) command. - -#### Example - -This example sets the Azure BlobStore offloader threshold size to 10 MB using pulsar-admin. - -```bash - -bin/pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace - -``` - -:::tip - -For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-threshold-em-). - -::: - -### Configure Azure BlobStore offloader to run manually - -For individual topics, you can trigger Azure BlobStore offloader manually using one of the following methods: - -- Use REST endpoint. 
- -- Use CLI tools (such as pulsar-admin). - - To trigger it via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are moved to Azure BlobStore until the threshold is no longer exceeded. Older segments are moved first. - -#### Example - -- This example triggers the Azure BlobStore offloader to run manually using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload --size-threshold 10M my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1 - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-em-). - - ::: - -- This example checks the Azure BlobStore offloader status using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload is currently running - - ``` - - To wait for the Azure BlobStore offloader to complete the job, add the `-w` flag. - - ```bash - - bin/pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Offload was a success - - ``` - - If there is an error in offloading, the error is propagated to the `pulsar-admin topics offload-status` command. - - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Error in offload - null - - Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: - - ``` - -` - - :::tip - - For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-status-em-). - - ::: - diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/tiered-storage-filesystem.md b/site2/website/versioned_docs/version-2.9.2-deprecated/tiered-storage-filesystem.md deleted file mode 100644 index 8164e68208bd7d..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/tiered-storage-filesystem.md +++ /dev/null @@ -1,317 +0,0 @@ ---- -id: tiered-storage-filesystem -title: Use filesystem offloader with Pulsar -sidebar_label: "Filesystem offloader" -original_id: tiered-storage-filesystem ---- - -This chapter guides you through every step of installing and configuring the filesystem offloader and using it with Pulsar. - -## Installation - -Follow the steps below to install the filesystem offloader. - -### Prerequisite - -- Pulsar: 2.4.2 or later versions - -- Hadoop: 3.x.x - -### Step - -This example uses Pulsar 2.5.1. - -1. Download the Pulsar tarball using one of the following ways: - - * Download from the [Apache mirror](https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz) - - * Download from the Pulsar [download page](https://pulsar.apache.org/download) - - * Use [wget](https://www.gnu.org/software/wget) - - ```shell - - wget https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz - - ``` - -2. Download and untar the Pulsar offloaders package. 
- - ```bash - - wget https://downloads.apache.org/pulsar/pulsar-2.5.1/apache-pulsar-offloaders-2.5.1-bin.tar.gz - - tar xvfz apache-pulsar-offloaders-2.5.1-bin.tar.gz - - ``` - - :::note - - * If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's Pulsar directory. - * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8S and DCOS), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - - ::: - -3. Copy the Pulsar offloaders as `offloaders` in the Pulsar directory. - - ``` - - mv apache-pulsar-offloaders-2.5.1/offloaders apache-pulsar-2.5.1/offloaders - - ls offloaders - - ``` - - **Output** - - ``` - - tiered-storage-file-system-2.5.1.nar - tiered-storage-jcloud-2.5.1.nar - - ``` - - :::note - - * If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's Pulsar directory. - * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8s and DCOS), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - - ::: - -## Configuration - -:::note - -Before offloading data from BookKeeper to filesystem, you need to configure some properties of the filesystem offloader driver. - -::: - -Besides, you can also configure the filesystem offloader to run it automatically or trigger it manually. - -### Configure filesystem offloader driver - -You can configure filesystem offloader driver in the configuration file `broker.conf` or `standalone.conf`. - -- **Required** configurations are as below. - - Required configuration | Description | Example value - |---|---|--- - `managedLedgerOffloadDriver` | Offloader driver name, which is case-insensitive. | filesystem - `fileSystemURI` | Connection address | hdfs://127.0.0.1:9000 - `fileSystemProfilePath` | Hadoop profile path | conf/filesystem_offload_core_site.xml - -- **Optional** configurations are as below. - - Optional configuration| Description | Example value - |---|---|--- - `managedLedgerMinLedgerRolloverTimeMinutes`|Minimum time between ledger rollover for a topic

    **Note**: it is not recommended that you set this configuration in the production environment.|2 - `managedLedgerMaxEntriesPerLedger`|Maximum number of entries to append to a ledger before triggering a rollover.

    **Note**: it is not recommended that you set this configuration in the production environment.|5000 - -#### Offloader driver (required) - -Offloader driver name, which is case-insensitive. - -This example sets the offloader driver name as _filesystem_. - -```conf - -managedLedgerOffloadDriver=filesystem - -``` - -#### Connection address (required) - -Connection address is the URI to access the default Hadoop distributed file system. - -##### Example - -This example sets the connection address as _hdfs://127.0.0.1:9000_. - -```conf - -fileSystemURI=hdfs://127.0.0.1:9000 - -``` - -#### Hadoop profile path (required) - -The configuration file is stored in the Hadoop profile path. It contains various settings for Hadoop performance tuning. - -##### Example - -This example sets the Hadoop profile path as _conf/filesystem_offload_core_site.xml_. - -```conf - -fileSystemProfilePath=conf/filesystem_offload_core_site.xml - -``` - -You can set the following configurations in the _filesystem_offload_core_site.xml_ file. - -``` - - - fs.defaultFS - - - - - hadoop.tmp.dir - pulsar - - - - io.file.buffer.size - 4096 - - - - io.seqfile.compress.blocksize - 1000000 - - - - io.seqfile.compression.type - BLOCK - - - - io.map.index.interval - 128 - - -``` - -:::tip - -For more information about the Hadoop HDFS, see [here](https://hadoop.apache.org/docs/current/). - -::: - -### Configure filesystem offloader to run automatically - -Namespace policy can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic reaches the threshold, an offload operation is triggered automatically. - -Threshold value|Action -|---|--- -> 0 | It triggers the offloading operation if the topic storage reaches its threshold. -= 0|It causes a broker to offload data as soon as possible. -< 0 |It disables automatic offloading operation. - -Automatic offload runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, offloader does not work until the current segment is full. - -You can configure the threshold size using CLI tools, such as pulsar-admin. - -#### Example - -This example sets the filesystem offloader threshold size to 10 MB using pulsar-admin. - -```bash - -pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace - -``` - -:::tip - -For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#set-offload-threshold). - -::: - -### Configure filesystem offloader to run manually - -For individual topics, you can trigger filesystem offloader manually using one of the following methods: - -- Use REST endpoint. - -- Use CLI tools (such as pulsar-admin). - -To trigger via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are offloaded to the filesystem until the threshold is no longer exceeded. Older segments are offloaded first. - -#### Example - -- This example triggers the filesystem offloader to run manually using pulsar-admin. 
- - ```bash - - pulsar-admin topics offload --size-threshold 10M persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1 - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#offload). - - ::: - -- This example checks filesystem offloader status using pulsar-admin. - - ```bash - - pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload is currently running - - ``` - - To wait for the filesystem to complete the job, add the `-w` flag. - - ```bash - - pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Offload was a success - - ``` - - If there is an error in the offloading operation, the error is propagated to the `pulsar-admin topics offload-status` command. - - ```bash - - pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Error in offload - null - - Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g= - - ``` - -` - - :::tip - - For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#offload-status). - - ::: - -## Tutorial - -For the complete and step-by-step instructions on how to use the filesystem offloader with Pulsar, see [here](https://hub.streamnative.io/offloaders/filesystem/2.5.1). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/tiered-storage-gcs.md b/site2/website/versioned_docs/version-2.9.2-deprecated/tiered-storage-gcs.md deleted file mode 100644 index afb1e9a10081ce..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/tiered-storage-gcs.md +++ /dev/null @@ -1,321 +0,0 @@ ---- -id: tiered-storage-gcs -title: Use GCS offloader with Pulsar -sidebar_label: "GCS offloader" -original_id: tiered-storage-gcs ---- - -This chapter guides you through every step of installing and configuring the GCS offloader and using it with Pulsar. - -## Installation - -Follow the steps below to install the GCS offloader. - -### Prerequisite - -- Pulsar: 2.4.2 or later versions - -### Step - -This example uses Pulsar 2.5.1. - -1. Download the Pulsar tarball using one of the following ways: - - * Download from the [Apache mirror](https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz) - - * Download from the Pulsar [download page](https://pulsar.apache.org/download) - - * Use [wget](https://www.gnu.org/software/wget) - - ```shell - - wget https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz - - ``` - -2. Download and untar the Pulsar offloaders package. 
- - ```bash - - wget https://downloads.apache.org/pulsar/pulsar-2.5.1/apache-pulsar-offloaders-2.5.1-bin.tar.gz - - tar xvfz apache-pulsar-offloaders-2.5.1-bin.tar.gz - - ``` - - :::note - - * If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's Pulsar directory. - * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8S and DCOS), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - - ::: - -3. Copy the Pulsar offloaders as `offloaders` in the Pulsar directory. - - ``` - - mv apache-pulsar-offloaders-2.5.1/offloaders apache-pulsar-2.5.1/offloaders - - ls offloaders - - ``` - - **Output** - - As shown in the output, Pulsar uses [Apache jclouds](https://jclouds.apache.org) to support GCS and AWS S3 for long term storage. - - ``` - - tiered-storage-file-system-2.5.1.nar - tiered-storage-jcloud-2.5.1.nar - - ``` - -## Configuration - -:::note - -Before offloading data from BookKeeper to GCS, you need to configure some properties of the GCS offloader driver. - -::: - -Besides, you can also configure the GCS offloader to run it automatically or trigger it manually. - -### Configure GCS offloader driver - -You can configure GCS offloader driver in the configuration file `broker.conf` or `standalone.conf`. - -- **Required** configurations are as below. - - **Required** configuration | Description | Example value - |---|---|--- - `managedLedgerOffloadDriver`|Offloader driver name, which is case-insensitive.|google-cloud-storage - `offloadersDirectory`|Offloader directory|offloaders - `gcsManagedLedgerOffloadBucket`|Bucket|pulsar-topic-offload - `gcsManagedLedgerOffloadRegion`|Bucket region|europe-west3 - `gcsManagedLedgerOffloadServiceAccountKeyFile`|Authentication |/Users/user-name/Downloads/project-804d5e6a6f33.json - -- **Optional** configurations are as below. - - Optional configuration|Description|Example value - |---|---|--- - `gcsManagedLedgerOffloadReadBufferSizeInBytes`|Size of block read|1 MB - `gcsManagedLedgerOffloadMaxBlockSizeInBytes`|Size of block write|64 MB - `managedLedgerMinLedgerRolloverTimeMinutes`|Minimum time between ledger rollover for a topic.|2 - `managedLedgerMaxEntriesPerLedger`|The max number of entries to append to a ledger before triggering a rollover.|5000 - -#### Bucket (required) - -A bucket is a basic container that holds your data. Everything you store in GCS **must** be contained in a bucket. You can use a bucket to organize your data and control access to your data, but unlike directory and folder, you can not nest a bucket. - -##### Example - -This example names the bucket as _pulsar-topic-offload_. - -```conf - -gcsManagedLedgerOffloadBucket=pulsar-topic-offload - -``` - -#### Bucket region (required) - -Bucket region is the region where a bucket is located. If a bucket region is not specified, the **default** region (`us multi-regional location`) is used. - -:::tip - -For more information about bucket location, see [here](https://cloud.google.com/storage/docs/bucket-locations). - -::: - -##### Example - -This example sets the bucket region as _europe-west3_. - -``` - -gcsManagedLedgerOffloadRegion=europe-west3 - -``` - -#### Authentication (required) - -To enable a broker access GCS, you need to configure `gcsManagedLedgerOffloadServiceAccountKeyFile` in the configuration file `broker.conf`. 
- -`gcsManagedLedgerOffloadServiceAccountKeyFile` is -a JSON file, containing GCS credentials of a service account. - -##### Example - -To generate service account credentials or view the public credentials that you've already generated, follow the following steps. - -1. Navigate to the [Service accounts page](https://console.developers.google.com/iam-admin/serviceaccounts). - -2. Select a project or create a new one. - -3. Click **Create service account**. - -4. In the **Create service account** window, type a name for the service account and select **Furnish a new private key**. - - If you want to [grant G Suite domain-wide authority](https://developers.google.com/identity/protocols/OAuth2ServiceAccount#delegatingauthority) to the service account, select **Enable G Suite Domain-wide Delegation**. - -5. Click **Create**. - - :::note - - Make sure the service account you create has permission to operate GCS, you need to assign **Storage Admin** permission to your service account [here](https://cloud.google.com/storage/docs/access-control/iam). - - ::: - -6. You can get the following information and set this in `broker.conf`. - - ```conf - - gcsManagedLedgerOffloadServiceAccountKeyFile="/Users/user-name/Downloads/project-804d5e6a6f33.json" - - ``` - - :::tip - - - For more information about how to create `gcsManagedLedgerOffloadServiceAccountKeyFile`, see [here](https://support.google.com/googleapi/answer/6158849). - - For more information about Google Cloud IAM, see [here](https://cloud.google.com/storage/docs/access-control/iam). - - ::: - -#### Size of block read/write - -You can configure the size of a request sent to or read from GCS in the configuration file `broker.conf`. - -Configuration|Description -|---|--- -`gcsManagedLedgerOffloadReadBufferSizeInBytes`|Block size for each individual read when reading back data from GCS.

    The **default** value is 1 MB. -`gcsManagedLedgerOffloadMaxBlockSizeInBytes`|Maximum size of a "part" sent during a multipart upload to GCS.

    It **cannot** be smaller than 5 MB.

    The **default** value is 64 MB. - -### Configure GCS offloader to run automatically - -Namespace policy can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic reaches the threshold, an offload operation is triggered automatically. - -Threshold value|Action -|---|--- -> 0 | It triggers the offloading operation if the topic storage reaches its threshold. -= 0|It causes a broker to offload data as soon as possible. -< 0 |It disables automatic offloading operation. - -Automatic offloading runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, offloader does not work until the current segment is full. - -You can configure the threshold size using CLI tools, such as pulsar-admin. - -The offload configurations in `broker.conf` and `standalone.conf` are used for the namespaces that do not have namespace level offload policies. Each namespace can have its own offload policy. If you want to set offload policy for each namespace, use the command [`pulsar-admin namespaces set-offload-policies options`](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-policies-em-) command. - -#### Example - -This example sets the GCS offloader threshold size to 10 MB using pulsar-admin. - -```bash - -pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace - -``` - -:::tip - -For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#set-offload-threshold). - -::: - -### Configure GCS offloader to run manually - -For individual topics, you can trigger GCS offloader manually using one of the following methods: - -- Use REST endpoint. - -- Use CLI tools (such as pulsar-admin). - - To trigger the GCS via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are moved to GCS until the threshold is no longer exceeded. Older segments are moved first. - -#### Example - -- This example triggers the GCS offloader to run manually using pulsar-admin with the command `pulsar-admin topics offload (topic-name) (threshold)`. - - ```bash - - pulsar-admin topics offload persistent://my-tenant/my-namespace/topic1 10M - - ``` - - **Output** - - ```bash - - Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1 - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#offload). - - ::: - -- This example checks the GCS offloader status using pulsar-admin with the command `pulsar-admin topics offload-status options`. - - ```bash - - pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload is currently running - - ``` - - To wait for GCS to complete the job, add the `-w` flag. 
- - ```bash - - pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Offload was a success - - ``` - - If there is an error in offloading, the error is propagated to the `pulsar-admin topics offload-status` command. - - ```bash - - pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Error in offload - null - - Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g= - - ``` - -` - - :::tip - - For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#offload-status). - - ::: - -## Tutorial - -For the complete and step-by-step instructions on how to use the GCS offloader with Pulsar, see [here](https://hub.streamnative.io/offloaders/gcs/2.5.1#usage). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/tiered-storage-overview.md b/site2/website/versioned_docs/version-2.9.2-deprecated/tiered-storage-overview.md deleted file mode 100644 index c635034f463b46..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/tiered-storage-overview.md +++ /dev/null @@ -1,52 +0,0 @@ ---- -id: tiered-storage-overview -title: Overview of tiered storage -sidebar_label: "Overview" -original_id: tiered-storage-overview ---- - -Pulsar's **Tiered Storage** feature allows older backlog data to be moved from BookKeeper to long term and cheaper storage, while still allowing clients to access the backlog as if nothing has changed. - -* Tiered storage uses [Apache jclouds](https://jclouds.apache.org) to support [Amazon S3](https://aws.amazon.com/s3/) and [GCS (Google Cloud Storage)](https://cloud.google.com/storage/) for long term storage. - - With jclouds, it is easy to add support for more [cloud storage providers](https://jclouds.apache.org/reference/providers/#blobstore-providers) in the future. - - :::tip - - - For more information about how to use the AWS S3 offloader with Pulsar, see [here](tiered-storage-aws.md). - - - For more information about how to use the GCS offloader with Pulsar, see [here](tiered-storage-gcs.md). - - ::: - -* Tiered storage uses [Apache Hadoop](http://hadoop.apache.org/) to support filesystems for long term storage. - - With Hadoop, it is easy to add support for more filesystems in the future. - - :::tip - - For more information about how to use the filesystem offloader with Pulsar, see [here](tiered-storage-filesystem.md). - - ::: - -## When to use tiered storage? - -Tiered storage should be used when you have a topic for which you want to keep a very long backlog for a long time. - -For example, if you have a topic containing user actions which you use to train your recommendation systems, you may want to keep that data for a long time, so that if you change your recommendation algorithm, you can rerun it against your full user history. - -## How does tiered storage work? 
- -A topic in Pulsar is backed by a **log**, known as a **managed ledger**. This log is composed of an ordered list of segments. Pulsar only writes to the final segment of the log. All previous segments are sealed. The data within the segment is immutable. This is known as a **segment oriented architecture**. - -![Tiered storage](/assets/pulsar-tiered-storage.png "Tiered Storage") - -The tiered storage offloading mechanism takes advantage of segment oriented architecture. When offloading is requested, the segments of the log are copied one-by-one to tiered storage. All segments of the log (apart from the current segment) written to tiered storage can be offloaded. - -Data written to BookKeeper is replicated to 3 physical machines by default. However, once a segment is sealed in BookKeeper, it becomes immutable and can be copied to long term storage. Long term storage can achieve cost savings by using mechanisms such as [Reed-Solomon error correction](https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction) to require fewer physical copies of data. - -Before offloading ledgers to long term storage, you need to configure buckets, credentials, and other properties for the cloud storage service. Additionally, Pulsar uses multi-part objects to upload the segment data and brokers may crash while uploading the data. It is recommended that you add a life cycle rule for your bucket to expire incomplete multi-part upload after a day or two days to avoid getting charged for incomplete uploads. Moreover, you can trigger the offloading operation manually (via REST API or CLI) or automatically (via CLI). - -After offloading ledgers to long term storage, you can still query data in the offloaded ledgers with Pulsar SQL. - -For more information about tiered storage for Pulsar topics, see [here](https://github.com/apache/pulsar/wiki/PIP-17:-Tiered-storage-for-Pulsar-topics). diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/transaction-api.md b/site2/website/versioned_docs/version-2.9.2-deprecated/transaction-api.md deleted file mode 100644 index ecbd0da12c786d..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/transaction-api.md +++ /dev/null @@ -1,172 +0,0 @@ ---- -id: transactions-api -title: Transactions API -sidebar_label: "Transactions API" -original_id: transactions-api ---- - -All messages in a transaction are available only to consumers after the transaction has been committed. If a transaction has been aborted, all the writes and acknowledgments in this transaction roll back. - -## Prerequisites -1. To enable transactions in Pulsar, you need to configure the parameter in the `broker.conf` file. - -``` - -transactionCoordinatorEnabled=true - -``` - -2. Initialize transaction coordinator metadata, so the transaction coordinators can leverage advantages of the partitioned topic, such as load balance. - -``` - -bin/pulsar initialize-transaction-coordinator-metadata -cs 127.0.0.1:2181 -c standalone - -``` - -After initializing transaction coordinator metadata, you can use the transactions API. The following APIs are available. - -## Initialize Pulsar client - -You can enable transaction for transaction client and initialize transaction coordinator client. - -``` - -PulsarClient pulsarClient = PulsarClient.builder() - .serviceUrl("pulsar://localhost:6650") - .enableTransaction(true) - .build(); - -``` - -## Start transactions -You can start transaction in the following way. 
- -``` - -Transaction txn = pulsarClient - .newTransaction() - .withTransactionTimeout(5, TimeUnit.MINUTES) - .build() - .get(); - -``` - -## Produce transaction messages - -A transaction parameter is required when producing new transaction messages. The semantic of the transaction messages in Pulsar is `read-committed`, so the consumer cannot receive the ongoing transaction messages before the transaction is committed. - -``` - -producer.newMessage(txn).value("Hello Pulsar Transaction".getBytes()).sendAsync(); - -``` - -## Acknowledge the messages with the transaction - -The transaction acknowledgement requires a transaction parameter. The transaction acknowledgement marks the messages state to pending-ack state. When the transaction is committed, the pending-ack state becomes ack state. If the transaction is aborted, the pending-ack state becomes unack state. - -``` - -Message message = consumer.receive(); -consumer.acknowledgeAsync(message.getMessageId(), txn); - -``` - -## Commit transactions - -When the transaction is committed, consumers receive the transaction messages and the pending-ack state becomes ack state. - -``` - -txn.commit().get(); - -``` - -## Abort transaction - -When the transaction is aborted, the transaction acknowledgement is canceled and the pending-ack messages are redelivered. - -``` - -txn.abort().get(); - -``` - -### Example -The following example shows how messages are processed in transaction. - -``` - -PulsarClient pulsarClient = PulsarClient.builder() - .serviceUrl(getPulsarServiceList().get(0).getBrokerServiceUrl()) - .statsInterval(0, TimeUnit.SECONDS) - .enableTransaction(true) - .build(); - -String sourceTopic = "public/default/source-topic"; -String sinkTopic = "public/default/sink-topic"; - -Producer sourceProducer = pulsarClient - .newProducer(Schema.STRING) - .topic(sourceTopic) - .create(); -sourceProducer.newMessage().value("hello pulsar transaction").sendAsync(); - -Consumer sourceConsumer = pulsarClient - .newConsumer(Schema.STRING) - .topic(sourceTopic) - .subscriptionName("test") - .subscriptionType(SubscriptionType.Shared) - .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest) - .subscribe(); - -Producer sinkProducer = pulsarClient - .newProducer(Schema.STRING) - .topic(sinkTopic) - .sendTimeout(0, TimeUnit.MILLISECONDS) - .create(); - -Transaction txn = pulsarClient - .newTransaction() - .withTransactionTimeout(5, TimeUnit.MINUTES) - .build() - .get(); - -// source message acknowledgement and sink message produce belong to one transaction, -// they are combined into an atomic operation. -Message message = sourceConsumer.receive(); -sourceConsumer.acknowledgeAsync(message.getMessageId(), txn); -sinkProducer.newMessage(txn).value("sink data").sendAsync(); - -txn.commit().get(); - -``` - -## Enable batch messages in transactions - -To enable batch messages in transactions, you need to enable the batch index acknowledgement feature. The transaction acks check whether the batch index acknowledgement conflicts. - -To enable batch index acknowledgement, you need to set `acknowledgmentAtBatchIndexLevelEnabled` to `true` in the `broker.conf` or `standalone.conf` file. - -``` - -acknowledgmentAtBatchIndexLevelEnabled=true - -``` - -And then you need to call the `enableBatchIndexAcknowledgment(true)` method in the consumer builder. 
- -``` - -Consumer sinkConsumer = pulsarClient - .newConsumer() - .topic(transferTopic) - .subscriptionName("sink-topic") - .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest) - .subscriptionType(SubscriptionType.Shared) - .enableBatchIndexAcknowledgment(true) // enable batch index acknowledgement - .subscribe(); - -``` - diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/transaction-guarantee.md b/site2/website/versioned_docs/version-2.9.2-deprecated/transaction-guarantee.md deleted file mode 100644 index 9db2d254e159f6..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/transaction-guarantee.md +++ /dev/null @@ -1,17 +0,0 @@ ---- -id: transactions-guarantee -title: Transactions Guarantee -sidebar_label: "Transactions Guarantee" -original_id: transactions-guarantee ---- - -Pulsar transactions support the following guarantee. - -## Atomic multi-partition writes and multi-subscription acknowledges -Transactions enable atomic writes to multiple topics and partitions. A batch of messages in a transaction can be received from, produced to, and acknowledged by many partitions. All the operations involved in a transaction succeed or fail as a single unit. - -## Read transactional message -All the messages in a transaction are available only for consumers until the transaction is committed. - -## Acknowledge transactional message -A message is acknowledged successfully only once by a consumer under the subscription when acknowledging the message with the transaction ID. \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/txn-how.md b/site2/website/versioned_docs/version-2.9.2-deprecated/txn-how.md deleted file mode 100644 index add072448aeb34..00000000000000 --- a/site2/website/versioned_docs/version-2.9.2-deprecated/txn-how.md +++ /dev/null @@ -1,151 +0,0 @@ ---- -id: txn-how -title: How transactions work? -sidebar_label: "How transactions work?" -original_id: txn-how ---- - -This section describes transaction components and how the components work together. For the complete design details, see [PIP-31: Transactional Streaming](https://docs.google.com/document/d/145VYp09JKTw9jAT-7yNyFU255FptB2_B2Fye100ZXDI/edit#heading=h.bm5ainqxosrx). - -## Key concept - -It is important to know the following key concepts, which is a prerequisite for understanding how transactions work. - -### Transaction coordinator - -The transaction coordinator (TC) is a module running inside a Pulsar broker. - -* It maintains the entire life cycle of transactions and prevents a transaction from getting into an incorrect status. - -* It handles transaction timeout, and ensures that the transaction is aborted after a transaction timeout. - -### Transaction log - -All the transaction metadata persists in the transaction log. The transaction log is backed by a Pulsar topic. If the transaction coordinator crashes, it can restore the transaction metadata from the transaction log. - -The transaction log stores the transaction status rather than actual messages in the transaction (the actual messages are stored in the actual topic partitions). - -### Transaction buffer - -Messages produced to a topic partition within a transaction are stored in the transaction buffer (TB) of that topic partition. The messages in the transaction buffer are not visible to consumers until the transactions are committed. The messages in the transaction buffer are discarded when the transactions are aborted. 
- -Transaction buffer stores all ongoing and aborted transactions in memory. All messages are sent to the actual partitioned Pulsar topics. After transactions are committed, the messages in the transaction buffer are materialized (visible) to consumers. When the transactions are aborted, the messages in the transaction buffer are discarded. - -### Transaction ID - -Transaction ID (TxnID) identifies a unique transaction in Pulsar. The transaction ID is 128-bit. The highest 16 bits are reserved for the ID of the transaction coordinator, and the remaining bits are used for monotonically increasing numbers in each transaction coordinator. It is easy to locate the transaction crash with the TxnID. - -### Pending acknowledge state - -Pending acknowledge state maintains message acknowledgments within a transaction before a transaction completes. If a message is in the pending acknowledge state, the message cannot be acknowledged by other transactions until the message is removed from the pending acknowledge state. - -The pending acknowledge state is persisted to the pending acknowledge log (cursor ledger). A new broker can restore the state from the pending acknowledge log to ensure the acknowledgement is not lost. - -## Data flow - -At a high level, the data flow can be split into several steps: - -1. Begin a transaction. - -2. Publish messages with a transaction. - -3. Acknowledge messages with a transaction. - -4. End a transaction. - -To help you debug or tune the transaction for better performance, review the following diagrams and descriptions. - -### 1. Begin a transaction - -Before introducing the transaction in Pulsar, a producer is created and then messages are sent to brokers and stored in data logs. - -![](/assets/txn-3.png) - -Let’s walk through the steps for _beginning a transaction_. - -| Step | Description | -| --- | --- | -| 1.1 | The first step is that the Pulsar client finds the transaction coordinator. | -| 1.2 | The transaction coordinator allocates a transaction ID for the transaction. In the transaction log, the transaction is logged with its transaction ID and status (OPEN), which ensures the transaction status is persisted regardless of transaction coordinator crashes. | -| 1.3 | The transaction log sends the result of persisting the transaction ID to the transaction coordinator. | -| 1.4 | After the transaction status entry is logged, the transaction coordinator brings the transaction ID back to the Pulsar client. | - -### 2. Publish messages with a transaction - -In this stage, the Pulsar client enters a transaction loop, repeating the `consume-process-produce` operation for all the messages that comprise the transaction. This is a long phase and is potentially composed of multiple produce and acknowledgement requests. - -![](/assets/txn-4.png) - -Let’s walk through the steps for _publishing messages with a transaction_. - -| Step | Description | -| --- | --- | -| 2.1.1 | Before the Pulsar client produces messages to a new topic partition, it sends a request to the transaction coordinator to add the partition to the transaction. | -| 2.1.2 | The transaction coordinator logs the partition changes of the transaction into the transaction log for durability, which ensures the transaction coordinator knows all the partitions that a transaction is handling. The transaction coordinator can commit or abort changes on each partition at the end-partition phase. 
| -| 2.1.3 | The transaction log sends the result of logging the new partition (used for producing messages) to the transaction coordinator. | -| 2.1.4 | The transaction coordinator sends the result of adding a new produced partition to the transaction. | -| 2.2.1 | The Pulsar client starts producing messages to partitions. The flow of this part is the same as the normal flow of producing messages except that the batch of messages produced by a transaction contains transaction IDs. | -| 2.2.2 | The broker writes messages to a partition. | - -### 3. Acknowledge messages with a transaction - -In this phase, the Pulsar client sends a request to the transaction coordinator and a new subscription is acknowledged as a part of a transaction. - -![](/assets/txn-5.png) - -Let’s walk through the steps for _acknowledging messages with a transaction_. - -| Step | Description | -| --- | --- | -| 3.1.1 | The Pulsar client sends a request to add an acknowledged subscription to the transaction coordinator. | -| 3.1.2 | The transaction coordinator logs the addition of subscription, which ensures that it knows all subscriptions handled by a transaction and can commit or abort changes on each subscription at the end phase. | -| 3.1.3 | The transaction log sends the result of logging the new partition (used for acknowledging messages) to the transaction coordinator. | -| 3.1.4 | The transaction coordinator sends the result of adding the new acknowledged partition to the transaction. | -| 3.2 | The Pulsar client acknowledges messages on the subscription. The flow of this part is the same as the normal flow of acknowledging messages except that the acknowledged request carries a transaction ID. | -| 3.3 | The broker receiving the acknowledgement request checks if the acknowledgment belongs to a transaction or not. | - -### 4. End a transaction - -At the end of a transaction, the Pulsar client decides to commit or abort the transaction. The transaction can be aborted when a conflict is detected on acknowledging messages. - -#### 4.1 End transaction request - -When the Pulsar client finishes a transaction, it issues an end transaction request. - -![](/assets/txn-6.png) - -Let’s walk through the steps for _ending the transaction_. - -| Step | Description | -| --- | --- | -| 4.1.1 | The Pulsar client issues an end transaction request (with a field indicating whether the transaction is to be committed or aborted) to the transaction coordinator. | -| 4.1.2 | The transaction coordinator writes a COMMITTING or ABORTING message to its transaction log. | -| 4.1.3 | The transaction log sends the result of logging the committing or aborting status. | - -#### 4.2 Finalize a transaction - -The transaction coordinator starts the process of committing or aborting messages to all the partitions involved in this transaction. - -![](/assets/txn-7.png) - -Let’s walk through the steps for _finalizing a transaction_. - -| Step | Description | -| --- | --- | -| 4.2.1 | The transaction coordinator commits transactions on subscriptions and commits transactions on partitions at the same time. | -| 4.2.2 | The broker (produce) writes produced committed markers to the actual partitions. At the same time, the broker (ack) writes acked committed marks to the subscription pending ack partitions. | -| 4.2.3 | The data log sends the result of writing produced committed marks to the broker. At the same time, pending ack data log sends the result of writing acked committed marks to the broker. The cursor moves to the next position. 
-
-#### 4.3 Mark a transaction as COMMITTED or ABORTED
-
-The transaction coordinator writes the final transaction status to the transaction log to complete the transaction.
-
-![](/assets/txn-8.png)
-
-Let’s walk through the steps for _marking a transaction as COMMITTED or ABORTED_.
-
-| Step | Description |
-| --- | --- |
-| 4.3.1 | After all produced messages and acknowledgments to all partitions involved in this transaction have been successfully committed or aborted, the transaction coordinator writes the final COMMITTED or ABORTED transaction status message to its transaction log, indicating that the transaction is complete. All the messages associated with the transaction in its transaction log can then be safely removed. |
-| 4.3.2 | The transaction log sends the result of the committed transaction to the transaction coordinator. |
-| 4.3.3 | The transaction coordinator sends the result of the committed transaction to the Pulsar client. |
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/txn-monitor.md b/site2/website/versioned_docs/version-2.9.2-deprecated/txn-monitor.md
deleted file mode 100644
index 5b50953772d092..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/txn-monitor.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-id: txn-monitor
-title: How to monitor transactions?
-sidebar_label: "How to monitor transactions?"
-original_id: txn-monitor
----
-
-You can monitor the status of transactions in Prometheus and Grafana using the [transaction metrics](https://pulsar.apache.org/docs/en/next/reference-metrics/#pulsar-transaction).
-
-For how to configure Prometheus and Grafana, see [here](https://pulsar.apache.org/docs/en/next/deploy-monitoring).
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/txn-use.md b/site2/website/versioned_docs/version-2.9.2-deprecated/txn-use.md
deleted file mode 100644
index de0e4a92f1b27e..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/txn-use.md
+++ /dev/null
@@ -1,105 +0,0 @@
----
-id: txn-use
-title: How to use transactions?
-sidebar_label: "How to use transactions?"
-original_id: txn-use
----
-
-## Transaction API
-
-The transaction feature is primarily a server-side and protocol-level feature. You can use it via the [transaction API](https://pulsar.apache.org/api/admin/), which is available in **Pulsar 2.8.0 or later**.
-
-To use the transaction API, you do not need any additional settings in the Pulsar client. **By default**, transactions are **disabled**.
-
-Currently, the transaction API is only available for **Java** clients. Support for other language clients will be added in future releases.
-
-## Quick start
-
-This section provides an example of how to use the transaction API to send and receive messages in a Java client.
-
-1. Start Pulsar 2.8.0 or later.
-
-2. Enable transactions.
-
-   Change the configuration in the `broker.conf` file.
-
-   ```
-   
-   transactionCoordinatorEnabled=true
-   
-   ```
-
-   If you want to enable batch messages in transactions, follow the steps below.
-
-   Set `acknowledgmentAtBatchIndexLevelEnabled` to `true` in the `broker.conf` or `standalone.conf` file.
-
-   ```
-   
-   acknowledgmentAtBatchIndexLevelEnabled=true
-   
-   ```
-
-3. Initialize transaction coordinator metadata.
-
-   The transaction coordinator can leverage the advantages of partitioned topics (such as load balancing).
-
-   **Input**
-
-   ```
-   
-   bin/pulsar initialize-transaction-coordinator-metadata -cs 127.0.0.1:2181 -c standalone
-   
-   ```
-
-   **Output**
-
-   ```
-   
-   Transaction coordinator metadata setup success
-   
-   ```
-
-4. Initialize a Pulsar client.
-
-   ```
-   
-   PulsarClient client = PulsarClient.builder()
-       .serviceUrl("pulsar://localhost:6650")
-       .enableTransaction(true)
-       .build();
-   
-   ```
-
-Now you can start using the transaction API to send and receive messages. Below is an example of a `consume-process-produce` application written in Java.
-
-![](/assets/txn-9.png)
-
-Let’s walk through this example step by step.
-
-| Step | Description |
-| --- | --- |
-| 1. Start a transaction. | The application opens a new transaction by calling `PulsarClient.newTransaction()`. It specifies the transaction timeout as 1 minute. If the transaction is not committed within 1 minute, the transaction is automatically aborted. |
-| 2. Receive messages from topics. | The application creates two normal consumers to receive messages from the topics _input-topic-1_ and _input-topic-2_ respectively. |
-| 3. Publish messages to topics with the transaction. | The application creates two producers to produce the resulting messages to the output topics _output-topic-1_ and _output-topic-2_ respectively. The application applies the processing logic and generates two output messages. The application sends those two output messages as part of the transaction opened in the first step via `Producer.newMessage(Transaction)`. |
-| 4. Acknowledge the messages with the transaction. | In the same transaction, the application acknowledges the two input messages. |
-| 5. Commit the transaction. | The application commits the transaction by calling `Transaction.commit()` on the open transaction. The commit operation ensures the two input messages are marked as acknowledged and the two output messages are written successfully to the output topics. |
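-
-A minimal sketch of those five steps in code, under the following assumptions: `consumer1`/`consumer2` and `producer1`/`producer2` are already built over `String` values against the input and output topics from the same transaction-enabled client, `process(...)` is a stand-in for your own logic, and imports and error handling are omitted.
-
-```
-
-// 1. Open a transaction with a 1-minute timeout.
-Transaction txn = client.newTransaction()
-    .withTransactionTimeout(1, TimeUnit.MINUTES)
-    .build()
-    .get();
-
-// 2. Receive one message from each input topic.
-Message<String> msg1 = consumer1.receive();
-Message<String> msg2 = consumer2.receive();
-
-// 3. Publish the processed results as part of the transaction.
-//    process(...) is application logic (assumed).
-producer1.newMessage(txn).value(process(msg1.getValue())).send();
-producer2.newMessage(txn).value(process(msg2.getValue())).send();
-
-// 4. Acknowledge the input messages within the same transaction.
-consumer1.acknowledgeAsync(msg1.getMessageId(), txn);
-consumer2.acknowledgeAsync(msg2.getMessageId(), txn);
-
-// 5. Commit: the produced messages and the acknowledgments take effect together.
-txn.commit().get();
-
-```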
-
-[1] Example of enabling batch message acknowledgment in transactions in the consumer builder.
-
-```
-
-Consumer<byte[]> sinkConsumer = pulsarClient
-    .newConsumer()
-    .topic(transferTopic)
-    .subscriptionName("sink-topic")
-    .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest)
-    .subscriptionType(SubscriptionType.Shared)
-    .enableBatchIndexAcknowledgment(true) // enable batch index acknowledgement
-    .subscribe();
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/txn-what.md b/site2/website/versioned_docs/version-2.9.2-deprecated/txn-what.md
deleted file mode 100644
index 844f19a700f8f0..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/txn-what.md
+++ /dev/null
@@ -1,60 +0,0 @@
----
-id: txn-what
-title: What are transactions?
-sidebar_label: "What are transactions?"
-original_id: txn-what
----
-
-Transactions strengthen the message delivery semantics of Apache Pulsar and the [processing guarantees of Pulsar Functions](https://pulsar.apache.org/docs/en/next/functions-overview/#processing-guarantees). The Pulsar Transaction API supports atomic writes and acknowledgments across multiple topics.
-
-Transactions allow:
-
-- A producer to send a batch of messages to multiple topics where all messages in the batch are eventually visible to any consumer, or none are ever visible to consumers.
-
-- End-to-end exactly-once semantics (execute a `consume-process-produce` operation exactly once).
-
-## Transaction semantics
-
-Pulsar transactions have the following semantics:
-
-* All operations within a transaction are committed as a single unit.
-
-  * Either all messages are committed, or none of them are.
-
-  * Each message is written or processed exactly once, without data loss or duplicates (even in the event of failures).
-
-  * If a transaction is aborted, all the writes and acknowledgments in this transaction are rolled back (see the sketch after this list).
-
-* A group of messages in a transaction can be received from, produced to, and acknowledged by multiple partitions.
-
-  * Consumers are only allowed to read committed (acked) messages. In other words, the broker does not deliver transactional messages which are part of an open transaction or messages which are part of an aborted transaction.
-
-  * Message writes across multiple partitions are atomic.
-
-  * Message acks across multiple subscriptions are atomic. A message is acked successfully only once by a consumer under the subscription when acknowledging the message with the transaction ID.
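-
-To make the abort path concrete, here is a minimal, hedged sketch (client and producer construction are assumed; the client must have been built with `enableTransaction(true)`, and the producer carries `byte[]` values):
-
-```
-
-Transaction txn = client.newTransaction()
-    .withTransactionTimeout(5, TimeUnit.MINUTES)
-    .build()
-    .get();
-
-producer.newMessage(txn).value("never visible".getBytes()).send();
-
-// Aborting rolls back the write above; consumers never see the message.
-txn.abort().get();
-
-```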
-
-## Transactions and stream processing
-
-Stream processing on Pulsar is a `consume-process-produce` operation on Pulsar topics:
-
-* `Consume`: a source operator that runs a Pulsar consumer reads messages from one or multiple Pulsar topics.
-
-* `Process`: a processing operator transforms the messages.
-
-* `Produce`: a sink operator that runs a Pulsar producer writes the resulting messages to one or multiple Pulsar topics.
-
-![](/assets/txn-2.png)
-
-Pulsar transactions support end-to-end exactly-once stream processing, which means messages are not lost from a source operator and messages are not duplicated to a sink operator.
-
-## Use case
-
-Prior to Pulsar 2.8.0, there was no easy way to build stream processing applications with Pulsar that achieve exactly-once processing guarantees. With the transaction support introduced in Pulsar 2.8.0, the following services support exactly-once semantics:
-
-* [Pulsar Flink connector](https://flink.apache.org/2021/01/07/pulsar-flink-connector-270.html)
-
-  Prior to Pulsar 2.8.0, if you wanted to build streaming applications using Pulsar and Flink, the Pulsar Flink connector only supported an exactly-once source connector and an at-least-once sink connector. The highest end-to-end processing guarantee was therefore at-least-once, so streaming applications could produce duplicated messages to the resulting topics in Pulsar.
-
-  With the transaction support introduced in Pulsar 2.8.0, the Pulsar Flink sink connector can support exactly-once semantics by implementing the designated `TwoPhaseCommitSinkFunction` and hooking the Flink sink message lifecycle up to the Pulsar transaction API.
-
-* Support for Pulsar Functions and other connectors will be added in future releases.
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/txn-why.md b/site2/website/versioned_docs/version-2.9.2-deprecated/txn-why.md
deleted file mode 100644
index 1ed8769977654e..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/txn-why.md
+++ /dev/null
@@ -1,45 +0,0 @@
----
-id: txn-why
-title: Why transactions?
-sidebar_label: "Why transactions?"
-original_id: txn-why
----
-
-Pulsar transactions (txn) enable event streaming applications to consume, process, and produce messages in one atomic operation. The reasons for developing this feature are summarized below.
-
-## Demand for stream processing
-
-The demand for stream processing applications with stronger processing guarantees has grown along with the rise of stream processing.
-For example, in the financial industry, financial institutions use stream processing engines to process debits and credits for users. This type of use case requires that every message is processed exactly once, without exception.
-
-In other words, if a stream processing application consumes message A and
-produces the result as a message B (B = f(A)), then the exactly-once processing
-guarantee means that A is marked as consumed if and only if B is
-successfully produced, and vice versa.
-
-![](/assets/txn-1.png)
-
-The Pulsar transactions API strengthens the message delivery semantics and the processing guarantees for stream processing. It enables stream processing applications to consume, process, and produce messages in one atomic operation. That means a batch of messages in a transaction can be received from, produced to, and acknowledged by many topic partitions. All the operations involved in a transaction succeed or fail as one single unit.
-
-## Limitation of the idempotent producer
-
-Avoiding data loss or duplication can be achieved by using the Pulsar idempotent producer, but it does not provide guarantees for writes across multiple partitions.
-
-In Pulsar, the highest level of message delivery guarantee is using an [idempotent producer](https://pulsar.apache.org/docs/en/next/concepts-messaging/#producer-idempotency) with exactly-once semantics on a single partition, that is, each message is persisted exactly once, without data loss or duplication. However, there are some limitations in this solution:
-
-- Due to the monotonically increasing sequence ID, this solution only works on a single partition and within a single producer session (that is, for producing one message), so there is no atomicity when producing multiple messages to one or multiple partitions.
-
-  In this case, if there are failures (for example, client / broker / bookie crashes, network failures, and more) in the process of producing and receiving messages, messages are re-processed and re-delivered, which may cause data loss or data duplication:
-
-  - For the producer: if the producer retries sending messages, some messages are persisted multiple times; if the producer does not retry sending messages, some messages are persisted once and other messages are lost.
-
-  - For the consumer: since the consumer does not know whether the broker has received its acknowledgments or not, the acknowledgments may not be retried, which causes the consumer to receive duplicate messages.
-
-- Similarly, Pulsar Functions only guarantee exactly-once semantics for an idempotent function on a single event, not for processing multiple events or producing multiple results exactly once.
-
-  For example, if a function accepts multiple events and produces one result (for example, a window function), the function may fail between producing the result and acknowledging the incoming messages, or even between acknowledging individual events, which causes all (or some) incoming messages to be re-delivered and reprocessed, and a new result to be generated.
-
-  However, many scenarios need atomic guarantees across multiple partitions and sessions.
-
-- Consumers need to rely on additional mechanisms to acknowledge (ack) messages exactly once.
-
-  For example, consumers are required to store the MessageID along with its acked state. After the topic is unloaded, the subscription can recover the acked state of this MessageID in memory when the topic is loaded again.
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.9.2-deprecated/window-functions-context.md b/site2/website/versioned_docs/version-2.9.2-deprecated/window-functions-context.md
deleted file mode 100644
index f80fea57989ef0..00000000000000
--- a/site2/website/versioned_docs/version-2.9.2-deprecated/window-functions-context.md
+++ /dev/null
@@ -1,581 +0,0 @@
----
-id: window-functions-context
-title: Window Functions Context
-sidebar_label: "Window Functions: Context"
-original_id: window-functions-context
----
-
-The Java SDK provides access to a **window context object** that can be used by a window function. This context object provides a wide variety of information and functionality for Pulsar window functions, as described below.
-
-- [Spec](#spec)
-
-  * Names of all input topics and the output topic associated with the function.
-  * Tenant and namespace associated with the function.
-  * Pulsar window function name, ID, and version.
-  * ID of the Pulsar function instance running the window function.
-  * Number of instances that invoke the window function.
-  * Built-in type or custom class name of the output schema.
-
-- [Logger](#logger)
-
-  * Logger object used by the window function, which can be used to create window function log messages.
-
-- [User config](#user-config)
-
-  * Access to arbitrary user configuration values.
-
-- [Routing](#routing)
-
-  * Routing is supported in Pulsar window functions. Pulsar window functions send messages to arbitrary topics as per the `publish` interface.
-
-- [Metrics](#metrics)
-
-  * Interface for recording metrics.
-
-- [State storage](#state-storage)
-
-  * Interface for storing and retrieving state in [state storage](#state-storage).
-
-## Spec
-
-The spec contains the basic information of a function.
-
-### Get input topics
-
-The `getInputTopics` method gets the **name list** of all input topics.
-
-This example demonstrates how to get the name list of all input topics in a Java window function.
-
-```java
-
-public class GetInputTopicsWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        Collection<String> inputTopics = context.getInputTopics();
-        System.out.println(inputTopics);
-
-        return null;
-    }
-
-}
-
-```
-
-### Get output topic
-
-The `getOutputTopic` method gets the **name of the topic** to which the message is sent.
-
-This example demonstrates how to get the name of the output topic in a Java window function.
-
-```java
-
-public class GetOutputTopicWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        String outputTopic = context.getOutputTopic();
-        System.out.println(outputTopic);
-
-        return null;
-    }
-}
-
-```
-
-### Get tenant
-
-The `getTenant` method gets the tenant name associated with the window function.
-
-This example demonstrates how to get the tenant name in a Java window function.
-
-```java
-
-public class GetTenantWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        String tenant = context.getTenant();
-        System.out.println(tenant);
-
-        return null;
-    }
-
-}
-
-```
-
-### Get namespace
-
-The `getNamespace` method gets the namespace associated with the window function.
-
-This example demonstrates how to get the namespace in a Java window function.
-
-```java
-
-public class GetNamespaceWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        String ns = context.getNamespace();
-        System.out.println(ns);
-
-        return null;
-    }
-
-}
-
-```
-
-### Get function name
-
-The `getFunctionName` method gets the window function name.
-
-This example demonstrates how to get the function name in a Java window function.
-
-```java
-
-public class GetNameOfWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        String functionName = context.getFunctionName();
-        System.out.println(functionName);
-
-        return null;
-    }
-
-}
-
-```
-
-### Get function ID
-
-The `getFunctionId` method gets the window function ID.
-
-This example demonstrates how to get the function ID in a Java window function.
-
-```java
-
-public class GetFunctionIDWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        String functionID = context.getFunctionId();
-        System.out.println(functionID);
-
-        return null;
-    }
-
-}
-
-```
-
-### Get function version
-
-The `getFunctionVersion` method gets the window function version.
-
-This example demonstrates how to get the function version of a Java window function.
-
-```java
-
-public class GetVersionOfWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        String functionVersion = context.getFunctionVersion();
-        System.out.println(functionVersion);
-
-        return null;
-    }
-
-}
-
-```
-
-### Get instance ID
-
-The `getInstanceId` method gets the instance ID of a window function.
-
-This example demonstrates how to get the instance ID in a Java window function.
-
-```java
-
-public class GetInstanceIDWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        int instanceId = context.getInstanceId();
-        System.out.println(instanceId);
-
-        return null;
-    }
-
-}
-
-```
-
-### Get num instances
-
-The `getNumInstances` method gets the number of instances that invoke the window function.
-
-This example demonstrates how to get the number of instances in a Java window function.
-
-```java
-
-public class GetNumInstancesWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        int numInstances = context.getNumInstances();
-        System.out.println(numInstances);
-
-        return null;
-    }
-
-}
-
-```
-
-### Get output schema type
-
-The `getOutputSchemaType` method gets the built-in type or custom class name of the output schema.
-
-This example demonstrates how to get the output schema type of a Java window function.
-
-```java
-
-public class GetOutputSchemaTypeWindowFunction implements WindowFunction<String, Void> {
-
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        String schemaType = context.getOutputSchemaType();
-        System.out.println(schemaType);
-
-        return null;
-    }
-}
-
-```
-
-## Logger
-
-Pulsar window functions that use the Java SDK have access to an [SLF4j](https://www.slf4j.org/) [`Logger`](https://www.slf4j.org/api/org/apache/log4j/Logger.html) object that can be used to produce logs at the chosen log level.
-
-This example logs an `INFO`-level message for each record in the window in a Java window function.
-
-```java
-
-import java.util.Collection;
-import org.apache.pulsar.functions.api.Record;
-import org.apache.pulsar.functions.api.WindowContext;
-import org.apache.pulsar.functions.api.WindowFunction;
-import org.slf4j.Logger;
-
-public class LoggingWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        Logger log = context.getLogger();
-        for (Record<String> record : inputs) {
-            log.info(record + "-window-log");
-        }
-        return null;
-    }
-
-}
-
-```
-
-If you need your function to produce logs, specify a log topic when creating or running the function.
-
-```bash
-
-bin/pulsar-admin functions create \
-  --jar my-functions.jar \
-  --classname my.package.LoggingFunction \
-  --log-topic persistent://public/default/logging-function-logs \
-  # Other function configs
-
-```
-
-You can access all logs produced by `LoggingFunction` via the `persistent://public/default/logging-function-logs` topic.
-
-## Metrics
-
-Pulsar window functions can publish arbitrary metrics to the metrics interface, which can be queried.
-
-:::note
-
-If a Pulsar window function uses the language-native interface for Java, that function is not able to publish metrics and stats to Pulsar.
-
-:::
-
-You can record metrics using the context object on a per-key basis.
-
-This example records the event time of each message under the `MessageEventTime` key every time the function processes a message.
-
-```java
-
-import java.util.Collection;
-import org.apache.pulsar.functions.api.Record;
-import org.apache.pulsar.functions.api.WindowContext;
-import org.apache.pulsar.functions.api.WindowFunction;
-
-
-/**
- * Example function that wants to keep track of
- * the event time of each message sent.
- */
-public class UserMetricWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-
-        for (Record<String> record : inputs) {
-            if (record.getEventTime().isPresent()) {
-                context.recordMetric("MessageEventTime", record.getEventTime().get().doubleValue());
-            }
-        }
-
-        return null;
-    }
-}
-
-```
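-
-Metrics are not limited to event times; `recordMetric` accepts any key. A minimal, hedged sketch that counts processed records per window (the class name and the `process-count` key are illustrative, not part of the SDK):
-
-```java
-
-import java.util.Collection;
-import org.apache.pulsar.functions.api.Record;
-import org.apache.pulsar.functions.api.WindowContext;
-import org.apache.pulsar.functions.api.WindowFunction;
-
-public class CountingWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        // Record how many records this window processed under the "process-count" key.
-        context.recordMetric("process-count", inputs.size());
-        return null;
-    }
-}
-
-```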
- */ - Optional getUserConfigValue(String key); - -``` - -#### getUserConfigValueOrDefault - -The `getUserConfigValueOrDefault` API gets a user-defined key/value or a default value if none is present. - -```java - -/** - * Get any user-defined key/value or a default value if none is present. - * - * @param key - * @param defaultValue - * @return Either the user config value associated with a given key or a supplied default value - */ - Object getUserConfigValueOrDefault(String key, Object defaultValue); - -``` - -This example demonstrates how to access key/value pairs provided to Pulsar window functions. - -Java SDK context object enables you to access key/value pairs provided to Pulsar window functions via the command line (as JSON). - -:::tip - -For all key/value pairs passed to Java window functions, both the `key` and the `value` are `String`. To set the value to be a different type, you need to deserialize it from the `String` type. - -::: - -This example passes a key/value pair in a Java window function. - -```bash - -bin/pulsar-admin functions create \ - --user-config '{"word-of-the-day":"verdure"}' \ - # Other function configs - -``` - -This example accesses values in a Java window function. - -The `UserConfigFunction` function logs the string `"The word of the day is verdure"` every time the function is invoked (which means every time a message arrives). The user config of `word-of-the-day` is changed **only** when the function is updated with a new config value via -multiple ways, such as the command line tool or REST API. - -```java - -import org.apache.pulsar.functions.api.Context; -import org.apache.pulsar.functions.api.Function; -import org.slf4j.Logger; - -import java.util.Optional; - -public class UserConfigWindowFunction implements WindowFunction { - @Override - public String process(Collection> input, WindowContext context) throws Exception { - Optional whatToWrite = context.getUserConfigValue("WhatToWrite"); - if (whatToWrite.get() != null) { - return (String)whatToWrite.get(); - } else { - return "Not a nice way"; - } - } - -} - -``` - -If no value is provided, you can access the entire user config map or set a default value. - -```java - -// Get the whole config map -Map allConfigs = context.getUserConfigMap(); - -// Get value or resort to default -String wotd = context.getUserConfigValueOrDefault("word-of-the-day", "perspicacious"); - -``` - -## Routing - -You can use the `context.publish()` interface to publish as many results as you want. - -This example shows that the `PublishFunction` class uses the built-in function in the context to publish messages to the `publishTopic` in a Java function. - -```java - -public class PublishWindowFunction implements WindowFunction { - @Override - public Void process(Collection> input, WindowContext context) throws Exception { - String publishTopic = (String) context.getUserConfigValueOrDefault("publish-topic", "publishtopic"); - String output = String.format("%s!", input); - context.publish(publishTopic, output); - - return null; - } - -} - -``` - -## State storage - -Pulsar window functions use [Apache BookKeeper](https://bookkeeper.apache.org) as a state storage interface. Apache Pulsar installation (including the standalone installation) includes the deployment of BookKeeper bookies. - -Apache Pulsar integrates with Apache BookKeeper `table service` to store the `state` for functions. For example, the `WordCount` function can store its `counters` state into BookKeeper table service via Pulsar Functions state APIs. 
-
-This example demonstrates how applications store states in Pulsar window functions.
-
-The logic of the `WordCountWindowFunction` is simple and straightforward.
-
-1. The function first splits the received string into multiple words using the regex `\\.`.
-
-2. For each `word`, the function increments the corresponding `counter` by 1 via `incrCounter(key, amount)`.
-
-```java
-
-import java.util.Arrays;
-import java.util.Collection;
-
-import org.apache.pulsar.functions.api.Record;
-import org.apache.pulsar.functions.api.WindowContext;
-import org.apache.pulsar.functions.api.WindowFunction;
-
-public class WordCountWindowFunction implements WindowFunction<String, Void> {
-    @Override
-    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
-        for (Record<String> input : inputs) {
-            Arrays.asList(input.getValue().split("\\.")).forEach(word -> context.incrCounter(word, 1));
-        }
-        return null;
-
-    }
-}
-
-```
-
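-To read back the counters a function has stored, the `pulsar-admin functions querystate` command can be used from the shell; a hedged sketch (the tenant, namespace, function name, and key are assumed values):
-
-```bash
-
-# Query one stored counter of the deployed function (names are assumed).
-bin/pulsar-admin functions querystate \
-  --tenant public \
-  --namespace default \
-  --name word-count-window \
-  --key hello
-
-```
-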
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/about.md b/site2/website/versioned_docs/version-2.9.3-deprecated/about.md
deleted file mode 100644
index 26e97138b909c1..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/about.md
+++ /dev/null
@@ -1,56 +0,0 @@
----
-slug: /
-id: about
-title: Welcome to the doc portal!
-sidebar_label: "About"
----
-
-import BlockLinks from "@site/src/components/BlockLinks";
-import BlockLink from "@site/src/components/BlockLink";
-import { docUrl } from "@site/src/utils/index";
-
-
-# Welcome to the doc portal!
-***
-
-This portal holds a variety of support documents to help you work with Pulsar. If you’re a beginner, there are tutorials and explainers to help you understand Pulsar and how it works.
-
-If you’re an experienced coder, review this page to learn the easiest way to access the specific content you’re looking for.
-
-## Get Started Now
-
-
-
-
-
-
-
-
-
-
-## Navigation
-***
-
-There are several ways to get around in the doc portal. The index navigation pane is a table of contents for the entire archive. The archive is divided into sections, like chapters in a book. Click the title of the topic to view it.
-
-In-context links provide an easy way to immediately reference related topics. Click the underlined term to view the topic.
-
-Links to related topics can be found at the bottom of each topic page. Click the link to view the topic.
-
-![Page Linking](/assets/page-linking.png)
-
-## Continuous Improvement
-***
-As you probably know, we are working on a new user experience for our documentation portal that will make learning about and building on top of Apache Pulsar a much better experience. Whether you need overview concepts, how-to procedures, curated guides, or quick references, we’re building content to support it. This welcome page is just the first step. We will be providing updates every month.
-
-## Help Improve These Documents
-***
-
-You’ll notice an Edit button at the bottom and top of each page. Click it to open a landing page with instructions for requesting changes to posted documents. These are your resources. Participation is not only welcomed – it’s essential!
-
-## Join the Community!
-***
-
-The Pulsar community on GitHub is active, passionate, and knowledgeable. Join discussions, voice opinions, suggest features, and dive into the code itself. Find your Pulsar family here at [apache/pulsar](https://github.com/apache/pulsar).
-
-An equally passionate community can be found in the [Pulsar Slack channel](https://apache-pulsar.slack.com/). You’ll need an invitation to join, but many GitHub Pulsar community members are Slack members too. Join, hang out, learn, and make some new friends.
-
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/adaptors-kafka.md b/site2/website/versioned_docs/version-2.9.3-deprecated/adaptors-kafka.md
deleted file mode 100644
index ea256049710fd1..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/adaptors-kafka.md
+++ /dev/null
@@ -1,276 +0,0 @@
----
-id: adaptors-kafka
-title: Pulsar adaptor for Apache Kafka
-sidebar_label: "Kafka client wrapper"
-original_id: adaptors-kafka
----
-
-
-Pulsar provides an easy option for applications that are currently written using the [Apache Kafka](http://kafka.apache.org) Java client API.
-
-## Using the Pulsar Kafka compatibility wrapper
-
-In an existing application, change the regular Kafka client dependency and replace it with the Pulsar Kafka wrapper. Remove the following dependency in `pom.xml`:
-
-```xml
-
-<dependency>
-  <groupId>org.apache.kafka</groupId>
-  <artifactId>kafka-clients</artifactId>
-  <version>0.10.2.1</version>
-</dependency>
-
-```
-
-Then include this dependency for the Pulsar Kafka wrapper:
-
-```xml
-
-<dependency>
-  <groupId>org.apache.pulsar</groupId>
-  <artifactId>pulsar-client-kafka</artifactId>
-  <version>@pulsar:version@</version>
-</dependency>
-
-```
-
-With the new dependency, the existing code works without any changes. You need to adjust the configuration to make sure the
-producers and consumers point to the Pulsar service rather than to Kafka, and use a particular
-Pulsar topic.
-
-## Using the Pulsar Kafka compatibility wrapper together with the existing Kafka client
-
-When migrating from Kafka to Pulsar, the application might use the original Kafka client
-and the Pulsar Kafka wrapper together during migration. In that case, you should use the
-unshaded Pulsar Kafka client wrapper.
-
-```xml
-
-<dependency>
-  <groupId>org.apache.pulsar</groupId>
-  <artifactId>pulsar-client-kafka-original</artifactId>
-  <version>@pulsar:version@</version>
-</dependency>
-
-```
-
-When using this dependency, construct producers using `org.apache.kafka.clients.producer.PulsarKafkaProducer`
-instead of `org.apache.kafka.clients.producer.KafkaProducer`, and `org.apache.kafka.clients.producer.PulsarKafkaConsumer` for consumers.
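-
-As a hedged sketch of the unshaded variant (this assumes the wrapper mirrors the regular `KafkaProducer` constructors, which is the intent of its drop-in design; serializer setup as in the examples below):
-
-```java
-
-Properties props = new Properties();
-props.put("bootstrap.servers", "pulsar://localhost:6650"); // a Pulsar service URL, not a Kafka broker
-props.put("key.serializer", IntegerSerializer.class.getName());
-props.put("value.serializer", StringSerializer.class.getName());
-
-// Same API surface as KafkaProducer, but the unshaded class name makes it
-// explicit that this producer talks to Pulsar (constructor shape assumed).
-Producer<Integer, String> producer = new PulsarKafkaProducer<>(props);
-
-producer.send(new ProducerRecord<>("persistent://public/default/my-topic", 1, "hello"));
-producer.close();
-
-```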
-
-## Producer example
-
-```java
-
-// Topic needs to be a regular Pulsar topic
-String topic = "persistent://public/default/my-topic";
-
-Properties props = new Properties();
-// Point to a Pulsar service
-props.put("bootstrap.servers", "pulsar://localhost:6650");
-
-props.put("key.serializer", IntegerSerializer.class.getName());
-props.put("value.serializer", StringSerializer.class.getName());
-
-Producer<Integer, String> producer = new KafkaProducer<>(props);
-
-for (int i = 0; i < 10; i++) {
-    producer.send(new ProducerRecord<Integer, String>(topic, i, "hello-" + i));
-    log.info("Message {} sent successfully", i);
-}
-
-producer.close();
-
-```
-
-## Consumer example
-
-```java
-
-String topic = "persistent://public/default/my-topic";
-
-Properties props = new Properties();
-// Point to a Pulsar service
-props.put("bootstrap.servers", "pulsar://localhost:6650");
-props.put("group.id", "my-subscription-name");
-props.put("enable.auto.commit", "false");
-props.put("key.deserializer", IntegerDeserializer.class.getName());
-props.put("value.deserializer", StringDeserializer.class.getName());
-
-Consumer<Integer, String> consumer = new KafkaConsumer<>(props);
-consumer.subscribe(Arrays.asList(topic));
-
-while (true) {
-    ConsumerRecords<Integer, String> records = consumer.poll(100);
-    records.forEach(record -> {
-        log.info("Received record: {}", record);
-    });
-
-    // Commit last offset
-    consumer.commitSync();
-}
-
-```
-
-## Complete Examples
-
-You can find the complete producer and consumer examples [here](https://github.com/apache/pulsar-adapters/tree/master/pulsar-client-kafka-compat/pulsar-client-kafka-tests/src/test/java/org/apache/pulsar/client/kafka/compat/examples).
-
-## Compatibility matrix
-
-Currently the Pulsar Kafka wrapper supports most of the operations offered by the Kafka API.
-
-### Producer
-
-APIs:
-
-| Producer Method | Supported | Notes |
-|:------------------------------------------------------------------------------|:----------|:-------------------------------------------------------------------------|
-| `Future<RecordMetadata> send(ProducerRecord<K, V> record)` | Yes | |
-| `Future<RecordMetadata> send(ProducerRecord<K, V> record, Callback callback)` | Yes | |
-| `void flush()` | Yes | |
-| `List<PartitionInfo> partitionsFor(String topic)` | No | |
-| `Map<MetricName, ? extends Metric> metrics()` | No | |
-| `void close()` | Yes | |
-| `void close(long timeout, TimeUnit unit)` | Yes | |
-
-Properties:
-
-| Config property | Supported | Notes |
-|:----------------------------------------|:----------|:------------------------------------------------------------------------------|
-| `acks` | Ignored | Durability and quorum writes are configured at the namespace level |
-| `auto.offset.reset` | Yes | It uses a default value of `earliest` if you do not give a specific setting. |
-| `batch.size` | Ignored | |
-| `bootstrap.servers` | Yes | |
-| `buffer.memory` | Ignored | |
-| `client.id` | Ignored | |
-| `compression.type` | Yes | Allows `gzip` and `lz4`. No `snappy`. |
-| `connections.max.idle.ms` | Yes | Only supports up to 2,147,483,647,000 (Integer.MAX_VALUE * 1000) ms of idle time |
-| `interceptor.classes` | Yes | |
-| `key.serializer` | Yes | |
-| `linger.ms` | Yes | Controls the group commit time when batching messages |
-| `max.block.ms` | Ignored | |
-| `max.in.flight.requests.per.connection` | Ignored | In Pulsar ordering is maintained even with multiple requests in flight |
-| `max.request.size` | Ignored | |
-| `metric.reporters` | Ignored | |
-| `metrics.num.samples` | Ignored | |
-| `metrics.sample.window.ms` | Ignored | |
-| `partitioner.class` | Yes | |
-| `receive.buffer.bytes` | Ignored | |
-| `reconnect.backoff.ms` | Ignored | |
-| `request.timeout.ms` | Ignored | |
-| `retries` | Ignored | Pulsar client retries with exponential backoff until the send timeout expires. |
-| `send.buffer.bytes` | Ignored | |
-| `timeout.ms` | Yes | |
-| `value.serializer` | Yes | |
-
-
-### Consumer
-
-The following table lists consumer APIs.
-
-| Consumer Method | Supported | Notes |
-|:--------------------------------------------------------------------------------------------------------|:----------|:------|
-| `Set<TopicPartition> assignment()` | No | |
-| `Set<String> subscription()` | Yes | |
-| `void subscribe(Collection<String> topics)` | Yes | |
-| `void subscribe(Collection<String> topics, ConsumerRebalanceListener callback)` | No | |
-| `void assign(Collection<TopicPartition> partitions)` | No | |
-| `void subscribe(Pattern pattern, ConsumerRebalanceListener callback)` | No | |
-| `void unsubscribe()` | Yes | |
-| `ConsumerRecords<K, V> poll(long timeoutMillis)` | Yes | |
-| `void commitSync()` | Yes | |
-| `void commitSync(Map<TopicPartition, OffsetAndMetadata> offsets)` | Yes | |
-| `void commitAsync()` | Yes | |
-| `void commitAsync(OffsetCommitCallback callback)` | Yes | |
-| `void commitAsync(Map<TopicPartition, OffsetAndMetadata> offsets, OffsetCommitCallback callback)` | Yes | |
-| `void seek(TopicPartition partition, long offset)` | Yes | |
-| `void seekToBeginning(Collection<TopicPartition> partitions)` | Yes | |
-| `void seekToEnd(Collection<TopicPartition> partitions)` | Yes | |
-| `long position(TopicPartition partition)` | Yes | |
-| `OffsetAndMetadata committed(TopicPartition partition)` | Yes | |
-| `Map<MetricName, ? extends Metric> metrics()` | No | |
-| `List<PartitionInfo> partitionsFor(String topic)` | No | |
-| `Map<String, List<PartitionInfo>> listTopics()` | No | |
-| `Set<TopicPartition> paused()` | No | |
-| `void pause(Collection<TopicPartition> partitions)` | No | |
-| `void resume(Collection<TopicPartition> partitions)` | No | |
-| `Map<TopicPartition, OffsetAndTimestamp> offsetsForTimes(Map<TopicPartition, Long> timestampsToSearch)` | No | |
-| `Map<TopicPartition, Long> beginningOffsets(Collection<TopicPartition> partitions)` | No | |
-| `Map<TopicPartition, Long> endOffsets(Collection<TopicPartition> partitions)` | No | |
-| `void close()` | Yes | |
-| `void close(long timeout, TimeUnit unit)` | Yes | |
-| `void wakeup()` | No | |
-
-Properties:
-
-| Config property | Supported | Notes |
-|:--------------------------------|:----------|:------------------------------------------------------|
-| `group.id` | Yes | Maps to a Pulsar subscription name |
-| `max.poll.records` | Yes | |
-| `max.poll.interval.ms` | Ignored | Messages are "pushed" from broker |
-| `session.timeout.ms` | Ignored | |
-| `heartbeat.interval.ms` | Ignored | |
-| `bootstrap.servers` | Yes | Needs to point to a single Pulsar service URL |
-| `enable.auto.commit` | Yes | |
-| `auto.commit.interval.ms` | Ignored | With auto-commit, acks are sent immediately to broker |
-| `partition.assignment.strategy` | Ignored | |
-| `auto.offset.reset` | Yes | Only supports `earliest` and `latest`. 
| -| `fetch.min.bytes` | Ignored | | -| `fetch.max.bytes` | Ignored | | -| `fetch.max.wait.ms` | Ignored | | -| `interceptor.classes` | Yes | | -| `metadata.max.age.ms` | Ignored | | -| `max.partition.fetch.bytes` | Ignored | | -| `send.buffer.bytes` | Ignored | | -| `receive.buffer.bytes` | Ignored | | -| `client.id` | Ignored | | - - -## Customize Pulsar configurations - -You can configure Pulsar authentication provider directly from the Kafka properties. - -### Pulsar client properties - -| Config property | Default | Notes | -|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------| -| [`pulsar.authentication.class`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setAuthentication-org.apache.pulsar.client.api.Authentication-) | | Configure to auth provider. For example, `org.apache.pulsar.client.impl.auth.AuthenticationTls`.| -| [`pulsar.authentication.params.map`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setAuthentication-java.lang.String-java.util.Map-) | | Map which represents parameters for the Authentication-Plugin. | -| [`pulsar.authentication.params.string`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setAuthentication-java.lang.String-java.lang.String-) | | String which represents parameters for the Authentication-Plugin, for example, `key1:val1,key2:val2`. | -| [`pulsar.use.tls`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setUseTls-boolean-) | `false` | Enable TLS transport encryption. | -| [`pulsar.tls.trust.certs.file.path`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setTlsTrustCertsFilePath-java.lang.String-) | | Path for the TLS trust certificate store. | -| [`pulsar.tls.allow.insecure.connection`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setTlsAllowInsecureConnection-boolean-) | `false` | Accept self-signed certificates from brokers. | -| [`pulsar.operation.timeout.ms`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setOperationTimeout-int-java.util.concurrent.TimeUnit-) | `30000` | General operations timeout. | -| [`pulsar.stats.interval.seconds`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setStatsInterval-long-java.util.concurrent.TimeUnit-) | `60` | Pulsar client lib stats printing interval. | -| [`pulsar.num.io.threads`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setIoThreads-int-) | `1` | The number of Netty IO threads to use. | -| [`pulsar.connections.per.broker`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setConnectionsPerBroker-int-) | `1` | The maximum number of connection to each broker. | -| [`pulsar.use.tcp.nodelay`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setUseTcpNoDelay-boolean-) | `true` | TCP no-delay. | -| [`pulsar.concurrent.lookup.requests`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setConcurrentLookupRequest-int-) | `50000` | The maximum number of concurrent topic lookups. 
| -| [`pulsar.max.number.rejected.request.per.connection`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setMaxNumberOfRejectedRequestPerConnection-int-) | `50` | The threshold of errors to forcefully close a connection. | -| [`pulsar.keepalive.interval.ms`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientBuilder.html#keepAliveInterval-int-java.util.concurrent.TimeUnit-)| `30000` | Keep alive interval for each client-broker-connection. | - - -### Pulsar producer properties - -| Config property | Default | Notes | -|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------| -| [`pulsar.producer.name`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setProducerName-java.lang.String-) | | Specify the producer name. | -| [`pulsar.producer.initial.sequence.id`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setInitialSequenceId-long-) | | Specify baseline for sequence ID of this producer. | -| [`pulsar.producer.max.pending.messages`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setMaxPendingMessages-int-) | `1000` | Set the maximum size of the message queue pending to receive an acknowledgment from the broker. | -| [`pulsar.producer.max.pending.messages.across.partitions`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setMaxPendingMessagesAcrossPartitions-int-) | `50000` | Set the maximum number of pending messages across all the partitions. | -| [`pulsar.producer.batching.enabled`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setBatchingEnabled-boolean-) | `true` | Control whether automatic batching of messages is enabled for the producer. | -| [`pulsar.producer.batching.max.messages`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setBatchingMaxMessages-int-) | `1000` | The maximum number of messages in a batch. | -| [`pulsar.block.if.producer.queue.full`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setBlockIfQueueFull-boolean-) | | Specify the block producer if queue is full. | -| [`pulsar.crypto.reader.factory.class.name`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setCryptoKeyReader-org.apache.pulsar.client.api.CryptoKeyReader-) | | Specify the CryptoReader-Factory(`CryptoKeyReaderFactory`) classname which allows producer to create CryptoKeyReader. | - - -### Pulsar consumer Properties - -| Config property | Default | Notes | -|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------| -| [`pulsar.consumer.name`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setConsumerName-java.lang.String-) | | Specify the consumer name. | -| [`pulsar.consumer.receiver.queue.size`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setReceiverQueueSize-int-) | 1000 | Set the size of the consumer receiver queue. 
| -| [`pulsar.consumer.acknowledgments.group.time.millis`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#acknowledgmentGroupTime-long-java.util.concurrent.TimeUnit-) | 100 | Set the maximum amount of group time for consumers to send the acknowledgments to the broker. | -| [`pulsar.consumer.total.receiver.queue.size.across.partitions`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setMaxTotalReceiverQueueSizeAcrossPartitions-int-) | 50000 | Set the maximum size of the total receiver queue across partitions. | -| [`pulsar.consumer.subscription.topics.mode`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#subscriptionTopicsMode-Mode-) | PersistentOnly | Set the subscription topic mode for consumers. | -| [`pulsar.crypto.reader.factory.class.name`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setCryptoKeyReader-org.apache.pulsar.client.api.CryptoKeyReader-) | | Specify the CryptoReader-Factory(`CryptoKeyReaderFactory`) classname which allows consumer to create CryptoKeyReader. | diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/adaptors-spark.md b/site2/website/versioned_docs/version-2.9.3-deprecated/adaptors-spark.md deleted file mode 100644 index e14f13b5d4b079..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/adaptors-spark.md +++ /dev/null @@ -1,91 +0,0 @@ ---- -id: adaptors-spark -title: Pulsar adaptor for Apache Spark -sidebar_label: "Apache Spark" -original_id: adaptors-spark ---- - -## Spark Streaming receiver -The Spark Streaming receiver for Pulsar is a custom receiver that enables Apache [Spark Streaming](https://spark.apache.org/streaming/) to receive raw data from Pulsar. - -An application can receive data in [Resilient Distributed Dataset](https://spark.apache.org/docs/latest/programming-guide.html#resilient-distributed-datasets-rdds) (RDD) format via the Spark Streaming receiver and can process it in a variety of ways. - -### Prerequisites - -To use the receiver, include a dependency for the `pulsar-spark` library in your Java configuration. 
-
-#### Maven
-
-If you're using Maven, add this to your `pom.xml`:
-
-```xml
-
-<pulsar.version>@pulsar:version@</pulsar.version>
-
-<dependency>
-  <groupId>org.apache.pulsar</groupId>
-  <artifactId>pulsar-spark</artifactId>
-  <version>${pulsar.version}</version>
-</dependency>
-
-```
-
-#### Gradle
-
-If you're using Gradle, add this to your `build.gradle` file:
-
-```groovy
-
-def pulsarVersion = "@pulsar:version@"
-
-dependencies {
-    compile group: 'org.apache.pulsar', name: 'pulsar-spark', version: pulsarVersion
-}
-
-```
-
-### Usage
-
-Pass an instance of `SparkStreamingPulsarReceiver` to the `receiverStream` method in `JavaStreamingContext`:
-
-```java
-
-    String serviceUrl = "pulsar://localhost:6650/";
-    String topic = "persistent://public/default/test_src";
-    String subs = "test_sub";
-
-    SparkConf sparkConf = new SparkConf().setMaster("local[*]").setAppName("Pulsar Spark Example");
-
-    JavaStreamingContext jsc = new JavaStreamingContext(sparkConf, Durations.seconds(60));
-
-    ConsumerConfigurationData<byte[]> pulsarConf = new ConsumerConfigurationData<>();
-
-    Set<String> set = new HashSet<>();
-    set.add(topic);
-    pulsarConf.setTopicNames(set);
-    pulsarConf.setSubscriptionName(subs);
-
-    SparkStreamingPulsarReceiver pulsarReceiver = new SparkStreamingPulsarReceiver(
-        serviceUrl,
-        pulsarConf,
-        new AuthenticationDisabled());
-
-    JavaReceiverInputDStream<byte[]> lineDStream = jsc.receiverStream(pulsarReceiver);
-
-```
-
-For a complete example, click [here](https://github.com/apache/pulsar-adapters/blob/master/examples/spark/src/main/java/org/apache/spark/streaming/receiver/example/SparkStreamingPulsarReceiverExample.java). In this example, the number of received messages that contain the string "Pulsar" is counted.
-
-Note that other Pulsar authentication classes can be used if needed. For example, in order to use a token during authentication, the following parameters for the `SparkStreamingPulsarReceiver` constructor can be set:
-
-```java
-
-SparkStreamingPulsarReceiver pulsarReceiver = new SparkStreamingPulsarReceiver(
-    serviceUrl,
-    pulsarConf,
-    new AuthenticationToken("token:<credentials>"));
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/adaptors-storm.md b/site2/website/versioned_docs/version-2.9.3-deprecated/adaptors-storm.md
deleted file mode 100644
index 76d507164777db..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/adaptors-storm.md
+++ /dev/null
@@ -1,96 +0,0 @@
----
-id: adaptors-storm
-title: Pulsar adaptor for Apache Storm
-sidebar_label: "Apache Storm"
-original_id: adaptors-storm
----
-
-Pulsar Storm is an adaptor for integrating with [Apache Storm](http://storm.apache.org/) topologies. It provides core Storm implementations for sending and receiving data.
-
-An application can inject data into a Storm topology via a generic Pulsar spout, as well as consume data from a Storm topology via a generic Pulsar bolt.
-
-## Using the Pulsar Storm Adaptor
-
-Include the dependency for the Pulsar Storm adaptor:
-
-```xml
-
-<dependency>
-  <groupId>org.apache.pulsar</groupId>
-  <artifactId>pulsar-storm</artifactId>
-  <version>${pulsar.version}</version>
-</dependency>
-
-```
-
-## Pulsar Spout
-
-The Pulsar spout allows data published on a topic to be consumed by a Storm topology. It emits a Storm tuple based on the message received and the `MessageToValuesMapper` provided by the client.
-
-The tuples that fail to be processed by the downstream bolts are re-injected by the spout with an exponential backoff, within a configurable timeout (the default is 60 seconds) or a configurable number of retries, whichever comes first, after which the message is acknowledged by the consumer. Here's an example construction of a spout:
-
-```java
-
-MessageToValuesMapper messageToValuesMapper = new MessageToValuesMapper() {
-
-    @Override
-    public Values toValues(Message<byte[]> msg) {
-        return new Values(new String(msg.getData()));
-    }
-
-    @Override
-    public void declareOutputFields(OutputFieldsDeclarer declarer) {
-        // declare the output fields
-        declarer.declare(new Fields("string"));
-    }
-};
-
-// Configure a Pulsar Spout
-PulsarSpoutConfiguration spoutConf = new PulsarSpoutConfiguration();
-spoutConf.setServiceUrl("pulsar://broker.messaging.usw.example.com:6650");
-spoutConf.setTopic("persistent://my-property/usw/my-ns/my-topic1");
-spoutConf.setSubscriptionName("my-subscriber-name1");
-spoutConf.setMessageToValuesMapper(messageToValuesMapper);
-
-// Create a Pulsar Spout
-PulsarSpout spout = new PulsarSpout(spoutConf);
-
-```
-
-For a complete example, click [here](https://github.com/apache/pulsar-adapters/blob/master/pulsar-storm/src/test/java/org/apache/pulsar/storm/PulsarSpoutTest.java).
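-
-To actually run the spout, wire it into a topology. A minimal, hedged sketch using Storm's `TopologyBuilder` (the component IDs and the downstream `MyProcessingBolt` are assumptions for illustration; submission exceptions are left to the caller):
-
-```java
-
-TopologyBuilder builder = new TopologyBuilder();
-
-// Register the Pulsar spout as the data source of the topology.
-builder.setSpout("pulsar-spout", spout, 1);
-
-// Feed the emitted tuples into a downstream bolt (assumed to exist).
-builder.setBolt("my-bolt", new MyProcessingBolt(), 1)
-       .shuffleGrouping("pulsar-spout");
-
-StormSubmitter.submitTopology("pulsar-topology", new Config(), builder.createTopology());
-
-```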
-
-## Pulsar Bolt
-
-The Pulsar bolt allows data in a Storm topology to be published on a topic. It publishes messages based on the Storm tuple received and the `TupleToMessageMapper` provided by the client.
-
-A partitioned topic can also be used to publish messages on different topics. In the implementation of the `TupleToMessageMapper`, a "key" needs to be provided in the message, which sends the messages with the same key to the same topic. Here's an example bolt:
-
-```java
-
-TupleToMessageMapper tupleToMessageMapper = new TupleToMessageMapper() {
-
-    @Override
-    public TypedMessageBuilder<byte[]> toMessage(TypedMessageBuilder<byte[]> msgBuilder, Tuple tuple) {
-        String receivedMessage = tuple.getString(0);
-        // message processing
-        String processedMsg = receivedMessage + "-processed";
-        return msgBuilder.value(processedMsg.getBytes());
-    }
-
-    @Override
-    public void declareOutputFields(OutputFieldsDeclarer declarer) {
-        // declare the output fields
-    }
-};
-
-// Configure a Pulsar Bolt
-PulsarBoltConfiguration boltConf = new PulsarBoltConfiguration();
-boltConf.setServiceUrl("pulsar://broker.messaging.usw.example.com:6650");
-boltConf.setTopic("persistent://my-property/usw/my-ns/my-topic2");
-boltConf.setTupleToMessageMapper(tupleToMessageMapper);
-
-// Create a Pulsar Bolt
-PulsarBolt bolt = new PulsarBolt(boltConf);
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-brokers.md b/site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-brokers.md
deleted file mode 100644
index f1deabf0b439a7..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-brokers.md
+++ /dev/null
@@ -1,286 +0,0 @@
----
-id: admin-api-brokers
-title: Managing Brokers
-sidebar_label: "Brokers"
-original_id: admin-api-brokers
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-> **Important**
->
-> This page only shows **some frequently used operations**.
->
-> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/).
->
-> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc.
->
-> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/). 
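-
-The Java snippets on this page invoke methods on an `admin` object. A minimal, hedged construction sketch (the web service URL is an assumed value for a local broker):
-
-```java
-
-// URL of a broker's HTTP admin service (assumed).
-PulsarAdmin admin = PulsarAdmin.builder()
-    .serviceHttpUrl("http://localhost:8080")
-    .build();
-
-```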
-
-Pulsar brokers consist of two components:
-
-1. An HTTP server exposing a {@inject: rest:REST:/} interface for administration and [topic](reference-terminology.md#topic) lookup.
-2. A dispatcher that handles all Pulsar [message](reference-terminology.md#message) transfers.
-
-[Brokers](reference-terminology.md#broker) can be managed via:
-
-* The `brokers` command of the [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/) tool
-* The `/admin/v2/brokers` endpoint of the admin {@inject: rest:REST:/} API
-* The `brokers` method of the `PulsarAdmin` object in the [Java API](client-libraries-java.md)
-
-In addition to being configurable when you start them up, brokers can also be [dynamically configured](#dynamic-broker-configuration).
-
-> See the [Configuration](reference-configuration.md#broker) page for a full listing of broker-specific configuration parameters.
-
-## Brokers resources
-
-### List active brokers
-
-Fetch all available active brokers that are serving traffic.
-
-````mdx-code-block
-
-
-```shell
-
-$ pulsar-admin brokers list use
-
-```
-
-```
-
-broker1.use.org.com:8080
-
-```
-
-
-
-
-{@inject: endpoint|GET|/admin/v2/brokers/:cluster|operation/getActiveBrokers?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.brokers().getActiveBrokers(clusterName)
-
-```
-
-
-
-
-````
-
-### Get the information of the leader broker
-
-Fetch the information of the leader broker, for example, the service URL.
-
-````mdx-code-block
-
-
-```shell
-
-$ pulsar-admin brokers leader-broker
-
-```
-
-```
-
-BrokerInfo(serviceUrl=broker1.use.org.com:8080)
-
-```
-
-
-
-
-{@inject: endpoint|GET|/admin/v2/brokers/leaderBroker?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.brokers().getLeaderBroker()
-
-```
-
-For details about the code above, see [here](https://github.com/apache/pulsar/blob/master/pulsar-client-admin/src/main/java/org/apache/pulsar/client/admin/internal/BrokersImpl.java#L80)
-
-
-
-
-````
-
-#### List of namespaces owned by a given broker
-
-This command finds all namespaces that are owned and served by a given broker.
-
-````mdx-code-block
-
-
-```shell
-
-$ pulsar-admin brokers namespaces use \
-  --url broker1.use.org.com:8080
-
-```
-
-```json
-
-{
-  "my-property/use/my-ns/0x00000000_0xffffffff": {
-    "broker_assignment": "shared",
-    "is_controlled": false,
-    "is_active": true
-  }
-}
-
-```
-
-
-
-
-{@inject: endpoint|GET|/admin/v2/brokers/:cluster/:broker/ownedNamespaces|operation/getOwnedNamespaes?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.brokers().getOwnedNamespaces(cluster,brokerUrl);
-
-```
-
-
-
-
-````
-
-### Dynamic broker configuration
-
-One way to configure a Pulsar [broker](reference-terminology.md#broker) is to supply a [configuration](reference-configuration.md#broker) when the broker is [started up](reference-cli-tools.md#pulsar-broker).
-
-But since all broker configuration in Pulsar is stored in ZooKeeper, configuration values can also be dynamically updated *while the broker is running*. When you update broker configuration dynamically, ZooKeeper notifies the broker of the change and the broker then overrides any existing configuration values.
-
-* The [`brokers`](reference-pulsar-admin.md#brokers) command for the [`pulsar-admin`](reference-pulsar-admin.md) tool has a variety of subcommands for manipulating a broker's configuration dynamically, such as [updating config values](#update-dynamic-configuration).
-* In the Pulsar admin {@inject: rest:REST:/} API, dynamic configuration is managed through the `/admin/v2/brokers/configuration` endpoint. - -### Update dynamic configuration - -````mdx-code-block - - - -The [`update-dynamic-config`](reference-pulsar-admin.md#brokers-update-dynamic-config) subcommand will update existing configuration. It takes two arguments: the name of the parameter and the new value using the `config` and `value` flag respectively. Here's an example for the [`brokerShutdownTimeoutMs`](reference-configuration.md#broker-brokerShutdownTimeoutMs) parameter: - -```shell - -$ pulsar-admin brokers update-dynamic-config --config brokerShutdownTimeoutMs --value 100 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/brokers/configuration/:configName/:configValue|operation/updateDynamicConfiguration?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().updateDynamicConfiguration(configName, configValue); - -``` - - - - -```` - -### List updated values - -Fetch a list of all potentially updatable configuration parameters. -````mdx-code-block - - - -```shell - -$ pulsar-admin brokers list-dynamic-config -brokerShutdownTimeoutMs - -``` - - - - -{@inject: endpoint|GET|/admin/v2/brokers/configuration|operation/getDynamicConfigurationName?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().getDynamicConfigurationNames(); - -``` - - - - -```` - -### List all - -Fetch a list of all parameters that have been dynamically updated. - -````mdx-code-block - - - -```shell - -$ pulsar-admin brokers get-all-dynamic-config -brokerShutdownTimeoutMs:100 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/brokers/configuration/values|operation/getAllDynamicConfigurations?version=@pulsar:version_number@} - - - - -```java - -admin.brokers().getAllDynamicConfigurations(); - -``` - - - - -```` diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-clusters.md b/site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-clusters.md deleted file mode 100644 index 1d0c5dc9786f5a..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-clusters.md +++ /dev/null @@ -1,318 +0,0 @@ ---- -id: admin-api-clusters -title: Managing Clusters -sidebar_label: "Clusters" -original_id: admin-api-clusters ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -> **Important** -> -> This page only shows **some frequently used operations**. -> -> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/) -> -> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. -> -> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/). - -Pulsar clusters consist of one or more Pulsar [brokers](reference-terminology.md#broker), one or more [BookKeeper](reference-terminology.md#bookkeeper) -servers (aka [bookies](reference-terminology.md#bookie)), and a [ZooKeeper](https://zookeeper.apache.org) cluster that provides configuration and coordination management. 
-
-Clusters can be managed via:
-
-* The `clusters` command of the [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/) tool
-* The `/admin/v2/clusters` endpoint of the admin {@inject: rest:REST:/} API
-* The `clusters` method of the `PulsarAdmin` object in the [Java API](client-libraries-java.md)
-
-## Clusters resources
-
-### Provision
-
-New clusters can be provisioned using the admin interface.
-
-> Please note that this operation requires superuser privileges.
-
-````mdx-code-block
-
-
-You can provision a new cluster using the [`create`](reference-pulsar-admin.md#clusters-create) subcommand. Here's an example:
-
-```shell
-
-$ pulsar-admin clusters create cluster-1 \
-  --url http://my-cluster.org.com:8080 \
-  --broker-url pulsar://my-cluster.org.com:6650
-
-```
-
-
-
-
-{@inject: endpoint|PUT|/admin/v2/clusters/:cluster|operation/createCluster?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-ClusterData clusterData = new ClusterData(
-        serviceUrl,
-        serviceUrlTls,
-        brokerServiceUrl,
-        brokerServiceUrlTls
-);
-admin.clusters().createCluster(clusterName, clusterData);
-
-```
-
-
-
-
-````
-
-### Initialize cluster metadata
-
-When provisioning a new cluster, you need to initialize that cluster's [metadata](concepts-architecture-overview.md#metadata-store). When initializing cluster metadata, you need to specify all of the following:
-
-* The name of the cluster
-* The local ZooKeeper connection string for the cluster
-* The configuration store connection string for the entire instance
-* The web service URL for the cluster
-* A broker service URL enabling interaction with the [brokers](reference-terminology.md#broker) in the cluster
-
-You must initialize cluster metadata *before* starting up any [brokers](admin-api-brokers.md) that will belong to the cluster.
-
-> **No cluster metadata initialization through the REST API or the Java admin API**
->
-> Unlike most other admin functions in Pulsar, cluster metadata initialization cannot be performed via the admin REST API
-> or the admin Java client, as metadata initialization involves communicating with ZooKeeper directly.
-> Instead, you can use the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool, in particular
-> the [`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata) command.
-
-Here's an example cluster metadata initialization command:
-
-```shell
-
-bin/pulsar initialize-cluster-metadata \
-  --cluster us-west \
-  --zookeeper zk1.us-west.example.com:2181 \
-  --configuration-store zk1.us-west.example.com:2184 \
-  --web-service-url http://pulsar.us-west.example.com:8080/ \
-  --web-service-url-tls https://pulsar.us-west.example.com:8443/ \
-  --broker-service-url pulsar://pulsar.us-west.example.com:6650/ \
-  --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651/
-
-```
-
-You'll need to use the `--*-tls` flags only if you're using [TLS authentication](security-tls-authentication.md) in your instance.
-
-### Get configuration
-
-You can fetch the [configuration](reference-configuration.md) for an existing cluster at any time.
-
-````mdx-code-block
-
-
-Use the [`get`](reference-pulsar-admin.md#clusters-get) subcommand and specify the name of the cluster.
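For a programmatic check, the same lookup is available on the Java admin client. A minimal sketch follows (the cluster name is illustrative, and `admin` is assumed to be an already-built `PulsarAdmin` instance); the getters mirror the JSON fields shown in the CLI example below:

```java
import org.apache.pulsar.common.policies.data.ClusterData;

// Fetch the cluster configuration and read individual fields off the result.
ClusterData data = admin.clusters().getCluster("cluster-1");
System.out.println(data.getServiceUrl());       // e.g. http://my-cluster.org.com:8080/
System.out.println(data.getBrokerServiceUrl()); // e.g. pulsar://my-cluster.org.com:6650/
```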
-Here's an example:
-
-```shell
-
-$ pulsar-admin clusters get cluster-1
-{
-  "serviceUrl": "http://my-cluster.org.com:8080/",
-  "serviceUrlTls": null,
-  "brokerServiceUrl": "pulsar://my-cluster.org.com:6650/",
-  "brokerServiceUrlTls": null,
-  "peerClusterNames": null
-}
-
-```
-
-
-
-
-{@inject: endpoint|GET|/admin/v2/clusters/:cluster|operation/getCluster?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.clusters().getCluster(clusterName);
-
-```
-
-
-
-
-````
-
-### Update
-
-You can update the configuration for an existing cluster at any time.
-
-````mdx-code-block
-
-
-Use the [`update`](reference-pulsar-admin.md#clusters-update) subcommand and specify new configuration values using flags.
-
-```shell
-
-$ pulsar-admin clusters update cluster-1 \
-  --url http://my-cluster.org.com:4081 \
-  --broker-url pulsar://my-cluster.org.com:3350
-
-```
-
-
-
-
-{@inject: endpoint|POST|/admin/v2/clusters/:cluster|operation/updateCluster?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-ClusterData clusterData = new ClusterData(
-        serviceUrl,
-        serviceUrlTls,
-        brokerServiceUrl,
-        brokerServiceUrlTls
-);
-admin.clusters().updateCluster(clusterName, clusterData);
-
-```
-
-
-
-
-````
-
-### Delete
-
-Clusters can be deleted from a Pulsar [instance](reference-terminology.md#instance).
-
-````mdx-code-block
-
-
-Use the [`delete`](reference-pulsar-admin.md#clusters-delete) subcommand and specify the name of the cluster.
-
-```
-
-$ pulsar-admin clusters delete cluster-1
-
-```
-
-
-
-
-{@inject: endpoint|DELETE|/admin/v2/clusters/:cluster|operation/deleteCluster?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.clusters().deleteCluster(clusterName);
-
-```
-
-
-
-
-````
-
-### List
-
-You can fetch a list of all clusters in a Pulsar [instance](reference-terminology.md#instance).
-
-````mdx-code-block
-
-
-Use the [`list`](reference-pulsar-admin.md#clusters-list) subcommand.
-
-```shell
-
-$ pulsar-admin clusters list
-cluster-1
-cluster-2
-
-```
-
-
-
-
-{@inject: endpoint|GET|/admin/v2/clusters|operation/getClusters?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.clusters().getClusters();
-
-```
-
-
-
-
-````
-
-### Update peer-cluster data
-
-Peer clusters can be configured for a given cluster in a Pulsar [instance](reference-terminology.md#instance).
-
-````mdx-code-block
-
-
-Use the [`update-peer-clusters`](reference-pulsar-admin.md#clusters-update-peer-clusters) subcommand and specify the list of peer-cluster names.
-
-```
-
-$ pulsar-admin clusters update-peer-clusters cluster-1 --peer-clusters cluster-2
-
-```
-
-
-
-
-{@inject: endpoint|POST|/admin/v2/clusters/:cluster/peers|operation/setPeerClusterNames?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.clusters().updatePeerClusterNames(clusterName, peerClusterList);
-
-```
-
-
-
-
-````
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-functions.md b/site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-functions.md
deleted file mode 100644
index 430151a40f3aab..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-functions.md
+++ /dev/null
@@ -1,830 +0,0 @@
----
-id: admin-api-functions
-title: Manage Functions
-sidebar_label: "Functions"
-original_id: admin-api-functions
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-> **Important**
->
-> This page only shows **some frequently used operations**.
-> -> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/) -> -> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. -> -> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/). - -**Pulsar Functions** are lightweight compute processes that - -* consume messages from one or more Pulsar topics -* apply a user-supplied processing logic to each message -* publish the results of the computation to another topic - -Functions can be managed via the following methods. - -Method | Description ----|--- -**Admin CLI** | The `functions` command of the [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/) tool. -**REST API** |The `/admin/v3/functions` endpoint of the admin {@inject: rest:REST:/} API. -**Java Admin API**| The `functions` method of the `PulsarAdmin` object in the [Java API](client-libraries-java.md). - -## Function resources - -You can perform the following operations on functions. - -### Create a function - -You can create a Pulsar function in cluster mode (deploy it on a Pulsar cluster) using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`create`](reference-pulsar-admin.md#functions-create) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions create \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --inputs test-input-topic \ - --output persistent://public/default/test-output-topic \ - --classname org.apache.pulsar.functions.api.examples.ExclamationFunction \ - --jar /examples/api-examples.jar - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName?version=@pulsar:version_number@} - - - - -```java - -FunctionConfig functionConfig = new FunctionConfig(); -functionConfig.setTenant(tenant); -functionConfig.setNamespace(namespace); -functionConfig.setName(functionName); -functionConfig.setRuntime(FunctionConfig.Runtime.JAVA); -functionConfig.setParallelism(1); -functionConfig.setClassName("org.apache.pulsar.functions.api.examples.ExclamationFunction"); -functionConfig.setProcessingGuarantees(FunctionConfig.ProcessingGuarantees.ATLEAST_ONCE); -functionConfig.setTopicsPattern(sourceTopicPattern); -functionConfig.setSubName(subscriptionName); -functionConfig.setAutoAck(true); -functionConfig.setOutput(sinkTopic); -admin.functions().createFunction(functionConfig, fileName); - -``` - - - - -```` - -### Update a function - -You can update a Pulsar function that has been deployed to a Pulsar cluster using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`update`](reference-pulsar-admin.md#functions-update) subcommand. 
- -**Example** - -```shell - -$ pulsar-admin functions update \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --output persistent://public/default/update-output-topic \ - # other options - -``` - - - - -{@inject: endpoint|PUT|/admin/v3/functions/:tenant/:namespace/:functionName?version=@pulsar:version_number@} - - - - -```java - -FunctionConfig functionConfig = new FunctionConfig(); -functionConfig.setTenant(tenant); -functionConfig.setNamespace(namespace); -functionConfig.setName(functionName); -functionConfig.setRuntime(FunctionConfig.Runtime.JAVA); -functionConfig.setParallelism(1); -functionConfig.setClassName("org.apache.pulsar.functions.api.examples.ExclamationFunction"); -UpdateOptions updateOptions = new UpdateOptions(); -updateOptions.setUpdateAuthData(updateAuthData); -admin.functions().updateFunction(functionConfig, userCodeFile, updateOptions); - -``` - - - - -```` - -### Start an instance of a function - -You can start a stopped function instance with `instance-id` using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`start`](reference-pulsar-admin.md#functions-start) subcommand. - -```shell - -$ pulsar-admin functions start \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/start?version=@pulsar:version_number@} - - - - -```java - -admin.functions().startFunction(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Start all instances of a function - -You can start all stopped function instances using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`start`](reference-pulsar-admin.md#functions-start) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions start \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/start?version=@pulsar:version_number@} - - - - -```java - -admin.functions().startFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### Stop an instance of a function - -You can stop a function instance with `instance-id` using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`stop`](reference-pulsar-admin.md#functions-stop) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions stop \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/stop?version=@pulsar:version_number@} - - - - -```java - -admin.functions().stopFunction(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Stop all instances of a function - -You can stop all function instances using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`stop`](reference-pulsar-admin.md#functions-stop) subcommand. 
- -**Example** - -```shell - -$ pulsar-admin functions stop \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/stop?version=@pulsar:version_number@} - - - - -```java - -admin.functions().stopFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### Restart an instance of a function - -Restart a function instance with `instance-id` using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`restart`](reference-pulsar-admin.md#functions-restart) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions restart \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/restart?version=@pulsar:version_number@} - - - - -```java - -admin.functions().restartFunction(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Restart all instances of a function - -You can restart all function instances using Admin CLI, REST API or Java admin API. - -````mdx-code-block - - - -Use the [`restart`](reference-pulsar-admin.md#functions-restart) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions restart \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - -``` - - - - -{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/restart?version=@pulsar:version_number@} - - - - -```java - -admin.functions().restartFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### List all functions - -You can list all Pulsar functions running under a specific tenant and namespace using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`list`](reference-pulsar-admin.md#functions-list) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions list \ - --tenant public \ - --namespace default - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctions(tenant, namespace); - -``` - - - - -```` - -### Delete a function - -You can delete a Pulsar function that is running on a Pulsar cluster using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`delete`](reference-pulsar-admin.md#functions-delete) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions delete \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) - -``` - - - - -{@inject: endpoint|DELETE|/admin/v3/functions/:tenant/:namespace/:functionName?version=@pulsar:version_number@} - - - - -```java - -admin.functions().deleteFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### Get info about a function - -You can get information about a Pulsar function currently running in cluster mode using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`get`](reference-pulsar-admin.md#functions-get) subcommand. 
- -**Example** - -```shell - -$ pulsar-admin functions get \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunction(tenant, namespace, functionName); - -``` - - - - -```` - -### Get status of an instance of a function - -You can get the current status of a Pulsar function instance with `instance-id` using Admin CLI, REST API or Java Admin API. -````mdx-code-block - - - -Use the [`status`](reference-pulsar-admin.md#functions-status) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions status \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/status?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionStatus(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Get status of all instances of a function - -You can get the current status of a Pulsar function instance using Admin CLI, REST API or Java Admin API. - -````mdx-code-block - - - -Use the [`status`](reference-pulsar-admin.md#functions-status) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions status \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/status?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionStatus(tenant, namespace, functionName); - -``` - - - - -```` - -### Get stats of an instance of a function - -You can get the current stats of a Pulsar Function instance with `instance-id` using Admin CLI, REST API or Java admin API. -````mdx-code-block - - - -Use the [`stats`](reference-pulsar-admin.md#functions-stats) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions stats \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) \ - --instance-id 1 - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/stats?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionStats(tenant, namespace, functionName, Integer.parseInt(instanceId)); - -``` - - - - -```` - -### Get stats of all instances of a function - -You can get the current stats of a Pulsar function using Admin CLI, REST API or Java admin API. - -````mdx-code-block - - - -Use the [`stats`](reference-pulsar-admin.md#functions-stats) subcommand. - -**Example** - -```shell - -$ pulsar-admin functions stats \ - --tenant public \ - --namespace default \ - --name (the name of Pulsar Functions) - -``` - - - - -{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/stats?version=@pulsar:version_number@} - - - - -```java - -admin.functions().getFunctionStats(tenant, namespace, functionName); - -``` - - - - -```` - -### Trigger a function - -You can trigger a specified Pulsar function with a supplied value using Admin CLI, REST API or Java admin API. - -````mdx-code-block - - - -Use the [`trigger`](reference-pulsar-admin.md#functions-trigger) subcommand. 
-
-**Example**
-
-```shell
-
-$ pulsar-admin functions trigger \
-  --tenant public \
-  --namespace default \
-  --name (the name of Pulsar Functions) \
-  --topic (the name of input topic) \
-  --trigger-value \"hello pulsar\"
-  # or --trigger-file (the path of trigger file)
-
-```
-
-
-
-
-{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/trigger?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.functions().triggerFunction(tenant, namespace, functionName, topic, triggerValue, triggerFile);
-
-```
-
-
-
-
-````
-
-### Put state associated with a function
-
-You can put the state associated with a Pulsar function using Admin CLI, REST API or Java admin API.
-
-````mdx-code-block
-
-
-Use the [`putstate`](reference-pulsar-admin.md#functions-putstate) subcommand.
-
-**Example**
-
-```shell
-
-$ pulsar-admin functions putstate \
-  --tenant public \
-  --namespace default \
-  --name (the name of Pulsar Functions) \
-  --state "{\"key\":\"pulsar\", \"stringValue\":\"hello pulsar\"}"
-
-```
-
-
-
-
-{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/state/:key?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-TypeReference<FunctionState> typeRef = new TypeReference<FunctionState>() {};
-FunctionState stateRepr = ObjectMapperFactory.getThreadLocal().readValue(state, typeRef);
-admin.functions().putFunctionState(tenant, namespace, functionName, stateRepr);
-
-```
-
-
-
-
-````
-
-### Fetch state associated with a function
-
-You can fetch the current state associated with a Pulsar function using Admin CLI, REST API or Java admin API.
-
-````mdx-code-block
-
-
-Use the [`querystate`](reference-pulsar-admin.md#functions-querystate) subcommand.
-
-**Example**
-
-```shell
-
-$ pulsar-admin functions querystate \
-  --tenant public \
-  --namespace default \
-  --name (the name of Pulsar Functions) \
-  --key (the key of state)
-
-```
-
-
-
-
-{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/state/:key?version=@pulsar:version_number@}
-
-
-
-
-```java
-
-admin.functions().getFunctionState(tenant, namespace, functionName, key);
-
-```
-
-
-
-
-````
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-namespaces.md b/site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-namespaces.md
deleted file mode 100644
index 9b3945f4db0596..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-namespaces.md
+++ /dev/null
@@ -1,1401 +0,0 @@
----
-id: admin-api-namespaces
-title: Managing Namespaces
-sidebar_label: "Namespaces"
-original_id: admin-api-namespaces
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-> **Important**
->
-> This page only shows **some frequently used operations**.
->
-> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/).
->
-> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc.
->
-> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/).
-
-Pulsar [namespaces](reference-terminology.md#namespace) are logical groupings of [topics](reference-terminology.md#topic).
- -Namespaces can be managed via: - -* The `namespaces` command of the [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/) tool -* The `/admin/v2/namespaces` endpoint of the admin {@inject: rest:REST:/} API -* The `namespaces` method of the `PulsarAdmin` object in the [Java API](client-libraries-java.md) - -## Namespaces resources - -### Create namespaces - -You can create new namespaces under a given [tenant](reference-terminology.md#tenant). - -````mdx-code-block - - - -Use the [`create`](reference-pulsar-admin.md#namespaces-create) subcommand and specify the namespace by name: - -```shell - -$ pulsar-admin namespaces create test-tenant/test-namespace - -``` - - - - -``` - -{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace|operation/createNamespace?version=@pulsar:version_number@} - -``` - - - - -```java - -admin.namespaces().createNamespace(namespace); - -``` - - - - -```` - -### Get policies - -You can fetch the current policies associated with a namespace at any time. - -````mdx-code-block - - - -Use the [`policies`](reference-pulsar-admin.md#namespaces-policies) subcommand and specify the namespace: - -```shell - -$ pulsar-admin namespaces policies test-tenant/test-namespace -{ - "auth_policies": { - "namespace_auth": {}, - "destination_auth": {} - }, - "replication_clusters": [], - "bundles_activated": true, - "bundles": { - "boundaries": [ - "0x00000000", - "0xffffffff" - ], - "numBundles": 1 - }, - "backlog_quota_map": {}, - "persistence": null, - "latency_stats_sample_rate": {}, - "message_ttl_in_seconds": 0, - "retention_policies": null, - "deleted": false -} - -``` - - - - -``` - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace|operation/getPolicies?version=@pulsar:version_number@} - -``` - - - - -```java - -admin.namespaces().getPolicies(namespace); - -``` - - - - -```` - -### List namespaces - -You can list all namespaces within a given Pulsar [tenant](reference-terminology.md#tenant). - -````mdx-code-block - - - -Use the [`list`](reference-pulsar-admin.md#namespaces-list) subcommand and specify the tenant: - -```shell - -$ pulsar-admin namespaces list test-tenant -test-tenant/ns1 -test-tenant/ns2 - -``` - - - - -``` - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant|operation/getTenantNamespaces?version=@pulsar:version_number@} - -``` - - - - -```java - -admin.namespaces().getNamespaces(tenant); - -``` - - - - -```` - -### Delete namespaces - -You can delete existing namespaces from a tenant. - -````mdx-code-block - - - -Use the [`delete`](reference-pulsar-admin.md#namespaces-delete) subcommand and specify the namespace: - -```shell - -$ pulsar-admin namespaces delete test-tenant/ns1 - -``` - - - - -``` - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace|operation/deleteNamespace?version=@pulsar:version_number@} - -``` - - - - -```java - -admin.namespaces().deleteNamespace(namespace); - -``` - - - - -```` - -### Configure replication clusters - -#### Set replication cluster - -You can set replication clusters for a namespace to enable Pulsar to internally replicate the published messages from one colocation facility to another. 
-
-````mdx-code-block
-
-
-```
-
-$ pulsar-admin namespaces set-clusters test-tenant/ns1 \
-  --clusters cl1
-
-```
-
-
-
-
-```
-
-{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/replication|operation/setNamespaceReplicationClusters?version=@pulsar:version_number@}
-
-```
-
-
-
-
-```java
-
-admin.namespaces().setNamespaceReplicationClusters(namespace, clusters);
-
-```
-
-
-
-
-````
-
-#### Get replication cluster
-
-You can get the list of replication clusters for a given namespace.
-
-````mdx-code-block
-
-
-```
-
-$ pulsar-admin namespaces get-clusters test-tenant/cl1/ns1
-
-```
-
-```
-
-cl2
-
-```
-
-
-
-
-```
-
-{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/replication|operation/getNamespaceReplicationClusters?version=@pulsar:version_number@}
-
-```
-
-
-
-
-```java
-
-admin.namespaces().getNamespaceReplicationClusters(namespace)
-
-```
-
-
-
-
-````
-
-### Configure backlog quota policies
-
-#### Set backlog quota policies
-
-A backlog quota allows the broker to restrict the bandwidth/storage of a namespace once it reaches a configured threshold. An admin can set the limit and choose one of the following actions to take once the limit is reached:
-
-  1. producer_request_hold: the broker holds, but does not persist, produce request payloads
-
-  2. producer_exception: the broker disconnects the client by throwing an exception
-
-  3. consumer_backlog_eviction: the broker starts discarding backlog messages
-
-The backlog quota restriction is applied by setting the backlog-quota-type to destination_storage.
-
-````mdx-code-block
-
-
-```
-
-$ pulsar-admin namespaces set-backlog-quota --limit 10G --limitTime 36000 --policy producer_request_hold test-tenant/ns1
-
-```
-
-
-
-
-```
-
-{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/setBacklogQuota?version=@pulsar:version_number@}
-
-```
-
-
-
-
-```java
-
-admin.namespaces().setBacklogQuota(namespace, new BacklogQuota(limit, limitTime, policy))
-
-```
-
-
-
-
-````
-
-#### Get backlog quota policies
-
-You can get the configured backlog quota for a given namespace.
-
-````mdx-code-block
-
-
-```
-
-$ pulsar-admin namespaces get-backlog-quotas test-tenant/ns1
-
-```
-
-```json
-
-{
-  "destination_storage": {
-    "limit": 10,
-    "policy": "producer_request_hold"
-  }
-}
-
-```
-
-
-
-
-```
-
-{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/backlogQuotaMap|operation/getBacklogQuotaMap?version=@pulsar:version_number@}
-
-```
-
-
-
-
-```java
-
-admin.namespaces().getBacklogQuotaMap(namespace);
-
-```
-
-
-
-
-````
-
-#### Remove backlog quota policies
-
-You can remove backlog quota policies for a given namespace.
-
-````mdx-code-block
-
-
-```
-
-$ pulsar-admin namespaces remove-backlog-quota test-tenant/ns1
-
-```
-
-
-
-
-```
-
-{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/removeBacklogQuota?version=@pulsar:version_number@}
-
-```
-
-
-
-
-```java
-
-admin.namespaces().removeBacklogQuota(namespace, backlogQuotaType)
-
-```
-
-
-
-
-````
-
-### Configure persistence policies
-
-#### Set persistence policies
-
-Persistence policies allow users to configure the persistence level for all topic messages under a given namespace.
- - - Bookkeeper-ack-quorum: Number of acks (guaranteed copies) to wait for each entry, default: 0 - - - Bookkeeper-ensemble: Number of bookies to use for a topic, default: 0 - - - Bookkeeper-write-quorum: How many writes to make of each entry, default: 0 - - - Ml-mark-delete-max-rate: Throttling rate of mark-delete operation (0 means no throttle), default: 0.0 - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-persistence --bookkeeper-ack-quorum 2 --bookkeeper-ensemble 3 --bookkeeper-write-quorum 2 --ml-mark-delete-max-rate 0 test-tenant/ns1 - -``` - - - - -``` - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/setPersistence?version=@pulsar:version_number@} - -``` - - - - -```java - -admin.namespaces().setPersistence(namespace,new PersistencePolicies(bookkeeperEnsemble, bookkeeperWriteQuorum,bookkeeperAckQuorum,managedLedgerMaxMarkDeleteRate)) - -``` - - - - -```` - -#### Get persistence policies - -You can get the configured persistence policies of a given namespace. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-persistence test-tenant/ns1 - -``` - -```json - -{ - "bookkeeperEnsemble": 3, - "bookkeeperWriteQuorum": 2, - "bookkeeperAckQuorum": 2, - "managedLedgerMaxMarkDeleteRate": 0 -} - -``` - - - - -``` - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/getPersistence?version=@pulsar:version_number@} - -``` - - - - -```java - -admin.namespaces().getPersistence(namespace) - -``` - - - - -```` - -### Configure namespace bundles - -#### Unload namespace bundles - -The namespace bundle is a virtual group of topics which belong to the same namespace. If the broker gets overloaded with the number of bundles, this command can help unload a bundle from that broker, so it can be served by some other less-loaded brokers. The namespace bundle ID ranges from 0x00000000 to 0xffffffff. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces unload --bundle 0x00000000_0xffffffff test-tenant/ns1 - -``` - - - - -``` - -{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace/:bundle/unload|operation/unloadNamespaceBundle?version=@pulsar:version_number@} - -``` - - - - -```java - -admin.namespaces().unloadNamespaceBundle(namespace, bundle) - -``` - - - - -```` - -#### Split namespace bundles - -One namespace bundle can contain multiple topics but can be served by only one broker. If a single bundle is creating an excessive load on a broker, an admin can split the bundle using the command below, permitting one or more of the new bundles to be unloaded, thus balancing the load across the brokers. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces split-bundle --bundle 0x00000000_0xffffffff test-tenant/ns1 - -``` - - - - -``` - -{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace/:bundle/split|operation/splitNamespaceBundle?version=@pulsar:version_number@} - -``` - - - - -```java - -admin.namespaces().splitNamespaceBundle(namespace, bundle) - -``` - - - - -```` - -### Configure message TTL - -#### Set message-ttl - -You can configure the time to live (in seconds) duration for messages. In the example below, the message-ttl is set as 100s. 
-
-````mdx-code-block
-
-
-```
-
-$ pulsar-admin namespaces set-message-ttl --messageTTL 100 test-tenant/ns1
-
-```
-
-
-
-
-```
-
-{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/setNamespaceMessageTTL?version=@pulsar:version_number@}
-
-```
-
-
-
-
-```java
-
-admin.namespaces().setNamespaceMessageTTL(namespace, messageTTL)
-
-```
-
-
-
-
-````
-
-#### Get message-ttl
-
-When the message-ttl for a namespace is set, you can use the command below to get the configured value. This example continues the `set-message-ttl` example above, so the returned value is 100 (seconds).
-
-````mdx-code-block
-
-
-```
-
-$ pulsar-admin namespaces get-message-ttl test-tenant/ns1
-
-```
-
-```
-
-100
-
-```
-
-
-
-
-```
-
-{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/getNamespaceMessageTTL?version=@pulsar:version_number@}
-
-```
-
-```
-
-100
-
-```
-
-
-
-
-```java
-
-admin.namespaces().getNamespaceMessageTTL(namespace)
-
-```
-
-```
-
-100
-
-```
-
-
-
-
-````
-
-#### Remove message-ttl
-
-Remove the message TTL of a configured namespace.
-
-````mdx-code-block
-
-
-```
-
-$ pulsar-admin namespaces remove-message-ttl test-tenant/ns1
-
-```
-
-
-
-
-```
-
-{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/removeNamespaceMessageTTL?version=@pulsar:version_number@}
-
-```
-
-
-
-
-```java
-
-admin.namespaces().removeNamespaceMessageTTL(namespace)
-
-```
-
-
-
-
-````
-
-
-### Clear backlog
-
-#### Clear namespace backlog
-
-This command clears the message backlog for all the topics that belong to a specific namespace. You can also clear the backlog for a specific subscription.
-
-````mdx-code-block
-
-
-```
-
-$ pulsar-admin namespaces clear-backlog --sub my-subscription test-tenant/ns1
-
-```
-
-
-
-
-```
-
-{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/clearBacklog|operation/clearNamespaceBacklogForSubscription?version=@pulsar:version_number@}
-
-```
-
-
-
-
-```java
-
-admin.namespaces().clearNamespaceBacklogForSubscription(namespace, subscription)
-
-```
-
-
-
-
-````
-
-#### Clear bundle backlog
-
-This command clears the message backlog for all the topics that belong to a specific NamespaceBundle. You can also clear the backlog for a specific subscription.
-
-````mdx-code-block
-
-
-```
-
-$ pulsar-admin namespaces clear-backlog --bundle 0x00000000_0xffffffff --sub my-subscription test-tenant/ns1
-
-```
-
-
-
-
-```
-
-{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/:bundle/clearBacklog|operation/clearNamespaceBundleBacklogForSubscription?version=@pulsar:version_number@}
-
-```
-
-
-
-
-```java
-
-admin.namespaces().clearNamespaceBundleBacklogForSubscription(namespace, bundle, subscription)
-
-```
-
-
-
-
-````
-
-### Configure retention
-
-#### Set retention
-
-Each namespace contains multiple topics, and each topic's retention size (storage size) should not exceed a specific threshold, or its messages should be stored only for a certain period. This command configures the retention size and time of topics in a given namespace.
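For orientation on the values used below: in the Java admin API, `RetentionPolicies` takes the retention time in minutes and the size in MB, and `-1` in either field is commonly used for an unlimited dimension. A minimal sketch (the namespace name is illustrative, and `admin` is assumed to be an already-built `PulsarAdmin` instance):

```java
import org.apache.pulsar.common.policies.data.RetentionPolicies;

// Keep acknowledged messages for up to 7 days or up to 512 MB:
admin.namespaces().setRetention("test-tenant/ns1",
        new RetentionPolicies(7 * 24 * 60, 512));

// -1/-1 requests unbounded retention; storage then grows without limit:
admin.namespaces().setRetention("test-tenant/ns1",
        new RetentionPolicies(-1, -1));
```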
- -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-retention --size 100 --time 10 test-tenant/ns1 - -``` - - - - -``` - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/retention|operation/setRetention?version=@pulsar:version_number@} - -``` - - - - -```java - -admin.namespaces().setRetention(namespace, new RetentionPolicies(retentionTimeInMin, retentionSizeInMB)) - -``` - - - - -```` - -#### Get retention - -It shows retention information of a given namespace. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-retention test-tenant/ns1 - -``` - -```json - -{ - "retentionTimeInMinutes": 10, - "retentionSizeInMB": 100 -} - -``` - - - - -``` - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/retention|operation/getRetention?version=@pulsar:version_number@} - -``` - - - - -```java - -admin.namespaces().getRetention(namespace) - -``` - - - - -```` - -### Configure dispatch throttling for topics - -#### Set dispatch throttling for topics - -It sets message dispatch rate for all the topics under a given namespace. -The dispatch rate can be restricted by the number of messages per X seconds (`msg-dispatch-rate`) or by the number of message-bytes per X second (`byte-dispatch-rate`). -dispatch rate is in second and it can be configured with `dispatch-rate-period`. Default value of `msg-dispatch-rate` and `byte-dispatch-rate` is -1 which -disables the throttling. - -:::note - -- If neither `clusterDispatchRate` nor `topicDispatchRate` is configured, dispatch throttling is disabled. -- If `topicDispatchRate` is not configured, `clusterDispatchRate` takes effect. -- If `topicDispatchRate` is configured, `topicDispatchRate` takes effect. - -::: - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-dispatch-rate test-tenant/ns1 \ - --msg-dispatch-rate 1000 \ - --byte-dispatch-rate 1048576 \ - --dispatch-rate-period 1 - -``` - - - - -``` - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/dispatchRate|operation/setDispatchRate?version=@pulsar:version_number@} - -``` - - - - -```java - -admin.namespaces().setDispatchRate(namespace, new DispatchRate(1000, 1048576, 1)) - -``` - - - - -```` - -#### Get configured message-rate for topics - -It shows configured message-rate for the namespace (topics under this namespace can dispatch this many messages per second) - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-dispatch-rate test-tenant/ns1 - -``` - -```json - -{ - "dispatchThrottlingRatePerTopicInMsg" : 1000, - "dispatchThrottlingRatePerTopicInByte" : 1048576, - "ratePeriodInSecond" : 1 -} - -``` - - - - -``` - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/dispatchRate|operation/getDispatchRate?version=@pulsar:version_number@} - -``` - - - - -```java - -admin.namespaces().getDispatchRate(namespace) - -``` - - - - -```` - -### Configure dispatch throttling for subscription - -#### Set dispatch throttling for subscription - -It sets message dispatch rate for all the subscription of topics under a given namespace. -The dispatch rate can be restricted by the number of messages per X seconds (`msg-dispatch-rate`) or by the number of message-bytes per X second (`byte-dispatch-rate`). -dispatch rate is in second and it can be configured with `dispatch-rate-period`. Default value of `msg-dispatch-rate` and `byte-dispatch-rate` is -1 which -disables the throttling. 
- -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-subscription-dispatch-rate test-tenant/ns1 \ - --msg-dispatch-rate 1000 \ - --byte-dispatch-rate 1048576 \ - --dispatch-rate-period 1 - -``` - - - - -``` - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/subscriptionDispatchRate|operation/setDispatchRate?version=@pulsar:version_number@} - -``` - - - - -```java - -admin.namespaces().setSubscriptionDispatchRate(namespace, new DispatchRate(1000, 1048576, 1)) - -``` - - - - -```` - -#### Get configured message-rate for subscription - -It shows configured message-rate for the namespace (topics under this namespace can dispatch this many messages per second) - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-subscription-dispatch-rate test-tenant/ns1 - -``` - -```json - -{ - "dispatchThrottlingRatePerTopicInMsg" : 1000, - "dispatchThrottlingRatePerTopicInByte" : 1048576, - "ratePeriodInSecond" : 1 -} - -``` - - - - -``` - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/subscriptionDispatchRate|operation/getDispatchRate?version=@pulsar:version_number@} - -``` - - - - -```java - -admin.namespaces().getSubscriptionDispatchRate(namespace) - -``` - - - - -```` - -### Configure dispatch throttling for replicator - -#### Set dispatch throttling for replicator - -It sets message dispatch rate for all the replicator between replication clusters under a given namespace. -The dispatch rate can be restricted by the number of messages per X seconds (`msg-dispatch-rate`) or by the number of message-bytes per X second (`byte-dispatch-rate`). -dispatch rate is in second and it can be configured with `dispatch-rate-period`. Default value of `msg-dispatch-rate` and `byte-dispatch-rate` is -1 which -disables the throttling. 
- -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-replicator-dispatch-rate test-tenant/ns1 \ - --msg-dispatch-rate 1000 \ - --byte-dispatch-rate 1048576 \ - --dispatch-rate-period 1 - -``` - - - - -``` - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/replicatorDispatchRate|operation/setDispatchRate?version=@pulsar:version_number@} - -``` - - - - -```java - -admin.namespaces().setReplicatorDispatchRate(namespace, new DispatchRate(1000, 1048576, 1)) - -``` - - - - -```` - -#### Get configured message-rate for replicator - -It shows configured message-rate for the namespace (topics under this namespace can dispatch this many messages per second) - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-replicator-dispatch-rate test-tenant/ns1 - -``` - -```json - -{ - "dispatchThrottlingRatePerTopicInMsg" : 1000, - "dispatchThrottlingRatePerTopicInByte" : 1048576, - "ratePeriodInSecond" : 1 -} - -``` - - - - -``` - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/replicatorDispatchRate|operation/getDispatchRate?version=@pulsar:version_number@} - -``` - - - - -```java - -admin.namespaces().getReplicatorDispatchRate(namespace) - -``` - - - - -```` - -### Configure deduplication snapshot interval - -#### Get deduplication snapshot interval - -It shows configured `deduplicationSnapshotInterval` for a namespace (Each topic under the namespace will take a deduplication snapshot according to this interval) - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces get-deduplication-snapshot-interval test-tenant/ns1 - -``` - - - - -``` - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/deduplicationSnapshotInterval|operation/getDeduplicationSnapshotInterval?version=@pulsar:version_number@} - -``` - - - - -```java - -admin.namespaces().getDeduplicationSnapshotInterval(namespace) - -``` - - - - -```` - -#### Set deduplication snapshot interval - -Set configured `deduplicationSnapshotInterval` for a namespace. Each topic under the namespace will take a deduplication snapshot according to this interval. -`brokerDeduplicationEnabled` must be set to `true` for this property to take effect. - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces set-deduplication-snapshot-interval test-tenant/ns1 --interval 1000 - -``` - - - - -``` - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/deduplicationSnapshotInterval|operation/setDeduplicationSnapshotInterval?version=@pulsar:version_number@} - -``` - -```json - -{ - "interval": 1000 -} - -``` - - - - -```java - -admin.namespaces().setDeduplicationSnapshotInterval(namespace, 1000) - -``` - - - - -```` - -#### Remove deduplication snapshot interval - -Remove configured `deduplicationSnapshotInterval` of a namespace (Each topic under the namespace will take a deduplication snapshot according to this interval) - -````mdx-code-block - - - -``` - -$ pulsar-admin namespaces remove-deduplication-snapshot-interval test-tenant/ns1 - -``` - - - - -``` - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/deduplicationSnapshotInterval|operation/deleteDeduplicationSnapshotInterval?version=@pulsar:version_number@} - -``` - - - - -```java - -admin.namespaces().removeDeduplicationSnapshotInterval(namespace) - -``` - - - - -```` - -### Namespace isolation - -You can use the [Pulsar isolation policy](administration-isolation.md) to allocate resources (broker and bookie) for a namespace. 
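As a rough illustration of what such a policy can look like through the Java admin client: the builder-style `NamespaceIsolationData` shown here is an assumption based on recent client versions, and the cluster, policy, and broker names are illustrative, so verify against your Pulsar version. `admin` is assumed to be an already-built `PulsarAdmin` instance.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

import org.apache.pulsar.common.policies.data.AutoFailoverPolicyData;
import org.apache.pulsar.common.policies.data.AutoFailoverPolicyType;
import org.apache.pulsar.common.policies.data.NamespaceIsolationData;

// Pin the matching namespaces to the "broker1-*" brokers, with "broker2-*"
// as secondaries, failing over when fewer than 1 primary broker is available.
Map<String, String> params = new HashMap<>();
params.put("min_limit", "1");
params.put("usage_threshold", "100");

NamespaceIsolationData policy = NamespaceIsolationData.builder()
        .namespaces(Collections.singletonList("test-tenant/ns1"))
        .primary(Collections.singletonList("broker1-.*"))
        .secondary(Collections.singletonList("broker2-.*"))
        .autoFailoverPolicy(AutoFailoverPolicyData.builder()
                .policyType(AutoFailoverPolicyType.min_available)
                .parameters(params)
                .build())
        .build();

admin.clusters().createNamespaceIsolationPolicy("my-cluster", "my-isolation-policy", policy);
```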
- -### Unload namespaces from a broker - -You can unload a namespace, or a [namespace bundle](reference-terminology.md#namespace-bundle), from the Pulsar [broker](reference-terminology.md#broker) that is currently responsible for it. - -#### pulsar-admin - -Use the [`unload`](reference-pulsar-admin.md#unload) subcommand of the [`namespaces`](reference-pulsar-admin.md#namespaces) command. - -````mdx-code-block - - - -```shell - -$ pulsar-admin namespaces unload my-tenant/my-ns - -``` - - - - -``` - -{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace/unload|operation/unloadNamespace?version=@pulsar:version_number@} - -``` - - - - -```java - -admin.namespaces().unload(namespace) - -``` - - - - -```` diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-non-partitioned-topics.md b/site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-non-partitioned-topics.md deleted file mode 100644 index e6347bb8c363a1..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-non-partitioned-topics.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -id: admin-api-non-partitioned-topics -title: Managing non-partitioned topics -sidebar_label: "Non-partitioned topics" -original_id: admin-api-non-partitioned-topics ---- - -For details of the content, refer to [manage topics](admin-api-topics.md). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-non-persistent-topics.md b/site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-non-persistent-topics.md deleted file mode 100644 index 3126a6494c7153..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-non-persistent-topics.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -id: admin-api-non-persistent-topics -title: Managing non-persistent topics -sidebar_label: "Non-Persistent topics" -original_id: admin-api-non-persistent-topics ---- - -For details of the content, refer to [manage topics](admin-api-topics.md). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-overview.md b/site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-overview.md deleted file mode 100644 index 1154c625aff7b5..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-overview.md +++ /dev/null @@ -1,144 +0,0 @@ ---- -id: admin-api-overview -title: Pulsar admin interface -sidebar_label: "Overview" -original_id: admin-api-overview ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -The Pulsar admin interface enables you to manage all important entities in a Pulsar instance, such as tenants, topics, and namespaces. - -You can interact with the admin interface via: - -- The `pulsar-admin` CLI tool, which is available in the `bin` folder of your Pulsar installation: - - ```shell - - bin/pulsar-admin - - ``` - - > **Important** - > - > For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/). - -- HTTP calls, which are made against the admin {@inject: rest:REST:/} API provided by Pulsar brokers. For some RESTful APIs, they might be redirected to the owner brokers for serving with [`307 Temporary Redirect`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/307), hence the HTTP callers should handle `307 Temporary Redirect`. 
If you use `curl` commands, you should specify `-L` to handle redirections.
-
-  > **Important**
-  >
-  > For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc.
-
-- A Java client interface.
-
-  > **Important**
-  >
-  > For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/).
-
-> **The REST API is the admin interface**. Both the `pulsar-admin` CLI tool and the Java client use the REST API. If you implement your own admin interface client, you should use the REST API.
-
-## Admin setup
-
-Each of the three admin interfaces (the `pulsar-admin` CLI tool, the {@inject: rest:REST:/} API, and the [Java admin API](/api/admin)) requires some special setup if you have enabled authentication in your Pulsar instance.
-
-````mdx-code-block
-
-
-If you have enabled authentication, you need to provide an auth configuration to use the `pulsar-admin` tool. By default, the configuration for the `pulsar-admin` tool is in the [`conf/client.conf`](reference-configuration.md#client) file. The following are the available parameters:
-
-|Name|Description|Default|
-|----|-----------|-------|
-|webServiceUrl|The web URL for the cluster.|http://localhost:8080/|
-|brokerServiceUrl|The Pulsar protocol URL for the cluster.|pulsar://localhost:6650/|
-|authPlugin|The authentication plugin.| |
-|authParams|The authentication parameters for the cluster, as a comma-separated string.| |
-|useTls|Whether or not TLS authentication will be enforced in the cluster.|false|
-|tlsAllowInsecureConnection|Accept untrusted TLS certificate from client.|false|
-|tlsTrustCertsFilePath|Path for the trusted TLS certificate file.| |
-
-
-
-
-You can find details for the REST API exposed by Pulsar brokers in this {@inject: rest:document:/}.
-
-
-
-
-To use the Java admin API, instantiate a {@inject: javadoc:PulsarAdmin:/admin/org/apache/pulsar/client/admin/PulsarAdmin} object, and specify a URL for a Pulsar broker and a {@inject: javadoc:PulsarAdminBuilder:/admin/org/apache/pulsar/client/admin/PulsarAdminBuilder}. The following is a minimal example using `localhost`:
-
-```java
-
-String url = "http://localhost:8080";
-// Pass auth-plugin class fully-qualified name if Pulsar-security enabled
-String authPluginClassName = "com.org.MyAuthPluginClass";
-// Pass auth-param if auth-plugin class requires it
-String authParams = "param1=value1";
-boolean useTls = false;
-boolean tlsAllowInsecureConnection = false;
-String tlsTrustCertsFilePath = null;
-PulsarAdmin admin = PulsarAdmin.builder()
-        .authentication(authPluginClassName, authParams)
-        .serviceHttpUrl(url)
-        .tlsTrustCertsFilePath(tlsTrustCertsFilePath)
-        .allowTlsInsecureConnection(tlsAllowInsecureConnection)
-        .build();
-
-```
-
-If you use multiple brokers, you can supply a multi-host service URL, just as you would for a Pulsar client.
For example, - -```java - -String url = "http://localhost:8080,localhost:8081,localhost:8082"; -// Pass auth-plugin class fully-qualified name if Pulsar-security enabled -String authPluginClassName = "com.org.MyAuthPluginClass"; -// Pass auth-param if auth-plugin class requires it -String authParams = "param1=value1"; -boolean useTls = false; -boolean tlsAllowInsecureConnection = false; -String tlsTrustCertsFilePath = null; -PulsarAdmin admin = PulsarAdmin.builder() -.authentication(authPluginClassName,authParams) -.serviceHttpUrl(url) -.tlsTrustCertsFilePath(tlsTrustCertsFilePath) -.allowTlsInsecureConnection(tlsAllowInsecureConnection) -.build(); - -``` - - - - -```` - -## How to define Pulsar resource names when running Pulsar in Kubernetes -If you run Pulsar Functions or connectors on Kubernetes, you need to follow Kubernetes naming convention to define the names of your Pulsar resources, whichever admin interface you use. - -Kubernetes requires a name that can be used as a DNS subdomain name as defined in [RFC 1123](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names). Pulsar supports more legal characters than Kubernetes naming convention. If you create a Pulsar resource name with special characters that are not supported by Kubernetes (for example, including colons in a Pulsar namespace name), Kubernetes runtime translates the Pulsar object names into Kubernetes resource labels which are in RFC 1123-compliant forms. Consequently, you can run functions or connectors using Kubernetes runtime. The rules for translating Pulsar object names into Kubernetes resource labels are as below: - -- Truncate to 63 characters - -- Replace the following characters with dashes (-): - - - Non-alphanumeric characters - - - Underscores (_) - - - Dots (.) - -- Replace beginning and ending non-alphanumeric characters with 0 - -:::tip - -- If you get an error in translating Pulsar object names into Kubernetes resource labels (for example, you may have a naming collision if your Pulsar object name is too long) or want to customize the translating rules, see [customize Kubernetes runtime](https://pulsar.apache.org/docs/en/next/functions-runtime/#customize-kubernetes-runtime). -- For how to configure Kubernetes runtime, see [here](https://pulsar.apache.org/docs/en/next/functions-runtime/#configure-kubernetes-runtime). - -::: - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-packages.md b/site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-packages.md deleted file mode 100644 index 439dea6a165b31..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-packages.md +++ /dev/null @@ -1,391 +0,0 @@ ---- -id: admin-api-packages -title: Manage packages -sidebar_label: "Packages" -original_id: admin-api-packages ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -> **Important** -> -> This page only shows **some frequently used operations**. -> -> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/) -> -> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. 
-> -> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/). - -Package management enables version management and simplifies the upgrade and rollback processes for Functions, Sinks, and Sources. When you use the same function, sink and source in different namespaces, you can upload them to a common package management system. - -## Package name - -A `package` is identified by five parts: `type`, `tenant`, `namespace`, `package name`, and `version`. - -| Part | Description | -|-------|-------------| -|`type` |The type of the package. The following types are supported: `function`, `sink` and `source`. | -| `name`|The fully qualified name of the package: `//`.| -|`version`|The version of the package.| - -The following is a code sample. - -```java - -class PackageName { - private final PackageType type; - private final String namespace; - private final String tenant; - private final String name; - private final String version; -} - -enum PackageType { - FUNCTION("function"), SINK("sink"), SOURCE("source"); -} - -``` - -## Package URL -A package is located using a URL. The package URL is written in the following format: - -```shell - -:////@ - -``` - -The following are package URL examples: - -`sink://public/default/mysql-sink@1.0` -`function://my-tenant/my-ns/my-function@0.1` -`source://my-tenant/my-ns/mysql-cdc-source@2.3` - -The package management system stores the data, versions and metadata of each package. The metadata is shown in the following table. - -| metadata | Description | -|----------|-------------| -|description|The description of the package.| -|contact |The contact information of a package. For example, team email.| -|create_time| The time when the package is created.| -|modification_time| The time when the package is modified.| -|properties |A key/value map that stores your own information.| - -## Permissions - -The packages are organized by the tenant and namespace, so you can apply the tenant and namespace permissions to packages directly. - -## Package resources -You can use the package management with command line tools, REST API and Java client. - -### Upload a package -You can upload a package to the package management service in the following ways. - -````mdx-code-block - - - -```shell - -bin/pulsar-admin packages upload functions://public/default/example@v0.1 --path package-file --description package-description - -``` - - - - -{@inject: endpoint|POST|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version/?version=@pulsar:version_number@} - - - - -Upload a package to the package management service synchronously. - -```java - - void upload(PackageMetadata metadata, String packageName, String path) throws PulsarAdminException; - -``` - -Upload a package to the package management service asynchronously. - -```java - - CompletableFuture uploadAsync(PackageMetadata metadata, String packageName, String path); - -``` - - - - -```` - -### Download a package -You can download a package to the package management service in the following ways. - -````mdx-code-block - - - -```shell - -bin/pulsar-admin packages download functions://public/default/example@v0.1 --path package-file - -``` - - - - -{@inject: endpoint|GET|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version/?version=@pulsar:version_number@} - - - - -Download a package to the package management service synchronously. 

```java

void download(String packageName, String path) throws PulsarAdminException;

```

Download a package from the package management service asynchronously.

```java

CompletableFuture<Void> downloadAsync(String packageName, String path);

```


````

### List all versions of a package
You can get a list of all versions of a package in the following ways.
````mdx-code-block


```shell

bin/pulsar-admin packages list-versions function://public/default/example

```


{@inject: endpoint|GET|/admin/v3/packages/:type/:tenant/:namespace/:packageName/?version=@pulsar:version_number@}


List all versions of a package synchronously.

```java

List<String> listPackageVersions(String packageName) throws PulsarAdminException;

```

List all versions of a package asynchronously.

```java

CompletableFuture<List<String>> listPackageVersionsAsync(String packageName);

```


````

### List all packages of a specified type under a namespace
You can get a list of all the packages with the given type in a namespace in the following ways.
````mdx-code-block


```shell

bin/pulsar-admin packages list --type function public/default

```


{@inject: endpoint|GET|/admin/v3/packages/:type/:tenant/:namespace/?version=@pulsar:version_number@}


List all the packages with the given type in a namespace synchronously.

```java

List<String> listPackages(String type, String namespace) throws PulsarAdminException;

```

List all the packages with the given type in a namespace asynchronously.

```java

CompletableFuture<List<String>> listPackagesAsync(String type, String namespace);

```


````

### Get the metadata of a package
You can get the metadata of a package in the following ways.

````mdx-code-block


```shell

bin/pulsar-admin packages get-metadata function://public/default/test@v1

```


{@inject: endpoint|GET|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version/metadata/?version=@pulsar:version_number@}


Get the metadata of a package synchronously.

```java

PackageMetadata getMetadata(String packageName) throws PulsarAdminException;

```

Get the metadata of a package asynchronously.

```java

CompletableFuture<PackageMetadata> getMetadataAsync(String packageName);

```


````

### Update the metadata of a package
You can update the metadata of a package in the following ways.
````mdx-code-block


```shell

bin/pulsar-admin packages update-metadata function://public/default/example@v0.1 --description update-description

```


{@inject: endpoint|PUT|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version/metadata/?version=@pulsar:version_number@}


Update the metadata of a package synchronously.

```java

void updateMetadata(String packageName, PackageMetadata metadata) throws PulsarAdminException;

```

Update the metadata of a package asynchronously.

```java

CompletableFuture<Void> updateMetadataAsync(String packageName, PackageMetadata metadata);

```


````

### Delete a specified package
You can delete a specified package with its package name in the following ways.

````mdx-code-block


The following command example deletes a package of version 0.1.
- -```shell - -bin/pulsar-admin packages delete functions://public/default/example@v0.1 - -``` - - - - -{@inject: endpoint|DELETE|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version/?version=@pulsar:version_number@} - - - - -Delete a specified package synchronously. - -```java - - void delete(String packageName) throws PulsarAdminException; - -``` - -Delete a specified package asynchronously. - -```java - - CompletableFuture deleteAsync(String packageName); - -``` - - - - -```` diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-partitioned-topics.md b/site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-partitioned-topics.md deleted file mode 100644 index 5ce182282e0324..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-partitioned-topics.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -id: admin-api-partitioned-topics -title: Managing partitioned topics -sidebar_label: "Partitioned topics" -original_id: admin-api-partitioned-topics ---- - -For details of the content, refer to [manage topics](admin-api-topics.md). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-permissions.md b/site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-permissions.md deleted file mode 100644 index 2496c9be54eb26..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-permissions.md +++ /dev/null @@ -1,184 +0,0 @@ ---- -id: admin-api-permissions -title: Managing permissions -sidebar_label: "Permissions" -original_id: admin-api-permissions ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -> **Important** -> -> This page only shows **some frequently used operations**. -> -> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/) -> -> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. -> -> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/). - -Permissions in Pulsar are managed at the [namespace](reference-terminology.md#namespace) level -(that is, within [tenants](reference-terminology.md#tenant) and [clusters](reference-terminology.md#cluster)). - -## Grant permissions - -You can grant permissions to specific roles for lists of operations such as `produce` and `consume`. - -````mdx-code-block - - - -Use the [`grant-permission`](reference-pulsar-admin.md#grant-permission) subcommand and specify a namespace, actions using the `--actions` flag, and a role using the `--role` flag: - -```shell - -$ pulsar-admin namespaces grant-permission test-tenant/ns1 \ - --actions produce,consume \ - --role admin10 - -``` - -Wildcard authorization can be performed when `authorizationAllowWildcardsMatching` is set to `true` in `broker.conf`. - -e.g. - -```shell - -$ pulsar-admin namespaces grant-permission test-tenant/ns1 \ - --actions produce,consume \ - --role 'my.role.*' - -``` - -Then, roles `my.role.1`, `my.role.2`, `my.role.foo`, `my.role.bar`, etc. can produce and consume. 
- -```shell - -$ pulsar-admin namespaces grant-permission test-tenant/ns1 \ - --actions produce,consume \ - --role '*.role.my' - -``` - -Then, roles `1.role.my`, `2.role.my`, `foo.role.my`, `bar.role.my`, etc. can produce and consume. - -**Note**: A wildcard matching works at **the beginning or end of the role name only**. - -e.g. - -```shell - -$ pulsar-admin namespaces grant-permission test-tenant/ns1 \ - --actions produce,consume \ - --role 'my.*.role' - -``` - -In this case, only the role `my.*.role` has permissions. -Roles `my.1.role`, `my.2.role`, `my.foo.role`, `my.bar.role`, etc. **cannot** produce and consume. - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/permissions/:role|operation/grantPermissionOnNamespace?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().grantPermissionOnNamespace(namespace, role, getAuthActions(actions)); - -``` - - - - -```` - -## Get permissions - -You can see which permissions have been granted to which roles in a namespace. - -````mdx-code-block - - - -Use the [`permissions`](reference-pulsar-admin#permissions) subcommand and specify a namespace: - -```shell - -$ pulsar-admin namespaces permissions test-tenant/ns1 -{ - "admin10": [ - "produce", - "consume" - ] -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/permissions|operation/getPermissions?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().getPermissions(namespace); - -``` - - - - -```` - -## Revoke permissions - -You can revoke permissions from specific roles, which means that those roles will no longer have access to the specified namespace. - -````mdx-code-block - - - -Use the [`revoke-permission`](reference-pulsar-admin.md#revoke-permission) subcommand and specify a namespace and a role using the `--role` flag: - -```shell - -$ pulsar-admin namespaces revoke-permission test-tenant/ns1 \ - --role admin10 - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/permissions/:role|operation/revokePermissionsOnNamespace?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().revokePermissionsOnNamespace(namespace, role); - -``` - - - - -```` \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-persistent-topics.md b/site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-persistent-topics.md deleted file mode 100644 index 50d135b72f5424..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-persistent-topics.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -id: admin-api-persistent-topics -title: Managing persistent topics -sidebar_label: "Persistent topics" -original_id: admin-api-persistent-topics ---- - -For details of the content, refer to [manage topics](admin-api-topics.md). 
\ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-schemas.md b/site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-schemas.md deleted file mode 100644 index 9ffe21f5b0f750..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-schemas.md +++ /dev/null @@ -1,7 +0,0 @@ ---- -id: admin-api-schemas -title: Managing Schemas -sidebar_label: "Schemas" -original_id: admin-api-schemas ---- - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-tenants.md b/site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-tenants.md deleted file mode 100644 index 3e13e54a68b2cd..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-tenants.md +++ /dev/null @@ -1,238 +0,0 @@ ---- -id: admin-api-tenants -title: Managing Tenants -sidebar_label: "Tenants" -original_id: admin-api-tenants ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -> **Important** -> -> This page only shows **some frequently used operations**. -> -> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/) -> -> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc. -> -> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/). - -Tenants, like namespaces, can be managed using the [admin API](admin-api-overview.md). There are currently two configurable aspects of tenants: - -* Admin roles -* Allowed clusters - -## Tenant resources - -### List - -You can list all of the tenants associated with an [instance](reference-terminology.md#instance). - -````mdx-code-block - - - -Use the [`list`](reference-pulsar-admin.md#tenants-list) subcommand. - -```shell - -$ pulsar-admin tenants list -my-tenant-1 -my-tenant-2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/tenants|operation/getTenants?version=@pulsar:version_number@} - - - - -```java - -admin.tenants().getTenants(); - -``` - - - - -```` - -### Create - -You can create a new tenant. - -````mdx-code-block - - - -Use the [`create`](reference-pulsar-admin.md#tenants-create) subcommand: - -```shell - -$ pulsar-admin tenants create my-tenant - -``` - -When creating a tenant, you can assign admin roles using the `-r`/`--admin-roles` flag. You can specify multiple roles as a comma-separated list. Here are some examples: - -```shell - -$ pulsar-admin tenants create my-tenant \ - --admin-roles role1,role2,role3 - -$ pulsar-admin tenants create my-tenant \ - -r role1 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/tenants/:tenant|operation/createTenant?version=@pulsar:version_number@} - - - - -```java - -admin.tenants().createTenant(tenantName, tenantInfo); - -``` - - - - -```` - -### Get configuration - -You can fetch the [configuration](reference-configuration.md) for an existing tenant at any time. - -````mdx-code-block - - - -Use the [`get`](reference-pulsar-admin.md#tenants-get) subcommand and specify the name of the tenant. 
Here's an example:

```shell

$ pulsar-admin tenants get my-tenant
{
  "adminRoles": [
    "admin1",
    "admin2"
  ],
  "allowedClusters": [
    "cl1",
    "cl2"
  ]
}

```


{@inject: endpoint|GET|/admin/v2/tenants/:tenant|operation/getTenant?version=@pulsar:version_number@}


```java

admin.tenants().getTenantInfo(tenantName);

```


````

### Delete

Tenants can be deleted from a Pulsar [instance](reference-terminology.md#instance).

````mdx-code-block


Use the [`delete`](reference-pulsar-admin.md#tenants-delete) subcommand and specify the name of the tenant.

```shell

$ pulsar-admin tenants delete my-tenant

```


{@inject: endpoint|DELETE|/admin/v2/tenants/:tenant|operation/deleteTenant?version=@pulsar:version_number@}


```java

admin.tenants().deleteTenant(tenantName);

```


````

### Update

You can update a tenant's configuration.

````mdx-code-block


Use the [`update`](reference-pulsar-admin.md#tenants-update) subcommand.

```shell

$ pulsar-admin tenants update my-tenant

```


{@inject: endpoint|POST|/admin/v2/tenants/:tenant|operation/updateTenant?version=@pulsar:version_number@}


```java

admin.tenants().updateTenant(tenantName, tenantInfo);

```


````
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-topics.md b/site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-topics.md
deleted file mode 100644
index ddcd73c41698ef..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/admin-api-topics.md
+++ /dev/null
@@ -1,2378 +0,0 @@
----
-id: admin-api-topics
-title: Manage topics
-sidebar_label: "Topics"
-original_id: admin-api-topics
----

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````


> **Important**
>
> This page only shows **some frequently used operations**.
>
> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/)
>
> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc.
>
> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](https://pulsar.apache.org/api/admin/).

Pulsar has persistent and non-persistent topics. A persistent topic is a logical endpoint for publishing and consuming messages. The topic name structure for persistent topics is:

```shell

persistent://tenant/namespace/topic

```

Non-persistent topics are used in applications that only consume real-time published messages and do not need a persistence guarantee. This reduces message-publish latency by removing the overhead of persisting messages. The topic name structure for non-persistent topics is:

```shell

non-persistent://tenant/namespace/topic

```

## Manage topic resources
Whether a topic is persistent or non-persistent, you can manage its resources with the `pulsar-admin` tool, REST API, and Java.

:::note

In REST API, `:schema` stands for persistent or non-persistent. `:tenant`, `:namespace`, and `:x` are variables; replace them with the real tenant, namespace, and `x` names when using them.
-Take {@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getList?version=@pulsar:version_number@} as an example, to get the list of persistent topics in REST API, use `https://pulsar.apache.org/admin/v2/persistent/my-tenant/my-namespace`. To get the list of non-persistent topics in REST API, use `https://pulsar.apache.org/admin/v2/non-persistent/my-tenant/my-namespace`. - -::: - -### List of topics - -You can get the list of topics under a given namespace in the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics list \ - my-tenant/my-namespace - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getList?version=@pulsar:version_number@} - - - - -```java - -String namespace = "my-tenant/my-namespace"; -admin.topics().getList(namespace); - -``` - - - - -```` - -### Grant permission - -You can grant permissions on a client role to perform specific actions on a given topic in the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics grant-permission \ - --actions produce,consume --role application1 \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/permissions/:role|operation/grantPermissionsOnTopic?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String role = "test-role"; -Set actions = Sets.newHashSet(AuthAction.produce, AuthAction.consume); -admin.topics().grantPermission(topic, role, actions); - -``` - - - - -```` - -### Get permission - -You can fetch permission in the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics permissions \ - persistent://test-tenant/ns1/tp1 \ - -{ - "application1": [ - "consume", - "produce" - ] -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/permissions|operation/getPermissionsOnTopic?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().getPermissions(topic); - -``` - - - - -```` - -### Revoke permission - -You can revoke a permission granted on a client role in the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics revoke-permission \ - --role application1 \ - persistent://test-tenant/ns1/tp1 \ - -{ - "application1": [ - "consume", - "produce" - ] -} - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/:schema/:tenant/:namespace/:topic/permissions/:role|operation/revokePermissionsOnTopic?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String role = "test-role"; -admin.topics().revokePermissions(topic, role); - -``` - - - - -```` - -### Delete topic - -You can delete a topic in the following ways. You cannot delete a topic if any active subscription or producers is connected to the topic. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics delete \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/:schema/:tenant/:namespace/:topic|operation/deleteTopic?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().delete(topic); - -``` - - - - -```` - -### Unload topic - -You can unload a topic in the following ways. 
-````mdx-code-block - - - -```shell - -$ pulsar-admin topics unload \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/:schema/:tenant/:namespace/:topic/unload|operation/unloadTopic?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().unload(topic); - -``` - - - - -```` - -### Get stats - -You can check the following statistics of a given non-partitioned topic. - - - **msgRateIn**: The sum of all local and replication publishers' publish rates (msg/s). - - - **msgThroughputIn**: The sum of all local and replication publishers' publish rates (bytes/s). - - - **msgRateOut**: The sum of all local and replication consumers' dispatch rates(msg/s). - - - **msgThroughputOut**: The sum of all local and replication consumers' dispatch rates (bytes/s). - - - **averageMsgSize**: The average size (in bytes) of messages published within the last interval. - - - **storageSize**: The sum of the ledgers' storage size for this topic. The space used to store the messages for the topic. - - - **bytesInCounter**: Total bytes published to the topic. - - - **msgInCounter**: Total messages published to the topic. - - - **bytesOutCounter**: Total bytes delivered to consumers. - - - **msgOutCounter**: Total messages delivered to consumers. - - - **msgChunkPublished**: Topic has chunked message published on it. - - - **backlogSize**: Estimated total unconsumed or backlog size (in bytes). - - - **offloadedStorageSize**: Space used to store the offloaded messages for the topic (in bytes). - - - **waitingPublishers**: The number of publishers waiting in a queue in exclusive access mode. - - - **deduplicationStatus**: The status of message deduplication for the topic. - - - **topicEpoch**: The topic epoch or empty if not set. - - - **nonContiguousDeletedMessagesRanges**: The number of non-contiguous deleted messages ranges. - - - **nonContiguousDeletedMessagesRangesSerializedSize**: The serialized size of non-contiguous deleted messages ranges. - - - **publishers**: The list of all local publishers into the topic. The list ranges from zero to thousands. - - - **accessMode**: The type of access to the topic that the producer requires. - - - **msgRateIn**: The total rate of messages (msg/s) published by this publisher. - - - **msgThroughputIn**: The total throughput (bytes/s) of the messages published by this publisher. - - - **averageMsgSize**: The average message size in bytes from this publisher within the last interval. - - - **chunkedMessageRate**: The total rate of chunked messages published by this publisher. - - - **producerId**: The internal identifier for this producer on this topic. - - - **producerName**: The internal identifier for this producer, generated by the client library. - - - **address**: The IP address and source port for the connection of this producer. - - - **connectedSince**: The timestamp when this producer is created or reconnected last time. - - - **clientVersion**: The client library version of this producer. - - - **metadata**: Metadata (key/value strings) associated with this publisher. - - - **subscriptions**: The list of all local subscriptions to the topic. - - - **my-subscription**: The name of this subscription. It is defined by the client. - - - **msgRateOut**: The total rate of messages (msg/s) delivered on this subscription. - - - **msgThroughputOut**: The total throughput (bytes/s) delivered on this subscription. 
- - - **msgBacklog**: The number of messages in the subscription backlog. - - - **type**: The subscription type. - - - **msgRateExpired**: The rate at which messages were discarded instead of dispatched from this subscription due to TTL. - - - **lastExpireTimestamp**: The timestamp of the last message expire execution. - - - **lastConsumedFlowTimestamp**: The timestamp of the last flow command received. - - - **lastConsumedTimestamp**: The latest timestamp of all the consumed timestamp of the consumers. - - - **lastAckedTimestamp**: The latest timestamp of all the acked timestamp of the consumers. - - - **bytesOutCounter**: Total bytes delivered to consumer. - - - **msgOutCounter**: Total messages delivered to consumer. - - - **msgRateRedeliver**: Total rate of messages redelivered on this subscription (msg/s). - - - **chunkedMessageRate**: Chunked message dispatch rate. - - - **backlogSize**: Size of backlog for this subscription (in bytes). - - - **msgBacklogNoDelayed**: Number of messages in the subscription backlog that do not contain the delay messages. - - - **blockedSubscriptionOnUnackedMsgs**: Flag to verify if a subscription is blocked due to reaching threshold of unacked messages. - - - **msgDelayed**: Number of delayed messages currently being tracked. - - - **unackedMessages**: Number of unacknowledged messages for the subscription, where an unacknowledged message is one that has been sent to a consumer but not yet acknowledged. This field is only meaningful when using a subscription that tracks individual message acknowledgement. - - - **activeConsumerName**: The name of the consumer that is active for single active consumer subscriptions. For example, failover or exclusive. - - - **totalMsgExpired**: Total messages expired on this subscription. - - - **lastMarkDeleteAdvancedTimestamp**: Last MarkDelete position advanced timestamp. - - - **durable**: Whether the subscription is durable or ephemeral (for example, from a reader). - - - **replicated**: Mark that the subscription state is kept in sync across different regions. - - - **allowOutOfOrderDelivery**: Whether out of order delivery is allowed on the Key_Shared subscription. - - - **keySharedMode**: Whether the Key_Shared subscription mode is AUTO_SPLIT or STICKY. - - - **consumersAfterMarkDeletePosition**: This is for Key_Shared subscription to get the recentJoinedConsumers in the Key_Shared subscription. - - - **nonContiguousDeletedMessagesRanges**: The number of non-contiguous deleted messages ranges. - - - **nonContiguousDeletedMessagesRangesSerializedSize**: The serialized size of non-contiguous deleted messages ranges. - - - **consumers**: The list of connected consumers for this subscription. - - - **msgRateOut**: The total rate of messages (msg/s) delivered to the consumer. - - - **msgThroughputOut**: The total throughput (bytes/s) delivered to the consumer. - - - **consumerName**: The internal identifier for this consumer, generated by the client library. - - - **availablePermits**: The number of messages that the consumer has space for in the client library's listen queue. `0` means the client library's queue is full and `receive()` isn't being called. A non-zero value means this consumer is ready for dispatched messages. - - - **unackedMessages**: The number of unacknowledged messages for the consumer, where an unacknowledged message is one that has been sent to the consumer but not yet acknowledged. This field is only meaningful when using a subscription that tracks individual message acknowledgement. 
- - - **blockedConsumerOnUnackedMsgs**: The flag used to verify if the consumer is blocked due to reaching threshold of the unacknowledged messages. - - - **lastConsumedTimestamp**: The timestamp when the consumer reads a message the last time. - - - **lastAckedTimestamp**: The timestamp when the consumer acknowledges a message the last time. - - - **address**: The IP address and source port for the connection of this consumer. - - - **connectedSince**: The timestamp when this consumer is created or reconnected last time. - - - **clientVersion**: The client library version of this consumer. - - - **bytesOutCounter**: Total bytes delivered to consumer. - - - **msgOutCounter**: Total messages delivered to consumer. - - - **msgRateRedeliver**: Total rate of messages redelivered by this consumer (msg/s). - - - **chunkedMessageRate**: The total rate of chunked messages delivered to this consumer. - - - **avgMessagesPerEntry**: Number of average messages per entry for the consumer consumed. - - - **readPositionWhenJoining**: The read position of the cursor when the consumer joining. - - - **keyHashRanges**: Hash ranges assigned to this consumer if is Key_Shared sub mode. - - - **metadata**: Metadata (key/value strings) associated with this consumer. - - - **replication**: This section gives the stats for cross-colo replication of this topic - - - **msgRateIn**: The total rate (msg/s) of messages received from the remote cluster. - - - **msgThroughputIn**: The total throughput (bytes/s) received from the remote cluster. - - - **msgRateOut**: The total rate of messages (msg/s) delivered to the replication-subscriber. - - - **msgThroughputOut**: The total throughput (bytes/s) delivered to the replication-subscriber. - - - **msgRateExpired**: The total rate of messages (msg/s) expired. - - - **replicationBacklog**: The number of messages pending to be replicated to remote cluster. - - - **connected**: Whether the outbound replicator is connected. - - - **replicationDelayInSeconds**: How long the oldest message has been waiting to be sent through the connection, if connected is `true`. - - - **inboundConnection**: The IP and port of the broker in the remote cluster's publisher connection to this broker. - - - **inboundConnectedSince**: The TCP connection being used to publish messages to the remote cluster. If there are no local publishers connected, this connection is automatically closed after a minute. - - - **outboundConnection**: The address of the outbound replication connection. - - - **outboundConnectedSince**: The timestamp of establishing outbound connection. - -The following is an example of a topic status. 
- -```json - -{ - "msgRateIn" : 0.0, - "msgThroughputIn" : 0.0, - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "bytesInCounter" : 504, - "msgInCounter" : 9, - "bytesOutCounter" : 2296, - "msgOutCounter" : 41, - "averageMsgSize" : 0.0, - "msgChunkPublished" : false, - "storageSize" : 504, - "backlogSize" : 0, - "offloadedStorageSize" : 0, - "publishers" : [ { - "accessMode" : "Shared", - "msgRateIn" : 0.0, - "msgThroughputIn" : 0.0, - "averageMsgSize" : 0.0, - "chunkedMessageRate" : 0.0, - "producerId" : 0, - "metadata" : { }, - "address" : "/127.0.0.1:65402", - "connectedSince" : "2021-06-09T17:22:55.913+08:00", - "clientVersion" : "2.9.0-SNAPSHOT", - "producerName" : "standalone-1-0" - } ], - "waitingPublishers" : 0, - "subscriptions" : { - "sub-demo" : { - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "bytesOutCounter" : 2296, - "msgOutCounter" : 41, - "msgRateRedeliver" : 0.0, - "chunkedMessageRate" : 0, - "msgBacklog" : 0, - "backlogSize" : 0, - "msgBacklogNoDelayed" : 0, - "blockedSubscriptionOnUnackedMsgs" : false, - "msgDelayed" : 0, - "unackedMessages" : 0, - "type" : "Exclusive", - "activeConsumerName" : "20b81", - "msgRateExpired" : 0.0, - "totalMsgExpired" : 0, - "lastExpireTimestamp" : 0, - "lastConsumedFlowTimestamp" : 1623230565356, - "lastConsumedTimestamp" : 1623230583946, - "lastAckedTimestamp" : 1623230584033, - "lastMarkDeleteAdvancedTimestamp" : 1623230584033, - "consumers" : [ { - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "bytesOutCounter" : 2296, - "msgOutCounter" : 41, - "msgRateRedeliver" : 0.0, - "chunkedMessageRate" : 0.0, - "consumerName" : "20b81", - "availablePermits" : 959, - "unackedMessages" : 0, - "avgMessagesPerEntry" : 314, - "blockedConsumerOnUnackedMsgs" : false, - "lastAckedTimestamp" : 1623230584033, - "lastConsumedTimestamp" : 1623230583946, - "metadata" : { }, - "address" : "/127.0.0.1:65172", - "connectedSince" : "2021-06-09T17:22:45.353+08:00", - "clientVersion" : "2.9.0-SNAPSHOT" - } ], - "allowOutOfOrderDelivery": false, - "consumersAfterMarkDeletePosition" : { }, - "nonContiguousDeletedMessagesRanges" : 0, - "nonContiguousDeletedMessagesRangesSerializedSize" : 0, - "durable" : true, - "replicated" : false - } - }, - "replication" : { }, - "deduplicationStatus" : "Disabled", - "nonContiguousDeletedMessagesRanges" : 0, - "nonContiguousDeletedMessagesRangesSerializedSize" : 0 -} - -``` - -To get the status of a topic, you can use the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics stats \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/stats|operation/getStats?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().getStats(topic); - -``` - - - - -```` - -### Get internal stats - -You can get the detailed statistics of a topic. - - - **entriesAddedCounter**: Messages published since this broker loaded this topic. - - - **numberOfEntries**: The total number of messages being tracked. - - - **totalSize**: The total storage size in bytes of all messages. - - - **currentLedgerEntries**: The count of messages written to the ledger that is currently open for writing. - - - **currentLedgerSize**: The size in bytes of messages written to the ledger that is currently open for writing. - - - **lastLedgerCreatedTimestamp**: The time when the last ledger is created. - - - **lastLedgerCreationFailureTimestamp:** The time when the last ledger failed. 
- - - **waitingCursorsCount**: The number of cursors that are "caught up" and waiting for a new message to be published. - - - **pendingAddEntriesCount**: The number of messages that complete (asynchronous) write requests. - - - **lastConfirmedEntry**: The ledgerid:entryid of the last message that is written successfully. If the entryid is `-1`, then the ledger is open, yet no entries are written. - - - **state**: The state of this ledger for writing. The state `LedgerOpened` means that a ledger is open for saving published messages. - - - **ledgers**: The ordered list of all ledgers for this topic holding messages. - - - **ledgerId**: The ID of this ledger. - - - **entries**: The total number of entries that belong to this ledger. - - - **size**: The size of messages written to this ledger (in bytes). - - - **offloaded**: Whether this ledger is offloaded. - - - **metadata**: The ledger metadata. - - - **schemaLedgers**: The ordered list of all ledgers for this topic schema. - - - **ledgerId**: The ID of this ledger. - - - **entries**: The total number of entries that belong to this ledger. - - - **size**: The size of messages written to this ledger (in bytes). - - - **offloaded**: Whether this ledger is offloaded. - - - **metadata**: The ledger metadata. - - - **compactedLedger**: The ledgers holding un-acked messages after topic compaction. - - - **ledgerId**: The ID of this ledger. - - - **entries**: The total number of entries that belong to this ledger. - - - **size**: The size of messages written to this ledger (in bytes). - - - **offloaded**: Whether this ledger is offloaded. The value is `false` for the compacted topic ledger. - - - **cursors**: The list of all cursors on this topic. Each subscription in the topic stats has a cursor. - - - **markDeletePosition**: All messages before the markDeletePosition are acknowledged by the subscriber. - - - **readPosition**: The latest position of subscriber for reading message. - - - **waitingReadOp**: This is true when the subscription has read the latest message published to the topic and is waiting for new messages to be published. - - - **pendingReadOps**: The counter for how many outstanding read requests to the BookKeepers in progress. - - - **messagesConsumedCounter**: The number of messages this cursor has acked since this broker loaded this topic. - - - **cursorLedger**: The ledger being used to persistently store the current markDeletePosition. - - - **cursorLedgerLastEntry**: The last entryid used to persistently store the current markDeletePosition. - - - **individuallyDeletedMessages**: If acknowledges are being done out of order, the ranges of messages acknowledged between the markDeletePosition and the read-position shows. - - - **lastLedgerSwitchTimestamp**: The last time the cursor ledger is rolled over. - - - **state**: The state of the cursor ledger: `Open` means you have a cursor ledger for saving updates of the markDeletePosition. - -The following is an example of the detailed statistics of a topic. 
- -```json - -{ - "entriesAddedCounter":0, - "numberOfEntries":0, - "totalSize":0, - "currentLedgerEntries":0, - "currentLedgerSize":0, - "lastLedgerCreatedTimestamp":"2021-01-22T21:12:14.868+08:00", - "lastLedgerCreationFailureTimestamp":null, - "waitingCursorsCount":0, - "pendingAddEntriesCount":0, - "lastConfirmedEntry":"3:-1", - "state":"LedgerOpened", - "ledgers":[ - { - "ledgerId":3, - "entries":0, - "size":0, - "offloaded":false, - "metadata":null - } - ], - "cursors":{ - "test":{ - "markDeletePosition":"3:-1", - "readPosition":"3:-1", - "waitingReadOp":false, - "pendingReadOps":0, - "messagesConsumedCounter":0, - "cursorLedger":4, - "cursorLedgerLastEntry":1, - "individuallyDeletedMessages":"[]", - "lastLedgerSwitchTimestamp":"2021-01-22T21:12:14.966+08:00", - "state":"Open", - "numberOfEntriesSinceFirstNotAckedMessage":0, - "totalNonContiguousDeletedMessagesRange":0, - "properties":{ - - } - } - }, - "schemaLedgers":[ - { - "ledgerId":1, - "entries":11, - "size":10, - "offloaded":false, - "metadata":null - } - ], - "compactedLedger":{ - "ledgerId":-1, - "entries":-1, - "size":-1, - "offloaded":false, - "metadata":null - } -} - -``` - -To get the internal status of a topic, you can use the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics stats-internal \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/internalStats|operation/getInternalStats?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().getInternalStats(topic); - -``` - - - - -```` - -### Peek messages - -You can peek a number of messages for a specific subscription of a given topic in the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics peek-messages \ - --count 10 --subscription my-subscription \ - persistent://test-tenant/ns1/tp1 \ - -Message ID: 315674752:0 -Properties: { "X-Pulsar-publish-time" : "2015-07-13 17:40:28.451" } -msg-payload - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/position/:messagePosition|operation/peekNthMessage?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String subName = "my-subscription"; -int numMessages = 1; -admin.topics().peekMessages(topic, subName, numMessages); - -``` - - - - -```` - -### Get message by ID - -You can fetch the message with the given ledger ID and entry ID in the following ways. - -````mdx-code-block - - - -```shell - -$ ./bin/pulsar-admin topics get-message-by-id \ - persistent://public/default/my-topic \ - -l 10 -e 0 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/ledger/:ledgerId/entry/:entryId|operation/getMessageById?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -long ledgerId = 10; -long entryId = 10; -admin.topics().getMessageById(topic, ledgerId, entryId); - -``` - - - - -```` - -### Examine messages - -You can examine a specific message on a topic by position relative to the earliest or the latest message. 

````mdx-code-block


```shell

./bin/pulsar-admin topics examine-messages \
  persistent://public/default/my-topic \
  -i latest -m 1

```


{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/examinemessage?initialPosition=:initialPosition&messagePosition=:messagePosition|operation/examineMessage?version=@pulsar:version_number@}


```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
admin.topics().examineMessage(topic, "latest", 1);

```


````

### Get message ID

You can get the ID of the message published at or just after the given datetime.

````mdx-code-block


```shell

./bin/pulsar-admin topics get-message-id \
  persistent://public/default/my-topic \
  -d 2021-06-28T19:01:17Z

```


{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/messageid/:timestamp|operation/getMessageIdByTimestamp?version=@pulsar:version_number@}


```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
long timestamp = System.currentTimeMillis();
admin.topics().getMessageIdByTimestamp(topic, timestamp);

```


````


### Skip messages

You can skip a number of messages for a specific subscription of a given topic in the following ways.

````mdx-code-block


```shell

$ pulsar-admin topics skip \
  --count 10 --subscription my-subscription \
  persistent://test-tenant/ns1/tp1

```


{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/skip/:numMessages|operation/skipMessages?version=@pulsar:version_number@}


```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
String subName = "my-subscription";
int numMessages = 1;
admin.topics().skipMessages(topic, subName, numMessages);

```


````

### Skip all messages

You can skip all the old messages for a specific subscription of a given topic.

````mdx-code-block


```shell

$ pulsar-admin topics skip-all \
  --subscription my-subscription \
  persistent://test-tenant/ns1/tp1

```


{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/skip_all|operation/skipAllMessages?version=@pulsar:version_number@}


```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
String subName = "my-subscription";
admin.topics().skipAllMessages(topic, subName);

```


````

### Reset cursor

You can reset a subscription cursor to the position recorded X minutes earlier. Pulsar calculates the cursor position at that time and resets the cursor there. You can reset the cursor in the following ways.

````mdx-code-block


```shell

$ pulsar-admin topics reset-cursor \
  --subscription my-subscription --time 10 \
  persistent://test-tenant/ns1/tp1

```


{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/resetcursor/:timestamp|operation/resetCursor?version=@pulsar:version_number@}


```java

String topic = "persistent://my-tenant/my-namespace/my-topic";
String subName = "my-subscription";
long timestamp = 2342343L;
admin.topics().resetCursor(topic, subName, timestamp);

```


````

### Lookup of topic

You can locate the broker URL that is serving the given topic in the following ways.
- -````mdx-code-block - - - -```shell - -$ pulsar-admin topics lookup \ - persistent://test-tenant/ns1/tp1 \ - - "pulsar://broker1.org.com:4480" - -``` - - - - -{@inject: endpoint|GET|/lookup/v2/topic/:schema/:tenant:namespace/:topic|/?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.lookup().lookupDestination(topic); - -``` - - - - -```` - -### Get bundle - -You can check the range of the bundle which contains given topic in the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics bundle-range \ - persistent://test-tenant/ns1/tp1 \ - - "0x00000000_0xffffffff" - -``` - - - - -{@inject: endpoint|GET|/lookup/v2/topic/:topic_domain/:tenant/:namespace/:topic/bundle|/?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.lookup().getBundleRange(topic); - -``` - - - - -```` - -### Get subscriptions - -You can check all subscription names for a given topic in the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics subscriptions \ - persistent://test-tenant/ns1/tp1 \ - - my-subscription - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/subscriptions|operation/getSubscriptions?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().getSubscriptions(topic); - -``` - - - - -```` - -### Unsubscribe - -When a subscription does not process messages any more, you can unsubscribe it in the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics unsubscribe \ - --subscription my-subscription \ - persistent://test-tenant/ns1/tp1 \ - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/:topic/subscription/:subscription|operation/deleteSubscription?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -String subscriptionName = "my-subscription"; -admin.topics().deleteSubscription(topic, subscriptionName); - -``` - - - - -```` - -### Last Message Id - -You can get the last committed message ID for a persistent topic. It is available since 2.3.0 release. - -````mdx-code-block - - - -```shell - -pulsar-admin topics last-message-id topic-name - -``` - - - - -{@inject: endpoint|Get|/admin/v2/:schema/:tenant/:namespace/:topic/lastMessageId|operation/getLastMessageId?version=@pulsar:version_number@} - - - - -```Java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().getLastMessage(topic); - -``` - - - - -```` - - - -### Configure deduplication snapshot interval - -#### Get deduplication snapshot interval - -To get the topic-level deduplication snapshot interval, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics get-deduplication-snapshot-interval options - -``` - - - - -``` - -{@inject: endpoint|GET|/admin/v2/topics/:tenant/:namespace/:topic/deduplicationSnapshotInterval?version=@pulsar:version_number@} - -``` - - - - -```java - -admin.topics().getDeduplicationSnapshotInterval(topic) - -``` - - - - -```` - -#### Set deduplication snapshot interval - -To set the topic-level deduplication snapshot interval, use one of the following methods. - -> **Prerequisite** `brokerDeduplicationEnabled` must be set to `true`. 
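For example, a minimal sketch that enables the prerequisite in `conf/broker.conf` (this assumes the stock layout of that file; restart your brokers afterwards for the change to take effect):

```shell

# Deduplication must be enabled broker-side before topic-level
# deduplication snapshot intervals have any effect.
sed -i 's/^brokerDeduplicationEnabled=.*/brokerDeduplicationEnabled=true/' conf/broker.conf

```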
- -````mdx-code-block - - - -``` - -pulsar-admin topics set-deduplication-snapshot-interval options - -``` - - - - -``` - -{@inject: endpoint|POST|/admin/v2/topics/:tenant/:namespace/:topic/deduplicationSnapshotInterval?version=@pulsar:version_number@} - -``` - -```json - -{ - "interval": 1000 -} - -``` - - - - -```java - -admin.topics().setDeduplicationSnapshotInterval(topic, 1000) - -``` - - - - -```` - -#### Remove deduplication snapshot interval - -To remove the topic-level deduplication snapshot interval, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics remove-deduplication-snapshot-interval options - -``` - - - - -``` - -{@inject: endpoint|DELETE|/admin/v2/topics/:tenant/:namespace/:topic/deduplicationSnapshotInterval?version=@pulsar:version_number@} - -``` - - - - -```java - -admin.topics().removeDeduplicationSnapshotInterval(topic) - -``` - - - - -```` - - -### Configure inactive topic policies - -#### Get inactive topic policies - -To get the topic-level inactive topic policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics get-inactive-topic-policies options - -``` - - - - -``` - -{@inject: endpoint|GET|/admin/v2/topics/:tenant/:namespace/:topic/inactiveTopicPolicies?version=@pulsar:version_number@} - -``` - - - - -```java - -admin.topics().getInactiveTopicPolicies(topic) - -``` - - - - -```` - -#### Set inactive topic policies - -To set the topic-level inactive topic policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics set-inactive-topic-policies options - -``` - - - - -``` - -{@inject: endpoint|POST|/admin/v2/topics/:tenant/:namespace/:topic/inactiveTopicPolicies?version=@pulsar:version_number@} - -``` - - - - -```java - -admin.topics().setInactiveTopicPolicies(topic, inactiveTopicPolicies) - -``` - - - - -```` - -#### Remove inactive topic policies - -To remove the topic-level inactive topic policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics remove-inactive-topic-policies options - -``` - - - - -``` - -{@inject: endpoint|DELETE|/admin/v2/topics/:tenant/:namespace/:topic/inactiveTopicPolicies?version=@pulsar:version_number@} - -``` - - - - -```java - -admin.topics().removeInactiveTopicPolicies(topic) - -``` - - - - -```` - - -### Configure offload policies - -#### Get offload policies - -To get the topic-level offload policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics get-offload-policies options - -``` - - - - -``` - -{@inject: endpoint|GET|/admin/v2/topics/:tenant/:namespace/:topic/offloadPolicies|operation/getOffloadPolicies?version=@pulsar:version_number@} - -``` - - - - -```java - -admin.topics().getOffloadPolicies(topic) - -``` - - - - -```` - -#### Set offload policies - -To set the topic-level offload policies, use one of the following methods. - -````mdx-code-block - - - -``` - -pulsar-admin topics set-offload-policies options - -``` - - - - -``` - -{@inject: endpoint|POST|/admin/v2/topics/:tenant/:namespace/:topic/offloadPolicies|operation/setOffloadPolicies?version=@pulsar:version_number@} - -``` - - - - -```java - -admin.topics().setOffloadPolicies(topic, offloadPolicies) - -``` - - - - -```` - -#### Remove offload policies - -To remove the topic-level offload policies, use one of the following methods. 
- -````mdx-code-block - - - -``` - -pulsar-admin topics remove-offload-policies options - -``` - - - - -``` - -{@inject: endpoint|DELETE|/admin/v2/topics/:tenant/:namespace/:topic/offloadPolicies|operation/removeOffloadPolicies?version=@pulsar:version_number@} - -``` - - - - -```java - -admin.topics().removeOffloadPolicies(topic) - -``` - - - - -```` - -## Manage non-partitioned topics -You can use Pulsar [admin API](admin-api-overview.md) to create, delete and check status of non-partitioned topics. - -### Create -Non-partitioned topics must be explicitly created. When creating a new non-partitioned topic, you need to provide a name for the topic. - -By default, 60 seconds after creation, topics are considered inactive and deleted automatically to avoid generating trash data. To disable this feature, set `brokerDeleteInactiveTopicsEnabled` to `false`. To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to a specific value. - -For more information about the two parameters, see [here](reference-configuration.md#broker). - -You can create non-partitioned topics in the following ways. -````mdx-code-block - - - -When you create non-partitioned topics with the [`create`](reference-pulsar-admin.md#create-3) command, you need to specify the topic name as an argument. - -```shell - -$ bin/pulsar-admin topics create \ - persistent://my-tenant/my-namespace/my-topic - -``` - -:::note - -When you create a non-partitioned topic with the suffix '-partition-' followed by numeric value like 'xyz-topic-partition-x' for the topic name, if a partitioned topic with same suffix 'xyz-topic-partition-y' exists, then the numeric value(x) for the non-partitioned topic must be larger than the number of partitions(y) of the partitioned topic. Otherwise, you cannot create such a non-partitioned topic. - -::: - - - - -{@inject: endpoint|PUT|/admin/v2/:schema/:tenant/:namespace/:topic|operation/createNonPartitionedTopic?version=@pulsar:version_number@} - - - - -```java - -String topicName = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().createNonPartitionedTopic(topicName); - -``` - - - - -```` - -### Delete -You can delete non-partitioned topics in the following ways. -````mdx-code-block - - - -```shell - -$ bin/pulsar-admin topics delete \ - persistent://my-tenant/my-namespace/my-topic - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/:schema/:tenant/:namespace/:topic|operation/deleteTopic?version=@pulsar:version_number@} - - - - -```java - -admin.topics().delete(topic); - -``` - - - - -```` - -### List - -You can get the list of topics under a given namespace in the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics list tenant/namespace -persistent://tenant/namespace/topic1 -persistent://tenant/namespace/topic2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getList?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getList(namespace); - -``` - - - - -```` - -### Stats - -You can check the current statistics of a given topic. The following is an example. For description of each stats, refer to [get stats](#get-stats). 
- -```json - -{ - "msgRateIn": 4641.528542257553, - "msgThroughputIn": 44663039.74947473, - "msgRateOut": 0, - "msgThroughputOut": 0, - "averageMsgSize": 1232439.816728665, - "storageSize": 135532389160, - "publishers": [ - { - "msgRateIn": 57.855383881403576, - "msgThroughputIn": 558994.7078932219, - "averageMsgSize": 613135, - "producerId": 0, - "producerName": null, - "address": null, - "connectedSince": null - } - ], - "subscriptions": { - "my-topic_subscription": { - "msgRateOut": 0, - "msgThroughputOut": 0, - "msgBacklog": 116632, - "type": null, - "msgRateExpired": 36.98245516804671, - "consumers": [] - } - }, - "replication": {} -} - -``` - -You can check the current statistics of a given topic and its connected producers and consumers in the following ways. -````mdx-code-block - - - -```shell - -$ pulsar-admin topics stats \ - persistent://test-tenant/namespace/topic \ - --get-precise-backlog - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/stats|operation/getStats?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getStats(topic, false /* is precise backlog */); - -``` - - - - -```` - -## Manage partitioned topics -You can use Pulsar [admin API](admin-api-overview.md) to create, update, delete and check status of partitioned topics. - -### Create - -Partitioned topics must be explicitly created. When creating a new partitioned topic, you need to provide a name and the number of partitions for the topic. - -By default, 60 seconds after creation, topics are considered inactive and deleted automatically to avoid generating trash data. To disable this feature, set `brokerDeleteInactiveTopicsEnabled` to `false`. To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to a specific value. - -For more information about the two parameters, see [here](reference-configuration.md#broker). - -You can create partitioned topics in the following ways. -````mdx-code-block - - - -When you create partitioned topics with the [`create-partitioned-topic`](reference-pulsar-admin.md#create-partitioned-topic) -command, you need to specify the topic name as an argument and the number of partitions using the `-p` or `--partitions` flag. - -```shell - -$ bin/pulsar-admin topics create-partitioned-topic \ - persistent://my-tenant/my-namespace/my-topic \ - --partitions 4 - -``` - -:::note - -If a non-partitioned topic with the suffix '-partition-' followed by a numeric value like 'xyz-topic-partition-10', you can not create a partitioned topic with name 'xyz-topic', because the partitions of the partitioned topic could override the existing non-partitioned topic. To create such partitioned topic, you have to delete that non-partitioned topic first. - -::: - - - - -{@inject: endpoint|PUT|/admin/v2/:schema/:tenant/:namespace/:topic/partitions|operation/createPartitionedTopic?version=@pulsar:version_number@} - - - - -```java - -String topicName = "persistent://my-tenant/my-namespace/my-topic"; -int numPartitions = 4; -admin.topics().createPartitionedTopic(topicName, numPartitions); - -``` - - - - -```` - -### Create missed partitions - -When topic auto-creation is disabled, and you have a partitioned topic without any partitions, you can use the [`create-missed-partitions`](reference-pulsar-admin.md#create-missed-partitions) command to create partitions for the topic. 
- -````mdx-code-block - - - -You can create missed partitions with the [`create-missed-partitions`](reference-pulsar-admin.md#create-missed-partitions) command and specify the topic name as an argument. - -```shell - -$ bin/pulsar-admin topics create-missed-partitions \ - persistent://my-tenant/my-namespace/my-topic \ - -``` - - - - -{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic|operation/createMissedPartitions?version=@pulsar:version_number@} - - - - -```java - -String topicName = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().createMissedPartitions(topicName); - -``` - - - - -```` - -### Get metadata - -Partitioned topics are associated with metadata, you can view it as a JSON object. The following metadata field is available. - -Field | Description -:-----|:------- -`partitions` | The number of partitions into which the topic is divided. - -````mdx-code-block - - - -You can check the number of partitions in a partitioned topic with the [`get-partitioned-topic-metadata`](reference-pulsar-admin.md#get-partitioned-topic-metadata) subcommand. - -```shell - -$ pulsar-admin topics get-partitioned-topic-metadata \ - persistent://my-tenant/my-namespace/my-topic -{ - "partitions": 4 -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/partitions|operation/getPartitionedMetadata?version=@pulsar:version_number@} - - - - -```java - -String topicName = "persistent://my-tenant/my-namespace/my-topic"; -admin.topics().getPartitionedTopicMetadata(topicName); - -``` - - - - -```` - -### Update - -You can update the number of partitions for an existing partitioned topic *if* the topic is non-global. However, you can only add the partition number. Decrementing the number of partitions would delete the topic, which is not supported in Pulsar. - -Producers and consumers can find the newly created partitions automatically. - -````mdx-code-block - - - -You can update partitioned topics with the [`update-partitioned-topic`](reference-pulsar-admin.md#update-partitioned-topic) command. - -```shell - -$ pulsar-admin topics update-partitioned-topic \ - persistent://my-tenant/my-namespace/my-topic \ - --partitions 8 - -``` - - - - -{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:cluster/:namespace/:destination/partitions|operation/updatePartitionedTopic?version=@pulsar:version_number@} - - - - -```java - -admin.topics().updatePartitionedTopic(topic, numPartitions); - -``` - - - - -```` - -### Delete -You can delete partitioned topics with the [`delete-partitioned-topic`](reference-pulsar-admin.md#delete-partitioned-topic) command, REST API and Java. - -````mdx-code-block - - - -```shell - -$ bin/pulsar-admin topics delete-partitioned-topic \ - persistent://my-tenant/my-namespace/my-topic - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/:schema/:topic/:namespace/:destination/partitions|operation/deletePartitionedTopic?version=@pulsar:version_number@} - - - - -```java - -admin.topics().delete(topic); - -``` - - - - -```` - -### List -You can get the list of topics under a given namespace in the following ways. 
-````mdx-code-block - - - -```shell - -$ pulsar-admin topics list tenant/namespace -persistent://tenant/namespace/topic1 -persistent://tenant/namespace/topic2 - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getPartitionedTopicList?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getList(namespace); - -``` - - - - -```` - -### Stats - -You can check the current statistics of a given partitioned topic. The following is an example. For description of each stats, refer to [get stats](#get-stats). - -Note that in the subscription JSON object, `chuckedMessageRate` is deprecated. Please use `chunkedMessageRate`. Both will be sent in the JSON for now. - -```json - -{ - "msgRateIn" : 999.992947159793, - "msgThroughputIn" : 1070918.4635439808, - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "bytesInCounter" : 270318763, - "msgInCounter" : 252489, - "bytesOutCounter" : 0, - "msgOutCounter" : 0, - "averageMsgSize" : 1070.926056966454, - "msgChunkPublished" : false, - "storageSize" : 270316646, - "backlogSize" : 200921133, - "publishers" : [ { - "msgRateIn" : 999.992947159793, - "msgThroughputIn" : 1070918.4635439808, - "averageMsgSize" : 1070.3333333333333, - "chunkedMessageRate" : 0.0, - "producerId" : 0 - } ], - "subscriptions" : { - "test" : { - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "bytesOutCounter" : 0, - "msgOutCounter" : 0, - "msgRateRedeliver" : 0.0, - "chuckedMessageRate" : 0, - "chunkedMessageRate" : 0, - "msgBacklog" : 144318, - "msgBacklogNoDelayed" : 144318, - "blockedSubscriptionOnUnackedMsgs" : false, - "msgDelayed" : 0, - "unackedMessages" : 0, - "msgRateExpired" : 0.0, - "lastExpireTimestamp" : 0, - "lastConsumedFlowTimestamp" : 0, - "lastConsumedTimestamp" : 0, - "lastAckedTimestamp" : 0, - "consumers" : [ ], - "isDurable" : true, - "isReplicated" : false - } - }, - "replication" : { }, - "metadata" : { - "partitions" : 3 - }, - "partitions" : { } -} - -``` - -You can check the current statistics of a given partitioned topic and its connected producers and consumers in the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics partitioned-stats \ - persistent://test-tenant/namespace/topic \ - --per-partition - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/partitioned-stats|operation/getPartitionedStats?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getPartitionedStats(topic, true /* per partition */, false /* is precise backlog */); - -``` - - - - -```` - -### Internal stats - -You can check the detailed statistics of a topic. The following is an example. For description of each stats, refer to [get internal stats](#get-internal-stats). 
- -```json - -{ - "entriesAddedCounter": 20449518, - "numberOfEntries": 3233, - "totalSize": 331482, - "currentLedgerEntries": 3233, - "currentLedgerSize": 331482, - "lastLedgerCreatedTimestamp": "2016-06-29 03:00:23.825", - "lastLedgerCreationFailureTimestamp": null, - "waitingCursorsCount": 1, - "pendingAddEntriesCount": 0, - "lastConfirmedEntry": "324711539:3232", - "state": "LedgerOpened", - "ledgers": [ - { - "ledgerId": 324711539, - "entries": 0, - "size": 0 - } - ], - "cursors": { - "my-subscription": { - "markDeletePosition": "324711539:3133", - "readPosition": "324711539:3233", - "waitingReadOp": true, - "pendingReadOps": 0, - "messagesConsumedCounter": 20449501, - "cursorLedger": 324702104, - "cursorLedgerLastEntry": 21, - "individuallyDeletedMessages": "[(324711539:3134‥324711539:3136], (324711539:3137‥324711539:3140], ]", - "lastLedgerSwitchTimestamp": "2016-06-29 01:30:19.313", - "state": "Open" - } - } -} - -``` - -You can get the internal stats for the partitioned topic in the following ways. - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics stats-internal \ - persistent://test-tenant/namespace/topic - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/internalStats|operation/getInternalStats?version=@pulsar:version_number@} - - - - -```java - -admin.topics().getInternalStats(topic); - -``` - - - - -```` - -### Get backlog size - -You can get backlog size of a single topic partition or a nonpartitioned topic given a message ID (in bytes). - -````mdx-code-block - - - -```shell - -$ pulsar-admin topics get-backlog-size \ - -m 1:1 \ - persistent://test-tenant/ns1/tp1-partition-0 \ - -``` - - - - -{@inject: endpoint|PUT|/admin/v2/:schema/:tenant/:namespace/:topic/backlogSize|operation/getBacklogSizeByMessageId?version=@pulsar:version_number@} - - - - -```java - -String topic = "persistent://my-tenant/my-namespace/my-topic"; -MessageId messageId = MessageId.earliest; -admin.topics().getBacklogSizeByMessageId(topic, messageId); - -``` - - - - -```` - -## Publish to partitioned topics - -By default, Pulsar topics are served by a single broker, which limits the maximum throughput of a topic. *Partitioned topics* can span multiple brokers and thus allow for higher throughput. - -You can publish to partitioned topics using Pulsar client libraries. When publishing to partitioned topics, you must specify a routing mode. If you do not specify any routing mode when you create a new producer, the round robin routing mode is used. - -### Routing mode - -You can specify the routing mode in the ProducerConfiguration object that you use to configure your producer. The routing mode determines which partition(internal topic) that each message should be published to. - -The following {@inject: javadoc:MessageRoutingMode:/client/org/apache/pulsar/client/api/MessageRoutingMode} options are available. - -Mode | Description -:--------|:------------ -`RoundRobinPartition` | If no key is provided, the producer publishes messages across all partitions in round-robin policy to achieve the maximum throughput. Round-robin is not done per individual message, round-robin is set to the same boundary of batching delay to ensure that batching is effective. If a key is specified on the message, the partitioned producer hashes the key and assigns message to a particular partition. This is the default mode. -`SinglePartition` | If no key is provided, the producer picks a single partition randomly and publishes all messages into that partition. 
If a key is specified on the message, the partitioned producer hashes the key and assigns the message to a particular partition.
-`CustomPartition` | Use a custom message router implementation that is called to determine the partition for a particular message. You can create a custom routing mode by using the Java client and implementing the {@inject: javadoc:MessageRouter:/client/org/apache/pulsar/client/api/MessageRouter} interface.
-
-The following is an example:
-
-```java
-
-String pulsarBrokerRootUrl = "pulsar://localhost:6650";
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-
-PulsarClient pulsarClient = PulsarClient.builder().serviceUrl(pulsarBrokerRootUrl).build();
-Producer<byte[]> producer = pulsarClient.newProducer()
-        .topic(topic)
-        .messageRoutingMode(MessageRoutingMode.SinglePartition)
-        .create();
-producer.send("Partitioned topic message".getBytes());
-
-```
-
-### Custom message router
-
-To use a custom message router, you need to provide an implementation of the {@inject: javadoc:MessageRouter:/client/org/apache/pulsar/client/api/MessageRouter} interface, which has just one `choosePartition` method:
-
-```java
-
-public interface MessageRouter extends Serializable {
-    int choosePartition(Message<?> msg);
-}
-
-```
-
-The following router routes every message to partition 10:
-
-```java
-
-public class AlwaysTenRouter implements MessageRouter {
-    public int choosePartition(Message<?> msg) {
-        return 10;
-    }
-}
-
-```
-
-With that implementation, you can send messages as follows:
-
-```java
-
-String pulsarBrokerRootUrl = "pulsar://localhost:6650";
-String topic = "persistent://my-tenant/my-cluster-my-namespace/my-topic";
-
-PulsarClient pulsarClient = PulsarClient.builder().serviceUrl(pulsarBrokerRootUrl).build();
-Producer<byte[]> producer = pulsarClient.newProducer()
-        .topic(topic)
-        .messageRouter(new AlwaysTenRouter())
-        .create();
-producer.send("Partitioned topic message".getBytes());
-
-```
-
-### How to choose partitions when using a key
-
-If a message has a key, it supersedes the round robin routing policy. The following excerpt from the Java client's default router illustrates how the partition is chosen when a key is present:
-
-```java
-
-// If the message has a key, it supersedes the round robin routing policy
-if (msg.hasKey()) {
-    return signSafeMod(hash.makeHash(msg.getKey()), topicMetadata.numPartitions());
-}
-
-// Otherwise, round-robin across partitions; if batching is enabled, switch
-// partitions on the `partitionSwitchMs` boundary so that batching stays effective
-if (isBatchingEnabled) {
-    long currentMs = clock.millis();
-    return signSafeMod(currentMs / partitionSwitchMs + startPtnIdx, topicMetadata.numPartitions());
-} else {
-    return signSafeMod(PARTITION_INDEX_UPDATER.getAndIncrement(this), topicMetadata.numPartitions());
-}
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/administration-dashboard.md b/site2/website/versioned_docs/version-2.9.3-deprecated/administration-dashboard.md
deleted file mode 100644
index 92bd7e17869d7b..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/administration-dashboard.md
+++ /dev/null
@@ -1,76 +0,0 @@
----
-id: administration-dashboard
-title: Pulsar dashboard
-sidebar_label: "Dashboard"
-original_id: administration-dashboard
----
-
-:::note
-
-Pulsar dashboard is deprecated. We recommend you use [Pulsar Manager](administration-pulsar-manager.md) to manage and monitor the stats of your topics.
-
-:::
-
-Pulsar dashboard is a web application that enables users to monitor current stats for all [topics](reference-terminology.md#topic) in tabular form.
- -The dashboard is a data collector that polls stats from all the brokers in a Pulsar instance (across multiple clusters) and stores all the information in a [PostgreSQL](https://www.postgresql.org/) database. - -You can use the [Django](https://www.djangoproject.com) web app to render the collected data. - -## Install - -The easiest way to use the dashboard is to run it inside a [Docker](https://www.docker.com/products/docker) container. - -```shell - -$ SERVICE_URL=http://broker.example.com:8080/ -$ docker run -p 80:80 \ - -e SERVICE_URL=$SERVICE_URL \ - apachepulsar/pulsar-dashboard:@pulsar:version@ - -``` - -You can find the {@inject: github:Dockerfile:/dashboard/Dockerfile} in the `dashboard` directory and build an image from scratch as well: - -```shell - -$ docker build -t apachepulsar/pulsar-dashboard dashboard - -``` - -If token authentication is enabled: -> Provided token should have super-user access. - -```shell - -$ SERVICE_URL=http://broker.example.com:8080/ -$ JWT_TOKEN=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c -$ docker run -p 80:80 \ - -e SERVICE_URL=$SERVICE_URL \ - -e JWT_TOKEN=$JWT_TOKEN \ - apachepulsar/pulsar-dashboard - -``` - - -You need to specify only one service URL for a Pulsar cluster. Internally, the collector figures out all the existing clusters and the brokers from where it needs to pull the metrics. If you connect the dashboard to Pulsar running in standalone mode, the URL is `http://:8080` by default. `` is the IP address or hostname of the machine that runs Pulsar standalone. The IP address or hostname should be accessible from the running dashboard in the docker instance. - -Once the Docker container starts, the web dashboard is accessible via `localhost` or whichever host that Docker uses. - -> The `SERVICE_URL` that the dashboard uses needs to be reachable from inside the Docker container. - -If the Pulsar service runs in standalone mode in `localhost`, the `SERVICE_URL` has to -be the IP address of the machine. - -Similarly, given the Pulsar standalone advertises itself with localhost by default, you need to -explicitly set the advertise address to the host IP address. For example: - -```shell - -$ bin/pulsar standalone --advertised-address 1.2.3.4 - -``` - -### Known issues - -Currently, only Pulsar Token [authentication](security-overview.md#authentication-providers) is supported. diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/administration-geo.md b/site2/website/versioned_docs/version-2.9.3-deprecated/administration-geo.md deleted file mode 100644 index 1d2a9620007f4b..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/administration-geo.md +++ /dev/null @@ -1,238 +0,0 @@ ---- -id: administration-geo -title: Pulsar geo-replication -sidebar_label: "Geo-replication" -original_id: administration-geo ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -*Geo-replication* is the replication of persistently stored message data across multiple clusters of a Pulsar instance. 
-
-## How geo-replication works
-
-The diagram below illustrates the process of geo-replication across Pulsar clusters:
-
-![Replication Diagram](/assets/geo-replication.png)
-
-In this diagram, whenever **P1**, **P2**, and **P3** producers publish messages to the **T1** topic on **Cluster-A**, **Cluster-B**, and **Cluster-C** clusters respectively, those messages are instantly replicated across clusters. Once the messages are replicated, **C1** and **C2** consumers can consume those messages from their respective clusters.
-
-Without geo-replication, **C1** and **C2** consumers are not able to consume messages that **P3** producer publishes.
-
-## Geo-replication and Pulsar properties
-
-You must enable geo-replication on a per-tenant basis in Pulsar. You can enable geo-replication between clusters only when a tenant is created that allows access to both clusters.
-
-Although geo-replication must be enabled between two clusters, it is actually managed at the namespace level. You must complete the following tasks to enable geo-replication for a namespace:
-
-* [Enable geo-replication namespaces](#enable-geo-replication-namespaces)
-* Configure that namespace to replicate across two or more provisioned clusters
-
-Any message published on *any* topic in that namespace is replicated to all clusters in the specified set.
-
-## Local persistence and forwarding
-
-When messages are produced on a Pulsar topic, messages are first persisted in the local cluster, and then forwarded asynchronously to the remote clusters.
-
-In normal cases, when there are no connectivity issues, messages are replicated immediately, at the same time as they are dispatched to local consumers. Typically, the network [round-trip time](https://en.wikipedia.org/wiki/Round-trip_delay_time) (RTT) between the remote regions defines the end-to-end delivery latency.
-
-Applications can create producers and consumers in any of the clusters, even when the remote clusters are not reachable (like during a network partition).
-
-Producers and consumers can publish messages to and consume messages from any cluster in a Pulsar instance. Subscriptions are not only local to the cluster where they are created; once replicated subscriptions are enabled, they can also be transferred between clusters, with their state kept in sync. Therefore, a topic can be asynchronously replicated across multiple geographical regions. In case of failover, a consumer can restart consuming messages from the failure point in a different cluster.
-
-In the aforementioned example, the **T1** topic is replicated among three clusters, **Cluster-A**, **Cluster-B**, and **Cluster-C**.
-
-All messages produced in any of the three clusters are delivered to all subscriptions in other clusters. In this case, **C1** and **C2** consumers receive all messages that **P1**, **P2**, and **P3** producers publish. Ordering is still guaranteed on a per-producer basis.
-
-## Configure replication
-
-The following example connects three clusters: **us-east**, **us-west**, and **us-cent**.
-
-### Connect replication clusters
-
-To replicate data among clusters, you need to configure each cluster to connect to the others. You can use the [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/) tool to create a connection.
-
-**Example**
-
-Suppose that you have 3 replication clusters: `us-west`, `us-cent`, and `us-east`.
-
-1. Configure the connection from `us-west` to `us-east`.
- - Run the following command on `us-west`. - -```shell - -$ bin/pulsar-admin clusters create \ - --broker-url pulsar://: \ - --url http://: \ - us-east - -``` - - :::tip - - - If you want to use a secure connection for a cluster, you can use the flags `--broker-url-secure` and `--url-secure`. For more information, see [pulsar-admin clusters create](https://pulsar.apache.org/tools/pulsar-admin/). - - Different clusters may have different authentications. You can use the authentication flag `--auth-plugin` and `--auth-parameters` together to set cluster authentication, which overrides `brokerClientAuthenticationPlugin` and `brokerClientAuthenticationParameters` if `authenticationEnabled` sets to `true` in `broker.conf` and `standalone.conf`. For more information, see [authentication and authorization](concepts-authentication.md). - - ::: - -2. Configure the connection from `us-west` to `us-cent`. - - Run the following command on `us-west`. - -```shell - -$ bin/pulsar-admin clusters create \ - --broker-url pulsar://: \ - --url http://: \ - us-cent - -``` - -3. Run similar commands on `us-east` and `us-cent` to create connections among clusters. - -### Grant permissions to properties - -To replicate to a cluster, the tenant needs permission to use that cluster. You can grant permission to the tenant when you create the tenant or grant later. - -Specify all the intended clusters when you create a tenant: - -```shell - -$ bin/pulsar-admin tenants create my-tenant \ - --admin-roles my-admin-role \ - --allowed-clusters us-west,us-east,us-cent - -``` - -To update permissions of an existing tenant, use `update` instead of `create`. - -### Enable geo-replication - -You can enable geo-replication at **namespace** or **topic** level. - -#### Enable geo-replication at namespace level - -You can create a namespace with the following command sample. - -```shell - -$ bin/pulsar-admin namespaces create my-tenant/my-namespace - -``` - -Initially, the namespace is not assigned to any cluster. You can assign the namespace to clusters using the `set-clusters` subcommand: - -```shell - -$ bin/pulsar-admin namespaces set-clusters my-tenant/my-namespace \ - --clusters us-west,us-east,us-cent - -``` - -### Use topics with geo-replication - -Once you create a geo-replication namespace, any topics that producers or consumers create within that namespace is replicated across clusters. Typically, each application uses the `serviceUrl` for the local cluster. - -#### Selective replication - -By default, messages are replicated to all clusters configured for the namespace. You can restrict replication selectively by specifying a replication list for a message, and then that message is replicated only to the subset in the replication list. - -The following is an example for the [Java API](client-libraries-java.md). Note the use of the `setReplicationClusters` method when you construct the {@inject: javadoc:Message:/client/org/apache/pulsar/client/api/Message} object: - -```java - -List restrictReplicationTo = Arrays.asList( - "us-west", - "us-east" -); - -Producer producer = client.newProducer() - .topic("some-topic") - .create(); - -producer.newMessage() - .value("my-payload".getBytes()) - .setReplicationClusters(restrictReplicationTo) - .send(); - -``` - -#### Topic stats - -You can check topic-specific statistics for geo-replication topics using one of the following methods. - -````mdx-code-block - - - -Use the [`pulsar-admin topics stats`](https://pulsar.apache.org/tools/pulsar-admin/) command. 
- -```shell - -$ bin/pulsar-admin topics stats persistent://my-tenant/my-namespace/my-topic - -``` - - - - -{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/stats|operation/getStats?version=@pulsar:version_number@} - - - - -```` - -Each cluster reports its own local stats, including the incoming and outgoing replication rates and backlogs. - -#### Delete a geo-replication topic - -Given that geo-replication topics exist in multiple regions, directly deleting a geo-replication topic is not possible. Instead, you should rely on automatic topic garbage collection. - -In Pulsar, a topic is automatically deleted when the topic meets the following three conditions: -- no producers or consumers are connected to it; -- no subscriptions to it; -- no more messages are kept for retention. -For geo-replication topics, each region uses a fault-tolerant mechanism to decide when deleting the topic locally is safe. - -You can explicitly disable topic garbage collection by setting `brokerDeleteInactiveTopicsEnabled` to `false` in your [broker configuration](reference-configuration.md#broker). - -To delete a geo-replication topic, close all producers and consumers on the topic, and delete all of its local subscriptions in every replication cluster. When Pulsar determines that no valid subscription for the topic remains across the system, it will garbage collect the topic. - -## Replicated subscriptions - -Pulsar supports replicated subscriptions, so you can keep subscription state in sync, within a sub-second timeframe, in the context of a topic that is being asynchronously replicated across multiple geographical regions. - -In case of failover, a consumer can restart consuming from the failure point in a different cluster. - -### Enable replicated subscription - -Replicated subscription is disabled by default. You can enable replicated subscription when creating a consumer. - -```java - -Consumer consumer = client.newConsumer(Schema.STRING) - .topic("my-topic") - .subscriptionName("my-subscription") - .replicateSubscriptionState(true) - .subscribe(); - -``` - -### Advantages - - * It is easy to implement the logic. - * You can choose to enable or disable replicated subscription. - * When you enable it, the overhead is low, and it is easy to configure. - * When you disable it, the overhead is zero. - -### Limitations - -When you enable replicated subscription, you're creating a consistent distributed snapshot to establish an association between message ids from different clusters. The snapshots are taken periodically. The default value is `1 second`. It means that a consumer failing over to a different cluster can potentially receive 1 second of duplicates. You can also configure the frequency of the snapshot in the `broker.conf` file. diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/administration-isolation.md b/site2/website/versioned_docs/version-2.9.3-deprecated/administration-isolation.md deleted file mode 100644 index d2de042a2e7415..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/administration-isolation.md +++ /dev/null @@ -1,115 +0,0 @@ ---- -id: administration-isolation -title: Pulsar isolation -sidebar_label: "Pulsar isolation" -original_id: administration-isolation ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -In an organization, a Pulsar instance provides services to multiple teams. 
When organizing the resources across multiple teams, you want to make a suitable isolation plan to avoid the resource competition between different teams and applications and provide high-quality messaging service. In this case, you need to take resource isolation into consideration and weigh your intended actions against expected and unexpected consequences. - -To enforce resource isolation, you can use the Pulsar isolation policy, which allows you to allocate resources (**broker** and **bookie**) for the namespace. - -## Broker isolation - -In Pulsar, when namespaces (more specifically, namespace bundles) are assigned dynamically to brokers, the namespace isolation policy limits the set of brokers that can be used for assignment. Before topics are assigned to brokers, you can set the namespace isolation policy with a primary or a secondary regex to select desired brokers. - -You can set a namespace isolation policy for a cluster using one of the following methods. - -````mdx-code-block - - - - -``` - -pulsar-admin ns-isolation-policy set options - -``` - -For more information about the command `pulsar-admin ns-isolation-policy set options`, see [here](https://pulsar.apache.org/tools/pulsar-admin/). - -**Example** - -```shell - -bin/pulsar-admin ns-isolation-policy set \ ---auto-failover-policy-type min_available \ ---auto-failover-policy-params min_limit=1,usage_threshold=80 \ ---namespaces my-tenant/my-namespace \ ---primary 10.193.216.* my-cluster policy-name - -``` - - - - -[PUT /admin/v2/namespaces/{tenant}/{namespace}](https://pulsar.apache.org/admin-rest-api/?version=master&apiversion=v2#operation/createNamespace) - - - - -For how to set namespace isolation policy using Java admin API, see [here](https://github.com/apache/pulsar/blob/master/pulsar-client-admin/src/main/java/org/apache/pulsar/client/admin/internal/NamespacesImpl.java#L251). - - - - -```` - -## Bookie isolation - -A namespace can be isolated into user-defined groups of bookies, which guarantees all the data that belongs to the namespace is stored in desired bookies. The bookie affinity group uses the BookKeeper [rack-aware placement policy](https://bookkeeper.apache.org/docs/latest/api/javadoc/org/apache/bookkeeper/client/EnsemblePlacementPolicy.html) and it is a way to feed rack information which is stored as JSON format in znode. - -You can set a bookie affinity group using one of the following methods. - -````mdx-code-block - - - - -``` - -pulsar-admin namespaces set-bookie-affinity-group options - -``` - -For more information about the command `pulsar-admin namespaces set-bookie-affinity-group options`, see [here](https://pulsar.apache.org/tools/pulsar-admin/). - -**Example** - -```shell - -bin/pulsar-admin bookies set-bookie-rack \ ---bookie 127.0.0.1:3181 \ ---hostname 127.0.0.1:3181 \ ---group group-bookie1 \ ---rack rack1 - -bin/pulsar-admin namespaces set-bookie-affinity-group public/default \ ---primary-group group-bookie1 - -``` - - - - -[POST /admin/v2/namespaces/{tenant}/{namespace}/persistence/bookieAffinity](https://pulsar.apache.org/admin-rest-api/?version=master&apiversion=v2#operation/setBookieAffinityGroup) - - - - -For how to set bookie affinity group for a namespace using Java admin API, see [here](https://github.com/apache/pulsar/blob/master/pulsar-client-admin/src/main/java/org/apache/pulsar/client/admin/internal/NamespacesImpl.java#L1164). 
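-
-A minimal sketch of that call (the namespace and group name are illustrative, and the builder-style `BookieAffinityGroupData` API is assumed from the linked implementation):
-
-```java
-
-import org.apache.pulsar.client.admin.PulsarAdmin;
-import org.apache.pulsar.common.policies.data.BookieAffinityGroupData;
-
-PulsarAdmin admin = PulsarAdmin.builder()
-        .serviceHttpUrl("http://localhost:8080")
-        .build();
-
-// Pin the namespace's data to the bookies registered under group-bookie1
-admin.namespaces().setBookieAffinityGroup("public/default",
-        BookieAffinityGroupData.builder()
-                .bookkeeperAffinityGroupPrimary("group-bookie1")
-                .build());
-
-```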
-
-
-
-````
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/administration-load-balance.md b/site2/website/versioned_docs/version-2.9.3-deprecated/administration-load-balance.md
deleted file mode 100644
index 788c84a59317b0..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/administration-load-balance.md
+++ /dev/null
@@ -1,250 +0,0 @@
----
-id: administration-load-balance
-title: Pulsar load balance
-sidebar_label: "Load balance"
-original_id: administration-load-balance
----
-
-## Load balance across Pulsar brokers
-
-Pulsar is a horizontally scalable messaging system, so a core requirement is that the traffic in a logical cluster is balanced across all the available Pulsar brokers as evenly as possible.
-
-You can use multiple settings and tools to control the traffic distribution, which requires a bit of context about how traffic is managed in Pulsar. In most cases, though, the core requirement is met out of the box and you do not need to worry about it.
-
-## Pulsar load manager architecture
-
-The following part introduces the basic architecture of the Pulsar load manager.
-
-### Assign topics to brokers dynamically
-
-Topics are dynamically assigned to brokers based on the load conditions of all brokers in the cluster.
-
-When a client starts using new topics that are not assigned to any broker, a process is triggered to choose the best-suited broker to acquire ownership of these topics according to the load conditions.
-
-In the case of partitioned topics, different partitions are assigned to different brokers. Here "topic" means either a non-partitioned topic or one partition of a topic.
-
-The assignment is "dynamic" because it changes quickly. For example, if the broker owning a topic crashes, the topic is reassigned immediately to another broker. Another scenario is that the broker owning a topic becomes overloaded. In this case, the topic is reassigned to a less loaded broker.
-
-The stateless nature of brokers makes the dynamic assignment possible, so you can quickly expand or shrink the cluster based on usage.
-
-#### Assignment granularity
-
-The assignment of topics or partitions to brokers is not done at the topic or partition level, but at the bundle level (a higher level). The reason is to amortize the amount of information that needs to be tracked. Based on CPU, memory, traffic load, and other indicators, topics are assigned to a particular broker dynamically.
-
-Instead of individual topic or partition assignment, each broker takes ownership of a subset of the topics for a namespace. This subset is called a "*bundle*" and is effectively a sharding mechanism.
-
-The namespace is the "administrative" unit: many config knobs or operations are done at the namespace level.
-
-For assignment, a namespace is sharded into a list of "bundles", with each bundle comprising a portion of the overall hash range of the namespace.
-
-Topics are assigned to a particular bundle by taking the hash of the topic name and checking which bundle the hash falls into.
-
-Each bundle is independent of the others and thus is independently assigned to different brokers.
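-
-To check which bundle a given topic hashes into, you can use the `pulsar-admin topics bundle-range` command; the output below is illustrative:
-
-```shell
-
-$ pulsar-admin topics bundle-range persistent://my-tenant/my-namespace/my-topic
-0x40000000_0x80000000
-
-```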
-
-### Create namespaces and bundles
-
-When you create a new namespace, it is assigned the default number of bundles. You can set this default in `conf/broker.conf`:
-
-```properties
-
-# When a namespace is created without specifying the number of bundles, this
-# value will be used as the default
-defaultNumberOfNamespaceBundles=4
-
-```
-
-You can either change the system default, or override it when you create a new namespace:
-
-```shell
-
-$ bin/pulsar-admin namespaces create my-tenant/my-namespace --clusters us-west --bundles 16
-
-```
-
-With this command, you create a namespace with 16 initial bundles. Therefore, the topics for this namespace can immediately be spread across up to 16 brokers.
-
-In general, if you know the expected traffic and number of topics in advance, it is better to start with a reasonable number of bundles instead of waiting for the system to auto-correct the distribution.
-
-On the same note, it is beneficial to start with more bundles than the number of brokers, because of the hashing nature of the distribution of topics into bundles. For example, for a namespace with 1000 topics, using something like 64 bundles achieves a good distribution of traffic across 16 brokers.
-
-### Unload topics and bundles
-
-You can "unload" a topic in Pulsar with an admin operation. Unloading means closing the topic, releasing ownership, and reassigning the topic to a new broker, based on current load.
-
-When unloading happens, the client experiences a small latency blip, typically in the order of tens of milliseconds, while the topic is reassigned.
-
-Unloading is the mechanism that the load manager uses to perform load shedding, but you can also trigger the unloading manually, for example to correct the assignments and redistribute traffic even before any broker becomes overloaded.
-
-Unloading a topic has no effect on the assignment; it just closes and reopens the particular topic:
-
-```shell
-
-pulsar-admin topics unload persistent://tenant/namespace/topic
-
-```
-
-To unload all topics for a namespace and trigger reassignments:
-
-```shell
-
-pulsar-admin namespaces unload tenant/namespace
-
-```
-
-### Split namespace bundles
-
-Since the load for the topics in a bundle might change over time and predicting the load might be hard, bundle split is designed to deal with these issues. The broker splits a bundle into two, and the new smaller bundles can be reassigned to different brokers.
-
-The splitting is based on some tunable thresholds. Any existing bundle that exceeds any of the thresholds is a candidate to be split. By default, the newly split bundles are also immediately offloaded to other brokers, to facilitate the traffic distribution.
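-
-You can also trigger a split manually for a specific bundle; the following is a sketch using the `pulsar-admin namespaces split-bundle` command (the bundle range is illustrative, and `--unload` additionally unloads the resulting bundles so they can be reassigned):
-
-```shell
-
-pulsar-admin namespaces split-bundle tenant/namespace \
-  --bundle 0x00000000_0x40000000 \
-  --unload
-
-```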
-
-You can split namespace bundles in two ways, by setting `supportedNamespaceBundleSplitAlgorithms` to either `range_equally_divide` or `topic_count_equally_divide` in the `broker.conf` file. The former splits the bundle into two parts with the same hash range size; the latter splits the bundle into two parts with the same number of topics. You can also configure other parameters for namespace bundles:
-
-```properties
-
-# enable/disable namespace bundle auto split
-loadBalancerAutoBundleSplitEnabled=true
-
-# enable/disable automatic unloading of split bundles
-loadBalancerAutoUnloadSplitBundlesEnabled=true
-
-# maximum topics in a bundle, otherwise bundle split will be triggered
-loadBalancerNamespaceBundleMaxTopics=1000
-
-# maximum sessions (producers + consumers) in a bundle, otherwise bundle split will be triggered
-loadBalancerNamespaceBundleMaxSessions=1000
-
-# maximum msgRate (in + out) in a bundle, otherwise bundle split will be triggered
-loadBalancerNamespaceBundleMaxMsgRate=30000
-
-# maximum bandwidth (in + out) in a bundle, otherwise bundle split will be triggered
-loadBalancerNamespaceBundleMaxBandwidthMbytes=100
-
-# maximum number of bundles in a namespace (for auto-split)
-loadBalancerNamespaceMaximumBundles=128
-
-```
-
-### Shed load automatically
-
-Automatic load shedding is supported by the Pulsar load manager. This means that whenever the system recognizes that a particular broker is overloaded, it forces some traffic to be reassigned to less loaded brokers.
-
-When a broker is identified as overloaded, the broker forcibly "unloads" a subset of the bundles (the ones with higher traffic) that accounts for the overload percentage.
-
-For example, the default threshold is 85%, and if a broker is over quota at 95% CPU usage, then the broker unloads the percent difference plus a 5% margin: `(95% - 85%) + 5% = 15%`.
-
-Given that the selection of bundles to offload is based on traffic (as a proxy measure for CPU, network, and memory), the broker unloads bundles accounting for at least 15% of the traffic.
-
-Automatic load shedding is enabled by default, and you can disable it with this setting:
-
-```properties
-
-# Enable/disable automatic bundle unloading for load-shedding
-loadBalancerSheddingEnabled=true
-
-```
-
-Additional settings that apply to shedding:
-
-```properties
-
-# Load shedding interval. Broker periodically checks whether some traffic should be offloaded from
-# some over-loaded broker to other under-loaded brokers
-loadBalancerSheddingIntervalMinutes=1
-
-# Prevent the same topics from being shed and moved to other brokers more than once within this timeframe
-loadBalancerSheddingGracePeriodMinutes=30
-
-```
-
-#### Broker overload thresholds
-
-The determination of when a broker is overloaded is based on thresholds of CPU, network, and memory usage. Whenever any of those metrics reaches the threshold, the system triggers the shedding (if enabled).
-
-By default, the overload threshold is set at 85%:
-
-```properties
-
-# Usage threshold to determine a broker as over-loaded
-loadBalancerBrokerOverloadedThresholdPercentage=85
-
-```
-
-Pulsar gathers the usage stats from the system metrics.
-
-For network utilization, in some environments the network interface speed that Linux reports is not correct and needs to be manually overridden. This is the case for AWS EC2 instances with a 1 Gbps NIC, for which the OS reports a 10 Gbps speed.
-
-Because of the incorrect max speed, the Pulsar load manager might think the broker has not reached the NIC capacity, while in fact the broker already uses all the bandwidth and the traffic is slowed down.
-
-You can use the following setting to correct the max NIC speed:
-
-```properties
-
-# Override the auto-detection of the network interfaces max speed.
-# This option is useful in some environments (eg: EC2 VMs) where the max speed -# reported by Linux is not reflecting the real bandwidth available to the broker. -# Since the network usage is employed by the load manager to decide when a broker -# is overloaded, it is important to make sure the info is correct or override it -# with the right value here. The configured value can be a double (eg: 0.8) and that -# can be used to trigger load-shedding even before hitting on NIC limits. -loadBalancerOverrideBrokerNicSpeedGbps= - -``` - -When the value is empty, Pulsar uses the value that the OS reports. - -### Distribute anti-affinity namespaces across failure domains - -When your application has multiple namespaces and you want one of them available all the time to avoid any downtime, you can group these namespaces and distribute them across different [failure domains](reference-terminology.md#failure-domain) and different brokers. Thus, if one of the failure domains is down (due to release rollout or brokers restart), it only disrupts namespaces owned by that specific failure domain and the rest of the namespaces owned by other domains remain available without any impact. - -Such a group of namespaces has anti-affinity to each other, that is, all the namespaces in this group are [anti-affinity namespaces](reference-terminology.md#anti-affinity-namespaces) and are distributed to different failure domains in a load-balanced manner. - -As illustrated in the following figure, Pulsar has 2 failure domains (Domain1 and Domain2) and each domain has 2 brokers in it. You can create an anti-affinity namespace group that has 4 namespaces in it, and all the 4 namespaces have anti-affinity to each other. The load manager tries to distribute namespaces evenly across all the brokers in the same domain. Since each domain has 2 brokers, every broker owns one namespace from this anti-affinity namespace group, and you can see each domain owns 2 namespaces, and each broker owns 1 namespace. - -![Distribute anti-affinity namespaces across failure domains](/assets/anti-affinity-namespaces-across-failure-domains.svg) - -The load manager follows an even distribution policy across failure domains to assign anti-affinity namespaces. The following table outlines the even-distributed assignment sequence illustrated in the above figure. - -| Assignment sequence | Namespace | Failure domain candidates | Broker candidates | Selected broker | -|:---|:------------|:------------------|:------------------------------------|:-----------------| -| 1 | Namespace1 | Domain1, Domain2 | Broker1, Broker2, Broker3, Broker4 | Domain1:Broker1 | -| 2 | Namespace2 | Domain2 | Broker3, Broker4 | Domain2:Broker3 | -| 3 | Namespace3 | Domain1, Domain2 | Broker2, Broker4 | Domain1:Broker2 | -| 4 | Namespace4 | Domain2 | Broker4 | Domain2:Broker4 | - -:::tip - -* Each namespace belongs to only one anti-affinity group. If a namespace with an existing anti-affinity assignment is assigned to another anti-affinity group, the original assignment is dropped. - -* If there are more anti-affinity namespaces than failure domains, the load manager distributes namespaces evenly across all the domains, and also every domain distributes namespaces evenly across all the brokers under that domain. - -::: - -#### Create a failure domain and register brokers - -:::note - -One broker can only be registered to a single failure domain. 
-
-:::
-
-To create a domain under a specific cluster and register brokers, run the following command:
-
-```bash

-pulsar-admin clusters create-failure-domain <cluster-name> --domain-name <domain-name> --broker-list <broker-list>
-
-```
-
-You can also view, update, and delete domains under a specific cluster. For more information, refer to [Pulsar admin doc](/tools/pulsar-admin/).
-
-#### Create an anti-affinity namespace group
-
-An anti-affinity group is created automatically when the first namespace is assigned to the group. To assign a namespace to an anti-affinity group, run the following command, which sets an anti-affinity group name for a namespace:
-
-```bash
-
-pulsar-admin namespaces set-anti-affinity-group <tenant/namespace> --group <group-name>
-
-```
-
-For more information about `anti-affinity-group` related commands, refer to [Pulsar admin doc](/tools/pulsar-admin/).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/administration-proxy.md b/site2/website/versioned_docs/version-2.9.3-deprecated/administration-proxy.md
deleted file mode 100644
index 577eff7db0253a..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/administration-proxy.md
+++ /dev/null
@@ -1,125 +0,0 @@
----
-id: administration-proxy
-title: Pulsar proxy
-sidebar_label: "Pulsar proxy"
-original_id: administration-proxy
----
-
-The Pulsar proxy is an optional gateway that is used when direct connections between clients and Pulsar brokers are either infeasible or undesirable. For example, when you run Pulsar in a cloud environment, on [Kubernetes](https://kubernetes.io), or on an analogous platform, you can run the Pulsar proxy.
-
-The Pulsar proxy is not intended to be exposed on the public internet. The current design assumes network perimeter security, which can be provided, for example, by private networks.
-
-If a proxy deployment cannot be protected with network perimeter security, the alternative is to use [Pulsar's "Proxy SNI routing" feature](concepts-proxy-sni-routing.md) with a properly secured and audited solution. In that case, the Pulsar proxy component is not used at all.
-
-## Configure the proxy
-
-Before using the proxy, you need to configure it with the broker addresses in the cluster. You can either configure the broker URLs in the proxy configuration, or configure the proxy to connect directly using service discovery.
-
-> Service discovery is not recommended in a production environment.
-
-### Use broker URLs
-
-It is more secure to specify a URL to connect to the brokers.
-
-Proxy authorization requires access to ZooKeeper, so if you use broker URLs to connect to the brokers, you need to disable authorization at the proxy level. Brokers still authorize requests after the proxy forwards them.
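-
-For example, a minimal sketch of the corresponding toggle in `conf/proxy.conf` (the setting name below assumes the default proxy configuration):
-
-```properties
-# Leave authorization to the brokers, which authorize requests
-# after the proxy forwards them
-authorizationEnabled=false
-```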
-
-You can configure the broker URLs in `conf/proxy.conf` as follows:
-
-```properties
-brokerServiceURL=pulsar://brokers.example.com:6650
-brokerWebServiceURL=http://brokers.example.com:8080
-functionWorkerWebServiceURL=http://function-workers.example.com:8080
-```
-
-If you use TLS, configure the broker URLs in the following way:
-
-```properties
-brokerServiceURLTLS=pulsar+ssl://brokers.example.com:6651
-brokerWebServiceURLTLS=https://brokers.example.com:8443
-functionWorkerWebServiceURL=https://function-workers.example.com:8443
-```
-
-The hostname in the URLs provided should be a DNS entry that points to multiple brokers, or a virtual IP address backed by multiple broker IP addresses, so that the proxy does not lose connectivity to the Pulsar cluster if a single broker becomes unavailable.
-
-The ports used to connect to the brokers (6650 and 8080, or in the case of TLS, 6651 and 8443) should be open in the network ACLs.
-
-Note that if you do not use functions, you do not need to configure `functionWorkerWebServiceURL`.
-
-### Use service discovery
-
-Pulsar uses [ZooKeeper](https://zookeeper.apache.org) for service discovery. To connect the proxy to ZooKeeper, specify the following in `conf/proxy.conf`:
-
-```properties
-metadataStoreUrl=my-zk-0:2181,my-zk-1:2181,my-zk-2:2181
-configurationMetadataStoreUrl=my-zk-0:2184,my-zk-remote:2184
-```
-
-> To use service discovery, you need to open the network ACLs so that the proxy can connect to the ZooKeeper nodes through the ZooKeeper client port (port `2181`) and the configuration store client port (port `2184`).
-
-> However, service discovery is not secure: if the network ACL is open and someone compromises a proxy, they gain full access to ZooKeeper.
-
-### Restricting target broker addresses to mitigate CVE-2022-24280
-
-The Pulsar Proxy trusts clients to provide valid target broker addresses to connect to.
-Unless the Pulsar Proxy is explicitly configured to limit access, it is vulnerable as described in the security advisory [Apache Pulsar Proxy target broker address isn't validated (CVE-2022-24280)](https://github.com/apache/pulsar/wiki/CVE-2022-24280).
-
-It is necessary to limit proxied broker connections to known broker addresses by specifying the `brokerProxyAllowedHostNames` and `brokerProxyAllowedIPAddresses` settings.
-
-When specifying `brokerProxyAllowedHostNames`, it is possible to use a wildcard.
-Note that `*` is a wildcard that matches any character in the hostname, including dot (`.`) characters.
-
-It is recommended to use a pattern that matches only the desired brokers and no other hosts in the local network. Pulsar lookups use the broker's default host name, which can be overridden with the `advertisedAddress` setting in `broker.conf`.
-
-To increase security, it is also possible to restrict access with the `brokerProxyAllowedIPAddresses` setting. It is not mandatory to configure `brokerProxyAllowedIPAddresses` when `brokerProxyAllowedHostNames` is properly configured so that the pattern matches only the target brokers.
-The `brokerProxyAllowedIPAddresses` setting supports a comma-separated list of IP addresses, IP address ranges, and IP address networks [(supported format reference)](https://seancfoley.github.io/IPAddress/IPAddress/apidocs/inet/ipaddr/IPAddressString.html).
-
-Example: limiting by host name in a Kubernetes deployment:
-```yaml
-  # example of limiting to Kubernetes statefulset hostnames that contain "broker-"
-  PULSAR_PREFIX_brokerProxyAllowedHostNames: '*broker-*.*.*.svc.cluster.local'
-```
-
-Example: limiting by both host name and IP address in a `proxy.conf` file for a host deployment:
-```properties
-# require "broker" in host name
-brokerProxyAllowedHostNames=*broker*.localdomain
-# limit target ip addresses to a specific network
-brokerProxyAllowedIPAddresses=10.0.0.0/8
-```
-
-Example: limiting by multiple host name patterns and multiple IP address ranges in a `proxy.conf` file for a host deployment:
-```properties
-# require "broker" in host name
-brokerProxyAllowedHostNames=*broker*.localdomain,*broker*.otherdomain
-# limit target ip addresses to a specific network or range, demonstrating multiple supported formats
-brokerProxyAllowedIPAddresses=10.10.0.0/16,192.168.1.100-120,172.16.2.*,10.1.2.3
-```
-
-
-## Start the proxy
-
-To start the proxy:
-
-```bash
-
-$ cd /path/to/pulsar/directory
-$ bin/pulsar proxy
-
-```
-
-> You can run multiple instances of the Pulsar proxy in a cluster.
-
-## Stop the proxy
-
-The Pulsar proxy runs in the foreground by default. To stop the proxy, simply stop the process in which the proxy is running.
-
-## Proxy frontends
-
-You can run the Pulsar proxy behind some kind of load-distributing frontend, such as an [HAProxy](https://www.digitalocean.com/community/tutorials/an-introduction-to-haproxy-and-load-balancing-concepts) load balancer.
-
-## Use Pulsar clients with the proxy
-
-Once your Pulsar proxy is up and running, preferably behind a load-distributing [frontend](#proxy-frontends), clients can connect to the proxy via whichever address the frontend uses. If the address is the DNS address `pulsar.cluster.default`, for example, the connection URL for clients is `pulsar://pulsar.cluster.default:6650`.
-
-For more information on proxy configuration, refer to [Pulsar proxy](reference-configuration.md#pulsar-proxy).
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/administration-pulsar-manager.md b/site2/website/versioned_docs/version-2.9.3-deprecated/administration-pulsar-manager.md
deleted file mode 100644
index d877cce723e6ab..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/administration-pulsar-manager.md
+++ /dev/null
@@ -1,205 +0,0 @@
----
-id: administration-pulsar-manager
-title: Pulsar Manager
-sidebar_label: "Pulsar Manager"
-original_id: administration-pulsar-manager
----
-
-Pulsar Manager is a web-based GUI management and monitoring tool that helps administrators and users manage and monitor tenants, namespaces, topics, subscriptions, brokers, clusters, and so on. It supports dynamic configuration of multiple environments.
-
-:::note
-
-If you are monitoring your current stats with Pulsar dashboard, we recommend you use Pulsar Manager instead. Pulsar dashboard is deprecated.
-
-:::
-
-## Install
-
-The easiest way to use the Pulsar Manager is to run it inside a [Docker](https://www.docker.com/products/docker) container.
-
-```shell
-
-docker pull apachepulsar/pulsar-manager:v0.2.0
-docker run -it \
-    -p 9527:9527 -p 7750:7750 \
-    -e SPRING_CONFIGURATION_FILE=/pulsar-manager/pulsar-manager/application.properties \
-    apachepulsar/pulsar-manager:v0.2.0
-
-```
-
-* `SPRING_CONFIGURATION_FILE`: The configuration file for Spring.
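-
-If you maintain your own `application.properties`, the following is a sketch of mounting it into the container at the path referenced by `SPRING_CONFIGURATION_FILE` (the local file name is illustrative):
-
-```shell
-
-docker run -it \
-    -p 9527:9527 -p 7750:7750 \
-    -v $PWD/application.properties:/pulsar-manager/pulsar-manager/application.properties \
-    -e SPRING_CONFIGURATION_FILE=/pulsar-manager/pulsar-manager/application.properties \
-    apachepulsar/pulsar-manager:v0.2.0
-
-```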
- -### Set administrator account and password - - ```shell - - CSRF_TOKEN=$(curl http://localhost:7750/pulsar-manager/csrf-token) - curl \ - -H 'X-XSRF-TOKEN: $CSRF_TOKEN' \ - -H 'Cookie: XSRF-TOKEN=$CSRF_TOKEN;' \ - -H "Content-Type: application/json" \ - -X PUT http://localhost:7750/pulsar-manager/users/superuser \ - -d '{"name": "admin", "password": "apachepulsar", "description": "test", "email": "username@test.org"}' - - ``` - -You can find the docker image in the [Docker Hub](https://github.com/apache/pulsar-manager/tree/master/docker) directory and build an image from the source code as well: - -``` - -git clone https://github.com/apache/pulsar-manager -cd pulsar-manager/front-end -npm install --save -npm run build:prod -cd .. -./gradlew build -x test -cd .. -docker build -f docker/Dockerfile --build-arg BUILD_DATE=`date -u +"%Y-%m-%dT%H:%M:%SZ"` --build-arg VCS_REF=`latest` --build-arg VERSION=`latest` -t apachepulsar/pulsar-manager . - -``` - -### Use custom databases - -If you have a large amount of data, you can use a custom database. The following is an example of PostgreSQL. - -1. Initialize database and table structures using the [file](https://github.com/apache/pulsar-manager/tree/master/src/main/resources/META-INF/sql/postgresql-schema.sql). - -2. Modify the [configuration file](https://github.com/apache/pulsar-manager/blob/master/src/main/resources/application.properties) and add PostgreSQL configuration. - -``` - -spring.datasource.driver-class-name=org.postgresql.Driver -spring.datasource.url=jdbc:postgresql://127.0.0.1:5432/pulsar_manager -spring.datasource.username=postgres -spring.datasource.password=postgres - -``` - -3. Compile to generate a new executable jar package. - -``` - -./gradlew build -x test - -``` - -### Enable JWT authentication - -If you want to turn on JWT authentication, configure the following parameters: - -* `backend.jwt.token`: token for the superuser. You need to configure this parameter during cluster initialization. -* `jwt.broker.token.mode`: multiple modes of generating token, including PUBLIC, PRIVATE, and SECRET. -* `jwt.broker.public.key`: configure this option if you use the PUBLIC mode. -* `jwt.broker.private.key`: configure this option if you use the PRIVATE mode. -* `jwt.broker.secret.key`: configure this option if you use the SECRET mode. - -For more information, see [Token Authentication Admin of Pulsar](http://pulsar.apache.org/docs/en/security-token-admin/). - - -If you want to enable JWT authentication, use one of the following methods. - - -* Method 1: use command-line tool - -``` - -wget https://dist.apache.org/repos/dist/release/pulsar/pulsar-manager/pulsar-manager-0.2.0/apache-pulsar-manager-0.2.0-bin.tar.gz -tar -zxvf apache-pulsar-manager-0.2.0-bin.tar.gz -cd pulsar-manager -tar -zxvf pulsar-manager.tar -cd pulsar-manager -cp -r ../dist ui -./bin/pulsar-manager --redirect.host=http://localhost --redirect.port=9527 insert.stats.interval=600000 --backend.jwt.token=token --jwt.broker.token.mode=PRIVATE --jwt.broker.private.key=file:///path/broker-private.key --jwt.broker.public.key=file:///path/broker-public.key - -``` - -Firstly, [set the administrator account and password](#set-administrator-account-and-password) - -Secondly, log in to Pulsar manager through http://localhost:7750/ui/index.html. 

* Method 2: configure the application.properties file

```

backend.jwt.token=token

jwt.broker.token.mode=PRIVATE
jwt.broker.public.key=file:///path/broker-public.key
jwt.broker.private.key=file:///path/broker-private.key

# or
jwt.broker.token.mode=SECRET
jwt.broker.secret.key=file:///path/broker-secret.key

```

* Method 3: use Docker and enable token authentication.

```

export JWT_TOKEN="your-token"
docker run -it -p 9527:9527 -p 7750:7750 -e REDIRECT_HOST=http://localhost -e REDIRECT_PORT=9527 -e DRIVER_CLASS_NAME=org.postgresql.Driver -e URL='jdbc:postgresql://127.0.0.1:5432/pulsar_manager' -e USERNAME=pulsar -e PASSWORD=pulsar -e LOG_LEVEL=DEBUG -e JWT_TOKEN=$JWT_TOKEN -v $PWD:/data apachepulsar/pulsar-manager:v0.2.0 /bin/sh

```

* `JWT_TOKEN`: the token of the superuser configured for the broker. It is generated by the `bin/pulsar tokens create --secret-key` or `bin/pulsar tokens create --private-key` command.
* `REDIRECT_HOST`: the IP address of the front-end server.
* `REDIRECT_PORT`: the port of the front-end server.
* `DRIVER_CLASS_NAME`: the driver class name of the PostgreSQL database.
* `URL`: the JDBC URL of your PostgreSQL database, such as jdbc:postgresql://127.0.0.1:5432/pulsar_manager. The docker image automatically starts a local instance of the PostgreSQL database.
* `USERNAME`: the username of PostgreSQL.
* `PASSWORD`: the password of PostgreSQL.
* `LOG_LEVEL`: the log level.

* Method 4: use Docker and turn on **token authentication** and **token management** by private key and public key.

```

export JWT_TOKEN="your-token"
export PRIVATE_KEY="file:///pulsar-manager/secret/my-private.key"
export PUBLIC_KEY="file:///pulsar-manager/secret/my-public.key"
docker run -it -p 9527:9527 -p 7750:7750 -e REDIRECT_HOST=http://localhost -e REDIRECT_PORT=9527 -e DRIVER_CLASS_NAME=org.postgresql.Driver -e URL='jdbc:postgresql://127.0.0.1:5432/pulsar_manager' -e USERNAME=pulsar -e PASSWORD=pulsar -e LOG_LEVEL=DEBUG -e JWT_TOKEN=$JWT_TOKEN -e PRIVATE_KEY=$PRIVATE_KEY -e PUBLIC_KEY=$PUBLIC_KEY -v $PWD:/data -v $PWD/secret:/pulsar-manager/secret apachepulsar/pulsar-manager:v0.2.0 /bin/sh

```

* `JWT_TOKEN`: the token of the superuser configured for the broker. It is generated by the `bin/pulsar tokens create --private-key` command.
* `PRIVATE_KEY`: private key path mounted in the container, generated by the `bin/pulsar tokens create-key-pair` command.
* `PUBLIC_KEY`: public key path mounted in the container, generated by the `bin/pulsar tokens create-key-pair` command.
* `$PWD/secret`: the folder where the private key and public key generated by the `bin/pulsar tokens create-key-pair` command are placed locally.
* `REDIRECT_HOST`: the IP address of the front-end server.
* `REDIRECT_PORT`: the port of the front-end server.
* `DRIVER_CLASS_NAME`: the driver class name of the PostgreSQL database.
* `URL`: the JDBC URL of your PostgreSQL database, such as jdbc:postgresql://127.0.0.1:5432/pulsar_manager. The docker image automatically starts a local instance of the PostgreSQL database.
* `USERNAME`: the username of PostgreSQL.
* `PASSWORD`: the password of PostgreSQL.
* `LOG_LEVEL`: the log level.

* Method 5: use Docker and turn on **token authentication** and **token management** by secret key.
- -``` - -export JWT_TOKEN="your-token" -export SECRET_KEY="file:///pulsar-manager/secret/my-secret.key" -docker run -it -p 9527:9527 -p 7750:7750 -e REDIRECT_HOST=http://localhost -e REDIRECT_PORT=9527 -e DRIVER_CLASS_NAME=org.postgresql.Driver -e URL='jdbc:postgresql://127.0.0.1:5432/pulsar_manager' -e USERNAME=pulsar -e PASSWORD=pulsar -e LOG_LEVEL=DEBUG -e JWT_TOKEN=$JWT_TOKEN -e SECRET_KEY=$SECRET_KEY -v $PWD:/data -v $PWD/secret:/pulsar-manager/secret apachepulsar/pulsar-manager:v0.2.0 /bin/sh - -``` - -* `JWT_TOKEN`: the token of superuser configured for the broker. It is generated by the `bin/pulsar tokens create --secret-key` command. -* `SECRET_KEY`: secret key path mounted in container, generated by `bin/pulsar tokens create-secret-key` command. -* `$PWD/secret`: the folder where the secret key generated by the `bin/pulsar tokens create-secret-key` command are placed locally -* `REDIRECT_HOST`: the IP address of the front-end server. -* `REDIRECT_PORT`: the port of the front-end server. -* `DRIVER_CLASS_NAME`: the driver class name of the PostgreSQL database. -* `URL`: the JDBC URL of your PostgreSQL database, such as jdbc:postgresql://127.0.0.1:5432/pulsar_manager. The docker image automatically start a local instance of the PostgreSQL database. -* `USERNAME`: the username of PostgreSQL. -* `PASSWORD`: the password of PostgreSQL. -* `LOG_LEVEL`: the level of log. - -* For more information about backend configurations, see [here](https://github.com/apache/pulsar-manager/blob/master/src/README). -* For more information about frontend configurations, see [here](https://github.com/apache/pulsar-manager/tree/master/front-end). - -## Log in - -[Set the administrator account and password](#set-administrator-account-and-password). - -Visit http://localhost:9527 to log in. diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/administration-stats.md b/site2/website/versioned_docs/version-2.9.3-deprecated/administration-stats.md deleted file mode 100644 index ac0c03602f36d5..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/administration-stats.md +++ /dev/null @@ -1,64 +0,0 @@ ---- -id: administration-stats -title: Pulsar stats -sidebar_label: "Pulsar statistics" -original_id: administration-stats ---- - -## Partitioned topics - -|Stat|Description| -|---|---| -|msgRateIn| The sum of publish rates of all local and replication publishers in messages per second.| -|msgThroughputIn| Same as msgRateIn but in bytes per second instead of messages per second.| -|msgRateOut| The sum of dispatch rates of all local and replication consumers in messages per second.| -|msgThroughputOut| Same as msgRateOut but in bytes per second instead of messages per second.| -|averageMsgSize| Average message size, in bytes, from this publisher within the last interval.| -|storageSize| The sum of storage size of the ledgers for this topic.| -|publishers| The list of all local publishers into the topic. 
|producerId| Internal identifier for this producer on this topic.|
|producerName| Internal identifier for this producer, generated by the client library.|
|address| IP address and source port for the connection of this producer.|
|connectedSince| Timestamp when this producer was created or last reconnected.|
|subscriptions| The list of all local subscriptions to the topic.|
|my-subscription| The name of this subscription (client defined).|
|msgBacklog| The count of messages in backlog for this subscription.|
|type| The type of this subscription.|
|msgRateExpired| The rate at which messages are discarded instead of dispatched from this subscription due to TTL.|
|consumers| The list of connected consumers for this subscription.|
|consumerName| Internal identifier for this consumer, generated by the client library.|
|availablePermits| The number of messages this consumer has space for in the listen queue of the client library. A value of 0 means the queue of the client library is full and receive() is not being called. A nonzero value means this consumer is ready to be dispatched messages.|
|replication| This section gives the stats for cross-colo replication of this topic.|
|replicationBacklog| The outbound replication backlog in messages.|
|connected| Whether the outbound replicator is connected.|
|replicationDelayInSeconds| How long the oldest message has been waiting to be sent through the connection, if connected is true.|
|inboundConnection| The IP and port of the broker in the publisher connection of the remote cluster to this broker. |
|inboundConnectedSince| The TCP connection being used to publish messages to the remote cluster. If no local publishers are connected, this connection is automatically closed after a minute.|


## Topics

|Stat|Description|
|---|---|
|entriesAddedCounter| Messages published since this broker loaded this topic.|
|numberOfEntries| Total number of messages being tracked.|
|totalSize| Total storage size in bytes of all messages.|
|currentLedgerEntries| Count of messages written to the ledger currently open for writing.|
|currentLedgerSize| Size in bytes of messages written to the ledger currently open for writing.|
|lastLedgerCreatedTimestamp| Time when the last ledger was created.|
|lastLedgerCreationFailureTimestamp| Time when the last ledger creation failed.|
|waitingCursorsCount| How many cursors are caught up and waiting for a new message to be published.|
|pendingAddEntriesCount| How many messages have (asynchronous) write requests awaiting completion.|
|lastConfirmedEntry| The ledgerid:entryid of the last message successfully written. If the entryid is -1, then the ledger is open or is currently being opened but has no entries written yet.|
|state| The state of the cursor ledger. Open means you have a cursor ledger for saving updates of the markDeletePosition.|
|ledgers| The ordered list of all ledgers for this topic holding its messages.|
|cursors| The list of all cursors on this topic. Every subscription you saw in the topic stats has one.|
|markDeletePosition| The ack position: the last message the subscriber acknowledges receiving.|
|readPosition| The latest position of the subscriber for reading messages.|
|waitingReadOp| This is true when the subscription has read the latest message published to the topic and is waiting for new messages to be published.|
|pendingReadOps| The counter of outstanding read requests to the bookies in progress.|
|messagesConsumedCounter| Number of messages this cursor has acked since this broker loaded this topic.|
|cursorLedger| The ledger used to persistently store the current markDeletePosition.|
|cursorLedgerLastEntry| The last entryid used to persistently store the current markDeletePosition.|
|individuallyDeletedMessages| If acks are done out of order, shows the ranges of messages acked between the markDeletePosition and the readPosition.|
|lastLedgerSwitchTimestamp| The last time the cursor ledger was rolled over.|
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/administration-upgrade.md b/site2/website/versioned_docs/version-2.9.3-deprecated/administration-upgrade.md
deleted file mode 100644
index 72d136b6460f62..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/administration-upgrade.md
+++ /dev/null
@@ -1,168 +0,0 @@
---
id: administration-upgrade
title: Upgrade Guide
sidebar_label: "Upgrade"
original_id: administration-upgrade
---

## Upgrade guidelines

Apache Pulsar comprises multiple components: ZooKeeper, bookies, and brokers. These components are either stateful or stateless. You do not have to upgrade ZooKeeper nodes unless you have special requirements. While you upgrade, you need to pay attention to bookies (stateful) as well as brokers and proxies (stateless).

The following are some guidelines on upgrading a Pulsar cluster. Read the guidelines before upgrading.

- Backup all your configuration files before upgrading.
- Read the guide entirely, make a plan, and then execute the plan. When you make an upgrade plan, take your specific requirements and environment into consideration.
- Pay attention to the upgrading order of components. In general, you do not need to upgrade your ZooKeeper or configuration store cluster (the global ZooKeeper cluster). You need to upgrade bookies first, and then upgrade brokers, proxies, and your clients.
- If `autorecovery` is enabled, you need to disable `autorecovery` in the upgrade process, and re-enable it after completing the process.
- Read the release notes carefully for each release. Release notes contain features and configuration changes that might impact your upgrade.
- Upgrade a small subset of nodes of each type to canary test the new version before upgrading all nodes of that type in the cluster. When you have upgraded the canary nodes, let them run for a while to ensure that they work correctly.
- Upgrade one data center to verify the new version before upgrading all data centers if your cluster runs in multi-cluster replicated mode.

> Note: Currently, Apache Pulsar is compatible between versions.

## Upgrade sequence

To upgrade an Apache Pulsar cluster, follow the upgrade sequence.

1. Upgrade ZooKeeper (optional)
- Canary test: test an upgraded version in one or a small set of ZooKeeper nodes.
- Rolling upgrade: roll out the upgraded version to all ZooKeeper servers incrementally, one at a time. Monitor your dashboard during the whole rolling upgrade process.
2. Upgrade bookies
- Canary test: test an upgraded version in one or a small set of bookies.
- Rolling upgrade:

  a. Disable `autorecovery` with the following command.

   ```shell

   bin/bookkeeper shell autorecovery -disable

   ```

  b. Roll out the upgraded version to all bookies in the cluster after you determine that the version is safe from the canary test.

  c. After you upgrade all bookies, re-enable `autorecovery` with the following command.

   ```shell

   bin/bookkeeper shell autorecovery -enable

   ```

3. Upgrade brokers
- Canary test: test an upgraded version in one or a small set of brokers.
- Rolling upgrade: roll out the upgraded version to all brokers in the cluster after you determine that the version is safe from the canary test.
4. Upgrade proxies
- Canary test: test an upgraded version in one or a small set of proxies.
- Rolling upgrade: roll out the upgraded version to all proxies in the cluster after you determine that the version is safe from the canary test.

## Upgrade ZooKeeper (optional)
While you upgrade ZooKeeper servers, you can do a canary test first, and then upgrade all ZooKeeper servers in the cluster.

### Canary test

You can test an upgraded version in one of the ZooKeeper servers before upgrading all ZooKeeper servers in your cluster.

To upgrade a ZooKeeper server to a new version, complete the following steps:

1. Stop a ZooKeeper server.
2. Upgrade the binary and configuration files.
3. Start the ZooKeeper server with the new binary files.
4. Use `pulsar zookeeper-shell` to connect to the newly upgraded ZooKeeper server and run a few commands to verify if it works as expected.
5. Run the ZooKeeper server for a few days, then observe and make sure the ZooKeeper cluster runs well.

#### Canary rollback

If issues occur during the canary test, you can shut down the problematic ZooKeeper node, revert the binary and configuration, and restart ZooKeeper with the reverted binary.

### Upgrade all ZooKeeper servers

After you canary test one ZooKeeper server in your cluster, you can upgrade all ZooKeeper servers in your cluster.

You can upgrade all ZooKeeper servers one by one by following the steps in the canary test.

## Upgrade bookies

While you upgrade bookies, you can do a canary test first, and then upgrade all bookies in the cluster.
For more details, you can read the Apache BookKeeper [Upgrade guide](http://bookkeeper.apache.org/docs/latest/admin/upgrade).

### Canary test

You can test an upgraded version in one or a small set of bookies before upgrading all bookies in your cluster.

To upgrade a bookie to a new version, complete the following steps:

1. Stop a bookie.
2. Upgrade the binary and configuration files.
3. Start the bookie in `ReadOnly` mode to verify if the bookie of this new version runs well for read workload.

   ```shell

   bin/pulsar bookie --readOnly

   ```

4. When the bookie runs successfully in `ReadOnly` mode, stop the bookie and restart it in `Write/Read` mode.

   ```shell

   bin/pulsar bookie

   ```

5. Observe and make sure the cluster serves both write and read traffic.

#### Canary rollback

If issues occur during the canary test, you can shut down the problematic bookie node. Other bookies in the cluster replace this problematic bookie node via autorecovery.

### Upgrade all bookies

After you canary test some bookies in your cluster, you can upgrade all bookies in your cluster.

Before upgrading, you have to decide whether to upgrade the whole cluster at once (a downtime upgrade) or one node at a time (a rolling upgrade).

In a rolling upgrade scenario, upgrade one bookie at a time. In a downtime upgrade scenario, shut down the entire cluster, upgrade each bookie, and then start the cluster.

In both scenarios, the procedure is the same for each bookie.

1. Stop the bookie.
2. Upgrade the software (either new binary or new configuration files).
3. Start the bookie.

> **Advanced operations**
> When you upgrade a large BookKeeper cluster in a rolling upgrade scenario, upgrading one bookie at a time is slow. If you configure a rack-aware or region-aware placement policy, you can upgrade bookies rack by rack or region by region, which speeds up the whole upgrade process.

## Upgrade brokers and proxies

The upgrade procedure for brokers and proxies is the same. Brokers and proxies are `stateless`, so upgrading the two services is easy.

### Canary test

You can test an upgraded version in one or a small set of nodes before upgrading all nodes in your cluster.

To upgrade to a new version, complete the following steps:

1. Stop a broker (or proxy).
2. Upgrade the binary and configuration file.
3. Start a broker (or proxy).

#### Canary rollback

If issues occur during the canary test, you can shut down the problematic broker (or proxy) node. Revert to the old version and restart the broker (or proxy).

### Upgrade all brokers or proxies

After you canary test some brokers or proxies in your cluster, you can upgrade all brokers or proxies in your cluster.

Before upgrading, you have to decide whether to upgrade the whole cluster at once (a downtime upgrade) or in batches (a rolling upgrade).

In a rolling upgrade scenario, you can upgrade one broker or one proxy at a time if the size of the cluster is small. If your cluster is large, you can upgrade brokers or proxies in batches. When you upgrade a batch of brokers or proxies, make sure the remaining brokers and proxies in the cluster have enough capacity to handle the traffic during the upgrade.

In a downtime upgrade scenario, shut down the entire cluster, upgrade each broker or proxy, and then start the cluster.

In both scenarios, the procedure is the same for each broker or proxy.

1. Stop the broker or proxy.
2. Upgrade the software (either new binary or new configuration files).
3. Start the broker or proxy.
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/administration-zk-bk.md b/site2/website/versioned_docs/version-2.9.3-deprecated/administration-zk-bk.md
deleted file mode 100644
index f427d43d57dc1a..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/administration-zk-bk.md
+++ /dev/null
@@ -1,386 +0,0 @@
---
id: administration-zk-bk
title: ZooKeeper and BookKeeper administration
sidebar_label: "ZooKeeper and BookKeeper"
original_id: administration-zk-bk
---

Pulsar relies on two external systems for essential tasks:

* [ZooKeeper](https://zookeeper.apache.org/) is responsible for a wide variety of configuration-related and coordination-related tasks.
* [BookKeeper](http://bookkeeper.apache.org/) is responsible for [persistent storage](concepts-architecture-overview.md#persistent-storage) of message data.

ZooKeeper and BookKeeper are both open-source [Apache](https://www.apache.org/) projects.

> Skip to the [How Pulsar uses ZooKeeper and BookKeeper](#how-pulsar-uses-zookeeper-and-bookkeeper) section below for a more schematic explanation of the role of these two systems in Pulsar.


## ZooKeeper

Each Pulsar instance relies on two separate ZooKeeper quorums.

* [Local ZooKeeper](#deploy-local-zookeeper) operates at the cluster level and provides cluster-specific configuration management and coordination. Each Pulsar cluster needs to have a dedicated ZooKeeper cluster.
* [Configuration Store](#deploy-configuration-store) operates at the instance level and provides configuration management for the entire system (and thus across clusters). The configuration store quorum can be provided by an independent cluster of machines or by the same machines that local ZooKeeper uses.

### Deploy local ZooKeeper

ZooKeeper manages a variety of essential coordination-related and configuration-related tasks for Pulsar.

To deploy a Pulsar instance, you need to stand up one local ZooKeeper cluster *per Pulsar cluster*.

To begin, add all ZooKeeper servers to the quorum configuration specified in the [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file. Add a `server.N` line for each node in the cluster to the configuration, where `N` is the number of the ZooKeeper node. The following is an example for a three-node cluster:

```properties

server.1=zk1.us-west.example.com:2888:3888
server.2=zk2.us-west.example.com:2888:3888
server.3=zk3.us-west.example.com:2888:3888

```

On each host, you need to specify the node ID in the `myid` file of each node, which is in the `data/zookeeper` folder of each server by default (you can change the file location via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter).

> See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed information on `myid` and more.


On a ZooKeeper server at `zk1.us-west.example.com`, for example, you can set the `myid` value like this:

```shell

$ mkdir -p data/zookeeper
$ echo 1 > data/zookeeper/myid

```

On `zk2.us-west.example.com` the command is `echo 2 > data/zookeeper/myid` and so on.

Once you add each server to the `zookeeper.conf` configuration and each server has the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:

```shell

$ bin/pulsar-daemon start zookeeper

```

### Deploy configuration store

The ZooKeeper cluster configured and started up in the section above is a *local* ZooKeeper cluster that you can use to manage a single Pulsar cluster. In addition to a local cluster, however, a full Pulsar instance also requires a configuration store for handling some instance-level configuration and coordination tasks.

If you deploy a [single-cluster](#single-cluster-pulsar-instance) instance, you do not need a separate cluster for the configuration store. If, however, you deploy a [multi-cluster](#multi-cluster-pulsar-instance) instance, you need to stand up a separate ZooKeeper cluster for configuration tasks.

#### Single-cluster Pulsar instance

If your Pulsar instance consists of just one cluster, then you can deploy a configuration store on the same machines as the local ZooKeeper quorum but running on different TCP ports.

To deploy a ZooKeeper configuration store in a single-cluster instance, add the same ZooKeeper servers that the local quorum uses to the configuration file in [`conf/global_zookeeper.conf`](reference-configuration.md#configuration-store) using the same method for [local ZooKeeper](#local-zookeeper), but make sure to use a different port (2181 is the default for ZooKeeper). The following is an example that uses port 2184 for a three-node ZooKeeper cluster:

```properties

clientPort=2184
server.1=zk1.us-west.example.com:2185:2186
server.2=zk2.us-west.example.com:2185:2186
server.3=zk3.us-west.example.com:2185:2186

```

As before, create the `myid` files for each server in `data/global-zookeeper/myid`.

#### Multi-cluster Pulsar instance

When you deploy a global Pulsar instance, with clusters distributed across different geographical regions, the configuration store serves as a highly available and strongly consistent metadata store that can tolerate failures and partitions spanning whole regions.

The key here is to make sure the ZK quorum members are spread across at least 3 regions and that other regions run as observers.

Again, given the very low expected load on the configuration store servers, you can share the same hosts used for the local ZooKeeper quorum.

For example, assume a Pulsar instance with the clusters `us-west`, `us-east`, `us-central`, `eu-central`, and `ap-south`. Also assume that each cluster has its own local ZK servers named like:

```

zk[1-3].${CLUSTER}.example.com

```

In this scenario, you want to pick the quorum participants from a few clusters and let all the others be ZK observers. For example, to form a 7-server quorum, you can pick 3 servers from `us-west`, 2 from `us-central`, and 2 from `us-east`.

This guarantees that writes to the configuration store are possible even if one of these regions is unreachable.

The ZK configuration in all the servers looks like:

```properties

clientPort=2184
server.1=zk1.us-west.example.com:2185:2186
server.2=zk2.us-west.example.com:2185:2186
server.3=zk3.us-west.example.com:2185:2186
server.4=zk1.us-central.example.com:2185:2186
server.5=zk2.us-central.example.com:2185:2186
server.6=zk3.us-central.example.com:2185:2186:observer
server.7=zk1.us-east.example.com:2185:2186
server.8=zk2.us-east.example.com:2185:2186
server.9=zk3.us-east.example.com:2185:2186:observer
server.10=zk1.eu-central.example.com:2185:2186:observer
server.11=zk2.eu-central.example.com:2185:2186:observer
server.12=zk3.eu-central.example.com:2185:2186:observer
server.13=zk1.ap-south.example.com:2185:2186:observer
server.14=zk2.ap-south.example.com:2185:2186:observer
server.15=zk3.ap-south.example.com:2185:2186:observer

```

Additionally, ZK observers need to have:

```properties

peerType=observer

```

##### Start the service

Once your configuration store configuration is in place, you can start up the service using [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon):

```shell

$ bin/pulsar-daemon start configuration-store

```

### ZooKeeper configuration

In Pulsar, ZooKeeper configuration is handled by two separate configuration files in the `conf` directory of your Pulsar installation: `conf/zookeeper.conf` for [local ZooKeeper](#local-zookeeper) and `conf/global-zookeeper.conf` for the [configuration store](#configuration-store).

#### Local ZooKeeper

The [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file handles the configuration for local ZooKeeper. The table below shows the available parameters:

|Name|Description|Default|
|---|---|---|
|tickTime| The tick is the basic unit of time in ZooKeeper, measured in milliseconds and used to regulate things like heartbeats and timeouts. tickTime is the length of a single tick. |2000|
|initLimit| The maximum time, in ticks, that the leader ZooKeeper server allows follower ZooKeeper servers to successfully connect and sync. The tick time is set in milliseconds using the tickTime parameter. |10|
|syncLimit| The maximum time, in ticks, that a follower ZooKeeper server is allowed to sync with other ZooKeeper servers. The tick time is set in milliseconds using the tickTime parameter. |5|
|dataDir| The location where ZooKeeper stores in-memory database snapshots as well as the transaction log of updates to the database. |data/zookeeper|
|clientPort| The port on which the ZooKeeper server listens for connections. |2181|
|autopurge.snapRetainCount| In ZooKeeper, auto purge determines how many recent snapshots of the database stored in dataDir to retain within the time interval specified by autopurge.purgeInterval (while deleting the rest). |3|
|autopurge.purgeInterval| The time interval, in hours, which triggers the ZooKeeper database purge task. Setting to a non-zero number enables auto purge; setting to 0 disables. Read this guide before enabling auto purge. |1|
|maxClientCnxns| The maximum number of client connections. Increase this if you need to handle more ZooKeeper clients. |60|


#### Configuration Store

The [`conf/global-zookeeper.conf`](reference-configuration.md#configuration-store) file handles the configuration for the configuration store. The table below shows the available parameters:


## BookKeeper

BookKeeper stores all durable messages in Pulsar. BookKeeper is a distributed [write-ahead log](https://en.wikipedia.org/wiki/Write-ahead_logging) (WAL) system that guarantees read consistency of independent message logs called ledgers. Individual BookKeeper servers are also called *bookies*.

> To manage message persistence, retention, and expiry in Pulsar, refer to the [cookbook](cookbooks-retention-expiry.md).

### Hardware requirements

Bookie hosts store message data on disk. To provide optimal performance, ensure that the bookies have a suitable hardware configuration. The following are two key dimensions of bookie hardware capacity:

- Disk I/O capacity (read/write)
- Storage capacity

Message entries written to bookies are always synced to disk before returning an acknowledgement to the Pulsar broker by default. To ensure low write latency, BookKeeper is designed to use multiple devices:

- A **journal** to ensure durability. For sequential writes, it is critical to have fast [fsync](https://linux.die.net/man/2/fsync) operations on bookie hosts. Typically, small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) should suffice, or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache. Both solutions can reach fsync latency of ~0.4 ms.
- A **ledger storage device** stores data. Writes happen in the background, so write I/O is not a big concern. Reads happen sequentially most of the time, and the backlog is drained only when consumers fall behind. To store large amounts of data, a typical configuration involves multiple HDDs with a RAID controller.

### Configure BookKeeper

You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. When you configure each bookie, ensure that the [`zkServers`](reference-configuration.md#bookkeeper-zkServers) parameter is set to the connection string for the local ZooKeeper of the Pulsar cluster.

The minimum configuration changes required in `conf/bookkeeper.conf` are as follows:

:::note

Set `journalDirectory` and `ledgerDirectories` carefully. It is difficult to change them later.

:::

```properties

# Change to point to journal disk mount point
journalDirectory=data/bookkeeper/journal

# Point to ledger storage disk mount point
ledgerDirectories=data/bookkeeper/ledgers

# Point to local ZK quorum
zkServers=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181

# It is recommended to set this parameter. Otherwise, BookKeeper can't start normally in certain environments (for example, Huawei Cloud).
advertisedAddress=

```

To change the ZooKeeper root path that BookKeeper uses, use `zkLedgersRootPath=/MY-PREFIX/ledgers` instead of `zkServers=localhost:2181/MY-PREFIX`.

> For more information about BookKeeper, refer to the official [BookKeeper docs](http://bookkeeper.apache.org).

### Deploy BookKeeper

BookKeeper provides [persistent message storage](concepts-architecture-overview.md#persistent-storage) for Pulsar. Each Pulsar broker has its own cluster of bookies. The BookKeeper cluster shares a local ZooKeeper quorum with the Pulsar cluster.

### Start bookies manually

You can start a bookie in the foreground or as a background daemon.

To start a bookie in the foreground, use the [`bookkeeper`](reference-cli-tools.md#bookkeeper) CLI tool:

```bash

$ bin/bookkeeper bookie

```

To start a bookie in the background, use the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:

```bash

$ bin/pulsar-daemon start bookie

```

You can verify whether the bookie works properly with the `bookiesanity` command for the [BookKeeper shell](reference-cli-tools.md#bookkeeper-shell):

```shell

$ bin/bookkeeper shell bookiesanity

```

This command creates a new ledger on the local bookie, writes a few entries, reads them back, and finally deletes the ledger.

### Decommission bookies cleanly

Before you decommission a bookie, you need to check your environment and meet the following requirements.

1. Ensure the state of your cluster supports decommissioning the target bookie. Check if `EnsembleSize >= Write Quorum >= Ack Quorum` still holds with one less bookie.

2. Ensure the target bookie is listed in the output of the `listbookies` command.

3. Ensure that no other process is ongoing (such as an upgrade).

And then you can decommission bookies safely. To decommission bookies, complete the following steps.

1. Log in to the bookie node and check if there are underreplicated ledgers. The decommission command forces replication of the underreplicated ledgers.
`$ bin/bookkeeper shell listunderreplicated`

2. Stop the bookie by killing the bookie process. If you deploy bookies in a Kubernetes environment, make sure that no liveness/readiness probes are set up to spin them back up.

3. Run the decommission command.
   - If you have logged in to the node to be decommissioned, you do not need to provide `-bookieid`.
   - If you are running the decommission command for the target bookie node from another bookie node, you should mention the target bookie ID in the arguments for `-bookieid`
   `$ bin/bookkeeper shell decommissionbookie`
   or
   `$ bin/bookkeeper shell decommissionbookie -bookieid `

4. Validate that no ledgers are on the decommissioned bookie.
`$ bin/bookkeeper shell listledgers -bookieid `

You can run the following command to check if the bookie you have decommissioned is listed in the bookies list:

```bash

./bookkeeper shell listbookies -rw -h
./bookkeeper shell listbookies -ro -h

```

## BookKeeper persistence policies

In Pulsar, you can set *persistence policies* at the namespace level, which determine how BookKeeper handles persistent storage of messages. Policies determine four things:

* The number of acks (guaranteed copies) to wait for on each ledger entry.
* The number of bookies to use for a topic.
* The number of writes to make for each ledger entry.
* The throttling rate for mark-delete operations.

### Set persistence policies

You can set persistence policies for BookKeeper at the [namespace](reference-terminology.md#namespace) level.

#### Pulsar-admin

Use the [`set-persistence`](reference-pulsar-admin.md#namespaces-set-persistence) subcommand and specify a namespace as well as any policies that you want to apply. The available flags are:

Flag | Description | Default
:----|:------------|:-------
`-a`, `--bookkeeper-ack-quorum` | The number of acks (guaranteed copies) to wait on for each entry | 0
`-e`, `--bookkeeper-ensemble` | The number of [bookies](reference-terminology.md#bookie) to use for topics in the namespace | 0
`-w`, `--bookkeeper-write-quorum` | The number of writes to make for each entry | 0
`-r`, `--ml-mark-delete-max-rate` | Throttling rate for mark-delete operations (0 means no throttle) | 0

The following is an example (note that the ensemble must be at least as large as the ack quorum):

```shell

$ pulsar-admin namespaces set-persistence my-tenant/my-ns \
  --bookkeeper-ensemble 3 \
  --bookkeeper-ack-quorum 2

```

#### REST API

{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/setPersistence?version=@pulsar:version_number@}

#### Java

```java

int bkEnsemble = 3;
int bkWriteQuorum = 2;
int bkAckQuorum = 2;
double markDeleteRate = 0.7;
PersistencePolicies policies =
  new PersistencePolicies(bkEnsemble, bkWriteQuorum, bkAckQuorum, markDeleteRate);
admin.namespaces().setPersistence(namespace, policies);

```

### List persistence policies

You can see which persistence policy currently applies to a namespace.

#### Pulsar-admin

Use the [`get-persistence`](reference-pulsar-admin.md#namespaces-get-persistence) subcommand and specify the namespace.

The following is an example:

```shell

$ pulsar-admin namespaces get-persistence my-tenant/my-ns
{
  "bookkeeperEnsemble": 1,
  "bookkeeperWriteQuorum": 1,
  "bookkeeperAckQuorum": 1,
  "managedLedgerMaxMarkDeleteRate": 0
}

```

#### REST API

{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/getPersistence?version=@pulsar:version_number@}

#### Java

```java

PersistencePolicies policies = admin.namespaces().getPersistence(namespace);

```

## How Pulsar uses ZooKeeper and BookKeeper

This diagram illustrates the role of ZooKeeper and BookKeeper in a Pulsar cluster:

![ZooKeeper and BookKeeper](/assets/pulsar-system-architecture.png)

Each Pulsar cluster consists of one or more message brokers. Each broker relies on an ensemble of bookies.
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/client-libraries-cgo.md b/site2/website/versioned_docs/version-2.9.3-deprecated/client-libraries-cgo.md
deleted file mode 100644
index f352f942b77144..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/client-libraries-cgo.md
+++ /dev/null
@@ -1,579 +0,0 @@
---
id: client-libraries-cgo
title: Pulsar CGo client
sidebar_label: "CGo(deprecated)"
original_id: client-libraries-cgo
---

You can use the Pulsar Go client to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Go (aka Golang).

All the methods in [producers](#producers), [consumers](#consumers), and [readers](#readers) of a Go client are thread-safe.

Currently, the following Go clients are maintained in two repositories.

| Language | Project | Maintainer | License | Description |
|----------|---------|------------|---------|-------------|
| CGo | [pulsar-client-go](https://github.com/apache/pulsar/tree/master/pulsar-client-go) | [Apache Pulsar](https://github.com/apache/pulsar) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | CGo client that depends on the C++ client library |
| Go | [pulsar-client-go](https://github.com/apache/pulsar-client-go) | [Apache Pulsar](https://github.com/apache/pulsar) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | A native golang client |

> **API docs available as well**
> For standard API docs, consult the [Godoc](https://godoc.org/github.com/apache/pulsar/pulsar-client-go/pulsar).

## Installation

### Requirements

The Pulsar Go client library is based on the C++ client library. Follow
the instructions for the [C++ library](client-libraries-cpp.md) for installing the binaries through [RPM](client-libraries-cpp.md#rpm), [Deb](client-libraries-cpp.md#deb) or [Homebrew packages](client-libraries-cpp.md#macos).

### Install go package

> **Compatibility Warning**
> The version number of the Go client **must match** the version number of the Pulsar C++ client library.

You can install the `pulsar` library locally using `go get`. Note that `go get` doesn't support fetching a specific tag - it will always pull in master's version of the Go client. You'll need a C++ client library that matches master.

```bash

$ go get -u github.com/apache/pulsar/pulsar-client-go/pulsar

```

Or you can use [dep](https://github.com/golang/dep) for managing the dependencies.

```bash

$ dep ensure -add github.com/apache/pulsar/pulsar-client-go/pulsar@v@pulsar:version@

```

Once installed locally, you can import it into your project:

```go

import "github.com/apache/pulsar/pulsar-client-go/pulsar"

```

## Connection URLs

To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL.

Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme, and have a default port of 6650.
Here's an example for `localhost`:

```http

pulsar://localhost:6650

```

A URL for a production Pulsar cluster may look something like this:

```http

pulsar://pulsar.us-west.example.com:6650

```

If you're using [TLS](security-tls-authentication.md) authentication, the URL will look something like this:

```http

pulsar+ssl://pulsar.us-west.example.com:6651

```

## Create a client

In order to interact with Pulsar, you'll first need a `Client` object. You can create a client object using the `NewClient` function, passing in a `ClientOptions` object (more on configuration [below](#client-configuration)). Here's an example:

```go

import (
    "log"
    "runtime"

    "github.com/apache/pulsar/pulsar-client-go/pulsar"
)

func main() {
    client, err := pulsar.NewClient(pulsar.ClientOptions{
        URL: "pulsar://localhost:6650",
        OperationTimeoutSeconds: 5,
        MessageListenerThreads: runtime.NumCPU(),
    })

    if err != nil {
        log.Fatalf("Could not instantiate Pulsar client: %v", err)
    }
}

```

The following configurable parameters are available for Pulsar clients:

Parameter | Description | Default
:---------|:------------|:-------
`URL` | The connection URL for the Pulsar cluster. See [above](#urls) for more info |
`IOThreads` | The number of threads to use for handling connections to Pulsar [brokers](reference-terminology.md#broker) | 1
`OperationTimeoutSeconds` | The timeout for some Go client operations (creating producers, subscribing to and unsubscribing from [topics](reference-terminology.md#topic)). Retries will occur until this threshold is reached, at which point the operation will fail. | 30
`MessageListenerThreads` | The number of threads used by message listeners ([consumers](#consumers) and [readers](#readers)) | 1
`ConcurrentLookupRequests` | The number of concurrent lookup requests that can be sent on each broker connection. Setting a maximum helps to keep from overloading brokers. You should set values over the default of 5000 only if the client needs to produce and/or subscribe to thousands of Pulsar topics. | 5000
`Logger` | A custom logger implementation for the client (as a function that takes a log level, file path, line number, and message). All info, warn, and error messages will be routed to this function. | `nil`
`TLSTrustCertsFilePath` | The file path for the trusted TLS certificate |
`TLSAllowInsecureConnection` | Whether the client accepts untrusted TLS certificates from the broker | `false`
`Authentication` | Configure the authentication provider. (default: no authentication). Example: `Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem")` | `nil`
`StatsIntervalInSeconds` | The interval (in seconds) at which client stats are published | 60

## Producers

Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Go producers using a `ProducerOptions` object.
Here's an example:

```go

producer, err := client.CreateProducer(pulsar.ProducerOptions{
    Topic: "my-topic",
})

if err != nil {
    log.Fatalf("Could not instantiate Pulsar producer: %v", err)
}

defer producer.Close()

msg := pulsar.ProducerMessage{
    Payload: []byte("Hello, Pulsar"),
}

if err := producer.Send(context.Background(), msg); err != nil {
    log.Fatalf("Producer could not send message: %v", err)
}

```

> **Blocking operation**
> When you create a new Pulsar producer, the operation will block (waiting on a go channel) until either a producer is successfully created or an error is thrown.


### Producer operations

Pulsar Go producers have the following methods available:

Method | Description | Return type
:------|:------------|:-----------
`Topic()` | Fetches the producer's [topic](reference-terminology.md#topic) | `string`
`Name()` | Fetches the producer's name | `string`
`Send(context.Context, ProducerMessage)` | Publishes a [message](#messages) to the producer's topic. This call will block until the message is successfully acknowledged by the Pulsar broker, or an error will be thrown if the timeout set using the `SendTimeout` in the producer's [configuration](#producer-configuration) is exceeded. | `error`
`SendAndGetMsgID(context.Context, ProducerMessage)`| Sends a message; this call will block until it is successfully acknowledged by the Pulsar broker. | (MessageID, error)
`SendAsync(context.Context, ProducerMessage, func(ProducerMessage, error))` | Publishes a [message](#messages) to the producer's topic asynchronously. The third argument is a callback function that specifies what happens either when the message is acknowledged or an error is thrown. |
`SendAndGetMsgIDAsync(context.Context, ProducerMessage, func(MessageID, error))`| Sends a message in asynchronous mode. The callback will report back the message being published and the eventual error in publishing. |
`LastSequenceID()` | Gets the last sequence id that was published by this producer. This represents either the automatically assigned or custom sequence id (set on the ProducerMessage) that was published and acknowledged by the broker. | int64
`Flush()`| Flushes all the messages buffered in the client and waits until all messages have been successfully persisted. | error
`Close()` | Closes the producer and releases all resources allocated to it. If `Close()` is called then no more messages will be accepted from the publisher. This method will block until all pending publish requests have been persisted by Pulsar. If an error is thrown, no pending writes will be retried. | `error`
`Schema()` | Returns the schema used by the producer | Schema

Here's a more involved example usage of a producer:

```go

import (
    "context"
    "fmt"
    "log"

    "github.com/apache/pulsar/pulsar-client-go/pulsar"
)

func main() {
    // Instantiate a Pulsar client
    client, err := pulsar.NewClient(pulsar.ClientOptions{
        URL: "pulsar://localhost:6650",
    })

    if err != nil { log.Fatal(err) }

    // Use the client to instantiate a producer
    producer, err := client.CreateProducer(pulsar.ProducerOptions{
        Topic: "my-topic",
    })

    if err != nil { log.Fatal(err) }

    ctx := context.Background()

    // Send 10 messages synchronously and 10 messages asynchronously
    for i := 0; i < 10; i++ {
        // Create a message
        msg := pulsar.ProducerMessage{
            Payload: []byte(fmt.Sprintf("message-%d", i)),
        }

        // Attempt to send the message
        if err := producer.Send(ctx, msg); err != nil {
            log.Fatal(err)
        }

        // Create a different message to send asynchronously
        asyncMsg := pulsar.ProducerMessage{
            Payload: []byte(fmt.Sprintf("async-message-%d", i)),
        }

        // Attempt to send the message asynchronously and handle the response
        producer.SendAsync(ctx, asyncMsg, func(msg pulsar.ProducerMessage, err error) {
            if err != nil { log.Fatal(err) }

            fmt.Printf("the %s successfully published", string(msg.Payload))
        })
    }
}

```

### Producer configuration

Parameter | Description | Default
:---------|:------------|:-------
`Topic` | The Pulsar [topic](reference-terminology.md#topic) to which the producer will publish messages |
`Name` | A name for the producer. If you don't explicitly assign a name, Pulsar will automatically generate a globally unique name that you can access later using the `Name()` method. If you choose to explicitly assign a name, it will need to be unique across *all* Pulsar clusters, otherwise the creation operation will throw an error. |
`Properties`| Attach a set of application defined properties to the producer. These properties will be visible in the topic stats |
`SendTimeout` | When publishing a message to a topic, the producer will wait for an acknowledgment from the responsible Pulsar [broker](reference-terminology.md#broker). If a message is not acknowledged within the threshold set by this parameter, an error will be thrown. If you set `SendTimeout` to -1, the timeout will be set to infinity (and thus removed). Removing the send timeout is recommended when using Pulsar's [message de-duplication](cookbooks-deduplication.md) feature. | 30 seconds
`MaxPendingMessages` | The maximum size of the queue holding pending messages (i.e. messages waiting to receive an acknowledgment from the [broker](reference-terminology.md#broker)). By default, when the queue is full all calls to the `Send` and `SendAsync` methods will fail *unless* `BlockIfQueueFull` is set to `true`. |
`MaxPendingMessagesAcrossPartitions` | Set the number of max pending messages across all the partitions. This setting will be used to lower the max pending messages for each partition `MaxPendingMessages(int)`, if the total exceeds the configured value. |
`BlockIfQueueFull` | If set to `true`, the producer's `Send` and `SendAsync` methods will block when the outgoing message queue is full rather than failing and throwing an error (the size of that queue is dictated by the `MaxPendingMessages` parameter); if set to `false` (the default), `Send` and `SendAsync` operations will fail and throw a `ProducerQueueIsFullError` when the queue is full. | `false`
`MessageRoutingMode` | The message routing logic (for producers on [partitioned topics](concepts-architecture-overview.md#partitioned-topics)). This logic is applied only when no key is set on messages. The available options are: round robin (`pulsar.RoundRobinDistribution`, the default), publishing all messages to a single partition (`pulsar.UseSinglePartition`), or a custom partitioning scheme (`pulsar.CustomPartition`). | `pulsar.RoundRobinDistribution`
`HashingScheme` | The hashing function that determines the partition on which a particular message is published (partitioned topics only). The available options are: `pulsar.JavaStringHash` (the equivalent of `String.hashCode()` in Java), `pulsar.Murmur3_32Hash` (applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function), or `pulsar.BoostHash` (applies the hashing function from C++'s [Boost](https://www.boost.org/doc/libs/1_62_0/doc/html/hash.html) library) | `pulsar.JavaStringHash`
`CompressionType` | The message data compression type used by the producer. The available options are [`LZ4`](https://github.com/lz4/lz4), [`ZLIB`](https://zlib.net/), [`ZSTD`](https://facebook.github.io/zstd/) and [`SNAPPY`](https://google.github.io/snappy/). | No compression
`MessageRouter` | By default, Pulsar uses a round-robin routing scheme for [partitioned topics](cookbooks-partitioned.md). The `MessageRouter` parameter enables you to specify custom routing logic via a function that takes the Pulsar message and topic metadata as an argument and returns an integer (the partition to route the message to), i.e. a function signature of `func(Message, TopicMetadata) int`. |
`Batching` | Control whether automatic batching of messages is enabled for the producer. | false
`BatchingMaxPublishDelay` | Set the time period within which the messages sent will be batched (default: 1ms) if batch messages are enabled. If set to a non-zero value, messages will be queued until this time interval elapses or the batch size limit (`BatchingMaxMessages`) is reached. | 1ms
`BatchingMaxMessages` | Set the maximum number of messages permitted in a batch. (default: 1000) If set to a value greater than 1, messages will be queued until this threshold is reached or the batch interval has elapsed. | 1000

## Consumers

Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on that topic/those topics. You can [configure](#consumer-configuration) Go consumers using a `ConsumerOptions` object. Here's a basic example that uses channels:

```go

msgChannel := make(chan pulsar.ConsumerMessage)

consumerOpts := pulsar.ConsumerOptions{
    Topic:            "my-topic",
    SubscriptionName: "my-subscription-1",
    Type:             pulsar.Exclusive,
    MessageChannel:   msgChannel,
}

consumer, err := client.Subscribe(consumerOpts)

if err != nil {
    log.Fatalf("Could not establish subscription: %v", err)
}

defer consumer.Close()

for cm := range msgChannel {
    msg := cm.Message

    fmt.Printf("Message ID: %s", msg.ID())
    fmt.Printf("Message value: %s", string(msg.Payload()))

    consumer.Ack(msg)
}

```

> **Blocking operation**
> When you create a new Pulsar consumer, the operation will block (on a go channel) until either the consumer is successfully created or an error is thrown.
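The `Receive` method described in the operations table below blocks in the same way. If you need to bound how long a receive waits, you can pass in a context with a timeout. This is a minimal sketch, assuming a `consumer` created as shown above and assuming `Receive` honors context cancellation, as its `context.Context` parameter suggests:

```go

import (
    "context"
    "log"
    "time"

    "github.com/apache/pulsar/pulsar-client-go/pulsar"
)

// receiveWithTimeout waits up to five seconds for a single message,
// acknowledging it if one arrives in time.
func receiveWithTimeout(consumer pulsar.Consumer) {
    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()

    msg, err := consumer.Receive(ctx)
    if err != nil {
        // err is non-nil if the context expired before a message arrived
        log.Printf("no message received: %v", err)
        return
    }

    consumer.Ack(msg)
}

```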

### Consumer operations

Pulsar Go consumers have the following methods available:

Method | Description | Return type
:------|:------------|:-----------
`Topic()` | Returns the consumer's [topic](reference-terminology.md#topic) | `string`
`Subscription()` | Returns the consumer's subscription name | `string`
`Unsubscribe()` | Unsubscribes the consumer from the assigned topic. Throws an error if the unsubscribe operation is somehow unsuccessful. | `error`
`Receive(context.Context)` | Receives a single message from the topic. This method blocks until a message is available. | `(Message, error)`
`Ack(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) | `error`
`AckID(MessageID)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID | `error`
`AckCumulative(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message. The `AckCumulative` method will block until the ack has been sent to the broker. After that, the messages will *not* be redelivered to the consumer. Cumulative acking cannot be used with a [shared](concepts-messaging.md#shared) subscription type. | `error`
`AckCumulativeID(MessageID)` | Acks the reception of all the messages in the stream up to (and including) the provided message. This method will block until the acknowledgment has been sent to the broker. After that, the messages will not be re-delivered to this consumer. | error
`Nack(Message)` | Acknowledges the failure to process a single message. | `error`
`NackID(MessageID)` | Acknowledges the failure to process a single message. | `error`
`Close()` | Closes the consumer, disabling its ability to receive messages from the broker | `error`
`RedeliverUnackedMessages()` | Redelivers *all* unacknowledged messages on the topic. In [failover](concepts-messaging.md#failover) mode, this request is ignored if the consumer isn't active on the specified topic; in [shared](concepts-messaging.md#shared) mode, redelivered messages are distributed across all consumers connected to the topic. **Note**: this is a *non-blocking* operation that doesn't throw an error. |
`Seek(msgID MessageID)` | Resets the subscription associated with this consumer to a specific message id. The message id can either be a specific message or represent the first or last messages in the topic. | error

#### Receive example

Here's an example usage of a Go consumer that uses the `Receive()` method to process incoming messages:

```go

import (
    "context"
    "log"

    "github.com/apache/pulsar/pulsar-client-go/pulsar"
)

func main() {
    // Instantiate a Pulsar client
    client, err := pulsar.NewClient(pulsar.ClientOptions{
        URL: "pulsar://localhost:6650",
    })

    if err != nil { log.Fatal(err) }

    // Use the client object to instantiate a consumer
    consumer, err := client.Subscribe(pulsar.ConsumerOptions{
        Topic:            "my-golang-topic",
        SubscriptionName: "sub-1",
        Type:             pulsar.Exclusive,
    })

    if err != nil { log.Fatal(err) }

    defer consumer.Close()

    ctx := context.Background()

    // Listen indefinitely on the topic
    for {
        msg, err := consumer.Receive(ctx)
        if err != nil { log.Fatal(err) }

        // Do something with the message
        err = processMessage(msg)

        if err == nil {
            // Message processed successfully
            consumer.Ack(msg)
        } else {
            // Failed to process messages
            consumer.Nack(msg)
        }
    }
}

```

### Consumer configuration

Parameter | Description | Default
:---------|:------------|:-------
`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the consumer will establish a subscription and listen for messages |
`Topics` | Specify a list of topics this consumer will subscribe on. Either a topic, a list of topics, or a topics pattern is required when subscribing |
`TopicsPattern` | Specify a regular expression to subscribe to multiple topics under the same namespace. Either a topic, a list of topics, or a topics pattern is required when subscribing |
`SubscriptionName` | The subscription name for this consumer |
`Properties` | Attach a set of application defined properties to the consumer. These properties will be visible in the topic stats |
`Name` | The name of the consumer |
`AckTimeout` | Set the timeout for unacked messages | 0
`NackRedeliveryDelay` | The delay after which to redeliver the messages that failed to be processed. Default is 1min. (See `Consumer.Nack()`) | 1 minute
`Type` | Available options are `Exclusive`, `Shared`, and `Failover` | `Exclusive`
`SubscriptionInitPos` | The initial position at which the cursor will be set when subscribing | Latest
`MessageChannel` | The Go channel used by the consumer. Messages that arrive from the Pulsar topic(s) will be passed to this channel. |
`ReceiverQueueSize` | Sets the size of the consumer's receiver queue, i.e. the number of messages that can be accumulated by the consumer before the application calls `Receive`. A value higher than the default of 1000 could increase consumer throughput, though at the expense of more memory utilization. | 1000
`MaxTotalReceiverQueueSizeAcrossPartitions` | Set the max total receiver queue size across partitions. This setting will be used to reduce the receiver queue size for individual partitions if the total exceeds this value | 50000
`ReadCompacted` | If enabled, the consumer will read messages from the compacted topic rather than reading the full message backlog of the topic. This means that, if the topic has been compacted, the consumer will only see the latest value for each key in the topic, up until the point in the topic message backlog that has been compacted. Beyond that point, the messages will be sent as normal. |

## Readers

Pulsar readers process messages from Pulsar topics.
Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recent unacked message). You can [configure](#reader-configuration) Go readers using a `ReaderOptions` object. Here's an example: - -```go - -reader, err := client.CreateReader(pulsar.ReaderOptions{ - Topic: "my-golang-topic", - StartMessageId: pulsar.LatestMessage, -}) - -``` - -> **Blocking operation** -> When you create a new Pulsar reader, the operation will block (on a go channel) until either a reader is successfully created or an error is thrown. - - -### Reader operations - -Pulsar Go readers have the following methods available: - -Method | Description | Return type -:------|:------------|:----------- -`Topic()` | Returns the reader's [topic](reference-terminology.md#topic) | `string` -`Next(context.Context)` | Receives the next message on the topic (analogous to the `Receive` method for [consumers](#consumer-operations)). This method blocks until a message is available. | `(Message, error)` -`HasNext()` | Check if there is any message available to read from the current position| (bool, error) -`Close()` | Closes the reader, disabling its ability to receive messages from the broker | `error` - -#### "Next" example - -Here's an example usage of a Go reader that uses the `Next()` method to process incoming messages: - -```go - -import ( - "context" - "log" - - "github.com/apache/pulsar/pulsar-client-go/pulsar" -) - -func main() { - // Instantiate a Pulsar client - client, err := pulsar.NewClient(pulsar.ClientOptions{ - URL: "pulsar://localhost:6650", - }) - - if err != nil { log.Fatalf("Could not create client: %v", err) } - - // Use the client to instantiate a reader - reader, err := client.CreateReader(pulsar.ReaderOptions{ - Topic: "my-golang-topic", - StartMessageID: pulsar.EarliestMessage, - }) - - if err != nil { log.Fatalf("Could not create reader: %v", err) } - - defer reader.Close() - - ctx := context.Background() - - // Listen on the topic for incoming messages - for { - msg, err := reader.Next(ctx) - if err != nil { log.Fatalf("Error reading from topic: %v", err) } - - // Process the message - } -} - -``` - -In the example above, the reader begins reading from the earliest available message (specified by `pulsar.EarliestMessage`). The reader can also begin reading from the latest message (`pulsar.LatestMessage`) or some other message ID specified by bytes using the `DeserializeMessageID` function, which takes a byte array and returns a `MessageID` object. Here's an example: - -```go - -lastSavedId := // Read last saved message id from external store as byte[] - -reader, err := client.CreateReader(pulsar.ReaderOptions{ - Topic: "my-golang-topic", - StartMessageID: DeserializeMessageID(lastSavedId), -}) - -``` - -### Reader configuration - -Parameter | Description | Default -:---------|:------------|:------- -`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the reader will establish a subscription and listen for messages -`Name` | The name of the reader -`StartMessageID` | The initial reader position, i.e. the message at which the reader begins processing messages. The options are `pulsar.EarliestMessage` (the earliest available message on the topic), `pulsar.LatestMessage` (the latest available message on the topic), or a `MessageID` object for a position that isn't earliest or latest. | -`MessageChannel` | The Go channel used by the reader. 
Messages that arrive from the Pulsar topic(s) will be passed to this channel. |
`ReceiverQueueSize` | Sets the size of the reader's receiver queue, i.e. the number of messages that can be accumulated by the reader before the application calls `Next`. A value higher than the default of 1000 could increase reader throughput, though at the expense of more memory utilization. | 1000
`SubscriptionRolePrefix` | The subscription role prefix. | `reader`
`ReadCompacted` | If enabled, the reader reads messages from the compacted topic rather than the topic's full message backlog. This means that, if the topic has been compacted, the reader only sees the latest value for each key in the topic, up until the point in the topic message backlog that has been compacted. Beyond that point, messages are sent as normal. | 

## Messages

The Pulsar Go client provides a `ProducerMessage` interface that you can use to construct messages to produce on Pulsar topics. Here's an example message:

```go

msg := pulsar.ProducerMessage{
    Payload: []byte("Here is some message data"),
    Key: "message-key",
    Properties: map[string]string{
        "foo": "bar",
    },
    EventTime: time.Now(),
    ReplicationClusters: []string{"cluster1", "cluster3"},
}

if err := producer.Send(context.Background(), msg); err != nil {
    log.Fatalf("Could not publish message due to: %v", err)
}

```

The following parameters are available for `ProducerMessage` objects:

Parameter | Description
:---------|:-----------
`Payload` | The actual data payload of the message
`Value` | `Value` and `Payload` are mutually exclusive; use `Value interface{}` for schema messages.
`Key` | The optional key associated with the message (particularly useful for things like topic compaction)
`Properties` | A key-value map (both keys and values must be strings) for any application-specific metadata attached to the message
`EventTime` | The timestamp associated with the message
`ReplicationClusters` | The clusters to which this message will be replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default.
`SequenceID` | Sets the sequence ID to assign to the current message

## TLS encryption and authentication

In order to use [TLS encryption](security-tls-transport.md), you need to configure your client as follows:

 * Use the `pulsar+ssl` URL type
 * Set `TLSTrustCertsFilePath` to the path of the trust certificate used by your client and the Pulsar broker
 * Configure the `Authentication` option

Here's an example:

```go

opts := pulsar.ClientOptions{
    URL: "pulsar+ssl://my-cluster.com:6651",
    TLSTrustCertsFilePath: "/path/to/certs/ca-cert.pem",
    Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem"),
}

```

## Schema

This example shows how to create a producer and a consumer with schema; the `testJson` struct it relies on is sketched below.
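
The `testJson` struct is not defined elsewhere on this page; here is a minimal sketch that matches the Avro schema definition used in the example (the exact field tags are an assumption, mirroring the JSON schema examples in the newer Go client docs):

```go

// testJson mirrors the two fields declared in exampleSchemaDef.
type testJson struct {
    ID   int    `json:"id"`
    Name string `json:"name"`
}

```

The full example: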
- -```go - -var exampleSchemaDef = "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," + - "\"fields\":[{\"name\":\"ID\",\"type\":\"int\"},{\"name\":\"Name\",\"type\":\"string\"}]}" -jsonSchema := NewJsonSchema(exampleSchemaDef, nil) -// create producer -producer, err := client.CreateProducerWithSchema(ProducerOptions{ - Topic: "jsonTopic", -}, jsonSchema) -err = producer.Send(context.Background(), ProducerMessage{ - Value: &testJson{ - ID: 100, - Name: "pulsar", - }, -}) -if err != nil { - log.Fatal(err) -} -defer producer.Close() -//create consumer -var s testJson -consumerJS := NewJsonSchema(exampleSchemaDef, nil) -consumer, err := client.SubscribeWithSchema(ConsumerOptions{ - Topic: "jsonTopic", - SubscriptionName: "sub-2", -}, consumerJS) -if err != nil { - log.Fatal(err) -} -msg, err := consumer.Receive(context.Background()) -if err != nil { - log.Fatal(err) -} -err = msg.GetValue(&s) -if err != nil { - log.Fatal(err) -} -fmt.Println(s.ID) // output: 100 -fmt.Println(s.Name) // output: pulsar -defer consumer.Close() - -``` - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/client-libraries-cpp.md b/site2/website/versioned_docs/version-2.9.3-deprecated/client-libraries-cpp.md deleted file mode 100644 index 455cf02116d502..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/client-libraries-cpp.md +++ /dev/null @@ -1,708 +0,0 @@ ---- -id: client-libraries-cpp -title: Pulsar C++ client -sidebar_label: "C++" -original_id: client-libraries-cpp ---- - -You can use Pulsar C++ client to create Pulsar producers and consumers in C++. - -All the methods in producer, consumer, and reader of a C++ client are thread-safe. - -## Supported platforms - -Pulsar C++ client is supported on **Linux** ,**MacOS** and **Windows** platforms. - -[Doxygen](http://www.doxygen.nl/)-generated API docs for the C++ client are available [here](/api/cpp). - -## System requirements - -You need to install the following components before using the C++ client: - -* [CMake](https://cmake.org/) -* [Boost](http://www.boost.org/) -* [Protocol Buffers](https://developers.google.com/protocol-buffers/) >= 3 -* [libcurl](https://curl.se/libcurl/) -* [Google Test](https://github.com/google/googletest) - -## Linux - -### Compilation - -1. Clone the Pulsar repository. - -```shell - -$ git clone https://github.com/apache/pulsar - -``` - -2. Install all necessary dependencies. - -```shell - -$ apt-get install cmake libssl-dev libcurl4-openssl-dev liblog4cxx-dev \ - libprotobuf-dev protobuf-compiler libboost-all-dev google-mock libgtest-dev libjsoncpp-dev - -``` - -3. Compile and install [Google Test](https://github.com/google/googletest). - -```shell - -# libgtest-dev version is 1.18.0 or above -$ cd /usr/src/googletest -$ sudo cmake . -$ sudo make -$ sudo cp ./googlemock/libgmock.a ./googlemock/gtest/libgtest.a /usr/lib/ - -# less than 1.18.0 -$ cd /usr/src/gtest -$ sudo cmake . -$ sudo make -$ sudo cp libgtest.a /usr/lib - -$ cd /usr/src/gmock -$ sudo cmake . -$ sudo make -$ sudo cp libgmock.a /usr/lib - -``` - -4. Compile the Pulsar client library for C++ inside the Pulsar repository. - -```shell - -$ cd pulsar-client-cpp -$ cmake . -$ make - -``` - -After you install the components successfully, the files `libpulsar.so` and `libpulsar.a` are in the `lib` folder of the repository. The tools `perfProducer` and `perfConsumer` are in the `perf` directory. - -### Install Dependencies - -> Since 2.1.0 release, Pulsar ships pre-built RPM and Debian packages. 
You can download and install those packages directly. - -After you download and install RPM or DEB, the `libpulsar.so`, `libpulsarnossl.so`, `libpulsar.a`, and `libpulsarwithdeps.a` libraries are in your `/usr/lib` directory. - -By default, they are built in code path `${PULSAR_HOME}/pulsar-client-cpp`. You can build with the command below. - - `cmake . -DBUILD_TESTS=OFF -DLINK_STATIC=ON && make pulsarShared pulsarSharedNossl pulsarStatic pulsarStaticWithDeps -j 3`. - -These libraries rely on some other libraries. If you want to get detailed version of dependencies, see [RPM](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/pkg/rpm/Dockerfile) or [DEB](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/pkg/deb/Dockerfile) files. - -1. `libpulsar.so` is a shared library, containing statically linked `boost` and `openssl`. It also dynamically links all other necessary libraries. You can use this Pulsar library with the command below. - -```bash - - g++ --std=c++11 PulsarTest.cpp -o test /usr/lib/libpulsar.so -I/usr/local/ssl/include - -``` - -2. `libpulsarnossl.so` is a shared library, similar to `libpulsar.so` except that the libraries `openssl` and `crypto` are dynamically linked. You can use this Pulsar library with the command below. - -```bash - - g++ --std=c++11 PulsarTest.cpp -o test /usr/lib/libpulsarnossl.so -lssl -lcrypto -I/usr/local/ssl/include -L/usr/local/ssl/lib - -``` - -3. `libpulsar.a` is a static library. You need to load dependencies before using this library. You can use this Pulsar library with the command below. - -```bash - - g++ --std=c++11 PulsarTest.cpp -o test /usr/lib/libpulsar.a -lssl -lcrypto -ldl -lpthread -I/usr/local/ssl/include -L/usr/local/ssl/lib -lboost_system -lboost_regex -lcurl -lprotobuf -lzstd -lz - -``` - -4. `libpulsarwithdeps.a` is a static library, based on `libpulsar.a`. It is archived in the dependencies of `libboost_regex`, `libboost_system`, `libcurl`, `libprotobuf`, `libzstd` and `libz`. You can use this Pulsar library with the command below. - -```bash - - g++ --std=c++11 PulsarTest.cpp -o test /usr/lib/libpulsarwithdeps.a -lssl -lcrypto -ldl -lpthread -I/usr/local/ssl/include -L/usr/local/ssl/lib - -``` - -The `libpulsarwithdeps.a` does not include library openssl related libraries `libssl` and `libcrypto`, because these two libraries are related to security. It is more reasonable and easier to use the versions provided by the local system to handle security issues and upgrade libraries. - -### Install RPM - -1. Download a RPM package from the links in the table. - -| Link | Crypto files | -|------|--------------| -| [client](@pulsar:dist_rpm:client@) | [asc](@pulsar:dist_rpm:client@.asc), [sha512](@pulsar:dist_rpm:client@.sha512) | -| [client-debuginfo](@pulsar:dist_rpm:client-debuginfo@) | [asc](@pulsar:dist_rpm:client-debuginfo@.asc), [sha512](@pulsar:dist_rpm:client-debuginfo@.sha512) | -| [client-devel](@pulsar:dist_rpm:client-devel@) | [asc](@pulsar:dist_rpm:client-devel@.asc), [sha512](@pulsar:dist_rpm:client-devel@.sha512) | - -2. Install the package using the following command. - -```bash - -$ rpm -ivh apache-pulsar-client*.rpm - -``` - -After you install RPM successfully, Pulsar libraries are in the `/usr/lib` directory. - -:::note - -If you get the error that `libpulsar.so: cannot open shared object file: No such file or directory` when starting Pulsar client, you may need to run `ldconfig` first. - -::: - -### Install Debian - -1. Download a Debian package from the links in the table. 
- -| Link | Crypto files | -|------|--------------| -| [client](@pulsar:deb:client@) | [asc](@pulsar:dist_deb:client@.asc), [sha512](@pulsar:dist_deb:client@.sha512) | -| [client-devel](@pulsar:deb:client-devel@) | [asc](@pulsar:dist_deb:client-devel@.asc), [sha512](@pulsar:dist_deb:client-devel@.sha512) | - -2. Install the package using the following command. - -```bash - -$ apt install ./apache-pulsar-client*.deb - -``` - -After you install DEB successfully, Pulsar libraries are in the `/usr/lib` directory. - -### Build - -> If you want to build RPM and Debian packages from the latest master, follow the instructions below. You should run all the instructions at the root directory of your cloned Pulsar repository. - -There are recipes that build RPM and Debian packages containing a -statically linked `libpulsar.so` / `libpulsarnossl.so` / `libpulsar.a` / `libpulsarwithdeps.a` with all required dependencies. - -To build the C++ library packages, you need to build the Java packages first. - -```shell - -mvn install -DskipTests - -``` - -#### RPM - -To build the RPM inside a Docker container, use the command below. The RPMs are in the `pulsar-client-cpp/pkg/rpm/RPMS/x86_64/` path. - -```shell - -pulsar-client-cpp/pkg/rpm/docker-build-rpm.sh - -``` - -| Package name | Content | -|-----|-----| -| pulsar-client | Shared library `libpulsar.so` and `libpulsarnossl.so` | -| pulsar-client-devel | Static library `libpulsar.a`, `libpulsarwithdeps.a`and C++ and C headers | -| pulsar-client-debuginfo | Debug symbols for `libpulsar.so` | - -#### Debian - -To build Debian packages, enter the following command. - -```shell - -pulsar-client-cpp/pkg/deb/docker-build-deb.sh - -``` - -Debian packages are created in the `pulsar-client-cpp/pkg/deb/BUILD/DEB/` path. - -| Package name | Content | -|-----|-----| -| pulsar-client | Shared library `libpulsar.so` and `libpulsarnossl.so` | -| pulsar-client-dev | Static library `libpulsar.a`, `libpulsarwithdeps.a` and C++ and C headers | - -## MacOS - -### Compilation - -1. Clone the Pulsar repository. - -```shell - -$ git clone https://github.com/apache/pulsar - -``` - -2. Install all necessary dependencies. - -```shell - -# OpenSSL installation -$ brew install openssl -$ export OPENSSL_INCLUDE_DIR=/usr/local/opt/openssl/include/ -$ export OPENSSL_ROOT_DIR=/usr/local/opt/openssl/ - -# Protocol Buffers installation -$ brew install protobuf boost boost-python log4cxx -# If you are using python3, you need to install boost-python3 - -# Google Test installation -$ git clone https://github.com/google/googletest.git -$ cd googletest -$ git checkout release-1.12.1 -$ cmake . -$ make install - -``` - -3. Compile the Pulsar client library in the repository that you cloned. - -```shell - -$ cd pulsar-client-cpp -$ cmake . -$ make - -``` - -### Install `libpulsar` - -Pulsar releases are available in the [Homebrew](https://brew.sh/) core repository. You can install the C++ client library with the following command. The package is installed with the library and headers. - -```shell - -brew install libpulsar - -``` - -## Windows (64-bit) - -### Compilation - -1. Clone the Pulsar repository. - -```shell - -$ git clone https://github.com/apache/pulsar - -``` - -2. Install all necessary dependencies. - -```shell - -cd ${PULSAR_HOME}/pulsar-client-cpp -vcpkg install --feature-flags=manifests --triplet x64-windows - -``` - -3. Build C++ libraries. 
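   (The commands below assume your working directory is still `${PULSAR_HOME}/pulsar-client-cpp`, as set in step 2.)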

```shell

cmake -B ./build -A x64 -DBUILD_PYTHON_WRAPPER=OFF -DBUILD_TESTS=OFF -DVCPKG_TRIPLET=x64-windows -DCMAKE_BUILD_TYPE=Release -S .
cmake --build ./build --config Release

```

> **NOTE**
>
> 1. For Windows 32-bit, you need to use `-A Win32` and `-DVCPKG_TRIPLET=x86-windows`.
> 2. For MSVC Debug mode, you need to replace `Release` with `Debug` for both the `CMAKE_BUILD_TYPE` variable and the `--config` option.

4. Client libraries are available in the following places.

```

${PULSAR_HOME}/pulsar-client-cpp/build/lib/Release/pulsar.lib
${PULSAR_HOME}/pulsar-client-cpp/build/lib/Release/pulsar.dll

```

## Connection URLs

To connect to Pulsar using client libraries, you need to specify a Pulsar protocol URL.

Pulsar protocol URLs are assigned to specific clusters and use the `pulsar` URI scheme. The default port is `6650`. The following is an example for localhost.

```http

pulsar://localhost:6650

```

In a production Pulsar cluster, the URL looks as follows.

```http

pulsar://pulsar.us-west.example.com:6650

```

If you use TLS authentication, you need to add `ssl` to the scheme (`pulsar+ssl`), and the default port is `6651`. The following is an example.

```http

pulsar+ssl://pulsar.us-west.example.com:6651

```

## Create a consumer

To use Pulsar as a consumer, you need to create a consumer on the C++ client. There are two main ways of using the consumer:
- [Blocking style](#blocking-example): synchronously calling `receive(msg)`.
- [Non-blocking](#consumer-with-a-message-listener) (event-based) style: using a message listener.

### Blocking example

The benefit of this approach is that it is the simplest code: the consumer simply keeps calling `receive(msg)`, which blocks until a message is received.

This example starts a subscription at the earliest offset and consumes 100 messages.

```c++

#include <pulsar/Client.h>

#include <iostream>

using namespace pulsar;

int main() {
    Client client("pulsar://localhost:6650");

    Consumer consumer;
    ConsumerConfiguration config;
    config.setSubscriptionInitialPosition(InitialPositionEarliest);
    Result result = client.subscribe("persistent://public/default/my-topic", "consumer-1", config, consumer);
    if (result != ResultOk) {
        std::cout << "Failed to subscribe: " << result << std::endl;
        return -1;
    }

    Message msg;
    int ctr = 0;
    // consume 100 messages
    while (ctr < 100) {
        consumer.receive(msg);
        std::cout << "Received: " << msg
                  << " with payload '" << msg.getDataAsString() << "'" << std::endl;

        consumer.acknowledge(msg);
        ctr++;
    }

    std::cout << "Finished consuming synchronously!" << std::endl;

    client.close();
    return 0;
}

```

### Consumer with a message listener

You can avoid running a loop with blocking calls by using an event-based style: a message listener is invoked for each message that is received.

This example starts a subscription at the earliest offset and consumes 100 messages.
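
The listener callback registered via `ConsumerConfiguration::setMessageListener` receives the consumer the message arrived on plus the message itself. As a sketch of its shape (assuming the `MessageListener` typedef from the client headers):

```c++

// Assumed shape of the listener callback type accepted by setMessageListener.
typedef std::function<void(Consumer consumer, const Message& msg)> MessageListener;

```

The full example: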

```c++

#include <pulsar/Client.h>

#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

using namespace pulsar;

std::atomic<int> messagesReceived;

void handleAckComplete(Result res) {
    std::cout << "Ack res: " << res << std::endl;
}

void listener(Consumer consumer, const Message& msg) {
    std::cout << "Got message " << msg << " with content '" << msg.getDataAsString() << "'" << std::endl;
    messagesReceived++;
    consumer.acknowledgeAsync(msg.getMessageId(), handleAckComplete);
}

int main() {
    Client client("pulsar://localhost:6650");

    Consumer consumer;
    ConsumerConfiguration config;
    config.setMessageListener(listener);
    config.setSubscriptionInitialPosition(InitialPositionEarliest);
    Result result = client.subscribe("persistent://public/default/my-topic", "consumer-1", config, consumer);
    if (result != ResultOk) {
        std::cout << "Failed to subscribe: " << result << std::endl;
        return -1;
    }

    // wait for 100 messages to be consumed
    while (messagesReceived < 100) {
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }

    std::cout << "Finished consuming asynchronously!" << std::endl;

    client.close();
    return 0;
}

```

## Create a producer

To use Pulsar as a producer, you need to create a producer on the C++ client. There are two main ways of using a producer:
- [Blocking style](#simple-blocking-example): each call to `send` waits for an ack from the broker.
- [Non-blocking asynchronous style](#non-blocking-example): `sendAsync` is called instead of `send`, and a callback is supplied for when the ack is received from the broker.

### Simple blocking example

This example sends 100 messages using the blocking style. While simple, it does not produce high throughput, as it waits for each ack to come back before sending the next message.

```c++

#include <pulsar/Client.h>

#include <chrono>
#include <iostream>
#include <string>
#include <thread>

using namespace pulsar;

int main() {
    Client client("pulsar://localhost:6650");

    Producer producer;
    Result result = client.createProducer("persistent://public/default/my-topic", producer);
    if (result != ResultOk) {
        std::cout << "Error creating producer: " << result << std::endl;
        return -1;
    }

    // Send 100 messages synchronously
    int ctr = 0;
    while (ctr < 100) {
        std::string content = "msg" + std::to_string(ctr);
        Message msg = MessageBuilder().setContent(content).setProperty("x", "1").build();
        Result result = producer.send(msg);
        if (result != ResultOk) {
            std::cout << "The message " << content << " could not be sent, received code: " << result << std::endl;
        } else {
            std::cout << "The message " << content << " sent successfully" << std::endl;
        }

        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        ctr++;
    }

    std::cout << "Finished producing synchronously!" << std::endl;

    client.close();
    return 0;
}

```

### Non-blocking example

This example sends 100 messages using the non-blocking style, calling `sendAsync` instead of `send`. This allows the producer to have multiple messages in flight at a time, which increases throughput.

The producer configuration `blockIfQueueFull` is useful here to avoid `ResultProducerQueueIsFull` errors when the internal queue for outgoing send requests becomes full. Once the internal queue is full, `sendAsync` becomes blocking, which can make your code simpler.

Without this configuration, the result code `ResultProducerQueueIsFull` is passed to the callback. You must decide how to deal with that (retry, discard, etc.).
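
For instance, a send callback can branch on that result code. A minimal sketch, using the same callback signature as the example below (the retry and discard strategies themselves are placeholders):

```c++

// Hypothetical handling of a full outgoing queue in a send callback.
void callback(Result code, const MessageId& msgId, std::string msgContent) {
    if (code == ResultProducerQueueIsFull) {
        // Retryable: back off and call sendAsync again with the same message.
    } else if (code != ResultOk) {
        // Non-retryable failure: log it or route the message to a dead letter store.
    }
}

```

The complete non-blocking example follows.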
- -```c++ - -#include -#include - -using namespace pulsar; - -std::atomic acksReceived; - -void callback(Result code, const MessageId& msgId, std::string msgContent) { - // message processing logic here - std::cout << "Received ack for msg: " << msgContent << " with code: " - << code << " -- MsgID: " << msgId << std::endl; - acksReceived++; -} - -int main() { - Client client("pulsar://localhost:6650"); - - ProducerConfiguration producerConf; - producerConf.setBlockIfQueueFull(true); - Producer producer; - Result result = client.createProducer("persistent://public/default/my-topic", - producerConf, producer); - if (result != ResultOk) { - std::cout << "Error creating producer: " << result << std::endl; - return -1; - } - - // Send 100 messages asynchronously - int ctr = 0; - while (ctr < 100) { - std::string content = "msg" + std::to_string(ctr); - Message msg = MessageBuilder().setContent(content).setProperty("x", "1").build(); - producer.sendAsync(msg, std::bind(callback, - std::placeholders::_1, std::placeholders::_2, content)); - - std::this_thread::sleep_for(std::chrono::milliseconds(100)); - ctr++; - } - - // wait for 100 messages to be acked - while (acksReceived < 100) { - std::this_thread::sleep_for(std::chrono::milliseconds(100)); - } - - std::cout << "Finished producing asynchronously!" << std::endl; - - client.close(); - return 0; -} - -``` - -### Partitioned topics and lazy producers - -When scaling out a Pulsar topic, you may configure a topic to have hundreds of partitions. Likewise, you may have also scaled out your producers so there are hundreds or even thousands of producers. This can put some strain on the Pulsar brokers as when you create a producer on a partitioned topic, internally it creates one internal producer per partition which involves communications to the brokers for each one. So for a topic with 1000 partitions and 1000 producers, it ends up creating 1,000,000 internal producers across the producer applications, each of which has to communicate with a broker to find out which broker it should connect to and then perform the connection handshake. - -You can reduce the load caused by this combination of a large number of partitions and many producers by doing the following: -- use SinglePartition partition routing mode (this ensures that all messages are only sent to a single, randomly selected partition) -- use non-keyed messages (when messages are keyed, routing is based on the hash of the key and so messages will end up being sent to multiple partitions) -- use lazy producers (this ensures that an internal producer is only created on demand when a message needs to be routed to a partition) - -With our example above, that reduces the number of internal producers spread out over the 1000 producer apps from 1,000,000 to just 1000. - -Note that there can be extra latency for the first message sent. If you set a low send timeout, this timeout could be reached if the initial connection handshake is slow to complete. - -```c++ - -ProducerConfiguration producerConf; -producerConf.setPartitionsRoutingMode(ProducerConfiguration::UseSinglePartition); -producerConf.setLazyStartPartitionedProducers(true); - -``` - -## Enable authentication in connection URLs -If you use TLS authentication when connecting to Pulsar, you need to add `ssl` in the connection URLs, and the default port is `6651`. The following is an example. 
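
On top of the TLS settings shown in the example, you can also enable hostname verification on the client configuration. A one-line sketch (`setValidateHostName` is part of `ClientConfiguration`; enabling it here is our addition, not part of the original example):

```cpp

// Reject broker certificates whose subject does not match the broker hostname.
config.setValidateHostName(true);

```

The full example: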
- -```cpp - -ClientConfiguration config = ClientConfiguration(); -config.setUseTls(true); -config.setTlsTrustCertsFilePath("/path/to/cacert.pem"); -config.setTlsAllowInsecureConnection(false); -config.setAuth(pulsar::AuthTls::create( - "/path/to/client-cert.pem", "/path/to/client-key.pem");); - -Client client("pulsar+ssl://my-broker.com:6651", config); - -``` - -For complete examples, refer to [C++ client examples](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp/examples). - -## Schema - -This section describes some examples about schema. For more information about -schema, see [Pulsar schema](schema-get-started.md). - -### Avro schema - -- The following example shows how to create a producer with an Avro schema. - - ```cpp - - static const std::string exampleSchema = - "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," - "\"fields\":[{\"name\":\"a\",\"type\":\"int\"},{\"name\":\"b\",\"type\":\"int\"}]}"; - Producer producer; - ProducerConfiguration producerConf; - producerConf.setSchema(SchemaInfo(AVRO, "Avro", exampleSchema)); - client.createProducer("topic-avro", producerConf, producer); - - ``` - -- The following example shows how to create a consumer with an Avro schema. - - ```cpp - - static const std::string exampleSchema = - "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," - "\"fields\":[{\"name\":\"a\",\"type\":\"int\"},{\"name\":\"b\",\"type\":\"int\"}]}"; - ConsumerConfiguration consumerConf; - Consumer consumer; - consumerConf.setSchema(SchemaInfo(AVRO, "Avro", exampleSchema)); - client.subscribe("topic-avro", "sub-2", consumerConf, consumer) - - ``` - -### ProtobufNative schema - -The following example shows how to create a producer and a consumer with a ProtobufNative schema. -​ -1. Generate the `User` class using Protobuf3. - - :::note - - You need to use Protobuf3 or later versions. - - ::: - -​ - - ```protobuf - - syntax = "proto3"; - - message User { - string name = 1; - int32 age = 2; - } - - ``` - -​ -2. Include the `ProtobufNativeSchema.h` in your source code. Ensure the Protobuf dependency has been added to your project. -​ - - ```c++ - - #include - - ``` - -​ -3. Create a producer to send a `User` instance. -​ - - ```c++ - - ProducerConfiguration producerConf; - producerConf.setSchema(createProtobufNativeSchema(User::GetDescriptor())); - Producer producer; - client.createProducer("topic-protobuf", producerConf, producer); - User user; - user.set_name("my-name"); - user.set_age(10); - std::string content; - user.SerializeToString(&content); - producer.send(MessageBuilder().setContent(content).build()); - - ``` - -​ -4. Create a consumer to receive a `User` instance. 
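   (As in the producer step above, `client` is assumed to be an existing, connected `pulsar::Client` instance.)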
-​ - - ```c++ - - ConsumerConfiguration consumerConf; - consumerConf.setSchema(createProtobufNativeSchema(User::GetDescriptor())); - consumerConf.setSubscriptionInitialPosition(InitialPositionEarliest); - Consumer consumer; - client.subscribe("topic-protobuf", "my-sub", consumerConf, consumer); - Message msg; - consumer.receive(msg); - User user2; - user2.ParseFromArray(msg.getData(), msg.getLength()); - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/client-libraries-dotnet.md b/site2/website/versioned_docs/version-2.9.3-deprecated/client-libraries-dotnet.md deleted file mode 100644 index b574fa0b2e5ed8..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/client-libraries-dotnet.md +++ /dev/null @@ -1,434 +0,0 @@ ---- -id: client-libraries-dotnet -title: Pulsar C# client -sidebar_label: "C#" -original_id: client-libraries-dotnet ---- - -You can use the Pulsar C# client (DotPulsar) to create Pulsar producers and consumers in C#. All the methods in the producer, consumer, and reader of a C# client are thread-safe. The official documentation for DotPulsar is available [here](https://github.com/apache/pulsar-dotpulsar/wiki). - -## Installation - -You can install the Pulsar C# client library either through the dotnet CLI or through the Visual Studio. This section describes how to install the Pulsar C# client library through the dotnet CLI. For information about how to install the Pulsar C# client library through the Visual Studio , see [here](https://docs.microsoft.com/en-us/visualstudio/mac/nuget-walkthrough?view=vsmac-2019). - -### Prerequisites - -Install the [.NET Core SDK](https://dotnet.microsoft.com/download/), which provides the dotnet command-line tool. Starting in Visual Studio 2017, the dotnet CLI is automatically installed with any .NET Core related workloads. - -### Procedures - -To install the Pulsar C# client library, following these steps: - -1. Create a project. - - 1. Create a folder for the project. - - 2. Open a terminal window and switch to the new folder. - - 3. Create the project using the following command. - - ``` - - dotnet new console - - ``` - - 4. Use `dotnet run` to test that the app has been created properly. - -2. Add the DotPulsar NuGet package. - - 1. Use the following command to install the `DotPulsar` package. - - ``` - - dotnet add package DotPulsar - - ``` - - 2. After the command completes, open the `.csproj` file to see the added reference. - - ```xml - - - - - - ``` - -## Client - -This section describes some configuration examples for the Pulsar C# client. - -### Create client - -This example shows how to create a Pulsar C# client connected to localhost. - -```c# - -var client = PulsarClient.Builder().Build(); - -``` - -To create a Pulsar C# client by using the builder, you can specify the following options. - -| Option | Description | Default | -| ---- | ---- | ---- | -| ServiceUrl | Set the service URL for the Pulsar cluster. | pulsar://localhost:6650 | -| RetryInterval | Set the time to wait before retrying an operation or a reconnection. | 3s | - -### Create producer - -This section describes how to create a producer. - -- Create a producer by using the builder. - - ```c# - - var producer = client.NewProducer() - .Topic("persistent://public/default/mytopic") - .Create(); - - ``` - -- Create a producer without using the builder. 
- - ```c# - - var options = new ProducerOptions("persistent://public/default/mytopic"); - var producer = client.CreateProducer(options); - - ``` - -### Create consumer - -This section describes how to create a consumer. - -- Create a consumer by using the builder. - - ```c# - - var consumer = client.NewConsumer() - .SubscriptionName("MySubscription") - .Topic("persistent://public/default/mytopic") - .Create(); - - ``` - -- Create a consumer without using the builder. - - ```c# - - var options = new ConsumerOptions("MySubscription", "persistent://public/default/mytopic"); - var consumer = client.CreateConsumer(options); - - ``` - -### Create reader - -This section describes how to create a reader. - -- Create a reader by using the builder. - - ```c# - - var reader = client.NewReader() - .StartMessageId(MessageId.Earliest) - .Topic("persistent://public/default/mytopic") - .Create(); - - ``` - -- Create a reader without using the builder. - - ```c# - - var options = new ReaderOptions(MessageId.Earliest, "persistent://public/default/mytopic"); - var reader = client.CreateReader(options); - - ``` - -### Configure encryption policies - -The Pulsar C# client supports four kinds of encryption policies: - -- `EnforceUnencrypted`: always use unencrypted connections. -- `EnforceEncrypted`: always use encrypted connections) -- `PreferUnencrypted`: use unencrypted connections, if possible. -- `PreferEncrypted`: use encrypted connections, if possible. - -This example shows how to set the `EnforceUnencrypted` encryption policy. - -```c# - -var client = PulsarClient.Builder() - .ConnectionSecurity(EncryptionPolicy.EnforceEncrypted) - .Build(); - -``` - -### Configure authentication - -Currently, the Pulsar C# client supports the TLS (Transport Layer Security) and JWT (JSON Web Token) authentication. - -If you have followed [Authentication using TLS](security-tls-authentication.md), you get a certificate and a key. To use them from the Pulsar C# client, follow these steps: - -1. Create an unencrypted and password-less pfx file. - - ```c# - - openssl pkcs12 -export -keypbe NONE -certpbe NONE -out admin.pfx -inkey admin.key.pem -in admin.cert.pem -passout pass: - - ``` - -2. Use the admin.pfx file to create an X509Certificate2 and pass it to the Pulsar C# client. - - ```c# - - var clientCertificate = new X509Certificate2("admin.pfx"); - var client = PulsarClient.Builder() - .AuthenticateUsingClientCertificate(clientCertificate) - .Build(); - - ``` - -## Producer - -A producer is a process that attaches to a topic and publishes messages to a Pulsar broker for processing. This section describes some configuration examples about the producer. - -## Send data - -This example shows how to send data. - -```c# - -var data = Encoding.UTF8.GetBytes("Hello World"); -await producer.Send(data); - -``` - -### Send messages with customized metadata - -- Send messages with customized metadata by using the builder. - - ```c# - - var data = Encoding.UTF8.GetBytes("Hello World"); - var messageId = await producer.NewMessage() - .Property("SomeKey", "SomeValue") - .Send(data); - - ``` - -- Send messages with customized metadata without using the builder. - - ```c# - - var data = Encoding.UTF8.GetBytes("Hello World"); - var metadata = new MessageMetadata(); - metadata["SomeKey"] = "SomeValue"; - var messageId = await producer.Send(metadata, data)); - - ``` - -## Consumer - -A consumer is a process that attaches to a topic through a subscription and then receives messages. 
This section describes some configuration examples about the consumer. - -### Receive messages - -This example shows how a consumer receives messages from a topic. - -```c# - -await foreach (var message in consumer.Messages()) -{ - Console.WriteLine("Received: " + Encoding.UTF8.GetString(message.Data.ToArray())); -} - -``` - -### Acknowledge messages - -Messages can be acknowledged individually or cumulatively. For details about message acknowledgement, see [acknowledgement](concepts-messaging.md#acknowledgement). - -- Acknowledge messages individually. - - ```c# - - await foreach (var message in consumer.Messages()) - { - Console.WriteLine("Received: " + Encoding.UTF8.GetString(message.Data.ToArray())); - } - - ``` - -- Acknowledge messages cumulatively. - - ```c# - - await consumer.AcknowledgeCumulative(message); - - ``` - -### Unsubscribe from topics - -This example shows how a consumer unsubscribes from a topic. - -```c# - -await consumer.Unsubscribe(); - -``` - -#### Note - -> A consumer cannot be used and is disposed once the consumer unsubscribes from a topic. - -## Reader - -A reader is actually just a consumer without a cursor. This means that Pulsar does not keep track of your progress and there is no need to acknowledge messages. - -This example shows how a reader receives messages. - -```c# - -await foreach (var message in reader.Messages()) -{ - Console.WriteLine("Received: " + Encoding.UTF8.GetString(message.Data.ToArray())); -} - -``` - -## Monitoring - -This section describes how to monitor the producer, consumer, and reader state. - -### Monitor producer - -The following table lists states available for the producer. - -| State | Description | -| ---- | ----| -| Closed | The producer or the Pulsar client has been disposed. | -| Connected | All is well. | -| Disconnected | The connection is lost and attempts are being made to reconnect. | -| Faulted | An unrecoverable error has occurred. | - -This example shows how to monitor the producer state. - -```c# - -private static async ValueTask Monitor(IProducer producer, CancellationToken cancellationToken) -{ - var state = ProducerState.Disconnected; - - while (!cancellationToken.IsCancellationRequested) - { - state = await producer.StateChangedFrom(state, cancellationToken); - - var stateMessage = state switch - { - ProducerState.Connected => $"The producer is connected", - ProducerState.Disconnected => $"The producer is disconnected", - ProducerState.Closed => $"The producer has closed", - ProducerState.Faulted => $"The producer has faulted", - _ => $"The producer has an unknown state '{state}'" - }; - - Console.WriteLine(stateMessage); - - if (producer.IsFinalState(state)) - return; - } -} - -``` - -### Monitor consumer state - -The following table lists states available for the consumer. - -| State | Description | -| ---- | ----| -| Active | All is well. | -| Inactive | All is well. The subscription type is `Failover` and you are not the active consumer. | -| Closed | The consumer or the Pulsar client has been disposed. | -| Disconnected | The connection is lost and attempts are being made to reconnect. | -| Faulted | An unrecoverable error has occurred. | -| ReachedEndOfTopic | No more messages are delivered. | - -This example shows how to monitor the consumer state. 
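
Like the producer monitor above, this method is intended to run concurrently with message processing; a hypothetical wiring from startup code:

```c#

// Start the state monitor as a fire-and-forget task alongside consuming.
var cts = new CancellationTokenSource();
_ = Monitor(consumer, cts.Token);

```

The monitor itself: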
- -```c# - -private static async ValueTask Monitor(IConsumer consumer, CancellationToken cancellationToken) -{ - var state = ConsumerState.Disconnected; - - while (!cancellationToken.IsCancellationRequested) - { - state = await consumer.StateChangedFrom(state, cancellationToken); - - var stateMessage = state switch - { - ConsumerState.Active => "The consumer is active", - ConsumerState.Inactive => "The consumer is inactive", - ConsumerState.Disconnected => "The consumer is disconnected", - ConsumerState.Closed => "The consumer has closed", - ConsumerState.ReachedEndOfTopic => "The consumer has reached end of topic", - ConsumerState.Faulted => "The consumer has faulted", - _ => $"The consumer has an unknown state '{state}'" - }; - - Console.WriteLine(stateMessage); - - if (consumer.IsFinalState(state)) - return; - } -} - -``` - -### Monitor reader state - -The following table lists states available for the reader. - -| State | Description | -| ---- | ----| -| Closed | The reader or the Pulsar client has been disposed. | -| Connected | All is well. | -| Disconnected | The connection is lost and attempts are being made to reconnect. -| Faulted | An unrecoverable error has occurred. | -| ReachedEndOfTopic | No more messages are delivered. | - -This example shows how to monitor the reader state. - -```c# - -private static async ValueTask Monitor(IReader reader, CancellationToken cancellationToken) -{ - var state = ReaderState.Disconnected; - - while (!cancellationToken.IsCancellationRequested) - { - state = await reader.StateChangedFrom(state, cancellationToken); - - var stateMessage = state switch - { - ReaderState.Connected => "The reader is connected", - ReaderState.Disconnected => "The reader is disconnected", - ReaderState.Closed => "The reader has closed", - ReaderState.ReachedEndOfTopic => "The reader has reached end of topic", - ReaderState.Faulted => "The reader has faulted", - _ => $"The reader has an unknown state '{state}'" - }; - - Console.WriteLine(stateMessage); - - if (reader.IsFinalState(state)) - return; - } -} - -``` - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/client-libraries-go.md b/site2/website/versioned_docs/version-2.9.3-deprecated/client-libraries-go.md deleted file mode 100644 index 6281b03dd8c805..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/client-libraries-go.md +++ /dev/null @@ -1,885 +0,0 @@ ---- -id: client-libraries-go -title: Pulsar Go client -sidebar_label: "Go" -original_id: client-libraries-go ---- - -> Tips: Currently, the CGo client will be deprecated, if you want to know more about the CGo client, please refer to [CGo client docs](client-libraries-cgo.md) - -You can use Pulsar [Go client](https://github.com/apache/pulsar-client-go) to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Go (aka Golang). - -> **API docs available as well** -> For standard API docs, consult the [Godoc](https://godoc.org/github.com/apache/pulsar-client-go/pulsar). - - -## Installation - -### Install go package - -You can install the `pulsar` library locally using `go get`. - -```bash - -$ go get -u "github.com/apache/pulsar-client-go/pulsar" - -``` - -Once installed locally, you can import it into your project: - -```go - -import "github.com/apache/pulsar-client-go/pulsar" - -``` - -## Connection URLs - -To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL. 
- -Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme and have a default port of 6650. Here's an example for `localhost`: - -```http - -pulsar://localhost:6650 - -``` - -If you have multiple brokers, you can set the URL as below. - -``` - -pulsar://localhost:6550,localhost:6651,localhost:6652 - -``` - -A URL for a production Pulsar cluster may look something like this: - -```http - -pulsar://pulsar.us-west.example.com:6650 - -``` - -If you're using [TLS](security-tls-authentication.md) authentication, the URL will look like something like this: - -```http - -pulsar+ssl://pulsar.us-west.example.com:6651 - -``` - -## Create a client - -In order to interact with Pulsar, you'll first need a `Client` object. You can create a client object using the `NewClient` function, passing in a `ClientOptions` object (more on configuration [below](#client-configuration)). Here's an example: - -```go - -import ( - "log" - "time" - - "github.com/apache/pulsar-client-go/pulsar" -) - -func main() { - client, err := pulsar.NewClient(pulsar.ClientOptions{ - URL: "pulsar://localhost:6650", - OperationTimeout: 30 * time.Second, - ConnectionTimeout: 30 * time.Second, - }) - if err != nil { - log.Fatalf("Could not instantiate Pulsar client: %v", err) - } - - defer client.Close() -} - -``` - -If you have multiple brokers, you can initiate a client object as below. - -```go - -import ( - "log" - "time" - "github.com/apache/pulsar-client-go/pulsar" -) - -func main() { - client, err := pulsar.NewClient(pulsar.ClientOptions{ - URL: "pulsar://localhost:6650,localhost:6651,localhost:6652", - OperationTimeout: 30 * time.Second, - ConnectionTimeout: 30 * time.Second, - }) - if err != nil { - log.Fatalf("Could not instantiate Pulsar client: %v", err) - } - - defer client.Close() -} - -``` - -The following configurable parameters are available for Pulsar clients: - - Name | Description | Default -| :-------- | :---------- |:---------- | -| URL | Configure the service URL for the Pulsar service.

    If you have multiple brokers, you can set multiple Pulsar cluster addresses for a client.

    This parameter is **required**. |None |
| ConnectionTimeout | Timeout for the establishment of a TCP connection | 30s |
| OperationTimeout | Set the operation timeout. Producer-create, subscribe, and unsubscribe operations are retried until this interval elapses, after which the operation is marked as failed | 30s|
| Authentication | Configure the authentication provider. Example: `Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem")` | no authentication |
| TLSTrustCertsFilePath | Set the path to the trusted TLS certificate file | |
| TLSAllowInsecureConnection | Configure whether the Pulsar client accepts untrusted TLS certificates from the broker | false |
| TLSValidateHostname | Configure whether the Pulsar client verifies the validity of the hostname from the broker | false |
| ListenerName | Configure the net model for VPC users to connect to the Pulsar broker | |
| MaxConnectionsPerBroker | Max number of connections to a single broker that is kept in the pool | 1 |
| CustomMetricsLabels | Add custom labels to all the metrics reported by this client instance | |
| Logger | Configure the logger used by the client | logrus.StandardLogger |

## Producers

Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Go producers using a `ProducerOptions` object. Here's an example:

```go

producer, err := client.CreateProducer(pulsar.ProducerOptions{
    Topic: "my-topic",
})

if err != nil {
    log.Fatal(err)
}

_, err = producer.Send(context.Background(), &pulsar.ProducerMessage{
    Payload: []byte("hello"),
})

defer producer.Close()

if err != nil {
    fmt.Println("Failed to publish message", err)
}
fmt.Println("Published message")

```

### Producer operations

Pulsar Go producers have the following methods available:

Method | Description | Return type
:------|:------------|:-----------
`Topic()` | Fetches the producer's [topic](reference-terminology.md#topic) | `string`
`Name()` | Fetches the producer's name | `string`
`Send(context.Context, *ProducerMessage)` | Publishes a [message](#messages) to the producer's topic. This call blocks until the message is successfully acknowledged by the Pulsar broker, or an error is returned if the timeout set using the `SendTimeout` in the producer's [configuration](#producer-configuration) is exceeded. | (MessageID, error)
`SendAsync(context.Context, *ProducerMessage, func(MessageID, *ProducerMessage, error))`| Publishes a message asynchronously. The provided callback is invoked once the message has been acknowledged by the Pulsar broker, or with an error if publishing fails. |
`LastSequenceID()` | Get the last sequence ID that was published by this producer. This represents either the automatically assigned or custom sequence ID (set on the ProducerMessage) that was published and acknowledged by the broker. | int64
`Flush()`| Flush all the messages buffered in the client and wait until all messages have been successfully persisted. | error
`Close()` | Closes the producer and releases all resources allocated to it. Once `Close()` is called, no more messages are accepted from the publisher. This method blocks until all pending publish requests have been persisted by Pulsar. If an error is returned, no pending writes are retried. 
| - -### Producer Example - -#### How to use message router in producer - -```go - -client, err := NewClient(pulsar.ClientOptions{ - URL: serviceURL, -}) - -if err != nil { - log.Fatal(err) -} -defer client.Close() - -// Only subscribe on the specific partition -consumer, err := client.Subscribe(pulsar.ConsumerOptions{ - Topic: "my-partitioned-topic-partition-2", - SubscriptionName: "my-sub", -}) - -if err != nil { - log.Fatal(err) -} -defer consumer.Close() - -producer, err := client.CreateProducer(pulsar.ProducerOptions{ - Topic: "my-partitioned-topic", - MessageRouter: func(msg *ProducerMessage, tm TopicMetadata) int { - fmt.Println("Routing message ", msg, " -- Partitions: ", tm.NumPartitions()) - return 2 - }, -}) - -if err != nil { - log.Fatal(err) -} -defer producer.Close() - -``` - -#### How to use schema interface in producer - -```go - -type testJSON struct { - ID int `json:"id"` - Name string `json:"name"` -} - -``` - -```go - -var ( - exampleSchemaDef = "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," + - "\"fields\":[{\"name\":\"ID\",\"type\":\"int\"},{\"name\":\"Name\",\"type\":\"string\"}]}" -) - -``` - -```go - -client, err := NewClient(pulsar.ClientOptions{ - URL: "pulsar://localhost:6650", -}) -if err != nil { - log.Fatal(err) -} -defer client.Close() - -properties := make(map[string]string) -properties["pulsar"] = "hello" -jsonSchemaWithProperties := NewJSONSchema(exampleSchemaDef, properties) -producer, err := client.CreateProducer(ProducerOptions{ - Topic: "jsonTopic", - Schema: jsonSchemaWithProperties, -}) -assert.Nil(t, err) - -_, err = producer.Send(context.Background(), &ProducerMessage{ - Value: &testJSON{ - ID: 100, - Name: "pulsar", - }, -}) -if err != nil { - log.Fatal(err) -} -producer.Close() - -``` - -#### How to use delay relative in producer - -```go - -client, err := NewClient(pulsar.ClientOptions{ - URL: "pulsar://localhost:6650", -}) -if err != nil { - log.Fatal(err) -} -defer client.Close() - -topicName := newTopicName() -producer, err := client.CreateProducer(pulsar.ProducerOptions{ - Topic: topicName, - DisableBatching: true, -}) -if err != nil { - log.Fatal(err) -} -defer producer.Close() - -consumer, err := client.Subscribe(pulsar.ConsumerOptions{ - Topic: topicName, - SubscriptionName: "subName", - Type: Shared, -}) -if err != nil { - log.Fatal(err) -} -defer consumer.Close() - -ID, err := producer.Send(context.Background(), &pulsar.ProducerMessage{ - Payload: []byte(fmt.Sprintf("test")), - DeliverAfter: 3 * time.Second, -}) -if err != nil { - log.Fatal(err) -} -fmt.Println(ID) - -ctx, canc := context.WithTimeout(context.Background(), 1*time.Second) -msg, err := consumer.Receive(ctx) -if err != nil { - log.Fatal(err) -} -fmt.Println(msg.Payload()) -canc() - -ctx, canc = context.WithTimeout(context.Background(), 5*time.Second) -msg, err = consumer.Receive(ctx) -if err != nil { - log.Fatal(err) -} -fmt.Println(msg.Payload()) -canc() - -``` - -### Producer configuration - - Name | Description | Default -| :-------- | :---------- |:---------- | -| Topic | Topic specify the topic this consumer will subscribe to. This argument is required when constructing the reader. | | -| Name | Name specify a name for the producer. If not assigned, the system will generate a globally unique name which can be access with Producer.ProducerName(). 
| | -| Properties | Properties attach a set of application defined properties to the producer This properties will be visible in the topic stats | | -| SendTimeout | SendTimeout set the timeout for a message that is not acknowledged by the server | 30s | -| DisableBlockIfQueueFull | DisableBlockIfQueueFull control whether Send and SendAsync block if producer's message queue is full | false | -| MaxPendingMessages| MaxPendingMessages set the max size of the queue holding the messages pending to receive an acknowledgment from the broker. | | -| HashingScheme | HashingScheme change the `HashingScheme` used to chose the partition on where to publish a particular message. | JavaStringHash | -| CompressionType | CompressionType set the compression type for the producer. | not compressed | -| CompressionLevel | Define the desired compression level. Options: Default, Faster and Better | Default | -| MessageRouter | MessageRouter set a custom message routing policy by passing an implementation of MessageRouter | | -| DisableBatching | DisableBatching control whether automatic batching of messages is enabled for the producer. | false | -| BatchingMaxPublishDelay | BatchingMaxPublishDelay set the time period within which the messages sent will be batched | 1ms | -| BatchingMaxMessages | BatchingMaxMessages set the maximum number of messages permitted in a batch. | 1000 | -| BatchingMaxSize | BatchingMaxSize sets the maximum number of bytes permitted in a batch. | 128KB | -| Schema | Schema set a custom schema type by passing an implementation of `Schema` | bytes[] | -| Interceptors | A chain of interceptors. These interceptors are called at some points defined in the `ProducerInterceptor` interface. | None | -| MaxReconnectToBroker | MaxReconnectToBroker set the maximum retry number of reconnectToBroker | ultimate | -| BatcherBuilderType | BatcherBuilderType sets the batch builder type. This is used to create a batch container when batching is enabled. Options: DefaultBatchBuilder and KeyBasedBatchBuilder | DefaultBatchBuilder | - -## Consumers - -Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on that topic/those topics. You can [configure](#consumer-configuration) Go consumers using a `ConsumerOptions` object. Here's a basic example that uses channels: - -```go - -consumer, err := client.Subscribe(pulsar.ConsumerOptions{ - Topic: "topic-1", - SubscriptionName: "my-sub", - Type: pulsar.Shared, -}) -if err != nil { - log.Fatal(err) -} -defer consumer.Close() - -for i := 0; i < 10; i++ { - msg, err := consumer.Receive(context.Background()) - if err != nil { - log.Fatal(err) - } - - fmt.Printf("Received message msgId: %#v -- content: '%s'\n", - msg.ID(), string(msg.Payload())) - - consumer.Ack(msg) -} - -if err := consumer.Unsubscribe(); err != nil { - log.Fatal(err) -} - -``` - -### Consumer operations - -Pulsar Go consumers have the following methods available: - -Method | Description | Return type -:------|:------------|:----------- -`Subscription()` | Returns the consumer's subscription name | `string` -`Unsubcribe()` | Unsubscribes the consumer from the assigned topic. Throws an error if the unsubscribe operation is somehow unsuccessful. | `error` -`Receive(context.Context)` | Receives a single message from the topic. This method blocks until a message is available. | `(Message, error)` -`Chan()` | Chan returns a channel from which to consume messages. 
| `<-chan ConsumerMessage` -`Ack(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) | -`AckID(MessageID)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID | -`ReconsumeLater(msg Message, delay time.Duration)` | ReconsumeLater mark a message for redelivery after custom delay | -`Nack(Message)` | Acknowledge the failure to process a single message. | -`NackID(MessageID)` | Acknowledge the failure to process a single message. | -`Seek(msgID MessageID)` | Reset the subscription associated with this consumer to a specific message id. The message id can either be a specific message or represent the first or last messages in the topic. | `error` -`SeekByTime(time time.Time)` | Reset the subscription associated with this consumer to a specific message publish time. | `error` -`Close()` | Closes the consumer, disabling its ability to receive messages from the broker | -`Name()` | Name returns the name of consumer | `string` - -### Receive example - -#### How to use regex consumer - -```go - -client, err := pulsar.NewClient(pulsar.ClientOptions{ - URL: "pulsar://localhost:6650", -}) - -defer client.Close() - -p, err := client.CreateProducer(pulsar.ProducerOptions{ - Topic: topicInRegex, - DisableBatching: true, -}) -if err != nil { - log.Fatal(err) -} -defer p.Close() - -topicsPattern := fmt.Sprintf("persistent://%s/foo.*", namespace) -opts := pulsar.ConsumerOptions{ - TopicsPattern: topicsPattern, - SubscriptionName: "regex-sub", -} -consumer, err := client.Subscribe(opts) -if err != nil { - log.Fatal(err) -} -defer consumer.Close() - -``` - -#### How to use multi topics Consumer - -```go - -func newTopicName() string { - return fmt.Sprintf("my-topic-%v", time.Now().Nanosecond()) -} - - -topic1 := "topic-1" -topic2 := "topic-2" - -client, err := NewClient(pulsar.ClientOptions{ - URL: "pulsar://localhost:6650", -}) -if err != nil { - log.Fatal(err) -} -topics := []string{topic1, topic2} -consumer, err := client.Subscribe(pulsar.ConsumerOptions{ - Topics: topics, - SubscriptionName: "multi-topic-sub", -}) -if err != nil { - log.Fatal(err) -} -defer consumer.Close() - -``` - -#### How to use consumer listener - -```go - -import ( - "fmt" - "log" - - "github.com/apache/pulsar-client-go/pulsar" -) - -func main() { - client, err := pulsar.NewClient(pulsar.ClientOptions{URL: "pulsar://localhost:6650"}) - if err != nil { - log.Fatal(err) - } - - defer client.Close() - - channel := make(chan pulsar.ConsumerMessage, 100) - - options := pulsar.ConsumerOptions{ - Topic: "topic-1", - SubscriptionName: "my-subscription", - Type: pulsar.Shared, - } - - options.MessageChannel = channel - - consumer, err := client.Subscribe(options) - if err != nil { - log.Fatal(err) - } - - defer consumer.Close() - - // Receive messages from channel. The channel returns a struct which contains message and the consumer from where - // the message was received. 
#### How to use consumer listener

```go
import (
	"fmt"
	"log"

	"github.com/apache/pulsar-client-go/pulsar"
)

func main() {
	client, err := pulsar.NewClient(pulsar.ClientOptions{URL: "pulsar://localhost:6650"})
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	channel := make(chan pulsar.ConsumerMessage, 100)

	options := pulsar.ConsumerOptions{
		Topic:            "topic-1",
		SubscriptionName: "my-subscription",
		Type:             pulsar.Shared,
	}

	options.MessageChannel = channel

	consumer, err := client.Subscribe(options)
	if err != nil {
		log.Fatal(err)
	}
	defer consumer.Close()

	// Receive messages from the channel. The channel returns a struct that contains both the message
	// and the consumer from which the message was received. It's not necessary here since we have a
	// single consumer, but the channel could be shared across multiple consumers as well.
	for cm := range channel {
		msg := cm.Message
		fmt.Printf("Received message msgId: %v -- content: '%s'\n",
			msg.ID(), string(msg.Payload()))

		consumer.Ack(msg)
	}
}
```

#### How to use consumer receive timeout

```go
client, err := pulsar.NewClient(pulsar.ClientOptions{
	URL: "pulsar://localhost:6650",
})
if err != nil {
	log.Fatal(err)
}
defer client.Close()

topic := "test-topic-with-no-messages"
ctx, cancel := context.WithTimeout(context.Background(), 500*time.Millisecond)
defer cancel()

// create consumer
consumer, err := client.Subscribe(pulsar.ConsumerOptions{
	Topic:            topic,
	SubscriptionName: "my-sub1",
	Type:             pulsar.Shared,
})
if err != nil {
	log.Fatal(err)
}
defer consumer.Close()

// Receive fails with an error once the context times out.
msg, err := consumer.Receive(ctx)
if err != nil {
	log.Fatal(err)
}
fmt.Println(string(msg.Payload()))
```

#### How to use schema in consumer

```go
type testJSON struct {
	ID   int    `json:"id"`
	Name string `json:"name"`
}
```

```go
var (
	exampleSchemaDef = "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," +
		"\"fields\":[{\"name\":\"ID\",\"type\":\"int\"},{\"name\":\"Name\",\"type\":\"string\"}]}"
)
```

```go
client, err := pulsar.NewClient(pulsar.ClientOptions{
	URL: "pulsar://localhost:6650",
})
if err != nil {
	log.Fatal(err)
}
defer client.Close()

var s testJSON

consumerJS := pulsar.NewJSONSchema(exampleSchemaDef, nil)
consumer, err := client.Subscribe(pulsar.ConsumerOptions{
	Topic:                       "jsonTopic",
	SubscriptionName:            "sub-1",
	Schema:                      consumerJS,
	SubscriptionInitialPosition: pulsar.SubscriptionPositionEarliest,
})
if err != nil {
	log.Fatal(err)
}
defer consumer.Close()

msg, err := consumer.Receive(context.Background())
if err != nil {
	log.Fatal(err)
}
if err := msg.GetSchemaValue(&s); err != nil {
	log.Fatal(err)
}
```
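#### How to use dead letter policy

The configuration reference below also lists a `DLQ` option. The following is a minimal sketch under stated assumptions: the topic names and delivery limit are illustrative, and the `DLQPolicy` fields shown (`MaxDeliveries`, `DeadLetterTopic`) are assumed from the Go client's dead letter support.

```go
consumer, err := client.Subscribe(pulsar.ConsumerOptions{
	Topic:            "topic-1",
	SubscriptionName: "dlq-sub",
	Type:             pulsar.Shared,
	// After 3 failed deliveries, the message is routed to the dead letter topic.
	DLQ: &pulsar.DLQPolicy{
		MaxDeliveries:   3,
		DeadLetterTopic: "topic-1-dlq", // placeholder topic name
	},
})
if err != nil {
	log.Fatal(err)
}
defer consumer.Close()
```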
### Consumer configuration

 Name | Description | Default
| :-------- | :---------- |:---------- |
| Topic | Topic specifies the topic this consumer will subscribe to. Either a topic, a list of topics, or a topics pattern is required when subscribing. | |
| Topics | Specifies a list of topics this consumer will subscribe to. Either a topic, a list of topics, or a topics pattern is required when subscribing. | |
| TopicsPattern | Specifies a regular expression to subscribe to multiple topics under the same namespace. Either a topic, a list of topics, or a topics pattern is required when subscribing. | |
| AutoDiscoveryPeriod | Specifies the interval at which to poll for new partitions or new topics when using a TopicsPattern. | |
| SubscriptionName | Specifies the subscription name for this consumer. This argument is required when subscribing. | |
| Name | Sets the consumer name. | |
| Properties | Properties attaches a set of application-defined properties to the consumer. These properties will be visible in the topic stats. | |
| Type | Selects the subscription type to be used when subscribing to the topic. | Exclusive |
| SubscriptionInitialPosition | The initial position at which the cursor will be set when subscribing. | Latest |
| DLQ | Configuration for the dead letter queue consumer policy. | no DLQ |
| MessageChannel | Sets a `MessageChannel` for the consumer. When a message is received, it will be pushed to the channel for consumption. | |
| ReceiverQueueSize | Sets the size of the consumer receive queue. | 1000 |
| NackRedeliveryDelay | The delay after which to redeliver messages that failed to be processed. | 1min |
| ReadCompacted | If enabled, the consumer will read messages from the compacted topic rather than the topic's full message backlog. | false |
| ReplicateSubscriptionState | Marks the subscription as replicated to keep it in sync across clusters. | false |
| KeySharedPolicy | Configuration for the Key Shared consumer policy. | |
| RetryEnable | Automatically retries sending messages to the default-filled DLQPolicy topics. | false |
| Interceptors | A chain of interceptors. These interceptors are called at some points defined in the `ConsumerInterceptor` interface. | |
| MaxReconnectToBroker | MaxReconnectToBroker sets the maximum number of reconnectToBroker retries. | unlimited |
| Schema | Schema sets a custom schema type by passing an implementation of `Schema`. | bytes[] |

## Readers

Pulsar readers process messages from Pulsar topics. Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recent unacked message). You can [configure](#reader-configuration) Go readers using a `ReaderOptions` object. Here's an example:

```go
reader, err := client.CreateReader(pulsar.ReaderOptions{
	Topic:          "topic-1",
	StartMessageID: pulsar.EarliestMessageID(),
})
if err != nil {
	log.Fatal(err)
}
defer reader.Close()
```

### Reader operations

Pulsar Go readers have the following methods available:

Method | Description | Return type
:------|:------------|:-----------
`Topic()` | Returns the reader's [topic](reference-terminology.md#topic) | `string`
`Next(context.Context)` | Receives the next message on the topic (analogous to the `Receive` method for [consumers](#consumer-operations)). This method blocks until a message is available. | `(Message, error)`
`HasNext()` | Checks whether there is any message available to read from the current position | `(bool, error)`
`Close()` | Closes the reader, disabling its ability to receive messages from the broker | `error`
`Seek(MessageID)` | Resets the subscription associated with this reader to a specific message ID | `error`
`SeekByTime(time time.Time)` | Resets the subscription associated with this reader to a specific message publish time | `error`
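For instance, the `SeekByTime` operation above lets you rewind a reader by a relative timestamp. A short sketch, reusing the `reader` created earlier (the one-hour window is arbitrary):

```go
// Rewind the reader to messages published during the last hour.
if err := reader.SeekByTime(time.Now().Add(-1 * time.Hour)); err != nil {
	log.Fatal(err)
}
```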
### Reader example

#### How to use reader to read 'next' message

Here's an example usage of a Go reader that uses the `Next()` method to process incoming messages:

```go
import (
	"context"
	"fmt"
	"log"

	"github.com/apache/pulsar-client-go/pulsar"
)

func main() {
	client, err := pulsar.NewClient(pulsar.ClientOptions{URL: "pulsar://localhost:6650"})
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	reader, err := client.CreateReader(pulsar.ReaderOptions{
		Topic:          "topic-1",
		StartMessageID: pulsar.EarliestMessageID(),
	})
	if err != nil {
		log.Fatal(err)
	}
	defer reader.Close()

	for reader.HasNext() {
		msg, err := reader.Next(context.Background())
		if err != nil {
			log.Fatal(err)
		}

		fmt.Printf("Received message msgId: %#v -- content: '%s'\n",
			msg.ID(), string(msg.Payload()))
	}
}
```

In the example above, the reader begins reading from the earliest available message (specified by `pulsar.EarliestMessageID()`). The reader can also begin reading from the latest message (`pulsar.LatestMessageID()`) or some other message ID specified as bytes using the `DeserializeMessageID` function, which takes a byte slice and returns a `MessageID` object. Here's an example:

```go
// Read the last saved message ID from an external store as a byte slice
// (loadLastSavedID is a placeholder for your own persistence code).
lastSavedID := loadLastSavedID()

msgID, err := pulsar.DeserializeMessageID(lastSavedID)
if err != nil {
	log.Fatal(err)
}

reader, err := client.CreateReader(pulsar.ReaderOptions{
	Topic:          "my-golang-topic",
	StartMessageID: msgID,
})
```

#### How to use reader to read specific message

```go
client, err := pulsar.NewClient(pulsar.ClientOptions{
	URL: "pulsar://localhost:6650",
})
if err != nil {
	log.Fatal(err)
}
defer client.Close()

topic := "topic-1"
ctx := context.Background()

// create producer
producer, err := client.CreateProducer(pulsar.ProducerOptions{
	Topic:           topic,
	DisableBatching: true,
})
if err != nil {
	log.Fatal(err)
}
defer producer.Close()

// send 10 messages
msgIDs := [10]pulsar.MessageID{}
for i := 0; i < 10; i++ {
	msgID, err := producer.Send(ctx, &pulsar.ProducerMessage{
		Payload: []byte(fmt.Sprintf("hello-%d", i)),
	})
	if err != nil {
		log.Fatal(err)
	}
	msgIDs[i] = msgID
}

// create reader on 5th message (not included)
reader, err := client.CreateReader(pulsar.ReaderOptions{
	Topic:          topic,
	StartMessageID: msgIDs[4],
})
if err != nil {
	log.Fatal(err)
}
defer reader.Close()

// receive the remaining 5 messages
for i := 5; i < 10; i++ {
	msg, err := reader.Next(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("Read message: '%s'\n", string(msg.Payload()))
}

// create reader on 5th message (included)
readerInclusive, err := client.CreateReader(pulsar.ReaderOptions{
	Topic:                   topic,
	StartMessageID:          msgIDs[4],
	StartMessageIDInclusive: true,
})
if err != nil {
	log.Fatal(err)
}
defer readerInclusive.Close()
```

### Reader configuration

 Name | Description | Default
| :-------- | :---------- |:---------- |
| Topic | Topic specifies the topic this reader will read from. This argument is required when constructing the reader. | |
| Name | Name sets the reader name. | |
| Properties | Attaches a set of application-defined properties to the reader. These properties will be visible in the topic stats. | |
| StartMessageID | StartMessageID sets the initial reader position by specifying a message ID. | |
| StartMessageIDInclusive | If true, the reader will start at the `StartMessageID`, included. Default is `false`, and the reader will start from the "next" message. | false |
| MessageChannel | MessageChannel sets a `MessageChannel` for the reader. When a message is received, it will be pushed to the channel for consumption. | |
| ReceiverQueueSize | ReceiverQueueSize sets the size of the reader's receive queue. | 1000 |
| SubscriptionRolePrefix | SubscriptionRolePrefix sets the subscription role prefix. | "reader" |
| ReadCompacted | If enabled, the reader will read messages from the compacted topic rather than the topic's full message backlog. ReadCompacted can only be enabled when reading from a persistent topic. | false |
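As an illustration of the `ReadCompacted` option above, the following is a minimal sketch (the topic name is a placeholder; note that the topic must be persistent):

```go
// Read only the latest value per key from a compacted topic.
reader, err := client.CreateReader(pulsar.ReaderOptions{
	Topic:          "persistent://public/default/topic-1",
	StartMessageID: pulsar.EarliestMessageID(),
	ReadCompacted:  true,
})
if err != nil {
	log.Fatal(err)
}
defer reader.Close()
```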
## Messages

The Pulsar Go client provides a `ProducerMessage` struct that you can use to construct messages to produce on Pulsar topics. Here's an example message:

```go
msg := pulsar.ProducerMessage{
	Payload: []byte("Here is some message data"),
	Key:     "message-key",
	Properties: map[string]string{
		"foo": "bar",
	},
	EventTime:           time.Now(),
	ReplicationClusters: []string{"cluster1", "cluster3"},
}

if _, err := producer.Send(context.Background(), &msg); err != nil {
	log.Fatalf("Could not publish message due to: %v", err)
}
```

The following parameters are available for `ProducerMessage` objects:

Parameter | Description
:---------|:-----------
`Payload` | The actual data payload of the message
`Value` | Value and payload are mutually exclusive; `Value interface{}` is used for schema messages.
`Key` | The optional key associated with the message (particularly useful for things like topic compaction)
`OrderingKey` | OrderingKey sets the ordering key of the message.
`Properties` | A key-value map (both keys and values must be strings) for any application-specific metadata attached to the message
`EventTime` | The timestamp associated with the message
`ReplicationClusters` | The clusters to which this message will be replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default.
`SequenceID` | Sets the sequence ID to assign to the current message
`DeliverAfter` | Requests to deliver the message only after the specified relative delay
`DeliverAt` | Delivers the message only at or after the specified absolute timestamp

## TLS encryption and authentication

In order to use [TLS encryption](security-tls-transport.md), you'll need to configure your client to do so:

 * Use the `pulsar+ssl` URL type
 * Set `TLSTrustCertsFilePath` to the path to the TLS certs used by your client and the Pulsar broker
 * Configure the `Authentication` option

Here's an example:

```go
opts := pulsar.ClientOptions{
	URL:                   "pulsar+ssl://my-cluster.com:6651",
	TLSTrustCertsFilePath: "/path/to/certs/my-cert.csr",
	Authentication:        pulsar.NewAuthenticationTLS("my-cert.pem", "my-key.pem"),
}
```

## OAuth2 authentication

To use [OAuth2 authentication](security-oauth2.md), you'll need to configure your client with the parameters of your OAuth2 provider. The following example shows how:

```go
oauth := pulsar.NewAuthenticationOAuth2(map[string]string{
	"type":       "client_credentials",
	"issuerUrl":  "https://dev-kt-aa9ne.us.auth0.com",
	"audience":   "https://dev-kt-aa9ne.us.auth0.com/api/v2/",
	"privateKey": "/path/to/privateKey",
	"clientId":   "0Xx...Yyxeny",
})
client, err := pulsar.NewClient(pulsar.ClientOptions{
	URL:            "pulsar://my-cluster:6650",
	Authentication: oauth,
})
```

diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/client-libraries-java.md b/site2/website/versioned_docs/version-2.9.3-deprecated/client-libraries-java.md
deleted file mode 100644
index 563791618dcd31..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/client-libraries-java.md
+++ /dev/null
@@ -1,1040 +0,0 @@
---
id: client-libraries-java
title: Pulsar Java client
sidebar_label: "Java"
original_id: client-libraries-java
---

You can use a Pulsar Java client to create Java [producers](#producer), [consumers](#consumer), and [readers](#reader) of messages and to perform [administrative tasks](admin-api-overview.md). The current Java client version is **@pulsar:version@**.

All the methods in the [producer](#producer), [consumer](#consumer), and [reader](#reader) of a Java client are thread-safe.
Javadoc for the Pulsar client is divided into two domains by package as follows.

Package | Description | Maven Artifact
:-------|:------------|:--------------
[`org.apache.pulsar.client.api`](/api/client) | The producer and consumer API | [org.apache.pulsar:pulsar-client:@pulsar:version@](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client%7C@pulsar:version@%7Cjar)
[`org.apache.pulsar.client.admin`](/api/admin) | The Java [admin API](admin-api-overview.md) | [org.apache.pulsar:pulsar-client-admin:@pulsar:version@](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client-admin%7C@pulsar:version@%7Cjar)
`org.apache.pulsar.client.all` | Includes both `pulsar-client` and `pulsar-client-admin`. Both `pulsar-client` and `pulsar-client-admin` are shaded packages, and they shade their dependencies independently. Consequently, applications that use both `pulsar-client` and `pulsar-client-admin` carry redundant shaded classes, and it is easy to break things by introducing new dependencies without updating the shading rules.
In this case, you can use `pulsar-client-all`, which shades dependencies only once and reduces the size of dependencies. | [org.apache.pulsar:pulsar-client-all:@pulsar:version@](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client-all%7C@pulsar:version@%7Cjar)

This document focuses only on the client API for producing and consuming messages on Pulsar topics. For how to use the Java admin client, see [Pulsar admin interface](admin-api-overview.md).

## Installation

The latest version of the Pulsar Java client library is available via [Maven Central](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client%7C@pulsar:version@%7Cjar). To use the latest version, add the `pulsar-client` library to your build configuration.

### Maven

If you use Maven, add the following information to the `pom.xml` file.

```xml
<!-- in your <properties> block -->
<pulsar.version>@pulsar:version@</pulsar.version>

<!-- in your <dependencies> block -->
<dependency>
  <groupId>org.apache.pulsar</groupId>
  <artifactId>pulsar-client</artifactId>
  <version>${pulsar.version}</version>
</dependency>
```

### Gradle

If you use Gradle, add the following information to the `build.gradle` file.

```groovy
def pulsarVersion = '@pulsar:version@'

dependencies {
    compile group: 'org.apache.pulsar', name: 'pulsar-client', version: pulsarVersion
}
```

## Connection URLs

To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL.

You can assign Pulsar protocol URLs to specific clusters and use the `pulsar` scheme. The default port is `6650`. The following is an example of `localhost`.

```http
pulsar://localhost:6650
```

If you have multiple brokers, the URL is as follows.

```http
pulsar://localhost:6650,localhost:6651,localhost:6652
```

A URL for a production Pulsar cluster is as follows.

```http
pulsar://pulsar.us-west.example.com:6650
```

If you use [TLS](security-tls-authentication.md) authentication, the URL is as follows.

```http
pulsar+ssl://pulsar.us-west.example.com:6651
```

## Client

You can instantiate a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object using just a URL for the target Pulsar [cluster](reference-terminology.md#cluster) like this:

```java
PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar://localhost:6650")
    .build();
```

If you have multiple brokers, you can initiate a PulsarClient like this:

```java
PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar://localhost:6650,localhost:6651,localhost:6652")
    .build();
```

> ### Default broker URLs for standalone clusters
> If you run a cluster in [standalone mode](getting-started-standalone.md), the broker is available at the `pulsar://localhost:6650` URL by default.

When you create a client, you can use the `loadConf` configuration. The following parameters are available in `loadConf`.

| Name | Type | Description | Default |
|---|---|---|---|
`serviceUrl` | String | Service URL provider for Pulsar service | None
`authPluginClassName` | String | Name of the authentication plugin | None
`authParams` | String | Parameters for the authentication plugin. **Example**: key1:val1,key2:val2 | None
`operationTimeoutMs` | long | Operation timeout | 30000
`statsIntervalSeconds` | long | Interval between each stats report. Stats are activated with a positive `statsInterval`. Set `statsIntervalSeconds` to at least 1 second. | 60
`numIoThreads` | int | The number of threads used for handling connections to brokers | 1
`numListenerThreads` | int | The number of threads used for handling message listeners. The listener thread pool is shared across all the consumers and readers using the "listener" model to get messages. For a given consumer, the listener is always invoked from the same thread to ensure ordering. If you want multiple threads to process a single topic, you need to create a [`shared`](https://pulsar.apache.org/docs/en/next/concepts-messaging/#shared) subscription and multiple consumers for this subscription. This does not ensure ordering. | 1
`useTcpNoDelay` | boolean | Whether to use the TCP no-delay flag on the connection to disable the Nagle algorithm | true
`useTls` | boolean | Whether to use TLS encryption on the connection | false
`tlsTrustCertsFilePath` | string | Path to the trusted TLS certificate file | None
`tlsAllowInsecureConnection` | boolean | Whether the Pulsar client accepts an untrusted TLS certificate from the broker | false
`tlsHostnameVerificationEnable` | boolean | Whether to enable TLS hostname verification | false
`concurrentLookupRequest` | int | The number of concurrent lookup requests allowed to be sent on each broker connection, to prevent overloading the broker | 5000
`maxLookupRequest` | int | The maximum number of lookup requests allowed on each broker connection, to prevent overloading the broker | 50000
`maxNumberOfRejectedRequestPerConnection` | int | The maximum number of rejected requests from a broker in a certain time frame (30 seconds) after which the current connection is closed and the client creates a new connection to connect to a different broker | 50
`keepAliveIntervalSeconds` | int | The keep-alive interval for each client-broker connection, in seconds | 30
`connectionTimeoutMs` | int | Duration of waiting for a connection to a broker to be established.
    If the duration passes without a response from a broker, the connection attempt is dropped|10000 -`requestTimeoutMs`|int|Maximum duration for completing a request |60000 -`defaultBackoffIntervalNanos`|int| Default duration for a backoff interval | TimeUnit.MILLISECONDS.toNanos(100); -`maxBackoffIntervalNanos`|long|Maximum duration for a backoff interval|TimeUnit.SECONDS.toNanos(30) -`socks5ProxyAddress`|SocketAddress|SOCKS5 proxy address | None -`socks5ProxyUsername`|string|SOCKS5 proxy username | None -`socks5ProxyPassword`|string|SOCKS5 proxy password | None - -Check out the Javadoc for the {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} class for a full list of configurable parameters. - -> In addition to client-level configuration, you can also apply [producer](#configure-producer) and [consumer](#configure-consumer) specific configuration as described in sections below. - -## Producer - -In Pulsar, producers write messages to topics. Once you've instantiated a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object (as in the section [above](#client-configuration)), you can create a {@inject: javadoc:Producer:/client/org/apache/pulsar/client/api/Producer} for a specific Pulsar [topic](reference-terminology.md#topic). - -```java - -Producer producer = client.newProducer() - .topic("my-topic") - .create(); - -// You can then send messages to the broker and topic you specified: -producer.send("My message".getBytes()); - -``` - -By default, producers produce messages that consist of byte arrays. You can produce different types by specifying a message [schema](#schema). - -```java - -Producer stringProducer = client.newProducer(Schema.STRING) - .topic("my-topic") - .create(); -stringProducer.send("My message"); - -``` - -> Make sure that you close your producers, consumers, and clients when you do not need them. - -> ```java -> -> producer.close(); -> consumer.close(); -> client.close(); -> -> -> ``` - -> -> Close operations can also be asynchronous: - -> ```java -> -> producer.closeAsync() -> .thenRun(() -> System.out.println("Producer closed")) -> .exceptionally((ex) -> { -> System.err.println("Failed to close producer: " + ex); -> return null; -> }); -> -> -> ``` - - -### Configure producer - -If you instantiate a `Producer` object by specifying only a topic name as the example above, the default configuration of producer is used. - -If you create a producer, you can use the `loadConf` configuration. The following parameters are available in `loadConf`. - -Name| Type |
    Description
    | Default -|---|---|---|--- -`topicName`| string| Topic name| null| -`producerName`| string|Producer name| null -`sendTimeoutMs`| long|Message send timeout in ms.
    If a message is not acknowledged by a server before the `sendTimeout` expires, an error occurs.|30000 -`blockIfQueueFull`|boolean|If it is set to `true`, when the outgoing message queue is full, the `Send` and `SendAsync` methods of producer block, rather than failing and throwing errors.
    If it is set to `false`, when the outgoing message queue is full, the `Send` and `SendAsync` methods of producer fail and `ProducerQueueIsFullError` exceptions occur.

    The `MaxPendingMessages` parameter determines the size of the outgoing message queue.|false -`maxPendingMessages`| int|The maximum size of a queue holding pending messages.

    For example, a message waiting to receive an acknowledgment from a [broker](reference-terminology.md#broker).

    By default, when the queue is full, all calls to the `Send` and `SendAsync` methods fail **unless** you set `BlockIfQueueFull` to `true`.|1000 -`maxPendingMessagesAcrossPartitions`|int|The maximum number of pending messages across partitions.

    Use the setting to lower the max pending messages for each partition ({@link #setMaxPendingMessages(int)}) if the total number exceeds the configured value.|50000 -`messageRoutingMode`| MessageRoutingMode|Message routing logic for producers on [partitioned topics](concepts-architecture-overview.md#partitioned-topics).
    Apply the logic only when setting no key on messages.
    Available options are as follows:
* `pulsar.RoundRobinDistribution`: round robin
* `pulsar.UseSinglePartition`: publish all messages to a single partition
* `pulsar.CustomPartition`: a custom partitioning scheme
| `pulsar.RoundRobinDistribution`
`hashingScheme`| HashingScheme|Hashing function determining the partition where you publish a particular message (**partitioned topics only**).
    Available options are as follows:
* `pulsar.JavastringHash`: the equivalent of `string.hashCode()` in Java
* `pulsar.Murmur3_32Hash`: applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function
* `pulsar.BoostHash`: applies the hashing function from C++'s [Boost](https://www.boost.org/doc/libs/1_62_0/doc/html/hash.html) library
| `HashingScheme.JavastringHash`
`cryptoFailureAction`| ProducerCryptoFailureAction|The action the producer takes when encryption fails.
* **FAIL**: if encryption fails, unencrypted messages fail to send.
* **SEND**: if encryption fails, unencrypted messages are sent.
| `ProducerCryptoFailureAction.FAIL`
`batchingMaxPublishDelayMicros`| long|Batching time period of sending messages.|TimeUnit.MILLISECONDS.toMicros(1)
`batchingMaxMessages` |int|The maximum number of messages permitted in a batch.|1000
`batchingEnabled`| boolean|Enable batching of messages. |true
`compressionType`|CompressionType|Message data compression type used by a producer.
    Available options:
* [`LZ4`](https://github.com/lz4/lz4)
* [`ZLIB`](https://zlib.net/)
* [`ZSTD`](https://facebook.github.io/zstd/)
* [`SNAPPY`](https://google.github.io/snappy/)
| No compression

You can configure these parameters if you do not want to use the default configuration.

For a full list, see the Javadoc for the {@inject: javadoc:ProducerBuilder:/client/org/apache/pulsar/client/api/ProducerBuilder} class. The following is an example.

```java
Producer<byte[]> producer = client.newProducer()
    .topic("my-topic")
    .batchingMaxPublishDelay(10, TimeUnit.MILLISECONDS)
    .sendTimeout(10, TimeUnit.SECONDS)
    .blockIfQueueFull(true)
    .create();
```

### Message routing

When using partitioned topics, you can specify the routing mode whenever you publish messages using a producer. For more information on specifying a routing mode using the Java client, see the [Partitioned Topics cookbook](cookbooks-partitioned.md).

### Async send

You can publish messages [asynchronously](concepts-messaging.md#send-modes) using the Java client. With async send, the producer puts the message in a blocking queue and returns immediately. The client library then sends the message to the broker in the background. If the queue is full (max size configurable), the producer is blocked, or fails immediately when calling the API, depending on the arguments passed to the producer.

The following is an example.

```java
producer.sendAsync("my-async-message".getBytes()).thenAccept(msgId -> {
    System.out.println("Message with ID " + msgId + " successfully sent");
});
```

As you can see from the example above, async send operations return a {@inject: javadoc:MessageId:/client/org/apache/pulsar/client/api/MessageId} wrapped in a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture).

### Configure messages

In addition to a value, you can set additional items on a given message:

```java
producer.newMessage()
    .key("my-message-key")
    .value("my-async-message".getBytes())
    .property("my-key", "my-value")
    .property("my-other-key", "my-other-value")
    .send();
```

You can terminate the builder chain with `sendAsync()` and get a future back instead.

## Consumer

In Pulsar, consumers subscribe to topics and handle messages that producers publish to those topics. You can instantiate a new [consumer](reference-terminology.md#consumer) by first instantiating a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object and passing it a URL for a Pulsar broker (as [above](#client-configuration)).

Once you've instantiated a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object, you can create a {@inject: javadoc:Consumer:/client/org/apache/pulsar/client/api/Consumer} by specifying a [topic](reference-terminology.md#topic) and a [subscription](concepts-messaging.md#subscription-modes).

```java
Consumer consumer = client.newConsumer()
    .topic("my-topic")
    .subscriptionName("my-subscription")
    .subscribe();
```

The `subscribe` method automatically subscribes the consumer to the specified topic and subscription. One way to make the consumer listen on the topic is to set up a `while` loop. In this example loop, the consumer listens for messages, prints the contents of any received message, and then [acknowledges](reference-terminology.md#acknowledgment-ack) that the message has been processed. If the processing logic fails, you can use [negative acknowledgement](reference-terminology.md#acknowledgment-ack) to redeliver the message later.
```java
while (true) {
  // Wait for a message
  Message msg = consumer.receive();

  try {
      // Do something with the message
      System.out.println("Message received: " + new String(msg.getData()));

      // Acknowledge the message so that it can be deleted by the message broker
      consumer.acknowledge(msg);
  } catch (Exception e) {
      // Message failed to process, redeliver later
      consumer.negativeAcknowledge(msg);
  }
}
```

If you don't want to block your main thread but rather listen constantly for new messages, consider using a `MessageListener`.

```java
MessageListener myMessageListener = (consumer, msg) -> {
  try {
      System.out.println("Message received: " + new String(msg.getData()));
      consumer.acknowledge(msg);
  } catch (Exception e) {
      consumer.negativeAcknowledge(msg);
  }
};

Consumer consumer = client.newConsumer()
    .topic("my-topic")
    .subscriptionName("my-subscription")
    .messageListener(myMessageListener)
    .subscribe();
```

### Configure consumer

If you instantiate a `Consumer` object by specifying only a topic and subscription name as in the example above, the consumer uses the default configuration.

When you create a consumer, you can use the `loadConf` configuration. The following parameters are available in `loadConf`.

 Name|Type |
    Description
    | Default -|---|---|---|--- -`topicNames`| Set<String>| Topic name| Sets.newTreeSet() - `topicsPattern`|Pattern| Topic pattern |None -`subscriptionName`|String| Subscription name| None -`subscriptionType`|SubscriptionType| Subscription type
    Four subscription types are available:
* Exclusive
* Failover
* Shared
* Key_Shared
|SubscriptionType.Exclusive
`receiverQueueSize` |int | Size of a consumer's receiver queue.

    For example, the number of messages accumulated by a consumer before an application calls `Receive`.

    A value higher than the default value increases consumer throughput, though at the expense of more memory utilization.| 1000 -`acknowledgementsGroupTimeMicros`|long|Group a consumer acknowledgment for a specified time.

    By default, a consumer uses 100ms grouping time to send out acknowledgments to a broker.

    Setting a group time of 0 sends out acknowledgments immediately.

    A longer ack group time is more efficient at the expense of a slight increase in message re-deliveries after a failure.|TimeUnit.MILLISECONDS.toMicros(100) -`negativeAckRedeliveryDelayMicros`|long|Delay to wait before redelivering messages that failed to be processed.

    When an application uses {@link Consumer#negativeAcknowledge(Message)}, failed messages are redelivered after a fixed timeout. |TimeUnit.MINUTES.toMicros(1) -`maxTotalReceiverQueueSizeAcrossPartitions`|int |The max total receiver queue size across partitions.

    This setting reduces the receiver queue size for individual partitions if the total receiver queue size exceeds this value.|50000 -`consumerName`|String|Consumer name|null -`ackTimeoutMillis`|long|Timeout of unacked messages|0 -`tickDurationMillis`|long|Granularity of the ack-timeout redelivery.

    Using a higher `tickDurationMillis` reduces the memory overhead of tracking messages when the ack timeout is set to a larger value (for example, 1 hour).|1000

    The broker follows descending priorities. For example, 0=max-priority, 1, 2,...

    In shared subscription mode, the broker **first dispatches messages to the max priority level consumers if they have permits**. Otherwise, the broker considers the next priority level consumers.

    **Example 1**
    If a subscription has consumerA with `priorityLevel` 0 and consumerB with `priorityLevel` 1, then the broker **only dispatches messages to consumerA until it runs out of permits** and then starts dispatching messages to consumerB.

    **Example 2**
    Consumer Priority, Level, Permits
    C1, 0, 2
    C2, 0, 1
    C3, 0, 1
    C4, 1, 2
    C5, 1, 1

    The order in which the broker dispatches messages to consumers is: C1, C2, C3, C1, C4, C5, C4.|0
`cryptoFailureAction`|ConsumerCryptoFailureAction|The action the consumer takes when it receives a message that cannot be decrypted.
* **FAIL**: this is the default option, failing messages until crypto succeeds.
* **DISCARD**: silently acknowledge the message and do not deliver it to the application.
* **CONSUME**: deliver encrypted messages to the application. It is the application's responsibility to decrypt the message.

If decompression of the message fails, or the message contains batched messages, the client is not able to retrieve the individual messages in the batch.

The delivered encrypted message contains an {@link EncryptionContext} carrying the encryption and compression information the application needs to decrypt the consumed message payload.|
ConsumerCryptoFailureAction.FAIL
`properties`|SortedMap|A name/value property map for this consumer.

    `properties` is application defined metadata attached to a consumer.

    When getting a topic stats, associate this metadata with the consumer stats for easier identification.|new TreeMap() -`readCompacted`|boolean|If enabling `readCompacted`, a consumer reads messages from a compacted topic rather than reading a full message backlog of a topic.

    A consumer only sees the latest value for each key in the compacted topic, up until the point in the topic's message backlog at which compaction took place. Beyond that point, messages are sent as normal.

    `readCompacted` can only be enabled on subscriptions to persistent topics that have a single active consumer (such as failover or exclusive subscriptions).

    Attempting to enable it on subscriptions to non-persistent topics or on shared subscriptions leads to a subscription call throwing a `PulsarClientException`.|false -`subscriptionInitialPosition`|SubscriptionInitialPosition|Initial position at which to set cursor when subscribing to a topic at first time.|SubscriptionInitialPosition.Latest -`patternAutoDiscoveryPeriod`|int|Topic auto discovery period when using a pattern for topic's consumer.

    The default and minimum value is 1 minute.|1 -`regexSubscriptionMode`|RegexSubscriptionMode|When subscribing to a topic using a regular expression, you can pick a certain type of topics.

* **PersistentOnly**: only subscribe to persistent topics.
* **NonPersistentOnly**: only subscribe to non-persistent topics.
* **AllTopics**: subscribe to both persistent and non-persistent topics.
|RegexSubscriptionMode.PersistentOnly
`deadLetterPolicy`|DeadLetterPolicy|Dead letter policy for consumers.

    By default, a message may be redelivered many times, potentially without ever stopping.

    By using the dead letter mechanism, messages have the max redelivery count. **When exceeding the maximum number of redeliveries, messages are sent to the Dead Letter Topic and acknowledged automatically**.

    You can enable the dead letter mechanism by setting `deadLetterPolicy`.

    **Example**

    client.newConsumer()
    .deadLetterPolicy(DeadLetterPolicy.builder().maxRedeliverCount(10).build())
    .subscribe();


    Default dead letter topic name is `{TopicName}-{Subscription}-DLQ`.

    To set a custom dead letter topic name:
    client.newConsumer()
    .deadLetterPolicy(DeadLetterPolicy.builder().maxRedeliverCount(10)
    .deadLetterTopic("your-topic-name").build())
    .subscribe();


    When specifying the dead letter policy without specifying `ackTimeoutMillis`, the ack timeout is set to 30000 milliseconds.|None
`autoUpdatePartitions`|boolean|If `autoUpdatePartitions` is enabled, a consumer subscribes to newly added partitions automatically.

    **Note**: this is only for partitioned consumers.|true
`replicateSubscriptionState`|boolean|If `replicateSubscriptionState` is enabled, the subscription state is replicated to geo-replicated clusters.|false

You can configure these parameters if you do not want to use the default configuration. For a full list, see the Javadoc for the {@inject: javadoc:ConsumerBuilder:/client/org/apache/pulsar/client/api/ConsumerBuilder} class.

The following is an example.

```java
Consumer consumer = client.newConsumer()
    .topic("my-topic")
    .subscriptionName("my-subscription")
    .ackTimeout(10, TimeUnit.SECONDS)
    .subscriptionType(SubscriptionType.Exclusive)
    .subscribe();
```

### Async receive

The `receive` method receives messages synchronously (the consumer process is blocked until a message is available). You can also use [async receive](concepts-messaging.md#receive-modes), which returns a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture) object immediately once a new message is available.

The following is an example.

```java
CompletableFuture<Message> asyncMessage = consumer.receiveAsync();
```

Async receive operations return a {@inject: javadoc:Message:/client/org/apache/pulsar/client/api/Message} wrapped inside of a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture).

### Batch receive

Use `batchReceive` to receive multiple messages for each call.

The following is an example.

```java
Messages messages = consumer.batchReceive();
for (Object message : messages) {
  // do something
}
consumer.acknowledge(messages);
```

:::note

The batch receive policy limits the number and the bytes of messages in a single batch. You can specify a timeout to wait for enough messages. A batch receive completes as soon as any of the following conditions is met: enough messages, enough bytes of messages, or the wait timeout.

```java
Consumer consumer = client.newConsumer()
    .topic("my-topic")
    .subscriptionName("my-subscription")
    .batchReceivePolicy(BatchReceivePolicy.builder()
        .maxNumMessages(100)
        .maxNumBytes(1024 * 1024)
        .timeout(200, TimeUnit.MILLISECONDS)
        .build())
    .subscribe();
```

The default batch receive policy is:

```java
BatchReceivePolicy.builder()
    .maxNumMessage(-1)
    .maxNumBytes(10 * 1024 * 1024)
    .timeout(100, TimeUnit.MILLISECONDS)
    .build();
```

:::

### Multi-topic subscriptions

In addition to subscribing a consumer to a single Pulsar topic, you can also subscribe to multiple topics simultaneously using [multi-topic subscriptions](concepts-messaging.md#multi-topic-subscriptions). To use multi-topic subscriptions, you can supply either a regular expression (regex) or a `List` of topics. If you select topics via regex, all topics must be within the same Pulsar namespace.

The following are some examples.
```java
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.PulsarClient;

import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

ConsumerBuilder consumerBuilder = pulsarClient.newConsumer()
    .subscriptionName(subscription);

// Subscribe to all topics in a namespace
Pattern allTopicsInNamespace = Pattern.compile("public/default/.*");
Consumer allTopicsConsumer = consumerBuilder
    .topicsPattern(allTopicsInNamespace)
    .subscribe();

// Subscribe to a subset of topics in a namespace, based on regex
Pattern someTopicsInNamespace = Pattern.compile("public/default/foo.*");
Consumer someTopicsConsumer = consumerBuilder
    .topicsPattern(someTopicsInNamespace)
    .subscribe();
```

In the above example, the consumer subscribes to the `persistent` topics that match the topic name pattern. If you want the consumer to subscribe to all `persistent` and `non-persistent` topics that match the topic name pattern, set `subscriptionTopicsMode` to `RegexSubscriptionMode.AllTopics`.

```java
Pattern pattern = Pattern.compile("public/default/.*");
pulsarClient.newConsumer()
    .subscriptionName("my-sub")
    .topicsPattern(pattern)
    .subscriptionTopicsMode(RegexSubscriptionMode.AllTopics)
    .subscribe();
```

:::note

By default, the `subscriptionTopicsMode` of the consumer is `PersistentOnly`. Available options of `subscriptionTopicsMode` are `PersistentOnly`, `NonPersistentOnly`, and `AllTopics`.

:::

You can also subscribe to an explicit list of topics (across namespaces if you wish):

```java
List<String> topics = Arrays.asList(
    "topic-1",
    "topic-2",
    "topic-3"
);

Consumer multiTopicConsumer = consumerBuilder
    .topics(topics)
    .subscribe();

// Alternatively:
Consumer multiTopicConsumer = consumerBuilder
    .topic(
        "topic-1",
        "topic-2",
        "topic-3"
    )
    .subscribe();
```

You can also subscribe to multiple topics asynchronously using the `subscribeAsync` method rather than the synchronous `subscribe` method. The following is an example.

```java
Pattern allTopicsInNamespace = Pattern.compile("persistent://public/default.*");
consumerBuilder
    .topics(topics)
    .subscribeAsync()
    .thenAccept(this::receiveMessageFromConsumer);

private void receiveMessageFromConsumer(Object consumer) {
    ((Consumer)consumer).receiveAsync().thenAccept(message -> {
        // Do something with the received message
        receiveMessageFromConsumer(consumer);
    });
}
```

### Subscription modes

Pulsar has various [subscription modes](concepts-messaging#subscription-modes) to match different scenarios. A topic can have multiple subscriptions with different subscription modes. However, a subscription can only have one subscription mode at a time.

A subscription is identified by its subscription name, and a subscription name can specify only one subscription mode at a time. You cannot change the subscription mode unless all existing consumers of the subscription are offline.

Different subscription modes have different message distribution semantics. This section describes the differences between subscription modes and how to use them.

In order to better describe their differences, assume you have a topic named "my-topic" and the producer has published 10 messages.
```java
Producer<String> producer = client.newProducer(Schema.STRING)
    .topic("my-topic")
    .enableBatching(false)
    .create();
// 3 messages with "key-1", 3 messages with "key-2", 2 messages with "key-3" and 2 messages with "key-4"
producer.newMessage().key("key-1").value("message-1-1").send();
producer.newMessage().key("key-1").value("message-1-2").send();
producer.newMessage().key("key-1").value("message-1-3").send();
producer.newMessage().key("key-2").value("message-2-1").send();
producer.newMessage().key("key-2").value("message-2-2").send();
producer.newMessage().key("key-2").value("message-2-3").send();
producer.newMessage().key("key-3").value("message-3-1").send();
producer.newMessage().key("key-3").value("message-3-2").send();
producer.newMessage().key("key-4").value("message-4-1").send();
producer.newMessage().key("key-4").value("message-4-2").send();
```

#### Exclusive

Create a new consumer and subscribe with the `Exclusive` subscription mode.

```java
Consumer consumer = client.newConsumer()
    .topic("my-topic")
    .subscriptionName("my-subscription")
    .subscriptionType(SubscriptionType.Exclusive)
    .subscribe()
```

Only the first consumer is allowed to attach to the subscription; other consumers receive an error. The first consumer receives all 10 messages, and the consuming order is the same as the producing order.

:::note

If the topic is a partitioned topic, the first consumer subscribes to all of its partitions; other consumers are not assigned any partitions and receive an error.

:::

#### Failover

Create new consumers and subscribe with the `Failover` subscription mode.

```java
Consumer consumer1 = client.newConsumer()
    .topic("my-topic")
    .subscriptionName("my-subscription")
    .subscriptionType(SubscriptionType.Failover)
    .subscribe()
Consumer consumer2 = client.newConsumer()
    .topic("my-topic")
    .subscriptionName("my-subscription")
    .subscriptionType(SubscriptionType.Failover)
    .subscribe()
// consumer1 is the active consumer, consumer2 is the standby consumer.
// consumer1 receives 5 messages and then crashes, consumer2 takes over as the active consumer.
```

Multiple consumers can attach to the same subscription, yet only the first consumer is active while the others are standby. When the active consumer is disconnected, messages will be dispatched to one of the standby consumers, and that standby consumer then becomes the active consumer.

If the first active consumer is disconnected after receiving 5 messages, the standby consumer takes over. consumer1 will have received:

```
("key-1", "message-1-1")
("key-1", "message-1-2")
("key-1", "message-1-3")
("key-2", "message-2-1")
("key-2", "message-2-2")
```

consumer2 will receive:

```
("key-2", "message-2-3")
("key-3", "message-3-1")
("key-3", "message-3-2")
("key-4", "message-4-1")
("key-4", "message-4-2")
```

:::note

If a topic is a partitioned topic, each partition has only one active consumer: messages of one partition are distributed to only one consumer, and messages of multiple partitions are distributed to multiple consumers.
:::

#### Shared

Create new consumers and subscribe with the `Shared` subscription mode:

```java
Consumer consumer1 = client.newConsumer()
    .topic("my-topic")
    .subscriptionName("my-subscription")
    .subscriptionType(SubscriptionType.Shared)
    .subscribe()

Consumer consumer2 = client.newConsumer()
    .topic("my-topic")
    .subscriptionName("my-subscription")
    .subscriptionType(SubscriptionType.Shared)
    .subscribe()
// Both consumer1 and consumer2 are active consumers.
```

In shared subscription mode, multiple consumers can attach to the same subscription and messages are delivered in a round robin distribution across consumers.

If the broker dispatches only one message at a time, consumer1 receives the following messages.

```
("key-1", "message-1-1")
("key-1", "message-1-3")
("key-2", "message-2-2")
("key-3", "message-3-1")
("key-4", "message-4-1")
```

consumer2 receives the following messages.

```
("key-1", "message-1-2")
("key-2", "message-2-1")
("key-2", "message-2-3")
("key-3", "message-3-2")
("key-4", "message-4-2")
```

The `Shared` subscription differs from the `Exclusive` and `Failover` subscription modes: it offers better flexibility, but cannot provide an ordering guarantee.

#### Key_shared

`Key_Shared` is a subscription mode introduced in the 2.4.0 release. Create new consumers and subscribe with the `Key_Shared` subscription mode.

```java
Consumer consumer1 = client.newConsumer()
    .topic("my-topic")
    .subscriptionName("my-subscription")
    .subscriptionType(SubscriptionType.Key_Shared)
    .subscribe()

Consumer consumer2 = client.newConsumer()
    .topic("my-topic")
    .subscriptionName("my-subscription")
    .subscriptionType(SubscriptionType.Key_Shared)
    .subscribe()
// Both consumer1 and consumer2 are active consumers.
```

Just like in the `Shared` subscription, all consumers in the `Key_Shared` subscription mode can attach to the same subscription. But `Key_Shared` differs from `Shared`: in the `Key_Shared` subscription mode, messages with the same key are delivered to only one consumer, in order. You do not know in advance which keys will be assigned to which consumer, but a given key is only ever assigned to one consumer at a time.

consumer1 receives the following messages.

```
("key-1", "message-1-1")
("key-1", "message-1-2")
("key-1", "message-1-3")
("key-3", "message-3-1")
("key-3", "message-3-2")
```

consumer2 receives the following messages.

```
("key-2", "message-2-1")
("key-2", "message-2-2")
("key-2", "message-2-3")
("key-4", "message-4-1")
("key-4", "message-4-2")
```

If batching is enabled at the producer side, messages with different keys are added to the same batch by default. The broker dispatches the whole batch to a single consumer, so the default batch mechanism may break the per-key message distribution guarantee of the `Key_Shared` subscription. The producer needs to use the key-based batcher.

```java
Producer producer = client.newProducer()
    .topic("my-topic")
    .batcherBuilder(BatcherBuilder.KEY_BASED)
    .create();
```

Or the producer can disable batching.

```java
Producer producer = client.newProducer()
    .topic("my-topic")
    .enableBatching(false)
    .create();
```

:::note

If the message key is not specified, messages without a key are dispatched to one consumer in order by default.
:::

## Reader

With the [reader interface](concepts-clients.md#reader-interface), Pulsar clients can "manually position" themselves within a topic and read all messages from a specified message onward. The Pulsar API for Java enables you to create {@inject: javadoc:Reader:/client/org/apache/pulsar/client/api/Reader} objects by specifying a topic and a {@inject: javadoc:MessageId:/client/org/apache/pulsar/client/api/MessageId}.

The following is an example.

```java
byte[] msgIdBytes = // Some message ID byte array
MessageId id = MessageId.fromByteArray(msgIdBytes);
Reader reader = pulsarClient.newReader()
    .topic(topic)
    .startMessageId(id)
    .create();

while (true) {
    Message message = reader.readNext();
    // Process message
}
```

In the example above, a `Reader` object is instantiated for a specific topic and message (by ID); the reader iterates over each message in the topic after the message identified by `msgIdBytes` (how that value is obtained depends on the application).

The code sample above shows pointing the `Reader` object to a specific message (by ID), but you can also use `MessageId.earliest` to point to the earliest available message on the topic or `MessageId.latest` to point to the most recent available message.

### Configure reader

When you create a reader, you can use the `loadConf` configuration. The following parameters are available in `loadConf`.

| Name | Type|
    Description
    | Default -|---|---|---|--- -`topicName`|String|Topic name. |None -`receiverQueueSize`|int|Size of a consumer's receiver queue.

    For example, the number of messages that can be accumulated by a consumer before an application calls `Receive`.

    A value higher than the default value increases consumer throughput, though at the expense of more memory utilization.|1000 -`readerListener`|ReaderListener<T>|A listener that is called for message received.|None -`readerName`|String|Reader name.|null -`subscriptionName`|String| Subscription name|When there is a single topic, the default subscription name is `"reader-" + 10-digit UUID`.
    When there are multiple topics, the default subscription name is `"multiTopicsReader-" + 10-digit UUID`.
`subscriptionRolePrefix`|String|Prefix of the subscription role. |null
`cryptoKeyReader`|CryptoKeyReader|Interface that abstracts access to a key store.|null
`cryptoFailureAction`|ConsumerCryptoFailureAction|The action the consumer takes when it receives a message that cannot be decrypted.
* **FAIL**: this is the default option, failing messages until crypto succeeds.
* **DISCARD**: silently acknowledge the message and do not deliver it to the application.
* **CONSUME**: deliver encrypted messages to the application. It is the application's responsibility to decrypt the message.

If decompression of the message fails, or the message contains batched messages, the client is not able to retrieve the individual messages in the batch.

The delivered encrypted message contains an {@link EncryptionContext} carrying the encryption and compression information the application needs to decrypt the consumed message payload.|
ConsumerCryptoFailureAction.FAIL
`readCompacted`|boolean|If `readCompacted` is enabled, a consumer reads messages from a compacted topic rather than the full message backlog of a topic.

    A consumer only sees the latest value for each key in the compacted topic, up until the point in the topic's message backlog at which compaction took place. Beyond that point, messages are sent as normal.

    `readCompacted` can only be enabled on subscriptions to persistent topics that have a single active consumer (for example, failover or exclusive subscriptions).

    Attempting to enable it on subscriptions to non-persistent topics or on shared subscriptions leads to a subscription call throwing a `PulsarClientException`.|false -`resetIncludeHead`|boolean|If set to true, the first message to be returned is the one specified by `messageId`.

    If set to false, the first message to be returned is the one next to the message specified by `messageId`.|false

### Sticky key range reader

In a sticky key range reader, the broker only dispatches messages whose message-key hash falls within the specified key hash range. Multiple key hash ranges can be specified on a reader.

The following is an example of how to create a sticky key range reader.

```java
pulsarClient.newReader()
    .topic(topic)
    .startMessageId(MessageId.earliest)
    .keyHashRange(Range.of(0, 10000), Range.of(20001, 30000))
    .create();
```

The total hash range size is 65536, so the max end of a range must be less than or equal to 65535.

## Schema

In Pulsar, all message data consists of byte arrays "under the hood." [Message schemas](schema-get-started.md) enable you to use other types of data when constructing and handling messages (from simple types like strings to more complex, application-specific types). If you construct, say, a [producer](#producer) without specifying a schema, then the producer can only produce messages of type `byte[]`. The following is an example.

```java
Producer<byte[]> producer = client.newProducer()
    .topic(topic)
    .create();
```

The producer above is equivalent to a `Producer<byte[]>` (in fact, you should *always* explicitly specify the type). If you'd like to use a producer for a different type of data, you'll need to specify a **schema** that informs Pulsar which data type will be transmitted over the [topic](reference-terminology.md#topic).

### AvroBaseStructSchema example

Let's say that you have a `SensorReading` class that you'd like to transmit over a Pulsar topic:

```java
public class SensorReading {
    public float temperature;

    public SensorReading(float temperature) {
        this.temperature = temperature;
    }

    // A no-arg constructor is required
    public SensorReading() {
    }

    public float getTemperature() {
        return temperature;
    }

    public void setTemperature(float temperature) {
        this.temperature = temperature;
    }
}
```

You could then create a `Producer<SensorReading>` (or `Consumer<SensorReading>`) like this:

```java
Producer<SensorReading> producer = client.newProducer(JSONSchema.of(SensorReading.class))
    .topic("sensor-readings")
    .create();
```

The following schema formats are currently available for Java:

* No schema or the byte array schema (which can be applied using `Schema.BYTES`):

  ```java
  Producer<byte[]> bytesProducer = client.newProducer(Schema.BYTES)
      .topic("some-raw-bytes-topic")
      .create();
  ```

  Or, equivalently:

  ```java
  Producer<byte[]> bytesProducer = client.newProducer()
      .topic("some-raw-bytes-topic")
      .create();
  ```

* `String` for normal UTF-8-encoded string data. Apply the schema using `Schema.STRING`:

  ```java
  Producer<String> stringProducer = client.newProducer(Schema.STRING)
      .topic("some-string-topic")
      .create();
  ```

* Create JSON schemas for POJOs using `Schema.JSON`. The following is an example.

  ```java
  Producer<MyPojo> pojoProducer = client.newProducer(Schema.JSON(MyPojo.class))
      .topic("some-pojo-topic")
      .create();
  ```

* Generate Protobuf schemas using `Schema.PROTOBUF`. The following example shows how to create the Protobuf schema and use it to instantiate a new producer:

  ```java
  Producer<MyProtobuf> protobufProducer = client.newProducer(Schema.PROTOBUF(MyProtobuf.class))
      .topic("some-protobuf-topic")
      .create();
  ```

* Define Avro schemas with `Schema.AVRO`.
The following code snippet demonstrates how to create and use Avro schema. - - ```java - - Producer avroProducer = client.newProducer(Schema.AVRO(MyAvro.class)) - .topic("some-avro-topic") - .create(); - - ``` - -### ProtobufNativeSchema example - -For example of ProtobufNativeSchema, see [`SchemaDefinition` in `Complex type`](schema-understand.md#complex-type). - -## Authentication - -Pulsar currently supports three authentication schemes: [TLS](security-tls-authentication.md), [Athenz](security-athenz.md), and [Oauth2](security-oauth2.md). You can use the Pulsar Java client with all of them. - -### TLS Authentication - -To use [TLS](security-tls-authentication.md), you need to set TLS to `true` using the `setUseTls` method, point your Pulsar client to a TLS cert path, and provide paths to cert and key files. - -The following is an example. - -```java - -Map authParams = new HashMap(); -authParams.put("tlsCertFile", "/path/to/client-cert.pem"); -authParams.put("tlsKeyFile", "/path/to/client-key.pem"); - -Authentication tlsAuth = AuthenticationFactory - .create(AuthenticationTls.class.getName(), authParams); - -PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar+ssl://my-broker.com:6651") - .enableTls(true) - .tlsTrustCertsFilePath("/path/to/cacert.pem") - .authentication(tlsAuth) - .build(); - -``` - -### Athenz - -To use [Athenz](security-athenz.md) as an authentication provider, you need to [use TLS](#tls-authentication) and provide values for four parameters in a hash: - -* `tenantDomain` -* `tenantService` -* `providerDomain` -* `privateKey` - -You can also set an optional `keyId`. The following is an example. - -```java - -Map authParams = new HashMap(); -authParams.put("tenantDomain", "shopping"); // Tenant domain name -authParams.put("tenantService", "some_app"); // Tenant service name -authParams.put("providerDomain", "pulsar"); // Provider domain name -authParams.put("privateKey", "file:///path/to/private.pem"); // Tenant private key path -authParams.put("keyId", "v1"); // Key id for the tenant private key (optional, default: "0") - -Authentication athenzAuth = AuthenticationFactory - .create(AuthenticationAthenz.class.getName(), authParams); - -PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar+ssl://my-broker.com:6651") - .enableTls(true) - .tlsTrustCertsFilePath("/path/to/cacert.pem") - .authentication(athenzAuth) - .build(); - -``` - -> #### Supported pattern formats -> The `privateKey` parameter supports the following three pattern formats: -> * `file:///path/to/file` -> * `file:/path/to/file` -> * `data:application/x-pem-file;base64,` - -### Oauth2 - -The following example shows how to use [Oauth2](security-oauth2.md) as an authentication provider for the Pulsar Java client. - -You can use the factory method to configure authentication for Pulsar Java client. - -```java - -PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://broker.example.com:6650/") - .authentication( - AuthenticationFactoryOAuth2.clientCredentials(this.issuerUrl, this.credentialsUrl, this.audience)) - .build(); - -``` - -In addition, you can also use the encoded parameters to configure authentication for Pulsar Java client. 
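Note that when the JSON parameter string is embedded in Java source this way, each inner double quote must be escaped as `\"` for the string literal to compile.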
- -```java - -Authentication auth = AuthenticationFactory - .create(AuthenticationOAuth2.class.getName(), "{"type":"client_credentials","privateKey":"...","issuerUrl":"...","audience":"..."}"); -PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://broker.example.com:6650/") - .authentication(auth) - .build(); - -``` - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/client-libraries-node.md b/site2/website/versioned_docs/version-2.9.3-deprecated/client-libraries-node.md deleted file mode 100644 index 1ff37b26294666..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/client-libraries-node.md +++ /dev/null @@ -1,643 +0,0 @@ ---- -id: client-libraries-node -title: The Pulsar Node.js client -sidebar_label: "Node.js" -original_id: client-libraries-node ---- - -The Pulsar Node.js client can be used to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Node.js. - -All the methods in [producers](#producers), [consumers](#consumers), and [readers](#readers) of a Node.js client are thread-safe. - -For 1.3.0 or later versions, [type definitions](https://github.com/apache/pulsar-client-node/blob/master/index.d.ts) used in TypeScript are available. - -## Installation - -You can install the [`pulsar-client`](https://www.npmjs.com/package/pulsar-client) library via [npm](https://www.npmjs.com/). - -### Requirements -Pulsar Node.js client library is based on the C++ client library. -Follow [these instructions](client-libraries-cpp.md#compilation) and install the Pulsar C++ client library. - -### Compatibility - -Compatibility between each version of the Node.js client and the C++ client is as follows: - -| Node.js client | C++ client | -| :------------- | :------------- | -| 1.0.0 | 2.3.0 or later | -| 1.1.0 | 2.4.0 or later | -| 1.2.0 | 2.5.0 or later | - -If an incompatible version of the C++ client is installed, you may fail to build or run this library. - -### Installation using npm - -Install the `pulsar-client` library via [npm](https://www.npmjs.com/): - -```shell - -$ npm install pulsar-client - -``` - -:::note - -Also, this library works only in Node.js 10.x or later because it uses the [`node-addon-api`](https://github.com/nodejs/node-addon-api) module to wrap the C++ library. - -::: - -## Connection URLs -To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL. - -Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme and have a default port of 6650. Here is an example for `localhost`: - -```http - -pulsar://localhost:6650 - -``` - -A URL for a production Pulsar cluster may look something like this: - -```http - -pulsar://pulsar.us-west.example.com:6650 - -``` - -If you are using [TLS encryption](security-tls-transport.md) or [TLS Authentication](security-tls-authentication.md), the URL looks like this: - -```http - -pulsar+ssl://pulsar.us-west.example.com:6651 - -``` - -## Create a client - -In order to interact with Pulsar, you first need a client object. You can create a client instance using a `new` operator and the `Client` method, passing in a client options object (more on configuration [below](#client-configuration)). 
- -Here is an example: - -```JavaScript - -const Pulsar = require('pulsar-client'); - -(async () => { - const client = new Pulsar.Client({ - serviceUrl: 'pulsar://localhost:6650', - }); - - await client.close(); -})(); - -``` - -### Client configuration - -The following configurable parameters are available for Pulsar clients: - -| Parameter | Description | Default | -| :-------- | :---------- | :------ | -| `serviceUrl` | The connection URL for the Pulsar cluster. See [above](#connection-urls) for more info. | | -| `authentication` | Configure the authentication provider. (default: no authentication). See [TLS Authentication](security-tls-authentication.md) for more info. | | -| `operationTimeoutSeconds` | The timeout for Node.js client operations (creating producers, subscribing to and unsubscribing from [topics](reference-terminology.md#topic)). Retries occur until this threshold is reached, at which point the operation fails. | 30 | -| `ioThreads` | The number of threads to use for handling connections to Pulsar [brokers](reference-terminology.md#broker). | 1 | -| `messageListenerThreads` | The number of threads used by message listeners ([consumers](#consumers) and [readers](#readers)). | 1 | -| `concurrentLookupRequest` | The number of concurrent lookup requests that can be sent on each broker connection. Setting a maximum helps to keep from overloading brokers. You should set values over the default of 50000 only if the client needs to produce and/or subscribe to thousands of Pulsar topics. | 50000 | -| `tlsTrustCertsFilePath` | The file path for the trusted TLS certificate. | | -| `tlsValidateHostname` | The boolean value of setup whether to enable TLS hostname verification. | `false` | -| `tlsAllowInsecureConnection` | The boolean value of setup whether the Pulsar client accepts untrusted TLS certificate from broker. | `false` | -| `statsIntervalInSeconds` | Interval between each stat info. Stats is activated with positive statsInterval. The value should be set to 1 second at least | 600 | -| `log` | A function that is used for logging. | `console.log` | - -## Producers - -Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Node.js producers using a producer configuration object. - -Here is an example: - -```JavaScript - -const producer = await client.createProducer({ - topic: 'my-topic', -}); - -await producer.send({ - data: Buffer.from("Hello, Pulsar"), -}); - -await producer.close(); - -``` - -> #### Promise operation -> When you create a new Pulsar producer, the operation returns `Promise` object and get producer instance or an error through executor function. -> In this example, using await operator instead of executor function. - -### Producer operations - -Pulsar Node.js producers have the following methods available: - -| Method | Description | Return type | -| :----- | :---------- | :---------- | -| `send(Object)` | Publishes a [message](#messages) to the producer's topic. When the message is successfully acknowledged by the Pulsar broker, or an error is thrown, the Promise object whose result is the message ID runs executor function. | `Promise` | -| `flush()` | Sends message from send queue to Pulsar broker. When the message is successfully acknowledged by the Pulsar broker, or an error is thrown, the Promise object runs executor function. | `Promise` | -| `close()` | Closes the producer and releases all resources allocated to it. Once `close()` is called, no more messages are accepted from the publisher. 
This method returns a Promise object. It runs the executor function when all pending publish requests are persisted by Pulsar. If an error is thrown, no pending writes are retried. | `Promise` | -| `getProducerName()` | Getter method of the producer name. | `string` | -| `getTopic()` | Getter method of the name of the topic. | `string` | - -### Producer configuration - -| Parameter | Description | Default | -| :-------- | :---------- | :------ | -| `topic` | The Pulsar [topic](reference-terminology.md#topic) to which the producer publishes messages. | | -| `producerName` | A name for the producer. If you do not explicitly assign a name, Pulsar automatically generates a globally unique name. If you choose to explicitly assign a name, it needs to be unique across *all* Pulsar clusters, otherwise the creation operation throws an error. | | -| `sendTimeoutMs` | When publishing a message to a topic, the producer waits for an acknowledgment from the responsible Pulsar [broker](reference-terminology.md#broker). If a message is not acknowledged within the threshold set by this parameter, an error is thrown. If you set `sendTimeoutMs` to -1, the timeout is set to infinity (and thus removed). Removing the send timeout is recommended when using Pulsar's [message de-duplication](cookbooks-deduplication.md) feature. | 30000 | -| `initialSequenceId` | The initial sequence ID of the message. When producer send message, add sequence ID to message. The ID is increased each time to send. | | -| `maxPendingMessages` | The maximum size of the queue holding pending messages (i.e. messages waiting to receive an acknowledgment from the [broker](reference-terminology.md#broker)). By default, when the queue is full all calls to the `send` method fails *unless* `blockIfQueueFull` is set to `true`. | 1000 | -| `maxPendingMessagesAcrossPartitions` | The maximum size of the sum of partition's pending queue. | 50000 | -| `blockIfQueueFull` | If set to `true`, the producer's `send` method waits when the outgoing message queue is full rather than failing and throwing an error (the size of that queue is dictated by the `maxPendingMessages` parameter); if set to `false` (the default), `send` operations fails and throw a error when the queue is full. | `false` | -| `messageRoutingMode` | The message routing logic (for producers on [partitioned topics](concepts-messaging.md#partitioned-topics)). This logic is applied only when no key is set on messages. The available options are: round robin (`RoundRobinDistribution`), or publishing all messages to a single partition (`UseSinglePartition`, the default). | `UseSinglePartition` | -| `hashingScheme` | The hashing function that determines the partition on which a particular message is published (partitioned topics only). The available options are: `JavaStringHash` (the equivalent of `String.hashCode()` in Java), `Murmur3_32Hash` (applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function), or `BoostHash` (applies the hashing function from C++'s [Boost](https://www.boost.org/doc/libs/1_62_0/doc/html/hash.html) library). | `BoostHash` | -| `compressionType` | The message data compression type used by the producer. The available options are [`LZ4`](https://github.com/lz4/lz4), and [`Zlib`](https://zlib.net/), [ZSTD](https://github.com/facebook/zstd/), [SNAPPY](https://github.com/google/snappy/). | Compression None | -| `batchingEnabled` | If set to `true`, the producer send message as batch. 
| `true` | -| `batchingMaxPublishDelayMs` | The maximum time of delay sending message in batching. | 10 | -| `batchingMaxMessages` | The maximum size of sending message in each time of batching. | 1000 | -| `properties` | The metadata of producer. | | - -### Producer example - -This example creates a Node.js producer for the `my-topic` topic and sends 10 messages to that topic: - -```JavaScript - -const Pulsar = require('pulsar-client'); - -(async () => { - // Create a client - const client = new Pulsar.Client({ - serviceUrl: 'pulsar://localhost:6650', - }); - - // Create a producer - const producer = await client.createProducer({ - topic: 'my-topic', - }); - - // Send messages - for (let i = 0; i < 10; i += 1) { - const msg = `my-message-${i}`; - producer.send({ - data: Buffer.from(msg), - }); - console.log(`Sent message: ${msg}`); - } - await producer.flush(); - - await producer.close(); - await client.close(); -})(); - -``` - -## Consumers - -Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on that topic/those topics. You can [configure](#consumer-configuration) Node.js consumers using a consumer configuration object. - -Here is an example: - -```JavaScript - -const consumer = await client.subscribe({ - topic: 'my-topic', - subscription: 'my-subscription', -}); - -const msg = await consumer.receive(); -console.log(msg.getData().toString()); -consumer.acknowledge(msg); - -await consumer.close(); - -``` - -> #### Promise operation -> When you create a new Pulsar consumer, the operation returns `Promise` object and get consumer instance or an error through executor function. -> In this example, using await operator instead of executor function. - -### Consumer operations - -Pulsar Node.js consumers have the following methods available: - -| Method | Description | Return type | -| :----- | :---------- | :---------- | -| `receive()` | Receives a single message from the topic. When the message is available, the Promise object run executor function and get message object. | `Promise` | -| `receive(Number)` | Receives a single message from the topic with specific timeout in milliseconds. | `Promise` | -| `acknowledge(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message object. | `void` | -| `acknowledgeId(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID object. | `void` | -| `acknowledgeCumulative(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message. The `acknowledgeCumulative` method returns void, and send the ack to the broker asynchronously. After that, the messages are *not* redelivered to the consumer. Cumulative acking can not be used with a [shared](concepts-messaging.md#shared) subscription type. | `void` | -| `acknowledgeCumulativeId(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message ID. | `void` | -| `negativeAcknowledge(Message)`| [Negatively acknowledges](reference-terminology.md#negative-acknowledgment-nack) a message to the Pulsar broker by message object. | `void` | -| `negativeAcknowledgeId(MessageId)` | [Negatively acknowledges](reference-terminology.md#negative-acknowledgment-nack) a message to the Pulsar broker by message ID object. 
| `void` | -| `close()` | Closes the consumer, disabling its ability to receive messages from the broker. | `Promise` | -| `unsubscribe()` | Unsubscribes the subscription. | `Promise` | - -### Consumer configuration - -| Parameter | Description | Default | -| :-------- | :---------- | :------ | -| `topic` | The Pulsar topic on which the consumer establishes a subscription and listen for messages. | | -| `topics` | The array of topics. | | -| `topicsPattern` | The regular expression for topics. | | -| `subscription` | The subscription name for this consumer. | | -| `subscriptionType` | Available options are `Exclusive`, `Shared`, `Key_Shared`, and `Failover`. | `Exclusive` | -| `subscriptionInitialPosition` | Initial position at which to set cursor when subscribing to a topic at first time. | `SubscriptionInitialPosition.Latest` | -| `ackTimeoutMs` | Acknowledge timeout in milliseconds. | 0 | -| `nAckRedeliverTimeoutMs` | Delay to wait before redelivering messages that failed to be processed. | 60000 | -| `receiverQueueSize` | Sets the size of the consumer's receiver queue, i.e. the number of messages that can be accumulated by the consumer before the application calls `receive`. A value higher than the default of 1000 could increase consumer throughput, though at the expense of more memory utilization. | 1000 | -| `receiverQueueSizeAcrossPartitions` | Set the max total receiver queue size across partitions. This setting is used to reduce the receiver queue size for individual partitions if the total exceeds this value. | 50000 | -| `consumerName` | The name of consumer. Currently(v2.4.1), [failover](concepts-messaging.md#failover) mode use consumer name in ordering. | | -| `properties` | The metadata of consumer. | | -| `listener`| A listener that is called for a message received. | | -| `readCompacted`| If enabling `readCompacted`, a consumer reads messages from a compacted topic rather than reading a full message backlog of a topic.

    A consumer only sees the latest value for each key in the compacted topic, up until the point in the topic's message backlog at which compaction took place. Beyond that point, messages are sent as normal.

    `readCompacted` can only be enabled on subscriptions to persistent topics that have a single active consumer (such as failover or exclusive subscriptions).

    Attempting to enable it on subscriptions to non-persistent topics or on shared subscriptions leads to a subscription call throwing a `PulsarClientException`. | false | - -### Consumer example - -This example creates a Node.js consumer with the `my-subscription` subscription on the `my-topic` topic, receives messages, prints the content that arrive, and acknowledges each message to the Pulsar broker for 10 times: - -```JavaScript - -const Pulsar = require('pulsar-client'); - -(async () => { - // Create a client - const client = new Pulsar.Client({ - serviceUrl: 'pulsar://localhost:6650', - }); - - // Create a consumer - const consumer = await client.subscribe({ - topic: 'my-topic', - subscription: 'my-subscription', - subscriptionType: 'Exclusive', - }); - - // Receive messages - for (let i = 0; i < 10; i += 1) { - const msg = await consumer.receive(); - console.log(msg.getData().toString()); - consumer.acknowledge(msg); - } - - await consumer.close(); - await client.close(); -})(); - -``` - -Instead a consumer can be created with `listener` to process messages. - -```JavaScript - -// Create a consumer -const consumer = await client.subscribe({ - topic: 'my-topic', - subscription: 'my-subscription', - subscriptionType: 'Exclusive', - listener: (msg, msgConsumer) => { - console.log(msg.getData().toString()); - msgConsumer.acknowledge(msg); - }, -}); - -``` - -## Readers - -Pulsar readers process messages from Pulsar topics. Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recently unacked message). You can [configure](#reader-configuration) Node.js readers using a reader configuration object. - -Here is an example: - -```JavaScript - -const reader = await client.createReader({ - topic: 'my-topic', - startMessageId: Pulsar.MessageId.earliest(), -}); - -const msg = await reader.readNext(); -console.log(msg.getData().toString()); - -await reader.close(); - -``` - -### Reader operations - -Pulsar Node.js readers have the following methods available: - -| Method | Description | Return type | -| :----- | :---------- | :---------- | -| `readNext()` | Receives the next message on the topic (analogous to the `receive` method for [consumers](#consumer-operations)). When the message is available, the Promise object run executor function and get message object. | `Promise` | -| `readNext(Number)` | Receives a single message from the topic with specific timeout in milliseconds. | `Promise` | -| `hasNext()` | Return whether the broker has next message in target topic. | `Boolean` | -| `close()` | Closes the reader, disabling its ability to receive messages from the broker. | `Promise` | - -### Reader configuration - -| Parameter | Description | Default | -| :-------- | :---------- | :------ | -| `topic` | The Pulsar [topic](reference-terminology.md#topic) on which the reader establishes a subscription and listen for messages. | | -| `startMessageId` | The initial reader position, i.e. the message at which the reader begins processing messages. The options are `Pulsar.MessageId.earliest` (the earliest available message on the topic), `Pulsar.MessageId.latest` (the latest available message on the topic), or a message ID object for a position that is not earliest or latest. | | -| `receiverQueueSize` | Sets the size of the reader's receiver queue, i.e. 
the number of messages that can be accumulated by the reader before the application calls `readNext`. A value higher than the default of 1000 could increase reader throughput, though at the expense of more memory utilization. | 1000 | -`readerName` | The name of the reader. | | -`subscriptionRolePrefix` | The subscription role prefix. | | -`readCompacted` | If `readCompacted` is enabled, a consumer reads messages from a compacted topic rather than from the full message backlog of the topic.

    A consumer only sees the latest value for each key in the compacted topic, up until the point in the topic's message backlog at which compaction took place. Beyond that point, messages are sent as normal.

    `readCompacted` can only be enabled on subscriptions to persistent topics that have a single active consumer (such as failover or exclusive subscriptions).

    Attempting to enable it on subscriptions to non-persistent topics or on shared subscriptions leads to a subscription call throwing a `PulsarClientException`. | `false` | - - -### Reader example - -This example creates a Node.js reader with the `my-topic` topic, reads messages, and prints the content that arrive for 10 times: - -```JavaScript - -const Pulsar = require('pulsar-client'); - -(async () => { - // Create a client - const client = new Pulsar.Client({ - serviceUrl: 'pulsar://localhost:6650', - operationTimeoutSeconds: 30, - }); - - // Create a reader - const reader = await client.createReader({ - topic: 'my-topic', - startMessageId: Pulsar.MessageId.earliest(), - }); - - // read messages - for (let i = 0; i < 10; i += 1) { - const msg = await reader.readNext(); - console.log(msg.getData().toString()); - } - - await reader.close(); - await client.close(); -})(); - -``` - -## Messages - -In Pulsar Node.js client, you have to construct producer message object for producer. - -Here is an example message: - -```JavaScript - -const msg = { - data: Buffer.from('Hello, Pulsar'), - partitionKey: 'key1', - properties: { - 'foo': 'bar', - }, - eventTimestamp: Date.now(), - replicationClusters: [ - 'cluster1', - 'cluster2', - ], -} - -await producer.send(msg); - -``` - -The following keys are available for producer message objects: - -| Parameter | Description | -| :-------- | :---------- | -| `data` | The actual data payload of the message. | -| `properties` | A Object for any application-specific metadata attached to the message. | -| `eventTimestamp` | The timestamp associated with the message. | -| `sequenceId` | The sequence ID of the message. | -| `partitionKey` | The optional key associated with the message (particularly useful for things like topic compaction). | -| `replicationClusters` | The clusters to which this message is replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default. | -| `deliverAt` | The absolute timestamp at or after which the message is delivered. | | -| `deliverAfter` | The relative delay after which the message is delivered. | | - -### Message object operations - -In Pulsar Node.js client, you can receive (or read) message object as consumer (or reader). - -The message object have the following methods available: - -| Method | Description | Return type | -| :----- | :---------- | :---------- | -| `getTopicName()` | Getter method of topic name. | `String` | -| `getProperties()` | Getter method of properties. | `Array` | -| `getData()` | Getter method of message data. | `Buffer` | -| `getMessageId()` | Getter method of [message id object](#message-id-object-operations). | `Object` | -| `getPublishTimestamp()` | Getter method of publish timestamp. | `Number` | -| `getEventTimestamp()` | Getter method of event timestamp. | `Number` | -| `getRedeliveryCount()` | Getter method of redelivery count. | `Number` | -| `getPartitionKey()` | Getter method of partition key. | `String` | - -### Message ID object operations - -In Pulsar Node.js client, you can get message id object from message object. - -The message id object have the following methods available: - -| Method | Description | Return type | -| :----- | :---------- | :---------- | -| `serialize()` | Serialize the message id into a Buffer for storing. | `Buffer` | -| `toString()` | Get message id as String. | `String` | - -The client has static method of message id object. 
You can access it as `Pulsar.MessageId.someStaticMethod` too. - -The following static methods are available for the message id object: - -| Method | Description | Return type | -| :----- | :---------- | :---------- | -| `earliest()` | MessageId representing the earliest, or oldest available message stored in the topic. | `Object` | -| `latest()` | MessageId representing the latest, or last published message in the topic. | `Object` | -| `deserialize(Buffer)` | Deserialize a message id object from a Buffer. | `Object` | - -## End-to-end encryption - -[End-to-end encryption](https://pulsar.apache.org/docs/en/next/cookbooks-encryption/#docsNav) allows applications to encrypt messages at producers and decrypt at consumers. - -### Configuration - -If you want to use the end-to-end encryption feature in the Node.js client, you need to configure `publicKeyPath` and `privateKeyPath` for both producer and consumer. - -``` - -publicKeyPath: "./public.pem" -privateKeyPath: "./private.pem" - -``` - -### Tutorial - -This section provides step-by-step instructions on how to use the end-to-end encryption feature in the Node.js client. - -**Prerequisite** - -- Pulsar C++ client 2.7.1 or later - -**Step** - -1. Create both public and private key pairs. - - **Input** - - ```shell - - openssl genrsa -out private.pem 2048 - openssl rsa -in private.pem -pubout -out public.pem - - ``` - -2. Create a producer to send encrypted messages. - - **Input** - - ```nodejs - - const Pulsar = require('pulsar-client'); - - (async () => { - // Create a client - const client = new Pulsar.Client({ - serviceUrl: 'pulsar://localhost:6650', - operationTimeoutSeconds: 30, - }); - - // Create a producer - const producer = await client.createProducer({ - topic: 'persistent://public/default/my-topic', - sendTimeoutMs: 30000, - batchingEnabled: true, - publicKeyPath: "./public.pem", - privateKeyPath: "./private.pem", - encryptionKey: "encryption-key" - }); - - console.log(producer.ProducerConfig) - // Send messages - for (let i = 0; i < 10; i += 1) { - const msg = `my-message-${i}`; - producer.send({ - data: Buffer.from(msg), - }); - console.log(`Sent message: ${msg}`); - } - await producer.flush(); - - await producer.close(); - await client.close(); - })(); - - ``` - -3. Create a consumer to receive encrypted messages. - - **Input** - - ```nodejs - - const Pulsar = require('pulsar-client'); - - (async () => { - // Create a client - const client = new Pulsar.Client({ - serviceUrl: 'pulsar://172.25.0.3:6650', - operationTimeoutSeconds: 30 - }); - - // Create a consumer - const consumer = await client.subscribe({ - topic: 'persistent://public/default/my-topic', - subscription: 'sub1', - subscriptionType: 'Shared', - ackTimeoutMs: 10000, - publicKeyPath: "./public.pem", - privateKeyPath: "./private.pem" - }); - - console.log(consumer) - // Receive messages - for (let i = 0; i < 10; i += 1) { - const msg = await consumer.receive(); - console.log(msg.getData().toString()); - consumer.acknowledge(msg); - } - - await consumer.close(); - await client.close(); - })(); - - ``` - -4. Run the consumer to receive encrypted messages. - - **Input** - - ```shell - - node consumer.js - - ``` - -5. In a new terminal tab, run the producer to produce encrypted messages. - - **Input** - - ```shell - - node producer.js - - ``` - - Now you can see the producer sends messages and the consumer receives messages successfully. - - **Output** - - This is from the producer side. 
- - ``` - - Sent message: my-message-0 - Sent message: my-message-1 - Sent message: my-message-2 - Sent message: my-message-3 - Sent message: my-message-4 - Sent message: my-message-5 - Sent message: my-message-6 - Sent message: my-message-7 - Sent message: my-message-8 - Sent message: my-message-9 - - ``` - - This is from the consumer side. - - ``` - - my-message-0 - my-message-1 - my-message-2 - my-message-3 - my-message-4 - my-message-5 - my-message-6 - my-message-7 - my-message-8 - my-message-9 - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/client-libraries-python.md b/site2/website/versioned_docs/version-2.9.3-deprecated/client-libraries-python.md deleted file mode 100644 index 90cc840daa0a81..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/client-libraries-python.md +++ /dev/null @@ -1,481 +0,0 @@ ---- -id: client-libraries-python -title: Pulsar Python client -sidebar_label: "Python" -original_id: client-libraries-python ---- - -Pulsar Python client library is a wrapper over the existing [C++ client library](client-libraries-cpp.md) and exposes all of the [same features](/api/cpp). You can find the code in the [Python directory](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp/python) of the C++ client code. - -All the methods in producer, consumer, and reader of a Python client are thread-safe. - -[pdoc](https://github.com/BurntSushi/pdoc)-generated API docs for the Python client are available [here](/api/python). - -## Install - -You can install the [`pulsar-client`](https://pypi.python.org/pypi/pulsar-client) library either via [PyPi](https://pypi.python.org/pypi), using [pip](#installation-using-pip), or by building the library from [source](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp). - -### Install using pip - -To install the `pulsar-client` library as a pre-built package using the [pip](https://pip.pypa.io/en/stable/) package manager: - -```shell - -$ pip install pulsar-client==@pulsar:version_number@ - -``` - -### Optional dependencies -If you install the client libraries on Linux to support services like Pulsar functions or Avro serialization, you can install optional components alongside the `pulsar-client` library. - -```shell - -# avro serialization -$ pip install pulsar-client=='@pulsar:version_number@[avro]' - -# functions runtime -$ pip install pulsar-client=='@pulsar:version_number@[functions]' - -# all optional components -$ pip install pulsar-client=='@pulsar:version_number@[all]' - -``` - -Installation via PyPi is available for the following Python versions: - -Platform | Supported Python versions -:--------|:------------------------- -MacOS
    10.13 (High Sierra), 10.14 (Mojave)
    | 2.7, 3.7 -Linux | 2.7, 3.4, 3.5, 3.6, 3.7, 3.8 - -### Install from source - -To install the `pulsar-client` library by building from source, follow [instructions](client-libraries-cpp.md#compilation) and compile the Pulsar C++ client library. That builds the Python binding for the library. - -To install the built Python bindings: - -```shell - -$ git clone https://github.com/apache/pulsar -$ cd pulsar/pulsar-client-cpp/python -$ sudo python setup.py install - -``` - -## API Reference - -The complete Python API reference is available at [api/python](/api/python). - -## Examples - -You can find a variety of Python code examples for the [pulsar-client](/pulsar-client-cpp/python) library. - -### Producer example - -The following example creates a Python producer for the `my-topic` topic and sends 10 messages on that topic: - -```python - -import pulsar - -client = pulsar.Client('pulsar://localhost:6650') - -producer = client.create_producer('my-topic') - -for i in range(10): - producer.send(('Hello-%d' % i).encode('utf-8')) - -client.close() - -``` - -### Consumer example - -The following example creates a consumer with the `my-subscription` subscription name on the `my-topic` topic, receives incoming messages, prints the content and ID of messages that arrive, and acknowledges each message to the Pulsar broker. - -```python - -import pulsar - -client = pulsar.Client('pulsar://localhost:6650') - -consumer = client.subscribe('my-topic', 'my-subscription') - -while True: - msg = consumer.receive() - try: - print("Received message '{}' id='{}'".format(msg.data(), msg.message_id())) - # Acknowledge successful processing of the message - consumer.acknowledge(msg) - except: - # Message failed to be processed - consumer.negative_acknowledge(msg) - -client.close() - -``` - -This example shows how to configure negative acknowledgement. - -```python - -from pulsar import Client, schema -client = Client('pulsar://localhost:6650') -consumer = client.subscribe('negative_acks','test',schema=schema.StringSchema()) -producer = client.create_producer('negative_acks',schema=schema.StringSchema()) -for i in range(10): - print('send msg "hello-%d"' % i) - producer.send_async('hello-%d' % i, callback=None) -producer.flush() -for i in range(10): - msg = consumer.receive() - consumer.negative_acknowledge(msg) - print('receive and nack msg "%s"' % msg.data()) -for i in range(10): - msg = consumer.receive() - consumer.acknowledge(msg) - print('receive and ack msg "%s"' % msg.data()) -try: - # No more messages expected - msg = consumer.receive(100) -except: - print("no more msg") - pass - -``` - -### Reader interface example - -You can use the Pulsar Python API to use the Pulsar [reader interface](concepts-clients.md#reader-interface). Here's an example: - -```python - -# MessageId taken from a previously fetched message -msg_id = msg.message_id() - -reader = client.create_reader('my-topic', msg_id) - -while True: - msg = reader.read_next() - print("Received message '{}' id='{}'".format(msg.data(), msg.message_id())) - # No acknowledgment - -``` - -### Multi-topic subscriptions - -In addition to subscribing a consumer to a single Pulsar topic, you can also subscribe to multiple topics simultaneously. To use multi-topic subscriptions, you can supply a regular expression (regex) or a `List` of topics. If you select topics via regex, all topics must be within the same Pulsar namespace. 
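
A subscription over an explicit list of topics looks like the following sketch; the topic names here are illustrative:

```python

import pulsar

client = pulsar.Client('pulsar://localhost:6650')

# Subscribe to a fixed list of topics rather than a regex
consumer = client.subscribe(
    ['persistent://public/default/topic-1',
     'persistent://public/default/topic-2'],
    'my-subscription')

```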
- -The following is an example: - -```python - -import re -consumer = client.subscribe(re.compile('persistent://public/default/topic-*'), 'my-subscription') -while True: - msg = consumer.receive() - try: - print("Received message '{}' id='{}'".format(msg.data(), msg.message_id())) - # Acknowledge successful processing of the message - consumer.acknowledge(msg) - except: - # Message failed to be processed - consumer.negative_acknowledge(msg) -client.close() - -``` - -## Schema - -### Declare and validate schema - -You can declare a schema by passing a class that inherits -from `pulsar.schema.Record` and defines the fields as -class variables. For example: - -```python - -from pulsar.schema import * - -class Example(Record): - a = String() - b = Integer() - c = Boolean() - -``` - -With this simple schema definition, you can create producers, consumers and readers instances that refer to that. - -```python - -producer = client.create_producer( - topic='my-topic', - schema=AvroSchema(Example) ) - -producer.send(Example(a='Hello', b=1)) - -``` - -After creating the producer, the Pulsar broker validates that the existing topic schema is indeed of "Avro" type and that the format is compatible with the schema definition of the `Example` class. - -If there is a mismatch, an exception occurs in the producer creation. - -Once a producer is created with a certain schema definition, -it will only accept objects that are instances of the declared -schema class. - -Similarly, for a consumer/reader, the consumer will return an -object, instance of the schema record class, rather than the raw -bytes: - -```python - -consumer = client.subscribe( - topic='my-topic', - subscription_name='my-subscription', - schema=AvroSchema(Example) ) - -while True: - msg = consumer.receive() - ex = msg.value() - try: - print("Received message a={} b={} c={}".format(ex.a, ex.b, ex.c)) - # Acknowledge successful processing of the message - consumer.acknowledge(msg) - except: - # Message failed to be processed - consumer.negative_acknowledge(msg) - -``` - -### Supported schema types - -You can use different builtin schema types in Pulsar. All the definitions are in the `pulsar.schema` package. - -| Schema | Notes | -| ------ | ----- | -| `BytesSchema` | Get the raw payload as a `bytes` object. No serialization/deserialization are performed. This is the default schema mode | -| `StringSchema` | Encode/decode payload as a UTF-8 string. Uses `str` objects | -| `JsonSchema` | Require record definition. Serializes the record into standard JSON payload | -| `AvroSchema` | Require record definition. Serializes in AVRO format | - -### Schema definition reference - -The schema definition is done through a class that inherits from `pulsar.schema.Record`. - -This class has a number of fields which can be of either -`pulsar.schema.Field` type or another nested `Record`. All the -fields are specified in the `pulsar.schema` package. The fields -are matching the AVRO fields types. - -| Field Type | Python Type | Notes | -| ---------- | ----------- | ----- | -| `Boolean` | `bool` | | -| `Integer` | `int` | | -| `Long` | `int` | | -| `Float` | `float` | | -| `Double` | `float` | | -| `Bytes` | `bytes` | | -| `String` | `str` | | -| `Array` | `list` | Need to specify record type for items. | -| `Map` | `dict` | Key is always `String`. Need to specify value type. | - -Additionally, any Python `Enum` type can be used as a valid field type. - -#### Fields parameters - -When adding a field, you can use these parameters in the constructor. 
- -| Argument | Default | Notes | -| ---------- | --------| ----- | -| `default` | `None` | Set a default value for the field. Eg: `a = Integer(default=5)` | -| `required` | `False` | Mark the field as "required". It is set in the schema accordingly. | - -#### Schema definition examples - -##### Simple definition - -```python - -class Example(Record): - a = String() - b = Integer() - c = Array(String()) - i = Map(String()) - -``` - -##### Using enums - -```python - -from enum import Enum - -class Color(Enum): - red = 1 - green = 2 - blue = 3 - -class Example(Record): - name = String() - color = Color - -``` - -##### Complex types - -```python - -class MySubRecord(Record): - x = Integer() - y = Long() - z = String() - -class Example(Record): - a = String() - sub = MySubRecord() - -``` - -##### Set namespace for Avro schema - -Set the namespace for Avro Record schema using the special field `_avro_namespace`. - -```python - -class NamespaceDemo(Record): - _avro_namespace = 'xxx.xxx.xxx' - x = String() - y = Integer() - -``` - -The schema definition is like this. - -``` - -{ - 'name': 'NamespaceDemo', 'namespace': 'xxx.xxx.xxx', 'type': 'record', 'fields': [ - {'name': 'x', 'type': ['null', 'string']}, - {'name': 'y', 'type': ['null', 'int']} - ] -} - -``` - -## End-to-end encryption - -[End-to-end encryption](https://pulsar.apache.org/docs/en/next/cookbooks-encryption/#docsNav) allows applications to encrypt messages at producers and decrypt messages at consumers. - -### Configuration - -To use the end-to-end encryption feature in the Python client, you need to configure `publicKeyPath` and `privateKeyPath` for both producer and consumer. - -``` - -publicKeyPath: "./public.pem" -privateKeyPath: "./private.pem" - -``` - -### Tutorial - -This section provides step-by-step instructions on how to use the end-to-end encryption feature in the Python client. - -**Prerequisite** - -- Pulsar Python client 2.7.1 or later - -**Step** - -1. Create both public and private key pairs. - - **Input** - - ```shell - - openssl genrsa -out private.pem 2048 - openssl rsa -in private.pem -pubout -out public.pem - - ``` - -2. Create a producer to send encrypted messages. - - **Input** - - ```python - - import pulsar - - publicKeyPath = "./public.pem" - privateKeyPath = "./private.pem" - crypto_key_reader = pulsar.CryptoKeyReader(publicKeyPath, privateKeyPath) - client = pulsar.Client('pulsar://localhost:6650') - producer = client.create_producer(topic='encryption', encryption_key='encryption', crypto_key_reader=crypto_key_reader) - producer.send('encryption message'.encode('utf8')) - print('sent message') - producer.close() - client.close() - - ``` - -3. Create a consumer to receive encrypted messages. - - **Input** - - ```python - - import pulsar - - publicKeyPath = "./public.pem" - privateKeyPath = "./private.pem" - crypto_key_reader = pulsar.CryptoKeyReader(publicKeyPath, privateKeyPath) - client = pulsar.Client('pulsar://localhost:6650') - consumer = client.subscribe(topic='encryption', subscription_name='encryption-sub', crypto_key_reader=crypto_key_reader) - msg = consumer.receive() - print("Received msg '{}' id = '{}'".format(msg.data(), msg.message_id())) - consumer.close() - client.close() - - ``` - -4. Run the consumer to receive encrypted messages. - - **Input** - - ```shell - - python consumer.py - - ``` - -5. In a new terminal tab, run the producer to produce encrypted messages. 
- - **Input** - - ```shell - - python producer.py - - ``` - - Now you can see the producer sends messages and the consumer receives messages successfully. - - **Output** - - This is from the producer side. - - ``` - - sent message - - ``` - - This is from the consumer side. - - ``` - - Received msg 'encryption message' id = '(0,0,-1,-1)' - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/client-libraries-websocket.md b/site2/website/versioned_docs/version-2.9.3-deprecated/client-libraries-websocket.md deleted file mode 100644 index 77ec54f803dc5b..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/client-libraries-websocket.md +++ /dev/null @@ -1,662 +0,0 @@ ---- -id: client-libraries-websocket -title: Pulsar WebSocket API -sidebar_label: "WebSocket" -original_id: client-libraries-websocket ---- - -Pulsar [WebSocket](https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API) API provides a simple way to interact with Pulsar using languages that do not have an official [client library](getting-started-clients.md). Through WebSocket, you can publish and consume messages and use features available on the [Client Features Matrix](https://github.com/apache/pulsar/wiki/Client-Features-Matrix) page. - - -> You can use Pulsar WebSocket API with any WebSocket client library. See examples for Python and Node.js [below](#client-examples). - -## Running the WebSocket service - -The standalone variant of Pulsar that we recommend using for [local development](getting-started-standalone.md) already has the WebSocket service enabled. - -In non-standalone mode, there are two ways to deploy the WebSocket service: - -* [embedded](#embedded-with-a-pulsar-broker) with a Pulsar broker -* as a [separate component](#as-a-separate-component) - -### Embedded with a Pulsar broker - -In this mode, the WebSocket service will run within the same HTTP service that's already running in the broker. To enable this mode, set the [`webSocketServiceEnabled`](reference-configuration.md#broker-webSocketServiceEnabled) parameter in the [`conf/broker.conf`](reference-configuration.md#broker) configuration file in your installation. - -```properties - -webSocketServiceEnabled=true - -``` - -### As a separate component - -In this mode, the WebSocket service will be run from a Pulsar [broker](reference-terminology.md#broker) as a separate service. Configuration for this mode is handled in the [`conf/websocket.conf`](reference-configuration.md#websocket) configuration file. 
You'll need to set *at least* the following parameters: - -* [`configurationStoreServers`](reference-configuration.md#websocket-configurationStoreServers) -* [`webServicePort`](reference-configuration.md#websocket-webServicePort) -* [`clusterName`](reference-configuration.md#websocket-clusterName) - -Here's an example: - -```properties - -configurationStoreServers=zk1:2181,zk2:2181,zk3:2181 -webServicePort=8080 -clusterName=my-cluster - -``` - -### Security settings - -To enable TLS encryption on WebSocket service: - -```properties - -tlsEnabled=true -tlsAllowInsecureConnection=false -tlsCertificateFilePath=/path/to/client-websocket.cert.pem -tlsKeyFilePath=/path/to/client-websocket.key-pk8.pem -tlsTrustCertsFilePath=/path/to/ca.cert.pem - -``` - -### Starting the broker - -When the configuration is set, you can start the service using the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) tool: - -```shell - -$ bin/pulsar-daemon start websocket - -``` - -## API Reference - -Pulsar's WebSocket API offers three endpoints for [producing](#producer-endpoint) messages, [consuming](#consumer-endpoint) messages and [reading](#reader-endpoint) messages. - -All exchanges via the WebSocket API use JSON. - -### Authentication - -#### Browser javascript WebSocket client - -Use the query param `token` transport the authentication token. - -```http - -ws://broker-service-url:8080/path?token=token - -``` - -### Producer endpoint - -The producer endpoint requires you to specify a tenant, namespace, and topic in the URL: - -```http - -ws://broker-service-url:8080/ws/v2/producer/persistent/:tenant/:namespace/:topic - -``` - -##### Query param - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`sendTimeoutMillis` | long | no | Send timeout (default: 30 secs) -`batchingEnabled` | boolean | no | Enable batching of messages (default: false) -`batchingMaxMessages` | int | no | Maximum number of messages permitted in a batch (default: 1000) -`maxPendingMessages` | int | no | Set the max size of the internal-queue holding the messages (default: 1000) -`batchingMaxPublishDelay` | long | no | Time period within which the messages will be batched (default: 10ms) -`messageRoutingMode` | string | no | Message [routing mode](https://pulsar.apache.org/api/client/index.html?org/apache/pulsar/client/api/ProducerConfiguration.MessageRoutingMode.html) for the partitioned producer: `SinglePartition`, `RoundRobinPartition` -`compressionType` | string | no | Compression [type](https://pulsar.apache.org/api/client/index.html?org/apache/pulsar/client/api/CompressionType.html): `LZ4`, `ZLIB` -`producerName` | string | no | Specify the name for the producer. Pulsar will enforce only one producer with same name can be publishing on a topic -`initialSequenceId` | long | no | Set the baseline for the sequence ids for messages published by the producer. -`hashingScheme` | string | no | [Hashing function](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.HashingScheme.html) to use when publishing on a partitioned topic: `JavaStringHash`, `Murmur3_32Hash` -`token` | string | no | Authentication token, this is used for the browser javascript client - - -#### Publishing a message - -```json - -{ - "payload": "SGVsbG8gV29ybGQ=", - "properties": {"key1": "value1", "key2": "value2"}, - "context": "1" -} - -``` - -Key | Type | Required? 
| Explanation -:---|:-----|:----------|:----------- -`payload` | string | yes | Base-64 encoded payload -`properties` | key-value pairs | no | Application-defined properties -`context` | string | no | Application-defined request identifier -`key` | string | no | For partitioned topics, decides which partition to use -`replicationClusters` | array | no | Restrict replication to this list of [clusters](reference-terminology.md#cluster), specified by name - - -##### Example success response - -```json - -{ - "result": "ok", - "messageId": "CAAQAw==", - "context": "1" - } - -``` - -##### Example failure response - -```json - - { - "result": "send-error:3", - "errorMsg": "Failed to de-serialize from JSON", - "context": "1" - } - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`result` | string | yes | `ok` if successful or an error message if unsuccessful -`messageId` | string | yes | Message ID assigned to the published message -`context` | string | no | Application-defined request identifier - - -### Consumer endpoint - -The consumer endpoint requires you to specify a tenant, namespace, and topic, as well as a subscription, in the URL: - -```http - -ws://broker-service-url:8080/ws/v2/consumer/persistent/:tenant/:namespace/:topic/:subscription - -``` - -##### Query param - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`ackTimeoutMillis` | long | no | Set the timeout for unacked messages (default: 0) -`subscriptionType` | string | no | [Subscription type](https://pulsar.apache.org/api/client/index.html?org/apache/pulsar/client/api/SubscriptionType.html): `Exclusive`, `Failover`, `Shared`, `Key_Shared` -`receiverQueueSize` | int | no | Size of the consumer receive queue (default: 1000) -`consumerName` | string | no | Consumer name -`priorityLevel` | int | no | Define a [priority](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setPriorityLevel-int-) for the consumer -`maxRedeliverCount` | int | no | Define a [maxRedeliverCount](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#deadLetterPolicy-org.apache.pulsar.client.api.DeadLetterPolicy-) for the consumer (default: 0). Activates [Dead Letter Topic](https://github.com/apache/pulsar/wiki/PIP-22%3A-Pulsar-Dead-Letter-Topic) feature. -`deadLetterTopic` | string | no | Define a [deadLetterTopic](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#deadLetterPolicy-org.apache.pulsar.client.api.DeadLetterPolicy-) for the consumer (default: {topic}-{subscription}-DLQ). Activates [Dead Letter Topic](https://github.com/apache/pulsar/wiki/PIP-22%3A-Pulsar-Dead-Letter-Topic) feature. -`pullMode` | boolean | no | Enable pull mode (default: false). See "Flow Control" below. -`negativeAckRedeliveryDelay` | int | no | When a message is negatively acknowledged, the delay time before the message is redelivered (in milliseconds). The default value is 60000. -`token` | string | no | Authentication token, this is used for the browser javascript client - -NB: these parameter (except `pullMode`) apply to the internal consumer of the WebSocket service. -So messages will be subject to the redelivery settings as soon as the get into the receive queue, -even if the client doesn't consume on the WebSocket. 
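
For example, a consumer session that uses a shared subscription and a smaller receive queue might be opened with a URL like the following (the parameter values are illustrative):

```http

ws://broker-service-url:8080/ws/v2/consumer/persistent/public/default/my-topic/my-sub?subscriptionType=Shared&receiverQueueSize=500

```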
- -##### Receiving messages - -Server will push messages on the WebSocket session: - -```json - -"messageId": "CAMQADAA", - "payload": "hvXcJvHW7kOSrUn17P2q71RA5SdiXwZBqw==", - "properties": {}, - "publishTime": "2021-10-29T16:01:38.967-07:00", - "redeliveryCount": 0, - "encryptionContext": { - "keys": { - "client-rsa.pem": { - "keyValue": "jEuwS+PeUzmCo7IfLNxqoj4h7txbLjCQjkwpaw5AWJfZ2xoIdMkOuWDkOsqgFmWwxiecakS6GOZHs94x3sxzKHQx9Oe1jpwBg2e7L4fd26pp+WmAiLm/ArZJo6JotTeFSvKO3u/yQtGTZojDDQxiqFOQ1ZbMdtMZA8DpSMuq+Zx7PqLo43UdW1+krjQfE5WD+y+qE3LJQfwyVDnXxoRtqWLpVsAROlN2LxaMbaftv5HckoejJoB4xpf/dPOUqhnRstwQHf6klKT5iNhjsY4usACt78uILT0pEPd14h8wEBidBz/vAlC/zVMEqiDVzgNS7dqEYS4iHbf7cnWVCn3Hxw==", - "metadata": {} - } - }, - "param": "Tfu1PxVm6S9D3+Hk", - "compressionType": "NONE", - "uncompressedMessageSize": 0, - "batchSize": { - "empty": false, - "present": true - } - } - -``` - -Below are the parameters in the WebSocket consumer response. - -- General parameters - - Key | Type | Required? | Explanation - :---|:-----|:----------|:----------- - `messageId` | string | yes | Message ID - `payload` | string | yes | Base-64 encoded payload - `publishTime` | string | yes | Publish timestamp - `redeliveryCount` | number | yes | Number of times this message was already delivered - `properties` | key-value pairs | no | Application-defined properties - `key` | string | no | Original routing key set by producer - `encryptionContext` | EncryptionContext | no | Encryption context that consumers can use to decrypt received messages - `param` | string | no | Initialization vector for cipher (Base64 encoding) - `batchSize` | string | no | Number of entries in a message (if it is a batch message) - `uncompressedMessageSize` | string | no | Message size before compression - `compressionType` | string | no | Algorithm used to compress the message payload - -- `encryptionContext` related parameter - - Key | Type | Required? | Explanation - :---|:-----|:----------|:----------- - `keys` |key-EncryptionKey pairs | yes | Key in `key-EncryptionKey` pairs is an encryption key name. Value in `key-EncryptionKey` pairs is an encryption key object. - -- `encryptionKey` related parameters - - Key | Type | Required? | Explanation - :---|:-----|:----------|:----------- - `keyValue` | string | yes | Encryption key (Base64 encoding) - `metadata` | key-value pairs | no | Application-defined metadata - -#### Acknowledging the message - -Consumer needs to acknowledge the successful processing of the message to -have the Pulsar broker delete it. - -```json - -{ - "messageId": "CAAQAw==" -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`messageId`| string | yes | Message ID of the processed message - -#### Negatively acknowledging messages - -```json - -{ - "type": "negativeAcknowledge", - "messageId": "CAAQAw==" -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`messageId`| string | yes | Message ID of the processed message - -#### Flow control - -##### Push Mode - -By default (`pullMode=false`), the consumer endpoint will use the `receiverQueueSize` parameter both to size its -internal receive queue and to limit the number of unacknowledged messages that are passed to the WebSocket client. -In this mode, if you don't send acknowledgements, the Pulsar WebSocket service will stop sending messages after reaching -`receiverQueueSize` unacked messages sent to the WebSocket client. 
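
A minimal push-mode loop therefore acknowledges each message as it is processed so that delivery keeps flowing. The following sketch uses the [`websocket-client`](https://pypi.python.org/pypi/websocket-client) package from the client examples below, with an illustrative endpoint and subscription:

```python

import websocket, json

TOPIC = 'ws://localhost:8080/ws/v2/consumer/persistent/public/default/my-topic/my-sub'
ws = websocket.create_connection(TOPIC)

while True:
    msg = json.loads(ws.recv())
    # Process the message here, then acknowledge it; without the ack the
    # service stops pushing after receiverQueueSize unacknowledged messages.
    ws.send(json.dumps({'messageId': msg['messageId']}))

```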
- -##### Pull Mode - -If you set `pullMode` to `true`, the WebSocket client will need to send `permit` commands to permit the -Pulsar WebSocket service to send more messages. - -```json - -{ - "type": "permit", - "permitMessages": 100 -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`type`| string | yes | Type of command. Must be `permit` -`permitMessages`| int | yes | Number of messages to permit - -NB: in this mode it's possible to acknowledge messages in a different connection. - -#### Check if reach end of topic - -Consumer can check if it has reached end of topic by sending `isEndOfTopic` request. - -**Request** - -```json - -{ - "type": "isEndOfTopic" -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`type`| string | yes | Type of command. Must be `isEndOfTopic` - -**Response** - -```json - -{ - "endOfTopic": "true/false" - } - -``` - -### Reader endpoint - -The reader endpoint requires you to specify a tenant, namespace, and topic in the URL: - -```http - -ws://broker-service-url:8080/ws/v2/reader/persistent/:tenant/:namespace/:topic - -``` - -##### Query param - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`readerName` | string | no | Reader name -`receiverQueueSize` | int | no | Size of the consumer receive queue (default: 1000) -`messageId` | int or enum | no | Message ID to start from, `earliest` or `latest` (default: `latest`) -`token` | string | no | Authentication token, this is used for the browser javascript client - -##### Receiving messages - -Server will push messages on the WebSocket session: - -```json - -{ - "messageId": "CAAQAw==", - "payload": "SGVsbG8gV29ybGQ=", - "properties": {"key1": "value1", "key2": "value2"}, - "publishTime": "2016-08-30 16:45:57.785", - "redeliveryCount": 4 -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`messageId` | string | yes | Message ID -`payload` | string | yes | Base-64 encoded payload -`publishTime` | string | yes | Publish timestamp -`redeliveryCount` | number | yes | Number of times this message was already delivered -`properties` | key-value pairs | no | Application-defined properties -`key` | string | no | Original routing key set by producer - -#### Acknowledging the message - -**In WebSocket**, Reader needs to acknowledge the successful processing of the message to -have the Pulsar WebSocket service update the number of pending messages. -If you don't send acknowledgements, Pulsar WebSocket service will stop sending messages after reaching the pendingMessages limit. - -```json - -{ - "messageId": "CAAQAw==" -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`messageId`| string | yes | Message ID of the processed message - -#### Check if reach end of topic - -Consumer can check if it has reached end of topic by sending `isEndOfTopic` request. - -**Request** - -```json - -{ - "type": "isEndOfTopic" -} - -``` - -Key | Type | Required? | Explanation -:---|:-----|:----------|:----------- -`type`| string | yes | Type of command. 
-
-**Response**
-
-```json
-
-{
-  "endOfTopic": "true/false"
-}
-
-```
-
-### Error codes
-
-In case of error, the server closes the WebSocket session using the
-following error codes:
-
-Error Code | Error Message
-:----------|:-------------
-1 | Failed to create producer
-2 | Failed to subscribe
-3 | Failed to deserialize from JSON
-4 | Failed to serialize to JSON
-5 | Failed to authenticate client
-6 | Client is not authorized
-7 | Invalid payload encoding
-8 | Unknown error
-
-> The application is responsible for re-establishing a new WebSocket session after a backoff period.
-
-## Client examples
-
-Below you'll find code examples for the Pulsar WebSocket API in [Python](#python) and [Node.js](#nodejs).
-
-### Python
-
-This example uses the [`websocket-client`](https://pypi.python.org/pypi/websocket-client) package. You can install it using [pip](https://pypi.python.org/pypi/pip):
-
-```shell
-
-$ pip install websocket-client
-
-```
-
-You can also download it from [PyPI](https://pypi.python.org/pypi/websocket-client).
-
-#### Python producer
-
-Here's an example Python producer that sends a simple message to a Pulsar [topic](reference-terminology.md#topic):
-
-```python
-
-import websocket, base64, json
-
-# If you set enable_TLS to true, you have to set tlsEnabled to true in conf/websocket.conf.
-enable_TLS = False
-scheme = 'ws'
-if enable_TLS:
-    scheme = 'wss'
-
-TOPIC = scheme + '://localhost:8080/ws/v2/producer/persistent/public/default/my-topic'
-
-ws = websocket.create_connection(TOPIC)
-
-# encode message
-s = "Hello World"
-firstEncoded = s.encode("UTF-8")
-binaryEncoded = base64.b64encode(firstEncoded)
-payloadString = binaryEncoded.decode('UTF-8')
-
-# Send one message as JSON
-ws.send(json.dumps({
-    'payload' : payloadString,
-    'properties': {
-        'key1' : 'value1',
-        'key2' : 'value2'
-    },
-    'context' : 5
-}))
-
-response = json.loads(ws.recv())
-if response['result'] == 'ok':
-    print('Message published successfully')
-else:
-    print('Failed to publish message:', response)
-ws.close()
-
-```
-
-#### Python consumer
-
-Here's an example Python consumer that listens on a Pulsar topic and prints the message ID whenever a message arrives:
-
-```python
-
-import websocket, base64, json
-
-# If you set enable_TLS to true, you have to set tlsEnabled to true in conf/websocket.conf.
-enable_TLS = False
-scheme = 'ws'
-if enable_TLS:
-    scheme = 'wss'
-
-TOPIC = scheme + '://localhost:8080/ws/v2/consumer/persistent/public/default/my-topic/my-sub'
-
-ws = websocket.create_connection(TOPIC)
-
-while True:
-    msg = json.loads(ws.recv())
-    if not msg: break
-
-    print("Received: {} - payload: {}".format(msg, base64.b64decode(msg['payload'])))
-
-    # Acknowledge successful processing
-    ws.send(json.dumps({'messageId' : msg['messageId']}))
-
-ws.close()
-
-```
-
-#### Python reader
-
-Here's an example Python reader that listens on a Pulsar topic and prints the message ID whenever a message arrives:
-
-```python
-
-import websocket, base64, json
-
-# If you set enable_TLS to true, you have to set tlsEnabled to true in conf/websocket.conf.
-enable_TLS = False
-scheme = 'ws'
-if enable_TLS:
-    scheme = 'wss'
-
-TOPIC = scheme + '://localhost:8080/ws/v2/reader/persistent/public/default/my-topic'
-ws = websocket.create_connection(TOPIC)
-
-while True:
-    msg = json.loads(ws.recv())
-    if not msg: break
-
-    print("Received: {} - payload: {}".format(msg, base64.b64decode(msg['payload'])))
-
-    # Acknowledge successful processing
-    ws.send(json.dumps({'messageId' : msg['messageId']}))
-
-ws.close()
-
-```
-
-### Node.js
-
-This example uses the [`ws`](https://websockets.github.io/ws/) package. You can install it using [npm](https://www.npmjs.com/):
-
-```shell
-
-$ npm install ws
-
-```
-
-#### Node.js producer
-
-Here's an example Node.js producer that sends a simple message to a Pulsar topic:
-
-```javascript
-
-const WebSocket = require('ws');
-
-// If you set enableTLS to true, you have to set tlsEnabled to true in conf/websocket.conf.
-const enableTLS = false;
-const topic = `${enableTLS ? 'wss' : 'ws'}://localhost:8080/ws/v2/producer/persistent/public/default/my-topic`;
-const ws = new WebSocket(topic);
-
-var message = {
-    "payload" : new Buffer("Hello World").toString('base64'),
-    "properties": {
-        "key1" : "value1",
-        "key2" : "value2"
-    },
-    "context" : "1"
-};
-
-ws.on('open', function() {
-    // Send one message
-    ws.send(JSON.stringify(message));
-});
-
-ws.on('message', function(message) {
-    console.log('received ack: %s', message);
-});
-
-```
-
-#### Node.js consumer
-
-Here's an example Node.js consumer that listens on the same topic used by the producer above:
-
-```javascript
-
-const WebSocket = require('ws');
-
-// If you set enableTLS to true, you have to set tlsEnabled to true in conf/websocket.conf.
-const enableTLS = false;
-const topic = `${enableTLS ? 'wss' : 'ws'}://localhost:8080/ws/v2/consumer/persistent/public/default/my-topic/my-sub`;
-const ws = new WebSocket(topic);
-
-ws.on('message', function(message) {
-    var receiveMsg = JSON.parse(message);
-    console.log('Received: %s - payload: %s', message, new Buffer(receiveMsg.payload, 'base64').toString());
-    var ackMsg = {"messageId" : receiveMsg.messageId};
-    ws.send(JSON.stringify(ackMsg));
-});
-
-```
-
-#### Node.js reader
-
-```javascript
-
-const WebSocket = require('ws');
-
-// If you set enableTLS to true, you have to set tlsEnabled to true in conf/websocket.conf.
-const enableTLS = false;
-const topic = `${enableTLS ? 'wss' : 'ws'}://localhost:8080/ws/v2/reader/persistent/public/default/my-topic`;
-const ws = new WebSocket(topic);
-
-ws.on('message', function(message) {
-    var receiveMsg = JSON.parse(message);
-    console.log('Received: %s - payload: %s', message, new Buffer(receiveMsg.payload, 'base64').toString());
-    var ackMsg = {"messageId" : receiveMsg.messageId};
-    ws.send(JSON.stringify(ackMsg));
-});
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/client-libraries.md b/site2/website/versioned_docs/version-2.9.3-deprecated/client-libraries.md
deleted file mode 100644
index 607c9317e4b7fb..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/client-libraries.md
+++ /dev/null
@@ -1,36 +0,0 @@
----
-id: client-libraries
-title: Pulsar client libraries
-sidebar_label: "Overview"
-original_id: client-libraries
----
-
-Pulsar supports the following client libraries:
-
-- [Java client](client-libraries-java.md)
-- [Go client](client-libraries-go.md)
-- [Python client](client-libraries-python.md)
-- [C++ client](client-libraries-cpp.md)
-- [Node.js client](client-libraries-node.md)
-- [WebSocket client](client-libraries-websocket.md)
-- [C# client](client-libraries-dotnet.md)
-
-## Feature matrix
-The Pulsar client feature matrix for different languages is listed on the [Client Features Matrix](https://github.com/apache/pulsar/wiki/Client-Features-Matrix) page.
-
-## Third-party clients
-
-Besides the officially released clients, multiple projects for developing Pulsar clients are available in different languages.
-
-> If you have developed a new Pulsar client, feel free to submit a pull request and add your client to the list below.
-
-| Language | Project | Maintainer | License | Description |
-|----------|---------|------------|---------|-------------|
-| Go | [pulsar-client-go](https://github.com/Comcast/pulsar-client-go) | [Comcast](https://github.com/Comcast) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | A native Golang client |
-| Go | [go-pulsar](https://github.com/t2y/go-pulsar) | [t2y](https://github.com/t2y) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) |
-| Haskell | [supernova](https://github.com/cr-org/supernova) | [Chatroulette](https://github.com/cr-org) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Native Pulsar client for Haskell |
-| Scala | [neutron](https://github.com/cr-org/neutron) | [Chatroulette](https://github.com/cr-org) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Purely functional Apache Pulsar client for Scala built on top of Fs2 |
-| Scala | [pulsar4s](https://github.com/sksamuel/pulsar4s) | [sksamuel](https://github.com/sksamuel) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Idiomatic, typesafe, and reactive Scala client for Apache Pulsar |
-| Rust | [pulsar-rs](https://github.com/wyyerd/pulsar-rs) | [Wyyerd Group](https://github.com/wyyerd) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Future-based Rust bindings for Apache Pulsar |
-| .NET | [pulsar-client-dotnet](https://github.com/fsharplang-ru/pulsar-client-dotnet) | [Lanayx](https://github.com/Lanayx) | 
[![GitHub](https://img.shields.io/badge/license-MIT-green.svg)](https://opensource.org/licenses/MIT) | Native .NET client for C#/F#/VB | -| Node.js | [pulsar-flex](https://github.com/ayeo-flex-org/pulsar-flex) | [Daniel Sinai](https://github.com/danielsinai), [Ron Farkash](https://github.com/ronfarkash), [Gal Rosenberg](https://github.com/galrose)| [![GitHub](https://img.shields.io/badge/license-MIT-green.svg)](https://opensource.org/licenses/MIT) | Native Nodejs client | diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/concepts-architecture-overview.md b/site2/website/versioned_docs/version-2.9.3-deprecated/concepts-architecture-overview.md deleted file mode 100644 index 4baa8c30a0d009..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/concepts-architecture-overview.md +++ /dev/null @@ -1,172 +0,0 @@ ---- -id: concepts-architecture-overview -title: Architecture Overview -sidebar_label: "Architecture" -original_id: concepts-architecture-overview ---- - -At the highest level, a Pulsar instance is composed of one or more Pulsar clusters. Clusters within an instance can [replicate](concepts-replication.md) data amongst themselves. - -In a Pulsar cluster: - -* One or more brokers handles and load balances incoming messages from producers, dispatches messages to consumers, communicates with the Pulsar configuration store to handle various coordination tasks, stores messages in BookKeeper instances (aka bookies), relies on a cluster-specific ZooKeeper cluster for certain tasks, and more. -* A BookKeeper cluster consisting of one or more bookies handles [persistent storage](#persistent-storage) of messages. -* A ZooKeeper cluster specific to that cluster handles coordination tasks between Pulsar clusters. - -The diagram below provides an illustration of a Pulsar cluster: - -![Pulsar architecture diagram](/assets/pulsar-system-architecture.png) - -At the broader instance level, an instance-wide ZooKeeper cluster called the configuration store handles coordination tasks involving multiple clusters, for example [geo-replication](concepts-replication.md). - -## Brokers - -The Pulsar message broker is a stateless component that's primarily responsible for running two other components: - -* An HTTP server that exposes a {@inject: rest:REST:/} API for both administrative tasks and [topic lookup](concepts-clients.md#client-setup-phase) for producers and consumers. The producers connect to the brokers to publish messages and the consumers connect to the brokers to consume the messages. -* A dispatcher, which is an asynchronous TCP server over a custom [binary protocol](developing-binary-protocol.md) used for all data transfers - -Messages are typically dispatched out of a [managed ledger](#managed-ledgers) cache for the sake of performance, *unless* the backlog exceeds the cache size. If the backlog grows too large for the cache, the broker will start reading entries from BookKeeper. - -Finally, to support geo-replication on global topics, the broker manages replicators that tail the entries published in the local region and republish them to the remote region using the Pulsar [Java client library](client-libraries-java.md). - -> For a guide to managing Pulsar brokers, see the [brokers](admin-api-brokers.md) guide. - -## Clusters - -A Pulsar instance consists of one or more Pulsar *clusters*. 
Clusters, in turn, consist of: - -* One or more Pulsar [brokers](#brokers) -* A ZooKeeper quorum used for cluster-level configuration and coordination -* An ensemble of bookies used for [persistent storage](#persistent-storage) of messages - -Clusters can replicate amongst themselves using [geo-replication](concepts-replication.md). - -> For a guide to managing Pulsar clusters, see the [clusters](admin-api-clusters.md) guide. - -## Metadata store - -The Pulsar metadata store maintains all the metadata of a Pulsar cluster, such as topic metadata, schema, broker load data, and so on. Pulsar uses [Apache ZooKeeper](https://zookeeper.apache.org/) for metadata storage, cluster configuration, and coordination. The Pulsar metadata store can be deployed on a separate ZooKeeper cluster or deployed on an existing ZooKeeper cluster. You can use one ZooKeeper cluster for both Pulsar metadata store and BookKeeper metadata store. If you want to deploy Pulsar brokers connected to an existing BookKeeper cluster, you need to deploy separate ZooKeeper clusters for Pulsar metadata store and BookKeeper metadata store respectively. - -In a Pulsar instance: - -* A configuration store quorum stores configuration for tenants, namespaces, and other entities that need to be globally consistent. -* Each cluster has its own local ZooKeeper ensemble that stores cluster-specific configuration and coordination such as which brokers are responsible for which topics as well as ownership metadata, broker load reports, BookKeeper ledger metadata, and more. - -## Configuration store - -The configuration store maintains all the configurations of a Pulsar instance, such as clusters, tenants, namespaces, partitioned topic related configurations, and so on. A Pulsar instance can have a single local cluster, multiple local clusters, or multiple cross-region clusters. Consequently, the configuration store can share the configurations across multiple clusters under a Pulsar instance. The configuration store can be deployed on a separate ZooKeeper cluster or deployed on an existing ZooKeeper cluster. - -## Persistent storage - -Pulsar provides guaranteed message delivery for applications. If a message successfully reaches a Pulsar broker, it will be delivered to its intended target. - -This guarantee requires that non-acknowledged messages are stored in a durable manner until they can be delivered to and acknowledged by consumers. This mode of messaging is commonly called *persistent messaging*. In Pulsar, N copies of all messages are stored and synced on disk, for example 4 copies across two servers with mirrored [RAID](https://en.wikipedia.org/wiki/RAID) volumes on each server. - -### Apache BookKeeper - -Pulsar uses a system called [Apache BookKeeper](http://bookkeeper.apache.org/) for persistent message storage. BookKeeper is a distributed [write-ahead log](https://en.wikipedia.org/wiki/Write-ahead_logging) (WAL) system that provides a number of crucial advantages for Pulsar: - -* It enables Pulsar to utilize many independent logs, called [ledgers](#ledgers). Multiple ledgers can be created for topics over time. -* It offers very efficient storage for sequential data that handles entry replication. -* It guarantees read consistency of ledgers in the presence of various system failures. -* It offers even distribution of I/O across bookies. -* It's horizontally scalable in both capacity and throughput. Capacity can be immediately increased by adding more bookies to a cluster. 
-* Bookies are designed to handle thousands of ledgers with concurrent reads and writes. By using multiple disk devices---one for journal and another for general storage--bookies are able to isolate the effects of read operations from the latency of ongoing write operations. - -In addition to message data, *cursors* are also persistently stored in BookKeeper. Cursors are [subscription](reference-terminology.md#subscription) positions for [consumers](reference-terminology.md#consumer). BookKeeper enables Pulsar to store consumer position in a scalable fashion. - -At the moment, Pulsar supports persistent message storage. This accounts for the `persistent` in all topic names. Here's an example: - -```http - -persistent://my-tenant/my-namespace/my-topic - -``` - -> Pulsar also supports ephemeral ([non-persistent](concepts-messaging.md#non-persistent-topics)) message storage. - - -You can see an illustration of how brokers and bookies interact in the diagram below: - -![Brokers and bookies](/assets/broker-bookie.png) - - -### Ledgers - -A ledger is an append-only data structure with a single writer that is assigned to multiple BookKeeper storage nodes, or bookies. Ledger entries are replicated to multiple bookies. Ledgers themselves have very simple semantics: - -* A Pulsar broker can create a ledger, append entries to the ledger, and close the ledger. -* After the ledger has been closed---either explicitly or because the writer process crashed---it can then be opened only in read-only mode. -* Finally, when entries in the ledger are no longer needed, the whole ledger can be deleted from the system (across all bookies). - -#### Ledger read consistency - -The main strength of Bookkeeper is that it guarantees read consistency in ledgers in the presence of failures. Since the ledger can only be written to by a single process, that process is free to append entries very efficiently, without need to obtain consensus. After a failure, the ledger will go through a recovery process that will finalize the state of the ledger and establish which entry was last committed to the log. After that point, all readers of the ledger are guaranteed to see the exact same content. - -#### Managed ledgers - -Given that Bookkeeper ledgers provide a single log abstraction, a library was developed on top of the ledger called the *managed ledger* that represents the storage layer for a single topic. A managed ledger represents the abstraction of a stream of messages with a single writer that keeps appending at the end of the stream and multiple cursors that are consuming the stream, each with its own associated position. - -Internally, a single managed ledger uses multiple BookKeeper ledgers to store the data. There are two reasons to have multiple ledgers: - -1. After a failure, a ledger is no longer writable and a new one needs to be created. -2. A ledger can be deleted when all cursors have consumed the messages it contains. This allows for periodic rollover of ledgers. - -### Journal storage - -In BookKeeper, *journal* files contain BookKeeper transaction logs. Before making an update to a [ledger](#ledgers), a bookie needs to ensure that a transaction describing the update is written to persistent (non-volatile) storage. A new journal file is created once the bookie starts or the older journal file reaches the journal file size threshold (configured using the [`journalMaxSizeMB`](reference-configuration.md#bookkeeper-journalMaxSizeMB) parameter). 
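-
-If journal rollover needs tuning, the threshold goes in `conf/bookkeeper.conf`. The value below is only an illustrative sketch, not a recommendation:
-
-```bash
-
-# Roll over to a new journal file once the current one reaches 1024 MB
-journalMaxSizeMB=1024
-
-```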
- -## Pulsar proxy - -One way for Pulsar clients to interact with a Pulsar [cluster](#clusters) is by connecting to Pulsar message [brokers](#brokers) directly. In some cases, however, this kind of direct connection is either infeasible or undesirable because the client doesn't have direct access to broker addresses. If you're running Pulsar in a cloud environment or on [Kubernetes](https://kubernetes.io) or an analogous platform, for example, then direct client connections to brokers are likely not possible. - -The **Pulsar proxy** provides a solution to this problem by acting as a single gateway for all of the brokers in a cluster. If you run the Pulsar proxy (which, again, is optional), all client connections with the Pulsar cluster will flow through the proxy rather than communicating with brokers. - -> For the sake of performance and fault tolerance, you can run as many instances of the Pulsar proxy as you'd like. - -Architecturally, the Pulsar proxy gets all the information it requires from ZooKeeper. When starting the proxy on a machine, you only need to provide ZooKeeper connection strings for the cluster-specific and instance-wide configuration store clusters. Here's an example: - -```bash - -$ bin/pulsar proxy \ - --zookeeper-servers zk-0,zk-1,zk-2 \ - --configuration-store-servers zk-0,zk-1,zk-2 - -``` - -> #### Pulsar proxy docs -> For documentation on using the Pulsar proxy, see the [Pulsar proxy admin documentation](administration-proxy.md). - - -Some important things to know about the Pulsar proxy: - -* Connecting clients don't need to provide *any* specific configuration to use the Pulsar proxy. You won't need to update the client configuration for existing applications beyond updating the IP used for the service URL (for example if you're running a load balancer over the Pulsar proxy). -* [TLS encryption](security-tls-transport.md) and [authentication](security-tls-authentication.md) is supported by the Pulsar proxy - -## Service discovery - -[Clients](getting-started-clients.md) connecting to Pulsar brokers need to be able to communicate with an entire Pulsar instance using a single URL. - -You can use your own service discovery system if you'd like. If you use your own system, there is just one requirement: when a client performs an HTTP request to an endpoint, such as `http://pulsar.us-west.example.com:8080`, the client needs to be redirected to *some* active broker in the desired cluster, whether via DNS, an HTTP or IP redirect, or some other means. - -The diagram below illustrates Pulsar service discovery: - -![alt-text](/assets/pulsar-service-discovery.png) - -In this diagram, the Pulsar cluster is addressable via a single DNS name: `pulsar-cluster.acme.com`. A [Python client](client-libraries-python.md), for example, could access this Pulsar cluster like this: - -```python - -from pulsar import Client - -client = Client('pulsar://pulsar-cluster.acme.com:6650') - -``` - -:::note - -In Pulsar, each topic is handled by only one broker. Initial requests from a client to read, update or delete a topic are sent to a broker that may not be the topic owner. If the broker cannot handle the request for this topic, it redirects the request to the appropriate broker. 
- -::: - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/concepts-authentication.md b/site2/website/versioned_docs/version-2.9.3-deprecated/concepts-authentication.md deleted file mode 100644 index f6307890c904a7..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/concepts-authentication.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -id: concepts-authentication -title: Authentication and Authorization -sidebar_label: "Authentication and Authorization" -original_id: concepts-authentication ---- - -Pulsar supports a pluggable [authentication](security-overview.md) mechanism which can be configured at the proxy and/or the broker. Pulsar also supports a pluggable [authorization](security-authorization.md) mechanism. These mechanisms work together to identify the client and its access rights on topics, namespaces and tenants. - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/concepts-clients.md b/site2/website/versioned_docs/version-2.9.3-deprecated/concepts-clients.md deleted file mode 100644 index 4040624f7d6366..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/concepts-clients.md +++ /dev/null @@ -1,92 +0,0 @@ ---- -id: concepts-clients -title: Pulsar Clients -sidebar_label: "Clients" -original_id: concepts-clients ---- - -Pulsar exposes a client API with language bindings for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python.md), [C++](client-libraries-cpp.md) and [C#](client-libraries-dotnet.md). The client API optimizes and encapsulates Pulsar's client-broker communication protocol and exposes a simple and intuitive API for use by applications. - -Under the hood, the current official Pulsar client libraries support transparent reconnection and/or connection failover to brokers, queuing of messages until acknowledged by the broker, and heuristics such as connection retries with backoff. - -> **Custom client libraries** -> If you'd like to create your own client library, we recommend consulting the documentation on Pulsar's custom [binary protocol](developing-binary-protocol.md). - - -## Client setup phase - -Before an application creates a producer/consumer, the Pulsar client library needs to initiate a setup phase including two steps: - -1. The client attempts to determine the owner of the topic by sending an HTTP lookup request to the broker. The request could reach one of the active brokers which, by looking at the (cached) zookeeper metadata knows who is serving the topic or, in case nobody is serving it, tries to assign it to the least loaded broker. -1. Once the client library has the broker address, it creates a TCP connection (or reuse an existing connection from the pool) and authenticates it. Within this connection, client and broker exchange binary commands from a custom protocol. At this point the client sends a command to create producer/consumer to the broker, which will comply after having validated the authorization policy. - -Whenever the TCP connection breaks, the client immediately re-initiates this setup phase and keeps trying with exponential backoff to re-establish the producer or consumer until the operation succeeds. - -## Reader interface - -In Pulsar, the "standard" [consumer interface](concepts-messaging.md#consumers) involves using consumers to listen on [topics](reference-terminology.md#topic), process incoming messages, and finally acknowledge those messages when they are processed. 
Whenever a new subscription is created, it is initially positioned at the end of the topic (by default), and consumers associated with that subscription begin reading with the first message created afterwards. Whenever a consumer connects to a topic using a pre-existing subscription, it begins reading from the earliest unacknowledged message within that subscription. In summary, with the consumer interface, subscription cursors are automatically managed by Pulsar in response to [message acknowledgements](concepts-messaging.md#acknowledgement).
-
-The **reader interface** for Pulsar enables applications to manually manage cursors. When you use a reader to connect to a topic---rather than a consumer---you need to specify *which* message the reader begins reading from when it connects to a topic. When connecting to a topic, the reader interface enables you to begin with:
-
-* The **earliest** available message in the topic
-* The **latest** available message in the topic
-* Some other message between the earliest and the latest. If you select this option, you'll need to explicitly provide a message ID. Your application will be responsible for "knowing" this message ID in advance, perhaps fetching it from a persistent data store or cache.
-
-The reader interface is helpful for use cases like using Pulsar to provide effectively-once processing semantics for a stream processing system. For this use case, it's essential that the stream processing system be able to "rewind" topics to a specific message and begin reading there. The reader interface provides Pulsar clients with the low-level abstraction necessary to "manually position" themselves within a topic.
-
-Internally, the reader interface is implemented as a consumer using an exclusive, non-durable subscription to the topic with a randomly-allocated name.
-
-[ **IMPORTANT** ]
-
-Unlike subscriptions/consumers, readers are non-durable in nature and do not prevent data in a topic from being deleted. It is therefore ***strongly*** advised that [data retention](cookbooks-retention-expiry.md) be configured. If data retention for a topic is not configured for an adequate amount of time, messages that the reader has not yet read might be deleted. This causes the reader to essentially skip messages. Configuring data retention for a topic guarantees that the reader has a certain duration in which to read a message.
-
-Please also note that a reader can have a "backlog", but the metric is only used for users to know how far behind the reader is. The metric is not considered for any backlog quota calculations.
- -![The Pulsar consumer and reader interfaces](/assets/pulsar-reader-consumer-interfaces.png) - -Here's a Java example that begins reading from the earliest available message on a topic: - -```java - -import org.apache.pulsar.client.api.Message; -import org.apache.pulsar.client.api.MessageId; -import org.apache.pulsar.client.api.Reader; - -// Create a reader on a topic and for a specific message (and onward) -Reader reader = pulsarClient.newReader() - .topic("reader-api-test") - .startMessageId(MessageId.earliest) - .create(); - -while (true) { - Message message = reader.readNext(); - - // Process the message -} - -``` - -To create a reader that reads from the latest available message: - -```java - -Reader reader = pulsarClient.newReader() - .topic(topic) - .startMessageId(MessageId.latest) - .create(); - -``` - -To create a reader that reads from some message between the earliest and the latest: - -```java - -byte[] msgIdBytes = // Some byte array -MessageId id = MessageId.fromByteArray(msgIdBytes); -Reader reader = pulsarClient.newReader() - .topic(topic) - .startMessageId(id) - .create(); - -``` - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/concepts-messaging.md b/site2/website/versioned_docs/version-2.9.3-deprecated/concepts-messaging.md deleted file mode 100644 index c7469606f30552..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/concepts-messaging.md +++ /dev/null @@ -1,713 +0,0 @@ ---- -id: concepts-messaging -title: Messaging -sidebar_label: "Messaging" -original_id: concepts-messaging ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -Pulsar is built on the [publish-subscribe](https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern) pattern (often abbreviated to pub-sub). In this pattern, [producers](#producers) publish messages to [topics](#topics); [consumers](#consumers) [subscribe](#subscription-modes) to those topics, process incoming messages, and send [acknowledgements](#acknowledgement) to the broker when processing is finished. - -When a subscription is created, Pulsar [retains](concepts-architecture-overview.md#persistent-storage) all messages, even if the consumer is disconnected. The retained messages are discarded only when a consumer acknowledges that all these messages are processed successfully. - -If the consumption of a message fails and you want this message to be consumed again, then you can enable the automatic redelivery of this message by sending a [negative acknowledgement](#negative-acknowledgement) to the broker or enabling the [acknowledgement timeout](#acknowledgement-timeout) for unacknowledged messages. - -## Messages - -Messages are the basic "unit" of Pulsar. The following table lists the components of messages. - -Component | Description -:---------|:------- -Value / data payload | The data carried by the message. All Pulsar messages contain raw bytes, although message data can also conform to data [schemas](schema-get-started.md). -Key | Messages are optionally tagged with keys, which is useful for things like [topic compaction](concepts-topic-compaction.md). -Properties | An optional key/value map of user-defined properties. -Producer name | The name of the producer who produces the message. If you do not specify a producer name, the default name is used. -Sequence ID | Each Pulsar message belongs to an ordered sequence on its topic. The sequence ID of the message is its order in that sequence. 
-
-Publish time | The timestamp of when the message is published. The timestamp is automatically applied by the producer.
-Event time | An optional timestamp attached to a message by applications. For example, applications attach a timestamp on when the message is processed. If nothing is set to event time, the value is `0`.
-TypedMessageBuilder | It is used to construct a message. You can set message properties such as the message key and message value with `TypedMessageBuilder`.<br><br>When you set `TypedMessageBuilder`, set the key as a string. If you set the key as another type (for example, an AVRO object), the key is sent as bytes, and it is difficult to get the AVRO object back on the consumer.
-
-By default, the maximum size of a message is 5 MB. You can configure the max size of a message with the following configurations.
-
-- In the `broker.conf` file.
-
-  ```bash
-
-  # The max size of a message (in bytes).
-  maxMessageSize=5242880
-
-  ```
-
-- In the `bookkeeper.conf` file.
-
-  ```bash
-
-  # The max size of the netty frame (in bytes). Any messages received larger than this value are rejected. The default value is 5 MB.
-  nettyMaxFrameSizeBytes=5253120
-
-  ```
-
-> For more information on Pulsar messages, see Pulsar [binary protocol](developing-binary-protocol.md).
-
-## Producers
-
-A producer is a process that attaches to a topic and publishes messages to a Pulsar [broker](reference-terminology.md#broker). The Pulsar broker processes the messages.
-
-### Send modes
-
-Producers send messages to brokers synchronously (sync) or asynchronously (async).
-
-| Mode | Description |
-|:-----------|-----------|
-| Sync send | The producer waits for an acknowledgement from the broker after sending every message. If the acknowledgement is not received, the producer treats the sending operation as a failure. |
-| Async send | The producer puts a message in a blocking queue and returns immediately. The client library sends the message to the broker in the background. If the queue is full (you can [configure](reference-configuration.md#broker) the maximum size), the producer is blocked or fails immediately when calling the API, depending on arguments passed to the producer. |
-
-### Access mode
-
-You can have different types of access modes on topics for producers.
-
-|Access mode | Description
-|---|---
-`Shared`|Multiple producers can publish on a topic.<br><br>This is the **default** setting.
-`Exclusive`|Only one producer can publish on a topic.<br><br>If there is already a producer connected, other producers trying to publish on this topic get errors immediately.<br><br>The "old" producer is evicted and a "new" producer is selected to be the next exclusive producer if the "old" producer experiences a network partition with the broker.
-`WaitForExclusive`|If there is already a producer connected, the producer creation is pending (rather than timing out) until the producer gets `Exclusive` access.<br><br>The producer that succeeds in becoming the exclusive one is treated as the leader. Consequently, if you want to implement a leader election scheme for your application, you can use this access mode.
-
-:::note
-
-Once an application creates a producer with `Exclusive` or `WaitForExclusive` access mode successfully, the instance of this application is guaranteed to be the **only writer** to the topic. Any other producers trying to produce messages on this topic will either get errors immediately or have to wait until they get `Exclusive` access.
-For more information, see [PIP 68: Exclusive Producer](https://github.com/apache/pulsar/wiki/PIP-68:-Exclusive-Producer).
-
-:::
-
-You can set the producer access mode through the Java client API. For more information, see `ProducerAccessMode` in the [ProducerBuilder.java](https://github.com/apache/pulsar/blob/fc5768ca3bbf92815d142fe30e6bfad70a1b4fc6/pulsar-client-api/src/main/java/org/apache/pulsar/client/api/ProducerBuilder.java) file.
-
-### Compression
-
-You can compress messages published by producers during transportation. Pulsar currently supports the following types of compression:
-
-* [LZ4](https://github.com/lz4/lz4)
-* [ZLIB](https://zlib.net/)
-* [ZSTD](https://facebook.github.io/zstd/)
-* [SNAPPY](https://google.github.io/snappy/)
-
-### Batching
-
-When batching is enabled, the producer accumulates and sends a batch of messages in a single request. The batch size is defined by the maximum number of messages and the maximum publish latency. Therefore, the backlog size represents the total number of batches instead of the total number of messages.
-
-In Pulsar, batches are tracked and stored as single units rather than as individual messages. The consumer unbundles a batch into individual messages. However, scheduled messages (configured through the `deliverAt` or the `deliverAfter` parameter) are always sent as individual messages, even if batching is enabled.
-
-In general, a batch is acknowledged when all of its messages are acknowledged by a consumer. This means that when **not all** messages in a batch are acknowledged, unexpected failures, negative acknowledgements, or acknowledgement timeouts can result in a redelivery of all messages in this batch.
-
-To avoid redelivering acknowledged messages in a batch to the consumer, Pulsar has introduced batch index acknowledgement since Pulsar 2.6.0. When batch index acknowledgement is enabled, the consumer filters out the batch indexes that have been acknowledged and sends the batch index acknowledgement request to the broker. The broker maintains the batch index acknowledgement status and tracks the acknowledgement status of each batch index to avoid dispatching acknowledged messages to the consumer. The batch is deleted when all indexes of the messages in it are acknowledged.
-
-By default, batch index acknowledgement is disabled (`acknowledgmentAtBatchIndexLevelEnabled=false`). You can enable batch index acknowledgement by setting the `acknowledgmentAtBatchIndexLevelEnabled` parameter to `true` on the broker side. Enabling batch index acknowledgement results in more memory overhead.
-
-### Chunking
-
-Before you enable chunking, read the following instructions (a minimal configuration sketch follows the list).
-- Batching and chunking cannot be enabled simultaneously. To enable chunking, you must disable batching in advance.
-- Chunking is only supported for persistent topics.
-- Chunking is only supported for the exclusive and failover subscription modes.
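-
-The following is a minimal Java sketch of these prerequisites; the topic name is an assumption and `client` stands for an existing `PulsarClient`:
-
-```java
-
-Producer<byte[]> producer = client.newProducer()
-        .topic("persistent://my-tenant/my-ns/large-payloads") // chunking requires a persistent topic
-        .enableBatching(false)  // batching must be disabled before enabling chunking
-        .enableChunking(true)
-        .create();
-
-```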
-
-When chunking is enabled (`chunkingEnabled=true`), if the message size is greater than the allowed maximum publish-payload size, the producer splits the original message into chunked messages and publishes them with chunked metadata to the broker separately and in order. On the broker side, the chunked messages are stored in the managed-ledger in the same way as ordinary messages. The only difference is that the consumer needs to buffer the chunked messages and combine them into the real message when all chunked messages have been collected. The chunked messages in the managed-ledger can be interwoven with ordinary messages. If the producer fails to publish all the chunks of a message, the consumer can expire incomplete chunks when it fails to receive all chunks within the expiry time. By default, the expiry time is set to one minute.
-
-The consumer consumes the chunked messages and buffers them until it has received all the chunks of a message. The consumer then stitches the chunked messages together and places them into the receiver queue. Clients consume messages from the receiver queue. Once the consumer has consumed the entire large message and acknowledged it, the consumer internally sends an acknowledgement for all the chunked messages associated with that large message. You can set the `maxPendingChunkedMessage` parameter on the consumer. When the threshold is reached, the consumer drops the unchunked messages by silently acknowledging them or asking the broker to redeliver them later by marking them unacknowledged.
-
-The broker does not require any changes to support chunking for non-shared subscriptions. The broker only uses `chunkedMessageRate` to record the chunked message rate on the topic.
-
-#### Handle chunked messages with one producer and one ordered consumer
-
-As shown in the following figure, a topic can have one producer that publishes a large message payload in chunked messages along with regular non-chunked messages. The producer publishes message M1 in three chunks: M1-C1, M1-C2 and M1-C3. The broker stores all three chunked messages in the managed-ledger and dispatches them to the ordered (exclusive/failover) consumer in the same order. The consumer buffers all the chunked messages in memory until it has received all of them, combines them into one message, and then hands the original message M1 over to the client.
-
-![](/assets/chunking-01.png)
-
-#### Handle chunked messages with multiple producers and one ordered consumer
-
-When multiple publishers publish chunked messages into a single topic, the broker stores all the chunked messages coming from different publishers in the same managed-ledger. As shown below, Producer 1 publishes message M1 in three chunks: M1-C1, M1-C2 and M1-C3. Producer 2 publishes message M2 in three chunks: M2-C1, M2-C2 and M2-C3. All chunked messages of a given message are still in order but might not be consecutive in the managed-ledger. This puts some memory pressure on the consumer, because the consumer keeps a separate buffer for each large message in order to aggregate all of its chunks and combine them into one message.
-
-![](/assets/chunking-02.png)
-
-## Consumers
-
-A consumer is a process that attaches to a topic via a subscription and then receives messages.
-
-A consumer sends a [flow permit request](developing-binary-protocol.md#flow-control) to a broker to get messages. There is a queue at the consumer side to receive messages pushed from the broker. 
You can configure the queue size with the [`receiverQueueSize`](client-libraries-java.md#configure-consumer) parameter. The default size is `1000`. Each time `consumer.receive()` is called, a message is dequeued from the buffer.
-
-### Receive modes
-
-Messages are received from [brokers](reference-terminology.md#broker) either synchronously (sync) or asynchronously (async).
-
-| Mode | Description |
-|:--------------|:-----------|
-| Sync receive | A sync receive blocks until a message is available. |
-| Async receive | An async receive returns immediately with a future value—for example, a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture) in Java—that completes once a new message is available. |
-
-### Listeners
-
-Client libraries provide listener implementations for consumers. For example, the [Java client](client-libraries-java.md) provides a {@inject: javadoc:MessageListener:/client/org/apache/pulsar/client/api/MessageListener} interface. In this interface, the `received` method is called whenever a new message is received.
-
-### Acknowledgement
-
-The consumer sends an acknowledgement request to the broker after it consumes a message successfully. The consumed message is then permanently stored and is deleted only after all the subscriptions have acknowledged it. If you want to store messages that have already been acknowledged by a consumer, you need to configure the [message retention policy](concepts-messaging.md#message-retention-and-expiry).
-
-For batch messages, you can enable batch index acknowledgement to avoid dispatching acknowledged messages to the consumer. For details about batch index acknowledgement, see [batching](#batching).
-
-Messages can be acknowledged in one of the following two ways:
-
-- Being acknowledged individually. With individual acknowledgement, the consumer acknowledges each message and sends an acknowledgement request to the broker.
-- Being acknowledged cumulatively. With cumulative acknowledgement, the consumer **only** acknowledges the last message it received. All messages in the stream up to (and including) the provided message are not redelivered to that consumer.
-
-If you want to acknowledge messages individually, you can use the following API.
-
-```java
-
-consumer.acknowledge(msg);
-
-```
-
-If you want to acknowledge messages cumulatively, you can use the following API.
-
-```java
-
-consumer.acknowledgeCumulative(msg);
-
-```
-
-:::note
-
-Cumulative acknowledgement cannot be used in the [shared subscription mode](#subscription-modes), because the shared subscription mode involves multiple consumers who have access to the same subscription. In the shared subscription mode, messages are acknowledged individually.
-
-:::
-
-### Negative acknowledgement
-
-When a consumer fails to consume a message and intends to consume it again, the consumer should send a negative acknowledgement to the broker. The broker then redelivers the message to the consumer.
-
-Messages are negatively acknowledged individually or cumulatively, depending on the consumption subscription mode.
-
-In the exclusive and failover subscription modes, consumers only negatively acknowledge the last message they receive.
-
-In the shared and Key_Shared subscription modes, consumers can negatively acknowledge messages individually.
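-
-How quickly a negatively acknowledged message comes back is governed by the negative acknowledgement redelivery delay, which defaults to 1 minute. As a minimal Java sketch (topic and subscription names are assumptions), the delay can be tuned when building the consumer:
-
-```java
-
-Consumer<byte[]> consumer = pulsarClient.newConsumer()
-        .topic("my-topic")
-        .subscriptionName("my-subscription")
-        // Redeliver negatively acknowledged messages after 30 seconds instead of the 1-minute default
-        .negativeAckRedeliveryDelay(30, TimeUnit.SECONDS)
-        .subscribe();
-
-```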
-
-Be aware that negative acknowledgements on ordered subscription types, such as Exclusive, Failover and Key_Shared, might cause failed messages to be sent to consumers out of the original order.
-
-If you want to acknowledge messages negatively, you can use the following API.
-
-```java
-
-// Calling this API negatively acknowledges the message
-consumer.negativeAcknowledge(msg);
-
-```
-
-:::note
-
-If batching is enabled, all messages in one batch are redelivered to the consumer.
-
-:::
-
-### Acknowledgement timeout
-
-If a message is not consumed successfully, and you want the broker to redeliver the message automatically, you can enable the automatic redelivery mechanism for unacknowledged messages. With automatic redelivery enabled, the client tracks the unacknowledged messages within the entire `ackTimeout` time range, and automatically sends a `redeliver unacknowledged messages` request to the broker when the acknowledgement timeout expires.
-
-:::note
-
-- If batching is enabled, all messages in one batch are redelivered to the consumer.
-- The negative acknowledgement is preferable over the acknowledgement timeout, since negative acknowledgement controls the redelivery of individual messages more precisely and avoids invalid redeliveries when the message processing time exceeds the acknowledgement timeout.
-
-:::
-
-### Dead letter topic
-
-The dead letter topic enables you to consume new messages even when some messages cannot be consumed successfully by a consumer. In this mechanism, messages that fail to be consumed are stored in a separate topic, which is called the dead letter topic. You can decide how to handle the messages in the dead letter topic.
-
-The following example shows how to enable a dead letter topic in a Java client using the default dead letter topic:
-
-```java
-
-Consumer<byte[]> consumer = pulsarClient.newConsumer(Schema.BYTES)
-              .topic(topic)
-              .subscriptionName("my-subscription")
-              .subscriptionType(SubscriptionType.Shared)
-              .deadLetterPolicy(DeadLetterPolicy.builder()
-                    .maxRedeliverCount(maxRedeliveryCount)
-                    .build())
-              .subscribe();
-
-```
-
-The default dead letter topic uses this format:
-
-```
-
-<topicname>-<subscriptionname>-DLQ
-
-```
-
-If you want to specify the name of the dead letter topic, use this Java client example:
-
-```java
-
-Consumer<byte[]> consumer = pulsarClient.newConsumer(Schema.BYTES)
-              .topic(topic)
-              .subscriptionName("my-subscription")
-              .subscriptionType(SubscriptionType.Shared)
-              .deadLetterPolicy(DeadLetterPolicy.builder()
-                    .maxRedeliverCount(maxRedeliveryCount)
-                    .deadLetterTopic("your-topic-name")
-                    .build())
-              .subscribe();
-
-```
-
-The dead letter topic depends on message redelivery. Messages are redelivered either due to [acknowledgement timeout](#acknowledgement-timeout) or [negative acknowledgement](#negative-acknowledgement). If you are going to use negative acknowledgement on a message, make sure it is negatively acknowledged before the acknowledgement timeout.
-
-:::note
-
-Currently, the dead letter topic is enabled only in the Shared and Key_Shared subscription modes.
-
-:::
-
-### Retry letter topic
-
-For many online business systems, a message needs to be re-consumed because an exception occurs during business logic processing. To configure the delay time for re-consuming failed messages, you can configure the producer to send messages to both the business topic and the retry letter topic, and enable automatic retry on the consumer. 
When automatic retry is enabled on the consumer, a message is stored in the retry letter topic if it is not consumed, and the consumer automatically consumes the failed messages from the retry letter topic after a specified delay time.
-
-By default, automatic retry is disabled. You can set `enableRetry` to `true` to enable automatic retry on the consumer.
-
-This example shows how to consume messages from a retry letter topic.
-
-```java
-
-Consumer<byte[]> consumer = pulsarClient.newConsumer(Schema.BYTES)
-                .topic(topic)
-                .subscriptionName("my-subscription")
-                .subscriptionType(SubscriptionType.Shared)
-                .enableRetry(true)
-                .receiverQueueSize(100)
-                .deadLetterPolicy(DeadLetterPolicy.builder()
-                        .maxRedeliverCount(maxRedeliveryCount)
-                        .retryLetterTopic("persistent://my-property/my-ns/my-subscription-custom-Retry")
-                        .build())
-                .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest)
-                .subscribe();
-
-```
-
-If you want to put messages into a retry queue, you can use the following API.
-
-```java
-
-consumer.reconsumeLater(msg, 3, TimeUnit.SECONDS);
-
-```
-
-## Topics
-
-As in other pub-sub systems, topics in Pulsar are named channels for transmitting messages from producers to consumers. Topic names are URLs that have a well-defined structure:
-
-```http
-
-{persistent|non-persistent}://tenant/namespace/topic
-
-```
-
-Topic name component | Description
-:--------------------|:-----------
-`persistent` / `non-persistent` | This identifies the type of topic. Pulsar supports two kinds of topics: [persistent](concepts-architecture-overview.md#persistent-storage) and [non-persistent](#non-persistent-topics). The default is persistent, so if you do not specify a type, the topic is persistent. With persistent topics, all messages are durably persisted on disks (if the broker is not standalone, messages are durably persisted on multiple disks), whereas data for non-persistent topics is not persisted to storage disks.
-`tenant` | The topic tenant within the instance. Tenants are essential to multi-tenancy in Pulsar, and are spread across clusters.
-`namespace` | The administrative unit of the topic, which acts as a grouping mechanism for related topics. Most topic configuration is performed at the [namespace](#namespaces) level. Each tenant has one or multiple namespaces.
-`topic` | The final part of the name. Topic names have no special meaning in a Pulsar instance.
-
-> **No need to explicitly create new topics**
-> You do not need to explicitly create topics in Pulsar. If a client attempts to write or receive messages to/from a topic that does not yet exist, Pulsar creates that topic under the namespace provided in the [topic name](#topics) automatically.
-> If no tenant or namespace is specified when a client creates a topic, the topic is created in the default tenant and namespace. You can also create a topic in a specified tenant and namespace, such as `persistent://my-tenant/my-namespace/my-topic`. `persistent://my-tenant/my-namespace/my-topic` means the `my-topic` topic is created in the `my-namespace` namespace of the `my-tenant` tenant.
-
-## Namespaces
-
-A namespace is a logical nomenclature within a tenant. A tenant creates multiple namespaces via the [admin API](admin-api-namespaces.md#create). For instance, a tenant with different applications can create a separate namespace for each application. A namespace allows the application to create and manage a hierarchy of topics. 
For example, `my-tenant/app1` is the namespace for the application `app1` of the tenant `my-tenant`. You can create any number of [topics](#topics) under the namespace.
-
-## Subscriptions
-
-A subscription is a named configuration rule that determines how messages are delivered to consumers. Four subscription modes are available in Pulsar: [exclusive](#exclusive), [shared](#shared), [failover](#failover), and [key_shared](#key_shared). These modes are illustrated in the figure below.
-
-![Subscription modes](/assets/pulsar-subscription-types.png)
-
-> **Pub-Sub or Queuing**
-> In Pulsar, you can use different subscriptions flexibly.
-> * If you want to achieve traditional "fan-out pub-sub messaging" among consumers, specify a unique subscription name for each consumer. This is the exclusive subscription mode.
-> * If you want to achieve "message queuing" among consumers, share the same subscription name among multiple consumers (shared, failover, key_shared).
-> * If you want to achieve both effects simultaneously, combine the exclusive subscription mode with other subscription modes for consumers.
-
-### Consumerless Subscriptions and Their Corresponding Modes
-
-When a subscription has no consumers, its subscription mode is undefined. The mode of a subscription is defined when a consumer connects to it, and the mode can be changed by restarting all consumers with a different configuration.
-
-### Exclusive
-
-In *exclusive* mode, only a single consumer is allowed to attach to the subscription. If multiple consumers subscribe to a topic using the same subscription, an error occurs.
-
-In the diagram below, only **Consumer A-0** is allowed to consume messages.
-
-> Exclusive mode is the default subscription mode.
-
-![Exclusive subscriptions](/assets/pulsar-exclusive-subscriptions.png)
-
-### Failover
-
-In *failover* mode, multiple consumers can attach to the same subscription. A master consumer is picked for a non-partitioned topic, or for each partition of a partitioned topic, and receives messages. When the master consumer disconnects, all (non-acknowledged and subsequent) messages are delivered to the next consumer in line.
-
-For partitioned topics, the broker sorts consumers by priority level and by the lexicographical order of consumer names. The broker then tries to evenly assign partitions to the consumers with the highest priority level.
-
-For non-partitioned topics, the broker picks consumers in the order in which they subscribed to the topic.
-
-In the diagram below, **Consumer-B-0** is the master consumer, while **Consumer-B-1** would be the next consumer in line to receive messages if **Consumer-B-0** were disconnected.
-
-![Failover subscriptions](/assets/pulsar-failover-subscriptions.png)
-
-### Shared
-
-In *shared* or *round robin* mode, multiple consumers can attach to the same subscription. Messages are delivered in a round-robin distribution across consumers, and any given message is delivered to only one consumer. When a consumer disconnects, all the messages that were sent to it and not acknowledged are rescheduled for sending to the remaining consumers.
-
-In the diagram below, **Consumer-C-1** and **Consumer-C-2** are able to subscribe to the topic, but **Consumer-C-3** and others could as well.
-
-> **Limitations of shared mode**
-> When using shared mode, be aware that:
-> * Message ordering is not guaranteed.
-> * You cannot use cumulative acknowledgment with shared mode.
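-
-As an illustration, the following Java sketch (topic and subscription names are assumptions) attaches two consumers to one shared subscription; each message is delivered to only one of them:
-
-```java
-
-Consumer<byte[]> worker1 = pulsarClient.newConsumer()
-        .topic("my-topic")
-        .subscriptionName("my-shared-subscription")
-        .subscriptionType(SubscriptionType.Shared)
-        .subscribe();
-
-// A second consumer on the same subscription shares the message stream
-Consumer<byte[]> worker2 = pulsarClient.newConsumer()
-        .topic("my-topic")
-        .subscriptionName("my-shared-subscription")
-        .subscriptionType(SubscriptionType.Shared)
-        .subscribe();
-
-```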
-
-![Shared subscriptions](/assets/pulsar-shared-subscriptions.png)
-
-### Key_Shared
-
-In *Key_Shared* mode, multiple consumers can attach to the same subscription. Messages are delivered across consumers, and messages with the same key or same ordering key are delivered to only one consumer. No matter how many times a message is redelivered, it is delivered to the same consumer. When a consumer connects or disconnects, the consumer serving some keys of messages changes.
-
-![Key_Shared subscriptions](/assets/pulsar-key-shared-subscriptions.png)
-
-Note that when consumers use the Key_Shared subscription mode, you need to **disable batching** or **use key-based batching** for the producers. There are two reasons why key-based batching is necessary for the Key_Shared subscription mode:
-1. The broker dispatches messages according to the keys of the messages, but the default batching approach might fail to pack the messages with the same key into the same batch.
-2. Since it is the consumers instead of the broker who dispatch the messages from the batches, the key of the first message in a batch is considered the key of all messages in that batch, thereby leading to context errors.
-
-Key-based batching aims at resolving the above-mentioned issues. This batching method ensures that the producers pack messages with the same key into the same batch. Messages without a key are packed into one batch, and this batch has no key. When the broker dispatches messages from this batch, it uses `NON_KEY` as the key. In addition, each consumer is associated with **only one** key and should receive **only one message batch** for the connected key. By default, you can limit batching by configuring the number of messages that producers are allowed to send.
-
-Below are examples of enabling key-based batching under the Key_Shared subscription mode, with `client` being the Pulsar client that you created.
-
-````mdx-code-block
-<Tabs
-  defaultValue="Java"
-  values={[{"label":"Java","value":"Java"},{"label":"C++","value":"C++"},{"label":"Python","value":"Python"}]}>
-<TabItem value="Java">
-
-```java
-
-Producer<byte[]> producer = client.newProducer()
-        .topic("my-topic")
-        .batcherBuilder(BatcherBuilder.KEY_BASED)
-        .create();
-
-```
-
-</TabItem>
-<TabItem value="C++">
-
-```cpp
-
-ProducerConfiguration producerConfig;
-producerConfig.setBatchingType(ProducerConfiguration::BatchingType::KeyBasedBatching);
-Producer producer;
-client.createProducer("my-topic", producerConfig, producer);
-
-```
-
-</TabItem>
-<TabItem value="Python">
-
-```python
-
-producer = client.create_producer(topic='my-topic', batching_type=pulsar.BatchingType.KeyBased)
-
-```
-
-</TabItem>
-
-</Tabs>
-````
-
-> **Limitations of Key_Shared mode**
-> When you use Key_Shared mode, be aware that:
-> * You need to specify a key or orderingKey for messages.
-> * You cannot use cumulative acknowledgment with Key_Shared mode.
-
-## Multi-topic subscriptions
-
-When a consumer subscribes to a Pulsar topic, by default it subscribes to one specific topic, such as `persistent://public/default/my-topic`. As of Pulsar version 1.23.0-incubating, however, Pulsar consumers can simultaneously subscribe to multiple topics. You can define a list of topics in two ways:
-
-* On the basis of a [**reg**ular **ex**pression](https://en.wikipedia.org/wiki/Regular_expression) (regex), for example `persistent://public/default/finance-.*`
-* By explicitly defining a list of topics
-
-> When subscribing to multiple topics by regex, all topics must be in the same [namespace](#namespaces).
-
-When subscribing to multiple topics, the Pulsar client automatically makes a call to the Pulsar API to discover the topics that match the regex pattern/list, and then subscribes to all of them. If any of the topics do not exist, the consumer auto-subscribes to them once the topics are created.
-
-> **No ordering guarantees across multiple topics**
-> When a producer sends messages to a single topic, all messages are guaranteed to be read from that topic in the same order. However, these guarantees do not hold across multiple topics. So when a producer sends messages to multiple topics, the order in which messages are read from those topics is not guaranteed to be the same.
-
-The following are multi-topic subscription examples for Java.
-
-```java
-
-import java.util.regex.Pattern;
-
-import org.apache.pulsar.client.api.Consumer;
-import org.apache.pulsar.client.api.PulsarClient;
-
-PulsarClient pulsarClient = // Instantiate Pulsar client object
-
-// Subscribe to all topics in a namespace
-Pattern allTopicsInNamespace = Pattern.compile("persistent://public/default/.*");
-Consumer<byte[]> allTopicsConsumer = pulsarClient.newConsumer()
-        .topicsPattern(allTopicsInNamespace)
-        .subscriptionName("subscription-1")
-        .subscribe();
-
-// Subscribe to a subset of topics in a namespace, based on regex
-Pattern someTopicsInNamespace = Pattern.compile("persistent://public/default/foo.*");
-Consumer<byte[]> someTopicsConsumer = pulsarClient.newConsumer()
-        .topicsPattern(someTopicsInNamespace)
-        .subscriptionName("subscription-1")
-        .subscribe();
-
-```
-
-For code examples, see [Java](client-libraries-java.md#multi-topic-subscriptions).
-
-## Partitioned topics
-
-Normal topics are served only by a single broker, which limits the maximum throughput of the topic. *Partitioned topics* are a special type of topic that are handled by multiple brokers, thus allowing for higher throughput.
-
-A partitioned topic is actually implemented as N internal topics, where N is the number of partitions. When publishing messages to a partitioned topic, each message is routed to one of several brokers. The distribution of partitions across brokers is handled automatically by Pulsar.
-
-The diagram below illustrates this:
-
-![](/assets/partitioning.png)
-
-The **Topic1** topic has five partitions (**P0** through **P4**) split across three brokers. Because there are more partitions than brokers, two brokers handle two partitions apiece, while the third handles only one (again, Pulsar handles this distribution of partitions automatically).
-
-Messages for this topic are broadcast to two consumers. The [routing mode](#routing-modes) determines which partition each message is published to, while the [subscription mode](#subscription-modes) determines which messages go to which consumers.
-
-Decisions about routing and subscription modes can be made separately in most cases. In general, throughput concerns should guide partitioning/routing decisions while subscription decisions should be guided by application semantics.
-
-There is no difference between partitioned topics and normal topics in terms of how subscription modes work, as partitioning only affects what happens between the time a message is published by a producer and the time it is processed and acknowledged by a consumer.
-
-Partitioned topics need to be explicitly created via the [admin API](admin-api-overview.md). The number of partitions can be specified when creating the topic.
-
-### Routing modes
-
-When publishing to partitioned topics, you must specify a *routing mode*.
The routing mode determines which partition---that is, which internal topic---each message should be published to.
-
-Three {@inject: javadoc:MessageRoutingMode:/client/org/apache/pulsar/client/api/MessageRoutingMode} options are available:
-
-Mode | Description
-:--------|:------------
-`RoundRobinPartition` | If no key is provided, the producer publishes messages across all partitions in round-robin fashion to achieve maximum throughput. Note that round-robin is not applied per individual message; rather, it is applied at the boundary of the batching delay, to ensure that batching is effective. If a key is specified on the message, the partitioned producer hashes the key and assigns the message to a particular partition. This is the default mode.
-`SinglePartition` | If no key is provided, the producer randomly picks one single partition and publishes all the messages into that partition. If a key is specified on the message, the partitioned producer hashes the key and assigns the message to a particular partition.
-`CustomPartition` | Uses a custom message router implementation that is called to determine the partition for a particular message. You can create a custom routing mode by using the [Java client](client-libraries-java.md) and implementing the {@inject: javadoc:MessageRouter:/client/org/apache/pulsar/client/api/MessageRouter} interface.
-
-### Ordering guarantee
-
-The ordering of messages is related to the MessageRoutingMode and the message key. Usually, you would want a per-key-partition ordering guarantee.
-
-If there is a key attached to a message, the messages are routed to the corresponding partitions based on the hashing scheme specified by {@inject: javadoc:HashingScheme:/client/org/apache/pulsar/client/api/HashingScheme} in {@inject: javadoc:ProducerBuilder:/client/org/apache/pulsar/client/api/ProducerBuilder}, when using either `SinglePartition` or `RoundRobinPartition` mode.
-
-Ordering guarantee | Description | Routing Mode and Key
-:------------------|:------------|:------------
-Per-key-partition | All the messages with the same key are in order and placed in the same partition. | Use either `SinglePartition` or `RoundRobinPartition` mode, and provide a key with each message.
-Per-producer | All the messages from the same producer are in order. | Use `SinglePartition` mode, and provide no key for the messages.
-
-### Hashing scheme
-
-{@inject: javadoc:HashingScheme:/client/org/apache/pulsar/client/api/HashingScheme} is an enum that represents the set of standard hashing functions available when choosing the partition to use for a particular message.
-
-There are two standard hashing functions available: `JavaStringHash` and `Murmur3_32Hash`.
-The default hashing function for a producer is `JavaStringHash`.
-Note that `JavaStringHash` is not useful when producers come from multiple language clients; in this case, it is recommended to use `Murmur3_32Hash`.
-
-
-
-## Non-persistent topics
-
-
-By default, Pulsar persistently stores *all* unacknowledged messages on multiple [BookKeeper](concepts-architecture-overview.md#persistent-storage) bookies (storage nodes). Data for messages on persistent topics can thus survive broker restarts and subscriber failover.
-
-Pulsar also, however, supports **non-persistent topics**, which are topics on which messages are *never* persisted to disk and live only in memory.
When using non-persistent delivery, killing a Pulsar broker or disconnecting a subscriber from a topic means that all in-transit messages are lost on that (non-persistent) topic, so clients may see message loss.
-
-Non-persistent topics have names of this form (note the `non-persistent` in the name):
-
-```http
-
-non-persistent://tenant/namespace/topic
-
-```
-
-> For more info on using non-persistent topics, see the [Non-persistent messaging cookbook](cookbooks-non-persistent.md).
-
-In non-persistent topics, brokers immediately deliver messages to all connected subscribers *without persisting them* in [BookKeeper](concepts-architecture-overview.md#persistent-storage). If a subscriber is disconnected, the broker will not be able to deliver those in-transit messages, and subscribers will never be able to receive those messages again. Eliminating the persistent storage step makes messaging on non-persistent topics slightly faster than on persistent topics in some cases, but with the caveat that some of the core benefits of Pulsar are lost.
-
-> With non-persistent topics, message data lives only in memory. If a message broker fails or message data can otherwise not be retrieved from memory, your message data may be lost. Use non-persistent topics only if you're *certain* that your use case requires it and can sustain it.
-
-By default, non-persistent topics are enabled on Pulsar brokers. You can disable them in the broker's [configuration](reference-configuration.md#broker-enableNonPersistentTopics). You can manage non-persistent topics using the `pulsar-admin topics` command. For more information, see [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/).
-
-### Performance
-
-Non-persistent messaging is usually faster than persistent messaging because brokers don't persist messages and immediately send acks back to the producer as soon as that message is delivered to connected brokers. Producers thus see comparatively low publish latency with non-persistent topics.
-
-### Client API
-
-Producers and consumers can connect to non-persistent topics in the same way as persistent topics, with the crucial difference that the topic name must start with `non-persistent`. All three subscription modes---[exclusive](#exclusive), [shared](#shared), and [failover](#failover)---are supported for non-persistent topics.
-
-Here's an example [Java consumer](client-libraries-java.md#consumers) for a non-persistent topic:
-
-```java
-
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar://localhost:6650")
-        .build();
-String npTopic = "non-persistent://public/default/my-topic";
-String subscriptionName = "my-subscription-name";
-
-Consumer<byte[]> consumer = client.newConsumer()
-        .topic(npTopic)
-        .subscriptionName(subscriptionName)
-        .subscribe();
-
-```
-
-Here's an example [Java producer](client-libraries-java.md#producer) for the same non-persistent topic:
-
-```java
-
-Producer<byte[]> producer = client.newProducer()
-        .topic(npTopic)
-        .create();
-
-```
-
-## System topic
-
-A system topic is a predefined topic for internal use within Pulsar. It can be either a persistent or a non-persistent topic.
-
-System topics are used to implement certain features, such as transactions, heartbeat detection, topic-level policies, and resource group services, while eliminating dependencies on third-party components. System topics make the implementation of these features simpler, more self-contained, and more flexible.
Take heartbeat detection as an example: you can leverage the system topic for health checks to internally produce and consume messages under the heartbeat namespace, which detects whether the current service is still alive.
-
-Different system topics exist depending on the namespace. The following table outlines the available system topics for each specific namespace.
-
-| Namespace | TopicName | Domain | Count | Usage |
-|-----------|-----------|--------|-------|-------|
-| pulsar/system | `transaction_coordinator_assign_${id}` | Persistent | Default 16 | Transaction coordinator |
-| pulsar/system | `_transaction_log${tc_id}` | Persistent | Default 16 | Transaction log |
-| pulsar/system | `resource-usage` | Non-persistent | Default 4 | Resource group service |
-| host/port | `heartbeat` | Persistent | 1 | Heartbeat detection |
-| User-defined-ns | [`__change_events`](concepts-multi-tenancy.md#namespace-change-events-and-topic-level-policies) | Persistent | Default 4 | Topic events |
-| User-defined-ns | `__transaction_buffer_snapshot` | Persistent | One per namespace | Transaction buffer snapshots |
-| User-defined-ns | `${topicName}__transaction_pending_ack` | Persistent | One per every topic subscription acknowledged with transactions | Acknowledgements with transactions |
-
-:::note
-
-* You cannot create any system topics.
-* By default, system topics are disabled. To enable system topics, you need to change the following configurations in the `conf/broker.conf` or `conf/standalone.conf` file.
-
-  ```conf
-  systemTopicEnabled=true
-  topicLevelPoliciesEnabled=true
-  ```
-
-:::
-
-
-## Message retention and expiry
-
-By default, Pulsar message brokers:
-
-* immediately delete *all* messages that have been acknowledged by a consumer, and
-* [persistently store](concepts-architecture-overview.md#persistent-storage) all unacknowledged messages in a message backlog.
-
-Pulsar has two features, however, that enable you to override this default behavior:
-
-* Message **retention** enables you to store messages that have been acknowledged by a consumer
-* Message **expiry** enables you to set a time to live (TTL) for messages that have not yet been acknowledged
-
-> All message retention and expiry is managed at the [namespace](#namespaces) level. For a how-to, see the [Message retention and expiry](cookbooks-retention-expiry.md) cookbook.
-
-The diagram below illustrates both concepts:
-
-![Message retention and expiry](/assets/retention-expiry.png)
-
-With message retention, shown at the top, a retention policy applied to all topics in a namespace dictates that some messages are durably stored in Pulsar even though they've already been acknowledged. Acknowledged messages that are not covered by the retention policy are deleted. Without a retention policy, *all* of the acknowledged messages would be deleted.
-
-With message expiry, shown at the bottom, some messages are deleted, even though they haven't been acknowledged, because they've expired according to the TTL applied to the namespace (for example because a TTL of 5 minutes has been applied and the messages haven't been acknowledged but are 10 minutes old).
-
-## Message deduplication
-
-Message duplication occurs when a message is [persisted](concepts-architecture-overview.md#persistent-storage) by Pulsar more than once. Message deduplication is an optional Pulsar feature that prevents unnecessary message duplication by processing each message only once, even if the message is received more than once.
-
-The following diagram illustrates what happens when message deduplication is disabled vs. enabled:
-
-![Pulsar message deduplication](/assets/message-deduplication.png)
-
-
-Message deduplication is disabled in the scenario shown at the top. Here, a producer publishes message 1 on a topic; the message reaches a Pulsar broker and is [persisted](concepts-architecture-overview.md#persistent-storage) to BookKeeper. The producer then sends message 1 again (in this case due to some retry logic), and the message is received by the broker and stored in BookKeeper again, which means that duplication has occurred.
-
-In the second scenario at the bottom, the producer publishes message 1, which is received by the broker and persisted, as in the first scenario. When the producer attempts to publish the message again, however, the broker knows that it has already seen message 1 and thus does not persist the message.
-
-> Message deduplication is handled at the namespace level or the topic level. For more instructions, see the [message deduplication cookbook](cookbooks-deduplication.md).
-
-
-### Producer idempotency
-
-The other available approach to message deduplication is to ensure that each message is *only produced once*. This approach is typically called **producer idempotency**. The drawback of this approach is that it defers the work of message deduplication to the application. In Pulsar, deduplication is handled at the [broker](reference-terminology.md#broker) level, so you do not need to modify your Pulsar client code. Instead, you only need to make administrative changes. For details, see [Managing message deduplication](cookbooks-deduplication.md).
-
-### Deduplication and effectively-once semantics
-
-Message deduplication makes Pulsar an ideal messaging system to be used in conjunction with stream processing engines (SPEs) and other systems seeking to provide effectively-once processing semantics. Messaging systems that do not offer automatic message deduplication require the SPE or other system to guarantee deduplication, which means that strict message ordering comes at the cost of burdening the application with the responsibility of deduplication. With Pulsar, strict ordering guarantees come at no application-level cost.
-
-> You can find more in-depth information in [this post](https://www.splunk.com/en_us/blog/it/exactly-once-is-not-exactly-the-same.html).
-
-## Delayed message delivery
-Delayed message delivery enables you to consume a message later rather than immediately. In this mechanism, a message is stored in BookKeeper as usual; after the message is published to a broker, the `DelayedDeliveryTracker` maintains a time index (time -> messageId) in memory, and the message is delivered to a consumer once the specified delay has passed.
-
-Delayed message delivery only works in Shared subscription mode. In Exclusive and Failover subscription modes, the delayed message is dispatched immediately.
-
-The diagram below illustrates the concept of delayed message delivery:
-
-![Delayed Message Delivery](/assets/message_delay.png)
-
-The broker saves a message without any checks. When a consumer consumes a message, if the message is marked as delayed, the message is added to the `DelayedDeliveryTracker`. The subscription then checks the `DelayedDeliveryTracker` and fetches the messages whose delay has expired.
-
-### Broker
-Delayed message delivery is enabled by default. You can change it in the broker configuration file as below:
-
-```conf
-
-# Whether to enable the delayed delivery for messages.
-# If disabled, messages are immediately delivered and there is no tracking overhead.
-delayedDeliveryEnabled=true
-
-# Control the ticking time for the retry of delayed message delivery,
-# affecting the accuracy of the delivery time compared to the scheduled time.
-# Default is 1 second.
-delayedDeliveryTickTimeMillis=1000
-
-```
-
-### Producer
-The following is an example of delayed message delivery for a producer in Java:
-
-```java
-
-// message to be delivered at the configured delay interval
-producer.newMessage().deliverAfter(3L, TimeUnit.MINUTES).value("Hello Pulsar!").send();
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/concepts-multi-tenancy.md b/site2/website/versioned_docs/version-2.9.3-deprecated/concepts-multi-tenancy.md
deleted file mode 100644
index 93a59557b2efca..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/concepts-multi-tenancy.md
+++ /dev/null
@@ -1,67 +0,0 @@
----
-id: concepts-multi-tenancy
-title: Multi Tenancy
-sidebar_label: "Multi Tenancy"
-original_id: concepts-multi-tenancy
----
-
-Pulsar was created from the ground up as a multi-tenant system. To support multi-tenancy, Pulsar has a concept of tenants. Tenants can be spread across clusters and can each have their own [authentication and authorization](security-overview.md) scheme applied to them. They are also the administrative unit at which storage quotas, [message TTL](cookbooks-retention-expiry.md#time-to-live-ttl), and isolation policies can be managed.
-
-The multi-tenant nature of Pulsar is reflected most visibly in topic URLs, which have this structure:
-
-```http
-
-persistent://tenant/namespace/topic
-
-```
-
-As you can see, the tenant is the most basic unit of categorization for topics (more fundamental than the namespace and topic name).
-
-## Tenants
-
-To each tenant in a Pulsar instance you can assign:
-
-* An [authorization](security-authorization.md) scheme
-* The set of [clusters](reference-terminology.md#cluster) to which the tenant's configuration applies
-
-## Namespaces
-
-Tenants and namespaces are two key concepts of Pulsar to support multi-tenancy.
-
-* Pulsar is provisioned for specified tenants with appropriate capacity allocated to each tenant.
-* A namespace is the administrative unit within a tenant. The configuration policies set on a namespace apply to all the topics created in that namespace. A tenant may create multiple namespaces via self-administration using the REST API and the [`pulsar-admin`](reference-pulsar-admin.md) CLI tool. For instance, a tenant with different applications can create a separate namespace for each application.
-
-Names for topics in the same namespace will look like this:
-
-```http
-
-persistent://tenant/app1/topic-1
-
-persistent://tenant/app1/topic-2
-
-persistent://tenant/app1/topic-3
-
-```
-
-### Namespace change events and topic-level policies
-
-Pulsar is a multi-tenant event streaming system. Administrators can manage the tenants and namespaces by setting policies at different levels. However, the policies, such as the retention policy and the storage quota policy, are only available at the namespace level. In many use cases, users need to set a policy at the topic level. The namespace change events approach is proposed for supporting topic-level policies in an efficient way. In this approach, Pulsar is used as an event log to store namespace change events (such as topic policy changes).
This approach has a few benefits:
-- Avoids using ZooKeeper and putting more load on ZooKeeper.
-- Uses Pulsar as an event log for propagating the policy cache, which can scale efficiently.
-- Allows Pulsar SQL to query the namespace changes and audit the system.
-
-Each namespace has a [system topic](concepts-messaging.md#system-topic) named `__change_events`. This system topic stores change events for a given namespace. The following figure illustrates how to leverage it to update topic-level policies.
-
-![Leverage the system topic to update topic-level policies](/assets/system-topic-for-topic-level-policies.svg)
-
-1. Pulsar Admin clients communicate with the Admin Restful API to update topic-level policies.
-2. Any broker that receives the Admin HTTP request publishes a topic policy change event to the corresponding system topic (`__change_events`) of the namespace.
-3. Each broker that owns one or more namespace bundles subscribes to the system topic (`__change_events`) to receive the change events of the namespace.
-4. Each broker applies the change events to its policy cache.
-5. Once the policy cache is updated, the broker sends the response back to the Pulsar Admin clients.
-
-:::note
-
-By default, the system topic is disabled. To enable topic-level policies (`topicLevelPoliciesEnabled=true`), you need to enable the system topic by setting `systemTopicEnabled` to `true` in the `conf/broker.conf` or `conf/standalone.conf` file.
-
-:::
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/concepts-multiple-advertised-listeners.md b/site2/website/versioned_docs/version-2.9.3-deprecated/concepts-multiple-advertised-listeners.md
deleted file mode 100644
index f2e1ae0aadc7ca..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/concepts-multiple-advertised-listeners.md
+++ /dev/null
@@ -1,44 +0,0 @@
----
-id: concepts-multiple-advertised-listeners
-title: Multiple advertised listeners
-sidebar_label: "Multiple advertised listeners"
-original_id: concepts-multiple-advertised-listeners
----
-
-When a Pulsar cluster is deployed in a production environment, it may be necessary to expose multiple advertised addresses for the broker. For example, when you deploy a Pulsar cluster in Kubernetes and want other clients, which are not in the same Kubernetes cluster, to connect to the Pulsar cluster, you need to assign a broker URL to external clients. But clients in the same Kubernetes cluster can still connect to the Pulsar cluster through the internal network of Kubernetes.
-
-## Advertised listeners
-
-To ensure clients in both internal and external networks can connect to a Pulsar cluster, Pulsar introduces the `advertisedListeners` and `internalListenerName` configuration options in the [broker configuration file](reference-configuration.md#broker), so that the broker supports exposing multiple advertised listeners and supports the separation of internal and external network traffic.
-
-- The `advertisedListeners` option is used to specify multiple advertised listeners. The broker uses the listener as the broker identifier in the load manager and the bundle owner data. The `advertisedListeners` is formatted as `<listener_name>:pulsar://<host>:<port>, <listener_name>:pulsar+ssl://<host>:<port>`. You can set up the `advertisedListeners` like
-`advertisedListeners=internal:pulsar://192.168.1.11:6660,internal:pulsar+ssl://192.168.1.11:6651`.
-
-- The `internalListenerName` option is used to specify the internal service URL that the broker uses.
You can specify the `internalListenerName` by choosing one of the `advertisedListeners`. The broker uses the listener name of the first advertised listener as the `internalListenerName` if the `internalListenerName` is absent.
-
-After setting up the `advertisedListeners`, clients can choose one of the listeners as the service URL to create a connection to the broker, as long as the network is accessible. However, if the client creates a producer or consumer on a topic, the client must send a lookup request to the broker to get the owner broker, and then connect to the owner broker to publish or consume messages. Therefore, you must allow the client to get the corresponding service URL with the same advertised listener name as the one used by the client. This helps keep the client side simple and secure.
-
-## Use multiple advertised listeners
-
-This example shows how a Pulsar client uses multiple advertised listeners.
-
-1. Configure multiple advertised listeners in the broker configuration file.
-
-```shell
-
-advertisedListeners={listenerName}:pulsar://xxxx:6650,
-{listenerName}:pulsar+ssl://xxxx:6651
-
-```
-
-2. Specify the listener name for the client.
-
-```java
-
-PulsarClient client = PulsarClient.builder()
-    .serviceUrl("pulsar://xxxx:6650")
-    .listenerName("external")
-    .build();
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/concepts-overview.md b/site2/website/versioned_docs/version-2.9.3-deprecated/concepts-overview.md
deleted file mode 100644
index c643aa0ce7bbce..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/concepts-overview.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-id: concepts-overview
-title: Pulsar Overview
-sidebar_label: "Overview"
-original_id: concepts-overview
----
-
-Pulsar is a multi-tenant, high-performance solution for server-to-server messaging. Originally developed by Yahoo, Pulsar is under the stewardship of the [Apache Software Foundation](https://www.apache.org/).
-
-Key features of Pulsar are listed below:
-
-* Native support for multiple clusters in a Pulsar instance, with seamless [geo-replication](administration-geo.md) of messages across clusters.
-* Very low publish and end-to-end latency.
-* Seamless scalability to over a million topics.
-* A simple [client API](concepts-clients.md) with bindings for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python.md) and [C++](client-libraries-cpp.md).
-* Multiple [subscription modes](concepts-messaging.md#subscription-modes) ([exclusive](concepts-messaging.md#exclusive), [shared](concepts-messaging.md#shared), and [failover](concepts-messaging.md#failover)) for topics.
-* Guaranteed message delivery with [persistent message storage](concepts-architecture-overview.md#persistent-storage) provided by [Apache BookKeeper](http://bookkeeper.apache.org/).
-* A serverless, lightweight computing framework, [Pulsar Functions](functions-overview.md), that offers the capability for stream-native data processing.
-* A serverless connector framework, [Pulsar IO](io-overview.md), which is built on Pulsar Functions and makes it easier to move data in and out of Apache Pulsar.
-* [Tiered Storage](concepts-tiered-storage.md) offloads data from hot/warm storage to cold/long-term storage (such as S3 and GCS) when the data is aging out.
- -## Contents - -- [Messaging Concepts](concepts-messaging.md) -- [Architecture Overview](concepts-architecture-overview.md) -- [Pulsar Clients](concepts-clients.md) -- [Geo Replication](concepts-replication.md) -- [Multi Tenancy](concepts-multi-tenancy.md) -- [Authentication and Authorization](concepts-authentication.md) -- [Topic Compaction](concepts-topic-compaction.md) -- [Tiered Storage](concepts-tiered-storage.md) diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/concepts-proxy-sni-routing.md b/site2/website/versioned_docs/version-2.9.3-deprecated/concepts-proxy-sni-routing.md deleted file mode 100644 index 51419a66cefe6e..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/concepts-proxy-sni-routing.md +++ /dev/null @@ -1,180 +0,0 @@ ---- -id: concepts-proxy-sni-routing -title: Proxy support with SNI routing -sidebar_label: "Proxy support with SNI routing" -original_id: concepts-proxy-sni-routing ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -A proxy server is an intermediary server that forwards requests from multiple clients to different servers across the Internet. The proxy server acts as a "traffic cop" in both forward and reverse proxy scenarios, and benefits your system such as load balancing, performance, security, auto-scaling, and so on. - -The proxy in Pulsar acts as a reverse proxy, and creates a gateway in front of brokers. Proxies such as Apache Traffic Server (ATS), HAProxy, Nginx, and Envoy are not supported in Pulsar. These proxy-servers support **SNI routing**. SNI routing is used to route traffic to a destination without terminating the SSL connection. Layer 4 routing provides greater transparency because the outbound connection is determined by examining the destination address in the client TCP packets. - -Pulsar clients (Java, C++, Python) support [SNI routing protocol](https://github.com/apache/pulsar/wiki/PIP-60:-Support-Proxy-server-with-SNI-routing), so you can connect to brokers through the proxy. This document walks you through how to set up the ATS proxy, enable SNI routing, and connect Pulsar client to the broker through the ATS proxy. - -## ATS-SNI Routing in Pulsar -To support [layer-4 SNI routing](https://docs.trafficserver.apache.org/en/latest/admin-guide/layer-4-routing.en.html) with ATS, the inbound connection must be a TLS connection. Pulsar client supports SNI routing protocol on TLS connection, so when Pulsar clients connect to broker through ATS proxy, Pulsar uses ATS as a reverse proxy. - -Pulsar supports SNI routing for geo-replication, so brokers can connect to brokers in other clusters through the ATS proxy. - -This section explains how to set up and use ATS as a reverse proxy, so Pulsar clients can connect to brokers through the ATS proxy using the SNI routing protocol on TLS connection. - -### Set up ATS Proxy for layer-4 SNI routing -To support layer 4 SNI routing, you need to configure the `records.conf` and `ssl_server_name.conf` files. - -![Pulsar client SNI](/assets/pulsar-sni-client.png) - -The [records.config](https://docs.trafficserver.apache.org/en/latest/admin-guide/files/records.config.en.html) file is located in the `/usr/local/etc/trafficserver/` directory by default. The file lists configurable variables used by the ATS. - -To configure the `records.config` files, complete the following steps. -1. 
Update the TLS port (`http.server_ports`) on which the proxy listens, and update the proxy certificates (`ssl.client.cert.path` and `ssl.client.cert.filename`) to secure TLS tunneling.
-2. Configure the server ports (`http.connect_ports`) used for tunneling to the broker. If Pulsar brokers are listening on ports `4443` and `6651`, add these broker service ports to the `http.connect_ports` configuration.
-
-The following is an example.
-
-```
-
-# PROXY TLS PORT
-CONFIG proxy.config.http.server_ports STRING 4443:ssl 4080
-# PROXY CERTS FILE PATH
-CONFIG proxy.config.ssl.client.cert.path STRING /proxy-cert.pem
-# PROXY KEY FILE PATH
-CONFIG proxy.config.ssl.client.cert.filename STRING /proxy-key.pem
-
-
-# The range of origin server ports that can be used for tunneling via CONNECT.
-# Traffic Server allows tunnels only to the specified ports. Supports both wildcards (*) and ranges (e.g. 0-1023).
-CONFIG proxy.config.http.connect_ports STRING 4443 6651
-
-```
-
-The `ssl_server_name` file is used to configure TLS connection handling for inbound and outbound connections. The configuration is determined by the SNI values provided by the inbound connection. The file consists of a set of configuration items, each identified by an SNI value (`fqdn`). When an inbound TLS connection is made, the SNI value from the TLS negotiation is matched with the items specified in this file. If the values match, the values specified in that item override the default values.
-
-The following example shows the mapping between the inbound SNI hostname coming from the client and the actual broker service URL to which the request should be redirected. For example, if the client sends the SNI header `pulsar-broker1`, the proxy creates a TLS tunnel by redirecting the request to the `pulsar-broker1:6651` service URL.
-
-```
-
-server_config = {
-  {
-    fqdn = 'pulsar-broker-vip',
-    # Forward to Pulsar broker which is listening on 6651
-    tunnel_route = 'pulsar-broker-vip:6651'
-  },
-  {
-    fqdn = 'pulsar-broker1',
-    # Forward to Pulsar broker-1 which is listening on 6651
-    tunnel_route = 'pulsar-broker1:6651'
-  },
-  {
-    fqdn = 'pulsar-broker2',
-    # Forward to Pulsar broker-2 which is listening on 6651
-    tunnel_route = 'pulsar-broker2:6651'
-  },
-}
-
-```
-
-After you configure the `ssl_server_name.config` and `records.config` files, the ATS proxy server handles SNI routing and creates a TCP tunnel between the client and the broker.
-
-### Configure Pulsar-client with SNI routing
-ATS SNI-routing works only with TLS. You need to enable TLS for the ATS proxy and brokers first, configure the SNI routing protocol, and then connect Pulsar clients to brokers through the ATS proxy. Pulsar clients support SNI routing by connecting to the proxy and sending the target broker URL in the SNI header. This process is handled internally. You only need to configure the following proxy settings initially, when you create a Pulsar client, to use the SNI routing protocol.
-
-````mdx-code-block
-
-```java
-
-String brokerServiceUrl = "pulsar+ssl://pulsar-broker-vip:6651/";
-String proxyUrl = "pulsar+ssl://ats-proxy:443";
-ClientBuilder clientBuilder = PulsarClient.builder()
-    .serviceUrl(brokerServiceUrl)
-    .tlsTrustCertsFilePath(TLS_TRUST_CERT_FILE_PATH)
-    .enableTls(true)
-    .allowTlsInsecureConnection(false)
-    .proxyServiceUrl(proxyUrl, ProxyProtocol.SNI)
-    .operationTimeout(1000, TimeUnit.MILLISECONDS);
-
-Map<String, String> authParams = new HashMap<>();
-authParams.put("tlsCertFile", TLS_CLIENT_CERT_FILE_PATH);
-authParams.put("tlsKeyFile", TLS_CLIENT_KEY_FILE_PATH);
-clientBuilder.authentication(AuthenticationTls.class.getName(), authParams);
-
-PulsarClient pulsarClient = clientBuilder.build();
-
-```
-
-```c++
-
-ClientConfiguration config = ClientConfiguration();
-config.setUseTls(true);
-config.setTlsTrustCertsFilePath("/path/to/cacert.pem");
-config.setTlsAllowInsecureConnection(false);
-config.setAuth(pulsar::AuthTls::create(
-    "/path/to/client-cert.pem", "/path/to/client-key.pem"));
-
-Client client("pulsar+ssl://ats-proxy:443", config);
-
-```
-
-```python
-
-from pulsar import Client, AuthenticationTLS
-
-auth = AuthenticationTLS("/path/to/my-role.cert.pem", "/path/to/my-role.key-pk8.pem")
-client = Client("pulsar+ssl://ats-proxy:443",
-                tls_trust_certs_file_path="/path/to/ca.cert.pem",
-                tls_allow_insecure_connection=False,
-                authentication=auth)
-
-```
-
-````
-
-### Pulsar geo-replication with SNI routing
-You can use the ATS proxy for geo-replication. Pulsar brokers can connect to brokers in other clusters for geo-replication by using SNI routing. To enable SNI routing for broker connections across clusters, you need to configure the SNI proxy URL in the cluster metadata. Once the SNI proxy URL is configured in the cluster metadata, brokers can connect to brokers in other clusters through the proxy over SNI routing.
-
-![Pulsar client SNI](/assets/pulsar-sni-geo.png)
-
-In this example, a Pulsar cluster is deployed into two separate regions, `us-west` and `us-east`. Both regions are configured with an ATS proxy, and brokers in each region run behind the ATS proxy. We configure the cluster metadata for both clusters, so brokers in one cluster can use SNI routing and connect to brokers in the other cluster through the ATS proxy.
-
-(a) Configure the cluster metadata for `us-east` with the `us-east` broker service URL and the `us-east` ATS proxy URL with the SNI proxy-protocol.
-
-```
-
-./pulsar-admin clusters update \
---broker-url-secure pulsar+ssl://east-broker-vip:6651 \
---url http://east-broker-vip:8080 \
---proxy-protocol SNI \
---proxy-url pulsar+ssl://east-ats-proxy:443
-
-```
-
-(b) Configure the cluster metadata for `us-west` with the `us-west` broker service URL and the `us-west` ATS proxy URL with the SNI proxy-protocol.
-
-```
-
-./pulsar-admin clusters update \
---broker-url-secure pulsar+ssl://west-broker-vip:6651 \
---url http://west-broker-vip:8080 \
---proxy-protocol SNI \
---proxy-url pulsar+ssl://west-ats-proxy:443
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/concepts-replication.md b/site2/website/versioned_docs/version-2.9.3-deprecated/concepts-replication.md
deleted file mode 100644
index 799f0eb4d92c6b..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/concepts-replication.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-id: concepts-replication
-title: Geo Replication
-sidebar_label: "Geo Replication"
-original_id: concepts-replication
----
-
-Pulsar enables messages to be produced and consumed in different geo-locations.
For instance, your application may be publishing data in one region or market and you would like to process it for consumption in other regions or markets. [Geo-replication](administration-geo.md) in Pulsar enables you to do that.
-
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/concepts-tiered-storage.md b/site2/website/versioned_docs/version-2.9.3-deprecated/concepts-tiered-storage.md
deleted file mode 100644
index f6988e53a8cd4e..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/concepts-tiered-storage.md
+++ /dev/null
@@ -1,18 +0,0 @@
----
-id: concepts-tiered-storage
-title: Tiered Storage
-sidebar_label: "Tiered Storage"
-original_id: concepts-tiered-storage
----
-
-Pulsar's segment-oriented architecture allows for topic backlogs to grow very large, effectively without limit. However, this can become expensive over time.
-
-One way to alleviate this cost is to use Tiered Storage. With tiered storage, older messages in the backlog can be moved from BookKeeper to a cheaper storage mechanism, while still allowing clients to access the backlog as if nothing had changed.
-
-![Tiered Storage](/assets/pulsar-tiered-storage.png)
-
-> Data written to BookKeeper is replicated to 3 physical machines by default. However, once a segment is sealed in BookKeeper it becomes immutable and can be copied to long term storage. Long term storage can achieve cost savings by using mechanisms such as [Reed-Solomon error correction](https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction) to require fewer physical copies of data.
-
-Pulsar currently supports S3, Google Cloud Storage (GCS), and filesystem for [long term store](https://pulsar.apache.org/docs/en/cookbooks-tiered-storage/). Offloading to long term storage is triggered via a REST API or command line interface. The user passes in the amount of topic data they wish to retain on BookKeeper, and the broker will copy the backlog data to long term storage. The original data will then be deleted from BookKeeper after a configured delay (4 hours by default).
-
-> For a guide to setting up tiered storage, see the [Tiered storage cookbook](cookbooks-tiered-storage.md).
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/concepts-topic-compaction.md b/site2/website/versioned_docs/version-2.9.3-deprecated/concepts-topic-compaction.md
deleted file mode 100644
index 34b7ed7fbbd31e..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/concepts-topic-compaction.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-id: concepts-topic-compaction
-title: Topic Compaction
-sidebar_label: "Topic Compaction"
-original_id: concepts-topic-compaction
----
-
-Pulsar was built with highly scalable [persistent storage](concepts-architecture-overview.md#persistent-storage) of message data as a primary objective. Pulsar topics enable you to persistently store as many unacknowledged messages as you need while preserving message ordering. By default, Pulsar stores *all* unacknowledged/unprocessed messages produced on a topic. Accumulating many unacknowledged messages on a topic is necessary for many Pulsar use cases, but it can also be very time-intensive for Pulsar consumers to "rewind" through the entire log of messages.
-
-> For a more practical guide to topic compaction, see the [Topic compaction cookbook](cookbooks-compaction.md).
-
-For some use cases, consumers don't need a complete "image" of the topic log.
They may only need a few values to construct a more "shallow" image of the log, perhaps even just the most recent value. For these kinds of use cases Pulsar offers **topic compaction**. When you run compaction on a topic, Pulsar goes through a topic's backlog and removes messages that are *obscured* by later messages, i.e. it goes through the topic on a per-key basis and leaves only the most recent message associated with that key. - -Pulsar's topic compaction feature: - -* Allows for faster "rewind" through topic logs -* Applies only to [persistent topics](concepts-architecture-overview.md#persistent-storage) -* Triggered automatically when the backlog reaches a certain size or can be triggered manually via the command line. See the [Topic compaction cookbook](cookbooks-compaction.md) -* Is conceptually and operationally distinct from [retention and expiry](concepts-messaging.md#message-retention-and-expiry). Topic compaction *does*, however, respect retention. If retention has removed a message from the message backlog of a topic, the message will also not be readable from the compacted topic ledger. - -> #### Topic compaction example: the stock ticker -> An example use case for a compacted Pulsar topic would be a stock ticker topic. On a stock ticker topic, each message bears a timestamped dollar value for stocks for purchase (with the message key holding the stock symbol, e.g. `AAPL` or `GOOG`). With a stock ticker you may care only about the most recent value(s) of the stock and have no interest in historical data (i.e. you don't need to construct a complete image of the topic's sequence of messages per key). Compaction would be highly beneficial in this case because it would keep consumers from needing to rewind through obscured messages. - - -## How topic compaction works - -When topic compaction is triggered [via the CLI](cookbooks-compaction.md), Pulsar will iterate over the entire topic from beginning to end. For each key that it encounters the compaction routine will keep a record of the latest occurrence of that key. - -After that, the broker will create a new [BookKeeper ledger](concepts-architecture-overview.md#ledgers) and make a second iteration through each message on the topic. For each message, if the key matches the latest occurrence of that key, then the key's data payload, message ID, and metadata will be written to the newly created ledger. If the key doesn't match the latest then the message will be skipped and left alone. If any given message has an empty payload, it will be skipped and considered deleted (akin to the concept of [tombstones](https://en.wikipedia.org/wiki/Tombstone_(data_store)) in key-value databases). At the end of this second iteration through the topic, the newly created BookKeeper ledger is closed and two things are written to the topic's metadata: the ID of the BookKeeper ledger and the message ID of the last compacted message (this is known as the **compaction horizon** of the topic). Once this metadata is written compaction is complete. - -After the initial compaction operation, the Pulsar [broker](reference-terminology.md#broker) that owns the topic is notified whenever any future changes are made to the compaction horizon and compacted backlog. 
When such changes occur:
-
-* Clients (consumers and readers) that have `readCompacted` enabled will attempt to read messages from a topic and either:
-  * Read from the topic like normal (if the message ID is greater than or equal to the compaction horizon) or
-  * Read beginning at the compaction horizon (if the message ID is lower than the compaction horizon)
-
-
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/concepts-transactions.md b/site2/website/versioned_docs/version-2.9.3-deprecated/concepts-transactions.md
deleted file mode 100644
index 08490ba06b5d7e..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/concepts-transactions.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-id: transactions
-title: Transactions
-sidebar_label: "Overview"
-original_id: transactions
----
-
-Transactional semantics enable event streaming applications to consume, process, and produce messages in one atomic operation. In Pulsar, a producer or consumer can work with messages across multiple topics and partitions and ensure those messages are processed as a single unit.
-
-The following concepts help you understand Pulsar transactions.
-
-## Transaction coordinator and transaction log
-The transaction coordinator maintains the topics and subscriptions that interact in a transaction. When a transaction is committed, the transaction coordinator interacts with the topic owner broker to complete the transaction.
-
-The transaction coordinator maintains the entire life cycle of transactions, and prevents a transaction from entering an incorrect status.
-
-The transaction coordinator handles transaction timeouts, and ensures that a transaction is aborted after it times out.
-
-All the transaction metadata is persisted in the transaction log. The transaction log is backed by a Pulsar topic. If the transaction coordinator crashes, it can restore the transaction metadata from the transaction log.
-
-## Transaction ID
-The transaction ID (TxnID) identifies a unique transaction in Pulsar. The transaction ID is 128-bit. The highest 16 bits are reserved for the ID of the transaction coordinator, and the remaining bits hold a monotonically increasing number within each transaction coordinator. The TxnID makes it easy to locate a crashed transaction.
-
-## Transaction buffer
-Messages produced within a transaction are stored in the transaction buffer. The messages in the transaction buffer are not materialized (visible) to consumers until the transaction is committed, and they are discarded when the transaction is aborted.
-
-## Pending acknowledge state
-Message acknowledgments within a transaction are maintained in the pending acknowledge state until the transaction completes. If a message is in the pending acknowledge state, the message cannot be acknowledged by other transactions until the message is removed from the pending acknowledge state.
-
-The pending acknowledge state is persisted to the pending acknowledge log. The pending acknowledge log is backed by a Pulsar topic. A new broker can restore the state from the pending acknowledge log to ensure the acknowledgment is not lost.
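-
-The following is a minimal sketch of how these pieces fit together in the Java client; it assumes transactions are enabled on the broker (`transactionCoordinatorEnabled=true`), and the topic and subscription names are illustrative:
-
-```java
-
-import java.util.concurrent.TimeUnit;
-
-import org.apache.pulsar.client.api.Consumer;
-import org.apache.pulsar.client.api.Message;
-import org.apache.pulsar.client.api.Producer;
-import org.apache.pulsar.client.api.PulsarClient;
-import org.apache.pulsar.client.api.transaction.Transaction;
-
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar://localhost:6650")
-        .enableTransaction(true)
-        .build();
-
-// Transactional producers must disable the send timeout.
-Producer<byte[]> producer = client.newProducer()
-        .topic("persistent://public/default/output-topic")
-        .sendTimeout(0, TimeUnit.SECONDS)
-        .create();
-Consumer<byte[]> consumer = client.newConsumer()
-        .topic("persistent://public/default/input-topic")
-        .subscriptionName("txn-subscription")
-        .subscribe();
-
-// Open a transaction; its TxnID is assigned by a transaction coordinator.
-Transaction txn = client.newTransaction()
-        .withTransactionTimeout(5, TimeUnit.MINUTES)
-        .build()
-        .get();
-
-// Consume, process, and produce as a single unit: the produced message sits
-// in the transaction buffer and the acknowledgment stays in the pending
-// acknowledge state until the transaction completes.
-Message<byte[]> message = consumer.receive();
-producer.newMessage(txn).value(message.getValue()).sendAsync();
-consumer.acknowledgeAsync(message.getMessageId(), txn);
-
-// Committing materializes the produced message and finalizes the ack.
-txn.commit().get();
-
-```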
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/cookbooks-bookkeepermetadata.md b/site2/website/versioned_docs/version-2.9.3-deprecated/cookbooks-bookkeepermetadata.md
deleted file mode 100644
index b0fa98dc3b65d5..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/cookbooks-bookkeepermetadata.md
+++ /dev/null
@@ -1,21 +0,0 @@
----
-id: cookbooks-bookkeepermetadata
-title: BookKeeper Ledger Metadata
-original_id: cookbooks-bookkeepermetadata
----
-
-Pulsar stores data in BookKeeper ledgers, and you can understand the contents of a ledger by inspecting the metadata attached to it.
-This metadata is stored on ZooKeeper and is readable using the BookKeeper APIs.
-
-Description of the current metadata:
-
-| Scope | Metadata name | Metadata value |
-| ------------- | ------------- | ------------- |
-| All ledgers | application | 'pulsar' |
-| All ledgers | component | 'managed-ledger', 'schema', 'compacted-topic' |
-| Managed ledgers | pulsar/managed-ledger | name of the ledger |
-| Cursor | pulsar/cursor | name of the cursor |
-| Compacted topic | pulsar/compactedTopic | name of the original topic |
-| Compacted topic | pulsar/compactedTo | id of the last compacted message |
-
-
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/cookbooks-compaction.md b/site2/website/versioned_docs/version-2.9.3-deprecated/cookbooks-compaction.md
deleted file mode 100644
index dfa314727241a8..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/cookbooks-compaction.md
+++ /dev/null
@@ -1,142 +0,0 @@
----
-id: cookbooks-compaction
-title: Topic compaction
-sidebar_label: "Topic compaction"
-original_id: cookbooks-compaction
----
-
-Pulsar's [topic compaction](concepts-topic-compaction.md#compaction) feature enables you to create **compacted** topics in which older, "obscured" entries are pruned from the topic, allowing for faster reads through the topic's history (which messages are deemed obscured/outdated/irrelevant will depend on your use case).
-
-To use compaction:
-
-* You need to give messages keys, as topic compaction in Pulsar takes place on a *per-key basis* (i.e. messages are compacted based on their key). For a stock ticker use case, the stock symbol---e.g. `AAPL` or `GOOG`---could serve as the key (more on this [below](#when-should-i-use-compacted-topics)). Messages without keys will be left alone by the compaction process.
-* Compaction can be configured to run [automatically](#configuring-compaction-to-run-automatically), or you can manually [trigger](#triggering-compaction-manually) compaction using the Pulsar administrative API.
-* Your consumers must be [configured](#consumer-configuration) to read from compacted topics ([Java consumers](#java), for example, have a `readCompacted` setting that must be set to `true`). If this configuration is not set, consumers will still be able to read from the non-compacted topic.
-
-
-> Compaction only works on messages that have keys (as in the stock ticker example, where the stock symbol serves as the key for each message). Keys can thus be thought of as the axis along which compaction is applied. Messages that don't have keys are simply ignored by compaction.
-
-## When should I use compacted topics?
-
-The classic example of a topic that could benefit from compaction would be a stock ticker topic through which consumers can access up-to-date values for specific stocks.
Imagine a scenario in which messages carrying stock value data use the stock symbol as the key (`GOOG`, `AAPL`, `TWTR`, etc.). Compacting this topic would give consumers on the topic two options: - -* They can read from the "original," non-compacted topic in case they need access to "historical" values, i.e. the entirety of the topic's messages. -* They can read from the compacted topic if they only want to see the most up-to-date messages. - -Thus, if you're using a Pulsar topic called `stock-values`, some consumers could have access to all messages in the topic (perhaps because they're performing some kind of number crunching of all values in the last hour) while the consumers used to power the real-time stock ticker only see the compacted topic (and thus aren't forced to process outdated messages). Which variant of the topic any given consumer pulls messages from is determined by the consumer's [configuration](#consumer-configuration). - -> One of the benefits of compaction in Pulsar is that you aren't forced to choose between compacted and non-compacted topics, as the compaction process leaves the original topic as-is and essentially adds an alternate topic. In other words, you can run compaction on a topic and consumers that need access to the non-compacted version of the topic will not be adversely affected. - - -## Configuring compaction to run automatically - -Tenant administrators can configure a policy for compaction at the namespace level. The policy specifies how large the topic backlog can grow before compaction is triggered. - -For example, to trigger compaction when the backlog reaches 100MB: - -```bash - -$ bin/pulsar-admin namespaces set-compaction-threshold \ - --threshold 100M my-tenant/my-namespace - -``` - -Configuring the compaction threshold on a namespace will apply to all topics within that namespace. - -## Triggering compaction manually - -In order to run compaction on a topic, you need to use the [`topics compact`](reference-pulsar-admin.md#topics-compact) command for the [`pulsar-admin`](reference-pulsar-admin.md) CLI tool. Here's an example: - -```bash - -$ bin/pulsar-admin topics compact \ - persistent://my-tenant/my-namespace/my-topic - -``` - -The `pulsar-admin` tool runs compaction via the Pulsar {@inject: rest:REST:/} API. To run compaction in its own dedicated process, i.e. *not* through the REST API, you can use the [`pulsar compact-topic`](reference-cli-tools.md#pulsar-compact-topic) command. Here's an example: - -```bash - -$ bin/pulsar compact-topic \ - --topic persistent://my-tenant-namespace/my-topic - -``` - -> Running compaction in its own process is recommended when you want to avoid interfering with the broker's performance. Broker performance should only be affected, however, when running compaction on topics with a large keyspace (i.e when there are many keys on the topic). The first phase of the compaction process keeps a copy of each key in the topic, which can create memory pressure as the number of keys grows. Using the `pulsar-admin topics compact` command to run compaction through the REST API should present no issues in the overwhelming majority of cases; using `pulsar compact-topic` should correspondingly be considered an edge case. - -The `pulsar compact-topic` command communicates with [ZooKeeper](https://zookeeper.apache.org) directly. In order to establish communication with ZooKeeper, though, the `pulsar` CLI tool will need to have a valid [broker configuration](reference-configuration.md#broker). 
You can either supply a proper configuration in `conf/broker.conf` or specify a non-default location for the configuration: - -```bash - -$ bin/pulsar compact-topic \ - --broker-conf /path/to/broker.conf \ - --topic persistent://my-tenant/my-namespace/my-topic - -# If the configuration is in conf/broker.conf -$ bin/pulsar compact-topic \ - --topic persistent://my-tenant/my-namespace/my-topic - -``` - -#### When should I trigger compaction? - -How often you [trigger compaction](#triggering-compaction-manually) will vary widely based on the use case. If you want a compacted topic to be extremely speedy on read, then you should run compaction fairly frequently. - -## Consumer configuration - -Pulsar consumers and readers need to be configured to read from compacted topics. The sections below show you how to enable compacted topic reads for Pulsar's language clients. - -### Java - -In order to read from a compacted topic using a Java consumer, the `readCompacted` parameter must be set to `true`. Here's an example consumer for a compacted topic: - -```java - -Consumer compactedTopicConsumer = client.newConsumer() - .topic("some-compacted-topic") - .readCompacted(true) - .subscribe(); - -``` - -As mentioned above, topic compaction in Pulsar works on a *per-key basis*. That means that messages that you produce on compacted topics need to have keys (the content of the key will depend on your use case). Messages that don't have keys will be ignored by the compaction process. Here's an example Pulsar message with a key: - -```java - -import org.apache.pulsar.client.api.Message; -import org.apache.pulsar.client.api.MessageBuilder; - -Message msg = MessageBuilder.create() - .setContent(someByteArray) - .setKey("some-key") - .build(); - -``` - -The example below shows a message with a key being produced on a compacted Pulsar topic: - -```java - -import org.apache.pulsar.client.api.Message; -import org.apache.pulsar.client.api.MessageBuilder; -import org.apache.pulsar.client.api.Producer; -import org.apache.pulsar.client.api.PulsarClient; - -PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://localhost:6650") - .build(); - -Producer compactedTopicProducer = client.newProducer() - .topic("some-compacted-topic") - .create(); - -Message msg = MessageBuilder.create() - .setContent(someByteArray) - .setKey("some-key") - .build(); - -compactedTopicProducer.send(msg); - -``` - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/cookbooks-deduplication.md b/site2/website/versioned_docs/version-2.9.3-deprecated/cookbooks-deduplication.md deleted file mode 100644 index f7f9e3d7bb425b..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/cookbooks-deduplication.md +++ /dev/null @@ -1,151 +0,0 @@ ---- -id: cookbooks-deduplication -title: Message deduplication -sidebar_label: "Message deduplication" -original_id: cookbooks-deduplication ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -When **Message deduplication** is enabled, it ensures that each message produced on Pulsar topics is persisted to disk *only once*, even if the message is produced more than once. Message deduplication is handled automatically on the server side. - -To use message deduplication in Pulsar, you need to configure your Pulsar brokers and clients. - -## How it works - -You can enable or disable message deduplication at the namespace level or the topic level. By default, it is disabled on all namespaces or topics. 
You can enable it in the following ways: - -* Enable deduplication for all namespaces/topics at the broker-level. -* Enable deduplication for a specific namespace with the `pulsar-admin namespaces` interface. -* Enable deduplication for a specific topic with the `pulsar-admin topics` interface. - -## Configure message deduplication - -You can configure message deduplication in Pulsar using the [`broker.conf`](reference-configuration.md#broker) configuration file. The following deduplication-related parameters are available. - -Parameter | Description | Default -:---------|:------------|:------- -`brokerDeduplicationEnabled` | Sets the default behavior for message deduplication in the Pulsar broker. If it is set to `true`, message deduplication is enabled on all namespaces/topics. If it is set to `false`, you have to enable or disable deduplication at the namespace level or the topic level. | `false` -`brokerDeduplicationMaxNumberOfProducers` | The maximum number of producers for which information is stored for deduplication purposes. | `10000` -`brokerDeduplicationEntriesInterval` | The number of entries after which a deduplication informational snapshot is taken. A larger interval leads to fewer snapshots being taken, though this lengthens the topic recovery time (the time required for entries published after the snapshot to be replayed). | `1000` -`brokerDeduplicationSnapshotIntervalSeconds`| The time period after which a deduplication informational snapshot is taken. It runs simultaneously with `brokerDeduplicationEntriesInterval`. |`120` -`brokerDeduplicationProducerInactivityTimeoutMinutes` | The time of inactivity (in minutes) after which the broker discards deduplication information related to a disconnected producer. | `360` (6 hours) - -### Set default value at the broker-level - -By default, message deduplication is *disabled* on all Pulsar namespaces/topics. To enable it on all namespaces/topics, set the `brokerDeduplicationEnabled` parameter to `true` and re-start the broker. - -Even if you set the value for `brokerDeduplicationEnabled`, enabling or disabling via Pulsar admin CLI overrides the default settings at the broker-level. - -### Enable message deduplication - -Though message deduplication is disabled by default at the broker level, you can enable message deduplication for a specific namespace or topic using the [`pulsar-admin namespaces set-deduplication`](reference-pulsar-admin.md#namespace-set-deduplication) or the [`pulsar-admin topics set-deduplication`](reference-pulsar-admin.md#topic-set-deduplication) command. You can use the `--enable`/`-e` flag and specify the namespace/topic. - -The following example shows how to enable message deduplication at the namespace level. - -```bash - -$ bin/pulsar-admin namespaces set-deduplication \ - public/default \ - --enable # or just -e - -``` - -### Disable message deduplication - -Even if you enable message deduplication at the broker level, you can disable message deduplication for a specific namespace or topic using the [`pulsar-admin namespace set-deduplication`](reference-pulsar-admin.md#namespace-set-deduplication) or the [`pulsar-admin topics set-deduplication`](reference-pulsar-admin.md#topic-set-deduplication) command. Use the `--disable`/`-d` flag and specify the namespace/topic. - -The following example shows how to disable message deduplication at the namespace level. 
```bash

$ bin/pulsar-admin namespaces set-deduplication \
  public/default \
  --disable # or just -d

```

## Pulsar clients

If you enable message deduplication in Pulsar brokers, you need to complete the following tasks for your client producers:

1. Specify a name for the producer.
1. Set the message timeout to `0` (namely, no timeout).

The instructions for Java, Python, and C++ clients are different.

````mdx-code-block


To enable message deduplication on a [Java producer](client-libraries-java.md#producers), set the producer name using the `producerName` setter, and set the timeout to `0` using the `sendTimeout` setter.

```java

import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;
import java.util.concurrent.TimeUnit;

PulsarClient pulsarClient = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650")
        .build();
Producer producer = pulsarClient.newProducer()
        .producerName("producer-1")
        .topic("persistent://public/default/topic-1")
        .sendTimeout(0, TimeUnit.SECONDS)
        .create();

```


To enable message deduplication on a [Python producer](client-libraries-python.md#producers), set the producer name using `producer_name`, and set the timeout to `0` using `send_timeout_millis`.

```python

import pulsar

client = pulsar.Client("pulsar://localhost:6650")
producer = client.create_producer(
    "persistent://public/default/topic-1",
    producer_name="producer-1",
    send_timeout_millis=0)

```


To enable message deduplication on a [C++ producer](client-libraries-cpp.md#producer), set the producer name using `producer_name`, and set the timeout to `0` using `send_timeout_millis`.

```cpp

#include <pulsar/Client.h>

std::string serviceUrl = "pulsar://localhost:6650";
std::string topic = "persistent://some-tenant/ns1/topic-1";
std::string producerName = "producer-1";

Client client(serviceUrl);

ProducerConfiguration producerConfig;
producerConfig.setSendTimeout(0);
producerConfig.setProducerName(producerName);

Producer producer;

Result result = client.createProducer(topic, producerConfig, producer);

```


````
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/cookbooks-encryption.md b/site2/website/versioned_docs/version-2.9.3-deprecated/cookbooks-encryption.md
deleted file mode 100644
index f0d8fb8735eb63..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/cookbooks-encryption.md
+++ /dev/null
@@ -1,184 +0,0 @@
---
id: cookbooks-encryption
title: Pulsar Encryption
sidebar_label: "Encryption"
original_id: cookbooks-encryption
---

Pulsar encryption allows applications to encrypt messages at the producer and decrypt them at the consumer. Encryption is performed using the public/private key pair configured by the application. Encrypted messages can only be decrypted by consumers with a valid key.

## Asymmetric and symmetric encryption

Pulsar uses a dynamically generated symmetric AES key to encrypt messages (data). The AES key (data key) is encrypted using the application-provided ECDSA/RSA key pair; as a result, there is no need to share the secret with everyone.

The key is a public/private key pair used for encryption/decryption. The producer key is the public key, and the consumer key is the private key of the key pair.
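To make the hybrid scheme concrete, the following is a minimal, self-contained sketch of the envelope-encryption pattern described above, written against the JDK's standard `javax.crypto` APIs. It is purely illustrative: the Pulsar client performs the equivalent steps internally, and the algorithm parameters here (AES-GCM, RSA-OAEP) are assumptions made for the example rather than Pulsar's exact choices.

```java

import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class EnvelopeEncryptionSketch {
    public static void main(String[] args) throws Exception {
        // The application-provided asymmetric key pair (RSA here for simplicity).
        KeyPair pair = KeyPairGenerator.getInstance("RSA").generateKeyPair();

        // The dynamically generated symmetric data key.
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        SecretKey dataKey = kg.generateKey();

        // Encrypt the message payload with the symmetric data key.
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher aes = Cipher.getInstance("AES/GCM/NoPadding");
        aes.init(Cipher.ENCRYPT_MODE, dataKey, new GCMParameterSpec(128, iv));
        byte[] ciphertext = aes.doFinal("my-message".getBytes());

        // Wrap (encrypt) the data key with the public key; only holders of the
        // private key can unwrap it. The wrapped key travels with the message.
        Cipher rsa = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        rsa.init(Cipher.WRAP_MODE, pair.getPublic());
        byte[] wrappedDataKey = rsa.wrap(dataKey);

        System.out.println(ciphertext.length + " payload bytes, "
                + wrappedDataKey.length + " wrapped-key bytes");
    }
}

```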
The application configures the producer with the public key. This key is used to encrypt the AES data key. The encrypted data key is sent as part of the message header. Only entities with the private key (in this case the consumer) are able to decrypt the data key, which is used to decrypt the message.

A message can be encrypted with more than one key. Any one of the keys used for encrypting the message is sufficient to decrypt the message.

Pulsar does not store the encryption key anywhere in the Pulsar service. If you lose/delete the private key, your message is irretrievably lost and cannot be recovered.

## Producer
![alt text](/assets/pulsar-encryption-producer.jpg "Pulsar Encryption Producer")

## Consumer
![alt text](/assets/pulsar-encryption-consumer.jpg "Pulsar Encryption Consumer")

## Get started

1. Create your ECDSA or RSA public/private key pair.

```shell

openssl ecparam -name secp521r1 -genkey -param_enc explicit -out test_ecdsa_privkey.pem
openssl ec -in test_ecdsa_privkey.pem -pubout -outform pkcs8 -out test_ecdsa_pubkey.pem

```

2. Add the public and private keys to your key management system, and configure your producers to retrieve public keys and your consumers to retrieve private keys.
3. Implement the `CryptoKeyReader::getPublicKey()` interface on the producer side and the `CryptoKeyReader::getPrivateKey()` interface on the consumer side; these are invoked by the Pulsar client to load the keys.
4. Add the encryption key to the producer configuration: `conf.addEncryptionKey("myapp.key")`.
5. Add the `CryptoKeyReader` implementation to the producer/consumer configuration: `conf.setCryptoKeyReader(keyReader)`.
6. Sample producer application:

```java

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Map;

class RawFileKeyReader implements CryptoKeyReader {

    String publicKeyFile = "";
    String privateKeyFile = "";

    RawFileKeyReader(String pubKeyFile, String privKeyFile) {
        publicKeyFile = pubKeyFile;
        privateKeyFile = privKeyFile;
    }

    @Override
    public EncryptionKeyInfo getPublicKey(String keyName, Map<String, String> keyMeta) {
        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
        try {
            keyInfo.setKey(Files.readAllBytes(Paths.get(publicKeyFile)));
        } catch (IOException e) {
            System.out.println("ERROR: Failed to read public key from file " + publicKeyFile);
            e.printStackTrace();
        }
        return keyInfo;
    }

    @Override
    public EncryptionKeyInfo getPrivateKey(String keyName, Map<String, String> keyMeta) {
        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
        try {
            keyInfo.setKey(Files.readAllBytes(Paths.get(privateKeyFile)));
        } catch (IOException e) {
            System.out.println("ERROR: Failed to read private key from file " + privateKeyFile);
            e.printStackTrace();
        }
        return keyInfo;
    }
}

PulsarClient pulsarClient = PulsarClient.create("http://localhost:8080");

ProducerConfiguration prodConf = new ProducerConfiguration();
prodConf.setCryptoKeyReader(new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem"));
prodConf.addEncryptionKey("myappkey");

Producer producer = pulsarClient.createProducer("persistent://my-tenant/my-ns/my-topic", prodConf);

for (int i = 0; i < 10; i++) {
    producer.send("my-message".getBytes());
}

pulsarClient.close();

```
7. Sample consumer application:

```java

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Map;

class RawFileKeyReader implements CryptoKeyReader {

    String publicKeyFile = "";
    String privateKeyFile = "";

    RawFileKeyReader(String pubKeyFile, String privKeyFile) {
        publicKeyFile = pubKeyFile;
        privateKeyFile = privKeyFile;
    }

    @Override
    public EncryptionKeyInfo getPublicKey(String keyName, Map<String, String> keyMeta) {
        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
        try {
            keyInfo.setKey(Files.readAllBytes(Paths.get(publicKeyFile)));
        } catch (IOException e) {
            System.out.println("ERROR: Failed to read public key from file " + publicKeyFile);
            e.printStackTrace();
        }
        return keyInfo;
    }

    @Override
    public EncryptionKeyInfo getPrivateKey(String keyName, Map<String, String> keyMeta) {
        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
        try {
            keyInfo.setKey(Files.readAllBytes(Paths.get(privateKeyFile)));
        } catch (IOException e) {
            System.out.println("ERROR: Failed to read private key from file " + privateKeyFile);
            e.printStackTrace();
        }
        return keyInfo;
    }
}

ConsumerConfiguration consConf = new ConsumerConfiguration();
consConf.setCryptoKeyReader(new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem"));
PulsarClient pulsarClient = PulsarClient.create("http://localhost:8080");
Consumer consumer = pulsarClient.subscribe("persistent://my-tenant/my-ns/my-topic", "my-subscriber-name", consConf);
Message msg = null;

for (int i = 0; i < 10; i++) {
    msg = consumer.receive();
    // do something
    System.out.println("Received: " + new String(msg.getData()));
}

// Acknowledge the consumption of all messages at once
consumer.acknowledgeCumulative(msg);
pulsarClient.close();

```

## Key rotation
Pulsar generates a new AES data key every 4 hours or after a certain number of messages is published. The producer automatically fetches the asymmetric public key every 4 hours by calling `CryptoKeyReader::getPublicKey()` to retrieve the latest version.

## Enabling encryption at the producer application:
If you produce messages that are consumed across application boundaries, you need to ensure that consumers in other applications have access to one of the private keys that can decrypt the messages. This can be done in two ways:
1. The consumer application provides you access to its public key, which you add to your producer keys.
1. You grant access to one of the private keys from the pairs that the producer uses.

In some cases, the producer may want to encrypt the messages with multiple keys. For this, add all such keys to the configuration. The consumer will be able to decrypt the message as long as it has access to at least one of the keys.

For example, if messages need to be encrypted using two keys, `myapp.messagekey1` and `myapp.messagekey2`:

```java

conf.addEncryptionKey("myapp.messagekey1");
conf.addEncryptionKey("myapp.messagekey2");

```

## Decrypting encrypted messages at the consumer application:
Consumers require access to one of the private keys to decrypt messages produced by the producer. If you would like to receive encrypted messages, create a public/private key pair and give your public key to the producer application so it can encrypt messages using your public key.

## Handling Failures:
* Producer/consumer loses access to the key
  * The producer action will fail, indicating the cause of the failure. The application has the option to proceed with sending unencrypted messages in such cases. Call `conf.setCryptoFailureAction(ProducerCryptoFailureAction)` to control the producer behavior.
The default behavior is to fail the request.
  * If consumption fails due to a decryption failure or missing keys on the consumer, the application has the option to consume the encrypted message or discard it. Call `conf.setCryptoFailureAction(ConsumerCryptoFailureAction)` to control the consumer behavior. The default behavior is to fail the request.
    The application will never be able to decrypt the messages if the private key is permanently lost.
* Batch messaging
  * If decryption fails and the message contains batch messages, the client will not be able to retrieve the individual messages in the batch, so message consumption fails even if `conf.setCryptoFailureAction()` is set to `CONSUME`.
* If decryption fails, message consumption stops and the application will notice backlog growth in addition to decryption failure messages in the client log. If the application does not have access to the private key to decrypt the message, the only option is to skip/discard the backlogged messages.

diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/cookbooks-message-queue.md b/site2/website/versioned_docs/version-2.9.3-deprecated/cookbooks-message-queue.md
deleted file mode 100644
index eb43cbde5fb818..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/cookbooks-message-queue.md
+++ /dev/null
@@ -1,127 +0,0 @@
---
id: cookbooks-message-queue
title: Using Pulsar as a message queue
sidebar_label: "Message queue"
original_id: cookbooks-message-queue
---

Message queues are essential components of many large-scale data architectures. If every single work object that passes through your system absolutely *must* be processed in spite of the slowness or downright failure of this or that system component, there's a good chance that you'll need a message queue to step in and ensure that unprocessed data is retained---with correct ordering---until the required actions are taken.

Pulsar is a great choice for a message queue because:

* it was built with [persistent message storage](concepts-architecture-overview.md#persistent-storage) in mind
* it offers automatic load balancing across [consumers](reference-terminology.md#consumer) for messages on a topic (or custom load balancing if you wish)

> You can use the same Pulsar installation to act as a real-time message bus and as a message queue if you wish (or just one or the other). You can set aside some topics for real-time purposes and other topics for message queue purposes (or use specific namespaces for either purpose if you wish).


## Client configuration changes

To use a Pulsar [topic](reference-terminology.md#topic) as a message queue, you should distribute the receiver load on that topic across several consumers (the optimal number of consumers will depend on the load). Each consumer must:

* Establish a [shared subscription](concepts-messaging.md#shared) and use the same subscription name as the other consumers (otherwise the subscription is not shared and the consumers can't act as a processing ensemble)
* If you'd like to have tight control over message dispatching across consumers, set the consumers' **receiver queue** size very low (potentially even to 0 if necessary). Each Pulsar [consumer](reference-terminology.md#consumer) has a receiver queue that determines how many messages the consumer will attempt to fetch at a time. A receiver queue of 1000 (the default), for example, means that the consumer will attempt to process 1000 messages from the topic's backlog upon connection.
Setting the receiver queue to zero essentially means ensuring that each consumer is only doing one thing at a time.

  The downside to restricting the receiver queue size of consumers is that it limits the potential throughput of those consumers and cannot be used with [partitioned topics](reference-terminology.md#partitioned-topic). Whether the performance/control trade-off is worthwhile will depend on your use case.

## Java clients

Here's an example Java consumer configuration that uses a shared subscription:

```java

import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.SubscriptionType;

String SERVICE_URL = "pulsar://localhost:6650";
String TOPIC = "persistent://public/default/mq-topic-1";
String subscription = "sub-1";

PulsarClient client = PulsarClient.builder()
        .serviceUrl(SERVICE_URL)
        .build();

Consumer consumer = client.newConsumer()
        .topic(TOPIC)
        .subscriptionName(subscription)
        .subscriptionType(SubscriptionType.Shared)
        // If you'd like to restrict the receiver queue size
        .receiverQueueSize(10)
        .subscribe();

```

## Python clients

Here's an example Python consumer configuration that uses a shared subscription:

```python

from pulsar import Client, ConsumerType

SERVICE_URL = "pulsar://localhost:6650"
TOPIC = "persistent://public/default/mq-topic-1"
SUBSCRIPTION = "sub-1"

client = Client(SERVICE_URL)
consumer = client.subscribe(
    TOPIC,
    SUBSCRIPTION,
    # If you'd like to restrict the receiver queue size
    receiver_queue_size=10,
    consumer_type=ConsumerType.Shared)

```

## C++ clients

Here's an example C++ consumer configuration that uses a shared subscription:

```cpp

#include <pulsar/Client.h>

std::string serviceUrl = "pulsar://localhost:6650";
std::string topic = "persistent://public/default/mq-topic-1";
std::string subscription = "sub-1";

Client client(serviceUrl);

ConsumerConfiguration consumerConfig;
consumerConfig.setConsumerType(ConsumerShared);
// If you'd like to restrict the receiver queue size
consumerConfig.setReceiverQueueSize(10);

Consumer consumer;

Result result = client.subscribe(topic, subscription, consumerConfig, consumer);

```

## Go clients

Here is an example of a Go consumer configuration that uses a shared subscription:

```go

import "github.com/apache/pulsar-client-go/pulsar"

client, err := pulsar.NewClient(pulsar.ClientOptions{
    URL: "pulsar://localhost:6650",
})
if err != nil {
    log.Fatal(err)
}
consumer, err := client.Subscribe(pulsar.ConsumerOptions{
    Topic:             "persistent://public/default/mq-topic-1",
    SubscriptionName:  "sub-1",
    Type:              pulsar.Shared,
    ReceiverQueueSize: 10, // If you'd like to restrict the receiver queue size
})
if err != nil {
    log.Fatal(err)
}

```
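Whichever client you use, the worker side of a message queue is the same receive-process-acknowledge loop. Here is a minimal Java sketch of that loop; it continues the Java consumer example above (add an import for `org.apache.pulsar.client.api.Message`), and the `process` method is a hypothetical placeholder for your application logic:

```java

while (true) {
    // Block until a message is available.
    Message msg = consumer.receive();
    try {
        process(msg.getData()); // hypothetical application-specific work
        // Acknowledging lets the broker delete the message once every
        // subscription has consumed it.
        consumer.acknowledge(msg);
    } catch (Exception e) {
        // Negative acknowledgement triggers redelivery, possibly to another
        // consumer in the shared subscription.
        consumer.negativeAcknowledge(msg);
    }
}

```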
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/cookbooks-non-persistent.md b/site2/website/versioned_docs/version-2.9.3-deprecated/cookbooks-non-persistent.md
deleted file mode 100644
index 178301e86eb8df..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/cookbooks-non-persistent.md
+++ /dev/null
@@ -1,63 +0,0 @@
---
id: cookbooks-non-persistent
title: Non-persistent messaging
sidebar_label: "Non-persistent messaging"
original_id: cookbooks-non-persistent
---

**Non-persistent topics** are Pulsar topics in which message data is *never* [persistently stored](concepts-architecture-overview.md#persistent-storage) and kept only in memory. This cookbook provides:

* A basic [conceptual overview](#overview) of non-persistent topics
* Information about [configurable parameters](#configuration) related to non-persistent topics
* A guide to the [CLI interface](#cli) for managing non-persistent topics

## Overview

By default, Pulsar persistently stores *all* unacknowledged messages on multiple [BookKeeper](#persistent-storage) bookies (storage nodes). Data for messages on persistent topics can thus survive broker restarts and subscriber failover.

Pulsar also, however, supports **non-persistent topics**, which are topics on which messages are *never* persisted to disk and live only in memory. When using non-persistent delivery, killing a Pulsar [broker](reference-terminology.md#broker) or disconnecting a subscriber from a topic means that all in-transit messages on that (non-persistent) topic are lost, so clients may see message loss.

Non-persistent topics have names of this form (note the `non-persistent` in the name):

```http

non-persistent://tenant/namespace/topic

```

> For more high-level information about non-persistent topics, see the [Concepts and Architecture](concepts-messaging.md#non-persistent-topics) documentation.

## Using

> In order to use non-persistent topics, they must be [enabled](#enabling) in your Pulsar broker configuration.

In order to use non-persistent topics, you only need to differentiate them by name when interacting with them. This [`pulsar-client produce`](reference-cli-tools.md#pulsar-client-produce) command, for example, would produce one message on a non-persistent topic in a standalone cluster:

```bash

$ bin/pulsar-client produce non-persistent://public/default/example-np-topic \
  --num-produce 1 \
  --messages "This message will be stored only in memory"

```

> For a more thorough guide to non-persistent topics from an administrative perspective, see the [Non-persistent topics](admin-api-topics.md) guide.

## Enabling

In order to enable non-persistent topics in a Pulsar broker, the [`enableNonPersistentTopics`](reference-configuration.md#broker-enableNonPersistentTopics) parameter must be set to `true`. This is the default, so you won't need to take any action to enable non-persistent messaging.


> #### Configuration for standalone mode
> If you're running Pulsar in standalone mode, the same configurable parameters are available, but in the [`standalone.conf`](reference-configuration.md#standalone) configuration file.

If you'd like to enable *only* non-persistent topics in a broker, you can set the [`enablePersistentTopics`](reference-configuration.md#broker-enablePersistentTopics) parameter to `false` and the `enableNonPersistentTopics` parameter to `true`.

## Managing with the CLI

Non-persistent topics can be managed using the [`pulsar-admin non-persistent`](reference-pulsar-admin.md#non-persistent) command-line interface. With that interface you can perform actions like [creating a partitioned non-persistent topic](reference-pulsar-admin.md#non-persistent-create-partitioned-topic), getting [stats](reference-pulsar-admin.md#non-persistent-stats) for a non-persistent topic, [listing](reference-pulsar-admin.md) non-persistent topics under a namespace, and more.

## Using with Pulsar clients

You shouldn't need to make any changes to your Pulsar clients to use non-persistent messaging beyond making sure that you use proper [topic names](#using) with `non-persistent` as the topic type.
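For example, a Java producer targets a non-persistent topic exactly the way it targets a persistent one; only the topic name changes. A minimal sketch, assuming a standalone cluster with non-persistent topics enabled:

```java

import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;

PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650")
        .build();

// The only non-persistent-specific detail is the topic name prefix.
Producer<byte[]> producer = client.newProducer()
        .topic("non-persistent://public/default/example-np-topic")
        .create();

producer.send("This message will be kept only in memory".getBytes());

client.close();

```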
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/cookbooks-partitioned.md b/site2/website/versioned_docs/version-2.9.3-deprecated/cookbooks-partitioned.md
deleted file mode 100644
index fb9ac354cc6d60..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/cookbooks-partitioned.md
+++ /dev/null
@@ -1,7 +0,0 @@
---
id: cookbooks-partitioned
title: Partitioned topics
sidebar_label: "Partitioned Topics"
original_id: cookbooks-partitioned
---
For details of the content, refer to [manage topics](admin-api-topics.md).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/cookbooks-retention-expiry.md b/site2/website/versioned_docs/version-2.9.3-deprecated/cookbooks-retention-expiry.md
deleted file mode 100644
index c8c46b3caa1bee..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/cookbooks-retention-expiry.md
+++ /dev/null
@@ -1,498 +0,0 @@
---
id: cookbooks-retention-expiry
title: Message retention and expiry
sidebar_label: "Message retention and expiry"
original_id: cookbooks-retention-expiry
---

````mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
````


Pulsar brokers are responsible for handling messages that pass through Pulsar, including [persistent storage](concepts-architecture-overview.md#persistent-storage) of messages. By default, for each topic, brokers only retain messages that are in at least one backlog. A backlog is the set of unacknowledged messages for a particular subscription. As a topic can have multiple subscriptions, a topic can have multiple backlogs.

As a consequence, no messages are retained (by default) on a topic that has not had any subscriptions created for it.

(Note that messages that are no longer being stored are not necessarily immediately deleted, and may in fact still be accessible until the next ledger rollover. Because clients cannot predict when rollovers may happen, it is not wise to rely on a rollover not happening at an inconvenient point in time.)

In Pulsar, you can modify this behavior, with namespace granularity, in two ways:

* You can persistently store messages that are not within a backlog (because they've been acknowledged on every existing subscription, or because there are no subscriptions) by setting [retention policies](#retention-policies).
* Messages that are not acknowledged within a specified timeframe can be automatically acknowledged by specifying the [time to live](#time-to-live-ttl) (TTL).

Pulsar's [admin interface](admin-api-overview.md) enables you to manage both retention policies and TTL with namespace granularity (and thus within a specific tenant and either on a specific cluster or in the [`global`](concepts-architecture-overview.md#global-cluster) cluster).


> #### Retention and TTL solve two different problems
> * Message retention: Keep the data for at least X hours (even if acknowledged)
> * Time-to-live: Discard data after some time (by automatically acknowledging)
>
> Most applications will want to use at most one of these.


## Retention policies

By default, when a Pulsar message arrives at a broker, the message is stored until it has been acknowledged on all subscriptions, at which point it is marked for deletion. You can override this behavior and retain messages that have already been acknowledged on all subscriptions by setting a *retention policy* for all topics in a given namespace.
Retention is based on both a *size limit* and a *time limit*. - -Retention policies are useful when you use the Reader interface. The Reader interface does not use acknowledgements, and messages do not exist within backlogs. It is required to configure retention for Reader-only use cases. - -When you set a retention policy on topics in a namespace, you must set **both** a *size limit* and a *time limit*. You can refer to the following table to set retention policies in `pulsar-admin` and Java. - -|Time limit|Size limit| Message retention | -|----------|----------|------------------------| -| -1 | -1 | Infinite retention | -| -1 | >0 | Based on the size limit | -| >0 | -1 | Based on the time limit | -| 0 | 0 | Disable message retention (by default) | -| 0 | >0 | Invalid | -| >0 | 0 | Invalid | -| >0 | >0 | Acknowledged messages or messages with no active subscription will not be retained when either time or size reaches the limit. | - -The retention settings apply to all messages on topics that do not have any subscriptions, or to messages that have been acknowledged by all subscriptions. The retention policy settings do not affect unacknowledged messages on topics with subscriptions. The unacknowledged messages are controlled by the backlog quota. - -When a retention limit on a topic is exceeded, the oldest message is marked for deletion until the set of retained messages falls within the specified limits again. - -### Defaults - -You can set message retention at instance level with the following two parameters: `defaultRetentionTimeInMinutes` and `defaultRetentionSizeInMB`. Both parameters are set to `0` by default. - -For more information of the two parameters, refer to the [`broker.conf`](reference-configuration.md#broker) configuration file. - -### Set retention policy - -You can set a retention policy for a namespace by specifying the namespace, a size limit and a time limit in `pulsar-admin`, REST API and Java. - -````mdx-code-block - - - -You can use the [`set-retention`](reference-pulsar-admin.md#namespaces-set-retention) subcommand and specify a namespace, a size limit using the `-s`/`--size` flag, and a time limit using the `-t`/`--time` flag. - -In the following example, the size limit is set to 10 GB and the time limit is set to 3 hours for each topic within the `my-tenant/my-ns` namespace. -- When the size of messages reaches 10 GB on a topic within 3 hours, the acknowledged messages will not be retained. -- After 3 hours, even if the message size is less than 10 GB, the acknowledged messages will not be retained. - -```shell - -$ pulsar-admin namespaces set-retention my-tenant/my-ns \ - --size 10G \ - --time 3h - -``` - -In the following example, the time is not limited and the size limit is set to 1 TB. The size limit determines the retention. - -```shell - -$ pulsar-admin namespaces set-retention my-tenant/my-ns \ - --size 1T \ - --time -1 - -``` - -In the following example, the size is not limited and the time limit is set to 3 hours. The time limit determines the retention. - -```shell - -$ pulsar-admin namespaces set-retention my-tenant/my-ns \ - --size -1 \ - --time 3h - -``` - -To achieve infinite retention, set both values to `-1`. - -```shell - -$ pulsar-admin namespaces set-retention my-tenant/my-ns \ - --size -1 \ - --time -1 - -``` - -To disable the retention policy, set both values to `0`. 
```shell

$ pulsar-admin namespaces set-retention my-tenant/my-ns \
  --size 0 \
  --time 0

```


{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/retention|operation/setRetention?version=@pulsar:version_number@}

:::note

To disable the retention policy, you need to set both the size and time limits to `0`. Setting only one of the size or time limits to `0` is invalid.

:::


```java

int retentionTime = 10; // 10 minutes
int retentionSize = 500; // 500 megabytes
RetentionPolicies policies = new RetentionPolicies(retentionTime, retentionSize);
admin.namespaces().setRetention(namespace, policies);

```


````

### Get retention policy

You can fetch the retention policy for a namespace by specifying the namespace. The output will be a JSON object with two keys: `retentionTimeInMinutes` and `retentionSizeInMB`.

````mdx-code-block


Use the [`get-retention`](reference-pulsar-admin.md#namespaces) subcommand and specify the namespace.

##### Example

```shell

$ pulsar-admin namespaces get-retention my-tenant/my-ns
{
  "retentionTimeInMinutes": 10,
  "retentionSizeInMB": 500
}

```


{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/retention|operation/getRetention?version=@pulsar:version_number@}


```java

admin.namespaces().getRetention(namespace);

```


````

## Backlog quotas

*Backlogs* are sets of unacknowledged messages for a topic that have been stored by bookies. Pulsar stores all unacknowledged messages in backlogs until they are processed and acknowledged.

You can control the allowable size of backlogs, at the namespace level, using *backlog quotas*. Setting a backlog quota involves setting:

* an allowable *size threshold* for each topic in the namespace
* a *retention policy* that determines which action the [broker](reference-terminology.md#broker) takes if the threshold is exceeded.

The following retention policies are available:

Policy | Action
:------|:------
`producer_request_hold` | The broker will hold and not persist produce request payloads
`producer_exception` | The broker will disconnect from the client by throwing an exception
`consumer_backlog_eviction` | The broker will begin discarding backlog messages


> #### Beware the distinction between retention policy types
> As you may have noticed, there are two definitions of the term "retention policy" in Pulsar: one that applies to persistent storage of messages not in backlogs, and one that applies to messages within backlogs.


Backlog quotas are handled at the namespace level and can be managed as shown below.

### Set size/time thresholds and backlog retention policies

You can set a size and/or time threshold and backlog retention policy for all of the topics in a [namespace](reference-terminology.md#namespace) by specifying the namespace, a size limit and/or a time limit in seconds, and a policy by name.

````mdx-code-block


Use the [`set-backlog-quota`](reference-pulsar-admin.md#namespaces) subcommand and specify a namespace, a size limit using the `-l`/`--limit` flag, a time limit using the `-lt`/`--limitTime` flag, a retention policy using the `-p`/`--policy` flag, and a policy type using the `-t`/`--type` flag (the default is `destination_storage`).
- -##### Example - -```shell - -$ pulsar-admin namespaces set-backlog-quota my-tenant/my-ns \ - --limit 2G \ - --policy producer_request_hold - -``` - -```shell - -$ pulsar-admin namespaces set-backlog-quota my-tenant/my-ns/my-topic \ ---limitTime 3600 \ ---policy producer_request_hold \ ---type message_age - -``` - - - - -{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/getBacklogQuotaMap?version=@pulsar:version_number@} - - - - -```java - -long sizeLimit = 2147483648L; -BacklogQuota.RetentionPolicy policy = BacklogQuota.RetentionPolicy.producer_request_hold; -BacklogQuota quota = new BacklogQuota(sizeLimit, policy); -admin.namespaces().setBacklogQuota(namespace, quota); - -``` - - - - -```` - -### Get backlog threshold and backlog retention policy - -You can see which size threshold and backlog retention policy has been applied to a namespace. - -````mdx-code-block - - - -Use the [`get-backlog-quotas`](reference-pulsar-admin.md#pulsar-admin-namespaces-get-backlog-quotas) subcommand and specify a namespace. Here's an example: - -```shell - -$ pulsar-admin namespaces get-backlog-quotas my-tenant/my-ns -{ - "destination_storage": { - "limit" : 2147483648, - "policy" : "producer_request_hold" - } -} - -``` - - - - -{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/backlogQuotaMap|operation/getBacklogQuotaMap?version=@pulsar:version_number@} - - - - -```java - -Map quotas = - admin.namespaces().getBacklogQuotas(namespace); - -``` - - - - -```` - -### Remove backlog quotas - -````mdx-code-block - - - -Use the [`remove-backlog-quota`](reference-pulsar-admin.md#pulsar-admin-namespaces-remove-backlog-quota) subcommand and specify a namespace, use `t`/`--type` to specify backlog type to remove(default is destination_storage). Here's an example: - -```shell - -$ pulsar-admin namespaces remove-backlog-quota my-tenant/my-ns - -``` - - - - -{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/removeBacklogQuota?version=@pulsar:version_number@} - - - - -```java - -admin.namespaces().removeBacklogQuota(namespace); - -``` - - - - -```` - -### Clear backlog - -#### pulsar-admin - -Use the [`clear-backlog`](reference-pulsar-admin.md#pulsar-admin-namespaces-clear-backlog) subcommand. - -##### Example - -```shell - -$ pulsar-admin namespaces clear-backlog my-tenant/my-ns - -``` - -By default, you will be prompted to ensure that you really want to clear the backlog for the namespace. You can override the prompt using the `-f`/`--force` flag. - -## Time to live (TTL) - -By default, Pulsar stores all unacknowledged messages forever. This can lead to heavy disk space usage in cases where a lot of messages are going unacknowledged. If disk space is a concern, you can set a time to live (TTL) that determines how long unacknowledged messages will be retained. - -### Set the TTL for a namespace - -````mdx-code-block - - - -Use the [`set-message-ttl`](reference-pulsar-admin.md#pulsar-admin-namespaces-set-message-ttl) subcommand and specify a namespace and a TTL (in seconds) using the `-ttl`/`--messageTTL` flag. 
##### Example

```shell

$ pulsar-admin namespaces set-message-ttl my-tenant/my-ns \
  --messageTTL 120 # TTL of 2 minutes

```


{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/setNamespaceMessageTTL?version=@pulsar:version_number@}


```java

admin.namespaces().setNamespaceMessageTTL(namespace, ttlInSeconds);

```


````

### Get the TTL configuration for a namespace

````mdx-code-block


Use the [`get-message-ttl`](reference-pulsar-admin.md#pulsar-admin-namespaces-get-message-ttl) subcommand and specify a namespace.

##### Example

```shell

$ pulsar-admin namespaces get-message-ttl my-tenant/my-ns
60

```


{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/getNamespaceMessageTTL?version=@pulsar:version_number@}


```java

admin.namespaces().getNamespaceMessageTTL(namespace)

```


````

### Remove the TTL configuration for a namespace

````mdx-code-block


Use the [`remove-message-ttl`](reference-pulsar-admin.md#pulsar-admin-namespaces-remove-message-ttl) subcommand and specify a namespace.

##### Example

```shell

$ pulsar-admin namespaces remove-message-ttl my-tenant/my-ns

```


{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/removeNamespaceMessageTTL?version=@pulsar:version_number@}


```java

admin.namespaces().removeNamespaceMessageTTL(namespace)

```


````

## Delete messages from namespaces

If you do not have a retention period configured and you never build up much of a backlog, the upper limit for retaining acknowledged messages equals the Pulsar segment rollover period + the entry log rollover period + (the garbage collection interval * the garbage collection ratios).

- **Segment rollover period**: basically, the segment rollover period is how often a new segment is created. Once a new segment is created, the old segment will be deleted. By default, this happens either when you have written 50,000 entries (messages) or have waited 240 minutes. You can tune this in your broker.

- **Entry log rollover period**: multiple ledgers in BookKeeper are interleaved into an [entry log](https://bookkeeper.apache.org/docs/4.11.1/getting-started/concepts/#entry-logs). For a deleted ledger to be removed, the entry log containing it must first be rolled over. The entry log rollover period is configurable, but is purely based on the entry log size. For details, see [here](https://bookkeeper.apache.org/docs/4.11.1/reference/config/#entry-log-settings). Once the entry log is rolled over, the entry log can be garbage collected.

- **Garbage collection interval**: because entry logs have interleaved ledgers, to free up space, the entry logs need to be rewritten. The garbage collection interval is how often BookKeeper performs garbage collection, which is related to minor compaction and major compaction of entry logs. For details, see [here](https://bookkeeper.apache.org/docs/4.11.1/reference/config/#entry-log-compaction-settings).
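To make that formula concrete, here is a small worked example. All four numbers are assumptions picked purely for illustration (only the 240-minute segment rollover default is stated above), so substitute your own broker and BookKeeper settings:

```java

// Hypothetical worst-case delay before an acknowledged message is
// physically deleted, using assumed (not default-verified) values.
public class DeletionDelayEstimate {
    public static void main(String[] args) {
        int segmentRolloverMinutes = 240;  // default managed-ledger rollover
        int entryLogRolloverMinutes = 60;  // assumption: size-based, workload-dependent
        int gcIntervalMinutes = 15;        // assumption: BookKeeper GC interval
        int gcRatio = 2;                   // assumption: minor + major compaction passes

        int upperBound = segmentRolloverMinutes
                + entryLogRolloverMinutes
                + gcIntervalMinutes * gcRatio;
        System.out.println("Worst-case deletion delay: " + upperBound + " minutes");
        // => Worst-case deletion delay: 330 minutes
    }
}

```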
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/cookbooks-tiered-storage.md b/site2/website/versioned_docs/version-2.9.3-deprecated/cookbooks-tiered-storage.md
deleted file mode 100644
index 5afeaa388d6407..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/cookbooks-tiered-storage.md
+++ /dev/null
@@ -1,346 +0,0 @@
---
id: cookbooks-tiered-storage
title: Tiered Storage
sidebar_label: "Tiered Storage"
original_id: cookbooks-tiered-storage
---

Pulsar's **Tiered Storage** feature allows older backlog data to be offloaded to long-term storage, thereby freeing up space in BookKeeper and reducing storage costs. This cookbook walks you through using tiered storage in your Pulsar cluster.

* Tiered storage uses [Apache jclouds](https://jclouds.apache.org) to support [Amazon S3](https://aws.amazon.com/s3/) and [Google Cloud Storage](https://cloud.google.com/storage/) (GCS for short) for long-term storage. With jclouds, it is easy to add support for more [cloud storage providers](https://jclouds.apache.org/reference/providers/#blobstore-providers) in the future.

* Tiered storage uses [Apache Hadoop](http://hadoop.apache.org/) to support filesystems for long-term storage. With Hadoop, it is easy to add support for more filesystems in the future.

## When should I use Tiered Storage?

Tiered storage should be used when you have a topic for which you want to keep a very long backlog for a long time. For example, if you have a topic containing user actions which you use to train your recommendation systems, you may want to keep that data for a long time, so that if you change your recommendation algorithm you can rerun it against your full user history.

## The offloading mechanism

A topic in Pulsar is backed by a log, known as a managed ledger. This log is composed of an ordered list of segments. Pulsar only ever writes to the final segment of the log. All previous segments are sealed. The data within a segment is immutable. This is known as a segment-oriented architecture.

![Tiered storage](/assets/pulsar-tiered-storage.png "Tiered Storage")

The Tiered Storage offloading mechanism takes advantage of this segment-oriented architecture. When offloading is requested, the segments of the log are copied, one-by-one, to tiered storage. All segments of the log, apart from the segment currently being written to, can be offloaded.

On the broker, the administrator must configure the bucket and credentials for the cloud storage service. The configured bucket must exist before attempting to offload. If it does not exist, the offload operation will fail.

Pulsar uses multi-part objects to upload the segment data. It is possible that a broker could crash while uploading the data. We recommend you add a lifecycle rule to your bucket to expire incomplete multi-part uploads after a day or two to avoid getting charged for incomplete uploads.

When ledgers are offloaded to long-term storage, you can still query data in the offloaded ledgers with Pulsar SQL.
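The offload request can also be issued programmatically through the Java admin client instead of the CLI commands shown later in this cookbook. The following is a minimal sketch; the admin URL, topic, and message-ID threshold are placeholders for the example, and the exact status fields may differ across Pulsar versions:

```java

import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.client.api.MessageId;

PulsarAdmin admin = PulsarAdmin.builder()
        .serviceHttpUrl("http://localhost:8080") // placeholder admin URL
        .build();

String topic = "persistent://my-tenant/my-namespace/topic1"; // placeholder topic

// Request offload of sealed segments up to the given message ID.
admin.topics().triggerOffload(topic, MessageId.latest);

// The operation is asynchronous; poll its status.
System.out.println(admin.topics().offloadStatus(topic).status);

admin.close();

```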
## Configuring the offload driver

Offloading is configured in ```broker.conf```.

At a minimum, the administrator must configure the driver, the bucket, and the authenticating credentials. There are also some other knobs to configure, like the bucket region, the maximum block size in the backing store, and so on.

Currently, the following driver types are supported:

- `aws-s3`: [Simple Cloud Storage Service](https://aws.amazon.com/s3/)
- `google-cloud-storage`: [Google Cloud Storage](https://cloud.google.com/storage/)
- `filesystem`: [Filesystem Storage](http://hadoop.apache.org/)

> Driver names are case-insensitive. There is also an `s3` driver type, which is identical to `aws-s3`,
> though it requires that you specify an endpoint URL using `s3ManagedLedgerOffloadServiceEndpoint`. This is useful
> when using an S3-compatible data store other than AWS.

```conf

managedLedgerOffloadDriver=aws-s3

```

### "aws-s3" Driver configuration

#### Bucket and Region

Buckets are the basic containers that hold your data. Everything that you store in Cloud Storage must be contained in a bucket. You can use buckets to organize your data and control access to your data, but unlike directories and folders, you cannot nest buckets.

```conf

s3ManagedLedgerOffloadBucket=pulsar-topic-offload

```

The bucket region is the region where the bucket is located. It is not a required configuration, but it is recommended; if it is not configured, the default region is used.

With AWS S3, the default region is `US East (N. Virginia)`. The page [AWS Regions and Endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html) contains more information.

```conf

s3ManagedLedgerOffloadRegion=eu-west-3

```

#### Authentication with AWS

To be able to access AWS S3, you need to authenticate with AWS S3. Pulsar does not provide any direct means of configuring authentication for AWS S3, but relies on the mechanisms supported by the [DefaultAWSCredentialsProviderChain](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html).

Once you have created a set of credentials in the AWS IAM console, they can be configured in a number of ways.

1. Using EC2 instance metadata credentials

If you are on an AWS instance with an instance profile that provides credentials, Pulsar will use these credentials if no other mechanism is provided.

2. Set the environment variables **AWS_ACCESS_KEY_ID** and **AWS_SECRET_ACCESS_KEY** in ```conf/pulsar_env.sh```.

```bash

export AWS_ACCESS_KEY_ID=ABC123456789
export AWS_SECRET_ACCESS_KEY=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c

```

> "export" is important so that the variables are made available in the environment of spawned processes.


3. Add the Java system properties *aws.accessKeyId* and *aws.secretKey* to **PULSAR_EXTRA_OPTS** in `conf/pulsar_env.sh`.

```bash

PULSAR_EXTRA_OPTS="${PULSAR_EXTRA_OPTS} ${PULSAR_MEM} ${PULSAR_GC} -Daws.accessKeyId=ABC123456789 -Daws.secretKey=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c -Dio.netty.leakDetectionLevel=disabled -Dio.netty.recycler.maxCapacity.default=1000 -Dio.netty.recycler.linkCapacity=1024"

```

4. Set the access credentials in ```~/.aws/credentials```.

```conf

[default]
aws_access_key_id=ABC123456789
aws_secret_access_key=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c

```

5. Assuming an IAM role

If you want to assume an IAM role, this can be done by specifying the following:

```conf

s3ManagedLedgerOffloadRole=<aws role arn>
s3ManagedLedgerOffloadRoleSessionName=pulsar-s3-offload

```

This will use the `DefaultAWSCredentialsProviderChain` for assuming this role.

> The broker must be rebooted for credentials specified in `pulsar_env.sh` to take effect.
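If you are unsure which of these mechanisms will win, one way to check is to resolve the chain yourself with the same AWS SDK class the offloader relies on. The following is a diagnostic sketch, assuming the v1 AWS Java SDK (`aws-java-sdk-core`) is on the classpath; it is not part of Pulsar itself:

```java

import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;

public class CheckAwsCredentials {
    public static void main(String[] args) {
        // Resolves credentials in the chain's documented order: environment
        // variables, Java system properties, ~/.aws/credentials, then
        // EC2 instance metadata.
        AWSCredentials creds =
                DefaultAWSCredentialsProviderChain.getInstance().getCredentials();
        System.out.println("Resolved access key ID: " + creds.getAWSAccessKeyId());
    }
}

```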
#### Configuring the size of block read/write

Pulsar also provides some knobs to configure the size of requests sent to AWS S3.

- ```s3ManagedLedgerOffloadMaxBlockSizeInBytes``` configures the maximum size of a "part" sent during a multipart upload. This cannot be smaller than 5MB. Default is 64MB.
- ```s3ManagedLedgerOffloadReadBufferSizeInBytes``` configures the block size for each individual read when reading back data from AWS S3. Default is 1MB.

In both cases, these should not be touched unless you know what you are doing.

### "google-cloud-storage" Driver configuration

Buckets are the basic containers that hold your data. Everything that you store in Cloud Storage must be contained in a bucket. You can use buckets to organize your data and control access to your data, but unlike directories and folders, you cannot nest buckets.

```conf

gcsManagedLedgerOffloadBucket=pulsar-topic-offload

```

The bucket region is the region where the bucket is located. It is not a required configuration, but it is recommended; if it is not configured, the default region is used.

With GCS, buckets are by default created in the `us` multi-regional location; the page [Bucket Locations](https://cloud.google.com/storage/docs/bucket-locations) contains more information.

```conf

gcsManagedLedgerOffloadRegion=europe-west3

```

#### Authentication with GCS

The administrator needs to configure `gcsManagedLedgerOffloadServiceAccountKeyFile` in `broker.conf` for the broker to be able to access the GCS service. `gcsManagedLedgerOffloadServiceAccountKeyFile` is a JSON file containing the GCS credentials of a service account. The [Service Accounts section of this page](https://support.google.com/googleapi/answer/6158849) contains more information on how to create this key file for authentication. More information about Google Cloud IAM is available [here](https://cloud.google.com/storage/docs/access-control/iam).

To generate service account credentials or view the public credentials that you've already generated, perform the following steps:

1. Open the [Service accounts page](https://console.developers.google.com/iam-admin/serviceaccounts).
2. Select a project or create a new one.
3. Click **Create service account**.
4. In the **Create service account** window, type a name for the service account, and select **Furnish a new private key**. If you want to [grant G Suite domain-wide authority](https://developers.google.com/identity/protocols/OAuth2ServiceAccount#delegatingauthority) to the service account, also select **Enable G Suite Domain-wide Delegation**.
5. Click **Create**.

> Note: Ensure that the service account you create has permission to operate on GCS; you need to assign the **Storage Admin** permission to your service account as described [here](https://cloud.google.com/storage/docs/access-control/iam).

```conf

gcsManagedLedgerOffloadServiceAccountKeyFile="/Users/hello/Downloads/project-804d5e6a6f33.json"

```

#### Configuring the size of block read/write

Pulsar also provides some knobs to configure the size of requests sent to GCS.

- ```gcsManagedLedgerOffloadMaxBlockSizeInBytes``` configures the maximum size of a "part" sent during a multipart upload. This cannot be smaller than 5MB. Default is 64MB.
- ```gcsManagedLedgerOffloadReadBufferSizeInBytes``` configures the block size for each individual read when reading back data from GCS. Default is 1MB.
In both cases, these should not be touched unless you know what you are doing.

### "filesystem" Driver configuration


#### Configure connection address

You can configure the connection address in the `broker.conf` file.

```conf

fileSystemURI="hdfs://127.0.0.1:9000"

```

#### Configure Hadoop profile path

The configuration file is stored in the Hadoop profile path. It contains various settings, such as the base path, authentication, and so on.

```conf

fileSystemProfilePath="../conf/filesystem_offload_core_site.xml"

```

The model for storing topic data uses `org.apache.hadoop.io.MapFile`. You can use all of the configurations in `org.apache.hadoop.io.MapFile` for Hadoop.

**Example**

```conf

    <property>
        <name>fs.defaultFS</name>
        <value></value>
    </property>

    <property>
        <name>hadoop.tmp.dir</name>
        <value>pulsar</value>
    </property>

    <property>
        <name>io.file.buffer.size</name>
        <value>4096</value>
    </property>

    <property>
        <name>io.seqfile.compress.blocksize</name>
        <value>1000000</value>
    </property>

    <property>
        <name>io.seqfile.compression.type</name>
        <value>BLOCK</value>
    </property>

    <property>
        <name>io.map.index.interval</name>
        <value>128</value>
    </property>

```

For more information about the configurations in `org.apache.hadoop.io.MapFile`, see [Filesystem Storage](http://hadoop.apache.org/).

## Configuring offload to run automatically

Namespace policies can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that the topic has stored on the Pulsar cluster. Once the topic reaches the threshold, an offload operation is triggered. Setting a negative value as the threshold disables automatic offloading. Setting the threshold to 0 causes the broker to offload data as soon as it possibly can.

```bash

$ bin/pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace

```

> Automatic offload runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, the offload will not begin until the current segment is full.

## Configuring read priority for offloaded messages

By default, once messages are offloaded to long-term storage, brokers read them from long-term storage, even though the messages may still exist in BookKeeper for a period that depends on the administrator's configuration. For messages that exist in both BookKeeper and long-term storage, if you prefer to read them from BookKeeper, you can use the following commands to change this configuration.

```bash

# default value for -orp is tiered-storage-first
$ bin/pulsar-admin namespaces set-offload-policies my-tenant/my-namespace -orp bookkeeper-first
$ bin/pulsar-admin topics set-offload-policies my-tenant/my-namespace/topic1 -orp bookkeeper-first

```

## Triggering offload manually

Offloading can be manually triggered through a REST endpoint on the Pulsar broker. We provide a CLI command which calls this REST endpoint for you.

When triggering offload, you must specify the maximum size, in bytes, of the backlog which will be retained locally on BookKeeper. The offload mechanism will offload segments from the start of the topic backlog until this condition is met.

```bash

$ bin/pulsar-admin topics offload --size-threshold 10M my-tenant/my-namespace/topic1
Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1

```

The command that triggers an offload will not wait until the offload operation has completed. To check the status of the offload, use `offload-status`.

```bash

$ bin/pulsar-admin topics offload-status my-tenant/my-namespace/topic1
Offload is currently running

```

To wait for offload to complete, add the `-w` flag.
```bash

$ bin/pulsar-admin topics offload-status -w my-tenant/my-namespace/topic1
Offload was a success

```

If there is an error offloading, the error will be propagated to the `offload-status` command.

```bash

$ bin/pulsar-admin topics offload-status persistent://public/default/topic1
Error in offload
null

Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=

```

diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/deploy-aws.md b/site2/website/versioned_docs/version-2.9.3-deprecated/deploy-aws.md
deleted file mode 100644
index 0d94fc13cdb2d6..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/deploy-aws.md
+++ /dev/null
@@ -1,271 +0,0 @@
---
id: deploy-aws
title: Deploying a Pulsar cluster on AWS using Terraform and Ansible
sidebar_label: "Amazon Web Services"
original_id: deploy-aws
---

> For instructions on deploying a single Pulsar cluster manually rather than using Terraform and Ansible, see [Deploying a Pulsar cluster on bare metal](deploy-bare-metal.md). For instructions on manually deploying a multi-cluster Pulsar instance, see [Deploying a Pulsar instance on bare metal](deploy-bare-metal-multi-cluster.md).

One of the easiest ways to get a Pulsar [cluster](reference-terminology.md#cluster) running on [Amazon Web Services](https://aws.amazon.com/) (AWS) is to use the [Terraform](https://terraform.io) infrastructure provisioning tool and the [Ansible](https://www.ansible.com) server automation tool. Terraform can create the resources necessary for running the Pulsar cluster---[EC2](https://aws.amazon.com/ec2/) instances, networking and security infrastructure, etc.---while Ansible can install and run Pulsar on the provisioned resources.

## Requirements and setup

In order to install a Pulsar cluster on AWS using Terraform and Ansible, you need to prepare the following:

* An [AWS account](https://aws.amazon.com/account/) and the [`aws`](https://aws.amazon.com/cli/) command-line tool
* Python and [pip](https://pip.pypa.io/en/stable/)
* The [`terraform-inventory`](https://github.com/adammck/terraform-inventory) tool, which enables Ansible to use Terraform artifacts

You also need to make sure that you are currently logged into your AWS account via the `aws` tool:

```bash

$ aws configure

```

## Installation

You can install Ansible on Linux or macOS using pip.

```bash

$ pip install ansible

```

You can install Terraform using the instructions [here](https://www.terraform.io/intro/getting-started/install.html).

You also need to have the Terraform and Ansible configuration for Pulsar locally on your machine.
You can find them in the [GitHub repository](https://github.com/apache/pulsar) of Pulsar, which you can fetch using Git commands: - -```bash - -$ git clone https://github.com/apache/pulsar -$ cd pulsar/deployment/terraform-ansible/aws - -``` - -## SSH setup - -> If you already have an SSH key and want to use it, you can skip the step of generating an SSH key and update `private_key_file` setting -> in `ansible.cfg` file and `public_key_path` setting in `terraform.tfvars` file. -> -> For example, if you already have a private SSH key in `~/.ssh/pulsar_aws` and a public key in `~/.ssh/pulsar_aws.pub`, -> follow the steps below: -> -> 1. update `ansible.cfg` with following values: -> - -> ```shell -> -> private_key_file=~/.ssh/pulsar_aws -> -> -> ``` - -> -> 2. update `terraform.tfvars` with following values: -> - -> ```shell -> -> public_key_path=~/.ssh/pulsar_aws.pub -> -> -> ``` - - -In order to create the necessary AWS resources using Terraform, you need to create an SSH key. Enter the following commands to create a private SSH key in `~/.ssh/id_rsa` and a public key in `~/.ssh/id_rsa.pub`: - -```bash - -$ ssh-keygen -t rsa - -``` - -Do *not* enter a passphrase (hit **Enter** instead when the prompt comes out). Enter the following command to verify that a key has been created: - -```bash - -$ ls ~/.ssh -id_rsa id_rsa.pub - -``` - -## Create AWS resources using Terraform - -To start building AWS resources with Terraform, you need to install all Terraform dependencies. Enter the following command: - -```bash - -$ terraform init -# This will create a .terraform folder - -``` - -After that, you can apply the default Terraform configuration by entering this command: - -```bash - -$ terraform apply - -``` - -Then you see this prompt below: - -```bash - -Do you want to perform these actions? - Terraform will perform the actions described above. - Only 'yes' will be accepted to approve. - - Enter a value: - -``` - -Type `yes` and hit **Enter**. Applying the configuration could take several minutes. When the configuration applying finishes, you can see `Apply complete!` along with some other information, including the number of resources created. - -### Apply a non-default configuration - -You can apply a non-default Terraform configuration by changing the values in the `terraform.tfvars` file. The following variables are available: - -Variable name | Description | Default -:-------------|:------------|:------- -`public_key_path` | The path of the public key that you have generated. | `~/.ssh/id_rsa.pub` -`region` | The AWS region in which the Pulsar cluster runs | `us-west-2` -`availability_zone` | The AWS availability zone in which the Pulsar cluster runs | `us-west-2a` -`aws_ami` | The [Amazon Machine Image](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html) (AMI) that the cluster uses | `ami-9fa343e7` -`num_zookeeper_nodes` | The number of [ZooKeeper](https://zookeeper.apache.org) nodes in the ZooKeeper cluster | 3 -`num_bookie_nodes` | The number of bookies that runs in the cluster | 3 -`num_broker_nodes` | The number of Pulsar brokers that runs in the cluster | 2 -`num_proxy_nodes` | The number of Pulsar proxies that runs in the cluster | 1 -`base_cidr_block` | The root [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) that network assets uses for the cluster | `10.0.0.0/16` -`instance_types` | The EC2 instance types to be used. 
This variable is a map keyed by node type: `zookeeper` for the ZooKeeper instances, `bookie` for the BookKeeper bookies, and `broker` and `proxy` for the Pulsar brokers and proxies | `t2.small` (ZooKeeper), `i3.xlarge` (BookKeeper) and `c5.2xlarge` (Brokers/Proxies)

### What is installed

When you run the Ansible playbook, the following AWS resources are used:

* 9 total [Elastic Compute Cloud](https://aws.amazon.com/ec2) (EC2) instances running the [ami-9fa343e7](https://access.redhat.com/articles/3135091) Amazon Machine Image (AMI), which runs [Red Hat Enterprise Linux (RHEL) 7.4](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/7.4_release_notes/index). By default, that includes:
  * 3 small VMs for ZooKeeper ([t2.small](https://www.ec2instances.info/?selected=t2.small) instances)
  * 3 larger VMs for BookKeeper [bookies](reference-terminology.md#bookie) ([i3.xlarge](https://www.ec2instances.info/?selected=i3.xlarge) instances)
  * 2 larger VMs for Pulsar [brokers](reference-terminology.md#broker) ([c5.2xlarge](https://www.ec2instances.info/?selected=c5.2xlarge) instances)
  * 1 larger VM for the Pulsar [proxy](reference-terminology.md#proxy) ([c5.2xlarge](https://www.ec2instances.info/?selected=c5.2xlarge) instance)
* An EC2 [security group](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html)
* A [virtual private cloud](https://aws.amazon.com/vpc/) (VPC) for security
* An [API Gateway](https://aws.amazon.com/api-gateway/) for connections from the outside world
* A [route table](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Route_Tables.html) for the Pulsar cluster's VPC
* A [subnet](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html) for the VPC

All EC2 instances for the cluster run in the [us-west-2](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html) region.

### Fetch your Pulsar connection URL

When you apply the Terraform configuration by entering the command `terraform apply`, Terraform outputs a value for the `pulsar_service_url`. The value should look something like this:

```

pulsar://pulsar-elb-1800761694.us-west-2.elb.amazonaws.com:6650

```

You can fetch that value at any time by entering the command `terraform output pulsar_service_url` or parsing the `terraform.tfstate` file (which is JSON, even though the filename does not reflect that):

```bash

$ cat terraform.tfstate | jq .modules[0].outputs.pulsar_service_url.value

```

### Destroy your cluster

At any point, you can destroy all AWS resources associated with your cluster using Terraform's `destroy` command:

```bash

$ terraform destroy

```

## Setup Disks

Before you run the Pulsar playbook, you need to mount the disks to the correct directories on those bookie nodes. Since different types of machines have different disk layouts, you need to update the task defined in the `setup-disk.yaml` file after changing the `instance_types` in your Terraform config.

To set up disks on bookie nodes, enter this command:

```bash

$ ansible-playbook \
  --user='ec2-user' \
  --inventory=`which terraform-inventory` \
  setup-disk.yaml

```

After that, the disks are mounted under `/mnt/journal` as the journal disk and `/mnt/storage` as the ledger disk.
Remember to enter this command only once. If you attempt to enter this command again after you have run the Pulsar playbook, your disks might be erased again, causing the bookies to fail to start up.
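As a quick sanity check (only a sketch; replace the host with one of your bookie instances, and note that the exact output varies by instance type), you can confirm that both mount points exist before running the Pulsar playbook:

```bash

$ ssh ec2-user@<bookie-host> df -h /mnt/journal /mnt/storage

```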
- -## Run the Pulsar playbook - -Once you have created the necessary AWS resources using Terraform, you can install and run Pulsar on the Terraform-created EC2 instances using Ansible. - -(Optional) If you want to use any [built-in IO connectors](io-connectors.md) , edit the `Download Pulsar IO packages` task in the `deploy-pulsar.yaml` file and uncomment the connectors you want to use. - -To run the playbook, enter this command: - -```bash - -$ ansible-playbook \ - --user='ec2-user' \ - --inventory=`which terraform-inventory` \ - ../deploy-pulsar.yaml - -``` - -If you have created a private SSH key at a location different from `~/.ssh/id_rsa`, you can specify the different location using the `--private-key` flag in the following command: - -```bash - -$ ansible-playbook \ - --user='ec2-user' \ - --inventory=`which terraform-inventory` \ - --private-key="~/.ssh/some-non-default-key" \ - ../deploy-pulsar.yaml - -``` - -## Access the cluster - -You can now access your running Pulsar using the unique Pulsar connection URL for your cluster, which you can obtain following the instructions [above](#fetching-your-pulsar-connection-url). - -For a quick demonstration of accessing the cluster, we can use the Python client for Pulsar and the Python shell. First, install the Pulsar Python module using pip: - -```bash - -$ pip install pulsar-client - -``` - -Now, open up the Python shell using the `python` command: - -```bash - -$ python - -``` - -Once you are in the shell, enter the following command: - -```python - ->>> import pulsar ->>> client = pulsar.Client('pulsar://pulsar-elb-1800761694.us-west-2.elb.amazonaws.com:6650') -# Make sure to use your connection URL ->>> producer = client.create_producer('persistent://public/default/test-topic') ->>> producer.send('Hello world') ->>> client.close() - -``` - -If all of these commands are successful, Pulsar clients can now use your cluster! diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/deploy-bare-metal-multi-cluster.md b/site2/website/versioned_docs/version-2.9.3-deprecated/deploy-bare-metal-multi-cluster.md deleted file mode 100644 index 49fd3938c64d42..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/deploy-bare-metal-multi-cluster.md +++ /dev/null @@ -1,452 +0,0 @@ ---- -id: deploy-bare-metal-multi-cluster -title: Deploying a multi-cluster on bare metal -sidebar_label: "Bare metal multi-cluster" -original_id: deploy-bare-metal-multi-cluster ---- - -:::tip - -1. You can use single-cluster Pulsar installation in most use cases, such as experimenting with Pulsar or using Pulsar in a startup or in a single team. If you need to run a multi-cluster Pulsar instance, see the [guide](deploy-bare-metal-multi-cluster.md). -2. If you want to use all built-in [Pulsar IO](io-overview.md) connectors, you need to download `apache-pulsar-io-connectors`package and install `apache-pulsar-io-connectors` under `connectors` directory in the pulsar directory on every broker node or on every function-worker node if you have run a separate cluster of function workers for [Pulsar Functions](functions-overview.md). -3. If you want to use [Tiered Storage](concepts-tiered-storage.md) feature in your Pulsar deployment, you need to download `apache-pulsar-offloaders`package and install `apache-pulsar-offloaders` under `offloaders` directory in the Pulsar directory on every broker node. For more details of how to configure this feature, you can refer to the [Tiered storage cookbook](cookbooks-tiered-storage.md). 
- -::: - -A Pulsar instance consists of multiple Pulsar clusters working in unison. You can distribute clusters across data centers or geographical regions and replicate the clusters amongst themselves using [geo-replication](administration-geo.md).Deploying a multi-cluster Pulsar instance consists of the following steps: - -1. Deploying two separate ZooKeeper quorums: a local quorum for each cluster in the instance and a configuration store quorum for instance-wide tasks -2. Initializing cluster metadata for each cluster -3. Deploying a BookKeeper cluster of bookies in each Pulsar cluster -4. Deploying brokers in each Pulsar cluster - - -> #### Run Pulsar locally or on Kubernetes? -> This guide shows you how to deploy Pulsar in production in a non-Kubernetes environment. If you want to run a standalone Pulsar cluster on a single machine for development purposes, see the [Setting up a local cluster](getting-started-standalone.md) guide. If you want to run Pulsar on [Kubernetes](https://kubernetes.io), see the [Pulsar on Kubernetes](deploy-kubernetes.md) guide, which includes sections on running Pulsar on Kubernetes, on Google Kubernetes Engine and on Amazon Web Services. - -## System requirement - -Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. You need to install 64-bit JRE/JDK 8 or later versions. - -:::note - -Broker is only supported on 64-bit JVM. - -::: - -## Install Pulsar - -To get started running Pulsar, download a binary tarball release in one of the following ways: - -* by clicking the link below and downloading the release from an Apache mirror: - - * Pulsar @pulsar:version@ binary release - -* from the Pulsar [downloads page](pulsar:download_page_url) -* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) -* using [wget](https://www.gnu.org/software/wget): - - ```shell - - $ wget 'https://www.apache.org/dyn/mirrors/mirrors.cgi?action=download&filename=pulsar/pulsar-@pulsar:version@/apache-pulsar-@pulsar:version@-bin.tar.gz' -O apache-pulsar-@pulsar:version@-bin.tar.gz - - ``` - -Once you download the tarball, untar it and `cd` into the resulting directory: - -```bash - -$ tar xvfz apache-pulsar-@pulsar:version@-bin.tar.gz -$ cd apache-pulsar-@pulsar:version@ - -``` - -The Pulsar binary package initially contains the following directories: - -Directory | Contains -:---------|:-------- -`bin` | [Command-line tools](reference-cli-tools.md) of Pulsar, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/) -`conf` | Configuration files for Pulsar, including for [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more -`examples` | A Java JAR file containing example [Pulsar Functions](functions-overview.md) -`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files that Pulsar uses -`licenses` | License files, in `.txt` form, for various components of the Pulsar codebase - -The following directories are created once you begin running Pulsar: - -Directory | Contains -:---------|:-------- -`data` | The data storage directory that ZooKeeper and BookKeeper use -`instances` | Artifacts created for [Pulsar Functions](functions-overview.md) -`logs` | Logs that the installation creates - - -## Deploy ZooKeeper - -Each Pulsar instance relies on two separate ZooKeeper quorums. 
-

* Local ZooKeeper operates at the cluster level and provides cluster-specific configuration management and coordination. Each Pulsar cluster needs a dedicated ZooKeeper cluster.
* Configuration Store operates at the instance level and provides configuration management for the entire system (and thus across clusters). An independent cluster of machines, or the same machines that local ZooKeeper uses, can provide the configuration store quorum.


### Deploy local ZooKeeper

ZooKeeper manages a variety of essential coordination-related and configuration-related tasks for Pulsar.

You need to stand up one local ZooKeeper cluster per Pulsar cluster for deploying a Pulsar instance.

To begin, add all ZooKeeper servers to the quorum configuration specified in the [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file. Add a `server.N` line for each node in the cluster to the configuration, where `N` is the number of the ZooKeeper node. The following is an example for a three-node cluster:

```properties

server.1=zk1.us-west.example.com:2888:3888
server.2=zk2.us-west.example.com:2888:3888
server.3=zk3.us-west.example.com:2888:3888

```

On each host, you need to specify the ID of the node in the `myid` file of each node, which is in the `data/zookeeper` folder of each server by default (you can change the file location via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter).

:::tip

See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed information on `myid` and more.

:::

On a ZooKeeper server at `zk1.us-west.example.com`, for example, you could set the `myid` value like this:

```shell

$ mkdir -p data/zookeeper
$ echo 1 > data/zookeeper/myid

```

On `zk2.us-west.example.com` the command looks like `echo 2 > data/zookeeper/myid` and so on.

Once you add each server to the `zookeeper.conf` configuration and each server has the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:

```shell

$ bin/pulsar-daemon start zookeeper

```

### Deploy the configuration store

The ZooKeeper cluster configured and started up in the section above is a local ZooKeeper cluster that you can use to manage a single Pulsar cluster. In addition to a local cluster, however, a full Pulsar instance also requires a configuration store for handling some instance-level configuration and coordination tasks.

If you deploy a single-cluster instance, you do not need a separate cluster for the configuration store. If, however, you deploy a multi-cluster instance, you should stand up a separate ZooKeeper cluster for configuration tasks.

#### Single-cluster Pulsar instance

If your Pulsar instance consists of just one cluster, then you can deploy a configuration store on the same machines as the local ZooKeeper quorum but running on different TCP ports.

To deploy a ZooKeeper configuration store in a single-cluster instance, add the same ZooKeeper servers that the local quorum uses.
You need to use the configuration file in [`conf/global_zookeeper.conf`](reference-configuration.md#configuration-store), following the same method as for [local ZooKeeper](#deploy-local-zookeeper), but make sure to use a different port (2181 is the default for ZooKeeper). The following is an example that uses port 2184 for a three-node ZooKeeper cluster:

```properties

clientPort=2184
server.1=zk1.us-west.example.com:2185:2186
server.2=zk2.us-west.example.com:2185:2186
server.3=zk3.us-west.example.com:2185:2186

```

As before, create the `myid` files for each server in `data/global-zookeeper/myid`.

#### Multi-cluster Pulsar instance

When you deploy a global Pulsar instance, with clusters distributed across different geographical regions, the configuration store serves as a highly available and strongly consistent metadata store that can tolerate failures and partitions spanning whole regions.

The key here is to make sure the ZK quorum members are spread across at least 3 regions, and that other regions run as observers.

Again, given the very low expected load on the configuration store servers, you can share the same hosts used for the local ZooKeeper quorum.

For example, assume a Pulsar instance with the following clusters: `us-west`, `us-east`, `us-central`, `eu-central`, and `ap-south`. Also assume that each cluster has its own local ZK servers named as follows:

```

zk[1-3].${CLUSTER}.example.com

```

In this scenario, you want to pick the quorum participants from a few clusters and let all the others be ZK observers. For example, to form a 7-server quorum, you can pick 3 servers from `us-west`, 2 from `us-central` and 2 from `us-east`.

This method guarantees that writes to the configuration store are possible even if one of these regions is unreachable.

The ZK configuration in all the servers looks like:

```properties

clientPort=2184
server.1=zk1.us-west.example.com:2185:2186
server.2=zk2.us-west.example.com:2185:2186
server.3=zk3.us-west.example.com:2185:2186
server.4=zk1.us-central.example.com:2185:2186
server.5=zk2.us-central.example.com:2185:2186
server.6=zk3.us-central.example.com:2185:2186:observer
server.7=zk1.us-east.example.com:2185:2186
server.8=zk2.us-east.example.com:2185:2186
server.9=zk3.us-east.example.com:2185:2186:observer
server.10=zk1.eu-central.example.com:2185:2186:observer
server.11=zk2.eu-central.example.com:2185:2186:observer
server.12=zk3.eu-central.example.com:2185:2186:observer
server.13=zk1.ap-south.example.com:2185:2186:observer
server.14=zk2.ap-south.example.com:2185:2186:observer
server.15=zk3.ap-south.example.com:2185:2186:observer

```

Additionally, ZK observers need to have the following parameter:

```properties

peerType=observer

```

##### Start the service

Once your configuration store configuration is in place, you can start up the service using [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon):

```shell

$ bin/pulsar-daemon start configuration-store

```

## Cluster metadata initialization

Once you set up the cluster-specific ZooKeeper and configuration store quorums for your instance, you need to write some metadata to ZooKeeper for each cluster in your instance. **You only need to write this metadata once.**

You can initialize this metadata using the [`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata) command of the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool.
The following is an example:

```shell

$ bin/pulsar initialize-cluster-metadata \
  --cluster us-west \
  --zookeeper zk1.us-west.example.com:2181 \
  --configuration-store zk1.us-west.example.com:2184 \
  --web-service-url http://pulsar.us-west.example.com:8080/ \
  --web-service-url-tls https://pulsar.us-west.example.com:8443/ \
  --broker-service-url pulsar://pulsar.us-west.example.com:6650/ \
  --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651/

```

As you can see from the example above, you need to specify the following:

* The name of the cluster
* The local ZooKeeper connection string for the cluster
* The configuration store connection string for the entire instance
* The web service URL for the cluster
* A broker service URL enabling interaction with the [brokers](reference-terminology.md#broker) in the cluster

If you use [TLS](security-tls-transport.md), you also need to specify a TLS web service URL for the cluster as well as a TLS broker service URL for the brokers in the cluster.

Make sure to run `initialize-cluster-metadata` for each cluster in your instance.

## Deploy BookKeeper

BookKeeper provides [persistent message storage](concepts-architecture-overview.md#persistent-storage) for Pulsar.

Each Pulsar broker needs its own cluster of bookies. The BookKeeper cluster shares a local ZooKeeper quorum with the Pulsar cluster.

### Configure bookies

You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. The most important aspect of configuring each bookie is ensuring that the [`zkServers`](reference-configuration.md#bookkeeper-zkServers) parameter is set to the connection string for the local ZooKeeper of the Pulsar cluster.

### Start bookies

You can start a bookie in two ways: in the foreground or as a background daemon.

To start a bookie in the background, use the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:

```bash

$ bin/pulsar-daemon start bookie

```

You can verify that the bookie works properly using the `bookiesanity` command for the [BookKeeper shell](reference-cli-tools.md#bookkeeper-shell):

```bash

$ bin/bookkeeper shell bookiesanity

```

This command creates a new ledger on the local bookie, writes a few entries, reads them back, and finally deletes the ledger.

After you have started all bookies, you can use the `simpletest` command for the [BookKeeper shell](reference-cli-tools.md#shell) on any bookie node to verify that all bookies in the cluster are running:

```bash

$ bin/bookkeeper shell simpletest --ensemble <num-bookies> --writeQuorum <num-bookies> --ackQuorum <num-bookies> --numEntries <num-entries>

```

Bookie hosts are responsible for storing message data on disk. In order for bookies to provide optimal performance, having a suitable hardware configuration is essential for the bookies. The following are key dimensions of bookie hardware capacity:

* Disk I/O capacity read/write
* Storage capacity

Message entries written to bookies are always synced to disk before returning an acknowledgement to the Pulsar broker. To ensure low write latency, BookKeeper is designed to use multiple devices:

* A **journal** to ensure durability. For sequential writes, having fast [fsync](https://linux.die.net/man/2/fsync) operations on bookie hosts is critical.
Typically, small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) should suffice, or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache. Both solutions can reach fsync latency of ~0.4 ms. -* A **ledger storage device** is where data is stored until all consumers acknowledge the message. Writes happen in the background, so write I/O is not a big concern. Reads happen sequentially most of the time and the backlog is drained only in case of consumer drain. To store large amounts of data, a typical configuration involves multiple HDDs with a RAID controller. - - - -## Deploy brokers - -Once you set up ZooKeeper, initialize cluster metadata, and spin up BookKeeper bookies, you can deploy brokers. - -### Broker configuration - -You can configure brokers using the [`conf/broker.conf`](reference-configuration.md#broker) configuration file. - -The most important element of broker configuration is ensuring that each broker is aware of its local ZooKeeper quorum as well as the configuration store quorum. Make sure that you set the [`zookeeperServers`](reference-configuration.md#broker-zookeeperServers) parameter to reflect the local quorum and the [`configurationStoreServers`](reference-configuration.md#broker-configurationStoreServers) parameter to reflect the configuration store quorum (although you need to specify only those ZooKeeper servers located in the same cluster). - -You also need to specify the name of the [cluster](reference-terminology.md#cluster) to which the broker belongs using the [`clusterName`](reference-configuration.md#broker-clusterName) parameter. In addition, you need to match the broker and web service ports provided when you initialize the metadata (especially when you use a different port from default) of the cluster. - -The following is an example configuration: - -```properties - -# Local ZooKeeper servers -zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181 - -# Configuration store quorum connection string. -configurationStoreServers=zk1.us-west.example.com:2184,zk2.us-west.example.com:2184,zk3.us-west.example.com:2184 - -clusterName=us-west - -# Broker data port -brokerServicePort=6650 - -# Broker data port for TLS -brokerServicePortTls=6651 - -# Port to use to server HTTP request -webServicePort=8080 - -# Port to use to server HTTPS request -webServicePortTls=8443 - -``` - -### Broker hardware - -Pulsar brokers do not require any special hardware since they do not use the local disk. You had better choose fast CPUs and 10Gbps [NIC](https://en.wikipedia.org/wiki/Network_interface_controller) so that the software can take full advantage of that. - -### Start the broker service - -You can start a broker in the background by using [nohup](https://en.wikipedia.org/wiki/Nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool: - -```shell - -$ bin/pulsar-daemon start broker - -``` - -You can also start brokers in the foreground by using [`pulsar broker`](reference-cli-tools.md#broker): - -```shell - -$ bin/pulsar broker - -``` - -## Service discovery - -[Clients](getting-started-clients.md) connecting to Pulsar brokers need to communicate with an entire Pulsar instance using a single URL. - -You can use your own service discovery system. 
If you use your own system, you only need to satisfy just one requirement: when a client performs an HTTP request to an [endpoint](reference-configuration.md) for a Pulsar cluster, such as `http://pulsar.us-west.example.com:8080`, the client needs to be redirected to some active brokers in the desired cluster, whether via DNS, an HTTP or IP redirect, or some other means. - -> **Service discovery already provided by many scheduling systems** -> Many large-scale deployment systems, such as [Kubernetes](deploy-kubernetes.md), have service discovery systems built in. If you run Pulsar on such a system, you may not need to provide your own service discovery mechanism. - -## Admin client and verification - -At this point your Pulsar instance should be ready to use. You can now configure client machines that can serve as [administrative clients](admin-api-overview.md) for each cluster. You can use the [`conf/client.conf`](reference-configuration.md#client) configuration file to configure admin clients. - -The most important thing is that you point the [`serviceUrl`](reference-configuration.md#client-serviceUrl) parameter to the correct service URL for the cluster: - -```properties - -serviceUrl=http://pulsar.us-west.example.com:8080/ - -``` - -## Provision new tenants - -Pulsar is built as a fundamentally multi-tenant system. - - -If a new tenant wants to use the system, you need to create a new one. You can create a new tenant by using the [`pulsar-admin`](reference-pulsar-admin.md#tenants) CLI tool: - -```shell - -$ bin/pulsar-admin tenants create test-tenant \ - --allowed-clusters us-west \ - --admin-roles test-admin-role - -``` - -In this command, users who identify with `test-admin-role` role can administer the configuration for the `test-tenant` tenant. The `test-tenant` tenant can only use the `us-west` cluster. From now on, this tenant can manage its resources. - -Once you create a tenant, you need to create [namespaces](reference-terminology.md#namespace) for topics within that tenant. - - -The first step is to create a namespace. A namespace is an administrative unit that can contain many topics. A common practice is to create a namespace for each different use case from a single tenant. - -```shell - -$ bin/pulsar-admin namespaces create test-tenant/ns1 - -``` - -##### Test producer and consumer - - -Everything is now ready to send and receive messages. The quickest way to test the system is through the [`pulsar-perf`](reference-cli-tools.md#pulsar-perf) client tool. - - -You can use a topic in the namespace that you have just created. Topics are automatically created the first time when a producer or a consumer tries to use them. 
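To double-check that the tenant and namespace were created as expected, you can list them with `pulsar-admin`. This is just a quick sanity check against the resources created above:

```shell

$ bin/pulsar-admin tenants list
$ bin/pulsar-admin namespaces list test-tenant

```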
- -The topic name in this case could be: - -```http - -persistent://test-tenant/ns1/my-topic - -``` - -Start a consumer that creates a subscription on the topic and waits for messages: - -```shell - -$ bin/pulsar-perf consume persistent://test-tenant/ns1/my-topic - -``` - -Start a producer that publishes messages at a fixed rate and reports stats every 10 seconds: - -```shell - -$ bin/pulsar-perf produce persistent://test-tenant/ns1/my-topic - -``` - -To report the topic stats: - -```shell - -$ bin/pulsar-admin topics stats persistent://test-tenant/ns1/my-topic - -``` - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/deploy-bare-metal.md b/site2/website/versioned_docs/version-2.9.3-deprecated/deploy-bare-metal.md deleted file mode 100644 index 292157e3ddc89d..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/deploy-bare-metal.md +++ /dev/null @@ -1,559 +0,0 @@ ---- -id: deploy-bare-metal -title: Deploy a cluster on bare metal -sidebar_label: "Bare metal" -original_id: deploy-bare-metal ---- - -:::tip - -1. You can use single-cluster Pulsar installation in most use cases, such as experimenting with Pulsar or using Pulsar in a startup or in a single team. If you need to run a multi-cluster Pulsar instance, see the [guide](deploy-bare-metal-multi-cluster.md). -2. If you want to use all built-in [Pulsar IO](io-overview.md) connectors, you need to download `apache-pulsar-io-connectors`package and install `apache-pulsar-io-connectors` under `connectors` directory in the pulsar directory on every broker node or on every function-worker node if you have run a separate cluster of function workers for [Pulsar Functions](functions-overview.md). -3. If you want to use [Tiered Storage](concepts-tiered-storage.md) feature in your Pulsar deployment, you need to download `apache-pulsar-offloaders`package and install `apache-pulsar-offloaders` under `offloaders` directory in the Pulsar directory on every broker node. For more details of how to configure this feature, you can refer to the [Tiered storage cookbook](cookbooks-tiered-storage.md). - -::: - -Deploying a Pulsar cluster consists of the following steps: - -1. Deploy a [ZooKeeper](#deploy-a-zookeeper-cluster) cluster (optional) -2. Initialize [cluster metadata](#initialize-cluster-metadata) -3. Deploy a [BookKeeper](#deploy-a-bookkeeper-cluster) cluster -4. Deploy one or more Pulsar [brokers](#deploy-pulsar-brokers) - -## Preparation - -### Requirements - -Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install 64-bit JRE/JDK 8 or later versions. - -:::tip - -You can reuse existing Zookeeper clusters. - -::: - -To run Pulsar on bare metal, the following configuration is recommended: - -* At least 6 Linux machines or VMs - * 3 for running [ZooKeeper](https://zookeeper.apache.org) - * 3 for running a Pulsar broker, and a [BookKeeper](https://bookkeeper.apache.org) bookie -* A single [DNS](https://en.wikipedia.org/wiki/Domain_Name_System) name covering all of the Pulsar broker hosts - -:::note - -* Broker is only supported on 64-bit JVM. -* If you do not have enough machines, or you want to test Pulsar in cluster mode (and expand the cluster later), You can fully deploy Pulsar on a node on which ZooKeeper, bookie and broker run. -* If you do not have a DNS server, you can use the multi-host format in the service URL instead. 
- -::: - -Each machine in your cluster needs to have [Java 8](https://adoptium.net/?variant=openjdk8) or [Java 11](https://adoptium.net/?variant=openjdk11) installed. - -The following is a diagram showing the basic setup: - -![alt-text](/assets/pulsar-basic-setup.png) - -In this diagram, connecting clients need to communicate with the Pulsar cluster using a single URL. In this case, `pulsar-cluster.acme.com` abstracts over all of the message-handling brokers. Pulsar message brokers run on machines alongside BookKeeper bookies; brokers and bookies, in turn, rely on ZooKeeper. - -### Hardware considerations - -If you deploy a Pulsar cluster, keep in mind the following basic better choices when you do the capacity planning. - -#### ZooKeeper - -For machines running ZooKeeper, it is recommended to use less powerful machines or VMs. Pulsar uses ZooKeeper only for periodic coordination-related and configuration-related tasks, not for basic operations. If you run Pulsar on [Amazon Web Services](https://aws.amazon.com/) (AWS), for example, a [t2.small](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html) instance might likely suffice. - -#### Bookies and Brokers - -For machines running a bookie and a Pulsar broker, more powerful machines are required. For an AWS deployment, for example, [i3.4xlarge](https://aws.amazon.com/blogs/aws/now-available-i3-instances-for-demanding-io-intensive-applications/) instances may be appropriate. On those machines you can use the following: - -* Fast CPUs and 10Gbps [NIC](https://en.wikipedia.org/wiki/Network_interface_controller) (for Pulsar brokers) -* Small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache (for BookKeeper bookies) - -To start a Pulsar instance, below are the minimum and the recommended hardware settings. - -1. The minimum hardware settings (250 Pulsar topics) - - Broker - - CPU: 0.2 - - Memory: 256MB - - Bookie - - CPU: 0.2 - - Memory: 256MB - - Storage: - - Journal: 8GB, PD-SSD - - Ledger: 16GB, PD-STANDARD - -2. The recommended hardware settings (1000 Pulsar topics) - - - Broker - - CPU: 8 - - Memory: 8GB - - Bookie - - CPU: 4 - - Memory: 8GB - - Storage: - - Journal: 256GB, PD-SSD - - Ledger: 2TB, PD-STANDARD - -## Install the Pulsar binary package - -> You need to install the Pulsar binary package on each machine in the cluster, including machines running ZooKeeper and BookKeeper. 
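If you manage more than a handful of machines, you can script the distribution of the package once you have downloaded it (the download options are listed below). The loop below is only a sketch; the hostnames are placeholders for your own inventory:

```bash

$ for host in zk1.example.com bookie1.example.com broker1.example.com; do
    scp apache-pulsar-@pulsar:version@-bin.tar.gz ${host}:
    ssh ${host} "tar xvzf apache-pulsar-@pulsar:version@-bin.tar.gz"
  done

```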
- -To get started deploying a Pulsar cluster on bare metal, you need to download a binary tarball release in one of the following ways: - -* By clicking on the link below directly, which automatically triggers a download: - * Pulsar @pulsar:version@ binary release -* From the Pulsar [downloads page](pulsar:download_page_url) -* From the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) on GitHub -* Using [wget](https://www.gnu.org/software/wget): - -```bash - -$ wget pulsar:binary_release_url - -``` - -Once you download the tarball, untar it and `cd` into the resulting directory: - -```bash - -$ tar xvzf apache-pulsar-@pulsar:version@-bin.tar.gz -$ cd apache-pulsar-@pulsar:version@ - -``` - -The extracted directory contains the following subdirectories: - -Directory | Contains -:---------|:-------- -`bin` |[command-line tools](reference-cli-tools.md) of Pulsar, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/) -`conf` | Configuration files for Pulsar, including for [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more -`data` | The data storage directory that ZooKeeper and BookKeeper use -`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files that Pulsar uses -`logs` | Logs that the installation creates - -## [Install Builtin Connectors (optional)]( https://pulsar.apache.org/docs/en/next/standalone/#install-builtin-connectors-optional) - -> Since Pulsar release `2.1.0-incubating`, Pulsar provides a separate binary distribution, containing all the `builtin` connectors. -> To enable the `builtin` connectors (optional), you can follow the instructions below. - -To use `builtin` connectors, you need to download the connectors tarball release on every broker node in one of the following ways : - -* by clicking the link below and downloading the release from an Apache mirror: - - * Pulsar IO Connectors @pulsar:version@ release - -* from the Pulsar [downloads page](pulsar:download_page_url) -* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) -* using [wget](https://www.gnu.org/software/wget): - - ```shell - - $ wget pulsar:connector_release_url/{connector}-@pulsar:version@.nar - - ``` - -Once you download the .nar file, copy the file to directory `connectors` in the pulsar directory. -For example, if you download the connector file `pulsar-io-aerospike-@pulsar:version@.nar`: - -```bash - -$ mkdir connectors -$ mv pulsar-io-aerospike-@pulsar:version@.nar connectors - -$ ls connectors -pulsar-io-aerospike-@pulsar:version@.nar -... - -``` - -## [Install Tiered Storage Offloaders (optional)](https://pulsar.apache.org/docs/en/next/standalone/#install-tiered-storage-offloaders-optional) - -> Since Pulsar release `2.2.0`, Pulsar releases a separate binary distribution, containing the tiered storage offloaders. -> If you want to enable tiered storage feature, you can follow the instructions as below; otherwise you can -> skip this section for now. 
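Once the offloaders are installed (the steps follow), the feature itself is enabled per broker in `broker.conf`. The snippet below is only a rough sketch, assuming the S3 driver and the default offloaders directory; check the reference configuration of your Pulsar version for the exact property names:

```properties

# Driver to use to offload old data to long term storage
managedLedgerOffloadDriver=aws-s3

# Directory to load offloader nar files from
offloadersDirectory=./offloaders

```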
- -To use tiered storage offloaders, you need to download the offloaders tarball release on every broker node in one of the following ways: - -* by clicking the link below and downloading the release from an Apache mirror: - - * Pulsar Tiered Storage Offloaders @pulsar:version@ release - -* from the Pulsar [downloads page](pulsar:download_page_url) -* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) -* using [wget](https://www.gnu.org/software/wget): - - ```shell - - $ wget pulsar:offloader_release_url - - ``` - -Once you download the tarball, in the Pulsar directory, untar the offloaders package and copy the offloaders as `offloaders` in the Pulsar directory: - -```bash - -$ tar xvfz apache-pulsar-offloaders-@pulsar:version@-bin.tar.gz - -// you can find a directory named `apache-pulsar-offloaders-@pulsar:version@` in the pulsar directory -// then copy the offloaders - -$ mv apache-pulsar-offloaders-@pulsar:version@/offloaders offloaders - -$ ls offloaders -tiered-storage-jcloud-@pulsar:version@.nar - -``` - -For more details of how to configure tiered storage feature, you can refer to the [Tiered storage cookbook](cookbooks-tiered-storage.md) - - -## Deploy a ZooKeeper cluster - -> If you already have an existing zookeeper cluster and want to use it, you can skip this section. - -[ZooKeeper](https://zookeeper.apache.org) manages a variety of essential coordination-related and configuration-related tasks for Pulsar. To deploy a Pulsar cluster, you need to deploy ZooKeeper first. A 3-node ZooKeeper cluster is the recommended configuration. Pulsar does not make heavy use of ZooKeeper, so the lightweight machines or VMs should suffice for running ZooKeeper. - -To begin, add all ZooKeeper servers to the configuration specified in [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) (in the Pulsar directory that you create [above](#install-the-pulsar-binary-package)). The following is an example: - -```properties - -server.1=zk1.us-west.example.com:2888:3888 -server.2=zk2.us-west.example.com:2888:3888 -server.3=zk3.us-west.example.com:2888:3888 - -``` - -> If you only have one machine on which to deploy Pulsar, you only need to add one server entry in the configuration file. - -On each host, you need to specify the ID of the node in the `myid` file, which is in the `data/zookeeper` folder of each server by default (you can change the file location via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter). - -> See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed information on `myid` and more. - -For example, on a ZooKeeper server like `zk1.us-west.example.com`, you can set the `myid` value as follows: - -```bash - -$ mkdir -p data/zookeeper -$ echo 1 > data/zookeeper/myid - -``` - -On `zk2.us-west.example.com`, the command is `echo 2 > data/zookeeper/myid` and so on. - -Once you add each server to the `zookeeper.conf` configuration and have the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool: - -```bash - -$ bin/pulsar-daemon start zookeeper - -``` - -> If you plan to deploy Zookeeper with the Bookie on the same node, you need to start zookeeper by using different stats -> port by configuring the `metricsProvider.httpPort` in zookeeper.conf. 
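Before initializing the cluster metadata, you can confirm that the quorum has formed by sending ZooKeeper's four-letter `srvr` command to each server. This is only a sanity-check sketch; on ZooKeeper 3.5 and later, the command may first need to be whitelisted via `4lw.commands.whitelist` in the ZooKeeper configuration:

```bash

$ echo srvr | nc zk1.us-west.example.com 2181
# One server should report "Mode: leader" and the others "Mode: follower"

```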
- -## Initialize cluster metadata - -Once you deploy ZooKeeper for your cluster, you need to write some metadata to ZooKeeper. You only need to write this data **once**. - -You can initialize this metadata using the [`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata) command of the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool. This command can be run on any machine in your Pulsar cluster, so the metadata can be initialized from a ZooKeeper, broker, or bookie machine. The following is an example: - -```shell - -$ bin/pulsar initialize-cluster-metadata \ - --cluster pulsar-cluster-1 \ - --zookeeper zk1.us-west.example.com:2181 \ - --configuration-store zk1.us-west.example.com:2181 \ - --web-service-url http://pulsar.us-west.example.com:8080 \ - --web-service-url-tls https://pulsar.us-west.example.com:8443 \ - --broker-service-url pulsar://pulsar.us-west.example.com:6650 \ - --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651 - -``` - -As you can see from the example above, you will need to specify the following: - -Flag | Description -:----|:----------- -`--cluster` | A name for the cluster -`--zookeeper` | A "local" ZooKeeper connection string for the cluster. This connection string only needs to include *one* machine in the ZooKeeper cluster. -`--configuration-store` | The configuration store connection string for the entire instance. As with the `--zookeeper` flag, this connection string only needs to include *one* machine in the ZooKeeper cluster. -`--web-service-url` | The web service URL for the cluster, plus a port. This URL should be a standard DNS name. The default port is 8080 (you had better not use a different port). -`--web-service-url-tls` | If you use [TLS](security-tls-transport.md), you also need to specify a TLS web service URL for the cluster. The default port is 8443 (you had better not use a different port). -`--broker-service-url` | A broker service URL enabling interaction with the brokers in the cluster. This URL should not use the same DNS name as the web service URL but should use the `pulsar` scheme instead. The default port is 6650 (you had better not use a different port). -`--broker-service-url-tls` | If you use [TLS](security-tls-transport.md), you also need to specify a TLS web service URL for the cluster as well as a TLS broker service URL for the brokers in the cluster. The default port is 6651 (you had better not use a different port). - - -> If you do not have a DNS server, you can use multi-host format in the service URL with the following settings: -> - -> ```shell -> -> --web-service-url http://host1:8080,host2:8080,host3:8080 \ -> --web-service-url-tls https://host1:8443,host2:8443,host3:8443 \ -> --broker-service-url pulsar://host1:6650,host2:6650,host3:6650 \ -> --broker-service-url-tls pulsar+ssl://host1:6651,host2:6651,host3:6651 -> -> -> ``` - -> -> If you want to use an existing BookKeeper cluster, you can add the `--existing-bk-metadata-service-uri` flag as follows: -> - -> ```shell -> -> --existing-bk-metadata-service-uri "zk+null://zk1:2181;zk2:2181/ledgers" \ -> --web-service-url http://host1:8080,host2:8080,host3:8080 \ -> --web-service-url-tls https://host1:8443,host2:8443,host3:8443 \ -> --broker-service-url pulsar://host1:6650,host2:6650,host3:6650 \ -> --broker-service-url-tls pulsar+ssl://host1:6651,host2:6651,host3:6651 -> -> -> ``` - -> You can obtain the metadata service URI of the existing BookKeeper cluster by using the `bin/bookkeeper shell whatisinstanceid` command. 
You must enclose the value in double quotes since multiple metadata service URIs are separated with semicolons.

## Deploy a BookKeeper cluster

[BookKeeper](https://bookkeeper.apache.org) handles all persistent data storage in Pulsar. You need to deploy a cluster of BookKeeper bookies to use Pulsar. You can choose to run a **3-bookie BookKeeper cluster**.

You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. The most important step in configuring bookies for our purposes here is ensuring that [`zkServers`](reference-configuration.md#bookkeeper-zkServers) is set to the connection string for the ZooKeeper cluster. The following is an example:

```properties

zkServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181

```

Once you appropriately modify the `zkServers` parameter, you can make any other configuration changes that you require. You can find a full listing of the available BookKeeper configuration parameters [here](reference-configuration.md#bookkeeper). However, consulting the [BookKeeper documentation](http://bookkeeper.apache.org/docs/latest/reference/config/) for a more in-depth guide might be a better choice.

Once you apply the desired configuration in `conf/bookkeeper.conf`, you can start up a bookie on each of your BookKeeper hosts. You can start up each bookie either in the background, using [nohup](https://en.wikipedia.org/wiki/Nohup), or in the foreground.

To start the bookie in the background, use the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:

```bash

$ bin/pulsar-daemon start bookie

```

To start the bookie in the foreground:

```bash

$ bin/pulsar bookie

```

You can verify that a bookie works properly by running the `bookiesanity` command on the [BookKeeper shell](reference-cli-tools.md#shell):

```bash

$ bin/bookkeeper shell bookiesanity

```

This command creates an ephemeral BookKeeper ledger on the local bookie, writes a few entries, reads them back, and finally deletes the ledger.

After you start all the bookies, you can use the `simpletest` command for the [BookKeeper shell](reference-cli-tools.md#shell) on any bookie node to verify that all the bookies in the cluster are up and running:

```bash

$ bin/bookkeeper shell simpletest --ensemble <num-bookies> --writeQuorum <num-bookies> --ackQuorum <num-bookies> --numEntries <num-entries>

```

This command creates a `<num-bookies>`-sized ledger on the cluster, writes a few entries, and finally deletes the ledger.


## Deploy Pulsar brokers

Pulsar brokers are the last thing you need to deploy in your Pulsar cluster. Brokers handle Pulsar messages and provide the administrative interface of Pulsar. A good choice is to run **3 brokers**, one on each machine that already runs a BookKeeper bookie.

### Configure Brokers

The most important element of broker configuration is ensuring that each broker is aware of the ZooKeeper cluster that you have deployed. Ensure that the [`zookeeperServers`](reference-configuration.md#broker-zookeeperServers) and [`configurationStoreServers`](reference-configuration.md#broker-configurationStoreServers) parameters are correct. In this case, since you only have 1 cluster and no configuration store setup, the `configurationStoreServers` parameter points to the same `zookeeperServers`.
- -```properties - -zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181 -configurationStoreServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181 - -``` - -You also need to specify the cluster name (matching the name that you provided when you [initialize the metadata of the cluster](#initialize-cluster-metadata)): - -```properties - -clusterName=pulsar-cluster-1 - -``` - -In addition, you need to match the broker and web service ports provided when you initialize the metadata of the cluster (especially when you use a different port than the default): - -```properties - -brokerServicePort=6650 -brokerServicePortTls=6651 -webServicePort=8080 -webServicePortTls=8443 - -``` - -> If you deploy Pulsar in a one-node cluster, you should update the replication settings in `conf/broker.conf` to `1`. -> - -> ```properties -> -> # Number of bookies to use when creating a ledger -> managedLedgerDefaultEnsembleSize=1 -> -> # Number of copies to store for each message -> managedLedgerDefaultWriteQuorum=1 -> -> # Number of guaranteed copies (acks to wait before write is complete) -> managedLedgerDefaultAckQuorum=1 -> -> -> ``` - - -### Enable Pulsar Functions (optional) - -If you want to enable [Pulsar Functions](functions-overview.md), you can follow the instructions as below: - -1. Edit `conf/broker.conf` to enable functions worker, by setting `functionsWorkerEnabled` to `true`. - - ```conf - - functionsWorkerEnabled=true - - ``` - -2. Edit `conf/functions_worker.yml` and set `pulsarFunctionsCluster` to the cluster name that you provide when you [initialize the metadata of the cluster](#initialize-cluster-metadata). - - ```conf - - pulsarFunctionsCluster: pulsar-cluster-1 - - ``` - -If you want to learn more options about deploying the functions worker, check out [Deploy and manage functions worker](functions-worker.md). - -### Start Brokers - -You can then provide any other configuration changes that you want in the [`conf/broker.conf`](reference-configuration.md#broker) file. Once you decide on a configuration, you can start up the brokers for your Pulsar cluster. Like ZooKeeper and BookKeeper, you can start brokers either in the foreground or in the background, using nohup. - -You can start a broker in the foreground using the [`pulsar broker`](reference-cli-tools.md#pulsar-broker) command: - -```bash - -$ bin/pulsar broker - -``` - -You can start a broker in the background using the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool: - -```bash - -$ bin/pulsar-daemon start broker - -``` - -Once you successfully start up all the brokers that you intend to use, your Pulsar cluster should be ready to go! - -## Connect to the running cluster - -Once your Pulsar cluster is up and running, you should be able to connect with it using Pulsar clients. One such client is the [`pulsar-client`](reference-cli-tools.md#pulsar-client) tool, which is included with the Pulsar binary package. The `pulsar-client` tool can publish messages to and consume messages from Pulsar topics and thus provide a simple way to make sure that your cluster runs properly. - -To use the `pulsar-client` tool, first modify the client configuration file in [`conf/client.conf`](reference-configuration.md#client) in your binary package. You need to change the values for `webServiceUrl` and `brokerServiceUrl`, substituting `localhost` (which is the default), with the DNS name that you assign to your broker/bookie hosts. 
The following is an example:

```properties

webServiceUrl=http://us-west.example.com:8080
brokerServiceUrl=pulsar://us-west.example.com:6650

```

> If you do not have a DNS server, you can specify multiple hosts in the service URL as follows:
>

> ```properties
>
> webServiceUrl=http://host1:8080,host2:8080,host3:8080
> brokerServiceUrl=pulsar://host1:6650,host2:6650,host3:6650
>
>
> ```


Once that is complete, you can publish a message to the Pulsar topic:

```bash

$ bin/pulsar-client produce \
  persistent://public/default/test \
  -n 1 \
  -m "Hello Pulsar"

```

> You may need to use a different cluster name in the topic if you specify a cluster name other than `pulsar-cluster-1`.

This command publishes a single message to the Pulsar topic. In addition, you can subscribe to the Pulsar topic in a different terminal before publishing messages, as shown below:

```bash

$ bin/pulsar-client consume \
  persistent://public/default/test \
  -n 100 \
  -s "consumer-test" \
  -t "Exclusive"

```

Once you successfully publish the above message to the topic, you should see it in the standard output:

```bash

----- got message -----
Hello Pulsar

```

## Run Functions

> If you have [enabled](#enable-pulsar-functions-optional) Pulsar Functions, you can try out Pulsar Functions now.

Create an ExclamationFunction named `exclamation`.

```bash

bin/pulsar-admin functions create \
  --jar examples/api-examples.jar \
  --classname org.apache.pulsar.functions.api.examples.ExclamationFunction \
  --inputs persistent://public/default/exclamation-input \
  --output persistent://public/default/exclamation-output \
  --tenant public \
  --namespace default \
  --name exclamation

```

Check whether the function runs as expected by [triggering](functions-deploying.md#triggering-pulsar-functions) the function.

```bash

bin/pulsar-admin functions trigger --name exclamation --trigger-value "hello world"

```

You should see the following output:

```shell

hello world!

```

diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/deploy-dcos.md b/site2/website/versioned_docs/version-2.9.3-deprecated/deploy-dcos.md
deleted file mode 100644
index 35a0a83d716ade..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/deploy-dcos.md
+++ /dev/null
@@ -1,200 +0,0 @@
----
-id: deploy-dcos
-title: Deploy Pulsar on DC/OS
-sidebar_label: "DC/OS"
-original_id: deploy-dcos
----

:::tip

To enable all built-in [Pulsar IO](io-overview.md) connectors in your Pulsar deployment, we recommend you use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image; the former already bundles [all built-in connectors](io-overview.md#working-with-connectors).

:::

[DC/OS](https://dcos.io/) (the DataCenter Operating System) is a distributed operating system for deploying and managing applications and systems on [Apache Mesos](http://mesos.apache.org/). DC/OS is an open-source tool created and maintained by [Mesosphere](https://mesosphere.com/).

Apache Pulsar is available as a [Marathon Application Group](https://mesosphere.github.io/marathon/docs/application-groups.html), which runs multiple applications as manageable sets.

## Prerequisites

You need to prepare your environment before running Pulsar on DC/OS.
- -* DC/OS version [1.9](https://docs.mesosphere.com/1.9/) or higher -* A [DC/OS cluster](https://docs.mesosphere.com/1.9/installing/) with at least three agent nodes -* The [DC/OS CLI tool](https://docs.mesosphere.com/1.9/cli/install/) installed -* The [`PulsarGroups.json`](https://github.com/apache/pulsar/blob/master/deployment/dcos/PulsarGroups.json) configuration file from the Pulsar GitHub repo. - - ```bash - - $ curl -O https://raw.githubusercontent.com/apache/pulsar/master/deployment/dcos/PulsarGroups.json - - ``` - -Each node in the DC/OS-managed Mesos cluster must have at least: - -* 4 CPU -* 4 GB of memory -* 60 GB of total persistent disk - -Alternatively, you can change the configuration in `PulsarGroups.json` accordingly to match your resources of the DC/OS cluster. - -## Deploy Pulsar using the DC/OS command interface - -You can deploy Pulsar on DC/OS using this command: - -```bash - -$ dcos marathon group add PulsarGroups.json - -``` - -This command deploys Docker container instances in three groups, which together comprise a Pulsar cluster: - -* 3 bookies (1 [bookie](reference-terminology.md#bookie) on each agent node and 1 [bookie recovery](http://bookkeeper.apache.org/docs/latest/admin/autorecovery/) instance) -* 3 Pulsar [brokers](reference-terminology.md#broker) (1 broker on each node and 1 admin instance) -* 1 [Prometheus](http://prometheus.io/) instance and 1 [Grafana](https://grafana.com/) instance - - -> When you run DC/OS, a ZooKeeper cluster will be running at `master.mesos:2181`, thus you do not have to install or start up ZooKeeper separately. - -After executing the `dcos` command above, click the **Services** tab in the DC/OS [GUI interface](https://docs.mesosphere.com/latest/gui/), which you can access at [http://m1.dcos](http://m1.dcos) in this example. You should see several applications during the deployment. - -![DC/OS command executed](/assets/dcos_command_execute.png) - -![DC/OS command executed2](/assets/dcos_command_execute2.png) - -## The BookKeeper group - -To monitor the status of the BookKeeper cluster deployment, click the **bookkeeper** group in the parent **pulsar** group. - -![DC/OS bookkeeper status](/assets/dcos_bookkeeper_status.png) - -At this point, the status of the 3 [bookies](reference-terminology.md#bookie) are green, which means that the bookies have been deployed successfully and are running. - -![DC/OS bookkeeper running](/assets/dcos_bookkeeper_run.png) - -You can also click each bookie instance to get more detailed information, such as the bookie running log. - -![DC/OS bookie log](/assets/dcos_bookie_log.png) - -To display information about the BookKeeper in ZooKeeper, you can visit [http://m1.dcos/exhibitor](http://m1.dcos/exhibitor). In this example, 3 bookies are under the `available` directory. - -![DC/OS bookkeeper in zk](/assets/dcos_bookkeeper_in_zookeeper.png) - -## The Pulsar broker group - -Similar to the BookKeeper group above, click **brokers** to check the status of the Pulsar brokers. - -![DC/OS broker status](/assets/dcos_broker_status.png) - -![DC/OS broker running](/assets/dcos_broker_run.png) - -You can also click each broker instance to get more detailed information, such as the broker running log. - -![DC/OS broker log](/assets/dcos_broker_log.png) - -Broker cluster information in ZooKeeper is also available through the web UI. In this example, you can see that the `loadbalance` and `managed-ledgers` directories have been created. 
-
-![DC/OS broker in zk](/assets/dcos_broker_in_zookeeper.png)
-
-## Monitor group
-
-The **monitor** group consists of Prometheus and Grafana.
-
-![DC/OS monitor status](/assets/dcos_monitor_status.png)
-
-### Prometheus
-
-Click the `prom` instance to get the Prometheus endpoint, which is `192.168.65.121:9090` in this example.
-
-![DC/OS prom endpoint](/assets/dcos_prom_endpoint.png)
-
-If you click that endpoint, you can see the Prometheus dashboard. All the bookies and brokers are listed on [http://192.168.65.121:9090/targets](http://192.168.65.121:9090/targets).
-
-![DC/OS prom targets](/assets/dcos_prom_targets.png)
-
-### Grafana
-
-Click `grafana` to get the endpoint for Grafana, which is `192.168.65.121:3000` in this example.
-
-![DC/OS grafana endpoint](/assets/dcos_grafana_endpoint.png)
-
-If you click that endpoint, you can access the Grafana dashboard.
-
-![DC/OS grafana targets](/assets/dcos_grafana_dashboard.png)
-
-## Run a simple Pulsar consumer and producer on DC/OS
-
-Now that you have a fully deployed Pulsar cluster, you can run a simple consumer and producer to show Pulsar on DC/OS in action.
-
-### Download and prepare the Pulsar Java tutorial
-
-You can clone the [Pulsar Java tutorial](https://github.com/streamlio/pulsar-java-tutorial) repo. This repo contains a simple Pulsar consumer and producer (see the `README` file in the repo for more information).
-
-```bash
-
-$ git clone https://github.com/streamlio/pulsar-java-tutorial
-
-```
-
-Change the `SERVICE_URL` from `pulsar://localhost:6650` to `pulsar://a1.dcos:6650` in both the [`ConsumerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ConsumerTutorial.java) file and the [`ProducerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ProducerTutorial.java) file.
-
-The `pulsar://a1.dcos:6650` endpoint is for the broker service. You can fetch the endpoint details for each broker instance from the DC/OS GUI. `a1.dcos` is a DC/OS client agent that runs a broker, and you can replace it with the client agent IP address.
-
-Now, you can change the message number from 10 to 10000000 in the main method of the [`ProducerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ProducerTutorial.java) file to produce more messages.
-
-Then, you can compile the project code using the command below:
-
-```bash
-
-$ mvn clean package
-
-```
-
-### Run the consumer and producer
-
-Execute this command to run the consumer:
-
-```bash
-
-$ mvn exec:java -Dexec.mainClass="tutorial.ConsumerTutorial"
-
-```
-
-Execute this command to run the producer:
-
-```bash
-
-$ mvn exec:java -Dexec.mainClass="tutorial.ProducerTutorial"
-
-```
-
-You can see from the DC/OS GUI that the producer is producing messages and the consumer is consuming them.
-
-![DC/OS pulsar producer](/assets/dcos_producer.png)
-
-![DC/OS pulsar consumer](/assets/dcos_consumer.png)
-
-### View Grafana metric output
-
-While the producer and consumer are running, you can view the live metrics in Grafana.
-
-![DC/OS pulsar dashboard](/assets/dcos_metrics.png)
-
-
-## Uninstall Pulsar
-
-You can shut down and uninstall the `pulsar` application from DC/OS at any time in one of the following two ways:
-
-1. On the DC/OS GUI, click the three dots at the right end of the Pulsar group and choose **Delete**.
-
-   ![DC/OS pulsar uninstall](/assets/dcos_uninstall.png)
-
-2. Use the command below.
-
-   ```bash
-   
-   $ dcos marathon group remove /pulsar
-   
-   ```
-
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/deploy-docker.md b/site2/website/versioned_docs/version-2.9.3-deprecated/deploy-docker.md
deleted file mode 100644
index 8348d78deb2378..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/deploy-docker.md
+++ /dev/null
@@ -1,60 +0,0 @@
----
-id: deploy-docker
-title: Deploy a cluster on Docker
-sidebar_label: "Docker"
-original_id: deploy-docker
----
-
-To deploy a Pulsar cluster on Docker, complete the following steps:
-1. Deploy a ZooKeeper cluster (optional)
-2. Initialize cluster metadata
-3. Deploy a BookKeeper cluster
-4. Deploy one or more Pulsar brokers
-
-## Prepare
-
-To run Pulsar on Docker, you need to create a container for each Pulsar component: ZooKeeper, BookKeeper, and a broker. You can pull the images of ZooKeeper and BookKeeper separately on [Docker Hub](https://hub.docker.com/) and pull a [Pulsar image](https://hub.docker.com/r/apachepulsar/pulsar-all/tags) for the broker, or you can pull only one [Pulsar image](https://hub.docker.com/r/apachepulsar/pulsar-all/tags) and create three containers from it. This tutorial takes the second option as an example.
-
-### Pull a Pulsar image
-You can pull a Pulsar image from [Docker Hub](https://hub.docker.com/r/apachepulsar/pulsar-all/tags) with the following command.
-
-```
-
-docker pull apachepulsar/pulsar-all:latest
-
-```
-
-### Create three containers
-Create containers for ZooKeeper, BookKeeper, and the broker. In this example, they are named `zookeeper`, `bookkeeper`, and `broker` respectively. You can name them however you want with the `--name` flag; if you do not set it, Docker generates random container names.
-
-```
-
-docker run -it --name bookkeeper apachepulsar/pulsar-all:latest /bin/bash
-docker run -it --name zookeeper apachepulsar/pulsar-all:latest /bin/bash
-docker run -it --name broker apachepulsar/pulsar-all:latest /bin/bash
-
-```
-
-### Create a network
-To deploy a Pulsar cluster on Docker, you need to create a `network` and connect the containers of ZooKeeper, BookKeeper, and the broker to this network. The following command creates the network `pulsar`:
-
-```
-
-docker network create pulsar
-
-```
-
-### Connect containers to network
-Connect the containers of ZooKeeper, BookKeeper, and the broker to the `pulsar` network with the following commands.
-
-```
-
-docker network connect pulsar zookeeper
-docker network connect pulsar bookkeeper
-docker network connect pulsar broker
-
-```
-
-To check whether the containers are successfully connected to the network, enter the `docker network inspect pulsar` command.
-
-For detailed information about how to deploy the ZooKeeper cluster, the BookKeeper cluster, and the brokers, see [deploy a cluster on bare metal](deploy-bare-metal.md).
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/deploy-kubernetes.md b/site2/website/versioned_docs/version-2.9.3-deprecated/deploy-kubernetes.md
deleted file mode 100644
index 1aefc6ad79f716..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/deploy-kubernetes.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-id: deploy-kubernetes
-title: Deploy Pulsar on Kubernetes
-sidebar_label: "Kubernetes"
-original_id: deploy-kubernetes
----
-
-To get up and running with these charts as fast as possible, in a **non-production** use case, we provide
-a [quick start guide](getting-started-helm.md) for Proof of Concept (PoC) deployments.
-
-To configure and install a Pulsar cluster on Kubernetes for production usage, follow the complete [Installation Guide](helm-install.md).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/deploy-monitoring.md b/site2/website/versioned_docs/version-2.9.3-deprecated/deploy-monitoring.md
deleted file mode 100644
index 2b5c19344dc8c3..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/deploy-monitoring.md
+++ /dev/null
@@ -1,148 +0,0 @@
----
-id: deploy-monitoring
-title: Monitor
-sidebar_label: "Monitor"
-original_id: deploy-monitoring
----
-
-You can use different ways to monitor a Pulsar cluster, exposing both metrics related to the usage of topics and the overall health of the individual components of the cluster.
-
-## Collect metrics
-
-You can collect broker stats, ZooKeeper stats, and BookKeeper stats.
-
-### Broker stats
-
-You can collect Pulsar broker metrics from brokers and export the metrics in JSON format. The Pulsar broker metrics mainly have two types:
-
-* *Destination dumps*, which contain stats for each individual topic. You can fetch the destination dumps using the command below:
-
-  ```shell
-  
-  bin/pulsar-admin broker-stats destinations
-  
-  ```
-
-* Broker metrics, which contain the broker information and topic stats aggregated at the namespace level. You can fetch the broker metrics by using the following command:
-
-  ```shell
-  
-  bin/pulsar-admin broker-stats monitoring-metrics
-  
-  ```
-
-All the message rates are updated every minute.
-
-The aggregated broker metrics are also exposed in the [Prometheus](https://prometheus.io) format at:
-
-```shell
-
-http://$BROKER_ADDRESS:8080/metrics/
-
-```
-
-### ZooKeeper stats
-
-The local ZooKeeper, the configuration store server, and the clients that are shipped with Pulsar can expose detailed stats through Prometheus.
-
-```shell
-
-http://$LOCAL_ZK_SERVER:8000/metrics
-http://$GLOBAL_ZK_SERVER:8001/metrics
-
-```
-
-The default port of the local ZooKeeper is `8000` and the default port of the configuration store is `8001`. You can use a different stats port by configuring `metricsProvider.httpPort` in the `conf/zookeeper.conf` file.
-
-### BookKeeper stats
-
-You can configure the stats framework for BookKeeper by modifying the `statsProviderClass` in the `conf/bookkeeper.conf` file.
-
-The default BookKeeper configuration enables the Prometheus exporter. The configuration is included with the Pulsar distribution.
-
-```shell
-
-http://$BOOKIE_ADDRESS:8000/metrics
-
-```
-
-The default port for a bookie is `8000`. You can change the port by configuring `prometheusStatsHttpPort` in the `conf/bookkeeper.conf` file.
-
-### Managed cursor acknowledgment state
-The acknowledgment state is persisted to the ledger first. When the acknowledgment state fails to be persisted to the ledger, it is persisted to ZooKeeper. To track the stats of acknowledgment, you can configure the metrics for the managed cursor.
-
-```
-
-brk_ml_cursor_persistLedgerSucceed(namespace="", ledger_name="", cursor_name="")
-brk_ml_cursor_persistLedgerErrors(namespace="", ledger_name="", cursor_name="")
-brk_ml_cursor_persistZookeeperSucceed(namespace="", ledger_name="", cursor_name="")
-brk_ml_cursor_persistZookeeperErrors(namespace="", ledger_name="", cursor_name="")
-brk_ml_cursor_nonContiguousDeletedMessagesRange(namespace="", ledger_name="", cursor_name="")
-
-```
-
-These metrics are exposed through the Prometheus interface, so you can monitor and check the metric stats in Grafana.
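-
-For example, assuming `curl` and `grep` are available, you can spot-check these cursor metrics against the broker metrics endpoint shown earlier in this section:
-
-```shell
-
-# Fetch the broker metrics and keep only the managed cursor series
-curl -s http://$BROKER_ADDRESS:8080/metrics/ | grep brk_ml_cursor
-
-```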
-
-### Function and connector stats
-
-You can collect functions worker stats from `functions-worker` and export the metrics in JSON format, which contain functions worker JVM metrics.
-
-```
-
-pulsar-admin functions-worker monitoring-metrics
-
-```
-
-You can collect functions and connectors metrics from `functions-worker` and export the metrics in JSON format.
-
-```
-
-pulsar-admin functions-worker function-stats
-
-```
-
-The aggregated functions and connectors metrics can be exposed in the Prometheus format as below. You can get [`FUNCTIONS_WORKER_ADDRESS`](http://pulsar.apache.org/docs/en/next/functions-worker/) and `WORKER_PORT` from the `functions_worker.yml` file.
-
-```
-
-http://$FUNCTIONS_WORKER_ADDRESS:$WORKER_PORT/metrics
-
-```
-
-## Configure Prometheus
-
-You can use Prometheus to collect all the metrics exposed for Pulsar components and set up [Grafana](https://grafana.com/) dashboards to display the metrics and monitor your Pulsar cluster. For details, refer to the [Prometheus guide](https://prometheus.io/docs/introduction/getting_started/).
-
-When you run Pulsar on bare metal, you can provide the list of nodes to be probed. When you deploy Pulsar in a Kubernetes cluster, the monitoring is set up automatically. For details, refer to the [Kubernetes instructions](helm-deploy.md).
-
-## Dashboards
-
-When you collect time series statistics, the major problem is to make sure the number of dimensions attached to the data does not explode. Thus, you only need to collect time series of metrics aggregated at the namespace level.
-
-### Pulsar per-topic dashboard
-
-The per-topic dashboard instructions are available at [Pulsar manager](administration-pulsar-manager.md).
-
-### Grafana
-
-You can use Grafana to create dashboards driven by the data stored in Prometheus.
-
-When you deploy Pulsar on Kubernetes, a `pulsar-grafana` Docker image is enabled by default. You can use the Docker image with the principal dashboards.
-
-Enter the command below to use the dashboard manually:
-
-```shell
-
-docker run -p3000:3000 \
-        -e PROMETHEUS_URL=http://$PROMETHEUS_HOST:9090/ \
-        apachepulsar/pulsar-grafana:latest
-
-```
-
-The following are some Grafana dashboard examples:
-
-- [pulsar-grafana](http://pulsar.apache.org/docs/en/deploy-monitoring/#grafana): a Grafana dashboard that displays metrics collected in Prometheus for Pulsar clusters running on Kubernetes.
-- [apache-pulsar-grafana-dashboard](https://github.com/streamnative/apache-pulsar-grafana-dashboard): a collection of Grafana dashboard templates for different Pulsar components running on both Kubernetes and on-premise machines.
-
-## Alerting rules
-
-You can set alerting rules according to your Pulsar environment. To configure alerting rules for Apache Pulsar, refer to [alerting rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/develop-load-manager.md b/site2/website/versioned_docs/version-2.9.3-deprecated/develop-load-manager.md
deleted file mode 100644
index 509209b6a852d8..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/develop-load-manager.md
+++ /dev/null
@@ -1,227 +0,0 @@
----
-id: develop-load-manager
-title: Modular load manager
-sidebar_label: "Modular load manager"
-original_id: develop-load-manager
----
-
-The *modular load manager*, implemented in [`ModularLoadManagerImpl`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/impl/ModularLoadManagerImpl.java), is a flexible alternative to the previously implemented load manager, [`SimpleLoadManagerImpl`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/impl/SimpleLoadManagerImpl.java), which attempts to simplify how load is managed while also providing abstractions so that complex load management strategies may be implemented.
-
-## Usage
-
-There are two ways that you can enable the modular load manager:
-
-1. Change the value of the `loadManagerClassName` parameter in `conf/broker.conf` from `org.apache.pulsar.broker.loadbalance.impl.SimpleLoadManagerImpl` to `org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl`.
-2. Use the `pulsar-admin` tool. Here's an example:
-
-   ```shell
-   
-   $ pulsar-admin brokers update-dynamic-config \
-     --config loadManagerClassName \
-     --value org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl
-   
-   ```
-
-   You can use the same method to change back to the original value. In either case, any mistake in specifying the load manager will cause Pulsar to default to `SimpleLoadManagerImpl`.
-
-## Verification
-
-There are a few different ways to determine which load manager is being used:
-
-1. Use `pulsar-admin` to examine the `loadManagerClassName` element:
-
-   ```shell
-   
-   $ bin/pulsar-admin brokers get-all-dynamic-config
-   {
-     "loadManagerClassName" : "org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl"
-   }
-   
-   ```
-
-   If there is no `loadManagerClassName` element, then the default load manager is used.
-
-2. Consult a ZooKeeper load report. With the modular load manager, the load report in `/loadbalance/brokers/...` will have many differences. For example, the `systemResourceUsage` sub-elements (`bandwidthIn`, `bandwidthOut`, etc.) are now all at the top level. Here is an example load report from the modular load manager:
-
-   ```json
-   
-   {
-     "bandwidthIn": {
-       "limit": 10240000.0,
-       "usage": 4.256510416666667
-     },
-     "bandwidthOut": {
-       "limit": 10240000.0,
-       "usage": 5.287239583333333
-     },
-     "bundles": [],
-     "cpu": {
-       "limit": 2400.0,
-       "usage": 5.7353247655435915
-     },
-     "directMemory": {
-       "limit": 16384.0,
-       "usage": 1.0
-     }
-   }
-   
-   ```
-
-   With the simple load manager, the load report in `/loadbalance/brokers/...` looks like this:
-
-   ```json
-   
-   {
-     "systemResourceUsage": {
-       "bandwidthIn": {
-         "limit": 10240000.0,
-         "usage": 0.0
-       },
-       "bandwidthOut": {
-         "limit": 10240000.0,
-         "usage": 0.0
-       },
-       "cpu": {
-         "limit": 2400.0,
-         "usage": 0.0
-       },
-       "directMemory": {
-         "limit": 16384.0,
-         "usage": 1.0
-       },
-       "memory": {
-         "limit": 8192.0,
-         "usage": 3903.0
-       }
-     }
-   }
-   
-   ```
-
-3. The command-line [broker monitor](reference-cli-tools.md#monitor-brokers) will have a different output format depending on which load manager implementation is being used.
-
-   Here is an example from the modular load manager:
-
-   ```
-   
-   ===================================================================================================================
-   ||SYSTEM        |CPU %         |MEMORY %      |DIRECT %      |BW IN %       |BW OUT %      |MAX %         ||
-   ||              |0.00          |48.33         |0.01          |0.00          |0.00          |48.33         ||
-   ||COUNT         |TOPIC         |BUNDLE        |PRODUCER      |CONSUMER      |BUNDLE +      |BUNDLE -      ||
-   ||              |4             |4             |0             |2             |4             |0             ||
-   ||LATEST        |MSG/S IN      |MSG/S OUT     |TOTAL         |KB/S IN       |KB/S OUT      |TOTAL         ||
-   ||              |0.00          |0.00          |0.00          |0.00          |0.00          |0.00          ||
-   ||SHORT         |MSG/S IN      |MSG/S OUT     |TOTAL         |KB/S IN       |KB/S OUT      |TOTAL         ||
-   ||              |0.00          |0.00          |0.00          |0.00          |0.00          |0.00          ||
-   ||LONG          |MSG/S IN      |MSG/S OUT     |TOTAL         |KB/S IN       |KB/S OUT      |TOTAL         ||
-   ||              |0.00          |0.00          |0.00          |0.00          |0.00          |0.00          ||
-   ===================================================================================================================
-   
-   ```
-
-   Here is an example from the simple load manager:
-
-   ```
-   
-   ===================================================================================================================
-   ||COUNT         |TOPIC         |BUNDLE        |PRODUCER      |CONSUMER      |BUNDLE +      |BUNDLE -      ||
-   ||              |4             |4             |0             |2             |0             |0             ||
-   ||RAW SYSTEM    |CPU %         |MEMORY %      |DIRECT %      |BW IN %       |BW OUT %      |MAX %         ||
-   ||              |0.25          |47.94         |0.01          |0.00          |0.00          |47.94         ||
-   ||ALLOC SYSTEM  |CPU %         |MEMORY %      |DIRECT %      |BW IN %       |BW OUT %      |MAX %         ||
-   ||              |0.20          |1.89          |              |1.27          |3.21          |3.21          ||
-   ||RAW MSG       |MSG/S IN      |MSG/S OUT     |TOTAL         |KB/S IN       |KB/S OUT      |TOTAL         ||
-   ||              |0.00          |0.00          |0.00          |0.01          |0.01          |0.01          ||
-   ||ALLOC MSG     |MSG/S IN      |MSG/S OUT     |TOTAL         |KB/S IN       |KB/S OUT      |TOTAL         ||
-   ||              |54.84         |134.48        |189.31        |126.54        |320.96        |447.50        ||
-   ===================================================================================================================
-   
-   ```
-
-It is important to note that the modular load manager is _centralized_, meaning that all requests to assign a bundle---whether it's been seen before or whether this is the first time---only get handled by the _lead_ broker (which can change over time). To determine the current lead broker, examine the `/loadbalance/leader` node in ZooKeeper.
-
-## Implementation
-
-### Data
-
-The data monitored by the modular load manager is contained in the [`LoadData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/LoadData.java) class.
-Here, the available data is subdivided into the bundle data and the broker data.
-
-#### Broker
-
-The broker data is contained in the [`BrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/BrokerData.java) class. It is further subdivided into two parts,
-one being the local data which every broker individually writes to ZooKeeper, and the other being the historical broker
-data which is written to ZooKeeper by the leader broker.
-
-##### Local Broker Data
-The local broker data is contained in the class [`LocalBrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/java/org/apache/pulsar/policies/data/loadbalancer/LocalBrokerData.java) and provides information about the following resources:
-
-* CPU usage
-* JVM heap memory usage
-* Direct memory usage
-* Bandwidth in/out usage
-* Most recent total message rate in/out across all bundles
-* Total number of topics, bundles, producers, and consumers
-* Names of all bundles assigned to this broker
-* Most recent changes in bundle assignments for this broker
-
-The local broker data is updated periodically according to the service configuration
-`loadBalancerReportUpdateMaxIntervalMinutes`.
-After any broker updates its local broker data, the leader broker
-receives the update immediately via a ZooKeeper watch, where the local data is read from the ZooKeeper node
-`/loadbalance/brokers/<broker host:port>`.
-
-##### Historical Broker Data
-
-The historical broker data is contained in the [`TimeAverageBrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/TimeAverageBrokerData.java) class.
-
-In order to reconcile the need to make good decisions in a steady-state scenario and make reactive decisions in a critical scenario, the historical data is split into two parts: the short-term data for reactive decisions, and the long-term data for steady-state decisions. Both time frames maintain the following information:
-
-* Message rate in/out for the entire broker
-* Message throughput in/out for the entire broker
-
-Unlike the bundle data, the broker data does not maintain samples for the global broker message rates and throughputs, which are not expected to remain steady as bundles are removed or added. Instead, this data is aggregated over the short-term and long-term data for the bundles. See the section on bundle data to understand how that data is collected and maintained.
-
-The historical broker data is updated in memory for each broker by the leader broker whenever any broker writes its local data to ZooKeeper. Then, the historical data is written to ZooKeeper by the leader broker periodically according to the configuration `loadBalancerResourceQuotaUpdateIntervalMinutes`.
-
-##### Bundle Data
-
-The bundle data is contained in the [`BundleData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/BundleData.java) class. Like the historical broker data, the bundle data is split into a short-term and a long-term time frame. Each time frame maintains the following information:
-
-* Message rate in/out for this bundle
-* Message throughput in/out for this bundle
-* Current number of samples for this bundle
-
-The time frames are implemented by maintaining the average of these values over a set, limited number of samples, where
-the samples are obtained through the message rate and throughput values in the local data. Thus, if the update interval
-for the local data is 2 minutes, the number of short samples is 10 and the number of long samples is 1000, the
-short-term data is maintained over a period of `10 samples * 2 minutes / sample = 20 minutes`, while the long-term
-data is similarly maintained over a period of 2000 minutes. Whenever there are not enough samples to satisfy a given time frame,
-the average is taken only over the existing samples. When no samples are available, default values are assumed until
-they are overwritten by the first sample. Currently, the default values are:
-
-* Message rate in/out: 50 messages per second both ways
-* Message throughput in/out: 50KB per second both ways
-
-The bundle data is updated in memory on the leader broker whenever any broker writes its local data to ZooKeeper.
-Then, the bundle data is written to ZooKeeper by the leader broker periodically at the same time as the historical
-broker data, according to the configuration `loadBalancerResourceQuotaUpdateIntervalMinutes`.
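-
-To make the sample-window arithmetic above concrete, here is a minimal, illustrative Java sketch of a bounded-sample average. It is not the actual `BundleData` code; the class name is invented, and the 50 msg/s default is taken from the default values listed above.
-
-```java
-
-import java.util.ArrayDeque;
-import java.util.Deque;
-
-// Illustrative only: keeps the most recent `maxSamples` message rates and
-// reports their average, assuming a 50 msg/s default before any samples arrive.
-class BoundedSampleAverage {
-    private final int maxSamples;
-    private final Deque<Double> samples = new ArrayDeque<>();
-
-    BoundedSampleAverage(int maxSamples) {
-        this.maxSamples = maxSamples;
-    }
-
-    void addSample(double msgRate) {
-        if (samples.size() == maxSamples) {
-            samples.removeFirst(); // evict the oldest sample
-        }
-        samples.addLast(msgRate);
-    }
-
-    double average() {
-        // With 10 short samples and a 2-minute update interval, this
-        // average covers the last 20 minutes, as described above.
-        return samples.stream().mapToDouble(Double::doubleValue).average().orElse(50.0);
-    }
-}
-
-```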
-
-### Traffic Distribution
-
-The modular load manager uses the abstraction provided by [`ModularLoadManagerStrategy`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/ModularLoadManagerStrategy.java) to make decisions about bundle assignment. The strategy makes a decision by considering the service configuration, the entire load data, and the bundle data for the bundle to be assigned. Currently, the only supported strategy is [`LeastLongTermMessageRate`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/impl/LeastLongTermMessageRate.java), though soon users will have the ability to inject their own strategies if desired.
-
-#### Least Long Term Message Rate Strategy
-
-As its name suggests, the least long term message rate strategy attempts to distribute bundles across brokers so that
-the message rate in the long-term time window for each broker is roughly the same. However, simply balancing load based
-on message rate does not handle the issue of asymmetric resource burden per message on each broker. Thus, the system
-resource usages, which are CPU, memory, direct memory, bandwidth in, and bandwidth out, are also considered in the
-assignment process. This is done by weighting the final message rate according to
-`1 / (overload_threshold - max_usage)`, where `overload_threshold` corresponds to the configuration
-`loadBalancerBrokerOverloadedThresholdPercentage` and `max_usage` is the maximum proportion among the system resources
-that is being utilized by the candidate broker. This multiplier ensures that machines that are being more heavily taxed
-by the same message rates will receive less load. In particular, it tries to ensure that if one machine is overloaded,
-then all machines are approximately overloaded. In the case in which a broker's max usage exceeds the overload
-threshold, that broker is not considered for bundle assignment. If all brokers are overloaded, the bundle is randomly
-assigned.
-
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/develop-schema.md b/site2/website/versioned_docs/version-2.9.3-deprecated/develop-schema.md
deleted file mode 100644
index 2d4461a5ea2b55..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/develop-schema.md
+++ /dev/null
@@ -1,62 +0,0 @@
----
-id: develop-schema
-title: Custom schema storage
-sidebar_label: "Custom schema storage"
-original_id: develop-schema
----
-
-By default, Pulsar stores data type [schemas](concepts-schema-registry.md) in [Apache BookKeeper](https://bookkeeper.apache.org) (which is deployed alongside Pulsar). You can, however, use another storage system if you wish. This doc walks you through creating your own schema storage implementation.
-
-In order to use a non-default (i.e. non-BookKeeper) storage system for Pulsar schemas, you need to implement two Java interfaces: [`SchemaStorage`](#schemastorage-interface) and [`SchemaStorageFactory`](#schemastoragefactory-interface).
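-
-As a preview of how the two pieces fit together, a factory implementation can be as small as the following sketch. This is hypothetical example code, not part of Pulsar: `MySchemaStorage` is an invented class, and the exact import paths for the two interfaces depend on your Pulsar version.
-
-```java
-
-import org.apache.pulsar.broker.PulsarService;
-// The SchemaStorage and SchemaStorageFactory interfaces are shown below;
-// import them from the package used by your Pulsar version.
-
-public class MySchemaStorageFactory implements SchemaStorageFactory {
-    @Override
-    public SchemaStorage create(PulsarService pulsar) throws Exception {
-        // MySchemaStorage is an invented class implementing SchemaStorage,
-        // e.g. backed by a database or an external registry.
-        return new MySchemaStorage(pulsar);
-    }
-}
-
-```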
-
-## SchemaStorage interface
-
-The `SchemaStorage` interface has the following methods:
-
-```java
-
-public interface SchemaStorage {
-    // How schemas are updated
-    CompletableFuture<SchemaVersion> put(String key, byte[] value, byte[] hash);
-
-    // How schemas are fetched from storage
-    CompletableFuture<StoredSchema> get(String key, SchemaVersion version);
-
-    // How schemas are deleted
-    CompletableFuture<SchemaVersion> delete(String key);
-
-    // Utility method for converting a schema version byte array to a SchemaVersion object
-    SchemaVersion versionFromBytes(byte[] version);
-
-    // Startup behavior for the schema storage client
-    void start() throws Exception;
-
-    // Shutdown behavior for the schema storage client
-    void close() throws Exception;
-}
-
-```
-
-> For a full-fledged example schema storage implementation, see the [`BookKeeperSchemaStorage`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/schema/BookkeeperSchemaStorage.java) class.
-
-## SchemaStorageFactory interface
-
-```java
-
-public interface SchemaStorageFactory {
-    @NotNull
-    SchemaStorage create(PulsarService pulsar) throws Exception;
-}
-
-```
-
-> For a full-fledged example schema storage factory implementation, see the [`BookKeeperSchemaStorageFactory`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/schema/BookkeeperSchemaStorageFactory.java) class.
-
-## Deployment
-
-In order to use your custom schema storage implementation, you'll need to:
-
-1. Package the implementation in a [JAR](https://docs.oracle.com/javase/tutorial/deployment/jar/basicsindex.html) file.
-1. Add that jar to the `lib` folder in your Pulsar [binary or source distribution](getting-started-standalone.md#installing-pulsar).
-1. Change the `schemaRegistryStorageClassName` configuration in [`broker.conf`](reference-configuration.md#broker) to your custom factory class (i.e. the `SchemaStorageFactory` implementation, not the `SchemaStorage` implementation).
-1. Start up Pulsar.
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/develop-tools.md b/site2/website/versioned_docs/version-2.9.3-deprecated/develop-tools.md
deleted file mode 100644
index bc7c29e836e6ac..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/develop-tools.md
+++ /dev/null
@@ -1,112 +0,0 @@
----
-id: develop-tools
-title: Simulation tools
-sidebar_label: "Simulation tools"
-original_id: develop-tools
----
-
-It is sometimes necessary to create a test environment and incur artificial load to observe how well load managers
-handle the load. The load simulation controller, the load simulation client, and the broker monitor were created to
-make it easier to create this load and observe its effects on the managers.
-
-## Simulation Client
-The simulation client is a machine that creates and subscribes to topics with configurable message rates and sizes.
-Because simulating large load sometimes requires multiple client machines, the user does not interact
-with the simulation client directly, but instead delegates requests to the simulation controller, which then
-sends signals to clients to start incurring load. The client implementation is in the class
-`org.apache.pulsar.testclient.LoadSimulationClient`.
-
-### Usage
-To start a simulation client, use the `pulsar-perf` script with the command `simulation-client` as follows:
-
-```
-
-pulsar-perf simulation-client --port <listen port> --service-url <pulsar service url>
-
-```
-
-The client will then be ready to receive controller commands.
-## Simulation Controller
-The simulation controller sends signals to the simulation clients, requesting them to create new topics, stop old
-topics, change the load incurred by topics, and perform several other tasks. It is implemented in the class
-`org.apache.pulsar.testclient.LoadSimulationController` and presents a shell to the user as an interface to send
-commands with.
-
-### Usage
-To start a simulation controller, use the `pulsar-perf` script with the command `simulation-controller` as follows:
-
-```
-
-pulsar-perf simulation-controller --cluster <cluster to simulate on> --client-port <listen port for clients>
---clients <comma-separated list of client hostnames>
-
-```
-
-The clients should already be started before the controller is started. You will then be presented with a simple prompt,
-where you can issue commands to simulation clients. Arguments often refer to tenant names, namespace names, and topic
-names. In all cases, the BASE names of the tenants, namespaces, and topics are used. For example, for the topic
-`persistent://my_tenant/my_cluster/my_namespace/my_topic`, the tenant name is `my_tenant`, the namespace name is
-`my_namespace`, and the topic name is `my_topic`. The controller can perform the following actions:
-
-* Create a topic with a producer and a consumer
-    * `trade <tenant> <namespace> <topic> [--rate <message rate>]
-    [--rand-rate <min rate>,<max rate>]
-    [--size <message size in bytes>]`
-* Create a group of topics with a producer and a consumer
-    * `trade_group <tenant> <group> <num namespaces> [--rate <message rate>]
-    [--rand-rate <min rate>,<max rate>]
-    [--separation <separation ms>] [--size <message size in bytes>]
-    [--topics-per-namespace <num topics>]`
-* Change the configuration of an existing topic
-    * `change <tenant> <namespace> <topic> [--rate <message rate>]
-    [--rand-rate <min rate>,<max rate>]
-    [--size <message size in bytes>]`
-* Change the configuration of a group of topics
-    * `change_group <tenant> <group> [--rate <message rate>] [--rand-rate <min rate>,<max rate>]
-    [--size <message size in bytes>] [--topics-per-namespace <num topics>]`
-* Shut down a previously created topic
-    * `stop <tenant> <namespace> <topic>`
-* Shut down a previously created group of topics
-    * `stop_group <tenant> <group>`
-* Copy the historical data from one ZooKeeper to another and simulate based on the message rates and sizes in that
-history
-    * `copy <source zookeeper> <target zookeeper> [--rate-multiplier value]`
-* Simulate the load of the historical data on the current ZooKeeper (should be the same ZooKeeper being simulated on)
-    * `simulate <zookeeper> [--rate-multiplier value]`
-* Stream the latest data from the given active ZooKeeper to simulate the real-time load of that ZooKeeper.
-    * `stream <zookeeper> [--rate-multiplier value]`
-
-The "group" arguments in these commands allow the user to create or affect multiple topics at once. Groups are created
-when calling the `trade_group` command, and all topics from these groups may be subsequently modified or stopped
-with the `change_group` and `stop_group` commands respectively. All ZooKeeper arguments are of the form
-`zookeeper_host:port`.
-
-### Difference Between Copy, Simulate, and Stream
-The commands `copy`, `simulate`, and `stream` are very similar but have significant differences. `copy` is used when
-you want to simulate the load of a static, external ZooKeeper on the ZooKeeper you are simulating on. Thus,
-`source zookeeper` should be the ZooKeeper you want to copy and `target zookeeper` should be the ZooKeeper you are
-simulating on, and then it will get the full benefit of the historical data of the source in both load manager
-implementations. `simulate` on the other hand takes in only one ZooKeeper, the one you are simulating on.
-It assumes that you are simulating on a ZooKeeper that has historical data for `SimpleLoadManagerImpl` and creates
-equivalent historical data for `ModularLoadManagerImpl`. Then, the load according to the historical data is simulated
-by the clients. Finally, `stream` takes in an active ZooKeeper different from the ZooKeeper being simulated on and
-streams load data from it to simulate the real-time load. In all cases, the optional `rate-multiplier` argument
-allows the user to simulate some proportion of the load. For instance, using `--rate-multiplier 0.05` will cause
-messages to be sent at only `5%` of the rate of the load that is being simulated.
-
-## Broker Monitor
-To observe the behavior of the load manager in these simulations, you can use the broker monitor, which is
-implemented in `org.apache.pulsar.testclient.BrokerMonitor`. The broker monitor prints tabular load data to the
-console as the data is updated, using ZooKeeper watchers.
-
-### Usage
-To start a broker monitor, use the `monitor-brokers` command in the `pulsar-perf` script:
-
-```
-
-pulsar-perf monitor-brokers --connect-string <zookeeper host:port>
-
-```
-
-The console will then continuously print load data until it is interrupted.
-
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/developing-binary-protocol.md b/site2/website/versioned_docs/version-2.9.3-deprecated/developing-binary-protocol.md
deleted file mode 100644
index a18a8b8d56172e..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/developing-binary-protocol.md
+++ /dev/null
@@ -1,606 +0,0 @@
----
-id: developing-binary-protocol
-title: Pulsar binary protocol specification
-sidebar_label: "Binary protocol"
-original_id: developing-binary-protocol
----
-
-Pulsar uses a custom binary protocol for communications between producers/consumers and brokers. This protocol is designed to support required features, such as acknowledgements and flow control, while ensuring maximum transport and implementation efficiency.
-
-Clients and brokers exchange *commands* with each other. Commands are formatted as binary [protocol buffer](https://developers.google.com/protocol-buffers/) (aka *protobuf*) messages. The format of protobuf commands is specified in the [`PulsarApi.proto`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/proto/PulsarApi.proto) file and also documented in the [Protobuf interface](#protobuf-interface) section below.
-
-> ### Connection sharing
-> Commands for different producers and consumers can be interleaved and sent through the same connection without restriction.
-
-All commands associated with Pulsar's protocol are contained in a [`BaseCommand`](#pulsar.proto.BaseCommand) protobuf message that includes a [`Type`](#pulsar.proto.Type) [enum](https://developers.google.com/protocol-buffers/docs/proto#enum) with all possible subcommands as optional fields. `BaseCommand` messages can specify only one subcommand.
-
-## Framing
-
-Since protobuf doesn't provide any sort of message frame, all messages in the Pulsar protocol are prepended with a 4-byte field that specifies the size of the frame. The maximum allowable size of a single frame is 5 MB.
-
-The Pulsar protocol allows for two types of commands:
-
-1. **Simple commands** that do not carry a message payload.
-2. **Payload commands** that bear a payload that is used when publishing or delivering messages.
In payload commands, the protobuf command data is followed by protobuf [metadata](#message-metadata) and then the payload, which is passed in raw format outside of protobuf. All sizes are passed as 4-byte unsigned big endian integers. - -> Message payloads are passed in raw format rather than protobuf format for efficiency reasons. - -### Simple commands - -Simple (payload-free) commands have this basic structure: - -| Component | Description | Size (in bytes) | -|:------------|:----------------------------------------------------------------------------------------|:----------------| -| totalSize | The size of the frame, counting everything that comes after it (in bytes) | 4 | -| commandSize | The size of the protobuf-serialized command | 4 | -| message | The protobuf message serialized in a raw binary format (rather than in protobuf format) | | - -### Payload commands - -Payload commands have this basic structure: - -| Component | Description | Size (in bytes) | -|:-------------|:--------------------------------------------------------------------------------------------|:----------------| -| totalSize | The size of the frame, counting everything that comes after it (in bytes) | 4 | -| commandSize | The size of the protobuf-serialized command | 4 | -| message | The protobuf message serialized in a raw binary format (rather than in protobuf format) | | -| magicNumber | A 2-byte byte array (`0x0e01`) identifying the current format | 2 | -| checksum | A [CRC32-C checksum](http://www.evanjones.ca/crc32c.html) of everything that comes after it | 4 | -| metadataSize | The size of the message [metadata](#message-metadata) | 4 | -| metadata | The message [metadata](#message-metadata) stored as a binary protobuf message | | -| payload | Anything left in the frame is considered the payload and can include any sequence of bytes | | - -## Message metadata - -Message metadata is stored alongside the application-specified payload as a serialized protobuf message. Metadata is created by the producer and passed on unchanged to the consumer. - -| Field | Description | -|:-------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| `producer_name` | The name of the producer that published the message | -| `sequence_id` | The sequence ID of the message, assigned by producer | -| `publish_time` | The publish timestamp in Unix time (i.e. as the number of milliseconds since January 1st, 1970 in UTC) | -| `properties` | A sequence of key/value pairs (using the [`KeyValue`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/proto/PulsarApi.proto#L32) message). These are application-defined keys and values with no special meaning to Pulsar. | -| `replicated_from` *(optional)* | Indicates that the message has been replicated and specifies the name of the [cluster](reference-terminology.md#cluster) where the message was originally published | -| `partition_key` *(optional)* | While publishing on a partition topic, if the key is present, the hash of the key is used to determine which partition to choose. Partition key is used as the message key. 
| `compression` *(optional)* | Signals that the payload has been compressed, and with which compression library |
-| `uncompressed_size` *(optional)* | If compression is used, the producer must fill the uncompressed size field with the original payload size |
-| `num_messages_in_batch` *(optional)* | If this message is really a [batch](#batch-messages) of multiple entries, this field must be set to the number of messages in the batch |
-
-### Batch messages
-
-When using batch messages, the payload contains a list of entries,
-each of them with its individual metadata, defined by the `SingleMessageMetadata`
-object.
-
-
-For a single batch, the payload format looks like this:
-
-
-| Field | Description |
-|:--------------|:------------------------------------------------------------|
-| metadataSizeN | The size of the serialized single message metadata |
-| metadataN | Single message metadata |
-| payloadN | Message payload passed by the application |
-
-Each metadata field looks like this:
-
-| Field | Description |
-|:---------------------------|:--------------------------------------------------------|
-| properties | Application-defined properties |
-| partition key *(optional)* | Key to indicate the hashing to a particular partition |
-| payload_size | Size of the payload for the single message in the batch |
-
-When compression is enabled, the whole batch is compressed at once.
-
-## Interactions
-
-### Connection establishment
-
-After opening a TCP connection to a broker, typically on port 6650, the client
-is responsible for initiating the session.
-
-![Connect interaction](/assets/binary-protocol-connect.png)
-
-After receiving a `Connected` response from the broker, the client can
-consider the connection ready to use. Alternatively, if the broker cannot
-validate the client's authentication, it replies with an `Error` command and
-closes the TCP connection.
-
-Example:
-
-```protobuf
-
-message CommandConnect {
-  "client_version" : "Pulsar-Client-Java-v1.15.2",
-  "auth_method_name" : "my-authentication-plugin",
-  "auth_data" : "my-auth-data",
-  "protocol_version" : 6
-}
-
-```
-
-Fields:
- * `client_version` → String-based identifier. The format is not enforced
- * `auth_method_name` → *(optional)* Name of the authentication plugin if auth
-   is enabled
- * `auth_data` → *(optional)* Plugin-specific authentication data
- * `protocol_version` → Indicates the protocol version supported by the
-   client. The broker will not send commands introduced in newer revisions of
-   the protocol. The broker might enforce a minimum version
-
-```protobuf
-
-message CommandConnected {
-  "server_version" : "Pulsar-Broker-v1.15.2",
-  "protocol_version" : 6
-}
-
-```
-
-Fields:
- * `server_version` → String identifier of the broker version
- * `protocol_version` → Protocol version supported by the broker. The client
-   must not attempt to send commands introduced in newer revisions of the
-   protocol
-
-### Keep Alive
-
-To identify prolonged network partitions between clients and brokers, or cases
-in which a machine crashes without interrupting the TCP connection on the remote
-end (e.g. power outage, kernel panic, hard reboot), the protocol includes a
-mechanism to probe for the availability status of the remote peer.
-
-Both clients and brokers send `Ping` commands periodically, and they
-close the socket if a `Pong` response is not received within a timeout (the
-default used by the broker is 60s).
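-
-As an illustrative sketch only (the class and method names here are invented, not actual Pulsar client code), the probing side can be modeled as a scheduled task that sends a `Ping` on every period and closes the connection when no `Pong` has arrived in the meantime:
-
-```java
-
-import java.util.concurrent.Executors;
-import java.util.concurrent.ScheduledExecutorService;
-import java.util.concurrent.TimeUnit;
-
-// Hypothetical keep-alive watchdog: sendPing() and closeSocket() stand in
-// for the real transport operations.
-class KeepAliveWatchdog {
-    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
-    private volatile boolean pongReceived = true;
-
-    void start(long timeoutSeconds) {
-        scheduler.scheduleAtFixedRate(() -> {
-            if (!pongReceived) {
-                closeSocket(); // no Pong within the timeout: assume the peer is gone
-                return;
-            }
-            pongReceived = false;
-            sendPing();
-        }, timeoutSeconds, timeoutSeconds, TimeUnit.SECONDS);
-    }
-
-    // Call this whenever a Pong command is read from the connection.
-    void onPong() {
-        pongReceived = true;
-    }
-
-    private void sendPing() { /* write a Ping command to the connection */ }
-    private void closeSocket() { /* tear down the TCP connection */ }
-}
-
-```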
-
-A valid implementation of a Pulsar client is not required to send the `Ping`
-probe, though it is required to promptly reply after receiving one from the
-broker in order to prevent the remote side from forcibly closing the TCP connection.
-
-
-### Producer
-
-In order to send messages, a client needs to establish a producer. When creating
-a producer, the broker will first verify that this particular client is
-authorized to publish on the topic.
-
-Once the client gets confirmation of the producer creation, it can publish
-messages to the broker, referring to the producer id negotiated earlier.
-
-![Producer interaction](/assets/binary-protocol-producer.png)
-
-##### Command Producer
-
-```protobuf
-
-message CommandProducer {
-  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
-  "producer_id" : 1,
-  "request_id" : 1
-}
-
-```
-
-Parameters:
- * `topic` → Complete topic name on which to create the producer
- * `producer_id` → Client-generated producer identifier. Needs to be unique
-   within the same connection
- * `request_id` → Identifier for this request. Used to match the response with
-   the originating request. Needs to be unique within the same connection
- * `producer_name` → *(optional)* If a producer name is specified, the name will
-   be used, otherwise the broker will generate a unique name. The generated
-   producer name is guaranteed to be globally unique. Implementations are
-   expected to let the broker generate a new producer name when the producer
-   is initially created, then reuse it when recreating the producer after
-   reconnections.
-
-The broker will reply with either a `ProducerSuccess` or an `Error` command.
-
-##### Command ProducerSuccess
-
-```protobuf
-
-message CommandProducerSuccess {
-  "request_id" : 1,
-  "producer_name" : "generated-unique-producer-name"
-}
-
-```
-
-Parameters:
- * `request_id` → Original id of the `CreateProducer` request
- * `producer_name` → Generated globally unique producer name or the name
-   specified by the client, if any.
-
-##### Command Send
-
-Command `Send` is used to publish a new message within the context of an
-already existing producer. This command is used in a frame that includes command
-as well as message payload, for which the complete format is specified in the [payload commands](#payload-commands) section.
-
-```protobuf
-
-message CommandSend {
-  "producer_id" : 1,
-  "sequence_id" : 0,
-  "num_messages" : 1
-}
-
-```
-
-Parameters:
- * `producer_id` → id of an existing producer
- * `sequence_id` → each message has an associated sequence id which is expected
-   to be implemented with a counter starting at 0. The `SendReceipt` that
-   acknowledges the effective publishing of messages will refer to it by
-   its sequence id.
- * `num_messages` → *(optional)* Used when publishing a batch of messages at
-   once.
-
-##### Command SendReceipt
-
-After a message has been persisted on the configured number of replicas, the
-broker will send the acknowledgment receipt to the producer.
-
-```protobuf
-
-message CommandSendReceipt {
-  "producer_id" : 1,
-  "sequence_id" : 0,
-  "message_id" : {
-    "ledgerId" : 123,
-    "entryId" : 456
-  }
-}
-
-```
-
-Parameters:
- * `producer_id` → id of the producer originating the send request
- * `sequence_id` → sequence id of the published message
- * `message_id` → message id assigned by the system to the published message.
-   Unique within a single cluster.
-   Message id is composed of 2 longs, `ledgerId`
-   and `entryId`, which reflect that this unique id is assigned when appending
-   to a BookKeeper ledger
-
-
-##### Command CloseProducer
-
-**Note**: *This command can be sent by either producer or broker*.
-
-When receiving a `CloseProducer` command, the broker will stop accepting any
-more messages for the producer, wait until all pending messages are persisted,
-and then reply `Success` to the client.
-
-The broker can send a `CloseProducer` command to the client when it's performing
-a graceful failover (e.g. the broker is being restarted, or the topic is being unloaded
-by the load balancer to be transferred to a different broker).
-
-When receiving the `CloseProducer`, the client is expected to go through the
-service discovery lookup again and recreate the producer. The TCP
-connection is not affected.
-
-### Consumer
-
-A consumer is used to attach to a subscription and consume messages from it.
-After every reconnection, a client needs to subscribe to the topic. If a
-subscription is not already there, a new one will be created.
-
-![Consumer](/assets/binary-protocol-consumer.png)
-
-#### Flow control
-
-After the consumer is ready, the client needs to *give permission* to the
-broker to push messages. This is done with the `Flow` command.
-
-A `Flow` command gives additional *permits* to send messages to the consumer.
-A typical consumer implementation will use a queue to accumulate these messages
-before the application is ready to consume them.
-
-After the application has dequeued half of the messages in the queue, the consumer
-sends permits to the broker to ask for more messages (equal to half of the queue size).
-
-For example, if the queue size is 1000 and the consumer has consumed 500 messages from the queue,
-the consumer sends the broker permits for 500 more messages.
-
-##### Command Subscribe
-
-```protobuf
-
-message CommandSubscribe {
-  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
-  "subscription" : "my-subscription-name",
-  "subType" : "Exclusive",
-  "consumer_id" : 1,
-  "request_id" : 1
-}
-
-```
-
-Parameters:
- * `topic` → Complete topic name on which to create the consumer
- * `subscription` → Subscription name
- * `subType` → Subscription type: Exclusive, Shared, Failover, Key_Shared
- * `consumer_id` → Client-generated consumer identifier. Needs to be unique
-   within the same connection
- * `request_id` → Identifier for this request. Used to match the response with
-   the originating request. Needs to be unique within the same connection
- * `consumer_name` → *(optional)* Clients can specify a consumer name. This
-   name can be used to track a particular consumer in the stats. Also, in
-   the Failover subscription type, the name is used to decide which consumer is
-   elected as *master* (the one receiving messages): consumers are sorted by
-   their consumer name and the first one is elected master.
-
-##### Command Flow
-
-```protobuf
-
-message CommandFlow {
-  "consumer_id" : 1,
-  "messagePermits" : 1000
-}
-
-```
-
-Parameters:
-* `consumer_id` → Id of an already established consumer
-* `messagePermits` → Number of additional permits to grant to the broker for
-  pushing more messages
-
-##### Command Message
-
-Command `Message` is used by the broker to push messages to an existing consumer,
-within the limits of the given permits.
-
-
-This command is used in a frame that includes the message payload as well, for
-which the complete format is specified in the [payload commands](#payload-commands)
-section.
-
-```protobuf
-
-message CommandMessage {
-  "consumer_id" : 1,
-  "message_id" : {
-    "ledgerId" : 123,
-    "entryId" : 456
-  }
-}
-
-```
-
-##### Command Ack
-
-An `Ack` is used to signal to the broker that a given message has been
-successfully processed by the application and can be discarded by the broker.
-
-In addition, the broker will also maintain the consumer position based on the
-acknowledged messages.
-
-```protobuf
-
-message CommandAck {
-  "consumer_id" : 1,
-  "ack_type" : "Individual",
-  "message_id" : {
-    "ledgerId" : 123,
-    "entryId" : 456
-  }
-}
-
-```
-
-Parameters:
- * `consumer_id` → Id of an already established consumer
- * `ack_type` → Type of acknowledgment: `Individual` or `Cumulative`
- * `message_id` → Id of the message to acknowledge
- * `validation_error` → *(optional)* Indicates that the consumer has discarded
-   the messages due to: `UncompressedSizeCorruption`,
-   `DecompressionError`, `ChecksumMismatch`, `BatchDeSerializeError`
- * `properties` → *(optional)* Reserved configuration items
- * `txnid_most_bits` → *(optional)* Same as the Transaction Coordinator ID; `txnid_most_bits` and `txnid_least_bits`
-   uniquely identify a transaction.
- * `txnid_least_bits` → *(optional)* The ID of the transaction opened in a transaction coordinator;
-   `txnid_most_bits` and `txnid_least_bits` uniquely identify a transaction.
- * `request_id` → *(optional)* ID for handling the response and timeout.
-
-
-##### Command AckResponse
-
-An `AckResponse` is the broker's response to an acknowledgment request sent by the client. It contains the `consumer_id` sent in the request.
-If a transaction is used, it contains both the Transaction ID and the Request ID that are sent in the request. The client finishes the specific request according to the Request ID. If the `error` field is set, it indicates that the request has failed.
-
-An example of an `AckResponse` for a transaction:
-
-```protobuf
-
-message CommandAckResponse {
-    "consumer_id" : 1,
-    "txnid_least_bits" : 0,
-    "txnid_most_bits" : 1,
-    "request_id" : 5
-}
-
-```
-
-##### Command CloseConsumer
-
-**Note**: *This command can be sent by either consumer or broker*.
-
-This command behaves the same as [`CloseProducer`](#command-closeproducer)
-
-##### Command RedeliverUnacknowledgedMessages
-
-A consumer can ask the broker to redeliver some or all of the pending messages
-that were pushed to that particular consumer and not yet acknowledged.
-
-The protobuf object accepts a list of message ids that the consumer wants to
-be redelivered. If the list is empty, the broker will redeliver all the
-pending messages.
-
-On redelivery, messages can be sent to the same consumer or, in the case of a
-shared subscription, spread across all available consumers.
-
-
-##### Command ReachedEndOfTopic
-
-This is sent by a broker to a particular consumer, whenever the topic
-has been "terminated" and all the messages on the subscription have been
-acknowledged.
-
-The client should use this command to notify the application that no more
-messages are coming from the consumer.
-
-##### Command ConsumerStats
-
-This command is sent by the client to retrieve Subscriber and Consumer level
-stats from the broker.
-Parameters:
- * `request_id` → Id of the request, used to correlate the request
-   and the response.
- * `consumer_id` → Id of an already established consumer.
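-
-Following the same notation as the other command examples in this document (see `PulsarApi.proto` for the authoritative definition), a stats request looks like:
-
-```protobuf
-
-message CommandConsumerStats {
-  "request_id" : 1,
-  "consumer_id" : 1
-}
-
-```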
-
-##### Command ConsumerStatsResponse
-
-This is the broker's response to a ConsumerStats request by the client.
-It contains the Subscriber and Consumer level stats of the `consumer_id` sent in the request.
-If the `error_code` or the `error_message` field is set, it indicates that the request has failed.
-
-##### Command Unsubscribe
-
-This command is sent by the client to unsubscribe the `consumer_id` from the associated topic.
-Parameters:
- * `request_id` → Id of the request.
- * `consumer_id` → Id of an already established consumer which needs to unsubscribe.
-
-
-## Service discovery
-
-### Topic lookup
-
-Topic lookup needs to be performed each time a client needs to create or
-reconnect a producer or a consumer. Lookup is used to discover which particular
-broker is serving the topic we are about to use.
-
-Lookup can be done with a REST call as described in the [admin API](admin-api-topics.md#lookup-of-topic)
-docs.
-
-Since Pulsar 1.16, it is also possible to perform the lookup within the binary
-protocol.
-
-For the sake of example, let's assume we have a service discovery component
-running at `pulsar://broker.example.com:6650`
-
-Individual brokers will be running at `pulsar://broker-1.example.com:6650`,
-`pulsar://broker-2.example.com:6650`, ...
-
-A client can use a connection to the discovery service host to issue a
-`LookupTopic` command. The response can either be a broker hostname to
-connect to, or a broker hostname against which to retry the lookup.
-
-The `LookupTopic` command has to be used in a connection that has already
-gone through the `Connect` / `Connected` initial handshake.
-
-![Topic lookup](/assets/binary-protocol-topic-lookup.png)
-
-```protobuf
-
-message CommandLookupTopic {
-  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
-  "request_id" : 1,
-  "authoritative" : false
-}
-
-```
-
-Fields:
- * `topic` → Topic name to look up
- * `request_id` → Id of the request that will be passed with its response
- * `authoritative` → The initial lookup request should use false. When following a
-   redirect response, the client should pass the same value contained in the
-   response
-
-##### LookupTopicResponse
-
-Example of response with successful lookup:
-
-```protobuf
-
-message CommandLookupTopicResponse {
-  "request_id" : 1,
-  "response" : "Connect",
-  "brokerServiceUrl" : "pulsar://broker-1.example.com:6650",
-  "brokerServiceUrlTls" : "pulsar+ssl://broker-1.example.com:6651",
-  "authoritative" : true
-}
-
-```
-
-Example of lookup response with redirection:
-
-```protobuf
-
-message CommandLookupTopicResponse {
-  "request_id" : 1,
-  "response" : "Redirect",
-  "brokerServiceUrl" : "pulsar://broker-2.example.com:6650",
-  "brokerServiceUrlTls" : "pulsar+ssl://broker-2.example.com:6651",
-  "authoritative" : true
-}
-
-```
-
-In this second case, we need to reissue the `LookupTopic` command request
-to `broker-2.example.com` and this broker will be able to give a definitive
-answer to the lookup request.
-
-### Partitioned topics discovery
-
-Partitioned topics metadata discovery is used to find out if a topic is a
-"partitioned topic" and how many partitions were set up.
-
-If the topic is marked as "partitioned", the client is expected to create
-multiple producers or consumers, one for each partition, using the `partition-X`
-suffix. For example, a topic `my-topic` with 3 partitions is addressed through the
-individual topics `my-topic-partition-0`, `my-topic-partition-1`, and `my-topic-partition-2`.
-
-This information only needs to be retrieved the first time a producer or
-consumer is created. There is no need to do this after reconnections.
-
-The discovery of partitioned topics metadata works very similarly to the topic
-lookup. The client sends a request to the service discovery address and the
-response will contain the actual metadata.
-
-##### Command PartitionedTopicMetadata
-
-```protobuf
-
-message CommandPartitionedTopicMetadata {
-  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
-  "request_id" : 1
-}
-
-```
-
-Fields:
- * `topic` → the topic for which to check the partitions metadata
- * `request_id` → Id of the request that will be passed with its response
-
-
-##### Command PartitionedTopicMetadataResponse
-
-Example of a response with metadata:
-
-```protobuf
-
-message CommandPartitionedTopicMetadataResponse {
-  "request_id" : 1,
-  "response" : "Success",
-  "partitions" : 32
-}
-
-```
-
-## Protobuf interface
-
-All Pulsar's Protobuf definitions can be found {@inject: github:here:/pulsar-common/src/main/proto/PulsarApi.proto}.
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/functions-cli.md b/site2/website/versioned_docs/version-2.9.3-deprecated/functions-cli.md
deleted file mode 100644
index c9fcfa201525f0..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/functions-cli.md
+++ /dev/null
@@ -1,198 +0,0 @@
----
-id: functions-cli
-title: Pulsar Functions command line tool
-sidebar_label: "Reference: CLI"
-original_id: functions-cli
----
-
-The following tables list Pulsar Functions command-line tools. You can learn Pulsar Functions modes, commands, and parameters.
-
-## localrun
-
-Run Pulsar Functions locally, rather than deploying them to the Pulsar cluster.
-
-Name | Description | Default
----|---|---
-auto-ack | Whether or not the framework acknowledges messages automatically. | true |
-broker-service-url | The URL for the Pulsar broker. | |
-classname | The class name of a Pulsar Function.| |
-client-auth-params | Client authentication parameter. | |
-client-auth-plugin | Client authentication plugin using which function-process can connect to broker. | |
-CPU | The CPU in cores that need to be allocated per function instance (applicable only to docker runtime).| |
-custom-schema-inputs | The map of input topics to Schema class names (as a JSON string). | |
-custom-serde-inputs | The map of input topics to SerDe class names (as a JSON string). | |
-dead-letter-topic | The topic where all messages that were not processed successfully are sent. This parameter is not supported in Python Functions. | |
-disk | The disk in bytes that need to be allocated per function instance (applicable only to docker runtime). | |
-fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
-function-config-file | The path to a YAML config file specifying the configuration of a Pulsar Function. | |
-go | Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | |
-hostname-verification-enabled | Enable hostname verification. | false
-inputs | The input topic or topics of a Pulsar Function (multiple topics can be specified as a comma-separated list). | |
-jar | Path to the jar file for the function (if the function is written in Java). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | |
-instance-id-offset | Start the instanceIds from this offset. | 0
-log-topic | The topic to which the logs of a Pulsar Function are produced. | |
-max-message-retries | How many times should we try to process a message before giving up. | |
-name | The name of a Pulsar Function. | |
-namespace | The namespace of a Pulsar Function. | |
-output | The output topic of a Pulsar Function (If none is specified, no output is written). | |
-output-serde-classname | The SerDe class to be used for messages output by the function. | |
-parallelism | The parallelism factor of a Pulsar Function (i.e. the number of function instances to run). | |
-processing-guarantees | The processing guarantees (delivery semantics) applied to the function. Available values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]. | ATLEAST_ONCE
-py | Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | |
-ram | The ram in bytes that need to be allocated per function instance (applicable only to process/docker runtime). | |
-retain-ordering | Function consumes and processes messages in order. | |
-schema-type | The builtin schema type or custom schema class name to be used for messages output by the function. | |
-sliding-interval-count | The number of messages after which the window slides. | |
-sliding-interval-duration-ms | The time duration after which the window slides. | |
-subs-name | Pulsar source subscription name if user wants a specific subscription-name for the input-topic consumer. | |
-tenant | The tenant of a Pulsar Function. | |
-timeout-ms | The message timeout in milliseconds. | |
-tls-allow-insecure | Allow insecure tls connection. | false
-tls-trust-cert-path | tls trust cert file path. | |
-topics-pattern | The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (only supported in Java Function). | |
-use-tls | Use tls connection. | false
-user-config | User-defined config key/values. | |
-window-length-count | The number of messages per window. | |
-window-length-duration-ms | The time duration of the window in milliseconds. | |
-
-
-## create
-
-Create and deploy a Pulsar Function in cluster mode.
-
-Name | Description | Default
----|---|---
-auto-ack | Whether or not the framework acknowledges messages automatically. | true |
-classname | The class name of a Pulsar Function. | |
-CPU | The CPU in cores that need to be allocated per function instance (applicable only to docker runtime).| |
-custom-runtime-options | A string that encodes options to customize the runtime, see docs for configured runtime for details | |
-custom-schema-inputs | The map of input topics to Schema class names (as a JSON string). | |
-custom-serde-inputs | The map of input topics to SerDe class names (as a JSON string). | |
-dead-letter-topic | The topic where all messages that were not processed successfully are sent. This parameter is not supported in Python Functions.
| | -disk | The disk in bytes that need to be allocated per function instance (applicable only to docker runtime). | | -fqfn | The Fully Qualified Function Name (FQFN) for the function. | | -function-config-file | The path to a YAML config file specifying the configuration of a Pulsar Function. | | -go | Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -inputs | The input topic or topics of a Pulsar Function (multiple topics can be specified as a comma-separated list). | | -jar | Path to the jar file for the function (if the function is written in Java). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -log-topic | The topic to which the logs of a Pulsar Function are produced. | | -max-message-retries | How many times should we try to process a message before giving up. | | -name | The name of a Pulsar Function. | | -namespace | The namespace of a Pulsar Function. | | -output | The output topic of a Pulsar Function (If none is specified, no output is written). | | -output-serde-classname | The SerDe class to be used for messages output by the function. | | -parallelism | The parallelism factor of a Pulsar Function (i.e. the number of function instances to run). | | -processing-guarantees | The processing guarantees (delivery semantics) applied to the function. Available values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]. | ATLEAST_ONCE -py | Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -ram | The ram in bytes that need to be allocated per function instance (applicable only to process/docker runtime). | | -retain-ordering | Function consumes and processes messages in order. | | -schema-type | The builtin schema type or custom schema class name to be used for messages output by the function. | | -sliding-interval-count | The number of messages after which the window slides. | | -sliding-interval-duration-ms | The time duration after which the window slides. | | -subs-name | Pulsar source subscription name if user wants a specific subscription-name for the input-topic consumer. | | -tenant | The tenant of a Pulsar Function. | | -timeout-ms | The message timeout in milliseconds. | | -topics-pattern | The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (only supported in Java Function). | | -user-config | User-defined config key/values. | | -window-length-count | The number of messages per window. | | -window-length-duration-ms | The time duration of the window in milliseconds. | | - -## delete - -Delete a Pulsar Function that is running on a Pulsar cluster. - -Name | Description | Default ----|---|--- -fqfn | The Fully Qualified Function Name (FQFN) for the function. | | -name | The name of a Pulsar Function. | | -namespace | The namespace of a Pulsar Function. 
| | -tenant | The tenant of a Pulsar Function. | | - -## update - -Update a Pulsar Function that has been deployed to a Pulsar cluster. - -Name | Description | Default ----|---|--- -auto-ack | Whether or not the framework acknowledges messages automatically. | true | -classname | The class name of a Pulsar Function. | | -CPU | The CPU in cores that need to be allocated per function instance (applicable only to docker runtime). | | -custom-runtime-options | A string that encodes options to customize the runtime, see docs for configured runtime for details | | -custom-schema-inputs | The map of input topics to Schema class names (as a JSON string). | | -custom-serde-inputs | The map of input topics to SerDe class names (as a JSON string). | | -dead-letter-topic | The topic where all messages that were not processed successfully are sent. This parameter is not supported in Python Functions. | | -disk | The disk in bytes that need to be allocated per function instance (applicable only to docker runtime). | | -fqfn | The Fully Qualified Function Name (FQFN) for the function. | | -function-config-file | The path to a YAML config file specifying the configuration of a Pulsar Function. | | -go | Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -inputs | The input topic or topics of a Pulsar Function (multiple topics can be specified as a comma-separated list). | | -jar | Path to the jar file for the function (if the function is written in Java). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -log-topic | The topic to which the logs of a Pulsar Function are produced. | | -max-message-retries | How many times should we try to process a message before giving up. | | -name | The name of a Pulsar Function. | | -namespace | The namespace of a Pulsar Function. | | -output | The output topic of a Pulsar Function (If none is specified, no output is written). | | -output-serde-classname | The SerDe class to be used for messages output by the function. | | -parallelism | The parallelism factor of a Pulsar Function (i.e. the number of function instances to run). | | -processing-guarantees | The processing guarantees (delivery semantics) applied to the function. Available values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]. | ATLEAST_ONCE -py | Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package. | | -ram | The ram in bytes that need to be allocated per function instance (applicable only to process/docker runtime). | | -retain-ordering | Function consumes and processes messages in order. | | -schema-type | The builtin schema type or custom schema class name to be used for messages output by the function. | | -sliding-interval-count | The number of messages after which the window slides. | | -sliding-interval-duration-ms | The time duration after which the window slides. 
| | -subs-name | Pulsar source subscription name if user wants a specific subscription-name for the input-topic consumer. | |
-tenant | The tenant of a Pulsar Function. | |
-timeout-ms | The message timeout in milliseconds. | |
-topics-pattern | The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (only supported in Java Function). | |
-update-auth-data | Whether or not to update the auth data. | false
-user-config | User-defined config key/values. | |
-window-length-count | The number of messages per window. | |
-window-length-duration-ms | The time duration of the window in milliseconds. | |
-
-## get
-
-Fetch information about a Pulsar Function.
-
-Name | Description | Default
----|---|---
-fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
-name | The name of a Pulsar Function. | |
-namespace | The namespace of a Pulsar Function. | |
-tenant | The tenant of a Pulsar Function. | |
-
-## restart
-
-Restart a function instance.
-
-Name | Description | Default
----|---|---
-fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
-instance-id | The function instanceId (restarts all instances if instance-id is not provided). | |
-name | The name of a Pulsar Function. | |
-namespace | The namespace of a Pulsar Function. | |
-tenant | The tenant of a Pulsar Function. | |
-
-## stop
-
-Stop a function instance.
-
-Name | Description | Default
----|---|---
-fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
-instance-id | The function instanceId (stops all instances if instance-id is not provided). | |
-name | The name of a Pulsar Function. | |
-namespace | The namespace of a Pulsar Function. | |
-tenant | The tenant of a Pulsar Function. | |
-
-## start
-
-Start a stopped function instance.
-
-Name | Description | Default
----|---|---
-fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
-instance-id | The function instanceId (starts all instances if instance-id is not provided). | |
-name | The name of a Pulsar Function. | |
-namespace | The namespace of a Pulsar Function. | |
-tenant | The tenant of a Pulsar Function. | |
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/functions-debug.md b/site2/website/versioned_docs/version-2.9.3-deprecated/functions-debug.md
deleted file mode 100644
index c1f19abda64657..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/functions-debug.md
+++ /dev/null
@@ -1,538 +0,0 @@
----
-id: functions-debug
-title: Debug Pulsar Functions
-sidebar_label: "How-to: Debug"
-original_id: functions-debug
----
-
-You can use the following methods to debug Pulsar Functions:
-
-* [Captured stderr](functions-debug.md#captured-stderr)
-* [Use unit test](functions-debug.md#use-unit-test)
-* [Debug with localrun mode](functions-debug.md#debug-with-localrun-mode)
-* [Use log topic](functions-debug.md#use-log-topic)
-* [Use Functions CLI](functions-debug.md#use-functions-cli)
-
-## Captured stderr
-
-Function startup information and captured stderr output are written to `logs/functions/<tenant>/<namespace>/<function>/<function>-<instance>.log`.
-
-This is useful for debugging why a function fails to start.
-
-## Use unit test
-
-A Pulsar Function is a function with inputs and outputs, so you can test a Pulsar Function in a similar way as you test any function.
-
-For example, if you have the following Pulsar Function:
-
-```java
-
-import java.util.function.Function;
-
-public class JavaNativeExclamationFunction implements Function<String, String> {
-    @Override
-    public String apply(String input) {
-        return String.format("%s!", input);
-    }
-}
-
-```
-
-You can write a simple unit test to test a Pulsar Function.
-
-:::tip
-
-Pulsar uses TestNG for testing.
-
-:::
-
-```java
-
-@Test
-public void testJavaNativeExclamationFunction() {
-    JavaNativeExclamationFunction exclamation = new JavaNativeExclamationFunction();
-    String output = exclamation.apply("foo");
-    Assert.assertEquals(output, "foo!");
-}
-
-```
-
-The following Pulsar Function implements the `org.apache.pulsar.functions.api.Function` interface.
-
-```java
-
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-
-public class ExclamationFunction implements Function<String, String> {
-    @Override
-    public String process(String input, Context context) {
-        return String.format("%s!", input);
-    }
-}
-
-```
-
-In this situation, you can write a unit test for this function as well. Remember to mock the `Context` parameter. The following is an example.
-
-:::tip
-
-Pulsar uses TestNG for testing.
-
-:::
-
-```java
-
-@Test
-public void testExclamationFunction() {
-    ExclamationFunction exclamation = new ExclamationFunction();
-    String output = exclamation.process("foo", mock(Context.class));
-    Assert.assertEquals(output, "foo!");
-}
-
-```
-
-## Debug with localrun mode
-
-When you run a Pulsar Function in localrun mode, it launches an instance of the function on your local machine as a thread.
-
-In this mode, a Pulsar Function consumes and produces actual data to a Pulsar cluster, and mirrors how the function actually runs in a Pulsar cluster.
-
-:::note
-
-Currently, debugging with localrun mode is only supported by Pulsar Functions written in Java. You need Pulsar version 2.4.0 or later to do the following. Even though localrun is available in versions earlier than Pulsar 2.4.0, you cannot debug with localrun mode programmatically or run Functions as threads.
-
-:::
-
-You can launch your function in the following manner.
-
-```java
-
-FunctionConfig functionConfig = new FunctionConfig();
-functionConfig.setName(functionName);
-functionConfig.setInputs(Collections.singleton(sourceTopic));
-functionConfig.setClassName(ExclamationFunction.class.getName());
-functionConfig.setRuntime(FunctionConfig.Runtime.JAVA);
-functionConfig.setOutput(sinkTopic);
-
-LocalRunner localRunner = LocalRunner.builder().functionConfig(functionConfig).build();
-localRunner.start(true);
-
-```
-
-This way, you can easily debug functions using an IDE. Set breakpoints and manually step through a function to debug with real data.
-
-The following example illustrates how to programmatically launch a function in localrun mode.
-
-```java
-
-public class ExclamationFunction implements Function<String, String> {
-
-    @Override
-    public String process(String s, Context context) throws Exception {
-        return s + "!";
-    }
-
-    public static void main(String[] args) throws Exception {
-        FunctionConfig functionConfig = new FunctionConfig();
-        functionConfig.setName("exclamation");
-        functionConfig.setInputs(Collections.singleton("input"));
-        functionConfig.setClassName(ExclamationFunction.class.getName());
-        functionConfig.setRuntime(FunctionConfig.Runtime.JAVA);
-        functionConfig.setOutput("output");
-
-        LocalRunner localRunner = LocalRunner.builder().functionConfig(functionConfig).build();
-        localRunner.start(false);
-    }
-}
-
-```
-
-To use localrun mode programmatically, add the following dependency.
-
-```xml
-
-<dependency>
-    <groupId>org.apache.pulsar</groupId>
-    <artifactId>pulsar-functions-local-runner</artifactId>
-    <version>${pulsar.version}</version>
-</dependency>
-
-```
-
-For complete code samples, see [here](https://github.com/jerrypeng/pulsar-functions-demos/tree/master/debugging).
-
-:::note
-
-Debugging with localrun mode for Pulsar Functions written in other languages will be supported soon.
-
-:::
-
-## Use log topic
-
-In Pulsar Functions, you can generate log information defined in functions to a specified log topic. You can configure consumers to consume messages from a specified log topic to check the log information.
-
-![Pulsar Functions core programming model](/assets/pulsar-functions-overview.png)
-
-**Example**
-
-```java
-
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-import org.slf4j.Logger;
-
-public class LoggingFunction implements Function<String, Void> {
-    @Override
-    public Void process(String input, Context context) {
-        Logger LOG = context.getLogger();
-        String messageId = new String(context.getMessageId());
-
-        if (input.contains("danger")) {
-            LOG.warn("A warning was received in message {}", messageId);
-        } else {
-            LOG.info("Message {} received\nContent: {}", messageId, input);
-        }
-
-        return null;
-    }
-}
-
-```
-
-As shown in the example above, you can get the logger via `context.getLogger()` and assign the logger to the `LOG` variable of `slf4j`, so you can define your desired log information in a function using the `LOG` variable. Meanwhile, you need to specify the topic to which the log information is produced.
-
-**Example**
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --log-topic persistent://public/default/logging-function-logs \
-  # Other function configs
-
-```
-
-The message published to the log topic contains several properties for easier troubleshooting:
-- `loglevel` -- the level of the log message.
-- `fqn` -- the fully qualified name of the function that pushes this log message.
-- `instance` -- the ID of the function instance that pushes this log message.
-
-## Use Functions CLI
-
-With [Pulsar Functions CLI](reference-pulsar-admin.md#functions), you can debug Pulsar Functions with the following subcommands:
-
-* `get`
-* `status`
-* `stats`
-* `list`
-* `trigger`
-
-:::tip
-
-For complete commands of **Pulsar Functions CLI**, see [here](reference-pulsar-admin.md#functions).
-
-:::
-
-### `get`
-
-Get information about a Pulsar Function.
-
-**Usage**
-
-```bash
-
-$ pulsar-admin functions get options
-
-```
-
-**Options**
-
-|Flag|Description
-|---|---
-|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function.
-|`--name`|The name of a Pulsar Function.
-|`--namespace`|The namespace of a Pulsar Function.
-|`--tenant`|The tenant of a Pulsar Function.
- -:::tip - -`--fqfn` consists of `--name`, `--namespace` and `--tenant`, so you can specify either `--fqfn` or `--name`, `--namespace` and `--tenant`. - -::: - -**Example** - -You can specify `--fqfn` to get information about a Pulsar Function. - -```bash - -$ ./bin/pulsar-admin functions get public/default/ExclamationFunctio6 - -``` - -Optionally, you can specify `--name`, `--namespace` and `--tenant` to get information about a Pulsar Function. - -```bash - -$ ./bin/pulsar-admin functions get \ - --tenant public \ - --namespace default \ - --name ExclamationFunctio6 - -``` - -As shown below, the `get` command shows input, output, runtime, and other information about the _ExclamationFunctio6_ function. - -```json - -{ - "tenant": "public", - "namespace": "default", - "name": "ExclamationFunctio6", - "className": "org.example.test.ExclamationFunction", - "inputSpecs": { - "persistent://public/default/my-topic-1": { - "isRegexPattern": false - } - }, - "output": "persistent://public/default/test-1", - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "userConfig": {}, - "runtime": "JAVA", - "autoAck": true, - "parallelism": 1 -} - -``` - -### `status` - -Check the current status of a Pulsar Function. - -**Usage** - -```bash - -$ pulsar-admin functions status options - -``` - -**Options** - -|Flag|Description -|---|--- -|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function. -|`--instance-id`|The instance ID of a Pulsar Function
    If the `--instance-id` is not specified, it gets the IDs of all instances.
    -|`--name`|The name of a Pulsar Function. -|`--namespace`|The namespace of a Pulsar Function. -|`--tenant`|The tenant of a Pulsar Function. - -**Example** - -```bash - -$ ./bin/pulsar-admin functions status \ - --tenant public \ - --namespace default \ - --name ExclamationFunctio6 \ - -``` - -As shown below, the `status` command shows the number of instances, running instances, the instance running under the _ExclamationFunctio6_ function, received messages, successfully processed messages, system exceptions, the average latency and so on. - -```json - -{ - "numInstances" : 1, - "numRunning" : 1, - "instances" : [ { - "instanceId" : 0, - "status" : { - "running" : true, - "error" : "", - "numRestarts" : 0, - "numReceived" : 1, - "numSuccessfullyProcessed" : 1, - "numUserExceptions" : 0, - "latestUserExceptions" : [ ], - "numSystemExceptions" : 0, - "latestSystemExceptions" : [ ], - "averageLatency" : 0.8385, - "lastInvocationTime" : 1557734137987, - "workerId" : "c-standalone-fw-23ccc88ef29b-8080" - } - } ] -} - -``` - -### `stats` - -Get the current stats of a Pulsar Function. - -**Usage** - -```bash - -$ pulsar-admin functions stats options - -``` - -**Options** - -|Flag|Description -|---|--- -|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function. -|`--instance-id`|The instance ID of a Pulsar Function.
    If the `--instance-id` is not specified, it gets the IDs of all instances.
    -|`--name`|The name of a Pulsar Function. -|`--namespace`|The namespace of a Pulsar Function. -|`--tenant`|The tenant of a Pulsar Function. - -**Example** - -```bash - -$ ./bin/pulsar-admin functions stats \ - --tenant public \ - --namespace default \ - --name ExclamationFunctio6 \ - -``` - -The output is shown as follows: - -```json - -{ - "receivedTotal" : 1, - "processedSuccessfullyTotal" : 1, - "systemExceptionsTotal" : 0, - "userExceptionsTotal" : 0, - "avgProcessLatency" : 0.8385, - "1min" : { - "receivedTotal" : 0, - "processedSuccessfullyTotal" : 0, - "systemExceptionsTotal" : 0, - "userExceptionsTotal" : 0, - "avgProcessLatency" : null - }, - "lastInvocation" : 1557734137987, - "instances" : [ { - "instanceId" : 0, - "metrics" : { - "receivedTotal" : 1, - "processedSuccessfullyTotal" : 1, - "systemExceptionsTotal" : 0, - "userExceptionsTotal" : 0, - "avgProcessLatency" : 0.8385, - "1min" : { - "receivedTotal" : 0, - "processedSuccessfullyTotal" : 0, - "systemExceptionsTotal" : 0, - "userExceptionsTotal" : 0, - "avgProcessLatency" : null - }, - "lastInvocation" : 1557734137987, - "userMetrics" : { } - } - } ] -} - -``` - -### `list` - -List all Pulsar Functions running under a specific tenant and namespace. - -**Usage** - -```bash - -$ pulsar-admin functions list options - -``` - -**Options** - -|Flag|Description -|---|--- -|`--namespace`|The namespace of a Pulsar Function. -|`--tenant`|The tenant of a Pulsar Function. - -**Example** - -```bash - -$ ./bin/pulsar-admin functions list \ - --tenant public \ - --namespace default - -``` - -As shown below, the `list` command returns three functions running under the _public_ tenant and the _default_ namespace. - -```text - -ExclamationFunctio1 -ExclamationFunctio2 -ExclamationFunctio3 - -``` - -### `trigger` - -Trigger a specified Pulsar Function with a supplied value. This command simulates the execution process of a Pulsar Function and verifies it. - -**Usage** - -```bash - -$ pulsar-admin functions trigger options - -``` - -**Options** - -|Flag|Description -|---|--- -|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function. -|`--name`|The name of a Pulsar Function. -|`--namespace`|The namespace of a Pulsar Function. -|`--tenant`|The tenant of a Pulsar Function. -|`--topic`|The topic name that a Pulsar Function consumes from. -|`--trigger-file`|The path to a file that contains the data to trigger a Pulsar Function. -|`--trigger-value`|The value to trigger a Pulsar Function. - -**Example** - -```bash - -$ ./bin/pulsar-admin functions trigger \ - --tenant public \ - --namespace default \ - --name ExclamationFunctio6 \ - --topic persistent://public/default/my-topic-1 \ - --trigger-value "hello pulsar functions" - -``` - -As shown below, the `trigger` command returns the following result: - -```text - -This is my function! - -``` - -:::note - -You must specify the [entire topic name](getting-started-pulsar.md#topic-names) when using the `--topic` option. Otherwise, the following error occurs. 
- -```text - -Function in trigger function has unidentified topic -Reason: Function in trigger function has unidentified topic - -``` - -::: - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/functions-deploy.md b/site2/website/versioned_docs/version-2.9.3-deprecated/functions-deploy.md deleted file mode 100644 index 2a0d68d6c623c7..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/functions-deploy.md +++ /dev/null @@ -1,262 +0,0 @@ ---- -id: functions-deploy -title: Deploy Pulsar Functions -sidebar_label: "How-to: Deploy" -original_id: functions-deploy ---- - -## Requirements - -To deploy and manage Pulsar Functions, you need to have a Pulsar cluster running. There are several options for this: - -* You can run a [standalone cluster](getting-started-standalone.md) locally on your own machine. -* You can deploy a Pulsar cluster on [Kubernetes](deploy-kubernetes.md), [Amazon Web Services](deploy-aws.md), [bare metal](deploy-bare-metal.md), [DC/OS](https://dcos.io/), and more. - -If you run a non-[standalone](reference-terminology.md#standalone) cluster, you need to obtain the service URL for the cluster. How you obtain the service URL depends on how you deploy your Pulsar cluster. - -If you want to deploy and trigger Python user-defined functions, you need to install [the pulsar python client](http://pulsar.apache.org/docs/en/client-libraries-python/) on all the machines running [functions workers](functions-worker.md). - -## Command-line interface - -Pulsar Functions are deployed and managed using the [`pulsar-admin functions`](reference-pulsar-admin.md#functions) interface, which contains commands such as [`create`](reference-pulsar-admin.md#functions-create) for deploying functions in [cluster mode](#cluster-mode), [`trigger`](reference-pulsar-admin.md#trigger) for [triggering](#triggering-pulsar-functions) functions, [`list`](reference-pulsar-admin.md#list-2) for listing deployed functions. - -To learn more commands, refer to [`pulsar-admin functions`](reference-pulsar-admin.md#functions). - -### Default arguments - -When managing Pulsar Functions, you need to specify a variety of information about functions, including tenant, namespace, input and output topics, and so on. However, some parameters have default values if you do not specify values for them. The following table lists the default values. - -Parameter | Default -:---------|:------- -Function name | You can specify any value for the class name (except org, library, or similar class names). For example, when you specify the flag `--classname org.example.MyFunction`, the function name is `MyFunction`. -Tenant | Derived from names of the input topics. If the input topics are under the `marketing` tenant, which means the topic names have the form `persistent://marketing/{namespace}/{topicName}`, the tenant is `marketing`. -Namespace | Derived from names of the input topics. If the input topics are under the `asia` namespace under the `marketing` tenant, which means the topic names have the form `persistent://marketing/asia/{topicName}`, then the namespace is `asia`. -Output topic | `{input topic}-{function name}-output`. For example, if an input topic name of a function is `incoming`, and the function name is `exclamation`, then the name of the output topic is `incoming-exclamation-output`. 
-Subscription type | For `at-least-once` and `at-most-once` [processing guarantees](functions-overview.md#processing-guarantees), the [`SHARED`](concepts-messaging.md#shared) mode is applied by default; for `effectively-once` guarantees, the [`FAILOVER`](concepts-messaging.md#failover) mode is applied. -Processing guarantees | [`ATLEAST_ONCE`](functions-overview.md#processing-guarantees) -Pulsar service URL | `pulsar://localhost:6650` - -### Example of default arguments - -Take the `create` command as an example. - -```bash - -$ bin/pulsar-admin functions create \ - --jar my-pulsar-functions.jar \ - --classname org.example.MyFunction \ - --inputs my-function-input-topic1,my-function-input-topic2 - -``` - -The function has default values for the function name (`MyFunction`), tenant (`public`), namespace (`default`), subscription type (`SHARED`), processing guarantees (`ATLEAST_ONCE`), and Pulsar service URL (`pulsar://localhost:6650`). - -## Local run mode - -If you run a Pulsar Function in **local run** mode, it runs on the machine from which you enter the commands (on your laptop, an [AWS EC2](https://aws.amazon.com/ec2/) instance, and so on). The following is a [`localrun`](reference-pulsar-admin.md#localrun) command example. - -```bash - -$ bin/pulsar-admin functions localrun \ - --py myfunc.py \ - --classname myfunc.SomeFunction \ - --inputs persistent://public/default/input-1 \ - --output persistent://public/default/output-1 - -``` - -By default, the function connects to a Pulsar cluster running on the same machine, via a local [broker](reference-terminology.md#broker) service URL of `pulsar://localhost:6650`. If you use local run mode to run a function but connect it to a non-local Pulsar cluster, you can specify a different broker URL using the `--brokerServiceUrl` flag. The following is an example. - -```bash - -$ bin/pulsar-admin functions localrun \ - --broker-service-url pulsar://my-cluster-host:6650 \ - # Other function parameters - -``` - -## Cluster mode - -When you run a Pulsar Function in **cluster** mode, the function code is uploaded to a Pulsar broker and runs *alongside the broker* rather than in your [local environment](#local-run-mode). You can run a function in cluster mode using the [`create`](reference-pulsar-admin.md#create-1) command. - -```bash - -$ bin/pulsar-admin functions create \ - --py myfunc.py \ - --classname myfunc.SomeFunction \ - --inputs persistent://public/default/input-1 \ - --output persistent://public/default/output-1 - -``` - -### Update functions in cluster mode - -You can use the [`update`](reference-pulsar-admin.md#update-1) command to update a Pulsar Function running in cluster mode. The following command updates the function created in the [cluster mode](#cluster-mode) section. - -```bash - -$ bin/pulsar-admin functions update \ - --py myfunc.py \ - --classname myfunc.SomeFunction \ - --inputs persistent://public/default/new-input-topic \ - --output persistent://public/default/new-output-topic - -``` - -### Parallelism - -Pulsar Functions run as processes or threads, which are called **instances**. When you run a Pulsar Function, it runs as a single instance by default. With one localrun command, you can only run a single instance of a function. If you want to run multiple instances, you can use localrun command multiple times. - -When you create a function, you can specify the *parallelism* of a function (the number of instances to run). 
You can set the parallelism factor using the `--parallelism` flag of the [`create`](reference-pulsar-admin.md#functions-create) command. - -```bash - -$ bin/pulsar-admin functions create \ - --parallelism 3 \ - # Other function info - -``` - -You can adjust the parallelism of an already created function using the [`update`](reference-pulsar-admin.md#update-1) interface. - -```bash - -$ bin/pulsar-admin functions update \ - --parallelism 5 \ - # Other function - -``` - -If you specify a function configuration via YAML, use the `parallelism` parameter. The following is a config file example. - -```yaml - -# function-config.yaml -parallelism: 3 -inputs: -- persistent://public/default/input-1 -output: persistent://public/default/output-1 -# other parameters - -``` - -The following is corresponding update command. - -```bash - -$ bin/pulsar-admin functions update \ - --function-config-file function-config.yaml - -``` - -### Function instance resources - -When you run Pulsar Functions in [cluster mode](#cluster-mode), you can specify the resources that are assigned to each function [instance](#parallelism). - -Resource | Specified as | Runtimes -:--------|:----------------|:-------- -CPU | The number of cores | Kubernetes -RAM | The number of bytes | Process, Docker -Disk space | The number of bytes | Docker - -The following function creation command allocates 8 cores, 8 GB of RAM, and 10 GB of disk space to a function. - -```bash - -$ bin/pulsar-admin functions create \ - --jar target/my-functions.jar \ - --classname org.example.functions.MyFunction \ - --cpu 8 \ - --ram 8589934592 \ - --disk 10737418240 - -``` - -> #### Resources are *per instance* -> The resources that you apply to a given Pulsar Function are applied to each instance of the function. For example, if you apply 8 GB of RAM to a function with a parallelism of 5, you are applying 40 GB of RAM for the function in total. Make sure that you take the parallelism (the number of instances) factor into your resource calculations. - -### Use Package management service - -Package management enables version management and simplifies the upgrade and rollback processes for Functions, Sinks, and Sources. When you use the same function, sink and source in different namespaces, you can upload them to a common package management system. - -To use [Package management service](admin-api-packages.md), ensure that the package management service has been enabled in your cluster by setting the following properties in `broker.conf`. - -> Note: Package management service is not enabled by default. - -```yaml - -enablePackagesManagement=true -packagesManagementStorageProvider=org.apache.pulsar.packages.management.storage.bookkeeper.BookKeeperPackagesStorageProvider -packagesReplicas=1 -packagesManagementLedgerRootPath=/ledgers - -``` - -With Package management service enabled, you can upload your function packages by [upload a package](admin-api-packages.md#upload-a-package) to the service and get the [package URL](admin-api-packages.md#package-url). - -When you have a ready to use package URL, you can create the function with package URL by setting `--jar`, `--py`, or `--go` to the package URL with `pulsar-admin functions create`. - -## Trigger Pulsar Functions - -If a Pulsar Function is running in [cluster mode](#cluster-mode), you can **trigger** it at any time using the command line. Triggering a function means that you send a message with a specific value to the function and get the function output (if any) via the command line. 
- -> Triggering a function is to invoke a function by producing a message on one of the input topics. With the [`pulsar-admin functions trigger`](reference-pulsar-admin.md#trigger) command, you can send messages to functions without using the [`pulsar-client`](reference-cli-tools.md#pulsar-client) tool or a language-specific client library. - -To learn how to trigger a function, you can start with Python function that returns a simple string based on the input. - -```python - -# myfunc.py -def process(input): - return "This function has been triggered with a value of {0}".format(input) - -``` - -You can run the function in [local run mode](functions-deploy.md#local-run-mode). - -```bash - -$ bin/pulsar-admin functions create \ - --tenant public \ - --namespace default \ - --name myfunc \ - --py myfunc.py \ - --classname myfunc \ - --inputs persistent://public/default/in \ - --output persistent://public/default/out - -``` - -Then assign a consumer to listen on the output topic for messages from the `myfunc` function with the [`pulsar-client consume`](reference-cli-tools.md#consume) command. - -```bash - -$ bin/pulsar-client consume persistent://public/default/out \ - --subscription-name my-subscription - --num-messages 0 # Listen indefinitely - -``` - -And then you can trigger the function. - -```bash - -$ bin/pulsar-admin functions trigger \ - --tenant public \ - --namespace default \ - --name myfunc \ - --trigger-value "hello world" - -``` - -The consumer listening on the output topic produces something as follows in the log. - -``` - ------ got message ----- -This function has been triggered with a value of hello world - -``` - -> #### Topic info is not required -> In the `trigger` command, you only need to specify basic information about the function (tenant, namespace, and name). To trigger the function, you do not need to know the function input topics. diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/functions-develop.md b/site2/website/versioned_docs/version-2.9.3-deprecated/functions-develop.md deleted file mode 100644 index 2e29aa1c474005..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/functions-develop.md +++ /dev/null @@ -1,1600 +0,0 @@ ---- -id: functions-develop -title: Develop Pulsar Functions -sidebar_label: "How-to: Develop" -original_id: functions-develop ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -You learn how to develop Pulsar Functions with different APIs for Java, Python and Go. - -## Available APIs -In Java and Python, you have two options to write Pulsar Functions. In Go, you can use Pulsar Functions SDK for Go. - -Interface | Description | Use cases -:---------|:------------|:--------- -Language-native interface | No Pulsar-specific libraries or special dependencies required (only core libraries from Java/Python). | Functions that do not require access to the function [context](#context). -Pulsar Function SDK for Java/Python/Go | Pulsar-specific libraries that provide a range of functionality not provided by "native" interfaces. | Functions that require access to the function [context](#context). - -The language-native function, which adds an exclamation point to all incoming strings and publishes the resulting string to a topic, has no external dependencies. The following example is language-native function. 
-
-````mdx-code-block
-<Tabs defaultValue="Java"
-  values={[{"label":"Java","value":"Java"},{"label":"Python","value":"Python"}]}>
-<TabItem value="Java">
-
-```Java
-
-import java.util.function.Function;
-
-public class JavaNativeExclamationFunction implements Function<String, String> {
-    @Override
-    public String apply(String input) {
-        return String.format("%s!", input);
-    }
-}
-
-```
-
-For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/JavaNativeExclamationFunction.java).
-
-</TabItem>
-<TabItem value="Python">
-
-```python
-
-def process(input):
-    return "{}!".format(input)
-
-```
-
-For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/python-examples/native_exclamation_function.py).
-
-:::note
-
-You can write Pulsar Functions in python2 or python3. However, Pulsar only looks for `python` as the interpreter.
-If you're running Pulsar Functions on an Ubuntu system that only supports python3, you might fail to
-start the functions. In this case, you can create a symlink. Your system will fail if
-you subsequently install any other package that depends on Python 2.x. A solution is under development in [Issue 5518](https://github.com/apache/pulsar/issues/5518).
-
-```bash
-
-sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 10
-
-```
-
-:::
-
-</TabItem>
-</Tabs>
-````
-
-The following example uses the Pulsar Functions SDK.
-
-````mdx-code-block
-<Tabs defaultValue="Java"
-  values={[{"label":"Java","value":"Java"},{"label":"Python","value":"Python"},{"label":"Go","value":"Go"}]}>
-<TabItem value="Java">
-
-```Java
-
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-
-public class ExclamationFunction implements Function<String, String> {
-    @Override
-    public String process(String input, Context context) {
-        return String.format("%s!", input);
-    }
-}
-
-```
-
-For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/ExclamationFunction.java).
-
-</TabItem>
-<TabItem value="Python">
-
-```python
-
-from pulsar import Function
-
-class ExclamationFunction(Function):
-    def __init__(self):
-        pass
-
-    def process(self, input, context):
-        return input + '!'
-
-```
-
-For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/python-examples/exclamation_function.py).
-
-</TabItem>
-<TabItem value="Go">
-
-```Go
-
-package main
-
-import (
-    "context"
-    "fmt"
-
-    "github.com/apache/pulsar/pulsar-function-go/pf"
-)
-
-func HandleRequest(ctx context.Context, in []byte) error {
-    fmt.Println(string(in) + "!")
-    return nil
-}
-
-func main() {
-    pf.Start(HandleRequest)
-}
-
-```
-
-For complete code, see [here](https://github.com/apache/pulsar/blob/77cf09eafa4f1626a53a1fe2e65dd25f377c1127/pulsar-function-go/examples/inputFunc/inputFunc.go#L20-L36).
-
-</TabItem>
-</Tabs>
-````
-
-## Schema registry
-
-Pulsar has a built-in schema registry and is bundled with popular schema types, such as Avro, JSON and Protobuf. Pulsar Functions can leverage the existing schema information from input topics and derive the input type. The schema registry applies to the output topic as well.
-
-## SerDe
-
-SerDe stands for **Ser**ialization and **De**serialization. Pulsar Functions uses SerDe when publishing data to and consuming data from Pulsar topics. How SerDe works by default depends on the language you use for a particular function.
-
-````mdx-code-block
-<Tabs defaultValue="Java"
-  values={[{"label":"Java","value":"Java"},{"label":"Python","value":"Python"},{"label":"Go","value":"Go"}]}>
-<TabItem value="Java">
-
-When you write Pulsar Functions in Java, the following basic Java types are built in and supported by default: `String`, `Double`, `Integer`, `Float`, `Long`, `Short`, and `Byte`.
-
-To customize Java types, you need to implement the following interface.
-
-```java
-
-public interface SerDe<T> {
-    T deserialize(byte[] input);
-    byte[] serialize(T input);
-}
-
-```
-
-SerDe works in the following ways in Java Functions.
-- If the input and output topics have schema, Pulsar Functions use the schema for SerDe.
-- If the input or output topics do not exist, Pulsar Functions adopt the following rules to determine SerDe:
-  - If the schema type is specified, Pulsar Functions use the specified schema type.
-  - If SerDe is specified, Pulsar Functions use the specified SerDe, and the schema type for input and output topics is `Byte`.
-  - If neither the schema type nor SerDe is specified, Pulsar Functions use the built-in SerDe. For non-primitive schema types, the built-in SerDe serializes and deserializes objects in the `JSON` format.
-
-</TabItem>
-<TabItem value="Python">
-
-In Python, the default SerDe is identity, meaning that the type is serialized as whatever type the producer function returns.
-
-You can specify the SerDe when [creating](functions-deploy.md#cluster-mode) or [running](functions-deploy.md#local-run-mode) functions.
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --tenant public \
-  --namespace default \
-  --name my_function \
-  --py my_function.py \
-  --classname my_function.MyFunction \
-  --custom-serde-inputs '{"input-topic-1":"Serde1","input-topic-2":"Serde2"}' \
-  --output-serde-classname Serde3 \
-  --output output-topic-1
-
-```
-
-This case contains two input topics: `input-topic-1` and `input-topic-2`, each of which is mapped to a different SerDe class (the map must be specified as a JSON string). The output topic, `output-topic-1`, uses the `Serde3` class for SerDe. At the moment, all Pulsar Functions logic, including the processing function and SerDe classes, must be contained within a single Python file.
-
-When using Pulsar Functions for Python, you have three SerDe options:
-
-1. You can use the [`IdentitySerde`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L70), which leaves the data unchanged. The `IdentitySerDe` is the **default**. Creating or running a function without explicitly specifying SerDe means that this option is used.
-2. You can use the [`PickleSerDe`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L62), which uses Python [`pickle`](https://docs.python.org/3/library/pickle.html) for SerDe.
-3. You can create a custom SerDe class by implementing the baseline [`SerDe`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L50) class, which has just two methods: [`serialize`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L53) for converting the object into bytes, and [`deserialize`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L58) for converting bytes into an object of the required application-specific type.
-
-The table below shows when you should use each SerDe.
-
-SerDe option | When to use
-:------------|:-----------
-`IdentitySerde` | When you work with simple types like strings, Booleans, integers.
-`PickleSerDe` | When you work with complex, application-specific types and are comfortable with the "best effort" approach of `pickle`.
-Custom SerDe | When you require explicit control over SerDe, potentially for performance or data compatibility purposes.
-
-</TabItem>
-<TabItem value="Go">
-
-Currently, the feature is not available in Go.
- - - - -```` - -### Example -Imagine that you're writing Pulsar Functions that are processing tweet objects, you can refer to the following example of `Tweet` class. - -````mdx-code-block - - - -```java - -public class Tweet { - private String username; - private String tweetContent; - - public Tweet(String username, String tweetContent) { - this.username = username; - this.tweetContent = tweetContent; - } - - // Standard setters and getters -} - -``` - -To pass `Tweet` objects directly between Pulsar Functions, you need to provide a custom SerDe class. In the example below, `Tweet` objects are basically strings in which the username and tweet content are separated by a `|`. - -```java - -package com.example.serde; - -import org.apache.pulsar.functions.api.SerDe; - -import java.util.regex.Pattern; - -public class TweetSerde implements SerDe { - public Tweet deserialize(byte[] input) { - String s = new String(input); - String[] fields = s.split(Pattern.quote("|")); - return new Tweet(fields[0], fields[1]); - } - - public byte[] serialize(Tweet input) { - return "%s|%s".format(input.getUsername(), input.getTweetContent()).getBytes(); - } -} - -``` - -To apply this customized SerDe to a particular Pulsar Function, you need to: - -* Package the `Tweet` and `TweetSerde` classes into a JAR. -* Specify a path to the JAR and SerDe class name when deploying the function. - -The following is an example of [`create`](reference-pulsar-admin.md#create-1) operation. - -```bash - -$ bin/pulsar-admin functions create \ - --jar /path/to/your.jar \ - --output-serde-classname com.example.serde.TweetSerde \ - # Other function attributes - -``` - -> #### Custom SerDe classes must be packaged with your function JARs -> Pulsar does not store your custom SerDe classes separately from your Pulsar Functions. So you need to include your SerDe classes in your function JARs. If not, Pulsar returns an error. - - - - -```python - -class Tweet(object): - def __init__(self, username, tweet_content): - self.username = username - self.tweet_content = tweet_content - -``` - -In order to use this class in Pulsar Functions, you have two options: - -1. You can specify `PickleSerDe`, which applies the [`pickle`](https://docs.python.org/3/library/pickle.html) library SerDe. -2. You can create your own SerDe class. The following is an example. - - ```python - - from pulsar import SerDe - - class TweetSerDe(SerDe): - - def serialize(self, input): - return bytes("{0}|{1}".format(input.username, input.tweet_content)) - - def deserialize(self, input_bytes): - tweet_components = str(input_bytes).split('|') - return Tweet(tweet_components[0], tweet_componentsp[1]) - - ``` - -For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/python-examples/custom_object_function.py). - - - - -```` - -In both languages, however, you can write custom SerDe logic for more complex, application-specific types. - -## Context -Java, Python and Go SDKs provide access to a **context object** that can be used by a function. This context object provides a wide variety of information and functionality to the function. - -* The name and ID of a Pulsar Function. -* The message ID of each message. Each Pulsar message is automatically assigned with an ID. -* The key, event time, properties and partition key of each message. -* The name of the topic to which the message is sent. -* The names of all input topics as well as the output topic associated with the function. -* The name of the class used for [SerDe](#serde). 
-* The [tenant](reference-terminology.md#tenant) and namespace associated with the function.
-* The ID of the Pulsar Functions instance running the function.
-* The version of the function.
-* The [logger object](functions-develop.md#logger) used by the function, which can be used to create function log messages.
-* Access to arbitrary [user configuration](#user-config) values supplied via the CLI.
-* An interface for recording [metrics](#metrics).
-* An interface for storing and retrieving state in [state storage](#state-storage).
-* A function to publish new messages onto arbitrary topics.
-* A function to ack the message being processed (if auto-ack is disabled).
-* (Java) Get the Pulsar admin client.
-
-````mdx-code-block
-<Tabs defaultValue="Java"
-  values={[{"label":"Java","value":"Java"},{"label":"Python","value":"Python"},{"label":"Go","value":"Go"}]}>
-<TabItem value="Java">
-
-The [Context](https://github.com/apache/pulsar/blob/master/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/Context.java) interface provides a number of methods that you can use to access the function [context](#context). The various method signatures for the `Context` interface are listed as follows.
-
-```java
-
-public interface Context {
-    Record<?> getCurrentRecord();
-    Collection<String> getInputTopics();
-    String getOutputTopic();
-    String getOutputSchemaType();
-    String getTenant();
-    String getNamespace();
-    String getFunctionName();
-    String getFunctionId();
-    String getInstanceId();
-    String getFunctionVersion();
-    Logger getLogger();
-    void incrCounter(String key, long amount);
-    void incrCounterAsync(String key, long amount);
-    long getCounter(String key);
-    long getCounterAsync(String key);
-    void putState(String key, ByteBuffer value);
-    void putStateAsync(String key, ByteBuffer value);
-    void deleteState(String key);
-    ByteBuffer getState(String key);
-    ByteBuffer getStateAsync(String key);
-    Map<String, Object> getUserConfigMap();
-    Optional<Object> getUserConfigValue(String key);
-    Object getUserConfigValueOrDefault(String key, Object defaultValue);
-    void recordMetric(String metricName, double value);
-    <O> CompletableFuture<Void> publish(String topicName, O object, String schemaOrSerdeClassName);
-    <O> CompletableFuture<Void> publish(String topicName, O object);
-    <O> TypedMessageBuilder<O> newOutputMessage(String topicName, Schema<O> schema) throws PulsarClientException;
-    <O> ConsumerBuilder<O> newConsumerBuilder(Schema<O> schema) throws PulsarClientException;
-    PulsarAdmin getPulsarAdmin();
-    PulsarAdmin getPulsarAdmin(String clusterName);
-}
-
-```
-
-The following example uses several methods available via the `Context` object.
-
-```java
-
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-import org.slf4j.Logger;
-
-import java.util.stream.Collectors;
-
-public class ContextFunction implements Function<String, Void> {
-    public Void process(String input, Context context) {
-        Logger LOG = context.getLogger();
-        String inputTopics = context.getInputTopics().stream().collect(Collectors.joining(", "));
-        String functionName = context.getFunctionName();
-
-        String logMessage = String.format("A message with a value of \"%s\" has arrived on one of the following topics: %s\n",
-                input,
-                inputTopics);
-
-        LOG.info(logMessage);
-
-        String metricName = String.format("function-%s-messages-received", functionName);
-        context.recordMetric(metricName, 1);
-
-        return null;
-    }
-}
-
-```
-
-</TabItem>
-<TabItem value="Python">
-
-```python
-
-class ContextImpl(pulsar.Context):
-  def get_message_id(self):
-    ...
-  def get_message_key(self):
-    ...
-  def get_message_eventtime(self):
-    ...
-  def get_message_properties(self):
-    ...
-  def get_current_message_topic_name(self):
-    ...
-  def get_partition_key(self):
-    ...
-  def get_function_name(self):
-    ...
-  def get_function_tenant(self):
-    ...
-  def get_function_namespace(self):
-    ...
-  def get_function_id(self):
-    ...
-  def get_instance_id(self):
-    ...
-  def get_function_version(self):
-    ...
-  def get_logger(self):
-    ...
-  def get_user_config_value(self, key):
-    ...
-  def get_user_config_map(self):
-    ...
-  def record_metric(self, metric_name, metric_value):
-    ...
-  def get_input_topics(self):
-    ...
-  def get_output_topic(self):
-    ...
-  def get_output_serde_class_name(self):
-    ...
-  def publish(self, topic_name, message, serde_class_name="serde.IdentitySerDe",
-              properties=None, compression_type=None, callback=None, message_conf=None):
-    ...
-  def ack(self, msgid, topic):
-    ...
-  def get_and_reset_metrics(self):
-    ...
-  def reset_metrics(self):
-    ...
-  def get_metrics(self):
-    ...
-  def incr_counter(self, key, amount):
-    ...
-  def get_counter(self, key):
-    ...
-  def del_counter(self, key):
-    ...
-  def put_state(self, key, value):
-    ...
-  def get_state(self, key):
-    ...
-
-```
-
-
-
-
-```go
-
-func (c *FunctionContext) GetInstanceID() int {
-	return c.instanceConf.instanceID
-}
-
-func (c *FunctionContext) GetInputTopics() []string {
-	return c.inputTopics
-}
-
-func (c *FunctionContext) GetOutputTopic() string {
-	return c.instanceConf.funcDetails.GetSink().Topic
-}
-
-func (c *FunctionContext) GetFuncTenant() string {
-	return c.instanceConf.funcDetails.Tenant
-}
-
-func (c *FunctionContext) GetFuncName() string {
-	return c.instanceConf.funcDetails.Name
-}
-
-func (c *FunctionContext) GetFuncNamespace() string {
-	return c.instanceConf.funcDetails.Namespace
-}
-
-func (c *FunctionContext) GetFuncID() string {
-	return c.instanceConf.funcID
-}
-
-func (c *FunctionContext) GetFuncVersion() string {
-	return c.instanceConf.funcVersion
-}
-
-func (c *FunctionContext) GetUserConfValue(key string) interface{} {
-	return c.userConfigs[key]
-}
-
-func (c *FunctionContext) GetUserConfMap() map[string]interface{} {
-	return c.userConfigs
-}
-
-func (c *FunctionContext) SetCurrentRecord(record pulsar.Message) {
-	c.record = record
-}
-
-func (c *FunctionContext) GetCurrentRecord() pulsar.Message {
-	return c.record
-}
-
-func (c *FunctionContext) NewOutputMessage(topic string) pulsar.Producer {
-	return c.outputMessage(topic)
-}
-
-```
-
-The following example uses several methods available via the `Context` object.
-
-```go
-
-import (
-	"context"
-	"fmt"
-
-	"github.com/apache/pulsar/pulsar-function-go/pf"
-)
-
-func contextFunc(ctx context.Context) {
-	if fc, ok := pf.FromContext(ctx); ok {
-		fmt.Printf("function ID is:%s, ", fc.GetFuncID())
-		fmt.Printf("function version is:%s\n", fc.GetFuncVersion())
-	}
-}
-
-```
-
-For complete code, see [here](https://github.com/apache/pulsar/blob/77cf09eafa4f1626a53a1fe2e65dd25f377c1127/pulsar-function-go/examples/contextFunc/contextFunc.go#L29-L34).
-
-
-
-
-````
-
-### User config
-When you run or update Pulsar Functions created using the SDK, you can pass arbitrary key/value pairs to them on the command line with the `--user-config` flag. Key/values must be specified as JSON. The following function creation command passes a user-configured key/value pair to a function.
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --name word-filter \
-  # Other function configs
-  --user-config '{"forbidden-word":"rosebud"}'
-
-```
-
-````mdx-code-block
-
-
-
-The Java SDK [`Context`](#context) object enables you to access key/value pairs provided to Pulsar Functions via the command line (as JSON).
-The following example passes a key/value pair.
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  # Other function configs
-  --user-config '{"word-of-the-day":"verdure"}'
-
-```
-
-To access that value in a Java function:
-
-```java
-
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-import org.slf4j.Logger;
-
-import java.util.Optional;
-
-public class UserConfigFunction implements Function<String, Void> {
-    @Override
-    public Void process(String input, Context context) {
-        Logger LOG = context.getLogger();
-        Optional<Object> wotd = context.getUserConfigValue("word-of-the-day");
-        if (wotd.isPresent()) {
-            LOG.info("The word of the day is {}", wotd.get());
-        } else {
-            LOG.warn("No word of the day provided");
-        }
-        return null;
-    }
-}
-
-```
-
-The `UserConfigFunction` function will log the string `"The word of the day is verdure"` every time the function is invoked (which means every time a message arrives). The `word-of-the-day` user config will be changed only when the function is updated with a new config value via the command line.
-
-You can also access the entire user config map or set a default value in case no value is present:
-
-```java
-
-// Get the whole config map
-Map<String, Object> allConfigs = context.getUserConfigMap();
-
-// Get value or resort to default
-String wotd = (String) context.getUserConfigValueOrDefault("word-of-the-day", "perspicacious");
-
-```
-
-> For all key/value pairs passed to Java functions, both the key *and* the value are `String`. To set the value to be a different type, you need to deserialize from the `String` type.
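-For example, a numeric option still arrives as a `String` and must be parsed inside the function. The following is a minimal sketch (the `max-length` key, the default of `140`, and the class name are illustrative, not part of the API):
-
-```java
-
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-
-public class MaxLengthFunction implements Function<String, String> {
-    @Override
-    public String process(String input, Context context) {
-        // User config values arrive as strings, so parse the type you need.
-        String raw = (String) context.getUserConfigValueOrDefault("max-length", "140");
-        int maxLength = Integer.parseInt(raw);
-        // Truncate the message to the configured maximum length.
-        return input.length() > maxLength ? input.substring(0, maxLength) : input;
-    }
-}
-
-```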
-
-
-
-In a Python function, you can access the configuration value like this.
-
-```python
-
-from pulsar import Function
-
-class WordFilter(Function):
-    def process(self, input, context):
-        forbidden_word = context.get_user_config_map()["forbidden-word"]
-
-        # Don't publish the message if it contains the user-supplied
-        # forbidden word
-        if forbidden_word in input:
-            pass
-        # Otherwise publish the message
-        else:
-            return input
-
-```
-
-The Python SDK [`Context`](#context) object enables you to access key/value pairs provided to Pulsar Functions via the command line (as JSON). The following example passes a key/value pair.
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  # Other function configs \
-  --user-config '{"word-of-the-day":"verdure"}'
-
-```
-
-To access that value in a Python function:
-
-```python
-
-from pulsar import Function
-
-class UserConfigFunction(Function):
-    def process(self, input, context):
-        logger = context.get_logger()
-        wotd = context.get_user_config_value('word-of-the-day')
-        if wotd is None:
-            logger.warn('No word of the day provided')
-        else:
-            logger.info("The word of the day is {0}".format(wotd))
-
-```
-
-
-
-
-The Go SDK [`Context`](#context) object enables you to access key/value pairs provided to Pulsar Functions via the command line (as JSON). The following example passes a key/value pair.
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --go path/to/go/binary \
-  --user-config '{"word-of-the-day":"lackadaisical"}'
-
-```
-
-To access that value in a Go function:
-
-```go
-
-func contextFunc(ctx context.Context) {
-	fc, ok := pf.FromContext(ctx)
-	if !ok {
-		logutil.Fatal("Function context is not defined")
-	}
-
-	wotd := fc.GetUserConfValue("word-of-the-day")
-
-	if wotd == nil {
-		logutil.Warn("The word of the day is empty")
-	} else {
-		logutil.Infof("The word of the day is %s", wotd.(string))
-	}
-}
-
-```
-
-
-
-
-````
-
-### Logger
-
-````mdx-code-block
-
-
-
-Pulsar Functions that use the Java SDK have access to an [SLF4j](https://www.slf4j.org/) [`Logger`](https://www.slf4j.org/api/org/slf4j/Logger.html) object that can be used to produce logs at the chosen log level. The following example logs either a `WARNING`- or `INFO`-level log based on whether the incoming string contains the word `danger`.
-
-```java
-
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-import org.slf4j.Logger;
-
-public class LoggingFunction implements Function<String, Void> {
-    @Override
-    public Void process(String input, Context context) {
-        Logger LOG = context.getLogger();
-        String messageId = context.getCurrentRecord().getMessage()
-                .map(msg -> msg.getMessageId().toString())
-                .orElse("unknown");
-
-        if (input.contains("danger")) {
-            LOG.warn("A warning was received in message {}", messageId);
-        } else {
-            LOG.info("Message {} received\nContent: {}", messageId, input);
-        }
-
-        return null;
-    }
-}
-
-```
-
-If you want your function to produce logs, you need to specify a log topic when creating or running the function. The following is an example.
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --jar my-functions.jar \
-  --classname my.package.LoggingFunction \
-  --log-topic persistent://public/default/logging-function-logs \
-  # Other function configs
-
-```
-
-All logs produced by `LoggingFunction` above can be accessed via the `persistent://public/default/logging-function-logs` topic.
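-To read those logs, you can attach any consumer to that topic, for example the `pulsar-client` CLI (a sketch; the subscription name is arbitrary and `-n 0` keeps the consumer running):
-
-```bash
-
-$ bin/pulsar-client consume \
-  persistent://public/default/logging-function-logs \
-  -s log-inspection \
-  -n 0
-
-```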
-#### Customize Function log level
-Additionally, you can use the XML file, `functions_log4j2.xml`, to customize the function log level.
-To customize the function log level, create or update `functions_log4j2.xml` in your Pulsar conf directory (for example, `/etc/pulsar/` on bare-metal, or `/pulsar/conf` on Kubernetes) to contain contents such as:
-
-```xml
-
-<Configuration>
-    <name>pulsar-functions-instance</name>
-    <monitorInterval>30</monitorInterval>
-    <Properties>
-        <Property>
-            <name>pulsar.log.appender</name>
-            <value>RollingFile</value>
-        </Property>
-        <Property>
-            <name>pulsar.log.level</name>
-            <value>debug</value>
-        </Property>
-        <Property>
-            <name>bk.log.level</name>
-            <value>debug</value>
-        </Property>
-    </Properties>
-    <Appenders>
-        <Console>
-            <name>Console</name>
-            <target>SYSTEM_OUT</target>
-            <PatternLayout>
-                <Pattern>%d{ISO8601_OFFSET_DATE_TIME_HHMM} [%t] %-5level %logger{36} - %msg%n</Pattern>
-            </PatternLayout>
-        </Console>
-        <RollingFile>
-            <name>RollingFile</name>
-            <fileName>${sys:pulsar.function.log.dir}/${sys:pulsar.function.log.file}.log</fileName>
-            <filePattern>${sys:pulsar.function.log.dir}/${sys:pulsar.function.log.file}-%d{MM-dd-yyyy}-%i.log.gz</filePattern>
-            <immediateFlush>true</immediateFlush>
-            <PatternLayout>
-                <Pattern>%d{ISO8601_OFFSET_DATE_TIME_HHMM} [%t] %-5level %logger{36} - %msg%n</Pattern>
-            </PatternLayout>
-            <Policies>
-                <TimeBasedTriggeringPolicy>
-                    <interval>1</interval>
-                    <modulate>true</modulate>
-                </TimeBasedTriggeringPolicy>
-                <SizeBasedTriggeringPolicy>
-                    <size>1 GB</size>
-                </SizeBasedTriggeringPolicy>
-                <CronTriggeringPolicy>
-                    <schedule>0 0 0 * * ?</schedule>
-                </CronTriggeringPolicy>
-            </Policies>
-            <DefaultRolloverStrategy>
-                <Delete>
-                    <basePath>${sys:pulsar.function.log.dir}</basePath>
-                    <maxDepth>2</maxDepth>
-                    <IfFileName>
-                        <glob>*/${sys:pulsar.function.log.file}*log.gz</glob>
-                    </IfFileName>
-                    <IfLastModified>
-                        <age>30d</age>
-                    </IfLastModified>
-                </Delete>
-            </DefaultRolloverStrategy>
-        </RollingFile>
-        <RollingFile>
-            <name>BkRollingFile</name>
-            <fileName>${sys:pulsar.function.log.dir}/${sys:pulsar.function.log.file}.bk</fileName>
-            <filePattern>${sys:pulsar.function.log.dir}/${sys:pulsar.function.log.file}.bk-%d{MM-dd-yyyy}-%i.log.gz</filePattern>
-            <immediateFlush>true</immediateFlush>
-            <PatternLayout>
-                <Pattern>%d{ISO8601_OFFSET_DATE_TIME_HHMM} [%t] %-5level %logger{36} - %msg%n</Pattern>
-            </PatternLayout>
-            <Policies>
-                <TimeBasedTriggeringPolicy>
-                    <interval>1</interval>
-                    <modulate>true</modulate>
-                </TimeBasedTriggeringPolicy>
-                <SizeBasedTriggeringPolicy>
-                    <size>1 GB</size>
-                </SizeBasedTriggeringPolicy>
-                <CronTriggeringPolicy>
-                    <schedule>0 0 0 * * ?</schedule>
-                </CronTriggeringPolicy>
-            </Policies>
-            <DefaultRolloverStrategy>
-                <Delete>
-                    <basePath>${sys:pulsar.function.log.dir}</basePath>
-                    <maxDepth>2</maxDepth>
-                    <IfFileName>
-                        <glob>*/${sys:pulsar.function.log.file}.bk*log.gz</glob>
-                    </IfFileName>
-                    <IfLastModified>
-                        <age>30d</age>
-                    </IfLastModified>
-                </Delete>
-            </DefaultRolloverStrategy>
-        </RollingFile>
-    </Appenders>
-    <Loggers>
-        <Logger>
-            <name>org.apache.pulsar.functions.runtime.shaded.org.apache.bookkeeper</name>
-            <level>${sys:bk.log.level}</level>
-            <additivity>false</additivity>
-            <AppenderRef>
-                <ref>BkRollingFile</ref>
-            </AppenderRef>
-        </Logger>
-        <Root>
-            <level>${sys:pulsar.log.level}</level>
-            <AppenderRef>
-                <ref>${sys:pulsar.log.appender}</ref>
-                <level>${sys:pulsar.log.level}</level>
-            </AppenderRef>
-        </Root>
-    </Loggers>
-</Configuration>
-
-```
-
-The properties set like:
-
-```xml
-
-<Property>
-    <name>pulsar.log.level</name>
-    <value>debug</value>
-</Property>
-
-```
-
-propagate to places where they are referenced, such as:
-
-```xml
-
-<Root>
-    <level>${sys:pulsar.log.level}</level>
-    <AppenderRef>
-        <ref>${sys:pulsar.log.appender}</ref>
-        <level>${sys:pulsar.log.level}</level>
-    </AppenderRef>
-</Root>
-
-```
-
-In the above example, debug level logging would be applied to ALL function logs.
-This may be more verbose than you desire. To be more selective, you can apply different log levels to different classes or modules. For example:
-
-```xml
-
-<Logger>
-    <name>com.example.module</name>
-    <level>info</level>
-    <additivity>false</additivity>
-    <AppenderRef>
-        <ref>${sys:pulsar.log.appender}</ref>
-    </AppenderRef>
-</Logger>
-
-```
-
-You can be more specific as well, such as applying a more verbose log level to a class in the module, such as:
-
-```xml
-
-<Logger>
-    <name>com.example.module.className</name>
-    <level>debug</level>
-    <additivity>false</additivity>
-    <AppenderRef>
-        <ref>Console</ref>
-    </AppenderRef>
-</Logger>
-
-```
-
-Each `<AppenderRef>` entry allows you to output the log to a target specified in the definition of the Appender.
-
-Additivity pertains to whether log messages will be duplicated if multiple `<Logger>` entries overlap.
-To disable additivity, specify
-
-```xml
-
-<additivity>false</additivity>
-
-```
-
-as shown in examples above. Disabling additivity prevents duplication of log messages when one or more `<Logger>` entries contain classes or modules that overlap.
-
-The `<Appender>` is defined in the `<Appenders>` section, such as:
-
-```xml
-
-<Console>
-    <name>Console</name>
-    <target>SYSTEM_OUT</target>
-    <PatternLayout>
-        <Pattern>%d{ISO8601_OFFSET_DATE_TIME_HHMM} [%t] %-5level %logger{36} - %msg%n</Pattern>
-    </PatternLayout>
-</Console>
-
-```
-
-
-
-
-Pulsar Functions that use the Python SDK have access to a logging object that can be used to produce logs at the chosen log level. The following example function logs either a `WARNING`- or `INFO`-level log based on whether the incoming string contains the word `danger`.
-
-```python
-
-from pulsar import Function
-
-class LoggingFunction(Function):
-    def process(self, input, context):
-        logger = context.get_logger()
-        msg_id = context.get_message_id()
-        if 'danger' in input:
-            logger.warn("A warning was received in message {0}".format(context.get_message_id()))
-        else:
-            logger.info("Message {0} received\nContent: {1}".format(msg_id, input))
-
-```
-
-If you want your function to produce logs on a Pulsar topic, you need to specify a **log topic** when creating or running the function. The following is an example.
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --py logging_function.py \
-  --classname logging_function.LoggingFunction \
-  --log-topic logging-function-logs \
-  # Other function configs
-
-```
-
-All logs produced by `LoggingFunction` above can be accessed via the `logging-function-logs` topic.
-Additionally, you can specify the function log level through the `functions_log4j2.xml` file as described in [Customize Function log level](#customize-function-log-level).
-
-
-
-The following Go Function example shows different log levels based on the function input.
-
-```go
-
-import (
-	"context"
-
-	"github.com/apache/pulsar/pulsar-function-go/pf"
-
-	log "github.com/apache/pulsar/pulsar-function-go/logutil"
-)
-
-func loggerFunc(ctx context.Context, input []byte) {
-	if len(input) <= 100 {
-		log.Infof("This input has a length of: %d", len(input))
-	} else {
-		log.Warnf("This input is getting too long! It has {%d} characters", len(input))
-	}
-}
-
-func main() {
-	pf.Start(loggerFunc)
-}
-
-```
-
-When you use `logTopic`-related functionality in a Go function, import `github.com/apache/pulsar/pulsar-function-go/logutil`; you do not have to use the `getLogger()` context object.
-
-Additionally, you can specify the function log level through the `functions_log4j2.xml` file, as described here: [Customize Function log level](#customize-function-log-level)
-
-
-
-
-````
-
-### Pulsar admin
-
-Pulsar Functions that use the Java SDK have access to the Pulsar admin client, which lets a function make admin API calls against the current Pulsar cluster or external clusters (if `external-pulsars` is provided).
-
-````mdx-code-block
-
-
-
-Below is an example of how to use the Pulsar admin client exposed from the Function `context`.
-
-```java
-
-import org.apache.pulsar.client.admin.PulsarAdmin;
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-
-/**
- * In this particular example, for every input message,
- * the function resets the cursor of the current function's subscription to a
- * specified timestamp.
- */
-public class CursorManagementFunction implements Function<String, String> {
-
-    @Override
-    public String process(String input, Context context) throws Exception {
-        PulsarAdmin adminClient = context.getPulsarAdmin();
-        if (adminClient != null) {
-            String topic = context.getCurrentRecord().getTopicName().isPresent() ?
-                    context.getCurrentRecord().getTopicName().get() : null;
-            String subName = context.getTenant() + "/" + context.getNamespace() + "/" + context.getFunctionName();
-            if (topic != null) {
-                // 1578188166 below is a random-pick timestamp
-                adminClient.topics().resetCursor(topic, subName, 1578188166);
-                return "reset cursor successfully";
-            }
-        }
-        return null;
-    }
-}
-
-```
-
-If you want your function to get access to the Pulsar admin client, you need to enable this feature by setting `exposeAdminClientEnabled=true` in the `functions_worker.yml` file. You can test whether this feature is enabled or not using the command `pulsar-admin functions localrun` with the flag `--web-service-url`.
-
-```bash
-
-$ bin/pulsar-admin functions localrun \
-  --jar my-functions.jar \
-  --classname my.package.CursorManagementFunction \
-  --web-service-url http://pulsar-web-service:8080 \
-  # Other function configs
-
-```
-
-
-
-
-````
-
-## Metrics
-
-Pulsar Functions allow you to easily deploy and manage processing functions that consume messages from and publish messages to Pulsar topics. It is important to ensure that the running functions are healthy at any time. Pulsar Functions can publish arbitrary metrics to the metrics interface, which can be queried.
-
-:::note
-
-If a Pulsar Function uses the language-native interface for Java or Python, that function is not able to publish metrics and stats to Pulsar.
-
-:::
-
-You can monitor Pulsar Functions that have been deployed with the following methods:
-
-- Check the metrics provided by Pulsar.
-
-  Pulsar Functions expose the metrics that can be collected and used for monitoring the health of **Java, Python, and Go** functions. You can check the metrics by following the [monitoring](deploy-monitoring.md) guide.
-
-  For the complete list of the function metrics, see [here](reference-metrics.md#pulsar-functions).
-
-- Set and check your customized metrics.
-
-  In addition to the metrics provided by Pulsar, Pulsar allows you to customize metrics for **Java and Python** functions.
-  Function workers automatically collect user-defined metrics and expose them to Prometheus, and you can check them in Grafana.
-
-Here are examples of how to customize metrics for Java and Python functions.
-
-````mdx-code-block
-
-
-
-You can record metrics using the [`Context`](#context) object on a per-key basis. For example, you can set a metric for the `process-count` key and a different metric for the `elevens-count` key every time the function processes a message.
-
-```java
-
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-
-public class MetricRecorderFunction implements Function<Integer, Void> {
-    @Override
-    public Void process(Integer input, Context context) {
-        // Records the metric 1 every time a message arrives
-        context.recordMetric("hit-count", 1);
-
-        // Records the metric only if the arriving number equals 11
-        if (input == 11) {
-            context.recordMetric("elevens-count", 1);
-        }
-
-        return null;
-    }
-}
-
-```
-
-
-
-
-You can record metrics using the [`Context`](#context) object on a per-key basis. For example, you can set a metric for the `process-count` key and a different metric for the `elevens-count` key every time the function processes a message. The following is an example.
-
-```python
-
-from pulsar import Function
-
-class MetricRecorderFunction(Function):
-    def process(self, input, context):
-        context.record_metric('hit-count', 1)
-
-        if input == 11:
-            context.record_metric('elevens-count', 1)
-
-```
-
-
-
-
-Currently, the feature is not available in Go.
-
-
-
-
-````
-
-## Security
-
-If you want to enable security on Pulsar Functions, first enable security on [Functions Workers](functions-worker.md). For more details, refer to [Security settings](functions-worker.md#security-settings).
-
-Pulsar Functions support the following secrets providers:
-
-- ClearTextSecretsProvider
-- EnvironmentBasedSecretsProvider
-
-> Pulsar Functions use ClearTextSecretsProvider by default.
-
-At the same time, Pulsar Functions provides two interfaces, **SecretsProvider** and **SecretsProviderConfigurator**, allowing users to customize the secrets provider.
-
-````mdx-code-block
-
-
-
-You can get a secret using the [`Context`](#context) object. The following is an example:
-
-```java
-
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-import org.slf4j.Logger;
-
-public class GetSecretProviderFunction implements Function<String, Void> {
-
-    @Override
-    public Void process(String input, Context context) throws Exception {
-        Logger LOG = context.getLogger();
-        String secret = context.getSecret(input);
-
-        if (!secret.isEmpty()) {
-            LOG.info("The secret is {}", secret);
-        } else {
-            LOG.warn("No secret provided");
-        }
-
-        return null;
-    }
-}
-
-```
-
-
-
-
-You can get a secret using the [`Context`](#context) object. The following is an example:
-
-```python
-
-from pulsar import Function
-
-class GetSecretProviderFunction(Function):
-    def process(self, input, context):
-        logger = context.get_logger()
-        secret = context.get_secret(input)
-        if secret is None:
-            logger.warn('No secret provided')
-        else:
-            logger.info("The secret is {0}".format(secret))
-
-```
-
-
-
-
-Currently, the feature is not available in Go.
-
-
-
-
-````
-
-## State storage
-Pulsar Functions use [Apache BookKeeper](https://bookkeeper.apache.org) as a state storage interface.
-A Pulsar installation, including the local standalone installation, includes deployment of BookKeeper bookies.
-
-Since the Pulsar 2.1.0 release, Pulsar integrates with Apache BookKeeper [table service](https://docs.google.com/document/d/155xAwWv5IdOitHh1NVMEwCMGgB28M3FyMiQSxEpjE-Y/edit#heading=h.56rbh52koe3f) to store the `State` for functions. For example, a `WordCount` function can store its `counters` state into BookKeeper table service via Pulsar Functions State API.
-
-States are key-value pairs, where the key is a string and the value is arbitrary binary data - counters are stored as 64-bit big-endian binary values. Keys are scoped to an individual Pulsar Function, and shared between instances of that function.
-
-You can access states within Pulsar Java Functions using the `putState`, `putStateAsync`, `getState`, `getStateAsync`, `incrCounter`, `incrCounterAsync`, `getCounter`, `getCounterAsync` and `deleteState` calls on the context object. You can access states within Pulsar Python Functions using the `putState`, `getState`, `incrCounter`, `getCounter` and `deleteState` calls on the context object. You can also manage states using the [querystate](#query-state) and [putstate](#putstate) options to `pulsar-admin functions`.
-
-:::note
-
-State storage is not available in Go.
-
-:::
-
-### API
-
-````mdx-code-block
-
-
-
-Currently, Pulsar Functions expose the following APIs for mutating and accessing State. These APIs are available in the [Context](functions-develop.md#context) object when you are using Java SDK functions.
-
-#### incrCounter
-
-```java
-
-    /**
-     * Increment the builtin distributed counter referred by key
-     * @param key The name of the key
-     * @param amount The amount to be incremented
-     */
-    void incrCounter(String key, long amount);
-
-```
-
-The application can use `incrCounter` to change the counter of a given `key` by the given `amount`.
-
-#### incrCounterAsync
-
-```java
-
-    /**
-     * Increment the builtin distributed counter referred by key
-     * but don't wait for the completion of the increment operation
-     *
-     * @param key The name of the key
-     * @param amount The amount to be incremented
-     */
-    CompletableFuture<Void> incrCounterAsync(String key, long amount);
-
-```
-
-The application can use `incrCounterAsync` to asynchronously change the counter of a given `key` by the given `amount`.
-
-#### getCounter
-
-```java
-
-    /**
-     * Retrieve the counter value for the key.
-     *
-     * @param key name of the key
-     * @return the amount of the counter value for this key
-     */
-    long getCounter(String key);
-
-```
-
-The application can use `getCounter` to retrieve the counter of a given `key` mutated by `incrCounter`.
-
-#### getCounterAsync
-
-```java
-
-    /**
-     * Retrieve the counter value for the key, but don't wait
-     * for the operation to be completed
-     *
-     * @param key name of the key
-     * @return the amount of the counter value for this key
-     */
-    CompletableFuture<Long> getCounterAsync(String key);
-
-```
-
-The application can use `getCounterAsync` to asynchronously retrieve the counter of a given `key` mutated by `incrCounterAsync`.
-
-In addition to the `counter` API, Pulsar also exposes a general key/value API for functions to store
-general key/value state.
-
-#### putState
-
-```java
-
-    /**
-     * Update the state value for the key.
-     *
-     * @param key name of the key
-     * @param value state value of the key
-     */
-    void putState(String key, ByteBuffer value);
-
-```
-
-#### putStateAsync
-
-```java
-
-    /**
-     * Update the state value for the key, but don't wait for the operation to be completed
-     *
-     * @param key name of the key
-     * @param value state value of the key
-     */
-    CompletableFuture<Void> putStateAsync(String key, ByteBuffer value);
-
-```
-
-The application can use `putStateAsync` to asynchronously update the state of a given `key`.
-
-#### getState
-
-```java
-
-    /**
-     * Retrieve the state value for the key.
-     *
-     * @param key name of the key
-     * @return the state value for the key.
-     */
-    ByteBuffer getState(String key);
-
-```
-
-#### getStateAsync
-
-```java
-
-    /**
-     * Retrieve the state value for the key, but don't wait for the operation to be completed
-     *
-     * @param key name of the key
-     * @return the state value for the key.
-     */
-    CompletableFuture<ByteBuffer> getStateAsync(String key);
-
-```
-
-The application can use `getStateAsync` to asynchronously retrieve the state of a given `key`.
-
-#### deleteState
-
-```java
-
-    /**
-     * Delete the state value for the key.
-     *
-     * @param key name of the key
-     */
-    void deleteState(String key);
-
-```
-
-Counters and binary values share the same keyspace, so this deletes either type.
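-For example, the key/value calls above can be combined to remember the last value seen across invocations. The following is a minimal sketch (the `last-input` key and the class name are illustrative):
-
-```java
-
-import java.nio.ByteBuffer;
-import java.nio.charset.StandardCharsets;
-
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-
-public class LastInputFunction implements Function<String, String> {
-    @Override
-    public String process(String input, Context context) {
-        // Read the previously stored value, if any.
-        ByteBuffer previous = context.getState("last-input");
-        // Store the current input for the next invocation.
-        context.putState("last-input", ByteBuffer.wrap(input.getBytes(StandardCharsets.UTF_8)));
-        if (previous == null) {
-            return input;
-        }
-        return StandardCharsets.UTF_8.decode(previous).toString() + " -> " + input;
-    }
-}
-
-```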
-
-
-
-Currently, Pulsar Functions expose the following APIs for mutating and accessing State. These APIs are available in the [Context](#context) object when you are using Python SDK functions.
-
-#### incr_counter
-
-```python
-
-  def incr_counter(self, key, amount):
-    """incr the counter of a given key in the managed state"""
-
-```
-
-The application can use `incr_counter` to change the counter of a given `key` by the given `amount`.
-If the `key` does not exist, a new key is created.
-
-#### get_counter
-
-```python
-
-  def get_counter(self, key):
-    """get the counter of a given key in the managed state"""
-
-```
-
-The application can use `get_counter` to retrieve the counter of a given `key` mutated by `incr_counter`.
-
-In addition to the `counter` API, Pulsar also exposes a general key/value API for functions to store
-general key/value state.
-
-#### put_state
-
-```python
-
-  def put_state(self, key, value):
-    """update the value of a given key in the managed state"""
-
-```
-
-The key is a string, and the value is arbitrary binary data.
-
-#### get_state
-
-```python
-
-  def get_state(self, key):
-    """get the value of a given key in the managed state"""
-
-```
-
-#### del_counter
-
-```python
-
-  def del_counter(self, key):
-    """delete the counter of a given key in the managed state"""
-
-```
-
-Counters and binary values share the same keyspace, so this deletes either type.
-
-
-
-
-````
-
-### Query State
-
-A Pulsar Function can use the [State API](#api) for storing state into Pulsar's state storage
-and retrieving state back from Pulsar's state storage. Additionally, Pulsar provides
-CLI commands for querying its state.
-
-```shell
-
-$ bin/pulsar-admin functions querystate \
-    --tenant <tenant> \
-    --namespace <namespace> \
-    --name <function-name> \
-    --state-storage-url <bk-service-url> \
-    --key <state-key> \
-    [--watch]
-
-```
-
-If `--watch` is specified, the CLI will watch the value of the provided `state-key`.
-
-### Example
-
-````mdx-code-block
-
-
-
-{@inject: github:WordCountFunction:/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/WordCountFunction.java} is a good example
-that demonstrates how an application can easily store `state` in Pulsar Functions.
-
-```java
-
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-
-import java.util.Arrays;
-
-public class WordCountFunction implements Function<String, Void> {
-    @Override
-    public Void process(String input, Context context) throws Exception {
-        Arrays.asList(input.split("\\.")).forEach(word -> context.incrCounter(word, 1));
-        return null;
-    }
-}
-
-```
-
-The logic of this `WordCount` function is simple and straightforward:
-
-1. The function first splits the received `String` into multiple words using the regex `\\.`.
-2. For each `word`, the function increments the corresponding `counter` by 1 (via `incrCounter(key, amount)`).
-
-
-
-
-```python
-
-from pulsar import Function
-
-class WordCount(Function):
-    def process(self, item, context):
-        for word in item.split():
-            context.incr_counter(word, 1)
-
-```
-
-The logic of this `WordCount` function is simple and straightforward:
-
-1. The function first splits the received string into multiple words on space.
-2. For each `word`, the function increments the corresponding `counter` by 1 (via `incr_counter(key, amount)`).
-
-
-
-
-````
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/functions-metrics.md b/site2/website/versioned_docs/version-2.9.3-deprecated/functions-metrics.md
deleted file mode 100644
index 8add6693160929..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/functions-metrics.md
+++ /dev/null
@@ -1,7 +0,0 @@
----
-id: functions-metrics
-title: Metrics for Pulsar Functions
-sidebar_label: "Metrics"
-original_id: functions-metrics
----
-
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/functions-overview.md b/site2/website/versioned_docs/version-2.9.3-deprecated/functions-overview.md
deleted file mode 100644
index 816d301e0fd0e7..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/functions-overview.md
+++ /dev/null
@@ -1,209 +0,0 @@
----
-id: functions-overview
-title: Pulsar Functions overview
-sidebar_label: "Overview"
-original_id: functions-overview
----
-
-**Pulsar Functions** are lightweight compute processes that
-
-* consume messages from one or more Pulsar topics,
-* apply a user-supplied processing logic to each message,
-* publish the results of the computation to another topic.
-
-
-## Goals
-With Pulsar Functions, you can create complex processing logic without deploying a separate neighboring system (such as [Apache Storm](http://storm.apache.org/), [Apache Heron](https://heron.incubator.apache.org/), [Apache Flink](https://flink.apache.org/)). Pulsar Functions are a computing infrastructure of the Pulsar messaging system.
-The core goal is tied to a series of other goals:
-
-* Developer productivity (language-native vs Pulsar Functions SDK functions)
-* Easy troubleshooting
-* Operational simplicity (no need for an external processing system)
-
-## Inspirations
-Pulsar Functions are inspired by (and take cues from) several systems and paradigms:
-
-* Stream processing engines such as [Apache Storm](http://storm.apache.org/), [Apache Heron](https://apache.github.io/incubator-heron), and [Apache Flink](https://flink.apache.org)
-* "Serverless" and "Function as a Service" (FaaS) cloud platforms like [Amazon Web Services Lambda](https://aws.amazon.com/lambda/), [Google Cloud Functions](https://cloud.google.com/functions/), and [Azure Cloud Functions](https://azure.microsoft.com/en-us/services/functions/)
-
-Pulsar Functions can be described as
-
-* [Lambda](https://aws.amazon.com/lambda/)-style functions that are
-* specifically designed to use Pulsar as a message bus.
-
-## Programming model
-Pulsar Functions provide a wide range of functionality, and the core programming model is simple. Functions receive messages from one or more **input [topics](reference-terminology.md#topic)**. Each time a message is received, the function completes the following tasks.
-
-  * Apply some processing logic to the input and write output to:
-    * An **output topic** in Pulsar
-    * [Apache BookKeeper](functions-develop.md#state-storage)
-  * Write logs to a **log topic** (potentially for debugging purposes)
-  * Increment a [counter](#word-count-example)
-
-![Pulsar Functions core programming model](/assets/pulsar-functions-overview.png)
-
-You can use Pulsar Functions to set up the following processing chain:
-
-* A Python function listens for the `raw-sentences` topic and "sanitizes" incoming strings (removing extraneous whitespace and converting all characters to lowercase) and then publishes the results to a `sanitized-sentences` topic.
-* A Java function listens for the `sanitized-sentences` topic, counts the number of times each word appears within a specified time window, and publishes the results to a `results` topic.
-* Finally, a Python function listens for the `results` topic and writes the results to a MySQL table.
-
-
-### Word count example
-
-If you implement the classic word count example using Pulsar Functions, it looks something like this:
-
-![Pulsar Functions word count example](/assets/pulsar-functions-word-count.png)
-
-To write the function in Java with the [Pulsar Functions SDK for Java](functions-develop.md#available-apis), you can write the function as follows.
-
-```java
-
-package org.example.functions;
-
-import org.apache.pulsar.functions.api.Context;
-import org.apache.pulsar.functions.api.Function;
-
-import java.util.Arrays;
-
-public class WordCountFunction implements Function<String, Void> {
-    // This function is invoked every time a message is published to the input topic
-    @Override
-    public Void process(String input, Context context) throws Exception {
-        Arrays.asList(input.split(" ")).forEach(word -> {
-            String counterKey = word.toLowerCase();
-            context.incrCounter(counterKey, 1);
-        });
-        return null;
-    }
-}
-
-```
-
-Bundle and build the JAR file to be deployed, and then deploy it in your Pulsar cluster using the [command line](functions-deploy.md#command-line-interface) as follows.
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --jar target/my-jar-with-dependencies.jar \
-  --classname org.example.functions.WordCountFunction \
-  --tenant public \
-  --namespace default \
-  --name word-count \
-  --inputs persistent://public/default/sentences \
-  --output persistent://public/default/count
-
-```
-
-### Content-based routing example
-
-Pulsar Functions are used in many use cases. The following is a more sophisticated example that involves content-based routing.
-
-For example, a function takes items (strings) as input and publishes them to either a `fruits` or `vegetables` topic, depending on the item. Or, if an item is neither a fruit nor a vegetable, a warning is logged to a [log topic](functions-develop.md#logger). The following is a visual representation.
-
-![Pulsar Functions routing example](/assets/pulsar-functions-routing-example.png)
-
-If you implement this routing functionality in Python, it looks something like this:
-
-```python
-
-from pulsar import Function
-
-class RoutingFunction(Function):
-    def __init__(self):
-        self.fruits_topic = "persistent://public/default/fruits"
-        self.vegetables_topic = "persistent://public/default/vegetables"
-
-    @staticmethod
-    def is_fruit(item):
-        return item in [b"apple", b"orange", b"pear", b"other fruits..."]
-
-    @staticmethod
-    def is_vegetable(item):
-        return item in [b"carrot", b"lettuce", b"radish", b"other vegetables..."]
-
-    def process(self, item, context):
-        if self.is_fruit(item):
-            context.publish(self.fruits_topic, item)
-        elif self.is_vegetable(item):
-            context.publish(self.vegetables_topic, item)
-        else:
-            warning = "The item {0} is neither a fruit nor a vegetable".format(item)
-            context.get_logger().warn(warning)
-
-```
-
-If this code is stored in `~/router.py`, then you can deploy it in your Pulsar cluster using the [command line](functions-deploy.md#command-line-interface) as follows.
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --py ~/router.py \
-  --classname router.RoutingFunction \
-  --tenant public \
-  --namespace default \
-  --name route-fruit-veg \
-  --inputs persistent://public/default/basket-items
-
-```
-
-### Functions, messages and message types
-Pulsar Functions take byte arrays as input and emit byte arrays as output. However, in languages that support typed interfaces (such as Java), you can write typed functions and bind messages to types in the following ways.
-* [Schema Registry](functions-develop.md#schema-registry)
-* [SerDe](functions-develop.md#serde)
-
-
-## Fully Qualified Function Name (FQFN)
-Each Pulsar Function has a **Fully Qualified Function Name** (FQFN) that consists of three elements: the function tenant, namespace, and function name. An FQFN looks like this:
-
-```http
-
-tenant/namespace/name
-
-```
-
-FQFNs enable you to create multiple functions with the same name provided that they are in different namespaces.
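-For example, assuming a hypothetical `acme` tenant with two namespaces, the following two FQFNs name two distinct functions that share the short name `word-filter`:
-
-```http
-
-acme/ingest/word-filter
-acme/analytics/word-filter
-
-```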
-
-## Supported languages
-Currently, you can write Pulsar Functions in Java, Python, and Go. For details, refer to [Develop Pulsar Functions](functions-develop.md).
-
-## Processing guarantees
-Pulsar Functions provide three different messaging semantics that you can apply to any function.
-
-Delivery semantics | Description
-:------------------|:-------
-**At-most-once** delivery | Each message sent to the function is processed at most once; it may not be processed at all (hence "at most").
-**At-least-once** delivery | Each message sent to the function can be processed more than once (hence the "at least").
-**Effectively-once** delivery | Each message sent to the function will have one output associated with it.
-
-
-### Apply processing guarantees to a function
-You can set the processing guarantees for a Pulsar Function when you create the function. The following [`pulsar-admin functions create`](reference-pulsar-admin.md#create-1) command creates a function with effectively-once guarantees applied.
-
-```bash
-
-$ bin/pulsar-admin functions create \
-  --name my-effectively-once-function \
-  --processing-guarantees EFFECTIVELY_ONCE \
-  # Other function configs
-
-```
-
-The available options for `--processing-guarantees` are:
-
-* `ATMOST_ONCE`
-* `ATLEAST_ONCE`
-* `EFFECTIVELY_ONCE`
-
-> By default, Pulsar Functions provide at-least-once delivery guarantees. So if you create a function without supplying a value for the `--processing-guarantees` flag, the function provides at-least-once guarantees.
-
-### Update the processing guarantees of a function
-You can change the processing guarantees applied to a function using the [`update`](reference-pulsar-admin.md#update-1) command. The following is an example.
-
-```bash
-
-$ bin/pulsar-admin functions update \
-  --processing-guarantees ATMOST_ONCE \
-  # Other function configs
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/functions-package.md b/site2/website/versioned_docs/version-2.9.3-deprecated/functions-package.md
deleted file mode 100644
index db2c4e987dc7be..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/functions-package.md
+++ /dev/null
@@ -1,493 +0,0 @@
----
-id: functions-package
-title: Package Pulsar Functions
-sidebar_label: "How-to: Package"
-original_id: functions-package
----
-
-You can package Pulsar functions in Java, Python, and Go. Packaging the window function in Java is the same as [packaging a function in Java](#java).
-
-:::note
-
-Currently, the window function is not available in Python and Go.
-
-:::
-
-## Prerequisite
-
-Before running a Pulsar function, you need to start Pulsar. You can [run a standalone Pulsar in Docker](getting-started-docker.md), or [run Pulsar in Kubernetes](getting-started-helm.md).
-
-To check whether the Docker image has started, you can use the `docker ps` command.
-
-## Java
-
-To package a function in Java, complete the following steps.
-
-1. Create a new Maven project with a pom file. In the following code sample, the value of `mainClass` is your package name.
-
-   ```xml
-
-   <?xml version="1.0" encoding="UTF-8"?>
-   <project xmlns="http://maven.apache.org/POM/4.0.0"
-            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-            xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-       <modelVersion>4.0.0</modelVersion>
-
-       <groupId>java-function</groupId>
-       <artifactId>java-function</artifactId>
-       <version>1.0-SNAPSHOT</version>
-
-       <dependencies>
-           <dependency>
-               <groupId>org.apache.pulsar</groupId>
-               <artifactId>pulsar-functions-api</artifactId>
-               <version>2.6.0</version>
-           </dependency>
-       </dependencies>
-
-       <build>
-           <plugins>
-               <plugin>
-                   <artifactId>maven-assembly-plugin</artifactId>
-                   <configuration>
-                       <appendAssemblyId>false</appendAssemblyId>
-                       <descriptorRefs>
-                           <descriptorRef>jar-with-dependencies</descriptorRef>
-                       </descriptorRefs>
-                       <archive>
-                           <manifest>
-                               <mainClass>org.example.test.ExclamationFunction</mainClass>
-                           </manifest>
-                       </archive>
-                   </configuration>
-                   <executions>
-                       <execution>
-                           <id>make-assembly</id>
-                           <phase>package</phase>
-                           <goals>
-                               <goal>assembly</goal>
-                           </goals>
-                       </execution>
-                   </executions>
-               </plugin>
-               <plugin>
-                   <groupId>org.apache.maven.plugins</groupId>
-                   <artifactId>maven-compiler-plugin</artifactId>
-                   <configuration>
-                       <source>8</source>
-                       <target>8</target>
-                   </configuration>
-               </plugin>
-           </plugins>
-       </build>
-   </project>
-
-   ```
-
-2. Write a Java function.
-
-   ```java
-
-   package org.example.test;
-
-   import java.util.function.Function;
-
-   public class ExclamationFunction implements Function<String, String> {
-       @Override
-       public String apply(String s) {
-           return "This is my function!";
-       }
-   }
-
-   ```
-
-   For the imported package, you can use one of the following interfaces:
-   - Function interface provided by Java 8: `java.util.function.Function`
-   - Pulsar Function interface: `org.apache.pulsar.functions.api.Function`
-
-   The main difference between the two interfaces is that the `org.apache.pulsar.functions.api.Function` interface provides the context interface.
-   When you write a function and want to interact with the Pulsar runtime, you can use the context to obtain a wide variety of information and functionality for Pulsar Functions.
-
-   The following example uses the `org.apache.pulsar.functions.api.Function` interface with context.
-
-   ```java
-
-   package org.example.functions;
-   import org.apache.pulsar.functions.api.Context;
-   import org.apache.pulsar.functions.api.Function;
-
-   import java.util.Arrays;
-   public class WordCountFunction implements Function<String, Void> {
-       // This function is invoked every time a message is published to the input topic
-       @Override
-       public Void process(String input, Context context) throws Exception {
-           Arrays.asList(input.split(" ")).forEach(word -> {
-               String counterKey = word.toLowerCase();
-               context.incrCounter(counterKey, 1);
-           });
-           return null;
-       }
-   }
-
-   ```
-
-3. Package the Java function.
-
-   ```bash
-
-   mvn package
-
-   ```
-
-   After the Java function is packaged, a `target` directory is created automatically. Open the `target` directory to check if there is a JAR package similar to `java-function-1.0-SNAPSHOT.jar`.
-
-
-4. Run the Java function.
-
-   (1) Copy the packaged jar file to the Pulsar image.
-
-   ```bash
-
-   docker exec -it [CONTAINER ID] /bin/bash
-   docker cp <path of the jar file> [CONTAINER ID]:/pulsar
-
-   ```
-
-   (2) Run the Java function using the following command.
-
-   ```bash
-
-   ./bin/pulsar-admin functions localrun \
-     --classname org.example.test.ExclamationFunction \
-     --jar java-function-1.0-SNAPSHOT.jar \
-     --inputs persistent://public/default/my-topic-1 \
-     --output persistent://public/default/test-1 \
-     --tenant public \
-     --namespace default \
-     --name JavaFunction
-
-   ```
-
-   The following log indicates that the Java function starts successfully.
-
-   ```text
-
-   ...
-   07:55:03.724 [main] INFO  org.apache.pulsar.functions.runtime.ProcessRuntime - Started process successfully
-   ...
-
-   ```
-
-## Python
-
-Python functions support the following three formats:
-
-- One python file
-- ZIP file
-- PIP
-
-### One python file
-
-To package a function with **one python file** in Python, complete the following steps.
-
-1. Write a Python function.
-
-   ```python
-
-   from pulsar import Function  # import the Function module from Pulsar
-
-   # The classic ExclamationFunction that appends an exclamation at the end
-   # of the input
-   class ExclamationFunction(Function):
-     def __init__(self):
-       pass
-
-     def process(self, input, context):
-       return input + '!'
-
-   ```
-
-   In this example, when you write a Python function, you need to inherit the Function class and implement the `process()` method.
-
-   `process()` mainly has two parameters:
-
-   - `input` represents your input.
-
-   - `context` represents an interface exposed by the Pulsar Function. You can get the attributes in the Python function based on the provided context object.
-
-2. Install a Python client.
-
-   The implementation of a Python function depends on the Python client, so before deploying a Python function, you need to install the corresponding version of the Python client.
-
-   ```bash

-   pip install pulsar-client==2.6.0
-
-   ```
-
-3. Run the Python Function.
-
-   (1) Copy the Python function file to the Pulsar image.
-
-   ```bash
-
-   docker exec -it [CONTAINER ID] /bin/bash
-   docker cp <path of the Python function file> [CONTAINER ID]:/pulsar
-
-   ```
-
-   (2) Run the Python function using the following command.
-
-   ```bash
-
-   ./bin/pulsar-admin functions localrun \
-     --classname <python file name>.ExclamationFunction \
-     --py <python file location> \
-     --inputs persistent://public/default/my-topic-1 \
-     --output persistent://public/default/test-1 \
-     --tenant public \
-     --namespace default \
-     --name PythonFunction
-
-   ```
-
-   The following log indicates that the Python function starts successfully.
-
-   ```text
-
-   ...
-   07:55:03.724 [main] INFO  org.apache.pulsar.functions.runtime.ProcessRuntime - Started process successfully
-   ...
-
-   ```
-
-### ZIP file
-
-To package a function with the **ZIP file** in Python, complete the following steps.
-
-1. Prepare the ZIP file.
-
-   The following is required when packaging the ZIP file of the Python Function.
-
-   ```text
-
-   Assuming the zip file is named as `func.zip`, unzip the `func.zip` folder:
-   "func/src"
-   "func/requirements.txt"
-   "func/deps"
-
-   ```
-
-   Take [exclamation.zip](https://github.com/apache/pulsar/tree/master/tests/docker-images/latest-version-image/python-examples) as an example. The internal structure of the example is as follows.
-
-   ```text
-
-   .
-   ├── deps
-   │   └── sh-1.12.14-py2.py3-none-any.whl
-   └── src
-       └── exclamation.py
-
-   ```
-
-2. Run the Python Function.
-
-   (1) Copy the ZIP file to the Pulsar image.
-
-   ```bash
-
-   docker exec -it [CONTAINER ID] /bin/bash
-   docker cp <path of the ZIP file> [CONTAINER ID]:/pulsar
-
-   ```
-
-   (2) Run the Python function using the following command.
-
-   ```bash
-
-   ./bin/pulsar-admin functions localrun \
-     --classname exclamation \
-     --py <zip file location> \
-     --inputs persistent://public/default/in-topic \
-     --output persistent://public/default/out-topic \
-     --tenant public \
-     --namespace default \
-     --name PythonFunction
-
-   ```
-
-   The following log indicates that the Python function starts successfully.
-
-   ```text
-
-   ...
-   07:55:03.724 [main] INFO  org.apache.pulsar.functions.runtime.ProcessRuntime - Started process successfully
-   ...
-
-   ```
-
-### PIP
-
-The PIP method is only supported in Kubernetes runtime. To package a function with **PIP** in Python, complete the following steps.
-
-1. Configure the `functions_worker.yml` file.
-
-   ```text
-
-   #### Kubernetes Runtime ####
-   installUserCodeDependencies: true
-
-   ```
-
-2. Write your Python Function.
-
-   ```python
-
-   from pulsar import Function
-   import js2xml
-
-   # The classic ExclamationFunction that appends an exclamation at the end
-   # of the input
-   class ExclamationFunction(Function):
-     def __init__(self):
-       pass
-
-     def process(self, input, context):
-       # add your logic
-       return input + '!'
-
-   ```
-
-   You can introduce additional dependencies. When the Python function detects that the file currently used is a `whl` file and the `installUserCodeDependencies` parameter is specified, the system uses the `pip install` command to install the dependencies required in the Python function.
-
-3. Generate the `whl` file.
-
-   ```shell
-
-   $ cd $PULSAR_HOME/pulsar-functions/scripts/python
-   $ chmod +x generate.sh
-   $ ./generate.sh <path of your Python file> <path of the output directory> <version>
-   # e.g: ./generate.sh /path/to/python /path/to/python/output 1.0.0
-
-   ```
-
-   The output is written in `/path/to/python/output`:
-
-   ```text
-
-   -rw-r--r--  1 root  staff   1.8K  8 27 14:29 pulsarfunction-1.0.0-py2-none-any.whl
-   -rw-r--r--  1 root  staff   1.4K  8 27 14:29 pulsarfunction-1.0.0.tar.gz
-   -rw-r--r--  1 root  staff     0B  8 27 14:29 pulsarfunction.whl
-
-   ```
-
-## Go
-
-To package a function in Go, complete the following steps.
-
-1. Write a Go function.
-
-   Currently, Go functions can **only** be implemented using the SDK, and the interface of the function is exposed in the form of the SDK.
-   Before using a Go function, you need to import "github.com/apache/pulsar/pulsar-function-go/pf".
-
-   ```go
-
-   import (
-       "context"
-       "fmt"
-
-       "github.com/apache/pulsar/pulsar-function-go/pf"
-   )
-
-   func HandleRequest(ctx context.Context, input []byte) error {
-       fmt.Println(string(input) + "!")
-       return nil
-   }
-
-   func main() {
-       pf.Start(HandleRequest)
-   }
-
-   ```
-
-   You can use the context to access function metadata in the Go function.
-
-   ```go
-
-   if fc, ok := pf.FromContext(ctx); ok {
-       fmt.Printf("function ID is:%s, ", fc.GetFuncID())
-       fmt.Printf("function version is:%s\n", fc.GetFuncVersion())
-   }
-
-   ```
-
-   When writing a Go function, remember that
-   - In `main()`, you **only** need to register the function name to `Start()`. **Only** one function name is received in `Start()`.
-   - A Go function uses Go reflection, based on the received function name, to verify whether the parameter list and returned value list are correct. The parameter list and returned value list **must be** one of the following sample functions:
-
-   ```go
-
-   func ()
-   func () error
-   func (input) error
-   func () (output, error)
-   func (input) (output, error)
-   func (context.Context) error
-   func (context.Context, input) error
-   func (context.Context) (output, error)
-   func (context.Context, input) (output, error)
-
-   ```
-
-2. Build the Go function.
-
-   ```bash
-
-   go build <your Go function filename>.go
-
-   ```
-
-3. Run the Go Function.
-
-   (1) Copy the Go function file to the Pulsar image.
-
-   ```bash
-
-   docker exec -it [CONTAINER ID] /bin/bash
-   docker cp <path of the Go function binary> [CONTAINER ID]:/pulsar
-
-   ```
-
-   (2) Run the Go function with the following command.
-
-   ```bash
-
-   ./bin/pulsar-admin functions localrun \
-     --go [your go function path] \
-     --inputs [input topics] \
-     --output [output topic] \
-     --tenant [default:public] \
-     --namespace [default:default] \
-     --name [custom unique go function name]
-
-   ```
-
-   The following log indicates that the Go function starts successfully.
-
-   ```text
-
-   ...
-   07:55:03.724 [main] INFO  org.apache.pulsar.functions.runtime.ProcessRuntime - Started process successfully
-   ...
-
-   ```
-
-## Start Functions in cluster mode
-If you want to start a function in cluster mode, replace `localrun` with `create` in the commands above. The following log indicates that your function starts successfully.
-
-  ```text
-
-  "Created successfully"
-
-  ```
-
-For information about parameters on `--classname`, `--jar`, `--py`, `--go`, `--inputs`, run the command `./bin/pulsar-admin functions` or see [here](reference-pulsar-admin.md#functions).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/functions-runtime.md b/site2/website/versioned_docs/version-2.9.3-deprecated/functions-runtime.md
deleted file mode 100644
index 7164bd13668aff..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/functions-runtime.md
+++ /dev/null
@@ -1,403 +0,0 @@
----
-id: functions-runtime
-title: Configure Functions runtime
-sidebar_label: "Setup: Configure Functions runtime"
-original_id: functions-runtime
----
-
-You can use the following methods to run functions.
-
-- *Thread*: Invoke functions as threads in the functions worker.
-- *Process*: Invoke functions in processes forked by the functions worker.
-- *Kubernetes*: Submit functions as Kubernetes StatefulSets via the functions worker.
-
-:::note
-
-Pulsar supports adding labels to the Kubernetes StatefulSets and services while launching functions, which facilitates selecting the target Kubernetes objects.
-
-:::
-
-The differences between the thread and process modes are:
-- Thread mode: when a function runs in thread mode, it runs in the same Java virtual machine (JVM) as the functions worker.
-- Process mode: when a function runs in process mode, it runs in a separate process on the same machine that the functions worker runs on.
-
-## Configure thread runtime
-It is easy to configure *Thread* runtime. In most cases, you do not need to configure anything. You can customize the thread group name with the following settings:
-
-```yaml
-
-functionRuntimeFactoryClassName: org.apache.pulsar.functions.runtime.thread.ThreadRuntimeFactory
-functionRuntimeFactoryConfigs:
-  threadGroupName: "Your Function Container Group"
-
-```
-
-*Thread* runtime is only supported in Java functions.
-
-## Configure process runtime
-When you enable *Process* runtime, you do not need to configure anything.
-
-```yaml
-
-functionRuntimeFactoryClassName: org.apache.pulsar.functions.runtime.process.ProcessRuntimeFactory
-functionRuntimeFactoryConfigs:
-  # the directory for storing the function logs
-  logDirectory:
-  # change the jar location only when you put the java instance jar in a different location
-  javaInstanceJarLocation:
-  # change the python instance location only when you put the python instance jar in a different location
-  pythonInstanceLocation:
-  # change the extra dependencies location:
-  extraFunctionDependenciesDir:
-
-```
-
-*Process* runtime is supported in Java, Python, and Go functions.
-
-## Configure Kubernetes runtime
-
-The Kubernetes runtime works by having the functions worker generate Kubernetes manifests and apply them. If you run the functions worker on Kubernetes, you can use the `serviceAccount` associated with the pod that the functions worker is running in. Otherwise, you can configure it to communicate with a Kubernetes cluster.
-
-The manifests, generated by the functions worker, include a `StatefulSet`, a `Service` (used to communicate with the pods), and a `Secret` for auth credentials (when applicable). The `StatefulSet` manifest (by default) has a single pod, with the number of replicas determined by the "parallelism" of the function. On pod boot, the pod downloads the function payload (via the functions worker REST API). The pod's container image is configurable, but must have the functions runtime.
-
-The Kubernetes runtime supports secrets, so you can create a Kubernetes secret and expose it as an environment variable in the pod. The Kubernetes runtime is extensible: you can implement classes to customize how Kubernetes manifests are generated, how auth data is passed to pods, and how secrets are integrated.
-
-:::tip
-
-For the rules of translating Pulsar object names into Kubernetes resource labels, see [here](admin-api-overview.md#how-to-define-pulsar-resource-names-when-running-pulsar-in-kubernetes).
-
-:::
-
-### Basic configuration
-
-It is easy to configure the Kubernetes runtime. You can just uncomment the settings of `kubernetesContainerFactory` in the `functions_worker.yaml` file. The following is an example.
-
-```yaml
-
-functionRuntimeFactoryClassName: org.apache.pulsar.functions.runtime.kubernetes.KubernetesRuntimeFactory
-functionRuntimeFactoryConfigs:
-  # uri to kubernetes cluster, leave it to empty and it will use the kubernetes settings in function worker
-  k8Uri:
-  # the kubernetes namespace to run the function instances. it is `default`, if this setting is left to be empty
-  jobNamespace:
-  # The Kubernetes pod name to run the function instances. It is set to
-  # `pf-<tenant>-<namespace>-<function_name>` if this setting is left to be empty
-  jobName:
-  # the docker image to run function instance. by default it is `apachepulsar/pulsar`
-  pulsarDockerImageName:
-  # the docker image to run function instance according to different configurations provided by users.
-  # By default it is `apachepulsar/pulsar`.
-  # e.g:
-  # functionDockerImages:
-  #   JAVA: JAVA_IMAGE_NAME
-  #   PYTHON: PYTHON_IMAGE_NAME
-  #   GO: GO_IMAGE_NAME
-  functionDockerImages:
-  # The image pull policy for image used to run function instance. By default it is `IfNotPresent`
-  imagePullPolicy: IfNotPresent
-  # the root directory of pulsar home directory in `pulsarDockerImageName`. by default it is `/pulsar`.
-  # if you are using your own built image in `pulsarDockerImageName`, you need to set this setting accordingly
-  pulsarRootDir:
-  # The config admin CLI allows users to customize the configuration of the admin cli tool, such as:
-  # `/bin/pulsar-admin` and `/bin/pulsarctl`. By default it is `/bin/pulsar-admin`. If you want to use `pulsarctl`,
-  # you need to set this setting accordingly
-  configAdminCLI:
-  # this setting only takes effects if `k8Uri` is set to null. if your function worker is running as a k8s pod,
-  # setting this to true is let function worker to submit functions to the same k8s cluster as function worker
-  # is running. setting this to false if your function worker is not running as a k8s pod.
-  submittingInsidePod: false
-  # setting the pulsar service url that pulsar function should use to connect to pulsar
-  # if it is not set, it will use the pulsar service url configured in worker service
-  pulsarServiceUrl:
-  # setting the pulsar admin url that pulsar function should use to connect to pulsar
-  # if it is not set, it will use the pulsar admin url configured in worker service
-  pulsarAdminUrl:
-  # The flag indicates to install user code dependencies. (applied to python package)
-  installUserCodeDependencies:
-  # The repository that pulsar functions use to download python dependencies
-  pythonDependencyRepository:
-  # The repository that pulsar functions use to download extra python dependencies
-  pythonExtraDependencyRepository:
-  # the custom labels that function worker uses to select the nodes for pods
-  customLabels:
-  # The expected metrics collection interval, in seconds
-  expectedMetricsCollectionInterval: 30
-  # Kubernetes Runtime will periodically checkback on
-  # this configMap if defined and if there are any changes
-  # to the kubernetes specific stuff, we apply those changes
-  changeConfigMap:
-  # The namespace for storing change config map
-  changeConfigMapNamespace:
-  # The ratio cpu request and cpu limit to be set for a function/source/sink.
-  # The formula for cpu request is cpuRequest = userRequestCpu / cpuOverCommitRatio
-  cpuOverCommitRatio: 1.0
-  # The ratio memory request and memory limit to be set for a function/source/sink.
-  # The formula for memory request is memoryRequest = userRequestMemory / memoryOverCommitRatio
-  memoryOverCommitRatio: 1.0
-  # The port inside the function pod which is used by the worker to communicate with the pod
-  grpcPort: 9093
-  # The port inside the function pod on which prometheus metrics are exposed
-  metricsPort: 9094
-  # The directory inside the function pod where nar packages will be extracted
-  narExtractionDirectory:
-  # The classpath where function instance files stored
-  functionInstanceClassPath:
-  # the directory for dropping extra function dependencies
-  # if it is not an absolute path, it is relative to `pulsarRootDir`
-  extraFunctionDependenciesDir:
-  # Additional memory padding added on top of the memory requested by the function on a per instance basis
-  percentMemoryPadding: 10
-  # The duration (in seconds) before the StatefulSet is deleted after a function stops or restarts.
-  # Value must be a non-negative integer. 0 indicates the StatefulSet is deleted immediately.
-  # Default is 5 seconds.
-  gracePeriodSeconds: 5
-
-```
-
-If you run the functions worker embedded in a broker on Kubernetes, you can use the default settings.
-
-### Run standalone functions worker on Kubernetes
-
-If you run the functions worker standalone (that is, not embedded) on Kubernetes, you need to configure `pulsarServiceUrl` to be the URL of the broker and `pulsarAdminUrl` as the URL to the functions worker.
-
-For example, both Pulsar brokers and Function Workers run in the `pulsar` K8S namespace. The brokers have a service called `broker` and the functions worker has a service called `func-worker`. The settings are as follows:
-
-```yaml
-
-pulsarServiceUrl: pulsar://broker.pulsar:6650 # or pulsar+ssl://broker.pulsar:6651 if using TLS
-pulsarAdminUrl: http://func-worker.pulsar:8080 # or https://func-worker:8443 if using TLS
-
-```
-
-### Run RBAC in Kubernetes clusters
-
-If you run RBAC in your Kubernetes cluster, make sure that the service account you use for running functions workers (or brokers, if functions workers run along with brokers) has permissions on the following Kubernetes APIs.
-
-- services
-- configmaps
-- pods
-- apps.statefulsets
-
-The following is sufficient:
-
-```yaml
-
-apiVersion: rbac.authorization.k8s.io/v1beta1
-kind: ClusterRole
-metadata:
-  name: functions-worker
-rules:
-- apiGroups: [""]
-  resources:
-  - services
-  - configmaps
-  - pods
-  verbs:
-  - '*'
-- apiGroups:
-  - apps
-  resources:
-  - statefulsets
-  verbs:
-  - '*'
----
-apiVersion: v1
-kind: ServiceAccount
-metadata:
-  name: functions-worker
----
-apiVersion: rbac.authorization.k8s.io/v1beta1
-kind: ClusterRoleBinding
-metadata:
-  name: functions-worker
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: ClusterRole
-  name: functions-worker
-subjects:
-- kind: ServiceAccount
-  name: functions-worker
-
-```
-
-If the service account is not properly configured, an error message similar to this is displayed:
-
-```bash
-
-22:04:27.696 [Timer-0] ERROR org.apache.pulsar.functions.runtime.KubernetesRuntimeFactory - Error while trying to fetch configmap example-pulsar-4qvmb5gur3c6fc9dih0x1xn8b-function-worker-config at namespace pulsar
-io.kubernetes.client.ApiException: Forbidden
- at io.kubernetes.client.ApiClient.handleResponse(ApiClient.java:882) ~[io.kubernetes-client-java-2.0.0.jar:?]
- at io.kubernetes.client.ApiClient.execute(ApiClient.java:798) ~[io.kubernetes-client-java-2.0.0.jar:?]
- at io.kubernetes.client.apis.CoreV1Api.readNamespacedConfigMapWithHttpInfo(CoreV1Api.java:23673) ~[io.kubernetes-client-java-api-2.0.0.jar:?]
- at io.kubernetes.client.apis.CoreV1Api.readNamespacedConfigMap(CoreV1Api.java:23655) ~[io.kubernetes-client-java-api-2.0.0.jar:?]
- at org.apache.pulsar.functions.runtime.KubernetesRuntimeFactory.fetchConfigMap(KubernetesRuntimeFactory.java:284) [org.apache.pulsar-pulsar-functions-runtime-2.4.0-42c3bf949.jar:2.4.0-42c3bf949]
- at org.apache.pulsar.functions.runtime.KubernetesRuntimeFactory$1.run(KubernetesRuntimeFactory.java:275) [org.apache.pulsar-pulsar-functions-runtime-2.4.0-42c3bf949.jar:2.4.0-42c3bf949]
- at java.util.TimerThread.mainLoop(Timer.java:555) [?:1.8.0_212]
- at java.util.TimerThread.run(Timer.java:505) [?:1.8.0_212]
-
-```
-
-### Integrate Kubernetes secrets
-
-In order to safely distribute secrets, Pulsar Functions can reference Kubernetes secrets. To enable this, set the `secretsProviderConfiguratorClassName` to `org.apache.pulsar.functions.secretsproviderconfigurator.KubernetesSecretsProviderConfigurator`.
-
-You can create a secret in the namespace where your functions are deployed. For example, you deploy functions to the `pulsar-func` Kubernetes namespace, and you have a secret named `database-creds` with a field name `password`, which you want to mount in the pod as an environment variable called `DATABASE_PASSWORD`.
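-If the secret does not exist yet, you can create it with `kubectl` first. The following is a minimal sketch (the literal value is a placeholder):
-
-```bash
-
-$ kubectl create secret generic database-creds \
-  --namespace pulsar-func \
-  --from-literal=password='<your-database-password>'
-
-```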
- at io.kubernetes.client.apis.CoreV1Api.readNamespacedConfigMapWithHttpInfo(CoreV1Api.java:23673) ~[io.kubernetes-client-java-api-2.0.0.jar:?] - at io.kubernetes.client.apis.CoreV1Api.readNamespacedConfigMap(CoreV1Api.java:23655) ~[io.kubernetes-client-java-api-2.0.0.jar:?] - at org.apache.pulsar.functions.runtime.KubernetesRuntimeFactory.fetchConfigMap(KubernetesRuntimeFactory.java:284) [org.apache.pulsar-pulsar-functions-runtime-2.4.0-42c3bf949.jar:2.4.0-42c3bf949] - at org.apache.pulsar.functions.runtime.KubernetesRuntimeFactory$1.run(KubernetesRuntimeFactory.java:275) [org.apache.pulsar-pulsar-functions-runtime-2.4.0-42c3bf949.jar:2.4.0-42c3bf949] - at java.util.TimerThread.mainLoop(Timer.java:555) [?:1.8.0_212] - at java.util.TimerThread.run(Timer.java:505) [?:1.8.0_212] - -``` - -### Integrate Kubernetes secrets - -In order to safely distribute secrets, Pulsar Functions can reference Kubernetes secrets. To enable this, set the `secretsProviderConfiguratorClassName` to `org.apache.pulsar.functions.secretsproviderconfigurator.KubernetesSecretsProviderConfigurator`. - -You can create a secret in the namespace where your functions are deployed. For example, you deploy functions to the `pulsar-func` Kubernetes namespace, and you have a secret named `database-creds` with a field name `password`, which you want to mount in the pod as an environment variable called `DATABASE_PASSWORD`. The following functions configuration enables you to reference that secret and mount the value as an environment variable in the pod. - -```Yaml - -tenant: "mytenant" -namespace: "mynamespace" -name: "myfunction" -topicName: "persistent://mytenant/mynamespace/myfuncinput" -className: "com.company.pulsar.myfunction" - -secrets: - # the secret will be mounted from the `password` field in the `database-creds` secret as an env var called `DATABASE_PASSWORD` - DATABASE_PASSWORD: - path: "database-creds" - key: "password" - -``` - -### Enable token authentication - -When you enable authentication for your Pulsar cluster, you need a mechanism for the pod running your function to authenticate with the broker. - -The `org.apache.pulsar.functions.auth.KubernetesFunctionAuthProvider` interface provides support for any authentication mechanism. The `functionAuthProviderClassName` in `function-worker.yml` is used to specify the path to this implementation. - -Pulsar includes an implementation of this interface for token authentication, and distributes the certificate authority via the same implementation. The configuration is as follows: - -```Yaml - -functionAuthProviderClassName: org.apache.pulsar.functions.auth.KubernetesSecretsTokenAuthProvider - -``` - -For token authentication, the functions worker captures the token that is used to deploy (or update) the function. The token is saved as a secret and mounted into the pod. - -For custom authentication or TLS, you need to implement this interface or use an alternative mechanism to provide authentication. If you use token authentication and TLS encryption to secure the communication with the cluster, Pulsar passes your certificate authority (CA) to the client, so the client obtains what it needs to authenticate the cluster, and trusts the cluster with your signed certificate. - -:::note - -If you deploy functions using a token that has an expiry time, the token that the functions worker captures and saves expires as well.
- -::: - -### Run clusters with authentication - -When you run a functions worker in a standalone process (that is, not embedded in the broker) in a cluster with authentication, you must configure your functions worker to interact with the broker and authenticate incoming requests. Therefore, you need to configure the properties that the broker requires for authentication and authorization. - -For example, if you use token authentication, you need to configure the following properties in the `function-worker.yml` file. - -```Yaml - -clientAuthenticationPlugin: org.apache.pulsar.client.impl.auth.AuthenticationToken -clientAuthenticationParameters: file:///etc/pulsar/token/admin-token.txt -configurationStoreServers: zookeeper-cluster:2181 # auth requires a connection to zookeeper -authenticationProviders: - - "org.apache.pulsar.broker.authentication.AuthenticationProviderToken" -authorizationEnabled: true -authenticationEnabled: true -superUserRoles: - - superuser - - proxy -properties: - tokenSecretKey: file:///etc/pulsar/jwt/secret # if using a secret token, key file must be DER-encoded - tokenPublicKey: file:///etc/pulsar/jwt/public.key # if using public/private key tokens, key file must be DER-encoded - -``` - -:::note - -You must configure both authentication and authorization on the Function Worker, so that the server can authenticate incoming requests and the client can be authenticated to communicate with the broker. - -::: - -### Customize Kubernetes runtime - -The Kubernetes integration enables you to implement a class and customize how to generate manifests. You can configure it by setting `runtimeCustomizerClassName` in the `functions-worker.yml` file, using the fully qualified class name. You must implement the `org.apache.pulsar.functions.runtime.kubernetes.KubernetesManifestCustomizer` interface. - -The functions (and sinks/sources) API provides a flag, `customRuntimeOptions`, which is passed to this interface. - -To initialize the `KubernetesManifestCustomizer`, you can provide `runtimeCustomizerConfig` in the `functions-worker.yml` file. `runtimeCustomizerConfig` is passed to the `public void initialize(Map config)` function of the interface. `runtimeCustomizerConfig` is different from `customRuntimeOptions` in that `runtimeCustomizerConfig` is the same across all functions. If you provide both `runtimeCustomizerConfig` and `customRuntimeOptions`, you need to decide how to manage these two configurations in your implementation of `KubernetesManifestCustomizer`. - -Pulsar includes a built-in implementation. To use the basic implementation, set `runtimeCustomizerClassName` to `org.apache.pulsar.functions.runtime.kubernetes.BasicKubernetesManifestCustomizer`. The built-in implementation, initialized with `runtimeCustomizerConfig`, enables you to pass a JSON document as `customRuntimeOptions` with certain properties to augment, which determines how the manifests are generated. If both `runtimeCustomizerConfig` and `customRuntimeOptions` are provided, `BasicKubernetesManifestCustomizer` uses `customRuntimeOptions` to override the configuration when there are conflicts between these two configurations. - -Below is an example of `customRuntimeOptions`.
- -```json - -{ - "jobName": "jobname", // the k8s pod name to run this function instance - "jobNamespace": "namespace", // the k8s namespace to run this function in - "extraLabels": { // extra labels to attach to the statefulSet, service, and pods - "extraLabel": "value" - }, - "extraAnnotations": { // extra annotations to attach to the statefulSet, service, and pods - "extraAnnotation": "value" - }, - "nodeSelectorLabels": { // node selector labels to add on to the pod spec - "customLabel": "value" - }, - "tolerations": [ // tolerations to add to the pod spec - { - "key": "custom-key", - "value": "value", - "effect": "NoSchedule" - } - ], - "resourceRequirements": { // values for cpu and memory should be defined as described here: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container - "requests": { - "cpu": 1, - "memory": "4G" - }, - "limits": { - "cpu": 2, - "memory": "8G" - } - } -} - -``` - -## Run clusters with geo-replication - -If you run multiple clusters tied together with geo-replication, it is important to use a different function namespace for each cluster. Otherwise, the functions share a namespace and are potentially scheduled across clusters. - -For example, if you have two clusters, `east-1` and `west-1`, you can configure the functions workers for `east-1` and `west-1` respectively as follows. - -```Yaml - -pulsarFunctionsCluster: east-1 -pulsarFunctionsNamespace: public/functions-east-1 - -``` - -```Yaml - -pulsarFunctionsCluster: west-1 -pulsarFunctionsNamespace: public/functions-west-1 - -``` - -This ensures the two different Functions Workers use distinct sets of topics for their internal coordination. - -## Configure standalone functions worker - -When configuring a standalone functions worker, you need to configure the properties that the broker requires, especially if you use TLS, so that the functions worker can communicate with the broker. - -You need to configure the following required properties. - -```Yaml - -workerPort: 8080 -workerPortTls: 8443 # when using TLS -tlsCertificateFilePath: /etc/pulsar/tls/tls.crt # when using TLS -tlsKeyFilePath: /etc/pulsar/tls/tls.key # when using TLS -tlsTrustCertsFilePath: /etc/pulsar/tls/ca.crt # when using TLS -pulsarServiceUrl: pulsar://broker.pulsar:6650/ # or pulsar+ssl://pulsar-prod-broker.pulsar:6651/ when using TLS -pulsarWebServiceUrl: http://broker.pulsar:8080/ # or https://pulsar-prod-broker.pulsar:8443/ when using TLS -useTls: true # when using TLS, critical! - -``` - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/functions-worker.md b/site2/website/versioned_docs/version-2.9.3-deprecated/functions-worker.md deleted file mode 100644 index 35e26926bb7aba..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/functions-worker.md +++ /dev/null @@ -1,385 +0,0 @@ ---- -id: functions-worker -title: Deploy and manage functions worker -sidebar_label: "Setup: Pulsar Functions Worker" -original_id: functions-worker ---- -Before using Pulsar Functions, you need to learn how to set up the Pulsar Functions worker and how to [configure the Functions runtime](functions-runtime.md). - -Pulsar `functions-worker` is a logical component that runs Pulsar Functions in cluster mode. Two options are available, and you can select either based on your requirements.
- -- [run with brokers](#run-functions-worker-with-brokers) -- [run it separately](#run-functions-worker-separately) on separate machines - -:::note - -The `--- Service Urls---` lines in the following diagrams represent Pulsar service URLs that Pulsar client and admin use to connect to a Pulsar cluster. - -::: - -## Run Functions-worker with brokers - -The following diagram illustrates the deployment of functions-workers running along with brokers. - -![assets/functions-worker-corun.png](/assets/functions-worker-corun.png) - -To enable the functions-worker to run as part of a broker, you need to set `functionsWorkerEnabled` to `true` in the `broker.conf` file. - -```conf - -functionsWorkerEnabled=true - -``` - -If `functionsWorkerEnabled` is set to `true`, the functions-worker is started as part of a broker. You need to configure the `conf/functions_worker.yml` file to customize your functions worker. - -Before you run the Functions-worker with brokers, you have to configure the Functions-worker, and then start it with the brokers. - -### Configure Functions-Worker to run with brokers -In this mode, most of the settings are already inherited from your broker configuration (for example, configurationStore settings, authentication settings, and so on) since `functions-worker` is running as part of the broker. - -Pay attention to the following required settings when configuring the functions-worker in this mode. - -- `numFunctionPackageReplicas`: The number of replicas to store function packages. The default value is `1`, which is good for standalone deployment. For production deployment, to ensure high availability, set it to be larger than `2`. -- `initializedDlogMetadata`: Whether to initialize distributed log metadata at runtime. If it is set to `true`, you must ensure that it has been initialized by the `bin/pulsar initialize-cluster-metadata` command. - -If authentication is enabled on the BookKeeper cluster, configure the following BookKeeper authentication settings. - -- `bookkeeperClientAuthenticationPlugin`: the BookKeeper client authentication plugin name. -- `bookkeeperClientAuthenticationParametersName`: the BookKeeper client authentication plugin parameters name. -- `bookkeeperClientAuthenticationParameters`: the BookKeeper client authentication plugin parameters. - -### Configure Stateful-Functions to run with broker - -If you want to use Stateful-Functions related features (for example, the `putState()` and `queryState()` related interfaces), follow the steps below. - -1. Enable the **streamStorage** service in BookKeeper. - - Currently, the service uses the NAR package, so you need to set the configuration in `bookkeeper.conf`. - - ```text - - extraServerComponents=org.apache.bookkeeper.stream.server.StreamStorageLifecycleComponent - - ``` - - After starting the bookie, use the following method to check whether the streamStorage service is started correctly. - - Input: - - ```shell - - telnet localhost 4181 - - ``` - - Output: - - ```text - - Trying 127.0.0.1... - Connected to localhost. - Escape character is '^]'. - - ``` - -2. Turn on this feature in `functions_worker.yml`. - - ```text - - stateStorageServiceUrl: bk://:4181 - - ``` - - `bk-service-url` is the service URL pointing to the BookKeeper table service. - -### Start Functions-worker with broker - -Once you have configured the `functions_worker.yml` file, you can start or restart your broker. - -Then you can use the following command to verify whether `functions-worker` is running properly.
- -```bash - -curl :8080/admin/v2/worker/cluster - -``` - -After entering the command above, a list of active function workers in the cluster is returned. The output is similar to the following. - -```json - -[{"workerId":"","workerHostname":"","port":8080}] - -``` - -## Run Functions-worker separately - -This section illustrates how to run `functions-worker` as a separate process in separate machines. - -![assets/functions-worker-separated.png](/assets/functions-worker-separated.png) - -:::note - -In this mode, make sure `functionsWorkerEnabled` is set to `false`, so you won't start `functions-worker` with brokers by mistake. Also, while accessing the `functions-worker` to manage any of the functions, the `pulsar-admin` CLI tool or any of the clients should use the `workerHostname` and `workerPort` that you set in [Worker parameters](#worker-parameters) to generate an `--admin-url`. - -::: - -### Configure Functions-worker to run separately - -To run function-worker separately, you have to configure the following parameters. - -#### Worker parameters - -- `workerId`: The type is string. It is unique across clusters, which is used to identify a worker machine. -- `workerHostname`: The hostname of the worker machine. -- `workerPort`: The port that the worker server listens on. Keep it as default if you don't customize it. -- `workerPortTls`: The TLS port that the worker server listens on. Keep it as default if you don't customize it. - -#### Function package parameter - -- `numFunctionPackageReplicas`: The number of replicas to store function packages. The default value is `1`. - -#### Function metadata parameter - -- `pulsarServiceUrl`: The Pulsar service URL for your broker cluster. -- `pulsarWebServiceUrl`: The Pulsar web service URL for your broker cluster. -- `pulsarFunctionsCluster`: Set the value to your Pulsar cluster name (same as the `clusterName` setting in the broker configuration). - -If authentication is enabled for your broker cluster, you *should* configure the authentication plugin and parameters for the functions worker to communicate with the brokers. - -- `clientAuthenticationPlugin` -- `clientAuthenticationParameters` - -#### Security settings - -If you want to enable security on functions workers, you *should*: -- [Enable TLS transport encryption](#enable-tls-transport-encryption) -- [Enable Authentication Provider](#enable-authentication-provider) -- [Enable Authorization Provider](#enable-authorization-provider) -- [Enable End-to-End Encryption](#enable-end-to-end-encryption) - -##### Enable TLS transport encryption - -To enable TLS transport encryption, configure the following settings. - -``` - -useTLS: true -pulsarServiceUrl: pulsar+ssl://localhost:6651/ -pulsarWebServiceUrl: https://localhost:8443 - -tlsEnabled: true -tlsCertificateFilePath: /path/to/functions-worker.cert.pem -tlsKeyFilePath: /path/to/functions-worker.key-pk8.pem -tlsTrustCertsFilePath: /path/to/ca.cert.pem - -// The path to trusted certificates used by the Pulsar client to authenticate with Pulsar brokers -brokerClientTrustCertsFilePath: /path/to/ca.cert.pem - -``` - -For details on TLS encryption, refer to [Transport Encryption using TLS](security-tls-transport.md). - -##### Enable Authentication Provider - -To enable authentication on Functions Worker, you need to configure the following settings. - -:::note - -Substitute the *providers list* with the providers you want to enable. 
- -::: - -``` - -authenticationEnabled: true -authenticationProviders: [ provider1, provider2 ] - -``` - -For the *TLS Authentication* provider, follow the example below to add the necessary settings. -See [TLS Authentication](security-tls-authentication.md) for more details. - -``` - -brokerClientAuthenticationPlugin: org.apache.pulsar.client.impl.auth.AuthenticationTls -brokerClientAuthenticationParameters: tlsCertFile:/path/to/admin.cert.pem,tlsKeyFile:/path/to/admin.key-pk8.pem - -authenticationEnabled: true -authenticationProviders: ['org.apache.pulsar.broker.authentication.AuthenticationProviderTls'] - -``` - -For the *SASL Authentication* provider, add `saslJaasClientAllowedIds` and `saslJaasBrokerSectionName` -under `properties` if needed. - -``` - -properties: - saslJaasClientAllowedIds: .*pulsar.* - saslJaasBrokerSectionName: Broker - -``` - -For the *Token Authentication* provider, add the necessary settings under `properties` if needed. -See [Token Authentication](security-jwt.md) for more details. -Note: key files must be DER-encoded. - -``` - -properties: - tokenSecretKey: file://my/secret.key - # If using public/private - # tokenPublicKey: file:///path/to/public.key - -``` - -##### Enable Authorization Provider - -To enable authorization on Functions Worker, you need to configure `authorizationEnabled`, `authorizationProvider` and `configurationStoreServers`. The authorization provider connects to `configurationStoreServers` to receive namespace policies. - -```yaml - -authorizationEnabled: true -authorizationProvider: org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider -configurationStoreServers: - -``` - -You should also configure a list of superuser roles. The superuser roles are able to access any admin API. The following is a configuration example. - -```yaml - -superUserRoles: - - role1 - - role2 - - role3 - -``` - -##### Enable End-to-End Encryption - -You can use the public and private key pair that the application configures to perform encryption. Only the consumers with a valid key can decrypt the encrypted messages. - -To enable End-to-End encryption on Functions Worker, you can set it by specifying `--producer-config` in the command line terminal. For more information, refer to [here](security-encryption.md). - -The relevant configuration of `CryptoConfig` is included in `ProducerConfig`. The configurable fields of `CryptoConfig` are as follows: - -```text - -public class CryptoConfig { - private String cryptoKeyReaderClassName; - private Map cryptoKeyReaderConfig; - - private String[] encryptionKeys; - private ProducerCryptoFailureAction producerCryptoFailureAction; - - private ConsumerCryptoFailureAction consumerCryptoFailureAction; -} - -``` - -- `producerCryptoFailureAction`: defines the action to take if the producer fails to encrypt data; one of `FAIL` or `SEND`. -- `consumerCryptoFailureAction`: defines the action to take if the consumer fails to decrypt data; one of `FAIL`, `DISCARD`, or `CONSUME`. - -#### BookKeeper Authentication - -If authentication is enabled on the BookKeeper cluster, you need to configure the BookKeeper authentication settings as follows (a minimal example sketch follows the list): - -- `bookkeeperClientAuthenticationPlugin`: the plugin name of BookKeeper client authentication. -- `bookkeeperClientAuthenticationParametersName`: the plugin parameters name of BookKeeper client authentication. -- `bookkeeperClientAuthenticationParameters`: the plugin parameters of BookKeeper client authentication.
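- -For illustration, here is a minimal sketch of these three settings for a BookKeeper cluster secured with SASL. The plugin class and the parameter values below are assumptions chosen for the example rather than values this guide prescribes, so substitute whatever your BookKeeper cluster actually uses. - -```yaml - -# hypothetical example: a SASL-secured BookKeeper cluster -bookkeeperClientAuthenticationPlugin: org.apache.bookkeeper.sasl.SASLClientProviderFactory -# the parameters name and parameters below are assumed values for illustration -bookkeeperClientAuthenticationParametersName: saslJaasClientSectionName -bookkeeperClientAuthenticationParameters: BookKeeper - -```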
- -### Start Functions-worker - -Once you have finished configuring the `functions_worker.yml` configuration file, you can start a `functions-worker` in the background by using [nohup](https://en.wikipedia.org/wiki/Nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool: - -```bash - -bin/pulsar-daemon start functions-worker - -``` - -You can also start `functions-worker` in the foreground by using the `pulsar` CLI tool: - -```bash - -bin/pulsar functions-worker - -``` - -### Configure Proxies for Functions-workers - -When you are running `functions-worker` in a separate cluster, the admin REST endpoints are split between two clusters. `functions`, `function-worker`, `source` and `sink` endpoints are now served -by the `functions-worker` cluster, while all the other remaining endpoints are served by the broker cluster. -Hence, you need to configure your `pulsar-admin` to use the right service URL accordingly. - -To address this inconvenience, you can start a proxy cluster to route the admin REST requests accordingly. This gives you one central entry point for your admin service. - -If you already have a proxy cluster, continue reading. If you haven't set up a proxy cluster before, you can follow the [instructions](http://pulsar.apache.org/docs/en/administration-proxy/) to -start proxies. - -![assets/functions-worker-separated.png](/assets/functions-worker-separated-proxy.png) - -To enable routing functions-related admin requests to `functions-worker` in a proxy, you can edit the `proxy.conf` file to modify the following settings: - -```conf - -functionWorkerWebServiceURL= -functionWorkerWebServiceURLTLS= - -``` - -## Compare the Run-with-Broker and Run-separately modes - -As described above, you can run the Functions-worker with brokers, or run it separately. It is more convenient to run functions-workers along with brokers; however, running functions-workers in a separate cluster provides better resource isolation for running functions in `Process` or `Thread` mode. - -To determine which mode suits your case, refer to the following guidelines. - -Use the `Run-with-Broker` mode in the following cases: -- a) if resource isolation is not required when running functions in `Process` or `Thread` mode; -- b) if you configure the functions-worker to run functions on Kubernetes (where the resource isolation problem is addressed by Kubernetes). - -Use the `Run-separately` mode in the following cases: -- a) if you don't have a Kubernetes cluster; -- b) if you want to run functions and brokers separately. - -## Troubleshooting - -**Error message: Namespace missing local cluster name in clusters list** - -``` - -Failed to get partitioned topic metadata: org.apache.pulsar.client.api.PulsarClientException$BrokerMetadataException: Namespace missing local cluster name in clusters list: local_cluster=xyz ns=public/functions clusters=[standalone] - -``` - -The error message appears when either of the following cases occurs: -- a) a broker is started with `functionsWorkerEnabled=true`, but the `pulsarFunctionsCluster` is not set to the correct cluster in the `conf/functions_worker.yml` file; -- b) setting up a geo-replicated Pulsar cluster with `functionsWorkerEnabled=true`, while brokers in one cluster run well, brokers in the other cluster do not work well. - -**Workaround** - -If either of these cases happens, follow the instructions below to fix the problem: - -1. Disable Functions Worker by setting `functionsWorkerEnabled=false`, and restart brokers. - -2.
Get the current clusters list of `public/functions` namespace. - -```bash - -bin/pulsar-admin namespaces get-clusters public/functions - -``` - -3. Check if the cluster is in the clusters list. If the cluster is not in the list, add it to the list and update the clusters list. - -```bash - -bin/pulsar-admin namespaces set-clusters --clusters , public/functions - -``` - -4. After setting the cluster successfully, enable functions worker by setting `functionsWorkerEnabled=true`. - -5. Set the correct cluster name in `pulsarFunctionsCluster` in the `conf/functions_worker.yml` file, and restart brokers. diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/getting-started-concepts-and-architecture.md b/site2/website/versioned_docs/version-2.9.3-deprecated/getting-started-concepts-and-architecture.md deleted file mode 100644 index fe9c3fbc553b2c..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/getting-started-concepts-and-architecture.md +++ /dev/null @@ -1,16 +0,0 @@ ---- -id: concepts-architecture -title: Pulsar concepts and architecture -sidebar_label: "Concepts and architecture" -original_id: concepts-architecture ---- - - - - - - - - - - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/getting-started-docker.md b/site2/website/versioned_docs/version-2.9.3-deprecated/getting-started-docker.md deleted file mode 100644 index de5ead69e164b0..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/getting-started-docker.md +++ /dev/null @@ -1,211 +0,0 @@ ---- -id: getting-started-docker -title: Set up a standalone Pulsar in Docker -sidebar_label: "Run Pulsar in Docker" -original_id: getting-started-docker ---- - -For local development and testing, you can run Pulsar in standalone mode on your own machine within a Docker container. - -If you have not installed Docker, download the [Community edition](https://www.docker.com/community-edition) and follow the instructions for your OS. - -## Start Pulsar in Docker - -* For MacOS, Linux, and Windows: - - ```shell - - $ docker run -it -p 6650:6650 -p 8080:8080 --mount source=pulsardata,target=/pulsar/data --mount source=pulsarconf,target=/pulsar/conf apachepulsar/pulsar:@pulsar:version@ bin/pulsar standalone - - ``` - -A few things to note about this command: - * The data, metadata, and configuration are persisted on Docker volumes in order to not start "fresh" every -time the container is restarted. For details on the volumes you can use `docker volume inspect ` - * For Docker on Windows make sure to configure it to use Linux containers - -If you start Pulsar successfully, you will see `INFO`-level log messages like this: - -``` - -08:18:30.970 [main] INFO org.apache.pulsar.broker.web.WebService - HTTP Service started at http://0.0.0.0:8080 -... -07:53:37.322 [main] INFO org.apache.pulsar.broker.PulsarService - messaging service is ready, bootstrap service port = 8080, broker url= pulsar://localhost:6650, cluster=standalone, configs=org.apache.pulsar.broker.ServiceConfiguration@98b63c1 -... - -``` - -:::tip - -When you start a local standalone cluster, a `public/default` namespace is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics). - -::: - -## Use Pulsar in Docker - -Pulsar offers client libraries for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python.md) -and [C++](client-libraries-cpp.md). 
If you're running a local standalone cluster, you can -use one of these root URLs to interact with your cluster: - -* `pulsar://localhost:6650` -* `http://localhost:8080` - -The following example will guide you get started with Pulsar quickly by using the [Python client API](client-libraries-python.md) -client API. - -Install the Pulsar Python client library directly from [PyPI](https://pypi.org/project/pulsar-client/): - -```shell - -$ pip install pulsar-client - -``` - -### Consume a message - -Create a consumer and subscribe to the topic: - -```python - -import pulsar - -client = pulsar.Client('pulsar://localhost:6650') -consumer = client.subscribe('my-topic', - subscription_name='my-sub') - -while True: - msg = consumer.receive() - print("Received message: '%s'" % msg.data()) - consumer.acknowledge(msg) - -client.close() - -``` - -### Produce a message - -Now start a producer to send some test messages: - -```python - -import pulsar - -client = pulsar.Client('pulsar://localhost:6650') -producer = client.create_producer('my-topic') - -for i in range(10): - producer.send(('hello-pulsar-%d' % i).encode('utf-8')) - -client.close() - -``` - -## Get the topic statistics - -In Pulsar, you can use REST, Java, or command-line tools to control every aspect of the system. -For details on APIs, refer to [Admin API Overview](admin-api-overview.md). - -In the simplest example, you can use curl to probe the stats for a particular topic: - -```shell - -$ curl http://localhost:8080/admin/v2/persistent/public/default/my-topic/stats | python -m json.tool - -``` - -The output is something like this: - -```json - -{ - "msgRateIn": 0.0, - "msgThroughputIn": 0.0, - "msgRateOut": 1.8332950480217471, - "msgThroughputOut": 91.33142602871978, - "bytesInCounter": 7097, - "msgInCounter": 143, - "bytesOutCounter": 6607, - "msgOutCounter": 133, - "averageMsgSize": 0.0, - "msgChunkPublished": false, - "storageSize": 7097, - "backlogSize": 0, - "offloadedStorageSize": 0, - "publishers": [ - { - "accessMode": "Shared", - "msgRateIn": 0.0, - "msgThroughputIn": 0.0, - "averageMsgSize": 0.0, - "chunkedMessageRate": 0.0, - "producerId": 0, - "metadata": {}, - "address": "/127.0.0.1:35604", - "connectedSince": "2021-07-04T09:05:43.04788Z", - "clientVersion": "2.8.0", - "producerName": "standalone-2-5" - } - ], - "waitingPublishers": 0, - "subscriptions": { - "my-sub": { - "msgRateOut": 1.8332950480217471, - "msgThroughputOut": 91.33142602871978, - "bytesOutCounter": 6607, - "msgOutCounter": 133, - "msgRateRedeliver": 0.0, - "chunkedMessageRate": 0, - "msgBacklog": 0, - "backlogSize": 0, - "msgBacklogNoDelayed": 0, - "blockedSubscriptionOnUnackedMsgs": false, - "msgDelayed": 0, - "unackedMessages": 0, - "type": "Exclusive", - "activeConsumerName": "3c544f1daa", - "msgRateExpired": 0.0, - "totalMsgExpired": 0, - "lastExpireTimestamp": 0, - "lastConsumedFlowTimestamp": 1625389101290, - "lastConsumedTimestamp": 1625389546070, - "lastAckedTimestamp": 1625389546162, - "lastMarkDeleteAdvancedTimestamp": 1625389546163, - "consumers": [ - { - "msgRateOut": 1.8332950480217471, - "msgThroughputOut": 91.33142602871978, - "bytesOutCounter": 6607, - "msgOutCounter": 133, - "msgRateRedeliver": 0.0, - "chunkedMessageRate": 0.0, - "consumerName": "3c544f1daa", - "availablePermits": 867, - "unackedMessages": 0, - "avgMessagesPerEntry": 6, - "blockedConsumerOnUnackedMsgs": false, - "lastAckedTimestamp": 1625389546162, - "lastConsumedTimestamp": 1625389546070, - "metadata": {}, - "address": "/127.0.0.1:35472", - "connectedSince": 
"2021-07-04T08:58:21.287682Z", - "clientVersion": "2.8.0" - } - ], - "isDurable": true, - "isReplicated": false, - "allowOutOfOrderDelivery": false, - "consumersAfterMarkDeletePosition": {}, - "nonContiguousDeletedMessagesRanges": 0, - "nonContiguousDeletedMessagesRangesSerializedSize": 0, - "durable": true, - "replicated": false - } - }, - "replication": {}, - "deduplicationStatus": "Disabled", - "nonContiguousDeletedMessagesRanges": 0, - "nonContiguousDeletedMessagesRangesSerializedSize": 0 -} - -``` - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/getting-started-helm.md b/site2/website/versioned_docs/version-2.9.3-deprecated/getting-started-helm.md deleted file mode 100644 index 5e9f7044a6d74b..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/getting-started-helm.md +++ /dev/null @@ -1,447 +0,0 @@ ---- -id: getting-started-helm -title: Get started in Kubernetes -sidebar_label: "Run Pulsar in Kubernetes" -original_id: getting-started-helm ---- - -This section guides you through every step of installing and running Apache Pulsar with Helm on Kubernetes quickly, including the following sections: - -- Install the Apache Pulsar on Kubernetes using Helm -- Start and stop Apache Pulsar -- Create topics using `pulsar-admin` -- Produce and consume messages using Pulsar clients -- Monitor Apache Pulsar status with Prometheus and Grafana - -For deploying a Pulsar cluster for production usage, read the documentation on [how to configure and install a Pulsar Helm chart](helm-deploy.md). - -## Prerequisite - -- Kubernetes server 1.14.0+ -- kubectl 1.14.0+ -- Helm 3.0+ - -:::tip - -For the following steps, step 2 and step 3 are for **developers** and step 4 and step 5 are for **administrators**. - -::: - -## Step 0: Prepare a Kubernetes cluster - -Before installing a Pulsar Helm chart, you have to create a Kubernetes cluster. You can follow [the instructions](helm-prepare.md) to prepare a Kubernetes cluster. - -We use [Minikube](https://minikube.sigs.k8s.io/docs/start/) in this quick start guide. To prepare a Kubernetes cluster, follow these steps: - -1. Create a Kubernetes cluster on Minikube. - - ```bash - - minikube start --memory=8192 --cpus=4 --kubernetes-version= - - ``` - - The `` can be any [Kubernetes version supported by your Minikube installation](https://minikube.sigs.k8s.io/docs/reference/configuration/kubernetes/), such as `v1.16.1`. - -2. Set `kubectl` to use Minikube. - - ```bash - - kubectl config use-context minikube - - ``` - -3. To use the [Kubernetes Dashboard](https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/) with the local Kubernetes cluster on Minikube, enter the command below: - - ```bash - - minikube dashboard - - ``` - - The command automatically triggers opening a webpage in your browser. - -## Step 1: Install Pulsar Helm chart - -1. Add Pulsar charts repo. - - ```bash - - helm repo add apache https://pulsar.apache.org/charts - - ``` - - ```bash - - helm repo update - - ``` - -2. Clone the Pulsar Helm chart repository. - - ```bash - - git clone https://github.com/apache/pulsar-helm-chart - cd pulsar-helm-chart - - ``` - -3. Run the script `prepare_helm_release.sh` to create secrets required for installing the Apache Pulsar Helm chart. The username `pulsar` and password `pulsar` are used for logging into the Grafana dashboard and Pulsar Manager. 
- - :::note - - When running the script, you can use `-n` to specify the Kubernetes namespace where the Pulsar Helm chart is installed, `-k` to define the Pulsar Helm release name, and `-c` to create the Kubernetes namespace. For more information about the script, run `./scripts/pulsar/prepare_helm_release.sh --help`. - - ::: - - ```bash - - ./scripts/pulsar/prepare_helm_release.sh \ - -n pulsar \ - -k pulsar-mini \ - -c - - ``` - -4. Use the Pulsar Helm chart to install a Pulsar cluster to Kubernetes. - - :::note - - You need to specify `--set initialize=true` when installing Pulsar the first time. This command installs and starts Apache Pulsar. - - ::: - - ```bash - - helm install \ - --values examples/values-minikube.yaml \ - --set initialize=true \ - --namespace pulsar \ - pulsar-mini apache/pulsar - - ``` - -5. Check the status of all pods. - - ```bash - - kubectl get pods -n pulsar - - ``` - - If all pods start up successfully, you can see that the `STATUS` is changed to `Running` or `Completed`. - - **Output** - - ```bash - - NAME READY STATUS RESTARTS AGE - pulsar-mini-bookie-0 1/1 Running 0 9m27s - pulsar-mini-bookie-init-5gphs 0/1 Completed 0 9m27s - pulsar-mini-broker-0 1/1 Running 0 9m27s - pulsar-mini-grafana-6b7bcc64c7-4tkxd 1/1 Running 0 9m27s - pulsar-mini-prometheus-5fcf5dd84c-w8mgz 1/1 Running 0 9m27s - pulsar-mini-proxy-0 1/1 Running 0 9m27s - pulsar-mini-pulsar-init-t7cqt 0/1 Completed 0 9m27s - pulsar-mini-pulsar-manager-9bcbb4d9f-htpcs 1/1 Running 0 9m27s - pulsar-mini-toolset-0 1/1 Running 0 9m27s - pulsar-mini-zookeeper-0 1/1 Running 0 9m27s - - ``` - -6. Check the status of all services in the namespace `pulsar`. - - ```bash - - kubectl get services -n pulsar - - ``` - - **Output** - - ```bash - - NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - pulsar-mini-bookie ClusterIP None 3181/TCP,8000/TCP 11m - pulsar-mini-broker ClusterIP None 8080/TCP,6650/TCP 11m - pulsar-mini-grafana LoadBalancer 10.106.141.246 3000:31905/TCP 11m - pulsar-mini-prometheus ClusterIP None 9090/TCP 11m - pulsar-mini-proxy LoadBalancer 10.97.240.109 80:32305/TCP,6650:31816/TCP 11m - pulsar-mini-pulsar-manager LoadBalancer 10.103.192.175 9527:30190/TCP 11m - pulsar-mini-toolset ClusterIP None 11m - pulsar-mini-zookeeper ClusterIP None 2888/TCP,3888/TCP,2181/TCP 11m - - ``` - -## Step 2: Use pulsar-admin to create Pulsar tenants/namespaces/topics - -`pulsar-admin` is the CLI (command-Line Interface) tool for Pulsar. In this step, you can use `pulsar-admin` to create resources, including tenants, namespaces, and topics. - -1. Enter the `toolset` container. - - ```bash - - kubectl exec -it -n pulsar pulsar-mini-toolset-0 -- /bin/bash - - ``` - -2. In the `toolset` container, create a tenant named `apache`. - - ```bash - - bin/pulsar-admin tenants create apache - - ``` - - Then you can list the tenants to see if the tenant is created successfully. - - ```bash - - bin/pulsar-admin tenants list - - ``` - - You should see a similar output as below. The tenant `apache` has been successfully created. - - ```bash - - "apache" - "public" - "pulsar" - - ``` - -3. In the `toolset` container, create a namespace named `pulsar` in the tenant `apache`. - - ```bash - - bin/pulsar-admin namespaces create apache/pulsar - - ``` - - Then you can list the namespaces of tenant `apache` to see if the namespace is created successfully. - - ```bash - - bin/pulsar-admin namespaces list apache - - ``` - - You should see a similar output as below. The namespace `apache/pulsar` has been successfully created. 
- - ```bash - - "apache/pulsar" - - ``` - -4. In the `toolset` container, create a topic `test-topic` with `4` partitions in the namespace `apache/pulsar`. - - ```bash - - bin/pulsar-admin topics create-partitioned-topic apache/pulsar/test-topic -p 4 - - ``` - -5. In the `toolset` container, list all the partitioned topics in the namespace `apache/pulsar`. - - ```bash - - bin/pulsar-admin topics list-partitioned-topics apache/pulsar - - ``` - - Then you can see all the partitioned topics in the namespace `apache/pulsar`. - - ```bash - - "persistent://apache/pulsar/test-topic" - - ``` - -## Step 3: Use Pulsar client to produce and consume messages - -You can use the Pulsar client to create producers and consumers to produce and consume messages. - -By default, the Pulsar Helm chart exposes the Pulsar cluster through a Kubernetes `LoadBalancer`. In Minikube, you can use the following command to check the proxy service. - -```bash - -kubectl get services -n pulsar | grep pulsar-mini-proxy - -``` - -You will see a similar output as below. - -```bash - -pulsar-mini-proxy LoadBalancer 10.97.240.109 80:32305/TCP,6650:31816/TCP 28m - -``` - -This output tells what are the node ports that Pulsar cluster's binary port and HTTP port are mapped to. The port after `80:` is the HTTP port while the port after `6650:` is the binary port. - -Then you can find the IP address and exposed ports of your Minikube server by running the following command. - -```bash - -minikube service pulsar-mini-proxy -n pulsar - -``` - -**Output** - -```bash - -|-----------|-------------------|-------------|-------------------------| -| NAMESPACE | NAME | TARGET PORT | URL | -|-----------|-------------------|-------------|-------------------------| -| pulsar | pulsar-mini-proxy | http/80 | http://172.17.0.4:32305 | -| | | pulsar/6650 | http://172.17.0.4:31816 | -|-----------|-------------------|-------------|-------------------------| -🏃 Starting tunnel for service pulsar-mini-proxy. -|-----------|-------------------|-------------|------------------------| -| NAMESPACE | NAME | TARGET PORT | URL | -|-----------|-------------------|-------------|------------------------| -| pulsar | pulsar-mini-proxy | | http://127.0.0.1:61853 | -| | | | http://127.0.0.1:61854 | -|-----------|-------------------|-------------|------------------------| - -``` - -At this point, you can get the service URLs to connect to your Pulsar client. Here are URL examples: - -``` - -webServiceUrl=http://127.0.0.1:61853/ -brokerServiceUrl=pulsar://127.0.0.1:61854/ - -``` - -Then you can proceed with the following steps: - -1. Download the Apache Pulsar tarball from the [downloads page](https://pulsar.apache.org/download/). - -2. Decompress the tarball based on your download file. - - ```bash - - tar -xf .tar.gz - - ``` - -3. Expose `PULSAR_HOME`. - - (1) Enter the directory of the decompressed download file. - - (2) Expose `PULSAR_HOME` as the environment variable. - - ```bash - - export PULSAR_HOME=$(pwd) - - ``` - -4. Configure the Pulsar client. - - In the `${PULSAR_HOME}/conf/client.conf` file, replace `webServiceUrl` and `brokerServiceUrl` with the service URLs you get from the above steps. - -5. Create a subscription to consume messages from `apache/pulsar/test-topic`. - - ```bash - - bin/pulsar-client consume -s sub apache/pulsar/test-topic -n 0 - - ``` - -6. Open a new terminal. In the new terminal, create a producer and send 10 messages to the `test-topic` topic. 
- - ```bash - - bin/pulsar-client produce apache/pulsar/test-topic -m "---------hello apache pulsar-------" -n 10 - - ``` - -7. Verify the results. - - - From the producer side - - **Output** - - The messages have been produced successfully. - - ```bash - - 18:15:15.489 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 10 messages successfully produced - - ``` - - - From the consumer side - - **Output** - - At the same time, you can receive the messages as below. - - ```bash - - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - ----- got message ----- - ---------hello apache pulsar------- - - ``` - -## Step 4: Use Pulsar Manager to manage the cluster - -[Pulsar Manager](administration-pulsar-manager.md) is a web-based GUI management tool for managing and monitoring Pulsar. - -1. By default, the `Pulsar Manager` is exposed as a separate `LoadBalancer`. You can open the Pulsar Manager UI using the following command: - - ```bash - - minikube service -n pulsar pulsar-mini-pulsar-manager - - ``` - -2. The Pulsar Manager UI will be open in your browser. You can use the username `pulsar` and password `pulsar` to log into Pulsar Manager. - -3. In Pulsar Manager UI, you can create an environment. - - - Click `New Environment` button in the top-left corner. - - Type `pulsar-mini` for the field `Environment Name` in the popup window. - - Type `http://pulsar-mini-broker:8080` for the field `Service URL` in the popup window. - - Click `Confirm` button in the popup window. - -4. After successfully creating an environment, you are redirected to the `tenants` page of that environment. Then you can create `tenants`, `namespaces` and `topics` using the Pulsar Manager. - -## Step 5: Use Prometheus and Grafana to monitor cluster - -Grafana is an open-source visualization tool, which can be used for visualizing time series data into dashboards. - -1. By default, the Grafana is exposed as a separate `LoadBalancer`. You can open the Grafana UI using the following command: - - ```bash - - minikube service pulsar-mini-grafana -n pulsar - - ``` - -2. The Grafana UI is open in your browser. You can use the username `pulsar` and password `pulsar` to log into the Grafana Dashboard. - -3. You can view dashboards for different components of a Pulsar cluster. diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/getting-started-pulsar.md b/site2/website/versioned_docs/version-2.9.3-deprecated/getting-started-pulsar.md deleted file mode 100644 index 752590f57b5585..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/getting-started-pulsar.md +++ /dev/null @@ -1,72 +0,0 @@ ---- -id: pulsar-2.0 -title: Pulsar 2.0 -sidebar_label: "Pulsar 2.0" -original_id: pulsar-2.0 ---- - -Pulsar 2.0 is a major new release for Pulsar that brings some bold changes to the platform, including [simplified topic names](#topic-names), the addition of the [Pulsar Functions](functions-overview.md) feature, some terminology changes, and more. 
- -## New features in Pulsar 2.0 - -Feature | Description -:-------|:----------- -[Pulsar Functions](functions-overview.md) | A lightweight compute option for Pulsar - -## Major changes - -There are a few major changes that you should be aware of, as they may significantly impact your day-to-day usage. - -### Properties versus tenants - -Previously, Pulsar had a concept of properties. A property is essentially the exact same thing as a tenant, so the "property" terminology has been removed in version 2.0. The [`pulsar-admin properties`](reference-pulsar-admin.md#pulsar-admin) command-line interface, for example, has been replaced with the [`pulsar-admin tenants`](reference-pulsar-admin.md#pulsar-admin-tenants) interface. In some cases the properties terminology is still used but is now considered deprecated and will be removed entirely in a future release. - -### Topic names - -Prior to version 2.0, *all* Pulsar topics had the following form: - -```http - -{persistent|non-persistent}://property/cluster/namespace/topic - -``` - -Several important changes have been made in Pulsar 2.0: - -* There is no longer a [cluster component](#no-cluster-component) -* Properties have been [renamed to tenants](#properties-versus-tenants) -* You can use a [flexible](#flexible-topic-naming) naming system to shorten many topic names -* `/` is not allowed in topic names - -#### No cluster component - -The cluster component has been removed from topic names. Thus, all topic names now have the following form: - -```http - -{persistent|non-persistent}://tenant/namespace/topic - -``` - -> Existing topics that use the legacy name format will continue to work without any change, and there are no plans to change that. - - -#### Flexible topic naming - -All topic names in Pulsar 2.0 internally have the form shown [above](#no-cluster-component) but you can now use shorthand names in many cases (for the sake of simplicity). The flexible naming system stems from the fact that there is now a default topic type, tenant, and namespace: - -Topic aspect | Default -:------------|:------- -topic type | `persistent` -tenant | `public` -namespace | `default` - -The table below shows some example topic name translations that use implicit defaults: - -Input topic name | Translated topic name -:----------------|:--------------------- -`my-topic` | `persistent://public/default/my-topic` -`my-tenant/my-namespace/my-topic` | `persistent://my-tenant/my-namespace/my-topic` - -> For [non-persistent topics](concepts-messaging.md#non-persistent-topics) you'll need to continue to specify the entire topic name, as the default-based rules for persistent topic names don't apply. Thus you cannot use a shorthand name like `non-persistent://my-topic` and would need to use `non-persistent://public/default/my-topic` instead - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/getting-started-standalone.md b/site2/website/versioned_docs/version-2.9.3-deprecated/getting-started-standalone.md deleted file mode 100644 index 573a33771d64ea..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/getting-started-standalone.md +++ /dev/null @@ -1,268 +0,0 @@ ---- -id: getting-started-standalone -title: Set up a standalone Pulsar locally -sidebar_label: "Run Pulsar locally" -original_id: getting-started-standalone ---- - -For local development and testing, you can run Pulsar in standalone mode on your machine.
The standalone mode includes a Pulsar broker, the necessary ZooKeeper and BookKeeper components running inside of a single Java Virtual Machine (JVM) process. - -> **Pulsar in production?** -> If you're looking to run a full production Pulsar installation, see the [Deploying a Pulsar instance](deploy-bare-metal.md) guide. - -## Install Pulsar standalone - -This tutorial guides you through every step of installing Pulsar locally. - -### System requirements - -Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install 64-bit JRE/JDK 8 or later versions. - -:::tip - -By default, Pulsar allocates 2G JVM heap memory to start. It can be changed in the `conf/pulsar_env.sh` file under `PULSAR_MEM`. These are extra options passed into the JVM. - -::: - -:::note - -Broker is only supported on 64-bit JVM. - -::: - -### Install Pulsar using binary release - -To get started with Pulsar, download a binary tarball release in one of the following ways: - -* download from the Apache mirror (Pulsar @pulsar:version@ binary release) - -* download from the Pulsar [downloads page](pulsar:download_page_url) - -* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) - -* use [wget](https://www.gnu.org/software/wget): - - ```shell - - $ wget pulsar:binary_release_url - - ``` - -After you download the tarball, untar it and use the `cd` command to navigate to the resulting directory: - -```bash - -$ tar xvfz apache-pulsar-@pulsar:version@-bin.tar.gz -$ cd apache-pulsar-@pulsar:version@ - -``` - -#### What your package contains - -The Pulsar binary package initially contains the following directories: - -Directory | Contains -:---------|:-------- -`bin` | Pulsar's command-line tools, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/). -`conf` | Configuration files for Pulsar, including [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more. -`examples` | A Java JAR file containing a [Pulsar Functions](functions-overview.md) example. -`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files used by Pulsar. -`licenses` | License files, in the `.txt` form, for various components of the Pulsar [codebase](https://github.com/apache/pulsar). - -These directories are created once you begin running Pulsar. - -Directory | Contains -:---------|:-------- -`data` | The data storage directory used by ZooKeeper and BookKeeper. -`instances` | Artifacts created for [Pulsar Functions](functions-overview.md). -`logs` | Logs created by the installation. - -:::tip - -If you want to use builtin connectors and tiered storage offloaders, you can install them according to the following instructions: -* [Install builtin connectors (optional)](#install-builtin-connectors-optional) -* [Install tiered storage offloaders (optional)](#install-tiered-storage-offloaders-optional) -Otherwise, skip this step and perform the next step [Start Pulsar standalone](#start-pulsar-standalone). Pulsar can be successfully installed without installing builtin connectors and tiered storage offloaders. - -::: - -### Install builtin connectors (optional) - -Since the `2.1.0-incubating` release, Pulsar releases a separate binary distribution, containing all the `builtin` connectors.
-To enable those `builtin` connectors, you can download the connectors tarball release in one of the following ways: - -* download from the Apache mirror Pulsar IO Connectors @pulsar:version@ release - -* download from the Pulsar [downloads page](pulsar:download_page_url) - -* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) - -* use [wget](https://www.gnu.org/software/wget): - - ```shell - - $ wget pulsar:connector_release_url/{connector}-@pulsar:version@.nar - - ``` - -After you download the nar file, copy the file to the `connectors` directory in the pulsar directory. -For example, if you download the `pulsar-io-aerospike-@pulsar:version@.nar` connector file, enter the following commands: - -```bash - -$ mkdir connectors -$ mv pulsar-io-aerospike-@pulsar:version@.nar connectors - -$ ls connectors -pulsar-io-aerospike-@pulsar:version@.nar -... - -``` - -:::note - -* If you are running Pulsar in a bare metal cluster, make sure `connectors` tarball is unzipped in every pulsar directory of the broker (or in every pulsar directory of function-worker if you are running a separate worker cluster for Pulsar Functions). -* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DC/OS](https://dcos.io/)), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled [all builtin connectors](io-overview.md#working-with-connectors). - -::: - -### Install tiered storage offloaders (optional) - -:::tip - -- Since `2.2.0` release, Pulsar releases a separate binary distribution, containing the tiered storage offloaders. -- To enable tiered storage feature, follow the instructions below; otherwise skip this section. - -::: - -To get started with [tiered storage offloaders](concepts-tiered-storage.md), you need to download the offloaders tarball release on every broker node in one of the following ways: - -* download from the Apache mirror Pulsar Tiered Storage Offloaders @pulsar:version@ release - -* download from the Pulsar [downloads page](pulsar:download_page_url) - -* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) - -* use [wget](https://www.gnu.org/software/wget): - - ```shell - - $ wget pulsar:offloader_release_url - - ``` - -After you download the tarball, untar the offloaders package and copy the offloaders as `offloaders` -in the pulsar directory: - -```bash - -$ tar xvfz apache-pulsar-offloaders-@pulsar:version@-bin.tar.gz - -// you will find a directory named `apache-pulsar-offloaders-@pulsar:version@` in the pulsar directory -// then copy the offloaders - -$ mv apache-pulsar-offloaders-@pulsar:version@/offloaders offloaders - -$ ls offloaders -tiered-storage-jcloud-@pulsar:version@.nar - -``` - -For more information on how to configure tiered storage, see [Tiered storage cookbook](cookbooks-tiered-storage.md). - -:::note - -* If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's pulsar directory. -* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DC/OS](https://dcos.io/)), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. 
- -::: - -## Start Pulsar standalone - -Once you have an up-to-date local copy of the release, you can start a local cluster using the [`pulsar`](reference-cli-tools.md#pulsar) command, which is stored in the `bin` directory, and specifying that you want to start Pulsar in standalone mode. - -```bash - -$ bin/pulsar standalone - -``` - -If you have started Pulsar successfully, you will see `INFO`-level log messages like this: - -```bash - -21:59:29.327 [DLM-/stream/storage-OrderedScheduler-3-0] INFO org.apache.bookkeeper.stream.storage.impl.sc.StorageContainerImpl - Successfully started storage container (0). -21:59:34.576 [main] INFO org.apache.pulsar.broker.authentication.AuthenticationService - Authentication is disabled -21:59:34.576 [main] INFO org.apache.pulsar.websocket.WebSocketService - Pulsar WebSocket Service started - -``` - -:::tip - -* The service is running on your terminal, which is under your direct control. If you need to run other commands, open a new terminal window. - -::: - -You can also run the service as a background process using the `pulsar-daemon start standalone` command. For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon). -> -> * By default, there is no encryption, authentication, or authorization configured. Apache Pulsar can be accessed from remote servers without any authorization. Please check the [Security Overview](security-overview.md) document to secure your deployment. -> -> * When you start a local standalone cluster, a `public/default` [namespace](concepts-messaging.md#namespaces) is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics). - -## Use Pulsar standalone - -Pulsar provides a CLI tool called [`pulsar-client`](reference-cli-tools.md#pulsar-client). The pulsar-client tool enables you to consume and produce messages to a Pulsar topic in a running cluster. - -### Consume a message - -The following command consumes a message with the subscription name `first-subscription` from the `my-topic` topic: - -```bash - -$ bin/pulsar-client consume my-topic -s "first-subscription" - -``` - -If the message has been successfully consumed, you will see a confirmation like the following in the `pulsar-client` logs: - -``` - -22:17:16.781 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully consumed - -``` - -:::tip - -As you may have noticed, we do not explicitly create the `my-topic` topic from which we consume the message. When you consume a message from a topic that does not yet exist, Pulsar creates that topic for you automatically. Producing a message to a topic that does not exist will automatically create that topic for you as well. - -::: - -### Produce a message - -The following command produces a message saying `hello-pulsar` to the `my-topic` topic: - -```bash - -$ bin/pulsar-client produce my-topic --messages "hello-pulsar" - -``` - -If the message has been successfully published to the topic, you will see a confirmation like the following in the `pulsar-client` logs: - -``` - -22:21:08.693 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully produced - -``` - -## Stop Pulsar standalone - -Press `Ctrl+C` to stop a local standalone Pulsar.
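- -If you started Pulsar as a background process instead, the start and stop commands pair up as in the sketch below, which simply collects the `pulsar-daemon` commands referenced on this page (see also the tip that follows). - -```bash - -# start Pulsar standalone as a background process -$ bin/pulsar-daemon start standalone - -# stop the background standalone service -$ bin/pulsar-daemon stop standalone - -```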
- -:::tip - -If the service runs as a background process using the `pulsar-daemon start standalone` command, then use the `pulsar-daemon stop standalone` command to stop the service. -For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon). - -::: - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/helm-deploy.md b/site2/website/versioned_docs/version-2.9.3-deprecated/helm-deploy.md deleted file mode 100644 index 0e7815e4f4d90b..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/helm-deploy.md +++ /dev/null @@ -1,434 +0,0 @@ ---- -id: helm-deploy -title: Deploy Pulsar cluster using Helm -sidebar_label: "Deployment" -original_id: helm-deploy ---- - -Before running `helm install`, you need to decide how to run Pulsar. -Options can be specified using Helm's `--set option.name=value` command line option. - -## Select configuration options - -In each section, collect the options that are combined to use with the `helm install` command. - -### Kubernetes namespace - -By default, the Pulsar Helm chart is installed to a namespace called `pulsar`. - -```yaml - -namespace: pulsar - -``` - -To install the Pulsar Helm chart into a different Kubernetes namespace, you can include this option in the `helm install` command. - -```bash - ---set namespace= - -``` - -By default, the Pulsar Helm chart doesn't create the namespace. - -```yaml - -namespaceCreate: false - -``` - -To use the Pulsar Helm chart to create the Kubernetes namespace automatically, you can include this option in the `helm install` command. - -```bash - ---set namespaceCreate=true - -``` - -### Persistence - -By default, the Pulsar Helm chart creates Volume Claims with the expectation that a dynamic provisioner creates the underlying Persistent Volumes. - -```yaml - -volumes: - persistence: true - # configure the components to use local persistent volume - # the local provisioner should be installed prior to enable local persistent volume - local_storage: false - -``` - -To use local persistent volumes as the persistent storage for Helm release, you can install the [local storage provisioner](#install-local-storage-provisioner) and include the following option in the `helm install` command. - -```bash - ---set volumes.local_storage=true - -``` - -:::note - -Before installing the production instance of Pulsar, ensure to plan the storage settings to avoid extra storage migration work. Because after initial installation, you must edit Kubernetes objects manually if you want to change storage settings. - -::: - -The Pulsar Helm chart is designed for production use. To use the Pulsar Helm chart in a development environment (such as Minikube), you can disable persistence by including this option in your `helm install` command. - -```bash - ---set volumes.persistence=false - -``` - -### Affinity - -By default, `anti-affinity` is enabled to ensure pods of the same component can run on different nodes. - -```yaml - -affinity: - anti_affinity: true - -``` - -To use the Pulsar Helm chart in a development environment (such as Minikube), you can disable `anti-affinity` by including this option in your `helm install` command. - -```bash - ---set affinity.anti_affinity=false - -``` - -### Components - -The Pulsar Helm chart is designed for production usage. It deploys a production-ready Pulsar cluster, including Pulsar core components and monitoring components. - -You can customize the components to be deployed by turning on/off individual components. 
- -```yaml - -## Components -## -## Control what components of Apache Pulsar to deploy for the cluster -components: - # zookeeper - zookeeper: true - # bookkeeper - bookkeeper: true - # bookkeeper - autorecovery - autorecovery: true - # broker - broker: true - # functions - functions: true - # proxy - proxy: true - # toolset - toolset: true - # pulsar manager - pulsar_manager: true - -## Monitoring Components -## -## Control what components of the monitoring stack to deploy for the cluster -monitoring: - # monitoring - prometheus - prometheus: true - # monitoring - grafana - grafana: true - -``` - -### Docker images - -The Pulsar Helm chart is designed to enable controlled upgrades. So it can configure independent image versions for components. You can customize the images by setting individual component. - -```yaml - -## Images -## -## Control what images to use for each component -images: - zookeeper: - repository: apachepulsar/pulsar-all - tag: 2.5.0 - pullPolicy: IfNotPresent - bookie: - repository: apachepulsar/pulsar-all - tag: 2.5.0 - pullPolicy: IfNotPresent - autorecovery: - repository: apachepulsar/pulsar-all - tag: 2.5.0 - pullPolicy: IfNotPresent - broker: - repository: apachepulsar/pulsar-all - tag: 2.5.0 - pullPolicy: IfNotPresent - proxy: - repository: apachepulsar/pulsar-all - tag: 2.5.0 - pullPolicy: IfNotPresent - functions: - repository: apachepulsar/pulsar-all - tag: 2.5.0 - prometheus: - repository: prom/prometheus - tag: v1.6.3 - pullPolicy: IfNotPresent - grafana: - repository: streamnative/apache-pulsar-grafana-dashboard-k8s - tag: 0.0.4 - pullPolicy: IfNotPresent - pulsar_manager: - repository: apachepulsar/pulsar-manager - tag: v0.1.0 - pullPolicy: IfNotPresent - hasCommand: false - -``` - -### TLS - -The Pulsar Helm chart can be configured to enable TLS (Transport Layer Security) to protect all the traffic between components. Before enabling TLS, you have to provision TLS certificates for the required components. - -#### Provision TLS certificates using cert-manager - -To use the `cert-manager` to provision the TLS certificates, you have to install the [cert-manager](#install-cert-manager) before installing the Pulsar Helm chart. After successfully installing the cert-manager, you can set `certs.internal_issuer.enabled` to `true`. Therefore, the Pulsar Helm chart can use the `cert-manager` to generate `selfsigning` TLS certificates for the configured components. - -```yaml - -certs: - internal_issuer: - enabled: false - component: internal-cert-issuer - type: selfsigning - -``` - -You can also customize the generated TLS certificates by configuring the fields as the following. - -```yaml - -tls: - # common settings for generating certs - common: - # 90d - duration: 2160h - # 15d - renewBefore: 360h - organization: - - pulsar - keySize: 4096 - keyAlgorithm: rsa - keyEncoding: pkcs8 - -``` - -#### Enable TLS - -After installing the `cert-manager`, you can set `tls.enabled` to `true` to enable TLS encryption for the entire cluster. - -```yaml - -tls: - enabled: false - -``` - -You can also configure whether to enable TLS encryption for individual component. 
- -```yaml - -tls: - # settings for generating certs for proxy - proxy: - enabled: false - cert_name: tls-proxy - # settings for generating certs for broker - broker: - enabled: false - cert_name: tls-broker - # settings for generating certs for bookies - bookie: - enabled: false - cert_name: tls-bookie - # settings for generating certs for zookeeper - zookeeper: - enabled: false - cert_name: tls-zookeeper - # settings for generating certs for recovery - autorecovery: - cert_name: tls-recovery - # settings for generating certs for toolset - toolset: - cert_name: tls-toolset - -``` - -### Authentication - -By default, authentication is disabled. You can set `auth.authentication.enabled` to `true` to enable authentication. -Currently, the Pulsar Helm chart only supports JWT authentication provider. You can set `auth.authentication.provider` to `jwt` to use the JWT authentication provider. - -```yaml - -# Enable or disable broker authentication and authorization. -auth: - authentication: - enabled: false - provider: "jwt" - jwt: - # Enable JWT authentication - # If the token is generated by a secret key, set the usingSecretKey as true. - # If the token is generated by a private key, set the usingSecretKey as false. - usingSecretKey: false - superUsers: - # broker to broker communication - broker: "broker-admin" - # proxy to broker communication - proxy: "proxy-admin" - # pulsar-admin client to broker/proxy communication - client: "admin" - -``` - -To enable authentication, you can run [prepare helm release](#prepare-the-helm-release) to generate token secret keys and tokens for three super users specified in the `auth.superUsers` field. The generated token keys and super user tokens are uploaded and stored as Kubernetes secrets prefixed with `-token-`. You can use the following command to find those secrets. - -```bash - -kubectl get secrets -n - -``` - -### Authorization - -By default, authorization is disabled. Authorization can be enabled only when authentication is enabled. - -```yaml - -auth: - authorization: - enabled: false - -``` - -To enable authorization, you can include this option in the `helm install` command. - -```bash - ---set auth.authorization.enabled=true - -``` - -### CPU and RAM resource requirements - -By default, the resource requests and the number of replicas for the Pulsar components in the Pulsar Helm chart are adequate for a small production deployment. If you deploy a non-production instance, you can reduce the defaults to fit into a smaller cluster. - -Once you have all of your configuration options collected, you can install dependent charts before installing the Pulsar Helm chart. - -## Install dependent charts - -### Install local storage provisioner - -To use local persistent volumes as the persistent storage, you need to install a storage provisioner for [local persistent volumes](https://kubernetes.io/blog/2019/04/04/kubernetes-1.14-local-persistent-volumes-ga/). - -One of the easiest way to get started is to use the local storage provisioner provided along with the Pulsar Helm chart. - -``` - -helm repo add streamnative https://charts.streamnative.io -helm repo update -helm install pulsar-storage-provisioner streamnative/local-storage-provisioner - -``` - -### Install cert-manager - -The Pulsar Helm chart uses the [cert-manager](https://github.com/jetstack/cert-manager) to provision and manage TLS certificates automatically. To enable TLS encryption for brokers or proxies, you need to install the cert-manager in advance. 
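-
-For reference, a typical Helm-based installation looks like the following. This is a sketch based on cert-manager's own public Helm instructions rather than anything the Pulsar chart automates, so prefer the official steps linked below if they differ:
-
-```bash
-
-# Install cert-manager (including its CRDs) into its own namespace.
-helm repo add jetstack https://charts.jetstack.io
-helm repo update
-helm install cert-manager jetstack/cert-manager \
-    --namespace cert-manager \
-    --create-namespace \
-    --set installCRDs=true
-
-```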
-
-For details about how to install the cert-manager, follow the [official instructions](https://cert-manager.io/docs/installation/kubernetes/#installing-with-helm).
-
-Alternatively, we provide a bash script [install-cert-manager.sh](https://github.com/apache/pulsar-helm-chart/blob/master/scripts/cert-manager/install-cert-manager.sh) to install a cert-manager release to the namespace `cert-manager`.
-
-```bash
-
-git clone https://github.com/apache/pulsar-helm-chart
-cd pulsar-helm-chart
-./scripts/cert-manager/install-cert-manager.sh
-
-```
-
-## Prepare Helm release
-
-Once you have installed all the dependent charts and collected all of your configuration options, you can run [prepare_helm_release.sh](https://github.com/apache/pulsar-helm-chart/blob/master/scripts/pulsar/prepare_helm_release.sh) to prepare the Helm release.
-
-```bash
-
-git clone https://github.com/apache/pulsar-helm-chart
-cd pulsar-helm-chart
-./scripts/pulsar/prepare_helm_release.sh -n <k8s-namespace> -k <release-name>
-
-```
-
-The `prepare_helm_release.sh` script creates the following resources:
-
-- A Kubernetes namespace for installing the Pulsar release
-- JWT secret keys and tokens for three super users: `broker-admin`, `proxy-admin`, and `admin`. By default, it generates an asymmetric public/private key pair. You can choose to generate a symmetric secret key by specifying `--symmetric`.
-  - `proxy-admin` role is used for proxies to communicate to brokers.
-  - `broker-admin` role is used for inter-broker communications.
-  - `admin` role is used by the admin tools.
-
-## Deploy Pulsar cluster using Helm
-
-Once you have finished the following three things, you can install a Helm release.
-
-- Collect all of your configuration options.
-- Install dependent charts.
-- Prepare the Helm release.
-
-In this example, the Helm release is named `pulsar`.
-
-```bash
-
-helm repo add apache https://pulsar.apache.org/charts
-helm repo update
-helm install pulsar apache/pulsar \
-    --timeout 10m \
-    --set initialize=true \
-    --set [your configuration options]
-
-```
-
-:::note
-
-For the first deployment, add the `--set initialize=true` option to initialize bookie and Pulsar cluster metadata.
-
-:::
-
-You can also use the `--version <chart-version>` option if you want to install a specific version of the Pulsar Helm chart.
-
-## Monitor deployment
-
-A list of installed resources is output once the Pulsar cluster is deployed. This may take 5-10 minutes.
-
-The status of the deployment can be checked by running the `helm status pulsar` command, which can also be done while the deployment is taking place if you run the command in another terminal.
-
-## Access Pulsar cluster
-
-The default values will create a `ClusterIP` for the following resources, which you can use to interact with the cluster.
-
-- Proxy: You can use the IP address to produce and consume messages to and from the installed Pulsar cluster.
-- Pulsar Manager: You can access the Pulsar Manager UI at `http://<pulsar-manager-ip>:9527`.
-- Grafana Dashboard: You can access the Grafana dashboard at `http://<grafana-dashboard-ip>:3000`.
-
-To find the IP addresses of those components, run the following command:
-
-```bash
-
-kubectl get service -n <k8s-namespace>
-
-```
-
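-If the `ClusterIP` services are not reachable from your workstation, `kubectl port-forward` is a handy stopgap for ad-hoc access. A sketch: the service names below assume a release named `pulsar` installed in the `pulsar` namespace and may differ in your deployment:
-
-```bash
-
-# Forward the proxy's binary and HTTP ports to localhost.
-kubectl port-forward --namespace pulsar service/pulsar-proxy 6650:6650 8080:8080
-
-# In another terminal, forward the Grafana dashboard to localhost:3000.
-kubectl port-forward --namespace pulsar service/pulsar-grafana 3000:3000
-
-```
-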
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/helm-install.md b/site2/website/versioned_docs/version-2.9.3-deprecated/helm-install.md
deleted file mode 100644
index 9f81f52e0dab18..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/helm-install.md
+++ /dev/null
@@ -1,38 +0,0 @@
----
-id: helm-install
-title: Install Apache Pulsar using Helm
-sidebar_label: "Install"
-original_id: helm-install
----
-
-Install Apache Pulsar on Kubernetes with the official Pulsar Helm chart.
-
-## Requirements
-
-To deploy Apache Pulsar on Kubernetes, the following tools are required.
-
-- kubectl 1.14 or higher, compatible with your cluster ([+/- 1 minor release from your cluster](https://kubernetes.io/docs/tasks/tools/install-kubectl/#before-you-begin))
-- Helm v3 (3.0.2 or higher)
-- A Kubernetes cluster, version 1.14 or higher
-
-## Environment setup
-
-Before deploying Pulsar, you need to prepare your environment.
-
-### Tools
-
-Install [`helm`](helm-tools.md) and [`kubectl`](helm-tools.md) on your computer.
-
-## Cloud cluster preparation
-
-To create and connect to the Kubernetes cluster, follow these instructions:
-
-- [Google Kubernetes Engine](helm-prepare.md#google-kubernetes-engine)
-
-## Pulsar deployment
-
-Once the environment is set up and the configuration is generated, you can proceed to the [deployment of Pulsar](helm-deploy.md).
-
-## Pulsar upgrade
-
-To upgrade an existing Kubernetes installation, follow the [upgrade documentation](helm-upgrade.md).
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/helm-overview.md b/site2/website/versioned_docs/version-2.9.3-deprecated/helm-overview.md
deleted file mode 100644
index 125f595cbe68a3..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/helm-overview.md
+++ /dev/null
@@ -1,103 +0,0 @@
----
-id: helm-overview
-title: Apache Pulsar Helm Chart
-sidebar_label: "Overview"
-original_id: helm-overview
----
-
-The [Helm chart](https://github.com/apache/pulsar-helm-chart) helps you install Apache Pulsar in a cloud-native environment.
-
-## Introduction
-
-The Apache Pulsar Helm chart provides one of the most convenient ways to operate Pulsar on Kubernetes. It ships with all the required components and scales well, which makes it suitable for large-scale deployments.
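-
-To see everything the chart exposes for configuration before you commit to an install, you can dump its default values. A quick sketch, assuming the `apache` Helm repository has already been added as shown in the installation steps:
-
-```bash
-
-# Print the chart's default values and save them for review.
-helm show values apache/pulsar > default-values.yaml
-
-```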
-
-The Apache Pulsar Helm chart contains all components to support the features and functions that Pulsar delivers. You can install and configure these components separately.
-
-- Pulsar core components:
-  - ZooKeeper
-  - Bookies
-  - Brokers
-  - Function workers
-  - Proxies
-- Control center:
-  - Pulsar Manager
-  - Prometheus
-  - Grafana
-
-Moreover, the Helm chart supports:
-
-- Security
-  - Automatically provisioned TLS certificates, using [Jetstack](https://www.jetstack.io/)'s [cert-manager](https://cert-manager.io/docs/)
-    - self-signed
-    - [Let's Encrypt](https://letsencrypt.org/)
-  - TLS Encryption
-    - Proxy
-    - Broker
-    - Toolset
-    - Bookie
-    - ZooKeeper
-  - Authentication
-    - JWT
-  - Authorization
-- Storage
-  - Non-persistent storage
-  - Persistent volume
-  - Local persistent volumes
-- Functions
-  - Kubernetes Runtime
-  - Process Runtime
-  - Thread Runtime
-- Operations
-  - Independent image versions for all components, enabling controlled upgrades
-
-## Quick start
-
-To get Pulsar running with the Apache Pulsar Helm chart as fast as possible in a **non-production** use case, we provide a [quick start guide](getting-started-helm.md) for Proof of Concept (PoC) deployments.
-
-This guide walks you through deploying the Apache Pulsar Helm chart with default values and features, but it is *not* suitable for deployments in production-ready environments. To deploy the charts in production under sustained load, you can follow the complete [Installation Guide](helm-install.md).
-
-## Troubleshooting
-
-Although we have done our best to make these charts as seamless as possible, issues beyond our control do come up occasionally. We have been collecting tips and tricks for troubleshooting common issues. Please check them first before raising an [issue](https://github.com/apache/pulsar/issues/new/choose), and feel free to add your solutions by creating a [Pull Request](https://github.com/apache/pulsar/compare).
-
-## Installation
-
-The Apache Pulsar Helm chart contains all required dependencies.
-
-If you deploy a PoC for testing, we strongly suggest you follow this [Quick Start Guide](getting-started-helm.md) for your first iteration.
-
-1. [Preparation](helm-prepare.md)
-2. [Deployment](helm-deploy.md)
-
-## Upgrading
-
-Once the Apache Pulsar Helm chart is installed, you can use the `helm upgrade` command to configure and update it.
-
-```bash
-
-helm repo add apache https://pulsar.apache.org/charts
-helm repo update
-helm get values <release-name> > pulsar.yaml
-helm upgrade <release-name> apache/pulsar -f pulsar.yaml
-
-```
-
-For more detailed information, see [Upgrading](helm-upgrade.md).
-
-## Uninstallation
-
-To uninstall the Apache Pulsar Helm chart, run the following command:
-
-```bash
-
-helm delete <release-name>
-
-```
-
-For the purposes of continuity, some Kubernetes objects in these charts cannot be removed by the `helm delete` command. It is recommended to *consciously* remove these items, as they affect re-deployment.
-
-* PVCs for stateful data: remove these items.
-  - ZooKeeper: This is your metadata.
-  - BookKeeper: This is your data.
-  - Prometheus: This is your metrics data, which can be safely removed.
-* Secrets: if the secrets are generated by the [prepare release script](https://github.com/apache/pulsar-helm-chart/blob/master/scripts/pulsar/prepare_helm_release.sh), they contain secret keys and tokens. You can use the [cleanup release script](https://github.com/apache/pulsar-helm-chart/blob/master/scripts/pulsar/cleanup_helm_release.sh) to remove these secrets and tokens as needed (a consolidated sketch follows below).
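-
-Putting the uninstallation steps together, here is a minimal end-to-end cleanup sketch. It assumes the release and namespace are both named `pulsar`, that you have decided the persisted data can be discarded, and that the cleanup script accepts the same `-n`/`-k` flags as the prepare script (check the script's usage before running):
-
-```bash
-
-# Uninstall the release itself.
-helm delete pulsar
-
-# Remove the PVCs holding ZooKeeper, BookKeeper, and Prometheus data.
-# Only do this if you are sure the data is no longer needed.
-kubectl delete pvc --namespace pulsar --all
-
-# Remove the token secrets created by the prepare release script.
-git clone https://github.com/apache/pulsar-helm-chart
-cd pulsar-helm-chart
-./scripts/pulsar/cleanup_helm_release.sh -n pulsar -k pulsar
-
-```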
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/helm-prepare.md b/site2/website/versioned_docs/version-2.9.3-deprecated/helm-prepare.md deleted file mode 100644 index e5d56c7e95e34b..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/helm-prepare.md +++ /dev/null @@ -1,80 +0,0 @@ ---- -id: helm-prepare -title: Prepare Kubernetes resources -sidebar_label: "Prepare" -original_id: helm-prepare ---- - -For a fully functional Pulsar cluster, you need a few resources before deploying the Apache Pulsar Helm chart. The following provides instructions to prepare the Kubernetes cluster before deploying the Pulsar Helm chart. - -- [Google Kubernetes Engine](#google-kubernetes-engine) - - [Manual cluster creation](#manual-cluster-creation) - - [Scripted cluster creation](#scripted-cluster-creation) - - [Create cluster with local SSDs](#create-cluster-with-local-ssds) - -## Google Kubernetes Engine - -To get started easier, a script is provided to create the cluster automatically. Alternatively, a cluster can be created manually as well. - -### Manual cluster creation - -To provision a Kubernetes cluster manually, follow the [GKE instructions](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-cluster). - -### Scripted cluster creation - -A [bootstrap script](https://github.com/streamnative/charts/tree/master/scripts/pulsar/gke_bootstrap_script.sh) has been created to automate much of the setup process for users on GCP/GKE. - -The script can: - -1. Create a new GKE cluster. -2. Allow the cluster to modify DNS (Domain Name Server) records. -3. Setup `kubectl`, and connect it to the cluster. - -Google Cloud SDK is a dependency of this script, so ensure it is [set up correctly](helm-tools.md#connect-to-a-gke-cluster) for the script to work. - -The script reads various parameters from environment variables and an argument `up` or `down` for bootstrap and clean-up respectively. - -The following table describes all variables. - -| **Variable** | **Description** | **Default value** | -| ------------ | --------------- | ----------------- | -| PROJECT | ID of your GCP project | No default value. It requires to be set. | -| CLUSTER_NAME | Name of the GKE cluster | `pulsar-dev` | -| CONFDIR | Configuration directory to store Kubernetes configuration | ${HOME}/.config/streamnative | -| INT_NETWORK | IP space to use within this cluster | `default` | -| LOCAL_SSD_COUNT | Number of local SSD counts | 4 | -| MACHINE_TYPE | Type of machine to use for nodes | `n1-standard-4` | -| NUM_NODES | Number of nodes to be created in each of the cluster's zones | 4 | -| PREEMPTIBLE | Create nodes using preemptible VM instances in the new cluster. | false | -| REGION | Compute region for the cluster | `us-east1` | -| USE_LOCAL_SSD | Flag to create a cluster with local SSDs | false | -| ZONE | Compute zone for the cluster | `us-east1-b` | -| ZONE_EXTENSION | The extension (`a`, `b`, `c`) of the zone name of the cluster | `b` | -| EXTRA_CREATE_ARGS | Extra arguments passed to create command | | - -Run the script, by passing in your desired parameters. It can work with the default parameters except for `PROJECT` which is required: - -```bash - -PROJECT= scripts/pulsar/gke_bootstrap_script.sh up - -``` - -The script can also be used to clean up the created GKE resources. 
- -```bash - -PROJECT= scripts/pulsar/gke_bootstrap_script.sh down - -``` - -#### Create cluster with local SSDs - -To install the Pulsar Helm chart using local persistent volumes, you need to create a GKE cluster with local SSDs. You can do so by specifying `USE_LOCAL_SSD` to be `true` in the following command to create a Pulsar cluster with local SSDs. - -``` - -PROJECT= USE_LOCAL_SSD=true LOCAL_SSD_COUNT= scripts/pulsar/gke_bootstrap_script.sh up - -``` - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/helm-tools.md b/site2/website/versioned_docs/version-2.9.3-deprecated/helm-tools.md deleted file mode 100644 index 6ba89006913b64..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/helm-tools.md +++ /dev/null @@ -1,43 +0,0 @@ ---- -id: helm-tools -title: Required tools for deploying Pulsar Helm Chart -sidebar_label: "Required Tools" -original_id: helm-tools ---- - -Before deploying Pulsar to your Kubernetes cluster, there are some tools you must have installed locally. - -## kubectl - -kubectl is the tool that talks to the Kubernetes API. kubectl 1.14 or higher is required and it needs to be compatible with your cluster ([+/- 1 minor release from your cluster](https://kubernetes.io/docs/tasks/tools/install-kubectl/#before-you-begin)). - -To Install kubectl locally, follow the [Kubernetes documentation](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl). - -The server version of kubectl cannot be obtained until we connect to a cluster. - -## Helm - -Helm is the package manager for Kubernetes. The Apache Pulsar Helm Chart is tested and supported with Helm v3. - -### Get Helm - -You can get Helm from the project's [releases page](https://github.com/helm/helm/releases), or follow other options under the official documentation of [installing Helm](https://helm.sh/docs/intro/install/). - -### Next steps - -Once kubectl and Helm are configured, you can configure your [Kubernetes cluster](helm-prepare.md). - -## Additional information - -### Templates - -Templating in Helm is done through Golang's [text/template](https://golang.org/pkg/text/template/) and [sprig](https://godoc.org/github.com/Masterminds/sprig). - -For more information about how all the inner workings behave, check these documents: - -- [Functions and Pipelines](https://helm.sh/docs/chart_template_guide/functions_and_pipelines/) -- [Subcharts and Globals](https://helm.sh/docs/chart_template_guide/subcharts_and_globals/) - -### Tips and tricks - -For additional information on developing with Helm, check [tips and tricks section](https://helm.sh/docs/howto/charts_tips_and_tricks/) in the Helm repository. \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/helm-upgrade.md b/site2/website/versioned_docs/version-2.9.3-deprecated/helm-upgrade.md deleted file mode 100644 index 7d671e6bfb3c10..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/helm-upgrade.md +++ /dev/null @@ -1,43 +0,0 @@ ---- -id: helm-upgrade -title: Upgrade Pulsar Helm release -sidebar_label: "Upgrade" -original_id: helm-upgrade ---- - -Before upgrading your Pulsar installation, you need to check the change log corresponding to the specific release you want to upgrade to and look for any release notes that might pertain to the new Pulsar helm chart version. 
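-
-A quick way to see which chart versions are available to upgrade to is to query the repository. A sketch, assuming the `apache` Helm repository is already added:
-
-```bash
-
-# List every published version of the Pulsar Helm chart.
-helm repo update
-helm search repo apache/pulsar --versions
-
-```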
-
-We also recommend that you provide all values using the `helm upgrade --set key=value` syntax or the `-f values.yml` option instead of using `--reuse-values`, because some of the current values might be deprecated.
-
-:::note
-
-You can retrieve your previous `--set` arguments cleanly, with `helm get values <release-name>`. If you direct this into a file (`helm get values <release-name> > pulsar.yaml`), you can safely pass this file through `-f`, namely `helm upgrade <release-name> apache/pulsar -f pulsar.yaml`. This safely replaces the behavior of `--reuse-values`.
-
-:::
-
-## Steps
-
-To upgrade Apache Pulsar to a newer version, follow these steps:
-
-1. Check the change log for the specific version you would like to upgrade to.
-2. Go through [deployment documentation](helm-deploy.md) step by step.
-3. Extract your previous `--set` arguments with the following command.
-
-   ```bash
-
-   helm get values <release-name> > pulsar.yaml
-
-   ```
-
-4. Decide all the values you need to set.
-5. Perform the upgrade, with all `--set` arguments extracted in step 4.
-
-   ```bash
-
-   helm upgrade <release-name> apache/pulsar \
-     --version <new-version> \
-     -f pulsar.yaml \
-     --set ...
-
-   ```
-
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/io-aerospike-sink.md b/site2/website/versioned_docs/version-2.9.3-deprecated/io-aerospike-sink.md
deleted file mode 100644
index 63d7338a3ba91c..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/io-aerospike-sink.md
+++ /dev/null
@@ -1,26 +0,0 @@
----
-id: io-aerospike-sink
-title: Aerospike sink connector
-sidebar_label: "Aerospike sink connector"
-original_id: io-aerospike-sink
----
-
-The Aerospike sink connector pulls messages from Pulsar topics to Aerospike clusters.
-
-## Configuration
-
-The configuration of the Aerospike sink connector has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `seedHosts` |String| true | No default value| The comma-separated list of one or more Aerospike cluster hosts.&#13;&#13;Each host can be specified as a valid IP address or hostname followed by an optional port number. |
-| `keyspace` | String| true |No default value |The Aerospike namespace. |
-| `columnName` | String | true| No default value|The Aerospike column name. |
-|`userName`|String|false|NULL|The Aerospike username.|
-|`password`|String|false|NULL|The Aerospike password.|
-| `keySet` | String|false |NULL | The Aerospike set name. |
-| `maxConcurrentRequests` |int| false | 100 | The maximum number of concurrent Aerospike transactions that a sink can open. |
-| `timeoutMs` | int|false | 100 | This property controls `socketTimeout` and `totalTimeout` for Aerospike transactions. |
-| `retries` | int|false | 1 |The maximum number of retries before aborting a write transaction to Aerospike. |
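-
-As a minimal illustration of these properties, a sink can be created with `pulsar-admin`. This is a sketch rather than part of the original page: the topic name and configuration values are placeholder assumptions, and the built-in `aerospike` sink type assumes the connector NAR is installed under `connectors`.
-
-```bash
-
-# Create an Aerospike sink that drains messages from my-topic
-# into the pulsar_test namespace using the properties above.
-$ bin/pulsar-admin sinks create \
-  --sink-type aerospike \
-  --name aerospike-test-sink \
-  --inputs my-topic \
-  --sink-config '{"seedHosts": "localhost:3000", "keyspace": "pulsar_test", "columnName": "col"}'
-
-```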

  2219. true: **cluster** mode.
    If set to true, it talks to `zkServers` to figure out the actual database host.

  2220. false: **standalone** mode.
    If set to false, it connects to the database specified by `singleHostname` and `singlePort`.
  2221. | -| `zkServers` | true | None | Address and port of the Zookeeper that Canal source connector talks to figure out the actual database host.| -| `batchSize` | false | 1000 | Batch size to fetch from Canal. | - -### Example - -Before using the Canal connector, you can create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "zkServers": "127.0.0.1:2181", - "batchSize": "5120", - "destination": "example", - "username": "", - "password": "", - "cluster": false, - "singleHostname": "127.0.0.1", - "singlePort": "11111", - } - - ``` - -* YAML - - You can create a YAML file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/canal/src/main/resources/canal-mysql-source-config.yaml) below to your YAML file. - - ```yaml - - configs: - zkServers: "127.0.0.1:2181" - batchSize: 5120 - destination: "example" - username: "" - password: "" - cluster: false - singleHostname: "127.0.0.1" - singlePort: 11111 - - ``` - -## Usage - -Here is an example of storing MySQL data using the configuration file as above. - -1. Start a MySQL server. - - ```bash - - $ docker pull mysql:5.7 - $ docker run -d -it --rm --name pulsar-mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=canal -e MYSQL_USER=mysqluser -e MYSQL_PASSWORD=mysqlpw mysql:5.7 - - ``` - -2. Create a configuration file `mysqld.cnf`. - - ```bash - - [mysqld] - pid-file = /var/run/mysqld/mysqld.pid - socket = /var/run/mysqld/mysqld.sock - datadir = /var/lib/mysql - #log-error = /var/log/mysql/error.log - # By default we only accept connections from localhost - #bind-address = 127.0.0.1 - # Disabling symbolic-links is recommended to prevent assorted security risks - symbolic-links=0 - log-bin=mysql-bin - binlog-format=ROW - server_id=1 - - ``` - -3. Copy the configuration file `mysqld.cnf` to MySQL server. - - ```bash - - $ docker cp mysqld.cnf pulsar-mysql:/etc/mysql/mysql.conf.d/ - - ``` - -4. Restart the MySQL server. - - ```bash - - $ docker restart pulsar-mysql - - ``` - -5. Create a test database in MySQL server. - - ```bash - - $ docker exec -it pulsar-mysql /bin/bash - $ mysql -h 127.0.0.1 -uroot -pcanal -e 'create database test;' - - ``` - -6. Start a Canal server and connect to MySQL server. - - ``` - - $ docker pull canal/canal-server:v1.1.2 - $ docker run -d -it --link pulsar-mysql -e canal.auto.scan=false -e canal.destinations=test -e canal.instance.master.address=pulsar-mysql:3306 -e canal.instance.dbUsername=root -e canal.instance.dbPassword=canal -e canal.instance.connectionCharset=UTF-8 -e canal.instance.tsdb.enable=true -e canal.instance.gtidon=false --name=pulsar-canal-server -p 8000:8000 -p 2222:2222 -p 11111:11111 -p 11112:11112 -m 4096m canal/canal-server:v1.1.2 - - ``` - -7. Start Pulsar standalone. - - ```bash - - $ docker pull apachepulsar/pulsar:2.3.0 - $ docker run -d -it --link pulsar-canal-server -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-standalone apachepulsar/pulsar:2.3.0 bin/pulsar standalone - - ``` - -8. Modify the configuration file `canal-mysql-source-config.yaml`. - - ```yaml - - configs: - zkServers: "" - batchSize: "5120" - destination: "test" - username: "" - password: "" - cluster: false - singleHostname: "pulsar-canal-server" - singlePort: "11111" - - ``` - -9. Create a consumer file `pulsar-client.py`. 
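-   Before creating the consumer in the next step, you can optionally sanity-check that the Canal server port is reachable from the standalone container. This sketch relies on bash's built-in `/dev/tcp` redirection and the container names used above; it is not part of the original walkthrough:
-
-   ```bash
-
-   # Prints "reachable" only if the Canal port accepts a TCP connection.
-   $ docker exec pulsar-standalone bash -c 'timeout 1 bash -c "</dev/tcp/pulsar-canal-server/11111" && echo reachable'
-
-   ```
-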
-9. Create a consumer file `pulsar-client.py`.
-
-   ```python
-
-   import pulsar
-
-   client = pulsar.Client('pulsar://localhost:6650')
-   consumer = client.subscribe('my-topic',
-                               subscription_name='my-sub')
-
-   while True:
-       msg = consumer.receive()
-       print("Received message: '%s'" % msg.data())
-       consumer.acknowledge(msg)
-
-   client.close()
-
-   ```
-
-10. Copy the configuration file `canal-mysql-source-config.yaml` and the consumer file `pulsar-client.py` to the Pulsar server.
-
-    ```bash
-
-    $ docker cp canal-mysql-source-config.yaml pulsar-standalone:/pulsar/conf/
-    $ docker cp pulsar-client.py pulsar-standalone:/pulsar/
-
-    ```
-
-11. Download a Canal connector and start it.
-
-    ```bash
-
-    $ docker exec -it pulsar-standalone /bin/bash
-    $ wget https://archive.apache.org/dist/pulsar/pulsar-2.3.0/connectors/pulsar-io-canal-2.3.0.nar -P connectors
-    $ ./bin/pulsar-admin source localrun \
-    --archive ./connectors/pulsar-io-canal-2.3.0.nar \
-    --classname org.apache.pulsar.io.canal.CanalStringSource \
-    --tenant public \
-    --namespace default \
-    --name canal \
-    --destination-topic-name my-topic \
-    --source-config-file /pulsar/conf/canal-mysql-source-config.yaml \
-    --parallelism 1
-
-    ```
-
-12. Consume data from MySQL.
-
-    ```bash
-
-    $ docker exec -it pulsar-standalone /bin/bash
-    $ python pulsar-client.py
-
-    ```
-
-13. Open another terminal window and log in to the MySQL server.
-
-    ```bash
-
-    $ docker exec -it pulsar-mysql /bin/bash
-    $ mysql -h 127.0.0.1 -uroot -pcanal
-
-    ```
-
-14. Create a table, and insert, delete, and update data in the MySQL server.
-
-    ```bash
-
-    mysql> use test;
-    mysql> show tables;
-    mysql> CREATE TABLE IF NOT EXISTS `test_table`(`test_id` INT UNSIGNED AUTO_INCREMENT,`test_title` VARCHAR(100) NOT NULL,
-    `test_author` VARCHAR(40) NOT NULL,
-    `test_date` DATE,PRIMARY KEY ( `test_id` ))ENGINE=InnoDB DEFAULT CHARSET=utf8;
-    mysql> INSERT INTO test_table (test_title, test_author, test_date) VALUES("a", "b", NOW());
-    mysql> UPDATE test_table SET test_title='c' WHERE test_title='a';
-    mysql> DELETE FROM test_table WHERE test_title='c';
-
-    ```
-
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/io-cassandra-sink.md b/site2/website/versioned_docs/version-2.9.3-deprecated/io-cassandra-sink.md
deleted file mode 100644
index b27a754f49e182..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/io-cassandra-sink.md
+++ /dev/null
@@ -1,57 +0,0 @@
----
-id: io-cassandra-sink
-title: Cassandra sink connector
-sidebar_label: "Cassandra sink connector"
-original_id: io-cassandra-sink
----
-
-The Cassandra sink connector pulls messages from Pulsar topics to Cassandra clusters.
-
-## Configuration
-
-The configuration of the Cassandra sink connector has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `roots` | String|true | " " (empty string) | A comma-separated list of Cassandra hosts to connect to.|
-| `keyspace` | String|true| " " (empty string)| The key space used for writing Pulsar messages.&#13;&#13;**Note: `keyspace` should be created prior to a Cassandra sink.**|
-| `keyname` | String|true| " " (empty string)| The key name of the Cassandra column family.&#13;&#13;The column is used for storing Pulsar message keys.&#13;&#13;If a Pulsar message doesn't have any key associated, the message value is used as the key. |
-| `columnFamily` | String|true| " " (empty string)| The Cassandra column family name.&#13;&#13;**Note: `columnFamily` should be created prior to a Cassandra sink.**|
-| `columnName` | String|true| " " (empty string) | The column name of the Cassandra column family.&#13;&#13;The column is used for storing Pulsar message values. |
-
-### Example
-
-Before using the Cassandra sink connector, you need to create a configuration file through one of the following methods.
-
-* JSON
-
-  ```json
-
-  {
-      "roots": "localhost:9042",
-      "keyspace": "pulsar_test_keyspace",
-      "columnFamily": "pulsar_test_table",
-      "keyname": "key",
-      "columnName": "col"
-  }
-
-  ```
-
-* YAML
-
-  ```yaml
-
-  configs:
-      roots: "localhost:9042"
-      keyspace: "pulsar_test_keyspace"
-      columnFamily: "pulsar_test_table"
-      keyname: "key"
-      columnName: "col"
-
-  ```
-
-## Usage
-
-For more information about **how to connect Pulsar with Cassandra**, see [here](io-quickstart.md#connect-pulsar-to-apache-cassandra).
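-
-Since both `keyspace` and `columnFamily` must exist before the sink starts, it helps to see the matching CQL. The following sketch lines up with the example configuration above; the replication settings are assumptions suitable only for a single-node test cluster:
-
-```bash
-
-# Create the keyspace and table expected by the example config (via cqlsh).
-cqlsh localhost 9042 -e "
-  CREATE KEYSPACE IF NOT EXISTS pulsar_test_keyspace
-    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
-  CREATE TABLE IF NOT EXISTS pulsar_test_keyspace.pulsar_test_table (
-    key text PRIMARY KEY,
-    col text
-  );"
-
-```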

    This is optional, and there are other properties for listing databases and tables to include or exclude from monitoring. | -| `key.converter` | true | null | The converter provided by Kafka Connect to convert record key. | -| `value.converter` | true | null | The converter provided by Kafka Connect to convert record value. | -| `database.history` | true | null | The name of the database history class. | -| `database.history.pulsar.topic` | true | null | The name of the database history topic where the connector writes and recovers DDL statements.

    **Note: this topic is for internal use only and should not be used by consumers.** | -| `database.history.pulsar.service.url` | true | null | Pulsar cluster service URL for history topic. | -| `pulsar.service.url` | true | null | Pulsar cluster service URL for the offset topic used in Debezium. You can use the `bin/pulsar-admin --admin-url http://pulsar:8080 sources localrun --source-config-file configs/pg-pulsar-config.yaml` command to point to the target Pulsar cluster.| -| `offset.storage.topic` | true | null | Record the last committed offsets that the connector successfully completes. | -| `mongodb.hosts` | true | null | The comma-separated list of hostname and port pairs (in the form 'host' or 'host:port') of the MongoDB servers in the replica set. The list contains a single hostname and a port pair. If mongodb.members.auto.discover is set to false, the host and port pair are prefixed with the replica set name (e.g., rs0/localhost:27017). | -| `mongodb.name` | true | null | A unique name that identifies the connector and/or MongoDB replica set or shared cluster that this connector monitors. Each server should be monitored by at most one Debezium connector, since this server name prefixes all persisted Kafka topics emanating from the MongoDB replica set or cluster. | -| `mongodb.user` | true | null | Name of the database user to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. | -| `mongodb.password` | true | null | Password to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. | -| `mongodb.task.id` | true | null | The taskId of the MongoDB connector that attempts to use a separate task for each replica set. | - - - -## Example of MySQL - -You need to create a configuration file before using the Pulsar Debezium connector. - -### Configuration - -You can use one of the following methods to create a configuration file. - -* JSON - - ```json - - { - "database.hostname": "localhost", - "database.port": "3306", - "database.user": "debezium", - "database.password": "dbz", - "database.server.id": "184054", - "database.server.name": "dbserver1", - "database.whitelist": "inventory", - "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory", - "database.history.pulsar.topic": "history-topic", - "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650", - "key.converter": "org.apache.kafka.connect.json.JsonConverter", - "value.converter": "org.apache.kafka.connect.json.JsonConverter", - "pulsar.service.url": "pulsar://127.0.0.1:6650", - "offset.storage.topic": "offset-topic" - } - - ``` - -* YAML - - You can create a `debezium-mysql-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/resources/debezium-mysql-source-config.yaml) below to the `debezium-mysql-source-config.yaml` file. 
- - ```yaml - - tenant: "public" - namespace: "default" - name: "debezium-mysql-source" - topicName: "debezium-mysql-topic" - archive: "connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar" - parallelism: 1 - - configs: - - ## config for mysql, docker image: debezium/example-mysql:0.8 - database.hostname: "localhost" - database.port: "3306" - database.user: "debezium" - database.password: "dbz" - database.server.id: "184054" - database.server.name: "dbserver1" - database.whitelist: "inventory" - database.history: "org.apache.pulsar.io.debezium.PulsarDatabaseHistory" - database.history.pulsar.topic: "history-topic" - database.history.pulsar.service.url: "pulsar://127.0.0.1:6650" - - ## KEY_CONVERTER_CLASS_CONFIG, VALUE_CONVERTER_CLASS_CONFIG - key.converter: "org.apache.kafka.connect.json.JsonConverter" - value.converter: "org.apache.kafka.connect.json.JsonConverter" - - ## PULSAR_SERVICE_URL_CONFIG - pulsar.service.url: "pulsar://127.0.0.1:6650" - - ## OFFSET_STORAGE_TOPIC_CONFIG - offset.storage.topic: "offset-topic" - - ``` - -### Usage - -This example shows how to change the data of a MySQL table using the Pulsar Debezium connector. - -1. Start a MySQL server with a database from which Debezium can capture changes. - - ```bash - - $ docker run -it --rm \ - --name mysql \ - -p 3306:3306 \ - -e MYSQL_ROOT_PASSWORD=debezium \ - -e MYSQL_USER=mysqluser \ - -e MYSQL_PASSWORD=mysqlpw debezium/example-mysql:0.8 - - ``` - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - -3. Start the Pulsar Debezium connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - Make sure the nar file is available at `connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar`. - - ```bash - - $ bin/pulsar-admin source localrun \ - --archive connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar \ - --name debezium-mysql-source --destination-topic-name debezium-mysql-topic \ - --tenant public \ - --namespace default \ - --source-config '{"database.hostname": "localhost","database.port": "3306","database.user": "debezium","database.password": "dbz","database.server.id": "184054","database.server.name": "dbserver1","database.whitelist": "inventory","database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory","database.history.pulsar.topic": "history-topic","database.history.pulsar.service.url": "pulsar://127.0.0.1:6650","key.converter": "org.apache.kafka.connect.json.JsonConverter","value.converter": "org.apache.kafka.connect.json.JsonConverter","pulsar.service.url": "pulsar://127.0.0.1:6650","offset.storage.topic": "offset-topic"}' - - ``` - - * Use the **YAML** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin source localrun \ - --source-config-file debezium-mysql-source-config.yaml - - ``` - -4. Subscribe the topic _sub-products_ for the table _inventory.products_. - - ```bash - - $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0 - - ``` - -5. Start a MySQL client in docker. - - ```bash - - $ docker run -it --rm \ - --name mysqlterm \ - --link mysql \ - --rm mysql:5.7 sh \ - -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"' - - ``` - -6. A MySQL client pops out. - - Use the following commands to change the data of the table _products_. 
- - ``` - - mysql> use inventory; - mysql> show tables; - mysql> SELECT * FROM products; - mysql> UPDATE products SET name='1111111111' WHERE id=101; - mysql> UPDATE products SET name='1111111111' WHERE id=107; - - ``` - - In the terminal window of subscribing topic, you can find the data changes have been kept in the _sub-products_ topic. - -## Example of PostgreSQL - -You need to create a configuration file before using the Pulsar Debezium connector. - -### Configuration - -You can use one of the following methods to create a configuration file. - -* JSON - - ```json - - { - "database.hostname": "localhost", - "database.port": "5432", - "database.user": "postgres", - "database.password": "postgres", - "database.dbname": "postgres", - "database.server.name": "dbserver1", - "schema.whitelist": "inventory", - "pulsar.service.url": "pulsar://127.0.0.1:6650" - } - - ``` - -* YAML - - You can create a `debezium-postgres-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/resources/debezium-postgres-source-config.yaml) below to the `debezium-postgres-source-config.yaml` file. - - ```yaml - - tenant: "public" - namespace: "default" - name: "debezium-postgres-source" - topicName: "debezium-postgres-topic" - archive: "connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar" - parallelism: 1 - - configs: - - ## config for pg, docker image: debezium/example-postgress:0.8 - database.hostname: "localhost" - database.port: "5432" - database.user: "postgres" - database.password: "postgres" - database.dbname: "postgres" - database.server.name: "dbserver1" - schema.whitelist: "inventory" - - ## PULSAR_SERVICE_URL_CONFIG - pulsar.service.url: "pulsar://127.0.0.1:6650" - - ``` - -### Usage - -This example shows how to change the data of a PostgreSQL table using the Pulsar Debezium connector. - - -1. Start a PostgreSQL server with a database from which Debezium can capture changes. - - ```bash - - $ docker pull debezium/example-postgres:0.8 - $ docker run -d -it --rm --name pulsar-postgresql -p 5432:5432 debezium/example-postgres:0.8 - - ``` - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - -3. Start the Pulsar Debezium connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - Make sure the nar file is available at `connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar`. - - ```bash - - $ bin/pulsar-admin source localrun \ - --archive connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar \ - --name debezium-postgres-source \ - --destination-topic-name debezium-postgres-topic \ - --tenant public \ - --namespace default \ - --source-config '{"database.hostname": "localhost","database.port": "5432","database.user": "postgres","database.password": "postgres","database.dbname": "postgres","database.server.name": "dbserver1","schema.whitelist": "inventory","pulsar.service.url": "pulsar://127.0.0.1:6650"}' - - ``` - - * Use the **YAML** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin source localrun \ - --source-config-file debezium-postgres-source-config.yaml - - ``` - -4. Subscribe the topic _sub-products_ for the _inventory.products_ table. - - ``` - - $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0 - - ``` - -5. Start a PostgreSQL client in docker. 
- - ```bash - - $ docker exec -it pulsar-postgresql /bin/bash - - ``` - -6. A PostgreSQL client pops out. - - Use the following commands to change the data of the table _products_. - - ``` - - psql -U postgres postgres - postgres=# \c postgres; - You are now connected to database "postgres" as user "postgres". - postgres=# SET search_path TO inventory; - SET - postgres=# select * from products; - id | name | description | weight - -----+--------------------+---------------------------------------------------------+-------- - 102 | car battery | 12V car battery | 8.1 - 103 | 12-pack drill bits | 12-pack of drill bits with sizes ranging from #40 to #3 | 0.8 - 104 | hammer | 12oz carpenter's hammer | 0.75 - 105 | hammer | 14oz carpenter's hammer | 0.875 - 106 | hammer | 16oz carpenter's hammer | 1 - 107 | rocks | box of assorted rocks | 5.3 - 108 | jacket | water resistent black wind breaker | 0.1 - 109 | spare tire | 24 inch spare tire | 22.2 - 101 | 1111111111 | Small 2-wheel scooter | 3.14 - (9 rows) - - postgres=# UPDATE products SET name='1111111111' WHERE id=107; - UPDATE 1 - - ``` - - In the terminal window of subscribing topic, you can receive the following messages. - - ```bash - - ----- got message ----- - {"schema":{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":107}}�{"schema":{"type":"struct","fields":[{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":false,"field":"name"},{"type":"string","optional":true,"field":"description"},{"type":"double","optional":true,"field":"weight"}],"optional":true,"name":"dbserver1.inventory.products.Value","field":"before"},{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":false,"field":"name"},{"type":"string","optional":true,"field":"description"},{"type":"double","optional":true,"field":"weight"}],"optional":true,"name":"dbserver1.inventory.products.Value","field":"after"},{"type":"struct","fields":[{"type":"string","optional":true,"field":"version"},{"type":"string","optional":true,"field":"connector"},{"type":"string","optional":false,"field":"name"},{"type":"string","optional":false,"field":"db"},{"type":"int64","optional":true,"field":"ts_usec"},{"type":"int64","optional":true,"field":"txId"},{"type":"int64","optional":true,"field":"lsn"},{"type":"string","optional":true,"field":"schema"},{"type":"string","optional":true,"field":"table"},{"type":"boolean","optional":true,"default":false,"field":"snapshot"},{"type":"boolean","optional":true,"field":"last_snapshot_record"}],"optional":false,"name":"io.debezium.connector.postgresql.Source","field":"source"},{"type":"string","optional":false,"field":"op"},{"type":"int64","optional":true,"field":"ts_ms"}],"optional":false,"name":"dbserver1.inventory.products.Envelope"},"payload":{"before":{"id":107,"name":"rocks","description":"box of assorted rocks","weight":5.3},"after":{"id":107,"name":"1111111111","description":"box of assorted rocks","weight":5.3},"source":{"version":"0.9.2.Final","connector":"postgresql","name":"dbserver1","db":"postgres","ts_usec":1559208957661080,"txId":577,"lsn":23862872,"schema":"inventory","table":"products","snapshot":false,"last_snapshot_record":null},"op":"u","ts_ms":1559208957692}} - - ``` - -## Example of MongoDB - -You need to create a configuration file before using the Pulsar Debezium connector. 
- -* JSON - - ```json - - { - "mongodb.hosts": "rs0/mongodb:27017", - "mongodb.name": "dbserver1", - "mongodb.user": "debezium", - "mongodb.password": "dbz", - "mongodb.task.id": "1", - "database.whitelist": "inventory", - "pulsar.service.url": "pulsar://127.0.0.1:6650" - } - - ``` - -* YAML - - You can create a `debezium-mongodb-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mongodb/src/main/resources/debezium-mongodb-source-config.yaml) below to the `debezium-mongodb-source-config.yaml` file. - - ```yaml - - tenant: "public" - namespace: "default" - name: "debezium-mongodb-source" - topicName: "debezium-mongodb-topic" - archive: "connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar" - parallelism: 1 - - configs: - - ## config for pg, docker image: debezium/example-postgress:0.10 - mongodb.hosts: "rs0/mongodb:27017", - mongodb.name: "dbserver1", - mongodb.user: "debezium", - mongodb.password: "dbz", - mongodb.task.id: "1", - database.whitelist: "inventory", - - ## PULSAR_SERVICE_URL_CONFIG - pulsar.service.url: "pulsar://127.0.0.1:6650" - - ``` - -### Usage - -This example shows how to change the data of a MongoDB table using the Pulsar Debezium connector. - - -1. Start a MongoDB server with a database from which Debezium can capture changes. - - ```bash - - $ docker pull debezium/example-mongodb:0.10 - $ docker run -d -it --rm --name pulsar-mongodb -e MONGODB_USER=mongodb -e MONGODB_PASSWORD=mongodb -p 27017:27017 debezium/example-mongodb:0.10 - - ``` - - Use the following commands to initialize the data. - - ``` bash - - ./usr/local/bin/init-inventory.sh - - ``` - - If the local host cannot access the container network, you can update the file ```/etc/hosts``` and add a rule ```127.0.0.1 6 f114527a95f```. f114527a95f is container id, you can try to get by ```docker ps -a``` - - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - -3. Start the Pulsar Debezium connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - Make sure the nar file is available at `connectors/pulsar-io-mongodb-@pulsar:version@.nar`. - - ```bash - - $ bin/pulsar-admin source localrun \ - --archive connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar \ - --name debezium-mongodb-source \ - --destination-topic-name debezium-mongodb-topic \ - --tenant public \ - --namespace default \ - --source-config '{"mongodb.hosts": "rs0/mongodb:27017","mongodb.name": "dbserver1","mongodb.user": "debezium","mongodb.password": "dbz","mongodb.task.id": "1","database.whitelist": "inventory","pulsar.service.url": "pulsar://127.0.0.1:6650"}' - - ``` - - * Use the **YAML** configuration file as shown previously. - - ```bash - - $ bin/pulsar-admin source localrun \ - --source-config-file debezium-mongodb-source-config.yaml - - ``` - -4. Subscribe the topic _sub-products_ for the _inventory.products_ table. - - ``` - - $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0 - - ``` - -5. Start a MongoDB client in docker. - - ```bash - - $ docker exec -it pulsar-mongodb /bin/bash - - ``` - -6. A MongoDB client pops out. - - ```bash - - mongo -u debezium -p dbz --authenticationDatabase admin localhost:27017/inventory - db.products.update({"_id":NumberLong(104)},{$set:{weight:1.25}}) - - ``` - - In the terminal window of subscribing topic, you can receive the following messages. 
-
-   ```bash
-
-   ----- got message -----
-   {"schema":{"type":"struct","fields":[{"type":"string","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":"104"}}, value = {"schema":{"type":"struct","fields":[{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"after"},{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"patch"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"version"},{"type":"string","optional":false,"field":"connector"},{"type":"string","optional":false,"field":"name"},{"type":"int64","optional":false,"field":"ts_ms"},{"type":"string","optional":true,"name":"io.debezium.data.Enum","version":1,"parameters":{"allowed":"true,last,false"},"default":"false","field":"snapshot"},{"type":"string","optional":false,"field":"db"},{"type":"string","optional":false,"field":"rs"},{"type":"string","optional":false,"field":"collection"},{"type":"int32","optional":false,"field":"ord"},{"type":"int64","optional":true,"field":"h"}],"optional":false,"name":"io.debezium.connector.mongo.Source","field":"source"},{"type":"string","optional":true,"field":"op"},{"type":"int64","optional":true,"field":"ts_ms"}],"optional":false,"name":"dbserver1.inventory.products.Envelope"},"payload":{"after":"{\"_id\": {\"$numberLong\": \"104\"},\"name\": \"hammer\",\"description\": \"12oz carpenter's hammer\",\"weight\": 1.25,\"quantity\": 4}","patch":null,"source":{"version":"0.10.0.Final","connector":"mongodb","name":"dbserver1","ts_ms":1573541905000,"snapshot":"true","db":"inventory","rs":"rs0","collection":"products","ord":1,"h":4983083486544392763},"op":"r","ts_ms":1573541909761}}.
-
-   ```
-
-## FAQ
-
-### The Debezium PostgreSQL connector hangs when creating a snapshot
-
-```
-
-#18 prio=5 os_prio=31 tid=0x00007fd83096f800 nid=0xa403 waiting on condition [0x000070000f534000]
-   java.lang.Thread.State: WAITING (parking)
-    at sun.misc.Unsafe.park(Native Method)
-    - parking to wait for <0x00000007ab025a58> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
-    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
-    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
-    at java.util.concurrent.LinkedBlockingDeque.putLast(LinkedBlockingDeque.java:396)
-    at java.util.concurrent.LinkedBlockingDeque.put(LinkedBlockingDeque.java:649)
-    at io.debezium.connector.base.ChangeEventQueue.enqueue(ChangeEventQueue.java:132)
-    at io.debezium.connector.postgresql.PostgresConnectorTask$Lambda$203/385424085.accept(Unknown Source)
-    at io.debezium.connector.postgresql.RecordsSnapshotProducer.sendCurrentRecord(RecordsSnapshotProducer.java:402)
-    at io.debezium.connector.postgresql.RecordsSnapshotProducer.readTable(RecordsSnapshotProducer.java:321)
-    at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$takeSnapshot$6(RecordsSnapshotProducer.java:226)
-    at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$240/1347039967.accept(Unknown Source)
-    at io.debezium.jdbc.JdbcConnection.queryWithBlockingConsumer(JdbcConnection.java:535)
-    at io.debezium.connector.postgresql.RecordsSnapshotProducer.takeSnapshot(RecordsSnapshotProducer.java:224)
-    at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$start$0(RecordsSnapshotProducer.java:87)
-    at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$206/589332928.run(Unknown Source)
-    at java.util.concurrent.CompletableFuture.uniRun(CompletableFuture.java:705)
-    at java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:717)
-    at java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2010)
-    at io.debezium.connector.postgresql.RecordsSnapshotProducer.start(RecordsSnapshotProducer.java:87)
-    at io.debezium.connector.postgresql.PostgresConnectorTask.start(PostgresConnectorTask.java:126)
-    at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:47)
-    at org.apache.pulsar.io.kafka.connect.KafkaConnectSource.open(KafkaConnectSource.java:127)
-    at org.apache.pulsar.io.debezium.DebeziumSource.open(DebeziumSource.java:100)
-    at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupInput(JavaInstanceRunnable.java:690)
-    at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupJavaInstance(JavaInstanceRunnable.java:200)
-    at org.apache.pulsar.functions.instance.JavaInstanceRunnable.run(JavaInstanceRunnable.java:230)
-    at java.lang.Thread.run(Thread.java:748)
-
-```
-
-If you encounter this problem when synchronizing data, refer to [this issue](https://github.com/apache/pulsar/issues/4075) and add the following configuration to the configuration file:
-
-```
-
-max.queue.size=
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/io-cdc.md b/site2/website/versioned_docs/version-2.9.3-deprecated/io-cdc.md
deleted file mode 100644
index e6e662884826de..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/io-cdc.md
+++ /dev/null
@@ -1,26 +0,0 @@
----
-id: io-cdc
-title: CDC connector
-sidebar_label: "CDC connector"
-original_id: io-cdc
----
-
-CDC source connectors capture log changes of databases (such as MySQL, MongoDB, and PostgreSQL) into Pulsar.
-
-> CDC source connectors are built on top of [Canal](https://github.com/alibaba/canal) and [Debezium](https://debezium.io/) and store all data in Pulsar clusters in a persistent, replicated, and partitioned way.
-
-Currently, Pulsar has the following CDC connectors.
-
-Name|Java Class
-|---|---
-[Canal source connector](io-canal-source.md)|[org.apache.pulsar.io.canal.CanalStringSource.java](https://github.com/apache/pulsar/blob/master/pulsar-io/canal/src/main/java/org/apache/pulsar/io/canal/CanalStringSource.java)
-[Debezium source connector](io-cdc-debezium.md)|[org.apache.pulsar.io.debezium.DebeziumSource.java](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/core/src/main/java/org/apache/pulsar/io/debezium/DebeziumSource.java)<br />[org.apache.pulsar.io.debezium.mysql.DebeziumMysqlSource.java](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/java/org/apache/pulsar/io/debezium/mysql/DebeziumMysqlSource.java)<br />[org.apache.pulsar.io.debezium.postgres.DebeziumPostgresSource.java](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/java/org/apache/pulsar/io/debezium/postgres/DebeziumPostgresSource.java)
-
-For more information about Canal and Debezium, see the information below.
-
-Subject | Reference
-|---|---
-How to use Canal source connector with MySQL|[Canal guide](https://github.com/alibaba/canal/wiki)
-How does Canal work | [Canal tutorial](https://github.com/alibaba/canal/wiki)
-How to use Debezium source connector with MySQL | [Debezium guide](https://debezium.io/docs/connectors/mysql/)
-How does Debezium work | [Debezium tutorial](https://debezium.io/docs/tutorial/)
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/io-cli.md b/site2/website/versioned_docs/version-2.9.3-deprecated/io-cli.md
deleted file mode 100644
index 3d54bb61875e25..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/io-cli.md
+++ /dev/null
@@ -1,658 +0,0 @@
----
-id: io-cli
-title: Connector Admin CLI
-sidebar_label: "CLI"
-original_id: io-cli
----
-
-The `pulsar-admin` tool helps you manage Pulsar connectors.
-
-## `sources`
-
-An interface for managing Pulsar IO sources (ingress data into Pulsar).
-
-```bash
-
-$ pulsar-admin sources subcommands
-
-```
-
-Subcommands are:
-
-* `create`
-
-* `update`
-
-* `delete`
-
-* `get`
-
-* `status`
-
-* `list`
-
-* `stop`
-
-* `start`
-
-* `restart`
-
-* `localrun`
-
-* `available-sources`
-
-* `reload`
-
-
-### `create`
-
-Submit a Pulsar IO source connector to run in a Pulsar cluster.
-
-#### Usage
-
-```bash
-
-$ pulsar-admin sources create options
-
-```
-
-#### Options
-
-|Flag|Description|
-|----|---|
-| `-a`, `--archive` | The path to the NAR archive for the source.
    It also supports url-path (http/https/file [file protocol assumes that file already exists on worker host]) from which worker can download the package. -| `--classname` | The source's class name if `archive` is file-url-path (file://). -| `--cpu` | The CPU (in cores) that needs to be allocated per source instance (applicable only to Docker runtime). -| `--deserialization-classname` | The SerDe classname for the source. -| `--destination-topic-name` | The Pulsar topic to which data is sent. -| `--disk` | The disk (in bytes) that needs to be allocated per source instance (applicable only to Docker runtime). -|`--name` | The source's name. -| `--namespace` | The source's namespace. -| ` --parallelism` | The source's parallelism factor, that is, the number of source instances to run. -| `--processing-guarantees` | The processing guarantees (also named as delivery semantics) applied to the source. A source connector receives messages from external system and writes messages to a Pulsar topic. The `--processing-guarantees` is used to ensure the processing guarantees for writing messages to the Pulsar topic.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE. -| `--ram` | The RAM (in bytes) that needs to be allocated per source instance (applicable only to the process and Docker runtimes). -| `-st`, `--schema-type` | The schema type.
    Either a builtin schema (for example, AVRO and JSON) or custom schema class name to be used to encode messages emitted from source.
-| `--source-config` | Source config key/values.
-| `--source-config-file` | The path to a YAML config file specifying the source's configuration.
-| `-t`, `--source-type` | The source's connector provider.
-| `--tenant` | The source's tenant.
-|`--producer-config`| The custom producer configuration (as a JSON string).
-
-### `update`
-
-Update an already submitted Pulsar IO source connector.
-
-#### Usage
-
-```bash
-
-$ pulsar-admin sources update options
-
-```
-
-#### Options
-
-|Flag|Description|
-|----|---|
-| `-a`, `--archive` | The path to the NAR archive for the source.
    It also supports url-path (http/https/file [file protocol assumes that file already exists on worker host]) from which worker can download the package. -| `--classname` | The source's class name if `archive` is file-url-path (file://). -| `--cpu` | The CPU (in cores) that needs to be allocated per source instance (applicable only to Docker runtime). -| `--deserialization-classname` | The SerDe classname for the source. -| `--destination-topic-name` | The Pulsar topic to which data is sent. -| `--disk` | The disk (in bytes) that needs to be allocated per source instance (applicable only to Docker runtime). -|`--name` | The source's name. -| `--namespace` | The source's namespace. -| ` --parallelism` | The source's parallelism factor, that is, the number of source instances to run. -| `--processing-guarantees` | The processing guarantees (also named as delivery semantics) applied to the source. A source connector receives messages from external system and writes messages to a Pulsar topic. The `--processing-guarantees` is used to ensure the processing guarantees for writing messages to the Pulsar topic.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE. -| `--ram` | The RAM (in bytes) that needs to be allocated per source instance (applicable only to the process and Docker runtimes). -| `-st`, `--schema-type` | The schema type.
    Either a builtin schema (for example, AVRO and JSON) or custom schema class name to be used to encode messages emitted from source. -| `--source-config` | Source config key/values. -| `--source-config-file` | The path to a YAML config file specifying the source's configuration. -| `-t`, `--source-type` | The source's connector provider. The `source-type` parameter of the currently built-in connectors is determined by the setting of the `name` parameter specified in the pulsar-io.yaml file. -| `--tenant` | The source's tenant. -| `--update-auth-data` | Whether or not to update the auth data.
    **Default value: false.** - - -### `delete` - -Delete a Pulsar IO source connector. - -#### Usage - -```bash - -$ pulsar-admin sources delete options - -``` - -#### Option - -|Flag|Description| -|---|---| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - -### `get` - -Get the information about a Pulsar IO source connector. - -#### Usage - -```bash - -$ pulsar-admin sources get options - -``` - -#### Options -|Flag|Description| -|---|---| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - - -### `status` - -Check the current status of a Pulsar Source. - -#### Usage - -```bash - -$ pulsar-admin sources status options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The source ID.
    If `instance-id` is not provided, Pulsar gets status of all instances.| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - -### `list` - -List all running Pulsar IO source connectors. - -#### Usage - -```bash - -$ pulsar-admin sources list options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - - -### `stop` - -Stop a source instance. - -#### Usage - -```bash - -$ pulsar-admin sources stop options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The source instanceID.
    If `instance-id` is not provided, Pulsar stops all instances.| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - -### `start` - -Start a source instance. - -#### Usage - -```bash - -$ pulsar-admin sources start options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The source instanceID.
    If `instance-id` is not provided, Pulsar starts all instances.| -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - - -### `restart` - -Restart a source instance. - -#### Usage - -```bash - -$ pulsar-admin sources restart options - -``` - -#### Options -|Flag|Description| -|---|---| -|`--instance-id`|The source instanceID.
    If `instance-id` is not provided, Pulsar restarts all instances. -|`--name`|The source's name.| -|`--namespace`|The source's namespace.| -|`--tenant`|The source's tenant.| - - -### `localrun` - -Run a Pulsar IO source connector locally rather than deploying it to the Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sources localrun options - -``` - -#### Options - -|Flag|Description| -|----|---| -| `-a`, `--archive` | The path to the NAR archive for the Source.
    It also supports url-path (http/https/file [file protocol assumes that file already exists on worker host]) from which worker can download the package. -| `--broker-service-url` | The URL for the Pulsar broker. -|`--classname`|The source's class name if `archive` is file-url-path (file://). -| `--client-auth-params` | Client authentication parameter. -| `--client-auth-plugin` | Client authentication plugin using which function-process can connect to broker. -|`--cpu`|The CPU (in cores) that needs to be allocated per source instance (applicable only to the Docker runtime).| -|`--deserialization-classname`|The SerDe classname for the source. -|`--destination-topic-name`|The Pulsar topic to which data is sent. -|`--disk`|The disk (in bytes) that needs to be allocated per source instance (applicable only to the Docker runtime).| -|`--hostname-verification-enabled`|Enable hostname verification.
    **Default value: false**.
-|`--name`|The source’s name.|
-|`--namespace`|The source’s namespace.|
-|`--parallelism`|The source’s parallelism factor, that is, the number of source instances to run.|
-|`--processing-guarantees` | The processing guarantees (also named as delivery semantics) applied to the source. A source connector receives messages from an external system and writes messages to a Pulsar topic. The `--processing-guarantees` is used to ensure the processing guarantees for writing messages to the Pulsar topic.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE. -|`--ram`|The RAM (in bytes) that needs to be allocated per source instance (applicable only to the Docker runtime).| -| `-st`, `--schema-type` | The schema type.
    Either a builtin schema (for example, AVRO and JSON) or custom schema class name to be used to encode messages emitted from source. -|`--source-config`|Source config key/values. -|`--source-config-file`|The path to a YAML config file specifying the source’s configuration. -|`--source-type`|The source's connector provider. -|`--tenant`|The source’s tenant. -|`--tls-allow-insecure`|Allow insecure tls connection.
    **Default value: false**. -|`--tls-trust-cert-path`|The tls trust cert file path. -|`--use-tls`|Use tls connection.
    **Default value: false**. -|`--producer-config`| The custom producer configuration (as a JSON string). - -### `available-sources` - -Get the list of Pulsar IO connector sources supported by Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sources available-sources - -``` - -### `reload` - -Reload the available built-in connectors. - -#### Usage - -```bash - -$ pulsar-admin sources reload - -``` - -## `sinks` - -An interface for managing Pulsar IO sinks (egress data from Pulsar). - -```bash - -$ pulsar-admin sinks subcommands - -``` - -Subcommands are: - -* `create` - -* `update` - -* `delete` - -* `get` - -* `status` - -* `list` - -* `stop` - -* `start` - -* `restart` - -* `localrun` - -* `available-sinks` - -* `reload` - - -### `create` - -Submit a Pulsar IO sink connector to run in a Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sinks create options - -``` - -#### Options - -|Flag|Description| -|----|---| -| `-a`, `--archive` | The path to the archive file for the sink.
    It also supports url-path (http/https/file [file protocol assumes that file already exists on worker host]) from which worker can download the package. -| `--auto-ack` | Whether or not the framework will automatically acknowledge messages. -| `--classname` | The sink's class name if `archive` is file-url-path (file://). -| `--cpu` | The CPU (in cores) that needs to be allocated per sink instance (applicable only to Docker runtime). -| `--custom-schema-inputs` | The map of input topics to schema types or class names (as a JSON string). -| `--custom-serde-inputs` | The map of input topics to SerDe class names (as a JSON string). -| `--disk` | The disk (in bytes) that needs to be allocated per sink instance (applicable only to Docker runtime). -|`-i, --inputs` | The sink's input topic or topics (multiple topics can be specified as a comma-separated list). -|`--name` | The sink's name. -| `--namespace` | The sink's namespace. -| ` --parallelism` | The sink's parallelism factor, that is, the number of sink instances to run. -| `--processing-guarantees` | The processing guarantees (also known as delivery semantics) applied to the sink. The `--processing-guarantees` implementation in Pulsar also relies on sink implementation.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE. -| `--ram` | The RAM (in bytes) that needs to be allocated per sink instance (applicable only to the process and Docker runtimes). -| `--retain-ordering` | Sink consumes and sinks messages in order. -| `--sink-config` | sink config key/values. -| `--sink-config-file` | The path to a YAML config file specifying the sink's configuration. -| `-t`, `--sink-type` | The sink's connector provider. The `sink-type` parameter of the currently built-in connectors is determined by the setting of the `name` parameter specified in the pulsar-io.yaml file. -| `--subs-name` | Pulsar source subscription name if user wants a specific subscription-name for input-topic consumer. -| `--tenant` | The sink's tenant. -| `--timeout-ms` | The message timeout in milliseconds. -| `--topics-pattern` | TopicsPattern to consume from list of topics under a namespace that match the pattern.
    `--inputs` and `--topics-pattern` are mutually exclusive.
    Add SerDe class name for a pattern in `--customSerdeInputs` (supported for Java functions only).
-
-### `update`
-
-Update a Pulsar IO sink connector.
-
-#### Usage
-
-```bash
-
-$ pulsar-admin sinks update options
-
-```
-
-#### Options
-
-|Flag|Description|
-|----|---|
-| `-a`, `--archive` | The path to the archive file for the sink.
    It also supports url-path (http/https/file [file protocol assumes that file already exists on worker host]) from which worker can download the package. -| `--auto-ack` | Whether or not the framework will automatically acknowledge messages. -| `--classname` | The sink's class name if `archive` is file-url-path (file://). -| `--cpu` | The CPU (in cores) that needs to be allocated per sink instance (applicable only to Docker runtime). -| `--custom-schema-inputs` | The map of input topics to schema types or class names (as a JSON string). -| `--custom-serde-inputs` | The map of input topics to SerDe class names (as a JSON string). -| `--disk` | The disk (in bytes) that needs to be allocated per sink instance (applicable only to Docker runtime). -|`-i, --inputs` | The sink's input topic or topics (multiple topics can be specified as a comma-separated list). -|`--name` | The sink's name. -| `--namespace` | The sink's namespace. -| ` --parallelism` | The sink's parallelism factor, that is, the number of sink instances to run. -| `--processing-guarantees` | The processing guarantees (also known as delivery semantics) applied to the sink. The `--processing-guarantees` implementation in Pulsar also relies on sink implementation.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE. -| `--ram` | The RAM (in bytes) that needs to be allocated per sink instance (applicable only to the process and Docker runtimes). -| `--retain-ordering` | Sink consumes and sinks messages in order. -| `--sink-config` | sink config key/values. -| `--sink-config-file` | The path to a YAML config file specifying the sink's configuration. -| `-t`, `--sink-type` | The sink's connector provider. -| `--subs-name` | Pulsar source subscription name if user wants a specific subscription-name for input-topic consumer. -| `--tenant` | The sink's tenant. -| `--timeout-ms` | The message timeout in milliseconds. -| `--topics-pattern` | TopicsPattern to consume from list of topics under a namespace that match the pattern.
    `--inputs` and `--topics-pattern` are mutually exclusive.
    Add SerDe class name for a pattern in `--customSerdeInputs` (supported for Java functions only).
-| `--update-auth-data` | Whether or not to update the auth data.
    **Default value: false.** - -### `delete` - -Delete a Pulsar IO sink connector. - -#### Usage - -```bash - -$ pulsar-admin sinks delete options - -``` - -#### Option - -|Flag|Description| -|---|---| -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - -### `get` - -Get the information about a Pulsar IO sink connector. - -#### Usage - -```bash - -$ pulsar-admin sinks get options - -``` - -#### Options -|Flag|Description| -|---|---| -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - - -### `status` - -Check the current status of a Pulsar sink. - -#### Usage - -```bash - -$ pulsar-admin sinks status options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The sink ID.
    If `instance-id` is not provided, Pulsar gets status of all instances.| -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - - -### `list` - -List all running Pulsar IO sink connectors. - -#### Usage - -```bash - -$ pulsar-admin sinks list options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - - -### `stop` - -Stop a sink instance. - -#### Usage - -```bash - -$ pulsar-admin sinks stop options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The sink instanceID.
    If `instance-id` is not provided, Pulsar stops all instances.| -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - -### `start` - -Start a sink instance. - -#### Usage - -```bash - -$ pulsar-admin sinks start options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The sink instanceID.
    If `instance-id` is not provided, Pulsar starts all instances.| -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - - -### `restart` - -Restart a sink instance. - -#### Usage - -```bash - -$ pulsar-admin sinks restart options - -``` - -#### Options - -|Flag|Description| -|---|---| -|`--instance-id`|The sink instanceID.
    If `instance-id` is not provided, Pulsar restarts all instances. -|`--name`|The sink's name.| -|`--namespace`|The sink's namespace.| -|`--tenant`|The sink's tenant.| - - -### `localrun` - -Run a Pulsar IO sink connector locally rather than deploying it to the Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sinks localrun options - -``` - -#### Options - -|Flag|Description| -|----|---| -| `-a`, `--archive` | The path to the archive file for the sink.
    It also supports url-path (http/https/file [file protocol assumes that file already exists on worker host]) from which worker can download the package. -| `--auto-ack` | Whether or not the framework will automatically acknowledge messages. -| `--broker-service-url` | The URL for the Pulsar broker. -|`--classname`|The sink's class name if `archive` is file-url-path (file://). -| `--client-auth-params` | Client authentication parameter. -| `--client-auth-plugin` | Client authentication plugin using which function-process can connect to broker. -|`--cpu`|The CPU (in cores) that needs to be allocated per sink instance (applicable only to the Docker runtime). -| `--custom-schema-inputs` | The map of input topics to Schema types or class names (as a JSON string). -| `--max-redeliver-count` | Maximum number of times that a message is redelivered before being sent to the dead letter queue. -| `--dead-letter-topic` | Name of the dead letter topic where the failing messages are sent. -| `--custom-serde-inputs` | The map of input topics to SerDe class names (as a JSON string). -|`--disk`|The disk (in bytes) that needs to be allocated per sink instance (applicable only to the Docker runtime).| -|`--hostname-verification-enabled`|Enable hostname verification.
    **Default value: false**.
-| `-i`, `--inputs` | The sink's input topic or topics (multiple topics can be specified as a comma-separated list).
-|`--name`|The sink’s name.|
-|`--namespace`|The sink’s namespace.|
-|`--parallelism`|The sink’s parallelism factor, that is, the number of sink instances to run.|
-|`--processing-guarantees`|The processing guarantees (also known as delivery semantics) applied to the sink. The `--processing-guarantees` implementation in Pulsar also relies on sink implementation.
    The available values are ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE. -|`--ram`|The RAM (in bytes) that needs to be allocated per sink instance (applicable only to the Docker runtime).| -|`--retain-ordering` | Sink consumes and sinks messages in order. -|`--sink-config`|sink config key/values. -|`--sink-config-file`|The path to a YAML config file specifying the sink’s configuration. -|`--sink-type`|The sink's connector provider. -|`--subs-name` | Pulsar source subscription name if user wants a specific subscription-name for input-topic consumer. -|`--tenant`|The sink’s tenant. -| `--timeout-ms` | The message timeout in milliseconds. -| `--negative-ack-redelivery-delay-ms` | The negatively-acknowledged message redelivery delay in milliseconds. | -|`--tls-allow-insecure`|Allow insecure tls connection.
    **Default value: false**. -|`--tls-trust-cert-path`|The tls trust cert file path. -| `--topics-pattern` | TopicsPattern to consume from list of topics under a namespace that match the pattern.
    `--inputs` and `--topics-pattern` are mutually exclusive.
    Add SerDe class name for a pattern in `--customSerdeInputs` (supported for Java functions only).
-|`--use-tls`|Use tls connection.
    **Default value: false**. - -### `available-sinks` - -Get the list of Pulsar IO connector sinks supported by Pulsar cluster. - -#### Usage - -```bash - -$ pulsar-admin sinks available-sinks - -``` - -### `reload` - -Reload the available built-in connectors. - -#### Usage - -```bash - -$ pulsar-admin sinks reload - -``` - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/io-connectors.md b/site2/website/versioned_docs/version-2.9.3-deprecated/io-connectors.md deleted file mode 100644 index 957a02a5a1964a..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/io-connectors.md +++ /dev/null @@ -1,249 +0,0 @@ ---- -id: io-connectors -title: Built-in connector -sidebar_label: "Built-in connector" -original_id: io-connectors ---- - -Pulsar distribution includes a set of common connectors that have been packaged and tested with the rest of Apache Pulsar. These connectors import and export data from some of the most commonly used data systems. - -Using any of these connectors is as easy as writing a simple connector and running the connector locally or submitting the connector to a Pulsar Functions cluster. - -## Source connector - -Pulsar has various source connectors, which are sorted alphabetically as below. - -### Canal - -* [Configuration](io-canal-source.md#configuration) - -* [Example](io-canal-source.md#usage) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/canal/src/main/java/org/apache/pulsar/io/canal/CanalStringSource.java) - - -### Debezium MySQL - -* [Configuration](io-debezium-source.md#configuration) - -* [Example](io-debezium-source.md#example-of-mysql) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/java/org/apache/pulsar/io/debezium/mysql/DebeziumMysqlSource.java) - -### Debezium PostgreSQL - -* [Configuration](io-debezium-source.md#configuration) - -* [Example](io-debezium-source.md#example-of-postgresql) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/java/org/apache/pulsar/io/debezium/postgres/DebeziumPostgresSource.java) - -### Debezium MongoDB - -* [Configuration](io-debezium-source.md#configuration) - -* [Example](io-debezium-source.md#example-of-mongodb) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mongodb/src/main/java/org/apache/pulsar/io/debezium/mongodb/DebeziumMongoDbSource.java) - -### Debezium Oracle - -* [Configuration](io-debezium-source.md#configuration) - -* [Example](io-debezium-source.md#example-of-oracle) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/oracle/src/main/java/org/apache/pulsar/io/debezium/oracle/DebeziumOracleSource.java) - -### Debezium Microsoft SQL Server - -* [Configuration](io-debezium-source.md#configuration) - -* [Example](io-debezium-source.md#example-of-microsoft-sql) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mssql/src/main/java/org/apache/pulsar/io/debezium/mssql/DebeziumMsSqlSource.java) - - -### DynamoDB - -* [Configuration](io-dynamodb-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/dynamodb/src/main/java/org/apache/pulsar/io/dynamodb/DynamoDBSource.java) - -### File - -* [Configuration](io-file-source.md#configuration) - -* [Example](io-file-source.md#usage) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/file/src/main/java/org/apache/pulsar/io/file/FileSource.java) - -### Flume - -* 
[Configuration](io-flume-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/java/org/apache/pulsar/io/flume/FlumeConnector.java) - -### Twitter firehose - -* [Configuration](io-twitter-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/twitter/src/main/java/org/apache/pulsar/io/twitter/TwitterFireHose.java) - -### Kafka - -* [Configuration](io-kafka-source.md#configuration) - -* [Example](io-kafka-source.md#usage) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSource.java) - -### Kinesis - -* [Configuration](io-kinesis-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/kinesis/src/main/java/org/apache/pulsar/io/kinesis/KinesisSource.java) - -### Netty - -* [Configuration](io-netty-source.md#configuration) - -* [Example of TCP](io-netty-source.md#tcp) - -* [Example of HTTP](io-netty-source.md#http) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/netty/src/main/java/org/apache/pulsar/io/netty/NettySource.java) - -### NSQ - -* [Configuration](io-nsq-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/nsq/src/main/java/org/apache/pulsar/io/nsq/NSQSource.java) - -### RabbitMQ - -* [Configuration](io-rabbitmq-source.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/rabbitmq/src/main/java/org/apache/pulsar/io/rabbitmq/RabbitMQSource.java) - -## Sink connector - -Pulsar has various sink connectors, which are sorted alphabetically as below. - -### Aerospike - -* [Configuration](io-aerospike-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/aerospike/src/main/java/org/apache/pulsar/io/aerospike/AerospikeStringSink.java) - -### Cassandra - -* [Configuration](io-cassandra-sink.md#configuration) - -* [Example](io-cassandra-sink.md#usage) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/cassandra/src/main/java/org/apache/pulsar/io/cassandra/CassandraStringSink.java) - -### ElasticSearch - -* [Configuration](io-elasticsearch-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/elastic-search/src/main/java/org/apache/pulsar/io/elasticsearch/ElasticSearchSink.java) - -### Flume - -* [Configuration](io-flume-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/java/org/apache/pulsar/io/flume/sink/StringSink.java) - -### HBase - -* [Configuration](io-hbase-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/hbase/src/main/java/org/apache/pulsar/io/hbase/HbaseAbstractConfig.java) - -### HDFS2 - -* [Configuration](io-hdfs2-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/hdfs2/src/main/java/org/apache/pulsar/io/hdfs2/AbstractHdfsConnector.java) - -### HDFS3 - -* [Configuration](io-hdfs3-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/hdfs3/src/main/java/org/apache/pulsar/io/hdfs3/AbstractHdfsConnector.java) - -### InfluxDB - -* [Configuration](io-influxdb-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/influxdb/src/main/java/org/apache/pulsar/io/influxdb/InfluxDBGenericRecordSink.java) - -### JDBC ClickHouse - 
-* [Configuration](io-jdbc-sink.md#configuration) - -* [Example](io-jdbc-sink.md#example-for-clickhouse) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/jdbc/clickhouse/src/main/java/org/apache/pulsar/io/jdbc/ClickHouseJdbcAutoSchemaSink.java) - -### JDBC MariaDB - -* [Configuration](io-jdbc-sink.md#configuration) - -* [Example](io-jdbc-sink.md#example-for-mariadb) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/jdbc/mariadb/src/main/java/org/apache/pulsar/io/jdbc/MariadbJdbcAutoSchemaSink.java) - -### JDBC PostgreSQL - -* [Configuration](io-jdbc-sink.md#configuration) - -* [Example](io-jdbc-sink.md#example-for-postgresql) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/jdbc/postgres/src/main/java/org/apache/pulsar/io/jdbc/PostgresJdbcAutoSchemaSink.java) - -### JDBC SQLite - -* [Configuration](io-jdbc-sink.md#configuration) - -* [Example](io-jdbc-sink.md#example-for-sqlite) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/jdbc/sqlite/src/main/java/org/apache/pulsar/io/jdbc/SqliteJdbcAutoSchemaSink.java) - -### Kafka - -* [Configuration](io-kafka-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSink.java) - -### Kinesis - -* [Configuration](io-kinesis-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/kinesis/src/main/java/org/apache/pulsar/io/kinesis/KinesisSink.java) - -### MongoDB - -* [Configuration](io-mongo-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/mongo/src/main/java/org/apache/pulsar/io/mongodb/MongoSink.java) - -### RabbitMQ - -* [Configuration](io-rabbitmq-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/rabbitmq/src/main/java/org/apache/pulsar/io/rabbitmq/RabbitMQSink.java) - -### Redis - -* [Configuration](io-redis-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/redis/src/main/java/org/apache/pulsar/io/redis/RedisAbstractConfig.java) - -### Solr - -* [Configuration](io-solr-sink.md#configuration) - -* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/solr/src/main/java/org/apache/pulsar/io/solr/SolrSinkConfig.java) - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/io-debezium-source.md b/site2/website/versioned_docs/version-2.9.3-deprecated/io-debezium-source.md deleted file mode 100644 index 34e1fd10e6f2fe..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/io-debezium-source.md +++ /dev/null @@ -1,725 +0,0 @@ ---- -id: io-debezium-source -title: Debezium source connector -sidebar_label: "Debezium source connector" -original_id: io-debezium-source ---- - -The Debezium source connector pulls messages from MySQL or PostgreSQL -and persists the messages to Pulsar topics. - -## Configuration - -The configuration of Debezium source connector has the following properties. - -| Name | Required | Default | Description | -|------|----------|---------|-------------| -| `task.class` | true | null | A source task class that implemented in Debezium. | -| `database.hostname` | true | null | The address of a database server. | -| `database.port` | true | null | The port number of a database server.| -| `database.user` | true | null | The name of a database user that has the required privileges. 
| -| `database.password` | true | null | The password for a database user that has the required privileges. | -| `database.server.id` | true | null | The connector’s identifier that must be unique within a database cluster and similar to the database’s server-id configuration property. | -| `database.server.name` | true | null | The logical name of a database server/cluster, which forms a namespace and it is used in all the names of Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the Avro Connector is used. | -| `database.whitelist` | false | null | A list of all databases hosted by this server which is monitored by the connector.

    This is optional, and there are other properties for listing databases and tables to include or exclude from monitoring. | -| `key.converter` | true | null | The converter provided by Kafka Connect to convert record key. | -| `value.converter` | true | null | The converter provided by Kafka Connect to convert record value. | -| `database.history` | true | null | The name of the database history class. | -| `database.history.pulsar.topic` | true | null | The name of the database history topic where the connector writes and recovers DDL statements.

    **Note: this topic is for internal use only and should not be used by consumers.** |
-| `database.history.pulsar.service.url` | true | null | Pulsar cluster service URL for history topic. |
-| `offset.storage.topic` | true | null | Record the last committed offsets that the connector successfully completes. |
-| `json-with-envelope` | false | false | Whether the consumed message contains the schema envelope. If it is set to false (default), the message consists of the payload only. |
-
-### Converter Options
-
-1. org.apache.kafka.connect.json.JsonConverter
-
-The config `json-with-envelope` is valid only for the JsonConverter. Its default value is false; in that case, the consumer uses the schema `Schema.KeyValue(Schema.AUTO_CONSUME(), Schema.AUTO_CONSUME(), KeyValueEncodingType.SEPARATED)`, and the message consists of the payload only.
-
-If `json-with-envelope` is set to true, the consumer uses the schema `Schema.KeyValue(Schema.BYTES, Schema.BYTES)`, and the message consists of both schema and payload.
-
-2. org.apache.pulsar.kafka.shade.io.confluent.connect.avro.AvroConverter
-
-If users select the AvroConverter, then the Pulsar consumer should use the schema `Schema.KeyValue(Schema.AUTO_CONSUME(), Schema.AUTO_CONSUME(), KeyValueEncodingType.SEPARATED)`, and the message consists of the payload only.
-
-### MongoDB Configuration
-| Name | Required | Default | Description |
-|------|----------|---------|-------------|
-| `mongodb.hosts` | true | null | The comma-separated list of hostname and port pairs (in the form 'host' or 'host:port') of the MongoDB servers in the replica set. The list contains a single hostname and a port pair. If mongodb.members.auto.discover is set to false, the host and port pair are prefixed with the replica set name (e.g., rs0/localhost:27017). |
-| `mongodb.name` | true | null | A unique name that identifies the connector and/or MongoDB replica set or shared cluster that this connector monitors. Each server should be monitored by at most one Debezium connector, since this server name prefixes all persisted Kafka topics emanating from the MongoDB replica set or cluster. |
-| `mongodb.user` | true | null | Name of the database user to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. |
-| `mongodb.password` | true | null | Password to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. |
-| `mongodb.task.id` | true | null | The taskId of the MongoDB connector that attempts to use a separate task for each replica set. |
-
-
-
-## Example of MySQL
-
-You need to create a configuration file before using the Pulsar Debezium connector.
-
-### Configuration
-
-You can use one of the following methods to create a configuration file.
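-
-Before the templates, one consumer-side note: with the default `org.apache.kafka.connect.json.JsonConverter` settings described under Converter Options above (`json-with-envelope` left at its default of false), a subscriber reads the change events with the `KeyValue` schema. The sketch below shows this with the Java client; it is illustrative only, and the service URL, topic, and subscription names are assumptions taken from the examples on this page.
-
-```java
-
-import org.apache.pulsar.client.api.Consumer;
-import org.apache.pulsar.client.api.Message;
-import org.apache.pulsar.client.api.PulsarClient;
-import org.apache.pulsar.client.api.Schema;
-import org.apache.pulsar.client.api.schema.GenericRecord;
-import org.apache.pulsar.common.schema.KeyValue;
-import org.apache.pulsar.common.schema.KeyValueEncodingType;
-
-public class DebeziumKeyValueConsumer {
-    public static void main(String[] args) throws Exception {
-        PulsarClient client = PulsarClient.builder()
-                .serviceUrl("pulsar://127.0.0.1:6650")
-                .build();
-        // The schema mirrors the one described under Converter Options:
-        // auto-consumed key and value schemas, encoded separately.
-        Consumer<KeyValue<GenericRecord, GenericRecord>> consumer = client
-                .newConsumer(Schema.KeyValue(Schema.AUTO_CONSUME(), Schema.AUTO_CONSUME(),
-                        KeyValueEncodingType.SEPARATED))
-                .topic("public/default/dbserver1.inventory.products")
-                .subscriptionName("sub-products")
-                .subscribe();
-        while (true) {
-            Message<KeyValue<GenericRecord, GenericRecord>> msg = consumer.receive();
-            KeyValue<GenericRecord, GenericRecord> change = msg.getValue();
-            System.out.println("key = " + change.getKey() + ", value = " + change.getValue());
-            consumer.acknowledge(msg);
-        }
-    }
-}
-
-```
-
-The configuration templates follow.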
- -* JSON - - ```json - - { - "database.hostname": "localhost", - "database.port": "3306", - "database.user": "debezium", - "database.password": "dbz", - "database.server.id": "184054", - "database.server.name": "dbserver1", - "database.whitelist": "inventory", - "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory", - "database.history.pulsar.topic": "history-topic", - "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650", - "key.converter": "org.apache.kafka.connect.json.JsonConverter", - "value.converter": "org.apache.kafka.connect.json.JsonConverter", - "offset.storage.topic": "offset-topic" - } - - ``` - -* YAML - - You can create a `debezium-mysql-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/resources/debezium-mysql-source-config.yaml) below to the `debezium-mysql-source-config.yaml` file. - - ```yaml - - tenant: "public" - namespace: "default" - name: "debezium-mysql-source" - topicName: "debezium-mysql-topic" - archive: "connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar" - parallelism: 1 - - configs: - - ## config for mysql, docker image: debezium/example-mysql:0.8 - database.hostname: "localhost" - database.port: "3306" - database.user: "debezium" - database.password: "dbz" - database.server.id: "184054" - database.server.name: "dbserver1" - database.whitelist: "inventory" - database.history: "org.apache.pulsar.io.debezium.PulsarDatabaseHistory" - database.history.pulsar.topic: "history-topic" - database.history.pulsar.service.url: "pulsar://127.0.0.1:6650" - - ## KEY_CONVERTER_CLASS_CONFIG, VALUE_CONVERTER_CLASS_CONFIG - key.converter: "org.apache.kafka.connect.json.JsonConverter" - value.converter: "org.apache.kafka.connect.json.JsonConverter" - - ## OFFSET_STORAGE_TOPIC_CONFIG - offset.storage.topic: "offset-topic" - - ``` - -### Usage - -This example shows how to change the data of a MySQL table using the Pulsar Debezium connector. - -1. Start a MySQL server with a database from which Debezium can capture changes. - - ```bash - - $ docker run -it --rm \ - --name mysql \ - -p 3306:3306 \ - -e MYSQL_ROOT_PASSWORD=debezium \ - -e MYSQL_USER=mysqluser \ - -e MYSQL_PASSWORD=mysqlpw debezium/example-mysql:0.8 - - ``` - -2. Start a Pulsar service locally in standalone mode. - - ```bash - - $ bin/pulsar standalone - - ``` - -3. Start the Pulsar Debezium connector in local run mode using one of the following methods. - - * Use the **JSON** configuration file as shown previously. - - Make sure the nar file is available at `connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar`. 
-
-     ```bash
-
-     $ bin/pulsar-admin source localrun \
-     --archive connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar \
-     --name debezium-mysql-source --destination-topic-name debezium-mysql-topic \
-     --tenant public \
-     --namespace default \
-     --source-config '{"database.hostname": "localhost","database.port": "3306","database.user": "debezium","database.password": "dbz","database.server.id": "184054","database.server.name": "dbserver1","database.whitelist": "inventory","database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory","database.history.pulsar.topic": "history-topic","database.history.pulsar.service.url": "pulsar://127.0.0.1:6650","key.converter": "org.apache.kafka.connect.json.JsonConverter","value.converter": "org.apache.kafka.connect.json.JsonConverter","pulsar.service.url": "pulsar://127.0.0.1:6650","offset.storage.topic": "offset-topic"}'
-
-     ```
-
-   * Use the **YAML** configuration file as shown previously.
-
-     ```bash
-
-     $ bin/pulsar-admin source localrun \
-     --source-config-file debezium-mysql-source-config.yaml
-
-     ```
-
-4. Subscribe to the topic _sub-products_ for the table _inventory.products_.
-
-   ```bash
-
-   $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0
-
-   ```
-
-5. Start a MySQL client in docker.
-
-   ```bash
-
-   $ docker run -it --rm \
-   --name mysqlterm \
-   --link mysql \
-   mysql:5.7 sh \
-   -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
-
-   ```
-
-6. A MySQL client session starts.
-
-   Use the following commands to change the data of the table _products_.
-
-   ```
-
-   mysql> use inventory;
-   mysql> show tables;
-   mysql> SELECT * FROM products;
-   mysql> UPDATE products SET name='1111111111' WHERE id=101;
-   mysql> UPDATE products SET name='1111111111' WHERE id=107;
-
-   ```
-
-   In the terminal window where you subscribed to the topic, you can see that the data changes have been captured in the _sub-products_ topic.
-
-## Example of PostgreSQL
-
-You need to create a configuration file before using the Pulsar Debezium connector.
-
-### Configuration
-
-You can use one of the following methods to create a configuration file.
-
-* JSON
-
-  ```json
-
-  {
-    "database.hostname": "localhost",
-    "database.port": "5432",
-    "database.user": "postgres",
-    "database.password": "changeme",
-    "database.dbname": "postgres",
-    "database.server.name": "dbserver1",
-    "plugin.name": "pgoutput",
-    "schema.whitelist": "public",
-    "table.whitelist": "public.users",
-    "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650"
-  }
-
-  ```
-
-* YAML
-
-  You can create a `debezium-postgres-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/resources/debezium-postgres-source-config.yaml) below to the `debezium-postgres-source-config.yaml` file.
-
-  ```yaml
-
-  tenant: "public"
-  namespace: "default"
-  name: "debezium-postgres-source"
-  topicName: "debezium-postgres-topic"
-  archive: "connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar"
-  parallelism: 1
-
-  configs:
-
-    ## config for postgres version 10+, official docker image: postgres:<10+>
-    database.hostname: "localhost"
-    database.port: "5432"
-    database.user: "postgres"
-    database.password: "changeme"
-    database.dbname: "postgres"
-    database.server.name: "dbserver1"
-    plugin.name: "pgoutput"
-    schema.whitelist: "public"
-    table.whitelist: "public.users"
-
-    ## PULSAR_SERVICE_URL_CONFIG
-    database.history.pulsar.service.url: "pulsar://127.0.0.1:6650"
-
-  ```
-
-Notice that `pgoutput` is a standard plugin of Postgres introduced in version 10 - see the [Postgres architecture documentation](https://www.postgresql.org/docs/10/logical-replication-architecture.html). You don't need to install anything; just make sure the WAL level is set to `logical` (see the docker command below and the [Postgres documentation](https://www.postgresql.org/docs/current/runtime-config-wal.html)).
-
-### Usage
-
-This example shows how to change the data of a PostgreSQL table using the Pulsar Debezium connector.
-
-
-1. Start a PostgreSQL server with a database from which Debezium can capture changes.
-
-   ```bash
-
-   $ docker run -d -it --rm \
-   --name pulsar-postgres \
-   -p 5432:5432 \
-   -e POSTGRES_PASSWORD=changeme \
-   postgres:13.3 -c wal_level=logical
-
-   ```
-
-2. Start a Pulsar service locally in standalone mode.
-
-   ```bash
-
-   $ bin/pulsar standalone
-
-   ```
-
-3. Start the Pulsar Debezium connector in local run mode using one of the following methods.
-
-   * Use the **JSON** configuration file as shown previously.
-
-     Make sure the nar file is available at `connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar`.
-
-     ```bash
-
-     $ bin/pulsar-admin source localrun \
-     --archive connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar \
-     --name debezium-postgres-source \
-     --destination-topic-name debezium-postgres-topic \
-     --tenant public \
-     --namespace default \
-     --source-config '{"database.hostname": "localhost","database.port": "5432","database.user": "postgres","database.password": "changeme","database.dbname": "postgres","database.server.name": "dbserver1","schema.whitelist": "public","table.whitelist": "public.users","pulsar.service.url": "pulsar://127.0.0.1:6650"}'
-
-     ```
-
-   * Use the **YAML** configuration file as shown previously.
-
-     ```bash
-
-     $ bin/pulsar-admin source localrun \
-     --source-config-file debezium-postgres-source-config.yaml
-
-     ```
-
-4. Subscribe to the topic _sub-users_ for the _public.users_ table.
-
-   ```
-
-   $ bin/pulsar-client consume -s "sub-users" public/default/dbserver1.public.users -n 0
-
-   ```
-
-5. Start a PostgreSQL client in docker. The container was named `pulsar-postgres` in step 1.
-
-   ```bash
-
-   $ docker exec -it pulsar-postgres /bin/bash
-
-   ```
-
-6. A PostgreSQL client session starts.
-
-   Use the following commands to create sample data in the table _users_.
-
-   ```
-
-   psql -U postgres -h localhost -p 5432
-   Password for user postgres:
-
-   CREATE TABLE users(
-     id BIGINT GENERATED ALWAYS AS IDENTITY, PRIMARY KEY(id),
-     hash_firstname TEXT NOT NULL,
-     hash_lastname TEXT NOT NULL,
-     gender VARCHAR(6) NOT NULL CHECK (gender IN ('male', 'female'))
-   );
-
-   INSERT INTO users(hash_firstname, hash_lastname, gender)
-     SELECT md5(RANDOM()::TEXT), md5(RANDOM()::TEXT), CASE WHEN RANDOM() < 0.5 THEN 'male' ELSE 'female' END FROM generate_series(1, 100);
-
-   postgres=# select * from users;
-
-    id | hash_firstname | hash_lastname | gender
-   -------+----------------------------------+----------------------------------+--------
-    1 | 02bf7880eb489edc624ba637f5ab42bd | 3e742c2cc4217d8e3382cc251415b2fb | female
-    2 | dd07064326bb9119189032316158f064 | 9c0e938f9eddbd5200ba348965afbc61 | male
-    3 | 2c5316fdd9d6595c1cceb70eed12e80c | 8a93d7d8f9d76acfaaa625c82a03ea8b | female
-    4 | 3dfa3b4f70d8cd2155567210e5043d2b | 32c156bc28f7f03ab5d28e2588a3dc19 | female
-
-
-   postgres=# UPDATE users SET hash_firstname='maxim' WHERE id=1;
-   UPDATE 1
-
-   ```
-
-   In the terminal window where you subscribed to the topic, you can receive messages like the following.
-
-   ```bash
-
-   ----- got message -----
-   {"before":null,"after":{"id":1,"hash_firstname":"maxim","hash_lastname":"292113d30a3ccee0e19733dd7f88b258","gender":"male"},"source":{"version":"1.0.0.Final","connector":"postgresql","name":"foobar","ts_ms":1624045862644,"snapshot":"false","db":"postgres","schema":"public","table":"users","txId":595,"lsn":24419784,"xmin":null},"op":"u","ts_ms":1624045862648}
-   ...many more
-
-   ```
-
-## Example of MongoDB
-
-You need to create a configuration file before using the Pulsar Debezium connector.
-
-* JSON
-
-  ```json
-
-  {
-    "mongodb.hosts": "rs0/mongodb:27017",
-    "mongodb.name": "dbserver1",
-    "mongodb.user": "debezium",
-    "mongodb.password": "dbz",
-    "mongodb.task.id": "1",
-    "database.whitelist": "inventory",
-    "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650"
-  }
-
-  ```
-
-* YAML
-
-  You can create a `debezium-mongodb-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mongodb/src/main/resources/debezium-mongodb-source-config.yaml) below to the `debezium-mongodb-source-config.yaml` file.
-
-  ```yaml
-
-  tenant: "public"
-  namespace: "default"
-  name: "debezium-mongodb-source"
-  topicName: "debezium-mongodb-topic"
-  archive: "connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar"
-  parallelism: 1
-
-  configs:
-
-    ## config for mongodb, docker image: debezium/example-mongodb:0.10
-    mongodb.hosts: "rs0/mongodb:27017"
-    mongodb.name: "dbserver1"
-    mongodb.user: "debezium"
-    mongodb.password: "dbz"
-    mongodb.task.id: "1"
-    database.whitelist: "inventory"
-    database.history.pulsar.service.url: "pulsar://127.0.0.1:6650"
-
-  ```
-
-### Usage
-
-This example shows how to change the data of a MongoDB collection using the Pulsar Debezium connector.
-
-
-1. Start a MongoDB server with a database from which Debezium can capture changes.
-
-   ```bash
-
-   $ docker pull debezium/example-mongodb:0.10
-   $ docker run -d -it --rm --name pulsar-mongodb -e MONGODB_USER=mongodb -e MONGODB_PASSWORD=mongodb -p 27017:27017 debezium/example-mongodb:0.10
-
-   ```
-
-   Use the following commands to initialize the data.
-
-   ``` bash
-
-   ./usr/local/bin/init-inventory.sh
-
-   ```
-
-   If the local host cannot access the container network, you can update the file ```/etc/hosts``` and add a rule ```127.0.0.1 6f114527a95f```.
6f114527a95f is the container ID, which you can get by running ```docker ps -a```.
-
-
-2. Start a Pulsar service locally in standalone mode.
-
-   ```bash
-
-   $ bin/pulsar standalone
-
-   ```
-
-3. Start the Pulsar Debezium connector in local run mode using one of the following methods.
-
-   * Use the **JSON** configuration file as shown previously.
-
-     Make sure the nar file is available at `connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar`.
-
-     ```bash
-
-     $ bin/pulsar-admin source localrun \
-     --archive connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar \
-     --name debezium-mongodb-source \
-     --destination-topic-name debezium-mongodb-topic \
-     --tenant public \
-     --namespace default \
-     --source-config '{"mongodb.hosts": "rs0/mongodb:27017","mongodb.name": "dbserver1","mongodb.user": "debezium","mongodb.password": "dbz","mongodb.task.id": "1","database.whitelist": "inventory","database.history.pulsar.service.url": "pulsar://127.0.0.1:6650"}'
-
-     ```
-
-   * Use the **YAML** configuration file as shown previously.
-
-     ```bash
-
-     $ bin/pulsar-admin source localrun \
-     --source-config-file debezium-mongodb-source-config.yaml
-
-     ```
-
-4. Subscribe to the topic _sub-products_ for the _inventory.products_ collection.
-
-   ```
-
-   $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0
-
-   ```
-
-5. Start a MongoDB client in docker.
-
-   ```bash
-
-   $ docker exec -it pulsar-mongodb /bin/bash
-
-   ```
-
-6. A MongoDB client session starts.
-
-   ```bash
-
-   mongo -u debezium -p dbz --authenticationDatabase admin localhost:27017/inventory
-   db.products.update({"_id":NumberLong(104)},{$set:{weight:1.25}})
-
-   ```
-
-   In the terminal window where you subscribed to the topic, you can receive messages like the following.
-
-   ```bash
-
-   ----- got message -----
-   {"schema":{"type":"struct","fields":[{"type":"string","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":"104"}}, value = {"schema":{"type":"struct","fields":[{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"after"},{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"patch"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"version"},{"type":"string","optional":false,"field":"connector"},{"type":"string","optional":false,"field":"name"},{"type":"int64","optional":false,"field":"ts_ms"},{"type":"string","optional":true,"name":"io.debezium.data.Enum","version":1,"parameters":{"allowed":"true,last,false"},"default":"false","field":"snapshot"},{"type":"string","optional":false,"field":"db"},{"type":"string","optional":false,"field":"rs"},{"type":"string","optional":false,"field":"collection"},{"type":"int32","optional":false,"field":"ord"},{"type":"int64","optional":true,"field":"h"}],"optional":false,"name":"io.debezium.connector.mongo.Source","field":"source"},{"type":"string","optional":true,"field":"op"},{"type":"int64","optional":true,"field":"ts_ms"}],"optional":false,"name":"dbserver1.inventory.products.Envelope"},"payload":{"after":"{\"_id\": {\"$numberLong\": \"104\"},\"name\": \"hammer\",\"description\": \"12oz carpenter's hammer\",\"weight\": 1.25,\"quantity\": 4}","patch":null,"source":{"version":"0.10.0.Final","connector":"mongodb","name":"dbserver1","ts_ms":1573541905000,"snapshot":"true","db":"inventory","rs":"rs0","collection":"products","ord":1,"h":4983083486544392763},"op":"r","ts_ms":1573541909761}}.
-
-## Example of Oracle
-
-### Packaging
-
-The Oracle connector does not include the Oracle JDBC driver, so you need to package the driver with the connector.
-The major reasons for not including the driver are the variety of versions and Oracle licensing. It is recommended to use the driver provided with your Oracle DB installation, or you can [download](https://www.oracle.com/database/technologies/appdev/jdbc.html) one.
-The integration tests have an [example](https://github.com/apache/pulsar/blob/e2bc52d40450fa00af258c4432a5b71d50a5c6e0/tests/docker-images/latest-version-image/Dockerfile#L110-L122) of packaging the driver into the connector nar file.
-
-### Configuration
-
-Debezium [requires](https://debezium.io/documentation/reference/1.5/connectors/oracle.html#oracle-overview) Oracle DB with LogMiner or XStream API enabled.
-The supported options, and the steps for enabling them, vary from version to version of Oracle DB.
-The steps outlined in the [documentation](https://debezium.io/documentation/reference/1.5/connectors/oracle.html#oracle-overview) and used in the [integration test](https://github.com/apache/pulsar/blob/master/tests/integration/src/test/java/org/apache/pulsar/tests/integration/io/sources/debezium/DebeziumOracleDbSourceTester.java) may or may not work for the version and edition of Oracle DB you are using.
-Refer to the [documentation for Oracle DB](https://docs.oracle.com/en/database/oracle/oracle-database/) as needed.
-
-Similarly to other connectors, you can use JSON or YAML to configure the connector.
-
-* JSON
-
-```json
-
-{
-  "database.hostname": "localhost",
-  "database.port": "1521",
-  "database.user": "dbzuser",
-  "database.password": "dbz",
-  "database.dbname": "XE",
-  "database.server.name": "XE",
-  "schema.exclude.list": "system,dbzuser",
-  "snapshot.mode": "initial",
-  "topic.namespace": "public/default",
-  "task.class": "io.debezium.connector.oracle.OracleConnectorTask",
-  "value.converter": "org.apache.kafka.connect.json.JsonConverter",
-  "key.converter": "org.apache.kafka.connect.json.JsonConverter",
-  "typeClassName": "org.apache.pulsar.common.schema.KeyValue",
-  "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory",
-  "database.tcpKeepAlive": "true",
-  "decimal.handling.mode": "double",
-  "database.history.pulsar.topic": "debezium-oracle-source-history-topic",
-  "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650"
-}
-
-```
-
-* YAML
-
-You can create a `debezium-oracle-source-config.yaml` file like the following:
-
-```yaml
-
-tenant: "public"
-namespace: "default"
-name: "debezium-oracle-source"
-topicName: "debezium-oracle-topic"
-parallelism: 1
-
-className: "org.apache.pulsar.io.debezium.oracle.DebeziumOracleSource"
-
-configs:
-    database.hostname: "localhost"
-    database.port: "1521"
-    database.user: "dbzuser"
-    database.password: "dbz"
-    database.dbname: "XE"
-    database.server.name: "XE"
-    schema.exclude.list: "system,dbzuser"
-    snapshot.mode: "initial"
-    topic.namespace: "public/default"
-    task.class: "io.debezium.connector.oracle.OracleConnectorTask"
-    value.converter: "org.apache.kafka.connect.json.JsonConverter"
-    key.converter: "org.apache.kafka.connect.json.JsonConverter"
-    typeClassName: "org.apache.pulsar.common.schema.KeyValue"
-    database.history: "org.apache.pulsar.io.debezium.PulsarDatabaseHistory"
-    database.tcpKeepAlive: "true"
-    decimal.handling.mode: "double"
-    database.history.pulsar.topic: "debezium-oracle-source-history-topic"
-    database.history.pulsar.service.url: "pulsar://127.0.0.1:6650"
-
-```
-
-For the full list of configuration properties supported by Debezium, see [Debezium Connector for Oracle](https://debezium.io/documentation/reference/1.5/connectors/oracle.html#oracle-connector-properties).
-
-## Example of Microsoft SQL
-
-### Configuration
-
-Debezium [requires](https://debezium.io/documentation/reference/1.5/connectors/sqlserver.html#sqlserver-overview) SQL Server with CDC enabled.
-For the setup steps, see the [documentation](https://debezium.io/documentation/reference/1.5/connectors/sqlserver.html#setting-up-sqlserver) and the [integration test](https://github.com/apache/pulsar/blob/master/tests/integration/src/test/java/org/apache/pulsar/tests/integration/io/sources/debezium/DebeziumMsSqlSourceTester.java).
-For more information, see [Enable and disable change data capture in Microsoft SQL Server](https://docs.microsoft.com/en-us/sql/relational-databases/track-changes/enable-and-disable-change-data-capture-sql-server).
-
-Similarly to other connectors, you can use JSON or YAML to configure the connector.
-
-* JSON
-
-```json
-
-{
-  "database.hostname": "localhost",
-  "database.port": "1433",
-  "database.user": "sa",
-  "database.password": "MyP@ssw0rd!",
-  "database.dbname": "MyTestDB",
-  "database.server.name": "mssql",
-  "snapshot.mode": "schema_only",
-  "topic.namespace": "public/default",
-  "task.class": "io.debezium.connector.sqlserver.SqlServerConnectorTask",
-  "value.converter": "org.apache.kafka.connect.json.JsonConverter",
-  "key.converter": "org.apache.kafka.connect.json.JsonConverter",
-  "typeClassName": "org.apache.pulsar.common.schema.KeyValue",
-  "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory",
-  "database.tcpKeepAlive": "true",
-  "decimal.handling.mode": "double",
-  "database.history.pulsar.topic": "debezium-mssql-source-history-topic",
-  "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650"
-}
-
-```
-
-* YAML
-
-```yaml
-
-tenant: "public"
-namespace: "default"
-name: "debezium-mssql-source"
-topicName: "debezium-mssql-topic"
-parallelism: 1
-
-className: "org.apache.pulsar.io.debezium.mssql.DebeziumMsSqlSource"
-
-configs:
-    database.hostname: "localhost"
-    database.port: "1433"
-    database.user: "sa"
-    database.password: "MyP@ssw0rd!"
-    database.dbname: "MyTestDB"
-    database.server.name: "mssql"
-    snapshot.mode: "schema_only"
-    topic.namespace: "public/default"
-    task.class: "io.debezium.connector.sqlserver.SqlServerConnectorTask"
-    value.converter: "org.apache.kafka.connect.json.JsonConverter"
-    key.converter: "org.apache.kafka.connect.json.JsonConverter"
-    typeClassName: "org.apache.pulsar.common.schema.KeyValue"
-    database.history: "org.apache.pulsar.io.debezium.PulsarDatabaseHistory"
-    database.tcpKeepAlive: "true"
-    decimal.handling.mode: "double"
-    database.history.pulsar.topic: "debezium-mssql-source-history-topic"
-    database.history.pulsar.service.url: "pulsar://127.0.0.1:6650"
-
-```
-
-For the full list of configuration properties supported by Debezium, see [Debezium Connector for MS SQL](https://debezium.io/documentation/reference/1.5/connectors/sqlserver.html#sqlserver-connector-properties).
-
-## FAQ
-
-### The Debezium postgres connector hangs when creating a snapshot
-
-```
-
-#18 prio=5 os_prio=31 tid=0x00007fd83096f800 nid=0xa403 waiting on condition [0x000070000f534000]
-    java.lang.Thread.State: WAITING (parking)
-     at sun.misc.Unsafe.park(Native Method)
-     -  parking to wait for  <0x00000007ab025a58> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
-     at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
-     at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
-     at java.util.concurrent.LinkedBlockingDeque.putLast(LinkedBlockingDeque.java:396)
-     at java.util.concurrent.LinkedBlockingDeque.put(LinkedBlockingDeque.java:649)
-     at io.debezium.connector.base.ChangeEventQueue.enqueue(ChangeEventQueue.java:132)
-     at io.debezium.connector.postgresql.PostgresConnectorTask$Lambda$203/385424085.accept(Unknown Source)
-     at io.debezium.connector.postgresql.RecordsSnapshotProducer.sendCurrentRecord(RecordsSnapshotProducer.java:402)
-     at io.debezium.connector.postgresql.RecordsSnapshotProducer.readTable(RecordsSnapshotProducer.java:321)
-     at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$takeSnapshot$6(RecordsSnapshotProducer.java:226)
-     at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$240/1347039967.accept(Unknown Source)
-     at io.debezium.jdbc.JdbcConnection.queryWithBlockingConsumer(JdbcConnection.java:535)
-     at io.debezium.connector.postgresql.RecordsSnapshotProducer.takeSnapshot(RecordsSnapshotProducer.java:224)
-     at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$start$0(RecordsSnapshotProducer.java:87)
-     at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$206/589332928.run(Unknown Source)
-     at java.util.concurrent.CompletableFuture.uniRun(CompletableFuture.java:705)
-     at java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:717)
-     at java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2010)
-     at io.debezium.connector.postgresql.RecordsSnapshotProducer.start(RecordsSnapshotProducer.java:87)
-     at io.debezium.connector.postgresql.PostgresConnectorTask.start(PostgresConnectorTask.java:126)
-     at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:47)
-     at org.apache.pulsar.io.kafka.connect.KafkaConnectSource.open(KafkaConnectSource.java:127)
-     at org.apache.pulsar.io.debezium.DebeziumSource.open(DebeziumSource.java:100)
-     at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupInput(JavaInstanceRunnable.java:690)
-     at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupJavaInstance(JavaInstanceRunnable.java:200)
-     at org.apache.pulsar.functions.instance.JavaInstanceRunnable.run(JavaInstanceRunnable.java:230)
-     at java.lang.Thread.run(Thread.java:748)
-
-```
-
-If you encounter the above problem when synchronizing data, refer to [this issue](https://github.com/apache/pulsar/issues/4075) and add the following configuration to the configuration file:
-
-```
-
-max.queue.size=
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/io-debug.md b/site2/website/versioned_docs/version-2.9.3-deprecated/io-debug.md
deleted file mode 100644
index 844e101d00d2a7..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/io-debug.md
+++ /dev/null
@@ -1,407 +0,0 @@
----
-id: io-debug
-title: How to debug Pulsar connectors
-sidebar_label: "Debug"
-original_id: io-debug
----
-This guide explains how to debug connectors in localrun or cluster mode and gives a debugging checklist.
-To better demonstrate how to debug Pulsar connectors, this guide takes the Mongo sink connector as an example.
-
-**Deploy a Mongo sink environment**
-
-1. Start a Mongo service.
-
-   ```bash
-
-   docker pull mongo:4
-   docker run -d -p 27017:27017 --name pulsar-mongo -v $PWD/data:/data/db mongo:4
-
-   ```
-
-2. Create a DB and a collection.
-
-   ```bash
-
-   docker exec -it pulsar-mongo /bin/bash
-   mongo
-   > use pulsar
-   > db.createCollection('messages')
-   > exit
-
-   ```
-
-3. Start Pulsar standalone.
-
-   ```bash
-
-   docker pull apachepulsar/pulsar:2.4.0
-   docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --link pulsar-mongo --name pulsar-mongo-standalone apachepulsar/pulsar:2.4.0 bin/pulsar standalone
-
-   ```
-
-4. Configure the Mongo sink with the `mongo-sink-config.yaml` file.
-
-   ```yaml
-
-   configs:
-       mongoUri: "mongodb://pulsar-mongo:27017"
-       database: "pulsar"
-       collection: "messages"
-       batchSize: 2
-       batchTimeMs: 500
-
-   ```
-
-   ```bash
-
-   docker cp mongo-sink-config.yaml pulsar-mongo-standalone:/pulsar/
-
-   ```
-
-5. Download the Mongo sink nar package.
-
-   ```bash
-
-   docker exec -it pulsar-mongo-standalone /bin/bash
-   curl -O http://apache.01link.hk/pulsar/pulsar-2.4.0/connectors/pulsar-io-mongo-2.4.0.nar
-
-   ```
-
-## Debug in localrun mode
-
-Start the Mongo sink in localrun mode using the `localrun` command.
-
-:::tip
-
-For more information about the `localrun` command, see [`localrun`](reference-connector-admin.md/#localrun-1).
-
-:::
-
-```bash
-
-./bin/pulsar-admin sinks localrun \
---archive pulsar-io-mongo-2.4.0.nar \
---tenant public --namespace default \
---inputs test-mongo \
---name pulsar-mongo-sink \
---sink-config-file mongo-sink-config.yaml \
---parallelism 1
-
-```
-
-### Use connector log
-
-Use one of the following methods to get a connector log in localrun mode:
-* After executing the `localrun` command, the **log is automatically printed on the console**.
-* The log is located at:
-
-   ```bash
-
-   logs/functions/tenant/namespace/function-name/function-name-instance-id.log
-
-   ```
-
-   **Example**
-
-   The path of the Mongo sink connector is:
-
-   ```bash
-
-   logs/functions/public/default/pulsar-mongo-sink/pulsar-mongo-sink-0.log
-
-   ```
-
-To explain the log information clearly, this guide breaks the large block of information into smaller blocks and adds a description for each block.
-* This piece of log information shows the storage path of the nar package after decompression.
-
-   ```
-
-   08:21:54.132 [main] INFO  org.apache.pulsar.common.nar.NarClassLoader - Created class loader with paths: [file:/tmp/pulsar-nar/pulsar-io-mongo-2.4.0.nar-unpacked/, file:/tmp/pulsar-nar/pulsar-io-mongo-2.4.0.nar-unpacked/META-INF/bundled-dependencies/,
-
-   ```
-
-   :::tip
-
-   If a `class cannot be found` exception is thrown, check whether the nar file is decompressed in the folder `file:/tmp/pulsar-nar/pulsar-io-mongo-2.4.0.nar-unpacked/META-INF/bundled-dependencies/` or not.
-
-   :::
-
-* This piece of log information illustrates the basic information about the Mongo sink connector, such as tenant, namespace, name, parallelism, resources, and so on, which can be used to **check whether the Mongo sink connector is configured correctly or not**.
- - ```bash - - 08:21:55.390 [main] INFO org.apache.pulsar.functions.runtime.ThreadRuntime - ThreadContainer starting function with instance config InstanceConfig(instanceId=0, functionId=853d60a1-0c48-44d5-9a5c-6917386476b2, functionVersion=c2ce1458-b69e-4175-88c0-a0a856a2be8c, functionDetails=tenant: "public" - namespace: "default" - name: "pulsar-mongo-sink" - className: "org.apache.pulsar.functions.api.utils.IdentityFunction" - autoAck: true - parallelism: 1 - source { - typeClassName: "[B" - inputSpecs { - key: "test-mongo" - value { - } - } - cleanupSubscription: true - } - sink { - className: "org.apache.pulsar.io.mongodb.MongoSink" - configs: "{\"mongoUri\":\"mongodb://pulsar-mongo:27017\",\"database\":\"pulsar\",\"collection\":\"messages\",\"batchSize\":2,\"batchTimeMs\":500}" - typeClassName: "[B" - } - resources { - cpu: 1.0 - ram: 1073741824 - disk: 10737418240 - } - componentType: SINK - , maxBufferedTuples=1024, functionAuthenticationSpec=null, port=38459, clusterName=local) - - ``` - -* This piece of log information demonstrates the status of the connections to Mongo and configuration information. - - ```bash - - 08:21:56.231 [cluster-ClusterId{value='5d6396a3c9e77c0569ff00eb', description='null'}-pulsar-mongo:27017] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:1, serverValue:8}] to pulsar-mongo:27017 - 08:21:56.326 [cluster-ClusterId{value='5d6396a3c9e77c0569ff00eb', description='null'}-pulsar-mongo:27017] INFO org.mongodb.driver.cluster - Monitor thread successfully connected to server with description ServerDescription{address=pulsar-mongo:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[4, 2, 0]}, minWireVersion=0, maxWireVersion=8, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=89058800} - - ``` - -* This piece of log information explains the configuration of consumers and clients, including the topic name, subscription name, subscription type, and so on. 
- - ```bash - - 08:21:56.719 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl - Starting Pulsar consumer status recorder with config: { - "topicNames" : [ "test-mongo" ], - "topicsPattern" : null, - "subscriptionName" : "public/default/pulsar-mongo-sink", - "subscriptionType" : "Shared", - "receiverQueueSize" : 1000, - "acknowledgementsGroupTimeMicros" : 100000, - "negativeAckRedeliveryDelayMicros" : 60000000, - "maxTotalReceiverQueueSizeAcrossPartitions" : 50000, - "consumerName" : null, - "ackTimeoutMillis" : 0, - "tickDurationMillis" : 1000, - "priorityLevel" : 0, - "cryptoFailureAction" : "CONSUME", - "properties" : { - "application" : "pulsar-sink", - "id" : "public/default/pulsar-mongo-sink", - "instance_id" : "0" - }, - "readCompacted" : false, - "subscriptionInitialPosition" : "Latest", - "patternAutoDiscoveryPeriod" : 1, - "regexSubscriptionMode" : "PersistentOnly", - "deadLetterPolicy" : null, - "autoUpdatePartitions" : true, - "replicateSubscriptionState" : false, - "resetIncludeHead" : false - } - 08:21:56.726 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl - Pulsar client config: { - "serviceUrl" : "pulsar://localhost:6650", - "authPluginClassName" : null, - "authParams" : null, - "operationTimeoutMs" : 30000, - "statsIntervalSeconds" : 60, - "numIoThreads" : 1, - "numListenerThreads" : 1, - "connectionsPerBroker" : 1, - "useTcpNoDelay" : true, - "useTls" : false, - "tlsTrustCertsFilePath" : null, - "tlsAllowInsecureConnection" : false, - "tlsHostnameVerificationEnable" : false, - "concurrentLookupRequest" : 5000, - "maxLookupRequest" : 50000, - "maxNumberOfRejectedRequestPerConnection" : 50, - "keepAliveIntervalSeconds" : 30, - "connectionTimeoutMs" : 10000, - "requestTimeoutMs" : 60000, - "defaultBackoffIntervalNanos" : 100000000, - "maxBackoffIntervalNanos" : 30000000000 - } - - ``` - -## Debug in cluster mode -You can use the following methods to debug a connector in cluster mode: -* [Use connector log](#use-connector-log) -* [Use admin CLI](#use-admin-cli) -### Use connector log -In cluster mode, multiple connectors can run on a worker. To find the log path of a specified connector, use the `workerId` to locate the connector log. -### Use admin CLI -Pulsar admin CLI helps you debug Pulsar connectors with the following subcommands: -* [`get`](#get) - -* [`status`](#status) -* [`topics stats`](#topics-stats) - -**Create a Mongo sink** - -```bash - -./bin/pulsar-admin sinks create \ ---archive pulsar-io-mongo-2.4.0.nar \ ---tenant public \ ---namespace default \ ---inputs test-mongo \ ---name pulsar-mongo-sink \ ---sink-config-file mongo-sink-config.yaml \ ---parallelism 1 - -``` - -### `get` -Use the `get` command to get the basic information about the Mongo sink connector, such as tenant, namespace, name, parallelism, and so on. 
- -```bash - -./bin/pulsar-admin sinks get --tenant public --namespace default --name pulsar-mongo-sink -{ - "tenant": "public", - "namespace": "default", - "name": "pulsar-mongo-sink", - "className": "org.apache.pulsar.io.mongodb.MongoSink", - "inputSpecs": { - "test-mongo": { - "isRegexPattern": false - } - }, - "configs": { - "mongoUri": "mongodb://pulsar-mongo:27017", - "database": "pulsar", - "collection": "messages", - "batchSize": 2.0, - "batchTimeMs": 500.0 - }, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "autoAck": true -} - -``` - -:::tip - -For more information about the `get` command, see [`get`](reference-connector-admin.md/#get-1). - -::: - -### `status` -Use the `status` command to get the current status about the Mongo sink connector, such as the number of instance, the number of running instance, instanceId, workerId and so on. - -```bash - -./bin/pulsar-admin sinks status ---tenant public \ ---namespace default \ ---name pulsar-mongo-sink -{ -"numInstances" : 1, -"numRunning" : 1, -"instances" : [ { - "instanceId" : 0, - "status" : { - "running" : true, - "error" : "", - "numRestarts" : 0, - "numReadFromPulsar" : 0, - "numSystemExceptions" : 0, - "latestSystemExceptions" : [ ], - "numSinkExceptions" : 0, - "latestSinkExceptions" : [ ], - "numWrittenToSink" : 0, - "lastReceivedTime" : 0, - "workerId" : "c-standalone-fw-5d202832fd18-8080" - } -} ] -} - -``` - -:::tip - -For more information about the `status` command, see [`status`](reference-connector-admin.md/#stauts-1). -If there are multiple connectors running on a worker, `workerId` can locate the worker on which the specified connector is running. - -::: - -### `topics stats` -Use the `topics stats` command to get the stats for a topic and its connected producer and consumer, such as whether the topic has received messages or not, whether there is a backlog of messages or not, the available permits and other key information. All rates are computed over a 1-minute window and are relative to the last completed 1-minute period. - -```bash - -./bin/pulsar-admin topics stats test-mongo -{ - "msgRateIn" : 0.0, - "msgThroughputIn" : 0.0, - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "averageMsgSize" : 0.0, - "storageSize" : 1, - "publishers" : [ ], - "subscriptions" : { - "public/default/pulsar-mongo-sink" : { - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "msgRateRedeliver" : 0.0, - "msgBacklog" : 0, - "blockedSubscriptionOnUnackedMsgs" : false, - "msgDelayed" : 0, - "unackedMessages" : 0, - "type" : "Shared", - "msgRateExpired" : 0.0, - "consumers" : [ { - "msgRateOut" : 0.0, - "msgThroughputOut" : 0.0, - "msgRateRedeliver" : 0.0, - "consumerName" : "dffdd", - "availablePermits" : 999, - "unackedMessages" : 0, - "blockedConsumerOnUnackedMsgs" : false, - "metadata" : { - "instance_id" : "0", - "application" : "pulsar-sink", - "id" : "public/default/pulsar-mongo-sink" - }, - "connectedSince" : "2019-08-26T08:48:07.582Z", - "clientVersion" : "2.4.0", - "address" : "/172.17.0.3:57790" - } ], - "isReplicated" : false - } - }, - "replication" : { }, - "deduplicationStatus" : "Disabled" -} - -``` - -:::tip - -For more information about the `topic stats` command, see [`topic stats`](http://pulsar.apache.org/docs/en/pulsar-admin/#stats-1). - -::: - -## Checklist -This checklist indicates the major areas to check when you debug connectors. It is a reminder of what to look for to ensure a thorough review and an evaluation tool to get the status of connectors. 
-* Does Pulsar start successfully? - -* Does the external service run normally? - -* Is the nar package complete? - -* Is the connector configuration file correct? - -* In localrun mode, run a connector and check the printed information (connector log) on the console. - -* In cluster mode: - - * Use the `get` command to get the basic information. - - * Use the `status` command to get the current status. - * Use the `topics stats` command to get the stats for a specified topic and its connected producers and consumers. - - * Check the connector log. -* Enter into the external system and verify the result. diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/io-develop.md b/site2/website/versioned_docs/version-2.9.3-deprecated/io-develop.md deleted file mode 100644 index d6f4f8261ac820..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/io-develop.md +++ /dev/null @@ -1,421 +0,0 @@ ---- -id: io-develop -title: How to develop Pulsar connectors -sidebar_label: "Develop" -original_id: io-develop ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -This guide describes how to develop Pulsar connectors to move data -between Pulsar and other systems. - -Pulsar connectors are special [Pulsar Functions](functions-overview.md), so creating -a Pulsar connector is similar to creating a Pulsar function. - -Pulsar connectors come in two types: - -| Type | Description | Example -|---|---|--- -{@inject: github:Source:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java}|Import data from another system to Pulsar.|[RabbitMQ source connector](io-rabbitmq.md) imports the messages of a RabbitMQ queue to a Pulsar topic. -{@inject: github:Sink:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java}|Export data from Pulsar to another system.|[Kinesis sink connector](io-kinesis.md) exports the messages of a Pulsar topic to a Kinesis stream. - -## Develop - -You can develop Pulsar source connectors and sink connectors. - -### Source - -Developing a source connector is to implement the {@inject: github:Source:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} -interface, which means you need to implement the {@inject: github:open:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} method and the {@inject: github:read:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} method. - -1. Implement the {@inject: github:open:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} method. - - ```java - - /** - * Open connector with configuration - * - * @param config initialization config - * @param sourceContext - * @throws Exception IO type exceptions when opening a connector - */ - void open(final Map config, SourceContext sourceContext) throws Exception; - - ``` - - This method is called when the source connector is initialized. - - In this method, you can retrieve all connector specific settings through the passed-in `config` parameter and initialize all necessary resources. - - For example, a Kafka connector can create a Kafka client in this `open` method. - - Besides, Pulsar runtime also provides a `SourceContext` for the - connector to access runtime resources for tasks like collecting metrics. The implementation can save the `SourceContext` for future use. - -2. Implement the {@inject: github:read:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} method. - - ```java - - /** - * Reads the next message from source. 
-     * If source does not have any new messages, this call should block.
-     * @return next message from source. The return result should never be null
-     * @throws Exception
-     */
-    Record<T> read() throws Exception;
-
-   ```
-
-   If there is nothing to return, the implementation should block rather than return `null`.
-
-   The returned {@inject: github:Record:/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/Record.java} should encapsulate the following information, which is needed by the Pulsar IO runtime.
-
-   * {@inject: github:Record:/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/Record.java} should provide the following variables:
-
-     |Variable|Required|Description
-     |---|---|---
-     `TopicName`|No|The Pulsar topic from which the record originates.
-     `Key`|No|Messages can optionally be tagged with keys. <br/><br/>For more information, see [Routing modes](concepts-messaging.md#routing-modes).
-     `Value`|Yes|Actual data of the record.
-     `EventTime`|No|Event time of the record from the source.
-     `PartitionId`|No|If the record originates from a partitioned source, it returns its `PartitionId`. <br/><br/>`PartitionId` is used as a part of the unique identifier by the Pulsar IO runtime to deduplicate messages and achieve the exactly-once processing guarantee.
-     `RecordSequence`|No|If the record originates from a sequential source, it returns its `RecordSequence`. <br/><br/>`RecordSequence` is used as a part of the unique identifier by the Pulsar IO runtime to deduplicate messages and achieve the exactly-once processing guarantee.
-     `Properties`|No|If the record carries user-defined properties, it returns those properties.
-     `DestinationTopic`|No|The topic to which the message should be written.
-     `Message`|No|A class which carries data sent by users. <br/><br/>For more information, see [Message.java](https://github.com/apache/pulsar/blob/master/pulsar-client-api/src/main/java/org/apache/pulsar/client/api/Message.java).
-
-   * {@inject: github:Record:/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/Record.java} should provide the following methods:
-
-     Method|Description
-     |---|---
-     `ack` |Acknowledge that the record is fully processed.
-     `fail`|Indicate that the record fails to be processed.
-
-## Handle schema information
-
-Pulsar IO automatically handles the schema and provides a strongly typed API based on Java generics.
-If you know the schema type that you are producing, you can declare the Java class relative to that type in your source declaration.
-
-```
-
-public class MySource implements Source<String> {
-    public Record<String> read() {}
-}
-
-```
-
-If you want to implement a source that works with any schema, you can go with `byte[]` (or `ByteBuffer`) and use `Schema.AUTO_PRODUCE_BYTES()`.
-
-```
-
-public class MySource implements Source<byte[]> {
-    public Record<byte[]> read() {
-
-        Schema wantedSchema = ....
-        Record<byte[]> myRecord = new MyRecordImplementation();
-        ....
-    }
-    class MyRecordImplementation implements Record<byte[]> {
-        public byte[] getValue() {
-            return ....encoded byte[]...that represents the value
-        }
-        public Schema<byte[]> getSchema() {
-            return Schema.AUTO_PRODUCE_BYTES(wantedSchema);
-        }
-    }
-}
-
-```
-
-To handle the `KeyValue` type properly, follow these guidelines for your record implementation:
-- It must implement the {@inject: github:KVRecord:/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/KVRecord.java} interface and implement `getKeySchema`, `getValueSchema`, and `getKeyValueEncodingType`
-- It must return a `KeyValue` object as `Record.getValue()`
-- It may return null in `Record.getSchema()`
-
-When the Pulsar IO runtime encounters a `KVRecord`, it applies the following changes automatically:
-- Set the `KeyValueSchema` properly
-- Encode the Message Key and the Message Value according to the `KeyValueEncoding` (SEPARATED or INLINE)
-
-:::tip
-
-For more information about **how to create a source connector**, see {@inject: github:KafkaSource:/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSource.java}.
-
-:::
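-
-Putting the pieces above together, a complete source can look like the following sketch. `CountingSource` is a hypothetical example, not a built-in connector: it emits a counter value every second, blocks in `read()` as the contract requires, and returns a `Record` that leaves the optional fields unset.
-
-```java
-
-import java.util.Map;
-import java.util.Optional;
-import java.util.concurrent.BlockingQueue;
-import java.util.concurrent.LinkedBlockingQueue;
-import org.apache.pulsar.functions.api.Record;
-import org.apache.pulsar.io.core.Source;
-import org.apache.pulsar.io.core.SourceContext;
-
-public class CountingSource implements Source<String> {
-
-    private BlockingQueue<String> queue;
-    private Thread producerThread;
-    private volatile boolean running;
-
-    @Override
-    public void open(Map<String, Object> config, SourceContext context) {
-        queue = new LinkedBlockingQueue<>(1024);
-        running = true;
-        // A background thread stands in for a real external system.
-        producerThread = new Thread(() -> {
-            long i = 0;
-            while (running) {
-                try {
-                    queue.put("message-" + i++);
-                    Thread.sleep(1000);
-                } catch (InterruptedException e) {
-                    Thread.currentThread().interrupt();
-                    return;
-                }
-            }
-        });
-        producerThread.start();
-    }
-
-    @Override
-    public Record<String> read() throws Exception {
-        // Blocks until a message is available, as the contract requires.
-        String value = queue.take();
-        return new Record<String>() {
-            @Override
-            public Optional<String> getKey() {
-                return Optional.empty(); // optional fields may be left unset
-            }
-
-            @Override
-            public String getValue() {
-                return value;
-            }
-
-            @Override
-            public void ack() {
-                // Acknowledge to the external system here, if it supports it.
-            }
-        };
-    }
-
-    @Override
-    public void close() {
-        running = false;
-        producerThread.interrupt();
-    }
-}
-
-```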
-
-### Sink
-
-Developing a sink connector **is similar to** developing a source connector, that is, you need to implement the {@inject: github:Sink:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} interface, which means implementing the {@inject: github:open:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} method and the {@inject: github:write:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} method.
-
-1. Implement the {@inject: github:open:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} method.
-
-   ```java
-
-    /**
-     * Open connector with configuration
-     *
-     * @param config initialization config
-     * @param sinkContext
-     * @throws Exception IO type exceptions when opening a connector
-     */
-    void open(final Map<String, Object> config, SinkContext sinkContext) throws Exception;
-
-   ```
-
-2. Implement the {@inject: github:write:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} method.
-
-   ```java
-
-    /**
-     * Write a message to Sink
-     * @param record record to write to sink
-     * @throws Exception
-     */
-    void write(Record<T> record) throws Exception;
-
-   ```
-
-   During the implementation, you can decide how to write the `Value` and
-   the `Key` to the actual sink, and leverage all the provided information such as
-   `PartitionId` and `RecordSequence` to achieve different processing guarantees.
-
-   You also need to ack records (if messages are sent successfully) or fail records (if messages fail to send).
-
-## Handling schema information
-
-Pulsar IO automatically handles the schema and provides a strongly typed API based on Java generics.
-If you know the schema type that you are consuming from, you can declare the Java class relative to that type in your sink declaration.
-
-```
-
-public class MySink implements Sink<String> {
-    public void write(Record<String> record) {}
-}
-
-```
-
-If you want to implement a sink that works with any schema, you can go with the special `GenericObject` interface.
-
-```
-
-public class MySink implements Sink<GenericObject> {
-    public void write(Record<GenericObject> record) {
-        Schema<GenericObject> schema = record.getSchema();
-        GenericObject genericObject = record.getValue();
-        if (genericObject != null) {
-            SchemaType type = genericObject.getSchemaType();
-            Object nativeObject = genericObject.getNativeObject();
-            ...
-        }
-        ....
-    }
-}
-
-```
-
-In the case of AVRO, JSON, and Protobuf records (schemaType=AVRO,JSON,PROTOBUF_NATIVE), you can cast the
-`genericObject` variable to `GenericRecord` and use the `getFields()` and `getField()` API.
-You are able to access the native AVRO record using `genericObject.getNativeObject()`.
-
-In the case of the KeyValue type, you can access both the schema for the key and the schema for the value using the following code.
-
-```
-
-public class MySink implements Sink<GenericObject> {
-    public void write(Record<GenericObject> record) {
-        Schema<GenericObject> schema = record.getSchema();
-        GenericObject genericObject = record.getValue();
-        SchemaType type = genericObject.getSchemaType();
-        Object nativeObject = genericObject.getNativeObject();
-        if (type == SchemaType.KEY_VALUE) {
-            KeyValue keyValue = (KeyValue) nativeObject;
-            Object key = keyValue.getKey();
-            Object value = keyValue.getValue();
-
-            KeyValueSchema keyValueSchema = (KeyValueSchema) schema;
-            Schema keySchema = keyValueSchema.getKeySchema();
-            Schema valueSchema = keyValueSchema.getValueSchema();
-        }
-        ....
-    }
-}
-
-```
-
-## Test
-
-Testing connectors can be challenging because Pulsar IO connectors interact with two systems
-that may be difficult to mock: Pulsar and the system to which the connector is connecting.
-
-It is recommended to write dedicated tests that verify the connector functionality
-while mocking the external service.
-
-### Unit test
-
-You can create unit tests for your connector.
-
-### Integration test
-
-Once you have written sufficient unit tests, you can add
-separate integration tests to verify end-to-end functionality.
-
-Pulsar uses [testcontainers](https://www.testcontainers.org/) **for all integration tests**.
-
-:::tip
-
-For more information about **how to create integration tests for Pulsar connectors**, see {@inject: github:IntegrationTests:/tests/integration/src/test/java/org/apache/pulsar/tests/integration/io}.
-
-:::
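-
-For example, a unit test for the `MySink<String>` shown above can mock both the `SinkContext` and the incoming `Record`. The sketch below is an assumption-laden illustration: it presumes `MySink` also implements `open` and `close`, and that TestNG and Mockito are on the test classpath (the combination used by the Pulsar code base itself); adapt it to your preferred framework.
-
-```java
-
-import static org.mockito.Mockito.mock;
-import static org.mockito.Mockito.when;
-
-import java.util.Collections;
-import org.apache.pulsar.functions.api.Record;
-import org.apache.pulsar.io.core.SinkContext;
-import org.testng.annotations.Test;
-
-public class MySinkTest {
-
-    @Test
-    public void testWriteForwardsTheValue() throws Exception {
-        MySink sink = new MySink();
-        sink.open(Collections.emptyMap(), mock(SinkContext.class));
-
-        @SuppressWarnings("unchecked")
-        Record<String> record = mock(Record.class);
-        when(record.getValue()).thenReturn("hello");
-
-        // Should not throw; assert against the mocked external system here,
-        // for example that the value was forwarded and the record acknowledged.
-        sink.write(record);
-        sink.close();
-    }
-}
-
-```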
- -:::note - -If you plan to package and distribute your connector for others to use, you are obligated to - -::: - -license and copyright your own code properly. Remember to add the license and copyright to -all libraries your code uses and to your distribution. -> -> If you use the [NAR](#nar) method, the NAR plugin -automatically creates a `DEPENDENCIES` file in the generated NAR package, including the proper -licensing and copyrights of all libraries of your connector. - -### NAR - -**NAR** stands for NiFi Archive, which is a custom packaging mechanism used by Apache NiFi, to provide -a bit of Java ClassLoader isolation. - -:::tip - -For more information about **how NAR works**, see [here](https://medium.com/hashmapinc/nifi-nar-files-explained-14113f7796fd). - -::: - -Pulsar uses the same mechanism for packaging **all** [built-in connectors](io-connectors.md). - -The easiest approach to package a Pulsar connector is to create a NAR package using [nifi-nar-maven-plugin](https://mvnrepository.com/artifact/org.apache.nifi/nifi-nar-maven-plugin). - -Include this [nifi-nar-maven-plugin](https://mvnrepository.com/artifact/org.apache.nifi/nifi-nar-maven-plugin) in your maven project for your connector as below. - -```xml - - - - org.apache.nifi - nifi-nar-maven-plugin - 1.2.0 - - - -``` - -You must also create a `resources/META-INF/services/pulsar-io.yaml` file with the following contents: - -```yaml - -name: connector name -description: connector description -sourceClass: fully qualified class name (only if source connector) -sinkClass: fully qualified class name (only if sink connector) - -``` - -For Gradle users, there is a [Gradle Nar plugin available on the Gradle Plugin Portal](https://plugins.gradle.org/plugin/io.github.lhotari.gradle-nar-plugin). - -:::tip - -For more information about an **how to use NAR for Pulsar connectors**, see {@inject: github:TwitterFirehose:/pulsar-io/twitter/pom.xml}. - -::: - -### Uber JAR - -An alternative approach is to create an **uber JAR** that contains all of the connector's JAR files -and other resource files. No directory internal structure is necessary. - -You can use [maven-shade-plugin](https://maven.apache.org/plugins/maven-shade-plugin/examples/includes-excludes.html) to create a uber JAR as below: - -```xml - - - org.apache.maven.plugins - maven-shade-plugin - 3.1.1 - - - package - - shade - - - - - *:* - - - - - - - -``` - -## Monitor - -Pulsar connectors enable you to move data in and out of Pulsar easily. It is important to ensure that the running connectors are healthy at any time. You can monitor Pulsar connectors that have been deployed with the following methods: - -- Check the metrics provided by Pulsar. - - Pulsar connectors expose the metrics that can be collected and used for monitoring the health of **Java** connectors. You can check the metrics by following the [monitoring](deploy-monitoring.md) guide. - -- Set and check your customized metrics. - - In addition to the metrics provided by Pulsar, Pulsar allows you to customize metrics for **Java** connectors. Function workers collect user-defined metrics to Prometheus automatically and you can check them in Grafana. - -Here is an example of how to customize metrics for a Java connector. 
-
-````mdx-code-block
-<Tabs defaultValue="Java" values={[{"label":"Java","value":"Java"}]}>
-<TabItem value="Java">
-
-```
-
-public class TestMetricSink implements Sink<String> {
-
-    @Override
-    public void open(Map<String, Object> config, SinkContext sinkContext) throws Exception {
-        sinkContext.recordMetric("foo", 1);
-    }
-
-    @Override
-    public void write(Record<String> record) throws Exception {
-
-    }
-
-    @Override
-    public void close() throws Exception {
-
-    }
-}
-
-```
-
-</TabItem>
-</Tabs>
-````
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/io-dynamodb-source.md b/site2/website/versioned_docs/version-2.9.3-deprecated/io-dynamodb-source.md
deleted file mode 100644
index ce585786eb0428..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/io-dynamodb-source.md
+++ /dev/null
@@ -1,80 +0,0 @@
----
-id: io-dynamodb-source
-title: AWS DynamoDB source connector
-sidebar_label: "AWS DynamoDB source connector"
-original_id: io-dynamodb-source
----
-
-The DynamoDB source connector pulls data from DynamoDB table streams and persists data into Pulsar.
-
-This connector uses the [DynamoDB Streams Kinesis Adapter](https://github.com/awslabs/dynamodb-streams-kinesis-adapter),
-which uses the [Kinesis Consumer Library](https://github.com/awslabs/amazon-kinesis-client) (KCL) to do the actual
-consuming of messages. The KCL uses DynamoDB to track state for consumers and requires CloudWatch access to log metrics.
-
-## Configuration
-
-The configuration of the DynamoDB source connector has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-`initialPositionInStream`|InitialPositionInStream|false|LATEST|The position where the connector starts from. <br/><br/>Below are the available options: <br/><br/><li>`AT_TIMESTAMP`: start from the record at or after the specified timestamp. <br/><br/></li><li>`LATEST`: start after the most recent data record. <br/><br/></li><li>`TRIM_HORIZON`: start from the oldest available data record. </li>
-`startAtTime`|Date|false|" " (empty string)|If `initialPositionInStream` is set to `AT_TIMESTAMP`, it specifies the point in time to start consumption.
-`applicationName`|String|false|Pulsar IO connector|The name of the KCL application. Must be unique, as it is used to define the table name for the DynamoDB table used for state tracking. <br/><br/>By default, the application name is included in the user agent string used to make AWS requests. This can assist with troubleshooting, for example, distinguishing requests made by separate connector instances.
-`checkpointInterval`|long|false|60000|The frequency of the KCL checkpoint in milliseconds.
-`backoffTime`|long|false|3000|The amount of time (in milliseconds) to delay between requests when the connector encounters a throttling exception from AWS Kinesis.
-`numRetries`|int|false|3|The number of re-attempts when the connector encounters an exception while trying to set a checkpoint.
-`receiveQueueSize`|int|false|1000|The maximum number of AWS records that can be buffered inside the connector. <br/><br/>Once the `receiveQueueSize` is reached, the connector does not consume any messages from Kinesis until some messages in the queue are successfully consumed.
-`dynamoEndpoint`|String|false|" " (empty string)|The DynamoDB end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
-`cloudwatchEndpoint`|String|false|" " (empty string)|The CloudWatch end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
-`awsEndpoint`|String|false|" " (empty string)|The DynamoDB Streams end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
-`awsRegion`|String|false|" " (empty string)|The AWS region. <br/><br/>**Example**<br/>us-west-1, us-west-2
-`awsDynamodbStreamArn`|String|true|" " (empty string)|The DynamoDB stream ARN.
-`awsCredentialPluginName`|String|false|" " (empty string)|The fully-qualified class name of the implementation of {@inject: github:AwsCredentialProviderPlugin:/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java}. <br/><br/>`awsCredentialProviderPlugin` has the following built-in plugins: <br/><br/><li>`org.apache.pulsar.io.kinesis.AwsDefaultProviderChainPlugin`: <br/>this plugin uses the default AWS provider chain. <br/>For more information, see [using the default credential provider chain](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html#credentials-default). <br/><br/></li><li>`org.apache.pulsar.io.kinesis.STSAssumeRoleProviderPlugin`: <br/>this plugin takes a configuration via the `awsCredentialPluginParam` that describes a role to assume when running the KCL. <br/>**JSON configuration example**<br/>`{"roleArn": "arn...", "roleSessionName": "name"}` <br/><br/>`awsCredentialPluginName` is a factory class which creates an AWSCredentialsProvider that is used by the Kinesis sink. <br/><br/>If `awsCredentialPluginName` is set to empty, the Kinesis sink creates a default AWSCredentialsProvider which accepts a JSON map of credentials in `awsCredentialPluginParam`.</li>
-`awsCredentialPluginParam`|String |false|" " (empty string)|The JSON parameter to initialize `awsCredentialsProviderPlugin`.
-
-### Example
-
-Before using the DynamoDB source connector, you need to create a configuration file through one of the following methods.
-
-* JSON
-
-   ```json
-
-   {
-      "awsEndpoint": "https://some.endpoint.aws",
-      "awsRegion": "us-east-1",
-      "awsDynamodbStreamArn": "arn:aws:dynamodb:us-west-2:111122223333:table/TestTable/stream/2015-05-11T21:21:33.291",
-      "awsCredentialPluginParam": "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}",
-      "applicationName": "My test application",
-      "checkpointInterval": "30000",
-      "backoffTime": "4000",
-      "numRetries": "3",
-      "receiveQueueSize": 2000,
-      "initialPositionInStream": "TRIM_HORIZON",
-      "startAtTime": "2019-03-05T19:28:58.000Z"
-   }
-
-   ```
-
-* YAML
-
-   ```yaml
-
-   configs:
-      awsEndpoint: "https://some.endpoint.aws"
-      awsRegion: "us-east-1"
-      awsDynamodbStreamArn: "arn:aws:dynamodb:us-west-2:111122223333:table/TestTable/stream/2015-05-11T21:21:33.291"
-      awsCredentialPluginParam: "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}"
-      applicationName: "My test application"
-      checkpointInterval: 30000
-      backoffTime: 4000
-      numRetries: 3
-      receiveQueueSize: 2000
-      initialPositionInStream: "TRIM_HORIZON"
-      startAtTime: "2019-03-05T19:28:58.000Z"
-
-   ```
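-
-If the built-in credential plugins do not fit your environment, you can supply your own by implementing the `AwsCredentialProviderPlugin` interface referenced in the table above and setting `awsCredentialPluginName` to its fully-qualified class name. The following sketch is hypothetical and reads keys from illustrative environment variables; the method names are taken from the linked interface, so verify them against the source for your Pulsar version.
-
-```java
-
-import java.io.IOException;
-
-import com.amazonaws.auth.AWSCredentialsProvider;
-import com.amazonaws.auth.AWSStaticCredentialsProvider;
-import com.amazonaws.auth.BasicAWSCredentials;
-import org.apache.pulsar.io.aws.AwsCredentialProviderPlugin;
-
-public class EnvCredentialPlugin implements AwsCredentialProviderPlugin {
-
-    @Override
-    public void init(String param) {
-        // `param` receives the raw awsCredentialPluginParam string;
-        // this sketch ignores it and reads the environment instead.
-    }
-
-    @Override
-    public AWSCredentialsProvider getCredentialProvider() {
-        return new AWSStaticCredentialsProvider(
-                new BasicAWSCredentials(
-                        System.getenv("MY_AWS_ACCESS_KEY"),   // hypothetical variable names
-                        System.getenv("MY_AWS_SECRET_KEY")));
-    }
-
-    @Override
-    public void close() throws IOException {
-        // nothing to release
-    }
-}
-
-```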
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/io-elasticsearch-sink.md b/site2/website/versioned_docs/version-2.9.3-deprecated/io-elasticsearch-sink.md
deleted file mode 100644
index b5757b3094a9ac..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/io-elasticsearch-sink.md
+++ /dev/null
@@ -1,242 +0,0 @@
----
-id: io-elasticsearch-sink
-title: Elasticsearch sink connector
-sidebar_label: "Elasticsearch sink connector"
-original_id: io-elasticsearch-sink
----
-
-The Elasticsearch sink connector pulls messages from Pulsar topics and persists the messages to indexes.
-
-## Feature
-
-### Handle data
-
-Since Pulsar 2.9.0, the Elasticsearch sink connector has the following ways of
-working. You can choose one of them.
-
-Name | Description
----|---|
-Raw processing | The sink reads from topics and passes the raw content to Elasticsearch. <br/><br/>This is the **default** behavior. <br/><br/>Raw processing was already available **in Pulsar 2.8.x**.
-Schema aware | The sink uses the schema and handles AVRO, JSON, and KeyValue schema types while mapping the content to the Elasticsearch document. <br/><br/>If you set `schemaEnable` to `true`, the sink interprets the contents of the message and you can define a **primary key** that is in turn used as the special `_id` field on Elasticsearch. <br/><br/>This allows you to perform `UPDATE`, `INSERT`, and `DELETE` operations to Elasticsearch driven by the logical primary key of the message. <br/><br/>This is very useful in a typical Change Data Capture scenario in which you follow the changes on your database, write them to Pulsar (using the Debezium adapter for instance), and then you write to Elasticsearch. <br/><br/>You configure the mapping of the primary key using the `primaryFields` configuration entry. <br/><br/>The `DELETE` operation can be performed when the primary key is not empty and the remaining value is empty. Use the `nullValueAction` to configure this behaviour. The default configuration simply ignores such empty values.
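-
-For example, with `schemaEnable=true` and `primaryFields=id`, a producer can drive document upserts from Java. The sketch below is illustrative: the `User` POJO, topic name, and service URL are assumptions rather than part of the connector. To delete a document, the message must additionally carry the primary key while the value is empty, handled according to `nullValueAction` as described above.
-
-```java
-
-import org.apache.pulsar.client.api.Producer;
-import org.apache.pulsar.client.api.PulsarClient;
-import org.apache.pulsar.client.api.Schema;
-
-public class UserIndexer {
-
-    public static class User {
-        public long id;     // matches primaryFields=id, so it becomes the document _id
-        public String name;
-
-        public User() {}
-        public User(long id, String name) { this.id = id; this.name = name; }
-    }
-
-    public static void main(String[] args) throws Exception {
-        PulsarClient client = PulsarClient.builder()
-                .serviceUrl("pulsar://localhost:6650")
-                .build();
-        Producer<User> producer = client.newProducer(Schema.JSON(User.class))
-                .topic("elasticsearch_test") // illustrative topic name
-                .create();
-
-        producer.send(new User(1L, "alice"));       // indexed with _id "1"
-        producer.send(new User(1L, "alice smith")); // same _id, so the document is updated
-
-        producer.close();
-        client.close();
-    }
-}
-
-```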

-
-### Map multiple indexes
-
-Since Pulsar 2.9.0, the `indexName` property is no longer required. If you omit it, the sink writes to an index named after the Pulsar topic.
-
-### Enable bulk writes
-
-Since Pulsar 2.9.0, you can use bulk writes by setting the `bulkEnabled` property to `true`.
-
-### Enable secure connections via TLS
-
-Since Pulsar 2.9.0, you can enable secure connections with TLS.
-
-## Configuration
-
-The configuration of the Elasticsearch sink connector has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `elasticSearchUrl` | String| true |" " (empty string)| The URL of the Elasticsearch cluster to which the connector connects. |
-| `indexName` | String| true |" " (empty string)| The index name to which the connector writes messages. |
-| `schemaEnable` | Boolean | false | false | Turn on the Schema Aware mode. |
-| `createIndexIfNeeded` | Boolean | false | false | Manage index if missing. |
-| `maxRetries` | Integer | false | 1 | The maximum number of retries for Elasticsearch requests. Use -1 to disable it. |
-| `retryBackoffInMs` | Integer | false | 100 | The base time to wait when retrying an Elasticsearch request (in milliseconds). |
-| `maxRetryTimeInSec` | Integer| false | 86400 | The maximum retry time interval in seconds for retrying an Elasticsearch request. |
-| `bulkEnabled` | Boolean | false | false | Enable the Elasticsearch bulk processor to flush write requests based on the number or size of requests, or after a given period. |
-| `bulkActions` | Integer | false | 1000 | The maximum number of actions per Elasticsearch bulk request. Use -1 to disable it. |
-| `bulkSizeInMb` | Integer | false |5 | The maximum size in megabytes of Elasticsearch bulk requests. Use -1 to disable it. |
-| `bulkConcurrentRequests` | Integer | false | 0 | The maximum number of in-flight Elasticsearch bulk requests. The default 0 allows the execution of a single request. A value of 1 means 1 concurrent request is allowed to be executed while accumulating new bulk requests. |
-| `bulkFlushIntervalInMs` | Integer | false | -1 | The maximum period of time to wait for flushing pending writes when bulk writes are enabled. Default is -1, meaning not set. |
-| `compressionEnabled` | Boolean | false |false | Enable Elasticsearch request compression. |
-| `connectTimeoutInMs` | Integer | false |5000 | The Elasticsearch client connection timeout in milliseconds. |
-| `connectionRequestTimeoutInMs` | Integer | false |1000 | The time in milliseconds for getting a connection from the Elasticsearch connection pool. |
-| `connectionIdleTimeoutInMs` | Integer | false |5 | Idle connection timeout to prevent a read timeout. |
-| `keyIgnore` | Boolean | false |true | Whether to ignore the record key to build the Elasticsearch document `_id`. If `primaryFields` is defined, the connector extracts the primary fields from the payload to build the document `_id`. If no `primaryFields` are provided, Elasticsearch auto-generates a random document `_id`. |
-| `primaryFields` | String | false | "id" | The comma-separated ordered list of field names used to build the Elasticsearch document `_id` from the record value. If this list is a singleton, the field is converted as a string. If this list has 2 or more fields, the generated `_id` is a string representation of a JSON array of the field values. |
-| `nullValueAction` | enum (IGNORE,DELETE,FAIL) | false | IGNORE | How to handle records with null values. Possible options are IGNORE, DELETE, or FAIL. Default is to IGNORE the message. |
-| `malformedDocAction` | enum (IGNORE,WARN,FAIL) | false | FAIL | How to handle documents rejected by Elasticsearch due to some malformation. Possible options are IGNORE, WARN, or FAIL. Default is to FAIL the Elasticsearch document. |
-| `stripNulls` | Boolean | false |true | If stripNulls is false, the Elasticsearch _source includes 'null' for empty fields (for example {"foo": null}); otherwise null fields are stripped. |
-| `socketTimeoutInMs` | Integer | false |60000 | The socket timeout in milliseconds waiting to read the Elasticsearch response. |
-| `typeName` | String | false | "_doc" | The type name to which the connector writes messages. <br/><br/>The value should be set explicitly to a valid type name other than "_doc" for Elasticsearch versions before 6.2, and left to the default otherwise. |
-| `indexNumberOfShards` | int| false |1| The number of shards of the index. |
-| `indexNumberOfReplicas` | int| false |1 | The number of replicas of the index. |
-| `username` | String| false |" " (empty string)| The username used by the connector to connect to the Elasticsearch cluster. <br/><br/>If `username` is set, then `password` should also be provided. |
-| `password` | String| false | " " (empty string)|The password used by the connector to connect to the Elasticsearch cluster. <br/><br/>If `username` is set, then `password` should also be provided. |
-| `ssl` | ElasticSearchSslConfig | false | | Configuration for TLS encrypted communication |
-
-### Definition of ElasticSearchSslConfig structure:
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `enabled` | Boolean| false | false | Enable SSL/TLS. |
-| `hostnameVerification` | Boolean| false | true | Whether or not to validate node hostnames when using SSL. |
-| `truststorePath` | String| false |" " (empty string)| The path to the truststore file. |
-| `truststorePassword` | String| false |" " (empty string)| Truststore password. |
-| `keystorePath` | String| false |" " (empty string)| The path to the keystore file. |
-| `keystorePassword` | String| false |" " (empty string)| Keystore password. |
-| `cipherSuites` | String| false |" " (empty string)| SSL/TLS cipher suites. |
-| `protocols` | String| false |"TLSv1.2" | Comma separated list of enabled SSL/TLS protocols. |
-
-## Example
-
-Before using the Elasticsearch sink connector, you need to create a configuration file through one of the following methods.
-
-### Configuration
-
-#### For Elasticsearch After 6.2
-
-* JSON
-
-   ```json
-
-   {
-      "elasticSearchUrl": "http://localhost:9200",
-      "indexName": "my_index",
-      "username": "scooby",
-      "password": "doobie"
-   }
-
-   ```
-
-* YAML
-
-   ```yaml
-
-   configs:
-      elasticSearchUrl: "http://localhost:9200"
-      indexName: "my_index"
-      username: "scooby"
-      password: "doobie"
-
-   ```
-
-#### For Elasticsearch Before 6.2
-
-* JSON
-
-   ```json
-
-   {
-      "elasticSearchUrl": "http://localhost:9200",
-      "indexName": "my_index",
-      "typeName": "doc",
-      "username": "scooby",
-      "password": "doobie"
-   }
-
-   ```
-
-* YAML
-
-   ```yaml
-
-   configs:
-      elasticSearchUrl: "http://localhost:9200"
-      indexName: "my_index"
-      typeName: "doc"
-      username: "scooby"
-      password: "doobie"
-
-   ```
-
-### Usage
-
-1. Start a single node Elasticsearch cluster.
-
-   ```bash
-
-   $ docker run -p 9200:9200 -p 9300:9300 \
-     -e "discovery.type=single-node" \
-     docker.elastic.co/elasticsearch/elasticsearch:7.13.3
-
-   ```
-
-2. Start a Pulsar service locally in standalone mode.
-
-   ```bash
-
-   $ bin/pulsar standalone
-
-   ```
-
-   Make sure the NAR file is available at `connectors/pulsar-io-elastic-search-@pulsar:version@.nar`.
-
-3. Start the Pulsar Elasticsearch connector in local run mode using one of the following methods.
-
-   * Use the **JSON** configuration as shown previously.
-
-     ```bash
-
-     $ bin/pulsar-admin sinks localrun \
-         --archive connectors/pulsar-io-elastic-search-@pulsar:version@.nar \
-         --tenant public \
-         --namespace default \
-         --name elasticsearch-test-sink \
-         --sink-config '{"elasticSearchUrl":"http://localhost:9200","indexName": "my_index","username": "scooby","password": "doobie"}' \
-         --inputs elasticsearch_test
-
-     ```
-
-   * Use the **YAML** configuration file as shown previously.
-
-     ```bash
-
-     $ bin/pulsar-admin sinks localrun \
-         --archive connectors/pulsar-io-elastic-search-@pulsar:version@.nar \
-         --tenant public \
-         --namespace default \
-         --name elasticsearch-test-sink \
-         --sink-config-file elasticsearch-sink.yml \
-         --inputs elasticsearch_test
-
-     ```
-
-4. Publish records to the topic.
-
-   ```bash
-
-   $ bin/pulsar-client produce elasticsearch_test --messages "{\"a\":1}"
-
-   ```
-
-5. Check documents in Elasticsearch.
-
-   * Refresh the index.
-
-     ```bash
-
-     $ curl -s http://localhost:9200/my_index/_refresh
-
-     ```
-
-   * Search documents.
-
-     ```bash
-
-     $ curl -s http://localhost:9200/my_index/_search
-
-     ```
-
-   You can see that the record published earlier has been successfully written into Elasticsearch.
-
-   ```json
-
-   {"took":2,"timed_out":false,"_shards":{"total":1,"successful":1,"skipped":0,"failed":0},"hits":{"total":{"value":1,"relation":"eq"},"max_score":1.0,"hits":[{"_index":"my_index","_type":"_doc","_id":"FSxemm8BLjG_iC0EeTYJ","_score":1.0,"_source":{"a":1}}]}}
-
-   ```
-
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/io-file-source.md b/site2/website/versioned_docs/version-2.9.3-deprecated/io-file-source.md
deleted file mode 100644
index e9d710cce65e83..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/io-file-source.md
+++ /dev/null
@@ -1,160 +0,0 @@
----
-id: io-file-source
-title: File source connector
-sidebar_label: "File source connector"
-original_id: io-file-source
----
-
-The File source connector pulls messages from files in directories and persists the messages to Pulsar topics.
-
-## Configuration
-
-The configuration of the File source connector has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `inputDirectory` | String|true | No default value|The input directory to pull files. |
-| `recurse` | Boolean|false | true | Whether to pull files from subdirectories or not.|
-| `keepFile` |Boolean|false | false | If set to true, the file is not deleted after it is processed, which means the file can be picked up continually. |
-| `fileFilter` | String|false| [^\\.].* | The file whose name matches the given regular expression is picked up. |
-| `pathFilter` | String |false | NULL | If `recurse` is set to true, the subdirectory whose path matches the given regular expression is scanned. |
-| `minimumFileAge` | Integer|false | 0 | The minimum age that a file can be processed. <br/><br/>Any file younger than `minimumFileAge` (according to the last modification date) is ignored. |
-| `maximumFileAge` | Long|false |Long.MAX_VALUE | The maximum age that a file can be processed. <br/><br/>Any file older than `maximumFileAge` (according to the last modification date) is ignored. |
-| `minimumSize` |Integer| false |1 | The minimum size (in bytes) that a file can be processed. |
-| `maximumSize` | Double|false |Double.MAX_VALUE| The maximum size (in bytes) that a file can be processed. |
-| `ignoreHiddenFiles` |Boolean| false | true| Whether the hidden files should be ignored or not. |
-| `pollingInterval`|Long | false | 10000L | Indicates how long to wait before performing a directory listing. |
-| `numWorkers` | Integer | false | 1 | The number of worker threads that process files. <br/><br/>This allows you to process a larger number of files concurrently. <br/><br/>However, setting this to a value greater than 1 makes the data from multiple files mixed in the target topic. |
-
-### Example
-
-Before using the File source connector, you need to create a configuration file through one of the following methods.
-
-* JSON
-
-   ```json
-
-   {
-      "inputDirectory": "/Users/david",
-      "recurse": true,
-      "keepFile": true,
-      "fileFilter": "[^\\.].*",
-      "pathFilter": "*",
-      "minimumFileAge": 0,
-      "maximumFileAge": 9999999999,
-      "minimumSize": 1,
-      "maximumSize": 5000000,
-      "ignoreHiddenFiles": true,
-      "pollingInterval": 5000,
-      "numWorkers": 1
-   }
-
-   ```
-
-* YAML
-
-   ```yaml
-
-   configs:
-      inputDirectory: "/Users/david"
-      recurse: true
-      keepFile: true
-      fileFilter: "[^\\.].*"
-      pathFilter: "*"
-      minimumFileAge: 0
-      maximumFileAge: 9999999999
-      minimumSize: 1
-      maximumSize: 5000000
-      ignoreHiddenFiles: true
-      pollingInterval: 5000
-      numWorkers: 1
-
-   ```
-
-## Usage
-
-Here is an example of using the File source connector.
-
-1. Pull a Pulsar image.
-
-   ```bash
-
-   $ docker pull apachepulsar/pulsar:{version}
-
-   ```
-
-2. Start Pulsar standalone.
-
-   ```bash
-
-   $ docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-standalone apachepulsar/pulsar:{version} bin/pulsar standalone
-
-   ```
-
-3. Create a configuration file _file-connector.yaml_.
-
-   ```yaml
-
-   configs:
-      inputDirectory: "/opt"
-
-   ```
-
-4. Copy the configuration file _file-connector.yaml_ to the container.
-
-   ```bash
-
-   $ docker cp connectors/file-connector.yaml pulsar-standalone:/pulsar/
-
-   ```
-
-5. Download the File source connector.
-
-   ```bash
-
-   $ curl -O https://mirrors.tuna.tsinghua.edu.cn/apache/pulsar/pulsar-{version}/connectors/pulsar-io-file-{version}.nar
-
-   ```
-
-6. Start the File source connector.
-
-   ```bash
-
-   $ docker exec -it pulsar-standalone /bin/bash
-
-   $ ./bin/pulsar-admin sources localrun \
-      --archive /pulsar/pulsar-io-file-{version}.nar \
-      --name file-test \
-      --destination-topic-name pulsar-file-test \
-      --source-config-file /pulsar/file-connector.yaml
-
-   ```
-
-7. Start a consumer.
-
-   ```bash
-
-   ./bin/pulsar-client consume -s file-test -n 0 pulsar-file-test
-
-   ```
-
-8. Write the message to the file _test.txt_.
-
-   ```bash
-
-   echo "hello world!" > /opt/test.txt
-
-   ```
-
-   The following information appears on the consumer terminal window.
-
-   ```bash
-
-   ----- got message -----
-   hello world!
-
-   ```
-
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/io-flume-sink.md b/site2/website/versioned_docs/version-2.9.3-deprecated/io-flume-sink.md
deleted file mode 100644
index b2ace53702f8ca..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/io-flume-sink.md
+++ /dev/null
@@ -1,56 +0,0 @@
----
-id: io-flume-sink
-title: Flume sink connector
-sidebar_label: "Flume sink connector"
-original_id: io-flume-sink
----
-
-The Flume sink connector pulls messages from Pulsar topics to logs.
-
-## Configuration
-
-The configuration of the Flume sink connector has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-`name`|String|true|"" (empty string)|The name of the agent.
-`confFile`|String|true|"" (empty string)|The configuration file.
-`noReloadConf`|Boolean|false|false|Whether to reload the configuration file if it is changed.
-`zkConnString`|String|true|"" (empty string)|The ZooKeeper connection.
-`zkBasePath`|String|true|"" (empty string)|The base path in ZooKeeper for agent configuration. - -### Example - -Before using the Flume sink connector, you need to create a configuration file through one of the following methods. - -> For more information about the `sink.conf` in the example below, see [here](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/resources/flume/sink.conf). - -* JSON - - ```json - - { - "name": "a1", - "confFile": "sink.conf", - "noReloadConf": "false", - "zkConnString": "", - "zkBasePath": "" - } - - ``` - -* YAML - - ```yaml - - configs: - name: a1 - confFile: sink.conf - noReloadConf: false - zkConnString: "" - zkBasePath: "" - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/io-flume-source.md b/site2/website/versioned_docs/version-2.9.3-deprecated/io-flume-source.md deleted file mode 100644 index b7fd7edad88111..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/io-flume-source.md +++ /dev/null @@ -1,56 +0,0 @@ ---- -id: io-flume-source -title: Flume source connector -sidebar_label: "Flume source connector" -original_id: io-flume-source ---- - -The Flume source connector pulls messages from logs to Pulsar topics. - -## Configuration - -The configuration of the Flume source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -`name`|String|true|"" (empty string)|The name of the agent. -`confFile`|String|true|"" (empty string)|The configuration file. -`noReloadConf`|Boolean|false|false|Whether to reload configuration file if changed. -`zkConnString`|String|true|"" (empty string)|The ZooKeeper connection. -`zkBasePath`|String|true|"" (empty string)|The base path in ZooKeeper for agent configuration. - -### Example - -Before using the Flume source connector, you need to create a configuration file through one of the following methods. - -> For more information about the `source.conf` in the example below, see [here](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/resources/flume/source.conf). - -* JSON - - ```json - - { - "name": "a1", - "confFile": "source.conf", - "noReloadConf": "false", - "zkConnString": "", - "zkBasePath": "" - } - - ``` - -* YAML - - ```yaml - - configs: - name: a1 - confFile: source.conf - noReloadConf: false - zkConnString: "" - zkBasePath: "" - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/io-hbase-sink.md b/site2/website/versioned_docs/version-2.9.3-deprecated/io-hbase-sink.md deleted file mode 100644 index 1737b00fa26805..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/io-hbase-sink.md +++ /dev/null @@ -1,67 +0,0 @@ ---- -id: io-hbase-sink -title: HBase sink connector -sidebar_label: "HBase sink connector" -original_id: io-hbase-sink ---- - -The HBase sink connector pulls the messages from Pulsar topics -and persists the messages to HBase tables - -## Configuration - -The configuration of the HBase sink connector has the following properties. - -### Property - -| Name | Type|Default | Required | Description | -|------|---------|----------|-------------|--- -| `hbaseConfigResources` | String|None | false | HBase system configuration `hbase-site.xml` file. | -| `zookeeperQuorum` | String|None | true | HBase system configuration about `hbase.zookeeper.quorum` value. 
| -| `zookeeperClientPort` | String|2181 | false | HBase system configuration about `hbase.zookeeper.property.clientPort` value. | -| `zookeeperZnodeParent` | String|/hbase | false | HBase system configuration about `zookeeper.znode.parent` value. | -| `tableName` | String|None | true | HBase table, the value is `namespace:tableName`. | -| `rowKeyName` | String|None | true | HBase table rowkey name. | -| `familyName` | String|None | true | HBase table column family name. | -| `qualifierNames` |String| None | true | HBase table column qualifier names. | -| `batchTimeMs` | Long|1000L| false | HBase table operation timeout in milliseconds. | -| `batchSize` | int|200| false | Batch size of updates made to the HBase table. | - -### Example - -Before using the HBase sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "hbaseConfigResources": "hbase-site.xml", - "zookeeperQuorum": "localhost", - "zookeeperClientPort": "2181", - "zookeeperZnodeParent": "/hbase", - "tableName": "pulsar_hbase", - "rowKeyName": "rowKey", - "familyName": "info", - "qualifierNames": ["name", "address", "age"] - } - - ``` - -* YAML - - ```yaml - - configs: - hbaseConfigResources: "hbase-site.xml" - zookeeperQuorum: "localhost" - zookeeperClientPort: "2181" - zookeeperZnodeParent: "/hbase" - tableName: "pulsar_hbase" - rowKeyName: "rowKey" - familyName: "info" - qualifierNames: [ 'name', 'address', 'age'] - - ``` - - \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/io-hdfs2-sink.md b/site2/website/versioned_docs/version-2.9.3-deprecated/io-hdfs2-sink.md deleted file mode 100644 index 4a8527154430d0..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/io-hdfs2-sink.md +++ /dev/null @@ -1,64 +0,0 @@ ---- -id: io-hdfs2-sink -title: HDFS2 sink connector -sidebar_label: "HDFS2 sink connector" -original_id: io-hdfs2-sink ---- - -The HDFS2 sink connector pulls the messages from Pulsar topics -and persists the messages to HDFS files. - -## Configuration - -The configuration of the HDFS2 sink connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `hdfsConfigResources` | String|true| None | A file or a comma-separated list containing the Hadoop file system configuration.<br /><br />

    **Example**
    'core-site.xml'
    'hdfs-site.xml' | -| `directory` | String | true | None|The HDFS directory where files are read from or written to. | -| `encoding` | String |false |None |The character encoding for the files.<br /><br />

    **Example**
    UTF-8
    ASCII | -| `compression` | Compression |false |None |The compression codec used to compress or decompress the files on HDFS.<br /><br />

    Below are the available options:
  1. BZIP2<br />
  2. DEFLATE<br />
  3. GZIP<br />
  4. LZ4<br />
  5. SNAPPY | -| `kerberosUserPrincipal` |String| false| None|The principal account of the Kerberos user used for authentication. | -| `keytab` | String|false|None| The full pathname of the Kerberos keytab file used for authentication. | -| `filenamePrefix` |String| true, if `compression` is set to `None`. | None |The prefix of the files created inside the HDFS directory.<br /><br />

    **Example**
    The value of topicA results in files named topicA-. | -| `fileExtension` | String| true | None | The extension added to the files written to HDFS.<br /><br />

    **Example**
    '.txt'
    '.seq' | -| `separator` | char|false |None |The character used to separate records in a text file.

    If no value is provided, the contents from all records are concatenated together in one continuous byte array. | -| `syncInterval` | long| false |0| The interval between calls to flush data to HDFS disk in milliseconds. | -| `maxPendingRecords` |int| false|Integer.MAX_VALUE | The maximum number of records held in memory before acking.<br /><br />

    Setting this property to 1 ensures that every record is sent to disk before the record is acked.<br /><br />

    Setting this property to a higher value allows buffering records before flushing them to disk. -| `subdirectoryPattern` | String | false | None | A subdirectory associated with the creation time of the sink.<br />
    The pattern is used to format the name of the subdirectory under `directory`.<br /><br />

    See [DateTimeFormatter](https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html) for pattern's syntax. | - -### Example - -Before using the HDFS2 sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "hdfsConfigResources": "core-site.xml", - "directory": "/foo/bar", - "filenamePrefix": "prefix", - "fileExtension": ".log", - "compression": "SNAPPY", - "subdirectoryPattern": "yyyy-MM-dd" - } - - ``` - -* YAML - - ```yaml - - configs: - hdfsConfigResources: "core-site.xml" - directory: "/foo/bar" - filenamePrefix: "prefix" - fileExtension: ".log" - compression: "SNAPPY" - subdirectoryPattern: "yyyy-MM-dd" - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/io-hdfs3-sink.md b/site2/website/versioned_docs/version-2.9.3-deprecated/io-hdfs3-sink.md deleted file mode 100644 index aec065a25db7f4..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/io-hdfs3-sink.md +++ /dev/null @@ -1,59 +0,0 @@ ---- -id: io-hdfs3-sink -title: HDFS3 sink connector -sidebar_label: "HDFS3 sink connector" -original_id: io-hdfs3-sink ---- - -The HDFS3 sink connector pulls the messages from Pulsar topics -and persists the messages to HDFS files. - -## Configuration - -The configuration of the HDFS3 sink connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `hdfsConfigResources` | String|true| None | A file or a comma-separated list containing the Hadoop file system configuration.

    **Example**
    'core-site.xml'
    'hdfs-site.xml' | -| `directory` | String | true | None|The HDFS directory where files are read from or written to. | -| `encoding` | String |false |None |The character encoding for the files.<br /><br />

    **Example**
    UTF-8
    ASCII | -| `compression` | Compression |false |None |The compression codec used to compress or decompress the files on HDFS.<br /><br />

    Below are the available options:
  1. BZIP2<br />
  2. DEFLATE<br />
  3. GZIP<br />
  4. LZ4<br />
  5. SNAPPY | -| `kerberosUserPrincipal` |String| false| None|The principal account of the Kerberos user used for authentication. | -| `keytab` | String|false|None| The full pathname of the Kerberos keytab file used for authentication. | -| `filenamePrefix` |String| false |None |The prefix of the files created inside the HDFS directory.<br /><br />

    **Example**
    The value of topicA results in files named topicA-. | -| `fileExtension` | String| false | None| The extension added to the files written to HDFS.<br /><br />

    **Example**
    '.txt'
    '.seq' | -| `separator` | char|false |None |The character used to separate records in a text file.

    If no value is provided, the contents from all records are concatenated together in one continuous byte array. | -| `syncInterval` | long| false |0| The interval between calls to flush data to HDFS disk in milliseconds. | -| `maxPendingRecords` |int| false|Integer.MAX_VALUE | The maximum number of records held in memory before acking.<br /><br />

    Setting this property to 1 ensures that every record is sent to disk before the record is acked.<br /><br />

    Setting this property to a higher value allows buffering records before flushing them to disk. - -### Example - -Before using the HDFS3 sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "hdfsConfigResources": "core-site.xml", - "directory": "/foo/bar", - "filenamePrefix": "prefix", - "compression": "SNAPPY" - } - - ``` - -* YAML - - ```yaml - - configs: - hdfsConfigResources: "core-site.xml" - directory: "/foo/bar" - filenamePrefix: "prefix" - compression: "SNAPPY" - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/io-influxdb-sink.md b/site2/website/versioned_docs/version-2.9.3-deprecated/io-influxdb-sink.md deleted file mode 100644 index 9382f8c03121cc..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/io-influxdb-sink.md +++ /dev/null @@ -1,119 +0,0 @@ ---- -id: io-influxdb-sink -title: InfluxDB sink connector -sidebar_label: "InfluxDB sink connector" -original_id: io-influxdb-sink ---- - -The InfluxDB sink connector pulls messages from Pulsar topics -and persists the messages to InfluxDB. - -The InfluxDB sink provides different configurations for InfluxDBv1 and v2 respectively. - -## Configuration - -The configuration of the InfluxDB sink connector has the following properties. - -### Property -#### InfluxDBv2 -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `influxdbUrl` |String| true|" " (empty string) | The URL of the InfluxDB instance. | -| `token` | String|true| " " (empty string) |The authentication token used to authenticate to InfluxDB. | -| `organization` | String| true|" " (empty string) | The InfluxDB organization to write to. | -| `bucket` |String| true | " " (empty string)| The InfluxDB bucket to write to. | -| `precision` | String|false| ns | The timestamp precision for writing data to InfluxDB.

    Below are the available options:
  1. ns<br />
  2. us<br />
  3. ms<br />
  4. s | -| `logLevel` | String|false| NONE|The log level for InfluxDB request and response.<br /><br />

    Below are the available options:
  1. NONE<br />
  2. BASIC<br />
  3. HEADERS<br />
  4. FULL | -| `gzipEnable` | boolean|false | false | Whether to enable gzip or not. | -| `batchTimeMs` |long|false| 1000L | The InfluxDB operation time in milliseconds. | -| `batchSize` | int|false|200| The batch size of writing to InfluxDB. | - -#### InfluxDBv1 -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `influxdbUrl` |String| true|" " (empty string) | The URL of the InfluxDB instance. | -| `username` | String|false| " " (empty string) |The username used to authenticate to InfluxDB. | -| `password` | String| false|" " (empty string) | The password used to authenticate to InfluxDB. | -| `database` |String| true | " " (empty string)| The InfluxDB database to which messages are written. | -| `consistencyLevel` | String|false|ONE | The consistency level for writing data to InfluxDB.<br /><br />

    Below are the available options:
  1. ALL<br />
  2. ANY<br />
  3. ONE<br />
  4. QUORUM | -| `logLevel` | String|false| NONE|The log level for InfluxDB request and response.<br /><br />

    Below are the available options:
  1. NONE<br />
  2. BASIC<br />
  3. HEADERS<br />
  4. FULL | -| `retentionPolicy` | String|false| autogen| The retention policy for InfluxDB. | -| `gzipEnable` | boolean|false | false | Whether to enable gzip or not. | -| `batchTimeMs` |long|false| 1000L | The InfluxDB operation time in milliseconds. | -| `batchSize` | int|false|200| The batch size of writing to InfluxDB. | - -### Example -Before using the InfluxDB sink connector, you need to create a configuration file through one of the following methods. -#### InfluxDBv2 -* JSON - - ```json - - { - "influxdbUrl": "http://localhost:9999", - "organization": "example-org", - "bucket": "example-bucket", - "token": "xxxx", - "precision": "ns", - "logLevel": "NONE", - "gzipEnable": false, - "batchTimeMs": 1000, - "batchSize": 100 - } - - ``` - - -* YAML - - ```yaml - - configs: - influxdbUrl: "http://localhost:9999" - organization: "example-org" - bucket: "example-bucket" - token: "xxxx" - precision: "ns" - logLevel: "NONE" - gzipEnable: false - batchTimeMs: 1000 - batchSize: 100 - - ``` - - -#### InfluxDBv1 - -* JSON - - ```json - - { - "influxdbUrl": "http://localhost:8086", - "database": "test_db", - "consistencyLevel": "ONE", - "logLevel": "NONE", - "retentionPolicy": "autogen", - "gzipEnable": false, - "batchTimeMs": 1000, - "batchSize": 100 - } - - ``` - -* YAML - - ```yaml - - configs: - influxdbUrl: "http://localhost:8086" - database: "test_db" - consistencyLevel: "ONE" - logLevel: "NONE" - retentionPolicy: "autogen" - gzipEnable: false - batchTimeMs: 1000 - batchSize: 100 - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/io-jdbc-sink.md b/site2/website/versioned_docs/version-2.9.3-deprecated/io-jdbc-sink.md deleted file mode 100644 index 77dbb61fccd7ed..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/io-jdbc-sink.md +++ /dev/null @@ -1,157 +0,0 @@ ---- -id: io-jdbc-sink -title: JDBC sink connector -sidebar_label: "JDBC sink connector" -original_id: io-jdbc-sink ---- - -The JDBC sink connectors allow pulling messages from Pulsar topics -and persisting the messages to ClickHouse, MariaDB, PostgreSQL, and SQLite. - -> Currently, INSERT, DELETE and UPDATE operations are supported. - -## Configuration - -The configuration of all JDBC sink connectors has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `userName` | String|false | " " (empty string) | The username used to connect to the database specified by `jdbcUrl`.<br /><br />

    **Note: `userName` is case-sensitive.**| -| `password` | String|false | " " (empty string)| The password used to connect to the database specified by `jdbcUrl`.

    **Note: `password` is case-sensitive.**| -| `jdbcUrl` | String|true | " " (empty string) | The JDBC URL of the database to which the connector connects. | -| `tableName` | String|true | " " (empty string) | The name of the table to which the connector writes. | -| `nonKey` | String|false | " " (empty string) | A comma-separated list contains the fields used in updating events. | -| `key` | String|false | " " (empty string) | A comma-separated list contains the fields used in `where` condition of updating and deleting events. | -| `timeoutMs` | int| false|500 | The JDBC operation timeout in milliseconds. | -| `batchSize` | int|false | 200 | The batch size of updates made to the database. | - -### Example for ClickHouse - -* JSON - - ```json - - { - "userName": "clickhouse", - "password": "password", - "jdbcUrl": "jdbc:clickhouse://localhost:8123/pulsar_clickhouse_jdbc_sink", - "tableName": "pulsar_clickhouse_jdbc_sink" - } - - ``` - -* YAML - - ```yaml - - tenant: "public" - namespace: "default" - name: "jdbc-clickhouse-sink" - topicName: "persistent://public/default/jdbc-clickhouse-topic" - sinkType: "jdbc-clickhouse" - configs: - userName: "clickhouse" - password: "password" - jdbcUrl: "jdbc:clickhouse://localhost:8123/pulsar_clickhouse_jdbc_sink" - tableName: "pulsar_clickhouse_jdbc_sink" - - ``` - -### Example for MariaDB - -* JSON - - ```json - - { - "userName": "mariadb", - "password": "password", - "jdbcUrl": "jdbc:mariadb://localhost:3306/pulsar_mariadb_jdbc_sink", - "tableName": "pulsar_mariadb_jdbc_sink" - } - - ``` - -* YAML - - ```yaml - - tenant: "public" - namespace: "default" - name: "jdbc-mariadb-sink" - topicName: "persistent://public/default/jdbc-mariadb-topic" - sinkType: "jdbc-mariadb" - configs: - userName: "mariadb" - password: "password" - jdbcUrl: "jdbc:mariadb://localhost:3306/pulsar_mariadb_jdbc_sink" - tableName: "pulsar_mariadb_jdbc_sink" - - ``` - -### Example for PostgreSQL - -Before using the JDBC PostgreSQL sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "userName": "postgres", - "password": "password", - "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink", - "tableName": "pulsar_postgres_jdbc_sink" - } - - ``` - -* YAML - - ```yaml - - tenant: "public" - namespace: "default" - name: "jdbc-postgres-sink" - topicName: "persistent://public/default/jdbc-postgres-topic" - sinkType: "jdbc-postgres" - configs: - userName: "postgres" - password: "password" - jdbcUrl: "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink" - tableName: "pulsar_postgres_jdbc_sink" - - ``` - -For more information on **how to use this JDBC sink connector**, see [connect Pulsar to PostgreSQL](io-quickstart.md#connect-pulsar-to-postgresql). 
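The JDBC sink writes into an existing table; it does not create the target table for you. The following is a minimal sketch of preparing that table up front. The container name `pulsar-postgres` and the two-column layout are illustrative assumptions here, not something the connector mandates:

```bash

# Open a shell in the (hypothetical) PostgreSQL container and create a table
# whose name matches the `tableName` configured for the sink.
$ docker exec -it pulsar-postgres /bin/bash

$ psql -U postgres postgres -c "CREATE TABLE IF NOT EXISTS pulsar_postgres_jdbc_sink (id serial PRIMARY KEY, name VARCHAR(255) NOT NULL);"

```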
- -### Example for SQLite - -* JSON - - ```json - - { - "jdbcUrl": "jdbc:sqlite:db.sqlite", - "tableName": "pulsar_sqlite_jdbc_sink" - } - - ``` - -* YAML - - ```yaml - - tenant: "public" - namespace: "default" - name: "jdbc-sqlite-sink" - topicName: "persistent://public/default/jdbc-sqlite-topic" - sinkType: "jdbc-sqlite" - configs: - jdbcUrl: "jdbc:sqlite:db.sqlite" - tableName: "pulsar_sqlite_jdbc_sink" - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/io-kafka-sink.md b/site2/website/versioned_docs/version-2.9.3-deprecated/io-kafka-sink.md deleted file mode 100644 index 09dad4ce70bac9..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/io-kafka-sink.md +++ /dev/null @@ -1,72 +0,0 @@ ---- -id: io-kafka-sink -title: Kafka sink connector -sidebar_label: "Kafka sink connector" -original_id: io-kafka-sink ---- - -The Kafka sink connector pulls messages from Pulsar topics and persists the messages -to Kafka topics. - -This guide explains how to configure and use the Kafka sink connector. - -## Configuration - -The configuration of the Kafka sink connector has the following parameters. - -### Property - -| Name | Type| Required | Default | Description -|------|----------|---------|-------------|-------------| -| `bootstrapServers` |String| true | " " (empty string) | A comma-separated list of host and port pairs for establishing the initial connection to the Kafka cluster. | -|`acks`|String|true|" " (empty string) |The number of acknowledgments that the producer requires the leader to receive before a request completes.
    This controls the durability of the sent records. -|`batchSize`|long|false|16384L|The number of bytes a Kafka producer attempts to batch together before sending records to brokers. -|`maxRequestSize`|long|false|1048576L|The maximum size of a Kafka request in bytes. -|`topic`|String|true|" " (empty string) |The Kafka topic that receives messages from Pulsar. -| `keyDeserializationClass` | String|false | org.apache.kafka.common.serialization.StringSerializer | The serializer class for Kafka producers to serialize keys. -| `valueDeserializationClass` | String|false | org.apache.kafka.common.serialization.ByteArraySerializer | The serializer class for Kafka producers to serialize values.<br /><br />

    The serializer is set by a specific implementation of [`KafkaAbstractSink`](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSink.java). -|`producerConfigProperties`|Map|false|" " (empty string)|The producer configuration properties to be passed to producers.

    **Note: other properties specified in the connector configuration file take precedence over this configuration**. - - -### Example - -Before using the Kafka sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "bootstrapServers": "localhost:6667", - "topic": "test", - "acks": "1", - "batchSize": "16384", - "maxRequestSize": "1048576", - "producerConfigProperties": - { - "client.id": "test-pulsar-producer", - "security.protocol": "SASL_PLAINTEXT", - "sasl.mechanism": "GSSAPI", - "sasl.kerberos.service.name": "kafka", - "acks": "all" - } - } - - ``` - -* YAML - - ```yaml - - configs: - bootstrapServers: "localhost:6667" - topic: "test" - acks: "1" - batchSize: "16384" - maxRequestSize: "1048576" - producerConfigProperties: - client.id: "test-pulsar-producer" - security.protocol: "SASL_PLAINTEXT" - sasl.mechanism: "GSSAPI" - sasl.kerberos.service.name: "kafka" - acks: "all" - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/io-kafka-source.md b/site2/website/versioned_docs/version-2.9.3-deprecated/io-kafka-source.md deleted file mode 100644 index 35b04c52b41258..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/io-kafka-source.md +++ /dev/null @@ -1,240 +0,0 @@ ---- -id: io-kafka-source -title: Kafka source connector -sidebar_label: "Kafka source connector" -original_id: io-kafka-source ---- - -The Kafka source connector pulls messages from Kafka topics and persists the messages -to Pulsar topics. - -This guide explains how to configure and use the Kafka source connector. - -## Configuration - -The configuration of the Kafka source connector has the following properties. - -### Property - -| Name | Type| Required | Default | Description -|------|----------|---------|-------------|-------------| -| `bootstrapServers` |String| true | " " (empty string) | A comma-separated list of host and port pairs for establishing the initial connection to the Kafka cluster. | -| `groupId` |String| true | " " (empty string) | A unique string that identifies the group of consumer processes to which this consumer belongs. | -| `fetchMinBytes` | long|false | 1 | The minimum number of bytes expected for each fetch response. | -| `autoCommitEnabled` | boolean |false | true | If set to true, the consumer's offset is periodically committed in the background.<br /><br />

    If the process fails, this committed offset is used as the position from which a new consumer begins. | -| `autoCommitIntervalMs` | long|false | 5000 | The frequency in milliseconds at which the consumer offsets are auto-committed to Kafka if `autoCommitEnabled` is set to true. | -| `heartbeatIntervalMs` | long| false | 3000 | The interval between heartbeats to the consumer when using Kafka's group management facilities.<br /><br />

    **Note: `heartbeatIntervalMs` must be smaller than `sessionTimeoutMs`**.| -| `sessionTimeoutMs` | long|false | 30000 | The timeout used to detect consumer failures when using Kafka's group management facility. | -| `topic` | String|true | " " (empty string)| The Kafka topic that sends messages to Pulsar. | -| `consumerConfigProperties` | Map| false | " " (empty string) | The consumer configuration properties to be passed to consumers.

    **Note: other properties specified in the connector configuration file take precedence over this configuration**. | -| `keyDeserializationClass` | String|false | org.apache.kafka.common.serialization.StringDeserializer | The deserializer class for Kafka consumers to deserialize keys.
    The deserializer is set by a specific implementation of [`KafkaAbstractSource`](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSource.java). -| `valueDeserializationClass` | String|false | org.apache.kafka.common.serialization.ByteArrayDeserializer | The deserializer class for Kafka consumers to deserialize values. -| `autoOffsetReset` | String | false | earliest | The default offset reset policy. | - -### Schema Management - -This Kafka source connector applies the schema to the topic depending on the data type that is present on the Kafka topic. -You can detect the data type from the `keyDeserializationClass` and `valueDeserializationClass` configuration parameters. - -If the `valueDeserializationClass` is `org.apache.kafka.common.serialization.StringDeserializer`, you can set Schema.STRING() as schema type on the Pulsar topic. - -If `valueDeserializationClass` is `io.confluent.kafka.serializers.KafkaAvroDeserializer`, Pulsar downloads the AVRO schema from the Confluent Schema Registry® -and sets it properly on the Pulsar topic. - -In this case, you need to set `schema.registry.url` inside of the `consumerConfigProperties` configuration entry -of the source. - -If `keyDeserializationClass` is not `org.apache.kafka.common.serialization.StringDeserializer`, it means -that you do not have a String as key and the Kafka Source uses the KeyValue schema type with the SEPARATED encoding. - -Pulsar supports AVRO format for keys. - -In this case, you can have a Pulsar topic with the following properties: -- Schema: KeyValue schema with SEPARATED encoding -- Key: the content of key of the Kafka message (base64 encoded) -- Value: the content of value of the Kafka message -- KeySchema: the schema detected from `keyDeserializationClass` -- ValueSchema: the schema detected from `valueDeserializationClass` - -Topic compaction and partition routing use the Pulsar key, that contains the Kafka key, and so they are driven by the same value that you have on Kafka. - -When you consume data from Pulsar topics, you can use the `KeyValue` schema. In this way, you can decode the data properly. -If you want to access the raw key, you can use the `Message#getKeyBytes()` API. - -### Example - -Before using the Kafka source connector, you need to create a configuration file through one of the following methods. - -- JSON - - ```json - - { - "bootstrapServers": "pulsar-kafka:9092", - "groupId": "test-pulsar-io", - "topic": "my-topic", - "sessionTimeoutMs": "10000", - "autoCommitEnabled": false - } - - ``` - -- YAML - - ```yaml - - configs: - bootstrapServers: "pulsar-kafka:9092" - groupId: "test-pulsar-io" - topic: "my-topic" - sessionTimeoutMs: "10000" - autoCommitEnabled: false - - ``` - -## Usage - -You can make the Kafka source connector as a Pulsar built-in connector and use it on a standalone cluster or an on-premises cluster. - -### Standalone cluster - -This example describes how to use the Kafka source connector to feed data from Kafka and write data to Pulsar topics in the standalone mode. - -#### Prerequisites - -- Install [Docker](https://docs.docker.com/get-docker/)(Community Edition). - -#### Steps - -1. Download and start the Confluent Platform. - -For details, see the [documentation](https://docs.confluent.io/platform/current/quickstart/ce-docker-quickstart.html#step-1-download-and-start-cp) to install the Kafka service locally. - -2. Pull a Pulsar image and start Pulsar in standalone mode. 
- - ```bash - - docker pull apachepulsar/pulsar:latest - - docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-kafka-standalone apachepulsar/pulsar:latest bin/pulsar standalone - - ``` - -3. Create a producer file _kafka-producer.py_. - - ```python - - from kafka import KafkaProducer - producer = KafkaProducer(bootstrap_servers='localhost:9092') - future = producer.send('my-topic', b'hello world') - future.get() - - ``` - -4. Create a consumer file _pulsar-client.py_. - - ```python - - import pulsar - - client = pulsar.Client('pulsar://localhost:6650') - consumer = client.subscribe('my-topic', subscription_name='my-aa') - - while True: - msg = consumer.receive() - print msg - print dir(msg) - print("Received message: '%s'" % msg.data()) - consumer.acknowledge(msg) - - client.close() - - ``` - -5. Copy the following files to Pulsar. - - ```bash - - docker cp pulsar-io-kafka.nar pulsar-kafka-standalone:/pulsar - docker cp kafkaSourceConfig.yaml pulsar-kafka-standalone:/pulsar/conf - - ``` - -6. Open a new terminal window and start the Kafka source connector in local run mode. - - ```bash - - docker exec -it pulsar-kafka-standalone /bin/bash - - ./bin/pulsar-admin source localrun \ - --archive ./pulsar-io-kafka.nar \ - --tenant public \ - --namespace default \ - --name kafka \ - --destination-topic-name my-topic \ - --source-config-file ./conf/kafkaSourceConfig.yaml \ - --parallelism 1 - - ``` - -7. Open a new terminal window and run the Kafka producer locally. - - ```bash - - python3 kafka-producer.py - - ``` - -8. Open a new terminal window and run the Pulsar consumer locally. - - ```bash - - python3 pulsar-client.py - - ``` - -The following information appears on the consumer terminal window. - - ```bash - - Received message: 'hello world' - - ``` - -### On-premises cluster - -This example explains how to create a Kafka source connector in an on-premises cluster. - -1. Copy the NAR package of the Kafka connector to the Pulsar connectors directory. - - ``` - - cp pulsar-io-kafka-{{connector:version}}.nar $PULSAR_HOME/connectors/pulsar-io-kafka-{{connector:version}}.nar - - ``` - -2. Reload all [built-in connectors](https://pulsar.apache.org/docs/en/next/io-connectors/). - - ``` - - PULSAR_HOME/bin/pulsar-admin sources reload - - ``` - -3. Check whether the Kafka source connector is available on the list or not. - - ``` - - PULSAR_HOME/bin/pulsar-admin sources available-sources - - ``` - -4. Create a Kafka source connector on a Pulsar cluster using the [`pulsar-admin sources create`](http://pulsar.apache.org/tools/pulsar-admin/2.9.0-SNAPSHOT/#-em-create-em--14) command. - - ``` - - PULSAR_HOME/bin/pulsar-admin sources create \ - --source-config-file - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/io-kinesis-sink.md b/site2/website/versioned_docs/version-2.9.3-deprecated/io-kinesis-sink.md deleted file mode 100644 index 153587dcfc783e..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/io-kinesis-sink.md +++ /dev/null @@ -1,80 +0,0 @@ ---- -id: io-kinesis-sink -title: Kinesis sink connector -sidebar_label: "Kinesis sink connector" -original_id: io-kinesis-sink ---- - -The Kinesis sink connector pulls data from Pulsar and persists data into Amazon Kinesis. - -## Configuration - -The configuration of the Kinesis sink connector has the following property. 
- -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -`messageFormat`|MessageFormat|true|ONLY_RAW_PAYLOAD|The format in which the Kinesis sink converts Pulsar messages before publishing them to Kinesis streams.<br /><br />

    Below are the available options:

  1. `ONLY_RAW_PAYLOAD`: Kinesis sink directly publishes the Pulsar message payload as a message into the configured Kinesis stream.<br /><br />
<br /><br />
  2. `FULL_MESSAGE_IN_JSON`: Kinesis sink creates a JSON payload with the Pulsar message payload, properties, and encryptionCtx, and publishes the JSON payload into the configured Kinesis stream.<br /><br />
<br /><br />
  3. `FULL_MESSAGE_IN_FB`: Kinesis sink creates a flatbuffer-serialized payload with the Pulsar message payload, properties, and encryptionCtx, and publishes the flatbuffer payload into the configured Kinesis stream.<br /><br />
-`retainOrdering`|boolean|false|false|Whether the Pulsar connector retains ordering when moving messages from Pulsar to Kinesis. -`awsEndpoint`|String|false|" " (empty string)|The Kinesis end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). -`awsRegion`|String|false|" " (empty string)|The AWS region.<br /><br />

    **Example**
    us-west-1, us-west-2 -`awsKinesisStreamName`|String|true|" " (empty string)|The Kinesis stream name. -`awsCredentialPluginName`|String|false|" " (empty string)|The fully-qualified class name of implementation of {@inject: github:AwsCredentialProviderPlugin:/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java}.

    It is a factory class which creates an AWSCredentialsProvider that is used by the Kinesis sink.<br /><br />

    If it is empty, the Kinesis sink creates a default AWSCredentialsProvider which accepts json-map of credentials in `awsCredentialPluginParam`. -`awsCredentialPluginParam`|String |false|" " (empty string)|The JSON parameter to initialize `awsCredentialsProviderPlugin`. - -### Built-in plugins - -The following are built-in `AwsCredentialProviderPlugin` plugins: - -* `org.apache.pulsar.io.aws.AwsDefaultProviderChainPlugin` - - This plugin takes no configuration, it uses the default AWS provider chain. - - For more information, see [AWS documentation](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html#credentials-default). - -* `org.apache.pulsar.io.aws.STSAssumeRoleProviderPlugin` - - This plugin takes a configuration (via the `awsCredentialPluginParam`) that describes a role to assume when running the KCL. - - This configuration takes the form of a small json document like: - - ```json - - {"roleArn": "arn...", "roleSessionName": "name"} - - ``` - -### Example - -Before using the Kinesis sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "awsEndpoint": "some.endpoint.aws", - "awsRegion": "us-east-1", - "awsKinesisStreamName": "my-stream", - "awsCredentialPluginParam": "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}", - "messageFormat": "ONLY_RAW_PAYLOAD", - "retainOrdering": "true" - } - - ``` - -* YAML - - ```yaml - - configs: - awsEndpoint: "some.endpoint.aws" - awsRegion: "us-east-1" - awsKinesisStreamName: "my-stream" - awsCredentialPluginParam: "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}" - messageFormat: "ONLY_RAW_PAYLOAD" - retainOrdering: "true" - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/io-kinesis-source.md b/site2/website/versioned_docs/version-2.9.3-deprecated/io-kinesis-source.md deleted file mode 100644 index 0d07eefc3703b3..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/io-kinesis-source.md +++ /dev/null @@ -1,81 +0,0 @@ ---- -id: io-kinesis-source -title: Kinesis source connector -sidebar_label: "Kinesis source connector" -original_id: io-kinesis-source ---- - -The Kinesis source connector pulls data from Amazon Kinesis and persists data into Pulsar. - -This connector uses the [Kinesis Consumer Library](https://github.com/awslabs/amazon-kinesis-client) (KCL) to do the actual consuming of messages. The KCL uses DynamoDB to track state for consumers. - -> Note: currently, the Kinesis source connector only supports raw messages. If you use KMS encrypted messages, the encrypted messages are sent to downstream. This connector will support decrypting messages in the future release. - - -## Configuration - -The configuration of the Kinesis source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -`initialPositionInStream`|InitialPositionInStream|false|LATEST|The position where the connector starts from.

    Below are the available options:

  1. `AT_TIMESTAMP`: start from the record at or after the specified timestamp.<br /><br />
<br /><br />
  2. `LATEST`: start after the most recent data record.<br /><br />
<br /><br />
  3. `TRIM_HORIZON`: start from the oldest available data record.<br /><br />
-`startAtTime`|Date|false|" " (empty string)|If `initialPositionInStream` is set to `AT_TIMESTAMP`, this specifies the point in time to start consumption. -`applicationName`|String|false|Pulsar IO connector|The name of the Amazon Kinesis application.<br /><br />

    By default, the application name is included in the user agent string used to make AWS requests. This can assist with troubleshooting, for example, by distinguishing requests made by separate connector instances. -`checkpointInterval`|long|false|60000|The frequency of the Kinesis stream checkpoint in milliseconds. -`backoffTime`|long|false|3000|The amount of time in milliseconds to delay between requests when the connector encounters a throttling exception from AWS Kinesis. -`numRetries`|int|false|3|The number of re-attempts when the connector encounters an exception while trying to set a checkpoint. -`receiveQueueSize`|int|false|1000|The maximum number of AWS records that can be buffered inside the connector.<br /><br />

    Once the `receiveQueueSize` is reached, the connector does not consume any messages from Kinesis until some messages in the queue are successfully consumed. -`dynamoEndpoint`|String|false|" " (empty string)|The Dynamo end-point URL, which can be found at [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). -`cloudwatchEndpoint`|String|false|" " (empty string)|The Cloudwatch end-point URL, which can be found at [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). -`useEnhancedFanOut`|boolean|false|true|If set to true, it uses Kinesis enhanced fan-out.

    If set to false, it uses polling. -`awsEndpoint`|String|false|" " (empty string)|The Kinesis end-point URL, which can be found at [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). -`awsRegion`|String|false|" " (empty string)|The AWS region.

    **Example**
    us-west-1, us-west-2 -`awsKinesisStreamName`|String|true|" " (empty string)|The Kinesis stream name. -`awsCredentialPluginName`|String|false|" " (empty string)|The fully-qualified class name of implementation of {@inject: github:AwsCredentialProviderPlugin:/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java}.

    `awsCredentialProviderPlugin` has the following built-in plugins:<br /><br />

  1. `org.apache.pulsar.io.kinesis.AwsDefaultProviderChainPlugin`:<br />
    this plugin uses the default AWS provider chain.
    For more information, see [using the default credential provider chain](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html#credentials-default).

  2. `org.apache.pulsar.io.kinesis.STSAssumeRoleProviderPlugin`:<br />
    this plugin takes a configuration via the `awsCredentialPluginParam` that describes a role to assume when running the KCL.
    **JSON configuration example**
    `{"roleArn": "arn...", "roleSessionName": "name"}`

    `awsCredentialPluginName` is a factory class that creates an AWSCredentialsProvider used by the Kinesis source.<br /><br />
<br /><br />
    If `awsCredentialPluginName` is set to empty, the Kinesis source creates a default AWSCredentialsProvider which accepts a JSON map of credentials in `awsCredentialPluginParam`.<br />
-`awsCredentialPluginParam`|String |false|" " (empty string)|The JSON parameter to initialize `awsCredentialsProviderPlugin`. - -### Example - -Before using the Kinesis source connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "awsEndpoint": "https://some.endpoint.aws", - "awsRegion": "us-east-1", - "awsKinesisStreamName": "my-stream", - "awsCredentialPluginParam": "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}", - "applicationName": "My test application", - "checkpointInterval": "30000", - "backoffTime": "4000", - "numRetries": "3", - "receiveQueueSize": 2000, - "initialPositionInStream": "TRIM_HORIZON", - "startAtTime": "2019-03-05T19:28:58.000Z" - } - - ``` - -* YAML - - ```yaml - - configs: - awsEndpoint: "https://some.endpoint.aws" - awsRegion: "us-east-1" - awsKinesisStreamName: "my-stream" - awsCredentialPluginParam: "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}" - applicationName: "My test application" - checkpointInterval: 30000 - backoffTime: 4000 - numRetries: 3 - receiveQueueSize: 2000 - initialPositionInStream: "TRIM_HORIZON" - startAtTime: "2019-03-05T19:28:58.000Z" - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/io-mongo-sink.md b/site2/website/versioned_docs/version-2.9.3-deprecated/io-mongo-sink.md deleted file mode 100644 index 30c15a6c280938..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/io-mongo-sink.md +++ /dev/null @@ -1,56 +0,0 @@ ---- -id: io-mongo-sink -title: MongoDB sink connector -sidebar_label: "MongoDB sink connector" -original_id: io-mongo-sink ---- - -The MongoDB sink connector pulls messages from Pulsar topics -and persists the messages to collections. - -## Configuration - -The configuration of the MongoDB sink connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `mongoUri` | String| true| " " (empty string) | The MongoDB URI to which the connector connects.<br /><br />

    For more information, see [connection string URI format](https://docs.mongodb.com/manual/reference/connection-string/). | -| `database` | String| true| " " (empty string)| The database name to which the collection belongs. | -| `collection` | String| true| " " (empty string)| The collection name to which the connector writes messages. | -| `batchSize` | int|false|100 | The batch size of writing messages to collections. | -| `batchTimeMs` |long|false|1000| The batch operation interval in milliseconds. | - - -### Example - -Before using the Mongo sink connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "mongoUri": "mongodb://localhost:27017", - "database": "pulsar", - "collection": "messages", - "batchSize": "2", - "batchTimeMs": "500" - } - - ``` - -* YAML - - ```yaml - - configs: - mongoUri: "mongodb://localhost:27017" - database: "pulsar" - collection: "messages" - batchSize: 2 - batchTimeMs: 500 - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/io-netty-source.md b/site2/website/versioned_docs/version-2.9.3-deprecated/io-netty-source.md deleted file mode 100644 index e1ec8d863115b3..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/io-netty-source.md +++ /dev/null @@ -1,241 +0,0 @@ ---- -id: io-netty-source -title: Netty source connector -sidebar_label: "Netty source connector" -original_id: io-netty-source ---- - -The Netty source connector opens a port that accepts incoming data via the configured network protocol -and publishes it to user-defined Pulsar topics. - -This connector can be used in a containerized (for example, k8s) deployment. Otherwise, if the connector runs in process or thread mode, instances may conflict with each other when listening on ports. - -## Configuration - -The configuration of the Netty source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `type` |String| true |tcp | The network protocol over which data is transmitted to Netty.<br /><br />

    Below are the available options:
  2276. tcp
  2277. http
  2278. udp
  2279. | -| `host` | String|true | 127.0.0.1 | The host name or address on which the source instance listen. | -| `port` | int|true | 10999 | The port on which the source instance listen. | -| `numberOfThreads` |int| true |1 | The number of threads of Netty TCP server to accept incoming connections and handle the traffic of accepted connections. | - - -### Example - -Before using the Netty source connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "type": "tcp", - "host": "127.0.0.1", - "port": "10911", - "numberOfThreads": "1" - } - - ``` - -* YAML - - ```yaml - - configs: - type: "tcp" - host: "127.0.0.1" - port: 10999 - numberOfThreads: 1 - - ``` - -## Usage - -The following examples show how to use the Netty source connector with TCP and HTTP. - -### TCP - -1. Start Pulsar standalone. - - ```bash - - $ docker pull apachepulsar/pulsar:{version} - - $ docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-netty-standalone apachepulsar/pulsar:{version} bin/pulsar standalone - - ``` - -2. Create a configuration file _netty-source-config.yaml_. - - ```yaml - - configs: - type: "tcp" - host: "127.0.0.1" - port: 10999 - numberOfThreads: 1 - - ``` - -3. Copy the configuration file _netty-source-config.yaml_ to Pulsar server. - - ```bash - - $ docker cp netty-source-config.yaml pulsar-netty-standalone:/pulsar/conf/ - - ``` - -4. Download the Netty source connector. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - curl -O http://mirror-hk.koddos.net/apache/pulsar/pulsar-{version}/connectors/pulsar-io-netty-{version}.nar - - ``` - -5. Start the Netty source connector. - - ```bash - - $ ./bin/pulsar-admin sources localrun \ - --archive pulsar-io-@pulsar:version@.nar \ - --tenant public \ - --namespace default \ - --name netty \ - --destination-topic-name netty-topic \ - --source-config-file netty-source-config.yaml \ - --parallelism 1 - - ``` - -6. Consume data. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - - $ ./bin/pulsar-client consume -t Exclusive -s netty-sub netty-topic -n 0 - - ``` - -7. Open another terminal window to send data to the Netty source. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - - $ apt-get update - - $ apt-get -y install telnet - - $ root@1d19327b2c67:/pulsar# telnet 127.0.0.1 10999 - Trying 127.0.0.1... - Connected to 127.0.0.1. - Escape character is '^]'. - hello - world - - ``` - -8. The following information appears on the consumer terminal window. - - ```bash - - ----- got message ----- - hello - - ----- got message ----- - world - - ``` - -### HTTP - -1. Start Pulsar standalone. - - ```bash - - $ docker pull apachepulsar/pulsar:{version} - - $ docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-netty-standalone apachepulsar/pulsar:{version} bin/pulsar standalone - - ``` - -2. Create a configuration file _netty-source-config.yaml_. - - ```yaml - - configs: - type: "http" - host: "127.0.0.1" - port: 10999 - numberOfThreads: 1 - - ``` - -3. Copy the configuration file _netty-source-config.yaml_ to Pulsar server. - - ```bash - - $ docker cp netty-source-config.yaml pulsar-netty-standalone:/pulsar/conf/ - - ``` - -4. Download the Netty source connector. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - curl -O http://mirror-hk.koddos.net/apache/pulsar/pulsar-{version}/connectors/pulsar-io-netty-{version}.nar - - ``` - -5. Start the Netty source connector. 
- - ```bash - - $ ./bin/pulsar-admin sources localrun \ - --archive pulsar-io-@pulsar:version@.nar \ - --tenant public \ - --namespace default \ - --name netty \ - --destination-topic-name netty-topic \ - --source-config-file netty-source-config.yaml \ - --parallelism 1 - - ``` - -6. Consume data. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - - $ ./bin/pulsar-client consume -t Exclusive -s netty-sub netty-topic -n 0 - - ``` - -7. Open another terminal window to send data to the Netty source. - - ```bash - - $ docker exec -it pulsar-netty-standalone /bin/bash - - $ curl -X POST --data 'hello, world!' http://127.0.0.1:10999/ - - ``` - -8. The following information appears on the consumer terminal window. - - ```bash - - ----- got message ----- - hello, world! - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/io-nsq-source.md b/site2/website/versioned_docs/version-2.9.3-deprecated/io-nsq-source.md deleted file mode 100644 index b61e7e100c22e1..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/io-nsq-source.md +++ /dev/null @@ -1,21 +0,0 @@ ---- -id: io-nsq-source -title: NSQ source connector -sidebar_label: "NSQ source connector" -original_id: io-nsq-source ---- - -The NSQ source connector receives messages from NSQ topics -and writes messages to Pulsar topics. - -## Configuration - -The configuration of the NSQ source connector has the following properties. - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `lookupds` |String| true | " " (empty string) | A comma-separated list of nsqlookupds to connect to. | -| `topic` | String|true | " " (empty string) | The NSQ topic to transport. | -| `channel` | String |false | pulsar-transport-{$topic} | The channel to consume from on the provided NSQ topic. | \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/io-overview.md b/site2/website/versioned_docs/version-2.9.3-deprecated/io-overview.md deleted file mode 100644 index 3db5ee34042d3f..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/io-overview.md +++ /dev/null @@ -1,164 +0,0 @@ ---- -id: io-overview -title: Pulsar connector overview -sidebar_label: "Overview" -original_id: io-overview ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -Messaging systems are most powerful when you can easily use them with external systems like databases and other messaging systems. - -**Pulsar IO connectors** enable you to easily create, deploy, and manage connectors that interact with external systems, such as [Apache Cassandra](https://cassandra.apache.org), [Aerospike](https://www.aerospike.com), and many others. - - -## Concept - -Pulsar IO connectors come in two types: **source** and **sink**. - -This diagram illustrates the relationship between source, Pulsar, and sink: - -![Pulsar IO diagram](/assets/pulsar-io.png "Pulsar IO connectors (sources and sinks)") - - -### Source - -> Sources **feed data from external systems into Pulsar**. - -Common sources include other messaging systems and firehose-style data pipeline APIs. - -For the complete list of Pulsar built-in source connectors, see [source connector](io-connectors.md#source-connector). - -### Sink - -> Sinks **feed data from Pulsar into external systems**. - -Common sinks include other messaging systems and SQL and NoSQL databases. 
- -For the complete list of Pulsar built-in sink connectors, see [sink connector](io-connectors.md#sink-connector). - -## Processing guarantee - -Processing guarantees are used to handle errors when writing messages to Pulsar topics. - -> Pulsar connectors and Functions use the **same** processing guarantees as below. - -Delivery semantic | Description -:------------------|:------- -`at-most-once` | Each message sent to a connector is to be **processed once** or **not to be processed**. -`at-least-once` | Each message sent to a connector is to be **processed once** or **more than once**. -`effectively-once` | Each message sent to a connector has **one output associated** with it. - -> Processing guarantees for connectors not just rely on Pulsar guarantee but also **relate to external systems**, that is, **the implementation of source and sink**. - -* Source: Pulsar ensures that writing messages to Pulsar topics respects to the processing guarantees. It is within Pulsar's control. - -* Sink: the processing guarantees rely on the sink implementation. If the sink implementation does not handle retries in an idempotent way, the sink does not respect to the processing guarantees. - -### Set - -When creating a connector, you can set the processing guarantee with the following semantics: - -* ATLEAST_ONCE - -* ATMOST_ONCE - -* EFFECTIVELY_ONCE - -> If `--processing-guarantees` is not specified when creating a connector, the default semantic is `ATLEAST_ONCE`. - -Here takes **Admin CLI** as an example. For more information about **REST API** or **JAVA Admin API**, see [here](io-use.md#create). - -````mdx-code-block - - - - -```bash - -$ bin/pulsar-admin sources create \ - --processing-guarantees ATMOST_ONCE \ - # Other source configs - -``` - -For more information about the options of `pulsar-admin sources create`, see [here](reference-connector-admin.md#create). - - - - -```bash - -$ bin/pulsar-admin sinks create \ - --processing-guarantees EFFECTIVELY_ONCE \ - # Other sink configs - -``` - -For more information about the options of `pulsar-admin sinks create`, see [here](reference-connector-admin.md#create-1). - - - - -```` - -### Update - -After creating a connector, you can update the processing guarantee with the following semantics: - -* ATLEAST_ONCE - -* ATMOST_ONCE - -* EFFECTIVELY_ONCE - -Here takes **Admin CLI** as an example. For more information about **REST API** or **JAVA Admin API**, see [here](io-use.md#create). - -````mdx-code-block - - - - -```bash - -$ bin/pulsar-admin sources update \ - --processing-guarantees EFFECTIVELY_ONCE \ - # Other source configs - -``` - -For more information about the options of `pulsar-admin sources update`, see [here](reference-connector-admin.md#update). - - - - -```bash - -$ bin/pulsar-admin sinks update \ - --processing-guarantees ATMOST_ONCE \ - # Other sink configs - -``` - -For more information about the options of `pulsar-admin sinks update`, see [here](reference-connector-admin.md#update-1). - - - - -```` - - -## Work with connector - -You can manage Pulsar connectors (for example, create, update, start, stop, restart, reload, delete and perform other operations on connectors) via the [Connector Admin CLI](reference-connector-admin.md) with [sources](io-cli.md#sources) and [sinks](io-cli.md#sinks) subcommands. - -Connectors (sources and sinks) and Functions are components of instances, and they all run on Functions workers. 
When managing a source, sink or function via [Connector Admin CLI](reference-connector-admin.md) or [Functions Admin CLI](functions-cli.md), an instance is started on a worker. For more information, see [Functions worker](functions-worker.md#run-functions-worker-separately). - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/io-quickstart.md b/site2/website/versioned_docs/version-2.9.3-deprecated/io-quickstart.md deleted file mode 100644 index b73063e75e0c36..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/io-quickstart.md +++ /dev/null @@ -1,964 +0,0 @@ ---- -id: io-quickstart -title: How to connect Pulsar to database -sidebar_label: "Get started" -original_id: io-quickstart ---- - -This tutorial provides a hands-on look at how you can move data out of Pulsar without writing a single line of code. - -It is helpful to review the [concepts](io-overview.md) for Pulsar I/O with running the steps in this guide to gain a deeper understanding. - -At the end of this tutorial, you are able to: - -- [Connect Pulsar to Cassandra](#Connect-Pulsar-to-Cassandra) - -- [Connect Pulsar to PostgreSQL](#Connect-Pulsar-to-PostgreSQL) - -:::tip - -* These instructions assume you are running Pulsar in [standalone mode](getting-started-standalone.md). However, all -the commands used in this tutorial can be used in a multi-nodes Pulsar cluster without any changes. -* All the instructions are assumed to run at the root directory of a Pulsar binary distribution. - -::: - -## Install Pulsar and built-in connector - -Before connecting Pulsar to a database, you need to install Pulsar and the desired built-in connector. - -For more information about **how to install a standalone Pulsar and built-in connectors**, see [here](getting-started-standalone.md/#installing-pulsar). - -## Start Pulsar standalone - -1. Start Pulsar locally. - - ```bash - - bin/pulsar standalone - - ``` - - All the components of a Pulsar service are start in order. - - You can curl those pulsar service endpoints to make sure Pulsar service is up running correctly. - -2. Check Pulsar binary protocol port. - - ```bash - - telnet localhost 6650 - - ``` - -3. Check Pulsar Function cluster. - - ```bash - - curl -s http://localhost:8080/admin/v2/worker/cluster - - ``` - - **Example output** - - ```json - - [{"workerId":"c-standalone-fw-localhost-6750","workerHostname":"localhost","port":6750}] - - ``` - -4. Make sure a public tenant and a default namespace exist. - - ```bash - - curl -s http://localhost:8080/admin/v2/namespaces/public - - ``` - - **Example output** - - ```json - - ["public/default","public/functions"] - - ``` - -5. All built-in connectors should be listed as available. 
- - ```bash - - curl -s http://localhost:8080/admin/v2/functions/connectors - - ``` - - **Example output** - - ```json - - [{"name":"aerospike","description":"Aerospike database sink","sinkClass":"org.apache.pulsar.io.aerospike.AerospikeStringSink"},{"name":"cassandra","description":"Writes data into Cassandra","sinkClass":"org.apache.pulsar.io.cassandra.CassandraStringSink"},{"name":"kafka","description":"Kafka source and sink connector","sourceClass":"org.apache.pulsar.io.kafka.KafkaStringSource","sinkClass":"org.apache.pulsar.io.kafka.KafkaBytesSink"},{"name":"kinesis","description":"Kinesis sink connector","sinkClass":"org.apache.pulsar.io.kinesis.KinesisSink"},{"name":"rabbitmq","description":"RabbitMQ source connector","sourceClass":"org.apache.pulsar.io.rabbitmq.RabbitMQSource"},{"name":"twitter","description":"Ingest data from Twitter firehose","sourceClass":"org.apache.pulsar.io.twitter.TwitterFireHose"}] - - ``` - - If an error occurs when starting Pulsar service, you may see an exception at the terminal running `pulsar/standalone`, - or you can navigate to the `logs` directory under the Pulsar directory to view the logs. - -## Connect Pulsar to Cassandra - -This section demonstrates how to connect Pulsar to Cassandra. - -:::tip - -* Make sure you have Docker installed. If you do not have one, see [install Docker](https://docs.docker.com/docker-for-mac/install/). -* The Cassandra sink connector reads messages from Pulsar topics and writes the messages into Cassandra tables. For more information, see [Cassandra sink connector](io-cassandra-sink.md). - -::: - -### Setup a Cassandra cluster - -This example uses `cassandra` Docker image to start a single-node Cassandra cluster in Docker. - -1. Start a Cassandra cluster. - - ```bash - - docker run -d --rm --name=cassandra -p 9042:9042 cassandra - - ``` - - :::note - - Before moving to the next steps, make sure the Cassandra cluster is running. - - ::: - -2. Make sure the Docker process is running. - - ```bash - - docker ps - - ``` - -3. Check the Cassandra logs to make sure the Cassandra process is running as expected. - - ```bash - - docker logs cassandra - - ``` - -4. Check the status of the Cassandra cluster. - - ```bash - - docker exec cassandra nodetool status - - ``` - - **Example output** - - ``` - - Datacenter: datacenter1 - ======================= - Status=Up/Down - |/ State=Normal/Leaving/Joining/Moving - -- Address Load Tokens Owns (effective) Host ID Rack - UN 172.17.0.2 103.67 KiB 256 100.0% af0e4b2f-84e0-4f0b-bb14-bd5f9070ff26 rack1 - - ``` - -5. Use `cqlsh` to connect to the Cassandra cluster. - - ```bash - - $ docker exec -ti cassandra cqlsh localhost - Connected to Test Cluster at localhost:9042. - [cqlsh 5.0.1 | Cassandra 3.11.2 | CQL spec 3.4.4 | Native protocol v4] - Use HELP for help. - cqlsh> - - ``` - -6. Create a keyspace `pulsar_test_keyspace`. - - ```bash - - cqlsh> CREATE KEYSPACE pulsar_test_keyspace WITH replication = {'class':'SimpleStrategy', 'replication_factor':1}; - - ``` - -7. Create a table `pulsar_test_table`. - - ```bash - - cqlsh> USE pulsar_test_keyspace; - cqlsh:pulsar_test_keyspace> CREATE TABLE pulsar_test_table (key text PRIMARY KEY, col text); - - ``` - -### Configure a Cassandra sink - -Now that we have a Cassandra cluster running locally. - -In this section, you need to configure a Cassandra sink connector. - -To run a Cassandra sink connector, you need to prepare a configuration file including the information that Pulsar connector runtime needs to know. 
- -For example, how Pulsar connector can find the Cassandra cluster, what is the keyspace and the table that Pulsar connector uses for writing Pulsar messages to, and so on. - -You can create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "roots": "localhost:9042", - "keyspace": "pulsar_test_keyspace", - "columnFamily": "pulsar_test_table", - "keyname": "key", - "columnName": "col" - } - - ``` - -* YAML - - ```yaml - - configs: - roots: "localhost:9042" - keyspace: "pulsar_test_keyspace" - columnFamily: "pulsar_test_table" - keyname: "key" - columnName: "col" - - ``` - -For more information, see [Cassandra sink connector](io-cassandra-sink.md). - -### Create a Cassandra sink - -You can use the [Connector Admin CLI](io-cli.md) -to create a sink connector and perform other operations on them. - -Run the following command to create a Cassandra sink connector with sink type _cassandra_ and the config file _examples/cassandra-sink.yml_ created previously. - -#### Note -> The `sink-type` parameter of the currently built-in connectors is determined by the setting of the `name` parameter specified in the pulsar-io.yaml file. - -```bash - -bin/pulsar-admin sinks create \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink \ - --sink-type cassandra \ - --sink-config-file examples/cassandra-sink.yml \ - --inputs test_cassandra - -``` - -Once the command is executed, Pulsar creates the sink connector _cassandra-test-sink_. - -This sink connector runs -as a Pulsar Function and writes the messages produced in the topic _test_cassandra_ to the Cassandra table _pulsar_test_table_. - -### Inspect a Cassandra sink - -You can use the [Connector Admin CLI](io-cli.md) -to monitor a connector and perform other operations on it. - -* Get the information of a Cassandra sink. - - ```bash - - bin/pulsar-admin sinks get \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink - - ``` - - **Example output** - - ```json - - { - "tenant": "public", - "namespace": "default", - "name": "cassandra-test-sink", - "className": "org.apache.pulsar.io.cassandra.CassandraStringSink", - "inputSpecs": { - "test_cassandra": { - "isRegexPattern": false - } - }, - "configs": { - "roots": "localhost:9042", - "keyspace": "pulsar_test_keyspace", - "columnFamily": "pulsar_test_table", - "keyname": "key", - "columnName": "col" - }, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "autoAck": true, - "archive": "builtin://cassandra" - } - - ``` - -* Check the status of a Cassandra sink. - - ```bash - - bin/pulsar-admin sinks status \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink - - ``` - - **Example output** - - ```json - - { - "numInstances" : 1, - "numRunning" : 1, - "instances" : [ { - "instanceId" : 0, - "status" : { - "running" : true, - "error" : "", - "numRestarts" : 0, - "numReadFromPulsar" : 0, - "numSystemExceptions" : 0, - "latestSystemExceptions" : [ ], - "numSinkExceptions" : 0, - "latestSinkExceptions" : [ ], - "numWrittenToSink" : 0, - "lastReceivedTime" : 0, - "workerId" : "c-standalone-fw-localhost-8080" - } - } ] - } - - ``` - -### Verify a Cassandra sink - -1. Produce some messages to the input topic of the Cassandra sink _test_cassandra_. - - ```bash - - for i in {0..9}; do bin/pulsar-client produce -m "key-$i" -n 1 test_cassandra; done - - ``` - -2. Inspect the status of the Cassandra sink _test_cassandra_. 
- - ```bash - - bin/pulsar-admin sinks status \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink - - ``` - - You can see 10 messages are processed by the Cassandra sink _test_cassandra_. - - **Example output** - - ```json - - { - "numInstances" : 1, - "numRunning" : 1, - "instances" : [ { - "instanceId" : 0, - "status" : { - "running" : true, - "error" : "", - "numRestarts" : 0, - "numReadFromPulsar" : 10, - "numSystemExceptions" : 0, - "latestSystemExceptions" : [ ], - "numSinkExceptions" : 0, - "latestSinkExceptions" : [ ], - "numWrittenToSink" : 10, - "lastReceivedTime" : 1551685489136, - "workerId" : "c-standalone-fw-localhost-8080" - } - } ] - } - - ``` - -3. Use `cqlsh` to connect to the Cassandra cluster. - - ```bash - - docker exec -ti cassandra cqlsh localhost - - ``` - -4. Check the data of the Cassandra table _pulsar_test_table_. - - ```bash - - cqlsh> use pulsar_test_keyspace; - cqlsh:pulsar_test_keyspace> select * from pulsar_test_table; - - key | col - --------+-------- - key-5 | key-5 - key-0 | key-0 - key-9 | key-9 - key-2 | key-2 - key-1 | key-1 - key-3 | key-3 - key-6 | key-6 - key-7 | key-7 - key-4 | key-4 - key-8 | key-8 - - ``` - -### Delete a Cassandra Sink - -You can use the [Connector Admin CLI](io-cli.md) -to delete a connector and perform other operations on it. - -```bash - -bin/pulsar-admin sinks delete \ - --tenant public \ - --namespace default \ - --name cassandra-test-sink - -``` - -## Connect Pulsar to PostgreSQL - -This section demonstrates how to connect Pulsar to PostgreSQL. - -:::tip - -* Make sure you have Docker installed. If you do not have one, see [install Docker](https://docs.docker.com/docker-for-mac/install/). -* The JDBC sink connector pulls messages from Pulsar topics - -::: - -and persists the messages to ClickHouse, MariaDB, PostgreSQL, or SQlite. ->For more information, see [JDBC sink connector](io-jdbc-sink.md). - - -### Setup a PostgreSQL cluster - -This example uses the PostgreSQL 12 docker image to start a single-node PostgreSQL cluster in Docker. - -1. Pull the PostgreSQL 12 image from Docker. - - ```bash - - $ docker pull postgres:12 - - ``` - -2. Start PostgreSQL. - - ```bash - - $ docker run -d -it --rm \ - --name pulsar-postgres \ - -p 5432:5432 \ - -e POSTGRES_PASSWORD=password \ - -e POSTGRES_USER=postgres \ - postgres:12 - - ``` - - #### Tip - - Flag | Description | This example - ---|---|---| - `-d` | To start a container in detached mode. | / - `-it` | Keep STDIN open even if not attached and allocate a terminal. | / - `--rm` | Remove the container automatically when it exits. | / - `-name` | Assign a name to the container. | This example specifies _pulsar-postgres_ for the container. - `-p` | Publish the port of the container to the host. | This example publishes the port _5432_ of the container to the host. - `-e` | Set environment variables. | This example sets the following variables:
    - The password for the user is _password_.
    - The name for the user is _postgres_. - - :::tip - - For more information about Docker commands, see [Docker CLI](https://docs.docker.com/engine/reference/commandline/run/). - - ::: - -3. Check if PostgreSQL has been started successfully. - - ```bash - - $ docker logs -f pulsar-postgres - - ``` - - PostgreSQL has been started successfully if the following message appears. - - ```text - - 2020-05-11 20:09:24.492 UTC [1] LOG: starting PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit - 2020-05-11 20:09:24.492 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 - 2020-05-11 20:09:24.492 UTC [1] LOG: listening on IPv6 address "::", port 5432 - 2020-05-11 20:09:24.499 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" - 2020-05-11 20:09:24.523 UTC [55] LOG: database system was shut down at 2020-05-11 20:09:24 UTC - 2020-05-11 20:09:24.533 UTC [1] LOG: database system is ready to accept connections - - ``` - -4. Access to PostgreSQL. - - ```bash - - $ docker exec -it pulsar-postgres /bin/bash - - ``` - -5. Create a PostgreSQL table _pulsar_postgres_jdbc_sink_. - - ```bash - - $ psql -U postgres postgres - - postgres=# create table if not exists pulsar_postgres_jdbc_sink - ( - id serial PRIMARY KEY, - name VARCHAR(255) NOT NULL - ); - - ``` - -### Configure a JDBC sink - -Now we have a PostgreSQL running locally. - -In this section, you need to configure a JDBC sink connector. - -1. Add a configuration file. - - To run a JDBC sink connector, you need to prepare a YAML configuration file including the information that Pulsar connector runtime needs to know. - - For example, how Pulsar connector can find the PostgreSQL cluster, what is the JDBC URL and the table that Pulsar connector uses for writing messages to. - - Create a _pulsar-postgres-jdbc-sink.yaml_ file, copy the following contents to this file, and place the file in the `pulsar/connectors` folder. - - ```yaml - - configs: - userName: "postgres" - password: "password" - jdbcUrl: "jdbc:postgresql://localhost:5432/postgres" - tableName: "pulsar_postgres_jdbc_sink" - - ``` - -2. Create a schema. - - Create a _avro-schema_ file, copy the following contents to this file, and place the file in the `pulsar/connectors` folder. - - ```json - - { - "type": "AVRO", - "schema": "{\"type\":\"record\",\"name\":\"Test\",\"fields\":[{\"name\":\"id\",\"type\":[\"null\",\"int\"]},{\"name\":\"name\",\"type\":[\"null\",\"string\"]}]}", - "properties": {} - } - - ``` - - :::tip - - For more information about AVRO, see [Apache Avro](https://avro.apache.org/docs/1.9.1/). - - ::: - -3. Upload a schema to a topic. - - This example uploads the _avro-schema_ schema to the _pulsar-postgres-jdbc-sink-topic_ topic. - - ```bash - - $ bin/pulsar-admin schemas upload pulsar-postgres-jdbc-sink-topic -f ./connectors/avro-schema - - ``` - -4. Check if the schema has been uploaded successfully. - - ```bash - - $ bin/pulsar-admin schemas get pulsar-postgres-jdbc-sink-topic - - ``` - - The schema has been uploaded successfully if the following message appears. - - ```json - - {"name":"pulsar-postgres-jdbc-sink-topic","schema":"{\"type\":\"record\",\"name\":\"Test\",\"fields\":[{\"name\":\"id\",\"type\":[\"null\",\"int\"]},{\"name\":\"name\",\"type\":[\"null\",\"string\"]}]}","type":"AVRO","properties":{}} - - ``` - -### Create a JDBC sink - -You can use the [Connector Admin CLI](io-cli.md) -to create a sink connector and perform other operations on it. 
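-
-Optionally, before creating the sink, you can double-check which sink types the worker has picked up. This is a quick sanity check, assuming the standalone worker from the earlier steps is still running; the JDBC sink (for example, `jdbc-postgres`) should be listed in the output:
-
-```bash
-
-$ bin/pulsar-admin sinks available-sinks
-
-```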
- -This example creates a sink connector and specifies the desired information. - -```bash - -$ bin/pulsar-admin sinks create \ ---archive ./connectors/pulsar-io-jdbc-postgres-@pulsar:version@.nar \ ---inputs pulsar-postgres-jdbc-sink-topic \ ---name pulsar-postgres-jdbc-sink \ ---sink-config-file ./connectors/pulsar-postgres-jdbc-sink.yaml \ ---parallelism 1 - -``` - -Once the command is executed, Pulsar creates a sink connector _pulsar-postgres-jdbc-sink_. - -This sink connector runs as a Pulsar Function and writes the messages produced in the topic _pulsar-postgres-jdbc-sink-topic_ to the PostgreSQL table _pulsar_postgres_jdbc_sink_. - - #### Tip - - Flag | Description | This example - ---|---|---| - `--archive` | The path to the archive file for the sink. | _pulsar-io-jdbc-postgres-@pulsar:version@.nar_ | - `--inputs` | The input topic(s) of the sink.

    Multiple topics can be specified as a comma-separated list.|| - `--name` | The name of the sink. | _pulsar-postgres-jdbc-sink_ | - `--sink-config-file` | The path to a YAML config file specifying the configuration of the sink. | _pulsar-postgres-jdbc-sink.yaml_ | - `--parallelism` | The parallelism factor of the sink.

    For example, the number of sink instances to run. | _1_ | - -:::tip - -For more information about `pulsar-admin sinks create options`, see [here](io-cli.md#sinks). - -::: - -The sink has been created successfully if the following message appears. - -```bash - -"Created successfully" - -``` - -### Inspect a JDBC sink - -You can use the [Connector Admin CLI](io-cli.md) -to monitor a connector and perform other operations on it. - -* List all running JDBC sink(s). - - ```bash - - $ bin/pulsar-admin sinks list \ - --tenant public \ - --namespace default - - ``` - - :::tip - - For more information about `pulsar-admin sinks list options`, see [here](io-cli.md/#list-1). - - ::: - - The result shows that only the _postgres-jdbc-sink_ sink is running. - - ```json - - [ - "pulsar-postgres-jdbc-sink" - ] - - ``` - -* Get the information of a JDBC sink. - - ```bash - - $ bin/pulsar-admin sinks get \ - --tenant public \ - --namespace default \ - --name pulsar-postgres-jdbc-sink - - ``` - - :::tip - - For more information about `pulsar-admin sinks get options`, see [here](io-cli.md/#get-1). - - ::: - - The result shows the information of the sink connector, including tenant, namespace, topic and so on. - - ```json - - { - "tenant": "public", - "namespace": "default", - "name": "pulsar-postgres-jdbc-sink", - "className": "org.apache.pulsar.io.jdbc.PostgresJdbcAutoSchemaSink", - "inputSpecs": { - "pulsar-postgres-jdbc-sink-topic": { - "isRegexPattern": false - } - }, - "configs": { - "password": "password", - "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink", - "userName": "postgres", - "tableName": "pulsar_postgres_jdbc_sink" - }, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "autoAck": true - } - - ``` - -* Get the status of a JDBC sink - - ```bash - - $ bin/pulsar-admin sinks status \ - --tenant public \ - --namespace default \ - --name pulsar-postgres-jdbc-sink - - ``` - - :::tip - - For more information about `pulsar-admin sinks status options`, see [here](io-cli.md/#status-1). - - ::: - - The result shows the current status of sink connector, including the number of instance, running status, worker ID and so on. - - ```json - - { - "numInstances" : 1, - "numRunning" : 1, - "instances" : [ { - "instanceId" : 0, - "status" : { - "running" : true, - "error" : "", - "numRestarts" : 0, - "numReadFromPulsar" : 0, - "numSystemExceptions" : 0, - "latestSystemExceptions" : [ ], - "numSinkExceptions" : 0, - "latestSinkExceptions" : [ ], - "numWrittenToSink" : 0, - "lastReceivedTime" : 0, - "workerId" : "c-standalone-fw-192.168.2.52-8080" - } - } ] - } - - ``` - -### Stop a JDBC sink - -You can use the [Connector Admin CLI](io-cli.md) -to stop a connector and perform other operations on it. - -```bash - -$ bin/pulsar-admin sinks stop \ ---tenant public \ ---namespace default \ ---name pulsar-postgres-jdbc-sink - -``` - -:::tip - -For more information about `pulsar-admin sinks stop options`, see [here](io-cli.md/#stop-1). - -::: - -The sink instance has been stopped successfully if the following message disappears. - -```bash - -"Stopped successfully" - -``` - -### Restart a JDBC sink - -You can use the [Connector Admin CLI](io-cli.md) -to restart a connector and perform other operations on it. - -```bash - -$ bin/pulsar-admin sinks restart \ ---tenant public \ ---namespace default \ ---name pulsar-postgres-jdbc-sink - -``` - -:::tip - -For more information about `pulsar-admin sinks restart options`, see [here](io-cli.md/#restart-1). 
- -::: - -The sink instance has been started successfully if the following message disappears. - -```bash - -"Started successfully" - -``` - -:::tip - -* Optionally, you can run a standalone sink connector using `pulsar-admin sinks localrun options`. -Note that `pulsar-admin sinks localrun options` **runs a sink connector locally**, while `pulsar-admin sinks start options` **starts a sink connector in a cluster**. -* For more information about `pulsar-admin sinks localrun options`, see [here](io-cli.md#localrun-1). - -::: - -### Update a JDBC sink - -You can use the [Connector Admin CLI](io-cli.md) -to update a connector and perform other operations on it. - -This example updates the parallelism of the _pulsar-postgres-jdbc-sink_ sink connector to 2. - -```bash - -$ bin/pulsar-admin sinks update \ ---name pulsar-postgres-jdbc-sink \ ---parallelism 2 - -``` - -:::tip - -For more information about `pulsar-admin sinks update options`, see [here](io-cli.md/#update-1). - -::: - -The sink connector has been updated successfully if the following message disappears. - -```bash - -"Updated successfully" - -``` - -This example double-checks the information. - -```bash - -$ bin/pulsar-admin sinks get \ ---tenant public \ ---namespace default \ ---name pulsar-postgres-jdbc-sink - -``` - -The result shows that the parallelism is 2. - -```json - -{ - "tenant": "public", - "namespace": "default", - "name": "pulsar-postgres-jdbc-sink", - "className": "org.apache.pulsar.io.jdbc.PostgresJdbcAutoSchemaSink", - "inputSpecs": { - "pulsar-postgres-jdbc-sink-topic": { - "isRegexPattern": false - } - }, - "configs": { - "password": "password", - "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink", - "userName": "postgres", - "tableName": "pulsar_postgres_jdbc_sink" - }, - "parallelism": 2, - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "autoAck": true -} - -``` - -### Delete a JDBC sink - -You can use the [Connector Admin CLI](io-cli.md) -to delete a connector and perform other operations on it. - -This example deletes the _pulsar-postgres-jdbc-sink_ sink connector. - -```bash - -$ bin/pulsar-admin sinks delete \ ---tenant public \ ---namespace default \ ---name pulsar-postgres-jdbc-sink - -``` - -:::tip - -For more information about `pulsar-admin sinks delete options`, see [here](io-cli.md/#delete-1). - -::: - -The sink connector has been deleted successfully if the following message appears. - -```text - -"Deleted successfully" - -``` - -This example double-checks the status of the sink connector. - -```bash - -$ bin/pulsar-admin sinks get \ ---tenant public \ ---namespace default \ ---name pulsar-postgres-jdbc-sink - -``` - -The result shows that the sink connector does not exist. - -```text - -HTTP 404 Not Found - -Reason: Sink pulsar-postgres-jdbc-sink doesn't exist - -``` - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/io-rabbitmq-sink.md b/site2/website/versioned_docs/version-2.9.3-deprecated/io-rabbitmq-sink.md deleted file mode 100644 index d7fda99460dc97..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/io-rabbitmq-sink.md +++ /dev/null @@ -1,85 +0,0 @@ ---- -id: io-rabbitmq-sink -title: RabbitMQ sink connector -sidebar_label: "RabbitMQ sink connector" -original_id: io-rabbitmq-sink ---- - -The RabbitMQ sink connector pulls messages from Pulsar topics -and persist the messages to RabbitMQ queues. - - -## Configuration - -The configuration of the RabbitMQ sink connector has the following properties. 
- - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `connectionName` |String| true | " " (empty string) | The connection name. | -| `host` | String| true | " " (empty string) | The RabbitMQ host. | -| `port` | int |true | 5672 | The RabbitMQ port. | -| `virtualHost` |String|true | / | The virtual host used to connect to RabbitMQ. | -| `username` | String|false | guest | The username used to authenticate to RabbitMQ. | -| `password` | String|false | guest | The password used to authenticate to RabbitMQ. | -| `queueName` | String|true | " " (empty string) | The RabbitMQ queue name that messages should be read from or written to. | -| `requestedChannelMax` | int|false | 0 | The initially requested maximum channel number.

    0 means unlimited. | -| `requestedFrameMax` | int|false |0 | The initially requested maximum frame size in octets.

    0 means unlimited. | -| `connectionTimeout` | int|false | 60000 | The timeout of TCP connection establishment in milliseconds.

    0 means infinite. |
-| `handshakeTimeout` | int|false | 10000 | The timeout of AMQP0-9-1 protocol handshake in milliseconds. |
-| `requestedHeartbeat` | int|false | 60 | The requested heartbeat timeout in seconds. |
-| `exchangeName` | String|true | " " (empty string) | The exchange to publish messages to. |
-| `routingKey` | String|true | " " (empty string) | The routing key used to publish messages. |
-
-
-### Example
-
-Before using the RabbitMQ sink connector, you need to create a configuration file through one of the following methods.
-
-* JSON
-
-  ```json
-
-  {
-      "host": "localhost",
-      "port": "5672",
-      "virtualHost": "/",
-      "username": "guest",
-      "password": "guest",
-      "queueName": "test-queue",
-      "connectionName": "test-connection",
-      "requestedChannelMax": "0",
-      "requestedFrameMax": "0",
-      "connectionTimeout": "60000",
-      "handshakeTimeout": "10000",
-      "requestedHeartbeat": "60",
-      "exchangeName": "test-exchange",
-      "routingKey": "test-key"
-  }
-
-  ```
-
-* YAML
-
-  ```yaml
-
-  configs:
-      host: "localhost"
-      port: 5672
-      virtualHost: "/"
-      username: "guest"
-      password: "guest"
-      queueName: "test-queue"
-      connectionName: "test-connection"
-      requestedChannelMax: 0
-      requestedFrameMax: 0
-      connectionTimeout: 60000
-      handshakeTimeout: 10000
-      requestedHeartbeat: 60
-      exchangeName: "test-exchange"
-      routingKey: "test-key"
-
-  ```
-
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/io-rabbitmq-source.md b/site2/website/versioned_docs/version-2.9.3-deprecated/io-rabbitmq-source.md
deleted file mode 100644
index c2c31cc97d10d9..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/io-rabbitmq-source.md
+++ /dev/null
@@ -1,85 +0,0 @@
----
-id: io-rabbitmq-source
-title: RabbitMQ source connector
-sidebar_label: "RabbitMQ source connector"
-original_id: io-rabbitmq-source
----
-
-The RabbitMQ source connector receives messages from RabbitMQ clusters
-and writes messages to Pulsar topics.
-
-## Configuration
-
-The configuration of the RabbitMQ source connector has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `connectionName` |String| true | " " (empty string) | The connection name. |
-| `host` | String| true | " " (empty string) | The RabbitMQ host. |
-| `port` | int |true | 5672 | The RabbitMQ port. |
-| `virtualHost` |String|true | / | The virtual host used to connect to RabbitMQ. |
-| `username` | String|false | guest | The username used to authenticate to RabbitMQ. |
-| `password` | String|false | guest | The password used to authenticate to RabbitMQ. |
-| `queueName` | String|true | " " (empty string) | The RabbitMQ queue name that messages should be read from or written to. |
-| `requestedChannelMax` | int|false | 0 | The initially requested maximum channel number.<br/>

    0 means unlimited. | -| `requestedFrameMax` | int|false |0 | The initially requested maximum frame size in octets.

    0 means unlimited. | -| `connectionTimeout` | int|false | 60000 | The timeout of TCP connection establishment in milliseconds.

    0 means infinite. | -| `handshakeTimeout` | int|false | 10000 | The timeout of AMQP0-9-1 protocol handshake in milliseconds. | -| `requestedHeartbeat` | int|false | 60 | The requested heartbeat timeout in seconds. | -| `prefetchCount` | int|false | 0 | The maximum number of messages that the server delivers.

    0 means unlimited. | -| `prefetchGlobal` | boolean|false | false |Whether the setting should be applied to the entire channel rather than each consumer. | -| `passive` | boolean|false | false | Whether the rabbitmq consumer should create its own queue or bind to an existing one. | - -### Example - -Before using the RabbitMQ source connector, you need to create a configuration file through one of the following methods. - -* JSON - - ```json - - { - "host": "localhost", - "port": "5672", - "virtualHost": "/", - "username": "guest", - "password": "guest", - "queueName": "test-queue", - "connectionName": "test-connection", - "requestedChannelMax": "0", - "requestedFrameMax": "0", - "connectionTimeout": "60000", - "handshakeTimeout": "10000", - "requestedHeartbeat": "60", - "prefetchCount": "0", - "prefetchGlobal": "false", - "passive": "false" - } - - ``` - -* YAML - - ```yaml - - configs: - host: "localhost" - port: 5672 - virtualHost: "/" - username: "guest" - password: "guest" - queueName: "test-queue" - connectionName: "test-connection" - requestedChannelMax: 0 - requestedFrameMax: 0 - connectionTimeout: 60000 - handshakeTimeout: 10000 - requestedHeartbeat: 60 - prefetchCount: 0 - prefetchGlobal: "false" - passive: "false" - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/io-redis-sink.md b/site2/website/versioned_docs/version-2.9.3-deprecated/io-redis-sink.md deleted file mode 100644 index 0caf21bcf62e88..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/io-redis-sink.md +++ /dev/null @@ -1,156 +0,0 @@ ---- -id: io-redis-sink -title: Redis sink connector -sidebar_label: "Redis sink connector" -original_id: io-redis-sink ---- - -The Redis sink connector pulls messages from Pulsar topics -and persists the messages to a Redis database. - - - -## Configuration - -The configuration of the Redis sink connector has the following properties. - - - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `redisHosts` |String|true|" " (empty string) | A comma-separated list of Redis hosts to connect to. | -| `redisPassword` |String|false|" " (empty string) | The password used to connect to Redis. | -| `redisDatabase` | int|true|0 | The Redis database to connect to. | -| `clientMode` |String| false|Standalone | The client mode when interacting with Redis cluster.

    Below are the available options:
  <li>Standalone</li><br/>
  <li>Cluster</li> |
-| `autoReconnect` | boolean|false|true | Whether the Redis client automatically reconnects or not. |
-| `requestQueue` | int|false|2147483647 | The maximum number of queued requests to Redis. |
-| `tcpNoDelay` |boolean| false| false | Whether to enable TCP with no delay or not. |
-| `keepAlive` | boolean|false | false |Whether to enable a keepalive to Redis or not. |
-| `connectTimeout` |long| false|10000 | The time to wait before timing out when connecting, in milliseconds. |
-| `operationTimeout` | long|false|10000 | The time before an operation is marked as timed out, in milliseconds. |
-| `batchTimeMs` | int|false|1000 | The maximum time in milliseconds to wait before flushing buffered writes to Redis. |
-| `batchSize` | int|false|200 | The batch size for writes to the Redis database. |
-
-
-### Example
-
-Before using the Redis sink connector, you need to create a configuration file in the directory from which you start the Pulsar service (that is, `PULSAR_HOME`) through one of the following methods.
-
-* JSON
-
-  ```json
-
-  {
-      "redisHosts": "localhost:6379",
-      "redisPassword": "mypassword",
-      "redisDatabase": "0",
-      "clientMode": "Standalone",
-      "operationTimeout": "2000",
-      "batchSize": "1",
-      "batchTimeMs": "1000",
-      "connectTimeout": "3000"
-  }
-
-  ```
-
-* YAML
-
-  ```yaml
-
-  configs:
-      redisHosts: "localhost:6379"
-      redisPassword: "mypassword"
-      redisDatabase: 0
-      clientMode: "Standalone"
-      operationTimeout: 2000
-      batchSize: 1
-      batchTimeMs: 1000
-      connectTimeout: 3000
-
-  ```
-
-### Usage
-
-This example shows how to write records to a Redis database using the Pulsar Redis connector.
-
-1. Start a Redis server.
-
-   ```bash
-
-   $ docker pull redis:5.0.5
-   $ docker run -d -p 6379:6379 --name my-redis redis:5.0.5 --requirepass "mypassword"
-
-   ```
-
-2. Start a Pulsar service locally in standalone mode.
-
-   ```bash
-
-   $ bin/pulsar standalone
-
-   ```
-
-   Make sure the NAR file is available at `connectors/pulsar-io-redis-@pulsar:version@.nar`.
-
-3. Start the Pulsar Redis connector in local run mode using one of the following methods.
-
-   * Use the **JSON** configuration file as shown previously.
-
-     ```bash
-
-     $ bin/pulsar-admin sinks localrun \
-     --archive connectors/pulsar-io-redis-@pulsar:version@.nar \
-     --tenant public \
-     --namespace default \
-     --name my-redis-sink \
-     --sink-config '{"redisHosts": "localhost:6379","redisPassword": "mypassword","redisDatabase": "0","clientMode": "Standalone","operationTimeout": "3000","batchSize": "1"}' \
-     --inputs my-redis-topic
-
-     ```
-
-   * Use the **YAML** configuration file as shown previously.
-
-     ```bash
-
-     $ bin/pulsar-admin sinks localrun \
-     --archive connectors/pulsar-io-redis-@pulsar:version@.nar \
-     --tenant public \
-     --namespace default \
-     --name my-redis-sink \
-     --sink-config-file redis-sink-config.yaml \
-     --inputs my-redis-topic
-
-     ```
-
-4. Publish records to the topic.
-
-   ```bash
-
-   $ bin/pulsar-client produce \
-     persistent://public/default/my-redis-topic \
-     -k "streaming" \
-     -m "Pulsar"
-
-   ```
-
-5. Start a Redis client in Docker.
-
-   ```bash
-
-   $ docker exec -it my-redis redis-cli -a "mypassword"
-
-   ```
-
- - ``` - - 127.0.0.1:6379> keys * - 1) "streaming" - 127.0.0.1:6379> get "streaming" - "Pulsar" - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/io-solr-sink.md b/site2/website/versioned_docs/version-2.9.3-deprecated/io-solr-sink.md deleted file mode 100644 index df2c3612c38eb6..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/io-solr-sink.md +++ /dev/null @@ -1,65 +0,0 @@ ---- -id: io-solr-sink -title: Solr sink connector -sidebar_label: "Solr sink connector" -original_id: io-solr-sink ---- - -The Solr sink connector pulls messages from Pulsar topics -and persists the messages to Solr collections. - - - -## Configuration - -The configuration of the Solr sink connector has the following properties. - - - -### Property - -| Name | Type|Required | Default | Description -|------|----------|----------|---------|-------------| -| `solrUrl` | String|true|" " (empty string) |
  <li>Comma-separated ZooKeeper hosts with chroot, used in SolrCloud mode.<br/>
    **Example**<br/>
    `localhost:2181,localhost:2182/chroot`</li><br/>
  <li>URL to connect to Solr, used in standalone mode.<br/>
    **Example**<br/>
    `localhost:8983/solr`</li> |
-| `solrMode` | String|true|SolrCloud| The client mode when interacting with the Solr cluster.<br/>

    Below are the available options:
  <li>Standalone</li><br/>
  <li>SolrCloud</li> |
-| `solrCollection` |String|true| " " (empty string) | Solr collection name to which records need to be written. |
-| `solrCommitWithinMs` |int| false|10 | The time, in milliseconds, within which Solr commits updates.|
-| `username` |String|false| " " (empty string) | The username for basic authentication.<br/>

    **Note: `username` is case-sensitive.** |
-| `password` | String|false| " " (empty string) | The password for basic authentication.<br/>
<br/>
    **Note: `password` is case-sensitive.** |
-
-
-### Example
-
-Before using the Solr sink connector, you need to create a configuration file through one of the following methods.
-
-* JSON
-
-  ```json
-
-  {
-      "solrUrl": "localhost:2181,localhost:2182/chroot",
-      "solrMode": "SolrCloud",
-      "solrCollection": "techproducts",
-      "solrCommitWithinMs": 100,
-      "username": "fakeuser",
-      "password": "fake@123"
-  }
-
-  ```
-
-* YAML
-
-  ```yaml
-
-  configs:
-      solrUrl: "localhost:2181,localhost:2182/chroot"
-      solrMode: "SolrCloud"
-      solrCollection: "techproducts"
-      solrCommitWithinMs: 100
-      username: "fakeuser"
-      password: "fake@123"
-
-  ```
-
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/io-twitter-source.md b/site2/website/versioned_docs/version-2.9.3-deprecated/io-twitter-source.md
deleted file mode 100644
index 8de3504dd0fef2..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/io-twitter-source.md
+++ /dev/null
@@ -1,28 +0,0 @@
----
-id: io-twitter-source
-title: Twitter Firehose source connector
-sidebar_label: "Twitter Firehose source connector"
-original_id: io-twitter-source
----
-
-The Twitter Firehose source connector receives tweets from Twitter Firehose and
-writes the tweets to Pulsar topics.
-
-## Configuration
-
-The configuration of the Twitter Firehose source connector has the following properties.
-
-### Property
-
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `consumerKey` | String|true | " " (empty string) | The twitter OAuth consumer key.<br/>

    For more information, see [Access tokens](https://developer.twitter.com/en/docs/basics/authentication/guides/access-tokens). | -| `consumerSecret` | String |true | " " (empty string) | The twitter OAuth consumer secret. | -| `token` | String|true | " " (empty string) | The twitter OAuth token. | -| `tokenSecret` | String|true | " " (empty string) | The twitter OAuth secret. | -| `guestimateTweetTime`|Boolean|false|false|Most firehose events have null createdAt time.

    If `guestimateTweetTime` set to true, the connector estimates the createdTime of each firehose event to be current time. -| `clientName` | String |false | openconnector-twitter-source| The twitter firehose client name. | -| `clientHosts` |String| false | Constants.STREAM_HOST | The twitter firehose hosts to which client connects. | -| `clientBufferSize` | int|false | 50000 | The buffer size for buffering tweets fetched from twitter firehose. | - -> For more information about OAuth credentials, see [Twitter developers portal](https://developer.twitter.com/en.html). diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/io-twitter.md b/site2/website/versioned_docs/version-2.9.3-deprecated/io-twitter.md deleted file mode 100644 index 3b2f6325453c3c..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/io-twitter.md +++ /dev/null @@ -1,7 +0,0 @@ ---- -id: io-twitter -title: Twitter Firehose Connector -sidebar_label: "Twitter Firehose Connector" -original_id: io-twitter ---- - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/io-use.md b/site2/website/versioned_docs/version-2.9.3-deprecated/io-use.md deleted file mode 100644 index da9ed746c4d372..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/io-use.md +++ /dev/null @@ -1,1787 +0,0 @@ ---- -id: io-use -title: How to use Pulsar connectors -sidebar_label: "Use" -original_id: io-use ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -This guide describes how to use Pulsar connectors. - -## Install a connector - -Pulsar bundles several [builtin connectors](io-connectors.md) used to move data in and out of commonly used systems (such as database and messaging system). Optionally, you can create and use your desired non-builtin connectors. - -:::note - -When using a non-builtin connector, you need to specify the path of a archive file for the connector. - -::: - -To set up a builtin connector, follow -the instructions [here](getting-started-standalone.md#installing-builtin-connectors). - -After the setup, the builtin connector is automatically discovered by Pulsar brokers (or function-workers), so no additional installation steps are required. - -## Configure a connector - -You can configure the following information: - -* [Configure a default storage location for a connector](#configure-a-default-storage-location-for-a-connector) - -* [Configure a connector with a YAML file](#configure-a-connector-with-yaml-file) - -### Configure a default storage location for a connector - -To configure a default folder for builtin connectors, set the `connectorsDirectory` parameter in the `./conf/functions_worker.yml` configuration file. - -**Example** - -Set the `./connectors` folder as the default storage location for builtin connectors. - -``` - -######################## -# Connectors -######################## - -connectorsDirectory: ./connectors - -``` - -### Configure a connector with a YAML file - -To configure a connector, you need to provide a YAML configuration file when creating a connector. - -The YAML configuration file tells Pulsar where to locate connectors and how to connect connectors with Pulsar topics. 
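-
-As a general sketch, a connector YAML file combines a few common top-level fields with a connector-specific `configs` block. The values below are placeholders, not a runnable configuration; the file path and topic names are hypothetical, and the concrete examples that follow show real `configs` blocks:
-
-```yaml
-
-tenant: public
-namespace: default
-name: my-connector                      # name of this connector instance
-archive: connectors/my-connector.nar    # hypothetical path to the connector archive
-inputs:                                 # sinks only: topics to read from
-  - my-input-topic
-configs:                                # connector-specific settings
-  key: "value"
-
-```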
- -**Example 1** - -Below is a YAML configuration file of a Cassandra sink, which tells Pulsar: - -* Which Cassandra cluster to connect - -* What is the `keyspace` and `columnFamily` to be used in Cassandra for collecting data - -* How to map Pulsar messages into Cassandra table key and columns - -```shell - -tenant: public -namespace: default -name: cassandra-test-sink -... -# cassandra specific config -configs: - roots: "localhost:9042" - keyspace: "pulsar_test_keyspace" - columnFamily: "pulsar_test_table" - keyname: "key" - columnName: "col" - -``` - -**Example 2** - -Below is a YAML configuration file of a Kafka source. - -```shell - -configs: - bootstrapServers: "pulsar-kafka:9092" - groupId: "test-pulsar-io" - topic: "my-topic" - sessionTimeoutMs: "10000" - autoCommitEnabled: "false" - -``` - -**Example 3** - -Below is a YAML configuration file of a PostgreSQL JDBC sink. - -```shell - -configs: - userName: "postgres" - password: "password" - jdbcUrl: "jdbc:postgresql://localhost:5432/test_jdbc" - tableName: "test_jdbc" - -``` - -## Get available connectors - -Before starting using connectors, you can perform the following operations: - -* [Reload connectors](#reload) - -* [Get a list of available connectors](#get-available-connectors) - -### `reload` - -If you add or delete a nar file in a connector folder, reload the available builtin connector before using it. - -#### Source - -Use the `reload` subcommand. - -```shell - -$ pulsar-admin sources reload - -``` - -For more information, see [`here`](io-cli.md#reload). - -#### Sink - -Use the `reload` subcommand. - -```shell - -$ pulsar-admin sinks reload - -``` - -For more information, see [`here`](io-cli.md#reload-1). - -### `available` - -After reloading connectors (optional), you can get a list of available connectors. - -#### Source - -Use the `available-sources` subcommand. - -```shell - -$ pulsar-admin sources available-sources - -``` - -#### Sink - -Use the `available-sinks` subcommand. - -```shell - -$ pulsar-admin sinks available-sinks - -``` - -## Run a connector - -To run a connector, you can perform the following operations: - -* [Create a connector](#create) - -* [Start a connector](#start) - -* [Run a connector locally](#localrun) - -### `create` - -You can create a connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Create a source connector. - -````mdx-code-block - - - - -Use the `create` subcommand. - -``` - -$ pulsar-admin sources create options - -``` - -For more information, see [here](io-cli.md#create). - - - - -Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/registerSource?version=@pulsar:version_number@} - - - - -* Create a source connector with a **local file**. - - ```java - - void createSource(SourceConfig sourceConfig, - String fileName) - throws PulsarAdminException - - ``` - - **Parameter** - - |Name|Description - |---|--- - `sourceConfig` | The source configuration object - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`createSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#createSource-SourceConfig-java.lang.String-). - -* Create a source connector using a **remote file** with a URL from which fun-pkg can be downloaded. - - ```java - - void createSourceWithUrl(SourceConfig sourceConfig, - String pkgUrl) - throws PulsarAdminException - - ``` - - Supported URLs are `http` and `file`. 
- - **Example** - - * HTTP: http://www.repo.com/fileName.jar - - * File: file:///dir/fileName.jar - - **Parameter** - - Parameter| Description - |---|--- - `sourceConfig` | The source configuration object - `pkgUrl` | URL from which pkg can be downloaded - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`createSourceWithUrl`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#createSourceWithUrl-SourceConfig-java.lang.String-). - - - - -```` - -#### Sink - -Create a sink connector. - -````mdx-code-block - - - - -Use the `create` subcommand. - -``` - -$ pulsar-admin sinks create options - -``` - -For more information, see [here](io-cli.md#create-1). - - - - -Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sinkName|operation/registerSink?version=@pulsar:version_number@} - - - - -* Create a sink connector with a **local file**. - - ```java - - void createSink(SinkConfig sinkConfig, - String fileName) - throws PulsarAdminException - - ``` - - **Parameter** - - |Name|Description - |---|--- - `sinkConfig` | The sink configuration object - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`createSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#createSink-SinkConfig-java.lang.String-). - -* Create a sink connector using a **remote file** with a URL from which fun-pkg can be downloaded. - - ```java - - void createSinkWithUrl(SinkConfig sinkConfig, - String pkgUrl) - throws PulsarAdminException - - ``` - - Supported URLs are `http` and `file`. - - **Example** - - * HTTP: http://www.repo.com/fileName.jar - - * File: file:///dir/fileName.jar - - **Parameter** - - Parameter| Description - |---|--- - `sinkConfig` | The sink configuration object - `pkgUrl` | URL from which pkg can be downloaded - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`createSinkWithUrl`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#createSinkWithUrl-SinkConfig-java.lang.String-). - - - - -```` - -### `start` - -You can start a connector using **Admin CLI** or **REST API**. - -#### Source - -Start a source connector. - -````mdx-code-block - - - - -Use the `start` subcommand. - -``` - -$ pulsar-admin sources start options - -``` - -For more information, see [here](io-cli.md#start). - - - - -* Start **all** source connectors. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/start|operation/startSource?version=@pulsar:version_number@} - -* Start a **specified** source connector. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/:instanceId/start|operation/startSource?version=@pulsar:version_number@} - - - - -```` - -#### Sink - -Start a sink connector. - -````mdx-code-block - - - - -Use the `start` subcommand. - -``` - -$ pulsar-admin sinks start options - -``` - -For more information, see [here](io-cli.md#start-1). - - - - -* Start **all** sink connectors. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sinkName/start|operation/startSink?version=@pulsar:version_number@} - -* Start a **specified** sink connector. 
- - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sourceName/:instanceId/start|operation/startSink?version=@pulsar:version_number@} - - - - -```` - -### `localrun` - -You can run a connector locally rather than deploying it on a Pulsar cluster using **Admin CLI**. - -#### Source - -Run a source connector locally. - -````mdx-code-block - - - - -Use the `localrun` subcommand. - -``` - -$ pulsar-admin sources localrun options - -``` - -For more information, see [here](io-cli.md#localrun). - - - - -```` - -#### Sink - -Run a sink connector locally. - -````mdx-code-block - - - - -Use the `localrun` subcommand. - -``` - -$ pulsar-admin sinks localrun options - -``` - -For more information, see [here](io-cli.md#localrun-1). - - - - -```` - -## Monitor a connector - -To monitor a connector, you can perform the following operations: - -* [Get the information of a connector](#get) - -* [Get the list of all running connectors](#list) - -* [Get the current status of a connector](#status) - -### `get` - -You can get the information of a connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Get the information of a source connector. - -````mdx-code-block - - - - -Use the `get` subcommand. - -``` - -$ pulsar-admin sources get options - -``` - -For more information, see [here](io-cli.md#get). - - - - -Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/getSourceInfo?version=@pulsar:version_number@} - - - - -```java - -SourceConfig getSource(String tenant, - String namespace, - String source) - throws PulsarAdminException - -``` - -**Example** - -This is a sourceConfig. - -```java - -{ - "tenant": "tenantName", - "namespace": "namespaceName", - "name": "sourceName", - "className": "className", - "topicName": "topicName", - "configs": {}, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "resources": { - "cpu": 1.0, - "ram": 1073741824, - "disk": 10737418240 - } -} - -``` - -This is a sourceConfig example. - -``` - -{ - "tenant": "public", - "namespace": "default", - "name": "debezium-mysql-source", - "className": "org.apache.pulsar.io.debezium.mysql.DebeziumMysqlSource", - "topicName": "debezium-mysql-topic", - "configs": { - "database.user": "debezium", - "database.server.id": "184054", - "database.server.name": "dbserver1", - "database.port": "3306", - "database.hostname": "localhost", - "database.password": "dbz", - "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650", - "value.converter": "org.apache.kafka.connect.json.JsonConverter", - "database.whitelist": "inventory", - "key.converter": "org.apache.kafka.connect.json.JsonConverter", - "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory", - "pulsar.service.url": "pulsar://127.0.0.1:6650", - "database.history.pulsar.topic": "history-topic2" - }, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "resources": { - "cpu": 1.0, - "ram": 1073741824, - "disk": 10737418240 - } -} - -``` - -**Exception** - -Exception name | Description -|---|--- -`PulsarAdminException.NotAuthorizedException` | You don't have the admin permission -`PulsarAdminException.NotFoundException` | Cluster doesn't exist -`PulsarAdminException` | Unexpected error - -For more information, see [`getSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#getSource-java.lang.String-java.lang.String-java.lang.String-). 
- - - - -```` - -#### Sink - -Get the information of a sink connector. - -````mdx-code-block - - - - -Use the `get` subcommand. - -``` - -$ pulsar-admin sinks get options - -``` - -For more information, see [here](io-cli.md#get-1). - - - - -Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sinks/:tenant/:namespace/:sinkName|operation/getSinkInfo?version=@pulsar:version_number@} - - - - -```java - -SinkConfig getSink(String tenant, - String namespace, - String sink) - throws PulsarAdminException - -``` - -**Example** - -This is a sinkConfig. - -```json - -{ -"tenant": "tenantName", -"namespace": "namespaceName", -"name": "sinkName", -"className": "className", -"inputSpecs": { -"topicName": { - "isRegexPattern": false -} -}, -"configs": {}, -"parallelism": 1, -"processingGuarantees": "ATLEAST_ONCE", -"retainOrdering": false, -"autoAck": true -} - -``` - -This is a sinkConfig example. - -```json - -{ - "tenant": "public", - "namespace": "default", - "name": "pulsar-postgres-jdbc-sink", - "className": "org.apache.pulsar.io.jdbc.PostgresJdbcAutoSchemaSink", - "inputSpecs": { - "pulsar-postgres-jdbc-sink-topic": { - "isRegexPattern": false - } - }, - "configs": { - "password": "password", - "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink", - "userName": "postgres", - "tableName": "pulsar_postgres_jdbc_sink" - }, - "parallelism": 1, - "processingGuarantees": "ATLEAST_ONCE", - "retainOrdering": false, - "autoAck": true -} - -``` - -**Parameter description** - -Name| Description -|---|--- -`tenant` | Tenant name -`namespace` | Namespace name -`sink` | Sink name - -For more information, see [`getSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#getSink-java.lang.String-java.lang.String-java.lang.String-). - - - - -```` - -### `list` - -You can get the list of all running connectors using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Get the list of all running source connectors. - -````mdx-code-block - - - - -Use the `list` subcommand. - -``` - -$ pulsar-admin sources list options - -``` - -For more information, see [here](io-cli.md#list). - - - - -Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sources/:tenant/:namespace|operation/listSources?version=@pulsar:version_number@} - - - - -```java - -List listSources(String tenant, - String namespace) - throws PulsarAdminException - -``` - -**Response example** - -```java - -["f1", "f2", "f3"] - -``` - -**Exception** - -Exception name | Description -|---|--- -`PulsarAdminException.NotAuthorizedException` | You don't have the admin permission -`PulsarAdminException` | Unexpected error - -For more information, see [`listSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#listSources-java.lang.String-java.lang.String-). - - - - -```` - -#### Sink - -Get the list of all running sink connectors. - -````mdx-code-block - - - - -Use the `list` subcommand. - -``` - -$ pulsar-admin sinks list options - -``` - -For more information, see [here](io-cli.md#list-1). 
- - - - -Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sinks/:tenant/:namespace|operation/listSinks?version=@pulsar:version_number@} - - - - -```java - -List listSinks(String tenant, - String namespace) - throws PulsarAdminException - -``` - -**Response example** - -```java - -["f1", "f2", "f3"] - -``` - -**Exception** - -Exception name | Description -|---|--- -`PulsarAdminException.NotAuthorizedException` | You don't have the admin permission -`PulsarAdminException` | Unexpected error - -For more information, see [`listSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#listSinks-java.lang.String-java.lang.String-). - - - - -```` - -### `status` - -You can get the current status of a connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Get the current status of a source connector. - -````mdx-code-block - - - - -Use the `status` subcommand. - -``` - -$ pulsar-admin sources status options - -``` - -For more information, see [here](io-cli.md#status). - - - - -* Get the current status of **all** source connectors. - - Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sources/:tenant/:namespace/:sourceName/status|operation/getSourceStatus?version=@pulsar:version_number@} - -* Gets the current status of a **specified** source connector. - - Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sources/:tenant/:namespace/:sourceName/:instanceId/status|operation/getSourceStatus?version=@pulsar:version_number@} - - - - -* Get the current status of **all** source connectors. - - ```java - - SourceStatus getSourceStatus(String tenant, - String namespace, - String source) - throws PulsarAdminException - - ``` - - **Parameter** - - Parameter| Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `sink` | Source name - - **Exception** - - Name | Description - |---|--- - `PulsarAdminException` | Unexpected error - - For more information, see [`getSourceStatus`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#getSource-java.lang.String-java.lang.String-java.lang.String-). - -* Gets the current status of a **specified** source connector. - - ```java - - SourceStatus.SourceInstanceStatus.SourceInstanceStatusData getSourceStatus(String tenant, - String namespace, - String source, - int id) - throws PulsarAdminException - - ``` - - **Parameter** - - Parameter| Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `sink` | Source name - `id` | Source instanceID - - **Exception** - - Exception name | Description - |---|--- - `PulsarAdminException` | Unexpected error - - For more information, see [`getSourceStatus`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#getSourceStatus-java.lang.String-java.lang.String-java.lang.String-int-). - - - - -```` - -#### Sink - -Get the current status of a Pulsar sink connector. - -````mdx-code-block - - - - -Use the `status` subcommand. - -``` - -$ pulsar-admin sinks status options - -``` - -For more information, see [here](io-cli.md#status-1). - - - - -* Get the current status of **all** sink connectors. - - Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sinks/:tenant/:namespace/:sinkName/status|operation/getSinkStatus?version=@pulsar:version_number@} - -* Gets the current status of a **specified** sink connector. 
- - Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sinks/:tenant/:namespace/:sourceName/:instanceId/status|operation/getSinkInstanceStatus?version=@pulsar:version_number@} - - - - -* Get the current status of **all** sink connectors. - - ```java - - SinkStatus getSinkStatus(String tenant, - String namespace, - String sink) - throws PulsarAdminException - - ``` - - **Parameter** - - Parameter| Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `sink` | Source name - - **Exception** - - Exception name | Description - |---|--- - `PulsarAdminException` | Unexpected error - - For more information, see [`getSinkStatus`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#getSinkStatus-java.lang.String-java.lang.String-java.lang.String-). - -* Gets the current status of a **specified** source connector. - - ```java - - SinkStatus.SinkInstanceStatus.SinkInstanceStatusData getSinkStatus(String tenant, - String namespace, - String sink, - int id) - throws PulsarAdminException - - ``` - - **Parameter** - - Parameter| Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `sink` | Source name - `id` | Sink instanceID - - **Exception** - - Exception name | Description - |---|--- - `PulsarAdminException` | Unexpected error - - For more information, see [`getSinkStatusWithInstanceID`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#getSinkStatus-java.lang.String-java.lang.String-java.lang.String-int-). - - - - -```` - -## Update a connector - -### `update` - -You can update a running connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Update a running Pulsar source connector. - -````mdx-code-block - - - - -Use the `update` subcommand. - -``` - -$ pulsar-admin sources update options - -``` - -For more information, see [here](io-cli.md#update). - - - - -Send a `PUT` request to this endpoint: {@inject: endpoint|PUT|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/updateSource?version=@pulsar:version_number@} - - - - -* Update a running source connector with a **local file**. - - ```java - - void updateSource(SourceConfig sourceConfig, - String fileName) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - |`sourceConfig` | The source configuration object - - **Exception** - - |Name|Description| - |---|--- - |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission - | `PulsarAdminException.NotFoundException` | Cluster doesn't exist - | `PulsarAdminException` | Unexpected error - - For more information, see [`updateSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#updateSource-SourceConfig-java.lang.String-). - -* Update a source connector using a **remote file** with a URL from which fun-pkg can be downloaded. - - ```java - - void updateSourceWithUrl(SourceConfig sourceConfig, - String pkgUrl) - throws PulsarAdminException - - ``` - - Supported URLs are `http` and `file`. 
- - **Example** - - * HTTP: http://www.repo.com/fileName.jar - - * File: file:///dir/fileName.jar - - **Parameter** - - | Name | Description - |---|--- - | `sourceConfig` | The source configuration object - | `pkgUrl` | URL from which pkg can be downloaded - - **Exception** - - |Name|Description| - |---|--- - |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission - | `PulsarAdminException.NotFoundException` | Cluster doesn't exist - | `PulsarAdminException` | Unexpected error - -For more information, see [`createSourceWithUrl`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#updateSourceWithUrl-SourceConfig-java.lang.String-). - - - - -```` - -#### Sink - -Update a running Pulsar sink connector. - -````mdx-code-block - - - - -Use the `update` subcommand. - -``` - -$ pulsar-admin sinks update options - -``` - -For more information, see [here](io-cli.md#update-1). - - - - -Send a `PUT` request to this endpoint: {@inject: endpoint|PUT|/admin/v3/sinks/:tenant/:namespace/:sinkName|operation/updateSink?version=@pulsar:version_number@} - - - - -* Update a running sink connector with a **local file**. - - ```java - - void updateSink(SinkConfig sinkConfig, - String fileName) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - |`sinkConfig` | The sink configuration object - - **Exception** - - |Name|Description| - |---|--- - |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission - | `PulsarAdminException.NotFoundException` | Cluster doesn't exist - | `PulsarAdminException` | Unexpected error - - For more information, see [`updateSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#updateSink-SinkConfig-java.lang.String-). - -* Update a sink connector using a **remote file** with a URL from which fun-pkg can be downloaded. - - ```java - - void updateSinkWithUrl(SinkConfig sinkConfig, - String pkgUrl) - throws PulsarAdminException - - ``` - - Supported URLs are `http` and `file`. - - **Example** - - * HTTP: http://www.repo.com/fileName.jar - - * File: file:///dir/fileName.jar - - **Parameter** - - | Name | Description - |---|--- - | `sinkConfig` | The sink configuration object - | `pkgUrl` | URL from which pkg can be downloaded - - **Exception** - - |Name|Description| - |---|--- - |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission - |`PulsarAdminException.NotFoundException` | Cluster doesn't exist - |`PulsarAdminException` | Unexpected error - -For more information, see [`updateSinkWithUrl`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#updateSinkWithUrl-SinkConfig-java.lang.String-). - - - - -```` - -## Stop a connector - -### `stop` - -You can stop a connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Stop a source connector. - -````mdx-code-block - - - - -Use the `stop` subcommand. - -``` - -$ pulsar-admin sources stop options - -``` - -For more information, see [here](io-cli.md#stop). - - - - -* Stop **all** source connectors. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/stopSource?version=@pulsar:version_number@} - -* Stop a **specified** source connector. 
- - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/:instanceId|operation/stopSource?version=@pulsar:version_number@} - - - - -* Stop **all** source connectors. - - ```java - - void stopSource(String tenant, - String namespace, - String source) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `source` | Source name - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`stopSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#stopSource-java.lang.String-java.lang.String-java.lang.String-). - -* Stop a **specified** source connector. - - ```java - - void stopSource(String tenant, - String namespace, - String source, - int instanceId) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `source` | Source name - `instanceId` | Source instanceID - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`stopSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#stopSource-java.lang.String-java.lang.String-java.lang.String-int-). - - - - -```` - -#### Sink - -Stop a sink connector. - -````mdx-code-block - - - - -Use the `stop` subcommand. - -``` - -$ pulsar-admin sinks stop options - -``` - -For more information, see [here](io-cli.md#stop-1). - - - - -* Stop **all** sink connectors. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sinkName/stop|operation/stopSink?version=@pulsar:version_number@} - -* Stop a **specified** sink connector. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sinkeName/:instanceId/stop|operation/stopSink?version=@pulsar:version_number@} - - - - -* Stop **all** sink connectors. - - ```java - - void stopSink(String tenant, - String namespace, - String sink) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `source` | Source name - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`stopSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#stopSink-java.lang.String-java.lang.String-java.lang.String-). - -* Stop a **specified** sink connector. - - ```java - - void stopSink(String tenant, - String namespace, - String sink, - int instanceId) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `source` | Source name - `instanceId` | Source instanceID - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`stopSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#stopSink-java.lang.String-java.lang.String-java.lang.String-int-). - - - - -```` - -## Restart a connector - -### `restart` - -You can restart a connector using **Admin CLI**, **REST API** or **JAVA admin API**. - -#### Source - -Restart a source connector. - -````mdx-code-block - - - - -Use the `restart` subcommand. 
- -``` - -$ pulsar-admin sources restart options - -``` - -For more information, see [here](io-cli.md#restart). - - - - -* Restart **all** source connectors. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/restart|operation/restartSource?version=@pulsar:version_number@} - -* Restart a **specified** source connector. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/:instanceId/restart|operation/restartSource?version=@pulsar:version_number@} - - - - -* Restart **all** source connectors. - - ```java - - void restartSource(String tenant, - String namespace, - String source) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `source` | Source name - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`restartSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#restartSource-java.lang.String-java.lang.String-java.lang.String-). - -* Restart a **specified** source connector. - - ```java - - void restartSource(String tenant, - String namespace, - String source, - int instanceId) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `source` | Source name - `instanceId` | Source instanceID - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`restartSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#restartSource-java.lang.String-java.lang.String-java.lang.String-int-). - - - - -```` - -#### Sink - -Restart a sink connector. - -````mdx-code-block - - - - -Use the `restart` subcommand. - -``` - -$ pulsar-admin sinks restart options - -``` - -For more information, see [here](io-cli.md#restart-1). - - - - -* Restart **all** sink connectors. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sinkName/restart|operation/restartSource?version=@pulsar:version_number@} - -* Restart a **specified** sink connector. - - Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sinkName/:instanceId/restart|operation/restartSource?version=@pulsar:version_number@} - - - - -* Restart all Pulsar sink connectors. - - ```java - - void restartSink(String tenant, - String namespace, - String sink) - throws PulsarAdminException - - ``` - - **Parameter** - - | Name | Description - |---|--- - `tenant` | Tenant name - `namespace` | Namespace name - `sink` | Sink name - - **Exception** - - |Name|Description| - |---|--- - | `PulsarAdminException` | Unexpected error - - For more information, see [`restartSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#restartSink-java.lang.String-java.lang.String-java.lang.String-). - -* Restart a **specified** sink connector. 
-
-  ```java
-
-  void restartSink(String tenant,
-                   String namespace,
-                   String sink,
-                   int instanceId)
-            throws PulsarAdminException
-
-  ```
-
-  **Parameter**
-
-  | Name | Description
-  |---|---
-  `tenant` | Tenant name
-  `namespace` | Namespace name
-  `sink` | Sink name
-  `instanceId` | Sink instance ID
-
-  **Exception**
-
-  |Name|Description|
-  |---|---
-  | `PulsarAdminException` | Unexpected error
-
-  For more information, see [`restartSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#restartSink-java.lang.String-java.lang.String-java.lang.String-int-).
-
-
-
-
-````
-
-## Delete a connector
-
-### `delete`
-
-You can delete a connector using the **Admin CLI**, **REST API**, or **Java admin API**.
-
-#### Source
-
-Delete a source connector.
-
-````mdx-code-block
-
-
-
-
-Use the `delete` subcommand.
-
-```
-
-$ pulsar-admin sources delete options
-
-```
-
-For more information, see [here](io-cli.md#delete).
-
-
-
-
-Delete a source connector.
-
-Send a `DELETE` request to this endpoint: {@inject: endpoint|DELETE|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/deregisterSource?version=@pulsar:version_number@}
-
-
-
-
-Delete a source connector.
-
-```java
-
-void deleteSource(String tenant,
-                  String namespace,
-                  String source)
-           throws PulsarAdminException
-
-```
-
-**Parameter**
-
-| Name | Description
-|---|---
-`tenant` | Tenant name
-`namespace` | Namespace name
-`source` | Source name
-
-**Exception**
-
-|Name|Description|
-|---|---
-|`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission
-| `PulsarAdminException.NotFoundException` | Source doesn't exist
-| `PulsarAdminException.PreconditionFailedException` | The request precondition isn't met
-| `PulsarAdminException` | Unexpected error
-
-For more information, see [`deleteSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#deleteSource-java.lang.String-java.lang.String-java.lang.String-).
-
-
-
-
-````
-
-#### Sink
-
-Delete a sink connector.
-
-````mdx-code-block
-
-
-
-
-Use the `delete` subcommand.
-
-```
-
-$ pulsar-admin sinks delete options
-
-```
-
-For more information, see [here](io-cli.md#delete-1).
-
-
-
-
-Delete a sink connector.
-
-Send a `DELETE` request to this endpoint: {@inject: endpoint|DELETE|/admin/v3/sinks/:tenant/:namespace/:sinkName|operation/deregisterSink?version=@pulsar:version_number@}
-
-
-
-
-Delete a Pulsar sink connector.
-
-```java
-
-void deleteSink(String tenant,
-                String namespace,
-                String sink)
-         throws PulsarAdminException
-
-```
-
-**Parameter**
-
-| Name | Description
-|---|---
-`tenant` | Tenant name
-`namespace` | Namespace name
-`sink` | Sink name
-
-**Exception**
-
-|Name|Description|
-|---|---
-|`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission
-| `PulsarAdminException.NotFoundException` | Sink doesn't exist
-| `PulsarAdminException.PreconditionFailedException` | The request precondition isn't met
-| `PulsarAdminException` | Unexpected error
-
-For more information, see [`deleteSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#deleteSink-java.lang.String-java.lang.String-java.lang.String-).
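-
-As a minimal end-to-end sketch, the Java admin API calls documented above can be combined as follows. The service URL and connector names here are illustrative placeholders, not values mandated by this guide:
-
-```java
-
-import org.apache.pulsar.client.admin.PulsarAdmin;
-
-public class DeleteSinkExample {
-    public static void main(String[] args) throws Exception {
-        // Build an admin client against the broker's HTTP service (placeholder URL).
-        try (PulsarAdmin admin = PulsarAdmin.builder()
-                .serviceHttpUrl("http://localhost:8080")
-                .build()) {
-            // Delete the example sink used earlier in this guide.
-            admin.sinks().deleteSink("public", "default", "pulsar-postgres-jdbc-sink");
-        }
-    }
-}
-
-```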
- - - - -```` diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/performance-pulsar-perf.md b/site2/website/versioned_docs/version-2.9.3-deprecated/performance-pulsar-perf.md deleted file mode 100644 index 7f45498604536c..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/performance-pulsar-perf.md +++ /dev/null @@ -1,229 +0,0 @@ ---- -id: performance-pulsar-perf -title: Pulsar Perf -sidebar_label: "Pulsar Perf" -original_id: performance-pulsar-perf ---- - -The Pulsar Perf is a built-in performance test tool for Apache Pulsar. You can use the Pulsar Perf to test message writing or reading performance. For detailed information about performance tuning, see [here](https://streamnative.io/en/blog/tech/2021-01-14-pulsar-architecture-performance-tuning). - -## Produce messages - -This example shows how the Pulsar Perf produces messages with default options. For all configuration options available for the `pulsar-perf produce` command, see [configuration options](#configuration-options-for-pulsar-perf-produce). - -``` - -bin/pulsar-perf produce my-topic - -``` - -After the command is executed, the test data is continuously output on the Console. - -**Output** - -``` - -19:53:31.459 [pulsar-perf-producer-exec-1-1] INFO org.apache.pulsar.testclient.PerformanceProducer - Created 1 producers -19:53:31.482 [pulsar-timer-5-1] WARN com.scurrilous.circe.checksum.Crc32cIntChecksum - Failed to load Circe JNI library. Falling back to Java based CRC32c provider -19:53:40.861 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 93.7 msg/s --- 0.7 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.575 ms - med: 3.460 - 95pct: 4.790 - 99pct: 5.308 - 99.9pct: 5.834 - 99.99pct: 6.609 - Max: 6.609 -19:53:50.909 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.437 ms - med: 3.328 - 95pct: 4.656 - 99pct: 5.071 - 99.9pct: 5.519 - 99.99pct: 5.588 - Max: 5.588 -19:54:00.926 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.376 ms - med: 3.276 - 95pct: 4.520 - 99pct: 4.939 - 99.9pct: 5.440 - 99.99pct: 5.490 - Max: 5.490 -19:54:10.940 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.298 ms - med: 3.220 - 95pct: 4.474 - 99pct: 4.926 - 99.9pct: 5.645 - 99.99pct: 5.654 - Max: 5.654 -19:54:20.956 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.1 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.308 ms - med: 3.199 - 95pct: 4.532 - 99pct: 4.871 - 99.9pct: 5.291 - 99.99pct: 5.323 - Max: 5.323 -19:54:30.972 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.249 ms - med: 3.144 - 95pct: 4.437 - 99pct: 4.970 - 99.9pct: 5.329 - 99.99pct: 5.414 - Max: 5.414 -19:54:40.987 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 100.0 msg/s --- 0.8 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 3.435 ms - med: 3.361 - 95pct: 4.772 - 99pct: 5.150 - 99.9pct: 5.373 - 99.99pct: 5.837 - Max: 5.837 -^C19:54:44.325 [Thread-1] INFO org.apache.pulsar.testclient.PerformanceProducer - Aggregated throughput stats --- 7286 records sent --- 99.140 msg/s --- 0.775 Mbit/s -19:54:44.336 
[Thread-1] INFO org.apache.pulsar.testclient.PerformanceProducer - Aggregated latency stats --- Latency: mean: 3.383 ms - med: 3.293 - 95pct: 4.610 - 99pct: 5.059 - 99.9pct: 5.588 - 99.99pct: 5.837 - 99.999pct: 6.609 - Max: 6.609 - -``` - -From the above test data, you can get the throughput statistics and the write latency statistics. The aggregated statistics is printed when the Pulsar Perf is stopped. You can press **Ctrl**+**C** to stop the Pulsar Perf. If you specify a filename with the `--histogram-file` parameter, a file with the [HdrHistogram](http://hdrhistogram.github.io/HdrHistogram/) formatted test result appears under your directory after Pulsar Perf is stopped. You can also check the test result through [HdrHistogram Plotter](https://hdrhistogram.github.io/HdrHistogram/plotFiles.html). For details about how to check the test result through [HdrHistogram Plotter](https://hdrhistogram.github.io/HdrHistogram/plotFiles.html), see [HdrHistogram Plotter](#hdrhistogram-plotter). - -### Configuration options for `pulsar-perf produce` - -You can get all options by executing the `bin/pulsar-perf produce -h` command. Therefore, you can modify these options as required. - -The following table lists configuration options available for the `pulsar-perf produce` command. - -| Option | Description | Default value| -|----|----|----| -| access-mode | Set the producer access mode. Valid values are `Shared`, `Exclusive` and `WaitForExclusive`. | Shared | -| admin-url | Set the Pulsar admin URL. | N/A | -| auth-params | Set the authentication parameters, whose format is determined by the implementation of the `configure` method in the authentication plugin class, such as "key1:val1,key2:val2" or "{"key1":"val1","key2":"val2"}". | N/A | -| auth-plugin | Set the authentication plugin class name. | N/A | -| listener-name | Set the listener name for the broker. | N/A | -| batch-max-bytes | Set the maximum number of bytes for each batch. | 4194304 | -| batch-max-messages | Set the maximum number of messages for each batch. | 1000 | -| batch-time-window | Set a window for a batch of messages. | 1 ms | -| busy-wait | Enable or disable Busy-Wait on the Pulsar client. | false | -| chunking | Configure whether to split the message and publish in chunks if message size is larger than allowed max size. | false | -| compression | Compress the message payload. | N/A | -| conf-file | Set the configuration file. | N/A | -| delay | Mark messages with a given delay. | 0s | -| encryption-key-name | Set the name of the public key used to encrypt the payload. | N/A | -| encryption-key-value-file | Set the file which contains the public key used to encrypt the payload. | N/A | -| exit-on-failure | Configure whether to exit from the process on publish failure. | false | -| format-class | Set the custom formatter class name. | org.apache.pulsar.testclient.DefaultMessageFormatter | -| format-payload | Configure whether to format %i as a message index in the stream from producer and/or %t as the timestamp nanoseconds. | false | -| help | Configure the help message. | false | -| histogram-file | HdrHistogram output file | N/A | -| max-connections | Set the maximum number of TCP connections to a single broker. | 100 | -| max-outstanding | Set the maximum number of outstanding messages. | 1000 | -| max-outstanding-across-partitions | Set the maximum number of outstanding messages across partitions. | 50000 | -| message-key-generation-mode | Set the generation mode of message key. Valid options are `autoIncrement`, `random`. 
| N/A | -| num-io-threads | Set the number of threads to be used for handling connections to brokers. | 1 | -| num-messages | Set the number of messages to be published in total. If it is set to 0, it keeps publishing messages. | 0 | -| num-producers | Set the number of producers for each topic. | 1 | -| num-test-threads | Set the number of test threads. | 1 | -| num-topic | Set the number of topics. | 1 | -| partitions | Configure whether to create partitioned topics with the given number of partitions. | N/A | -| payload-delimiter | Set the delimiter used to split lines when using payload from a file. | \n | -| payload-file | Use the payload from an UTF-8 encoded text file and a payload is randomly selected when messages are published. | N/A | -| producer-name | Set the producer name. | N/A | -| rate | Set the publish rate of messages across topics. | 100 | -| send-timeout | Set the sendTimeout. | 0 | -| separator | Set the separator between the topic and topic number. | - | -| service-url | Set the Pulsar service URL. | | -| size | Set the message size. | 1024 bytes | -| stats-interval-seconds | Set the statistics interval. If it is set to 0, statistics is disabled. | 0 | -| test-duration | Set the test duration. If it is set to 0, it keeps publishing tests. | 0s | -| trust-cert-file | Set the path for the trusted TLS certificate file. | | | -| warmup-time | Set the warm-up time. | 1s | -| tls-allow-insecure | Set the allowed insecure TLS connection. | N/A | - -## Consume messages - -This example shows how the Pulsar Perf consumes messages with default options. - -``` - -bin/pulsar-perf consume my-topic - -``` - -After the command is executed, the test data is continuously output on the Console. - -**Output** - -``` - -20:35:37.071 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Start receiving from 1 consumers on 1 topics -20:35:41.150 [pulsar-client-io-1-9] WARN com.scurrilous.circe.checksum.Crc32cIntChecksum - Failed to load Circe JNI library. 
Falling back to Java based CRC32c provider -20:35:47.092 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 59.572 msg/s -- 0.465 Mbit/s --- Latency: mean: 11.298 ms - med: 10 - 95pct: 15 - 99pct: 98 - 99.9pct: 137 - 99.99pct: 152 - Max: 152 -20:35:57.104 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 99.958 msg/s -- 0.781 Mbit/s --- Latency: mean: 9.176 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 18 - Max: 18 -20:36:07.115 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 100.006 msg/s -- 0.781 Mbit/s --- Latency: mean: 9.316 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 17 - Max: 17 -20:36:17.125 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 100.085 msg/s -- 0.782 Mbit/s --- Latency: mean: 9.327 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 17 - Max: 17 -20:36:27.136 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 99.900 msg/s -- 0.780 Mbit/s --- Latency: mean: 9.404 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 17 - Max: 17 -20:36:37.147 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 99.985 msg/s -- 0.781 Mbit/s --- Latency: mean: 8.998 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 17 - Max: 17 -^C20:36:42.755 [Thread-1] INFO org.apache.pulsar.testclient.PerformanceConsumer - Aggregated throughput stats --- 6051 records received --- 92.125 msg/s --- 0.720 Mbit/s -20:36:42.759 [Thread-1] INFO org.apache.pulsar.testclient.PerformanceConsumer - Aggregated latency stats --- Latency: mean: 9.422 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 98 - 99.99pct: 137 - 99.999pct: 152 - Max: 152 - -``` - -From the output test data, you can get the throughput statistics and the end-to-end latency statistics. The aggregated statistics is printed after the Pulsar Perf is stopped. You can press **Ctrl**+**C** to stop the Pulsar Perf. - -### Configuration options for `pulsar-perf consume` - -You can get all options by executing the `bin/pulsar-perf consume -h` command. Therefore, you can modify these options as required. - -The following table lists configuration options available for the `pulsar-perf consume` command. - -| Option | Description | Default value | -|----|----|----| -| acks-delay-millis | Set the acknowledgment grouping delay in milliseconds. | 100 ms | -| auth-params | Set the authentication parameters, whose format is determined by the implementation of the `configure` method in the authentication plugin class, such as "key1:val1,key2:val2" or "{"key1":"val1","key2":"val2"}". | N/A | -| auth-plugin | Set the authentication plugin class name. | N/A | -| auto_ack_chunk_q_full | Configure whether to automatically ack for the oldest message in receiver queue if the queue is full. | false | -| listener-name | Set the listener name for the broker. | N/A | -| batch-index-ack | Enable or disable the batch index acknowledgment. | false | -| busy-wait | Enable or disable Busy-Wait on the Pulsar client. | false | -| conf-file | Set the configuration file. | N/A | -| encryption-key-name | Set the name of the public key used to encrypt the payload. | N/A | -| encryption-key-value-file | Set the file which contains the public key used to encrypt the payload. | N/A | -| help | Configure the help message. 
| false | -| histogram-file | HdrHistogram output file | N/A | -| expire_time_incomplete_chunked_messages | Set the expiration time for incomplete chunk messages (in milliseconds). | 0 | -| max-connections | Set the maximum number of TCP connections to a single broker. | 100 | -| max_chunked_msg | Set the max pending chunk messages. | 0 | -| num-consumers | Set the number of consumers for each topic. | 1 | -| num-io-threads |Set the number of threads to be used for handling connections to brokers. | 1 | -| num-subscriptions | Set the number of subscriptions (per topic). | 1 | -| num-topic | Set the number of topics. | 1 | -| pool-messages | Configure whether to use the pooled message. | true | -| rate | Simulate a slow message consumer (rate in msg/s). | 0.0 | -| receiver-queue-size | Set the size of the receiver queue. | 1000 | -| receiver-queue-size-across-partitions | Set the max total size of the receiver queue across partitions. | 50000 | -| replicated | Configure whether the subscription status should be replicated. | false | -| service-url | Set the Pulsar service URL. | | -| stats-interval-seconds | Set the statistics interval. If it is set to 0, statistics is disabled. | 0 | -| subscriber-name | Set the subscriber name prefix. | | -| subscription-position | Set the subscription position. Valid values are `Latest`, `Earliest`.| Latest | -| subscription-type | Set the subscription type.
Valid values are `Exclusive`, `Shared`, `Failover`, `Key_Shared`. | Exclusive |
-| test-duration | Set the test duration (in seconds). If the value is 0 or smaller than 0, it keeps consuming messages. | 0 |
-| tls-allow-insecure | Set the allowed insecure TLS connection. | N/A |
-| trust-cert-file | Set the path for the trusted TLS certificate file. | |
-
-## Configurations
-
-By default, the Pulsar Perf uses `conf/client.conf` as the default configuration and `conf/log4j2.yaml` as the default Log4j configuration. If you want to connect to another Pulsar cluster, update `brokerServiceUrl` in the client configuration.
-
-You can use the following commands to change the configuration file and the Log4j configuration file.
-
-```
-
-export PULSAR_CLIENT_CONF=<path-to-client-conf>
-export PULSAR_LOG_CONF=<path-to-log4j2-conf>
-
-```
-
-In addition, you can use the following command to configure the JVM through environment variables:
-
-```
-
-export PULSAR_EXTRA_OPTS='-Xms4g -Xmx4g -XX:MaxDirectMemorySize=4g'
-
-```
-
-## HdrHistogram Plotter
-
-The [HdrHistogram Plotter](https://hdrhistogram.github.io/HdrHistogram/plotFiles.html) is a visualization tool for checking Pulsar Perf test results, which makes it easier to observe the test results.
-
-To check test results through the HdrHistogram Plotter, follow these steps:
-
-1. Clone the HdrHistogram repository from GitHub to your local machine.
-
-   ```
-
-   git clone https://github.com/HdrHistogram/HdrHistogram.git
-
-   ```
-
-2. Switch to the HdrHistogram folder.
-
-   ```
-
-   cd HdrHistogram
-
-   ```
-
-3. Install the HdrHistogram Plotter.
-
-   ```
-
-   mvn clean install -DskipTests
-
-   ```
-
-4. Transform the file generated by the Pulsar Perf.
-
-   ```
-
-   ./HistogramLogProcessor -i <pulsar-perf-histogram-file> -o <output-file>
-
-   ```
-
-5. You will get two output files. Upload the file with the `.hgrm` extension to the [HdrHistogram Plotter](https://hdrhistogram.github.io/HdrHistogram/plotFiles.html).
-
-6. Check the test result through the Graphical User Interface of the HdrHistogram Plotter, as shown below.
-
-   ![](/assets/perf-produce.png)
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/reference-cli-tools.md b/site2/website/versioned_docs/version-2.9.3-deprecated/reference-cli-tools.md
deleted file mode 100644
index 6893da3ec6394b..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/reference-cli-tools.md
+++ /dev/null
@@ -1,941 +0,0 @@
----
-id: reference-cli-tools
-title: Pulsar command-line tools
-sidebar_label: "Pulsar CLI tools"
-original_id: reference-cli-tools
----
-
-Pulsar offers several command-line tools that you can use for managing Pulsar installations, performance testing, using command-line producers and consumers, and more.
-
-All Pulsar command-line tools can be run from the `bin` directory of your [installed Pulsar package](getting-started-standalone.md). The following tools are currently documented:
-
-* [`pulsar`](#pulsar)
-* [`pulsar-client`](#pulsar-client)
-* [`pulsar-daemon`](#pulsar-daemon)
-* [`pulsar-perf`](#pulsar-perf)
-* [`bookkeeper`](#bookkeeper)
-* [`broker-tool`](#broker-tool)
-
-> ### Getting help
-> You can get help for any CLI tool, command, or subcommand using the `--help` flag, or `-h` for short. Here's an example:
-
-> ```shell
->
-> $ bin/pulsar broker --help
->
->
-> ```
-
-
-## `pulsar`
-
-The `pulsar` tool is used to start Pulsar components, such as bookies and ZooKeeper, in the foreground.
-
-These processes can also be started in the background, using nohup, via the `pulsar-daemon` tool, which has the same command interface as `pulsar`.
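-
-For example, the same broker can be started in the foreground with `pulsar` or in the background with `pulsar-daemon` (illustrative invocations):
-
-```shell
-
-$ bin/pulsar broker
-$ bin/pulsar-daemon start broker
-
-```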
- -Usage: - -```bash - -$ pulsar command - -``` - -Commands: -* `bookie` -* `broker` -* `compact-topic` -* `configuration-store` -* `initialize-cluster-metadata` -* `proxy` -* `standalone` -* `websocket` -* `zookeeper` -* `zookeeper-shell` - -Example: - -```bash - -$ PULSAR_BROKER_CONF=/path/to/broker.conf pulsar broker - -``` - -The table below lists the environment variables that you can use to configure the `pulsar` tool. - -|Variable|Description|Default| -|---|---|---| -|`PULSAR_LOG_CONF`|Log4j configuration file|`conf/log4j2.yaml`| -|`PULSAR_BROKER_CONF`|Configuration file for broker|`conf/broker.conf`| -|`PULSAR_BOOKKEEPER_CONF`|description: Configuration file for bookie|`conf/bookkeeper.conf`| -|`PULSAR_ZK_CONF`|Configuration file for zookeeper|`conf/zookeeper.conf`| -|`PULSAR_CONFIGURATION_STORE_CONF`|Configuration file for the configuration store|`conf/global_zookeeper.conf`| -|`PULSAR_WEBSOCKET_CONF`|Configuration file for websocket proxy|`conf/websocket.conf`| -|`PULSAR_STANDALONE_CONF`|Configuration file for standalone|`conf/standalone.conf`| -|`PULSAR_EXTRA_OPTS`|Extra options to be passed to the jvm|| -|`PULSAR_EXTRA_CLASSPATH`|Extra paths for Pulsar's classpath|| -|`PULSAR_PID_DIR`|Folder where the pulsar server PID file should be stored|| -|`PULSAR_STOP_TIMEOUT`|Wait time before forcefully killing the Bookie server instance if attempts to stop it are not successful|| - - - -### `bookie` - -Starts up a bookie server - -Usage: - -```bash - -$ pulsar bookie options - -``` - -Options - -|Option|Description|Default| -|---|---|---| -|`-readOnly`|Force start a read-only bookie server|false| -|`-withAutoRecovery`|Start auto-recover service bookie server|false| - - -Example - -```bash - -$ PULSAR_BOOKKEEPER_CONF=/path/to/bookkeeper.conf pulsar bookie \ - -readOnly \ - -withAutoRecovery - -``` - -### `broker` - -Starts up a Pulsar broker - -Usage - -```bash - -$ pulsar broker options - -``` - -Options - -|Option|Description|Default| -|---|---|---| -|`-bc` , `--bookie-conf`|Configuration file for BookKeeper|| -|`-rb` , `--run-bookie`|Run a BookKeeper bookie on the same host as the Pulsar broker|false| -|`-ra` , `--run-bookie-autorecovery`|Run a BookKeeper autorecovery daemon on the same host as the Pulsar broker|false| - -Example - -```bash - -$ PULSAR_BROKER_CONF=/path/to/broker.conf pulsar broker - -``` - -### `compact-topic` - -Run compaction against a Pulsar topic (in a new process) - -Usage - -```bash - -$ pulsar compact-topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-t` , `--topic`|The Pulsar topic that you would like to compact|| - -Example - -```bash - -$ pulsar compact-topic --topic topic-to-compact - -``` - -### `configuration-store` - -Starts up the Pulsar configuration store - -Usage - -```bash - -$ pulsar configuration-store - -``` - -Example - -```bash - -$ PULSAR_CONFIGURATION_STORE_CONF=/path/to/configuration_store.conf pulsar configuration-store - -``` - -### `initialize-cluster-metadata` - -One-time cluster metadata initialization - -Usage - -```bash - -$ pulsar initialize-cluster-metadata options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-ub` , `--broker-service-url`|The broker service URL for the new cluster|| -|`-tb` , `--broker-service-url-tls`|The broker service URL for the new cluster with TLS encryption|| -|`-c` , `--cluster`|Cluster name|| -|`-cs` , `--configuration-store`|The configuration store quorum connection string|| -|`--existing-bk-metadata-service-uri`|The metadata service URI of the existing 
BookKeeper cluster that you want to use|| -|`-h` , `--help`|Cluster name|false| -|`--initial-num-stream-storage-containers`|The number of storage containers of BookKeeper stream storage|16| -|`--initial-num-transaction-coordinators`|The number of transaction coordinators assigned in a cluster|16| -|`-uw` , `--web-service-url`|The web service URL for the new cluster|| -|`-tw` , `--web-service-url-tls`|The web service URL for the new cluster with TLS encryption|| -|`-zk` , `--zookeeper`|The local ZooKeeper quorum connection string|| -|`--zookeeper-session-timeout-ms`|The local ZooKeeper session timeout. The time unit is in millisecond(ms)|30000| - - -### `proxy` - -Manages the Pulsar proxy - -Usage - -```bash - -$ pulsar proxy options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--configuration-store`|Configuration store connection string|| -|`-zk` , `--zookeeper-servers`|Local ZooKeeper connection string|| - -Example - -```bash - -$ PULSAR_PROXY_CONF=/path/to/proxy.conf pulsar proxy \ - --zookeeper-servers zk-0,zk-1,zk2 \ - --configuration-store zk-0,zk-1,zk-2 - -``` - -### `standalone` - -Run a broker service with local bookies and local ZooKeeper - -Usage - -```bash - -$ pulsar standalone options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-a` , `--advertised-address`|The standalone broker advertised address|| -|`--bookkeeper-dir`|Local bookies’ base data directory|data/standalone/bookeeper| -|`--bookkeeper-port`|Local bookies’ base port|3181| -|`--no-broker`|Only start ZooKeeper and BookKeeper services, not the broker|false| -|`--num-bookies`|The number of local bookies|1| -|`--only-broker`|Only start the Pulsar broker service (not ZooKeeper or BookKeeper)|| -|`--wipe-data`|Clean up previous ZooKeeper/BookKeeper data|| -|`--zookeeper-dir`|Local ZooKeeper’s data directory|data/standalone/zookeeper| -|`--zookeeper-port` |Local ZooKeeper’s port|2181| - -Example - -```bash - -$ PULSAR_STANDALONE_CONF=/path/to/standalone.conf pulsar standalone - -``` - -### `websocket` - -Usage - -```bash - -$ pulsar websocket - -``` - -Example - -```bash - -$ PULSAR_WEBSOCKET_CONF=/path/to/websocket.conf pulsar websocket - -``` - -### `zookeeper` - -Starts up a ZooKeeper cluster - -Usage - -```bash - -$ pulsar zookeeper - -``` - -Example - -```bash - -$ PULSAR_ZK_CONF=/path/to/zookeeper.conf pulsar zookeeper - -``` - -### `zookeeper-shell` - -Connects to a running ZooKeeper cluster using the ZooKeeper shell - -Usage - -```bash - -$ pulsar zookeeper-shell options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-c`, `--conf`|Configuration file for ZooKeeper|| -|`-server`|Configuration zk address, eg: `127.0.0.1:2181`|| - - - -## `pulsar-client` - -The pulsar-client tool - -Usage - -```bash - -$ pulsar-client command - -``` - -Commands -* `produce` -* `consume` - - -Options - -|Flag|Description|Default| -|---|---|---| -|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class, for example "key1:val1,key2:val2" or "{\"key1\":\"val1\",\"key2\":\"val2\"}"|{"saslJaasClientSectionName":"PulsarClient", "serverType":"broker"}| -|`--auth-plugin`|Authentication plugin class name|org.apache.pulsar.client.impl.auth.AuthenticationSasl| -|`--listener-name`|Listener name for the broker|| -|`--proxy-protocol`|Proxy protocol to select type of routing at proxy|| -|`--proxy-url`|Proxy-server URL to which to connect|| -|`--url`|Broker URL to which to connect|pulsar://localhost:6650/
    ws://localhost:8080 | -| `-v`, `--version` | Get the version of the Pulsar client -|`-h`, `--help`|Show this help - - -### `produce` -Send a message or messages to a specific broker and topic - -Usage - -```bash - -$ pulsar-client produce topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-f`, `--files`|Comma-separated file paths to send; either -m or -f must be specified|[]| -|`-m`, `--messages`|Comma-separated string of messages to send; either -m or -f must be specified|[]| -|`-n`, `--num-produce`|The number of times to send the message(s); the count of messages/files * num-produce should be below 1000|1| -|`-r`, `--rate`|Rate (in messages per second) at which to produce; a value 0 means to produce messages as fast as possible|0.0| -|`-c`, `--chunking`|Split the message and publish in chunks if the message size is larger than the allowed max size|false| -|`-s`, `--separator`|Character to split messages string with.|","| -|`-k`, `--key`|Message key to add|key=value string, like k1=v1,k2=v2.| -|`-p`, `--properties`|Properties to add. If you want to add multiple properties, use the comma as the separator, e.g. `k1=v1,k2=v2`.| | -|`-ekn`, `--encryption-key-name`|The public key name to encrypt payload.| | -|`-ekv`, `--encryption-key-value`|The URI of public key to encrypt payload. For example, `file:///path/to/public.key` or `data:application/x-pem-file;base64,*****`.| | - - -### `consume` -Consume messages from a specific broker and topic - -Usage - -```bash - -$ pulsar-client consume topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--hex`|Display binary messages in hexadecimal format.|false| -|`-n`, `--num-messages`|Number of messages to consume, 0 means to consume forever.|1| -|`-r`, `--rate`|Rate (in messages per second) at which to consume; a value 0 means to consume messages as fast as possible|0.0| -|`--regex`|Indicate the topic name is a regex pattern|false| -|`-s`, `--subscription-name`|Subscription name|| -|`-t`, `--subscription-type`|The type of the subscription. Possible values: Exclusive, Shared, Failover, Key_Shared.|Exclusive| -|`-p`, `--subscription-position`|The position of the subscription. Possible values: Latest, Earliest.|Latest| -|`-m`, `--subscription-mode`|Subscription mode.|Durable| -|`-q`, `--queue-size`|The size of consumer's receiver queue.|0| -|`-mc`, `--max_chunked_msg`|Max pending chunk messages.|0| -|`-ac`, `--auto_ack_chunk_q_full`|Auto ack for the oldest message in consumer's receiver queue if the queue full.|false| -|`--hide-content`|Do not print the message to the console.|false| -|`-st`, `--schema-type`|Set the schema type. Use `auto_consume` to dump AVRO and other structured data types. Possible values: bytes, auto_consume.|bytes| -|`-ekv`, `--encryption-key-value`|The URI of public key to encrypt payload. For example, `file:///path/to/public.key` or `data:application/x-pem-file;base64,*****`.| | -|`-pm`, `--pool-messages`|Use the pooled message.|true| - -## `pulsar-daemon` -A wrapper around the pulsar tool that’s used to start and stop processes, such as ZooKeeper, bookies, and Pulsar brokers, in the background using nohup. - -pulsar-daemon has a similar interface to the pulsar command but adds start and stop commands for various services. For a listing of those services, run pulsar-daemon to see the help output or see the documentation for the pulsar command. 
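-
-For example, a bookie started in the background can later be stopped with the matching `stop` command (illustrative invocations; the service names are the same ones accepted by the `pulsar` command):
-
-```bash
-
-$ pulsar-daemon start bookie
-$ pulsar-daemon stop bookie
-
-```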
- -Usage - -```bash - -$ pulsar-daemon command - -``` - -Commands -* `start` -* `stop` - - -### `start` -Start a service in the background using nohup. - -Usage - -```bash - -$ pulsar-daemon start service - -``` - -### `stop` -Stop a service that’s already been started using start. - -Usage - -```bash - -$ pulsar-daemon stop service options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|-force|Stop the service forcefully if not stopped by normal shutdown.|false| - - - -## `pulsar-perf` -A tool for performance testing a Pulsar broker. - -Usage - -```bash - -$ pulsar-perf command - -``` - -Commands -* `consume` -* `produce` -* `read` -* `websocket-producer` -* `managed-ledger` -* `monitor-brokers` -* `simulation-client` -* `simulation-controller` -* `help` - -Environment variables - -The table below lists the environment variables that you can use to configure the pulsar-perf tool. - -|Variable|Description|Default| -|---|---|---| -|`PULSAR_LOG_CONF`|Log4j configuration file|conf/log4j2.yaml| -|`PULSAR_CLIENT_CONF`|Configuration file for the client|conf/client.conf| -|`PULSAR_EXTRA_OPTS`|Extra options to be passed to the JVM|| -|`PULSAR_EXTRA_CLASSPATH`|Extra paths for Pulsar's classpath|| - - -### `consume` -Run a consumer - -Usage - -``` - -$ pulsar-perf consume options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.|| -|`--auth-plugin`|Authentication plugin class name|| -|`-ac`, `--auto_ack_chunk_q_full`|Auto ack for the oldest message in consumer's receiver queue if the queue full|false| -|`--listener-name`|Listener name for the broker|| -|`--acks-delay-millis`|Acknowledgements grouping delay in millis|100| -|`--batch-index-ack`|Enable or disable the batch index acknowledgment|false| -|`-bw`, `--busy-wait`|Enable or disable Busy-Wait on the Pulsar client|false| -|`-v`, `--encryption-key-value-file`|The file which contains the private key to decrypt payload|| -|`-h`, `--help`|Help message|false| -|`--conf-file`|Configuration file|| -|`-m`, `--num-messages`|Number of messages to consume in total. If the value is equal to or smaller than 0, it keeps consuming messages.|0| -|`-e`, `--expire_time_incomplete_chunked_messages`|The expiration time for incomplete chunk messages (in milliseconds)|0| -|`-c`, `--max-connections`|Max number of TCP connections to a single broker|100| -|`-mc`, `--max_chunked_msg`|Max pending chunk messages|0| -|`-n`, `--num-consumers`|Number of consumers (per topic)|1| -|`-ioThreads`, `--num-io-threads`|Set the number of threads to be used for handling connections to brokers|1| -|`-ns`, `--num-subscriptions`|Number of subscriptions (per topic)|1| -|`-t`, `--num-topics`|The number of topics|1| -|`-pm`, `--pool-messages`|Use the pooled message|true| -|`-r`, `--rate`|Simulate a slow message consumer (rate in msg/s)|0| -|`-q`, `--receiver-queue-size`|Size of the receiver queue|1000| -|`-p`, `--receiver-queue-size-across-partitions`|Max total size of the receiver queue across partitions|50000| -|`--replicated`|Whether the subscription status should be replicated|false| -|`-u`, `--service-url`|Pulsar service URL|| -|`-i`, `--stats-interval-seconds`|Statistics interval seconds. 
If 0, statistics will be disabled|0| -|`-s`, `--subscriber-name`|Subscriber name prefix|| -|`-ss`, `--subscriptions`|A list of subscriptions to consume on (e.g. sub1,sub2)|sub| -|`-st`, `--subscription-type`|Subscriber type. Possible values are Exclusive, Shared, Failover, Key_Shared.|Exclusive| -|`-sp`, `--subscription-position`|Subscriber position. Possible values are Latest, Earliest.|Latest| -|`-time`, `--test-duration`|Test duration (in seconds). If this value is less than or equal to 0, it keeps consuming messages.|0| -|`--trust-cert-file`|Path for the trusted TLS certificate file|| -|`--tls-allow-insecure`|Allow insecure TLS connection|| - - -### `produce` -Run a producer - -Usage - -```bash - -$ pulsar-perf produce options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-am`, `--access-mode`|Producer access mode. Valid values are `Shared`, `Exclusive` and `WaitForExclusive`|Shared| -|`-au`, `--admin-url`|Pulsar admin URL|| -|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.|| -|`--auth-plugin`|Authentication plugin class name|| -|`--listener-name`|Listener name for the broker|| -|`-b`, `--batch-time-window`|Batch messages in a window of the specified number of milliseconds|1| -|`-bb`, `--batch-max-bytes`|Maximum number of bytes per batch|4194304| -|`-bm`, `--batch-max-messages`|Maximum number of messages per batch|1000| -|`-bw`, `--busy-wait`|Enable or disable Busy-Wait on the Pulsar client|false| -|`-ch`, `--chunking`|Split the message and publish in chunks if the message size is larger than allowed max size|false| -|`-d`, `--delay`|Mark messages with a given delay in seconds|0s| -|`-z`, `--compression`|Compress messages’ payload. Possible values are NONE, LZ4, ZLIB, ZSTD or SNAPPY.|| -|`--conf-file`|Configuration file|| -|`-k`, `--encryption-key-name`|The public key name to encrypt payload|| -|`-v`, `--encryption-key-value-file`|The file which contains the public key to encrypt payload|| -|`-ef`, `--exit-on-failure`|Exit from the process on publish failure|false| -|`-fc`, `--format-class`|Custom Formatter class name|org.apache.pulsar.testclient.DefaultMessageFormatter| -|`-fp`, `--format-payload`|Format %i as a message index in the stream from producer and/or %t as the timestamp nanoseconds|false| -|`-h`, `--help`|Help message|false| -|`-c`, `--max-connections`|Max number of TCP connections to a single broker|100| -|`-o`, `--max-outstanding`|Max number of outstanding messages|1000| -|`-p`, `--max-outstanding-across-partitions`|Max number of outstanding messages across partitions|50000| -|`-m`, `--num-messages`|Number of messages to publish in total. If this value is less than or equal to 0, it keeps publishing messages.|0| -|`-mk`, `--message-key-generation-mode`|The generation mode of message key. Valid options are `autoIncrement`, `random`|| -|`-ioThreads`, `--num-io-threads`|Set the number of threads to be used for handling connections to brokers|1| -|`-n`, `--num-producers`|The number of producers (per topic)|1| -|`-threads`, `--num-test-threads`|Number of test threads|1| -|`-t`, `--num-topic`|The number of topics|1| -|`-np`, `--partitions`|Create partitioned topics with the given number of partitions. 
Setting this value to 0 means not trying to create a topic|| -|`-f`, `--payload-file`|Use payload from an UTF-8 encoded text file and a payload will be randomly selected when publishing messages|| -|`-e`, `--payload-delimiter`|The delimiter used to split lines when using payload from a file|\n| -|`-pn`, `--producer-name`|Producer Name|| -|`-r`, `--rate`|Publish rate msg/s across topics|100| -|`--send-timeout`|Set the sendTimeout|0| -|`--separator`|Separator between the topic and topic number|-| -|`-u`, `--service-url`|Pulsar service URL|| -|`-s`, `--size`|Message size (in bytes)|1024| -|`-i`, `--stats-interval-seconds`|Statistics interval seconds. If 0, statistics will be disabled.|0| -|`-time`, `--test-duration`|Test duration (in seconds). If this value is less than or equal to 0, it keeps publishing messages.|0| -|`--trust-cert-file`|Path for the trusted TLS certificate file|| -|`--warmup-time`|Warm-up time in seconds|1| -|`--tls-allow-insecure`|Allow insecure TLS connection|| - - -### `read` -Run a topic reader - -Usage - -```bash - -$ pulsar-perf read options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.|| -|`--auth-plugin`|Authentication plugin class name|| -|`--listener-name`|Listener name for the broker|| -|`--conf-file`|Configuration file|| -|`-h`, `--help`|Help message|false| -|`-n`, `--num-messages`|Number of messages to consume in total. If the value is equal to or smaller than 0, it keeps consuming messages.|0| -|`-c`, `--max-connections`|Max number of TCP connections to a single broker|100| -|`-ioThreads`, `--num-io-threads`|Set the number of threads to be used for handling connections to brokers|1| -|`-t`, `--num-topics`|The number of topics|1| -|`-r`, `--rate`|Simulate a slow message reader (rate in msg/s)|0| -|`-q`, `--receiver-queue-size`|Size of the receiver queue|1000| -|`-u`, `--service-url`|Pulsar service URL|| -|`-m`, `--start-message-id`|Start message id. This can be either 'earliest', 'latest' or a specific message id by using 'lid:eid'|earliest| -|`-i`, `--stats-interval-seconds`|Statistics interval seconds. If 0, statistics will be disabled.|0| -|`-time`, `--test-duration`|Test duration (in seconds). If this value is less than or equal to 0, it keeps consuming messages.|0| -|`--trust-cert-file`|Path for the trusted TLS certificate file|| -|`--use-tls`|Use TLS encryption on the connection|false| -|`--tls-allow-insecure`|Allow insecure TLS connection|| - -### `websocket-producer` -Run a websocket producer - -Usage - -```bash - -$ pulsar-perf websocket-producer options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.|| -|`--auth-plugin`|Authentication plugin class name|| -|`--conf-file`|Configuration file|| -|`-h`, `--help`|Help message|false| -|`-m`, `--num-messages`|Number of messages to publish in total. 
If this value is less than or equal to 0, it keeps publishing messages.|0| -|`-t`, `--num-topic`|The number of topics|1| -|`-f`, `--payload-file`|Use payload from a file instead of empty buffer|| -|`-e`, `--payload-delimiter`|The delimiter used to split lines when using payload from a file|\n| -|`-fp`, `--format-payload`|Format %i as a message index in the stream from producer and/or %t as the timestamp nanoseconds|false| -|`-fc`, `--format-class`|Custom formatter class name|`org.apache.pulsar.testclient.DefaultMessageFormatter`| -|`-u`, `--proxy-url`|Pulsar Proxy URL, e.g., "ws://localhost:8080/"|| -|`-r`, `--rate`|Publish rate msg/s across topics|100| -|`-s`, `--size`|Message size in byte|1024| -|`-time`, `--test-duration`|Test duration (in seconds). If this value is less than or equal to 0, it keeps publishing messages.|0| - - -### `managed-ledger` -Write directly on managed-ledgers - -Usage - -```bash - -$ pulsar-perf managed-ledger options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-a`, `--ack-quorum`|Ledger ack quorum|1| -|`-dt`, `--digest-type`|BookKeeper digest type. Possible Values: [CRC32, MAC, CRC32C, DUMMY]|CRC32C| -|`-e`, `--ensemble-size`|Ledger ensemble size|1| -|`-h`, `--help`|Help message|false| -|`-c`, `--max-connections`|Max number of TCP connections to a single bookie|1| -|`-o`, `--max-outstanding`|Max number of outstanding requests|1000| -|`-m`, `--num-messages`|Number of messages to publish in total. If this value is less than or equal to 0, it keeps publishing messages.|0| -|`-t`, `--num-topic`|Number of managed ledgers|1| -|`-r`, `--rate`|Write rate msg/s across managed ledgers|100| -|`-s`, `--size`|Message size in byte|1024| -|`-time`, `--test-duration`|Test duration (in seconds). If this value is less than or equal to 0, it keeps publishing messages.|0| -|`--threads`|Number of threads writing|1| -|`-w`, `--write-quorum`|Ledger write quorum|1| -|`-zk`, `--zookeeperServers`|ZooKeeper connection string|| - - -### `monitor-brokers` -Continuously receive broker data and/or load reports - -Usage - -```bash - -$ pulsar-perf monitor-brokers options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--connect-string`|A connection string for one or more ZooKeeper servers|| -|`-h`, `--help`|Help message|false| - - -### `simulation-client` -Run a simulation server acting as a Pulsar client. Uses the client configuration specified in `conf/client.conf`. - -Usage - -```bash - -$ pulsar-perf simulation-client options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--port`|Port to listen on for controller|0| -|`--service-url`|Pulsar Service URL|| -|`-h`, `--help`|Help message|false| - -### `simulation-controller` -Run a simulation controller to give commands to servers - -Usage - -```bash - -$ pulsar-perf simulation-controller options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--client-port`|The port that the clients are listening on|0| -|`--clients`|Comma-separated list of client hostnames|| -|`--cluster`|The cluster to test on|| -|`-h`, `--help`|Help message|false| - - -### `help` -This help message - -Usage - -```bash - -$ pulsar-perf help - -``` - -## `bookkeeper` -A tool for managing BookKeeper. - -Usage - -```bash - -$ bookkeeper command - -``` - -Commands -* `auto-recovery` -* `bookie` -* `localbookie` -* `upgrade` -* `shell` - - -Environment variables - -The table below lists the environment variables that you can use to configure the bookkeeper tool. 
-
-|Variable|Description|Default|
-|---|---|---|
-|BOOKIE_LOG_CONF|Log4j configuration file|conf/log4j2.yaml|
-|BOOKIE_CONF|BookKeeper configuration file|conf/bk_server.conf|
-|BOOKIE_EXTRA_OPTS|Extra options to be passed to the JVM||
-|BOOKIE_EXTRA_CLASSPATH|Extra paths for BookKeeper's classpath||
-|ENTRY_FORMATTER_CLASS|The Java class used to format entries||
-|BOOKIE_PID_DIR|Folder where the BookKeeper server PID file should be stored||
-|BOOKIE_STOP_TIMEOUT|Wait time before forcefully killing the Bookie server instance if attempts to stop it are not successful||
-
-
-### `auto-recovery`
-Runs an auto-recovery service daemon
-
-Usage
-
-```bash
-
-$ bookkeeper auto-recovery options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-c`, `--conf`|Configuration for the auto-recovery daemon||
-
-
-### `bookie`
-Starts up a BookKeeper server (aka bookie)
-
-Usage
-
-```bash
-
-$ bookkeeper bookie options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-c`, `--conf`|Configuration for the bookie server||
-|`-readOnly`|Force start a read-only bookie server|false|
-|`-withAutoRecovery`|Start the auto-recovery service along with the bookie server|false|
-
-
-### `localbookie`
-Runs a test ensemble of N bookies locally
-
-Usage
-
-```bash
-
-$ bookkeeper localbookie N
-
-```
-
-### `upgrade`
-Upgrade the bookie’s filesystem
-
-Usage
-
-```bash
-
-$ bookkeeper upgrade options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-c`, `--conf`|Configuration for the bookie server||
-|`-u`, `--upgrade`|Upgrade the bookie’s directories||
-
-
-### `shell`
-Run shell for admin commands. To see a full listing of those commands, run bookkeeper shell without an argument.
-
-Usage
-
-```bash
-
-$ bookkeeper shell
-
-```
-
-Example
-
-```bash
-
-$ bookkeeper shell bookiesanity
-
-```
-
-## `broker-tool`
-
-The `broker-tool` is used for operations on a specific broker.
-
-Usage
-
-```bash
-
-$ broker-tool command
-
-```
-
-Commands
-* `load-report`
-* `help`
-
-Example
-There are two ways to get more information about a command:
-
-```bash
-
-$ broker-tool help command
-$ broker-tool command --help
-
-```
-
-### `load-report`
-
-Collect the load report of a specific broker.
-The command runs on a broker and is used for troubleshooting why the broker cannot collect the correct load report.
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-i`, `--interval`| Interval to collect load report, in milliseconds ||
-|`-h`, `--help`| Display help information ||
-
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/reference-configuration.md b/site2/website/versioned_docs/version-2.9.3-deprecated/reference-configuration.md
deleted file mode 100644
index 71f2eec9f97ee9..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/reference-configuration.md
+++ /dev/null
@@ -1,770 +0,0 @@
----
-id: reference-configuration
-title: Pulsar configuration
-sidebar_label: "Pulsar configuration"
-original_id: reference-configuration
----
-
-
-
-
-You can manage Pulsar configuration by configuration files in the [`conf`](https://github.com/apache/pulsar/tree/master/conf) directory of a Pulsar [installation](getting-started-standalone.md). 
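-
-For example, you can check the current value of a setting in the corresponding file before changing it (the key shown here is just an illustration):
-
-```bash
-
-$ grep '^brokerServicePort' conf/broker.conf
-brokerServicePort=6650
-
-```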
- -- [BookKeeper](#bookkeeper) -- [Broker](#broker) -- [Client](#client) -- [Log4j](#log4j) -- [Log4j shell](#log4j-shell) -- [Standalone](#standalone) -- [WebSocket](#websocket) -- [Pulsar proxy](#pulsar-proxy) -- [ZooKeeper](#zookeeper) - -## BookKeeper - -BookKeeper is a replicated log storage system that Pulsar uses for durable storage of all messages. - - -|Name|Description|Default| -|---|---|---| -|bookiePort|The port on which the bookie server listens.|3181| -|allowLoopback|Whether the bookie is allowed to use a loopback interface as its primary interface (that is the interface used to establish its identity). By default, loopback interfaces are not allowed to work as the primary interface. Using a loopback interface as the primary interface usually indicates a configuration error. For example, it’s fairly common in some VPS setups to not configure a hostname or to have the hostname resolve to `127.0.0.1`. If this is the case, then all bookies in the cluster will establish their identities as `127.0.0.1:3181` and only one will be able to join the cluster. For VPSs configured like this, you should explicitly set the listening interface.|false| -|listeningInterface|The network interface on which the bookie listens. By default, the bookie listens on all interfaces.|eth0| -|advertisedAddress|Configure a specific hostname or IP address that the bookie should use to advertise itself to clients. By default, the bookie advertises either its own IP address or hostname according to the `listeningInterface` and `useHostNameAsBookieID` settings.|N/A| -|allowMultipleDirsUnderSameDiskPartition|Configure the bookie to enable/disable multiple ledger/index/journal directories in the same filesystem disk partition.|false| -|minUsableSizeForIndexFileCreation|The minimum safe usable size available in index directory for bookie to create index files while replaying journal at the time of bookie starts in Readonly Mode (in bytes).|1073741824| -|journalDirectory|The directory where BookKeeper outputs its write-ahead log (WAL).|data/bookkeeper/journal| -|journalDirectories|Directories that BookKeeper outputs its write ahead log. Multiple directories are available, being separated by `,`. For example: `journalDirectories=/tmp/bk-journal1,/tmp/bk-journal2`. If `journalDirectories` is set, the bookies skip `journalDirectory` and use this setting directory.|/tmp/bk-journal| -|ledgerDirectories|The directory where BookKeeper outputs ledger snapshots. This could define multiple directories to store snapshots separated by `,`, for example `ledgerDirectories=/tmp/bk1-data,/tmp/bk2-data`. Ideally, ledger dirs and the journal dir are each in a different device, which reduces the contention between random I/O and sequential write. It is possible to run with a single disk, but performance will be significantly lower.|data/bookkeeper/ledgers| -|ledgerManagerType|The type of ledger manager used to manage how ledgers are stored, managed, and garbage collected. See [BookKeeper Internals](http://bookkeeper.apache.org/docs/latest/getting-started/concepts) for more info.|hierarchical| -|zkLedgersRootPath|The root ZooKeeper path used to store ledger metadata. This parameter is used by the ZooKeeper-based ledger manager as a root znode to store all ledgers.|/ledgers| -|ledgerStorageClass|Ledger storage implementation class|org.apache.bookkeeper.bookie.storage.ldb.DbLedgerStorage| -|entryLogFilePreallocationEnabled|Enable or disable entry logger preallocation|true| -|logSizeLimit|Max file size of the entry logger, in bytes. 
A new entry log file will be created when the old one reaches the file size limitation.|1073741824|
-|minorCompactionThreshold|Threshold of minor compaction. Entry log files whose remaining size percentage reaches below this threshold will be compacted in a minor compaction. If set to less than zero, the minor compaction is disabled.|0.2|
-|minorCompactionInterval|Time interval to run minor compaction, in seconds. If set to less than zero, the minor compaction is disabled. Note: should be greater than gcWaitTime. |3600|
-|majorCompactionThreshold|The threshold of major compaction. Entry log files whose remaining size percentage reaches below this threshold will be compacted in a major compaction. Those entry log files whose remaining size percentage is still higher than the threshold will never be compacted. If set to less than zero, the major compaction is disabled.|0.5|
-|majorCompactionInterval|The time interval to run major compaction, in seconds. If set to less than zero, the major compaction is disabled. Note: should be greater than gcWaitTime. |86400|
-|readOnlyModeEnabled|If `readOnlyModeEnabled=true`, then on all full ledger disks, the bookie will be converted to read-only mode and serve only read requests. Otherwise the bookie will be shut down.|true|
-|forceReadOnlyBookie|Whether the bookie is force started in read-only mode.|false|
-|persistBookieStatusEnabled|Persist the bookie status locally on the disks, so that bookies can keep their status upon restarts.|false|
-|compactionMaxOutstandingRequests|Sets the maximum number of entries that can be compacted without flushing. When compacting, the entries are written to the entry log and the new offsets are cached in memory. Once the entry log is flushed, the index is updated with the new offsets. This parameter controls the number of entries added to the entry log before a flush is forced. A higher value for this parameter means more memory will be used for offsets. Each offset consists of 3 longs. This parameter should not be modified unless you’re fully aware of the consequences.|100000|
-|compactionRate|The rate at which compaction will read entries, in adds per second.|1000|
-|isThrottleByBytes|Throttle compaction by bytes or by entries.|false|
-|compactionRateByEntries|The rate at which compaction will read entries, in entries per second.|1000|
-|compactionRateByBytes|Set the rate at which compaction reads entries. The unit is bytes added per second.|1000000|
-|journalMaxSizeMB|Max file size of journal file, in megabytes. A new journal file will be created when the old one reaches the file size limitation.|2048|
-|journalMaxBackups|The max number of old journal files to keep. 
Keeping a number of old journal files would help data recovery in special cases.|5|
-|journalPreAllocSizeMB|How much space to pre-allocate at a time in the journal.|16|
-|journalWriteBufferSizeKB|The size of the write buffers used for the journal.|64|
-|journalRemoveFromPageCache|Whether pages should be removed from the page cache after force write.|true|
-|journalAdaptiveGroupWrites|Whether to group journal force writes, which optimizes group commit for higher throughput.|true|
-|journalMaxGroupWaitMSec|The maximum latency to impose on a journal write to achieve grouping.|1|
-|journalAlignmentSize|All the journal writes and commits should be aligned to given size|4096|
-|journalBufferedWritesThreshold|Maximum writes to buffer to achieve grouping|524288|
-|journalFlushWhenQueueEmpty|If we should flush the journal when journal queue is empty|false|
-|numJournalCallbackThreads|The number of threads that should handle journal callbacks|8|
-|openLedgerRereplicationGracePeriod | The grace period, in milliseconds, that the replication worker waits before fencing and replicating a ledger fragment that's still being written to upon bookie failure. | 30000 |
-|rereplicationEntryBatchSize|The max number of entries to keep in a fragment for re-replication|100|
-|autoRecoveryDaemonEnabled|Whether the bookie itself can start the auto-recovery service.|true|
-|lostBookieRecoveryDelay|How long to wait, in seconds, before starting auto recovery of a lost bookie.|0|
-|gcWaitTime|The interval to trigger the next garbage collection, in milliseconds. Since garbage collection runs in the background, too frequent gc will hurt performance. It is better to use a higher gc interval if there is enough disk capacity.|900000|
-|gcOverreplicatedLedgerWaitTime|The interval to trigger the next garbage collection of overreplicated ledgers, in milliseconds. This should not run very frequently since the bookie reads the metadata for all its ledgers from ZooKeeper.|86400000|
-|flushInterval|The interval to flush ledger index pages to disk, in milliseconds. Flushing index files introduces much random disk I/O. If the journal dir and ledger dirs are each on different devices, flushing does not affect performance. But if the journal dir and ledger dirs are on the same device, performance degrades significantly with too frequent flushing. You can consider increasing the flush interval to get better performance, but you will need more time for the bookie server to restart after a failure.|60000|
-|bookieDeathWatchInterval|Interval to watch whether bookie is dead or not, in milliseconds|1000|
-|allowStorageExpansion|Allow the bookie storage to expand. Newly added ledger and index dirs must be empty.|false|
-|zkServers|A list of one or more servers on which ZooKeeper is running. The server list can be comma separated values, for example: zkServers=zk1:2181,zk2:2181,zk3:2181.|localhost:2181|
-|zkTimeout|ZooKeeper client session timeout in milliseconds. The bookie server will exit if it receives SESSION_EXPIRED because it was partitioned off from ZooKeeper for longer than the session timeout; JVM garbage collection or disk I/O can cause SESSION_EXPIRED. 
Increasing this value could help avoid this issue.|30000|
-|zkRetryBackoffStartMs|The start time that the ZooKeeper client backoff retries in milliseconds.|1000|
-|zkRetryBackoffMaxMs|The maximum time that the ZooKeeper client backoff retries in milliseconds.|10000|
-|zkEnableSecurity|Set ACLs on every node written on ZooKeeper, allowing users to read and write BookKeeper metadata stored on ZooKeeper. In order to make ACLs work you need to set up ZooKeeper JAAS authentication. All the bookies and clients need to share the same user, and this is usually done using Kerberos authentication. See the ZooKeeper documentation.|false|
-|httpServerEnabled|The flag enables/disables starting the admin http server.|false|
-|httpServerPort|The HTTP server port to listen on. By default, the value is `8080`. If you want to keep it consistent with the Prometheus stats provider, you can set it to `8000`.|8080|
-|httpServerClass|The http server class.|org.apache.bookkeeper.http.vertx.VertxHttpServer|
-|serverTcpNoDelay|This setting is used to enable/disable Nagle’s algorithm, which is a means of improving the efficiency of TCP/IP networks by reducing the number of packets that need to be sent over the network. If you are sending many small messages, such that more than one can fit in a single IP packet, setting server.tcpnodelay to false to enable the Nagle algorithm can provide better performance.|true|
-|serverSockKeepalive|This setting is used to send keep-alive messages on connection-oriented sockets.|true|
-|serverTcpLinger|The socket linger timeout on close. When enabled, a close or shutdown will not return until all queued messages for the socket have been successfully sent or the linger timeout has been reached. Otherwise, the call returns immediately and the closing is done in the background.|0|
-|byteBufAllocatorSizeMax|The maximum buffer size of the received ByteBuf allocator.|1048576|
-|nettyMaxFrameSizeBytes|The maximum netty frame size in bytes. Any message received larger than this will be rejected.|5253120|
-|openFileLimit|Max number of ledger index files that could be opened in the bookie server. If the number of ledger index files reaches this limitation, the bookie server starts to swap some ledgers from memory to disk. Too frequent swapping will affect performance. You can tune this number to gain performance according to your requirements.|0|
-|pageSize|Size of an index page in the ledger cache, in bytes. A larger index page can improve performance when writing pages to disk, which is efficient when you have a small number of ledgers and these ledgers have a similar number of entries. If you have a large number of ledgers and each ledger has fewer entries, a smaller index page would improve memory usage.|8192|
-|pageLimit|How many index pages are provided in the ledger cache. If the number of index pages reaches this limitation, the bookie server starts to swap some ledgers from memory to disk. You can increment this value when you find that swapping becomes more frequent. But make sure pageLimit*pageSize is not more than the JVM max memory limitation, otherwise you would get an OutOfMemoryException. In general, incrementing pageLimit and using a smaller index page would gain better performance in the case of a large number of ledgers with fewer entries. If pageLimit is -1, the bookie server will use 1/3 of the JVM memory to compute the limitation of the number of index pages.|0|
-|readOnlyModeEnabled|If all ledger directories configured are full, then support only read requests for clients. 
If "readOnlyModeEnabled=true" then on all ledger disks full, bookie will be converted to read-only mode and serve only read requests. Otherwise the bookie will be shutdown. By default this will be disabled.|true| -|diskUsageThreshold|For each ledger dir, maximum disk space which can be used. Default is 0.95f. i.e. 95% of disk can be used at most after which nothing will be written to that partition. If all ledger dir partitions are full, then bookie will turn to readonly mode if ‘readOnlyModeEnabled=true’ is set, else it will shutdown. Valid values should be in between 0 and 1 (exclusive).|0.95| -|diskCheckInterval|Disk check interval in milli seconds, interval to check the ledger dirs usage.|10000| -|auditorPeriodicCheckInterval|Interval at which the auditor will do a check of all ledgers in the cluster. By default this runs once a week. The interval is set in seconds. To disable the periodic check completely, set this to 0. Note that periodic checking will put extra load on the cluster, so it should not be run more frequently than once a day.|604800| -|sortedLedgerStorageEnabled|Whether sorted-ledger storage is enabled.|true| -|auditorPeriodicBookieCheckInterval|The interval between auditor bookie checks. The auditor bookie check, checks ledger metadata to see which bookies should contain entries for each ledger. If a bookie which should contain entries is unavailable, thea the ledger containing that entry is marked for recovery. Setting this to 0 disabled the periodic check. Bookie checks will still run when a bookie fails. The interval is specified in seconds.|86400| -|numAddWorkerThreads|The number of threads that should handle write requests. if zero, the writes would be handled by netty threads directly.|0| -|numReadWorkerThreads|The number of threads that should handle read requests. if zero, the reads would be handled by netty threads directly.|8| -|numHighPriorityWorkerThreads|The umber of threads that should be used for high priority requests (i.e. recovery reads and adds, and fencing).|8| -|maxPendingReadRequestsPerThread|If read workers threads are enabled, limit the number of pending requests, to avoid the executor queue to grow indefinitely.|2500| -|maxPendingAddRequestsPerThread|The limited number of pending requests, which is used to avoid the executor queue to grow indefinitely when add workers threads are enabled.|10000| -|isForceGCAllowWhenNoSpace|Whether force compaction is allowed when the disk is full or almost full. Forcing GC could get some space back, but could also fill up the disk space more quickly. This is because new log files are created before GC, while old garbage log files are deleted after GC.|false| -|verifyMetadataOnGC|True if the bookie should double check `readMetadata` prior to GC.|false| -|flushEntrylogBytes|Entry log flush interval in bytes. Flushing in smaller chunks but more frequently reduces spikes in disk I/O. Flushing too frequently may also affect performance negatively.|268435456| -|readBufferSizeBytes|The number of bytes we should use as capacity for BufferedReadChannel.|4096| -|writeBufferSizeBytes|The number of bytes used as capacity for the write buffer|65536| -|useHostNameAsBookieID|Whether the bookie should use its hostname to register with the coordination service (e.g.: zookeeper service). When false, bookie will use its ip address for the registration.|false| -|bookieId | If you want to custom a bookie ID or use a dynamic network address for the bookie, you can set the `bookieId`.

    Bookie advertises itself using the `bookieId` rather than the `BookieSocketAddress` (`hostname:port` or `IP:port`). If you set the `bookieId`, then the `useHostNameAsBookieID` does not take effect.

    The `bookieId` is a non-empty string that can contain ASCII digits and letters ([a-zA-Z0-9]), colons, dashes, and dots.

    For more information about `bookieId`, see [here](http://bookkeeper.apache.org/bps/BP-41-bookieid/).|N/A|
-|allowEphemeralPorts|Whether the bookie is allowed to use an ephemeral port (port 0) as its server port. By default, an ephemeral port is not allowed. Using an ephemeral port as the service port usually indicates a configuration error. However, in unit tests, using an ephemeral port will address port conflict problems and allow running tests in parallel.|false|
-|enableLocalTransport|Whether the bookie is allowed to listen for the BookKeeper clients executed on the local JVM.|false|
-|disableServerSocketBind|Whether the bookie is allowed to disable bind on network interfaces. This bookie will be available only to BookKeeper clients executed on the local JVM.|false|
-|skipListArenaChunkSize|The number of bytes that we should use as chunk allocation for `org.apache.bookkeeper.bookie.SkipListArena`.|4194304|
-|skipListArenaMaxAllocSize|The maximum size that we should allocate from the skiplist arena. Allocations larger than this should be allocated directly by the VM to avoid fragmentation.|131072|
-|bookieAuthProviderFactoryClass|The factory class name of the bookie authentication provider. If this is null, then there is no authentication.|null|
-|statsProviderClass|The stats provider class.|org.apache.bookkeeper.stats.prometheus.PrometheusMetricsProvider|
-|prometheusStatsHttpPort|The HTTP port used by the Prometheus stats provider.|8000|
-|dbStorage_writeCacheMaxSizeMb|Size of the write cache. Memory is allocated from JVM direct memory. The write cache is used to buffer entries before flushing into the entry log. For good performance, it should be big enough to hold a substantial amount of entries in the flush interval.|25% of direct memory|
-|dbStorage_readAheadCacheMaxSizeMb|Size of the read cache. Memory is allocated from JVM direct memory. This read cache is pre-filled doing read-ahead whenever a cache miss happens. By default, it is allocated to 25% of the available direct memory.|N/A|
-|dbStorage_readAheadCacheBatchSize|How many entries to pre-fill in cache after a read cache miss|1000|
-|dbStorage_rocksDB_blockCacheSize|Size of the RocksDB block cache. For best performance, this cache should be big enough to hold a significant portion of the index database, which can reach ~2GB in some cases. By default, it uses 10% of direct memory.|N/A|
-|dbStorage_rocksDB_writeBufferSizeMB|RocksDB write buffer size in MB.|64|
-|dbStorage_rocksDB_sstSizeInMB|RocksDB SST file size in MB.|64|
-|dbStorage_rocksDB_blockSize|RocksDB block size in bytes.|65536|
-|dbStorage_rocksDB_bloomFilterBitsPerKey|RocksDB Bloom filter bits per key.|10|
-|dbStorage_rocksDB_numLevels|Number of RocksDB levels.|-1|
-|dbStorage_rocksDB_numFilesInLevel0|Number of RocksDB files in level 0.|4|
-|dbStorage_rocksDB_maxSizeInLevel1MB|Max size of RocksDB level 1, in MB.|256|
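-
-As a quick sketch of how to cross-check the bookie settings above against the configuration file shipped with Pulsar (keys taken from the table; the path and values are the shipped defaults):
-
-```bash
-
-$ grep -E '^(journalDirectory|ledgerDirectories)=' conf/bookkeeper.conf
-journalDirectory=data/bookkeeper/journal
-ledgerDirectories=data/bookkeeper/ledgers
-
-```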

    The format is `<listener_name>:pulsar://<host>:<port>`.

    If there are multiple listeners, separate them with commas.

    **Note**: do not use this configuration with `advertisedAddress` and `brokerServicePort`. If the value of this configuration is empty, the broker uses `advertisedAddress` and `brokerServicePort`.|/|
-|internalListenerName|Specify the internal listener name for the broker.<br>

    **Note**: the listener name must be contained in `advertisedListeners`.

    If the value of this configuration is empty, the broker uses the first listener as the internal listener.|/|
-|authenticateOriginalAuthData| If this flag is set to `true`, the broker authenticates the original Auth data; else it just accepts the originalPrincipal and authorizes it (if required). |false|
-|enablePersistentTopics| Whether persistent topics are enabled on the broker |true|
-|enableNonPersistentTopics| Whether non-persistent topics are enabled on the broker |true|
-|functionsWorkerEnabled| Whether the Pulsar Functions worker service is enabled in the broker |false|
-|exposePublisherStats|Whether to enable topic level metrics.|true|
-|statsUpdateFrequencyInSecs||60|
-|statsUpdateInitialDelayInSecs||60|
-|zookeeperServers| Zookeeper quorum connection string ||
-|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300|
-|configurationStoreServers| Configuration store connection string (as a comma-separated list) ||
-|brokerServicePort| Broker data port |6650|
-|brokerServicePortTls| Broker data port for TLS |6651|
-|webServicePort| Port to use to serve HTTP requests |8080|
-|webServicePortTls| Port to use to serve HTTPS requests |8443|
-|webSocketServiceEnabled| Enable the WebSocket API service in broker |false|
-|webSocketNumIoThreads|The number of IO threads in Pulsar Client used in WebSocket proxy.|8|
-|webSocketConnectionsPerBroker|The number of connections per Broker in Pulsar Client used in WebSocket proxy.|8|
-|webSocketSessionIdleTimeoutMillis|Time in milliseconds that idle WebSocket session times out.|300000|
-|webSocketMaxTextFrameSize|The maximum size of a text message during parsing in WebSocket proxy.|1048576|
-|exposeTopicLevelMetricsInPrometheus|Whether to enable topic level metrics.|true|
-|exposeConsumerLevelMetricsInPrometheus|Whether to enable consumer level metrics.|false|
-|jvmGCMetricsLoggerClassName|Classname of Pluggable JVM GC metrics logger that can log GC specific metrics.|N/A|
-|bindAddress| Hostname or IP address the service binds on, default is 0.0.0.0. |0.0.0.0|
-|bindAddresses| Additional hostname or IP addresses the service binds on: `listener_name:scheme://host:port,...`. ||
-|advertisedAddress| Hostname or IP address the service advertises to the outside world. If not set, the value of `InetAddress.getLocalHost().getHostName()` is used. ||
-|clusterName| Name of the cluster to which this broker belongs ||
-|maxTenants|The maximum number of tenants that can be created in each Pulsar cluster. When the number of tenants reaches the threshold, the broker rejects the request of creating a new tenant. The default value 0 disables the check. |0|
-| maxNamespacesPerTenant | The maximum number of namespaces that can be created in each tenant. When the number of namespaces reaches this threshold, the broker rejects the request of creating a new namespace. The default value 0 disables the check. |0|
-|brokerDeduplicationEnabled| Sets the default behavior for message deduplication in the broker. If enabled, the broker will reject messages that were already stored in the topic. This setting can be overridden on a per-namespace basis. |false|
-|brokerDeduplicationMaxNumberOfProducers| The maximum number of producers for which information will be stored for deduplication purposes. |10000|
-|brokerDeduplicationEntriesInterval| The number of entries after which a deduplication informational snapshot is taken. 
A larger interval will lead to fewer snapshots being taken, though this would also lengthen the topic recovery time (the time required for entries published after the snapshot to be replayed). |1000|
-|brokerDeduplicationSnapshotIntervalSeconds| The time period after which a deduplication informational snapshot is taken. It runs simultaneously with `brokerDeduplicationEntriesInterval`. |120|
-|brokerDeduplicationProducerInactivityTimeoutMinutes| The time of inactivity (in minutes) after which the broker will discard deduplication information related to a disconnected producer. |360|
-|dispatchThrottlingRatePerReplicatorInMsg| The default messages per second dispatch throttling-limit for every replicator in replication. The value of `0` means disabling replication message dispatch-throttling| 0 |
-|dispatchThrottlingRatePerReplicatorInByte| The default bytes per second dispatch throttling-limit for every replicator in replication. The value of `0` means disabling replication message-byte dispatch-throttling| 0 |
-|zooKeeperSessionTimeoutMillis| Zookeeper session timeout in milliseconds |30000|
-|brokerShutdownTimeoutMs| Time to wait for broker graceful shutdown. After this time elapses, the process will be killed |60000|
-|skipBrokerShutdownOnOOM| Flag to skip broker shutdown when broker handles Out of memory error. |false|
-|backlogQuotaCheckEnabled| Enable backlog quota check. Enforces action on topic when the quota is reached |true|
-|backlogQuotaCheckIntervalInSeconds| How often to check for topics that have reached the quota |60|
-|backlogQuotaDefaultLimitBytes| The default per-topic backlog quota limit. Being less than 0 means no limitation. By default, it is -1. | -1 |
-|backlogQuotaDefaultRetentionPolicy|The default backlog quota retention policy. By default, it is `producer_request_hold`.<br>
  - 'producer_request_hold': policy which holds the producer's send request until the resource becomes available (or holding times out)<br>
  - 'producer_exception': policy which throws `javax.jms.ResourceAllocationException` to the producer<br>
  - 'consumer_backlog_eviction': policy which evicts the oldest message from the slowest consumer's backlog<br>
|producer_request_hold|
-|allowAutoTopicCreation| Enable topic auto creation if a new producer or consumer connected |true|
-|allowAutoTopicCreationType| The type of topic that is allowed to be automatically created. (partitioned/non-partitioned) |non-partitioned|
-|allowAutoSubscriptionCreation| Enable subscription auto creation if a new consumer connected |true|
-|defaultNumPartitions| The number of partitions when a partitioned topic is automatically created, if `allowAutoTopicCreationType` is partitioned |1|
-|brokerDeleteInactiveTopicsEnabled| Enable the deletion of inactive topics. If topics are not consumed for some while, these inactive topics might be cleaned up. Deleting inactive topics is enabled by default. The default period is 1 minute. |true|
-|brokerDeleteInactiveTopicsFrequencySeconds| How often to check for inactive topics |60|
-| brokerDeleteInactiveTopicsMode | Set the mode to delete inactive topics.<br>
  - `delete_when_no_subscriptions`: delete the topic which has no subscriptions or active producers.<br>
  - `delete_when_subscriptions_caught_up`: delete the topic whose subscriptions have no backlogs and which has no active producers or consumers.<br>
| `delete_when_no_subscriptions` |
-| brokerDeleteInactiveTopicsMaxInactiveDurationSeconds | Set the maximum duration for inactive topics. If it is not specified, the `brokerDeleteInactiveTopicsFrequencySeconds` parameter is adopted. | N/A |
-|forceDeleteTenantAllowed| Allow a tenant to be deleted forcefully. |false|
-|forceDeleteNamespaceAllowed| Allow a namespace to be deleted forcefully. |false|
-|messageExpiryCheckIntervalInMinutes| The frequency of proactively checking and purging expired messages. |5|
-|brokerServiceCompactionMonitorIntervalInSeconds| Interval between checks to determine whether topics with compaction policies need compaction. |60|
-|brokerServiceCompactionThresholdInBytes|If the estimated backlog size is greater than this threshold, compaction is triggered.<br>

    Setting this threshold to 0 disables the compaction check.|N/A
-|delayedDeliveryEnabled| Whether to enable the delayed delivery for messages. If disabled, messages will be immediately delivered and there will be no tracking overhead.|true|
-|delayedDeliveryTickTimeMillis|Control the tick time for retrying on delayed delivery, which affects the accuracy of the delivery time compared to the scheduled time. By default, it is 1 second.|1000|
-|activeConsumerFailoverDelayTimeMillis| How long to delay rewinding cursor and dispatching messages when active consumer is changed. |1000|
-|clientLibraryVersionCheckEnabled| Enable check for minimum allowed client library version |false|
-|clientLibraryVersionCheckAllowUnversioned| Allow client libraries with no version information |true|
-|statusFilePath| Path for the file used to determine the rotation status for the broker when responding to service discovery health checks ||
-|preferLaterVersions| If true (and ModularLoadManagerImpl is being used), the load manager will attempt to use only brokers running the latest software version (to minimize impact to bundles) |false|
-|maxNumPartitionsPerPartitionedTopic|Max number of partitions per partitioned topic. Use 0 or a negative number to disable the check|0|
-| maxSubscriptionsPerTopic | Maximum number of subscriptions allowed to subscribe to a topic. Once this limit is reached, the broker rejects new subscriptions until the number of subscriptions decreases. When the value is set to 0, the limit check is disabled. | 0 |
-| maxProducersPerTopic | Maximum number of producers allowed to connect to a topic. Once this limit is reached, the broker rejects new producers until the number of connected producers decreases. When the value is set to 0, the limit check is disabled. | 0 |
-| maxConsumersPerTopic | Maximum number of consumers allowed to connect to a topic. Once this limit is reached, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 |
-| maxConsumersPerSubscription | Maximum number of consumers allowed to connect to a subscription. Once this limit is reached, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 |
-|tlsEnabled|Deprecated - Use `webServicePortTls` and `brokerServicePortTls` instead. |false|
-|tlsCertificateFilePath| Path for the TLS certificate file ||
-|tlsKeyFilePath| Path for the TLS private key file ||
-|tlsTrustCertsFilePath| Path for the trusted TLS certificate file. This cert is used to verify that any certs presented by connecting clients are signed by a certificate authority. If this verification fails, then the certs are untrusted and the connections are dropped. ||
-|tlsAllowInsecureConnection| Accept untrusted TLS certificate from client. If it is set to `true`, a client with a cert which cannot be verified with the 'tlsTrustCertsFilePath' cert will be allowed to connect to the server, though the cert will not be used for client authentication. |false|
-|tlsProtocols|Specify the TLS protocols the broker will use to negotiate during the TLS handshake. Multiple values can be specified, separated by commas. Example: ```TLSv1.3```, ```TLSv1.2``` ||
-|tlsCiphers|Specify the TLS cipher the broker will use to negotiate during the TLS handshake. Multiple values can be specified, separated by commas. 
Example: ```TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256```||
-|tlsEnabledWithKeyStore| Enable TLS with KeyStore type configuration in broker |false|
-|tlsProvider| TLS Provider for KeyStore type ||
-|tlsKeyStoreType| TLS KeyStore type configuration in broker: JKS, PKCS12 |JKS|
-|tlsKeyStore| TLS KeyStore path in broker ||
-|tlsKeyStorePassword| TLS KeyStore password for broker ||
-|brokerClientTlsEnabledWithKeyStore| Whether the internal client uses KeyStore type to authenticate with Pulsar brokers |false|
-|brokerClientSslProvider| The TLS Provider used by the internal client to authenticate with other Pulsar brokers ||
-|brokerClientTlsTrustStoreType| TLS TrustStore type configuration for the internal client: JKS, PKCS12, used by the internal client to authenticate with Pulsar brokers |JKS|
-|brokerClientTlsTrustStore| TLS TrustStore path for the internal client, used by the internal client to authenticate with Pulsar brokers ||
-|brokerClientTlsTrustStorePassword| TLS TrustStore password for the internal client, used by the internal client to authenticate with Pulsar brokers ||
-|brokerClientTlsCiphers| Specify the TLS cipher the internal client will use to negotiate during the TLS handshake. (a comma-separated list of ciphers) e.g. [TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256]||
-|brokerClientTlsProtocols|Specify the TLS protocols the broker will use to negotiate during the TLS handshake. (a comma-separated list of protocol names). e.g. `TLSv1.3`, `TLSv1.2` ||
-|ttlDurationDefaultInSeconds|The default Time to Live (TTL) for namespaces if the TTL is not configured at namespace policies. When the value is set to `0`, TTL is disabled. By default, TTL is disabled. |0|
-|tokenSecretKey| Configure the secret key to be used to validate auth tokens. The key can be specified like: `tokenSecretKey=data:;base64,xxxxxxxxx` or `tokenSecretKey=file:///my/secret.key`. Note: key file must be DER-encoded.||
-|tokenPublicKey| Configure the public key to be used to validate auth tokens. The key can be specified like: `tokenPublicKey=data:;base64,xxxxxxxxx` or `tokenPublicKey=file:///my/secret.key`. Note: key file must be DER-encoded.||
-|tokenPublicAlg| Configure the algorithm to be used to validate auth tokens. This can be any of the asymmetric algorithms supported by Java JWT (https://github.com/jwtk/jjwt#signature-algorithms-keys) |RS256|
-|tokenAuthClaim| Specify which of the token's claims will be used as the authentication "principal" or "role". The default "sub" claim will be used if this is left blank ||
-|tokenAudienceClaim| The token audience "claim" name, e.g. "aud", that will be used to get the audience from token. If not set, audience will not be verified. ||
-|tokenAudience| The token audience stands for this broker. The field `tokenAudienceClaim` of a valid token needs to contain this. ||
-|maxUnackedMessagesPerConsumer| Max number of unacknowledged messages allowed for a consumer on a shared subscription. The broker will stop sending messages to a consumer once this limit is reached, until the consumer starts acknowledging messages back. Using a value of 0 disables the unackedMessage limit check and the consumer can receive messages without any restriction |50000|
-|maxUnackedMessagesPerSubscription| Max number of unacknowledged messages allowed per shared subscription. The broker will stop dispatching messages to all consumers of the subscription once this limit is reached, until consumers start acknowledging messages back and the unack count reaches limit/2. 
Using a value of 0 disables the unackedMessage limit check and the dispatcher can dispatch messages without any restriction |200000|
-|subscriptionRedeliveryTrackerEnabled| Enable subscription message redelivery tracker |true|
-|subscriptionExpirationTimeMinutes | How long, since the last consumption, to wait before deleting inactive subscriptions.<br>

    Setting this configuration to a value **greater than 0** deletes inactive subscriptions automatically.
    Setting this configuration to **0** does not delete inactive subscriptions automatically.

    Since this configuration takes effect on all topics, if there is even one topic whose subscriptions should not be deleted automatically, you need to set it to 0.
    Instead, you can set a subscription expiration time for each **namespace** using the [`pulsar-admin namespaces set-subscription-expiration-time options` command](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-subscription-expiration-time-em-). | 0 | -|maxConcurrentLookupRequest| Max number of concurrent lookup request broker allows to throttle heavy incoming lookup traffic |50000| -|maxConcurrentTopicLoadRequest| Max number of concurrent topic loading request broker allows to control number of zk-operations |5000| -|authenticationEnabled| Enable authentication |false| -|authenticationProviders| Authentication provider name list, which is comma separated list of class names || -| authenticationRefreshCheckSeconds | Interval of time for checking for expired authentication credentials | 60 | -|authorizationEnabled| Enforce authorization |false| -|superUserRoles| Role names that are treated as "super-user", meaning they will be able to do all admin operations and publish/consume from all topics || -|brokerClientAuthenticationPlugin| Authentication settings of the broker itself. Used when the broker connects to other brokers, either in same or other clusters || -|brokerClientAuthenticationParameters||| -|athenzDomainNames| Supported Athenz provider domain names(comma separated) for authentication || -|exposePreciseBacklogInPrometheus| Enable expose the precise backlog stats, set false to use published counter and consumed counter to calculate, this would be more efficient but may be inaccurate. |false| -|schemaRegistryStorageClassName|The schema storage implementation used by this broker.|org.apache.pulsar.broker.service.schema.BookkeeperSchemaStorageFactory| -|isSchemaValidationEnforced|Enforce schema validation on following cases: if a producer without a schema attempts to produce to a topic with schema, the producer will be failed to connect. PLEASE be carefully on using this, since non-java clients don't support schema. If this setting is enabled, then non-java clients fail to produce.|false| -| topicFencingTimeoutSeconds | If a topic remains fenced for a certain time period (in seconds), it is closed forcefully. If set to 0 or a negative number, the fenced topic is not closed. | 0 | -|offloadersDirectory|The directory for all the offloader implementations.|./offloaders| -|bookkeeperMetadataServiceUri| Metadata service uri that bookkeeper is used for loading corresponding metadata driver and resolving its metadata service location. This value can be fetched using `bookkeeper shell whatisinstanceid` command in BookKeeper cluster. For example: zk+hierarchical://localhost:2181/ledgers. The metadata service uri list can also be semicolon separated values like below: zk+hierarchical://zk1:2181;zk2:2181;zk3:2181/ledgers || -|bookkeeperClientAuthenticationPlugin| Authentication plugin to use when connecting to bookies || -|bookkeeperClientAuthenticationParametersName| BookKeeper auth plugin implementation specifics parameters name and values || -|bookkeeperClientAuthenticationParameters||| -|bookkeeperClientNumWorkerThreads| Number of BookKeeper client worker threads. 
Default is Runtime.getRuntime().availableProcessors() ||
-|bookkeeperClientTimeoutInSeconds| Timeout for BK add / read operations |30|
-|bookkeeperClientSpeculativeReadTimeoutInMillis| Speculative reads are initiated if a read request doesn’t complete within a certain time. Using a value of 0 disables the speculative reads |0|
-|bookkeeperNumberOfChannelsPerBookie| Number of channels per bookie |16|
-|bookkeeperClientHealthCheckEnabled| Enable bookies health check. Bookies that have more than the configured number of failures within the interval will be quarantined for some time. During this period, new ledgers won’t be created on these bookies |true|
-|bookkeeperClientHealthCheckIntervalSeconds|Bookie health check interval in seconds|60|
-|bookkeeperClientHealthCheckErrorThresholdPerInterval|Error threshold per health check interval|5|
-|bookkeeperClientHealthCheckQuarantineTimeInSeconds |Bookie quarantine time in seconds|1800|
-|bookkeeperClientRackawarePolicyEnabled| Enable rack-aware bookie selection policy. BK will choose bookies from different racks when forming a new bookie ensemble |true|
-|bookkeeperClientRegionawarePolicyEnabled| Enable region-aware bookie selection policy. BK will choose bookies from different regions and racks when forming a new bookie ensemble. If enabled, the value of bookkeeperClientRackawarePolicyEnabled is ignored |false|
-|bookkeeperClientMinNumRacksPerWriteQuorum| Minimum number of racks per write quorum. BK rack-aware bookie selection policy will try to get bookies from at least 'bookkeeperClientMinNumRacksPerWriteQuorum' racks for a write quorum. |2|
-|bookkeeperClientEnforceMinNumRacksPerWriteQuorum| Enforces rack-aware bookie selection policy to pick bookies from 'bookkeeperClientMinNumRacksPerWriteQuorum' racks for a writeQuorum. If BK can't find a bookie, it throws BKNotEnoughBookiesException instead of picking a random one. |false|
-|bookkeeperClientReorderReadSequenceEnabled| Enable/disable reordering read sequence on reading entries. |false|
-|bookkeeperClientIsolationGroups| Enable bookie isolation by specifying a list of bookie groups to choose from. Any bookie outside the specified groups will not be used by the broker ||
-|bookkeeperClientSecondaryIsolationGroups| Enable bookie secondary-isolation group if bookkeeperClientIsolationGroups doesn't have enough bookies available. ||
-|bookkeeperClientMinAvailableBookiesInIsolationGroups| Minimum bookies that should be available as part of bookkeeperClientIsolationGroups, else the broker will include bookkeeperClientSecondaryIsolationGroups bookies in the isolated list. ||
-|bookkeeperClientGetBookieInfoIntervalSeconds| Set the interval to periodically check bookie info |86400|
-|bookkeeperClientGetBookieInfoRetryIntervalSeconds| Set the interval to retry a failed bookie info lookup |60|
-|bookkeeperEnableStickyReads | Enable/disable having read operations for a ledger to be sticky to a single bookie. If this flag is enabled, the client will use one single bookie (by preference) to read all entries for a ledger. | true |
-|managedLedgerDefaultEnsembleSize| Number of bookies to use when creating a ledger |2|
-|managedLedgerDefaultWriteQuorum| Number of copies to store for each message |2|
-|managedLedgerDefaultAckQuorum| Number of guaranteed copies (acks to wait before write is complete) |2|
-|managedLedgerCacheSizeMB| Amount of memory to use for caching data payload in managed ledger. This memory is allocated from JVM direct memory and it’s shared across all the topics running in the same broker. 
By default, uses 1/5th of available direct memory ||
-|managedLedgerCacheCopyEntries| Whether we should make a copy of the entry payloads when inserting into the cache| false|
-|managedLedgerCacheEvictionWatermark| Threshold to which the cache level is brought down when eviction is triggered |0.9|
-|managedLedgerCacheEvictionFrequency| Configure the cache eviction frequency for the managed ledger cache (evictions/sec) | 100.0 |
-|managedLedgerCacheEvictionTimeThresholdMillis| All entries that have stayed in cache for more than the configured time will be evicted | 1000 |
-|managedLedgerCursorBackloggedThreshold| Configure the threshold (in number of entries) from where a cursor should be considered 'backlogged' and thus should be set as inactive. | 1000|
-|managedLedgerDefaultMarkDeleteRateLimit| Rate limit the amount of writes per second generated by consumers acking the messages |1.0|
-|managedLedgerMaxEntriesPerLedger| The max number of entries to append to a ledger before triggering a rollover. A ledger rollover is triggered after the min rollover time has passed and one of the following conditions is true:<br>
    • The max rollover time has been reached
    • The max entries have been written to the ledger
    • The max ledger size has been written to the ledger
    |50000|
-|managedLedgerMinLedgerRolloverTimeMinutes| Minimum time between ledger rollover for a topic |10|
-|managedLedgerMaxLedgerRolloverTimeMinutes| Maximum time before forcing a ledger rollover for a topic |240|
-|managedLedgerCursorMaxEntriesPerLedger| Max number of entries to append to a cursor ledger |50000|
-|managedLedgerCursorRolloverTimeInSeconds| Max time before triggering a rollover on a cursor ledger |14400|
-|managedLedgerMaxUnackedRangesToPersist| Max number of "acknowledgment holes" that are going to be persistently stored. When acknowledging out of order, a consumer will leave holes that are supposed to be quickly filled by acking all the messages. The information of which messages are acknowledged is persisted by compressing in "ranges" of messages that were acknowledged. After the max number of ranges is reached, the information will only be tracked in memory and messages will be redelivered in case of crashes. |1000|
-|autoSkipNonRecoverableData| Skip reading non-recoverable/unreadable data-ledgers under the managed-ledger's list. It helps when data-ledgers get corrupted at bookkeeper and the managed-cursor is stuck at that ledger. |false|
-|loadBalancerEnabled| Enable load balancer |true|
-|loadBalancerPlacementStrategy| Strategy to assign a new bundle |weightedRandomSelection|
-|loadBalancerReportUpdateThresholdPercentage| Percentage of change to trigger load report update |10|
-|loadBalancerReportUpdateMaxIntervalMinutes| Maximum interval to update load report |15|
-|loadBalancerHostUsageCheckIntervalMinutes| Frequency of report to collect |1|
-|loadBalancerSheddingIntervalMinutes| Load shedding interval. Broker periodically checks whether some traffic should be offloaded from some over-loaded broker to other under-loaded brokers |30|
-|loadBalancerSheddingGracePeriodMinutes| Prevent the same topics from being shed and moved to another broker more than once within this timeframe |30|
-|loadBalancerBrokerMaxTopics| Usage threshold to allocate max number of topics to broker |50000|
-|loadBalancerBrokerUnderloadedThresholdPercentage| Usage threshold to determine a broker as under-loaded |1|
-|loadBalancerBrokerOverloadedThresholdPercentage| Usage threshold to determine a broker as over-loaded |85|
-|loadBalancerResourceQuotaUpdateIntervalMinutes| Interval to update namespace bundle resource quota |15|
-|loadBalancerBrokerComfortLoadLevelPercentage| Usage threshold to determine a broker is having just the right level of load |65|
-|loadBalancerAutoBundleSplitEnabled| Enable/disable namespace bundle auto split |false|
-|loadBalancerNamespaceBundleMaxTopics| Maximum topics in a bundle, otherwise bundle split will be triggered |1000|
-|loadBalancerNamespaceBundleMaxSessions| Maximum sessions (producers + consumers) in a bundle, otherwise bundle split will be triggered |1000|
-|loadBalancerNamespaceBundleMaxMsgRate| Maximum msgRate (in + out) in a bundle, otherwise bundle split will be triggered |1000|
-|loadBalancerNamespaceBundleMaxBandwidthMbytes| Maximum bandwidth (in + out) in a bundle, otherwise bundle split will be triggered |100|
-|loadBalancerNamespaceMaximumBundles| Maximum number of bundles in a namespace |128|
-|replicationMetricsEnabled| Enable replication metrics |true|
-|replicationConnectionsPerBroker| Max number of connections to open for each broker in a remote cluster. More connections host-to-host lead to better throughput over high-latency links. <br>
|16| -|replicationProducerQueueSize| Replicator producer queue size |1000| -|replicatorPrefix| Replicator prefix used for replicator producer name and cursor name pulsar.repl|| -|replicationTlsEnabled| Enable TLS when talking with other clusters to replicate messages |false| -|brokerServicePurgeInactiveFrequencyInSeconds|Deprecated. Use `brokerDeleteInactiveTopicsFrequencySeconds`.|60| -|transactionCoordinatorEnabled|Whether to enable transaction coordinator in broker.|true| -|transactionMetadataStoreProviderClassName| |org.apache.pulsar.transaction.coordinator.impl.InMemTransactionMetadataStoreProvider| -|defaultRetentionTimeInMinutes| Default message retention time. 0 means retention is disabled. -1 means data is not removed by time quota |0| -|defaultRetentionSizeInMB| Default retention size. 0 means retention is disabled. -1 means data is not removed by size quota |0| -|keepAliveIntervalSeconds| How often to check whether the connections are still alive |30| -|bootstrapNamespaces| The bootstrap name. | N/A | -|loadManagerClassName| Name of load manager to use |org.apache.pulsar.broker.loadbalance.impl.SimpleLoadManagerImpl| -|supportedNamespaceBundleSplitAlgorithms| Supported algorithms name for namespace bundle split |[range_equally_divide,topic_count_equally_divide]| -|defaultNamespaceBundleSplitAlgorithm| Default algorithm name for namespace bundle split |range_equally_divide| -|managedLedgerOffloadDriver| The directory for all the offloader implementations `offloadersDirectory=./offloaders`. Driver to use to offload old data to long term storage (Possible values: S3, aws-s3, google-cloud-storage). When using google-cloud-storage, Make sure both Google Cloud Storage and Google Cloud Storage JSON API are enabled for the project (check from Developers Console -> Api&auth -> APIs). || -|managedLedgerOffloadMaxThreads| Maximum number of thread pool threads for ledger offloading |2| -|managedLedgerOffloadPrefetchRounds|The maximum prefetch rounds for ledger reading for offloading.|1| -|managedLedgerUnackedRangesOpenCacheSetEnabled| Use Open Range-Set to cache unacknowledged messages |true| -|managedLedgerOffloadDeletionLagMs|Delay between a ledger being successfully offloaded to long term storage and the ledger being deleted from bookkeeper | 14400000| -|managedLedgerOffloadAutoTriggerSizeThresholdBytes|The number of bytes before triggering automatic offload to long term storage |-1 (disabled)| -|s3ManagedLedgerOffloadRegion| For Amazon S3 ledger offload, AWS region || -|s3ManagedLedgerOffloadBucket| For Amazon S3 ledger offload, Bucket to place offloaded ledger into || -|s3ManagedLedgerOffloadServiceEndpoint| For Amazon S3 ledger offload, Alternative endpoint to connect to (useful for testing) || -|s3ManagedLedgerOffloadMaxBlockSizeInBytes| For Amazon S3 ledger offload, Max block size in bytes. (64MB by default, 5MB minimum) |67108864| -|s3ManagedLedgerOffloadReadBufferSizeInBytes| For Amazon S3 ledger offload, Read buffer size in bytes (1MB by default) |1048576| -|gcsManagedLedgerOffloadRegion|For Google Cloud Storage ledger offload, region where offload bucket is located. Go to this page for more details: https://cloud.google.com/storage/docs/bucket-locations .|N/A| -|gcsManagedLedgerOffloadBucket|For Google Cloud Storage ledger offload, Bucket to place offloaded ledger into.|N/A| -|gcsManagedLedgerOffloadMaxBlockSizeInBytes|For Google Cloud Storage ledger offload, the maximum block size in bytes. 
(64MB by default, 5MB minimum)|67108864| -|gcsManagedLedgerOffloadReadBufferSizeInBytes|For Google Cloud Storage ledger offload, Read buffer size in bytes. (1MB by default)|1048576| -|gcsManagedLedgerOffloadServiceAccountKeyFile|For Google Cloud Storage, path to json file containing service account credentials. For more details, see the "Service Accounts" section of https://support.google.com/googleapi/answer/6158849 .|N/A| -|fileSystemProfilePath|For File System Storage, file system profile path.|../conf/filesystem_offload_core_site.xml| -|fileSystemURI|For File System Storage, file system uri.|N/A| -|s3ManagedLedgerOffloadRole| For Amazon S3 ledger offload, provide a role to assume before writing to s3 || -|s3ManagedLedgerOffloadRoleSessionName| For Amazon S3 ledger offload, provide a role session name when using a role |pulsar-s3-offload| -| acknowledgmentAtBatchIndexLevelEnabled | Enable or disable the batch index acknowledgement. | false | -|enableReplicatedSubscriptions|Whether to enable tracking of replicated subscriptions state across clusters.|true| -|replicatedSubscriptionsSnapshotFrequencyMillis|The frequency of snapshots for replicated subscriptions tracking.|1000| -|replicatedSubscriptionsSnapshotTimeoutSeconds|The timeout for building a consistent snapshot for tracking replicated subscriptions state.|30| -|replicatedSubscriptionsSnapshotMaxCachedPerSubscription|The maximum number of snapshot to be cached per subscription.|10| -|maxMessagePublishBufferSizeInMB|The maximum memory size for a broker to handle messages that are sent by producers. If the processing message size exceeds this value, the broker stops reading data from the connection. The processing messages refer to the messages that are sent to the broker but the broker has not sent response to the client. Usually the messages are waiting to be written to bookies. It is shared across all the topics running in the same broker. The value `-1` disables the memory limitation. By default, it is 50% of direct memory.|N/A| -|messagePublishBufferCheckIntervalInMillis|Interval between checks to see if message publish buffer size exceeds the maximum. Use `0` or negative number to disable the max publish buffer limiting.|100| -|retentionCheckIntervalInSeconds|Check between intervals to see if consumed ledgers need to be trimmed. Use 0 or negative number to disable the check.|120| -| maxMessageSize | Set the maximum size of a message. | 5242880 | -| preciseTopicPublishRateLimiterEnable | Enable precise topic publish rate limiting. | false | -| lazyCursorRecovery | Whether to recover cursors lazily when trying to recover a managed ledger backing a persistent topic. It can improve write availability of topics. The caveat is now when recovered ledger is ready to write we're not sure if all old consumers' last mark delete position(ack position) can be recovered or not. So user can make the trade off or have custom logic in application to checkpoint consumer state.| false | -|haProxyProtocolEnabled | Enable or disable the [HAProxy](http://www.haproxy.org/) protocol. |false| -| maxTopicsPerNamespace | The maximum number of persistent topics that can be created in the namespace. When the number of topics reaches this threshold, the broker rejects the request of creating a new topic, including the auto-created topics by the producer or consumer, until the number of connected consumers decreases. The default value 0 disables the check. 
| 0 | -|subscriptionTypesEnabled| Enable all subscription types, which are exclusive, shared, failover, and key_shared. | Exclusive, Shared, Failover, Key_Shared | -| managedLedgerInfoCompressionType | Compression type of managed ledger information.

    Available options are `NONE`, `LZ4`, `ZLIB`, `ZSTD`, and `SNAPPY`.

    If this value is `NONE` or invalid, the `managedLedgerInfo` is not compressed.

    **Note** that after enabling this configuration, if you want to downgrade a broker, you need to change the value to `NONE` and make sure all ledger metadata is saved without compression. | NONE |
| additionalServlets | Additional servlet name.

    If you have multiple additional servlets, separate them by commas.

    For example, additionalServlet_1, additionalServlet_2 | N/A |
| additionalServletDirectory | Location of the broker's additional servlet NAR directory. | ./brokerAdditionalServlet |
|narExtractionDirectory | The extraction directory of the NAR package. Available for Protocol Handlers, Additional Servlets, Offloaders, and Broker Interceptors. | System.getProperty("java.io.tmpdir") |
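
To make the entries above concrete, here is a minimal sketch of how a few of them might look in `conf/broker.conf`. The servlet names are the hypothetical placeholders from the table, and the values are illustrative rather than recommended:

```properties
# Managed ledger metadata compression; NONE keeps metadata uncompressed,
# which also keeps a later broker downgrade safe.
managedLedgerInfoCompressionType=NONE

# Hypothetical additional servlets, loaded as NAR files from the directory below.
additionalServlets=additionalServlet_1,additionalServlet_2
additionalServletDirectory=./brokerAdditionalServlet

# Directory where NAR packages are extracted at runtime.
narExtractionDirectory=/tmp/pulsar-nar
```
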
## Client

You can use the [`pulsar-client`](reference-cli-tools.md#pulsar-client) CLI tool to publish messages to and consume messages from Pulsar topics. You can use this tool in place of a client library.

|Name|Description|Default|
|---|---|---|
|webServiceUrl| The web URL for the cluster. |http://localhost:8080/|
|brokerServiceUrl| The Pulsar protocol URL for the cluster. |pulsar://localhost:6650/|
|authPlugin| The authentication plugin. ||
|authParams| The authentication parameters for the cluster, as a comma-separated string. ||
|useTls| Whether to enforce TLS authentication in the cluster. |false|
| tlsAllowInsecureConnection | Allow TLS connections to servers whose certificate cannot be verified to have been signed by a trusted certificate authority. | false |
| tlsEnableHostnameVerification | Whether the server hostname must match the common name of the certificate that is used by the server. | false |
|tlsTrustCertsFilePath| Path for the trusted TLS certificate file. ||
| useKeyStoreTls | Enable TLS with KeyStore type configuration in the broker. | false |
| tlsTrustStoreType | TLS TrustStore type configuration.
  • JKS
  • PKCS12 |JKS|
| tlsTrustStore | TLS TrustStore path. | |
| tlsTrustStorePassword | TLS TrustStore password. | |
| webserviceTlsProvider | The TLS provider for the web service.
    When TLS authentication with CACert is used, the valid value is either `OPENSSL` or `JDK`.
    When TLS authentication with KeyStore is used, available options can be `SunJSSE`, `Conscrypt` and so on. | N/A | - -## Log4j - -You can set the log level and configuration in the [log4j2.yaml](https://github.com/apache/pulsar/blob/d557e0aa286866363bc6261dec87790c055db1b0/conf/log4j2.yaml#L155) file. The following logging configuration parameters are available. - -|Name|Default| -|---|---| -|pulsar.root.logger| WARN,CONSOLE| -|pulsar.log.dir| logs| -|pulsar.log.file| pulsar.log| -|log4j.rootLogger| ${pulsar.root.logger}| -|log4j.appender.CONSOLE| org.apache.log4j.ConsoleAppender| -|log4j.appender.CONSOLE.Threshold| DEBUG| -|log4j.appender.CONSOLE.layout| org.apache.log4j.PatternLayout| -|log4j.appender.CONSOLE.layout.ConversionPattern| %d{ISO8601} - %-5p - [%t:%C{1}@%L] - %m%n| -|log4j.appender.ROLLINGFILE| org.apache.log4j.DailyRollingFileAppender| -|log4j.appender.ROLLINGFILE.Threshold| DEBUG| -|log4j.appender.ROLLINGFILE.File| ${pulsar.log.dir}/${pulsar.log.file}| -|log4j.appender.ROLLINGFILE.layout| org.apache.log4j.PatternLayout| -|log4j.appender.ROLLINGFILE.layout.ConversionPattern| %d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n| -|log4j.appender.TRACEFILE| org.apache.log4j.FileAppender| -|log4j.appender.TRACEFILE.Threshold| TRACE| -|log4j.appender.TRACEFILE.File| pulsar-trace.log| -|log4j.appender.TRACEFILE.layout| org.apache.log4j.PatternLayout| -|log4j.appender.TRACEFILE.layout.ConversionPattern| %d{ISO8601} - %-5p [%t:%C{1}@%L][%x] - %m%n| - -> Note: 'topic' in log4j2.appender is configurable. -> - If you want to append all logs to a single topic, set the same topic name. -> - If you want to append logs to different topics, you can set different topic names. - -## Log4j shell - -|Name|Default| -|---|---| -|bookkeeper.root.logger| ERROR,CONSOLE| -|log4j.rootLogger| ${bookkeeper.root.logger}| -|log4j.appender.CONSOLE| org.apache.log4j.ConsoleAppender| -|log4j.appender.CONSOLE.Threshold| DEBUG| -|log4j.appender.CONSOLE.layout| org.apache.log4j.PatternLayout| -|log4j.appender.CONSOLE.layout.ConversionPattern| %d{ABSOLUTE} %-5p %m%n| -|log4j.logger.org.apache.zookeeper| ERROR| -|log4j.logger.org.apache.bookkeeper| ERROR| -|log4j.logger.org.apache.bookkeeper.bookie.BookieShell| INFO| - - -## Standalone - -|Name|Description|Default| -|---|---|---| -|authenticateOriginalAuthData| If this flag is set to `true`, the broker authenticates the original Auth data; else it just accepts the originalPrincipal and authorizes it (if required). |false| -|zookeeperServers| The quorum connection string for local ZooKeeper || -|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300| -|configurationStoreServers| Configuration store connection string (as a comma-separated list) || -|brokerServicePort| The port on which the standalone broker listens for connections |6650| -|webServicePort| The port used by the standalone broker for HTTP requests |8080| -|bindAddress| The hostname or IP address on which the standalone service binds |0.0.0.0| -|bindAddresses| Additional Hostname or IP addresses the service binds on: `listener_name:scheme://host:port,...`. || -|advertisedAddress| The hostname or IP address that the standalone service advertises to the outside world. If not set, the value of `InetAddress.getLocalHost().getHostName()` is used. 
||
| numAcceptorThreads | Number of threads to use for the Netty acceptor. | 1 |
| numIOThreads | Number of threads to use for Netty IO. | 2 * Runtime.getRuntime().availableProcessors() |
| numHttpServerThreads | Number of threads to use for HTTP request processing. | 2 * Runtime.getRuntime().availableProcessors()|
|isRunningStandalone|This flag controls features that are meant to be used when running in standalone mode.|N/A|
|clusterName| The name of the cluster that this broker belongs to. |standalone|
| failureDomainsEnabled | Enable the cluster's failure domains, which can distribute brokers into logical regions. | false |
|zooKeeperSessionTimeoutMillis| The ZooKeeper session timeout, in milliseconds. |30000|
|zooKeeperOperationTimeoutSeconds|ZooKeeper operation timeout in seconds.|30|
|brokerShutdownTimeoutMs| The time to wait for graceful broker shutdown. After this time elapses, the process will be killed. |60000|
|skipBrokerShutdownOnOOM| Flag to skip broker shutdown when the broker encounters an out-of-memory error. |false|
|backlogQuotaCheckEnabled| Enable the backlog quota check, which enforces a specified action when the quota is reached. |true|
|backlogQuotaCheckIntervalInSeconds| How often to check for topics that have reached the backlog quota. |60|
|backlogQuotaDefaultLimitBytes| The default per-topic backlog quota limit. A value less than 0 means no limitation. By default, it is -1. |-1|
|ttlDurationDefaultInSeconds|The default Time to Live (TTL) for namespaces if the TTL is not configured in namespace policies. When the value is set to `0`, TTL is disabled. By default, TTL is disabled. |0|
|brokerDeleteInactiveTopicsEnabled| Enable the deletion of inactive topics. If topics are not consumed for a while, these inactive topics might be cleaned up. Deleting inactive topics is enabled by default. The default period is 1 minute. |true|
|brokerDeleteInactiveTopicsFrequencySeconds| How often to check for inactive topics, in seconds. |60|
| maxPendingPublishRequestsPerConnection | Maximum pending publish requests per connection, to avoid keeping a large number of pending requests in memory. | 1000|
|messageExpiryCheckIntervalInMinutes| How often to proactively check and purge expired messages. |5|
|activeConsumerFailoverDelayTimeMillis| How long to delay rewinding the cursor and dispatching messages when the active consumer changes. |1000|
| subscriptionExpirationTimeMinutes | How long after the last consumption before an inactive subscription is deleted. When it is set to 0, inactive subscriptions are not deleted automatically. | 0 |
| subscriptionRedeliveryTrackerEnabled | Enable the subscription message redelivery tracker to send the redelivery count to the consumer. | true |
|subscriptionKeySharedEnable|Whether to enable the Key_Shared subscription.|true|
| subscriptionKeySharedUseConsistentHashing | In the Key_Shared subscription mode, with the default AUTO_SPLIT mode, use splitting ranges or consistent hashing to reassign keys to new consumers. | false |
| subscriptionKeySharedConsistentHashingReplicaPoints | In the Key_Shared subscription mode, the number of points in the consistent-hashing ring. The greater the number, the more equal the assignment of keys to consumers. | 100 |
| subscriptionExpiryCheckIntervalInMinutes | How frequently to proactively check and purge expired subscriptions. |5 |
| brokerDeduplicationEnabled | Set the default behavior for message deduplication in the broker. This can be overridden per-namespace. If it is enabled, the broker rejects messages that are already stored in the topic.
| false |
| brokerDeduplicationMaxNumberOfProducers | Maximum number of producers' information to persist for deduplication purposes. | 10000 |
| brokerDeduplicationEntriesInterval | Number of entries after which a deduplication information snapshot is taken. A greater interval leads to fewer snapshots being taken, though it increases topic recovery time because the entries published after the snapshot need to be replayed. | 1000 |
| brokerDeduplicationProducerInactivityTimeoutMinutes | The time of inactivity (in minutes) after which the broker discards deduplication information related to a disconnected producer. | 360 |
| defaultNumberOfNamespaceBundles | When a namespace is created without specifying the number of bundles, this value is used as the default setting.| 4 |
|clientLibraryVersionCheckEnabled| Enable checks for the minimum allowed client library version. |false|
|clientLibraryVersionCheckAllowUnversioned| Allow client libraries with no version information. |true|
|statusFilePath| The path for the file used to determine the rotation status for the broker when responding to service discovery health checks. |/usr/local/apache/htdocs|
|maxUnackedMessagesPerConsumer| The maximum number of unacknowledged messages allowed to be received by consumers on a shared subscription. The broker stops sending messages to a consumer once this limit is reached, until the consumer begins acknowledging messages. A value of 0 disables the unacked message limit check and thus allows consumers to receive messages without any restrictions. |50000|
|maxUnackedMessagesPerSubscription| The same as above, except per subscription rather than per consumer. |200000|
| maxUnackedMessagesPerBroker | Maximum number of unacknowledged messages allowed per broker. Once this limit is reached, the broker stops dispatching messages to all shared subscriptions that have a higher number of unacknowledged messages, until the subscriptions start acknowledging messages back and the unacknowledged message count drops to limit/2. When the value is set to 0, the unacknowledged message limit check is disabled and the broker does not block dispatchers. | 0 |
| maxUnackedMessagesPerSubscriptionOnBrokerBlocked | Once the broker reaches the maxUnackedMessagesPerBroker limit, it blocks subscriptions whose unacknowledged messages exceed this percentage limit; such a subscription does not receive any new messages until it acknowledges messages back. | 0.16 |
| unblockStuckSubscriptionEnabled|The broker periodically checks whether a subscription is stuck and unblocks it if this flag is enabled.|false|
|zookeeperSessionExpiredPolicy|There are two policies for when a ZooKeeper session expiry happens: "shutdown" and "reconnect". With the "shutdown" policy, the broker is shut down when the ZooKeeper session expires. With the "reconnect" policy, the broker tries to reconnect to the ZooKeeper server and re-register metadata with ZooKeeper. Note: the "reconnect" policy is an experimental feature.|shutdown|
| topicPublisherThrottlingTickTimeMillis | Tick time to schedule the task that checks topic publish rate limiting across all topics. A lower value can improve throttling accuracy but uses more CPU for the frequent checks. (Set to 0 to disable publish throttling.) | 10|
| brokerPublisherThrottlingTickTimeMillis | Tick time to schedule the task that checks broker publish rate limiting across all topics.
A lower value can improve throttling accuracy but uses more CPU for the frequent checks. When the value is set to 0, publish throttling is disabled. |50 |
| brokerPublisherThrottlingMaxMessageRate | Maximum rate (in 1 second) of messages allowed to publish for a broker if the message rate limiting is enabled. When the value is set to 0, message rate limiting is disabled. | 0|
| brokerPublisherThrottlingMaxByteRate | Maximum rate (in 1 second) of bytes allowed to publish for a broker if the byte rate limiting is enabled. When the value is set to 0, the byte rate limiting is disabled. | 0 |
|subscribeThrottlingRatePerConsumer|Too many subscribe requests from a consumer can cause the broker to rewind consumer cursors and load data from bookies, causing high network bandwidth usage. When a positive value is set, the broker throttles the subscribe requests for one consumer. Otherwise, the throttling is disabled. By default, throttling is disabled.|0|
|subscribeRatePeriodPerConsumerInSecond|Rate period for {subscribeThrottlingRatePerConsumer}. By default, it is 30s.|30|
| dispatchThrottlingRatePerTopicInMsg | Default message (per second) dispatch throttling limit for every topic. When the value is set to 0, the default message dispatch throttling limit is disabled. |0 |
| dispatchThrottlingRatePerTopicInByte | Default byte (per second) dispatch throttling limit for every topic. When the value is set to 0, the default byte dispatch throttling limit is disabled. | 0|
| dispatchThrottlingRateRelativeToPublishRate | Enable dispatch rate-limiting relative to publish rate. | false |
|dispatchThrottlingRatePerSubscriptionInMsg|The default message dispatch throttling limit for a subscription. The value of 0 disables message dispatch throttling.|0|
|dispatchThrottlingRatePerSubscriptionInByte|The default message-byte dispatch throttling limit for a subscription. The value of 0 disables message-byte dispatch throttling.|0|
| dispatchThrottlingOnNonBacklogConsumerEnabled | Enable dispatch throttling for both caught-up consumers as well as consumers who have backlogs. | true |
|dispatcherMaxReadBatchSize|The maximum number of entries to read from BookKeeper. By default, it is 100 entries.|100|
|dispatcherMaxReadSizeBytes|The maximum size in bytes of entries to read from BookKeeper. By default, it is 5MB.|5242880|
|dispatcherMinReadBatchSize|The minimum number of entries to read from BookKeeper. By default, it is 1 entry. When an error occurs while reading entries from BookKeeper, the broker backs off the batch size to this minimum number.|1|
|dispatcherMaxRoundRobinBatchSize|The maximum number of entries to dispatch for a shared subscription. By default, it is 20 entries.|20|
| preciseDispatcherFlowControl | Precise dispatcher flow control according to the history message number of each entry. | false |
| streamingDispatch | Whether to use the streaming read dispatcher. It can be useful when there is a huge backlog to drain: instead of reading in micro-batches, the read from BookKeeper is streamed to make the most of consumer capacity until the BookKeeper read limit or the consumer processing limit is hit, after which consumer flow control can be used to tune the speed. This feature is currently in preview and may change in subsequent releases. | false |
| maxConcurrentLookupRequest | Maximum number of concurrent lookup requests that the broker allows, to throttle heavy incoming lookup traffic.
| 50000 |
| maxConcurrentTopicLoadRequest | Maximum number of concurrent topic loading requests that the broker allows, to control the number of zk-operations. | 5000 |
| maxConcurrentNonPersistentMessagePerConnection | Maximum number of concurrent non-persistent messages that can be processed per connection. | 1000 |
| numWorkerThreadsForNonPersistentTopic | Number of worker threads to serve non-persistent topics. | 8 |
| enablePersistentTopics | Enable the broker to load persistent topics. | true |
| enableNonPersistentTopics | Enable the broker to load non-persistent topics. | true |
| maxNumPartitionsPerPartitionedTopic | Maximum number of partitions per partitioned topic. When the value is set to a negative number or is set to 0, the check is disabled. | 0 |
| maxSubscriptionsPerTopic | Maximum number of subscriptions allowed to subscribe to a topic. Once this limit is reached, the broker rejects new subscriptions until the number of subscriptions decreases. When the value is set to 0, the limit check is disabled. | 0 |
| maxProducersPerTopic | Maximum number of producers allowed to connect to a topic. Once this limit is reached, the broker rejects new producers until the number of connected producers decreases. When the value is set to 0, the limit check is disabled. | 0 |
| maxConsumersPerTopic | Maximum number of consumers allowed to connect to a topic. Once this limit is reached, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 |
| maxConsumersPerSubscription | Maximum number of consumers allowed to connect to a subscription. Once this limit is reached, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 |
| tlsCertRefreshCheckDurationSec | TLS certificate refresh duration in seconds. When the value is set to 0, the TLS certificate is checked on every new connection. | 300 |
| tlsCertificateFilePath | Path for the TLS certificate file. | |
| tlsKeyFilePath | Path for the TLS private key file. | |
| tlsTrustCertsFilePath | Path for the trusted TLS certificate file.| |
| tlsAllowInsecureConnection | Accept untrusted TLS certificates from clients. If it is set to true, a client with a certificate that cannot be verified with the 'tlsTrustCertsFilePath' certificate is allowed to connect to the server, though the certificate is not used for client authentication. | false |
| tlsProtocols | Specify the TLS protocols the broker uses to negotiate during the TLS handshake. | |
| tlsCiphers | Specify the TLS ciphers the broker uses to negotiate during the TLS handshake. | |
| tlsRequireTrustedClientCertOnConnect | Trusted client certificates are required to connect with TLS. The connection is rejected if the client certificate is not trusted. In effect, this requires that all connecting clients perform TLS client authentication. | false |
| tlsEnabledWithKeyStore | Enable TLS with KeyStore type configuration in the broker. | false |
| tlsProvider | TLS provider for the KeyStore type. | |
| tlsKeyStoreType | TLS KeyStore type configuration in the broker.
  • JKS
  • PKCS12 |JKS|
| tlsKeyStore | TLS KeyStore path in the broker. | |
| tlsKeyStorePassword | TLS KeyStore password for the broker. | |
| tlsTrustStoreType | TLS TrustStore type configuration in the broker.
  • JKS
  • PKCS12 |JKS|
| tlsTrustStore | TLS TrustStore path in the broker. | |
| tlsTrustStorePassword | TLS TrustStore password for the broker. | |
| brokerClientTlsEnabledWithKeyStore | Configure whether the internal client uses the KeyStore type to authenticate with Pulsar brokers. | false |
| brokerClientSslProvider | The TLS provider used by the internal client to authenticate with other Pulsar brokers. | |
| brokerClientTlsTrustStoreType | TLS TrustStore type configuration for the internal client to authenticate with Pulsar brokers.
  • JKS
  • PKCS12 | JKS |
| brokerClientTlsTrustStore | TLS TrustStore path for the internal client to authenticate with Pulsar brokers. | |
| brokerClientTlsTrustStorePassword | TLS TrustStore password for the internal client to authenticate with Pulsar brokers. | |
| brokerClientTlsCiphers | Specify the TLS cipher that the internal client uses to negotiate during TLS handshake. | |
| brokerClientTlsProtocols | Specify the TLS protocols that the broker uses to negotiate during TLS handshake. | |
| systemTopicEnabled | Enable/Disable system topics. | false |
| topicLevelPoliciesEnabled | Enable or disable topic-level policies. Topic-level policies depend on the system topic, so enable the system topic first. | false |
| topicFencingTimeoutSeconds | If a topic remains fenced for a certain time period (in seconds), it is closed forcefully. If set to 0 or a negative number, the fenced topic is not closed. | 0 |
| proxyRoles | Role names that are treated as "proxy roles". If the broker sees a request with a proxy role, it demands to see a valid original principal. | |
|authenticationEnabled| Enable authentication for the broker. |false|
|authenticationProviders| A comma-separated list of class names for authentication providers. |false|
|authorizationEnabled| Enforce authorization in brokers. |false|
| authorizationProvider | Authorization provider fully qualified class-name. | org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider |
| authorizationAllowWildcardsMatching | Allow wildcard matching in authorization. Wildcard matching is applicable only when the wildcard character (*) appears at the **first** or **last** position. | false |
|superUserRoles| Role names that are treated as "superusers." Superusers are authorized to perform all admin tasks. | |
|brokerClientAuthenticationPlugin| The authentication settings of the broker itself. Used when the broker connects to other brokers either in the same cluster or from other clusters. | |
|brokerClientAuthenticationParameters| The parameters that go along with the plugin specified using brokerClientAuthenticationPlugin. | |
|athenzDomainNames| Supported Athenz authentication provider domain names as a comma-separated list. | |
| anonymousUserRole | When this parameter is not empty, unauthenticated users perform as anonymousUserRole. | |
|tokenSecretKey| Configure the secret key to be used to validate auth tokens. The key can be specified like: `tokenSecretKey=data:;base64,xxxxxxxxx` or `tokenSecretKey=file:///my/secret.key`. Note: the key file must be DER-encoded.||
|tokenPublicKey| Configure the public key to be used to validate auth tokens. The key can be specified like: `tokenPublicKey=data:;base64,xxxxxxxxx` or `tokenPublicKey=file:///my/secret.key`. Note: the key file must be DER-encoded.||
|tokenAuthClaim| Specify the token claim that will be used as the authentication "principal" or "role". The "subject" field will be used if this is left blank. ||
|tokenAudienceClaim| The token audience "claim" name, e.g. "aud". It is used to get the audience from the token. If it is not set, the audience is not verified. ||
| tokenAudience | The token audience stands for this broker. The field `tokenAudienceClaim` of a valid token needs to contain this parameter.| |
|saslJaasClientAllowedIds|This is a regexp which limits the range of possible ids that can connect to the broker using SASL.
By default, it is set to `SaslConstants.JAAS_CLIENT_ALLOWED_IDS_DEFAULT`, which is ".*pulsar.*", so only clients whose id contains 'pulsar' are allowed to connect.|N/A|
|saslJaasBrokerSectionName|Service principal, for the login context name. By default, it is set to `SaslConstants.JAAS_DEFAULT_BROKER_SECTION_NAME`, which is "Broker".|N/A|
|httpMaxRequestSize|If the value is larger than 0, the broker rejects all HTTP requests with bodies larger than the configured limit.|-1|
|exposePreciseBacklogInPrometheus| Enable exposing the precise backlog stats. Set to false to calculate the backlog from the published counter and the consumed counter, which is more efficient but may be inaccurate. |false|
|bookkeeperMetadataServiceUri|The metadata service URI that BookKeeper uses for loading the corresponding metadata driver and resolving its metadata service location. This value can be fetched using the `bookkeeper shell whatisinstanceid` command in the BookKeeper cluster. For example: `zk+hierarchical://localhost:2181/ledgers`. The metadata service URI list can also be semicolon-separated values like: `zk+hierarchical://zk1:2181;zk2:2181;zk3:2181/ledgers`.|N/A|
|bookkeeperClientAuthenticationPlugin| Authentication plugin to be used when connecting to bookies (BookKeeper servers). ||
|bookkeeperClientAuthenticationParametersName| BookKeeper authentication plugin implementation parameters and values. ||
|bookkeeperClientAuthenticationParameters| Parameters associated with the bookkeeperClientAuthenticationParametersName. ||
|bookkeeperClientNumWorkerThreads| Number of BookKeeper client worker threads. Default is Runtime.getRuntime().availableProcessors(). ||
|bookkeeperClientTimeoutInSeconds| Timeout for BookKeeper add and read operations. |30|
|bookkeeperClientSpeculativeReadTimeoutInMillis| Speculative reads are initiated if a read request doesn't complete within a certain time. A value of 0 disables speculative reads. |0|
|bookkeeperUseV2WireProtocol|Use the older BookKeeper wire protocol (v2) with bookies.|true|
|bookkeeperClientHealthCheckEnabled| Enable bookie health checks. |true|
|bookkeeperClientHealthCheckIntervalSeconds| The time interval, in seconds, at which health checks are performed. New ledgers are not created during health checks. |60|
|bookkeeperClientHealthCheckErrorThresholdPerInterval| Error threshold for health checks. |5|
|bookkeeperClientHealthCheckQuarantineTimeInSeconds| If bookies have more than the allowed number of failures within the time interval specified by bookkeeperClientHealthCheckIntervalSeconds, they are quarantined for this time, in seconds. |1800|
|bookkeeperClientGetBookieInfoIntervalSeconds|Specify options for the GetBookieInfo check. This setting helps keep the list of bookies up to date on the brokers.|86400|
|bookkeeperClientGetBookieInfoRetryIntervalSeconds|Specify options for the GetBookieInfo check. This setting helps keep the list of bookies up to date on the brokers.|60|
|bookkeeperClientRackawarePolicyEnabled| |true|
|bookkeeperClientRegionawarePolicyEnabled| |false|
|bookkeeperClientMinNumRacksPerWriteQuorum| |2|
|bookkeeperClientEnforceMinNumRacksPerWriteQuorum| |false|
|bookkeeperClientReorderReadSequenceEnabled| |false|
|bookkeeperClientIsolationGroups|||
|bookkeeperClientSecondaryIsolationGroups| Enable a bookie secondary-isolation group if bookkeeperClientIsolationGroups doesn't have enough bookies available.
||
|bookkeeperClientMinAvailableBookiesInIsolationGroups| The minimum number of bookies that should be available as part of bookkeeperClientIsolationGroups; otherwise, the broker includes bookkeeperClientSecondaryIsolationGroups bookies in the isolated list. ||
| bookkeeperTLSProviderFactoryClass | Set the client security provider factory class name. | org.apache.bookkeeper.tls.TLSContextFactory |
| bookkeeperTLSClientAuthentication | Enable TLS authentication with bookies. | false |
| bookkeeperTLSKeyFileType | Supported types: PEM, JKS, PKCS12. | PEM |
| bookkeeperTLSTrustCertTypes | Supported types: PEM, JKS, PKCS12. | PEM |
| bookkeeperTLSKeyStorePasswordPath | Path to the file containing the keystore password, if the client keystore is password protected. | |
| bookkeeperTLSTrustStorePasswordPath | Path to the file containing the truststore password, if the client truststore is password protected. | |
| bookkeeperTLSKeyFilePath | Path for the TLS private key file. | |
| bookkeeperTLSCertificateFilePath | Path for the TLS certificate file. | |
| bookkeeperTLSTrustCertsFilePath | Path for the trusted TLS certificate file. | |
| bookkeeperTlsCertFilesRefreshDurationSeconds | TLS certificate refresh duration for the BookKeeper client, in seconds (0 disables the check). | |
| bookkeeperDiskWeightBasedPlacementEnabled | Enable/Disable disk-weight-based placement. | false |
| bookkeeperExplicitLacIntervalInMills | Set the interval to check the need for sending an explicit LAC. When the value is set to 0, no explicit LAC is sent. | 0 |
| bookkeeperClientExposeStatsToPrometheus | Expose BookKeeper client managed ledger stats to Prometheus. | false |
|managedLedgerDefaultEnsembleSize| |1|
|managedLedgerDefaultWriteQuorum| |1|
|managedLedgerDefaultAckQuorum| |1|
| managedLedgerDigestType | Default type of checksum to use when writing to BookKeeper. | CRC32C |
| managedLedgerNumSchedulerThreads | Number of threads to be used for managed ledger scheduled tasks. | 8 |
|managedLedgerCacheSizeMB| |N/A|
|managedLedgerCacheCopyEntries| Whether to copy the entry payloads when inserting into the cache.| false|
|managedLedgerCacheEvictionWatermark| |0.9|
|managedLedgerCacheEvictionFrequency| Configure the cache eviction frequency for the managed ledger cache (evictions/sec). | 100.0 |
|managedLedgerCacheEvictionTimeThresholdMillis| All entries that have stayed in the cache for more than the configured time will be evicted. | 1000 |
|managedLedgerCursorBackloggedThreshold| Configure the threshold (in number of entries) from which a cursor should be considered 'backlogged' and thus set as inactive. | 1000|
|managedLedgerUnackedRangesOpenCacheSetEnabled| Use an open range set to cache unacknowledged messages. |true|
|managedLedgerDefaultMarkDeleteRateLimit| |0.1|
|managedLedgerMaxEntriesPerLedger| |50000|
|managedLedgerMinLedgerRolloverTimeMinutes| |10|
|managedLedgerMaxLedgerRolloverTimeMinutes| |240|
|managedLedgerCursorMaxEntriesPerLedger| |50000|
|managedLedgerCursorRolloverTimeInSeconds| |14400|
| managedLedgerMaxSizePerLedgerMbytes | Maximum ledger size before triggering a rollover for a topic. | 2048 |
| managedLedgerMaxUnackedRangesToPersist | Maximum number of "acknowledgment holes" that are going to be persistently stored. When acknowledging out of order, a consumer leaves holes that are supposed to be quickly filled by acknowledging all the messages. The information about which messages are acknowledged is persisted by compressing it into "ranges" of acknowledged messages.
After the max number of ranges is reached, the information is only tracked in memory and messages are redelivered in case of crashes. | 10000 |
| managedLedgerMaxUnackedRangesToPersistInZooKeeper | Maximum number of "acknowledgment holes" that can be stored in ZooKeeper. If the number of unacknowledged message ranges is higher than this limit, the broker persists unacknowledged ranges into BookKeeper to avoid additional data overhead in ZooKeeper. | 1000 |
|autoSkipNonRecoverableData| |false|
| managedLedgerMetadataOperationsTimeoutSeconds | Operation timeout while updating managed-ledger metadata. | 60 |
| managedLedgerReadEntryTimeoutSeconds | Read entry timeout when the broker tries to read messages from BookKeeper. | 0 |
| managedLedgerAddEntryTimeoutSeconds | Add entry timeout when the broker tries to publish a message to BookKeeper. | 0 |
| managedLedgerNewEntriesCheckDelayInMillis | New entries check delay for the cursor under the managed ledger. If there are no new messages in the topic, the cursor tries to check again after the delay time. For consumption-latency-sensitive scenarios, you can set the value to a smaller value or 0; however, a smaller value may degrade consumption throughput. By default, it is 10 ms.|10|
| managedLedgerPrometheusStatsLatencyRolloverSeconds | Managed ledger Prometheus stats latency rollover seconds. | 60 |
| managedLedgerTraceTaskExecution | Whether to trace managed ledger task execution time. | true |
|loadBalancerEnabled| |false|
|loadBalancerPlacementStrategy| |weightedRandomSelection|
|loadBalancerReportUpdateThresholdPercentage| |10|
|loadBalancerReportUpdateMaxIntervalMinutes| |15|
|loadBalancerHostUsageCheckIntervalMinutes| |1|
|loadBalancerSheddingIntervalMinutes| |30|
|loadBalancerSheddingGracePeriodMinutes| |30|
|loadBalancerBrokerMaxTopics| |50000|
|loadBalancerBrokerUnderloadedThresholdPercentage| |1|
|loadBalancerBrokerOverloadedThresholdPercentage| |85|
|loadBalancerResourceQuotaUpdateIntervalMinutes| |15|
|loadBalancerBrokerComfortLoadLevelPercentage| |65|
|loadBalancerAutoBundleSplitEnabled| |false|
| loadBalancerAutoUnloadSplitBundlesEnabled | Enable/Disable automatic unloading of split bundles. | true |
|loadBalancerNamespaceBundleMaxTopics| |1000|
|loadBalancerNamespaceBundleMaxSessions| |1000|
|loadBalancerNamespaceBundleMaxMsgRate| |1000|
|loadBalancerNamespaceBundleMaxBandwidthMbytes| |100|
|loadBalancerNamespaceMaximumBundles| |128|
| loadBalancerBrokerThresholdShedderPercentage | The broker resource usage threshold. When the broker resource usage is greater than the Pulsar cluster average resource usage, the threshold shedder is triggered to offload bundles from the broker. It only takes effect in the ThresholdShedder strategy. | 10 |
| loadBalancerHistoryResourcePercentage | The history usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 0.9 |
| loadBalancerBandwithInResourceWeight | The inbound bandwidth usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 |
| loadBalancerBandwithOutResourceWeight | The outbound bandwidth usage weight when calculating new resource usage.
It only takes effect in the ThresholdShedder strategy. | 1.0 |
| loadBalancerCPUResourceWeight | The CPU usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 |
| loadBalancerMemoryResourceWeight | The heap memory usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 |
| loadBalancerDirectMemoryResourceWeight | The direct memory usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 |
| loadBalancerBundleUnloadMinThroughputThreshold | Bundle unload minimum throughput threshold, to avoid unloading bundles too frequently. It only takes effect in the ThresholdShedder strategy. | 10 |
|replicationMetricsEnabled| |true|
|replicationConnectionsPerBroker| |16|
|replicationProducerQueueSize| |1000|
| replicationPolicyCheckDurationSeconds | Duration to check the replication policy, to avoid replicator inconsistency due to a missing ZooKeeper watch. When the value is set to 0, checking the replication policy is disabled. | 600 |
|defaultRetentionTimeInMinutes| |0|
|defaultRetentionSizeInMB| |0|
|keepAliveIntervalSeconds| |30|
|haProxyProtocolEnabled | Enable or disable the [HAProxy](http://www.haproxy.org/) protocol. |false|
|bookieId | If you want to customize a bookie ID or use a dynamic network address for a bookie, you can set the `bookieId`.

    Bookie advertises itself using the `bookieId` rather than the `BookieSocketAddress` (`hostname:port` or `IP:port`).

    The `bookieId` is a non-empty string that can contain ASCII digits and letters ([a-zA-Z0-9]), colons, dashes, and dots.

    For more information about `bookieId`, see [here](http://bookkeeper.apache.org/bps/BP-41-bookieid/).|/|
| maxTopicsPerNamespace | The maximum number of persistent topics that can be created in the namespace. When the number of topics reaches this threshold, the broker rejects requests to create new topics, including topics auto-created by a producer or consumer, until the number of topics decreases. The default value 0 disables the check. | 0 |

## WebSocket

|Name|Description|Default|
|---|---|---|
|configurationStoreServers |||
|zooKeeperSessionTimeoutMillis| |30000|
|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300|
|serviceUrl|||
|serviceUrlTls|||
|brokerServiceUrl|||
|brokerServiceUrlTls|||
|webServicePort||8080|
|webServicePortTls||8443|
|bindAddress||0.0.0.0|
|clusterName |||
|authenticationEnabled||false|
|authenticationProviders|||
|authorizationEnabled||false|
|superUserRoles |||
|brokerClientAuthenticationPlugin|||
|brokerClientAuthenticationParameters|||
|tlsEnabled||false|
|tlsAllowInsecureConnection||false|
|tlsCertificateFilePath|||
|tlsKeyFilePath |||
|tlsTrustCertsFilePath|||

## Pulsar proxy

The [Pulsar proxy](concepts-architecture-overview.md#pulsar-proxy) can be configured in the `conf/proxy.conf` file.

|Name|Description|Default|
|---|---|---|
|forwardAuthorizationCredentials| Forward client authorization credentials to the broker for re-authorization; make sure authentication is enabled for this to take effect. |false|
|zookeeperServers| The ZooKeeper quorum connection string (as a comma-separated list) ||
|configurationStoreServers| Configuration store connection string (as a comma-separated list) ||
| brokerServiceURL | The service URL pointing to the broker cluster. Must begin with `pulsar://`. | |
| brokerServiceURLTLS | The TLS service URL pointing to the broker cluster. Must begin with `pulsar+ssl://`. | |
| brokerWebServiceURL | The Web service URL pointing to the broker cluster | |
| brokerWebServiceURLTLS | The TLS Web service URL pointing to the broker cluster | |
| functionWorkerWebServiceURL | The Web service URL pointing to the function worker cluster. It is only configured when you set up function workers in a separate cluster. | |
| functionWorkerWebServiceURLTLS | The TLS Web service URL pointing to the function worker cluster. It is only configured when you set up function workers in a separate cluster. | |
|zookeeperSessionTimeoutMs| ZooKeeper session timeout (in milliseconds) |30000|
|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300|
|advertisedAddress|Hostname or IP address the service advertises to the outside world. If not set, the value of `InetAddress.getLocalHost().getHostname()` is used.|N/A|
|servicePort| The port to use to serve binary Protobuf requests |6650|
|servicePortTls| The port to use to serve binary Protobuf TLS requests |6651|
|statusFilePath| Path for the file used to determine the rotation status for the proxy instance when responding to service discovery health checks ||
| proxyLogLevel | Proxy log level
  • 0: Do not log any TCP channel information.
  • 1: Parse and log any TCP channel information and command information without message body.
  • 2: Parse and log channel information, command information and message body. | 0 |
|authenticationEnabled| Whether authentication is enabled for the Pulsar proxy. |false|
|authenticateMetricsEndpoint| Whether the '/metrics' endpoint requires authentication. Defaults to true. 'authenticationEnabled' must also be set for this to take effect. |true|
|authenticationProviders| Authentication provider name list (a comma-separated list of class names). ||
|authorizationEnabled| Whether authorization is enforced by the Pulsar proxy. |false|
|authorizationProvider| Authorization provider as a fully qualified class name. |org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider|
| anonymousUserRole | When this parameter is not empty, unauthenticated users perform as anonymousUserRole. | |
|brokerClientAuthenticationPlugin| The authentication plugin used by the Pulsar proxy to authenticate with Pulsar brokers. ||
|brokerClientAuthenticationParameters| The authentication parameters used by the Pulsar proxy to authenticate with Pulsar brokers. ||
|brokerClientTrustCertsFilePath| The path to trusted certificates used by the Pulsar proxy to authenticate with Pulsar brokers. ||
|superUserRoles| Role names that are treated as "super-users," meaning that they will be able to perform all admin operations. ||
|maxConcurrentInboundConnections| Max concurrent inbound connections. The proxy rejects requests beyond that. |10000|
|maxConcurrentLookupRequests| Max concurrent outbound connections. The proxy errors out requests beyond that. |50000|
|tlsEnabledInProxy| Deprecated - use `servicePortTls` and `webServicePortTls` instead. |false|
|tlsEnabledWithBroker| Whether TLS is enabled when communicating with Pulsar brokers. |false|
| tlsCertRefreshCheckDurationSec | TLS certificate refresh duration in seconds. If the value is set to 0, the TLS certificate is checked on every new connection. | 300 |
|tlsCertificateFilePath| Path for the TLS certificate file. ||
|tlsKeyFilePath| Path for the TLS private key file. ||
|tlsTrustCertsFilePath| Path for the trusted TLS certificate PEM file. ||
|tlsHostnameVerificationEnabled| Whether the hostname is validated when the proxy creates a TLS connection with brokers. |false|
|tlsRequireTrustedClientCertOnConnect| Whether client certificates are required for TLS. Connections are rejected if the client certificate isn't trusted. |false|
|tlsProtocols|Specify the TLS protocols the proxy uses to negotiate during the TLS handshake. Multiple values can be specified, separated by commas. Example: `TLSv1.3`, `TLSv1.2` ||
|tlsCiphers|Specify the TLS ciphers the proxy uses to negotiate during the TLS handshake. Multiple values can be specified, separated by commas. Example: `TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256`||
| httpReverseProxyConfigs | HTTP directives to redirect to non-Pulsar services. | |
| httpOutputBufferSize | HTTP output buffer size. The amount of data that will be buffered for HTTP requests before it is flushed to the channel. A larger buffer size may result in higher HTTP throughput, though it may take longer for the client to see data. If using HTTP streaming via the reverse proxy, this should be set to the minimum value (1) so that clients see the data as soon as possible. | 32768 |
| httpNumThreads | Number of threads to use for HTTP request processing. | 2 * Runtime.getRuntime().availableProcessors() |
|tokenSecretKey| Configure the secret key to be used to validate auth tokens. The key can be specified like: `tokenSecretKey=data:;base64,xxxxxxxxx` or `tokenSecretKey=file:///my/secret.key`.
Note: the key file must be DER-encoded.||
|tokenPublicKey| Configure the public key to be used to validate auth tokens. The key can be specified like: `tokenPublicKey=data:;base64,xxxxxxxxx` or `tokenPublicKey=file:///my/secret.key`. Note: the key file must be DER-encoded.||
|tokenAuthClaim| Specify the token claim that will be used as the authentication "principal" or "role". The "subject" field will be used if this is left blank. ||
|tokenAudienceClaim| The token audience "claim" name, e.g. "aud". It is used to get the audience from the token. If it is not set, the audience is not verified. ||
| tokenAudience | The token audience stands for this proxy. The field `tokenAudienceClaim` of a valid token needs to contain this parameter.| |
|haProxyProtocolEnabled | Enable or disable the [HAProxy](http://www.haproxy.org/) protocol. |false|

## ZooKeeper

ZooKeeper handles a broad range of essential configuration- and coordination-related tasks for Pulsar. The default configuration file for ZooKeeper is in the `conf/zookeeper.conf` file in your Pulsar installation. The following parameters are available:

|Name|Description|Default|
|---|---|---|
|tickTime| The tick is the basic unit of time in ZooKeeper, measured in milliseconds and used to regulate things like heartbeats and timeouts. tickTime is the length of a single tick. |2000|
|initLimit| The maximum time, in ticks, that the leader ZooKeeper server allows follower ZooKeeper servers to successfully connect and sync. The tick time is set in milliseconds using the tickTime parameter. |10|
|syncLimit| The maximum time, in ticks, that a follower ZooKeeper server is allowed to sync with other ZooKeeper servers. The tick time is set in milliseconds using the tickTime parameter. |5|
|dataDir| The location where ZooKeeper will store in-memory database snapshots as well as the transaction log of updates to the database. |data/zookeeper|
|clientPort| The port on which the ZooKeeper server will listen for connections. |2181|
|admin.enableServer|Whether the ZooKeeper admin server is enabled.|true|
|admin.serverPort|The port at which the admin server listens.|9990|
|autopurge.snapRetainCount| In ZooKeeper, auto purge determines how many recent snapshots of the database stored in dataDir to retain within the time interval specified by autopurge.purgeInterval (while deleting the rest). |3|
|autopurge.purgeInterval| The time interval, in hours, by which the ZooKeeper database purge task is triggered. Setting to a non-zero number will enable auto purge; setting to 0 will disable it. Read the [ZooKeeper Administrator's Guide](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html) before enabling auto purge. |1|
|forceSync|Requires updates to be synced to the media of the transaction log before finishing processing the update. If this option is set to 'no', ZooKeeper will not require updates to be synced to the media. WARNING: it's not recommended to run a production ZK cluster with `forceSync` disabled.|yes|
|maxClientCnxns| The maximum number of client connections. Increase this if you need to handle more ZooKeeper clients. |60|

In addition to the parameters in the table above, configuring ZooKeeper for Pulsar involves adding a `server.N` line to the `conf/zookeeper.conf` file for each node in the ZooKeeper cluster, where `N` is the number of the ZooKeeper node.
Here's an example for a three-node ZooKeeper cluster:

```properties
server.1=zk1.us-west.example.com:2888:3888
server.2=zk2.us-west.example.com:2888:3888
server.3=zk3.us-west.example.com:2888:3888
```

> We strongly recommend consulting the [ZooKeeper Administrator's Guide](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html) for a more thorough and comprehensive introduction to ZooKeeper configuration.

diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/reference-connector-admin.md b/site2/website/versioned_docs/version-2.9.3-deprecated/reference-connector-admin.md
deleted file mode 100644
index f1240bf8db17de..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/reference-connector-admin.md
+++ /dev/null
@@ -1,12 +0,0 @@
---
id: reference-connector-admin
title: Connector Admin CLI
sidebar_label: "Connector Admin CLI"
original_id: reference-connector-admin
---

> **Important**
>
> For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/).

diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/reference-metrics.md b/site2/website/versioned_docs/version-2.9.3-deprecated/reference-metrics.md
deleted file mode 100644
index 96c378c3c5dc4d..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/reference-metrics.md
+++ /dev/null
@@ -1,556 +0,0 @@
---
id: reference-metrics
title: Pulsar Metrics
sidebar_label: "Pulsar Metrics"
original_id: reference-metrics
---

Pulsar exposes the following metrics in Prometheus format. You can monitor your clusters with these metrics.

* [ZooKeeper](#zookeeper)
* [BookKeeper](#bookkeeper)
* [Broker](#broker)
* [Pulsar Functions](#pulsar-functions)
* [Proxy](#proxy)
* [Pulsar SQL Worker](#pulsar-sql-worker)
* [Pulsar transaction](#pulsar-transaction)

The following types of metrics are available:

- [Counter](https://prometheus.io/docs/concepts/metric_types/#counter): a cumulative metric that represents a single monotonically increasing counter. Its value can only increase, and it is reset to zero when the process restarts.
- [Gauge](https://prometheus.io/docs/concepts/metric_types/#gauge): a metric that represents a single numerical value that can arbitrarily go up and down.
- [Histogram](https://prometheus.io/docs/concepts/metric_types/#histogram): a histogram samples observations (usually things like request durations or response sizes) and counts them in configurable buckets.
- [Summary](https://prometheus.io/docs/concepts/metric_types/#summary): similar to a histogram, a summary samples observations (usually things like request durations and response sizes). While it also provides a total count of observations and a sum of all observed values, it calculates configurable quantiles over a sliding time window.

## ZooKeeper

The ZooKeeper metrics are exposed under "/metrics" at port `8000`. You can use a different port by configuring `metricsProvider.httpPort` in conf/zookeeper.conf.

### Server metrics

| Name | Type | Description |
|---|---|---|
| znode_count | Gauge | The number of z-nodes stored. |
| approximate_data_size | Gauge | The approximate size of all z-nodes stored. |
| num_alive_connections | Gauge | The number of currently alive connections. |
| watch_count | Gauge | The number of watchers registered. |
| ephemerals_count | Gauge | The number of ephemeral z-nodes. |
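
As a quick check that these metrics are being exported, you can scrape the endpoint directly. The hostname below is an assumption for a locally running ZooKeeper with the default metrics port:

```shell
curl -s http://localhost:8000/metrics | grep -E 'znode_count|num_alive_connections'
```
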
### Request metrics

| Name | Type | Description |
|---|---|---|
| request_commit_queued | Counter | The total number of requests already committed by a particular server. |
| updatelatency | Summary | The update request latency, in milliseconds. |
| readlatency | Summary | The read request latency, in milliseconds. |

## BookKeeper

The BookKeeper metrics are exposed under "/metrics" at port `8000`. You can change the port by updating `prometheusStatsHttpPort` in the `bookkeeper.conf` configuration file.

### Server metrics

| Name | Type | Description |
|---|---|---|
| bookie_SERVER_STATUS | Gauge | The server status for the bookie server.
    • 1: the bookie is running in writable mode.
    • 0: the bookie is running in readonly mode.
    | -| bookkeeper_server_ADD_ENTRY_count | Counter | The total number of ADD_ENTRY requests received at the bookie. The `success` label is used to distinguish successes and failures. | -| bookkeeper_server_READ_ENTRY_count | Counter | The total number of READ_ENTRY requests received at the bookie. The `success` label is used to distinguish successes and failures. | -| bookie_WRITE_BYTES | Counter | The total number of bytes written to the bookie. | -| bookie_READ_BYTES | Counter | The total number of bytes read from the bookie. | -| bookkeeper_server_ADD_ENTRY_REQUEST | Summary | The summary of request latency of ADD_ENTRY requests at the bookie. The `success` label is used to distinguish successes and failures. | -| bookkeeper_server_READ_ENTRY_REQUEST | Summary | The summary of request latency of READ_ENTRY requests at the bookie. The `success` label is used to distinguish successes and failures. | - -### Journal metrics - -| Name | Type | Description | -|---|---|---| -| bookie_journal_JOURNAL_SYNC_count | Counter | The total number of journal fsync operations happening at the bookie. The `success` label is used to distinguish successes and failures. | -| bookie_journal_JOURNAL_QUEUE_SIZE | Gauge | The total number of requests pending in the journal queue. | -| bookie_journal_JOURNAL_FORCE_WRITE_QUEUE_SIZE | Gauge | The total number of force write (fsync) requests pending in the force-write queue. | -| bookie_journal_JOURNAL_CB_QUEUE_SIZE | Gauge | The total number of callbacks pending in the callback queue. | -| bookie_journal_JOURNAL_ADD_ENTRY | Summary | The summary of request latency of adding entries to the journal. | -| bookie_journal_JOURNAL_SYNC | Summary | The summary of fsync latency of syncing data to the journal disk. | - -### Storage metrics - -| Name | Type | Description | -|---|---|---| -| bookie_ledgers_count | Gauge | The total number of ledgers stored in the bookie. | -| bookie_entries_count | Gauge | The total number of entries stored in the bookie. | -| bookie_write_cache_size | Gauge | The bookie write cache size (in bytes). | -| bookie_read_cache_size | Gauge | The bookie read cache size (in bytes). | -| bookie_DELETED_LEDGER_COUNT | Counter | The total number of ledgers deleted since the bookie has started. | -| bookie_ledger_writable_dirs | Gauge | The number of writable directories in the bookie. | - -## Broker - -The broker metrics are exposed under "/metrics" at port `8080`. You can change the port by updating `webServicePort` to a different port -in the `broker.conf` configuration file. - -All the metrics exposed by a broker are labelled with `cluster=${pulsar_cluster}`. The name of Pulsar cluster is the value of `${pulsar_cluster}`, which you have configured in the `broker.conf` file. 
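
For example, assuming a broker running locally on the default web service port, you can verify that the exported metrics carry the cluster label like this (the hostname is an assumption):

```shell
curl -s http://localhost:8080/metrics | grep 'cluster="'
```
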
- -The following metrics are available for broker: - -- [ZooKeeper](#zookeeper) - - [Server metrics](#server-metrics) - - [Request metrics](#request-metrics) -- [BookKeeper](#bookkeeper) - - [Server metrics](#server-metrics-1) - - [Journal metrics](#journal-metrics) - - [Storage metrics](#storage-metrics) -- [Broker](#broker) - - [Namespace metrics](#namespace-metrics) - - [Replication metrics](#replication-metrics) - - [Topic metrics](#topic-metrics) - - [Replication metrics](#replication-metrics-1) - - [ManagedLedgerCache metrics](#managedledgercache-metrics) - - [ManagedLedger metrics](#managedledger-metrics) - - [LoadBalancing metrics](#loadbalancing-metrics) - - [BundleUnloading metrics](#bundleunloading-metrics) - - [BundleSplit metrics](#bundlesplit-metrics) - - [Subscription metrics](#subscription-metrics) - - [Consumer metrics](#consumer-metrics) - - [Managed ledger bookie client metrics](#managed-ledger-bookie-client-metrics) - - [Token metrics](#token-metrics) - - [Authentication metrics](#authentication-metrics) - - [Connection metrics](#connection-metrics) -- [Pulsar Functions](#pulsar-functions) -- [Proxy](#proxy) -- [Pulsar SQL Worker](#pulsar-sql-worker) -- [Pulsar transaction](#pulsar-transaction) - -### Namespace metrics - -> Namespace metrics are only exposed when `exposeTopicLevelMetricsInPrometheus` is set to `false`. - -All the namespace metrics are labelled with the following labels: - -- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you configured in `broker.conf`. -- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name. - -| Name | Type | Description | -|---|---|---| -| pulsar_topics_count | Gauge | The number of Pulsar topics of the namespace owned by this broker. | -| pulsar_subscriptions_count | Gauge | The number of Pulsar subscriptions of the namespace served by this broker. | -| pulsar_producers_count | Gauge | The number of active producers of the namespace connected to this broker. | -| pulsar_consumers_count | Gauge | The number of active consumers of the namespace connected to this broker. | -| pulsar_rate_in | Gauge | The total message rate of the namespace coming into this broker (messages/second). | -| pulsar_rate_out | Gauge | The total message rate of the namespace going out from this broker (messages/second). | -| pulsar_throughput_in | Gauge | The total throughput of the namespace coming into this broker (bytes/second). | -| pulsar_throughput_out | Gauge | The total throughput of the namespace going out from this broker (bytes/second). | -| pulsar_storage_size | Gauge | The total storage size of the topics in this namespace owned by this broker (bytes). | -| pulsar_storage_logical_size | Gauge | The storage size of topics in the namespace owned by the broker without replicas (in bytes). | -| pulsar_storage_backlog_size | Gauge | The total backlog size of the topics of this namespace owned by this broker (messages). | -| pulsar_storage_offloaded_size | Gauge | The total amount of the data in this namespace offloaded to the tiered storage (bytes). | -| pulsar_storage_write_rate | Gauge | The total message batches (entries) written to the storage for this namespace (message batches / second). | -| pulsar_storage_read_rate | Gauge | The total message batches (entries) read from the storage for this namespace (message batches / second). | -| pulsar_subscription_delayed | Gauge | The total message batches (entries) are delayed for dispatching. 
|
| pulsar_storage_write_latency_le_* | Histogram | The entry rate of a namespace whose storage write latency is smaller than a given threshold.
    Available thresholds:
    • pulsar_storage_write_latency_le_0_5: <= 0.5ms
    • pulsar_storage_write_latency_le_1: <= 1ms
    • pulsar_storage_write_latency_le_5: <= 5ms
    • pulsar_storage_write_latency_le_10: <= 10ms
    • pulsar_storage_write_latency_le_20: <= 20ms
    • pulsar_storage_write_latency_le_50: <= 50ms
    • pulsar_storage_write_latency_le_100: <= 100ms
    • pulsar_storage_write_latency_le_200: <= 200ms
    • pulsar_storage_write_latency_le_1000: <= 1s
    • pulsar_storage_write_latency_le_overflow: > 1s
| pulsar_entry_size_le_* | Histogram | The entry rate of a namespace for which the entry size is smaller than a given threshold. <br> Available thresholds: <br>• pulsar_entry_size_le_128: <= 128 bytes <br>• pulsar_entry_size_le_512: <= 512 bytes <br>• pulsar_entry_size_le_1_kb: <= 1 KB <br>• pulsar_entry_size_le_2_kb: <= 2 KB <br>• pulsar_entry_size_le_4_kb: <= 4 KB <br>• pulsar_entry_size_le_16_kb: <= 16 KB <br>• pulsar_entry_size_le_100_kb: <= 100 KB <br>• pulsar_entry_size_le_1_mb: <= 1 MB <br>• pulsar_entry_size_le_overflow: > 1 MB |
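
As a quick way to inspect any of the namespace metrics above, you can scrape the broker's Prometheus endpoint directly. A minimal sketch, assuming a broker on `localhost` with the default `webServicePort` of 8080 (adjust both for your deployment):

```bash
# Fetch the namespace-level ingress rate from the broker's Prometheus endpoint.
curl -s http://localhost:8080/metrics/ | grep '^pulsar_rate_in'

# Restrict the output to a single namespace by matching its label.
curl -s http://localhost:8080/metrics/ | grep 'namespace="public/default"'
```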

#### Replication metrics

If a namespace is configured to be replicated among multiple Pulsar clusters, the corresponding replication metrics are also exposed when `replicationMetricsEnabled` is enabled.

All the replication metrics are also labelled with `remoteCluster=${pulsar_remote_cluster}`.

| Name | Type | Description |
|---|---|---|
| pulsar_replication_rate_in | Gauge | The total message rate of the namespace replicating from the remote cluster (messages/second). |
| pulsar_replication_rate_out | Gauge | The total message rate of the namespace replicating to the remote cluster (messages/second). |
| pulsar_replication_throughput_in | Gauge | The total throughput of the namespace replicating from the remote cluster (bytes/second). |
| pulsar_replication_throughput_out | Gauge | The total throughput of the namespace replicating to the remote cluster (bytes/second). |
| pulsar_replication_backlog | Gauge | The total backlog of the namespace replicating to the remote cluster (messages). |
| pulsar_replication_rate_expired | Gauge | The total rate of expired messages (messages/second). |
| pulsar_replication_connected_count | Gauge | The number of replication subscribers that are up and running to replicate to the remote cluster. |
| pulsar_replication_delay_in_seconds | Gauge | The time in seconds from when a message was produced to when it is about to be replicated. |

### Topic metrics

> Topic metrics are only exposed when `exposeTopicLevelMetricsInPrometheus` is set to `true`.

All the topic metrics are labelled with the following labels:

- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you configured in `broker.conf`.
- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.
- *topic*: `topic=${pulsar_topic}`. `${pulsar_topic}` is the topic name.

| Name | Type | Description |
|---|---|---|
| pulsar_subscriptions_count | Gauge | The number of Pulsar subscriptions of the topic served by this broker. |
| pulsar_producers_count | Gauge | The number of active producers of the topic connected to this broker. |
| pulsar_consumers_count | Gauge | The number of active consumers of the topic connected to this broker. |
| pulsar_rate_in | Gauge | The total message rate of the topic coming into this broker (messages/second). |
| pulsar_rate_out | Gauge | The total message rate of the topic going out from this broker (messages/second). |
| pulsar_throughput_in | Gauge | The total throughput of the topic coming into this broker (bytes/second). |
| pulsar_throughput_out | Gauge | The total throughput of the topic going out from this broker (bytes/second). |
| pulsar_storage_size | Gauge | The total storage size of this topic owned by this broker (bytes). |
| pulsar_storage_logical_size | Gauge | The storage size of this topic owned by the broker, excluding replicas (bytes). |
| pulsar_storage_backlog_size | Gauge | The total backlog size of this topic owned by this broker (messages). |
| pulsar_storage_offloaded_size | Gauge | The total amount of data in this topic offloaded to the tiered storage (bytes). |
| pulsar_storage_backlog_quota_limit | Gauge | The backlog quota limit for this topic (bytes). |
| pulsar_storage_write_rate | Gauge | The total message batches (entries) written to the storage for this topic (message batches / second). |
| pulsar_storage_read_rate | Gauge | The total message batches (entries) read from the storage for this topic (message batches / second). |
| pulsar_subscription_delayed | Gauge | The total number of message batches (entries) that are delayed for dispatching. |
| pulsar_storage_write_latency_le_* | Histogram | The entry rate of a topic for which the storage write latency is smaller than a given threshold. <br> Available thresholds: <br>• pulsar_storage_write_latency_le_0_5: <= 0.5ms <br>• pulsar_storage_write_latency_le_1: <= 1ms <br>• pulsar_storage_write_latency_le_5: <= 5ms <br>• pulsar_storage_write_latency_le_10: <= 10ms <br>• pulsar_storage_write_latency_le_20: <= 20ms <br>• pulsar_storage_write_latency_le_50: <= 50ms <br>• pulsar_storage_write_latency_le_100: <= 100ms <br>• pulsar_storage_write_latency_le_200: <= 200ms <br>• pulsar_storage_write_latency_le_1000: <= 1s <br>• pulsar_storage_write_latency_le_overflow: > 1s |
| pulsar_entry_size_le_* | Histogram | The entry rate of a topic for which the entry size is smaller than a given threshold. <br> Available thresholds: <br>• pulsar_entry_size_le_128: <= 128 bytes <br>• pulsar_entry_size_le_512: <= 512 bytes <br>• pulsar_entry_size_le_1_kb: <= 1 KB <br>• pulsar_entry_size_le_2_kb: <= 2 KB <br>• pulsar_entry_size_le_4_kb: <= 4 KB <br>• pulsar_entry_size_le_16_kb: <= 16 KB <br>• pulsar_entry_size_le_100_kb: <= 100 KB <br>• pulsar_entry_size_le_1_mb: <= 1 MB <br>• pulsar_entry_size_le_overflow: > 1 MB |
| pulsar_in_bytes_total | Counter | The total size in bytes of messages received for this topic. |
| pulsar_in_messages_total | Counter | The total number of messages received for this topic. |
| pulsar_out_bytes_total | Counter | The total size in bytes of messages read from this topic. |
| pulsar_out_messages_total | Counter | The total number of messages read from this topic. |
| pulsar_compaction_removed_event_count | Gauge | The total number of events removed by compaction. |
| pulsar_compaction_succeed_count | Gauge | The total number of successful compaction runs. |
| pulsar_compaction_failed_count | Gauge | The total number of failed compaction runs. |
| pulsar_compaction_duration_time_in_mills | Gauge | The duration of compaction in milliseconds. |
| pulsar_compaction_read_throughput | Gauge | The read throughput of compaction. |
| pulsar_compaction_write_throughput | Gauge | The write throughput of compaction. |
| pulsar_compaction_latency_le_* | Histogram | The compaction latency with a given threshold. <br> Available thresholds: <br>• pulsar_compaction_latency_le_0_5: <= 0.5ms <br>• pulsar_compaction_latency_le_1: <= 1ms <br>• pulsar_compaction_latency_le_5: <= 5ms <br>• pulsar_compaction_latency_le_10: <= 10ms <br>• pulsar_compaction_latency_le_20: <= 20ms <br>• pulsar_compaction_latency_le_50: <= 50ms <br>• pulsar_compaction_latency_le_100: <= 100ms <br>• pulsar_compaction_latency_le_200: <= 200ms <br>• pulsar_compaction_latency_le_1000: <= 1s <br>• pulsar_compaction_latency_le_overflow: > 1s |
| pulsar_compaction_compacted_entries_count | Gauge | The total number of compacted entries. |
| pulsar_compaction_compacted_entries_size | Gauge | The total size of the compacted entries. |

#### Replication metrics

If the namespace that a topic belongs to is configured to be replicated among multiple Pulsar clusters, the corresponding replication metrics are also exposed when `replicationMetricsEnabled` is enabled.

All the replication metrics are labelled with `remoteCluster=${pulsar_remote_cluster}`.

| Name | Type | Description |
|---|---|---|
| pulsar_replication_rate_in | Gauge | The total message rate of the topic replicating from the remote cluster (messages/second). |
| pulsar_replication_rate_out | Gauge | The total message rate of the topic replicating to the remote cluster (messages/second). |
| pulsar_replication_throughput_in | Gauge | The total throughput of the topic replicating from the remote cluster (bytes/second). |
| pulsar_replication_throughput_out | Gauge | The total throughput of the topic replicating to the remote cluster (bytes/second). |
| pulsar_replication_backlog | Gauge | The total backlog of the topic replicating to the remote cluster (messages). |

### ManagedLedgerCache metrics

All the ManagedLedgerCache metrics are labelled with the following labels:

- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.

| Name | Type | Description |
| --- | --- | --- |
| pulsar_ml_cache_evictions | Gauge | The number of cache evictions during the last minute. |
| pulsar_ml_cache_hits_rate | Gauge | The number of cache hits per second. |
| pulsar_ml_cache_hits_throughput | Gauge | The amount of data retrieved from the cache (bytes/s). |
| pulsar_ml_cache_misses_rate | Gauge | The number of cache misses per second. |
| pulsar_ml_cache_misses_throughput | Gauge | The amount of requested data that was not found in the cache (bytes/s). |
| pulsar_ml_cache_pool_active_allocations | Gauge | The number of currently active allocations in the direct arena. |
| pulsar_ml_cache_pool_active_allocations_huge | Gauge | The number of currently active huge allocations in the direct arena. |
| pulsar_ml_cache_pool_active_allocations_normal | Gauge | The number of currently active normal allocations in the direct arena. |
| pulsar_ml_cache_pool_active_allocations_small | Gauge | The number of currently active small allocations in the direct arena. |
| pulsar_ml_cache_pool_allocated | Gauge | The total allocated memory of chunk lists in the direct arena. |
| pulsar_ml_cache_pool_used | Gauge | The total used memory of chunk lists in the direct arena. |
| pulsar_ml_cache_used_size | Gauge | The size in bytes used to store the entry payloads. |
| pulsar_ml_count | Gauge | The number of currently opened managed ledgers. |
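
For example, you can combine the hit and miss rates above to gauge how effective the managed ledger cache is. A minimal sketch, assuming a Prometheus server at `localhost:9090` that already scrapes this broker (the address is an assumption, not part of this reference):

```bash
# Compute the instantaneous cache hit ratio through the Prometheus HTTP API,
# using the pulsar_ml_cache_hits_rate and pulsar_ml_cache_misses_rate gauges
# documented above.
curl -s 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=pulsar_ml_cache_hits_rate / (pulsar_ml_cache_hits_rate + pulsar_ml_cache_misses_rate)'
```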
### ManagedLedger metrics

All the managedLedger metrics are labelled with the following labels:

- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.
- *quantile*: `quantile=${quantile}`. The quantile label is only present on `Histogram` metrics and represents the bucket threshold.

| Name | Type | Description |
| --- | --- | --- |
| pulsar_ml_AddEntryBytesRate | Gauge | The bytes/s rate of messages added. |
| pulsar_ml_AddEntryWithReplicasBytesRate | Gauge | The bytes/s rate of messages added, with replicas. |
| pulsar_ml_AddEntryErrors | Gauge | The number of addEntry requests that failed. |
| pulsar_ml_AddEntryLatencyBuckets | Histogram | The latency of adding a ledger entry with a given quantile (threshold), including the time spent waiting in queue on the broker side. <br> Available quantiles: <br>• quantile="0.0_0.5" is AddEntryLatency between (0.0ms, 0.5ms] <br>• quantile="0.5_1.0" is AddEntryLatency between (0.5ms, 1.0ms] <br>• quantile="1.0_5.0" is AddEntryLatency between (1ms, 5ms] <br>• quantile="5.0_10.0" is AddEntryLatency between (5ms, 10ms] <br>• quantile="10.0_20.0" is AddEntryLatency between (10ms, 20ms] <br>• quantile="20.0_50.0" is AddEntryLatency between (20ms, 50ms] <br>• quantile="50.0_100.0" is AddEntryLatency between (50ms, 100ms] <br>• quantile="100.0_200.0" is AddEntryLatency between (100ms, 200ms] <br>• quantile="200.0_1000.0" is AddEntryLatency between (200ms, 1s] |
| pulsar_ml_AddEntryLatencyBuckets_OVERFLOW | Gauge | The number of times the AddEntryLatency is longer than 1 second. |
| pulsar_ml_AddEntryMessagesRate | Gauge | The msg/s rate of messages added. |
| pulsar_ml_AddEntrySucceed | Gauge | The number of addEntry requests that succeeded. |
| pulsar_ml_EntrySizeBuckets | Histogram | The added entry size of a ledger with a given quantile. <br> Available quantiles: <br>• quantile="0.0_128.0" is EntrySize between (0 bytes, 128 bytes] <br>• quantile="128.0_512.0" is EntrySize between (128 bytes, 512 bytes] <br>• quantile="512.0_1024.0" is EntrySize between (512 bytes, 1 KB] <br>• quantile="1024.0_2048.0" is EntrySize between (1 KB, 2 KB] <br>• quantile="2048.0_4096.0" is EntrySize between (2 KB, 4 KB] <br>• quantile="4096.0_16384.0" is EntrySize between (4 KB, 16 KB] <br>• quantile="16384.0_102400.0" is EntrySize between (16 KB, 100 KB] <br>• quantile="102400.0_1232896.0" is EntrySize between (100 KB, 1 MB] |
| pulsar_ml_EntrySizeBuckets_OVERFLOW | Gauge | The number of times the EntrySize is larger than 1 MB. |
| pulsar_ml_LedgerSwitchLatencyBuckets | Histogram | The ledger switch latency with a given quantile. <br> Available quantiles: <br>• quantile="0.0_0.5" is LedgerSwitchLatency between (0ms, 0.5ms] <br>• quantile="0.5_1.0" is LedgerSwitchLatency between (0.5ms, 1ms] <br>• quantile="1.0_5.0" is LedgerSwitchLatency between (1ms, 5ms] <br>• quantile="5.0_10.0" is LedgerSwitchLatency between (5ms, 10ms] <br>• quantile="10.0_20.0" is LedgerSwitchLatency between (10ms, 20ms] <br>• quantile="20.0_50.0" is LedgerSwitchLatency between (20ms, 50ms] <br>• quantile="50.0_100.0" is LedgerSwitchLatency between (50ms, 100ms] <br>• quantile="100.0_200.0" is LedgerSwitchLatency between (100ms, 200ms] <br>• quantile="200.0_1000.0" is LedgerSwitchLatency between (200ms, 1000ms] |
| pulsar_ml_LedgerSwitchLatencyBuckets_OVERFLOW | Gauge | The number of times the ledger switch latency is longer than 1 second. |
| pulsar_ml_LedgerAddEntryLatencyBuckets | Histogram | The latency for the bookie client to persist a ledger entry from the broker to the BookKeeper service with a given quantile (threshold). <br> Available quantiles: <br>• quantile="0.0_0.5" is LedgerAddEntryLatency between (0.0ms, 0.5ms] <br>• quantile="0.5_1.0" is LedgerAddEntryLatency between (0.5ms, 1.0ms] <br>• quantile="1.0_5.0" is LedgerAddEntryLatency between (1ms, 5ms] <br>• quantile="5.0_10.0" is LedgerAddEntryLatency between (5ms, 10ms] <br>• quantile="10.0_20.0" is LedgerAddEntryLatency between (10ms, 20ms] <br>• quantile="20.0_50.0" is LedgerAddEntryLatency between (20ms, 50ms] <br>• quantile="50.0_100.0" is LedgerAddEntryLatency between (50ms, 100ms] <br>• quantile="100.0_200.0" is LedgerAddEntryLatency between (100ms, 200ms] <br>• quantile="200.0_1000.0" is LedgerAddEntryLatency between (200ms, 1s] |
| pulsar_ml_LedgerAddEntryLatencyBuckets_OVERFLOW | Gauge | The number of times the LedgerAddEntryLatency is longer than 1 second. |
| pulsar_ml_MarkDeleteRate | Gauge | The rate of mark-delete operations (ops/s). |
| pulsar_ml_NumberOfMessagesInBacklog | Gauge | The number of backlog messages for all the consumers. |
| pulsar_ml_ReadEntriesBytesRate | Gauge | The bytes/s rate of messages read. |
| pulsar_ml_ReadEntriesErrors | Gauge | The number of readEntries requests that failed. |
| pulsar_ml_ReadEntriesRate | Gauge | The msg/s rate of messages read. |
| pulsar_ml_ReadEntriesSucceeded | Gauge | The number of readEntries requests that succeeded. |
| pulsar_ml_StoredMessagesSize | Gauge | The total size of the messages in active ledgers (accounting for the multiple copies stored). |

### Managed cursor acknowledgment state

The acknowledgment state is first persisted to the ledger. When the acknowledgment state fails to be persisted to the ledger, it is persisted to ZooKeeper. To track the stats of acknowledgment, you can configure the metrics for the managed cursor.

All the cursor acknowledgment state metrics are labelled with the following labels:

- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.
- *ledger_name*: `ledger_name=${pulsar_ledger_name}`. `${pulsar_ledger_name}` is the ledger name.
- *cursor_name*: `cursor_name=${pulsar_cursor_name}`. `${pulsar_cursor_name}` is the cursor name.

| Name | Type | Description |
|---|---|---|
| brk_ml_cursor_persistLedgerSucceed(namespace="", ledger_name="", cursor_name="") | Gauge | The number of acknowledgment states that are persisted to a ledger. |
| brk_ml_cursor_persistLedgerErrors(namespace="", ledger_name="", cursor_name="") | Gauge | The number of ledger errors that occurred when acknowledgment states failed to be persisted to the ledger. |
| brk_ml_cursor_persistZookeeperSucceed(namespace="", ledger_name="", cursor_name="") | Gauge | The number of acknowledgment states that are persisted to ZooKeeper. |
| brk_ml_cursor_persistZookeeperErrors(namespace="", ledger_name="", cursor_name="") | Gauge | The number of ledger errors that occurred when acknowledgment states failed to be persisted to ZooKeeper. |
| brk_ml_cursor_nonContiguousDeletedMessagesRange(namespace="", ledger_name="", cursor_name="") | Gauge | The number of non-contiguous deleted message ranges. |
| brk_ml_cursor_writeLedgerSize(namespace="", ledger_name="", cursor_name="") | Gauge | The size of data written to the ledger. |
| brk_ml_cursor_writeLedgerLogicalSize(namespace="", ledger_name="", cursor_name="") | Gauge | The size of data written to the ledger, excluding replicas. |
| brk_ml_cursor_readLedgerSize(namespace="", ledger_name="", cursor_name="") | Gauge | The size of data read from the ledger. |
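
To spot cursors that are falling back to ZooKeeper for acknowledgment-state persistence, you can filter the broker metrics for the error gauges above. A minimal sketch, again assuming a local broker on the default port 8080:

```bash
# Surface cursors whose acknowledgment state failed to persist to the ledger
# and had to fall back to ZooKeeper (keep only non-zero gauges).
curl -s http://localhost:8080/metrics/ \
  | grep 'brk_ml_cursor_persistZookeeperErrors' \
  | awk '$NF != 0'
```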
### LoadBalancing metrics

All the loadbalancing metrics are labelled with the following labels:

- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
- *broker*: `broker=${broker}`. `${broker}` is the IP address of the broker.
- *metric*: `metric="loadBalancing"`.

| Name | Type | Description |
| --- | --- | --- |
| pulsar_lb_bandwidth_in_usage | Gauge | The broker inbound bandwidth usage. |
| pulsar_lb_bandwidth_out_usage | Gauge | The broker outbound bandwidth usage. |
| pulsar_lb_cpu_usage | Gauge | The broker CPU usage. |
| pulsar_lb_directMemory_usage | Gauge | The broker process direct memory usage. |
| pulsar_lb_memory_usage | Gauge | The broker process memory usage. |

#### BundleUnloading metrics

All the bundleUnloading metrics are labelled with the following labels:

- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
- *metric*: `metric="bundleUnloading"`.

| Name | Type | Description |
| --- | --- | --- |
| pulsar_lb_unload_broker_count | Counter | The unload broker count in this bundle unloading. |
| pulsar_lb_unload_bundle_count | Counter | The bundle unload count in this bundle unloading. |

#### BundleSplit metrics

All the bundleSplit metrics are labelled with the following labels:

- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
- *metric*: `metric="bundlesSplit"`.

| Name | Type | Description |
| --- | --- | --- |
| pulsar_lb_bundles_split_count | Counter | The bundle split count in this bundle splitting check interval. |

### Subscription metrics

> Subscription metrics are only exposed when `exposeTopicLevelMetricsInPrometheus` is set to `true`.

All the subscription metrics are labelled with the following labels:

- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.
- *topic*: `topic=${pulsar_topic}`. `${pulsar_topic}` is the topic name.
- *subscription*: `subscription=${subscription}`. `${subscription}` is the topic subscription name.

| Name | Type | Description |
|---|---|---|
| pulsar_subscription_back_log | Gauge | The total backlog of a subscription (messages). |
| pulsar_subscription_delayed | Gauge | The total number of messages that are delayed to be dispatched for a subscription (messages). |
| pulsar_subscription_msg_rate_redeliver | Gauge | The total message rate for messages being redelivered (messages/second). |
| pulsar_subscription_unacked_messages | Gauge | The total number of unacknowledged messages of a subscription (messages). |
| pulsar_subscription_blocked_on_unacked_messages | Gauge | Indicates whether a subscription is blocked on unacknowledged messages. <br>• 1 means the subscription is blocked on waiting for unacknowledged messages to be acked. <br>• 0 means the subscription is not blocked on waiting for unacknowledged messages to be acked. |
| pulsar_subscription_msg_rate_out | Gauge | The total message dispatch rate for a subscription (messages/second). |
| pulsar_subscription_msg_throughput_out | Gauge | The total message dispatch throughput for a subscription (bytes/second). |

### Consumer metrics

> Consumer metrics are only exposed when both `exposeTopicLevelMetricsInPrometheus` and `exposeConsumerLevelMetricsInPrometheus` are set to `true`.

All the consumer metrics are labelled with the following labels:

- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.
- *topic*: `topic=${pulsar_topic}`. `${pulsar_topic}` is the topic name.
- *subscription*: `subscription=${subscription}`. `${subscription}` is the topic subscription name.
- *consumer_name*: `consumer_name=${consumer_name}`. `${consumer_name}` is the topic consumer name.
- *consumer_id*: `consumer_id=${consumer_id}`. `${consumer_id}` is the topic consumer id.

| Name | Type | Description |
|---|---|---|
| pulsar_consumer_msg_rate_redeliver | Gauge | The total message rate for messages being redelivered (messages/second). |
| pulsar_consumer_unacked_messages | Gauge | The total number of unacknowledged messages of a consumer (messages). |
| pulsar_consumer_blocked_on_unacked_messages | Gauge | Indicates whether a consumer is blocked on unacknowledged messages. <br>• 1 means the consumer is blocked on waiting for unacknowledged messages to be acked. <br>• 0 means the consumer is not blocked on waiting for unacknowledged messages to be acked. |
| pulsar_consumer_msg_rate_out | Gauge | The total message dispatch rate for a consumer (messages/second). |
| pulsar_consumer_msg_throughput_out | Gauge | The total message dispatch throughput for a consumer (bytes/second). |
| pulsar_consumer_available_permits | Gauge | The available permits for a consumer. |

### Managed ledger bookie client metrics

All the managed ledger bookie client metrics are labelled with the following labels:

- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.

| Name | Type | Description |
| --- | --- | --- |
| pulsar_managedLedger_client_bookkeeper_ml_scheduler_completed_tasks_* | Gauge | The number of tasks the scheduler executor has completed. <br> The number of these metrics is determined by the scheduler executor thread count configured by `managedLedgerNumSchedulerThreads` in `broker.conf`. |
| pulsar_managedLedger_client_bookkeeper_ml_scheduler_queue_* | Gauge | The number of tasks queued in the scheduler executor's queue. <br> The number of these metrics is determined by the scheduler executor thread count configured by `managedLedgerNumSchedulerThreads` in `broker.conf`. |
| pulsar_managedLedger_client_bookkeeper_ml_scheduler_total_tasks_* | Gauge | The total number of tasks the scheduler executor received. <br> The number of these metrics is determined by the scheduler executor thread count configured by `managedLedgerNumSchedulerThreads` in `broker.conf`. |
| pulsar_managedLedger_client_bookkeeper_ml_scheduler_task_execution | Summary | The scheduler task execution latency calculated in milliseconds. |
| pulsar_managedLedger_client_bookkeeper_ml_scheduler_task_queued | Summary | The scheduler task queued latency calculated in milliseconds. |

### Token metrics

All the token metrics are labelled with the following labels:

- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.

| Name | Type | Description |
|---|---|---|
| pulsar_expired_token_count | Counter | The number of expired tokens in Pulsar. |
| pulsar_expiring_token_minutes | Histogram | The remaining time of expiring tokens in minutes. |

### Authentication metrics

All the authentication metrics are labelled with the following labels:

- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
- *provider_name*: `provider_name=${provider_name}`. `${provider_name}` is the class name of the authentication provider.
- *auth_method*: `auth_method=${auth_method}`. `${auth_method}` is the authentication method of the authentication provider.
- *reason*: `reason=${reason}`. `${reason}` is the reason for the failed authentication operation. (This label is only for `pulsar_authentication_failures_count`.)

| Name | Type | Description |
|---|---|---|
| pulsar_authentication_success_count | Counter | The number of successful authentication operations. |
| pulsar_authentication_failures_count | Counter | The number of failed authentication operations. |

### Connection metrics

All the connection metrics are labelled with the following labels:

- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
- *broker*: `broker=${advertised_address}`. `${advertised_address}` is the advertised address of the broker.
- *metric*: `metric=${metric}`. `${metric}` is the connection metric collective name.

| Name | Type | Description |
|---|---|---|
| pulsar_active_connections | Gauge | The number of active connections. |
| pulsar_connection_created_total_count | Gauge | The total number of connections. |
| pulsar_connection_create_success_count | Gauge | The number of successfully created connections. |
| pulsar_connection_create_fail_count | Gauge | The number of failed connections. |
| pulsar_connection_closed_total_count | Gauge | The total number of closed connections. |
| pulsar_broker_throttled_connections | Gauge | The number of throttled connections. |
| pulsar_broker_throttled_connections_global_limit | Gauge | The number of throttled connections because of the per-connection limit. |
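
For example, the authentication counters above make it straightforward to watch for authentication problems on a broker. A minimal sketch, assuming a local broker on the default port 8080:

```bash
# List authentication failures broken down by provider, method, and reason.
curl -s http://localhost:8080/metrics/ | grep 'pulsar_authentication_failures_count'
```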
## Pulsar Functions

All the Pulsar Functions metrics are labelled with the following labels:

- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.

| Name | Type | Description |
|---|---|---|
| pulsar_function_processed_successfully_total | Counter | The total number of messages processed successfully. |
| pulsar_function_processed_successfully_total_1min | Counter | The total number of messages processed successfully in the last 1 minute. |
| pulsar_function_system_exceptions_total | Counter | The total number of system exceptions. |
| pulsar_function_system_exceptions_total_1min | Counter | The total number of system exceptions in the last 1 minute. |
| pulsar_function_user_exceptions_total | Counter | The total number of user exceptions. |
| pulsar_function_user_exceptions_total_1min | Counter | The total number of user exceptions in the last 1 minute. |
| pulsar_function_process_latency_ms | Summary | The process latency in milliseconds. |
| pulsar_function_process_latency_ms_1min | Summary | The process latency in milliseconds in the last 1 minute. |
| pulsar_function_last_invocation | Gauge | The timestamp of the last invocation of the function. |
| pulsar_function_received_total | Counter | The total number of messages received from the source. |
| pulsar_function_received_total_1min | Counter | The total number of messages received from the source in the last 1 minute. |
| pulsar_function_user_metric_ | Summary | The user-defined metrics. |

## Connectors

All the Pulsar connector metrics are labelled with the following labels:

- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.

Connector metrics contain **source** metrics and **sink** metrics.

- **Source** metrics

  | Name | Type | Description |
  |---|---|---|
  | pulsar_source_written_total | Counter | The total number of records written to a Pulsar topic. |
  | pulsar_source_written_total_1min | Counter | The total number of records written to a Pulsar topic in the last 1 minute. |
  | pulsar_source_received_total | Counter | The total number of records received from the source. |
  | pulsar_source_received_total_1min | Counter | The total number of records received from the source in the last 1 minute. |
  | pulsar_source_last_invocation | Gauge | The timestamp of the last invocation of the source. |
  | pulsar_source_source_exception | Gauge | The exception from a source. |
  | pulsar_source_source_exceptions_total | Counter | The total number of source exceptions. |
  | pulsar_source_source_exceptions_total_1min | Counter | The total number of source exceptions in the last 1 minute. |
  | pulsar_source_system_exception | Gauge | The exception from system code. |
  | pulsar_source_system_exceptions_total | Counter | The total number of system exceptions. |
  | pulsar_source_system_exceptions_total_1min | Counter | The total number of system exceptions in the last 1 minute. |
  | pulsar_source_user_metric_ | Summary | The user-defined metrics. |

- **Sink** metrics

  | Name | Type | Description |
  |---|---|---|
  | pulsar_sink_written_total | Counter | The total number of records processed by a sink. |
  | pulsar_sink_written_total_1min | Counter | The total number of records processed by a sink in the last 1 minute. |
  | pulsar_sink_received_total_1min | Counter | The total number of messages that a sink has received from Pulsar topics in the last 1 minute. |
  | pulsar_sink_received_total | Counter | The total number of records that a sink has received from Pulsar topics. |
  | pulsar_sink_last_invocation | Gauge | The timestamp of the last invocation of the sink. |
  | pulsar_sink_sink_exception | Gauge | The exception from a sink. |
  | pulsar_sink_sink_exceptions_total | Counter | The total number of sink exceptions. |
  | pulsar_sink_sink_exceptions_total_1min | Counter | The total number of sink exceptions in the last 1 minute. |
  | pulsar_sink_system_exception | Gauge | The exception from system code. |
  | pulsar_sink_system_exceptions_total | Counter | The total number of system exceptions. |
  | pulsar_sink_system_exceptions_total_1min | Counter | The total number of system exceptions in the last 1 minute. |
  | pulsar_sink_user_metric_ | Summary | The user-defined metrics. |

## Proxy

All the proxy metrics are labelled with the following labels:

- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
- *kubernetes_pod_name*: `kubernetes_pod_name=${kubernetes_pod_name}`. `${kubernetes_pod_name}` is the Kubernetes pod name.

| Name | Type | Description |
|---|---|---|
| pulsar_proxy_active_connections | Gauge | The number of connections currently active in the proxy. |
| pulsar_proxy_new_connections | Counter | The counter of connections being opened in the proxy. |
| pulsar_proxy_rejected_connections | Counter | The counter of connections rejected due to throttling. |
| pulsar_proxy_binary_ops | Counter | The counter of proxy operations. |
| pulsar_proxy_binary_bytes | Counter | The counter of proxy bytes. |

## Pulsar SQL Worker

| Name | Type | Description |
|---|---|---|
| split_bytes_read | Counter | The number of bytes read from BookKeeper. |
| split_num_messages_deserialized | Counter | The number of messages deserialized. |
| split_num_record_deserialized | Counter | The number of records deserialized. |
| split_bytes_read_per_query | Summary | The total number of bytes read per query. |
| split_entry_deserialize_time | Summary | The time spent on deserializing entries. |
| split_entry_deserialize_time_per_query | Summary | The time spent on deserializing entries per query. |
| split_entry_queue_dequeue_wait_time | Summary | The time spent on waiting to get an entry from the entry queue because it is empty. |
| split_entry_queue_dequeue_wait_time_per_query | Summary | The total time spent on waiting to get an entry from the entry queue per query. |
| split_message_queue_dequeue_wait_time_per_query | Summary | The time spent on waiting to dequeue from the message queue because it is empty, per query. |
| split_message_queue_enqueue_wait_time | Summary | The time spent on waiting to enqueue to the message queue because it is full. |
| split_message_queue_enqueue_wait_time_per_query | Summary | The time spent on waiting to enqueue to the message queue because it is full, per query. |
| split_num_entries_per_batch | Summary | The number of entries per batch. |
| split_num_entries_per_query | Summary | The number of entries per query. |
| split_num_messages_deserialized_per_entry | Summary | The number of messages deserialized per entry. |
| split_num_messages_deserialized_per_query | Summary | The number of messages deserialized per query. |
| split_read_attempts | Summary | The number of read attempts (a read fails if the queues are full). |
| split_read_attempts_per_query | Summary | The number of read attempts per query. |
| split_read_latency_per_batch | Summary | The read latency per batch. |
| split_read_latency_per_query | Summary | The total read latency per query. |
| split_record_deserialize_time | Summary | The time spent on deserializing messages to records, for example, Avro or JSON records. |
| split_record_deserialize_time_per_query | Summary | The time spent on deserializing messages to records per query. |
| split_total_execution_time | Summary | The total execution time. |
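
Note that the proxy and the SQL worker expose their metrics on their own HTTP endpoints rather than on the broker's. A minimal sketch for the proxy, assuming it runs locally with its web service port left at the default 8080:

```bash
# Check how many client connections are currently active in the proxy.
curl -s http://localhost:8080/metrics/ | grep 'pulsar_proxy_active_connections'
```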
## Pulsar transaction

All the transaction metrics are labelled with the following labels:

- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
- *coordinator_id*: `coordinator_id=${coordinator_id}`. `${coordinator_id}` is the coordinator id.

| Name | Type | Description |
|---|---|---|
| pulsar_txn_active_count | Gauge | The number of active transactions. |
| pulsar_txn_created_count | Counter | The number of created transactions. |
| pulsar_txn_committed_count | Counter | The number of committed transactions. |
| pulsar_txn_aborted_count | Counter | The number of aborted transactions of this coordinator. |
| pulsar_txn_timeout_count | Counter | The number of timed-out transactions. |
| pulsar_txn_append_log_count | Counter | The number of appended transaction logs. |
| pulsar_txn_execution_latency_le_* | Histogram | The transaction execution latency. <br> Available latencies are as below: <br>• latency="10" is TransactionExecutionLatency between (0ms, 10ms] <br>• latency="20" is TransactionExecutionLatency between (10ms, 20ms] <br>• latency="50" is TransactionExecutionLatency between (20ms, 50ms] <br>• latency="100" is TransactionExecutionLatency between (50ms, 100ms] <br>• latency="500" is TransactionExecutionLatency between (100ms, 500ms] <br>• latency="1000" is TransactionExecutionLatency between (500ms, 1000ms] <br>• latency="5000" is TransactionExecutionLatency between (1s, 5s] <br>• latency="15000" is TransactionExecutionLatency between (5s, 15s] <br>• latency="30000" is TransactionExecutionLatency between (15s, 30s] <br>• latency="60000" is TransactionExecutionLatency between (30s, 60s] <br>• latency="300000" is TransactionExecutionLatency between (1m, 5m] <br>• latency="1500000" is TransactionExecutionLatency between (5m, 15m] <br>• latency="3000000" is TransactionExecutionLatency between (15m, 30m] <br>• latency="overflow" is TransactionExecutionLatency between (30m, ∞]
    | diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/reference-pulsar-admin.md b/site2/website/versioned_docs/version-2.9.3-deprecated/reference-pulsar-admin.md deleted file mode 100644 index e306289a8798a5..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/reference-pulsar-admin.md +++ /dev/null @@ -1,3297 +0,0 @@ ---- -id: reference-pulsar-admin -title: Pulsar admin CLI -sidebar_label: "Pulsar Admin CLI" -original_id: reference-pulsar-admin ---- - -> **Important** -> -> This page is deprecated and not updated anymore. For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/) - -The `pulsar-admin` tool enables you to manage Pulsar installations, including clusters, brokers, namespaces, tenants, and more. - -Usage - -```bash - -$ pulsar-admin command - -``` - -Commands -* `broker-stats` -* `brokers` -* `clusters` -* `functions` -* `functions-worker` -* `namespaces` -* `ns-isolation-policy` -* `sources` - - For more information, see [here](io-cli.md#sources) -* `sinks` - - For more information, see [here](io-cli.md#sinks) -* `topics` -* `tenants` -* `resource-quotas` -* `schemas` - -## `broker-stats` - -Operations to collect broker statistics - -```bash - -$ pulsar-admin broker-stats subcommand - -``` - -Subcommands -* `allocator-stats` -* `topics(destinations)` -* `mbeans` -* `monitoring-metrics` -* `load-report` - - -### `allocator-stats` - -Dump allocator stats - -Usage - -```bash - -$ pulsar-admin broker-stats allocator-stats allocator-name - -``` - -### `topics(destinations)` - -Dump topic stats - -Usage - -```bash - -$ pulsar-admin broker-stats topics options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`-i`, `--indent`|Indent JSON output|false| - -### `mbeans` - -Dump Mbean stats - -Usage - -```bash - -$ pulsar-admin broker-stats mbeans options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`-i`, `--indent`|Indent JSON output|false| - - -### `monitoring-metrics` - -Dump metrics for monitoring - -Usage - -```bash - -$ pulsar-admin broker-stats monitoring-metrics options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`-i`, `--indent`|Indent JSON output|false| - - -### `load-report` - -Dump broker load-report - -Usage - -```bash - -$ pulsar-admin broker-stats load-report - -``` - -## `brokers` - -Operations about brokers - -```bash - -$ pulsar-admin brokers subcommand - -``` - -Subcommands -* `list` -* `namespaces` -* `update-dynamic-config` -* `list-dynamic-config` -* `get-all-dynamic-config` -* `get-internal-config` -* `get-runtime-config` -* `healthcheck` - -### `list` -List active brokers of the cluster - -Usage - -```bash - -$ pulsar-admin brokers list cluster-name - -``` - -### `leader-broker` -Get the information of the leader broker - -Usage - -```bash - -$ pulsar-admin brokers leader-broker - -``` - -### `namespaces` -List namespaces owned by the broker - -Usage - -```bash - -$ pulsar-admin brokers namespaces cluster-name options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--url`|The URL for the broker|| - - -### `update-dynamic-config` -Update a broker's dynamic service configuration - -Usage - -```bash - -$ pulsar-admin brokers update-dynamic-config options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--config`|Service configuration parameter name|| -|`--value`|Value for the configuration parameter value specified using 
the `--config` flag|| - - -### `list-dynamic-config` -Get list of updatable configuration name - -Usage - -```bash - -$ pulsar-admin brokers list-dynamic-config - -``` - -### `delete-dynamic-config` -Delete dynamic-serviceConfiguration of broker - -Usage - -```bash - -$ pulsar-admin brokers delete-dynamic-config options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--config`|Service configuration parameter name|| - - -### `get-all-dynamic-config` -Get all overridden dynamic-configuration values - -Usage - -```bash - -$ pulsar-admin brokers get-all-dynamic-config - -``` - -### `get-internal-config` -Get internal configuration information - -Usage - -```bash - -$ pulsar-admin brokers get-internal-config - -``` - -### `get-runtime-config` -Get runtime configuration values - -Usage - -```bash - -$ pulsar-admin brokers get-runtime-config - -``` - -### `healthcheck` -Run a health check against the broker - -Usage - -```bash - -$ pulsar-admin brokers healthcheck - -``` - -## `clusters` -Operations about clusters - -Usage - -```bash - -$ pulsar-admin clusters subcommand - -``` - -Subcommands -* `get` -* `create` -* `update` -* `delete` -* `list` -* `update-peer-clusters` -* `get-peer-clusters` -* `get-failure-domain` -* `create-failure-domain` -* `update-failure-domain` -* `delete-failure-domain` -* `list-failure-domains` - - -### `get` -Get the configuration data for the specified cluster - -Usage - -```bash - -$ pulsar-admin clusters get cluster-name - -``` - -### `create` -Provisions a new cluster. This operation requires Pulsar super-user privileges. - -Usage - -```bash - -$ pulsar-admin clusters create cluster-name options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--broker-url`|The URL for the broker service.|| -|`--broker-url-secure`|The broker service URL for a secure connection|| -|`--url`|service-url|| -|`--url-secure`|service-url for secure connection|| - - -### `update` -Update the configuration for a cluster - -Usage - -```bash - -$ pulsar-admin clusters update cluster-name options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--broker-url`|The URL for the broker service.|| -|`--broker-url-secure`|The broker service URL for a secure connection|| -|`--url`|service-url|| -|`--url-secure`|service-url for secure connection|| - - -### `delete` -Deletes an existing cluster - -Usage - -```bash - -$ pulsar-admin clusters delete cluster-name - -``` - -### `list` -List the existing clusters - -Usage - -```bash - -$ pulsar-admin clusters list - -``` - -### `update-peer-clusters` -Update peer cluster names - -Usage - -```bash - -$ pulsar-admin clusters update-peer-clusters cluster-name options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--peer-clusters`|Comma separated peer cluster names (Pass empty string "" to delete list)|| - -### `get-peer-clusters` -Get list of peer clusters - -Usage - -```bash - -$ pulsar-admin clusters get-peer-clusters - -``` - -### `get-failure-domain` -Get the configuration brokers of a failure domain - -Usage - -```bash - -$ pulsar-admin clusters get-failure-domain cluster-name options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster|| - -### `create-failure-domain` -Create a new failure domain for a cluster (updates it if already created) - -Usage - -```bash - -$ pulsar-admin clusters create-failure-domain cluster-name options - -``` - -Options -|Flag|Description|Default| -|---|---|---| 
-|`--broker-list`|Comma separated broker list|| -|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster|| - -### `update-failure-domain` -Update failure domain for a cluster (creates a new one if not exist) - -Usage - -```bash - -$ pulsar-admin clusters update-failure-domain cluster-name options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--broker-list`|Comma separated broker list|| -|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster|| - -### `delete-failure-domain` -Delete an existing failure domain - -Usage - -```bash - -$ pulsar-admin clusters delete-failure-domain cluster-name options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster|| - -### `list-failure-domains` -List the existing failure domains for a cluster - -Usage - -```bash - -$ pulsar-admin clusters list-failure-domains cluster-name - -``` - -## `functions` - -A command-line interface for Pulsar Functions - -Usage - -```bash - -$ pulsar-admin functions subcommand - -``` - -Subcommands -* `localrun` -* `create` -* `delete` -* `update` -* `get` -* `restart` -* `stop` -* `start` -* `status` -* `stats` -* `list` -* `querystate` -* `putstate` -* `trigger` - - -### `localrun` -Run the Pulsar Function locally (rather than deploying it to the Pulsar cluster) - - -Usage - -```bash - -$ pulsar-admin functions localrun options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--cpu`|The cpu in cores that need to be allocated per function instance(applicable only to docker runtime)|| -|`--ram`|The ram in bytes that need to be allocated per function instance(applicable only to process/docker runtime)|| -|`--disk`|The disk in bytes that need to be allocated per function instance(applicable only to docker runtime)|| -|`--auto-ack`|Whether or not the framework will automatically acknowledge messages|| -|`--subs-name`|Pulsar source subscription name if user wants a specific subscription-name for input-topic consumer|| -|`--broker-service-url `|The URL of the Pulsar broker|| -|`--classname`|The function's class name|| -|`--custom-serde-inputs`|The map of input topics to SerDe class names (as a JSON string)|| -|`--custom-schema-inputs`|The map of input topics to Schema class names (as a JSON string)|| -|`--client-auth-params`|Client authentication param|| -|`--client-auth-plugin`|Client authentication plugin using which function-process can connect to broker|| -|`--function-config-file`|The path to a YAML config file specifying the function's configuration|| -|`--hostname-verification-enabled`|Enable hostname verification|false| -|`--instance-id-offset`|Start the instanceIds from this offset|0| -|`--inputs`|The function's input topic or topics (multiple topics can be specified as a comma-separated list)|| -|`--log-topic`|The topic to which the function's logs are produced|| -|`--jar`|Path to the jar file for the function (if the function is written in Java). 
It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.||
-|`--name`|The function's name||
-|`--namespace`|The function's namespace||
-|`--output`|The function's output topic (If none is specified, no output is written)||
-|`--output-serde-classname`|The SerDe class to be used for messages output by the function||
-|`--parallelism`|The function's parallelism factor, i.e. the number of instances of the function to run|1|
-|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the function. Possible Values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]|ATLEAST_ONCE|
-|`--py`|Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.||
-|`--go`|Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.||
-|`--schema-type`|The builtin schema type or custom schema class name to be used for messages output by the function||
-|`--sliding-interval-count`|The number of messages after which the window slides||
-|`--sliding-interval-duration-ms`|The time duration after which the window slides||
-|`--state-storage-service-url`|The URL for the state storage service. By default, it is set to the service URL of Apache BookKeeper. This service URL must be added manually when the Pulsar Function runs locally.||
-|`--tenant`|The function's tenant||
-|`--topics-pattern`|The topic pattern to consume from a list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add a SerDe class name for a pattern in --custom-serde-inputs (supported for Java functions only)||
-|`--user-config`|User-defined config key/values||
-|`--window-length-count`|The number of messages per window||
-|`--window-length-duration-ms`|The time duration of the window in milliseconds||
-|`--dead-letter-topic`|The topic where all messages which could not be processed successfully are sent||
-|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function||
-|`--max-message-retries`|How many times should we try to process a message before giving up||
-|`--retain-ordering`|Function consumes and processes messages in order||
-|`--retain-key-ordering`|Function consumes and processes messages in key order||
-|`--timeout-ms`|The message timeout in milliseconds||
-|`--tls-allow-insecure`|Allow insecure tls connection|false|
-|`--tls-trust-cert-path`|The tls trust cert file path||
-|`--use-tls`|Use tls connection|false|
-|`--producer-config`| The custom producer configuration (as a JSON string) | |
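
For instance, a Java function could be run locally with a sketch like the following. The jar path, class name, and topic names are placeholders for illustration, not values from this reference; the flags are the `localrun` options listed above.

```bash
# Run a Java function locally against a standalone broker (hypothetical
# jar path, class name, and topics).
$ pulsar-admin functions localrun \
  --jar my-functions.jar \
  --classname org.example.MyFunction \
  --tenant public \
  --namespace default \
  --name my-function \
  --inputs persistent://public/default/input-topic \
  --output persistent://public/default/output-topic
```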
-
-### `create`
-Create a Pulsar Function in cluster mode (i.e. deploy it on a Pulsar cluster)
-
-Usage
-
-```bash
-
-$ pulsar-admin functions create options
-
-```
-
-Options
-|Flag|Description|Default|
-|---|---|---|
-|`--cpu`|The cpu in cores that need to be allocated per function instance(applicable only to docker runtime)||
-|`--ram`|The ram in bytes that need to be allocated per function instance(applicable only to process/docker runtime)||
-|`--disk`|The disk in bytes that need to be allocated per function instance(applicable only to docker runtime)||
-|`--auto-ack`|Whether or not the framework will automatically acknowledge messages||
-|`--subs-name`|Pulsar source subscription name if user wants a specific subscription-name for input-topic consumer||
-|`--classname`|The function's class name||
-|`--custom-serde-inputs`|The map of input topics to SerDe class names (as a JSON string)||
-|`--custom-schema-inputs`|The map of input topics to Schema class names (as a JSON string)||
-|`--function-config-file`|The path to a YAML config file specifying the function's configuration||
-|`--inputs`|The function's input topic or topics (multiple topics can be specified as a comma-separated list)||
-|`--log-topic`|The topic to which the function's logs are produced||
-|`--jar`|Path to the jar file for the function (if the function is written in Java). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.||
-|`--name`|The function's name||
-|`--namespace`|The function's namespace||
-|`--output`|The function's output topic (If none is specified, no output is written)||
-|`--output-serde-classname`|The SerDe class to be used for messages output by the function||
-|`--parallelism`|The function's parallelism factor, i.e. the number of instances of the function to run|1|
-|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the function. Possible Values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]|ATLEAST_ONCE|
-|`--py`|Path to the main Python file/Python Wheel file for the function (if the function is written in Python). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.||
-|`--go`|Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.||
-|`--schema-type`|The builtin schema type or custom schema class name to be used for messages output by the function||
-|`--sliding-interval-count`|The number of messages after which the window slides||
-|`--sliding-interval-duration-ms`|The time duration after which the window slides||
-|`--tenant`|The function's tenant||
-|`--topics-pattern`|The topic pattern to consume from a list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. 
Add SerDe class name for a pattern in --custom-serde-inputs (supported for java fun only)|| -|`--user-config`|User-defined config key/values|| -|`--window-length-count`|The number of messages per window|| -|`--window-length-duration-ms`|The time duration of the window in milliseconds|| -|`--dead-letter-topic`|The topic where all messages which could not be processed|| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--max-message-retries`|How many times should we try to process a message before giving up|| -|`--retain-ordering`|Function consumes and processes messages in order|| -|`--retain-key-ordering`|Function consumes and processes messages in key order|| -|`--timeout-ms`|The message timeout in milliseconds|| -|`--producer-config`| The custom producer configuration (as a JSON string) | | - - -### `delete` -Delete a Pulsar Function that's running on a Pulsar cluster - -Usage - -```bash - -$ pulsar-admin functions delete options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `update` -Update a Pulsar Function that's been deployed to a Pulsar cluster - -Usage - -```bash - -$ pulsar-admin functions update options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--cpu`|The cpu in cores that need to be allocated per function instance(applicable only to docker runtime)|| -|`--ram`|The ram in bytes that need to be allocated per function instance(applicable only to process/docker runtime)|| -|`--disk`|The disk in bytes that need to be allocated per function instance(applicable only to docker runtime)|| -|`--auto-ack`|Whether or not the framework will automatically acknowledge messages|| -|`--subs-name`|Pulsar source subscription name if user wants a specific subscription-name for input-topic consumer|| -|`--classname`|The function's class name|| -|`--custom-serde-inputs`|The map of input topics to SerDe class names (as a JSON string)|| -|`--custom-schema-inputs`|The map of input topics to Schema class names (as a JSON string)|| -|`--function-config-file`|The path to a YAML config file specifying the function's configuration|| -|`--inputs`|The function's input topic or topics (multiple topics can be specified as a comma-separated list)|| -|`--log-topic`|The topic to which the function's logs are produced|| -|`--jar`|Path to the jar file for the function (if the function is written in Java). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--name`|The function's name|| -|`--namespace`|The function’s namespace|| -|`--output`|The function's output topic (If none is specified, no output is written)|| -|`--output-serde-classname`|The SerDe class to be used for messages output by the function|| -|`--parallelism`|The function’s parallelism factor, i.e. the number of instances of the function to run|1| -|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the function. Possible Values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]|ATLEAST_ONCE| -|`--py`|Path to the main Python file/Python Wheel file for the function (if the function is written in Python). 
It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--go`|Path to the main Go executable binary for the function (if the function is written in Go). It also supports URL path [http/https/file (file protocol assumes that file already exists on worker host)/function (package URL from packages management service)] from which worker can download the package.|| -|`--schema-type`|The builtin schema type or custom schema class name to be used for messages output by the function|| -|`--sliding-interval-count`|The number of messages after which the window slides|| -|`--sliding-interval-duration-ms`|The time duration after which the window slides|| -|`--tenant`|The function’s tenant|| -|`--topics-pattern`|The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (supported for java fun only)|| -|`--user-config`|User-defined config key/values|| -|`--window-length-count`|The number of messages per window|| -|`--window-length-duration-ms`|The time duration of the window in milliseconds|| -|`--dead-letter-topic`|The topic where all messages which could not be processed|| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--max-message-retries`|How many times should we try to process a message before giving up|| -|`--retain-ordering`|Function consumes and processes messages in order|| -|`--retain-key-ordering`|Function consumes and processes messages in key order|| -|`--timeout-ms`|The message timeout in milliseconds|| -|`--producer-config`| The custom producer configuration (as a JSON string) | | - - -### `get` -Fetch information about a Pulsar Function - -Usage - -```bash - -$ pulsar-admin functions get options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `restart` -Restart function instance - -Usage - -```bash - -$ pulsar-admin functions restart options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--instance-id`|The function instanceId (restart all instances if instance-id is not provided)|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `stop` -Stops function instance - -Usage - -```bash - -$ pulsar-admin functions stop options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--instance-id`|The function instanceId (stop all instances if instance-id is not provided)|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `start` -Starts a stopped function instance - -Usage - -```bash - -$ pulsar-admin functions start options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--instance-id`|The function instanceId (start all instances if instance-id is not provided)|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's 
tenant|| - - -### `status` -Check the current status of a Pulsar Function - -Usage - -```bash - -$ pulsar-admin functions status options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--instance-id`|The function instanceId (Get-status of all instances if instance-id is not provided)|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `stats` -Get the current stats of a Pulsar Function - -Usage - -```bash - -$ pulsar-admin functions stats options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--instance-id`|The function instanceId (Get-stats of all instances if instance-id is not provided)|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - -### `list` -List all of the Pulsar Functions running under a specific tenant and namespace - -Usage - -```bash - -$ pulsar-admin functions list options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| - - -### `querystate` -Fetch the current state associated with a Pulsar Function running in cluster mode - -Usage - -```bash - -$ pulsar-admin functions querystate options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`-k`, `--key`|The key for the state you want to fetch|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| -|`-w`, `--watch`|Watch for changes in the value associated with a key for a Pulsar Function|false| - -### `putstate` -Put a key/value pair to the state associated with a Pulsar Function - -Usage - -```bash - -$ pulsar-admin functions putstate options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the Pulsar Function|| -|`--name`|The name of a Pulsar Function|| -|`--namespace`|The namespace of a Pulsar Function|| -|`--tenant`|The tenant of a Pulsar Function|| -|`-s`, `--state`|The FunctionState that needs to be put|| - -### `trigger` -Triggers the specified Pulsar Function with a supplied value - -Usage - -```bash - -$ pulsar-admin functions trigger options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function|| -|`--name`|The function's name|| -|`--namespace`|The function's namespace|| -|`--tenant`|The function's tenant|| -|`--topic`|The specific topic name that the function consumes from that you want to inject the data to|| -|`--trigger-file`|The path to the file that contains the data with which you'd like to trigger the function|| -|`--trigger-value`|The value with which you want to trigger the function|| - - -## `functions-worker` -Operations to collect function-worker statistics - -```bash - -$ pulsar-admin functions-worker subcommand - -``` - -Subcommands - -* `function-stats` -* `get-cluster` -* `get-cluster-leader` -* `get-function-assignments` -* `monitoring-metrics` - -### `function-stats` - -Dump all functions stats running on this broker - -Usage - -```bash - -$ pulsar-admin functions-worker function-stats - -``` - -### `get-cluster` - -Get all workers belonging to this cluster - -Usage - -```bash - -$ pulsar-admin functions-worker 
get-cluster - -``` - -### `get-cluster-leader` - -Get the leader of the worker cluster - -Usage - -```bash - -$ pulsar-admin functions-worker get-cluster-leader - -``` - -### `get-function-assignments` - -Get the assignments of the functions across the worker cluster - -Usage - -```bash - -$ pulsar-admin functions-worker get-function-assignments - -``` - -### `monitoring-metrics` - -Dump metrics for Monitoring - -Usage - -```bash - -$ pulsar-admin functions-worker monitoring-metrics - -``` - -## `namespaces` - -Operations for managing namespaces - -```bash - -$ pulsar-admin namespaces subcommand - -``` - -Subcommands -* `list` -* `topics` -* `policies` -* `create` -* `delete` -* `set-deduplication` -* `set-auto-topic-creation` -* `remove-auto-topic-creation` -* `set-auto-subscription-creation` -* `remove-auto-subscription-creation` -* `permissions` -* `grant-permission` -* `revoke-permission` -* `grant-subscription-permission` -* `revoke-subscription-permission` -* `set-clusters` -* `get-clusters` -* `get-backlog-quotas` -* `set-backlog-quota` -* `remove-backlog-quota` -* `get-persistence` -* `set-persistence` -* `get-message-ttl` -* `set-message-ttl` -* `remove-message-ttl` -* `get-anti-affinity-group` -* `set-anti-affinity-group` -* `get-anti-affinity-namespaces` -* `delete-anti-affinity-group` -* `get-retention` -* `set-retention` -* `unload` -* `split-bundle` -* `set-dispatch-rate` -* `get-dispatch-rate` -* `set-replicator-dispatch-rate` -* `get-replicator-dispatch-rate` -* `set-subscribe-rate` -* `get-subscribe-rate` -* `set-subscription-dispatch-rate` -* `get-subscription-dispatch-rate` -* `clear-backlog` -* `unsubscribe` -* `set-encryption-required` -* `set-delayed-delivery` -* `get-delayed-delivery` -* `set-subscription-auth-mode` -* `get-max-producers-per-topic` -* `set-max-producers-per-topic` -* `get-max-consumers-per-topic` -* `set-max-consumers-per-topic` -* `get-max-consumers-per-subscription` -* `set-max-consumers-per-subscription` -* `get-max-unacked-messages-per-subscription` -* `set-max-unacked-messages-per-subscription` -* `get-max-unacked-messages-per-consumer` -* `set-max-unacked-messages-per-consumer` -* `get-compaction-threshold` -* `set-compaction-threshold` -* `get-offload-threshold` -* `set-offload-threshold` -* `get-offload-deletion-lag` -* `set-offload-deletion-lag` -* `clear-offload-deletion-lag` -* `get-schema-autoupdate-strategy` -* `set-schema-autoupdate-strategy` -* `set-offload-policies` -* `get-offload-policies` -* `set-max-subscriptions-per-topic` -* `get-max-subscriptions-per-topic` -* `remove-max-subscriptions-per-topic` - - -### `list` -Get the namespaces for a tenant - -Usage - -```bash - -$ pulsar-admin namespaces list tenant-name - -``` - -### `topics` -Get the list of topics for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces topics tenant/namespace - -``` - -### `policies` -Get the configuration policies of a namespace - -Usage - -```bash - -$ pulsar-admin namespaces policies tenant/namespace - -``` - -### `create` -Create a new namespace - -Usage - -```bash - -$ pulsar-admin namespaces create tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|---|---|---| -|`-b`, `--bundles`|The number of bundles to activate|0| -|`-c`, `--clusters`|List of clusters this namespace will be assigned|| - - -### `delete` -Deletes a namespace. 
The namespace must be empty.
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces delete tenant/namespace
-
-```
-
-### `set-deduplication`
-Enable or disable message deduplication on a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-deduplication tenant/namespace options
-
-```
-
-Options
-|Flag|Description|Default|
-|---|---|---|
-|`--enable`, `-e`|Enable message deduplication on the specified namespace|false|
-|`--disable`, `-d`|Disable message deduplication on the specified namespace|false|
-
-### `set-auto-topic-creation`
-Enable or disable autoTopicCreation for a namespace, overriding broker settings
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-auto-topic-creation tenant/namespace options
-
-```
-
-Options
-|Flag|Description|Default|
-|---|---|---|
-|`--enable`, `-e`|Enable allowAutoTopicCreation on namespace|false|
-|`--disable`, `-d`|Disable allowAutoTopicCreation on namespace|false|
-|`--type`, `-t`|Type of topic to be auto-created. Possible values: (partitioned, non-partitioned)|non-partitioned|
-|`--num-partitions`, `-n`|Default number of partitions of topic to be auto-created, applicable to partitioned topics only||
-
-### `remove-auto-topic-creation`
-Remove override of autoTopicCreation for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces remove-auto-topic-creation tenant/namespace
-
-```
-
-### `set-auto-subscription-creation`
-Enable autoSubscriptionCreation for a namespace, overriding broker settings
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-auto-subscription-creation tenant/namespace options
-
-```
-
-Options
-|Flag|Description|Default|
-|---|---|---|
-|`--enable`, `-e`|Enable allowAutoSubscriptionCreation on namespace|false|
-
-### `remove-auto-subscription-creation`
-Remove override of autoSubscriptionCreation for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces remove-auto-subscription-creation tenant/namespace
-
-```
-
-### `permissions`
-Get the permissions on a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces permissions tenant/namespace
-
-```
-
-### `grant-permission`
-Grant permissions on a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces grant-permission tenant/namespace options
-
-```
-
-Options
-|Flag|Description|Default|
-|---|---|---|
-|`--actions`|Actions to be granted (`produce` or `consume`)||
-|`--role`|The client role to which to grant the permissions||
-
-
-### `revoke-permission`
-Revoke permissions on a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces revoke-permission tenant/namespace options
-
-```
-
-Options
-|Flag|Description|Default|
-|---|---|---|
-|`--role`|The client role from which to revoke the permissions||
-
-### `grant-subscription-permission`
-Grant permissions to access the subscription admin-api
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces grant-subscription-permission tenant/namespace options
-
-```
-
-Options
-|Flag|Description|Default|
-|---|---|---|
-|`--roles`|The client roles to which to grant the permissions (comma-separated roles)||
-|`--subscription`|The subscription name for which permission will be granted to roles||
-
-### `revoke-subscription-permission`
-Revoke permissions to access the subscription admin-api
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces revoke-subscription-permission tenant/namespace options
-
-```
-
-Options
-|Flag|Description|Default|
-|---|---|---|
-|`--role`|The client role from which to revoke the permissions||
-|`--subscription`|The subscription name for which permission will be revoked from roles||
-
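-Example
-
-A sketch of granting a role both produce and consume permissions on a namespace; the tenant, namespace, and role names are illustrative:
-
-```bash
-
-$ pulsar-admin namespaces grant-permission my-tenant/my-ns \
-  --role my-app-role \
-  --actions produce,consume
-
-```
-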
-### `set-clusters`
-Set replication clusters for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-clusters tenant/namespace options
-
-```
-
-Options
-|Flag|Description|Default|
-|---|---|---|
-|`-c`, `--clusters`|Replication clusters ID list (comma-separated values)||
-
-
-### `get-clusters`
-Get replication clusters for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-clusters tenant/namespace
-
-```
-
-### `get-backlog-quotas`
-Get the backlog quota policies for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-backlog-quotas tenant/namespace
-
-```
-
-### `set-backlog-quota`
-Set a backlog quota policy for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-backlog-quota tenant/namespace options
-
-```
-
-Options
-|Flag|Description|Default|
-|----|---|---|
-|`-l`, `--limit`|The backlog size limit (for example `10M` or `16G`)||
-|`-lt`, `--limitTime`|Time limit in seconds, non-positive number for disabling the time limit. (for example 3600 for 1 hour)||
-|`-p`, `--policy`|The retention policy to enforce when the limit is reached. The valid options are: `producer_request_hold`, `producer_exception` or `consumer_backlog_eviction`||
-|`-t`, `--type`|Backlog quota type to set. The valid options are: `destination_storage`, `message_age` |destination_storage|
-
-Example
-
-```bash
-
-$ pulsar-admin namespaces set-backlog-quota my-tenant/my-ns \
---limit 2G \
---policy producer_request_hold
-
-```
-
-```bash
-
-$ pulsar-admin namespaces set-backlog-quota my-tenant/my-ns \
---limitTime 3600 \
---policy producer_request_hold \
---type message_age
-
-```
-
-### `remove-backlog-quota`
-Remove a backlog quota policy from a namespace
-
-|Flag|Description|Default|
-|---|---|---|
-|`-t`, `--type`|Backlog quota type to remove. The valid options are: `destination_storage`, `message_age` |destination_storage|
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces remove-backlog-quota tenant/namespace
-
-```
-
-### `get-persistence`
-Get the persistence policies for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-persistence tenant/namespace
-
-```
-
-### `set-persistence`
-Set the persistence policies for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-persistence tenant/namespace options
-
-```
-
-Options
-|Flag|Description|Default|
-|----|---|---|
-|`-a`, `--bookkeeper-ack-quorum`|The number of acks (guaranteed copies) to wait for each entry|0|
-|`-e`, `--bookkeeper-ensemble`|The number of bookies to use for a topic|0|
-|`-w`, `--bookkeeper-write-quorum`|How many writes to make for each entry|0|
-|`-r`, `--ml-mark-delete-max-rate`|Throttling rate of mark-delete operation (0 means no throttle)||
-
-
-### `get-message-ttl`
-Get the message TTL for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-message-ttl tenant/namespace
-
-```
-
-### `set-message-ttl`
-Set the message TTL for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-message-ttl tenant/namespace options
-
-```
-
-Options
-|Flag|Description|Default|
-|----|---|---|
-|`-ttl`, `--messageTTL`|Message TTL in seconds. When the value is set to `0`, TTL is disabled. TTL is disabled by default. |0|
-
-### `remove-message-ttl`
-Remove the message TTL for a namespace.
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces remove-message-ttl tenant/namespace
-
-```
-
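-Example
-
-A sketch of setting a 2-minute TTL on a namespace and later removing the override; the tenant and namespace are illustrative:
-
-```bash
-
-$ pulsar-admin namespaces set-message-ttl my-tenant/my-ns --messageTTL 120
-$ pulsar-admin namespaces remove-message-ttl my-tenant/my-ns
-
-```
-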
-### `get-anti-affinity-group`
-Get the anti-affinity group name for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-anti-affinity-group tenant/namespace
-
-```
-
-### `set-anti-affinity-group`
-Set the anti-affinity group name for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-anti-affinity-group tenant/namespace options
-
-```
-
-Options
-|Flag|Description|Default|
-|----|---|---|
-|`-g`, `--group`|Anti-affinity group name||
-
-### `get-anti-affinity-namespaces`
-Get the anti-affinity namespaces grouped under the given anti-affinity group name
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-anti-affinity-namespaces options
-
-```
-
-Options
-|Flag|Description|Default|
-|----|---|---|
-|`-c`, `--cluster`|Cluster name||
-|`-g`, `--group`|Anti-affinity group name||
-|`-p`, `--tenant`|The tenant is only used for authorization. The client has to be an admin of one of the tenants to access this API||
-
-### `delete-anti-affinity-group`
-Remove the anti-affinity group name for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces delete-anti-affinity-group tenant/namespace
-
-```
-
-### `get-retention`
-Get the retention policy that is applied to each topic within the specified namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-retention tenant/namespace
-
-```
-
-### `set-retention`
-Set the retention policy for each topic within the specified namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-retention tenant/namespace options
-
-```
-
-Options
-|Flag|Description|Default|
-|----|---|---|
-|`-s`, `--size`|The retention size limit (for example 10M, 16G or 3T) for each topic in the namespace. 0 means no retention and -1 means infinite size retention||
-|`-t`, `--time`|The retention time in minutes, hours, days, or weeks. Examples: 100m, 13h, 2d, 5w. 0 means no retention and -1 means infinite time retention||
-
-
-### `unload`
-Unload a namespace or namespace bundle from the current serving broker.
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces unload tenant/namespace options
-
-```
-
-Options
-|Flag|Description|Default|
-|----|---|---|
-|`-b`, `--bundle`|{start-boundary}_{end-boundary} (e.g. 0x00000000_0xffffffff)||
-
-### `split-bundle`
-Split a namespace-bundle from the current serving broker
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces split-bundle tenant/namespace options
-
-```
-
-Options
-|Flag|Description|Default|
-|----|---|---|
-|`-b`, `--bundle`|{start-boundary}_{end-boundary} (e.g. 0x00000000_0xffffffff)||
-|`-u`, `--unload`|Unload newly split bundles after splitting the old bundle|false|
-
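-Example
-
-A sketch of splitting the full bundle range and unloading the resulting bundles; the tenant, namespace, and bundle range are illustrative:
-
-```bash
-
-$ pulsar-admin namespaces split-bundle my-tenant/my-ns \
-  --bundle 0x00000000_0xffffffff \
-  --unload
-
-```
-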
-### `set-dispatch-rate`
-Set message-dispatch-rate for all topics of the namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-dispatch-rate tenant/namespace options
-
-```
-
-Options
-|Flag|Description|Default|
-|----|---|---|
-|`-bd`, `--byte-dispatch-rate`|The byte dispatch rate (defaults to -1 if not passed)|-1|
-|`-dt`, `--dispatch-rate-period`|The dispatch rate period, in seconds (defaults to 1 second if not passed)|1|
-|`-md`, `--msg-dispatch-rate`|The message dispatch rate (defaults to -1 if not passed)|-1|
-
-### `get-dispatch-rate`
-Get the configured message-dispatch-rate for all topics of the namespace (Disabled if value < 0)
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-dispatch-rate tenant/namespace
-
-```
-
-### `set-replicator-dispatch-rate`
-Set the replicator message-dispatch-rate for all topics of the namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-replicator-dispatch-rate tenant/namespace options
-
-```
-
-Options
-|Flag|Description|Default|
-|----|---|---|
-|`-bd`, `--byte-dispatch-rate`|The byte dispatch rate (defaults to -1 if not passed)|-1|
-|`-dt`, `--dispatch-rate-period`|The dispatch rate period, in seconds (defaults to 1 second if not passed)|1|
-|`-md`, `--msg-dispatch-rate`|The message dispatch rate (defaults to -1 if not passed)|-1|
-
-### `get-replicator-dispatch-rate`
-Get the replicator configured message-dispatch-rate for all topics of the namespace (Disabled if value < 0)
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-replicator-dispatch-rate tenant/namespace
-
-```
-
-### `set-subscribe-rate`
-Set the subscribe-rate per consumer for all topics of the namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-subscribe-rate tenant/namespace options
-
-```
-
-Options
-|Flag|Description|Default|
-|----|---|---|
-|`-sr`, `--subscribe-rate`|The subscribe rate (defaults to -1 if not passed)|-1|
-|`-st`, `--subscribe-rate-period`|The subscribe rate period, in seconds (defaults to 30 seconds if not passed)|30|
-
-### `get-subscribe-rate`
-Get the configured subscribe-rate per consumer for all topics of the namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-subscribe-rate tenant/namespace
-
-```
-
-### `set-subscription-dispatch-rate`
-Set the subscription message-dispatch-rate for all subscriptions of the namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-subscription-dispatch-rate tenant/namespace options
-
-```
-
-Options
-|Flag|Description|Default|
-|----|---|---|
-|`-bd`, `--byte-dispatch-rate`|The byte dispatch rate (defaults to -1 if not passed)|-1|
-|`-dt`, `--dispatch-rate-period`|The dispatch rate period, in seconds (defaults to 1 second if not passed)|1|
-|`-md`, `--sub-msg-dispatch-rate`|The message dispatch rate (defaults to -1 if not passed)|-1|
-
-### `get-subscription-dispatch-rate`
-Get the subscription configured message-dispatch-rate for all topics of the namespace (Disabled if value < 0)
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-subscription-dispatch-rate tenant/namespace
-
-```
-
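-Example
-
-A sketch of capping dispatch at 1000 messages (or 1 MB) per second across all topics of a namespace; the tenant and namespace are illustrative:
-
-```bash
-
-$ pulsar-admin namespaces set-dispatch-rate my-tenant/my-ns \
-  --msg-dispatch-rate 1000 \
-  --byte-dispatch-rate 1048576 \
-  --dispatch-rate-period 1
-
-```
-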
-### `clear-backlog`
-Clear the backlog for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces clear-backlog tenant/namespace options
-
-```
-
-Options
-|Flag|Description|Default|
-|----|---|---|
-|`-b`, `--bundle`|{start-boundary}_{end-boundary} (e.g. 0x00000000_0xffffffff)||
-|`-force`, `--force`|Whether to force-clear the backlog without prompting|false|
-|`-s`, `--sub`|The subscription name||
-
-
-### `unsubscribe`
-Unsubscribe the given subscription on all destinations on a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces unsubscribe tenant/namespace options
-
-```
-
-Options
-|Flag|Description|Default|
-|----|---|---|
-|`-b`, `--bundle`|{start-boundary}_{end-boundary} (e.g. 0x00000000_0xffffffff)||
-|`-s`, `--sub`|The subscription name||
-
-### `set-encryption-required`
-Enable or disable message encryption required for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-encryption-required tenant/namespace options
-
-```
-
-Options
-|Flag|Description|Default|
-|----|---|---|
-|`-d`, `--disable`|Disable message encryption required|false|
-|`-e`, `--enable`|Enable message encryption required|false|
-
-### `set-delayed-delivery`
-Set the delayed delivery policy on a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-delayed-delivery tenant/namespace options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-d`, `--disable`|Disable delayed delivery messages|false|
-|`-e`, `--enable`|Enable delayed delivery messages|false|
-|`-t`, `--time`|The tick time for retrying delayed-delivery messages|1s|
-
-
-### `get-delayed-delivery`
-Get the delayed delivery policy on a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-delayed-delivery tenant/namespace
-
-```
-
-Options
-
-|Flag|Description|Default|
-|----|---|---|
-|`-t`, `--time`|The tick time for retrying delayed-delivery messages|1s|
-
-
-### `set-subscription-auth-mode`
-Set the subscription auth mode on a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-subscription-auth-mode tenant/namespace options
-
-```
-
-Options
-|Flag|Description|Default|
-|----|---|---|
-|`-m`, `--subscription-auth-mode`|Subscription authorization mode for Pulsar policies. 
Valid options are: [None, Prefix]|| - -### `get-max-producers-per-topic` -Get maxProducersPerTopic for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-max-producers-per-topic tenant/namespace - -``` - -### `set-max-producers-per-topic` -Set maxProducersPerTopic for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-max-producers-per-topic tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-p`, `--max-producers-per-topic`|maxProducersPerTopic for a namespace|0| - -### `get-max-consumers-per-topic` -Get maxConsumersPerTopic for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-max-consumers-per-topic tenant/namespace - -``` - -### `set-max-consumers-per-topic` -Set maxConsumersPerTopic for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-max-consumers-per-topic tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-c`, `--max-consumers-per-topic`|maxConsumersPerTopic for a namespace|0| - -### `get-max-consumers-per-subscription` -Get maxConsumersPerSubscription for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-max-consumers-per-subscription tenant/namespace - -``` - -### `set-max-consumers-per-subscription` -Set maxConsumersPerSubscription for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-max-consumers-per-subscription tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-c`, `--max-consumers-per-subscription`|maxConsumersPerSubscription for a namespace|0| - -### `get-max-unacked-messages-per-subscription` -Get maxUnackedMessagesPerSubscription for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-max-unacked-messages-per-subscription tenant/namespace - -``` - -### `set-max-unacked-messages-per-subscription` -Set maxUnackedMessagesPerSubscription for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-max-unacked-messages-per-subscription tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-c`, `--max-unacked-messages-per-subscription`|maxUnackedMessagesPerSubscription for a namespace|-1| - -### `get-max-unacked-messages-per-consumer` -Get maxUnackedMessagesPerConsumer for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-max-unacked-messages-per-consumer tenant/namespace - -``` - -### `set-max-unacked-messages-per-consumer` -Set maxUnackedMessagesPerConsumer for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-max-unacked-messages-per-consumer tenant/namespace options - -``` - -Options - -|Flag|Description|Default| -|----|---|---| -|`-c`, `--max-unacked-messages-per-consumer`|maxUnackedMessagesPerConsumer for a namespace|-1| - - -### `get-compaction-threshold` -Get compactionThreshold for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces get-compaction-threshold tenant/namespace - -``` - -### `set-compaction-threshold` -Set compactionThreshold for a namespace - -Usage - -```bash - -$ pulsar-admin namespaces set-compaction-threshold tenant/namespace options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-t`, `--threshold`|Maximum number of bytes in a topic backlog before compaction is triggered (eg: 10M, 16G, 3T). 
0 disables automatic compaction|0|
-
-
-### `get-offload-threshold`
-Get offloadThreshold for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-offload-threshold tenant/namespace
-
-```
-
-### `set-offload-threshold`
-Set offloadThreshold for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-offload-threshold tenant/namespace options
-
-```
-
-Options
-|Flag|Description|Default|
-|----|---|---|
-|`-s`, `--size`|Maximum number of bytes stored in the Pulsar cluster for a topic before data starts being automatically offloaded to long-term storage (eg: 10M, 16G, 3T, 100). Negative values disable automatic offload. 0 triggers offloading as soon as possible.|-1|
-
-### `get-offload-deletion-lag`
-Get offloadDeletionLag, in minutes, for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-offload-deletion-lag tenant/namespace
-
-```
-
-### `set-offload-deletion-lag`
-Set offloadDeletionLag for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-offload-deletion-lag tenant/namespace options
-
-```
-
-Options
-|Flag|Description|Default|
-|----|---|---|
-|`-l`, `--lag`|Duration to wait after offloading a ledger segment before deleting the copy of that segment from cluster-local storage (eg: 10m, 5h, 3d, 2w).|-1|
-
-### `clear-offload-deletion-lag`
-Clear offloadDeletionLag for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces clear-offload-deletion-lag tenant/namespace
-
-```
-
-### `get-schema-autoupdate-strategy`
-Get the schema auto-update strategy for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-schema-autoupdate-strategy tenant/namespace
-
-```
-
-### `set-schema-autoupdate-strategy`
-Set the schema auto-update strategy for a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-schema-autoupdate-strategy tenant/namespace options
-
-```
-
-Options
-|Flag|Description|Default|
-|----|---|---|
-|`-c`, `--compatibility`|Compatibility level required for new schemas created via a Producer. Possible values (Full, Backward, Forward, None).|Full|
-|`-d`, `--disabled`|Disable automatic schema updates.|false|
-
-### `get-publish-rate`
-Get the message publish rate for each topic in a namespace, in bytes as well as messages per second
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-publish-rate tenant/namespace
-
-```
-
-### `set-publish-rate`
-Set the message publish rate for each topic in a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-publish-rate tenant/namespace options
-
-```
-
-Options
-|Flag|Description|Default|
-|----|---|---|
-|`-m`, `--msg-publish-rate`|Threshold for the number of messages per second per topic in the namespace (-1 implies not set, 0 for no limit).|-1|
-|`-b`, `--byte-publish-rate`|Threshold for the number of bytes per second per topic in the namespace (-1 implies not set, 0 for no limit).|-1|
-
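-Example
-
-A sketch of limiting each topic in a namespace to 500 published messages per second while leaving the byte rate unset; the tenant and namespace are illustrative:
-
-```bash
-
-$ pulsar-admin namespaces set-publish-rate my-tenant/my-ns \
-  --msg-publish-rate 500
-
-```
-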
-### `set-offload-policies`
-Set the offload policy for a namespace.
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-offload-policies tenant/namespace options
-
-```
-
-Options
-|Flag|Description|Default|
-|----|---|---|
-|`-d`, `--driver`|Driver to use to offload old data to long-term storage (possible values: S3, aws-s3, google-cloud-storage)||
-|`-r`, `--region`|The long-term storage region||
-|`-b`, `--bucket`|Bucket to place offloaded ledgers into||
-|`-e`, `--endpoint`|Alternative endpoint to connect to||
-|`-i`, `--aws-id`|AWS credential ID to use when using driver S3 or aws-s3||
-|`-s`, `--aws-secret`|AWS credential secret to use when using driver S3 or aws-s3||
-|`-ro`, `--s3-role`|S3 role used for STSAssumeRoleSessionCredentialsProvider when using driver S3 or aws-s3||
-|`-rsn`, `--s3-role-session-name`|S3 role session name used for STSAssumeRoleSessionCredentialsProvider when using driver S3 or aws-s3||
-|`-mbs`, `--maxBlockSize`|Max block size|64MB|
-|`-rbs`, `--readBufferSize`|Read buffer size|1MB|
-|`-oat`, `--offloadAfterThreshold`|Offload after threshold size (eg: 1M, 5M)||
-|`-oae`, `--offloadAfterElapsed`|Offload after elapsed time in milliseconds (or minutes, hours, days, weeks, eg: 100m, 3h, 2d, 5w).||
-
-### `get-offload-policies`
-Get the offload policy for a namespace.
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-offload-policies tenant/namespace
-
-```
-
-### `set-max-subscriptions-per-topic`
-Set the maximum subscriptions per topic for a namespace.
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces set-max-subscriptions-per-topic tenant/namespace
-
-```
-
-### `get-max-subscriptions-per-topic`
-Get the maximum subscriptions per topic for a namespace.
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces get-max-subscriptions-per-topic tenant/namespace
-
-```
-
-### `remove-max-subscriptions-per-topic`
-Remove the maximum subscriptions per topic for a namespace.
-
-Usage
-
-```bash
-
-$ pulsar-admin namespaces remove-max-subscriptions-per-topic tenant/namespace
-
-```
-
-## `ns-isolation-policy`
-Operations for managing namespace isolation policies.
-
-Usage
-
-```bash
-
-$ pulsar-admin ns-isolation-policy subcommand
-
-```
-
-Subcommands
-* `set`
-* `get`
-* `list`
-* `delete`
-* `brokers`
-* `broker`
-
-### `set`
-Create/update a namespace isolation policy for a cluster. This operation requires Pulsar superuser privileges.
-
-Usage
-
-```bash
-
-$ pulsar-admin ns-isolation-policy set cluster-name policy-name options
-
-```
-
-Options
-|Flag|Description|Default|
-|----|---|---|
-|`--auto-failover-policy-params`|Comma-separated name=value auto failover policy parameters|[]|
-|`--auto-failover-policy-type`|Auto failover policy type name. Currently available options: min_available.|[]|
-|`--namespaces`|Comma-separated namespaces regex list|[]|
-|`--primary`|Comma-separated primary broker regex list|[]|
-|`--secondary`|Comma-separated secondary broker regex list|[]|
-
-
-### `get`
-Get the namespace isolation policy of a cluster. This operation requires Pulsar superuser privileges.
-
-Usage
-
-```bash
-
-$ pulsar-admin ns-isolation-policy get cluster-name policy-name
-
-```
-
-### `list`
-List all namespace isolation policies of a cluster. This operation requires Pulsar superuser privileges.
-
-Usage
-
-```bash
-
-$ pulsar-admin ns-isolation-policy list cluster-name
-
-```
-
-### `delete`
-Delete the namespace isolation policy of a cluster. This operation requires Pulsar superuser privileges.
-
-Usage
-
-```bash
-
-$ pulsar-admin ns-isolation-policy delete cluster-name policy-name
-
-```
-
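-Example
-
-A sketch of pinning a tenant's namespaces to a set of primary brokers under a min_available failover policy; the cluster, policy, regexes, and parameter values are illustrative:
-
-```bash
-
-$ pulsar-admin ns-isolation-policy set my-cluster my-policy \
-  --namespaces 'my-tenant/my-ns.*' \
-  --primary 'broker1.*,broker2.*' \
-  --auto-failover-policy-type min_available \
-  --auto-failover-policy-params min_limit=1,usage_threshold=80
-
-```
-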
-### `brokers`
-List all brokers with namespace-isolation policies attached to them. This operation requires Pulsar super-user privileges.
-
-Usage
-
-```bash
-
-$ pulsar-admin ns-isolation-policy brokers cluster-name
-
-```
-
-### `broker`
-Get a broker with namespace-isolation policies attached to it. This operation requires Pulsar super-user privileges.
-
-Usage
-
-```bash
-
-$ pulsar-admin ns-isolation-policy broker cluster-name options
-
-```
-
-Options
-|Flag|Description|Default|
-|----|---|---|
-|`--broker`|Broker name to get namespace-isolation policies attached to it||
-
-## `topics`
-Operations for managing Pulsar topics (both persistent and non-persistent).
-
-Usage
-
-```bash
-
-$ pulsar-admin topics subcommand
-
-```
-
-Since Pulsar 2.7.0, some namespace-level policies are also available at the topic level. To enable topic-level policies in Pulsar, you need to configure the following parameters in the `broker.conf` file.
-
-```shell
-
-systemTopicEnabled=true
-topicLevelPoliciesEnabled=true
-
-```
-
-Subcommands
-* `compact`
-* `compaction-status`
-* `offload`
-* `offload-status`
-* `create-partitioned-topic`
-* `create-missed-partitions`
-* `delete-partitioned-topic`
-* `create`
-* `get-partitioned-topic-metadata`
-* `update-partitioned-topic`
-* `list-partitioned-topics`
-* `list`
-* `terminate`
-* `permissions`
-* `grant-permission`
-* `revoke-permission`
-* `lookup`
-* `bundle-range`
-* `delete`
-* `unload`
-* `create-subscription`
-* `subscriptions`
-* `unsubscribe`
-* `stats`
-* `stats-internal`
-* `info-internal`
-* `partitioned-stats`
-* `partitioned-stats-internal`
-* `skip`
-* `clear-backlog`
-* `expire-messages`
-* `expire-messages-all-subscriptions`
-* `peek-messages`
-* `reset-cursor`
-* `get-message-by-id`
-* `last-message-id`
-* `get-backlog-quotas`
-* `set-backlog-quota`
-* `remove-backlog-quota`
-* `get-persistence`
-* `set-persistence`
-* `remove-persistence`
-* `get-message-ttl`
-* `set-message-ttl`
-* `remove-message-ttl`
-* `get-deduplication`
-* `set-deduplication`
-* `remove-deduplication`
-* `get-retention`
-* `set-retention`
-* `remove-retention`
-* `get-dispatch-rate`
-* `set-dispatch-rate`
-* `remove-dispatch-rate`
-* `get-max-unacked-messages-per-subscription`
-* `set-max-unacked-messages-per-subscription`
-* `remove-max-unacked-messages-per-subscription`
-* `get-max-unacked-messages-per-consumer`
-* `set-max-unacked-messages-per-consumer`
-* `remove-max-unacked-messages-per-consumer`
-* `get-delayed-delivery`
-* `set-delayed-delivery`
-* `remove-delayed-delivery`
-* `get-max-producers`
-* `set-max-producers`
-* `remove-max-producers`
-* `get-max-consumers`
-* `set-max-consumers`
-* `remove-max-consumers`
-* `get-compaction-threshold`
-* `set-compaction-threshold`
-* `remove-compaction-threshold`
-* `get-offload-policies`
-* `set-offload-policies`
-* `remove-offload-policies`
-* `get-inactive-topic-policies`
-* `set-inactive-topic-policies`
-* `remove-inactive-topic-policies`
-* `set-max-subscriptions`
-* `get-max-subscriptions`
-* `remove-max-subscriptions`
-
-### `compact`
-Run compaction on the specified topic (persistent topics only)
-
-Usage
-
-```bash
-
-$ pulsar-admin topics compact persistent://tenant/namespace/topic
-
-```
-
-### `compaction-status`
-Check the status of a topic compaction (persistent topics only)
-
-Usage
-
-```bash
-
-$ pulsar-admin topics compaction-status persistent://tenant/namespace/topic
-
-```
-
-Options
-|Flag|Description|Default|
-|----|---|---|
-|`-w`, `--wait-complete`|Wait for compaction to complete|false|
-
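-Example
-
-A sketch of triggering compaction and then blocking until it finishes; the topic name is illustrative:
-
-```bash
-
-$ pulsar-admin topics compact persistent://my-tenant/my-ns/my-topic
-$ pulsar-admin topics compaction-status persistent://my-tenant/my-ns/my-topic --wait-complete
-
-```
-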
-### `offload`
-Trigger offload of data from a topic to long-term storage (e.g. Amazon S3)
-
-Usage
-
-```bash
-
-$ pulsar-admin topics offload persistent://tenant/namespace/topic options
-
-```
-
-Options
-|Flag|Description|Default|
-|---|---|---|
-|`-s`, `--size-threshold`|The maximum amount of data to keep in BookKeeper for the specific topic||
-
-
-### `offload-status`
-Check the status of data offloading from a topic to long-term storage
-
-Usage
-
-```bash
-
-$ pulsar-admin topics offload-status persistent://tenant/namespace/topic options
-
-```
-
-Options
-|Flag|Description|Default|
-|---|---|---|
-|`-w`, `--wait-complete`|Wait for offloading to complete|false|
-
-
-### `create-partitioned-topic`
-Create a partitioned topic. A partitioned topic must be created before producers can publish to it.
-
-:::note
-
-By default, topics are considered inactive 60 seconds after creation and are deleted automatically to avoid generating trash data.
-To disable this feature, set `brokerDeleteInactiveTopicsEnabled` to `false`.
-To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to your desired value.
-For more information about these two parameters, see [here](reference-configuration.md#broker).
-
-:::
-
-Usage
-
-```bash
-
-$ pulsar-admin topics create-partitioned-topic {persistent|non-persistent}://tenant/namespace/topic options
-
-```
-
-Options
-|Flag|Description|Default|
-|---|---|---|
-|`-p`, `--partitions`|The number of partitions for the topic|0|
-
-### `create-missed-partitions`
-Try to create partitions for a partitioned topic. This can be used to repair a partitioned topic whose partitions were not created, for example when topic auto-creation is disabled.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics create-missed-partitions persistent://tenant/namespace/topic
-
-```
-
-### `delete-partitioned-topic`
-Delete a partitioned topic. This will also delete all the partitions of the topic if they exist.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics delete-partitioned-topic {persistent|non-persistent}://tenant/namespace/topic
-
-```
-
-### `create`
-Create a non-partitioned topic. A non-partitioned topic must explicitly be created by the user if allowAutoTopicCreation or createIfMissing is disabled.
-
-:::note
-
-By default, topics are considered inactive 60 seconds after creation and are deleted automatically to avoid generating trash data.
-To disable this feature, set `brokerDeleteInactiveTopicsEnabled` to `false`.
-To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to your desired value.
-For more information about these two parameters, see [here](reference-configuration.md#broker).
-
-:::
-
-Usage
-
-```bash
-
-$ pulsar-admin topics create {persistent|non-persistent}://tenant/namespace/topic
-
-```
-
-### `get-partitioned-topic-metadata`
-Get the partitioned topic metadata. If the topic is not created or is a non-partitioned topic, this will return an empty topic with zero partitions.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics get-partitioned-topic-metadata {persistent|non-persistent}://tenant/namespace/topic
-
-```
-
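-Example
-
-A sketch of creating a four-partition topic and then inspecting its metadata; the topic name is illustrative:
-
-```bash
-
-$ pulsar-admin topics create-partitioned-topic persistent://my-tenant/my-ns/my-topic --partitions 4
-$ pulsar-admin topics get-partitioned-topic-metadata persistent://my-tenant/my-ns/my-topic
-
-```
-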
-### `update-partitioned-topic`
-Update an existing non-global partitioned topic. The new number of partitions must be greater than the existing number of partitions.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics update-partitioned-topic {persistent|non-persistent}://tenant/namespace/topic options
-
-```
-
-Options
-|Flag|Description|Default|
-|---|---|---|
-|`-p`, `--partitions`|The number of partitions for the topic|0|
-
-### `list-partitioned-topics`
-Get the list of partitioned topics under a namespace.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics list-partitioned-topics tenant/namespace
-
-```
-
-### `list`
-Get the list of topics under a namespace
-
-Usage
-
-```bash
-
-$ pulsar-admin topics list tenant/namespace
-
-```
-
-### `terminate`
-Terminate a persistent topic (disallow further messages from being published on the topic)
-
-Usage
-
-```bash
-
-$ pulsar-admin topics terminate persistent://tenant/namespace/topic
-
-```
-
-### `permissions`
-Get the permissions on a topic. Retrieve the effective permissions for a destination. These permissions are defined by the permissions set at the namespace level combined (union) with any eventual specific permissions set on the topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics permissions topic
-
-```
-
-### `grant-permission`
-Grant a new permission to a client role on a single topic
-
-Usage
-
-```bash
-
-$ pulsar-admin topics grant-permission {persistent|non-persistent}://tenant/namespace/topic options
-
-```
-
-Options
-|Flag|Description|Default|
-|---|---|---|
-|`--actions`|Actions to be granted (`produce` or `consume`)||
-|`--role`|The client role to which to grant the permissions||
-
-
-### `revoke-permission`
-Revoke permissions from a client role on a single topic. If the permission was not set at the topic level, but rather at the namespace level, this operation will return an error (HTTP status code 412).
-
-Usage
-
-```bash
-
-$ pulsar-admin topics revoke-permission topic
-
-```
-
-### `lookup`
-Look up a topic from the current serving broker
-
-Usage
-
-```bash
-
-$ pulsar-admin topics lookup topic
-
-```
-
-### `bundle-range`
-Get the namespace bundle which contains the given topic
-
-Usage
-
-```bash
-
-$ pulsar-admin topics bundle-range topic
-
-```
-
-### `delete`
-Delete a topic. The topic cannot be deleted if there are any active subscriptions or producers connected to the topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics delete topic
-
-```
-
-### `unload`
-Unload a topic
-
-Usage
-
-```bash
-
-$ pulsar-admin topics unload topic
-
-```
-
-### `create-subscription`
-Create a new subscription on a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics create-subscription [options] persistent://tenant/namespace/topic
-
-```
-
-Options
-|Flag|Description|Default|
-|---|---|---|
-|`-m`, `--messageId`|The messageId at which to create the subscription. It can be either 'latest', 'earliest' or (ledgerId:entryId)|latest|
-|`-s`, `--subscription`|The name of the subscription to create||
-
-### `subscriptions`
-Get the list of subscriptions on the topic
-
-Usage
-
-```bash
-
-$ pulsar-admin topics subscriptions topic
-
-```
-
-### `unsubscribe`
-Delete a durable subscriber from a topic
-
-Usage
-
-```bash
-
-$ pulsar-admin topics unsubscribe topic options
-
-```
-
-Options
-|Flag|Description|Default|
-|---|---|---|
-|`-s`, `--subscription`|The subscription to delete||
-|`-f`, `--force`|Disconnect and close all consumers and delete the subscription forcefully|false|
-
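-Example
-
-A sketch of pre-creating a subscription positioned at the earliest available message; the topic and subscription names are illustrative:
-
-```bash
-
-$ pulsar-admin topics create-subscription persistent://my-tenant/my-ns/my-topic \
-  --subscription my-sub \
-  --messageId earliest
-
-```
-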
-### `stats`
-Get the stats for the topic and its connected producers and consumers. All rates are computed over a 1-minute window and are relative to the last completed 1-minute period.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics stats topic
-
-```
-
-:::note
-
-The unit of `storageSize` and `averageMsgSize` is bytes.
-
-:::
-
-### `stats-internal`
-Get the internal stats for the topic
-
-Usage
-
-```bash
-
-$ pulsar-admin topics stats-internal topic
-
-```
-
-### `info-internal`
-Get the internal metadata info for the topic
-
-Usage
-
-```bash
-
-$ pulsar-admin topics info-internal topic
-
-```
-
-### `partitioned-stats`
-Get the stats for the partitioned topic and its connected producers and consumers. All rates are computed over a 1-minute window and are relative to the last completed 1-minute period.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics partitioned-stats topic options
-
-```
-
-Options
-|Flag|Description|Default|
-|---|---|---|
-|`--per-partition`|Get per-partition stats|false|
-
-### `partitioned-stats-internal`
-Get the internal stats for the partitioned topic and its connected producers and consumers. All rates are computed over a 1-minute window and are relative to the last completed 1-minute period.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics partitioned-stats-internal topic
-
-```
-
-### `skip`
-Skip some messages for the subscription
-
-Usage
-
-```bash
-
-$ pulsar-admin topics skip topic options
-
-```
-
-Options
-|Flag|Description|Default|
-|---|---|---|
-|`-n`, `--count`|The number of messages to skip|0|
-|`-s`, `--subscription`|The subscription on which to skip messages||
-
-
-### `clear-backlog`
-Clear backlog (skip all the messages) for the subscription
-
-Usage
-
-```bash
-
-$ pulsar-admin topics clear-backlog topic options
-
-```
-
-Options
-|Flag|Description|Default|
-|---|---|---|
-|`-s`, `--subscription`|The subscription to clear||
-
-
-### `expire-messages`
-Expire messages that are older than the given expiry time (in seconds) for the subscription.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics expire-messages topic options
-
-```
-
-Options
-|Flag|Description|Default|
-|---|---|---|
-|`-t`, `--expireTime`|Expire messages older than the time (in seconds)|0|
-|`-s`, `--subscription`|The subscription to expire messages on||
-
-
-### `expire-messages-all-subscriptions`
-Expire messages older than the given expiry time (in seconds) for all subscriptions
-
-Usage
-
-```bash
-
-$ pulsar-admin topics expire-messages-all-subscriptions topic options
-
-```
-
-Options
-|Flag|Description|Default|
-|---|---|---|
-|`-t`, `--expireTime`|Expire messages older than the time (in seconds)|0|
-
-
-### `peek-messages`
-Peek some messages for the subscription.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics peek-messages topic options
-
-```
-
-Options
-|Flag|Description|Default|
-|---|---|---|
-|`-n`, `--count`|The number of messages|0|
-|`-s`, `--subscription`|Subscription to get messages from||
-
-
-### `reset-cursor`
-Reset the subscription position to the position closest to the given timestamp or messageId.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics reset-cursor topic options
-
-```
-
-Options
-
-|Flag|Description|Default|
-|---|---|---|
-|`-s`, `--subscription`|Subscription to reset position on||
-|`-t`, `--time`|The time to reset back to, in minutes, hours, days, or weeks. Examples: `100m`, `3h`, `2d`, `5w`.||
-|`-m`, `--messageId`| The messageId to reset back to (ledgerId:entryId). ||
-
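-Example
-
-A sketch of rewinding a subscription by ten minutes; the topic and subscription names are illustrative:
-
-```bash
-
-$ pulsar-admin topics reset-cursor persistent://my-tenant/my-ns/my-topic \
-  --subscription my-sub \
-  --time 10m
-
-```
-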
|| - -### `get-message-by-id` -Get message by ledger id and entry id - -Usage - -```bash - -$ pulsar-admin topics get-message-by-id topic options - -``` - -Options - -|Flag|Description|Default| -|---|---|---| -|`-l`, `--ledgerId`|The ledger id |0| -|`-e`, `--entryId`|The entry id |0| - -### `last-message-id` -Get the last commit message ID of the topic. - -Usage - -```bash - -$ pulsar-admin topics last-message-id persistent://tenant/namespace/topic - -``` - -### `get-backlog-quotas` -Get the backlog quota policies for a topic. - -Usage - -```bash - -$ pulsar-admin topics get-backlog-quotas tenant/namespace/topic - -``` - -### `set-backlog-quota` -Set a backlog quota policy for a topic. - -|Flag|Description|Default| -|----|---|---| -|`-l`, `--limit`|The backlog size limit (for example `10M` or `16G`)|| -|`-lt`, `--limitTime`|Time limit in second, non-positive number for disabling time limit. (for example 3600 for 1 hour)|| -|`-p`, `--policy`|The retention policy to enforce when the limit is reached. The valid options are: `producer_request_hold`, `producer_exception` or `consumer_backlog_eviction`| -|`-t`, `--type`|Backlog quota type to set. The valid options are: `destination_storage`, `message_age` |destination_storage| - -Usage - -```bash - -$ pulsar-admin topics set-backlog-quota tenant/namespace/topic options - -``` - -Example - -```bash - -$ pulsar-admin namespaces set-backlog-quota my-tenant/my-ns/my-topic \ ---limit 2G \ ---policy producer_request_hold - -``` - -```bash - -$ pulsar-admin namespaces set-backlog-quota my-tenant/my-ns/my-topic \ ---limitTime 3600 \ ---policy producer_request_hold \ ---type message_age - -``` - -### `remove-backlog-quota` -Remove a backlog quota policy from a topic. - -|Flag|Description|Default| -|---|---|---| -|`-t`, `--type`|Backlog quota type to remove. The valid options are: `destination_storage`, `message_age` |destination_storage| - -Usage - -```bash - -$ pulsar-admin topics remove-backlog-quota tenant/namespace/topic - -``` - -### `get-persistence` -Get the persistence policies for a topic. - -Usage - -```bash - -$ pulsar-admin topics get-persistence tenant/namespace/topic - -``` - -### `set-persistence` -Set the persistence policies for a topic. - -Usage - -```bash - -$ pulsar-admin topics set-persistence tenant/namespace/topic options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-e`, `--bookkeeper-ensemble`|Number of bookies to use for a topic|0| -|`-w`, `--bookkeeper-write-quorum`|How many writes to make of each entry|0| -|`-a`, `--bookkeeper-ack-quorum`|Number of acks (guaranteed copies) to wait for each entry|0| -|`-r`, `--ml-mark-delete-max-rate`|Throttling rate of mark-delete operation (0 means no throttle)|| - -### `remove-persistence` -Remove the persistence policy for a topic. - -Usage - -```bash - -$ pulsar-admin topics remove-persistence tenant/namespace/topic - -``` - -### `get-message-ttl` -Get the message TTL for a topic. - -Usage - -```bash - -$ pulsar-admin topics get-message-ttl tenant/namespace/topic - -``` - -### `set-message-ttl` -Set the message TTL for a topic. - -Usage - -```bash - -$ pulsar-admin topics set-message-ttl tenant/namespace/topic options - -``` - -Options -|Flag|Description|Default| -|----|---|---| -|`-ttl`, `--messageTTL`|Message TTL for a topic in second, allowed range from 1 to `Integer.MAX_VALUE` |0| - -### `remove-message-ttl` -Remove the message TTL for a topic. 
-
-Usage
-
-```bash
-
-$ pulsar-admin topics remove-message-ttl tenant/namespace/topic
-
-```
-
-### `get-deduplication`
-Get the deduplication policy for a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics get-deduplication tenant/namespace/topic
-
-```
-
-### `set-deduplication`
-Set the deduplication policy for a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics set-deduplication tenant/namespace/topic options
-
-```
-
-Options
-|Flag|Description|Default|
-|---|---|---|
-|`--enable`, `-e`|Enable message deduplication on the specified topic.|false|
-|`--disable`, `-d`|Disable message deduplication on the specified topic.|false|
-
-### `remove-deduplication`
-Remove the deduplication policy for a topic.
-
-Usage
-
-```bash
-
-$ pulsar-admin topics remove-deduplication tenant/namespace/topic
-
-```
-
-## `tenants`
-Operations for managing tenants
-
-Usage
-
-```bash
-
-$ pulsar-admin tenants subcommand
-
-```
-
-Subcommands
-* `list`
-* `get`
-* `create`
-* `update`
-* `delete`
-
-### `list`
-List the existing tenants
-
-Usage
-
-```bash
-
-$ pulsar-admin tenants list
-
-```
-
-### `get`
-Get the configuration of a tenant
-
-Usage
-
-```bash
-
-$ pulsar-admin tenants get tenant-name
-
-```
-
-### `create`
-Create a new tenant
-
-Usage
-
-```bash
-
-$ pulsar-admin tenants create tenant-name options
-
-```
-
-Options
-|Flag|Description|Default|
-|----|---|---|
-|`-r`, `--admin-roles`|Comma-separated admin roles||
-|`-c`, `--allowed-clusters`|Comma-separated allowed clusters||
-
-### `update`
-Update a tenant
-
-Usage
-
-```bash
-
-$ pulsar-admin tenants update tenant-name options
-
-```
-
-Options
-|Flag|Description|Default|
-|----|---|---|
-|`-r`, `--admin-roles`|Comma-separated admin roles||
-|`-c`, `--allowed-clusters`|Comma-separated allowed clusters||
-
-
-### `delete`
-Delete an existing tenant
-
-Usage
-
-```bash
-
-$ pulsar-admin tenants delete tenant-name
-
-```
-
-Options
-|Flag|Description|Default|
-|----|---|---|
-|`-f`, `--force`|Delete a tenant forcefully by deleting all namespaces under it.|false|
-
-
-## `resource-quotas`
-Operations for managing resource quotas
-
-Usage
-
-```bash
-
-$ pulsar-admin resource-quotas subcommand
-
-```
-
-Subcommands
-* `get`
-* `set`
-* `reset-namespace-bundle-quota`
-
-
-### `get`
-Get the resource quota for a specified namespace bundle, or the default quota if no namespace/bundle is specified.
-
-Usage
-
-```bash
-
-$ pulsar-admin resource-quotas get options
-
-```
-
-Options
-|Flag|Description|Default|
-|----|---|---|
-|`-b`, `--bundle`|A bundle of the form {start-boundary}_{end_boundary}. This must be specified together with -n/--namespace.||
-|`-n`, `--namespace`|The namespace||
-
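-Example
-
-A sketch of reading the quota for one bundle of a namespace; the namespace and bundle range are illustrative:
-
-```bash
-
-$ pulsar-admin resource-quotas get \
-  --namespace my-tenant/my-ns \
-  --bundle 0x00000000_0xffffffff
-
-```
-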
-### `set`
-Set the resource quota for the specified namespace bundle, or the default quota if no namespace/bundle is specified.
-
-Usage
-
-```bash
-
-$ pulsar-admin resource-quotas set options
-
-```
-
-Options
-|Flag|Description|Default|
-|----|---|---|
-|`-bi`, `--bandwidthIn`|The expected inbound bandwidth (in bytes/second)|0|
-|`-bo`, `--bandwidthOut`|Expected outbound bandwidth (in bytes/second)|0|
-|`-b`, `--bundle`|A bundle of the form {start-boundary}_{end_boundary}. This must be specified together with -n/--namespace.||
-|`-d`, `--dynamic`|Allow the quota to be dynamically re-calculated (or not)|false|
-|`-mem`, `--memory`|Expected memory usage (in megabytes)|0|
-|`-mi`, `--msgRateIn`|Expected incoming messages per second|0|
-|`-mo`, `--msgRateOut`|Expected outgoing messages per second|0|
-|`-n`, `--namespace`|The namespace as tenant/namespace, for example my-tenant/my-ns. Must be specified together with -b/--bundle.||
-
-
-### `reset-namespace-bundle-quota`
-Reset the specified namespace bundle's resource quota to the default value.
-
-Usage
-
-```bash
-
-$ pulsar-admin resource-quotas reset-namespace-bundle-quota options
-
-```
-
-Options
-|Flag|Description|Default|
-|----|---|---|
-|`-b`, `--bundle`|A bundle of the form {start-boundary}_{end_boundary}. This must be specified together with -n/--namespace.||
-|`-n`, `--namespace`|The namespace||
-
-
-
-## `schemas`
-Operations related to schemas associated with Pulsar topics.
-
-Usage
-
-```bash
-
-$ pulsar-admin schemas subcommand
-
-```
-
-Subcommands
-* `upload`
-* `delete`
-* `get`
-* `extract`
-
-
-### `upload`
-Upload the schema definition for a topic
-
-Usage
-
-```bash
-
-$ pulsar-admin schemas upload persistent://tenant/namespace/topic options
-
-```
-
-Options
-|Flag|Description|Default|
-|----|---|---|
-|`--filename`|The path to the schema definition file. An example schema file is available under the `conf` directory.||
-
-
-### `delete`
-Delete the schema definition associated with a topic
-
-Usage
-
-```bash
-
-$ pulsar-admin schemas delete persistent://tenant/namespace/topic
-
-```
-
-### `get`
-Retrieve the schema definition associated with a topic (at a given version, if a version is supplied).
-
-Usage
-
-```bash
-
-$ pulsar-admin schemas get persistent://tenant/namespace/topic options
-
-```
-
-Options
-|Flag|Description|Default|
-|----|---|---|
-|`--version`|The version of the schema definition to retrieve for a topic.||
-
-### `extract`
-Provide the schema definition for a topic via a Java class name contained in a JAR file
-
-Usage
-
-```bash
-
-$ pulsar-admin schemas extract persistent://tenant/namespace/topic options
-
-```
-
-Options
-|Flag|Description|Default|
-|----|---|---|
-|`-c`, `--classname`|The Java class name||
-|`-j`, `--jar`|A path to the JAR file which contains the above Java class||
-|`-t`, `--type`|The type of the schema (avro or json)||
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/reference-rest-api-overview.md b/site2/website/versioned_docs/version-2.9.3-deprecated/reference-rest-api-overview.md
deleted file mode 100644
index 4bdcf23483a2b5..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/reference-rest-api-overview.md
+++ /dev/null
@@ -1,18 +0,0 @@
----
-id: reference-rest-api-overview
-title: Pulsar REST APIs
-sidebar_label: "Pulsar REST APIs"
----
-
-A REST API (also known as a RESTful API, REpresentational State Transfer Application Programming Interface) is a set of definitions and protocols for building and integrating application software, using HTTP requests to GET, PUT, POST, and DELETE data following the REST standards. In essence, a REST API is a set of remote calls using standard methods to request and return data in a specific format between two systems.
-
-Pulsar provides a variety of REST APIs that enable you to interact with Pulsar to retrieve information or perform an action. 
- -| REST API category | Description | -| --- | --- | -| [Admin](https://pulsar.apache.org/admin-rest-api/?version=master) | REST APIs for administrative operations.| -| [Functions](https://pulsar.apache.org/functions-rest-api/?version=master) | REST APIs for function-specific operations.| -| [Sources](https://pulsar.apache.org/source-rest-api/?version=master) | REST APIs for source-specific operations.| -| [Sinks](https://pulsar.apache.org/sink-rest-api/?version=master) | REST APIs for sink-specific operations.| -| [Packages](https://pulsar.apache.org/packages-rest-api/?version=master) | REST APIs for package-specific operations. A package can be a group of functions, sources, and sinks.| - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/reference-terminology.md b/site2/website/versioned_docs/version-2.9.3-deprecated/reference-terminology.md deleted file mode 100644 index e5099141c3231e..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/reference-terminology.md +++ /dev/null @@ -1,176 +0,0 @@ ---- -id: reference-terminology -title: Pulsar Terminology -sidebar_label: "Terminology" -original_id: reference-terminology ---- - -Here is a glossary of terms related to Apache Pulsar: - -### Concepts - -#### Pulsar - -Pulsar is a distributed messaging system originally created by Yahoo but now under the stewardship of the Apache Software Foundation. - -#### Message - -Messages are the basic unit of Pulsar. They're what [producers](#producer) publish to [topics](#topic) -and what [consumers](#consumer) then consume from topics. - -#### Topic - -A named channel used to pass messages published by [producers](#producer) to [consumers](#consumer) who -process those [messages](#message). - -#### Partitioned Topic - -A topic that is served by multiple Pulsar [brokers](#broker), which enables higher throughput. - -#### Namespace - -A grouping mechanism for related [topics](#topic). - -#### Namespace Bundle - -A virtual group of [topics](#topic) that belong to the same [namespace](#namespace). A namespace bundle -is defined as a range between two 32-bit hashes, such as 0x00000000 and 0xffffffff. - -#### Tenant - -An administrative unit for allocating capacity and enforcing an authentication/authorization scheme. - -#### Subscription - -A lease on a [topic](#topic) established by a group of [consumers](#consumer). Pulsar has four subscription -modes (exclusive, shared, failover and key_shared). - -#### Pub-Sub - -A messaging pattern in which [producer](#producer) processes publish messages on [topics](#topic) that -are then consumed (processed) by [consumer](#consumer) processes. - -#### Producer - -A process that publishes [messages](#message) to a Pulsar [topic](#topic). - -#### Consumer - -A process that establishes a subscription to a Pulsar [topic](#topic) and processes messages published -to that topic by [producers](#producer). - -#### Reader - -Pulsar readers are message processors much like Pulsar [consumers](#consumer) but with two crucial differences: - -- you can specify *where* on a topic readers begin processing messages (consumers always begin with the latest - available unacked message); -- readers don't retain data or acknowledge messages. - -#### Cursor - -The subscription position for a [consumer](#consumer). - -#### Acknowledgment (ack) - -A message sent to a Pulsar broker by a [consumer](#consumer) that a message has been successfully processed. 
-
-An acknowledgment (ack) is Pulsar's way of knowing that the message can be deleted from the system;
-if no acknowledgment is received, the message is retained until it's processed.
-
-#### Negative Acknowledgment (nack)
-
-When an application fails to process a particular message, it can send a "negative ack" to Pulsar
-to signal that the message should be replayed at a later time. (By default, failed messages are
-replayed after a 1 minute delay). Be aware that negative acknowledgment on ordered subscription types,
-such as Exclusive, Failover and Key_Shared, can cause failed messages to arrive at consumers out of the original order.
-
-#### Unacknowledged
-
-A message that has been delivered to a consumer for processing but not yet confirmed as processed by the consumer.
-
-#### Retention Policy
-
-Size and time limits that you can set on a [namespace](#namespace) to configure retention of [messages](#message)
-that have already been [acknowledged](#acknowledgement-ack).
-
-#### Multi-Tenancy
-
-The ability to isolate [namespaces](#namespace), specify quotas, and configure authentication and authorization
-on a per-[tenant](#tenant) basis.
-
-#### Failure Domain
-
-A logical domain under a Pulsar cluster. Each logical domain contains a pre-configured list of brokers.
-
-#### Anti-affinity Namespaces
-
-A group of namespaces that have anti-affinity to each other.
-
-### Architecture
-
-#### Standalone
-
-A lightweight Pulsar broker in which all components run in a single Java Virtual Machine (JVM) process. Standalone
-clusters can be run on a single machine and are useful for development purposes.
-
-#### Cluster
-
-A set of Pulsar [brokers](#broker) and [BookKeeper](#bookkeeper) servers (aka [bookies](#bookie)).
-Clusters can reside in different geographical regions and replicate messages to one another
-in a process called [geo-replication](#geo-replication).
-
-#### Instance
-
-A group of Pulsar [clusters](#cluster) that act together as a single unit.
-
-#### Geo-Replication
-
-Replication of messages across Pulsar [clusters](#cluster), potentially in different datacenters
-or geographical regions.
-
-#### Configuration Store
-
-Pulsar's configuration store (previously known as *global ZooKeeper*) is a ZooKeeper quorum that
-is used for configuration-specific tasks. A multi-cluster Pulsar installation requires just one
-configuration store across all [clusters](#cluster).
-
-#### Topic Lookup
-
-A service provided by Pulsar [brokers](#broker) that enables connecting clients to automatically determine
-which Pulsar [cluster](#cluster) is responsible for a [topic](#topic) (and thus where message traffic for
-the topic needs to be routed).
-
-#### Service Discovery
-
-A mechanism provided by Pulsar that enables connecting clients to use just a single URL to interact
-with all the [brokers](#broker) in a [cluster](#cluster).
-
-#### Broker
-
-A stateless component of Pulsar [clusters](#cluster) that runs two other components: an HTTP server
-exposing a REST interface for administration and topic lookup, and a [dispatcher](#dispatcher) that
-handles all message transfers. Pulsar clusters typically consist of multiple brokers.
-
-#### Dispatcher
-
-An asynchronous TCP server used for all data transfers in and out of a Pulsar [broker](#broker). The Pulsar
-dispatcher uses a custom binary protocol for all communications.
-
-### Storage
-
-#### BookKeeper
-
-[Apache BookKeeper](http://bookkeeper.apache.org/) is a scalable, low-latency persistent log storage
-service that Pulsar uses to store data. 
-
-#### Bookie
-
-Bookie is the name of an individual BookKeeper server. It is effectively the storage server of Pulsar.
-
-#### Ledger
-
-An append-only data structure in [BookKeeper](#bookkeeper) that is used to persistently store messages in Pulsar [topics](#topic).
-
-### Functions
-
-Pulsar Functions are lightweight functions that can consume messages from Pulsar topics, apply custom processing logic, and, if desired, publish results to topics.
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/schema-evolution-compatibility.md b/site2/website/versioned_docs/version-2.9.3-deprecated/schema-evolution-compatibility.md
deleted file mode 100644
index 3e78429df69da2..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/schema-evolution-compatibility.md
+++ /dev/null
@@ -1,201 +0,0 @@
----
-id: schema-evolution-compatibility
-title: Schema evolution and compatibility
-sidebar_label: "Schema evolution and compatibility"
-original_id: schema-evolution-compatibility
----
-
-Normally, schemas do not stay the same over a long period of time. Instead, they undergo evolutions to satisfy new needs.
-
-This chapter examines how Pulsar schema evolves and what the Pulsar schema compatibility check strategies are.
-
-## Schema evolution
-
-Pulsar schema is defined in a data structure called `SchemaInfo`.
-
-Each `SchemaInfo` stored with a topic has a version. The version is used to manage the schema changes happening within a topic.
-
-A message produced with a given `SchemaInfo` is tagged with its schema version. When a message is consumed by a Pulsar client, the Pulsar client can use the schema version to retrieve the corresponding `SchemaInfo` and use the correct schema information to deserialize data.
-
-### What is schema evolution?
-
-Schemas store the details of attributes and types. To satisfy new business requirements, you inevitably need to update schemas over time, which is called **schema evolution**.
-
-Any schema changes affect downstream consumers. Schema evolution ensures that the downstream consumers can seamlessly handle data encoded with both old schemas and new schemas.
-
-### How should Pulsar schema evolve?
-
-The answer is the Pulsar schema compatibility check strategy, which determines how a broker compares new schemas with a topic's existing schemas.
-
-For more information, see [Schema compatibility check strategy](#schema-compatibility-check-strategy).
-
-### How does Pulsar support schema evolution?
-
-1. When a producer/consumer/reader connects to a broker, the broker deploys the schema compatibility checker configured by `schemaRegistryCompatibilityCheckers` to enforce the schema compatibility check.
-
-   The schema compatibility checker is one instance per schema type.
-
-   Currently, Avro and JSON have their own compatibility checkers, while all the other schema types share the default compatibility checker, which disables schema evolution.
-
-2. The producer/consumer/reader sends its client `SchemaInfo` to the broker.
-
-3. The broker knows the schema type and locates the schema compatibility checker for that type.
-
-4. The broker uses the checker to check if the `SchemaInfo` is compatible with the latest schema of the topic by applying its compatibility check strategy.
-
-   Currently, the compatibility check strategy is configured at the namespace level and applied to all the topics within that namespace.
-
-## Schema compatibility check strategy
-
-Pulsar has 8 schema compatibility check strategies, which are summarized in the following table. 
-
-Suppose that you have a topic containing three schemas (V1, V2, and V3), where V1 is the oldest and V3 is the latest:
-
-| Compatibility check strategy | Definition | Changes allowed | Check against which schema | Upgrade first |
-| --- | --- | --- | --- | --- |
-| `ALWAYS_COMPATIBLE` | Disable schema compatibility check. | All changes are allowed | All previous versions | Any order |
-| `ALWAYS_INCOMPATIBLE` | Disable schema evolution. | All changes are disabled | None | None |
-| `BACKWARD` | Consumers using the schema V3 can process data written by producers using the schema V3 or V2. | Add optional fields<br />Delete fields | Latest version | Consumers |
-| `BACKWARD_TRANSITIVE` | Consumers using the schema V3 can process data written by producers using the schema V3, V2 or V1. | Add optional fields<br />Delete fields | All previous versions | Consumers |
-| `FORWARD` | Consumers using the schema V3 or V2 can process data written by producers using the schema V3. | Add fields<br />Delete optional fields | Latest version | Producers |
-| `FORWARD_TRANSITIVE` | Consumers using the schema V3, V2 or V1 can process data written by producers using the schema V3. | Add fields<br />Delete optional fields | All previous versions | Producers |
-| `FULL` | Backward and forward compatible between the schema V3 and V2. | Modify optional fields | Latest version | Any order |
-| `FULL_TRANSITIVE` | Backward and forward compatible among the schema V3, V2, and V1. | Modify optional fields | All previous versions | Any order |
-
-### ALWAYS_COMPATIBLE and ALWAYS_INCOMPATIBLE
-
-| Compatibility check strategy | Definition | Note |
-| --- | --- | --- |
-| `ALWAYS_COMPATIBLE` | Disable schema compatibility check. | None |
-| `ALWAYS_INCOMPATIBLE` | Disable schema evolution, that is, any schema change is rejected. | For all schema types except Avro and JSON, the default schema compatibility check strategy is `ALWAYS_INCOMPATIBLE`.<br />For Avro and JSON, the default schema compatibility check strategy is `FULL`. |
-
-#### Example
-
-* Example 1
-
-  In some situations, an application needs to store events of several different types in the same Pulsar topic.
-
-  In particular, when developing a data model in an `Event Sourcing` style, you might have several kinds of events that affect the state of an entity.
-
-  For example, for a user entity, there are `userCreated`, `userAddressChanged` and `userEnquiryReceived` events. The application requires that those events are always read in the same order.
-
-  Consequently, those events need to go in the same Pulsar partition to maintain order. This application can use `ALWAYS_COMPATIBLE` to allow different kinds of events to co-exist in the same topic.
-
-* Example 2
-
-  Sometimes we also make incompatible changes.
-
-  For example, you are modifying a field type from `string` to `int`.
-
-  In this case, you need to:
-
-  * Upgrade all producers and consumers to the new schema versions at the same time.
-
-  * Optionally, create a new topic and start migrating applications to use the new topic and the new schema, avoiding the need to handle two incompatible versions in the same topic.
-
-### BACKWARD and BACKWARD_TRANSITIVE
-
-Suppose that you have a topic containing three schemas (V1, V2, and V3), where V1 is the oldest and V3 is the latest:
-
-| Compatibility check strategy | Definition | Description |
-|---|---|---|
-`BACKWARD` | Consumers using the new schema can process data written by producers using the **last schema**. | The consumers using the schema V3 can process data written by producers using the schema V3 or V2. |
-`BACKWARD_TRANSITIVE` | Consumers using the new schema can process data written by producers using **all previous schemas**. | The consumers using the schema V3 can process data written by producers using the schema V3, V2, or V1. |
-
-#### Example
-
-* Example 1
-
-  Remove a field.
-
-  A consumer constructed to process events without one field can process events written with the old schema containing the field, and the consumer will ignore that field.
-
-* Example 2
-
-  You want to load all Pulsar data into a Hive data warehouse and run SQL queries against the data.
-
-  The same SQL queries must continue to work even when the data is changed. To support this, you can evolve the schemas using the `BACKWARD` strategy.
-
-### FORWARD and FORWARD_TRANSITIVE
-
-Suppose that you have a topic containing three schemas (V1, V2, and V3), where V1 is the oldest and V3 is the latest:
-
-| Compatibility check strategy | Definition | Description |
-|---|---|---|
-`FORWARD` | Consumers using the **last schema** can process data written by producers using a new schema, even though they may not be able to use the full capabilities of the new schema. | The consumers using the schema V3 or V2 can process data written by producers using the schema V3. |
-`FORWARD_TRANSITIVE` | Consumers using **all previous schemas** can process data written by producers using a new schema. | The consumers using the schema V3, V2, or V1 can process data written by producers using the schema V3. |
-
-#### Example
-
-* Example 1
-
-  Add a field.
-
-  In most data formats, consumers written to process events without new fields can continue doing so even when they receive new events containing new fields.
-
-* Example 2
-
-  If a consumer has application logic tied to a full version of a schema, the application logic may not be updated instantly when the schema evolves.
-
-  In this case, you need to project data with a new schema onto an old schema that the application understands.
-
-  Consequently, you can evolve the schemas using the `FORWARD` strategy to ensure that the old schema can process data encoded with the new schema.
-
-### FULL and FULL_TRANSITIVE
-
-Suppose that you have a topic containing three schemas (V1, V2, and V3), where V1 is the oldest and V3 is the latest:
-
-| Compatibility check strategy | Definition | Description | Note |
-| --- | --- | --- | --- |
-| `FULL` | Schemas are both backward and forward compatible, which means: Consumers using the last schema can process data written by producers using the new schema. AND Consumers using the new schema can process data written by producers using the last schema. | Consumers using the schema V3 can process data written by producers using the schema V3 or V2. AND Consumers using the schema V3 or V2 can process data written by producers using the schema V3. | For Avro and JSON, the default schema compatibility check strategy is `FULL`.<br />For all schema types except Avro and JSON, the default schema compatibility check strategy is `ALWAYS_INCOMPATIBLE`. |
-| `FULL_TRANSITIVE` | The new schema is backward and forward compatible with all previously registered schemas. | Consumers using the schema V3 can process data written by producers using the schema V3, V2 or V1. AND Consumers using the schema V3, V2 or V1 can process data written by producers using the schema V3. | None |
-
-#### Example
-
-In some data formats, for example, Avro, you can define fields with default values. Consequently, adding or removing a field with a default value is a fully compatible change.
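-
-To make this concrete, here is a minimal sketch of such a change in Avro (the record and field names are made up for illustration). Adding the optional `age` field with a default value turns V1 into a V2 that is both backward and forward compatible with V1, so it passes the `FULL` check:
-
-```json
-
-// V1: the original record
-{
-  "type": "record",
-  "name": "User",
-  "fields": [
-    {"name": "name", "type": "string"}
-  ]
-}
-
-// V2: adds a field with a default value, a FULL-compatible change
-{
-  "type": "record",
-  "name": "User",
-  "fields": [
-    {"name": "name", "type": "string"},
-    {"name": "age", "type": "int", "default": 0}
-  ]
-}
-
-```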
-
-## Schema verification
-
-When a producer or a consumer tries to connect to a topic, a broker performs some checks to verify a schema.
-
-### Producer
-
-When a producer tries to connect to a topic (leaving schema auto-creation aside), the broker does the following checks:
-
-* Check whether the schema carried by the producer exists in the schema registry or not.
-
-  * If the schema is already registered, then the producer is connected to the broker and produces messages with that schema.
-
-  * If the schema is not registered, then Pulsar verifies whether the schema is allowed to be registered based on the configured compatibility check strategy.
-
-### Consumer
-When a consumer tries to connect to a topic, a broker checks whether the carried schema is compatible with a registered schema based on the configured schema compatibility check strategy.
-
-| Compatibility check strategy | Check logic |
-| --- | --- |
-| `ALWAYS_COMPATIBLE` | All pass |
-| `ALWAYS_INCOMPATIBLE` | No pass |
-| `BACKWARD` | Can read the last schema |
-| `BACKWARD_TRANSITIVE` | Can read all schemas |
-| `FORWARD` | Can read the last schema |
-| `FORWARD_TRANSITIVE` | Can read the last schema |
-| `FULL` | Can read the last schema |
-| `FULL_TRANSITIVE` | Can read all schemas |
-
-## Order of upgrading clients
-
-The order of upgrading client applications is determined by the compatibility check strategy.
-
-For example, suppose that producers use schemas to write data to Pulsar and consumers use schemas to read data from Pulsar.
-
-| Compatibility check strategy | Upgrade first | Description |
-| --- | --- | --- |
-| `ALWAYS_COMPATIBLE` | Any order | The compatibility check is disabled. Consequently, you can upgrade the producers and consumers in **any order**. |
-| `ALWAYS_INCOMPATIBLE` | None | The schema evolution is disabled. |
-| `BACKWARD`<br />`BACKWARD_TRANSITIVE` | Consumers | There is no guarantee that consumers using the old schema can read data produced using the new schema. Consequently, **upgrade all consumers first**, and then start producing new data. |
-| `FORWARD`<br />`FORWARD_TRANSITIVE` | Producers | There is no guarantee that consumers using the new schema can read data produced using the old schema. Consequently, **upgrade all producers first** to use the new schema and ensure that the data already produced using the old schemas are not available to consumers, and then upgrade the consumers. |
-| `FULL`<br />`FULL_TRANSITIVE` | Any order | There is no guarantee that consumers using the old schema can read data produced using the new schema, or that consumers using the new schema can read data produced using the old schema. Consequently, you can upgrade the producers and consumers in **any order**. |
-
-
-
-
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/schema-get-started.md b/site2/website/versioned_docs/version-2.9.3-deprecated/schema-get-started.md
deleted file mode 100644
index afacb0fa51f2ef..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/schema-get-started.md
+++ /dev/null
@@ -1,102 +0,0 @@
----
-id: schema-get-started
-title: Get started
-sidebar_label: "Get started"
-original_id: schema-get-started
----
-
-This chapter introduces Pulsar schemas and explains why they are important.
-
-## Schema Registry
-
-Type safety is extremely important in any application built around a message bus like Pulsar.
-
-Producers and consumers need some kind of mechanism for coordinating types at the topic level to avoid various potential problems, such as serialization and deserialization issues.
-
-Applications typically adopt one of the following approaches to guarantee type safety in messaging. Both approaches are available in Pulsar, and you're free to adopt one or the other or to mix and match on a per-topic basis.
-
-#### Note
->
-> Currently, the Pulsar schema registry is only available for the [Java client](client-libraries-java.md), [CGo client](client-libraries-cgo.md), [Python client](client-libraries-python.md), and [C++ client](client-libraries-cpp.md).
-
-### Client-side approach
-
-Producers and consumers are responsible for not only serializing and deserializing messages (which consist of raw bytes) but also "knowing" which types are being transmitted via which topics.
-
-If a producer is sending temperature sensor data on the topic `topic-1`, consumers of that topic will run into trouble if they attempt to parse that data as moisture sensor readings.
-
-Producers and consumers can send and receive messages consisting of raw byte arrays and leave all type safety enforcement to the application on an "out-of-band" basis.
-
-### Server-side approach
-
-Producers and consumers inform the system which data types can be transmitted via the topic.
-
-With this approach, the messaging system enforces type safety and ensures that producers and consumers remain synced.
-
-Pulsar has a built-in **schema registry** that enables clients to upload data schemas on a per-topic basis. Those schemas dictate which data types are recognized as valid for that topic.
-
-## Why use schema
-
-When a schema is not used, Pulsar does not parse data: it takes bytes as inputs and sends bytes as outputs. While data has meaning beyond bytes, you need to parse the data yourself and might encounter parse exceptions, which mainly occur in the following situations:
-
-* The field does not exist
-
-* The field type has changed (for example, `string` is changed to `int`)
-
-There are a few ways to prevent and overcome these exceptions. For example, you can catch exceptions when parsing errors occur, but this makes code hard to maintain. Alternatively, you can adopt a schema management system that performs schema evolution without breaking downstream applications and that enforces type safety to the maximum extent in the language you are using; Pulsar schema is that solution. 
-
-Pulsar schema enables you to use language-specific types of data when constructing and handling messages, from simple types like `string` to more complex application-specific types.
-
-**Example**
-
-You can use the _User_ class to define the messages sent to Pulsar topics.
-
-```
-
-public class User {
-    String name;
-    int age;
-}
-
-```
-
-When constructing a producer with the _User_ class, you can choose whether to specify a schema, as described below.
-
-### Without schema
-
-If you construct a producer without specifying a schema, then the producer can only produce messages of type `byte[]`. If you have a POJO class, you need to serialize the POJO into bytes before sending messages.
-
-**Example**
-
-```
-
-Producer<byte[]> producer = client.newProducer()
-        .topic(topic)
-        .create();
-User user = new User("Tom", 28);
-byte[] message = … // serialize the `user` by yourself;
-producer.send(message);
-
-```
-
-### With schema
-
-If you construct a producer with a schema specified, then you can send a class to a topic directly without worrying about how to serialize POJOs into bytes.
-
-**Example**
-
-This example constructs a producer with the _JSONSchema_, and you can send the _User_ class to topics directly without worrying about how to serialize it into bytes.
-
-```
-
-Producer<User> producer = client.newProducer(JSONSchema.of(User.class))
-        .topic(topic)
-        .create();
-User user = new User("Tom", 28);
-producer.send(user);
-
-```
-
-### Summary
-
-When constructing a producer with a schema, you do not need to serialize messages into bytes; instead, Pulsar schema does this job in the background.
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/schema-manage.md b/site2/website/versioned_docs/version-2.9.3-deprecated/schema-manage.md
deleted file mode 100644
index c588aae619eee9..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/schema-manage.md
+++ /dev/null
@@ -1,639 +0,0 @@
----
-id: schema-manage
-title: Manage schema
-sidebar_label: "Manage schema"
-original_id: schema-manage
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-This guide demonstrates the ways to manage schemas:
-
-* Automatically
-
-  * [Schema AutoUpdate](#schema-autoupdate)
-
-* Manually
-
-  * [Schema manual management](#schema-manual-management)
-
-  * [Custom schema storage](#custom-schema-storage)
-
-## Schema AutoUpdate
-
-If a schema passes the schema compatibility check, the Pulsar producer automatically updates this schema to the topic it produces by default.
-
-### AutoUpdate for producer
-
-For a producer, the `AutoUpdate` happens in the following cases:
-
-* If a **topic doesn’t have a schema**, Pulsar registers a schema automatically.
-
-* If a **topic has a schema**:
-
-  * If a **producer doesn’t carry a schema**:
-
-    * If `isSchemaValidationEnforced` or `schemaValidationEnforced` is **disabled** in the namespace to which the topic belongs, the producer is allowed to connect to the topic and produce data.
-
-    * If `isSchemaValidationEnforced` or `schemaValidationEnforced` is **enabled** in the namespace to which the topic belongs, the producer is rejected and disconnected.
-
-  * If a **producer carries a schema**:
-
-    A broker performs the compatibility check based on the configured compatibility check strategy of the namespace to which the topic belongs.
-
-    * If the schema is registered, the producer is connected to the broker. 
-
-    * If the schema is not registered:
-
-      * If `isAllowAutoUpdateSchema` is set to **false**, the producer is rejected and fails to connect to the broker.
-
-      * If `isAllowAutoUpdateSchema` is set to **true**:
-
-        * If the schema passes the compatibility check, then the broker registers a new schema automatically for the topic and the producer is connected.
-
-        * If the schema does not pass the compatibility check, then the broker does not register a schema and the producer is rejected and fails to connect to the broker.
-
-![AutoUpdate Producer](/assets/schema-producer.png)
-
-### AutoUpdate for consumer
-
-For a consumer, the `AutoUpdate` happens in the following cases:
-
-* If a **consumer connects to a topic without a schema** (which means the consumer receives raw bytes), the consumer can connect to the topic successfully without doing any compatibility check.
-
-* If a **consumer connects to a topic with a schema**:
-
-  * If the topic does not have any of the following (a schema, data, or a local consumer and a local producer):
-
-    * If `isAllowAutoUpdateSchema` is set to **true**, then the consumer registers a schema and is connected to the broker.
-
-    * If `isAllowAutoUpdateSchema` is set to **false**, then the consumer is rejected and fails to connect to the broker.
-
-  * If the topic has one of the following (a schema, data, or a local consumer and a local producer), then the schema compatibility check is performed.
-
-    * If the schema passes the compatibility check, then the consumer is connected to the broker.
-
-    * If the schema does not pass the compatibility check, then the consumer is rejected and fails to connect to the broker.
-
-![AutoUpdate Consumer](/assets/schema-consumer.png)
-
-
-### Manage AutoUpdate strategy
-
-You can use the `pulsar-admin` command to manage the `AutoUpdate` strategy as below:
-
-* [Enable AutoUpdate](#enable-autoupdate)
-
-* [Disable AutoUpdate](#disable-autoupdate)
-
-* [Adjust compatibility](#adjust-compatibility)
-
-#### Enable AutoUpdate
-
-To enable `AutoUpdate` on a namespace, you can use the `pulsar-admin` command.
-
-```bash
-
-bin/pulsar-admin namespaces set-is-allow-auto-update-schema --enable tenant/namespace
-
-```
-
-#### Disable AutoUpdate
-
-To disable `AutoUpdate` on a namespace, you can use the `pulsar-admin` command.
-
-```bash
-
-bin/pulsar-admin namespaces set-is-allow-auto-update-schema --disable tenant/namespace
-
-```
-
-Once `AutoUpdate` is disabled, you can only register a new schema using the `pulsar-admin` command.
-
-#### Adjust compatibility
-
-To adjust the schema compatibility level on a namespace, you can use the `pulsar-admin` command.
-
-```bash
-
-bin/pulsar-admin namespaces set-schema-compatibility-strategy --compatibility tenant/namespace
-
-```
-
-### Schema validation
-
-By default, `schemaValidationEnforced` is **disabled** for producers:
-
-* This means a producer without a schema can produce any kind of messages to a topic with schemas, which may result in junk data being produced to the topic.
-
-* This also allows non-Java language clients that don’t support schemas to produce messages to a topic with schemas.
-
-However, if you want a stronger guarantee on the topics with schemas, you can enable `schemaValidationEnforced` across the whole cluster or on a per-namespace basis.
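-
-Before changing anything, you can check how a namespace is currently configured. A small sketch, assuming the same placeholder tenant/namespace as the commands below (this subcommand is part of `pulsar-admin namespaces`):
-
-```bash
-
-bin/pulsar-admin namespaces get-schema-validation-enforce tenant/namespace
-
-```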
-
-#### Enable schema validation
-
-To enable `schemaValidationEnforced` on a namespace, you can use the `pulsar-admin` command.
-
-```bash
-
-bin/pulsar-admin namespaces set-schema-validation-enforce --enable tenant/namespace
-
-```
-
-#### Disable schema validation
-
-To disable `schemaValidationEnforced` on a namespace, you can use the `pulsar-admin` command.
-
-```bash
-
-bin/pulsar-admin namespaces set-schema-validation-enforce --disable tenant/namespace
-
-```
-
-## Schema manual management
-
-To manage schemas, you can use one of the following methods.
-
-| Method | Description |
-| --- | --- |
-| **Admin CLI** | You can use the `pulsar-admin` tool to manage Pulsar schemas, brokers, clusters, sources, sinks, topics, tenants and so on. For more information about how to use the `pulsar-admin` tool, see [here](reference-pulsar-admin.md). |
-| **REST API** | Pulsar exposes schema-related management APIs in Pulsar’s admin RESTful API. You can access the admin RESTful endpoint directly to manage schemas. For more information about how to use the Pulsar REST API, see [here](http://pulsar.apache.org/admin-rest-api/). |
-| **Java Admin API** | Pulsar provides a Java admin library. |
-
-### Upload a schema
-
-To upload (register) a new schema for a topic, you can use one of the following methods.
-
-````mdx-code-block
-
-
-
-Use the `upload` subcommand.
-
-```bash
-
-$ pulsar-admin schemas upload <topic-name> --filename <schema-definition-file>
-
-```
-
-The `schema-definition-file` is in JSON format.
-
-```json
-
-{
-    "type": "<schema-type>",
-    "schema": "<an-utf8-encoded-string-of-schema-definition-data>",
-    "properties": {} // the properties associated with the schema
-}
-
-```
-
-The `schema-definition-file` includes the following fields:
-
-| Field | Description |
-| --- | --- |
-| `type` | The schema type. |
-| `schema` | The schema definition data, which is encoded in UTF 8 charset.<br />If the schema is a **primitive** schema, this field should be blank.<br />If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition. |
-| `properties` | The additional properties associated with the schema. |
-
-Here are two examples of the `schema-definition-file`, one for a JSON schema and one for a STRING schema.
-
-**Example 1**
-
-```json
-
-{
-    "type": "JSON",
-    "schema": "{\"type\":\"record\",\"name\":\"User\",\"namespace\":\"com.foo\",\"fields\":[{\"name\":\"file1\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"file2\",\"type\":\"string\",\"default\":null},{\"name\":\"file3\",\"type\":[\"null\",\"string\"],\"default\":\"dfdf\"}]}",
-    "properties": {}
-}
-
-```
-
-**Example 2**
-
-```json
-
-{
-    "type": "STRING",
-    "schema": "",
-    "properties": {
-        "key1": "value1"
-    }
-}
-
-```
-
-
-Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v2/schemas/:tenant/:namespace/:topic/schema|operation/uploadSchema?version=@pulsar:version_number@}
-
-The post payload is in JSON format.
-
-```json
-
-{
-    "type": "<schema-type>",
-    "schema": "<an-utf8-encoded-string-of-schema-definition-data>",
-    "properties": {} // the properties associated with the schema
-}
-
-```
-
-The post payload includes the following fields:
-
-| Field | Description |
-| --- | --- |
-| `type` | The schema type. |
-| `schema` | The schema definition data, which is encoded in UTF 8 charset.<br />If the schema is a **primitive** schema, this field should be blank.<br />If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition. |
-| `properties` | The additional properties associated with the schema. |
-
-```java
-
-void createSchema(String topic, PostSchemaPayload schemaPayload)
-
-```
-
-The `PostSchemaPayload` includes the following fields:
-
-| Field | Description |
-| --- | --- |
-| `type` | The schema type. |
-| `schema` | The schema definition data, which is encoded in UTF 8 charset.<br />If the schema is a **primitive** schema, this field should be blank.<br />If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition. |
-| `properties` | The additional properties associated with the schema. |
-
-Here is an example of `PostSchemaPayload`:
-
-```java
-
-PulsarAdmin admin = …;
-
-PostSchemaPayload payload = new PostSchemaPayload();
-payload.setType("INT8");
-payload.setSchema("");
-
-admin.schemas().createSchema("my-tenant/my-ns/my-topic", payload);
-
-```
-
-````
-
-### Get a schema (latest)
-
-To get the latest schema for a topic, you can use one of the following methods.
-
-````mdx-code-block
-
-
-
-Use the `get` subcommand.
-
-```bash
-
-$ pulsar-admin schemas get <topic-name>
-
-{
-    "version": 0,
-    "type": "String",
-    "timestamp": 0,
-    "data": "string",
-    "properties": {
-        "property1": "string",
-        "property2": "string"
-    }
-}
-
-```
-
-
-Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v2/schemas/:tenant/:namespace/:topic/schema|operation/getSchema?version=@pulsar:version_number@}
-
-Here is an example of a response, which is returned in JSON format.
-
-```json
-
-{
-    "version": "<the-version-number-of-the-schema>",
-    "type": "<the-schema-type>",
-    "timestamp": "<the-creation-timestamp-of-the-version-of-the-schema>",
-    "data": "<an-utf8-encoded-string-of-schema-definition-data>",
-    "properties": {} // the properties associated with the schema
-}
-
-```
-
-The response includes the following fields:
-
-| Field | Description |
-| --- | --- |
-| `version` | The schema version, which is a long number. |
-| `type` | The schema type. |
-| `timestamp` | The timestamp when this version of the schema was created. |
-| `data` | The schema definition data, which is encoded in UTF 8 charset.<br />If the schema is a **primitive** schema, this field should be blank.<br />If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition. |
-| `properties` | The additional properties associated with the schema. |
-
-
-```java
-
-SchemaInfo getSchemaInfo(String topic)
-
-```
-
-The `SchemaInfo` includes the following fields:
-
-| Field | Description |
-| --- | --- |
-| `name` | The schema name. |
-| `type` | The schema type. |
-| `schema` | A byte array of the schema definition data, which is encoded in UTF 8 charset.<br />If the schema is a **primitive** schema, this byte array should be empty.<br />If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition converted to a byte array. |
-| `properties` | The additional properties associated with the schema. |
-
-Here is an example of `SchemaInfo`:
-
-```java
-
-PulsarAdmin admin = …;
-
-SchemaInfo si = admin.schemas().getSchemaInfo("my-tenant/my-ns/my-topic");
-
-```
-
-````
-
-### Get a schema (specific)
-
-To get a specific version of a schema, you can use one of the following methods.
-
-````mdx-code-block
-
-
-
-Use the `get` subcommand.
-
-```bash
-
-$ pulsar-admin schemas get <topic-name> --version=<the-version-number-of-the-schema>
-
-```
-
-
-Send a `GET` request to a schema endpoint: {@inject: endpoint|GET|/admin/v2/schemas/:tenant/:namespace/:topic/schema/:version|operation/getSchema?version=@pulsar:version_number@}
-
-Here is an example of a response, which is returned in JSON format.
-
-```json
-
-{
-    "version": "<the-version-number-of-the-schema>",
-    "type": "<the-schema-type>",
-    "timestamp": "<the-creation-timestamp-of-the-version-of-the-schema>",
-    "data": "<an-utf8-encoded-string-of-schema-definition-data>",
-    "properties": {} // the properties associated with the schema
-}
-
-```
-
-The response includes the following fields:
-
-| Field | Description |
-| --- | --- |
-| `version` | The schema version, which is a long number. |
-| `type` | The schema type. |
-| `timestamp` | The timestamp when this version of the schema was created. |
-| `data` | The schema definition data, which is encoded in UTF 8 charset.<br />If the schema is a **primitive** schema, this field should be blank.<br />If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition. |
-| `properties` | The additional properties associated with the schema. |
-
-
-```java
-
-SchemaInfo getSchemaInfo(String topic, long version)
-
-```
-
-The `SchemaInfo` includes the following fields:
-
-| Field | Description |
-| --- | --- |
-| `name` | The schema name. |
-| `type` | The schema type. |
-| `schema` | A byte array of the schema definition data, which is encoded in UTF 8.<br />If the schema is a **primitive** schema, this byte array should be empty.<br />If the schema is a **struct** schema, this field should be a JSON string of the Avro schema definition converted to a byte array. |
-| `properties` | The additional properties associated with the schema. |
-
-Here is an example of `SchemaInfo`:
-
-```java
-
-PulsarAdmin admin = …;
-
-SchemaInfo si = admin.schemas().getSchemaInfo("my-tenant/my-ns/my-topic", 1L);
-
-```
-
-````
-
-### Extract a schema
-
-To provide a schema via a topic, you can use the following method.
-
-````mdx-code-block
-
-
-
-Use the `extract` subcommand.
-
-```bash
-
-$ pulsar-admin schemas extract <topic-name> --classname <class-name> --jar <jar-path> --type <type-name>
-
-```
-
-````
-
-### Delete a schema
-
-To delete a schema for a topic, you can use one of the following methods.
-
-:::note
-
-In any case, the **delete** action deletes **all versions** of a schema registered for a topic.
-
-:::
-
-````mdx-code-block
-
-
-
-Use the `delete` subcommand.
-
-```bash
-
-$ pulsar-admin schemas delete <topic-name>
-
-```
-
-
-Send a `DELETE` request to a schema endpoint: {@inject: endpoint|DELETE|/admin/v2/schemas/:tenant/:namespace/:topic/schema|operation/deleteSchema?version=@pulsar:version_number@}
-
-Here is an example of a response, which is returned in JSON format.
-
-```json
-
-{
-    "version": "<the-latest-version-number-of-the-schema>"
-}
-
-```
-
-The response includes the following field:
-
-Field | Description |
----|---|
-`version` | The schema version, which is a long number. |
-
-
-```java
-
-void deleteSchema(String topic)
-
-```
-
-Here is an example of deleting a schema.
-
-```java
-
-PulsarAdmin admin = …;
-
-admin.schemas().deleteSchema("my-tenant/my-ns/my-topic");
-
-```
-
-````
-
-## Custom schema storage
-
-By default, Pulsar stores various data types of schemas in [Apache BookKeeper](https://bookkeeper.apache.org) deployed alongside Pulsar.
-
-However, you can use another storage system if needed.
-
-### Implement
-
-To use a non-default (non-BookKeeper) storage system for Pulsar schemas, you need to implement the following Java interfaces:
-
-* [SchemaStorage interface](#schemastorage-interface)
-
-* [SchemaStorageFactory interface](#schemastoragefactory-interface)
-
-#### SchemaStorage interface
-
-The `SchemaStorage` interface has the following methods:
-
-```java
-
-public interface SchemaStorage {
-    // How schemas are updated
-    CompletableFuture<SchemaVersion> put(String key, byte[] value, byte[] hash);
-
-    // How schemas are fetched from storage
-    CompletableFuture<StoredSchema> get(String key, SchemaVersion version);
-
-    // How schemas are deleted
-    CompletableFuture<SchemaVersion> delete(String key);
-
-    // Utility method for converting a schema version byte array to a SchemaVersion object
-    SchemaVersion versionFromBytes(byte[] version);
-
-    // Startup behavior for the schema storage client
-    void start() throws Exception;
-
-    // Shutdown behavior for the schema storage client
-    void close() throws Exception;
-}
-
-```
-
-:::tip
-
-For a complete example of a **schema storage** implementation, see the [BookKeeperSchemaStorage](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/schema/BookkeeperSchemaStorage.java) class.
-
-:::
-
-#### SchemaStorageFactory interface
-
-The `SchemaStorageFactory` interface has the following method:
-
-```java
-
-public interface SchemaStorageFactory {
-    @NotNull
-    SchemaStorage create(PulsarService pulsar) throws Exception;
-}
-
-```
-
-:::tip
-
-For a complete example of a **schema storage factory** implementation, see the [BookKeeperSchemaStorageFactory](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/schema/BookkeeperSchemaStorageFactory.java) class.
-
-:::
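-
-To make the contract concrete, here is a minimal sketch of a factory wired to a hypothetical `MySchemaStorage` implementation (both class names are placeholders, not part of Pulsar; the import paths assume the interfaces live in the broker's `org.apache.pulsar.broker.service.schema` package, as the links above suggest):
-
-```java
-
-import javax.validation.constraints.NotNull;
-
-import org.apache.pulsar.broker.PulsarService;
-import org.apache.pulsar.broker.service.schema.SchemaStorage;
-import org.apache.pulsar.broker.service.schema.SchemaStorageFactory;
-
-public class MySchemaStorageFactory implements SchemaStorageFactory {
-    @NotNull
-    @Override
-    public SchemaStorage create(PulsarService pulsar) throws Exception {
-        // Read connection settings from the broker configuration and
-        // hand the storage client back to the broker; the broker then
-        // calls start() on it during bootstrap.
-        return new MySchemaStorage(pulsar.getConfiguration());
-    }
-}
-
-```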
-
-### Deploy
-
-To use your custom schema storage implementation, perform the following steps.
-
-1. Package the implementation in a [JAR](https://docs.oracle.com/javase/tutorial/deployment/jar/basicsindex.html) file.
-
-2. Add the JAR file to the `lib` folder in your Pulsar binary or source distribution.
-
-3. Change the `schemaRegistryStorageClassName` configuration in `broker.conf` to your custom factory class.
-
-4. Start Pulsar.
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/schema-understand.md b/site2/website/versioned_docs/version-2.9.3-deprecated/schema-understand.md
deleted file mode 100644
index 55bc662c666338..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/schema-understand.md
+++ /dev/null
@@ -1,576 +0,0 @@
----
-id: schema-understand
-title: Understand schema
-sidebar_label: "Understand schema"
-original_id: schema-understand
----
-
-````mdx-code-block
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-````
-
-
-This chapter explains the basic concepts of Pulsar schema, focuses on the topics of particular importance, and provides additional background.
-
-## SchemaInfo
-
-Pulsar schema is defined in a data structure called `SchemaInfo`.
-
-The `SchemaInfo` is stored and enforced on a per-topic basis and cannot be stored at the namespace or tenant level.
-
-A `SchemaInfo` consists of the following fields:
-
-| Field | Description |
-| --- | --- |
-| `name` | Schema name (a string). |
-| `type` | Schema type, which determines how to interpret the schema data.<br />Predefined schema: see [here](schema-understand.md#schema-type).<br />Customized schema: it is left as an empty string. |
-| `schema`(`payload`) | Schema data, which is a sequence of 8-bit unsigned bytes and is schema-type specific. |
-| `properties` | A set of user-defined properties as a string/string map. Applications can use this bag to carry any application-specific logic. Possible properties might be the Git hash associated with the schema, or an environment string like `dev` or `prod`. |
-
-**Example**
-
-This is the `SchemaInfo` of a string.
-
-```json
-
-{
-    "name": "test-string-schema",
-    "type": "STRING",
-    "schema": "",
-    "properties": {}
-}
-
-```
-
-## Schema type
-
-Pulsar supports various schema types, which are mainly divided into two categories:
-
-* Primitive type
-
-* Complex type
-
-### Primitive type
-
-Currently, Pulsar supports the following primitive types:
-
-| Primitive Type | Description |
-|---|---|
-| `BOOLEAN` | A binary value |
-| `INT8` | An 8-bit signed integer |
-| `INT16` | A 16-bit signed integer |
-| `INT32` | A 32-bit signed integer |
-| `INT64` | A 64-bit signed integer |
-| `FLOAT` | A single precision (32-bit) IEEE 754 floating-point number |
-| `DOUBLE` | A double-precision (64-bit) IEEE 754 floating-point number |
-| `BYTES` | A sequence of 8-bit unsigned bytes |
-| `STRING` | A Unicode character sequence |
-| `TIMESTAMP` (`DATE`, `TIME`) | A logical type that represents a specific instant in time with millisecond precision. It stores the number of milliseconds since `January 1, 1970, 00:00:00 GMT` as an `INT64` value |
-| `INSTANT` | A single instantaneous point on the time-line with nanosecond precision |
-| `LOCAL_DATE` | An immutable date-time object that represents a date, often viewed as year-month-day |
-| `LOCAL_TIME` | An immutable date-time object that represents a time, often viewed as hour-minute-second. Time is represented to nanosecond precision. |
-| `LOCAL_DATE_TIME` | An immutable date-time object that represents a date-time, often viewed as year-month-day-hour-minute-second |
-
-For primitive types, Pulsar does not store any schema data in `SchemaInfo`. The `type` in `SchemaInfo` is used to determine how to serialize and deserialize the data.
-
-Some of the primitive schema implementations can use `properties` to store implementation-specific tunable settings. For example, a `string` schema can use `properties` to store the encoding charset to serialize and deserialize strings.
-
-The conversions between **Pulsar schema types** and **language-specific primitive types** are as below.
-
-| Schema Type | Java Type| Python Type | Go Type |
-|---|---|---|---|
-| BOOLEAN | boolean | bool | bool |
-| INT8 | byte | | int8 |
-| INT16 | short | | int16 |
-| INT32 | int | | int32 |
-| INT64 | long | | int64 |
-| FLOAT | float | float | float32 |
-| DOUBLE | double | float | float64 |
-| BYTES | byte[], ByteBuffer, ByteBuf | bytes | []byte |
-| STRING | string | str | string |
-| TIMESTAMP | java.sql.Timestamp | | |
-| TIME | java.sql.Time | | |
-| DATE | java.util.Date | | |
-| INSTANT | java.time.Instant | | |
-| LOCAL_DATE | java.time.LocalDate | | |
-| LOCAL_TIME | java.time.LocalTime | | |
-| LOCAL_DATE_TIME | java.time.LocalDateTime | | |
-
-**Example**
-
-This example demonstrates how to use a string schema.
-
-1. Create a producer with a string schema and send messages.
-
-   ```java
-
-   Producer<String> producer = client.newProducer(Schema.STRING).create();
-   producer.newMessage().value("Hello Pulsar!").send();
-
-   ```
-
-2. Create a consumer with a string schema and receive messages.
-
-   ```java
-
-   Consumer<String> consumer = client.newConsumer(Schema.STRING).subscribe();
-   consumer.receive();
-
-   ```
-
-### Complex type
-
-Currently, Pulsar supports the following complex types:
-
-| Complex Type | Description |
-|---|---|
-| `keyvalue` | Represents a complex type of a key/value pair. |
-| `struct` | Handles structured data. It supports `AvroBaseStructSchema` and `ProtobufNativeSchema`. |
-
-#### keyvalue
-
-`Keyvalue` schema helps applications define schemas for both key and value.
-
-For `SchemaInfo` of `keyvalue` schema, Pulsar stores the `SchemaInfo` of the key schema and the `SchemaInfo` of the value schema together.
-
-Pulsar provides the following methods to encode a key/value pair in messages:
-
-* `INLINE`
-
-* `SEPARATED`
-
-You can choose the encoding type when constructing the key/value schema.
-
-````mdx-code-block
-
-
-
-Key/value pairs are encoded together in the message payload.
-
-
-Key is encoded in the message key and the value is encoded in the message payload.
-
-**Example**
-
-This example shows how to construct a key/value schema and then use it to produce and consume messages.
-
-1. Construct a key/value schema with `INLINE` encoding type.
-
-   ```java
-
-   Schema<KeyValue<Integer, String>> kvSchema = Schema.KeyValue(
-       Schema.INT32,
-       Schema.STRING,
-       KeyValueEncodingType.INLINE
-   );
-
-   ```
-
-2. Optionally, construct a key/value schema with `SEPARATED` encoding type.
-
-   ```java
-
-   Schema<KeyValue<Integer, String>> kvSchema = Schema.KeyValue(
-       Schema.INT32,
-       Schema.STRING,
-       KeyValueEncodingType.SEPARATED
-   );
-
-   ```
-
-3. Produce messages using a key/value schema.
-
-   ```java
-
-   Schema<KeyValue<Integer, String>> kvSchema = Schema.KeyValue(
-       Schema.INT32,
-       Schema.STRING,
-       KeyValueEncodingType.SEPARATED
-   );
-
-   Producer<KeyValue<Integer, String>> producer = client.newProducer(kvSchema)
-       .topic(TOPIC)
-       .create();
-
-   final int key = 100;
-   final String value = "value-100";
-
-   // send the key/value message
-   producer.newMessage()
-       .value(new KeyValue<>(key, value))
-       .send();
-
-   ```
-
-4. Consume messages using a key/value schema.
-
-   ```java
-
-   Schema<KeyValue<Integer, String>> kvSchema = Schema.KeyValue(
-       Schema.INT32,
-       Schema.STRING,
-       KeyValueEncodingType.SEPARATED
-   );
-
-   Consumer<KeyValue<Integer, String>> consumer = client.newConsumer(kvSchema)
-       ...
-       .topic(TOPIC)
-       .subscriptionName(SubscriptionName).subscribe();
-
-   // receive key/value pair
-   Message<KeyValue<Integer, String>> msg = consumer.receive();
-   KeyValue<Integer, String> kv = msg.getValue();
-
-   ```
-
-````
-
-#### struct
-
-This section describes the details of type and usage of the `struct` schema.
-
-##### Type
-
-`struct` schema supports `AvroBaseStructSchema` and `ProtobufNativeSchema`.
-
-|Type|Description|
----|---|
-`AvroBaseStructSchema`|Pulsar uses [Avro Specification](http://avro.apache.org/docs/current/spec.html) to declare the schema definition for `AvroBaseStructSchema`, which supports `AvroSchema`, `JsonSchema`, and `ProtobufSchema`.<br /><br />This allows Pulsar:<br />- to use the same tools to manage schema definitions<br />- to use different serialization or deserialization methods to handle data|
-`ProtobufNativeSchema`|`ProtobufNativeSchema` is based on the protobuf native Descriptor.<br /><br />This allows Pulsar:<br />- to use native protobuf-v3 to serialize or deserialize data<br />- to use `AutoConsume` to deserialize data.|
-
-##### Usage
-
-Pulsar provides the following methods to use the `struct` schema:
-
-* `static`
-
-* `generic`
-
-* `SchemaDefinition`
-
-````mdx-code-block
-
-
-
-You can predefine the `struct` schema, which can be a POJO in Java, a `struct` in Go, or classes generated by Avro or Protobuf tools.
-
-**Example**
-
-Pulsar gets the schema definition from the predefined `struct` using an Avro library. The schema definition is the schema data stored as a part of the `SchemaInfo`.
-
-1. Create the _User_ class to define the messages sent to Pulsar topics.
-
-   ```java
-
-   @Builder
-   @AllArgsConstructor
-   @NoArgsConstructor
-   public static class User {
-       String name;
-       int age;
-   }
-
-   ```
-
-2. Create a producer with a `struct` schema and send messages.
-
-   ```java
-
-   Producer<User> producer = client.newProducer(Schema.AVRO(User.class)).create();
-   producer.newMessage().value(User.builder().name("pulsar-user").age(1).build()).send();
-
-   ```
-
-3. Create a consumer with a `struct` schema and receive messages.
-
-   ```java
-
-   Consumer<User> consumer = client.newConsumer(Schema.AVRO(User.class)).subscribe();
-   User user = consumer.receive().getValue();
-
-   ```
-
-
-Sometimes applications do not have pre-defined structs, and you can use this method to define a schema and access data.
-
-You can define the `struct` schema using the `GenericSchemaBuilder`, generate a generic struct using `GenericRecordBuilder` and consume messages into `GenericRecord`.
-
-**Example**
-
-1. Use `RecordSchemaBuilder` to build a schema.
-
-   ```java
-
-   RecordSchemaBuilder recordSchemaBuilder = SchemaBuilder.record("schemaName");
-   recordSchemaBuilder.field("intField").type(SchemaType.INT32);
-   SchemaInfo schemaInfo = recordSchemaBuilder.build(SchemaType.AVRO);
-
-   GenericSchema<GenericRecord> schema = Schema.generic(schemaInfo);
-   Producer<GenericRecord> producer = client.newProducer(schema).create();
-
-   ```
-
-2. Use `RecordBuilder` to build the struct records.
-
-   ```java
-
-   producer.newMessage().value(schema.newRecordBuilder()
-       .set("intField", 32)
-       .build()).send();
-
-   ```
-
-
-You can define the `schemaDefinition` to generate a `struct` schema.
-
-**Example**
-
-1. Create the _User_ class to define the messages sent to Pulsar topics.
-
-   ```java
-
-   @Builder
-   @AllArgsConstructor
-   @NoArgsConstructor
-   public static class User {
-       String name;
-       int age;
-   }
-
-   ```
-
-2. Create a producer with a `SchemaDefinition` and send messages.
-
-   ```java
-
-   SchemaDefinition<User> schemaDefinition = SchemaDefinition.<User>builder().withPojo(User.class).build();
-   Producer<User> producer = client.newProducer(Schema.AVRO(schemaDefinition)).create();
-   producer.newMessage().value(User.builder().name("pulsar-user").age(1).build()).send();
-
-   ```
-
-3. Create a consumer with a `SchemaDefinition` schema and receive messages.
-
-   ```java
-
-   SchemaDefinition<User> schemaDefinition = SchemaDefinition.<User>builder().withPojo(User.class).build();
-   Consumer<User> consumer = client.newConsumer(Schema.AVRO(schemaDefinition)).subscribe();
-   User user = consumer.receive().getValue();
-
-   ```
-
-````
-
-### Auto Schema
-
-If you don't know the schema type of a Pulsar topic in advance, you can use AUTO schema to produce or consume generic records to or from brokers.
-
-| Auto Schema Type | Description |
-|---|---|
-| `AUTO_PRODUCE` | This is useful for transferring data **from a producer to a Pulsar topic that has a schema**. |
-| `AUTO_CONSUME` | This is useful for transferring data **from a Pulsar topic that has a schema to a consumer**. |
-
-#### AUTO_PRODUCE
-
-`AUTO_PRODUCE` schema helps a producer validate whether the bytes sent by the producer are compatible with the schema of a topic.
-
-**Example**
-
-Suppose that:
-
-* You have a producer processing messages from a Kafka topic _K_.
-
-* You have a Pulsar topic _P_, and you do not know its schema type.
-
-* Your application reads the messages from _K_ and writes the messages to _P_.
-
-In this case, you can use `AUTO_PRODUCE` to verify whether the bytes produced by _K_ can be sent to _P_ or not.
-
-```java
-
-Producer<byte[]> pulsarProducer = client.newProducer(Schema.AUTO_PRODUCE_BYTES())
-    …
-    .create();
-
-byte[] kafkaMessageBytes = … ;
-
-pulsarProducer.send(kafkaMessageBytes);
-
-```
-
-#### AUTO_CONSUME
-
-`AUTO_CONSUME` schema helps a Pulsar topic validate whether the bytes sent by a Pulsar topic are compatible with a consumer, that is, the Pulsar topic deserializes messages into language-specific objects using the `SchemaInfo` retrieved from the broker side.
-
-Currently, `AUTO_CONSUME` supports AVRO, JSON and ProtobufNativeSchema schemas. It deserializes messages into `GenericRecord`.
-
-**Example**
-
-Suppose that:
-
-* You have a Pulsar topic _P_.
-
-* You have a consumer (for example, MySQL) receiving messages from the topic _P_.
-
-* Your application reads the messages from _P_ and writes the messages to MySQL.
-
-In this case, you can use `AUTO_CONSUME` to verify whether the bytes produced by _P_ can be sent to MySQL or not.
-
-```java
-
-Consumer<GenericRecord> pulsarConsumer = client.newConsumer(Schema.AUTO_CONSUME())
-    …
-    .subscribe();
-
-Message<GenericRecord> msg = pulsarConsumer.receive();
-GenericRecord record = msg.getValue();
-
-```
-
-### Native Avro Schema
-
-When migrating or ingesting event or message data from external systems (such as Kafka and Cassandra), the events are often already serialized in Avro format. The applications producing the data typically have validated the data against their schemas (including compatibility checks) and stored them in a database or a dedicated service (such as a schema registry). The schema of each serialized data record is usually retrievable by some metadata attached to that record. In such cases, a Pulsar producer doesn't need to repeat the schema validation step when sending the ingested events to a topic. All it needs to do is to pass each message or event with its schema to Pulsar.
-
-Hence, we provide `Schema.NATIVE_AVRO` to wrap a native Avro schema of type `org.apache.avro.Schema`. The result is a Pulsar schema instance that accepts a serialized Avro payload without validating it against the wrapped Avro schema.
-
-**Example**
-
-```java
-
-org.apache.avro.Schema nativeAvroSchema = … ;
-
-Producer<byte[]> producer = pulsarClient.newProducer().topic("ingress").create();
-
-byte[] content = … ;
-
-producer.newMessage(Schema.NATIVE_AVRO(nativeAvroSchema)).value(content).send();
-
-```
-
-## Schema version
-
-Each `SchemaInfo` stored with a topic has a version. The schema version manages schema changes happening within a topic.
-
-Messages produced with a given `SchemaInfo` are tagged with a schema version, so when a message is consumed by a Pulsar client, the Pulsar client can use the schema version to retrieve the corresponding `SchemaInfo` and then use the `SchemaInfo` to deserialize data.
-
-Schemas are versioned in succession. Schema storage happens in a broker that handles the associated topics so that version assignments can be made. 
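-
-A minimal sketch of reading that version tag on the consumer side (the topic and subscription names here are made up for illustration):
-
-```java
-
-Consumer<String> consumer = client.newConsumer(Schema.STRING)
-        .topic("sensor-data")
-        .subscriptionName("my-subscription")
-        .subscribe();
-
-Message<String> msg = consumer.receive();
-
-// The broker tags each message with the version of the schema it was produced with.
-byte[] schemaVersion = msg.getSchemaVersion();
-
-```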
- -Once a version is assigned/fetched to/for a schema, all subsequent messages produced by that producer are tagged with the appropriate version. - -**Example** - -The following example illustrates how the schema version works. - -Suppose that a Pulsar [Java client](client-libraries-java.md) created using the code below attempts to connect to Pulsar and begins to send messages: - -```java - -PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://localhost:6650") - .build(); - -Producer producer = client.newProducer(JSONSchema.of(SensorReading.class)) - .topic("sensor-data") - .sendTimeout(3, TimeUnit.SECONDS) - .create(); - -``` - -The table below lists the possible scenarios when this connection attempt occurs and what happens in each scenario: - -| Scenario | What happens | -| --- | --- | -|
-| No schema exists for the topic. | (1) The producer is created using the given schema. (2) Since no existing schema is compatible with the `SensorReading` schema, the schema is transmitted to the broker and stored. (3) Any consumer created using the same schema or topic can consume messages from the `sensor-data` topic. |
-| A schema already exists.<br />The producer connects using the same schema that is already stored. | (1) The schema is transmitted to the broker. (2) The broker determines that the schema is compatible. (3) The broker attempts to store the schema in [BookKeeper](concepts-architecture-overview.md#persistent-storage) but then determines that it's already stored, so it is used to tag produced messages. |
-| A schema already exists.<br />The producer connects using a new schema that is compatible. | (1) The schema is transmitted to the broker. (2) The broker determines that the schema is compatible and stores the new schema as the current version (with a new version number). |
-
-## How does schema work
-
-Pulsar schemas are applied and enforced at the **topic** level (schemas cannot be applied at the namespace or tenant level).
-
-Producers and consumers upload schemas to brokers, so Pulsar schemas work on the producer side and the consumer side.
-
-### Producer side
-
-This diagram illustrates how schema works on the producer side.
-
-![Schema works at the producer side](/assets/schema-producer.png)
-
-1. The application uses a schema instance to construct a producer instance.
-
-   The schema instance defines the schema for the data being produced using the producer instance.
-
-   Take AVRO as an example: Pulsar extracts the schema definition from the POJO class and constructs the `SchemaInfo` that the producer needs to pass to a broker when it connects.
-
-2. The producer connects to the broker with the `SchemaInfo` extracted from the passed-in schema instance.
-
-3. The broker looks up the schema in the schema storage to check if it is already a registered schema.
-
-4. If yes, the broker skips the schema validation since it is a known schema, and returns the schema version to the producer.
-
-5. If no, the broker verifies whether a schema can be automatically created in this namespace:
-
-  * If `isAllowAutoUpdateSchema` is set to **true**, then a schema can be created, and the broker validates the schema based on the schema compatibility check strategy defined for the topic.
-
-  * If `isAllowAutoUpdateSchema` is set to **false**, then a schema cannot be created, and the producer is rejected and fails to connect to the broker.
-
-**Tip**:
-
-`isAllowAutoUpdateSchema` can be set via **Pulsar admin API** or **REST API.**
-
-For how to set `isAllowAutoUpdateSchema` via Pulsar admin API, see [Manage AutoUpdate Strategy](schema-manage.md/#manage-autoupdate-strategy).
-
-6. If the schema is allowed to be updated, then the compatibility check strategy is applied.
-
-  * If the schema is compatible, the broker stores it and returns the schema version to the producer.
-
-    All the messages produced by this producer are tagged with the schema version.
-
-  * If the schema is incompatible, the broker rejects it.
-
-### Consumer side
-
-This diagram illustrates how schema works on the consumer side.
-
-![Schema works at the consumer side](/assets/schema-consumer.png)
-
-1. The application uses a schema instance to construct a consumer instance.
-
-   The schema instance defines the schema that the consumer uses for decoding messages received from a broker.
-
-2. The consumer connects to the broker with the `SchemaInfo` extracted from the passed-in schema instance.
-
-3. The broker determines whether the topic has any of the following: a schema, data, or a local consumer and a local producer.
-
-4. If the topic does not have any of them (a schema, data, or a local consumer and a local producer):
-
-  * If `isAllowAutoUpdateSchema` is set to **true**, then the consumer registers a schema and is connected to the broker.
-
-  * If `isAllowAutoUpdateSchema` is set to **false**, then the consumer is rejected and fails to connect to the broker.
-
-5. If the topic has one of them (a schema, data, or a local consumer and a local producer), then the schema compatibility check is performed.
-
-  * If the schema passes the compatibility check, then the consumer is connected to the broker. 
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/security-athenz.md b/site2/website/versioned_docs/version-2.9.3-deprecated/security-athenz.md
deleted file mode 100644
index 8a39fe25316d07..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/security-athenz.md
+++ /dev/null
@@ -1,98 +0,0 @@
---
id: security-athenz
title: Authentication using Athenz
sidebar_label: "Authentication using Athenz"
original_id: security-athenz
---

[Athenz](https://github.com/AthenZ/athenz) is a role-based authentication/authorization system. In Pulsar, you can use Athenz role tokens (also known as *z-tokens*) to establish the identity of the client.

## Athenz authentication settings

A [decentralized Athenz system](https://github.com/AthenZ/athenz/blob/master/docs/decent_authz_flow.md) contains an [authori**Z**ation **M**anagement **S**ystem](https://github.com/AthenZ/athenz/blob/master/docs/setup_zms.md) (ZMS) server and an [authori**Z**ation **T**oken **S**ystem](https://github.com/AthenZ/athenz/blob/master/docs/setup_zts) (ZTS) server.

To begin, you need to set up Athenz service access control. Create domains for the *provider* (which provides some resources to other services with some authentication/authorization policies) and the *tenant* (which is provisioned to access some resources in a provider). In this case, the provider corresponds to the Pulsar service itself and the tenant corresponds to each application using Pulsar (typically, a [tenant](reference-terminology.md#tenant) in Pulsar).

### Create the tenant domain and service

On the [tenant](reference-terminology.md#tenant) side, do the following:

1. Create a domain, such as `shopping`
2. Generate a private/public key pair
3. Create a service, such as `some_app`, on the domain with the public key

Note that you need to specify the private key generated in step 2 when the Pulsar client connects to the [broker](reference-terminology.md#broker) (see client configuration examples for [Java](client-libraries-java.md#tls-authentication) and [C++](client-libraries-cpp.md#tls-authentication)).

For more specific steps involving the Athenz UI, refer to [Example Service Access Control Setup](https://github.com/AthenZ/athenz/blob/master/docs/example_service_athenz_setup.md#client-tenant-domain).

### Create the provider domain and add the tenant service to some role members

On the provider side, do the following:

1. Create a domain, such as `pulsar`
2. Create a role
3. Add the tenant service to members of the role

Note that you can specify any action and resource in step 2 since they are not used by Pulsar. In other words, Pulsar uses the Athenz role token only for authentication, *not* for authorization.

For more specific steps involving the UI, refer to [Example Service Access Control Setup](https://github.com/AthenZ/athenz/blob/master/docs/example_service_athenz_setup.md#server-provider-domain).
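
To tie the tenant-side setup together, here is a hedged Java sketch of a client that authenticates with the tenant service created above. The parameter names (`tenantDomain`, `tenantService`, `providerDomain`, `privateKey`, `keyId`) mirror the broker configuration shown later in this page; the service URL and key path are placeholder assumptions:

```java

import java.util.HashMap;
import java.util.Map;

import org.apache.pulsar.client.api.Authentication;
import org.apache.pulsar.client.api.AuthenticationFactory;
import org.apache.pulsar.client.api.PulsarClient;

Map<String, String> authParams = new HashMap<>();
authParams.put("tenantDomain", "shopping");                  // tenant domain created above
authParams.put("tenantService", "some_app");                 // tenant service created above
authParams.put("providerDomain", "pulsar");                  // provider domain created above
authParams.put("privateKey", "file:///path/to/private.pem"); // private key from step 2
authParams.put("keyId", "v1");

Authentication athenzAuth = AuthenticationFactory.create(
        "org.apache.pulsar.client.impl.auth.AuthenticationAthenz", authParams);

// TLS transport is recommended so role tokens cannot be intercepted and reused.
PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar+ssl://broker.example.com:6651")
        .authentication(athenzAuth)
        .build();

```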
- -## Configure the broker for Athenz - -> ### TLS encryption -> -> Note that when you are using Athenz as an authentication provider, you had better use TLS encryption -> as it can protect role tokens from being intercepted and reused. (for more details involving TLS encryption see [Architecture - Data Model](https://github.com/AthenZ/athenz/blob/master/docs/data_model)). - -In the `conf/broker.conf` configuration file in your Pulsar installation, you need to provide the class name of the Athenz authentication provider as well as a comma-separated list of provider domain names. - -```properties - -# Add the Athenz auth provider -authenticationEnabled=true -authorizationEnabled=true -authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderAthenz -athenzDomainNames=pulsar - -# Enable TLS -tlsEnabled=true -tlsCertificateFilePath=/path/to/broker-cert.pem -tlsKeyFilePath=/path/to/broker-key.pem - -# Authentication settings of the broker itself. Used when the broker connects to other brokers, either in same or other clusters -brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationAthenz -brokerClientAuthenticationParameters={"tenantDomain":"shopping","tenantService":"some_app","providerDomain":"pulsar","privateKey":"file:///path/to/private.pem","keyId":"v1"} - -``` - -> A full listing of parameters is available in the `conf/broker.conf` file, you can also find the default -> values for those parameters in [Broker Configuration](reference-configuration.md#broker). - -## Configure clients for Athenz - -For more information on Pulsar client authentication using Athenz, see the following language-specific docs: - -* [Java client](client-libraries-java.md#athenz) - -## Configure CLI tools for Athenz - -[Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-pulsar-admin.md), [`pulsar-perf`](reference-cli-tools.md#pulsar-perf), and [`pulsar-client`](reference-cli-tools.md#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation. - -You need to add the following authentication parameters to the `conf/client.conf` config file to use Athenz with CLI tools of Pulsar: - -```properties - -# URL for the broker -serviceUrl=https://broker.example.com:8443/ - -# Set Athenz auth plugin and its parameters -authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationAthenz -authParams={"tenantDomain":"shopping","tenantService":"some_app","providerDomain":"pulsar","privateKey":"file:///path/to/private.pem","keyId":"v1"} - -# Enable TLS -useTls=true -tlsAllowInsecureConnection=false -tlsTrustCertsFilePath=/path/to/cacert.pem - -``` - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/security-authorization.md b/site2/website/versioned_docs/version-2.9.3-deprecated/security-authorization.md deleted file mode 100644 index 9cfd7c8c203f63..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/security-authorization.md +++ /dev/null @@ -1,130 +0,0 @@ ---- -id: security-authorization -title: Authentication and authorization in Pulsar -sidebar_label: "Authorization and ACLs" -original_id: security-authorization ---- - - -In Pulsar, the [authentication provider](security-overview.md#authentication-providers) is responsible for properly identifying clients and associating the clients with [role tokens](security-overview.md#role-tokens). If you only enable authentication, an authenticated role token has the ability to access all resources in the cluster. 
*Authorization* is the process that determines *what* clients are able to do.

The role tokens with the most privileges are the *superusers*. The *superusers* can create and destroy tenants, and have full access to all tenant resources.

When a superuser creates a [tenant](reference-terminology.md#tenant), that tenant is assigned an admin role. A client with the admin role token can then create, modify and destroy namespaces, and grant and revoke permissions to *other role tokens* on those namespaces.

## Broker and Proxy Setup

### Enable authorization and assign superusers
You can enable authorization and assign the superusers in the broker configuration file ([`conf/broker.conf`](reference-configuration.md#broker)).

```properties

authorizationEnabled=true
superUserRoles=my-super-user-1,my-super-user-2

```

> A full list of parameters is available in the `conf/broker.conf` file.
> You can also find the default values for those parameters in [Broker Configuration](reference-configuration.md#broker).

Typically, you use superuser roles for administrators and clients, as well as for broker-to-broker authorization. When you use [geo-replication](concepts-replication.md), every broker needs to be able to publish to the topics of all the other clusters.

You can also enable authorization for the proxy in the proxy configuration file (`conf/proxy.conf`). Once you enable authorization on the proxy, the proxy does an additional authorization check before forwarding the request to a broker. If you enable authorization on the broker, the broker checks the authorization of the request when the broker receives the forwarded request.

### Proxy Roles

By default, the broker treats the connection between a proxy and the broker as a normal user connection. The broker authenticates the user as the role configured in `proxy.conf` (see ["Enable TLS Authentication on Proxies"](security-tls-authentication.md#enable-tls-authentication-on-proxies)). However, when the user connects to the cluster through a proxy, the user rarely requires authentication. The user expects to be able to interact with the cluster as the role for which they have authenticated with the proxy.

Pulsar uses *proxy roles* to make this possible. Proxy roles are specified in the broker configuration file, [`conf/broker.conf`](reference-configuration.md#broker). If a client that is authenticated with a broker is one of its `proxyRoles`, all requests from that client must also carry information about the role of the client that is authenticated with the proxy. This information is called the *original principal*. If the *original principal* is absent, the client is not able to access anything.

You must authorize both the *proxy role* and the *original principal* to access a resource to ensure that the resource is accessible via the proxy. Administrators can take two approaches to authorize the *proxy role* and the *original principal*.

The more secure approach is to grant access to the proxy roles each time you grant access to a resource. For example, if you have a proxy role named `proxy1`, when the superuser creates a tenant, you should specify `proxy1` as one of the admin roles. When a role is granted permissions to produce or consume from a namespace, if that client wants to produce or consume through a proxy, you should also grant `proxy1` the same permissions, as shown in the sketch after this section.

Another approach is to make the proxy role a superuser. This allows the proxy to access all resources. The client still needs to authenticate with the proxy, and all requests made through the proxy have their role downgraded to the *original principal* of the authenticated client. However, if the proxy is compromised, a bad actor could get full access to your cluster.

You can specify the roles as proxy roles in [`conf/broker.conf`](reference-configuration.md#broker).

```properties

proxyRoles=my-proxy-role

# if you want to allow superusers to use the proxy (see above)
superUserRoles=my-super-user-1,my-super-user-2,my-proxy-role

```
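
As a sketch of the more secure approach, the following grants the same permissions to a hypothetical user role `my-user-role` and the proxy role `proxy1`, using the `pulsar-admin namespaces grant-permission` command that appears elsewhere in these docs; the tenant, namespace, and role names are placeholder assumptions:

```shell

# Grant produce/consume on the namespace to the end user's role
$ bin/pulsar-admin namespaces grant-permission my-tenant/my-namespace \
  --role my-user-role \
  --actions produce,consume

# Grant the same permissions to the proxy role, so requests made
# through the proxy pass both authorization checks
$ bin/pulsar-admin namespaces grant-permission my-tenant/my-namespace \
  --role proxy1 \
  --actions produce,consume

```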
## Administer tenants

A Pulsar [tenant](reference-terminology.md#tenant) is typically provisioned by Pulsar [instance](reference-terminology.md#instance) administrators or by some kind of self-service portal.

You can manage tenants using the [`pulsar-admin`](reference-pulsar-admin.md) tool.

### Create a new tenant

The following is an example tenant creation command:

```shell

$ bin/pulsar-admin tenants create my-tenant \
  --admin-roles my-admin-role \
  --allowed-clusters us-west,us-east

```

This command creates a new tenant `my-tenant` that is allowed to use the clusters `us-west` and `us-east`.

A client that successfully identifies itself as having the role `my-admin-role` is allowed to perform all administrative tasks on this tenant.

The structure of topic names in Pulsar reflects the hierarchy between tenants, clusters, and namespaces:

```shell

persistent://tenant/namespace/topic

```

### Manage permissions

You can use [Pulsar Admin Tools](admin-api-permissions.md) for managing permissions in Pulsar.

### Pulsar admin authentication

```java

PulsarAdmin admin = PulsarAdmin.builder()
                    .serviceHttpUrl("http://broker:8080")
                    .authentication("com.org.MyAuthPluginClass", "param1:value1")
                    .build();

```

To use TLS:

```java

PulsarAdmin admin = PulsarAdmin.builder()
                    .serviceHttpUrl("https://broker:8080")
                    .authentication("com.org.MyAuthPluginClass", "param1:value1")
                    .tlsTrustCertsFilePath("/path/to/trust/cert")
                    .build();

```

## Authorize an authenticated client with multiple roles

When a client is identified with multiple roles in a token (that is, the role claim in the token is an array) during the authentication process, Pulsar checks the permissions of all the roles and authorizes the client as long as one of its roles has the required permissions.

> **Note**
    -> This authorization method is only compatible with [JWT authentication](security-jwt.md). - -To enable this authorization method, configure the authorization provider as `MultiRolesTokenAuthorizationProvider` in the `conf/broker.conf` file. - - ```properties - - # Authorization provider fully qualified class-name - authorizationProvider=org.apache.pulsar.broker.authorization.MultiRolesTokenAuthorizationProvider - - ``` - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/security-basic-auth.md b/site2/website/versioned_docs/version-2.9.3-deprecated/security-basic-auth.md deleted file mode 100644 index 2585526bb478af..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/security-basic-auth.md +++ /dev/null @@ -1,127 +0,0 @@ ---- -id: security-basic-auth -title: Authentication using HTTP basic -sidebar_label: "Authentication using HTTP basic" ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - -[Basic authentication](https://en.wikipedia.org/wiki/Basic_access_authentication) is a simple authentication scheme built into the HTTP protocol, which uses base64-encoded username and password pairs as credentials. - -## Prerequisites - -Install [`htpasswd`](https://httpd.apache.org/docs/2.4/programs/htpasswd.html) in your environment to create a password file for storing username-password pairs. - -* For Ubuntu/Debian, run the following command to install `htpasswd`. - - ``` - apt install apache2-utils - ``` - -* For CentOS/RHEL, run the following command to install `htpasswd`. - - ``` - yum install httpd-tools - ``` - -## Create your authentication file - -:::note -Currently, you can use MD5 (recommended) and CRYPT encryption to authenticate your password. -::: - -Create a password file named `.htpasswd` with a user account `superuser/admin`: -* Use MD5 encryption (recommended): - - ``` - htpasswd -cmb /path/to/.htpasswd superuser admin - ``` - -* Use CRYPT encryption: - - ``` - htpasswd -cdb /path/to/.htpasswd superuser admin - ``` - -You can preview the content of your password file by running the following command: - -``` -cat path/to/.htpasswd -superuser:$apr1$GBIYZYFZ$MzLcPrvoUky16mLcK6UtX/ -``` - -## Enable basic authentication on brokers - -To configure brokers to authenticate clients, complete the following steps. - -1. Add the following parameters to the `conf/broker.conf` file. If you use a standalone Pulsar, you need to add these parameters to the `conf/standalone.conf` file. - - ``` - # Configuration to enable Basic authentication - authenticationEnabled=true - authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderBasic - - # Authentication settings of the broker itself. Used when the broker connects to other brokers, either in same or other clusters - brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationBasic - brokerClientAuthenticationParameters={"userId":"superuser","password":"admin"} - - # If this flag is set then the broker authenticates the original Auth data - # else it just accepts the originalPrincipal and authorizes it (if required). - authenticateOriginalAuthData=true - ``` - -2. Set an environment variable named `PULSAR_EXTRA_OPTS` and the value is `-Dpulsar.auth.basic.conf=/path/to/.htpasswd`. Pulsar reads this environment variable to implement HTTP basic authentication. - -## Enable basic authentication on proxies - -To configure proxies to authenticate clients, complete the following steps. - -1. 
Add the following parameters to the `conf/proxy.conf` file: - - ``` - # For clients connecting to the proxy - authenticationEnabled=true - authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderBasic - - # For the proxy to connect to brokers - brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationBasic - brokerClientAuthenticationParameters={"userId":"superuser","password":"admin"} - - # Whether client authorization credentials are forwarded to the broker for re-authorization. - # Authentication must be enabled via authenticationEnabled=true for this to take effect. - forwardAuthorizationCredentials=true - ``` - -2. Set an environment variable named `PULSAR_EXTRA_OPTS` and the value is `-Dpulsar.auth.basic.conf=/path/to/.htpasswd`. Pulsar reads this environment variable to implement HTTP basic authentication. - -## Configure basic authentication in CLI tools - -[Command-line tools](/docs/next/reference-cli-tools), such as [Pulsar-admin](/tools/pulsar-admin/), [Pulsar-perf](/tools/pulsar-perf/) and [Pulsar-client](/tools/pulsar-client/), use the `conf/client.conf` file in your Pulsar installation. To configure basic authentication in Pulsar CLI tools, you need to add the following parameters to the `conf/client.conf` file. - -``` -authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationBasic -authParams={"userId":"superuser","password":"admin"} -``` - - -## Configure basic authentication in Pulsar clients - -The following example shows how to configure basic authentication when using Pulsar clients. - - - - - ```java - AuthenticationBasic auth = new AuthenticationBasic(); - auth.configure("{\"userId\":\"superuser\",\"password\":\"admin\"}"); - PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar://broker.example.com:6650") - .authentication(auth) - .build(); - ``` - - - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/security-bouncy-castle.md b/site2/website/versioned_docs/version-2.9.3-deprecated/security-bouncy-castle.md deleted file mode 100644 index be937055d8e311..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/security-bouncy-castle.md +++ /dev/null @@ -1,157 +0,0 @@ ---- -id: security-bouncy-castle -title: Bouncy Castle Providers -sidebar_label: "Bouncy Castle Providers" -original_id: security-bouncy-castle ---- - -## BouncyCastle Introduce - -`Bouncy Castle` is a Java library that complements the default Java Cryptographic Extension (JCE), -and it provides more cipher suites and algorithms than the default JCE provided by Sun. - -In addition to that, `Bouncy Castle` has lots of utilities for reading arcane formats like PEM and ASN.1 that no sane person would want to rewrite themselves. - -In Pulsar, security and crypto have dependencies on BouncyCastle Jars. For the detailed installing and configuring Bouncy Castle FIPS, see [BC FIPS Documentation](https://www.bouncycastle.org/documentation.html), especially the **User Guides** and **Security Policy** PDFs. - -`Bouncy Castle` provides both [FIPS](https://www.bouncycastle.org/fips_faq.html) and non-FIPS version. But in a JVM, you can not include both of the 2 versions, and you need to exclude the current version before include the other. - -In Pulsar, the security and crypto methods also depends on `Bouncy Castle`, especially in [TLS Authentication](security-tls-authentication.md) and [Transport Encryption](security-encryption.md). 
This document describes how to configure the BouncyCastle FIPS (BC-FIPS) and non-FIPS (BC-non-FIPS) versions while using Pulsar.

## How BouncyCastle modules are packaged in Pulsar

In Pulsar's `bouncy-castle` module, we provide two sub-modules, `bouncy-castle-bc` (for the non-FIPS version) and `bouncy-castle-bcfips` (for the FIPS version), which package the BC jars together to make including and excluding `Bouncy Castle` easier.

To achieve this goal, we need to package several `bouncy-castle` jars together into the `bouncy-castle-bc` or `bouncy-castle-bcfips` jar.
Each of the original bouncy-castle jars is security related, so BouncyCastle dutifully supplies a signed version of each JAR.
But when we do the re-packaging, Maven shade explodes the BouncyCastle jar file and puts the signatures into META-INF; these signatures aren't valid for the new uber-jar (signatures are only valid for the original BC jar).
You then typically encounter an error like `java.lang.SecurityException: Invalid signature file digest for Manifest main attributes`.

You can exclude these signatures in the Maven POM file to avoid the above error:

```

META-INF/*.SF
META-INF/*.DSA
META-INF/*.RSA

```

But this can also lead to new, cryptic errors, e.g. `java.security.NoSuchAlgorithmException: PBEWithSHA256And256BitAES-CBC-BC SecretKeyFactory not available`.
By explicitly specifying where to find the algorithm, like this: `SecretKeyFactory.getInstance("PBEWithSHA256And256BitAES-CBC-BC","BC")`,
you get the real error: `java.security.NoSuchProviderException: JCE cannot authenticate the provider BC`.

So we used an [executable packer plugin](https://github.com/nthuemmel/executable-packer-maven-plugin) that uses a jar-in-jar approach to preserve the BouncyCastle signatures in a single, executable jar.

### Include dependencies of BC-non-FIPS

The Pulsar module `bouncy-castle-bc`, which is defined by `bouncy-castle/bc/pom.xml`, contains the non-FIPS jars that Pulsar needs, and is packaged as a jar-in-jar (you need to provide the `pkg` classifier). The dependency declarations below are reconstructed from the original content (the XML tags were stripped in this copy):

```xml

<dependency>
  <groupId>org.bouncycastle</groupId>
  <artifactId>bcpkix-jdk15on</artifactId>
  <version>${bouncycastle.version}</version>
</dependency>

<dependency>
  <groupId>org.bouncycastle</groupId>
  <artifactId>bcprov-ext-jdk15on</artifactId>
  <version>${bouncycastle.version}</version>
</dependency>

```

By using this `bouncy-castle-bc` module, you can easily include and exclude the BouncyCastle non-FIPS jars.

### Modules that include the BC-non-FIPS module (`bouncy-castle-bc`)

The Pulsar client needs the bouncy-castle module, so `pulsar-client-original` includes the `bouncy-castle-bc` module, with the `pkg` classifier set to reference the `jar-in-jar` package.
It is included as in the following example:

```xml

<dependency>
  <groupId>org.apache.pulsar</groupId>
  <artifactId>bouncy-castle-bc</artifactId>
  <version>${pulsar.version}</version>
  <classifier>pkg</classifier>
</dependency>

```

By default, `bouncy-castle-bc` is already included in `pulsar-client-original`, and `pulsar-client-original` is in turn included in many other modules such as `pulsar-client-admin` and `pulsar-broker`.
But because of the shaded-jar signature issue described above, we should not package Pulsar's `bouncy-castle` module into `pulsar-client-all` or other shaded modules directly, such as `pulsar-client-shaded`, `pulsar-client-admin-shaded` and `pulsar-broker-shaded`.
So in the shaded modules, we exclude the `bouncy-castle` modules; the shade filter below is reconstructed from the original content:

```xml

<filters>
  <filter>
    <artifact>org.apache.pulsar:pulsar-client-original</artifact>
    <includes>
      <include>**</include>
    </includes>
    <excludes>
      <exclude>org/bouncycastle/**</exclude>
    </excludes>
  </filter>
</filters>

```

That means the `bouncy-castle` related jars are not shaded into these fat jars.

### Module BC-FIPS (`bouncy-castle-bcfips`)

The Pulsar module `bouncy-castle-bcfips`, which is defined by `bouncy-castle/bcfips/pom.xml`, contains the FIPS jars that Pulsar needs.
-Similar to `bouncy-castle-bc`, `bouncy-castle-bcfips` also packaged as a `jar-in-jar` package for easy include/exclude. - -```xml - - - org.bouncycastle - bc-fips - ${bouncycastlefips.version} - - - - org.bouncycastle - bcpkix-fips - ${bouncycastlefips.version} - - -``` - -### Exclude BC-non-FIPS and include BC-FIPS - -If you want to switch from BC-non-FIPS to BC-FIPS version, Here is an example for `pulsar-broker` module: - -```xml - - - org.apache.pulsar - pulsar-broker - ${pulsar.version} - - - org.apache.pulsar - bouncy-castle-bc - - - - - - org.apache.pulsar - bouncy-castle-bcfips - ${pulsar.version} - pkg - - -``` - - -For more example, you can reference module `bcfips-include-test`. - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/security-encryption.md b/site2/website/versioned_docs/version-2.9.3-deprecated/security-encryption.md deleted file mode 100644 index c2f3530d94d9e4..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/security-encryption.md +++ /dev/null @@ -1,200 +0,0 @@ ---- -id: security-encryption -title: Pulsar Encryption -sidebar_label: "End-to-End Encryption" -original_id: security-encryption ---- - -Applications can use Pulsar encryption to encrypt messages on the producer side and decrypt messages on the consumer side. You can use the public and private key pair that the application configures to perform encryption. Only the consumers with a valid key can decrypt the encrypted messages. - -## Asymmetric and symmetric encryption - -Pulsar uses a dynamically generated symmetric AES key to encrypt messages(data). You can use the application-provided ECDSA (Elliptic Curve Digital Signature Algorithm) or RSA (Rivest–Shamir–Adleman) key pair to encrypt the AES key(data key), so you do not have to share the secret with everyone. - -Key is a public and private key pair used for encryption or decryption. The producer key is the public key of the key pair, and the consumer key is the private key of the key pair. - -The application configures the producer with the public key. You can use this key to encrypt the AES data key. The encrypted data key is sent as part of message header. Only entities with the private key (in this case the consumer) are able to decrypt the data key which is used to decrypt the message. - -You can encrypt a message with more than one key. Any one of the keys used for encrypting the message is sufficient to decrypt the message. - -Pulsar does not store the encryption key anywhere in the Pulsar service. If you lose or delete the private key, your message is irretrievably lost, and is unrecoverable. - -## Producer -![alt text](/assets/pulsar-encryption-producer.jpg "Pulsar Encryption Producer") - -## Consumer -![alt text](/assets/pulsar-encryption-consumer.jpg "Pulsar Encryption Consumer") - -## Get started - -1. Create your ECDSA or RSA public and private key pair by using the following commands. - * ECDSA(for Java clients only) - - ```shell - - openssl ecparam -name secp521r1 -genkey -param_enc explicit -out test_ecdsa_privkey.pem - openssl ec -in test_ecdsa_privkey.pem -pubout -outform pem -out test_ecdsa_pubkey.pem - - ``` - - * RSA (for C++, Python and Node.js clients) - - ```shell - - openssl genrsa -out test_rsa_privkey.pem 2048 - openssl rsa -in test_rsa_privkey.pem -pubout -outform pkcs8 -out test_rsa_pubkey.pem - - ``` - -2. Add the public and private key to the key management and configure your producers to retrieve public keys and consumers clients to retrieve private keys. - -3. 
Implement the `CryptoKeyReader` interface, specifically `CryptoKeyReader.getPublicKey()` for producer and `CryptoKeyReader.getPrivateKey()` for consumer, which Pulsar client invokes to load the key. - -4. Add the encryption key name to the producer builder: PulsarClient.newProducer().addEncryptionKey("myapp.key"). - -5. Add CryptoKeyReader implementation to producer or consumer builder: PulsarClient.newProducer().cryptoKeyReader(keyReader) / PulsarClient.newConsumer().cryptoKeyReader(keyReader). - -6. Sample producer application: - -```java - -class RawFileKeyReader implements CryptoKeyReader { - - String publicKeyFile = ""; - String privateKeyFile = ""; - - RawFileKeyReader(String pubKeyFile, String privKeyFile) { - publicKeyFile = pubKeyFile; - privateKeyFile = privKeyFile; - } - - @Override - public EncryptionKeyInfo getPublicKey(String keyName, Map keyMeta) { - EncryptionKeyInfo keyInfo = new EncryptionKeyInfo(); - try { - keyInfo.setKey(Files.readAllBytes(Paths.get(publicKeyFile))); - } catch (IOException e) { - System.out.println("ERROR: Failed to read public key from file " + publicKeyFile); - e.printStackTrace(); - } - return keyInfo; - } - - @Override - public EncryptionKeyInfo getPrivateKey(String keyName, Map keyMeta) { - EncryptionKeyInfo keyInfo = new EncryptionKeyInfo(); - try { - keyInfo.setKey(Files.readAllBytes(Paths.get(privateKeyFile))); - } catch (IOException e) { - System.out.println("ERROR: Failed to read private key from file " + privateKeyFile); - e.printStackTrace(); - } - return keyInfo; - } -} - -PulsarClient pulsarClient = PulsarClient.builder().serviceUrl("pulsar://localhost:6650").build(); - -Producer producer = pulsarClient.newProducer() - .topic("persistent://my-tenant/my-ns/my-topic") - .addEncryptionKey("myappkey") - .cryptoKeyReader(new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem")) - .create(); - -for (int i = 0; i < 10; i++) { - producer.send("my-message".getBytes()); -} - -producer.close(); -pulsarClient.close(); - -``` - -7. 
Sample Consumer Application: - -```java - -class RawFileKeyReader implements CryptoKeyReader { - - String publicKeyFile = ""; - String privateKeyFile = ""; - - RawFileKeyReader(String pubKeyFile, String privKeyFile) { - publicKeyFile = pubKeyFile; - privateKeyFile = privKeyFile; - } - - @Override - public EncryptionKeyInfo getPublicKey(String keyName, Map keyMeta) { - EncryptionKeyInfo keyInfo = new EncryptionKeyInfo(); - try { - keyInfo.setKey(Files.readAllBytes(Paths.get(publicKeyFile))); - } catch (IOException e) { - System.out.println("ERROR: Failed to read public key from file " + publicKeyFile); - e.printStackTrace(); - } - return keyInfo; - } - - @Override - public EncryptionKeyInfo getPrivateKey(String keyName, Map keyMeta) { - EncryptionKeyInfo keyInfo = new EncryptionKeyInfo(); - try { - keyInfo.setKey(Files.readAllBytes(Paths.get(privateKeyFile))); - } catch (IOException e) { - System.out.println("ERROR: Failed to read private key from file " + privateKeyFile); - e.printStackTrace(); - } - return keyInfo; - } -} - -PulsarClient pulsarClient = PulsarClient.builder().serviceUrl("pulsar://localhost:6650").build(); -Consumer consumer = pulsarClient.newConsumer() - .topic("persistent://my-tenant/my-ns/my-topic") - .subscriptionName("my-subscriber-name") - .cryptoKeyReader(new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem")) - .subscribe(); -Message msg = null; - -for (int i = 0; i < 10; i++) { - msg = consumer.receive(); - // do something - System.out.println("Received: " + new String(msg.getData())); -} - -// Acknowledge the consumption of all messages at once -consumer.acknowledgeCumulative(msg); -consumer.close(); -pulsarClient.close(); - -``` - -## Key rotation -Pulsar generates a new AES data key every 4 hours or after publishing a certain number of messages. A producer fetches the asymmetric public key every 4 hours by calling CryptoKeyReader.getPublicKey() to retrieve the latest version. - -## Enable encryption at the producer application -If you produce messages that are consumed across application boundaries, you need to ensure that consumers in other applications have access to one of the private keys that can decrypt the messages. You can do this in two ways: -1. The consumer application provides you access to their public key, which you add to your producer keys. -2. You grant access to one of the private keys from the pairs that producer uses. - -When producers want to encrypt the messages with multiple keys, producers add all such keys to the config. Consumer can decrypt the message as long as the consumer has access to at least one of the keys. - -If you need to encrypt the messages using 2 keys (`myapp.messagekey1` and `myapp.messagekey2`), refer to the following example. - -```java - -PulsarClient.newProducer().addEncryptionKey("myapp.messagekey1").addEncryptionKey("myapp.messagekey2"); - -``` - -## Decrypt encrypted messages at the consumer application -Consumers require to access one of the private keys to decrypt messages that the producer produces. If you want to receive encrypted messages, create a public or private key and give your public key to the producer application to encrypt messages using your public key. - -## Handle failures -* Producer/Consumer loses access to the key - * Producer action fails to indicate the cause of the failure. Application has the option to proceed with sending unencrypted messages in such cases. Call `PulsarClient.newProducer().cryptoFailureAction(ProducerCryptoFailureAction)` to control the producer behavior. 
The default behavior is to fail the request.
  * If consumption fails due to a decryption failure or missing keys on the consumer, the application has the option to consume the encrypted message or discard it. Call `PulsarClient.newConsumer().cryptoFailureAction(ConsumerCryptoFailureAction)` to control the consumer behavior. The default behavior is to fail the request. The application is never able to decrypt the messages if the private key is permanently lost.
* Batch messaging
  * If decryption fails and the message contains batch messages, the client is not able to retrieve the individual messages in the batch, so message consumption fails even if `cryptoFailureAction()` is set to `ConsumerCryptoFailureAction.CONSUME`.
* If decryption fails, message consumption stops and the application notices backlog growth, in addition to decryption failure messages in the client log. If the application does not have access to the private key to decrypt the message, the only option is to skip or discard the backlogged messages.
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/security-extending.md b/site2/website/versioned_docs/version-2.9.3-deprecated/security-extending.md
deleted file mode 100644
index e7484453b8beb8..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/security-extending.md
+++ /dev/null
@@ -1,207 +0,0 @@
---
id: security-extending
title: Extending Authentication and Authorization in Pulsar
sidebar_label: "Extending"
original_id: security-extending
---

Pulsar provides a way to use custom authentication and authorization mechanisms.

## Authentication

Pulsar supports mutual TLS and Athenz authentication plugins. For how to use these authentication plugins, refer to the description in [Security](security-overview.md).

You can use a custom authentication mechanism by providing the implementation in the form of two plugins: one for the client library, and the other for the Pulsar proxy and/or Pulsar broker to validate the credentials.

### Client authentication plugin

For the client library, you need to implement `org.apache.pulsar.client.api.Authentication`. You can pass this class when you create a Pulsar client, as shown below:

```java

PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar://localhost:6650")
    .authentication(new MyAuthentication())
    .build();

```

You can implement two interfaces on the client side:
 * `Authentication` -> http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/Authentication.html
 * `AuthenticationDataProvider` -> http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/AuthenticationDataProvider.html

The `Authentication` implementation in turn needs to provide the client credentials in the form of `org.apache.pulsar.client.api.AuthenticationDataProvider`. This makes it possible to return different kinds of authentication tokens for different types of connections, or to pass a certificate chain to use for TLS.

You can find examples of client authentication providers at:

 * Mutual TLS Auth -- https://github.com/apache/pulsar/tree/master/pulsar-client/src/main/java/org/apache/pulsar/client/impl/auth
 * Athenz -- https://github.com/apache/pulsar/tree/master/pulsar-client-auth-athenz/src/main/java/org/apache/pulsar/client/impl/auth

### Proxy/Broker authentication plugin

On the proxy/broker side, you need to configure the corresponding plugin to validate the credentials that the client sends.
The Proxy and Broker can support multiple authentication providers at the same time. - -In `conf/broker.conf` you can choose to specify a list of valid providers: - -```properties - -# Authentication provider name list, which is comma separated list of class names -authenticationProviders= - -``` - -To implement `org.apache.pulsar.broker.authentication.AuthenticationProvider` on one single interface: - -```java - -/** - * Provider of authentication mechanism - */ -public interface AuthenticationProvider extends Closeable { - - /** - * Perform initialization for the authentication provider - * - * @param config - * broker config object - * @throws IOException - * if the initialization fails - */ - void initialize(ServiceConfiguration config) throws IOException; - - /** - * @return the authentication method name supported by this provider - */ - String getAuthMethodName(); - - /** - * Validate the authentication for the given credentials with the specified authentication data - * - * @param authData - * provider specific authentication data - * @return the "role" string for the authenticated connection, if the authentication was successful - * @throws AuthenticationException - * if the credentials are not valid - */ - String authenticate(AuthenticationDataSource authData) throws AuthenticationException; - -} - -``` - -The following is the example for Broker authentication plugins: - - * Mutual TLS -- https://github.com/apache/pulsar/blob/master/pulsar-broker-common/src/main/java/org/apache/pulsar/broker/authentication/AuthenticationProviderTls.java - * Athenz -- https://github.com/apache/pulsar/blob/master/pulsar-broker-auth-athenz/src/main/java/org/apache/pulsar/broker/authentication/AuthenticationProviderAthenz.java - -## Authorization - -Authorization is the operation that checks whether a particular "role" or "principal" has permission to perform a certain operation. - -By default, you can use the embedded authorization provider provided by Pulsar. You can also configure a different authorization provider through a plugin. -Note that although the Authentication plugin is designed for use in both the Proxy and Broker, -the Authorization plugin is designed only for use on the Broker however the Proxy does perform some simple Authorization checks of Roles if authorization is enabled. - -To provide a custom provider, you need to implement the `org.apache.pulsar.broker.authorization.AuthorizationProvider` interface, put this class in the Pulsar broker classpath and configure the class in `conf/broker.conf`: - - ```properties - - # Authorization provider fully qualified class-name - authorizationProvider=org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider - - ``` - -```java - -/** - * Provider of authorization mechanism - */ -public interface AuthorizationProvider extends Closeable { - - /** - * Perform initialization for the authorization provider - * - * @param conf - * broker config object - * @param configCache - * pulsar zk configuration cache service - * @throws IOException - * if the initialization fails - */ - void initialize(ServiceConfiguration conf, ConfigurationCacheService configCache) throws IOException; - - /** - * Check if the specified role has permission to send messages to the specified fully qualified topic name. - * - * @param topicName - * the fully qualified topic name associated with the topic. - * @param role - * the app id used to send messages to the topic. 
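     * @param authenticationData
     *            authentication data of the client that is requesting to produce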
- */ - CompletableFuture canProduceAsync(TopicName topicName, String role, - AuthenticationDataSource authenticationData); - - /** - * Check if the specified role has permission to receive messages from the specified fully qualified topic name. - * - * @param topicName - * the fully qualified topic name associated with the topic. - * @param role - * the app id used to receive messages from the topic. - * @param subscription - * the subscription name defined by the client - */ - CompletableFuture canConsumeAsync(TopicName topicName, String role, - AuthenticationDataSource authenticationData, String subscription); - - /** - * Check whether the specified role can perform a lookup for the specified topic. - * - * For that the caller needs to have producer or consumer permission. - * - * @param topicName - * @param role - * @return - * @throws Exception - */ - CompletableFuture canLookupAsync(TopicName topicName, String role, - AuthenticationDataSource authenticationData); - - /** - * - * Grant authorization-action permission on a namespace to the given client - * - * @param namespace - * @param actions - * @param role - * @param authDataJson - * additional authdata in json format - * @return CompletableFuture - * @completesWith
    - * IllegalArgumentException when namespace not found
    - * IllegalStateException when failed to grant permission - */ - CompletableFuture grantPermissionAsync(NamespaceName namespace, Set actions, String role, - String authDataJson); - - /** - * Grant authorization-action permission on a topic to the given client - * - * @param topicName - * @param role - * @param authDataJson - * additional authdata in json format - * @return CompletableFuture - * @completesWith
    - * IllegalArgumentException when namespace not found
    - * IllegalStateException when failed to grant permission - */ - CompletableFuture grantPermissionAsync(TopicName topicName, Set actions, String role, - String authDataJson); - -} - -``` - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/security-jwt.md b/site2/website/versioned_docs/version-2.9.3-deprecated/security-jwt.md deleted file mode 100644 index fcee5c0ce21da5..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/security-jwt.md +++ /dev/null @@ -1,327 +0,0 @@ ---- -id: security-jwt -title: Client authentication using tokens based on JSON Web Tokens -sidebar_label: "Authentication using JWT" -original_id: security-jwt ---- - -````mdx-code-block -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -```` - - -## Token authentication overview - -Pulsar supports authenticating clients using security tokens that are based on [JSON Web Tokens](https://jwt.io/introduction/) ([RFC-7519](https://tools.ietf.org/html/rfc7519)). - -You can use tokens to identify a Pulsar client and associate with some "principal" (or "role") that -is permitted to do some actions (eg: publish to a topic or consume from a topic). - -A user typically gets a token string from the administrator (or some automated service). - -The compact representation of a signed JWT is a string that looks like as the following: - -``` - -eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY - -``` - -Application specifies the token when you create the client instance. An alternative is to pass a "token supplier" (a function that returns the token when the client library needs one). - -> #### Always use TLS transport encryption -> Sending a token is equivalent to sending a password over the wire. You had better use TLS encryption all the time when you connect to the Pulsar service. See -> [Transport Encryption using TLS](security-tls-transport.md) for more details. - -### CLI Tools - -[Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-pulsar-admin.md), [`pulsar-perf`](reference-cli-tools.md#pulsar-perf), and [`pulsar-client`](reference-cli-tools.md#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation. - -You need to add the following parameters to that file to use the token authentication with CLI tools of Pulsar: - -```properties - -webServiceUrl=http://broker.example.com:8080/ -brokerServiceUrl=pulsar://broker.example.com:6650/ -authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken -authParams=token:eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY - -``` - -The token string can also be read from a file, for example: - -``` - -authParams=file:///path/to/token/file - -``` - -### Pulsar client - -You can use tokens to authenticate the following Pulsar clients. 
````mdx-code-block
<Tabs>
<TabItem value="Java">

```java

PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar://broker.example.com:6650/")
    .authentication(
        AuthenticationFactory.token("eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY"))
    .build();

```

Similarly, you can also pass a `Supplier<String>`:

```java

PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar://broker.example.com:6650/")
    .authentication(
        AuthenticationFactory.token(() -> {
            // Read token from custom source
            return readToken();
        }))
    .build();

```

</TabItem>
<TabItem value="Python">

```python

from pulsar import Client, AuthenticationToken

client = Client('pulsar://broker.example.com:6650/',
                authentication=AuthenticationToken('eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY'))

```

Alternatively, you can also pass a `Supplier`:

```python

def read_token():
    with open('/path/to/token.txt') as tf:
        return tf.read().strip()

client = Client('pulsar://broker.example.com:6650/',
                authentication=AuthenticationToken(read_token))

```

</TabItem>
<TabItem value="Go">

```go

client, err := NewClient(ClientOptions{
	URL:            "pulsar://localhost:6650",
	Authentication: NewAuthenticationToken("eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY"),
})

```

Similarly, you can also pass a `Supplier`:

```go

client, err := NewClient(ClientOptions{
	URL: "pulsar://localhost:6650",
	Authentication: NewAuthenticationTokenSupplier(func () string {
		// Read token from custom source
		return readToken()
	}),
})

```

</TabItem>
<TabItem value="C++">

```c++

#include <pulsar/Client.h>

pulsar::ClientConfiguration config;
config.setAuth(pulsar::AuthToken::createWithToken("eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY"));

pulsar::Client client("pulsar://broker.example.com:6650/", config);

```

</TabItem>
<TabItem value="C#">

```c#

var client = PulsarClient.Builder()
    .AuthenticateUsingToken("eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY")
    .Build();

```

</TabItem>
</Tabs>
````

## Enable token authentication

The guide below shows how to enable token authentication on a Pulsar cluster.

JWT supports two different kinds of keys to generate and validate the tokens:

 * Symmetric:
   - You can use a single ***secret*** key to generate and validate tokens.
 * Asymmetric: a pair of keys, consisting of a private key and a public key.
   - You can use the ***private*** key to generate tokens.
   - You can use the ***public*** key to validate tokens.

### Create a secret key

When you use a secret key, the administrator creates the key and uses the key to generate the client tokens. You also configure this key on the brokers so that they can validate the clients.

The output file is generated in the root of your Pulsar installation directory. You can also provide an absolute path for the output file using the command below.

```shell

$ bin/pulsar tokens create-secret-key --output my-secret.key

```

Enter this command to generate a base64-encoded secret key.

```shell

$ bin/pulsar tokens create-secret-key --output /opt/my-secret.key --base64

```

### Create a key pair

With the public/private key option, you need to create a pair of keys. Pulsar supports all algorithms that the Java JWT library (shown [here](https://github.com/jwtk/jjwt#signature-algorithms-keys)) supports.

The output file is generated in the root of your Pulsar installation directory.
You can also provide absolute path for the output file using the command below. - -```shell - -$ bin/pulsar tokens create-key-pair --output-private-key my-private.key --output-public-key my-public.key - -``` - - * Store `my-private.key` in a safe location and only administrator can use `my-private.key` to generate new tokens. - * `my-public.key` is distributed to all Pulsar brokers. You can publicly share this file without any security concern. - -### Generate tokens - -A token is the credential associated with a user. The association is done through the "principal" or "role". In the case of JWT tokens, this field is typically referred as **subject**, though they are exactly the same concept. - -Then, you need to use this command to require the generated token to have a **subject** field set. - -```shell - -$ bin/pulsar tokens create --secret-key file:///path/to/my-secret.key \ - --subject test-user - -``` - -This command prints the token string on stdout. - -Similarly, you can create a token by passing the "private" key using the command below: - -```shell - -$ bin/pulsar tokens create --private-key file:///path/to/my-private.key \ - --subject test-user - -``` - -Finally, you can enter the following command to create a token with a pre-defined TTL. And then the token is automatically invalidated. - -```shell - -$ bin/pulsar tokens create --secret-key file:///path/to/my-secret.key \ - --subject test-user \ - --expiry-time 1y - -``` - -### Authorization - -The token itself does not have any permission associated. The authorization engine determines whether the token should have permissions or not. Once you have created the token, you can grant permission for this token to do certain actions. The following is an example. - -```shell - -$ bin/pulsar-admin namespaces grant-permission my-tenant/my-namespace \ - --role test-user \ - --actions produce,consume - -``` - -### Enable token authentication on Brokers - -To configure brokers to authenticate clients, add the following parameters to `broker.conf`: - -```properties - -# Configuration to enable authentication and authorization -authenticationEnabled=true -authorizationEnabled=true -authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken - -# Authentication settings of the broker itself. Used when the broker connects to other brokers, either in same or other clusters -brokerClientTlsEnabled=true -brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken -brokerClientAuthenticationParameters={"token":"eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0LXVzZXIifQ.9OHgE9ZUDeBTZs7nSMEFIuGNEX18FLR3qvy8mqxSxXw"} -# Or, alternatively, read token from file -# brokerClientAuthenticationParameters={"file":"///path/to/proxy-token.txt"} -brokerClientTrustCertsFilePath=/path/my-ca/certs/ca.cert.pem - -# If this flag is set then the broker authenticates the original Auth data -# else it just accepts the originalPrincipal and authorizes it (if required). 
-authenticateOriginalAuthData=true - -# If using secret key (Note: key files must be DER-encoded) -tokenSecretKey=file:///path/to/secret.key -# The key can also be passed inline: -# tokenSecretKey=data:;base64,FLFyW0oLJ2Fi22KKCm21J18mbAdztfSHN/lAT5ucEKU= - -# If using public/private (Note: key files must be DER-encoded) -# tokenPublicKey=file:///path/to/public.key - -``` - -### Enable token authentication on Proxies - -To configure proxies to authenticate clients, add the following parameters to `proxy.conf`: - -The proxy uses its own token when connecting to brokers. You need to configure the role token for this key pair in the `proxyRoles` of the brokers. For more details, see the [authorization guide](security-authorization.md). - -```properties - -# For clients connecting to the proxy -authenticationEnabled=true -authorizationEnabled=true -authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken -tokenSecretKey=file:///path/to/secret.key - -# For the proxy to connect to brokers -brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken -brokerClientAuthenticationParameters={"token":"eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0LXVzZXIifQ.9OHgE9ZUDeBTZs7nSMEFIuGNEX18FLR3qvy8mqxSxXw"} -# Or, alternatively, read token from file -# brokerClientAuthenticationParameters={"file":"///path/to/proxy-token.txt"} - -# Whether client authorization credentials are forwarded to the broker for re-authorization. -# Authentication must be enabled via authenticationEnabled=true for this to take effect. -forwardAuthorizationCredentials=true - -``` - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/security-kerberos.md b/site2/website/versioned_docs/version-2.9.3-deprecated/security-kerberos.md deleted file mode 100644 index c49fa3bea1fce0..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/security-kerberos.md +++ /dev/null @@ -1,443 +0,0 @@ ---- -id: security-kerberos -title: Authentication using Kerberos -sidebar_label: "Authentication using Kerberos" -original_id: security-kerberos ---- - -[Kerberos](https://web.mit.edu/kerberos/) is a network authentication protocol. By using secret-key cryptography, [Kerberos](https://web.mit.edu/kerberos/) is designed to provide strong authentication for client applications and server applications. - -In Pulsar, you can use Kerberos with [SASL](https://en.wikipedia.org/wiki/Simple_Authentication_and_Security_Layer) as a choice for authentication. And Pulsar uses the [Java Authentication and Authorization Service (JAAS)](https://en.wikipedia.org/wiki/Java_Authentication_and_Authorization_Service) for SASL configuration. You need to provide JAAS configurations for Kerberos authentication. - -This document introduces how to configure `Kerberos` with `SASL` between Pulsar clients and brokers and how to configure Kerberos for Pulsar proxy in detail. - -## Configuration for Kerberos between Client and Broker - -### Prerequisites - -To begin, you need to set up (or already have) a [Key Distribution Center(KDC)](https://en.wikipedia.org/wiki/Key_distribution_center). Also you need to configure and run the [Key Distribution Center(KDC)](https://en.wikipedia.org/wiki/Key_distribution_center)in advance. - -If your organization already uses a Kerberos server (for example, by using `Active Directory`), you do not have to install a new server for Pulsar. If your organization does not use a Kerberos server, you need to install one. 
Your Linux vendor might have packages for `Kerberos`. On how to install and configure Kerberos, refer to [Ubuntu](https://help.ubuntu.com/community/Kerberos), -[Redhat](https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Managing_Smart_Cards/installing-kerberos.html). - -Note that if you use Oracle Java, you need to download JCE policy files for your Java version and copy them to the `$JAVA_HOME/jre/lib/security` directory. - -#### Kerberos principals - -If you use the existing Kerberos system, ask your Kerberos administrator for a principal for each Brokers in your cluster and for every operating system user that accesses Pulsar with Kerberos authentication(via clients and tools). - -If you have installed your own Kerberos system, you can create these principals with the following commands: - -```shell - -### add Principals for broker -sudo /usr/sbin/kadmin.local -q 'addprinc -randkey broker/{hostname}@{REALM}' -sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{broker-keytabname}.keytab broker/{hostname}@{REALM}" -### add Principals for client -sudo /usr/sbin/kadmin.local -q 'addprinc -randkey client/{hostname}@{REALM}' -sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{client-keytabname}.keytab client/{hostname}@{REALM}" - -``` - -Note that *Kerberos* requires that all your hosts can be resolved with their FQDNs. - -The first part of Broker principal (for example, `broker` in `broker/{hostname}@{REALM}`) is the `serverType` of each host. The suggested values of `serverType` are `broker` (host machine runs service Pulsar Broker) and `proxy` (host machine runs service Pulsar Proxy). - -#### Configure how to connect to KDC - -You need to enter the command below to specify the path to the `krb5.conf` file for the client side and the broker side. The content of `krb5.conf` file indicates the default Realm and KDC information. See [JDK’s Kerberos Requirements](https://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/KerberosReq.html) for more details. - -```shell - --Djava.security.krb5.conf=/etc/pulsar/krb5.conf - -``` - -Here is an example of the krb5.conf file: - -In the configuration file, `EXAMPLE.COM` is the default realm; `kdc = localhost:62037` is the kdc server url for realm `EXAMPLE.COM `: - -``` - -[libdefaults] - default_realm = EXAMPLE.COM - -[realms] - EXAMPLE.COM = { - kdc = localhost:62037 - } - -``` - -Usually machines configured with kerberos already have a system wide configuration and this configuration is optional. - -#### JAAS configuration file - -You need JAAS configuration file for the client side and the broker side. JAAS configuration file provides the section of information that is used to connect KDC. Here is an example named `pulsar_jaas.conf`: - -``` - - PulsarBroker { - com.sun.security.auth.module.Krb5LoginModule required - useKeyTab=true - storeKey=true - useTicketCache=false - keyTab="/etc/security/keytabs/pulsarbroker.keytab" - principal="broker/localhost@EXAMPLE.COM"; -}; - - PulsarClient { - com.sun.security.auth.module.Krb5LoginModule required - useKeyTab=true - storeKey=true - useTicketCache=false - keyTab="/etc/security/keytabs/pulsarclient.keytab" - principal="client/localhost@EXAMPLE.COM"; -}; - -``` - -You need to set the `JAAS` configuration file path as JVM parameter for client and broker. For example: - -```shell - - -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf - -``` - -In the `pulsar_jaas.conf` file above - -1. 
`PulsarBroker` is a section name in the JAAS file that each broker uses. This section tells the broker which principal to use inside Kerberos and where the keytab storing that principal is located. `PulsarBroker` allows the broker to use the keytab specified in this section.
-2. `PulsarClient` is a section name in the JAAS file that each client uses. This section tells the client which principal to use inside Kerberos and where the keytab storing that principal is located. `PulsarClient` allows the client to use the keytab specified in this section.
-   The following example also reuses this `PulsarClient` section in both the Pulsar internal admin configuration and in the CLI commands `bin/pulsar-client`, `bin/pulsar-perf` and `bin/pulsar-admin`. You can also add different sections for different use cases.
-
-You can have 2 separate JAAS configuration files:
-* the file for a broker that has sections for both `PulsarBroker` and `PulsarClient`;
-* the file for a client that only has a `PulsarClient` section.
-
-
-### Kerberos configuration for Brokers
-
-#### Configure the `broker.conf` file
-
- In the `broker.conf` file, set the Kerberos-related configurations.
-
- - Set `authenticationEnabled` to `true`;
- - Set `authenticationProviders` to choose `AuthenticationProviderSasl`;
- - Set `saslJaasClientAllowedIds` to a regex for the principals that are allowed to connect to the broker;
- - Set `saslJaasBrokerSectionName` to the section in the JAAS configuration file that the broker uses;
-
- To make the Pulsar internal admin client work properly, you need to set the configuration in the `broker.conf` file as below:
- - Set `brokerClientAuthenticationPlugin` to the client plugin `AuthenticationSasl`;
- - Set `brokerClientAuthenticationParameters` to the JSON string `{"saslJaasClientSectionName":"PulsarClient", "serverType":"broker"}`, in which `PulsarClient` is the section name in the `pulsar_jaas.conf` file, and `"serverType":"broker"` indicates that the internal admin client connects to a Pulsar Broker;
-
- Here is an example:
-
-```
-
-authenticationEnabled=true
-authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderSasl
-saslJaasClientAllowedIds=.*client.*
-saslJaasBrokerSectionName=PulsarBroker
-
-## Authentication settings of the broker itself. Used when the broker connects to other brokers
-brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationSasl
-brokerClientAuthenticationParameters={"saslJaasClientSectionName":"PulsarClient", "serverType":"broker"}
-
-```
-
-#### Set Broker JVM parameter
-
- Set the JVM parameters for the JAAS configuration file and the krb5 configuration file with additional options.
-
-```shell
-
-   -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf
-
-```
-
-You can add this at the end of `PULSAR_EXTRA_OPTS` in the file [`pulsar_env.sh`](https://github.com/apache/pulsar/blob/master/conf/pulsar_env.sh).
-
-You must ensure that the operating system user who starts the broker can reach the keytabs configured in the `pulsar_jaas.conf` file and the KDC server configured in the `krb5.conf` file.
-
-### Kerberos configuration for clients
-
-#### Java Client and Java Admin Client
-
-In your client application, include `pulsar-client-auth-sasl` in your project dependencies:
-
-```
-
-    <dependency>
-      <groupId>org.apache.pulsar</groupId>
-      <artifactId>pulsar-client-auth-sasl</artifactId>
-      <version>${pulsar.version}</version>
-    </dependency>
-
-```
-
-Configure the authentication type to use `AuthenticationSasl`, and also provide the authentication parameters to it. 
-
-You need 2 parameters:
-- `saslJaasClientSectionName`. This parameter corresponds to the section in the JAAS configuration file for the client;
-- `serverType`. This parameter states whether this client connects to a broker or a proxy. The client uses this parameter to know which server-side principal should be used.
-
-When you authenticate between client and broker with the setting in the above JAAS configuration file, you need to set `saslJaasClientSectionName` to `PulsarClient` and set `serverType` to `broker`.
-
-The following is an example of creating a Java client:
-
- ```java
-
- System.setProperty("java.security.auth.login.config", "/etc/pulsar/pulsar_jaas.conf");
- System.setProperty("java.security.krb5.conf", "/etc/pulsar/krb5.conf");
-
- Map<String, String> authParams = Maps.newHashMap();
- authParams.put("saslJaasClientSectionName", "PulsarClient");
- authParams.put("serverType", "broker");
-
- Authentication saslAuth = AuthenticationFactory
-         .create(org.apache.pulsar.client.impl.auth.AuthenticationSasl.class.getName(), authParams);
-
- PulsarClient client = PulsarClient.builder()
-         .serviceUrl("pulsar://my-broker.com:6650")
-         .authentication(saslAuth)
-         .build();
-
- ```
-
-> The first two lines in the example above are hard-coded; alternatively, you can set additional JVM parameters for the JAAS and krb5 configuration files when you run the application, like below:
-
-```
-
-java -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf -cp $APP-jar-with-dependencies.jar $CLASSNAME
-
-```
-
-You must ensure that the operating system user who starts the Pulsar client can reach the keytabs configured in the `pulsar_jaas.conf` file and the KDC server configured in the `krb5.conf` file.
-
-#### Configure CLI tools
-
-If you use a command-line tool (such as `bin/pulsar-client`, `bin/pulsar-perf` and `bin/pulsar-admin`), you need to perform the following steps:
-
-Step 1. Add the following configuration to your `client.conf`.
-
-```shell
-
-authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationSasl
-authParams={"saslJaasClientSectionName":"PulsarClient", "serverType":"broker"}
-
-```
-
-Step 2. Set the JVM parameters for the JAAS configuration file and krb5 configuration file with additional options.
-
-```shell
-
- -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf
-
-```
-
-You can add this at the end of `PULSAR_EXTRA_OPTS` in the file [`pulsar_tools_env.sh`](https://github.com/apache/pulsar/blob/master/conf/pulsar_tools_env.sh),
-or add the line `OPTS="$OPTS -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf "` directly to the CLI tool script.
-
-These configurations have the same meaning as the corresponding configurations in the Java client section.
-
-## Kerberos configuration for working with Pulsar Proxy
-
-With the above configuration, the client and broker can authenticate each other using Kerberos.
-
-A client that connects through Pulsar Proxy works a little differently: Pulsar Proxy (as a SASL server in Kerberos) authenticates the client (as a SASL client in Kerberos) first, and then Pulsar broker authenticates Pulsar Proxy.
-
-In comparison with the above configuration between client and broker, the following shows how to configure Pulsar Proxy.
-
-### Create principal for Pulsar Proxy in Kerberos
-
-Compared with the above configuration, you need to add a new principal for Pulsar Proxy. 
If you already have principals for the client and broker, you only need to add the proxy principal here.
-
-```shell
-
-### add Principals for Pulsar Proxy
-sudo /usr/sbin/kadmin.local -q 'addprinc -randkey proxy/{hostname}@{REALM}'
-sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{proxy-keytabname}.keytab proxy/{hostname}@{REALM}"
-### add Principals for broker
-sudo /usr/sbin/kadmin.local -q 'addprinc -randkey broker/{hostname}@{REALM}'
-sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{broker-keytabname}.keytab broker/{hostname}@{REALM}"
-### add Principals for client
-sudo /usr/sbin/kadmin.local -q 'addprinc -randkey client/{hostname}@{REALM}'
-sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{client-keytabname}.keytab client/{hostname}@{REALM}"
-
-```
-
-### Add a section in JAAS configuration file for Pulsar Proxy
-
-In comparison with the above configuration, add a new section for Pulsar Proxy in the JAAS configuration file.
-
-Here is an example named `pulsar_jaas.conf`:
-
-```
-
- PulsarBroker {
-   com.sun.security.auth.module.Krb5LoginModule required
-   useKeyTab=true
-   storeKey=true
-   useTicketCache=false
-   keyTab="/etc/security/keytabs/pulsarbroker.keytab"
-   principal="broker/localhost@EXAMPLE.COM";
-};
-
- PulsarProxy {
-   com.sun.security.auth.module.Krb5LoginModule required
-   useKeyTab=true
-   storeKey=true
-   useTicketCache=false
-   keyTab="/etc/security/keytabs/pulsarproxy.keytab"
-   principal="proxy/localhost@EXAMPLE.COM";
-};
-
- PulsarClient {
-   com.sun.security.auth.module.Krb5LoginModule required
-   useKeyTab=true
-   storeKey=true
-   useTicketCache=false
-   keyTab="/etc/security/keytabs/pulsarclient.keytab"
-   principal="client/localhost@EXAMPLE.COM";
-};
-
-```
-
-### Proxy client configuration
-
-The Pulsar client configuration is similar to the client and broker configuration, except that you need to set `serverType` to `proxy` instead of `broker`, because the Kerberos authentication happens between the client and the proxy.
-
- ```java
-
- System.setProperty("java.security.auth.login.config", "/etc/pulsar/pulsar_jaas.conf");
- System.setProperty("java.security.krb5.conf", "/etc/pulsar/krb5.conf");
-
- Map<String, String> authParams = Maps.newHashMap();
- authParams.put("saslJaasClientSectionName", "PulsarClient");
- authParams.put("serverType", "proxy");        // ** this is the difference **
-
- Authentication saslAuth = AuthenticationFactory
-         .create(org.apache.pulsar.client.impl.auth.AuthenticationSasl.class.getName(), authParams);
-
- PulsarClient client = PulsarClient.builder()
-         .serviceUrl("pulsar://my-broker.com:6650")
-         .authentication(saslAuth)
-         .build();
-
- ```
-
-> The first two lines in the example above are hard-coded; alternatively, you can set additional JVM parameters for the JAAS and krb5 configuration files when you run the application, like below:
-
-```
-
-java -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf -cp $APP-jar-with-dependencies.jar $CLASSNAME
-
-```
-
-### Kerberos configuration for Pulsar proxy service
-
-In the `proxy.conf` file, set the Kerberos-related configuration. Here is an example:
-
-```shell
-
-## related to authenticate client. 
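-## (In this first phase, the proxy acts as the SASL server, which is why
-## saslJaasBrokerSectionName below points at the PulsarProxy JAAS section.)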
-authenticationEnabled=true -authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderSasl -saslJaasClientAllowedIds=.*client.* -saslJaasBrokerSectionName=PulsarProxy - -## related to be authenticated by broker -brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationSasl -brokerClientAuthenticationParameters={"saslJaasClientSectionName":"PulsarProxy", "serverType":"broker"} -forwardAuthorizationCredentials=true - -``` - -The first part relates to authenticating between client and Pulsar Proxy. In this phase, client works as SASL client, while Pulsar Proxy works as SASL server. - -The second part relates to authenticating between Pulsar Proxy and Pulsar Broker. In this phase, Pulsar Proxy works as SASL client, while Pulsar Broker works as SASL server. - -### Broker side configuration. - -The broker side configuration file is the same with the above `broker.conf`, you do not need special configuration for Pulsar Proxy. - -``` - -authenticationEnabled=true -authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderSasl -saslJaasClientAllowedIds=.*client.* -saslJaasBrokerSectionName=PulsarBroker - -``` - -## Regarding authorization and role token - -For Kerberos authentication, we usually use the authenticated principal as the role token for Pulsar authorization. For more information of authorization in Pulsar, see [security authorization](security-authorization.md). - -If you enable 'authorizationEnabled', you need to set `superUserRoles` in `broker.conf` that corresponds to the name registered in kdc. - -For example: - -```bash - -superUserRoles=client/{clientIp}@EXAMPLE.COM - -``` - -## Regarding authentication between ZooKeeper and Broker - -Pulsar Broker acts as a Kerberos client when you authenticate with Zookeeper. According to [ZooKeeper document](https://cwiki.apache.org/confluence/display/ZOOKEEPER/Client-Server+mutual+authentication), you need these settings in `conf/zookeeper.conf`: - -``` - -authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider -requireClientAuthScheme=sasl - -``` - -Enter the following commands to add a section of `Client` configurations in the file `pulsar_jaas.conf`, which Pulsar Broker uses: - -``` - - Client { - com.sun.security.auth.module.Krb5LoginModule required - useKeyTab=true - storeKey=true - useTicketCache=false - keyTab="/etc/security/keytabs/pulsarbroker.keytab" - principal="broker/localhost@EXAMPLE.COM"; -}; - -``` - -In this setting, the principal of Pulsar Broker and keyTab file indicates the role of Broker when you authenticate with ZooKeeper. - -## Regarding authentication between BookKeeper and Broker - -Pulsar Broker acts as a Kerberos client when you authenticate with Bookie. According to [BookKeeper document](http://bookkeeper.apache.org/docs/latest/security/sasl/), you need to add `bookkeeperClientAuthenticationPlugin` parameter in `broker.conf`: - -``` - -bookkeeperClientAuthenticationPlugin=org.apache.bookkeeper.sasl.SASLClientProviderFactory - -``` - -In this setting, `SASLClientProviderFactory` creates a BookKeeper SASL client in a Broker, and the Broker uses the created SASL client to authenticate with a Bookie node. 
- -Enter the following commands to add a section of `BookKeeper` configurations in the `pulsar_jaas.conf` that Pulsar Broker uses: - -``` - - BookKeeper { - com.sun.security.auth.module.Krb5LoginModule required - useKeyTab=true - storeKey=true - useTicketCache=false - keyTab="/etc/security/keytabs/pulsarbroker.keytab" - principal="broker/localhost@EXAMPLE.COM"; -}; - -``` - -In this setting, the principal of Pulsar Broker and keyTab file indicates the role of Broker when you authenticate with Bookie. diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/security-oauth2.md b/site2/website/versioned_docs/version-2.9.3-deprecated/security-oauth2.md deleted file mode 100644 index ecafaf85f6bdfb..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/security-oauth2.md +++ /dev/null @@ -1,232 +0,0 @@ ---- -id: security-oauth2 -title: Client authentication using OAuth 2.0 access tokens -sidebar_label: "Authentication using OAuth 2.0 access tokens" -original_id: security-oauth2 ---- - -Pulsar supports authenticating clients using OAuth 2.0 access tokens. You can use OAuth 2.0 access tokens to identify a Pulsar client and associate the Pulsar client with some "principal" (or "role"), which is permitted to do some actions, such as publishing messages to a topic or consume messages from a topic. - -This module is used to support the Pulsar client authentication plugin for OAuth 2.0. After communicating with the Oauth 2.0 server, the Pulsar client gets an `access token` from the Oauth 2.0 server, and passes this `access token` to the Pulsar broker to do the authentication. The broker can use the `org.apache.pulsar.broker.authentication.AuthenticationProviderToken`. Or, you can add your own `AuthenticationProvider` to make it with this module. - -## Authentication provider configuration - -This library allows you to authenticate the Pulsar client by using an access token that is obtained from an OAuth 2.0 authorization service, which acts as a _token issuer_. - -### Authentication types - -The authentication type determines how to obtain an access token through an OAuth 2.0 authorization flow. - -:::note - -Currently, the Pulsar Java client only supports the `client_credentials` authentication type. - -::: - -#### Client credentials - -The following table lists parameters supported for the `client credentials` authentication type. - -| Parameter | Description | Example | Required or not | -| --- | --- | --- | --- | -| `type` | Oauth 2.0 authentication type. | `client_credentials` (default) | Optional | -| `issuerUrl` | URL of the authentication provider which allows the Pulsar client to obtain an access token | `https://accounts.google.com` | Required | -| `privateKey` | URL to a JSON credentials file | Support the following pattern formats:
  1. `file:///path/to/file`
  2. `file:/path/to/file`
  3. `data:application/json;base64,<base64-encoded value>`
  | Required |
-| `audience` | An OAuth 2.0 "resource server" identifier for the Pulsar cluster | `https://broker.example.com` | Optional |
-
-The credentials file contains service account credentials used with the client authentication type. The following shows an example of a credentials file `credentials_file.json`.
-
-```json
-
-{
-  "type": "client_credentials",
-  "client_id": "d9ZyX97q1ef8Cr81WHVC4hFQ64vSlDK3",
-  "client_secret": "on1uJ...k6F6R",
-  "client_email": "1234567890-abcdefghijklmnopqrstuvwxyz@developer.gserviceaccount.com",
-  "issuer_url": "https://accounts.google.com"
-}
-
-```
-
-In the above example, the authentication type is set to `client_credentials` by default, and the fields `client_id` and `client_secret` are required.
-
-### Typical original OAuth2 request mapping
-
-The following shows a typical original OAuth2 request, which is used to obtain the access token from the OAuth2 server.
-
-```bash
-
-curl --request POST \
-  --url https://dev-kt-aa9ne.us.auth0.com/oauth/token \
-  --header 'content-type: application/json' \
-  --data '{
-  "client_id":"Xd23RHsUnvUlP7wchjNYOaIfazgeHd9x",
-  "client_secret":"rT7ps7WY8uhdVuBTKWZkttwLdQotmdEliaM5rLfmgNibvqziZ-g07ZH52N_poGAb",
-  "audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/",
-  "grant_type":"client_credentials"}'
-
-```
-
-In the above example, the mapping relationship is shown as below.
-
-- The `issuerUrl` parameter in this plugin is mapped to `--url https://dev-kt-aa9ne.us.auth0.com`.
-- The `privateKey` file parameter in this plugin should contain at least the `client_id` and `client_secret` fields.
-- The `audience` parameter in this plugin is mapped to `"audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/"`. This field is only used by some identity providers.
-
-## Client Configuration
-
-You can use the OAuth2 authentication provider with the following Pulsar clients.
-
-### Java
-
-You can use the factory method to configure authentication for the Pulsar Java client.
-
-```java
-
-import org.apache.pulsar.client.impl.auth.oauth2.AuthenticationFactoryOAuth2;
-
-URL issuerUrl = new URL("https://dev-kt-aa9ne.us.auth0.com");
-URL credentialsUrl = new URL("file:///path/to/KeyFile.json");
-String audience = "https://dev-kt-aa9ne.us.auth0.com/api/v2/";
-
-PulsarClient client = PulsarClient.builder()
-    .serviceUrl("pulsar://broker.example.com:6650/")
-    .authentication(
-        AuthenticationFactoryOAuth2.clientCredentials(issuerUrl, credentialsUrl, audience))
-    .build();
-
-```
-
-In addition, you can also use the encoded parameters to configure authentication for the Pulsar Java client.
-
-```java
-
-import org.apache.pulsar.client.api.AuthenticationFactory;
-import org.apache.pulsar.client.impl.auth.oauth2.AuthenticationOAuth2;
-
-Authentication auth = AuthenticationFactory
-    .create(AuthenticationOAuth2.class.getName(),
-        "{\"type\":\"client_credentials\",\"privateKey\":\"./key/path/..\",\"issuerUrl\":\"...\",\"audience\":\"...\"}");
-PulsarClient client = PulsarClient.builder()
-    .serviceUrl("pulsar://broker.example.com:6650/")
-    .authentication(auth)
-    .build();
-
-```
-
-### C++ client
-
-The C++ client is similar to the Java client. You need to provide parameters of `issuerUrl`, `private_key` (the credentials file path), and the audience. 
-
-```c++
-
-#include <pulsar/Client.h>
-
-pulsar::ClientConfiguration config;
-std::string params = R"({
-    "issuer_url": "https://dev-kt-aa9ne.us.auth0.com",
-    "private_key": "../../pulsar-broker/src/test/resources/authentication/token/cpp_credentials_file.json",
-    "audience": "https://dev-kt-aa9ne.us.auth0.com/api/v2/"})";
-
-config.setAuth(pulsar::AuthOauth2::create(params));
-
-pulsar::Client client("pulsar://broker.example.com:6650/", config);
-
-```
-
-### Go client
-
-This example shows how to configure OAuth2 authentication in the Go client.
-
-```go
-
-oauth := pulsar.NewAuthenticationOAuth2(map[string]string{
-		"type":       "client_credentials",
-		"issuerUrl":  "https://dev-kt-aa9ne.us.auth0.com",
-		"audience":   "https://dev-kt-aa9ne.us.auth0.com/api/v2/",
-		"privateKey": "/path/to/privateKey",
-		"clientId":   "0Xx...Yyxeny",
-	})
-client, err := pulsar.NewClient(pulsar.ClientOptions{
-	URL:            "pulsar://my-cluster:6650",
-	Authentication: oauth,
-})
-
-```
-
-### Python client
-
-This example shows how to configure OAuth2 authentication in the Python client.
-
-```python
-
-from pulsar import Client, AuthenticationOauth2
-
-params = '''
-{
-    "issuer_url": "https://dev-kt-aa9ne.us.auth0.com",
-    "private_key": "/path/to/privateKey",
-    "audience": "https://dev-kt-aa9ne.us.auth0.com/api/v2/"
-}
-'''
-
-client = Client("pulsar://my-cluster:6650", authentication=AuthenticationOauth2(params))
-
-```
-
-## CLI configuration
-
-This section describes how to use Pulsar CLI tools to connect to a cluster through the OAuth2 authentication plugin.
-
-### pulsar-admin
-
-This example shows how to use pulsar-admin to connect to a cluster through the OAuth2 authentication plugin.
-
-```shell script
-
-bin/pulsar-admin --admin-url https://streamnative.cloud:443 \
---auth-plugin org.apache.pulsar.client.impl.auth.oauth2.AuthenticationOAuth2 \
---auth-params '{"privateKey":"file:///path/to/key/file.json",
-    "issuerUrl":"https://dev-kt-aa9ne.us.auth0.com",
-    "audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/"}' \
-tenants list
-
-```
-
-Set the `admin-url` parameter to the web service URL. A web service URL is a combination of the protocol, hostname and port, such as `https://localhost:8443`.
-Set the `privateKey`, `issuerUrl`, and `audience` parameters to the values based on the configuration in the key file. For details, see [authentication types](#authentication-types).
-
-### pulsar-client
-
-This example shows how to use pulsar-client to connect to a cluster through the OAuth2 authentication plugin.
-
-```shell script
-
-bin/pulsar-client \
---url SERVICE_URL \
---auth-plugin org.apache.pulsar.client.impl.auth.oauth2.AuthenticationOAuth2 \
---auth-params '{"privateKey":"file:///path/to/key/file.json",
-    "issuerUrl":"https://dev-kt-aa9ne.us.auth0.com",
-    "audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/"}' \
-produce test-topic -m "test-message" -n 10
-
-```
-
-Set the `url` parameter to the broker service URL. A broker service URL is a combination of the protocol, hostname and port, such as `pulsar://localhost:6650`.
-Set the `privateKey`, `issuerUrl`, and `audience` parameters to the values based on the configuration in the key file. For details, see [authentication types](#authentication-types).
-
-### pulsar-perf
-
-This example shows how to use pulsar-perf to connect to a cluster through the OAuth2 authentication plugin. 
-
-```shell script
-
-bin/pulsar-perf produce --service-url pulsar+ssl://streamnative.cloud:6651 \
---auth-plugin org.apache.pulsar.client.impl.auth.oauth2.AuthenticationOAuth2 \
---auth-params '{"privateKey":"file:///path/to/key/file.json",
-    "issuerUrl":"https://dev-kt-aa9ne.us.auth0.com",
-    "audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/"}' \
--r 1000 -s 1024 test-topic
-
-```
-
-Set the `service-url` parameter to the broker service URL. A broker service URL is a combination of the protocol, hostname and port, such as `pulsar+ssl://localhost:6651`.
-Set the `privateKey`, `issuerUrl`, and `audience` parameters to the values based on the configuration in the key file. For details, see [authentication types](#authentication-types).
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/security-overview.md b/site2/website/versioned_docs/version-2.9.3-deprecated/security-overview.md
deleted file mode 100644
index a8120f984bf82b..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/security-overview.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-id: security-overview
-title: Pulsar security overview
-sidebar_label: "Overview"
-original_id: security-overview
----
-
-As the central message bus for a business, Apache Pulsar is frequently used for storing mission-critical data. Therefore, enabling security features in Pulsar is crucial.
-
-By default, Pulsar configures no encryption, authentication, or authorization. Any client can communicate with Apache Pulsar via plain-text service URLs, so you must ensure that access via these plain-text service URLs is restricted to trusted clients only. In such cases, you can use network segmentation and/or authorization ACLs to restrict access to trusted IPs. If you use neither, the cluster is wide open and anyone can access it.
-
-Pulsar supports a pluggable authentication mechanism, which Pulsar clients use to authenticate with brokers and proxies. You can also configure Pulsar to support multiple authentication sources.
-
-The Pulsar broker validates the authentication credentials when a connection is established. After the initial connection is authenticated, the "principal" token is stored for authorization, and the connection is not re-authenticated until the credential expires. The broker periodically checks the expiration status of every `ServerCnx` object. You can set `authenticationRefreshCheckSeconds` on the broker to control how frequently it checks the expiration status. By default, `authenticationRefreshCheckSeconds` is set to 60 seconds. When the credential has expired, the broker forces the connection to re-authenticate. If the re-authentication fails, the broker disconnects the client.
-
-The broker can learn whether a particular client supports authentication refreshing. If a client supports authentication refreshing and the credential is expired, the authentication provider calls the `refreshAuthentication` method to initiate the refreshing process. If a client does not support authentication refreshing and the credential is expired, the broker disconnects the client.
-
-You should secure all service components in your Apache Pulsar deployment.
-
-## Role tokens
-
-In Pulsar, a *role* is a string, like `admin` or `app1`, which can represent a single client or multiple clients. You can use roles to control permissions for clients to produce or consume from certain topics, administer the configuration for tenants, and so on. 
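-
-For instance, a minimal sketch with the Java admin client (the namespace and role names here are illustrative only) that grants a role permission to produce and consume on a namespace:
-
-```java
-
-import java.util.EnumSet;
-import org.apache.pulsar.client.admin.PulsarAdmin;
-import org.apache.pulsar.common.policies.data.AuthAction;
-
-PulsarAdmin admin = PulsarAdmin.builder()
-        .serviceHttpUrl("https://broker.example.com:8443")
-        .build();
-
-// Clients that authenticate as the role "app1" may now produce and consume
-// on every topic under the namespace "my-tenant/my-ns".
-admin.namespaces().grantPermissionOnNamespace(
-        "my-tenant/my-ns", "app1",
-        EnumSet.of(AuthAction.produce, AuthAction.consume));
-
-admin.close();
-
-```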
- -Apache Pulsar uses an [Authentication Provider](#authentication-providers) to establish the identity of a client and then assign a *role token* to that client. This role token is then used for [Authorization and ACLs](security-authorization.md) to determine what the client is authorized to do. - -## Authentication providers - -Currently Pulsar supports the following authentication providers: - -- [TLS Authentication](security-tls-authentication.md) -- [Athenz](security-athenz.md) -- [Kerberos](security-kerberos.md) -- [JSON Web Token Authentication](security-jwt.md) -- [OAuth 2.0 authentication](security-oauth2.md) -- [HTTP basic authentication](security-basic-auth.md) - - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/security-tls-authentication.md b/site2/website/versioned_docs/version-2.9.3-deprecated/security-tls-authentication.md deleted file mode 100644 index 85d2240f413060..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/security-tls-authentication.md +++ /dev/null @@ -1,222 +0,0 @@ ---- -id: security-tls-authentication -title: Authentication using TLS -sidebar_label: "Authentication using TLS" -original_id: security-tls-authentication ---- - -## TLS authentication overview - -TLS authentication is an extension of [TLS transport encryption](security-tls-transport.md). Not only servers have keys and certs that the client uses to verify the identity of servers, clients also have keys and certs that the server uses to verify the identity of clients. You must have TLS transport encryption configured on your cluster before you can use TLS authentication. This guide assumes you already have TLS transport encryption configured. - -`Bouncy Castle Provider` provides TLS related cipher suites and algorithms in Pulsar. If you need [FIPS](https://www.bouncycastle.org/fips_faq.html) version of `Bouncy Castle Provider`, please reference [Bouncy Castle page](security-bouncy-castle.md). - -### Create client certificates - -Client certificates are generated using the certificate authority. Server certificates are also generated with the same certificate authority. - -The biggest difference between client certs and server certs is that the **common name** for the client certificate is the **role token** which that client is authenticated as. - -To use client certificates, you need to set `tlsRequireTrustedClientCertOnConnect=true` at the broker side. For details, refer to [TLS broker configuration](security-tls-transport.md#configure-broker). - -First, you need to enter the following command to generate the key : - -```bash - -$ openssl genrsa -out admin.key.pem 2048 - -``` - -Similar to the broker, the client expects the key to be in [PKCS 8](https://en.wikipedia.org/wiki/PKCS_8) format, so you need to convert it by entering the following command: - -```bash - -$ openssl pkcs8 -topk8 -inform PEM -outform PEM \ - -in admin.key.pem -out admin.key-pk8.pem -nocrypt - -``` - -Next, enter the command below to generate the certificate request. When you are asked for a **common name**, enter the **role token** that you want this key pair to authenticate a client as. - -```bash - -$ openssl req -config openssl.cnf \ - -key admin.key.pem -new -sha256 -out admin.csr.pem - -``` - -:::note - -If openssl.cnf is not specified, read [Certificate authority](http://pulsar.apache.org/docs/en/security-tls-transport/#certificate-authority) to get the openssl.cnf. - -::: - -Then, enter the command below to sign with request with the certificate authority. 
Note that the client certs uses the **usr_cert** extension, which allows the cert to be used for client authentication. - -```bash - -$ openssl ca -config openssl.cnf -extensions usr_cert \ - -days 1000 -notext -md sha256 \ - -in admin.csr.pem -out admin.cert.pem - -``` - -You can get a cert, `admin.cert.pem`, and a key, `admin.key-pk8.pem` from this command. With `ca.cert.pem`, clients can use this cert and this key to authenticate themselves to brokers and proxies as the role token ``admin``. - -:::note - -If the "unable to load CA private key" error occurs and the reason of this error is "No such file or directory: /etc/pki/CA/private/cakey.pem" in this step. Try the command below: - -```bash - -$ cd /etc/pki/tls/misc/CA -$ ./CA -newca - -``` - -to generate `cakey.pem` . - -::: - -## Enable TLS authentication on brokers - -To configure brokers to authenticate clients, add the following parameters to `broker.conf`, alongside [the configuration to enable tls transport](security-tls-transport.md#broker-configuration): - -```properties - -# Configuration to enable authentication -authenticationEnabled=true -authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderTls - -# operations and publish/consume from all topics -superUserRoles=admin - -# Authentication settings of the broker itself. Used when the broker connects to other brokers, either in same or other clusters -brokerClientTlsEnabled=true -brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationTls -brokerClientAuthenticationParameters={"tlsCertFile":"/path/my-ca/admin.cert.pem","tlsKeyFile":"/path/my-ca/admin.key-pk8.pem"} -brokerClientTrustCertsFilePath=/path/my-ca/certs/ca.cert.pem - -``` - -## Enable TLS authentication on proxies - -To configure proxies to authenticate clients, add the following parameters to `proxy.conf`, alongside [the configuration to enable tls transport](security-tls-transport.md#proxy-configuration): - -The proxy should have its own client key pair for connecting to brokers. You need to configure the role token for this key pair in the ``proxyRoles`` of the brokers. See the [authorization guide](security-authorization.md) for more details. - -```properties - -# For clients connecting to the proxy -authenticationEnabled=true -authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderTls - -# For the proxy to connect to brokers -brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationTls -brokerClientAuthenticationParameters=tlsCertFile:/path/to/proxy.cert.pem,tlsKeyFile:/path/to/proxy.key-pk8.pem - -``` - -## Client configuration - -When you use TLS authentication, client connects via TLS transport. You need to configure the client to use ```https://``` and 8443 port for the web service URL, ```pulsar+ssl://``` and 6651 port for the broker service URL. - -### CLI tools - -[Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-pulsar-admin.md), [`pulsar-perf`](reference-cli-tools.md#pulsar-perf), and [`pulsar-client`](reference-cli-tools.md#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation. 
-
-You need to add the following parameters to that file to use TLS authentication with the CLI tools of Pulsar:
-
-```properties
-
-webServiceUrl=https://broker.example.com:8443/
-brokerServiceUrl=pulsar+ssl://broker.example.com:6651/
-useTls=true
-tlsAllowInsecureConnection=false
-tlsTrustCertsFilePath=/path/to/ca.cert.pem
-authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationTls
-authParams=tlsCertFile:/path/to/my-role.cert.pem,tlsKeyFile:/path/to/my-role.key-pk8.pem
-
-```
-
-### Java client
-
-```java
-
-import org.apache.pulsar.client.api.PulsarClient;
-
-PulsarClient client = PulsarClient.builder()
-    .serviceUrl("pulsar+ssl://broker.example.com:6651/")
-    .enableTls(true)
-    .tlsTrustCertsFilePath("/path/to/ca.cert.pem")
-    .authentication("org.apache.pulsar.client.impl.auth.AuthenticationTls",
-                    "tlsCertFile:/path/to/my-role.cert.pem,tlsKeyFile:/path/to/my-role.key-pk8.pem")
-    .build();
-
-```
-
-### Python client
-
-```python
-
-from pulsar import Client, AuthenticationTLS
-
-auth = AuthenticationTLS("/path/to/my-role.cert.pem", "/path/to/my-role.key-pk8.pem")
-client = Client("pulsar+ssl://broker.example.com:6651/",
-                tls_trust_certs_file_path="/path/to/ca.cert.pem",
-                tls_allow_insecure_connection=False,
-                authentication=auth)
-
-```
-
-### C++ client
-
-```c++
-
-#include <pulsar/Client.h>
-
-pulsar::ClientConfiguration config;
-config.setUseTls(true);
-config.setTlsTrustCertsFilePath("/path/to/ca.cert.pem");
-config.setTlsAllowInsecureConnection(false);
-
-pulsar::AuthenticationPtr auth = pulsar::AuthTls::create("/path/to/my-role.cert.pem",
-                                                         "/path/to/my-role.key-pk8.pem");
-config.setAuth(auth);
-
-pulsar::Client client("pulsar+ssl://broker.example.com:6651/", config);
-
-```
-
-### Node.js client
-
-```JavaScript
-
-const Pulsar = require('pulsar-client');
-
-(async () => {
-  const auth = new Pulsar.AuthenticationTls({
-    certificatePath: '/path/to/my-role.cert.pem',
-    privateKeyPath: '/path/to/my-role.key-pk8.pem',
-  });
-
-  const client = new Pulsar.Client({
-    serviceUrl: 'pulsar+ssl://broker.example.com:6651/',
-    authentication: auth,
-    tlsTrustCertsFilePath: '/path/to/ca.cert.pem',
-  });
-})();
-
-```
-
-### C# client
-
-```c#
-
-var clientCertificate = new X509Certificate2("admin.pfx");
-var client = PulsarClient.Builder()
-                         .AuthenticateUsingClientCertificate(clientCertificate)
-                         .Build();
-
-```
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/security-tls-keystore.md b/site2/website/versioned_docs/version-2.9.3-deprecated/security-tls-keystore.md
deleted file mode 100644
index 0b3b50fcebb104..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/security-tls-keystore.md
+++ /dev/null
@@ -1,342 +0,0 @@
----
-id: security-tls-keystore
-title: Using TLS with KeyStore configure
-sidebar_label: "Using TLS with KeyStore configure"
-original_id: security-tls-keystore
----
-
-## Overview
-
-Apache Pulsar supports [TLS encryption](security-tls-transport.md) and [TLS authentication](security-tls-authentication.md) between clients and the Apache Pulsar service.
-By default, it uses PEM-format file configuration. This page describes how to use [KeyStore](https://en.wikipedia.org/wiki/Java_KeyStore)-type configuration for TLS.
-
-
-## TLS encryption with KeyStore configuration
-
-### Generate TLS key and certificate
-
-The first step of deploying TLS is to generate the key and the certificate for each machine in the cluster.
-You can use Java’s `keytool` utility to accomplish this task. 
We will generate the key into a temporary keystore
-for the broker first, so that we can export it and sign it later with the CA.
-
-```shell
-
-keytool -keystore broker.keystore.jks -alias localhost -validity {validity} -genkeypair -keyalg RSA
-
-```
-
-You need to specify two parameters in the above command:
-
-1. `keystore`: the keystore file that stores the certificate. The *keystore* file contains the private key of
-   the certificate; hence, it needs to be kept safely.
-2. `validity`: the valid time of the certificate in days.
-
-> Ensure that the common name (CN) exactly matches the fully qualified domain name (FQDN) of the server.
-The client compares the CN with the DNS domain name to ensure that it is indeed connecting to the desired server, not a malicious one.
-
-### Creating your own CA
-
-After the first step, each broker in the cluster has a public-private key pair, and a certificate to identify the machine.
-The certificate, however, is unsigned, which means that an attacker can create such a certificate to pretend to be any machine.
-
-Therefore, it is important to prevent forged certificates by signing them for each machine in the cluster.
-A `certificate authority (CA)` is responsible for signing certificates. A CA works like a government that issues passports —
-the government stamps (signs) each passport so that the passport becomes difficult to forge. Other governments verify the stamps
-to ensure the passport is authentic. Similarly, the CA signs the certificates, and the cryptography guarantees that a signed
-certificate is computationally difficult to forge. Thus, as long as the CA is a genuine and trusted authority, the clients have
-high assurance that they are connecting to authentic machines.
-
-```shell
-
-openssl req -new -x509 -keyout ca-key -out ca-cert -days 365
-
-```
-
-The generated CA is simply a *public-private* key pair and certificate, and it is intended to sign other certificates.
-
-The next step is to add the generated CA to the clients' truststore so that the clients can trust this CA:
-
-```shell
-
-keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert
-
-```
-
-NOTE: If you configure the brokers to require client authentication by setting `tlsRequireTrustedClientCertOnConnect` to `true` in the
-broker configuration, then you must also provide a truststore for the brokers, and it should contain all the CA certificates that the clients' keys were signed by.
-
-```shell
-
-keytool -keystore broker.truststore.jks -alias CARoot -import -file ca-cert
-
-```
-
-In contrast to the keystore, which stores each machine’s own identity, the truststore of a client stores all the certificates
-that the client should trust. Importing a certificate into one’s truststore also means trusting all certificates that are signed
-by that certificate. As in the analogy above, trusting the government (CA) also means trusting all passports (certificates) that
-it has issued. This attribute is called the chain of trust, and it is particularly useful when deploying TLS on a large BookKeeper cluster.
-You can sign all certificates in the cluster with a single CA, and have all machines share the same truststore that trusts the CA.
-That way all machines can authenticate all other machines.
-
-
-### Signing the certificate
-
-The next step is to sign all certificates in the keystore with the CA we generated. 
First, you need to export the certificate from the keystore: - -```shell - -keytool -keystore broker.keystore.jks -alias localhost -certreq -file cert-file - -``` - -Then sign it with the CA: - -```shell - -openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days {validity} -CAcreateserial -passin pass:{ca-password} - -``` - -Finally, you need to import both the certificate of the CA and the signed certificate into the keystore: - -```shell - -keytool -keystore broker.keystore.jks -alias CARoot -import -file ca-cert -keytool -keystore broker.keystore.jks -alias localhost -import -file cert-signed - -``` - -The definitions of the parameters are the following: - -1. `keystore`: the location of the keystore -2. `ca-cert`: the certificate of the CA -3. `ca-key`: the private key of the CA -4. `ca-password`: the passphrase of the CA -5. `cert-file`: the exported, unsigned certificate of the broker -6. `cert-signed`: the signed certificate of the broker - -### Configuring brokers - -Brokers enable TLS by provide valid `brokerServicePortTls` and `webServicePortTls`, and also need set `tlsEnabledWithKeyStore` to `true` for using KeyStore type configuration. -Besides this, KeyStore path, KeyStore password, TrustStore path, and TrustStore password need to provided. -And since broker will create internal client/admin client to communicate with other brokers, user also need to provide config for them, this is similar to how user config the outside client/admin-client. -If `tlsRequireTrustedClientCertOnConnect` is `true`, broker will reject the Connection if the Client Certificate is not trusted. - -The following TLS configs are needed on the broker side: - -```properties - -tlsEnabledWithKeyStore=true -# key store -tlsKeyStoreType=JKS -tlsKeyStore=/var/private/tls/broker.keystore.jks -tlsKeyStorePassword=brokerpw - -# trust store -tlsTrustStoreType=JKS -tlsTrustStore=/var/private/tls/broker.truststore.jks -tlsTrustStorePassword=brokerpw - -# internal client/admin-client config -brokerClientTlsEnabled=true -brokerClientTlsEnabledWithKeyStore=true -brokerClientTlsTrustStoreType=JKS -brokerClientTlsTrustStore=/var/private/tls/client.truststore.jks -brokerClientTlsTrustStorePassword=clientpw - -``` - -NOTE: it is important to restrict access to the store files via filesystem permissions. - -If you have configured TLS on the broker, to disable non-TLS ports, you can set the values of the following configurations to empty as below. - -``` - -brokerServicePort= -webServicePort= - -``` - -In this case, you need to set the following configurations. - -```conf - -brokerClientTlsEnabled=true // Set this to true -brokerClientTlsEnabledWithKeyStore=true // Set this to true -brokerClientTlsTrustStore= // Set this to your desired value -brokerClientTlsTrustStorePassword= // Set this to your desired value - -Optional settings that may worth consider: - -1. tlsClientAuthentication=false: Enable/Disable using TLS for authentication. This config when enabled will authenticate the other end - of the communication channel. It should be enabled on both brokers and clients for mutual TLS. -2. tlsCiphers=[TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256], A cipher suite is a named combination of authentication, encryption, MAC and key exchange - algorithm used to negotiate the security settings for a network connection using TLS network protocol. By default, - it is null. 
[OpenSSL Ciphers](https://www.openssl.org/docs/man1.0.2/apps/ciphers.html)
-   [JDK Ciphers](http://docs.oracle.com/javase/8/docs/technotes/guides/security/StandardNames.html#ciphersuites)
-3. tlsProtocols=[TLSv1.3,TLSv1.2] (list out the TLS protocols that you are going to accept from clients).
-   By default, it is not set.
-
-```
-
-### Configuring Clients
-
-This is similar to [TLS encryption configuration for clients with the PEM type](security-tls-transport.md#Client configuration).
-For a minimal configuration, you only need to provide the TrustStore information.
-
-e.g.
-1. for [Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-cli-tools#pulsar-admin), [`pulsar-perf`](reference-cli-tools#pulsar-perf), and [`pulsar-client`](reference-cli-tools#pulsar-client), use the `conf/client.conf` config file in a Pulsar installation.
-
-   ```properties
-
-   webServiceUrl=https://broker.example.com:8443/
-   brokerServiceUrl=pulsar+ssl://broker.example.com:6651/
-   useKeyStoreTls=true
-   tlsTrustStoreType=JKS
-   tlsTrustStorePath=/var/private/tls/client.truststore.jks
-   tlsTrustStorePassword=clientpw
-
-   ```
-
-1. for the Java client
-
-   ```java
-
-   import org.apache.pulsar.client.api.PulsarClient;
-
-   PulsarClient client = PulsarClient.builder()
-       .serviceUrl("pulsar+ssl://broker.example.com:6651/")
-       .enableTls(true)
-       .useKeyStoreTls(true)
-       .tlsTrustStorePath("/var/private/tls/client.truststore.jks")
-       .tlsTrustStorePassword("clientpw")
-       .allowTlsInsecureConnection(false)
-       .build();
-
-   ```
-
-1. for the Java admin client
-
-```java
-
-    PulsarAdmin admin = PulsarAdmin.builder().serviceHttpUrl("https://broker.example.com:8443")
-        .useKeyStoreTls(true)
-        .tlsTrustStorePath("/var/private/tls/client.truststore.jks")
-        .tlsTrustStorePassword("clientpw")
-        .allowTlsInsecureConnection(false)
-        .build();
-
-```
-
-## TLS authentication with KeyStore configuration
-
-This is similar to [TLS authentication with the PEM type](security-tls-authentication.md).
-
-### broker authentication config
-
-`broker.conf`
-
-```properties
-
-# Configuration to enable authentication
-authenticationEnabled=true
-authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderTls
-
-# this should be the CN of one of the client keystores.
-superUserRoles=admin
-
-# Enable KeyStore type
-tlsEnabledWithKeyStore=true
-requireTrustedClientCertOnConnect=true
-
-# key store
-tlsKeyStoreType=JKS
-tlsKeyStore=/var/private/tls/broker.keystore.jks
-tlsKeyStorePassword=brokerpw
-
-# trust store
-tlsTrustStoreType=JKS
-tlsTrustStore=/var/private/tls/broker.truststore.jks
-tlsTrustStorePassword=brokerpw
-
-# internal client/admin-client config
-brokerClientTlsEnabled=true
-brokerClientTlsEnabledWithKeyStore=true
-brokerClientTlsTrustStoreType=JKS
-brokerClientTlsTrustStore=/var/private/tls/client.truststore.jks
-brokerClientTlsTrustStorePassword=clientpw
-# internal auth config
-brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationKeyStoreTls
-brokerClientAuthenticationParameters={"keyStoreType":"JKS","keyStorePath":"/var/private/tls/client.keystore.jks","keyStorePassword":"clientpw"}
-# currently the websocket service does not support the KeyStore type
-webSocketServiceEnabled=false
-
-```
-
-### client authentication config
-
-Besides the TLS encryption configuration, the main work is configuring a KeyStore for the client that contains a valid CN as the client role.
-
-e.g.
-1. 
for [Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-cli-tools#pulsar-admin), [`pulsar-perf`](reference-cli-tools#pulsar-perf), and [`pulsar-client`](reference-cli-tools#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation. - - ```properties - - webServiceUrl=https://broker.example.com:8443/ - brokerServiceUrl=pulsar+ssl://broker.example.com:6651/ - useKeyStoreTls=true - tlsTrustStoreType=JKS - tlsTrustStorePath=/var/private/tls/client.truststore.jks - tlsTrustStorePassword=clientpw - authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationKeyStoreTls - authParams={"keyStoreType":"JKS","keyStorePath":"/path/to/keystorefile","keyStorePassword":"keystorepw"} - - ``` - -1. for java client - - ```java - - import org.apache.pulsar.client.api.PulsarClient; - - PulsarClient client = PulsarClient.builder() - .serviceUrl("pulsar+ssl://broker.example.com:6651/") - .enableTls(true) - .useKeyStoreTls(true) - .tlsTrustStorePath("/var/private/tls/client.truststore.jks") - .tlsTrustStorePassword("clientpw") - .allowTlsInsecureConnection(false) - .authentication( - "org.apache.pulsar.client.impl.auth.AuthenticationKeyStoreTls", - "keyStoreType:JKS,keyStorePath:/var/private/tls/client.keystore.jks,keyStorePassword:clientpw") - .build(); - - ``` - -1. for java admin client - - ```java - - PulsarAdmin amdin = PulsarAdmin.builder().serviceHttpUrl("https://broker.example.com:8443") - .useKeyStoreTls(true) - .tlsTrustStorePath("/var/private/tls/client.truststore.jks") - .tlsTrustStorePassword("clientpw") - .allowTlsInsecureConnection(false) - .authentication( - "org.apache.pulsar.client.impl.auth.AuthenticationKeyStoreTls", - "keyStoreType:JKS,keyStorePath:/var/private/tls/client.keystore.jks,keyStorePassword:clientpw") - .build(); - - ``` - -## Enabling TLS Logging - -You can enable TLS debug logging at the JVM level by starting the brokers and/or clients with `javax.net.debug` system property. For example: - -```shell - --Djavax.net.debug=all - -``` - -You can find more details on this in [Oracle documentation](http://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/ReadDebug.html) on [debugging SSL/TLS connections](http://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/ReadDebug.html). diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/security-tls-transport.md b/site2/website/versioned_docs/version-2.9.3-deprecated/security-tls-transport.md deleted file mode 100644 index 2cad17a78c3507..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/security-tls-transport.md +++ /dev/null @@ -1,295 +0,0 @@ ---- -id: security-tls-transport -title: Transport Encryption using TLS -sidebar_label: "Transport Encryption using TLS" -original_id: security-tls-transport ---- - -## TLS overview - -By default, Apache Pulsar clients communicate with the Apache Pulsar service in plain text. This means that all data is sent in the clear. You can use TLS to encrypt this traffic to protect the traffic from the snooping of a man-in-the-middle attacker. - -You can also configure TLS for both encryption and authentication. Use this guide to configure just TLS transport encryption and refer to [here](security-tls-authentication.md) for TLS authentication configuration. Alternatively, you can use [another authentication mechanism](security-athenz.md) on top of TLS transport encryption. - -> Note that enabling TLS may impact the performance due to encryption overhead. 
-
-## TLS concepts
-
-TLS is a form of [public key cryptography](https://en.wikipedia.org/wiki/Public-key_cryptography). Encryption is performed using key pairs consisting of a public key and a private key. The public key encrypts the messages and the private key decrypts the messages.
-
-To use TLS transport encryption, you need two kinds of key pairs, **server key pairs** and a **certificate authority**.
-
-You can use a third kind of key pair, **client key pairs**, for [client authentication](security-tls-authentication.md).
-
-You should store the **certificate authority** private key in a very secure location (a fully encrypted, disconnected, air gapped computer). As for the certificate authority public key, the **trust cert**, you can share it freely.
-
-For both client and server key pairs, the administrator first generates a private key and a certificate request, then uses the certificate authority private key to sign the certificate request, and finally generates a certificate. This certificate is the public key for the server/client key pair.
-
-For TLS transport encryption, when the clients talk to the server, they can use the **trust cert** to verify that the server has a key pair that the certificate authority signed. A man-in-the-middle attacker does not have access to the certificate authority, so they cannot create a server with such a key pair.
-
-For TLS authentication, the server uses the **trust cert** to verify that the client has a key pair that the certificate authority signed. The common name of the **client cert** is then used as the client's role token (see [Overview](security-overview.md)).
-
-`Bouncy Castle Provider` provides cipher suites and algorithms in Pulsar. If you need the [FIPS](https://www.bouncycastle.org/fips_faq.html) version of `Bouncy Castle Provider`, refer to the [Bouncy Castle page](security-bouncy-castle.md).
-
-## Create TLS certificates
-
-Creating TLS certificates for Pulsar involves creating a [certificate authority](#certificate-authority) (CA), [server certificate](#server-certificate), and [client certificate](#client-certificate).
-
-Follow the guide below to set up a certificate authority. You can also refer to plenty of resources on the internet for more details. We recommend [this guide](https://jamielinux.com/docs/openssl-certificate-authority/index.html) for your detailed reference.
-
-### Certificate authority
-
-1. Create the certificate for the CA. You can use the CA to sign both the broker and client certificates. This ensures that each party will trust the others. You should store the CA in a very secure location (ideally completely disconnected from networks, air gapped, and fully encrypted).
-
-2. Enter the following command to create a directory for your CA, and place [this openssl configuration file](https://github.com/apache/pulsar/tree/master/site2/website/static/examples/openssl.cnf) in the directory. You may want to modify the default answers for company name and department in the configuration file. Export the location of the CA directory to the environment variable `CA_HOME`. The configuration file uses this environment variable to find the rest of the files and directories that the CA needs.
-
-```bash
-
-mkdir my-ca
-cd my-ca
-wget https://raw.githubusercontent.com/apache/pulsar-site/main/site2/website/static/examples/openssl.cnf
-export CA_HOME=$(pwd)
-
-```
-
-3. Enter the commands below to create the necessary directories, keys and certs. 
- -```bash - -mkdir certs crl newcerts private -chmod 700 private/ -touch index.txt -echo 1000 > serial -openssl genrsa -aes256 -out private/ca.key.pem 4096 -chmod 400 private/ca.key.pem -openssl req -config openssl.cnf -key private/ca.key.pem \ - -new -x509 -days 7300 -sha256 -extensions v3_ca \ - -out certs/ca.cert.pem -chmod 444 certs/ca.cert.pem - -``` - -4. After you answer the question prompts, CA-related files are stored in the `./my-ca` directory. Within that directory: - -* `certs/ca.cert.pem` is the public certificate. This public certificates is meant to be distributed to all parties involved. -* `private/ca.key.pem` is the private key. You only need it when you are signing a new certificate for either broker or clients and you must safely guard this private key. - -### Server certificate - -Once you have created a CA certificate, you can create certificate requests and sign them with the CA. - -The following commands ask you a few questions and then create the certificates. When you are asked for the common name, you should match the hostname of the broker. You can also use a wildcard to match a group of broker hostnames, for example, `*.broker.usw.example.com`. This ensures that multiple machines can reuse the same certificate. - -:::tip - -Sometimes matching the hostname is not possible or makes no sense, -such as when you create the brokers with random hostnames, or you -plan to connect to the hosts via their IP. In these cases, you -should configure the client to disable TLS hostname verification. For more -details, you can see [the host verification section in client configuration](#hostname-verification). - -::: - -1. Enter the command below to generate the key. - -```bash - -openssl genrsa -out broker.key.pem 2048 - -``` - -The broker expects the key to be in [PKCS 8](https://en.wikipedia.org/wiki/PKCS_8) format, so enter the following command to convert it. - -```bash - -openssl pkcs8 -topk8 -inform PEM -outform PEM \ - -in broker.key.pem -out broker.key-pk8.pem -nocrypt - -``` - -2. Enter the following command to generate the certificate request. - -```bash - -openssl req -config openssl.cnf \ - -key broker.key.pem -new -sha256 -out broker.csr.pem - -``` - -3. Sign it with the certificate authority by entering the command below. - -```bash - -openssl ca -config openssl.cnf -extensions server_cert \ - -days 1000 -notext -md sha256 \ - -in broker.csr.pem -out broker.cert.pem - -``` - -At this point, you have a cert, `broker.cert.pem`, and a key, `broker.key-pk8.pem`, which you can use along with `ca.cert.pem` to configure TLS transport encryption for your broker and proxy nodes. - -## Configure broker - -To configure a Pulsar [broker](reference-terminology.md#broker) to use TLS transport encryption, you need to make some changes to `broker.conf`, which locates in the `conf` directory of your [Pulsar installation](getting-started-standalone.md). 
-
-Add these values to the configuration file (substituting the appropriate certificate paths where necessary):
-
-```properties
-
-tlsEnabled=true
-tlsRequireTrustedClientCertOnConnect=true
-tlsCertificateFilePath=/path/to/broker.cert.pem
-tlsKeyFilePath=/path/to/broker.key-pk8.pem
-tlsTrustCertsFilePath=/path/to/ca.cert.pem
-
-```
-
-> You can find a full list of parameters available in the `conf/broker.conf` file,
-> as well as the default values for those parameters, in [Broker Configuration](reference-configuration.md#broker).
->
-### TLS Protocol Version and Cipher
-
-You can configure the broker (and proxy) to require specific TLS protocol versions and ciphers for TLS negotiation. You can use the TLS protocol versions and ciphers to stop clients from requesting downgraded TLS protocol versions or ciphers that may have weaknesses.
-
-Both the TLS protocol versions and cipher properties can take multiple values, separated by commas. The possible values for protocol version and ciphers depend on the TLS provider that you are using. Pulsar uses OpenSSL if it is available; otherwise, it falls back to the JDK implementation.
-
-```properties
-
-tlsProtocols=TLSv1.3,TLSv1.2
-tlsCiphers=TLS_DH_RSA_WITH_AES_256_GCM_SHA384,TLS_DH_RSA_WITH_AES_256_CBC_SHA
-
-```
-
-OpenSSL currently supports ```TLSv1.1```, ```TLSv1.2``` and ```TLSv1.3``` for the protocol version. You can acquire a list of supported ciphers from the openssl ciphers command, for example, ```openssl ciphers -tls1_3```.
-
-For JDK 11, you can obtain a list of supported values from the documentation:
-- [TLS protocol](https://docs.oracle.com/en/java/javase/11/security/oracle-providers.html#GUID-7093246A-31A3-4304-AC5F-5FB6400405E2__SUNJSSEPROVIDERPROTOCOLPARAMETERS-BBF75009)
-- [Ciphers](https://docs.oracle.com/en/java/javase/11/security/oracle-providers.html#GUID-7093246A-31A3-4304-AC5F-5FB6400405E2__SUNJSSE_CIPHER_SUITES)
-
-## Proxy Configuration
-
-Proxies need to configure TLS in two directions: for clients connecting to the proxy, and for the proxy connecting to brokers.
-
-```properties
-
-# For clients connecting to the proxy
-tlsEnabledInProxy=true
-tlsCertificateFilePath=/path/to/broker.cert.pem
-tlsKeyFilePath=/path/to/broker.key-pk8.pem
-tlsTrustCertsFilePath=/path/to/ca.cert.pem
-
-# For the proxy to connect to brokers
-tlsEnabledWithBroker=true
-brokerClientTrustCertsFilePath=/path/to/ca.cert.pem
-
-```
-
-## Client configuration
-
-When you enable TLS transport encryption, you need to configure the client to use ```https://``` and port 8443 for the web service URL, and ```pulsar+ssl://``` and port 6651 for the broker service URL.
-
-As the server certificate that you generated above does not belong to any of the default trust chains, you also need to either specify the path to the **trust cert** (recommended), or tell the client to allow untrusted server certs.
-
-### Hostname verification
-
-Hostname verification is a TLS security feature whereby a client can refuse to connect to a server if the "CommonName" does not match the hostname to which the client is connecting. By default, Pulsar clients disable hostname verification, as it requires that each broker has a DNS record and a unique cert.
-
-Moreover, as the administrator has full control of the certificate authority, a bad actor is unlikely to be able to pull off a man-in-the-middle attack. "allowInsecureConnection" allows the client to connect to servers whose cert has not been signed by an approved CA. 
The client disables "allowInsecureConnection" by default, and you should always disable "allowInsecureConnection" in production environments. As long as you disable "allowInsecureConnection", a man-in-the-middle attack requires that the attacker has access to the CA.
-
-One scenario where you may want to enable hostname verification is where you have multiple proxy nodes behind a VIP, and the VIP has a DNS record, for example, pulsar.mycompany.com. In this case, you can generate a TLS cert with pulsar.mycompany.com as the "CommonName," and then enable hostname verification on the client.
-
-The examples below show how to configure TLS for the CLI tools and the Java, Python, C++, Node.js, and C# clients; in each of them, hostname verification is disabled by default.
-
-### CLI tools
-
-[Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-cli-tools.md#pulsar-admin), [`pulsar-perf`](reference-cli-tools.md#pulsar-perf), and [`pulsar-client`](reference-cli-tools.md#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation.
-
-You need to add the following parameters to that file to use TLS transport with Pulsar's CLI tools:
-
-```properties
-
-webServiceUrl=https://broker.example.com:8443/
-brokerServiceUrl=pulsar+ssl://broker.example.com:6651/
-useTls=true
-tlsAllowInsecureConnection=false
-tlsTrustCertsFilePath=/path/to/ca.cert.pem
-tlsEnableHostnameVerification=false
-
-```
-
-#### Java client
-
-```java
-
-import org.apache.pulsar.client.api.PulsarClient;
-
-PulsarClient client = PulsarClient.builder()
-    .serviceUrl("pulsar+ssl://broker.example.com:6651/")
-    .enableTls(true)
-    .tlsTrustCertsFilePath("/path/to/ca.cert.pem")
-    .enableTlsHostnameVerification(false) // false by default, in any case
-    .allowTlsInsecureConnection(false) // false by default, in any case
-    .build();
-
-```
-
-#### Python client
-
-```python
-
-from pulsar import Client
-
-client = Client("pulsar+ssl://broker.example.com:6651/",
-                tls_hostname_verification=False,
-                tls_trust_certs_file_path="/path/to/ca.cert.pem",
-                tls_allow_insecure_connection=False)  # defaults to false from v2.2.0 onwards
-
-```
-
-#### C++ client
-
-```c++
-
-#include <pulsar/Client.h>
-
-ClientConfiguration config = ClientConfiguration();
-config.setUseTls(true);  // shouldn't be needed soon
-config.setTlsTrustCertsFilePath(caPath);
-config.setTlsAllowInsecureConnection(false);
-config.setAuth(pulsar::AuthTls::create(clientPublicKeyPath, clientPrivateKeyPath));
-config.setValidateHostName(false);
-
-```
-
-#### Node.js client
-
-```JavaScript
-
-const Pulsar = require('pulsar-client');
-
-(async () => {
-  const client = new Pulsar.Client({
-    serviceUrl: 'pulsar+ssl://broker.example.com:6651/',
-    tlsTrustCertsFilePath: '/path/to/ca.cert.pem',
-    useTls: true,
-    tlsValidateHostname: false,
-    tlsAllowInsecureConnection: false,
-  });
-})();
-
-```
-
-#### C# client
-
-```c#
-
-var certificate = new X509Certificate2("ca.cert.pem");
-var client = PulsarClient.Builder()
-                         .TrustedCertificateAuthority(certificate) // If the CA is not trusted on the host, you can add it explicitly.
-                         .VerifyCertificateAuthority(true) // Default is 'true'
-                         .VerifyCertificateName(false)     // Default is 'false'
-                         .Build();
-
-```
-
-> Note that `VerifyCertificateName` refers to the configuration of hostname verification in the C# client.
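-
-As a final check, independent of any client library, you can inspect the certificate the broker (or proxy) actually presents on its TLS port. A minimal probe, assuming the example hostname and CA path used above:
-
-```bash
-
-# Prints the presented certificate chain; "Verify return code: 0 (ok)" means it validates against the CA
-openssl s_client -connect broker.example.com:6651 -CAfile /path/to/ca.cert.pem </dev/null
-
-```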
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/security-token-admin.md b/site2/website/versioned_docs/version-2.9.3-deprecated/security-token-admin.md
deleted file mode 100644
index a265f6320d28fe..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/security-token-admin.md
+++ /dev/null
@@ -1,183 +0,0 @@
----
-id: security-token-admin
-title: Token authentication admin
-sidebar_label: "Token authentication admin"
-original_id: security-token-admin
----
-
-## Token Authentication Overview
-
-Pulsar supports authenticating clients using security tokens that are based on [JSON Web Tokens](https://jwt.io/introduction/) ([RFC-7519](https://tools.ietf.org/html/rfc7519)).
-
-Tokens are used to identify a Pulsar client and associate it with some "principal" (or "role"), which
-is then granted permissions to perform certain actions (for example, publishing to or consuming from a topic).
-
-A user is typically given a token string by an administrator (or some automated service).
-
-The compact representation of a signed JWT is a string that looks like:
-
-```
-
- eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY
-
-```
-
-The application specifies the token when creating the client instance. An alternative is to pass
-a "token supplier", that is, a function that returns the token whenever the client library
-needs one.
-
-> #### Always use TLS transport encryption
-> Sending a token is equivalent to sending a password over the wire. It is strongly recommended to
-> always use TLS encryption when talking to the Pulsar service. See
-> [Transport Encryption using TLS](security-tls-transport.md)
-
-## Secret vs Public/Private keys
-
-JWT supports two different kinds of keys for generating and validating the tokens:
-
- * Symmetric: there is a single ***secret*** key that is used both to generate and validate tokens.
- * Asymmetric: there is a pair of keys.
-     - The ***private*** key is used to generate tokens.
-     - The ***public*** key is used to validate tokens.
-
-### Secret key
-
-When using a secret key, the administrator creates the key and uses
-it to generate the client tokens. This key is also configured on
-the brokers to allow them to validate the clients.
-
-#### Creating a secret key
-
-> The output file is generated in the root of your Pulsar installation directory. You can also provide an absolute path for the output file.
-
-```shell
-
-$ bin/pulsar tokens create-secret-key --output my-secret.key
-
-```
-
-To generate a base64-encoded secret key:
-
-```shell
-
-$ bin/pulsar tokens create-secret-key --output /opt/my-secret.key --base64
-
-```
-
-### Public/Private keys
-
-With public/private keys, you need to create a key pair. Pulsar supports all algorithms supported by the Java JWT library shown [here](https://github.com/jwtk/jjwt#signature-algorithms-keys).
-
-#### Creating a key pair
-
-> The output files are generated in the root of your Pulsar installation directory. You can also provide an absolute path for the output files.
-
-```shell
-
-$ bin/pulsar tokens create-key-pair --output-private-key my-private.key --output-public-key my-public.key
-
-```
-
- * `my-private.key` is stored in a safe location and only used by the administrator to generate
-   new tokens.
- * `my-public.key` is distributed to all Pulsar brokers. This file can be publicly shared without
-   any security concern.
-
-## Generating tokens
-
-A token is the credential associated with a user. The association is done through the "principal",
-or "role".
In the case of JWT tokens, this field is typically referred to as the **subject**, though
-it is exactly the same concept.
-
-The generated token is then required to have a **subject** field set.
-
-```shell
-
-$ bin/pulsar tokens create --secret-key file:///path/to/my-secret.key \
-            --subject test-user
-
-```
-
-This command prints the token string on stdout.
-
-Similarly, you can create a token by passing the "private" key:
-
-```shell
-
-$ bin/pulsar tokens create --private-key file:///path/to/my-private.key \
-            --subject test-user
-
-```
-
-Finally, a token can also be created with a pre-defined TTL. After that time,
-the token is automatically invalidated.
-
-```shell
-
-$ bin/pulsar tokens create --secret-key file:///path/to/my-secret.key \
-            --subject test-user \
-            --expiry-time 1y
-
-```
-
-## Authorization
-
-The token itself does not have any permissions associated with it; permissions are determined by the
-authorization engine. Once the token is created, you can grant permissions for the role carried by the token
-to perform certain actions. For example:
-
-```shell
-
-$ bin/pulsar-admin namespaces grant-permission my-tenant/my-namespace \
-            --role test-user \
-            --actions produce,consume
-
-```
-
-## Enabling Token Authentication ...
-
-### ... on Brokers
-
-To configure brokers to authenticate clients, put the following in `broker.conf`:
-
-```properties
-
-# Configuration to enable authentication and authorization
-authenticationEnabled=true
-authorizationEnabled=true
-authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken
-
-# If using secret key (Note: key files must be DER-encoded)
-tokenSecretKey=file:///path/to/secret.key
-# The key can also be passed inline:
-# tokenSecretKey=data:;base64,FLFyW0oLJ2Fi22KKCm21J18mbAdztfSHN/lAT5ucEKU=
-
-# If using public/private (Note: key files must be DER-encoded)
-# tokenPublicKey=file:///path/to/public.key
-
-```
-
-### ... on Proxies
-
-To configure proxies to authenticate clients, put the following in `proxy.conf`.
-
-The proxy uses its own token when talking to brokers. The role associated with this
-token should be configured in the ``proxyRoles`` setting of the brokers. See the [authorization guide](security-authorization.md) for more details.
-
-```properties
-
-# For clients connecting to the proxy
-authenticationEnabled=true
-authorizationEnabled=true
-authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken
-tokenSecretKey=file:///path/to/secret.key
-
-# For the proxy to connect to brokers
-brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken
-brokerClientAuthenticationParameters={"token":"eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0LXVzZXIifQ.9OHgE9ZUDeBTZs7nSMEFIuGNEX18FLR3qvy8mqxSxXw"}
-# Or, alternatively, read token from file
-# brokerClientAuthenticationParameters=file:///path/to/proxy-token.txt
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/sql-deployment-configurations.md b/site2/website/versioned_docs/version-2.9.3-deprecated/sql-deployment-configurations.md
deleted file mode 100644
index 02d8bc78f6cb9d..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/sql-deployment-configurations.md
+++ /dev/null
@@ -1,199 +0,0 @@
----
-id: sql-deployment-configurations
-title: Pulsar SQL configuration and deployment
-sidebar_label: "Configuration and deployment"
-original_id: sql-deployment-configurations
----
-
-You can configure the Presto Pulsar connector and deploy a cluster with the following instructions.
- -## Configure Presto Pulsar Connector -You can configure Presto Pulsar Connector in the `${project.root}/conf/presto/catalog/pulsar.properties` properties file. The configuration for the connector and the default values are as follows. - -```properties - -# name of the connector to be displayed in the catalog -connector.name=pulsar - -# the url of Pulsar broker service -pulsar.web-service-url=http://localhost:8080 - -# URI of Zookeeper cluster -pulsar.zookeeper-uri=localhost:2181 - -# minimum number of entries to read at a single time -pulsar.entry-read-batch-size=100 - -# default number of splits to use per query -pulsar.target-num-splits=4 - -``` - -You can connect Presto to a Pulsar cluster with multiple hosts. To configure multiple hosts for brokers, add multiple URLs to `pulsar.web-service-url`. To configure multiple hosts for ZooKeeper, add multiple URIs to `pulsar.zookeeper-uri`. The following is an example. - -``` - -pulsar.web-service-url=http://localhost:8080,localhost:8081,localhost:8082 -pulsar.zookeeper-uri=localhost1,localhost2:2181 - -``` - -**Note: by default, Pulsar SQL does not get the last message in a topic**. It is by design and controlled by settings. By default, BookKeeper LAC only advances when subsequent entries are added. If there is no subsequent entry added, the last written entry is not visible to readers until the ledger is closed. This is not a problem for Pulsar which uses managed ledger, but Pulsar SQL directly reads from BookKeeper ledger. - -If you want to get the last message in a topic, set the following configurations: - -1. For the broker configuration, set `bookkeeperExplicitLacIntervalInMills` > 0 in `broker.conf` or `standalone.conf`. - -2. For the Presto configuration, set `pulsar.bookkeeper-explicit-interval` > 0 and `pulsar.bookkeeper-use-v2-protocol=false`. - -However, using BookKeeper V3 protocol introduces additional GC overhead to BK as it uses Protobuf. - -## Query data from existing Presto clusters - -If you already have a Presto cluster, you can copy the Presto Pulsar connector plugin to your existing cluster. Download the archived plugin package with the following command. - -```bash - -$ wget pulsar:binary_release_url - -``` - -## Deploy a new cluster - -Since Pulsar SQL is powered by [Trino (formerly Presto SQL)](https://trino.io), the configuration for deployment is the same for the Pulsar SQL worker. - -:::note - -For how to set up a standalone single node environment, refer to [Query data](sql-getting-started.md). - -::: - -You can use the same CLI args as the Presto launcher. 
-
-```bash
-
-$ ./bin/pulsar sql-worker --help
-Usage: launcher [options] command
-
-Commands: run, start, stop, restart, kill, status
-
-Options:
-  -h, --help            show this help message and exit
-  -v, --verbose         Run verbosely
-  --etc-dir=DIR         Defaults to INSTALL_PATH/etc
-  --launcher-config=FILE
-                        Defaults to INSTALL_PATH/bin/launcher.properties
-  --node-config=FILE    Defaults to ETC_DIR/node.properties
-  --jvm-config=FILE     Defaults to ETC_DIR/jvm.config
-  --config=FILE         Defaults to ETC_DIR/config.properties
-  --log-levels-file=FILE
-                        Defaults to ETC_DIR/log.properties
-  --data-dir=DIR        Defaults to INSTALL_PATH
-  --pid-file=FILE       Defaults to DATA_DIR/var/run/launcher.pid
-  --launcher-log-file=FILE
-                        Defaults to DATA_DIR/var/log/launcher.log (only in
-                        daemon mode)
-  --server-log-file=FILE
-                        Defaults to DATA_DIR/var/log/server.log (only in
-                        daemon mode)
-  -D NAME=VALUE         Set a Java system property
-
-```
-
-The default configuration for the cluster is located in `${project.root}/conf/presto`. You can customize your deployment by modifying the default configuration.
-
-You can set the worker to read from a different configuration directory, or set a different directory to write data.
-
-```bash
-
-$ ./bin/pulsar sql-worker run --etc-dir /tmp/incubator-pulsar/conf/presto --data-dir /tmp/presto-1
-
-```
-
-You can also start the worker as a daemon process.
-
-```bash
-
-$ ./bin/pulsar sql-worker start
-
-```
-
-### Deploy a cluster on multiple nodes
-
-You can deploy a Pulsar SQL cluster or Presto cluster on multiple nodes. The following example shows how to deploy Pulsar SQL on a three-node cluster.
-
-1. Copy the Pulsar binary distribution to all three nodes.
-
-The first node runs as the Presto coordinator. The minimal configuration required in the `${project.root}/conf/presto/config.properties` file is as follows.
-
-```properties
-
-coordinator=true
-node-scheduler.include-coordinator=true
-http-server.http.port=8080
-query.max-memory=50GB
-query.max-memory-per-node=1GB
-discovery-server.enabled=true
-discovery.uri=
-
-```
-
-The other two nodes serve as worker nodes; you can use the following configuration for them.
-
-```properties
-
-coordinator=false
-http-server.http.port=8080
-query.max-memory=50GB
-query.max-memory-per-node=1GB
-discovery.uri=
-
-```
-
-2. Modify the `pulsar.web-service-url` and `pulsar.zookeeper-uri` configuration in the `${project.root}/conf/presto/catalog/pulsar.properties` file accordingly for the three nodes.
-
-3. Start the coordinator node.
-
-```
-
-$ ./bin/pulsar sql-worker run
-
-```
-
-4. Start the worker nodes.
-
-```
-
-$ ./bin/pulsar sql-worker run
-
-```
-
-5. Start the SQL CLI and check the status of your cluster.
-
-```bash
-
-$ ./bin/pulsar sql --server 
-
-```
-
-6. Check the status of your nodes.
-
-```bash
-
-presto> SELECT * FROM system.runtime.nodes;
- node_id |        http_uri         | node_version | coordinator | state
----------+-------------------------+--------------+-------------+--------
- 1       | http://192.168.2.1:8081 | testversion  | true        | active
- 3       | http://192.168.2.2:8081 | testversion  | false       | active
- 2       | http://192.168.2.3:8081 | testversion  | false       | active
-
-```
-
-For more information about deployment in Presto, refer to [Presto deployment](https://trino.io/docs/current/installation/deployment.html).
-
-:::note
-
-The broker does not advance the LAC, so when Pulsar SQL bypasses the broker to query data, it can only read entries up to the LAC that all the bookies have learned.
You can enable periodic writes of the LAC on the broker by setting "bookkeeperExplicitLacIntervalInMills" in `broker.conf`.
-
-:::
-
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/sql-getting-started.md b/site2/website/versioned_docs/version-2.9.3-deprecated/sql-getting-started.md
deleted file mode 100644
index 8a5cd7199b365a..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/sql-getting-started.md
+++ /dev/null
@@ -1,187 +0,0 @@
----
-id: sql-getting-started
-title: Query data with Pulsar SQL
-sidebar_label: "Query data"
-original_id: sql-getting-started
----
-
-Before querying data in Pulsar, you need to install Pulsar and the built-in connectors.
-
-## Requirements
-1. Install [Pulsar](getting-started-standalone.md#install-pulsar-standalone).
-2. Install Pulsar [built-in connectors](getting-started-standalone.md#install-builtin-connectors-optional).
-
-## Query data in Pulsar
-To query data in Pulsar with Pulsar SQL, complete the following steps.
-
-1. Start a Pulsar standalone cluster.
-
-```bash
-
-./bin/pulsar standalone
-
-```
-
-2. Start a Pulsar SQL worker.
-
-```bash
-
-./bin/pulsar sql-worker run
-
-```
-
-3. After initializing the Pulsar standalone cluster and the SQL worker, run the SQL CLI.
-
-```bash
-
-./bin/pulsar sql
-
-```
-
-4. Test with SQL commands.
-
-```bash
-
-presto> show catalogs;
- Catalog
----------
- pulsar
- system
-(2 rows)
-
-Query 20180829_211752_00004_7qpwh, FINISHED, 1 node
-Splits: 19 total, 19 done (100.00%)
-0:00 [0 rows, 0B] [0 rows/s, 0B/s]
-
-
-presto> show schemas in pulsar;
-        Schema
------------------------
- information_schema
- public/default
- public/functions
- sample/standalone/ns1
-(4 rows)
-
-Query 20180829_211818_00005_7qpwh, FINISHED, 1 node
-Splits: 19 total, 19 done (100.00%)
-0:00 [4 rows, 89B] [21 rows/s, 471B/s]
-
-
-presto> show tables in pulsar."public/default";
- Table
--------
-(0 rows)
-
-Query 20180829_211839_00006_7qpwh, FINISHED, 1 node
-Splits: 19 total, 19 done (100.00%)
-0:00 [0 rows, 0B] [0 rows/s, 0B/s]
-
-```
-
-Since there is no data in Pulsar yet, no records are returned.
-
-5. Start the built-in connector _DataGeneratorSource_ and ingest some mock data.
-
-```bash
-
-./bin/pulsar-admin sources create --name generator --destinationTopicName generator_test --source-type data-generator
-
-```
-
-Then you can query a topic in the "public/default" namespace.
-
-```bash
-
-presto> show tables in pulsar."public/default";
-     Table
-----------------
- generator_test
-(1 row)
-
-Query 20180829_213202_00000_csyeu, FINISHED, 1 node
-Splits: 19 total, 19 done (100.00%)
-0:02 [1 rows, 38B] [0 rows/s, 17B/s]
-
-```
-
-You can now query the data within the topic "generator_test".
-
-```bash
-
-presto> select * from pulsar."public/default".generator_test;
-
- firstname | middlename | lastname |              email               | username | password | telephonenumber | age |             companyemail             | nationalidentitycardnumber |
------------+------------+----------+----------------------------------+----------+----------+-----------------+-----+--------------------------------------+----------------------------+
- Genesis   | Katherine  | Wiley    | genesis.wiley@gmail.com          | genesisw | y9D2dtU3 | 959-197-1860    |  71 | genesis.wiley@interdemconsulting.eu  | 880-58-9247                |
- Brayden   |            | Stanton  | brayden.stanton@yahoo.com        | braydens | ZnjmhXik | 220-027-867     |  81 | brayden.stanton@supermemo.eu         | 604-60-7069                |
- Benjamin  | Julian     | Velasquez| benjamin.velasquez@yahoo.com     | benjaminv| 8Bc7m3eb | 298-377-0062    |  21 | benjamin.velasquez@hostesltd.biz     | 213-32-5882                |
- Michael   | Thomas     | Donovan  | donovan@mail.com                 | michaeld | OqBm9MLs | 078-134-4685    |  55 | michael.donovan@memortech.eu         | 443-30-3442                |
- Brooklyn  | Avery      | Roach    | brooklynroach@yahoo.com          | broach   | IxtBLafO | 387-786-2998    |  68 | brooklyn.roach@warst.biz             | 085-88-3973                |
- Skylar    |            | Bradshaw | skylarbradshaw@yahoo.com         | skylarb  | p6eC6cKy | 210-872-608     |  96 | skylar.bradshaw@flyhigh.eu           | 453-46-0334                |
-.
-.
-.
-
-```
-
-As shown above, the mock data is queryable like any other table.
-
-## Query your own data
-If you want to query your own data, you need to ingest your own data first. You can write a simple producer to publish custom-defined data to Pulsar. The following is an example.
-
-```java
-
-import org.apache.pulsar.client.api.Producer;
-import org.apache.pulsar.client.api.PulsarClient;
-import org.apache.pulsar.client.impl.schema.AvroSchema;
-
-public class TestProducer {
-
-    public static class Foo {
-        private int field1 = 1;
-        private String field2;
-        private long field3;
-
-        public Foo() {
-        }
-
-        public int getField1() {
-            return field1;
-        }
-
-        public void setField1(int field1) {
-            this.field1 = field1;
-        }
-
-        public String getField2() {
-            return field2;
-        }
-
-        public void setField2(String field2) {
-            this.field2 = field2;
-        }
-
-        public long getField3() {
-            return field3;
-        }
-
-        public void setField3(long field3) {
-            this.field3 = field3;
-        }
-    }
-
-    public static void main(String[] args) throws Exception {
-        PulsarClient pulsarClient = PulsarClient.builder().serviceUrl("pulsar://localhost:6650").build();
-        Producer<Foo> producer = pulsarClient.newProducer(AvroSchema.of(Foo.class)).topic("test_topic").create();
-
-        for (int i = 0; i < 1000; i++) {
-            Foo foo = new Foo();
-            foo.setField1(i);
-            foo.setField2("foo" + i);
-            foo.setField3(System.currentTimeMillis());
-            producer.newMessage().value(foo).send();
-        }
-        producer.close();
-        pulsarClient.close();
-    }
-}
-
-```
-
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/sql-overview.md b/site2/website/versioned_docs/version-2.9.3-deprecated/sql-overview.md
deleted file mode 100644
index 0235d0256c5f19..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/sql-overview.md
+++ /dev/null
@@ -1,18 +0,0 @@
----
-id: sql-overview
-title: Pulsar SQL Overview
-sidebar_label: "Overview"
-original_id: sql-overview
----
-
-Apache Pulsar is used to store streams of event data, and the event data is structured with predefined fields. With the implementation of the [Schema Registry](schema-get-started.md), you can store structured data in Pulsar and query the data by using [Trino (formerly Presto SQL)](https://trino.io/).
-
-As the core of Pulsar SQL, the Presto Pulsar connector enables Presto workers within a Presto cluster to query data from Pulsar.
-
-![The Pulsar consumer and reader interfaces](/assets/pulsar-sql-arch-2.png)
-
-Query performance is efficient and highly scalable because Pulsar adopts a [two-level segment-based architecture](concepts-architecture-overview.md#apache-bookkeeper).
-
-Topics in Pulsar are stored as segments in [Apache BookKeeper](https://bookkeeper.apache.org/). Each topic segment is replicated to some BookKeeper nodes, which enables concurrent reads and high read throughput. You can configure the number of BookKeeper nodes; the default number is `3`. In the Presto Pulsar connector, data is read directly from BookKeeper, so Presto workers can read concurrently from a horizontally scalable number of BookKeeper nodes.
-
-![The Pulsar consumer and reader interfaces](/assets/pulsar-sql-arch-1.png)
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/sql-rest-api.md b/site2/website/versioned_docs/version-2.9.3-deprecated/sql-rest-api.md
deleted file mode 100644
index c92fd62f7d8703..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/sql-rest-api.md
+++ /dev/null
@@ -1,192 +0,0 @@
----
-id: sql-rest-api
-title: Pulsar SQL REST APIs
-sidebar_label: "REST APIs"
-original_id: sql-rest-api
----
-
-This section lists the resources that make up version 1 of the Presto REST API.
-
-## Request for Presto services
-
-All requests for Presto services should use version 1 (`v1`) of the Presto REST API.
-
-To request services, use the explicit URL `http://presto.service:8081/v1`. You need to update `presto.service:8081` with your real Presto address before sending requests.
-
-`POST` requests require the `X-Presto-User` header. If you use authentication, you must use the same `username` that is specified in the authentication configuration. If you do not use authentication, you can specify anything for `username`.
-
-```properties
-
-X-Presto-User: username
-
-```
-
-For more information about headers, refer to [PrestoHeaders](https://github.com/trinodb/trino).
-
-## Schema
-
-You submit the SQL statement in the HTTP body. All data is received as a JSON document that might contain a `nextUri` link. If the received JSON document contains a `nextUri` link, the request continues with the `nextUri` link until the received data no longer contains a `nextUri` link. If no error is returned, the query completes successfully. If an `error` field is displayed in `stats`, the query has failed.
-
-The following is an example of `show catalogs`. The query continues until the received JSON document no longer contains a `nextUri` link. Since no `error` is displayed in `stats`, the query completes successfully.
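-
-In a script, you can follow the `nextUri` chain until it disappears. The loop below is a minimal sketch, assuming `curl` and [`jq`](https://stedolan.github.io/jq/) are installed; the raw transcript of the same exchange follows.
-
-```bash
-
-# Submit the statement, then poll nextUri until the final response omits it
-NEXT=$(curl -s --header "X-Presto-User: test-user" \
-  --request POST --data 'show catalogs' \
-  http://localhost:8081/v1/statement | jq -r '.nextUri')
-while [ "$NEXT" != "null" ]; do
-  RESP=$(curl -s "$NEXT")
-  echo "$RESP" | jq -c '.data // empty'           # print rows as they arrive
-  NEXT=$(echo "$RESP" | jq -r '.nextUri // "null"')
-done
-
-```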
- -```powershell - -➜ ~ curl --header "X-Presto-User: test-user" --request POST --data 'show catalogs' http://localhost:8081/v1/statement -{ - "infoUri" : "http://localhost:8081/ui/query.html?20191113_033653_00006_dg6hb", - "stats" : { - "queued" : true, - "nodes" : 0, - "userTimeMillis" : 0, - "cpuTimeMillis" : 0, - "wallTimeMillis" : 0, - "processedBytes" : 0, - "processedRows" : 0, - "runningSplits" : 0, - "queuedTimeMillis" : 0, - "queuedSplits" : 0, - "completedSplits" : 0, - "totalSplits" : 0, - "scheduled" : false, - "peakMemoryBytes" : 0, - "state" : "QUEUED", - "elapsedTimeMillis" : 0 - }, - "id" : "20191113_033653_00006_dg6hb", - "nextUri" : "http://localhost:8081/v1/statement/20191113_033653_00006_dg6hb/1" -} - -➜ ~ curl http://localhost:8081/v1/statement/20191113_033653_00006_dg6hb/1 -{ - "infoUri" : "http://localhost:8081/ui/query.html?20191113_033653_00006_dg6hb", - "nextUri" : "http://localhost:8081/v1/statement/20191113_033653_00006_dg6hb/2", - "id" : "20191113_033653_00006_dg6hb", - "stats" : { - "state" : "PLANNING", - "totalSplits" : 0, - "queued" : false, - "userTimeMillis" : 0, - "completedSplits" : 0, - "scheduled" : false, - "wallTimeMillis" : 0, - "runningSplits" : 0, - "queuedSplits" : 0, - "cpuTimeMillis" : 0, - "processedRows" : 0, - "processedBytes" : 0, - "nodes" : 0, - "queuedTimeMillis" : 1, - "elapsedTimeMillis" : 2, - "peakMemoryBytes" : 0 - } -} - -➜ ~ curl http://localhost:8081/v1/statement/20191113_033653_00006_dg6hb/2 -{ - "id" : "20191113_033653_00006_dg6hb", - "data" : [ - [ - "pulsar" - ], - [ - "system" - ] - ], - "infoUri" : "http://localhost:8081/ui/query.html?20191113_033653_00006_dg6hb", - "columns" : [ - { - "typeSignature" : { - "rawType" : "varchar", - "arguments" : [ - { - "kind" : "LONG_LITERAL", - "value" : 6 - } - ], - "literalArguments" : [], - "typeArguments" : [] - }, - "name" : "Catalog", - "type" : "varchar(6)" - } - ], - "stats" : { - "wallTimeMillis" : 104, - "scheduled" : true, - "userTimeMillis" : 14, - "progressPercentage" : 100, - "totalSplits" : 19, - "nodes" : 1, - "cpuTimeMillis" : 16, - "queued" : false, - "queuedTimeMillis" : 1, - "state" : "FINISHED", - "peakMemoryBytes" : 0, - "elapsedTimeMillis" : 111, - "processedBytes" : 0, - "processedRows" : 0, - "queuedSplits" : 0, - "rootStage" : { - "cpuTimeMillis" : 1, - "runningSplits" : 0, - "state" : "FINISHED", - "completedSplits" : 1, - "subStages" : [ - { - "cpuTimeMillis" : 14, - "runningSplits" : 0, - "state" : "FINISHED", - "completedSplits" : 17, - "subStages" : [ - { - "wallTimeMillis" : 7, - "subStages" : [], - "stageId" : "2", - "done" : true, - "nodes" : 1, - "totalSplits" : 1, - "processedBytes" : 22, - "processedRows" : 2, - "queuedSplits" : 0, - "userTimeMillis" : 1, - "cpuTimeMillis" : 1, - "runningSplits" : 0, - "state" : "FINISHED", - "completedSplits" : 1 - } - ], - "wallTimeMillis" : 92, - "nodes" : 1, - "done" : true, - "stageId" : "1", - "userTimeMillis" : 12, - "processedRows" : 2, - "processedBytes" : 51, - "queuedSplits" : 0, - "totalSplits" : 17 - } - ], - "wallTimeMillis" : 5, - "done" : true, - "nodes" : 1, - "stageId" : "0", - "userTimeMillis" : 1, - "processedRows" : 2, - "processedBytes" : 22, - "totalSplits" : 1, - "queuedSplits" : 0 - }, - "runningSplits" : 0, - "completedSplits" : 19 - } -} - -``` - -:::note - -Since the response data is not in sync with the query state from the perspective of clients, you cannot rely on the response data to determine whether the query completes. 
- -::: - -For more information about Presto REST API, refer to [Presto HTTP Protocol](https://github.com/prestosql/presto/wiki/HTTP-Protocol). diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/standalone.md b/site2/website/versioned_docs/version-2.9.3-deprecated/standalone.md deleted file mode 100644 index a487be92adfcf4..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/standalone.md +++ /dev/null @@ -1,268 +0,0 @@ ---- -id: standalone -title: Set up a standalone Pulsar locally -sidebar_label: "Run Pulsar locally" -original_id: standalone ---- - -For local development and testing, you can run Pulsar in standalone mode on your machine. The standalone mode includes a Pulsar broker, the necessary ZooKeeper and BookKeeper components running inside of a single Java Virtual Machine (JVM) process. - -> **Pulsar in production?** -> If you're looking to run a full production Pulsar installation, see the [Deploying a Pulsar instance](deploy-bare-metal.md) guide. - -## Install Pulsar standalone - -This tutorial guides you through every step of installing Pulsar locally. - -### System requirements - -Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install 64-bit JRE/JDK 8 or later versions. - -:::tip - -By default, Pulsar allocates 2G JVM heap memory to start. It can be changed in `conf/pulsar_env.sh` file under `PULSAR_MEM`. This is extra options passed into JVM. - -::: - -:::note - -Broker is only supported on 64-bit JVM. - -::: - -### Install Pulsar using binary release - -To get started with Pulsar, download a binary tarball release in one of the following ways: - -* download from the Apache mirror (Pulsar @pulsar:version@ binary release) - -* download from the Pulsar [downloads page](pulsar:download_page_url) - -* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) - -* use [wget](https://www.gnu.org/software/wget): - - ```shell - - $ wget pulsar:binary_release_url - - ``` - -After you download the tarball, untar it and use the `cd` command to navigate to the resulting directory: - -```bash - -$ tar xvfz apache-pulsar-@pulsar:version@-bin.tar.gz -$ cd apache-pulsar-@pulsar:version@ - -``` - -#### What your package contains - -The Pulsar binary package initially contains the following directories: - -Directory | Contains -:---------|:-------- -`bin` | Pulsar's command-line tools, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/). -`conf` | Configuration files for Pulsar, including [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more. -`examples` | A Java JAR file containing [Pulsar Functions](functions-overview.md) example. -`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files used by Pulsar. -`licenses` | License files, in the`.txt` form, for various components of the Pulsar [codebase](https://github.com/apache/pulsar). - -These directories are created once you begin running Pulsar. - -Directory | Contains -:---------|:-------- -`data` | The data storage directory used by ZooKeeper and BookKeeper. -`instances` | Artifacts created for [Pulsar Functions](functions-overview.md). -`logs` | Logs created by the installation. 
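-
-For instance, immediately after untarring, a listing of the package directory might look like the following (the `data`, `instances`, and `logs` directories appear only after the first run):
-
-```bash
-
-$ ls apache-pulsar-@pulsar:version@
-bin  conf  examples  lib  licenses
-
-```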
-
-:::tip
-
-If you want to use builtin connectors and tiered storage offloaders, you can install them according to the following instructions:
-* [Install builtin connectors (optional)](#install-builtin-connectors-optional)
-* [Install tiered storage offloaders (optional)](#install-tiered-storage-offloaders-optional)
-Otherwise, skip this step and perform the next step [Start Pulsar standalone](#start-pulsar-standalone). Pulsar can be successfully installed without installing builtin connectors and tiered storage offloaders.
-
-:::
-
-### Install builtin connectors (optional)
-
-Since the `2.1.0-incubating` release, Pulsar has shipped a separate binary distribution containing all the `builtin` connectors.
-To enable those `builtin` connectors, you can download the connectors tarball release in one of the following ways:
-
-* download from the Apache mirror Pulsar IO Connectors @pulsar:version@ release
-
-* download from the Pulsar [downloads page](pulsar:download_page_url)
-
-* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
-
-* use [wget](https://www.gnu.org/software/wget):
-
-  ```shell
-
-  $ wget pulsar:connector_release_url/{connector}-@pulsar:version@.nar
-
-  ```
-
-After you download the nar file, copy the file to the `connectors` directory in the pulsar directory.
-For example, if you download the `pulsar-io-aerospike-@pulsar:version@.nar` connector file, enter the following commands:
-
-```bash
-
-$ mkdir connectors
-$ mv pulsar-io-aerospike-@pulsar:version@.nar connectors
-
-$ ls connectors
-pulsar-io-aerospike-@pulsar:version@.nar
-...
-
-```
-
-:::note
-
-* If you are running Pulsar in a bare metal cluster, make sure the `connectors` tarball is unzipped in every pulsar directory of the broker (or in every pulsar directory of function-worker if you are running a separate worker cluster for Pulsar Functions).
-* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DC/OS](https://dcos.io/)), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. The `apachepulsar/pulsar-all` image has already bundled [all builtin connectors](io-overview.md#working-with-connectors).
-
-:::
-
-### Install tiered storage offloaders (optional)
-
-:::tip
-
-- Since the `2.2.0` release, Pulsar has shipped a separate binary distribution containing the tiered storage offloaders.
-- To enable the tiered storage feature, follow the instructions below; otherwise skip this section.
-
-:::
-
-To get started with [tiered storage offloaders](concepts-tiered-storage.md), you need to download the offloaders tarball release on every broker node in one of the following ways:
-
-* download from the Apache mirror Pulsar Tiered Storage Offloaders @pulsar:version@ release
-
-* download from the Pulsar [downloads page](pulsar:download_page_url)
-
-* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
-
-* use [wget](https://www.gnu.org/software/wget):
-
-  ```shell
-
-  $ wget pulsar:offloader_release_url
-
-  ```
-
-After you download the tarball, untar the offloaders package and copy the offloaders as `offloaders`
-in the pulsar directory:
-
-```bash
-
-$ tar xvfz apache-pulsar-offloaders-@pulsar:version@-bin.tar.gz
-
-# you will find a directory named `apache-pulsar-offloaders-@pulsar:version@` in the pulsar directory
-# then copy the offloaders
-
-$ mv apache-pulsar-offloaders-@pulsar:version@/offloaders offloaders
-
-$ ls offloaders
-tiered-storage-jcloud-@pulsar:version@.nar
-
-```
-
-For more information on how to configure tiered storage, see [Tiered storage cookbook](cookbooks-tiered-storage.md).
-
-:::note
-
-* If you are running Pulsar in a bare metal cluster, make sure that the `offloaders` tarball is unzipped in every broker's pulsar directory.
-* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DC/OS](https://dcos.io/)), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. The `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders.
-
-:::
-
-## Start Pulsar standalone
-
-Once you have an up-to-date local copy of the release, you can start a local cluster using the [`pulsar`](reference-cli-tools.md#pulsar) command, which is stored in the `bin` directory, specifying that you want to start Pulsar in standalone mode.
-
-```bash
-
-$ bin/pulsar standalone
-
-```
-
-If you have started Pulsar successfully, you will see `INFO`-level log messages like this:
-
-```bash
-
-21:59:29.327 [DLM-/stream/storage-OrderedScheduler-3-0] INFO  org.apache.bookkeeper.stream.storage.impl.sc.StorageContainerImpl - Successfully started storage container (0).
-21:59:34.576 [main] INFO  org.apache.pulsar.broker.authentication.AuthenticationService - Authentication is disabled
-21:59:34.576 [main] INFO  org.apache.pulsar.websocket.WebSocketService - Pulsar WebSocket Service started
-
-```
-
-:::tip
-
-* The service is running on your terminal, which is under your direct control. If you need to run other commands, open a new terminal window.
-* You can also run the service as a background process using the `pulsar-daemon start standalone` command. For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon).
-* By default, there is no encryption, authentication, or authorization configured. Apache Pulsar can be accessed from a remote server without any authorization. Check the [Security Overview](security-overview.md) document to secure your deployment.
-* When you start a local standalone cluster, a `public/default` [namespace](concepts-messaging.md#namespaces) is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics).
-
-:::
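-
-Once the standalone cluster is up, you can sanity-check it with the admin CLI. For example, listing the namespaces of the `public` tenant should include the automatically created `public/default`:
-
-```bash
-
-# The automatically created namespace should be listed, e.g. "public/default"
-$ bin/pulsar-admin namespaces list public
-
-```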
-
-## Use Pulsar standalone
-
-Pulsar provides a CLI tool called [`pulsar-client`](reference-cli-tools.md#pulsar-client). The pulsar-client tool enables you to consume and produce messages to a Pulsar topic in a running cluster.
-
-### Consume a message
-
-The following command consumes a message with the subscription name `first-subscription` from the `my-topic` topic:
-
-```bash
-
-$ bin/pulsar-client consume my-topic -s "first-subscription"
-
-```
-
-If the message has been successfully consumed, you will see a confirmation like the following in the `pulsar-client` logs:
-
-```
-
-22:17:16.781 [main] INFO  org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully consumed
-
-```
-
-:::tip
-
-As you may have noticed, we did not explicitly create the `my-topic` topic from which we consumed the message. When you consume a message from a topic that does not yet exist, Pulsar creates that topic for you automatically. Producing a message to a topic that does not exist will automatically create that topic for you as well.
-
-:::
-
-### Produce a message
-
-The following command produces a message saying `hello-pulsar` to the `my-topic` topic:
-
-```bash
-
-$ bin/pulsar-client produce my-topic --messages "hello-pulsar"
-
-```
-
-If the message has been successfully published to the topic, you will see a confirmation like the following in the `pulsar-client` logs:
-
-```
-
-22:21:08.693 [main] INFO  org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully produced
-
-```
-
-## Stop Pulsar standalone
-
-Press `Ctrl+C` to stop a local standalone Pulsar.
-
-:::tip
-
-If the service runs as a background process using the `pulsar-daemon start standalone` command, then use the `pulsar-daemon stop standalone` command to stop the service.
-For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon).
-
-:::
-
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/tiered-storage-aliyun.md b/site2/website/versioned_docs/version-2.9.3-deprecated/tiered-storage-aliyun.md
deleted file mode 100644
index 2486b92df485b3..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/tiered-storage-aliyun.md
+++ /dev/null
@@ -1,259 +0,0 @@
----
-id: tiered-storage-aliyun
-title: Use Aliyun OSS offloader with Pulsar
-sidebar_label: "Aliyun OSS offloader"
-original_id: tiered-storage-aliyun
----
-
-This chapter guides you through every step of installing and configuring the Aliyun Object Storage Service (OSS) offloader and using it with Pulsar.
-
-## Installation
-
-Follow the steps below to install the Aliyun OSS offloader.
-
-### Prerequisite
-
-- Pulsar: 2.8.0 or later versions
-
-### Step
-
-This example uses Pulsar 2.8.0.
-
-1. Download the Pulsar tarball. See [here](https://pulsar.apache.org/docs/en/standalone/#install-pulsar-using-binary-release).
-
-2. Download and untar the Pulsar offloaders package, then copy the Pulsar offloaders as `offloaders` in the Pulsar directory. See [here](https://pulsar.apache.org/docs/en/standalone/#install-tiered-storage-offloaders-optional).
-
-   **Output**
-
-   As shown from the output, Pulsar uses [Apache jclouds](https://jclouds.apache.org) to support [AWS S3](https://aws.amazon.com/s3/), [GCS](https://cloud.google.com/storage/), [Azure](https://portal.azure.com/#home), and [Aliyun OSS](https://www.aliyun.com/product/oss) for long-term storage.
-
-   ```
-
-   tiered-storage-file-system-2.8.0.nar
-   tiered-storage-jcloud-2.8.0.nar
-
-   ```
-
-   :::note
-
-   * If you are running Pulsar in a bare-metal cluster, make sure that the `offloaders` tarball is unzipped in every broker's Pulsar directory.
-   * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8s and DCOS), you can use the `apachepulsar/pulsar-all` image. The `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders.
-
-   :::
-
-## Configuration
-
-:::note
-
-Before offloading data from BookKeeper to Aliyun OSS, you need to configure some properties of the Aliyun OSS offload driver.
-
-:::
-
-You can also configure the Aliyun OSS offloader to run automatically or trigger it manually.
-
-### Configure Aliyun OSS offloader driver
-
-You can configure the Aliyun OSS offloader driver in the configuration file `broker.conf` or `standalone.conf`.
-
-- **Required** configurations are as below.
-
-  | Required configuration | Description | Example value |
-  | --- | --- | --- |
-  | `managedLedgerOffloadDriver` | Offloader driver name, which is case-insensitive. | aliyun-oss |
-  | `offloadersDirectory` | Offloader directory | offloaders |
-  | `managedLedgerOffloadBucket` | Bucket | pulsar-topic-offload |
-  | `managedLedgerOffloadServiceEndpoint` | Endpoint | http://oss-cn-hongkong.aliyuncs.com |
-
-- **Optional** configurations are as below.
-
-  | Optional | Description | Example value |
-  | --- | --- | --- |
-  | `managedLedgerOffloadReadBufferSizeInBytes` | Size of block read | 1 MB |
-  | `managedLedgerOffloadMaxBlockSizeInBytes` | Size of block write | 64 MB |
-  | `managedLedgerMinLedgerRolloverTimeMinutes` | Minimum time between ledger rollover for a topic. <br/><br/>**Note**: it is not recommended that you set this configuration in the production environment. | 2 |
-  | `managedLedgerMaxEntriesPerLedger` | Maximum number of entries to append to a ledger before triggering a rollover. <br/><br/>**Note**: it is not recommended that you set this configuration in the production environment. | 5000 |
-
-#### Bucket (required)
-
-A bucket is a basic container that holds your data. Everything you store in Aliyun OSS must be contained in a bucket. You can use a bucket to organize your data and control access to your data, but unlike directories and folders, buckets cannot be nested.
-
-##### Example
-
-This example names the bucket as _pulsar-topic-offload_.
-
-```conf
-
-managedLedgerOffloadBucket=pulsar-topic-offload
-
-```
-
-#### Endpoint (required)
-
-The endpoint is the region where a bucket is located.
-
-:::tip
-
-For more information about Aliyun OSS regions and endpoints, see [International website](https://www.alibabacloud.com/help/doc-detail/31837.htm) or [Chinese website](https://help.aliyun.com/document_detail/31837.html).
-
-:::
-
-##### Example
-
-This example sets the endpoint as _oss-us-west-1-internal_.
-
-```
-
-managedLedgerOffloadServiceEndpoint=http://oss-us-west-1-internal.aliyuncs.com
-
-```
-
-#### Authentication (required)
-
-To be able to access Aliyun OSS, you need to authenticate with Aliyun OSS.
-
-Set the environment variables `ALIYUN_OSS_ACCESS_KEY_ID` and `ALIYUN_OSS_ACCESS_KEY_SECRET` in `conf/pulsar_env.sh`.
-
-"export" is important so that the variables are made available in the environment of spawned processes.
-
-```bash
-
-export ALIYUN_OSS_ACCESS_KEY_ID=ABC123456789
-export ALIYUN_OSS_ACCESS_KEY_SECRET=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c
-
-```
-
-#### Size of block read/write
-
-You can configure the size of a request sent to or read from Aliyun OSS in the configuration file `broker.conf` or `standalone.conf`.
-
-| Configuration | Description | Default value |
-| --- | --- | --- |
-| `managedLedgerOffloadReadBufferSizeInBytes` | Block size for each individual read when reading back data from Aliyun OSS. | 1 MB |
-| `managedLedgerOffloadMaxBlockSizeInBytes` | Maximum size of a "part" sent during a multipart upload to Aliyun OSS. It **cannot** be smaller than 5 MB. | 64 MB |
-
-### Run Aliyun OSS offloader automatically
-
-Namespace policy can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic reaches the threshold, an offloading operation is triggered automatically.
-
-| Threshold value | Action |
-| --- | --- |
-| > 0 | It triggers the offloading operation if the topic storage reaches its threshold. |
-| = 0 | It causes a broker to offload data as soon as possible. |
-| < 0 | It disables automatic offloading operation. |
-
-Automatic offloading runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, the offloader does not work until the current segment is full.
-
-You can configure the threshold size using CLI tools, such as pulsar-admin.
-
-The offload configurations in `broker.conf` and `standalone.conf` are used for the namespaces that do not have namespace-level offload policies. Each namespace can have its own offload policy. If you want to set the offload policy for each namespace, use the [`pulsar-admin namespaces set-offload-policies options`](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-policies-em-) command.
-
-#### Example
-
-This example sets the Aliyun OSS offloader threshold size to 10 MB using pulsar-admin.
- -```bash - -bin/pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace - -``` - -:::tip - -For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-threshold-em-). - -::: - -### Run Aliyun OSS offloader manually - -For individual topics, you can trigger the Aliyun OSS offloader manually using one of the following methods: - -- Use REST endpoint. - -- Use CLI tools (such as pulsar-admin). - - To trigger it via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are moved to Aliyun OSS until the threshold is no longer exceeded. Older segments are moved first. - -#### Example - -- This example triggers the Aliyun OSS offloader to run manually using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload --size-threshold 10M my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1 - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-em-). - - ::: - -- This example checks the Aliyun OSS offloader status using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload is currently running - - ``` - - To wait for the Aliyun OSS offloader to complete the job, add the `-w` flag. - - ```bash - - bin/pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Offload was a success - - ``` - - If there is an error in offloading, the error is propagated to the `pulsar-admin topics offload-status` command. - - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Error in offload - null - - Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g= - - ``` - -` - - :::tip - - For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-status-em-). 
- - ::: - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/tiered-storage-aws.md b/site2/website/versioned_docs/version-2.9.3-deprecated/tiered-storage-aws.md deleted file mode 100644 index 1f6e4b05a25045..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/tiered-storage-aws.md +++ /dev/null @@ -1,331 +0,0 @@ ---- -id: tiered-storage-aws -title: Use AWS S3 offloader with Pulsar -sidebar_label: "AWS S3 offloader" -original_id: tiered-storage-aws ---- - -This chapter guides you through every step of installing and configuring the AWS S3 offloader and using it with Pulsar. - -## Installation - -Follow the steps below to install the AWS S3 offloader. - -### Prerequisite - -- Pulsar: 2.4.2 or later versions - -### Step - -This example uses Pulsar 2.5.1. - -1. Download the Pulsar tarball using one of the following ways: - - * Download from the [Apache mirror](https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz) - - * Download from the Pulsar [downloads page](https://pulsar.apache.org/download) - - * Use [wget](https://www.gnu.org/software/wget): - - ```shell - - wget https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz - - ``` - -2. Download and untar the Pulsar offloaders package. - - ```bash - - wget https://downloads.apache.org/pulsar/pulsar-2.5.1/apache-pulsar-offloaders-2.5.1-bin.tar.gz - tar xvfz apache-pulsar-offloaders-2.5.1-bin.tar.gz - - ``` - -3. Copy the Pulsar offloaders as `offloaders` in the Pulsar directory. - - ``` - - mv apache-pulsar-offloaders-2.5.1/offloaders apache-pulsar-2.5.1/offloaders - - ls offloaders - - ``` - - **Output** - - As shown from the output, Pulsar uses [Apache jclouds](https://jclouds.apache.org) to support [AWS S3](https://aws.amazon.com/s3/) and [GCS](https://cloud.google.com/storage/) for long term storage. - - ``` - - tiered-storage-file-system-2.5.1.nar - tiered-storage-jcloud-2.5.1.nar - - ``` - - :::note - - * If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's Pulsar directory. - * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8s and DCOS), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - - ::: - -## Configuration - -:::note - -Before offloading data from BookKeeper to AWS S3, you need to configure some properties of the AWS S3 offload driver. - -::: - -Besides, you can also configure the AWS S3 offloader to run it automatically or trigger it manually. - -### Configure AWS S3 offloader driver - -You can configure the AWS S3 offloader driver in the configuration file `broker.conf` or `standalone.conf`. - -- **Required** configurations are as below. - - Required configuration | Description | Example value - |---|---|--- - `managedLedgerOffloadDriver` | Offloader driver name, which is case-insensitive.

    **Note**: there is a third driver type, S3, which is identical to AWS S3, though S3 requires that you specify an endpoint URL using `s3ManagedLedgerOffloadServiceEndpoint`. This is useful if using an S3 compatible data store other than AWS S3. | aws-s3 - `offloadersDirectory` | Offloader directory | offloaders - `s3ManagedLedgerOffloadBucket` | Bucket | pulsar-topic-offload - -- **Optional** configurations are as below. - - Optional | Description | Example value - |---|---|--- - `s3ManagedLedgerOffloadRegion` | Bucket region

    **Note**: before specifying a value for this parameter, you need to set the following configurations. Otherwise, you might get an error.

    - Set [`s3ManagedLedgerOffloadServiceEndpoint`](https://docs.aws.amazon.com/general/latest/gr/s3.html).

    Example
    `s3ManagedLedgerOffloadServiceEndpoint=https://s3.YOUR_REGION.amazonaws.com`

    - Grant `GetBucketLocation` permission to a user.

    For how to grant `GetBucketLocation` permission to a user, see [here](https://docs.aws.amazon.com/AmazonS3/latest/dev/using-with-s3-actions.html#using-with-s3-actions-related-to-buckets).| eu-west-3 - `s3ManagedLedgerOffloadReadBufferSizeInBytes`|Size of block read|1 MB - `s3ManagedLedgerOffloadMaxBlockSizeInBytes`|Size of block write|64 MB - `managedLedgerMinLedgerRolloverTimeMinutes`|Minimum time between ledger rollover for a topic

    **Note**: it is not recommended that you set this configuration in the production environment.|2 - `managedLedgerMaxEntriesPerLedger`|Maximum number of entries to append to a ledger before triggering a rollover.

    **Note**: it is not recommended that you set this configuration in the production environment.|5000 - -#### Bucket (required) - -A bucket is a basic container that holds your data. Everything you store in AWS S3 must be contained in a bucket. You can use a bucket to organize your data and control access to your data, but unlike directory and folder, you cannot nest a bucket. - -##### Example - -This example names the bucket as _pulsar-topic-offload_. - -```conf - -s3ManagedLedgerOffloadBucket=pulsar-topic-offload - -``` - -#### Bucket region - -A bucket region is a region where a bucket is located. If a bucket region is not specified, the **default** region (`US East (N. Virginia)`) is used. - -:::tip - -For more information about AWS regions and endpoints, see [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). - -::: - - -##### Example - -This example sets the bucket region as _europe-west-3_. - -``` - -s3ManagedLedgerOffloadRegion=eu-west-3 - -``` - -#### Authentication (required) - -To be able to access AWS S3, you need to authenticate with AWS S3. - -Pulsar does not provide any direct methods of configuring authentication for AWS S3, -but relies on the mechanisms supported by the [DefaultAWSCredentialsProviderChain](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html). - -Once you have created a set of credentials in the AWS IAM console, you can configure credentials using one of the following methods. - -* Use EC2 instance metadata credentials. - - If you are on AWS instance with an instance profile that provides credentials, Pulsar uses these credentials if no other mechanism is provided. - -* Set the environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` in `conf/pulsar_env.sh`. - - "export" is important so that the variables are made available in the environment of spawned processes. - - ```bash - - export AWS_ACCESS_KEY_ID=ABC123456789 - export AWS_SECRET_ACCESS_KEY=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c - - ``` - -* Add the Java system properties `aws.accessKeyId` and `aws.secretKey` to `PULSAR_EXTRA_OPTS` in `conf/pulsar_env.sh`. - - ```bash - - PULSAR_EXTRA_OPTS="${PULSAR_EXTRA_OPTS} ${PULSAR_MEM} ${PULSAR_GC} -Daws.accessKeyId=ABC123456789 -Daws.secretKey=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c -Dio.netty.leakDetectionLevel=disabled -Dio.netty.recycler.maxCapacity.default=1000 -Dio.netty.recycler.linkCapacity=1024" - - ``` - -* Set the access credentials in `~/.aws/credentials`. - - ```conf - - [default] - aws_access_key_id=ABC123456789 - aws_secret_access_key=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c - - ``` - -* Assume an IAM role. - - This example uses the `DefaultAWSCredentialsProviderChain` for assuming this role. - - The broker must be rebooted for credentials specified in `pulsar_env` to take effect. - - ```conf - - s3ManagedLedgerOffloadRole= - s3ManagedLedgerOffloadRoleSessionName=pulsar-s3-offload - - ``` - -#### Size of block read/write - -You can configure the size of a request sent to or read from AWS S3 in the configuration file `broker.conf` or `standalone.conf`. - -Configuration|Description|Default value -|---|---|--- -`s3ManagedLedgerOffloadReadBufferSizeInBytes`|Block size for each individual read when reading back data from AWS S3.|1 MB -`s3ManagedLedgerOffloadMaxBlockSizeInBytes`|Maximum size of a "part" sent during a multipart upload to AWS S3. It **cannot** be smaller than 5 MB. 
|64 MB - -### Configure AWS S3 offloader to run automatically - -Namespace policy can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic reaches the threshold, an offloading operation is triggered automatically. - -Threshold value|Action -|---|--- -> 0 | It triggers the offloading operation if the topic storage reaches its threshold. -= 0|It causes a broker to offload data as soon as possible. -< 0 |It disables automatic offloading operation. - -Automatic offloading runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, offloader does not work until the current segment is full. - -You can configure the threshold size using CLI tools, such as pulsar-admin. - -The offload configurations in `broker.conf` and `standalone.conf` are used for the namespaces that do not have namespace level offload policies. Each namespace can have its own offload policy. If you want to set offload policy for each namespace, use the command [`pulsar-admin namespaces set-offload-policies options`](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-policies-em-) command. - -#### Example - -This example sets the AWS S3 offloader threshold size to 10 MB using pulsar-admin. - -```bash - -bin/pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace - -``` - -:::tip - -For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-threshold-em-). - -::: - -### Configure AWS S3 offloader to run manually - -For individual topics, you can trigger AWS S3 offloader manually using one of the following methods: - -- Use REST endpoint. - -- Use CLI tools (such as pulsar-admin). - - To trigger it via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are moved to AWS S3 until the threshold is no longer exceeded. Older segments are moved first. - -#### Example - -- This example triggers the AWS S3 offloader to run manually using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload --size-threshold 10M my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1 - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-em-). - - ::: - -- This example checks the AWS S3 offloader status using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload is currently running - - ``` - - To wait for the AWS S3 offloader to complete the job, add the `-w` flag. - - ```bash - - bin/pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Offload was a success - - ``` - - If there is an error in offloading, the error is propagated to the `pulsar-admin topics offload-status` command. 
- - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Error in offload - null - - Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g= - - ``` - -` - - :::tip - - For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-status-em-). - - ::: - -## Tutorial - -For the complete and step-by-step instructions on how to use the AWS S3 offloader with Pulsar, see [here](https://hub.streamnative.io/offloaders/aws-s3/2.5.1#usage). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/tiered-storage-azure.md b/site2/website/versioned_docs/version-2.9.3-deprecated/tiered-storage-azure.md deleted file mode 100644 index 5923a33147135c..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/tiered-storage-azure.md +++ /dev/null @@ -1,266 +0,0 @@ ---- -id: tiered-storage-azure -title: Use Azure BlobStore offloader with Pulsar -sidebar_label: "Azure BlobStore offloader" -original_id: tiered-storage-azure ---- - -This chapter guides you through every step of installing and configuring the Azure BlobStore offloader and using it with Pulsar. - -## Installation - -Follow the steps below to install the Azure BlobStore offloader. - -### Prerequisite - -- Pulsar: 2.6.2 or later versions - -### Step - -This example uses Pulsar 2.6.2. - -1. Download the Pulsar tarball using one of the following ways: - - * Download from the [Apache mirror](https://archive.apache.org/dist/pulsar/pulsar-2.6.2/apache-pulsar-2.6.2-bin.tar.gz) - - * Download from the Pulsar [downloads page](https://pulsar.apache.org/download) - - * Use [wget](https://www.gnu.org/software/wget): - - ```shell - - wget https://archive.apache.org/dist/pulsar/pulsar-2.6.2/apache-pulsar-2.6.2-bin.tar.gz - - ``` - -2. Download and untar the Pulsar offloaders package. - - ```bash - - wget https://downloads.apache.org/pulsar/pulsar-2.6.2/apache-pulsar-offloaders-2.6.2-bin.tar.gz - tar xvfz apache-pulsar-offloaders-2.6.2-bin.tar.gz - - ``` - -3. Copy the Pulsar offloaders as `offloaders` in the Pulsar directory. - - ``` - - mv apache-pulsar-offloaders-2.6.2/offloaders apache-pulsar-2.6.2/offloaders - - ls offloaders - - ``` - - **Output** - - As shown from the output, Pulsar uses [Apache jclouds](https://jclouds.apache.org) to support [AWS S3](https://aws.amazon.com/s3/), [GCS](https://cloud.google.com/storage/) and [Azure](https://portal.azure.com/#home) for long term storage. - - ``` - - tiered-storage-file-system-2.6.2.nar - tiered-storage-jcloud-2.6.2.nar - - ``` - - :::note - - * If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's Pulsar directory. 
- * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8s and DCOS), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - - ::: - -## Configuration - -:::note - -Before offloading data from BookKeeper to Azure BlobStore, you need to configure some properties of the Azure BlobStore offload driver. - -::: - -Besides, you can also configure the Azure BlobStore offloader to run it automatically or trigger it manually. - -### Configure Azure BlobStore offloader driver - -You can configure the Azure BlobStore offloader driver in the configuration file `broker.conf` or `standalone.conf`. - -- **Required** configurations are as below. - - Required configuration | Description | Example value - |---|---|--- - `managedLedgerOffloadDriver` | Offloader driver name | azureblob - `offloadersDirectory` | Offloader directory | offloaders - `managedLedgerOffloadBucket` | Bucket | pulsar-topic-offload - -- **Optional** configurations are as below. - - Optional | Description | Example value - |---|---|--- - `managedLedgerOffloadReadBufferSizeInBytes`|Size of block read|1 MB - `managedLedgerOffloadMaxBlockSizeInBytes`|Size of block write|64 MB - `managedLedgerMinLedgerRolloverTimeMinutes`|Minimum time between ledger rollover for a topic

**Note**: it is not recommended that you set this configuration in the production environment.|2
`managedLedgerMaxEntriesPerLedger`|Maximum number of entries to append to a ledger before triggering a rollover.<br /><br />
    **Note**: it is not recommended that you set this configuration in the production environment.|5000 - -#### Bucket (required) - -A bucket is a basic container that holds your data. Everything you store in Azure BlobStore must be contained in a bucket. You can use a bucket to organize your data and control access to your data, but unlike directory and folder, you cannot nest a bucket. - -##### Example - -This example names the bucket as _pulsar-topic-offload_. - -```conf - -managedLedgerOffloadBucket=pulsar-topic-offload - -``` - -#### Authentication (required) - -To be able to access Azure BlobStore, you need to authenticate with Azure BlobStore. - -* Set the environment variables `AZURE_STORAGE_ACCOUNT` and `AZURE_STORAGE_ACCESS_KEY` in `conf/pulsar_env.sh`. - - "export" is important so that the variables are made available in the environment of spawned processes. - - ```bash - - export AZURE_STORAGE_ACCOUNT=ABC123456789 - export AZURE_STORAGE_ACCESS_KEY=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c - - ``` - -#### Size of block read/write - -You can configure the size of a request sent to or read from Azure BlobStore in the configuration file `broker.conf` or `standalone.conf`. - -Configuration|Description|Default value -|---|---|--- -`managedLedgerOffloadReadBufferSizeInBytes`|Block size for each individual read when reading back data from Azure BlobStore store.|1 MB -`managedLedgerOffloadMaxBlockSizeInBytes`|Maximum size of a "part" sent during a multipart upload to Azure BlobStore store. It **cannot** be smaller than 5 MB. |64 MB - -### Configure Azure BlobStore offloader to run automatically - -Namespace policy can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic reaches the threshold, an offloading operation is triggered automatically. - -Threshold value|Action -|---|--- -> 0 | It triggers the offloading operation if the topic storage reaches its threshold. -= 0|It causes a broker to offload data as soon as possible. -< 0 |It disables automatic offloading operation. - -Automatic offloading runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, offloader does not work until the current segment is full. - -You can configure the threshold size using CLI tools, such as pulsar-admin. - -The offload configurations in `broker.conf` and `standalone.conf` are used for the namespaces that do not have namespace level offload policies. Each namespace can have its own offload policy. If you want to set offload policy for each namespace, use the command [`pulsar-admin namespaces set-offload-policies options`](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-policies-em-) command. - -#### Example - -This example sets the Azure BlobStore offloader threshold size to 10 MB using pulsar-admin. - -```bash - -bin/pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace - -``` - -:::tip - -For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-threshold-em-). - -::: - -### Configure Azure BlobStore offloader to run manually - -For individual topics, you can trigger Azure BlobStore offloader manually using one of the following methods: - -- Use REST endpoint. 
- -- Use CLI tools (such as pulsar-admin). - - To trigger it via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are moved to Azure BlobStore until the threshold is no longer exceeded. Older segments are moved first. - -#### Example - -- This example triggers the Azure BlobStore offloader to run manually using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload --size-threshold 10M my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1 - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-em-). - - ::: - -- This example checks the Azure BlobStore offloader status using pulsar-admin. - - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload is currently running - - ``` - - To wait for the Azure BlobStore offloader to complete the job, add the `-w` flag. - - ```bash - - bin/pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Offload was a success - - ``` - - If there is an error in offloading, the error is propagated to the `pulsar-admin topics offload-status` command. - - ```bash - - bin/pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Error in offload - null - - Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: - - ``` - -` - - :::tip - - For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, and default values, see [here](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-offload-status-em-). - - ::: - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/tiered-storage-filesystem.md b/site2/website/versioned_docs/version-2.9.3-deprecated/tiered-storage-filesystem.md deleted file mode 100644 index a5844d22fb5dbe..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/tiered-storage-filesystem.md +++ /dev/null @@ -1,317 +0,0 @@ ---- -id: tiered-storage-filesystem -title: Use filesystem offloader with Pulsar -sidebar_label: "Filesystem offloader" -original_id: tiered-storage-filesystem ---- - -This chapter guides you through every step of installing and configuring the filesystem offloader and using it with Pulsar. - -## Installation - -Follow the steps below to install the filesystem offloader. - -### Prerequisite - -- Pulsar: 2.4.2 or later versions - -- Hadoop: 3.x.x - -### Step - -This example uses Pulsar 2.5.1. - -1. Download the Pulsar tarball using one of the following ways: - - * Download from the [Apache mirror](https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz) - - * Download from the Pulsar [download page](https://pulsar.apache.org/download) - - * Use [wget](https://www.gnu.org/software/wget) - - ```shell - - wget https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz - - ``` - -2. Download and untar the Pulsar offloaders package. 
- - ```bash - - wget https://downloads.apache.org/pulsar/pulsar-2.5.1/apache-pulsar-offloaders-2.5.1-bin.tar.gz - - tar xvfz apache-pulsar-offloaders-2.5.1-bin.tar.gz - - ``` - - :::note - - * If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's Pulsar directory. - * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8S and DCOS), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - - ::: - -3. Copy the Pulsar offloaders as `offloaders` in the Pulsar directory. - - ``` - - mv apache-pulsar-offloaders-2.5.1/offloaders apache-pulsar-2.5.1/offloaders - - ls offloaders - - ``` - - **Output** - - ``` - - tiered-storage-file-system-2.5.1.nar - tiered-storage-jcloud-2.5.1.nar - - ``` - - :::note - - * If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's Pulsar directory. - * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8s and DCOS), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - - ::: - -## Configuration - -:::note - -Before offloading data from BookKeeper to filesystem, you need to configure some properties of the filesystem offloader driver. - -::: - -Besides, you can also configure the filesystem offloader to run it automatically or trigger it manually. - -### Configure filesystem offloader driver - -You can configure filesystem offloader driver in the configuration file `broker.conf` or `standalone.conf`. - -- **Required** configurations are as below. - - Required configuration | Description | Example value - |---|---|--- - `managedLedgerOffloadDriver` | Offloader driver name, which is case-insensitive. | filesystem - `fileSystemURI` | Connection address | hdfs://127.0.0.1:9000 - `fileSystemProfilePath` | Hadoop profile path | ../conf/filesystem_offload_core_site.xml - -- **Optional** configurations are as below. - - Optional configuration| Description | Example value - |---|---|--- - `managedLedgerMinLedgerRolloverTimeMinutes`|Minimum time between ledger rollover for a topic

**Note**: it is not recommended that you set this configuration in the production environment.|2
`managedLedgerMaxEntriesPerLedger`|Maximum number of entries to append to a ledger before triggering a rollover.<br /><br />
    **Note**: it is not recommended that you set this configuration in the production environment.|5000 - -#### Offloader driver (required) - -Offloader driver name, which is case-insensitive. - -This example sets the offloader driver name as _filesystem_. - -```conf - -managedLedgerOffloadDriver=filesystem - -``` - -#### Connection address (required) - -Connection address is the URI to access the default Hadoop distributed file system. - -##### Example - -This example sets the connection address as _hdfs://127.0.0.1:9000_. - -```conf - -fileSystemURI=hdfs://127.0.0.1:9000 - -``` - -#### Hadoop profile path (required) - -The configuration file is stored in the Hadoop profile path. It contains various settings for Hadoop performance tuning. - -##### Example - -This example sets the Hadoop profile path as _../conf/filesystem_offload_core_site.xml_. - -```conf - -fileSystemProfilePath=../conf/filesystem_offload_core_site.xml - -``` - -You can set the following configurations in the _filesystem_offload_core_site.xml_ file. - -``` - - - fs.defaultFS - - - - - hadoop.tmp.dir - pulsar - - - - io.file.buffer.size - 4096 - - - - io.seqfile.compress.blocksize - 1000000 - - - - io.seqfile.compression.type - BLOCK - - - - io.map.index.interval - 128 - - -``` - -:::tip - -For more information about the Hadoop HDFS, see [here](https://hadoop.apache.org/docs/current/). - -::: - -### Configure filesystem offloader to run automatically - -Namespace policy can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic reaches the threshold, an offload operation is triggered automatically. - -Threshold value|Action -|---|--- -> 0 | It triggers the offloading operation if the topic storage reaches its threshold. -= 0|It causes a broker to offload data as soon as possible. -< 0 |It disables automatic offloading operation. - -Automatic offload runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, offloader does not work until the current segment is full. - -You can configure the threshold size using CLI tools, such as pulsar-admin. - -#### Example - -This example sets the filesystem offloader threshold size to 10 MB using pulsar-admin. - -```bash - -pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace - -``` - -:::tip - -For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#set-offload-threshold). - -::: - -### Configure filesystem offloader to run manually - -For individual topics, you can trigger filesystem offloader manually using one of the following methods: - -- Use REST endpoint. - -- Use CLI tools (such as pulsar-admin). - -To trigger via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are offloaded to the filesystem until the threshold is no longer exceeded. Older segments are offloaded first. - -#### Example - -- This example triggers the filesystem offloader to run manually using pulsar-admin. 
- - ```bash - - pulsar-admin topics offload --size-threshold 10M persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1 - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#offload). - - ::: - -- This example checks filesystem offloader status using pulsar-admin. - - ```bash - - pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload is currently running - - ``` - - To wait for the filesystem to complete the job, add the `-w` flag. - - ```bash - - pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Offload was a success - - ``` - - If there is an error in the offloading operation, the error is propagated to the `pulsar-admin topics offload-status` command. - - ```bash - - pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Error in offload - null - - Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g= - - ``` - -` - - :::tip - - For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#offload-status). - - ::: - -## Tutorial - -For the complete and step-by-step instructions on how to use the filesystem offloader with Pulsar, see [here](https://hub.streamnative.io/offloaders/filesystem/2.5.1). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/tiered-storage-gcs.md b/site2/website/versioned_docs/version-2.9.3-deprecated/tiered-storage-gcs.md deleted file mode 100644 index afb1e9a10081ce..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/tiered-storage-gcs.md +++ /dev/null @@ -1,321 +0,0 @@ ---- -id: tiered-storage-gcs -title: Use GCS offloader with Pulsar -sidebar_label: "GCS offloader" -original_id: tiered-storage-gcs ---- - -This chapter guides you through every step of installing and configuring the GCS offloader and using it with Pulsar. - -## Installation - -Follow the steps below to install the GCS offloader. - -### Prerequisite - -- Pulsar: 2.4.2 or later versions - -### Step - -This example uses Pulsar 2.5.1. - -1. Download the Pulsar tarball using one of the following ways: - - * Download from the [Apache mirror](https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz) - - * Download from the Pulsar [download page](https://pulsar.apache.org/download) - - * Use [wget](https://www.gnu.org/software/wget) - - ```shell - - wget https://archive.apache.org/dist/pulsar/pulsar-2.5.1/apache-pulsar-2.5.1-bin.tar.gz - - ``` - -2. Download and untar the Pulsar offloaders package. 
- - ```bash - - wget https://downloads.apache.org/pulsar/pulsar-2.5.1/apache-pulsar-offloaders-2.5.1-bin.tar.gz - - tar xvfz apache-pulsar-offloaders-2.5.1-bin.tar.gz - - ``` - - :::note - - * If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's Pulsar directory. - * If you are running Pulsar in Docker or deploying Pulsar using a Docker image (such as K8S and DCOS), you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders. - - ::: - -3. Copy the Pulsar offloaders as `offloaders` in the Pulsar directory. - - ``` - - mv apache-pulsar-offloaders-2.5.1/offloaders apache-pulsar-2.5.1/offloaders - - ls offloaders - - ``` - - **Output** - - As shown in the output, Pulsar uses [Apache jclouds](https://jclouds.apache.org) to support GCS and AWS S3 for long term storage. - - ``` - - tiered-storage-file-system-2.5.1.nar - tiered-storage-jcloud-2.5.1.nar - - ``` - -## Configuration - -:::note - -Before offloading data from BookKeeper to GCS, you need to configure some properties of the GCS offloader driver. - -::: - -Besides, you can also configure the GCS offloader to run it automatically or trigger it manually. - -### Configure GCS offloader driver - -You can configure GCS offloader driver in the configuration file `broker.conf` or `standalone.conf`. - -- **Required** configurations are as below. - - **Required** configuration | Description | Example value - |---|---|--- - `managedLedgerOffloadDriver`|Offloader driver name, which is case-insensitive.|google-cloud-storage - `offloadersDirectory`|Offloader directory|offloaders - `gcsManagedLedgerOffloadBucket`|Bucket|pulsar-topic-offload - `gcsManagedLedgerOffloadRegion`|Bucket region|europe-west3 - `gcsManagedLedgerOffloadServiceAccountKeyFile`|Authentication |/Users/user-name/Downloads/project-804d5e6a6f33.json - -- **Optional** configurations are as below. - - Optional configuration|Description|Example value - |---|---|--- - `gcsManagedLedgerOffloadReadBufferSizeInBytes`|Size of block read|1 MB - `gcsManagedLedgerOffloadMaxBlockSizeInBytes`|Size of block write|64 MB - `managedLedgerMinLedgerRolloverTimeMinutes`|Minimum time between ledger rollover for a topic.|2 - `managedLedgerMaxEntriesPerLedger`|The max number of entries to append to a ledger before triggering a rollover.|5000 - -#### Bucket (required) - -A bucket is a basic container that holds your data. Everything you store in GCS **must** be contained in a bucket. You can use a bucket to organize your data and control access to your data, but unlike directory and folder, you can not nest a bucket. - -##### Example - -This example names the bucket as _pulsar-topic-offload_. - -```conf - -gcsManagedLedgerOffloadBucket=pulsar-topic-offload - -``` - -#### Bucket region (required) - -Bucket region is the region where a bucket is located. If a bucket region is not specified, the **default** region (`us multi-regional location`) is used. - -:::tip - -For more information about bucket location, see [here](https://cloud.google.com/storage/docs/bucket-locations). - -::: - -##### Example - -This example sets the bucket region as _europe-west3_. - -``` - -gcsManagedLedgerOffloadRegion=europe-west3 - -``` - -#### Authentication (required) - -To enable a broker access GCS, you need to configure `gcsManagedLedgerOffloadServiceAccountKeyFile` in the configuration file `broker.conf`. 
- -`gcsManagedLedgerOffloadServiceAccountKeyFile` is -a JSON file, containing GCS credentials of a service account. - -##### Example - -To generate service account credentials or view the public credentials that you've already generated, follow the following steps. - -1. Navigate to the [Service accounts page](https://console.developers.google.com/iam-admin/serviceaccounts). - -2. Select a project or create a new one. - -3. Click **Create service account**. - -4. In the **Create service account** window, type a name for the service account and select **Furnish a new private key**. - - If you want to [grant G Suite domain-wide authority](https://developers.google.com/identity/protocols/OAuth2ServiceAccount#delegatingauthority) to the service account, select **Enable G Suite Domain-wide Delegation**. - -5. Click **Create**. - - :::note - - Make sure the service account you create has permission to operate GCS, you need to assign **Storage Admin** permission to your service account [here](https://cloud.google.com/storage/docs/access-control/iam). - - ::: - -6. You can get the following information and set this in `broker.conf`. - - ```conf - - gcsManagedLedgerOffloadServiceAccountKeyFile="/Users/user-name/Downloads/project-804d5e6a6f33.json" - - ``` - - :::tip - - - For more information about how to create `gcsManagedLedgerOffloadServiceAccountKeyFile`, see [here](https://support.google.com/googleapi/answer/6158849). - - For more information about Google Cloud IAM, see [here](https://cloud.google.com/storage/docs/access-control/iam). - - ::: - -#### Size of block read/write - -You can configure the size of a request sent to or read from GCS in the configuration file `broker.conf`. - -Configuration|Description -|---|--- -`gcsManagedLedgerOffloadReadBufferSizeInBytes`|Block size for each individual read when reading back data from GCS.

The **default** value is 1 MB.
`gcsManagedLedgerOffloadMaxBlockSizeInBytes`|Maximum size of a "part" sent during a multipart upload to GCS.<br /><br />It **cannot** be smaller than 5 MB.<br /><br />
    The **default** value is 64 MB. - -### Configure GCS offloader to run automatically - -Namespace policy can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that a topic has stored on a Pulsar cluster. Once the topic reaches the threshold, an offload operation is triggered automatically. - -Threshold value|Action -|---|--- -> 0 | It triggers the offloading operation if the topic storage reaches its threshold. -= 0|It causes a broker to offload data as soon as possible. -< 0 |It disables automatic offloading operation. - -Automatic offloading runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, offloader does not work until the current segment is full. - -You can configure the threshold size using CLI tools, such as pulsar-admin. - -The offload configurations in `broker.conf` and `standalone.conf` are used for the namespaces that do not have namespace level offload policies. Each namespace can have its own offload policy. If you want to set offload policy for each namespace, use the command [`pulsar-admin namespaces set-offload-policies options`](https://pulsar.apache.org/tools/pulsar-admin/2.6.0-SNAPSHOT/#-em-set-offload-policies-em-) command. - -#### Example - -This example sets the GCS offloader threshold size to 10 MB using pulsar-admin. - -```bash - -pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace - -``` - -:::tip - -For more information about the `pulsar-admin namespaces set-offload-threshold options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#set-offload-threshold). - -::: - -### Configure GCS offloader to run manually - -For individual topics, you can trigger GCS offloader manually using one of the following methods: - -- Use REST endpoint. - -- Use CLI tools (such as pulsar-admin). - - To trigger the GCS via CLI tools, you need to specify the maximum amount of data (threshold) that should be retained on a Pulsar cluster for a topic. If the size of the topic data on the Pulsar cluster exceeds this threshold, segments from the topic are moved to GCS until the threshold is no longer exceeded. Older segments are moved first. - -#### Example - -- This example triggers the GCS offloader to run manually using pulsar-admin with the command `pulsar-admin topics offload (topic-name) (threshold)`. - - ```bash - - pulsar-admin topics offload persistent://my-tenant/my-namespace/topic1 10M - - ``` - - **Output** - - ```bash - - Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1 - - ``` - - :::tip - - For more information about the `pulsar-admin topics offload options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#offload). - - ::: - -- This example checks the GCS offloader status using pulsar-admin with the command `pulsar-admin topics offload-status options`. - - ```bash - - pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ```bash - - Offload is currently running - - ``` - - To wait for GCS to complete the job, add the `-w` flag. 
- - ```bash - - pulsar-admin topics offload-status -w persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Offload was a success - - ``` - - If there is an error in offloading, the error is propagated to the `pulsar-admin topics offload-status` command. - - ```bash - - pulsar-admin topics offload-status persistent://my-tenant/my-namespace/topic1 - - ``` - - **Output** - - ``` - - Error in offload - null - - Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads. Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g= - - ``` - -` - - :::tip - - For more information about the `pulsar-admin topics offload-status options` command, including flags, descriptions, default values, and shorthands, see [here](reference-pulsar-admin.md#offload-status). - - ::: - -## Tutorial - -For the complete and step-by-step instructions on how to use the GCS offloader with Pulsar, see [here](https://hub.streamnative.io/offloaders/gcs/2.5.1#usage). \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/tiered-storage-overview.md b/site2/website/versioned_docs/version-2.9.3-deprecated/tiered-storage-overview.md deleted file mode 100644 index c635034f463b46..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/tiered-storage-overview.md +++ /dev/null @@ -1,52 +0,0 @@ ---- -id: tiered-storage-overview -title: Overview of tiered storage -sidebar_label: "Overview" -original_id: tiered-storage-overview ---- - -Pulsar's **Tiered Storage** feature allows older backlog data to be moved from BookKeeper to long term and cheaper storage, while still allowing clients to access the backlog as if nothing has changed. - -* Tiered storage uses [Apache jclouds](https://jclouds.apache.org) to support [Amazon S3](https://aws.amazon.com/s3/) and [GCS (Google Cloud Storage)](https://cloud.google.com/storage/) for long term storage. - - With jclouds, it is easy to add support for more [cloud storage providers](https://jclouds.apache.org/reference/providers/#blobstore-providers) in the future. - - :::tip - - - For more information about how to use the AWS S3 offloader with Pulsar, see [here](tiered-storage-aws.md). - - - For more information about how to use the GCS offloader with Pulsar, see [here](tiered-storage-gcs.md). - - ::: - -* Tiered storage uses [Apache Hadoop](http://hadoop.apache.org/) to support filesystems for long term storage. - - With Hadoop, it is easy to add support for more filesystems in the future. - - :::tip - - For more information about how to use the filesystem offloader with Pulsar, see [here](tiered-storage-filesystem.md). - - ::: - -## When to use tiered storage? - -Tiered storage should be used when you have a topic for which you want to keep a very long backlog for a long time. - -For example, if you have a topic containing user actions which you use to train your recommendation systems, you may want to keep that data for a long time, so that if you change your recommendation algorithm, you can rerun it against your full user history. - -## How does tiered storage work? 
- -A topic in Pulsar is backed by a **log**, known as a **managed ledger**. This log is composed of an ordered list of segments. Pulsar only writes to the final segment of the log. All previous segments are sealed. The data within the segment is immutable. This is known as a **segment oriented architecture**. - -![Tiered storage](/assets/pulsar-tiered-storage.png "Tiered Storage") - -The tiered storage offloading mechanism takes advantage of segment oriented architecture. When offloading is requested, the segments of the log are copied one-by-one to tiered storage. All segments of the log (apart from the current segment) written to tiered storage can be offloaded. - -Data written to BookKeeper is replicated to 3 physical machines by default. However, once a segment is sealed in BookKeeper, it becomes immutable and can be copied to long term storage. Long term storage can achieve cost savings by using mechanisms such as [Reed-Solomon error correction](https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction) to require fewer physical copies of data. - -Before offloading ledgers to long term storage, you need to configure buckets, credentials, and other properties for the cloud storage service. Additionally, Pulsar uses multi-part objects to upload the segment data and brokers may crash while uploading the data. It is recommended that you add a life cycle rule for your bucket to expire incomplete multi-part upload after a day or two days to avoid getting charged for incomplete uploads. Moreover, you can trigger the offloading operation manually (via REST API or CLI) or automatically (via CLI). - -After offloading ledgers to long term storage, you can still query data in the offloaded ledgers with Pulsar SQL. - -For more information about tiered storage for Pulsar topics, see [here](https://github.com/apache/pulsar/wiki/PIP-17:-Tiered-storage-for-Pulsar-topics). diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/transaction-api.md b/site2/website/versioned_docs/version-2.9.3-deprecated/transaction-api.md deleted file mode 100644 index ecbd0da12c786d..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/transaction-api.md +++ /dev/null @@ -1,172 +0,0 @@ ---- -id: transactions-api -title: Transactions API -sidebar_label: "Transactions API" -original_id: transactions-api ---- - -All messages in a transaction are available only to consumers after the transaction has been committed. If a transaction has been aborted, all the writes and acknowledgments in this transaction roll back. - -## Prerequisites -1. To enable transactions in Pulsar, you need to configure the parameter in the `broker.conf` file. - -``` - -transactionCoordinatorEnabled=true - -``` - -2. Initialize transaction coordinator metadata, so the transaction coordinators can leverage advantages of the partitioned topic, such as load balance. - -``` - -bin/pulsar initialize-transaction-coordinator-metadata -cs 127.0.0.1:2181 -c standalone - -``` - -After initializing transaction coordinator metadata, you can use the transactions API. The following APIs are available. - -## Initialize Pulsar client - -You can enable transaction for transaction client and initialize transaction coordinator client. - -``` - -PulsarClient pulsarClient = PulsarClient.builder() - .serviceUrl("pulsar://localhost:6650") - .enableTransaction(true) - .build(); - -``` - -## Start transactions -You can start transaction in the following way. 
- -``` - -Transaction txn = pulsarClient - .newTransaction() - .withTransactionTimeout(5, TimeUnit.MINUTES) - .build() - .get(); - -``` - -## Produce transaction messages - -A transaction parameter is required when producing new transaction messages. The semantic of the transaction messages in Pulsar is `read-committed`, so the consumer cannot receive the ongoing transaction messages before the transaction is committed. - -``` - -producer.newMessage(txn).value("Hello Pulsar Transaction".getBytes()).sendAsync(); - -``` - -## Acknowledge the messages with the transaction - -The transaction acknowledgement requires a transaction parameter. The transaction acknowledgement marks the messages state to pending-ack state. When the transaction is committed, the pending-ack state becomes ack state. If the transaction is aborted, the pending-ack state becomes unack state. - -``` - -Message message = consumer.receive(); -consumer.acknowledgeAsync(message.getMessageId(), txn); - -``` - -## Commit transactions - -When the transaction is committed, consumers receive the transaction messages and the pending-ack state becomes ack state. - -``` - -txn.commit().get(); - -``` - -## Abort transaction - -When the transaction is aborted, the transaction acknowledgement is canceled and the pending-ack messages are redelivered. - -``` - -txn.abort().get(); - -``` - -### Example -The following example shows how messages are processed in transaction. - -``` - -PulsarClient pulsarClient = PulsarClient.builder() - .serviceUrl(getPulsarServiceList().get(0).getBrokerServiceUrl()) - .statsInterval(0, TimeUnit.SECONDS) - .enableTransaction(true) - .build(); - -String sourceTopic = "public/default/source-topic"; -String sinkTopic = "public/default/sink-topic"; - -Producer sourceProducer = pulsarClient - .newProducer(Schema.STRING) - .topic(sourceTopic) - .create(); -sourceProducer.newMessage().value("hello pulsar transaction").sendAsync(); - -Consumer sourceConsumer = pulsarClient - .newConsumer(Schema.STRING) - .topic(sourceTopic) - .subscriptionName("test") - .subscriptionType(SubscriptionType.Shared) - .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest) - .subscribe(); - -Producer sinkProducer = pulsarClient - .newProducer(Schema.STRING) - .topic(sinkTopic) - .sendTimeout(0, TimeUnit.MILLISECONDS) - .create(); - -Transaction txn = pulsarClient - .newTransaction() - .withTransactionTimeout(5, TimeUnit.MINUTES) - .build() - .get(); - -// source message acknowledgement and sink message produce belong to one transaction, -// they are combined into an atomic operation. -Message message = sourceConsumer.receive(); -sourceConsumer.acknowledgeAsync(message.getMessageId(), txn); -sinkProducer.newMessage(txn).value("sink data").sendAsync(); - -txn.commit().get(); - -``` - -## Enable batch messages in transactions - -To enable batch messages in transactions, you need to enable the batch index acknowledgement feature. The transaction acks check whether the batch index acknowledgement conflicts. - -To enable batch index acknowledgement, you need to set `acknowledgmentAtBatchIndexLevelEnabled` to `true` in the `broker.conf` or `standalone.conf` file. - -``` - -acknowledgmentAtBatchIndexLevelEnabled=true - -``` - -And then you need to call the `enableBatchIndexAcknowledgment(true)` method in the consumer builder. 
- -``` - -Consumer sinkConsumer = pulsarClient - .newConsumer() - .topic(transferTopic) - .subscriptionName("sink-topic") - .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest) - .subscriptionType(SubscriptionType.Shared) - .enableBatchIndexAcknowledgment(true) // enable batch index acknowledgement - .subscribe(); - -``` - diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/transaction-guarantee.md b/site2/website/versioned_docs/version-2.9.3-deprecated/transaction-guarantee.md deleted file mode 100644 index 9db2d254e159f6..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/transaction-guarantee.md +++ /dev/null @@ -1,17 +0,0 @@ ---- -id: transactions-guarantee -title: Transactions Guarantee -sidebar_label: "Transactions Guarantee" -original_id: transactions-guarantee ---- - -Pulsar transactions support the following guarantee. - -## Atomic multi-partition writes and multi-subscription acknowledges -Transactions enable atomic writes to multiple topics and partitions. A batch of messages in a transaction can be received from, produced to, and acknowledged by many partitions. All the operations involved in a transaction succeed or fail as a single unit. - -## Read transactional message -All the messages in a transaction are available only for consumers until the transaction is committed. - -## Acknowledge transactional message -A message is acknowledged successfully only once by a consumer under the subscription when acknowledging the message with the transaction ID. \ No newline at end of file diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/txn-how.md b/site2/website/versioned_docs/version-2.9.3-deprecated/txn-how.md deleted file mode 100644 index add072448aeb34..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/txn-how.md +++ /dev/null @@ -1,151 +0,0 @@ ---- -id: txn-how -title: How transactions work? -sidebar_label: "How transactions work?" -original_id: txn-how ---- - -This section describes transaction components and how the components work together. For the complete design details, see [PIP-31: Transactional Streaming](https://docs.google.com/document/d/145VYp09JKTw9jAT-7yNyFU255FptB2_B2Fye100ZXDI/edit#heading=h.bm5ainqxosrx). - -## Key concept - -It is important to know the following key concepts, which is a prerequisite for understanding how transactions work. - -### Transaction coordinator - -The transaction coordinator (TC) is a module running inside a Pulsar broker. - -* It maintains the entire life cycle of transactions and prevents a transaction from getting into an incorrect status. - -* It handles transaction timeout, and ensures that the transaction is aborted after a transaction timeout. - -### Transaction log - -All the transaction metadata persists in the transaction log. The transaction log is backed by a Pulsar topic. If the transaction coordinator crashes, it can restore the transaction metadata from the transaction log. - -The transaction log stores the transaction status rather than actual messages in the transaction (the actual messages are stored in the actual topic partitions). - -### Transaction buffer - -Messages produced to a topic partition within a transaction are stored in the transaction buffer (TB) of that topic partition. The messages in the transaction buffer are not visible to consumers until the transactions are committed. The messages in the transaction buffer are discarded when the transactions are aborted. 
- -Transaction buffer stores all ongoing and aborted transactions in memory. All messages are sent to the actual partitioned Pulsar topics. After transactions are committed, the messages in the transaction buffer are materialized (visible) to consumers. When the transactions are aborted, the messages in the transaction buffer are discarded. - -### Transaction ID - -Transaction ID (TxnID) identifies a unique transaction in Pulsar. The transaction ID is 128-bit. The highest 16 bits are reserved for the ID of the transaction coordinator, and the remaining bits are used for monotonically increasing numbers in each transaction coordinator. It is easy to locate the transaction crash with the TxnID. - -### Pending acknowledge state - -Pending acknowledge state maintains message acknowledgments within a transaction before a transaction completes. If a message is in the pending acknowledge state, the message cannot be acknowledged by other transactions until the message is removed from the pending acknowledge state. - -The pending acknowledge state is persisted to the pending acknowledge log (cursor ledger). A new broker can restore the state from the pending acknowledge log to ensure the acknowledgement is not lost. - -## Data flow - -At a high level, the data flow can be split into several steps: - -1. Begin a transaction. - -2. Publish messages with a transaction. - -3. Acknowledge messages with a transaction. - -4. End a transaction. - -To help you debug or tune the transaction for better performance, review the following diagrams and descriptions. - -### 1. Begin a transaction - -Before introducing the transaction in Pulsar, a producer is created and then messages are sent to brokers and stored in data logs. - -![](/assets/txn-3.png) - -Let’s walk through the steps for _beginning a transaction_. - -| Step | Description | -| --- | --- | -| 1.1 | The first step is that the Pulsar client finds the transaction coordinator. | -| 1.2 | The transaction coordinator allocates a transaction ID for the transaction. In the transaction log, the transaction is logged with its transaction ID and status (OPEN), which ensures the transaction status is persisted regardless of transaction coordinator crashes. | -| 1.3 | The transaction log sends the result of persisting the transaction ID to the transaction coordinator. | -| 1.4 | After the transaction status entry is logged, the transaction coordinator brings the transaction ID back to the Pulsar client. | - -### 2. Publish messages with a transaction - -In this stage, the Pulsar client enters a transaction loop, repeating the `consume-process-produce` operation for all the messages that comprise the transaction. This is a long phase and is potentially composed of multiple produce and acknowledgement requests. - -![](/assets/txn-4.png) - -Let’s walk through the steps for _publishing messages with a transaction_. - -| Step | Description | -| --- | --- | -| 2.1.1 | Before the Pulsar client produces messages to a new topic partition, it sends a request to the transaction coordinator to add the partition to the transaction. | -| 2.1.2 | The transaction coordinator logs the partition changes of the transaction into the transaction log for durability, which ensures the transaction coordinator knows all the partitions that a transaction is handling. The transaction coordinator can commit or abort changes on each partition at the end-partition phase. 
| -| 2.1.3 | The transaction log sends the result of logging the new partition (used for producing messages) to the transaction coordinator. | -| 2.1.4 | The transaction coordinator sends the result of adding a new produced partition to the transaction. | -| 2.2.1 | The Pulsar client starts producing messages to partitions. The flow of this part is the same as the normal flow of producing messages except that the batch of messages produced by a transaction contains transaction IDs. | -| 2.2.2 | The broker writes messages to a partition. | - -### 3. Acknowledge messages with a transaction - -In this phase, the Pulsar client sends a request to the transaction coordinator and a new subscription is acknowledged as a part of a transaction. - -![](/assets/txn-5.png) - -Let’s walk through the steps for _acknowledging messages with a transaction_. - -| Step | Description | -| --- | --- | -| 3.1.1 | The Pulsar client sends a request to add an acknowledged subscription to the transaction coordinator. | -| 3.1.2 | The transaction coordinator logs the addition of subscription, which ensures that it knows all subscriptions handled by a transaction and can commit or abort changes on each subscription at the end phase. | -| 3.1.3 | The transaction log sends the result of logging the new partition (used for acknowledging messages) to the transaction coordinator. | -| 3.1.4 | The transaction coordinator sends the result of adding the new acknowledged partition to the transaction. | -| 3.2 | The Pulsar client acknowledges messages on the subscription. The flow of this part is the same as the normal flow of acknowledging messages except that the acknowledged request carries a transaction ID. | -| 3.3 | The broker receiving the acknowledgement request checks if the acknowledgment belongs to a transaction or not. | - -### 4. End a transaction - -At the end of a transaction, the Pulsar client decides to commit or abort the transaction. The transaction can be aborted when a conflict is detected on acknowledging messages. - -#### 4.1 End transaction request - -When the Pulsar client finishes a transaction, it issues an end transaction request. - -![](/assets/txn-6.png) - -Let’s walk through the steps for _ending the transaction_. - -| Step | Description | -| --- | --- | -| 4.1.1 | The Pulsar client issues an end transaction request (with a field indicating whether the transaction is to be committed or aborted) to the transaction coordinator. | -| 4.1.2 | The transaction coordinator writes a COMMITTING or ABORTING message to its transaction log. | -| 4.1.3 | The transaction log sends the result of logging the committing or aborting status. | - -#### 4.2 Finalize a transaction - -The transaction coordinator starts the process of committing or aborting messages to all the partitions involved in this transaction. - -![](/assets/txn-7.png) - -Let’s walk through the steps for _finalizing a transaction_. - -| Step | Description | -| --- | --- | -| 4.2.1 | The transaction coordinator commits transactions on subscriptions and commits transactions on partitions at the same time. | -| 4.2.2 | The broker (produce) writes produced committed markers to the actual partitions. At the same time, the broker (ack) writes acked committed marks to the subscription pending ack partitions. | -| 4.2.3 | The data log sends the result of writing produced committed marks to the broker. At the same time, pending ack data log sends the result of writing acked committed marks to the broker. The cursor moves to the next position. 
| - -#### 4.3 Mark a transaction as COMMITTED or ABORTED - -The transaction coordinator writes the final transaction status to the transaction log to complete the transaction. - -![](/assets/txn-8.png) - -Let’s walk through the steps for _marking a transaction as COMMITTED or ABORTED_. - -| Step | Description | -| --- | --- | -| 4.3.1 | After all produced messages and acknowledgements to all partitions involved in this transaction have been successfully committed or aborted, the transaction coordinator writes the final COMMITTED or ABORTED transaction status messages to its transaction log, indicating that the transaction is complete. All the messages associated with the transaction in its transaction log can be safely removed. | -| 4.3.2 | The transaction log sends the result of the committed transaction to the transaction coordinator. | -| 4.3.3 | The transaction coordinator sends the result of the committed transaction to the Pulsar client. | diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/txn-monitor.md b/site2/website/versioned_docs/version-2.9.3-deprecated/txn-monitor.md deleted file mode 100644 index 5b50953772d092..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/txn-monitor.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -id: txn-monitor -title: How to monitor transactions? -sidebar_label: "How to monitor transactions?" -original_id: txn-monitor ---- - -You can monitor the status of the transactions in Prometheus and Grafana using the [transaction metrics](https://pulsar.apache.org/docs/en/next/reference-metrics/#pulsar-transaction). - -For how to configure Prometheus and Grafana, see [here](https://pulsar.apache.org/docs/en/next/deploy-monitoring). diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/txn-use.md b/site2/website/versioned_docs/version-2.9.3-deprecated/txn-use.md deleted file mode 100644 index de0e4a92f1b27e..00000000000000 --- a/site2/website/versioned_docs/version-2.9.3-deprecated/txn-use.md +++ /dev/null @@ -1,105 +0,0 @@ ---- -id: txn-use -title: How to use transactions? -sidebar_label: "How to use transactions?" -original_id: txn-use ---- - -## Transaction API - -The transaction feature is primarily a server-side and protocol-level feature. You can use the transaction feature via the [transaction API](https://pulsar.apache.org/api/admin/), which is available in **Pulsar 2.8.0 or later**. - -To use the transaction API, you do not need any additional settings in the Pulsar client. **By default**, transactions is **disabled**. - -Currently, transaction API is only available for **Java** clients. Support for other language clients will be added in the future releases. - -## Quick start - -This section provides an example of how to use the transaction API to send and receive messages in a Java client. - -1. Start Pulsar 2.8.0 or later. - -2. Enable transaction. - - Change the configuration in the `broker.conf` file. - - ``` - - transactionCoordinatorEnabled=true - - ``` - - If you want to enable batch messages in transactions, follow the steps below. - - Set `acknowledgmentAtBatchIndexLevelEnabled` to `true` in the `broker.conf` or `standalone.conf` file. - - ``` - - acknowledgmentAtBatchIndexLevelEnabled=true - - ``` - -3. Initialize transaction coordinator metadata. - - The transaction coordinator can leverage the advantages of partitioned topics (such as load balance). 
   **Input**

   ```

   bin/pulsar initialize-transaction-coordinator-metadata -cs 127.0.0.1:2181 -c standalone

   ```

   **Output**

   ```

   Transaction coordinator metadata setup success

   ```

4. Initialize a Pulsar client.

   ```

   PulsarClient client = PulsarClient.builder()
       .serviceUrl("pulsar://localhost:6650")
       .enableTransaction(true)
       .build();

   ```

Now you can start using the transaction API to send and receive messages. Below is an example of a `consume-process-produce` application written in Java.

![](/assets/txn-9.png)

Let’s walk through this example step by step.

| Step | Description |
| --- | --- |
| 1. Start a transaction. | The application opens a new transaction by calling `PulsarClient.newTransaction`. It specifies the transaction timeout as 1 minute. If the transaction is not committed within 1 minute, the transaction is automatically aborted. |
| 2. Receive messages from topics. | The application creates two normal consumers to receive messages from topics _input-topic-1_ and _input-topic-2_ respectively. |
| 3. Publish messages to topics with the transaction. | The application creates two producers to produce the resulting messages to the output topics _output-topic-1_ and _output-topic-2_ respectively. The application applies the processing logic and generates two output messages. The application sends those two output messages as part of the transaction opened in the first step via `Producer.newMessage(Transaction)`. |
| 4. Acknowledge the messages with the transaction. | In the same transaction, the application acknowledges the two input messages. |
| 5. Commit the transaction. | The application commits the transaction by calling `Transaction.commit()` on the open transaction. The commit operation ensures the two input messages are marked as acknowledged and the two output messages are written successfully to the output topics. |

[1] Example of enabling batch-message acknowledgment in transactions in the consumer builder.

```

Consumer<byte[]> sinkConsumer = pulsarClient
    .newConsumer()
    .topic(transferTopic)
    .subscriptionName("sink-topic")
    .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest)
    .subscriptionType(SubscriptionType.Shared)
    .enableBatchIndexAcknowledgment(true) // enable batch index acknowledgment
    .subscribe();

```
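Putting the five steps in the table together, the following is a minimal sketch of one `consume-process-produce` iteration. It assumes the `client` from the quick start above, and that the consumers (`sourceConsumer1`, `sourceConsumer2`), producers (`sinkProducer1`, `sinkProducer2`), and a `process` method already exist; those names are illustrative, not part of the API.

```

// Step 1: open a transaction with a 1-minute timeout.
Transaction txn = client
    .newTransaction()
    .withTransactionTimeout(1, TimeUnit.MINUTES)
    .build()
    .get();

// Step 2: receive one message from each input topic.
Message<String> msg1 = sourceConsumer1.receive();
Message<String> msg2 = sourceConsumer2.receive();

// Step 3: publish the processed results as part of the transaction.
// `process` is an application-defined method, assumed here.
sinkProducer1.newMessage(txn).value(process(msg1.getValue())).sendAsync();
sinkProducer2.newMessage(txn).value(process(msg2.getValue())).sendAsync();

// Step 4: acknowledge the input messages with the same transaction.
sourceConsumer1.acknowledgeAsync(msg1.getMessageId(), txn);
sourceConsumer2.acknowledgeAsync(msg2.getMessageId(), txn);

// Step 5: commit, making the writes and acknowledgments atomic.
txn.commit().get();

```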
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/txn-what.md b/site2/website/versioned_docs/version-2.9.3-deprecated/txn-what.md
deleted file mode 100644
index 844f19a700f8f0..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/txn-what.md
+++ /dev/null
@@ -1,60 +0,0 @@
---
id: txn-what
title: What are transactions?
sidebar_label: "What are transactions?"
original_id: txn-what
---

Transactions strengthen the message delivery semantics of Apache Pulsar and [processing guarantees of Pulsar Functions](https://pulsar.apache.org/docs/en/next/functions-overview/#processing-guarantees). The Pulsar Transaction API supports atomic writes and acknowledgments across multiple topics.

Transactions allow:

- A producer to send a batch of messages to multiple topics where all messages in the batch are eventually visible to any consumer, or none are ever visible to consumers.

- End-to-end exactly-once semantics (execute a `consume-process-produce` operation exactly once).

## Transaction semantics

Pulsar transactions have the following semantics:

* All operations within a transaction are committed as a single unit.

  * Either all messages are committed, or none of them are.

  * Each message is written or processed exactly once, without data loss or duplicates (even in the event of failures).

  * If a transaction is aborted, all the writes and acknowledgments in this transaction roll back.

* A group of messages in a transaction can be received from, produced to, and acknowledged by multiple partitions.

  * Consumers are only allowed to read committed (acked) messages. In other words, the broker does not deliver transactional messages that are part of an open transaction or messages that are part of an aborted transaction.

  * Message writes across multiple partitions are atomic.

  * Message acks across multiple subscriptions are atomic. A message can be acknowledged successfully only once by a consumer under a given subscription when it is acknowledged with the transaction ID.

## Transactions and stream processing

Stream processing on Pulsar is a `consume-process-produce` operation on Pulsar topics:

* `Consume`: a source operator that runs a Pulsar consumer reads messages from one or multiple Pulsar topics.

* `Process`: a processing operator transforms the messages.

* `Produce`: a sink operator that runs a Pulsar producer writes the resulting messages to one or multiple Pulsar topics.

![](/assets/txn-2.png)

Pulsar transactions support end-to-end exactly-once stream processing, which means messages are not lost at the source operator and are not duplicated at the sink operator.

## Use case

Prior to Pulsar 2.8.0, there was no easy way to build stream processing applications with Pulsar that achieved exactly-once processing guarantees. With transactions introduced in Pulsar 2.8.0, the following services support exactly-once semantics:

* [Pulsar Flink connector](https://flink.apache.org/2021/01/07/pulsar-flink-connector-270.html)

  Prior to Pulsar 2.8.0, if you wanted to build stream applications using Pulsar and Flink, the Pulsar Flink connector only supported an exactly-once source connector and an at-least-once sink connector. This means the highest end-to-end processing guarantee was at-least-once, so streaming applications could produce duplicate messages to the resulting topics in Pulsar.

  With transactions introduced in Pulsar 2.8.0, the Pulsar Flink sink connector can support exactly-once semantics by implementing the designated `TwoPhaseCommitSinkFunction` and hooking up the Flink sink message lifecycle with the Pulsar transaction API.

* Support for Pulsar Functions and other connectors will be added in future releases.
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/txn-why.md b/site2/website/versioned_docs/version-2.9.3-deprecated/txn-why.md
deleted file mode 100644
index 1ed8769977654e..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/txn-why.md
+++ /dev/null
@@ -1,45 +0,0 @@
---
id: txn-why
title: Why transactions?
sidebar_label: "Why transactions?"
original_id: txn-why
---

Pulsar transactions (txn) enable event streaming applications to consume, process, and produce messages in one atomic operation. The reasons for developing this feature are summarized below.

## Demand of stream processing

The demand for stream processing applications with stronger processing guarantees has grown along with the rise of stream processing.
For example, in the financial industry, financial institutions use stream processing engines to process debits and credits for users. This type of use case requires that every message is processed exactly once, without exception.

In other words, if a stream processing application consumes message A and produces the result as a message B (B = f(A)), then the exactly-once processing guarantee means that A is marked as consumed if and only if B is successfully produced, and vice versa.

![](/assets/txn-1.png)

The Pulsar transactions API strengthens the message delivery semantics and the processing guarantees for stream processing. It enables stream processing applications to consume, process, and produce messages in one atomic operation. That means a batch of messages in a transaction can be received from, produced to, and acknowledged by many topic partitions. All the operations involved in a transaction succeed or fail as one single unit.

## Limitation of idempotent producer

Avoiding data loss or duplication can be achieved by using the Pulsar idempotent producer, but it does not provide guarantees for writes across multiple partitions.

In Pulsar, the highest level of message delivery guarantee is using an [idempotent producer](https://pulsar.apache.org/docs/en/next/concepts-messaging/#producer-idempotency) with exactly-once semantics at one single partition, that is, each message is persisted exactly once without data loss and duplication. However, there are some limitations in this solution:

- Due to the monotonically increasing sequence ID, this solution only works on a single partition and within a single producer session (that is, for producing one message), so there is no atomicity when producing multiple messages to one or multiple partitions.

  In this case, if there are some failures (for example, client / broker / bookie crashes, network failure, and more) in the process of producing and receiving messages, messages are re-processed and re-delivered, which may cause data loss or data duplication:

  - For the producer: if the producer retries sending messages, some messages are persisted multiple times; if the producer does not retry sending messages, some messages are persisted once and other messages are lost.

  - For the consumer: since the consumer does not know whether the broker has received its acknowledgments, acknowledged messages may be redelivered, which causes the consumer to receive duplicate messages.

- Similarly, for Pulsar Functions, the idempotent producer only guarantees exactly-once semantics for an idempotent function on a single event, not for processing multiple events or producing multiple results exactly once.

  For example, if a function accepts multiple events and produces one result (for example, a window function), the function may fail between producing the result and acknowledging the incoming messages, or even between acknowledging individual events, which causes all (or some) incoming messages to be re-delivered and reprocessed, and a new result to be generated.

  However, many scenarios need atomic guarantees across multiple partitions and sessions.

- Consumers need to rely on more mechanisms to acknowledge (ack) messages exactly once.

  For example, consumers are required to store the MessageID along with its acked state. After the topic is unloaded, the subscription can recover the acked state of this MessageID in memory when the topic is loaded again.
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.9.3-deprecated/window-functions-context.md b/site2/website/versioned_docs/version-2.9.3-deprecated/window-functions-context.md
deleted file mode 100644
index f80fea57989ef0..00000000000000
--- a/site2/website/versioned_docs/version-2.9.3-deprecated/window-functions-context.md
+++ /dev/null
@@ -1,581 +0,0 @@
---
id: window-functions-context
title: Window Functions Context
sidebar_label: "Window Functions: Context"
original_id: window-functions-context
---

The Java SDK provides access to a **window context object** that can be used by a window function. This context object provides a wide variety of information and functionality for Pulsar window functions, as described below.

- [Spec](#spec)

  * Names of all input topics and the output topic associated with the function.
  * Tenant and namespace associated with the function.
  * Pulsar window function name, ID, and version.
  * ID of the Pulsar function instance running the window function.
  * Number of instances that invoke the window function.
  * Built-in type or custom class name of the output schema.

- [Logger](#logger)

  * Logger object used by the window function, which can be used to create window function log messages.

- [User config](#user-config)

  * Access to arbitrary user configuration values.

- [Routing](#routing)

  * Routing is supported in Pulsar window functions. Pulsar window functions can send messages to arbitrary topics via the `publish` interface.

- [Metrics](#metrics)

  * Interface for recording metrics.

- [State storage](#state-storage)

  * Interface for storing and retrieving state in [state storage](#state-storage).

## Spec

Spec contains the basic information of a function.

### Get input topics

The `getInputTopics` method gets the **name list** of all input topics.

This example demonstrates how to get the name list of all input topics in a Java window function.

```java

public class GetInputTopicsWindowFunction implements WindowFunction<String, Void> {
    @Override
    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
        Collection<String> inputTopics = context.getInputTopics();
        System.out.println(inputTopics);

        return null;
    }

}

```

### Get output topic

The `getOutputTopic` method gets the **name of the topic** to which the message is sent.

This example demonstrates how to get the name of the output topic in a Java window function.

```java

public class GetOutputTopicWindowFunction implements WindowFunction<String, Void> {
    @Override
    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
        String outputTopic = context.getOutputTopic();
        System.out.println(outputTopic);

        return null;
    }
}

```

### Get tenant

The `getTenant` method gets the tenant name associated with the window function.

This example demonstrates how to get the tenant name in a Java window function.

```java

public class GetTenantWindowFunction implements WindowFunction<String, Void> {
    @Override
    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
        String tenant = context.getTenant();
        System.out.println(tenant);

        return null;
    }

}

```

### Get namespace

The `getNamespace` method gets the namespace associated with the window function.

This example demonstrates how to get the namespace in a Java window function.
```java

public class GetNamespaceWindowFunction implements WindowFunction<String, Void> {
    @Override
    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
        String ns = context.getNamespace();
        System.out.println(ns);

        return null;
    }

}

```

### Get function name

The `getFunctionName` method gets the window function name.

This example demonstrates how to get the function name in a Java window function.

```java

public class GetNameOfWindowFunction implements WindowFunction<String, Void> {
    @Override
    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
        String functionName = context.getFunctionName();
        System.out.println(functionName);

        return null;
    }

}

```

### Get function ID

The `getFunctionId` method gets the window function ID.

This example demonstrates how to get the function ID in a Java window function.

```java

public class GetFunctionIDWindowFunction implements WindowFunction<String, Void> {
    @Override
    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
        String functionID = context.getFunctionId();
        System.out.println(functionID);

        return null;
    }

}

```

### Get function version

The `getFunctionVersion` method gets the window function version.

This example demonstrates how to get the function version of a Java window function.

```java

public class GetVersionOfWindowFunction implements WindowFunction<String, Void> {
    @Override
    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
        String functionVersion = context.getFunctionVersion();
        System.out.println(functionVersion);

        return null;
    }

}

```

### Get instance ID

The `getInstanceId` method gets the instance ID of a window function.

This example demonstrates how to get the instance ID in a Java window function.

```java

public class GetInstanceIDWindowFunction implements WindowFunction<String, Void> {
    @Override
    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
        int instanceId = context.getInstanceId();
        System.out.println(instanceId);

        return null;
    }

}

```

### Get num instances

The `getNumInstances` method gets the number of instances that invoke the window function.

This example demonstrates how to get the number of instances in a Java window function.

```java

public class GetNumInstancesWindowFunction implements WindowFunction<String, Void> {
    @Override
    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
        int numInstances = context.getNumInstances();
        System.out.println(numInstances);

        return null;
    }

}

```

### Get output schema type

The `getOutputSchemaType` method gets the built-in type or custom class name of the output schema.

This example demonstrates how to get the output schema type of a Java window function.

```java

public class GetOutputSchemaTypeWindowFunction implements WindowFunction<String, Void> {

    @Override
    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
        String schemaType = context.getOutputSchemaType();
        System.out.println(schemaType);

        return null;
    }
}

```

## Logger

Pulsar window functions that use the Java SDK have access to an [SLF4j](https://www.slf4j.org/) [`Logger`](https://www.slf4j.org/api/org/apache/log4j/Logger.html) object that can be used to produce logs at the chosen log level.
This example logs each incoming record at the `INFO` level in a Java window function.

```java

import java.util.Collection;
import org.apache.pulsar.functions.api.Record;
import org.apache.pulsar.functions.api.WindowContext;
import org.apache.pulsar.functions.api.WindowFunction;
import org.slf4j.Logger;

public class LoggingWindowFunction implements WindowFunction<String, Void> {
    @Override
    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
        Logger log = context.getLogger();
        for (Record<String> record : inputs) {
            log.info(record + "-window-log");
        }
        return null;
    }

}

```

If you need your function to produce logs, specify a log topic when creating or running the function.

```bash

bin/pulsar-admin functions create \
  --jar my-functions.jar \
  --classname my.package.LoggingWindowFunction \
  --log-topic persistent://public/default/logging-function-logs \
  # Other function configs

```

You can access all logs produced by `LoggingWindowFunction` via the `persistent://public/default/logging-function-logs` topic.

## Metrics

Pulsar window functions can publish arbitrary metrics to the metrics interface, which can then be queried.

:::note

If a Pulsar window function uses the language-native interface for Java, that function is not able to publish metrics and stats to Pulsar.

:::

You can record metrics using the context object on a per-key basis.

This example records the event time of each message as a metric under the `MessageEventTime` key in a Java window function.

```java

import java.util.Collection;
import org.apache.pulsar.functions.api.Record;
import org.apache.pulsar.functions.api.WindowContext;
import org.apache.pulsar.functions.api.WindowFunction;


/**
 * Example function that keeps track of
 * the event time of each message sent.
 */
public class UserMetricWindowFunction implements WindowFunction<String, Void> {
    @Override
    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {

        for (Record<String> record : inputs) {
            if (record.getEventTime().isPresent()) {
                context.recordMetric("MessageEventTime", record.getEventTime().get().doubleValue());
            }
        }

        return null;
    }
}

```

## User config

When you run or update Pulsar Functions that are created using the SDK, you can pass arbitrary key/value pairs to them with the `--user-config` flag. Key/value pairs **must** be specified as JSON.

This example passes a user-configured key/value pair to a function.

```bash

bin/pulsar-admin functions create \
  --name word-filter \
  --user-config '{"forbidden-word":"rosebud"}' \
  # Other function configs

```

### API

You can use the following APIs to get user-defined information for window functions.

#### getUserConfigMap

The `getUserConfigMap` API gets a map of all user-defined key/value configurations for the window function.

```java

/**
 * Get a map of all user-defined key/value configs for the function.
 *
 * @return The full map of user-defined config values
 */
Map<String, Object> getUserConfigMap();

```

#### getUserConfigValue

The `getUserConfigValue` API gets a user-defined key/value.

```java

/**
 * Get any user-defined key/value.
 *
 * @param key The key
 * @return The Optional value specified by the user for that key.
 */
Optional<Object> getUserConfigValue(String key);

```

#### getUserConfigValueOrDefault

The `getUserConfigValueOrDefault` API gets a user-defined key/value or a default value if none is present.

```java

/**
 * Get any user-defined key/value or a default value if none is present.
 *
 * @param key
 * @param defaultValue
 * @return Either the user config value associated with a given key or a supplied default value
 */
Object getUserConfigValueOrDefault(String key, Object defaultValue);

```

This example demonstrates how to access key/value pairs provided to Pulsar window functions.

The Java SDK context object enables you to access key/value pairs provided to Pulsar window functions via the command line (as JSON).

:::tip

For all key/value pairs passed to Java window functions, both the `key` and the `value` are `String`. To set the value to a different type, you need to deserialize it from the `String` type.

:::

This example passes a key/value pair to a Java window function.

```bash

bin/pulsar-admin functions create \
  --user-config '{"word-of-the-day":"verdure"}' \
  # Other function configs

```

This example accesses values in a Java window function.

The `UserConfigWindowFunction` below reads the value of the `WhatToWrite` config key every time the function is invoked (which means every time a window of messages arrives). The value of a user config key is changed **only** when the function is updated with a new config value via multiple ways, such as the command-line tool or the REST API.

```java

import java.util.Collection;
import java.util.Optional;
import org.apache.pulsar.functions.api.Record;
import org.apache.pulsar.functions.api.WindowContext;
import org.apache.pulsar.functions.api.WindowFunction;

public class UserConfigWindowFunction implements WindowFunction<String, String> {
    @Override
    public String process(Collection<Record<String>> input, WindowContext context) throws Exception {
        Optional<Object> whatToWrite = context.getUserConfigValue("WhatToWrite");
        if (whatToWrite.isPresent()) {
            return (String) whatToWrite.get();
        } else {
            return "Not a nice way";
        }
    }

}

```

If no value is provided, you can access the entire user config map or set a default value.

```java

// Get the whole config map
Map<String, Object> allConfigs = context.getUserConfigMap();

// Get value or resort to default
String wotd = (String) context.getUserConfigValueOrDefault("word-of-the-day", "perspicacious");

```

## Routing

You can use the `context.publish()` interface to publish as many results as you want.

This example shows how the `PublishWindowFunction` class uses the built-in `publish` method in the context to publish messages to the topic configured by the `publish-topic` user config in a Java window function.

```java

public class PublishWindowFunction implements WindowFunction<String, Void> {
    @Override
    public Void process(Collection<Record<String>> input, WindowContext context) throws Exception {
        String publishTopic = (String) context.getUserConfigValueOrDefault("publish-topic", "publishtopic");
        String output = String.format("%s!", input);
        context.publish(publishTopic, output);

        return null;
    }

}

```

## State storage

Pulsar window functions use [Apache BookKeeper](https://bookkeeper.apache.org) as a state storage interface. An Apache Pulsar installation (including the standalone installation) includes the deployment of BookKeeper bookies.

Apache Pulsar integrates with the Apache BookKeeper `table service` to store the `state` for functions. For example, a `WordCount` function can store its `counters` state in the BookKeeper table service via the Pulsar Functions state APIs.

States are key-value pairs, where the key is a string and the value is arbitrary binary data; counters are stored as 64-bit big-endian binary values. Keys are scoped to an individual Pulsar Function and shared between instances of that function.

Currently, Pulsar window functions expose a Java API to access, update, and manage states. These APIs are available in the context object when you use Java SDK functions.

| Java API | Description |
|---|---|
|`incrCounter`|Increases a built-in distributed counter referred to by key. |
|`getCounter`|Gets the counter value for the key. |
|`putState`|Updates the state value for the key. |
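As a quick illustration of these APIs working together, the following is a minimal sketch of a window function that counts records and stores the latest record value; the class name and the state keys (`record-count`, `latest`) are illustrative, not part of the API.

```java

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Collection;
import org.apache.pulsar.functions.api.Record;
import org.apache.pulsar.functions.api.WindowContext;
import org.apache.pulsar.functions.api.WindowFunction;

public class StateSketchWindowFunction implements WindowFunction<String, Void> {
    @Override
    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
        // Increment a distributed counter by the number of records in this window.
        context.incrCounter("record-count", inputs.size());

        // Read the counter back; the value is shared across instances of the function.
        long total = context.getCounter("record-count");

        // Store an arbitrary binary value under a state key.
        for (Record<String> record : inputs) {
            context.putState("latest", ByteBuffer.wrap(record.getValue().getBytes(StandardCharsets.UTF_8)));
        }

        System.out.println("records so far: " + total);
        return null;
    }
}

```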
You can use the following APIs to access, update, and manage states in Java window functions.

#### incrCounter

The `incrCounter` API increases a built-in distributed counter referred to by key.

Applications use the `incrCounter` API to change the counter of a given `key` by the given `amount`. If the `key` does not exist, a new key is created.

```java

/**
 * Increment the built-in distributed counter referred to by key.
 *
 * @param key The name of the key
 * @param amount The amount to be incremented
 */
void incrCounter(String key, long amount);

```

#### getCounter

The `getCounter` API gets the counter value for the key.

Applications use the `getCounter` API to retrieve the counter of a given `key` changed by the `incrCounter` API.

```java

/**
 * Retrieve the counter value for the key.
 *
 * @param key name of the key
 * @return the amount of the counter value for this key
 */
long getCounter(String key);

```

Besides the counter APIs, Pulsar also exposes a general key/value API (`putState`) for functions to store general key/value state.

#### putState

The `putState` API updates the state value for the key.

```java

/**
 * Update the state value for the key.
 *
 * @param key name of the key
 * @param value state value of the key
 */
void putState(String key, ByteBuffer value);

```

This example demonstrates how applications store states in Pulsar window functions.

The logic of the `WordCountWindowFunction` below is simple and straightforward.

1. The function first splits the received string into multiple words using the regex `\\.` (that is, on the `.` character).

2. For each `word`, the function increments the corresponding `counter` by 1 via `incrCounter(key, amount)`.
- -```java - -import org.apache.pulsar.functions.api.Context; -import org.apache.pulsar.functions.api.Function; - -import java.util.Arrays; - -public class WordCountWindowFunction implements WindowFunction { - @Override - public Void process(Collection> inputs, WindowContext context) throws Exception { - for (Record input : inputs) { - Arrays.asList(input.getValue().split("\\.")).forEach(word -> context.incrCounter(word, 1)); - } - return null; - - } -} - -``` - diff --git a/site2/website/versioned_sidebars/version-2.10.0-sidebars.json b/site2/website/versioned_sidebars/version-2.10.0-sidebars.json deleted file mode 100644 index 62b63e6a438d0f..00000000000000 --- a/site2/website/versioned_sidebars/version-2.10.0-sidebars.json +++ /dev/null @@ -1,610 +0,0 @@ -{ - "version-2.10.0/docsSidebar": [ - { - "type": "doc", - "id": "version-2.10.0/about" - }, - { - "type": "category", - "label": "Get Started", - "items": [ - { - "type": "doc", - "id": "version-2.10.0/getting-started-standalone" - }, - { - "type": "doc", - "id": "version-2.10.0/getting-started-docker" - }, - { - "type": "doc", - "id": "version-2.10.0/getting-started-helm" - } - ] - }, - { - "type": "category", - "label": "Concepts and Architecture", - "items": [ - { - "type": "doc", - "id": "version-2.10.0/concepts-overview" - }, - { - "type": "doc", - "id": "version-2.10.0/concepts-messaging" - }, - { - "type": "doc", - "id": "version-2.10.0/concepts-architecture-overview" - }, - { - "type": "doc", - "id": "version-2.10.0/concepts-clients" - }, - { - "type": "doc", - "id": "version-2.10.0/concepts-replication" - }, - { - "type": "doc", - "id": "version-2.10.0/concepts-multi-tenancy" - }, - { - "type": "doc", - "id": "version-2.10.0/concepts-authentication" - }, - { - "type": "doc", - "id": "version-2.10.0/concepts-topic-compaction" - }, - { - "type": "doc", - "id": "version-2.10.0/concepts-proxy-sni-routing" - }, - { - "type": "doc", - "id": "version-2.10.0/concepts-multiple-advertised-listeners" - } - ] - }, - { - "type": "category", - "label": "Pulsar Schema", - "items": [ - { - "type": "doc", - "id": "version-2.10.0/schema-get-started" - }, - { - "type": "doc", - "id": "version-2.10.0/schema-understand" - }, - { - "type": "doc", - "id": "version-2.10.0/schema-evolution-compatibility" - }, - { - "type": "doc", - "id": "version-2.10.0/schema-manage" - } - ] - }, - { - "type": "category", - "label": "Pulsar Functions", - "items": [ - { - "type": "doc", - "id": "version-2.10.0/functions-overview" - }, - { - "type": "doc", - "id": "version-2.10.0/functions-runtime" - }, - { - "type": "doc", - "id": "version-2.10.0/functions-worker" - }, - { - "type": "doc", - "id": "version-2.10.0/functions-develop" - }, - { - "type": "doc", - "id": "version-2.10.0/functions-package" - }, - { - "type": "doc", - "id": "version-2.10.0/functions-debug" - }, - { - "type": "doc", - "id": "version-2.10.0/functions-deploy" - }, - { - "type": "doc", - "id": "version-2.10.0/functions-cli" - }, - { - "type": "doc", - "id": "version-2.10.0/window-functions-context" - } - ] - }, - { - "type": "category", - "label": "Pulsar IO", - "items": [ - { - "type": "doc", - "id": "version-2.10.0/io-overview" - }, - { - "type": "doc", - "id": "version-2.10.0/io-quickstart" - }, - { - "type": "doc", - "id": "version-2.10.0/io-use" - }, - { - "type": "doc", - "id": "version-2.10.0/io-debug" - }, - { - "type": "doc", - "id": "version-2.10.0/io-connectors" - }, - { - "type": "doc", - "id": "version-2.10.0/io-cdc" - }, - { - "type": "doc", - "id": "version-2.10.0/io-develop" - } 
- ] - }, - { - "type": "category", - "label": "Pulsar SQL", - "items": [ - { - "type": "doc", - "id": "version-2.10.0/sql-overview" - }, - { - "type": "doc", - "id": "version-2.10.0/sql-getting-started" - }, - { - "type": "doc", - "id": "version-2.10.0/sql-deployment-configurations" - }, - { - "type": "doc", - "id": "version-2.10.0/sql-rest-api" - } - ] - }, - { - "type": "category", - "label": "Tiered Storage", - "items": [ - { - "type": "doc", - "id": "version-2.10.0/tiered-storage-overview" - }, - { - "type": "doc", - "id": "version-2.10.0/tiered-storage-aws" - }, - { - "type": "doc", - "id": "version-2.10.0/tiered-storage-gcs" - }, - { - "type": "doc", - "id": "version-2.10.0/tiered-storage-filesystem" - }, - { - "type": "doc", - "id": "version-2.10.0/tiered-storage-azure" - }, - { - "type": "doc", - "id": "version-2.10.0/tiered-storage-aliyun" - } - ] - }, - { - "type": "category", - "label": "Transactions", - "items": [ - { - "type": "doc", - "id": "version-2.10.0/txn-why" - }, - { - "type": "doc", - "id": "version-2.10.0/txn-what" - }, - { - "type": "doc", - "id": "version-2.10.0/txn-how" - }, - { - "type": "doc", - "id": "version-2.10.0/txn-use" - }, - { - "type": "doc", - "id": "version-2.10.0/txn-monitor" - } - ] - }, - { - "type": "category", - "label": "Kubernetes (Helm)", - "items": [ - { - "type": "doc", - "id": "version-2.10.0/helm-overview" - }, - { - "type": "doc", - "id": "version-2.10.0/helm-prepare" - }, - { - "type": "doc", - "id": "version-2.10.0/helm-install" - }, - { - "type": "doc", - "id": "version-2.10.0/helm-deploy" - }, - { - "type": "doc", - "id": "version-2.10.0/helm-upgrade" - }, - { - "type": "doc", - "id": "version-2.10.0/helm-tools" - } - ] - }, - { - "type": "category", - "label": "Deployment", - "items": [ - { - "type": "doc", - "id": "version-2.10.0/deploy-aws" - }, - { - "type": "doc", - "id": "version-2.10.0/deploy-kubernetes" - }, - { - "type": "doc", - "id": "version-2.10.0/deploy-bare-metal" - }, - { - "type": "doc", - "id": "version-2.10.0/deploy-bare-metal-multi-cluster" - }, - { - "type": "doc", - "id": "version-2.10.0/deploy-dcos" - }, - { - "type": "doc", - "id": "version-2.10.0/deploy-docker" - }, - { - "type": "doc", - "id": "version-2.10.0/deploy-monitoring" - } - ] - }, - { - "type": "category", - "label": "Administration", - "items": [ - { - "type": "doc", - "id": "version-2.10.0/administration-zk-bk" - }, - { - "type": "doc", - "id": "version-2.10.0/administration-geo" - }, - { - "type": "doc", - "id": "version-2.10.0/administration-pulsar-manager" - }, - { - "type": "doc", - "id": "version-2.10.0/administration-stats" - }, - { - "type": "doc", - "id": "version-2.10.0/administration-load-balance" - }, - { - "type": "doc", - "id": "version-2.10.0/administration-proxy" - }, - { - "type": "doc", - "id": "version-2.10.0/administration-upgrade" - }, - { - "type": "doc", - "id": "version-2.10.0/administration-isolation" - } - ] - }, - { - "type": "category", - "label": "Security", - "items": [ - { - "type": "doc", - "id": "version-2.10.0/security-overview" - }, - { - "type": "doc", - "id": "version-2.10.0/security-policy-and-supported-versions" - }, - { - "type": "doc", - "id": "version-2.10.0/security-tls-transport" - }, - { - "type": "doc", - "id": "version-2.10.0/security-tls-authentication" - }, - { - "type": "doc", - "id": "version-2.10.0/security-tls-keystore" - }, - { - "type": "doc", - "id": "version-2.10.0/security-jwt" - }, - { - "type": "doc", - "id": "version-2.10.0/security-athenz" - }, - { - "type": "doc", - "id": 
"version-2.10.0/security-kerberos" - }, - { - "type": "doc", - "id": "version-2.10.0/security-oauth2" - }, - { - "type": "doc", - "id": "version-2.10.0/security-basic-auth" - }, - { - "type": "doc", - "id": "version-2.10.0/security-authorization" - }, - { - "type": "doc", - "id": "version-2.10.0/security-encryption" - }, - { - "type": "doc", - "id": "version-2.10.0/security-extending" - }, - { - "type": "doc", - "id": "version-2.10.0/security-bouncy-castle" - } - ] - }, - { - "type": "category", - "label": "Performance", - "items": [ - { - "type": "doc", - "id": "version-2.10.0/performance-pulsar-perf" - } - ] - }, - { - "type": "category", - "label": "Client Libraries", - "items": [ - { - "type": "doc", - "id": "version-2.10.0/client-libraries" - }, - { - "type": "doc", - "id": "version-2.10.0/client-libraries-java" - }, - { - "type": "doc", - "id": "version-2.10.0/client-libraries-go" - }, - { - "type": "doc", - "id": "version-2.10.0/client-libraries-python" - }, - { - "type": "doc", - "id": "version-2.10.0/client-libraries-cpp" - }, - { - "type": "doc", - "id": "version-2.10.0/client-libraries-node" - }, - { - "type": "doc", - "id": "version-2.10.0/client-libraries-websocket" - }, - { - "type": "doc", - "id": "version-2.10.0/client-libraries-dotnet" - }, - { - "type": "doc", - "id": "version-2.10.0/client-libraries-rest" - } - ] - }, - { - "type": "category", - "label": "Admin API", - "items": [ - { - "type": "doc", - "id": "version-2.10.0/admin-api-overview" - }, - { - "type": "doc", - "id": "version-2.10.0/admin-api-clusters" - }, - { - "type": "doc", - "id": "version-2.10.0/admin-api-tenants" - }, - { - "type": "doc", - "id": "version-2.10.0/admin-api-brokers" - }, - { - "type": "doc", - "id": "version-2.10.0/admin-api-namespaces" - }, - { - "type": "doc", - "id": "version-2.10.0/admin-api-permissions" - }, - { - "type": "doc", - "id": "version-2.10.0/admin-api-topics" - }, - { - "type": "doc", - "id": "version-2.10.0/admin-api-functions" - }, - { - "type": "doc", - "id": "version-2.10.0/admin-api-packages" - } - ] - }, - { - "type": "category", - "label": "Adaptors", - "items": [ - { - "type": "doc", - "id": "version-2.10.0/adaptors-kafka" - }, - { - "type": "doc", - "id": "version-2.10.0/adaptors-spark" - }, - { - "type": "doc", - "id": "version-2.10.0/adaptors-storm" - } - ] - }, - { - "type": "category", - "label": "Cookbooks", - "items": [ - { - "type": "doc", - "id": "version-2.10.0/cookbooks-compaction" - }, - { - "type": "doc", - "id": "version-2.10.0/cookbooks-deduplication" - }, - { - "type": "doc", - "id": "version-2.10.0/cookbooks-non-persistent" - }, - { - "type": "doc", - "id": "version-2.10.0/cookbooks-retention-expiry" - }, - { - "type": "doc", - "id": "version-2.10.0/cookbooks-encryption" - }, - { - "type": "doc", - "id": "version-2.10.0/cookbooks-message-queue" - }, - { - "type": "doc", - "id": "version-2.10.0/cookbooks-bookkeepermetadata" - } - ] - }, - { - "type": "category", - "label": "Development", - "items": [ - { - "type": "doc", - "id": "version-2.10.0/develop-tools" - }, - { - "type": "doc", - "id": "version-2.10.0/developing-binary-protocol" - }, - { - "type": "doc", - "id": "version-2.10.0/develop-schema" - }, - { - "type": "doc", - "id": "version-2.10.0/develop-load-manager" - }, - { - "type": "doc", - "id": "version-2.10.0/develop-plugin" - } - ] - }, - { - "type": "category", - "label": "Reference", - "items": [ - { - "type": "doc", - "id": "version-2.10.0/reference-terminology" - }, - { - "type": "doc", - "id": "version-2.10.0/reference-cli-tools" - 
}, - { - "type": "doc", - "id": "version-2.10.0/reference-configuration" - }, - { - "type": "doc", - "id": "version-2.10.0/reference-metrics" - }, - { - "type": "doc", - "id": "version-2.10.0/reference-rest-api-overview" - } - ] - } - ] -} \ No newline at end of file diff --git a/site2/website/versioned_sidebars/version-2.10.1-sidebars.json b/site2/website/versioned_sidebars/version-2.10.1-sidebars.json deleted file mode 100644 index a81ada2105dc64..00000000000000 --- a/site2/website/versioned_sidebars/version-2.10.1-sidebars.json +++ /dev/null @@ -1,610 +0,0 @@ -{ - "version-2.10.1/docsSidebar": [ - { - "type": "doc", - "id": "version-2.10.1/about" - }, - { - "type": "category", - "label": "Get Started", - "items": [ - { - "type": "doc", - "id": "version-2.10.1/getting-started-standalone" - }, - { - "type": "doc", - "id": "version-2.10.1/getting-started-docker" - }, - { - "type": "doc", - "id": "version-2.10.1/getting-started-helm" - } - ] - }, - { - "type": "category", - "label": "Concepts and Architecture", - "items": [ - { - "type": "doc", - "id": "version-2.10.1/concepts-overview" - }, - { - "type": "doc", - "id": "version-2.10.1/concepts-messaging" - }, - { - "type": "doc", - "id": "version-2.10.1/concepts-architecture-overview" - }, - { - "type": "doc", - "id": "version-2.10.1/concepts-clients" - }, - { - "type": "doc", - "id": "version-2.10.1/concepts-replication" - }, - { - "type": "doc", - "id": "version-2.10.1/concepts-multi-tenancy" - }, - { - "type": "doc", - "id": "version-2.10.1/concepts-authentication" - }, - { - "type": "doc", - "id": "version-2.10.1/concepts-topic-compaction" - }, - { - "type": "doc", - "id": "version-2.10.1/concepts-proxy-sni-routing" - }, - { - "type": "doc", - "id": "version-2.10.1/concepts-multiple-advertised-listeners" - } - ] - }, - { - "type": "category", - "label": "Pulsar Schema", - "items": [ - { - "type": "doc", - "id": "version-2.10.1/schema-get-started" - }, - { - "type": "doc", - "id": "version-2.10.1/schema-understand" - }, - { - "type": "doc", - "id": "version-2.10.1/schema-evolution-compatibility" - }, - { - "type": "doc", - "id": "version-2.10.1/schema-manage" - } - ] - }, - { - "type": "category", - "label": "Pulsar Functions", - "items": [ - { - "type": "doc", - "id": "version-2.10.1/functions-overview" - }, - { - "type": "doc", - "id": "version-2.10.1/functions-runtime" - }, - { - "type": "doc", - "id": "version-2.10.1/functions-worker" - }, - { - "type": "doc", - "id": "version-2.10.1/functions-develop" - }, - { - "type": "doc", - "id": "version-2.10.1/functions-package" - }, - { - "type": "doc", - "id": "version-2.10.1/functions-debug" - }, - { - "type": "doc", - "id": "version-2.10.1/functions-deploy" - }, - { - "type": "doc", - "id": "version-2.10.1/functions-cli" - }, - { - "type": "doc", - "id": "version-2.10.1/window-functions-context" - } - ] - }, - { - "type": "category", - "label": "Pulsar IO", - "items": [ - { - "type": "doc", - "id": "version-2.10.1/io-overview" - }, - { - "type": "doc", - "id": "version-2.10.1/io-quickstart" - }, - { - "type": "doc", - "id": "version-2.10.1/io-use" - }, - { - "type": "doc", - "id": "version-2.10.1/io-debug" - }, - { - "type": "doc", - "id": "version-2.10.1/io-connectors" - }, - { - "type": "doc", - "id": "version-2.10.1/io-cdc" - }, - { - "type": "doc", - "id": "version-2.10.1/io-develop" - } - ] - }, - { - "type": "category", - "label": "Pulsar SQL", - "items": [ - { - "type": "doc", - "id": "version-2.10.1/sql-overview" - }, - { - "type": "doc", - "id": 
"version-2.10.1/sql-getting-started" - }, - { - "type": "doc", - "id": "version-2.10.1/sql-deployment-configurations" - }, - { - "type": "doc", - "id": "version-2.10.1/sql-rest-api" - } - ] - }, - { - "type": "category", - "label": "Tiered Storage", - "items": [ - { - "type": "doc", - "id": "version-2.10.1/tiered-storage-overview" - }, - { - "type": "doc", - "id": "version-2.10.1/tiered-storage-aws" - }, - { - "type": "doc", - "id": "version-2.10.1/tiered-storage-gcs" - }, - { - "type": "doc", - "id": "version-2.10.1/tiered-storage-filesystem" - }, - { - "type": "doc", - "id": "version-2.10.1/tiered-storage-azure" - }, - { - "type": "doc", - "id": "version-2.10.1/tiered-storage-aliyun" - } - ] - }, - { - "type": "category", - "label": "Transactions", - "items": [ - { - "type": "doc", - "id": "version-2.10.1/txn-why" - }, - { - "type": "doc", - "id": "version-2.10.1/txn-what" - }, - { - "type": "doc", - "id": "version-2.10.1/txn-how" - }, - { - "type": "doc", - "id": "version-2.10.1/txn-use" - }, - { - "type": "doc", - "id": "version-2.10.1/txn-monitor" - } - ] - }, - { - "type": "category", - "label": "Kubernetes (Helm)", - "items": [ - { - "type": "doc", - "id": "version-2.10.1/helm-overview" - }, - { - "type": "doc", - "id": "version-2.10.1/helm-prepare" - }, - { - "type": "doc", - "id": "version-2.10.1/helm-install" - }, - { - "type": "doc", - "id": "version-2.10.1/helm-deploy" - }, - { - "type": "doc", - "id": "version-2.10.1/helm-upgrade" - }, - { - "type": "doc", - "id": "version-2.10.1/helm-tools" - } - ] - }, - { - "type": "category", - "label": "Deployment", - "items": [ - { - "type": "doc", - "id": "version-2.10.1/deploy-aws" - }, - { - "type": "doc", - "id": "version-2.10.1/deploy-kubernetes" - }, - { - "type": "doc", - "id": "version-2.10.1/deploy-bare-metal" - }, - { - "type": "doc", - "id": "version-2.10.1/deploy-bare-metal-multi-cluster" - }, - { - "type": "doc", - "id": "version-2.10.1/deploy-dcos" - }, - { - "type": "doc", - "id": "version-2.10.1/deploy-docker" - }, - { - "type": "doc", - "id": "version-2.10.1/deploy-monitoring" - } - ] - }, - { - "type": "category", - "label": "Administration", - "items": [ - { - "type": "doc", - "id": "version-2.10.1/administration-zk-bk" - }, - { - "type": "doc", - "id": "version-2.10.1/administration-geo" - }, - { - "type": "doc", - "id": "version-2.10.1/administration-pulsar-manager" - }, - { - "type": "doc", - "id": "version-2.10.1/administration-stats" - }, - { - "type": "doc", - "id": "version-2.10.1/administration-load-balance" - }, - { - "type": "doc", - "id": "version-2.10.1/administration-proxy" - }, - { - "type": "doc", - "id": "version-2.10.1/administration-upgrade" - }, - { - "type": "doc", - "id": "version-2.10.1/administration-isolation" - } - ] - }, - { - "type": "category", - "label": "Security", - "items": [ - { - "type": "doc", - "id": "version-2.10.1/security-overview" - }, - { - "type": "doc", - "id": "version-2.10.1/security-policy-and-supported-versions" - }, - { - "type": "doc", - "id": "version-2.10.1/security-tls-transport" - }, - { - "type": "doc", - "id": "version-2.10.1/security-tls-authentication" - }, - { - "type": "doc", - "id": "version-2.10.1/security-tls-keystore" - }, - { - "type": "doc", - "id": "version-2.10.1/security-jwt" - }, - { - "type": "doc", - "id": "version-2.10.1/security-athenz" - }, - { - "type": "doc", - "id": "version-2.10.1/security-kerberos" - }, - { - "type": "doc", - "id": "version-2.10.1/security-oauth2" - }, - { - "type": "doc", - "id": "version-2.10.1/security-basic-auth" - }, - 
{ - "type": "doc", - "id": "version-2.10.1/security-authorization" - }, - { - "type": "doc", - "id": "version-2.10.1/security-encryption" - }, - { - "type": "doc", - "id": "version-2.10.1/security-extending" - }, - { - "type": "doc", - "id": "version-2.10.1/security-bouncy-castle" - } - ] - }, - { - "type": "category", - "label": "Performance", - "items": [ - { - "type": "doc", - "id": "version-2.10.1/performance-pulsar-perf" - } - ] - }, - { - "type": "category", - "label": "Client Libraries", - "items": [ - { - "type": "doc", - "id": "version-2.10.1/client-libraries" - }, - { - "type": "doc", - "id": "version-2.10.1/client-libraries-java" - }, - { - "type": "doc", - "id": "version-2.10.1/client-libraries-go" - }, - { - "type": "doc", - "id": "version-2.10.1/client-libraries-python" - }, - { - "type": "doc", - "id": "version-2.10.1/client-libraries-cpp" - }, - { - "type": "doc", - "id": "version-2.10.1/client-libraries-node" - }, - { - "type": "doc", - "id": "version-2.10.1/client-libraries-websocket" - }, - { - "type": "doc", - "id": "version-2.10.1/client-libraries-dotnet" - }, - { - "type": "doc", - "id": "version-2.10.1/client-libraries-rest" - } - ] - }, - { - "type": "category", - "label": "Admin API", - "items": [ - { - "type": "doc", - "id": "version-2.10.1/admin-api-overview" - }, - { - "type": "doc", - "id": "version-2.10.1/admin-api-clusters" - }, - { - "type": "doc", - "id": "version-2.10.1/admin-api-tenants" - }, - { - "type": "doc", - "id": "version-2.10.1/admin-api-brokers" - }, - { - "type": "doc", - "id": "version-2.10.1/admin-api-namespaces" - }, - { - "type": "doc", - "id": "version-2.10.1/admin-api-permissions" - }, - { - "type": "doc", - "id": "version-2.10.1/admin-api-topics" - }, - { - "type": "doc", - "id": "version-2.10.1/admin-api-functions" - }, - { - "type": "doc", - "id": "version-2.10.1/admin-api-packages" - } - ] - }, - { - "type": "category", - "label": "Adaptors", - "items": [ - { - "type": "doc", - "id": "version-2.10.1/adaptors-kafka" - }, - { - "type": "doc", - "id": "version-2.10.1/adaptors-spark" - }, - { - "type": "doc", - "id": "version-2.10.1/adaptors-storm" - } - ] - }, - { - "type": "category", - "label": "Cookbooks", - "items": [ - { - "type": "doc", - "id": "version-2.10.1/cookbooks-compaction" - }, - { - "type": "doc", - "id": "version-2.10.1/cookbooks-deduplication" - }, - { - "type": "doc", - "id": "version-2.10.1/cookbooks-non-persistent" - }, - { - "type": "doc", - "id": "version-2.10.1/cookbooks-retention-expiry" - }, - { - "type": "doc", - "id": "version-2.10.1/cookbooks-encryption" - }, - { - "type": "doc", - "id": "version-2.10.1/cookbooks-message-queue" - }, - { - "type": "doc", - "id": "version-2.10.1/cookbooks-bookkeepermetadata" - } - ] - }, - { - "type": "category", - "label": "Development", - "items": [ - { - "type": "doc", - "id": "version-2.10.1/develop-tools" - }, - { - "type": "doc", - "id": "version-2.10.1/developing-binary-protocol" - }, - { - "type": "doc", - "id": "version-2.10.1/develop-schema" - }, - { - "type": "doc", - "id": "version-2.10.1/develop-load-manager" - }, - { - "type": "doc", - "id": "version-2.10.1/develop-plugin" - } - ] - }, - { - "type": "category", - "label": "Reference", - "items": [ - { - "type": "doc", - "id": "version-2.10.1/reference-terminology" - }, - { - "type": "doc", - "id": "version-2.10.1/reference-cli-tools" - }, - { - "type": "doc", - "id": "version-2.10.1/reference-configuration" - }, - { - "type": "doc", - "id": "version-2.10.1/reference-metrics" - }, - { - "type": "doc", - "id": 
"version-2.10.1/reference-rest-api-overview" - } - ] - } - ] -} \ No newline at end of file diff --git a/site2/website/versioned_sidebars/version-2.8.0-sidebars.json b/site2/website/versioned_sidebars/version-2.8.0-sidebars.json deleted file mode 100644 index 93086cb74391a0..00000000000000 --- a/site2/website/versioned_sidebars/version-2.8.0-sidebars.json +++ /dev/null @@ -1,598 +0,0 @@ -{ - "version-2.8.0/docsSidebar": [ - { - "type": "doc", - "id": "version-2.8.0/about" - }, - { - "type": "category", - "label": "Get Started", - "items": [ - { - "type": "doc", - "id": "version-2.8.0/getting-started-standalone" - }, - { - "type": "doc", - "id": "version-2.8.0/getting-started-docker" - }, - { - "type": "doc", - "id": "version-2.8.0/getting-started-helm" - } - ] - }, - { - "type": "category", - "label": "Concepts and Architecture", - "items": [ - { - "type": "doc", - "id": "version-2.8.0/concepts-overview" - }, - { - "type": "doc", - "id": "version-2.8.0/concepts-messaging" - }, - { - "type": "doc", - "id": "version-2.8.0/concepts-architecture-overview" - }, - { - "type": "doc", - "id": "version-2.8.0/concepts-clients" - }, - { - "type": "doc", - "id": "version-2.8.0/concepts-replication" - }, - { - "type": "doc", - "id": "version-2.8.0/concepts-multi-tenancy" - }, - { - "type": "doc", - "id": "version-2.8.0/concepts-authentication" - }, - { - "type": "doc", - "id": "version-2.8.0/concepts-topic-compaction" - }, - { - "type": "doc", - "id": "version-2.8.0/concepts-proxy-sni-routing" - }, - { - "type": "doc", - "id": "version-2.8.0/concepts-multiple-advertised-listeners" - } - ] - }, - { - "type": "category", - "label": "Pulsar Schema", - "items": [ - { - "type": "doc", - "id": "version-2.8.0/schema-get-started" - }, - { - "type": "doc", - "id": "version-2.8.0/schema-understand" - }, - { - "type": "doc", - "id": "version-2.8.0/schema-evolution-compatibility" - }, - { - "type": "doc", - "id": "version-2.8.0/schema-manage" - } - ] - }, - { - "type": "category", - "label": "Pulsar Functions", - "items": [ - { - "type": "doc", - "id": "version-2.8.0/functions-overview" - }, - { - "type": "doc", - "id": "version-2.8.0/functions-runtime" - }, - { - "type": "doc", - "id": "version-2.8.0/functions-worker" - }, - { - "type": "doc", - "id": "version-2.8.0/functions-develop" - }, - { - "type": "doc", - "id": "version-2.8.0/functions-package" - }, - { - "type": "doc", - "id": "version-2.8.0/functions-debug" - }, - { - "type": "doc", - "id": "version-2.8.0/functions-deploy" - }, - { - "type": "doc", - "id": "version-2.8.0/functions-cli" - }, - { - "type": "doc", - "id": "version-2.8.0/window-functions-context" - } - ] - }, - { - "type": "category", - "label": "Pulsar IO", - "items": [ - { - "type": "doc", - "id": "version-2.8.0/io-overview" - }, - { - "type": "doc", - "id": "version-2.8.0/io-quickstart" - }, - { - "type": "doc", - "id": "version-2.8.0/io-use" - }, - { - "type": "doc", - "id": "version-2.8.0/io-debug" - }, - { - "type": "doc", - "id": "version-2.8.0/io-connectors" - }, - { - "type": "doc", - "id": "version-2.8.0/io-cdc" - }, - { - "type": "doc", - "id": "version-2.8.0/io-develop" - }, - { - "type": "doc", - "id": "version-2.8.0/io-cli" - } - ] - }, - { - "type": "category", - "label": "Pulsar SQL", - "items": [ - { - "type": "doc", - "id": "version-2.8.0/sql-overview" - }, - { - "type": "doc", - "id": "version-2.8.0/sql-getting-started" - }, - { - "type": "doc", - "id": "version-2.8.0/sql-deployment-configurations" - }, - { - "type": "doc", - "id": "version-2.8.0/sql-rest-api" - } - ] - }, 
- { - "type": "category", - "label": "Tiered Storage", - "items": [ - { - "type": "doc", - "id": "version-2.8.0/tiered-storage-overview" - }, - { - "type": "doc", - "id": "version-2.8.0/tiered-storage-aws" - }, - { - "type": "doc", - "id": "version-2.8.0/tiered-storage-gcs" - }, - { - "type": "doc", - "id": "version-2.8.0/tiered-storage-filesystem" - }, - { - "type": "doc", - "id": "version-2.8.0/tiered-storage-azure" - }, - { - "type": "doc", - "id": "version-2.8.0/tiered-storage-aliyun" - } - ] - }, - { - "type": "category", - "label": "Transactions", - "items": [ - { - "type": "doc", - "id": "version-2.8.0/txn-why" - }, - { - "type": "doc", - "id": "version-2.8.0/txn-what" - }, - { - "type": "doc", - "id": "version-2.8.0/txn-how" - }, - { - "type": "doc", - "id": "version-2.8.0/txn-use" - }, - { - "type": "doc", - "id": "version-2.8.0/txn-monitor" - } - ] - }, - { - "type": "category", - "label": "Kubernetes (Helm)", - "items": [ - { - "type": "doc", - "id": "version-2.8.0/helm-overview" - }, - { - "type": "doc", - "id": "version-2.8.0/helm-prepare" - }, - { - "type": "doc", - "id": "version-2.8.0/helm-install" - }, - { - "type": "doc", - "id": "version-2.8.0/helm-deploy" - }, - { - "type": "doc", - "id": "version-2.8.0/helm-upgrade" - }, - { - "type": "doc", - "id": "version-2.8.0/helm-tools" - } - ] - }, - { - "type": "category", - "label": "Deployment", - "items": [ - { - "type": "doc", - "id": "version-2.8.0/deploy-aws" - }, - { - "type": "doc", - "id": "version-2.8.0/deploy-kubernetes" - }, - { - "type": "doc", - "id": "version-2.8.0/deploy-bare-metal" - }, - { - "type": "doc", - "id": "version-2.8.0/deploy-bare-metal-multi-cluster" - }, - { - "type": "doc", - "id": "version-2.8.0/deploy-docker" - }, - { - "type": "doc", - "id": "version-2.8.0/deploy-monitoring" - } - ] - }, - { - "type": "category", - "label": "Administration", - "items": [ - { - "type": "doc", - "id": "version-2.8.0/administration-zk-bk" - }, - { - "type": "doc", - "id": "version-2.8.0/administration-geo" - }, - { - "type": "doc", - "id": "version-2.8.0/administration-pulsar-manager" - }, - { - "type": "doc", - "id": "version-2.8.0/administration-stats" - }, - { - "type": "doc", - "id": "version-2.8.0/administration-load-balance" - }, - { - "type": "doc", - "id": "version-2.8.0/administration-proxy" - }, - { - "type": "doc", - "id": "version-2.8.0/administration-upgrade" - }, - { - "type": "doc", - "id": "version-2.8.0/administration-isolation" - } - ] - }, - { - "type": "category", - "label": "Security", - "items": [ - { - "type": "doc", - "id": "version-2.8.0/security-overview" - }, - { - "type": "doc", - "id": "version-2.8.0/security-tls-transport" - }, - { - "type": "doc", - "id": "version-2.8.0/security-tls-authentication" - }, - { - "type": "doc", - "id": "version-2.8.0/security-tls-keystore" - }, - { - "type": "doc", - "id": "version-2.8.0/security-jwt" - }, - { - "type": "doc", - "id": "version-2.8.0/security-athenz" - }, - { - "type": "doc", - "id": "version-2.8.0/security-kerberos" - }, - { - "type": "doc", - "id": "version-2.8.0/security-oauth2" - }, - { - "type": "doc", - "id": "version-2.8.0/security-basic-auth" - }, - { - "type": "doc", - "id": "version-2.8.0/security-authorization" - }, - { - "type": "doc", - "id": "version-2.8.0/security-encryption" - }, - { - "type": "doc", - "id": "version-2.8.0/security-extending" - }, - { - "type": "doc", - "id": "version-2.8.0/security-bouncy-castle" - } - ] - }, - { - "type": "category", - "label": "Performance", - "items": [ - { - "type": "doc", - "id": 
"version-2.8.0/performance-pulsar-perf" - } - ] - }, - { - "type": "category", - "label": "Client Libraries", - "items": [ - { - "type": "doc", - "id": "version-2.8.0/client-libraries" - }, - { - "type": "doc", - "id": "version-2.8.0/client-libraries-java" - }, - { - "type": "doc", - "id": "version-2.8.0/client-libraries-go" - }, - { - "type": "doc", - "id": "version-2.8.0/client-libraries-python" - }, - { - "type": "doc", - "id": "version-2.8.0/client-libraries-cpp" - }, - { - "type": "doc", - "id": "version-2.8.0/client-libraries-node" - }, - { - "type": "doc", - "id": "version-2.8.0/client-libraries-websocket" - }, - { - "type": "doc", - "id": "version-2.8.0/client-libraries-dotnet" - } - ] - }, - { - "type": "category", - "label": "Admin API", - "items": [ - { - "type": "doc", - "id": "version-2.8.0/admin-api-overview" - }, - { - "type": "doc", - "id": "version-2.8.0/admin-api-clusters" - }, - { - "type": "doc", - "id": "version-2.8.0/admin-api-tenants" - }, - { - "type": "doc", - "id": "version-2.8.0/admin-api-brokers" - }, - { - "type": "doc", - "id": "version-2.8.0/admin-api-namespaces" - }, - { - "type": "doc", - "id": "version-2.8.0/admin-api-permissions" - }, - { - "type": "doc", - "id": "version-2.8.0/admin-api-topics" - }, - { - "type": "doc", - "id": "version-2.8.0/admin-api-functions" - }, - { - "type": "doc", - "id": "version-2.8.0/admin-api-packages" - } - ] - }, - { - "type": "category", - "label": "Adaptors", - "items": [ - { - "type": "doc", - "id": "version-2.8.0/adaptors-kafka" - }, - { - "type": "doc", - "id": "version-2.8.0/adaptors-spark" - }, - { - "type": "doc", - "id": "version-2.8.0/adaptors-storm" - } - ] - }, - { - "type": "category", - "label": "Cookbooks", - "items": [ - { - "type": "doc", - "id": "version-2.8.0/cookbooks-compaction" - }, - { - "type": "doc", - "id": "version-2.8.0/cookbooks-deduplication" - }, - { - "type": "doc", - "id": "version-2.8.0/cookbooks-non-persistent" - }, - { - "type": "doc", - "id": "version-2.8.0/cookbooks-retention-expiry" - }, - { - "type": "doc", - "id": "version-2.8.0/cookbooks-encryption" - }, - { - "type": "doc", - "id": "version-2.8.0/cookbooks-message-queue" - }, - { - "type": "doc", - "id": "version-2.8.0/cookbooks-bookkeepermetadata" - } - ] - }, - { - "type": "category", - "label": "Development", - "items": [ - { - "type": "doc", - "id": "version-2.8.0/develop-tools" - }, - { - "type": "doc", - "id": "version-2.8.0/developing-binary-protocol" - }, - { - "type": "doc", - "id": "version-2.8.0/develop-schema" - }, - { - "type": "doc", - "id": "version-2.8.0/develop-load-manager" - } - ] - }, - { - "type": "category", - "label": "Reference", - "items": [ - { - "type": "doc", - "id": "version-2.8.0/reference-terminology" - }, - { - "type": "doc", - "id": "version-2.8.0/reference-cli-tools" - }, - { - "type": "doc", - "id": "version-2.8.0/reference-configuration" - }, - { - "type": "doc", - "id": "version-2.8.0/reference-metrics" - }, - { - "type": "doc", - "id": "version-2.8.0/reference-rest-api-overview" - } - ] - } - ] -} \ No newline at end of file diff --git a/site2/website/versioned_sidebars/version-2.8.1-sidebars.json b/site2/website/versioned_sidebars/version-2.8.1-sidebars.json deleted file mode 100644 index a2663dfc1ab8b5..00000000000000 --- a/site2/website/versioned_sidebars/version-2.8.1-sidebars.json +++ /dev/null @@ -1,598 +0,0 @@ -{ - "version-2.8.1/docsSidebar": [ - { - "type": "doc", - "id": "version-2.8.1/about" - }, - { - "type": "category", - "label": "Get Started", - "items": [ - { - "type": "doc", - 
"id": "version-2.8.1/getting-started-standalone" - }, - { - "type": "doc", - "id": "version-2.8.1/getting-started-docker" - }, - { - "type": "doc", - "id": "version-2.8.1/getting-started-helm" - } - ] - }, - { - "type": "category", - "label": "Concepts and Architecture", - "items": [ - { - "type": "doc", - "id": "version-2.8.1/concepts-overview" - }, - { - "type": "doc", - "id": "version-2.8.1/concepts-messaging" - }, - { - "type": "doc", - "id": "version-2.8.1/concepts-architecture-overview" - }, - { - "type": "doc", - "id": "version-2.8.1/concepts-clients" - }, - { - "type": "doc", - "id": "version-2.8.1/concepts-replication" - }, - { - "type": "doc", - "id": "version-2.8.1/concepts-multi-tenancy" - }, - { - "type": "doc", - "id": "version-2.8.1/concepts-authentication" - }, - { - "type": "doc", - "id": "version-2.8.1/concepts-topic-compaction" - }, - { - "type": "doc", - "id": "version-2.8.1/concepts-proxy-sni-routing" - }, - { - "type": "doc", - "id": "version-2.8.1/concepts-multiple-advertised-listeners" - } - ] - }, - { - "type": "category", - "label": "Pulsar Schema", - "items": [ - { - "type": "doc", - "id": "version-2.8.1/schema-get-started" - }, - { - "type": "doc", - "id": "version-2.8.1/schema-understand" - }, - { - "type": "doc", - "id": "version-2.8.1/schema-evolution-compatibility" - }, - { - "type": "doc", - "id": "version-2.8.1/schema-manage" - } - ] - }, - { - "type": "category", - "label": "Pulsar Functions", - "items": [ - { - "type": "doc", - "id": "version-2.8.1/functions-overview" - }, - { - "type": "doc", - "id": "version-2.8.1/functions-runtime" - }, - { - "type": "doc", - "id": "version-2.8.1/functions-worker" - }, - { - "type": "doc", - "id": "version-2.8.1/functions-develop" - }, - { - "type": "doc", - "id": "version-2.8.1/functions-package" - }, - { - "type": "doc", - "id": "version-2.8.1/functions-debug" - }, - { - "type": "doc", - "id": "version-2.8.1/functions-deploy" - }, - { - "type": "doc", - "id": "version-2.8.1/functions-cli" - }, - { - "type": "doc", - "id": "version-2.8.1/window-functions-context" - } - ] - }, - { - "type": "category", - "label": "Pulsar IO", - "items": [ - { - "type": "doc", - "id": "version-2.8.1/io-overview" - }, - { - "type": "doc", - "id": "version-2.8.1/io-quickstart" - }, - { - "type": "doc", - "id": "version-2.8.1/io-use" - }, - { - "type": "doc", - "id": "version-2.8.1/io-debug" - }, - { - "type": "doc", - "id": "version-2.8.1/io-connectors" - }, - { - "type": "doc", - "id": "version-2.8.1/io-cdc" - }, - { - "type": "doc", - "id": "version-2.8.1/io-develop" - }, - { - "type": "doc", - "id": "version-2.8.1/io-cli" - } - ] - }, - { - "type": "category", - "label": "Pulsar SQL", - "items": [ - { - "type": "doc", - "id": "version-2.8.1/sql-overview" - }, - { - "type": "doc", - "id": "version-2.8.1/sql-getting-started" - }, - { - "type": "doc", - "id": "version-2.8.1/sql-deployment-configurations" - }, - { - "type": "doc", - "id": "version-2.8.1/sql-rest-api" - } - ] - }, - { - "type": "category", - "label": "Tiered Storage", - "items": [ - { - "type": "doc", - "id": "version-2.8.1/tiered-storage-overview" - }, - { - "type": "doc", - "id": "version-2.8.1/tiered-storage-aws" - }, - { - "type": "doc", - "id": "version-2.8.1/tiered-storage-gcs" - }, - { - "type": "doc", - "id": "version-2.8.1/tiered-storage-filesystem" - }, - { - "type": "doc", - "id": "version-2.8.1/tiered-storage-azure" - }, - { - "type": "doc", - "id": "version-2.8.1/tiered-storage-aliyun" - } - ] - }, - { - "type": "category", - "label": "Transactions", - 
"items": [ - { - "type": "doc", - "id": "version-2.8.1/txn-why" - }, - { - "type": "doc", - "id": "version-2.8.1/txn-what" - }, - { - "type": "doc", - "id": "version-2.8.1/txn-how" - }, - { - "type": "doc", - "id": "version-2.8.1/txn-use" - }, - { - "type": "doc", - "id": "version-2.8.1/txn-monitor" - } - ] - }, - { - "type": "category", - "label": "Kubernetes (Helm)", - "items": [ - { - "type": "doc", - "id": "version-2.8.1/helm-overview" - }, - { - "type": "doc", - "id": "version-2.8.1/helm-prepare" - }, - { - "type": "doc", - "id": "version-2.8.1/helm-install" - }, - { - "type": "doc", - "id": "version-2.8.1/helm-deploy" - }, - { - "type": "doc", - "id": "version-2.8.1/helm-upgrade" - }, - { - "type": "doc", - "id": "version-2.8.1/helm-tools" - } - ] - }, - { - "type": "category", - "label": "Deployment", - "items": [ - { - "type": "doc", - "id": "version-2.8.1/deploy-aws" - }, - { - "type": "doc", - "id": "version-2.8.1/deploy-kubernetes" - }, - { - "type": "doc", - "id": "version-2.8.1/deploy-bare-metal" - }, - { - "type": "doc", - "id": "version-2.8.1/deploy-bare-metal-multi-cluster" - }, - { - "type": "doc", - "id": "version-2.8.1/deploy-docker" - }, - { - "type": "doc", - "id": "version-2.8.1/deploy-monitoring" - } - ] - }, - { - "type": "category", - "label": "Administration", - "items": [ - { - "type": "doc", - "id": "version-2.8.1/administration-zk-bk" - }, - { - "type": "doc", - "id": "version-2.8.1/administration-geo" - }, - { - "type": "doc", - "id": "version-2.8.1/administration-pulsar-manager" - }, - { - "type": "doc", - "id": "version-2.8.1/administration-stats" - }, - { - "type": "doc", - "id": "version-2.8.1/administration-load-balance" - }, - { - "type": "doc", - "id": "version-2.8.1/administration-proxy" - }, - { - "type": "doc", - "id": "version-2.8.1/administration-upgrade" - }, - { - "type": "doc", - "id": "version-2.8.1/administration-isolation" - } - ] - }, - { - "type": "category", - "label": "Security", - "items": [ - { - "type": "doc", - "id": "version-2.8.1/security-overview" - }, - { - "type": "doc", - "id": "version-2.8.1/security-tls-transport" - }, - { - "type": "doc", - "id": "version-2.8.1/security-tls-authentication" - }, - { - "type": "doc", - "id": "version-2.8.1/security-tls-keystore" - }, - { - "type": "doc", - "id": "version-2.8.1/security-jwt" - }, - { - "type": "doc", - "id": "version-2.8.1/security-athenz" - }, - { - "type": "doc", - "id": "version-2.8.1/security-kerberos" - }, - { - "type": "doc", - "id": "version-2.8.1/security-oauth2" - }, - { - "type": "doc", - "id": "version-2.8.1/security-basic-auth" - }, - { - "type": "doc", - "id": "version-2.8.1/security-authorization" - }, - { - "type": "doc", - "id": "version-2.8.1/security-encryption" - }, - { - "type": "doc", - "id": "version-2.8.1/security-extending" - }, - { - "type": "doc", - "id": "version-2.8.1/security-bouncy-castle" - } - ] - }, - { - "type": "category", - "label": "Performance", - "items": [ - { - "type": "doc", - "id": "version-2.8.1/performance-pulsar-perf" - } - ] - }, - { - "type": "category", - "label": "Client Libraries", - "items": [ - { - "type": "doc", - "id": "version-2.8.1/client-libraries" - }, - { - "type": "doc", - "id": "version-2.8.1/client-libraries-java" - }, - { - "type": "doc", - "id": "version-2.8.1/client-libraries-go" - }, - { - "type": "doc", - "id": "version-2.8.1/client-libraries-python" - }, - { - "type": "doc", - "id": "version-2.8.1/client-libraries-cpp" - }, - { - "type": "doc", - "id": "version-2.8.1/client-libraries-node" - }, - { - "type": 
"doc", - "id": "version-2.8.1/client-libraries-websocket" - }, - { - "type": "doc", - "id": "version-2.8.1/client-libraries-dotnet" - } - ] - }, - { - "type": "category", - "label": "Admin API", - "items": [ - { - "type": "doc", - "id": "version-2.8.1/admin-api-overview" - }, - { - "type": "doc", - "id": "version-2.8.1/admin-api-clusters" - }, - { - "type": "doc", - "id": "version-2.8.1/admin-api-tenants" - }, - { - "type": "doc", - "id": "version-2.8.1/admin-api-brokers" - }, - { - "type": "doc", - "id": "version-2.8.1/admin-api-namespaces" - }, - { - "type": "doc", - "id": "version-2.8.1/admin-api-permissions" - }, - { - "type": "doc", - "id": "version-2.8.1/admin-api-topics" - }, - { - "type": "doc", - "id": "version-2.8.1/admin-api-functions" - }, - { - "type": "doc", - "id": "version-2.8.1/admin-api-packages" - } - ] - }, - { - "type": "category", - "label": "Adaptors", - "items": [ - { - "type": "doc", - "id": "version-2.8.1/adaptors-kafka" - }, - { - "type": "doc", - "id": "version-2.8.1/adaptors-spark" - }, - { - "type": "doc", - "id": "version-2.8.1/adaptors-storm" - } - ] - }, - { - "type": "category", - "label": "Cookbooks", - "items": [ - { - "type": "doc", - "id": "version-2.8.1/cookbooks-compaction" - }, - { - "type": "doc", - "id": "version-2.8.1/cookbooks-deduplication" - }, - { - "type": "doc", - "id": "version-2.8.1/cookbooks-non-persistent" - }, - { - "type": "doc", - "id": "version-2.8.1/cookbooks-retention-expiry" - }, - { - "type": "doc", - "id": "version-2.8.1/cookbooks-encryption" - }, - { - "type": "doc", - "id": "version-2.8.1/cookbooks-message-queue" - }, - { - "type": "doc", - "id": "version-2.8.1/cookbooks-bookkeepermetadata" - } - ] - }, - { - "type": "category", - "label": "Development", - "items": [ - { - "type": "doc", - "id": "version-2.8.1/develop-tools" - }, - { - "type": "doc", - "id": "version-2.8.1/developing-binary-protocol" - }, - { - "type": "doc", - "id": "version-2.8.1/develop-schema" - }, - { - "type": "doc", - "id": "version-2.8.1/develop-load-manager" - } - ] - }, - { - "type": "category", - "label": "Reference", - "items": [ - { - "type": "doc", - "id": "version-2.8.1/reference-terminology" - }, - { - "type": "doc", - "id": "version-2.8.1/reference-cli-tools" - }, - { - "type": "doc", - "id": "version-2.8.1/reference-configuration" - }, - { - "type": "doc", - "id": "version-2.8.1/reference-metrics" - }, - { - "type": "doc", - "id": "version-2.8.1/reference-rest-api-overview" - } - ] - } - ] -} \ No newline at end of file diff --git a/site2/website/versioned_sidebars/version-2.8.2-sidebars.json b/site2/website/versioned_sidebars/version-2.8.2-sidebars.json deleted file mode 100644 index e4042e191d74c4..00000000000000 --- a/site2/website/versioned_sidebars/version-2.8.2-sidebars.json +++ /dev/null @@ -1,598 +0,0 @@ -{ - "version-2.8.2/docsSidebar": [ - { - "type": "doc", - "id": "version-2.8.2/about" - }, - { - "type": "category", - "label": "Get Started", - "items": [ - { - "type": "doc", - "id": "version-2.8.2/getting-started-standalone" - }, - { - "type": "doc", - "id": "version-2.8.2/getting-started-docker" - }, - { - "type": "doc", - "id": "version-2.8.2/getting-started-helm" - } - ] - }, - { - "type": "category", - "label": "Concepts and Architecture", - "items": [ - { - "type": "doc", - "id": "version-2.8.2/concepts-overview" - }, - { - "type": "doc", - "id": "version-2.8.2/concepts-messaging" - }, - { - "type": "doc", - "id": "version-2.8.2/concepts-architecture-overview" - }, - { - "type": "doc", - "id": 
"version-2.8.2/concepts-clients" - }, - { - "type": "doc", - "id": "version-2.8.2/concepts-replication" - }, - { - "type": "doc", - "id": "version-2.8.2/concepts-multi-tenancy" - }, - { - "type": "doc", - "id": "version-2.8.2/concepts-authentication" - }, - { - "type": "doc", - "id": "version-2.8.2/concepts-topic-compaction" - }, - { - "type": "doc", - "id": "version-2.8.2/concepts-proxy-sni-routing" - }, - { - "type": "doc", - "id": "version-2.8.2/concepts-multiple-advertised-listeners" - } - ] - }, - { - "type": "category", - "label": "Pulsar Schema", - "items": [ - { - "type": "doc", - "id": "version-2.8.2/schema-get-started" - }, - { - "type": "doc", - "id": "version-2.8.2/schema-understand" - }, - { - "type": "doc", - "id": "version-2.8.2/schema-evolution-compatibility" - }, - { - "type": "doc", - "id": "version-2.8.2/schema-manage" - } - ] - }, - { - "type": "category", - "label": "Pulsar Functions", - "items": [ - { - "type": "doc", - "id": "version-2.8.2/functions-overview" - }, - { - "type": "doc", - "id": "version-2.8.2/functions-runtime" - }, - { - "type": "doc", - "id": "version-2.8.2/functions-worker" - }, - { - "type": "doc", - "id": "version-2.8.2/functions-develop" - }, - { - "type": "doc", - "id": "version-2.8.2/functions-package" - }, - { - "type": "doc", - "id": "version-2.8.2/functions-debug" - }, - { - "type": "doc", - "id": "version-2.8.2/functions-deploy" - }, - { - "type": "doc", - "id": "version-2.8.2/functions-cli" - }, - { - "type": "doc", - "id": "version-2.8.2/window-functions-context" - } - ] - }, - { - "type": "category", - "label": "Pulsar IO", - "items": [ - { - "type": "doc", - "id": "version-2.8.2/io-overview" - }, - { - "type": "doc", - "id": "version-2.8.2/io-quickstart" - }, - { - "type": "doc", - "id": "version-2.8.2/io-use" - }, - { - "type": "doc", - "id": "version-2.8.2/io-debug" - }, - { - "type": "doc", - "id": "version-2.8.2/io-connectors" - }, - { - "type": "doc", - "id": "version-2.8.2/io-cdc" - }, - { - "type": "doc", - "id": "version-2.8.2/io-develop" - }, - { - "type": "doc", - "id": "version-2.8.2/io-cli" - } - ] - }, - { - "type": "category", - "label": "Pulsar SQL", - "items": [ - { - "type": "doc", - "id": "version-2.8.2/sql-overview" - }, - { - "type": "doc", - "id": "version-2.8.2/sql-getting-started" - }, - { - "type": "doc", - "id": "version-2.8.2/sql-deployment-configurations" - }, - { - "type": "doc", - "id": "version-2.8.2/sql-rest-api" - } - ] - }, - { - "type": "category", - "label": "Tiered Storage", - "items": [ - { - "type": "doc", - "id": "version-2.8.2/tiered-storage-overview" - }, - { - "type": "doc", - "id": "version-2.8.2/tiered-storage-aws" - }, - { - "type": "doc", - "id": "version-2.8.2/tiered-storage-gcs" - }, - { - "type": "doc", - "id": "version-2.8.2/tiered-storage-filesystem" - }, - { - "type": "doc", - "id": "version-2.8.2/tiered-storage-azure" - }, - { - "type": "doc", - "id": "version-2.8.2/tiered-storage-aliyun" - } - ] - }, - { - "type": "category", - "label": "Transactions", - "items": [ - { - "type": "doc", - "id": "version-2.8.2/txn-why" - }, - { - "type": "doc", - "id": "version-2.8.2/txn-what" - }, - { - "type": "doc", - "id": "version-2.8.2/txn-how" - }, - { - "type": "doc", - "id": "version-2.8.2/txn-use" - }, - { - "type": "doc", - "id": "version-2.8.2/txn-monitor" - } - ] - }, - { - "type": "category", - "label": "Kubernetes (Helm)", - "items": [ - { - "type": "doc", - "id": "version-2.8.2/helm-overview" - }, - { - "type": "doc", - "id": "version-2.8.2/helm-prepare" - }, - { - "type": "doc", - 
"id": "version-2.8.2/helm-install" - }, - { - "type": "doc", - "id": "version-2.8.2/helm-deploy" - }, - { - "type": "doc", - "id": "version-2.8.2/helm-upgrade" - }, - { - "type": "doc", - "id": "version-2.8.2/helm-tools" - } - ] - }, - { - "type": "category", - "label": "Deployment", - "items": [ - { - "type": "doc", - "id": "version-2.8.2/deploy-aws" - }, - { - "type": "doc", - "id": "version-2.8.2/deploy-kubernetes" - }, - { - "type": "doc", - "id": "version-2.8.2/deploy-bare-metal" - }, - { - "type": "doc", - "id": "version-2.8.2/deploy-bare-metal-multi-cluster" - }, - { - "type": "doc", - "id": "version-2.8.2/deploy-docker" - }, - { - "type": "doc", - "id": "version-2.8.2/deploy-monitoring" - } - ] - }, - { - "type": "category", - "label": "Administration", - "items": [ - { - "type": "doc", - "id": "version-2.8.2/administration-zk-bk" - }, - { - "type": "doc", - "id": "version-2.8.2/administration-geo" - }, - { - "type": "doc", - "id": "version-2.8.2/administration-pulsar-manager" - }, - { - "type": "doc", - "id": "version-2.8.2/administration-stats" - }, - { - "type": "doc", - "id": "version-2.8.2/administration-load-balance" - }, - { - "type": "doc", - "id": "version-2.8.2/administration-proxy" - }, - { - "type": "doc", - "id": "version-2.8.2/administration-upgrade" - }, - { - "type": "doc", - "id": "version-2.8.2/administration-isolation" - } - ] - }, - { - "type": "category", - "label": "Security", - "items": [ - { - "type": "doc", - "id": "version-2.8.2/security-overview" - }, - { - "type": "doc", - "id": "version-2.8.2/security-tls-transport" - }, - { - "type": "doc", - "id": "version-2.8.2/security-tls-authentication" - }, - { - "type": "doc", - "id": "version-2.8.2/security-tls-keystore" - }, - { - "type": "doc", - "id": "version-2.8.2/security-jwt" - }, - { - "type": "doc", - "id": "version-2.8.2/security-athenz" - }, - { - "type": "doc", - "id": "version-2.8.2/security-kerberos" - }, - { - "type": "doc", - "id": "version-2.8.2/security-oauth2" - }, - { - "type": "doc", - "id": "version-2.8.2/security-basic-auth" - }, - { - "type": "doc", - "id": "version-2.8.2/security-authorization" - }, - { - "type": "doc", - "id": "version-2.8.2/security-encryption" - }, - { - "type": "doc", - "id": "version-2.8.2/security-extending" - }, - { - "type": "doc", - "id": "version-2.8.2/security-bouncy-castle" - } - ] - }, - { - "type": "category", - "label": "Performance", - "items": [ - { - "type": "doc", - "id": "version-2.8.2/performance-pulsar-perf" - } - ] - }, - { - "type": "category", - "label": "Client Libraries", - "items": [ - { - "type": "doc", - "id": "version-2.8.2/client-libraries" - }, - { - "type": "doc", - "id": "version-2.8.2/client-libraries-java" - }, - { - "type": "doc", - "id": "version-2.8.2/client-libraries-go" - }, - { - "type": "doc", - "id": "version-2.8.2/client-libraries-python" - }, - { - "type": "doc", - "id": "version-2.8.2/client-libraries-cpp" - }, - { - "type": "doc", - "id": "version-2.8.2/client-libraries-node" - }, - { - "type": "doc", - "id": "version-2.8.2/client-libraries-websocket" - }, - { - "type": "doc", - "id": "version-2.8.2/client-libraries-dotnet" - } - ] - }, - { - "type": "category", - "label": "Admin API", - "items": [ - { - "type": "doc", - "id": "version-2.8.2/admin-api-overview" - }, - { - "type": "doc", - "id": "version-2.8.2/admin-api-clusters" - }, - { - "type": "doc", - "id": "version-2.8.2/admin-api-tenants" - }, - { - "type": "doc", - "id": "version-2.8.2/admin-api-brokers" - }, - { - "type": "doc", - "id": 
"version-2.8.2/admin-api-namespaces" - }, - { - "type": "doc", - "id": "version-2.8.2/admin-api-permissions" - }, - { - "type": "doc", - "id": "version-2.8.2/admin-api-topics" - }, - { - "type": "doc", - "id": "version-2.8.2/admin-api-functions" - }, - { - "type": "doc", - "id": "version-2.8.2/admin-api-packages" - } - ] - }, - { - "type": "category", - "label": "Adaptors", - "items": [ - { - "type": "doc", - "id": "version-2.8.2/adaptors-kafka" - }, - { - "type": "doc", - "id": "version-2.8.2/adaptors-spark" - }, - { - "type": "doc", - "id": "version-2.8.2/adaptors-storm" - } - ] - }, - { - "type": "category", - "label": "Cookbooks", - "items": [ - { - "type": "doc", - "id": "version-2.8.2/cookbooks-compaction" - }, - { - "type": "doc", - "id": "version-2.8.2/cookbooks-deduplication" - }, - { - "type": "doc", - "id": "version-2.8.2/cookbooks-non-persistent" - }, - { - "type": "doc", - "id": "version-2.8.2/cookbooks-retention-expiry" - }, - { - "type": "doc", - "id": "version-2.8.2/cookbooks-encryption" - }, - { - "type": "doc", - "id": "version-2.8.2/cookbooks-message-queue" - }, - { - "type": "doc", - "id": "version-2.8.2/cookbooks-bookkeepermetadata" - } - ] - }, - { - "type": "category", - "label": "Development", - "items": [ - { - "type": "doc", - "id": "version-2.8.2/develop-tools" - }, - { - "type": "doc", - "id": "version-2.8.2/developing-binary-protocol" - }, - { - "type": "doc", - "id": "version-2.8.2/develop-schema" - }, - { - "type": "doc", - "id": "version-2.8.2/develop-load-manager" - } - ] - }, - { - "type": "category", - "label": "Reference", - "items": [ - { - "type": "doc", - "id": "version-2.8.2/reference-terminology" - }, - { - "type": "doc", - "id": "version-2.8.2/reference-cli-tools" - }, - { - "type": "doc", - "id": "version-2.8.2/reference-configuration" - }, - { - "type": "doc", - "id": "version-2.8.2/reference-metrics" - }, - { - "type": "doc", - "id": "version-2.8.2/reference-rest-api-overview" - } - ] - } - ] -} \ No newline at end of file diff --git a/site2/website/versioned_sidebars/version-2.8.3-sidebars.json b/site2/website/versioned_sidebars/version-2.8.3-sidebars.json deleted file mode 100644 index fe90c3a08dce63..00000000000000 --- a/site2/website/versioned_sidebars/version-2.8.3-sidebars.json +++ /dev/null @@ -1,598 +0,0 @@ -{ - "version-2.8.3/docsSidebar": [ - { - "type": "doc", - "id": "version-2.8.3/about" - }, - { - "type": "category", - "label": "Get Started", - "items": [ - { - "type": "doc", - "id": "version-2.8.3/getting-started-standalone" - }, - { - "type": "doc", - "id": "version-2.8.3/getting-started-docker" - }, - { - "type": "doc", - "id": "version-2.8.3/getting-started-helm" - } - ] - }, - { - "type": "category", - "label": "Concepts and Architecture", - "items": [ - { - "type": "doc", - "id": "version-2.8.3/concepts-overview" - }, - { - "type": "doc", - "id": "version-2.8.3/concepts-messaging" - }, - { - "type": "doc", - "id": "version-2.8.3/concepts-architecture-overview" - }, - { - "type": "doc", - "id": "version-2.8.3/concepts-clients" - }, - { - "type": "doc", - "id": "version-2.8.3/concepts-replication" - }, - { - "type": "doc", - "id": "version-2.8.3/concepts-multi-tenancy" - }, - { - "type": "doc", - "id": "version-2.8.3/concepts-authentication" - }, - { - "type": "doc", - "id": "version-2.8.3/concepts-topic-compaction" - }, - { - "type": "doc", - "id": "version-2.8.3/concepts-proxy-sni-routing" - }, - { - "type": "doc", - "id": "version-2.8.3/concepts-multiple-advertised-listeners" - } - ] - }, - { - "type": "category", - "label": 
"Pulsar Schema", - "items": [ - { - "type": "doc", - "id": "version-2.8.3/schema-get-started" - }, - { - "type": "doc", - "id": "version-2.8.3/schema-understand" - }, - { - "type": "doc", - "id": "version-2.8.3/schema-evolution-compatibility" - }, - { - "type": "doc", - "id": "version-2.8.3/schema-manage" - } - ] - }, - { - "type": "category", - "label": "Pulsar Functions", - "items": [ - { - "type": "doc", - "id": "version-2.8.3/functions-overview" - }, - { - "type": "doc", - "id": "version-2.8.3/functions-runtime" - }, - { - "type": "doc", - "id": "version-2.8.3/functions-worker" - }, - { - "type": "doc", - "id": "version-2.8.3/functions-develop" - }, - { - "type": "doc", - "id": "version-2.8.3/functions-package" - }, - { - "type": "doc", - "id": "version-2.8.3/functions-debug" - }, - { - "type": "doc", - "id": "version-2.8.3/functions-deploy" - }, - { - "type": "doc", - "id": "version-2.8.3/functions-cli" - }, - { - "type": "doc", - "id": "version-2.8.3/window-functions-context" - } - ] - }, - { - "type": "category", - "label": "Pulsar IO", - "items": [ - { - "type": "doc", - "id": "version-2.8.3/io-overview" - }, - { - "type": "doc", - "id": "version-2.8.3/io-quickstart" - }, - { - "type": "doc", - "id": "version-2.8.3/io-use" - }, - { - "type": "doc", - "id": "version-2.8.3/io-debug" - }, - { - "type": "doc", - "id": "version-2.8.3/io-connectors" - }, - { - "type": "doc", - "id": "version-2.8.3/io-cdc" - }, - { - "type": "doc", - "id": "version-2.8.3/io-develop" - }, - { - "type": "doc", - "id": "version-2.8.3/io-cli" - } - ] - }, - { - "type": "category", - "label": "Pulsar SQL", - "items": [ - { - "type": "doc", - "id": "version-2.8.3/sql-overview" - }, - { - "type": "doc", - "id": "version-2.8.3/sql-getting-started" - }, - { - "type": "doc", - "id": "version-2.8.3/sql-deployment-configurations" - }, - { - "type": "doc", - "id": "version-2.8.3/sql-rest-api" - } - ] - }, - { - "type": "category", - "label": "Tiered Storage", - "items": [ - { - "type": "doc", - "id": "version-2.8.3/tiered-storage-overview" - }, - { - "type": "doc", - "id": "version-2.8.3/tiered-storage-aws" - }, - { - "type": "doc", - "id": "version-2.8.3/tiered-storage-gcs" - }, - { - "type": "doc", - "id": "version-2.8.3/tiered-storage-filesystem" - }, - { - "type": "doc", - "id": "version-2.8.3/tiered-storage-azure" - }, - { - "type": "doc", - "id": "version-2.8.3/tiered-storage-aliyun" - } - ] - }, - { - "type": "category", - "label": "Transactions", - "items": [ - { - "type": "doc", - "id": "version-2.8.3/txn-why" - }, - { - "type": "doc", - "id": "version-2.8.3/txn-what" - }, - { - "type": "doc", - "id": "version-2.8.3/txn-how" - }, - { - "type": "doc", - "id": "version-2.8.3/txn-use" - }, - { - "type": "doc", - "id": "version-2.8.3/txn-monitor" - } - ] - }, - { - "type": "category", - "label": "Kubernetes (Helm)", - "items": [ - { - "type": "doc", - "id": "version-2.8.3/helm-overview" - }, - { - "type": "doc", - "id": "version-2.8.3/helm-prepare" - }, - { - "type": "doc", - "id": "version-2.8.3/helm-install" - }, - { - "type": "doc", - "id": "version-2.8.3/helm-deploy" - }, - { - "type": "doc", - "id": "version-2.8.3/helm-upgrade" - }, - { - "type": "doc", - "id": "version-2.8.3/helm-tools" - } - ] - }, - { - "type": "category", - "label": "Deployment", - "items": [ - { - "type": "doc", - "id": "version-2.8.3/deploy-aws" - }, - { - "type": "doc", - "id": "version-2.8.3/deploy-kubernetes" - }, - { - "type": "doc", - "id": "version-2.8.3/deploy-bare-metal" - }, - { - "type": "doc", - "id": 
"version-2.8.3/deploy-bare-metal-multi-cluster" - }, - { - "type": "doc", - "id": "version-2.8.3/deploy-docker" - }, - { - "type": "doc", - "id": "version-2.8.3/deploy-monitoring" - } - ] - }, - { - "type": "category", - "label": "Administration", - "items": [ - { - "type": "doc", - "id": "version-2.8.3/administration-zk-bk" - }, - { - "type": "doc", - "id": "version-2.8.3/administration-geo" - }, - { - "type": "doc", - "id": "version-2.8.3/administration-pulsar-manager" - }, - { - "type": "doc", - "id": "version-2.8.3/administration-stats" - }, - { - "type": "doc", - "id": "version-2.8.3/administration-load-balance" - }, - { - "type": "doc", - "id": "version-2.8.3/administration-proxy" - }, - { - "type": "doc", - "id": "version-2.8.3/administration-upgrade" - }, - { - "type": "doc", - "id": "version-2.8.3/administration-isolation" - } - ] - }, - { - "type": "category", - "label": "Security", - "items": [ - { - "type": "doc", - "id": "version-2.8.3/security-overview" - }, - { - "type": "doc", - "id": "version-2.8.3/security-tls-transport" - }, - { - "type": "doc", - "id": "version-2.8.3/security-tls-authentication" - }, - { - "type": "doc", - "id": "version-2.8.3/security-tls-keystore" - }, - { - "type": "doc", - "id": "version-2.8.3/security-jwt" - }, - { - "type": "doc", - "id": "version-2.8.3/security-athenz" - }, - { - "type": "doc", - "id": "version-2.8.3/security-kerberos" - }, - { - "type": "doc", - "id": "version-2.8.3/security-oauth2" - }, - { - "type": "doc", - "id": "version-2.8.3/security-basic-auth" - }, - { - "type": "doc", - "id": "version-2.8.3/security-authorization" - }, - { - "type": "doc", - "id": "version-2.8.3/security-encryption" - }, - { - "type": "doc", - "id": "version-2.8.3/security-extending" - }, - { - "type": "doc", - "id": "version-2.8.3/security-bouncy-castle" - } - ] - }, - { - "type": "category", - "label": "Performance", - "items": [ - { - "type": "doc", - "id": "version-2.8.3/performance-pulsar-perf" - } - ] - }, - { - "type": "category", - "label": "Client Libraries", - "items": [ - { - "type": "doc", - "id": "version-2.8.3/client-libraries" - }, - { - "type": "doc", - "id": "version-2.8.3/client-libraries-java" - }, - { - "type": "doc", - "id": "version-2.8.3/client-libraries-go" - }, - { - "type": "doc", - "id": "version-2.8.3/client-libraries-python" - }, - { - "type": "doc", - "id": "version-2.8.3/client-libraries-cpp" - }, - { - "type": "doc", - "id": "version-2.8.3/client-libraries-node" - }, - { - "type": "doc", - "id": "version-2.8.3/client-libraries-websocket" - }, - { - "type": "doc", - "id": "version-2.8.3/client-libraries-dotnet" - } - ] - }, - { - "type": "category", - "label": "Admin API", - "items": [ - { - "type": "doc", - "id": "version-2.8.3/admin-api-overview" - }, - { - "type": "doc", - "id": "version-2.8.3/admin-api-clusters" - }, - { - "type": "doc", - "id": "version-2.8.3/admin-api-tenants" - }, - { - "type": "doc", - "id": "version-2.8.3/admin-api-brokers" - }, - { - "type": "doc", - "id": "version-2.8.3/admin-api-namespaces" - }, - { - "type": "doc", - "id": "version-2.8.3/admin-api-permissions" - }, - { - "type": "doc", - "id": "version-2.8.3/admin-api-topics" - }, - { - "type": "doc", - "id": "version-2.8.3/admin-api-functions" - }, - { - "type": "doc", - "id": "version-2.8.3/admin-api-packages" - } - ] - }, - { - "type": "category", - "label": "Adaptors", - "items": [ - { - "type": "doc", - "id": "version-2.8.3/adaptors-kafka" - }, - { - "type": "doc", - "id": "version-2.8.3/adaptors-spark" - }, - { - "type": "doc", - "id": 
"version-2.8.3/adaptors-storm" - } - ] - }, - { - "type": "category", - "label": "Cookbooks", - "items": [ - { - "type": "doc", - "id": "version-2.8.3/cookbooks-compaction" - }, - { - "type": "doc", - "id": "version-2.8.3/cookbooks-deduplication" - }, - { - "type": "doc", - "id": "version-2.8.3/cookbooks-non-persistent" - }, - { - "type": "doc", - "id": "version-2.8.3/cookbooks-retention-expiry" - }, - { - "type": "doc", - "id": "version-2.8.3/cookbooks-encryption" - }, - { - "type": "doc", - "id": "version-2.8.3/cookbooks-message-queue" - }, - { - "type": "doc", - "id": "version-2.8.3/cookbooks-bookkeepermetadata" - } - ] - }, - { - "type": "category", - "label": "Development", - "items": [ - { - "type": "doc", - "id": "version-2.8.3/develop-tools" - }, - { - "type": "doc", - "id": "version-2.8.3/developing-binary-protocol" - }, - { - "type": "doc", - "id": "version-2.8.3/develop-schema" - }, - { - "type": "doc", - "id": "version-2.8.3/develop-load-manager" - } - ] - }, - { - "type": "category", - "label": "Reference", - "items": [ - { - "type": "doc", - "id": "version-2.8.3/reference-terminology" - }, - { - "type": "doc", - "id": "version-2.8.3/reference-cli-tools" - }, - { - "type": "doc", - "id": "version-2.8.3/reference-configuration" - }, - { - "type": "doc", - "id": "version-2.8.3/reference-metrics" - }, - { - "type": "doc", - "id": "version-2.8.3/reference-rest-api-overview" - } - ] - } - ] -} \ No newline at end of file diff --git a/site2/website/versioned_sidebars/version-2.9.0-sidebars.json b/site2/website/versioned_sidebars/version-2.9.0-sidebars.json deleted file mode 100644 index ddad14652a4627..00000000000000 --- a/site2/website/versioned_sidebars/version-2.9.0-sidebars.json +++ /dev/null @@ -1,598 +0,0 @@ -{ - "version-2.9.0/docsSidebar": [ - { - "type": "doc", - "id": "version-2.9.0/about" - }, - { - "type": "category", - "label": "Get Started", - "items": [ - { - "type": "doc", - "id": "version-2.9.0/getting-started-standalone" - }, - { - "type": "doc", - "id": "version-2.9.0/getting-started-docker" - }, - { - "type": "doc", - "id": "version-2.9.0/getting-started-helm" - } - ] - }, - { - "type": "category", - "label": "Concepts and Architecture", - "items": [ - { - "type": "doc", - "id": "version-2.9.0/concepts-overview" - }, - { - "type": "doc", - "id": "version-2.9.0/concepts-messaging" - }, - { - "type": "doc", - "id": "version-2.9.0/concepts-architecture-overview" - }, - { - "type": "doc", - "id": "version-2.9.0/concepts-clients" - }, - { - "type": "doc", - "id": "version-2.9.0/concepts-replication" - }, - { - "type": "doc", - "id": "version-2.9.0/concepts-multi-tenancy" - }, - { - "type": "doc", - "id": "version-2.9.0/concepts-authentication" - }, - { - "type": "doc", - "id": "version-2.9.0/concepts-topic-compaction" - }, - { - "type": "doc", - "id": "version-2.9.0/concepts-proxy-sni-routing" - }, - { - "type": "doc", - "id": "version-2.9.0/concepts-multiple-advertised-listeners" - } - ] - }, - { - "type": "category", - "label": "Pulsar Schema", - "items": [ - { - "type": "doc", - "id": "version-2.9.0/schema-get-started" - }, - { - "type": "doc", - "id": "version-2.9.0/schema-understand" - }, - { - "type": "doc", - "id": "version-2.9.0/schema-evolution-compatibility" - }, - { - "type": "doc", - "id": "version-2.9.0/schema-manage" - } - ] - }, - { - "type": "category", - "label": "Pulsar Functions", - "items": [ - { - "type": "doc", - "id": "version-2.9.0/functions-overview" - }, - { - "type": "doc", - "id": "version-2.9.0/functions-runtime" - }, - { - "type": "doc", 
- "id": "version-2.9.0/functions-worker" - }, - { - "type": "doc", - "id": "version-2.9.0/functions-develop" - }, - { - "type": "doc", - "id": "version-2.9.0/functions-package" - }, - { - "type": "doc", - "id": "version-2.9.0/functions-debug" - }, - { - "type": "doc", - "id": "version-2.9.0/functions-deploy" - }, - { - "type": "doc", - "id": "version-2.9.0/functions-cli" - }, - { - "type": "doc", - "id": "version-2.9.0/window-functions-context" - } - ] - }, - { - "type": "category", - "label": "Pulsar IO", - "items": [ - { - "type": "doc", - "id": "version-2.9.0/io-overview" - }, - { - "type": "doc", - "id": "version-2.9.0/io-quickstart" - }, - { - "type": "doc", - "id": "version-2.9.0/io-use" - }, - { - "type": "doc", - "id": "version-2.9.0/io-debug" - }, - { - "type": "doc", - "id": "version-2.9.0/io-connectors" - }, - { - "type": "doc", - "id": "version-2.9.0/io-cdc" - }, - { - "type": "doc", - "id": "version-2.9.0/io-develop" - }, - { - "type": "doc", - "id": "version-2.9.0/io-cli" - } - ] - }, - { - "type": "category", - "label": "Pulsar SQL", - "items": [ - { - "type": "doc", - "id": "version-2.9.0/sql-overview" - }, - { - "type": "doc", - "id": "version-2.9.0/sql-getting-started" - }, - { - "type": "doc", - "id": "version-2.9.0/sql-deployment-configurations" - }, - { - "type": "doc", - "id": "version-2.9.0/sql-rest-api" - } - ] - }, - { - "type": "category", - "label": "Tiered Storage", - "items": [ - { - "type": "doc", - "id": "version-2.9.0/tiered-storage-overview" - }, - { - "type": "doc", - "id": "version-2.9.0/tiered-storage-aws" - }, - { - "type": "doc", - "id": "version-2.9.0/tiered-storage-gcs" - }, - { - "type": "doc", - "id": "version-2.9.0/tiered-storage-filesystem" - }, - { - "type": "doc", - "id": "version-2.9.0/tiered-storage-azure" - }, - { - "type": "doc", - "id": "version-2.9.0/tiered-storage-aliyun" - } - ] - }, - { - "type": "category", - "label": "Transactions", - "items": [ - { - "type": "doc", - "id": "version-2.9.0/txn-why" - }, - { - "type": "doc", - "id": "version-2.9.0/txn-what" - }, - { - "type": "doc", - "id": "version-2.9.0/txn-how" - }, - { - "type": "doc", - "id": "version-2.9.0/txn-use" - }, - { - "type": "doc", - "id": "version-2.9.0/txn-monitor" - } - ] - }, - { - "type": "category", - "label": "Kubernetes (Helm)", - "items": [ - { - "type": "doc", - "id": "version-2.9.0/helm-overview" - }, - { - "type": "doc", - "id": "version-2.9.0/helm-prepare" - }, - { - "type": "doc", - "id": "version-2.9.0/helm-install" - }, - { - "type": "doc", - "id": "version-2.9.0/helm-deploy" - }, - { - "type": "doc", - "id": "version-2.9.0/helm-upgrade" - }, - { - "type": "doc", - "id": "version-2.9.0/helm-tools" - } - ] - }, - { - "type": "category", - "label": "Deployment", - "items": [ - { - "type": "doc", - "id": "version-2.9.0/deploy-aws" - }, - { - "type": "doc", - "id": "version-2.9.0/deploy-kubernetes" - }, - { - "type": "doc", - "id": "version-2.9.0/deploy-bare-metal" - }, - { - "type": "doc", - "id": "version-2.9.0/deploy-bare-metal-multi-cluster" - }, - { - "type": "doc", - "id": "version-2.9.0/deploy-docker" - }, - { - "type": "doc", - "id": "version-2.9.0/deploy-monitoring" - } - ] - }, - { - "type": "category", - "label": "Administration", - "items": [ - { - "type": "doc", - "id": "version-2.9.0/administration-zk-bk" - }, - { - "type": "doc", - "id": "version-2.9.0/administration-geo" - }, - { - "type": "doc", - "id": "version-2.9.0/administration-pulsar-manager" - }, - { - "type": "doc", - "id": "version-2.9.0/administration-stats" - }, - { - "type": 
"doc", - "id": "version-2.9.0/administration-load-balance" - }, - { - "type": "doc", - "id": "version-2.9.0/administration-proxy" - }, - { - "type": "doc", - "id": "version-2.9.0/administration-upgrade" - }, - { - "type": "doc", - "id": "version-2.9.0/administration-isolation" - } - ] - }, - { - "type": "category", - "label": "Security", - "items": [ - { - "type": "doc", - "id": "version-2.9.0/security-overview" - }, - { - "type": "doc", - "id": "version-2.9.0/security-tls-transport" - }, - { - "type": "doc", - "id": "version-2.9.0/security-tls-authentication" - }, - { - "type": "doc", - "id": "version-2.9.0/security-tls-keystore" - }, - { - "type": "doc", - "id": "version-2.9.0/security-jwt" - }, - { - "type": "doc", - "id": "version-2.9.0/security-athenz" - }, - { - "type": "doc", - "id": "version-2.9.0/security-kerberos" - }, - { - "type": "doc", - "id": "version-2.9.0/security-oauth2" - }, - { - "type": "doc", - "id": "version-2.9.0/security-basic-auth" - }, - { - "type": "doc", - "id": "version-2.9.0/security-authorization" - }, - { - "type": "doc", - "id": "version-2.9.0/security-encryption" - }, - { - "type": "doc", - "id": "version-2.9.0/security-extending" - }, - { - "type": "doc", - "id": "version-2.9.0/security-bouncy-castle" - } - ] - }, - { - "type": "category", - "label": "Performance", - "items": [ - { - "type": "doc", - "id": "version-2.9.0/performance-pulsar-perf" - } - ] - }, - { - "type": "category", - "label": "Client Libraries", - "items": [ - { - "type": "doc", - "id": "version-2.9.0/client-libraries" - }, - { - "type": "doc", - "id": "version-2.9.0/client-libraries-java" - }, - { - "type": "doc", - "id": "version-2.9.0/client-libraries-go" - }, - { - "type": "doc", - "id": "version-2.9.0/client-libraries-python" - }, - { - "type": "doc", - "id": "version-2.9.0/client-libraries-cpp" - }, - { - "type": "doc", - "id": "version-2.9.0/client-libraries-node" - }, - { - "type": "doc", - "id": "version-2.9.0/client-libraries-websocket" - }, - { - "type": "doc", - "id": "version-2.9.0/client-libraries-dotnet" - } - ] - }, - { - "type": "category", - "label": "Admin API", - "items": [ - { - "type": "doc", - "id": "version-2.9.0/admin-api-overview" - }, - { - "type": "doc", - "id": "version-2.9.0/admin-api-clusters" - }, - { - "type": "doc", - "id": "version-2.9.0/admin-api-tenants" - }, - { - "type": "doc", - "id": "version-2.9.0/admin-api-brokers" - }, - { - "type": "doc", - "id": "version-2.9.0/admin-api-namespaces" - }, - { - "type": "doc", - "id": "version-2.9.0/admin-api-permissions" - }, - { - "type": "doc", - "id": "version-2.9.0/admin-api-topics" - }, - { - "type": "doc", - "id": "version-2.9.0/admin-api-functions" - }, - { - "type": "doc", - "id": "version-2.9.0/admin-api-packages" - } - ] - }, - { - "type": "category", - "label": "Adaptors", - "items": [ - { - "type": "doc", - "id": "version-2.9.0/adaptors-kafka" - }, - { - "type": "doc", - "id": "version-2.9.0/adaptors-spark" - }, - { - "type": "doc", - "id": "version-2.9.0/adaptors-storm" - } - ] - }, - { - "type": "category", - "label": "Cookbooks", - "items": [ - { - "type": "doc", - "id": "version-2.9.0/cookbooks-compaction" - }, - { - "type": "doc", - "id": "version-2.9.0/cookbooks-deduplication" - }, - { - "type": "doc", - "id": "version-2.9.0/cookbooks-non-persistent" - }, - { - "type": "doc", - "id": "version-2.9.0/cookbooks-retention-expiry" - }, - { - "type": "doc", - "id": "version-2.9.0/cookbooks-encryption" - }, - { - "type": "doc", - "id": "version-2.9.0/cookbooks-message-queue" - }, - { - "type": 
"doc", - "id": "version-2.9.0/cookbooks-bookkeepermetadata" - } - ] - }, - { - "type": "category", - "label": "Development", - "items": [ - { - "type": "doc", - "id": "version-2.9.0/develop-tools" - }, - { - "type": "doc", - "id": "version-2.9.0/developing-binary-protocol" - }, - { - "type": "doc", - "id": "version-2.9.0/develop-schema" - }, - { - "type": "doc", - "id": "version-2.9.0/develop-load-manager" - } - ] - }, - { - "type": "category", - "label": "Reference", - "items": [ - { - "type": "doc", - "id": "version-2.9.0/reference-terminology" - }, - { - "type": "doc", - "id": "version-2.9.0/reference-cli-tools" - }, - { - "type": "doc", - "id": "version-2.9.0/reference-configuration" - }, - { - "type": "doc", - "id": "version-2.9.0/reference-metrics" - }, - { - "type": "doc", - "id": "version-2.9.0/reference-rest-api-overview" - } - ] - } - ] -} \ No newline at end of file diff --git a/site2/website/versioned_sidebars/version-2.9.1-sidebars.json b/site2/website/versioned_sidebars/version-2.9.1-sidebars.json deleted file mode 100644 index e230a87663644f..00000000000000 --- a/site2/website/versioned_sidebars/version-2.9.1-sidebars.json +++ /dev/null @@ -1,598 +0,0 @@ -{ - "version-2.9.1/docsSidebar": [ - { - "type": "doc", - "id": "version-2.9.1/about" - }, - { - "type": "category", - "label": "Get Started", - "items": [ - { - "type": "doc", - "id": "version-2.9.1/getting-started-standalone" - }, - { - "type": "doc", - "id": "version-2.9.1/getting-started-docker" - }, - { - "type": "doc", - "id": "version-2.9.1/getting-started-helm" - } - ] - }, - { - "type": "category", - "label": "Concepts and Architecture", - "items": [ - { - "type": "doc", - "id": "version-2.9.1/concepts-overview" - }, - { - "type": "doc", - "id": "version-2.9.1/concepts-messaging" - }, - { - "type": "doc", - "id": "version-2.9.1/concepts-architecture-overview" - }, - { - "type": "doc", - "id": "version-2.9.1/concepts-clients" - }, - { - "type": "doc", - "id": "version-2.9.1/concepts-replication" - }, - { - "type": "doc", - "id": "version-2.9.1/concepts-multi-tenancy" - }, - { - "type": "doc", - "id": "version-2.9.1/concepts-authentication" - }, - { - "type": "doc", - "id": "version-2.9.1/concepts-topic-compaction" - }, - { - "type": "doc", - "id": "version-2.9.1/concepts-proxy-sni-routing" - }, - { - "type": "doc", - "id": "version-2.9.1/concepts-multiple-advertised-listeners" - } - ] - }, - { - "type": "category", - "label": "Pulsar Schema", - "items": [ - { - "type": "doc", - "id": "version-2.9.1/schema-get-started" - }, - { - "type": "doc", - "id": "version-2.9.1/schema-understand" - }, - { - "type": "doc", - "id": "version-2.9.1/schema-evolution-compatibility" - }, - { - "type": "doc", - "id": "version-2.9.1/schema-manage" - } - ] - }, - { - "type": "category", - "label": "Pulsar Functions", - "items": [ - { - "type": "doc", - "id": "version-2.9.1/functions-overview" - }, - { - "type": "doc", - "id": "version-2.9.1/functions-runtime" - }, - { - "type": "doc", - "id": "version-2.9.1/functions-worker" - }, - { - "type": "doc", - "id": "version-2.9.1/functions-develop" - }, - { - "type": "doc", - "id": "version-2.9.1/functions-package" - }, - { - "type": "doc", - "id": "version-2.9.1/functions-debug" - }, - { - "type": "doc", - "id": "version-2.9.1/functions-deploy" - }, - { - "type": "doc", - "id": "version-2.9.1/functions-cli" - }, - { - "type": "doc", - "id": "version-2.9.1/window-functions-context" - } - ] - }, - { - "type": "category", - "label": "Pulsar IO", - "items": [ - { - "type": "doc", - "id": 
"version-2.9.1/io-overview" - }, - { - "type": "doc", - "id": "version-2.9.1/io-quickstart" - }, - { - "type": "doc", - "id": "version-2.9.1/io-use" - }, - { - "type": "doc", - "id": "version-2.9.1/io-debug" - }, - { - "type": "doc", - "id": "version-2.9.1/io-connectors" - }, - { - "type": "doc", - "id": "version-2.9.1/io-cdc" - }, - { - "type": "doc", - "id": "version-2.9.1/io-develop" - }, - { - "type": "doc", - "id": "version-2.9.1/io-cli" - } - ] - }, - { - "type": "category", - "label": "Pulsar SQL", - "items": [ - { - "type": "doc", - "id": "version-2.9.1/sql-overview" - }, - { - "type": "doc", - "id": "version-2.9.1/sql-getting-started" - }, - { - "type": "doc", - "id": "version-2.9.1/sql-deployment-configurations" - }, - { - "type": "doc", - "id": "version-2.9.1/sql-rest-api" - } - ] - }, - { - "type": "category", - "label": "Tiered Storage", - "items": [ - { - "type": "doc", - "id": "version-2.9.1/tiered-storage-overview" - }, - { - "type": "doc", - "id": "version-2.9.1/tiered-storage-aws" - }, - { - "type": "doc", - "id": "version-2.9.1/tiered-storage-gcs" - }, - { - "type": "doc", - "id": "version-2.9.1/tiered-storage-filesystem" - }, - { - "type": "doc", - "id": "version-2.9.1/tiered-storage-azure" - }, - { - "type": "doc", - "id": "version-2.9.1/tiered-storage-aliyun" - } - ] - }, - { - "type": "category", - "label": "Transactions", - "items": [ - { - "type": "doc", - "id": "version-2.9.1/txn-why" - }, - { - "type": "doc", - "id": "version-2.9.1/txn-what" - }, - { - "type": "doc", - "id": "version-2.9.1/txn-how" - }, - { - "type": "doc", - "id": "version-2.9.1/txn-use" - }, - { - "type": "doc", - "id": "version-2.9.1/txn-monitor" - } - ] - }, - { - "type": "category", - "label": "Kubernetes (Helm)", - "items": [ - { - "type": "doc", - "id": "version-2.9.1/helm-overview" - }, - { - "type": "doc", - "id": "version-2.9.1/helm-prepare" - }, - { - "type": "doc", - "id": "version-2.9.1/helm-install" - }, - { - "type": "doc", - "id": "version-2.9.1/helm-deploy" - }, - { - "type": "doc", - "id": "version-2.9.1/helm-upgrade" - }, - { - "type": "doc", - "id": "version-2.9.1/helm-tools" - } - ] - }, - { - "type": "category", - "label": "Deployment", - "items": [ - { - "type": "doc", - "id": "version-2.9.1/deploy-aws" - }, - { - "type": "doc", - "id": "version-2.9.1/deploy-kubernetes" - }, - { - "type": "doc", - "id": "version-2.9.1/deploy-bare-metal" - }, - { - "type": "doc", - "id": "version-2.9.1/deploy-bare-metal-multi-cluster" - }, - { - "type": "doc", - "id": "version-2.9.1/deploy-docker" - }, - { - "type": "doc", - "id": "version-2.9.1/deploy-monitoring" - } - ] - }, - { - "type": "category", - "label": "Administration", - "items": [ - { - "type": "doc", - "id": "version-2.9.1/administration-zk-bk" - }, - { - "type": "doc", - "id": "version-2.9.1/administration-geo" - }, - { - "type": "doc", - "id": "version-2.9.1/administration-pulsar-manager" - }, - { - "type": "doc", - "id": "version-2.9.1/administration-stats" - }, - { - "type": "doc", - "id": "version-2.9.1/administration-load-balance" - }, - { - "type": "doc", - "id": "version-2.9.1/administration-proxy" - }, - { - "type": "doc", - "id": "version-2.9.1/administration-upgrade" - }, - { - "type": "doc", - "id": "version-2.9.1/administration-isolation" - } - ] - }, - { - "type": "category", - "label": "Security", - "items": [ - { - "type": "doc", - "id": "version-2.9.1/security-overview" - }, - { - "type": "doc", - "id": "version-2.9.1/security-tls-transport" - }, - { - "type": "doc", - "id": 
"version-2.9.1/security-tls-authentication" - }, - { - "type": "doc", - "id": "version-2.9.1/security-tls-keystore" - }, - { - "type": "doc", - "id": "version-2.9.1/security-jwt" - }, - { - "type": "doc", - "id": "version-2.9.1/security-athenz" - }, - { - "type": "doc", - "id": "version-2.9.1/security-kerberos" - }, - { - "type": "doc", - "id": "version-2.9.1/security-oauth2" - }, - { - "type": "doc", - "id": "version-2.9.1/security-basic-auth" - }, - { - "type": "doc", - "id": "version-2.9.1/security-authorization" - }, - { - "type": "doc", - "id": "version-2.9.1/security-encryption" - }, - { - "type": "doc", - "id": "version-2.9.1/security-extending" - }, - { - "type": "doc", - "id": "version-2.9.1/security-bouncy-castle" - } - ] - }, - { - "type": "category", - "label": "Performance", - "items": [ - { - "type": "doc", - "id": "version-2.9.1/performance-pulsar-perf" - } - ] - }, - { - "type": "category", - "label": "Client Libraries", - "items": [ - { - "type": "doc", - "id": "version-2.9.1/client-libraries" - }, - { - "type": "doc", - "id": "version-2.9.1/client-libraries-java" - }, - { - "type": "doc", - "id": "version-2.9.1/client-libraries-go" - }, - { - "type": "doc", - "id": "version-2.9.1/client-libraries-python" - }, - { - "type": "doc", - "id": "version-2.9.1/client-libraries-cpp" - }, - { - "type": "doc", - "id": "version-2.9.1/client-libraries-node" - }, - { - "type": "doc", - "id": "version-2.9.1/client-libraries-websocket" - }, - { - "type": "doc", - "id": "version-2.9.1/client-libraries-dotnet" - } - ] - }, - { - "type": "category", - "label": "Admin API", - "items": [ - { - "type": "doc", - "id": "version-2.9.1/admin-api-overview" - }, - { - "type": "doc", - "id": "version-2.9.1/admin-api-clusters" - }, - { - "type": "doc", - "id": "version-2.9.1/admin-api-tenants" - }, - { - "type": "doc", - "id": "version-2.9.1/admin-api-brokers" - }, - { - "type": "doc", - "id": "version-2.9.1/admin-api-namespaces" - }, - { - "type": "doc", - "id": "version-2.9.1/admin-api-permissions" - }, - { - "type": "doc", - "id": "version-2.9.1/admin-api-topics" - }, - { - "type": "doc", - "id": "version-2.9.1/admin-api-functions" - }, - { - "type": "doc", - "id": "version-2.9.1/admin-api-packages" - } - ] - }, - { - "type": "category", - "label": "Adaptors", - "items": [ - { - "type": "doc", - "id": "version-2.9.1/adaptors-kafka" - }, - { - "type": "doc", - "id": "version-2.9.1/adaptors-spark" - }, - { - "type": "doc", - "id": "version-2.9.1/adaptors-storm" - } - ] - }, - { - "type": "category", - "label": "Cookbooks", - "items": [ - { - "type": "doc", - "id": "version-2.9.1/cookbooks-compaction" - }, - { - "type": "doc", - "id": "version-2.9.1/cookbooks-deduplication" - }, - { - "type": "doc", - "id": "version-2.9.1/cookbooks-non-persistent" - }, - { - "type": "doc", - "id": "version-2.9.1/cookbooks-retention-expiry" - }, - { - "type": "doc", - "id": "version-2.9.1/cookbooks-encryption" - }, - { - "type": "doc", - "id": "version-2.9.1/cookbooks-message-queue" - }, - { - "type": "doc", - "id": "version-2.9.1/cookbooks-bookkeepermetadata" - } - ] - }, - { - "type": "category", - "label": "Development", - "items": [ - { - "type": "doc", - "id": "version-2.9.1/develop-tools" - }, - { - "type": "doc", - "id": "version-2.9.1/developing-binary-protocol" - }, - { - "type": "doc", - "id": "version-2.9.1/develop-schema" - }, - { - "type": "doc", - "id": "version-2.9.1/develop-load-manager" - } - ] - }, - { - "type": "category", - "label": "Reference", - "items": [ - { - "type": "doc", - "id": 
"version-2.9.1/reference-terminology" - }, - { - "type": "doc", - "id": "version-2.9.1/reference-cli-tools" - }, - { - "type": "doc", - "id": "version-2.9.1/reference-configuration" - }, - { - "type": "doc", - "id": "version-2.9.1/reference-metrics" - }, - { - "type": "doc", - "id": "version-2.9.1/reference-rest-api-overview" - } - ] - } - ] -} \ No newline at end of file diff --git a/site2/website/versioned_sidebars/version-2.9.2-sidebars.json b/site2/website/versioned_sidebars/version-2.9.2-sidebars.json deleted file mode 100644 index 1972923755da4a..00000000000000 --- a/site2/website/versioned_sidebars/version-2.9.2-sidebars.json +++ /dev/null @@ -1,598 +0,0 @@ -{ - "version-2.9.2/docsSidebar": [ - { - "type": "doc", - "id": "version-2.9.2/about" - }, - { - "type": "category", - "label": "Get Started", - "items": [ - { - "type": "doc", - "id": "version-2.9.2/getting-started-standalone" - }, - { - "type": "doc", - "id": "version-2.9.2/getting-started-docker" - }, - { - "type": "doc", - "id": "version-2.9.2/getting-started-helm" - } - ] - }, - { - "type": "category", - "label": "Concepts and Architecture", - "items": [ - { - "type": "doc", - "id": "version-2.9.2/concepts-overview" - }, - { - "type": "doc", - "id": "version-2.9.2/concepts-messaging" - }, - { - "type": "doc", - "id": "version-2.9.2/concepts-architecture-overview" - }, - { - "type": "doc", - "id": "version-2.9.2/concepts-clients" - }, - { - "type": "doc", - "id": "version-2.9.2/concepts-replication" - }, - { - "type": "doc", - "id": "version-2.9.2/concepts-multi-tenancy" - }, - { - "type": "doc", - "id": "version-2.9.2/concepts-authentication" - }, - { - "type": "doc", - "id": "version-2.9.2/concepts-topic-compaction" - }, - { - "type": "doc", - "id": "version-2.9.2/concepts-proxy-sni-routing" - }, - { - "type": "doc", - "id": "version-2.9.2/concepts-multiple-advertised-listeners" - } - ] - }, - { - "type": "category", - "label": "Pulsar Schema", - "items": [ - { - "type": "doc", - "id": "version-2.9.2/schema-get-started" - }, - { - "type": "doc", - "id": "version-2.9.2/schema-understand" - }, - { - "type": "doc", - "id": "version-2.9.2/schema-evolution-compatibility" - }, - { - "type": "doc", - "id": "version-2.9.2/schema-manage" - } - ] - }, - { - "type": "category", - "label": "Pulsar Functions", - "items": [ - { - "type": "doc", - "id": "version-2.9.2/functions-overview" - }, - { - "type": "doc", - "id": "version-2.9.2/functions-runtime" - }, - { - "type": "doc", - "id": "version-2.9.2/functions-worker" - }, - { - "type": "doc", - "id": "version-2.9.2/functions-develop" - }, - { - "type": "doc", - "id": "version-2.9.2/functions-package" - }, - { - "type": "doc", - "id": "version-2.9.2/functions-debug" - }, - { - "type": "doc", - "id": "version-2.9.2/functions-deploy" - }, - { - "type": "doc", - "id": "version-2.9.2/functions-cli" - }, - { - "type": "doc", - "id": "version-2.9.2/window-functions-context" - } - ] - }, - { - "type": "category", - "label": "Pulsar IO", - "items": [ - { - "type": "doc", - "id": "version-2.9.2/io-overview" - }, - { - "type": "doc", - "id": "version-2.9.2/io-quickstart" - }, - { - "type": "doc", - "id": "version-2.9.2/io-use" - }, - { - "type": "doc", - "id": "version-2.9.2/io-debug" - }, - { - "type": "doc", - "id": "version-2.9.2/io-connectors" - }, - { - "type": "doc", - "id": "version-2.9.2/io-cdc" - }, - { - "type": "doc", - "id": "version-2.9.2/io-develop" - }, - { - "type": "doc", - "id": "version-2.9.2/io-cli" - } - ] - }, - { - "type": "category", - "label": "Pulsar SQL", - "items": [ 
- { - "type": "doc", - "id": "version-2.9.2/sql-overview" - }, - { - "type": "doc", - "id": "version-2.9.2/sql-getting-started" - }, - { - "type": "doc", - "id": "version-2.9.2/sql-deployment-configurations" - }, - { - "type": "doc", - "id": "version-2.9.2/sql-rest-api" - } - ] - }, - { - "type": "category", - "label": "Tiered Storage", - "items": [ - { - "type": "doc", - "id": "version-2.9.2/tiered-storage-overview" - }, - { - "type": "doc", - "id": "version-2.9.2/tiered-storage-aws" - }, - { - "type": "doc", - "id": "version-2.9.2/tiered-storage-gcs" - }, - { - "type": "doc", - "id": "version-2.9.2/tiered-storage-filesystem" - }, - { - "type": "doc", - "id": "version-2.9.2/tiered-storage-azure" - }, - { - "type": "doc", - "id": "version-2.9.2/tiered-storage-aliyun" - } - ] - }, - { - "type": "category", - "label": "Transactions", - "items": [ - { - "type": "doc", - "id": "version-2.9.2/txn-why" - }, - { - "type": "doc", - "id": "version-2.9.2/txn-what" - }, - { - "type": "doc", - "id": "version-2.9.2/txn-how" - }, - { - "type": "doc", - "id": "version-2.9.2/txn-use" - }, - { - "type": "doc", - "id": "version-2.9.2/txn-monitor" - } - ] - }, - { - "type": "category", - "label": "Kubernetes (Helm)", - "items": [ - { - "type": "doc", - "id": "version-2.9.2/helm-overview" - }, - { - "type": "doc", - "id": "version-2.9.2/helm-prepare" - }, - { - "type": "doc", - "id": "version-2.9.2/helm-install" - }, - { - "type": "doc", - "id": "version-2.9.2/helm-deploy" - }, - { - "type": "doc", - "id": "version-2.9.2/helm-upgrade" - }, - { - "type": "doc", - "id": "version-2.9.2/helm-tools" - } - ] - }, - { - "type": "category", - "label": "Deployment", - "items": [ - { - "type": "doc", - "id": "version-2.9.2/deploy-aws" - }, - { - "type": "doc", - "id": "version-2.9.2/deploy-kubernetes" - }, - { - "type": "doc", - "id": "version-2.9.2/deploy-bare-metal" - }, - { - "type": "doc", - "id": "version-2.9.2/deploy-bare-metal-multi-cluster" - }, - { - "type": "doc", - "id": "version-2.9.2/deploy-docker" - }, - { - "type": "doc", - "id": "version-2.9.2/deploy-monitoring" - } - ] - }, - { - "type": "category", - "label": "Administration", - "items": [ - { - "type": "doc", - "id": "version-2.9.2/administration-zk-bk" - }, - { - "type": "doc", - "id": "version-2.9.2/administration-geo" - }, - { - "type": "doc", - "id": "version-2.9.2/administration-pulsar-manager" - }, - { - "type": "doc", - "id": "version-2.9.2/administration-stats" - }, - { - "type": "doc", - "id": "version-2.9.2/administration-load-balance" - }, - { - "type": "doc", - "id": "version-2.9.2/administration-proxy" - }, - { - "type": "doc", - "id": "version-2.9.2/administration-upgrade" - }, - { - "type": "doc", - "id": "version-2.9.2/administration-isolation" - } - ] - }, - { - "type": "category", - "label": "Security", - "items": [ - { - "type": "doc", - "id": "version-2.9.2/security-overview" - }, - { - "type": "doc", - "id": "version-2.9.2/security-tls-transport" - }, - { - "type": "doc", - "id": "version-2.9.2/security-tls-authentication" - }, - { - "type": "doc", - "id": "version-2.9.2/security-tls-keystore" - }, - { - "type": "doc", - "id": "version-2.9.2/security-jwt" - }, - { - "type": "doc", - "id": "version-2.9.2/security-athenz" - }, - { - "type": "doc", - "id": "version-2.9.2/security-kerberos" - }, - { - "type": "doc", - "id": "version-2.9.2/security-oauth2" - }, - { - "type": "doc", - "id": "version-2.9.2/security-basic-auth" - }, - { - "type": "doc", - "id": "version-2.9.2/security-authorization" - }, - { - "type": "doc", - "id": 
"version-2.9.2/security-encryption" - }, - { - "type": "doc", - "id": "version-2.9.2/security-extending" - }, - { - "type": "doc", - "id": "version-2.9.2/security-bouncy-castle" - } - ] - }, - { - "type": "category", - "label": "Performance", - "items": [ - { - "type": "doc", - "id": "version-2.9.2/performance-pulsar-perf" - } - ] - }, - { - "type": "category", - "label": "Client Libraries", - "items": [ - { - "type": "doc", - "id": "version-2.9.2/client-libraries" - }, - { - "type": "doc", - "id": "version-2.9.2/client-libraries-java" - }, - { - "type": "doc", - "id": "version-2.9.2/client-libraries-go" - }, - { - "type": "doc", - "id": "version-2.9.2/client-libraries-python" - }, - { - "type": "doc", - "id": "version-2.9.2/client-libraries-cpp" - }, - { - "type": "doc", - "id": "version-2.9.2/client-libraries-node" - }, - { - "type": "doc", - "id": "version-2.9.2/client-libraries-websocket" - }, - { - "type": "doc", - "id": "version-2.9.2/client-libraries-dotnet" - } - ] - }, - { - "type": "category", - "label": "Admin API", - "items": [ - { - "type": "doc", - "id": "version-2.9.2/admin-api-overview" - }, - { - "type": "doc", - "id": "version-2.9.2/admin-api-clusters" - }, - { - "type": "doc", - "id": "version-2.9.2/admin-api-tenants" - }, - { - "type": "doc", - "id": "version-2.9.2/admin-api-brokers" - }, - { - "type": "doc", - "id": "version-2.9.2/admin-api-namespaces" - }, - { - "type": "doc", - "id": "version-2.9.2/admin-api-permissions" - }, - { - "type": "doc", - "id": "version-2.9.2/admin-api-topics" - }, - { - "type": "doc", - "id": "version-2.9.2/admin-api-functions" - }, - { - "type": "doc", - "id": "version-2.9.2/admin-api-packages" - } - ] - }, - { - "type": "category", - "label": "Adaptors", - "items": [ - { - "type": "doc", - "id": "version-2.9.2/adaptors-kafka" - }, - { - "type": "doc", - "id": "version-2.9.2/adaptors-spark" - }, - { - "type": "doc", - "id": "version-2.9.2/adaptors-storm" - } - ] - }, - { - "type": "category", - "label": "Cookbooks", - "items": [ - { - "type": "doc", - "id": "version-2.9.2/cookbooks-compaction" - }, - { - "type": "doc", - "id": "version-2.9.2/cookbooks-deduplication" - }, - { - "type": "doc", - "id": "version-2.9.2/cookbooks-non-persistent" - }, - { - "type": "doc", - "id": "version-2.9.2/cookbooks-retention-expiry" - }, - { - "type": "doc", - "id": "version-2.9.2/cookbooks-encryption" - }, - { - "type": "doc", - "id": "version-2.9.2/cookbooks-message-queue" - }, - { - "type": "doc", - "id": "version-2.9.2/cookbooks-bookkeepermetadata" - } - ] - }, - { - "type": "category", - "label": "Development", - "items": [ - { - "type": "doc", - "id": "version-2.9.2/develop-tools" - }, - { - "type": "doc", - "id": "version-2.9.2/developing-binary-protocol" - }, - { - "type": "doc", - "id": "version-2.9.2/develop-schema" - }, - { - "type": "doc", - "id": "version-2.9.2/develop-load-manager" - } - ] - }, - { - "type": "category", - "label": "Reference", - "items": [ - { - "type": "doc", - "id": "version-2.9.2/reference-terminology" - }, - { - "type": "doc", - "id": "version-2.9.2/reference-cli-tools" - }, - { - "type": "doc", - "id": "version-2.9.2/reference-configuration" - }, - { - "type": "doc", - "id": "version-2.9.2/reference-metrics" - }, - { - "type": "doc", - "id": "version-2.9.2/reference-rest-api-overview" - } - ] - } - ] -} \ No newline at end of file diff --git a/site2/website/versioned_sidebars/version-2.9.3-sidebars.json b/site2/website/versioned_sidebars/version-2.9.3-sidebars.json deleted file mode 100644 index 
e21f07e49b9132..00000000000000 --- a/site2/website/versioned_sidebars/version-2.9.3-sidebars.json +++ /dev/null @@ -1,598 +0,0 @@ -{ - "version-2.9.3/docsSidebar": [ - { - "type": "doc", - "id": "version-2.9.3/about" - }, - { - "type": "category", - "label": "Get Started", - "items": [ - { - "type": "doc", - "id": "version-2.9.3/getting-started-standalone" - }, - { - "type": "doc", - "id": "version-2.9.3/getting-started-docker" - }, - { - "type": "doc", - "id": "version-2.9.3/getting-started-helm" - } - ] - }, - { - "type": "category", - "label": "Concepts and Architecture", - "items": [ - { - "type": "doc", - "id": "version-2.9.3/concepts-overview" - }, - { - "type": "doc", - "id": "version-2.9.3/concepts-messaging" - }, - { - "type": "doc", - "id": "version-2.9.3/concepts-architecture-overview" - }, - { - "type": "doc", - "id": "version-2.9.3/concepts-clients" - }, - { - "type": "doc", - "id": "version-2.9.3/concepts-replication" - }, - { - "type": "doc", - "id": "version-2.9.3/concepts-multi-tenancy" - }, - { - "type": "doc", - "id": "version-2.9.3/concepts-authentication" - }, - { - "type": "doc", - "id": "version-2.9.3/concepts-topic-compaction" - }, - { - "type": "doc", - "id": "version-2.9.3/concepts-proxy-sni-routing" - }, - { - "type": "doc", - "id": "version-2.9.3/concepts-multiple-advertised-listeners" - } - ] - }, - { - "type": "category", - "label": "Pulsar Schema", - "items": [ - { - "type": "doc", - "id": "version-2.9.3/schema-get-started" - }, - { - "type": "doc", - "id": "version-2.9.3/schema-understand" - }, - { - "type": "doc", - "id": "version-2.9.3/schema-evolution-compatibility" - }, - { - "type": "doc", - "id": "version-2.9.3/schema-manage" - } - ] - }, - { - "type": "category", - "label": "Pulsar Functions", - "items": [ - { - "type": "doc", - "id": "version-2.9.3/functions-overview" - }, - { - "type": "doc", - "id": "version-2.9.3/functions-runtime" - }, - { - "type": "doc", - "id": "version-2.9.3/functions-worker" - }, - { - "type": "doc", - "id": "version-2.9.3/functions-develop" - }, - { - "type": "doc", - "id": "version-2.9.3/functions-package" - }, - { - "type": "doc", - "id": "version-2.9.3/functions-debug" - }, - { - "type": "doc", - "id": "version-2.9.3/functions-deploy" - }, - { - "type": "doc", - "id": "version-2.9.3/functions-cli" - }, - { - "type": "doc", - "id": "version-2.9.3/window-functions-context" - } - ] - }, - { - "type": "category", - "label": "Pulsar IO", - "items": [ - { - "type": "doc", - "id": "version-2.9.3/io-overview" - }, - { - "type": "doc", - "id": "version-2.9.3/io-quickstart" - }, - { - "type": "doc", - "id": "version-2.9.3/io-use" - }, - { - "type": "doc", - "id": "version-2.9.3/io-debug" - }, - { - "type": "doc", - "id": "version-2.9.3/io-connectors" - }, - { - "type": "doc", - "id": "version-2.9.3/io-cdc" - }, - { - "type": "doc", - "id": "version-2.9.3/io-develop" - }, - { - "type": "doc", - "id": "version-2.9.3/io-cli" - } - ] - }, - { - "type": "category", - "label": "Pulsar SQL", - "items": [ - { - "type": "doc", - "id": "version-2.9.3/sql-overview" - }, - { - "type": "doc", - "id": "version-2.9.3/sql-getting-started" - }, - { - "type": "doc", - "id": "version-2.9.3/sql-deployment-configurations" - }, - { - "type": "doc", - "id": "version-2.9.3/sql-rest-api" - } - ] - }, - { - "type": "category", - "label": "Tiered Storage", - "items": [ - { - "type": "doc", - "id": "version-2.9.3/tiered-storage-overview" - }, - { - "type": "doc", - "id": "version-2.9.3/tiered-storage-aws" - }, - { - "type": "doc", - "id": 
"version-2.9.3/tiered-storage-gcs" - }, - { - "type": "doc", - "id": "version-2.9.3/tiered-storage-filesystem" - }, - { - "type": "doc", - "id": "version-2.9.3/tiered-storage-azure" - }, - { - "type": "doc", - "id": "version-2.9.3/tiered-storage-aliyun" - } - ] - }, - { - "type": "category", - "label": "Transactions", - "items": [ - { - "type": "doc", - "id": "version-2.9.3/txn-why" - }, - { - "type": "doc", - "id": "version-2.9.3/txn-what" - }, - { - "type": "doc", - "id": "version-2.9.3/txn-how" - }, - { - "type": "doc", - "id": "version-2.9.3/txn-use" - }, - { - "type": "doc", - "id": "version-2.9.3/txn-monitor" - } - ] - }, - { - "type": "category", - "label": "Kubernetes (Helm)", - "items": [ - { - "type": "doc", - "id": "version-2.9.3/helm-overview" - }, - { - "type": "doc", - "id": "version-2.9.3/helm-prepare" - }, - { - "type": "doc", - "id": "version-2.9.3/helm-install" - }, - { - "type": "doc", - "id": "version-2.9.3/helm-deploy" - }, - { - "type": "doc", - "id": "version-2.9.3/helm-upgrade" - }, - { - "type": "doc", - "id": "version-2.9.3/helm-tools" - } - ] - }, - { - "type": "category", - "label": "Deployment", - "items": [ - { - "type": "doc", - "id": "version-2.9.3/deploy-aws" - }, - { - "type": "doc", - "id": "version-2.9.3/deploy-kubernetes" - }, - { - "type": "doc", - "id": "version-2.9.3/deploy-bare-metal" - }, - { - "type": "doc", - "id": "version-2.9.3/deploy-bare-metal-multi-cluster" - }, - { - "type": "doc", - "id": "version-2.9.3/deploy-docker" - }, - { - "type": "doc", - "id": "version-2.9.3/deploy-monitoring" - } - ] - }, - { - "type": "category", - "label": "Administration", - "items": [ - { - "type": "doc", - "id": "version-2.9.3/administration-zk-bk" - }, - { - "type": "doc", - "id": "version-2.9.3/administration-geo" - }, - { - "type": "doc", - "id": "version-2.9.3/administration-pulsar-manager" - }, - { - "type": "doc", - "id": "version-2.9.3/administration-stats" - }, - { - "type": "doc", - "id": "version-2.9.3/administration-load-balance" - }, - { - "type": "doc", - "id": "version-2.9.3/administration-proxy" - }, - { - "type": "doc", - "id": "version-2.9.3/administration-upgrade" - }, - { - "type": "doc", - "id": "version-2.9.3/administration-isolation" - } - ] - }, - { - "type": "category", - "label": "Security", - "items": [ - { - "type": "doc", - "id": "version-2.9.3/security-overview" - }, - { - "type": "doc", - "id": "version-2.9.3/security-tls-transport" - }, - { - "type": "doc", - "id": "version-2.9.3/security-tls-authentication" - }, - { - "type": "doc", - "id": "version-2.9.3/security-tls-keystore" - }, - { - "type": "doc", - "id": "version-2.9.3/security-jwt" - }, - { - "type": "doc", - "id": "version-2.9.3/security-athenz" - }, - { - "type": "doc", - "id": "version-2.9.3/security-kerberos" - }, - { - "type": "doc", - "id": "version-2.9.3/security-oauth2" - }, - { - "type": "doc", - "id": "version-2.9.3/security-basic-auth" - }, - { - "type": "doc", - "id": "version-2.9.3/security-authorization" - }, - { - "type": "doc", - "id": "version-2.9.3/security-encryption" - }, - { - "type": "doc", - "id": "version-2.9.3/security-extending" - }, - { - "type": "doc", - "id": "version-2.9.3/security-bouncy-castle" - } - ] - }, - { - "type": "category", - "label": "Performance", - "items": [ - { - "type": "doc", - "id": "version-2.9.3/performance-pulsar-perf" - } - ] - }, - { - "type": "category", - "label": "Client Libraries", - "items": [ - { - "type": "doc", - "id": "version-2.9.3/client-libraries" - }, - { - "type": "doc", - "id": 
"version-2.9.3/client-libraries-java" - }, - { - "type": "doc", - "id": "version-2.9.3/client-libraries-go" - }, - { - "type": "doc", - "id": "version-2.9.3/client-libraries-python" - }, - { - "type": "doc", - "id": "version-2.9.3/client-libraries-cpp" - }, - { - "type": "doc", - "id": "version-2.9.3/client-libraries-node" - }, - { - "type": "doc", - "id": "version-2.9.3/client-libraries-websocket" - }, - { - "type": "doc", - "id": "version-2.9.3/client-libraries-dotnet" - } - ] - }, - { - "type": "category", - "label": "Admin API", - "items": [ - { - "type": "doc", - "id": "version-2.9.3/admin-api-overview" - }, - { - "type": "doc", - "id": "version-2.9.3/admin-api-clusters" - }, - { - "type": "doc", - "id": "version-2.9.3/admin-api-tenants" - }, - { - "type": "doc", - "id": "version-2.9.3/admin-api-brokers" - }, - { - "type": "doc", - "id": "version-2.9.3/admin-api-namespaces" - }, - { - "type": "doc", - "id": "version-2.9.3/admin-api-permissions" - }, - { - "type": "doc", - "id": "version-2.9.3/admin-api-topics" - }, - { - "type": "doc", - "id": "version-2.9.3/admin-api-functions" - }, - { - "type": "doc", - "id": "version-2.9.3/admin-api-packages" - } - ] - }, - { - "type": "category", - "label": "Adaptors", - "items": [ - { - "type": "doc", - "id": "version-2.9.3/adaptors-kafka" - }, - { - "type": "doc", - "id": "version-2.9.3/adaptors-spark" - }, - { - "type": "doc", - "id": "version-2.9.3/adaptors-storm" - } - ] - }, - { - "type": "category", - "label": "Cookbooks", - "items": [ - { - "type": "doc", - "id": "version-2.9.3/cookbooks-compaction" - }, - { - "type": "doc", - "id": "version-2.9.3/cookbooks-deduplication" - }, - { - "type": "doc", - "id": "version-2.9.3/cookbooks-non-persistent" - }, - { - "type": "doc", - "id": "version-2.9.3/cookbooks-retention-expiry" - }, - { - "type": "doc", - "id": "version-2.9.3/cookbooks-encryption" - }, - { - "type": "doc", - "id": "version-2.9.3/cookbooks-message-queue" - }, - { - "type": "doc", - "id": "version-2.9.3/cookbooks-bookkeepermetadata" - } - ] - }, - { - "type": "category", - "label": "Development", - "items": [ - { - "type": "doc", - "id": "version-2.9.3/develop-tools" - }, - { - "type": "doc", - "id": "version-2.9.3/developing-binary-protocol" - }, - { - "type": "doc", - "id": "version-2.9.3/develop-schema" - }, - { - "type": "doc", - "id": "version-2.9.3/develop-load-manager" - } - ] - }, - { - "type": "category", - "label": "Reference", - "items": [ - { - "type": "doc", - "id": "version-2.9.3/reference-terminology" - }, - { - "type": "doc", - "id": "version-2.9.3/reference-cli-tools" - }, - { - "type": "doc", - "id": "version-2.9.3/reference-configuration" - }, - { - "type": "doc", - "id": "version-2.9.3/reference-metrics" - }, - { - "type": "doc", - "id": "version-2.9.3/reference-rest-api-overview" - } - ] - } - ] -} \ No newline at end of file From 26b47ffbcdc7f91425ed1ff1cc6cd4d7644a2451 Mon Sep 17 00:00:00 2001 From: congbo <39078850+congbobo184@users.noreply.github.com> Date: Wed, 19 Oct 2022 15:54:10 +0800 Subject: [PATCH 14/28] [improve][schema] Change update schema auth from tenant to produce (#18074) --- .../pulsar/broker/admin/AdminResource.java | 54 +++++++++---------- .../admin/impl/SchemasResourceBase.java | 4 +- .../admin/AdminApiSchemaWithAuthTest.java | 9 ++++ 3 files changed, 38 insertions(+), 29 deletions(-) diff --git a/pulsar-broker/src/main/java/org/apache/pulsar/broker/admin/AdminResource.java b/pulsar-broker/src/main/java/org/apache/pulsar/broker/admin/AdminResource.java index 645c804af2f8e0..e2c44a80d4ae8e 
100644 --- a/pulsar-broker/src/main/java/org/apache/pulsar/broker/admin/AdminResource.java +++ b/pulsar-broker/src/main/java/org/apache/pulsar/broker/admin/AdminResource.java @@ -742,33 +742,7 @@ protected CompletableFuture getSchemaCompatibilityS return validateTopicPolicyOperationAsync(topicName, PolicyName.SCHEMA_COMPATIBILITY_STRATEGY, PolicyOperation.READ) - .thenCompose((__) -> { - CompletableFuture future; - if (config().isTopicLevelPoliciesEnabled()) { - future = getTopicPoliciesAsyncWithRetry(topicName) - .thenApply(op -> op.map(TopicPolicies::getSchemaCompatibilityStrategy).orElse(null)); - } else { - future = CompletableFuture.completedFuture(null); - } - - return future.thenCompose((topicSchemaCompatibilityStrategy) -> { - if (!SchemaCompatibilityStrategy.isUndefined(topicSchemaCompatibilityStrategy)) { - return CompletableFuture.completedFuture(topicSchemaCompatibilityStrategy); - } - return getNamespacePoliciesAsync(namespaceName).thenApply(policies -> { - SchemaCompatibilityStrategy schemaCompatibilityStrategy = - policies.schema_compatibility_strategy; - if (SchemaCompatibilityStrategy.isUndefined(schemaCompatibilityStrategy)) { - schemaCompatibilityStrategy = SchemaCompatibilityStrategy.fromAutoUpdatePolicy( - policies.schema_auto_update_compatibility_strategy); - if (SchemaCompatibilityStrategy.isUndefined(schemaCompatibilityStrategy)) { - schemaCompatibilityStrategy = pulsar().getConfig().getSchemaCompatibilityStrategy(); - } - } - return schemaCompatibilityStrategy; - }); - }); - }).whenComplete((__, ex) -> { + .thenCompose((__) -> getSchemaCompatibilityStrategyAsyncWithoutAuth()).whenComplete((__, ex) -> { if (ex != null) { log.error("[{}] Failed to get schema compatibility strategy of topic {} {}", clientAppId(), topicName, ex); @@ -776,6 +750,32 @@ protected CompletableFuture getSchemaCompatibilityS }); } + protected CompletableFuture getSchemaCompatibilityStrategyAsyncWithoutAuth() { + CompletableFuture future = CompletableFuture.completedFuture(null); + if (config().isTopicLevelPoliciesEnabled()) { + future = getTopicPoliciesAsyncWithRetry(topicName) + .thenApply(op -> op.map(TopicPolicies::getSchemaCompatibilityStrategy).orElse(null)); + } + + return future.thenCompose((topicSchemaCompatibilityStrategy) -> { + if (!SchemaCompatibilityStrategy.isUndefined(topicSchemaCompatibilityStrategy)) { + return CompletableFuture.completedFuture(topicSchemaCompatibilityStrategy); + } + return getNamespacePoliciesAsync(namespaceName).thenApply(policies -> { + SchemaCompatibilityStrategy schemaCompatibilityStrategy = + policies.schema_compatibility_strategy; + if (SchemaCompatibilityStrategy.isUndefined(schemaCompatibilityStrategy)) { + schemaCompatibilityStrategy = SchemaCompatibilityStrategy.fromAutoUpdatePolicy( + policies.schema_auto_update_compatibility_strategy); + if (SchemaCompatibilityStrategy.isUndefined(schemaCompatibilityStrategy)) { + schemaCompatibilityStrategy = pulsar().getConfig().getSchemaCompatibilityStrategy(); + } + } + return schemaCompatibilityStrategy; + }); + }); + } + @CanIgnoreReturnValue public static T checkNotNull(T reference) { return Objects.requireNonNull(reference); diff --git a/pulsar-broker/src/main/java/org/apache/pulsar/broker/admin/impl/SchemasResourceBase.java b/pulsar-broker/src/main/java/org/apache/pulsar/broker/admin/impl/SchemasResourceBase.java index 0254ff395ba2af..76af582514300e 100644 --- a/pulsar-broker/src/main/java/org/apache/pulsar/broker/admin/impl/SchemasResourceBase.java +++ 
b/pulsar-broker/src/main/java/org/apache/pulsar/broker/admin/impl/SchemasResourceBase.java @@ -114,8 +114,8 @@ public CompletableFuture deleteSchemaAsync(boolean authoritative, } public CompletableFuture postSchemaAsync(PostSchemaPayload payload, boolean authoritative) { - return validateDestinationAndAdminOperationAsync(authoritative) - .thenCompose(__ -> getSchemaCompatibilityStrategyAsync()) + return validateOwnershipAndOperationAsync(authoritative, TopicOperation.PRODUCE) + .thenCompose(__ -> getSchemaCompatibilityStrategyAsyncWithoutAuth()) .thenCompose(schemaCompatibilityStrategy -> { byte[] data; if (SchemaType.KEY_VALUE.name().equals(payload.getType())) { diff --git a/pulsar-broker/src/test/java/org/apache/pulsar/broker/admin/AdminApiSchemaWithAuthTest.java b/pulsar-broker/src/test/java/org/apache/pulsar/broker/admin/AdminApiSchemaWithAuthTest.java index 46830b05204ab1..4de4d905e49bb3 100644 --- a/pulsar-broker/src/test/java/org/apache/pulsar/broker/admin/AdminApiSchemaWithAuthTest.java +++ b/pulsar-broker/src/test/java/org/apache/pulsar/broker/admin/AdminApiSchemaWithAuthTest.java @@ -58,6 +58,8 @@ public class AdminApiSchemaWithAuthTest extends MockedPulsarServiceBaseTest { private static final String ADMIN_TOKEN = Jwts.builder().setSubject("admin").signWith(SECRET_KEY).compact(); private static final String CONSUME_TOKEN = Jwts.builder().setSubject("consumer").signWith(SECRET_KEY).compact(); + private static final String PRODUCE_TOKEN = Jwts.builder().setSubject("producer").signWith(SECRET_KEY).compact(); + @BeforeMethod @Override public void setup() throws Exception { @@ -108,11 +110,18 @@ public void testGetCreateDeleteSchema() throws Exception { .serviceHttpUrl(brokerUrl != null ? brokerUrl.toString() : brokerUrlTls.toString()) .authentication(AuthenticationToken.class.getName(), CONSUME_TOKEN) .build(); + + PulsarAdmin adminWithProducePermission = PulsarAdmin.builder() + .serviceHttpUrl(brokerUrl != null ? 
brokerUrl.toString() : brokerUrlTls.toString())
+                .authentication(AuthenticationToken.class.getName(), PRODUCE_TOKEN)
+                .build();
         admin.topics().grantPermission(topicName, "consumer", EnumSet.of(AuthAction.consume));
         admin.topics().grantPermission(topicName, "producer", EnumSet.of(AuthAction.produce));
         SchemaInfo si = Schema.BOOL.getSchemaInfo();
+        assertThrows(PulsarAdminException.class, () -> adminWithConsumePermission.schemas().getSchemaInfo(topicName));
         assertThrows(PulsarAdminException.class, () -> adminWithoutPermission.schemas().createSchema(topicName, si));
+        adminWithProducePermission.schemas().createSchema(topicName, si);
         adminWithAdminPermission.schemas().createSchema(topicName, si);
         assertThrows(PulsarAdminException.class, () -> adminWithoutPermission.schemas().getSchemaInfo(topicName));

From 7e5cad778907f3a0095343492cbf07ca202976b4 Mon Sep 17 00:00:00 2001
From: fengyubiao
Date: Wed, 19 Oct 2022 16:51:27 +0800
Subject: [PATCH 15/28] [fix][broker]Cache invalidation due to concurrent
 access (#18076)

---
 .../org/apache/bookkeeper/mledger/util/RangeCache.java | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/managed-ledger/src/main/java/org/apache/bookkeeper/mledger/util/RangeCache.java b/managed-ledger/src/main/java/org/apache/bookkeeper/mledger/util/RangeCache.java
index 1b82aa1318ebd8..98ae6659b785ae 100644
--- a/managed-ledger/src/main/java/org/apache/bookkeeper/mledger/util/RangeCache.java
+++ b/managed-ledger/src/main/java/org/apache/bookkeeper/mledger/util/RangeCache.java
@@ -196,13 +196,12 @@ public Pair evictLEntriesBeforeTimestamp(long maxTimestamp) {
             if (entry == null || timestampExtractor.getTimestamp(entry.getValue()) > maxTimestamp) {
                 break;
             }
-
-            entry = entries.pollFirstEntry();
-            if (entry == null) {
+            Value value = entry.getValue();
+            boolean removeHits = entries.remove(entry.getKey(), value);
+            if (!removeHits) {
                 break;
             }
-            Value value = entry.getValue();
             removedSize += weighter.getSize(value);
             removedCount++;
             value.release();

From 912f344de571aba25809387ba8e71419d1f47b1b Mon Sep 17 00:00:00 2001
From: Michael Marshall
Date: Wed, 19 Oct 2022 01:56:12 -0700
Subject: [PATCH 16/28] [feat][zk] Enable certificate refresh for Quorum and
 Netty Servers (#18097)

---
 conf/zookeeper.conf | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/conf/zookeeper.conf b/conf/zookeeper.conf
index 04ac66e0bfc40a..73240556d484c6 100644
--- a/conf/zookeeper.conf
+++ b/conf/zookeeper.conf
@@ -63,6 +63,11 @@ forceSync=yes
 # Default: false
 sslQuorum=false

+# Enable TLS Certificate reloading for Quorum and Server connections
+# Follows Pulsar's general default to reload these files.
+sslQuorumReloadCertFiles=true
+client.certReload=true
+
 # Specifies that the client port should accept SSL connections
 # (using the same configuration as the secure client port).
 # Default: false

From 7404e0d77d98c2ec4fab9a111b8ced38d33dcd5a Mon Sep 17 00:00:00 2001
From: Qiang Huang
Date: Wed, 19 Oct 2022 17:11:22 +0800
Subject: [PATCH 17/28] [improve][broker]consumer backlog eviction policy
 should not reset read position for consumer (#18037)

### Motivation

Fixes #18036

### Modifications

- The backlog eviction policy should use `asyncMarkDelete` instead of `resetCursor` in order to move the mark delete position, as sketched below.
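As a rough sketch of the new behavior (illustrative only — `mLedger`, `ledgerInfo`, and `slowestConsumer` are the names used in the diff below; this is not the verbatim broker code), eviction now only advances the acknowledged position of the slowest cursor:

```java
// Skip the whole expired ledger for the slowest cursor by acknowledging up to a
// synthetic position just before the first entry of the next valid ledger.
PositionImpl nextPosition =
        PositionImpl.get(mLedger.getNextValidLedger(ledgerInfo.getLedgerId()), -1);
// markDelete (or its async variant) moves only the mark-delete position; unlike
// resetCursor, it does not rewind or reset the consumer's read position.
slowestConsumer.markDelete(nextPosition);
```

Because only the acknowledged position moves, a connected consumer keeps reading from wherever it already was, which is what the regression test below asserts.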
--- .../broker/service/BacklogQuotaManager.java | 16 +++++++++++----- .../broker/service/BacklogQuotaManagerTest.java | 9 +++++++-- 2 files changed, 18 insertions(+), 7 deletions(-) diff --git a/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/BacklogQuotaManager.java b/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/BacklogQuotaManager.java index 93ae777a89e2f7..210c6f8767a062 100644 --- a/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/BacklogQuotaManager.java +++ b/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/BacklogQuotaManager.java @@ -210,22 +210,28 @@ private void dropBacklogForTimeLimit(PersistentTopic persistentTopic, BacklogQuo Long currentMillis = ((ManagedLedgerImpl) persistentTopic.getManagedLedger()).getClock().millis(); ManagedLedgerImpl mLedger = (ManagedLedgerImpl) persistentTopic.getManagedLedger(); try { - for (;;) { + for (; ; ) { ManagedCursor slowestConsumer = mLedger.getSlowestConsumer(); Position oldestPosition = slowestConsumer.getMarkDeletedPosition(); + if (log.isDebugEnabled()) { + log.debug("[{}] slowest consumer mark delete position is [{}], read position is [{}]", + slowestConsumer.getName(), oldestPosition, slowestConsumer.getReadPosition()); + } ManagedLedgerInfo.LedgerInfo ledgerInfo = mLedger.getLedgerInfo(oldestPosition.getLedgerId()).get(); if (ledgerInfo == null) { - slowestConsumer.resetCursor(mLedger.getNextValidPosition((PositionImpl) oldestPosition)); + PositionImpl nextPosition = + PositionImpl.get(mLedger.getNextValidLedger(oldestPosition.getLedgerId()), -1); + slowestConsumer.markDelete(nextPosition); continue; } // Timestamp only > 0 if ledger has been closed if (ledgerInfo.getTimestamp() > 0 && currentMillis - ledgerInfo.getTimestamp() > quota.getLimitTime() * 1000) { // skip whole ledger for the slowest cursor - PositionImpl nextPosition = mLedger.getNextValidPosition( - PositionImpl.get(ledgerInfo.getLedgerId(), ledgerInfo.getEntries() - 1)); + PositionImpl nextPosition = + PositionImpl.get(mLedger.getNextValidLedger(ledgerInfo.getLedgerId()), -1); if (!nextPosition.equals(oldestPosition)) { - slowestConsumer.resetCursor(nextPosition); + slowestConsumer.markDelete(nextPosition); continue; } } diff --git a/pulsar-broker/src/test/java/org/apache/pulsar/broker/service/BacklogQuotaManagerTest.java b/pulsar-broker/src/test/java/org/apache/pulsar/broker/service/BacklogQuotaManagerTest.java index 9eb6281eddc6d0..97d1798c1e2171 100644 --- a/pulsar-broker/src/test/java/org/apache/pulsar/broker/service/BacklogQuotaManagerTest.java +++ b/pulsar-broker/src/test/java/org/apache/pulsar/broker/service/BacklogQuotaManagerTest.java @@ -36,6 +36,7 @@ import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicBoolean; import lombok.Cleanup; +import org.apache.bookkeeper.mledger.Position; import org.apache.bookkeeper.mledger.impl.ManagedLedgerImpl; import org.apache.pulsar.broker.PulsarService; import org.apache.pulsar.broker.ServiceConfiguration; @@ -529,18 +530,22 @@ public void testConsumerBacklogEvictionTimeQuota() throws Exception { assertEquals(stats.getSubscriptions().get(subName1).getMsgBacklog(), 14); assertEquals(stats.getSubscriptions().get(subName2).getMsgBacklog(), 14); + PersistentTopic topic1Reference = (PersistentTopic) pulsar.getBrokerService().getTopicReference(topic1).get(); + ManagedLedgerImpl ml = (ManagedLedgerImpl) topic1Reference.getManagedLedger(); + Position slowConsumerReadPos = ml.getSlowestConsumer().getReadPosition(); + 
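+        // Sleep past two backlog-quota check intervals so the time-based eviction policy has run.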
Thread.sleep((TIME_TO_CHECK_BACKLOG_QUOTA * 2) * 1000); rolloverStats(); TopicStats stats2 = getTopicStats(topic1); - PersistentTopic topic1Reference = (PersistentTopic) pulsar.getBrokerService().getTopicReference(topic1).get(); - ManagedLedgerImpl ml = (ManagedLedgerImpl) topic1Reference.getManagedLedger(); // Messages on first 2 ledgers should be expired, backlog is number of // message in current ledger. Awaitility.await().untilAsserted(() -> { assertEquals(stats2.getSubscriptions().get(subName1).getMsgBacklog(), ml.getCurrentLedgerEntries()); assertEquals(stats2.getSubscriptions().get(subName2).getMsgBacklog(), ml.getCurrentLedgerEntries()); }); + + assertEquals(ml.getSlowestConsumer().getReadPosition(), slowConsumerReadPos); client.close(); } From 3eac2219814208715042dd0766d8eae60faa8835 Mon Sep 17 00:00:00 2001 From: LinChen <1572139390@qq.com> Date: Thu, 20 Oct 2022 10:15:03 +0800 Subject: [PATCH 18/28] [fix][test] Fix flaky test org.apache.pulsar.broker.service.persistent.PersistentSubscriptionMessageDispatchStreamingDispatcherThrottlingTest#setup (#18115) --- ...ptionMessageDispatchStreamingDispatcherThrottlingTest.java | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/pulsar-broker/src/test/java/org/apache/pulsar/broker/service/persistent/PersistentSubscriptionMessageDispatchStreamingDispatcherThrottlingTest.java b/pulsar-broker/src/test/java/org/apache/pulsar/broker/service/persistent/PersistentSubscriptionMessageDispatchStreamingDispatcherThrottlingTest.java index 1999307701ccc2..042fb5f7bb1a49 100644 --- a/pulsar-broker/src/test/java/org/apache/pulsar/broker/service/persistent/PersistentSubscriptionMessageDispatchStreamingDispatcherThrottlingTest.java +++ b/pulsar-broker/src/test/java/org/apache/pulsar/broker/service/persistent/PersistentSubscriptionMessageDispatchStreamingDispatcherThrottlingTest.java @@ -20,7 +20,7 @@ import org.apache.pulsar.broker.service.streamingdispatch.StreamingDispatcher; import org.apache.pulsar.client.api.SubscriptionMessageDispatchThrottlingTest; -import org.testng.annotations.BeforeMethod; +import org.testng.annotations.BeforeClass; import org.testng.annotations.Test; /** @@ -30,7 +30,7 @@ public class PersistentSubscriptionMessageDispatchStreamingDispatcherThrottlingTest extends SubscriptionMessageDispatchThrottlingTest { - @BeforeMethod + @BeforeClass @Override protected void setup() throws Exception { super.setup(); From 1225a24c079b2c6c781cb47d55139b3bae2dafad Mon Sep 17 00:00:00 2001 From: Cong Zhao Date: Thu, 20 Oct 2022 10:24:41 +0800 Subject: [PATCH 19/28] [fix][broker] Fix unable to start multiple bookies for BKCluster (#18072) --- .../org/apache/pulsar/metadata/bookkeeper/BKCluster.java | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/pulsar-metadata/src/main/java/org/apache/pulsar/metadata/bookkeeper/BKCluster.java b/pulsar-metadata/src/main/java/org/apache/pulsar/metadata/bookkeeper/BKCluster.java index 24fa4730854007..d845f912d2e186 100644 --- a/pulsar-metadata/src/main/java/org/apache/pulsar/metadata/bookkeeper/BKCluster.java +++ b/pulsar-metadata/src/main/java/org/apache/pulsar/metadata/bookkeeper/BKCluster.java @@ -201,13 +201,17 @@ protected void cleanupTempDirs() throws Exception { private ServerConfiguration newServerConfiguration(int index) throws Exception { File dataDir; if (clusterConf.dataDir != null) { - dataDir = new File(clusterConf.dataDir); + if (index == 0) { + dataDir = new File(clusterConf.dataDir); + } else { + dataDir = new File(clusterConf.dataDir + "/" + index); + } } 
else { // Use temp dir and clean it up later dataDir = createTempDir("bookie", "test-" + index); } - if (clusterConf.clearOldData) { + if (clusterConf.clearOldData && dataDir.exists()) { cleanDirectory(dataDir); } From 10eaac2a6f6a3093516375be1959c3ea0a48ec92 Mon Sep 17 00:00:00 2001 From: Cong Zhao Date: Thu, 20 Oct 2022 10:26:40 +0800 Subject: [PATCH 20/28] [improve][ml] Reduce unnecessary calling `span()` when filtering read entries. (#18106) --- .../org/apache/bookkeeper/mledger/impl/ManagedCursorImpl.java | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/managed-ledger/src/main/java/org/apache/bookkeeper/mledger/impl/ManagedCursorImpl.java b/managed-ledger/src/main/java/org/apache/bookkeeper/mledger/impl/ManagedCursorImpl.java index 20f760d893adcb..f9cb6a9ec6e37b 100644 --- a/managed-ledger/src/main/java/org/apache/bookkeeper/mledger/impl/ManagedCursorImpl.java +++ b/managed-ledger/src/main/java/org/apache/bookkeeper/mledger/impl/ManagedCursorImpl.java @@ -2326,8 +2326,8 @@ List filterReadEntries(List entries) { log.debug("[{}] [{}] Filtering entries {} - alreadyDeleted: {}", ledger.getName(), name, entriesRange, individualDeletedMessages); } - if (individualDeletedMessages.isEmpty() || individualDeletedMessages.span() == null - || !entriesRange.isConnected(individualDeletedMessages.span())) { + Range span = individualDeletedMessages.isEmpty() ? null : individualDeletedMessages.span(); + if (span == null || !entriesRange.isConnected(span)) { // There are no individually deleted messages in this entry list, no need to perform filtering if (log.isDebugEnabled()) { log.debug("[{}] [{}] No filtering needed for entries {}", ledger.getName(), name, entriesRange); From de3dbaa15cf4df64fc2a512c634bedc1f04c23ba Mon Sep 17 00:00:00 2001 From: Cong Zhao Date: Thu, 20 Oct 2022 10:32:48 +0800 Subject: [PATCH 21/28] [improve][test] Improve V1_AdminApiTest to reduce the execution time (#18094) --- .../broker/admin/v1/V1_AdminApiTest.java | 117 +++++++++++++----- 1 file changed, 88 insertions(+), 29 deletions(-) diff --git a/pulsar-broker/src/test/java/org/apache/pulsar/broker/admin/v1/V1_AdminApiTest.java b/pulsar-broker/src/test/java/org/apache/pulsar/broker/admin/v1/V1_AdminApiTest.java index c0863cd73337a4..90005273201658 100644 --- a/pulsar-broker/src/test/java/org/apache/pulsar/broker/admin/v1/V1_AdminApiTest.java +++ b/pulsar-broker/src/test/java/org/apache/pulsar/broker/admin/v1/V1_AdminApiTest.java @@ -113,8 +113,9 @@ import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.testng.Assert; +import org.testng.annotations.AfterClass; import org.testng.annotations.AfterMethod; -import org.testng.annotations.BeforeMethod; +import org.testng.annotations.BeforeClass; import org.testng.annotations.DataProvider; import org.testng.annotations.Test; @@ -136,7 +137,7 @@ public class V1_AdminApiTest extends MockedPulsarServiceBaseTest { private NamespaceBundleFactory bundleFactory; - @BeforeMethod + @BeforeClass @Override public void setup() throws Exception { conf.setTopicLevelPoliciesEnabled(false); @@ -169,7 +170,7 @@ public void setup() throws Exception { admin.namespaces().createNamespace("prop-xyz/use/ns1"); } - @AfterMethod(alwaysRun = true) + @AfterClass(alwaysRun = true) @Override public void cleanup() throws Exception { adminTls.close(); @@ -177,6 +178,31 @@ public void cleanup() throws Exception { mockPulsarSetup.cleanup(); } + @AfterMethod(alwaysRun = true) + public void reset() throws Exception { + pulsar.getConfiguration().setForceDeleteNamespaceAllowed(true); 
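+        // Force-delete every namespace after each test so topics and policies cannot leak into the next test.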
+ for (String tenant : admin.tenants().getTenants()) { + for (String namespace : admin.namespaces().getNamespaces(tenant)) { + deleteNamespaceGraceFullyByMultiPulsars(namespace, true, admin, pulsar, + mockPulsarSetup.getPulsar()); + } + } + pulsar.getConfiguration().setForceDeleteNamespaceAllowed(false); + + resetConfig(); + + if (!admin.clusters().getClusters().contains("use")) { + admin.clusters().createCluster("use", + ClusterData.builder().serviceUrl(pulsar.getWebServiceAddress()).build()); + } + + if (!admin.tenants().getTenants().contains("prop-xyz")) { + TenantInfoImpl tenantInfo = new TenantInfoImpl(Set.of("role1", "role2"), Set.of("use")); + admin.tenants().createTenant("prop-xyz", tenantInfo); + } + admin.namespaces().createNamespace("prop-xyz/use/ns1"); + } + @DataProvider(name = "numBundles") public static Object[][] numBundles() { return new Object[][] { { 1 }, { 4 } }; @@ -450,6 +476,7 @@ public void brokers() throws Exception { @Test public void testUpdateDynamicConfigurationWithZkWatch() throws Exception { final int initValue = 30000; + long defaultValue = pulsar.getConfiguration().getBrokerShutdownTimeoutMs(); pulsar.getConfiguration().setBrokerShutdownTimeoutMs(initValue); // (1) try to update dynamic field final long shutdownTime = 10; @@ -485,7 +512,7 @@ public void testUpdateDynamicConfigurationWithZkWatch() throws Exception { } catch (Exception e) { assertTrue(e instanceof PreconditionFailedException); } - + pulsar.getConfiguration().setBrokerShutdownTimeoutMs(defaultValue); } /** @@ -528,6 +555,9 @@ public void testInvalidDynamicConfigContentInZK() throws Exception { } // verify value is updated assertEquals(pulsar.getConfiguration().getBrokerShutdownTimeoutMs(), newValue); + + cleanup(); + setup(); } /** @@ -546,6 +576,7 @@ public void testUpdateDynamicLocalConfiguration() throws Exception { // (1) try to update dynamic field final long initValue = 30000; final long shutdownTime = 10; + long defaultValue = pulsar.getConfiguration().getBrokerShutdownTimeoutMs(); pulsar.getConfiguration().setBrokerShutdownTimeoutMs(initValue); // update configuration admin.brokers().updateDynamicConfiguration("brokerShutdownTimeoutMs", Long.toString(shutdownTime)); @@ -558,6 +589,8 @@ public void testUpdateDynamicLocalConfiguration() throws Exception { // verify value is updated assertEquals(pulsar.getConfiguration().getBrokerShutdownTimeoutMs(), shutdownTime); + + pulsar.getConfiguration().setBrokerShutdownTimeoutMs(defaultValue); } @Test @@ -572,6 +605,7 @@ public void testGetDynamicLocalConfiguration() throws Exception { // (1) try to update dynamic field final String configName = "brokerShutdownTimeoutMs"; final long shutdownTime = 10; + long defaultValue = pulsar.getConfiguration().getBrokerShutdownTimeoutMs(); pulsar.getConfiguration().setBrokerShutdownTimeoutMs(30000); Map configs = admin.brokers().getAllDynamicConfigurations(); assertTrue(configs.isEmpty()); @@ -580,26 +614,29 @@ public void testGetDynamicLocalConfiguration() throws Exception { admin.brokers().updateDynamicConfiguration(configName, Long.toString(shutdownTime)); // Now, znode is created: updateConfigurationAndregisterListeners and check if configuration updated assertEquals(Long.parseLong(admin.brokers().getAllDynamicConfigurations().get(configName)), shutdownTime); + + pulsar.getConfiguration().setBrokerShutdownTimeoutMs(defaultValue); } @Test - public void properties() throws PulsarAdminException { + public void testTenant() throws Exception { Set allowedClusters = Set.of("use"); TenantInfoImpl tenantInfo = 
new TenantInfoImpl(Set.of("role1", "role2"), allowedClusters); - admin.tenants().updateTenant("prop-xyz", tenantInfo); + admin.tenants().createTenant("prop-xyz2", tenantInfo); + admin.namespaces().createNamespace("prop-xyz2/use/ns1"); - assertEquals(admin.tenants().getTenants(), List.of("prop-xyz")); + assertEquals(admin.tenants().getTenants(), List.of("prop-xyz", "prop-xyz2")); - assertEquals(admin.tenants().getTenantInfo("prop-xyz"), tenantInfo); + assertEquals(admin.tenants().getTenantInfo("prop-xyz2"), tenantInfo); TenantInfoImpl newPropertyAdmin = new TenantInfoImpl(Set.of("role3", "role4"), allowedClusters); - admin.tenants().updateTenant("prop-xyz", newPropertyAdmin); + admin.tenants().updateTenant("prop-xyz2", newPropertyAdmin); - assertEquals(admin.tenants().getTenantInfo("prop-xyz"), newPropertyAdmin); + assertEquals(admin.tenants().getTenantInfo("prop-xyz2"), newPropertyAdmin); - admin.namespaces().deleteNamespace("prop-xyz/use/ns1"); - admin.tenants().deleteTenant("prop-xyz"); - assertEquals(admin.tenants().getTenants(), new ArrayList<>()); + admin.namespaces().deleteNamespace("prop-xyz2/use/ns1"); + admin.tenants().deleteTenant("prop-xyz2"); + assertEquals(admin.tenants().getTenants(), List.of("prop-xyz")); // Check name validation try { @@ -709,6 +746,8 @@ public void namespaces() throws Exception { // both unload and delete should succeed for ns2 on other broker with a redirect // otheradmin.namespaces().unload("prop-xyz/use/ns2"); + tenantInfo = new TenantInfoImpl(Set.of("role1", "role2"), Set.of("use")); + admin.tenants().updateTenant("prop-xyz", tenantInfo); } @Test(dataProvider = "topicName") @@ -1227,15 +1266,25 @@ public void testClearBacklogOnNamespace(Integer numBundles) throws Exception { admin.namespaces().createNamespace("prop-xyz/use/ns1-bundles", numBundles); // create consumer and subscription - pulsarClient.newConsumer().topic("persistent://prop-xyz/use/ns1-bundles/ds2").subscriptionName("my-sub") - .subscribe(); - pulsarClient.newConsumer().topic("persistent://prop-xyz/use/ns1-bundles/ds2").subscriptionName("my-sub-1") - .subscribe(); - pulsarClient.newConsumer().topic("persistent://prop-xyz/use/ns1-bundles/ds2").subscriptionName("my-sub-2") + @Cleanup + Consumer subscribe = + pulsarClient.newConsumer().topic("persistent://prop-xyz/use/ns1-bundles/ds2").subscriptionName("my-sub") + .subscribe(); + @Cleanup + Consumer subscribe1 = pulsarClient.newConsumer().topic("persistent://prop-xyz/use/ns1-bundles/ds2") + .subscriptionName("my-sub-1") .subscribe(); - pulsarClient.newConsumer().topic("persistent://prop-xyz/use/ns1-bundles/ds1").subscriptionName("my-sub") + @Cleanup + Consumer subscribe2 = pulsarClient.newConsumer().topic("persistent://prop-xyz/use/ns1-bundles/ds2") + .subscriptionName("my-sub-2") .subscribe(); - pulsarClient.newConsumer().topic("persistent://prop-xyz/use/ns1-bundles/ds1").subscriptionName("my-sub-1") + @Cleanup + Consumer subscribe3 = + pulsarClient.newConsumer().topic("persistent://prop-xyz/use/ns1-bundles/ds1").subscriptionName("my-sub") + .subscribe(); + @Cleanup + Consumer subscribe4 = pulsarClient.newConsumer().topic("persistent://prop-xyz/use/ns1-bundles/ds1") + .subscriptionName("my-sub-1") .subscribe(); // Create producer @@ -1299,7 +1348,7 @@ public void testUnsubscribeOnNamespace(Integer numBundles) throws Exception { Consumer consumer2 = pulsarClient.newConsumer().topic("persistent://prop-xyz/use/ns1-bundles/ds2") .subscriptionName("my-sub-1").subscribe(); /* Consumer consumer3 = */ 
pulsarClient.newConsumer().topic("persistent://prop-xyz/use/ns1-bundles/ds2") - .subscriptionName("my-sub-2").subscribe(); + .subscriptionName("my-sub-2").subscribe().close(); Consumer consumer4 = pulsarClient.newConsumer().topic("persistent://prop-xyz/use/ns1-bundles/ds1") .subscriptionName("my-sub").subscribe(); Consumer consumer5 = pulsarClient.newConsumer().topic("persistent://prop-xyz/use/ns1-bundles/ds1") @@ -1456,24 +1505,29 @@ public void testJacksonWithTypeDifferencies() throws Exception { @Test public void testBackwardCompatiblity() throws Exception { - assertEquals(admin.tenants().getTenants(), List.of("prop-xyz")); - assertEquals(admin.tenants().getTenantInfo("prop-xyz").getAdminRoles(), + Set allowedClusters = Set.of("use"); + TenantInfoImpl tenantInfo = new TenantInfoImpl(Set.of("role1", "role2"), allowedClusters); + admin.tenants().createTenant("prop-xyz2", tenantInfo); + admin.namespaces().createNamespace("prop-xyz2/use/ns1"); + + assertEquals(admin.tenants().getTenants(), List.of("prop-xyz", "prop-xyz2")); + assertEquals(admin.tenants().getTenantInfo("prop-xyz2").getAdminRoles(), List.of("role1", "role2")); - assertEquals(admin.tenants().getTenantInfo("prop-xyz").getAllowedClusters(), Set.of("use")); + assertEquals(admin.tenants().getTenantInfo("prop-xyz2").getAllowedClusters(), Set.of("use")); // Try to deserialize property JSON with IncompatiblePropertyAdmin format // it should succeed ignoring missing fields TenantsImpl properties = (TenantsImpl) admin.tenants(); - IncompatiblePropertyAdmin result = properties.request(properties.getWebTarget().path("prop-xyz")) + IncompatiblePropertyAdmin result = properties.request(properties.getWebTarget().path("prop-xyz2")) .get(IncompatiblePropertyAdmin.class); assertEquals(result.allowedClusters, Set.of("use")); assertEquals(result.someNewIntField, 0); assertNull(result.someNewString); - admin.namespaces().deleteNamespace("prop-xyz/use/ns1"); - admin.tenants().deleteTenant("prop-xyz"); - assertEquals(admin.tenants().getTenants(), new ArrayList<>()); + admin.namespaces().deleteNamespace("prop-xyz2/use/ns1"); + admin.tenants().deleteTenant("prop-xyz2"); + assertEquals(admin.tenants().getTenants(), Set.of("prop-xyz")); } @Test(dataProvider = "topicName") @@ -1742,6 +1796,8 @@ public void testObjectWithUnknownProperties() { */ @Test public void testPersistentTopicsExpireMessages() throws Exception { + cleanup(); + setup(); // Force to create a topic publishMessagesOnPersistentTopic("persistent://prop-xyz/use/ns1/ds2", 0); @@ -1868,7 +1924,7 @@ public void testPulsarAdminForUriAndUrlEncoding(String topicName) throws Excepti final int numOfPartitions = 4; admin.topics().createPartitionedTopic(topic1, numOfPartitions); // Create a consumer to get stats on this topic - pulsarClient.newConsumer().topic(topic1).subscriptionName("my-subscriber-name").subscribe(); + pulsarClient.newConsumer().topic(topic1).subscriptionName("my-subscriber-name").subscribe().close(); TopicsImpl persistent = (TopicsImpl) admin.topics(); Field field = TopicsImpl.class.getDeclaredField("adminTopics"); @@ -1998,6 +2054,9 @@ public void testTopicBundleRangeLookup() throws Exception { final String topicName = "persistent://prop-xyz/use/getBundleNs/topic1"; String bundleRange = admin.lookups().getBundleRange(topicName); assertEquals(bundleRange, pulsar.getNamespaceService().getBundle(TopicName.get(topicName)).getBundleRange()); + + admin.tenants().updateTenant("prop-xyz", new TenantInfoImpl(Set.of("role1", "role2"), + Set.of("use"))); } @Test From 
3783ad21f3556333d05a77b1ab09546f3f510a5e Mon Sep 17 00:00:00 2001 From: Lei Zhiyuan Date: Thu, 20 Oct 2022 10:33:32 +0800 Subject: [PATCH 22/28] [cleanup][broker] Cleanup unused method in namespace base (#18089) --- .../broker/admin/impl/NamespacesBase.java | 81 ------------------- 1 file changed, 81 deletions(-) diff --git a/pulsar-broker/src/main/java/org/apache/pulsar/broker/admin/impl/NamespacesBase.java b/pulsar-broker/src/main/java/org/apache/pulsar/broker/admin/impl/NamespacesBase.java index 01532449687ee7..1991867627e73e 100644 --- a/pulsar-broker/src/main/java/org/apache/pulsar/broker/admin/impl/NamespacesBase.java +++ b/pulsar-broker/src/main/java/org/apache/pulsar/broker/admin/impl/NamespacesBase.java @@ -752,52 +752,6 @@ protected CompletableFuture internalModifyDeduplicationAsync(Boolean enabl })); } - @SuppressWarnings("deprecation") - protected void internalUnloadNamespace(AsyncResponse asyncResponse) { - validateSuperUserAccess(); - log.info("[{}] Unloading namespace {}", clientAppId(), namespaceName); - - if (namespaceName.isGlobal()) { - // check cluster ownership for a given global namespace: redirect if peer-cluster owns it - validateGlobalNamespaceOwnership(namespaceName); - } else { - validateClusterOwnership(namespaceName.getCluster()); - validateClusterForTenant(namespaceName.getTenant(), namespaceName.getCluster()); - } - - Policies policies = getNamespacePolicies(namespaceName); - - final List> futures = new ArrayList<>(); - List boundaries = policies.bundles.getBoundaries(); - for (int i = 0; i < boundaries.size() - 1; i++) { - String bundle = String.format("%s_%s", boundaries.get(i), boundaries.get(i + 1)); - try { - futures.add(pulsar().getAdminClient().namespaces().unloadNamespaceBundleAsync(namespaceName.toString(), - bundle)); - } catch (PulsarServerException e) { - log.error("[{}] Failed to unload namespace {}", clientAppId(), namespaceName, e); - asyncResponse.resume(new RestException(e)); - return; - } - } - - FutureUtil.waitForAll(futures).handle((result, exception) -> { - if (exception != null) { - log.error("[{}] Failed to unload namespace {}", clientAppId(), namespaceName, exception); - if (exception.getCause() instanceof PulsarAdminException) { - asyncResponse.resume(new RestException((PulsarAdminException) exception.getCause())); - return null; - } else { - asyncResponse.resume(new RestException(exception.getCause())); - return null; - } - } - log.info("[{}] Successfully unloaded all the bundles in namespace {}", clientAppId(), namespaceName); - asyncResponse.resume(Response.noContent().build()); - return null; - }); - } - protected CompletableFuture internalUnloadNamespaceAsync() { return validateSuperUserAccessAsync() .thenCompose(__ -> { @@ -1158,23 +1112,6 @@ protected CompletableFuture internalSetTopicDispatchRateAsync(DispatchRate })); } - protected void internalDeleteTopicDispatchRate() { - validateSuperUserAccess(); - try { - updatePolicies(namespaceName, policies -> { - policies.topicDispatchRate.remove(pulsar().getConfiguration().getClusterName()); - policies.clusterDispatchRate.remove(pulsar().getConfiguration().getClusterName()); - return policies; - }); - log.info("[{}] Successfully delete the dispatchRate for cluster on namespace {}", clientAppId(), - namespaceName); - } catch (Exception e) { - log.error("[{}] Failed to delete the dispatchRate for cluster on namespace {}", clientAppId(), - namespaceName, e); - throw new RestException(e); - } - } - protected CompletableFuture internalDeleteTopicDispatchRateAsync() { return 
validateSuperUserAccessAsync().thenCompose(__ -> updatePoliciesAsync(namespaceName, policies -> { policies.topicDispatchRate.remove(pulsar().getConfiguration().getClusterName()); @@ -1269,24 +1206,6 @@ protected CompletableFuture setBacklogQuotaAsync(BacklogQuotaType backlogQ }); } - protected void internalRemoveBacklogQuota(BacklogQuotaType backlogQuotaType) { - validateNamespacePolicyOperation(namespaceName, PolicyName.BACKLOG, PolicyOperation.WRITE); - validatePoliciesReadOnlyAccess(); - final BacklogQuotaType quotaType = backlogQuotaType != null ? backlogQuotaType - : BacklogQuotaType.destination_storage; - try { - updatePolicies(namespaceName, policies -> { - policies.backlog_quota_map.remove(quotaType); - return policies; - }); - log.info("[{}] Successfully removed backlog namespace={}, quota={}", clientAppId(), namespaceName, - backlogQuotaType); - } catch (Exception e) { - log.error("[{}] Failed to update backlog quota map for namespace {}", clientAppId(), namespaceName, e); - throw new RestException(e); - } - } - protected void internalSetRetention(RetentionPolicies retention) { validateRetentionPolicies(retention); validateNamespacePolicyOperation(namespaceName, PolicyName.RETENTION, PolicyOperation.WRITE); From cec8c9127d6414b39af810f3d00844ee14910adc Mon Sep 17 00:00:00 2001 From: labuladong Date: Thu, 20 Oct 2022 10:35:36 +0800 Subject: [PATCH 23/28] [fix][doc] Fix errors in subscription document (#18085) --- site2/docs/concepts-messaging.md | 19 +++++++++++-------- 1 file changed, 11 insertions(+), 8 deletions(-) diff --git a/site2/docs/concepts-messaging.md b/site2/docs/concepts-messaging.md index dd762d0f75de42..617982393ac31e 100644 --- a/site2/docs/concepts-messaging.md +++ b/site2/docs/concepts-messaging.md @@ -437,8 +437,7 @@ consumer.reconsumeLater(msg, customProperties, 3, TimeUnit.SECONDS); :::note * Currently, retry letter topic is enabled in Shared subscription types. -* Compared with negativ![pub-sub-border](https://user-images.githubusercontent.com/94193423/192618897-460a10de-db92-4d43-b38c-59faffaa8044.svg) -e acknowledgment, retry letter topic is more suitable for messages that require a large number of retries with a configurable retry interval. Because messages in the retry letter topic are persisted to BookKeeper, while messages that need to be retried due to negative acknowledgment are cached on the client side. +* Compared with negative acknowledgment, retry letter topic is more suitable for messages that require a large number of retries with a configurable retry interval. Because messages in the retry letter topic are persisted to BookKeeper, while messages that need to be retried due to negative acknowledgment are cached on the client side. ::: @@ -552,7 +551,7 @@ When a subscription has no consumers, its subscription type is undefined. The ty In the *Exclusive* type, only a single consumer is allowed to attach to the subscription. If multiple consumers subscribe to a topic using the same subscription, an error occurs. Note that if the topic is partitioned, all partitions will be consumed by the single consumer allowed to be connected to the subscription. -In the diagram below, only **Consumer A-0** is allowed to consume messages. +In the diagram below, only **Consumer A** is allowed to consume messages. :::tip @@ -571,7 +570,7 @@ In the *Failover* type, multiple consumers can attach to the same subscription. For example, a partitioned topic has 3 partitions, and 15 consumers. Each partition will have 1 active consumer and 4 stand-by consumers. 
-In the diagram below, **Consumer-B-0** is the master consumer while **Consumer-B-1** would be the next consumer in line to receive messages if **Consumer-B-0** is disconnected.
+In the diagram below, **Consumer A** is the master consumer while **Consumer B** would be the next consumer in line to receive messages if **Consumer A** is disconnected.

 ![Failover subscriptions](/assets/pulsar-failover-subscriptions.svg)

@@ -579,7 +578,7 @@ In the diagram below, **Consumer-B-0** is the master consumer while **Consumer-B
 In *shared* or *round robin* type, multiple consumers can attach to the same subscription. Messages are delivered in a round-robin distribution across consumers, and any given message is delivered to only one consumer. When a consumer disconnects, all the messages that were sent to it and not acknowledged will be rescheduled for sending to the remaining consumers.

-In the diagram below, **Consumer-C-1** and **Consumer-C-2** are able to subscribe to the topic, but **Consumer-C-3** and others could as well.
+In the diagram below, **Consumer A**, **Consumer B** and **Consumer C** are all able to subscribe to the topic.

 :::note

@@ -598,7 +597,12 @@ In the *Key_Shared* type, multiple consumers can attach to the same subscription

 ![Key_Shared subscriptions](/assets/pulsar-key-shared-subscriptions.svg)

-Note that when the consumers are using the Key_Shared subscription type, you need to **disable batching** or **use key-based batching** for the producers. There are two reasons why the key-based batching is necessary for the Key_Shared subscription type:
+:::note
+
+When the consumers are using the Key_Shared subscription type, you need to **disable batching** or **use key-based batching** for the producers.
+:::
+
+There are two reasons why the key-based batching is necessary for the Key_Shared subscription type:

 1. The broker dispatches messages according to the keys of the messages, but the default batching approach might fail to pack the messages with the same key to the same batch.
 2. Since it is the consumers instead of the broker who dispatch the messages from the batches, the key of the first message in one batch is considered as the key to all messages in this batch, thereby leading to context errors.

@@ -933,8 +937,7 @@ All message retention and expiry are managed at the [namespace](#namespaces) level.

 ![Message retention and expiry](/assets/retention-expiry.svg)

-With message retention, shown at the top, a retention policy applied to all topics in a namespace dictates that some messages are durably stored in Pulsar even thoug![batching](https://user-images.githubusercontent.com/94193423/192618946-306d7d9c-a88f-45bd-8106-c5f2ca602ca6.svg)
-h they've already been acknowledged. Acknowledged messages that are not covered by the retention policy are deleted. Without a retention policy, *all* of the acknowledged messages would be deleted.
+With message retention, shown at the top, a retention policy applied to all topics in a namespace dictates that some messages are durably stored in Pulsar even though they've already been acknowledged. Acknowledged messages that are not covered by the retention policy are deleted. Without a retention policy, all of the acknowledged messages would be deleted.
With message expiry, shown at the bottom, some messages are deleted, even though they haven't been acknowledged, because they've expired according to the TTL applied to the namespace (for example because a TTL of 5 minutes has been applied and the messages haven't been acknowledged but are 10 minutes old).

From 5b452d1ce82a34e1c725b155a301db45935a0abe Mon Sep 17 00:00:00 2001
From: momo-jun <60642177+momo-jun@users.noreply.github.com>
Date: Thu, 20 Oct 2022 11:11:49 +0800
Subject: [PATCH 24/28] [improve][doc] Add code example for WebSocket API to
 use TLS with CLI tools (#15485)

* add a new section for websocket API

* Update site2/docs/security-tls-transport.md

Co-authored-by: Yunze Xu

* Update site2/docs/security-tls-transport.md

Co-authored-by: Yunze Xu

* Update site2/docs/security-tls-transport.md

Co-authored-by: Yunze Xu

* updates

* updates

* updates

* update code snippet as per review comments

Co-authored-by: Yunze Xu
---
 site2/docs/client-libraries-websocket.md | 9 ++---
 site2/docs/security-tls-transport.md | 51 +++++++++++++++++++++++-
 2 files changed, 53 insertions(+), 7 deletions(-)

diff --git a/site2/docs/client-libraries-websocket.md b/site2/docs/client-libraries-websocket.md
index 9060da9df270e5..c154961be06768 100644
--- a/site2/docs/client-libraries-websocket.md
+++ b/site2/docs/client-libraries-websocket.md
@@ -44,7 +44,7 @@ clusterName=my-cluster

 ### Security settings

-To enable TLS encryption on WebSocket service:
+To enable TLS encryption on WebSocket service, configure the following parameters in the `conf/broker.conf` file.

 ```properties
 tlsEnabled=true
@@ -368,9 +368,8 @@ Key | Type | Required? | Explanation

 #### Acknowledging the message

-**In WebSocket**, Reader needs to acknowledge the successful processing of the message to
-have the Pulsar WebSocket service update the number of pending messages.
-If you don't send acknowledgments, Pulsar WebSocket service will stop sending messages after reaching the pendingMessages limit.
+**In WebSocket**, the Reader needs to acknowledge the successful processing of the message to have the Pulsar WebSocket service update the number of pending messages.
+If you don't send acknowledgments, the Pulsar WebSocket service will stop sending messages after reaching the `pendingMessages` limit.

 ```json
 {
@@ -384,7 +383,7 @@ Key | Type | Required? | Explanation

 #### Check if reach the end of topic

-Consumers can check if it has reached the end of topic by sending the `isEndOfTopic` request.
+A consumer can check if it has reached the end of a topic by sending the `isEndOfTopic` request.

 **Request**

diff --git a/site2/docs/security-tls-transport.md b/site2/docs/security-tls-transport.md
index 0597083430461b..57e0dcaa96d253 100644
--- a/site2/docs/security-tls-transport.md
+++ b/site2/docs/security-tls-transport.md
@@ -246,12 +246,12 @@ To enable TLS encryption, you need to configure the clients to use `https://` wi

 As the server certificate that you generated above does not belong to any of the default trust chains, you also need to either specify the path of the **trust cert** (recommended) or enable the clients to allow untrusted server certs.

-The following examples show how to configure TLS encryption for Java/Python/C++/Node.js/C# clients.
+The following examples show how to configure TLS encryption for Java/Python/C++/Node.js/C#/WebSocket clients.
 ````mdx-code-block
+  values={[{"label":"Java","value":"Java"},{"label":"Python","value":"Python"},{"label":"C++","value":"C++"},{"label":"Node.js","value":"Node.js"},{"label":"C#","value":"C#"},{"label":"WebSocket API","value":"WebSocket API"}]}>

 ```java
@@ -324,6 +324,53 @@ var client = PulsarClient.Builder()

 > Note that `VerifyCertificateName` refers to the configuration of hostname verification in the C# client.

+
+
+
+```python
+import websockets
+import asyncio
+import base64
+import json
+import ssl
+import pathlib
+
+# Build a TLS context that presents the client certificate and trusts the CA
+# that signed the broker certificate.
+ssl_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
+client_cert_pem = pathlib.Path(__file__).with_name("client.cert.pem")
+client_key_pem = pathlib.Path(__file__).with_name("client.key.pem")
+ca_cert_pem = pathlib.Path(__file__).with_name("ca.cert.pem")
+ssl_context.load_cert_chain(certfile=client_cert_pem, keyfile=client_key_pem)
+ssl_context.load_verify_locations(ca_cert_pem)
+# websocket producer uri wss, not ws
+uri = "wss://localhost:8080/ws/v2/producer/persistent/public/default/testtopic"
+# encode message
+s = "Hello World"
+firstEncoded = s.encode("UTF-8")
+binaryEncoded = base64.b64encode(firstEncoded)
+payloadString = binaryEncoded.decode('UTF-8')
+async def producer_handler(websocket):
+    await websocket.send(json.dumps({
+        'payload' : payloadString,
+        'properties': {
+            'key1' : 'value1',
+            'key2' : 'value2'
+        },
+        'context' : 5
+    }))
+async def test():
+    # Connect with the custom ssl_context so the broker certificate is
+    # verified against ca.cert.pem rather than the default trust store.
+    async with websockets.connect(uri, ssl=ssl_context) as websocket:
+        await producer_handler(websocket)
+        message = await websocket.recv()
+        print(f"< {message}")
+asyncio.run(test())
+```
+
+> Note that in addition to the required configurations in the `conf/client.conf` file, you need to configure more parameters in the `conf/broker.conf` file to enable TLS encryption on WebSocket service. For more details, see [security settings for WebSocket](client-libraries-websocket.md/#security-settings).
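+
+> The custom `ssl_context` must be passed to `websockets.connect` via its `ssl` argument; with a bare `wss://` URI the library builds a default context that does not trust the self-signed CA generated above.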
+ ```` From b0945d1d45d1e911d24151a23cf284e476203ba7 Mon Sep 17 00:00:00 2001 From: Rajan Dhabalia Date: Wed, 19 Oct 2022 21:02:19 -0700 Subject: [PATCH 25/28] [feat] [broker] PIP-188 support blue-green cluster migration [part-1] (#17962) * [feat][PIP-188] support blue-green cluster migration [part-1] Add blue-green cluster migration Fix dependency * cleanup --- conf/broker.conf | 4 + .../bookkeeper/mledger/ManagedLedger.java | 7 + .../mledger/impl/ManagedLedgerImpl.java | 33 +- .../pulsar/broker/ServiceConfiguration.java | 7 + .../broker/admin/impl/ClustersBase.java | 62 ++++ .../service/AbstractBaseDispatcher.java | 13 + .../pulsar/broker/service/AbstractTopic.java | 31 +- .../pulsar/broker/service/BrokerService.java | 11 + .../service/BrokerServiceException.java | 10 + .../pulsar/broker/service/Consumer.java | 13 + .../pulsar/broker/service/Producer.java | 11 + .../broker/service/PulsarCommandSender.java | 3 + .../service/PulsarCommandSenderImpl.java | 13 + .../pulsar/broker/service/ServerCnx.java | 19 +- .../apache/pulsar/broker/service/Topic.java | 2 + .../nonpersistent/NonPersistentTopic.java | 40 ++- .../persistent/CompactorSubscription.java | 4 +- ...PersistentDispatcherMultipleConsumers.java | 2 +- ...sistentDispatcherSingleActiveConsumer.java | 2 +- ...tStreamingDispatcherMultipleConsumers.java | 3 +- ...reamingDispatcherSingleActiveConsumer.java | 2 +- .../persistent/PersistentSubscription.java | 5 +- .../service/persistent/PersistentTopic.java | 23 +- .../intercept/CounterBrokerInterceptor.java | 2 +- .../broker/service/ClusterMigrationTest.java | 329 ++++++++++++++++++ pulsar-client-admin-api/pom.xml | 1 + .../apache/pulsar/client/admin/Clusters.java | 37 ++ .../common/policies/data/ClusterData.java | 23 ++ .../client/admin/internal/ClustersImpl.java | 14 + .../client/api/PulsarClientException.java | 16 + .../apache/pulsar/admin/cli/CmdClusters.java | 24 ++ .../apache/pulsar/client/impl/ClientCnx.java | 23 ++ .../pulsar/client/impl/ConnectionHandler.java | 7 +- .../pulsar/client/impl/HandlerState.java | 9 + pulsar-common/pom.xml | 5 + .../common/policies/data/ClusterDataImpl.java | 27 +- .../pulsar/common/protocol/Commands.java | 11 + .../pulsar/common/protocol/PulsarDecoder.java | 10 + pulsar-common/src/main/proto/PulsarApi.proto | 17 + site2/docs/reference-pulsar-admin.md | 18 + .../jcloud/impl/MockManagedLedger.java | 12 + 41 files changed, 886 insertions(+), 19 deletions(-) create mode 100644 pulsar-broker/src/test/java/org/apache/pulsar/broker/service/ClusterMigrationTest.java diff --git a/conf/broker.conf b/conf/broker.conf index 44cf26d21707ba..f5e4be9d6f390b 100644 --- a/conf/broker.conf +++ b/conf/broker.conf @@ -1461,6 +1461,10 @@ splitTopicAndPartitionLabelInPrometheus=false # Otherwise, aggregate it by list index. aggregatePublisherStatsByProducerName=false +# Interval between checks to see if cluster is migrated and marks topic migrated +# if cluster is marked migrated. Disable with value 0. (Default disabled). 
+clusterMigrationCheckDurationSeconds=0 + ### --- Schema storage --- ### # The schema storage implementation used by this broker schemaRegistryStorageClassName=org.apache.pulsar.broker.service.schema.BookkeeperSchemaStorageFactory diff --git a/managed-ledger/src/main/java/org/apache/bookkeeper/mledger/ManagedLedger.java b/managed-ledger/src/main/java/org/apache/bookkeeper/mledger/ManagedLedger.java index c5de804b1379d2..7fcbfdc8b476c3 100644 --- a/managed-ledger/src/main/java/org/apache/bookkeeper/mledger/ManagedLedger.java +++ b/managed-ledger/src/main/java/org/apache/bookkeeper/mledger/ManagedLedger.java @@ -443,6 +443,8 @@ void asyncOpenCursor(String name, InitialPosition initialPosition, Map asyncMigrate(); + /** * Terminate the managed ledger and return the last committed entry. * @@ -534,6 +536,11 @@ void asyncOpenCursor(String name, InitialPosition initialPosition, Map STATE_UPDATER = AtomicReferenceFieldUpdater.newUpdater(ManagedLedgerImpl.class, State.class, "state"); protected volatile State state = null; + private volatile boolean migrated = false; @Getter private final OrderedScheduler scheduledExecutor; @@ -343,7 +346,7 @@ public ManagedLedgerImpl(ManagedLedgerFactoryImpl factory, BookKeeper bookKeeper // Get the next rollover time. Add a random value upto 5% to avoid rollover multiple ledgers at the same time this.maximumRolloverTimeMs = getMaximumRolloverTimeMs(config); this.mlOwnershipChecker = mlOwnershipChecker; - this.propertiesMap = new HashMap(); + this.propertiesMap = new ConcurrentHashMap<>(); this.inactiveLedgerRollOverTimeMs = config.getInactiveLedgerRollOverTimeMs(); if (config.getManagedLedgerInterceptor() != null) { this.managedLedgerInterceptor = config.getManagedLedgerInterceptor(); @@ -367,7 +370,6 @@ public void operationComplete(ManagedLedgerInfo mlInfo, Stat stat) { lastConfirmedEntry = new PositionImpl(mlInfo.getTerminatedPosition()); log.info("[{}] Recovering managed ledger terminated at {}", name, lastConfirmedEntry); } - for (LedgerInfo ls : mlInfo.getLedgerInfoList()) { ledgers.put(ls.getLedgerId(), ls); } @@ -379,6 +381,7 @@ public void operationComplete(ManagedLedgerInfo mlInfo, Stat stat) { propertiesMap.put(property.getKey(), property.getValue()); } } + migrated = mlInfo.hasTerminatedPosition() && propertiesMap.containsKey(MIGRATION_STATE_PROPERTY); if (managedLedgerInterceptor != null) { managedLedgerInterceptor.onManagedLedgerPropertiesInitialize(propertiesMap); } @@ -1271,6 +1274,27 @@ private long consumedLedgerSize(long ledgerSize, long ledgerEntries, long consum } } + public CompletableFuture asyncMigrate() { + propertiesMap.put(MIGRATION_STATE_PROPERTY, Boolean.TRUE.toString()); + CompletableFuture result = new CompletableFuture<>(); + asyncTerminate(new TerminateCallback() { + + @Override + public void terminateComplete(Position lastCommittedPosition, Object ctx) { + migrated = true; + log.info("[{}] topic successfully terminated and migrated at {}", name, lastCommittedPosition); + result.complete(lastCommittedPosition); + } + + @Override + public void terminateFailed(ManagedLedgerException exception, Object ctx) { + log.info("[{}] topic failed to terminate and migrate ", name, exception); + result.completeExceptionally(exception); + } + }, null); + return result; + } + @Override public synchronized void asyncTerminate(TerminateCallback callback, Object ctx) { if (state == State.Fenced) { @@ -1363,6 +1387,11 @@ public boolean isTerminated() { return state == State.Terminated; } + @Override + public boolean isMigrated() { + return migrated; 
+ } + @Override public void close() throws InterruptedException, ManagedLedgerException { final CountDownLatch counter = new CountDownLatch(1); diff --git a/pulsar-broker-common/src/main/java/org/apache/pulsar/broker/ServiceConfiguration.java b/pulsar-broker-common/src/main/java/org/apache/pulsar/broker/ServiceConfiguration.java index 6e327fd8b2609b..8ba741801043be 100644 --- a/pulsar-broker-common/src/main/java/org/apache/pulsar/broker/ServiceConfiguration.java +++ b/pulsar-broker-common/src/main/java/org/apache/pulsar/broker/ServiceConfiguration.java @@ -2514,6 +2514,13 @@ The delayed message index bucket time step(in seconds) in per bucket snapshot se ) private long brokerServiceCompactionPhaseOneLoopTimeInSeconds = 30; + @FieldContext( + category = CATEGORY_SERVER, + doc = "Interval between checks to see if cluster is migrated and marks topic migrated " + + " if cluster is marked migrated. Disable with value 0. (Default disabled)." + ) + private int clusterMigrationCheckDurationSeconds = 0; + @FieldContext( category = CATEGORY_SCHEMA, doc = "Enforce schema validation on following cases:\n\n" diff --git a/pulsar-broker/src/main/java/org/apache/pulsar/broker/admin/impl/ClustersBase.java b/pulsar-broker/src/main/java/org/apache/pulsar/broker/admin/impl/ClustersBase.java index 597e191f11cf3d..9e2c7c6b06c194 100644 --- a/pulsar-broker/src/main/java/org/apache/pulsar/broker/admin/impl/ClustersBase.java +++ b/pulsar-broker/src/main/java/org/apache/pulsar/broker/admin/impl/ClustersBase.java @@ -41,6 +41,7 @@ import javax.ws.rs.PUT; import javax.ws.rs.Path; import javax.ws.rs.PathParam; +import javax.ws.rs.QueryParam; import javax.ws.rs.WebApplicationException; import javax.ws.rs.container.AsyncResponse; import javax.ws.rs.container.Suspended; @@ -59,6 +60,7 @@ import org.apache.pulsar.common.policies.data.BrokerNamespaceIsolationData; import org.apache.pulsar.common.policies.data.BrokerNamespaceIsolationDataImpl; import org.apache.pulsar.common.policies.data.ClusterData; +import org.apache.pulsar.common.policies.data.ClusterData.ClusterUrl; import org.apache.pulsar.common.policies.data.ClusterDataImpl; import org.apache.pulsar.common.policies.data.FailureDomainImpl; import org.apache.pulsar.common.policies.data.NamespaceIsolationDataImpl; @@ -229,6 +231,66 @@ public void updateCluster( }); } + @POST + @Path("/{cluster}/migrate") + @ApiOperation( + value = "Update the configuration for a cluster migration.", + notes = "This operation requires Pulsar superuser privileges.") + @ApiResponses(value = { + @ApiResponse(code = 204, message = "Cluster has been updated."), + @ApiResponse(code = 400, message = "Cluster url must not be empty."), + @ApiResponse(code = 403, message = "Don't have admin permission or policies are read-only."), + @ApiResponse(code = 404, message = "Cluster doesn't exist."), + @ApiResponse(code = 500, message = "Internal server error.") + }) + public void updateClusterMigration( + @Suspended AsyncResponse asyncResponse, + @ApiParam(value = "The cluster name", required = true) + @PathParam("cluster") String cluster, + @ApiParam(value = "Is cluster migrated", required = true) + @QueryParam("migrated") boolean isMigrated, + @ApiParam( + value = "The cluster url data", + required = true, + examples = @Example( + value = @ExampleProperty( + mediaType = MediaType.APPLICATION_JSON, + value = """ + { + "serviceUrl": "http://pulsar.example.com:8080", + "brokerServiceUrl": "pulsar://pulsar.example.com:6651" + } + """ + ) + ) + ) ClusterUrl clusterUrl) { + if (isMigrated && 
clusterUrl.isEmpty()) { + asyncResponse.resume(new RestException(Status.BAD_REQUEST, "Cluster url must not be empty")); + return; + } + validateSuperUserAccessAsync() + .thenCompose(__ -> validatePoliciesReadOnlyAccessAsync()) + .thenCompose(__ -> clusterResources().updateClusterAsync(cluster, old -> { + ClusterDataImpl data = (ClusterDataImpl) old; + data.setMigrated(isMigrated); + data.setMigratedClusterUrl(clusterUrl); + return data; + })) + .thenAccept(__ -> { + log.info("[{}] Updated cluster {}", clientAppId(), cluster); + asyncResponse.resume(Response.ok().build()); + }).exceptionally(ex -> { + log.error("[{}] Failed to update cluster {}", clientAppId(), cluster, ex); + Throwable realCause = FutureUtil.unwrapCompletionException(ex); + if (realCause instanceof MetadataStoreException.NotFoundException) { + asyncResponse.resume(new RestException(Status.NOT_FOUND, "Cluster does not exist")); + return null; + } + resumeAsyncResponseExceptionally(asyncResponse, ex); + return null; + }); + } + @POST @Path("/{cluster}/peers") @ApiOperation( diff --git a/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/AbstractBaseDispatcher.java b/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/AbstractBaseDispatcher.java index b52f30361b1929..5069f7cd440007 100644 --- a/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/AbstractBaseDispatcher.java +++ b/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/AbstractBaseDispatcher.java @@ -338,6 +338,19 @@ protected String getSubscriptionName() { return subscription == null ? null : subscription.getName(); } + protected void checkAndApplyReachedEndOfTopicOrTopicMigration(List consumers) { + PersistentTopic topic = (PersistentTopic) subscription.getTopic(); + checkAndApplyReachedEndOfTopicOrTopicMigration(topic, consumers); + } + + public static void checkAndApplyReachedEndOfTopicOrTopicMigration(PersistentTopic topic, List consumers) { + if (topic.isMigrated()) { + consumers.forEach(c -> c.topicMigrated(topic.getMigratedClusterUrl())); + } else { + consumers.forEach(Consumer::reachedEndOfTopic); + } + } + @Override public long getFilterProcessedMsgCount() { return this.filterProcessedMsgs.longValue(); diff --git a/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/AbstractTopic.java b/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/AbstractTopic.java index 428c54ecc191de..a995192cead118 100644 --- a/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/AbstractTopic.java +++ b/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/AbstractTopic.java @@ -44,12 +44,14 @@ import org.apache.commons.collections4.CollectionUtils; import org.apache.commons.collections4.MapUtils; import org.apache.commons.lang3.tuple.Pair; +import org.apache.pulsar.broker.PulsarService; import org.apache.pulsar.broker.ServiceConfiguration; import org.apache.pulsar.broker.resourcegroup.ResourceGroup; import org.apache.pulsar.broker.resourcegroup.ResourceGroupPublishLimiter; import org.apache.pulsar.broker.service.BrokerServiceException.ConsumerBusyException; import org.apache.pulsar.broker.service.BrokerServiceException.ProducerBusyException; import org.apache.pulsar.broker.service.BrokerServiceException.ProducerFencedException; +import org.apache.pulsar.broker.service.BrokerServiceException.TopicMigratedException; import org.apache.pulsar.broker.service.BrokerServiceException.TopicTerminatedException; import org.apache.pulsar.broker.service.plugin.EntryFilterWithClassLoader; import 
org.apache.pulsar.broker.service.schema.BookkeeperSchemaStorage; @@ -59,6 +61,7 @@ import org.apache.pulsar.common.api.proto.CommandSubscribe.SubType; import org.apache.pulsar.common.naming.TopicName; import org.apache.pulsar.common.policies.data.BacklogQuota; +import org.apache.pulsar.common.policies.data.ClusterData.ClusterUrl; import org.apache.pulsar.common.policies.data.DelayedDeliveryPolicies; import org.apache.pulsar.common.policies.data.EntryFilters; import org.apache.pulsar.common.policies.data.HierarchyTopicPolicies; @@ -686,7 +689,10 @@ public CompletableFuture> addProducer(Producer producer, lock.writeLock().lock(); try { checkTopicFenced(); - if (isTerminated()) { + if (isMigrated()) { + log.warn("[{}] Attempting to add producer to a migrated topic", topic); + throw new TopicMigratedException("Topic was already migrated"); + } else if (isTerminated()) { log.warn("[{}] Attempting to add producer to a terminated topic", topic); throw new TopicTerminatedException("Topic was already terminated"); } @@ -1180,6 +1186,8 @@ public boolean deletePartitionedTopicMetadataWhileInactive() { protected abstract boolean isTerminated(); + protected abstract boolean isMigrated(); + private static final Logger log = LoggerFactory.getLogger(AbstractTopic.class); public InactiveTopicPolicies getInactiveTopicPolicies() { @@ -1299,4 +1307,25 @@ public void updateBrokerSubscribeRate() { topicPolicies.getSubscribeRate().updateBrokerValue( subscribeRateInBroker(brokerService.pulsar().getConfiguration())); } + + public Optional getMigratedClusterUrl() { + return getMigratedClusterUrl(brokerService.getPulsar()); + } + + public static CompletableFuture> getMigratedClusterUrlAsync(PulsarService pulsar) { + return pulsar.getPulsarResources().getClusterResources().getClusterAsync(pulsar.getConfig().getClusterName()) + .thenApply(clusterData -> (clusterData.isPresent() && clusterData.get().isMigrated()) + ? 
Optional.ofNullable(clusterData.get().getMigratedClusterUrl()) + : Optional.empty()); + } + + public static Optional getMigratedClusterUrl(PulsarService pulsar) { + try { + return getMigratedClusterUrlAsync(pulsar) + .get(pulsar.getPulsarResources().getClusterResources().getOperationTimeoutSec(), TimeUnit.SECONDS); + } catch (Exception e) { + log.warn("Failed to get migration cluster URL", e); + } + return Optional.empty(); + } } diff --git a/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/BrokerService.java b/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/BrokerService.java index 63ab3352ef2004..1def78e0266efb 100644 --- a/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/BrokerService.java +++ b/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/BrokerService.java @@ -582,6 +582,13 @@ protected void startInactivityMonitor() { subscriptionExpiryCheckIntervalInSeconds, subscriptionExpiryCheckIntervalInSeconds, TimeUnit.SECONDS); } + + // check cluster migration + int interval = pulsar().getConfiguration().getClusterMigrationCheckDurationSeconds(); + if (interval > 0) { + inactivityMonitor.scheduleAtFixedRate(safeRun(() -> checkClusterMigration()), interval, interval, + TimeUnit.SECONDS); + } } protected void startMessageExpiryMonitor() { @@ -1851,6 +1858,10 @@ public void checkGC() { forEachTopic(Topic::checkGC); } + public void checkClusterMigration() { + forEachTopic(Topic::checkClusterMigration); + } + public void checkMessageExpiry() { forEachTopic(Topic::checkMessageExpiry); } diff --git a/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/BrokerServiceException.java b/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/BrokerServiceException.java index c6d8ffabcecafd..6b3ab99595d313 100644 --- a/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/BrokerServiceException.java +++ b/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/BrokerServiceException.java @@ -100,6 +100,16 @@ public TopicTerminatedException(Throwable t) { } } + public static class TopicMigratedException extends BrokerServiceException { + public TopicMigratedException(String msg) { + super(msg); + } + + public TopicMigratedException(Throwable t) { + super(t); + } + } + public static class ServerMetadataException extends BrokerServiceException { public ServerMetadataException(Throwable t) { super(t); diff --git a/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/Consumer.java b/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/Consumer.java index 767c7bb92747dd..29eb5f5eb53dc3 100644 --- a/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/Consumer.java +++ b/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/Consumer.java @@ -31,6 +31,7 @@ import java.util.List; import java.util.Map; import java.util.Objects; +import java.util.Optional; import java.util.concurrent.CompletableFuture; import java.util.concurrent.atomic.AtomicIntegerFieldUpdater; import java.util.concurrent.atomic.LongAdder; @@ -49,10 +50,12 @@ import org.apache.pulsar.common.api.proto.CommandAck; import org.apache.pulsar.common.api.proto.CommandAck.AckType; import org.apache.pulsar.common.api.proto.CommandSubscribe.SubType; +import org.apache.pulsar.common.api.proto.CommandTopicMigrated.ResourceType; import org.apache.pulsar.common.api.proto.KeyLongValue; import org.apache.pulsar.common.api.proto.KeySharedMeta; import org.apache.pulsar.common.api.proto.MessageIdData; import org.apache.pulsar.common.naming.TopicName; +import 
org.apache.pulsar.common.policies.data.ClusterData.ClusterUrl; import org.apache.pulsar.common.policies.data.stats.ConsumerStatsImpl; import org.apache.pulsar.common.protocol.Commands; import org.apache.pulsar.common.stats.Rate; @@ -785,6 +788,16 @@ public void reachedEndOfTopic() { cnx.getCommandSender().sendReachedEndOfTopic(consumerId); } + public void topicMigrated(Optional clusterUrl) { + if (clusterUrl.isPresent()) { + ClusterUrl url = clusterUrl.get(); + cnx.getCommandSender().sendTopicMigrated(ResourceType.Consumer, consumerId, url.getBrokerServiceUrl(), + url.getBrokerServiceUrlTls()); + // disconnect consumer after sending migrated cluster url + disconnect(); + } + } + /** * Checks if consumer-blocking on unAckedMessages is allowed for below conditions:
    * a. consumer must have Shared-subscription
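
A minimal sketch (not part of this patch) of driving the new `POST /{cluster}/migrate` endpoint added in `ClustersBase` above, under the standard `/admin/v2/clusters` base path; the host names, cluster name, and broker URLs are placeholders.

```python
import requests

# broker URLs of the target ("green") cluster; keys match the new
# ClusterData.ClusterUrl fields introduced by this patch
green = {
    "brokerServiceUrl": "pulsar://green-cluster:6650",
    "brokerServiceUrlTls": "pulsar+ssl://green-cluster:6651",
}
resp = requests.post(
    "http://blue-cluster:8080/admin/v2/clusters/blue/migrate",
    params={"migrated": "true"},  # maps to the @QueryParam("migrated") flag
    json=green,                   # deserialized into ClusterData.ClusterUrl
)
resp.raise_for_status()
```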
    diff --git a/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/Producer.java b/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/Producer.java index 62182f6e84f492..902ba3ff19ad0d 100644 --- a/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/Producer.java +++ b/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/Producer.java @@ -43,10 +43,12 @@ import org.apache.pulsar.broker.service.nonpersistent.NonPersistentTopic; import org.apache.pulsar.broker.service.persistent.PersistentTopic; import org.apache.pulsar.client.api.transaction.TxnID; +import org.apache.pulsar.common.api.proto.CommandTopicMigrated.ResourceType; import org.apache.pulsar.common.api.proto.MessageMetadata; import org.apache.pulsar.common.api.proto.ProducerAccessMode; import org.apache.pulsar.common.api.proto.ServerError; import org.apache.pulsar.common.naming.TopicName; +import org.apache.pulsar.common.policies.data.ClusterData.ClusterUrl; import org.apache.pulsar.common.policies.data.stats.NonPersistentPublisherStatsImpl; import org.apache.pulsar.common.policies.data.stats.PublisherStatsImpl; import org.apache.pulsar.common.protocol.Commands; @@ -665,6 +667,15 @@ public CompletableFuture disconnect() { return closeFuture; } + public void topicMigrated(Optional clusterUrl) { + if (clusterUrl.isPresent()) { + ClusterUrl url = clusterUrl.get(); + cnx.getCommandSender().sendTopicMigrated(ResourceType.Producer, producerId, url.getBrokerServiceUrl(), + url.getBrokerServiceUrlTls()); + disconnect(); + } + } + public void updateRates() { msgIn.calculateRate(); chunkedMessageRate.calculateRate(); diff --git a/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/PulsarCommandSender.java b/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/PulsarCommandSender.java index dc5b97d846f536..d2775c19ab78ea 100644 --- a/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/PulsarCommandSender.java +++ b/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/PulsarCommandSender.java @@ -25,6 +25,7 @@ import org.apache.bookkeeper.mledger.Entry; import org.apache.pulsar.client.api.transaction.TxnID; import org.apache.pulsar.common.api.proto.CommandLookupTopicResponse; +import org.apache.pulsar.common.api.proto.CommandTopicMigrated.ResourceType; import org.apache.pulsar.common.api.proto.ServerError; import org.apache.pulsar.common.protocol.schema.SchemaVersion; import org.apache.pulsar.common.schema.SchemaInfo; @@ -77,6 +78,8 @@ void sendLookupResponse(String brokerServiceUrl, String brokerServiceUrlTls, boo void sendReachedEndOfTopic(long consumerId); + boolean sendTopicMigrated(ResourceType type, long resourceId, String brokerUrl, String brokerUrlTls); + Future sendMessagesToConsumer(long consumerId, String topicName, Subscription subscription, int partitionIdx, List entries, EntryBatchSizes batchSizes, EntryBatchIndexesAcks batchIndexesAcks, diff --git a/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/PulsarCommandSenderImpl.java b/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/PulsarCommandSenderImpl.java index 543739bae27600..7bc26072ffa75a 100644 --- a/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/PulsarCommandSenderImpl.java +++ b/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/PulsarCommandSenderImpl.java @@ -31,6 +31,7 @@ import org.apache.pulsar.client.api.transaction.TxnID; import org.apache.pulsar.common.api.proto.BaseCommand; import org.apache.pulsar.common.api.proto.CommandLookupTopicResponse; 
+import org.apache.pulsar.common.api.proto.CommandTopicMigrated.ResourceType; import org.apache.pulsar.common.api.proto.ProtocolVersion; import org.apache.pulsar.common.api.proto.ServerError; import org.apache.pulsar.common.api.proto.TxnAction; @@ -219,6 +220,18 @@ public void sendReachedEndOfTopic(long consumerId) { } } + @Override + public boolean sendTopicMigrated(ResourceType type, long resourceId, String brokerUrl, String brokerUrlTls) { + // Only send notification if the client understand the command + if (cnx.getRemoteEndpointProtocolVersion() >= ProtocolVersion.v20.getValue()) { + log.info("[{}] Notifying {} that topic is migrated", type.name(), resourceId); + cnx.ctx().writeAndFlush(Commands.newTopicMigrated(type, resourceId, brokerUrl, brokerUrlTls), + cnx.ctx().voidPromise()); + return true; + } + return false; + } + @Override public ChannelPromise sendMessagesToConsumer(long consumerId, String topicName, Subscription subscription, int partitionIdx, List entries, diff --git a/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/ServerCnx.java b/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/ServerCnx.java index 7e7e0095622039..668998a6c5f2a1 100644 --- a/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/ServerCnx.java +++ b/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/ServerCnx.java @@ -23,6 +23,7 @@ import static org.apache.commons.lang3.StringUtils.isNotBlank; import static org.apache.pulsar.broker.admin.impl.PersistentTopicsBase.unsafeGetPartitionedTopicMetadataAsync; import static org.apache.pulsar.broker.lookup.TopicLookupBase.lookupTopicAsync; +import static org.apache.pulsar.broker.service.persistent.PersistentTopic.getMigratedClusterUrl; import static org.apache.pulsar.common.api.proto.ProtocolVersion.v5; import static org.apache.pulsar.common.protocol.Commands.DEFAULT_CONSUMER_EPOCH; import static org.apache.pulsar.common.protocol.Commands.newLookupErrorResponse; @@ -121,6 +122,7 @@ import org.apache.pulsar.common.api.proto.CommandSubscribe.InitialPosition; import org.apache.pulsar.common.api.proto.CommandSubscribe.SubType; import org.apache.pulsar.common.api.proto.CommandTcClientConnectRequest; +import org.apache.pulsar.common.api.proto.CommandTopicMigrated.ResourceType; import org.apache.pulsar.common.api.proto.CommandUnsubscribe; import org.apache.pulsar.common.api.proto.CommandWatchTopicList; import org.apache.pulsar.common.api.proto.CommandWatchTopicListClose; @@ -141,6 +143,7 @@ import org.apache.pulsar.common.naming.TopicName; import org.apache.pulsar.common.policies.data.BacklogQuota; import org.apache.pulsar.common.policies.data.BacklogQuota.BacklogQuotaType; +import org.apache.pulsar.common.policies.data.ClusterData.ClusterUrl; import org.apache.pulsar.common.policies.data.NamespaceOperation; import org.apache.pulsar.common.policies.data.TopicOperation; import org.apache.pulsar.common.policies.data.stats.ConsumerStatsImpl; @@ -1497,7 +1500,21 @@ private void buildProducerAndAddTopic(Topic topic, long producerId, String produ producers.remove(producerId, producerFuture); }).exceptionally(ex -> { - if (ex.getCause() instanceof BrokerServiceException.ProducerFencedException) { + if (ex.getCause() instanceof BrokerServiceException.TopicMigratedException) { + Optional clusterURL = getMigratedClusterUrl(service.getPulsar()); + if (clusterURL.isPresent()) { + log.info("[{}] redirect migrated producer to topic {}: producerId={}, {}", remoteAddress, topicName, + producerId, ex.getCause().getMessage()); + 
commandSender.sendTopicMigrated(ResourceType.Producer, producerId, + clusterURL.get().getBrokerServiceUrl(), clusterURL.get().getBrokerServiceUrlTls()); + closeProducer(producer); + return null; + + } else { + log.warn("[{}] failed producer because migration url not configured topic {}: producerId={}, {}", + remoteAddress, topicName, producerId, ex.getCause().getMessage()); + } + } else if (ex.getCause() instanceof BrokerServiceException.ProducerFencedException) { if (log.isDebugEnabled()) { log.debug("[{}] Failed to add producer to topic {}: producerId={}, {}", remoteAddress, topicName, producerId, ex.getCause().getMessage()); diff --git a/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/Topic.java b/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/Topic.java index c0f931bd6a553e..5b0a8c32fa906e 100644 --- a/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/Topic.java +++ b/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/Topic.java @@ -197,6 +197,8 @@ CompletableFuture createSubscription(String subscriptionName, Init void checkGC(); + CompletableFuture checkClusterMigration(); + void checkInactiveSubscriptions(); /** diff --git a/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/nonpersistent/NonPersistentTopic.java b/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/nonpersistent/NonPersistentTopic.java index 05e42c1b64d6ea..c15a7605ca179e 100644 --- a/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/nonpersistent/NonPersistentTopic.java +++ b/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/nonpersistent/NonPersistentTopic.java @@ -73,6 +73,7 @@ import org.apache.pulsar.common.api.proto.KeySharedMeta; import org.apache.pulsar.common.naming.TopicName; import org.apache.pulsar.common.policies.data.BacklogQuota; +import org.apache.pulsar.common.policies.data.ClusterData.ClusterUrl; import org.apache.pulsar.common.policies.data.ManagedLedgerInternalStats.CursorStats; import org.apache.pulsar.common.policies.data.PersistentTopicInternalStats; import org.apache.pulsar.common.policies.data.Policies; @@ -106,6 +107,7 @@ public class NonPersistentTopic extends AbstractTopic implements Topic, TopicPol AtomicLongFieldUpdater.newUpdater(NonPersistentTopic.class, "entriesAddedCounter"); private volatile long entriesAddedCounter = 0; + private volatile boolean migrated = false; private static final FastThreadLocal threadLocalTopicStats = new FastThreadLocal() { @Override protected TopicStats initialValue() { @@ -153,10 +155,18 @@ public NonPersistentTopic(String topic, BrokerService brokerService) { registerTopicPolicyListener(); } + private CompletableFuture updateClusterMigrated() { + return getMigratedClusterUrlAsync(brokerService.getPulsar()).thenAccept(url -> migrated = url.isPresent()); + } + + private Optional getClusterMigrationUrl() { + return getMigratedClusterUrl(brokerService.getPulsar()); + } + public CompletableFuture initialize() { return brokerService.pulsar().getPulsarResources().getNamespaceResources() .getPoliciesAsync(TopicName.get(topic).getNamespaceObject()) - .thenAccept(optPolicies -> { + .thenCompose(optPolicies -> { if (!optPolicies.isPresent()) { log.warn("[{}] Policies not present and isEncryptionRequired will be set to false", topic); isEncryptionRequired = false; @@ -168,6 +178,7 @@ public CompletableFuture initialize() { } updatePublishDispatcher(); updateResourceGroupLimiter(optPolicies); + return updateClusterMigrated(); }); } @@ -273,7 +284,6 @@ private CompletableFuture 
internalSubscribe(final TransportCnx cnx, St return brokerService.checkTopicNsOwnership(getName()).thenCompose(__ -> { final CompletableFuture future = new CompletableFuture<>(); - if (hasBatchMessagePublished && !cnx.isBatchMessageCompatibleVersion()) { if (log.isDebugEnabled()) { log.debug("[{}] Consumer doesn't support batch-message {}", topic, subscriptionName); @@ -313,6 +323,9 @@ private CompletableFuture internalSubscribe(final TransportCnx cnx, St Consumer consumer = new Consumer(subscription, subType, topic, consumerId, priorityLevel, consumerName, false, cnx, cnx.getAuthRole(), metadata, readCompacted, keySharedMeta, MessageId.latest, DEFAULT_CONSUMER_EPOCH); + if (isMigrated()) { + consumer.topicMigrated(getClusterMigrationUrl()); + } addConsumerToSubscription(subscription, consumer).thenRun(() -> { if (!cnx.isActive()) { @@ -925,6 +938,23 @@ public boolean isActive() { return currentUsageCount() != 0 || !subscriptions.isEmpty(); } + @Override + public CompletableFuture checkClusterMigration() { + Optional url = getClusterMigrationUrl(); + if (url.isPresent()) { + this.migrated = true; + producers.forEach((__, producer) -> { + producer.topicMigrated(url); + }); + subscriptions.forEach((__, sub) -> { + sub.getConsumers().forEach((consumer) -> { + consumer.topicMigrated(url); + }); + }); + } + return CompletableFuture.completedFuture(null); + } + @Override public void checkGC() { if (!isDeleteWhileInactive()) { @@ -1164,6 +1194,12 @@ protected boolean isTerminated() { return false; } + + @Override + protected boolean isMigrated() { + return this.migrated; + } + @Override public boolean isPersistent() { return false; diff --git a/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/persistent/CompactorSubscription.java b/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/persistent/CompactorSubscription.java index f7279968c51bbc..8427acb48b11f9 100644 --- a/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/persistent/CompactorSubscription.java +++ b/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/persistent/CompactorSubscription.java @@ -19,13 +19,13 @@ package org.apache.pulsar.broker.service.persistent; import static com.google.common.base.Preconditions.checkArgument; +import static org.apache.pulsar.broker.service.AbstractBaseDispatcher.checkAndApplyReachedEndOfTopicOrTopicMigration; import java.util.List; import java.util.Map; import org.apache.bookkeeper.mledger.AsyncCallbacks.MarkDeleteCallback; import org.apache.bookkeeper.mledger.ManagedCursor; import org.apache.bookkeeper.mledger.ManagedLedgerException; import org.apache.bookkeeper.mledger.Position; -import org.apache.pulsar.broker.service.Consumer; import org.apache.pulsar.common.api.proto.CommandAck.AckType; import org.apache.pulsar.compaction.CompactedTopic; import org.apache.pulsar.compaction.Compactor; @@ -102,7 +102,7 @@ public void markDeleteFailed(ManagedLedgerException exception, Object ctx) { if (topic.getManagedLedger().isTerminated() && cursor.getNumberOfEntriesInBacklog(false) == 0) { // Notify all consumer that the end of topic was reached - dispatcher.getConsumers().forEach(Consumer::reachedEndOfTopic); + checkAndApplyReachedEndOfTopicOrTopicMigration(topic, dispatcher.getConsumers()); } } diff --git a/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/persistent/PersistentDispatcherMultipleConsumers.java b/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/persistent/PersistentDispatcherMultipleConsumers.java index 9f09e60abb29ee..24d1d702e7dae8 
100644 --- a/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/persistent/PersistentDispatcherMultipleConsumers.java +++ b/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/persistent/PersistentDispatcherMultipleConsumers.java @@ -776,7 +776,7 @@ public synchronized void readEntriesFailed(ManagedLedgerException exception, Obj if (cursor.getNumberOfEntriesInBacklog(false) == 0) { // Topic has been terminated and there are no more entries to read // Notify the consumer only if all the messages were already acknowledged - consumerList.forEach(Consumer::reachedEndOfTopic); + checkAndApplyReachedEndOfTopicOrTopicMigration(consumerList); } } else if (exception.getCause() instanceof TransactionBufferException.TransactionNotSealedException || exception.getCause() instanceof ManagedLedgerException.OffloadReadHandleClosedException) { diff --git a/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/persistent/PersistentDispatcherSingleActiveConsumer.java b/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/persistent/PersistentDispatcherSingleActiveConsumer.java index 3ba7a82aa5e353..385569cd8d8d69 100644 --- a/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/persistent/PersistentDispatcherSingleActiveConsumer.java +++ b/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/persistent/PersistentDispatcherSingleActiveConsumer.java @@ -486,7 +486,7 @@ private synchronized void internalReadEntriesFailed(ManagedLedgerException excep if (cursor.getNumberOfEntriesInBacklog(false) == 0) { // Topic has been terminated and there are no more entries to read // Notify the consumer only if all the messages were already acknowledged - consumers.forEach(Consumer::reachedEndOfTopic); + checkAndApplyReachedEndOfTopicOrTopicMigration(consumers); } } else if (exception.getCause() instanceof TransactionBufferException.TransactionNotSealedException || exception.getCause() instanceof ManagedLedgerException.OffloadReadHandleClosedException) { diff --git a/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/persistent/PersistentStreamingDispatcherMultipleConsumers.java b/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/persistent/PersistentStreamingDispatcherMultipleConsumers.java index 649d19bcec0918..b5cd52b7885d7d 100644 --- a/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/persistent/PersistentStreamingDispatcherMultipleConsumers.java +++ b/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/persistent/PersistentStreamingDispatcherMultipleConsumers.java @@ -31,7 +31,6 @@ import org.apache.bookkeeper.mledger.impl.PositionImpl; import org.apache.bookkeeper.mledger.util.SafeRun; import org.apache.commons.lang3.tuple.Pair; -import org.apache.pulsar.broker.service.Consumer; import org.apache.pulsar.broker.service.Subscription; import org.apache.pulsar.broker.service.streamingdispatch.PendingReadEntryRequest; import org.apache.pulsar.broker.service.streamingdispatch.StreamingDispatcher; @@ -142,7 +141,7 @@ public void notifyConsumersEndOfTopic() { if (cursor.getNumberOfEntriesInBacklog(false) == 0) { // Topic has been terminated and there are no more entries to read // Notify the consumer only if all the messages were already acknowledged - consumerList.forEach(Consumer::reachedEndOfTopic); + checkAndApplyReachedEndOfTopicOrTopicMigration(consumerList); } } diff --git a/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/persistent/PersistentStreamingDispatcherSingleActiveConsumer.java 
b/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/persistent/PersistentStreamingDispatcherSingleActiveConsumer.java index 2048bb016b8c3e..612d3f5796e735 100644 --- a/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/persistent/PersistentStreamingDispatcherSingleActiveConsumer.java +++ b/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/persistent/PersistentStreamingDispatcherSingleActiveConsumer.java @@ -97,7 +97,7 @@ public synchronized void notifyConsumersEndOfTopic() { if (cursor.getNumberOfEntriesInBacklog(false) == 0) { // Topic has been terminated and there are no more entries to read // Notify the consumer only if all the messages were already acknowledged - consumers.forEach(Consumer::reachedEndOfTopic); + checkAndApplyReachedEndOfTopicOrTopicMigration(consumers); } } diff --git a/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/persistent/PersistentSubscription.java b/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/persistent/PersistentSubscription.java index 855bec48527e53..681b617277543c 100644 --- a/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/persistent/PersistentSubscription.java +++ b/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/persistent/PersistentSubscription.java @@ -18,6 +18,7 @@ */ package org.apache.pulsar.broker.service.persistent; +import static org.apache.pulsar.broker.service.AbstractBaseDispatcher.checkAndApplyReachedEndOfTopicOrTopicMigration; import static org.apache.pulsar.common.naming.SystemTopicNames.isEventSystemTopic; import com.google.common.annotations.VisibleForTesting; import com.google.common.base.MoreObjects; @@ -426,7 +427,7 @@ public void acknowledgeMessage(List positions, AckType ackType, Map disconnectProducersFuture; if (producers.size() > 0) { List> futures = new ArrayList<>(); + // send migration url metadata to producers before disconnecting them + if (isMigrated()) { + producers.forEach((__, producer) -> producer.topicMigrated(getMigratedClusterUrl())); + } producers.forEach((__, producer) -> futures.add(producer.disconnect())); disconnectProducersFuture = FutureUtil.waitForAll(futures); } else { @@ -585,7 +590,7 @@ public synchronized void addFailed(ManagedLedgerException exception, Object ctx) log.warn("[{}] Failed to persist msg in store: {}", topic, exception.getMessage()); } - if (exception instanceof ManagedLedgerTerminatedException) { + if (exception instanceof ManagedLedgerTerminatedException && !isMigrated()) { // Signal the producer that this topic is no longer available callback.completed(new TopicTerminatedException(exception), -1, -1); } else { @@ -2352,6 +2357,17 @@ private boolean hasBacklogs() { return subscriptions.values().stream().anyMatch(sub -> sub.getNumberOfEntriesInBacklog(false) > 0); } + @Override + public CompletableFuture checkClusterMigration() { + Optional clusterUrl = getMigratedClusterUrl(); + if (!isMigrated() && clusterUrl.isPresent()) { + log.info("{} triggering topic migration", topic); + return ledger.asyncMigrate().thenCompose(r -> null); + } else { + return CompletableFuture.completedFuture(null); + } + } + @Override public void checkGC() { if (!isDeleteWhileInactive()) { @@ -3318,6 +3334,11 @@ protected boolean isTerminated() { return ledger.isTerminated(); } + @Override + public boolean isMigrated() { + return ledger.isMigrated(); + } + public TransactionInPendingAckStats getTransactionInPendingAckStats(TxnID txnID, String subName) { return 
this.subscriptions.get(subName).getTransactionInPendingAckStats(txnID); } diff --git a/pulsar-broker/src/test/java/org/apache/pulsar/broker/intercept/CounterBrokerInterceptor.java b/pulsar-broker/src/test/java/org/apache/pulsar/broker/intercept/CounterBrokerInterceptor.java index dd83fd2a4ce877..54db4c8ff71390 100644 --- a/pulsar-broker/src/test/java/org/apache/pulsar/broker/intercept/CounterBrokerInterceptor.java +++ b/pulsar-broker/src/test/java/org/apache/pulsar/broker/intercept/CounterBrokerInterceptor.java @@ -40,8 +40,8 @@ import org.apache.pulsar.broker.service.Subscription; import org.apache.pulsar.broker.service.Topic; import org.apache.pulsar.common.api.proto.BaseCommand; -import org.apache.pulsar.common.api.proto.MessageMetadata; import org.apache.pulsar.common.api.proto.CommandAck; +import org.apache.pulsar.common.api.proto.MessageMetadata; import org.apache.pulsar.common.api.proto.TxnAction; import org.eclipse.jetty.server.Response; diff --git a/pulsar-broker/src/test/java/org/apache/pulsar/broker/service/ClusterMigrationTest.java b/pulsar-broker/src/test/java/org/apache/pulsar/broker/service/ClusterMigrationTest.java new file mode 100644 index 00000000000000..34397df9247305 --- /dev/null +++ b/pulsar-broker/src/test/java/org/apache/pulsar/broker/service/ClusterMigrationTest.java @@ -0,0 +1,329 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ +package org.apache.pulsar.broker.service; + +import static org.apache.pulsar.broker.auth.MockedPulsarServiceBaseTest.retryStrategically; +import static org.testng.Assert.assertEquals; +import static org.testng.Assert.assertFalse; +import static org.testng.Assert.assertNotNull; +import static org.testng.Assert.assertTrue; + +import java.lang.reflect.Method; +import java.net.URL; +import java.util.concurrent.TimeUnit; + +import org.apache.pulsar.broker.BrokerTestUtil; +import org.apache.pulsar.broker.PulsarService; +import org.apache.pulsar.broker.auth.MockedPulsarServiceBaseTest; +import org.apache.pulsar.client.admin.PulsarAdmin; +import org.apache.pulsar.client.api.Consumer; +import org.apache.pulsar.client.api.Message; +import org.apache.pulsar.client.api.MessageRoutingMode; +import org.apache.pulsar.client.api.Producer; +import org.apache.pulsar.client.api.PulsarClient; +import org.apache.pulsar.client.api.SubscriptionType; +import org.apache.pulsar.common.policies.data.ClusterData; +import org.apache.pulsar.common.policies.data.ClusterData.ClusterUrl; +import org.apache.pulsar.common.policies.data.TenantInfoImpl; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import org.testng.annotations.AfterMethod; +import org.testng.annotations.BeforeMethod; +import org.testng.annotations.DataProvider; +import org.testng.annotations.Test; + +import com.google.common.collect.Sets; + +import lombok.Cleanup; + +@Test(groups = "broker") +public class ClusterMigrationTest { + + private static final Logger log = LoggerFactory.getLogger(ClusterMigrationTest.class); + protected String methodName; + + String namespace = "pulsar/migrationNs"; + TestBroker broker1, broker2; + URL url1; + URL urlTls1; + PulsarService pulsar1; + + PulsarAdmin admin1; + + URL url2; + URL urlTls2; + PulsarService pulsar2; + PulsarAdmin admin2; + + @DataProvider(name = "TopicsubscriptionTypes") + public Object[][] subscriptionTypes() { + return new Object[][] { + {true, SubscriptionType.Shared}, + {true, SubscriptionType.Key_Shared}, + {true, SubscriptionType.Shared}, + {true, SubscriptionType.Key_Shared}, + + {false, SubscriptionType.Shared}, + {false, SubscriptionType.Key_Shared}, + {false, SubscriptionType.Shared}, + {false, SubscriptionType.Key_Shared}, + }; + } + + @BeforeMethod(alwaysRun = true, timeOut = 300000) + public void setup() throws Exception { + + log.info("--- Starting ReplicatorTestBase::setup ---"); + + broker1 = new TestBroker(); + broker2 = new TestBroker(); + String clusterName = broker1.getClusterName(); + + pulsar1 = broker1.getPulsarService(); + url1 = new URL(pulsar1.getWebServiceAddress()); + urlTls1 = new URL(pulsar1.getWebServiceAddressTls()); + admin1 = PulsarAdmin.builder().serviceHttpUrl(url1.toString()).build(); + + pulsar2 = broker2.getPulsarService(); + url2 = new URL(pulsar2.getWebServiceAddress()); + urlTls2 = new URL(pulsar2.getWebServiceAddressTls()); + admin2 = PulsarAdmin.builder().serviceHttpUrl(url2.toString()).build(); + + // Start region 3 + + // Provision the global namespace + admin1.clusters().createCluster(clusterName, + ClusterData.builder().serviceUrl(url1.toString()).serviceUrlTls(urlTls1.toString()) + .brokerServiceUrl(pulsar1.getBrokerServiceUrl()) + .brokerServiceUrlTls(pulsar1.getBrokerServiceUrlTls()).build()); + admin2.clusters().createCluster(clusterName, + ClusterData.builder().serviceUrl(url2.toString()).serviceUrlTls(urlTls2.toString()) + .brokerServiceUrl(pulsar2.getBrokerServiceUrl()) + 
.brokerServiceUrlTls(pulsar2.getBrokerServiceUrlTls()).build());
+
+        admin1.tenants().createTenant("pulsar",
+                new TenantInfoImpl(Sets.newHashSet("appid1", "appid2", "appid3"), Sets.newHashSet(clusterName)));
+        admin1.namespaces().createNamespace(namespace, Sets.newHashSet(clusterName));
+
+        admin2.tenants().createTenant("pulsar",
+                new TenantInfoImpl(Sets.newHashSet("appid1", "appid2", "appid3"), Sets.newHashSet(clusterName)));
+        admin2.namespaces().createNamespace(namespace, Sets.newHashSet(clusterName));
+
+        assertEquals(admin1.clusters().getCluster(clusterName).getServiceUrl(), url1.toString());
+        assertEquals(admin2.clusters().getCluster(clusterName).getServiceUrl(), url2.toString());
+        assertEquals(admin1.clusters().getCluster(clusterName).getBrokerServiceUrl(), pulsar1.getBrokerServiceUrl());
+        assertEquals(admin2.clusters().getCluster(clusterName).getBrokerServiceUrl(), pulsar2.getBrokerServiceUrl());
+
+        Thread.sleep(100);
+        log.info("--- ReplicatorTestBase::setup completed ---");
+
+    }
+
+    @AfterMethod(alwaysRun = true, timeOut = 300000)
+    protected void cleanup() throws Exception {
+        log.info("--- Shutting down ---");
+        broker1.cleanup();
+        broker2.cleanup();
+    }
+
+    @BeforeMethod(alwaysRun = true)
+    public void beforeMethod(Method m) throws Exception {
+        methodName = m.getName();
+    }
+
+    /**
+     * Test producer/consumer migration using persistent/non-persistent topics and all subscription types:
+     * (1) Producer1 and consumer1 connect to cluster-1
+     * (2) Close consumer1 to build a backlog and publish messages using producer1
+     * (3) Migrate the topic to cluster-2
+     * (4) Validate that producer-1 is connected to cluster-2
+     * (5) Re-create consumer1, drain the backlog, then migrate and reconnect to cluster-2
+     * (6) Create new consumer2 with a different subscription on cluster-1,
+     *     which immediately migrates and reconnects to cluster-2
+     * (7) Create producer-2 directly on cluster-2
+     * (8) Create producer-3 on cluster-1, which should be redirected to cluster-2
+     * (9) Publish messages using producer1, producer2, and producer3
+     * (10) Consume all messages with both consumer1 and consumer2
+     * (11) Create a producer/consumer on the non-migrated cluster and verify they connect to cluster-1
+     * (12) Restart broker-1 and connect a producer/consumer on cluster-1
+     * @throws Exception
+     */
+    @Test(dataProvider = "TopicsubscriptionTypes")
+    public void testClusterMigration(boolean persistent, SubscriptionType subType) throws Exception {
+        log.info("--- Starting ClusterMigrationTest::testClusterMigration ---");
+        final String topicName = BrokerTestUtil
+                .newUniqueName((persistent ? 
"persistent" : "non-persistent") + "://" + namespace + "/migrationTopic"); + + @Cleanup + PulsarClient client1 = PulsarClient.builder().serviceUrl(url1.toString()).statsInterval(0, TimeUnit.SECONDS) + .build(); + // cluster-1 producer/consumer + Producer producer1 = client1.newProducer().topic(topicName).enableBatching(false) + .producerName("cluster1-1").messageRoutingMode(MessageRoutingMode.SinglePartition).create(); + Consumer consumer1 = client1.newConsumer().topic(topicName).subscriptionType(subType) + .subscriptionName("s1").subscribe(); + AbstractTopic topic1 = (AbstractTopic) pulsar1.getBrokerService().getTopic(topicName, false).getNow(null).get(); + retryStrategically((test) -> !topic1.getProducers().isEmpty(), 5, 500); + retryStrategically((test) -> !topic1.getSubscriptions().isEmpty(), 5, 500); + assertFalse(topic1.getProducers().isEmpty()); + assertFalse(topic1.getSubscriptions().isEmpty()); + + // build backlog + consumer1.close(); + int n = 5; + for (int i = 0; i < n; i++) { + producer1.send("test1".getBytes()); + } + + @Cleanup + PulsarClient client2 = PulsarClient.builder().serviceUrl(url2.toString()).statsInterval(0, TimeUnit.SECONDS) + .build(); + // cluster-2 producer/consumer + Producer producer2 = client2.newProducer().topic(topicName).enableBatching(false) + .producerName("cluster2-1").messageRoutingMode(MessageRoutingMode.SinglePartition).create(); + AbstractTopic topic2 = (AbstractTopic) pulsar2.getBrokerService().getTopic(topicName, false).getNow(null).get(); + assertFalse(topic2.getProducers().isEmpty()); + + ClusterUrl migratedUrl = new ClusterUrl(pulsar2.getBrokerServiceUrl(), pulsar2.getBrokerServiceUrlTls()); + admin1.clusters().updateClusterMigration(broker2.getClusterName(), true, migratedUrl); + + retryStrategically((test) -> { + try { + topic1.checkClusterMigration().get(); + return true; + } catch (Exception e) { + // ok + } + return false; + }, 10, 500); + + topic1.checkClusterMigration().get(); + + producer1.sendAsync("test1".getBytes()); + + // producer is disconnected from cluster-1 + retryStrategically((test) -> topic1.getProducers().isEmpty(), 10, 500); + assertTrue(topic1.getProducers().isEmpty()); + + // create 3rd producer on cluster-1 which should be redirected to cluster-2 + Producer producer3 = client1.newProducer().topic(topicName).enableBatching(false) + .producerName("cluster1-2").messageRoutingMode(MessageRoutingMode.SinglePartition).create(); + + // producer is connected with cluster-2 + retryStrategically((test) -> topic2.getProducers().size() == 3, 10, 500); + assertTrue(topic2.getProducers().size() == 3); + + // try to consume backlog messages from cluster-1 + consumer1 = client1.newConsumer().topic(topicName).subscriptionName("s1").subscribe(); + if (persistent) { + for (int i = 0; i < n; i++) { + Message msg = consumer1.receive(); + assertEquals(msg.getData(), "test1".getBytes()); + consumer1.acknowledge(msg); + } + } + // after consuming all messages, consumer should have disconnected + // from cluster-1 and reconnect with cluster-2 + retryStrategically((test) -> !topic2.getSubscriptions().isEmpty(), 10, 500); + assertFalse(topic2.getSubscriptions().isEmpty()); + + // not also create a new consumer which should also reconnect to cluster-2 + Consumer consumer2 = client1.newConsumer().topic(topicName).subscriptionType(subType) + .subscriptionName("s2").subscribe(); + retryStrategically((test) -> topic2.getSubscription("s2") != null, 10, 500); + assertFalse(topic2.getSubscription("s2").getConsumers().isEmpty()); + + // publish messages 
to cluster-2 and consume them + for (int i = 0; i < n; i++) { + producer1.send("test2".getBytes()); + producer2.send("test2".getBytes()); + producer3.send("test2".getBytes()); + } + log.info("Successfully published messages by migrated producers"); + for (int i = 0; i < n * 3; i++) { + assertEquals(consumer1.receive(2, TimeUnit.SECONDS).getData(), "test2".getBytes()); + assertEquals(consumer2.receive(2, TimeUnit.SECONDS).getData(), "test2".getBytes()); + + } + + // create non-migrated topic which should connect to cluster-1 + String diffTopic = BrokerTestUtil + .newUniqueName((persistent ? "persistent" : "non-persistent") + "://" + namespace + "/migrationTopic"); + Consumer consumerDiff = client1.newConsumer().topic(diffTopic).subscriptionType(subType) + .subscriptionName("s1-d").subscribe(); + Producer producerDiff = client1.newProducer().topic(diffTopic).enableBatching(false) + .producerName("cluster1-d").messageRoutingMode(MessageRoutingMode.SinglePartition).create(); + AbstractTopic topicDiff = (AbstractTopic) pulsar1.getBrokerService().getTopic(diffTopic, false).getNow(null).get(); + assertNotNull(topicDiff); + for (int i = 0; i < n; i++) { + producerDiff.send("diff".getBytes()); + assertEquals(consumerDiff.receive(2, TimeUnit.SECONDS).getData(), "diff".getBytes()); + } + + // restart broker-1 + broker1.restart(); + Producer producer4 = client1.newProducer().topic(topicName).enableBatching(false) + .producerName("cluster1-4").messageRoutingMode(MessageRoutingMode.SinglePartition).create(); + Consumer consumer3 = client1.newConsumer().topic(topicName).subscriptionType(subType) + .subscriptionName("s3").subscribe(); + retryStrategically((test) -> topic2.getProducers().size() == 4, 10, 500); + assertTrue(topic2.getProducers().size() == 4); + retryStrategically((test) -> topic2.getSubscription("s3") != null, 10, 500); + assertFalse(topic2.getSubscription("s3").getConsumers().isEmpty()); + for (int i = 0; i < n; i++) { + producer4.send("test3".getBytes()); + assertEquals(consumer1.receive(2, TimeUnit.SECONDS).getData(), "test3".getBytes()); + assertEquals(consumer2.receive(2, TimeUnit.SECONDS).getData(), "test3".getBytes()); + assertEquals(consumer3.receive(2, TimeUnit.SECONDS).getData(), "test3".getBytes()); + } + + log.info("Successfully consumed messages by migrated consumers"); + } + + static class TestBroker extends MockedPulsarServiceBaseTest { + + public TestBroker() throws Exception { + setup(); + } + + @Override + protected void setup() throws Exception { + super.internalSetup(); + } + + public PulsarService getPulsarService() { + return pulsar; + } + + public String getClusterName() { + return configClusterName; + } + + @Override + protected void cleanup() throws Exception { + internalCleanup(); + } + + public void restart() throws Exception { + restartBroker(); + } + + } +} diff --git a/pulsar-client-admin-api/pom.xml b/pulsar-client-admin-api/pom.xml index 5083efc41fe0ed..61b989e5050d2c 100644 --- a/pulsar-client-admin-api/pom.xml +++ b/pulsar-client-admin-api/pom.xml @@ -43,6 +43,7 @@ org.slf4j slf4j-api + diff --git a/pulsar-client-admin-api/src/main/java/org/apache/pulsar/client/admin/Clusters.java b/pulsar-client-admin-api/src/main/java/org/apache/pulsar/client/admin/Clusters.java index 4b83617e3eab51..2e8a43a826045a 100644 --- a/pulsar-client-admin-api/src/main/java/org/apache/pulsar/client/admin/Clusters.java +++ b/pulsar-client-admin-api/src/main/java/org/apache/pulsar/client/admin/Clusters.java @@ -29,6 +29,7 @@ import 
org.apache.pulsar.client.admin.PulsarAdminException.PreconditionFailedException; import org.apache.pulsar.common.policies.data.BrokerNamespaceIsolationData; import org.apache.pulsar.common.policies.data.ClusterData; +import org.apache.pulsar.common.policies.data.ClusterData.ClusterUrl; import org.apache.pulsar.common.policies.data.FailureDomain; import org.apache.pulsar.common.policies.data.NamespaceIsolationData; @@ -208,6 +209,42 @@ public interface Clusters { */ CompletableFuture updatePeerClusterNamesAsync(String cluster, LinkedHashSet peerClusterNames); + /** + * Update the configuration for a cluster migration. + *
<p/>
    + * This operation requires Pulsar super-user privileges. + * + * @param cluster + * Cluster name + * @param migrated + * is cluster migrated + * @param clusterUrl + * the cluster url object + * + * @throws NotAuthorizedException + * You don't have admin permission to create the cluster + * @throws NotFoundException + * Cluster doesn't exist + * @throws PulsarAdminException + * Unexpected error + */ + void updateClusterMigration(String cluster, boolean migrated, ClusterUrl clusterUrl) throws PulsarAdminException; + + /** + * Update the configuration for a cluster migration asynchronously. + *
<p/>
    + * This operation requires Pulsar super-user privileges. + * + * @param cluster + * Cluster name + * @param migrated + * is cluster migrated + * @param clusterUrl + * the cluster url object + * + */ + CompletableFuture updateClusterMigrationAsync(String cluster, boolean migrated, ClusterUrl clusterUrl); + /** * Get peer-cluster names. *
<p/>
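
Once a cluster is flagged as migrated, the redirect target can be read back through the existing `GET /admin/v2/clusters/{cluster}` endpoint, since `ClusterData` (changed below) now carries the migration state. A sketch, not part of the patch: the `migrated` and `migratedClusterUrl` field names assume default JSON serialization of `ClusterDataImpl`, and the host and cluster names are placeholders.

```python
import requests

# fetch the cluster policy data and inspect the new migration fields
data = requests.get("http://blue-cluster:8080/admin/v2/clusters/blue").json()
if data.get("migrated"):
    print("redirect clients to:", data.get("migratedClusterUrl"))
```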
    diff --git a/pulsar-client-admin-api/src/main/java/org/apache/pulsar/common/policies/data/ClusterData.java b/pulsar-client-admin-api/src/main/java/org/apache/pulsar/common/policies/data/ClusterData.java index 61a90a592a70dd..f8cdf294d96e5e 100644 --- a/pulsar-client-admin-api/src/main/java/org/apache/pulsar/common/policies/data/ClusterData.java +++ b/pulsar-client-admin-api/src/main/java/org/apache/pulsar/common/policies/data/ClusterData.java @@ -19,6 +19,9 @@ package org.apache.pulsar.common.policies.data; import java.util.LinkedHashSet; +import lombok.AllArgsConstructor; +import lombok.Data; +import lombok.NoArgsConstructor; import org.apache.pulsar.client.admin.utils.ReflectionUtils; import org.apache.pulsar.client.api.ProxyProtocol; @@ -57,6 +60,10 @@ public interface ClusterData { String getListenerName(); + boolean isMigrated(); + + ClusterUrl getMigratedClusterUrl(); + interface Builder { Builder serviceUrl(String serviceUrl); @@ -92,6 +99,10 @@ interface Builder { Builder listenerName(String listenerName); + Builder migrated(boolean migrated); + + Builder migratedClusterUrl(ClusterUrl migratedClusterUrl); + ClusterData build(); } @@ -100,4 +111,16 @@ interface Builder { static Builder builder() { return ReflectionUtils.newBuilder("org.apache.pulsar.common.policies.data.ClusterDataImpl"); } + + @Data + @NoArgsConstructor + @AllArgsConstructor + class ClusterUrl { + String brokerServiceUrl; + String brokerServiceUrlTls; + + public boolean isEmpty() { + return brokerServiceUrl == null && brokerServiceUrlTls == null; + } + } } diff --git a/pulsar-client-admin/src/main/java/org/apache/pulsar/client/admin/internal/ClustersImpl.java b/pulsar-client-admin/src/main/java/org/apache/pulsar/client/admin/internal/ClustersImpl.java index b32e3ea684cb77..1d4e3a4f28eeb4 100644 --- a/pulsar-client-admin/src/main/java/org/apache/pulsar/client/admin/internal/ClustersImpl.java +++ b/pulsar-client-admin/src/main/java/org/apache/pulsar/client/admin/internal/ClustersImpl.java @@ -34,6 +34,7 @@ import org.apache.pulsar.common.policies.data.BrokerNamespaceIsolationData; import org.apache.pulsar.common.policies.data.BrokerNamespaceIsolationDataImpl; import org.apache.pulsar.common.policies.data.ClusterData; +import org.apache.pulsar.common.policies.data.ClusterData.ClusterUrl; import org.apache.pulsar.common.policies.data.ClusterDataImpl; import org.apache.pulsar.common.policies.data.FailureDomain; import org.apache.pulsar.common.policies.data.FailureDomainImpl; @@ -106,6 +107,19 @@ public CompletableFuture updatePeerClusterNamesAsync(String cluster, Linke return asyncPostRequest(path, Entity.entity(peerClusterNames, MediaType.APPLICATION_JSON)); } + @Override + public void updateClusterMigration(String cluster, boolean isMigrated, ClusterUrl clusterUrl) + throws PulsarAdminException { + sync(() -> updateClusterMigrationAsync(cluster, isMigrated, clusterUrl)); + } + + @Override + public CompletableFuture updateClusterMigrationAsync(String cluster, boolean isMigrated, + ClusterUrl clusterUrl) { + WebTarget path = adminClusters.path(cluster).path("migrate").queryParam("migrated", isMigrated); + return asyncPostRequest(path, Entity.entity(clusterUrl, MediaType.APPLICATION_JSON)); + } + @Override @SuppressWarnings("unchecked") public Set getPeerClusterNames(String cluster) throws PulsarAdminException { diff --git a/pulsar-client-api/src/main/java/org/apache/pulsar/client/api/PulsarClientException.java b/pulsar-client-api/src/main/java/org/apache/pulsar/client/api/PulsarClientException.java index 
c68c575ec4f3bf..e04d1505977682 100644 --- a/pulsar-client-api/src/main/java/org/apache/pulsar/client/api/PulsarClientException.java +++ b/pulsar-client-api/src/main/java/org/apache/pulsar/client/api/PulsarClientException.java @@ -462,6 +462,22 @@ public TopicTerminatedException(String msg, long sequenceId) { } } + /** + * TopicMigration exception thrown by Pulsar client. + */ + public static class TopicMigrationException extends PulsarClientException { + /** + * Constructs an {@code TopicMigrationException} with the specified detail message. + * + * @param msg + * The detail message (which is saved for later retrieval + * by the {@link #getMessage()} method) + */ + public TopicMigrationException(String msg) { + super(msg); + } + } + /** * Producer fenced exception thrown by Pulsar client. */ diff --git a/pulsar-client-tools/src/main/java/org/apache/pulsar/admin/cli/CmdClusters.java b/pulsar-client-tools/src/main/java/org/apache/pulsar/admin/cli/CmdClusters.java index b412f30131eaf3..12def9a9a96c17 100644 --- a/pulsar-client-tools/src/main/java/org/apache/pulsar/admin/cli/CmdClusters.java +++ b/pulsar-client-tools/src/main/java/org/apache/pulsar/admin/cli/CmdClusters.java @@ -31,6 +31,7 @@ import org.apache.pulsar.client.admin.PulsarAdminException; import org.apache.pulsar.client.api.ProxyProtocol; import org.apache.pulsar.common.policies.data.ClusterData; +import org.apache.pulsar.common.policies.data.ClusterData.ClusterUrl; import org.apache.pulsar.common.policies.data.ClusterDataImpl; import org.apache.pulsar.common.policies.data.FailureDomain; import org.apache.pulsar.common.policies.data.FailureDomainImpl; @@ -142,6 +143,28 @@ void run() throws PulsarAdminException { } } + @Parameters(commandDescription = "Update cluster migration") + private class UpdateClusterMigration extends CliCommand { + @Parameter(description = "cluster-name", required = true) + private java.util.List params; + + @Parameter(names = "--migrated", description = "Is cluster migrated", required = true) + private boolean migrated; + + @Parameter(names = "--broker-url", description = "New migrated cluster broker service url", required = false) + private String brokerServiceUrl; + + @Parameter(names = "--broker-url-secure", description = "New migrated cluster broker service url secure", + required = false) + private String brokerServiceUrlTls; + + void run() throws PulsarAdminException { + String cluster = getOneArgument(params); + ClusterUrl clusterUrl = new ClusterUrl(brokerServiceUrl, brokerServiceUrlTls); + getAdmin().clusters().updateClusterMigration(cluster, migrated, clusterUrl); + } + } + @Parameters(commandDescription = "Get list of peer-clusters") private class GetPeerClusters extends CliCommand { @@ -401,6 +424,7 @@ public CmdClusters(Supplier admin) { jcommander.addCommand("delete", new Delete()); jcommander.addCommand("list", new List()); jcommander.addCommand("update-peer-clusters", new UpdatePeerClusters()); + jcommander.addCommand("update-cluster-migration", new UpdateClusterMigration()); jcommander.addCommand("get-peer-clusters", new GetPeerClusters()); jcommander.addCommand("get-failure-domain", new GetFailureDomain()); jcommander.addCommand("create-failure-domain", new CreateFailureDomain()); diff --git a/pulsar-client/src/main/java/org/apache/pulsar/client/impl/ClientCnx.java b/pulsar-client/src/main/java/org/apache/pulsar/client/impl/ClientCnx.java index 14a33cd3203e19..a40b80727c875e 100644 --- a/pulsar-client/src/main/java/org/apache/pulsar/client/impl/ClientCnx.java +++ 
b/pulsar-client/src/main/java/org/apache/pulsar/client/impl/ClientCnx.java @@ -33,6 +33,7 @@ import io.netty.util.concurrent.Promise; import java.net.InetSocketAddress; import java.net.SocketAddress; +import java.net.URISyntaxException; import java.nio.channels.ClosedChannelException; import java.util.Arrays; import java.util.List; @@ -87,6 +88,8 @@ import org.apache.pulsar.common.api.proto.CommandSendReceipt; import org.apache.pulsar.common.api.proto.CommandSuccess; import org.apache.pulsar.common.api.proto.CommandTcClientConnectResponse; +import org.apache.pulsar.common.api.proto.CommandTopicMigrated; +import org.apache.pulsar.common.api.proto.CommandTopicMigrated.ResourceType; import org.apache.pulsar.common.api.proto.CommandWatchTopicListSuccess; import org.apache.pulsar.common.api.proto.CommandWatchTopicUpdate; import org.apache.pulsar.common.api.proto.ServerError; @@ -659,6 +662,26 @@ protected void handleReachedEndOfTopic(CommandReachedEndOfTopic commandReachedEn } } + @Override + protected void handleTopicMigrated(CommandTopicMigrated commandTopicMigrated) { + final long resourceId = commandTopicMigrated.getResourceId(); + final String serviceUrl = commandTopicMigrated.getBrokerServiceUrl(); + final String serviceUrlTls = commandTopicMigrated.getBrokerServiceUrlTls(); + + HandlerState resource = commandTopicMigrated.getResourceType() == ResourceType.Producer + ? producers.get(resourceId) + : consumers.get(resourceId); + log.info("{} is migrated to {}/{}", commandTopicMigrated.getResourceType().name(), serviceUrl, serviceUrlTls); + if (resource != null) { + try { + resource.setRedirectedClusterURI(serviceUrl, serviceUrlTls); + } catch (URISyntaxException e) { + log.info("[{}] Invalid redirect url {}/{} for {}", remoteAddress, serviceUrl, serviceUrlTls, + resourceId); + } + } + } + // caller of this method needs to be protected under pendingLookupRequestSemaphore private void addPendingLookupRequests(long requestId, TimedCompletableFuture future) { pendingRequests.put(requestId, future); diff --git a/pulsar-client/src/main/java/org/apache/pulsar/client/impl/ConnectionHandler.java b/pulsar-client/src/main/java/org/apache/pulsar/client/impl/ConnectionHandler.java index 6c5a5be200fb51..4d74e560b463f4 100644 --- a/pulsar-client/src/main/java/org/apache/pulsar/client/impl/ConnectionHandler.java +++ b/pulsar-client/src/main/java/org/apache/pulsar/client/impl/ConnectionHandler.java @@ -18,6 +18,7 @@ */ package org.apache.pulsar.client.impl; +import java.net.InetSocketAddress; import java.util.concurrent.CompletableFuture; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicLongFieldUpdater; @@ -71,7 +72,11 @@ protected void grabCnx() { try { CompletableFuture cnxFuture; - if (state.topic == null) { + if (state.redirectedClusterURI != null) { + InetSocketAddress address = InetSocketAddress.createUnresolved(state.redirectedClusterURI.getHost(), + state.redirectedClusterURI.getPort()); + cnxFuture = state.client.getConnection(address, address); + } else if (state.topic == null) { cnxFuture = state.client.getConnectionToServiceUrl(); } else { cnxFuture = state.client.getConnection(state.topic); // diff --git a/pulsar-client/src/main/java/org/apache/pulsar/client/impl/HandlerState.java b/pulsar-client/src/main/java/org/apache/pulsar/client/impl/HandlerState.java index 822ba411b711c5..6489369ed3bed1 100644 --- a/pulsar-client/src/main/java/org/apache/pulsar/client/impl/HandlerState.java +++ b/pulsar-client/src/main/java/org/apache/pulsar/client/impl/HandlerState.java @@ 
-18,12 +18,16 @@ */ package org.apache.pulsar.client.impl; +import java.net.URI; +import java.net.URISyntaxException; import java.util.concurrent.atomic.AtomicReferenceFieldUpdater; import java.util.function.UnaryOperator; +import org.apache.commons.lang3.StringUtils; abstract class HandlerState { protected final PulsarClientImpl client; protected final String topic; + protected volatile URI redirectedClusterURI; private static final AtomicReferenceFieldUpdater STATE_UPDATER = AtomicReferenceFieldUpdater.newUpdater(HandlerState.class, State.class, "state"); @@ -49,6 +53,11 @@ public HandlerState(PulsarClientImpl client, String topic) { STATE_UPDATER.set(this, State.Uninitialized); } + protected void setRedirectedClusterURI(String serviceUrl, String serviceUrlTls) throws URISyntaxException { + String url = client.conf.isUseTls() && StringUtils.isNotBlank(serviceUrlTls) ? serviceUrlTls : serviceUrl; + this.redirectedClusterURI = new URI(url); + } + // moves the state to ready if it wasn't closed protected boolean changeToReadyState() { if (STATE_UPDATER.get(this) == State.Ready) { diff --git a/pulsar-common/pom.xml b/pulsar-common/pom.xml index e780a84d502297..30a250423605db 100644 --- a/pulsar-common/pom.xml +++ b/pulsar-common/pom.xml @@ -162,6 +162,11 @@ provided true + + + com.google.protobuf + protobuf-java + diff --git a/pulsar-common/src/main/java/org/apache/pulsar/common/policies/data/ClusterDataImpl.java b/pulsar-common/src/main/java/org/apache/pulsar/common/policies/data/ClusterDataImpl.java index 44cbfb4f35bef7..006cce1a9c1811 100644 --- a/pulsar-common/src/main/java/org/apache/pulsar/common/policies/data/ClusterDataImpl.java +++ b/pulsar-common/src/main/java/org/apache/pulsar/common/policies/data/ClusterDataImpl.java @@ -142,6 +142,17 @@ public final class ClusterDataImpl implements ClusterData, Cloneable { example = "" ) private String listenerName; + @ApiModelProperty( + name = "migrated", + value = "flag to check if cluster is migrated to different cluster", + example = "true/false" + ) + private boolean migrated; + @ApiModelProperty( + name = "migratedClusterUrl", + value = "url of cluster where current cluster is migrated" + ) + private ClusterUrl migratedClusterUrl; public static ClusterDataImplBuilder builder() { return new ClusterDataImplBuilder(); @@ -188,6 +199,8 @@ public static class ClusterDataImplBuilder implements ClusterData.Builder { private String brokerClientTlsTrustStorePassword; private String brokerClientTrustCertsFilePath; private String listenerName; + private boolean migrated; + private ClusterUrl migratedClusterUrl; ClusterDataImplBuilder() { } @@ -277,6 +290,16 @@ public ClusterDataImplBuilder listenerName(String listenerName) { return this; } + public ClusterDataImplBuilder migrated(boolean migrated) { + this.migrated = migrated; + return this; + } + + public ClusterDataImplBuilder migratedClusterUrl(ClusterUrl migratedClusterUrl) { + this.migratedClusterUrl = migratedClusterUrl; + return this; + } + public ClusterDataImpl build() { return new ClusterDataImpl( serviceUrl, @@ -295,7 +318,9 @@ public ClusterDataImpl build() { brokerClientTlsTrustStore, brokerClientTlsTrustStorePassword, brokerClientTrustCertsFilePath, - listenerName); + listenerName, + migrated, + migratedClusterUrl); } } } diff --git a/pulsar-common/src/main/java/org/apache/pulsar/common/protocol/Commands.java b/pulsar-common/src/main/java/org/apache/pulsar/common/protocol/Commands.java index 0ebb2705d3ad3b..cd88bb794e62f0 100644 --- 
a/pulsar-common/src/main/java/org/apache/pulsar/common/protocol/Commands.java +++ b/pulsar-common/src/main/java/org/apache/pulsar/common/protocol/Commands.java @@ -86,6 +86,7 @@ import org.apache.pulsar.common.api.proto.CommandSubscribe.InitialPosition; import org.apache.pulsar.common.api.proto.CommandSubscribe.SubType; import org.apache.pulsar.common.api.proto.CommandTcClientConnectResponse; +import org.apache.pulsar.common.api.proto.CommandTopicMigrated.ResourceType; import org.apache.pulsar.common.api.proto.FeatureFlags; import org.apache.pulsar.common.api.proto.IntRange; import org.apache.pulsar.common.api.proto.KeySharedMeta; @@ -739,6 +740,16 @@ public static ByteBuf newReachedEndOfTopic(long consumerId) { return serializeWithSize(cmd); } + public static ByteBuf newTopicMigrated(ResourceType type, long resourceId, String brokerUrl, String brokerUrlTls) { + BaseCommand cmd = localCmd(Type.TOPIC_MIGRATED); + cmd.setTopicMigrated() + .setResourceType(type) + .setResourceId(resourceId) + .setBrokerServiceUrl(brokerUrl) + .setBrokerServiceUrlTls(brokerUrlTls); + return serializeWithSize(cmd); + } + public static ByteBuf newCloseProducer(long producerId, long requestId) { BaseCommand cmd = localCmd(Type.CLOSE_PRODUCER); cmd.setCloseProducer() diff --git a/pulsar-common/src/main/java/org/apache/pulsar/common/protocol/PulsarDecoder.java b/pulsar-common/src/main/java/org/apache/pulsar/common/protocol/PulsarDecoder.java index d0b261034610a1..2bd615d2ce4b24 100644 --- a/pulsar-common/src/main/java/org/apache/pulsar/common/protocol/PulsarDecoder.java +++ b/pulsar-common/src/main/java/org/apache/pulsar/common/protocol/PulsarDecoder.java @@ -76,6 +76,7 @@ import org.apache.pulsar.common.api.proto.CommandSuccess; import org.apache.pulsar.common.api.proto.CommandTcClientConnectRequest; import org.apache.pulsar.common.api.proto.CommandTcClientConnectResponse; +import org.apache.pulsar.common.api.proto.CommandTopicMigrated; import org.apache.pulsar.common.api.proto.CommandUnsubscribe; import org.apache.pulsar.common.api.proto.CommandWatchTopicList; import org.apache.pulsar.common.api.proto.CommandWatchTopicListClose; @@ -294,6 +295,11 @@ public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception handleReachedEndOfTopic(cmd.getReachedEndOfTopic()); break; + case TOPIC_MIGRATED: + checkArgument(cmd.hasTopicMigrated()); + handleTopicMigrated(cmd.getTopicMigrated()); + break; + case GET_LAST_MESSAGE_ID: checkArgument(cmd.hasGetLastMessageId()); handleGetLastMessageId(cmd.getGetLastMessageId()); @@ -600,6 +606,10 @@ protected void handleReachedEndOfTopic(CommandReachedEndOfTopic commandReachedEn throw new UnsupportedOperationException(); } + protected void handleTopicMigrated(CommandTopicMigrated commandMigratedTopic) { + throw new UnsupportedOperationException(); + } + protected void handleGetLastMessageId(CommandGetLastMessageId getLastMessageId) { throw new UnsupportedOperationException(); } diff --git a/pulsar-common/src/main/proto/PulsarApi.proto b/pulsar-common/src/main/proto/PulsarApi.proto index 7be65224f0cf3d..acf75eab858263 100644 --- a/pulsar-common/src/main/proto/PulsarApi.proto +++ b/pulsar-common/src/main/proto/PulsarApi.proto @@ -262,6 +262,7 @@ enum ProtocolVersion { v17 = 17; // Added support ack receipt v18 = 18; // Add client support for broker entry metadata v19 = 19; // Add CommandTcClientConnectRequest and CommandTcClientConnectResponse + v20 = 20; // Add client support for topic migration redirection CommandTopicMigrated } message CommandConnect { @@ -620,6 
+621,19 @@ message CommandReachedEndOfTopic {
     required uint64 consumer_id = 1;
 }
 
+message CommandTopicMigrated {
+    enum ResourceType {
+        Producer = 0;
+        Consumer = 1;
+    }
+    required uint64 resource_id = 1;
+    required ResourceType resource_type = 2;
+    optional string brokerServiceUrl = 3;
+    optional string brokerServiceUrlTls = 4;
+
+}
+
+
 message CommandCloseProducer {
     required uint64 producer_id = 1;
     required uint64 request_id = 2;
@@ -1025,6 +1039,7 @@ message BaseCommand {
         WATCH_TOPIC_UPDATE = 66;
         WATCH_TOPIC_LIST_CLOSE = 67;
+        TOPIC_MIGRATED = 68;
     }
 
@@ -1106,4 +1121,6 @@ message BaseCommand {
     optional CommandWatchTopicListSuccess watchTopicListSuccess = 65;
     optional CommandWatchTopicUpdate watchTopicUpdate = 66;
     optional CommandWatchTopicListClose watchTopicListClose = 67;
+
+    optional CommandTopicMigrated topicMigrated = 68;
 }
diff --git a/site2/docs/reference-pulsar-admin.md b/site2/docs/reference-pulsar-admin.md
index cbd4005eaa30a8..6115182419544d 100644
--- a/site2/docs/reference-pulsar-admin.md
+++ b/site2/docs/reference-pulsar-admin.md
@@ -321,6 +321,22 @@ Options
 |`--url`|service-url||
 |`--url-secure`|service-url for secure connection||
 
+### `update-cluster-migration`
+Update the migration state of a cluster and the broker service URLs that clients are redirected to
+
+Usage
+
+```bash
+pulsar-admin clusters update-cluster-migration cluster-name options
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`--migrated`|Whether the cluster has been migrated.||
+|`--broker-url`|The broker service URL of the new (migrated) cluster.||
+|`--broker-url-secure`|The broker service URL of the new (migrated) cluster for a secure connection.||
 
 ### `delete`
 Deletes an existing cluster
diff --git a/tiered-storage/jcloud/src/test/java/org/apache/bookkeeper/mledger/offload/jcloud/impl/MockManagedLedger.java b/tiered-storage/jcloud/src/test/java/org/apache/bookkeeper/mledger/offload/jcloud/impl/MockManagedLedger.java
index 730fbc90d2c8e9..f838b06473a743 100644
--- a/tiered-storage/jcloud/src/test/java/org/apache/bookkeeper/mledger/offload/jcloud/impl/MockManagedLedger.java
+++ b/tiered-storage/jcloud/src/test/java/org/apache/bookkeeper/mledger/offload/jcloud/impl/MockManagedLedger.java
@@ -380,4 +380,16 @@ public void checkInactiveLedgerAndRollOver() {
     public void checkCursorsToCacheEntries() {
         // no-op
     }
+
+    @Override
+    public CompletableFuture asyncMigrate() {
+        // no-op
+        return null;
+    }
+
+    @Override
+    public boolean isMigrated() {
+        // no-op
+        return false;
+    }
 }

From cbf5cf58196b05a35b31385c6a42098c598c9b33 Mon Sep 17 00:00:00 2001
From: Matteo Merli
Date: Wed, 19 Oct 2022 21:22:16 -0700
Subject: [PATCH 26/28] [fix] The Pulsar standalone bookie is not getting
 passed the config from standalone.conf (#18126)

---
 .../main/java/org/apache/pulsar/PulsarStandalone.java |  4 ++++
 .../apache/pulsar/metadata/bookkeeper/BKCluster.java  | 10 +++++++++-
 2 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/pulsar-broker/src/main/java/org/apache/pulsar/PulsarStandalone.java b/pulsar-broker/src/main/java/org/apache/pulsar/PulsarStandalone.java
index ce46d460d5269d..e0ebcc7657c2c1 100644
--- a/pulsar-broker/src/main/java/org/apache/pulsar/PulsarStandalone.java
+++ b/pulsar-broker/src/main/java/org/apache/pulsar/PulsarStandalone.java
@@ -441,7 +441,11 @@ private void startBookieWithMetadataStore() throws Exception {
         } else {
             log.info("Starting BK with metadata store:", metadataStoreUrl);
         }
+
+        ServerConfiguration bkServerConf = new ServerConfiguration();
+        bkServerConf.loadConf(new File(configFile).toURI().toURL());
         bkCluster = 
BKCluster.builder() + .baseServerConfiguration(bkServerConf) .metadataServiceUri(metadataStoreUrl) .bkPort(bkPort) .numBookies(numOfBk) diff --git a/pulsar-metadata/src/main/java/org/apache/pulsar/metadata/bookkeeper/BKCluster.java b/pulsar-metadata/src/main/java/org/apache/pulsar/metadata/bookkeeper/BKCluster.java index d845f912d2e186..6505868331fac2 100644 --- a/pulsar-metadata/src/main/java/org/apache/pulsar/metadata/bookkeeper/BKCluster.java +++ b/pulsar-metadata/src/main/java/org/apache/pulsar/metadata/bookkeeper/BKCluster.java @@ -71,6 +71,8 @@ public class BKCluster implements AutoCloseable { protected final ClientConfiguration baseClientConf; public static class BKClusterConf { + + private ServerConfiguration baseServerConfiguration; private String metadataServiceUri; private int numBookies = 1; private String dataDir; @@ -78,6 +80,11 @@ public static class BKClusterConf { private boolean clearOldData; + public BKClusterConf baseServerConfiguration(ServerConfiguration baseServerConfiguration) { + this.baseServerConfiguration = baseServerConfiguration; + return this; + } + public BKClusterConf metadataServiceUri(String metadataServiceUri) { this.metadataServiceUri = metadataServiceUri; return this; @@ -115,7 +122,8 @@ public static BKClusterConf builder() { private BKCluster(BKClusterConf bkClusterConf) throws Exception { this.clusterConf = bkClusterConf; - this.baseConf = newBaseServerConfiguration(); + this.baseConf = bkClusterConf.baseServerConfiguration != null + ? bkClusterConf.baseServerConfiguration : newBaseServerConfiguration(); this.baseClientConf = newBaseClientConfiguration(); this.store = From e5b3ffde5251dcf53d025a5a24439202cb48eb43 Mon Sep 17 00:00:00 2001 From: fengyubiao Date: Thu, 20 Oct 2022 13:18:18 +0800 Subject: [PATCH 27/28] [fix][io]Format JDBCUtils.java to make the compile success (#18122) --- .../src/main/java/org/apache/pulsar/io/jdbc/JdbcUtils.java | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/pulsar-io/jdbc/core/src/main/java/org/apache/pulsar/io/jdbc/JdbcUtils.java b/pulsar-io/jdbc/core/src/main/java/org/apache/pulsar/io/jdbc/JdbcUtils.java index f2907b6eedb8c4..02e39f145eaa59 100644 --- a/pulsar-io/jdbc/core/src/main/java/org/apache/pulsar/io/jdbc/JdbcUtils.java +++ b/pulsar-io/jdbc/core/src/main/java/org/apache/pulsar/io/jdbc/JdbcUtils.java @@ -121,8 +121,8 @@ public static TableDefinition getTableDefinition( TableDefinition table = TableDefinition.of( tableId, Lists.newArrayList(), Lists.newArrayList(), Lists.newArrayList()); - keyList = keyList == null ? Collections.emptyList(): keyList; - nonKeyList = nonKeyList == null ? Collections.emptyList(): nonKeyList; + keyList = keyList == null ? Collections.emptyList() : keyList; + nonKeyList = nonKeyList == null ? 
Collections.emptyList() : nonKeyList;
 
         try (ResultSet rs = connection.getMetaData().getColumns(
                 tableId.getCatalogName(),

From 7b52a92bbb4dca4976233f2e99da289c38856a1a Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Nicol=C3=B2=20Boschi?=
Date: Thu, 20 Oct 2022 09:48:21 +0200
Subject: [PATCH 28/28] [improve][io] JDBC sinks: implement JDBC Batch API
 (#18017)

* [improve][io] JDBC sinks: implement JDBC Batch API

* more tests and transactions support

* remove .db files

* doc

* fix batch results and thread safety

* add next flush test - fix doc - improve code readability
---
 .../pulsar/io/jdbc/JdbcAbstractSink.java      | 174 +++++++---
 .../apache/pulsar/io/jdbc/JdbcSinkConfig.java |  20 +-
 .../io/jdbc/SqliteJdbcSinkBatchTest.java      |  35 ++
 .../pulsar/io/jdbc/SqliteJdbcSinkTest.java    | 307 +++++++++++++-----
 .../apache/pulsar/io/jdbc/SqliteUtils.java    |  17 +
 site2/docs/io-jdbc-sink.md                    |  27 +-
 6 files changed, 433 insertions(+), 147 deletions(-)
 create mode 100644 pulsar-io/jdbc/sqlite/src/test/java/org/apache/pulsar/io/jdbc/SqliteJdbcSinkBatchTest.java

diff --git a/pulsar-io/jdbc/core/src/main/java/org/apache/pulsar/io/jdbc/JdbcAbstractSink.java b/pulsar-io/jdbc/core/src/main/java/org/apache/pulsar/io/jdbc/JdbcAbstractSink.java
index 74a19b7b187b30..06beaaacf9e248 100644
--- a/pulsar-io/jdbc/core/src/main/java/org/apache/pulsar/io/jdbc/JdbcAbstractSink.java
+++ b/pulsar-io/jdbc/core/src/main/java/org/apache/pulsar/io/jdbc/JdbcAbstractSink.java
@@ -23,7 +23,13 @@
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.util.ArrayList;
 import java.util.Arrays;
+import java.util.Deque;
+import java.util.HashMap;
+import java.util.LinkedList;
 import java.util.List;
 import java.util.Map;
 import java.util.Properties;
@@ -32,6 +38,7 @@
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.function.Function;
+import java.util.stream.Collectors;
 import lombok.AllArgsConstructor;
 import lombok.Data;
 import lombok.Getter;
@@ -64,8 +71,7 @@ public abstract class JdbcAbstractSink<T> implements Sink<T> {
     protected JdbcUtils.TableDefinition tableDefinition;
 
     // for flush
-    private List<Record<T>> incomingList;
-    private List<Record<T>> swapList;
+    private Deque<Record<T>> incomingList;
     private AtomicBoolean isFlushing;
     private int batchSize;
     private ScheduledExecutorService flushExecutor;
@@ -73,6 +79,7 @@ public void open(Map<String, Object> config, SinkContext sinkContext) throws Exc
         jdbcSinkConfig = JdbcSinkConfig.load(config);
+        jdbcSinkConfig.validate();
         jdbcUrl = jdbcSinkConfig.getJdbcUrl();
         if (jdbcSinkConfig.getJdbcUrl() == null) {
@@ -100,12 +107,13 @@ public void open(Map<String, Object> config, SinkContext sinkContext) throws Exc
         int timeoutMs = jdbcSinkConfig.getTimeoutMs();
         batchSize = jdbcSinkConfig.getBatchSize();
-        incomingList = Lists.newArrayList();
-        swapList = Lists.newArrayList();
+        incomingList = new LinkedList<>();
         isFlushing = new AtomicBoolean(false);
 
         flushExecutor = Executors.newScheduledThreadPool(1);
-        flushExecutor.scheduleAtFixedRate(this::flush, timeoutMs, timeoutMs, TimeUnit.MILLISECONDS);
+        if (timeoutMs > 0) {
+            flushExecutor.scheduleAtFixedRate(this::flush, timeoutMs, timeoutMs, TimeUnit.MILLISECONDS);
+        }
     }
 
     private void initStatement() throws Exception {
@@ -173,11 +181,14 @@ public void close() throws Exception {
 
     @Override
     public void write(Record<T> record) throws Exception {
         int number;
-        synchronized (this) {
+        synchronized 
(incomingList) { incomingList.add(record); number = incomingList.size(); } - if (number == batchSize) { + if (batchSize > 0 && number >= batchSize) { + if (log.isDebugEnabled()) { + log.debug("flushing by batches, hit batch size {}", batchSize); + } flushExecutor.schedule(this::flush, 0, TimeUnit.MILLISECONDS); } } @@ -220,49 +231,46 @@ protected enum MutationType { private void flush() { - // if not in flushing state, do flush, else return; if (incomingList.size() > 0 && isFlushing.compareAndSet(false, true)) { - if (log.isDebugEnabled()) { - log.debug("Starting flush, queue size: {}", incomingList.size()); - } - if (!swapList.isEmpty()) { - throw new IllegalStateException("swapList should be empty since last flush. swapList.size: " - + swapList.size()); - } - synchronized (this) { - List> tmpList; - swapList.clear(); + boolean needAnotherRound; + final Deque> swapList = new LinkedList<>(); + + synchronized (incomingList) { + if (log.isDebugEnabled()) { + log.debug("Starting flush, queue size: {}", incomingList.size()); + } + final int actualBatchSize = batchSize > 0 ? Math.min(incomingList.size(), batchSize) : + incomingList.size(); - tmpList = swapList; - swapList = incomingList; - incomingList = tmpList; + for (int i = 0; i < actualBatchSize; i++) { + swapList.add(incomingList.removeFirst()); + } + needAnotherRound = batchSize > 0 && !incomingList.isEmpty() && incomingList.size() >= batchSize; } + long start = System.nanoTime(); int count = 0; try { + PreparedStatement currentBatch = null; + final List mutations = swapList + .stream() + .map(this::createMutation) + .collect(Collectors.toList()); // bind each record value - for (Record record : swapList) { - final Mutation mutation = createMutation(record); + PreparedStatement statement; + for (Mutation mutation : mutations) { switch (mutation.getType()) { case DELETE: - bindValue(deleteStatement, mutation); - count += 1; - deleteStatement.execute(); + statement = deleteStatement; break; case UPDATE: - bindValue(updateStatement, mutation); - count += 1; - updateStatement.execute(); + statement = updateStatement; break; case INSERT: - bindValue(insertStatement, mutation); - count += 1; - insertStatement.execute(); + statement = insertStatement; break; case UPSERT: - bindValue(upsertStatement, mutation); - count += 1; - upsertStatement.execute(); + statement = upsertStatement; break; default: String msg = String.format( @@ -270,13 +278,34 @@ private void flush() { mutation.getType(), Arrays.toString(MutationType.values()), MutationType.INSERT); throw new IllegalArgumentException(msg); } + bindValue(statement, mutation); + count += 1; + if (jdbcSinkConfig.isUseJdbcBatch()) { + if (currentBatch != null && statement != currentBatch) { + internalFlushBatch(swapList, currentBatch, count, start); + start = System.nanoTime(); + } + statement.addBatch(); + currentBatch = statement; + } else { + statement.execute(); + if (!jdbcSinkConfig.isUseTransactions()) { + swapList.removeFirst().ack(); + } + } } - if (jdbcSinkConfig.isUseTransactions()) { - connection.commit(); + + if (jdbcSinkConfig.isUseJdbcBatch()) { + internalFlushBatch(swapList, currentBatch, count, start); + } else { + internalFlush(swapList); } - swapList.forEach(Record::ack); } catch (Exception e) { - log.error("Got exception {}", e.getMessage(), e); + log.error("Got exception {} after {} ms, failing {} messages", + e.getMessage(), + (System.nanoTime() - start) / 1000 / 1000, + swapList.size(), + e); swapList.forEach(Record::fail); try { if (jdbcSinkConfig.isUseTransactions()) { @@ 
-287,16 +316,10 @@ private void flush() { } } - if (swapList.size() != count) { - log.error("Update count {} not match total number of records {}", count, swapList.size()); - } - - // finish flush - if (log.isDebugEnabled()) { - log.debug("Finish flush, queue size: {}", swapList.size()); - } - swapList.clear(); isFlushing.set(false); + if (needAnotherRound) { + flush(); + } } else { if (log.isDebugEnabled()) { log.debug("Already in flushing state, will not flush, queue size: {}", incomingList.size()); @@ -304,4 +327,59 @@ private void flush() { } } + private void internalFlush(Deque> swapList) throws SQLException { + if (jdbcSinkConfig.isUseTransactions()) { + connection.commit(); + swapList.forEach(Record::ack); + } + } + + private void internalFlushBatch(Deque> swapList, PreparedStatement currentBatch, int count, long start) throws SQLException { + executeBatch(swapList, currentBatch); + if (log.isDebugEnabled()) { + log.debug("Flushed {} messages in {} ms", count, (System.nanoTime() - start) / 1000 / 1000); + } + } + + private void executeBatch(Deque> swapList, PreparedStatement statement) throws SQLException { + final int[] results = statement.executeBatch(); + Map failuresMapping = null; + final boolean useTransactions = jdbcSinkConfig.isUseTransactions(); + + for (int r: results) { + if (isBatchItemFailed(r)) { + if (failuresMapping == null) { + failuresMapping = new HashMap<>(); + } + final Integer current = failuresMapping.computeIfAbsent(r, code -> 1); + failuresMapping.put(r, current + 1); + } + } + if (failuresMapping == null || failuresMapping.isEmpty()) { + if (useTransactions) { + connection.commit(); + } + for (int r: results) { + swapList.removeFirst().ack(); + } + } else { + if (useTransactions) { + connection.rollback(); + } + for (int r: results) { + swapList.removeFirst().fail(); + } + String msg = "Batch failed, got error results (error_code->count): " + failuresMapping; + // throwing an exception here means the main loop cycle will nack the messages in the next batch + throw new SQLException(msg); + } + } + + private static boolean isBatchItemFailed(int returnCode) { + if (returnCode == Statement.SUCCESS_NO_INFO || returnCode >= 0) { + return false; + } + return true; + } + } diff --git a/pulsar-io/jdbc/core/src/main/java/org/apache/pulsar/io/jdbc/JdbcSinkConfig.java b/pulsar-io/jdbc/core/src/main/java/org/apache/pulsar/io/jdbc/JdbcSinkConfig.java index 74f339bd44346d..ac9a36be796eb6 100644 --- a/pulsar-io/jdbc/core/src/main/java/org/apache/pulsar/io/jdbc/JdbcSinkConfig.java +++ b/pulsar-io/jdbc/core/src/main/java/org/apache/pulsar/io/jdbc/JdbcSinkConfig.java @@ -87,15 +87,24 @@ public class JdbcSinkConfig implements Serializable { @FieldDoc( required = false, defaultValue = "500", - help = "The jdbc operation timeout in milliseconds" + help = "Enable batch mode by time. After timeoutMs milliseconds the operations queue will be flushed." ) private int timeoutMs = 500; @FieldDoc( required = false, defaultValue = "200", - help = "The batch size of updates made to the database" + help = "Enable batch mode by number of operations. This value is the max number of operations " + + "batched in the same transaction/batch." ) private int batchSize = 200; + + @FieldDoc( + required = false, + defaultValue = "false", + help = "Use the JDBC batch API. This option is suggested to improve write performance." 
+ ) + private boolean useJdbcBatch = false; + @FieldDoc( required = false, defaultValue = "true", @@ -141,4 +150,11 @@ public static JdbcSinkConfig load(Map map) throws IOException { ObjectMapper mapper = new ObjectMapper(); return mapper.readValue(new ObjectMapper().writeValueAsString(map), JdbcSinkConfig.class); } + + public void validate() { + if (timeoutMs <= 0 && batchSize <= 0) { + throw new IllegalArgumentException("timeoutMs or batchSize must be set to a positive value."); + } + } + } diff --git a/pulsar-io/jdbc/sqlite/src/test/java/org/apache/pulsar/io/jdbc/SqliteJdbcSinkBatchTest.java b/pulsar-io/jdbc/sqlite/src/test/java/org/apache/pulsar/io/jdbc/SqliteJdbcSinkBatchTest.java new file mode 100644 index 00000000000000..012b37bacec955 --- /dev/null +++ b/pulsar-io/jdbc/sqlite/src/test/java/org/apache/pulsar/io/jdbc/SqliteJdbcSinkBatchTest.java @@ -0,0 +1,35 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.apache.pulsar.io.jdbc; + +import lombok.extern.slf4j.Slf4j; +import java.util.Map; + +/** + * Jdbc Sink test with JDBC Batches API enabled + */ +@Slf4j +public class SqliteJdbcSinkBatchTest extends SqliteJdbcSinkTest { + + @Override + protected void configure(Map configuration) { + configuration.put("useJdbcBatch", "true"); + } +} diff --git a/pulsar-io/jdbc/sqlite/src/test/java/org/apache/pulsar/io/jdbc/SqliteJdbcSinkTest.java b/pulsar-io/jdbc/sqlite/src/test/java/org/apache/pulsar/io/jdbc/SqliteJdbcSinkTest.java index 030a9b4187bea1..38cdc29a1c9e00 100644 --- a/pulsar-io/jdbc/sqlite/src/test/java/org/apache/pulsar/io/jdbc/SqliteJdbcSinkTest.java +++ b/pulsar-io/jdbc/sqlite/src/test/java/org/apache/pulsar/io/jdbc/SqliteJdbcSinkTest.java @@ -25,17 +25,22 @@ import com.google.common.collect.ImmutableMap; import com.google.common.collect.Maps; import java.util.Arrays; +import java.util.HashMap; import java.util.List; import java.util.Map; import java.util.Optional; import java.util.UUID; import java.util.concurrent.CompletableFuture; import java.util.concurrent.TimeUnit; +import java.util.concurrent.TimeoutException; +import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicReference; import lombok.AllArgsConstructor; import lombok.Data; +import lombok.NoArgsConstructor; import lombok.extern.slf4j.Slf4j; import org.apache.avro.util.Utf8; +import org.apache.commons.lang.reflect.FieldUtils; import org.apache.pulsar.client.api.Message; import org.apache.pulsar.client.api.Schema; import org.apache.pulsar.client.api.schema.GenericObject; @@ -45,9 +50,7 @@ import org.apache.pulsar.client.api.schema.SchemaDefinition; import org.apache.pulsar.client.impl.MessageImpl; import org.apache.pulsar.client.impl.schema.AvroSchema; -import org.apache.pulsar.client.impl.schema.JSONSchema; 
import org.apache.pulsar.client.impl.schema.generic.GenericAvroSchema; -import org.apache.pulsar.client.impl.schema.generic.GenericSchemaImpl; import org.apache.pulsar.common.schema.KeyValue; import org.apache.pulsar.common.schema.KeyValueEncodingType; import org.apache.pulsar.common.schema.SchemaType; @@ -72,6 +75,8 @@ public class SqliteJdbcSinkTest { * A Simple class to test jdbc class */ @Data + @NoArgsConstructor + @AllArgsConstructor public static class Foo { private String field1; private String field2; @@ -96,17 +101,26 @@ public void setUp() throws Exception { // prepare data for delete sql String deleteSql = "insert into " + tableName + " values('ValueOfField5', 'ValueOfField5', 5)"; sqliteUtils.execute(deleteSql); - Map conf; + restartSinkWithConfig(null); + } + private void restartSinkWithConfig(Map additional) throws Exception { + if (jdbcSink != null) { + jdbcSink.close(); + } String jdbcUrl = sqliteUtils.sqliteUri(); - conf = Maps.newHashMap(); + Map conf = Maps.newHashMap(); conf.put("jdbcUrl", jdbcUrl); conf.put("tableName", tableName); conf.put("key", "field3"); conf.put("nonKey", "field1,field2"); // change batchSize to 1, to flush on each write. conf.put("batchSize", 1); + if (additional != null) { + conf.putAll(additional); + } + configure(conf); jdbcSink = new SqliteJdbcAutoSchemaSink(); @@ -114,6 +128,9 @@ public void setUp() throws Exception { jdbcSink.open(conf, null); } + protected void configure(Map configuration) { + } + @AfterMethod(alwaysRun = true) public void tearDown() throws Exception { jdbcSink.close(); @@ -121,7 +138,7 @@ public void tearDown() throws Exception { } private void testOpenAndWriteSinkNullValue(Map actionProperties) throws Exception { - Message insertMessage = mock(MessageImpl.class); + CompletableFuture future = new CompletableFuture<>(); GenericSchema genericAvroSchema; // prepare a foo Record Foo insertObj = new Foo(); @@ -129,26 +146,8 @@ private void testOpenAndWriteSinkNullValue(Map actionProperties) // Not setting field2 // Field1 is the key and field3 is used for selecting records insertObj.setField3(3); - AvroSchema schema = AvroSchema.of(SchemaDefinition.builder().withPojo(Foo.class).withAlwaysAllowNull(true).build()); - - byte[] insertBytes = schema.encode(insertObj); - CompletableFuture future = new CompletableFuture<>(); - Record insertRecord = PulsarRecord.builder() - .message(insertMessage) - .topicName("fake_topic_name") - .ackFunction(() -> future.complete(null)) - .build(); - - genericAvroSchema = new GenericAvroSchema(schema.getSchemaInfo()); - when(insertMessage.getValue()).thenReturn(genericAvroSchema.decode(insertBytes)); - when(insertMessage.getProperties()).thenReturn(actionProperties); - log.info("foo:{}, Message.getValue: {}, record.getValue: {}", - insertObj.toString(), - insertMessage.getValue().toString(), - insertRecord.getValue().toString()); - - // write should success. 
- jdbcSink.write(insertRecord); + final Record record = createMockFooRecord(insertObj, actionProperties, future); + jdbcSink.write(record); log.info("executed write"); // sleep to wait backend flush complete future.get(1, TimeUnit.SECONDS); @@ -165,33 +164,16 @@ private void testOpenAndWriteSinkNullValue(Map actionProperties) } private void testOpenAndWriteSinkJson(Map actionProperties) throws Exception { - Message insertMessage = mock(MessageImpl.class); - GenericSchema genericAvroSchema; // prepare a foo Record Foo insertObj = new Foo(); insertObj.setField1("ValueOfField1"); insertObj.setField2("ValueOfField2"); insertObj.setField3(3); - JSONSchema schema = JSONSchema.of(SchemaDefinition.builder().withPojo(Foo.class).withAlwaysAllowNull(true).build()); - - byte[] insertBytes = schema.encode(insertObj); - CompletableFuture future = new CompletableFuture<>(); - Record insertRecord = PulsarRecord.builder() - .message(insertMessage) - .topicName("fake_topic_name") - .ackFunction(() -> future.complete(null)) - .build(); - - GenericSchema decodeSchema = GenericSchemaImpl.of(schema.getSchemaInfo()); - when(insertMessage.getValue()).thenReturn(decodeSchema.decode(insertBytes)); - when(insertMessage.getProperties()).thenReturn(actionProperties); - log.info("foo:{}, Message.getValue: {}, record.getValue: {}", - insertObj.toString(), - insertMessage.getValue().toString(), - insertRecord.getValue().toString()); + CompletableFuture future = new CompletableFuture<>(); + final Record record = createMockFooRecord(insertObj, actionProperties, future); // write should success. - jdbcSink.write(insertRecord); + jdbcSink.write(record); log.info("executed write"); // sleep to wait backend flush complete future.get(1, TimeUnit.SECONDS); @@ -216,26 +198,11 @@ private void testOpenAndWriteSinkNullValueJson(Map actionPropert // Not setting field2 // Field1 is the key and field3 is used for selecting records insertObj.setField3(3); - JSONSchema schema = JSONSchema.of(SchemaDefinition.builder().withPojo(Foo.class).withAlwaysAllowNull(true).build()); - - byte[] insertBytes = schema.encode(insertObj); - CompletableFuture future = new CompletableFuture<>(); - Record insertRecord = PulsarRecord.builder() - .message(insertMessage) - .topicName("fake_topic_name") - .ackFunction(() -> future.complete(null)) - .build(); - - GenericSchema decodeSchema = GenericSchemaImpl.of(schema.getSchemaInfo()); - when(insertMessage.getValue()).thenReturn(decodeSchema.decode(insertBytes)); - when(insertMessage.getProperties()).thenReturn(actionProperties); - log.info("foo:{}, Message.getValue: {}, record.getValue: {}", - insertObj.toString(), - insertMessage.getValue().toString(), - insertRecord.getValue().toString()); + CompletableFuture future = new CompletableFuture<>(); + final Record record = createMockFooRecord(insertObj, actionProperties, future); // write should success. 
- jdbcSink.write(insertRecord); + jdbcSink.write(record); log.info("executed write"); // sleep to wait backend flush complete // sleep to wait backend flush complete @@ -260,26 +227,11 @@ private void testOpenAndWriteSink(Map actionProperties) throws E insertObj.setField1("ValueOfField1"); insertObj.setField2("ValueOfField2"); insertObj.setField3(3); - AvroSchema schema = AvroSchema.of(SchemaDefinition.builder().withPojo(Foo.class).build()); - - byte[] insertBytes = schema.encode(insertObj); - CompletableFuture future = new CompletableFuture<>(); - Record insertRecord = PulsarRecord.builder() - .message(insertMessage) - .topicName("fake_topic_name") - .ackFunction(() -> future.complete(null)) - .build(); - - genericAvroSchema = new GenericAvroSchema(schema.getSchemaInfo()); - when(insertMessage.getValue()).thenReturn(genericAvroSchema.decode(insertBytes)); - when(insertMessage.getProperties()).thenReturn(actionProperties); - log.info("foo:{}, Message.getValue: {}, record.getValue: {}", - insertObj.toString(), - insertMessage.getValue().toString(), - insertRecord.getValue().toString()); + CompletableFuture future = new CompletableFuture<>(); + final Record record = createMockFooRecord(insertObj, actionProperties, future); // write should success. - jdbcSink.write(insertRecord); + jdbcSink.write(record); log.info("executed write"); // sleep to wait backend flush complete future.get(1, TimeUnit.SECONDS); @@ -342,7 +294,7 @@ public void TestUpdateAction() throws Exception { byte[] updateBytes = schema.encode(updateObj); Message updateMessage = mock(MessageImpl.class); - CompletableFuture future = new CompletableFuture<>(); + CompletableFuture future = new CompletableFuture<>(); Record updateRecord = PulsarRecord.builder() .message(updateMessage) .topicName("fake_topic_name") @@ -383,7 +335,7 @@ public void TestDeleteAction() throws Exception { byte[] deleteBytes = schema.encode(deleteObj); Message deleteMessage = mock(MessageImpl.class); - CompletableFuture future = new CompletableFuture<>(); + CompletableFuture future = new CompletableFuture<>(); Record deleteRecord = PulsarRecord.builder() .message(deleteMessage) .topicName("fake_topic_name") @@ -409,6 +361,172 @@ public void TestDeleteAction() throws Exception { Assert.assertEquals(sqliteUtils.select(deleteQuerySql, (resultSet) -> {}), 0); } + @Test + public void testBatchMode() throws Exception { + Map config = new HashMap<>(); + config.put("batchSize", 3); + config.put("timeoutMs", 0); + restartSinkWithConfig(config); + Foo updateObj = new Foo(); + updateObj.setField1("f1"); + updateObj.setField2("f12"); + updateObj.setField3(1); + Map updateProperties = Maps.newHashMap(); + updateProperties.put("ACTION", "INSERT"); + final CompletableFuture futureByEntries1 = new CompletableFuture<>(); + jdbcSink.write(createMockFooRecord(updateObj, updateProperties, futureByEntries1)); + Assert.assertThrows(TimeoutException.class, () -> futureByEntries1.get(1, TimeUnit.SECONDS)); + final CompletableFuture futureByEntries2 = new CompletableFuture<>(); + updateProperties.put("ACTION", "UPDATE"); + updateObj.setField2("f13"); + jdbcSink.write(createMockFooRecord(updateObj, updateProperties, futureByEntries2)); + Assert.assertThrows(TimeoutException.class, () -> futureByEntries1.get(1, TimeUnit.SECONDS)); + Assert.assertThrows(TimeoutException.class, () -> futureByEntries2.get(1, TimeUnit.SECONDS)); + updateObj.setField2("f14"); + final CompletableFuture futureByEntries3 = new CompletableFuture<>(); + jdbcSink.write(createMockFooRecord(updateObj, 
updateProperties, futureByEntries3)); + futureByEntries1.get(1, TimeUnit.SECONDS); + futureByEntries2.get(1, TimeUnit.SECONDS); + futureByEntries3.get(1, TimeUnit.SECONDS); + + config.put("batchSize", 0); + config.put("timeoutMs", TimeUnit.SECONDS.toMillis(3)); + restartSinkWithConfig(config); + final CompletableFuture futureByTime = new CompletableFuture<>(); + jdbcSink.write(createMockFooRecord(updateObj, updateProperties, futureByTime)); + Assert.assertThrows(TimeoutException.class, () -> futureByTime.get(1, TimeUnit.SECONDS)); + futureByTime.get(3, TimeUnit.SECONDS); + + } + + /** + * Verify that if the flush is finished but the incoming records list size is equals + * or greater than the batch size, + * the next flush is immediately triggered (without any other writes). + * @throws Exception + */ + @Test + public void testBatchModeContinueFlushing() throws Exception { + Map config = new HashMap<>(); + config.put("batchSize", 1); + config.put("timeoutMs", 0); + restartSinkWithConfig(config); + // block the auto flushing mechanism + FieldUtils.writeField(jdbcSink, "isFlushing", new AtomicBoolean(true), true); + Foo updateObj = new Foo("f1", "f12", 1); + Map updateProperties = Maps.newHashMap(); + updateProperties.put("ACTION", "INSERT"); + final CompletableFuture futureByEntries1 = new CompletableFuture<>(); + jdbcSink.write(createMockFooRecord(updateObj, updateProperties, futureByEntries1)); + Assert.assertThrows(TimeoutException.class, () -> futureByEntries1.get(1, TimeUnit.SECONDS)); + + FieldUtils.writeField(jdbcSink, "isFlushing", new AtomicBoolean(false), true); + + updateObj = new Foo("f2", "f12", 1); + updateProperties = Maps.newHashMap(); + updateProperties.put("ACTION", "INSERT"); + final CompletableFuture futureByEntries2 = new CompletableFuture<>(); + jdbcSink.write(createMockFooRecord(updateObj, updateProperties, futureByEntries2)); + + futureByEntries1.get(1, TimeUnit.SECONDS); + futureByEntries2.get(1, TimeUnit.SECONDS); + } + + @DataProvider(name = "useTransactions") + public Object[] useTransactions() { + return Arrays.asList(true, false).toArray(); + } + + @Test(dataProvider = "useTransactions") + public void testBatchModeFailures(boolean useTransactions) throws Exception { + jdbcSink.close(); + jdbcSink = null; + sqliteUtils.execute("delete from " + tableName); + restartSinkWithConfig(null); + Foo foo = new Foo("f1", "f2", 1); + Map updateProperties = Maps.newHashMap(); + updateProperties.put("ACTION", "INSERT"); + final CompletableFuture future0 = new CompletableFuture<>(); + jdbcSink.write(createMockFooRecord(foo, updateProperties, future0)); + Assert.assertTrue(future0.get()); + + Map config = new HashMap<>(); + config.put("batchSize", 5); + config.put("useTransactions", useTransactions); + restartSinkWithConfig(config); + + foo = new Foo("f2", "f2", 2); + updateProperties = Maps.newHashMap(); + updateProperties.put("ACTION", "INSERT"); + final CompletableFuture future2 = new CompletableFuture<>(); + jdbcSink.write(createMockFooRecord(foo, updateProperties, future2)); + + foo = new Foo("f3", "f2", 3); + updateProperties = Maps.newHashMap(); + updateProperties.put("ACTION", "INSERT"); + final CompletableFuture future3 = new CompletableFuture<>(); + jdbcSink.write(createMockFooRecord(foo, updateProperties, future3)); + + foo = new Foo("f1", "f21", 11); + updateProperties = Maps.newHashMap(); + updateProperties.put("ACTION", "UPDATE"); + final CompletableFuture future4 = new CompletableFuture<>(); + jdbcSink.write(createMockFooRecord(foo, updateProperties, future4)); 
+ + foo = new Foo("f1", "f2no", 9); + updateProperties = Maps.newHashMap(); + updateProperties.put("ACTION", "INSERT"); + final CompletableFuture future5 = new CompletableFuture<>(); + jdbcSink.write(createMockFooRecord(foo, updateProperties, future5)); + + foo = new Foo("f1", "f3", 5); + updateProperties = Maps.newHashMap(); + updateProperties.put("ACTION", "UPDATE"); + final CompletableFuture future6 = new CompletableFuture<>(); + jdbcSink.write(createMockFooRecord(foo, updateProperties, future6)); + + + if (jdbcSink.jdbcSinkConfig.isUseTransactions()) { + if (jdbcSink.jdbcSinkConfig.isUseJdbcBatch()) { + Assert.assertTrue(future2.get(1, TimeUnit.SECONDS)); + Assert.assertTrue(future3.get(1, TimeUnit.SECONDS)); + Assert.assertTrue(future4.get(1, TimeUnit.SECONDS)); + Assert.assertFalse(future5.get(1, TimeUnit.SECONDS)); + Assert.assertFalse(future6.get(1, TimeUnit.SECONDS)); + final int count = sqliteUtils.select("select field1,field2,field3 from " + + tableName + + " where (field1='f1' and field2='f21') or field1='f2' or field1='f3'", (r) -> {}); + Assert.assertEquals(count, 2); + + } else { + Assert.assertFalse(future2.get(1, TimeUnit.SECONDS)); + Assert.assertFalse(future3.get(1, TimeUnit.SECONDS)); + Assert.assertFalse(future4.get(1, TimeUnit.SECONDS)); + Assert.assertFalse(future5.get(1, TimeUnit.SECONDS)); + Assert.assertFalse(future6.get(1, TimeUnit.SECONDS)); + final int count = sqliteUtils.select("select field1,field2,field3 from " + + tableName + + " where (field1='f1' and field2='f2') or field1='f2' or field1='f3'", (r) -> {}); + Assert.assertEquals(count, 1); + } + } else { + Assert.assertTrue(future2.get(1, TimeUnit.SECONDS)); + Assert.assertTrue(future3.get(1, TimeUnit.SECONDS)); + Assert.assertTrue(future4.get(1, TimeUnit.SECONDS)); + Assert.assertFalse(future5.get(1, TimeUnit.SECONDS)); + Assert.assertFalse(future6.get(1, TimeUnit.SECONDS)); + System.out.println("dump:\n" + sqliteUtils.dump("select field1,field2,field3 from " + tableName)); + + final int count = sqliteUtils.select("select field1,field2,field3 from " + + tableName + + " where (field1='f1' and field2='f21') or field1='f2' or field1='f3'", (r) -> { + log.info("got {};{};{}", r.getString(1), r.getString(2), r.getInt(3)); + }); + Assert.assertEquals(count, 2); + } + } + + private static class MockKeyValueGenericRecord implements Record { public MockKeyValueGenericRecord(Schema> keyValueSchema) { @@ -716,7 +834,7 @@ public void testNullValueAction(NullValueActionTestConfig config) throws Excepti if (key.equals("mykey2")) { Assert.assertEquals(value, "thestring"); } else { - throw new IllegalStateException(); + throw new IllegalStateException("got unexpected key " + key); } }); if (nullValueAction == JdbcSinkConfig.NullValueAction.DELETE) { @@ -731,4 +849,25 @@ public void testNullValueAction(NullValueActionTestConfig config) throws Excepti } } + private Record createMockFooRecord(Foo record, Map actionProperties, + CompletableFuture future) { + Message insertMessage = mock(MessageImpl.class); + GenericSchema genericAvroSchema; + AvroSchema schema = AvroSchema.of(SchemaDefinition.builder().withPojo(Foo.class).withAlwaysAllowNull(true).build()); + + byte[] insertBytes = schema.encode(record); + + Record insertRecord = PulsarRecord.builder() + .message(insertMessage) + .topicName("fake_topic_name") + .ackFunction(() -> future.complete(true)) + .failFunction(() -> future.complete(false)) + .build(); + + genericAvroSchema = new GenericAvroSchema(schema.getSchemaInfo()); + 
when(insertMessage.getValue()).thenReturn(genericAvroSchema.decode(insertBytes));
+        when(insertMessage.getProperties()).thenReturn(actionProperties);
+        return insertRecord;
+    }
+
 }
diff --git a/pulsar-io/jdbc/sqlite/src/test/java/org/apache/pulsar/io/jdbc/SqliteUtils.java b/pulsar-io/jdbc/sqlite/src/test/java/org/apache/pulsar/io/jdbc/SqliteUtils.java
index f8cdf76f7d35d8..9e3a3fe255d732 100644
--- a/pulsar-io/jdbc/sqlite/src/test/java/org/apache/pulsar/io/jdbc/SqliteUtils.java
+++ b/pulsar-io/jdbc/sqlite/src/test/java/org/apache/pulsar/io/jdbc/SqliteUtils.java
@@ -100,6 +100,23 @@ public int select(final String query, final ResultSetReadCallback callback) thro
         return count;
     }
 
+    public String dump(final String query) throws SQLException {
+        StringBuilder builder = new StringBuilder();
+        try (Connection connection = getConnection(true);
+             Statement stmt = connection.createStatement()) {
+            try (ResultSet rs = stmt.executeQuery(query)) {
+                while (rs.next()) {
+                    for (int i = 0; i < rs.getMetaData().getColumnCount(); i++) {
+                        builder.append(rs.getObject(i + 1));
+                        builder.append(";");
+                    }
+                    builder.append(System.lineSeparator());
+                }
+            }
+        }
+        return builder.toString();
+    }
+
     public void execute(String sql) throws SQLException {
         try (Connection connection = getConnection(true);
              Statement stmt = connection.createStatement()) {
diff --git a/site2/docs/io-jdbc-sink.md b/site2/docs/io-jdbc-sink.md
index 4c9a473e0277b3..63150517e28cb5 100644
--- a/site2/docs/io-jdbc-sink.md
+++ b/site2/docs/io-jdbc-sink.md
@@ -15,20 +15,21 @@ The configuration of all JDBC sink connectors has the following properties.
 
 ### Property
 
-| Name | Type | Required | Default | Description |
-|-------------|--------|----------|--------------------|-------------|
-| `userName` | String | false | " " (empty string) | The username used to connect to the database specified by `jdbcUrl`.<br/><br/>**Note: `userName` is case-sensitive.** |
-| `password` | String | false | " " (empty string) | The password used to connect to the database specified by `jdbcUrl`.<br/><br/>**Note: `password` is case-sensitive.** |
-| `jdbcUrl` | String | true | " " (empty string) | The JDBC URL of the database that the connector connects to. |
-| `tableName` | String | true | " " (empty string) | The name of the table that the connector writes to. |
-| `nonKey` | String | false | " " (empty string) | A comma-separated list containing the fields used in updating events. |
-| `key` | String | false | " " (empty string) | A comma-separated list containing the fields used in `where` condition of updating and deleting events. |
-| `timeoutMs` | int | false | 500 | The JDBC operation timeout in milliseconds. |
-| `batchSize` | int | false | 200 | The batch size of updates made to the database. |
-| `insertMode` | enum( INSERT,UPSERT,UPDATE) | false | INSERT | If it is configured as UPSERT, the sink uses upsert semantics rather than plain INSERT/UPDATE statements. Upsert semantics refer to atomically adding a new row or updating the existing row if there is a primary key constraint violation, which provides idempotence. |
-| `nullValueAction` | enum(FAIL, DELETE) | false | FAIL | How to handle records with NULL values. Possible options are `DELETE` or `FAIL`. |
-| `useTransactions` | boolean | false | true | Enable transactions of the database.
+| Name | Type | Required | Default | Description |
+|-------------|--------|----------|--------------------|-------------|
+| `userName` | String | false | " " (empty string) | The username used to connect to the database specified by `jdbcUrl`.<br/><br/>**Note: `userName` is case-sensitive.** |
+| `password` | String | false | " " (empty string) | The password used to connect to the database specified by `jdbcUrl`.<br/><br/>**Note: `password` is case-sensitive.** |
+| `jdbcUrl` | String | true | " " (empty string) | The JDBC URL of the database that the connector connects to. |
+| `tableName` | String | true | " " (empty string) | The name of the table that the connector writes to. |
+| `nonKey` | String | false | " " (empty string) | A comma-separated list containing the fields used in updating events. |
+| `key` | String | false | " " (empty string) | A comma-separated list containing the fields used in `where` condition of updating and deleting events. |
+| `timeoutMs` | int | false | 500 | Enable batch mode by time. After `timeoutMs` milliseconds, the operations queue is flushed. |
+| `batchSize` | int | false | 200 | Enable batch mode by number of operations. This value is the maximum number of operations batched in the same transaction/batch. |
+| `insertMode` | enum( INSERT,UPSERT,UPDATE) | false | INSERT | If it is configured as UPSERT, the sink uses upsert semantics rather than plain INSERT/UPDATE statements. Upsert semantics refer to atomically adding a new row or updating the existing row if there is a primary key constraint violation, which provides idempotence. |
+| `nullValueAction` | enum(FAIL, DELETE) | false | FAIL | How to handle records with NULL values. Possible options are `DELETE` or `FAIL`. |
+| `useTransactions` | boolean | false | true | Enable transactions of the database. |
+| `excludeNonDeclaredFields` | boolean | false | false | All the table fields are discovered automatically. `excludeNonDeclaredFields` indicates whether the table fields not explicitly listed in `nonKey` and `key` must be included in the query. By default, all the table fields are included. To leverage the table fields' defaults during insertion, it is suggested to set this value to `false`. |
+| `useJdbcBatch` | boolean | false | false | Use the JDBC batch API. This option is suggested to improve write performance. |
 
 ### Example of ClickHouse

z+8|H6R@7d;wuM@uv3c(M;7wr2qywEPz;v}C<9O_O@|4owL#5ICIi@U0H&ztZ2@V@>*?>y#?lMqJf{y%8d zGgNqx8ppqw67w#1dS1z~uN`5{suql2hiB6Yu_pD9%0AYIwhxReYPh&pagWYMI;niK6EOFKDx7K(`4+){+Uu`PSZWF%C9H~&GkD*I@4 zyQ89PK4EekZ_c9u6HAr2ESje1{cWlJZb^-P|9YkqbL_v8BJnlsPZGIEQ*p1xU)B;4 zoXS?DMk7h{*NQqZuO1I@2k`eMv(z{*AS7ybA6tS{+!e*pAhuv02CI(ry_PaD+Z59* zp(CXdE%GwsuANE00Fb&WrFBc-t`A)-N;z&z9;aAuX{F{uqfPYDy2}biqP1JxwsLpZ z5{KOr{KeW3o8fwgSCX$dp?S8$TZe-w=FPY4l9B*l(gJZbrv}%)`%JkEV<7$OTg; zOdl-9&A3N(aL=Z)(6X`x2fR!vAw|F@O!NK?n2wTT047Cc)MUyjYwo_x46i4JfwfcM zM*&Y}=ac#FC@nMqzaPYX91{TST)*J3c1khnp(8kVyW9mN^3!RQHL2E1{ll{OT?X^l z@h%h|TdhFzy~QXPVA_;Z7S8?ZB$cctrR;l0rBfX@_JdjHr0EfQVwYhOik6QixiCx5 z7kdQ%q`ucpsg97dF)fB1iMw{7l{}pDA`$#no0?6_D@9jtLoP71ohj6Mf{8k8%3NSP zk;b#MygmArFe^!maBk1%(kOydgl2K|Q;Sn`AZ>B_>35H#Tebu7GyhI{FNCO@)LY#2 z+CmR#%=F54xCMeH3sMuPRDlLMy- zRT_rO`iJccM1u5!jC|-Xyn=g5Vp9IiA;KUu(UkdG>Sf-kG4zzt?HpMxyCkvc_{b+y z{}y4;?$NC5%mWkg;xy>is;V}Lqg(u_Oy-xCl0v(lCe`bBT}u1Dz!+Q}{5TGk)K|vq z|I@oyl*x0ZDD?V6bfmdb#wDKG?#XKF4A-f?tX>SIETP*?K3m>#KUOJ@QgXOF!?V?| z3u7{vXO{%UVyd+20%=8MAO=~DEvLr7-<(c6x3#zT7?@Q}zJq@=*BzlGesO_oY*k{) zU_QTAI|=ksAGGHYn|)(X#_Foz+;YTa(=w-+KCm~+!1vUkq|wl!W@`~$O3bn~;|-10 zmAR%}FQ=0AXkHmTc$mx|4Sc@Bv3;d=fCvf}+w8>Dux!=JFMp(s|IPP84t#H|Ze7aj z&b~MC2O)$i=y+9r=*d$mX3D&`-x>QHPvLEee0n){5ErVfS|FQNqM~to?}wHw5{@PN zAw;o|XwT-2iNJg3j$WQr&Z z#WYEkakpZTU*)474VGe4S-_kHI}rL^hrJ{T%R38S}qopo1i)$uYC5Q2X3Ye zlJzjmPDk@nw{yBAm}Gfs8IPxbE>(G=^KiVQhY>Yu#bIbK8&=b)H8~|Lz?pdNen!lM z>3wM0GVHl7OZsZFF6dSn=~q)0e*Kz45d>o#-2op@jB` zdmyC8VbT5qbp?m6-pz-|M@Q`QW)$p!2y%xNm~;l@=52~m5>V1$ALRg1?5$X|oW%id z3j50&w++6=p{@??e5*@$g?25yLwk7|H=7-V!Im~yYLHH|eYx3yQ*?1Y-!nVFUauUt zFoQkTwZ=cXNDV!R*{FJJRlYnko~q(7AHE+>0M|cZ%^iZ4!Y{e+o94YPZa02gP2|uC6a$Pn+<0 znyq^vHHiCArPyxVP`1L!Wki=);>jRREAB+V8FRE1J6V8JjIOEo>W5;H#Mn2Q~ zDE6aQ{q5T=ZwOE)X=&;0QO6R^maNgNKZ|$vsk~4(rzfd#)mO@Wi+pdh2n3-S&Mb+p zk5@W`f>)djY-geOP4{iLHhtMM4Ygo|bZ|EBk2F*ZZrj`fN-pMW@{_iip9aR>R!(j6 zJ#}FeLhS9vn}`?w68%F+%TO4|_PMLNZ&(oRm)W5_?r4weLl2D&04&NvmhA3){Z7r~ zB1E0p(D~A)T(3JHkQ@tPC;*?Yr&>MsWz-8Dh#DbsWkj+Ss4_|r76t}J-fLGWe#Y~U znyPGfHz?7L_SV~>@CjZjm(^Pxn$z^f_0bQU-|kndv&8eMXQhnl<^E6qTTM;wHf11ifqV z_NF1T`sp4?+%AK!LuFoWfH}m3gpxw$A3Yibpo2vVO@d>ghEZk-FXTsRs`--sky=a6 zT7Go>hs$~m@auDG7vM{9t+gU!*f?PuZN`n~-d*D6spJWF^Q^sre)=sYwaV|D06gx; z+u^}eB-RgK9dRysCkptL-_*Q!JFs&Dy%Ng#6&J`QSKURg=kEs1=F4r8Om!Hj4il;J z@4a%WzGyrYy3g^PP|4`!d!N@g67*A@NXu(mI4#f7(L+`@tSuKcgpj8HUT#N=dJ4^w zZUhZ(N_)TV1%i2>i;h-nHTRnye6+fls~iSd=beAVR3mJ#v3p;r+Ut$KYoV~4w1o`; zbldRozGT;b#+d-xb`j|p-feht|4Sq}CPS=oDhwf&!|*mKfBp2T{rY-T@pzI9DXPsC zsdSifXJ3LM(s8OQ&}c3x4xxU z8C_Bgx>aq?bhjObK)rm5bhSyx=Q+|%MXgOcThXp{ZvP5kvcPq{x%2%)<|Rh*(|w4$ z4{}i)B~Cj3bP}D>P;YcAiX!UnMGc5KvfkOCra79V3UrRcfi9`}Zww#D&v&}jne2f8 z)6e}=WK!mPT^=`|C#B|-;N_jr=^fFDsT}f5c5f|uJljO6r!71&fgP1m#*8U9y=~)6 z{o&%H2D&t%8}1_|-g-M-4z~wOD#eEMT{-Id>xEH)8QCnBiDW$5EuCLKNNp_>m!N)Y zR4ZVnG#WeMzLmNCZ1my4fLql`gjt=OnvRb73A%- zM)mzWBGcUfm*H0Odf<*Lb@6f8#jcA(zF@!P2xnWqZWM^8^Kh=!2*kE-?yuUS6U8BL*7rz=iB^rao;frBeztW0id=h3fWl@>Pq#hsL^Xmp^66MMl-$z5*_i`9qH4$>q^N`{ecA^V^Mf3dk^W~Ow)$lfj_DGrSx3XtnMYci$p_&vdlx+NId zOMw@kS#qQN)8&f)n=nn3p6|6KoA_Vu6yyc=Xfn-2G5df`ae(Y%;>DTQTvMPo6W{^<7-jalI^};!y8anci$kl-#rA3Y79Mu3aJ2$l zm!4$6OXMu~3fUH@&8MXc+e;OJiOTZjNI04|Cv{scLk~7P_(NqhrSud_q|0z2$k%f7 z$*o2^E0u1uCN9|X*=fj8@LeBfLXSw;NdGa{A$;JRy%Jn~JX#czRve3nOBc;T zMA*8RR{F0>>Db5hP@#uzk{R$NSg6S9&{6aIHTHAF3*qD`){@seT4~G3=!l7yaD#WT*||HZ^@lwl0H8z%Q10`OI85h%O9U=k5kj@nn$ChY{g zV$tTBvCf1T^lf85ZXh3jMsZ3(wu7*ggG$*zxqQ7LhQkA4cZ3m9XmTr8yZ%vbR%|ap zO#Ij;lsc;iyN2Z`wb$$W;00(lOJL6z3@coqT+|O6SJqQivWE%IpF&+sU%)mC+ySw! 
z^qe{~;x{L&-~~O;s@DCN*J0)P)tw)|#Q%DZp)tg5V5f5VBk$03P)ViTTZy2byGJd^ zfVyv}rTf35_oEf!FsePP(#$L#dAm2e;GBCK&GBvv$!=>X=Nld}DU>>Ut%4-VIi2#e zQO?KnjbF7mRX;u+E!GBeFTGJ&4ImUclm!Qm7&4S_Ij_F&=N>==>300UlOlnl>zNT9WL z4{nh{s;FpdHmy}1V%t^0VTF7r%T`V5?SE=fVh40v`zYM+G&^7=vM)6#7{q>#V%eG# z9hv?Z1i>P8Hn+|y>$R}YEP-EwI(`3NTUQFCGh^Q?F}4!!H5tY-Gj7zij3(J88D$-HwfyFGD}JB6f4-m3`@ZLV zpL5Q0p67he^TOV|zvtcOT>G(F`@vv>eggkojH5i6IYK&M!;;3~Yp;pn9*2uh+i$z( zYWZexV((ZBYK6sZu~<5I-_=k{?Y&?^gEKkUl{~I}c95xxFO3|#^>Hj(=yp7`Iw@tw z$mPbVkcH)W?uopO>;40xB^?g^yH*(F$WNDhZ|GaBDD|}}RQaX2O>}^{(2CBibDjdM z)ovH4h^eBm1ixa;=X+la4;`()PX99J-y6|%N`GD$*nU;AF~qGfkZ7N3bw9EY#R<(o zR8^_DxqrHgG~19k>=op#YBz&SXpahc@QS`6s zP+^?8?`Yf*_f4lC0U2uaKNQOOu<(FDFRKWQ)cJTG|z)m4(ZIlY_f$le*U?%FAwO-?f=BURo~+!?gfI4f&rSdLgmeE{I}3_0gKd_$mBX zB4)o(^w39)5%#UkOn=owEs&P|={?KESA2Z%%dcM?`y+RWD0ASr7R(d0_G2Xj7xcfi zg`HUNC3%~8=qQzVHS95~sTjU;j~c%1*6!-z`*zC~u{ytoemZ0EFBFGKa2^*r`Qc1; zSF(E49j~kZ9DVB3F*X0{Ff@{zO{xW@JidCuJftco$<*avHsf_M`P+PcPq%thNLG3L z9~Hiu>=S(De^`ei{@QQGO0a=uf=a${!6m>9h&P@~cqbo*;FroH-Bca$Ta z^)^K=k9T}Kx*M=iKDBzuYL<87zE_itl+?Je&GMtm?{2!vw?TiOPHXw#Q|j^5r@t0q z9{7@+IZ4;v7i{}wEQth3w|sD;b)AVBjkVSo3Q(-H+$#DES$gj99wp`#m-eFhfn~M2ZIzf(H_e1XkV?gYYdmo+^d!JPO7{6hG{_W`rDFXuV6!89b z1R)W;=JMd+b@+;Ia6kFNNbM7R&jxe(re|6u05W4O9lxDMJ{9J8Y?sTNtnjJOeKqee z$%$kTS`OA|F1PRoIIynJe;?vA{M7`+7aC^aYlaU%9>xR+u4UkaxtM!-GDUW{50ktj zw@BW5&*kLxz19DCo_QFp~vrlt{Fd;XOYulRr$er)^9vbBJ-rHnzG=C5s=Py9JBga=t>ZhT39;A zd@u2YEJ^f}c$I_-o*qj0rLHdzhJ!=HdjjmLQPoU7WbSXY#1SYS%uJk|2ovn_?agaX z>dnyB(Yvt^J4vy$8^EY!&Uw2g8BvIp+i{wnnDrAsz9Xasv=k^FF5m#UkC`f(5(i0# z5`0pz&Gx1o&<>7h8L6_DTSYC1N}HC+5Z-45*V^_DvCCvzVSo~Vt*JUShjWv|O@dFl zk%)~>kf3Js(M0B43+!li*g-?S8D&&U!hHGPjq#W?9F-nl)$ z6q0OdOxXW^R1{Hv4OPxKaMG2XWr)MQm{z#Fb4IibF-v`=ek?j@k)>-#;t#xupHhb! zURw2G++rR%lF%}wX|WDZPlDlf{1=P&kB6mqp++d|$Y9y& zO~zhw7qUM0X+*`d0P{7nWG=6jMb<&Tp|YR(rp$485Z&U>F%a+V`;4p&2x46R4VB`$ zGFR^06rDV*sh>g~POTDSW|>B0L*A~>=->VB__4MJublJ}up`L|rh%RKBrH~iM4Pnb z(d2f6GuuXd>>=OQ-vB`XzSSqEN;ue~5!Q9)NrMjmeYq@W6rtF$8(SuE~W#enq3 zxy0`HM`P#H;})G4>`_1CJrCMaB@B~ODX62znP7u?b{BUu_=)`+BU%@_KV4n`1Iqq~ z&ZdgIs|z?HnK2QQ%7F5Cs06!mEH%eu%}}X^rpfSPM~0fA`WYJo^mpSA)I`dsRCFiU zNu#Z3WXa;`p%P{5pj7ZmjtMr!fi<`h{F*etM*s?i0YOF_Uh7j_DCVySo`oH!&>7bf*v>@|pch7P zV(xBj;o=-SgOLd*OHI37PKb#vbnLy%y{0R}vplndA(#yn?QeksXeQ7cw`6uO&~6|H z+iYNakHVpRNT=!STnME|RqQ<*Tl7icIvvUbX;gnwSpYGqtXbm-4JOrmPEy{JUEd8` zPiVg76!5d6LlN>RjHUs032PvX>j&Mz^tXDr%BE^ERpQTDo8HQ{fq;FaFpx%cXGYL? 
z+RI5}UxwNxF6G^5-klA5*WY0+l&(2vmi4UFDT1=Htm{RA zUDT#r5pYmne63#{jn$j|kEFj&sLez2U)YqP)-zlX2921y&0?nef$GDtO8U@zKSvtDZ zP%xqk@1WoZN}5fTJUb_`_h`>JcS098GetFiI2?6^mWt4X^Ua^jI;!ECzKGyvGj7Iw S7PQ#HzZ`6x4?nO$;{OLg#JjQp literal 0 HcmV?d00001 diff --git a/site2/doc-guides/assets/syntax-1.png b/site2/doc-guides/assets/syntax-1.png new file mode 100644 index 0000000000000000000000000000000000000000..17dbb33b4fef9431ba0c4626ee9e2c410d88b368 GIT binary patch literal 160167 zcmc$`WmsHIvo0JY2@*WG1y68ycXxM!ySpd2ySoN=4U(Wi2L|^H4ud;`ee>+|p6~3P zC+GKpYpz~QukKZ=s=B)5ZX%TxrBILvkY2rdg(4#@uKMZ~T*0eXFkT36px?-Hr7S}m z7*|y((O0#TL;gEEE)8(L%2gUcH9FdeJqhKhjW#!%&`y8rY)cZ1i{Xj1=ehRFMr z7SN>i!_mcYYftz5I!&D@W4kLn5scIi45vSu9&z}KZ5#hckndDn5XElVH_Bm z0-cYK1OFNdo$vRU_z%SW^vK%qdq+*CY|yBk%r6ztR$31oV2f$u{x~GAFken{4gNCJgApDW!o;;_9niuP*l{{&yS|ZiyZJ!W78O*6m4|BQ zx|i)^sdhVKqA{$r+?)Rw?Wt!YLyrXQwyRyiF%r+#n80%dij zZ}xX14U5OTzpfl}c=CfPKQ47^(QGi+EiYHhmAm_+q!6!Wevm^HeHq0b|ALW1oooNf zth+;d_Ql>U@xNCnREf|L4HPa@9t;)gxUh{JxI3OVU5R@asl_{B%4JO4CtWx8c%KX* z_hl#LPj?-EA!Nf1-lao@(yQZsvOE95a9Jo|>++6d;hg{YHabUZ44#YH%oU^bNqvu> ztNMzMf2?~w>1A~<;4drs2Ny;#Lt@|RUdMv9JP=Unnis(3|Ia7?@=ktu0RA>Qsr%~T zoS!6;*?*FQ(Vxjb?_r#O?=JrKasS`;;m2g0Fr}9@k;`dm-OS#3?p3!JN&KBm{;(%- z_>ez6p8;!1-z6h10DnDs=Y^PU)sRBrui z_JM3K|9usaW{qx3BNt|1YJb#iW1fp5=WUK9#6Axy1)?D@AYjY;gaxwGUSWA2u%7oGZzU}p<&GdGHy7$l10+%bG5aH%|)f2VjtaRP3;&PFelG35scU|I*%JlgfS zVu2R^u%rJhKs;EUWp1gRDv5%wF8;IrpWqKM(23YC+pz$a07y>f5fXSzZ1 z{-%-t%brFZJ1328xr|tLJr`N8SZ!xxsXD*{;5(v`T|1mtpgjI zmtNJhrP;n=F9f#2RY(cG9k*4TS{krVe^ufiH&P#5c*h-`&$BMQ;$4XVE~ivVoG+fq z1XB~ett>1D;)9;RTXBUgd!c`s=>M`gK?v%bPR?Vi>bzT*Uh)E!)7CXLh#z{AeJL$Y zN&_c_Y}H9lmi}USj=u*uzzvFtHfB zBoyrDf(tprwnlIS7#GEVSyOz}usY9an=U2n>+uzF3mwCEesaS$SW}g-QHevA`sJsk zfMU}B3rD;VO78U-+bCKN_VltL0F0?a_$+XF<$yWXcJ^dPCB)m+w%A@A{SAQIK?`2TZnn zCngDw>eBv^Su)g#u0}G(MY84pFM0Ce!KSwBQ`_q}w4nfX&m{U6(<1MStnX>paa&M# z%*{=f8g~kdx>h4R3~S!S1JGll8l!%Yul4oyO%;edDJv^ypvzI!7m<5gu64SbOf0u} z^lApwDEAt6E0;SMI0Nw__#IJ9K)FPMS3LTaTYIid#~-*n>YKGvsw5+VfT*(b=^-58 z`4E>mWc??fYq9Q5cTvNdM`Cz>rLaGgw%#4#!>L1bY(x(;%!EQfR)Wi9uVajw=5L{B zTqeYHqX2L1Mla_wbWF_wBchOEmE@tqc2^bdt9mif7l9iDd)+qF7IF1432j+9u8?pN zAMM_7(c@)z(QY62{sNJ&eJWDK<`{mp#=*>Q99Zl?N=Z2kUdwOds|c0kA8>1sSaND) zAoIr3{Mm9%z~uD0PhX?Dlr=R83&wh;mW5|mASoWc>b~;!Gn{Mae6IBI4DZH`6szf~ z*UTC#6x%>_-2))EhzlO6L=ib>F=0koT=qPiq6`zc%HN&G2V#1x)HwbE0MK@57x8Dx z!xsz#6$I%-qjwql8-_`2_m~yV;Bs;#Q=jggUGz zVtZ)=Ssd*2b`}i#KdY5#7q2e&>^sWUMzHpY@g$1OLc}~6ChM4zzlr5SKHV3Z7BwyyO ziDaXOAx@^{4z6_F{ff93jxcq+g`_NX$2w(k49tOaE3;IK8>MxF?L~r&B&SlvLSs1& zn&F7&MSvd@INS+n66`;EZnzW%q(sT=5`9-JQ|&!5kUU6dc$r{j3Q+@T@(v0;|FV@Y zXN||_T;+T-%r+YQ+bP3*xS7WDvF;B&KD!qAvAu1duqsP1br{mt5#gG@W2i;*SQkp# z5YzZB``1_%HUXy{Ucgh8QMiZwN|;8KW*Cp_Zm4A*0nv~Gfn!z!5C~-6Ub5`_`IeZ7 zk?0D}Tt!zAgKZjX%z+`(83*UvM{?iD55k(xyp-CapFi-8`12jU$5Q?k#~?As#$G2g zm3bRE)E87A`4NLUE_u9ZoA*IHo;AxLFIpl~=WJ+rw;M=$AALZsPhn+6JUA-t`}#nw z^L>&@5};$9g~&@w$wtOD2ZiEIo(u)p0g*1g3zU*-mnCgFw1X`Sf33RPjVHv_XtL52 z^Y&622a7e^06+*AGrUg>{1g)@T8~wE|B>!fliy&8(?Y_Kky#-N{lCr92iRXV)N+=< zG~Zmsi`~4^r=7fojG)~&!mu+0WdOt#6>@<8IoK=L3FSOKHp(WtGLTizj2V1++Fn?1kkMo9Wa|bVkEy=#62F}XV9{@|E@5we? 
zcgGtZ+?!dH?J}~%I zbQh|1dB2nm*QN@pMXZ)ftlQ7}L_km@s$mjoS2) z18LyqIkQqoTiSDB#nD1t;N4HOfDf=-e2;RcjM$k~HM#zQ_oTf@dpWMH(!1klO?0&g zSTstF+`Jz9c#NHQn`b}ZZo`!%12MC>Y%!iM(uKnNQTgcYJ~E+T(I_Is3qZ8Jo;z^+ z9wjbQ(%gkjbbr?l8nocgOoQYx?rII9#R{b<9io69HRBoKJr8jj&;B<{OpiN&1$@LE z9=yps4my3;iPQcwP(K`HhZNuhl+X2~M%gUAbCv@9fZ3kF_`m)gEGp*{VXOS#J;OQ0^*%f70iE=cqn+M1{SZF`-^B2mgM z!7ANr^6JyXw~h?rAFIu#!>HuUuYt<7!)n@eD7%MqwwNzn+M!~_y#A2pv%r_z^0QwQ zXyF_eM!G6DJoz)0@@S-f%;b+ZZN13lxgNUvvmMFSbARlzn_0SEs#FqJAko z9Kgly=OHkt2{440Xs=zTvOinVl!@LHj0%449{auY6*ZZkXcQ0nD-MACBpq3t)1qAN(L?%G=&&$x5N-z_K{V6&5nnljV)jUW9xt z!pr5$GYxxkT4sGZxQUPJacjuw!Jhp0SX!8B;_ZGJAOn9W1|JJ@b8PD!b^G*Zbn2&u z9siT_2^%jWENy~}l_a)EoI0iQd{!%36U$B*M^6jHHBTSKv2jU(rie5dc= z-W*{px7rfkq-28m#KN=i9U{j+3q5<8aX+2~7s@14myP7`d&_39=~;XuH@hd-So3j1 z>Gn7<*B!IW!^aGJ=RLnUjvLJQP2*$oVp*Z|Zcp^pQsyV}R<<%g{i*&%ZH)w#Coh#y zi#F~^oxGV!+4o{0LWY@z&N30Z)%HZ`wutLyYtRs(BYVx+8*8-ljZ(4l`aMHmI+Pug z43C7tS+BV)4bTA@AVY^v7m&sZ!^xS$>Ob^8GtgR!H;EK1mwa75;!2iHIm^g`AKAF&rQTjJ>6^oW)n$e+3tj1 zo*tMlu&i-Q@pH)(_Df`v#nLy6n4NwLCzh=ACGAkP?q1b!Z6sOm-?u6zPWTSRlRDZm zM)rzs*Ll_%z*5&mr!%75Zukaj_rUhW_IY1sL zt16K62BU*|!2$0vB!QD{8d=xi9f*)`GWhu`Z`lAdpmn{4lPm;&%2 zN6W+WrR&}c2B2vv(w3HP4!PKeKeMAA`tUZQj8)0e_L9O?SW8?(N>tSE`ylSNeIKo< z<(luOY^dIDu;jjA(1M}5arS(6 zYlpJEcy8KPo!Jy)mX@v_(qKAl)08`iV?QIQ&}T>?uX5#cK%K?;Y7L6=oS z&4fPVXT{E?5Bt?@y~J@mH+VBsqK8==wiec1nzmYGopLP=zyvZtEX03u>A@sCvPj??hm=*yxi!0JgzGfF{GQ>yI=kg=~mET?oZ!POlH4c;(d33 zErddo?-Nzi8_Hi?noFj(+P3afoMe4QjkSsPMTMxOsLP`|w%l^EvN7+}Xq?=|g>N?k16M@2 z14u$qg4)x~=tESy+3Xg5$?sTgH3)$4v%f4yDbtx{z-dI@WGGLN<3f(?@2@#YilR7E zGMxx996c7zX-wq5$>|7HS{@}t>#*EPivdWwLk`w0I`s^7Ynjcc*J|{*m91Zt!7f_D zjWz2u-fOB^kAEgg>$@?SPtp=Sd&isoD&Viimx>+j&VR&UoWBn6jB-52XYpz5Z{MyH`TV$)@w`63Ux6B3W( znFZj_as^o~D>dRlK-NrA5n~zt&PtJAWhk+W&gG&Ub4MB-Mbjg zFpH;#nd6jCYO#;!)FvKBQzBU_A+M51J-Y0RmA<2M!pA8@FT&uFOB>GU3F*PH?AG9+ z9L8uaNq<|DE@doBO<*CZO`z^f#IReFMyNy0;4jWZ0n%c9e34%8^Jld9D*Y)Ao!8BZ zlz_{@Zo|kt-598;{hj$eHZK0B&>(8X9xln;j=i`4BmDv`nCwAoK>M000u~w5BJPU^ zt?qRGa%$V5Yo^HJwOMU;BtCAZh1anUhiad~NBLXhJ&bme$-7`!f`h16rPNu)JWA~K z>Y4RD*W5vc;iM@%Oxftzzx>*!1kr#JU>24Z;_{Z9v1U3t6DoKbGFf!?9^a8L3D^92 z3o&vf(;+{C-ALhBQlX#oQJ9uvy9|BP!$jBw?4QL-qLj5Z2H8ccjkT*fMe_ZEx3er7 z`Aw{14mNVsvZmm{9PIhn%b7U9Fx%A<=NpN33Ibg!m0q)90|dui$xWqTcH8B? 
z2pVe>dmFbDnEb?_B9h*)I$@8RE;kJC1!||CP5$z1?Wi|7OH@lnp+PBMn_|Gl*N%XU`UQHG^>6e5Kjm zp1{0$wB_QW={(KG;E=cUbyBPllEqF`D)|cXyDfQfofnzU(%wCxaF7%UDk-QpJDo@k zjFfR+TfVOT{cGj%9(jBmr)ygKawM14+WOuDoW(IxrWRtds`VU6^BJT6=FbFfJsWvF z6=MV2=f**PF{0w8Q)8Rq<iD7ph^A!a;Ux zmM;BYsuD(Ha&5F2jjT(evxkXY<++bzX2f0u-h>n}N;r||k!B?<#2y4~8?%m5OFlua zk&n&SM?G27BZCAXzf2((`)_fu0v0=34ZQ-y?69&OVcvn2NfS0qv?)%8@7#X_(yRxirCCHVM;KHf-h2ru~F< z-46NoGQTVqtPkX+hG+Ntj0>Z;Grpk<7v=x4{^fK&XYGD6m+l6UmeeE>T7S<5-Xpq>L)*@ zeE9K%7NR8h#4hH7DRfhNuQ;y$J@VaY6Tnc{m1l(W<$k{y?TTGStb5tLzk$5PUE+lP zToxY8=zFtYVPL$MfrGi}4RwmF7S>Q&Yn>B0K^@nvdi?`3SX*=>-*NW#UW5^n;iFEs{2L!P$fyhj6fIb5laIqOZ_w#z7>2Ok(SZc}^@1|Xl)$xFy z5m;vWW0NFXr%sbaIMfGOnYX)Om{Fk+LgZR=6VrCoosK0q-0!_$`B`((9jo=*+UnC< ziVE_v(@L>kJhRW$ege&xp3gf`PL~s0nHIjd{-a^1-RrKW0?R@Ba1>86(k!Tb5qpGN zXMd*MjV+l{RerTw=+ChkU;{s(zBHnBLePuFeHRUoaapV5yC6={=3n0M6&RLeq!( zwsCJ;F=4g=1UAqZ<9GRdg3QkrC4?o@5@WYxFLxcl6i2Y`Wonwm^3MeiuQ?w01{<1V znzAYgm7RuU-6*bj&UL+Ip0j5A>AG+KXiHUp^TABt%j0wJbww&!tAj^7d)?R-gX*|F z*s2B`Ns~t(>msyb-YPbXY69rKNwGd3V}&!UYSs#gmkOu`6zY?GS)`@c@0$8WYRh;3 zUQze0o;z=wUplLZUewLa7vz8sYP%*TVbt)^}?)&E}C97e3*U9tRQ^sEwre%yQXZ!it0zsQIXIOE@F6yFfDwo zKc}AGp5Q9-T+dE*y7N(#g2g#E2q2nX`-xKGf>L2(XgrixvkuMNzDDN-&f7$SGSV5K z69(Aj(r-=9Kkxz>{28}@z@eeque3DOZ$0s{It%OfwHm>xmL8(f+}ZO4m|{ znsvFr5bl8*IxY3GH0O&9&^|Nfrx}Y0TLJyS*!_(|A5jzH3gFcWJfS z1cv@kb7w{K&5Q;;>fhh&t)Mp1 z=-&}~RZ`Ordl!vCOYDNr*_#C?X4SRewrSvg6VXPAb)IF@BE2TQ1f=PZkf z9_s4(mL|K>h*YUXHOlK1xwn6R72T2~4;)JJqNOdpq$Ur+HS{X<0IzX$6}6=H3r*e9 zr_&=tLu}-K9D44HUljdnN`lyi0msLoCYQ)3v#rin9QAO%;$b3K!z_eY8<$?&7)&*S_kE<*ZVia=-dm~J zFfuH8Pe*|;M4`747*($dMj%s85R!M#-p8Uk%8EAPJl1KjlhAIrGSMzpN-Y3|ylTK{ z(7Cjbbemsnqs3BP_@+YP%XKd=ljomj7B?@A+Ucb1bugD6`=I;_4swd&@cpTk+4x7E zWXNZp8p9^u7(y<=YSk=Gt2G+x_T1}{^fHC)!(^)0%LU&8jFSuK>K=8xuBN5>blG4~ zLGyag*pou1{Bmr|V^*~)nlpj4HMRz3wtDv@ zR;HPEqT|^%xmIHJ=;!0^vlSYB_Mx0Mx(zlG+BHsDT#h8VGHfHdDx9pMY%=SM=jp_D zBXH*o3r*UBD1_{s3RT?BeT(it08u-1B^q-yR_dr3>lX)j@?Vad=q|o?9B)dX>EKP&`>Sc(e`Rm7XmO_KRucT^`-g3m!Dne!x2n>`-zb}enY(M=trS?=e2rQGTP+l&(~-@Lk$s+ zW>3`M=4ALhLo#nMbgJ+f6FE~1Yf*IL+%A!3W;SLfT~r;yia19jJS5HbmWTLeo%!KP zmfoR<#^(+c9?l3Pkz~+U#l47;e|kQw>aV{I55-KR>A~{3-$@V0QsgF^)gVY48F1U5 zDg<@kt|ynZ4796*HEYCFp#D$r{kr9fkG-i2yP$0| zkd~r^N!g8u@$q=deBEhH**=m9@?Hc_4^RCytGP}M&cL(}Yig{pl--HV1v)h2oQmC) zo@6lxv2jqikPNOir`6O{zzEyxoFJ=o$_SZ|>ZW$Gjf@KmS#sB3kn=z5?8Y>r|7@tp zdL@1O{4R-tN_h^?Oq6D>rx8RVS=yF}y(SbNC|p-(4G<>w))#6!SIZjgw_E=u5!F107yEH zRtalGME+Z98~5b;VE$LZXhRu3C&;1h%i4Ri5%i)X#YVp*2}?zxI#1JqwvHD@wk1LRqB(@t3*>X!PqXw5<9P#n2Na3I<88c+ZpHy8fB4USH5g|K5v>yE z_GxAp%wmF=jE(nrrrPItErHh0NrFnDVQ<_HQ$FdIcTBRU$F?*vfJt=HBW^HDkmSsx_&iVeG4_${Mb(*RweTidP8uWUYnLr#$V*q&2CK~)Dovs` zA2_^Y+vx?_Yv+6$F!49CxT|nuA6)($X683IHfT&{rRA}wQXwKIC}3&uoy^km(0fG1 zlZTFl1Aen#7>S>gvgR0v+)6fLZ&^>X@fk)2IW+)w6l9BPVb--{@0E~+{`|92Q?up>DvBa{7t>Qg1cx9s20r#(s#s48qsToi5( z(%)0YcbG`@5_y-I&dal3jw8^V2Fw@?6j0x@^Sg@F$hA_!Z&~1h(Qx4H~s6Tf9f~X zfi*6NnyP(8jnzDz|3T~jqN$Y>R#CW(^oFGAkK*}1@LoP_Xs2i8=FRwD@LKgxQ1)~_ z)!%0P?OiJV=~TmRvz-5ft58sJ@Nsw$*3@x$_wdnQ-bFDK+L>F~F3|KBHb7x%Q0pUf zBw+EEck!TwcAlsY9%udqxMlo6_8Z>q!Q_boxOF_MGTB@4!HwHkH-=Bcf92lo#yovd(O6(^)08z zzqI}4tXDMg9IBjp4j+B}vZair7#5JIvNy`~A>OsUS4|4tr?+kR|EL6hicwRL3*mZP z#v>D5tdVAWc0co)PC0<F8mtR~+@9?ors$;NuZu?H7glz~TK3jI< z<|BY%u%mP@dvq+$FHmkLAz;!Q;v;uo8M(m_dhPC)SIac_lN=I1oVD9gd0W}p*5Oo@ zJUP*wAE2qB&7pt>(!f^a&8wVjqt0Vu%faJ3Sk|%nZEpdt!7-xqzC&R02YU0q+(uzXs=b|-_A-!5DTJR?; z0&Byd^5*8*lO5b`nppA~la%z-@T8=qg#3I`BqZe7c!wgIk?!0TZJQvTgORZ@cq`DE z)Iz7DR8n{88Ji&&eobDwR!aEA#f7YP$&=LMFTV5C)bz|ixlAA~i#$-b(DHEl3F5XHK%5sJ*x7ZR2@=tc&F%ILQ<$AuO69n`m@J0eQ6Q#~`iiz;j ztBpu6)a$^rwM{Im88=??U5y-$EoGH3lKgjh+ 
z&j>Cjd7jx$Q&KApDsy!S<%VN>_krh@TfqLSr+qGxnP)e&94L;%*qWC9<9ox*U!Qe1 zeY0i7{7B;G8QlvxMOI2tlQ#N9LvyW%ulU-%CU~wnZussdKK~TckI6q{15u(?-q8u| zAia8*yln;%fCxf5B9!=(a@=_KU7!fIc@Jb;+&AthX(U(U1 zdHBcrDCc?ZEgwU^Q}ZS`47I-cSUCAPF*TBn2|eeX8Mbq*w^ z*ifC|1-O%X;drV_WtSGy}8exorJj?i=Bk9`5g0 z-{2miw&y$-g@@N(0;RXgu5#iWvfO^$&~`*ZLZVl|u9V`Zuz1wn_-u_hKaaR{qDDQC z>V4C}@AQ>dns895c4N_F>*3;a-N%!~m#I@tmkMTS8k01uYI1>+-XyvE8n-|!+yO)! zjX2-)+*Rg|IL6%JEAK0;Q}ZXVq{cO`;KnQqu_EtLOyFo-mvZmLHJu7Rp`gl5^y$Ev zYaODI3FYh7bM$V8JGz#Y zRrX8r;1VL|0a6pQ+((si{Bk+`jk$$c#*}KToz9rykI~rMxRfrgyfJph!l{YwGo}cD zNYvoboS5 z1}1%e^+KRw>{420`K#is9N2q(srLeY6LH&?hba}He@eX!dadtb2{;=^1%2nM&o1QZ zzB#9cLc=rNfQ4!UtUR9!8I16xsak)&su<1tls)_z%Mc+WPXp~PE9d>`CMoTf&*s`t zcoA^A5{@}gL*uIQB$zD05|W1Zwr!!>uvn4FyF&jfaUY*ToChHxfmw#zUPZ%-KS5Mq zQ&}cqeyaVXf&s8XIKc*c7DK2hM|#S48;=TmMP`hSX(Ri_QZLc+#D@|j@vdG6rqvoZrVxX1(4Z!v*c_D z>=^O}g%EFdek$2Y@CDcJb0eG7g%ZhugLE$u-{1M3*?A(5~h;KuE)3r(6$yWZrE8^aUDS z3Khz!;F>H`k3nK?2!#{ zo+Oj(UAqbJNxYCi#$6IdKt;x-e;3$wvkc8jCrwppDSgimm^jQ1trE&cUU{T+Ou|~% z>+*7Le};hj7ghuHTw;cqoJ_r5cr>JhN1Eq#bbhw}{WJ_s@a}_r4*T@ULY)!QtYWW{ z?x*s1BaK8K$BiBzaZLs3bGH#{$bK|7_FZ`mbT#Sku=8{XB0uUT1}5XZo$x_Y4fRQ%+Wv z(ngFt-e018{sm7N?H2>_GTUG14Q#D}qVr^M#Cum8;|?Z%ljhUFvQr90rt=DOOreaG zJ*n^zpt8u5oR7_(-e2zO`D7TIB~6Sf$0xIe>^embYn^F%8i(2WfzP2(=t7FJzP{ZxeWWtRM}cj!;4f(ff%=xC43H+@G;%rX zFDL;rzHg);`Mod4jwZE`*fq};hE&psxi^Gp2>F7R!51&_;1`n&_XFTkA9MA)c1nYR z*cTVjAbvZ|E$cLx2TLoyYNO+F;{_F3c}G;pVxK6Z$<=3u+Ur)k3P|P-;e*RzaNnC( zwqr=ikJ>B1WY%krX57Qp`{u^JWP`F3`!2=2QhqbVrek`|z)%U{Rhs9yDy?uK{fF>+ ziwT0#I>Wn}m#2}@UfTtSrOtfub{597$h%}T=}g+RQl@B22-RrRRY_n}U5YQid|r%1o6wC)dkb```Tx@Ap0J#ua?trx>w3 z1>ZU2L5r&yluK0-V>pG*#X~p-Nh|V?giXpk3_kq+%k0T08$RG;=FbBN$fw0}OKoj}jFgj$C46H3Mr;d`hFh~UpJo6N0^!by3`G0vegM5I#GL;hjH6ln8yz0Y6pC zm$@?2)JPs;=O=>k980G>VvvC&@qQ(NC5-;WfSFrYKt~+v%k+~WBWI=;AU<0pNLHNY z;>Xi%IOr6OD;Spi9BLygzupIemMrs|q9lJRWb#w-U;X}SGJnJ~55t-;*!TI>!Fso) z)l8@RM}|)v7f|GfXBCz~8awKsn0_RB3jRB577gx4kq?;@5>qrMizD;N3DP<2@HrE@ zpNQQSbqe(t*y~wQf_7~7aq>8(%;x8GZKw^!(Fx2lEoQBLHQW}GEm%MB1STDEo|_FZSw6NXNFutgrfrqL(D!)G%th>;rLDn1FK>hk zt{M@`3)B(tJDI&6h4bv|>u0YazHJ&zPlQEimEJxqnqQ4NWcZb4CmC5KjziLy*g%Fn zJfkn3^hy@}Q%Mrl7}7F%?rywfeipJaZx3_|5u>2Emt|oKjNh3?Lx~=Jv~)5TiLp|E zRA1z;5EJhhrfH0Qp|1qxC55U5{4)%P<0QUE7*WyRbvxJn_tu=AHtNvi(it1TZoKqX z-C_`hc4fg4DG`&NlgJ1pODDNeijkn)vHaB3ROD-AF4Z_Uw&*H(+VxV-Pa>Ig%pa2-Wk;+M3 zAL}d5j(DDB%B}ISM^?REH7qq31vcSsBNm{E(*EAlcDAl+8PdZLkWv9poEgULFlhjh z!}sG>C{zf9nJ^sIw(iIFxCzbQ1im~lD&*8OUSUa5er5?|XFq{=T@9_9y^x&7v9GaTR^kB8JQ4sUvXwJ0Hj|7~^Z@PQ;qslEuU{_KeN+!W-j z5O8-8hDgV&r0+ti|KZ(^%?tHZ6$;gTn$Vy~2NsP#0uCc5Vv()w2Cb7^xl_HHrKln< z49iwHuY9-aA$zcz@!m|i0xm{I_N)RBf${MQ4bzL9kLlOU)42>j!uC$=SY9ODw2~y{0bcem(@tkQP7?BJ%_+<8~q! 
zK0Z9^A~+Dj$t)WKEeGs^{M}0GG(9kt+Uv?4M^ZHCS?^hX=BBro8v$=H71o3tJ0!%< z6tFm(OtOUz#eu!UG9qJ5Uzh_a;R2)3VDRPBBEA{8UeB4#(&XbAmM{^vby?&(Fljyb z{R2glA`P@N=aEd7>wQ{tv{y|yJDuLS<7CCT?+2e`qfd1)>&_8=G3^I z8A{MQb590nvY2ESy`Fwcy0ndTKCaTis3N3b(BM&YS96?5A|c9K?YK>wObw2n-H=SL zbMAq{WD5bc-zyDmi^j8}f%ykx2!KN>d$XrEpI#ZFzI5H^(|#?L5k!zaFePeq84z+$ zUx`X?)s}K6RN9pS=^9lx%0R-*=*^FQlZt11d2xXe)Ie&UavU9IPl#lyG*|KGa2-gj z0#H;QA>m=JVmuFw;`MmC{N2`SsvwwyWX`E2rR3*yN8xn_ zi)m+LI&M@{)NDr!B8vE`EMcw)p8Bw8f@c~X#={^ z7;cx;Ej!;D3gsn$_dJ-RYsQh0{4Ie_Q2yX5{>@r?Ty~j_fI;mif!zT+8Bzl0Sthf| z=Y64qK;Z9`ZR;!;v4=?Nw8`K};=%)HN*uv5$2knPvVN&WiqxBZ)p_{1(7YF<=SqY-wZM%$HD6IQ4CpdwX zE@`)3ZEtdZ1ZYwOLmm(2pqP_&b_36Iusc-a&nr{ME+P@qf9SC4*aH|D!+BNn*>b?J zSOd*=%qw{Z-jqc$6J2o@!I)3PR!in6|+&(e15(f6c^E(=);tpq?%i^v11dJ#zFyAY5j&T3l`vjpB8nZVmOGK*y!^ol%9b?bT3 z=^bLQ<-5;)qDWv^C|HobyE3vY6X~K-5O$EM-+6WWo_;3qz;e?Vj_?XSyp_trCV@;&O-WowHG57Sv0i)U0wsNI_{J;z ztb0H}^v!g7I<8zXvuLC;tgySUr1~2n7EPSmcZc`Oha}s^jgQs39ixXTN+$El$LWX? zn?-eG0^AVmWuapj5Et(u(bKsXqVZLJEFo8`;P+UF-m-v+if6__`B8Lqbj{Ix6tjH3 z;2Kc0TUc&jn_>rmXZ#vvw~`{_7lP-^`!~)7$7QE9{#SdmOtRsPIyAQM5S~%Yre?!# z-%w3_kX<)?`=`-Ux0aVqcUw9J133HCR>#Rv_`t|)vMKmXqK|cPe;PYMa*`aOSh|gz({i%wt zpvnveH1xxrM)6;20X|np=I$T1Q>$_`vxne_tiB82M~4I}^q3*Rzx%kSafn6zjWm~n zLCH22AfWytO}7O_=VA;gGz-l$RXpNr>B+6I8k9JMut(Fg?3eU3RE>W15$$Op$ztnC!3zFGlf(f7!h~w4-KRZM3Ls!PKV-kS@YKP-9~!sW zJh{ctka@kA>J;&=3;VO&O|^L(E`EA%OGMdCRmnF~u7r#g`YCyF z))koH;AR|e4(Ih>wM_8)aKj{EiKQZ z{$R3q88H)AaZbE8(9W93;?cWKKCh2FyjCwQ?OPWkDGns%Jn{z#@m^^Oc|$R=i@wNn zVZ&_BRUS)){%_^r?kztPlFv4pBY#4$sH)Bil{_5AVcVr6Zhn0+n(?k8D*<0GafP=6 zW%>pEje9Z+%6R?04P8e~FmRNc*}yu7IiSdz|6t)qMnDuxXhTyevx%y8%OS z-%$bRN0w&Y&B!4lkE(H3q-q+P{99QMv!Ax}4w5QjufW7kWlSGlpr`TL+>!>X)t;xE zL?^8suLm$XZ&Pw+ib_WcGWI7E=%x9$6TNsV$EyP)@XX;>ycKhT=Ig67F-{hG^Lf+H zsbA60$ZyP6s6oN?B-VZB>L`-kl<=mSQ>AK_6hv0&`(}93=#xE{xu+q}qY8Ha9{?gj z-M(f9RB8l8WSHJ$12`y+LhsZnvLbpcw*mLi_!UaG0fjXUb8n89Us#ifcAZ_{i zg=$!J!W(bAZbCorAauValOsBx<{HeAO8Lj^Dw+cFiND=;-`z;iNMVA2KVXjm;h=*K z3ZrC}@yH{O7=|yr@O(JxsH4I`2Oq3;nXoanU;@s#x88ay+;;1~tkb6RC+u_1`EEG) z(1XK_X_`rvIV0r+`K3cTV&K4V;h+mmcxO-_^2`v!6Ek3hK!_3D86El%nW{V{EO3}k zA8^otVbFyas$=hHn*Ich<@Nu#-h?_P5FnKoTyQ}+?C`@x0NxAt-gA%D6HK-s z39Tjg{p#1hHnZiIUVKTy$3wy~&rFXP7#W1Otw&Fs=Gj!-X!sSuJ1!1PH^7^wY zuN2|vZsh8A5qZpT=o}fmNy7mL>~BPX8Ff0=D3qRh>M7w{-#XsRn#mva2JPP9!B1Nm z_1Em)}ufBnk_W+pEfq}QxjtqF&-!l|d78s=%nmUi&o(DyVcBOzb)gYxLst=P!S zOm*mY+;K-3{n@9cB?R}APChw&`#ay(EbF-NzPe$Slw8wv>bBiVd4Dq&NG1){2^7C?P3l$d+x08>~qhVV0!u) zr>Wd4%&d-qe5J~3wS?F+&Nw}scG_u%|E>SJRRY^E2^BJF)NJe?s@s=I+fRF7R+>6Y zeS{R>dh4w=IMF|xD55uZ%x5-9&{>#%A`*#79ApGddYd+FB{(lq-cPbVrBtN!#TQ;w z9j>i9yT-J}=bn3R*iVD^TW`H-WDS856F|t>=Lo2p6t!47c&l%;AKEo0vREXvQ(4TH zmX$sTa`p7nPs=p=DP0#@dxlsIy7(f~+A_e?E*Qu^{baP2H`)4I8dHdO>KZ zcA2zjCKuL)C!Tm*CTowIFwIy{U*zWU%YUT0-o?zvo>$*gB7uiAwQk)qT=CPNn&9}b zOe-Ot+=tviFn)014=jBMrIshkUqa=>4?hxK(L@~r7ky9n?mZO8<<=f)lMrk)1dl!T zSfKx}->5-Ys!0n3wV(XtCz^b05MF)x)o`EczZ$3YTL_*%{NWG7UYhu%{~(_V(E6VH zL~1^iDIDcYe|(4t%~@xiY4P3i&s${e^S+fk=J)$)qUMrIFHv7#M|JE4nMppOx}0c8 z5%LFJJjgUWv{mXeW}=w+(a#MZG2DnZZJYk+)6oe*q5nsSJV3svRbHbY%C(y(q9D-d#k%uRqc)}(N&?+i;t2?wCuwx8^03LVTvEiUY4mP2g zr3nvc{5|9J(``~_iGg(cU|?{-`d0p^&;zZI$x8lfBW0tZIX(5x>yq* zM;~*vjoA|?Obq|N^Dgz7)B{Yy#r-*WU=Qso$qO7~jrPTRZ8LldSGN{P4r3{YCg@ zVwB0xD}V9xP*hZuV5K;~W!QU|9ghe<`q7WVI|~2l!Os|Zz?_@Ou0DNs4VUW~1d%>_ z`t&L4-`=vZkMZ(fk^v@)Xreu20`+wD#V4sRWm5LqKU`}Q`%IRfbI#c|w$KiG_Uxs; zZJyfd0*x2_OxB^^&lj2RA(HjeD}QF>cJScA;hATivAPK)%|)ayRvkfx8uiI2Sqc5a zH2jby@SmZH7Dx^9B7MQZ2OS*tKVUz}8qbEoPY)LP*4VB((MBZpCqMp))f;4h$&)8p zpV6*edm}8k!Du?8Fh_u7j1Qy&PCyhD}G}2v5x93asg$8l>=t(;E2{3f`}|fpIlFU7i}5x 
ze5uB={%U(TtBq8cRDEImEG2@26-R+)L^q-AdS4+ceh=`g%@0yjHi|WMK4UjfXN(|Kp@`||92%14j3>XeDL81 z;m$knG>nm1nIu7;V=Q{^+2>3HT(4ezlS>vZ)EIlxi6&n(k-~y-$RZ9c_%xLbT6o$u zve2#~+`sf(w2S_S=8I1d}9~o$jv_&>9vwgW5qN9q$i& z&t@|ah#71VGN0FgOy}NC9V&$Llv7T%28E^q6Phc2a)pGdU9BM?Aaxemgj`M0pokXi z?6be4nb?kIvyZ*~cXSPzMF61F-gD2rY<7=EjOGL3AK#Yd>c%U_0l_uV(V_~HvTke>V9^DJIE{(J7e z$2#pb5|$Xq5rp^Oe}B^e+@j8%yqs{ti4rF1Ws;VNpb61ASmY2b5yE&+X~H^j5Y_&)ahrQ89r2afBMt^i7*~z1cgm;EM}t4 zBmrUd!3Q3)PVM4LE*6<>93B;MMSIFRG}hRR9X~c)eDM!MQBjd;WS<`Lv}U$?+nV0H z?!GG=c)$TBoI=7e6Ir!#xy|Y_nB9KI9pUi92Zj^XsZX9V*$5JA7cNw%MJEsLc?EeU z*ml+g#JT5wSHeT1aIdg@<&~F3$afR=i?x<+k_mceO3N5Pn;Bf`5(Wu8Sckq>UF!wS3e1B2)rghDnbMSFs06&eNJzY*8?>( zxmbh1InvDZwK9z9VnAV*Z_%O!;TOO74_n{Nj34ptp*6G!AUyxuU;iFDcX}zDD`GcE z?Su}QG$Lr-eB({waS8cHXu{=PnFPK0##jn%haZ0|0e*igH;Cd=&3Ipa?qzB_O;w-nGr<~d$#KVh zOJw6Ck>CE|G?8XzmOtk*ZHfD6?^&Bo`$t1hJEl!D5Tn_E$loubvzIVO&_Rp#r$1d6 zno4W+bIow8Ihe2w!FXK4{endc!$lWeY#P#+)K@X#gZ2OuH`a$^Rf6>d0_AMQb>8{s z+C&6I0j)mdE7u_;)2Bfs_t$_vV9z~m0%)Gv0TWR4k(4unTgQ$a!i&;2-E!+KT2I@@ z$SQp^f+&3m{VG-uvu4e(G$q}V2(axm=$&!183&KTH)m_>J+NeDh zKb9glXr1G`Lx-v_JS0G>=Pj5QuKd~0w3fW7^+QiR@stTc^h4&r6d&?q% z$xJ4``t{#E9IbxKCX4=cyNL5E(j;~@O%L@MONQf5I6gf4{BvQyefP6T610a*fIuFP zIrcImJ!k+S-r$Gu_Q+$8hNF)>Dh$-*;7pZI((Zc(CKw7Kw;4BL*a(r;M^%?k4+lxO z9ye~R)psVh7^fi{#l=M$Z&bW$pb)Xi>U$YePB{K}mEHT{@PUVgftm!N9-*nG-}$fq z`cE5sM+_fs%SPxI+qG+J8W$$?s4R?On5CjYq5nAM*keROUk}HB>$q^B;zl@6>W{qV zI2EW3UPJIpJf%TTf)+T-6Po(c!J1Hlpg_7?NZZ5Y(ohjV#tY&B>Ce~wCA=tY+{HhzJdBSoG_9kEw=EfUtvUd2BAOF}Ul$oGgth|67{T#9%YgVyRf!H2$=pmB9 z_EW#V#Fn?bq{$~HW&<6`wJ4y5zJM&3cRej&PW$r#&);rX7<)XyobF zjC89F-F??RBEU;Th?*Ei2Z;nD5B&O9zqW~g`mMfwceTlMv*e=Ds1Z|FaXM3d*2Ia3 zAqTKi0@9fY= zvJm6(0~*trG)IGqc`lQ?6V*pO_0&@~ad^lfhZq*Dn`SbFF^#m+K4}NYbhNRHFTU6& z15l=5iAS47+l_V6p@$u2#QM*gcwu51eA(9l3yq&%`BTYEMIxln+Jy4~k`EjAZDNy! 
zSW6HW8%u%rk1qR>jSuuOxBlxkn+T>IKde5A2^y>8@{c_Kr1pp@E>5guvLNkbCr&_X zSl|V67CQh7a4S+#^zy0>rj7zeC10DZis^l$NHaG1X zZJS2NV9X34CIrmFGFvuGgcFSjWS35f28EUqLWh9GOcYI+%8%J0o%qA z2$jr8(aF&{G^npxQ3*jGOH*^iz$0w7t`-^`IrbpIAB`FroD2Ong@ zSwcMRl}dl;q;2L++A_>tnW4oo5O(O$QKn#T2&ZBbfJ~~$zMTjJ<0ULlm@q#4r)Hk$ zn81obf!XT8+Bc9ey4K7-nQTCS z$Fzfbh`H=3st?RUuM#0a6Z_G~k4<||nh`|7n28Gr)BL&5N!XiZ^*c@wP;rQ z3uuX&H)|fc6?ZigNlQ#pL?)zDuMd^haNrS#o0jWYX_L|Xph;RJLXUut;4)$21S64X ze;_Xd_8edWAmri6r=B#SgGrd)yY`kaeu4;%;#OV99FRJKIUhn%edV7SRO)DBY3DEr zVzVAJm6*+)eB#MrZ`FU+qDIGhHR=RrFl?+*R9s|(;?Q@88W}>9hzTDiMT4dB21`s^ z>lJFvP+l@&zL|EhW}LiZseqYZ3kkpo zYqYuJq;)PRQ2G=ALAFvv#azW45Ihkd!GrF#cDg4=(7f?bH`5BKGgVOi3vPp0++c|qKt$>u1FW{pUEqP zwIwP~O#Dd`?Ff^+^{Q*62lG8N#b`Kar-qY0Dfj`2;6~yRmBjOrT&r!4Xqu`bQ=q4m~P!N8j??@5snbvl6FP}{8To|17*le`*+ShTLhu6gsoCD8>Jtoz8s@` zL0*{{q`leX&1|jLRJj!t)(Xe!`GE%>XjUqdCQUYCW5ic6;D{4af1HR6M2X3UB%Cs{ zI$*;(N~g56Ol`8j%yKa+h5&UIiAT%Ci(d5&f7(2m`$wLV85l0L(%_9Q&;K5HBk)>{}S-Hx>SU}&Y za3EaNAG8b9H5{7k@e{|JzJK#}6vMg1=C)2E}2VPco}G4aJgLTP81vbR!R>ourv z<1c1;m@Q+z&lq)#H0PMD(l>)ATJe^xS|}`8o~SPXRD@;vBJR#Tan;`TI>Q^Xt*N8 z=Q`r0Jz!mcMirA=#*TGs*BkM{A_NW22`79@ zA82bBH_3B!q@BKnz68_IHtN%(Yl>NF19|>n#0Q$3Qy;6b z3+tm6Dofgw$!i+xT1pN>He%8xUtxbnmDD5sfPoP9B5WyO`A0p5$&P%`0v}KCTsGMsi=4KSy-&_KA_Pak-e~pBMl82 zG_<<#q0-S-*im1Z9Pibur%jZye2noAIp9ZP|DiFbZ6fpb z*W~i)BIb;#jJvc6^R9Cf2uYGTLeee$OWIA+b}|n$hdgVyM~Z;kM42LfUXxeIpVT>i zxI{**ZIcW|MMb6=$3&X4vvr(FHIi~?N|>fej4Q9aQs%q+OXv<(e^?H~BtH{BOrBx+ zfsBo8Mt_PTg>q*3!@jcmAT3O2zAVcUH2TPWdFoqYdKk-?$Y6qk$Wd`q# z(gg7#l8e?#RwI2(Vq+zVHlHOnJF9a7THTUQ>OD=-1aHjw&i zbea$XW<;5FVy83KPBQb02`+>Qv#VBGYeb`>7Zk&LSfxyFhB8U&20~Hg~~XODD7X zq@Oy0praSf=g?0wSj=@e2W&X%p`~WfC=@hOm$&{@xgbn6k46w{9BLLhda%IFsb%6T8BoBiGq<}Swq}l2mNhNPOke__T zPo3c=1L8W(NVCom4DiQVHw1d>7A|hI!r;U`(#s%)kc(qrMTkNBjANkA&zGsG(ux)t za)pT>_+fg~u|r3joDr~*W!YJ?wFrsH z@UCv*`}U}5Cj+UN#NuW0g86VfQ?z!@ZlpY%)`~b zOVsWWqA)e4zeV6P3j@-d6iE`g&gqMK_Uvf_E`Fz}&Yv!`NVM{p(9xG5j8Yfqi<>lT zBmtNS5v`e3o|rg_&V(Z_VjPN&K92Uo8fp4n9Q`#reL_fBH~RG8r_Ib%Ro93P!o>zX zD=SU3MwrpXO(@vC8dC>@R=^3ov1nlJGvff`L0S0*n|QhEs^5mk)#pDb zvO*rwhGM2kyQyu&v6=%nlAmtT^`aHt7vker@s*Jx;C9Mo0%Er=-fo_fYV`Ul8O zZDBwi#S9&?f@v`t#$_v(+3x7W)aD6~HG0KG#o0Pe_>2pyR<5$LN2?4rv;!t8n1~=f zj4|NK#10vz-EvJj65^4ZQ`po4*3vd>+sw!n^|ztw3F%~#g$X1~8*P$;uoafYL7wC{ zuR{4#8zn!4XZ44^2bVcy>lDe6BPpEHkFc{b`z~M-jP=FMH{TrYx#w={v$0-4Kz{bw zAsWY3kA&N6ufAsOh6z0jBojxO0%iY!+hrojeh2iYtd}b(Wg{XYRsSL_7}h~x!1%#< zM1Mjb!GsrM6>XV(LHLOy>G#Q7#wi>oL%=j&Z5JUNf=?LilFArMf74Lq3MNd%ELMNL zNI$fNwe=478Z~Mdii?XaZSr2Fqrb)^-P)c!am+8o6g^oth+=>M$j|E4B{tDJUBmCvNyjia;>VAi^2E5(%=zf%PTS%2d3CrtBf^-6Vh#>`pCvL)q__1{I3 zqZmhU^nX~vv7`mf9eri*-fU>3F^axHmRD3X{?48`GrXgAN&EP_OtI-p=og;WB+tMj z1|}2Assm`S(f;F)wY-0qg%J}~-MX=kI+^H6Dz8Gd$^-e!x4spkPaPaAOOCGHyM^z6 z|9d9u*|=nNK`86BD5H^w#F0X9PnLKj_cDL=h`s<#JdS#ae3p!hlv?6?^z0tKr})r< z-g@h;Hg>QK?%QX7+hhofQ{HVeBh|@fO`2uK?+TNV>Epp4xd~a0yf7KXegM=*#&W`- zucIF0P`os3P|wC{iZPw9Le-U`q9U6_r`;eAvS$5LSqQP@Vzy*Qb`qYYa#3$Y=%)B# za$sk6iR831BT(JEQ}<~%I?^!}tFmE2hk%QT1CPRtl`>{+tFx!IU-6Zo5 zOw2GbWHz1|Z+6e51D`f^sx>BNzZf`5B^ar9O~O)xB(3TdERPJfn0OLT!^REFbmhvQ z{>(H!G+s;!n=0=J4bjn&<9Lpl2LjRcfB(C1x-%?ZxX5PmFa<^+O8jXyS;7}Na z$t*Brf*CDlTj|W(v}tWKUXWaHShjSzX&5192$z^t%@a|8tgHVq4JdvnLpplGxb(71 zHA}aL2+LaAL6sTc`YK<95eT`>p2}Qm)r)ymYT3>{L=)i(?F$7J!ML4$-QJNBgG~<&tuJVQ4kbbnKzy9@q zN*k;VOw|4sY393`2sSR!mL#7t;2fdOgk`Paqb%qfbVGHvo~0Y@Fy_UOa)ecgIBAeq zm6FbAW0|a5SfoWi@=M#N9?<4QpVg0WuhcM)Yf(uP0yc4z9t1lC4+d9A7ed{lMGLLo zLv|59_S#G93-uZsBtb0sbV5u6LMXx=CXs0Nn3SP^Vxj=Sip{{#x($(ljYSDo8B8)f zDFLmO+AN!tp#cJG2!AEbk0NF5&fa_PC4v3k@T$xW>2F#4$7BxLH$Eo=0jsSIdFT@t 
z%BrAaNBY@9)3#$G%p}d~HLGm0z$TImgv=ksK^da4WI_ea7#dehSiu*Imxm>|{ouk2 z!oY!tTfVNo=J(cj^SKQs7N)ixr9r<)=|o$C01SqhvC_Ag zqpMS&&cuyPBvR*8rqsdz&)!)8+Etuudy*4(cR7-TC?TFe0#OL=THK*6v_*Pvd;e0` z+m=G1xE2epA?`xl<;30H{ePc#?K#;woFFZ|g?rB=XYaMw8vW**&u7*VDFhisWsp@K z8!*)$<`n!QtekLD_Iwi6!*e|7@EYbhCXA+p_|6uufOaw0(75$$*Go`DapkbJWNz?Q z9~3|1z&7X$i8%nCcs^Mk{N~Uky#4S8=oj>4r<6B%4F(e64*p_HVg6T3u$1X0uRg#^ z9vezT-&2oFSMbW>VQwTv2tG&~&=n5-Asi2n7@@4J={D`!X`92h!f$j+DfkBLKl(s> z(qz!R`|j^~3{NP;Z|Jb0rtc`;G3}GVc%b&rg6IGFBONd!AsTP?<4-}wX8Q@GUNh%@*2KZulC?ejE@c=~v1qfc=N%F!i@*rdA;(#0W zU)I%{CmbJc27ZtjBNfqW<`_66bdN~0Bza6iR^&WjRHJ%L?WeubUQQHR_y({baR3dG zQ{8*?P_;+Ga>*io^_6hm6{=F|U9xy_ct?9luekh*aE3f_;3azoF*e3_@m9NMf-mh7 z(ZE&?LWYs&I0(+XG&L6ZF2cBpcq7CQXNcBlB&KGkH?o#*n#GvWX9b6Mq7)p{Lnd*^<$9 zAX!$XqkoWJp=T(+tP4WI|1YOP1@z0zzK{*Z!onuxVyQw}anaYUtIz{6&xV=F>476h z4!32JFdvurX8#EPr@*A^UG>FUR@kpXbwAhYlSQE>{Rk!>p{>9gl?{>>Go@vMHR+@o&rP zbN!7s+5$-^4hbs|!ol{fyL9NvREH!IR*9E$(11aqsZJ*CrVv_$5|Z%XHuoeLw&w9VM<}gGLG9p9rE5a4g)zQZu_1-b&)eM$8SPV+sgQ*hM4nqVMU1ZE~n9#T}p_jtGvEUOb z_tcY5yDuDGg%G}A!2%u3@naKry#35cUP|G#$EOdxkc2k#jy?96bno6hJow8!4oVP-bY%Rp0uS*C_q>oJ(j&*jTh^kweWCOC=EfYJ}FQCx`LcttpQ z(@i&7$t9BEXF7eig(4?VdhvL@_2%2!I?y1@tdF^K=L=^h>tLn3eGp#*(L5heCfE}qM_+sO z>hAXQh9r`O(z$>CK_4W8ryJe`P*bR3_=0Z-q#!b8XgmkKeIPoE_hHMH&5G`M!>6SK z=LiUB1WG!g+fP69tPcWY%@Qtpn+~rc0thb`2XCS5pu|0|FiO0|w8xr;)(}#R0WvEy z(_z&^h71nRJ^QTcZsD>RRPK_}SG!IvN2Uyu@J}c#o`2Rekwfs#46Ut$qRHdNj`Qn7 zFamqNg+ILalY?srQHTC=`ZouxWM*Yru+A0Tf_|};psn_dbNcnOdesr@owEyO&6+JG z^)2t=WNQhD=%PRW@lQTufrAmF(9APvGKNraaf9l5;|(2DC*ui%Gh~+W^Q~zxy@tN9 zzEDK^Xn*DhAAD$qxUT5Sd+)zzgEQ;GiGooCe{T!76TUH-9`KL z^SdWGQ)0!+73RShm`>C>k4fE9Jt&rZ^NWyc3|u_^m@#9f6$1A6gJURp7*kz$3LXZC>H&5MJ=U5{wav5QlJRgpWM(z=Y0DV8 zFd*(1oqR`ht#|J}=4))h_*ncAItLEI?{XL9iasOvL$7kNUqZ<#?&- zc)fb{l6ulU0Vy=>C;{MM9A(m8R5s~1`7|rWZ%>Cl;Pd^oXPkYjL=IpqL-xg}9~1q~ z5v+)|dQ~zcO#ln{4#v9+&c8r1lsuuL*=+M5vW)}&*n3ajJlcG;YsNR~HB8hmCq1X0 z88fE)5F+U2BpI8gX}i+{KmDm3Avq2!W{p&h$#5#yIp96dlwn}RM3&I$d6$rq$B+L;4$g3$( zx*!&f8Z|a7u$Ck2b2twQEbw7}<_r-agl3FBqQMr-G2dff9r=X36@(X`s9&1}_P1T_ffP(qj>413bbop?+9oQ2y9E#^NL_ zZoE8^9ol#Feoh!Ji)!@fQOY;l9@bt8c_7@$i&1@HzOmNMQaydo=<9lb6N=dN+Q^@& z1;^eA!h_i3Mw%>vhP_0*3ySdrJA}3yZoI*kc3zH*g21a-I;spz5ap4!rfPvZG+BE{ zm=rvz%mKnKo(V!!X3UtO5UtueOyCk9i;ZV0R>tT_y`8`$!0rA#|F3 zOk<_!;0eV7kL8^`8CbzcR0v#1n6p6uu z2I^GFS?~jJ9*NJV+dCrzAa0hEIzvNE&G00|GL z9>M@QYzwBue#JFnY-h_5W5bSw4H*5DX;Z9NH`W@$qK$EaeQyx{k)uY1QDa6KuIFhl z82d=^#$X^~Epga@*R3HZ*l7vVHv|oos;TNTu&2*>VTS6J5%iZ3UqakDCLRTerBCua z3lI0mqB>&aC|fF>rv{EzxrI51p9f5^57$3q6mgvPxmRR}3K*m~SJP2<$+;D?WEk!8D z!@^n`H+H;_p>NnQONOfRRTsklDABz}5)|4>No(j(uTi5%TKQ!nZoKKnaFXUo%h@!K zeN=dX5CRC%o~_>~58T6}_sCx#F*E0$3o#h|0Xw{7OO`D5`n~OzTZJEqr|W zv=Hk#2duExfNw?N(=-`;qK1>ao%Ar%N zOAJDzM~#+;Wrpb!ux9KCK=UPlHWBW^1HcUwT?9htEgoPJURO@TCd_}}z`cac~9ds z38IAaVz`ALkqGCyX+8F1ey@zy)C)V+&qj_KjoI#XEYcU`;;oZjSH=uNDWU zAs^uJ1ophPslL_`(L~@M`)c!&#*OhD8bs6*a!s2yZ7j^=`GLn^z-Djeg4_i@Bno~( z)D{PK5fKDVU?3u5ho~u4>G?&mgwJ4n=-Rcb4PF@1&pl6v9tvOZ#v&iF{!nT$Ml2Su zW3IW!Ax%y6HOyAtPx3tY{5yrQ>RWbzOrYZ*C1gmqx?wmUHXI)hH}I!$+CoFtW~xsz;j4r zbXLQ3;S*kL>P1%MfHeA;lamv21ouSwSJadE6JCB~qO;CE(=s7OD`*t314ouETWZ4z z=Z-*cFgUUX@%X`4hz5gZfpbe1EjB*_Pv&V2vnTlS%Q%#;R-Sx;p@jKk4Pb0UdB#{V zLW=B2{h?@KjJ)EiD=qW#TEQh!K#_|bGM$GoM=yBQ)U+AX+<#7XhaRz~n*-4BoObBY z(FW`t+jdx1#?Zv8f+mVigUi5WoubvC>CjxxnYj3(i;YJ_PYE0XhMYUX3k;lWf#OtV z=BkqD>BSdcqzJKQmR~u@@RNz3h({c9gaR@kCvq=Yw8WlyLaXW12DO3A2j3==A<8|- zqReS#W`gg?E4&^HT)p7D3%pK|WfqHfj#6JSG_b9UgACzgpEB$m1=GLn3Vz8SV@&^} zL+QWcb^87M*Zhu&{4xDZsfdUdtltSh=1<&<_Y1m?_hKH1VZ6sR1USa9=db&l{F})^ zxdM5x0!D@l6M?{ELBbOTEb#0>I8aOw2oY2q-lECiq&kFXJcbZe%E9y@p5);kWw=jy 
z2#pt)zBsa<8N*^8k%fiA9|`zVFV9GnrylB~ZY-0`4`aydx0ob{_3#XDAH<&ic)Y5M zdmvHl55iPT2(1w=FeXAQsfV`6k4Fq9NEjU6Wn__#6fp#blEa*GsX@kT) z+9Ds}T}~gAt=;K|7o9b0u#yt7v=`yjlxA8z?M1@MyP}Q^an2*N&@}mIc4b= z&lzV_*ie9}6UByh2#Y2^`6z3+3BHDb$}{)wlgdyhI7EF(|ABSVhWnuU;#?(z)u(X4 z{SnU5AL>t^W9k9+jF*0ar@)JL$O{u;e;vtV0hSuCt=GU(^)n{I^xSt5b8dqxw1@DD zhk|D$;KZH_ew9+VU#y2d`miHC2R2xK38!Q)E90hZ|JE}+Xyk`k@*JiFev^O$a3nFG zB*w;HoG)O+eA7S1&UnEauN#-p2OZQvC_~-M74WgD^6)F!NIxq94 zKG9zc2~ulA$ubE{_)R}xh}6Y=GG5xH40U_l?ht)~*0OJ9uR91{kvgIA$2QaFjAtkZ;3s1S zuQ@4`gFqg7_#ubG+rw;JChU>W?rq_(3aMJMWJ&nVZ+@*j3Z2$?*;`J#;4R~zK44L@ zq}tFJ`D4w;Ip=xILwV4fc_Gn9a2Yxn^H3LwHuxjOIv59kcyI7f#eJVBV$dgGP8-0H zM!dc>H?f|i{RHMjHn2aCdxlR6f5E3@eU|dLP)zZ}F>gF0<!Zh?~p_A(rr8=mhphVx68;YV1U@LFkrz`JxJ-S((c@PC zq0iI_mO&G^hBgvD&E9hMRDvgzk2IcZ?$HKw2fiAx8WLloF4IA^W&BW`-~~^h^Q>+9 zKpxXU{Wd>Pe)>paOu!+XMknyt%|h~CQOYoHc$?`9&mAT$zD@s_`vgxDT25aQTAr+V z@Y%3SwU1GcdzM24Yx2@hM-r%h@D7@Zvc-Cc^;3pF`okLKn-tdyRQ1N|n-~c7hrB$~%5hYUyh=DbVF~Hc0FwH|BP5=4hniPMhf2ZHif6echC=-+FD^jR{MreGM z-@sUiua%(>{qS#Hrvcy1@t73nB$i9=9=*@+m^b~Igelh<%O5XaER(;EcwgTzPw)SV z_sE-GFZa`be@0!-7yA~=<}TbTNs3{p002M$NklA&;$FIC=oc=maD*iO;CB zpy#oz{NI$zpP%2*sPwsy<-SNe>2>D6j`iekllxz!?pVH{@AS65Y<;oqg8Fs5w&MNc zl>v_;{=__~K4QEe{Q1#G|K_l7PIH_$H`j-a-E+_PPKFN)P?KgS^97NIOeyWJD>9SJ|Emj%SZ9_H4~r3a%pfN>Vwc)l*{{mc)$1j zCd@2p6pJ&5w&dn62shq*qhAw~2+K5*mZ)3fW(xob`zuODzRvqu@)7%;cFUJZyK#`Q ztH;)p)N8VzuNs7=gdL>niPgns;_nEHct8Jl%*Xxwd5`xj-dBET5Pcz`MCyOOdU(d4 z9;BC}jQ(8K_mhWt3YSV8Hzmy_k7J$rGsipyJE!)mU_Nr z(-wu`KWGo~wbxx^;rM8`@;7qyNxVwtG`%rB$@i1Iz?=@GhpBQuCKNM-a2tCN01^CB zeU9@5_`#oJaaT2@x0ik$yPy6Y8}$4ub9#QB#ov_qyp%7mVL_S5-}0A_`MIB7fBH4g z((#cpEZg+5WuBha{SI$x3k3O1wD%s zNXE~~$MaNPA{AcMyFq!S4iQ;5-E_0P*5HY=8Lxb4-0<2=ekRY=kA&v%Gf^slFZqmb zdWdo3Q*%cD)r4VT0^#b8_B-{{Fod4!78QbH# zp!T?zRxU4l%$xsNyidkde}D;p>0#}$9{QH2Nae4KOtHStd#2%kT~na~`gNU|!hRJ( zpb!H88VIB_j`Zt)O>5uGTE!H%$%{hB)WY0_w#ssV4U0=Amk1Yf)yqSO$b$l`3)aBLI)OA z3mp|YTdAT#>++{me=L~gU8}_}GVc6!eZk+ZJAVUin~$`H(=NYY)&E$w^mb6hQ-T(K zL8$svSN%M~=;(99P(4Whro0uRlt6sk4(1#Bar1rFgVg+eR+caO4DS%H@nV}xR#v9H zI4f^g_9eCo;GO2cHA1oT6d^|Gym?=rJ8qXfo)Q%CGJdDm_F3@Y8L6Oh z$&)Ys`C09J@ml_|oc|MT9}o8bSdYKbQiTfWR|?L;$_gP+2!TQf{4an2LWTrR1b@Pn z@Rkr#WFZuBH6tYfd=&yfesD$MBLDs4{Z0u)1wPKN=@ghL?WV@#(-9?n6E%4e@X#O0;~VcgDq^n5CVk|D1<;E1U?4>c}yz_ z-}HW*ydrrI^X`157pdyMs7!IRHwBeQo)+}vxP4X;_CvA9))E(09BssxJpH^n{XCYH zBPqWOcHH*=*#`x6EUo|lZ1w+m`D2yOyn5(w{yy_teMTvba7zT?)Puaj-`tBB&iJXZ ze^o<@pyK;D3FC;frRULg0`ub_&2erYr-{Pb-xLTiui!xn7miUp|EV#skX1oC-lW?2 zr>ada9Q&Sr@Biv`I-LKn!k~b07b>6y0HyF&Ap{B`PzZsq9Rw^Od;w1OuMi{z46044 z{_F;#+;Q|p8|nGu)BMo={Be2jO_ap)zLR>SlKa9f=`!C|uA|R%mDds%>0NWK!jI!Y zz@SROD=MJJ%UgK=Ga&F;I47L+HoJ(V$K;JS^VNNv+xc=8{`@9F;F$R@U;v-@ynrHw zw+bQfb%8*k0{V5CiNbyqLZA==g%J2>A@DC3NIt&-awIZwL3!iT_#FqF|98Bf{*4M+ zP@PGh{7>@#jx`{U{KYd1Z8w9;+Jx{Z0PP8{5BO5b15FU;i7r zQrP6z90G;-@im{I!tQ+*1k!eMd{(x?XN3?bgg_w#3L#JkfiDAr&tflMjpUT<{!Hrn zqBVZed|zgy|Eux}K=yF*{l98pg@Y-CKp_MQAy5c`uL}e|OG5a%^zrM{54dqUNB%Na z-vUj@G93FP!Y@8VZ$kev?NW9A{QQph=+|-jc1*A0*we=!GpH_(=*~+8<;i>aPhPmc z^k4rut-)jYD(X}n)csj0{dxNJ@ybM)r9aPKuAn^eUi^;b`Tb>6`dsI)FMnSOf+hFz zrvh%}&*I;)>}TcIvtw|lpo-$SWB=2?$seERzs_G*ESLX&{2lunf2V&_wxDr*UY+^t zh`-bO9Pja6(EZP7BcTWB738~)^%d0jd|8k8GlC_rD&2^O01iR%zF|;?P}`ykdgTv= zFu4SOX9+o!Aid3p1ly7pTJ^ng8b@h==!>&j4sO!2-8Bn4wWdQpUd@LC03 zy8-?5H^0ANim*rxGbwevdX9Jhv)YPaBToX#BzOAtXO%DL8U2Xqczpt9=_x&5dY_ah zFZ?zwPu{%*y6H6?h4ry_zx=PX_FOosVh}`D1d$ z=f~XqoF8#av23h6et%B!V;(aO{~dGp*dO`p%`2m_`JbhiJl?bP`^Otg99w#QUliu) zW%6IgI%56wCA~lC`D32+dj(w|uY5syx%XN5WB>A(JzihZ>Vm!=!xz(@<*)7w|Ndv{ zplmFkeof^4`Hg5I{g)3C7s2BDoCsKZDb2#}~Z+vS-L5mLP)XGhpH~9tC70Oriv*gr031F%B zzq>ms!+l;_yJ^!V9lO0J)Tmk0 
zbyIDdU{DXo`sqI}kKUn|9#^SSMW=V_rP!*&uycMlSWruO_e{vtL1zaRB2K)HA)XLu z*wB{8kSfMcwZjXSx2m7js#eW=K`=QCQ`MMv07Yz#MyN-BjDw$gt397b6VJI7Z^t>; zZ;y?f%u|%6oi&v?mT?|^sI04t_tWc6r`KC=#dT_HCE=;ztvbQajH0e6uU|=Dc?VtuM$&ZBZ`xOzG_L=~ zpVZ_0U!3N)PcIZNqgN{w*GtQt;`r6SJ$g<*OY3FmhxEOB&+Y^tR6l*^MbbWHR_}s4 zqWQQ{*Tx%y^|NF>s6jUe6GO} z%^UA++qijiC?#03=8G0f&7+=k8f@M=$lu0sekU}5hsnC%s~4%0c$cQTpn35838zRE zD^>_ww{BBAAylnWDP%-EQhAYquxpQA$bHBo@Hq62d%Z+n@0rs#@bLxgR*ubUm7fULC zL}Ox3ypCy}4$?2z0RE@?rpCAz*Gk^{a=z3DeV}vH5t9q0exzS>i{EJ}(mK{3I7;$* zA!EM0v5jOs#eSwT8Ga0cwXH7k+tnr1j(eU~zA2(_{ZFp-oeVT}pUj`pKH2F5eK$?u zhHCTL<$m%r`OS5zicz3Ay5dT8os6A2V+I;X65aOm z1dM7{d1AI$Pbx1{Tr^5d6^!#KAuzqhYubW{o7<-gq)J07URk0$sDKp zkXF`x(1-b<*Mk~W8}Ov{>(_^h6)Gk&0Da+)zQ%7|I>meO_wl|dM@oR6y7mn^V7M^8 z?Z|`TQ57nb4?87y?G|rD<}*LZ>wAPN3r1anmZ={eCzq-xkG}Zu8T`ZD^@I0?XR+>e@|U3tY81~*H%JXv}hJGG&U0y7G~bU z;yURcFfbv~;#|3EW%%XKe&w}$!U-oRi!yn^Y28La+qaSz15cVbDg5e}zYGIE9vDt& zdP1mCqlO6!^(8O-7E#wChrlmhyg2;%Pk#>e>edT2rL6kp$#g;W?cA{=y!!Gh;XyrL zzHC`&eNr0{*m6l9Av{h=hm#>B3I%=7dkJ>T7QZ6_Qor}<{b}#L`(7B(e}LZ6*HXl; ztaJIb2`DuwxbQV*)adZHM;;BWPHY`2SFUUvf$$qB5Kp?1%I`}2n|u0Gn}?8YfvNs1 zXbZ0V`~Hjm38T zTWzUbV3E!QW7cE7{X-ure{lZ+fN2B`{C@J>~qH^vQOT8}*PNSHch zN@&}rZ75Ze*Lagb88klBHID zDx&0R&~s+Z32(mnri?sgLhV|$++W}@bNci!;Nti+`zh`C?ookH_